Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | DISCREPANCY | Tags | ERRORS | A_Id | API_CHANGE | AnswerCount | REVIEW | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | DOCUMENTATION | Question | Title | CONCEPTUAL | Score | API_USAGE | Database and SQL | Networking and APIs | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
6,503,861 | 2011-06-28T08:44:00.000 | -1 | 0 | 0 | 0 | 0 | python,html,templates,plone,zope | 0 | 6,503,962 | 0 | 2 | 0 | false | 1 | 0 | Perhaps you could approach this from the JavaScript side? A lot of applications have a global JS file that's included in all pages. Starting from that you could modify the DOM easily. | 1 | 1 | 0 | 0 | My goal is to inject some HTML code in front of every Plone article (between the page's header and the first paragraph). I'm running Plone 4. Does anyone have a hint on how to realize that?
The other question is: is it possible to place some HTML code randomly in every Plone article? | How to inject template code in Plone? | 1 | -0.099668 | 1 | 0 | 0 | 667 |
6,506,372 | 2011-06-28T12:28:00.000 | 0 | 0 | 0 | 0 | 0 | python,xpath,selenium,beautifulsoup,lxml | 1 | 6,506,453 | 0 | 3 | 0 | false | 1 | 0 | There is no way to exclude unexpected attributes with XPath.
So you must find a safer way to locate elements you want. Things that you should consider:
In a form, each input should have a distinct name. The same is true for the form itself. So you can try //form[@name='...']/input[@name='...']
Add a class to the fields that you care about. Classes don't have to be mentioned in any stylesheet. In fact, I used this for form field validation by using classes like decimal number or alpha number | 1 | 1 | 0 | 0 | I'm working through a Selenium test where I want to assert a particular HTML node is an exact match as far as what attributes are present and their values (order is unimportant) and also that no other attributes are present. For example, given the following fragment:
<input name="test" value="something"/>
I am trying to come up with a good way of asserting its presence in the HTML output, such that the following (arbitrary) examples would not match:
<input name="test" value="something" onclick="doSomething()"/>
<input name="test" value="something" maxlength="75"/>
<input name="test" value="something" extraneous="a" unwanted="b"/>
I believe I can write an XPath statement as follows to find all of these, for example:
//input[value='something' and @name='test']
But I haven't figured out how to write it in such a way that it excludes inexact matches in a generalized fashion. Note, it doesn't have to be an XPath solution, but that struck me as the most likely elegant possibility. | Finding Only HTML Nodes Whose Attributes Match Exactly | 0 | 0 | 1 | 0 | 1 | 146 |
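One way to express the "no extra attributes" requirement, sketched with the standard library rather than Selenium (the element and attribute values are the question's own examples): parse the fragment and compare the element's attribute dictionary for exact equality. In XPath 1.0 terms, the related predicate count(@*)=2 can be appended to the posted expression to reject elements carrying extra attributes.

```python
import xml.etree.ElementTree as ET

def exact_match(fragment, tag, expected):
    """Return elements of `tag` whose attribute dict equals `expected` exactly."""
    root = ET.fromstring("<root>%s</root>" % fragment)
    # dict equality is order-insensitive and rejects any extra attribute
    return [el for el in root.iter(tag) if el.attrib == dict(expected)]

wanted = {"name": "test", "value": "something"}
good = '<input name="test" value="something"/>'
bad = '<input name="test" value="something" maxlength="75"/>'
```

The same dictionary comparison also handles the other two counterexamples from the question, since any unexpected key breaks equality.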
6,515,160 | 2011-06-29T02:30:00.000 | 2 | 1 | 0 | 0 | 1 | python,mongodb,pyramid | 0 | 6,934,811 | 0 | 1 | 0 | false | 0 | 0 | Just create a database in your TestCase.setUp and delete it in TestCase.tearDown.
You need MongoDB running because there is no "mongolite3" equivalent of sqlite3 for SQL.
I doubt that Django is able to create a temporary file to store a MongoDB database. It probably just uses sqlite:///, which creates a database with in-memory storage. | 1 | 2 | 0 | 0 | I have a Pyramid project that uses MongoDB for storage. Now I'm trying to write a test, but how do I specify the connection to MongoDB?
More specifically, which database should I connect to (test?) and how do I use fixtures? In Django it creates a temporary database, but how does it work in Pyramid? | How do i create unittest in pyramid with mongodb? | 0 | 0.379949 | 1 | 1 | 0 | 594
6,523,883 | 2011-06-29T16:18:00.000 | 0 | 0 | 0 | 1 | 1 | c++,python,postgresql,process,cross-platform | 0 | 6,524,149 | 1 | 2 | 0 | false | 0 | 0 | The C++ standard does not know of multiprocess systems. There is, therefore, no API for interacting with processes. (After all, how would the standard mandate a multiprocess system on an 8 bit microcontroller?)
Moreover, some platforms (e.g. the Win32 Subsystem on Windows NT) do not keep track of process parent child relationships. (NT does under the hood but you'd have to call undocumented APIs to get at the information)
I'm fairly certain POSIX does define APIs like this, but I have not used them myself. | 1 | 1 | 0 | 0 | I have a long-running Python program that starts and stops a Postgres server as part of its operation. I stop the server by using subprocess to spawn pg_ctl -m fast. As a fall-back, I check the return code and, if it failed, I then run pg_ctl -m immediate.
The problem is that sometimes both fail. I haven't been able to reproduce this myself, but it happens with some frequency for users of my program. I log stdout/stderr from the pg_ctl calls, but don't get any useful info there. As far as I can tell, either the master process or its children have stopped responding to SIGQUIT, and the only way to terminate them is with SIGKILL, which pg_ctl does not use.
I've basically exhausted ideas on the Postgres side. I'm using Postgres 8.3, so I'm sure upgrading to a more recent version would resolve this, but unfortunately that is not an option for me. The only solution I can come up with is to kill the children manually. But I don't know how to distinguish between the children spawned by my pg_ctl start and other postgres processes that might be running on the machine.
Is there a way to identify a process as a child of another process that I spawned? A cross-platform method of doing this from Python would be ideal, but I'm willing to write a C extension if there exist APIs on Windows/Linux/UNIX to do this. | How can one identify children of a child process? | 0 | 0 | 1 | 0 | 0 | 369 |
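There is no cross-platform stdlib call for this, but on Linux the parent pid of any process can be read from /proc, which is enough to walk upward from a candidate process and check whether it descends from the pg_ctl you spawned. A Linux-only sketch (Windows would need Win32 APIs such as the Toolhelp snapshot functions instead):

```python
import os

def ppid_of(pid):
    # /proc/<pid>/stat looks like: "pid (comm) state ppid ..."; the command
    # name may contain spaces, so split on the last ')' before splitting fields
    with open("/proc/%d/stat" % pid) as f:
        fields = f.read().rsplit(")", 1)[1].split()
    return int(fields[1])  # fields after ')': state, ppid, pgrp, ...
```

Calling ppid_of repeatedly lets you climb the ancestry chain of any pid until you either hit your own process (a descendant you may kill) or pid 1 (unrelated).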
6,535,373 | 2011-06-30T13:19:00.000 | 5 | 0 | 1 | 0 | 0 | python | 0 | 6,535,397 | 0 | 2 | 0 | false | 0 | 0 | Use >>. You are removing the sign anyway with the & 0xFF. Note, that you cannot leave out the & part, i.e., (n >> 8) & 0xff, or you'll get the wrong result, as you have already noted. | 1 | 0 | 0 | 0 | I'm 'hacking' my router, and I need to rewrite one JS function that takes date in hexdec format and convert it into Y m d
JS code looks like:
return [(((n >> 16) & 0xFF) + 1900), ((n >>> 8) & 0xFF), (n & 0xFF)];
where n is a variable in the format 0x123456 (i.e., six hex digits after the 0x prefix).
I found that Python has operators like >> but doesn't have a >>> operator.
Any idea how to do that?
thanks | Special JS operators in python | 0 | 0.462117 | 1 | 0 | 0 | 664 |
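Since the expression masks with & 0xFF anyway, Python's signed >> gives the same result as JavaScript's unsigned >>> here, so the function translates directly (the function name below is mine):

```python
def decode_date(n):
    # n packs a date as three bytes: years-since-1900, month, day
    year = ((n >> 16) & 0xFF) + 1900
    month = (n >> 8) & 0xFF  # JS used >>>, but the & 0xFF makes >> equivalent
    day = n & 0xFF
    return year, month, day
```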
6,549,669 | 2011-07-01T15:00:00.000 | 35 | 0 | 0 | 1 | 0 | python | 0 | 6,549,740 | 0 | 7 | 0 | true | 0 | 0 | When you pass a negative PID to kill, it actually sends the signal to the process group by that (absolute) number. You do the equivalent with os.killpg() in Python. | 1 | 27 | 0 | 0 | for example from bash:
kill -9 -PID
os.kill(pid, signal.SIGKILL) kills only the parent process. | how to kill process and child processes from python? | 0 | 1.2 | 1 | 0 | 0 | 63,568
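A POSIX sketch of the accepted answer: start the child in its own session so it leads a fresh process group, then signal the whole group with os.killpg, which matches kill -9 -PID in bash. The sleep command is just a stand-in child.

```python
import os
import signal
import subprocess

# start_new_session=True calls setsid() in the child, so the child (and any
# grandchildren it later spawns) form a new process group led by the child
proc = subprocess.Popen(["sleep", "60"], start_new_session=True)

os.killpg(os.getpgid(proc.pid), signal.SIGKILL)  # signal the entire group
proc.wait()
```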
6,552,097 | 2011-07-01T18:51:00.000 | 1 | 0 | 1 | 0 | 0 | python,multithreading | 0 | 6,552,616 | 0 | 3 | 0 | false | 0 | 0 | You can keep a reference to the parent thread on the child thread and then get its ID | 1 | 6 | 0 | 0 | I'm looking for a way to get the parent's ID or name from a child thread.
For example, I have the main thread, MainThread. In this thread I create a few new threads. Then I use threading.enumerate() to get references to all running threads, pick one of the child threads, and somehow get the ID or name of MainThread. Is there any way to do that? | threading - how to get parent id/name? | 0 | 0.066568 | 1 | 0 | 0 | 7,432
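A minimal sketch of the answer's suggestion: a thread does not record who started it, so capture threading.current_thread() in the parent and hand it to the child explicitly.

```python
import threading

results = []

def worker(parent):
    # the child received its creator's Thread object as an argument
    results.append((threading.current_thread().name, parent.name))

parent = threading.current_thread()
child = threading.Thread(target=worker, args=(parent,), name="child")
child.start()
child.join()
```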
6,558,957 | 2011-07-02T18:57:00.000 | 3 | 0 | 1 | 0 | 0 | python,python-c-api | 0 | 6,558,981 | 0 | 1 | 0 | true | 0 | 0 | Basically, you use the Python C API to get the module the function is contained in, then query the module dictionary for the function. That's more or less the same what the Python runtime does internally when your Python code invokes a function from somewhere.
Relevant functions from the API to look at are PyImport_ImportModule, PyModule_GetDict, PyDict_GetItem and the PyObject_CallXXX family of functions. | 1 | 2 | 0 | 1 | From the c-api, I would like to call a python function by name. I would then be calling the function with a list of python objects as arguments.
It is not clear to me in the Python documentation how I would get a "Callable" object from the main python interpreter.
Any help appreciated in:
Getting the address of the function
Calling the function with my PythonObjects as arguments.
I'm using Python 2.x series for my development. | How to call a python function by name from the C-API? | 0 | 1.2 | 1 | 0 | 0 | 1,070 |
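The C sequence (PyImport_ImportModule, then a lookup in the module dict, then PyObject_CallObject) corresponds to the following Python-level logic, which can serve as a reference when checking the C code's behavior:

```python
import importlib

def call_by_name(module_name, func_name, *args):
    module = importlib.import_module(module_name)  # PyImport_ImportModule
    func = getattr(module, func_name)              # lookup in the module dict
    return func(*args)                             # PyObject_CallObject
```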
6,559,656 | 2011-07-02T21:17:00.000 | 4 | 0 | 1 | 0 | 0 | python | 0 | 6,559,698 | 0 | 1 | 0 | true | 0 | 0 | Why not just modify sys.path to include the directory one level down? Then the same imports will work in both places. | 1 | 0 | 0 | 0 | I have dev and production systems.
My production system is different from my dev system in that it adds one directory to the beginning of the path.
for eg. on dev system:
main->module1->module2
becomes on production:
project_name->main->module1->module2.
Because of that I have to change all my imports to accommodate this change.
I wanted to make a settings file in the main folder, include it in every file, and call exec("import %s.modulexxx" % path).
But the problem is how to access the settings file (because I also need to know my directory path).
Is there a way to include some file below, e.g.:
if it is main->module1->module2, in module2 I could include ../../settings.py,
so if it changes to project_name->main->module1->module2 it would still work because it would still be two levels below.
Any help? | Importing modules with variable paths? | 0 | 1.2 | 1 | 0 | 0 | 53 |
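A sketch of the sys.path approach against a throwaway directory tree (the layout and the BASE setting are invented for illustration): compute the directory two levels above the current module and put it on sys.path, so `import settings` resolves the same way in both layouts.

```python
import os
import sys
import tempfile

# fake layout: <root>/main/module1/module2, with settings.py two levels up
root = tempfile.mkdtemp()
module2 = os.path.join(root, "main", "module1", "module2")
os.makedirs(module2)
with open(os.path.join(root, "main", "settings.py"), "w") as f:
    f.write("BASE = 'dev'\n")

# in real code `here` would be os.path.dirname(os.path.abspath(__file__))
here = module2
sys.path.insert(0, os.path.normpath(os.path.join(here, "..", "..")))

import settings  # resolves no matter what sits above <root>
```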
6,570,892 | 2011-07-04T11:33:00.000 | 1 | 0 | 1 | 1 | 0 | python,perl,date,calendar | 0 | 6,571,157 | 0 | 2 | 0 | false | 0 | 0 | Did you think about scheduling the invoke of your script?
For me, the best approach is this:
1. Have two options to run the script:
run_script
run_script --update
2. Schedule the update run in some task scheduler (for example cron) to be executed daily.
3. When you want to check the image for the current day, simply run the script without the update option.
If you would like me to expand on any part of this, just ask. | 1 | 3 | 0 | 0 | I would like to write a tiny calendar-like application for someone as a birthday present (to be run on Ubuntu). All it should do is display a separate picture each day, so whenever it's invoked it should check the date and select the appropriate picture from the collection I would provide, but also, in case it just keeps running, it should switch to the next picture when the next day begins.
The date-checking on invocation isn't the problem; my question pertains to the second case: how can I have the program notice the beginning of the next day? My clumsy approach would be to make it check the current date at regular intervals and let it change the displayed picture once there was a change in date, but that strikes me as very roundabout and not particularly elegant.
In case any of you have got some idea of how I could accomplish this, please don't hesitate to reply. I would aim to write the application in either Perl or Python, so suggestions concerning those two languages would be most welcome, but any other suggestions would be appreciated as well.
Thanks a lot for your time! | How to periodically check for the current date from within a program? | 0 | 0.099668 | 1 | 0 | 0 | 328 |
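The polling approach described above stays manageable if the "did the day change?" decision lives in one small function that the regular check calls; the picture-naming scheme below is made up.

```python
import datetime
import time

def picture_for(day):
    return "img_%s.png" % day.isoformat()  # hypothetical naming scheme

def maybe_switch(shown_day, today):
    """Return the new picture name if the day changed, else None."""
    return picture_for(today) if today != shown_day else None

def run(poll_seconds=60):
    shown = None
    while True:  # long-running display loop
        new = maybe_switch(shown, datetime.date.today())
        if new is not None:
            print("displaying", new)
            shown = datetime.date.today()
        time.sleep(poll_seconds)
```

Keeping maybe_switch pure makes the day-change logic easy to test without waiting for midnight.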
6,574,740 | 2011-07-04T18:04:00.000 | 0 | 1 | 0 | 0 | 0 | java,c++,python,math,matlab | 0 | 6,574,755 | 0 | 5 | 0 | false | 1 | 0 | I think you can use PHP or Java Web. | 2 | 3 | 0 | 0 | I am very new to programming. I am familiar with HTML, C++ and learning PHP to start a database.
I want to make a website which tracks a stock price. I have written various algorithms in MATLAB; however, MATLAB only has a to-Java conversion.
I was wondering what language would be the best to do a lot of calculations. I want my calculations to be done in real time and plotted. Would Java be the best language for this?
I can do the calculations in C++ but I don't know how to put the plots on the website. Likewise I believe I can do everything in Matlab but the conversion looks a little sketchy.
I would be very thankful if someone with experience with Java, or I also heard python, would comment on my post. | Math Intensive, Calculation Based Website - Which Language Should I Use? | 0 | 0 | 1 | 0 | 0 | 1,004 |
6,574,740 | 2011-07-04T18:04:00.000 | 0 | 1 | 0 | 0 | 0 | java,c++,python,math,matlab | 0 | 6,574,803 | 0 | 5 | 0 | false | 1 | 0 | I would do the calculations in C++ and write them to a database; then, using PHP, you can grab them from the same database and show them online. Alternatively, Java can do all that, but make sure the calculations aren't done on the fly, since that will kill your server, especially with stocks, which can turn into a lot of data. | 2 | 3 | 0 | 0 | I am very new to programming. I am familiar with HTML, C++ and learning PHP to start a database.
I want to make a website which tracks a stock price. I have written various algorithms in Matlab however, MATLAB only has a to-Java conversion.
I was wondering what language would be the best to do a lot of calculations. I want my calculations to be done in real time and plotted. Would Java be the best language for this?
I can do the calculations in C++ but I don't know how to put the plots on the website. Likewise I believe I can do everything in Matlab but the conversion looks a little sketchy.
I would be very thankful if someone with experience with Java, or I also heard python, would comment on my post. | Math Intensive, Calculation Based Website - Which Language Should I Use? | 0 | 0 | 1 | 0 | 0 | 1,004 |
6,577,807 | 2011-07-05T03:49:00.000 | 0 | 0 | 1 | 0 | 0 | python,matplotlib | 1 | 6,580,497 | 0 | 2 | 0 | false | 0 | 0 | You may use plt.annotate or plt.text.
And, as an aside, 1) you probably want to use different variables for the file names and numpy arrays you're loading your data into (what is data in data=plb.loadtxt(data)),
2) you probably want to move the label positioning into the loop (in your code, what is data in the plt.clabel(data)). | 1 | 0 | 1 | 0 | How can I use pyplot.clabel to attach the file names to the lines being plotted?
plt.clabel(data) line gives the error | matplotlib.pyplot how to add labels with .clabel? | 0 | 0 | 1 | 0 | 0 | 1,086 |
6,578,452 | 2011-07-05T05:59:00.000 | 13 | 0 | 1 | 0 | 0 | python,obfuscation | 0 | 6,578,598 | 0 | 3 | 0 | false | 0 | 0 | I would suggest you go back into thinking about this, considering:
Use the right tool for the job
Obfuscation is hard
Other means to achieve your goals
Firstly, Python was not designed to be obfuscated. Every aspect of the language is free and accessible to anybody who wants to inspect or modify it. Being a bytecode language makes it difficult to lock down, and Python bytecode is easy to understand. If you want to build something you can't see inside, you will have to use another tool.
Secondly, everything (literally) can be reverse-engineered eventually, so do not assume you'll be able to fully protect any piece of code.
You must be able to understand the tradeoff between the importance of hiding a piece of code (for an estimated amount X of resources) versus how useful hiding it actually is (also in terms of effort). Try to realistically evaluate how important your "design and implementation" really is, to justify all this.
Consider having legal requirements. If you expect people will misuse your code, maybe it would be more useful if you could easily discover the ones that do and turn this into a legal issue. | 3 | 7 | 0 | 0 | We have a business-critical program implemented in Python. Our boss doesn't want others, especially our rivals, to know how it is designed and implemented. So I have to find a way to encrypt it. I first thought of pyc and pyo, but soon I found that they are likely to be disassembled. I want to encrypt our source code, but I don't know how to do it. Could you guys please help me with this? Any guidance would be highly appreciated. | how to encrypt python source code? | 0 | 1 | 1 | 0 | 0 | 13,175
6,578,452 | 2011-07-05T05:59:00.000 | 3 | 0 | 1 | 0 | 0 | python,obfuscation | 0 | 6,579,423 | 0 | 3 | 0 | false | 0 | 0 | Anything can be reverse engineered. It is not possible to give a user's machine information without the possibility for the user to examine that information. All you can do is make it take more effort.
Python is particularly bad if you have this requirement, because Python bytecode is much easier to read than fully assembled machine code. Ultimately whatever you do to make it more obfuscated, the user's computer will have to be able to de-obfuscate it to turn it into normal Python bytecode in order for the Python interpreter to exectute it. Therefore a motivated user is going to be able to de-obfuscate whatever you give them into Python bytecode as well.
If you really have rivals who are likely to want to figure out how your programs work, you must assume that any code you release to end users in any form will be fully understood by your rivals. There is no possible way to absolutely guard against this.
The only way you can get around this is to not give your users this code either, if you can run your code on a server under your control, and only give your users a dumb program that makes requests to your server for the real work. | 3 | 7 | 0 | 0 | We have a business-critical program implemented in Python. Our boss doesn't want others, especially our rivals, to know how it is designed and implemented. So I have to find a way to encrypt it. I first thought of pyc and pyo, but soon I found that they are likely to be disassembled. I want to encrypt our source code, but I don't know how to do it. Could you guys please help me with this? Any guidance would be highly appreciated. | how to encrypt python source code? | 0 | 0.197375 | 1 | 0 | 0 | 13,175
6,578,452 | 2011-07-05T05:59:00.000 | 4 | 0 | 1 | 0 | 0 | python,obfuscation | 0 | 6,578,518 | 0 | 3 | 0 | false | 0 | 0 | separate confidential functionality in C functions and develop SWIG wrappers. If you are using C++, you can consider boost python. | 3 | 7 | 0 | 0 | we have a business-critical program implemented in Python. Our boss don't want others, especially our rivals to know how it is designed and implemented. So I have to find a way to encrypt it. I first thought of pyc and pyo, but soon I found that they are likely to be disassembled. I wanna encrypt our source codes, but i don't know how to do it? Could you guys please help me with this? Any guidance would be highly appreciated. | how to encrypt python source code? | 0 | 0.26052 | 1 | 0 | 0 | 13,175 |
6,581,744 | 2011-07-05T11:13:00.000 | 1 | 0 | 0 | 0 | 0 | python,wxpython | 0 | 6,584,095 | 0 | 1 | 0 | true | 0 | 1 | I would use the SetItems() method, which according to the docs does the following: "Clear and set the strings in the control from a list".
Edit: myListCtrl.SetItems(ListOfStrings)
That will replace all the items in the control with whatever is in the list. | 1 | 0 | 0 | 0 | I have a wxListBox in a class and I want to update the data inside the listbox from a different class. Is it possible to reload the class while leaving the control to another class? If yes, how?
E.g.:
I have two classes, class A and class B. In class A there is a wxListBox. When starting the program, class A initialises the wxListBox and binds some values. When a button inside class A is clicked, it opens another frame, class B. When frame B is closed, the wxListBox inside class A should update.
My question is: how do I refresh the listbox when frame B is closed? | how to update a wxlistbox from a different class? | 0 | 1.2 | 1 | 0 | 0 | 169
6,586,552 | 2011-07-05T17:31:00.000 | 11 | 0 | 0 | 0 | 0 | python,database,django,concurrency,thread-safety | 0 | 6,586,594 | 0 | 4 | 0 | true | 1 | 0 | This must be a very common situation. How do I handle it in a threadsafe way?
Yes.
The "standard" solution in SQL is to simply attempt to create the record. If it works, that's good. Keep going.
If an attempt to create a record gets a "duplicate" exception from the RDBMS, then do a SELECT and keep going.
Django, however, has an ORM layer, with its own cache. So the logic is inverted to make the common case work directly and quickly and the uncommon case (the duplicate) raise a rare exception. | 1 | 24 | 0 | 0 | In my Django app very often I need to do something similar to get_or_create(). E.g.,
User submits a tag. Need to see if
that tag already is in the database.
If not, create a new record for it. If
it is, just update the existing
record.
But looking into the doc for get_or_create() it looks like it's not threadsafe. Thread A checks and finds Record X does not exist. Then Thread B checks and finds that Record X does not exist. Now both Thread A and Thread B will create a new Record X.
This must be a very common situation. How do I handle it in a threadsafe way? | Django: how to do get_or_create() in a threadsafe way? | 1 | 1.2 | 1 | 0 | 0 | 9,325 |
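The try-INSERT-then-SELECT pattern from the accepted answer can be shown outside Django with sqlite3 and a UNIQUE constraint (the schema and function names here are illustrative, not Django's API):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tag (id INTEGER PRIMARY KEY, name TEXT UNIQUE)")

def get_or_create_tag(name):
    """Return (id, created): attempt the INSERT first, SELECT on duplicate."""
    try:
        cur = conn.execute("INSERT INTO tag (name) VALUES (?)", (name,))
        conn.commit()
        return cur.lastrowid, True
    except sqlite3.IntegrityError:  # another writer got there first
        row = conn.execute("SELECT id FROM tag WHERE name = ?", (name,)).fetchone()
        return row[0], False
```

Because the uniqueness check happens inside the database, two concurrent callers cannot both create the record: one INSERT succeeds and the other falls through to the SELECT.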
6,587,095 | 2011-07-05T18:22:00.000 | 6 | 0 | 1 | 0 | 0 | python,haskell,multicore | 0 | 6,587,378 | 0 | 2 | 0 | false | 0 | 0 | The Python multiprocessing module has nothing to do with threads. It tries to provide an API similar to threading (which does expose threads) with processes underneath.
Python threads are mapped to OS threads, and if you create more threads than cores the exact same thing happens as if you'd do it in C (pthreads, Win API threads, etc.) - the OS juggles the threads between cores.
There's a lot of information online about Python threads, just Google it. | 1 | 6 | 0 | 0 | Maybe there's someone out there with the right interests that will know how to answer this. Basically the question is: What are the differences between the multiprocessing module in Python, and the parallelism in Haskell. For instance: are threads created in Python mapped to OS threads? If so, what if there are more threads than cores? Are they multiplexed into the OS threads? Who schedules these threads? Thanks for all the info: documentation/insights greatly appreciated. | Haskell vs. Python threading model | 0 | 1 | 1 | 0 | 0 | 926 |
6,588,041 | 2011-07-05T19:46:00.000 | 3 | 1 | 1 | 0 | 0 | python,unicode-string,python-2.7 | 0 | 6,588,072 | 0 | 4 | 0 | true | 0 | 0 | Python natively supports Unicode. If you directly read and write from the first file to the second, then no data is lost as it copies the bytes verbatim. However, if you decode the string and then re-encode it, you'll need to make sure you use the right encoding. | 2 | 0 | 0 | 0 | I wrote a simple file parser and writer, but then I came across an article talking about the importance of unicode and then it occurred to me that I'm assuming the input file is ascii encoded, which may not be the case all the time, though it would be rare in my situation.
In those rare cases, I would expect UTF-8 encoded files.
Is there a way to work with UTF-8 files by simply changing how I read and write? All I do with the strings is store them and then write them out, so I just need to make sure I can read them, store them, and write them properly.
Furthermore, would I have to treat ascii and UTF-8 files separately and write different functions for each? I have not worked with anything other than ascii files yet and only read about handling unicode. | Writing UTF-8 friendly parsers in python | 0 | 1.2 | 1 | 0 | 0 | 669 |
6,588,041 | 2011-07-05T19:46:00.000 | 2 | 1 | 1 | 0 | 0 | python,unicode-string,python-2.7 | 0 | 6,588,187 | 0 | 4 | 0 | false | 0 | 0 | If you are using Python 2.6 or later, you can use the io library and its io.open method to open the files you want. It has an encoding argument which should be set to 'utf-8' in your case. When you read or write the returned file objects, string are automatically en-/decoded.
Anyway, you don't need to do something special for ASCII, because UTF-8 is a superset of ASCII. | 2 | 0 | 0 | 0 | I wrote a simple file parser and writer, but then I came across an article talking about the importance of unicode and then it occurred to me that I'm assuming the input file is ascii encoded, which may not be the case all the time, though it would be rare in my situation.
In those rare cases, I would expect UTF-8 encoded files.
Is there a way to work with UTF-8 files by simply changing how I read and write? All I do with the strings is store them and then write them out, so I just need to make sure I can read them, store them, and write them properly.
Furthermore, would I have to treat ascii and UTF-8 files separately and write different functions for each? I have not worked with anything other than ascii files yet and only read about handling unicode. | Writing UTF-8 friendly parsers in python | 0 | 0.099668 | 1 | 0 | 0 | 669 |
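A minimal sketch of the io.open suggestion: because ASCII is a subset of UTF-8, opening every file with encoding='utf-8' gives one code path that reads and writes both kinds of input, so no separate ASCII handling is needed.

```python
import io
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "data.txt")
text = u"caf\u00e9 ascii"  # mixed non-ASCII and plain ASCII content

with io.open(path, "w", encoding="utf-8") as f:  # encodes text on write
    f.write(text)

with io.open(path, "r", encoding="utf-8") as f:  # decodes bytes on read
    round_tripped = f.read()
```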
6,599,716 | 2011-07-06T16:20:00.000 | 3 | 0 | 0 | 0 | 0 | python,database,django,sqlite,in-memory-database | 0 | 6,600,219 | 0 | 3 | 0 | true | 1 | 0 | Disconnect django.contrib.auth.management.create_superuser from the post_syncdb signal, and instead connect your own function that creates and saves a new superuser User with the desired password. | 1 | 1 | 0 | 0 | I'm trying to have a purely in-memory SQLite database in Django, and I think I have it working, except for an annoying problem:
I need to run syncdb before using the database, which isn't too much of a problem. The problem is that it needs to create a superuser (in the auth_user table, I think) which requires interactive input.
For my purposes, I don't want this -- I just want to create it in memory, and I really don't care about the password because I'm the only user. :) I just want to hard-code a password somewhere, but I have no idea how to do this programmatically.
Any ideas? | Django In-Memory SQLite3 Database | 0 | 1.2 | 1 | 1 | 0 | 1,200 |
6,602,341 | 2011-07-06T20:12:00.000 | 0 | 0 | 1 | 1 | 1 | python,batch-file,command-prompt | 0 | 6,602,895 | 0 | 3 | 0 | false | 0 | 0 | Have you tried changing the extension of the python script to .pyw, or just invoke it with pythonw.exe? | 3 | 2 | 0 | 0 | I have a batch file that runs a python script. When the python script is invoked, it starts a second windows console and then disappears when it is completed. This is a problem because I am editing the PYTHONPATH environment variable in the batch file, but because the python script is running in a second window, it cannot see the edited PYTHONPATH environment variable. It used to work just fine (everything would run in the same windows console). I just installed Vista SP2 and this problem showed up. Any thoughts on how to fix what might be broken?
Thanks. | Calling python script from batch file opens second console | 0 | 0 | 1 | 0 | 0 | 2,124 |
6,602,341 | 2011-07-06T20:12:00.000 | 0 | 0 | 1 | 1 | 1 | python,batch-file,command-prompt | 0 | 6,603,729 | 0 | 3 | 0 | false | 0 | 0 | It could be that the .py filetype is associated to pythonw.exe, therefore causing it to open in a new process. Find any .py file, right click it, select properties, and check to see under "Opens with:" what the default interpreter is. | 3 | 2 | 0 | 0 | I have a batch file that runs a python script. When the python script is invoked, it starts a second windows console and then disappears when it is completed. This is a problem because I am editing the PYTHONPATH environment variable in the batch file, but because the python script is running in a second window, it cannot see the edited PYTHONPATH environment variable. It used to work just fine (everything would run in the same windows console). I just installed Vista SP2 and this problem showed up. Any thoughts on how to fix what might be broken?
Thanks. | Calling python script from batch file opens second console | 0 | 0 | 1 | 0 | 0 | 2,124 |
6,602,341 | 2011-07-06T20:12:00.000 | 0 | 0 | 1 | 1 | 1 | python,batch-file,command-prompt | 0 | 6,612,528 | 0 | 3 | 0 | false | 0 | 0 | Ok, so I decided to reinstall python. If I uninstall and reinstall (I was using the windows installer) in the default location, it seems to have no effect. I cleaned out the registry and reinstalled. Still no different. However, if I install python in a different location (other than the default) it seems to run fine. Something is obviously corrupt somewhere, but I don't know where. So I am going to just reinstall all of my other modules in a different location and go from there.
Thank you all for your responses. | 3 | 2 | 0 | 0 | I have a batch file that runs a python script. When the python script is invoked, it starts a second windows console and then disappears when it is completed. This is a problem because I am editing the PYTHONPATH environment variable in the batch file, but because the python script is running in a second window, it cannot see the edited PYTHONPATH environment variable. It used to work just fine (everything would run in the same windows console). I just installed Vista SP2 and this problem showed up. Any thoughts on how to fix what might be broken?
Thanks. | Calling python script from batch file opens second console | 0 | 0 | 1 | 0 | 0 | 2,124 |
6,607,858 | 2011-07-07T08:34:00.000 | 3 | 0 | 1 | 0 | 0 | java,python,jython,pyc | 0 | 6,609,145 | 0 | 2 | 0 | false | 1 | 0 | The 'compiled' python code '.pyc' files are implementation-specific. Even CPython (the standard Python implementation) is not able to import .pyc files generated by a different version of CPython. And is not supposed to. So, I would be surprised if Jython had an ability to run .pyc files created by any of CPython version.
'.pyc' files are not the same as Java bytecode (which is designed to be portable).
Decompilation seems the only way. I think there are some .pyc decompilers available, they should be able to generate Python code that could be run by Jython. | 1 | 0 | 0 | 0 | I am working on building a web interface for a Python tool. It's being designed using J2EE (Spring).
In the process, I need to make calls to Python functions and hence I am using Jython for the same.
But for some modules I don't have the Python source files, I only have the .pyc files, and a document listing the methods of that file. I need to know how I can call these functions inside the .pyc file using jython.
I have tried to decompile the Python files, but since they have been compiled with Python 2.7, I am not able to find a decompiler to do the job | use .pyc files via Jython | 0 | 0.291313 | 1 | 0 | 0 | 1,308
6,614,447 | 2011-07-07T17:12:00.000 | 5 | 0 | 0 | 0 | 1 | python | 0 | 9,271,325 | 0 | 3 | 0 | false | 0 | 0 | I had the same problem just now with some completely unrelated code. I believe my solution was similar to that in eryksun's answer, though I didn't have any trees. What I did have were some sets, and I was doing random.choice(list(set)) to pick values from them. Sometimes my results (the items picked) were diverging even with the same seed each time and I was close to pulling my hair out. After seeing eryksun's answer here I tried random.choice(sorted(set)) instead, and the problem appears to have disappeared. I don't know enough about the inner workings of Python to explain it. | 1 | 7 | 1 | 0 | I am trying to get reproducible results with the genetic programming code in chapter 11 of "Programming Collective Intelligence" by Toby Segaran. However, simply setting seed "random.seed(55)" does not appear to work, changing the original code "from random import ...." to "import random" doesn't help, nor does changing Random(). These all seem to do approximately the same thing, the trees start out building the same, then diverge.
In reading various entries about the behavior of random, I can find no reason, given his GP code, why this divergence should happen. There doesn't appear to be anything in the code except calls to random, that has any variability that would account for this behavior. My understanding is that calling random.seed() should set all the calls correctly and since the code isn't threaded at all, I'm not sure how or why the divergence is happening.
Has anyone modified this code to behave reproducibly? Is there some form of calling random.seed() that may work better?
I apologize for not posting an example, but the code is obviously not mine (I'm adding only the call to seed and changing how random is called in the code) and this doesn't appear to be a simple issue with random (I've read all the entries on Python random here and many on the web in general).
Thanks.
Mark L. | Python random seed not working with Genetic Programming example code | 0 | 0.321513 | 1 | 0 | 0 | 2,656 |
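To illustrate the point in the answer above: set iteration order can differ between interpreter runs (string hash randomization), so random.choice(list(someset)) may diverge even with a fixed seed, while sorting first pins the order. A minimal sketch:

```python
import random

items = {"alpha", "beta", "gamma", "delta"}

def pick(seed):
    # Sorting gives the candidates a stable order, so the same seed
    # always selects the same sequence of elements.
    random.seed(seed)
    return [random.choice(sorted(items)) for _ in range(5)]

print(pick(55) == pick(55))  # True: fully reproducible
```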
6,619,923 | 2011-07-08T04:17:00.000 | 6 | 0 | 0 | 0 | 0 | python,pygame | 0 | 6,619,966 | 0 | 2 | 0 | true | 0 | 1 | I don't think there is, because some window managers don't give you the ability to remove the close button. But you can write an event handler such that the close button does whatever you want, including nothing.
Why do you want to prevent the user from closing? If it's just a matter that you would rather provide an in-game "quit" button that confirms and/or saves before quitting, you can perform the same task when the user hits the close button. | 2 | 4 | 0 | 0 | In a pygame application window, the minimize, resize and close buttons are present. Is there a way to disable the close(X) button? | how to disable the window close button in pygame? | 0 | 1.2 | 1 | 0 | 0 | 1,851 |
6,619,923 | 2011-07-08T04:17:00.000 | 0 | 0 | 0 | 0 | 0 | python,pygame | 0 | 49,826,543 | 0 | 2 | 0 | false | 0 | 1 | Just for the record, another option would be to pass the following argument to the set_mode() method call:
pygame.display.set_mode(..., flags = pygame.NOFRAME)
This however makes the whole frame go away, including the top strip to move the window around and the other buttons, such as minimize, so it's rather overkill for just getting rid of the X button. | 2 | 4 | 0 | 0 | In a pygame application window, the minimize, resize and close buttons are present. Is there a way to disable the close(X) button? | how to disable the window close button in pygame? | 0 | 0 | 1 | 0 | 0 | 1,851 |
6,628,506 | 2011-07-08T18:05:00.000 | 0 | 0 | 0 | 1 | 0 | python,mapreduce,disco | 0 | 6,628,861 | 0 | 2 | 0 | true | 0 | 0 | Never mind, it appears that what I'm doing isn't really meant to be done. It might be possible, but it would be far better to merely use semantic DDFS tags to refer to blobs of data.
The correct use case for Discodex is to store indexes constructed by a Disco map-reduce program that do not need to be the input of another map-reduce program. | 1 | 1 | 0 | 0 | I have a large amount of static data that needs to offer random access. Since I'm using Disco to digest it, I'm using the very impressive-looking Discodex (key, value) store on top of the Disco Distributed File System. However, Disco's documentation is rather sparse, so I can't figure out how to use my Discodex indices as an input into a Disco job.
Is this even possible? If so, how do I do this?
Alternatively, am I thinking about this incorrectly? Would it be better to just store that data as a text file on DDFS? | Running a Disco map-reduce job on data stored in Discodex | 1 | 1.2 | 1 | 0 | 0 | 567
6,630,873 | 2011-07-08T21:46:00.000 | 0 | 0 | 0 | 1 | 0 | python,linux,command-line,download | 0 | 6,630,925 | 0 | 3 | 0 | false | 0 | 0 | Well if you are getting into a linux machine you can use the package manager of that linux distro.
If you are using Ubuntu just use apt-get search python, check the list and do apt-get install python2.7 (not sure if python2.7 or python-2.7, check the list)
You could use yum in fedora and do the same.
if you want to install it on your windows machine i dont know any package manager, i would download the wget for windows, donwload the package from python.org and install it | 1 | 30 | 0 | 0 | I'm on windows, but I'm using a putty shell to connect to a linux machine, and want to install python 2.7. Can't figure out how to do it. How can I download python from command line? | How to download python from command-line? | 0 | 0 | 1 | 0 | 0 | 127,053 |
6,639,247 | 2011-07-10T04:52:00.000 | 2 | 0 | 0 | 0 | 0 | javascript,python,django,time,countdown | 0 | 6,639,561 | 0 | 2 | 0 | false | 1 | 0 | I don't think this question has anything to do with SQL, really--except that you might retrieve an expiration time from SQL. What you really care about is just how to display the timeout real-time in the browser, right?
Obviously the easiest way is just to send a "seconds remaining" counter to the page, either on the initial load, or as part of an AJAX request, then use Javascript to display the timer, and update it every second with the current value. I would opt for using a "seconds remaining" counter rather than an "end datetime", because you can't trust a browser's clock to be set correctly--but you probably can trust it to count down seconds correctly.
If you don't trust Javascript, or the client's clock, to be accurate, you could periodically re-send the current "seconds remaining" value to the browser via AJAX. I wouldn't do this every second, maybe every 15 or 60 seconds at most.
As for deleting/moving data when the clock expires, you'll need to do all of that in Javascript.
I'm not 100% sure I answered all of your questions, but your questions seem a bit scattered anyway. If you need more clarification on the theory of operation, please ask. | 2 | 0 | 0 | 0 | If I make a live countdown clock like ebay, how do I do this with django and sql? I'm assuming running a function in django or in sql over and over every second to check the time would be horribly inefficient.
Is this even a plausible strategy?
Or is this the way they do it:
When a page loads, it takes the end datetime from the server and runs a javascript countdown clock against it on the user machine?
If so, how do you do the countdown clock with javascript? And how would I be able to delete/move data once the time limit is over without a user page load? Or is it absolutely necessary for the user to load the page to check the time limit to create an efficient countdown clock? | Live countdown clock with django and sql? | 1 | 0.197375 | 1 | 1 | 0 | 2,127 |
6,639,247 | 2011-07-10T04:52:00.000 | 0 | 0 | 0 | 0 | 0 | javascript,python,django,time,countdown | 0 | 6,639,878 | 0 | 2 | 0 | false | 1 | 0 | I have also encountered the same problem a while ago.
First of all your problem is not related neither django nor sql. It is a general concept and it is not very easy to implement because of overhead in server.
One solution come into my mind is keeping start time of the process in the database.
When someone request you to see remaingn time, read it from database, subtract the current time and server that time and in your browser initialize your javascript function with that value and countdown like 15 sec. After that do the same operation with AJAX without waiting user's request.
However, there would be other implementations depending your application. If you explain your application in detail there could be other solutions.
For example, if you implement a questionnaire with limited time, then for every answer submit, you should pass the calculated javascript value for that second. | 2 | 0 | 0 | 0 | If I make a live countdown clock like ebay, how do I do this with django and sql? I'm assuming running a function in django or in sql over and over every second to check the time would be horribly inefficient.
Is this even a plausible strategy?
Or is this the way they do it:
When a page loads, it takes the end datetime from the server and runs a javascript countdown clock against it on the user machine?
If so, how do you do the countdown clock with javascript? And how would I be able to delete/move data once the time limit is over without a user page load? Or is it absolutely necessary for the user to load the page to check the time limit to create an efficient countdown clock? | Live countdown clock with django and sql? | 1 | 0 | 1 | 1 | 0 | 2,127 |
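On the server side, the value to hand to the JavaScript countdown is just "seconds remaining", computed from a stored end time. A minimal sketch (function name is illustrative):

```python
from datetime import datetime, timedelta

def seconds_remaining(end_time, now=None):
    """Return the whole seconds left until end_time (never negative)."""
    now = now or datetime.utcnow()
    return max(0, int((end_time - now).total_seconds()))

start = datetime(2011, 7, 10, 12, 0, 0)
end = start + timedelta(minutes=15)
print(seconds_remaining(end, now=start))                       # 900
print(seconds_remaining(end, now=end + timedelta(seconds=5)))  # 0
```

The browser then only needs to count this number down locally, which also sidesteps any mistrust of the client's clock.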
6,643,747 | 2011-07-10T21:15:00.000 | 1 | 0 | 1 | 0 | 0 | python,algorithm | 0 | 6,644,412 | 0 | 4 | 0 | false | 0 | 0 | Sort the ranges (x, y) by increasing x values. Now, for each range, if it overlaps the previous range, set your current "big range"'s y value to the current range's y value. If it doesn't, start a new "big range": this one won't overlap any of the previous ones. If the current range is completely included in the current big one, ignore it. | 1 | 2 | 0 | 0 | In Python, you can get the numbers in a range by calling range(x,y). But given two ranges, say 5-15, and 10-20 how can you get all the numbers 5-20 without duplicates? The ranges may also be disjoint.
I could concat all the results and then uniquify the list, but is that the fastest solution? | Given a bunch of ranges of numbers, get all the numbers within those ranges? | 0 | 0.049958 | 1 | 0 | 0 | 292 |
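The sort-and-sweep merge described in the answer above can be sketched like this (function names are just illustrative):

```python
def merge_ranges(ranges):
    """Collapse (x, y) inclusive ranges into non-overlapping 'big ranges'."""
    merged = []
    for x, y in sorted(ranges):
        if merged and x <= merged[-1][1] + 1:   # overlaps or adjoins previous
            merged[-1][1] = max(merged[-1][1], y)
        else:
            merged.append([x, y])
    return [tuple(r) for r in merged]

def numbers_in(ranges):
    # Expand the merged ranges; no duplicates possible by construction.
    return [n for x, y in merge_ranges(ranges) for n in range(x, y + 1)]

print(merge_ranges([(5, 15), (10, 20)]))                       # [(5, 20)]
print(numbers_in([(5, 15), (10, 20)]) == list(range(5, 21)))   # True
```

Disjoint inputs simply stay as separate merged ranges.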
6,656,750 | 2011-07-11T21:22:00.000 | 1 | 0 | 0 | 0 | 0 | python,user-interface,tkinter,pygame | 0 | 6,657,099 | 0 | 3 | 0 | false | 0 | 1 | Since tkinter is built into Python, it might be better. I prefer wx, but if you just want a few dialogues, tkinter is fine.
You could also just try raw_input('type "1" for low res, "2" for high res'). Note that this reads from the console rather than a dialog, so it only helps if your program runs with a console window attached.
So, I'd like to have a very very very simple dialog box which pops up, prompts the user for an integer, then closes. My research has shown that pygame can't do a dialog box like this, and I can't get input in a pygame window, because the program doesn't know yet which monitor to draw the pygame window to.
So, my question is, what is the simplest way to create a dialog box for input? I've looked into wx and tkinter. I could use either of them, but what I'm wondering is, but I want to import the least number of extra toolkits. i.e. I don't want to have to start the wx main loop just so I can make 1 dialog, then close it, then start a whole new pygame window.
I know how to do this in wx, so I'm mostly looking for advice/ideas as to which toolkit would be simplest, as opposed to instruction on how to actually do it (though that's always nice too). | Simplest way to get initial text input in pygame | 0 | 0.066568 | 1 | 0 | 0 | 2,527 |
6,666,162 | 2011-07-12T14:51:00.000 | 1 | 0 | 1 | 1 | 0 | python,ctypes | 0 | 28,210,864 | 0 | 2 | 0 | false | 0 | 0 | In fact,you can't create an instance of OpcServer Object if you use the moudle ctypes.Because C is not Object-Oriented language.If you use C++ to make a .dll file,you should make a C interface,in program such as extern "C",if you return an object in this .dll file,python' function can't recieve this object.If I'm not mistaken.If you really want to return an object in the dll file,maybe you can use boost.python to develop the dll file. | 1 | 0 | 0 | 0 | I want to use OpcDaNet.dll in python, I use for that ctypes, but to be able to use the functions I'm intersted in, I have to create an instance of OpcServer Object, how can I do that with Ctypes?
thanks for your answers | ctypes lib in python | 0 | 0.099668 | 1 | 0 | 0 | 533
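For what it's worth, basic ctypes usage against a plain C shared library looks like the sketch below (using the C math library on a typical Linux system as a stand-in; OpcDaNet.dll itself is likely a .NET assembly, which plain ctypes cannot drive, as the answer notes):

```python
import ctypes
import ctypes.util

# Load the C math library.  The library name is platform-specific;
# this assumes a typical Linux system.  On Windows you would load a
# DLL by name, e.g. ctypes.CDLL("some.dll"), but only if it exports
# plain C functions rather than C++ or .NET objects.
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")

# Declare the prototype so ctypes converts arguments correctly.
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

print(libm.cos(0.0))  # 1.0
```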
6,668,073 | 2011-07-12T17:00:00.000 | 1 | 0 | 0 | 0 | 1 | python,mysql,mysql-python,resolve | 0 | 6,668,116 | 0 | 2 | 0 | false | 0 | 0 | This is an option which needs to be set in the MySQL configuration file on the server. It can't be set by client APIs such as MySQLdb. This is because of the potential security implications.
That is, I may want to deny access from a particular hostname. With skip-name-resolve enabled, this won't work. (Admittedly, access control via hostname is probably not the best idea anyway.) | 1 | 12 | 0 | 0 | I try to connect to database in a domain from my virtual machine.
It works on XP, but somehow does not work on Win7 and quitting with:
"OperationalError: (1042, "Can't get hostname for your address")"
Now I tried disabling the firewall and other things, but that didn't help anyway.
I don't need the DNS resolving, which will only slow everything down.
So I want to use the option "skip-name-resolve", but there is no my.ini
or my.cnf when using MySQLdb for Python, so how can I still use this option?
Thanks for your help
-Alex | How to use the option skip-name-resolve when using MySQLdb for Python? | 0 | 0.099668 | 1 | 1 | 0 | 60,490 |
6,680,695 | 2011-07-13T14:39:00.000 | 0 | 0 | 0 | 0 | 0 | python,logging | 0 | 6,681,434 | 0 | 2 | 0 | true | 0 | 0 | I assume by multiple files you mean you have multiple separate scripts running and you want to log some consolidated summary of events occurring in all those scripts.
If you have a single script, it is easy to do by maintaining a dict and dumping it to a log file, or maybe overwriting a single summary file. For multiple scripts you can run a separate logging server, which is used by all scripts for logging, or at least to add and dump summary info. Alternatively, you can use a common database with table locking.
My code establishes an ssh connection and runs commands. An exception is raised and handled every time the script loses connection to the ssh server. Let's say this type of exception is handled in different ways across multiple files. How can I keep a record of the number of times this is handled? This is not specific to exceptions. If, for example, I run a command and get a certain response back, I want to make note that this occurred. I need all of this to be logged (I already know how to set up the logger) when the script ends.
The reason for doing this is because my script logs a large amount of information and I want to be able to have a quick summary of important things that happened at the end of the log file.
tldr; How can I keep a record of the number of times something occurs / when something occurs across multiple files so I can log it when the script ends? | How can I record the number of times something happens over several files? | 1 | 1.2 | 1 | 0 | 0 | 303 |
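One hedged sketch of the "count events across files" idea: a tiny shared module holding a collections.Counter that every other file imports and updates (all names here are hypothetical):

```python
# stats.py -- a tiny shared module; every file that needs to record an
# event imports this and calls note().
from collections import Counter

events = Counter()

def note(event):
    events[event] += 1

def summary():
    return ", ".join("%s: %d" % (k, v) for k, v in sorted(events.items()))

# In the files doing the work:
note("ssh reconnect")
note("ssh reconnect")
note("unexpected response")

# At script exit (e.g. registered via atexit, or at the end of main),
# write the one-line summary into the log:
print(summary())  # ssh reconnect: 2, unexpected response: 1
```

Because modules are singletons within a process, every file that imports stats sees and updates the same Counter.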
6,694,662 | 2011-07-14T14:17:00.000 | 2 | 0 | 0 | 1 | 0 | python,google-app-engine,google-cloud-datastore | 0 | 6,697,664 | 0 | 2 | 0 | true | 1 | 0 | You basically need to do it in two steps:
Do what systempuntoout's answer said to only allow logged-in users to see your site.
On each of your routes (URL handlers), the first step should be to get their user object and check if they are on a list you're keeping of users allowed to see your app. For a first run, you could just have the list be a global variable, but this isn't very flexible (it makes you redeploy your app every time you want to update the list), so for a second run you should refactor it to perhaps read from the Datastore to see if a user is in the allowed list or not. | 1 | 3 | 0 | 0 | I want to upload my application on google app engine
and I want it to be usable by only selected users,
so I want to know how this is possible.
I want to use the users' Gmail accounts. | How to allow only selected user login in gae+python application? | 1 | 1.2 | 1 | 0 | 0 | 755
6,697,259 | 2011-07-14T17:10:00.000 | 2 | 0 | 0 | 0 | 0 | python,keyboard,matplotlib,interactive | 0 | 11,928,786 | 0 | 6 | 1 | false | 0 | 0 | Use waitforbuttonpress(timeout=0.001) then plot will see your mouse ticks. | 1 | 70 | 1 | 0 | I used matplotlib to create some plot, which depends on 8 variables. I would like to study how the plot changes when I change some of them. I created some script that calls the matplotlib one and generates different snapshots that later I convert into a movie, it is not bad, but a bit clumsy.
I wonder if somehow I could interact with the plot regeneration using keyboard keys to increase / decrease values of some of the variables and see instantly how the plot changes.
What is the best approach for this?
Also if you can point me to interesting links or a link with a plot example with just two sliders? | Interactive matplotlib plot with two sliders | 0 | 0.066568 | 1 | 0 | 0 | 98,213 |
6,700,149 | 2011-07-14T21:29:00.000 | -2 | 0 | 0 | 0 | 0 | python,zeromq | 0 | 6,831,300 | 0 | 3 | 1 | false | 0 | 0 | In ZeroMQ there can only be one publisher per port. The only (ugly) workaround is to start each child PUB socket on a different port and have the parent listen on all those ports.
But the pipeline pattern described in the 0MQ user guide is a much better way to do this.
(1) defines a multi-dimensional numpy array
(2) forks 10 different python scripts (call them children). Each of them must be able to read the contents of the numpy array from (1) at any single point in time (as long as they are alive).
(3) each of the child scripts will do it's own work (children DO NOT share any info with each other)
(4) at any point in time, the parent script must be able to accept messages from all of its children. These messages will be parsed by the parent and cause the numpy array from (1) to change.
How do I go about this, when working in python in a Linux environment? I thought of using zeroMQ and have the parent be a single subscriber while the children will all be publishers; does it make sense or is there a better way for this?
Also, how do I allow all the children to continuously read the contents of the numpy array that was defined by the parent ? | Python zeromq -- Multiple Publishers To a Single Subscriber? | 0 | -0.132549 | 1 | 0 | 0 | 14,920 |
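Setting ZeroMQ itself aside, the many-children-to-one-parent message flow can be illustrated with a stdlib queue; in 0MQ terms the children would be PUSH (or PUB) sockets connecting to a single PULL (SUB) socket bound by the parent. A hedged sketch using threads as stand-ins for the child processes:

```python
import queue
import threading

inbox = queue.Queue()  # the parent's single receive channel

def child(child_id):
    # Each child pushes messages; children never talk to each other.
    for i in range(3):
        inbox.put((child_id, "update %d" % i))

threads = [threading.Thread(target=child, args=(n,)) for n in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The parent drains its inbox and would update the shared array here.
messages = []
while not inbox.empty():
    messages.append(inbox.get())

print(len(messages))  # 30
```

Sharing the numpy array in the other direction (parent to children, read-only) is a separate problem; with real processes that usually means shared memory or republishing the array over a socket.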
6,709,693 | 2011-07-15T15:45:00.000 | 3 | 0 | 1 | 0 | 0 | python,algorithm | 0 | 6,709,749 | 0 | 7 | 0 | false | 0 | 0 | Just use the same algorithm for calculating edit distance on strings if the values don't have any particular meaning. | 1 | 27 | 0 | 0 | I have two lists:
eg.
a = [1,8,3,9,4,9,3,8,1,2,3]
and
b = [1,8,1,3,9,4,9,3,8,1,2,3]
Both contain ints. There is no meaning behind the ints (eg. 1 is not 'closer' to 3 than it is to 8).
I'm trying to devise an algorithm to calculate the similarity between two ORDERED lists. Ordered is the keyword here (so I can't just take the set of both lists and calculate their set_difference percentage). Sometimes numbers do repeat (for example 3, 8, and 9 above), and I cannot ignore the repeats.
In the example above, the function I would call would tell me that a and b are ~90% similar for example. How can I do that? Edit distance was something which came to mind. I know how to use it with strings but I'm not sure how to use it with a list of ints. Thanks! | Calculating the similarity of two lists | 0 | 0.085505 | 1 | 0 | 0 | 38,544 |
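As the answer suggests, Levenshtein edit distance only compares elements for equality, so the string algorithm works unchanged on lists of ints. A sketch, with a similarity percentage on top:

```python
def edit_distance(a, b):
    """Classic Levenshtein distance, computed row by row; works on
    lists of ints just as well as on strings."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (x != y)))  # substitution
        prev = cur
    return prev[-1]

def similarity(a, b):
    return 1.0 - edit_distance(a, b) / float(max(len(a), len(b)))

a = [1, 8, 3, 9, 4, 9, 3, 8, 1, 2, 3]
b = [1, 8, 1, 3, 9, 4, 9, 3, 8, 1, 2, 3]
print(edit_distance(a, b))         # 1 (b is a with an extra 1 inserted)
print(round(similarity(a, b), 2))  # 0.92
```

How to turn a distance into a percentage is a design choice; dividing by the longer length is one common option.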
6,709,833 | 2011-07-15T15:56:00.000 | 0 | 0 | 0 | 0 | 0 | php,python,mysql,xampp,php-gtk | 1 | 6,711,276 | 0 | 2 | 0 | true | 0 | 0 | If you intend just to convert the data, which I guess is a process you do only once, you will run the script locally as a command-line script. For that you don't need a web site, and thus no XAMPP. Which language you take is secondary, except that you say PHP has a library for this. Do Python or other languages have one?
About your concern over error detection: why not test your script with only one file first? If that conversion is successful you can build your loop and test it on maybe five files, i.e. have a counter that ends the process after that number. If that is still okay you can go on with the rest. You can also write log data and dump a result for every 100 files processed. This way you can see if your script is doing something or idling. | 2 | 1 | 0 | 0 | Recently I've begun working on exploring ways to convert about 16k Corel Paradox 4.0 database tables (my client has been using a legacy platform over 20 years mainly due to massive logistical matters) to more modern formats (i.e. CSV, SQL, etc.) en masse. So far I've been looking at PHP, since it has a library devoted to Paradox data processing; however, while I'm fairly confident in how to write the conversion code (i.e. simply calling a few file open, close, and write functions), I'm concerned about error detection and ensuring that when running the script, I don't spend hours waiting for it to run only to see 16k corrupt files exported.
Also, I'm not fully sure about the logic loop for calling the files. I'm thinking of having the program generate a list of all the files with the appropriate extension and then looping through the list, however I'm not sure if that's ideal for a directory of this size.
This is being run on a local Windows 7 x64 system with XAMPP setup (the database is all internal use) so I'm not sure if pure PHP is the best idea -- so I've been wondering if Python or some other lightweight scripting language might be better for handling this.
Thanks very much in advance for any insights and assistance, | Batch converting Corel Paradox 4.0 Tables to CSV/SQL -- via PHP or other scripts | 0 | 1.2 | 1 | 1 | 0 | 734 |
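If Python ends up being the tool, the file-enumeration loop with per-file error handling might look like the sketch below. The converter body here is a placeholder, not a real Paradox parser; the point is that one bad file is recorded and skipped instead of sinking the whole batch:

```python
import csv
import glob
import os

def convert_one(path, out_dir):
    # Placeholder converter -- a real one would parse the Paradox table
    # here and write its rows out.  This just records the file name.
    out_path = os.path.join(out_dir, os.path.basename(path) + ".csv")
    with open(out_path, "w", newline="") as f:
        csv.writer(f).writerow([os.path.basename(path)])
    return out_path

def convert_all(src_dir, out_dir, pattern="*.db"):
    ok, failed = [], []
    for path in sorted(glob.glob(os.path.join(src_dir, pattern))):
        try:
            ok.append(convert_one(path, out_dir))
        except Exception as exc:  # record the failure and keep going
            failed.append((path, str(exc)))
    return ok, failed
```

Periodically printing len(ok) inside the loop gives the "is it doing something or idling" signal the answer mentions, and the failed list is the after-the-fact error report.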
6,709,833 | 2011-07-15T15:56:00.000 | 1 | 0 | 0 | 0 | 0 | php,python,mysql,xampp,php-gtk | 1 | 39,728,385 | 0 | 2 | 0 | false | 0 | 0 | This is doubtless far too late to help you, but for posterity...
If one has a Corel Paradox working environment, one can just use it to ease the transition.
We moved the Corel Paradox 9 tables we had into an Oracle schema we built by connecting to the schema (using an alias such as SCHEMA001) then writing this Procedure in a script from inside Paradox:
Proc writeTable(targetTable String)
   errorTrapOnWarnings(Yes)
   try
      tc.open(targetTable)
      tc.copy(":SCHEMA001:" + targetTable)
      tc.close()
   onFail
      errorShow()
   endTry
endProc
One could highly refine this with more Paradox programming, but you get the idea. One thing we discovered, though, is that Paradox uses double quotes for the column names when it creates the Oracle version, which means you can get lower-case letters in column names in Oracle, which is a pain. We corrected that by writing a quick Oracle query to upper() all the resulting column names.
We called the procedure like so:
Var
targetTable String
tc TCursor
endVar
method run(var eventInfo Event)
targetTable = "SomeTableName"
writeTable(targetTable)
msgInfo("TransferData.ssl--script finished",
"That's all, folks!" )
return
endMethod | 2 | 1 | 0 | 0 | Recently I've begun working on exploring ways to convert about 16k Corel Paradox 4.0 database tables (my client has been using a legacy platform over 20 years mainly due to massive logistical matters) to more modern formats (i.e.CSV, SQL, etc.) en mass and so far I've been looking at PHP since it has a library devoted to Paradox data processing however while I'm fairly confident in how to write the conversion code (i.e. simply calling a few file open, close, and write functions) I'm concerned about error detection and ensuring that when running the script, I don't spend hours waiting for it to run only to see 16k corrupt files exported.
Also, I'm not fully sure about the logic loop for calling the files. I'm thinking of having the program generate a list of all the files with the appropriate extension and then looping through the list, however I'm not sure if that's ideal for a directory of this size.
This is being run on a local Windows 7 x64 system with XAMPP setup (the database is all internal use) so I'm not sure if pure PHP is the best idea -- so I've been wondering if Python or some other lightweight scripting language might be better for handling this.
Thanks very much in advance for any insights and assistance, | Batch converting Corel Paradox 4.0 Tables to CSV/SQL -- via PHP or other scripts | 0 | 0.099668 | 1 | 1 | 0 | 734 |
6,711,365 | 2011-07-15T18:08:00.000 | 2 | 1 | 0 | 1 | 0 | python,linux,module,copy | 0 | 6,711,450 | 0 | 1 | 0 | false | 0 | 0 | Make it a proper Python package on top of setuptools and register your command-line frontends using the 'console_scripts' entry-point. | 1 | 1 | 0 | 0 | I have some python modules to copy into my Linux computer.
I found out that I need to copy them into one of the directories that Python searches, or else add a new path for it.
1. When I tried to copy files into /usr/bin/..../python2.6 .. it's not allowing me.
How do I do it?
2. Also, how do I add a new search path?
Please guide me in detail; I have very little knowledge of Linux.
Also, please tell me how I can get past this kind of problem myself. Is there any small book or similar resource to learn from? | how to copy python modules into python lib directory | 0 | 0.379949 | 1 | 0 | 0 | 1,248
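A sketch of the usual search-path options (the directory name here is just an example):

```python
import sys

# Option 1: add a directory to the search path for this process only.
extra = "/home/me/mymodules"
if extra not in sys.path:
    sys.path.insert(0, extra)

print(extra in sys.path)  # True

# Option 2 (outside Python): export PYTHONPATH=/home/me/mymodules
# before running the script; Python prepends it to sys.path.
#
# Option 3: copy the modules into site-packages, which needs root on
# a system Python, e.g.
#   sudo cp mymodule.py /usr/lib/python2.6/site-packages/
# (the exact site-packages location varies by distro).
```

Option 2 is usually the least invasive for modules you maintain yourself.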
6,712,092 | 2011-07-15T19:16:00.000 | 1 | 1 | 0 | 0 | 0 | php,python,cgi,webserver,cgihttprequesthandler | 0 | 6,712,146 | 0 | 1 | 0 | true | 0 | 0 | When you say your Python process "spawns and executes" a cgi-php script, I believe what you mean is "it calls my PHP script by executing the PHP CLI executable, passing it the name of my script."
Using the PHP CLI executable, HTTP-specific superglobals and environment values will not be set automatically. You would have to read in all HTTP request headers and GET/POST data in your Python server process, and then set them in the environment used by your PHP script.
The whole experiment sounds interesting, but this is what mod_php does already. | 1 | 0 | 0 | 0 | I have a very simple CGI webserver running using python CGIHTTPServer class.
This class spawns and executes a cgi-php script.
If the webpage sends POST request, how can I access the POST request data in the php script? | Accessing POST request data from CGIHTTPServer (python) | 0 | 1.2 | 1 | 0 | 1 | 1,199 |
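For a CGI script, the POST body arrives on standard input with its size in the CONTENT_LENGTH environment variable; when the handler is PHP run via the CLI, those values have to be passed through explicitly, as the answer notes. A stdlib sketch of the reading side (Python 3 spelling; on Python 2 parse_qs lives in urlparse), simulated here without a real server:

```python
import io
from urllib.parse import parse_qs

def read_post(environ, stream):
    """Read an application/x-www-form-urlencoded POST body the way a
    CGI script would: CONTENT_LENGTH characters from standard input."""
    length = int(environ.get("CONTENT_LENGTH", 0) or 0)
    body = stream.read(length)
    return parse_qs(body)

# Simulated request, as the web server would present it to the script:
environ = {"REQUEST_METHOD": "POST", "CONTENT_LENGTH": "17"}
stream = io.StringIO("name=ada&lang=php")
print(read_post(environ, stream))  # {'name': ['ada'], 'lang': ['php']}
```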
6,721,138 | 2011-07-17T00:21:00.000 | 1 | 1 | 1 | 0 | 0 | python,c,encryption,directory | 0 | 6,721,220 | 0 | 1 | 0 | true | 0 | 0 | I think the best thing to do would be to encrypt the individual text files using GPG, one of the strongest encryption systems available(and for free!) You can get several python libraries to do this, and I recommend python-gnupg. Also, you can probably just reference the file where the key is located and distribute it along with the application? If you want to include a preset key and not have your users be able to see where that key is, you are going to have a very hard time. How about using a key on a server you control that somehow only accepts requests for the key from copies of your application? I don't know how you'd make this secure though through Python.
About adding files to the folder and sending it along with the program, perhaps you aren't thinking of the most optimal solution? There are plenty of python data structures that can be serialized and accomplish most of the things you are talking about in your post. | 1 | 0 | 0 | 0 | I want to design an application that reads some a folder of text files and shows the user its contents. Three problems arise: I need the folder containing the text files to be encrypted which I don't know how to do, two, I need a way to read the encrypted files without revealing the key in the python code, so I guess C would be the best way to do that even if I don't like that way(any suggestions are welcome,using python if possible), and three, I need a way to add files to the folder and then send the encrypted folder along with the program.
Is there any way to do those things without ever revealing the key or giving the user the possibility to read the folder except using my program?
Thanks in advance for any help!
EDIT: Also, is there a way to use C to encrypt and decrypt files so that I can put the key in the compiled file and distribute that with my program? | How to use python to read an encrypted folder | 1 | 1.2 | 1 | 0 | 0 | 870 |
6,736,152 | 2011-07-18T16:30:00.000 | 0 | 0 | 0 | 1 | 0 | python,ajax,cgi,subprocess,pipe | 0 | 6,737,020 | 0 | 2 | 0 | false | 0 | 0 | This sounds a lot like a homework question, but even with this list, you have a lot of work ahead of you for a dubious reward, so here we go.
Your C++ program should listen on a socket
Your python program needs to listen on a web socket, and also have a connection open to the C++ program through the C++ socket.
I'd suggest something like web.py for your web framework
Your web.py program is going to accept XMLHTTP Requests at a URL
Your web page is going to submit requests through that XMLHTTP request, and render the results back into the web page.
An easy way to do this on the frontend is to use jQuery AJAX commands; they will hit your web.py URL, which will validate the input, call a function to send it off to the C++ socket, get a response, and send it back as a response to your jQuery request.
Good luck. | 1 | 1 | 0 | 1 | Let me start with an example:
There is a C++ program which can be run on my server; the program is named "Guess the Number". Every time it runs,
first it will generate a random integer between 1 and 100.
then I need the user to guess a number and pass it to me through a web page, form or something.
now I want to pass the number to the program, and then the program will tell me whether it's bigger or smaller.
then I put the information on the web page to let the user know, and then he makes his next guess.
I am able to write the program, and I know how to pass the first argument and give back the information, but I don't know how to interact in the next steps, i.e.:
How to pass the arguments to the program REAL-TIME and get the output?
To make this clearer:
I use subprocess in Python to run the program with the first argument and get the output.
The C++ program uses standard input and output, like while (!check(x)) scanf("%d",&x);, and in check(int x), I use if (x>rand_num) printf("too big\n"); to output. | How to INTERACT between a program on the server and the user via a web page REAL-TIME | 0 | 0 | 1 | 0 | 1 | 233
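The real-time part (write a guess to the child's stdin, read its verdict back from stdout) can be sketched with subprocess pipes. Here a small Python child stands in for the C++ program; the real one would be started by its path instead:

```python
import subprocess
import sys

# A stand-in for the C++ program: reads guesses on stdin, answers on
# stdout.  The real program must flush its output after each answer.
child_code = (
    "import sys\n"
    "secret = 42\n"
    "while True:\n"
    "    line = sys.stdin.readline()\n"
    "    if not line:\n"
    "        break\n"
    "    x = int(line)\n"
    "    if x == secret:\n"
    "        print('correct')\n"
    "        break\n"
    "    print('too big' if x > secret else 'too small')\n"
)

proc = subprocess.Popen([sys.executable, "-u", "-c", child_code],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                        universal_newlines=True)

def guess(x):
    proc.stdin.write("%d\n" % x)
    proc.stdin.flush()  # push the guess through the pipe immediately
    return proc.stdout.readline().strip()

answers = [guess(50), guess(10), guess(42)]
proc.stdin.close()
proc.wait()
print(answers)  # ['too big', 'too small', 'correct']
```

Each web request would then map to one guess() call against the long-lived child process.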
6,737,694 | 2011-07-18T18:40:00.000 | 0 | 0 | 1 | 0 | 0 | python | 0 | 6,738,871 | 0 | 5 | 0 | false | 0 | 1 | It depends on what you are trying to do. The two versions can coexist side by side (note, though, that Python 3.x is not backwards-compatible with 2.x code). You can install both versions, 3.2 and 2.7, on your computer, although 3.2 will unfortunately have to be used through IDLE... ugh. Just install both and then, depending on which one you want to use, run that version's IDLE.
Now, if you mean that you want the more stable version for professional work, go with 2.7 (I have done otherwise, but if you run into problems, 2.7 is better supported).
If you want the more cutting-edge stuff, go with 3.2. Either one you go with works with almost everything. If it doesn't, give it a month or two and the rest of the world will catch up.
This problem made me think, how can I get my two versions to co-exist so I can program in both 2.x and 3.x? | How to I get Python 2.x and 3.x to co-exist? | 0 | 0 | 1 | 0 | 0 | 15,701 |
6,745,464 | 2011-07-19T10:05:00.000 | 3 | 1 | 1 | 0 | 0 | python,math,trigonometry | 0 | 6,745,509 | 0 | 7 | 0 | false | 0 | 0 | You're looking for the math.acos() function. | 1 | 52 | 0 | 0 | Apologies if this is straight forward, but I have not found any help in the python manual or google.
I am trying to find the inverse cosine for a value using python.
i.e. cos⁻¹(x)
Does anyone know how to do this?
Thanks | Inverse Cosine in Python | 0 | 0.085505 | 1 | 0 | 0 | 117,730 |
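For reference, a minimal example of math.acos, which accepts values in [-1, 1] and returns radians in [0, pi]:

```python
import math

x = 0.5
angle = math.acos(x)  # inverse cosine: the angle whose cosine is x
print(round(angle, 4))                # 1.0472  (i.e. pi/3)
print(round(math.degrees(angle), 4))  # 60.0
```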
6,750,619 | 2011-07-19T16:28:00.000 | 7 | 0 | 0 | 0 | 0 | python,http,parallel-processing,download,feed | 1 | 6,882,144 | 0 | 10 | 0 | false | 0 | 0 | You can try pycurl, though the interface is not easy at first, but once you look at examples, its not hard to understand. I have used it to fetch 1000s of web pages in parallel on meagre linux box.
You don't have to deal with threads, so it terminates gracefully, and there are no processes left behind
It provides options for timeout, and http status handling.
It works on both linux and windows.
The only problem is that it provides a basic infrastructure (basically just a python layer above the excellent curl library). You will have to write a few lines to achieve the features you want. | 4 | 19 | 0 | 0 | I'm looking for a python library or a command line tool for downloading multiple files in parallel. My current solution is to download the files sequentially which is slow. I know you can easily write a half-assed threaded solution in python, but I always run into annoying problem when using threading. It is for polling a large number of xml feeds from websites.
My requirements for the solution are:
Should be interruptible. Ctrl+C should immediately terminate all downloads.
There should be no leftover processes that you have to kill manually using kill, even if the main program crashes or an exception is thrown.
It should work on Linux and Windows too.
It should retry downloads, be resilient against network errors and should timeout properly.
It should be smart about not hammering the same server with 100+ simultaneous downloads, but queue them in a sane way.
It should handle important http status codes like 301, 302 and 304. That means that for each file, it should take the Last-Modified value as input and only download if it has changed since last time.
Preferably it should have a progress bar or it should be easy to write a progress bar for it to monitor the download progress of all files.
Preferably it should take advantage of http keep-alive to maximize the transfer speed.
Please don't suggest how I may go about implementing the above requirements. I'm looking for a ready-made, battle-tested solution.
I guess I should describe what I want it for too... I have about 300 different data feeds as xml formatted files served from 50 data providers. Each file is between 100kb and 5mb in size. I need to poll them frequently (as in once every few minutes) to determine if any of them has new data I need to process. So it is important that the downloader uses http caching to minimize the amount of data to fetch. It also uses gzip compression obviously.
Then the big problem is how to use the bandwidth as efficiently as possible without overstepping any boundaries. For example, one data provider may consider it abuse if you open 20 simultaneous connections to their data feeds. Instead it may be better to use one or two connections that are reused for multiple files. Or your own connection may be limited in strange ways. My ISP limits the number of DNS lookups you can do, so some kind of DNS caching would be nice. | Library or tool to download multiple files in parallel | 0 | 1 | 1 | 0 | 1 | 6,882
6,750,619 | 2011-07-19T16:28:00.000 | 5 | 0 | 0 | 0 | 0 | python,http,parallel-processing,download,feed | 1 | 6,750,711 | 0 | 10 | 0 | false | 0 | 0 | There are lots of options but it will be hard to find one which fits all your needs.
In your case, try this approach:
Create a queue.
Put URLs to download into this queue (or "config objects" which contain the URL and other data like the user name, the destination file, etc).
Create a pool of threads
Each thread should try to fetch a URL (or a config object) from the queue and process it.
Use another thread to collect the results (i.e. another queue). When the number of result objects == number of puts in the first queue, then you're finished.
Make sure that all communication goes via the queue or the "config object". Avoid accessing data structures which are shared between threads. This should save you 99% of the problems. | 4 | 19 | 0 | 0 | I'm looking for a python library or a command line tool for downloading multiple files in parallel. My current solution is to download the files sequentially which is slow. I know you can easily write a half-assed threaded solution in python, but I always run into annoying problem when using threading. It is for polling a large number of xml feeds from websites.
My requirements for the solution are:
Should be interruptible. Ctrl+C should immediately terminate all downloads.
There should be no leftover processes that you have to kill manually using kill, even if the main program crashes or an exception is thrown.
It should work on Linux and Windows too.
It should retry downloads, be resilient against network errors and should timeout properly.
It should be smart about not hammering the same server with 100+ simultaneous downloads, but queue them in a sane way.
It should handle important http status codes like 301, 302 and 304. That means that for each file, it should take the Last-Modified value as input and only download if it has changed since last time.
Preferably it should have a progress bar or it should be easy to write a progress bar for it to monitor the download progress of all files.
Preferably it should take advantage of http keep-alive to maximize the transfer speed.
Please don't suggest how I may go about implementing the above requirements. I'm looking for a ready-made, battle-tested solution.
I guess I should describe what I want it for too... I have about 300 different data feeds as xml formatted files served from 50 data providers. Each file is between 100kb and 5mb in size. I need to poll them frequently (as in once every few minutes) to determine if any of them has new data I need to process. So it is important that the downloader uses http caching to minimize the amount of data to fetch. It also uses gzip compression obviously.
Then the big problem is how to use the bandwidth as efficiently as possible without overstepping any boundaries. For example, one data provider may consider it abuse if you open 20 simultaneous connections to their data feeds. Instead it may be better to use one or two connections that are reused for multiple files. Or your own connection may be limited in strange ways. My ISP limits the number of DNS lookups you can do, so some kind of DNS caching would be nice. | Library or tool to download multiple files in parallel | 0 | 0.099668 | 1 | 0 | 1 | 6,882
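The queue-and-worker-pool recipe in this answer can be sketched with only the standard library; the download call here is a placeholder, since the real thing would add the timeouts, retries and Last-Modified handling the question asks for:

```python
import queue
import threading

def worker(tasks, results):
    # Each thread pulls a URL (or "config object") from the queue and
    # processes it; a None sentinel tells the worker to shut down.
    while True:
        url = tasks.get()
        if url is None:
            tasks.task_done()
            break
        # Placeholder for the real fetch (urllib/pycurl with timeouts,
        # retries, Last-Modified handling, etc.).
        results.put((url, "downloaded"))
        tasks.task_done()

def fetch_all(urls, num_threads=4):
    tasks, results = queue.Queue(), queue.Queue()
    for url in urls:
        tasks.put(url)
    threads = [threading.Thread(target=worker, args=(tasks, results))
               for _ in range(num_threads)]
    for t in threads:
        t.start()
    for _ in threads:            # one sentinel per worker
        tasks.put(None)
    for t in threads:
        t.join()
    # Finished when number of results == number of puts in the first queue.
    return dict(results.get() for _ in range(len(urls)))

print(fetch_all(["http://a", "http://b", "http://c"]))
```

All communication goes through the two queues, which is the "save you 99% of the problems" point: no data structure is shared between threads directly.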
6,750,619 | 2011-07-19T16:28:00.000 | -1 | 0 | 0 | 0 | 0 | python,http,parallel-processing,download,feed | 1 | 6,750,741 | 0 | 10 | 0 | false | 0 | 0 | Threading isn't "half-assed" unless you're a bad programmer. The best general approach to this problem is the producer / consumer model. You have one dedicated URL producer, and N dedicated download threads (or even processes if you use the multiprocessing model).
As for all of your requirements, ALL of them CAN be done with the normal python threaded model (yes, even catching Ctrl+C -- I've done it). | 4 | 19 | 0 | 0 | I'm looking for a python library or a command line tool for downloading multiple files in parallel. My current solution is to download the files sequentially which is slow. I know you can easily write a half-assed threaded solution in python, but I always run into annoying problem when using threading. It is for polling a large number of xml feeds from websites.
My requirements for the solution are:
Should be interruptible. Ctrl+C should immediately terminate all downloads.
There should be no leftover processes that you have to kill manually using kill, even if the main program crashes or an exception is thrown.
It should work on Linux and Windows too.
It should retry downloads, be resilient against network errors and should timeout properly.
It should be smart about not hammering the same server with 100+ simultaneous downloads, but queue them in a sane way.
It should handle important http status codes like 301, 302 and 304. That means that for each file, it should take the Last-Modified value as input and only download if it has changed since last time.
Preferably it should have a progress bar or it should be easy to write a progress bar for it to monitor the download progress of all files.
Preferably it should take advantage of http keep-alive to maximize the transfer speed.
Please don't suggest how I may go about implementing the above requirements. I'm looking for a ready-made, battle-tested solution.
I guess I should describe what I want it for too... I have about 300 different data feeds as xml formatted files served from 50 data providers. Each file is between 100kb and 5mb in size. I need to poll them frequently (as in once every few minutes) to determine if any of them has new data I need to process. So it is important that the downloader uses http caching to minimize the amount of data to fetch. It also uses gzip compression obviously.
Then the big problem is how to use the bandwidth as efficiently as possible without overstepping any boundaries. For example, one data provider may consider it abuse if you open 20 simultaneous connections to their data feeds. Instead it may be better to use one or two connections that are reused for multiple files. Or your own connection may be limited in strange ways. My ISP limits the number of DNS lookups you can do, so some kind of DNS caching would be nice. | Library or tool to download multiple files in parallel | 0 | -0.019997 | 1 | 0 | 1 | 6,882
6,750,619 | 2011-07-19T16:28:00.000 | 0 | 0 | 0 | 0 | 0 | python,http,parallel-processing,download,feed | 1 | 6,889,198 | 0 | 10 | 0 | false | 0 | 0 | I used the standard libs for that, urllib.urlretrieve to be precise. I downloaded podcasts this way, via a simple thread pool, each thread using its own retrieve call. I did about 10 simultaneous connections; more should not be a problem. Continuing an interrupted download, maybe not. Ctrl-C could be handled, I guess. It worked on Windows, and I installed a handler for progress bars. All in all: 2 screens of code, plus 2 screens for generating the URLs to retrieve. | 4 | 19 | 0 | 0 | I'm looking for a python library or a command line tool for downloading multiple files in parallel. My current solution is to download the files sequentially which is slow. I know you can easily write a half-assed threaded solution in python, but I always run into annoying problem when using threading. It is for polling a large number of xml feeds from websites.
My requirements for the solution are:
Should be interruptible. Ctrl+C should immediately terminate all downloads.
There should be no leftover processes that you have to kill manually using kill, even if the main program crashes or an exception is thrown.
It should work on Linux and Windows too.
It should retry downloads, be resilient against network errors and should timeout properly.
It should be smart about not hammering the same server with 100+ simultaneous downloads, but queue them in a sane way.
It should handle important http status codes like 301, 302 and 304. That means that for each file, it should take the Last-Modified value as input and only download if it has changed since last time.
Preferably it should have a progress bar or it should be easy to write a progress bar for it to monitor the download progress of all files.
Preferably it should take advantage of http keep-alive to maximize the transfer speed.
Please don't suggest how I may go about implementing the above requirements. I'm looking for a ready-made, battle-tested solution.
I guess I should describe what I want it for too... I have about 300 different data feeds as xml formatted files served from 50 data providers. Each file is between 100kb and 5mb in size. I need to poll them frequently (as in once every few minutes) to determine if any of them has new data I need to process. So it is important that the downloader uses http caching to minimize the amount of data to fetch. It also uses gzip compression obviously.
Then the big problem is how to use the bandwidth as efficiently as possible without overstepping any boundaries. For example, one data provider may consider it abuse if you open 20 simultaneous connections to their data feeds. Instead it may be better to use one or two connections that are reused for multiple files. Or your own connection may be limited in strange ways. My ISP limits the number of DNS lookups you can do, so some kind of DNS caching would be nice. | Library or tool to download multiple files in parallel | 0 | 0 | 1 | 0 | 1 | 6,882
6,755,794 | 2011-07-19T23:18:00.000 | 16 | 0 | 1 | 0 | 0 | python,ipython | 0 | 6,766,813 | 0 | 3 | 0 | false | 0 | 0 | It should be stored as _exit_code after you run the command (at least in the upcoming v0.11 release). | 1 | 8 | 0 | 0 | Does anybody know how to check the status of the last executed command (exit code) in ipython? | Check the exit status of last command in ipython | 0 | 1 | 1 | 0 | 0 | 5,195 |
6,764,329 | 2011-07-20T15:21:00.000 | 8 | 0 | 0 | 0 | 0 | python,optimization,pyqt | 0 | 6,821,582 | 0 | 2 | 0 | true | 0 | 1 | I'm not sure if this is exactly the same thing you are doing, but it sounds similar to something I have in some apps, where there is a list of custom widgets. And it does significantly slow down when you are creating and destroying tons of widgets.
If it's an issue of a smaller number of total widgets that are just being created and deleted a lot, you can create the widgets once and only change the data of those widgets as information needs to be updated, as opposed to creating new widgets each time the information changes. That way you can even change the data from threads without having to worry about creating widgets.
Another situation is where you are displaying a list with custom widgets and there are a TON of results. I notice it always slows down when you have 1000s of custom widgets in a list. A trick my co-worker came up with was to have a fake kind of list that uses a static number of slots in the display. Say it shows 10 slots in the view. The scrollbar doesn't really scroll down across MORE widgets... what it does is scroll the DATA through the 10 visible widgets. You can get a crazy performance increase doing that. But only if it is an acceptable display style for your application. | 2 | 6 | 0 | 0 | For those of you who have written fairly complex PyQt applications, what tips and tricks would you offer for speeding up your applications? I have a few examples of where my program begins to slow down as it grows larger:
I have a 'dashboard' written that is destroyed and re-created when a user clicks on an item in a TreeWidget. What would be a better way to have a modular interface where clicking an item in the TreeWidget changes the dashboard, but doesn't require destroying a widget and recreating it.
Each dashboard also loads an image from a network location. This creates some slowdown as one navigates around the application, but after it's loaded into memory, 'going back to that same dash' is faster. Is there a good method or way to run a thread on program load that maybe pre-loads the images into memory? If so, how do you implement that?
When you have a large variety of dashboard items and data that gets loaded into them, do you guys normally thread the data load and load it back in which each thread completes? Is this viable when somebody is browsing around quickly? Would implementing a kill-switch for the threads such that when a user changes dashboards, the threads die work? Or would the constant creation and killing of threads cause some sort of, well, meltdown.
Sorry for the huge barrage of questions, but they seemed similar enough to warrant bundling them together. | Optimizing your PyQt applications | 0 | 1.2 | 1 | 0 | 0 | 3,849 |
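The "static number of slots" trick from the first answer is, at its core, plain windowing logic that is independent of Qt. A minimal sketch of just that logic (in a real app each slot would be a persistent widget whose contents you overwrite whenever the scrollbar moves):

```python
def visible_window(data, scroll_offset, num_slots=10):
    """Return the rows that the fixed widget slots should display.

    Instead of creating one widget per row, keep num_slots widgets
    alive and refill them with this window on every scroll event.
    """
    # Clamp the offset so the last page is always full (when possible).
    max_offset = max(0, len(data) - num_slots)
    offset = min(max(0, scroll_offset), max_offset)
    return data[offset:offset + num_slots]

rows = [f"row {i}" for i in range(1000)]
print(visible_window(rows, 0)[:3])
print(visible_window(rows, 995)[0])   # clamped to the last full page
```

The scrollbar's range is set to len(data) - num_slots, and its value is fed in as scroll_offset; only 10 widgets ever exist regardless of how many rows there are.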
6,764,329 | 2011-07-20T15:21:00.000 | 1 | 0 | 0 | 0 | 0 | python,optimization,pyqt | 0 | 6,774,281 | 0 | 2 | 0 | false | 0 | 1 | Are you using QNetworkAccessManager to load your images? It has cache support. It also loads everything in the background with completion callbacks.
I don't really understand what your dashboard is doing. Have you thought about using QtWebKit? Maybe your dashboard content could easily be implemented in HTML?
PS. I don't like threads in Python and don't think they are a good idea. Deferred jobs delegated to the Qt core are better. | 2 | 6 | 0 | 0 | For those of you who have written fairly complex PyQt applications, what tips and tricks would you offer for speeding up your applications? I have a few examples of where my program begins to slow down as it grows larger:
I have a 'dashboard' written that is destroyed and re-created when a user clicks on an item in a TreeWidget. What would be a better way to have a modular interface where clicking an item in the TreeWidget changes the dashboard, but doesn't require destroying a widget and recreating it.
Each dashboard also loads an image from a network location. This creates some slowdown as one navigates around the application, but after it's loaded into memory, 'going back to that same dash' is faster. Is there a good method or way to run a thread on program load that maybe pre-loads the images into memory? If so, how do you implement that?
When you have a large variety of dashboard items and data that gets loaded into them, do you guys normally thread the data load and load it back in which each thread completes? Is this viable when somebody is browsing around quickly? Would implementing a kill-switch for the threads such that when a user changes dashboards, the threads die work? Or would the constant creation and killing of threads cause some sort of, well, meltdown.
Sorry for the huge barrage of questions, but they seemed similar enough to warrant bundling them together. | Optimizing your PyQt applications | 0 | 0.099668 | 1 | 0 | 0 | 3,849 |
6,791,916 | 2011-07-22T15:00:00.000 | 0 | 0 | 0 | 0 | 0 | python,stream | 0 | 17,312,308 | 0 | 2 | 1 | false | 0 | 0 | You can try using Scapy's sniffing function sniff(): append the captured packets to a list and do your extraction process on that. | 1 | 0 | 0 | 0 | I want to use Python in combination with tcpdump to put the generated stream from tcpdump into a more readable form. There are some fields in the stream with interesting values.
I found some stuff here at Stack Overflow regarding Python and tcpdump, but in my mind the best way is to put it in an array.
After this is done I want to read some fields out of the array, and then it can be cleared and used for the next network frame.
Can somebody give me some hints on how this can be done? | put stream from tcpdump into an array - which python version should i use? | 0 | 0 | 1 | 0 | 1 | 476
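Whichever capture tool feeds the data, the "parse each frame into an array of fields" step can be sketched in pure Python. The sample line format below is an assumption: tcpdump's text output varies with flags and protocol, so treat this parser as a starting point only:

```python
def parse_tcpdump_line(line):
    """Split one line of tcpdump-style text output into named fields.

    Assumes the common 'TIME PROTO SRC > DST: REST' shape; real tcpdump
    output differs depending on options, so adapt the indices as needed.
    """
    fields = line.split()
    return {
        "time": fields[0],
        "proto": fields[1],
        "src": fields[2],
        "dst": fields[4].rstrip(":"),
        "rest": " ".join(fields[5:]),
    }

sample = "12:00:01.000000 IP 10.0.0.1.5000 > 10.0.0.2.80: Flags [S], length 0"
packets = [parse_tcpdump_line(sample)]   # append one dict per frame
print(packets[0]["src"], "->", packets[0]["dst"])
```

Reading tcpdump -l line-buffered output from a subprocess pipe and feeding each line through this function gives the per-frame "array" the question describes; clearing the list between frames is then trivial.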
6,795,657 | 2011-07-22T20:20:00.000 | 0 | 0 | 1 | 0 | 0 | python,indexing,numpy,slice | 0 | 6,795,732 | 0 | 5 | 0 | false | 0 | 0 | I think you want to just do myslice = slice(1,2) to for example define a slice that will return the 2nd element (i.e. myarray[myslice] == myarray[1:2]) | 1 | 5 | 1 | 0 | In Numpy (and Python in general, I suppose), how does one store a slice-index, such as (...,0,:), in order to pass it around and apply it to various arrays? It would be nice to, say, be able to pass a slice-index to and from functions. | Numpy: arr[...,0,:] works. But how do I store the data contained in the slice command (..., 0, :)? | 0 | 0 | 1 | 0 | 0 | 389 |
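The answer's slice(1, 2) suggestion generalizes: a slice object works on any sequence, and for the multi-dimensional (..., 0, :) case from the question, a tuple of Ellipsis and slice objects can be stored and passed around. NumPy accepts such a tuple as an index; that part is shown only as a comment here, since the runnable sketch sticks to plain lists:

```python
# A slice object is a first-class value you can store and pass around.
myslice = slice(1, 2)
myarray = [10, 20, 30, 40]
assert myarray[myslice] == myarray[1:2] == [20]

# For the (..., 0, :) case, store the equivalent tuple of index objects:
idx = (Ellipsis, 0, slice(None))
# With NumPy (not imported here): arr[idx] is the same as arr[..., 0, :]

print(myarray[myslice])
```

Any function can accept such a slice or index tuple as an argument and apply it to whatever array it is given.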
6,802,119 | 2011-07-23T17:35:00.000 | 0 | 1 | 0 | 0 | 1 | python,pylint | 1 | 64,587,242 | 0 | 2 | 0 | false | 0 | 0 | Run pylint on dev branch, get x errors
Run pylint on master branch, get y errors
If x > y, the dev branch has introduced new errors
You can do the above in your CI process, before code is merged to master | 2 | 8 | 0 | 0 | Does anybody know how to distinguish new errors (those that were found during the latest Pylint execution) from old errors (those that were already found during previous executions) in the Pylint report?
I'm using Pylint in one of my projects, and the project is pretty big. Pylint reports pretty many errors (even though I disabled lots of them in the rcfile). While I fix these errors with time, it is also important to not introduce new ones. But Pylint HTML and "parseable" reports don't distinguish new errors from those that were identified previously, even though I run Pylint with persistent=yes option.
As for now - I compare old and new reports manually. What would be really nice though, is if Pylint could highlight somehow those error messages which were found on the latest run, but were not found on a previous one. Is it possible to do so using Pylint or existing tools or something? Because if not - it seems I will end up writing my own comparison and report generation. | Pylint - distinguish new errors from old ones | 1 | 0 | 1 | 0 | 0 | 925
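The comparison step that the answers describe (and that the asker currently does by hand) boils down to a set difference between two report files. A rough sketch, assuming pylint's one-message-per-line "parseable" output; the path:line prefix shape is an assumption, and line numbers are stripped before comparing so that code shifting between runs doesn't produce false positives:

```python
def new_messages(old_report_lines, new_report_lines):
    """Return messages present in the new report but not the old one."""
    def normalize(line):
        # Drop the line number from 'path:line: message' so that moved
        # code doesn't look like a new error.
        parts = line.split(":", 2)
        if len(parts) == 3:
            return parts[0] + ":" + parts[2]
        return line
    old = {normalize(l) for l in old_report_lines}
    return [l for l in new_report_lines if normalize(l) not in old]

old = ["mod.py:10: [W0611] Unused import os"]
new = ["mod.py:12: [W0611] Unused import os",
       "mod.py:20: [E1101] Instance has no 'x' member"]
print(new_messages(old, new))   # only the E1101 message is new
```

Run in CI against the previous report, a non-empty result can fail the build, which automates the "don't introduce new ones" goal.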
6,802,119 | 2011-07-23T17:35:00.000 | 4 | 1 | 0 | 0 | 1 | python,pylint | 1 | 6,805,524 | 0 | 2 | 0 | true | 0 | 0 | Two basic approaches. Fix errors as they appear so that there will be no old ones. Or, if you have no intention of fixing certain types of lint errors, tell lint to stop reporting them.
If you have a lot of files it would be a good idea to get a lint report for each file separately, commit the lint reports to revision control like svn, and then use the revision control system's diff utility to separate new lint errors from pre-existing ones. The reason for separate reports for each .py file is to make it easier to read the diff output.
If you are on Linux, vim -d oldfile newfile is a nice way to read diff. If you are on Windows then just use the diff capability built into TortoiseSVN. | 2 | 8 | 0 | 0 | Does anybody know how to distinguish new errors (those that were found during the latest Pylint execution) from old errors (those that were already found during previous executions) in the Pylint report?
I'm using Pylint in one of my projects, and the project is pretty big. Pylint reports pretty many errors (even though I disabled lots of them in the rcfile). While I fix these errors with time, it is also important to not introduce new ones. But Pylint HTML and "parseable" reports don't distinguish new errors from those that were identified previously, even though I run Pylint with persistent=yes option.
As for now - I compare old and new reports manually. What would be really nice though, is if Pylint could highlight somehow those error messages which were found on the latest run, but were not found on a previous one. Is it possible to do so using Pylint or existing tools or something? Because if not - it seems I will end up writing my own comparison and report generation. | Pylint - distinguish new errors from old ones | 1 | 1.2 | 1 | 0 | 0 | 925
6,802,638 | 2011-07-23T19:06:00.000 | 0 | 1 | 0 | 0 | 0 | python,email,queue,scheduling | 0 | 6,806,145 | 0 | 3 | 0 | false | 0 | 0 | In my app, I insert emails into a DB table, and I have a Python script running under cron that checks this table, sends each email, and updates the record as sent. | 2 | 0 | 0 | 0 | I have a Pyramid web application that needs to send emails such as confirmation emails after registration, newsletters and so forth. I know how to send emails using smtplib in python and I decided on an smtp service (I think sendgrid will do the trick).
The real problem is the scheduling and delay sending of the emails - for example, when a user registers, the email is to be sent on the form post view. But, I don't want to block the request, and therefore would like to "schedule" the email in a non-blocking way.
Other than implementing this myself (probably with a DB and a worker), is there an existing solution to email queue and scheduling?
Thanks! | Best way to do email scheduling on a python web application? | 1 | 0 | 1 | 0 | 0 | 493 |
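The DB-table-plus-cron pattern from the first answer can be sketched with sqlite3 from the standard library. The table layout and function names here are made up for illustration, and the send step is a placeholder for the real smtplib/SendGrid call:

```python
import sqlite3

def enqueue(conn, recipient, body):
    # The web request only inserts a row, so it never blocks on SMTP.
    conn.execute(
        "INSERT INTO outbox (recipient, body, sent) VALUES (?, ?, 0)",
        (recipient, body))
    conn.commit()

def send_pending(conn, send=lambda to, body: None):
    # The cron job calls this periodically: send unsent rows, mark them.
    rows = conn.execute(
        "SELECT id, recipient, body FROM outbox WHERE sent = 0").fetchall()
    for row_id, to, body in rows:
        send(to, body)  # real code: smtplib/SendGrid plus error handling
        conn.execute("UPDATE outbox SET sent = 1 WHERE id = ?", (row_id,))
    conn.commit()
    return len(rows)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE outbox (id INTEGER PRIMARY KEY, "
             "recipient TEXT, body TEXT, sent INTEGER)")
enqueue(conn, "a@example.com", "Welcome!")
print(send_pending(conn))
```

Scheduled sends fall out of the same design by adding a send_after timestamp column and filtering on it in the SELECT.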
6,802,638 | 2011-07-23T19:06:00.000 | 0 | 1 | 0 | 0 | 0 | python,email,queue,scheduling | 0 | 6,802,910 | 0 | 3 | 0 | false | 0 | 0 | The existing solution to which you refer is to run your own SMTP server on the machine, bound only to localhost to prevent any other machines from connecting to it. Since you're the only one using it, submitting a message to it should be close to instantaneous, and the server will handle queuing, retries, etc. If you are running on a UNIX/Linux box, there's probably already such a server installed. | 2 | 0 | 0 | 0 | I have a Pyramid web application that needs to send emails such as confirmation emails after registration, newsletters and so forth. I know how to send emails using smtplib in python and I decided on an smtp service (I think sendgrid will do the trick).
The real problem is the scheduling and delay sending of the emails - for example, when a user registers, the email is to be sent on the form post view. But, I don't want to block the request, and therefore would like to "schedule" the email in a non-blocking way.
Other than implementing this myself (probably with a DB and a worker), is there an existing solution to email queue and scheduling?
Thanks! | Best way to do email scheduling on a python web application? | 1 | 0 | 1 | 0 | 0 | 493 |
6,806,266 | 2011-07-24T10:30:00.000 | 1 | 1 | 1 | 0 | 0 | python,git | 0 | 6,806,465 | 0 | 4 | 0 | false | 0 | 0 | I really advise using the command-line git directly; git-python is for macros or complicated things, not just for pulling, pushing or cloning :) | 1 | 2 | 0 | 0 | I am working on code with which I would like to retrieve the commits from a repository on GitHub. I am not entirely sure how to do such a thing; I got git-python, but most of the APIs are for opening a local git repository on the same file system.
Can someone advise?
regards, | git-python get commit feed from a repository | 0 | 0.049958 | 1 | 0 | 0 | 10,402 |
6,820,856 | 2011-07-25T18:49:00.000 | 1 | 0 | 1 | 1 | 0 | python,winapi,subprocess | 0 | 6,820,974 | 0 | 1 | 0 | true | 0 | 0 | If you want to wait for a spawned process, then use subprocess.Popen and then either wait or communicate. start is AFAIR a shell construct, not a real exec (so you'd have to use shell = True — but that still wouldn't do what you want). | 1 | 0 | 0 | 0 | I've used Python's subprocess.call() before, but how do you get it to act like the Windows START /WAIT myprogram?
I've tried subprocess.call(['start', '/wait', 'myprogram.exe']) but it can't find start and neither can I. | python: executing "start /wait someprocess" | 0 | 1.2 | 1 | 0 | 0 | 127 |
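A portable sketch of the Popen-and-wait approach from the answer, using sys.executable as a stand-in for myprogram.exe (since start is a cmd.exe built-in rather than a real program on PATH, which is why subprocess can't find it):

```python
import subprocess
import sys

# Launch a child process and block until it exits, the equivalent of
# START /WAIT without going through the cmd.exe 'start' built-in.
proc = subprocess.Popen([sys.executable, "-c", "print('child done')"])
returncode = proc.wait()        # blocks until the child exits
print("exit code:", returncode)

# subprocess.call() does the same Popen + wait in one step:
rc = subprocess.call([sys.executable, "-c", "import sys; sys.exit(3)"])
print("call returned:", rc)
```

If the cmd.exe start behavior is really needed (e.g. for document associations), shell=True would be required, but for plain "run and wait" the direct form above is simpler and cross-platform.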
6,823,316 | 2011-07-25T22:47:00.000 | 3 | 1 | 1 | 0 | 0 | python,iis | 0 | 25,462,179 | 0 | 4 | 0 | false | 1 | 0 | Just make sure the path to the directory holding the CGI scripts doesn't have spaces or an & in it.
I tried lots of things for many days and nothing worked; then I changed the path and it worked.
UPDATE:
If it has spaces, put quotes around the path, but not the %s %s
like this:
"C:\Program Files\Python36\python.exe" %s %s | 1 | 66 | 0 | 0 | I've got a background in PHP, dotNet and am charmed by Python. I want to transpose functionality from PHP to Python step by step, running bits and pieces side-by-side. During this transition, which could take 2 years since the app is enormous, I am bound to IIS. I've got 15 years background of web-programming, including some C work in an ISAPI module on IIS which is the kind of work I don't want to dive into any more.
It seems Python just doesn't run well on IIS. I've struggled with FastCGI (not supported, just for PHP) and PyIsapie (badly documented, couldn't get it up and running). In the end I got it up and running with a HeliconZoo dll BUT:
My next problem is: how to debug/develop a site? In PHP you install a debugger and whenever you have a problem in your website, you just debug it, set a breakpoint, step through code, inspect watches and such. It seems to me this is the most rudimentary type of work for a developer or troubleshooter. I've bought WingIDE which is an excellent tool and debugger but it can't hook into the Python instance in the IIS process for some reason so no debugging. I noticed Helicon starts Python with -O so I even recompiled Python to ignore this flag altogether but my debugger (WingIDE) just won't come up.
I can set up a PHP 'hello world' website on IIS in half an hour including download time. I think I've spent about 120 hours or more getting this to work for Python to no avail. I've bought Programming Python and Learning Python which is about 3000 pages. And I've googled until I dropped.
I think Python is a great language but I'm on the verge of aborting my attempts. Is there anyone who can give me a step-by-step instruction on how to set this up on IIS7? | Python on IIS: how? | 0 | 0.148885 | 1 | 0 | 0 | 85,151 |
6,838,281 | 2011-07-27T00:44:00.000 | 0 | 0 | 0 | 0 | 0 | python,linux,flash,npapi | 0 | 6,839,048 | 0 | 1 | 0 | false | 0 | 0 | Your best bet (assuming basing your hosting code on open source software isn't an issue for you licensing-wise) is probably to look at the implementation of NPAPI host in WebKit, Chromium, and/or Gecko.
The Mozilla documentation for the browser side of NPAPI will help, but there are a lot of little corner cases where plugins expect certain behavior because some early browser did it, and now everyone who wants to support those plugins has to do the same thing even if it's not part of the standard; the only way to learn about those is to look at existing implementation. | 1 | 2 | 0 | 0 | The docs I've found on NPAPI plugins explain how to write plugins to be loaded by browsers, but how hard is it to write an application that loads existing NPAPI plugins?
(My ultimate goal here is to find a way to use SWF content inside an application written with Python and wxPython in Linux. I've accomplished this on Windows by using the comtypes module to load the ActiveX Flash player. For Linux, I'm thinking that wrapping the Gnash NPAPI plugin in some C/C++ code and making this a Python extension seems like it would work, albeit a bit convoluted...) | How can I load an NPAPI plugin in my own application? | 1 | 0 | 1 | 0 | 0 | 499 |
6,870,118 | 2011-07-29T07:51:00.000 | 2 | 0 | 0 | 0 | 0 | python,resize,tkinter,tk | 0 | 6,870,304 | 0 | 3 | 0 | false | 0 | 1 | You can use right-anchored geometry specifications by using a minus sign in the right place:
123x467-78+9
However, I don't know if this will work on Windows (the above is an X11 trick, and I don't know if it is implemented in the platform-compatibility layer or not); you might have to just calculate the new position given the projected size of the left side and use that. | 2 | 0 | 0 | 0 | I'm not sure on how to articulate this...
I have a Tkinter window, and I need to hide half of this window when a button is pressed.
However, I need the left-most side to be hidden, so that the window is now half the size it originally was,
and shows the right half of the original window.
All of Tkinter's resize functions span from the left side of the window.
Changing the geometry values can only show the left side whilst hiding the right;
I need the inverse.
Does anybody know how to go about doing this?
(I don't want the user to have to drag the window border,
I need the button to automate it).
Specs:
Python 2.7.1
Tkinter
Windows 7 | Resize Tkinter Window FROM THE RIGHT (python) | 0 | 0.132549 | 1 | 0 | 0 | 1,136 |
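If the X11-style minus-sign geometry from the answer above isn't honored on the target platform, the same effect can be computed by hand: shrink the width and shift x right by the amount removed, so the right edge stays put. A sketch of just that calculation (in real code old_x and old_width would come from root.winfo_x() and root.winfo_width()):

```python
def shrink_from_left(old_x, old_width, new_width):
    """Return the (x, width) that keeps the window's right edge fixed."""
    # right edge = old_x + old_width; keep it constant:
    new_x = old_x + (old_width - new_width)
    return new_x, new_width

# e.g. a 600px-wide window at x=100 halved to 300px:
x, w = shrink_from_left(100, 600, 300)
geometry = f"{w}x400+{x}+50"   # pass to root.geometry(...) in Tkinter
print(geometry)
```

The button callback would read the current geometry, run this calculation, and call root.geometry() with the result.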
6,870,118 | 2011-07-29T07:51:00.000 | 0 | 0 | 0 | 0 | 0 | python,resize,tkinter,tk | 0 | 6,954,676 | 0 | 3 | 0 | false | 0 | 1 | My IT teacher had a suggestion:
Add a Scrollbar out of sight,
and after resizing the root window,
force the scrollbar to scroll all the way to the right.
(So I guess I'd have to create a canvas,
pack all my widgets to the frame,
pack the frame to the canvas,
configure the canvas with the scrollbar?)
I'm not sure if there's a function to set the current position of the scrollbar (thus xview of the window),
but I'd imagine there is.
Have yet to implement this, but it looks promising.
Thanks for the suggestions! | 2 | 0 | 0 | 0 | I'm not sure on how to articulate this...
I have a Tkinter window, and I need to hide half of this window when a button is pressed.
However, I need the left-most side to be hidden, so that the window is now half the size it originally was,
and shows the right half of the original window.
All of Tkinter's resize functions span from the left side of the window.
Changing the geometry values can only show the left side whilst hiding the right;
I need the inverse.
Does anybody know how to go about doing this?
(I don't want the user to have to drag the window border,
I need the button to automate it).
Specs:
Python 2.7.1
Tkinter
Windows 7 | Resize Tkinter Window FROM THE RIGHT (python) | 0 | 0 | 1 | 0 | 0 | 1,136 |
6,874,214 | 2011-07-29T13:52:00.000 | 0 | 0 | 1 | 0 | 0 | python,eclipse,pydev,pylint | 0 | 7,062,993 | 0 | 5 | 0 | false | 0 | 0 | have you tried rebuilding your project? | 4 | 11 | 0 | 0 | I have pylint installed (works fine on the command line) and set up within Pydev in Eclipse.
Pylint is being triggered OK when I edit files, and is outputting to the Eclipse console.
But, the pylint warnings don't appear as marks in the editor margin (in the same way as compiler warnings and errors)
Newly-generated warnings don't appear in the Problems view either - there are some old ones showing, but they disappear if I re-save the relevant module.
I know this is possible as I've had it working previously - but how do I set this up?
Ticking or unticking "Redirect Pylint output to console?" doesn't seem to make any difference. | How to get pylint warnings to be marked in the Pydev Eclipse editor margin? | 1 | 0 | 1 | 0 | 0 | 3,051 |
6,874,214 | 2011-07-29T13:52:00.000 | -1 | 0 | 1 | 0 | 0 | python,eclipse,pydev,pylint | 0 | 7,064,520 | 0 | 5 | 0 | false | 0 | 0 | Only modules reachable through PYTHONPATH are passed to pylint, so you need to set your PYTHONPATH correctly in the project options. | 4 | 11 | 0 | 0 | I have pylint installed (works fine on the command line) and set up within Pydev in Eclipse.
Pylint is being triggered OK when I edit files, and is outputting to the Eclipse console.
But, the pylint warnings don't appear as marks in the editor margin (in the same way as compiler warnings and errors)
Newly-generated warnings don't appear in the Problems view either - there are some old ones showing, but they disappear if I re-save the relevant module.
I know this is possible as I've had it working previously - but how do I set this up?
Ticking or unticking "Redirect Pylint output to console?" doesn't seem to make any difference. | How to get pylint warnings to be marked in the Pydev Eclipse editor margin? | 1 | -0.039979 | 1 | 0 | 0 | 3,051 |
6,874,214 | 2011-07-29T13:52:00.000 | 2 | 0 | 1 | 0 | 0 | python,eclipse,pydev,pylint | 0 | 7,349,528 | 0 | 5 | 0 | false | 0 | 0 | I was having the same problem, and it turned out to be my pylint configuration file (~/.pylintrc by default). Be sure the output-format field is correct. It is under the [REPORTS] section, and the line should be:
output-format=text
If you've ever used pylint with another application (I do with emacs), it might say output-format=parseable. | 4 | 11 | 0 | 0 | I have pylint installed (works fine on the command line) and set up within Pydev in Eclipse.
Pylint is being triggered OK when I edit files, and is outputting to the Eclipse console.
But, the pylint warnings don't appear as marks in the editor margin (in the same way as compiler warnings and errors)
Newly-generated warnings don't appear in the Problems view either - there are some old ones showing, but they disappear if I re-save the relevant module.
I know this is possible as I've had it working previously - but how do I set this up?
Ticking or unticking "Redirect Pylint output to console?" doesn't seem to make any difference. | How to get pylint warnings to be marked in the Pydev Eclipse editor margin? | 1 | 0.07983 | 1 | 0 | 0 | 3,051 |
6,874,214 | 2011-07-29T13:52:00.000 | 3 | 0 | 1 | 0 | 0 | python,eclipse,pydev,pylint | 0 | 7,449,962 | 0 | 5 | 0 | true | 0 | 0 | I had this exact problem today, on a brand new system. I tracked down the cause, and it seems that PyDev refuses to pick up the messages from pylint 0.24.0, which was released on July 20, 2011.
Reverting to the previous version (pylint 0.23.0) seems to have solved the problem. For me, that involved removing everything from Python's Lib/site-packages directory that was related to pylint, and then running python setup.py install from the directory I'd extracted pylint 0.23.0 into. (Without deleting those files in the site-packages directory first, it kept using the new version.) But after both those steps, the messages started showing up in PyDev as expected.
You can check your pylint version with pylint --version from a shell prompt; if it shows 0.23.0 you're good to go. | 4 | 11 | 0 | 0 | I have pylint installed (works fine on the command line) and set up within Pydev in Eclipse.
Pylint is being triggered OK when I edit files, and is outputting to the Eclipse console.
But, the pylint warnings don't appear as marks in the editor margin (in the same way as compiler warnings and errors)
Newly-generated warnings don't appear in the Problems view either - there are some old ones showing, but they disappear if I re-save the relevant module.
I know this is possible as I've had it working previously - but how do I set this up?
Ticking or unticking "Redirect Pylint output to console?" doesn't seem to make any difference. | How to get pylint warnings to be marked in the Pydev Eclipse editor margin? | 1 | 1.2 | 1 | 0 | 0 | 3,051 |
6,875,599 | 2011-07-29T15:41:00.000 | 1 | 0 | 0 | 1 | 0 | python,python-2.7,socketserver | 0 | 68,489,995 | 0 | 5 | 0 | false | 0 | 0 | It seems that you can't use ForkingServer to share variables: each forked child works on a copy-on-write copy of the parent's memory, so when a process modifies a "shared" variable it only changes its own copy.
Change it to ThreadingServer and you'll be able to share global variables. | 1 | 24 | 0 | 0 | I would like to pass my database connection to the EchoHandler class, however I can't figure out how to do that or access the EchoHandler class at all.
class EchoHandler(SocketServer.StreamRequestHandler):
def handle(self):
print self.client_address, 'connected'
if __name__ == '__main__':
conn = MySQLdb.connect (host = "10.0.0.5", user = "user", passwd = "pass", db = "database")
SocketServer.ForkingTCPServer.allow_reuse_address = 1
server = SocketServer.ForkingTCPServer(('10.0.0.6', 4242), EchoHandler)
print "Server listening on localhost:4242..."
try:
server.allow_reuse_address
server.serve_forever()
except KeyboardInterrupt:
print "\nbailing..." | With python socketserver how can I pass a variable to the constructor of the handler class | 0 | 0.039979 | 1 | 0 | 0 | 13,253 |
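Another common pattern, beyond switching server classes: subclass the server, stash the shared object on it (a plain string stands in for the database connection here), and read it in the handler via self.server. A sketch with Python 3's socketserver (the module the question imports as SocketServer on Python 2):

```python
import socket
import socketserver
import threading

class EchoHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Handlers can reach anything stored on the server instance.
        self.wfile.write(self.server.shared_conn.encode())

class EchoServer(socketserver.ThreadingTCPServer):
    allow_reuse_address = True
    daemon_threads = True

    def __init__(self, addr, handler_cls, shared_conn):
        self.shared_conn = shared_conn  # e.g. a MySQLdb connection
        super().__init__(addr, handler_cls)

server = EchoServer(("127.0.0.1", 0), EchoHandler, "hello from conn")
threading.Thread(target=server.serve_forever, daemon=True).start()

with socket.create_connection(server.server_address) as sock:
    reply = sock.recv(1024).decode()
print(reply)  # -> hello from conn

server.shutdown()
server.server_close()
```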
6,877,553 | 2011-07-29T18:40:00.000 | 0 | 0 | 0 | 0 | 0 | python,dns,dnspython | 0 | 6,890,137 | 0 | 1 | 0 | false | 1 | 0 | Do you really have to do it with DNSPython? Is this a custom name server?
The typical way to do it (with BIND, for example) is by pre-signing the zone file. A DNSSEC RRSIG does not depend on the connection parameters, so there is no real need for on-the-fly signing. Also, things like NSEC are easier to handle if you pre-sign. | 1 | 3 | 0 | 0 | I'm trying to DNSSEC sign an RRSET, however I am not able to find any references on how to do so using DNSPython. Yes, it has dns.dnssec.validate_rrsig(), but I want to DNSSEC sign an rrset; how can this be done?
I've been poring over the RFCs, however I'm obviously lacking something in order to make it work. | DNSSEC Sign RRSET using DNSPython | 0 | 0 | 1 | 0 | 0 | 884
6,879,815 | 2011-07-29T22:51:00.000 | -1 | 0 | 0 | 0 | 0 | python,django,django-admin | 0 | 6,880,287 | 0 | 2 | 0 | false | 1 | 0 | I'm not too sure, but admin forms don't reach to the commit point unless they meet clean() requirements. After that I guess everything will be committed. This behavior should be sufficient for the default forms in admin. However, for more complex forms, you can create your custom admin form and I'm pretty sure you can define whether or not you want to commit on success or on save. | 1 | 6 | 0 | 1 | I just wonder how is transaction managed in django's admin commands. Commit on save? Commit on success? I can't find related info from official documents. | How is transaction managed in Django's admin commands? | 0 | -0.099668 | 1 | 0 | 0 | 1,967 |
6,882,218 | 2011-07-30T09:17:00.000 | 1 | 1 | 0 | 0 | 1 | android,python,sl4a | 0 | 10,452,996 | 0 | 2 | 0 | true | 1 | 1 | I used a roundabout method to circumvent the problem. First, the Python script needs to be modified to look for a text file containing the attributes. Then, whenever I need to start the script, I have to push the text file containing the attributes and then start the script. | 1 | 2 | 0 | 0 | I am trying to get a Python script which I normally run on my PC to run on my Android phone (HTC Hero). I have SL4A running on my phone and have made a few tweaks to the Python script so that it does now run. The problem that I am having is how to pass parameters to the script. I have tried creating a sh script in SL4A which called the Python file with the parameters, but this didn't work. I have also tried using the app TaskBomb to call through to the Python file, but again this doesn't work when parameters are supplied. When no params are supplied the file loads correctly, but when I add -h to the filename it says it can no longer find the Python file I am calling.
Is anybody able to provide assistance with how to do this? | Passing parameters to a python script using SL4A on Android | 0 | 1.2 | 1 | 0 | 0 | 2,086
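A sketch of that workaround (the file name params.json and the helper load_params are made up for illustration): the script checks for a pushed parameters file and falls back to sys.argv when there is none:

```python
import json
import os
import sys
import tempfile

def load_params(path):
    """Read parameters from a pushed text file if present, else use argv."""
    if os.path.exists(path):
        with open(path) as fh:
            return json.load(fh)
    return sys.argv[1:]

# Simulate pushing a params file next to the script, then reading it back.
params_file = os.path.join(tempfile.mkdtemp(), "params.json")
with open(params_file, "w") as fh:
    json.dump(["-h", "--verbose"], fh)

print(load_params(params_file))  # -> ['-h', '--verbose']
```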
6,892,408 | 2011-07-31T21:22:00.000 | 1 | 0 | 0 | 1 | 0 | python,ruby-on-rails-3,google-app-engine,data-migration | 0 | 6,892,479 | 0 | 2 | 0 | false | 1 | 0 | You can't change the name of an entity. It's not permitted.
If you change the name of an attribute in a model (please don't call them columns), AppEngine will ignore the old data in the old field, and return None for the new field. | 1 | 5 | 0 | 0 | I've done a lot of development in rails, and am looking into developing projects using python & app engine.
From the demo project and what I've seen so far, I've got a question/concern about app engine projects:
How is data migration handled in app-engine? For example, if I change the name of an entity/table (ex: Texts to Documents), or change a column in an existing table (ex: age to dob) - how does it handle old data when this happens?
Thanks! | Data Migrations and AppEngine | 0 | 0.099668 | 1 | 0 | 0 | 608 |
6,896,942 | 2011-08-01T10:10:00.000 | 1 | 1 | 1 | 0 | 0 | php,asp.net,python,django | 0 | 6,899,782 | 0 | 3 | 1 | false | 1 | 0 | Python has a UI toolkit like VB's: it's called PyGTK (pygtk.org). I suggest you learn Python; it's the easiest to learn, and you don't have to write as much as you would in .NET.
PHP is powerful, and you have to learn it, you just have to. But for big, complicated web apps, I would rather choose Ruby on Rails or, even better, Django.
Which is the best? "The best" is just an opinion: ASP.NET developers think ASP.NET is the best, and I think Python is the best. It's an argument that will never end.
I'm a student and am looking to develop a web app. I have little experience with all mentioned and like different aspects of each. I like the Visual Web Dev that can be used to create ASP.NET sites, a aspect that isn't there for PHP or Python..
I'm also looking to learn and develop with the language that will be most beneficial for uni, and work in the future - the language that is used and respected most in industry. I've heard mixed views about this. Because I'd want to know the language that is most used, most in demand, and has the longest future.
I'd also like the ability to make things fast, they all come with frameworks.. But would it be better me learning things from scratch and understanding how it works that use a framework? Which language has the most frameworks and support?
I see that a lot of industries (the ones I've looked at) use ASP.NET. But it seems (remember no real experience) to be easier (especially as a GUI can be used) so does that make it less valuable.
Basically - which language do you think would be best for me based on this? WHich would you recommend based on the advantages and disadvantages of each and ease of fast efficient and powerful development?
Thanks | Python + Django vs. ASP.NET + C#/VB vs PHP? | 0 | 0.066568 | 1 | 0 | 0 | 3,202 |
6,896,942 | 2011-08-01T10:10:00.000 | 1 | 1 | 1 | 0 | 0 | php,asp.net,python,django | 0 | 6,899,152 | 0 | 3 | 1 | true | 1 | 0 | This question is really too open-ended. There is no one true language, otherwise we'd all be using it. As you've seen, they all have merit. You didn't mention Java, which still holds a lot of clout in enterprise computing.
The only answer is to pick one you like and get good at it. You can spend years wishing you'd picked one of the others. Also, if you get good at one and have a firm understanding of the basics, you'll later find it easy(ish) to pick up another one.
For what it's worth. my money's on .net. But that's just me.
Simon | 3 | 2 | 0 | 0 | I know similar questions to this have been asked before, but I'm looking for a more specific answer so here it goes:
I'm a student and am looking to develop a web app. I have little experience with all mentioned and like different aspects of each. I like the Visual Web Dev that can be used to create ASP.NET sites, a aspect that isn't there for PHP or Python..
I'm also looking to learn and develop with the language that will be most beneficial for uni, and work in the future - the language that is used and respected most in industry. I've heard mixed views about this. Because I'd want to know the language that is most used, most in demand, and has the longest future.
I'd also like the ability to make things fast, they all come with frameworks.. But would it be better me learning things from scratch and understanding how it works that use a framework? Which language has the most frameworks and support?
I see that a lot of industries (the ones I've looked at) use ASP.NET. But it seems (remember no real experience) to be easier (especially as a GUI can be used) so does that make it less valuable.
Basically - which language do you think would be best for me based on this? WHich would you recommend based on the advantages and disadvantages of each and ease of fast efficient and powerful development?
Thanks | Python + Django vs. ASP.NET + C#/VB vs PHP? | 0 | 1.2 | 1 | 0 | 0 | 3,202 |
6,896,942 | 2011-08-01T10:10:00.000 | 0 | 1 | 1 | 0 | 0 | php,asp.net,python,django | 0 | 6,897,150 | 0 | 3 | 1 | false | 1 | 0 | I would recomend looking at asp.net mvc and scaffolding. That way you can create good applications quick and effective. | 3 | 2 | 0 | 0 | I know similar questions to this have been asked before, but I'm looking for a more specific answer so here it goes:
I'm a student and am looking to develop a web app. I have little experience with all mentioned and like different aspects of each. I like the Visual Web Dev that can be used to create ASP.NET sites, a aspect that isn't there for PHP or Python..
I'm also looking to learn and develop with the language that will be most beneficial for uni, and work in the future - the language that is used and respected most in industry. I've heard mixed views about this. Because I'd want to know the language that is most used, most in demand, and has the longest future.
I'd also like the ability to make things fast, they all come with frameworks.. But would it be better me learning things from scratch and understanding how it works that use a framework? Which language has the most frameworks and support?
I see that a lot of industries (the ones I've looked at) use ASP.NET. But it seems (remember no real experience) to be easier (especially as a GUI can be used) so does that make it less valuable.
Basically - which language do you think would be best for me based on this? WHich would you recommend based on the advantages and disadvantages of each and ease of fast efficient and powerful development?
Thanks | Python + Django vs. ASP.NET + C#/VB vs PHP? | 0 | 0 | 1 | 0 | 0 | 3,202 |
6,904,069 | 2011-08-01T20:28:00.000 | 4 | 0 | 1 | 1 | 1 | python | 0 | 6,904,116 | 0 | 2 | 0 | false | 0 | 0 | If you're on Python 2.6 or higher, you can simply use shutil.copytree and its ignore argument. Since it gets passed all the files and directories, you can call your function from there, unless you want it to be called right after the file is copied.
If that is the case, the easiest thing is to copy and modify the copytree code. | 1 | 3 | 0 | 0 | I want to copy a directory to another directory recursively. I also want to ignore some files (eg. all hidden files; everything starting with ".") and then run a function on all the other files (after copying them). This is simple to do in the shell, but I need a Python script.
I tried using shutil.copytree, which has ignore support, but I don't know how to have it do a function on each file copied. I might also need to check some other condition when copying so I can't just run the function on all the files once they are copied over. I also tried looking at os.walk but I couldn't figure it out. | Copying files recursively with skipping some directories in Python? | 0 | 0.379949 | 1 | 0 | 0 | 2,031 |
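Putting the answer's two pieces together, shutil.copytree with an ignore hook for the dotfiles and then os.walk over the copy to run a function on each file, might look like this:

```python
import os
import shutil
import tempfile

src = tempfile.mkdtemp()
dst = os.path.join(tempfile.mkdtemp(), "copy")

# Build a tiny tree: one normal file, one hidden file.
for name, text in [("keep.txt", "data"), (".hidden", "secret")]:
    with open(os.path.join(src, name), "w") as fh:
        fh.write(text)

# Copy everything except entries starting with a dot.
shutil.copytree(src, dst, ignore=shutil.ignore_patterns(".*"))

# Second pass: run a function on every copied file.
processed = []
for root, dirs, files in os.walk(dst):
    for name in files:
        processed.append(name)  # replace with your per-file function

print(sorted(processed))  # -> ['keep.txt']
```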
6,908,827 | 2011-08-02T07:52:00.000 | 1 | 0 | 0 | 0 | 0 | python,google-app-engine | 0 | 6,909,612 | 0 | 3 | 0 | false | 1 | 0 | I don't really understand your question. If you want an automatically-generated key name, just leave out the key when you instantiate the object - one will be automatically assigned when you call put(). | 1 | 0 | 0 | 0 | I want to insert new entities programatically as well as manually. For this I was thinking about using key_name to uniquely identify an entity.
The problem is that I don't know how to get the model to generate a new unique key name when I create the entity.
On the other hand, I cannot create the ID (which is unique across data store) manually.
How can I do "create unique key name if provided value is None"?
Thanks for your help! | Fun with GAE: using key_name as PK? | 0 | 0.066568 | 1 | 0 | 0 | 178 |
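The datastore API aside, the "use the provided key_name, otherwise generate a unique one" step can be plain Python (the uuid-based scheme below is just one possible choice, not GAE's own):

```python
import uuid

def make_key_name(provided=None):
    """Return the caller-supplied key_name, or generate a unique one."""
    if provided is not None:
        return provided
    return "k" + uuid.uuid4().hex  # prefix avoids purely-numeric names

print(make_key_name("customer-42"))  # -> customer-42
print(make_key_name())               # e.g. 'k3f2c0a...' (random)
```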
6,909,143 | 2011-08-02T08:23:00.000 | 0 | 0 | 0 | 0 | 1 | django,apache,media,mod-python | 0 | 6,909,236 | 0 | 2 | 0 | false | 1 | 0 | It is surely a pro, since Django serves requests faster without having to deal with the media.
A con is that, if and when you edit the media, you also need to restart Apache for the media to refresh.
update based on your comment:
You can of course easily do it. One simple way I practice this is by using nginx: symlink the media folder into nginx's sites-enabled and run nginx on port 80 (or any other).
You can set the MEDIA_URL in your settings, where you point it to the URL with the appropriate port. | 2 | 0 | 0 | 0 | For some reason I can't figure out, other than the 'stupid' errors that keep creeping up when I try to access media files (files uploaded by the user) in my Django app, why I just can't serve media files!
In this particular case, all I want, for example, is to be able to serve up images to the front end that have been uploaded. My app correctly serves static files via /static/, but when I try to serve my /site_media/ files hell breaks loose! What could I be doing wrong?
So, after realizing that Django wasn't essentially crafted to actually handle media files, I decided to resort to using Apache via the recommended mod_python option like it is recommended to do in production. But I've never done this before, and am wondering whether this is worth the trouble on the development server.
Well, I know eventually I have to go down this path when I go production, and so will still have to learn how to do this, but what are the pros and cons for this route on the development server? | Serve Media Files Via Apache on a Django Development Server? | 0 | 0 | 1 | 0 | 0 | 859 |
6,909,143 | 2011-08-02T08:23:00.000 | 0 | 0 | 0 | 0 | 1 | django,apache,media,mod-python | 0 | 6,909,511 | 0 | 2 | 0 | false | 1 | 0 | First, mod_python is not recommended. In fact, it's specifically recommended against. Use mod_wsgi instead.
Secondly, we have no possible way of telling what you're doing wrong when serving static media via the dev server, because you have provided no code or details of your setup.
Finally, there is no reason why you can't use Apache - or even better, a lightweight server such as nginx - and point it at your static directory only. Then, set STATIC_URL in your settings.py to the address served by that server. It makes no difference what port it is on while you're in development. | 2 | 0 | 0 | 0 | For some reason I can't figure out, other than the 'stupid' errors that keep creeping up when I try to access media files (files uploaded by the user) in my Django app, why I just can't serve media files!
In this particular case, all I want, for example, is to be able to serve up images to the front end that have been uploaded. My app correctly serves static files via /static/, but when I try to serve my /site_media/ files hell breaks loose! What could I be doing wrong?
So, after realizing that Django wasn't essentially crafted to actually handle media files, I decided to resort to using Apache via the recommended mod_python option like it is recommended to do in production. But I've never done this before, and am wondering whether this is worth the trouble on the development server.
Well, I know eventually I have to go down this path when I go production, and so will still have to learn how to do this, but what are the pros and cons for this route on the development server? | Serve Media Files Via Apache on a Django Development Server? | 0 | 0 | 1 | 0 | 0 | 859 |
6,917,330 | 2011-08-02T19:16:00.000 | 4 | 0 | 0 | 0 | 0 | python,byte,hex,checksum,crc | 0 | 6,922,436 | 0 | 1 | 0 | false | 1 | 0 | Thanks !!!
the following two solutions worked;
checksum = sum(map(ord, b))
or
checksum = sum(bytearray(b))
/ J | 1 | 0 | 0 | 0 | I am creating a Hex file using python and at the end I need to add a checksum that consists of sum of all hex values so that checksum = Byte 0x000000 + Byte 0x000001 +…+ Byte 0x27DAFF (not including this 4 bytes). This checksum shall then be written to buffer at position 0x27DB00-0x27DB03 as unsigned long.
Any good ideas for how to get this done fast? I am running Python 2.7.
As background: I start by creating a buffer using ctypes, then write lots and lots of hex data to the buffer, then create a cStringIO from the buffer and write this string object to a file_obj, which happens to be a Django HTTP response (i.e. it returns the hex file as a downloadable file), so any smart things involving the buffer would be appreciated !!! :-)
/ Jens | How to generate checksum from hex byte using python | 0 | 0.664037 | 1 | 0 | 0 | 7,142 |
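Combining the two pieces, summing the bytes and packing the result as a 4-byte unsigned long for offset 0x27DB00, could look like this (little-endian '<L' is an assumption; check what the consumer of the file expects):

```python
import struct

data = bytes(range(8)) * 2           # stand-in for the real buffer contents
checksum = sum(bytearray(data))      # works on both Python 2 and 3

# Mask to 32 bits, then pack as an unsigned long for bytes 0x27DB00-0x27DB03.
packed = struct.pack("<L", checksum & 0xFFFFFFFF)
print(checksum, len(packed))  # -> 56 4
```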
6,928,110 | 2011-08-03T14:28:00.000 | 4 | 0 | 0 | 1 | 0 | python,gcc,setuptools,setup.py,distutils | 0 | 60,766,834 | 1 | 3 | 0 | false | 0 | 0 | I ran into this problem when I needed to fully remove a flag (-pipe) so I could compile SciPy on a low-memory system. I found that, as a hack, I could remove unwanted flags by editing /usr/lib/pythonN.N/_sysconfigdata.py to remove every instance of that flag, where N.N is your Python version. There are a lot of duplicates, and I'm not sure which are actually used by setup.py. | 1 | 67 | 0 | 0 | I understand that setup.py uses the same CFLAGS that were used to build Python. I have a single C extension of ours that is segfaulting. I need to build it without -O2 because -O2 is optimizing out some values and code so that the core files are not sufficient to pin down the problem.
I just need to modify setup.py so that -O2 is not used.
I've read distutils documentation, in particular distutils.ccompiler and distutils.unixccompiler and see how to add flags and libs and includes, but not how to modify the default GCC flags.
Specifically, this is for a legacy product on Python 2.5.1 with a bunch of backports (Fedora 8, yes, I know...). No, I cannot change the OS or Python version and I cannot, without great problems, recompile Python. I just need to build a one-off of the C extension for one customer whose environment is the only one segfaulting. | How may I override the compiler (GCC) flags that setup.py uses by default? | 0 | 0.26052 | 1 | 0 | 0 | 35,921
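One way to do this from setup.py itself (rather than editing _sysconfigdata.py) is to rewrite the cached config vars before calling setup(). The helper below is pure string handling; the get_config_vars() part in the comment is the usual distutils trick, but verify the variable names (CFLAGS, OPT) on your Python version:

```python
def strip_flag(flags, flag):
    """Remove every occurrence of one compiler flag from a flags string."""
    return " ".join(part for part in flags.split() if part != flag)

# In setup.py you would then do something like (untested sketch):
#   from distutils.sysconfig import get_config_vars
#   cfg = get_config_vars()
#   for key in ("CFLAGS", "OPT"):
#       if cfg.get(key):
#           cfg[key] = strip_flag(cfg[key], "-O2") + " -O0"
print(strip_flag("-DNDEBUG -g -O2 -Wall -O2", "-O2"))  # -> -DNDEBUG -g -Wall
```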
6,928,566 | 2011-08-03T14:59:00.000 | 3 | 1 | 0 | 1 | 0 | python,udev | 0 | 6,930,448 | 0 | 2 | 0 | false | 0 | 0 | With pyudev, each device object provides a dictionary-like interface for its attributes. You can list them all with device.keys(); e.g. the UUID for block devices is dev['ID_FS_UUID']. | 1 | 0 | 0 | 0 | I want to get the mount node of a USB mass-storage device, like /media/its-uuid
In pyudev, the Device class has some general attributes, but not the UUID or mount node.
How can I do it?
Thanks for any help | how to get uuid of a device using udev | 0 | 0.291313 | 1 | 0 | 0 | 2,348
6,936,252 | 2011-08-04T04:37:00.000 | 0 | 1 | 0 | 0 | 0 | python,django,email | 0 | 6,937,662 | 0 | 3 | 0 | false | 1 | 0 | You have no other way than to generate a confirmation URL in your message, like most site registrations do. If a person really wants to register on your website, he will certainly click confirm in whatever email client he uses. Otherwise it's a spam/scam email.
There is no way you can do it and know for sure that it's a live e-mail...
Besides, there are two other ways mentioned by my colleagues... But they are based on "unsecure" settings in old email clients rather than a sure method... IMHO. | 1 | 1 | 0 | 0 | I have a simple mail sending application which runs in Python using Django. After sending an email, is there a way to see if the recipient has opened it or if it is still unopened?
If so, how can I do it? | Checking the status of a sent email in django | 1 | 0.066568 | 1 | 0 | 0 | 2,684
6,936,252 | 2011-08-04T04:37:00.000 | 3 | 1 | 0 | 0 | 0 | python,django,email | 0 | 6,936,289 | 0 | 3 | 0 | false | 1 | 0 | You can try setting the "require return receipt" flag on the email you are sending. But a lot of people (I know I do) ignore that return receipt, so you will never find out in those cases.
If you are asking for a 100% certain method of finding out if the recipient read his/her email, then the straight answer is: NO, you can't do that. | 2 | 1 | 0 | 0 | I have a simple mail sending application which runs in Python using Django. After sending an email, is there a way to see if the recipient has opened it or if it is still unopened?
If so, how can I do it? | Checking the status of a sent email in django | 1 | 0.197375 | 1 | 0 | 0 | 2,684
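The "return receipt flag" mentioned above is the Disposition-Notification-To header (RFC 3798 message disposition notifications). A sketch with the stdlib email package; with Django you would pass the same header through the headers argument of django.core.mail.EmailMessage. Either way, clients are free to ignore it:

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.com"
msg["Subject"] = "Hello"
# Ask the recipient's client for a read receipt; best-effort only.
msg["Disposition-Notification-To"] = "sender@example.com"
msg.set_content("Did you read this?")

print(msg["Disposition-Notification-To"])  # -> sender@example.com
```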
6,941,231 | 2011-08-04T12:15:00.000 | 1 | 0 | 1 | 0 | 0 | python,pdf,pypdf | 0 | 6,941,309 | 0 | 1 | 0 | false | 0 | 0 | Have a look at pdftk. It is a toolbox for working with pdf files. You can integrate it into python with the subprocess module. | 1 | 2 | 0 | 0 | I have one pdf file. I want to split that file into multiple pdf files
by some specific word from that file. How can I do that in Python? | how to split pdf file into multiple pdf files by specific word? | 0 | 0.197375 | 1 | 0 | 0 | 457
6,967,076 | 2011-08-06T13:22:00.000 | 0 | 0 | 1 | 0 | 0 | python,class,private | 0 | 68,909,648 | 0 | 2 | 0 | false | 0 | 0 | Here's a way to declare private variables: overload the __setattr__ method (and the many other attribute-access methods); inside it, check whether the currently executing method belongs to the class itself. If it does, let the assignment pass; otherwise, raise an exception or apply your own custom error-handling logic. (Tip: use the inspect.stack function.) | 1 | 1 | 0 | 0 | When trying to access __variables from a class, the parser assumes the 2 underscores are private relative to the current class. Notice how an unrelated function gets a "private" variable.
Is this a bug?
>>> def f(): pass
...
>>> class A:
... def g(self):
... f.__x = 1
... def h():
... pass
... h.__y = 2
... return h
...
>>> z = A().g()
>>> dir(z)
['_A__y', '__call__', '__class__', '__delattr__', '__dict__', '__doc__', '__get__', '__getattribute__', '__hash__', '__init__', '__module__', '__name__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__str__', 'func_closure', 'func_code', 'func_defaults', 'func_dict', 'func_doc', 'func_globals', 'func_name']
>>> dir(f)
['_A__x', '__call__', '__class__', '__delattr__', '__dict__', '__doc__', '__get__', '__getattribute__', '__hash__', '__init__', '__module__', '__name__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__str__', 'func_closure', 'func_code', 'func_defaults', 'func_dict', 'func_doc', 'func_globals', 'func_name']
Tested on Python 2.5 and 3.2 | Python private class variables that aren't class variables | 1 | 0 | 1 | 0 | 0 | 514
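A minimal sketch of the __setattr__-plus-inspect.stack guard that answer describes. The class name Guarded is made up, and the check is easy to defeat, so treat it as an illustration rather than real protection:

```python
import inspect

class Guarded(object):
    """Reject attribute writes that do not come from this class's methods."""

    def __setattr__(self, name, value):
        caller = inspect.stack()[1]
        # In our own methods, the caller's local 'self' is this very object.
        if caller.frame.f_locals.get("self") is not self:
            raise AttributeError("private attribute: %s" % name)
        object.__setattr__(self, name, value)

    def set_ok(self, value):
        self.x = value  # allowed: assignment comes from inside the class

g = Guarded()
g.set_ok(5)
print(g.x)  # -> 5
try:
    g.x = 99  # outside write is rejected
except AttributeError as exc:
    print(exc)
```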
6,975,995 | 2011-08-07T21:46:00.000 | 0 | 0 | 0 | 0 | 0 | python,virtualbox,rdp | 0 | 7,852,841 | 0 | 2 | 0 | false | 0 | 0 | Have you considered Jython, which ought to be able to integrate natively with the Java library you already have? | 1 | 2 | 0 | 0 | Is there a way to access the screen of a stock version of a headless VirtualBox 4.x remotely using RDP with Python or access it using the VNC protocol?
I want to be able to access the boot screen (F12), too, so I cannot boot a VNC server in the Guest as the Guest is not yet booted.
Note that I already have an RFB version in pure Python, however stock VirtualBox seems not to support VNC style remote connections, OTOH I somehow was unable to find a Python RDP library, sadly.
What I found so far but I do not want to use:
A Java RDP client, however I do not want to switch horses, so I want to keep it Python
VirtualBox API seems to provide Python with access to the framebuffer, but I am not completely sure. However this then is bound to VirtualBox only, an RDP library (or letting VB talk RFB) would be more generic.
Notes:
So what I need either is a way to add VNC/RFB support to an original VirtualBox (.vbox-extpack?)
or find some RDP library written in pure Python.
It must be available on at least all platforms for which VirtualBox is available.
If neither is possible, I think I will try the VirtualBox API in Python. | How to connect Python to VirtualBox using RDP or RFB? | 1 | 0 | 1 | 0 | 1 | 1,273 |
6,979,529 | 2011-08-08T08:38:00.000 | 2 | 1 | 0 | 0 | 0 | python,django,content-management-system,contenttype,feincms | 0 | 6,979,669 | 0 | 1 | 0 | false | 0 | 0 | No, that's not possible currently.
What's the reasoning behind this feature request? | 1 | 2 | 0 | 0 | I'd like to disable the option to move FeinCMS contenttypes within a region.
Does anyone have any suggestions how to accomplish this? | Disable option to move FeinCMS contenttypes within a region | 0 | 0.379949 | 1 | 0 | 0 | 108 |
6,984,672 | 2011-08-08T15:38:00.000 | 1 | 0 | 0 | 0 | 1 | python,django,django-admin,gunicorn | 0 | 6,984,814 | 0 | 5 | 0 | false | 1 | 0 | If you use contrib.static, you have to execute a collectstatic command to get all the app-specific static files (including admin's own) into the public directory that is served by gunicorn. | 1 | 3 | 0 | 0 | I have a mostly entirely plain django project, with no adding of my own media or customization of the admin interface in any way. Running the server with python manage.py runserver results in a nicely-formatted admin interface. Running the server with gunicorn_django does not. Why is this the case, and how can I fix it?
It's definitely an issue of not finding the css files, but where are they stored? I never configured this, and the MEDIA_ROOT setting is ''.
EDIT: I just want to know how django-admin serves the non-existent admin files... and how can I get gunicorn_django to do the same? | django: admin site not formatted | 0 | 0.039979 | 1 | 0 | 0 | 2,860 |
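For reference, the settings side of that answer (STATIC_URL and STATIC_ROOT are Django's own setting names; the path is hypothetical). After setting them, python manage.py collectstatic copies the admin's CSS/JS into STATIC_ROOT, which your gunicorn/nginx setup then serves at STATIC_URL:

```python
# settings.py sketch
STATIC_URL = "/static/"
STATIC_ROOT = "/var/www/myproject/static/"  # hypothetical target directory
```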
6,999,621 | 2011-08-09T16:34:00.000 | 51 | 0 | 0 | 0 | 0 | python,plot,matplotlib | 0 | 7,000,381 | 0 | 2 | 0 | true | 0 | 0 | Specify, in the coordinates of your current axis, the corners of the rectangle that you want the image to be pasted over
Extent defines the left and right limits, and the bottom and top limits. It takes four values like so: extent=[horizontal_min,horizontal_max,vertical_min,vertical_max].
Assuming you have longitude along the horizontal axis, then use extent=[longitude_top_left,longitude_top_right,latitude_bottom_left,latitude_top_left]. longitude_top_left and longitude_bottom_left should be the same, latitude_top_left and latitude_top_right should be the same, and the values within these pairs are interchangeable.
If your first element of your image should be plotted in the lower left, then use the origin='lower' imshow option as well, otherwise the 'upper' default is what you want. | 1 | 37 | 1 | 0 | I managed to plot my data and would like to add a background image (map) to it.
Data is plotted by the long/lat values and I have the long/lat values for the image's three corners (top left, top right and bottom left) too.
I am trying to figure out how to use the 'extent' option with imshow. However, the examples I found don't explain how to assign x and y for each corner (in my case I have the information for three corners).
How can I assign the location of three corners for the image when adding it to the plot?
Thanks | how to use 'extent' in matplotlib.pyplot.imshow | 0 | 1.2 | 1 | 0 | 0 | 80,144 |
7,017,059 | 2011-08-10T20:06:00.000 | 0 | 0 | 0 | 0 | 0 | python,django,internationalization,django-cms | 0 | 7,448,955 | 0 | 1 | 0 | false | 1 | 0 | Found the solution. Django-nani. | 1 | 1 | 0 | 0 | I am implementing payment plans on a multilingual eshop based on django-cms. I need internationalized support for payment plans.
Users can enter an arbitrary number of payment plans (implies standard django models)
Every payment plan must have a description in every language the site supports (3 at the moment). Implies basic django-cms post or plugin.
Possible solutions I thought of, but none quite fit:
If I go with django models, how do I handle i18n?
If I go with cms plugin, how would I link those descriptions to same django models in every language?
If I go with cms page, how do I create separate entities?
What is the most elegant solution?
Thanks. | Django-cms translatable objects | 0 | 0 | 1 | 0 | 0 | 240 |
7,020,108 | 2011-08-11T02:15:00.000 | 0 | 0 | 0 | 0 | 0 | python,sql,django,django-models,django-signals | 0 | 7,021,874 | 0 | 3 | 0 | false | 1 | 0 | As others have said, the best solution is django-celery (https://github.com/ask/django-celery), but it's a little heavyweight if you are low on resources.
For a similar need, I have a middleware that checks the conditions and executes the needed operations (which, in my case, are just changing a boolean in some records of the database). This works for me because until someone actually accesses the data there is no need to change it, and because the "task" is just one UPDATE query, it's not an unbearable load.
Otherwise you can set up a cron job at a short interval (1 minute? 5 minutes?) that checks the conditions and carries out the consequences. | 1 | 2 | 0 | 0 | I'm totally confused and have no idea how to do this, so please forgive me if my description/information is bad.
So I want, say, to send a notification via django-notification or simply an e-mail to one of my users when a post of theirs has ended, like on eBay. In my database I have a model which stores the datetime at which the post is going to end, but I'm not sure how to efficiently check, or store a signal or something, that would alert the system to notify the user once the current time is greater than the end datetime.
thanks!
Since I want to send an email/notification the second a post ends, I don't think I can use a scheduler to check whether any post has ended; I believe this would be too inefficient, because I would have to check every second. But like I said above, I'm not sure about anything... | Django 1.3, how to signal when a post has ended like on ebay? | 0 | 0 | 1 | 0 | 0 | 204
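A plain-Python sketch of the "periodic check" approach from the answers; posts, ends_at, and notify() are stand-ins for the real model, field, and e-mail/notification call:

```python
from datetime import datetime, timedelta

# Stand-in for a Post queryset; in Django these would be model instances.
posts = [
    {"id": 1, "ends_at": datetime.now() - timedelta(minutes=5), "notified": False},
    {"id": 2, "ends_at": datetime.now() + timedelta(hours=1), "notified": False},
]

def notify(post):
    # Stand-in for send_mail() or a django-notification call.
    print(f"Post {post['id']} has ended")

def check_ended_posts(posts, now=None):
    """Meant to run from a periodic job (cron, or celery beat)."""
    now = now or datetime.now()
    ended = [p for p in posts if not p["notified"] and p["ends_at"] <= now]
    for p in ended:
        notify(p)
        p["notified"] = True  # so the user is only notified once
    return ended

ended = check_ended_posts(posts)
```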
7,027,803 | 2011-08-11T14:37:00.000 | 1 | 0 | 0 | 0 | 0 | python,django,git,pdb | 0 | 7,027,964 | 0 | 3 | 0 | false | 1 | 0 | The best option would be to have an extensive test suite, and to run the tests before pushing to production. Extraneous pdb breakpoints will prevent the tests from passing.
If you can't do that, then option 2 is the best: write a utility to break into the debugger, and make it sensitive to the state of the settings. You still have to solve the problem of how to be sure people use the wrapper rather than a raw pdb call though. | 2 | 3 | 0 | 0 | What do you suggest to get rid of pdb calls on production software?
In my case, I'm developing a django website.
I don't know if I should:
Monkey patch pdb from settings.py (depending on DEBUG boolean).
Make a pdb wrapper for our project which exposes set_trace or prints a basic log if DEBUG = True
Disallow committing breakpoints via git hooks... (if you think it's the best idea, how would you do that?). | How to ensure there are no pdb calls out of debugging configuration? | 0 | 0.066568 | 1 | 0 | 0 | 133
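Option 2 from the question (a project-wide pdb wrapper that respects the debug setting) could look roughly like this; the module-level DEBUG flag stands in for django.conf.settings.DEBUG:

```python
import logging
import pdb

DEBUG = False  # stand-in for django.conf.settings.DEBUG

def set_trace():
    """Project-wide breakpoint helper: drops into pdb only when DEBUG is on."""
    if DEBUG:
        pdb.set_trace()
    else:
        # In production a stray breakpoint just leaves a log trail.
        logging.getLogger(__name__).warning(
            "set_trace() called with DEBUG=False; skipping"
        )

set_trace()  # with DEBUG=False this only logs, it never blocks
```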
7,027,803 | 2011-08-11T14:37:00.000 | 2 | 0 | 0 | 0 | 0 | python,django,git,pdb | 0 | 7,028,046 | 0 | 3 | 0 | true | 1 | 0 | The third one. You have to enforce some commit rules. For example, run a serie of tests before a commit, etc. This way, developpers have a simple way to check if a pdb break remain. If someone commit a set_trace, he has to bake a cake for the rest of the team.
This works fine in my company :-)
edit: you may present this method to your boss as CDD (Cake Driven Development) | 2 | 3 | 0 | 0 | What do you suggest to get rid of pdb calls on production software?
In my case, I'm developing a django website.
I don't know if I should:
Monkey patch pdb from settings.py (depending on DEBUG boolean).
Make a pdb wrapper for our project which exposes set_trace or prints a basic log if DEBUG = True
Disallow committing breakpoints via git hooks... (if you think it's the best idea, how would you do that?). | How to ensure there are no pdb calls out of debugging configuration? | 0 | 1.2 | 1 | 0 | 0 | 133
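The git-hook idea raised in the question could be sketched in Python like this (a pre-commit hook can be any executable); the staged dict is a stand-in for reading files listed by `git diff --cached --name-only`:

```python
# Hypothetical .git/hooks/pre-commit logic for rejecting leftover breakpoints.
import re
import sys

FORBIDDEN = re.compile(r"\bpdb\.set_trace\(\)|\bimport i?pdb\b")

def check(paths_to_contents):
    """Return the paths whose contents contain a pdb breakpoint."""
    return [p for p, text in paths_to_contents.items() if FORBIDDEN.search(text)]

# Stand-in for staged file contents; a real hook would read them from git.
staged = {
    "views.py": "def home(request):\n    import pdb; pdb.set_trace()\n",
    "models.py": "class Post(models.Model):\n    pass\n",
}
bad = check(staged)
if bad:
    print("Refusing commit, breakpoints found in:", ", ".join(bad))
    # sys.exit(1)  # a non-zero exit status aborts the commit
```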