Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | DISCREPANCY | Tags | ERRORS | A_Id | API_CHANGE | AnswerCount | REVIEW | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | DOCUMENTATION | Question | Title | CONCEPTUAL | Score | API_USAGE | Database and SQL | Networking and APIs | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
14,942,462 | 2013-02-18T17:57:00.000 | 1 | 0 | 1 | 0 | 0 | python,database,json,sqlalchemy,python-db-api | 0 | 14,951,638 | 0 | 1 | 0 | false | 0 | 0 | There's no magic way; you'll have to write a Python program to load your JSON data into a database. SQLAlchemy is a good tool to make that easier. | 1 | 0 | 0 | 0 | If we have a JSON file that stores all of our database content (table names, rows, columns, etc.), how can we use a DB-API object to insert/update/delete data from the JSON file into a database such as SQLite or MySQL? Or please share if you have a better idea for handling it. People say it is good to save database data in JSON format, which makes it much more convenient to work with the database in Python.
Thanks so much! Please advise! | how will Python DB-API read json format data into an existing database? | 1 | 0.197375 | 1 | 1 | 0 | 454 |
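A minimal sketch of the answer's suggestion, assuming a hypothetical users table and a data.json file holding a list of row dicts (both names are illustrative, not from the question):

```python
import json
from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String

engine = create_engine("sqlite:///example.db")
metadata = MetaData()
users = Table(
    "users", metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String),
)
metadata.create_all(engine)

with open("data.json") as f:
    rows = json.load(f)  # expected shape: [{"id": 1, "name": "alice"}, ...]

with engine.begin() as conn:  # one transaction; commits on success
    conn.execute(users.insert(), rows)
```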
14,947,860 | 2013-02-19T00:35:00.000 | 0 | 0 | 0 | 1 | 0 | python,google-app-engine | 0 | 14,950,038 | 0 | 1 | 0 | false | 1 | 0 | The easiest thing is to modify google/appengine/tools/dev_appserver_import_hook.py and add the module you want to the whitelist.
This will allow you to import whatever you want.
Now there's a good reason that the imports are restricted in the development server. The restricted imports match what's available on the production environment. So if you add libraries to the whitelist, your code may run on your local development server, but it will not run on the production environment.
And no, you can't import restricted modules in production. | 1 | 1 | 0 | 0 | I am playing around with local deployment of the GAE Python SDK. The code that I am trying to run contains many external libraries which are not part of the GAE import whitelist. I want to disable the import restrictions and let the GAE app import any locally installed module.
After walking through the code, I figured out that they use custom import hooks for restricting imports. However, I have not been able to figure out how to disable the overridden import hook.
Let me know if you have any idea how this can be accomplished. | How to disable Google App Engine python SDK import hook? | 0 | 0 | 1 | 0 | 0 | 158 |
14,959,093 | 2013-02-19T13:47:00.000 | 1 | 0 | 0 | 0 | 0 | python,.net,ironpython | 0 | 14,959,257 | 0 | 1 | 0 | true | 0 | 1 | Well, I've never used IronPython, so I don't know how much help this will be, but what I usually do when trying to figure out these things in regular Python is to print type(sender), print sender and print dir(sender) to the console (or output to a file if you don't have a console available).
This should help you figure out what exactly the "sender" parameter is. In the simplest case it could be the button itself, so a simple == will work to know which button it was. Or it could have a method/property that gets you the button object, in which case dir(sender) might contain an obvious one; if not, Google the class name from type(sender) and see if you can find any docs. | 1 | 0 | 0 | 0 | If there are, let's say, 4 buttons, all with the same Click event, how can I find out which button was pressed?
If the event looks like def Button_Click(self, sender, e), I'm sure I can compare sender to my buttons somehow. But how? | event handling iron python | 0 | 1.2 | 1 | 0 | 0 | 901 |
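A rough sketch of the comparison the answer describes, assuming a WinForms-style form where self.button1 and self.button2 were created elsewhere (the names are hypothetical):

```python
def Button_Click(self, sender, e):
    # 'sender' is the control that raised the event, so an identity
    # comparison against the known buttons tells them apart.
    if sender is self.button1:
        print('button 1 was pressed')
    elif sender is self.button2:
        print('button 2 was pressed')
    else:
        # Many WinForms controls also expose a Text property to inspect.
        print('pressed: %s' % sender.Text)
```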
14,960,161 | 2013-02-19T14:42:00.000 | 1 | 0 | 0 | 0 | 1 | python,debugging,pdb | 0 | 14,960,605 | 0 | 2 | 0 | false | 0 | 0 | This is the thing: Ctrl+D does not kill programs; it cuts a wait short halfway through. When you press Ctrl+D, you interrupt the process's read() call that's waiting for input.
Ctrl+D
Most programs will abort when they read 0 bytes as input. If you Ctrl+D before entering anything, you'll be sending 0 bytes down the input pipe, possibly inducing a shutdown of the program, which may think there is nothing left to be done. This is not forced.
However, if you press some keys and then Ctrl+D, the read() call you interrupted will return that text, and the underlying program will decide to wait for another round.
That's why, when you Ctrl+D again without entering any new text, you get the behavior you expect.
Your case
This is what's probably happening:
You type some characters; they get buffered.
You Ctrl+D. The text reaches ipdb, but it does not detect a newline, and thus it waits for more.
You Ctrl+D again. This time 0 bytes reach ipdb, which assumes nothing more is coming and processes the text with or without a newline. | 1 | 2 | 0 | 0 | I am debugging my Python scripts with ipdb. Somehow I have the problem that after entering a command, for instance n, s, c, b, etc., I have to press Ctrl+D two times in order for ipdb to process the command and proceed.
Any idea what causes this and how I can turn it off? | ipdb requires Ctrl+D for processing command | 0 | 0.099668 | 1 | 0 | 0 | 463 |
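A tiny demonstration of the EOF semantics described above; run it in a terminal, type a few characters without pressing Enter, then press Ctrl+D twice:

```python
import sys

# The first Ctrl+D flushes the buffered (newline-less) text into read();
# the second delivers 0 bytes, which read() treats as end-of-input.
data = sys.stdin.read()
print("got %d bytes" % len(data))
```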
14,972,631 | 2013-02-20T05:03:00.000 | 3 | 0 | 1 | 0 | 1 | python,naming-conventions,abstract-class | 1 | 14,973,303 | 0 | 6 | 0 | false | 0 | 0 | Create your 'abstract' class and raise NotImplementedError() in the abstract methods.
It won't stop people from using the class and, in true duck-typing fashion, it will let you know if you neglect to implement the abstract method. | 3 | 20 | 0 | 0 | I come from a C# background where the language has some built-in "protect the developer" features. I understand that Python takes the "we're all adults here" approach and puts responsibility on the developer to code thoughtfully and carefully.
That said, Python suggests conventions like a leading underscore for private instance variables. My question is: is there a particular convention for marking a class as abstract other than just specifying it in the docstrings? I haven't seen anything in particular in the Python style guide that mentions naming conventions for abstract classes.
I can think of 3 options so far but I'm not sure if they're good ideas:
Specify it in the docstring above the class (might be overlooked)
Use a leading underscore in the class name (not sure if this is universally understood)
Create a def __init__(self): method on the abstract class that raises an error (not sure if this negatively impacts inheritance, like if you want to call a base constructor)
Is one of these a good option or is there a better one? I just want to make sure that other developers know that it is abstract and so if they try to instantiate it they should accept responsibility for any strange behavior. | Python abstract classes - how to discourage instantiation? | 0 | 0.099668 | 1 | 0 | 0 | 12,928 |
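A minimal sketch of the raise-NotImplementedError approach (the class names are made up for illustration):

```python
class AbstractShape(object):
    """Abstract: subclasses are expected to implement area()."""

    def area(self):
        raise NotImplementedError("subclasses must implement area()")


class Circle(AbstractShape):
    def __init__(self, radius):
        self.radius = radius

    def area(self):
        return 3.14159 * self.radius ** 2


print(Circle(2).area())    # works
# AbstractShape().area()   # would raise NotImplementedError
```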
14,972,631 | 2013-02-20T05:03:00.000 | 3 | 0 | 1 | 0 | 1 | python,naming-conventions,abstract-class | 1 | 14,973,549 | 0 | 6 | 0 | false | 0 | 0 | I just name my abstract classes with the prefix 'Abstract'. E.g. AbstractDevice, AbstractPacket, etc.
It's about as easy and to the point as it comes. If others choose to go ahead and instantiate and/or use a class that starts with the word 'Abstract', then they either know what they're doing or there was no hope for them anyway.
Naming it thus also serves as a reminder to myself not to go nuts with deep abstraction hierarchies, because putting 'Abstract' on the front of a whole lot of classes feels stupid too. | 3 | 20 | 0 | 0 | I come from a C# background where the language has some built-in "protect the developer" features. I understand that Python takes the "we're all adults here" approach and puts responsibility on the developer to code thoughtfully and carefully.
That said, Python suggests conventions like a leading underscore for private instance variables. My question is: is there a particular convention for marking a class as abstract other than just specifying it in the docstrings? I haven't seen anything in particular in the Python style guide that mentions naming conventions for abstract classes.
I can think of 3 options so far but I'm not sure if they're good ideas:
Specify it in the docstring above the class (might be overlooked)
Use a leading underscore in the class name (not sure if this is universally understood)
Create a def __init__(self): method on the abstract class that raises an error (not sure if this negatively impacts inheritance, like if you want to call a base constructor)
Is one of these a good option or is there a better one? I just want to make sure that other developers know that it is abstract and so if they try to instantiate it they should accept responsibility for any strange behavior. | Python abstract classes - how to discourage instantiation? | 0 | 0.099668 | 1 | 0 | 0 | 12,928 |
14,972,631 | 2013-02-20T05:03:00.000 | 0 | 0 | 1 | 0 | 1 | python,naming-conventions,abstract-class | 1 | 14,974,080 | 0 | 6 | 0 | false | 0 | 0 | To enforce things is possible, but rather unpythonic. When I came to Python after many years of C++ programming I also tried to do the same, I suppose, most of people try doing so if they have an experience in more classical languages. Metaclasses would do the job, but anyway Python checks very few things at compilation time. Your check will still be performed at runtime. So, is the inability to create a certain class really that useful if discovered only at runtime? In C++ (and in C# as well) you can not even compile you code creating an abstract class, and that is the whole point -- to discover the problem as early as possible. If you have abstract methods, raising a NotImplementedError exception seems to be quite enough. NB: raising, not returning an error code! In Python errors usually should not be silent unless thay are silented explicitly. Documenting. Naming a class in a way that says it's abstract. That's all.
Quality of Python code is ensured mostly with methods that are quite different from those used in languages with advanced compile-time type checking. Personally I consider that the most serious difference between dynamically typed lngauges and the others. Unit tests, coverage analysis etc. As a result, the design of code is quite different: everything is done not to enforce things, but to make testing them as easy as possible. | 3 | 20 | 0 | 0 | I come from a C# background where the language has some built in "protect the developer" features. I understand that Python takes the "we're all adults here" approach and puts responsibility on the developer to code thoughtfully and carefully.
That said, Python suggests conventions like a leading underscore for private instance variables. My question is, is there a particular convention for marking a class as abstract other than just specifying it in the docstrings? I haven't seen anything in particular in the python style guide that mentions naming conventions for abstract classes.
I can think of 3 options so far but I'm not sure if they're good ideas:
Specify it in the docstring above the class (might be overlooked)
Use a leading underscore in the class name (not sure if this is universally understood)
Create a def __init__(self): method on the abstract class that raises an error (not sure if this negatively impacts inheritance, like if you want to call a base constructor)
Is one of these a good option or is there a better one? I just want to make sure that other developers know that it is abstract and so if they try to instantiate it they should accept responsibility for any strange behavior. | Python abstract classes - how to discourage instantiation? | 0 | 0 | 1 | 0 | 0 | 12,928 |
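For what it's worth, the standard library's abc module (available since Python 2.6) is a middle ground between pure convention and the metaclass machinery mentioned above; a minimal sketch:

```python
import abc


class AbstractReader(object):
    __metaclass__ = abc.ABCMeta  # Python 2 spelling; Python 3 uses metaclass=

    @abc.abstractmethod
    def read(self):
        """Return the next record."""


class CsvReader(AbstractReader):
    def read(self):
        return "row"


# AbstractReader() raises TypeError; CsvReader() works because it
# overrides every abstract method.
print(CsvReader().read())
```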
14,994,955 | 2013-02-21T05:08:00.000 | 0 | 0 | 1 | 0 | 0 | python,notepad++ | 0 | 63,126,890 | 0 | 2 | 0 | false | 0 | 0 | To apply word wrap, just go to View in the menu bar and check the Word Wrap option. Note that this is a soft wrap at the window edge, not a hard wrap at a fixed column. | 1 | 3 | 0 | 0 | I know the Python coding standard (PEP 8) limits lines to 79 characters. I am working in Notepad++;
how do I set it so it wraps after 79 characters? | Wrapping in Notepad++ | 0 | 0 | 1 | 0 | 0 | 9,109 |
15,012,694 | 2013-02-21T21:37:00.000 | 0 | 0 | 0 | 1 | 0 | python,usb | 0 | 15,013,500 | 0 | 2 | 0 | false | 0 | 0 | "Everything is a file" is one of the core ideas of Unix. Windows does not share this philosophy and, as far as I know, doesn't provide an equivalent interface. You're going to have to find a different way.
The first way would be to continue handling everything at a low level and have your code use a different code path under Windows. The only real reason to do this is if your goal is to learn about USB programming at a low level.
The other way is to find a library that has already abstracted out the differences between platforms. PySDL immediately comes to mind (followed by PyGame, which is a higher-level wrapper around it) but, as that's a gaming/multimedia library, it might be overkill for what you're doing. Google tells me that PyUSB exists and appears to focus purely on handling USB devices. PySDL/PyGame have been around a while and are probably more mature, so unless you've got a particular aversion to them, I'd probably stick with them. | 2 | 0 | 0 | 0 | I'm trying to access a USB device through Python but I'm unsure how to find the path to it.
The example I'm going from is:
pipe = open('/dev/input/js0','r')
In this case, it is either a Mac or Linux path. I don't know how to find the path for Windows.
Could someone steer me in the proper direction? I've sifted through the forums but couldn't quite find my answer.
Thanks,
-- Mark | opening a usb device in python -- what is the path in winXP? | 0 | 0 | 1 | 0 | 0 | 1,609 |
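If you go the PyUSB route, a sketch like the following sidesteps the device-path question entirely (the vendor/product IDs here are placeholders):

```python
import usb.core  # PyUSB

# Placeholder IDs -- look up your device's real values (e.g. in Windows
# Device Manager, or with lsusb on Linux).
dev = usb.core.find(idVendor=0x1234, idProduct=0x5678)
if dev is None:
    raise ValueError("device not found")
print(dev)
```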
15,012,694 | 2013-02-21T21:37:00.000 | 0 | 0 | 0 | 1 | 0 | python,usb | 0 | 15,012,889 | 0 | 2 | 0 | false | 0 | 0 | Windows has no fixed device path for USB storage; a removable drive is simply assigned a drive letter (often D:\ or E:\). So, if we have a text document named mydoc.txt in the folder myData on a drive mounted as D:\, the appropriate path is D:\myData\mydoc.txt | 2 | 0 | 0 | 0 | I'm trying to access a USB device through Python but I'm unsure how to find the path to it.
The example I'm going from is:
pipe = open('/dev/input/js0','r')
In this case, it is either a Mac or Linux path. I don't know how to find the path for Windows.
Could someone steer me in the proper direction? I've sifted through the forums but couldn't quite find my answer.
Thanks,
-- Mark | opening a usb device in python -- what is the path in winXP? | 0 | 0 | 1 | 0 | 0 | 1,609 |
15,024,894 | 2013-02-22T13:01:00.000 | 0 | 0 | 0 | 0 | 0 | django,python-2.7 | 0 | 32,207,711 | 0 | 3 | 0 | false | 0 | 0 | What you want to do is called Single sign on (SSO) and it's much easier to implement on actual web server than Django.
So, you should check how to do SSO on Apache/Nginx/whateverYouAreUsing, then the web server will forward the authenticated username to your django app. | 1 | 3 | 0 | 0 | when user logs in to his desktop windows os authenticates him against Active Directory Server.
so Whenever he accesses a web page he should not be thrown a login page for entering his userid or password.Instead, his userid and domain need to be captured from his desktop and passed to the web server.(let him enter password after that)
Is this possible in python to get username and domain of of client?
win32api.GetUserName() gives the username of the server side.
Thanks in advance | how to get username and domain of windows logged in client using python code? | 1 | 0 | 1 | 0 | 1 | 2,471 |
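A minimal sketch of the hand-off the answer describes, assuming the front-end web server (e.g. Apache with a Kerberos/NTLM module) has already authenticated the user and set REMOTE_USER; shown here for a Django view:

```python
from django.http import HttpResponse

def whoami(request):
    # Typically arrives as 'DOMAIN\\username' or 'username@DOMAIN',
    # depending on how the web server's SSO module is configured.
    remote_user = request.META.get('REMOTE_USER', '')
    return HttpResponse('You are: %s' % remote_user)
```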
15,031,315 | 2013-02-22T18:59:00.000 | 2 | 1 | 1 | 0 | 0 | python,multithreading,parallel-processing,multiprocessing | 0 | 15,031,533 | 0 | 2 | 0 | false | 0 | 0 | There is no definitive answer to your question: it really depends what the functions do, how often they are called and what level of parallelism you need.
The threading and multiprocessing modules work in radically different ways.
threading implements native threads within the Python interpreter: fairly inexpensive to create but limited in parallelism due to Python's Global Interpreter Lock (GIL). Threads share the same address space, so may interfere with each other (e.g. if a thread causes the interpreter to crash, all threads, including your app, die), but inter-thread communication is cheap and fast as a result.
multiprocessing implements parallelism using distinct processes: the setup is far more expensive than threads (it requires creating a new process), but each process runs its own copy of the interpreter (hence no GIL-related locking issues) and runs in a different address space (isolating your main app). The child processes communicate with the parent over IPC channels, which requires Python objects to be pickled/unpickled, so again it is more expensive than threads.
You need to figure out which trade-off is best suited to your purpose. | 1 | 0 | 0 | 0 | I've written an IRC bot that runs some commands when told to; the commands are predefined Python functions that will be called on the server where the bot is running.
I have to call those functions without knowing exactly what they'll do
(more I/O or something computationally expensive, nothing harmful since I review them when I accept them), but I need to get their return value in order to give a reply back to the irc channel.
What module do you recommend for running several of these callbacks in parallel and why?
The threading or multiprocessing modules, or something else?
I heard about Twisted, but I don't know how it would fit into my current implementation, since I know nothing about it and the bot is fully functional from the point of view of the protocol.
Also requiring the commands to do things asynchronously is not an option since I want the bot to be easily extensible. | What module to use for calling user-defined functions in parallel | 0 | 0.197375 | 1 | 0 | 0 | 221 |
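A rough sketch of the multiprocessing option: apply_async() returns immediately and fires a callback with the function's return value, which fits the "reply to the channel when done" flow (the function names here are stand-ins, not from the question):

```python
from multiprocessing import Pool

def run_command(args):
    # Stand-in for one of the user-submitted command functions.
    return 'result for %r' % (args,)

def reply_to_channel(result):
    # Stand-in for the bot's "send a message to IRC" call.
    print(result)

if __name__ == '__main__':
    pool = Pool(processes=4)
    pool.apply_async(run_command, (('!weather', 'Paris'),),
                     callback=reply_to_channel)
    pool.close()
    pool.join()
```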
15,031,856 | 2013-02-22T19:33:00.000 | 3 | 0 | 0 | 0 | 0 | javascript,python,postgresql,flot | 0 | 15,032,100 | 0 | 2 | 1 | false | 1 | 0 | You can't send a Python or JavaScript "datetime" object over JSON. JSON only accepts more basic data types like strings, ints, and floats.
The way I usually do it is to send it as text using Python's datetime.isoformat(), then parse it on the JavaScript side. | 1 | 8 | 0 | 0 | I have a PostgreSQL database with a timestamp column, and I have a REST service in Python that executes a query in the database and returns data to a JavaScript front-end to plot a graph using flot.
Now the problem I have is that flot can automatically handle the date using JavaScript's timestamp, but I don't know how to convert the PostgreSQL timestamps to a JavaScript timestamp (YES, a timestamp, not a date; stop editing if you don't know the answer) in Python. I don't know if this is the best approach (maybe the conversion can be done in JavaScript?). Is there a way to do this? | Converting postgresql timestamp to JavaScript timestamp in Python | 0 | 0.291313 | 1 | 1 | 0 | 4,296 |
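A small sketch of both options in Python (flot's time axis expects milliseconds since the epoch):

```python
import calendar
from datetime import datetime

dt = datetime(2013, 3, 7, 12, 30, 0)  # e.g. a value read from PostgreSQL

# Option 1: ISO 8601 text, parsed on the JS side with new Date(...).
iso = dt.isoformat()                  # '2013-03-07T12:30:00'

# Option 2: milliseconds since the epoch, usable directly by flot.
# calendar.timegm() treats the tuple as UTC; use time.mktime() instead
# if your datetimes are in local time.
js_ms = calendar.timegm(dt.utctimetuple()) * 1000
print('%s -> %d' % (iso, js_ms))
```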
15,036,815 | 2013-02-23T03:43:00.000 | 1 | 1 | 0 | 0 | 0 | c#,python,asp.net,unit-testing | 0 | 16,286,713 | 0 | 1 | 0 | false | 0 | 0 | I don't know if you can fit them in one runner or process. I'm also not that familiar with Python. It seems to me that the Python-written tests are more high level, though: acceptance tests or integration tests or whatever you want to call them. And the NUnit ones are at the unit-test level. Therefore I would suggest that you first run the unit tests and, if they pass, the Python ones. You should be able to integrate that in a build script. And as you already suggested, if you can run that on a CI server, that would be my preferred approach in your situation. | 1 | 5 | 0 | 0 | What I'm trying to do is combine two approaches, two frameworks, into one solid scope and process...
I have a bunch of tests in Python with a self-written TestRunner over the proboscis library, which gave me a good way to write my own Test Result implementation (in which I'm using Jinja). This framework is now a solid thing. These tests are for testing the UI (using Selenium) on an ASP.NET site.
On the other hand, I have to write tests for business logic. Apparently it would be right to use NUnit or TestDriven.NET for C#.
Could you please give me a tip, hint, or advice on how I should integrate these two approaches in one final solution? Maybe the answer is just to set up a CI server, I don't know...
Please note, the reason I'm using Python for the ASP.NET portal is its flexibility and the opportunity to build any custom Test Runner, Test Loader, Test Discovery and so on...
P.S. Using IronPython is not an option for me.
P.P.S. For the sake of clarity: proboscis is the Python library which allows setting the test order and the dependencies of a chosen test. And these two options are the requirements!
Thank you in advance! | Integrating tests written in Python and tests in C# in one solid solution | 1 | 0.197375 | 1 | 0 | 0 | 923 |
15,049,661 | 2013-02-24T07:33:00.000 | 0 | 0 | 1 | 0 | 0 | python,skype,skype4py | 0 | 15,052,112 | 0 | 3 | 0 | false | 0 | 0 | The Type property of the chat object will be either chatTypeDialog or chatTypeMultiChat, with the latter being a group chat. You can safely ignore the other legacy enumeration values. | 1 | 0 | 0 | 0 | Is there a way to check if a chat is a group chat? Or at least to find out how many users there are in a group?
For example, by checking the number of users: if it is 2, then it is obviously 1-1 (single), but if it is anything else, it would be a group chat. | Skype4Py Check If Group Chat | 1 | 0 | 1 | 0 | 1 | 1,405 |
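Based on that description, a rough sketch (I have not verified the exact constant names against Skype4Py, so treat them as indicative only):

```python
import Skype4Py

skype = Skype4Py.Skype()
skype.Attach()

for chat in skype.Chats:
    # Per the answer above: chatTypeMultiChat marks a group chat,
    # chatTypeDialog a one-to-one conversation.
    if chat.Type == Skype4Py.chatTypeMultiChat:
        print('group chat: %s' % chat.Name)
    elif chat.Type == Skype4Py.chatTypeDialog:
        print('one-to-one chat: %s' % chat.Name)
```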
15,055,029 | 2013-02-24T18:27:00.000 | 5 | 0 | 0 | 0 | 1 | python,web-applications,permissions,security,pyramid | 0 | 15,057,901 | 0 | 1 | 0 | true | 1 | 0 | Make a readwrite permission. Each view gets one and only one permission but each principal can be mapped to many permissions. | 1 | 10 | 0 | 0 | I am configuring access control for a web application based on the Pyramid framework. I am setting up permissions for my view callables using the @view_config decorator. I have two permissions, namely 'read' and 'write'. Now, I want certain views to require both permissions. I was unable to figure out how to do this with view_config - am I missing something, or is there maybe another way to do this? | Multiple permissions in view_config decorator? | 1 | 1.2 | 1 | 0 | 0 | 1,193 |
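A minimal sketch of the combined-permission idea, with hypothetical group names in the ACL:

```python
from pyramid.response import Response
from pyramid.security import Allow
from pyramid.view import view_config

class RootFactory(object):
    # Editors get 'read', 'write' and the combined 'readwrite'
    # permission; viewers only get 'read'.
    __acl__ = [
        (Allow, 'group:editors', ('read', 'write', 'readwrite')),
        (Allow, 'group:viewers', ('read',)),
    ]
    def __init__(self, request):
        pass

@view_config(route_name='edit_page', permission='readwrite')
def edit_page(request):
    return Response('editing')
```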
15,056,269 | 2013-02-24T20:27:00.000 | 0 | 1 | 0 | 0 | 0 | python,pyephem,azimuth,altitude | 1 | 15,056,730 | 0 | 2 | 0 | false | 0 | 0 | Without knowing the details of the internal calculations that PyEphem is doing I don't know how easy or difficult it would be to invert those calculations to give the result you want.
With regards to the "sneaking up on it" option however, you could pick two starting times (eg sunrise and noon) where the azimuth is known to be either side (one greater and one less than) the desired value. Then just use a simple "halving the interval" approach to quickly find an approximate solution. | 1 | 3 | 0 | 0 | I am using PyEphem to calculate the location of the Sun in the sky at various times.
I have an Observer point (happens to be at Stonehenge) and can use PyEphem to calculate sunrise, sunset, and the altitude angle and azimuth (degrees from N) for the Sun at any hour of the day. Brilliant, no problem.
However, what I really need is to be able to calculate the altitude angle of the Sun from an known azimuth. So I would set the same observer point (long/lat/elev/date (just yy/mm/dd, not time)) and an azimuth for the Sun. And from this input, calculate the altitude of the Sun and the time it is at that azimuth.
I had hoped I would be able to just set Sun.date and Sun.az and work backwards from those values, but alas. Any thoughts on how to approach this (and if it even is approachable) with PyEphem?
The only other option I'm seeing available is to "sneak up" on the azimuth by iterating over a sequence of times until I get within a margin of error of the azimuth I desire, but that is just gross.
thanks in advance, Dave | PyEphem: can I calculate Sun's altitude from azimuth | 0 | 0 | 1 | 0 | 0 | 1,610 |
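A sketch of that interval-halving idea with PyEphem (the observer coordinates are illustrative, and it assumes the azimuth increases monotonically between the two starting times, as it does for a morning azimuth in the northern hemisphere):

```python
import ephem

obs = ephem.Observer()
obs.lat = '51.1789'            # illustrative Stonehenge-ish latitude
obs.long = '-1.8262'           # illustrative longitude
target_az = ephem.degrees('135')  # desired azimuth, degrees from N

sun = ephem.Sun()

def az_at(date):
    obs.date = date
    sun.compute(obs)
    return sun.az

lo = obs.previous_rising(sun)  # azimuth below the target here...
hi = ephem.Date(lo + 0.25)     # ...and above it roughly 6 hours later
for _ in range(40):            # halve the interval each step
    mid = ephem.Date((lo + hi) / 2)
    if az_at(mid) < target_az:
        lo = mid
    else:
        hi = mid

obs.date = lo
sun.compute(obs)
print('time: %s  altitude: %s' % (obs.date, sun.alt))
```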
15,057,301 | 2013-02-24T22:15:00.000 | 2 | 0 | 1 | 0 | 0 | python,multithreading,python-multithreading | 0 | 15,057,672 | 0 | 3 | 0 | false | 0 | 0 | Using threads for speed in Python is not a terribly good idea, particularly for CPU-bound operations. The GIL sees off any potential performance improvement from multiple CPUs (the number of which is the theoretical limit of your speed increase from threading, though in practice YMMV).
For truly independent "checks" you are far better off looking at multiprocessing. | 1 | 3 | 0 | 0 | I am writing a simple script which should do a big number of checks. Every check is independent, so I decided to put them into multiple threads. However, I don't know how fast the machine the script will run on will be. I've already found a quite nice utility to check the basic parameters of the target machine, but I am wondering if there's any way to determine the maximum sensible number of threads (I mean the moment when a new thread starts slowing the process down instead of speeding it up)? | How can I determine sensible thread number in python? | 0 | 0.132549 | 1 | 0 | 0 | 2,550 |
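A minimal starting point for sizing the worker pool (one worker per core suits CPU-bound checks; I/O-bound checks can usually go well past the core count):

```python
import multiprocessing

def run_check(item):
    # Stand-in for one independent check.
    return item * item

if __name__ == '__main__':
    pool = multiprocessing.Pool(processes=multiprocessing.cpu_count())
    print(pool.map(run_check, range(10)))
```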
15,057,651 | 2013-02-24T22:52:00.000 | 0 | 0 | 0 | 0 | 0 | python,python-3.3 | 0 | 15,057,740 | 0 | 2 | 0 | false | 1 | 0 | PHP scripts are run server-side and produce an HTML document (among other things). You will never see the PHP source of an HTML document when requesting a website; hence there is no way for Python to grab it either. This isn't even Python-related. | 1 | 0 | 0 | 0 | I know how to grab a page's HTML source, but not its PHP. Is that possible with the built-in functions? | Python grabbing pages source with PHP in it | 1 | 0 | 1 | 0 | 1 | 128 |
15,059,534 | 2013-02-25T03:16:00.000 | 0 | 0 | 0 | 0 | 0 | python,django | 0 | 15,059,615 | 0 | 2 | 0 | false | 1 | 0 | One way to do it is to create a row in a persistent database (or a Redis key/value pair) for the task which says whether it is running or finished. Have the code set the value to "running" when the task starts and "done" when the task completes. Then have an AJAX call do a GET lookup on a URL that sends the status of the task via a web service. You can put that in a setInterval() to periodically poll the database to see if it is done. You could send an email on completion or just have a landing page / dashboard that shows the status of the tasks being run. | 1 | 1 | 0 | 0 | I'm using Django to develop a classifier service, and a user can build a model using an API like http://localhost/api/buildmodel. However, building a model takes a long time, maybe 2 hours, and I'm using a web page to show the result of building a model. How do I design my Django program to return immediately and then show the result after the build finishes? Maybe I can use AJAX, but I want to implement it in Python, like using an async method and calling a callback function after building. Any suggestions will be appreciated. | how to do with a request which needs to take a long time to run? | 0 | 0 | 1 | 0 | 0 | 73 |
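A minimal sketch of the polled status endpoint, assuming a hypothetical Task model whose status field is set to 'running'/'done' by the build code:

```python
import json
from django.http import HttpResponse
from myapp.models import Task  # hypothetical app and model

def task_status(request, task_id):
    task = Task.objects.get(pk=task_id)
    payload = json.dumps({'status': task.status})
    return HttpResponse(payload, content_type='application/json')
```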
15,072,062 | 2013-02-25T16:51:00.000 | 0 | 0 | 0 | 0 | 0 | python,url,cherrypy | 0 | 52,942,169 | 0 | 2 | 0 | false | 0 | 0 | Well, .../a/b/x=y is the wrong way to send a value regardless of whether it is a file name or not. The correct way would be .../a/b?x=y
or .../a/b/?x=y, which would make x a standard query parameter, and CherryPy would treat it as such. Thereafter, whether there are slashes in the value of x would be moot; they would get through to your code just fine. | 1 | 0 | 0 | 0 | In CherryPy, how do you pass an argument like a file path (i.e. /abc/def/ghi) through a URL? I want to do something like http://...../filepath="abc/def/ghi". Thanks. | Passing file path argument to a CherryPy | 0 | 0 | 1 | 0 | 0 | 252 |
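A minimal sketch of the query-parameter approach in CherryPy:

```python
import cherrypy

class Root(object):
    @cherrypy.expose
    def b(self, x=None):
        # e.g. http://localhost:8080/b?x=/abc/def/ghi
        return 'filepath was: %s' % x

cherrypy.quickstart(Root())
```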
15,076,133 | 2013-02-25T20:44:00.000 | 0 | 0 | 1 | 0 | 0 | python,image,colors,pygame,numerical | 0 | 28,798,012 | 0 | 2 | 0 | false | 0 | 1 | I assume by image you mean pygame.Surface. You have several options:
pygame.Surface.set_at(...)
Use a palettized surface. Then change the palette. Based on your use case, this is actually probably what I'd suggest.
Use the pygame.PixelArray PyGame module. I don't think it requires NumPy.
Just use NumPy. It's really not that huge of a requirement; lots of projects require it and it's simple to set up. You get access to the powerful pygame.surfarray module. | 1 | 0 | 0 | 0 | I'm making a simple game in which I want my characters to be quite customizable. I want my characters' colours to be fully editable; for example, if a player wants their character to have cyan skin, they just put "0,255,255" into the sliders (or whatever control I choose to use), or purple "255,0,255", or something random like "25,125,156", and have their character be that colour. I haven't even started creating the game, but I've got the basis down and I know exactly what I must do for pretty much everything EXCEPT this.
I did a search on Google, and it turns out I need Numerical Python for this? Well, that is a whole new package, and in order for the average player to play, I must change the game to EXE form... (or have Python, Pygame and Numerical Python installed on their PC, which will be a problem if they have a later version...). Now, it's already getting complex with just Pygame, but with Numerical Python as well, is there even a tutorial on how to do this?
Any suggestions? Thanks! | PYGAME - Edit colours of an image (makes white red at 255,0,0) without numerical python? | 0 | 0 | 1 | 0 | 0 | 2,070 |
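A small sketch of the PixelArray option, which needs no NumPy (the image file name is a placeholder):

```python
import pygame

pygame.init()
screen = pygame.display.set_mode((200, 200))
surf = pygame.image.load('character.png').convert()  # placeholder file

pa = pygame.PixelArray(surf)
pa.replace((255, 255, 255), (255, 0, 0))  # recolour white to red
del pa  # releases the surface lock

screen.blit(surf, (0, 0))
pygame.display.flip()
```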
15,093,780 | 2013-02-26T16:06:00.000 | 1 | 0 | 1 | 1 | 0 | python,windows,python-2.7,save | 1 | 15,094,305 | 0 | 1 | 0 | true | 0 | 0 | Does it contain some or all of the current execution state of your program? Yes. Is it in a form from which you could easily extract the information in the user-level format you are probably looking for? Probably not. It will dump the state of the entire Python interpreter, including the data as represented in memory for the specific Python program that is running. To reconstruct that data, I'm pretty sure you'd need to run the Python interpreter itself in debug mode, then try to reconstruct your data from whatever your C debugger can piece together. If this sounds very difficult or impossible to you, then you probably have some understanding of what it entails. | 1 | 0 | 0 | 0 | I have a Python program that had some kind of error that prevents it from saving my data. The program is still running, but I cannot save anything. Unfortunately, I really need to save this data and there seems to be no other way to access it.
Does the DMP file created for the process through the task manager contain the data my program collected, and if so, how do I access it?
Thanks. | Can you read variable data of an already running Python Script from its DMP file in Windows? | 0 | 1.2 | 1 | 0 | 0 | 165 |
15,101,770 | 2013-02-26T23:55:00.000 | 0 | 0 | 0 | 0 | 0 | javascript,python,google-chrome,web-applications,tsv | 0 | 15,101,984 | 0 | 3 | 1 | true | 1 | 0 | The whole point of a web application is that the GUI is written in HTML, CSS, and JavaScript, not Python. However, it talks to a web service, which can be written in Python.
For a well-written desktop app, the transition should be pretty easy. If you've already got a clean separation between the GUI part and the engine part of your code, the engine part will only need minor changes (and maybe stick it behind, e.g., a WSGI server). You will have to rewrite the GUI part for the web, but in a complex app, that should be the easy part.
However, many desktop GUI apps don't have such a clean separation. For example, if you have button handlers that directly do stuff to your model, there's really no way to make that work without duplicating the model on both sides (the Python web service and the JS client app) and synchronizing the two, which is a lot of work and leads to a bad experience. In that case, you have to decide between rewriting most of your app from scratch, or refactoring it to the point where you can web-service-ify it.
If you do choose to go the refactoring route, I'd consider adding a second local interface (maybe using the cmd module to build a CLI, or tkinter for an alternate GUI), because that's much easier to do. Once the same backend code can support your PyQt GUI and your cmd GUI, adding a web interface is much easier. | 3 | 0 | 0 | 0 | I have a small Python program to help my colleagues analyse some TSV data. The data is not big, usually below 20MB. I developed the GUI with PyQt. I want to change this desktop program to a web app so my colleagues don't have to upgrade the program every time I change it or fix a bug. They can just go to a website and use Chrome as the GUI.
So how do I do this? I spend most of my time developing desktop programs and only know some basic web development. I have read about some frameworks such as Flask and web2py, and know how to use HTML and JavaScript to make buttons, but have no clue how to achieve my purpose.
Can someone give me a practical way to do this?
It'd be better if the users don't have to upload the local data to the server. Maybe just download the Python code from the server and then execute it in Chrome. Is this possible?
Thanks. | How to setup a web app which can handle local data without uploading the data? Use python | 1 | 1.2 | 1 | 0 | 1 | 218 |
15,101,770 | 2013-02-26T23:55:00.000 | 1 | 0 | 0 | 0 | 0 | javascript,python,google-chrome,web-applications,tsv | 0 | 15,101,809 | 0 | 3 | 1 | false | 1 | 0 | No, you cannot run Python code in a web browser.[1] You'd have to port the core of your application to JavaScript to do it all locally.
Just do the upload. 20MB isn't all that much data, and if it's stored on the server then they can all look at each others' results, too.
[1] There are some tools that try to transpile Python to JavaScript: pyjs compiles directly, and Emscripten is an entire LLVM interpreter in JS that can run CPython itself. I wouldn't really recommend relying on these. | 3 | 0 | 0 | 0 | I have a small Python program to help my colleagues analyse some TSV data. The data is not big, usually below 20MB. I developed the GUI with PyQt. I want to change this desktop program to a web app so my colleagues don't have to upgrade the program every time I change it or fix a bug. They can just go to a website and use Chrome as the GUI.
So how do I do this? I spend most of my time developing desktop programs and only know some basic web development. I have read about some frameworks such as Flask and web2py, and know how to use HTML and JavaScript to make buttons, but have no clue how to achieve my purpose.
Can someone give me a practical way to do this?
It'd be better if the users don't have to upload the local data to the server. Maybe just download the Python code from the server and then execute it in Chrome. Is this possible?
Thanks. | How to setup a web app which can handle local data without uploading the data? Use python | 1 | 0.066568 | 1 | 0 | 1 | 218 |
15,101,770 | 2013-02-26T23:55:00.000 | 0 | 0 | 0 | 0 | 0 | javascript,python,google-chrome,web-applications,tsv | 0 | 32,979,390 | 0 | 3 | 1 | false | 1 | 0 | If I get your point correctly, you want
Web connection, so your python program updated on server, client get it before using it.
Data store on local to avoid upload big file.
You can write a python program to check a server location to get your latest program if needed. You need a url / server file for program version / created date/time information to determine if you need to update or not.
After get latest python program, then start this python program to run locally.
With this said, What you need is to update your program to add below features:
Access your server, to get latest version information
Check against current version to see if you need to download latest program
Download latest version and use that to run locally.
Does this solve your problem? | 3 | 0 | 0 | 0 | I have a small python program to help my colleagues to analyse some tsv data. The data is not big, usually below 20MB. I developed the GUI with PyQT. I want to change this desktop program to a web app so my colleagues don't have to upgrade the program every time I change it or fix a bug. They can just go to a website and use Chrome as the GUI.
So how do I do this? I spend most of my time developing desktop program and just know some basic web developing knowledges. I have read some framework such as flask and web2py, and know how to use HTML and Javascript to make buttons but no clue to achieve my purpose.
Can someone give me a practical way to do this?
It'd be better if the user don't have to upload the local data to the server. Maybe just download the python code from server then execute in Chrome. Is this possible?
Thanks. | How to setup a web app which can handle local data without uploading the data? Use python | 1 | 0 | 1 | 0 | 1 | 218 |
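A rough sketch of that self-update flow (the URLs, file names and version string are all hypothetical):

```python
import urllib2  # Python 2, matching the question's environment

VERSION_URL = 'http://example.com/app/version.txt'
PROGRAM_URL = 'http://example.com/app/analyser.py'
LOCAL_VERSION = '1.2'

remote_version = urllib2.urlopen(VERSION_URL).read().strip()
if remote_version != LOCAL_VERSION:
    # Fetch and overwrite the local copy, then run it.
    with open('analyser.py', 'wb') as f:
        f.write(urllib2.urlopen(PROGRAM_URL).read())
    print('updated to %s' % remote_version)
```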
15,104,090 | 2013-02-27T04:13:00.000 | 0 | 0 | 0 | 0 | 0 | python,png | 1 | 15,104,289 | 0 | 2 | 0 | false | 0 | 1 | Is there a reason why this is a bad idea or should I keep toying with it?
This is essentially like encoding the file type in the file extension. The only bad thing about encoding file metadata in the filename is that it's quite limited in space. You can encode much richer metadata if you use a proper PNG metadata field or a separate file. Encoding metadata in the file name is hard to extend with new fields in a new version, with optional fields, etc.
For example, you might find out that you want a miscellaneous sprite sheet where each frame has a different size, so you need to encode the position, size, class name, and serial number of each individual frame.
It's fine as long as you don't expect to want to encode much other metadata in the future. | 1 | 0 | 0 | 0 | I am making a little game app that allows users to load up textures manually. One such example would be a standard deck of 52 cards.
Would it be bad to store the processing info as:
Card_79x123_53_53.png
Upon getting the filename from a file dialog, I would split on the underscores to get the following info (a parsing sketch follows after this question):
ObjectType : matched to list
(w,h)
Number of objects expected (in case there's tailing empty space)
Extra info (in this case the location of the face-down texture relative to N above)
and the dimensions of the image provide the rest, of course.
ANY error in processing would of course be raised and the attempt to load textures would be rejected.
Is there a reason why this is a bad idea or should I keep toying with it? | Is it a bad idea to use a PNG image's filename to store the info on how to process it? | 1 | 0 | 1 | 0 | 0 | 65 |
15,104,090 | 2013-02-27T04:13:00.000 | 0 | 0 | 0 | 0 | 0 | python,png | 1 | 15,104,140 | 0 | 2 | 0 | true | 0 | 1 | It would probably be better to ask the user explicitly how to process the image, either by adding extra controls to the file dialog or by showing another dialog after the file dialog is submitted. In this case, information taken from the file name could be used as hints to pre-fill these extra controls with values, but the user would have a chance to correct what is wrong and fill in what is missing. | 2 | 0 | 0 | 0 | I am making a little game app that allows users to load up textures manually. One such example would be a standard deck of 52 cards.
Would it be bad to store the processing info as:
Card_79x123_53_53.png
Upon getting the filename from a file dialog, I would split on the underscores to get the following info:
ObjectType : matched to list
(w,h)
Number of objects expected (in case there's trailing empty space)
Extra info (in this case the location of the face-down texture relative to N above)
and the dimensions of the image provide the rest, of course.
ANY error in processing would of course be raised and the attempt to load textures would be rejected.
Is there a reason why this is a bad idea or should I keep toying with it? | Is it a bad idea to use a PNG image's filename to store the info on how to process it? | 1 | 1.2 | 1 | 0 | 0 | 65 |
15,105,183 | 2013-02-27T05:56:00.000 | 1 | 0 | 0 | 0 | 0 | python,sockets,ssh,paramiko | 0 | 15,105,597 | 0 | 1 | 0 | false | 0 | 0 | Try switching off the Windows firewall. It's a network error; it should not be caused by SSH key problems.
Error Code 10060: Connection timeout
Background: The gateway could not receive a timely response from the website you are trying to access. This might indicate that the network is congested, or that the website is experiencing technical difficulties. | 1 | 0 | 0 | 0 | It seems the socket connection through paramiko (v1.10.0) is not stable.
I have two computers. The Python code is on the PC. The connection is sometimes successful and sometimes not (same code). When the paramiko code on the PC fails (socket.error, 10060), I use my Mac to SSH into the server via the terminal, and everything is fine.
I use set_missing_host_key_policy in the code. But the Mac has the key, I guess; I typed yes when logging in the first time.
If the unstable connection is caused by the host key, how do I get the host key? From the server or from somewhere in my local folder (Win7)? | python paramiko module socket.error, errno 10060 | 0 | 0.197375 | 1 | 0 | 1 | 3,123 |
15,121,468 | 2013-02-27T20:13:00.000 | 2 | 1 | 0 | 1 | 0 | python,python-2.6 | 0 | 15,121,639 | 0 | 2 | 0 | false | 0 | 0 | If you really need to check this, Pavel Anossov's answer is the way to do it, and it's pretty much the same as your initial guess.
But do you really need to check this? Why not just write a Python script that writes to stdout and/or stderr, and your cron job can just redirect to log files?
Or, even better, use the logging module and let it write to syslog or whatever else is appropriate and also write to the terminal if there is one? | 1 | 2 | 0 | 0 | I have a Python script that normally runs out of cron. Sometimes, I want to run it myself in a (Unix) shell, and if so, have it write its output to the terminal instead of writing to a log file.
What is the pythonic way of determining if a script is running out of cron or in an interactive shell (I mean bash, ksh, etc. not the python shell)?
I could check for the existence of the TERM environment variable perhaps? That makes sense but seems deceptively simple...
Could os.isatty somehow be used?
I'm using Python 2.6 if it makes a difference. Thanks! | How to check if I'm running in a shell (have a terminal) in Python? | 0 | 0.197375 | 1 | 0 | 0 | 206 |
15,134,201 | 2013-02-28T11:20:00.000 | 0 | 0 | 0 | 0 | 0 | python-2.7,sass,compass-sass | 0 | 15,987,894 | 0 | 2 | 0 | false | 0 | 0 | You don't need Compass with pyScss. Just run "python pyScss -mscss" to watch your working dir for changes and compile the .scss (or .sass) to .css. | 1 | 0 | 0 | 0 | I am on Python 2.7, pyScss 1.15 and Compass 0.12.2, but Compass doesn't work. Could someone advise how to make it work? | Compass not working with pyScss 1.1.5 | 0 | 0 | 1 | 0 | 0 | 312 |
15,171,378 | 2013-03-02T06:07:00.000 | 1 | 0 | 0 | 0 | 0 | python,image,drag-and-drop,pygame | 0 | 15,173,810 | 0 | 2 | 0 | false | 0 | 1 | What methods are you using to draw the images? It's hard to answer this question without that.
If you aren't already doing this, you could use a class to hold data about your image, such as position and geometry. | 2 | 0 | 0 | 0 | I am coding a mouse drag-and-drop effect for images. I want to record the upper-left point of the image each time I drag and drop it; is there any way to get it? | How to get image position in Pygame | 1 | 0.099668 | 1 | 0 | 0 | 1,791 |
15,171,378 | 2013-03-02T06:07:00.000 | 1 | 0 | 0 | 0 | 0 | python,image,drag-and-drop,pygame | 0 | 15,177,879 | 0 | 2 | 0 | false | 0 | 1 | If you derive your classes from pygame.sprite.Sprite, you can get the position from guy.rect. Depending on whether you want the center, the top left, or the full rect:
guy.rect.topleft or guy.rect.center or guy.rect | 1 | 0 | 0 | 0 | I am coding a mouse drag-and-drop effect for images. I want to record the upper-left point of the image each time I drag and drop it; is there any way to get it? | How to get image position in Pygame | 1 | 0.099668 | 1 | 0 | 0 | 1,791 |
15,172,826 | 2013-03-02T09:21:00.000 | 0 | 0 | 0 | 0 | 0 | python,css,heroku,responsive-design,flask | 0 | 15,181,474 | 0 | 1 | 0 | false | 1 | 0 | Bug found. It was a very, very obscure rendering issue with the text-indent CSS attribute that only seemed to affect the iPhone 5.
Additionally, if you ever need to debug Google Chrome for iPhone, clearing the cache won't do anything if you don't delete the app from the multitasking menu too (the bar that comes up when you double tap the home button). That literally took me half an hour to figure out. | 1 | 0 | 0 | 0 | First question posted to Stack Overflow but have spent many hours reading answers here :).
I'm creating a Heroku Python app and am using responsive-design media queries in my CSS. I deploy my app to Heroku and visit myherokuapp.herokuapp.com. The website looks fine in a laptop browser, with the responsive design elements working as well. Visiting the same URL on my iPhone, however, seems to show a page where one of my CSS files (the media queries) is loading but the other (the main CSS file) is not.
Does Heroku cache CSS files? I read somewhere that you have to host static files elsewhere if you have a Django app, but I'm not sure if that's applicable to me. I'm also using the Flask function <link rel="stylesheet" href="{{ url_for('static', filename='css/style.css') }}">. Does that have anything to do with it?
Edit: Does anyone know how to run the equivalent of the Firefox/Google Inspector on mobile? That would really help me figure out what files are there and what aren't. | Heroku Python App - CSS Not Loading On IPhone Only | 1 | 0 | 1 | 0 | 0 | 381 |
15,180,611 | 2013-03-02T23:24:00.000 | 5 | 1 | 0 | 0 | 0 | c++,boost,boost-python | 0 | 15,180,650 | 0 | 1 | 0 | true | 0 | 0 | There are two ways to interoperate:
1) from a "Python process", call functions written in C++.
Python already has a system to load DLLs; they're called "extension modules". Boost.Python can compile your source to produce one. Basically you write a little wrapper to declare a function callable from Python, and the "metaprogramming" is there to do stuff like detecting what types the C++ function takes and returns, so that it can emit the right code to convert those from/to the equivalent Python types.
2) from a "C++ process", launch and control the Python interpreter.
Python provides a C API to do this, and Boost.Python knows how to use it. | 1 | 3 | 0 | 0 | I'm a newbie to Boost, and the one of its libraries I can't understand is Boost.Python. Can anyone explain to me in detail how this interoperability is achieved? In the documentation there are only a few words about metaprogramming.
P.S. I tried to look at the code, but because of my lack of C++ knowledge I didn't understand the principles.
Thanks in advance | How does boost::python work?Any ideas about the realisation details? | 0 | 1.2 | 1 | 0 | 0 | 301 |
15,187,345 | 2013-03-03T15:42:00.000 | 0 | 0 | 0 | 0 | 0 | jquery,python,selenium-webdriver | 0 | 22,935,427 | 0 | 2 | 1 | false | 0 | 0 | We can use this funda (expressed here in the Python bindings rather than the Java API): if len(driver.find_elements_by_id(element_id)) > 0, the element is present; otherwise it is not present. The same applies for locating by value. Please let me know whether my funda is OK or not. | 1 | 0 | 0 | 0 | I am trying to find the presence of an element by finding the length of a jQuery selection.
How can we capture the length of a web element in a variable, so that I can check the value of the variable to make a decision?
Or is there any other way to accomplish the same result? I am using Python, Selenium WebDriver and jQuery. Thanks in advance. | When using python and selenium how to find the presence of an element based on id and value | 1 | 0 | 1 | 0 | 1 | 467 |
15,207,938 | 2013-03-04T18:00:00.000 | 1 | 0 | 1 | 0 | 0 | python,virtualenv,setuptools | 1 | 33,958,217 | 0 | 1 | 0 | false | 0 | 0 | Use a recent version of virtualenv and you will not see this error. | 1 | 5 | 0 | 0 | If you try to run virtualenv with the environment variable PYTHONDONTWRITEBYTECODE=true set, it gives this error:
The PYTHONDONTWRITEBYTECODE environment variable is not compatible with setuptools. Either use --distribute or unset PYTHONDONTWRITEBYTECODE.
Why does setuptools require the ability to write bytecode?
I don't particularly like having .pyc files around, so I like to prevent them from being written.
(I'm not asking how to get around this; that's trivial: just add PYTHONDONTWRITEBYTECODE="" at the beginning of any command that requires the flag to be unset, or unset it globally) | Why does setuptools need to write bytecode? | 0 | 0.197375 | 1 | 0 | 0 | 366 |
15,210,454 | 2013-03-04T20:25:00.000 | 1 | 0 | 1 | 1 | 0 | python,console | 0 | 15,210,530 | 0 | 2 | 0 | true | 0 | 0 | http://stackoverflow.com/questions/7054424/python-not-recognised-as-a-command
Add Python to your PATH environment variable. You should then be able to use it anywhere. | 1 | 1 | 0 | 0 | I've looked around for an answer to this, but I don't know how to phrase it in a way that Google will understand.
I'm trying to learn Python, and I've installed it on my machine. However, when I just type "python" in cmd.exe, the Python app is not found/launched.
I have to manually go to the directory in which python.exe is found in order to run my Python commands. Is this normal? Online tutorials seem to indicate that I should be able to run the app from anywhere :s
I'm on Win7, and trying to run Python from the Django stack by BitNami. | How to start app without navigating to its directory | 0 | 1.2 | 1 | 0 | 0 | 113 |
15,221,155 | 2013-03-05T10:13:00.000 | 1 | 0 | 1 | 0 | 0 | python,multithreading,instance | 0 | 15,221,754 | 0 | 1 | 0 | false | 1 | 0 | You can't (easily) "get an instance of a running program" from outside. You can certainly instrument your program so that it communicates its statistics somehow, e.g. via a socket, or, as an even lower-tech solution, you could get it to store the relevant data periodically in a file on disk or in a database, which your web app could read. | 1 | 0 | 0 | 0 | Suppose a Python program is running, and say an object of a class in that program can give you some stats. If I have to develop a web UI to display the stats, how do I get the instance of that class which is running (as a separate desktop app) and display the stats on the web, using web2py or Django? | Get instance of an running Python program | 0 | 0.197375 | 1 | 0 | 0 | 160 |
15,221,473 | 2013-03-05T10:29:00.000 | 8 | 0 | 1 | 0 | 0 | python,upgrade,virtualenv,pip,package-managers | 0 | 65,086,049 | 1 | 23 | 0 | false | 0 | 0 | For Windows,
go to the command prompt
and use this command:
python -m pip install --upgrade pip
Don't forget to restart the editor, to avoid any errors.
You can check the version of pip with
pip --version
If you want to install a particular version of pip, for example version 18.1, then use this command:
python -m pip install pip==18.1 | 3 | 771 | 0 | 0 | I'm able to update pip-managed packages, but how do I update pip itself? According to pip --version, I currently have pip 1.1 installed in my virtualenv and I want to update to the latest version.
What's the command for that? Do I need to use distribute or is there a native pip or virtualenv command? I've already tried pip update and pip update pip with no success. | How do I update/upgrade pip itself from inside my virtual environment? | 0 | 1 | 1 | 0 | 0 | 1,701,097 |
15,221,473 | 2013-03-05T10:29:00.000 | 0 | 0 | 1 | 0 | 0 | python,upgrade,virtualenv,pip,package-managers | 0 | 72,138,923 | 1 | 23 | 0 | false | 0 | 0 | While updating pip in a virtual env, use the full path to the environment's python in the command.
Environment's folder structure:
myenv\scripts\python
h:\folderName\myenv\scripts\python -m pip install --upgrade pip | 3 | 771 | 0 | 0 | I'm able to update pip-managed packages, but how do I update pip itself? According to pip --version, I currently have pip 1.1 installed in my virtualenv and I want to update to the latest version.
What's the command for that? Do I need to use distribute or is there a native pip or virtualenv command? I've already tried pip update and pip update pip with no success. | How do I update/upgrade pip itself from inside my virtual environment? | 0 | 0 | 1 | 0 | 0 | 1,701,097 |
15,221,473 | 2013-03-05T10:29:00.000 | 0 | 0 | 1 | 0 | 0 | python,upgrade,virtualenv,pip,package-managers | 0 | 49,479,521 | 1 | 23 | 0 | false | 0 | 0 | I had installed Python in C:\Python\Python36, so I went to the Windows command prompt and typed "cd C:\Python\Python36" to get to the right directory. Then I entered "python -m pip install --upgrade pip" and all was good!
What's the command for that? Do I need to use distribute or is there a native pip or virtualenv command? I've already tried pip update and pip update pip with no success. | How do I update/upgrade pip itself from inside my virtual environment? | 0 | 0 | 1 | 0 | 0 | 1,701,097 |
15,237,706 | 2013-03-06T02:07:00.000 | 0 | 0 | 0 | 0 | 0 | python,django,amazon-s3 | 0 | 15,782,517 | 0 | 2 | 0 | false | 1 | 0 | You can just create index.html inside the /static-pages/12345/ folder and it will be served. | 1 | 2 | 0 | 0 | I'm planning to build a Django app to generate and later serve static pages (probably stored on S3). When users visit a URL like mysite.com/static-pages/12345, the static file in my S3 bucket named 12345.html should be served. That static file might be the static HTML page of a blog page my site has generated for the user, for example.
This is different from including static resources like CSS/JavaScript files on a page that is rendered as a Django template, since I already know how to use Django templates and SQL databases; what's unfamiliar to me is that my "data" is now a file on S3 rather than an entry in a database AND that I don't actually need to use a template.
How exactly can I retrieve the requested data (i.e. a static page) and return it to the user? I'd like to minimize performance penalties within reason, although of course it would be fastest if users directly requested their static pages from S3 (I don't want them to do this).
A few additional questions:
I've read elsewhere about a Django flatpages app which stores HTML pages in a database, but it seems like static HTML pages are best stored on a filesystem like S3, no?
Is there a way to have the request come in to my Django application and have S3 serve the file directly while making it appear to have come from my application (i.e. the browser url still says mysite.com/static-pages/12345, but the page did not go through my Django server)?
Thanks very much! | Serve Static Pages from S3 using Django | 0 | 0 | 1 | 0 | 0 | 795 |
15,260,422 | 2013-03-06T23:45:00.000 | 0 | 1 | 0 | 1 | 0 | python,testing,nose,pytest | 0 | 25,073,350 | 0 | 1 | 0 | false | 0 | 0 | I am not sure if this would help. But if you know ahead of time how you want to divide up your tests, instead of having pytest distribute your tests, you could use your continuous integration server to call a different run of pytest for each different machine. Using -k or -m to select a subset of tests, or simply specifying different test dir paths, you could control which tests are run together. | 1 | 3 | 0 | 0 | I have several thousand tests that I want to run in parallel. The tests are all compiled binaries that give a return code of 0 or non-zero (on failure). Some unknown subsets of them try to use the same resources (files, ports, etc). Each test assumes that it is running independently and just reports a failure if a resource isn't available.
I'm using Python to launch each test using the subprocess module, and that works great serially. I looked into Nose for parallelizing, but I need to autogenerate the tests (to wrap each of the 1000+ binaries into a Python class that uses subprocess) and Nose's multiprocessing module doesn't support parallelizing autogenerated tests.
I ultimately settled on PyTest because it can run autogenerated tests on remote hosts over SSH with the xdist plugin.
However, as far as I can tell, it doesn't look like xdist supports any kind of control of how the tests get distributed. I want to give it a pool of N machines, and have one test run per machine.
Is what I want possible with PyTest/xdist? If not, is there a tool out there that can do what I'm looking for? | Controlling the distribution of tests with py.test xdist | 0 | 0 | 1 | 0 | 0 | 1,185 |
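To make the answer's "split at the CI level" suggestion concrete: each machine (or CI job) runs its own invocation with a selector, e.g. py.test -k group_a tests/ on machine 1 and py.test -k group_b tests/ on machine 2, or with markers, py.test -m slow vs. py.test -m "not slow". The group and marker names here are placeholders; -k and -m themselves are standard py.test selection options.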
15,263,196 | 2013-03-07T04:39:00.000 | 2 | 0 | 0 | 0 | 0 | python,kettle | 0 | 15,274,794 | 0 | 1 | 0 | true | 0 | 0 | It doesn't support it directly, from what I've seen.
However, there is a MongoDB input step, and a lot of work has been done on it recently (and is still ongoing).
So given there is a MongoDB input step, if you're already using an ETL tool, why would you want to make it execute a Python script to do the job? | 1 | 3 | 0 | 0 | I have a problem in Kettle connecting to Python. In Kettle, I only find the JS script module.
Does Kettle support Python directly? I mean, can I call a Python script in Kettle without using JS or anything else?
By the way, I want to move data from Oracle to Mongo regularly, and I chose to use Python to implement the transformation. So, without external files, is there an easy way to keep a relational DB and a non-relational DB in sync?
Thanks a lot. | how to call python script in kettle | 0 | 1.2 | 1 | 1 | 0 | 6,043 |
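Since the question also asks about doing the Oracle-to-Mongo copy in plain Python, here is a minimal sketch using cx_Oracle and pymongo. The connection strings, table and column names are placeholders, and real synchronization would need change tracking (e.g. a last-modified column) rather than a full copy every run.

    import cx_Oracle
    import pymongo

    ora = cx_Oracle.connect('user/password@host/service')   # placeholder DSN
    coll = pymongo.MongoClient('localhost', 27017)['mydb']['mycoll']

    cur = ora.cursor()
    cur.execute('SELECT id, name, updated_at FROM my_table')  # placeholder query
    cols = [d[0].lower() for d in cur.description]
    for row in cur:
        doc = dict(zip(cols, row))
        # Upsert keyed on the relational primary key
        coll.update({'id': doc['id']}, doc, upsert=True)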
15,282,318 | 2013-03-07T21:41:00.000 | 0 | 0 | 0 | 0 | 0 | python,django,python-2.7 | 0 | 15,282,357 | 0 | 4 | 0 | false | 1 | 0 | The created project should have a static folder. Put all resources (images, ...) in there.
Then, in your HTML template, you can reference {{ STATIC_URL }} (not STATIC_ROOT, which is a filesystem path) and append the resource path relative to the static folder.
Assuming I've just created a brand new project with Django 1.4, what all do I need to do to be able to render images? Where should I put the media and static folders? | Django Static Setup | 0 | 0 | 1 | 0 | 0 | 528 |
15,282,318 | 2013-03-07T21:41:00.000 | 2 | 0 | 0 | 0 | 0 | python,django,python-2.7 | 0 | 15,283,333 | 0 | 4 | 0 | true | 1 | 0 | Put your static files into <app>/static or add an absolute path to STATICFILES_DIRS
Configure your web server (you should not serve static files with Django) to serve files in STATIC_ROOT
Point STATIC_URL to the base URL the web server serves
Run ./manage.py collectstatic
Be sure to use RequestContext in your render calls and {{ STATIC_URL }} to prefix paths
Coffee and pat yourself on the back
A little bit more about running a web server in front of Django. Django is practically an application server. It has not been designed to be any good in serving static files. That is why it actively refuses to do that when DEBUG=False. Also, the Django development server should not be used for production. This means that there should be something in front of Django at all times. It may be a WSGI server such as gunicorn or a 'real' web server such as nginx or Apache.
If you are running a reverse proxy (such as nginx or Apache) you can bind /static to a path in the filesystem and pass the rest of the traffic through to Django. That means your STATIC_URL can be a relative path. Otherwise you will need to use an absolute URL.
Assuming I've just created a brand new project with Django 1.4, what all do I need to do to be able to render images? Where should I put the media and static folders? | Django Static Setup | 0 | 1.2 | 1 | 0 | 0 | 528 |
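A minimal sketch of the settings the answer above walks through (the paths are placeholders for your own project layout):

    # settings.py
    import os
    PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__))

    STATIC_URL = '/static/'                                        # base URL the web server serves
    STATIC_ROOT = os.path.join(PROJECT_ROOT, 'collected_static')   # collectstatic target
    STATICFILES_DIRS = (
        os.path.join(PROJECT_ROOT, 'static'),                      # your project-level static files
    )

    # In a template rendered with a RequestContext:
    #   <img src="{{ STATIC_URL }}img/logo.png">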
15,282,336 | 2013-03-07T21:42:00.000 | 0 | 0 | 0 | 0 | 0 | python,nlp,nltk,sentiment-analysis,rapidminer | 0 | 15,319,789 | 0 | 1 | 0 | false | 0 | 0 | Well, I think that RapidMiner is very interesting and can handle this task. It contains several operators dealing with text mining. Also, it allows new operators to be created quite easily. | 1 | 2 | 1 | 0 | I'm doing sentiment analysis for the Arabic language and I want to create my own corpus. To do that, I collected 300 statuses from Facebook and classified them into positive and negative. Now I want to tokenize these statuses in order to obtain a list of words, then generate unigrams, bigrams and trigrams and use cross-fold validation. For the moment I'm using NLTK in Python; is this software able to do this task for the Arabic language, or would RapidMiner be better to work with? What do you think? And I'm wondering how to generate the bigrams and trigrams and use cross-fold validation. Any ideas? | creating arabic corpus | 1 | 0 | 1 | 0 | 0 | 1,057
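For the NLTK part of the question, a minimal sketch of tokenizing and generating n-grams, with a crude whitespace split (Arabic-aware tokenization may need extra work) and a naive fold split for cross-validation:

    from nltk.util import ngrams

    statuses = [u'...', u'...']  # your 300 labelled statuses

    def tokenize(text):
        return text.split()  # crude; swap in an Arabic-aware tokenizer if needed

    for status in statuses:
        tokens = tokenize(status)
        unigrams = tokens
        bigrams = list(ngrams(tokens, 2))
        trigrams = list(ngrams(tokens, 3))

    # Naive 10-fold split for cross-validation
    k = 10
    folds = [statuses[i::k] for i in range(k)]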
15,297,322 | 2013-03-08T15:21:00.000 | 4 | 0 | 0 | 0 | 0 | python-2.7,jenkins | 0 | 15,340,482 | 0 | 8 | 0 | true | 0 | 0 | You can query the last build timestamp to determine if the build finished. Compare it to what it was just before you triggered the build, and see when it changes. To get the timestamp, add /lastBuild/buildTimestamp to your job URL
As a matter of fact, in your Jenkins, add /lastBuild/api/ to any job, and you will see a lot of API information. It even has a Python API, but I'm not familiar with that so I can't help you further
However, if you were using XML, you can add lastBuild/api/xml?depth=0 and inside the XML you can see the <changeSet> object with a list of revisions/commit messages that triggered the build | 1 | 10 | 0 | 0 | I am using Python 2.7 and Jenkins.
I am writing some code in Python that will perform a checkin and wait/poll for Jenkins job to be complete. I would like some thoughts on around how I achieve it.
Python function to create a check-in in Perforce -> This can be done easily since P4 has a CLI
Python code to detect when a build got triggered -> I have the changelist and the job number. How do I poll the Jenkins API for the build log to check whether it contains the appropriate changelists? The output of this step is a build URL for the job that is running
How do I wait till the Jenkins job is complete?
Can I use snippets from the Jenkins Rest API or from Python Jenkins module? | Wait until a Jenkins build is complete | 0 | 1.2 | 1 | 0 | 1 | 15,591 |
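A minimal polling sketch along the lines of the accepted answer, using Jenkins' JSON API (the URL is a placeholder; the timestamp, building and result fields are standard in Jenkins' lastBuild/api/json output; urllib2 matches the question's Python 2.7):

    import json
    import time
    import urllib2

    JOB = 'http://jenkins.example.com/job/myjob'   # placeholder

    def last_build():
        return json.load(urllib2.urlopen(JOB + '/lastBuild/api/json'))

    before = last_build()['timestamp']
    # ... submit the Perforce check-in that triggers the build ...
    while last_build()['timestamp'] == before:
        time.sleep(10)                 # wait for the new build to start
    while last_build()['building']:
        time.sleep(10)                 # wait for it to finish
    print(last_build()['result'])      # 'SUCCESS', 'FAILURE', ...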
15,304,785 | 2013-03-08T23:01:00.000 | 8 | 0 | 0 | 1 | 0 | python,symlink,homebrew | 0 | 15,304,867 | 0 | 5 | 0 | false | 0 | 0 | You definitely do not want to do this! You may only care about Python 3, but many people write code that expects python to symlink to Python 2. Changing this can seriously mess your system up. | 2 | 13 | 0 | 0 | I want to install Python using Homebrew and I noticed there are 2 different formulas for it, one for Python 2.x and another for 3.x. The first symlinks "python" and the other uses "python3". So I ran brew install python3.
I really only care about using python 3 so I would like the default command to be "python" instead of having to type "python3" every time. Is there a way to do this? I tried brew switch python 3.3 but I get a "python is not found in the Cellar" error. | In homebrew how do I change the python3 symlink to only "python" | 0 | 1 | 1 | 0 | 0 | 11,485 |
15,304,785 | 2013-03-08T23:01:00.000 | 1 | 0 | 0 | 1 | 0 | python,symlink,homebrew | 0 | 42,743,923 | 0 | 5 | 0 | false | 0 | 0 | As mentioned, this is not the best idea. The simplest thing to do is just run python3 in the terminal whenever you need Python 3. | 2 | 13 | 0 | 0 | I want to install Python using Homebrew and I noticed there are 2 different formulas for it, one for Python 2.x and another for 3.x. The first symlinks "python" and the other uses "python3". So I ran brew install python3.
I really only care about using python 3 so I would like the default command to be "python" instead of having to type "python3" every time. Is there a way to do this? I tried brew switch python 3.3 but I get a "python is not found in the Cellar" error. | In homebrew how do I change the python3 symlink to only "python" | 0 | 0.039979 | 1 | 0 | 0 | 11,485 |
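A lower-risk alternative to re-pointing the symlink, for what it's worth: a per-user shell alias such as alias python=python3 in your ~/.bash_profile. It changes what your interactive shell runs without touching the system's python, so other users and scripts that expect Python 2 are unaffected.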
15,304,934 | 2013-03-08T23:15:00.000 | 6 | 0 | 1 | 0 | 0 | python,exception | 1 | 15,304,996 | 0 | 5 | 0 | false | 0 | 0 | I fear those come straight from the standard C library, so you'll have to look it up in your system documentation. (GLibC, Microsoft, UNIX…) | 2 | 11 | 0 | 0 | For a specific Exception type (let's say for IOError), how can I extract the complete list of Errnos and descriptions like this:
Errno 2: No such file or directory
Errno 122: Disk quota exceeded
... | How to get the list of error numbers (Errno) for an Exception type in python? | 0 | 1 | 1 | 0 | 0 | 10,342 |
15,304,934 | 2013-03-08T23:15:00.000 | 4 | 0 | 1 | 0 | 0 | python,exception | 1 | 15,305,006 | 0 | 5 | 0 | false | 0 | 0 | look for errno.h on your system. | 2 | 11 | 0 | 0 | For a specific Exception type (let's say for IOError), how can i extract the complete list of Errnos and descriptions like this:
Errno 2: No such file or directory
Errno 122: Disk quota exceeded
... | How to get the list of error numbers (Errno) for an Exception type in python? | 0 | 0.158649 | 1 | 0 | 0 | 10,342 |
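Besides grepping errno.h as the answers suggest, Python itself ships the mapping in the standard library: the errno module has an errorcode dict (number -> symbolic name) and os.strerror() gives the description. A short sketch:

    import errno
    import os

    for num in sorted(errno.errorcode):
        print('Errno %d (%s): %s' % (num, errno.errorcode[num], os.strerror(num)))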
15,329,256 | 2013-03-11T00:01:00.000 | 0 | 0 | 0 | 0 | 0 | c++,python,algorithm,primes,prime-factoring | 0 | 15,329,656 | 0 | 3 | 0 | false | 0 | 0 | As Malvolio (indirectly) pointed out, I personally wouldn't use the prime factorization if you want to find factors in a range. I would start at t = (int)sqrt(n) and then decrement, checking (1) whether t is a factor and (2) whether t or n/t has reached the range (a flag), stopping once both have left the range.
Or if your range is relatively small, check versus those values themselves. | 1 | 1 | 1 | 0 | So I have algorithms (easily searchable on the net) for prime factorization and divisor acquisition but I don't know how to scale it to finding those divisors within a range. For example all divisors of 100 between 23 and 49 (arbitrary). But also something efficient so I can scale this to big numbers in larger ranges. At first I was thinking of using an array that's the size of the range and then use all the primes <= the upper bound to sieve all the elements in that array to return an eventual list of divisors, but for large ranges this would be too memory intensive.
Is there a simple way to just directly generate the divisors? | How can I efficiently get all divisors of X within a range if I have X's prime factorization? | 0 | 0 | 1 | 0 | 0 | 1,052 |
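To answer the title question directly, here is a sketch that generates divisors recursively from a prime factorization (given as {prime: exponent}) and keeps only those in the range; for very wide ranges you could additionally prune branches that already exceed the upper bound:

    def divisors(factors):
        """Yield all divisors from a factorization like {2: 2, 5: 2} (i.e. 100)."""
        primes = list(factors)

        def gen(i, d):
            if i == len(primes):
                yield d
                return
            p = primes[i]
            for _ in range(factors[p] + 1):
                for result in gen(i + 1, d):
                    yield result
                d *= p

        return gen(0, 1)

    # Divisors of 100 between 23 and 49 (the question's example)
    print(sorted(d for d in divisors({2: 2, 5: 2}) if 23 <= d <= 49))  # -> [25]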
15,345,864 | 2013-03-11T18:28:00.000 | 0 | 0 | 0 | 0 | 0 | python,database,development-environment | 0 | 15,346,132 | 0 | 2 | 0 | false | 0 | 0 | Make sure you have a Python program or programs to fill the databases with test data from scratch. This allows each developer to work from a different starting point, but also to test with the same environment. | 1 | 0 | 0 | 0 | I'm currently exploring using Python to develop my server-side implementation. I've decided to use SQLAlchemy for database stuff.
What I'm not currently too sure about is how it should be set up so that more than one developer can work on the project. For the code it is not a problem, but how do I handle the database modifications? How do the users sync databases, and how should initial data be set up? Should/can each developer use their own SQLite DB for development?
For production postgresql will be used but the developers must be able to work offline. | Multi developer environment python and sqlalchemy | 0 | 0 | 1 | 1 | 0 | 195 |
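A tiny sketch of the "seed script" idea from the answer, using SQLAlchemy; the model and data are placeholders, and each developer can point DATABASE_URL at their own SQLite file to rebuild an identical starting state:

    import os
    from sqlalchemy import create_engine, Column, Integer, String
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import sessionmaker

    Base = declarative_base()

    class User(Base):
        __tablename__ = 'users'
        id = Column(Integer, primary_key=True)
        name = Column(String(50))

    engine = create_engine(os.environ.get('DATABASE_URL', 'sqlite:///dev.db'))
    Base.metadata.drop_all(engine)   # rebuild from scratch
    Base.metadata.create_all(engine)

    session = sessionmaker(bind=engine)()
    session.add_all([User(name='alice'), User(name='bob')])
    session.commit()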
15,346,908 | 2013-03-11T19:27:00.000 | 0 | 0 | 0 | 0 | 0 | python,networking | 0 | 17,349,828 | 0 | 2 | 0 | false | 0 | 0 | pytun is not sufficient for this. It serves to connect your Python application to a system network interface. In effect, you become responsible for implementing that system network interface.
If you want traffic that is routed over that network interface to traverse an actual network, then it is the job of your Python program to do the actual network operations that move the data from host A to host B.
This is probably a lot of work to do well. I suggest you use an existing VPN tool instead. | 1 | 2 | 0 | 0 | I need to make a simple p2p VPN app. After a lot of searching I found a tun/tap module for Python called pytun that is used to make a tunnel. How can I use this module to create a tunnel between 2 remote peers?
All the attached docs only show how to create the tunnel interface on your local computer and configure it; they do not mention how to connect it to the remote peer. | how to use the tun/tap python module (pytun) to create a p2p tunnel? | 0 | 0 | 1 | 0 | 1 | 2,900
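To make the answer concrete: the glue the answer says you must write yourself is a loop that shovels packets between the TUN device and a real network socket. A rough, unencrypted sketch, assuming the python-pytun package's TunTapDevice API (including a selectable file descriptor); PEER and the addresses are placeholders, and a real VPN would additionally need encryption, keepalives and NAT traversal:

    import select
    import socket
    from pytun import TunTapDevice  # assumed API from the python-pytun package

    PEER = ('203.0.113.5', 9000)    # placeholder remote endpoint

    tun = TunTapDevice(name='p2ptun')
    tun.addr = '10.8.0.1'
    tun.dstaddr = '10.8.0.2'
    tun.netmask = '255.255.255.0'
    tun.mtu = 1400
    tun.up()

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(('0.0.0.0', 9000))

    while True:
        ready, _, _ = select.select([tun, sock], [], [])
        if tun in ready:
            sock.sendto(tun.read(tun.mtu + 16), PEER)   # local packet -> network
        if sock in ready:
            data, _ = sock.recvfrom(65535)
            tun.write(data)                             # network -> local packet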
15,353,057 | 2013-03-12T03:59:00.000 | 0 | 0 | 0 | 0 | 0 | sublimetext2,python-3.3 | 0 | 15,373,407 | 0 | 1 | 0 | false | 0 | 0 | You need to change the Sublime build system for Python. Copy the Python.sublime-build file from the Packages/Python folder to the Packages/User folder. In the file in the User folder, change the cmd option from python to c:/python33/python (where your Python 3.3 executable is located). | 1 | 0 | 0 | 0 | I am running Windows 7 32-bit and I am using Sublime Text 2. I want to know how I can change the default Python in ST2 to the Python 3.3 I downloaded. Any help would be great, thanks. | python 3.3 on sublime text 2 windows 7 | 0 | 0 | 1 | 0 | 0 | 1,111
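The edited Packages/User/Python.sublime-build might look roughly like this; the path is whatever your install used, -u just unbuffers output, and the file_regex/selector lines are what I recall from the stock ST2 build file (verify against your copy):

    {
        "cmd": ["c:/python33/python.exe", "-u", "$file"],
        "file_regex": "^[ ]*File \"(...*?)\", line ([0-9]*)",
        "selector": "source.python"
    }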
15,367,688 | 2013-03-12T17:11:00.000 | 0 | 0 | 0 | 1 | 0 | python,python-idle | 0 | 49,071,459 | 0 | 9 | 0 | false | 0 | 0 | Here's a way to reset IDLE's default working directory for MacOS if you launch Idle as an application by double-clicking it. You need a different solution if you launch Idle from a command line in Terminal. This solution is a permanent fix. You don't have to rechange the directory everytime you launch IDLE. I wish it were easier.
The idea is to edit a resource file inside of the IDLE package in Applications.
Start by finding the file. In Finder, go to IDLE in Applications (in the Python folder) as if you wanted to open it. Right click and select "Show Package Contents". Open Contents, then open Resources. In Resources, you'll see a file called idlemain.py. This file executes when you launch IDLE and sets, among other things, the working directory. We're going to edit that.
But before you can edit it, you need to give yourself permission to write to it. To do that, right click on idlemain.py and select Get Info. Scroll to the bottom of the Get Info window and you'll see the Sharing & Permissions section. On the bottom right there's a lock icon. Click the lock and follow the prompts to unlock it. Once it's unlocked, look to the left for the + (under the list of users with permissions). Click it. That will bring up a window with a list of users you can add. Select yourself (probably the name of your computer or your user account) and click Select. You'll see yourself added to the list of names with permissions. Click where it says "Read only" next to your name and change it to "Read & Write". Be careful not to change anything else. When you're done, click the lock again to lock the changes.
Now go back to idlemain.py and open it with any text editor (you could use Idle, TextEdit, or anything. Right under the import statement at the top is the code to change the default working directory. Read the comment if you like, then replace the single line of code under the comment with
os.chdir('path of your desired working directory')
Mine looks like this:
os.chdir('/Users/MyName/Documents/Python')
Save your changes (which should work because you gave yourself permission). Next time you start Idle, you should be in your desired working directory. You can check with the following commands:
import os
os.getcwd() | 3 | 11 | 0 | 0 | Is there a configuration file where I can set its default working directory? It currently defaults to my home directory, but I want to set it to another directory when it starts. I know I can do "import os" followed by "os.chdir("")" but that's kind of troublesome. It'd be great if there is a conf file that I can edit and change that setting, but I am unable to find it.
In particular, I've looked into my OS (Ubuntu)'s desktop entry '/usr/share/applications/idle-python3.2.desktop', which doesn't contain a conf file, but points to '/usr/lib/python3.2/idlelib/PyShell.py', which points to config-*.def conf files under the same folder, with 'config-main.def' being the most likely candidate. However I am unable to find where the default path is specified or how it can be changed.
It seems that the path is hard-coded in PyShell.py, though I could be wrong with my limited knowledge on Python. I will keep looking, but would appreciate it if somebody knows the answer on top of his or her head. Thanks in advance. | Default working directory for Python IDLE? | 0 | 0 | 1 | 0 | 0 | 37,378 |
15,367,688 | 2013-03-12T17:11:00.000 | -1 | 0 | 0 | 1 | 0 | python,python-idle | 0 | 54,316,970 | 0 | 9 | 0 | false | 0 | 0 | This ought to be the number one answer. I have been playing around with this for an hour or more and nothing worked. Paul explains this perfectly. It's just like the PATH statement in Windows. I successfully imported a module by appending my personal "PythonModules" path/dir on my Mac (starting at "/users/etc") using a simple
import xxxx command in Idle. | 3 | 11 | 0 | 0 | Is there a configuration file where I can set its default working directory? It currently defaults to my home directory, but I want to set it to another directory when it starts. I know I can do "import os" followed by "os.chdir("")" but that's kind of troublesome. It'd be great if there is a conf file that I can edit and change that setting, but I am unable to find it.
In particular, I've looked into my OS (Ubuntu)'s desktop entry '/usr/share/applications/idle-python3.2.desktop', which doesn't contain a conf file, but points to '/usr/lib/python3.2/idlelib/PyShell.py', which points to config-*.def conf files under the same folder, with 'config-main.def' being the most likely candidate. However I am unable to find where the default path is specified or how it can be changed.
It seems that the path is hard-coded in PyShell.py, though I could be wrong with my limited knowledge on Python. I will keep looking, but would appreciate it if somebody knows the answer on top of his or her head. Thanks in advance. | Default working directory for Python IDLE? | 0 | -0.022219 | 1 | 0 | 0 | 37,378 |
15,367,688 | 2013-03-12T17:11:00.000 | 0 | 0 | 0 | 1 | 0 | python,python-idle | 0 | 15,367,752 | 0 | 9 | 0 | false | 0 | 0 | It can change depending on where you installed Python. Open up IDLE, import os, then call os.getcwd() and that should tell you exactly which directory IDLE is working in. | 3 | 11 | 0 | 0 | Is there a configuration file where I can set its default working directory? It currently defaults to my home directory, but I want to set it to another directory when it starts. I know I can do "import os" followed by "os.chdir("")" but that's kind of troublesome. It'd be great if there is a conf file that I can edit and change that setting, but I am unable to find it.
In particular, I've looked into my OS (Ubuntu)'s desktop entry '/usr/share/applications/idle-python3.2.desktop', which doesn't contain a conf file, but points to '/usr/lib/python3.2/idlelib/PyShell.py', which points to config-*.def conf files under the same folder, with 'config-main.def' being the most likely candidate. However I am unable to find where the default path is specified or how it can be changed.
It seems that the path is hard-coded in PyShell.py, though I could be wrong with my limited knowledge on Python. I will keep looking, but would appreciate it if somebody knows the answer on top of his or her head. Thanks in advance. | Default working directory for Python IDLE? | 0 | 0 | 1 | 0 | 0 | 37,378 |
15,369,985 | 2013-03-12T19:07:00.000 | 0 | 0 | 0 | 0 | 0 | python,numpy,scipy | 0 | 15,370,151 | 0 | 5 | 0 | false | 0 | 0 | If you don't mind installing additional packages (for both Python and C++), you can use BSON (Binary JSON). | 3 | 6 | 1 | 0 | Right now I have a python program building a fairly large 2D numpy array and saving it as a tab delimited text file using numpy.savetxt. The numpy array contains only floats. I then read the file in one row at a time in a separate C++ program.
What I would like to do is find a way to accomplish this same task, changing my code as little as possible such that I can decrease the size of the file I am passing between the two programs.
I found that I can use numpy.savetxt to save to a compressed .gz file instead of a text file. This lowers the file size from ~2MB to ~100kB.
Is there a better way to do this? Could I, perhaps, write the numpy array in binary to the file to save space? If so, how would I do this so that I can still read it into the C++ program?
Thank you for the help. I appreciate any guidance I can get.
EDIT:
There are a lot of zeros (probably 70% of the values in the numpy array are 0.0000). I am not sure how to exploit this to generate a tiny file that my C++ program can read in. | python - saving numpy array to a file (smallest size possible) | 0 | 0 | 1 | 0 | 0 | 7,265
15,369,985 | 2013-03-12T19:07:00.000 | 1 | 0 | 0 | 0 | 0 | python,numpy,scipy | 0 | 15,370,191 | 0 | 5 | 0 | false | 0 | 0 | numpy.ndarray.tofile and numpy.fromfile are useful for direct binary output/input from Python. std::ostream::write and std::istream::read are useful for binary output/input in C++.
You should be careful about endianness if the data are transferred from one machine to another.
What I would like to do is find a way to accomplish this same task, changing my code as little as possible such that I can decrease the size of the file I am passing between the two programs.
I found that I can use numpy.savetxt to save to a compressed .gz file instead of a text file. This lowers the file size from ~2MB to ~100kB.
Is there a better way to do this? Could I, perhaps, write the numpy array in binary to the file to save space? If so, how would I do this so that I can still read it into the C++ program?
Thank you for the help. I appreciate any guidance I can get.
EDIT:
There are a lot of zeros (probably 70% of the values in the numpy array are 0.0000). I am not sure how to exploit this to generate a tiny file that my C++ program can read in. | python - saving numpy array to a file (smallest size possible) | 0 | 0.039979 | 1 | 0 | 0 | 7,265
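A minimal sketch of the tofile/fromfile route from this answer; writing float32 instead of the default float64 alone halves the file, and gzipping the result afterwards exploits the many zeros (the dtype and filename are choices for illustration):

    import numpy as np

    a = np.random.rand(100, 50)
    a[a < 0.7] = 0.0                           # mimic the ~70% zeros

    a.astype(np.float32).tofile('data.bin')    # raw little-endian float32 on x86

    # Read back; the C++ side can std::istream::read the same raw floats
    b = np.fromfile('data.bin', dtype=np.float32).reshape(100, 50)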
15,369,985 | 2013-03-12T19:07:00.000 | 1 | 0 | 0 | 0 | 0 | python,numpy,scipy | 0 | 19,226,920 | 0 | 5 | 0 | false | 0 | 0 | Use an HDF5 file; they are really simple to use through h5py and you can set a compression flag. Note that HDF5 also has a C++ interface. | 3 | 6 | 1 | 0 | Right now I have a python program building a fairly large 2D numpy array and saving it as a tab delimited text file using numpy.savetxt. The numpy array contains only floats. I then read the file in one row at a time in a separate C++ program.
What I would like to do is find a way to accomplish this same task, changing my code as little as possible such that I can decrease the size of the file I am passing between the two programs.
I found that I can use numpy.savetxt to save to a compressed .gz file instead of a text file. This lowers the file size from ~2MB to ~100kB.
Is there a better way to do this? Could I, perhaps, write the numpy array in binary to the file to save space? If so, how would I do this so that I can still read it into the C++ program?
Thank you for the help. I appreciate any guidance I can get.
EDIT:
There are a lot of zeros (probably 70% of the values in the numpy array are 0.0000). I am not sure how to exploit this to generate a tiny file that my C++ program can read in. | python - saving numpy array to a file (smallest size possible) | 0 | 0.039979 | 1 | 0 | 0 | 7,265
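A sketch of the h5py route with compression turned on; HDF5's gzip filter squeezes the 70% zeros very effectively, and HDF5's C++ API can read the same dataset back:

    import h5py
    import numpy as np

    a = np.zeros((100, 50))  # stand-in for your mostly-zero array

    with h5py.File('data.h5', 'w') as f:
        f.create_dataset('array', data=a, compression='gzip', compression_opts=9)

    with h5py.File('data.h5', 'r') as f:
        b = f['array'][...]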
15,372,361 | 2013-03-12T21:20:00.000 | 3 | 0 | 0 | 0 | 0 | python,django,content-management-system,django-cms | 0 | 15,381,624 | 0 | 1 | 0 | false | 1 | 0 | You need to create different page trees per language.
Every page has only one template. Use {% trans %} and {% blocktrans %} for translating strings in it, or {% if request.LANGUAGE_CODE == "en" %} (the variable Django's locale middleware actually sets).
If the templates really differ that much: don't add other languages to pages, but create different page trees with only one language each. | 1 | 0 | 0 | 0 | How can I add different base templates for different languages of the same page in django CMS?
I am trying to set up a page and show it in different languages, and for each language I need to use a different base template.
I am completely new to django CMS. Please help. | how to add different base templates for different languages of same page in django cms | 0 | 0.53705 | 1 | 0 | 0 | 404
15,381,092 | 2013-03-13T09:15:00.000 | 1 | 1 | 0 | 0 | 1 | python,amazon-web-services,amazon-sqs,amazon-sns | 0 | 15,879,297 | 0 | 3 | 0 | true | 0 | 0 | What you laid out will work in theory, but I am moved away from putting messages directly into queues, and instead put those messages in to SNS topics, and then subscribe the queues to the topics to get them there - gives you more flexibility to change things down the road without every touching the code or the servers that are in production.
For the what you are doing now, the SNS piece is unnecessary, but using will allow you to change functionality without touching you existing servers down the road.
For example: needs change and you want to add a process C that also kicks off every time the 'Start Process' runs on Sever B. Right thru the AWS SNS console you could direct a second copy of the message to another Queue that previously did not exist, and setup a server C that polls from that Queue (a fan out pattern).
Also, what I often like to do during initial rollout is add notifications to SNS so I know whats going on, i.e. every time the 'start process' event occurs, I subscribe my cell phone (or email address) to the topic so I get notified - I can monitor in real time what is (or isn't) happening. Once a period of time has gone by after a production deployment, I can go into AWS console and simply unsubscribe my email/cell from the process - without every touching any servers or code. | 3 | 2 | 0 | 0 | I want to co-ordinate telling Server B to start a process from Server A, and then when its complete, run an import script on Server A. I'm having a hard time working out how I should be using SQS correctly in this scenario.
Server A: Main Dedicated Server
Server B: Cloud Process Server
Server A sends message to SQS via SNS to say "Start Process"
Server B constantly polls SQS for "Start Process" message
Server B finds "Start Process" message on SQS
Server B runs "process.sh" file
Server B completes running "process.sh" file
Server B removes "Start Process" from SQS
Server B sends message to SQS via SNS to say "Start Import"
Server A constantly polls SQS for "Start Import" message
Server A finds "Start Import" message on SQS
Server A runs import.sh
Server A completes running "import.sh"
Server A removes "Start Import" from SQS
Is this how SQS should be used or am I missing the point completely? | How should Amazon SQS be used? Import / Process Scenario | 0 | 1.2 | 1 | 0 | 0 | 1,359 |
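A rough sketch of Server B's side of this loop in Python with boto (the classic AWS SDK of that era); the queue names, region and script path are placeholders, and in the SNS-fronted variant from the answer above you would publish to a topic instead of writing to the queue directly:

    import subprocess
    import time
    import boto.sqs
    from boto.sqs.message import Message

    conn = boto.sqs.connect_to_region('us-east-1')
    inbox = conn.get_queue('start-process')    # Server B polls this
    outbox = conn.get_queue('start-import')    # Server A polls the other one

    while True:
        for m in inbox.get_messages(num_messages=1):
            subprocess.check_call(['/opt/app/process.sh'])   # placeholder path
            inbox.delete_message(m)
            done = Message()
            done.set_body('Start Import')
            outbox.write(done)
        time.sleep(5)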
15,381,092 | 2013-03-13T09:15:00.000 | 1 | 1 | 0 | 0 | 1 | python,amazon-web-services,amazon-sqs,amazon-sns | 0 | 15,397,063 | 0 | 3 | 0 | false | 0 | 0 | Well... SQS doesn't support message routing. In order to assign a message to server A or B, one of the available solutions is to create SNS topics "server a" and "server b". These topics should put messages into SQS, which your application will pull. It is also possible to implement a web hook: a subscriber to SNS events which analyzes the message and does a callback to your application. | 3 | 2 | 0 | 0 | I want to coordinate telling Server B to start a process from Server A, and then when it's complete, run an import script on Server A. I'm having a hard time working out how I should be using SQS correctly in this scenario.
Server A: Main Dedicated Server
Server B: Cloud Process Server
Server A sends message to SQS via SNS to say "Start Process"
Server B constantly polls SQS for "Start Process" message
Server B finds "Start Process" message on SQS
Server B runs "process.sh" file
Server B completes running "process.sh" file
Server B removes "Start Process" from SQS
Server B sends message to SQS via SNS to say "Start Import"
Server A constantly polls SQS for "Start Import" message
Server A finds "Start Import" message on SQS
Server A runs import.sh
Server A completes running "import.sh"
Server A removes "Start Import" from SQS
Is this how SQS should be used or am I missing the point completely? | How should Amazon SQS be used? Import / Process Scenario | 0 | 0.066568 | 1 | 0 | 0 | 1,359 |
15,381,092 | 2013-03-13T09:15:00.000 | 3 | 1 | 0 | 0 | 1 | python,amazon-web-services,amazon-sqs,amazon-sns | 0 | 15,391,518 | 0 | 3 | 0 | false | 0 | 0 | I'm almost sorry that Amazon offers SQS as a service. It is not a "simple queue", and probably not the best choice in your case. Specifically:
it has abysmal performance in low volume messaging (some messages will take 90 seconds to arrive)
message order is not preserved
it is fond of delivering messages more than once
they charge you for polling
The good news is it scales well. But guess what, you don't have a scale problem, so dealing with the quirky behavior of SQS is just going to cause you pain for no good reason. I highly recommend you check out RabbitMQ; it is going to behave exactly like you want a simple queue to behave. | 3 | 2 | 0 | 0 | I want to coordinate telling Server B to start a process from Server A, and then when it's complete, run an import script on Server A. I'm having a hard time working out how I should be using SQS correctly in this scenario.
Server A: Main Dedicated Server
Server B: Cloud Process Server
Server A sends message to SQS via SNS to say "Start Process"
Server B constantly polls SQS for "Start Process" message
Server B finds "Start Process" message on SQS
Server B runs "process.sh" file
Server B completes running "process.sh" file
Server B removes "Start Process" from SQS
Server B sends message to SQS via SNS to say "Start Import"
Server A constantly polls SQS for "Start Import" message
Server A finds "Start Import" message on SQS
Server A runs import.sh
Server A completes running "import.sh"
Server A removes "Start Import" from SQS
Is this how SQS should be used or am I missing the point completely? | How should Amazon SQS be used? Import / Process Scenario | 0 | 0.197375 | 1 | 0 | 0 | 1,359 |
15,381,202 | 2013-03-13T09:20:00.000 | 0 | 0 | 0 | 0 | 0 | php,python,security | 0 | 15,381,241 | 0 | 5 | 0 | false | 0 | 0 | .htaccess, chmod, or you could use a key defined by yourself... You have several possibilities.
Edit: Anyway, if the file only contains a function, nobody can use it from an external HTTP request unless the file itself actually calls it: function();
Does anyone know how I can do this? My first idea was to check for the IP address but a lot of people can spoof their IP.
Example
Let's say I have this file: function.py. A function in this file will accept a new amount of money and increase the appropriate balance in the database.
When someone tries to post data to this file, and this person is outside the server (let's say from 244.23.23.0), the file will be inaccessible. Whereas calling the function from the server itself will be accepted.
So files can access other files on the server, but external users cannot, with the result that no one can execute this file unless it's called from the server.
This is really important to me, because it's related to real money. Also, the money will come from PayPal IPN. And actually, if there was a way to prevent access unless it was coming from PayPal, that would be an amazing way to secure the app.
OK, as far as what I have tried:
Put the database in a cloud SQL using Google [https://developers.google.com/cloud-sql/]
Try to check the IP of the incoming request, in the file
Thanks for any and all help. | Restrict execution of Python files only to server (prevent access from browser) | 0 | 0 | 1 | 0 | 1 | 538 |
15,388,961 | 2013-03-13T15:04:00.000 | 4 | 0 | 0 | 0 | 1 | python,clang,llvm-clang | 0 | 18,359,608 | 0 | 1 | 0 | true | 0 | 1 | For RECORD, the function get_declaration() points to the declaration of the type (a union, enum, struct, or typedef); getting the spelling of the node returns its name. (Obviously, take care to differentiate TYPEDEF_DECL from the underlying declaration kind.)
For FUNCTIONPROTO, you have to use a combination of get_result() and get_arguments(). | 1 | 4 | 0 | 0 | I am writing a python script(using python clang bindings) that parses C headers and extracts info about functions: name, return types, argument types.
I have no problem with extracting function name, but I can't find a way to convert a clang.cindex.Type to C type string. (e.g. clang.cindex.TypeKind.UINT to unsigned int)
Currently, as a temporary solution, I have a dictionary clang.cindex.TypeKind -> C type string and code to process pointers and const qualifiers, but I haven't found a way to extract structure names.
Is there a generic way to get C definition of clang.cindex.Type? If there isn't, how can I get C type string for clang.cindex.TypeKind.RECORD and clang.cindex.TypeKind.FUNCTIONPROTO types? | Extract type string with Clang bindings | 0 | 1.2 | 1 | 0 | 0 | 1,599 |
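A rough illustration of the accepted answer in clang.cindex terms. Note that method names vary between libclang versions: get_declaration() and get_result() are as described in the answer, while argument_types() here is the Type-level counterpart of the cursor-level get_arguments() and is an assumption on my part, so verify against your bindings:

    from clang.cindex import TypeKind

    def type_to_str(t):
        if t.kind == TypeKind.RECORD:
            # Name of the struct/union the type refers to
            return t.get_declaration().spelling
        if t.kind == TypeKind.FUNCTIONPROTO:
            ret = type_to_str(t.get_result())
            args = ', '.join(type_to_str(a) for a in t.argument_types())
            return '%s (%s)' % (ret, args)
        # Fallback: a hand-written TypeKind -> C string table, as in the question
        return str(t.kind)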
15,393,202 | 2013-03-13T18:10:00.000 | 1 | 0 | 1 | 1 | 0 | python,python-2.7,py2exe | 0 | 15,393,682 | 0 | 2 | 0 | false | 0 | 0 | Yep, the path to the file gets passed in as an argument and can be accessed via sys.argv[1]. | 1 | 0 | 0 | 0 | I wonder how the Windows "Open file with..." feature works. Or rather, what I would do if I write a program in Python, compile an executable with py2exe, and then want to be able to open certain files in that program by right-clicking and choosing it in "Open with".
Is the file simply passed as an argument, like "CMD>C:/myapp.exe file"? | Windows "open with" python py2exe application | 0 | 0.099668 | 1 | 0 | 0 | 262 |
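Exactly as the answer says. A two-line check you could freeze with py2exe to see what Windows passes in (drop a file on the .exe or use "Open with"):

    import sys

    # sys.argv[0] is the program itself; sys.argv[1] is the file's full path
    print(sys.argv)
    raw_input('press enter')  # keep the console window open (Python 2)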
15,397,024 | 2013-03-13T21:40:00.000 | 4 | 1 | 1 | 0 | 0 | python,django,performance,apache,redhat | 0 | 15,397,078 | 0 | 1 | 0 | true | 0 | 0 | If you compile with the exact same flags that were used to compile the RPM version, you will get a binary that's exactly as fast. And you can get those flags by looking at the RPM's spec file.
However, you can sometimes do better than the pre-built version. For example, you can let the compiler optimize for your specific CPU, instead of for "general 386 compatible" (or whatever the RPM was optimized for). Of course if you don't know what you're doing (or are doing it on purpose), it's always possible to build something slower than the pre-built version, too.
Meanwhile, 2.7.3 is faster in a few areas than 2.6.6. Most of them usually won't affect you, but if they do, they'll probably be a big win.
Finally, for the vast majority of Python code, the speed of the Python interpreter itself isn't relevant to your overall performance or scalability. (And when it is, you probably want to try PyPy, Jython, or IronPython to replace CPython.) This is especially true for a WSGI service. If you're not doing anything slow, Apache will probably be the bottleneck. If you are doing anything slow, it's probably something I/O bound and well outside of Python's control (like reading files).
Ultimately, the only way you can know how much gain you get is by trying it both ways and performance testing. But if you just want a rule of thumb, I'd say expect a 0% gain, and be pleasantly surprised if you get lucky. | 1 | 1 | 0 | 1 | I would like to know if there are any documented performance differences between a Python interpreter that I can install from an rpm (or using yum) and a Python interpreter compiled from sources (with a priori well set flags for compilations).
I am using a Redhat 6.3 machine as Django/Apache/Mod_WSGI production server. I have already properly compiled everything in different setups and in different orders. However, I usually keep the build-dev dependencies on such machine. For some various ego-related (and more or less practical) reasons, I would like to use Python-2.7.3. By default, Redhat comes with Python-2.6.6. I think I could go with it but it would hurt me somehow (I would have to drop and find a replacement for a few libraries and my ego).
However, besides my ego and dependencies, I would like to know what would be the impact in terms of performance for a Django server. | Performance differences between python from package and python compiled from source | 0 | 1.2 | 1 | 0 | 0 | 139 |
15,408,255 | 2013-03-14T11:41:00.000 | 0 | 0 | 0 | 0 | 1 | python,django | 1 | 15,408,439 | 0 | 3 | 0 | true | 1 | 0 | django_roa is not yet compatible with django 1.5. I'm afraid it only works with django 1.3. | 2 | 5 | 0 | 0 | I'm using Django and I'm trying to set up django-roa, but when I try to start my webserver I get this error: cannot import name LOOKUP_SEP
If I remove django_roa from my INSTALLED_APPS it's okay, but I want django-roa working and I don't know how to resolve this problem.
And I don't know what other details I can give to help find a solution.
Thanks | cannot import name LOOKUP_SEP | 0 | 1.2 | 1 | 0 | 0 | 2,388 |
15,408,255 | 2013-03-14T11:41:00.000 | 0 | 0 | 0 | 0 | 1 | python,django | 1 | 18,352,809 | 0 | 3 | 0 | false | 1 | 0 | I downgraded from 1.5.2 to 1.4.0 and my app started working again. Via pip:
pip install django==1.4
Hope that helps. | 2 | 5 | 0 | 0 | I'm using Django and I'm trying to set up django-roa, but when I try to start my webserver I get this error: cannot import name LOOKUP_SEP
If I remove django_roa from my INSTALLED_APPS it's okay, but I want django-roa working and I don't know how to resolve this problem.
And I don't know what other details I can give to help find a solution.
Thanks | cannot import name LOOKUP_SEP | 0 | 0 | 1 | 0 | 0 | 2,388 |
15,442,919 | 2013-03-15T22:13:00.000 | 0 | 0 | 0 | 0 | 0 | python,amazon-simpledb | 0 | 15,460,747 | 0 | 2 | 1 | true | 1 | 0 | I opted to go with storing large text documents in Amazon S3 (retrieval seems to be quick); I'll be implementing an EC2 instance for caching the documents, with S3 as a failover. | 1 | 0 | 0 | 0 | I've been reading up on SimpleDB and one downfall (for me) is the 1kb max per attribute limit. I do a lot of RSS feed processing and I was hoping to store feed data in SimpleDB (articles) and from what I've read the best way to do this is to shard the article across several attributes. The typical article is < 30kb of plain text.
I'm currently storing article data in DynamoDB (gzip compressed) without any issues, but the cost is fairly high. Was hoping to migrate to SimpleDB for cheaper storage with still fast retrievals. I do archive a json copy of all rss articles on S3 as well (many years of mysql headaches make me wary of db's).
Does anyone know how to shard a string into < 1kb pieces? I'm assuming an identifier would need to be appended to each chunk for order of reassembly.
Any thoughts would be much appreciated! | python - simpledb - how to shard/chunk a big string into several <1kb values? | 0 | 1.2 | 1 | 0 | 0 | 255 |
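For completeness, the chunking itself is short. A sketch that splits an article into numbered attributes and reassembles them, gzip-compressing and base64-encoding first since SimpleDB attributes are strings; the 'body_NNN' naming is just one possible convention:

    import base64
    import zlib

    CHUNK = 1000  # stay safely under SimpleDB's 1024-byte attribute limit

    def to_attributes(text):
        blob = base64.b64encode(zlib.compress(text))
        chunks = [blob[i:i + CHUNK] for i in range(0, len(blob), CHUNK)]
        return dict(('body_%03d' % n, c) for n, c in enumerate(chunks))

    def from_attributes(attrs):
        keys = sorted(k for k in attrs if k.startswith('body_'))
        return zlib.decompress(base64.b64decode(''.join(attrs[k] for k in keys)))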
15,456,709 | 2013-03-17T01:39:00.000 | 0 | 0 | 0 | 0 | 0 | python,excel,google-drive-api | 0 | 15,505,507 | 0 | 1 | 0 | true | 0 | 0 | Ended up just downloading with xlrd and using that. Thanks for the link Rob. | 1 | 0 | 0 | 0 | So I know how to download Excel files from Google Drive in .csv format. However, since .csv files do not support multiple sheets, I have developed a system in a for loop to add the '&grid=tab_number' to the file download url so that I can download each sheet as its own .csv file. The problem I have run into is finding out how many sheets are in the excel workbook on the Google Drive so I know how many times to set the for loop for. | Complicated Excel Issue with Google API and Python | 0 | 1.2 | 1 | 1 | 1 | 95 |
15,469,799 | 2013-03-18T04:31:00.000 | 0 | 0 | 0 | 1 | 0 | python,xcode,interface | 0 | 15,469,982 | 0 | 3 | 0 | true | 0 | 1 | Open Automator
Choose "Application"
Drag a "Run Shell Script" onto the workflow panel
Choose "/usr/bin/python" as the shell. Paste in your script, and select Pass Input: "to stdin"
Or, choose bash as the shell, and simply have the automator script run your Python script with Pass Input "as arguments" selected on the top right. You'll then use the contents of $@ as your arguments.
Save the application.
Done. You have a .app onto which files can be dragged. | 1 | 3 | 0 | 0 | So I have a lot of python scripts that I have written for my work but no one in my lab knows how to use Python so I wanted to be able to generate a simple Mac App where you can 'Browse' for a file on your computer and type in the name of the file that you want to save . . . everything else will be processed by the application for the python script I have generated.
Does anyone know if this is possible? I watched some tutorials on people generating applications in Xcode with Objective C but I don't want to have to learn a new language to reconstruct my Python scripts.
Thank you | Is it possible to create Python-based Application in Xcode or equivalent? | 1 | 1.2 | 1 | 0 | 0 | 3,884 |
15,485,567 | 2013-03-18T19:47:00.000 | 0 | 0 | 1 | 1 | 0 | python | 0 | 15,486,937 | 0 | 1 | 0 | false | 0 | 0 | I am pretty sure OS X build tools (Xcode et al.) exist only on Apple platforms, and there is no business rationale for Apple to port them to Windows.
So the probable answer is "buy a Mac". | 1 | 0 | 0 | 0 | Given that the code has been written independently of platform, how do I build a package for Mac OS when I am on Windows and the package has been successfully built there? I can use python setup.py bdist_msi on Windows, but not python setup.py bdist_dmg, since I am not on a Mac. What to do about that?
Python 3.3, tkinter, cxFreeze, Windows 8. | Build package for OSX when on Windows (Python 3.3, tkinter) | 0 | 0 | 1 | 0 | 0 | 98 |
15,487,848 | 2013-03-18T22:06:00.000 | 1 | 1 | 0 | 1 | 0 | python,linux | 0 | 15,487,877 | 0 | 3 | 0 | false | 0 | 0 | Run /aaa/python2.5 python_code.py. If you use Python 2.5 more often, consider changing the $PATH variable to make Python 2.5 the default. | 1 | 2 | 0 | 0 | I am doing maintenance on some Python code. Python is installed in /usr/bin, the code is installed in /aaa, and a Python 2.5 is installed under /aaa/python2.5. Each time I run Python, it uses the /usr/bin one. How do I make it run /aaa/python2.5?
Also, when I run Python -v; import bbb; bbb.__file__; it shows that it uses the bbb module under /usr/ccc/ (I don't know why), instead of the bbb module under /aaa/python2.5/lib.
How do I make it run python2.5 and use the /aaa/python2.5/lib modules? The reason I am asking is that when we maintain code that other people are still using, we need to install the code under a new directory, then modify, run and debug it there. | How to run python in different directory? | 0 | 0.066568 | 1 | 0 | 0 | 1,363
15,493,342 | 2013-03-19T06:59:00.000 | 0 | 1 | 1 | 0 | 0 | python,emacs,restructuredtext | 0 | 28,541,254 | 0 | 3 | 0 | false | 0 | 0 | As far as for edit-purposes, narrowing to docstring and activating rst-mode should be the way to go.
python-mode el provides py--docstring-p, which might be easily adapted for python.el
Than binding the whole thing to some idle-timer, would do the narrowing/switching.
Remains some expression which toggles-off rst-mode and widens. | 1 | 13 | 0 | 0 | How to I get Emacs to use rst-mode inside of docstrings in Python files? I vaguely remember that different modes within certain regions of a file is possible, but I don't remember how it's done. | Have Emacs edit Python docstrings using rst-mode | 1 | 0 | 1 | 0 | 0 | 1,425 |
15,517,766 | 2013-03-20T07:33:00.000 | 0 | 0 | 0 | 1 | 0 | python,django,google-app-engine,python-2.7,django-nonrel | 0 | 15,525,796 | 0 | 2 | 0 | false | 1 | 0 | The django library built into GAE is straight up normal django that has an SQL ORM. So you can use this with Cloud SQL but not the HRD.
django-nonrel is up to 1.4.5 according to the messages on the newsgroup. The documentation, unfortunately, is sorely behind. | 1 | 0 | 0 | 0 | AppEngine 1.7.6 has promoted Django 1.4.2 to GA.
I wonder how and if people are using this. The reason for my question is that Django-nonrel seems to be stuck on Django 1.3 and there are no signs of an updated release.
What I would like to use from Django are controllers, views and especially form validations. | AppEngine 1.7.6 and Django 1.4.2 release | 0 | 0 | 1 | 0 | 0 | 154
15,524,030 | 2013-03-20T12:50:00.000 | 4 | 0 | 1 | 0 | 0 | python,list,logging,dictionary,system | 0 | 15,524,068 | 0 | 3 | 0 | false | 0 | 0 | You should look into collections.Counter. Your question is a bit unclear. | 1 | 0 | 0 | 0 | Can anyone tell me how to count the number of times a word appears in a dictionary? I've already read a file into a list in the terminal. Would I need to put the list into a dictionary, or start by reading the file into a dictionary instead of a list? The file is a log file, if that matters... | Counting words in python | 0 | 0.26052 | 1 | 0 | 0 | 1,257
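A minimal sketch of the collections.Counter suggestion from the answer, counting word frequencies straight from a log file (the filename is a placeholder):

    from collections import Counter

    counts = Counter()
    with open('server.log') as f:    # placeholder filename
        for line in f:
            counts.update(line.split())

    print(counts.most_common(10))    # the ten most frequent words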
15,529,417 | 2013-03-20T16:35:00.000 | 2 | 0 | 1 | 0 | 0 | python,module,version | 0 | 15,529,520 | 1 | 2 | 0 | false | 0 | 0 | I'm not sure if it's possible to change the active installed versions of a given module. Given my understanding of how imports and site-packages work, I'm leaning towards no.
Have you considered using virtualenv though?
With virtualenv, you could create multiple shared environments -- one for Biopython 1.58, another for 1.61, another for whatever other special situations you need. They don't need to be locked down to a particular user, so while it would take more space than you desired, it could take less space than everyone having their own Python environment. | 1 | 4 | 0 | 0 | So I am working on a shared computer. It is a workhorse for computations across the department. The problem we have run into is controlling versions of imported modules. Take for example Biopython. Some people have requirements of an older version of biopython - 1.58. And yet others have requirements for the latest biopython - 1.61. How would I have both versions of the module installed side by side, and how does one specifically access a particular version?
I ask because sometimes these APIs change and break old scripts for other people (or they expect certain functionality that is no longer there). I understand that one could locally (i.e. per user) install the module and specifically direct Python to that module. Is there another way to handle this? Or would everyone have to create an export PYTHONPATH before using? | How do I access different python module versions? | 0 | 0.197375 | 1 | 0 | 0 | 361
15,534,297 | 2013-03-20T20:42:00.000 | 1 | 0 | 0 | 0 | 0 | python,service,web | 0 | 15,534,482 | 0 | 1 | 0 | false | 1 | 0 | I'm no expert on this topic, but what I would do is set up a database in between (on the Synology rather than on the Raspberry Pi). Let's call your Synology the server, and the Raspberry Pi a sensor client.
I would host a database on the server and push the data from the sensor client. The data could be pushed either through a web service API, or something more low-level if you need it faster (some code is needed on the server side for this), or, since the client computer is under your control, the client could push directly into the database.
Your concrete choice between database, web service or other API depends on:
How much data have to be pushed?
How fast data have to pushed?
How much do you trust your network?
How much do you trust your sensor client?
I've never used it but I suggest you use SQLAlchemy for connecting to the database (from both sides).
If in some use case the remote server can be down, the sensor client should store sensor data in a local file and push it when the server comes back online. | 1 | 2 | 0 | 0 | I'm looking for ideas on how to display sensor data in a webpage hosted by a Synology DiskStation, where the data comes from sensors connected to a Raspberry Pi. This is going to be implemented in Python.
I have put together the sensors and have them connected to the Raspberry Pi. I also have the Python code, so I can read the sensors.
I guess some kind of web service on the Pi? I have looked at Pyro4, but it doesn't look like it can be installed on the DiskStation. And I would prefer not to install a whole web server framework on the Pi.
Do you have a suggestion? | Move data from Raspberry pi to a synology diskstation to present in a webpage | 0 | 0.197375 | 1 | 0 | 0 | 699
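A toy sketch of the "push with local fallback" idea from the answer, with the sensor client POSTing JSON to a web service on the Synology (the URL and reading format are placeholders):

    import json
    import time
    import urllib2

    SERVER = 'http://diskstation.local/sensors'   # placeholder endpoint

    def push(reading):
        req = urllib2.Request(SERVER, json.dumps(reading),
                              {'Content-Type': 'application/json'})
        urllib2.urlopen(req, timeout=5)

    while True:
        reading = {'t': time.time(), 'temp_c': 21.5}  # read your sensor here
        try:
            push(reading)
        except Exception:
            with open('backlog.jsonl', 'a') as f:     # queue for later replay
                f.write(json.dumps(reading) + '\n')
        time.sleep(60)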
15,591,618 | 2013-03-23T20:19:00.000 | 0 | 0 | 0 | 0 | 0 | python,wysiwyg | 1 | 15,591,769 | 0 | 3 | 0 | false | 0 | 1 | See Glade, particularly in use with the libglade Python bindings. | 1 | 0 | 0 | 0 | I am searching for month now and growing quite frustrated.
I just love python.
So after doing a lot of console based stuff I wanted to do some graphical UIs as well.
I am aware of most of the frameworks (wxpython, glade, tk etc).
But: I do not want to write the code for the GUI itself by hand! Declaring every element by hand, thinking about grids and doing trial and error to find out just how many pixels you have to move an object to get it in the right place. Well, let's say that just sounds like the 1990s to me, and it is no fun at all.
So to put it plain and simple, what I am looking for is a solution that allows me to design a GUI graphically (WYSIWYG) and have an event based linking to python code.
Almost all major languages have that: for C/C++ there are certainly the most IDEs/tools that can do it. For Java there is NetBeans with Swing (an example of what I want; it would be ideal if that UI designer in NetBeans could spit out Jython code, but no: Python is supported but not UI design). Even Mono/Visual Basic etc. have tools like that.
So why the hell is there nothing for Python?
P.S. And please, no comments like "If you are a real programmer you do it by hand to get cleaner code". If I want something very specific I edit it by hand, but designing a standard UI by hand is a waste of time. | Is there really no event based wysiwyg Gui builder for python/jython etc | 0 | 0 | 1 | 0 | 0 | 5,155
15,592,980 | 2013-03-23T22:45:00.000 | 2 | 0 | 0 | 0 | 1 | python,netezza | 0 | 15,643,468 | 0 | 3 | 0 | false | 0 | 0 | You need to get the nzcli installed on the machine that you want to run nzload from - your sysadmin should be able to put it on your Unix/Linux application server. There's a detailed process to setting it all up, caching the passwords, etc. - the sysadmin should be able to do that too.
Once it is set up, you can create NZ control files to point to your data files and execute a load. The Netezza Data Loading guide has detailed instructions on how to do all of this (it can be obtained through IBM).
You can do it through Aginity as well if you have the CREATE EXTERNAL TABLE privilege - you can do an INSERT INTO FROM EXTERNAL ... REMOTESOURCE ODBC to load the file from an ODBC connection. | 2 | 2 | 1 | 0 | I have a huge CSV file which contains millions of records, and I want to load it into a Netezza DB using a Python script. I have tried a simple insert query, but it is very, very slow.
Can anyone point me to an example Python script, or give some idea of how I can do this?
Thank you | How to use NZ Loader (Netezza Loader) through Python Script? | 0 | 0 | 1 | 1 | 0 | 4,583 |
15,592,980 | 2013-03-23T22:45:00.000 | 1 | 0 | 0 | 0 | 1 | python,netezza | 0 | 17,522,337 | 0 | 3 | 0 | false | 0 | 0 | you can use nz_load4 to load the data,This is the support utility /nz/support/contrib/bin
the syntax is same like nzload,by default nz_load4 will load the data using 4 thread and you can go upto 32 thread by using -tread option
for more details use nz_load4 -h
This will create the log files based on the number of thread,like if | 2 | 2 | 1 | 0 | I have a huge csv file which contains millions of records and I want to load it into Netezza DB using python script I have tried simple insert query but it is very very slow.
Can point me some example python script or some idea how can I do the same?
Thank you | How to use NZ Loader (Netezza Loader) through Python Script? | 0 | 0.066568 | 1 | 1 | 0 | 4,583 |
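For the "example python script" part: one common pattern is to drive Netezza's external-table load from Python over ODBC, as the first answer hints. A rough sketch with pyodbc; the DSN, table name and the exact external-table options are assumptions you should check against the Netezza data loading guide:

    import pyodbc

    conn = pyodbc.connect('DSN=NZSQL;UID=admin;PWD=password')  # placeholder DSN
    cur = conn.cursor()

    # Bulk-load the CSV via an external table (verify syntax per the NZ guide)
    cur.execute("""
        INSERT INTO my_table
        SELECT * FROM EXTERNAL '/path/to/huge.csv'
        USING (DELIMITER ',' REMOTESOURCE 'ODBC')
    """)
    conn.commit()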
15,609,211 | 2013-03-25T06:49:00.000 | 0 | 1 | 1 | 0 | 1 | python | 1 | 15,609,275 | 0 | 1 | 0 | false | 0 | 0 | I am afraid there's no easy way to arbitrarily modify a running Python script.
One approach is to test the script on a small amount of data first. This way you'll reduce the likelihood of discovering bugs when running on the actual, large, dataset.
Another possibility is to make the script periodically save its state to disk, so that it can be restarted from where it left off, rather than from the beginning. | 1 | 1 | 0 | 0 | I'm using python scripts to execute simple but long measurements. I as wondering if (and how) it's possible to edit a running script.
An example:
Let's assume I made an error in the last lines of a running script. These lines have not yet been executed. Now I'd like to fix it without restarting the script. What should I do?
Edit:
One idea I had was loading each line of the script into a list: pop the first one, feed it to an interpreter instance, wait for it to complete, and pop the next one. This way I could modify the list.
I guess I can't be the first one thinking about this. Someone must have implemented something like it before, and I don't want to reinvent the wheel. If one of you knows about a project, please let me know. | Modifying a running script | 0 | 0 | 1 | 0 | 0 | 109
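A minimal sketch of the checkpoint/restart idea from the answer: persist the loop state periodically with pickle, so a corrected script can resume where the old one stopped (the filename and state shape are placeholders):

    import os
    import pickle

    STATE = 'measurement.state'

    start = 0
    if os.path.exists(STATE):
        with open(STATE, 'rb') as f:
            start = pickle.load(f)     # resume after a restart

    for i in range(start, 10000):
        # ... take measurement i ...
        with open(STATE, 'wb') as f:
            pickle.dump(i + 1, f)      # checkpoint progress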
15,625,662 | 2013-03-25T22:01:00.000 | 1 | 0 | 1 | 0 | 0 | python,django,date,datetime | 0 | 15,625,871 | 0 | 2 | 0 | false | 1 | 0 | What you're looking for is probably covered by post_date__year=year and post_date__month=month in Django.
Nevertheless, all this seems a little bit weird for get() parameters. Do you have a database-level constraint that forbids two posts with the same title in the same month of a given year? | 2 | 1 | 0 | 0 | I'm working on a blog using Django and I'm trying to use get() to retrieve the post from my database with a certain post_title and post_date. I'm using datetime for the post_date, and although I can use post_date = date(year, month, day) to fetch a post made on a specific day, I don't know how to get it to ignore the day parameter. I can't pass only two arguments to date(), and since only integers can be used, I don't think there's any kind of wildcard I can use for the day. How would I go about doing this?
To clarify, I'm trying to find a post using the year in which it was posted, the month, and its title, but not the day. Thanks in advance for any help! | Matching Month and Year In Python with datetime | 0 | 0.099668 | 1 | 0 | 0 | 483
15,625,662 | 2013-03-25T22:01:00.000 | 1 | 0 | 1 | 0 | 0 | python,django,date,datetime | 0 | 15,625,840 | 0 | 2 | 0 | true | 1 | 0 | You could use post_date__year and post_date__month | 2 | 1 | 0 | 0 | I'm working on a blog using Django and I'm trying to use get() to retrieve the post from my database with a certain post_title and post_date. I'm using datetime for the post_date, and although I can use post_date = date(year, month, day) to fetch a post made on a specific day, I don't know how to get it to ignore the day parameter. I can't pass only two arguments to date(), and since only integers can be used, I don't think there's any kind of wildcard I can use for the day. How would I go about doing this?
To clarify, I'm trying to find a post using the year in which it was posted, the month, and its title, but not the day. Thanks in advance for any help! | Matching Month and Year In Python with datetime | 0 | 1.2 | 1 | 0 | 0 | 483
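Putting both answers together as a concrete lookup; Post and its field names are taken from the question:

    # Day is simply not constrained, so any day in that month matches
    post = Post.objects.get(
        post_title=title,
        post_date__year=2013,
        post_date__month=3,
    )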
15,635,888 | 2013-03-26T11:27:00.000 | 2 | 0 | 0 | 1 | 1 | python,google-app-engine,google-cloud-datastore | 0 | 15,641,028 | 0 | 2 | 0 | false | 1 | 0 | This is answered, but to explain a little further: the local datastore, by default, writes to the temporary file system on your computer. The temporary file is emptied any time you restart the computer, hence your datastore is emptied. If you don't restart your computer, your datastore should remain. | 1 | 0 | 0 | 0 | I'm running App Engine with Python 2.7 on OS X. Once I stop the development server, all data in the datastore is lost. The same thing happens when I try to deploy my app. What might cause this behaviour and how do I fix it? | GAE: Data is lost after dev server restart | 0 | 0.197375 | 1 | 0 | 0 | 322
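A practical workaround for the dev-server side of this: the Python dev server can be pointed at a persistent datastore file instead of the temp directory, e.g. dev_appserver.py --datastore_path=/some/stable/path/myapp.datastore your_app_dir. The --datastore_path flag is from the 1.7-era Python SDK; check dev_appserver.py --help on your version.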
15,645,296 | 2013-03-26T19:02:00.000 | 0 | 0 | 1 | 0 | 0 | python,multithreading,io | 0 | 15,647,380 | 0 | 1 | 0 | false | 0 | 0 | There is no need to repeatedly check for either I/O completion or for lock release.
An I/O completion, signaled by a hardware interrupt to a driver, or a lock release, signaled by a software interrupt from another thread, will make threads waiting on those operations ready 'immediately', quite possibly running, and quite possibly preempting another thread in the process. Essentially, after either a software or hardware interrupt, the OS can decide to interrupt-return to a different thread than the one interrupted.
The high I/O performance of this mechanism, eliminating any polling or checking, is 99% of the reason for putting up with the pain of preemptive multitaskers.
I wonder whether this is implemented by constantly checking ("Is the output here now? Is it here now? What about now?"), which I imagine is wasteful, or in some more elegant way. | How is waiting for I/O or waiting for a lock to be released implemented? | 1 | 0 | 1 | 0 | 0 | 259 |
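A small sketch demonstrating the answer's point: the blocked thread sits in an OS wait with no Python-level polling, and it becomes runnable the moment data arrives:

```python
import socket
import threading
import time

a, b = socket.socketpair()   # a connected pair of local sockets

def reader():
    # recv() releases the GIL and blocks inside the OS until data arrives.
    data = a.recv(1024)
    print("reader woke up with:", data)

threading.Thread(target=reader).start()
time.sleep(1.0)              # the main thread runs freely in the meantime
b.send(b"ping")              # the OS readies the blocked reader 'immediately'
```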
15,654,714 | 2013-03-27T08:43:00.000 | 0 | 0 | 0 | 1 | 0 | python,linux,cherrypy,gnu-screen | 0 | 25,355,763 | 0 | 3 | 0 | false | 0 | 0 | You can use syslog, or even better, you can configure it to send all logs to a database! | 2 | 2 | 0 | 0 | I'm developing a small piece of software that is able to control (start, stop, restart and so on - with GNU screen) every possible gameserver (any that has a command line) and includes a tiny standalone webserver with a complete webinterface (you can access the GNU screen from there, as if you're attached to it) on Linux.
Almost everything is working and needs some code cleanup now.
It's written in Python; the standalone webserver uses CherryPy as a framework.
The problem is that the GNU screen output on the webinterface is done via a logfile, which can cause high I/O when enabled (OK, it depends on what is running).
Is there a way to pipe the output directly to the standalone webserver (it has to be fast)? Maybe something with sockets, but I don't know how to handle them yet. | A way to "pipe" gnu screen output to a running python process? | 0 | 0 | 1 | 0 | 0 | 1,489 |
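A minimal sketch of the syslog suggestion above, using only the standard library (the '/dev/log' address assumes a typical Linux syslog socket):

```python
import logging
import logging.handlers

log = logging.getLogger("gameserver")
log.setLevel(logging.INFO)
# Ship records to the local syslog daemon instead of writing a logfile.
log.addHandler(logging.handlers.SysLogHandler(address="/dev/log"))

log.info("one line of game server output")
```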
15,654,714 | 2013-03-27T08:43:00.000 | 1 | 0 | 0 | 1 | 0 | python,linux,cherrypy,gnu-screen | 0 | 15,661,154 | 0 | 3 | 0 | false | 0 | 0 | Writing to a pipe would work but it's dangerous since your command (the one writing the pipe) will block when you're not fast enough reading the data from the pipe.
A better solution would be create a local "log server" which publishes stdin on a socket. Now you can pipe the output of your command to the log server which reads from stdin and sends copy of the input to anyone connected to it's socket.
When no one is connected, then the output is just ignored.
Writing such a "log server" is trivial (about 1h in Python, I'd guess).
An additional advantage would be that you could keep part of the log file in memory (say the last 100 lines). When your command crashes, you could still get the last output from your log server.
For this to work, you must not terminate the log server when stdin returns EOF. The drawback is that you need to clean up stale log servers yourself. When you use sockets, you can send it a "kill" command from your web app. | 2 | 2 | 0 | 0 | I'm developing a small piece of software that is able to control (start, stop, restart and so on - with GNU screen) every possible gameserver (any that has a command line) and includes a tiny standalone webserver with a complete webinterface (you can access the GNU screen from there, as if you're attached to it) on Linux.
Almost everything is working and needs some code cleanup now.
It's written in Python; the standalone webserver uses CherryPy as a framework.
The problem is that the GNU screen output on the webinterface is done via a logfile, which can cause high I/O when enabled (OK, it depends on what is running).
Is there a way to pipe the output directly to the standalone webserver (it has to be fast)? Maybe something with sockets, but I don't know how to handle them yet. | A way to "pipe" gnu screen output to a running python process? | 0 | 0.066568 | 1 | 0 | 0 | 1,489 |
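A minimal sketch of the "log server" this answer describes: it copies stdin to every connected client and keeps the last 100 lines in memory for newcomers (the port number and buffer sizes are assumptions):

```python
import select
import socket
import sys
from collections import deque

backlog = deque(maxlen=100)   # the last 100 lines, replayed to new clients
clients = []

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 9999))
server.listen(5)

for line in sys.stdin:        # e.g. run as: gameserver | python logserver.py
    backlog.append(line)
    # Accept any pending connections without blocking the copy loop.
    while select.select([server], [], [], 0)[0]:
        conn, _ = server.accept()
        conn.sendall("".join(backlog).encode())
        clients.append(conn)
    for conn in clients[:]:
        try:
            conn.sendall(line.encode())
        except OSError:       # the client went away
            clients.remove(conn)
```

A slow client could still stall the loop on sendall(); a production version would make the client sockets non-blocking as well.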
15,693,565 | 2013-03-28T22:53:00.000 | 1 | 1 | 0 | 1 | 0 | python,testing,jenkins,distributed | 0 | 15,693,722 | 0 | 2 | 0 | false | 0 | 0 | To debug this:
Add set -x towards the top of your shell script.
Set a PS4 which prints the line number of each line when it's invoked: PS4='+ $BASH_SOURCE:$FUNCNAME:$LINENO:'
Look in particular for any places where your scripts assume environment variables which aren't set when Hudson is running.
If your Python scripts redirect stderr (where logs from set -x are directed) and don't pass it through to Hudson (and so don't log it), you can redirect it to a file from within the script: exec 2>>logfile
There are a number of tools other than Jenkins for kicking off jobs across a number of machines, by the way; MCollective (which works well if you already use Puppet), knife ssh (which you'll already have if you use Chef -- which, in my not-so-humble opinion, you should!), Rundeck (which has a snazzy web UI, but shouldn't be used by anyone until this security bug is fixed), Fabric (which is a very good choice if you don't have mcollective or knife already), and many more. | 1 | 1 | 0 | 0 | I have a ton of scripts I need to execute, each on a separate machine. I'm trying to use Jenkins to do this. I have a Python script that can execute a single test and handles time limits and collection of test results, and a handful of Jenkins jobs that run this Python script with different args. When I run this script from the command line, it works fine. But when I run the script via Jenkins (with the exact same arguments) the test times out. The script handles killing the test, so control is returned all the way back to Jenkins and everything is cleaned up. How can I debug this? The Python script is using subprocess.Popen to launch the test.
As a side note, I'm open to suggestions for how to do this better, with or without Jenkins and my Python script. I just need to run a bunch of scripts on different machines and collect their output. | Shell scripts have different behavior when launched by Jenkins | 0 | 0.099668 | 1 | 0 | 0 | 1,510 |
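On the Python side, one hedged debugging aid: run the child with an explicit environment and forward all of its output, so nothing is silently lost when Jenkins supplies a stripped-down daemon environment (the script name and the HOME fallback are assumptions):

```python
import os
import subprocess

env = dict(os.environ)
env.setdefault("HOME", "/var/lib/jenkins")    # daemons often lack a login environment

proc = subprocess.Popen(
    ["./run_test.sh"],                        # hypothetical test launcher
    env=env,
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,                 # merge stderr so Jenkins logs it too
)
for line in proc.stdout:
    print(line.decode(), end="", flush=True)  # shows up in the Jenkins console
proc.wait()
```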
15,694,341 | 2013-03-29T00:09:00.000 | 1 | 0 | 0 | 0 | 0 | python,forms,parsing,templates,jinja2 | 0 | 15,704,671 | 0 | 1 | 1 | true | 1 | 0 | I think Jinja makes sense for building this, in particular because it contains a full-on lexer and parser. You can leverage those to derive your own versions that do what you need. | 1 | 2 | 0 | 0 | I'd like to do roughly the opposite of what a template is usually used for: I want to write templates and programmatically derive a representation of the different tags and placeholders present in the template, to ultimately generate a form.
To put it another way: where you usually have the data and populate the template with it, I want to have the template and ask the user for the right data to fill it.
Example (with pseudo-syntax): Hello {{ name_of_entity only-in ['World', 'Universe', 'Stackoverflow'] }}!
With that I could programmatically derive that I should generate a form with a select tag named 'name_of_entity' having 3 options ('World', 'Universe', 'Stackoverflow').
I looked into Jinja2, and it seems I can reach my goal by using and extending it (even if it's made to do things the other way). But I am still unsure how I should proceed in some cases, e.g.:
if I want to represent that {{ weekday }} has values only in ['Mo', 'Tu', ...]
if I want to represent in the template that the {{ amount }} variable accepts only integers...
Is Jinja a good base to reach these goals? If yes, how would you recommend doing that? | Template to forms | 0 | 1.2 | 1 | 0 | 0 | 83 |
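For the placeholder-discovery half of the goal, Jinja2's parser can already be leveraged as the answer suggests; a minimal sketch (a constrained-values syntax like only-in would still require a custom extension):

```python
from jinja2 import Environment, meta

env = Environment()
source = "Hello {{ name_of_entity }}! Today is {{ weekday }}."

# Parse the template into an AST and collect its undeclared variables,
# i.e. the placeholders a user would have to supply through the form.
ast = env.parse(source)
fields = meta.find_undeclared_variables(ast)   # {'name_of_entity', 'weekday'}
```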
15,703,520 | 2013-03-29T12:45:00.000 | 1 | 0 | 1 | 0 | 0 | python | 0 | 15,703,555 | 0 | 1 | 0 | true | 0 | 0 | When inserting into an empty list, make both head and tail refer to the new node. Also, make sure that the node's next and previous references are consistent with what the rest of the code is expecting. | 1 | 1 | 0 | 0 | I know how to add nodes before and after the head and tail, but I don't know how to add a node to an empty doubly linked list. How would I go about doing this? Thank you. | Adding to empty doubly linked list in python | 0 | 1.2 | 1 | 0 | 0 | 313 |
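A minimal sketch of the empty-list case the answer describes (the Node class and the head/tail attribute names are assumptions):

```python
def insert_into_empty(self, value):
    node = Node(value)
    node.prev = None     # nothing comes before it
    node.next = None     # nothing comes after it
    self.head = node     # head and tail both refer
    self.tail = node     # to the one and only node
```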
15,714,976 | 2013-03-30T04:29:00.000 | 0 | 0 | 1 | 1 | 0 | python,linux,pyqt,portability,python-bindings | 1 | 15,716,571 | 0 | 1 | 0 | true | 0 | 1 | If you package your application in the Linux distribution's package format, it can contain dependency information. That is the canonical solution to this problem.
Otherwise you'd have to include all nested dependencies to make sure that it'll work. | 1 | 0 | 0 | 0 | I've managed to make a single working executable file (for Windows) from a PyQt based Python app using PyInstaller, but is it also possible for Linux?
On a Linux machine (Lubuntu), when I run the .py script, I get errors about missing PyQt bindings, and I can't even download them via apt-get because I'm unable to connect to the servers. It would be much more convenient to somehow pack the missing libraries in with my program's files to make it more portable, but how can I do that? | How to convert a Python PyQt based program to a portable package in Linux? | 1 | 1.2 | 1 | 0 | 0 | 1,029 |
15,730,976 | 2013-03-31T15:25:00.000 | 0 | 0 | 0 | 0 | 0 | python,django,encryption,passwords | 0 | 15,960,639 | 0 | 2 | 0 | false | 1 | 0 | Reconsider your decision about keeping your old password hashes.
EXCEPT if you already used some very modern and strong scheme for them (like pbkdf2, bcrypt, shaXXX_crypt) - and NOT just some (salted or not) sha1-hash.
I know it is tempting to just stay compatible and support the old crap, but these old (salted or unsalted, doesn't matter much for brute-forcing) sha1-hashes can be broken nowadays at a rate of > 1*10^9 guesses per second.
Also, old minimum password length requirements might need reconsideration for the same reasons.
The default Django password hash scheme is a very secure one, btw; you should really use it. | 1 | 2 | 0 | 1 | I am currently developing a tool in Python using Django, in which I have to import an existing User database. Obviously, the passwords for these existing users don't use the same encryption as the default password encryption used by Django.
I want to override the password encryption method to keep my passwords unmodified. I can't find how to override the existing method in the documentation; I only found how to add information about a user (I can't find how to remove information, like first name or last name, about a user either, so if someone knows, please tell me).
Thank you for your help. | Django Override password encryption | 0 | 0 | 1 | 0 | 0 | 2,355 |
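A sketch of the usual approach: a custom hasher that verifies the imported hashes, registered alongside Django's defaults (legacy_hash is purely hypothetical, standing in for whatever scheme the old database used):

```python
from django.contrib.auth.hashers import BasePasswordHasher

class LegacyPasswordHasher(BasePasswordHasher):
    """Accepts password hashes imported from the old user database."""
    algorithm = "legacy"

    def salt(self):
        return ""                                 # assumes the old scheme was unsalted

    def encode(self, password, salt):
        return "legacy$" + legacy_hash(password)  # hypothetical old hash function

    def verify(self, password, encoded):
        return encoded == self.encode(password, self.salt())
```

Listing this class after the defaults in the PASSWORD_HASHERS setting keeps Django's strong scheme for new passwords while still accepting the imported ones; Django can then upgrade users to the default hasher on their next successful login.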
15,754,610 | 2013-04-02T01:22:00.000 | 6 | 0 | 0 | 0 | 0 | python,amazon-s3,gzip,boto | 0 | 15,763,863 | 0 | 3 | 0 | true | 0 | 0 | There really isn't a way to do this because S3 doesn't support true streaming input (i.e. chunked transfer encoding). You must know the Content-Length prior to upload, and the only way to know that is to have performed the gzip operation first. | 1 | 17 | 0 | 0 | I have a large local file. I want to upload a gzipped version of that file into S3 using the boto library. The file is too large to gzip efficiently on disk prior to uploading, so it should be gzipped in a streamed way during the upload.
The boto library provides a function set_contents_from_file() which expects a file-like object it will read from.
The gzip library provides the class GzipFile, which can take an object via the parameter named fileobj; it will write to this object when compressing.
I'd like to combine these two functions, but one API wants to read by itself and the other API wants to write by itself; neither supports a passive operation (like being written to or being read from).
Does anybody have an idea on how to combine these in a working fashion?
EDIT: I accepted one answer (see below) because it hinted at where to go, but if you have the same problem, you might find my own answer (also below) more helpful, because I implemented a solution using multipart uploads in it. | How to gzip while uploading into s3 using boto | 0 | 1.2 | 1 | 0 | 1 | 15,998 |
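A sketch of the multipart route mentioned in the edit: gzip into an in-memory buffer and ship each filled chunk as one part (bucket and key names are placeholders; on S3, every part except the last must be at least 5 MB; initiate_multipart_upload/upload_part_from_file are the classic boto API):

```python
import gzip
from io import BytesIO

import boto

bucket = boto.connect_s3().get_bucket("my-bucket")   # placeholder bucket name
mp = bucket.initiate_multipart_upload("large_file.gz")

buf = BytesIO()
gz = gzip.GzipFile(fileobj=buf, mode="wb")           # GzipFile compresses into buf
part = 1
with open("large_local_file", "rb") as src:
    for chunk in iter(lambda: src.read(1 << 20), b""):
        gz.write(chunk)
        if buf.tell() > 5 * 1024 * 1024:             # S3's minimum part size
            buf.seek(0)
            mp.upload_part_from_file(buf, part)
            part += 1
            buf.seek(0)
            buf.truncate()
gz.close()                                           # flushes the gzip trailer into buf
buf.seek(0)
mp.upload_part_from_file(buf, part)                  # the final part may be smaller
mp.complete_upload()
```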
15,757,213 | 2013-04-02T06:03:00.000 | 0 | 0 | 1 | 0 | 0 | python-3.x,format,file-format | 0 | 54,352,007 | 0 | 2 | 0 | false | 0 | 0 | This may not be appropriate for your question but I think this may help you.
I have a similar problem faced... but end up with some thing like creating a zip file and then renamed the zip file format to my custom file format... But it can be opened with the winRar. | 1 | 6 | 0 | 0 | How to start creating my own filetype in Python ? I have a design in mind but how to pack my data into a file with a specific format ?
For example, I would like my file format to be a mix of an archive (like other formats such as zip, apk, jar, etc.; they are basically all archives) with some room for packed files, plus a section of the file containing settings and serialized data that will not be accessed by an archive-manager application.
My requirement is to do all this with the default modules for CPython, without external modules.
I know that this can take long to explain and do, but I can't see how to start on this in Python 3.x with CPython. | Custom filetype in Python 3 | 0 | 0 | 1 | 0 | 0 | 4,824 |
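A minimal sketch of the container idea using only CPython's standard library: a zip archive holding the packed files plus a reserved member for the settings/serialized-data section (member names such as 'meta.json' are assumptions):

```python
import json
import zipfile

def save(path, files, settings):
    """files: {name: bytes}; settings: any JSON-serializable object."""
    with zipfile.ZipFile(path, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, data in files.items():
            zf.writestr("data/" + name, data)           # the archive section
        zf.writestr("meta.json", json.dumps(settings))  # the settings section

def load_settings(path):
    with zipfile.ZipFile(path) as zf:
        return json.loads(zf.read("meta.json").decode())

save("project.myext", {"readme.txt": b"hello"}, {"version": 1})
```

As the answer above notes about renamed zip files, an archive manager will still open such a container; whether that is a feature or something to design around depends on the format's goals.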