Dataset schema (each record below is pipe-delimited in this column order):

| Column | Type | Min | Max |
|---|---|---|---|
| Available Count | int64 | 1 | 31 |
| AnswerCount | int64 | 1 | 35 |
| GUI and Desktop Applications | int64 | 0 | 1 |
| Users Score | int64 | -17 | 588 |
| Q_Score | int64 | 0 | 6.79k |
| Python Basics and Environment | int64 | 0 | 1 |
| Score | float64 | -1 | 1.2 |
| Networking and APIs | int64 | 0 | 1 |
| Question | string (length) | 15 | 7.24k |
| Database and SQL | int64 | 0 | 1 |
| Tags | string (length) | 6 | 76 |
| CreationDate | string (length) | 23 | 23 |
| System Administration and DevOps | int64 | 0 | 1 |
| Q_Id | int64 | 469 | 38.2M |
| Answer | string (length) | 15 | 7k |
| Data Science and Machine Learning | int64 | 0 | 1 |
| ViewCount | int64 | 13 | 1.88M |
| is_accepted | bool (2 classes) | - | - |
| Web Development | int64 | 0 | 1 |
| Other | int64 | 1 | 1 |
| Title | string (length) | 15 | 142 |
| A_Id | int64 | 518 | 72.2M |
2 | 2 | 0 | 0 | 0 | 0 | 1.2 | 1 | I need a URL to use in a template, and there are two ways I know of to store the URL in Python and retrieve it later:
One is using the session to store the URL and get it back whenever we need it,
or
the second is using cookies to store the URL and get it later.
So which method is more appropriate in terms of security?
Is there any other method in Python that is better, i.e. more secure, for storing the URL and using it later?
While using cookies somebody can easily change the information, I guess; with sessions somebody can also hijack it and make changes. | 0 | python,django,http | 2012-06-25T11:43:00.000 | 0 | 11,188,725 | I don't think "session hijacking" means what you think it means. The only thing someone can do with session hijacking is impersonate a user. The actual session data is stored on the back end (e.g. in the database), so if you don't give the user access to that particular data then they can't change it, whether they're the actual intended user or someone impersonating that user.
So, the upshot of this is, store it in the session.
Edit after comment: Well, you'd better not allow any information to be sent to your server then, and make your website browse-only.
Seriously, I don't see why "session data" is any less secure than anything else. You are being unreasonably paranoid. If you want to store data, you need to get that data from somewhere, either from a calculation on the server side, or from user submissions. If you can't calculate this specific URL on the server side, it needs to come from the user. And then you need to store it on the server against the particular user. I don't see what else you want to do. | 0 | 276 | true | 0 | 1 | Storing URL into cookies or session? | 11,188,963 |
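To make the session approach recommended above concrete, here is a minimal sketch as Django view code; the session key 'template_url' and the stored/fallback URLs are illustrative assumptions, not part of the original answer.

```python
from django.http import HttpResponse

def remember_url(request):
    # Store the URL server-side; the client only ever holds an opaque
    # session ID, so the stored value itself cannot be tampered with.
    request.session['template_url'] = request.GET.get('url', '/')
    return HttpResponse('stored')

def use_url(request):
    # Retrieve it later, falling back to a default if nothing was stored.
    return HttpResponse(request.session.get('template_url', '/'))
```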
1 | 2 | 0 | 4 | 1 | 0 | 1.2 | 0 | And if it doesn't, is there any way to speed up my Python code for accessing PyTables on a 64-bit system (so no Psyco)? | 0 | python,cython,pypy,pytables,psyco | 2012-06-25T19:44:00.000 | 0 | 11,196,258 | There is some support for numpy. Running PyPy 1.9, I get the following message on importing numpy:
ImportError: The 'numpy' module of PyPy is in-development and not
complete. To try it out anyway, you can either import from 'numpypy',
or just write 'import numpypy' first in your program and then import
from 'numpy' as usual. | 0 | 589 | true | 0 | 1 | Does Pypy Support PyTables and Numpy? | 11,196,841 |
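A two-line sketch of the workaround that message describes (this only works on PyPy builds that ship numpypy):

```python
import numpypy  # PyPy-only shim; raises ImportError on CPython
import numpy    # now resolves to PyPy's partial numpy implementation
```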
1 | 1 | 0 | 3 | 1 | 0 | 1.2 | 0 | I am working on a multi threaded server application for processing serial/USB ports.
The issue is that if a cable gets unplugged, pyserial keeps reporting that the port is open and available. When reading, I only receive Empty exceptions (due to the read timeout).
How do I find out that a port has been disconnected so that I can handle this case?
Edit: OS is Ubuntu 12.04
Edit 2: Clarification - I am connecting to serial port devices via a serial-to-USB connector, thus the device being disconnected is a USB device. | 0 | python,pyserial | 2012-06-26T07:37:00.000 | 0 | 11,202,713 | A serial port has no real concept of a cable being connected or not connected.
Depending on the equipment you are using you could try to poll the DSR or CTS lines, and decide there is no device connected when those stay low over a certain time.
From wikipedia:
DTR and DSR are usually on all the time and, per the RS-232 standard
and its successors, are used to signal from each end that the other
equipment is actually present and powered-up
So if you've got a conforming device, the DSR line could be the thing you need.
Edit:
As you seem to use a USB2Serial converter, you can try to check whether the device node still exists - you don't need to try to open it.
So os.path.exists(devNode) could suffice. | 0 | 2,182 | true | 0 | 1 | How to find out if serial port is closed? | 11,202,829
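A small sketch combining both suggestions: read with a timeout and poll the device node between reads. The port name and baud rate are placeholders.

```python
import os
import serial  # pyserial

PORT = '/dev/ttyUSB0'  # hypothetical USB-to-serial device node

ser = serial.Serial(PORT, 9600, timeout=1)
while True:
    if not os.path.exists(PORT):
        # The kernel removed the node: the adapter was unplugged, even
        # though the already-open pyserial handle still looks "open".
        ser.close()
        break  # handle the disconnect (reconnect, alert, ...)
    data = ser.read(64)  # returns an empty string on timeout
```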
1 | 3 | 0 | 1 | 2 | 0 | 0.066568 | 0 | I have a GPIB device that I'm communicating with using a National Instruments USB to GPIB. the USB to GPIB works great.
I am wondering what can cause a GPIB device to be unresponsive. If I turn the device off and back on it will respond again, and when I run my program it responds at first, but then it cuts off and I can't even communicate with the GPIB device; it just times out.
Did I fill up the buffer?
Some specifics from another questioner
I'm controlling a National Instruments GPIB card (not USB) with PyVisa. The instrument on the GPIB bus is a Newport ESP300 motion controller. During a session of several hours (all the while sending commands to and reading from the ESP300) the ESP300 will sometimes stop listening and become unresponsive. All reads time out, and not even *idn? produces a response.
Is there something I can do that is likely to clear this state? E.g. drive the IFC line? | 0 | c#,python,visa,gpib | 2012-06-26T21:43:00.000 | 0 | 11,216,401 | There should be a clear command (the IEEE-488.2 command is "*CLS"). I always run that when I first connect to a device. Then make sure you have a good timeout duration; I found that around 1 second works for my device. Less than 1 second makes me miss the read after a write. Most of the time a timeout happens because you just missed the response, or because you are reading after a command that produces no return. Also make sure you check the error queue between writes, to verify that each write actually went through properly. | 0 | 2,223 | false | 0 | 1 | What can cause a GPIB to be unresponsive | 15,514,499
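A hedged sketch of that advice using the modern PyVISA API; the GPIB address, the 1-second timeout and the SCPI-style error query are assumptions that depend on your instrument.

```python
import visa  # PyVISA

rm = visa.ResourceManager()
inst = rm.open_resource('GPIB0::5::INSTR')  # hypothetical address
inst.timeout = 1000                          # milliseconds

inst.write('*CLS')              # clear status and error queues on connect
inst.write('SOME:COMMAND')      # placeholder for a real instrument command
print(inst.query('SYST:ERR?'))  # check the error queue after the write
```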
1 | 1 | 0 | 1 | 0 | 0 | 1.2 | 0 | I have a CentOS 5.8 server and am planning to install a later version of python (presumably 2.7). I have heard a lot of mention that CentOS relies quite heavily on 2.4 for many admin features etc. I'm trying to determine exactly what these features are (and whether I would actually be using them) so that I can decide whether to update python through yum or build from source.
Can anyone give me some more detailed information on what CentOS features have dependencies on Python 2.4. | 0 | python,centos | 2012-06-27T10:49:00.000 | 1 | 11,224,517 | If python2.7 is available on Yum, you should use that: the package management on large distros (redhat, ubuntu, debian, fedora ) takes care of maintaining parallel Python installs for you which won't conflict with each other.
This option should keep your system "/usr/bin/python¬ file pointing to Python2.4 and give you another python2.7 binary.
Otherwise, if you choose to build it from source, pick another prefix - /opt - (not even /usr/local will be quite safe) for building it.
You don't need to know exactly which system parts depend on Python 2.4 - just rest assured it will crash very hard and unpredictably if you try to modify the system Python itself. | 0 | 1,419 | true | 0 | 1 | CentOS 5.8 dependencies on Python 2.4? | 11,251,911 |
2 | 6 | 0 | 20 | 150 | 0 | 1 | 0 | How do you get Jenkins to execute python unittest cases?
Is it possible to get JUnit-style XML output from the built-in unittest package? | 0 | python,unit-testing,jenkins,junit,xunit | 2012-06-28T09:33:00.000 | 0 | 11,241,781 | I would second using nose. Basic XML reporting is now built in. Just use the --with-xunit command line option and it will produce a nosetests.xml file. For example:
nosetests --with-xunit
Then add a "Publish JUnit test result report" post build action, and fill in the "Test report XMLs" field with nosetests.xml (assuming that you ran nosetests in $WORKSPACE). | 0 | 104,678 | false | 0 | 1 | Python unittests in Jenkins? | 11,463,624 |
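For completeness, a minimal test module that nosetests will discover and report in nosetests.xml, assuming nose's naming conventions (file and method names starting with "test"):

```python
# test_example.py
import unittest

class TestMath(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(1 + 1, 2)

if __name__ == '__main__':
    unittest.main()
```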
2 | 6 | 0 | 4 | 150 | 0 | 0.132549 | 0 | How do you get Jenkins to execute python unittest cases?
Is it possible to get JUnit-style XML output from the built-in unittest package? | 0 | python,unit-testing,jenkins,junit,xunit | 2012-06-28T09:33:00.000 | 0 | 11,241,781 | I used nosetests. There are addons to output the XML for Jenkins. | 0 | 104,678 | false | 0 | 1 | Python unittests in Jenkins? | 11,241,965
1 | 3 | 0 | 1 | 1 | 0 | 0.066568 | 0 | I want to create a UI which invokes my Python script. Can I do it using JSP? If so, can you please explain how? Or can I do it using some other language? I have gone through many posts related to it but could not find much; please help me out. Explanations using examples would be more helpful.
Thanks In Advance.. | 0 | java,python,jsp | 2012-06-28T11:53:00.000 | 0 | 11,244,049 | It would be neater to expose your python API as RESTful services, that JSP can access using Ajax and display data in the page. I'm specifically suggesting this because you said 'JSP' not 'Java'. | 0 | 8,657 | false | 1 | 1 | Is it possible to invoke a python script from jsp? | 11,244,145 |
1 | 1 | 0 | 2 | 0 | 1 | 1.2 | 0 | I am starting an open source Python library that my company expects will be used by all of our customers. Since I am a sucker for proper presentation and practices, I have a question about file modes as saved by git. However, I want to avoid turning this into a best-practice type of discussion discouraged by StackOverflow, so here the is question in a form seeking a concrete answer:
Is there a reason why I shouldn't set the Python examples in my library to be executable? I tend to set the executable flag on Python scripts that I need to run and would prefer to do so (simply because it's generally slightly easier to type ./ than python), but I have noticed that most open source libraries differ from that in practice. I don't feel that security should be enforced that way, but I want to make sure. I would not be setting library files to be executable, just example files or tests that I feel should be executable.
As a related question, should library files that are never meant to be executed directly omit the hashbang (#!/usr/bin/env python) on the first line? | 0 | python,git,packaging | 2012-06-28T21:04:00.000 | 0 | 11,252,864 | Personally, I only set files I intend to be executed as scripts as executable. Using a least permissive model is a smart, if not ideal, design choice when it comes to security. If you don't need the permissions, don't use them.
I don't see any reason why omitting the shebang is a bad idea, other than that if someone else wants to make the file executable they have two steps instead of one. | 0 | 95 | true | 0 | 1 | Is there a reason to not set Python files' modes as executable in an open source git repository? | 11,253,601
1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | Is there a way to measure audio output level in Python? I'd like to measure the volume of a 30 second audio file every 1/10th of a second, then export the data into something like Excel. Is this possible? | 0 | python,excel,audio,measure | 2012-06-29T22:15:00.000 | 0 | 11,269,631 | I know that this is a long shot, but maybe there are some libavcodec/FFMpeg ports to python. It's always worth a shot to see if there is something that exists out there along these lines... | 0 | 799 | false | 0 | 1 | Using Python to measure audio output level? | 11,269,704 |
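Beyond FFmpeg bindings, the standard library alone can do this for PCM WAV input: a sketch computing RMS loudness in 100 ms windows with the wave and audioop modules and writing CSV, which Excel opens directly. File names are placeholders.

```python
import audioop
import csv
import wave

wav = wave.open('input.wav', 'rb')
width = wav.getsampwidth()                    # bytes per sample
frames_per_window = wav.getframerate() // 10  # 1/10th of a second

with open('levels.csv', 'w') as out:
    writer = csv.writer(out)
    writer.writerow(['time_s', 'rms'])
    t = 0.0
    while True:
        chunk = wav.readframes(frames_per_window)
        if not chunk:
            break
        writer.writerow([round(t, 1), audioop.rms(chunk, width)])
        t += 0.1
```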
1 | 1 | 0 | 1 | 1 | 1 | 0.197375 | 0 | Sorry about the newbie question, but when I download and unzip a third-party Python package, and then python setup.py install it, thereby making an egg directory in site-packages, what do I do with the original unzipped directory in the Download folder? Should I sudo copy & paste all the test/docs/README files along with the rest of the corresponding site-packages files? I've typically deleted them but don't think that's a smart thing to do. | 0 | python,macos,packages,egg,file-management | 2012-06-30T06:05:00.000 | 0 | 11,271,883 | If all you want is to use the installed Python package, then you don't need the downloaded directory at all. You can delete it if you like. If you want to use it for its docs, then you can keep it, or move it somewhere else. There's no connection between the installed package and the original unzipped directory you installed from, so you are free to do what you like with it. | 0 | 55 | false | 0 | 1 | What to do with the test/docs/readme files after the downloaded python package is built? | 11,273,716
1 | 4 | 0 | 3 | 15 | 1 | 0.148885 | 0 | I'm trying to figure out a way to share memory between Python processes. Basically, there are objects that multiple Python processes need to be able to read (only read) and use (no mutation). Right now this is implemented using redis + strings + cPickle, but cPickle takes up precious CPU time, so I'd like to not have to use that. Most of the Python shared-memory implementations I've seen on the internet seem to require files and pickles, which is basically what I'm doing already and exactly what I'm trying to avoid.
What I'm wondering is if there'd be a way to write a like...basically an in-memory python object database/server and a corresponding C module to interface with the database?
Basically the C module would ask the server for an address to write an object to, the server would respond with an address, then the module would write the object, and notify the server that an object with a given key was written to disk at the specified location. Then when any of the processes wanted to retrieve an object with a given key they would just ask the db for the memory location for the given key, the server would respond with the location and the module would know how to load that space in memory and transfer the python object back to the python process.
Is that wholly unreasonable or just really damn hard to implement? Am I chasing after something that's impossible? Any suggestions would be welcome. Thank you internet. | 0 | python,c,shared-memory | 2012-07-02T23:40:00.000 | 0 | 11,302,656 | If you don't want pickling, multiprocessing.sharedctypes might fit. It's a bit low-level, though; you get single values or arrays of specified types.
Another way to distribute data to child processes (in one direction) is multiprocessing.Pipe. That can handle Python objects, and it's implemented in C, so I cannot tell you whether it uses pickling or not. | 0 | 19,934 | false | 0 | 1 | Shared memory between python processes | 11,305,191
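A small sketch of the multiprocessing.sharedctypes route mentioned above: a read-only array of C doubles shared with child processes without re-pickling on each access (the data and process count are illustrative).

```python
from multiprocessing import Process
from multiprocessing.sharedctypes import Array

def worker(shared):
    # Children read straight from shared memory.
    print(sum(shared[:]))

if __name__ == '__main__':
    data = Array('d', [1.0, 2.0, 3.0], lock=False)  # 'd' = C double
    procs = [Process(target=worker, args=(data,)) for _ in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```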
1 | 4 | 0 | 1 | 1 | 1 | 0.049958 | 0 | I am trying to have the bus loads in PSS/E change by using a Python program. So I am trying to write a script in Python where I could change loads to different values between two buses in PSS/E. | 0 | python,psse | 2012-07-03T18:10:00.000 | 0 | 11,316,694 | You can use the API routine called "LOAD_CHNG_4" (search for this routine in the API.pdf documentation). This routine belongs to the set of load data specification functions. It can be used to modify the data of an existing load in the working case. | 0 | 5,345 | false | 0 | 1 | Python and PSS/E | 26,021,216
1 | 8 | 0 | 3 | 61 | 1 | 0.07486 | 0 | I'm trying to find out a way in python to redirect the script execution log to a file as well as stdout in a pythonic way. Is there any easy way of achieving this? | 0 | python | 2012-07-04T08:13:00.000 | 0 | 11,325,019 | You should use the logging library, which has this capability built in. You simply add handlers to a logger to determine where to send the output. | 0 | 174,224 | false | 0 | 1 | How to output to the console and file? | 11,325,504 |
1 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | Not sure if the title is a great way to word my actual problem and I apologize if this is too general of a question but I'm having some trouble wrapping my head around how to do something.
What I'm trying to do:
The idea is to create a MySQL database of 'outages' for the thousands of servers I'm responsible for monitoring. This would give a historical record of downtime and an easy way to retroactively tell what happened. The database will be queried by a fairly simple PHP form where one could browse these outages by date or server hostname etc.
What I have so far:
I have a Python script that runs periodically as a cron job to call the Pingdom API and get a list of current down alerts reported by the Pingdom service. For each down alert, a row is inserted into a database containing a hostname, timestamp, Pingdom check id, etc. I then have a simple PHP form that works fine to query for down alerts.
The problem:
What I have now is missing some important features and isn't quite what I'm looking for. Currently, querying this database would give me a simple list of down alerts like this:
Pingdom alerts for Test_Check from 2012-05-01 to 2012-06-30:
test_check was reported DOWN at 2012-05-24 00:11:11
test_check was reported DOWN at 2012-05-24 00:17:28
test_check was reported DOWN at 2012-05-24 00:25:24
test_check was reported DOWN at 2012-05-24 00:25:48
What I would like instead is something like this:
test_check was reported down for 15 minutes (2012-05-24 00:11:11 to 2012-05-24 00:25:48) (link to comment on this outage) (link to info on this outage).
In this ideal end result, there would be one row containing an outage ID, the hostname of the server Pingdom is reporting down, the timestamp for when that box was originally reported down and the timestamp for when it was reported up again, along with a 'comment' field I (and other admins) would use to add notes about this particular event after the fact.
I'm a little lost as to how I will go about combining several down alerts that occur within a short period of time into a single 'outage' that would be inserted into a separate table in the existing MySQL database where individual down alerts are currently being stored. This would allow me to comment and add specific details for future reference and would generally make this thing a lot more usable. I'm not sure if I should try to do this when pulling the alerts from Pingdom or if I should re-process the alerts after they're collected to populate the new table, and I'm not quite sure how I would work out either of those options.
I've been wracking my brain trying to figure out how to do this. It seems like a simple concept but I'm a somewhat inexperienced programmer (I'm a Linux admin by profession) and I'm stumped at this point.
I'm looking for any thoughts, advice, examples or even just a more technical explanation of what I'm trying to do here to help point me in the right direction. I hope this makes sense. Thanks in advance for any advice :) | 1 | php,python,mysql,json,pingdom | 2012-07-04T12:56:00.000 | 0 | 11,329,588 | The most basic solution with the setup you have now would be to:
Get a list of all events, ordered by server ID and then by time of the event
Loop through that list and record the start of a new event / end of an old event for your new database when:
the server ID changes
the time between the current event and the previous event from the same server is bigger than a certain threshold you set.
Store the old event you were monitoring in your new database
The only complication I see, is that the next time you run the script, you need to make sure that you continue monitoring events that were still taking place at the time you last ran the script. | 0 | 131 | false | 0 | 1 | How can I combine rows of data into a new table based on similar timestamps? (python/MySQL/PHP) | 11,329,769 |
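A sketch of the loop described above: collapsing individual DOWN alerts into outages whenever alerts from the same check fall within a gap threshold. The row layout and the 15-minute gap are assumptions.

```python
from datetime import timedelta

GAP = timedelta(minutes=15)

def group_outages(alerts):
    """alerts: (check_id, datetime) pairs, pre-sorted by check_id then time."""
    outages = []
    current = None  # [check_id, start, end]
    for check_id, ts in alerts:
        if current and current[0] == check_id and ts - current[2] <= GAP:
            current[2] = ts               # extend the ongoing outage
        else:
            if current:
                outages.append(tuple(current))
            current = [check_id, ts, ts]  # start a new outage
    if current:
        outages.append(tuple(current))
    return outages  # rows to insert into the new outages table
```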
1 | 1 | 0 | 0 | 3 | 0 | 0 | 1 | I'm using Python 2.7 and the paramiko library. A client app running on Windows sends SSH commands to a server app running on Linux.
when I send vi command, I get the response
<-[0m<-[24;2H<-[K<-[24;1H<-[1m~<-[0m<-[25;2H....
I don't know what these characters mean or how to process them. I've been struggling with this for hours; please help me. | 0 | python,linux,paramiko | 2012-07-05T10:21:00.000 | 0 | 11,342,314 | Those look like ANSI/VT100 terminal control codes, which suggests that something which thinks it is attached to a terminal is sending them, but they are being received by something which doesn't know what to do with them.
Now you can Google for 'VT100 control codes' and learn what you want. | 0 | 223 | false | 0 | 1 | vi command returns error format data? | 11,360,629 |
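If you just need the plain text, here is a sketch that strips the common CSI escape sequences shown above with a regular expression; this is not a full terminal emulator, so cursor-movement semantics are lost.

```python
import re

ANSI_CSI = re.compile(r'\x1b\[[0-9;?]*[A-Za-z]')

def strip_ansi(text):
    return ANSI_CSI.sub('', text)

print(strip_ansi('\x1b[0m\x1b[24;2H\x1b[K~'))  # prints: ~
```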
1 | 1 | 0 | 8 | 9 | 0 | 1.2 | 1 | I'm interested in what pitfalls there can be (besides Python not being installed on the target system) when using Python for deb package flow-control scripts (preinst, postinst, etc.). Would it be practical to implement those scripts in Python rather than sh?
As I understand it, it's at least possible. | 0 | python,debian,packaging,deb | 2012-07-05T15:28:00.000 | 1 | 11,347,613 | The only reason this isn't commonly done, afaik, is that it's not convention, and Python isn't usually more useful or straightforward than plain shell script for the sorts of things that maintainer scripts do. When it is more useful, you can often break out the Python-needing functionality into a separate Python script which is called by the maintainer scripts.
It can help to follow convention in this sort of situation, since there are a lot of helpful tools and scripts (e.g., Lintian, Debhelper) which generally assume that maintainer scripts use bash. If they don't, it's ok, but those tools may not be as useful as they would be otherwise. The only other issue I think you need to be aware of is that if your preinst or postrm scripts need Python, then Python needs to be a pre-dependency (Pre-Depends) of your package instead of just a Depends.
That said, I've found it useful to use Python in a maintainer script before. | 0 | 912 | true | 0 | 1 | Will it be practical to implement deb preinst, postint, etc. scripts in Python, not in sh | 11,350,615 |
1 | 1 | 0 | 11 | 5 | 0 | 1.2 | 0 | I'm looking to add simple repeating tasks to my current application, and I'm looking at the uWSGI signals API, where there are two decorators, @timer and @rbtimer. I've tried looking through the docs and even the Python source, but it appears the implementation is lower-level than that, somewhere in the C code.
I'm familiar with the concept of a red-black tree, but I'm not sure how it relates to timers. If someone could clear things up or point me to documentation I might have missed, I'd appreciate it. | 0 | python,timer,uwsgi | 2012-07-05T19:06:00.000 | 1 | 11,350,907 | @timer uses kernel-level facilities, so they are limited in the maximum number of timers you can create.
@rbtimer is completely userspace so you can create an unlimited number of timers at the cost of less precision | 0 | 826 | true | 1 | 1 | What's the difference between timer and rbtimer in uWSGI? | 11,353,126 |
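A sketch of both decorators via the uwsgidecorators helper module that ships with uWSGI; this only runs inside a uWSGI process, and the intervals are illustrative.

```python
from uwsgidecorators import timer, rbtimer

@timer(60)      # kernel-backed timer: precise, but limited in number
def every_minute(signum):
    print("kernel timer fired")

@rbtimer(60)    # red-black-tree timer: unlimited, slightly less precise
def every_minute_rb(signum):
    print("rbtimer fired")
```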
1 | 2 | 0 | 0 | 1 | 1 | 1.2 | 0 | I read an article about a regular expression to detect base64, but when I try it in yara-python it gives an "unterminated regular expression" error.
the regular expression is:
(?:[A-Za-z0-9+/]{4}){2,}(?:[A-Za-z0-9+/]{2}[AEIMQUYcgkosw048]=|[A-Za-z0-9+/][AQgw]==)
Could anyone offer a suggestion, please?
thanks | 0 | python,regex,base64 | 2012-07-06T07:38:00.000 | 0 | 11,357,851 | I would suggest escaping / character in the [A-Za-z0-9+/] block, because while unescaped it defines regular expression start/end. | 0 | 5,830 | true | 0 | 1 | regular Expression to detect base64 | 11,357,972 |
2 | 3 | 0 | 3 | 0 | 1 | 0.197375 | 0 | I'm writing a pretty big and complex application, so I want to stick to design patterns to keep the code quality high. I have a problem with one instance that needs to be available to almost all other instances.
Let's say I have an instance of BusMonitor (a class for logging messages) and other instances that use this instance for logging actions, for example a Reactor that parses incoming frames from a network protocol and, depending on the frame, logs different messages.
I have one main instance that creates BusMonitor, Reactor and a few more instances.
Now I want Reactor to be able to use the BusMonitor instance; how can I do that according to design patterns?
Setting it as a variable on Reactor seems ugly to me:
self._reactor.set_busmonitor(self._busmonitor)
I would do that for every instance that needs access to BusMonitor.
Importing this instance seems even worse.
Although I could make BusMonitor a singleton - not as a class but as a module - and then import that module, I want to keep things in classes to retain consistency.
What approach would be the best? | 0 | python,design-patterns,singleton | 2012-07-06T11:39:00.000 | 0 | 11,361,488 | I want to keep things in classes to retain consistency
Why? Why is consistency important (other than being a hobgoblin of little minds)?
Use classes where they make sense. Use modules where they don't. Classes in Python are really for encapsulating data and retaining state. If you're not doing those things, don't use classes. Otherwise you're fighting against the language. | 0 | 105 | false | 0 | 1 | Python app design patterns - instance must be available for most other instances | 11,362,386 |
2 | 3 | 0 | 0 | 0 | 1 | 1.2 | 0 | I'm writing a pretty big and complex application, so I want to stick to design patterns to keep the code quality high. I have a problem with one instance that needs to be available to almost all other instances.
Let's say I have an instance of BusMonitor (a class for logging messages) and other instances that use this instance for logging actions, for example a Reactor that parses incoming frames from a network protocol and, depending on the frame, logs different messages.
I have one main instance that creates BusMonitor, Reactor and a few more instances.
Now I want Reactor to be able to use the BusMonitor instance; how can I do that according to design patterns?
Setting it as a variable on Reactor seems ugly to me:
self._reactor.set_busmonitor(self._busmonitor)
I would do that for every instance that needs access to BusMonitor.
Importing this instance seems even worse.
Although I could make BusMonitor a singleton - not as a class but as a module - and then import that module, I want to keep things in classes to retain consistency.
What approach would be the best? | 0 | python,design-patterns,singleton | 2012-07-06T11:39:00.000 | 0 | 11,361,488 | I found good way I think. I made module with class BusMonitor, and in the same module, after class definition I make instance of this class. Now I can import it from everywhere in project and I retain consistency using classes and encapsulation. | 0 | 105 | true | 0 | 1 | Python app design patterns - instance must be available for most other instances | 11,471,003 |
1 | 2 | 0 | 4 | 10 | 1 | 1.2 | 0 | I was using ubuntu.
I found that many Python libraries installed went in both /usr/lib/python and /usr/lib64/python.
When I print a module object, the module path showed that the module lived in /usr/lib/python.
Why do we need the /usr/lib64/python directory then?
What's the difference between these two directories?
BTW
Some package-management scripts and egg-info entries that live in both directories are actually links to packages in /usr/share.
Most Python modules are just links, but the .so files are not.
What version of Python are you running? If you are running the 32-bit version, then you probably won't need those files. | 0 | 10,916 | true | 0 | 1 | What's the difference between /usr/lib/python and /usr/lib64/python? | 11,370,887 |
1 | 1 | 0 | 2 | 1 | 0 | 1.2 | 0 | I have used Python's C API to call some Python code from my C code, and now I want to profile my Python code for bottlenecks. I came across the PyEval_SetProfile API and am not sure how to use it. Do I need to write my own profiling function?
I will be very thankful if you can provide an example or point me to an example. | 0 | python,profiling | 2012-07-06T23:51:00.000 | 0 | 11,371,057 | If you only need to know the amount of time spent in the Python code, and not (for example), where in the Python code the most time is spent, then the Python profiling tools are not what you want. I would write some simple C code that sampled the time before and after the Python interpreter invocation, and use that. Or, C-level profiling tools to measure the Python interpreter as a C function call.
If you need to profile within the Python code, I wouldn't recommend writing your own profile function. All it does is provide you with raw data; you'd still have to aggregate and analyze it. Instead, write a Python wrapper around your Python code that invokes the cProfile module to capture data that you can then examine. | 0 | 276 | true | 0 | 1 | Profiling Python via C-api (How to ? ) | 11,371,096
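A sketch of that wrapper idea: profile the Python entry point your C code invokes with cProfile, dump the stats to a file, and inspect them with pstats (the entry point here is a stand-in).

```python
import cProfile
import pstats

def entry_point():
    # Stand-in for the Python code the C API actually calls.
    sum(range(10 ** 6))

cProfile.run('entry_point()', 'profile.out')
pstats.Stats('profile.out').sort_stats('cumulative').print_stats(10)
```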
2 | 3 | 0 | 0 | 0 | 0 | 1.2 | 0 | I currently am developing a website in the Symfony2 framework, and i have written a Command that is run every 5 minutes that needs to read a tonne of RSS news feeds, get new items from it and put them into our database.
Now at the moment the command takes about 45 seconds to run, and during those 45 seconds it also uses between 50% and 90% of the CPU, even though I have already optimized it a lot.
So my question is, would it be a good idea to rewrite the same command in something else, for example Python? Are the RSS/Atom libraries available for Python faster and more optimized than the ones available for PHP?
Thanks in advance,
Jaap | 0 | php,python,symfony | 2012-07-08T09:43:00.000 | 0 | 11,382,163 | Is solved this by adding a usleep() function at the end of each iteration of a feed. This drastically lowered cpu and memory consumption. The process used to take about 20 minutes, and now only takes around and about 5! | 0 | 269 | true | 1 | 1 | Reading RSS feeds in php or python/something else? | 15,361,568 |
2 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | I currently am developing a website in the Symfony2 framework, and i have written a Command that is run every 5 minutes that needs to read a tonne of RSS news feeds, get new items from it and put them into our database.
Now at the moment the command takes about 45 seconds to run, and during those 45 seconds it also uses between 50% and 90% of the CPU, even though I have already optimized it a lot.
So my question is, would it be a good idea to rewrite the same command in something else, for example Python? Are the RSS/Atom libraries available for Python faster and more optimized than the ones available for PHP?
Thanks in advance,
Jaap | 0 | php,python,symfony | 2012-07-08T09:43:00.000 | 0 | 11,382,163 | You could check the cache headers of the feeds before parsing them.
This way you can skip the expensive parsing operations for probably a lot of feeds.
Store a last_updated date in your DB for each source and then check it against possible cache headers. There are several, so see what fits best or is served most often, or check against all of them (a conditional-GET sketch follows below).
Headers could be:
Expires
Last-Modified
Cache-Control
Pragma
ETag
But beware: you have to trust your feed sources.
Not every feed provides such headers or provides them correctly.
But I am sure a lot of them do. | 0 | 269 | false | 1 | 1 | Reading RSS feeds in php or python/something else? | 11,393,069
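Here is the conditional-GET sketch of that idea, stdlib only (Python 2's urllib2; use urllib.request on Python 3). The URL and the stored header values are placeholders; in practice they come from your database.

```python
import urllib2

req = urllib2.Request('http://example.com/feed.rss')
req.add_header('If-Modified-Since', 'Mon, 09 Jul 2012 00:00:00 GMT')
req.add_header('If-None-Match', '"etag-from-the-last-fetch"')

try:
    body = urllib2.urlopen(req).read()  # feed changed: parse, store new headers
except urllib2.HTTPError as e:
    if e.code == 304:
        pass                            # Not Modified: skip the expensive parse
    else:
        raise
```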
1 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | I have emacs 24.1.1, which comes with GNU's python.el in byte-compiled form at emacs/24.1/lisp/progmodes.
I downloaded Fabian Gallina's python.el (note the same name) and placed it at emacs/site-lisp, which is part of emacs' load-path.
When I edit a Python file, it is Gallina's mode which is loaded, NOT GNU's. However, I have not put (require 'python) in my .emacs file, despite what Gallina's documentation suggests.
Why is this? Why does Gallina's python.el take precedence over GNU's? Why does it get loaded without (require 'python)? | 0 | python,emacs,python-mode | 2012-07-09T01:22:00.000 | 0 | 11,388,125 | To load an already loaded library from new place, write in your Emacs init-file something like
(unload-feature 'python)
(load "FROM-NEW-PLACE/python") | 0 | 828 | false | 0 | 1 | Understanding which python mode is loaded by emacs / Aquamacs and why | 15,865,420
1 | 2 | 0 | 1 | 0 | 1 | 0.099668 | 0 | What is the degree of source code dependency that can be resolved by examining at the source code for the following programming languages -- Java, Python and Lisp.
For example, can I say for sure, by looking at a collection of Python files, that the "import" statements in every file are the only (source) dependencies?
In Lisp, I'm aware of the (load "filename") command that allows including functions defined in other files. | 0 | java,python,lisp | 2012-07-10T06:07:00.000 | 0 | 11,407,544 | Even if you find an "import" statement of whatever kind, there is no guarantee that the code actually uses it.
In Java you can import a namespace, but you can also use the fully qualified name of a class without any import statement:
javax.swing.JButton but = new javax.swing.JButton("MyButton");
And last but not least, all of them support some kind of symbolic programming. You may use a plain string to get code loaded or executed:
Object x = Class.forName("javax.swing."+compName);
return x.toString(); | 0 | 87 | false | 0 | 1 | Listing source dependencies | 11,407,713 |
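The Python analogue of the reflection example above: an import resolved from a string at runtime, which a static scan of import statements would miss.

```python
import importlib

mod_name = 'os' + '.path'              # assembled at runtime
mod = importlib.import_module(mod_name)
print(mod.join('a', 'b'))
```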
1 | 2 | 0 | 7 | 0 | 0 | 1.2 | 0 | I'm writing a script to be run as a cron job, and I was wondering: is there any difference between the Ruby MySQL and Python MySQL bindings in terms of speed/efficiency? Would I be better off just using PHP for this task?
The script will get data from a mysql database with 20+ fields and store them in another table every X amount of minutes. Not much processing of the data will be necessary. | 1 | python,mysql,ruby | 2012-07-11T11:28:00.000 | 0 | 11,431,679 | Just pick the language you feel most comfortable with. It shouldn't make a noticeable difference.
After writing the application, you can search for bottlenecks and optimize them. | 0 | 254 | true | 0 | 1 | Python MySQL vs Ruby MySQL | 11,431,795
1 | 1 | 0 | 6 | 6 | 1 | 1.2 | 0 | I would like to protect my Python source code. I know there is no absolute protection possible, but there should still be some means to make it difficult or time-consuming enough. I want
1) to remove all documentation, comments automatically and
2) to systematically change the names of variables and functions within a module (obfuscation?), so that I can keep an external interface (with meaningful names) while the internal names of variables and functions are impossible to pronounce.
Perhaps the best solution, which would make 1) and 2) redundant, is the following:
3) Is there a simple way to compile python modules to .so libraries, with a clear interface and which can be used by other python modules? It would be similar as building C and C++ extensions with distutils, except that the source code is python itself rather than C/C++. The idea is to organize all "secret code" into modules, compile them, and then import them in the rest of the python code which is not considered secret.
Again, I am aware that everything can be reverse-engineered; I think in pragmatic terms, most average developers would not be able to reverse-engineer the code, and even if they could, ethical/legal/timing reasons would make them think twice about whether they really want to work on it. | 0 | python,compilation,source-code-protection | 2012-07-11T15:48:00.000 | 0 | 11,436,484 | As mgilson mentioned in the comments, Cython is probably your best bet here. Generally speaking, you can use it to convert your pure-python source code into compiled extension modules. While the primary intent of Cython is enhanced performance, there shouldn't be any barriers to using it for source-code protection. The extension modules it outputs aren't limited in any special ways, so anything you were able to do from within Python before, you should be able to do from the Cython-generated extension modules. Cython does have a few known limitations in terms of supported features that may need to be worked around but, overall, it looks well suited to serving your purpose. | 0 | 11,732 | true | 0 | 1 | How to protect and compile python source code into a .so library? | 11,438,657
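A minimal setup.py sketch for the Cython route (the module name is illustrative): compile a pure-Python module into a C extension, which becomes a .so on Linux.

```python
from distutils.core import setup
from Cython.Build import cythonize

setup(ext_modules=cythonize('secret_module.py'))

# build with:  python setup.py build_ext --inplace
# afterwards:  import secret_module   # loads the compiled extension
```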
1 | 2 | 0 | 2 | 1 | 0 | 1.2 | 0 | I am looking for help debugging Mechanize. When I navigate to a page and attempt to call .read(), I get a non-unicode result on about 1 out of every 5 attempts. The non-unicode result looks like the following:
úRW!¤cêLÒ0T¸²ÖþF\<äs +€²Ü@9‚ÈøMq1;=®}ÿ½8¹WP[ëæåñ±øþûÚc!ˆÍzòØåŸ¿þUüþf>àSÕ‹‚~é÷bƪ}Ãp#',®ˆâËýÊæÚ³õµÊZñMyô‘;–sä„IWÍÞ·mwx¨|ýHåÀ½A ºÒòÀö QNqÒ4O{Žë+óZu"úÒ¸½vº³ÔP”º‘cÇ—Êâ#<31{HiºF4N¨ÂÀ"Û´>•ŠÜÅò€U±§¶8ÑWEú(ƒ‘cÀWÄ~‡ ‡—¯J$ÁvQìfj²a$DdªÐŠÐ5[ü(4` ŒÛ"–<‹eñƒ(‚¹=[U¤#íQhÉÔô6(î$M ²-Õ£›Œndû8mØïõ7;"¨zÒ€F°¬@Xˆ€*õ䊈xŸÊ%úÅò= kôc¡¢ØyœÑy³í>ËÜ-¥m+ßê¸ïmì Ycãa®-Ø•†ê¸îmq«x} i¥GEŽj]ÏëUÆËGS°êõ½AxwÕµêúR¶à|ôO¹ýüà:S¸S‡®U%}•Cî3ãg~QÛó´Ó]ïn[FwuCm6žš[«J®™›Ý-£A˜Ö€sµ1khí"”/\S~u£C7²Í#wÑ»@ç@sô,ÆQèÊôó®.ä(å*æ‡#÷»'õµ{à˜Õ„SÒ%@ˆtL †¸±¹åI{„Õv#³ëŠUG…s‡•·Aíí»8¡Ò|Ö«à4€¼dˆ¸—áÐåqA‘ï $Õ[NØÖ£o\s£Z_¾^ Äóo~?<Ú¿Ùÿ]À@@bÈ%¶Á$¦G oË·ò}[µ+>ðµ°Íöе?R1úQ–&PãýT¥¢ði+|óf«ú,â,ÛQ㤚ӢÏìÙT£šÚA䡳£
I have tried the normal Mechanize parser (mechanize.Browser()) as well as the commonly suggested alternative (factory=mechanize.RobustFactory()).
Any suggestions for next steps? | 0 | python,mechanize | 2012-07-11T16:09:00.000 | 0 | 11,436,837 | Problem solved:
If you are getting similar output, check the page headers; the response is probably gzipped. After instantiating the browser, call set_handle_gzip(True). | 0 | 155 | true | 1 | 1 | python mechanize odd .read() output | 11,458,357
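The fix from the answer as a short sketch (the URL is a placeholder):

```python
import mechanize

br = mechanize.Browser()
br.set_handle_gzip(True)  # transparently decode gzipped responses
response = br.open('http://example.com/')
print(response.read()[:200])
```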
1 | 2 | 0 | 0 | 1 | 0 | 0 | 0 | So I'm writing a set of C++ classes that run Python scripts. I've gotten code working to run any script I want from a specified directory, and I can pass and return values just fine.
But the one issue I have is I can't seem to find out how to set Python doubles up.
For example, when I was using long values, I could use "PyLong_AsLong([whatever value I'm trying to convert to a long from a PyObject])" -- but is there a PyDouble_Something in the Python/C API I can use for that?
My google searching has so far turned up nothing. | 0 | python,python-c-api | 2012-07-12T02:15:00.000 | 0 | 11,444,173 | I found out the answer.
I was mistaken, and I thought there were doubles in Python, but there aren't.
I got it to work using a "PyFloat" object, and just converted the double like this:
"PyFloat_FromDouble([the double I wanted as a PyObject])" | 0 | 73 | false | 0 | 1 | doubles in embedded python | 11,444,316 |
2 | 5 | 0 | 0 | 2 | 0 | 0 | 0 | I want to write a program that ssh's into remote boxes and runs jobs there if the remote computer is not actively being used. I'll be logging in as clusterJobRunner@remoteBox, and the other user will be logged in as someLocalUser@remoteBox.
Is there a way to see if a remote user is actively using the box using either Python or Java? | 0 | java,python | 2012-07-16T18:10:00.000 | 1 | 11,510,032 | I second the answer by @Eero Aaltonen -- you should run your stuff under nice. A Linux computer can run at 100% CPU busy, yet feel nice and fast for the user, if the extra tasks are all under nice; the scheduler will only run the nice tasks when the main user's tasks are idle.
But if you want to figure out if the machine is being used, I suggest you look into the w command. Try man w at your prompt. The w command prints the load average for the machine, and a list of users and how much time they have been using (a combined time that includes any background tasks they are running, plus a time for their main task). | 0 | 1,087 | false | 0 | 1 | How can I find out if someone is actively using a Linux computer in Python or Java? | 11,511,519 |
2 | 5 | 0 | 1 | 2 | 0 | 0.039979 | 0 | I want to write a program that ssh's into remote boxes and runs jobs there if the remote computer is not actively being used. I'll be logging in as clusterJobRunner@remoteBox, and the other user will be logged in as someLocalUser@remoteBox.
Is there a way to see if a remote user is actively using the box using either Python or Java? | 0 | java,python | 2012-07-16T18:10:00.000 | 1 | 11,510,032 | In Java you can execute the users Linux command using Runtime.exec(), grab the standard output and get it into a parsable String. I don't think there are any OS-independent ways to do this. | 0 | 1,087 | false | 0 | 1 | How can I find out if someone is actively using a Linux computer in Python or Java? | 11,510,091 |
1 | 1 | 0 | 3 | 8 | 1 | 1.2 | 0 | [Using Python 3.2]
If I don't provide encoding argument to open, the file is opened using locale.getpreferredencoding(). So for example, on my Windows machine, any time I use open('abc.txt'), it would be decoded using cp1252.
I would like to switch all my input files to utf-8. Obviously, I can add encoding = 'utf-8' to all my open function calls. Or, better, encoding = MY_PROJECT_DEFAULT_ENCODING, where the constant is defined at the global level somewhere.
But I was wondering if there is a clean way to avoid editing all my open calls, by changing the "default" encoding. Is it something I can change by changing the locale? Or by changing a parameter inside the locale? I tried to follow the Python manual but failed to understand how this is supposed to be used.
Thanks! | 0 | python,character-encoding,python-3.x,locale | 2012-07-17T00:13:00.000 | 0 | 11,514,414 | In Windows, with Python 3.3+, execute chcp 65001 in the console or a batch file before running Python in order to change the locale encoding to UTF-8. | 0 | 3,877 | true | 0 | 1 | Changing the "locale preferred encoding" | 11,516,682 |
1 | 2 | 0 | 2 | 0 | 0 | 0.197375 | 0 | We've got a number of Perl and Python scripts we want to expose to some of our teammates for casual usage, and we really don't want to deal with getting them set up with git, Perl, Python, dependencies, etc.
One idea we had was to write a descriptor for each script as to what arguments it needed; and then let a simple HTML page call a CGI script with the appropriate arguments, wait and return stdout to the user.
This seems like such a simple need that I'm amazed I can't find anything like it out there: no framework that renders the form and streams a virtual console screen back to the user...
There are, of course, major security concerns. Can anyone recommend a solution that does the above, or something else similar? | 0 | python,http,command-line,cgi | 2012-07-17T00:44:00.000 | 1 | 11,514,608 | Are the teammates developers or comfortable with the command line? If so, I would propose SSH.
Run SSHD on the box with the scripts. On Windows, this is easy with cygwin, otherwise it's there by default on Mac and Linux
The client logs in (ssh user@host) and runs the script. Set up security with certificates and you won't even have to type your password.
If there are problems, I would much rather be at the command line and able to debug the script than at the end of an opaque web page.
Maintenance will be a lot easier too. | 0 | 132 | false | 0 | 1 | Exposing commandline tools remotely to users | 11,515,334 |
1 | 2 | 0 | 2 | 0 | 1 | 1.2 | 0 | I am reading an ASCII file from Linux (Debian) into a Python CGI script, where it is edited via a web page and then saved.
If I use a graphical text editor, the edited and unedited files appear the same and are correctly formatted.
Using vi, the edited file contains Ctrl-M as the EOL marker with all lines rolled into one, but the unedited file is correctly formatted. Using :set list in vi to see control characters, the edited file remains as described above, but in the unedited file $ appears as the EOL marker.
I know the Linux EOL is 0x0A (LF) and that Ctrl-M is 0x0D (CR), but what is the $?
Why does $ format correctly and ctrl M does not? | 0 | python,linux,ascii,vi | 2012-07-17T10:34:00.000 | 1 | 11,520,713 | The $ is displayed by vi (in certain modes). It is not in the file contents. You could use od -cx yourfile to check that. | 0 | 409 | true | 0 | 1 | LINUX End of Line | 11,520,748 |
1 | 2 | 1 | 2 | 0 | 0 | 1.2 | 0 | So I made a game in PHP that worked fairly well, a simple game fairly similar to tic-tac-toe, I didn't really want to go much further with PHP improving the game. With that in mind I decided to learn Python; I'm familiar with the basics now. I used simple math, dictionaries and conditional statements to create a mock-up of my game. However it is turn based and I'd prefer the two players not be on the same computer physically taking turns with the computer.
So what I envision my final product to be is a stand-alone app which each user has on their computer. They execute the app and enter a username, then are brought to a screen showing the other users who have logged in the same way. From there, two users can mutually agree to start a round of the game, and after completion they are brought back to the 'waiting room'.
Now for something like this would I need (or be greatly helped by) a framework? If so which one(s)?
Would this need a database on a server, or could all data be stored on the user's computers?
Would I be dealing with CGI or Sockets or both in creating something like this?
Would making this game into a web-app be easier? (similar to something I would create if I used PHP and ran the game off of a website)
I would appreciate reading material on the subject. A link to example source code that solves a problem similar to mine gets a gold star =)
Thank you all for you time, I greatly appreciate everything. | 0 | php,python,network-programming | 2012-07-18T05:32:00.000 | 0 | 11,534,826 | General Response
Especially if you include a "waiting room" and such things, or want this to be widely usable, this is a rather big project (definitely not a bad thing, but you may want to do some small projects first to get your feet wet with Python programming for the web). However, it is relatively easy to have a simple terminal-based, turn-based game that transmits data over the network between its players; I'd focus on making the simple version first to get a feel for what is involved. That being said, here are some answers to the specific questions you asked; unfortunately they can't be too detailed, because there is so much to learn about these topics.
Specific Answers
Now for something like this would I need (or be greatly helped by) a framework? If so which one(s)?
There are frameworks that would help with several different parts of this project, but you have some big design decisions to make before you start looking into frameworks to help with the implementation.
Would this need a database on a server, or could all data be stored on the user's computers?
Having a "waiting room" implies that there is some kind of server set up to facilitate making connections between players. Whether a database is necessary depends entirely on the scale of the application. If you want to keep track of users/enable repeat logins, there's almost certainly a database involved.
Would I be dealing with CGI or Sockets or both in creating something like this?
Read more about what CGI and sockets are and think about this one.
Would making this game into a web-app be easier? (similar to something I would create if I used PHP and ran the game off of a website)
There seem to be more resources to help making a web app version, but there are a whole new set of challenges and perhaps even more new things to learn about. It depends partly on what you are already comfortable with. Making a web app and making a standalone app that uses the internet are, perhaps surprisingly, very different, but both will involve a lot of new learning.
Conclusion
Well, I hope that was helpful in some way. Best of luck! | 0 | 1,912 | true | 0 | 1 | Wanting to make a turn-based python game. Where do I go from here? | 11,535,406 |
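A bare-bones sketch of that "simple version": two players exchanging one move per turn over a TCP socket. Host, port and the move format are placeholders; a real game needs framing, validation and error handling.

```python
import socket

# Player 1 hosts the game and waits for an opponent.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(('', 5000))
srv.listen(1)
conn, addr = srv.accept()
move = conn.recv(1024)       # opponent's move arrives as bytes
conn.sendall(b'X at 1,1')    # reply with our move
conn.close()

# Player 2, on another machine, would connect like this:
#     cli = socket.create_connection(('host-address', 5000))
#     cli.sendall(b'O at 0,0')
#     print(cli.recv(1024))
```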
1 | 1 | 0 | 0 | 3 | 0 | 0 | 0 | I would like to use html5 validator from LiipFunctionalTestBundle in my Symfony2 project.
So I followed the instructions on the bundle's GitHub page, but I got this error during the Python build:
IOError: [Errno 2] No such file or directory: './syntax/relaxng/datatype/java/dist/html5-datatypes.jar'
indeed, there is a "dist" folder under that path, but it's empty (no files inside).
I also tried to download the file from daisy-pipeline, but it gets deleted after running the Python build again.
I'm using Java 1.7.0_04 on Ubuntu x64 | 0 | java,python,html,symfony,liipfunctionaltestbundle | 2012-07-18T11:48:00.000 | 0 | 11,540,645 | As noted above:
You need to install the JDK, not only the JRE. That is because you need the Java compiler. | 0 | 218 | false | 1 | 1 | html5 checker compilation | 26,174,556
2 | 2 | 0 | 10 | 2 | 1 | 1 | 0 | I am currently writing a new test runner for Django and I'd like to know if it's possible to TDD my test runner using my own test runner. Kinda like compiler bootstrapping where a compiler compiles itself.
Assuming it's possible, how can it be done? | 0 | python,django,testing,tdd,bootstrapping | 2012-07-18T16:11:00.000 | 0 | 11,545,759 | Yes. One of the examples Kent Beck works through in his book "Test Driven Development: By Example" is a test runner. | 0 | 123 | false | 1 | 1 | Is it possible to TDD when writing a test runner? | 11,546,149 |
2 | 2 | 0 | 4 | 2 | 1 | 1.2 | 0 | I am currently writing a new test runner for Django and I'd like to know if it's possible to TDD my test runner using my own test runner. Kinda like compiler bootstrapping where a compiler compiles itself.
Assuming it's possible, how can it be done? | 0 | python,django,testing,tdd,bootstrapping | 2012-07-18T16:11:00.000 | 0 | 11,545,759 | Bootstrapping is a cool technique, but it does have a circular-definition problem. How can you write tests with a framework that doesn't exist yet?
Bootstrapping compilers can get around this problem in several ways, but it's my understanding that usually the first implementation isn't bootstrapped. Later bootstraps would be rewrites that then use the original compiler to compile themselves.
So use an existing framework to write it the first time out. Then, once you have a stable release, you can re-write the tests using your own test-runner. | 0 | 123 | true | 1 | 1 | Is it possible to TDD when writing a test runner? | 11,546,191 |
1 | 1 | 0 | 2 | 0 | 1 | 1.2 | 0 | I have a python module packaged by distutils into a zipped egg installed in a custom prefix. If I set PYTHONPATH to contain that prefix's site-packages directory, the egg is added to sys.path and the module can be imported. If I instead from within the script run site.addsitedir with the prefix's site-packages directory, however, the egg is not added to sys.path and the module import fails. In both cases, the module's site-packages directory ends up in sys.path.
Is this expected behavior? If so, is there any way to tell Python to process the .pth files in a given directory without setting an env var? | 0 | python,pythonpath | 2012-07-19T14:11:00.000 | 1 | 11,562,721 | If I set PYTHONPATH to contain that prefix's site-packages directory, the egg is added to sys.path and the module can be imported.
Adding some directory to PYTHONPATH doesn't trigger processing of .pth-files in it. Therefore your zipped egg won't be in sys.path. You can import a module from the egg only if the egg itself is in sys.path (parent directory is not enough).
If I instead from within the script run site.addsitedir with the prefix's site-packages directory, however, the egg is not added to sys.path and the module import fails.
site.addsitedir() triggers processing of .pth-files if the directory hasn't been seen yet so it should work.
The behavior you described is the opposite of what should happen.
As a workaround you could add the egg to sys.path manually: sys.path.insert(0, '/path/to/the.egg') | 0 | 3,434 | true | 0 | 1 | site.addsitedir doesn't add egg to sys.path | 11,576,705 |
1 | 1 | 0 | 4 | 0 | 0 | 1.2 | 0 | I am using Python 2.7, beanstalkd server with beanstalkc as the client library.
It takes about 500 to 1500 ms to process each job, depending on the size of the job.
I have a cron job that will keep adding jobs to the beanstalkd queue and a "worker" that will run in an infinite loop getting jobs and processing them.
e.g.:
    def get_job(self):
        while True:
            job = self.beanstalk.reserve(timeout=0)
            if job is None:
                timeout = 10  # seconds
                continue
            else:
                timeout = 0  # seconds
                self.process_job(job)
This results in a "timed out" exception.
Is this the best practice to pull a job from the queue?
Could someone please help me out here? | 0 | python,timeout,jobs,beanstalkd,beanstalkc | 2012-07-19T18:53:00.000 | 1 | 11,567,431 | Calling beanstalk.reserve(timeout=0) means to wait 0 seconds for a job to become available,
so it'll time out immediately unless a job is already
in the queue when it's called. If you want it never to time out,
use timeout=None (or omit the timeout parameter, since None is the default). | 0 | 1,433 | true | 1 | 1 | Getting jobs from beanstalkd - timed out exception | 11,570,528 |
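The fixed loop implied by the answer, as a sketch (job.delete() per the beanstalkc API, so finished jobs leave the queue):

```python
def get_job(self):
    while True:
        job = self.beanstalk.reserve(timeout=None)  # block until a job arrives
        self.process_job(job)
        job.delete()  # remove the job once it has been processed
```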
6 | 17 | 0 | 10 | 140 | 1 | 1 | 0 | I keep important settings like the hostnames and ports of development and production servers in my version control system. But I know that it's bad practice to keep secrets (like private keys and database passwords) in a VCS repository.
But passwords--like any other setting--seem like they should be versioned. So what is the proper way to keep passwords version controlled?
I imagine it would involve keeping the secrets in their own "secrets settings" file and having that file encrypted and version controlled. But what technologies? And how to do this properly? Is there a better way entirely to go about it?
I ask the question generally, but in my specific instance I would like to store secret keys and passwords for a Django/Python site using git and github.
Also, an ideal solution would do something magical when I push/pull with git--e.g., if the encrypted passwords file changes, a script is run which asks for a password and decrypts it into place.
EDIT: For clarity, I am asking about where to store production secrets. | 0 | python,django,git,version-control | 2012-07-20T08:11:00.000 | 0 | 11,575,398 | I suggest using configuration files for that and not versioning them.
You can, however, version examples of the files.
I don't see any problem with sharing development settings. By definition they should contain no valuable data. | 0 | 36,273 | false | 1 | 1 | How can I save my secret keys and password securely in my version control system? | 11,575,518
6 | 17 | 0 | 4 | 140 | 1 | 0.047024 | 0 | I keep important settings like the hostnames and ports of development and production servers in my version control system. But I know that it's bad practice to keep secrets (like private keys and database passwords) in a VCS repository.
But passwords--like any other setting--seem like they should be versioned. So what is the proper way to keep passwords version controlled?
I imagine it would involve keeping the secrets in their own "secrets settings" file and having that file encrypted and version controlled. But what technologies? And how to do this properly? Is there a better way entirely to go about it?
I ask the question generally, but in my specific instance I would like to store secret keys and passwords for a Django/Python site using git and github.
Also, an ideal solution would do something magical when I push/pull with git--e.g., if the encrypted passwords file changes a script is run which asks for a password and decrypts it into place.
EDIT: For clarity, I am asking about where to store production secrets. | 0 | python,django,git,version-control | 2012-07-20T08:11:00.000 | 0 | 11,575,398 | EDIT: I assume you want to keep track of your previous passwords versions - say, for a script that would prevent password reusing etc.
I think GnuPG is the best way to go - it's already used in one git-related project (git-annex) to encrypt repository contents stored on cloud services. GnuPG (gnu pgp) provides a very strong key-based encryption.
You keep a key on your local machine.
You add 'mypassword' to ignored files.
In a pre-commit hook you encrypt the mypassword file into the mypassword.gpg file tracked by git and add it to the commit.
In a post-merge hook you just decrypt mypassword.gpg into mypassword.
Now if your 'mypassword' file did not change, the hook can skip re-encrypting it, and nothing new is added to the index (no redundancy); note that GPG encryption is randomized, so detect changes by comparing plaintexts rather than ciphertexts. The slightest modification of mypassword results in a radically different ciphertext, so mypassword.gpg in the staging area differs from the one in the repository and will be added to the commit. Even if an attacker gets hold of your GPG key, he still needs to brute-force the passphrase. If the attacker gets access to a remote repository with the ciphertext, he can compare a bunch of ciphertexts, but their number won't be sufficient to give him any non-negligible advantage.
Later on you can use .gitattributes to provide on-the-fly decryption for a quick git diff of your password.
Also you can have separate keys for different types of passwords etc. | 0 | 36,273 | false | 1 | 1 | How can I save my secret keys and password securely in my version control system? | 11,576,543 |
6 | 17 | 0 | 2 | 140 | 1 | 0.023525 | 0 | I keep important settings like the hostnames and ports of development and production servers in my version control system. But I know that it's bad practice to keep secrets (like private keys and database passwords) in a VCS repository.
But passwords--like any other setting--seem like they should be versioned. So what is the proper way to keep passwords version controlled?
I imagine it would involve keeping the secrets in their own "secrets settings" file and having that file encrypted and version controlled. But what technologies? And how to do this properly? Is there a better way entirely to go about it?
I ask the question generally, but in my specific instance I would like to store secret keys and passwords for a Django/Python site using git and github.
Also, an ideal solution would do something magical when I push/pull with git--e.g., if the encrypted passwords file changes a script is run which asks for a password and decrypts it into place.
EDIT: For clarity, I am asking about where to store production secrets. | 0 | python,django,git,version-control | 2012-07-20T08:11:00.000 | 0 | 11,575,398 | Encrypt the passwords file, using for example GPG. Add the keys on your local machine and on your server. Decrypt the file and put it outside your repo folders.
I use a passwords.conf, located in my home folder. On every deploy this file gets updated. | 0 | 36,273 | false | 1 | 1 | How can I save my secret keys and password securely in my version control system? | 11,666,554
6 | 17 | 0 | 3 | 140 | 1 | 0.035279 | 0 | I keep important settings like the hostnames and ports of development and production servers in my version control system. But I know that it's bad practice to keep secrets (like private keys and database passwords) in a VCS repository.
But passwords--like any other setting--seem like they should be versioned. So what is the proper way to keep passwords version controlled?
I imagine it would involve keeping the secrets in their own "secrets settings" file and having that file encrypted and version controlled. But what technologies? And how to do this properly? Is there a better way entirely to go about it?
I ask the question generally, but in my specific instance I would like to store secret keys and passwords for a Django/Python site using git and github.
Also, an ideal solution would do something magical when I push/pull with git--e.g., if the encrypted passwords file changes a script is run which asks for a password and decrypts it into place.
EDIT: For clarity, I am asking about where to store production secrets. | 0 | python,django,git,version-control | 2012-07-20T08:11:00.000 | 0 | 11,575,398 | Provide a way to override the config
This is the best way to manage a set of sane defaults for the config you check in without requiring the config to be complete, or to contain things like hostnames and credentials. There are a few ways to override default configs.
Environment variables (as others have already mentioned) are one way of doing it.
The best way is to look for an external config file that overrides the default config values. This allows you to manage the external configs via a configuration management system like Chef, Puppet or Cfengine. Configuration management is the standard answer for the management of configs separate from the codebase so you don't have to do a release to update the config on a single host or a group of hosts.
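A sketch of that lookup order with the standard library's configparser (Python 3 module name; paths are illustrative, and read() silently skips files that don't exist):
import configparser

config = configparser.ConfigParser()
config.read("defaults.ini")               # sane defaults, checked into VCS
config.read("/etc/myapp/overrides.ini")   # host-specific file, pushed out by Chef/Puppet
db_password = config.get("db", "password")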
FYI: Encrypting creds is not always a best practice, especially in a place with limited resources. It may be the case that encrypting creds will gain you no additional risk mitigation and simply add an unnecessary layer of complexity. Make sure you do the proper analysis before making a decision. | 0 | 36,273 | false | 1 | 1 | How can I save my secret keys and password securely in my version control system? | 11,689,937 |
6 | 17 | 0 | 2 | 140 | 1 | 0.023525 | 0 | I keep important settings like the hostnames and ports of development and production servers in my version control system. But I know that it's bad practice to keep secrets (like private keys and database passwords) in a VCS repository.
But passwords--like any other setting--seem like they should be versioned. So what is the proper way to keep passwords version controlled?
I imagine it would involve keeping the secrets in their own "secrets settings" file and having that file encrypted and version controlled. But what technologies? And how to do this properly? Is there a better way entirely to go about it?
I ask the question generally, but in my specific instance I would like to store secret keys and passwords for a Django/Python site using git and github.
Also, an ideal solution would do something magical when I push/pull with git--e.g., if the encrypted passwords file changes a script is run which asks for a password and decrypts it into place.
EDIT: For clarity, I am asking about where to store production secrets. | 0 | python,django,git,version-control | 2012-07-20T08:11:00.000 | 0 | 11,575,398 | This is what I do:
Keep all secrets as env vars in $HOME/.secrets (go-r perms) that $HOME/.bashrc sources (this way if you open .bashrc in front of someone, they won't see the secrets)
Configuration files are stored in VCS as templates, such as config.properties stored as config.properties.tmpl
The template files contain a placeholder for the secret, such as:
my.password=##MY_PASSWORD##
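A minimal Python sketch of the deploy-time transform (file names are illustrative):
import os, re

with open("config.properties.tmpl") as tmpl:
    text = tmpl.read()
# swap each ##NAME## placeholder for the value of the environment variable NAME
text = re.sub(r"##(\w+)##", lambda m: os.environ[m.group(1)], text)
with open("config.properties", "w") as out:
    out.write(text)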
On application deployment, a script like the one above is run to transform the template file into the target file, replacing each placeholder with the value of the matching environment variable, such as changing ##MY_PASSWORD## to the value of $MY_PASSWORD. | 0 | 36,273 | false | 1 | 1 | How can I save my secret keys and password securely in my version control system? | 49,701,069
6 | 17 | 0 | 0 | 140 | 1 | 0 | 0 | I keep important settings like the hostnames and ports of development and production servers in my version control system. But I know that it's bad practice to keep secrets (like private keys and database passwords) in a VCS repository.
But passwords--like any other setting--seem like they should be versioned. So what is the proper way to keep passwords version controlled?
I imagine it would involve keeping the secrets in their own "secrets settings" file and having that file encrypted and version controlled. But what technologies? And how to do this properly? Is there a better way entirely to go about it?
I ask the question generally, but in my specific instance I would like to store secret keys and passwords for a Django/Python site using git and github.
Also, an ideal solution would do something magical when I push/pull with git--e.g., if the encrypted passwords file changes a script is run which asks for a password and decrypts it into place.
EDIT: For clarity, I am asking about where to store production secrets. | 0 | python,django,git,version-control | 2012-07-20T08:11:00.000 | 0 | 11,575,398 | You could use EncFS if your system provides that. Thus you could keep your encrypted data as a subfolder of your repository, while providing your application a decrypted view to the data mounted aside. As the encryption is transparent, no special operations are needed on pull or push.
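The one extra step is mounting; a hedged sketch of doing that from Python (assumes an encfs build that supports the --stdinpass flag, so check your version; paths and the variable name are placeholders):
import os, subprocess

password = os.environ["ENCFS_PASSWORD"]   # kept outside the versioned folders
subprocess.Popen(["encfs", "--stdinpass", "/repo/encrypted", "/tmp/decrypted"],
                 stdin=subprocess.PIPE).communicate(password.encode())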
That is, the application mounts the EncFS folders itself, based on a password stored elsewhere, outside the versioned folders (e.g. environment variables). | 0 | 36,273 | false | 1 | 1 | How can I save my secret keys and password securely in my version control system? | 11,713,674
2 | 3 | 0 | 1 | 2 | 0 | 0.066568 | 0 | I have a script which will log into an SMTP server (GMail's) to send an email notification. How can I do this without distributing the password in plain text? | 0 | python,gmail,password-protection | 2012-07-20T09:21:00.000 | 0 | 11,576,502 | Have the script request the password when running.
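For example, the standard getpass module prompts for it without echoing (a minimal sketch with placeholder account details):
import getpass, smtplib

password = getpass.getpass("Gmail password: ")
server = smtplib.SMTP("smtp.gmail.com", 587)
server.starttls()
server.login("you@gmail.com", password)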
Note: I wouldn't advise accepting the password as a command-line argument, as this isn't very secure because it will be logged in the command history, etc. | 0 | 168 | false | 0 | 1 | Distribute a script, but protect the password | 11,576,535
2 | 3 | 0 | 0 | 2 | 0 | 0 | 0 | I have a script which will log into an SMTP server (GMail's) to send an email notification. How can I do this without distributing the password in plain text? | 0 | python,gmail,password-protection | 2012-07-20T09:21:00.000 | 0 | 11,576,502 | Check if your provider offers an SMTP server that doesn't require authentication and use that instead. | 0 | 168 | false | 0 | 1 | Distribute a script, but protect the password | 11,576,594
1 | 4 | 0 | 0 | 11 | 0 | 0 | 0 | I'm looking for a language or library to allow me to simulate key strokes at the maximum level possible, without physically pressing the key.
(My specific measure of the level of the keystroke is whether or not it will produce the same output as a physical Key Press when my computer is already running key listeners (such as MouseKeys and StickyKeys)).
I've tried many methods of keystroke emulation:
the Java AWT library, the Java win32 API, Python win32com SendKeys, Python ctypes key presses, and many more libraries for Python and Java, but none of them simulate the keystroke at a close enough level to actual hardware.
(When Windows MouseKeys is active, sending a key stroke of a colon, semicolon or numpad ADD key just produces those characters, whereas a physical press performs the MouseKeys click)
I believe such methods must involve sending the strokes straight to an application, rather than passing them just to the OS.
I'm coming to the idea that no library for these high (above OS code) level languages will produce anything adequate. I fear I might have to stoop to some kind of BIOS programming.
Does anybody have any useful information on the matter whatsoever?
How would I go about emulating key presses in lower level languages?
Should I be looking for a my-hardware-specific solution (some kind of Fujitsu hardware API)?
I almost feel it would be easier to program a robot to simply sit by the hardware and press the keys.
Thanks! | 0 | java,python,hardware,keystroke,simulate | 2012-07-22T05:08:00.000 | 0 | 11,597,892 | I'm not on a Windows box to test it against MouseKeys, so no guarantees that it will work, but have you tried AutoHotkey? | 0 | 14,658 | false | 1 | 1 | Simulate Key Press at hardware level - Windows | 11,598,099 |
1 | 3 | 0 | 4 | 28 | 0 | 0.26052 | 0 | I am getting ready to start a little Android development and need to choose a language. I know Python but would have to learn Java. I'd like to know from those of you who are using Python on Android what the limitations are. Also, are there any benefits over Java? | 0 | java,android,python,sl4a | 2012-07-22T12:38:00.000 | 0 | 11,600,364 | I have developed Android Apps on the market, coded in Python. Downsides:
Thus far my users must download the interpreter as well, but they are immediately prompted to do so. (UPDATE: See comment below.)
The script does not exit properly, so I include a webView page that asks them to goto:Settings:Apps:ForceClose if this issue occurs. | 0 | 20,495 | false | 0 | 1 | What are the limitations of Python on Android? | 12,758,628 |
1 | 1 | 0 | 1 | 1 | 0 | 0.197375 | 0 | I have quite a few static files used in my Google App Engine application (CSS, robots.txt, etc.) They are all defined in app.yaml.
I want to have some automated tests that check whether those definitions in app.yaml are valid and my latest changes didn't brake anything. E.g. check that specific URLs return correct responses. Ideally, it should be a part of my app unit tests. | 0 | python,unit-testing,google-app-engine | 2012-07-22T12:39:00.000 | 1 | 11,600,374 | I have a post deploy script for the staging environment that just does curl on the urls to validate they are all there. If this script passes (among other things) I will deploy from staging to production. | 0 | 120 | false | 1 | 1 | How to write tests for static routes defined in app.yaml? | 11,600,406 |
1 | 3 | 0 | 0 | 5 | 0 | 0 | 0 | For the most part I work in Python, and as such I have developed a great appreciation for the repr() function, which when passed a string of arbitrary bytes will print out its human-readable hex format. Recently I have been doing some work in C and I am starting to miss the Python repr function. I have been searching on the internet for something similar to it, preferably something like void buffrepr(const char * buff, const int size, char * result, const int resultSize) But I have had no luck; is anyone aware of a simple way to do this? | 0 | python,c,repr | 2012-07-22T15:51:00.000 | 0 | 11,601,703 | The simplest way would be printf()/sprintf() with the %x and %X format specifiers. | 0 | 1,147 | false | 0 | 1 | Python style repr for char * buffer in c? | 11,601,779
1 | 1 | 0 | 0 | 0 | 0 | 1.2 | 0 | I have Sage 4.7.1 installed and have run into an odd problem. Many of my older scripts that use functions like deepcopy() and uniq() no longer recognize them as global names. I have been able to fix this by importing the python modules one by one, but this is quite tedious. But when I start the command-line Sage interface, I can type "list2=deepcopy(list1)" without importing the copy module, and this works fine. How is it possible that the command line Sage can recognize global name 'deepcopy' but if I load my script that uses the same name it doesn't recognize it?
oops, sorry, not familiar with stackoverflow yet. I type "sage_4.7.1/sage" to start the command line interface; then, I type "load jbom.py" to load up all the functions I defined in a python script. When I use one of the functions from the script, it runs for a few seconds (complex function) then hits a spot where I use some function that Sage normally has as a global name (deepcopy, uniq, etc) but for some reason the script I loaded does not know what the function is. And to reiterate, my script jbom.py used to work the last time I was working on this particular research, just as I described.
It also makes no difference if I use 'load jbom.py' or 'import jbom'. Both methods get the functions I defined in my script (but I have to use jbom. in the second case) and both get the same error about 'deepcopy' not being a global name.
REPLY TO DSM: I have been sloppy about describing the problem, for which I am sorry. I have created a new script 'experiment.py' that has "import jbom" as its first line. Executing the function in experiment.py recognizes the functions in jbom.py but deepcopy is not recognized. I tried loading jbom.py as "load jbom.py" and I can use the functions just like I did months ago. So, is this all just a problem of layering scripts without proper usage of import/load etc?
SOLVED: I added "from sage.all import *" to the beginning of jbom.py and now I can load experiment.py and execute the functions calling jbom.py functions without any problems. From the Sage doc on import/load I can't really tell what I was doing wrong exactly. | 0 | python,module,sage | 2012-07-22T18:18:00.000 | 0 | 11,602,817 | Okay, here's what's going on:
You can only import files ending with .py (ignoring .py[co]). These are standard Python files and aren't preparsed, so 1/3 == int(0), not QQ(1)/QQ(3), and you don't have the equivalent of a from sage.all import * to play with.
You can load and attach both .py and .sage files (as well as .pyx and .spyx and .m). Both have access to Sage definitions but the .py files aren't preparsed (so y=17 makes y a Python int) while the .sage files are (so y=17 makes y a Sage Integer).
So import jbom here works just like it would in Python, and you don't get the access to what Sage has put in scope. load etc. are handy but they don't scale up to larger programs so well. I've proposed improving this in the past and making .sage scripts less second-class citizens, but there hasn't yet been the mix of agreement on what to do and energy to do it. In the meantime your best bet is to import from sage.all. | 0 | 784 | true | 0 | 1 | python modules missing in sage | 11,603,378 |
1 | 5 | 0 | 4 | 13 | 1 | 0.158649 | 0 | I'm working on a module using sockets with hundreds of test cases. Which is nice. Except now I need to test all of the cases with and without socket.setdefaulttimeout( 60 )... Please don't tell me to cut and paste all the tests and set/remove a default timeout in setup/teardown.
Honestly, I get that having each test case laid out on its own is good practice, but I also don't like to repeat myself. This is really just testing in a different context, not different tests.
I see that unittest supports module-level setup/teardown fixtures, but it isn't obvious to me how to convert my one test module into testing itself twice with two different setups.
Any help would be much appreciated. | 0 | python,unit-testing,sockets,fixtures | 2012-07-22T23:39:00.000 | 0 | 11,604,888 | I would do it like this:
Make all of your tests derive from your own TestCase class, let's call it SynapticTestCase.
In SynapticTestCase.setUp(), examine an environment variable to determine whether to set the socket timeout or not.
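A minimal sketch of that base class (the environment variable name is illustrative):
import os, socket, unittest

class SynapticTestCase(unittest.TestCase):
    def setUp(self):
        if os.environ.get("USE_SOCKET_TIMEOUT") == "1":
            socket.setdefaulttimeout(60)
        else:
            socket.setdefaulttimeout(None)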
Run your entire test suite twice, once with the environment variable set one way, then again with it set the other way.
Write a small shell script to invoke the test suite both ways. | 0 | 5,608 | false | 0 | 1 | python unittests with multiple setups? | 11,604,901 |
1 | 1 | 0 | 0 | 0 | 1 | 1.2 | 0 | I am working on a web service that requires user-input Python code to be executed on my server (we have checks for code injection). I have to import a rather large module, so I would like to make sure that I am not starting up Python and importing the module from scratch each time something runs (it takes about 4-6s).
To do this I was planning to create a Python (3.2) daemon that imports the user-input code as a module, executes it and then deletes/garbage collects that module. I need to make sure that the module is completely gone from RAM since this process will continue until the server is restarted. I have read a bunch of things that say this is a very difficult thing to do in Python.
What is the best way to do this? Would it be better to use exec to define a function with the user input code (for variable scoping) and then execute that function and somehow remove the function? Or is there a better way to do this process that I have missed? | 0 | python,module,daemon | 2012-07-23T11:10:00.000 | 0 | 11,611,351 | You could perhaps consider creating a pool of Python daemon processes.
Their purpose would be to serve one request and to die afterwards.
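A very rough multiprocessing sketch of the idea (big_module is a stand-in for your heavy import; a real manager would keep several workers warm at once):
import multiprocessing

def worker(jobs):
    import big_module                # the expensive import, paid once per process
    big_module.run(jobs.get())       # serve exactly one request, then exit

if __name__ == "__main__":
    jobs = multiprocessing.Queue()
    while True:
        p = multiprocessing.Process(target=worker, args=(jobs,))
        p.start()
        p.join()                     # the OS reclaims all of the worker's memory here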
You would have to write a pool-manager that ensures there are always X daemon processes waiting for an incoming request (X being the number of waiting daemon processes, depending on the required workload). The pool-manager would have to observe the pool of daemon processes and start new instances every time a process finishes. | 0 | 477 | true | 0 | 1 | User Input Python Script Executing Daemon | 11,614,012
1 | 2 | 0 | 0 | 1 | 0 | 0 | 0 | I am trying to use the Python ctypes library to access various functions in a COM DLL created in Visual FoxPro (from a .prg file).
Here is an example in FoxPro (simplified from the actual code):
DEFINE CLASS Testing AS CUSTOM OLEPUBLIC
PROCEDURE INIT
ON ERROR
SET CONSOLE OFF
SET NOTIFY OFF
SET SAFETY OFF
SET TALK OFF
SET NOTIFY OFF
ENDPROC
FUNCTION get_input_out(input AS STRING) AS STRING
output = input
RETURN output
ENDFUNC
ENDDEFINE
In Python I am doing something along the lines of:
import ctypes
link = ctypes.WinDLL("path\to\com.dll")
print link.get_input_out("someinput")
The DLL registers fine and is loaded, but I just get the following when I try to call the function:
AttributeError: function 'get_input_out' not found
I can verify the DLL does work, as I was able to access the functions with a PHP script using the COM library.
I would really like to get this working in python but so far my attempts have all been in vain, will ctypes even work with VFP? Any advice would be appreciated. | 0 | python,dll,ctypes,foxpro,visual-foxpro | 2012-07-23T12:35:00.000 | 0 | 11,612,663 | Try removing the parentheses from the call to the function. Change print link.get_input_out("someinput") to print link.get_input_out "someinput". | 0 | 740 | false | 0 | 1 | Using Python ctypes to access a Visual Foxpro COM DLL | 11,613,687 |
3 | 3 | 0 | 0 | 3 | 1 | 0 | 0 | I am currently doing some I/O intensive load-testing using python. All my program does is to send HTTP requests as fast as possible to my target server.
To manage this, I use up to 20 threads as I'm essentially bound to I/O and remote server limitations.
According to 'top', CPython uses a peak of 130% CPU on my dual core computer.
How is that possible? I thought the GIL prevented this? Or is it the way Linux 'counts' the resources consumed by each application? | 0 | python,linux,multithreading,load | 2012-07-23T15:17:00.000 | 0 | 11,615,449 | If you find this irritating, set your preferences (specifically, the preferences of your System Monitor or equivalent tool) to enable "Solaris Mode," which calculates CPU% as a proportion of total processing power, not as a proportion of a single core's processing power. | 0 | 1,861 | false | 0 | 1 | Python interpreters uses up to 130% of my CPU. How is that possible? | 11,617,105
3 | 3 | 0 | 1 | 3 | 1 | 0.066568 | 0 | I am currently doing some I/O intensive load-testing using python. All my program does is to send HTTP requests as fast as possible to my target server.
To manage this, I use up to 20 threads as I'm essentially bound to I/O and remote server limitations.
According to 'top', CPython uses a peak of 130% CPU on my dual core computer.
How is that possible? I thought the GIL prevented this? Or is it the way Linux 'counts' the resources consumed by each application? | 0 | python,linux,multithreading,load | 2012-07-23T15:17:00.000 | 0 | 11,615,449 | That is possible in situations where a C-extension library call releases the GIL and does some further processing in the background. | 0 | 1,861 | false | 0 | 1 | Python interpreters uses up to 130% of my CPU. How is that possible? | 11,615,563
3 | 3 | 0 | 15 | 3 | 1 | 1.2 | 0 | I am currently doing some I/O intensive load-testing using python. All my program does is to send HTTP requests as fast as possible to my target server.
To manage this, I use up to 20 threads as I'm essentially bound to I/O and remote server limitations.
According to 'top', CPython uses a peak of 130% CPU on my dual core computer.
How is that possible? I thought the GIL prevented this? Or is it the way Linux 'counts' the resources consumed by each application? | 0 | python,linux,multithreading,load | 2012-07-23T15:17:00.000 | 0 | 11,615,449 | 100 percent in top refers to a single core. On a dual-core machine, you have up to 200 percent available.
A single single-threaded process can only make use of a single core, so it is limited to 100 percent. Since your process has several threads, nothing is stopping it from making use of both cores.
The GIL only prevents pure-Python code from being executed concurrently. Many library calls (including most I/O stuff) release the GIL, so no problem here as well. Contrary to much of the FUD on the internet, the GIL rarely reduces real-world performance, and if it does, there are usually better solutions to the problem than using threads. | 0 | 1,861 | true | 0 | 1 | Python interpreters uses up to 130% of my CPU. How is that possible? | 11,615,490 |
4 | 7 | 0 | 7 | 2 | 1 | 1.2 | 0 | I'm debating whether to use C++ or Python for a largely math-based program.
Both have great math libraries, but which language is generally faster for complex math? | 0 | c++,python,math | 2012-07-24T06:41:00.000 | 0 | 11,625,450 | I guess it is safe to say that C++ is faster, simply because it is a compiled language, which means that only your code is running, not an interpreter as with Python.
It is possible to write very fast code with python and very slow code with C++ though. So you have to program wisely in any language!
Another advantage is that C++ is type safe, which will help you to program what you actually want.
A disadvantage in some situations is that C++ is type safe, which will result in a design overhead. You have to think (maybe long and hard) about function and class interfaces, for instance.
I like Python for many reasons, so don't take this as a plea against Python. | 0 | 3,134 | true | 0 | 1 | C++ or Python for an Extensive Math Program? | 11,625,468
4 | 7 | 0 | 8 | 2 | 1 | 1 | 0 | I'm debating whether to use C++ or Python for a largely math-based program.
Both have great math libraries, but which language is generally faster for complex math? | 0 | c++,python,math | 2012-07-24T06:41:00.000 | 0 | 11,625,450 | You could also consider a hybrid approach. Python is generally easier and faster to develop in, especially for things like user interface and input/output.
C++ should certainly be faster for some math operations (although if your problem can be formulated in terms of vector operations or linear algebra, then numpy provides a Python interface to very efficient vector manipulations).
Python is easy to extend with Cython, Swig, Boost Python etc., so one strategy is to write all the bookkeeping-type parts of the program in Python and just do the computational code in C++. | 0 | 3,134 | false | 0 | 1 | C++ or Python for an Extensive Math Program? | 11,625,521
4 | 7 | 0 | 4 | 2 | 1 | 0.113791 | 0 | I'm debating whether to use C++ or Python for a largely math-based program.
Both have great math libraries, but which language is generally faster for complex math? | 0 | c++,python,math | 2012-07-24T06:41:00.000 | 0 | 11,625,450 | It all depends on whether "faster" means "faster to execute" or "faster to develop". Overall, Python will be quicker for development, C++ faster for execution. For working with integers (arithmetic), Python has full-precision integers and a lot of external tools (numpy, pylab...). My advice would be to go Python first; if you have performance issues, then switch to C++ (or call external libraries written in C++ from Python, in a hybrid approach).
There is no good answer; it all depends on what you want to do in terms of research / calculus. | 0 | 3,134 | false | 0 | 1 | C++ or Python for an Extensive Math Program? | 11,625,696
4 | 7 | 0 | 0 | 2 | 1 | 0 | 0 | I'm debating whether to use C++ or Python for a largely math-based program.
Both have great math libraries, but which language is generally faster for complex math? | 0 | c++,python,math | 2012-07-24T06:41:00.000 | 0 | 11,625,450 | I sincerely doubt that Google and Stanford don't know C++.
"Generally faster" is more than just language. Algorithms can make or break a solution, regardless of what language it's written in. A poor choice written in C++ and be beaten by Java or Python if either makes a better algorithm choice.
For example, an in-memory, single CPU linear algebra library will have its doors blown in by a parallelized version done properly.
An implicit algorithm may actually be slower than an explicit one, despite time step stability restrictions, because the latter doesn't have to invert a matrix. This is often true for hyperbolic partial differential equations.
You shouldn't care about "generally faster". You ought to look deeply into the problem you're trying to solve and the algorithms used to solve it. You'll do better that way than a blind language choice. | 0 | 3,134 | false | 0 | 1 | C++ or Python for an Extensive Math Program? | 11,630,046 |
2 | 2 | 0 | 1 | 3 | 1 | 0.099668 | 0 | I'm writing a 2D game in Python that has gravity as a major mechanic.
I have some of the game engine made, but the part where I'm stuck is actually determining what to add to the X and Y velocities of each mass.
So say I have circle A and circle B, each with a position, a velocity, and a mass. Each should be pulled towards the other fairly realistically, simulating Newtonian gravity. How would I achieve this?
Yes, I am being very ambiguous with the units of measurement. Later I can experiment with changing the variables to fit the formula. | 0 | python,physics,game-physics | 2012-07-25T00:17:00.000 | 0 | 11,641,077 | Assuming that you've got an in-game quantized unit of time, a "tick" of the clock, if you will, give each body a velocity vector (how much, and in which directions, it moves per "tick") and for each tick, have each other body change its velocity vector by some amount based on their distance (exert a force on it, divided by its mass). Then, whenever your clock ticks, the bodies move according to their velocity vectors, and then their velocity vectors changed based on the net force on them. As long as you decide which happens first - acceleration or motion - provided that your ticks are small enough, you should be fine. | 0 | 1,914 | false | 0 | 1 | Newtonian gravity simulation | 11,641,106 |
2 | 2 | 0 | 3 | 3 | 1 | 1.2 | 0 | I'm writing a 2D game in Python that has gravity as a major mechanic.
I have some of the game engine made, but the part where I'm stuck is actually determining what to add to the X and Y velocities of each mass.
So say I have circle A and circle B, each with a position, a velocity, and a mass. Each should be pulled towards the other fairly realistically, simulating Newtonian gravity. How would I achieve this?
Yes, I am being very ambiguous with the units of measurement. Later I can experiment with changing the variables to fit the formula. | 0 | python,physics,game-physics | 2012-07-25T00:17:00.000 | 0 | 11,641,077 | You need to solve the equations of motion for each body. They'll be written as a set of coupled, first-order, ordinary differential equations. You'll write one equation each for the x- and y-directions, which will give you the acceleration as a function of the gravitational force between the two bodies divided by their respective masses.
You know the relationships between acceleration, velocity, and displacement.
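A minimal forward-Euler step for two bodies might look like this (G, the masses and dt are in arbitrary placeholder units):
import math

def step(a, b, G=1.0, dt=0.01):
    # a and b are dicts holding x, y, vx, vy and m
    dx, dy = b["x"] - a["x"], b["y"] - a["y"]
    r = math.hypot(dx, dy)
    f = G * a["m"] * b["m"] / r ** 2      # magnitude of the gravitational force
    fx, fy = f * dx / r, f * dy / r       # force on a, pointing toward b
    a["vx"] += fx / a["m"] * dt; a["vy"] += fy / a["m"] * dt
    b["vx"] -= fx / b["m"] * dt; b["vy"] -= fy / b["m"] * dt
    for body in (a, b):                   # then move both bodies
        body["x"] += body["vx"] * dt
        body["y"] += body["vy"] * dt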
You end up with four coupled ordinary differential equations to solve. Use a time-stepping solution to advance the solution in time - explicit or implicit, your choice. | 0 | 1,914 | true | 0 | 1 | Newtonian gravity simulation | 11,641,140
1 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | In Selenium testing, there is HtmlUnitDriver, which lets you run tests without a browser. I need to do this with Windmill too. Is there a way to do this in Windmill?
Thanks! | 0 | python,selenium,windmill,browser-testing | 2012-07-25T08:17:00.000 | 0 | 11,645,451 | If you're looking to run Windmill in headless mode (no monitor) you can do it by running:
Xvfb :99 -ac &
DISPLAY=:99 windmill firefox -e test=/path/to/your/test.py | 0 | 364 | false | 1 | 1 | Windmill-Without web browser | 12,344,550 |
1 | 1 | 0 | 1 | 1 | 0 | 0.197375 | 0 | How do I debug a Python extension written in C? I found some links that said we need to get the Python debug build, but how do we do that if we don't have root access? I have Python 2.7 installed. | 0 | python,debugging | 2012-07-25T10:33:00.000 | 1 | 11,647,810 | You can compile a debug-enabled version of Python in your home folder without having root access and develop the C extension against that version. | 0 | 262 | false | 0 | 1 | How do I debug a Python extension written in C? | 11,648,687
2 | 3 | 0 | 0 | 1 | 1 | 0 | 0 | In a programming language that has a file object, would you rather pass this object to a function or the path to the physical file and let the function open the file itself?
If the language does matter for your answer, please consider c++ and python.
Thanks,
Somebody | 0 | c++,python,coding-style | 2012-07-25T12:29:00.000 | 0 | 11,649,744 | That depends very much on the specific case.
If I were to use the file in several (sub)functions than I would rather pass the initialised file object (or function).
If I have one function to get the filename and path and another to do something with the data of the file, I would probably prefer to pass the path and filename and have the file opened by the function that uses the data. | 0 | 681 | false | 0 | 1 | pass file or filename to function | 11,649,914 |
2 | 3 | 0 | 3 | 1 | 1 | 0.197375 | 0 | In a programming language that has a file object, would you rather pass this object to a function or the path to the physical file and let the function open the file itself?
If the language does matter for your answer, please consider c++ and python.
Thanks,
Somebody | 0 | c++,python,coding-style | 2012-07-25T12:29:00.000 | 0 | 11,649,744 | My understanding of good coding practices is to open the file where the information is to be used and not in a more global scope in any language. | 0 | 681 | false | 0 | 1 | pass file or filename to function | 11,649,800 |
1 | 1 | 0 | 2 | 0 | 0 | 1.2 | 0 | I added the project root of my Python project to the PYTHONPATH. Now the import of my modules works in the Python CLI but NOT in a Python script.
How can I fix that? | 0 | python,import | 2012-07-26T08:57:00.000 | 0 | 11,665,765 | Call your script with the -v option:
python -v yourscript.py
This will trace all the import statements; look or grep through them for your project name. If it's not in there, then either it's not added to your Python path at all or you're running a different Python interpreter. | 0 | 72 | true | 0 | 1 | Relative import works on CLI but not in script | 11,666,220
2 | 2 | 0 | 1 | 0 | 0 | 1.2 | 1 | Is there a way to list all my HIT types (not HITs or assignments) using the mturk api?
I can't find any documentation on this. I'm using python, so it'd be nice if boto supported this query. | 0 | python,boto,mechanicalturk | 2012-07-26T16:21:00.000 | 0 | 11,673,711 | Looking through the MTurk API (http://docs.amazonwebservices.com/AWSMechTurk/latest/AWSMturkAPI/Welcome.html) I don't see anything that returns a list of HIT types. You should post a query to the MTurk forum (https://forums.aws.amazon.com/forum.jspa?forumID=11). It seems like a useful feature to add. | 0 | 362 | true | 0 | 1 | List all hitTypes through the mturk API? | 11,677,299 |
2 | 2 | 0 | 1 | 0 | 0 | 0.099668 | 1 | Is there a way to list all my HIT types (not HITs or assignments) using the mturk api?
I can't find any documentation on this. I'm using python, so it'd be nice if boto supported this query. | 0 | python,boto,mechanicalturk | 2012-07-26T16:21:00.000 | 0 | 11,673,711 | Unfortunately there isn't. We resort to persisting every HitType locally that we create through turk's api at houdiniapi.com which works just fine. | 0 | 362 | false | 0 | 1 | List all hitTypes through the mturk API? | 11,678,042 |
1 | 2 | 0 | 2 | 1 | 1 | 0.197375 | 0 | I'm sure this is well documented somewhere, but I can't find it! I want to make my scripts portable to machines that may not have their Python interpreters in the same location. For that reason, I thought that I could just code the first line as #!python3 rather than with the absolute path to the interpreter, like #!/usr/local/bin/python3.
No doubt most of you understand why this doesn't work, but I have no idea. Although my lab mates aren't complaining about having to recode my scripts to reflect the absolute path to the interpreter on their own machines, this seems like it shouldn't be necessary. I'd be perfectly happy with a response providing a link to the appropriate documentation. Thanks in advance. | 0 | python | 2012-07-26T17:02:00.000 | 1 | 11,674,359 | env is a program that handles these sort of things. You should pretty much always use something like #! /usr/bin/env python3 as your shebang line rather than specifying an absolute path. | 0 | 155 | false | 0 | 1 | How to make Python script portable to machines with interpreters in different locations? | 11,674,391 |
1 | 1 | 0 | 1 | 2 | 1 | 0.197375 | 0 | I am starting on a new project at a new job. This is my first time working heavily in Python. Mocking is a whole new beast compared to the hoops I had to jump through in a statically typed language. I took it upon myself to look into the team's unit tests and hopefully upgrade some of them from using Dingus to Mock.
Earlier today, I came across some tests that were checking a conversion class. Specifically, it converted strings of hexadecimal numbers into Mongo ObjectIds (unique identifiers). What I expected to see was a test that verified given a valid hex number, an ObjectId with same hex number would be returned -or- given a bad hex number an error would occur. Instead, all that the tests verified were that an ObjectId was created and returned. In fact, ObjectId was mocked out entirely and so was the hex number!
Now, creating an ObjectId from a string doesn't require going out to a server or anything. Everything is run locally.
I asked about this particular test suite with my new coworkers. Their thoughts were that the actual conversion should be verified using an integration test and being a unit test, all the unit test should do is make sure the code flows from top to bottom as expected and the ObjectId is created and returned. So, basically, the tests only verify that this class interacts with the environment in the expected way.
I have been writing unit tests for a long time. In my experience, I wouldn't be using mocks at all and I would just verify the conversions occurred as expected. This means interacting with the ObjectId class from another module. Perhaps my idea of a unit tests is too encompassing. I have always reserved integration tests for connecting to remote servers, files and whatnot.
The way I look at it, working with ObjectId in this example is no different than working with str or list. Sure, I can mock out str and list, but since they are essential to what my code is doing, mocking them out doesn't make much sense in my mind. The only time I should care about interacting with a dependency is when it can change the outcome of the test.
Is there any value in writing unit tests that simply check the flow of code? Shouldn't unit tests be the result of verifying the behavior/correctness of the code in mind? | 0 | python,unit-testing,mocking | 2012-07-26T17:29:00.000 | 0 | 11,674,762 | So, it's tough to see exactly what's going on without seeing the code, but based solely on your explanation...
I would agree with you. The behavior is what is important, not the flow of the code.
What if someone later on needs to change the flow of the code to support a different case (say, using a function with a different argument that accomplishes the same result); they can do so without breaking the existing tests.
What if you upgrade the library that is being used, and now calling the function actually has a different result than what you want? Your test still works (the function is being called), but what the unit test is actually trying to test does not.
Really, how mocks and tests are used is still a pretty young discipline. The jury is still out over whether unit testing (and the various strategies that are used in unit testing, such as mocking) are even considered "good thing". No doubt, however, I have found myself creating tests not to actually test behavior, but just so that I can say I have the test, and improper use of mocks is a great way to pretend you've created a test when really you've just created a false feeling of accomplishment that your code has now been more rigorously tested. | 0 | 355 | false | 0 | 1 | Python Unit Testing and when to Mock | 11,674,984 |
2 | 2 | 0 | -1 | 1 | 1 | -0.099668 | 0 | I was until now programming in C, which is a very basic language. But now as I am studying data structures, my online teacher actually uses some methods like leftChild(), rightChild(), etc.
But then I started searching whether tree ADT and such are implemented in C++, Python, Java
by default. And mostly the answers were no.
I just want to confirm whether any language supports tree ADT by default that means without downloading their classes separately. | 0 | java,c++,python,c | 2012-07-26T18:28:00.000 | 0 | 11,675,707 | Many of the STL containers in C++ are commonly implemented using trees. Examples include std::map and std::set | 0 | 668 | false | 0 | 1 | built in Abstract Data Types in c++/python/java | 11,675,784 |
2 | 2 | 0 | -1 | 1 | 1 | -0.099668 | 0 | I was until now programming in C, which is a very basic language. But now as I am studying data structures, my online teacher actually uses some methods like leftChild(), rightChild(), etc.
But then I started searching whether tree ADT and such are implemented in C++, Python, Java
by default. And mostly the answers were no.
I just want to confirm whether any language supports tree ADT by default that means without downloading their classes separately. | 0 | java,c++,python,c | 2012-07-26T18:28:00.000 | 0 | 11,675,707 | These are the basic ADTs. I think you should first learn and code them yourselves before jumping into any library with these inbuild features.
Thanks | 0 | 668 | false | 0 | 1 | built in Abstract Data Types in c++/python/java | 11,676,043 |
1 | 3 | 0 | 1 | 3 | 1 | 0.066568 | 0 | I'm currently writing a script, which at some point needs to compare numbers provided to the script by two different sources/inputs. One source provides the numbers as integers and one source provides them as strings. I need to compare them, so I need to use either str() on the integers or int() on the strings.
Assuming the number of conversions would be equal, would it be more efficient to convert the strings into integers or vice versa? | 0 | python,string,int,type-conversion,performance | 2012-07-27T11:49:00.000 | 0 | 11,687,183 | I don't really know what exactly you mean by "compare", but if it is not always only strict equality you'd better work with integers. You may need to sort your data or whatever, and it will be easier this way! | 0 | 2,669 | false | 0 | 1 | More efficient to convert string to int or inverse? | 11,687,359
1 | 1 | 0 | 3 | 0 | 0 | 0.53705 | 0 | I want to write a test script in Python which should reboot the system in between the test execution on the local machine... (No remote automation server is monitoring the script.) How can the script execution be made continuous even after a reboot? The script covers the following scenario:
Create a Volume on some disk
Create a filesystem and mount the file system temporary
Reboot the system
Verify if filesystem is mounted
Mount the filesystem again. | 0 | python,reboot | 2012-07-28T10:24:00.000 | 1 | 11,700,172 | It's not about Python but rather about your whole system config. In the given conditions I suggest you split your script into 2 parts. The first part does steps 1..3 and stores whatever extra info you require onto persistent storage other than the fs you're experimenting on. The second part is invoked on each OS start, reads the data stored by the first part and then performs checking actions 4..5. It seems to be the most obvious and simple way. | 0 | 1,400 | false | 0 | 1 | How to continue the python script execution from the point it left before reboot | 11,700,297
1 | 3 | 0 | 1 | 5 | 0 | 0.066568 | 1 | I need to call GET, POST, PUT, etc. requests to another URI because of search, but I cannot find a way to do that internally with pyramid. Is there any way to do it at the moment? | 0 | python,pyramid | 2012-07-28T14:35:00.000 | 0 | 11,701,920 | Also check the response status code: response.status_int
I use it, for example, to introspect my internal URIs and see whether or not a given relative URI is really served by the framework (e.g. to generate breadcrumbs and make intermediate paths links only if there are pages behind them). | 0 | 552 | false | 0 | 1 | Pyramid subrequests | 13,202,389
1 | 2 | 0 | 1 | 2 | 0 | 0.099668 | 0 | I put a simple python script inside the cgi-bin in apache2 and tried to execute it using the browser as follows,
"http://www.example.com/cgi-bin/test.py"
But it gives a 500 Internal Server Error.
Following is the error.log in apache2.
[Sun Jul 29 22:07:51 2012] [error] (8)Exec format error: exec of '/usr/lib/cgi-bin/test.py' failed
[Sun Jul 29 22:07:51 2012] [error] [client ::1] Premature end of script headers: test.py
[Sun Jul 29 22:07:51 2012] [error] [client ::1] File does not exist: /var/www/favicon.ico
Can anyone help me with this? | 0 | python,apache | 2012-07-29T16:51:00.000 | 1 | 11,711,060 | BlaXpirit's answer should solve your problem with the 500 Internal Server Error.
It is important to note the "\n" at the end of the first print statement. You can also write it as
print("Content-Type: text/html; charset=utf-8")
print()
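Putting it together, a complete minimal test.py (the shebang line matters: the '(8)Exec format error' in the log above is typically what you get when it is missing or wrong, and the file must also be executable):
#!/usr/bin/env python
print("Content-Type: text/html; charset=utf-8")
print()
print("<html><body>It works</body></html>")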
I was surprised to learn that writing out these headers is necessary even if your Python program is only going to do server-side work - with no response to the browser at all. | 0 | 2,241 | false | 0 | 1 | How to run a python script inside the cgi-bin of apache server? | 14,268,807 |
1 | 1 | 0 | 0 | 0 | 0 | 1.2 | 1 | Can each node of selenium grid run different python script/test?
- how to setup? | 0 | python,testing,selenium | 2012-07-30T06:58:00.000 | 0 | 11,716,677 | Yes, use different browser configurations in the hub, and use two or more programs to contact the grid with different browsers | 0 | 382 | true | 0 | 1 | Can each node of selenium grid run different script/test? - how to setup? | 11,718,057 |
1 | 3 | 0 | 5 | 6 | 0 | 1.2 | 0 | I'm trying to develop a small script that generates a complete new PDF file, mainly text and tables, as its result.
I'm searching for the best way to do it.
I've read about reportlab, which seems pretty good. It has only one drawback as far as I can see: it is quite hard to write a template without the commercial version, and the code seems hard to maintain.
So I've searched for a more suitable way and found xhtml2pdf, but this software is quite old and cannot generate tables spanning two pages or more.
The last solution in my mind is to generate a TeX file with a template framework, and later call pdftex as a subprocess.
I would implement the last one and go with LaTeX. Would you do so, or do you have better ideas? | 0 | python,pdf,latex | 2012-07-30T16:29:00.000 | 0 | 11,725,645 | I would suggest using the LaTeX approach. It is cross-platform, works in many different languages and is easy to maintain. Plus it's non-commercial! | 0 | 3,407 | true | 1 | 1 | Generate a pdf with python | 11,725,677
1 | 3 | 0 | 0 | 2 | 0 | 0 | 0 | I have to design an interface using PHP for software written in Python. Currently this software is used from the command line by passing input, mostly a text file. There is a series of steps, and for every step a Python script is called. Every step takes a text file as input and generates an output text file in a folder decided by the user. I am using PHP's system(), but I can't see the output, whereas when I use the same command from the command line it generates the output. Example of a command:
python /software/qiime-1.4.0-release/bin/check_id_map.py -m /home/qiime/sample/Fasting_Map.txt -o /home/qiime/sample/mapping_output -v | 0 | php,python,qiime | 2012-07-31T04:33:00.000 | 1 | 11,733,149 | Instead of system(), try surrounding the code in `backticks`...
It has similar functionality but behaves a little differently in the way it returns the output. | 0 | 634 | false | 0 | 1 | I need to run a python script from php | 11,733,222
2 | 2 | 0 | 1 | 2 | 0 | 0.099668 | 0 | Just using python & gevent.server to serve a simple login server (just checking some data and doing some db operations), would it be a problem when it's under a DDoS attack?
Would it be better to use apache/nginx to serve HTTP requests? | 0 | python,apache,nginx,gevent,httpserver | 2012-07-31T10:08:00.000 | 0 | 11,737,754 | In my opinion, you will never get the same level of security with a pure-Python server that you could have with major web servers such as Apache and Nginx.
These are well tested before being released, so by using a stable build and configuring it properly, you will be close to the maximum security possible.
Pure-Python servers are very useful during development, but I do not know any that can claim to compete with them for security testing / bug reports / quick fixes.
This is why it is generally advisable to put one of these servers in front of the pure-Python server, using, for example, options like ProxyPass. | 0 | 1,332 | false | 1 | 1 | http server using python & gevent(not using apache) | 11,740,272
2 | 2 | 0 | 3 | 2 | 0 | 0.291313 | 0 | Just using python & gevent.server to serve a simple login server (just checking some data and doing some db operations), would it be a problem when it's under a DDoS attack?
Would it be better to use apache/nginx to serve HTTP requests? | 0 | python,apache,nginx,gevent,httpserver | 2012-07-31T10:08:00.000 | 0 | 11,737,754 | If you are using gevent.server to implement your own HTTP server, I advise against it; you should instead use gevent.pywsgi, which provides a full-featured, stable and thoroughly tested HTTP server. It is not as fast as gevent.wsgi, which is backed by libevent-http, but it has more features that you are likely to need, like HTTPS.
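For reference, a minimal pywsgi server looks like this (a sketch):
from gevent.pywsgi import WSGIServer

def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]

WSGIServer(("0.0.0.0", 8000), app).serve_forever()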
Gevent is much more likely to survive a DDoS attack than Apache, and nginx is as good as gevent in this regard, although I don't see why you'd use it if you can do just fine with your pure-Python server. Using nginx would make sense if you had multiple backends behind the same server, like your auth server together with some static file serving (which could be done entirely by nginx) and possibly other subsystems or virtual hosts, all of which could be served through a single nginx configuration. | 0 | 1,332 | false | 1 | 1 | http server using python & gevent(not using apache) | 11,740,845
1 | 2 | 0 | 0 | 0 | 0 | 1.2 | 0 | I am trying to send a Python (2.6) HTML email with color-coded output. My script creates an output string which I format to look like a table (using str.format). It prints okay on the screen:
abcd 24222 xyz A
abcd 24222 xyz B
abcd 24222 xyz A
abcd 24222 xyz D
But I also need to send it as an email message and I need to have A (say in Green color), B (in Red) etc. How could I do it?
What I've tried is attaching <FONT COLOR="#somecolor"> and </FONT> tags at the front and back of A, B, etc. And I wrote a method/module which adds table, tr & td tags at appropriate parts of the string so that the message would look like an HTML table in the email. But there is an issue with this approach:
1) This doesn't always work properly. The emails (obtained by running the exact same script) are different, many times with misaligned members and mysterious tr's or td's appearing (at different locations each time), even though my HTML table creation is correct.
Any help would be appreciated. | 0 | python | 2012-07-31T16:43:00.000 | 0 | 11,745,033 | All right. Here's what worked for me, just in case anybody bumps into the same problem. I had to enter a carriage return (i.e. \n) after every tag in the HTML table, and everything worked fine.
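For reference, a stripped-down sketch of the working version (addresses, colours and the SMTP host are placeholders):
import smtplib
from email.mime.text import MIMEText

html = "<table>\n"
for name, grade in [("abcd", "A"), ("abcd", "B")]:
    color = "green" if grade == "A" else "red"
    html += ("<tr>\n<td>%s</td>\n<td><font color=\"%s\">%s</font></td>\n</tr>\n"
             % (name, color, grade))
html += "</table>\n"

msg = MIMEText(html, "html")
msg["Subject"], msg["From"], msg["To"] = "Report", "me@example.com", "you@example.com"
smtplib.SMTP("localhost").sendmail("me@example.com", ["you@example.com"], msg.as_string())
Note the \n after every tag, which is what fixed the misaligned rows.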
PS: One clue as to whether this will help you is that I am creating one big string of HTML. | 0 | 1,729 | true | 1 | 1 | Python HTML email : customizing output color-coding | 11,868,901
1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | We have successfully added the PyDev plugin to our Eclipse. As a result, in PyDev projects it detects errors and so on.
But the question is: is there any way to use PyDev's abilities (e.g. error detection) in non-PyDev projects (e.g. a Java project)?
Actually, we are developing an Eclipse plugin that contains some .py files and we want it to interpret them as a side feature. | 0 | python,eclipse,pydev | 2012-08-01T09:18:00.000 | 0 | 11,756,207 | PyDev should be working fine. In project properties, you can set the interpreter, PYTHONPATH and other PyDev-related settings.
To manually trigger code analysis, right-click on project, file or folder and select PyDev->Code analysis | 0 | 113 | false | 1 | 1 | python interpreter on non-pydev projects? | 11,758,291 |
1 | 4 | 0 | 1 | 0 | 0 | 0.049958 | 0 | The requirement is to develop an HTML-based Facebook app. It would not be content-based like a newspaper site,
but will mostly have user generated data which would be aggregated and presented from database + memcache.
The app would contain 4-5 pages at most, with different purposes.
We decided to write the app in Python instead of PHP, and tried to evaluate Django.
However, we found Django is not as flexible as CodeIgniter in PHP, which puts fewer restrictions and rules on you, allowing you to do what you want to do.
PHP's CodeIgniter is a minimalistic MVC framework, which we would have chosen if we were to develop in PHP.
Can you please suggest a flexible and minimalistic Python-based web framework? I have heard of Pylons, CherryPy, web.py, but I am completely unaware of their usage and structure. | 0 | python,django,pylons,cherrypy | 2012-08-01T12:25:00.000 | 0 | 11,759,164 | For the fastest development you may dive into Django. But Django is probably not the fastest solution. Flask is lighter. Also you can try Pyramid. | 0 | 1,400 | false | 1 | 1 | Which Python framework is flexible and similar to CodeIgniter in PHP? | 11,760,761
1 | 3 | 0 | 0 | 4 | 0 | 0 | 0 | I'm working on a bit of python code that uses mechanize to grab data from another website. Because of the complexity of the website the code takes 10-30 seconds to complete. It has to work its way through a couple pages and such.
I plan on having this piece of code being called fairly frequently. I'm wondering the best way to implement something like this without causing a huge server load. Since I'm fairly new to python I'm not sure how the language works.
If the code is in the middle of processing one request and another user calls the code, can two instances of the code run at once? Is there a better way to implement something like this?
I want to design it in a way that it can complete the hefty tasks without being too taxing on the server. | 0 | php,python,mysql,cron | 2012-08-01T15:25:00.000 | 0 | 11,762,480 | You can run more than one Python process at a time. As for causing excessive load on the server, that can only be alleviated by making sure you have either only one instance running at any given time or some other fixed number of processes, say two. To accomplish this you can look at using a lock file or some kind of system flag, mutex, etc.
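A common lock-file sketch on Unix uses fcntl (the path is illustrative):
import fcntl, sys

lock = open("/tmp/myscript.lock", "w")
try:
    fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)   # fail fast if another instance holds it
except IOError:
    sys.exit("another instance is already running")
# ... do the 10-30 second mechanize work here ...
The lock is released automatically when the process exits.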
But, the best way to limit excessive use is to limit the number of tasks running concurrently. | 0 | 128 | false | 0 | 1 | Best way to implement frequent calls to taxing Python scripts | 11,762,685 |
2 | 3 | 0 | 0 | 0 | 0 | 0 | 1 | In my app I send a packet by raw socket to another computer, then get a packet back and write the return packet to another computer by raw socket.
My app is a C++ application that runs on Ubuntu and works with nfqueue.
I want to test the sent packets for both computer1 and computer2 in order to check if they are as expected.
I need to write an automation test that checks my program; this automation test needs to listen on the eth interface, load the sent packets and check if they are as expected (IP, ports, payload).
I am looking for a simple way (a tool (with a simple API), or code) to do this.
I need a simple way to listen (automatically) on the eth interface.
I prefer that the test check the sender, but it might be difficult to find an API to listen on the eth interface (I send via raw socket), so a suggested API that checks the receiving computers is also good.
The test application can be written in C++, Java or Python. | 0 | c++,python,testing,networking | 2012-08-01T15:40:00.000 | 1 | 11,762,812 | The only way to check if a packet has been sent correctly is by verifying its integrity on the receiving end. | 0 | 1,276 | false | 0 | 1 | How to test if packet is sent correct? | 11,763,064
2 | 3 | 0 | 0 | 0 | 0 | 1.2 | 1 | In my app I send a packet by raw socket to another computer, then get a packet back and write the return packet to another computer by raw socket.
My app is a C++ application that runs on Ubuntu and works with nfqueue.
I want to test the sent packets for both computer1 and computer2 in order to check if they are as expected.
I need to write an automation test that checks my program; this automation test needs to listen on the eth interface, load the sent packets and check if they are as expected (IP, ports, payload).
I am looking for a simple way (a tool (with a simple API), or code) to do this.
I need a simple way to listen (automatically) on the eth interface.
I prefer that the test check the sender, but it might be difficult to find an API to listen on the eth interface (I send via raw socket), so a suggested API that checks the receiving computers is also good.
The test application can be written in C++, Java or Python. | 0 | c++,python,testing,networking | 2012-08-01T15:40:00.000 | 1 | 11,762,812 | I run tcpdump on the receiver computer and save all packets to a file.
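One way to do that analysis is with scapy's rdpcap (a sketch, assuming scapy is installed; the expected address and port are placeholders):
from scapy.all import rdpcap, IP, TCP

for pkt in rdpcap("capture.pcap"):
    if IP in pkt and TCP in pkt:
        assert pkt[IP].dst == "10.0.0.2" and pkt[TCP].dport == 8080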
I analysis the tcpdump with python and check that packet send as expected in the test. | 0 | 1,276 | true | 0 | 1 | How to test if packet is sent correct? | 11,809,920 |
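A sketch of that workflow (the file name and expected line are placeholders): capture on the receiver with tcpdump -w, then have the test read the capture back as text with tcpdump -r and look for the expected source/destination pair:

```python
import subprocess

# Beforehand, on the receiver (as root):
#   tcpdump -i eth0 -w capture.pcap
# The test then decodes the capture; -nn keeps addresses/ports numeric.
out = subprocess.check_output(
    ["tcpdump", "-nn", "-r", "capture.pcap"], text=True)

expected = "192.168.1.10.5555 > 192.168.1.20.8080"  # placeholder endpoints
assert expected in out, "expected packet not found in the capture"
```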
1 | 1 | 0 | 1 | 0 | 0 | 1.2 | 0 | I am trying to use Ajaxterm and I remember that when I used it for the first time about a year ago, there was something about logging in as root.
Can anyone tell me how to enable root login or point me to a guide? Many different google searches have returned no results.
P.S. My question is NOT whether or not I should log in as root, but how to log in as root. | 0 | javascript,python,ajax,bash,terminal | 2012-08-01T17:51:00.000 | 0 | 11,764,777 | Once you have logged in as a non-root user you can just su to the root user. | 0 | 249 | true | 0 | 1 | Login as root in Ajaxterm | 11,764,917
1 | 3 | 0 | 5 | 15 | 1 | 0.321513 | 0 | As programmers we read more than we write. I've started working at a company that uses a couple of "big" Python packages; packages or package-families that have a high KLOC. Case in point: Zope.
My problem is that I have trouble navigating this codebase fast/easily. My current strategy is
I start reading a module I need to change/understand
I hit an import which I need to know more of
I find out where the source code for that import is by placing a Python debug (pdb) statement after the imports and echoing the module, which tells me its source file
I navigate to it, in shell or the Vim file explorer.
most of the time the module itself imports more modules and before I know it I've got 10KLOC "on my plate"
Alternatively:
I see a method/class I need to know more of
I do a search (ack-grep) for the definition of that method/class across the whole codebase (which can be a pain because the codebase is partly in ~/.buildout-eggs)
I find one or more pieces of code that define that method/class
I have to deduce which one of them is the one I need to read
This costs a lot of time, which is understandable for a big codebase. But I get the feeling that navigating a large and unknown Python codebase is a common enough problem.
So I'm looking for technical tools or strategic solutions for this problem.
...
I just can't imagine hardcore Python programmers using the strategies outlined above. | 0 | python,vim,codebase | 2012-08-02T13:07:00.000 | 0 | 11,778,071 | I use IPython's ?? command.
You just need to figure out how to import the thing you want to look at, then add ?? to the end of the module, class, function, or method name to view its source code. Tab completion helps with figuring out long names as well. | 0 | 7,153 | false | 0 | 1 | Navigating a big Python codebase faster | 11,778,262
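For example, with json.dumps as an arbitrary stdlib target, an IPython session looks like this (the plain-Python equivalent of ?? is inspect.getsource):

```python
In [1]: import json

In [2]: json.dumps?     # one ? shows the signature and docstring

In [3]: json.dumps??    # ?? also shows the defining file and full source
```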
1 | 4 | 0 | 0 | 3 | 0 | 0 | 0 | I have a PHP file that calls a script and prints the output like this:
$output=shell_exec('/usr/bin/python hello.py');
echo $output;
It prints:
b'total 16\ndrwx---r-x 2 oae users 4096 Jul 31 14:21 .\ndrwxr-x--x+ 9 oae root 4096 Jul 26 13:59 ..\n-rwx---r-x 1 oae users 90 Aug 3 11:22 hello.py\n-rwx---r-x 1 oae users 225 Aug 3 11:22 index.php\n'
but it should look like this:
total 16K
drwx---r-x 2 oae users 4.0K Jul 31 14:21 ./
drwxr-x--x+ 9 oae root 4.0K Jul 26 13:59 ../
-rwx---r-x 1 oae users 90 Aug 3 11:22 hello.py*
-rwx---r-x 1 oae users 225 Aug 3 11:22 index.php*
The \n characters shouldn't be shown. How can I solve this? | 0 | php,python | 2012-08-03T08:32:00.000 | 1 | 11,792,129 | An alternative would be to wrap the string between <pre>...</pre> tags. | 0 | 10,514 | false | 0 | 1 | Print python script output correctly in PHP | 11,792,313
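The b'...' wrapper and the literal \n strongly suggest hello.py prints the repr of a bytes object. Since hello.py's source isn't shown this is a guess, but if it shells out with subprocess, decoding before printing fixes the output at the source:

```python
# hello.py -- hypothetical reconstruction
import subprocess

output = subprocess.check_output(["ls", "-la"])
print(output.decode())  # print(output) alone would show b'...\n...'
```

The <pre> wrapping then takes care of rendering the real newlines in the browser, since HTML collapses plain whitespace otherwise.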
1 | 2 | 0 | 0 | 3 | 0 | 0 | 0 | I currently run my own server "in the cloud" with PHP using mod_fastcgi and mod_vhost_alias. My mod_vhost_alias config uses a VirtualDocumentRoot of /var/www/%0/htdocs so that I can serve any domain that routes to my server's IP address out of a directory with that name.
I'd like to begin writing and serving some Python projects from my server, but I'm unsure how to configure things so that each site has access to the appropriate script processor.
For example, for my blog, dead-parrot.com, I'm running a PHP blog platform (Habari, not WordPress). But I'd like to run an app I've written in Flask on not-dead-yet.com.
I would like to enable Python execution with as little disruption to my mod_vhost_alias configuration as possible, so that I can continue to host new domains on this server simply by adding an appropriate directory. I'm willing to alter the directory structure, if necessary, but would prefer not to add additional, specific vhost config files for every new Python-running domain, since apart from being less convenient than my current setup with just PHP, it seems kind of hacky to have to name these earlier alphabetically to get Apache to pick them up before the single mod_vhost_alias vhost config.
Do you know of a way that I can set this up to run Python and PHP side-by-side as conveniently as I do just PHP? Thanks! | 1 | php,python,apache,mod-vhost-alias | 2012-08-03T12:53:00.000 | 0 | 11,796,126 | I faced the same situation; initially I searched Google without luck, but later figured it out and fixed it. I'm using the EC2 service on AWS with Ubuntu; I created an alias for PHP and for Python individually, and now I can access both. | 0 | 6,266 | false | 1 | 1 | Can I run PHP and Python on the same Apache server using mod_vhost_alias and mod_wsgi? | 36,646,397
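For what it's worth, the Python side of a mod_wsgi setup is just a file exposing a WSGI callable; a minimal sketch (the greeting text is a placeholder) that Apache's WSGIScriptAlias directive can be pointed at per domain:

```python
# app.wsgi -- minimal WSGI application served by mod_wsgi
def application(environ, start_response):
    body = b"Hello from Python\n"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]
```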
4 | 7 | 0 | 0 | 17 | 1 | 0 | 0 | We develop a distributed system built from components implemented in different programming languages (C++, C# and Python) and communicating with one another across a network.
All the components in the system operate with the same business concepts and also communicate with one another in terms of these concepts.
As a result, we struggle heavily with the following two challenges:
Keeping the representation of our business concepts in these three languages in sync
Serialization / deserialization of our business concepts across these languages
A naive solution for this problem would be just to define the same data structures (and the serialization code) three times (for C++, C# and Python).
Unfortunately, this solution has serious drawbacks:
It creates a lot of “code duplication”
It requires a huge amount of cross-language integration tests to keep everything in sync
Another solution we considered is based on the frameworks like ProtoBufs or Thrift. These frameworks have an internal language, in which the business concepts are defined, and then the representation of these concepts in C++, C# and Python (together with the serialization logic) is auto-generated by these frameworks.
While this solution doesn’t have the above problems, it has another drawback: the code generated by these frameworks couples together the data structures representing the underlying business concepts and the code needed to serialize/deserialize these data-structures.
We feel that this pollutes our code base – any code in our system that uses these auto-generated classes is now “familiar” with this serialization/deserialization logic (a serious abstraction leak).
We can work around it by wrapping the auto-generated code in our own classes / interfaces, but this brings us back to the drawbacks of the naive solution.
Can anyone recommend a solution that gets around the described problems? | 0 | c#,c++,python,serialization,cross-language | 2012-08-03T20:01:00.000 | 0 | 11,802,505 | You can wrap your business logic as a web service and call it from all three languages - just a single implementation. | 0 | 1,432 | false | 0 | 1 | How to share business concepts across different programming languages? | 11,803,274 |
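A minimal sketch of that idea using only the Python standard library (the price_quote operation and the port are invented for illustration); each language then only needs an HTTP/JSON client, and the business logic lives in exactly one place:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def price_quote(item, qty):
    """Stand-in for a real business operation."""
    return {"item": item, "total": qty * 9.99}

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body, run the operation, reply with JSON.
        length = int(self.headers["Content-Length"])
        request = json.loads(self.rfile.read(length))
        reply = price_quote(request["item"], request["qty"])
        body = json.dumps(reply).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8000), Handler).serve_forever()
```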
4 | 7 | 0 | 2 | 17 | 1 | 0.057081 | 0 | We develop a distributed system built from components implemented in different programming languages (C++, C# and Python) and communicating with one another across a network.
All the components in the system operate with the same business concepts and also communicate with one another in terms of these concepts.
As a result, we struggle heavily with the following two challenges:
Keeping the representation of our business concepts in these three languages in sync
Serialization / deserialization of our business concepts across these languages
A naive solution for this problem would be just to define the same data structures (and the serialization code) three times (for C++, C# and Python).
Unfortunately, this solution has serious drawbacks:
It creates a lot of “code duplication”
It requires a huge amount of cross-language integration tests to keep everything in sync
Another solution we considered is based on the frameworks like ProtoBufs or Thrift. These frameworks have an internal language, in which the business concepts are defined, and then the representation of these concepts in C++, C# and Python (together with the serialization logic) is auto-generated by these frameworks.
While this solution doesn’t have the above problems, it has another drawback: the code generated by these frameworks couples together the data structures representing the underlying business concepts and the code needed to serialize/deserialize these data-structures.
We feel that this pollutes our code base – any code in our system that uses these auto-generated classes is now “familiar” with this serialization/deserialization logic (a serious abstraction leak).
We can work around it by wrapping the auto-generated code in our own classes / interfaces, but this brings us back to the drawbacks of the naive solution.
Can anyone recommend a solution that gets around the described problems? | 0 | c#,c++,python,serialization,cross-language | 2012-08-03T20:01:00.000 | 0 | 11,802,505 | All the components in the system operate with the same business concepts and also communicate
with one another in terms of these concepts.
If I understood you correctly, you have split up your system into different parts communicating through well-defined interfaces. But your interfaces share data structures you call "business concepts" (hard to understand without seeing an example), and since those interfaces have to be built for all three of your languages, you have problems keeping them in sync.
When keeping interfaces in sync becomes a problem, it seems obvious that your interfaces are too broad. There are different possible reasons for that, with different solutions.
Possible reason 1: you overgeneralized your interface concept. If that's the case, redesign here: throw the generalization overboard and create interfaces which are only as broad as they have to be.
Possible reason 2: parts written in different languages are not dealing with separate business cases; you may have a "horizontal" partition between them, but not a vertical one. If that's the case, you cannot avoid the broadness of your interfaces.
Code generation may be the right approach here if reason 2 is your problem. If existing code generators don't suit your needs, why don't you just write your own? Define the interfaces, for example, as classes in C#, introduce some meta attributes, and use reflection in your code generator to extract the information again when generating the corresponding C++ and Python code, and also the "real-to-be-used" C# code. If you need different variants with or without serialization, generate them too. A working generator should not be more effort than a couple of days (YMMV depending on your requirements). | 0 | 1,432 | false | 0 | 1 | How to share business concepts across different programming languages? | 11,807,691
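The answer above sketches the generator in C# with reflection; here is the same single-source idea expressed in Python instead (schema, type maps, and emitted shapes are all invented for illustration), where one definition drives every emitted target:

```python
# Single source of truth for a business concept.
SCHEMA = {"Order": [("id", "int"), ("amount", "double")]}

CSHARP_TYPES = {"int": "int", "double": "double"}
CPP_TYPES = {"int": "int32_t", "double": "double"}

def emit_csharp(name, fields):
    lines = [f"public class {name} {{"]
    lines += [f"    public {CSHARP_TYPES[t]} {f.capitalize()} {{ get; set; }}"
              for f, t in fields]
    return "\n".join(lines + ["}"])

def emit_cpp(name, fields):
    lines = [f"struct {name} {{"]
    lines += [f"    {CPP_TYPES[t]} {f};" for f, t in fields]
    return "\n".join(lines + ["};"])

for name, fields in SCHEMA.items():
    print(emit_csharp(name, fields))
    print(emit_cpp(name, fields))
```

Serialization variants would just be a third emitter over the same SCHEMA, keeping the generated data classes themselves free of serialization code.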
4 | 7 | 0 | 0 | 17 | 1 | 0 | 0 | We develop a distributed system built from components implemented in different programming languages (C++, C# and Python) and communicating with one another across a network.
All the components in the system operate with the same business concepts and also communicate with one another in terms of these concepts.
As a result, we struggle heavily with the following two challenges:
Keeping the representation of our business concepts in these three languages in sync
Serialization / deserialization of our business concepts across these languages
A naive solution for this problem would be just to define the same data structures (and the serialization code) three times (for C++, C# and Python).
Unfortunately, this solution has serious drawbacks:
It creates a lot of “code duplication”
It requires a huge amount of cross-language integration tests to keep everything in sync
Another solution we considered is based on the frameworks like ProtoBufs or Thrift. These frameworks have an internal language, in which the business concepts are defined, and then the representation of these concepts in C++, C# and Python (together with the serialization logic) is auto-generated by these frameworks.
While this solution doesn’t have the above problems, it has another drawback: the code generated by these frameworks couples together the data structures representing the underlying business concepts and the code needed to serialize/deserialize these data-structures.
We feel that this pollutes our code base – any code in our system that uses these auto-generated classes is now “familiar” with this serialization/deserialization logic (a serious abstraction leak).
We can work around it by wrapping the auto-generated code in our own classes / interfaces, but this brings us back to the drawbacks of the naive solution.
Can anyone recommend a solution that gets around the described problems? | 0 | c#,c++,python,serialization,cross-language | 2012-08-03T20:01:00.000 | 0 | 11,802,505 | I would accomplish that by using some kind of meta-information about your domain entities (either XML or DSL, depending on complexity) and then go for code generation for each language. That would reduce (manual) code duplication. | 0 | 1,432 | false | 0 | 1 | How to share business concepts across different programming languages? | 11,807,498 |