Q_Id (int64) | CreationDate (string) | Users Score (int64) | Other (int64) | Python Basics and Environment (int64) | System Administration and DevOps (int64) | Tags (string) | A_Id (int64) | AnswerCount (int64) | is_accepted (bool) | Web Development (int64) | GUI and Desktop Applications (int64) | Answer (string) | Available Count (int64) | Q_Score (int64) | Data Science and Machine Learning (int64) | Question (string) | Title (string) | Score (float64) | Database and SQL (int64) | Networking and APIs (int64) | ViewCount (int64) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
11,346,002 |
2012-07-05T14:04:00.000
| 3 | 0 | 1 | 0 |
python
| 11,346,030 | 2 | false | 0 | 0 |
The difference of two datetime.datetime objects is a datetime.timedelta object. Its .days attribute gives you its length in days.
| 1 | 0 | 0 |
I have an array of dates. I am comparing the objects of the array (B) to a control (A). How do I check if B's age is 10 days when compared to A? Ideally, I would like to be able to declare a variable as the difference in days. Thanks.
|
Compare two datetimes and get day difference
| 0.291313 | 0 | 0 | 3,684 |
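A minimal sketch of the timedelta approach from the answer above; the dates are made up for illustration:

```python
import datetime

# Hypothetical control date A and a list of dates B, standing in for the asker's array.
a = datetime.datetime(2012, 7, 5)
b_dates = [datetime.datetime(2012, 6, 25), datetime.datetime(2012, 7, 1)]

for b in b_dates:
    # Subtracting two datetime objects yields a datetime.timedelta.
    diff_days = (a - b).days
    if diff_days == 10:
        print("B is exactly 10 days older than A:", b)
```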
11,346,224 |
2012-07-05T14:17:00.000
| 2 | 0 | 0 | 0 |
python,oracle
| 11,347,776 | 1 | true | 0 | 0 |
If you only need one or two connections, I see no harm in keeping them open indefinitely.
With Oracle, creating a new connection is an expensive operation, unlike in some other databases, such as MySQL where it is very cheap to create a new connection. Sometimes it can even take a few seconds to connect which can become a bit of a bottleneck for some applications if they close and open connections too frequently.
An idle connection on Oracle uses a small amount of memory, but aside from that, it doesn't consume any other resources while it sits there idle.
To keep your DBAs happy, you will want to make sure you don't have lots of idle connections left open, but I'd be happy with one or two.
| 1 | 3 | 0 |
I'm writing a bit of Python code that watches a certain directory for new files, and inserts new files into a database using the cx_Oracle module. This program will be running as a service. At a given time there could be many files arriving at once, but there may also be periods of up to an hour where no files are received. Regarding good practice: is it bad to keep a database connection open indefinitely? On one hand something tells me that it's not a good idea, but on the other hand there is a lot of overhead in creating a new database object for every file received and closing it afterwards, especially when many files are received at once. Any suggestions on how to approach this would be greatly appreciated.
|
Keeping database connection open - good practice?
| 1.2 | 1 | 0 | 1,153 |
11,347,613 |
2012-07-05T15:28:00.000
| 8 | 1 | 0 | 1 |
python,debian,packaging,deb
| 11,350,615 | 1 | true | 0 | 0 |
The only reason this isn't commonly done, afaik, is that it's not convention, and Python isn't usually more useful or straightforward than plain shell script for the sorts of things that maintainer scripts do. When it is more useful, you can often break out the Python-needing functionality into a separate Python script which is called by the maintainer scripts.
It can help to follow convention in this sort of situation, since there are a lot of helpful tools and scripts (e.g., Lintian, Debhelper) which generally assume that maintainer scripts use bash. If they don't, it's ok, but those tools may not be as useful as they would be otherwise. The only other issue I think you need to be aware of is that if your preinst or postrm scripts need Python, then Python needs to be a pre-dependency (Pre-Depends) of your package instead of just a Depends.
That said, I've found it useful to use Python in a maintainer script before.
| 1 | 9 | 0 |
I'm interested in what pitfalls there can be (besides Python not being installed on the target system) when using Python for deb package flow-control scripts (preinst, postinst, etc.). Would it be practical to implement those scripts in Python rather than in sh?
As I understand it, it's at least possible.
|
Will it be practical to implement deb preinst, postinst, etc. scripts in Python, not in sh
| 1.2 | 0 | 0 | 912 |
11,349,211 |
2012-07-05T17:10:00.000
| 0 | 0 | 1 | 0 |
python,linux,centos5,python-3.2
| 11,349,949 | 2 | false | 0 | 0 |
If you can't import modules such as time and math that are in the stdlib, then your installation of Python 3 is broken.
When you run python setup.py install, the files are installed in the correct place for the current Python executable, be it the system Python or a Python from a virtualenv environment. The same goes for pip.
You don't need to modify any paths; just use the appropriate executable.
| 1 | 6 | 0 |
I am developing in both Python 3 and Python 2.6, and have both versions installed. With Python 3, however, the path to lots of the good modules (time, math, ...) is not part of my Python path. I can add the directory to the path, but it's tedious.
Is there a way to permanently modify the path for my Python 3 installation without affecting Python 2?
|
Modify Python Path for Python3 only
| 0 | 0 | 0 | 1,654 |
11,349,476 |
2012-07-05T17:30:00.000
| 1 | 0 | 0 | 0 |
python,django,data-importer
| 16,125,317 | 8 | false | 1 | 0 |
I have done the same thing.
Firstly, my script was already parsing the emails and storing them in a db, so I set the db up in settings.py and used python manage.py inspectdb to create a model based on that db.
Then it's just a matter of building a view to display the information from your db.
If your script doesn't already use a db it would be simple to create a model with what information you want stored, and then force your script to write to the tables described by the model.
| 4 | 1 | 0 |
I have a script which scans an email inbox for specific emails. That part's working well and I'm able to acquire the data I'm interested in. I'd now like to take that data and add it to a Django app which will be used to display the information.
I can run the script on a CRON job to periodically grab new information, but how do I then get that data into the Django app?
The Django server is running on a Linux box under Apache / FastCGI if that makes a difference.
[Edit] - in response to Srikar's question When you are saying " get that data into the Django app" what exactly do you mean?...
The Django app will be responsible for storing the data in a convenient form so that it can then be displayed via a series of views. So the app will include a model with suitable members to store the incoming data. I'm just unsure how you hook into Django to make new instances of those model objects and tell Django to store them.
|
How can I periodically run a Python script to import data into a Django app?
| 0.024995 | 0 | 0 | 3,419 |
11,349,476 |
2012-07-05T17:30:00.000
| 0 | 0 | 0 | 0 |
python,django,data-importer
| 11,349,554 | 8 | false | 1 | 0 |
When you are saying "get that data into the Django app", what exactly do you mean?
I am guessing that you are using some sort of database (like MySQL). Insert whatever data you have collected from your cron job into the same tables that your Django app, and therefore your users, are accessing. That way your changes are immediately reflected to the users of the app, since they will be reading the data from the same tables.
| 4 | 1 | 0 |
I have a script which scans an email inbox for specific emails. That part's working well and I'm able to acquire the data I'm interested in. I'd now like to take that data and add it to a Django app which will be used to display the information.
I can run the script on a CRON job to periodically grab new information, but how do I then get that data into the Django app?
The Django server is running on a Linux box under Apache / FastCGI if that makes a difference.
[Edit] - in response to Srikar's question When you are saying " get that data into the Django app" what exactly do you mean?...
The Django app will be responsible for storing the data in a convenient form so that it can then be displayed via a series of views. So the app will include a model with suitable members to store the incoming data. I'm just unsure how you hook into Django to make new instances of those model objects and tell Django to store them.
|
How can I periodically run a Python script to import data into a Django app?
| 0 | 0 | 0 | 3,419 |
11,349,476 |
2012-07-05T17:30:00.000
| 0 | 0 | 0 | 0 |
python,django,data-importer
| 11,349,556 | 8 | false | 1 | 0 |
Best way?
Make a view on the Django side to handle receiving the data, and have your script do an HTTP POST to a URL registered to that view.
You could also import the model and such from inside your script, but I don't think that's a very good idea.
| 4 | 1 | 0 |
I have a script which scans an email inbox for specific emails. That part's working well and I'm able to acquire the data I'm interested in. I'd now like to take that data and add it to a Django app which will be used to display the information.
I can run the script on a CRON job to periodically grab new information, but how do I then get that data into the Django app?
The Django server is running on a Linux box under Apache / FastCGI if that makes a difference.
[Edit] - in response to Srikar's question When you are saying " get that data into the Django app" what exactly do you mean?...
The Django app will be responsible for storing the data in a convenient form so that it can then be displayed via a series of views. So the app will include a model with suitable members to store the incoming data. I'm just unsure how you hook into Django to make new instances of those model objects and tell Django to store them.
|
How can I periodically run a Python script to import data into a Django app?
| 0 | 0 | 0 | 3,419 |
11,349,476 |
2012-07-05T17:30:00.000
| 1 | 0 | 0 | 0 |
python,django,data-importer
| 16,125,548 | 8 | false | 1 | 0 |
Forget about this being a Django app for a second. It is just a load of Python code.
What this means is, your Python script is absolutely free to import the database models you have in your Django app and use them as you would in a standard module in your project.
The only difference here, is that you may need to take care to import everything Django needs to work with those modules, whereas when a request enters through the normal web interface it would take care of that for you.
Just import Django and the required models.py/any other modules you need for it to work from your app. It is your code, not a black box. You can import it from wherever the hell you want.
EDIT: The link from Rohan's answer to the Django docs for custom management commands is definitely the least painful way to do what I said above.
| 4 | 1 | 0 |
I have a script which scans an email inbox for specific emails. That part's working well and I'm able to acquire the data I'm interested in. I'd now like to take that data and add it to a Django app which will be used to display the information.
I can run the script on a CRON job to periodically grab new information, but how do I then get that data into the Django app?
The Django server is running on a Linux box under Apache / FastCGI if that makes a difference.
[Edit] - in response to Srikar's question When you are saying " get that data into the Django app" what exactly do you mean?...
The Django app will be responsible for storing the data in a convenient form so that it can then be displayed via a series of views. So the app will include a model with suitable members to store the incoming data. I'm just unsure how you hook into Django to make new instances of those model objects and tell Django to store them.
|
How can I periodically run a Python script to import data into a Django app?
| 0.024995 | 0 | 0 | 3,419 |
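As a rough sketch of the "it is just Python code" point above, a cron-driven script can bootstrap Django's settings before importing the models. The project, app, and model names here (myproject, mailimport, EmailRecord) are hypothetical, and the optional django.setup() call only exists in newer Django releases:

```python
import os

# Point Django at the project's settings module before importing any models.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")

import django
if hasattr(django, "setup"):
    django.setup()  # required on newer Django versions, absent on the 1.4-era ones

from mailimport.models import EmailRecord  # hypothetical app and model

def store(subject, body):
    # Create and save a model instance exactly as you would inside a view.
    EmailRecord.objects.create(subject=subject, body=body)

if __name__ == "__main__":
    store("test subject", "test body")
```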
11,349,709 |
2012-07-05T17:46:00.000
| 1 | 0 | 0 | 1 |
jquery,python,html,google-app-engine,steam-web-api
| 11,350,089 | 2 | false | 1 | 0 |
Since you have the Steam ID from the service, you can then make another request to their Steam Community page via that ID. From there you can use Beautiful Soup to parse the returned page and grab the required information for your project.
Now onto your question. You can have all this happen within a request handler if you are using a web framework such as Tornado; the handler can return JSON in the page, and you can render this JSON using your JavaScript code.
Look into a web framework for Python such as Tornado or Django to help you with returning and displaying the data.
| 1 | 0 | 0 |
So basically, at the moment, we are trying to write a basic HTML 5 page that, when you press a button, returns whether the user, on Steam, is in-game, offline, or online. We have looked at the Steam API, and to find this information, it requires the person's 64 bit ID (steamID64) and we, on the website, are only given the username. In order to find their 64 bit id, we have tried to scrape off of a website (steamidconverter.com) to get the user's 64 bit id from their username. We tried doing this through the javascript, but of course we ran into the cross domain block, not allowing us to access that data from our google App Engine website.
I have experience in Python, so I attempted to figure out how to get the HTML from that website (in the form of steamidconverter.com/(personsusername)) with Python. That was a success in scraping, thanks to another post on Stack Overflow.
BUT, I have no idea how to get that data back to the javascript and get it to do the rest of the work. I am stumped and really need help. This is all on google App Engine. All it is at the moment, is a button that runs a simple javascript that attempts to use JQuery to get the contents of the page back, but fails. I don't know how to integrate the two!
Please Help!
|
Scraper Google App Engine for Steam
| 0.099668 | 0 | 0 | 614 |
11,350,907 |
2012-07-05T19:06:00.000
| 11 | 1 | 0 | 1 |
python,timer,uwsgi
| 11,353,126 | 1 | true | 1 | 0 |
@timer uses kernel-level facilities, so they are limited in the maximum number of timers you can create.
@rbtimer is completely userspace, so you can create an unlimited number of timers at the cost of less precision.
| 1 | 5 | 0 |
I'm looking to add simple repeating tasks to my current application and I'm looking at the uwsgi signals api and there are two decorators @timer and @rbtimer. I've tried looking through the doc and even the python source at least but it appears it's probably more low level than that somewhere in the c implementation.
I'm familiar with the concept of a red-black tree but I'm not sure how that would relate to timers. If someone could clear things up or point me to the doc I might have missed I'd appreciate it.
|
What's the difference between timer and rbtimer in uWSGI?
| 1.2 | 0 | 0 | 826 |
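A small illustration of the two decorators discussed above, assuming the uwsgidecorators helper module that ships with uWSGI; this only does anything when the script runs under uWSGI, and the intervals and bodies are made up:

```python
from uwsgidecorators import timer, rbtimer

@timer(60)
def kernel_backed(signum):
    # Fired roughly every 60 seconds via a kernel-level timer facility.
    print("kernel timer fired")

@rbtimer(60)
def userspace_backed(signum):
    # Same interval, but scheduled from the red-black tree kept in userspace.
    print("red-black tree timer fired")
```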
11,351,264 |
2012-07-05T19:31:00.000
| 1 | 0 | 0 | 0 |
python,numpy
| 11,359,378 | 2 | false | 0 | 0 |
There is no configuration file for this. You will have to call np.seterr() yourself.
| 1 | 1 | 1 |
I'd like to change my seterr defaults to be either all 'warn' or all 'ignore'. This can be done interactively by doing np.seterr(all='ignore'). Is there a way to make it a system default? There is no .numpyrc as far as I can tell; is there some other configuration file where these defaults can be changed?
(I'm using numpy 1.6.1)
EDIT: The problem was not that numpy's default settings had changed, as I had incorrectly suspected, but that another code, pymc, was changing things that are normally ignore or warn to raise, causing all sorts of undesired and unexpected crashes.
|
Change numpy.seterr defaults?
| 0.099668 | 0 | 0 | 880 |
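One workaround (my suggestion, not something the answer prescribes) is to put the np.seterr() call in a small module that is always imported at startup, or in sitecustomize.py:

```python
# startup_numpy.py -- import this (or place the call in sitecustomize.py) before any numerical work.
import numpy as np

# Make all floating-point error conditions warn instead of raise; use 'ignore' to silence them.
old_settings = np.seterr(all='warn')
print("previous error handling:", old_settings)
```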
11,351,290 |
2012-07-05T19:32:00.000
| 2 | 0 | 1 | 0 |
python,nlp,nltk
| 11,355,186 | 3 | false | 0 | 0 |
Because the number of contractions is very small, one way to do it is to search for all contractions and replace them with their full equivalents (e.g. "don't" to "do not"), and then feed the updated sentences into the wordpunct tokenizer.
| 1 | 19 | 1 |
I'm tokenizing text with nltk, just sentences fed to wordpunct_tokenizer. This splits contractions (e.g. 'don't' to 'don' +" ' "+'t') but I want to keep them as one word. I'm refining my methods for a more measured and precise tokenization of text, so I need to delve deeper into the nltk tokenization module beyond simple tokenization.
I'm guessing this is common and I'd like feedback from others who've maybe had to deal with the particular issue before.
edit:
Yeah, this is a general, splattershot question, I know.
Also, as a novice to nlp, do I need to worry about contractions at all?
EDIT:
The SExprTokenizer or TreeBankWordTokenizer seems to do what I'm looking for for now.
|
nltk tokenization and contractions
| 0.132549 | 0 | 0 | 12,956 |
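A small sketch of the search-and-replace idea from the first answer; the contraction map is deliberately tiny and would need to be extended for real text:

```python
from nltk.tokenize import wordpunct_tokenize

# Minimal, illustrative contraction map (far from complete).
CONTRACTIONS = {
    "don't": "do not",
    "can't": "cannot",
    "I'm": "I am",
}

def expand_contractions(sentence):
    # Replace each known contraction with its full form before tokenizing.
    for short, full in CONTRACTIONS.items():
        sentence = sentence.replace(short, full)
    return sentence

sentence = "I don't think we can't do this."
print(wordpunct_tokenize(expand_contractions(sentence)))
```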
11,353,206 |
2012-07-05T22:06:00.000
| 0 | 0 | 0 | 0 |
python,ios,webserver,download
| 11,353,286 | 4 | false | 1 | 0 |
If you're literally just serving content (ie, not doing any calculations or look-ups), then use the nginx webserver to serve it based on URL.
| 1 | 1 | 0 |
I am trying to write an iPhone application that uses a Python server. The iPhone application will send an HTTP request to the server, which should then respond by sending back a file that is on the server. What is the best way to do this? Thanks.
|
How to respond to an HTTP request using Python
| 0 | 0 | 1 | 750 |
11,356,681 |
2012-07-06T06:04:00.000
| 5 | 0 | 0 | 1 |
python,bash,shell
| 11,356,753 | 4 | false | 0 | 0 |
Set your Python script as the login shell of the user (in /etc/passwd). This way the user will be automatically logged out after the script exits.
| 1 | 2 | 0 |
I'm writing a shell script in Python for bash. The script automatically runs when the user logs into the account, and I want it to log the user out when it exits. I tried using os.system('exit'), but it doesn't work. How would I achieve this?
|
How do I make a Python script log out of the shell when it exits?
| 0.244919 | 0 | 0 | 1,998 |
11,357,611 |
2012-07-06T07:22:00.000
| 1 | 0 | 0 | 0 |
python,button,windows-xp,wxpython
| 11,357,674 | 2 | true | 0 | 1 |
Assuming you're running your program on Windows (you didn't say which OS, but dotted lines are used by Windows Classic look), the dotted lines are called the focus rect, and they appear to mark a button or widget as focused. They are a system setting, and your program is acting as it should - wxWidgets is meant to emulate the underlying OS default behaviour as closely as possible.
Update
I don't think you can change this behaviour from inside the program. I really doubt that wxWidgets has a setting somewhere for this, as it is OS-dependent and is the standard and correct behaviour for the Classic theme. But the focus rect is shown by default only on the Classic Look which few people use.
Try switching to Luna theme (the default on XP), and you'll see that the focus rect won't appear unless you start hitting Tab while your window is in focus. By the way, the focus rect is needed exactly for when you are switching the focus using the Tab key. You need to see where the focus is, after all. That way you know when you press Enter or Space, which button is going to be pressed. Not everyone uses only the mouse.
| 2 | 0 | 0 |
How do I change the button decoration with wxPython? Generally when the button is clicked, dotted lines appear on the button. Is there any way to make the button not show the dotted lines?
Thanks
|
wxPython, button design
| 1.2 | 0 | 0 | 346 |
11,357,611 |
2012-07-06T07:22:00.000
| 0 | 0 | 0 | 0 |
python,button,windows-xp,wxpython
| 11,361,156 | 2 | false | 0 | 1 |
You can use a custom button, for instance wx.lib.buttons.GenButton, which is implemented in pure Python so you can override the look, feel, etc.
This also has a method SetUseFocusIndicator to turn off the dotted focus indicator.
| 2 | 0 | 0 |
How do I change the button decoration with wxPython? Generally when the button is clicked, dotted lines appear on the button. Is there any way to make the button not show the dotted lines?
Thanks
|
wxPython, button design
| 0 | 0 | 0 | 346 |
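A rough wxPython sketch of the GenButton suggestion; the SetUseFocusIndicator call is the one named in the answer above, and the rest of the window setup is boilerplate:

```python
import wx
import wx.lib.buttons as buttons

class DemoFrame(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self, None, title="GenButton demo")
        panel = wx.Panel(self)
        # GenButton is drawn in pure Python, so its look and feel can be customised.
        btn = buttons.GenButton(panel, label="Click me", pos=(20, 20))
        # Turn off the dotted focus rectangle.
        btn.SetUseFocusIndicator(False)

if __name__ == "__main__":
    app = wx.App(False)
    DemoFrame().Show()
    app.MainLoop()
```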
11,357,851 |
2012-07-06T07:38:00.000
| 0 | 1 | 1 | 0 |
python,regex,base64
| 11,357,972 | 2 | true | 0 | 0 |
I would suggest escaping the / character in the [A-Za-z0-9+/] block, because while unescaped it marks the start/end of the regular expression.
| 1 | 1 | 0 |
I read an article about a regular expression to detect base64 but when I try it in "yara python" it gives an error of "unterminated regular expression"
the regular expression is:
(?:[A-Za-z0-9+/]{4}){2,}(?:[A-Za-z0-9+/]{2}[AEIMQUYcgkosw048]=|[A-Za-z0-9+/][AQgw]==)
could anyone throw a suggestion please?
thanks
|
regular Expression to detect base64
| 1.2 | 0 | 0 | 5,830 |
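For comparison, the same pattern compiles fine in plain Python, where / has no special meaning; in YARA the forward slashes need escaping because regexes are delimited by /, as the answer notes. A quick Python check:

```python
import re

# The pattern from the question; '/' needs no escaping inside a Python string literal.
BASE64_RE = re.compile(
    r'(?:[A-Za-z0-9+/]{4}){2,}'
    r'(?:[A-Za-z0-9+/]{2}[AEIMQUYcgkosw048]=|[A-Za-z0-9+/][AQgw]==)'
)

print(bool(BASE64_RE.search("SGVsbG8gd29ybGQ=")))  # base64 of "Hello world"
```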
11,360,858 |
2012-07-06T10:55:00.000
| 7 | 0 | 1 | 0 |
python
| 50,576,864 | 3 | false | 0 | 0 |
I call it "optimistic programming". The idea is that most times people will do the right thing, and errors should be few. So code first for the "right thing" to happen, and then catch the errors if they don't.
My feeling is that if a user is going to be making mistakes, they should be the one to suffer the time consequences. People who use the tool the right way are sped through.
| 1 | 170 | 0 |
What is meant by "using the EAFP principle" in Python? Could you provide any examples?
|
What is the EAFP principle in Python?
| 1 | 0 | 0 | 47,676 |
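A tiny illustration of EAFP ("easier to ask forgiveness than permission") next to the look-before-you-leap style, using a plain dictionary:

```python
data = {"name": "Ada"}

# LBYL: check first, then act.
if "age" in data:
    age = data["age"]
else:
    age = None

# EAFP: just act, and handle the failure if it happens.
try:
    age = data["age"]
except KeyError:
    age = None

print(age)
```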
11,361,488 |
2012-07-06T11:39:00.000
| 3 | 1 | 1 | 0 |
python,design-patterns,singleton
| 11,362,386 | 3 | false | 0 | 0 |
I want to keep things in classes to retain consistency
Why? Why is consistency important (other than being a hobgoblin of little minds)?
Use classes where they make sense. Use modules where they don't. Classes in Python are really for encapsulating data and retaining state. If you're not doing those things, don't use classes. Otherwise you're fighting against the language.
| 2 | 0 | 0 |
I'm writing a pretty big and complex application, so I want to stick to design patterns to keep the code quality high. I have a problem with one instance that needs to be available to almost all other instances.
Let's say I have an instance of BusMonitor (a class for logging messages) and other instances that use this instance for logging actions, for example a Reactor that parses incoming frames from a network protocol and, depending on the frame, logs different messages.
I have one main instance that creates BusMonitor, Reactor and few more instances.
Now I want Reactor to be able to use BusMonitor instance, how can I do that according to design patterns?
Setting it as a variable for Reactor seems ugly for me:
self._reactor.set_busmonitor(self._busmonitor)
I would do that for every instance that needs access to BusMonitor.
Importing this instance seems even worse.
Although I could make BusMonitor a singleton, I mean not as a class but as a module, and then import this module, I want to keep things in classes to retain consistency.
What approach would be the best?
|
Python app design patterns - instance must be available for most other instances
| 0.197375 | 0 | 0 | 105 |
11,361,488 |
2012-07-06T11:39:00.000
| 0 | 1 | 1 | 0 |
python,design-patterns,singleton
| 11,471,003 | 3 | true | 0 | 0 |
I found a good way, I think. I made a module with the BusMonitor class, and in the same module, after the class definition, I create an instance of this class. Now I can import it from anywhere in the project, and I retain consistency by using classes and encapsulation.
| 2 | 0 | 0 |
I'm writing a pretty big and complex application, so I want to stick to design patterns to keep the code quality high. I have a problem with one instance that needs to be available to almost all other instances.
Let's say I have an instance of BusMonitor (a class for logging messages) and other instances that use this instance for logging actions, for example a Reactor that parses incoming frames from a network protocol and, depending on the frame, logs different messages.
I have one main instance that creates BusMonitor, Reactor and few more instances.
Now I want Reactor to be able to use BusMonitor instance, how can I do that according to design patterns?
Setting it as a variable for Reactor seems ugly for me:
self._reactor.set_busmonitor(self._busmonitor)
I would do that for every instance that needs access to BusMonitor.
Importing this instance seems even worse.
Although I could make BusMonitor a singleton, I mean not as a class but as a module, and then import this module, I want to keep things in classes to retain consistency.
What approach would be the best?
|
Python app design patterns - instance must be available for most other instances
| 1.2 | 0 | 0 | 105 |
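A minimal sketch of the module-level instance the accepted answer describes, using the names from the question:

```python
# busmonitor.py
class BusMonitor(object):
    """Class for logging messages; state stays encapsulated in the class."""

    def __init__(self):
        self._messages = []

    def log(self, message):
        self._messages.append(message)

# Created once at import time; every "import busmonitor" sees this same object.
bus_monitor = BusMonitor()
```

Elsewhere in the project, from busmonitor import bus_monitor gives every module (the Reactor, for example) the same instance without passing it around explicitly.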
11,362,376 |
2012-07-06T12:39:00.000
| 1 | 0 | 0 | 0 |
python,pandas
| 11,401,559 | 1 | false | 0 | 0 |
Right now there is not an easy way to maintain metadata on pandas objects across computations.
Maintaining metadata has been an open discussion on GitHub for some time now, but we haven't had time to code it up.
We'd welcome any additional feedback you have (see pandas on github) and would love to accept a pull-request if you're interested in rolling your own.
| 1 | 1 | 1 |
Is it possible to customize Series (in a simple way, and DataFrame by the way :p) from pandas to append extra information to the display and to the plots? A great thing would be to have the possibility to append information like "unit", "origin" or anything relevant for the user that will not be lost during computations, like the "name" parameter.
|
Append extra information to Series in Pandas
| 0.197375 | 0 | 0 | 148 |
11,366,556 |
2012-07-06T16:59:00.000
| 2 | 0 | 1 | 0 |
python,multithreading,gil
| 15,244,372 | 2 | false | 0 | 0 |
I don't know exactly what you are looking for, but you should consider using the pair of macros Py_BEGIN_ALLOW_THREADS and Py_END_ALLOW_THREADS. With these macros you can make sure that the code between them runs without the GIL held, and that touching Python state inside that region will surely cause random crashes.
| 1 | 16 | 0 |
I tried to find a function that tells me whether the current thread has the global interpreter lock or not.
The Python/C-API documentation does not seem to contain such a function.
My current solution is to just acquire the lock using PyGILState_Ensure() before releasing it using PyEval_SaveThread to not try releasing a lock that wasn't acquired by the current thread.
(btw. what does "issues a fatal error" mean?)
Background of this question: I have a multithreaded application which embeds Python. If a thread is closed without releasing the lock (which might occur due to crashes), other threads are not able to run any more. Thus, when cleaning up/closing the thread, I would like to check whether the lock is held by this thread and release it in this case.
Thanks in advance for answers!
|
How can I check whether a thread currently holds the GIL?
| 0.197375 | 0 | 0 | 5,787 |
11,367,082 |
2012-07-06T17:34:00.000
| 0 | 0 | 1 | 0 |
python,python-multithreading
| 11,367,103 | 1 | false | 0 | 0 |
I would say it wouldn't be reasonable on a 32bit machine. I would want to run that kind of load on a 64bit machine with ample memory to handle any overhead that that number of processes might need.
| 1 | 0 | 0 |
The server is running Red Hat in 32bit with 8 cores.
The company classes that must be instantiated are not pickleable.
I tried threading but reaching 4 to 7 concurrent threads dropped performance to that of sequential processing. This was due in part to my ignorance, PySimpleClient and the underlying C++ implementation.
I tried multiprocessing with Queues but this was not robust and did not improve performance.
I currently am running 60 multiprocess processes each with a pipe successfully. The performance is great and robustness so far is excellent.
But I need 700 processes minimum. Is 700 reasonable?
|
Python multiprocessing processes and pipes - is 700 processes reasonable?
| 0 | 0 | 0 | 275 |
11,368,143 |
2012-07-06T18:59:00.000
| 0 | 0 | 1 | 0 |
python,image,python-3.x,scaling,pixel
| 16,485,248 | 4 | false | 0 | 1 |
The easiest thing to do is turn the images into numpy matrices, and then construct a new, much bigger numpy matrix to house all of them. Then convert the np matrix back into an image. Of course it'll be enormous, so you may want to downsample.
| 1 | 2 | 0 |
I have a script to save between 8 and 12 images to a local folder. These images are always GIFs. I am looking for a python script to combine all the images in that one specific folder into one image. The combined 8-12 images would have to be scaled down, but I do not want to compromise the original quality(resolution) of the images either (ie. when zoomed in on the combined images, they would look as they did initially)
The only way I am able to do this currently is by copying each image to power point.
Is this possible with python (or any other language, but preferably python)?
As an input to the script, I would type in the path where only the images are stores (ie. C:\Documents and Settings\user\My Documents\My Pictures\BearImages)
EDIT: I downloaded ImageMagick and have been using it with the python api and from the command line. This simple command worked great for what I wanted: montage "*.gif" -tile x4 -geometry +1+1 -background none combine.gif
|
Python: Import multiple images from a folder and scale/combine them into one image?
| 0 | 0 | 0 | 3,192 |
11,370,310 |
2012-07-06T22:09:00.000
| 7 | 0 | 1 | 0 |
python
| 11,370,345 | 2 | true | 0 | 0 |
The value is not being recalculated each time you print it, the long delay you see is the cost of converting the large number into a string for displaying.
| 1 | 2 | 0 |
Okay, I was trying to throw in some really large number evaluation in Python, of the order of 10^(10^120), which I then realized was quite huge. Anyways, I then scaled back to 10**10**5 and 10**10**6. Checking the time difference of the two brought me to this somewhat strange finding, which I could see as an inefficiency.
The finding was that when I tried cProfile.run("x=10**10**6") it took 0.3s and cProfile.run("print 10**10**6") took 40s.
Then I tried x= 10**10**6 which took almost no time but thereafter every time that I interpreted x (x followed by enter) it would take a really long time (40s I suppose). So, I am assuming that every time that I interpret x it calculates the entire value over again.
So my question is: isn't that extremely inefficient? Say I had declared some variable in a module, x= 10**10, and every time I would reference x the python interpreter would compute the value of 10**10 over and over again ?
Gory details would be much appreciated.
|
Is Python reevaluating arithmetic operations with bignums each time the result is used?
| 1.2 | 0 | 0 | 118 |
11,370,877 |
2012-07-06T23:24:00.000
| 4 | 1 | 1 | 1 |
python
| 11,370,887 | 2 | true | 0 | 0 |
The 64-bit version of the libraries?
What version of Python are you running? If you are running the 32-bit version, then you probably won't need those files.
| 1 | 10 | 0 |
I was using Ubuntu.
I found that many installed Python libraries went into both /usr/lib/python and /usr/lib64/python.
When I print a module object, the module path showed that the module lived in /usr/lib/python.
Why do we need the /usr/lib64/python directory then?
What's the difference between these two directories?
BTW
Some package-management scripts and egg-info entries that live in both directories are actually links to packages in /usr/share.
Most Python modules are just links, but the .so files are not.
|
What's the difference between /usr/lib/python and /usr/lib64/python?
| 1.2 | 0 | 0 | 10,916 |
11,371,057 |
2012-07-06T23:51:00.000
| 2 | 1 | 0 | 0 |
python,profiling
| 11,371,096 | 1 | true | 0 | 0 |
If you only need to know the amount of time spent in the Python code, and not (for example), where in the Python code the most time is spent, then the Python profiling tools are not what you want. I would write some simple C code that sampled the time before and after the Python interpreter invocation, and use that. Or, C-level profiling tools to measure the Python interpreter as a C function call.
If you need to profile within the Python code, I wouldn't recommend writing your own profile function. All it does is provide you with raw data, you'd still have to aggregate and analyze it. Instead, write a Python wrapper around your Python code that invokes the cProfile module to capture data that you can then examine.
| 1 | 1 | 0 |
I have used the Python's C-API to call some Python code in my c code and now I want to profile my python code for bottlenecks. I came across the PyEval_SetProfile API and am not sure how to use it. Do I need to write my own profiling function?
I will be very thankful if you can provide an example or point me to an example.
|
Profiling Python via C-api (How to ? )
| 1.2 | 0 | 0 | 276 |
11,371,175 |
2012-07-07T00:10:00.000
| 0 | 0 | 1 | 0 |
python,tuples
| 11,371,262 | 1 | false | 0 | 0 |
If you change a function that returned an object to instead return a tuple, then the callers of the function will have to be changed. There is no way around that. Either you change the callers to extract just the first object, or the unchanged code will have a tuple where it used to have an object.
| 1 | 0 | 0 |
I want to modify one of the API methods which returns an object. I think it should return a tuple of objects. I don't want to change the way people call this API method. Is there any way to return a tuple of objects that can be referenced directly for the first object?
|
Tuple of objects that, if referenced directly, returns first object
| 0 | 0 | 0 | 60 |
11,371,544 |
2012-07-07T01:25:00.000
| 0 | 0 | 1 | 0 |
python,coding-style
| 11,371,668 | 2 | false | 0 | 0 |
It avoids keyword name conflicts.
Say we had an imaginary list of ['when', 'who', 'why', 'where', 'with']
ie. one can't type my_name_tuple.with without the interpreter going ouch.
Read the docs on the namedtuple thoroughly and you should get it.
| 1 | 9 | 0 |
I was reading source code of collections.py yesterday.
In the namedtuple function, a template string is generated and then execed in a temporary namespace.
In the namespace dict, property is renamed to _property and tuple to _tuple.
I wonder what's the reason behind this. What problems does this renaming helps avoid?
|
Reasons to rename property to _property
| 0 | 0 | 0 | 887 |
11,372,221 |
2012-07-07T04:16:00.000
| 0 | 0 | 1 | 0 |
python,virtualenv,python-module,virtualenvwrapper
| 13,736,930 | 4 | false | 0 | 0 |
Try installing virtualenvwrapper with **sudo** pip install virtualenvwrapper. It might be referring to the shell script it installs in /usr/local/bin.
| 1 | 5 | 0 |
Given what I know about Python, the problem I'm having shouldn't be happening. I installed virtualenvwrapper on Mac OS X Snow Leopard with pip. It's there in /Library/Python/2.6/site-packages. But when I try to import virtualenvwrapper, Python tells me there's no such module with that name. Other modules (e.g. virtualenv) load just fine, and /Library/Python/2.6/site-packages is right at the top of my Python path. So is there something weird about virtualenvwrapper so that Python isn't finding it?
|
Virtualenvwrapper not found
| 0 | 0 | 0 | 5,534 |
11,378,388 |
2012-07-07T20:29:00.000
| 0 | 0 | 1 | 0 |
python,python-2.7
| 11,378,413 | 1 | true | 0 | 0 |
Keep an accumulator in each process, then at the end add up all those accumulators. You only need to store one value per process.
| 1 | 0 | 0 |
So I have a script that reads a file with 700,000 or so lines. For each line it returns a list of values it calculated from that line. Before I tried to use multiprocessing I was using a for loop and increment the values for each line to a global variable (because in the end I am after a sum). Unfortunately with the multiprocessing modules I cannot just add something to the global variable, because they are separate processes. Instead I had each process return the values I am after, and use Pool.map to create a huge list of the returned values. Then, I could loop through that list and get the sums I am after. This is very memory intensive. Any suggestions? I realize this is probably hard to read, so, I can clarify if needed. Thanks!
|
multiprocessing and memory issues
| 1.2 | 0 | 0 | 178 |
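A compact sketch of the per-worker accumulation idea: instead of returning one value per line, each worker reduces its chunk to a single partial sum. The file name and per-line computation are placeholders:

```python
from multiprocessing import Pool

def line_value(line):
    # Placeholder for the real per-line computation.
    return len(line)

def sum_chunk(lines):
    # Each worker returns one partial sum instead of a huge list of values.
    return sum(line_value(line) for line in lines)

def chunks(seq, size):
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

if __name__ == "__main__":
    with open("big_input.txt") as handle:  # hypothetical input file
        lines = handle.readlines()
    pool = Pool()
    partial_sums = pool.map(sum_chunk, list(chunks(lines, 10000)))
    pool.close()
    pool.join()
    print(sum(partial_sums))
```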
11,378,645 |
2012-07-07T21:08:00.000
| 1 | 0 | 0 | 0 |
python,html,pyramid,templating
| 11,378,667 | 5 | false | 1 | 0 |
You can replace the <script> and <iframe> tags with something else, or you can HTML-encode the strings so that they appear as text on the page but are not rendered by the browser itself.
Doing a string replace of all <'s and >'s with &lt; and &gt; should be more than sufficient to prevent the XSS you are seeing as well.
| 3 | 0 | 0 |
I am creating a website where you "post", and the form content is saved in a MySql database, and upon loading the page, is retrieved, similar to facebook. I construct all the posts and insert raw html into a template. The thing is, as I was testing, I noticed that I could write javascript or other HTML into the form and submit it, and upon reloading, the html or JS would be treated as source code, not a post. I figured that some simple encoding would do the trick, but using <form accept-charset="utf-8"> is not working. Is there an efficient way to prevent this type of security hole?
|
Preventing a security breach
| 0.039979 | 0 | 0 | 2,607 |
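A minimal illustration of escaping user input so that markup renders as text; on Python 3 the helper lives in the html module, while older code used cgi.escape:

```python
try:
    from html import escape   # Python 3
except ImportError:
    from cgi import escape    # Python 2 fallback

post = '<script>alert("xss")</script>Hello!'
safe = escape(post, quote=True)
print(safe)  # &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;Hello!
```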
11,378,645 |
2012-07-07T21:08:00.000
| 0 | 0 | 0 | 0 |
python,html,pyramid,templating
| 11,378,727 | 5 | false | 1 | 0 |
It's a little out there, but you could write a block of code to recognize certain key aspects of HTML/JavaScript code and act accordingly. Recognize a <script> block, for example, and either not allow that query to be passed or edit it so it's no longer valid HTML....
| 3 | 0 | 0 |
I am creating a website where you "post", and the form content is saved in a MySql database, and upon loading the page, is retrieved, similar to facebook. I construct all the posts and insert raw html into a template. The thing is, as I was testing, I noticed that I could write javascript or other HTML into the form and submit it, and upon reloading, the html or JS would be treated as source code, not a post. I figured that some simple encoding would do the trick, but using <form accept-charset="utf-8"> is not working. Is there an efficient way to prevent this type of security hole?
|
Preventing a security breach
| 0 | 0 | 0 | 2,607 |
11,378,645 |
2012-07-07T21:08:00.000
| 8 | 0 | 0 | 0 |
python,html,pyramid,templating
| 11,379,323 | 5 | false | 1 | 0 |
Well, for completeness of the picture I'd like to mention that there are two places where you can sanitize user input in Pyramid - on the way in, before saving data in the database, and on the way out, before rendering the data in the template. Arguably, there's nothing wrong with storing HTML/JavaScript in the database, it's not going to bite you - as long as you ensure that everything that is rendered in your template is properly escaped.
In fact, both Chameleon and Mako templating engines have HTML escaping turned on by default, so if you just use them "as usual", you'll never get user-entered HTML injected into your page - instead, it'll be rendered as text. Without this, sanitizing user input would be a daunting task as you'd need to check every single field in every single form user enters data into (i.e. not only "convenient" textarea widgets, but everything else too - user name, user email etc.).
So you must be doing something unusual (or using some other template library) to make Pyramid behave this way. If you provide more details on the templating library you're using and a code sample, we'll be able to find ways to fix it in a proper way.
| 3 | 0 | 0 |
I am creating a website where you "post", and the form content is saved in a MySql database, and upon loading the page, is retrieved, similar to facebook. I construct all the posts and insert raw html into a template. The thing is, as I was testing, I noticed that I could write javascript or other HTML into the form and submit it, and upon reloading, the html or JS would be treated as source code, not a post. I figured that some simple encoding would do the trick, but using <form accept-charset="utf-8"> is not working. Is there an efficient way to prevent this type of security hole?
|
Preventing a security breach
| 1 | 0 | 0 | 2,607 |
11,379,910 |
2012-07-08T01:03:00.000
| 0 | 0 | 1 | 0 |
python,matplotlib
| 68,635,240 | 2 | false | 0 | 0 |
Simply put, you can use the following commands to set the range of the ticks and change the size of the tick labels:
import matplotlib.pyplot as plt
Set the tick positions for the x-axis and y-axis:
plt.xticks(range(0, 24, 2))
plt.yticks(range(0, 24, 2))
Change the font size of the tick labels for the x-axis and y-axis:
plt.xticks(fontsize=12)
plt.yticks(fontsize=12)
| 1 | 14 | 1 |
While plotting using Matplotlib, I have found how to change the font size of the labels.
But, how can I change the size of the numbers in the scale?
For clarity, suppose you plot x^2 from (x0,y0) = 0,0 to (x1,y1) = (20,20).
The scale on the x-axis below may be something like
0 1 2 ... 20.
I want to change the font size of such scale of the x-axis.
|
How do I change the font size of the scale in matplotlib plots?
| 0 | 0 | 0 | 44,462 |
11,382,163 |
2012-07-08T09:43:00.000
| 0 | 1 | 0 | 0 |
php,python,symfony
| 15,361,568 | 3 | true | 1 | 0 |
I solved this by adding a usleep() call at the end of each iteration over a feed. This drastically lowered CPU and memory consumption. The process used to take about 20 minutes, and now only takes about 5!
| 2 | 0 | 0 |
I currently am developing a website in the Symfony2 framework, and I have written a Command that is run every 5 minutes that needs to read a tonne of RSS news feeds, get new items from them and put them into our database.
Now at the moment the command takes about 45 seconds to run, and during those 45 seconds it also takes about 50% to 90% of the CPU, even though I have already optimized it a lot.
So my question is, would it be a good idea to rewrite the same command in something else, for example python? Are the RSS/Atom libraries available for python faster and more optimized than the ones available for PHP?
Thanks in advance,
Jaap
|
Reading RSS feeds in php or python/something else?
| 1.2 | 0 | 0 | 269 |
11,382,163 |
2012-07-08T09:43:00.000
| 0 | 1 | 0 | 0 |
php,python,symfony
| 11,393,069 | 3 | false | 1 | 0 |
You could try checking the cache headers of the feeds before parsing them.
This way you can save the expensive parsing operations on probably a lot of feeds.
Store a last_updated date in your db for the source and then check against possible cache headers. There are several, so see what fits best or is served the most or check against all.
Headers could be:
Expires
Last-Modified
Cache-Control
Pragma
ETag
But beware: you have to trust your feed sources.
Not every feed provides such headers or provides them correctly.
But I am sure a lot of them do.
| 2 | 0 | 0 |
I currently am developing a website in the Symfony2 framework, and I have written a Command that is run every 5 minutes that needs to read a tonne of RSS news feeds, get new items from them and put them into our database.
Now at the moment the command takes about 45 seconds to run, and during those 45 seconds it also takes about 50% to 90% of the CPU, even though I have already optimized it a lot.
So my question is, would it be a good idea to rewrite the same command in something else, for example python? Are the RSS/Atom libraries available for python faster and more optimized than the ones available for PHP?
Thanks in advance,
Jaap
|
Reading RSS feeds in php or python/something else?
| 0 | 0 | 0 | 269 |
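In Python, the cache-header idea above maps naturally onto feedparser's support for conditional requests; this is a rough sketch with a placeholder URL, and how you persist the etag/modified values between runs is left to you:

```python
import feedparser

def fetch(url, etag=None, modified=None):
    # Pass the values saved from the previous run; feedparser then sends the
    # matching If-None-Match / If-Modified-Since headers for us.
    feed = feedparser.parse(url, etag=etag, modified=modified)
    if getattr(feed, "status", None) == 304:
        return [], etag, modified  # nothing new, skip the expensive parsing
    return (feed.entries,
            getattr(feed, "etag", None),
            getattr(feed, "modified", None))

entries, etag, modified = fetch("http://example.com/rss")  # placeholder URL
```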
11,382,734 |
2012-07-08T11:22:00.000
| 0 | 0 | 0 | 0 |
python,django
| 11,382,790 | 3 | false | 1 | 0 |
It depends on the app (how it was installed, how it was used, etc) but usually you can remove app from INSTALLED_APPS and then delete its tables in the database.
| 2 | 27 | 0 |
Initially I made 2 apps (app_a and app_b) in a single project in Django. Now I want to delete one (say app_a). How should I do so? Is removing the app name from INSTALLED_APPS in the settings file sufficient?
|
How to delete an app from a django project
| 0 | 0 | 0 | 47,184 |
11,382,734 |
2012-07-08T11:22:00.000
| 51 | 0 | 0 | 0 |
python,django
| 11,382,770 | 3 | true | 1 | 0 |
You need to remove or check the following:
Remove the app from INSTALLED_APPS.
Remove any database tables for the models in that app (see app_name_model_name in your database).
Check for any imports in other apps (it could be that they're importing code from that app).
Check templates if they are using any template tags of that app (which would produce errors if that app is no longer there).
Check your settings file to see if you're not using any code from that app (such as a context processor in your_app/context_processors.py, if it has such as file).
Check if any static content of the app is used in other apps.
Remove the app directory entirely.
When you've been following proper coding principles (i.e., each Django app is a self-contained part of the web application) then most situations above won't occur. But when other apps do use some parts of that app, you need to check that first as it may require refactoring before deleting the app.
| 2 | 27 | 0 |
Initially I made 2 apps (app_a and app_b) in a single project in Django. Now I want to delete one (say app_a). How should I do so? Is removing the app name from INSTALLED_APPS in the settings file sufficient?
|
How to delete an app from a django project
| 1.2 | 0 | 0 | 47,184 |
11,383,796 |
2012-07-08T14:14:00.000
| 0 | 0 | 0 | 0 |
javascript,python,xml,forms
| 11,383,885 | 2 | false | 1 | 0 |
Is your system a web application? If so, your JavaScript can post to the Python back end using Ajax. Then you can encode the form as a JSON string and send it to the back end; on the back end you can parse that string into a Python variable... JavaScript itself does not have access to your local files unless you run it locally (and even then it's really limited).
I suggest you try a web framework like Django. It's easy to learn in one day.
| 1 | 2 | 0 |
I am a novice Python programmer and I am having trouble finding a tool to help me get a form from JavaScript. I have written a small script in Python and also have a simple interface done in JavaScript. The user needs to select a few items in the browser and the JavaScript then returns a sendForm(). I would like to then recover the form with my Python script. I know I could generate an XML file with JavaScript and tell my Python script to wait until its creation and then catch it (with an os.path.exists(..)), but I would like to avoid this. I have seen that libraries such as cgi, mechanize, pyjs (Selenium?) exist to interface Python with HTML/JavaScript, but I can't find which one to use or if there would be another tool that would handle recovering the form easily.
More info: the python script generates an xml which is read by javascript. The user selects items in the javascript (with checkboxes) which are then tagged in the xml by javascript. The javascript then outputs the modified xml in a hidden field and it is this modified xml that I wish to retrieve with my python script after it is created.
Thank you all a lot for your help
|
getting javascript form content with python
| 0 | 0 | 1 | 453 |
11,387,702 |
2012-07-09T00:00:00.000
| 1 | 0 | 0 | 1 |
python,linux,shell
| 11,387,734 | 3 | false | 0 | 0 |
The process that starts your Python script (probably by forking) has a pwd (its working directory). The idea is to change the pwd of the process before it forks and executes Python.
You need to look over the manual of the process that executes the shell command and see how to set the pwd. (In a shell you would use cd or pushd.)
| 1 | 0 | 0 |
I have a python script that looks files up in a relative directory. For example: the python script is in /home/username/projectname/. I have a file that is being called within the python script that is in /home/username/projectname/subfolder.
If I run the script from the shell as python scriptname.py it runs perfectly fine.
However, I'm trying to run the script as a startup service. I'm setting it up in webmin, and I believe it's using a terminal command to call it. In the startup command, I'm doing something like this to call the script:
execute python home/username/projectname/scriptname.py. The script is starting up fine, but I get an error because it can't access the files in the relative directory.
I am guessing that there is a better way to call the Python program from within the startup command so that it's aware of the relative path.
|
How to change the working directory for a shell script (newbie here)
| 0.066568 | 0 | 0 | 2,233 |
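Alternatively (my suggestion, not part of the answer above), the script itself can switch to its own directory at startup so that relative paths keep working no matter where it is launched from:

```python
import os

# Change the working directory to the directory containing this script.
script_dir = os.path.dirname(os.path.abspath(__file__))
os.chdir(script_dir)

# Relative paths like this one now resolve against the script's own folder.
data_path = os.path.join("subfolder", "data.txt")  # hypothetical file
print(os.path.abspath(data_path))
```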
11,387,972 |
2012-07-09T00:54:00.000
| 0 | 0 | 0 | 0 |
python,django,jinja2,mako
| 11,388,002 | 2 | false | 1 | 0 |
We use Mako in our workplace, and I highly recommend not using it. It's great for your own views, but if you want to use ANY third party libraries that include templates, they simply won't work.
Other than that, the official documentation, and googling for specific problems you're having is the best way to go when using django templating. Unfortunately, I can't comment on jinja templating.
| 1 | 0 | 0 |
I'm new to Python and Django, but now have a pretty firm understanding of basic database and back end programming. However, I am finding it hard to learn the views and templates layers. I was wondering if anyone can suggest additional tutorials and resources, other than the official Django documentation.
I am also new to HTML, and am open to tutorials using Mako or Jinja2.
Thanks!
|
Django views and templates, including Jinja2 and Mako, tutorials and resources
| 0 | 0 | 0 | 589 |
11,388,032 |
2012-07-09T01:05:00.000
| 5 | 0 | 1 | 0 |
python,python-2.7
| 11,388,044 | 2 | false | 0 | 0 |
A word is in alphabetical order if (and only if) each adjacent pair of its letters is in alphabetical order.
| 1 | 2 | 0 |
I have a text file full of words. I need to go through the text file and count the number of words that are spelled in alphabetical order. I am struggling to think of a way to figure out if a word is in alphabetical order.
I have already looked around and have seen that it can be done with sorted. However, we haven't yet learned about sort in class so I can't use it. Help/tips much appreciated.
Do I need to do something like assigning each letter of the alphabet a number?
|
How to figure out if a word is spelled in alphabetical order in Python
| 0.462117 | 0 | 0 | 760 |
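A small sketch of the adjacent-pair idea from the answer, without using sorted():

```python
def is_in_alphabetical_order(word):
    # Compare each letter with the one after it; comparing strings compares alphabetically.
    word = word.lower()
    for i in range(len(word) - 1):
        if word[i] > word[i + 1]:
            return False
    return True

words = ["almost", "billowy", "python"]
print(sum(1 for w in words if is_in_alphabetical_order(w)))
```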
11,388,125 |
2012-07-09T01:22:00.000
| 0 | 1 | 0 | 0 |
python,emacs,python-mode
| 15,865,420 | 2 | false | 0 | 0 |
To load an already loaded library from a new place, write something like this in your Emacs init file:
(unload-feature...
(load FROM-NEW-PLACE...
| 1 | 0 | 0 |
I have emacs 24.1.1, which comes with GNU's python.el in byte-compiled form at emacs/24.1/lisp/progmodes.
I downloaded Fabian Gallina's python.el (note the same name) and placed it at emacs/site-lisp, which is part of emacs' load-path.
When I edit a Python file, it is Gallina's mode which is loaded, NOT GNU's. However, I have not put (require 'python) in my .emacs file, despite what Gallina's documentation suggests.
Why is this? Why does Gallina's python.el take precedence over GNU's? Why does it get loaded without (require 'python)?
|
Understanding which python mode is loaded by emacs / Aquamacs and why
| 0 | 0 | 0 | 828 |
11,389,331 |
2012-07-09T05:04:00.000
| 0 | 0 | 1 | 0 |
python,subprocess
| 11,389,343 | 2 | false | 0 | 0 |
Instead of process.communicate(), use process.stdout.read()
| 2 | 2 | 0 |
How can I read from output PIPE multiple times without using process.communicate() as communicate closes the PIPE after reading the output but I need to have sequential inputs and outputs.
For example,
1) process.stdin.write('input_1')
2) After that, I need to read the output PIPE (how can I accomplish that without using communicate as it closes the PIPE) and then give another input as
3) process.stdin.write('input_2')
4) And then read the output of step 3
But if I use process.communicate after giving the first input then it closes the output PIPE and I am unable to give the second input as the PIPE is closed.
Kindly help please.
|
Python Sub-process (Output PIPE)
| 0 | 0 | 0 | 176 |
11,389,331 |
2012-07-09T05:04:00.000
| 1 | 0 | 1 | 0 |
python,subprocess
| 11,389,339 | 2 | false | 0 | 0 |
flush() stdin, then read() stdout.
| 2 | 2 | 0 |
How can I read from output PIPE multiple times without using process.communicate() as communicate closes the PIPE after reading the output but I need to have sequential inputs and outputs.
For example,
1) process.stdin.write('input_1')
2) After that, I need to read the output PIPE (how can I accomplish that without using communicate as it closes the PIPE) and then give another input as
3) process.stdin.write('input_2')
4) And then read the output of step 3
But if I use process.communicate after giving the first input then it closes the output PIPE and I am unable to give the second input as the PIPE is closed.
Kindly help please.
|
Python Sub-process (Output PIPE)
| 0.099668 | 0 | 0 | 176 |
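A rough sketch of the write/flush/readline pattern suggested in these answers; it assumes the child answers each input line with exactly one output line and flushes its own output, otherwise the reads can block:

```python
import subprocess

# A tiny echo child used as a stand-in for the real program.
CHILD_CODE = (
    "import sys\n"
    "while True:\n"
    "    line = sys.stdin.readline()\n"
    "    if not line:\n"
    "        break\n"
    "    sys.stdout.write('got ' + line)\n"
    "    sys.stdout.flush()\n"
)

child = subprocess.Popen(["python", "-c", CHILD_CODE],
                         stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                         universal_newlines=True)

child.stdin.write("input_1\n")
child.stdin.flush()              # make sure the child actually sees it
print(child.stdout.readline())   # read exactly one line of output

child.stdin.write("input_2\n")
child.stdin.flush()
print(child.stdout.readline())

child.stdin.close()              # only now signal end-of-input
child.wait()
```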
11,391,424 |
2012-07-09T08:26:00.000
| 0 | 0 | 0 | 0 |
python,django,project
| 52,744,961 | 4 | false | 1 | 0 |
Yes.
Just delete the folder. If the project is installed in your settings.py, remove it.
| 4 | 17 | 0 |
Initially, I had a single project in Django; now I want to delete the last project and start a fresh new project with the same name as the last one. How should I do so? Can deleting the project folders be sufficient?
|
How to delete project in django
| 0 | 0 | 0 | 46,212 |
11,391,424 |
2012-07-09T08:26:00.000
| 0 | 0 | 0 | 0 |
python,django,project
| 32,165,971 | 4 | false | 1 | 0 |
As @Neverbackdown said, deleting the folder would be sufficient, but along with that you would also want to delete the database or its tables.
| 4 | 17 | 0 |
Initially, I had a single project in Django; now I want to delete the last project and start a fresh new project with the same name as the last one. How should I do so? Can deleting the project folders be sufficient?
|
How to delete project in django
| 0 | 0 | 0 | 46,212 |
11,391,424 |
2012-07-09T08:26:00.000
| 3 | 0 | 0 | 0 |
python,django,project
| 60,130,219 | 4 | false | 1 | 0 |
To delete the project you can delete the project folder.
But this method is good only if you use SQLite as a database.
If you use any other database like Postgresql with Django, you need to delete the database manually.
If you are on a VPS, simply go to the folder where your project folder resides and run the command rm -r projectfoldername.
This should delete the folder and its contents.
| 4 | 17 | 0 |
Initially, I had a single project in Django; now I want to delete the last project and start a fresh new project with the same name as the last one. How should I do so? Can deleting the project folders be sufficient?
|
How to delete project in django
| 0.148885 | 0 | 0 | 46,212 |
11,391,424 |
2012-07-09T08:26:00.000
| 28 | 0 | 0 | 0 |
python,django,project
| 11,391,460 | 4 | true | 1 | 0 |
Deleting the project folder is sufficient; make changes in your Apache server configuration too, if you have one.
| 4 | 17 | 0 |
Initially, I had a single project in Django; now I want to delete the last project and start a fresh new project with the same name as the last one. How should I do so? Can deleting the project folders be sufficient?
|
How to delete project in django
| 1.2 | 0 | 0 | 46,212 |
11,392,302 |
2012-07-09T09:26:00.000
| 0 | 0 | 0 | 0 |
python,xmpp,ejabberd
| 11,941,846 | 2 | false | 0 | 0 |
It is possible for a component to subscribe to a user's presence exactly the same way a user does. Also it is possible for the user to subscribe to a component's presence. You just have to follow the usual pattern, i.e. the component/user sends a <presence/> of type subscribe which the user/component can accept by sending a <presence/> of type subscribed.
You can also have the user just send a presence to the component directly.
There is no need to write custom hooks or create proxy users.
| 2 | 1 | 0 |
I have an ejabberd server at jabber.domain.com, with an xmpp component written in python (using sleekxmpp) at presence.domain.com.
I wanted the component to get a notification each time a client changed his presence from available to unavailable and vice-versa.
The clients themselves don't have any contacts.
Currently, I have set up my clients to send their available presence stanzas to [email protected], and I do get their online/offline presence notifications. But I feel this isn't the right approach.
I was hoping the clients wouldn't be aware of the component at presence.domain.com, and they would just connect to jabber.domain.com and the component should somehow get notified by the server about the clients presence.
Is there a way to do that?
Is my component setup correct? or should I think about using an xmpp plugin/module/etc..
Thanks
|
Getting ejabberd to notify an external module on client presence change
| 0 | 0 | 1 | 1,392 |
11,392,302 |
2012-07-09T09:26:00.000
| 5 | 0 | 0 | 0 |
python,xmpp,ejabberd
| 11,926,839 | 2 | true | 0 | 0 |
It is not difficult to write a custom ejabberd module for this. It will need to register to presence change hooks in ejabberd, and on each presence packet route a notification towards your external component.
There is a pair of hooks 'set_presence_hook' and 'unset_presence_hook' that your module can register to, to be informed when the users starts/end a session.
If you need to track other presence statuses, there is also a hook 'c2s_update_presence' that fires on any presence packets sent by your users.
Another possibility, without using a custom module, is to use shared rosters. Add [email protected] to the shared rosters of all your users, but in this case they will see this item reflected on their roster.
| 2 | 1 | 0 |
I have an ejabberd server at jabber.domain.com, with an xmpp component written in python (using sleekxmpp) at presence.domain.com.
I wanted the component to get a notification each time a client changed his presence from available to unavailable and vice-versa.
The clients themselves don't have any contacts.
Currently, I have set up my clients to send their available presence stanzas to [email protected], and I do get their online/offline presence notifications. But I feel this isn't the right approach.
I was hoping the clients wouldn't be aware of the component at presence.domain.com, and they would just connect to jabber.domain.com and the component should somehow get notified by the server about the clients presence.
Is there a way to do that?
Is my component setup correct? or should I think about using an xmpp plugin/module/etc..
Thanks
|
Getting ejabberd to notify an external module on client presence change
| 1.2 | 0 | 1 | 1,392 |
11,393,269 |
2012-07-09T10:30:00.000
| 6 | 0 | 0 | 0 |
python,odbc,pyodbc
| 11,393,468 | 1 | true | 0 | 0 |
Assuming you are using unixODBC, here are some possibilities:
rebuild unixODBC from scratch and set --sysconfdir
export ODBCSYSINI env var pointing to a directory and unixODBC will look here for odbcinst.ini and odbc.ini system dsns
export ODBCINSTINI and point it at your odbcinst.ini file
BTW, I doubt pyodbc looks anything up in the odbcinst.ini file but unixODBC will. There is a list of ODBC Driver manager APIs which can be used to examine ODBC ini files.
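For example, a sketch of the ODBCSYSINI approach (the paths, driver name and connection string below are made up; depending on when the driver manager reads the environment you may prefer to export these in the shell before starting Python):
import os
os.environ['ODBCSYSINI'] = '/usr/share/libmyodbc'   # directory containing odbcinst.ini
os.environ['ODBCINSTINI'] = 'odbcinst.ini'          # file name inside that directory

import pyodbc
conn = pyodbc.connect('DRIVER={MySQL};SERVER=localhost;DATABASE=test;UID=me;PWD=secret')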
| 1 | 5 | 0 |
I am trying to query ODBC-compliant databases using pyodbc in Ubuntu. For that, I have installed the driver (say mysql-odbc-driver). After installation the odbcinst.ini file with the configurations gets created in the location /usr/share/libmyodbc/odbcinst.ini
When I try to connect to the database using my pyodbc connection code, I get a driver-not-found error message.
Now when I copy the contents of the file to /etc/odbcinst.ini, it works!
This means pyodbc searches for the driver information in file /etc/odbcinst.ini.
How can I change the location where it searches for the odbcinst.ini file with the driver information?
Thanks.
|
setting the location where pyodbc searches for odbcinst.ini file
| 1.2 | 1 | 0 | 7,504 |
11,393,770 |
2012-07-09T11:01:00.000
| 0 | 0 | 1 | 1 |
python,events,io,response,multiprocessing
| 11,394,262 | 1 | false | 0 | 0 |
You can't. Only one process can have access to one port at one time and you cannot respond directly without accessing the port.
But you don't need that. What you need is a proxy! You can add a thread to your app which will listen on a different port. Then you fire your image process, and when that process finishes its work you can send the result to the port. Then your thread will read it and send the response.
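A very rough sketch of that hand-off (the port number and the fake "image processing" are placeholders; in a real app the listener thread would write the bytes into the pending HTTP response instead of printing):
import socket
import threading
import multiprocessing

RESULT_PORT = 9999   # made-up port the worker reports back to

def worker(image_bytes):
    processed = image_bytes[::-1]                      # stand-in for the real PIL work
    s = socket.create_connection(('127.0.0.1', RESULT_PORT))
    s.sendall(processed)
    s.close()

def listener(srv):
    conn, _ = srv.accept()
    chunks = []
    while True:
        data = conn.recv(65536)
        if not data:
            break
        chunks.append(data)
    conn.close()
    print('got %d bytes back from the worker' % len(b''.join(chunks)))

if __name__ == '__main__':
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(('127.0.0.1', RESULT_PORT))
    srv.listen(1)
    t = threading.Thread(target=listener, args=(srv,))
    t.start()
    multiprocessing.Process(target=worker, args=(b'fake image data',)).start()
    t.join()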
| 1 | 1 | 0 |
Can I "move" response object somehow from one process to another?
The first process is a non-blocking server which does some other IO. It needs to be done in a non-blocking environment like Tornado or Twisted or something like this.
Another process (actually, a pool of "worker" processes) is needed to process images with PIL. I can't do it in threads because of GIL. However, either the worker needs to get a file-handle of response object to write the result to, or it should return the result back to the first process, and since the result can be pretty huge (~1 mb), it does not seem like a good idea. (It's probably going to be a separate pool of processes, not a fork for every request - the latter one seems like a bad strategy)
So, can I somehow allow the worker process to write to the response directly?
|
Receive HTTP request in one Python process, reply from another
| 0 | 0 | 0 | 152 |
11,396,216 |
2012-07-09T13:39:00.000
| 2 | 0 | 0 | 0 |
django,google-app-engine,google-cloud-datastore,python-2.7
| 11,398,730 | 1 | true | 1 | 0 |
The big limitation is that the datastore doesn't do JOINs, so anything that uses JOINs, like many-to-many relations, won't work.
Any packages/middleware that uses many-to-many won't work, but others will.
For example, the sessions/auth middleware will work. But if you use permissions with auth, it won't. If you use the admin pages for auth, they use permissions, so you'll have some trouble with those too.
i18n works.
forms work.
nonrel does not work with ndb.
I don't know what you mean by "until your project gets bigger". django-nonrel won't help with the size of your app.
In my opinion there's two big reasons to use nonrel:
You're non-committal about App Engine. Nonrel potentially allows you to move to MongoDB as a backend.
You want to use django packages for "free". For example, I used tastypie for a REST API, and django-social-auth to get OAuth for FB/Twitter logins with very little effort. (On the flip side, with 1.7.0, they've addressed the REST API with endpoints)
| 1 | 1 | 0 |
I understand that full django can be used out of the box with CloudSQL. But I'm interested in using HRD. I'd like to learn more about what percentage of django can be used with nonrel. Does middleware work? How about other features of the framework like i18n, forms, etc. Also does nonrel work with NDB?
The background here is that I've even using webapp2 and before that webapp and find them great until your project gets bigger. So for this project I'm interested to reevaluate other options.
|
What are the limitations of using Django nonrel with Google App Engine?
| 1.2 | 0 | 0 | 341 |
11,396,717 |
2012-07-09T14:08:00.000
| 0 | 0 | 1 | 0 |
python
| 11,397,877 | 2 | true | 0 | 0 |
There is no such thing -- it would require the whole Python language to be running in Whitespace - so that your Python program could use functions, lists, and other built-ins, not to mention the standard library.
Such a thing could be achievable via PyPy - you could write a Whitespace backend -- an effort I doubt could find funding or support.
However, a "one man hack" to get there might be changing the Python byte-code ops to use only whitespace characters. It would not be "Whitespace compatible" - but you would have the same effect in the end: compiled Python files - those we know as "pyc"s - could be composed of whitespace only for the code (but not for the data and meta-data markup), by hacking a couple of files in the Python source tree.
Changing the opcodes themselves is easy - they are in the Lib/opcode.py file in a Python Source tree - but you would have to change the interpreter to work with multi-byte opcodes (as currently all opcodes are one byte+ parameters).
| 1 | 4 | 0 |
I am having trouble finding one because "Whitespace" is a common word, but I'm curious if this can be done. Thanks.
|
Is there a Python to Whitespace language converter?
| 1.2 | 0 | 0 | 1,726 |
11,399,489 |
2012-07-09T16:43:00.000
| 1 | 0 | 0 | 0 |
python,events,wxpython,wxwidgets
| 11,401,507 | 3 | true | 0 | 1 |
If I have three buttons that do radically different things, then I want different event handlers for them because I find that easier to debug. If they're all print buttons with slightly different formatting options applied, then I'll hook them all up to the same handler and use event.GetEventObject() to figure out which one made the call. The few times I've had multiple events handled by the same handler was when I had a toolbar button and a menu item both call the same thing. It has more to do with program flow and ease of debugging and that just comes with practice.
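A small sketch of the shared-handler case (the button labels are invented):
import wx

class Frame(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self, None, title="print buttons")
        panel = wx.Panel(self)
        sizer = wx.BoxSizer(wx.HORIZONTAL)
        for label in ("Print A4", "Print Letter", "Print Draft"):
            btn = wx.Button(panel, label=label)
            btn.Bind(wx.EVT_BUTTON, self.on_print)   # one handler for all three
            sizer.Add(btn, 0, wx.ALL, 5)
        panel.SetSizer(sizer)

    def on_print(self, event):
        button = event.GetEventObject()
        print("printing with options for: %s" % button.GetLabel())

app = wx.App(False)
Frame().Show()
app.MainLoop()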
| 3 | 0 | 0 |
In wxpython, is it better to handle events by creating a separate function for each event handler (say a separate function for every single button click) or to create one large button_handler, and then determine the button clicked from there?
Basically, I am wondering if it is more resource intensive to have many different events being watched for each separate thing, or just one big event that will figure out which one was clicked when it was fired.
|
event handling in wxpython
| 1.2 | 0 | 0 | 365 |
11,399,489 |
2012-07-09T16:43:00.000
| 2 | 0 | 0 | 0 |
python,events,wxpython,wxwidgets
| 11,399,885 | 3 | false | 0 | 1 |
Resource intensity is not your issue here, but you would definitely want to use one big event loop for this. Due to the Global Interpreter Lock, many event handlers in python have annoying ways of dealing with the event queue, and in some event handlers it may even be impossible to check the event without removing it from the stack (VPython for example), and so in these cases you may well run into strange and hard to track errors if you use multiple checks within your code. If you use one large event this won't happen, or if it does, it will be much easier to track down.
Ravenspoint is correct in that the resource intensity of either approach is trivial, and based on resource intensity alone you shouldn't worry about it, but a single event loop is significantly easier to maintain.
| 3 | 0 | 0 |
In wxpython, is it better to handle events by creating a separate function for each event handler (say a separate function for every single button click) or to create one large button_handler, and then determine the button clicked from there?
Basically, I am wondering if it is more resource intensive to have many different events being watched for each separate thing, or just one big event that will figure out which one was clicked when it was fired.
|
event handling in wxpython
| 0.132549 | 0 | 0 | 365 |
11,399,489 |
2012-07-09T16:43:00.000
| 2 | 0 | 0 | 0 |
python,events,wxpython,wxwidgets
| 11,399,632 | 3 | false | 0 | 1 |
Don't worry about it. The resources required for either scheme will be trivial, especially in a python script. Focus on designing your code in the way that makes it easiest to understand and maintain.
| 3 | 0 | 0 |
In wxpython, is it better to handle events by creating a separate function for each event handler (say a separate function for every single button click) or to create one large button_handler, and then determine the button clicked from there?
Basically, I am wondering if it is more resource intensive to have many different events being watched for each separate thing, or just one big event that will figure out which one was clicked when it was fired.
|
event handling in wxpython
| 0.132549 | 0 | 0 | 365 |
11,400,163 |
2012-07-09T17:35:00.000
| 3 | 0 | 1 | 0 |
python,list,performance
| 11,400,186 | 6 | false | 0 | 0 |
It doesn't iterate in either case. list[-1] is essentially identical to list[len(list) - 1]. A list is backed by an array, so lookups are constant time.
| 2 | 14 | 0 |
Quick question about the built in python list object. Say you have a list with the numbers 0 - 99. You are writing a program that takes the last item in the list and uses it for some other purpose. Is it more efficient to use list[-1] than to use list[99]? In other words, does python iterate through the whole list in either case?
Thanks for your help.
|
Python List Indexing Efficiency
| 0.099668 | 0 | 0 | 11,738 |
11,400,163 |
2012-07-09T17:35:00.000
| 21 | 0 | 1 | 0 |
python,list,performance
| 11,400,197 | 6 | true | 0 | 0 |
Python does not iterate through lists to find a particular index. Lists are arrays (of pointers to elements) in contiguous memory and so locating the desired element is always a simple multiplication and addition. If anything, list[-1] will be slightly slower because Python needs to add the negative index to the length to get the real index. (I doubt it is noticeably slower, however, because all that's done in C anyway.)
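It's easy to check with timeit that both index forms cost the same constant-time lookup:
import timeit

setup = 'lst = list(range(100))'
print(timeit.timeit('lst[-1]', setup=setup))
print(timeit.timeit('lst[99]', setup=setup))
# both run in roughly the same time, and neither gets slower for longer lists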
| 2 | 14 | 0 |
Quick question about the built in python list object. Say you have a list with the numbers 0 - 99. You are writing a program that takes the last item in the list and uses it for some other purpose. Is it more efficient to use list[-1] than to use list[99]? In other words, does python iterate through the whole list in either case?
Thanks for your help.
|
Python List Indexing Efficiency
| 1.2 | 0 | 0 | 11,738 |
11,403,615 |
2012-07-09T21:39:00.000
| 0 | 0 | 1 | 0 |
python,datetime
| 11,403,686 | 3 | false | 0 | 0 |
If you just want the current system time, you could try the time module. time.time() would give you a unix timestamp of the current system time.
Edit: In response to the OP's comment
Use time.strftime() and time.localtime() to interface with datetime objects and your DB
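For example (both calls below use the OS's local-time rules, including DST):
import time
import datetime

print(time.strftime('%Y-%m-%d %H:%M:%S', time.localtime()))
print(datetime.datetime.fromtimestamp(time.time()))   # same wall-clock time via datetime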
| 1 | 0 | 0 |
My code uses datetime.now() to get current date and time. The problem is that time is 1 hour behind due to daylight saving.
How can I get "real" current time (same I see in system's clock)
Thanks
|
Python datetime and daylight saving
| 0 | 0 | 0 | 1,604 |
11,404,165 |
2012-07-09T22:32:00.000
| 7 | 0 | 1 | 1 |
python,startup
| 11,404,208 | 2 | false | 0 | 0 |
Python has a special script that is run on startup. On my platform it is located at /usr/lib/python2.5/site-packages/sitecustomize.py IIRC. So, you could either put init.py in that directory alongside a sitecustomize.py script that imports it, or just paste the content of init.py in the sitecustomize.py.
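As a sketch, such a sitecustomize.py could run whatever init script an environment variable points to (PYTHON_INIT_SCRIPT is a made-up name, not something Python defines):
# sitecustomize.py -- executed automatically on interpreter startup
import os

_init = os.environ.get('PYTHON_INIT_SCRIPT')     # hypothetical variable you define yourself
if _init and os.path.exists(_init):
    execfile(_init)   # Python 2; on Python 3 use exec(open(_init).read())
Note this is mainly useful if init.py only sets up the environment (sys.path, env vars, etc.); names it defines won't automatically appear in work.py's namespace.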
| 1 | 11 | 0 |
I would like to execute a script work.py in Python, after executing some initialization script init.py.
If I were looking for an interactive session, executing python -i init.py or setting PYTHONSTARTUP=/path/to/init.py would do the trick, but I am looking to execute another script.
Since this is a generic case which occurs often (init.py sets environment and so is the same all of the time), I would highly prefer not referencing init.py from work.py. How could this be done? Would anything change if I needed this from a script instead of from the prompt?
Thank you very much.
|
Python startup script
| 1 | 0 | 0 | 28,023 |
11,404,994 |
2012-07-10T00:11:00.000
| 0 | 0 | 0 | 0 |
python,wxpython,wxwidgets
| 11,411,866 | 2 | false | 0 | 1 |
Embedding one GUI application inside another is not a simple thing. Applications are written to provide their own main frame, for example. You could try to position Notepad to a particular place on the screen instead.
If you're really talking about Notepad, then you have a different course of action. Notepad is nothing more than a text control with some code to save and load the contents to a file.
| 1 | 0 | 0 |
I am looking for a way to embed an .exe into a frame. (MDI)
I am not sure how this can be done.
I am using wxpython 2.9 and there is nothing online about this (until now).
|
Embed .exe in wxpython
| 0 | 0 | 0 | 389 |
11,405,549 |
2012-07-10T01:44:00.000
| 5 | 0 | 1 | 1 |
python,windows,python-2.7,pycrypto
| 56,252,476 | 19 | false | 0 | 0 |
If you are on Windows and struggling with installing PyCrypto, just use:
pip install pycryptodome
It works like a miracle and it will make your life much easier than trying to do a lot of configurations and tweaks.
| 4 | 152 | 0 |
I've read every other google source and SO thread, with nothing working.
Python 2.7.3 32bit installed on Windows 7 64bit. Downloading, extracting, and then trying to install PyCrypto results in "Unable to find vcvarsall.bat".
So I install MinGW and tack that on the install line as the compiler of choice. But then I get the error "RuntimeError: chmod error".
How in the world do I get around this? I've tried using pip, which gives the same result. I found a prebuilt PyCrypto 2.3 binary and installed that, but it's nowhere to be found on the system (not working).
Any ideas?
|
How do I install PyCrypto on Windows?
| 0.052583 | 0 | 0 | 355,117 |
11,405,549 |
2012-07-10T01:44:00.000
| 3 | 0 | 1 | 1 |
python,windows,python-2.7,pycrypto
| 45,761,612 | 19 | false | 0 | 0 |
My answer might not be related to the problem mentioned here, but I had the same problem with Python 3.4, where Crypto.Cipher wasn't a valid import. So I tried installing PyCrypto and ran into problems.
After some research I found that with 3.4 you should use pycryptodome.
I installed pycryptodome using PyCharm and I was good.
from Crypto.Cipher import AES
| 4 | 152 | 0 |
I've read every other google source and SO thread, with nothing working.
Python 2.7.3 32bit installed on Windows 7 64bit. Downloading, extracting, and then trying to install PyCrypto results in "Unable to find vcvarsall.bat".
So I install MinGW and tack that on the install line as the compiler of choice. But then I get the error "RuntimeError: chmod error".
How in the world do I get around this? I've tried using pip, which gives the same result. I found a prebuilt PyCrypto 2.3 binary and installed that, but it's nowhere to be found on the system (not working).
Any ideas?
|
How do I install PyCrypto on Windows?
| 0.031568 | 0 | 0 | 355,117 |
11,405,549 |
2012-07-10T01:44:00.000
| 2 | 0 | 1 | 1 |
python,windows,python-2.7,pycrypto
| 11,405,593 | 19 | false | 0 | 0 |
This probably isn't the optimal solution but you might download and install the free Visual C++ Express package from MS. This will give you the C++ compiler you need to compile the PyCrypto code.
| 4 | 152 | 0 |
I've read every other google source and SO thread, with nothing working.
Python 2.7.3 32bit installed on Windows 7 64bit. Downloading, extracting, and then trying to install PyCrypto results in "Unable to find vcvarsall.bat".
So I install MinGW and tack that on the install line as the compiler of choice. But then I get the error "RuntimeError: chmod error".
How in the world do I get around this? I've tried using pip, which gives the same result. I found a prebuilt PyCrypto 2.3 binary and installed that, but it's nowhere to be found on the system (not working).
Any ideas?
|
How do I install PyCrypto on Windows?
| 0.02105 | 0 | 0 | 355,117 |
11,405,549 |
2012-07-10T01:44:00.000
| 0 | 0 | 1 | 1 |
python,windows,python-2.7,pycrypto
| 51,176,072 | 19 | false | 0 | 0 |
I had Pycharm for python.
Go to pycharm -> file -> setting -> project interpreter
Click on +
Search for "pycrypto" and install the package
Note: If you don't have "Microsoft Visual C++ Compiler for Python 2.7" installed, it will prompt for installation; once installation finishes, try the above steps again and it should work fine.
| 4 | 152 | 0 |
I've read every other google source and SO thread, with nothing working.
Python 2.7.3 32bit installed on Windows 7 64bit. Downloading, extracting, and then trying to install PyCrypto results in "Unable to find vcvarsall.bat".
So I install MinGW and tack that on the install line as the compiler of choice. But then I get the error "RuntimeError: chmod error".
How in the world do I get around this? I've tried using pip, which gives the same result. I found a prebuilt PyCrypto 2.3 binary and installed that, but it's nowhere to be found on the system (not working).
Any ideas?
|
How do I install PyCrypto on Windows?
| 0 | 0 | 0 | 355,117 |
11,406,085 |
2012-07-10T03:11:00.000
| 0 | 0 | 1 | 0 |
python,memory,data-mining
| 11,406,222 | 4 | false | 0 | 0 |
First thought - switch to 64-bit python and increase your computer's virtual memory settings ;-)
Second thought - once you have a large dictionary, you can sort on key and write it to file. Once all your data has been written, you can then iterate through all the files simultaneously, comparing and writing out the final data as you go.
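A sketch of that second idea using only the standard library (one JSON record per line, values merged by extending the lists, matching the "orange": [...] structure in the question):
import heapq
import itertools
import json

def dump_partial(d, path):
    # write one sorted partial dictionary to disk, one [key, values] record per line
    with open(path, 'w') as f:
        for key in sorted(d):
            f.write(json.dumps([key, d[key]]) + '\n')

def read_dump(path):
    with open(path) as f:
        for line in f:
            key, values = json.loads(line)
            yield key, values

def merge_dumps(paths, out_path):
    # stream-merge the sorted dump files; equal keys arrive next to each other
    streams = [read_dump(p) for p in paths]
    with open(out_path, 'w') as out:
        for key, group in itertools.groupby(heapq.merge(*streams), key=lambda kv: kv[0]):
            merged = []
            for _, values in group:
                merged.extend(values)            # combine the value lists for this key
            out.write(json.dumps([key, merged]) + '\n')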
| 2 | 2 | 0 |
I am working on a research project in big data mining. I have written the code currently to organize the data I have into a dictionary. However, the amount of data is so huge that while forming the dictionary, my computer runs out of memory. I need to periodically write my dictionary out to disk and create multiple dictionaries this way. I then need to compare the resulting multiple dictionaries, update the keys and values accordingly and store the whole thing in one big dictionary on disk. Any idea how I can do this in python? I need an api that can quickly write a dict to disk and then compare 2 dicts and update keys. I can actually write the code to compare 2 dicts, that's not a problem but I need to do it without running out of memory.
My dict looks like this:
"orange" : ["It is a fruit","It is very tasty",...]
|
Integrating multiple dictionaries in python (big data)
| 0 | 0 | 0 | 442 |
11,406,085 |
2012-07-10T03:11:00.000
| 0 | 0 | 1 | 0 |
python,memory,data-mining
| 11,406,103 | 4 | false | 0 | 0 |
You should use a database such as PostgreSQL.
| 2 | 2 | 0 |
I am working on a research project in big data mining. I have written the code currently to organize the data I have into a dictionary. However, the amount of data is so huge that while forming the dictionary, my computer runs out of memory. I need to periodically write my dictionary out to disk and create multiple dictionaries this way. I then need to compare the resulting multiple dictionaries, update the keys and values accordingly and store the whole thing in one big dictionary on disk. Any idea how I can do this in python? I need an api that can quickly write a dict to disk and then compare 2 dicts and update keys. I can actually write the code to compare 2 dicts, that's not a problem but I need to do it without running out of memory.
My dict looks like this:
"orange" : ["It is a fruit","It is very tasty",...]
|
Integrating multiple dictionaries in python (big data)
| 0 | 0 | 0 | 442 |
11,407,544 |
2012-07-10T06:07:00.000
| 1 | 1 | 1 | 0 |
java,python,lisp
| 11,407,713 | 2 | false | 0 | 0 |
Even if you find an "import" statement of whatever kind, there is no guarantee that the code will actually use it.
In Java you can import a namespace, but you can also use the fully qualified name of the class without any import statement:
javax.swing.JButton but = new javax.swing.JButton("MyButton");
And last but not least, all of them support some kind of symbolic programming. You may use a plain string to get code loaded or executed:
Object x = Class.forName("javax.swing."+compName);
return x.toString();
| 1 | 0 | 0 |
What is the degree of source code dependency that can be resolved by examining at the source code for the following programming languages -- Java, Python and Lisp.
For example, can I say for sure by looking at a collection of Python files that examining all the "import" statements in every file are the only dependencies (source dependencies)?
In Lisp, I'm aware of the (load "filename") command that allows including function defined in other files.
|
Listing source dependencies
| 0.099668 | 0 | 0 | 87 |
11,411,182 |
2012-07-10T10:17:00.000
| 0 | 0 | 0 | 0 |
python,webdriver
| 11,412,106 | 1 | false | 0 | 0 |
You can use the get_attribute(name) method on a webelement to retrieve attributes.
| 1 | 0 | 0 |
Can anyone please tell me how to find the x-offset and y-offset default value of a Slider in a webpage using python for selenium webdriver.
Thanks in Advance !
|
How to find x and y-offset for slider in python for a web-application
| 0 | 0 | 1 | 636 |
11,411,428 |
2012-07-10T10:31:00.000
| 0 | 0 | 0 | 1 |
python,linux
| 11,411,496 | 11 | false | 0 | 0 |
You can check the modification time of the file and see if it has not been modified for a period of time. Since a file can be opened in update mode and be modified any time, you cannot be 100% sure that it will never be modified.
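For example, a small helper (the function name and the 60-second threshold are arbitrary):
import os
import time

def looks_finished(path, quiet_seconds=60):
    # treat the file as "done" only if it hasn't been modified recently
    return time.time() - os.path.getmtime(path) > quiet_seconds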
| 5 | 2 | 0 |
Is there a quick way (i.e. that minimizes time-to-answer) to find out if a file is open on Linux?
Let's say I have a process that writes a ton of files in a directory and another process which reads those files once they are finished writing, can the latter process know if a file is still being written to by the former process?
A Python based solution would be ideal, if possible.
Note: I understand I could be using a FIFO / Queue based solution but I am looking for something else.
|
Quick way to know if a file is open on Linux?
| 0 | 0 | 0 | 1,556 |
11,411,428 |
2012-07-10T10:31:00.000
| 1 | 0 | 0 | 1 |
python,linux
| 11,411,514 | 11 | false | 0 | 0 |
If you can change the 'first' process logic, the easy solution would be to write data to a temp file and rename the file once all the data is written.
| 5 | 2 | 0 |
Is there a quick way (i.e. that minimizes time-to-answer) to find out if a file is open on Linux?
Let's say I have a process that writes a ton of files in a directory and another process which reads those files once they are finished writing, can the latter process know if a file is still being written to by the former process?
A Python based solution would be ideal, if possible.
Note: I understand I could be using a FIFO / Queue based solution but I am looking for something else.
|
Quick way to know if a file is open on Linux?
| 0.01818 | 0 | 0 | 1,556 |
11,411,428 |
2012-07-10T10:31:00.000
| 2 | 0 | 0 | 1 |
python,linux
| 11,411,550 | 11 | false | 0 | 0 |
lsof | grep filename immediately comes to mind.
| 5 | 2 | 0 |
Is there a quick way (i.e. that minimizes time-to-answer) to find out if a file is open on Linux?
Let's say I have a process that writes a ton of files in a directory and another process which reads those files once they are finished writing, can the latter process know if a file is still being written to by the former process?
A Python based solution would be ideal, if possible.
Note: I understand I could be using a FIFO / Queue based solution but I am looking for something else.
|
Quick way to know if a file is open on Linux?
| 0.036348 | 0 | 0 | 1,556 |
11,411,428 |
2012-07-10T10:31:00.000
| 0 | 0 | 0 | 1 |
python,linux
| 11,411,884 | 11 | false | 0 | 0 |
You can use the fcntl module; afaik, it has an fcntl function that is identical to the C function, so something like fcntl(fd, F_GETFL) could be useful, but I'm not sure. Can you check whether the target file is locked for writing by opening it in write mode?
| 5 | 2 | 0 |
Is there a quick way (i.e. that minimizes time-to-answer) to find out if a file is open on Linux?
Let's say I have a process that writes a ton of files in a directory and another process which reads those files once they are finished writing, can the latter process know if a file is still being written to by the former process?
A Python based solution would be ideal, if possible.
Note: I understand I could be using a FIFO / Queue based solution but I am looking for something else.
|
Quick way to know if a file is open on Linux?
| 0 | 0 | 0 | 1,556 |
11,411,428 |
2012-07-10T10:31:00.000
| 10 | 0 | 0 | 1 |
python,linux
| 11,411,493 | 11 | true | 0 | 0 |
You can of course use the INOTIFY feature of Linux, but it is safer to avoid the situation: let the writing process create the files (say data.tmp) which the reading process will definitely ignore. When the writer finishes, it should just rename the file for the reader (into say .dat). The rename operation guarantees that there can be no misunderstandings.
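The writer side of that convention is only a few lines (the reader simply ignores anything not ending in .dat):
import os

def write_atomically(path, data):        # path should end in .dat
    tmp = path + '.tmp'
    with open(tmp, 'wb') as f:
        f.write(data)
    os.rename(tmp, path)                 # atomic on the same filesystem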
| 5 | 2 | 0 |
Is there a quick way (i.e. that minimizes time-to-answer) to find out if a file is open on Linux?
Let's say I have a process that writes a ton of files in a directory and another process which reads those files once they are finished writing, can the latter process know if a file is still being written to by the former process?
A Python based solution would be ideal, if possible.
Note: I understand I could be using a FIFO / Queue based solution but I am looking for something else.
|
Quick way to know if a file is open on Linux?
| 1.2 | 0 | 0 | 1,556 |
11,416,158 |
2012-07-10T14:59:00.000
| 0 | 0 | 0 | 0 |
python,ruby,heroku
| 11,418,565 | 2 | false | 1 | 0 |
If you are running locally, you can also just run the flask app from the command line using python, skipping Foreman altogether. This is how I run locally on my Windows 7 machines.
| 1 | 0 | 0 |
Working through Heroku's guide to getting a Python app to run on Heroku, one first creates a hello world program ostensibly using Python and Flask and runs it locally using Foreman. I get an 'unsupported signal SIGHUP' error from Ruby/Gems/foreman/engine. I am running Win7. Anybody else hit this problem or have any ideas? Thanks.
|
python on heroku. Get SIGHUP error
| 0 | 0 | 0 | 159 |
11,419,922 |
2012-07-10T18:50:00.000
| 2 | 0 | 0 | 0 |
python,pylons
| 11,420,889 | 1 | false | 1 | 0 |
Try looking at request.headers['accept-language'], or indeed the entire request.headers object. I suspect your browser is not providing those headers.
Also, take a look at the browser request in wireshark, and the client request on the server.
| 1 | 0 | 0 |
In pylons project when I do request.accept_language.best_matches(), it is returning me Null. I have set 2 languages in browser (en-us and es-ar) by going to Preferences-Content- Languages in firefox.
How can I get the languages specified in the browser?
repr(request.accept_language) gives <NilAccept: <class 'webob.acceptparse.Accept'>>
|
request.accept_language is always null in python
| 0.379949 | 0 | 1 | 519 |
11,420,053 |
2012-07-10T18:59:00.000
| 2 | 0 | 0 | 0 |
python,c
| 11,420,313 | 4 | false | 0 | 1 |
There's also numpy which can be reasonably fast when dealing with "array operations" (sometimes called vector operations, but I find that term confusing with SIMD terminology). You'll probably need numpy if you decide to go the cython route, so if the algorithm isn't too complicated, you might want to see if it is good enough with numpy by itself first.
Note that there are two different routes you can take here. You can use subprocess which basically issues system calls to some other program that you have written. This is slow because you need to start a new process and send the data into the process and then read the data back from the process. In other words, the data gets replicated multiple times for each call. The second route is calling a C function from python. Since Cpython (the reference and most common python implementation) is written in C, you can create C extensions. They're basically compiled libraries that adhere to a certain API. Then Cpython can load those libraries and use the functions inside, passing pointers to the data. In this way, the data isn't actually replicated -- you're working with the same block of memory in python that you're using in C. The downside here is that the C API is a little complex. That's where 3rd party extensions and existing libraries come in (numpy, cython, ctypes, etc). They all have different ways of pushing computations into C functions without you having to worry about the C API. Numpy removes loops so you can add, subtract, multiply arrays quickly (among MANY other things). Cython translates python code to C which you can then compile and import -- typically to gain speed here you need to provide additional hints which allow cython to optimize the generated code; ctypes is a little fragile since you have to re-specify your C function prototype, but otherwise it's pretty easy as long as you can compile your library into a shared object ... The list could go on.
Also note that if you're not using numpy, you might want to check out pypy. It claims to run your python code faster than Cpython.
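As a taste of how simple the ctypes route can be (this works on Linux/OSX where find_library('m') locates the C math library; no compilation step is needed on your side):
import ctypes
import ctypes.util

libm = ctypes.CDLL(ctypes.util.find_library('m'))
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double
print(libm.cos(0.0))   # 1.0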
| 1 | 2 | 0 |
I'm new to programming and was wondering how I can have a python program execute and communicate with a c program. I am doing a mathematical computation in python, and was wondering if I could write up the main computation in C, that way the computation runs faster. I've been reading about "calling c functions from python", "including C or C++ code directly in your Python code", and "using c libraries from python". Is this the same thing? I want a python program to execute a c program and receive the results.
What does it mean to "call C library functions" from python? Would it allow the python script to use c libraries or allow the script to execute code within a c compiler?
thanks
|
executing c program from a python program
| 0.099668 | 0 | 0 | 634 |
11,421,448 |
2012-07-10T20:30:00.000
| 1 | 0 | 1 | 0 |
python,pyinstaller
| 11,438,184 | 1 | true | 0 | 0 |
Ok, it looks like "import os" in one of my modules was causing this issue. I had no luck getting it to successfully use the win32api module, but since this was only being used to set the program name, I just commented this out and this particular issue is resolved. Thanks Luke for your help!
| 1 | 0 | 0 |
I'm trying to compile a Python project (using Python 2.7 on Windows XP) into an EXE using PyInstaller with the default options. When I try to run the EXE, I get the message:
PyInstaller - ImportError: No module named win32api
I added the win32api path to the windows PATH environment variable (I do have Python Win32 Extensions installed) but it's not working. I'm pretty new to this and a little overwhelmed by all the options etc, and I really have no idea where to start (or what information would be useful to solving this problem.) I assume this is some little thing that I'm missing, but I haven't found anyone having precisely this problem online and any help would be greatly appreciated.
|
PyInstaller - ImportError: No module named win32api
| 1.2 | 0 | 0 | 3,215 |
11,421,476 |
2012-07-10T20:32:00.000
| 0 | 0 | 0 | 0 |
python,eclipse,networkx
| 11,421,571 | 2 | false | 0 | 0 |
I think there are two options:
Rebuild your interpreter
Add it to your python path by appending the location of networkx to sys.path in python
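The second option in code (the site-packages path below is just an example -- use wherever networkx actually lives on your machine):
import sys
sys.path.append(r'C:\Python27\Lib\site-packages')   # example location only
import networkx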
| 2 | 3 | 0 |
I'm using PyDev in eclipse with Python 2.7 on windows 7. I Installed networkx and it is properly running within Python shell but in eclipse it is showing error as it is unable to locate networkx can anyone tell me how to remove this error?
|
Integrate networkx in eclipse on windows
| 0 | 0 | 1 | 1,086 |
11,421,476 |
2012-07-10T20:32:00.000
| 4 | 0 | 0 | 0 |
python,eclipse,networkx
| 11,421,549 | 2 | true | 0 | 0 |
you need to rebuild your interpreter
go to project > properties > pyDev-Interpreter/Grammar
click the "click here to configure"
remove the existing interpreter
hit "Auto config" button and follow the prompts
kind of a pain but the only way I've found to autodiscover newly installed packages
| 2 | 3 | 0 |
I'm using PyDev in eclipse with Python 2.7 on windows 7. I Installed networkx and it is properly running within Python shell but in eclipse it is showing error as it is unable to locate networkx can anyone tell me how to remove this error?
|
Integrate networkx in eclipse on windows
| 1.2 | 0 | 1 | 1,086 |
11,422,766 |
2012-07-10T22:07:00.000
| 17 | 0 | 1 | 0 |
python,pycharm
| 11,422,878 | 2 | true | 0 | 0 |
In PyCharm you can select a function and press Alt+Shift+F7 to run a usage search. It's also available under "Edit → Find → Find Usages". It looks like it's more intelligent than a text search.
Using static analysis to find where a function is called from is difficult in general in Python because it uses dynamic binding and has a lot of introspection so it's very easy to get false positives or miss usages. In the case of module-level functions I think a good solution is to always use module.function to call the function and never do a from module import function. That way you can do a text search for 'module.function'. Python style guides generally recommend that you import functions etc. in this way so I think this is generally accepted good practice.
Finding method calls is of course much harder. One of the things I like about developing in Java and C# is being able to find all usages of a method by static analysis.
| 2 | 18 | 0 |
Is there a way for PyCharm to show where a given Python function is called from?
I currently rely on simply searching for the function name across the project and this often works fine, but if a function name is vague there are a lot of incorrect hits. I'm wondering if I'm missing a feature somewhere, e.g. perhaps the search results could be further narrowed down to only show where modules import the module I'm searching from?
|
Find where Python function is called in PyCharm
| 1.2 | 0 | 0 | 14,156 |
11,422,766 |
2012-07-10T22:07:00.000
| 9 | 0 | 1 | 0 |
python,pycharm
| 37,657,314 | 2 | false | 0 | 0 |
Press the Ctrl key and simultaneously hover your mouse over the function header. The function name should get highlighted. Click on the function name to get a list of all instances where the function is called.
If you press the Ctrl key and simultaneously hover your mouse over a function call, then the function name will be highlighted and clicking on it will take you to the function definition.
| 2 | 18 | 0 |
Is there a way for PyCharm to show where a given Python function is called from?
I currently rely on simply searching for the function name across the project and this often works fine, but if a function name is vague there are a lot of incorrect hits. I'm wondering if I'm missing a feature somewhere, e.g. perhaps the search results could be further narrowed down to only show where modules import the module I'm searching from?
|
Find where Python function is called in PyCharm
| 1 | 0 | 0 | 14,156 |
11,423,412 |
2012-07-10T23:21:00.000
| 3 | 0 | 1 | 0 |
python
| 11,423,635 | 4 | false | 0 | 0 |
Can't think of any reason I wouldn't just write these as two totally separate scripts, since they don't demand any sharing of state.
| 1 | 1 | 0 |
I'm new to Python (and programming in general), so please be patient.
I am doing some lab equipment automation, and I am looking for a way to toggle power on one piece of equipment while taking data on another piece of equipment. I want these events to be asynchronous to each other, i.e. I want the power to toggle randomly during the data-taking process.
I've checked out the following:
-time.sleep--I was actually able to use this successfully on one setup, because the power supply was so slow to respond--I told it to shut off, then slept a random amount of time, then started taking data on the other equipment. This relies on the power supply always reacting much more slowly than the other piece of equipment, which will generally not be the case.
-multiprocessing/threading--I've been reading about this on SO and python.org, but I'm still not clear on whether this will accomplish what I want. I tried to test it but I'm finding it difficult to code, and I don't want to invest more time in it if it's nowhere near to what I want.
So, in a nutshell: Will multiprocessing do what I want? Is there any other way to do this?
|
Asynchronous events in Python
| 0.148885 | 0 | 0 | 270 |
11,423,764 |
2012-07-11T00:05:00.000
| 0 | 0 | 1 | 0 |
python,pybrain
| 11,423,815 | 2 | false | 0 | 0 |
Nevermind. Apparently I have to specify, for one reason or another, that I want to import pybrain.tools.shortcuts, I can't just mass import pybrain and expect shortcuts to show up. Thanks for reading this......I guess
| 1 | 1 | 0 |
So, I'll make it simple.
Pybrain's API shows that they have, as an example, a function called buildNetwork(). It says it's in the pybrain.tools.shortcut.buildNetwork, if that makes sense?
The problem is, .shortcut doesn't exist. I'm very new to using APIs but it appears to me that this function doesn't even exist.
Help?
|
PyBrain cannot access function listed in API/Index
| 0 | 0 | 0 | 83 |
11,427,168 |
2012-07-11T06:57:00.000
| 1 | 0 | 0 | 0 |
javascript,python
| 11,427,225 | 1 | false | 1 | 1 |
It takes time for the image to get from your phone to your server to your desktop client. There's nothing you can do to change that.
The best you can hope to do is to benchmark your entire application, figure out where are your bottlenecks, and hope it's not the network connection itself.
| 1 | 0 | 0 |
I am capturing a mobile snapshot (Android) through monkeyrunner and, with the help of some Python script (i.e. for the socket connection), I made it display in an HTML page. But there is some time delay between the image I see in my browser and the one on the Android device. How can I synchronise these so that the mobile screen snapshot is visible at the same time in the browser?
|
how to darw the image faster in canvas
| 0.197375 | 0 | 0 | 41 |
11,429,862 |
2012-07-11T09:44:00.000
| 8 | 0 | 0 | 0 |
python,django
| 11,429,918 | 1 | true | 1 | 0 |
A Django package has the general structure of a Django app (models.py, views.py, et cetera) and it can have additional settings to define in your settings.py file. Using the Django package makes it easier to integrate the functionality into your Django web application rather than simply calling a Python library.
Usually the Python library provides all the functionality and a Django package provides additional functionality to use it (such as useful template tags, settings or context processors). You'll need to install both as the Django package won't work without the library. But this can vary so you'll need to look into the provided functionalities by the Django package.
| 1 | 5 | 0 |
I'm new to django and I've been browsing around the djangopackages site. I'm wondering what is the difference between those "django" packages, and python libraries that are not django packages.
So for example sendgrid has a django package and also a few regular python libraries. If I want to use the sendgrid wrapper from a django app, what benefits do i get by using the django package rather than the other python libraries that are available and more frequently maintained?
|
What's the difference between a django package and a python library?
| 1.2 | 0 | 0 | 1,364 |
11,430,276 |
2012-07-11T10:07:00.000
| 0 | 0 | 1 | 0 |
python,database,sqlite,concurrency,locking
| 20,908,479 | 4 | false | 0 | 0 |
Why not read in all the items from the database and put them in a queue? You can have a worker thread get an item, process it and move on to the next one.
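A minimal sketch of that approach (the table and column names are invented, and process() stands in for your real procedure):
import sqlite3
import threading
import Queue   # 'queue' on Python 3

q = Queue.Queue()

conn = sqlite3.connect('items.db')
for row in conn.execute('SELECT id, payload FROM items'):   # hypothetical schema
    q.put(row)
conn.close()

def process(item):
    print(item)          # your real processing goes here

def worker():
    while True:
        try:
            item = q.get_nowait()
        except Queue.Empty:
            return       # nothing left to do
        process(item)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
Each item is handed to exactly one worker, so no two workers ever process the same row.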
| 2 | 0 | 0 |
There is a list of data that I want to deal with. However I need to process the data with multiple instances to increase efficiency.
Each time each instance shall take out one item, delete it from the list and process it with some procedures.
First I tried to store the list in a sqlite database, but sqlite allows multiple read-locks which means multiple instances might get the same item from the database.
Is there any way that makes each instance will get an unique item to process?
I could use other data structure (other database or just file) if needed.
By the way, is there a way to check whether a DELETE operation is successful or not, after executing cursor.execute(delete_query)?
|
Concurrency on sqlite database using python
| 0 | 1 | 0 | 484 |
11,430,276 |
2012-07-11T10:07:00.000
| 0 | 0 | 1 | 0 |
python,database,sqlite,concurrency,locking
| 11,430,479 | 4 | true | 0 | 0 |
How about another field in db as a flag (e.g. PROCESSING, UNPROCESSED, PROCESSED)?
| 2 | 0 | 0 |
There is a list of data that I want to deal with. However I need to process the data with multiple instances to increase efficiency.
Each time each instance shall take out one item, delete it from the list and process it with some procedures.
First I tried to store the list in a sqlite database, but sqlite allows multiple read-locks which means multiple instances might get the same item from the database.
Is there any way that makes each instance will get an unique item to process?
I could use other data structure (other database or just file) if needed.
By the way, is there a way to check whether a DELETE operation is successful or not, after executing cursor.execute(delete_query)?
|
Concurrency on sqlite database using python
| 1.2 | 1 | 0 | 484 |
11,431,593 |
2012-07-11T11:23:00.000
| 0 | 0 | 0 | 0 |
python,wxpython,wxwidgets
| 11,434,766 | 1 | true | 0 | 0 |
I assume you want to rename tree elements (leaves), right? Well when you instantiate the TreeCtrl, give it the following style flags: wx.TR_DEFAULT_STYLE | wx.TR_EDIT_LABELS
Now you should be able to click or double-click any of the items and rename them. See the wxPython demo for more cool tricks.
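For example, a minimal sketch (the frame and item names are just placeholders):
import wx

class Frame(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self, None, title="editable tree")
        self.tree = wx.TreeCtrl(self, style=wx.TR_DEFAULT_STYLE | wx.TR_EDIT_LABELS)
        root = self.tree.AddRoot("root")
        for name in ("alpha", "beta", "gamma"):
            self.tree.AppendItem(root, name)
        self.tree.ExpandAll()
        self.tree.Bind(wx.EVT_TREE_END_LABEL_EDIT, self.on_renamed)

    def on_renamed(self, event):
        print("renamed to: %s" % event.GetLabel())

app = wx.App(False)
Frame().Show()
app.MainLoop()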
| 1 | 0 | 0 |
I am trying to make a rename feature for a TreeCtrl, where a TextCtrl goes over the element and I am able to rename it.
I can't find this feature in the API but I am sure that there has to be some way to do it.
|
Renaming Item in a TreeCtrl
| 1.2 | 0 | 1 | 214 |
11,431,679 |
2012-07-11T11:28:00.000
| 7 | 1 | 0 | 0 |
python,mysql,ruby
| 11,431,795 | 2 | true | 0 | 0 |
Just pick the language you feel most comfortable with. It shouldn't make a noticeable difference.
After writing the application, you can search for bottlenecks and optimize that
| 1 | 0 | 0 |
I'm writing a script to be run as a cron job and I was wondering, is there any difference in speed between Ruby MySQL and Python MySQL in terms of speed/efficiency? Would I be better off just using PHP for this task?
The script will get data from a mysql database with 20+ fields and store them in another table every X amount of minutes. Not much processing of the data will be necessary.
|
Python MySQL vs Ruby MySQL
| 1.2 | 1 | 0 | 254 |
11,434,026 |
2012-07-11T13:44:00.000
| 7 | 0 | 1 | 0 |
python,tuples,namedtuple
| 11,434,151 | 2 | true | 0 | 0 |
A namedtuple instance x can be converted to a tuple using tuple(x), but you shouldn't need to do so. If some code only accepts tuples, but not namedtuples, I consider that code broken. (There may be special cases that require such a behaviour, but I can't think of any right now.)
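For example (Point is just an illustration):
from collections import namedtuple

Point = namedtuple('Point', ['x', 'y'])
p = Point(1, 2)
print(isinstance(p, tuple))   # True -- so most APIs should accept p as-is
print(tuple(p))               # (1, 2), if you really need a plain tuple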
| 1 | 2 | 0 |
I'm using Python, and some method requires a tuple as its argument. Although an instance created by namedtuple is a tuple subclass, it seems that I still need to transform it to a tuple.
Is there any way to transform the tuple subclass made by namedtuple to tuple quickly? Thanks!
|
How can I transform a tuple subclass made by namedtuple to tuple itself?
| 1.2 | 0 | 0 | 160 |
11,436,484 |
2012-07-11T15:48:00.000
| 6 | 1 | 1 | 0 |
python,compilation,source-code-protection
| 11,438,657 | 1 | true | 0 | 0 |
As mgilson mentioned in the comments, Cython is probably your best bet here. Generally speaking, you can use it to convert your pure-python source code into compiled extension modules. While the primary intent of Cython is for enhanced performance, there shouldn't be any barriers for using it for source-code protection. The extension modules it outputs aren't limited in any special ways so anything you were able to do from within Python before, you should be able to do from the Cython-generated extension modules. Cython does have a few known limitations in terms of supported features that may need to be worked around but, overall, it looks well suited to serving your purpose.
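A minimal setup.py for that workflow might look like this (the module name is hypothetical; check the Cython docs for the details of your version):
from distutils.core import setup
from Cython.Build import cythonize

setup(ext_modules=cythonize('secret_stuff.py'))
# build with:  python setup.py build_ext --inplace
# afterwards the rest of your code can simply do:  import secret_stuff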
| 1 | 6 | 0 |
I would like to protect my python source code, I know there is no absolute protection possible, but still there should be some means to make it difficult enough to do or quite time-consuming. I want
1) to remove all documentation, comments automatically and
2) to systematically change the names of variables and functions within a module (obfuscation?), so that I can keep an external interface (with meaningful names) while the internal names of variables and functions are impossible to pronounce.
Perhaps the best solution, which would make 1) and 2) redundant, is the following:
3) Is there a simple way to compile python modules to .so libraries, with a clear interface and which can be used by other python modules? It would be similar as building C and C++ extensions with distutils, except that the source code is python itself rather than C/C++. The idea is to organize all "secret code" into modules, compile them, and then import them in the rest of the python code which is not considered secret.
Again, I am aware that everything can be reverse-engineered, I think in pragmatic terms, most of the average developers would not be able to reverse-engineer code and even if they would be able, ethical/legal/timing reasons would make them think twice if they really want to work on this.
|
How to protect and compile python source code into a .so library?
| 1.2 | 0 | 0 | 11,732 |
11,436,837 |
2012-07-11T16:09:00.000
| 0 | 1 | 0 | 0 |
python,mechanize
| 11,458,357 | 1 | true | 1 | 0 |
Problem solved:
If you are getting similar output, check the page headers, as the response is probably gzipped; after instantiating the browser, call set_handle_gzip(True)
| 1 | 0 | 0 |
I am looking for help debugging Mechanize. When I navigate to a page and attempt to call .read(), I get non-unicode result about 1 out of every 5 or so attempts. The non-unicode result looks like the following:
úRW!¤cêLÒ0T¸²ÖþF\<äs +€²Ü@9‚ÈøMq1;=®}ÿ½8¹WP[ëæåñ±øþûÚc!ˆÍzòØåŸ¿þUüþf>àSÕ‹‚~é÷bƪ}Ãp#',®ˆâËýÊæÚ³õµÊZñMyô‘;–sä„IWÍÞ·mwx¨|ýHåÀ½A ºÒòÀö QNqÒ4O{Žë+óZu"úÒ¸½vº³ÔP”º‘cÇ—Êâ#<31{HiºF4N¨ÂÀ"Û´>•ŠÜÅò€U±§¶8ÑWEú(ƒ‘cÀWÄ~‡ ‡—¯J$ÁvQìfj²a$DdªÐŠÐ5[ü(4` ŒÛ"–<‹eñƒ(‚¹=[U¤#íQhÉÔô6(î$M ²-Õ£›Œndû8mØïõ7;"¨zÒ€F°¬@Xˆ€*õ䊈xŸÊ%úÅò= kôc¡¢ØyœÑy³í>ËÜ-¥m+ßê¸ïmì Ycãa®-Ø•†ê¸îmq«x} i¥GEŽj]ÏëUÆËGS°êõ½AxwÕµêúR¶à|ôO¹ýüà:S¸S‡®U%}•Cî3ãg~QÛó´Ó]ïn[FwuCm6žš[«J®™›Ý-£A˜Ö€sµ1khí"”/\S~u£C7²Í#wÑ»@ç@sô,ÆQèÊôó®.ä(å*æ‡#÷»'õµ{à˜Õ„SÒ%@ˆtL †¸±¹åI{„Õv#³ëŠUG…s‡•·Aíí»8¡Ò|Ö«à4€¼dˆ¸—áÐåqA‘ï $Õ[NØÖ£o\s£Z_¾^ Äóo~?<Ú¿Ùÿ]À@@bÈ%¶Á$¦G oË·ò}[µ+>ðµ°Íöе?R1úQ–&PãýT¥¢ði+|óf«ú,â,ÛQ㤚ӢÏìÙT£šÚA䡳£
I have tried the normal Mechanize parser (mechanize.Browser()) as well as the commonly suggested alternative (factory=mechanize.RobustFactory()).
Any suggestions for next steps?
|
python mechanize odd .read() output
| 1.2 | 0 | 0 | 155 |
11,437,271 |
2012-07-11T16:35:00.000
| 2 | 0 | 1 | 1 |
python,windows,python-2.7,cross-compiling
| 11,437,392 | 2 | false | 0 | 0 |
Get Virtualbox, install Ubuntu in it, and build it "natively" in for Linux. These things work really well, and cross compilation is just asking for trouble. You're going to eventually need Linux to answer the support questions you'll get from these customers anyway! :(
| 1 | 3 | 0 |
I have a program I've written in Python 2.7 on Windows, and I've been using py2exe with total success to make it into an exe (and associated files). However, a reasonable number of people who I want to use it are on Linux/OSX, and while some have been able to make the Windows version work with Wine, others have not been so successful. I've looked thoroughly into py2installer, py2app, freeze and others, but if I understand correctly (I am new to Python and very new to compiling) you need to run them on the system you want to compile them for, i.e. you can only compile for Linux on Linux and OSX on OSX. I don't want to distribute just the raw files because I want the source code to be obfuscated as it is inside a .exe, amd obviously not everyone has Python.
So, my question is: is there any way to compile for OSX or Linux, in Python, while on a Windows machine? And if not, what do you think the best alternative solution might be?
|
Cross-compiling in Python in Windows TO Linux/OSX
| 0.197375 | 0 | 0 | 2,026 |
11,437,350 |
2012-07-11T16:39:00.000
| 0 | 0 | 0 | 0 |
python,label,border,gtk3
| 11,618,562 | 2 | false | 0 | 1 |
The term: "lot of labels" is relative. Are we talking about 14 or 84? If it's closer to 84 you should probably be using Glade to create the interface then set the frames x-pad and y-pad properties. With CSS for gtk3 you'll have to pack any label in a frame for margin, padding or any of their variants (margin-top, padding-bottom) to work.
A GtkLabel is a child of GtkMisc. GtkMisc has the function gtk_misc_set_padding(). You can use that function on your labels without packing them into frames. But you'd have to set it for each label.
I don't know Python only C and some C++ but you could do something like this:
1) create a enum for all your labels starting with label0. (namedtuple in python I think)
The reason I say to use an enum is that it's going to get very difficult later to keep track of all the labels without them.
2) create an array of pointers to GtkWidget, one for each label.
3) create your labels using the pointers to GtkWidget with the enum as the index.
(here's where the enum is really needed)
4) create a for loop with gtk_misc_set_padding() in it. Use the GtkWidget array as the parameter in gtk_misc_set_padding(). Loop through each label, setting its padding.
I could provide an example, but it probably wouldn't be useful if you don't know C. If you'd still like it let me know.
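For what it's worth, a rough Python/PyGObject version of the loop idea (untested sketch; it assumes Gtk.Label still inherits set_padding() from Gtk.Misc in your GTK3 version):
import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk

win = Gtk.Window(title='padded labels')
box = Gtk.Box(orientation=Gtk.Orientation.VERTICAL)
win.add(box)

for i in range(20):                   # however many labels you have
    label = Gtk.Label(label='label %d' % i)
    label.set_padding(10, 5)          # x-pad, y-pad, set once per label in the loop
    box.pack_start(label, False, False, 0)

win.connect('delete-event', Gtk.main_quit)
win.show_all()
Gtk.main()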
| 1 | 0 | 0 |
By easier I mean - can I define a style or something and apply it to all labels in my program?
I have a lot of labels in it and I don't want to type so much.
I heard about "Pango Style" but can I apply it to all label widgets at once?
|
Is there a easier way of creating a bordered Label in python Gtk3 than putting it in Frame?
| 0 | 0 | 0 | 929 |
11,439,607 |
2012-07-11T19:01:00.000
| 0 | 0 | 1 | 1 |
python,windows-7,geany
| 45,036,410 | 4 | false | 0 | 0 |
I faced this issue. Added python to PATH, working fine on cmd. But Geany wasn't able to execute. Turns out, while saving the file, I had not entered .py as extension. Once I did it, worked fine.
| 2 | 6 | 0 |
I'm not a great coder, in fact I'm just trying to learn, but I can't get Geany to regonise Python in my system (Windows 7) when I try to execute the program. When I click Execute, it opens a command prompt saying:
'python' is not recognized as an internal or external command, operable program or batch file
How can I fix this?
|
Geany unable to execute Python
| 0 | 0 | 0 | 15,525 |
11,439,607 |
2012-07-11T19:01:00.000
| 2 | 0 | 1 | 1 |
python,windows-7,geany
| 13,714,776 | 4 | false | 0 | 0 |
I had the same problem and found that setting the path as described in the other prior posts was necessary but not sufficient. In my case, I had saved my script to the "geany" directory. It turns out that there was a permissions problem with the geany editor trying to create a temporary file in the geany folder. As soon as I saved my script to another folder the permissions error went away.
| 2 | 6 | 0 |
I'm not a great coder, in fact I'm just trying to learn, but I can't get Geany to regonise Python in my system (Windows 7) when I try to execute the program. When I click Execute, it opens a command prompt saying:
'python' is not recognized as an internal or external command, operable program or batch file
How can I fix this?
|
Geany unable to execute Python
| 0.099668 | 0 | 0 | 15,525 |