Q_Id (int64) | CreationDate (string) | Users Score (int64) | Other (int64) | Python Basics and Environment (int64) | System Administration and DevOps (int64) | Tags (string) | A_Id (int64) | AnswerCount (int64) | is_accepted (bool) | Web Development (int64) | GUI and Desktop Applications (int64) | Answer (string) | Available Count (int64) | Q_Score (int64) | Data Science and Machine Learning (int64) | Question (string) | Title (string) | Score (float64) | Database and SQL (int64) | Networking and APIs (int64) | ViewCount (int64) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
23,369,108 | 2014-04-29T15:34:00.000 | 0 | 0 | 0 | 1 | python,firefox-addon,firefox-addon-sdk | 24,506,049 | 4 | false | 0 | 0 | Just install Python 2.6 instead of Python 2.7. When I tried with Python 2.7 I got the same error. Then I removed Python 2.7 and installed Python 2.6, and everything worked fine. | 1 | 0 | 0 | I have downloaded the Add-on SDK and executed activate.
Python 2.7 is installed. PATH variable is configured properly and py files can run from anywhere.
However, when I try to execute cfx (from the Far command prompt, using the full path), I get the message: 'python' is not recognized as an internal or external command. How do I make it run? | Running cfx program for Firefox Add-on SDK on Windows | 0 | 0 | 0 | 497 |
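The "'python' is not recognized" error means the shell that launches cfx cannot resolve `python` on its PATH. A minimal, hedged diagnostic (the function name `find_on_path` is my own, not part of the SDK) that checks the same lookup from Python itself:

```python
import shutil
import sys

def find_on_path(name="python"):
    """Return the full path the shell would resolve for `name`, or None.

    On Windows, "'python' is not recognized..." means this lookup fails
    for the shell that launched cfx.
    """
    return shutil.which(name)

# The running interpreter's own location is always available, PATH or not:
print(sys.executable)
print(find_on_path())
```

If `find_on_path()` prints None in the environment where cfx fails, the fix is to add the Python install directory to that shell's PATH rather than to change Python versions.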
23,370,556 | 2014-04-29T16:43:00.000 | -1 | 0 | 1 | 0 | python,audio,pyaudio | 35,827,692 | 2 | false | 1 | 0 | Check whether the top 8 bits are 0. If they are not, you have a true 24-bit recording. | 1 | 5 | 0 | I need to record 24-bit audio (because it's the archival standard for audio digitization). However, the wave library seems to only go up to 16-bit.
It looks like pyaudio can work with 24-bit audio but every example I've found shows pyaudio using the wave library, meaning it has to save 16-bit.
Is it possible to record and playback 24-bit audio with pyaudio? | Recording 24-bit audio with pyaudio | -0.099668 | 0 | 0 | 2,262 |
23,374,854 | 2014-04-29T20:42:00.000 | 0 | 1 | 1 | 0 | c#,python,ipc | 62,980,335 | 2 | false | 0 | 0 | Based on what you have said, you can connect to the Python process and catch its standard output text. Easy, fast and reliable! | 1 | 5 | 0 | I have some C# code that needs to call a Python script several thousand times, each time passing a string, and then expecting a float back. The Python script can be run using ANY version of Python, so I cannot use IronPython. It's been recommended that I use IPC named pipes. I have no experience with this, and am having trouble figuring out how to do this between C# and Python. Is this a simple process, or am I looking at a decent amount of work? Is this the best way to solve my problem? | Simplest way to communicate between Python and C# using IPC? | 0 | 0 | 0 | 9,440 |
23,376,103 | 2014-04-29T22:01:00.000 | 0 | 0 | 0 | 0 | python,mysql,python-3.x,mysql-python,python-3.4 | 49,185,013 | 11 | false | 0 | 0 | For Fedora and Python 3, use: dnf install mysql-connector-python3 | 1 | 53 | 0 | I have installed Python version 3.4.0 and I would like to do a project with a MySQL database. I downloaded and tried installing MySQLdb, but it wasn't successful for this version of Python.
Any suggestions on how I could fix this problem and install it properly? | Python 3.4.0 with MySQL database | 0 | 1 | 0 | 143,430 |
23,376,816 | 2014-04-29T23:01:00.000 | 3 | 0 | 0 | 0 | python,amazon-s3,zip | 23,376,918 | 5 | false | 0 | 0 | If speed is a concern, a good approach would be to choose an EC2 instance fairly close to your S3 bucket (in the same region) and use that instance to unzip/process your zipped files.
This will allow for a latency reduction and allow you to process them fairly efficiently. You can remove each extracted file after finishing your work.
Note: This will only work if you are fine using EC2 instances. | 1 | 11 | 0 | I have zip files uploaded to S3. I'd like to download them for processing. I don't need to permanently store them, but I need to temporarily process them. How would I go about doing this? | Python S3 download zip file | 0.119427 | 0 | 0 | 24,492 |
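Whether the unzipping happens on EC2 or locally, the temporary-processing part of the question can be done entirely in memory with the standard library. A minimal sketch — the S3 download call itself is omitted, and `payload_zip` here is a stand-in for the bytes you would fetch from the bucket:

```python
import io
import zipfile

# Stand-in for the bytes downloaded from S3 (the S3 client call is omitted):
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("data.txt", "hello from s3")
payload_zip = buf.getvalue()

# Unzip and process entirely in memory -- nothing is stored permanently.
with zipfile.ZipFile(io.BytesIO(payload_zip)) as z:
    for name in z.namelist():
        content = z.read(name).decode()
        print(name, "->", content)
```

Because nothing touches disk, there is also nothing to clean up afterwards, which matches the "temporarily process" requirement.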
23,378,498 | 2014-04-30T02:11:00.000 | 0 | 0 | 0 | 0 | python,django,django-urls | 23,379,239 | 2 | true | 1 | 0 | For 1), if you don't want to do a separate route for every single route on your website, you'll need middleware that implements process_exception and outputs an HttpResponseRedirect.
For 2 and 3, those are rules that are presumably limited to specific routes, so you can do them without middleware.
2 might be doable in urls.py with a RedirectView, but since the relevant bit is a query string argument, I would probably make that an actual view function that looks at the query string. Putting a ? character in a url regex seems strange because it will interfere with any other use of query strings on that endpoint, among other reasons.
For 3, that's a straightforward RedirectView and you can do it entirely in urls.py. | 1 | 1 | 0 | Need a little help with my urls.py and other stuff.
How can I replicate this in Django?
1) When a user requests a non-existent page, it will redirect one directory level up. Ex: example.com/somegoodpage/somebadpage should be redirected to example.com/somegoodpage.
2) When a user requests the page example.com/foo/bar/?name=John, it will rewrite the url to example.com/foo/bar/name=John
3) When a user requests the page example.com/foo/bar/John, it will change the url to example.com/foo/bar/name=John.
Any help is greatly appreciated. Thank You. | Python Django: urls.py questions | 1.2 | 0 | 0 | 173 |
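The "redirect one level up" logic in rule 1 boils down to trimming the last path segment; a middleware's process_exception handler could feed the request path through a helper like this before issuing the HttpResponseRedirect. This is a framework-free sketch (the function name is my own, not Django API):

```python
def one_level_up(path):
    """Map '/somegoodpage/somebadpage' to '/somegoodpage' (rule 1 above)."""
    parts = [p for p in path.split("/") if p]
    return "/" + "/".join(parts[:-1])

print(one_level_up("/somegoodpage/somebadpage"))  # -> /somegoodpage
print(one_level_up("/somebadpage"))               # -> /
```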
23,379,033 | 2014-04-30T03:13:00.000 | 0 | 0 | 0 | 1 | python,macos,shell,.app | 23,379,138 | 2 | false | 0 | 0 | If you're using AppleScript, just save it as a bundle: when saving, click the drop-down that says Script and change it to Bundle. After that, click on the bundle icon in AppleScript and drag the script to the folder you want. To run it, put your run command in and drag the script that you placed in the bundle's folders into the directory slot of the run command. I cannot give you anything exact because I am not on my Mac, but this is the best I know. | 1 | 1 | 0 | I have a script (Shell, chmod-ed to 755. Python is in the script, meaning not run from an outside .py file) that is executable. It works when I run it. How can I make a .app that executes said script on runtime? I have a simple .app that has this structure: APPNAME.App>Contents>MacOS>script
This does not run. Is there any way I can piggyback a script onto another application, The Powder Toy, for example? I'm not new to OS X, I just don't have root privileges and can't install Xcode.
Remember, I can't install anything from source or use setup scripts, effectively ruling out py2app as an option.
EDIT:
This answer is courtesy of mklement0. Automator lets you choose the environment to run your script, type it in, and bundle it into a .app, removing the need for a shell script. | Make a Mac .App run a script on running of said .App | 0 | 0 | 0 | 930 |
23,379,033 | 2014-04-30T03:13:00.000 | 1 | 0 | 0 | 1 | python,macos,shell,.app | 23,379,360 | 2 | true | 0 | 0 | Run Automator and create a new Application project.
Add a Run Shell Script action.
In the Shell: list, select the interpreter of choice; /usr/bin/python in this case.
Paste the contents of your Python script into the action and save the *.app bundle. | 2 | 1 | 0 | I have a script (Shell, chmod-ed to 755. Python is in the script, meaning not run from an outside .py file) that is executable. It works when I run it. How can I make a .app that executes said script on runtime? I have a simple .app that has this structure: APPNAME.App>Contents>MacOS>script
This does not run. Is there any way I can piggyback a script onto another application, The Powder Toy, for example? I'm not new to OS X, I just don't have root privileges and can't install Xcode.
Remember, I can't install anything from source or use setup scripts, effectively ruling out py2app as an option.
EDIT:
This answer is courtesy of mklement0. Automator lets you choose the environment to run your script, type it in, and bundle it into a .app, removing the need for a shell script. | Make a Mac .App run a script on running of said .App | 1.2 | 0 | 0 | 930 |
23,381,990 | 2014-04-30T07:20:00.000 | 3 | 1 | 0 | 1 | python,rabbitmq,twisted | 23,382,374 | 1 | true | 0 | 0 | If your server does not receive packets often, it will not improve much - you only gain some tiny overhead on inter-server communication. Still, it is a very good design idea, because it scales well, and once you finally get many packets you can just add another instance of the data-processing server. | 1 | 2 | 0 | I am developing a TCP/IP server whose purpose is to receive packets from clients, parse them, do some computation (on the data arriving in the packet) and store it in a database. Till now, everything was being done by a single server application written using Twisted Python. Now I have come across RabbitMQ, so my question is: is it possible, and will it lead to better performance, if my Twisted server application just receives the packets from clients and passes them to another C++ application using RabbitMQ? The C++ application will in turn parse packets, do computation on them, etc. Everything will be done on a single server. | Using rabbitmq with twisted | 1.2 | 0 | 0 | 545 |
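The design being discussed is producer/consumer decoupling: the network front-end only enqueues raw packets, and a separate worker dequeues, parses, and computes. A broker-free stand-in for that split, using only the standard library (here a thread plays the role the RabbitMQ-fed C++ app would play; `pkt.upper()` is a placeholder for real parsing):

```python
import queue
import threading

packets = queue.Queue()   # stand-in for the RabbitMQ queue
results = []

def worker():
    """Consumer: parse/compute each packet pulled off the queue."""
    while True:
        pkt = packets.get()
        if pkt is None:               # sentinel: shut down
            break
        results.append(pkt.upper())   # placeholder for "parse + compute + store"
        packets.task_done()

t = threading.Thread(target=worker)
t.start()
for p in ["alpha", "beta"]:           # producer: the Twisted server's role
    packets.put(p)
packets.put(None)
t.join()
print(results)  # -> ['ALPHA', 'BETA']
```

With a real broker the two sides become separate processes (or machines), which is where the scaling benefit mentioned in the answer comes from.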
23,382,499 | 2014-04-30T07:49:00.000 | 3 | 0 | 0 | 0 | python,mysql,database,events | 23,382,595 | 3 | false | 0 | 0 | You can use 'Stored Procedures' in your database; a lot of RDBMS engines support one or more programming languages for writing them. AFAIK PostgreSQL supports signals to call an external process too. Google something like 'Stored Procedures in Python for PostgreSQL' or 'postgresql trigger call external program' | 1 | 8 | 0 | I'm running a python script that makes modifications in a specific database.
I want to run a second script once there is a modification in my database (local server).
Is there any way to do that?
Any help would be very appreciated.
Thanks! | Run python script on Database event | 0.197375 | 1 | 0 | 20,648 |
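One portable approximation of "react to a modification", without engine-specific stored-procedure languages, is a trigger that writes into an audit table which a second script polls. A minimal sketch with SQLite (the table and column names are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE items (id INTEGER PRIMARY KEY, qty INTEGER);
    CREATE TABLE change_log (item_id INTEGER, new_qty INTEGER);
    -- The trigger fires on every UPDATE and records what changed.
    CREATE TRIGGER log_update AFTER UPDATE ON items
    BEGIN
        INSERT INTO change_log VALUES (NEW.id, NEW.qty);
    END;
""")
conn.execute("INSERT INTO items VALUES (1, 10)")
conn.execute("UPDATE items SET qty = 99 WHERE id = 1")

# A watcher script would periodically SELECT from change_log and act on rows.
rows = conn.execute("SELECT * FROM change_log").fetchall()
print(rows)  # -> [(1, 99)]
```

The same trigger-plus-log pattern works in MySQL and PostgreSQL; PostgreSQL additionally offers LISTEN/NOTIFY to avoid polling.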
23,383,554 | 2014-04-30T08:44:00.000 | 1 | 0 | 1 | 0 | python,syntax,pycharm | 43,342,637 | 9 | false | 0 | 0 | You can also add particular typo word to PyCharm dictionary by pressing Alt + Enter this word and select Save [typo word] to dictionary. Sometimes this is best choice for me. | 4 | 112 | 0 | Where is the option to disable the spell check on the strings for the PyCharm IDE?
I hate the jagged line under my comments and strings. | How to avoid the spell check on string in Pycharm | 0.022219 | 0 | 0 | 60,750 |
23,383,554 | 2014-04-30T08:44:00.000 | 6 | 0 | 1 | 0 | python,syntax,pycharm | 23,383,853 | 9 | false | 0 | 0 | PyCharm does not check the syntax inside strings and comments. It checks spelling.
You can find the settings of the spell-checker under the Settings... page. There is a Spelling page inside Project Settings. Inside this page, at the bottom of the Dictionaries tab you can enable/disable dictionaries. If you don't want spell checking simply disable all of them.
Note that it is possible to add custom words in the spell checker, or, if you already have a dictionary on your computer, you can add it. This can turn out useful if you want to spell check different languages (for example Italian).
In particular if you are using a Linux system and you have installed the Italian language pack, you probably have the Italian dictionary under: /usr/share/dict/italian.
You can add the /usr/share/dict directory to the directories searched for dictionaries and enable the italian dictionary.
It seems like PyCharm only checks files with the .dic extension. If you want to use /usr/share/dict/italian you should probably either copy it into an other directory renaming it italian.dic or you could create a symbolic link. | 4 | 112 | 0 | Where is the option to disable the spell check on the strings for the PyCharm IDE?
I hate the jagged line under my comments and strings. | How to avoid the spell check on string in Pycharm | 1 | 0 | 0 | 60,750 |
23,383,554 | 2014-04-30T08:44:00.000 | 2 | 0 | 1 | 0 | python,syntax,pycharm | 58,810,499 | 9 | false | 0 | 0 | In the newer version of PyCharm, go to Preferences -> Editor -> Inspections -> Spelling. | 4 | 112 | 0 | Where is the option to disable the spell check on the strings for the PyCharm IDE?
I hate the jagged line under my comments and strings. | How to avoid the spell check on string in Pycharm | 0.044415 | 0 | 0 | 60,750 |
23,383,554 | 2014-04-30T08:44:00.000 | 0 | 0 | 1 | 0 | python,syntax,pycharm | 71,669,007 | 9 | false | 0 | 0 | For single line you can use the # noqa at the end of the line and it will suppress spelling inspections (the squiggly green line).
GOOFY_API_USERNAME_THING = "LAksdfallae@#lkdaslekserlse#lsese" # noqa
I'm using PyCharm 2021.3.2 | 4 | 112 | 0 | Where is the option to disable the spell check on the strings for the PyCharm IDE?
I hate the jagged line under my comments and strings. | How to avoid the spell check on string in Pycharm | 0 | 0 | 0 | 60,750 |
23,383,569 | 2014-04-30T08:45:00.000 | 1 | 0 | 0 | 0 | django,python-2.7,django-settings | 23,384,956 | 1 | false | 1 | 0 | In your settings.py, add FORCE_SCRIPT_NAME = 'path', where 'path' is the directory where the project resides.
For example, if your site exists at http://john.example.com/mywebsite, settings.py would contain FORCE_SCRIPT_NAME = '/mywebsite'. | 1 | 0 | 0 | I am trying to deploy my Django project on a server, but this server has an extra path segment in the URL, like this: http://john.example.com/mywebsite instead of http://john.example.com
so whenever I want to redirect from the homepage to other pages, I get error messages because the redirected pages are missing the extra segment /mywebsite
for example
I want to go to http://john.example.com/mywebsite/apple, but when I click a link in the template it directs me to http://john.example.com/apple
so I just wonder if there is a way to fix this by setting default home directory to http://john.example.com/mywebsite instead of http://john.example.com/ so I don't have to fix all my code
Thank you | How to set default url in settings.py for "www.example.com/path" | 0.197375 | 0 | 0 | 617 |
23,383,615 | 2014-04-30T08:47:00.000 | 0 | 0 | 1 | 0 | python,sockets | 23,384,465 | 2 | false | 0 | 0 | Yes you can do this, but mostly people do multiple blocking sockets with threads or multiple non-blocking sockets with event loops. But it should be no problem in switching in-between, as long as you don't switch between buffered and unbuffered I/O. | 2 | 1 | 0 | I would like to start the conversation in blocking mode and later switch to non-blocking.
Is that a stupid idea?
The python docs are kind of ambiguous about it, there it says:
... You do this [setblocking(0)] after creating the socket, but before using it. (Actually, if you’re nuts, you can switch back and forth.)
I read this as 'please don't do that', so I was wondering if there are reasons as to why it is discouraged.
Is there some kind of undefined behavior, what problems could I run into? | switching between blocking and nonblocking on a python socket | 0 | 0 | 1 | 243 |
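Switching back and forth does work in practice, as the quoted docs admit; the main cost is the more complicated error handling once non-blocking calls can raise instead of wait. A self-contained demonstration using a local socket pair:

```python
import socket

a, b = socket.socketpair()

# Blocking mode (the default): recv waits until data is available.
a.sendall(b"hello")
first = b.recv(5)

# Switch to non-blocking: recv on an empty socket raises instead of waiting.
b.setblocking(False)
try:
    b.recv(5)
    raised = False
except BlockingIOError:
    raised = True

# Switch back -- nothing prevents going back and forth.
b.setblocking(True)
a.sendall(b"again")
second = b.recv(5)

a.close()
b.close()
print(first, raised, second)  # -> b'hello' True b'again'
```

The caution in the second answer is about program structure: every recv/send site now has to be written for whichever mode the socket happens to be in at that moment.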
23,383,615 | 2014-04-30T08:47:00.000 | 0 | 0 | 1 | 0 | python,sockets | 23,384,627 | 2 | false | 0 | 0 | Blocking and non-blocking sockets are different programming models.
Switching between them will make your program overly complicated.
Is that a stupid idea?
The python docs are kind of ambiguous about it, there it says:
... You do this [setblocking(0)] after creating the socket, but before using it. (Actually, if you’re nuts, you can switch back and forth.)
I read this as 'please don't do that', so I was wondering if there are reasons as to why it is discouraged.
Is there some kind of undefined behavior, what problems could I run into? | switching between blocking and nonblocking on a python socket | 0 | 0 | 1 | 243 |
23,383,699 | 2014-04-30T08:52:00.000 | 7 | 0 | 1 | 0 | python,warnings,sublimetext3,syntax-checking | 27,613,279 | 4 | false | 0 | 0 | You can set up Sublime as follows:
View -> Indentation -> Convert Indentation to Spaces
This will convert your tabs to 4 spaces (according to your settings). It works on my machine.
To convert the tabs already in the file to spaces:
View -> Indentation -> Convert Indentation to Spaces | 1 | 10 | 0 | I didn't find an answer to this question on the web, so I'll say it up front; this is NOT a question about SublimeLinter, and I do NOT want to format my python code according to the PEP8 standards.
How to disable the warning "Indentation contains tabs" in the Python Checker package? | Sublime Text 3 - Disable Python Checker warning "indentation contains tabs" | 1 | 0 | 0 | 21,090 |
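The menu command above can also be reproduced outside the editor; `str.expandtabs` performs exactly this conversion, which is handy for batch-fixing files. A small sketch (the helper name is my own):

```python
def tabs_to_spaces(text, width=4):
    """Mimic 'Convert Indentation to Spaces' for a whole source string."""
    return "\n".join(line.expandtabs(width) for line in text.splitlines())

print(tabs_to_spaces("def f():\n\treturn 1"))
```

Note that `expandtabs` aligns to tab stops rather than blindly substituting, so tabs that appear mid-line expand to however many spaces reach the next stop.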
23,384,351 | 2014-04-30T09:25:00.000 | 1 | 0 | 1 | 0 | python,regex,python-2.7,sentiment-analysis | 23,384,749 | 3 | false | 0 | 0 | I would not do this with a regexp. Rather, I would:
Split the input on punctuation characters.
For each fragment do
Set negation counter to 0
Split input into words
For each word
Add negation counter number of NEG_ to the word. (Or mod 2, or 1 if greater than 0)
If original word is in {No,Never,Not} increase negation counter by one. | 2 | 3 | 0 | How do I add the tag NEG_ to all words that follow not, no and never until the next punctuation mark in a string(used for sentiment analysis)? I assume that regular expressions could be used, but I'm not sure how.
Input: It was never going to work, he thought. He did not play so well, so he had to practice some more.
Desired output: It was never NEG_going NEG_to NEG_work, he thought. He did not NEG_play NEG_so NEG_well, so he had to practice some more.
Any idea how to solve this? | How to add tags to negated words in strings that follow "not", "no" and "never" | 0.066568 | 0 | 0 | 1,678 |
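The counter-based steps sketched in the answer above can be implemented directly (using the "1 if greater than 0" variant, so each word gets at most one NEG_ prefix; the function name is my own):

```python
import re

NEGATORS = {"no", "not", "never"}

def mark_negation(text):
    """Prefix NEG_ to words after no/not/never, up to the next punctuation."""
    # Tokenize into words, single punctuation marks, and whitespace runs.
    tokens = re.findall(r"\w+|[^\w\s]|\s+", text)
    negating = False
    out = []
    for tok in tokens:
        if tok.strip() == "":                      # whitespace: pass through
            out.append(tok)
        elif re.fullmatch(r"\w+", tok):            # a word
            out.append(("NEG_" if negating else "") + tok)
            if tok.lower() in NEGATORS:            # start negation scope
                negating = True
        else:                                      # punctuation ends the scope
            negating = False
            out.append(tok)
    return "".join(out)

print(mark_negation("It was never going to work, he thought."))
# -> It was never NEG_going NEG_to NEG_work, he thought.
```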
23,384,351 | 2014-04-30T09:25:00.000 | 0 | 0 | 1 | 0 | python,regex,python-2.7,sentiment-analysis | 23,384,819 | 3 | false | 0 | 0 | You will need to do this in several steps (at least in Python - .NET languages can use a regex engine that has more capabilities):
First, match a part of a string starting with not, no or never. The regex \b(?:not?|never)\b([^.,:;!?]+) would be a good starting point. You might need to add more punctuation characters to that list if they occur in your texts.
Then, use the match result's group 1 as the target of your second step: Find all words (for example by splitting on whitespace and/or punctuation) and prepend NEG_ to them.
Join the string together again and insert the result in your original string in the place of the first regex's match. | 2 | 3 | 0 | How do I add the tag NEG_ to all words that follow not, no and never until the next punctuation mark in a string(used for sentiment analysis)? I assume that regular expressions could be used, but I'm not sure how.
Input: It was never going to work, he thought. He did not play so well, so he had to practice some more.
Desired output: It was never NEG_going NEG_to NEG_work, he thought. He did not NEG_play NEG_so NEG_well, so he had to practice some more.
Any idea how to solve this? | How to add tags to negated words in strings that follow "not", "no" and "never" | 0 | 0 | 0 | 1,678 |
23,385,917 | 2014-04-30T10:39:00.000 | 0 | 0 | 0 | 0 | python,menu,process,nuke | 23,422,211 | 2 | false | 1 | 0 | It turns out that I have to write it inside the menu.py file instead of the init.py file.
And for some reason, the naming convention 'Show Primary Grade' works despite my being unable to find the pass name for it, even though I am able to track down its gizmo file... | 1 | 2 | 0 | I am trying to set my viewerProcess option to 'Show Primary Grade' instead of 'Film' whenever my Nuke is booted
However, due to the limited information I was able to find on the net, I tried inserting the following code nuke.knobDefault("viewerProcess", "Show Primary Grade") in init.py, but I am unable to get it working, and I don't even know whether the code I have written is right or wrong.
Since Show Primary Grade is a custom plugin that my workplace is using (it is shown under this name in the list of selections), is there any way to check and make sure that I am writing it right?
Oh and by the way, am I able to set its script Editor to be like Maya, where whenever the user clicks on something, it will display the results in the output field? | setting custom viewerProcess fails | 0 | 0 | 0 | 902 |
23,386,550 | 2014-04-30T11:11:00.000 | 1 | 0 | 0 | 0 | python,django,debugging,ipdb | 23,388,104 | 1 | true | 1 | 0 | I'd ensure I've killed the runserver/gunicorn and restarted it cleanly, to ensure there are no threads that are still running ipdb. (if you're using django-devserver, for instance, that's multi-threaded) | 1 | 0 | 0 | I have a problem with ipdb. I comment it out when I am not using it, but after a single page refresh the debugger fires anyway. I have to refresh at least two times or so to force Django not to drop into debugging.
Additionally, I am very often experiencing the error: [Errno 32] Broken pipe
(if it matters, I am running it in a Vagrant-based VM) | django import ipdb; ipdb.set_trace(); still want to run debugger even if commented. WHY? | 1.2 | 0 | 0 | 1,057 |
23,386,831 | 2014-04-30T11:24:00.000 | 12 | 0 | 1 | 0 | python,reload,interpreter | 23,386,923 | 4 | true | 0 | 0 | If you import as import foo as f in the first place, then the reload call can be reload(f) | 1 | 8 | 0 | Messing around in the interpreter, it would be useful for me to be able to do something along the lines of reload(foo) as f, though I know it is not possible. Just like I do import foo as f, is there a way to do it?
Using Python 2.6
Thanks! | Is it possible to reload a python module as something? | 1.2 | 0 | 0 | 2,114 |
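The question targets Python 2.6's builtin reload; in Python 3 the same function lives in importlib, and since it returns the module object, rebinding the short alias is straightforward. A self-contained sketch that builds a throwaway module on disk (the module name `foo_demo` is invented for the demo):

```python
import importlib
import pathlib
import sys
import tempfile

# Build a throwaway module on disk just for the demonstration.
tmp = tempfile.mkdtemp()
pathlib.Path(tmp, "foo_demo.py").write_text("VALUE = 1\n")
sys.path.insert(0, tmp)

import foo_demo as f                       # the short alias works as usual
pathlib.Path(tmp, "foo_demo.py").write_text("VALUE = 2  # edited\n")
importlib.invalidate_caches()
f = importlib.reload(f)                    # reload() returns the module object
print(f.VALUE)                             # -> 2
```

In fact, because reload updates the module object in place, even plain `importlib.reload(f)` without the rebinding would leave `f` pointing at the refreshed module.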
23,388,509 | 2014-04-30T12:46:00.000 | 1 | 0 | 1 | 0 | python | 23,388,853 | 2 | false | 0 | 0 | You can think of it this way: "Duck Typing" applies to the type of a variable, not of the variable's contents. A string variable is something that can for example be indexed with [] or added to other strings with + and even repeated several times with * {some integer}, but you can't add a string to an integer, even if the string happens to be a number.
The number-ness of a string has nothing to do with the type. | 1 | 0 | 0 | I'm new to using Python (and dynamically typed languages in general) and I'm having trouble with my variables being incorrectly typed at run time. The program I've written accepts 6 variables (all should be integers) and performs a series of calculations using them. However, the interpreter refuses to perform the first multiplication because it believes the variables are of type 'str'. Even when I enter integers for all values it breaks at run-time and claims I've entered strings. Shouldn't Python treat anything that walks and quacks like an int as if it were an int?
Thanks in advance.
PS: I'm running Python 3.4.0, if that helps. | How do I make my program acknowledge that a variable contains an integer in Python | 0.099668 | 0 | 0 | 61 |
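The likely culprit in the question is that `input()` in Python 3 always returns a str; duck typing never converts it for you, so the conversion must be explicit. A hedged sketch of the usual pattern (the helper name is my own):

```python
def read_int(text):
    """Explicitly convert user-supplied text to int.

    input() always returns str in Python 3; "duck typing" applies to how a
    value behaves, not to parsing string contents into numbers.
    """
    try:
        return int(text)
    except ValueError:
        raise ValueError("not an integer: %r" % text)

print(read_int("42") * 2)  # -> 84
```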
23,388,647 | 2014-04-30T12:53:00.000 | 1 | 0 | 0 | 0 | python,sqlite,pygtk | 23,389,055 | 1 | true | 0 | 1 | AFAIK, WinXP supports setlocale just fine.
If you want to do locale-aware conversions, try using locale.atof('2,34') instead of float('2,34'). | 1 | 0 | 0 | I'm trying to use the data collected by a form in an SQLite query. In this form I've made a spin button which accepts any numeric input (i.e. either 2,34 or 2.34) and sends it in the form of 2,34, which Python sees as a str.
I've already tried to float() the value but it doesn't work. It seems to be a locale problem but somehow locale.setlocale(locale.LC_ALL, '') is unsupported (says WinXP).
All these happen even though I haven't set anything to greek (language, locale, etc) but somehow Windows does its magic.
Can someone help?
PS: Of course my script starts with # -*- coding: utf-8 -*- so as to have anything in greek (even comments) in the code. | pygtk spinbutton "greek" floating point | 1.2 | 1 | 0 | 48 |
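When locale.setlocale is reported as unsupported, a pragmatic fallback is to normalize the decimal comma yourself before calling float. This is a sketch of that workaround (it assumes plain values like '2,34' with no thousands separators; the function name is my own):

```python
def parse_decimal(text):
    """Accept '2,34' or '2.34' without changing the process-wide locale."""
    return float(text.replace(",", ".", 1))

print(parse_decimal("2,34"))  # -> 2.34
```

Unlike locale.atof, this does not depend on the OS locale tables, which sidesteps the WinXP issue described in the question.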
23,388,810 | 2014-04-30T13:00:00.000 | 2 | 0 | 1 | 0 | python,ipython,ipython-notebook | 41,843,805 | 9 | false | 0 | 0 | just use the print command instead of calling the list directly. Like print mylist . It would not truncate then. | 2 | 64 | 0 | I have a long list (about 4000 items) whose content is suppressed when I try to display it in an ipython notebook output cell. Maybe two-thirds is shown, but the end has a "...]", rather than all the contents of the list. How do I get ipython notebook to display the whole list instead of a cutoff version? | IPython Notebook output cell is truncating contents of my list | 0.044415 | 0 | 0 | 121,706 |
23,388,810 | 2014-04-30T13:00:00.000 | 6 | 0 | 1 | 0 | python,ipython,ipython-notebook | 63,039,619 | 9 | false | 0 | 0 | For cases where the output of print(mylist) is something like [1, 1, 1, ..., 1, 1, 1] then [*mylist] will expand the items into rows where all items are visible. | 2 | 64 | 0 | I have a long list (about 4000 items) whose content is suppressed when I try to display it in an ipython notebook output cell. Maybe two-thirds is shown, but the end has a "...]", rather than all the contents of the list. How do I get ipython notebook to display the whole list instead of a cutoff version? | IPython Notebook output cell is truncating contents of my list | 1 | 0 | 0 | 121,706 |
23,392,591 | 2014-04-30T15:47:00.000 | 5 | 0 | 1 | 0 | python,inheritance | 23,392,692 | 1 | true | 0 | 0 | Yes, potentially you might want to skip the immediate superclass method, but still call the methods further up the hierarchy. This might happen for example when you know you have replaced the logic with your own, and calling the superclass method would be pointless or even harmful. | 1 | 2 | 0 | I know that in Python 3, you can write super() and Python automatically passes the correct arguments to super.
It's also possible to introduce subtle bugs by accidentally writing super(Parent, self).
Are there any scenarios where you wouldn't want to pass the current class as the first argument to super? | Would you ever pass anything other than (MyClass, self) to super()? | 1.2 | 0 | 0 | 44 |
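The scenario in the answer — deliberately skipping the immediate superclass while still walking the rest of the MRO — is exactly what passing a different class as super()'s first argument achieves. A minimal illustration:

```python
class A:
    def greet(self):
        return "A"

class B(A):
    def greet(self):
        return "B->" + super().greet()

class C(B):
    def greet(self):
        # super(B, self) starts the MRO search *after* B, deliberately
        # skipping B.greet() -- one case where you wouldn't write super()
        # or super(C, self).
        return super(B, self).greet()

print(C().greet())  # -> A
```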
23,393,456 | 2014-04-30T16:31:00.000 | 0 | 0 | 0 | 0 | python,machine-learning,statistics,categorization | 23,421,883 | 1 | true | 0 | 0 | The principled way to do this is to assign probabilities to different model types and to different parameters within a model type. Look for "Bayesian model estimation". | 1 | 0 | 1 | My problem is as follows:
I am given a number of chi-squared values for the same collection of data sets, fitted with different models. (so, for example, for 5 collections of points, fitted with either a single binomial distribution, or both binomial and normal distributions, I would have 10 chi-squared values).
I would like to use machine learning categorization to categorize the data sets into "models":
e.g. data sets (1,2,5 and 7) are best fitted using only binomial distributions, whereas sets (3,4,6,8,9,10) - using normal distribution as well.
Notably, the number of degrees of freedom is likely to be different for both chi-squared distributions and is always known, as is the number of models.
My (probably) naive guess for a solution would be as follows:
Randomly distribute the points (10 chi-squared values in this case) into the number of categories (2).
Fit each of the categories using the particular chi-squared distributions (in this case with different numbers of degrees of freedom)
Move outlying points from one distribution to the next.
Repeat steps 2 and 3 until happy with result.
However I don't know how I would select the outlying points, or, for that matter, if there already is an algorithm that does it.
I am extremely new to machine learning and fairly new to statistics, so any relevant keywords would be appreciated too. | Categorizing points using known distributions | 1.2 | 0 | 0 | 52 |
23,393,532 | 2014-04-30T16:35:00.000 | 0 | 0 | 0 | 1 | python,fabric | 23,403,551 | 1 | false | 0 | 0 | Looks like you have your /usr/local/bin directory set to 777, i.e. world writable. This is very bad; lock it down to 755 and owned by root at the very least. | 1 | 0 | 0 | When I try to run a simple deploy, this error message comes up:
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/universal-darwin13/rbconfig.rb:212: warning: Insecure world writable dir /usr/local/bin in PATH, mode 040777
Usage: fab mapfile [options]
Please specify a valid map file.
I am not familiar with Fabric. I am in a Mac OS environment ... | Usage: fab mapfile [options] Please specify a valid map file | 0 | 0 | 0 | 92 |
23,394,785 | 2014-04-30T17:48:00.000 | 3 | 0 | 0 | 0 | python,testing,sqlalchemy | 23,702,417 | 1 | true | 0 | 0 | The scoped session manager will by default return the same session object for each connection. Accordingly, one can replace .commit with .flush, and have that change persist across invocations to the session manager.
That will prevent commits.
To then rollback all changes, one should use session.transaction.rollback(). | 1 | 3 | 0 | I'm using fixtures with SQLAlchemy to create some integration tests.
I'd like to put SQLAlchemy into a "never commit" mode to prevent changes ever being written to the database, so that my tests are completely isolated from each other. Is there a way to do this?
My initial thoughts are that perhaps I could replace Session.commit with a mock object; however I'm not sure if there are other things that might have the same effect that I also need to mock if I'm going to go down that route. | sqlalchemy - force it to NEVER commit? | 1.2 | 1 | 0 | 838 |
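The mock-object idea from the question is workable with unittest.mock.patch.object, which swaps the method for the duration of a test and restores it afterwards. The sketch below uses a stand-in class rather than an actual SQLAlchemy Session, purely to show the patching mechanics:

```python
from unittest import mock

class FakeSession:
    """Stand-in for a SQLAlchemy Session; only the patching idea is shown."""
    def __init__(self):
        self.committed = False

    def commit(self):
        self.committed = True

session = FakeSession()
with mock.patch.object(FakeSession, "commit") as fake_commit:
    session.commit()                  # intercepted: nothing "persists"
    fake_commit.assert_called_once_with()
print(session.committed)              # -> False
```

As the accepted answer notes, with real SQLAlchemy you would also need to catch anything else that flushes to disk (e.g. commits issued through the engine), which is why rerouting commit to flush plus a final rollback is often the cleaner route.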
23,395,679 | 2014-04-30T18:40:00.000 | 2 | 1 | 0 | 0 | python,file-comparison | 23,396,192 | 2 | false | 0 | 0 | the "report generating" methods of dircmp.filecmp don't accept a file object, they just use the print statement (or, in the Python 3 version, the print() function)
You could create a subclass of filecmp.dircmp which accepts a file argument to the methods report, report_full_closure and report_partial_closure (if needed), at each print ... site writing print >>dest, .... Where the report_*_closure methods call recursively, pass the dest argument down to the recursive call.
The lack of ability to print output to a specific file seems to me to be an oversight, so having added an optional file argument to these methods and tested it thoroughly you may wish to offer your contribution to the Python project.
If your program is single threaded, you could temporarily replace sys.stdout with your destination file before calling the report methods. But this is a dirty and fragile method, and it is probably foolish to imagine your program will be forever single threaded. | 1 | 2 | 0 | I'd like to output a series of dircmp report_full_disclosure()'s to a text file. However the report_full_disclosure() format is one blob of text, it doesn't play nicely with file.write(comparison_object.report_full_disclosure()) because file.write() expects a single line to write to the file.
I've tried iterating through the report_full_disclosure() report but it doesn't work either. Has anyone else had this particular problem before? Is there a different method to write out to files? | How do write filecmp's report_full_disclosure() to a text file? | 0.197375 | 0 | 0 | 1,224 |
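In Python 3, the "temporarily replace sys.stdout" idea from the answer has a tidy standard-library form: contextlib.redirect_stdout captures everything the report methods print, after which the text can be written to a file. A runnable sketch (with the same single-threaded caveat the answer gives; the directory contents are invented for the demo):

```python
import contextlib
import filecmp
import io
import pathlib
import tempfile

left = pathlib.Path(tempfile.mkdtemp())
right = pathlib.Path(tempfile.mkdtemp())
(left / "same.txt").write_text("x")
(right / "same.txt").write_text("x")
(left / "only_left.txt").write_text("y")

# Capture everything report() prints; safe only in single-threaded use.
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    filecmp.dircmp(str(left), str(right)).report()
report_text = buf.getvalue()
print("only_left.txt" in report_text)  # -> True
```

`report_text` can then be passed to `file.write()` in one call, sidestepping the "one blob of text" problem from the question.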
23,396,807 | 2014-04-30T19:47:00.000 | 1 | 0 | 0 | 0 | python,nlp,nltk | 23,396,854 | 1 | false | 0 | 0 | In short "you cannot". This task is far beyond simple text processing which is provided with NLTK. Such objects relations sentiment analysis could be the topic of the research paper, not something solvable with a simple approach. One possible method would be to perform a grammar analysis, extraction of the conceptual relation between objects and then independent sentiment analysis of words included, but as I said before - it is rather a reasearch topic. | 1 | 0 | 1 | I'm using NLTK to extract named entities and I'm wondering how it would be possible to determine the sentiment between entities in the same sentence. So for example for "Jon loves Paris." i would get two entities Jon and Paris. How would I be able to determine the sentiment between these two entities? In this case should be something like Jon -> Paris = positive | How to determine the "sentiment" between two named entities with Python/NLTK? | 0.197375 | 0 | 0 | 301 |
23,397,090 | 2014-04-30T20:04:00.000 | 0 | 0 | 0 | 0 | python,django,forms,selenium | 45,631,061 | 2 | false | 1 | 0 | I guess it depends on how deep down the rabbit hole you want to go.
If you're building a test for the functional side from the user's perspective, and your GET action results in changes on the webpage, then trigger the submit with selenium, then wait for the changes to propagate to the webpage (like waiting for one/more elements to change value or waiting for an element to appear)
If you want to build an unit test, then all you should be testing is the ability to Submit the data, not also the ability of the javascript code to do a POST request, then a GET then display the data.
If you want to build an integration test, then you will need to check that each individual action in the sequence you described is performed correctly in whatever scenario you deem appropriate and then check that the total result of those actions is as expected. The tricky part will be chaining all those checks together.
If you want to build an end to end test, then you need to check for all of the above, plus changes to any permanent storage locations that the code you test changes (like databases or in-memory structures) plus whatever stress/security/usability/performance checks your software needs to pass in your specific context. | 1 | 3 | 0 | I'm using Selenium (python bindings, with Django) for automated web app testing. My page has a form with a button, and the button has a .click() handler that captures the click (i.e. the form is not immediately submitted). The handler runs function2(), and when function2() is done, it submits the form.
In my test with Selenium, I fill in the form, then click the Submit button. I want to verify that the form is eventually submitted (resulting in a POST request). The form POSTs and then redirects with GET to the same url, so I can't check for a new url. I think I need to check that the POST request occurs. How do I check for successful form submission?
Alternatively I could check at the database level to see that some new object is created, but that would make for a bad "unit" test. | How to detect POST after form submit in Selenium python? | 0 | 0 | 1 | 2,251 |
23,399,888 | 2014-04-30T23:38:00.000 | 1 | 0 | 1 | 0 | python,path,console,ipython | 23,588,025 | 2 | true | 0 | 0 | I am pulling the answer out of the comments.
Point your PATH system variable to the correct version of Python. This is accomplished (on Windows) by going to System Properties -> Advanced -> Environment Variables. If you already have a Python directory in there, modify it to the correct, new path. Otherwise, append it to the end of the existing string. Do not remove what is already present. | 1 | 0 | 0 | I've been using ipython notebook with console2 for a while now and recently installed a different version of python and now my console is giving me an error saying "No module named IPython". I think the path has been changed or something, but I don't know how to fix it. Any help is greatly appreciated! | ipython console2 no module named Ipython | 1.2 | 0 | 0 | 1,050 |
23,406,340 | 2014-05-01T10:58:00.000 | 0 | 0 | 1 | 0 | c#,visual-studio,ironpython | 23,409,623 | 1 | true | 0 | 0 | As Pawel said, IronPython uses a complex build system to handle all of the platforms it supports and make it easy-ish to add new ones (at least until IronPython and the DLR can be refactored into portable libraries ... sometime around 2016). In this case you need to dig through a couple of levels of imports to find where it's defined (Build/Common.proj).
The reason this happens is that even though the Reference is conditional (and handled correctly) in MSBuild, VS ignores the conditional check and displays any Reference elements it finds. Moving the Reference into Build/net35.props instead should fix the display, since VS does a much better job of handling imports.
But after ending Visual Studio and opening the project again, the reference is just back. I removed all references I found (Strg-Shift-F) from the sources, but still no change. I had a look in the .csproj file but did not find any references there.
I simply cannot remove the reference permanently.
Any ideas where to look to permanently remove the reference?
Thank you. | Cannot remove a reference in Visual Studio 2013 | 1.2 | 0 | 0 | 1,489 |
23,415,765 | 2014-05-01T20:10:00.000 | 0 | 0 | 0 | 0 | wxpython,panel | 23,416,753 | 1 | true | 0 | 1 | There are a couple of approaches. The first is to just create all of the panels that you need and hide them as you've been doing. Another approach would be to create the sizers with the widgets laid out appropriately and just hide the sizers as needed. This way you would only ever have one or two panels, but different layouts. | 1 | 0 | 0 | I do not have any code to show with this question because I am just seeking advice. I have a functional piece of code and through the execution of which I create approx 20 - 30 unique panels. I only use one at any given time (with a few small exceptions) and I use show/hide to manage which panels are displayed at a given time.
My question is this: Is it best that I create all 30 panels at the inception point of the code, hide the 29 that are not needed and move on, and then show/hide between the 30 panels as the code runs, or should I only create the one or two that are needed at the start and create the others as they are called for and then hide/destroy the ones that have served their purpose. | wxpython create multiple panels at start or as the program progresses | 1.2 | 0 | 0 | 16 |
23,418,949 | 2014-05-02T00:33:00.000 | 21 | 0 | 1 | 0 | module,pygame,python-2.5,attributeerror,traceback | 31,551,977 | 1 | true | 0 | 1 | I think there is a python file named "copy" in your directory. I had the same problem; after I deleted the "copy" file, the error was gone. | 1 | 6 | 0 | I encountered the error 'module' object has no attribute 'copy' while running a pygame program. In my code, I never referred to a copy attribute, so I don't understand where the error is coming from. | Pygame AttributeError: 'module' object has no attribute 'copy' | 1.2 | 0 | 0 | 11,076
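A quick way to confirm the shadowing the answer describes: print where Python is actually importing copy from. If the path points into your project directory instead of the standard library, a local copy.py is the culprit:

```python
import copy

# The stdlib module lives under the Python installation's lib directory;
# a shadowing file would show a path inside your own project instead.
print(copy.__file__)
```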
23,419,967 | 2014-05-02T02:46:00.000 | 7 | 1 | 1 | 0 | python,profiling | 24,132,022 | 2 | false | 0 | 0 | Here is my understanding:
RCalls number of recursive calls
Local Total time spent on local execution (without calling another method)
/Call Local time per call
Cum Total cumulative time
/Call Cumulative time per call | 1 | 8 | 0 | This may be really obvious but just wanted to make sure I understand what the columns are in runsnakerun.
Name, Calls, RCalls, Local, /Call, Cum, /Call, File, Line, Directory
Here are some that I think I understand
Name - name of function being called
Calls - number of calls?
File - file where the function is stored
Line - Line in File where the function is defined
Directory - directory of file with function definition
The ones I don't feel comfortable venturing a guess are the rest:
RCalls
Local
/Call
Cum
/Call
Thanks | Python profiling - What are the columns in runsnakerun output? | 1 | 0 | 0 | 1,257 |
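RunSnakeRun is a viewer for cProfile/pstats data, so the same columns can be inspected with the standard library: Calls/RCalls roughly correspond to ncalls, Local and its /Call to tottime and percall, and Cum and its /Call to cumtime and percall. A self-contained demonstration:

```python
import cProfile
import io
import pstats

def inner():
    return sum(range(1000))

def outer():
    return [inner() for _ in range(100)]

profiler = cProfile.Profile()
profiler.enable()
outer()
profiler.disable()

# Print the top entries sorted by cumulative time, as RunSnakeRun does.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())  # columns: ncalls, tottime, percall, cumtime, percall
```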
23,421,959 | 2014-05-02T06:17:00.000 | 0 | 0 | 0 | 0 | python,openerp-7 | 23,443,504 | 1 | false | 1 | 0 | You can use webkit reports,
Install report_webkit module in OpenERP
Install wkhtmltopdf
in terminal type, sudo apt-get install wkhtmltopdf
in terminal type which wkhtmltopdf and copy the path
in OpenERP, Go to Settings->Technical->Parameters->System Parameters
And create a new record with key 'webkit_path' and paste the path in value field
Now you can create reports using mako templates
For testing wkhtmltopdf is working correctly,
in terminal type, wkhtmltopdf www.google.com google.pdf | 1 | 0 | 0 | I wanted to know what the process is for creating reports in my openerp-7 module. Do I have to install some specific module for reporting, or is there any default configuration? I want to create reports of my forms. For example, if I want to create a report in form view, what am I supposed to do?
Hoping for suggestions.
Thanks and regards | How to create reports of form view in Openerp-7? | 0 | 0 | 0 | 480 |
23,422,259 | 2014-05-02T06:40:00.000 | 0 | 0 | 0 | 0 | python,firefox,drag-and-drop,selenium-webdriver | 23,422,482 | 1 | false | 0 | 0 | Is drag and drop working manually when you open the page in Firefox? It may be that the page has changed. | 1 | 1 | 0 | The drag and drop feature is not working in my test framework. It has been working for the last 6 months, and all of a sudden this feature alone of Selenium stopped working.
Previously my Selenium version was 2.33 and I upgraded to the latest, 2.41.0. It is still not working.
All other actions, such as double click, filling a text box, etc., are working fine.
I am not getting any error from Selenium saying that drag and drop is failing, but the object is not seen in the target.
I increased implicit time, and also inserted explicit sleep but it is not working.
firefox version is 16.
Did anyone faced similar issue? | Python Selenium Webdriver: Drag and drop failing | 0 | 0 | 1 | 291 |
23,423,582 | 2014-05-02T08:11:00.000 | 0 | 0 | 0 | 0 | python,web-crawler,scrapy | 37,941,838 | 2 | true | 1 | 0 | HtmlAgilityPack helped me in parsing and reading Xpath for the reviews. It worked :) | 1 | 0 | 0 | I am using scrapy to scrape reviews about books from a site. Till now i have made a crawler and scraped comments of a single book by giving its url as starting url by myself and i even had to give tags of comments about that book by myself after finding it from page's source code. Ant it worked. But the problem is that till now the work i've done manually i want it to be done automatically. i.e. I want some way that crawler should be able to find book's page in the website and scrape its comments. I am extracting comments from goodreads and it doesn't provide a uniform method for url's or even tags are also different for different books. Plus i don't want to use Api. I want to do all work by myself. Any help would be appreciated. | Comments scraping without using Api | 1.2 | 0 | 1 | 514 |
23,426,515 | 2014-05-02T10:59:00.000 | 0 | 0 | 0 | 0 | python,django,web-services,openerp,xmlrpclib | 23,648,694 | 2 | false | 1 | 0 | Alternatively you can promote datetime.date() to datetime.datetime() before sending the reply. | 1 | 2 | 0 | I am trying to save a record from Django (front-end) to OpenERP (back-end). I am using the OpenERP web service via xmlrpclib. It works well with normal string and number data, but when I tried to pass a date field, it throws the error cannot marshal <type 'datetime.date'> objects
Please help me.. | cannot marshal objects | 0 | 0 | 0 | 4,577 |
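The promotion suggested in the answer can be seen with the standard library directly. Python 3 spells the module xmlrpc.client (Python 2's xmlrpclib); the method name used here is purely illustrative:

```python
import datetime
import xmlrpc.client

d = datetime.date(2014, 5, 2)

# A plain date cannot be marshalled by the XML-RPC layer...
try:
    xmlrpc.client.dumps((d,), methodname="create_record")
except TypeError as exc:
    print(exc)  # cannot marshal <class 'datetime.date'> objects

# ...but promoting it to a datetime works fine.
dt = datetime.datetime(d.year, d.month, d.day)
payload = xmlrpc.client.dumps((dt,), methodname="create_record")
```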
23,426,916 | 2014-05-02T11:20:00.000 | 1 | 0 | 0 | 0 | python,django,mezzanine,cartridge | 23,427,146 | 1 | true | 1 | 0 | The products most likely aren't published, but can be previewed by an authenticated administrator.
Check the "status" and "published from" fields for each product. | 1 | 0 | 0 | I am trying to develop a small project to learn how mezzanine and cartridge work.
I have the problem that items in the shop are listed only if I am logged in, while I'd like to be able to show them to unauthorized users.
Is there a setting that has to be toggled? | Products are not shown if the user is not logged in | 1.2 | 0 | 0 | 56 |
23,427,939 | 2014-05-02T12:14:00.000 | 3 | 0 | 0 | 0 | python,qt,pyqt | 23,436,083 | 2 | true | 0 | 1 | You can call QApplication::setStyle("windows") before creating QApplication object to set a style by name. However, Qt on Linux is usually built without modern Windows styles, so it's possible that the best you will accomplish is the classic (old) Windows style, which is not very attractive. You can also try to use "fusion" style or other values that QStyleFactory::keys() returns. | 2 | 0 | 0 | How can I make a PyQt5 application to look like the same when run on Windows, and Linux?
I would prefer the Windows style on both systems.
Thanks | Make PyQt application look like the same on Windows and Linux | 1.2 | 0 | 0 | 1,984 |
23,427,939 | 2014-05-02T12:14:00.000 | 0 | 0 | 0 | 0 | python,qt,pyqt | 23,474,604 | 2 | false | 0 | 1 | You can use existing styles mentioned above
or you can specify the styles of your widgets using Stylesheet, which is quite powerful. | 2 | 0 | 0 | How can I make a PyQt5 application to look like the same when run on Windows, and Linux?
I would prefer the Windows style on both systems.
Thanks | Make PyQt application look like the same on Windows and Linux | 0 | 0 | 0 | 1,984 |
23,428,618 | 2014-05-02T12:49:00.000 | 8 | 0 | 0 | 0 | python,sqlite,treeview | 23,489,182 | 1 | true | 0 | 1 | Finally I managed to figure this out, so I post this answer to my own question in case someone finds it helpful:
All I had to do was to .clear the list_store, rebuild it and use set_model to the TreeView.
The refresh function goes as below:
liststore.clear()
create_model_checks() # re-create liststore
treeView.set_model(liststore) | 1 | 6 | 0 | I finally managed to write a little app that reads from a sqlite database and show the results to a treeview. Another form (in another module) gives the ability to write new or update existing records. After writing to the database it closes the window
What I'm trying to do now is to update the "main" window (containing the treeview) to show the new dataset. I have managed so far to do this but a) the initial mainwindow stays there while a new instance of it opens on top of it showing the desired (new) dataset.
How would I make this work? Can someone give me suggestions/example?
Perhaps I need to say that the __init__ function of my mainwindow module does everything upon running: creates the GUI, reads from the database and shows it all. I suspect that this may be the problem, but having tried almost every combination of breaking it into pieces (functions), I had no success.
--EDIT--
OK, I now have many different functions: __init__ creates the main GUI while others read the data from the DB and place it in a treeview.
I tried to use a timer, but this option doesn't seem appropriate either, as gtk.TreeView doesn't have such a method.
23,436,414 | 2014-05-02T20:28:00.000 | 2 | 0 | 1 | 0 | python,url,cherrypy | 23,436,508 | 1 | true | 0 | 0 | Turns out using an _ in the method definition works as a dot/period. This does mean I have to have two function definitions if I want to be able to call it either with or without the .py, but since I can easily call the one function from the other, this is a minor quibble. | 1 | 1 | 0 | Is there a way to get CherryPy to respond to a url that includes a period, such as http://some/base/path/oldscript.py ? I have a number of old CGI scripts that were called like that, and I'm trying to roll them into a nice pretty CherryPy web app - but I don't want to break all the bookmarks and the like that still point to the CGI scripts. Ideally the same method would respond to the url regardless of if it has a .py or not. | cherrypy: respond to url that includes a dot? | 1.2 | 0 | 1 | 450 |
23,437,084 | 2014-05-02T21:16:00.000 | 2 | 0 | 0 | 0 | python,urllib2 | 23,437,205 | 2 | true | 0 | 0 | There's nothing wrong with using try/except. That's the way to do such things in python.
it's easier to ask forgiveness than permission | 1 | 2 | 0 | Is there a way beyond catching exceptions(HTTPError,URLError), to check if access to a url is permitted.
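Following the answer, the try/except can at least be tucked into one small helper so the 30-second polling loop stays clean. This uses Python 3's urllib.request, the successor of the question's urllib2; HTTPError is a subclass of URLError, so a single except clause covers both, and ValueError catches malformed URLs:

```python
import urllib.error
import urllib.request

def can_open(url, timeout=5):
    """Return True if the URL can be opened, False otherwise."""
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True
    except (urllib.error.URLError, ValueError):
        return False
```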
I have to constantly(every 30 secs) ping a url (using urllib2.open)(which might or might not be valid).
And throwing and catching exceptions each time seems excessive. I couldn't find anything in the documentation. | Python: Checking if you can connect to a given url (urllib2) | 1.2 | 0 | 1 | 121 |
23,439,474 | 2014-05-03T02:08:00.000 | 0 | 0 | 0 | 0 | python,pygame,alpha | 23,485,047 | 1 | true | 0 | 1 | It is possible that you have already thought of this and have decided against it, but it would obviously run far better in real time if the polygons were pre-drawn. Presuming there aren't very many different types of polygons, you could even resize them however you need and you would be saving CPU.
Also, assuming that all of the polygons are regular, you could just have several different equilateral triangles with gradients going in various directions on them to produce the necessary shapes.
Another thing you could do is define the polygon you are drawing, then draw an image of a gradient saved on your computer inside that shape.
The final thing you could do is to build your program (or certain, CPU intensive parts of your program) in C or C++. Being compiled and automatically optimized during compiling, these languages are significantly faster than python and better suited to what you are trying to do. | 1 | 2 | 0 | I have a scene, and I need to be able to overlay the scene with translucent polygons (which can be done easily using pygame.gfxdraw.filled_polygon which supports drawing with alpha), but the catch is that the amount of translucency has to fade over a distance (so for example, if the alpha value is 255 at one end of the polygon, then it is 0 at the other end and it blends from 255 to 0 through the polygon). I've implemented drawing shapes with gradients by drawing the gradient and then drawing a mask on top, but I've never come across a situation like this, so I have no clue what to do. I need a solution that can run in real time. Does anyone have any ideas? | Gradient alpha polygon with pygame | 1.2 | 0 | 0 | 948 |
23,440,391 | 2014-05-03T04:49:00.000 | 0 | 0 | 0 | 1 | python,sublimetext2 | 23,440,406 | 2 | false | 0 | 0 | Just add raw_input("Press ENTER to exit") and it will "pause" until you press a key. You should be able to add this line anywhere and as often as needed. | 1 | 0 | 0 | I was learning Python using Sublime Text 2 dev.
When I code "hello world" and build it, the "cmd" window appears and disappears in a moment.
I want to make the output window stay open, but I don't know how.
Help me, thank you.
23,441,657 | 2014-05-03T07:44:00.000 | 124 | 0 | 1 | 0 | python,pycharm | 45,199,074 | 6 | false | 0 | 0 | I found out an easier way.
go to File -> Settings -> Keymap
Search for Execute Selection in Console and reassign it to a new shortcut, like Ctrl + Enter.
This is the shortcut for the same action in Spyder and RStudio.
In other editors there is something like a cell which I can run, but I can't find such an option in PyCharm?
If this function doesn't exist it would be a huge drawback for me... Because for my data analysis I very often only need to run the last few lines of my code. | Pycharm: run only part of my Python file | 1 | 0 | 0 | 152,784 |
23,441,657 | 2014-05-03T07:44:00.000 | 25 | 0 | 1 | 0 | python,pycharm | 23,442,815 | 6 | false | 0 | 0 | You can select a code snippet and use right click menu to choose the action "Execute Selection in console". | 2 | 106 | 0 | Is it possible to run only a part of a program in PyCharm?
In other editors there is something like a cell which I can run, but I can't find such an option in PyCharm?
If this function doesn't exist it would be a huge drawback for me... Because for my data analysis I very often only need to run the last few lines of my code. | Pycharm: run only part of my Python file | 1 | 0 | 0 | 152,784 |
23,448,140 | 2014-05-03T18:11:00.000 | 0 | 0 | 0 | 0 | string,listbox,wxpython | 23,455,859 | 2 | false | 0 | 1 | Thank you very much! I have fixed it! If somebody has the same problem, you must change the OnDelete code (the first code in the question) to this:
def OnDelete(self, event):
sel = self.listbox.GetSelection()
f = open("Save/savegame.txt", "r")
read = f.readlines()
f.close()
name = self.listbox.GetStringSelection()
newfile = """"""
for i in read:
if name in i:
pass
else:
newfile += i
n = open("Save/savegame.txt", "w")
one = str(newfile)
n.write(one)
n.close()
self.listbox.Delete(sel) | 1 | 0 | 0 | I have very complicated problem. I´ve search whole internet and tried everything, but nothing worked. I want get string from listbox and than delete line in file with word from listbox. Can please somebody help me? Here is code:
def OnDelete(self, event):
sel = self.listbox.GetSelection()
if sel != -1:
self.listbox.Delete(sel)
subor = open("Save/savegame.txt", "r")
lines = subor.readlines()
subor.close()
subor = open("Save/savegame.txt", "w")
selstring = self.listbox.GetString(self.listbox.GetSelection())
for line in lines:
if line!=selstring:
subor.write(line)
subor.close()
And this is code for saving file:
def OnNewGame(self,event):
nameofplr = wx.GetTextFromUser('Enter your name:', 'NEW GAME')
subor=open("Save/savegame.txt","a")
subor.write( "\n" + nameofplr)
subor.close()
savegame=open("Save/" + nameofplr + ".prjct", "w+")
savegame.close()
It shows this error:
Traceback (most recent call last):
File "D:\Python\Python Projects\Project\Project.py", line 106, in OnDelete
selstring = self.listbox.GetString(self.listbox.GetSelection())
File "D:\Python\lib\site-packages\wx-3.0-msw\wx_core.py", line 12962, in GetString
return core.ItemContainer_GetString(*args, **kwargs)
wx._core.PyAssertionError: C++ assertion "IsValid(n)" failed at ....\src\msw\listbox.cpp(387) in wxListBox::GetString(): invalid index in wxListBox::GetString
Thank you very much for help! | How to get string from listbox | 0 | 0 | 0 | 817 |
23,453,650 | 2014-05-04T07:16:00.000 | 0 | 1 | 0 | 0 | python,zeromq | 23,468,773 | 1 | false | 0 | 0 | You shouldn't be able to bind more than once to the same port number, either from the same process or another.
ZMQ should give a failure when you issue bind with a port number already in use. Are you checking return codes? | 1 | 0 | 0 | I have a ZMQ server listening on port 12345 TCP. When another server connects on that port locally or via VM it works fine, but if I try from a remote server that has to go through port forwarding on my Fios firewall it just bombs. The packets are showing up in Wireshark but ZMQ just ignores them. Is there anyway to get past this? | Incoming ZeroMQ traffic dropped by server due to NAT? | 0 | 0 | 1 | 144 |
23,453,735 | 2014-05-04T07:27:00.000 | 0 | 1 | 1 | 0 | python,ldap,python-ldap | 23,841,869 | 1 | true | 0 | 0 | As stated by Guntram Blohm, it is not possible to delete object classes on existing objects because doing that would invalidate the schema checks that the server did when creating the object. So the way to do it will be to delete the object and create a new one. This is a property of the server and the client libraries cannot do anything about it. | 1 | 0 | 0 | I am using Python-LDAP module to interact with my LDAP server. How can I remove an objectClass from an entry using python-ldap? When I generated a modlist with modlist.modifyModlist({'objectClass':'inetLocalMailRecipient},{'objectClass' : ''}) , it just generates (1, 'objectClass', None) which obviously doesn't seem correct. What am I doing wrong here? I want to remove one objectClass from a given entry using python ldap. | How to remove an objectClass from an entry using python-ldap? | 1.2 | 0 | 0 | 615 |
23,454,496 | 2014-05-04T09:13:00.000 | 0 | 0 | 0 | 0 | python,parsing,html-parsing,beautifulsoup | 23,464,291 | 2 | false | 1 | 0 | Your solution is really going to be specific to each website page you want to scrape, so, without knowing the websites of interest, the only thing I could really suggest would be to inspect the page source of each page you want to scrape and look if the article is contained in some html element with a specific attribute (either a unique class, id, or even summary attribute) and then use beautiful soup to get the inner html text from that element | 1 | 0 | 0 | I have built a program for summarization that utilizes a parser to parse from multiple websites at a time. I extract only <p> in each article.
This throws out a lot of random content that is unrelated to the article. I've seen several people who can parse any article perfectly. How can i do it? I am using Beautiful Soup | Parsing multiple News articles | 0 | 0 | 1 | 917 |
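As the answer says, the key is finding the element that wraps the article body on each site. A stdlib-only sketch (no Beautiful Soup) that keeps only the <p> tags inside an assumed div with class "article-body" — that class name is site-specific and purely illustrative:

```python
from html.parser import HTMLParser

class ArticleText(HTMLParser):
    """Collect text from <p> tags inside one container element."""

    def __init__(self, container=("div", "article-body")):
        super().__init__()
        self.container = container
        self.depth = 0      # > 0 while we are inside the container
        self.in_p = False
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        name, cls = self.container
        if self.depth:
            if tag == name:
                self.depth += 1        # nested tags of the same name
            elif tag == "p":
                self.in_p = True
        elif tag == name and ("class", cls) in attrs:
            self.depth = 1

    def handle_endtag(self, tag):
        if self.depth and tag == self.container[0]:
            self.depth -= 1
        elif tag == "p":
            self.in_p = False

    def handle_data(self, data):
        if self.depth and self.in_p:
            self.chunks.append(data.strip())

html = ('<div class="article-body"><p>Story text.</p></div>'
        '<div class="sidebar"><p>Unrelated ad.</p></div>')
parser = ArticleText()
parser.feed(html)
article = " ".join(c for c in parser.chunks if c)
print(article)  # -> Story text.
```

The same idea carries over to Beautiful Soup: select the container first, then take only the <p> tags inside it, so sidebar and footer paragraphs never make it into the summary.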
23,454,521 | 2014-05-04T09:16:00.000 | 0 | 1 | 0 | 0 | python,linux,web,flask,raspberry-pi | 23,454,864 | 2 | false | 1 | 0 | Best practice is to never do this kind of thing. If you are giving sudo access to your pi from internet and then executing user input you are giving everyone in the internet the possibility of executing arbitrary commands in your system. I understand that this is probably your pet project, but still imagine someone getting access to your computer and turning camera when you don't really expect it. | 1 | 0 | 0 | I have created a web-app using Python Flask framework on Raspberry Pi running Raspbian. I want to control the hardware and trigger some sudo tasks on the Pi through web.
The Flask based server runs in non-sudo mode listening to port 8080. When a web client sends request through HTTP, I want to start a subprocess with sudo privileges. (for ex. trigger changes on gpio pins, turn on camera etc.). What is the best practice for implementing this kind of behavior?
The webserver can ask for sudo password to the client, which can be used to raise the privileges. I want some pointers on how to achieve this. | How to start a privileged process through web on the server? | 0 | 0 | 0 | 166 |
23,457,532 | 2014-05-04T14:43:00.000 | 2 | 1 | 1 | 0 | python,pycharm | 23,457,554 | 3 | false | 1 | 0 | I'd suggest you to use composition instead of inheritance. Then you design class's interface and decide which methods are available. | 1 | 5 | 0 | I am making a programming framework (based on Django) that is intended for students with limited programming experience. Students are supposed to inherit from my base classes (which themselves are inherited from Django models, forms, and views).
I am testing this out now with some students, and the problem is that when they write code in their IDE (most of them are using PyCharm), autocomplete gives them a ton of suggestions, since there are so many inherited methods and attributes, 90% of which are not relevant to them.
Is there some way to hide these inherited members? At the moment I am primarily thinking of how to hide them in auto-complete (in PyCharm and other IDEs). They can (and probably should) still work if called, but just not show up in places like auto-complete.
I tried setting __dict__, but that did not affect what showed up in autocomplete. Another idea I have is to use composition instead of inheritance, though I would have to think this through in more detail.
Edit: This framework is not being used in CS classes; rather, students will be using it to build apps for non-CS domain. So my priority is to keep it simple as possible, perhaps even if it's not a "pure" approach. (Nevertheless, I am considering those arguments as they do have merit.) | In Python, can I hide a base class's members? | 0.132549 | 0 | 0 | 1,650 |
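A tiny sketch of the composition approach suggested in the answer, with all names hypothetical: the student-facing class exposes only the methods you choose and delegates to the full-featured object behind a single non-public attribute, so IDE completion shows just the curated surface:

```python
class FullFeaturedModel:
    """Stands in for the framework base class with many members."""
    def save(self):
        return "saved"
    def delete(self):
        return "deleted"
    def _internal_bookkeeping(self):
        ...
    def obscure_framework_hook(self):
        ...

class StudentModel:
    """Exposes only save() and delete(); everything else stays hidden."""
    def __init__(self):
        self._impl = FullFeaturedModel()   # one non-public attribute

    def save(self):
        return self._impl.save()

    def delete(self):
        return self._impl.delete()

public = [name for name in dir(StudentModel) if not name.startswith("_")]
print(public)  # -> ['delete', 'save']
```

The trade-off is boilerplate: every exposed method needs an explicit delegating wrapper, but that is exactly what keeps autocomplete short.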
23,458,792 | 2014-05-04T16:42:00.000 | 5 | 0 | 0 | 0 | python,scikit-learn | 23,472,135 | 1 | true | 0 | 0 | There's no hard limit to the number of iterations for LogisticRegression; instead it tries to detect convergence with a specified tolerance, tol: the smaller tol, the longer the algorithm will run.
From the source code, I gather that the algorithm stops when the norm of the objective's gradient drops below tol times its initial value (the value before training started). This is worth documenting.
As for random forests, training stops when n_estimators trees have been fit of maximum depth max_depth, constrained by the parameters min_samples_split, min_samples_leaf and max_leaf_nodes. Tree learning is completely different from iterative linear model learning. | 1 | 4 | 1 | I'm using scikit-learn to train classifiers. I'm particularly using linear_model.LogisticRegression. But my question is: what's the stopping criteria for the training?! because I don't see any parameter that indicates the number of epochs!
Also the same for random forests? | When does fit() stop running in scikit? | 1.2 | 0 | 0 | 1,201 |
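The stopping rule described in the answer — gradient norm below tol times its initial value — can be sketched in plain Python with a one-dimensional gradient descent. This illustrates the idea only, not scikit-learn's actual solver:

```python
def minimize(grad, x, tol=1e-4, lr=0.1, max_steps=100_000):
    """Descend until |grad| < tol * |initial grad|, mimicking the rule above."""
    g0 = abs(grad(x))
    steps = 0
    while abs(grad(x)) > tol * g0 and steps < max_steps:
        x -= lr * grad(x)
        steps += 1
    return x, steps

# Toy objective f(x) = (x - 3)^2 with gradient 2 * (x - 3).
x, steps = minimize(lambda x: 2 * (x - 3), x=0.0)
print(x, steps)  # x converges to ~3; a smaller tol means more steps
```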
23,462,833 | 2014-05-04T23:39:00.000 | 2 | 1 | 1 | 0 | python,c++ | 23,462,868 | 1 | false | 0 | 0 | No, the PyObjects don't generate code -- it is just a c struct that holds the information of that particular Python object. You can inspect them in your C++ debugger.
I'm experienced with py embedding. The docs seem clear. Don't really get what your problem is. | 1 | 0 | 0 | I am learning how to run Python through C++ and I'm having a hard time getting a handle on things. Is there a way to output the code that is generated by the various PyObjects? I'm not very experienced with embedding, so the documentation went a bit over my head. | Debugging Python extended on C++ | 0.379949 | 0 | 0 | 35 |
23,464,220 | 2014-05-05T03:13:00.000 | 1 | 0 | 0 | 0 | image,drag-and-drop,wxpython | 23,479,767 | 1 | true | 0 | 1 | I think you could use a wx.Panel without any issues. There is a CustomDragAndDrop example in the wxPython demo that shows how to drag and drop images from one wx.Window to another. A wx.Panel is a type of wx.Window and I would think it would be better to use anyway. I would give that a try. | 1 | 0 | 0 | I am using wxPython for a board game. And want to drag chessman over positions.
What is the container I should use? Should I put image for chessman in its own panel or as button or somewhere? | wxPython -- Drag and Drop image between position-fixed containers, what is the suitable container? | 1.2 | 0 | 0 | 68 |
23,467,213 | 2014-05-05T07:36:00.000 | 1 | 0 | 1 | 0 | python,documentation,distutils | 23,501,164 | 1 | true | 0 | 0 | Installing docs is not supported by distutils. These days it seems more important to publish them online than to install them on the target system though, so if you’re set up to use Reat The Docs or upload docs to PyPI, you’re good. | 1 | 2 | 0 | I'm publishing a few Python packages internally and I would like to include API documentation, currently in .rst form, but might go for Sphinx-generated HTML in the future. I need more details than I can reasonably put into a docstring (background info, tutorial examples, etc), so I've written it out long-hand.
I have a simple distutils-based setup.py where I pull in the text files, but I'm not sure where to put them on the target system (I target Windows, Linux and Mac.)
Is there a convention in the Python world for installing API docs alongside packages? Or are docs generally assumed to only be available online?
Any ideas welcome, thanks! | Installing Python package documentation | 1.2 | 0 | 0 | 209 |
23,467,814 | 2014-05-05T08:12:00.000 | 0 | 0 | 0 | 0 | python,c++,openscenegraph | 23,624,458 | 3 | true | 0 | 1 | Finally I managed to solve this issue by creating a new python type (python extension) and using Node Visitors to assign node references upon creation. | 1 | 0 | 0 | I am working with an embedded python system which requires a C++ frontend using OpenSceneGraph for rendering visualizations. My question is:
Is there any possible way to perform this task? I need to modify C++ OSG nodes from Python. Would it be an option to create wrappers for these OSG nodes? If this is the answer, could you provide some guidance? | How can I use C++ OSG objects from Python? | 1.2 | 0 | 0 | 741
23,468,132 | 2014-05-05T08:33:00.000 | 0 | 0 | 0 | 0 | python,django,mercurial | 23,479,318 | 1 | false | 1 | 0 | This is something you should fix at the web application level, not at the Mercurial level. If you're fine with having people wait you set up a distributed locking scheme where the web worker thread tries to acquire a repository-specific lock from shared memory/storage before taking any actions. If it can't acquire the lock you respond with either a status-code 503 with a retry-after header or you have the web-worker thread retry until it can get the lock or times out. | 1 | 0 | 0 | Our Django project provides interfaces to users to create repository
create new repo
add new changes to existing repo
Any user can access any repo to make changes directly via an HTTP POST containing changes.
It's totally fine if the traffic is low. But if the traffic increases to the point that multiple users want to add changes to the same repo at exactly the same time, how do we handle it?
We currently use Hg (Mercurial) for repos | Django: Atomic operations on a directory in media storage | 0 | 0 | 0 | 37 |
23,477,570 | 2014-05-05T16:40:00.000 | 4 | 1 | 0 | 1 | python,cron,flask,openshift,nohup | 23,485,693 | 1 | true | 1 | 0 | I'm lazy. Cut and paste :)
I have been told 5 minutes is the limit for the free accounts. That includes all background processes. I asked a similar question here on SO. | 1 | 3 | 0 | I have a long-running daily cron on OpenShift. It takes a couple hours to run. I've added nohup and I'm running it in the background. It still seems to timeout at the default 5 minutes (It works appropriately for this time). I'm receiving no errors and it works perfectly fine locally.
nohup python ${OPENSHIFT_REPO_DIR}wsgi/manage.py do_something >> \
${OPENSHIFT_DATA_DIR}do_something_data.log 2> \
${OPENSHIFT_DATA_DIR}do_something_error.log &
Any suggestions are appreciated. | Long-running Openshift Cron | 1.2 | 0 | 0 | 629
23,478,266 | 2014-05-05T17:31:00.000 | 10 | 0 | 1 | 0 | python,pydev,python-import | 23,481,504 | 1 | true | 0 | 0 | The problem seems to be coming from pylint. Error disappears if I add: # pylint: disable-msg=E0611 to the line | 1 | 5 | 0 | I'm getting a 'No-name-in-module' import error whenever I try to import linalg from scipy
I have no trouble importing scipy or anything else from scipy. For some reason it doesn't like linalg. Oddly, eclipse includes linalg under auto-completion.
I have tried:
Removing the interpreter and then adding it again
Adding site-tools, the scipy directory, and even the scipy/linalg directory to libraries under the python interpreter preferences.
Deleting all class files
Importing linalg as a different name
Reinstalling eclipse
freaking out
I'm running anaconda on ubuntu 14.04 | 'No-name-in-module' import error | 1.2 | 0 | 0 | 9,320 |
23,479,021 | 2014-05-05T18:14:00.000 | 0 | 0 | 1 | 1 | python-2.7,python-3.x,windows-7 | 23,479,112 | 2 | false | 0 | 0 | To make sure that you are always running python 3, you can modify your Windows PATH Environment variable to include the python 3 directory, and remove the python 2 directory. | 1 | 1 | 0 | I have python 2.7.6 and python 3.4 installed on windows 7 machine.
When I open command prompt for windows and type python, python 2.7.6 starts by default.
I have a python script which I want to compile (or interpret officially speaking) using python 3.4.
Is there a command to use python 3.4 from c:/ prompt? or make 3.4 the default python interpreter?
thanks | Forcing a python program to interpret using python 3 | 0 | 0 | 0 | 161 |
23,482,115 | 2014-05-05T21:20:00.000 | 6 | 0 | 1 | 1 | python,multiprocessing | 25,519,766 | 1 | false | 0 | 0 | For me the error was actually that my receiving process had thrown an exception and terminated, and so the sending process was receiving an EOFError, meaning that the interprocess communication pipeline had closed. | 1 | 5 | 0 | I have a bunch of clients connecting to a server via 0MQ. I have a Manager queue used for a pool of workers to communicate back to the main process on each client machine.
On just one client machine having 250 worker processes, I see a bunch of EOFError's almost instantly. They occur at the point that the put() is being performed.
I would expect that a lot of communication might slow everything down, but that I should never see EOFError's in internal multiprocessing logic. I'm not using gevent or anything that might break standard socket functionality.
Any thoughts on what could make puts to a Manager queue start raising EOFError's? | EOFError with multiprocessing Manager | 1 | 0 | 0 | 5,086 |
23,489,412 | 2014-05-06T08:17:00.000 | 1 | 0 | 1 | 0 | python,multithreading | 23,489,517 | 1 | false | 0 | 0 | The underlying file handle generated when opening a file can be used for reading at one point of the file only. You cannot read multiple offset with the same file handle.
So you should use one thread reading the file while other threads read a kind of queue generated by the first thread with the file buffers. | 1 | 0 | 0 | i will like to design my platform to calculate something and the structure is ,
one big file (maybe 5gb or 10gb)
20 threading and execute different algorithm
the current structure of mine is 20 thread open the big file by itself
and then read line by line to execute by each thread
however, i would like to design a new structure that open big file just one time ,
and every thread read the same memory block ,
i survey mmap and multiprocess.array ,but still have not idea how to apply it safety and easily.
can anybody help me ? thanks. | How can design multithreading and read same input | 0.197375 | 0 | 0 | 43 |
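A minimal sketch of the single-reader design described in the answer, using only the standard library. The uppercase step is a placeholder for a real per-thread algorithm, and the file path is supplied by the caller:

```python
import queue
import threading

def reader(path, q, n_workers):
    # Only this thread ever touches the file handle.
    with open(path) as f:
        for line in f:
            q.put(line)
    for _ in range(n_workers):
        q.put(None)                      # one stop sentinel per worker

def worker(q, out, lock):
    # Consume lines from the shared queue until a sentinel arrives.
    while True:
        line = q.get()
        if line is None:
            break
        result = line.strip().upper()    # stand-in for a real algorithm
        with lock:
            out.append(result)
```

For truly shared read-only memory across processes (rather than threads), mmap-ing the file in each process is the usual alternative.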
23,491,608 | 2014-05-06T10:04:00.000 | 1 | 1 | 0 | 0 | python,c++,mysql,c | 23,493,503 | 3 | false | 0 | 1 | Not the answer you expected, but I have been down that road and advise KISS:
First make it work in the most simple way possible.
Only than look into speeding things up later / complicating the design.
There are lots of other ways to phrase this such as "do not fix hypothetical problems unless resources are unlimited". | 3 | 4 | 0 | I'm implementing an algorithm into my Python web application, and it includes doing some (possibly) large clustering and matrix calculations. I've seen that Python can use C/C++ libraries, and thought that it might be a good idea to utilize this to speed things up.
First: Are there any reasons not to, or anything I should keep in mind while doing this?
Second: I have some reluctance about connecting C to MySQL (where I would get the data for the calculations). Is this in any way justified? | Using C/C++ for heavy calculations in Python (Also MySQL) | 0.066568 | 0 | 0 | 1,084
23,491,608 | 2014-05-06T10:04:00.000 | 1 | 1 | 0 | 0 | python,c++,mysql,c | 23,497,013 | 3 | true | 0 | 1 | Use the ecosystem.
For matrices, using numpy and scipy can provide approximately the same range of functionality as tools like Matlab. If you learn to write idiomatic code with these modules, the inner loops can take place in the C or FORTRAN implementations of the modules, resulting in C-like overall performance with Python expressiveness for most tasks. You may also be interested in numexpr, which can further accelerate and in some cases parallelize numpy/scipy expressions.
If you must write compute-intensive inner loops in Python, think hard about it first. Maybe you can reformulate the problem in a way more suited to numpy/scipy. Or, maybe you can use data structures available in Python to come up with a better algorithm rather than a faster implementation of the same algorithm. If not, there’s Cython, which uses a restricted subset of Python to compile to machine code.
Only as a last resort, and after profiling to identify the absolute worst bottlenecks, should you consider writing an extension module in C/C++. There are just so many easier ways to meet the vast majority of performance requirements, and numeric/mathematical code is an area with very good existing library support. | 3 | 4 | 0 | I'm implementing an algorithm into my Python web application, and it includes doing some (possibly) large clustering and matrix calculations. I've seen that Python can use C/C++ libraries, and thought that it might be a good idea to utilize this to speed things up.
First: Are there any reasons not to, or anything I should keep in mind while doing this?
Second: I have some reluctance about connecting C to MySQL (where I would get the data for the calculations). Is this in any way justified? | Using C/C++ for heavy calculations in Python (Also MySQL) | 1.2 | 0 | 0 | 1,084
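A toy sketch of the reformulation point in the answer above: the same polynomial evaluated with a Python-level loop and with a single numpy expression whose inner loop runs in compiled code (numpy assumed installed; the polynomial is arbitrary):

```python
import numpy as np

xs = np.linspace(0.0, 1.0, 1000)

# Interpreter-level loop: each iteration executes Python bytecode.
slow = [3.0 * x * x + 2.0 * x + 1.0 for x in xs]

# Idiomatic numpy: one expression, the loop runs inside the C implementation.
fast = 3.0 * xs**2 + 2.0 * xs + 1.0

assert np.allclose(slow, fast)
```

Same numbers, but the vectorized form keeps the hot loop out of the interpreter, which is the usual first step before reaching for Cython or a C extension.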
23,491,608 | 2014-05-06T10:04:00.000 | 1 | 1 | 0 | 0 | python,c++,mysql,c | 23,497,054 | 3 | false | 0 | 1 | Cython support for C++ is much better than it used to be. You can use most of the standard library in Cython seamlessly. There are up to 500x speedups in the extreme best case.
My experience is that it is best to keep the cython code extremely thin, and forward all arguments to c++. It is much easier to debug c++ directly, and the syntax is better understood. Having to maintain a code base unnecessarily in three different languages is a pain.
Using c++/cython means that you have to spend a little time thinking about ownership issues. I.e. it is often safest not to allocate anything in c++ but prepare the memory in python / cython. (Use array.array or numpy.array). Alternatively, make a c++ object wrapped in cython which has a deallocation function. All this means that your application will be more fragile than if it is written only in python or c++: You are abandoning both RAII / gc.
On the other hand, your python code should translate line for line into modern c++. So this reminds you not to use old fashioned new or delete etc in your new c++ code but make things fast and clean by keeping the abstractions at a high level.
Remember too to re-examine the assumptions behind your original algorithmic choices. What is sensible for python might be foolish for c++.
Finally, python makes everything significantly simpler and cleaner and faster to debug than c++. But in many ways, c++ encourages more powerful abstractions and better separation of concerns.
When you programme with python and cython and c++, it slowly comes to feel like taking the worst bits of both approaches. It might be worth biting the bullet and rewriting completely in c++. You can keep the python test harness and use the original design as a prototype / testbed.
First: Are there any reasons not to, or anything I should keep in mind while doing this?
Second: I have some reluctance about connecting C to MySQL (where I would get the data for the calculations). Is this in any way justified? | Using C/C++ for heavy calculations in Python (Also MySQL) | 0.066568 | 0 | 0 | 1,084
23,492,589 | 2014-05-06T10:48:00.000 | 1 | 1 | 0 | 1 | python,path,absolute-path | 23,492,856 | 2 | true | 0 | 0 | you could change your current working directory inside the script before you start calling your relative imports, use os.chdir("absolute path on where your script lives"). | 2 | 0 | 0 | I realise this question may already exist, but the answers I've found haven't worked and I have a slightly different setup.
I have a python file /home/pi/python_games/frontend.py that I am trying to start when lxde loads by placing @python /home/pi/python_games/frontend.py in /etc/xdg/lxsession/LXDE/autostart.
It doesn't run and there are no error messages.
When trying to run python /home/pi/python_games/frontend.py, python complains about not being able to find the files that are loaded using relative links eg: /home/pi/python_games/image.png is called with image.png. Obviously one solution would be to give these resources absolute paths, but the python program also calls other python programs in its directory that also have relative paths, and I don't want to go changing all them.
Anyone got any ideas?
Thanks
Tom | Starting a python script on boot (startx) with an absolute path, in which there are relative paths | 1.2 | 0 | 0 | 751 |
23,492,589 | 2014-05-06T10:48:00.000 | 0 | 1 | 0 | 1 | python,path,absolute-path | 23,496,772 | 2 | false | 0 | 0 | Rather than change your current working directory, in your frontend.py script you could use the value of the predefined __file__ module attribute, which will be the absolute pathname of the script file, to determine absolute paths to the other files in the same directory.
Functions in the os.path module, such as split() and join(), will make doing this fairly easy.
I have a python file /home/pi/python_games/frontend.py that I am trying to start when lxde loads by placing @python /home/pi/python_games/frontend.py in /etc/xdg/lxsession/LXDE/autostart.
It doesn't run and there are no error messages.
When trying to run python /home/pi/python_games/frontend.py, python complains about not being able to find the files that are loaded using relative links eg: /home/pi/python_games/image.png is called with image.png. Obviously one solution would be to give these resources absolute paths, but the python program also calls other python programs in its directory that also have relative paths, and I don't want to go changing all them.
Anyone got any ideas?
Thanks
Tom | Starting a python script on boot (startx) with an absolute path, in which there are relative paths | 0 | 0 | 0 | 751 |
23,497,255 | 2014-05-06T14:15:00.000 | 1 | 0 | 1 | 0 | python,exception | 23,497,677 | 2 | false | 0 | 0 | Only an uncaught exception will terminate a program. If you raise an exception that your 3rd-party software is not prepared to catch and handle, the program will terminate. Raising an exception is like a soft abort: you don't know how to handle the error, but you give anyone using your code the opportunity to do so rather than just calling sys.exit().
If you are not prepared for the program to exit, don't raise an exception. Just log the error instead. | 1 | 1 | 0 | Does invoking a raise statement in python cause the program to exit with traceback or continue the program from next statement? I want to raise an exception but continue with the remainder program.
Well, I need this because I am running the program in a third-party system and I want the exception to be thrown yet continue with the program. The code concerned is a threaded function which has to return.
Can't I spawn a new thread just for throwing the exception and let the program continue? | Does raising a manual exception in a python program terminate it? | 0.099668 | 0 | 0 | 1,370
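A small self-contained illustration of the distinction in the answer above: a raised exception only terminates the program if nothing catches it; a caught one lets execution continue.

```python
def risky(x):
    # Raise rather than abort: callers decide how to handle the error.
    if x < 0:
        raise ValueError("negative input")
    return x * 2

results = []
for value in (3, -1, 5):
    try:
        results.append(risky(value))
    except ValueError:
        results.append(None)   # caught: execution continues with the next item

print(results)  # → [6, None, 10]
```

Had the loop body not wrapped the call in try/except, the exception from risky(-1) would have propagated up and ended the program with a traceback.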
23,498,086 | 2014-05-06T14:53:00.000 | 2 | 0 | 1 | 1 | python,ubuntu,pip,virtualenv,packages | 23,498,202 | 2 | true | 0 | 0 | apt-get update updates packages from Ubuntu package catalog, which has nothing to do with mainstream versions.
LTS in Ubuntu stands for Long Term Support. Which means that after a certain period in time they will only release security-related bugfixes to the packages. In general, major version of packages will not change inside of a major Ubuntu release, to make sure backwards-compatibility is kept.
So if then only thing you can do is apt-get update, you have 2 options:
find a PPA that provides fresher versions of packages that you need, add it and repeat the update/install exercise
find those packages elsewhere, download them in .deb format and install. | 1 | 2 | 0 | I'm trying to install pip and virtualenv on a server (running Ubuntu 12.04.4 LTS) on which I have access, but I can only do it with sudo apt-get install (school politics). The problem is that althought I have run the sudo apt-get update command to update the packages list, I think it keeps installing old ones. After doing sudo apt-get install python-pip python-virtualenv, I do pip --version on which I get the 1.0, and virtualenv --version on which I get 1.7.1.2. These two version are quite old (pip is already in 1.5.5 and virtualenv in 1.11.5). I read that the problem is that the packages list is not up-to-date, but the command sudo apt-get update should solve this, but I guess no. How can I solve this? Thanks a lot! | apt-get installing older version of packages (Ubuntu) | 1.2 | 0 | 0 | 3,744 |
23,500,942 | 2014-05-06T17:16:00.000 | 1 | 0 | 0 | 0 | python,apache | 23,501,387 | 1 | false | 1 | 0 | It looks like your application is using zmq to bind to some port.
As you have suspected already, each request can be run as an independent process, thus competing for access to the port to bind to.
There can be so-called workers, each running one process that handles HTTP/WSGI requests, and each trying to bind.
You should redesign your app not to use bind but connect; this will probably require having another process with ZeroMQ serving whatever you do with that (but this last point depends on what you do in your app). | 1 | 0 | 0 | I have a python web application running on apache2 deployed with mod_wsgi. The application has a thread continuously running. This thread is a ZeroMQ thread listening to a port in a loop. The application is not maintaining sessions. Now if I open the browser and send a request to the Apache server, the data is accepted the first time. When I send the request a second time, it shows an Internal Server Error. When I checked the error log file for the traceback, it shows the ZMQError: the address is already in use.
Does Apache reload the application on each request sent from the browser, so that the ZeroMQ thread is created every time and assigned the port, and since the port has already been assigned it shows the error? | How does apache runs a application when a request comes in? | 0.197375 | 0 | 0 | 54
23,503,667 | 2014-05-06T19:49:00.000 | 3 | 1 | 0 | 0 | python,math,numpy,scipy | 23,552,362 | 4 | false | 0 | 0 | Approach #3: Compute the QR decomposition of AT
In general, to find an orthogonal basis of the range space of some matrix X, one can compute the QR decomposition of this matrix (using Givens rotations or Householder reflectors). Q is an orthogonal matrix and R upper triangular. The columns of Q corresponding to non-zero diagonal entries of R form an orthonormal basis of the range space.
If the columns of X=AT, i.e., the rows of A, already are orthogonal, then the QR decomposition will necessarily have the R factor diagonal, where the diagonal entries are plus or minus the lengths of the columns of X resp. the rows of A.
Common folklore has it that this approach is numerically better behaved than the computation of the product A*AT=RT*R. This may only matter for larger matrices. The computation is not as straightforward as the matrix product, however, the amount of operations is of the same size. | 1 | 7 | 1 | I can test the rank of a matrix using np.linalg.matrix_rank(A) . But how can I test if all the rows of A are orthogonal efficiently?
I could take all pairs of rows and compute the inner product between them but is there a better way?
My matrix has fewer rows than columns and the rows are not unit vectors. | How to detect if all the rows of a non-square matrix are orthogonal in python | 0.148885 | 0 | 0 | 7,018 |
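Two hedged sketches of the checks discussed above, assuming numpy and an illustrative tolerance: the Gram-matrix product the folklore remark refers to, and the QR-of-transpose variant from approach #3.

```python
import numpy as np

def rows_orthogonal(A, tol=1e-10):
    # Gram matrix: entry (i, j) is the inner product of rows i and j,
    # so mutually orthogonal rows make it diagonal.
    G = A @ A.T
    return bool(np.all(np.abs(G - np.diag(np.diag(G))) < tol))

def rows_orthogonal_qr(A, tol=1e-10):
    # QR of A.T: orthogonal rows of A force the R factor to be diagonal.
    _, R = np.linalg.qr(A.T)
    return bool(np.all(np.abs(R - np.diag(np.diag(R))) < tol))

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 2.0, 0.0]])   # orthogonal rows, not unit length
B = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 0.0]])   # rows are not orthogonal
```

Both tests cost roughly the same number of operations; the QR route avoids forming A*AT explicitly, which the answer argues is numerically better behaved for larger matrices.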
23,507,196 | 2014-05-07T01:25:00.000 | 0 | 0 | 0 | 0 | php,python,sockets | 23,507,273 | 1 | true | 0 | 0 | I've decided it would be better if I still do the HTTP request, but depending on what the request returned, I could have the client send a packet to the server telling it to give the specific user a notification.
For example, if I tried adding user ID #323095643, I send the HTTP request and everything turns out okay and I get an OK response, the flash client sends a packet to the server basically saying "Hey, give user ID #323095643 a notification saying '_ wants you to be their friend'" and it sends it to that user. | 1 | 0 | 0 | I'm working on a game that allows you to add other users as your friend. The game is in Flash AS2 and the server is Python.
I'm currently hoping to make it so the game sends an HTTP request to an API server that will then verify everything, such as if they're already friends, if they can be friends, etc, and then if everything's okay, I'd then insert the request into a request able. Anyway, if the user is currently online, I would like to send the user a notification through the server.
The only way I could see this happening is if I opened up a socket connection through the add friend script and then sent a special packet letting the server know that it needs to send the notification. Right now that seems like it would take a while to do and not so efficient if a lot of users used it daily.
How would I best approach this and is there any other efficient ways? | Communicating to Socket Server From External Script | 1.2 | 0 | 1 | 41 |
23,508,360 | 2014-05-07T03:47:00.000 | 1 | 0 | 1 | 0 | python,fwrite | 23,508,435 | 3 | false | 0 | 0 | One way to do it is to append the new data at the end of the existing file. If you catch an error, delete all that you appended.
If there are no errors, delete everything that you had before appending, so the new file will have only the appended data.
Or create a copy of your file before you start capturing the data into a new file. | 1 | 0 | 0 | I need to write data to a file and overwrite it if the file exists. In case of an error my code should catch the exception and restore the original file (if applicable)
How can I restore the file? Should I read the original one and put its content in, e.g., a list, and then in case of an exception just write this list back to the file?
Are there any other options? Many thanks if anyone can provide some kind of an example
Cheers | Restore original file in case of an error | 0.066568 | 0 | 0 | 186 |
23,508,360 | 2014-05-07T03:47:00.000 | 1 | 0 | 1 | 0 | python,fwrite | 23,508,401 | 3 | false | 0 | 0 | Any attempt to restore on error will be fragile; what if your program can't continue or encounters another error when attempting to restore the data?
A better solution would be to not replace the original file until you know that whatever you're doing succeeded. Write whatever you need to to a temporary file, then when you're ready, replace the original with the temporary. | 2 | 0 | 0 | I need to write data to a file and overwrite it if file exists. In case of an error my code should catch an exception and restore original file (if applicable)
How can I restore the file? Should I read the original one and put its content in, e.g., a list, and then in case of an exception just write this list back to the file?
Are there any other options? Many thanks if anyone can provide some kind of an example
Cheers | Restore original file in case of an error | 0.066568 | 0 | 0 | 186 |
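A sketch of the temp-file approach from the second answer, using only the standard library (os.replace is atomic on POSIX and Windows, so the original is never left half-written):

```python
import os
import tempfile

def safe_write(path, data):
    # Write to a temporary file in the same directory, then swap it in
    # only once everything has succeeded; on failure the original stays.
    dir_name = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=dir_name)
    try:
        with os.fdopen(fd, "w") as tmp:
            tmp.write(data)
        os.replace(tmp_path, path)   # success: atomically replace the original
    except Exception:
        os.remove(tmp_path)          # failure: discard the partial data
        raise
```

The temporary file is created in the same directory as the target so the final rename stays on one filesystem.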
23,510,212 | 2014-05-07T06:21:00.000 | 2 | 0 | 0 | 0 | python,flask,amazon-dynamodb | 23,512,512 | 1 | false | 1 | 0 | No. SQLite is just one option for backend storage. SQLite is mentioned in the tutorial only for its simplicity in getting something working fast and simply in a typical local developer's environment. (No db or service to install/configure, etc.) | 1 | 0 | 0 | I am writing a small web application using Flask and I have to use DynamoDB as backend for some hard requirements.
I went through the tutorial on Flask website without establishing sqlite connection. All data were pulled directly from DynamoDB and it seemed to work.
Since I am new to web development in general and Flask framework, do you see any problems with this approach? | Use Flask with Amazon DynamoDB without SQLite | 0.379949 | 1 | 0 | 637 |
23,513,981 | 2014-05-07T09:31:00.000 | 0 | 0 | 0 | 1 | python,github | 23,514,046 | 3 | false | 0 | 0 | .sh is a shell script, you can just execute it.
./setup.sh | 2 | 0 | 0 | I have downloaded and unzipped cabot (a Python tool) on my Linux system, but I don't know how to install it. In the cabot folder there is a setup.sh file, but when I put build or install it is not working. So what should I do? | How to install cabot in linux | 0 | 0 | 0 | 1,740
23,513,981 | 2014-05-07T09:31:00.000 | 0 | 0 | 0 | 1 | python,github | 23,514,107 | 3 | false | 0 | 0 | It's an ".sh" file right?
Then to run it, what you have to do is:
1)Open Terminal
2)Change directory to file location
3) run the following command.
sh setup.sh | 2 | 0 | 0 | I have downloaded and unzip the cabot(python tool) in my linux system.But then I don't know how to install it.In the cabot folder there is setup.sh file. But when I put build or install it is not working.So What to do? | How to install cabot in linux | 0 | 0 | 0 | 1,740 |
23,515,224 | 2014-05-07T10:26:00.000 | 1 | 1 | 0 | 1 | python,linux | 23,515,529 | 1 | true | 0 | 0 | If you are using logrotate for log rotation then it has options to remove old files, if not you could run something as simple as this once a day in your cron:
find /path/to/log/folder -mtime +5 -type f -exec rm {} \;
Or, more specifically, match a pattern in the filename (this variant just lists the matches):
find . -mtime +5 -type f -name "*.log" -exec ls -l {} \;
Why not set up logrotate for syslog to rotate daily then use its options to remove anything older than 5 days.
Other options involve parsing the log file, keeping certain parts and removing others, which involves writing to another file and back. With live log files this can cause other issues, such as needing to restart the service so it logs back into the new file. So the best option would be logrotate for the syslog.
1. I am using a service. When it runs, a new log file is created every day in a log directory. I want to delete all files older than 5 days in that log directory.
2. I want to delete all the information older than 5 days in a log file (/var/log/syslog).
I don't know how to do that with crontab in linux. Please help me! Thanks in advance! | How to delete some file with crontab in linux | 1.2 | 0 | 0 | 374 |
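A Python equivalent of the find commands in the accepted answer, in case a pure-Python cron job is preferred; the 5-day cutoff mirrors the question, and the folder path is supplied by the caller:

```python
import os
import time

def purge_old_files(folder, days=5):
    # Same effect as: find folder -mtime +days -type f -exec rm {} \;
    cutoff = time.time() - days * 86400
    for name in os.listdir(folder):
        path = os.path.join(folder, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)
```

A crontab entry would then just invoke this script once a day, much like the shell one-liner above.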
23,526,579 | 2014-05-07T19:25:00.000 | 1 | 0 | 0 | 0 | python,scrapy,scrapyd | 24,659,705 | 1 | false | 1 | 0 | Maybe you should do a cron job that executes every three hours and performs a curl call to Scrapyd to schedule the job. | 1 | 0 | 0 | I want to make my spider start every three hours.
I have a Scrapy configuration file located in the c:/scrapyd folder.
I changed the poll_interval to 100.
The spider works, but it doesn't repeat every 100 seconds.
How can I do that? | scrapyd pool_intervel to scheduler a spider | 0.197375 | 0 | 1 | 215
23,529,212 | 2014-05-07T21:56:00.000 | 2 | 0 | 0 | 0 | scripting,mysql-python | 23,529,243 | 1 | true | 0 | 0 | You can just separate the queries with a semicolon and run them as a batch. | 1 | 0 | 0 | I have a MySQL database with some huge tables. I have a task where I must run three queries one after another, and the last one exports to outfile.csv.
i.e.
Query 1. Select values from some tables with a certain parameter, then write them into a new table. Approx. 4.5 hours.
Query 2. After the first one is done, join the new table with another to get results into a new table, then write to outfile.csv. Approx. 2 hours.
How do I automatically call these queries one after another, even though one can take 4 hours to finish?
I am open to any solution: scripts or database functions. I am running on an Ubuntu server, so no graphical solutions.
Thanks for your help. | How to automatically run chain multiple mysql queries | 1.2 | 1 | 0 | 121 |
23,530,254 | 2014-05-07T23:36:00.000 | 2 | 0 | 1 | 1 | python,macos,homebrew | 23,530,352 | 2 | false | 0 | 0 | Use pip inside of the virtualenv and it will isolate the packages to just that virtualenv. Each virtualenv has a local version of pip and will install the packages locally. | 1 | 0 | 0 | Is there any way I can use Homebrew to install packages (like numpy or matplotlib) into isolated virtual environments created using virtualenv, without having the packages installed system-wide? | Use homebrew to install applications to virtual enviornment | 0.197375 | 0 | 0 | 55
23,531,555 | 2014-05-08T02:09:00.000 | 0 | 1 | 0 | 0 | python,emacs,virtualenv,org-mode | 23,557,258 | 1 | false | 1 | 0 | Reads like a bug, please consider reporting it at [email protected]
As a workaround try setting the virtualenv at the Python-side, i.e. give PYTHONPATH as argument.
Alternatively, mark the source block as a region and execute it the common way, bypassing org | 1 | 0 | 0 | I'm running into a few issues on my Emacs + Org mode + Python setup. I thought I'd put this out there to see if the community had any suggestions.
Virtualenv:
I'm trying to execute a python script within a SRC block using a virtual environment instead of my system's python implementation. I have a number of libraries in this virtual environment that I don't have on my system's python (e.g. Matplotlib). Now, I set python-shell-virtualenv-path to my virtualenv's root directory. When I run M-x run-python the shell runs from my virtual environment. That is, I can import Matplotlib with no problems. But when I import Matplotlib within a SRC block I get an import error.
How can I have it so the SRC block uses the python in my virtual
environment and not my system's python?
Is there any way I can set
the path to a given virtual environment automatically when I load an
org file?
HTML5 Export:
I'm trying to export my org-files in 'html5', as opposed to the default 'xhtml-strict'. The manual says to set org-html-html5-fancy to t. I tried searching for org-html-html5-fancy in M-x org-customize but I couldn't find it. I tried adding (setq org-html-html5-fancy t) to my init.el, but nothing happened. I'm not at all proficient in emacs-lisp so my syntax may be wrong. The manual also says I can set html5-fancy in an options line. I'm not really sure how to do this. I tried #+OPTIONS html5-fancy: t but it didn't do anything.
How can I export to 'html5' instead of 'xhtml-strict' in org version
7.9.3f and Emacs version 24.3.1?
Is there any way I can view and customize the back-end that parses
the org file to produce the html?
I appreciate any help you can offer. | Run python from virtualenv in org file & HTML5 export in org v.7.9.3 | 0 | 0 | 0 | 510 |
23,532,399 | 2014-05-08T03:49:00.000 | 0 | 0 | 0 | 1 | python,websocket,tornado | 23,540,966 | 1 | false | 0 | 0 | If you will only ever have one connection, you can call IOLoop.current().stop() from your WebSocketHandler's on_close method. If you need more than one, you can increment a counter in open(), decrement it in on_close(), and stop the IOLoop when it reaches zero. | 1 | 0 | 0 | I want to start a temporary web app when a connection comes in, and stop it when there is no connection and it times out.
I use python tornado WebSocketHandler; can anyone help? An example would be helpful. | How to stop tornado web app when there is no connection? | 0 | 0 | 0 | 53
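The counting logic the answer describes, sketched framework-free: in a real Tornado app, stop_cb would be IOLoop.current().stop, and opened/closed would be called from the handler's open and on_close methods.

```python
class ConnectionTracker:
    """Counts open connections and fires stop_cb when the last one closes."""

    def __init__(self, stop_cb):
        self.count = 0
        self.stop_cb = stop_cb

    def opened(self):           # call from WebSocketHandler.open()
        self.count += 1

    def closed(self):           # call from WebSocketHandler.on_close()
        self.count -= 1
        if self.count == 0:
            self.stop_cb()      # e.g. IOLoop.current().stop()
```

The timeout part of the question could be layered on top by scheduling the stop with a delay and cancelling it if a new connection opens first.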
23,534,594 | 2014-05-08T06:42:00.000 | 0 | 1 | 1 | 0 | python,file-io,newline | 23,535,129 | 1 | false | 0 | 0 | The mode attribute is left intact when opening files. I expect you could check if newlines exists and mode contains 'U'. Other translations are indicated by encoding. | 1 | 0 | 0 | How can I see if a file-like object is in universal newline mode or not (or any details related to that)?
Both Python 2 and/or 3 answers are okay.
Hint: No, the newlines attribute does not reflect this. It is always there when the Python interpreter has universal newlines support. | How to see whether a file was opened in universal newlines mode? (Or whatever newline translations it is doing?) | 0 | 0 | 0 | 54 |
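A small sketch of the inspection the answer suggests, written for Python 3 (where text mode translates newlines by default; the 'U' flag only appears in the mode string on Python 2):

```python
def newline_state(f):
    # mode keeps the flags the file was opened with; a 'U' in it marked
    # universal-newline mode on Python 2.
    universal_flag = "U" in getattr(f, "mode", "")
    # newlines reports which line endings have actually been seen so far
    # (None until at least one line ending has been read).
    return universal_flag, getattr(f, "newlines", None)
```

So newlines only tells you what translation has happened, not what mode was requested, which matches the hint in the question.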
23,537,731 | 2014-05-08T09:24:00.000 | 0 | 0 | 0 | 1 | python,multithreading,concurrency,tornado,gevent | 23,543,454 | 1 | false | 0 | 0 | If by ssh you mean actually running the ssh process, try tornado.process.Subprocess. If you want something more integrated into the server, twisted has an asynchronous ssh implementation. If you're using something like paramiko, threads are probably your best bet. | 1 | 1 | 0 | Scenario:
My server/application needs to handle multiple concurrent requests and for each request the server opens an ssh link to another m/c, runs a long command and sends the result back.
1 HTTP request comes → server starts 1 SSH connection → waits long time → sends back the SSH result as HTTP response
This should happen simultaneously for > 200 HTTP and SSH connections in real time.
Solution:
The server just has to do one task, that is, open an SSH connection for each HTTP request and keep the connection open. I can't even write the code in an asynchronous way, as there is just one task to do: SSH. The IOLoop will get blocked for each request. Callback and deferred functions don't provide an advantage, as the SSH task runs for a long time. Threading sounds like the only way out within an event-driven technique.
I have been going through tornado examples in Python but none suit my particular need:
tornado with twisted
gevent/eventlet pseudo multithreading
python threads
established HTTP servers like Apache
Environment:
Ubuntu 12.04, high RAM & net speed
Note:
I am bound to use Python for coding, and please keep to my scenario only. Opening multiple SSH links while keeping the HTTP connections open sounds like all-asynchronous work, but I want it to look like a synchronous activity. | Handle multiple HTTP connections and a heavy blocking function like SSH | 0 | 0 | 0 | 289
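For the paramiko route the answer mentions, the usual escape hatch is a thread pool: each blocking SSH call runs in a worker thread while the event loop keeps accepting requests. A stdlib-only sketch of that pattern — asyncio stands in for Tornado's IOLoop here, and `blocking_ssh_command` is a placeholder for a real paramiko call, not actual SSH:

```python
import asyncio
import time
from concurrent.futures import ThreadPoolExecutor

def blocking_ssh_command(host):
    # Stand-in for a long-running SSH command (e.g. run via paramiko).
    time.sleep(0.2)
    return "result from %s" % host

async def handle_request(executor, host):
    # Offload the blocking call so the event loop stays responsive.
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(executor, blocking_ssh_command, host)

async def serve(hosts):
    executor = ThreadPoolExecutor(max_workers=200)  # sized for many concurrent links
    return await asyncio.gather(*[handle_request(executor, h) for h in hosts])

start = time.time()
results = asyncio.run(serve(["host-%d" % i for i in range(5)]))
elapsed = time.time() - start
```

Because the five 0.2-second calls overlap in worker threads, the whole batch finishes in roughly 0.2 seconds instead of 1 second; Tornado exposes the same idea through `IOLoop.run_in_executor`.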
23,542,821 | 2014-05-08T13:13:00.000 | 0 | 0 | 1 | 0 | python,django,pycharm | 23,542,926 | 1 | false | 0 | 0 | I'm not sure I understand exactly what your question is.
Look under the VCS menu at the top, or the Changes tab at the bottom. All merge activities are handled by the built-in source control clients for either Subversion or Git.
I think JetBrains' clients for Subversion and Git are top-shelf. | 1 | 0 | 0 | Is it possible to merge two files, one from a remote repository and the other local, if I've already applied a merge?
I can't find any function in the PyCharm menu for that. | Pycharm - Merge again | 0 | 0 | 0 | 118
23,544,818 | 2014-05-08T14:32:00.000 | 0 | 0 | 1 | 0 | python,setuptools | 23,544,854 | 2 | false | 0 | 0 | No, it wouldn't. If you have two files in the same directory, they can import each other. | 1 | 0 | 0 | I have seen a few packages on GitHub that do something like this:
from setuptools import setup, find_packages
import mypackage
setup(name="mypackage", version=mypackage.__version__ ..
This would fail when running "python setup.py develop", as mypackage has not been installed yet. Is there a way to fix this? | How do you list down a python dependency that depends on itself? | 0 | 0 | 0 | 63
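Beyond the same-directory import the answer points out, a common way to sidestep the chicken-and-egg problem entirely is to read `__version__` out of the source file without importing the package, so missing dependencies can't break setup. A sketch — `find_version` and the demo file are illustrative, not part of any package's API:

```python
import io
import os
import re
import tempfile

def find_version(path):
    # Pull __version__ = "x.y.z" out of a source file without importing it.
    source = io.open(path, encoding="utf-8").read()
    match = re.search(r'^__version__\s*=\s*[\'"]([^\'"]+)[\'"]', source, re.M)
    if match is None:
        raise RuntimeError("No __version__ found in %s" % path)
    return match.group(1)

# Demo: a throwaway module containing a version string.
fd, demo = tempfile.mkstemp(suffix=".py")
with os.fdopen(fd, "w") as f:
    f.write('__version__ = "1.2.3"\n')
version = find_version(demo)
os.remove(demo)
```

In setup.py this would then be used as `setup(name="mypackage", version=find_version("mypackage/__init__.py"), ...)`.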
23,547,783 | 2014-05-08T16:44:00.000 | 2 | 0 | 0 | 0 | python,django,django-rest-framework | 23,548,216 | 1 | true | 1 | 0 | This is not a question about Django REST, but about Django itself.
The problem with extending the User object directly is that it is already a concrete model, so extending it will use multi-table inheritance. That's not usually a good idea - especially if you're further extending it.
AbstractUser is an abstract model, but (unlike AbstractBaseUser) contains all the fields that User defines. You should use that. | 1 | 2 | 0 | I am using Django REST to create users for my app.
Everywhere I look, people extend AbstractBaseUser for users.
I tried extending the User model, and it seems to work just fine.
I have a PersonalAbstractUser that extends the Django User. Then, Worker and Client extend PersonalAbstractUser.
Login and custom permissions seem to work just fine up until now, but I am getting concerned when I see that no one else is extending User...
Why is that? Did I miss something? | Why not to extend User in Django REST | 1.2 | 0 | 0 | 74
23,548,149 | 2014-05-08T17:03:00.000 | 0 | 0 | 0 | 1 | python,sockets,udp,ip,openshift | 23,548,201 | 2 | false | 0 | 0 | That will not work on OpenShift; we only offer two kinds of external ports for use: http/https and ws/wss | 1 | 1 | 0 | I have built a Python UDP hole puncher using raw sockets, and I wonder whether there is a service, or an option to use an external server on the web (like a dedicated server), that will host and run this program.
OpenShift was something I considered, but it did not work because it uses Apache as a proxy, and it is therefore impossible to use raw sockets for the connection.
I prefer a free solution
thanks a lot | Server host for python raw sockets use | 0 | 0 | 1 | 287 |
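For what it's worth, the UDP mechanics a hole puncher relies on need only the stdlib socket module; the hosting question is purely about whether the provider's network allows arbitrary UDP. A loopback-only sketch of the basic exchange — in real hole punching the peer `(ip, port)` pairs would come from a rendezvous server, not `getsockname`:

```python
import socket

# Two UDP endpoints on the loopback interface.
a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
a.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
b.bind(("127.0.0.1", 0))
a.settimeout(2)
b.settimeout(2)

# Each side fires a datagram at the other's (ip, port) pair.
a.sendto(b"ping", b.getsockname())
msg, addr = b.recvfrom(1024)
b.sendto(b"pong", addr)
reply, _ = a.recvfrom(1024)

a.close()
b.close()
```

Any host that lets you open plain UDP sockets can run this; proxied platforms that only expose http/ws ports cannot.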
23,548,519 | 2014-05-08T17:21:00.000 | 2 | 0 | 0 | 0 | python,django,security | 23,548,585 | 1 | true | 1 | 0 | This is not really suited for Stack Overflow, but the suggestion I would make is to take the parts of your code that are subject to audit and write them as a Python C extension module, which is then imported. You can ship the compiled module along with your normal, unmodified Django application.
This would only work if certain parts of your code are subject to this audit/restriction and not the entire application.
Your only other recourse is to host it yourself and provide your own audit/controls on the source. | 1 | 2 | 0 | Here's the deal: I have a Python application for business written in Django. It's not in Cloud, customers should install them at their own servers.
However, Brazilian IT laws for tax-payment-calculation software force me to homologate every piece of code (in this case, every file.py). They generate an MD5 hash, and if a customer of mine is running a modified version, I have to pay a fine and could even be sued by the Government.
I really don't care if my source code is available to everyone. Really. I just want to guarantee no changes at the source code.
Does anyone have an idea of how to protect the code? Customers will have root access to the servers, so a simple "statement of compliance" would not guarantee anything... | Allow customers to see Python code, but secure modification | 1.2 | 0 | 0 | 65
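Whichever route is taken, the auditors' check can at least be reproduced by hashing every file.py and comparing against the homologated values. A stdlib sketch of building such a manifest — `build_manifest` and the demo layout are illustrative, not a mandated format:

```python
import hashlib
import os
import shutil
import tempfile

def md5_of_file(path):
    # Hash in chunks so large files do not need to fit in memory.
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(root):
    # Map each .py file (path relative to root) to its MD5 hex digest.
    manifest = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(".py"):
                full = os.path.join(dirpath, name)
                manifest[os.path.relpath(full, root)] = md5_of_file(full)
    return manifest

# Demo on a throwaway directory with one known file.
root = tempfile.mkdtemp()
with open(os.path.join(root, "app.py"), "wb") as f:
    f.write(b"print('ok')\n")
manifest = build_manifest(root)
expected = hashlib.md5(b"print('ok')\n").hexdigest()
shutil.rmtree(root)
```

Note this only detects modification; as the answer says, nothing stops a root user from editing files and, if they care to, recomputing hashes — so it is a verification aid, not protection.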
23,550,613 | 2014-05-08T19:14:00.000 | 1 | 0 | 0 | 0 | python,pyqt4 | 23,551,836 | 3 | false | 0 | 1 | How do you go about selecting a specific line in a QTextEdit and changing, say, the font colour to green?
I am using a QTextEdit widget to display the contents of a file: a sequence of commands being sent over RS-232. I would like to provide some visual feedback as to which line is being executed, say, by changing the text colour.
I am able to change the colour of text being appended to a QTextEdit (for a log I display), but that doesn't work here.
I have been looking into QCursor but am a bit lost | format a specific line in a QtextEdit | 0.066568 | 0 | 0 | 2,307
23,551,808 | 2014-05-08T20:24:00.000 | 3 | 0 | 0 | 1 | python,django,architecture,celery | 23,846,005 | 2 | false | 1 | 0 | Celery actually makes this pretty simple, since you're already putting the tasks on a queue. All that changes with more workers is that each worker takes whatever's next on the queue - so multiple workers can process at once, each on their own machine.
There's three parts to this, and you've already got one of them.
Shared storage, so that all machines can access the same files
A broker that can hand out tasks to multiple workers - redis is fine for that
Workers on multiple machines
Here's how you set it up:
User uploads a file to the front-end server, which stores it in your shared storage (e.g. S3, Samba, NFS, whatever) and stores the reference in the database
The front-end server kicks off a celery task to process the file, e.g.
def my_view(request):
    # ... deal with storing the file
    file_in_db = store_file(request)
    my_process_file_task.delay(file_in_db.id)  # Use PK of DB record
    # do rest of view logic...
On each processing machine, run celery-worker:
python manage.py celery worker --loglevel=INFO -Q default -E
Then as you add more machines, you'll have more workers and the work will be split between them.
Key things to ensure:
You must have shared storage, or this gets much more complicated
Every worker machine must have the right Django/Celery settings to be able to find the redis broker and the shared storage (e.g. S3 bucket, keys, etc.) | 2 | 1 | 0 | Currently we have everything set up on a single cloud server, which includes:
Database server
Apache
Celery
redis to serve as a broker for celery and for some other tasks
etc
Now we are thinking to break apart the main components to separate servers e.g. separate database server, separate storage for media files, web servers behind load balancers. The reason is to not to pay for one heavy server and use load balancers to create servers on demand to reduce cost and improve overall speed.
I am really confused about Celery only: has anyone ever used Celery on multiple production servers behind load balancers? Any guidance would be appreciated.
Consider one small use case and how it is currently done on a single server (the confusion is how the same thing can be done when we use multiple servers):
User uploads an abc.pptx file -> a reference is stored in the database -> the file is stored on the server disk
A task (convert document to PDF) is created and goes into the redis (broker) queue
Celery, which is running on the same server, picks the task from the queue
It reads the file and converts it to PDF using software called docsplit
It creates a folder on the server disk (which will be used as static content later on) and puts the PDF file, its thumbnail, the plain text, and the original file there
Considering the above use case, how can you set up multiple web servers that can perform the same functionality? | django-celery infrastructure over multiple servers, broker is redis | 0.291313 | 0 | 0 | 2,804
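Stripped of Celery and Redis, the broker-plus-workers pattern described in the answer above reduces to a shared queue that any number of workers drain. A stdlib sketch of that mechanic — threads stand in for worker machines, and `process_file` is a placeholder for the docsplit conversion:

```python
import queue
import threading

def process_file(file_id):
    # Stand-in for the real work: convert a document to PDF, thumbnail, text.
    return "pdf-for-%d" % file_id

def worker(tasks, results, lock):
    # Each worker takes whatever is next on the queue until it is empty.
    while True:
        try:
            file_id = tasks.get_nowait()
        except queue.Empty:
            return
        output = process_file(file_id)
        with lock:
            results.append(output)
        tasks.task_done()

tasks = queue.Queue()
for file_id in range(10):  # the front end enqueues task references (DB PKs)
    tasks.put(file_id)

results = []
lock = threading.Lock()
workers = [threading.Thread(target=worker, args=(tasks, results, lock))
           for _ in range(3)]  # three workers, as if on three machines
for w in workers:
    w.start()
for w in workers:
    w.join()
```

Adding more machines in the Celery setup corresponds to starting more worker threads here: the queue splits the work automatically, which is why shared storage (not the queue) is the hard part of scaling out.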