Q_Id (int64) | CreationDate (string) | Users Score (int64) | Other (int64) | Python Basics and Environment (int64) | System Administration and DevOps (int64) | Tags (string) | A_Id (int64) | AnswerCount (int64) | is_accepted (bool) | Web Development (int64) | GUI and Desktop Applications (int64) | Answer (string) | Available Count (int64) | Q_Score (int64) | Data Science and Machine Learning (int64) | Question (string) | Title (string) | Score (float64) | Database and SQL (int64) | Networking and APIs (int64) | ViewCount (int64) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
18,039,614 | 2013-08-04T04:07:00.000 | 1 | 0 | 1 | 1 | python,macos,compiler-construction,pyinstaller,py2app | 18,039,652 | 2 | true | 0 | 0 | I know that on a Mac you can change the extension of the file to .command, which makes it so you can just click on it and it will run through the terminal, if that's what it is specified to do. However, I'm not sure whether it will work if they do not actually have Python installed. | 1 | 3 | 0 | I finally got py2app to work, and my program was made. However, it won't open because it relies on the terminal and raw_input. I just found out py2app is more for GUI interfaces.
All I want is to turn the program into an application my users can click on, and have it open in Terminal, without them having to either install Python or go to the terminal and type python "filename" (also, don't they have to set up the paths and everything to do that?).
Please help; I've been pulling my hair out all day looking for the answer. If this isn't possible, I'm just going to give them the .py file and instruct them to start it with python in the terminal and hope it's already set up so they can do that. | Is there any way to turn a Python program that has the user use the terminal (no GUI) into a stand-alone app for Mac? | 1.2 | 0 | 0 | 371 |
18,040,162 | 2013-08-04T05:58:00.000 | 1 | 0 | 1 | 0 | python,emacs,python-3.x,python-3.3 | 41,845,769 | 2 | false | 0 | 0 | Emacs may be in a loop: Typing C-g may get you out of it. | 2 | 2 | 0 | I am using emacs to write Python code. However sometimes Emacs gets frozen with the message "Loading Compile...Done" at the bottom of the editor. I won't be able to make changes to the file or execute any commands when this happens.
How can I fix this issue? | Emacs frozen with message "Loading Compile...Done" | 0.099668 | 0 | 0 | 649 |
18,040,162 | 2013-08-05T05:58:00.000 | 0 | 0 | 1 | 0 | python,emacs,python-3.x,python-3.3 | 18,078,549 | 2 | false | 0 | 0 | You should check that Emacs is actually able to run Python, or whichever compiler you are trying to invoke using compilation mode. If it is Python, try:
Giving the full path to the Python executable.
Installing the executable at a path with no spaces, especially on Windows.
How can I fix this issue? | Emacs frozen with message "Loading Compile...Done" | 0 | 0 | 0 | 649 |
18,042,919 | 2013-08-04T12:26:00.000 | 0 | 0 | 1 | 0 | python,pyqt,virtualenv,pip,pyqt5 | 49,553,504 | 5 | false | 0 | 0 | Anon's solution of adding a Qt libraryPath worked for me. I am using Anaconda3 on Windows. But I found an alternative.
Copy the file …\Anaconda3\qt.conf to the Scripts folder in the virtual environment. Now I don't need to change any Python code.
The conf file seems to have been created by …\Anaconda3\Scripts\.qt-post-link.bat. | 1 | 9 | 0 | I installed PyQt5 globally on my win7 system (python 3.3), using the installer provided from the official riverbank website.
Then I created a new --no-site-packages virtualenv, where the only things I see listed after typing pip list are pip (1.4) and setuptools (0.9.7).
The problem now, however, is that I need to install the complete PyQt5 there too, and this seems impossible using the pip tool.
Both pip install sip and pip install PyQt5 inside the virtual environment are returning errors.
Can someone provide a "how to" guide of what exactly I should do?
I also want to be able to work with that PyQt5 (of the new virtualenv) from inside IDLE, so
I copied the tcl folder from the global installation of my Python to the location of my virtual environment and also created a shortcut targeting: {location of my virtual environment}\Scripts\pythonw.exe C:\Python33\Lib\idlelib\idle.pyw so I could open the virtualenv IDLE and not the global one. (I hope I did not do anything wrong there... corrections welcome.) | How to install PyQt5 on a new virtualenv and work on an IDLE | 0 | 0 | 0 | 25,557 |
18,043,529 | 2013-08-04T13:37:00.000 | 0 | 0 | 0 | 0 | python,django,one-to-one,modeladmin | 18,045,653 | 1 | false | 1 | 0 | Add OneToOneField for Photo in House model (you will need to reference it as "YOUR_APP.Photo" to avoid circular references), provide ModelAdmin with custom form, and in that form's constructor filter queryset for that field to display only photos for current house.
Different approach is to add highlighted flag in Photo model and ensure on save that only one photo is highlighted for one house. | 1 | 1 | 0 | I have 2 model classes in my django app:
1: House (name, location, id...)
2: Photo (description, id, house -foreign key-).
In the admin interface, the photo is displayed as inline for the house form, but now, I want the user to be able to choose ONE picture as highlighted for that house.
My question is: is there some way to add a radio button so that the user is only able to choose one picture?
Could you help me to achieve this, please?
Thanks! | Django admin radio-button for one-to-one relationship | 0 | 0 | 0 | 419 |
18,045,565 | 2013-08-04T17:26:00.000 | 2 | 0 | 1 | 0 | python,regex | 18,072,336 | 3 | false | 0 | 0 | I think you don't need regexpes for this problem,
you need some recursial graph search function | 1 | 5 | 0 | I have a regex like this '^(a|ab|1|2)+$' and want to get all sequence for this...
for example for re.search(reg, 'ab1') I want to get ('ab','1')
I can get an equivalent result with the pattern '^(a|ab|1|2)(a|ab|1|2)$',
but I don't know how many blocks have been matched with (pattern)+
Is this possible, and if yes - how? | Python regexp: get all group's sequence | 0.132549 | 0 | 0 | 1,138 |
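A runnable sketch of the problem and a standard-library workaround (validating first, then tokenizing with the longer alternatives listed before their prefixes, so ab is tried before a):

```python
import re

s = 'ab1'

# A repeated group only remembers its last repetition:
m = re.match(r'^(a|ab|1|2)+$', s)
print(m.group(1))  # -> '1'

# Workaround: validate the whole string first, then tokenize it,
# listing longer alternatives before their prefixes ('ab' before 'a'):
if re.match(r'^(?:ab|a|1|2)+$', s):
    pieces = re.findall(r'ab|a|1|2', s)
    print(pieces)  # -> ['ab', '1']
```

A true recursive search, as the answer suggests, becomes necessary when the alternatives overlap so heavily that a single greedy left-to-right tokenization can fail even though some other split of the string would succeed.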
18,045,780 | 2013-08-04T17:48:00.000 | 1 | 0 | 1 | 0 | python,python-2.7,wxpython | 18,046,345 | 1 | true | 0 | 0 | You need to use a Masked Edit Control see the wxPython demo for a number of examples. | 1 | 0 | 0 | I have TextCtrl field in which user should enter the date in format dd.mm.yyyy
I want to force him to do it in that format, so is it possible to have something like this
* * . * * . * * * * in the text field, so that when he enters numbers, those numbers replace the *s.
That way he doesn't need to type dots, only numbers. | Date input field format | 1.2 | 0 | 0 | 239 |
18,046,323 | 2013-08-04T18:43:00.000 | 6 | 0 | 1 | 0 | ipython-notebook,pythonanywhere | 18,062,428 | 1 | true | 0 | 0 | PythonAnywhere dev here -- unfortunately it's not possible right now, but it's on our list. I've added an upvote on your behalf. | 1 | 6 | 0 | Is it possible to run the IPython Notebook via PythonAnywhere? This would be very useful. | Is it possible to serve an IPython Notebook from PythonAnywhere | 1.2 | 0 | 0 | 619 |
18,047,636 | 2013-08-04T21:15:00.000 | 1 | 0 | 0 | 0 | python,user-interface,tkinter | 18,048,021 | 2 | false | 0 | 1 | Yes, it is possible. There are two ways to do it:
Whenever you want to update the label from your code you can call the_widget.configure(text=the_text). This will change the text of the label.
You can create an instance of a tkinter.StringVar and assign it to the textvariable attribute of a label. Whenever you change the value of the variable (via the_variable.set(the_text)), the label will automatically update.
Note that for either of these to work, the event loop needs to be able to process events (ie: you won't see anything if your function takes a long time to run and you never call update_idletasks or re-enter the event loop). | 1 | 4 | 0 | I have created a tkinter GUI for my python script. When I run the script, I want a dynamic string in one of the Label widgets on the GUI window, which will display:
"Working."
Then:
"Working.."
then
"Working..."
and then start from "Working." again until the script is completed.
(Actually I'd prefer a progress bar in this area)
Is it possible? | Python: Is it possible to create an tkinter label which has a dynamic string when a function is running in background? | 0.099668 | 0 | 0 | 22,257 |
18,047,710 | 2013-08-04T21:22:00.000 | 1 | 0 | 0 | 0 | algorithm,python-3.x | 18,047,742 | 2 | false | 0 | 0 | Divide the world into zones. You only need to check at most 4 zones if the zone width is slightly larger than the maximum viewing distance.
Using a quad-tree or a kd-tree has the disadvantage that you need to constantly update the structure, but it might work better; do some profiling. | 1 | 0 | 0 | I'm working on an evolutionary simulation with predators, prey, and food (plants that grow on terrain depending on the conditions and meat that creatures give off when they die).
Each of them occupies an (x,y) position.
At the moment, each creature has a few "eyes" which are sensitive to the red, green and blue color channels, and when a creature or a piece of food is within their viewing distance, the eyes react by sending an input to their neural network, depending on the color of the object they are seeing, its relative angle and its distance from the creature.
What I'm doing right now is iterating through ALL the plants, meat pieces, and creatures, and checking if they are within the creature's viewing distance. If that condition is true, then the inputs for the network are calculated.
The problem is that the world is massive (about 10,000*10,000 "units") compared to the creatures' viewing distance, which is normally between 150 and 300 "units". On top of that, the plant count can get really high depending on terrain conditions (up to a few thousand, too), together with all the other creatures and meat pieces.
So, I normally end up with a massive loop being performed for each creature, which really slows down the simulation, when most of the creatures and food pieces checked are completely irrelevant (are too far away).
What I'm asking for is some method or algorithm that can reduce the number of points being checked for distance in each loop, limiting the distance of the points being checked, or some other technique.
PS: I thought about dividing the simulation in various "zones" so if a creature was in a zone it would only check for other points (food and other creatures) in that particular zone. However, as they are continuously moving, if they were on the edge of a zone it would make their view very inaccurate.
I also slightly improved the speed by checking distance^2 (not doing the sqrt), and then calculating it only if it was smaller than viewing_distance^2. | Points within a distance of a moving point | 0.099668 | 0 | 0 | 162 |
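The zone idea from the answer, sketched as a uniform grid (cell size and coordinates here are illustrative assumptions). Checking the 3x3 block of cells around a creature is what resolves the edge-of-zone worry from the PS, since neighbours just across a boundary are still picked up:

```python
from collections import defaultdict

CELL = 300  # cell width >= maximum viewing distance (assumed value)

def build_grid(points):
    """Bucket (x, y) points into square cells keyed by cell coordinates."""
    grid = defaultdict(list)
    for x, y in points:
        grid[(x // CELL, y // CELL)].append((x, y))
    return grid

def nearby(grid, x, y, radius):
    """Points within `radius` of (x, y); only 9 cells are scanned,
    so the rest of the 10,000*10,000 world is never even looked at."""
    cx, cy = x // CELL, y // CELL
    found = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for px, py in grid.get((cx + dx, cy + dy), ()):
                if (px - x) ** 2 + (py - y) ** 2 <= radius ** 2:
                    found.append((px, py))
    return found

grid = build_grid([(10, 10), (250, 40), (5000, 5000)])
print(nearby(grid, 0, 0, 300))  # -> [(10, 10), (250, 40)]
```

Rebuilding the buckets each frame (or moving a creature to a new bucket when it crosses a cell boundary) is cheap compared with the all-pairs loop, and the squared-distance trick from the question still applies inside the inner test.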
18,048,357 | 2013-08-04T22:45:00.000 | 1 | 0 | 0 | 1 | python,tornado | 19,019,586 | 1 | false | 0 | 0 | If I'm understanding your question correctly, all you need to do is call IOLoop.add_callback from the thread that is reading from the queue. This will run your callback in the IOLoop's thread so you can write your message out on the client websocket connections. | 1 | 1 | 0 | I have a tornado application that will serve data via websocket.
I have a separate blocking thread which is reading input from another application and pushing an object into a Queue and another thread which has a blocking listener to that Queue.
What I would like is for the reader thread to somehow send a message to tornado whenever it sees a new item in the Queue and then tornado can relay that via websocket to listening clients.
The only way I can think to do this is to have a websocket client in the reader thread and push the information to tornado via websocket. However it seems that I should be able to do this without using websocket and somehow have tornado listen for non websocket async events and then call a callback.
But I can't find anything describing how to do this. | How to add custom events to tornado | 0.197375 | 0 | 0 | 625 |
18,048,512 | 2013-08-04T23:09:00.000 | 2 | 0 | 1 | 0 | python,html,regex,parsing,html-parsing | 18,048,532 | 2 | false | 0 | 0 | People shy away from using regexes to search HTML because a regex isn't the right tool for the job when parsing tags. But everything should be considered on a case-by-case basis. You aren't searching for tags, you are searching for a well-defined string in a document. It seems to me the simplest solution is just a regex or some sort of XPath expression -- simple parsing requires simple tools. | 1 | 0 | 0 | Sometimes I am not sure when I have to use one or the other. I usually parse all sorts of things with Python, but I would like to focus this question on HTML parsing.
Personally I find DOM manipulation really useful when having to parse more than two regular elements (i.e. title and body of a list of news, for example).
However, I have found myself in situations where it is not clear to me whether to build a regex or try to get the desired value by simply manipulating strings. A particular fictional example: I have to get the total number of photos in an album, and the only way to get it is by parsing text like this:
(1 of 190)
So I have to get the '190' from the whole HTML document. I could write a regex for that, although regex is not exactly the best for parsing HTML, or so I have always understood. On the other hand, using the DOM seems like overkill to me as it is just a single element. String manipulation seems to be the best way, but I am not really sure whether I should proceed like that in a case like this.
Can you tell me how you would parse this kind of single element from an HTML document using Python (or any other language)? | Should I use regex or just DOM/string manipulation? | 0.197375 | 0 | 1 | 525 |
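A sketch of the regex route for this exact case (the surrounding markup is an invented example):

```python
import re

# Hypothetical snippet containing the well-delimited token:
html = '<div class="album-info">Photos <span>(1 of 190)</span></div>'

m = re.search(r'\((\d+) of (\d+)\)', html)
if m:
    current, total = m.groups()
    print(total)  # -> '190'
```

The pattern targets one well-delimited token rather than HTML structure, which is exactly the case the answer argues is fine for a regex.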
18,049,548 | 2013-08-05T01:47:00.000 | 1 | 0 | 1 | 0 | python,stack-trace | 53,310,998 | 4 | false | 0 | 0 | Also try the "py-spy" module, which can connect to a python process and get the instantaneous stack dump. | 2 | 23 | 0 | traceback.format_exc()
can get it, but only when an exception has been raised.
traceback.print_stack()
prints the stack without an exception needed, but it does not return a string.
There doesn't seem to be a way to get the stack trace string without raising an exception in Python - or is there? | How to get stack trace string without raising exception in python? | 0.049958 | 0 | 0 | 11,592 |
18,049,548 | 2013-08-05T01:47:00.000 | 33 | 0 | 1 | 0 | python,stack-trace | 18,049,562 | 4 | true | 0 | 0 | It's traceback.extract_stack() if you want convenient access to module and function names and line numbers, or ''.join(traceback.format_stack()) if you just want a string that looks like the traceback.print_stack() output. | 2 | 23 | 0 | traceback.format_exc()
can get it, but only when an exception has been raised.
traceback.print_stack()
prints the stack without an exception needed, but it does not return a string.
There doesn't seem to be a way to get the stack trace string without raising an exception in Python - or is there? | How to get stack trace string without raising exception in python? | 1.2 | 0 | 0 | 11,592 |
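Both forms from the accepted answer, as a runnable sketch:

```python
import traceback

def inner():
    # Structured form: FrameSummary objects with .filename, .lineno,
    # .name and .line attributes (Python 3.5+).
    frames = traceback.extract_stack()
    print([f.name for f in frames[-2:]])  # -> ['outer', 'inner']

    # String form: looks like traceback.print_stack() output,
    # but is returned instead of printed - no exception needed.
    return ''.join(traceback.format_stack())

def outer():
    return inner()

stack_text = outer()
print('outer' in stack_text and 'inner' in stack_text)  # -> True
```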
18,050,770 | 2013-08-05T04:50:00.000 | 2 | 0 | 0 | 0 | python,eclipse,openerp | 18,071,459 | 2 | false | 1 | 0 | Do you mean you want a dynamic field on the form/tree view or in the model?
If it is in the view, then you override fields_view_get, call super, and then process the returned XML for the form type you want, adding in the field or manipulating the XML. ElementTree is your friend here.
If you are talking about having a dynamic database field, I don't think you can: OpenERP creates a registry for each database when that database is first accessed, and this process performs database refactoring at that time. The registry contains the singleton model instances you get with self.pool.get...
To achieve this you will need to create some kind of generic field like field1 and then in fields_view_get change the string attribute to give it a dynamic label.
Actually, a plan C occurs to me. You could create a properties type of table, use a functional field to read the value for the current user, and override fields_view_get to do the form. | 2 | 0 | 0 | Hi, I am working on an OpenERP module. I want to make a field dynamically: I want to take the name of a field from the user and then create that field. How can this be done? Can I do it with fields.function to return name, char type? Please help. | how to set name of a field dynamically in openerp? | 0.197375 | 0 | 0 | 939 |
18,050,770 | 2013-08-05T04:50:00.000 | 0 | 0 | 0 | 0 | python,eclipse,openerp | 18,071,982 | 2 | false | 1 | 0 | You can create Fields Dynamically by the help of class self.pool.get('ir.model.fields')
Use Create Function. | 2 | 0 | 0 | Hi I am working on an openerp module . I want to make a field dynamically . I want to take a name of a field from user and then create a field to it . How this can be done ? Can I do it with fields.function to return name, char type ? Plz help | how to set name of a field dynamically in openerp? | 0 | 0 | 0 | 939 |
18,052,515 | 2013-08-05T07:16:00.000 | 1 | 0 | 0 | 0 | python,django,django-manage.py | 18,085,375 | 1 | true | 1 | 0 | Some dependency might be starting a thread. Django will wait for all threads to finish when autoreloading on code changes, or executing a management command. Inspect all dependencies to identify which one might be causing this problem. | 1 | 1 | 0 | During the last few days I have been observing some very strange behavior in one of my Django projects:
When I run some manage.py commands I see that although the commands are executed they do not end. For instance, if I try running syncdb:
c:\django> python manage.py syncdb
Syncing...
Creating tables ...
Creating table questions_category
Creating table questions_question
Creating table questions_answer
Installing custom SQL ...
Installing indexes ...
Installed 0 object(s) from 0 fixture(s)
At this point I should get a command prompt - but I don't! I have the same behavior with various other manage.py commands - they run OK but they don't exit (for instance dumpdata or loaddata - the data is dumped/loaded OK, but when these finish I don't get the command prompt back)! Has anybody observed the same behavior? Is there a way to debug it? I tried adding print statements at the end of my settings.py and I could see the output without a problem.
Also, another problem I have, which is probably related to the above, is that the runserver_plus command is no longer able to detect code changes. So, when I run manage.py runserver_plus and change, for instance, my settings.py, I see this:
* Detected change in 'C:\\progr\\py\\adeies\\adeies\\settings.py', reloading
And it stops there :( It doesn't reload the application! Using the normal runserver reloads the application without a problem; however, for obvious reasons I prefer using the runserver_plus command.
Do you have any ideas on how to debug this?
Thanks! | Django strange behavior: manage.py commands do not end | 1.2 | 0 | 0 | 1,139 |
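One way to find the dependency at fault is to list the non-daemon threads that Django ends up waiting on; a sketch (the worker thread here stands in for whatever a dependency might start):

```python
import threading

stop = threading.Event()
worker = threading.Thread(target=stop.wait, name='worker')  # hypothetical culprit
worker.start()

def lingering_threads():
    """Non-daemon threads (besides the main one) that keep the process alive."""
    return [t.name for t in threading.enumerate()
            if t is not threading.main_thread() and not t.daemon]

alive = lingering_threads()
print('worker' in alive)  # -> True
stop.set()     # let the worker finish so the process can exit
worker.join()
```

Calling something like this at the end of a management command (or from settings.py) can reveal which import started the thread that keeps manage.py from returning.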
18,052,778 | 2013-08-05T07:31:00.000 | 2 | 0 | 1 | 0 | python,logging | 42,301,172 | 3 | false | 0 | 0 | Not usually; it is typically not meant to be passed as a parameter.
The convention is to use log = logging.getLogger(__name__) at the top of each module. The value of __name__ is different for each module, and that value can then be reflected in each log message. | 1 | 5 | 0 | A Python application we're developing requires a logger. A coworker argues that the logger should be created and configured in every class that uses it. My opinion is that it should be created and configured at application start and passed in as a constructor parameter.
Both variants have their merits and we're unsure what the best practice is. | Should a Python logger be passed as parameter? | 0.132549 | 0 | 0 | 4,054 |
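The per-module convention described above, sketched (the myapp names are hypothetical):

```python
import logging

# At the top of each module; __name__ yields a hierarchical logger
# name such as "myapp.db", which is a child of "myapp":
log = logging.getLogger(__name__)

# Configure once, at application start - not in every class:
logging.basicConfig(format='%(name)s %(levelname)s %(message)s',
                    level=logging.INFO)

# No constructor parameter needed: loggers are process-wide
# singletons per name, so every lookup returns the same object.
print(logging.getLogger('myapp.db') is logging.getLogger('myapp.db'))  # -> True
log.info('application started')
```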
18,056,464 | 2013-08-05T10:55:00.000 | 17 | 1 | 1 | 0 | python,oop,unit-testing | 18,056,870 | 1 | false | 0 | 0 | The two are quite different.
setUpClass is a class method, for one, so it'll only let you set class attributes.
They are also called at different times. The test runner creates a new instance for every test. If your test class contains 5 test methods, 5 instances are created and __init__ is called 5 times.
setUpClass is normally called only once. (If you shuffle up test ordering and test methods from different classes are intermingled, setUpClass can be called multiple times, use tearDownClass to clean up properly and that won't be a problem).
Also, a test runner usually creates all test instances at the start of the test run; this is normally cheap, as test instances don't hold (much) state so won't take up much memory.
As a rule of thumb, you should not use __init__ at all. Use setUpClass to create state shared between all the tests, and use setUp to create per-test state. setUp is called just before a test is run, so you can avoid building up a lot of memory-intensive state until it is needed for a test, and not before. | 1 | 14 | 0 | Is there any runtime-logic difference between these two methods? Or any behaviour differences?
If not, then should I forget about __init__ and use only setUpClass, thinking of unittest classes as namespaces rather than as part of the language's OOP paradigm? | When should I use setUpClass and when __init__? | 1 | 0 | 0 | 3,648 |
18,058,389 | 2013-08-05T12:39:00.000 | 1 | 0 | 1 | 1 | python,windows,command-line,cmd | 66,637,678 | 8 | false | 0 | 0 | Simply add both to the environment variables, then move the version you want to use to the top. | 4 | 63 | 0 | I'm trying to find the best way to switch between the two Python interpreters, 2.7 and 3.3.
I ran the Python script from cmd like this:
python ex1.py
Where do I set the "python" entry in the Windows environment variables to point to either Python 3.3 or 2.7?
I am wondering if there is an easy way to switch between the two versions from the cmd line? | How to switch between python 2.7 to python 3 from command line? | 0.024995 | 0 | 0 | 192,684 |
18,058,389 | 2013-08-05T12:39:00.000 | 0 | 0 | 1 | 1 | python,windows,command-line,cmd | 70,420,111 | 8 | false | 0 | 0 | Are you using Python version 3+?
Go to your project path
Run py -[version_number_here] and hit Enter
-> This will open the Python interactive shell (sort of)
Happy coding! | 4 | 63 | 0 | I'm trying to find the best way to switch between the two Python interpreters, 2.7 and 3.3.
I ran the Python script from cmd like this:
python ex1.py
Where do I set the "python" entry in the Windows environment variables to point to either Python 3.3 or 2.7?
I am wondering if there is an easy way to switch between the two versions from the cmd line? | How to switch between python 2.7 to python 3 from command line? | 0 | 0 | 0 | 192,684 |
18,058,389 | 2013-08-05T12:39:00.000 | -5 | 0 | 1 | 1 | python,windows,command-line,cmd | 41,410,363 | 8 | false | 0 | 0 | You can try to rename the python executable in the Python 3 folder to python3, that is, if it was formerly named python... it worked for me. | 4 | 63 | 0 | I'm trying to find the best way to switch between the two Python interpreters, 2.7 and 3.3.
I ran the Python script from cmd like this:
python ex1.py
Where do I set the "python" entry in the Windows environment variables to point to either Python 3.3 or 2.7?
I am wondering if there is an easy way to switch between the two versions from the cmd line? | How to switch between python 2.7 to python 3 from command line? | -1 | 0 | 0 | 192,684 |
18,058,389 | 2013-08-05T12:39:00.000 | 3 | 0 | 1 | 1 | python,windows,command-line,cmd | 55,208,194 | 8 | false | 0 | 0 | In case you have both Python 2 and 3 in your path, you can move the Python27 folder up in your path, so it is found and executed first. | 4 | 63 | 0 | I'm trying to find the best way to switch between the two Python interpreters, 2.7 and 3.3.
I ran the Python script from cmd like this:
python ex1.py
Where do I set the "python" entry in the Windows environment variables to point to either Python 3.3 or 2.7?
I am wondering if there is an easy way to switch between the two versions from the cmd line? | How to switch between python 2.7 to python 3 from command line? | 0.07486 | 0 | 0 | 192,684 |
18,059,436 | 2013-08-05T13:31:00.000 | 1 | 0 | 0 | 0 | python,python-2.7,python-3.x,diff,python-2to3 | 18,995,552 | 1 | true | 0 | 1 | 2to3 -w will both replace the files and print the diffs; you need to use --no-diffs to turn off the diff output. 2to3 --no-diffs -w dir/*.py should do the trick. | 1 | 1 | 0 | OK, so I know 2to3 only provides a diff list. However, 2to3 should modify the actual file, right?
When I run this command with -w it gives me a diff list. The file is not changed (it is still Tkinter). Also, I see no backup like 2to3 is supposed to provide.
Edit: I actually did this on a folder. One file in the folder (which was already 3.x) was modified. I put this file in the folder by mistake, but the files I wanted were not changed (though the terminal did give me a diff list). | python 2to3 doesn't change huey file when -w argument is provided | 1.2 | 0 | 0 | 182 |
18,064,808 | 2013-08-05T18:09:00.000 | 3 | 0 | 1 | 0 | python,math | 18,064,953 | 1 | true | 0 | 0 | Is it a program or a function? If it's a program, something that will be invoked by people, the right way is to output the phrase "No solutions" or something like that.
Now, if it's a function that returns variable(s), the question is different. First, not all languages have a None as a possible numeric value; C/C++, for example, does not. Does the code solve any kind of equation? In that case, consider this. An equation may have multiple roots. That means you should somehow return a collection of roots. If there are no roots, an empty collection would be the right thing to return.
Also, an equation may have an infinite number of roots (example: 0*x=0). | 1 | 0 | 0 | I'm writing a function that solves equations.
And if an equation has no roots, is it correct to set x to None?
Or will I get problems with that in the future? Is there a better option? | If equation has no roots | 1.2 | 0 | 0 | 197 |
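A sketch of the empty-collection convention for a quadratic (the function name and structure are my own):

```python
import math

def real_roots(a, b, c):
    """Real solutions of a*x**2 + b*x + c == 0, as a list.

    No roots gives an empty list rather than None, so callers can
    always iterate over (or count) the result without special cases."""
    if a == 0:
        if b == 0:
            raise ValueError('degenerate equation')  # e.g. 0*x = 0
        return [-c / b]
    d = b * b - 4 * a * c
    if d < 0:
        return []                         # no real roots
    if d == 0:
        return [-b / (2 * a)]
    s = math.sqrt(d)
    return [(-b - s) / (2 * a), (-b + s) / (2 * a)]

print(real_roots(1, 0, -4))  # -> [-2.0, 2.0]
print(real_roots(1, 0, 1))   # -> []
```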
18,065,256 | 2013-08-05T18:34:00.000 | 2 | 0 | 0 | 0 | python,security | 18,065,965 | 2 | false | 0 | 0 | What you are asking about is part of what's commonly called "key management." If you google the term, you'll find lots of interesting reading. You may well discover that there are other parts of key management that your solution needs to address, like revocation and rotation.
In the particular part of key management that you're looking at, you need to figure out how to have two nodes trust each other. This means that you have to identify a separate thing that you trust on which to base the nodes' trust. There are two common approaches:
Trusting a third party. This is the model that we use for most websites we visit. When our computers are created, the trusted third party creates the device to already know about and trust certain entities, like Verisign. When we contact a web site over HTTPS, the browser automatically checks if Verisign (or another trusted third party certificate authority) agrees that this is the website that it claims to be. The magic of Public Key Cryptography and how this works is a whole separate topic, which I recommend you investigate (just google for it :) ).
Separate, secure channel. In this model, we use a separate channel, like an administrator who transfers the secret from one node to the other. The admin can do this in any manner s/he wishes, such as encrypted data carried carried via USB stick over sneakernet, or the data can be transferred across a separate SFTP server that s/he has already bootstrapped and can verify that it's secure (such as with his/her own internal certificate authority). Some variations of this are sharing a PGP key on a business card (if you trust that the person giving you the business card is the person with whom you want to communicate), or calling the key-owner over the phone and verbally confirming that the hash of the data you received is the same as the hash of the data they sent.
There are on-line key exchange protocols - you can look them up, probably even on Wikipedia, using the phrase "key exchange", but you need to be careful that they actually guarantee the things you need to determine - like how the protocol authenticates the other side of the communication channel. For example, Diffie-Hellman guarantees that you have exchanged a key without ever exchanging the actual contents of the key, but you don't know with whom you are communicating - it's an anonymous key exchange model.
You also mention that you're worried about message replay. Modern secure communication protocols like SSH and TLS protect against this. Any good protocol will have received analysis about its security properties, which are often well described on Wikipedia.
Oh, and you should not create your own protocol. There are many tomes about how to write secure protocols, analyses of existing protocols and their security properties (or lack thereof). Unless you're planning to become an expert in the topic (which will take many years and thousands of pages of reading), you should take the path of least resistance and just use a well known, well exercised, well respected protocol that does the work you need. | 1 | 0 | 0 | I'm building an authentication server in Python and was wondering how I could totally secure a connection between two peers. I cannot see how a malicious user wouldn't be able to copy packets and simply analyze them if he understands what comes in which order.
Assume a client-server scheme. The client asks for an account. Even with SRP, packets can be copied and sent later on to allow login.
Now, if I add public/private key encryption, how do the peers send their public keys to each other without passing them over an unencrypted channel?
Sorry if my question remains noobish or looks like I haven't thought it through, but I really have a hard time figuring out how I can build up an authentication process without several security holes. | How could I totally secure a connection between two nodes? | 0.197375 | 0 | 1 | 1,317 |
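To make the replay worry concrete, here is a toy sketch of message authentication with a nonce, standard library only. It assumes the two nodes already share secret (delivered via one of the channels the answer describes), and it is an illustration rather than a protocol to deploy - use TLS or SSH for real traffic:

```python
import hashlib
import hmac
import os

secret = os.urandom(32)   # pre-shared key (assumed exchanged out of band)
seen_nonces = set()       # receiver's memory of nonces already used

def sign(message: bytes, nonce: bytes) -> bytes:
    return hmac.new(secret, nonce + message, hashlib.sha256).digest()

def verify(message: bytes, nonce: bytes, tag: bytes) -> bool:
    if nonce in seen_nonces:                            # replayed packet
        return False
    if not hmac.compare_digest(tag, sign(message, nonce)):
        return False                                    # tampered or wrong key
    seen_nonces.add(nonce)
    return True

nonce = os.urandom(16)
tag = sign(b'login alice', nonce)
print(verify(b'login alice', nonce, tag))  # -> True
print(verify(b'login alice', nonce, tag))  # -> False (replay rejected)
```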
18,065,677 | 2013-08-05T18:59:00.000 | 2 | 0 | 0 | 0 | python,gtk,pygtk | 18,066,003 | 1 | true | 0 | 1 | In GTK+ all widgets are hidden by default (which I think was a stupid design decision, but oh well). You usually call show_all() on a window, so indirectly show all widgets contained in it by the time of the call. If you add (pack, whatever) a widget later, don't forget to show() it manually. | 1 | 0 | 0 | Can I use pack once the main loop has been showed, or should I use something else to add /remove widgets to /from a vbox afterwards ?
I have this gtk.Window() that contains a vbox, where a menu, a treeview and a button are packed. At the push of this button, I want to display an image in a new container inside this window / vbox, and ideally, close said container at will.
(think image viewer with a file list, you click on an image file and a pane opens displaying it, if you click on another image file the new image is displayed in place of the old, and you can close the image pane)
My question is: how do you do that? My trials so far have led me to believe that once the vbox has been show()'d, you can't pack anything else into it...?
Does the "image" container have to exist prior to being displayed...?
What is the proper process to do this, and in which direction of the GTK manual should I look? | pyGTK : pack and unpack | 1.2 | 0 | 0 | 152 |
18,066,856 | 2013-08-05T20:12:00.000 | 0 | 1 | 0 | 0 | python,api,lotus-notes,email-attachments | 18,662,488 | 1 | false | 0 | 0 | You can do this in LotusScript as an export of data. This could be an agent that walks down a view in Notes, selects a document, and puts the document attachments into a directory. Then, with those objects in the directory, you can run any script you like, such as a shell script.
With LotusScript you can grab out metadata or other meaningful text for your directory name. Detach the objects you want from the rich text, then move to the next document. The view that you travel down will affect the type of documents that you are working with. | 1 | 0 | 0 | Basically, I need to write a Python script that can download all of the attachment files in an e-mail, and then organize them based on their name. I am new to using Python to interact with other applications, so I was wondering if you had any pointers regarding this, mainly how to create a link to Lotus Notes (API)? | Using Python To Access E-mail (Lotus Notes) | 0 | 0 | 0 | 1,295
18,067,094 | 2013-08-05T20:26:00.000 | 1 | 0 | 1 | 0 | python,debugging | 18,067,268 | 5 | false | 0 | 0 | Use a decent Python IDE - there are a lot out there - and you will be able to stop at breakpoints, inspect variables by hovering or adding watches, and enter a context console where you can interact with your code in the context of the breakpoint. | 3 | 2 | 0 | As a self-taught programmer, I learned to debug using an interactive console that kept all of my variables in memory when I build/run the script. However, I noticed the overwhelming trend for debugging in IDEs (and, I suppose CLI + Editor solutions, for that matter) is to build your script in one place and provide a separate console "sandbox" type area that only keeps variables if you copy/paste your code.
How do you debug without an interactive console? Can anyone list a few debugging steps that could help me be a better programmer / debugger?
Currently, this is a very simplified version of what I do:
1. Write some pseudocode (sometimes)
2. Write some code in an editor that should work
3. Run / build the script
4. Check stdout for errors
5. If no errors, then 7.
6. If errors, then back to 2 after fixing the offending code.
7. Type variable names into console to verify that they look like I anticipated.
Rinse and Repeat until it works as I intended. | How do you debug without an interactive console | 0.039979 | 0 | 0 | 172 |
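One console-free substitute for the "type variable names to verify them" step in the workflow above is to log the variables at the point of interest; the function below is just a made-up example of the pattern:

```python
import logging

logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")

def normalize(values):
    total = sum(values)
    # Instead of typing `total` into an interactive console, log it.
    logging.debug("total=%r", total)
    return [v / total for v in values]

result = normalize([1.0, 3.0])
logging.debug("result=%r", result)
```

The debug output survives between runs, which an interactive session does not, and it can be switched off by raising the log level instead of deleting print statements.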
18,067,094 | 2013-08-05T20:26:00.000 | 0 | 0 | 1 | 0 | python,debugging | 18,111,800 | 5 | false | 0 | 0 | It turns out that PyCharm, at least, DOES have an interactive console and the default keymapping (on Mac) is option-shift-E. Then your variables are loaded in memory. However, the suggestions above are better programming practices. | 3 | 2 | 0 | As a self-taught programmer, I learned to debug using an interactive console that kept all of my variables in memory when I build /run the script. However, I noticed the overwhelming trend for debugging in IDEs (and, I suppose CLI + Editor solutions, for that matter) is to build your script in one place and provide a separate console "sandbox" type area that only keeps variables if you copy/paste your code.
How do you debug without an interactive console? Can anyone list a few debugging steps that could help me be a better programmer / debugger?
Currently, this is a very simplified version of what I do:
Write some pseudocode (sometimes)
Write some code in an editor that should work
run / build the script
Check stdout for errors
If no errors, then 7.
If errors, then back to 2 after fixing the offending code.
Type variable names into console to verify that they look like I
anticipated.
Rinse and Repeat until it works as I intended. | How do you debug without an interactive console | 0 | 0 | 0 | 172 |
18,067,094 | 2013-08-05T20:26:00.000 | 1 | 0 | 1 | 0 | python,debugging | 56,380,504 | 5 | false | 0 | 0 | Use print statements in between the areas of problem code... otherwise, just download a good IDE | 3 | 2 | 0 | As a self-taught programmer, I learned to debug using an interactive console that kept all of my variables in memory when I build /run the script. However, I noticed the overwhelming trend for debugging in IDEs (and, I suppose CLI + Editor solutions, for that matter) is to build your script in one place and provide a separate console "sandbox" type area that only keeps variables if you copy/paste your code.
How do you debug without an interactive console? Can anyone list a few debugging steps that could help me be a better programmer / debugger?
Currently, this is a very simplified version of what I do:
Write some pseudocode (sometimes)
Write some code in an editor that should work
run / build the script
Check stdout for errors
If no errors, then 7.
If errors, then back to 2 after fixing the offending code.
Type variable names into console to verify that they look like I
anticipated.
Rinse and Repeat until it works as I intended. | How do you debug without an interactive console | 0.039979 | 0 | 0 | 172 |
18,068,855 | 2013-08-05T22:27:00.000 | 1 | 0 | 0 | 0 | python,architecture,twisted,scalability,distributed | 18,069,861 | 1 | false | 0 | 0 | There are many solutions to implement a shared database. It depends on your technology stack, network architecture, programming language(s), etc. This is too broad of a question to be answered in a few paragraphs. Pick one approach and go with it, but make your code modular enough to replace your approach with another if necessary.
Update: Based on your comment that you are using Twisted, I will ask you a question. If you had a cluster of Twisted servers that are all sharing network state (your "distributed nodes"), how would you request your "complex operations" from those servers and how would you get back the results? If you can answer this in enough detail, you will have determined the requirements of your nodes. And then you can determine how they share and update the network state. At that point, you can ask a much more specific question (like "how do I replicate memcache across my nodes?"). | 1 | 2 | 0 | I apologize in advance for how long this explanation is, I don't know how to make it more concise because I imagine almost all of this is relevant. Sorry!
I'm designing a game server in Python with Twisted (probably not relevant, as this is an architecture question).
The general idea is that players connect and are placed into a lobby. They can then queue for a match (for instance, deathmatch or team deathmatch), and, when a match is found, they are placed into that game. When the game ends they are placed back into the lobby.
Because I'm aware of how complex distributed systems can be, I tried to simplify it as much as possible. The idea I came up with was to abstract all the information about a player into an object. All game logic is implemented in a GameHandler. When a player connects, they're assigned to a Lobby(GameHandler) instance. When they join a match, they are reassigned to a, say, Deathmatch(GameHandler) instance (which are held in a map of server: gamehandler).
At that point, when the player is added to a match, the server they're actually connected to serves as a reverse proxy (I didn't write the client and it can't be modified, there can't be any connection re-negotiation) and sends the info about the player to the match server. Then, using the instance map, all traffic from that player is routed without being looked at to the game instance they're in, and vice versa with an ID system. I don't think this is a problem because the servers should all be able to forward data on a gigabit LAN.
When a game is over, it just notifies all the Lobby servers (reverse proxies) that forwarded players, and they're returned back to the Lobby instance.
That should mean that I can scale out with resources by adding backend servers, scale out with network links by adding more reverse proxy lobby-only servers, and I can also scale up on any of the individual nodes. There is also no technical limitation that forces backend servers to be backend, every single server could have a Lobby instance, but games could be distributed across the network.
Now, so far so good (In theory, I haven't started implementing the distribution yet because I want to work out the main points beforehand), but that leaves one major question:
How do I control metainformation that all nodes need to know?
For instance:
How many players are online for the server list to display before a client connects?
Is matchmaking implemented (I'm planning on using Microsoft's TrueSkill algorithm, if that matters) in some P2P manner or should I delegate an entire server just for that (or even for metainformation)?
What about a party system where players join a queue together? Which server keeps track of the players in the group?
How do I manage configuration, like banned players? Every forward server would need to know about them.
If the lobby instance the player connected to happens to be full, how do I find another lobby instance that isn't full so I can route their connection there? This goes back to the first point, I need to have a way for nodes to easily query the network state
I could implement a P2P solution, or a primary server to handle this sort of thing. My major concern would be that adding a primary control server would add a single point of failure (which I would like to avoid), but the P2P solution would seem to be an order of magnitude more complex, and potentially slow things down significantly or use a fair amount of resources caching data on all the nodes.
So the question is: Is my design decent, and what do you think the pros and cons of those two solutions to the metainformation problem are? Is there a better third solution? What do you think is best?
Thank you in advance, your help is very much appreciated. | How do I manage metainformation in a horizontally scaled game server? | 0.197375 | 0 | 0 | 246 |
18,069,628 | 2013-08-05T23:48:00.000 | 0 | 0 | 1 | 0 | python,unicode,decode,encode | 18,069,703 | 3 | false | 0 | 0 | Unicode strings have the same methods as standard strings; you can remove newlines with line.replace('\n', '') (note that r'\n' is a literal backslash followed by n, not a newline) and check whether one is present with '\n' in unc | 1 | 0 | 0 | How do I remove line breaks, i.e. '\n', from unicode text read from a text file using Python? Also, how do I test whether a value in a list is a line break or not in a unicode string? | how to avoid linebreakers from unicode string read from text file in python | 0 | 0 | 0 | 129 |
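A runnable sketch of the two operations from this answer - removing newlines with str.replace and testing membership with in (the pattern must be '\n', a real newline, not the raw string r'\n'):

```python
# Lines as they typically come back from file.readlines().
lines = [u"first line\n", u"second line\n", u"no newline"]

cleaned = [line.replace("\n", "") for line in lines]  # remove the '\n'
has_newline = ["\n" in line for line in lines]        # test for a '\n'

print(cleaned)       # ['first line', 'second line', 'no newline']
print(has_newline)   # [True, True, False]
```

str.rstrip("\n") would also work if only trailing newlines need to go.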
18,072,977 | 2013-08-06T06:19:00.000 | 1 | 1 | 1 | 0 | python,unit-testing,testing,fortran,fortran90 | 18,076,400 | 2 | false | 0 | 0 | If you use the "os.system()" function, this can be used to call linux/unix commands from the python script directly. You can also use the "subprocess" module.
Use it like this:
os.system("ls -G")
This will call 'ls -G' from Python just as if you were calling it yourself. You can easily compile and call Fortran code using this command as well. Or, if you're familiar with bash scripting, you could use that as a wrapper for your unit testing as well. The scientific computing community seems to like Perl for these types of tasks, but Python should work just fine.
At least you're working with fortran90 and not fortran77. Those goto statements can make debugging a code excessively interesting. :P | 1 | 0 | 0 | We are developing a numerical simulation program in FORTRAN90 (procedural, not OO and unfortunately some COMMON blocks are present but no GOTO's :-) ) and are thinking of using Python to help us in unit-testing (retroactively) and verification testing. We would like to set up a testing environment in Python to a) do unit-testing and b) do verification testing (i.e. run small test cases with well-known solutions). We would like to be able to group different tests together (by FORTRAN90 procedure for unit-testing or by problem topic for verification testing) and allow tests to be run either individually or by group.
The simulation program is text-input/output based, so we could come up with some input files to be run and compared to verified output files. For unit testing, however, I guess we will probably need to write wrappers for each FORTRAN90 subroutine.
Has anybody been in a likewise situation before? What tips can you give us?
thanks.
(btw rewriting the FORTRAN90 code in Python is not (yet) an option) | Using Python for testing non-python code | 0.099668 | 0 | 0 | 377 |
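The verification-testing idea described in this question - run a small case and compare against well-known output - can be driven from Python with subprocess. In the sketch below a Python one-liner stands in for the compiled FORTRAN90 binary; the real executable's command line would replace fake_solver:

```python
import subprocess
import sys

def check_case(cmd, stdin_text, expected_stdout):
    """Run one verification case and compare stdout to a known-good result."""
    result = subprocess.run(cmd, input=stdin_text,
                            capture_output=True, text=True, check=True)
    return result.stdout == expected_stdout

# Stand-in for the simulation binary: reads a number, prints its double.
fake_solver = [sys.executable, "-c",
               "import sys; print(2 * int(sys.stdin.read()))"]

print(check_case(fake_solver, "21", "42\n"))  # True
```

Each verified input/output file pair becomes one such case, and a unittest or pytest suite can group the cases by topic exactly as the question asks.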
18,073,826 | 2013-08-06T07:11:00.000 | 1 | 0 | 0 | 1 | python,socket.io,twisted,tornado,sockjs | 18,075,866 | 1 | false | 0 | 0 | You should send all important data through a queue with delivery confirmation. Then, if your server crashes, all the data will come back to it from the queue. Try to use RabbitMQ. | 1 | 0 | 0 | I am working now on a real-time game based on Tornado and tornado-sockjs.
There are a lot of different timeout strategies in our game application:
TIMEOUT_GAME_IF_NOBODY, TIMEOUT_GAME_IF_SERVER_OFF. These timeouts have callbacks that can work with storage directly (update, insert, and so on). The question is: what is the right way to organize a timeout strategy into a module? How can we re-execute callbacks in case of server failure? Imagine that three timeouts are pending and suddenly the server that handles these timeouts crashes. It means that some information was not updated. | Tornado timeouts and server failures | 0.197375 | 0 | 0 | 95 |
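The answer above recommends a queue with delivery confirmation (e.g. RabbitMQ). The toy in-memory model below is not the RabbitMQ API; it only illustrates the ack/redeliver behavior that lets pending callbacks be re-executed after a crash:

```python
import collections

class AckQueue:
    """Toy queue with explicit acks: messages taken but never acked can be
    redelivered, mimicking what a real broker provides."""

    def __init__(self):
        self._pending = collections.deque()
        self._unacked = {}
        self._next_tag = 0

    def publish(self, msg):
        self._pending.append(msg)

    def get(self):
        msg = self._pending.popleft()
        self._next_tag += 1
        self._unacked[self._next_tag] = msg
        return self._next_tag, msg

    def ack(self, tag):
        del self._unacked[tag]

    def requeue_unacked(self):
        """Call after a consumer crash: put unacked messages back (ordering simplified)."""
        for msg in self._unacked.values():
            self._pending.appendleft(msg)
        self._unacked.clear()

q = AckQueue()
q.publish("update-game-42")
tag, msg = q.get()
# ... the consumer crashes here, before it can call q.ack(tag) ...
q.requeue_unacked()
print(q.get()[1])  # 'update-game-42' is delivered again
```

With a real broker the "requeue on crash" step happens automatically when the consumer's connection drops before acking.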
18,074,758 | 2013-08-06T08:04:00.000 | 0 | 1 | 0 | 0 | python,python-2.7,graphics,pixel,raspberry-pi | 22,917,790 | 1 | true | 0 | 0 | I think there is no way to have graphics without an X session.
The best solution to this is to set boot to desktop and use the pygame library to create a full-screen window to draw graphics. | 1 | 1 | 0 | I am trying to get / set screen pixels (draw picture, line, circle, box, etc.) without starting an X session. I tried googling it but no success.
I am new to Python. Please help | Python graphics on raspberry pi command line | 1.2 | 0 | 0 | 770
18,077,145 | 2013-08-06T10:07:00.000 | 1 | 0 | 0 | 1 | python,windows,winapi,named-pipes | 24,032,255 | 2 | true | 0 | 0 | I have managed to achieve what I wanted. I call CreateNamedPipe and CloseHandle exactly once per session, and I call DisconnectNamedPipe when my write fails, followed by another ConnectNamedPipe.
The trick is to only call DisconnectNamedPipe when the pipe was actually connected. I called it every time I tried to connect "just to be sure" and it gave me strange errors.
See also djgandy's answer for more information about pipes. | 1 | 6 | 0 | With Windows named pipes, what is the proper way to use the CreateNamedPipe, ConnectNamedPipe, DisconnectNamedPipe, and CloseHandle calls?
I am making a server app which is connecting to a client app which connects and disconnects to the pipe multiple times across a session.
When my writes fail because the client disconnected, should I call DisconnectNamedPipe, CloseHandle, or nothing on my handle?
Then, to accept a new connection, should I call CreateNamedPipe and then ConnectNamedPipe, or just ConnectNamedPipe?
I would very much like an explanation of the different states my pipe can be in as a result of these calls, because I have not found this elsewhere.
Additional info:
Language: Python, using the win32pipe, win32file and win32api libraries.
Pipe settings: WAIT, no overlap, bytestream. | Windows named pipes in practice | 1.2 | 0 | 0 | 5,946 |
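The pywin32 calls themselves only run on Windows, but the lifecycle this answer settles on - create the endpoint once, then connect/serve/disconnect per session - can be sketched cross-platform with multiprocessing.connection. This is an analogue for illustration, not the Win32 API:

```python
from multiprocessing.connection import Client, Listener
import threading

listener = Listener(("localhost", 0))   # "CreateNamedPipe" once; OS picks a port
addr = listener.address
replies = []

def serve_two_sessions():
    for _ in range(2):
        with listener.accept() as conn:      # like ConnectNamedPipe
            conn.send(conn.recv().upper())   # leaving the block "disconnects"

t = threading.Thread(target=serve_two_sessions)
t.start()
for word in ["hello", "again"]:
    with Client(addr) as conn:               # a new client session each time
        conn.send(word)
        replies.append(conn.recv())
t.join()
listener.close()
print(replies)  # ['HELLO', 'AGAIN']
```

The key point mirrors the accepted answer: the listener (pipe) is created and closed once, while accept/disconnect happens once per client session.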
18,079,351 | 2013-08-06T11:48:00.000 | 0 | 0 | 0 | 0 | python,user-interface,wxpython,wxwidgets | 18,079,725 | 4 | false | 0 | 1 | You should be able to call the Layout method on the parent of the sizer, this will make it recalculate the shown items. | 3 | 0 | 0 | I have a wxpython grid sizer that is sizing sublists of bitmap buttons. The master list I would like to create just once because creating these buttons takes a considerable amount of time, and thus I do not want to destroy them. My idea is to somehow remove all of the buttons from the sizer, make a new list of the buttons that I want the sizer to contain, and then use the sizer's AddMany method.
If I can't remove the buttons from the sizer without destroying them, then is there a way to use the sizer's Show method to hide some of the items, but then have the sizer adjust to fill in the gaps? When I hide them, all I can get them to do right now is just to have them disappear and leave a gap. I need the next item to be adjusted to the gap's place.
Also is there a way to sort the grid sizer's item list?
Thanks for any help you can offer. | How to either sort a wxsizer or remove items without destroying them? | 0 | 0 | 0 | 698 |
18,079,351 | 2013-08-06T11:48:00.000 | 0 | 0 | 0 | 0 | python,user-interface,wxpython,wxwidgets | 18,079,420 | 4 | false | 0 | 1 | So I found out that the detach method is what I'm looking for! I would still be interested to know of a way to sort a sizer's item list though, without detaching all of the items and then re attaching a sublist. | 3 | 0 | 0 | I have a wxpython grid sizer that is sizing sublists of bitmap buttons. The master list I would like to create just once because creating these buttons takes a considerable amount of time, and thus I do not want to destroy them. My idea is to somehow remove all of the buttons from the sizer, make a new list of the buttons that I want the sizer to contain, and then use the sizer's AddMany method.
If I can't remove the buttons from the sizer without destroying them, then is there a way to use the sizer's Show method to hide some of the items, but then have the sizer adjust to fill in the gaps? When I hide them, all I can get them to do right now is just to have them disappear and leave a gap. I need the next item to be adjusted to the gap's place.
Also is there a way to sort the grid sizer's item list?
Thanks for any help you can offer. | How to either sort a wxsizer or remove items without destroying them? | 0 | 0 | 0 | 698 |
18,079,351 | 2013-08-06T11:48:00.000 | 0 | 0 | 0 | 0 | python,user-interface,wxpython,wxwidgets | 18,086,362 | 4 | true | 0 | 1 | You can't sort the sizer items in place. It would be possible to write your own function for doing this, of course, but it would use wxSizer::Detach() and Insert() under the hood anyhow. | 3 | 0 | 0 | I have a wxpython grid sizer that is sizing sublists of bitmap buttons. The master list I would like to create just once because creating these buttons takes a considerable amount of time, and thus I do not want to destroy them. My idea is to somehow remove all of the buttons from the sizer, make a new list of the buttons that I want the sizer to contain, and then use the sizer's AddMany method.
If I can't remove the buttons from the sizer without destroying them, then is there a way to use the sizer's Show method to hide some of the items, but then have the sizer adjust to fill in the gaps? When I hide them, all I can get them to do right now is just to have them disappear and leave a gap. I need the next item to be adjusted to the gap's place.
Also is there a way to sort the grid sizer's item list?
Thanks for any help you can offer. | How to either sort a wxsizer or remove items without destroying them? | 1.2 | 0 | 0 | 698 |
18,086,645 | 2013-08-06T17:25:00.000 | 0 | 0 | 0 | 0 | python,python-2.7,reportlab | 18,106,346 | 1 | false | 0 | 0 | I found a solution: I just needed to wrap the text in a tag. | 1 | 0 | 0 | I am using a table in ReportLab, and sometimes the text in a column is wider than the width of the column, so it overlaps the text in the next column. Is there a way to automatically split it into two lines if it is too long? | Reportlab split text in two lines | 0 | 0 | 0 | 563
18,087,121 | 2013-08-06T17:53:00.000 | 0 | 0 | 1 | 0 | python,multithreading,hadoop,hadoop-streaming,amazon-emr | 18,244,284 | 2 | true | 0 | 0 | I tried to use threading with Python, but there were issues with the Global Interpreter Lock. I ported the code to use the multiprocessing module; however, internally Hadoop already assigns as many mappers as there are cores in the cluster, so multiprocessing is not the way to go if you need a speedup. Multithreading, if performed right, might give some speedup. | 1 | 0 | 0 | I am making use of Hadoop streaming to write a Python-based HTML grabber. I find that running a single-threaded Python script is slow. I want to modify it to a multithreaded version. Does anyone know what would be a good number to set the number of threads in the mapper to? I am not sure of the specs of each node of the cluster but I assume that it would support at least two threads. | Threading with Hadoop Streaming | 1.2 | 0 | 0 | 483
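For an I/O-bound grabber the GIL is released while threads wait on the network, so a thread pool does help even in CPython. A sketch with a simulated fetch (the sleep stands in for network latency, and 4 workers is just an arbitrary starting point to tune per node):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def fetch(url):
    time.sleep(0.01)                 # stand-in for the HTTP round trip
    return "<html>%s</html>" % url

urls = ["http://example.com/%d" % i for i in range(8)]

# Threads overlap the waits, so 8 fetches take ~2 sleep periods, not 8.
with ThreadPoolExecutor(max_workers=4) as pool:
    pages = list(pool.map(fetch, urls))

print(len(pages))  # 8
```

Inside a streaming mapper the same pattern applies to the batch of URLs each mapper receives on stdin.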
18,087,793 | 2013-08-06T18:29:00.000 | 1 | 0 | 1 | 1 | windows,python-2.7,out-of-memory | 18,344,344 | 2 | true | 0 | 0 | There is no way to increase memory usage for a process. The problem was with the Python module I was using. After updating to a newer version of the module I was not limited to 1 GB of RAM. | 1 | 0 | 0 | I have a Python script that loads mp3 music files into memory using NumPy, manipulates certain parts of each song, and renders the multiple music files into one single mp3 file. It can be very RAM intensive depending on how many mp3 files the user specifies.
My problem is that the script throws "Memory Error" when I attempt to provide 8 or more mp3 songs (each around 5MB in size).
I am running:
Windows Server 2008 R2 64 bit with 64 GB of RAM and 4 core processors
32 bit version of Python
When I run Task Manager to view the python.exe process I notice that it crashes when it exceeds 1GB of RAM.
Is there a way I can increase the 1GB limit so that python.exe can use more RAM and not crash? | Increase Memory Usage for Python process on Windows | 1.2 | 0 | 0 | 3,740 |
18,089,022 | 2013-08-06T19:40:00.000 | 1 | 0 | 0 | 0 | python,concurrency,locking,semaphore,superglobals | 18,089,913 | 3 | false | 1 | 0 | Not exactly an answer to your question, but maybe another idea about how to tackle this without using globals.
Why don't you write a small program for controlling your USB device? This script runs once (one instance) on your server and takes care of communicating with the device in the manner you need. It also takes care of concurrency.
Now communicate from your web application via pipes, sockets, whatever with this script, send commands to it and receive results from it. | 1 | 1 | 0 | I would like to be able to use a USB device from several applications (for instance I run a Flask web application), making sure only one uses it at a time.
In my case I am using a relay to open / close a door. The door takes about 20 seconds to open. During that time the relay should not be activated, because this would lock the door in the middle.
Thanks in advance! | How to prevent concurrent access to a resource such as a USB device? | 0.066568 | 0 | 0 | 1,351 |
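One common way to let several applications share one device safely is an atomic lock file created with O_CREAT | O_EXCL; the path and function names below are illustrative. The process driving the door would hold the lock for the full ~20 seconds of the operation:

```python
import errno
import os
import tempfile

LOCK_PATH = os.path.join(tempfile.gettempdir(), "relay.lock")  # illustrative

def acquire():
    """Take the relay by creating the lock file atomically; False if busy."""
    try:
        fd = os.open(LOCK_PATH, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.close(fd)
        return True
    except OSError as e:
        if e.errno == errno.EEXIST:
            return False               # another process owns the relay
        raise

def release():
    os.remove(LOCK_PATH)

print(acquire())  # True  - this caller now owns the relay
print(acquire())  # False - anyone else must wait until release()
release()
```

Because open-with-O_EXCL is atomic at the OS level, two processes cannot both believe they got the lock; a stale lock left by a crashed process does need a cleanup policy, which a single gatekeeper daemon (the comment's suggestion) avoids entirely.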
18,089,598 | 2013-08-06T20:14:00.000 | 5 | 0 | 1 | 0 | python,mongodb,pymongo,bson | 18,089,722 | 3 | false | 0 | 0 | Assuming you are not specifically interested in mongoDB, you are probably not looking for BSON. BSON is just a different serialization format compared to JSON, designed for more speed and space efficiency. On the other hand, pickle does more of a direct encoding of python objects.
However, do your speed tests before you adopt pickle to ensure it is better for your use case. | 1 | 18 | 0 | I have read somewhere that you can store Python objects (more specifically dictionaries) as binaries in MongoDB by using BSON. However, right now I cannot find any documentation related to this.
Would anyone know how exactly this can be done? | Is there a way to store python objects directly in mongoDB without serializing them | 0.321513 | 1 | 0 | 23,835 |
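A sketch of the pickle route this answer compares against BSON: the dict round-trips through bytes, which could then be stored in a MongoDB binary field (the storage step itself is not shown here):

```python
import pickle

doc = {"name": "sensor-7", "readings": [1.5, 2.5], "active": True}

blob = pickle.dumps(doc)       # bytes suitable for a binary field
restored = pickle.loads(blob)  # back to the original Python object

print(restored == doc)  # True
```

The usual caveat applies: only unpickle data you trust, since loading a pickle can execute arbitrary code.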
18,089,667 | 2013-08-06T20:18:00.000 | 116 | 0 | 0 | 0 | python,pandas | 47,751,572 | 7 | false | 0 | 0 | Here's a comparison of the different methods - sys.getsizeof(df) is simplest.
For this example, df is a dataframe with 814 rows, 11 columns (2 ints, 9 objects) - read from a 427kb shapefile
sys.getsizeof(df)
>>> import sys
>>> sys.getsizeof(df)
462456
(gives results in bytes)
df.memory_usage()
>>> df.memory_usage()
...
(lists each column at 8 bytes/row)
>>> df.memory_usage().sum()
71712
(roughly rows * cols * 8 bytes)
>>> df.memory_usage(deep=True)
(lists each column's full memory usage)
>>> df.memory_usage(deep=True).sum()
462432
(gives results in bytes)
df.info()
Prints dataframe info to stdout. Technically these are kibibytes (KiB), not kilobytes - as the docstring says, "Memory usage is shown in human-readable units (base-2 representation)." So to get bytes you would multiply by 1024, e.g. 451.6 KiB = 462,438 bytes.
>>> df.info()
...
memory usage: 70.0+ KB
>>> df.info(memory_usage='deep')
...
memory usage: 451.6 KB | 2 | 170 | 1 | I have been wondering... If I am reading, say, a 400MB csv file into a pandas dataframe (using read_csv or read_table), is there any way to guesstimate how much memory this will need? Just trying to get a better feel of data frames and memory... | How to estimate how much memory a Pandas' DataFrame will need? | 1 | 0 | 0 | 124,007 |
18,089,667 | 2013-08-06T20:18:00.000 | 10 | 0 | 0 | 0 | python,pandas | 18,089,887 | 7 | false | 0 | 0 | Yes there is. Pandas will store your data in 2-dimensional numpy ndarray structures, grouping them by dtypes. ndarray is basically a raw C array of data with a small header. So you can estimate its size just by multiplying the size of the dtype it contains with the dimensions of the array.
For example: if you have 1000 rows with 2 np.int32 and 5 np.float64 columns, your DataFrame will have one 2x1000 np.int32 array and one 5x1000 np.float64 array which is:
4bytes*2*1000 + 8bytes*5*1000 = 48000 bytes | 2 | 170 | 1 | I have been wondering... If I am reading, say, a 400MB csv file into a pandas dataframe (using read_csv or read_table), is there any way to guesstimate how much memory this will need? Just trying to get a better feel of data frames and memory... | How to estimate how much memory a Pandas' DataFrame will need? | 1 | 0 | 0 | 124,007 |
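The arithmetic above generalizes to a small helper. Note this is only a rough lower bound - object columns and the index add more, which is why the other answer's deep=True numbers are larger:

```python
# Per-element sizes for a few common NumPy dtypes.
DTYPE_BYTES = {"int32": 4, "int64": 8, "float64": 8}

def estimate_bytes(n_rows, cols_by_dtype):
    """cols_by_dtype maps a dtype name to how many columns use that dtype."""
    return sum(DTYPE_BYTES[dtype] * n_cols * n_rows
               for dtype, n_cols in cols_by_dtype.items())

print(estimate_bytes(1000, {"int32": 2, "float64": 5}))  # 48000
```

For the worked example (1000 rows, 2 int32 and 5 float64 columns) this reproduces the 48,000-byte figure.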
18,090,039 | 2013-08-06T20:40:00.000 | 0 | 0 | 0 | 1 | python,sockets,webserver,twisted.web | 18,104,107 | 2 | true | 0 | 0 | I have contacted Fatcow.com support. They do not support SSH connections and do not support Python 2.7 with the Twisted library, and especially not a Python socket application running as a server. So it is a dead end.
Question resolved. | 1 | 0 | 0 | I have a small application written in python using TwistedWeb. It is a chat server.
Everything is configured right now as for localhost.
All I want is to run this python script on a server(shared server provided by Fatcow.com).
I mean that the script will be working all the time and some clients will connect/disconnect to it. Fatcow gives me python 2.5 without any custom libraries. Is there a way and a tutorial how to make it work with TwistedWeb?
Thanks in advance. | How to install twistedweb python library on a web server | 1.2 | 0 | 0 | 817
18,093,487 | 2013-08-07T02:10:00.000 | 0 | 0 | 0 | 0 | django,mechanize,mechanize-python | 27,438,161 | 1 | false | 1 | 0 | As per comments, the answer is:
pip install mechanize
then just open a python interpreter and import mechanize to confirm.
That should be it, you can start using mechanize in your Django project. | 1 | 1 | 0 | How do I use the mechanize library with Django?
I read online that I could put it in a directory (e.g. /lib/) and include as needed.
The problem is, the source I found didn't show how to use it, from configuration to initial use. Unfortunately, I also looked high and low elsewhere on Google and found nothing. I also checked a book I have on Django without any info.
Can anyone help me out?
I'm on a local install of django with python 2.7.
Thank you | Use of Mechanize Library with Python 2.7 and Django | 0 | 0 | 0 | 277 |
18,094,718 | 2013-08-07T04:31:00.000 | 3 | 0 | 0 | 1 | python,google-app-engine | 18,107,791 | 1 | true | 1 | 0 | Here's the deal.
You have to migrate. They have announced the deprecation of the Python 2.5 runtime and will continue to support it in accordance with the Google App Engine Terms of Service. Here is the section you should concern yourself with...
7.3 Deprecation Policy.
Google will announce if we intend to discontinue or make backwards
incompatible changes to this API or Service. We will use commercially
reasonable efforts to continue to operate that Service without these
changes until the later of: (i) one year after the announcement or
(ii) April 20, 2015, unless (as Google determines in its reasonable
good faith judgment):
required by law or third party relationship (including if there is a
change in applicable law or relationship), or doing so could create a
security risk or substantial economic or material technical burden
(the above policy, the "Deprecation Policy"). This Deprecation Policy
doesn't apply to versions, features, and functionality labeled as
"experimental."
By my reckoning you have until some time in 2014 until they flat out drop support.
In the meantime...
Fork your application. Update the app.yaml to specify the Python27 runtime and for good measure turn thread safety on (you might save some money).
Test your application.
Move on with your life. | 1 | 0 | 0 | I hope this posting is in the right location.
I am very new to Google App Engine; in fact, it's part of an iOS Application that I purchased from another developer, so bear with me please.
The iOS Application currently has 20,000 active users. There is no way I can break the system and their Application... so my question is: should I migrate to Python 2.7, since the message says 2.5 will soon be deprecated? Does that mean my users will drop off if I don't migrate?
If I do migrate, is there a chance that something might break and completely destroy the user base and their use of the Application? What can go wrong if I migrate?
This is the message at the top of my Dashboard on Google App Engine
A version of this application is using the Python 2.5 runtime, which is deprecated!
The application should be updated to the Python 2.7 runtime as soon as possible, which offers performance improvements and many new features. Learn how simple it is to migrate your application to Python 2.7.
Thanks everyone..
DC | Google App Engine shows notice to Upgrade Python to 2.7 | 1.2 | 0 | 0 | 427 |
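The app.yaml change the answer above mentions (the Python27 runtime with thread safety on) might look like this; the application ID and WSGI script path are placeholders:

```yaml
application: your-app-id
version: py27-migration
runtime: python27
api_version: 1
threadsafe: true

handlers:
- url: /.*
  script: main.app
```

With threadsafe: true, handlers must point at a WSGI application object (main.app) rather than a CGI-style script file, which is the main code change the migration forces.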
18,104,712 | 2013-08-07T13:28:00.000 | 1 | 0 | 0 | 0 | python,user-interface,wxpython | 18,120,007 | 1 | false | 0 | 1 | As each notification arrives, add it to a queue (i.e. a list), and as each dialog is closed, remove that notification from the queue and, if the queue is not empty, show the next - and listen to your users complain.
N.B. Be especially careful not to get into a situation I spotted a few times where clicking on the dismiss button always caused another notification. Another classic case was an error window reporting that too many error windows were open. | 1 | 0 | 0 | I have a multi-threaded wxPython app whose main GUI thread receives notifications from other threads to show in a modal dialog box. I want some kind of scheduling/queuing so the dialogs appear one after another if multiple notifications (from other threads) arrive at the same time. | wxpython, showing modal dialog one by one | 0.197375 | 0 | 0 | 332
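A GUI-free sketch of the queue-then-show-next strategy from the answer above. In real wxPython code, show_next would open the modal dialog (and on_notification would be reached via wx.CallAfter from worker threads); the names here are made up:

```python
from collections import deque

pending = deque()
shown = []        # stands in for actually opening a modal dialog

def show_next():
    shown.append(pending[0])

def on_notification(msg):
    """Called on the GUI thread for each message from a worker thread."""
    pending.append(msg)
    if len(pending) == 1:
        show_next()               # nothing was showing, start the chain

def on_dialog_closed():
    """Called from the current dialog's close handler."""
    pending.popleft()
    if pending:
        show_next()               # chain to the next queued notification

for m in ["a", "b", "c"]:
    on_notification(m)
on_dialog_closed()
on_dialog_closed()
on_dialog_closed()
print(shown)  # ['a', 'b', 'c']
```

Each close event pulls exactly one item off the queue, so dialogs appear strictly one at a time in arrival order.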
18,106,975 | 2013-08-07T15:05:00.000 | 12 | 0 | 1 | 0 | python,numpy | 18,107,074 | 2 | false | 0 | 0 | If you are using numpy, the best way to do what Antonis has suggested is to use the function np.allclose(a, b). You can also specify the tolerance (1e-10 from above). | 1 | 9 | 0 | I have to compare two numbers. One of them comes from regular Python code and the other comes from NumPy. The debugger shows they have the same value '29.0', but the type of the first is float and the type of the second is float64, so a == b and a - b == 0 is False. How can I deal with it? Is there any way to
force a regular python variable to be float64 or numpy use float by default?
Update: In the end, all these values come from the same file where 29.0 is written, so I don't think there are differences in the numeric values. | Compare `float` and `float64` in python | 1 | 0 | 0 | 21,187
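The np.allclose recommended in the answer above has a standard-library analogue, math.isclose, which illustrates the same tolerance-based comparison without a NumPy dependency (the second value here is fabricated to show a tiny representation difference):

```python
import math

a = 29.0
b = 29.000000000001   # pretend this came back from a different code path

print(a == b)                            # False: exact equality fails
print(math.isclose(a, b, rel_tol=1e-9))  # True: equal within tolerance
```

Whenever floats come from different computations or libraries, a tolerance check like this (or np.allclose for arrays) is the robust comparison.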
18,109,671 | 2013-08-07T17:16:00.000 | 1 | 0 | 1 | 0 | java,c++,python,c,coding-style | 18,109,807 | 5 | false | 0 | 0 | One thing is that you can reference and navigate the arrays using pointers. In fact, array operations decay to pointer arithmetic at the back end.
Suppose you want to reach the nth element of an array: you can simply do (a + n), where a is the base address of a one-dimensional array, but if the subscript started at 1 then to reach the nth element you would have to do (a + n - 1) all the time.
This is because just by taking the name of an array you get the address of its starting element, which is the simplest way! | 3 | 0 | 0 | Is there any special reason? I know that that's how the language has been written, but can't we change it?
And what are the challenges we'd face if the index started with 1? | Why doesn't the index/list of an array begin with 1? | 0.039979 | 0 | 0 | 278 |
18,109,671 | 2013-08-07T17:16:00.000 | 3 | 0 | 1 | 0 | java,c++,python,c,coding-style | 18,109,758 | 5 | false | 0 | 0 | The basic reason behind it is that the computer remembers the address at wich the first part of any variable/object is stored. So the index represents the "distance" in between that and what you're looking for, so the first one is 0 away, the second 1... | 3 | 0 | 0 | Is there any special reason? I know that that's how the language has been written, but can't we change it?
And what are the challenges we'd face if the index started with 1? | Why doesn't the index/list of an array begin with 1? | 0.119427 | 0 | 0 | 278 |
18,109,671 | 2013-08-07T17:16:00.000 | 3 | 0 | 1 | 0 | java,c++,python,c,coding-style | 18,109,818 | 5 | false | 0 | 0 | In C and C++, array indexing is syntactic sugar for dereferencing an offset pointer. That is,
array[i] is equivalent to *(array + i). It makes sense for pointers to point to the beginning of their block of memory, and this implies that the first element of the array needs to be *array, which is just array[0]. | 3 | 0 | 0 | Is there any special reason? I know that that's how the language has been written, but can't we change it?
And what are the challenges we'd face if the index started with 1? | Why doesn't the index/list of an array begin with 1? | 0.119427 | 0 | 0 | 278 |
18,111,203 | 2013-08-07T18:37:00.000 | 1 | 0 | 0 | 0 | php,python,codeigniter | 18,111,276 | 4 | false | 1 | 0 | You could add the disabled attribute to your input if the user is not logged in. For example: <input type="text" name="lname" disabled>
Is there a way to conditionally allow form fields to be clickable? | Keep form fields from being editable | 0.049958 | 0 | 0 | 139 |
18,113,426 | 2013-08-07T20:42:00.000 | 0 | 0 | 0 | 0 | python,amazon-web-services,amazon-s3,boto | 18,366,790 | 2 | false | 1 | 0 | The problem ended up being an internal billing error at AWS and was not related to either S3 or Boto. | 1 | 0 | 0 | I have several S3 buckets containing a total of 40 TB of data across 761 million objects. I undertook a project to copy these objects to EBS storage. To my knowledge, all buckets were created in us-east-1. I know for certain that all of the EC2 instances used for the export to EBS were within us-east-1.
The problem is that the AWS bill for last month included a pretty hefty charge for inter-regional data transfer. I'd like to know how this is possible?
The transfer used a pretty simple Python script with Boto to connect to S3 and download the contents of each object. I suspect that the fact that the bucket names were composed of uppercase letters might have been a contributing factor (I had to specify OrdinaryCallingFormat()), but I don't know this for sure. | Boto randomly connecting to different regions for S3 transfers | 0 | 1 | 1 | 878 |
18,114,285 | 2013-08-07T21:37:00.000 | 45 | 0 | 1 | 0 | python,multithreading,parallel-processing,process,multiprocessing | 18,114,475 | 6 | false | 0 | 0 | Multiple threads can exist in a single process.
The threads that belong to the same process share the same memory area (can read from and write to the very same variables, and can interfere with one another).
On the contrary, different processes live in different memory areas, and each of them has its own variables. In order to communicate, processes have to use other channels (files, pipes or sockets).
If you want to parallelize a computation, you're probably going to need multithreading, because you probably want the threads to cooperate on the same memory.
Speaking about performance, threads are faster to create and manage than processes (because the OS doesn't need to allocate a whole new virtual memory area), and inter-thread communication is usually faster than inter-process communication. But threads are harder to program. Threads can interfere with one another, and can write to each other's memory, but the way this happens is not always obvious (due to several factors, mainly instruction reordering and memory caching), and so you are going to need synchronization primitives to control access to your variables. | 1 | 188 | 0 | I am learning how to use the threading and the multiprocessing modules in Python to run certain operations in parallel and speed up my code.
I am finding this hard (maybe because I don't have any theoretical background about it) to understand what the difference is between a threading.Thread() object and a multiprocessing.Process() one.
Also, it is not entirely clear to me how to instantiate a queue of jobs and having only 4 (for example) of them running in parallel, while the other wait for resources to free before being executed.
I find the examples in the documentation clear, but not very exhaustive; as soon as I try to complicate things a bit, I receive a lot of weird errors (like a method that can't be pickled, and so on).
So, when should I use the threading and multiprocessing modules?
Can you link me to some resources that explain the concepts behind these two modules and how to use them properly for complex tasks? | What are the differences between the threading and multiprocessing modules? | 1 | 0 | 0 | 75,191 |
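To address the "only 4 running in parallel" part of the question, here is a minimal sketch using a pool of four workers. A thread pool is shown because it runs anywhere; multiprocessing.Pool exposes the same map interface for process-based parallelism (the job function is just a placeholder):

```python
from multiprocessing.pool import ThreadPool

def job(n):
    # Placeholder for real work; with multiprocessing.Pool this would
    # run in a separate process rather than a thread.
    return n * n

# At most 4 jobs execute at once; the rest wait in the pool's queue.
with ThreadPool(processes=4) as pool:
    results = pool.map(job, range(10))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

The "method that can't be pickled" errors mentioned in the question are specific to multiprocessing.Pool, which must serialize the job function and its arguments to send them to worker processes; the thread-based pool avoids that because threads share memory.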
18,114,584 | 2013-08-07T22:02:00.000 | 1 | 0 | 0 | 1 | python,python-2.7,tornado | 18,116,544 | 1 | false | 0 | 0 | Yes, unless you use the @asynchronous decorator. | 1 | 1 | 0 | When I am working with Tornado and at the end of a get/post request I have a return statement, or no return at all (not even self.write), does it close connections?
(When I type netstat -tanp | wc -l on the command line I see a lot of connections, like not alive, only existing.) Does it close the connection at the end of the request? | Does Tornado close connection at the end of request? | 0.197375 | 0 | 0 | 579 |
18,114,628 | 2013-08-07T22:05:00.000 | 1 | 0 | 1 | 0 | python,python-3.x,pickle | 18,115,167 | 2 | false | 0 | 0 | There are a few ways of pickling multiple objects. Which is best will depend a bit on your use case.
One option is to put all your separate account dictionaries into a larger data structure, such as a list or dictionary (perhaps keyed by account ID). Then you pickle the larger data structure and the account data will be saved in one go. Note that Pickle's format is binary, not text. If you want something human readable you should probably use json or yaml instead.
Another option is to write several pickle values to a file in sequence. This works pretty much as you'd expect, just call pickle.dump repeatedly to save them, and pickle.load repeatedly to load them back in. One downside to this approach is that you can't easily search for a specific data item that you need, you need to keep loading them in sequence until you find the one you want. Another is that the identity of objects that are pickled more than once will be lost (the values will probably be equal when you load them, but not references to the same object). An advantage is that you can append more data to the end of the file without reading in the previous stuff, just to write it out again.
A third option is to use the shelve module from the standard library. It acts somewhat like a dictionary, but the keys and values are pickled to a file. It may solve the lookup issue of the separate pickles above, but it doesn't fix the loss of object identity between values.
A final idea is to not use pickle at all, but instead use a real database system. This may be a bit harder to code, but may let you avoid issues of concurrency and data integrity that may otherwise be difficult to solve. | 1 | 6 | 0 | So I'm new to python and I'll most likely be asking many noob questions throughout my learning process (If they haven't already been asked/answered of course).
One question I have is whether there is a way to save multiple dictionaries to one text file using pickle, or if each individual dictionary has to be saved to its own separate file. For example, if I want to create a program to manage web accounts, with each account having a variety of arbitrary keys/values, can I save all these individual accounts to one archive as separate dictionaries?
Thanks in advance, and a noob would appreciate example code and/or any suggestions. | Pickling multiple dictionaries | 0.099668 | 0 | 0 | 12,360 |
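A minimal sketch of the first option described above - gathering the per-account dictionaries into one larger dict and pickling it in a single call (the account data and filename are invented):

```python
import pickle

# Several separate "account" dicts keyed by account ID in one outer dict.
accounts = {
    "alice": {"site": "example.com", "user": "alice"},
    "bob": {"site": "example.org", "user": "bob", "note": "arbitrary keys are fine"},
}

with open("accounts.pkl", "wb") as f:   # note: pickle's format is binary, not text
    pickle.dump(accounts, f)

with open("accounts.pkl", "rb") as f:
    restored = pickle.load(f)

print(restored == accounts)  # True
```

The second option from the answer - repeated pickle.dump calls on the same open file, read back with repeated pickle.load - works on the same file object and only differs in how you later locate individual items.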
18,119,631 | 2013-08-08T06:50:00.000 | 0 | 0 | 0 | 0 | python,pyside,cx-freeze | 18,120,660 | 2 | false | 0 | 1 | Take a look at the PySide documentation and see if there is a redirect-output-to-a-window option - it is entirely possible that something is causing an error that is being printed out to nowhere. | 1 | 4 | 0 | I have a small Python 3 application for manipulating some specific XML files. For the GUI I am using PySide, and for parsing files, lxml.
I had some trouble freezing it with cx_freeze but finally succeeded. Now some parts of the application simply don't work... no error message & no log created.
For example, on an Enter press signal in a LineEdit, a new dialog should be shown... but nothing happens.
I have the same version in standard Python files and those run correctly. How do I debug the frozen application? | cx_freeze - how debug app | 0 | 0 | 0 | 1,843 |
18,121,715 | 2013-08-08T08:50:00.000 | 3 | 0 | 0 | 0 | python,beautifulsoup,python-import | 29,046,403 | 3 | false | 1 | 0 | Make sure you give 'B' and 'S' as capital while typing 'BeautifulSoup' | 2 | 8 | 0 | I installed Beautiful Soup library, and it seems to be well set up as there is the bs4 folder in C:\Python33\Lib\site-packages.
(I changed the name into bs4 before installation, and it went the same after install)
But when I type in from bs4 import beautifulsoup in the code, it says there is no such library.
And I don't see any beautifulsoup.py or something. Isn't there supposed to be one?
I'm really confused. Anyone help please? | I cannot import beautiful soup on python | 0.197375 | 0 | 0 | 29,930 |
18,121,715 | 2013-08-08T08:50:00.000 | 0 | 0 | 0 | 0 | python,beautifulsoup,python-import | 57,383,104 | 3 | false | 1 | 0 | If you have a file in the same directory called bs4.py, this is the problem: don't name your files after package names. And by the way, package names are case-sensitive. | 2 | 8 | 0 | I installed Beautiful Soup library, and it seems to be well set up as there is the bs4 folder in C:\Python33\Lib\site-packages.
(I changed the name into bs4 before installation, and it went the same after install)
But when I type in from bs4 import beautifulsoup in the code, it says there is no such library.
And I don't see any beautifulsoup.py or something. Isn't there supposed to be one?
I'm really confused. Anyone help please? | I cannot import beautiful soup on python | 0 | 0 | 0 | 29,930 |
18,122,169 | 2013-08-08T09:13:00.000 | 0 | 0 | 1 | 0 | python,excel | 18,122,262 | 4 | false | 0 | 0 | I'm not sure you can directly write to an excel workbook, but you can easily create a CSV file and then import it to your excel workbook. | 1 | 0 | 0 | I want to read from a specified file in a source code (.map file) and to write the variables from there that have a certain name to an excel workbook. Some hints? | Reading from a text file and writing to an excel file in Python | 0 | 0 | 0 | 1,596 |
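The CSV route suggested in that answer can be sketched with the standard library's csv module; Excel opens the resulting file directly (the variable names and filename are made up):

```python
import csv

rows = [("name", "value"), ("alpha", 1), ("beta", 2)]

# newline="" avoids blank lines between rows on Windows.
with open("variables.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)

with open("variables.csv", newline="") as f:
    print(list(csv.reader(f)))
# [['name', 'value'], ['alpha', '1'], ['beta', '2']]
```

Note that everything comes back as strings on re-read; Excel will still interpret the numeric columns as numbers when importing.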
18,122,835 | 2013-08-08T09:47:00.000 | 1 | 0 | 0 | 0 | python,node.js,ssl,websocket,socket.io | 18,315,811 | 1 | true | 0 | 0 | My case seems to be a rare one. I built this whole environment on an EC2 instance based on Amazon Linux. As almost all the yum packages were out of date, I had to install pretty much every package from source. By doing so I may have left some required configuration unchanged or missing, or the libs HAProxy requires may not have been the latest.
In any case, I tried building the environment again on an Ubuntu 12.04 based EC2 instance. HAProxy worked like a charm with a few configuration tweaks. I can now connect to my SocketIO server from JS, Python & PHP over SSL without any problem. I could also create a secured TCP Amazon ELB that listens on 443 and proxies to a non-standard port (8xxx).
Let me know if anyone else encounters a similar problem, I will be happy to help! | 1 | 5 | 0 | I have a NodeJS-socketIO server that has clients listening from JS, PHP & Python. It works like a charm when the communication happens over a plain HTTP/WS channel.
Now, when I try to secure this communication, the websocket transport no longer works. It falls back to the xhr-polling (long polling) transport. Xhr-polling still works for the JS client but not for Python, which depends purely on the websocket transport.
Things I tried:
On Node, using https (with commercial certificates) instead of http - works well for serving pages via Node but not for SocketIO
Proxy via HAProxy (1.15-dev19). From HTTPS(HAProxy) to HTTP(Node). Couldn't get Websocket transport working and it falls back to xhr-polling on JS. Python gets 502 on handshake.
Proxy via STunnel (for HTTPS) -> HAProxy(Websocket Proxy) -> Node(SocketIO) - This doesnt work either. Python client still gets 502 on handshake.
Proxy via Stunnel(HTTPS) -> Node(SocketIO) - This doesnt work too. Not sure if STunnel support websocket proxy
node-http-proxy : Throws 500(An error has occurred: {"code":"ECONNRESET"}) on websocket and falls back to xhr-polling
I'm sure it's a common use case and a solution exists. Would really appreciate any help.
Thanks in advance! | NodeJS - SocketIO over SSL with websocket transport | 1.2 | 0 | 1 | 1,394 |
18,125,212 | 2013-08-08T11:47:00.000 | 1 | 0 | 0 | 0 | python,http,python-requests | 18,125,252 | 1 | false | 0 | 0 | The remote connection timed out.
The host you are trying to connect to is not answering; it is not refusing connections, it is just not responding at all to connection attempts.
Perhaps the host is overloaded or down? It could also be caused by the site blocking your IP address by dropping the packets (a firewall DROP rule instead of a REJECT rule).
You can try to connect to the site from a different IP address; if those connections work fine, but not from the original address, there is a higher likelihood that you are deliberately being blocked. | 1 | 1 | 0 | I'm using requests to routinely download a webpage and check it for updates, but recently I've been getting these errors:
HTTPConnectionPool(host='somehost', port=someport): Max retries
exceeded with url: someurl (Caused by : [Errno
10060] A connection attempt failed because the connected party did not
properly respond after a period of time, or established connection
failed because connected host has failed to respond)
Now this script has been running for weeks with this issue never coming up. Could it be that the site administrator has started blocking my proxy's IP?
I should add that it's not against the TOS of the site to scrape it.
Can anyone help me figure out what the reason for this is?
Thanks | socket Errno 10060 | 0.197375 | 0 | 1 | 6,095 |
18,127,128 | 2013-08-08T13:20:00.000 | 0 | 0 | 0 | 0 | python,google-chrome,web-applications,flask | 18,130,320 | 3 | false | 1 | 0 | Let's assume:
This is not a server issue, so we don't have to go fiddle with Apache, nginx, etc. timeout settings.
The delay is minutes, not hours or days, just to make the scenario manageable.
You control the web page on which the user hits submit, and from which user interaction is managed.
If those obtain, I'd suggest not using a standard HTML form submission, but rather have the submit button kick off a JavaScript function to oversee processing. It would put up a "please be patient...this could take a little while" style message, then use jQuery.ajax, say, to call the long-time-taking server with a long timeout value. jQuery timeouts are measured in milliseconds, so 60000 = 60 seconds. If it's longer than that, increase your specified timeout accordingly. I have seen reports that not all clients will allow super-extra-long timeouts (e.g. Safari on iOS apparently has a 60-second limitation). But in general, this will give you a platform from which to manage the interactions (with your user, with the slow server) rather than being at the mercy of simple web form submission.
There are a few edge cases here to consider. The web server timeouts may indeed need to be adjusted upward (Apache defaults to 300 seconds aka 5 minutes, and nginx less, IIRC). Your client timeouts (on iOS, say) may have maximums too low for the delays you're seeing. Etc. Those cases would require either adjusting at the server, or adopting a different interaction strategy. But an AJAX-managed interaction is where I would start. | 1 | 8 | 0 | I have a web application which acts as an interface to an offsite server which runs a very long task. The user enters information and hits submit and then chrome waits for the response, and loads a new webpage when it receives it. However depending on the network, input of the user, the task can take a pretty long time and occasionally chrome loads a "no data received page" before the data is returned (though the task is still running).
Is there a way to either put up a temporary page while my task is thinking, or simply force Chrome to continue waiting? Thanks in advance | Time out issues with chrome and flask | 0 | 0 | 1 | 13,355 |
18,128,391 | 2013-08-08T14:16:00.000 | 4 | 0 | 1 | 0 | python,argparse,configparser | 18,128,458 | 2 | true | 0 | 0 | Yes, the argparse and ConfigParser libraries use the old-style % string formatting syntax internally. These libraries were developed before str.format() and format() were available, or in the case of argparse the library authors aimed at compatibility with earlier Python versions.
If the % formatting ever is removed, then those libraries will indeed have to move to using string formatting using {} placeholders.
However, for various reasons, the % old-style string formatting style is here to stay for the foreseeable future; it has been 'un-deprecated'; str.format() is to be preferred but % is kept around for backwards compatibility. | 1 | 2 | 0 | When specifying help in argparse, I often use strings like %(default)s or %(const)s in the help= argument to display default arguments. The syntax is a bit weird, though: I assume it's left over from the days where python strings were formatted with %, but since python 2.6 the standard way to format strings has been using the format() function.
So are these libraries just using the 'old' replacement syntax, or does this come from somewhere else? It's been stated that the % replacement operator will disappear at some point, will these libraries change to the '{}'.format() syntax then? | Where does the argparse and ConfigParser string replacement syntax come from? | 1.2 | 0 | 0 | 595 |
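A quick illustration of that old-style placeholder in argparse: the library itself expands %(default)s with the % operator when building help text (the option name here is arbitrary):

```python
import argparse

parser = argparse.ArgumentParser(prog="demo")
# argparse substitutes %(default)s using %-style formatting internally.
parser.add_argument("--retries", type=int, default=3,
                    help="number of retries (default: %(default)s)")

print("default: 3" in parser.format_help())  # True
```

This is why a literal % in an argparse help string has to be escaped as %%.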
18,131,050 | 2013-08-08T16:18:00.000 | 0 | 1 | 0 | 1 | python,cron,cron-task | 18,131,872 | 1 | false | 0 | 0 | Cron jobs run with the permissions of the user that the cron job was setup under.
I.e., whatever is in the cron table of the reports user will be run as the reports user.
If you're having to use sudo to get the script to run when logged in as reports, then the script likely won't run as a cron job either. Can you run this script when logged in as reports without sudo? If not, then the cron job can't either. Make sense?
Check your logs - are you getting permissions errors?
There are a myriad of reasons why your script would need certain privs, but an easy way to fix this is to set the cron job up under root instead of reports. The longer way is to see what exactly is requiring elevated permissions and fix that. Is it file permissions? A protected command? Maybe adding reports to certain groups would allow you to run it under reports instead of root.
*be ULTRA careful if/when you setup cron jobs as root | 1 | 1 | 0 | I have a small problem running a python script as a specific user account in my CentOS 6 box.
My cron.d/cronfile looks like this:
5 17 * * * reports /usr/local/bin/report.py > /var/log/report.log 2>&1
The account reports exists and all the files that are to be accessed by that script are chowned and chgrped to reports. The python script is chmod a+r. The python script starts with a #!/usr/bin/env python.
But this is not the problem. The problem is that I see nothing in the logfile. The python script doesn't even start to run! Any ideas why this might be?
If I change the user to root instead of reports in the cronfile, it runs fine. However I cannot run it as root in production servers.
If you have any questions please ask :)
/e:
If I do sudo -u reports python report.py it works fine. | Running python cron script as non-root user | 0 | 0 | 0 | 1,721 |
18,133,117 | 2013-08-08T18:14:00.000 | 0 | 0 | 1 | 0 | python,pygame | 18,133,587 | 2 | false | 0 | 1 | Assuming pygame 1.9.1 release, it's in the src folder, but is implemented in C. | 1 | 0 | 0 | I would like to edit the pygame.image to add a method that returns the name of the object.
I've been looking around in /Library/Python/2.7/site-packages where I found Pygame, but I can't find the image class, even though I have found the folder.
Anyone knows? | Where can I find the pygame.image class? | 0 | 0 | 0 | 54 |
18,134,390 | 2013-08-08T19:24:00.000 | 1 | 0 | 0 | 0 | python,sql,db2,pyodbc | 18,135,069 | 2 | false | 0 | 0 | db2 export is a command run in the shell, not through SQL via odbc.
It's possible to write database query results to a file with python and pyodbc, but db2 export will almost certainly be faster and effortlessly handle file formatting if you need it for import. | 1 | 0 | 0 | I am trying to run the following db2 command through the python pyodbc module
IBM DB2 Command : "DB2 export to C:\file.ixf of ixf select * from emp_hc"
I am successfully connected to the DSN using the pyodbc module in Python, and it works fine for select statements,
but when I try to execute the following command from Python IDLE 3.3.2:
cursor.execute(" export to ? of ixf select * from emp_hc",r"C:\file.ixf")
pyodbc.ProgrammingError: ('42601', '[42601] [IBM][CLI Driver][DB2/LINUXX8664] SQL0104N An unexpected token "db2 export to ? of" was found following "BEGIN-OF-STATEMENT". Expected tokens may include: "". SQLSTATE=42601\r\n (-104) (SQLExecDirectW)')
or
cursor.execute(" export to C:\file.ixf of ixf select * from emp_hc")
Traceback (most recent call last):
File "", line 1, in
cursor.execute("export to C:\myfile.ixf of ixf select * from emp_hc")
pyodbc.ProgrammingError: ('42601', '[42601] [IBM][CLI Driver][DB2/LINUXX8664] SQL0007N The character "\" following "export to C:" is not valid. SQLSTATE=42601\r\n (-7) (SQLExecDirectW)')
Am I doing something wrong? Any help will be greatly appreciated. | sql import export command error using pyodbc module python | 0.099668 | 1 | 0 | 1,372 |
18,135,386 | 2013-08-08T20:20:00.000 | 0 | 0 | 0 | 0 | python,tkinter | 18,137,477 | 1 | true | 0 | 1 | Try placing a whole bunch of buttons in a frame and assigning a scroll bar a to that frame and make it so when the user presses a button is changes colour or picture or something and then any button with that same colour of picture before will go back to normal. Also, and a tkinter variable with which button is active so you can reference it later.
I'm fairly sure you can use both texts and images in a button simultaneously, but if not you can just put a button and an image side by side in the same row on the frame
I recommend you make this entire scrolling object a class and keep references of everything inside within that class.
If you need any help doing this, just give me a shout. | 1 | 0 | 0 | I'm using python 2.7 and I'd like to have an GUI with a scrollable list where each item in the list has both an image and some text. I'd like these items to be selectable like in a ListBox. I've tried a couple things and it seems ListBox only accepts text?
What widget/combination of widgets should I use? | Python tkinter: a scrollable list with text and images? | 1.2 | 0 | 0 | 794 |
18,137,913 | 2013-08-08T23:28:00.000 | 30 | 0 | 1 | 0 | ipython-notebook | 18,140,871 | 2 | true | 0 | 0 | I want to insert a link to a local file located in the notebook directory
I want to insert this link within a markdown cell.
The path needs to be relative to where the server has been started, and prefixed with files/.
e.g: [my molecule](files/molecules/ethanol.mol)
the file is to be opened with a local application (in this case, a molecule viewer)
Not possible unless your application supports a custom link protocol like the itunes:// or apt-get:// ones. The best that can happen is that on link click you will be prompted to download the file. (Keep in mind that the server can be on a different machine than your browser.) | 1 | 24 | 0 | Dear ipython notebook users,
I want to insert a link to a local file located in the notebook directory, and no, it is not an image (the only example I've found). I want to insert this link within a markdown cell.
When the link is clicked, the file is to be opened with a local application (in this case, a molecule viewer)
I've tried to come up with the correct syntax, but no luck. Please, any help is greatly appreciated. | how to insert a link to a local file into a markdown cell? | 1.2 | 0 | 0 | 28,792 |
18,140,011 | 2013-08-09T04:25:00.000 | 6 | 0 | 0 | 1 | python,django,macos,security | 18,140,216 | 2 | true | 0 | 0 | Yes of course it's absolutely possible. Depending on the services you are running, you will always be adding more potential holes for an attacker to find. | 2 | 0 | 0 | I am considering purchasing an imac with the thought of dual-purposing the machine. I'd like to use it as a home computer, but also host a personal website or two using OSX Server.
By using my computer as a server, is there any way that a malicious attack through my website can allow someone access to files that are stored locally on my hard drive? Is it safer to simply use a dedicated machine or service?
NB: I hope that a question regarding website security is appropriate, sorry that this isn't explicitly a coding question. | If I use my personal machine as a web server am I putting my local data at risk? | 1.2 | 0 | 0 | 105 |
18,140,011 | 2013-08-09T04:25:00.000 | 0 | 0 | 0 | 1 | python,django,macos,security | 18,141,423 | 2 | false | 0 | 0 | Yes plus it won't save you much time / money. Proper hosting isn't that expensive. What about the DNS, you're going to point to your own Internet Connection IP address, can you guarantee it won't change or stop working at any time?
Is it safer to simply use a dedicated machine or service?
Go with a service that handles everything for you unless you enjoy system admin stuffs. | 2 | 0 | 0 | I am considering purchasing an imac with the thought of dual-purposing the machine. I'd like to use it as a home computer, but also host a personal website or two using OSX Server.
By using my computer as a server, is there any way that a malicious attack through my website can allow someone access to files that are stored locally on my hard drive? Is it safer to simply use a dedicated machine or service?
NB: I hope that a question regarding website security is appropriate, sorry that this isn't explicitly a coding question. | If I use my personal machine as a web server am I putting my local data at risk? | 0 | 0 | 0 | 105 |
18,142,023 | 2013-08-09T07:32:00.000 | 0 | 1 | 0 | 0 | python,pygame | 18,142,439 | 4 | false | 0 | 0 | Pickle output may be too big, and it can behave strangely in some cases.
ZIP allows you to read and write data from different parts of the archive.
Try a zip with a password, or change the first bytes of the file to prevent a normal unpack.
PS. Make a few different variants and check size/speed. | 2 | 1 | 0 | Currently developing an RPG, I'm asking how I could protect the saved data so that the player/user can't read or modify it easily. I mean, yes, a person who is experienced with computers and programming could modify it, but I don't want the average user to be able to modify it as easily as one could modify a plaintext XML file.
Is there a way I could do that with python? | Protecting a Save file from modification in a game? | 0 | 0 | 0 | 285 |
18,142,023 | 2013-08-09T07:32:00.000 | 2 | 1 | 0 | 0 | python,pygame | 18,142,235 | 4 | false | 0 | 0 | Just pickle or cpickle a configuration object with the compression set to max is a quick and easy option. | 2 | 1 | 0 | Currently developping a RPG, I'm asking how could I protect the saved data so that the player/user can't read or modify it easily. I mean yes a person that is experienced with computers and programming could modify it but I don't want the lambda user to be able to modify it, as easily as one could modify a plaintext xml file.
Is there a way I could do that with python? | Protecting a Save file from modification in a game? | 0.099668 | 0 | 0 | 285 |
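pickle itself has no compression setting, but the spirit of the answers above can be approximated by compressing the pickled bytes with zlib, which also leaves the save file unreadable to a casual editor (the save-data layout and filename are invented):

```python
import pickle
import zlib

save = {"hp": 42, "gold": 17, "pos": (3, 5)}

# Pickle with the highest protocol, then compress the resulting bytes.
blob = zlib.compress(pickle.dumps(save, pickle.HIGHEST_PROTOCOL), 9)

with open("game.sav", "wb") as f:
    f.write(blob)

with open("game.sav", "rb") as f:
    restored = pickle.loads(zlib.decompress(f.read()))

print(restored == save)  # True
```

This only deters casual editing, as the question concedes; anyone who knows the scheme can decompress and unpickle the file.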
18,142,324 | 2013-08-09T07:54:00.000 | 1 | 0 | 1 | 0 | python,python-2.7 | 18,142,519 | 2 | false | 0 | 0 | There is no such concept in Python. There are conventions that are used - like the ones Steve mentioned but also others such as calling the first variable of an instance method self.
In addition, for module level imports - there is a way to prevent the default behavior of importing all names from a module. This is done by populating __all__ with a list of names that should be imported (exposed) by default.
However, as with __var and _var it is just a convention (although one that is enforced by Python). It doesn't restrict you though - you can explicitly import any name. | 1 | 3 | 0 | I have come from c++ and java background so, I was curious to know if python provides access specifiers as provided by c++/java. I've seen some code, and this is what I think,
__variable ---> private.
_variable ----> protected.
Correct me if I'm wrong. | What are the access specifiers in python? | 0.099668 | 0 | 0 | 8,855 |
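The two prefixes from the question can be demonstrated directly; only the double underscore triggers any actual mechanism (name mangling), and even that is renaming rather than real privacy:

```python
class Account:
    def __init__(self):
        self._balance = 100    # "protected" by convention only
        self.__token = "abc"   # mangled by Python to _Account__token

acct = Account()
print(acct._balance)           # 100 -- nothing actually blocks access
print(acct._Account__token)    # abc -- the mangled name is still reachable
```

This is why the answer above calls both prefixes conventions: neither stops a determined caller, unlike private/protected in C++ or Java.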
18,143,045 | 2013-08-09T08:43:00.000 | 1 | 0 | 1 | 0 | python | 18,143,966 | 4 | false | 0 | 0 | Get the right number:
The program chooses a random value between 1 and 100, then you have to guess it.
It tells you if you are above or below. | 2 | 6 | 0 | I'm going to be teaching some year 9 and 10 students Python soon and thought it would be cool to do some Project Euler type challenges with them. The first problem seems doable by them, but I think some of the others may be a bit over their head, or not require enough programming.
If anyone has a place to find some easy programming problems, or can think of any, can they please let me know.
edit: By year 9 and 10 I mean that they have been in school for 9 or 10 years. So about 13, 14, and 15 type age. Sorry for the confusion! | Easy Problems for Children to Solve in Python | 0.049958 | 0 | 0 | 493 |
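The guess-the-number suggestion above can be sketched so the comparison logic is a pure function, which keeps the classroom focus on conditionals (the function name is arbitrary; a real lesson would wrap it in an input() loop):

```python
import random

def feedback(guess, secret):
    """Say whether a guess is above, below, or equal to the secret number."""
    if guess > secret:
        return "above"
    if guess < secret:
        return "below"
    return "correct"

secret = random.randint(1, 100)
print(feedback(50, secret))  # "above", "below", or "correct"
```

Keeping feedback free of input/output also lets students test it by hand with known pairs before adding the interactive loop.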
18,143,045 | 2013-08-09T08:43:00.000 | 2 | 0 | 1 | 0 | python | 18,143,892 | 4 | false | 0 | 0 | Oh, I remember something I was taught in school! My IT teacher created a class in Python whose attributes generated a mathematical sequence. The goal was to guess the formula behind this sequence using only Python. Obviously, you couldn't look at the file with the class, only import it in Python. Maybe there's more math than programming here, but to solve this, students will have to learn how variables, namespaces (to find the variables), loops (to print those variables), and classes (which store those variables) work in Python, and this is more or less everything you need to know at first, in my opinion.
Ah, good times. We also used to play "hide and seek" in shell on IT lessons: the teacher would hide a file somewhere and leave some clues scattered around, and we had to find that file using text environment on linux :) | 2 | 6 | 0 | I'm going to be teaching some year 9 and 10 students Python soon and thought it would be cool to do some Project Euler type challenges with them. The first problem seems doable by them, but I think some of the others may be a bit over their head, or not require enough programming.
If anyone has a place to find some easy programming problems, or can think of any, can they please let me know.
edit: By year 9 and 10 I mean that they have been in school for 9 or 10 years. So about 13, 14, and 15 type age. Sorry for the confusion! | Easy Problems for Children to Solve in Python | 0.099668 | 0 | 0 | 493 |
18,144,810 | 2013-08-09T10:36:00.000 | 0 | 0 | 0 | 0 | python,constraints,nearest-neighbor,kdtree | 18,339,341 | 1 | false | 0 | 0 | If you are looking for the neighbours in a line of sight, couldn't use an method like
cKDTree.query_ball_point(self, x, r, p, eps)
which allows you to query the KDTree for neighbours that are inside a radius of size r around the x array points.
Unless I misunderstood your question, it seems that the line of sight is known and is equivalent to this r value. | 1 | 4 | 1 | I have a slight variant on the "find k nearest neighbours" algorithm which involves rejecting those that don't satisfy a certain condition and I can't think of how to do it efficiently.
What I'm after is to find the k nearest neighbours that are in the current line of sight. Unfortunately scipy.spatial.cKDTree doesn't provide an option for searching with a filter to conditionally reject points.
The best algorithm I can come up with is to query for n nearest neighbours and if there aren't k that are in the line of sight then query it again for 2n nearest neighbours and repeat. Unfortunately this would mean recomputing the n nearest neighbours repeatedly in the worst cases. The performance hit gets worse the more times I have to repeat this query. On the other hand setting n too high is potentially wasteful if most of the points returned aren't needed.
The line of sight changes frequently so I can't recompute the cKDTree each time either. Any suggestions? | nearest k neighbours that satisfy conditions (python) | 0 | 0 | 0 | 631 |
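A minimal sketch of the radius-query approach suggested in the answer above, filtering the returned candidates with a visibility predicate (the visible function below is a stand-in for a real line-of-sight test, written for this example):

```python
import numpy as np
from scipy.spatial import cKDTree

np.random.seed(0)
points = np.random.rand(1000, 2)
tree = cKDTree(points)

def visible(p):
    """Stand-in for a real line-of-sight test."""
    return p[0] + p[1] < 1.0

query = np.array([0.5, 0.5])
k = 5
candidates = tree.query_ball_point(query, r=0.2)   # all indices within radius 0.2
in_sight = [i for i in candidates if visible(points[i])]
in_sight.sort(key=lambda i: np.linalg.norm(points[i] - query))
nearest = in_sight[:k]                             # the k closest visible points
print(len(nearest))
```

If fewer than k points survive the filter, the radius r can be grown and the query repeated, which avoids re-ranking a fixed candidate count.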
18,145,598 | 2013-08-09T11:22:00.000 | 0 | 0 | 1 | 0 | python,python-2.7 | 18,145,867 | 4 | false | 0 | 0 | You could write data to a file instead of appending it to a list. | 3 | 1 | 0 | I have a function that appends the substrings of a string to a list. When the input string is large, a
MemoryError exception
is thrown. Is there any length limit for a one-dimensional list? If yes, how can I extend it? | Extend of size of a list in python | 0 | 0 | 0 | 124
18,145,598 | 2013-08-09T11:22:00.000 | 3 | 0 | 1 | 0 | python,python-2.7 | 18,145,622 | 4 | false | 0 | 0 | Yes. Available memory. Make more memory available to the process, either by adding more swap, adding more RAM, or moving to an architecture with a larger memory limit. | 3 | 1 | 0 | I have a function that appends the substrings of a string to a list. When the input string is large, a
MemoryError exception
is thrown. Is there any length limit for a one-dimensional list? If yes, how can I extend it? | Extend of size of a list in python | 0.148885 | 0 | 0 | 124
18,145,598 | 2013-08-09T11:22:00.000 | 0 | 0 | 1 | 0 | python,python-2.7 | 18,146,513 | 4 | false | 0 | 0 | Are you by chance using a 32-bit build of Python? With a 64-bit address space, your process should come to a grinding halt long before it has exhausted all of the memory it can theoretically access, assuming you have adequate swap space available. In a 32-bit process, you can access only about 3 gigabytes of memory; not that much by today's standards. | 3 | 1 | 0 | I have a function that appends the substrings of a string to a list. When the input string is large, a
MemoryError exception
is thrown. Is there any length limit for a one-dimensional list? If yes, how can I extend it? | Extend of size of a list in python | 0 | 0 | 0 | 124
18,146,661 | 2013-08-09T12:26:00.000 | 3 | 0 | 0 | 0 | python,django | 18,146,782 | 2 | false | 1 | 0 | Well, this is a bit of a bad idea. The point of having a requirements.txt is that you can perfectly replicate what you have on the machine you develop on, i.e. exactly the environment it works in.
You can do what you want by just not specifying a version number in requirements.txt, but you are better off manually upgrading each module/package and confirming it works before using it in production. | 1 | 1 | 0 | When I create my requirements.txt I always want it to get the most recent package, without me having to know a version number, how can I do this?
For example I want this to get the latest version of Django:
requirements.txt
Django>=
South==0.7.6 | Django/Python requirements.txt get always recent package | 0.291313 | 0 | 0 | 107 |
18,150,150 | 2013-08-09T15:19:00.000 | 1 | 0 | 0 | 0 | python,numpy,fft | 18,153,590 | 2 | false | 0 | 0 | The bandwidth of each FFT result bin is inversely proportional to the length of the FFT window. For a wider bandwidth per bin, use a shorter FFT. If you have more data, then Welch's method can be used with sequential STFT windows to get an average estimate. | 1 | 0 | 1 | I have a numpy fft for a large number of samples. How do I reduce the resolution bandwidth, so that it will show me fewer frequency bins, with averaged power output? | FFT resolution bandwidth | 0.099668 | 0 | 0 | 921 |
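The inverse relationship between FFT length and bin bandwidth described in the answer above can be checked numerically (the sample rate and lengths below are arbitrary choices for this sketch):

```python
import numpy as np

fs = 1000.0                       # sample rate in Hz, arbitrary for this demo
for n in (1024, 256):
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    bin_width = freqs[1] - freqs[0]           # equals fs / n
    print(n, "point FFT ->", bin_width, "Hz per bin")
# prints: 1024 point FFT -> 0.9765625 Hz per bin
#         256 point FFT -> 3.90625 Hz per bin
# Averaging many short FFTs over a long signal (Welch's method) keeps this
# coarser resolution while reducing the variance of the power estimate.
```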
18,150,858 | 2013-08-09T15:55:00.000 | 0 | 0 | 0 | 0 | python,mysql,django,mysql.sock | 66,405,102 | 5 | false | 1 | 0 | I faced this problem when connecting MySQL to Django while using Docker.
Try 'PORT':'0.0.0.0'.
Do not use 'PORT': 'db'. This will not work if you try to run your app outside Docker.
I'm sure mysql server is running.
/var/run/mysqld/mysqld.sock doesn't exist.
When I run $ find / -name *.sock -type s, I only get /tmp/mysql.sock and some other irrelevant output.
I added socket = /tmp/mysql.sock to /etc/my.cnf. And then restared mysql, exited django shell, and connected to mysql database. I still got the same error.
I searched a lot, but I still don't know how to do.
Any help is greate. Thanks in advance.
Well, I just tried some ways. And it works.
I did as follows.
Add socket = /tmp/mysql.sock .Restart the mysql server.
ln -s /tmp/mysql.sock /var/lib/mysqld/mysqld.sock
I met an another problem today. I can't login to mysql.
I'm newbie to mysql. So I guess mysql server and client use the same socket to communicate.
I add socket = /var/mysqld/mysqld.sock to [mysqld] [client] block in my.cnf and it wokrs. | OperationalError: (2002, "Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)") | 0 | 1 | 0 | 101,165 |
18,150,858 | 2013-08-09T15:55:00.000 | 0 | 0 | 0 | 0 | python,mysql,django,mysql.sock | 56,762,083 | 5 | false | 0 | 0 | In Flask, you may use this:
app = Flask(__name__)
app.config["MYSQL_HOST"] = "127.0.0.1"
app.config["MYSQL_USER"]="root"... | 3 | 26 | 0 | when connecting to mysql database in Django ,I get the error.
I'm sure mysql server is running.
/var/run/mysqld/mysqld.sock doesn't exist.
When I run $ find / -name *.sock -type s, I only get /tmp/mysql.sock and some other irrelevant output.
I added socket = /tmp/mysql.sock to /etc/my.cnf. And then restared mysql, exited django shell, and connected to mysql database. I still got the same error.
I searched a lot, but I still don't know how to do.
Any help is greate. Thanks in advance.
Well, I just tried some ways. And it works.
I did as follows.
Add socket = /tmp/mysql.sock .Restart the mysql server.
ln -s /tmp/mysql.sock /var/lib/mysqld/mysqld.sock
I met an another problem today. I can't login to mysql.
I'm newbie to mysql. So I guess mysql server and client use the same socket to communicate.
I add socket = /var/mysqld/mysqld.sock to [mysqld] [client] block in my.cnf and it wokrs. | OperationalError: (2002, "Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)") | 0 | 1 | 0 | 101,165 |
18,150,858 | 2013-08-09T15:55:00.000 | 0 | 0 | 0 | 0 | python,mysql,django,mysql.sock | 72,389,079 | 5 | false | 1 | 0 | You need to change your HOST from 'localhost' to '127.0.0.1' and check your django app :) | 3 | 26 | 0 | when connecting to mysql database in Django ,I get the error.
I'm sure mysql server is running.
/var/run/mysqld/mysqld.sock doesn't exist.
When I run $ find / -name *.sock -type s, I only get /tmp/mysql.sock and some other irrelevant output.
I added socket = /tmp/mysql.sock to /etc/my.cnf. And then restared mysql, exited django shell, and connected to mysql database. I still got the same error.
I searched a lot, but I still don't know how to do.
Any help is greate. Thanks in advance.
Well, I just tried some ways. And it works.
I did as follows.
Add socket = /tmp/mysql.sock .Restart the mysql server.
ln -s /tmp/mysql.sock /var/lib/mysqld/mysqld.sock
I met an another problem today. I can't login to mysql.
I'm newbie to mysql. So I guess mysql server and client use the same socket to communicate.
I add socket = /var/mysqld/mysqld.sock to [mysqld] [client] block in my.cnf and it wokrs. | OperationalError: (2002, "Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)") | 0 | 1 | 0 | 101,165 |
18,152,108 | 2013-08-09T17:12:00.000 | 0 | 0 | 1 | 1 | python,subprocess,multiprocessing | 18,152,311 | 2 | false | 0 | 0 | So since it's a file descriptor that's returned by a Pipe I regret to say you can't go back; An idea though would be to have either reader process add the data to a multiprocessing.Queue where both can read out of and later drop the data.
You can always have a pipe from the writer process to each of the readers as well. also there are other things such as shared memory or dbus that you could use to ferry around data.
Could you describe your problem more in depth?
Depending on the platform you can also just have the process use multiple streams - e.g. stdout and a 4th one - but this isn't portable between OS's. | 1 | 2 | 0 | Suppose I have a process that generates some data, and this data is consumed by two different processes which are independent of one another.
One way to solve this problem would be to have the generated data written to a file, and then have the other two processes read from the file. This will work fine if the size of the file is not big, but IO becomes expensive if there is a lot of data.
If I had only one process consuming the data, I can just connect the two processes using os.pipe() and funnel data from the output of one into the input of the other.
However, since I have two consumer processes, I'm not sure if there's a way I can duplicate the read side of the pipe so that both consumers can read from it. | Is it possible to duplicate a pipe in Python, so that it has one write end but two read ends? | 0 | 0 | 0 | 1,227 |
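The "one pipe per reader" suggestion from the answer above can be sketched like this; the fan_out helper is my own name for the pattern, and the readers are inlined rather than run as separate processes to keep the sketch short:

```python
import multiprocessing as mp

def fan_out(item, connections):
    """Write one item to every consumer's pipe, so each reader gets its own copy."""
    for conn in connections:
        conn.send(item)

# One pipe per consumer. In a real program each receiving end would be handed
# to its own mp.Process; here we read back in-process to keep the sketch short.
a_recv, a_send = mp.Pipe(duplex=False)
b_recv, b_send = mp.Pipe(duplex=False)

for item in range(3):
    fan_out(item, [a_send, b_send])
fan_out(None, [a_send, b_send])   # sentinel meaning "no more data"

print([a_recv.recv() for _ in range(4)])  # [0, 1, 2, None]
print([b_recv.recv() for _ in range(4)])  # [0, 1, 2, None]
```

The duplicated send is the price of fanning one stream out to two independent readers; each pipe buffers its own copy, so a slow reader does not starve the other.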
18,154,487 | 2013-08-09T19:44:00.000 | 0 | 0 | 1 | 0 | python,python-2.7 | 18,162,080 | 2 | false | 0 | 0 | Use split(', ') which gives you the List
list = ['dogs, cats', 'x, y, z']
finalList = []
for key in list:
finalList.extend(key.split(', '))
you will get
['x', 'y', 'z', 'dogs', 'cats'] | 1 | 1 | 0 | Say you have a list ['dogs, cats']. How can one turn that into ['dogs', 'cats'] for arbitrary number of ['x, y, z'] | Python List of Strings created from a list of one string | 0 | 0 | 0 | 98 |
18,155,811 | 2013-08-09T21:19:00.000 | 1 | 0 | 0 | 1 | python,python-3.x,websocket,tornado | 18,156,111 | 1 | true | 1 | 0 | Tornado works well with a large number of short concurrent requests.
It does not split a long request into smaller ones, so the process blocks.
Why are you passing a big amount of data over sockets? The final solution depends on the answer to this question.
If you don't have big requests too often - just use haproxy in front of multiple tornado instances. | 1 | 1 | 0 | I have two pretty simple Tornado-based websocket handlers running in the same process, each of which function properly on their own. However, when one is receiving a large amount of data (>8MB) the process blocks and the other is unable to process messages until all of the data has been received. Is there any way I can get around this and prevent tornado from blocking here? | Python Tornado Websocket Handler blocks while receiving data | 1.2 | 0 | 0 | 407 |
18,156,044 | 2013-08-09T21:37:00.000 | 1 | 0 | 0 | 1 | python,python-2.7,python-3.x | 18,156,059 | 3 | false | 0 | 0 | You can use sys.argv[1] to get the first command line argument. If you want more arguments, you can reference them with sys.argv[2], etc. | 1 | 0 | 0 | I have Python code which asks for user input (e.g. src = input('Enter Path to src: ')). So when I run the code through the command prompt (e.g. python test.py) the prompt 'Enter path to src:' appears. But I want to type everything on one line (e.g. python test.py c:\users\desktop\test.py). What changes should I make? Thanks in advance | How do I accept user input on command line for python script instead of prompt | 0.066568 | 0 | 0 | 3,737
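The sys.argv approach from the answer above, with a fallback to the interactive prompt when no argument is given (the get_src helper is a pattern sketched for this example, not part of the original answer):

```python
import sys

def get_src(argv):
    """Return the source path from the command line if given, else prompt."""
    if len(argv) > 1:
        return argv[1]
    return input('Enter Path to src: ')

# Invoked as  python test.py c:\users\desktop\test.py  the script would see
# sys.argv == ['test.py', 'c:\\users\\desktop\\test.py'], so:
print(get_src(['test.py', 'c:\\users\\desktop\\test.py']))  # c:\users\desktop\test.py
```

In the real script you would call get_src(sys.argv); no path setup is needed beyond invoking python with the script name.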
18,157,029 | 2013-08-09T23:14:00.000 | 0 | 1 | 0 | 1 | python,eclipse,pydev | 18,157,426 | 2 | false | 0 | 0 | Yes, there is.
Just start debugging. As far as I know, you have to set a breakpoint, otherwise the program just runs to the end. When stopped at the breakpoint, in the console window click the open console icon -> choose pydev console -> PyDev Debug Console.
Let me know if it works for you. | 1 | 0 | 0 | I have used the standard IDE for Python, IDLE, for a long time. It has convenient debugging: I can write a script, press F5, and it is possible to use all objects in the terminal.
Now I want to work with Eclipse and the PyDev plugin. Is there any similar way to debug in Eclipse? | debug script with pydev in eclipse | 0 | 0 | 0 | 459
18,160,078 | 2013-08-10T08:25:00.000 | 10 | 1 | 1 | 0 | python,unit-testing,argparse | 18,160,281 | 9 | false | 0 | 0 | Populate your arg list by using sys.argv.append() and then call
parse(), check the results and repeat.
Call from a batch/bash file with your flags and a dump args flag.
Put all your argument parsing in a separate file and in the if __name__ == "__main__": call parse and dump/evaluate the results then test this from a batch/bash file. | 1 | 226 | 0 | I have a Python module that uses the argparse library. How do I write tests for that section of the code base? | How do you write tests for the argparse portion of a python module? | 1 | 0 | 0 | 91,355 |
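A related (and somewhat cleaner) variant of the first suggestion above: instead of mutating sys.argv, argparse's parse_args accepts an explicit argument list, so a simulated command line can be checked directly (the --verbose/path parser is a made-up example):

```python
import argparse

def build_parser():
    parser = argparse.ArgumentParser()
    parser.add_argument('--verbose', action='store_true')
    parser.add_argument('path')
    return parser

# parse_args accepts an explicit argument list, so no mutation of sys.argv
# is needed to simulate a command line in a test:
args = build_parser().parse_args(['--verbose', 'data.txt'])
print(args.verbose, args.path)  # True data.txt
```

Keeping the parser construction in a function like build_parser also makes the third suggestion easy: the script's __main__ block and the tests share the exact same parser.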
18,163,556 | 2013-08-10T15:48:00.000 | 0 | 0 | 1 | 0 | python,templates,flask,jinja2 | 18,184,565 | 1 | false | 1 | 0 | There is no real performance loss when using template inheritance; you will want to use it in most projects because of reusability. But that does not mean you can inherit indefinitely, because each level adds file-read overhead. | 1 | 0 | 0 | I just wonder whether single template rendering's performance is better than multiple template rendering's.
What are the pros and cons of single-template rendering compared with multiple-template rendering? | Jinja2 template rendering multiple or single file | 0 | 0 | 0 | 1,032
18,164,348 | 2013-08-10T17:07:00.000 | 3 | 0 | 0 | 0 | python,scipy,scikit-learn,nearest-neighbor | 18,201,497 | 2 | false | 0 | 0 | You can try to transform your high-dimensional sparse data to low-dimensional dense data using TruncatedSVD and then build a ball tree. | 1 | 9 | 1 | I have a large corpus of data (text) that I have converted to a sparse term-document matrix (I am using scipy.sparse.csr.csr_matrix to store the sparse matrix). I want to find, for every document, the top n nearest neighbour matches. I was hoping that the NearestNeighbor routine in the Python scikit-learn library (sklearn.neighbors.NearestNeighbor, to be precise) would solve my problem, but efficient algorithms that use space partitioning data structures such as KD trees or ball trees do not work with sparse matrices. Only the brute-force algorithm works with sparse matrices (which is infeasible in my case as I am dealing with a large corpus).
Is there any efficient implementation of nearest neighbour search for sparse matrices (in Python or in any other language)?
Thanks. | Efficient nearest neighbour search for sparse matrices | 0.291313 | 0 | 0 | 3,890 |
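A minimal sketch of the TruncatedSVD-then-ball-tree pipeline suggested above; the random sparse matrix and the component count are placeholders standing in for a real term-document matrix:

```python
from scipy.sparse import random as sparse_random
from sklearn.decomposition import TruncatedSVD
from sklearn.neighbors import NearestNeighbors

# Stand-in for a real sparse term-document matrix:
X = sparse_random(200, 1000, density=0.01, random_state=0, format='csr')

# Project to a low-dimensional dense space, then use a tree-based search:
X_dense = TruncatedSVD(n_components=50, random_state=0).fit_transform(X)
nn = NearestNeighbors(n_neighbors=5, algorithm='ball_tree').fit(X_dense)
distances, indices = nn.kneighbors(X_dense[:3])
print(indices.shape)  # (3, 5)
```

The SVD step loses some information, so the neighbours found in the reduced space are approximate with respect to the original sparse distances; n_components trades accuracy against tree-build and query speed.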
18,164,354 | 2013-08-10T17:07:00.000 | 2 | 0 | 1 | 1 | python,windows,windows-7 | 18,216,038 | 1 | false | 0 | 0 | Finally! A few days ago I managed to find a solution!!! The problem was with the icon. I don't know why but when I removed the icon from my setup file things got charmingly ok. But I needed the icon so after I created my exe file I packed everything in a rar file. I mean SFX rar file, and I set it's icon to what I wanted. So it is solved for me. Still, the error I was facing happens in many other cases, I have no solution for any of those. | 1 | 1 | 0 | I have created a simple python application to detect changes in a set of words. Now I need an executable file of my script. Since I use python 3.3 the only way I found was using cx_Freeze. I have created my setup file according to the documentation presented by cx_Freeze website, and it seems to work. The thing is while it is creating the files in the bin folder python.exe crashes, there is only a windows error saying python.exe stopped working. In the lines printed to command prompt I can see that the crash has occurred after copying python33.dll. This I can confirm by comparing the copied file and the original file. Still, an exe file is created which also crashes when I run it. Tracing it, I found out that the exe file crashes when it tries to get a zipimporter instance, giving the error "cannot get zipimporter instance". I have a windows 7 64 bit, python 3.3.2 64 bit, and cx_Freeze 4.3.1 64 bit. I also have a windows 7 32 bit on a virtual machine with python 3.3.2 32 bit and cx_Freeze 4.3.1 32 bit. To my knowledge both Linux and windows users have this problem but only Linux users seem to have a solution! Maybe I didn't find the solution to my problem, but I have spent two days looking. I would be really grateful if you can help. | python.exe version 3.3.2 64 & 32 crash while creating .exe file on win 7 64 & 32 with cx_Freeze | 0.379949 | 0 | 0 | 415 |
18,164,647 | 2013-08-10T17:40:00.000 | 1 | 0 | 1 | 0 | python | 18,164,661 | 1 | true | 1 | 0 | No, you can't. Python packages often have filesystem paths written to various metadata files. Just take the time to go through the site-packages and install the things into a fresh virtualenv, then call pip freeze to get a serialized list that you can use going forward. | 1 | 0 | 0 | I have an old Django project that I want to convert to use virtualenv. If I could copy the current global Python packages to the new env, I think I'd be assured that I'd have the same environment and would save myself some time over creating a requirements file by hand. So, could I just copy the global site-packages contents into the env's site-packages directory? | Convert to virtualenv an existing project by copying site-packages | 1.2 | 0 | 0 | 952 |
18,168,329 | 2013-08-11T03:02:00.000 | 4 | 0 | 1 | 0 | python,macos,multiprocessing | 18,168,461 | 1 | true | 0 | 0 | You need to use the multiprocessing module.
Both modules enable concurrency, but only multiprocessing enables true parallelism. Due to Python's Global Interpreter Lock, multiple threads cannot execute simultaneously.
Keeping all 16 of your processors busy comes at the cost of a certain increased difficulty in programming since separate processes do not execute in a shared memory space, so if a spawned process needs to share data with its parent process you will need to serialize it. | 1 | 0 | 0 | Any basic information would be greatly appreciated.
I have almost completed my project; all I have to do now is run my code to get my data. However, it takes a very long time, and it has been suggested that I adapt my code (Python) for multiprocessing. However, I am clueless on how to do this and have had a lot of trouble finding out how. I use Mac OS X 10.8.2. I know that I need a semaphore.
I have looked up the multiprocessing module and the Thread module, although I could not understand most of this. Do the Process() or Manager() functions have anything to do with this?
Lastly, I have 16 processors available for this. | Multiprocessing in python on Mac OS | 1.2 | 0 | 0 | 3,162 |
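A minimal multiprocessing sketch for this kind of workload, using a Pool sized to the available CPUs; the work function is a placeholder for the real computation:

```python
import multiprocessing as mp

def work(x):
    """Placeholder for the real CPU-heavy computation."""
    return x * x

if __name__ == '__main__':
    # Pool() defaults to one worker per CPU, e.g. 16 on the machine above.
    with mp.Pool() as pool:
        results = pool.map(work, range(10))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

pool.map splits the iterable across worker processes and returns results in input order, so no explicit semaphore is needed for this pattern; locks only become necessary when workers share mutable state.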