Q_Id: int64, 337 to 49.3M
CreationDate: string, length 23 to 23
Users Score: int64, -42 to 1.15k
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
Tags: string, length 6 to 105
A_Id: int64, 518 to 72.5M
AnswerCount: int64, 1 to 64
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: string, length 6 to 11.6k
Available Count: int64, 1 to 31
Q_Score: int64, 0 to 6.79k
Data Science and Machine Learning: int64, 0 to 1
Question: string, length 15 to 29k
Title: string, length 11 to 150
Score: float64, -1 to 1.2
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 8 to 6.81M
22,176,118
2014-03-04T15:28:00.000
0
1
0
1
python-2.7
22,176,917
1
false
0
0
That is by design. The process may still hold a capability (i.e. a handle to something external to the process) that would not be available to the process owner otherwise, so most debugging facilities are available to the root user only.
1
0
0
One of my python scripts has to be started as root but after some initialization changes its process ownership to something else by calling setuid/setgid. Works like a champ except for one thing: most of the files under /proc/pid are still owned by root, and most importantly /proc/pid/io is owned by root, so I can't monitor that process's I/O stats. Might there be some additional calls I can make to change the /proc ownership?
Is it possible to make ownership of /proc/pid/io follow changed process ownership?
0
0
0
35
22,177,872
2014-03-04T16:45:00.000
2
0
0
0
python,lxml,lxml.html
22,177,986
2
true
1
0
This will select the parent element of the XPath expression you gave: //*[@id="titleStoryLine"]/div/h4[text()="Genres:"]/..
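A minimal sketch of that parent step with lxml; the HTML string below is only a stand-in for the real IMDb markup:
```python
from lxml import html

page_source = """<div id="titleStoryLine"><div>
  <h4>Genres:</h4> <a href="#">Drama</a> <a href="#">Comedy</a>
</div></div>"""                        # stand-in for the fetched IMDb page

doc = html.fromstring(page_source)
h4 = doc.xpath('//*[@id="titleStoryLine"]/div/h4[text()="Genres:"]')[0]
parent = h4.getparent()                # equivalent to the trailing /.. step
genres = [a.text_content() for a in parent.xpath('.//a')]
print(genres)                          # ['Drama', 'Comedy']
```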
1
2
0
How can we traverse back to the parent in XPath? I am crawling IMDb to obtain the genre of films, and I am using elem = hxs.xpath('//*[@id="titleStoryLine"]/div/h4[text()="Genres:"]') Now, the genres are listed as anchor links, which are siblings of this tag. How can this be achieved?
Traversing back to parent with lxml.html.xpath
1.2
0
1
405
22,178,513
2014-03-04T17:13:00.000
0
1
0
0
python
22,181,923
2
false
1
0
I asked about a soft button earlier. If your computer program is password/access protected you could just store it all in a pickle/config file somewhere; I am unsure what the value of the SQL file is. Use last_push = time.time() and check the difference against the current push: if the difference in seconds is less than x, do not proceed; if it is bigger than x, reset last_push and proceed... or am I missing something?
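A minimal sketch of that last_push idea, assuming the timestamp is kept in memory (persist it to a file or the database between runs):
```python
import time

MIN_GAP = 24 * 60 * 60        # one day, in seconds
last_push = 0.0               # load/save this value between runs

def try_send_email(send_email):
    """Call send_email() only if enough time has passed since the last send."""
    global last_push
    now = time.time()
    if now - last_push < MIN_GAP:
        return False          # too soon: ignore the button press
    last_push = now
    send_email()
    return True
```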
2
0
0
I've a bit of code that involves sending an e-mail out to my housemates when it's time to top up the gas meter. This is done by pressing a button, picking whoever's next from the database, and sending an email. This is open to a lot of abuse as you can just press the button 40 times and send 40 emails. My plan was to add the time the e-mail was sent to my Postgres database, and any time the button is pressed afterwards it checks whether the last press was more than a day ago. Is this the most efficient way to do this? (I realise an obvious answer would be to password-protect the site so no outside users can access it and mess with the gas rota, but unfortunately one of my housemates is the type of gas-hole who'd do that.)
Check time since last request
0
1
0
76
22,178,513
2014-03-04T17:13:00.000
0
1
0
0
python
22,179,026
2
false
1
0
If this is the easiest solution for you to implement, go right ahead. Worst case scenario, it's too slow to be practical and you'll need to find a better way. Any other scenario, it's good enough and you can forget about it. Honestly, it'll almost certainly be efficient enough to serve your purposes. The number of users at any one time will very rarely exceed one. An SQL query to determine if the timestamp is over a day before the current time will be quick, enough so that even the most determined gas-hole(!) wouldn't be able to cause any damage by spam-clicking the button. I would be very surprised if you ran into any problems.
2
0
0
I've a bit of code that involves sending an e-mail out to my housemates when it's time to top up the gas meter. This is done by pressing a button, picking whoever's next from the database, and sending an email. This is open to a lot of abuse as you can just press the button 40 times and send 40 emails. My plan was to add the time the e-mail was sent to my Postgres database, and any time the button is pressed afterwards it checks whether the last press was more than a day ago. Is this the most efficient way to do this? (I realise an obvious answer would be to password-protect the site so no outside users can access it and mess with the gas rota, but unfortunately one of my housemates is the type of gas-hole who'd do that.)
Check time since last request
0
1
0
76
22,179,731
2014-03-04T18:13:00.000
0
0
1
1
python,pyinstaller
22,179,885
2
false
0
0
If the problem is indeed os.mkdir, you will need to run the program as administrator, or pick a non-protected folder to create the directory in... Unfortunately it sounds like you are not sure this is the problem; if you build your executable with the --console flag you will probably get output that tells you exactly what the problem is...
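A small sketch of making the mkdir step defensive so a --console build shows the real failure; the target path is just an example:
```python
import os

target = os.path.join(os.path.expanduser("~"), "myapp_data")   # example location

try:
    if not os.path.isdir(target):
        os.mkdir(target)
except OSError as exc:
    # In a --console build this output is visible and shows whether the
    # failure really is a permissions or path problem.
    print("could not create %s: %s" % (target, exc))
    raise
```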
2
0
0
I want to create an executable with PyInstaller (on Ubuntu). My program works, but when I create the executable, the program doesn't work. The problem is probably in the os.mkdir command. How can I solve it? Thank you.
pyinstaller - mkdir error
0
0
0
417
22,179,731
2014-03-04T18:13:00.000
0
0
1
1
python,pyinstaller
22,179,931
2
false
0
0
Hmm... If you want to create a directory where you want to store some files, you can create it by using os.system("mkdir $nameOfDirectory$"). Hope this helps; os.system() executes the command between the parentheses in the terminal.
2
0
0
I want to create an executable with PyInstaller (on Ubuntu). My program works, but when I create the executable, the program doesn't work. The problem is probably in the os.mkdir command. How can I solve it? Thank you.
pyinstaller - mkdir error
0
0
0
417
22,180,238
2014-03-04T18:40:00.000
3
0
0
1
python-2.7,openpyxl
22,196,176
2
false
0
0
Install openpyxl using pip: sudo pip install openpyxl
1
1
0
I would like to install openpyxl-1.8.4 on the Python 2.7 that comes with Mac Lion. My Python interpreter is under /System/Library/Frameworks/Python.framework/Versions/2.7/bin. I tried python2.7 setup.py install and also sudo python2.7 setup.py install, and it seems to me that none of them works. Am I missing something? I really appreciate your help. Thanks
Installing Openpyxl for preinstalled python 2.7 mac lion
0.291313
0
0
9,105
22,180,528
2014-03-04T18:57:00.000
1
1
0
0
python,tdd,tornado
22,180,586
2
false
0
0
If you want to mock sleep then you must not use it directly in your application's code. I would create a class method like System.sleep() and use that in your application. System.sleep() can then be mocked.
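One way to make the sleep mockable, roughly in the spirit of the wrapper suggested above; the module and function names here are made up:
```python
import time
import unittest

try:
    from unittest import mock        # Python 3.3+
except ImportError:
    import mock                      # Python 2: pip install mock

def wait_for_event(delay):
    """Application code calls this wrapper instead of time.sleep directly."""
    time.sleep(delay)

class TestWithoutWaiting(unittest.TestCase):
    @mock.patch("time.sleep")        # or patch the wrapper in your own module
    def test_sleep_is_skipped(self, fake_sleep):
        wait_for_event(10)           # returns immediately, no real 10 s wait
        fake_sleep.assert_called_once_with(10)

if __name__ == "__main__":
    unittest.main()
```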
1
1
0
I am writing an application that would asynchronously trigger some events. The test looks like this: set everything up, sleep for sometime, check that event has triggered. However because of that waiting the test takes quite a time to run - I'm waiting for about 10 seconds on every test. I feel that my tests are slow - there are other places where I can speed them up, but this sleeping seems the most obvious place to speed it up. What would be the correct way to eliminate that sleep? Is there some way to cheat datetime or something like that? The application is a tornado-based web app and async events are triggered with IOLoop, so I don't have a way to directly trigger it myself. Edit: more details. The test is a kind of integration test, where I am willing to mock the 3rd party code, but don't want to directly trigger my own code. The test is to verify that a certain message is sent using websocket and is processed correctly in the browser. Message is sent after a certain timeout which is started at the moment the client connects to the websocket handler. The timeout value is taken as a difference between datetime.now() at the moment of connection and a value in database. The value is artificially set to be datetime.now() - 5 seconds before using selenium to request the page. Since loading the page requires some time and could be a bit random on different machines I don't think reducing the 5 seconds time gap would be wise. Loading the page after timeout will produce a different result (no websocket message should be sent). So the problem is to somehow force tornado's IOLoop to send the message at any moment after the websocket is connected - if that happened in 0.5 seconds after setting the database value, 4.5 seconds left to wait and I want to try and eliminate that delay. Two obvious places to mock are IOLoop itself and datetime.now(). the question is now which one I should monkey-patch and how.
Mocking "sleep"
0.099668
0
1
481
22,181,860
2014-03-04T20:05:00.000
2
1
0
1
php,google-app-engine,python-2.7
22,183,262
2
false
1
0
Those runtimes (Python, PHP, Java, etc.) are isolated from each other and are tightly sandboxed. So when you deploy a Python app, for example, it doesn't have access to the PHP or Java runtime. So it's not possible to run PHP inside a Python sandbox, at least not on the App Engine platform.
1
1
0
I have a project that is already written in PHP, and now I am using Python on Google App Engine. I actually want to use the APIs that Google supports for Python, for example: datastore, blobstore... and also to save myself the time of rewriting the code in Python! So, is it possible to run a PHP script from Python code?
Using php inside python code ,google app engine
0.197375
0
0
361
22,182,710
2014-03-04T20:47:00.000
2
1
0
0
python,keyboard-shortcuts,raspberry-pi
22,183,462
1
true
0
0
Just record the time when each keypress comes in, and store the last couple. If the time since the last keypress is shorter than your required threshold, just ignore the new one.
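A small sketch of that idea; the two-second threshold comes from the question, everything else is illustrative:
```python
import time

MIN_INTERVAL = 2.0     # seconds between accepted presses of the same key
last_press = {}        # key -> timestamp of the last accepted press

def accept_keypress(key):
    """Return True if this press should be handled, False if it came too soon."""
    now = time.time()
    if now - last_press.get(key, 0.0) < MIN_INTERVAL:
        return False
    last_press[key] = now
    return True
```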
1
1
0
I have a raspberry pi that is setup to run different videos depending on the key press on a keyboard. If someone accidentally hits two keys at once, it causes the unit to temporarily freeze up. What is the best way and code to limit one key press of keys x,y,z for two seconds?
Python limit key input per second
1.2
0
0
277
22,184,040
2014-03-04T22:00:00.000
4
0
1
0
python,jython
22,184,374
1
true
0
0
There is no drawback in learning Jython - it is a conformant implementation of Python 2's syntax - and the differences from Python 3 are just the ones you will find documented everywhere. I don't know exactly where Jython stands in terms of implementation of Python's stdlib - but I believe it has most of Python 2.7's stdlib in place and working - some modules won't work, like "ctypes" for example. But as far as the language constructs go, you will be fine. (IMO it is a good tool, not only for what you want, but a nice tool for exploring Java's libraries themselves in an interactive way, since you can use any Java class from the Jython interactive shell.) As for the comments talking about unavailable modules: those are 3rd-party modules installable on CPython. You certainly don't need them to learn the language constructs, like you want. It is a trade-off: you lose a lot of the Python ecosystem, but you can use the Java ecosystem in its place. And certainly, when starting a new project, you can just use normal CPython with whatever modules you need: the language is the same.
1
3
0
If I learn Jython first, does that mean I will also be learning all of the Python language? When I say all, I mean all the basic constructs of the language, nothing else. What will I not learn about Python or CPython, if I start with Jython? Thanks.
What will I not learn about Python 2.7 and 3.x, if I start with Jython?
1.2
0
0
100
22,185,255
2014-03-04T23:16:00.000
0
0
1
0
python
22,185,832
4
false
0
0
You can use str.split() to do this, but that removes the spaces. If you want to preserve the spaces, use list(str), which gives you a list of the individual characters (spaces included).
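For example (the sample sentence is the one from the question):
```python
user_input = "Hey my name is Jon"   # stand-in for raw_input()/input()

tokens = user_input.split()         # splits on any run of whitespace
print(tokens)                       # ['Hey', 'my', 'name', 'is', 'Jon']

characters = list(user_input)       # per-character list, spaces preserved
```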
1
0
0
It's been a while since I last really used Python. How do you tokenize a user input? Let's say, for example: User input: Hey my name is Jon The tokenizer will split it based on the spaces
Tokenizing User Input in Python
0
0
0
1,864
22,185,277
2014-03-04T23:17:00.000
4
0
0
0
python,scipy,sparse-matrix
22,185,324
1
true
0
0
m.nnz is the number of nonzero elements in the matrix m, you can use m.size to get the total number of elements.
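A short sketch of computing density/sparsity from nnz and the shape, which works regardless of the sparse format:
```python
import scipy.sparse as sp

m = sp.lil_matrix((1000, 1000))
m[0, 1] = 3.0
m[42, 7] = 1.5

rows, cols = m.shape
density = m.nnz / float(rows * cols)   # fraction of stored (nonzero) entries
sparsity = 1.0 - density
print(density, sparsity)
```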
1
2
1
I have a large sparse matrix, implemented as a LIL sparse matrix from SciPy. I just want a statistic for how sparse the matrix is once populated. Is there a method to find this out?
Determine sparsity of sparse matrix ( Lil matrix )
1.2
0
0
157
22,186,027
2014-03-05T00:22:00.000
2
0
1
0
python
22,186,085
2
false
0
0
This is how Python works, you are free to redefine variables. If you expect it to behave like a statically typed language, you're only in for disappointment.
1
2
0
I just spent a whole day tracking down this bug: for idx, val in enumerate(some_list): for idx, otherval in enumerate(another_list): #the idx for the outer loop is overwritten #blah blah Coming from a strongly typed language background, I got bitten hard by this. In strongly typed languages I would get an error about variable re-declaration. I don't know why the interpreter doesn't issue a warning for this, and the design decision behind this. This is obviously a bug, I mean, what could possibly be the legit use of this construct? Is there any option to enable this sort of check? Thanks.
Python does not warn about variable re-declaration
0.197375
0
0
83
22,186,057
2014-03-05T00:24:00.000
0
0
0
0
websocket,ipython,ipython-notebook
26,615,734
1
false
1
0
Try reinstalling your iPython server or creating a new profile for the server
1
6
0
iPython was working fine until a few hours ago when I had to do a hard shutdown because I was not able to interrupt my kernel. Now opening any notebook gives me the following error: "WebSocket connection to could not be established. You will NOT be able to run code. Check your network connection or notebook server configuration." I have the latest version of Chrome and I am only trying to access local notebooks. The Javascript console gives me this: Starting WebSockets: [link not allowed by StackOverflow]/737c7279-7fab-467c-9e0f-cba16233e4b5 kernel.js:143 WebSocket connection failed: [link not allowed by StackOverflow]/737c7279-7fab-467c-9e0f-cba16233e4b5 notificationarea.js:129 Resource interpreted as Image but transferred with MIME type image/x-png: "[link not allowed by StackOverflow]/static/components/jquery-ui/themes/smoothness/images/ui-bg_glass_75_dadada_1x400.png". (anonymous function)
iPython notebook Websocket connection cannot be established
0
0
1
832
22,186,843
2014-03-05T01:43:00.000
25
1
1
0
python,pylint
22,224,042
3
true
0
0
For some reason pylint thinks the class isn't abstract (currently detection is done by checking for methods which raise NotImplementedError). Adding a comment like # pylint: disable=W0223 at the top of the module (to disable it only in this module) or class (only in this class) should do the trick.
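A tiny illustration of where the suppression comment goes; the class names are made up:
```python
class Base(object):
    def run(self):
        raise NotImplementedError      # what pylint keys on for "abstract"

class StillAbstract(Base):             # pylint: disable=W0223
    """Deliberately abstract; concrete subclasses override run()."""

class Concrete(StillAbstract):
    def run(self):
        return 42
```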
1
19
0
Pylint generates this error for subclasses of an abstract class, even when those subclasses are not themselves instantiated and the methods are overridden in the concrete subclasses. Why does Pylint think my abstract subclasses are intended to be concrete? How can I shut up this warning without getting out the hammer and disabling it altogether in the rc file?
Pylint W0223: Method ... is abstract in class ... but is not overridden
1.2
0
0
10,479
22,187,196
2014-03-05T02:19:00.000
3
0
1
0
python,python-3.x,python-3.3
22,187,241
1
true
0
1
python33.dll is found under c:\windows\system32
1
2
0
I'm trying to build the libpython33.a file so I can create my own C extensions using MinGW. For that, I need the python33.dll file to create a .def file and then convert that to the final libpython33.a. In my Python 2.7 installation, I can see the file called python27.dll along with python.exe and pythonw.exe. But in the Python 3 folder there are no DLLs. In the Python3/DLLs folder there's a file called python3.dll (not python33.dll). Is my Python installation damaged, or is that ok?
python33.dll not found in my Python installation path?
1.2
0
0
5,326
22,188,097
2014-03-05T03:46:00.000
3
0
0
0
python,primes
22,188,201
3
false
0
0
Without a complete enumeration of the relative primeness of all pairs of numbers between 0 and k (a huge task, and one that grows as the square of k), you can make an estimate by selecting a relatively large number of random pairs (p of them) and determining whether each is relatively prime. The assumption is that as the sample size increases, the proportion of relatively prime pairs tends towards the required probability value (i.e. if you take 10,000 sampled pairs and you find that 7,500 of them are relatively prime, then you'd estimate the probability of relative primeness at 0.75). In Python, random.randint(0, k) selects a (pseudo-)random integer between 0 and k.
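A minimal sketch of that sampling estimate (on Python 2, use fractions.gcd instead of math.gcd):
```python
import random
from math import gcd

def estimate_coprime_probability(k, p):
    hits = 0
    for _ in range(p):
        a = random.randint(0, k)
        b = random.randint(0, k)
        if gcd(a, b) == 1:
            hits += 1
    return hits / float(p)

# The estimate should drift towards 6/pi**2 (about 0.61) as k and p grow.
print(estimate_coprime_probability(10000, 100000))
```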
1
0
1
By generating and checking p random pairs. Somewhat confused on how to go about doing this. I know I could make an algorithm that determines whether or not two integers are relatively prime. I am also having difficulty understanding what generating and checking p random pairs means.
Estimate probability that two random integers between 0 and k are relatively prime
0.197375
0
0
519
22,191,236
2014-03-05T07:23:00.000
0
0
1
0
python,sqlite
22,222,873
2
false
0
0
If you are looking for concurrency, SQLite is not the answer. The engine doesn't perform well when concurrency is needed, especially when writing from different threads, even if the tables are not the same. If your scripts are accessing different tables, and they have no relationships at the DB level (i.e. declared FKs), you can separate them into different databases and then your concurrency issue will be solved. If they are linked, but you can link them at the app level (script), you can separate them as well. The best practice in those cases is implementing a lock mechanism with events, but honestly I have no idea how to implement such a thing in Python.
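For reference, a minimal sketch of the retry-in-a-loop approach the question mentions; the retry counts and delay are made up:
```python
import sqlite3
import time

def execute_with_retry(conn, sql, params=(), retries=20, delay=0.05):
    """Retry a statement while SQLite reports the database as locked."""
    for attempt in range(retries):
        try:
            return conn.execute(sql, params)
        except sqlite3.OperationalError as exc:
            if "locked" not in str(exc) or attempt == retries - 1:
                raise
            time.sleep(delay)
```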
1
0
0
I sometimes run python scripts that access the same database concurrently. This often causes database lock errors. I would like the script to then retry ASAP as the database is never locked for long. Is there a better way to do this than with a try except inside a while loop and does that method have any problems?
Sqlite3 and Python: Handling a locked database
0
1
0
619
22,191,409
2014-03-05T07:34:00.000
0
0
0
1
python,linux,pygtk
22,983,513
1
true
0
1
Just adding the solution to my own question! As suggested in the comments above, I installed the packages by downloading the source and compiling them on a machine which had glibc 2.5, then created a binary executable of my PyGTK app using PyInstaller. I had tried compiling the packages earlier too, but wasn't checking the ./configure output properly. The problem was that I was trying to install GTK and PyGTK without installing Cairo and Pango, so PyGTK skipped building the GTK packages because it did not find any Cairo package. This was mentioned in the ./configure output, but I had not checked it. Summarizing, to build PyGTK for Python follow these steps: install sqlite-devel (if SQLite is needed); install Python (2.7); install GTK (2.24.0), which requires glib (2.27.3), atk (1.29.2), cairo (1.8.10), pango (1.22.4), gdk-pixbuf (2.21.3); install PyGTK (2.24.0), which requires pygobject (2.28.3) and pycairo (1.8.10). All the above packages must be compiled with the same prefix, and you need to set the PYTHON and PYTHONPATH environment variables. The versions of the packages also play a major role; I've added the versions that worked for me in parentheses. There are many dependencies when installing some of the packages, so I had to install the following packages using yum: libxext, librender, gettext, zlib, libgtk2-devel
1
1
0
I have developed a PyGTK application and I need to release it to customers. I am using Python 2.7 and PyGTK 2.2 on Ubuntu. My question is: how can I bundle the required packages (Python, PyGTK, GObject) together with my application, so that even if these packages are not installed on the client machine I can run my application? I tried PyInstaller but the executable depends on glibc, i.e. an executable created with a higher glibc version will not work on a machine which has a lower glibc version. So is there any way to create a release directory which includes all the packages required, so that I can run my application on any system without installing the packages? Thanks in advance,
Releasing pygtk application
1.2
0
0
479
22,192,424
2014-03-05T08:35:00.000
4
1
1
0
emacs,elisp,python-mode
22,218,973
2
false
0
0
In the upcoming Emacs 24.4, auto-indentation is enabled by default thanks to electric-indent-mode. Since Emacs 24.4 has been in feature freeze for quite some time now, there should be no major breaking bugs left, so you could already make the switch.
1
4
0
I've recently declared emacs bankruptcy, and in rebuilding my config I switched from the old python-mode.el to the built-in python.el. One thing I'm missing is the old behaviour of auto-indenting to the correct level when hitting RET. Is there any way to re-enable this?
Enable auto-indent in python-mode (python.el) in Emacs 24?
0.379949
0
0
1,623
22,195,254
2014-03-05T10:37:00.000
1
0
0
0
python,opengl
22,212,424
1
false
0
1
My recommendation is PyOpenGL plus PyQt. Python plus either Pyglet or wxPython are possible alternatives. PyOpenGL (the Mike Fletcher version, right?) is the best Python OpenGL API I know of. It has support for OpenGL 3 and 4 and is just very nice and Pythonic. PyQt itself only supports OpenGL 2, but PyOpenGL will run inside a PyQt context. Since PyQt does have a Python 3 version, this combination should meet your needs. For the GUI stuff I prefer wxPython, but as you note that hasn't been updated for Python 3 yet. You could take a look at the wxPython Phoenix project, but that's very much a work in progress. Pyglet is also quite nice but has less GUI functionality than wxPython or PyQt. Think of it as the equivalent of GLUT. The Python 3 version is currently in alpha, but given that it's not a complete rewrite I'd expect it to be stable very soon. Hope this helps.
1
0
0
I want to write a visualization for some complex scientific data in Python. I have done a similar thing a few years ago in Objective-C/Cocoa/OpenGL. The visualization will contain some fancy shader programs, so at least OpenGL 3.0 is required. Also, I need to draw a window and do some mouse/keyboard handling. Some GUI widgets would be nice, but not required. Python 3 support is highly desirable. I looked into: PyOpenGL, which has no window/mouse/keyboard handling. PyGlet, which only supports Python 2.7. PyQt, which only supports OpenGL 2.0. PySide, which is pretty much dead, and stuck in Qt 4.7. wxPython, which only supports Python 2.7. PyGame, which is pretty much dead. Do you know any library that can do modern OpenGL and some windowing in Python 3?
OpenGL 3+ with Python 3
0.197375
0
0
1,787
22,199,798
2014-03-05T13:55:00.000
0
0
1
0
python,file-io
22,199,876
4
false
0
0
Have a single file writing process and have the others queue messages for it to write. This could be by event raising, message queues, pyro, publish, your choice!
1
2
0
I have an odd situation: I have inherited a project which has a number of Python processes running concurrently doing various things. These processes are spun-up independently of each other; multiprocessing/threading isn't involved. I'd like to add some functionality where they write to a single text file with a one-line update when certain events occur, which I can then parse much later on (probably on a separate server) to gather statistics. I'm looking for a way to append this line to the file that won't cause problems if one of the other processes is trying to do the same thing at the same time. I don't want to add any other software to the stack if possible. Suggestions?
Fastest way to write to a txt file in Python
0
0
0
535
22,202,071
2014-03-05T15:31:00.000
0
0
1
0
python-2.7,nlp,visual-studio-2013,nltk
25,806,619
1
false
0
0
It could be that your solution path contains some illegal character like %. It happened to me.
1
0
0
I'm trying to use NLTK with Visual Studio 2013, but every time I import nltk it gives me the error. I've already installed Python Tools and Python with NLTK. Thanks in advance
"The system cannot find the path specified" error on Visual Studio 2013
0
0
0
1,087
22,205,644
2014-03-05T18:05:00.000
1
1
1
0
python
22,205,974
2
false
0
0
JSON/pickle are very low-efficiency solutions, as they require at best several memory copies to get your data in or out. Keep your data binary if you want the best efficiency. The pure Python approach would involve using struct.unpack; however this is a little kludgy, as you still need a memory copy. Even better is something like numpy.memmap, which directly maps your file to a numpy array. Very fast, very memory efficient. Problem solved. You can also write your file using the same approach.
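A short sketch of the memmap approach; the file name and sizes are arbitrary:
```python
import numpy as np

arr = np.arange(1000000, dtype=np.int64)   # example data
arr.tofile("data.bin")                     # raw binary dump

# Later, map the file instead of reading it all into memory.
view = np.memmap("data.bin", dtype=np.int64, mode="r", shape=(1000000,))
print(view[42:45])
```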
1
0
0
I have a large integer array that I need to store in a file; what is the most efficient way so I can have quick retrieval speed? I'm not concerned with the efficiency of writing to disk, only reading. I am wondering if there is a good solution other than JSON and pickle?
Python - storing integer array to disk for efficient retrieval
0.099668
0
0
315
22,207,218
2014-03-05T19:23:00.000
0
0
0
0
python,c,image,video
22,207,404
1
false
0
1
I can think of OpenCV, but it may be overkill, depending on what kind of "manipulation" you need. It is a C/C++ library, but it also provides Python and Java wrappers.
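A rough OpenCV sketch of the read-manipulate-write loop; the file names and the grayscale step are only examples:
```python
import cv2

cap = cv2.VideoCapture("input.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
        int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
out = cv2.VideoWriter("output.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, size)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)       # the "manipulation"
    out.write(cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR))    # back to 3 channels

cap.release()
out.release()
```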
1
0
0
Basically I want to know the libraries which can do it, preferably in C or Python.
How to grab a frame from a video and then manipulate it and then reassemble the video with it?
0
0
0
34
22,211,193
2014-03-05T22:55:00.000
0
0
0
1
python,celery
22,226,475
1
false
1
0
After a bit of research, I ended up answering my own question: it was a bug which has been fixed in the later versions of celery.
1
1
0
I am trying to capture the worker-related events, but there is something weird going on in my application: all the task events are being generated and captured and the worker events as well, except for worker-offline event. Is there any specific setting that I need to make for this event to be generated? I am using Celery 3.0.23
Celery worker-offline event not generated
0
0
0
150
22,212,114
2014-03-06T00:01:00.000
0
0
0
0
python,django,django-cms
29,132,323
2
false
1
0
This is what works for me: ./manage.py dumpdata cms.page cms.title > pages.json
1
1
0
On my local machine I'm building a Django CMS site. I have about 50 CMS pages with page titles, custom slugs, and data. How do I dump just the CMS pages data on my local machine and load it into my staging environment? I've tried using a fixture with python manage.py dumpdata cms.page --indent=2 > cmspages.json, however, the page title, slug, and data are not in the json so when I load cmspages.json the pages are created but no data is loaded. How do I migrate my CMS pages to my staging environment?
Django CMS migrate pages
0
0
0
589
22,214,115
2014-03-06T03:14:00.000
2
0
0
0
python,windows,winapi,mouseevent
22,229,634
1
true
0
0
I do not know the PyWin32 package, but from a Win32 API point of view the thing should be easy: get an HWND of that window and post (PostMessage) the events you want to the window, e.g. WM_LBUTTONDOWN & WM_LBUTTONUP, WM_RBUTTONDOWN & WM_RBUTTONUP, WM_MOUSEMOVE... Look at the Win32 help for how to set the wParam & lParam data for the specific events. I controlled Diablo 3 this way, for example ;) Edit: there is no need for the window to be in focus or maximized for this. Edit Edit: maybe you should look at AutoIt, a widely used scripting language for Windows automation. I never used it but have read the name very often in this context; it may also be usable from Python.
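A hedged PyWin32 sketch of posting a click to an unfocused window; the window title and coordinates are placeholders, and some applications ignore posted input:
```python
import win32api
import win32con
import win32gui

hwnd = win32gui.FindWindow(None, "Untitled - Notepad")   # example title
x, y = 50, 60                                            # client-area coords
lparam = win32api.MAKELONG(x, y)

win32gui.PostMessage(hwnd, win32con.WM_LBUTTONDOWN, win32con.MK_LBUTTON, lparam)
win32gui.PostMessage(hwnd, win32con.WM_LBUTTONUP, 0, lparam)
win32gui.PostMessage(hwnd, win32con.WM_CHAR, ord("A"), 0)  # "type" a character
```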
1
2
0
I am trying to do some automation with Python, but I want to execute it and still be able to use my machine freely. So I am using PyWin32 to emulate some clicks and typing, but it only works if I run the script while the window is open and focused. Is there a way to make my script focus only on one window, and still be able to click on that window without taking control of the mouse, even if the window is not focused (if it works when it is minimized, that's best!)?
How to emulate mouse/keyboard events in a unfocused/minimized window?
1.2
0
0
1,997
22,214,166
2014-03-06T03:20:00.000
0
0
1
0
python-2.7
22,214,465
2
false
0
0
Is this ok? Using C: int my_min(int nums) { int i, x, min = 0; for (i = 0; i < nums; i++) { scanf("%d", &x); if (i == 0 || x < min) { min = x; } } return min; }
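Since the question asks for Python starting from def my_min(nums):, here is a plain Python version of the same loop:
```python
def my_min(nums):
    smallest = nums[0]
    for value in nums[1:]:
        if value < smallest:
            smallest = value
    return smallest

print(my_min([10, 3, 5, 6]))   # 3
```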
1
0
1
Given an array length 1 or more of ints, return the smallest value in the array. my_min([10, 3, 5, 6]) -> 3 The program starts with def my_min(nums):
Given an array length 1 or more of ints, return the smallest value in the array
0
0
0
58
22,214,989
2014-03-06T04:31:00.000
0
0
1
0
python,pygame
22,255,548
3
false
0
1
After considering both answers above, what I simply did was write a script where I loaded all the images and created a dictionary called image_dict with the image names as the keys, and saved the program as "load_images.py". Then from the second program I did a "from load_images import image_dict". Now the keys give me the images. Thank you.
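A minimal sketch of such a load_images.py module; the folder name and the .png filter are assumptions:
```python
# load_images.py
import os
import pygame

IMAGE_DIR = "images"            # wherever the image files live

image_dict = {
    name: pygame.image.load(os.path.join(IMAGE_DIR, name))
    for name in os.listdir(IMAGE_DIR)
    if name.endswith(".png")
}

# In the main program:
#   from load_images import image_dict     (note: no ".py" in the import)
#   player = image_dict["player.png"]
```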
1
0
0
I have a lot of images i need to load into a Pygame program. Can I load all these from a separate script which I load into my main program as a module and be used from within the main program? Any help much appreciated. Thanks in advance.
Loading pygame images as a module
0
0
0
733
22,215,361
2014-03-06T04:57:00.000
2
0
0
0
java,python,user-interface,photoshop
22,215,461
2
false
1
1
"but how do I take what I made in Photoshop add some java or python code to it to make certain things happen" - No, you cannot expect things to happen magically; for that you need to learn front-end technologies like HTML, CSS, JavaScript etc. and manually convert the UI that is in Photoshop into the corresponding code. This applies to web applications. If you want to build a desktop application, you need to use Swing, SWT etc. to achieve the same. "I have zero experience in this" - If this is the case, I recommend reading some basic tutorials; then you will get an idea of what to do.
1
0
0
so I am actually trying to get into software development and I currently have just spent a few days making a GUI in Photoshop. Now I know how to code in Java and Python but I have never implemented a GUI before. I am stuck on this because I know I can write the code and everything but how do I take what I made in Photoshop add some java or python code to it to make certain things happen? I have zero experience in this and I have only written code to accomplish tasks without the need for a GUI.
UI Designed in Photoshop for Software
0.197375
0
0
1,055
22,215,833
2014-03-06T05:31:00.000
0
0
0
1
python-2.7,cassandra,cqlsh
22,231,783
1
true
0
0
ValueError: invalid literal for int() with base 10: 'Unknown' - It happens when you run Cassandra from source and the version.properties file is missing. Just execute ant generate-eclipse-files in the cassandra folder; that will generate the file.
1
0
0
I configured the source code of Cassandra 2.0.3 in Eclipse. JDK: jdk1.7.0_45, Windows 7 32-bit, Python 2.7.3, but cqlsh just doesn't start. Please help me - what could the possible problem be?
Cassandra 2.0.3 cqlsh Fail to start
1.2
0
0
784
22,220,635
2014-03-06T09:47:00.000
1
0
1
1
python,tornado,coroutine
22,301,461
2
true
0
0
gen.py does call send(), but in Runner.run(), not in engine() or coroutine() as you might expect. It seems that engine() and coroutine() basically evaluate the wrapped function to see whether it returns a generator. If it does, it calls Runner.run() on the result, which internally seems to loop over send(). It's not exactly obvious what it's doing though...
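Not Tornado's actual code, but a stripped-down runner that shows where send() lives when a decorator drives a generator-based coroutine:
```python
def run(gen):
    result = None
    while True:
        try:
            yielded = gen.send(result)   # resume the coroutine
        except StopIteration:
            return
        result = resolve(yielded)        # stand-in for waiting on the IOLoop

def resolve(value):
    return value                         # pretend the "future" is already done

def example():
    a = yield 1
    b = yield a + 1
    print(a, b)                          # 1 2

run(example())
```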
1
0
0
I know coroutines in Python use get = yield ret, callee.send(), callee.next(). But I haven't found those things, such as callee.send(), in the Tornado source code gen.py. How can coroutines in Tornado be explained in an easy-to-understand way? Without a bigger picture, I just can't understand what Tornado does.
How to explain coroutine in Tornado and Python?
1.2
0
0
449
22,224,256
2014-03-06T12:17:00.000
0
0
0
0
python,emacs,elisp,ipython
22,227,180
2
false
0
0
Check the customization of py-switch-buffers-on-execute-p; it defaults to nil, but it seems to be set to t in your setup. By default the output buffer is displayed, but not selected.
1
0
0
I more or less set up ipython working with emacs. However, there is still something not very handy. Every time i select a region from my script buffer and do py-execute-region, after the execution, the cursor STAYS in the PYTHON buffer, rather than returning to my script buffer. Then i have to do a C-x o to move the cursor back to my script and keep writing things. Is there an option/fix that lets py-execute-buffer return to original buffer after execution? Thanks!
how to return to original buffer after py-execute-region
0
0
0
93
22,235,952
2014-03-06T21:05:00.000
0
0
1
0
documentation,python-sphinx,documentation-generation,autodoc
22,318,733
1
false
1
0
The best way to combine different languages in one Sphinx project is to write the docs without autodoc or other means of automatic generation. For the most part those are available only for Python, and even if some extension out there does allow other languages, you will be buried under different workflows before you even notice. Salvage your docs from the code and write them in a concise manner in a separate docs folder of your project, or even a separate repository. You can use the generic Sphinx directives like class or method, with no attachment to the code, for virtually any major programming language. I did a project like that myself, where I needed to combine C, C++ and Python code in one API, and it was done manually. If you create this kind of detached project, maintenance shouldn't be much of an issue; it's not much harder than the autodoc workflow. As for PDF and HTML - any Sphinx project allows that. See their docs for details on the different builders, like latexpdf or html.
1
2
0
The project I am working on ships a package that contains APIs for different languages: Java, Python, C#, and others. All these APIs share mostly the same documentation. The documentation should be available in PDF and HTML separately on our website. The user usually downloads/browses the one they are interested in. Currently we use sdocml, but we are not that satisfied, so we want to move to a more up-to-date tool and we are considering Sphinx. Looking at the Sphinx documentation I cannot clearly figure out: 1 - how to generate the docs for a certain API (for instance the Java one); 2 - does autodoc work for any domain?; 3 - is there a C# extension? Any help is most welcome!
Using Sphinx within a project using several programming languages
0
0
0
1,026
22,236,546
2014-03-06T21:36:00.000
2
0
1
0
python,pygame,importerror
22,236,742
1
false
0
1
You are trying to use a pygame distribution that is for python 2.7 with a different version of python. You should download and use the appropriate version. If installed correctly, there will be no need for sys.path.insert.
1
0
0
import sys sys.path.insert(1,"C:/Users/ravir_000/Desktop/python_CS105/python_CS105/Python27/Lib/site-packages") import math import random import pygame from pygame.locals import * pygame.init() How do you get rid of the error? Import error: Module use of python27.dll conflicts with this version of python. I was working on this a few hours ago and it was working fine. When I got into class it started giving me this error. I have tried to install/reinstall pyscripter and pygame but it still does not work. I am sure that my path to pygame is correct. Any ideas?
Import error: Module use of python27.dll conflicts with this version of python
0.379949
0
0
4,333
22,239,146
2014-03-07T00:32:00.000
0
0
1
0
python,string,path,substring
22,239,296
4
false
0
0
If you do not have a known list of options, I don't think you can have a 100% reliable algorithm. If we go with some reasonable guesses, I could think of something like this: if it does not start with -, it is a path; if it starts with -, remove the first letter, use os.path.exists on the remainder, and if it returns true then you have a path; repeat the above until you get to a non-alphanumeric character. Some cases where this breaks down: we really have a path, but it does not exist in the file system; we might eat the drive letter and check whether the path exists on the wrong drive; the option can contain non-alphanumeric characters; the parameter is a file or folder in the current directory.
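A rough sketch of the substring heuristic the question itself describes (find a drive letter or a leading ..); it is purely heuristic and will misfire on exotic option names:
```python
import re

def extract_path(arg):
    """Strip a leading option such as -B, -build or -H from arg, heuristically."""
    if not arg.startswith("-"):
        return arg
    rest = arg[1:]
    m = re.search(r"[A-Za-z]:[\\/]", rest)   # a drive letter marks the path start
    if m:
        return rest[m.start():]
    idx = rest.find("..")                    # otherwise assume it starts at ".."
    return rest[idx:] if idx >= 0 else rest

samples = [r"..\..\..\Workspace", r"C:\source\Workspace", r"-B..\..\source",
           r"-build..\..\work\source..\workspace", r"-HD:\abc\bds\Workspace"]
for s in samples:
    print(extract_path(s))
```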
1
2
0
I am working in python. my problem is I have several strings and I have to detect which part of them look like a path and then normalize the ones that are actual paths. e.g. I have several different strings like follows : 1. ..\..\..\Workspace 2. C:\source\Workspace 3. -B..\..\source 4. -build..\..\work\source..\workspace 5. -HD:\abc\bds\Workspace Basically some strings are paths straightaway and some have a trailing - attached to them. The length of option string is variable. In the above example strings 3, 4, 5 contain a path string pre-attached with an option -B, -build, -H respectively. The problem is how to detect from the above examples, which part of the string is a path and which is not. The options are not fixed so I can't just check given string's head for pre-specified options. The only way to go forward is taking the sub-string starting from .. or one character before : (e.g. C: or D:). So the question is that is there any generalized way of doing this or does python provide any function to take truncated portion of a string starting from any particular position ? any answer which satisfies above 5 examples is cool even though it is kinda overfitting. The code that I am currently using for e.g. 2 and 5 is : path = path[path.find(':')-1:] and for others its path = path[path.find('.'):]. But this is not generalized and uncool. So I am looking for a better algorithm or solution.
find if string is a path
0
0
0
6,380
22,239,230
2014-03-07T00:41:00.000
0
0
0
1
python,notify
27,495,023
1
true
0
0
Well... I figured out that in my organization Microsoft Exchange will not allow email sent from a script, except for mail originating from the server. I managed to send the email from the server and now I'm all set. Thanks for the suggestions. The ticket can be closed.
1
0
0
I have a task to monitor disk usage and notify a few users when it runs out of space. I wrote python script that checks disk usage. Unfortunately I can't use email notification from the script because company policy does not allow it. My question: Are there any other options that would allow me to notify selected users in my network about particular event i.e. full disk space? I mean some kind of message that will pop-up on the screen or etc. Please keep in mind that I practically don't have any administrative privileges in the network. Thanks
How to notify users in network
1.2
0
1
129
22,241,028
2014-03-07T03:46:00.000
0
0
0
1
python,linux,django,ubuntu,django-deployment
22,241,285
1
true
1
0
Here is my stack: Nginx + Gunicorn, with Supervisor for process management. Deployment: if you are planning frequent releases you should be looking at something like Fabric. Even if they are not frequent, Fabric is a very good tool to be aware of. People have preferences in terms of stack, but this one has been working great for me.
1
0
0
I am going to deploy my first Django application to a cloud server like Amazon EC2, and the system is Linux Ubuntu. But I cannot find a very good step-by-step tutorial for the deployment. Could you recommend one? And I also have the following questions: What is the most recommended environment? Gunicorn, Apache+mod_python or others? How do I deploy my code? I am using a Mac - should I use FTP or check out from my GitHub repository? Thank you!
Django Deployment on Linux Ubuntu
1.2
0
0
117
22,243,208
2014-03-07T06:32:00.000
2
0
0
0
python,forms,http-post,pyramid
22,251,970
3
false
1
0
I've managed to get it working. Silly me - coming from an ASP.NET background I forgot the basics of POST form submissions, namely that each form field needs a name attribute. As soon as I put them in, everything started working.
1
1
0
I'm currently working on a pyramid project, however I can't seem to submit POST data to the app from a form. I've created a basic form such as: <form method="post" role="form" action="/account/register"> <div class="form-group"> <label for="email">Email address:</label> <input type="email" class="form-control" id="email" placeholder="[email protected]"> <p class="help-block">Your email address will be used as your username</p> </div> <!-- Other items removed --> </form> and I have the following route config defined: # projects __init__.py file config.add_route('account', '/account/{action}', request_method='GET') config.add_route('account_post', '/account/{action}', request_method='POST') # inside my views file @view_config(route_name='account', match_param="action=register", request_method='GET', renderer='templates/account/register.jinja2') def register(self): return {'project': 'Register'} @view_config(route_name='account_post', match_param="action=register", request_method='POST', renderer='templates/account/register.jinja2') def register_POST(self): return {'project': 'Register_POST'} Now, using the debugger in PyCharm as well as the debug button in pyramid, I've confirmed that the initial GET request to view the form is being processed by the register method, and when I hit the submit button the POST request is processed by the *register_POST* method. However, my problem is that debugging from within the *register_POST* method, the self.request.POST dict is empty. Also, when I check the debug button on the page, the POST request is registered in the list, but the POST data is empty. Am I missing something, or is there some other way of access POST data? Cheers, Justin
Pyramid self.request.POST is empty - no post data available
0.132549
0
0
1,795
22,245,407
2014-03-07T08:48:00.000
0
1
0
0
python,postgresql,sqlalchemy,zeromq
22,247,025
2
false
1
0
This comes close to your second solution: create a buffer, drop the IDs from your ZeroMQ messages in there, and let your worker poll this ID pool regularly. If it fails to retrieve an object for an ID from the database, let the ID sit in the pool until the next poll, otherwise remove the ID from the pool. You have to deal somehow with the asynchronous behaviour of your system. When the IDs constantly arrive before the object is persisted in the database, it doesn't matter whether pooling the IDs (and re-polling the same ID) reduces throughput, because the bottleneck is earlier. An upside is that you could run multiple frontends in front of this.
1
1
0
Inside an web application ( Pyramid ) I create certain objects on POST which need some work done on them ( mainly fetching something from the web ). These objects are persisted to a PostgreSQL database with the help of SQLAlchemy. Since these tasks can take a while it is not done inside the request handler but rather offloaded to a daemon process on a different host. When the object is created I take it's ID ( which is a client side generated UUID ) and send it via ZeroMQ to the daemon process. The daemon receives the ID, and fetches the object from the database, does it's work and writes the result to the database. Problem: The daemon can receive the ID before it's creating transaction is committed. Since we are using pyramid_tm, all database transactions are committed when the request handler returns without an error and I would rather like to leave it this way. On my dev system everything runs on the same box, so ZeroMQ is lightning fast. On the production system this is most likely not an issue since web application and daemon run on different hosts but I don't want to count on this. This problem only recently manifested itself since we previously used MongoDB with a write_convern of 2. Having only two database servers the write on the entity always blocked the web-request until the entity was persisted ( which is obviously is not the greatest idea ). Has anyone run into a similar problem? How did you solve it? I see multiple possible solutions, but most of them don't satisfy me: Flushing the transaction manually before triggering the ZMQ message. However, I currently use SQLAlchemy after_created event to trigger it and this is really nice since it decouples this process completely and thus eliminating the risk of "forgetting" to tell the daemon to work. Also think that I still would need a READ UNCOMMITTED isolation level on the daemon side, is this correct? Adding a timestamp to the ZMQ message, causing the worker thread that received the message, to wait before processing the object. This obviously limits the throughput. Dish ZMQ completely and simply poll the database. Noooo!
ZeroMQ is too fast for database transaction
0
1
0
2,062
22,250,987
2014-03-07T13:09:00.000
1
0
1
0
mongodb,python-2.7,caching
22,251,094
1
true
0
0
Yes, but you will need to make one - how about memcached or Redis? However, as a precautionary note: MongoDB already has its recently used data cached in RAM by the OS, so unless you are doing some really resource-intensive aggregation query, or you are using the results outside of your working-set window, you might not actually find that it increases performance all that much.
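A tiny in-process sketch of the idea (for anything serious, use memcached or Redis as suggested); the query and TTL are made up:
```python
import time

CACHE_TTL = 60.0        # seconds a cached result stays valid
_cache = {}             # query key -> (timestamp, result)

def cached(query_key, run_query):
    """run_query is a zero-argument callable that actually hits MongoDB."""
    now = time.time()
    entry = _cache.get(query_key)
    if entry is not None and now - entry[0] < CACHE_TTL:
        return entry[1]
    result = run_query()
    _cache[query_key] = (now, result)
    return result

# usage with pymongo (names assumed):
# docs = cached("top_users", lambda: list(db.users.find().sort("score", -1).limit(10)))
```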
1
0
0
I am using MongoDB 2.4.6 and Python 2.7. I have frequently executed queries. Is it possible to save the results of these frequent queries in a cache? Thanks in advance!
How to cache Mongodb Queries?
1.2
1
0
1,377
22,254,612
2014-03-07T15:49:00.000
4
1
0
0
python,template-engine,mako,plaintext
27,262,018
1
false
1
0
If you add a backslash at the end of the line like this: "<% %>\", you can suppress the newline.
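A quick way to see the effect (assuming the trailing backslash behaves as described above, i.e. it consumes the newline that follows the line):
```python
from mako.template import Template

# '\\\n' puts a literal backslash at the very end of the template line.
src = '<% greeting = "Hi" %>\\\n${greeting}, this is a plain-text email.\n'

print(repr(Template(src).render()))
# Expected: 'Hi, this is a plain-text email.\n' with no leading blank line.
```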
1
2
0
In an existing application we are using Mako templates (unfortunately..). That works ok for HTML output since newlines do not matter. However, we now need to generate a text/plain email using a template - so any newlines introduced by control statements are not acceptable. Does Mako provide any options to make statement lines (i.e. those starting with %) not cause a newline in the output? I checked the docs but couldn't find anything so far...
Is there a way to use Mako templates for plain-text files where newlines matter?
0.664037
0
0
855
22,255,579
2014-03-07T16:33:00.000
36
0
1
1
python,macos,python-2.7,homebrew,brew-doctor
22,355,720
6
false
0
0
I also received this message. Something, at some point, installed /Library/Frameworks/Python.framework on my machine (the folder date was about 4 years old). I've chosen to remove it. Please note that the Apple-provided framework lives in /System/Library/Frameworks/Python.framework/
1
73
0
When I ran Homebrew's brew doctor (Mac OS X 10.9.2), I get the following warning message: Warning: Python is installed at /Library/Frameworks/Python.framework Homebrew only supports building against the System-provided Python or a brewed Python. In particular, Pythons installed to /Library can interfere with other software installs. Therefore, I ran brew install and followed the steps provided in the installation's caveats output to install Homebrew's version of Python. Running which python confirms that Homebrew's version of it is indeed at the top of my PATH. Output is /usr/local/bin/python. Despite all this, when I rerun brew doctor, I am still getting the same warning message. How do I suppress this warning? Do I need to delete the /Library/Frameworks/Python.framework directory from my computer? Am I just supposed to ignore it? Is there a different application on my computer that may be causing this warning to emit? Note that I don't have any applications in particular that are running into errors due to this warning from brew doctor. Also note that this warning message didn't always print out when I ran brew doctor, it was something that started to appear recently. Also, I am using Python 2.7 on my computer, trying to stay away from Python 3.
Homebrew brew doctor warning about /Library/Frameworks/Python.framework, even with brew's Python installed
1
0
0
45,838
22,256,760
2014-03-07T17:28:00.000
1
0
0
0
python,database,postgresql,gps,twisted
22,409,012
1
true
1
0
Databases do not just lose data willy-nilly. Not losing data is pretty much number one in their job description. If it seems to be losing data, you must be misusing transactions in your application. Figure out what you are doing wrong and fix it. Making and breaking a connection between your app and pgbouncer for each transaction is not good for performance, but is not terrible either; and if that is what helps you fix your transaction boundaries then do that.
1
0
0
Introduction: I am working on a GPS listener, a service built on Twisted Python. This app receives at least 100 connections from GPS devices and is working without issues; each GPS sends data every 5 seconds, containing positions. (Next week there must be at least 200 GPS devices connected.) Database: I am using a single PostgreSQL connection, shared between all connected GPS devices for saving and storing information; PostgreSQL uses pgbouncer as a pooler. Server: I am using a small PC as a server, and I need to find a way to have a high-availability application without losing data. Problem: With the high traffic on my app, I am having issues where, after 30 minutes, data starts to appear as not saved, even though queries are being executed on Postgres (I have checked that in the last activity). Fake solution: I have made a script that restarts my app, Postgres and pgbouncer, but this is the wrong solution, because each time I restart my app the GPS devices get disconnected and must reconnect again. Possible solution: I am thinking of a high-availability solution based on a data layer, where each time the database has to be restarted or something happens, a txt file stores the data from the GPS devices. To get this, I am thinking of not using a single shared connection; instead, a simple connection each time data must be saved, then testing the database, like a pooler, and if the database connection is broken, the txt file stores the data until the database is ok again, and another process reads the txt file and sends the info to the database. Question: Since I am thinking of an app data pooler and a single connection each time data must be saved, in order to try not to lose data, I want to know: is it ok to make a single connection each time data is saved for this kind of app, knowing that connections will be made more than 100 times every 5 seconds? As I said, my question is quite simple: which is the right way of working with DB connections in a high-traffic app - single connections per query, or a shared single connection for the whole app? The reason for asking this single question is to find the right way of working with DB connections considering memory resources. I am not looking to solve PostgreSQL issues or performance; I just want to know the right way of working with this kind of application, and that is the reason for giving as much detail as possible about my application. Note: One more thing - I have seen one vote to close this question as unclear, even though the question is titled with the word "question" and was marked in italics; now I have marked it in gray so that people who don't read the word "question" notice it. Thanks a lot
Right way to manage a high traffic connection application
1.2
1
0
270
22,257,531
2014-03-07T18:08:00.000
0
0
1
0
python,pyscripter
43,157,737
1
false
0
0
in shortcuts/interpreter/actClearContents = [shortcut-of-your-choice] => clear the console
1
0
0
Is there an API for controlling the PyScripter Python Interpreter window? I'd like basic control such as clearing the screen. I'm aware of (and using) the option to Clear output before run, but I'm specifically looking for an option to achieve this programmatically. I've tried using system calls to CLS but this doesn't work in the Python Interpreter window. I've also tried using ANSI escape codes but it looks like this too is not supported in the Python Interpreter window. Another solution I've considered is printing a lot of newlines but this solution doesn't truly clear the screen. Edit: Using another IDE/console/environment is not an option for me at this point.
Programmatically clear Python Interpreter window in Pyscripter?
0
0
0
551
22,262,073
2014-03-07T22:39:00.000
0
1
0
1
python,ubuntu
65,688,297
5
false
0
0
In my case it works after including this as the first line: #!/home/yourusername/anaconda3/bin/python You can check the appropriate path by running which python in your console. It is also necessary to change the file manager settings and configure it to run your scripts.
1
2
0
I've created a simple python script and therefor have a .py file. I can run it from the terminal but if I double click it only opens up in gedit. I've read this question other places and tried the solutions, however none have worked. I'm running Ubuntu 13.04, I've selected the box to make the file executable. I've even installed a fresh instance of Ubuntu 13.10 on another computer and it does the same thing. What might I be missing here?
execute python script from linux desktop
0
0
0
15,466
22,262,301
2014-03-07T22:56:00.000
2
0
1
0
python
22,262,376
2
false
0
0
sys.stdin.readline() blocks until you enter input.
1
1
0
I tried time.sleep(secs). However this has to sleep for a specific number of seconds; it is not totally paused. Does Python have any function to pause the script forever if there is no input? Thanks!
how to pause a script of python
0.197375
0
0
119
22,262,420
2014-03-07T23:04:00.000
0
0
0
1
python,google-app-engine,blogs
22,264,642
2
false
1
0
You say you already used that ID before. If you haven't deleted that app, just use that one to load your new code there. You will need to delete the existing datastore data etc.
1
0
0
I made a blog in Python and I am running it off of Google App Engine. When I started, I put a random ID, just because I was experimenting. Lately, my blog got a bit popular and I wanted to change the ID. I wanted to duplicate my app, but the problem is that I already registered that ID a while ago with google. How can I duplicate it even though the name already exists. Thanks, Liam
Duplicate App to an already existing ID on Google app engine
0
0
0
104
22,263,012
2014-03-07T23:58:00.000
0
0
0
1
python,space
22,263,086
1
false
0
0
This question doesn't make a lot of sense. Directories don't have "free space". As long as there is free space on the drive, you can use as much of it in a directory as you want.
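That said, if the goal is simply the free space on the volume containing a given directory, the same ctypes call used for drives accepts a directory path on Windows; a hedged sketch:
```python
import ctypes

def free_bytes_for(directory):
    """Free bytes on the volume that contains `directory` (Windows only)."""
    free_bytes = ctypes.c_ulonglong(0)
    ctypes.windll.kernel32.GetDiskFreeSpaceExW(
        ctypes.c_wchar_p(directory), ctypes.byref(free_bytes), None, None)
    return free_bytes.value

print(free_bytes_for(u"C:\\Users"))
```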
1
0
0
I have a task to check the free space of a particular directory on Windows. I figured out how to check the space of the drive; I was using the ctypes module. Could you please help me figure out which module or function I should use to get similar information for a directory on Windows? Thanks
check directory free space on windows using python
0
0
0
102
22,264,743
2014-03-08T03:45:00.000
0
0
0
0
python
22,264,987
2
true
0
0
There is no proper way. Use whatever works best for your particular scenario. Some common ways of storing user data include: text files (e.g. Windows INI, cfg files), binary files (sometimes compressed), the Windows registry, system environment variables, online profiles. There's nothing wrong with using text files. A lot of proper applications use them, exactly for the reason that they are easy to implement and additionally human readable. The only thing you need to worry about is making sure you have some form of error handling in place, in case the user decides to replace your config file's content with some rubbish.
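For the text-file route, the standard library's ConfigParser/configparser keeps this simple; the file name and keys below are only an example:
```python
import os
try:
    import configparser                   # Python 3
except ImportError:
    import ConfigParser as configparser   # Python 2

SETTINGS_PATH = os.path.join(os.path.expanduser("~"), ".myapp.ini")

def save_settings(license_key, theme):
    cfg = configparser.ConfigParser()
    cfg.add_section("myapp")
    cfg.set("myapp", "license_key", license_key)
    cfg.set("myapp", "theme", theme)
    with open(SETTINGS_PATH, "w") as fh:
        cfg.write(fh)

def load_settings():
    cfg = configparser.ConfigParser()
    cfg.read(SETTINGS_PATH)
    if not cfg.has_section("myapp"):
        return {}
    return dict(cfg.items("myapp"))
```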
1
0
0
I'm looking to store some individual settings to each user's computer. Things like preferences and a license key. From what I know, saving to the registry could be one possibility. However, that won't work on Mac. One of the easy but not so proper techniques are just saving it to a settings.txt file and reading that on load. Is there a proper way to save this kind of data? I'm hoping to use my wx app on Windows and Mac.
How to save program settings to computer?
1.2
0
0
101
22,265,689
2014-03-08T05:53:00.000
2
0
1
0
python
22,265,727
2
false
0
0
I think you may be confusing package with module. A Python module is always a single .py file. A package is essentially a folder which contains a special module always named __init__.py, plus one or more Python modules. Executing a package with python -m imports __init__.py and then runs the package's __main__.py module.
1
0
0
Normally when you execute a Python file you do python *.py, but what if the whole module contains many .py files inside - for example MyModule contains many .py files - and I do python -m MyModule $*? What would happen, as opposed to running an individual Python file?
Python the whole Module
0.197375
0
0
36
22,266,802
2014-03-08T08:06:00.000
16
0
0
0
python,user-interface,pyqt,qt-designer
57,951,576
4
false
0
1
In PyQt5 you can use: convert to a non-executable Python file: pyuic5 -o pyfilename.py design.ui; convert to an executable Python file: pyuic5 -x -o pyfilename.py design.ui; and also for resource files (qrc): convert qrc to a Python file: pyrcc5 -o pyfilename.py res.qrc. Note that if you run the command the wrong way, your .ui file will be lost, so you have to make a copy of your files :)
2
22
0
This .ui file is made by Qt Designer. It's just a simple UI. All the commands or codes for doing this on the websites I have looked through are not for windows.
How to convert a .ui file to .py file
1
0
0
110,222
22,266,802
2014-03-08T08:06:00.000
15
0
0
0
python,user-interface,pyqt,qt-designer
34,833,304
4
false
0
1
To convert from .ui to .py in Windows: go to the directory where your .ui file is. Shift + right-click your mouse. Click "Open command window here". This will open the cmd; check the directory of your pyuic4.bat file. Usually it is in C:\Python34\Lib\site-packages\PyQt4\pyuic4.bat. Write in the cmd: C:\Python34\Lib\site-packages\PyQt4\pyuic4.bat -x filename.ui -o filename.py (hit Enter). This will generate a new .py file for your .ui file in the same directory. Note: this command is for Python 3.4 and PyQt4. If you are using other versions you should change the numbers (e.g. PyQt5).
2
22
0
This .ui file is made by Qt Designer. It's just a simple UI. All the commands or codes for doing this on the websites I have looked through are not for windows.
How to convert a .ui file to .py file
1
0
0
110,222
22,267,925
2014-03-08T10:10:00.000
0
0
1
0
python,pygame,pip,enthought,canopy
36,794,680
2
false
0
0
I have one unconventional idea: just install pygame for the standalone Python you installed on your C:\ drive, then copy the pygame module found in python27\Lib\site-packages\pygame and paste it into Canopy's site-packages.
1
0
0
Alright, I tried all the previous suggestions as specified in similar questions asked regarding this topic on StackOverflow. But I've encountered the following problems: The default package manager has no pygame package available, hence it's of no use. Tried the enpkg method, but it exits sending out error messages which I can't read as they vanish quickly just before the window closes. Tried the pip and easy_install methods but they all spit out various errors like "Could not find any downloads specifying the requirements". I'm sure I haven't made any syntax mistake while issuing the shell commands. Hence I wish to ask for any way to install the Pygame package in Enthought Canopy, either completely manually or by any other way conceivable. If possible, a precise walkthrough would be greatly appreciated. And please don't close this question right away as it's 'NOT THE SAME' as others because I've tried the other alternatives but to no avail. Thanks for your time! Edit: Forgot to mention, I'm using Windows 7, 64 bit.
Can't Install Pygame for Enthought Canopy using pip or easy_install?
0
0
0
1,214
22,268,968
2014-03-08T11:53:00.000
1
1
0
0
python,web,mod-wsgi
22,933,666
1
false
1
0
You'll need JavaScript to do this. Possibility 1, data generated by the server: make a static HTML page with an empty div; place a piece of JavaScript code onto it that runs after the page is loaded; the JavaScript will contain a timer that downloads the output of your script, say every 5 seconds, using AJAX and sets your div's HTML to the result; the easiest way to get this working is probably to use the AJAX facilities in jQuery. Possibility 2, data generated by the client: if it is possible to have your dynamic output generated on the client by a piece of JavaScript code, this will scale better (since it takes the burden off the server); you may still load the input data needed to compute the dynamic output, formatted as JSON, by means of AJAX.
1
1
0
I have a Python application that I launch from a form using mod_wsgi. I would like to display in real time the output of the script, while it is running, to a web page. Does anybody know how I can do this?
Real time output of script on web page using mod_wsgi
0.197375
0
0
406
22,270,981
2014-03-08T15:06:00.000
0
0
1
0
python,json
22,271,663
3
false
0
0
It is a unicode string. You can treat it as a normal python string in most cases. If you really want to convert it to a normal string use str(). If you need to convert it to a bytes type, use object.encode(encoding) where encoding is the encoding of the Unicode character, usually 'utf-8'.
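For the graph use-case in the question, a small sketch of the round trip and of converting the keys back to integers for comparisons (the file name is a placeholder):

```python
import json

graph = {1: [2, 3], 2: [3], 3: []}

with open("graph.json", "w") as f:
    json.dump(graph, f)

with open("graph.json") as f:
    loaded = json.load(f)

print(loaded)  # keys come back as (unicode) strings: {'1': [2, 3], ...}

# Rebuild integer keys so comparisons with the original node ids work again.
graph_int = dict((int(k), v) for k, v in loaded.items())
print(graph_int == {1: [2, 3], 2: [3], 3: []})  # True
```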
3
1
0
I was doing some work in Python with graphs and wanted to save some structures in files so I could load them fast when I resumed work. One of those was a dictionary which I saved in JSON format using json.dump. When I load it back with json.load, the keys have changed from "1" to u'1'. Why is that? What does it mean? How can I change it? I use the keys later to make some lists which I will then use with the original graph whose nodes are the keys (in integer form), and it causes problems in comparisons...
Load JSON file in Python without the 'u in the key
0
0
0
1,037
22,270,981
2014-03-08T15:06:00.000
3
0
1
0
python,json
22,271,029
3
false
0
0
The u'' or u"" prefix just means that this is a unicode string, which in general should not be any problem unless you need a byte string. Though I would expect that your original data already was unicode, so it should not be a problem.
3
1
0
I was doing some work in Python with graphs and wanted to save some structures in files so I could load them fast when I resumed work. One of those was a dictionary which I saved in JSON format using json.dump. When I load it back with json.load, the keys have changed from "1" to u'1'. Why is that? What does it mean? How can I change it? I use the keys later to make some lists which I will then use with the original graph whose nodes are the keys (in integer form), and it causes problems in comparisons...
Load JSON file in Python without the 'u in the key
0.197375
0
0
1,037
22,270,981
2014-03-08T15:06:00.000
3
0
1
0
python,json
22,271,061
3
false
0
0
The u prefix signifies a Unicode string. In Python 2.x, you can convert it to a regular string with str(). That shouldn't really be necessary, though; u'1' == '1' because Python will do any conversion for you before comparing.
3
1
0
I was doing some work in Python with graphs and wanted to save some structures in files so I could load them fast when I resumed work. One of those was a dictionary which I saved in JSON format using json.dump. When I load it back with json.load, the keys have changed from "1" to u'1'. Why is that? What does it mean? How can I change it? I use the keys later to make some lists which I will then use with the original graph whose nodes are the keys (in integer form), and it causes problems in comparisons...
Load JSON file in Python without the 'u in the key
0.197375
0
0
1,037
22,274,048
2014-03-08T19:35:00.000
0
0
1
0
python,events,event-handling,spss
22,282,456
1
false
0
0
You cannot catch events using the programmability or scripting APIs. The only formatting in the Data Editor comes from variable formats (and column width), except for the special coloring used with missing data imputation. Tables in the Viewer, of course, have extensive cell formatting capabilities.
1
0
0
I want to write a hopefully short python script that would do things with the contents of a specially formatted text cell of an spss data table. How can I hook on the event that the user clicked into a data cell? How can I then get its value and do things I want? Does spss have clear-cut interface for doing this?
how to hook on a table cell in spss using python?
0
0
0
76
22,274,186
2014-03-08T19:45:00.000
1
0
0
0
python,matplotlib,color-mapping
22,274,333
1
true
0
0
imshow takes two arguments, vmin and vmax, for the color scale. You could do what you want by passing the same vmin and vmax to both your subplots. To find vmin you can take the minimum over all the values in both datasets (and the same reasoning, with the maximum, for vmax).
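A short sketch of that approach with two imshow subplots sharing one color scale (the data here is made up):

```python
import numpy as np
import matplotlib.pyplot as plt

a = np.random.rand(10, 10)        # maximum around 1.0
b = 10 * np.random.rand(10, 10)   # maximum around 10.0

vmin = min(a.min(), b.min())
vmax = max(a.max(), b.max())

fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.imshow(a, vmin=vmin, vmax=vmax)
im2 = ax2.imshow(b, vmin=vmin, vmax=vmax)
fig.colorbar(im2, ax=[ax1, ax2])  # one colorbar valid for both plots
plt.show()
```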
1
0
1
I want to do two subplots with imshow using the same colormap, by which I mean: if points in both plots have the same color, they correspond to the same value. But how can I get imshow to use only 9/10 or so of the colormap for the first plot, because its maximal value is only 9/10 of the maximal value in the second plot? Thanks, Alice
one colormap for multiple subplots with different maximum values
1.2
0
0
105
22,275,083
2014-03-08T21:05:00.000
3
0
0
0
python,django,django-south,database-migration
22,275,172
1
true
1
0
First, fake the initial migration: python manage.py migrate [yourapp] --fake 0001 then you can apply the migration to the db python manage.py migrate [yourapp] I'm assuming you ran convert_to_south on development, in which case production still wouldn't be aware of the migrations yet. convert_to_south automatically fakes the initial migration for you! If you were to just run migrate on production without faking, it should error.
1
2
0
I have an existing django app and would like to add a field to a model. But because the website is already in production, just deleting the database is not an option any more. These are the steps I took: pip install south added 'south' to INSTALLED_APPS python manage.py syncdb python manage.py convert_to_south [myapp] So now I have the initial migration and south will recognize the changes. Then I added the field to my model and ran: python manage.py schemamigration [myapp] --auto python manage.py migrate [myapp] Now I have the following migrations: 0001_initial.py 0002_auto__add_field_myapp_fieldname.py Which commands should I run on my production server now to migrate? Also should I install south first and then pull the code changes and migrations?
Add field to existing django app in production environment
1.2
0
0
215
22,276,347
2014-03-08T23:18:00.000
0
0
0
1
python,bash,shell,ascii
22,276,497
3
false
0
0
Assuming that you have control over ./main, that it is a shell script, and that the entire output from the Python script is to be interpreted as a single parameter, simply use "$*" (quoted) rather than $1 inside ./main.
1
3
0
I have a Python script script.py which I am using to generate the command line argument to another script exactly like so: ./main $(./script.py) The output of script.py may contain spaces (e.g. foo bar) which are being unintentionally interpreted by the shell. I want the argument to ./main to be the single string "foo bar". Of course I can solve this problem if I quote the argument to ./main, like this: ./main "$(./script.py)" But I can't and don't want to do that. (The reason is because ./main is being called without quotes from another script which I don't have control to edit.) Is there an alternative representation of the space character that my Python script can use, and that bash won't interpret?
Space character in python that won't be interpreted by bash
0
0
0
288
22,279,499
2014-03-09T07:23:00.000
1
0
0
0
python,postgresql,pgrouting
22,392,600
1
true
0
0
psycopg2 is an excellent Python module that allows your scripts to connect to your Postgres database and run SQL, whether as inputs or as fetch queries. You can have Python walk through the number of possible combinations between vertices and have it build the individual SQL queries as strings. It can then run through them and print your output into a text file.
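A rough sketch of that idea with psycopg2; the connection parameters, view name and output file below are placeholders, not the asker's actual schema:

```python
import psycopg2

conn = psycopg2.connect(dbname="routing", user="postgres",
                        password="secret", host="localhost")  # hypothetical credentials
cur = conn.cursor()

with open("distances.txt", "w") as out:
    # Loop over every vertex id and run the same query with a different start point.
    for start_vertex in range(1, 13):  # 12 vertices in the example network
        cur.execute("SELECT * FROM my_driving_distance_view WHERE source = %s",
                    (start_vertex,))  # hypothetical query
        for row in cur.fetchall():
            out.write("%s\t%s\n" % (start_vertex, row))

cur.close()
conn.close()
```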
1
0
0
I use driving_distance function in pgRouting to work with my river network. There are 12 vertices in my river network, and I want to get the distance between all of these 12 vertices, starting from vertex_id No.1. The result is fine, but I want to get other results using other vertices as starting point. I know it would not cost much time to change the SQL code everytime, but thereafter I would have more than 500 vertices in this river network, so I need to do this more efficiently. How to use python to get what I want? How can I write a python script to do this? Or there are existing python script that I want? I am a novice with programming language, please give me any detailed advice, thank you.
How to use python to loop through all possible results in postgresql?
1.2
1
0
684
22,279,611
2014-03-09T07:37:00.000
0
0
1
0
python,sorting
22,281,914
3
false
0
0
I would use a dictionary with the first element as key. Also look into ordered dictionaries.
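One way to sketch that with an ordered dictionary keyed on the first element, using the asker's data, which preserves the requested output shape:

```python
from collections import OrderedDict

rows = [('106', '1', '1', '43009'), ('106', '1', '2', '43179'), ('106', '1', '4', '43619'),
        ('171', '1', '3', '59111'), ('171', '1', '4', '57089'), ('171', '1', '5', '57079'),
        ('184', '1', '18', '42149'), ('184', '1', '19', '12109'), ('184', '1', '20', '12099')]

groups = OrderedDict()
for row in rows:
    groups.setdefault(row[0], []).append(row)  # group by the first element

result = list(groups.items())
print(result)
# [('106', [('106', '1', '1', '43009'), ...]), ('171', [...]), ('184', [...])]
```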
1
0
1
Hi, is there any way to group this list such that it would return a string (the first element) and a list within a tuple for each equivalent first element? I.e., [('106', '1', '1', '43009'), ('106', '1', '2', '43179'), ('106', '1', '4', '43619'), ('171', '1', '3', '59111'), ('171', '1', '4', '57089'), ('171', '1', '5', '57079'), ('184', '1', '18', '42149'), ('184', '1', '19', '12109'), ('184', '1', '20', '12099')] becomes: [('106', [('106', '1', '1', '43009'), ('106', '1', '2', '43179'), ('106', '1', '4', '43619')]), ('171', [('171', '1', '3', '59111'), ('171', '1', '4', '57089'), ('171', '1', '5', '57079')]), ('184', [('184', '1', '18', '42149'), ('184', '1', '19', '12109'), ('184', '1', '20', '12099')])]
Grouping a list in python specifically
0
0
0
53
22,282,264
2014-03-09T12:39:00.000
0
0
0
0
python,django,forms,session
22,282,298
2
true
1
0
The user is never in request.session. It's directly on the request object as request.user.
1
0
0
I have a form where users are submitting data. One of the fields is "author", which I automatically fill in by using the {{ user }} variable in the template; it will have the username if the user is logged in and AnonymousUser if not. This {{ user }} is not part of the form, just text. When a user submits the form I need to see which user, or whether it was an anonymous user, that submitted the data, so I thought I would use request.session['user'], but this doesn't work since the user key is not available. I tried setting the request.session['user'] value to the user object but the session dictionary doesn't accept objects; it says it's not JSON serializable. I thought the context processors would add this user variable so it was also available to the view, but it isn't. I need a user object and not just the user name to save to the database along with the form. Is there any way to extract the user object when it's not part of the form and the user is logged in? I need to associate the submitted data with a user or an anonymous user, and the foreign key requires an object, which I also think is most convenient to work with when extracting the data from the DB again. I don't see it being helpful to post any code here since this is a question of how to extract a user object after a POST and not specifically a problem with the code.
User session object is not available on POST in django?
1.2
0
0
123
22,283,494
2014-03-09T14:35:00.000
0
0
0
0
python,arrays,numpy
22,284,670
1
false
0
0
I think you're certainly on the right track (because python together with numpy is a great combination for this task), but in order to do what you want to do, you do need some basic programming skills. I'll assume you at least know a little about working in an interactive python shell and how to import modules etc. :-) Then probably the easiest approach is to only have a single numpy array: one that contains the sum of the data in your files. After that it's just a matter of dividing by the number of files that you have. So for example you could follow the following approach: loop over all files in a folder with a for-loop and the os.listdir method check if the file belongs to the data collection, for example by using something like str.endswith('.csv') convert the filename to a full path by using os.path.join read the data to a numpy array with numpy.loadtxt add this data to the array containing the sums, which is initialized with np.zeros before the loop keep a count of how many files you processed after the loop, calculate the means by dividing the sums by the number of files processed
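A sketch following those steps (the folder name, delimiter and output file are assumptions):

```python
import os
import numpy as np

folder = "data"          # hypothetical folder holding the .csv files
totals = np.zeros((5, 1200))
count = 0

for name in os.listdir(folder):
    if name.endswith(".csv"):
        data = np.loadtxt(os.path.join(folder, name), delimiter=",")
        totals += data
        count += 1

means = totals / count   # element-wise mean across all files
np.savetxt("means.csv", means, delimiter=",")
```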
1
0
1
In a folder, I have a number of .csv files (count varies) each of which has 5 rows and 1200 columns of numerical data(float). Now I want to average the data in these files (i.e. R1C1 of files gives one averaged value in a resulting file, and so on for every position (R2C2 of all files gives one value in the same position of resulting file etc.). How do I sequentially input all files in that folder into a couple of arrays; what functions in numpy can be used to just find the mean among the files (now arrays) that have been read into these arrays. Is there a better way to this? New to computing, appreciate any help.
Python averaging with ndarrays, csvfile data
0
0
0
59
22,284,018
2014-03-09T15:19:00.000
0
0
0
0
python,openerp,openerp-7
22,312,919
2
false
1
0
You can't restrict all access to the partners table (which contains suppliers and customers) as the system will probably not work at all. As of OpenERP 7, res.partner also contains contacts, and each user has a contact, so if you block all access you will probably break a lot of things (YMMV). You may be able to get away with allowing read access only. The easiest would be to alter the views of customers and suppliers to add a security group that most users don't belong to, so they can't see the view at all. You will have to track down the form views, but you can do this pretty easily through: Settings -> Technical -> User Interface -> Views and search for the object res.partner.
2
0
0
Can anyone help me to restrict employees from accessing the suppliers and also restrict them from the notes of customers in OpenERP 7. I am trying to setup a Contact Centre platform using OpenERP 7, where i can have Service Requests. Thanks in Advance
Restrict employees access to suppliers in OpenERP 7
0
0
0
296
22,284,018
2014-03-09T15:19:00.000
0
0
0
0
python,openerp,openerp-7
26,007,374
2
false
1
0
You could create a rule on the Partner object for your employee group - [('customer','=',True)] - that way only customers are shown, i.e. the only suppliers shown will be those who are also customers. You could then also take away the Suppliers menu for cosmetic reasons.
2
0
0
Can anyone help me to restrict employees from accessing the suppliers and also restrict them from the notes of customers in OpenERP 7. I am trying to setup a Contact Centre platform using OpenERP 7, where i can have Service Requests. Thanks in Advance
Restrict employees access to suppliers in OpenERP 7
0
0
0
296
22,285,294
2014-03-09T17:13:00.000
4
1
1
0
python,eclipse-plugin,pydev
22,285,769
1
true
0
0
You can open the dropdown menu on the toolbar (Ctrl + F10) and choose Setup Custom Filters. Here you should be able to add a custom filter for __init__.py files.
1
0
0
How to suppress __init__.py from showing in the project tree [Eclipse PyDev]?
Suppress __init__.py in Eclipse PyDev
1.2
0
0
417
22,288,044
2014-03-09T20:55:00.000
-1
0
0
1
python,google-app-engine,python-2.7,app-engine-ndb
22,288,117
1
true
1
0
It depends. Are the restrictions one-off or is any particular restriction going to be reused in many different fields/models? For one-off restrictions, the validator argument is simpler and involves less boilerplate. For reuse, subclassing lets you avoid having to repeatedly specify the validator argument.
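As an illustration of the validator route, a hedged sketch for ndb; the model and field names are assumptions, and the limits mirror the ones in the question:

```python
from google.appengine.ext import ndb

def _check_name(prop, value):
    # Validators receive the property and the value; raise to reject, or return the value to keep it.
    if len(value) > 100:
        raise ValueError("name is longer than 100 characters")
    return value

def _check_age(prop, value):
    if value < 0:
        raise ValueError("value cannot be negative")
    return value

class Person(ndb.Model):  # hypothetical model
    name = ndb.StringProperty(validator=_check_name)
    age = ndb.IntegerProperty(validator=_check_age)
```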
1
0
0
I'm using AppEngine NDB properties and I wonder what would be the best approach to: limit StringProperty to be not longer than 100 characters apply regexp validation to StringProperty prohibit IntegerProperty to be less than 0 Would it be best to use the validator argument or to subclass base ndb properties?
NDB validator argument vs extending base property classes
1.2
0
0
138
22,288,569
2014-03-09T21:43:00.000
2
0
1
1
python,django,shell,virtualenv,pycharm
64,667,900
29
false
0
0
I had the same problem with venv in PyCharm, but it is not a big problem! Just go into your venv's Scripts directory in the terminal (cd venv/Scripts/); you will see activate.bat. Run activate.bat in your terminal, and after this you will see your (venv) prompt.
9
139
0
I've set up PyCharm, created my virtualenv (either through the virtual env command, or directly in PyCharm) and activated that environment as my Interpreter. Everything is working just fine. However, if I open a terminal using "Tools, Open Terminal", the shell prompt supplied is not using the virtual env; I still have to use source ~/envs/someenv/bin/activate within that Terminal to activate it. Another method is to activate the environment in a shell, and run PyCharm from that environment. This is "workable" but pretty ugly, and means I have major problems if I switch environments or projects from PyCharm: I'm now using the totally-wrong environment. Is there some other, much-easier way to have "Tools, Open Terminal" automatically activate the virtual environment?
How do I activate a virtualenv inside PyCharm's terminal?
0.013792
0
0
226,921
22,288,569
2014-03-09T21:43:00.000
0
0
1
1
python,django,shell,virtualenv,pycharm
71,744,983
29
false
0
0
Had the same issue; this is how I solved it: all you gotta do is change the default terminal from PowerShell to CMD. Open PyCharm --> Go to Settings --> Tools --> Terminal. Change the Shell Path to C:\Windows\system32\cmd.exe from PS. Check the Activate virtualenv checkbox. Hit apply and open a new terminal.
9
139
0
I've set up PyCharm, created my virtualenv (either through the virtual env command, or directly in PyCharm) and activated that environment as my Interpreter. Everything is working just fine. However, if I open a terminal using "Tools, Open Terminal", the shell prompt supplied is not using the virtual env; I still have to use source ~/envs/someenv/bin/activate within that Terminal to activate it. Another method is to activate the environment in a shell, and run PyCharm from that environment. This is "workable" but pretty ugly, and means I have major problems if I switch environments or projects from PyCharm: I'm now using the totally-wrong environment. Is there some other, much-easier way to have "Tools, Open Terminal" automatically activate the virtual environment?
How do I activate a virtualenv inside PyCharm's terminal?
0
0
0
226,921
22,288,569
2014-03-09T21:43:00.000
0
0
1
1
python,django,shell,virtualenv,pycharm
70,631,199
29
false
0
0
Windows Simple and Easy Solution: In Pycharm inside the Projects menu on the left there will be folders. Find the Scripts folder Inside there you'll find activate.bat Right click on activate.bat Copy/Path Reference Select Absolute Path Find the Terminal tab located in the middle at the bottom of Pycharm. Paste it into the terminal console and press enter If you did it right the terminal path will have brackets (venv) around the name of the folder you activated. Before: "PS C:\" After: "(venv) C:\" Note The folder name may be different than yours the important part is the (brackets) :D
9
139
0
I've set up PyCharm, created my virtualenv (either through the virtual env command, or directly in PyCharm) and activated that environment as my Interpreter. Everything is working just fine. However, if I open a terminal using "Tools, Open Terminal", the shell prompt supplied is not using the virtual env; I still have to use source ~/envs/someenv/bin/activate within that Terminal to activate it. Another method is to activate the environment in a shell, and run PyCharm from that environment. This is "workable" but pretty ugly, and means I have major problems if I switch environments or projects from PyCharm: I'm now using the totally-wrong environment. Is there some other, much-easier way to have "Tools, Open Terminal" automatically activate the virtual environment?
How do I activate a virtualenv inside PyCharm's terminal?
0
0
0
226,921
22,288,569
2014-03-09T21:43:00.000
3
0
1
1
python,django,shell,virtualenv,pycharm
52,873,952
29
false
0
0
On Mac it's PyCharm => Preferences... => Tools => Terminal => Activate virtualenv, which should be enabled by default.
9
139
0
I've set up PyCharm, created my virtualenv (either through the virtual env command, or directly in PyCharm) and activated that environment as my Interpreter. Everything is working just fine. However, if I open a terminal using "Tools, Open Terminal", the shell prompt supplied is not using the virtual env; I still have to use source ~/envs/someenv/bin/activate within that Terminal to activate it. Another method is to activate the environment in a shell, and run PyCharm from that environment. This is "workable" but pretty ugly, and means I have major problems if I switch environments or projects from PyCharm: I'm now using the totally-wrong environment. Is there some other, much-easier way to have "Tools, Open Terminal" automatically activate the virtual environment?
How do I activate a virtualenv inside PyCharm's terminal?
0.020687
0
0
226,921
22,288,569
2014-03-09T21:43:00.000
1
0
1
1
python,django,shell,virtualenv,pycharm
37,982,649
29
false
0
0
If your PyCharm is 2016.1.4 or higher, you should use as the default path /K "<path-to-your-activate.bat>" (don't forget the quotes).
9
139
0
I've set up PyCharm, created my virtualenv (either through the virtual env command, or directly in PyCharm) and activated that environment as my Interpreter. Everything is working just fine. However, if I open a terminal using "Tools, Open Terminal", the shell prompt supplied is not using the virtual env; I still have to use source ~/envs/someenv/bin/activate within that Terminal to activate it. Another method is to activate the environment in a shell, and run PyCharm from that environment. This is "workable" but pretty ugly, and means I have major problems if I switch environments or projects from PyCharm: I'm now using the totally-wrong environment. Is there some other, much-easier way to have "Tools, Open Terminal" automatically activate the virtual environment?
How do I activate a virtualenv inside PyCharm's terminal?
0.006896
0
0
226,921
22,288,569
2014-03-09T21:43:00.000
1
0
1
1
python,django,shell,virtualenv,pycharm
27,075,910
29
false
0
0
I have a solution that worked on my Windows 7 machine. I believe PyCharm's terminal is a result of it running cmd.exe, which will load the Windows PATH variable, and use the version of Python that it finds first within that PATH. To edit this variable, right click My Computer --> Properties --> Advanced System Settings --> Advanced tab --> Environment Variables... button. Within the System variables section, select and edit the PATH variable. Here is the relevant part of my PATH before editing: C:\Python27\; C:\Python27\Lib\site-packages\pip\; C:\Python27\Scripts; C:\Python27\Lib\site-packages\django\bin; ...and after editing PATH (only 3 lines now): C:[project_path]\virtualenv-Py2.7_Dj1.7\Lib\site-packages\pip; C:[project_path]\virtualenvs\virtualenv-Py2.7_Dj1.7\Scripts; C:[project_path]\virtualenvs\virtualenv-Py2.7_Dj1.7\Lib\site-packages\django\bin; To test this, open a new windows terminal (Start --> type in cmd and hit Enter) and see if it's using your virtual environment. If that works, restart PyCharm and then test it out in PyCharm's terminal.
9
139
0
I've set up PyCharm, created my virtualenv (either through the virtual env command, or directly in PyCharm) and activated that environment as my Interpreter. Everything is working just fine. However, if I open a terminal using "Tools, Open Terminal", the shell prompt supplied is not using the virtual env; I still have to use source ~/envs/someenv/bin/activate within that Terminal to activate it. Another method is to activate the environment in a shell, and run PyCharm from that environment. This is "workable" but pretty ugly, and means I have major problems if I switch environments or projects from PyCharm: I'm now using the totally-wrong environment. Is there some other, much-easier way to have "Tools, Open Terminal" automatically activate the virtual environment?
How do I activate a virtualenv inside PyCharm's terminal?
0.006896
0
0
226,921
22,288,569
2014-03-09T21:43:00.000
1
0
1
1
python,django,shell,virtualenv,pycharm
69,456,332
29
false
0
0
I had a similar problem of not having venv activated in the PyCharm terminal (PyCharm version 2021.2.2). Just simply follow the steps below. Go to "Settings -> Tools -> Terminal", then at the bottom of that window check whether "Activate virtualenv" is ticked; if not, make sure that box is ticked. Then in the middle part of that window check whether the shell path is set to "cmd.exe"; if not, set it to "cmd.exe" (it will have its path associated with it, so no need to do anything, just click on "cmd.exe" from the drop down list), then click on the "apply" button below and click "ok". Now it's done; just close your opened terminal and re-open it. You will see "venv" in front of your project path. P.S.: Don't mind the double quotes in my answer; they are just for highlighting the text, nothing more.
9
139
0
I've set up PyCharm, created my virtualenv (either through the virtual env command, or directly in PyCharm) and activated that environment as my Interpreter. Everything is working just fine. However, if I open a terminal using "Tools, Open Terminal", the shell prompt supplied is not using the virtual env; I still have to use source ~/envs/someenv/bin/activate within that Terminal to activate it. Another method is to activate the environment in a shell, and run PyCharm from that environment. This is "workable" but pretty ugly, and means I have major problems if I switch environments or projects from PyCharm: I'm now using the totally-wrong environment. Is there some other, much-easier way to have "Tools, Open Terminal" automatically activate the virtual environment?
How do I activate a virtualenv inside PyCharm's terminal?
0.006896
0
0
226,921
22,288,569
2014-03-09T21:43:00.000
11
0
1
1
python,django,shell,virtualenv,pycharm
69,724,636
29
false
0
0
Somehow a small trick worked for me. All you gotta do is change the default terminal from PowerShell to CMD. Open PyCharm --> Go to Settings --> Tools --> Terminal. Change the Shell Path to C:\Windows\system32\cmd.exe from PS. Check the Activate virtualenv checkbox. Hit apply and open a new terminal. It's 2021; you don't need to specify the file path or add the environment variable. Update: it's 2022 and I ran into the same issue. Fix: follow the above steps and, in addition, make sure you have selected your virtual env's python.exe as your project's Python interpreter, and that's it.
9
139
0
I've set up PyCharm, created my virtualenv (either through the virtual env command, or directly in PyCharm) and activated that environment as my Interpreter. Everything is working just fine. However, if I open a terminal using "Tools, Open Terminal", the shell prompt supplied is not using the virtual env; I still have to use source ~/envs/someenv/bin/activate within that Terminal to activate it. Another method is to activate the environment in a shell, and run PyCharm from that environment. This is "workable" but pretty ugly, and means I have major problems if I switch environments or projects from PyCharm: I'm now using the totally-wrong environment. Is there some other, much-easier way to have "Tools, Open Terminal" automatically activate the virtual environment?
How do I activate a virtualenv inside PyCharm's terminal?
1
0
0
226,921
22,288,569
2014-03-09T21:43:00.000
43
0
1
1
python,django,shell,virtualenv,pycharm
23,730,300
29
false
0
0
For Windows users using PyCharm and a virtual environment, you can use the /k parameter to cmd.exe to set the virtual environment automatically. Go to Settings, Terminal, Default shell and add /K <path-to-your-activate.bat>. I don't have the reputation to comment on the earlier response, so posting this corrected version. This really saves a LOT of time. Update: note that PyCharm now supports virtual environments directly and it seems to work well for me, so my workaround is not needed anymore.
9
139
0
I've set up PyCharm, created my virtualenv (either through the virtual env command, or directly in PyCharm) and activated that environment as my Interpreter. Everything is working just fine. However, if I open a terminal using "Tools, Open Terminal", the shell prompt supplied is not using the virtual env; I still have to use source ~/envs/someenv/bin/activate within that Terminal to activate it. Another method is to activate the environment in a shell, and run PyCharm from that environment. This is "workable" but pretty ugly, and means I have major problems if I switch environments or projects from PyCharm: I'm now using the totally-wrong environment. Is there some other, much-easier way to have "Tools, Open Terminal" automatically activate the virtual environment?
How do I activate a virtualenv inside PyCharm's terminal?
1
0
0
226,921
22,290,625
2014-03-10T01:25:00.000
0
0
0
0
python,git,python-2.7,subprocess
22,291,439
1
true
0
0
I switched the protocol of my repo to ssh from https. This removes the need to enter a password
1
1
0
If I run git push origin master, it asks for my Github username and password. How would I put these in with call() like this call(['git', 'push', 'origin', 'master'])? When I look at the git-push man page, it says nothing about these being arguments.
input inside script with Subprocess.call()
1.2
0
0
112
22,292,424
2014-03-10T05:03:00.000
1
0
0
0
python,django,django-1.6
22,292,916
1
true
1
0
This is a good question. The docs say Note that this processor is not enabled by default; you’ll have to activate it. but no explanation. My take on it is due to django's intense desire to separate view logic from the template. The request object is the gateway to all data that view logic is built from (given what the browser sent us, do X, Y, Z) - therefore allowing it in the templates is akin to giving the template huge amounts of control which should be placed in the view under normal circumstances. The idea is to populate the template context with specifics, not everything. Removing them is just some more encouragement that "most things should be done in the view". The common django.contrib apps mostly don't rely on it, if it's not required by default. And of course, that's further proof the request object isn't necessary in the template except for special use cases. That's my take, anyways.
1
5
0
I was troubleshooting a problem with obtaining the request obj with a new project and realized "django.core.context_processors.request" was commented in vanilla installs of Django. Like the title suggests, why would this seemingly helpful context processor be turned off by default? Is it an issue with performance? Is it an issue with security? Is it somehow redundant? Some mild searching has not turned up anything for me, but I thought I'd ask here.
Why is "django.core.context_processors.request" not enabled by default?
1.2
0
0
160
22,292,615
2014-03-10T05:23:00.000
1
0
1
0
python,udp,sendto
22,292,817
2
false
0
0
The length of a UDP packet is limited; if your data is too large, the return value can't equal the length. There are also some failure situations, such as an insufficient send buffer or a network fault. The returned size only means the number of bytes that have been handed to the send buffer.
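A tiny sketch showing the return value for a small datagram (the destination address is a placeholder):

```python
import socket

data = b"hello"
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sent = sock.sendto(data, ("127.0.0.1", 9999))  # hypothetical destination
print(sent == len(data))  # True for a small datagram accepted by the send buffer
sock.close()
```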
1
3
0
In my recent project, I need to use the UDP protocol to transmit data. If I send data by using size = s.sendto(data, (<addr>, <port>)), will the UDP protocol ensure that the data is packed into one UDP packet? If so, will size == len(data) always be True? Is there anything that I misunderstood? More precisely, will sendto() split my data into several smaller chunks and then pack each chunk into a UDP packet to transmit?
Python - Is sendto()'s return value useless?
0.099668
0
1
2,494
22,294,229
2014-03-10T07:21:00.000
0
0
0
0
python,qt,parent-child,mainwindow
30,509,870
2
false
0
1
You could pass a reference (self) to your child window.
1
1
0
How can I make a childWindow access and/or modify an attribute from his MainWindow? I have a MainWindow that opens different childWindows, dependeing on the pressed button on the MainWindow. I would like any of the childWindows to be able to modify some attributes of the MainWindow, but I cannot get the good way to access them.
attributes of Mainwindow from childWindow
0
0
0
82
22,295,895
2014-03-10T09:05:00.000
0
1
0
0
python,serial-port,pyserial
24,922,412
2
false
0
0
Are you looking for urllib.request? If you are using Python 2.7, when you ask for requests you import urllib and you don't actually use request, but its methods are available on the urllib handle; so for instance urllib.urlopen("http://google.com") will work in Python 2.7.x, whereas urllib.request.urlopen("http://google.com") will work in Python 3.x.x.
1
1
0
This is the second time today this has happened. I tried to import requests earlier and I got an ImportError: No module named requests. Same thing for serial. I googled the crap out of this and nothing I've found works. Any ideas as to what's going on? I'm trying to use pyserial to take input from an Arduino.
Python no module named serial / no module named requests
0
0
1
2,633
22,300,744
2014-03-10T12:56:00.000
0
0
0
0
sql,oracle,shell,oracle10g,python-2.x
23,064,270
1
false
0
0
I used a bash script to produce the csv file and then manipulated the data with Python. That was the only solution I could think of with Python 2.3.
1
0
0
Unfortunately I have RHEL 3 and Python 2.3 and no chance of upgrading. Does anyone have any examples of how to interact with the DB: opening sqlplus, logging in, and then I only want a simple SELECT query to bring the data into a CSV, and then I can figure out the rest. Any ideas please?
Simple query to Oracle SQL using Python 2.3
0
1
0
319
22,301,208
2014-03-10T13:16:00.000
-2
0
0
1
python,celery
29,481,481
2
false
0
0
Celery is not intended to run long tasks, because a long task blocks the worker for your task only. I recommend re-arranging your logic, making the task invoke itself instead of running the loop. Once shutdown is in progress, the current chunk will complete, and the task can resume at the same point where it stopped after the Celery restart. Also, having the task split into chunks, you will be able to divert the task to another worker/host, which is probably what you would like to do in the future.
1
4
0
I'm using Celery 3.X and the RabbitMQ backend. From time to time I need to restart Celery (to push a new source code update to the server). But there is a task with a big loop and try/catch inside the loop; it can take a few hours to accomplish the task. Nothing critical will happen if I stop it and restart it later. QUESTION: The problem is every time after I stop the workers (via sudo service celeryd stop) I have to KILL the task manually (via kill -9); the task ignores SIGTERM from the worker. I've read through the Celery docs & Stack Overflow but I can't find a working solution. Any ideas how to fix the problem?
Notify celery task to stop during worker shutdown
-0.197375
0
0
1,753
22,302,439
2014-03-10T14:07:00.000
0
0
0
1
java,python,google-app-engine
22,304,153
1
false
1
0
No, you can't if you want to store them in static storage. You can store them somewhere non-static, but you will lose the many advantages of having it as static content.
1
0
0
Can I upload a static HTML file to the templates folder without re-deploying the app? Offline I create an HTML file which I want to upload to my Google App Engine app, which displays the HTML as per URLs. But I don't want to deploy my site every time I upload a new file. Any suggestion would be helpful.
How can I upload a static HTML to templates folder of GAE app?
0
0
0
60
22,303,025
2014-03-10T14:31:00.000
2
0
1
0
python,list,numpy,scipy,scientific-computing
22,303,159
3
false
0
0
If you don't want to change the values after initialization, then the best option is a list of tuples. If you want to change values, use a list of lists. The advantage of a list is that order is maintained. If you use a dict you have to give each value a key, and a plain dict is not made for ordered operations; there are ways to get an ordered dict, like the OrderedDict class.
1
2
0
Lets say that I have a string that represents points on the x, y axis "(1,2), (10,20), (100,200)" what is the best method of abstracting the coordinate integers for numerical analysis? So this is kind of a two part question -- First which form is the best for math/scientific computing purposes? (e.g. a list of lists [[1,2], [10,20]..., OR two lists x = [1,10,100] y = [2,20..., OR dictionary OR some other form?) Second what is the best Python way to obtain that form? Thanks in advance! PS if there is an easy answer to this in numpy or scipy I would love to know, but I would also like to know how to solve it without using either.
Method for reading a string of coordinates and taking out integers (Python)
0.132549
0
0
138
22,308,688
2014-03-10T18:46:00.000
1
0
0
0
python,python-2.7,pandas
22,323,918
1
true
0
0
This is a 'bug' in that I think this is a debugging message. To work around it, pass engine='python' to disable the message.
1
1
1
When using pandas.read_csv setting sep = None for automatic delimiter detection, the message Using Python parser to sniff delimiter is printed to STDOUT. My code calls this function often so this greatly annoys me, how can I prevent this from happening short of going into the source and deleting the print statement. This is with pandas 0.13.1, Python 2.7.5
Using Python parser to sniff delimiter Spammed to STDOUT
1.2
0
0
113
22,309,352
2014-03-10T19:21:00.000
1
0
1
0
python,setuptools,pkg-resources
25,874,516
1
false
0
0
How to get it to iterate over sys.path? pkg_resources.WorkingSet(None).iter_entry_points Why does it behave differently? Probably because the installed package forces at least the meta data about itself into memory. Looking at the code, my guess would be that your main module has a requires attribute, but that's only an educated guess. Anyway, to force the "installed" behaviour while developing, it should be enough to run python setup.py develop
1
9
0
I have a Python app that looks for plugins via pkg_resources.iter_entry_points. When run directly from source checkout, this will find anything in sys.path that fits the bill, including source checkouts that happen to have an applicable .egg-info for setuptools to find. Yet when I install the package anywhere via python setup.py install, it suddenly ceases to detect everything enumerated in sys.path, instead only finding things that are installed alongside it in site-packages. Why is pkg_resources.iter_entry_points behaving differently for the vanilla source checkout v. the installed application? How can I make it traverse everything in sys.path, as it does in development?
Why does my installed app handle pkg_resources.iter_entry_points differently than in source?
0.197375
0
0
458
22,309,886
2014-03-10T19:49:00.000
1
0
1
0
python,modular-arithmetic
22,309,945
2
false
0
0
No, there is no such built-in function. You can certainly write your own, however.
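For instance, one way to write it yourself, assuming the b parts of 1..a are equal-sized as in the examples:

```python
def part_index(a, b, x):
    """Return which of the b equal parts of 1..a the number x falls into."""
    return (x - 1) * b // a + 1

print(part_index(15, 3, 2), part_index(15, 3, 7), part_index(15, 3, 15))  # 1 2 3
```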
1
0
0
For example, I have a reference number a = 15 and b= 3. If x=2, f(a,b,x) = 1 because if one divide 15 into 3 parts, the number 2 is in the first part. If x=7, f(a,b,x) = 2 because if one divide 15 into 3 parts, the number 7 is in the second part. If x=15, f(a,b,x) = 3 because if one divide 15 into 3 parts, the number 15 is in the third part. If x<0 or >15 the results are irrelevant to me. Is there any built-in function like this?
What function has these integer results?
0.099668
0
0
83
22,312,452
2014-03-10T22:12:00.000
4
0
1
0
javascript,python,ajax,mongodb,unicode
22,315,740
2
true
1
0
Does this need to be escaped to "&lt;script&gt;alert(&#x27;hi&#x27;);&lt;&#x2F;script&gt;" before sending it to the server? No, it has to be escaped like that just before it ends up in an HTML page - step (5) above. The right type of escaping has to be applied when text is injected into a new surrounding context. That means you HTML-encode data at the moment you include it in an HTML page. Ideally you are using a modern templating system that will do that escaping for you automatically. (Similarly if you include data in a JavaScript string literal in a <script> block, you have to JS-encode it; if you include data in in a stylesheet rule you have to CSS-encode it, and so on. If we were using SQL queries with data injected into their strings then we would need to do SQL-escaping, but luckily Mongo queries are typically done with JavaScript objects rather than a string language, so there is no escaping to worry about.) The database is not an HTML context so HTML-encoding input data on the way to the database is not the right thing to do. (There are also other sources of XSS than injections, most commonly unsafe URL schemes.)
1
7
0
I'm trying to determine the best practices for storing and displaying user input in MongoDB. Obviously, in SQL databases, all user input needs to be encoded to prevent injection attacks. However, my understanding is that with MongoDB we need to be more worried about XSS attacks, so does user input need to be encoded on the server before being stored in mongo? Or, is it enough to simply encode the string immediately before it is displayed on the client side using a template library like handlebars? Here's the flow I'm talking about: On the client side, user updates their name to "<script>alert('hi');</script>". Does this need to be escaped to "&lt;script&gt;alert(&#x27;hi&#x27;);&lt;&#x2F;script&gt;" before sending it to the server? The updated string is passed to the server in a JSON document via an ajax request. The server stores the string in mongodb under "user.name". Does the server need to escape the string in the same way just to be safe? Would it have to first un-escape the string before fully escaping so as to not double up on the '&'? Later, user info is requested by client, and the name string is sent in JSON ajax response. Immediately before display, user name is encoded using something like _.escape(name). Would this flow display the correct information and be safe from XSS attacks? What about about unicode characters like Chinese characters? This also could change how text search would need to be done, as the search term may need to be encoded before starting the search if all user text is encoded. Thanks a lot!
Encoding user input to be stored in MongoDB
1.2
1
0
2,232
22,314,445
2014-03-11T01:01:00.000
0
0
1
0
python-3.x,python-idle
62,058,126
2
false
0
0
You will find an idle.py file at C:\Python33\Lib\idlelib. There are two such idle files and both of them work; the first opens a command prompt alongside IDLE and the other doesn't.
1
1
0
I am running python 3.3.5. I can't figure out how to launch the IDLE IDE. I installed everything correctly but the IDLE doesn't appear anywhere.
Finding IDLE in Python 3
0
0
0
403
22,315,311
2014-03-11T02:32:00.000
1
0
0
0
python,json,django,django-rest-framework
22,322,831
1
true
1
0
You need to look into using the required=False flag on the device field in your serializer class.
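A hedged sketch of how the serializer might mark the field optional; the app, model and field names below are assumptions about the asker's code, and depending on your DRF version the related field may also need a queryset or read_only flag:

```python
from rest_framework import serializers
from myapp.models import AdvancedUser, Device  # hypothetical app and models

class AdvancedUserSerializer(serializers.ModelSerializer):
    # required=False lets incoming JSON omit the device entirely.
    device = serializers.PrimaryKeyRelatedField(required=False,
                                                queryset=Device.objects.all())

    class Meta:
        model = AdvancedUser
        fields = ('id', 'device')  # plus whatever other fields the model has
```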
1
1
0
I need to deserialize incoming JSON. The incoming JSON will be transformed to a Django model object called AdvancedUser. An AdvancedUser has a one to one with a Device model. When I POST my incoming JSON, I'm getting errors that say "Device field is required". The Device field is optional in my AdvancedUser model declaration code. How do I get rid of this error? It's OK if no Device field is passed in.
Djangorestframework: is it possible to deserialize only specific fields of incoming JSON?
1.2
0
0
61
22,319,312
2014-03-11T07:43:00.000
0
1
0
1
python-2.7,openshift,paas
22,352,836
2
false
0
0
I created a new gear with each cartridge type [python-2.6, python-2.7, python-3.3] and when the code was cloned to my workstation, none of them contained an app.py.disabled file. Can you give more information about how you created the application? Did you use a specific quickstart or url?
1
0
0
I recently created a python app on openshift. I found a file called app.py.disabled when I git cloned the repo. Can anyone explain what it does?
The use of app.py.disabled on openshift
0
0
0
196
22,319,912
2014-03-11T08:17:00.000
1
0
1
0
python,loops
22,320,480
2
false
0
0
You can add each of the files (i.e., A1, A2, ..., Ax) to a list, then save the list. That should work for you.
1
0
0
Actually, I am new to Python, so I need more help handling my data. The next step is to make a loop over a month of data built from daily data (one file per day), and then to put the numeric variables produced by this program into a new array and save it as a file. Thank you in advance.
Loop and Save Data
0.099668
0
0
393
22,321,243
2014-03-11T09:18:00.000
2
0
1
1
python,macos
22,321,316
2
false
0
0
Use pip list or install yolk with pip install yolk and then yolk -l.
1
0
0
I'm new to Python, and I installed so many libraries that I forgot which ones I have installed. I'd like to get a list of the libraries I've installed. Any help would be appreciated. I'm using Mac OS 10.9.2.
How do I get a list of python libraries I've installed?
0.197375
0
0
345
22,329,138
2014-03-11T14:45:00.000
2
0
1
0
python
22,329,160
3
false
0
0
Use os.path.isdir. And always beware race conditions.
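Combining that with the os.listdir() call from the question, a small sketch (the path is a placeholder):

```python
import os

path = "/some/path"  # hypothetical directory
for name in os.listdir(path):
    full = os.path.join(path, name)      # listdir gives bare names, so join with the parent
    if os.path.isdir(full):
        print(full, "is a directory")
    elif os.path.isfile(full):
        print(full, "is a file")
```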
1
2
0
I have listed all directories and files of a given path by using os.listdir(). I want to check whether an element in the list is a file or a directory; what should I do?
Python function to check whether it is a file or a dir
0.132549
0
0
78