Title (stringlengths): 15 to 150
A_Id (int64): 2.98k to 72.4M
Users Score (int64): -17 to 470
Q_Score (int64): 0 to 5.69k
ViewCount (int64): 18 to 4.06M
Database and SQL (int64): 0 to 1
Tags (stringlengths): 6 to 105
Answer (stringlengths): 11 to 6.38k
GUI and Desktop Applications (int64): 0 to 1
System Administration and DevOps (int64): 1 to 1
Networking and APIs (int64): 0 to 1
Other (int64): 0 to 1
CreationDate (stringlengths): 23 to 23
AnswerCount (int64): 1 to 64
Score (float64): -1 to 1.2
is_accepted (bool): 2 classes
Q_Id (int64): 1.85k to 44.1M
Python Basics and Environment (int64): 0 to 1
Data Science and Machine Learning (int64): 0 to 1
Web Development (int64): 0 to 1
Available Count (int64): 1 to 17
Question (stringlengths): 41 to 29k
IntelliJ setting path variable?
38,203,517
0
0
1,199
0
java,python,macos,intellij-idea,path
Regarding your second question: Go to the menu item on the top called "IntelliJ Idea" and under that you'll find a "Preferences item"
0
1
0
0
2015-11-26T19:49:00.000
1
0
false
33,946,061
0
0
0
1
I'm trying to set a path variable because I'm executing command on my Mac using Runtime.getRuntime().exec();. They work when pressing the "play" button in IntelliJ, and also when running from command line, however, not when double-clicking. I have found that I should set the PATH variable. In terminal, the PATH variable is /Library/Frameworks/Python.framework/Versions/3.5/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/go/binsr/bin:/bin Which is also weird, because of the /binsr but that doesn't matter that much. I want to make IntelliJ set the PATH variable of my application to this. The documentation and some other answers say it is in here: File | Settings | Build, Execution, Deployment | Path Variables But there is no "Settings" under "File"!! There is a "Preferences" on Mac, and it does have a Build, Execution and Deployment, but that doesn't have path variables??!!? This is really frustrating me, and I would appreciate any help. Thanks in advance, Sten
Is it possible to visualise the mapper results in a map-reduce process?
33,974,868
0
0
101
0
python,hadoop,dictionary,cloudera,reduce
If that's just for an initial analysis, to understand the data and keys, then you probably want to set the reducer count to 0 and get the map output. -D mapred.reduce.tasks=0 is the way in Java; explore the same for Python.
0
1
0
0
2015-11-28T14:14:00.000
2
0
false
33,972,287
0
0
0
1
In the file part-00000 we can find the result of the whole process (map+reduce), but I would like to see the result of the first step (mapping) before the final results. I'm working on Hadoop Cloudera with Python map-reduce scripts.
How to change the codified limit of 12 App Engine connections to Cloud SQL
33,978,178
2
0
223
1
python,google-app-engine,google-cloud-sql
Each single app engine instance can have no more than 12 concurrent connections to Cloud SQL -- but then, by default, an instance cannot service more than 8 concurrent requests, unless you have deliberately pushed that up by setting the max_concurrent_requests in the automatic_scaling stanza to a higher value. If you've done that, then presumably you're also using a hefty instance_class in that module (perhaps the default module), considering also that Django is not the lightest-weight or fastest of web frameworks; an F4 class, I imagine. Even so, pushing max concurrent requests above 12 may result in latency spikes, especially if serving each and every request also requires other slow, heavy-weight operations such as MySQL ones. So, consider instead using many more instances, each of a lower (cheaper) class, serving no more than 12 requests each (again, assuming that every request you serve will require its own private connection to Cloud SQL -- pooling those up might also be worth considering). For example, an F2 instance costs, per hour, half as much as an F4 one -- it's also about half the power, but, if serving half as many user requests, that should be OK. I presume, here, that all you're using those connections for is to serve user requests (if not, you could dispatch other, "batch-like" uses to separate modules, perhaps ones with manual or basic scheduling -- but, that's another architectural issue).
0
1
0
0
2015-11-28T22:26:00.000
1
0.379949
false
33,977,130
0
0
1
1
I want to connect my App Engine project with Google Cloud SQL, but I get an error that I have exceeded the maximum of 12 connections in Python. I have a Cloud SQL D8 instance with 1000 simultaneous connections. How can I change this connection limit? I'm using Django and Python. Thanks.
Python APNs background connection
34,002,070
0
0
64
0
python,google-app-engine
You can use the datastore (eventually shadowed by memcache for performance) to persist all the necessary APN (or any other) connection/protocol status/context info such that multiple related requests can share the same connection as if your app would be a long-living one. Maybe not trivial, but definitely feasible. Some requests may need to be postponed temporarily, depending on the shared connection status/context, that's true.
0
1
0
0
2015-11-30T06:59:00.000
3
0
false
33,993,034
0
0
1
2
What would be the best practice in this scenario? I have an App Engine Python app, with multiple cron jobs. Instantiated by user requests and cron jobs, push notifications might be sent. This could easily scale up to a total of +- 100 pushes per minute. Setting up and tearing down a connection to APNs for every batch is not what I want, and Apple advises against doing this. So I would like to keep the connection alive, even when user requests finish or when a cron finishes, possibly with a timeout (2 minutes of no pushes, then close the connection). Reading the GAE documentation, I couldn't figure out if there even is such a thing available. Also, I might need this to be available in different apps and/or modules.
Python APNs background connection
33,993,066
0
0
64
0
python,google-app-engine
You can put the messages in a pull taskqueue and have a backend instance (or a cron job) to process the tasks
0
1
0
0
2015-11-30T06:59:00.000
3
0
false
33,993,034
0
0
1
2
What would be the best practice in this scenario? I have an App Engine Python app, with multiple cron jobs. Instantiated by user requests and cron jobs, push notifications might be sent. This could easily scale up to a total of +- 100 pushes per minute. Setting up and tearing down a connection to APNs for every batch is not what I want, and Apple advises against doing this. So I would like to keep the connection alive, even when user requests finish or when a cron finishes, possibly with a timeout (2 minutes of no pushes, then close the connection). Reading the GAE documentation, I couldn't figure out if there even is such a thing available. Also, I might need this to be available in different apps and/or modules.
Django and apache on different dockers
33,995,927
4
5
807
0
python,django,apache
mod_wsgi would be the wrong technology if you want to do this. It runs as part of Apache itself, so there literally is nothing to run in the Django container. A better way would be to use gunicorn to run Django in one container, and have the other running the webserver as a proxy - you could use Apache for this, although it's more common to use nginx.
0
1
0
0
2015-11-30T10:02:00.000
1
1.2
true
33,995,862
0
0
1
1
We have an application written in django. We are trying a deployment scenario which will have one docker running apache, the second docker running django and the third docker running the DB server. In most of the documentation it is mentioned that apache and django will sit on the same machine (django in virtualenv to be precise), is there any way we can ask apache to talk to mod_wsgi sitting on a remote machine which has the django application?
Where is the definition for a new Python interpreter in Eclipse Pydev stored?
34,048,699
1
0
328
0
python,eclipse,pydev,configuration-files,anaconda
Well, usually the default way of operating would be not committing files with a named interpreter, rather leave it empty and let it use the one that's configured for the user. Now, having said that, there are scenarios where it may be useful to commit a named interpreter, but it's usually if you're within a company that has standardized say a Python2 and a Python3 interpreter and a given project is dependent only on one of those (then it may make sense standardizing that), but usually the default is leaving empty and letting each user configure its own Python interpreter. On a side note, if you wanted to have the same interpreter for everyone, it's possible to have a plugin which would do that inside PyDev, although that'd require creating a plugin inside Eclipse (although it should be relatively straightforward).
0
1
0
0
2015-12-02T15:19:00.000
2
0.099668
false
34,046,345
1
0
0
1
I'm using Eclipse Luna Service Release 2 (4.4.2), with PyDev 4.0.0, on Windows 7. I've recently installed Anaconda 2.4.0 to use as my Python interpreter. I've configured a new "Anaconda2" Python interpreter in Eclipse, and modified my project settings to use this new interpreter. I'd like to commit my modified project file to source control, so that colleagues can take advantage of the update. I can see that .pydevproject has been modified, but when I look at the changes, it simply specified that "Anaconda2" is the interpreter to be used with the project. For this to be useful to others, they'll presumably also need my definition of what the "Anaconda2" interpreter actually is (i.e. the path to the Python executable). However, I can't find where this definition is stored. I've looked in my project directory, in the Eclipse installation directory (C:\eclipse) and in the Windows Registry, with no success. Where is this information stored, so that I can share the updated file with colleagues, rather than leaving them needing to manually set up the interpreter themselves? (Assume that we have a standard development environment, so that everyone will have Anaconda installed to the same location on their hard drive.)
Import CV2: DLL load failed (Python in Windows 64bit)
49,609,215
-1
1
10,949
0
python,windows,opencv,dll,64-bit
In this case, I just copied the file 'python3.dll' from my Python 3 installation folder to my virtualenv lib folder, and then it worked.
0
1
0
0
2015-12-02T16:53:00.000
4
-0.049958
false
34,048,431
1
0
0
1
ImportError: DLL load failed: %1 is not a valid Win32 application. Does anyone know how to fix this? This problem occurs when I am trying to import cv2. My laptop is 64-bit and I installed 64-bit Python; I also put the cv2.pyd file in the site-packages folder of Python. My PYTHONPATH value = C:\Python35;C:\Python35\DLLs;C:\Python35\Lib;C:\Python35\libs;C:\Users\CV\OpenCV\opencv\build\python\2.7\x64;%OPENCV_DIR%\bin; My OPENCV_DIR value = C:\Users\CV\OpenCV\opencv\build\x64\vc12 I also added references to my PYTHONPATH and my OPENCV_DIR to the PATH by putting %PYTHONPATH%;%PYTHONPATH%\Scripts\;%OPENCV_DIR%; there. I also installed opencv_python-3.0.0+contrib-cp35-none-win_amd64 through pip install and the command line. None of this solved my problem.
Pycharm: terminate all running processes
39,897,548
3
13
30,996
0
python,pycharm
Ctrl-Shift-F4 closes just one tab. Right-click on the run-bar next to Run: > Close All
0
1
0
0
2015-12-03T20:20:00.000
6
0.099668
false
34,075,427
1
0
0
4
PyDev has a feature to terminate all running processes. Does PyCharm have something similar? I only see a "stop" button in the menu.
Pycharm: terminate all running processes
44,539,508
1
13
30,996
0
python,pycharm
If you want to force all running processes to stop at once, just kill the Python process. On Windows this can easily be done by clicking 'End Process' in the Task Manager (on the Processes tab). This is quite useful if you end up stuck with some ghost processes of your Python app running in the background, as I had (even when PyCharm was closed).
0
1
0
0
2015-12-03T20:20:00.000
6
0.033321
false
34,075,427
1
0
0
4
PyDev has a feature to terminate all running processes. Does PyCharm have something similar? I only see a "stop" button in the menu.
Pycharm: terminate all running processes
46,736,152
1
13
30,996
0
python,pycharm
In PyCharm, if you click on the bottom right ... "Running x processes", then x number of windows pop up. (Here, x is the number of processing running.) Each has an X button to kill the process.
0
1
0
0
2015-12-03T20:20:00.000
6
0.033321
false
34,075,427
1
0
0
4
PyDev has a feature to terminate all running processes. Does PyCharm have something similar? I only see a "stop" button in the menu.
Pycharm: terminate all running processes
71,991,268
0
13
30,996
0
python,pycharm
To kill a Python program, when the program is called from within a terminal opened in PyCharm, either right-click in the terminal window and select "Close Session" from the dropdown menu, or press Ctrl + Shift + W.
0
1
0
0
2015-12-03T20:20:00.000
6
0
false
34,075,427
1
0
0
4
PyDev has a feature to terminate all running processes. Does PyCharm have something similar? I only see a "stop" button in the menu.
Running pip causes deadlock
34,076,437
0
0
245
0
python,python-3.x,pip
I just fixed it. The solution is to call pip as a Python module. Remove pip.exe, pip3.exe and pip3.5.exe from PYTHON_PATH/Scripts, create a file pip.bat inside the folder described above, open pip.bat in a text editor and copy the following two lines into it: @echo off and call "python" -m pip %*
0
1
0
0
2015-12-03T21:00:00.000
2
0
false
34,076,020
1
0
0
1
When I run pip install or just pip from the Windows command line, I think it causes a deadlock and it's impossible to exit the running process by pressing CTRL + C. When I run it from Git Bash, it gives me these errors: 0 [sig] bash 9796 get_proc_lock: Couldn't acquire sync_proc_subproc for(5,1), last 7, Win32 error 0 1040 [sig] bash 9796 proc_subproc: couldn't get proc lock. what 5, val 1
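Related to the workaround in the answer above (calling pip as a module instead of the pip.exe wrapper), here is a minimal Python sketch of the same idea; the package name requests is only a placeholder, and it assumes pip is installed for the interpreter that runs the script:

    import subprocess
    import sys

    # Run pip through the current interpreter, bypassing the pip.exe wrapper
    # that was hanging; this is equivalent to "python -m pip install <package>".
    subprocess.check_call([sys.executable, "-m", "pip", "install", "requests"])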
Is there any way to check the progress on a Python script without interrupting the program?
34,078,504
0
5
3,248
0
python
Ah the classic halting problem: is it really still running? There is no way to do this if you've already started the program, unless you've written in some debugging lines that check an external configuration for a debug flag (and I assume you haven't since you're asking this question). You could look to the output or log of the script (if it exists), checking for signs of specific places in the data that the script has processed and thereby estimate the progress of the data processing. Worst case: stop the thing, add some logging, and start it tonight just before bed.
0
1
0
0
2015-12-03T23:44:00.000
2
0
false
34,078,431
1
0
0
1
Let's say I've written a script in Python, program.py. I decide to run this in the terminal using python program.py. This code runs through an exceptional amount of data, and takes several hours to run. Is there any way I can check on the status of this code without stopping the program?
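A minimal sketch of the "add some logging" suggestion from the answer above, so a long-running job can be followed from another terminal (for example with tail -f); the file name progress.log, the item count and the loop body are placeholders:

    import logging

    logging.basicConfig(filename="progress.log", level=logging.INFO,
                        format="%(asctime)s %(message)s")

    items = range(1000000)                     # stand-in for the real workload
    for i, item in enumerate(items):
        # ... process item here ...
        if i % 10000 == 0:
            logging.info("processed %d of %d items", i, len(items))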
Can I install using Macports both py27 and py34 ports in the same location?
34,103,248
1
1
156
0
python-2.7,python-3.x,macports
Your problems appear to be a generic Macports download problem. Resetting the download process via sudo port clean <portname> should help. As to the general question of using multiple versions: Macports allows you to install an arbitrary number of different versions in parallel. You switch between them using port select --set <application> <portname>, for example sudo port select --set python python34. For easier access, you can define your own shell alias (e.g. python3 or python34), pointing to /opt/local/bin/python34.
0
1
0
0
2015-12-04T14:57:00.000
2
1.2
true
34,091,129
1
0
0
1
I've been using Python 3.4 to complete certain tasks, though I still use Python 2.7 as default. I think I should be able to begin downloading py34 ports using sudo port install py34-whatever into the same location as my Python 2.7 ports. However, I am running into significant downloading errors doing this. Is it possible to download both py27 and py34 ports into the same location? Will there be problems doing this?
Install Python-Docx on Win 10
34,111,456
1
0
737
0
windows,installation,python-docx
For WinPython, you may try this: click on the "WinPython Command Prompt.exe" icon, then type the following three words into the console that opens: pip install python-docx
0
1
0
0
2015-12-05T15:09:00.000
1
0.197375
false
34,107,021
1
0
0
1
I installed Python34 and Python32 on my Win10 machine. I also downloaded WinPython and tried to add the package 'python-docx' with their control panel. This failed: file naming not recognized (tar.gz). Then I tried to install it myself from the cmd prompt. The error was that lxml was not found, and that installation failed because it didn't find Python on my computer. I'm running out of ideas. Is it really that hard to install python-docx?
Run Python script with path as argument from total commander
34,111,757
1
2
1,275
0
python,total-commander
You can add a new button to the button bar. Right-click on an existing Icon and copy the icon by choosing "copy" from drop-down menu. Paste it into the button bar by right-clicking on it, choosing "paste" from the menu. Right-click on this copied icon and choose "modify" (or similar). This opens a window that allows you the choose a program and a parameter. Note: My version is set to a different language so the names of the menu items might be a bit different.
0
1
0
1
2015-12-05T17:45:00.000
2
0.099668
false
34,108,753
0
0
0
1
I've got a script that accepts a path as an external argument and I would like to attach it to Total Commander. So I need a script/plugin for TC that would pass a path of opened directory as an argument and run a Python script which is located for example at C:/Temp How can I achieve this? Best Regards, Marek
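For completeness, a minimal sketch of the Python side that such a Total Commander button could call; it assumes the directory path arrives as the first command-line argument, which is how the button's parameter would hand it over. The directory listing is only a placeholder action:

    import os
    import sys

    def main():
        if len(sys.argv) < 2:
            sys.exit("usage: script.py <directory>")
        path = sys.argv[1]
        # Placeholder action: list the directory that was passed in.
        for name in os.listdir(path):
            print(os.path.join(path, name))

    if __name__ == "__main__":
        main()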
Running Octave tasks from Python
34,116,773
0
2
1,133
0
python,subprocess,octave,message-queue,oct2py
All three options are reasonable depending on your particular case. "I don't want to rely on maintenance of external libraries such as oct2py, I am in favor of option 3": oct2py is itself implemented using option 3. You can reinvent what it already does or use it directly. oct2py is pure Python and it has a permissive license: if its development were to stop tomorrow, you could include its code alongside yours.
0
1
0
0
2015-12-06T07:26:00.000
2
0
false
34,115,098
0
1
0
1
I have a pretty complex computation code written in Octave and a python script which receives user input, and needs to run the Octave code based on the user inputs. As I see it, I have these options: Port the Octave code to python. Use external libraries (i.e. oct2py) which enable you to run the Octave/Matlab engine from python. Communicate between a python process and an octave process. One such possibility would be to use subprocess from the python code and wait for the answer. Since I'm pretty reluctant to port my code to python and I don't want to rely on maintenance of external libraries such as oct2py, I am in favor of option 3. However, since the system should scale well, I do not want to spawn a new octave process for every request, and a tasks queue system seems more reasonable. Is there any (recommended) tasks queue system to enqueue tasks in python and have an octave worker on the other end process it?
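A minimal sketch of option 3 from the question (driving Octave from Python with subprocess), assuming the octave executable is on PATH and the task can be expressed as a one-shot --eval call; a real task-queue worker would keep a single Octave process alive instead of starting one per request:

    import subprocess

    def run_octave(expression):
        # Run a single Octave expression and return whatever it prints.
        return subprocess.check_output(
            ["octave", "--no-gui", "--quiet", "--eval", expression])

    print(run_octave("disp(2 + 2)"))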
How do I configure spacemacs for python 3?
45,569,548
5
16
10,781
0
python,emacs,spacemacs
The variable that needed to be set was flycheck-python-pycompile-executable, to "python3". To get support for async, emacs25 must be used (note debian will install emacs24 and emacs25 side-by-side, and use emacs24 by default).
0
1
0
0
2015-12-07T14:23:00.000
2
1.2
true
34,135,856
1
0
0
1
I would like to use spacemacs for python development, but I see a syntax error on Python 3 constructs, like print(*(i + 1 for i in range(n)) or async def foo():. Adding a shebang to my file (#!/usr/bin/python3 or #!/usr/bin/env python3) does not help. What configuration changes do I need to make to use a specific python version? Ideally per-project or per-file, but global is better than nothing. I have 2.7 and 3.4 installed system-wide, and 3.5 in ~/local (~/local/bin is in my $PATH).
Celery w/ Redis broker: Is it possible to have more 10k connection
34,169,982
0
1
599
0
python,redis,celery
Don't. Your Redis command latency with over 10,000 connections will suffer, usually heavily. Even the basic Redis ping command shows this. Step one: re-evaluate the 10k worker requirement. Chances are very high it is heavily inflated. What data supports it? Most of the time people are used to slow servers where concurrency is higher because each request takes orders of magnitude more time than Redis does. Consider this, a decently tuned Redis single instance can handle over a million requests per second. Do the math and you'll see that it is unlikely you will have the traffic workload to keep those workers busy without slamming into other limits such as the speed of light and Redis capacity. If you truly do need that level of concurrency perhaps try integrating Twemproxy to handle the connections in a safer manner, though you will likely see the latency effects anyway if you really have the workload necessary to justify 10k concurrent connections. Your other options are Codis and partitioning your data across multiple Redis instances, or some combination of the above.
0
1
1
0
2015-12-07T19:29:00.000
1
0
false
34,141,600
0
0
0
1
Currently, Redis has a maxclients limit of 10k, so I can't spawn more than 10k Celery workers (Celery workers with 200 prefork processes each, across 50 machines). Without changing the Redis maxclients limit, what are some of the things I can do to accommodate more than 10k Celery workers? I was thinking of setting up a master-slave Redis cluster, but how would a Celery daemon know to connect to different slaves?
correct package name for glibc in Amazon Linux EBS
34,144,641
1
0
217
0
python,linux,amazon-web-services,lxml,amazon-elastic-beanstalk
Answer to my question, the package name is: glibc-devel.i686
0
1
0
0
2015-12-07T20:54:00.000
1
0.197375
false
34,143,037
0
0
0
1
I need to install the glibc-devel package for my EBS Python 2.7 64-bit environment at AWS. Unlike other solutions, I have to install python27-devel instead of python-devel and postgresql93-devel instead of postgresql-devel, so I was wondering what the correct name for the glibc-devel package is, because with that name yum seems to skip the package installation (.ebextensions/config file). The main problem is to install lxml from pip packages. I successfully installed libxslt-devel and libxml2-devel on that server, as well as gcc and patch.
Can Python server code be read?
34,150,245
0
1
123
0
python,websocket,server
It is hard to give a definite yes or no answer, because there are a million ways in which your server may expose the .py file. The crucial point is though, that your server needs to actively expose the file to the outside world. A computer with no network-enabled services running does not expose anything on the network, period. Only physical access to the computer would allow you access to the file. From this absolute point, it's a slow erosion of security with every additional service that offers a network component. Your Python server itself (presumably) doesn't expose its own source code; it only offers the services it's programmed to offer. However, you may have other servers running on the machine which actively do offer the file for download, or perhaps can be tricked into doing so. That's where an absolute "No" is hard to give, because one would need to run a full audit of your machine to be able to give a definitive answer. Suffice it to say that a properly configured server without gaping security holes will not enable users to download the underlying source code through the network.
0
1
0
1
2015-12-08T05:34:00.000
1
0
false
34,148,739
0
0
0
1
I am working on a Python WebSocket server. I initiate it by running the python server.py command in Terminal. After this, the server runs fine and actually pretty well for what I'm using it for. The server runs on port 8000. My question is, if I keep the server.py file outside of my localhost directory or any sub-directory, can the Python file be read and the code viewed by anyone else? Thanks.
Multiple executable accessing the same folder at the same time
34,150,899
2
0
814
0
c#,python,os.walk
It should be fine for the external app to create and write to a file. If the Python app is reading a file, the .NET app may not be able to write to it while Python is reading it, without both processes opening the file in a shareable way, however. Likewise if the Python app is going to start reading the newly-created file, it may either find that it can't do so until the .NET app has finished writing to it, or it may read incomplete data. Again, changes would quite possibly be required to both processes to allow reading at all. It's worth thoroughly testing all the poosibilities you're concerned about, possibly involving the creation of a "fake" external app which writes to a file very slowly, but opening it in the same way that the real one does.
0
1
0
0
2015-12-08T07:15:00.000
1
1.2
true
34,150,069
1
0
0
1
We have a Python application that checks a directory (C:\sample\folder) every 5 seconds; there's also an external application (a .NET app) that puts files into that same directory (C:\sample\folder). Will there be any conflict when the two applications access the same folder at the same time (accidentally)? Conflicts like: the external app won't be able to place a file because the Python app is currently walking through that same directory?
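A minimal sketch of one defensive measure on the Python side, in the spirit of the answer above: skip files whose size is still changing, as a heuristic that the .NET app has not finished writing them. The folder path and the two-second wait are placeholders:

    import os
    import time

    def stable_files(folder, wait=2.0):
        # Yield files whose size did not change over a short interval,
        # as a heuristic that the writer is done with them.
        sizes = {name: os.path.getsize(os.path.join(folder, name))
                 for name in os.listdir(folder)}
        time.sleep(wait)
        for name, size in sizes.items():
            path = os.path.join(folder, name)
            if os.path.exists(path) and os.path.getsize(path) == size:
                yield path

    for path in stable_files(r"C:\sample\folder"):
        print("safe to process: " + path)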
how to install imutils 0.2 for python in windows 07
49,567,169
1
0
25,716
0
python,windows,installation
On Windows with Python 3, conda install imutils is not available. Run pip3 install imutils and the newest version will be installed.
0
1
0
0
2015-12-08T16:42:00.000
3
0.066568
false
34,161,318
1
0
0
1
I want to install the imutils 0.2 package for Python and I have the Windows 7 operating system. I only found the .gz file and would like to know how to install from .gz files. Otherwise, if there are any .exe files available, please let me know.
uwsgi fails under pyenv/2.7.11 with _io.so: undefined symbol: _PyCodecInfo_GetIncrementalEncoder
34,168,578
2
1
23,361
0
python,uwsgi,pyenv
I had the same (or better: a similar) problem with uwsgi when upgrading Python from 2.7.3 to 2.7.10: The module that I tried to import was socket (socket.py) Which in turn tried to import _socket (_socket.so) - and the unresolved symbol was _PyInt_AsInt The problem is a mismatch between some functions between Python minor minor releases (which doesn't break any backward compatibility, BTW). Let me detail: Build time: when your uwsgi was built, the build was against Python 2.7.10 (as you specified). Python could have been compiled/built: statically - most likely, the PYTHON LIBRARY (from now on, I am going to refer to it as PYTHONCORE as it's named by its creators) in this case: (libpython2.7.a) is in a static lib which is included in the python executable resulting a huge ~6MB executable dynamically - PYTHONCORE (libpython2.7.so) is a dynamic library which python executable (~10KB of bytes large, this time) uses at runtime Run time: the above uwsgi must run in an Python 2.7.11 environment Regardless of how Python is compiled, the following thing happened: between 2.7.10 and 2.7.11 some internal functions were added/removed (in our case added) from both: PYTHONCORE Dynamic (or extension) modules (written in C) - .so files located in ${PYTHON_LIB_DIR}/lib-dynload (e.g. /home/user/.pyenv/versions/2.7.11/envs/master2/lib/python2.7/lib-dynload); any dynamic module (.so) is a client for PYTHONCORE So, basically it's a version mismatch (encountered at runtime): 2.7.10 (which uwsgi was compiled against): PYTHONCORE - doesn't export PyCodecInfo_GetIncrementalEncoder _io.so (obviously) doesn't use the exported func (so, no complains at import time) 2.7.11 (which uwsgi is run against): PYTHONCORE - still (as it was "embedded" in uwsgi at compile (build) time, so it's still 2.7.10) doesn't export PyCodecInfo_GetIncrementalEncoder _io.so - uses/needs it resulting a situation where a Python 2.7.11 dynamic module was used against Python 2.7.10 runtime, which is unsupported. As a conclusion make sure that your uwsgi buildmachine is in sync (from Python PoV) with the runmachine, or - in other words - build uwsgi with the same Python version you intend to run it with!
0
1
0
1
2015-12-08T22:48:00.000
3
1.2
true
34,167,557
0
0
0
1
When I start uwsgi 2.0.11.2 under pyenv 2.7.11 I get: ImportError: /home/user/.pyenv/versions/2.7.11/envs/master2/lib/python2.7/lib-dynload/_io.so: undefined symbol: _PyCodecInfo_GetIncrementalEncoder Also, uwsgi prints Python version: 2.7.10 (default, May 30 2015, 13:57:08) [GCC 4.8.2] I am not sure how to fix it.
Supervisor instance on VPS (digitalocean) exits when exiting the terminal
34,168,319
0
0
47
0
python,service,supervisord
Run it with nohup. You should detach the process from your current terminal or it will terminate as soon as you exit.
0
1
0
0
2015-12-08T23:47:00.000
1
1.2
true
34,168,281
0
0
0
1
I'm running a supervisor instance on a VPS, but it seems to exit when I exit the terminal. Why is that happening?
Real-time backend for IoT App
34,178,035
1
0
586
0
python,firebase,backend,iot,real-time-data
You're comparing apples to oranges here in your options. The first three are entirely under your control, because, well, you own the server. There are many ways to get this wrong and many ways to get this right, depending on your experience and what you're trying to build. The last three would fall under Backend-As-A-Service (BaaS). These let you quickly build out the backend of an application without worrying about all the plumbing. Your backend is operated, maintained by a third party so you lose control when compared to your own server. ... and of course at the best price AWS, Azure, GAE, Firebase, PubNub all have free quotas. If your application becomes popular and you need to scale, at some point, the BaaS options might end up being more expensive.
0
1
0
1
2015-12-09T11:02:00.000
2
0.099668
false
34,177,156
0
0
1
1
I'm working on an IoT App which will do majority of the basic IoT operations like reading and writing to "Things". Naturally, it only makes sense to have an event-driven server than a polling server for real-time updates. I have looked into many options that are available and read many articles/discussions too but couldn't reach to a conclusion about the technology stack to use for the backend. Here are the options that i came across: Meteor Python + Tornado Node.js + Socket.io Firebase PubNub Python + Channel API (Google App Engine) I want to have as much control on the server as possible, and of course at the best price. What options do i have? Am i missing something out? Personally, i prefer having a backend in Python from my prior experience.
Ubuntu, how do you remove all Python 3 but not 2
55,326,327
-3
13
127,783
0
python,ubuntu
It's simple, just try: sudo apt-get remove python3.7 or whichever versions you want to remove.
0
1
0
0
2015-12-10T10:04:00.000
6
-0.099668
false
34,198,892
1
0
0
4
I have recently got hold of a RackSpace Ubuntu server and it has Pythons all over the place: IPython in 3.5, Pandas in 3.4 & 2.7, and modules I need like pyodbc etc. are only in 2.7. Therefore, I am keen to clean up the box and, as a 2.7 user, keep everything in 2.7. So the key question is: is there a way to remove both 3.4 and 3.5 efficiently at the same time while keeping Python 2.7?
Ubuntu, how do you remove all Python 3 but not 2
55,406,526
2
13
127,783
0
python,ubuntu
Don't try any of the above ways, nor sudo apt autoremove python3, because it will remove all GNOME-based applications from your system, including gnome-terminal. If you have already made that mistake and are left with only the console, try sudo apt install gnome from there. Try to change your default Python version instead of removing it; you can do this through your bashrc file or the export PATH command.
0
1
0
0
2015-12-10T10:04:00.000
6
0.066568
false
34,198,892
1
0
0
4
I have recently got hold of a RackSpace Ubuntu server and it has Pythons all over the place: IPython in 3.5, Pandas in 3.4 & 2.7, and modules I need like pyodbc etc. are only in 2.7. Therefore, I am keen to clean up the box and, as a 2.7 user, keep everything in 2.7. So the key question is: is there a way to remove both 3.4 and 3.5 efficiently at the same time while keeping Python 2.7?
Ubuntu, how do you remove all Python 3 but not 2
34,220,703
8
13
127,783
0
python,ubuntu
So I worked out in the end that you cannot uninstall 3.4 as it is the default on Ubuntu. All I did was simply remove Jupyter, then alias python=python2.7 and install all packages on Python 2.7 again. Arguably, I could use virtualenv, but my colleagues and I are only using 2.7. I am just going to be lazy in this case :)
0
1
0
0
2015-12-10T10:04:00.000
6
1.2
true
34,198,892
1
0
0
4
I have recently got hold of a RackSpace Ubuntu server and it has Pythons all over the place: IPython in 3.5, Pandas in 3.4 & 2.7, and modules I need like pyodbc etc. are only in 2.7. Therefore, I am keen to clean up the box and, as a 2.7 user, keep everything in 2.7. So the key question is: is there a way to remove both 3.4 and 3.5 efficiently at the same time while keeping Python 2.7?
Ubuntu, how do you remove all Python 3 but not 2
34,198,961
9
13
127,783
0
python,ubuntu
EDIT: As pointed out in recent comments, this solution may BREAK your system. You most likely don't want to remove python3. Please refer to the other answers for possible solutions. Outdated answer (not recommended) sudo apt-get remove 'python3.*'
0
1
0
0
2015-12-10T10:04:00.000
6
1
false
34,198,892
1
0
0
4
I have recently got hold of a RackSpace Ubuntu server and it has Pythons all over the place: IPython in 3.5, Pandas in 3.4 & 2.7, and modules I need like pyodbc etc. are only in 2.7. Therefore, I am keen to clean up the box and, as a 2.7 user, keep everything in 2.7. So the key question is: is there a way to remove both 3.4 and 3.5 efficiently at the same time while keeping Python 2.7?
Python script from batch file won't run in task scheduler
34,218,832
0
0
1,575
0
python,windows,python-2.7,batch-file,scheduled-tasks
Thank you guys for your help. It was indeed "just" the working directory: I had to set it to the location of the bat file.
0
1
0
0
2015-12-10T11:18:00.000
1
0
false
34,200,551
0
0
0
1
Hi folks so I got the following problem, I have the following code in a batch file: ..\python-2.7.10.amd64\python.exe ./bin/bla.py ./conf/config.conf > ./logs/output.txt This works like a charme by double clicking the batch. Next my plan was to automate the call of this batch by adding it to the task scheduler in windows. So I changed all the relative paths to absolute paths: D:\path\to\python-2.7.10.amd64\python.exe D:\path\to\bin\bla.py D:\path\to\conf\config.conf > D:\path\to\logs\output.txt This also still works by double clicking the batch file. So my next step was adding the batch to the task scheduler but when I run it from there I get this error message: Traceback (most recent call last): File "D:\path\to\bin\bla.py", line 159, in logging.config.fileConfig(logFile) File "D:\path\to\python-2.7.10.amd64\lib\logging\confi eConfig formatters = _create_formatters(cp) File "D:\path\to\python-2.7.10.amd64\lib\logging\confi reate_formatters flist = cp.get("formatters", "keys") File "D:\path\to\python-2.7.10.amd64\lib\ConfigParser. raise NoSectionError(section) ConfigParser.NoSectionError: No section: 'formatters' So for some reason the python script can't find the conf file by the absolute path I think but I don't understand why. I also tried it with the relative paths in the task scheduler it obviously also doesn't work. Does anyone of you have a clue why it works straight from the batch but not from the task scheduler ?
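As an alternative to setting the working directory in the scheduled task, a minimal sketch of making the script itself independent of the working directory, assuming the bin/ and conf/ layout from the batch file in the question; logging.config.fileConfig is the call the traceback shows failing:

    import logging.config
    import os

    # Build an absolute path to the config file relative to this script's own
    # location, so it is found no matter where the Task Scheduler starts us.
    script_dir = os.path.dirname(os.path.abspath(__file__))
    config_path = os.path.join(script_dir, "..", "conf", "config.conf")

    logging.config.fileConfig(config_path)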
Python Executables on Windows
34,210,329
0
0
71
0
python,windows-10
Choose open with, then scroll down and click something like "choose another application from this computer" (I don't exactly know, I use windows in different language). Then just select your Python executable and click OK.
0
1
0
0
2015-12-10T19:02:00.000
1
0
false
34,209,697
1
0
0
1
I just upgraded to windows 10, and downloaded the Anaconda Python distribution and chose the option for it to add everything to my PATH etc. Back in windows 8 when I created a .py file I could execute it from the file explorer just by clicking on it, but for some reason windows 10 won't recognise .py files and when I try run them it opens them in notepad. I am able to run them from the command line. What's gone wrong? UPDATE: When I choose another application to open the file, I click on the python application and it says "Cannot Execute as Python27.dll is not found", I installed python 3, why is it trying to open in python2.7?
Python 3 installation on windows running from command line
34,212,369
1
11
25,234
0
python,python-3.x
You have to add the Python bin folder to your PATH. You can do it manually, but I recall that when you install Python there is an option to do this for you.
0
1
0
0
2015-12-10T21:26:00.000
4
0.049958
false
34,212,036
1
0
0
1
Just curious, is there a particular reason why Python 3.x is not installed on Windows to run default with the command line "python3", like it does on Mac OSX and Linux? Is there some kind of way to configure Python so that it runs like this? Thanks. EDIT: Just to add, the reason I am asking is because I have both the Python 2 and 3 interpreter installed on my computer, and so it is ambiguous, as both are run using the command "python".
Python Twistd MySQL - Get Updated Row id (not inserting)
35,131,551
0
3
311
1
python,mysql,twisted
I think the best way to accomplish this is to first make a select for the id (or ids) of the row/rows you want to update, then update the row with a WHERE condition matching the id of the item to update. That way you are certain that you only updated the specific item. An UPDATE statement can update multiple rows that matches your criteria. That is why you cannot request the last updated id by using a built in function.
0
1
0
0
2015-12-10T23:32:00.000
1
0
false
34,213,706
0
0
0
1
Python, Twistd and SO newbie. I am writing a program that organises seating across multiple rooms. I have only included related columns from the tables below. Basic Mysql tables Table id Seat id table_id name Card seat_id The Seat and Table tables are pre-populated with the 'name' columns initially NULL. Stage One I want to update a seat's name by finding the first available seat given a group of table ids. Stage Two I want to be able to get the updated row id from Stage One (because I don't already know this) to add to the Card table. Names can be assigned to more than one seat so I can't just find a seat that matches a name. I can do Stage One but have no idea how to do Stage Two because lastrowid only works for inserts not updates. Any help would be appreciated. Using twisted.enterprise.adbapi if that helps. Cheers
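A minimal sketch of the select-then-update idea from the answer above, using twisted.enterprise.adbapi's runInteraction so the SELECT and the UPDATE share one transaction (a FOR UPDATE lock is added to guard against two requests grabbing the same seat). Table and column names follow the question; the MySQL connection parameters are placeholders:

    from twisted.enterprise import adbapi
    from twisted.internet import reactor

    dbpool = adbapi.ConnectionPool("MySQLdb", host="localhost", user="user",
                                   passwd="secret", db="seating")

    def assign_seat(txn, table_ids, name):
        # Find the first free seat on the given tables, lock it, name it,
        # and return its id so the caller can insert the Card row.
        placeholders = ",".join(["%s"] * len(table_ids))
        txn.execute("SELECT id FROM Seat WHERE name IS NULL AND table_id IN (%s) "
                    "LIMIT 1 FOR UPDATE" % placeholders, table_ids)
        row = txn.fetchone()
        if row is None:
            return None
        txn.execute("UPDATE Seat SET name = %s WHERE id = %s", (name, row[0]))
        return row[0]

    def report(seat_id):
        print("updated seat id: %s" % seat_id)

    d = dbpool.runInteraction(assign_seat, [1, 2, 3], "Alice")
    d.addCallback(report)
    d.addBoth(lambda _: reactor.stop())
    reactor.run()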
Accessing a 32bit with python on a Debian 64bit with CDLL lib (or other)
34,221,649
0
1
285
0
python,ubuntu,debian,32bit-64bit,ctypes
I am not sure if you can do this in the same process - we are talking about arithmetic here: 32bit pointers are different from 64bit pointers, so trying to reference them in the same process ... well, I am not sure what happens when trying to access a memory area which is not accessible or which is not supposed to be accessed (I guess Segmentation fault? ). The only solution I can think of is it to have a separate Python 32 bit instance that runs in its own process. Then, with some form of IPC you can call the python32 bit instance from your 64 bit instance.
0
1
0
1
2015-12-11T10:36:00.000
2
0
false
34,221,468
0
0
0
1
I've been trying for ages to access a 32-bit compiled C lib from 64-bit Ubuntu. I'm using Python and the ctypes CDLL class to make it happen, but with no success so far. I can easily open the same 32-bit lib on a 32-bit OS, and the 64-bit version on a 64-bit OS. So, what I'm asking is whether anyone knows a way to encapsulate/sandbox/wrap the lib so I can achieve my goal. That way I can use a single 64-bit server to access both the 32-bit and 64-bit versions of those libs. If someone knows another Python lib that can do the trick, please let me know.
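A minimal sketch of the separate-process idea from the answer above: the 64-bit program launches a 32-bit Python interpreter that loads the 32-bit library and replies over stdout. The interpreter path, the helper32.py script and the JSON payload are all hypothetical placeholders:

    import json
    import subprocess

    # Hypothetical path to a 32-bit Python installation.
    PYTHON32 = "/opt/python27-32bit/bin/python"

    def call_32bit(payload):
        # helper32.py (hypothetical) would use ctypes.CDLL on the 32-bit lib,
        # read the JSON argument, and print a JSON result to stdout.
        out = subprocess.check_output([PYTHON32, "helper32.py", json.dumps(payload)])
        return json.loads(out)

    print(call_32bit({"x": 41}))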
UltiSnips not work: PYTHON caused GVim to EXIT
46,685,085
0
1
120
0
python,vim,ultisnips
Problem solved. I downgraded python from 2.7.11 to 2.7.9 and it worked well. – Cicero
0
1
0
0
2015-12-12T06:35:00.000
1
0
false
34,237,068
0
0
0
1
Environment: OS: Windows 7, x64; Vim: gvim74 from vim.org; Python: Python 2.7.11; UltiSnips: just downloaded from GitHub. Gvim worked perfectly for me with SnipMate for a long time, and lately I wanted to use UltiSnips instead. So I newly installed Python on my PC, installed UltiSnips with Pathogen and just deleted SnipMate, hoping it would work well, but it doesn't. The problem is: when I open gvim, it exits as soon as I press "i". Then I restored gvim to the state before I installed UltiSnips and simply executed a command in gvim like python print "Hello" or python 1, and it does nothing but cause gvim to exit instantly, just like an executed "q!" command. So it is probably a problem that happens when gvim encounters Python, and has nothing to do with UltiSnips. I hope for suggestions or methods to solve this. Thanks, all you guys.
How can I turn my python script off from the terminal?
34,241,511
0
0
457
0
python,terminal,pickle
Check out the atexit module: add a cleanup function and register it with atexit (atexit.register also works as a decorator).
0
1
0
0
2015-12-12T15:14:00.000
2
0
false
34,241,400
0
0
0
1
I have just made a script that I want to turn off from the terminal, but instead of just ending it I want it to pickle a file. Is there a correct way to do this?
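A minimal sketch of the atexit suggestion from the answer above; the state dictionary and the file name are placeholders. Note that atexit handlers run on normal interpreter exit (including an unhandled KeyboardInterrupt from Ctrl+C), but not if the process is killed outright with SIGKILL:

    import atexit
    import pickle

    state = {"progress": 0}

    def save_state():
        # Called automatically when the interpreter shuts down normally.
        with open("state.pkl", "wb") as f:
            pickle.dump(state, f)

    atexit.register(save_state)

    # ... main work loop that keeps updating state["progress"] ...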
unzipping zipx by controlling winzip with python
34,455,025
1
0
777
0
python,unzip,win32com,winzip
Forget win32com. Instead, create a destination folder loop over zipx archives; for each one: create a temp folder using Python's tempfile module use the subprocess module to run your unzip utility (that handles zipx format) on the zipx archive, with command line option to extract to the temp folder created use the shutil module to copy each unzipped file in that folder to the common destination folder created in first step, if file meets the condition. For file size, use Path.stat().st_size or os.path.get_size(). erase temp folder So each archive gets unzipped to a different temp folder, but all extracted files get moved to one common folder. Alternately, you could use the same temp folder for all archives, but empty the folder at end of each iteration, and delete the temp folder at end of script. create a destination folder create a temp archive extraction folder using Python's tempfile module loop over zipx archives; for each one: use the subprocess module to run your unzip utility (that handles zipx format) on the zipx archive, with command line option to extract to the temp folder created use the shutil module to copy each unzipped file in that folder to the common destination folder created in first step, if file meets the condition. For file size, use Path.stat().st_size or os.path.get_size(). erase contents of temp folder erase temp folder
0
1
0
0
2015-12-13T04:14:00.000
1
0.197375
false
34,247,918
1
0
0
1
I need to unzip numerous zipx files into a directory while checking on the fly whether the unzipped files comply with a condition. The condition is: if there is a file with the same name, overwrite it only if the unzipped file is larger. I wanted to control WinZip with win32com, but I couldn't find the Object.Name with the COM browser (win32com\client\combrowse.py). It would also be nice to find methods I could use with this WinZip object. Could anyone help with the approach I chose, or advise an easier option to solve the described problem? Thanks.
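A minimal sketch of the workflow the answer above outlines, assuming some command-line extractor that understands zipx is available; the wzunzip name and its argument order are hypothetical placeholders for whatever utility is actually used:

    import os
    import shutil
    import subprocess
    import tempfile

    ARCHIVE_DIR = r"C:\archives"      # placeholder locations
    DEST_DIR = r"C:\extracted"

    for archive in os.listdir(ARCHIVE_DIR):
        if not archive.lower().endswith(".zipx"):
            continue
        tmp = tempfile.mkdtemp()
        try:
            # Hypothetical extractor call: unpack the archive into tmp.
            subprocess.check_call(["wzunzip", os.path.join(ARCHIVE_DIR, archive), tmp])
            for name in os.listdir(tmp):
                src = os.path.join(tmp, name)
                dst = os.path.join(DEST_DIR, name)
                # Overwrite only when the new file is larger (or doesn't exist yet).
                if not os.path.exists(dst) or os.path.getsize(src) > os.path.getsize(dst):
                    shutil.copy2(src, dst)
        finally:
            shutil.rmtree(tmp)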
How to see pip package sizes installed?
61,732,256
5
46
34,529
0
python,linux,debian,pip
Here's how: pip3 show numpy | grep "Location:" will return path/to/all/packages; then run du -h path/to/all/packages, and the last line will contain the size of all packages in MB. Note: you may put any package name in place of numpy.
0
1
0
0
2015-12-14T11:40:00.000
10
0.099668
false
34,266,159
1
0
0
1
I'm not sure this is possible. Google does not seem to have any answers. Running Debian Linux, can I list all installed pip packages together with their size (amount of disk space used)? i.e. list all pip packages with their size on disk?
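A pure-Python alternative sketch for the same question: walk each installed distribution's location and sum the file sizes. It relies on pkg_resources (shipped with setuptools), and the folder-name heuristic only approximates size on disk, since some projects install under different directory names:

    import os
    import pkg_resources

    def dir_size(path):
        total = 0
        for root, _dirs, files in os.walk(path):
            for name in files:
                total += os.path.getsize(os.path.join(root, name))
        return total

    for dist in sorted(pkg_resources.working_set, key=lambda d: d.project_name.lower()):
        top = os.path.join(dist.location, dist.project_name.replace("-", "_"))
        if os.path.isdir(top):
            print("%-30s %8.1f MB" % (dist.project_name, dir_size(top) / 1e6))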
How does calling C or C++ from python work?
34,284,538
2
2
397
0
python,c++,c,language-binding
You can call between C, C++, Python, and a bunch of other languages without spawning a separate process or copying much of anything. In Python basically everything is reference-counted, so if you want to use a Python object in C++ you can simply use the same reference count to manage its lifetime (e.g. to avoid copying it even if Python decides it doesn't need the object anymore). If you want the reverse, you may need to use a C++ std::shared_ptr or similar to hold your objects in C++, so that Python can also reference them. In some cases things are even simpler than this, such as if you have a pure function in C or C++ which takes some values from Python and returns a result with no side effects and no storing of the inputs. In such a case, you certainly do not need to copy anything, because you can read the Python values directly and the Python interpreter will not be running while your C or C++ code is running (because they are all in a single thread). There is an extensive Python (and NumPy, by the way) C API for this, plus the excellent Boost.Python for C++ integration including smart pointers.
1
1
0
1
2015-12-15T08:39:00.000
1
1.2
true
34,284,421
0
0
0
1
There are multiple questions about "how to" call C C++ code from Python. But I would like to understand what exactly happens when this is done and what are the performance concerns. What is the theory underneath? Some questions I hope to get answered by understanding the principle are: When considering data (especially large data) being processed (e.g. 2GB) which needs to be passed from python to C / C++ and then back. How are the data transferred from python to C when function is called? How is the result transferred back after function ends? Is everything done in memory or are UNIX/TCP sockets or files used to transfer the data? Is there some translation and copying done (e.g. to convert data types), do I need 2GB memory for holding data in python and additional +-2GB memory to have a C version of the data that is passed to C function? Do the C code and Python code run in different processes?
Run script with udev after USB plugged in on RPi
34,300,132
1
1
1,205
0
python,bash,raspberry-pi2,usb-drive,udev
Well, you have probably described your problem yourself: the mount process is too slow. You can mount your USB device from your script.sh. You probably also need to disable automatic USB device mounting for your system, or for the specific device only. If you add a symlink to your udev rule, e.g. SYMLINK+="backup", then you can mount this device with: mkdir -p /path/to/foo followed by mount -t ext4 /dev/backup /path/to/foo
0
1
0
1
2015-12-15T17:04:00.000
1
1.2
true
34,295,198
0
0
0
1
I am trying to run a script from a udev rule after any USB drive has been plugged in. When I run the script manually, after the USB is mounted normally, it will run fine. The script calls a python program to run and the python program uses a file on the USB drive. No issues there. If I make the script to simply log the date in a file, that works just fine. So I know my UDEV rule and my script work fine, each on their own. The issue seems to come up when udev calls the script, then script calling the python program and the python program does not run right. I believe it to be that the USB drive has not finished mounting before the python script runs. When watching top, my script begins to run, then python begins to run, they both end, and then I get the window popup of my accessing the files on my USB drive. So I tried having script1.sh call script2.sh call python.py. I tried having script.sh call python1.py call python2.py. I tried adding sleep function both in the script.sh and python.py. I tried in the rule, RUN+="/home/pi/script.sh & exit". I tried exit in the files. I tried disown in the files. What else can I try?
run celery task using a django management command
34,311,859
1
1
1,782
0
python,django,celery,celery-task
Executing Celery tasks from a command line utility is the same as executing them from views. If you have a task called foo, then in both cases: Calling foo(...) executes the code of the task as if foo were just a plain Python function. Calling foo.delay(...) executes the code of the task asynchronously, through a Celery worker.
0
1
0
0
2015-12-16T09:26:00.000
1
1.2
true
34,308,221
0
0
1
1
I'm trying to run a task, using celery 3.1, from a custom management command. If I call my task from a view it works fine but when starting the same task from my management command, the task will only run synchronous in current context (not async via celery). I don't have djcelery installed. What do I need to add to my management command to get async task processing on command line?
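A minimal sketch of such a management command, matching the answer above: calling .delay() queues the task instead of running it inline. The myapp.tasks.foo import path is hypothetical, and this assumes a Celery worker is running against the broker configured in the Django settings:

    # myapp/management/commands/run_foo.py  (hypothetical location)
    from django.core.management.base import BaseCommand

    from myapp.tasks import foo   # hypothetical task defined with @shared_task


    class Command(BaseCommand):
        help = "Queue the foo task on the Celery broker"

        def handle(self, *args, **options):
            result = foo.delay(42)            # asynchronous: returns immediately
            self.stdout.write("queued task %s" % result.id)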
IPC between C application and Python
34,316,940
0
7
10,304
0
python,c,ipc
If your struct is simple enough, you could even not use IPC at all. Provided, you can serialize it as string parameters that could be used as program arguments and provided the int value to return can be in the range 0-127, you could simply: in C code: prepare the command arguments to pass to the Python script fork-exec (assuming a Unix-like system) a Python interpretor with the script path and the script arguments wait for child termination read what the script passed as code termination in Python: get the arguments from command line and rebuild the elements of the struct process it end the script with exit(n) where n is an integer in the range 0-127 that will be returned to caller. If above does not meet your requirements, next level would be to use pipes: in C code: prepare 2 pipe pairs one for C->Python (let's call it input), one for Python->C (let's call it output) serialize the struct into a char buffer fork in child close write side of input pipe close read side of output pipe dup read side of input pipe to file descriptor 0 (stdin) (see `dup2) dup write side of output pipe to file descriptor 1 (stdout) exec a Python interpretor with the name of the script in parent close read side of input pipe close write side of output pipe write the buffer (eventually preceded by its size if it cannot be known a priori) to the write side on input file wait for the child to terminate read the return value from the read side of output pipe in Python: read the serialized data from standard input process it write the output integer to standard output exit
0
1
0
0
2015-12-16T15:08:00.000
4
0
false
34,315,470
1
0
0
1
So I am relatively new to IPC and I have a c program that collects data and a python program that analyses the data. I want to be able to: Call the python program as a subprocess of my main c program Pass a c struct containing the data to be processed to the python process Return an int value from the python process back to the c program I have been briefly looking at Pipes and FIFO, but so far cannot find any information to address this kind of problem, since as I understand it, a fork() for example will simply duplicate the calling process, so not what I want as I am trying to call a different process.
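To make the pipe-based option above concrete, here is a minimal sketch of the Python side only (Python 3): it reads a fixed-layout struct from stdin, computes something, and reports the result through its exit status, exactly as the answer describes. The record layout (two 32-bit ints and a double) is a placeholder and must match whatever the C parent writes:

    import struct
    import sys

    RECORD = "=iid"   # two 32-bit ints and a double, no padding; must match the C struct

    def main():
        raw = sys.stdin.buffer.read(struct.calcsize(RECORD))   # binary stdin (Python 3)
        a, b, x = struct.unpack(RECORD, raw)
        result = (a + b) % 128          # placeholder computation, kept in the 0-127 range
        sys.exit(result)                # the C parent reads this via wait()/waitpid()

    if __name__ == "__main__":
        main()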
Uploading python library to server
34,319,997
0
0
49
0
python,numpy,matplotlib,setup.py
This sounds hacky and quite possibly evil, but if you don't have shell access but do have Python access, I suppose you could write a Python script that writes the library files to the proper location. You can determine the location by examining the __file__ value in each module. If this is a file system location the Python process has permissions write to (possibly the site package directory) it could be done. If this is under a location you can't write to, then no. Be careful, this is quite hacky.
0
1
0
0
2015-12-16T18:04:00.000
1
1.2
true
34,319,068
0
0
0
1
The problem is like this: the Python on the server is version 2.4.3 (somewhat obsolete), numpy is version 1.2.1 (obsolete) and matplotlib is version 0.99.1.1 (devastatingly obsolete, and it lacks pyplot for some unknown reason). I cannot use a shell/bash on the server. How can I update numpy and matplotlib to current versions? E.g., can I upload some folders of my Python install to certain server locations and they will magically work? Or something different? Thank you for your attention. P.S. I can manipulate the Python path on the server during script execution.
Location of tensorflow/models.. in Windows
34,340,617
1
2
1,104
0
python,windows,docker,tensorflow
If you're using one of the devel tags (:latest-devel or :latest-devel-gpu), the file should be in /tensorflow/tensorflow/models/image/imagenet/classify_image.py. If you're using the base container (b.gcr.io/tensorflow/tensorflow:latest), it's not included -- that image just has the binary installed, not a full source distribution, and classify_image.py isn't included in the binary distribution.
0
1
0
0
2015-12-17T15:06:00.000
1
0.197375
false
34,337,788
0
1
0
1
I have installed TensorFlow on Windows using Docker, and I want to go to the folder "tensorflow/models/image/imagenet" that contains the "classify_image.py" Python file. Can someone please tell me how to reach this path?
Zooming in on the python shell wing_ide
34,385,561
1
1
9,187
0
python,python-2.7,wing-ide
For Wing IDE: Try ctrl++ or ctrl+MouseScrollUp for quick changes. You can also just change your font size in the Editor preferences. For Python IDLE: Under Options --> Configure IDLE; change the Size. For 'cmd' prompt or Bash: Right-Click on the Window bar and select Properties. Change the font size in the 'Font' tab. If you want it to be permanent, do the same in 'Defaults' instead (from the right-click menu).
0
1
0
0
2015-12-20T20:25:00.000
1
0.197375
false
34,385,462
1
0
0
1
Is there any way to zoom in on the Python Shell in Wing IDE? I am having trouble seeing the font because it is too small.
Cron job output in wrong Linux folder
34,400,781
0
0
120
0
python,linux,path,cron
Thanks for the responses. After further searching, I found this, which worked: */1 * * * * /home/ranveer/vimbackup.sh >> /home/ranveer/vimbackup.log 2>&1
0
1
0
1
2015-12-21T14:09:00.000
3
0
false
34,397,628
0
0
0
1
This is my first post here. I am a very big fan of Stack Overflow. This is the first time I could not find an answer to one of my questions. Here is the scenario: In my Linux system, I am not an admin or root. When I run a Python script, the output appears in the original folder, however when I run the same Python script as a Cron job, it appears in my accounts home folder. Is there anything I can do to direct the output to a desired folder? I do have the proper shebang path. Thank you!
Etags used in RESTful APIs are still susceptible to race conditions
34,428,792
2
6
1,416
0
python,database,rest,concurrency,etag
This is really a question about how to use ORMs to do updates, not about ETags. Imagine 2 processes transferring money into a bank account at the same time -- they both read the old balance, add some, then write the new balance. One of the transfers is lost. When you're writing with a relational DB, the solution to these problems is to put the read + write in the same transaction, and then use SELECT FOR UPDATE to read the data and/or ensure you have an appropriate isolation level set. The various ORM implementations all support transactions, so getting the read, check and write into the same transaction will be easy. If you set the SERIALIZABLE isolation level, then that will be enough to fix race conditions, but you may have to deal with deadlocks. ORMs also generally support SELECT FOR UPDATE in some way. This will let you write safe code with the default READ COMMITTED isolation level. If you google SELECT FOR UPDATE and your ORM, it will probably tell you how to do it. In both cases (serializable isolation level or select for update), the database will fix the problem by getting a lock on the row for the entity when you read it. If another request comes in and tries to read the entity before your transaction commits, it will be forced to wait.
0
1
0
0
2015-12-23T03:13:00.000
3
0.132549
false
34,428,046
0
0
1
3
Maybe I'm overlooking something simple and obvious here, but here goes: So one of the features of the Etag header in a HTTP request/response it to enforce concurrency, namely so that multiple clients cannot override each other's edits of a resource (normally when doing a PUT request). I think that part is fairly well known. The bit I'm not so sure about is how the backend/API implementation can actually implement this without having a race condition; for example: Setup: RESTful API sits on top of a standard relational database, using an ORM for all interactions (SQL Alchemy or Postgres for example). Etag is based on 'last updated time' of the resource Web framework (Flask) sits behind a multi threaded/process webserver (nginx + gunicorn) so can process multiple requests concurrently. The problem: Client 1 and 2 both request a resource (get request), both now have the same Etag. Both Client 1 and 2 sends a PUT request to update the resource at the same time. The API receives the requests, proceeds to uses the ORM to fetch the required information from the database then compares the request Etag with the 'last updated time' from the database... they match so each is a valid request. Each request continues on and commits the update to the database. Each commit is a synchronous/blocking transaction so one request will get in before the other and thus one will override the others changes. Doesn't this break the purpose of the Etag? The only fool-proof solution I can think of is to also make the database perform the check, in the update query for example. Am I missing something? P.S Tagged as Python due to the frameworks used but this should be a language/framework agnostic problem.
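A minimal sketch of the SELECT FOR UPDATE approach from the answer above, written with SQLAlchemy (which the question mentions); the Resource model, the etag column and the 412 handling are illustrative assumptions rather than a prescribed schema, and the connection string is a placeholder:

    from sqlalchemy import Column, Integer, String, create_engine
    from sqlalchemy.orm import declarative_base, sessionmaker   # SQLAlchemy 1.4+

    Base = declarative_base()

    class Resource(Base):                      # illustrative model
        __tablename__ = "resource"
        id = Column(Integer, primary_key=True)
        data = Column(String)
        etag = Column(String)

    engine = create_engine("postgresql://localhost/example")   # placeholder DSN
    Session = sessionmaker(bind=engine)

    def put_resource(resource_id, new_data, request_etag, new_etag):
        session = Session()
        try:
            # Lock the row so concurrent PUTs serialize on it.
            resource = (session.query(Resource)
                        .filter_by(id=resource_id)
                        .with_for_update()
                        .one())
            if resource.etag != request_etag:
                return 412                     # Precondition Failed: stale ETag
            resource.data = new_data
            resource.etag = new_etag
            session.commit()
            return 200
        finally:
            session.close()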
Etags used in RESTful APIs are still susceptible to race conditions
63,120,699
1
6
1,416
0
python,database,rest,concurrency,etag
You are right that you can still get race conditions if the 'check last etag' and 'make the change' aren't in one atomic operation. In essence, if your server itself has a race condition, sending etags to the client won't help with that. You already mentioned a good way to achieve this atomicity: The only fool-proof solution I can think of is to also make the database perform the check, in the update query for example. You could do something else, like using a mutex lock. Or using an architecture where two threads cannot deal with the same data. But the database check seems good to me. What you describe about ORM checks might be an addition for better error messages, but is not by itself sufficient as you found.
0
1
0
0
2015-12-23T03:13:00.000
3
0.066568
false
34,428,046
0
0
1
3
Maybe I'm overlooking something simple and obvious here, but here goes: So one of the features of the Etag header in an HTTP request/response is to enforce concurrency, namely so that multiple clients cannot override each other's edits of a resource (normally when doing a PUT request). I think that part is fairly well known. The bit I'm not so sure about is how the backend/API implementation can actually implement this without having a race condition; for example: Setup: RESTful API sits on top of a standard relational database, using an ORM for all interactions (SQLAlchemy or Postgres for example). Etag is based on the 'last updated time' of the resource. Web framework (Flask) sits behind a multi-threaded/multi-process webserver (nginx + gunicorn) so it can process multiple requests concurrently. The problem: Client 1 and 2 both request a resource (GET request); both now have the same Etag. Both Client 1 and 2 send a PUT request to update the resource at the same time. The API receives the requests, proceeds to use the ORM to fetch the required information from the database, then compares the request Etag with the 'last updated time' from the database... they match, so each is a valid request. Each request continues on and commits the update to the database. Each commit is a synchronous/blocking transaction, so one request will get in before the other and thus one will override the other's changes. Doesn't this break the purpose of the Etag? The only fool-proof solution I can think of is to also make the database perform the check, in the update query for example. Am I missing something? P.S. Tagged as Python due to the frameworks used, but this should be a language/framework-agnostic problem.
Etags used in RESTful APIs are still susceptible to race conditions
34,428,187
1
6
1,416
0
python,database,rest,concurrency,etag
Etag can be implemented in many ways, not just last updated time. If you choose to implement the Etag purely based on last updated time, then why not just use the Last-Modified header? If you were to encode more information into the Etag about the underlying resource, you wouldn't be susceptible to the race condition that you've outlined above. The only fool proof solution I can think of is to also make the database perform the check, in the update query for example. Am I missing something? That's your answer. Another option would be to add a version to each of your resources which is incremented on each successful update. When updating a resource, specify both the ID and the version in the WHERE. Additionally, set version = version + 1. If the resource had been updated since the last request then the update would fail as no record would be found. This eliminates the need for locking.
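A hedged sketch of the version-column idea from this answer, using the standard-library sqlite3 module purely for illustration (table and column names are assumptions):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE resource (id INTEGER PRIMARY KEY, body TEXT, version INTEGER)")
conn.execute("INSERT INTO resource (id, body, version) VALUES (1, 'original', 1)")
conn.commit()

def update_if_unchanged(conn, resource_id, expected_version, new_body):
    # The WHERE clause carries the concurrency check: if another request has
    # already bumped the version, no row matches and rowcount stays 0.
    cur = conn.execute(
        "UPDATE resource SET body = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_body, resource_id, expected_version))
    conn.commit()
    return cur.rowcount == 1   # False -> report a conflict (e.g. HTTP 412)

print(update_if_unchanged(conn, 1, 1, "first writer wins"))    # True
print(update_if_unchanged(conn, 1, 1, "stale second writer"))  # False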
0
1
0
0
2015-12-23T03:13:00.000
3
0.066568
false
34,428,046
0
0
1
3
Maybe I'm overlooking something simple and obvious here, but here goes: So one of the features of the Etag header in an HTTP request/response is to enforce concurrency, namely so that multiple clients cannot override each other's edits of a resource (normally when doing a PUT request). I think that part is fairly well known. The bit I'm not so sure about is how the backend/API implementation can actually implement this without having a race condition; for example: Setup: RESTful API sits on top of a standard relational database, using an ORM for all interactions (SQLAlchemy or Postgres for example). Etag is based on the 'last updated time' of the resource. Web framework (Flask) sits behind a multi-threaded/multi-process webserver (nginx + gunicorn) so it can process multiple requests concurrently. The problem: Client 1 and 2 both request a resource (GET request); both now have the same Etag. Both Client 1 and 2 send a PUT request to update the resource at the same time. The API receives the requests, proceeds to use the ORM to fetch the required information from the database, then compares the request Etag with the 'last updated time' from the database... they match, so each is a valid request. Each request continues on and commits the update to the database. Each commit is a synchronous/blocking transaction, so one request will get in before the other and thus one will override the other's changes. Doesn't this break the purpose of the Etag? The only fool-proof solution I can think of is to also make the database perform the check, in the update query for example. Am I missing something? P.S. Tagged as Python due to the frameworks used, but this should be a language/framework-agnostic problem.
Can't connect to DB2 Driver through Python: SQL1042C
34,651,608
1
1
2,550
1
python,db2,dashdb
We were able to install the driver successfully and the connection to the db is established without any problem. The steps are: 1) Upgraded to OS X El Capitan 2) Install pip - sudo pip install 3) Install ibm_db - sudo pip install ibm_db 4) During installation, the error below was hit: Referenced from: /Users/roramana/Library/Python/2.7/lib/python/site-packages/ibm_db.so Reason: unsafe use of relative rpath libdb2.dylib in /Users/roramana/Library/Python/2.7/lib/python/site-packages/ibm_db.so with restricted binary. After disabling System Integrity Protection, installation went fine. From the error sql1042c, it seems like you are hitting an environment setup issue. You could try setting DYLD_LIBRARY_PATH to the path where you have extracted the ODBC and CLI driver. If the problem still persists, please collect db2 traces and share them with us: db2trc on -f trc.dmp ; run your repro ; db2trc off ; db2trc flw trc.dmp trc.flw ; db2trc fmt trc.dmp trc.fmt. Share the trc.flw and trc.fmt files.
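For completeness, a minimal connection test once the environment is set up. All connection values below are placeholders, and DYLD_LIBRARY_PATH is assumed to already point at the directory containing the CLI driver before Python is started:

import ibm_db

# Placeholder DSN; replace database, host, port and credentials with real values.
dsn = ("DATABASE=BLUDB;"
       "HOSTNAME=your-db2-host.example.com;"
       "PORT=50000;"
       "PROTOCOL=TCPIP;"
       "UID=your_user;"
       "PWD=your_password;")

conn = ibm_db.connect(dsn, "", "")   # fails with driver/environment errors such as SQL1042C if setup is broken
print(ibm_db.server_info(conn).DBMS_NAME)
ibm_db.close(conn)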
0
1
0
0
2015-12-23T12:48:00.000
3
0.066568
false
34,436,084
0
0
0
1
I can't connect to a DB2 remote server using Python. Here is what I've done: Created a virtualenv with Python 2.7.10 (On Mac OS X 10.11.1) installed ibm-db using sudo pip install ibm_db Ran the following code: import ibm_db ibm_db.connect("my_connection_string", "", "") I then get the following error: Exception: [IBM][CLI Driver] SQL1042C An unexpected system error occurred. SQLSTATE=58004 SQLCODE=-1042 I've googled around for hours and trying out different solutions. Unfortunately, I haven't been able to find a proper guide for setting the environment up on Mac OS X + Python + DB2.
Python shell cmd and executable formats
34,448,313
0
1
419
0
python,command-line-arguments
You can use the Python interpreter itself to run your Python programs. Say you have a test.py file you want to run; then you can use python test.py to execute it. To be precise, you are not compiling the file in the C sense; the interpreter executes it for you (call it interpreting). For command-line arguments you can use sys.argv, as already mentioned in the above answers.
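A minimal illustration of reading command-line arguments with sys.argv (the file name test.py and the arguments are made up for the example; run it as python test.py Alice 3):

import sys

name = sys.argv[1] if len(sys.argv) > 1 else "world"
times = int(sys.argv[2]) if len(sys.argv) > 2 else 1

for _ in range(times):
    print("Hello, %s!" % name)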
0
1
0
0
2015-12-24T06:06:00.000
4
0
false
34,448,086
1
0
0
1
I have used both Python and C for a while. C is good in a way that i can use Windows cmd or anything like that to compile files and easily read command line arguments. However, the only thing that runs python that I know is IDLE which is like an interpreter and doesnt take command-line arguments and it's hard to work with. Is there anything like the C's cmd and a compiler for python 3.x? Thanks
Can celery assign task to specify worker
34,469,957
-2
8
12,753
0
python,celery
Just to answer your second question CELERY_TASK_RESULT_EXPIRES is the time in seconds that the result of the task is persisted. So after a task is over, its result is saved into your result backend. The result is kept there for the amount of time specified by that parameter. That is used when a task result might be accessed by different callers. This has probably nothing to do with your problem. As for the first solution, as already stated you have to use multiple queues. However be aware that you cannot assign the task to a specific Worker Process, just to a specific Worker which will then assign it to a specific Worker Process.
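A hedged sketch of the queue-based routing this answer refers to (project name, queue name and broker URL are assumptions; the worker command lines are shown as comments):

from celery import Celery

app = Celery("proj", broker="amqp://guest@localhost//")   # placeholder broker URL

@app.task
def heartbeat():
    return "tick"

# Dedicated worker that only consumes this queue:
#     celery -A proj worker -Q heartbeat_queue -c 1
# All other workers listen on the default queue and never receive this task:
#     celery -A proj worker -Q celery
heartbeat.apply_async(queue="heartbeat_queue")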
0
1
0
0
2015-12-26T02:33:00.000
2
-0.197375
false
34,468,024
0
0
1
1
Celery will send task to idle workers. I have a task will run every 5 seconds, and I want this task to only be sent to one specify worker. Other tasks can share the left over workers Can celery do this?? And I want to know what this parameter is: CELERY_TASK_RESULT_EXPIRES Does it means that the task will not be sent to a worker in the queue? Or does it stop the task if it runs too long?
Django uwsgi subprocess and permissions
34,545,562
0
0
240
0
python,django,permissions,uwsgi,cherokee
As I said in my comments, this issue was related to supervisord. I solved it by assigning the right path and user to the "environment" variable in supervisord's config file.
0
1
0
1
2015-12-26T11:52:00.000
1
0
false
34,471,080
0
0
1
1
I'm trying to generate PDF file from Latex template. I've done it in development environment (running python manage.py straight from eclipse)... but I can't make it work into the server, which is running using cherokee and uwsgi. We have realized that open(filename) creates a file owning to root (also root group). This isn't taking place in development environment... but the most strange thing about this issue is that somewhere else in our code we are creating a text file (latex uses is a text file too), but it's created with the user cherokee is supposed to use, not root! What happened? How can we fix it? We are running this code on ubuntu linux and a virtual environment both in development and production. We started following some instructions to do it using python's temporary file and folder creation functions, but we thought that it could be something related with them, and created them "manually" in order to try to solve this issue... but it didn't work.
Xbee API packet is failing to arrive from router to coordinator
34,498,742
1
0
134
0
python,c,arduino,wireless,xbee
There isn't a minimum size, but the module does make use of a "packetization timeout" setting (ATRO) to decide when to send your data. If you wait longer, you may find that the module sends the frame and it arrives at the destination. I'm assuming you're using "AT Mode" even though you write "API Mode". If you are in fact using API mode, please post more of your code, and perhaps include a link to the code library you're using to build your API frames. Are you setting the length correctly? Does the library expect a null-terminated string for the payload? Try adding a 0 to the end of your payload array to see if that helps.
0
1
1
0
2015-12-27T19:25:00.000
1
0.197375
false
34,483,983
0
0
0
1
I need to ask about XBee packet size: is there any minimum size for an API packet? I'm using XBee S2 API mode AP1; however, when I send the frame below from router to coordinator, the packet fails to arrive. Packet: uint8_t payload[] = {'B',200,200,200,200}; However, if I send: Packet: uint8_t payload[] = {'B',200,200,200,200,200,200}; the packet arrives successfully .... weird :( Test 3: Packet: uint8_t payload[] = {'B',200,200,200}; the packet arrives successfully Test 4: uint8_t payload[] = {'B',200,200}; the packet fails to arrive :( I don't know what the problem is
How do I install and link the correct tcl-tk for Python 3 on a Mac?
34,496,074
0
1
141
0
python,macos
Flagging the python3 install with '--with-tcl-tk' works, but the idle3 launcher needs to be linked to it using brew linkapps python3. Thereafter, the warning caveat which accompanies the idle3 launch disappears. I hope this helps other users. jA
0
1
0
0
2015-12-28T03:15:00.000
1
0
false
34,487,269
0
0
0
1
Will someone please give a clear, precise, repeatable method of linking a brewed Python 3 with the correct tcl-tk for a Mac OS? I am NOT a power user. I received an answer to this question from a homebrew contributor, but that answer no longer works.
FreeBSD PHP exec permission denied
34,491,632
0
0
474
0
php,python,linux
Be sure to use full paths for both python and your script: $foo = exec('/usr/bin/python /path/script.py'); Also, make sure the directory where your script is located can be accessed by the www user; you will probably need to chmod 755 /path.
0
1
0
1
2015-12-28T10:02:00.000
1
1.2
true
34,491,359
0
0
0
1
I want to run a couple of Python scripts from PHP. On an Ubuntu machine everything works right out of the box. On FreeBSD, though, I get /usr/local/lib/python2.7: Permission denied. Any idea how to give Apache permission to run Python through shell_exec or exec? Also, see how I had to give the full path of the Python binary? Is there any way to avoid that too?
Celery Tasks with eta get removed from RabbitMQ
35,126,618
1
9
1,055
0
python,django,multithreading,celery
As far as I know Celery does not rely on RabbitMQ's scheduled queues. It implements ETA/Countdown internally. It seems that you have enough workers that are able to fetch enough messages and schedule them internally. Mind that you don't need 200 workers. You have the prefetch multiplier set to the default value so you need less.
0
1
0
0
2015-12-28T14:25:00.000
1
0.197375
false
34,495,318
0
0
1
1
I'm using Django 1.6, RabbitMQ 3.5.6, celery 3.1.19. There is a periodic task which runs every 30 seconds and creates 200 tasks with given eta parameter. After I run the celery worker, slowly the queue gets created in RabbitMQ and I see around 1200 scheduled tasks waiting to be fired. Then, I restart the celery worker and all of the waiting 1200 scheduled tasks get removed from RabbitMQ. How I create tasks: my_task.apply_async((arg1, arg2), eta=my_object.time_in_future) I run the worker like this: python manage.py celery worker -Q my_tasks_1 -A my_app -l CELERY_ACKS_LATE is set to True in Django settings. I couldn't find any possible reason. Should I run the worker with a different configuration/flag/parameter? Any idea?
Remote SSH server accessing local files
34,500,718
2
0
866
0
python,linux,ssh
You're asking if you can write a program on the server which can access files from the client when someone runs this program through SSH from the client? If the only program running on the client is SSH, then no. If it was possible, that would be a security bug in SSH.
0
1
1
1
2015-12-28T20:14:00.000
1
1.2
true
34,500,111
0
0
0
1
Is it possible to access local files via remote SSH connection (local files of the connecting client of course, not other clients)? To be specific, I'm wondering if the app I'm making (which is designed to be used over SSH, i.e. user connects to a remote SSH server and the script (written in Python) is automatically executed) can access local (client's) files. I want to implement an upload system, where user(s) (connected to SSH server, running the script) may be able to upload images, from their local computers, over to other hosting sites (not the SSH server itself, but other sites, like imgur or pomf (the API is irrelevant)). So the remote server would require access to local files to send the file to another remote hosting server and return the link.
open cmd with admin rights (Windows 10)
34,500,631
1
4
7,654
0
python,windows,cmd
Try something like this: runas /user:administrator regedit.
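Since the question asks about doing this from a Python script, one hedged way to use the runas suggestion is via subprocess; note that runas will still prompt interactively for the administrator password in the console, and the netsh command shown is only an example:

import subprocess

subprocess.call([
    "runas",
    "/user:Administrator",
    "netsh interface ip show config",   # replace with the netsh command you actually need
])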
0
1
0
0
2015-12-28T20:33:00.000
3
1.2
true
34,500,369
0
0
0
1
I have my own python script that manages the IP address on my computer. Mainly it executes the netsh command in the command line (Windows 10), for which you must have administrator rights. It is my own computer, I am the administrator, and when running the script I am already logged in with my user (Adrian), which is of type administrator. I can't use the right-click and "run as administrator" solution because I am executing my netsh command from my python script. Does anybody know how to get "run as administrator" with a command from CMD? Thanks
`Error: Failed to determine the layout of your Qt installation` when installing pyqt for python3 on Mavericks
34,502,219
0
0
900
0
python-3.x,pyqt,homebrew
Answering to help other people who encounter this: The solution was to first upgrade XCode to XCode 7.2 and open it once to accept the license and have it install additional components. Then, a brew update and a brew install pyqt --with-python3 finally worked.
1
1
0
0
2015-12-28T23:17:00.000
1
1.2
true
34,502,194
0
0
0
1
Running brew install pyqt --with-python3, I get Error: Failed to determine the layout of your Qt installation. Adding --verbose to the brew script, the problem is that ld can't find -lgcc_s.10.5. (This is on Mac OS X 10.10.5 Yosemite)
Completing Spotify Authorization Code Flow via desktop application without using browser
34,520,316
2
6
1,073
0
python,api,heroku,oauth-2.0,spotify
I once ran into a similar issue with Google's Calendar API. The app was pretty low-importance so I botched a solution together by running through the auth locally in my browser, finding the response token, and manually copying it over into an environment variable on Heroku. The downside of course was that tokens are set to auto-expire (I believe Google Calendar's was set to 30 days), so periodically the app stopped working and I had to run through the auth flow and copy the key over again. There might be a way to automate that. Good luck!
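Carrying that idea over to the Spotify/spotipy case in the question, a hedged sketch (the environment variable name, username, playlist id and track URI are all placeholders, and token refresh is deliberately not shown):

import os
import spotipy

# Assumes the auth flow was completed once elsewhere and the access token was
# stored as a Heroku config var, e.g.  heroku config:set SPOTIFY_TOKEN=...
token = os.environ["SPOTIFY_TOKEN"]

sp = spotipy.Spotify(auth=token)
sp.user_playlist_add_tracks("your_username", "your_playlist_id",
                            ["spotify:track:your_track_id"])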
0
1
1
0
2015-12-29T22:40:00.000
2
0.197375
false
34,520,233
0
0
1
1
Working on a small app that takes a Spotify track URL submitted by a user in a messaging application and adds it to a public Spotify playlist. The app is running with the help of spotipy python on a Heroku site (so I have a valid /callback) and listens for the user posting a track URL. When I run the app through command line, I use util.prompt_for_user_token. A browser opens, I move through the auth flow successfully, and I copy-paste the provided callback URL back into terminal. When I run this app and attempt to add a track on the messaging application, it does not open a browser for the user to authenticate, so the auth flow never completes. Any advice on how to handle this? Can I auth once via terminal, capture the code/token and then handle the refreshing process so that the end-user never has to authenticate? P.S. can't add the tag "spotipy" yet but surprised it was not already available
How do I make Python 3.5 my default version on MacOS?
34,529,150
14
8
36,713
0
python,macos,python-2.7,python-3.x
Since Python 2 and 3 can happily coexist on the same system, you can easily switch between them by specifying in your commands when you want to use Python 3. So for Idle, you need to type idle3 in the terminal in order to use it with Python 3 and idle for using it with Python 2. Similarly, if you need to run a script or reach a python prompt from the terminal you should type python3 when you want to use Python 3 and python when you want to use Python 2.
0
1
0
0
2015-12-30T10:56:00.000
7
1.2
true
34,528,107
1
0
0
6
I have just installed Python 3.5.1 on my Mac (running the latest version of OSX). My system came with Python 2.7 installed. When I type IDLE at the Terminal prompt my system pulls up the original Python 2.7 rather than the newly installed Python 3.5. How do I get my system to default to Python 3.5.1 when I open the IDLE window from Terminal?
How do I make Python 3.5 my default version on MacOS?
54,570,625
1
8
36,713
0
python,macos,python-2.7,python-3.x
Do the right thing, do the thing right! Open your terminal and input python -V; it likely shows: Python 2.7.10. Input python3 -V; it likely shows: Python 3.7.2. Input where python or which python; it likely shows: /usr/bin/python. Input where python3 or which python3; it likely shows: /usr/local/bin/python3. Add the following line at the bottom of your shell profile file (~/.profile or ~/.bash_profile under Bash, ~/.zshrc under zsh): alias python='/usr/local/bin/python3' OR alias python=python3. Then input source ~/.bash_profile under Bash or source ~/.zshrc under zsh. Quit the terminal. Open your terminal again and input python -V; it likely shows: Python 3.7.2. Note: under zsh the file to edit is ~/.zshrc, not ~/.bash_profile or ~/.profile. Hope this helps you all!
0
1
0
0
2015-12-30T10:56:00.000
7
0.028564
false
34,528,107
1
0
0
6
I have just installed Python 3.5.1 on my Mac (running the latest version of OSX). My system came with Python 2.7 installed. When I type IDLE at the Terminal prompt my system pulls up the original Python 2.7 rather than the newly installed Python 3.5. How do I get my system to default to Python 3.5.1 when I open the IDLE window from Terminal?
How do I make Python 3.5 my default version on MacOS?
42,657,534
1
8
36,713
0
python,macos,python-2.7,python-3.x
You can switch to any Python version in your project by creating a virtual environment: virtualenv -p /usr/bin/python2.x (or python3.x). In case you just want to run a program with a specific version, just open a shell and enter python2.x or python3.x.
0
1
0
0
2015-12-30T10:56:00.000
7
0.028564
false
34,528,107
1
0
0
6
I have just installed Python 3.5.1 on my Mac (running the latest version of OSX). My system came with Python 2.7 installed. When I type IDLE at the Terminal prompt my system pulls up the original Python 2.7 rather than the newly installed Python 3.5. How do I get my system to default to Python 3.5.1 when I open the IDLE window from Terminal?
How do I make Python 3.5 my default version on MacOS?
34,528,349
1
8
36,713
0
python,macos,python-2.7,python-3.x
If you don't have any Python 2 scripts that you use, you can delete Python 2. But it's not a problem to have them both installed. You just have to use another path, python3, to launch IDLE. I would prefer to leave them both installed, so if you have any scripts that are in Python 2 you can still run them, or you can port them to Python 3.
0
1
0
0
2015-12-30T10:56:00.000
7
0.028564
false
34,528,107
1
0
0
6
I have just installed Python 3.5.1 on my Mac (running the latest version of OSX). My system came with Python 2.7 installed. When I type IDLE at the Terminal prompt my system pulls up the original Python 2.7 rather than the newly installed Python 3.5. How do I get my system to default to Python 3.5.1 when I open the IDLE window from Terminal?
How do I make Python 3.5 my default version on MacOS?
34,528,211
3
8
36,713
0
python,macos,python-2.7,python-3.x
You can use the python3 command (instead of using python), or you can simply uninstall the 2.7 version if you don't use it
0
1
0
0
2015-12-30T10:56:00.000
7
0.085505
false
34,528,107
1
0
0
6
I have just installed Python 3.5.1 on my Mac (running the latest version of OSX). My system came with Python 2.7 installed. When I type IDLE at the Terminal prompt my system pulls up the original Python 2.7 rather than the newly installed Python 3.5. How do I get my system to default to Python 3.5.1 when I open the IDLE window from Terminal?
How do I make Python 3.5 my default version on MacOS?
40,856,469
0
8
36,713
0
python,macos,python-2.7,python-3.x
By typing python, you are actually referring to a link. You will find its location with $ which python. In my case it was /usr/local/bin/python. Go there ($ open /usr/local/bin/) and just delete the original python, python-config and idle, as they are identical to the 2.7 files in the same folder. Then duplicate the 3.5 files and rename them to what you just deleted. This also changes the default link other editors like Sublime_ReplPython use, and therefore updates it to the 3.5 version. This was my major concern with the standard installation.
0
1
0
0
2015-12-30T10:56:00.000
7
0
false
34,528,107
1
0
0
6
I have just installed Python 3.5.1 on my Mac (running the latest version of OSX). My system came with Python 2.7 installed. When I type IDLE at the Terminal prompt my system pulls up the original Python 2.7 rather than the newly installed Python 3.5. How do I get my system to default to Python 3.5.1 when I open the IDLE window from Terminal?
Running Python Script from different working directory
34,528,893
1
4
8,093
0
python,subprocess
Things you use from anywhere should be reachable from anywhere. That's where the system's PATH environment variable comes in. You should either move your script to a directory in the PATH, or extend the PATH with the location of your script. Note: make sure the script works location-independently: use the os.path functions extensively, e.g. os.path.join(base, sub) where possible, etc...
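To make the location-independence advice concrete, a small hedged sketch (the renaming rule shown is invented; the real Namechange.py logic is not in the question). It takes the directory to work on as an argument instead of relying on the current working directory:

import os
import sys

target_dir = sys.argv[1] if len(sys.argv) > 1 else os.getcwd()

for name in os.listdir(target_dir):
    # purely illustrative rule: strip an assumed "_old" suffix before ".txt"
    if name.endswith("_old.txt"):
        os.rename(os.path.join(target_dir, name),
                  os.path.join(target_dir, name.replace("_old.txt", ".txt")))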
0
1
0
0
2015-12-30T11:27:00.000
2
0.099668
false
34,528,609
0
0
0
1
I have this program Namechange.py which changes the names of files (it always cuts some useless end). Everything works fine, but I use this file a lot in a lot of different directories, which is bothersome when I want to change something. What I'm searching for is a python script which lets me execute this script in a directory I choose. My first idea was to run another script which copies Namechange.py into the desired directory, executes it there and deletes it after everything is finished. The copying part works. Till now I tried using a symlink (it just executed the script in the working directory :D) as well as the subprocess module, which says there is no such directory when I use: subprocess.call(["cd", newpath]) where newpath is the absolute path to the directory I want to use the script in, with error OSError: [Errno 2] No such file or directory. If somebody has an elegant way to achieve this I would be glad. Thanks and goodbye
How to get list of files on Mac that have been tagged by Finder with a color?
47,679,286
0
4
3,233
0
python,macos,tags,finder
Late to the party here, but this is something that has been bugging me too, and I eventually came up with a workaround for tech-shy people that requires no command-line coding, just ordinary Finder commands and SimpleText. Try the following: open a Finder window containing the tagged files and Select All. Right-click on the selected files to get the contextual menu and choose "Copy [x number of] files". If it doesn't say "Copy [x number of] files" but just "Copy [filename]", you've accidentally deselected the files; reselect all, and try right-clicking again. Open SimpleText. Make sure that it is set to use Plain Text and NOT Rich Text (Menu Bar: Format > Make Plain Text/Rich Text). If it is set to Rich Text, this technique will not work: you will get a document containing the actual files rather than a list of their names! Paste. This should paste a list of the filenames of all selected files, one per line, in the order in which they appear in the Finder. Hurrah! Hope this works for you. It changed my life.
0
1
0
0
2016-01-01T09:03:00.000
2
0
false
34,554,852
1
0
0
1
I'd like to tag some files on my Mac with Finder under a certain color tag, and then be able to get that list of files using a Python or bash script. Is it possible to get the list of tags associated with a file via the command line or a Python script?
django ubuntu error pillow 3.0.0 install error
34,580,302
0
0
242
0
python,django,python-imaging-library,pillow
Thanks to all, at last I solved my problem. I installed PIL on the system: sudo apt-get install python3-pil. Next I copied it into my virtualenv: cp -R /usr/lib/python3/dist-packages/PIL /home/netai/lab/django/rangoenv/lib/python3.4/site-packages/
0
1
0
0
2016-01-02T16:56:00.000
1
1.2
true
34,568,325
1
0
0
1
I failed trying to install Pillow 3.0.0 on my Ubuntu 14.04 and python 3.4.3 from virtualenv. everytime I get error: pip install pillow Building using 2 processes i686-linux-gnu-gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-Bsymbolic-functions -Wl,-z,relro -Qunused-arguments -Qunused-arguments build/temp.linux-i686-3.4/_imaging.o build/temp.linux-i686-3.4/decode.o build/temp.linux-i686-3.4/encode.o build/temp.linux-i686-3.4/map.o build/temp.linux-i686-3.4/display.o build/temp.linux-i686-3.4/outline.o build/temp.linux-i686-3.4/path.o build/temp.linux-i686-3.4/libImaging/Access.o build/temp.linux-i686-3.4/libImaging/AlphaComposite.o build/temp.linux-i686-3.4/libImaging/Resample.o build/temp.linux-i686-3.4/libImaging/Bands.o build/temp.linux-i686-3.4/libImaging/BitDecode.o build/temp.linux-i686-3.4/libImaging/Blend.o build/temp.linux-i686-3.4/libImaging/Chops.o build/temp.linux-i686-3.4/libImaging/Convert.o build/temp.linux-i686-3.4/libImaging/ConvertYCbCr.o build/temp.linux-i686-3.4/libImaging/Copy.o build/temp.linux-i686-3.4/libImaging/Crc32.o build/temp.linux-i686-3.4/libImaging/Crop.o build/temp.linux-i686-3.4/libImaging/Dib.o build/temp.linux-i686-3.4/libImaging/Draw.o build/temp.linux-i686-3.4/libImaging/Effects.o build/temp.linux-i686-3.4/libImaging/EpsEncode.o build/temp.linux-i686-3.4/libImaging/File.o build/temp.linux-i686-3.4/libImaging/Fill.o build/temp.linux-i686-3.4/libImaging/Filter.o build/temp.linux-i686-3.4/libImaging/FliDecode.o build/temp.linux-i686-3.4/libImaging/Geometry.o build/temp.linux-i686-3.4/libImaging/GetBBox.o build/temp.linux-i686-3.4/libImaging/GifDecode.o build/temp.linux-i686-3.4/libImaging/GifEncode.o build/temp.linux-i686-3.4/libImaging/HexDecode.o build/temp.linux-i686-3.4/libImaging/Histo.o build/temp.linux-i686-3.4/libImaging/JpegDecode.o build/temp.linux-i686-3.4/libImaging/JpegEncode.o build/temp.linux-i686-3.4/libImaging/LzwDecode.o build/temp.linux-i686-3.4/libImaging/Matrix.o build/temp.linux-i686-3.4/libImaging/ModeFilter.o build/temp.linux-i686-3.4/libImaging/MspDecode.o build/temp.linux-i686-3.4/libImaging/Negative.o build/temp.linux-i686-3.4/libImaging/Offset.o build/temp.linux-i686-3.4/libImaging/Pack.o build/temp.linux-i686-3.4/libImaging/PackDecode.o build/temp.linux-i686-3.4/libImaging/Palette.o build/temp.linux-i686-3.4/libImaging/Paste.o build/temp.linux-i686-3.4/libImaging/Quant.o build/temp.linux-i686-3.4/libImaging/QuantOctree.o build/temp.linux-i686-3.4/libImaging/QuantHash.o build/temp.linux-i686-3.4/libImaging/QuantHeap.o build/temp.linux-i686-3.4/libImaging/PcdDecode.o build/temp.linux-i686-3.4/libImaging/PcxDecode.o build/temp.linux-i686-3.4/libImaging/PcxEncode.o build/temp.linux-i686-3.4/libImaging/Point.o build/temp.linux-i686-3.4/libImaging/RankFilter.o build/temp.linux-i686-3.4/libImaging/RawDecode.o build/temp.linux-i686-3.4/libImaging/RawEncode.o build/temp.linux-i686-3.4/libImaging/Storage.o build/temp.linux-i686-3.4/libImaging/SunRleDecode.o build/temp.linux-i686-3.4/libImaging/TgaRleDecode.o build/temp.linux-i686-3.4/libImaging/Unpack.o build/temp.linux-i686-3.4/libImaging/UnpackYCC.o build/temp.linux-i686-3.4/libImaging/UnsharpMask.o build/temp.linux-i686-3.4/libImaging/XbmDecode.o build/temp.linux-i686-3.4/libImaging/XbmEncode.o build/temp.linux-i686-3.4/libImaging/ZipDecode.o build/temp.linux-i686-3.4/libImaging/ZipEncode.o build/temp.linux-i686-3.4/libImaging/TiffDecode.o build/temp.linux-i686-3.4/libImaging/Incremental.o 
build/temp.linux-i686-3.4/libImaging/Jpeg2KDecode.o build/temp.linux-i686-3.4/libImaging/Jpeg2KEncode.o build/temp.linux-i686-3.4/libImaging/BoxBlur.o -L/home/netai/lab/django/rangoenv/lib -L/usr/local/lib -ljpeg -lz -o build/lib.linux-i686-3.4/PIL/_imaging.cpython-34m.so i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/_imaging.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/decode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/encode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/map.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/display.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/outline.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/path.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Access.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/AlphaComposite.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Resample.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Bands.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/BitDecode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Blend.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Chops.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Convert.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/ConvertYCbCr.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Copy.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Crc32.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Crop.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Dib.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Draw.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Effects.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/EpsEncode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/File.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Fill.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Filter.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/FliDecode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Geometry.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/GetBBox.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/GifDecode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/GifEncode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/HexDecode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Histo.o: No such file or directory i686-linux-gnu-gcc: error: 
build/temp.linux-i686-3.4/libImaging/JpegDecode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/JpegEncode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/LzwDecode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Matrix.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/ModeFilter.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/MspDecode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Negative.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Offset.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Pack.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/PackDecode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Palette.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Paste.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Quant.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/QuantOctree.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/QuantHash.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/QuantHeap.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/PcdDecode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/PcxDecode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/PcxEncode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Point.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/RankFilter.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/RawDecode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/RawEncode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Storage.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/SunRleDecode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/TgaRleDecode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Unpack.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/UnpackYCC.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/UnsharpMask.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/XbmDecode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/XbmEncode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/ZipDecode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/ZipEncode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/TiffDecode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Incremental.o: No such file or 
directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Jpeg2KDecode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Jpeg2KEncode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/BoxBlur.o: No such file or directory i686-linux-gnu-gcc: error: unrecognized command line option ‘-Qunused-arguments’ i686-linux-gnu-gcc: error: unrecognized command line option ‘-Qunused-arguments’ error: command 'i686-linux-gnu-gcc' failed with exit status 1 ---------------------------------------- Failed building wheel for pillow Failed to build pillow Installing collected packages: pillow Running setup.py install for pillow Complete output from command /home/netai/lab/django/rangoenv/bin/python3 -c "import setuptools, tokenize;__file__='/tmp/pip-build-ah0pvkjy/pillow/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-c_53bm8a-record/install-record.txt --single-version-externally-managed --compile --install-headers /home/netai/lab/django/rangoenv/include/site/python3.4/pillow: running install running build running build_py running egg_info writing dependency_links to Pillow.egg-info/dependency_links.txt writing Pillow.egg-info/PKG-INFO writing top-level names to Pillow.egg-info/top_level.txt warning: manifest_maker: standard file '-c' not found reading manifest file 'Pillow.egg-info/SOURCES.txt' reading manifest template 'MANIFEST.in' writing manifest file 'Pillow.egg-info/SOURCES.txt' running build_ext building 'PIL._imaging' extension i686-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -Qunused-arguments -Qunused-arguments -fPIC -DHAVE_LIBJPEG -DHAVE_LIBZ -I/tmp/pip-build-ah0pvkjy/pillow/libImaging -I/usr/local/include -I/usr/include -I/usr/include/python3.4m -I/home/netai/lab/django/rangoenv/include/python3.4m -I/usr/include/i386-linux-gnu -c _imaging.c -o build/temp.linux-i686-3.4/_imaging.o i686-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -Qunused-arguments -Qunused-arguments -fPIC -DHAVE_LIBJPEG -DHAVE_LIBZ -I/tmp/pip-build-ah0pvkjy/pillow/libImaging -I/usr/local/include -I/usr/include -I/usr/include/python3.4m -I/home/netai/lab/django/rangoenv/include/python3.4m -I/usr/include/i386-linux-gnu -c libImaging/Resample.c -o build/temp.linux-i686-3.4/libImaging/Resample.o i686-linux-gnu-gcc: error: unrecognized command line option ‘-Qunused-arguments’ i686-linux-gnu-gcc: error: unrecognized command line option ‘-Qunused-arguments’ i686-linux-gnu-gcc: error: unrecognized command line option ‘-Qunused-arguments’ i686-linux-gnu-gcc: error: unrecognized command line option ‘-Qunused-arguments’ i686-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -Qunused-arguments -Qunused-arguments -fPIC -DHAVE_LIBJPEG -DHAVE_LIBZ -I/tmp/pip-build-ah0pvkjy/pillow/libImaging -I/usr/local/include -I/usr/include -I/usr/include/python3.4m -I/home/netai/lab/django/rangoenv/include/python3.4m -I/usr/include/i386-linux-gnu -c libImaging/Crop.c -o build/temp.linux-i686-3.4/libImaging/Crop.o i686-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -Qunused-arguments -Qunused-arguments -fPIC -DHAVE_LIBJPEG -DHAVE_LIBZ -I/tmp/pip-build-ah0pvkjy/pillow/libImaging -I/usr/local/include -I/usr/include -I/usr/include/python3.4m -I/home/netai/lab/django/rangoenv/include/python3.4m -I/usr/include/i386-linux-gnu -c libImaging/Geometry.c -o 
build/temp.linux-i686-3.4/libImaging/Geometry.o i686-linux-gnu-gcc: error: unrecognized command line option ‘-Qunused-arguments’ i686-linux-gnu-gcc: error: unrecognized command line option ‘-Qunused-arguments’ i686-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -Qunused-arguments -Qunused-arguments -fPIC -DHAVE_LIBJPEG -DHAVE_LIBZ -I/tmp/pip-build-ah0pvkjy/pillow/libImaging -I/usr/local/include -I/usr/include -I/usr/include/python3.4m -I/home/netai/lab/django/rangoenv/include/python3.4m -I/usr/include/i386-linux-gnu -c libImaging/Matrix.c -o build/temp.linux-i686-3.4/libImaging/Matrix.o i686-linux-gnu-gcc: error: unrecognized command line option ‘-Qunused-arguments’ i686-linux-gnu-gcc: error: unrecognized command line option ‘-Qunused-arguments’ i686-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -Qunused-arguments -Qunused-arguments -fPIC -DHAVE_LIBJPEG -DHAVE_LIBZ -I/tmp/pip-build-ah0pvkjy/pillow/libImaging -I/usr/local/include -I/usr/include -I/usr/include/python3.4m -I/home/netai/lab/django/rangoenv/include/python3.4m -I/usr/include/i386-linux-gnu -c libImaging/Quant.c -o build/temp.linux-i686-3.4/libImaging/Quant.o i686-linux-gnu-gcc: error: unrecognized command line option ‘-Qunused-arguments’ i686-linux-gnu-gcc: error: unrecognized command line option ‘-Qunused-arguments’ i686-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -Qunused-arguments -Qunused-arguments -fPIC -DHAVE_LIBJPEG -DHAVE_LIBZ -I/tmp/pip-build-ah0pvkjy/pillow/libImaging -I/usr/local/include -I/usr/include -I/usr/include/python3.4m -I/home/netai/lab/django/rangoenv/include/python3.4m -I/usr/include/i386-linux-gnu -c libImaging/RawDecode.c -o build/temp.linux-i686-3.4/libImaging/RawDecode.o i686-linux-gnu-gcc: error: unrecognized command line option ‘-Qunused-arguments’ i686-linux-gnu-gcc: error: unrecognized command line option ‘-Qunused-arguments’ i686-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -Qunused-arguments -Qunused-arguments -fPIC -DHAVE_LIBJPEG -DHAVE_LIBZ -I/tmp/pip-build-ah0pvkjy/pillow/libImaging -I/usr/local/include -I/usr/include -I/usr/include/python3.4m -I/home/netai/lab/django/rangoenv/include/python3.4m -I/usr/include/i386-linux-gnu -c libImaging/XbmEncode.c -o build/temp.linux-i686-3.4/libImaging/XbmEncode.o i686-linux-gnu-gcc: error: unrecognized command line option ‘-Qunused-arguments’ i686-linux-gnu-gcc: error: unrecognized command line option ‘-Qunused-arguments’ i686-linux-gnu-gcc: error: unrecognized command line option ‘-Qunused-arguments’ i686-linux-gnu-gcc: error: unrecognized command line option ‘-Qunused-arguments’ Building using 2 processes i686-linux-gnu-gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-Bsymbolic-functions -Wl,-z,relro -Qunused-arguments -Qunused-arguments build/temp.linux-i686-3.4/_imaging.o build/temp.linux-i686-3.4/decode.o build/temp.linux-i686-3.4/encode.o build/temp.linux-i686-3.4/map.o build/temp.linux-i686-3.4/display.o build/temp.linux-i686-3.4/outline.o build/temp.linux-i686-3.4/path.o build/temp.linux-i686-3.4/libImaging/Access.o build/temp.linux-i686-3.4/libImaging/AlphaComposite.o build/temp.linux-i686-3.4/libImaging/Resample.o build/temp.linux-i686-3.4/libImaging/Bands.o build/temp.linux-i686-3.4/libImaging/BitDecode.o build/temp.linux-i686-3.4/libImaging/Blend.o build/temp.linux-i686-3.4/libImaging/Chops.o build/temp.linux-i686-3.4/libImaging/Convert.o build/temp.linux-i686-3.4/libImaging/ConvertYCbCr.o 
build/temp.linux-i686-3.4/libImaging/Copy.o build/temp.linux-i686-3.4/libImaging/Crc32.o build/temp.linux-i686-3.4/libImaging/Crop.o build/temp.linux-i686-3.4/libImaging/Dib.o build/temp.linux-i686-3.4/libImaging/Draw.o build/temp.linux-i686-3.4/libImaging/Effects.o build/temp.linux-i686-3.4/libImaging/EpsEncode.o build/temp.linux-i686-3.4/libImaging/File.o build/temp.linux-i686-3.4/libImaging/Fill.o build/temp.linux-i686-3.4/libImaging/Filter.o build/temp.linux-i686-3.4/libImaging/FliDecode.o build/temp.linux-i686-3.4/libImaging/Geometry.o build/temp.linux-i686-3.4/libImaging/GetBBox.o build/temp.linux-i686-3.4/libImaging/GifDecode.o build/temp.linux-i686-3.4/libImaging/GifEncode.o build/temp.linux-i686-3.4/libImaging/HexDecode.o build/temp.linux-i686-3.4/libImaging/Histo.o build/temp.linux-i686-3.4/libImaging/JpegDecode.o build/temp.linux-i686-3.4/libImaging/JpegEncode.o build/temp.linux-i686-3.4/libImaging/LzwDecode.o build/temp.linux-i686-3.4/libImaging/Matrix.o build/temp.linux-i686-3.4/libImaging/ModeFilter.o build/temp.linux-i686-3.4/libImaging/MspDecode.o build/temp.linux-i686-3.4/libImaging/Negative.o build/temp.linux-i686-3.4/libImaging/Offset.o build/temp.linux-i686-3.4/libImaging/Pack.o build/temp.linux-i686-3.4/libImaging/PackDecode.o build/temp.linux-i686-3.4/libImaging/Palette.o build/temp.linux-i686-3.4/libImaging/Paste.o build/temp.linux-i686-3.4/libImaging/Quant.o build/temp.linux-i686-3.4/libImaging/QuantOctree.o build/temp.linux-i686-3.4/libImaging/QuantHash.o build/temp.linux-i686-3.4/libImaging/QuantHeap.o build/temp.linux-i686-3.4/libImaging/PcdDecode.o build/temp.linux-i686-3.4/libImaging/PcxDecode.o build/temp.linux-i686-3.4/libImaging/PcxEncode.o build/temp.linux-i686-3.4/libImaging/Point.o build/temp.linux-i686-3.4/libImaging/RankFilter.o build/temp.linux-i686-3.4/libImaging/RawDecode.o build/temp.linux-i686-3.4/libImaging/RawEncode.o build/temp.linux-i686-3.4/libImaging/Storage.o build/temp.linux-i686-3.4/libImaging/SunRleDecode.o build/temp.linux-i686-3.4/libImaging/TgaRleDecode.o build/temp.linux-i686-3.4/libImaging/Unpack.o build/temp.linux-i686-3.4/libImaging/UnpackYCC.o build/temp.linux-i686-3.4/libImaging/UnsharpMask.o build/temp.linux-i686-3.4/libImaging/XbmDecode.o build/temp.linux-i686-3.4/libImaging/XbmEncode.o build/temp.linux-i686-3.4/libImaging/ZipDecode.o build/temp.linux-i686-3.4/libImaging/ZipEncode.o build/temp.linux-i686-3.4/libImaging/TiffDecode.o build/temp.linux-i686-3.4/libImaging/Incremental.o build/temp.linux-i686-3.4/libImaging/Jpeg2KDecode.o build/temp.linux-i686-3.4/libImaging/Jpeg2KEncode.o build/temp.linux-i686-3.4/libImaging/BoxBlur.o -L/home/netai/lab/django/rangoenv/lib -L/usr/local/lib -ljpeg -lz -o build/lib.linux-i686-3.4/PIL/_imaging.cpython-34m.so i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/_imaging.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/decode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/encode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/map.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/display.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/outline.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/path.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Access.o: No such file or directory i686-linux-gnu-gcc: error: 
build/temp.linux-i686-3.4/libImaging/AlphaComposite.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Resample.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Bands.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/BitDecode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Blend.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Chops.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Convert.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/ConvertYCbCr.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Copy.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Crc32.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Crop.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Dib.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Draw.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Effects.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/EpsEncode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/File.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Fill.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Filter.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/FliDecode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Geometry.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/GetBBox.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/GifDecode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/GifEncode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/HexDecode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Histo.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/JpegDecode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/JpegEncode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/LzwDecode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Matrix.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/ModeFilter.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/MspDecode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Negative.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Offset.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Pack.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/PackDecode.o: No such file or directory i686-linux-gnu-gcc: error: 
build/temp.linux-i686-3.4/libImaging/Palette.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Paste.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Quant.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/QuantOctree.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/QuantHash.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/QuantHeap.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/PcdDecode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/PcxDecode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/PcxEncode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Point.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/RankFilter.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/RawDecode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/RawEncode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Storage.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/SunRleDecode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/TgaRleDecode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Unpack.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/UnpackYCC.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/UnsharpMask.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/XbmDecode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/XbmEncode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/ZipDecode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/ZipEncode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/TiffDecode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Incremental.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Jpeg2KDecode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/Jpeg2KEncode.o: No such file or directory i686-linux-gnu-gcc: error: build/temp.linux-i686-3.4/libImaging/BoxBlur.o: No such file or directory i686-linux-gnu-gcc: error: unrecognized command line option ‘-Qunused-arguments’ i686-linux-gnu-gcc: error: unrecognized command line option ‘-Qunused-arguments’ error: command 'i686-linux-gnu-gcc' failed with exit status 1 ---------------------------------------- Command "/home/netai/lab/django/rangoenv/bin/python3 -c "import setuptools, tokenize;__file__='/tmp/pip-build-ah0pvkjy/pillow/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-c_53bm8a-record/install-record.txt --single-version-externally-managed --compile --install-headers /home/netai/lab/django/rangoenv/include/site/python3.4/pillow" 
failed with error code 1 in /tmp/pip-build-ah0pvkjy/pillow
Create SVG and save it to datastore(GAE + Python)
34,583,572
4
3
260
0
python-2.7,google-app-engine,google-cloud-datastore
Create your "file" in memory (use e.g io.BytesIO) and then use the getvalue method of the in-memory "file" to get the blob of bytes for the datastore. Do note that a datastore entity is limited to a megabyte or so, thus it's quite possible that some SVG file might not fit in that space -- in which case, you should look into Google Cloud Storage. But, that's a different issue.
0
1
0
0
2016-01-04T00:51:00.000
1
1.2
true
34,583,385
0
0
1
1
I have a doubt: I need to create some SVG files (in a sequence) and upload them to the datastore. I know how to create the SVG, but it saves to the filesystem, and I understand that GAE cannot use that. So I don't know how to create the file and put it in the datastore.
Sublime Text Python builds and opening a terminal takes very long time
34,594,257
1
1
380
0
python,macos,terminal,sublimetext
Run python -vvv to dump out the imports Python does when it starts up. If the slowdown is caused by a third-party library, this should give a hint. Also check your ~/.bashrc script for duplicate entries (see comments below).
0
1
0
1
2016-01-04T15:13:00.000
1
1.2
true
34,594,184
1
0
0
1
I have been learning a lot of python recently using sublime text on a mac, I installed python 3 and have mainly been using that but as a lot of documentation is in python 2.7 and it comes with the Mac I decided to start using 2.7 instead. I have quite a few libraries installed (for python 3 and for 2.7) When I load my terminal it takes a good 15 seconds for it to get to the prompt and it takes the same amount of time to build python 2.7 from sublime text before it starts executing the code. I know this post is probably too vague but if anyone has had a similar experience or could suggest anything to point me in the right direction I would really appreciate it. Thanks.
How to create virtual machine using WMIC on hyper-v with python script or any command?
35,030,932
0
0
605
0
python,linux,centos6,hyper-v,wmic
There are many scripts in C# that do the same work. If you don't want to write PowerShell scripts, the most practical method is to run a PowerShell command for each operation and invoke that command from your Python script; I did the same in C# with the Process class.
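A rough sketch of what that could look like in Python (this assumes the script runs somewhere PowerShell and the Hyper-V module are available, e.g. on the Hyper-V host or through remoting; the VM name and memory size are made-up example values):

import subprocess

def create_vm(name="TestVM", memory_bytes=1073741824):
    cmd = ["powershell", "-Command",
           "New-VM -Name {0} -MemoryStartupBytes {1}".format(name, memory_bytes)]
    return subprocess.check_output(cmd)  # raises CalledProcessError if the cmdlet fails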
0
1
0
1
2016-01-07T07:13:00.000
1
0
false
34,649,314
1
0
0
1
I am new to Hyper-V and WMI, and using WMIC I need to create a VM (virtual machine). Can anybody point me to sample code or a script to refer to? The preferred script language is Python and I am using CentOS 6 to run wmic. Is there any way to create a VM via wmic commands? I have gone through many scripts and code snippets, but they were all in PowerShell and I don't want to use PowerShell.
Logging from the children of a supervisord subprocess
34,670,535
1
0
353
0
python,subprocess,supervisord
You don't need to do anything. B inherits A's standard streams by default. If A's stdout is redirected to a file then B's stdout automatically writes to the same place.
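A small illustration (the script name is hypothetical): in A, launch B without redirecting its streams, and B's print/logging output lands in the same supervisord log file as A's.

import subprocess

proc = subprocess.Popen(["python", "b_script.py"])  # no stdout=/stderr= arguments,
                                                    # so B inherits A's stdout/stderr
proc.wait()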
0
1
0
0
2016-01-07T09:46:00.000
2
1.2
true
34,651,927
1
0
0
1
My system is composed of a python application that is launched from supervisord. Let's call it A. A launches a subprocess B to do some of its task. Both A and B are coded in Python and use the standard logging module to output messages to the console. The stdout and stderr of A are logged to a file specified in the supervisord configuration. This works like a charm. Now, I'd like to tunnel the stdout and stderr from B into the same file as in A. How can this be achieved?
"Cannot update File menu Recent Files list [Errno 13] Permission denied: recent-files.lst" when I open IDLE (Python 3.4 GUI)
34,662,742
5
1
5,487
0
python,python-idle
If the recent-files.lst file is hidden, Python will fail to access it properly. You most likely tried to hide the .idlerc folder and applied the same settings to any subfolders. You can still keep that folder hidden, because it's ugly, but make sure not to set the recent-files.lst file to hidden, too.
0
1
0
0
2016-01-07T18:19:00.000
1
0.761594
false
34,662,353
0
0
0
1
I have no idea why this happens, and then no typing cursor appears even when I click, so I can't edit anything. I'm running Windows 10 and Python 3.4.4. Anyone know why this is happening? Cannot update File menu Recent Files list. Your operating system says: [Errno 13] Permission denied: 'C:\Users\Aaron\.idlerc\recent-files.lst' Solved: my .idlerc folder was hidden; after making it visible everything worked fine.
Switch between python 2.7 and python 3.5 on Mac OS X
49,465,309
1
57
157,702
0
python,macos,python-2.7,python-3.x,terminal
I just followed up on the answer from @John Wilkey. My alias python used to point to python2.7 (located in /usr/bin). However, the default python path now has /usr/local/bin (python3) ahead of it; hence, when typing python, I didn't get either Python version. I tried making a link in /usr/local/bin for python2: ln -s /usr/bin/python /usr/local/bin/ Now calling python gives me python2.
0
1
0
0
2016-01-08T15:13:00.000
7
0.028564
false
34,680,228
1
0
0
4
I generally use Python 2.7 but recently installed Python 3.5 using Miniconda on Mac OS X. Different libraries have been installed for these two versions of python. Now, the entering either of the keywords 'python' or 'python3' in terminal invokes python 3.5, and 'python2' returns '-bash: python2: command not found'. How can I now invoke them specifically using aliases 'python2' and 'python3' respectively? I am currently using OS X El Capitan.
Switch between python 2.7 and python 3.5 on Mac OS X
51,038,951
5
57
157,702
0
python,macos,python-2.7,python-3.x,terminal
I already had python3 installed (via miniconda3) and needed to install python2 alongside it. In that case brew install python won't install python2, so you need brew install python@2. Now the alias python2 refers to python2.x from /usr/bin/python, the alias python3 refers to python3.x from /Users/ishandutta2007/miniconda3/bin/python, and python refers to python3 by default. To use python as an alias for python2 instead, I added the following to my .bashrc file: alias python='/usr/bin/python'. To go back to python3 as the default, just remove this line when required.
0
1
0
0
2016-01-08T15:13:00.000
7
0.141893
false
34,680,228
1
0
0
4
I generally use Python 2.7 but recently installed Python 3.5 using Miniconda on Mac OS X. Different libraries have been installed for these two versions of python. Now, the entering either of the keywords 'python' or 'python3' in terminal invokes python 3.5, and 'python2' returns '-bash: python2: command not found'. How can I now invoke them specifically using aliases 'python2' and 'python3' respectively? I am currently using OS X El Capitan.
Switch between python 2.7 and python 3.5 on Mac OS X
52,135,099
2
57
157,702
0
python,macos,python-2.7,python-3.x,terminal
Similar to John Wilkey's answer, I would locate python2 by finding which python (something like /usr/bin/python) and then creating an alias in .bash_profile: alias python2="/usr/bin/python" I can now run python3 by calling python and python2 by calling python2.
0
1
0
0
2016-01-08T15:13:00.000
7
0.057081
false
34,680,228
1
0
0
4
I generally use Python 2.7 but recently installed Python 3.5 using Miniconda on Mac OS X. Different libraries have been installed for these two versions of python. Now, the entering either of the keywords 'python' or 'python3' in terminal invokes python 3.5, and 'python2' returns '-bash: python2: command not found'. How can I now invoke them specifically using aliases 'python2' and 'python3' respectively? I am currently using OS X El Capitan.
Switch between python 2.7 and python 3.5 on Mac OS X
34,686,323
14
57
157,702
0
python,macos,python-2.7,python-3.x,terminal
OSX's Python binary (version 2) is located at /usr/bin/python; if you use which python it will tell you where the python command is being resolved to. Typically, what happens is that third parties redefine things in /usr/local/bin (which, by default, takes precedence over /usr/bin). To fix this, you can either run /usr/bin/python directly to use 2.x, or find the errant redefinition (probably in /usr/local/bin or somewhere else in your PATH).
0
1
0
0
2016-01-08T15:13:00.000
7
1
false
34,680,228
1
0
0
4
I generally use Python 2.7 but recently installed Python 3.5 using Miniconda on Mac OS X. Different libraries have been installed for these two versions of python. Now, the entering either of the keywords 'python' or 'python3' in terminal invokes python 3.5, and 'python2' returns '-bash: python2: command not found'. How can I now invoke them specifically using aliases 'python2' and 'python3' respectively? I am currently using OS X El Capitan.
Distributed C/C++ Engine with Python
34,722,890
2
0
347
0
python,tesseract,py2exe,pyinstaller
Reading the pytesseract docs, I have found the following section: Install google tesseract-ocr from http://code.google.com/p/tesseract-ocr/. You must be able to invoke the tesseract command as "tesseract". If this isn't the case, for example because tesseract isn't in your PATH, you will have to change the "tesseract_cmd" variable at the top of 'tesseract.py'. This means you need to have tesseract installed on your target machine regardless of whether your script is turned into an exe or not. Tesseract is a requirement for your script to work. You will need to ask your users to install tesseract, or use an "install wizard" tool which checks whether tesseract is installed and, if not, installs it for them. But this is not the task of PyInstaller. PyInstaller only turns your Python script into an executable.
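If you do choose to ship a tesseract binary alongside your exe (for example via PyInstaller's --add-binary option), a hedged sketch of pointing pytesseract at it looks like this; the binary name is just an example, and sys._MEIPASS only exists inside a PyInstaller one-file bundle, hence the getattr fallback:

import os
import sys
import pytesseract

base_dir = getattr(sys, "_MEIPASS", os.path.dirname(os.path.abspath(__file__)))
pytesseract.pytesseract.tesseract_cmd = os.path.join(base_dir, "tesseract.exe")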
0
1
0
1
2016-01-10T14:53:00.000
1
0.379949
false
34,706,795
0
0
0
1
I am using an C based OCR engine known as tesseract with Python interface library pytesseract to access its core features. Essentially, the library reads the local contents of the installed engine for use in a Python program. However, the library continues to look for the engine when distributed as an executable. How do I instead include the engine self-contained in the executable?
python executing in IDLE, but not in termnal
35,046,201
0
1
395
0
python,raspberry-pi,executable,sensors
Well, still a little puzzled why it happened, but anyway this solved the problem: As a workaround, I copied the contents of "thermostaatgui.py" over the contents of a working script ("mysimpletest.py"), saved it and it runs OK.
0
1
0
1
2016-01-10T23:01:00.000
3
1.2
true
34,711,799
0
0
0
2
I have a python script on a Raspberry Pi reading the temperature and humidity from a sensor. It works fine when started in IDLE, but when I try starting it in a terminal I get the message:sudo: unable to execute .thermostaatgui.py: No such file or directory. The first line in the script is: #! /usr/bin/python, the same as in other scripts that run without problems and the script is made executable with chmod +x. In the script Adafruit_DHT, datetime and time are imported, other scripts that work do the same.
python executing in IDLE, but not in termnal
34,711,852
1
1
395
0
python,raspberry-pi,executable,sensors
+1 on the above solution. To debug, try this: type "pwd" in your terminal. This will tell you where you are in the shell. Then type "ls -lah" and look for your script. If you cannot find it, you need to "cd" to the directory where the script exists and then execute the script.
0
1
0
1
2016-01-10T23:01:00.000
3
0.066568
false
34,711,799
0
0
0
2
I have a python script on a Raspberry Pi reading the temperature and humidity from a sensor. It works fine when started in IDLE, but when I try starting it in a terminal I get the message:sudo: unable to execute .thermostaatgui.py: No such file or directory. The first line in the script is: #! /usr/bin/python, the same as in other scripts that run without problems and the script is made executable with chmod +x. In the script Adafruit_DHT, datetime and time are imported, other scripts that work do the same.
setfs(u/g)id or set(u/g)id with eventlet(python green thread)
37,549,590
0
1
309
0
python,multithreading,eventlet,green-threads,setfsuid
The kernel is ignorant of green threads. If a process has a uid and gid, they are used by all green threads running as part of that process. At first glance, what you are seeking to do is equivalent to having a privileged process do a setuid prior to opening/creating a file, then doing a second setuid to open/create a second file, etc., all to ensure that each file has the right ownership. I never tried such a scheme, but it sounds very, very wrong. It is also extremely bad security-wise. You are running at high privilege and may find yourself processing user X's data while having user Y's uid. At second glance, green threads are cooperative, meaning that under the hood some of the operations you do will yield. Following such a yield, you may switch to a different green thread that will change the uid again... Bottom line: forget about changing the uid and gid per green thread - there is no such thing. Create the file with whatever ID you have and chown it to the right id after. Find a way to do that without running as root, for security reasons.
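A minimal sketch of that create-then-chown approach (the uid/gid are whatever you looked up for the requesting user, e.g. via pwd.getpwnam):

import os

def create_for_user(path, uid, gid, data):
    with open(path, "wb") as f:   # created with the process's own ids
        f.write(data)
    os.chown(path, uid, gid)      # then handed over to the requesting user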
0
1
0
0
2016-01-11T10:48:00.000
2
0
false
34,719,592
0
0
0
1
We have an existing project using Eventlet module. There is a server handling client request using green threads. All the requests are handled by a single user 'User A' I now need to change this to do a setfsuid/setfsgid on the threads so that the underlying files are all created with the ownership of the requesting user only. I understand that I need setid Linux capability to make the setfsid calls. But will setfsid calls work with green threads like they do with the native threads ? By reading through various texts over the net regarding 'green threads', I couldn't gather much :(
How to call into python script like a function from bash?
34,731,389
0
1
163
0
python,linux,bash,shell
Short answer: you can't. The return value of a *nix-style executable is an unsigned integer from 0-255. That usually indicates whether it failed or not, but you could co-opt it for your own uses. In this case, I don't think a single unsigned byte is enough. Thus, you need to output it some other way. You have a few options: The simplest (and probably best in this case) is to continue outputting your output data on stdout, and send your logs/debugging information somewhere else. That could be to a file, or (it's sort of what it's for) stderr. Output your data to a file (such as one given in a command line parameter). Arrange some kind of named pipe scheme. In practice, this is pretty much the same thing as sending it to a file.
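A sketch of the first option: the version script sends its logs to stderr and prints only the version on stdout, so the bash script can still capture VERSION=$(python calculate-version.py) cleanly (the version string here is a made-up example):

import logging
import sys

logging.basicConfig(stream=sys.stderr, level=logging.INFO)
logging.info("computing version...")   # visible in the build output, not captured
print("1.2.3")                          # the single value the shell captures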
0
1
0
0
2016-01-11T21:18:00.000
2
1.2
true
34,731,279
0
0
0
1
I have a build.sh script that my automated build server executes as part of a build. A big portion of logic of the build is calculating and building a version number. All of this logic is in a python script such as calculate-version.py. Typically what I would do in this case is setup the python script to ONLY print the version number, from which I would read stdout from the bash script, and assign that to an environment variable. However, the python script is becoming sufficiently complex that I'd like to start adding logs to it. I need to be able to output (stdout) logs from the Python script (via print()) while at the same time when it is done, propagate a "return value" from the python script back to the parent shell script. What is the best way of doing this? I thought of doing this through environment variables, but my understanding is those won't be available to the parent process.
Celery worker details
34,738,281
0
3
834
0
python,celery-task
How can I get which worker is executing which input? There are 2 options for using multiple workers: you run each worker separately with separate run commands, or you run in one command using the command line option -c, i.e. concurrency. With the first method, flower will support it and will show you all the workers, all the tasks (which you call inputs), which worker processed which task, and other information too. With the second method, flower will show you all the tasks being processed by a single worker. In this case you can only differentiate by viewing the logs generated by the celery worker, as the logs do store which worker thread executed which task. So I think you will be better off using the first option given your requirements. Each worker executed how many inputs and its status? As I mentioned, using the first approach, flower will give you this information. If any task fails, how can I get the failed input data separately and re-execute it with an available worker? Flower does provide filters for the failed tasks and shows what status a task returned when exiting. You can also set how many times celery should retry a failed task. But if the task still fails after the retries, you will have to relaunch it yourself.
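A hedged sketch of the retry part (it assumes a configured Celery app; the broker URL and do_work function are placeholders for your own setup):

from celery import Celery

app = Celery("tasks", broker="redis://localhost")  # example broker URL

@app.task(bind=True, max_retries=3)
def process(self, item):
    try:
        return do_work(item)                      # hypothetical work function
    except Exception as exc:
        raise self.retry(exc=exc, countdown=10)   # re-queue the failed input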
0
1
0
0
2016-01-12T06:54:00.000
2
0
false
34,737,287
0
0
0
1
I have celery task with 100 input data in queue and need to execute using 5 workers. How can I get which worker is executing which input? Each worker executed how many inputs and its status? If any task is failed how can get failed input data in separately and re-execute with available worker? Is there any possible ways to customize celery based on worker specific. We can combine celery worker limitation and flower I am not using any framework.
Accidentally removed dist-packages folder, what to do now?
34,743,144
2
3
4,302
0
python,debian,uninstallation,reinstall
The directory you removed is controlled and maintained by pip. If you have a record of which packages you have installed with pip, you can force it to reinstall them again. If not, too late to learn to make backups; but this doesn't have to be a one-shot attempt -- reinstall the ones you know are missing, then live with the fact that you'll never know if you get an error because you forgot to reinstall a module, or because something is wrong with your code. By and by, you will discover a few more missing packages which you failed to remember the first time; just reinstall those as well as you discover them. As an aside, using virtualenv sounds like a superior solution for avoiding a situation where you need to muck with your system Python installation.
0
1
0
1
2016-01-12T10:10:00.000
3
0.132549
false
34,740,756
0
0
0
1
I did something very stupid. I was copying some self written packages to the python dist-packages folder, then decided to remove one of them again by just rewriting the cp command to rm. Now the dist-packages folder is gone. What do I do now? Can I download the normal contents of this folder from somewhere, or do I need to reinstall python completely. If so - is there something I need to be careful about? The folder I removed is /usr/local/lib/python2.7 so not the one maintained by dpkg and friends.
Writing logs in kubernetes
34,750,390
1
1
1,269
0
python,kubernetes,google-cloud-logging
If you're running at least version 1.1.0 of Kubernetes (you most likely are), then if the logs you write are JSON formatted, they'll show up as structured logs in the Cloud Logging console. Certain JSON keys are interpreted specially when imported into Cloud Logging; for example, 'severity' will be used to set the log level in the console, and 'timestamp' can be used to set the time.
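A minimal sketch of emitting one JSON object per log line with a 'severity' key (the field names follow the description above, not an exhaustive spec):

import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    def format(self, record):
        return json.dumps({
            "severity": record.levelname,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logging.getLogger().addHandler(handler)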
0
1
0
0
2016-01-12T12:11:00.000
1
0.197375
false
34,743,371
0
0
0
1
I have a python service running in kubernetes container and writing logs to stdout. I can see the logs in Cloud Logging Console, but they are not structured, meanining: 1. I can't filter log levels 2. Log record with multiple lines interpreted as multiple log records 3. Dates are not parse etc. How can I address this problem? Can I configure flunetd deamon somehow? Or should I write in a specific format? Thanks
Python on Windows, installing 3dr solo command line, PermissionError: [Errno 13]
34,758,499
0
0
364
0
python,permission-denied,3dr
This is going to sound simple but are you running an elevated command line?
0
1
0
0
2016-01-13T04:31:00.000
2
0
false
34,758,458
0
0
0
2
I am trying to install 3DR solo command line on Windows 10. Below is the exception that i get. i have been doing a lot of reading and googling. I couldnt figure out the permission denied problem. I have this part shutil.copyfile(srcfile, destfile), but i still get denied. Exception: Traceback (most recent call last): File "c:\python35\lib\site-packages\pip\basecommand.py", line 211, in main status = self.run(options, args) File "c:\python35\lib\site-packages\pip\commands\install.py", line 311, in run root=options.root_path, File "c:\python35\lib\site-packages\pip\req\req_set.py", line 646, in install **kwargs File "c:\python35\lib\site-packages\pip\req\req_install.py", line 803, in install self.move_wheel_files(self.source_dir, root=root) File "c:\python35\lib\site-packages\pip\req\req_install.py", line 998, in move_wheel_files isolated=self.isolated, File "c:\python35\lib\site-packages\pip\wheel.py", line 339, in move_wheel_files clobber(source, lib_dir, True) File "c:\python35\lib\site-packages\pip\wheel.py", line 317, in clobber shutil.copyfile(srcfile, destfile) File "c:\python35\lib\shutil.py", line 115, in copyfile with open(dst, 'wb') as fdst: PermissionError: [Errno 13] Permission denied: 'c:\python35\Lib\site-packages\_cffi_backend.cp35-win32.pyd'
Python on Windows, installing 3dr solo command line, PermissionError: [Errno 13]
35,596,119
0
0
364
0
python,permission-denied,3dr
If you are upgrading the cffi package, i.e. you already had it installed and are doing pip install of some package that is trying to upgrade cffi to its latest version, all you have to do is simply delete c:\python35\Lib\site-packages\_cffi_backend.cp35-win32.pyd and then try again.
0
1
0
0
2016-01-13T04:31:00.000
2
0
false
34,758,458
0
0
0
2
I am trying to install 3DR solo command line on Windows 10. Below is the exception that i get. i have been doing a lot of reading and googling. I couldnt figure out the permission denied problem. I have this part shutil.copyfile(srcfile, destfile), but i still get denied. Exception: Traceback (most recent call last): File "c:\python35\lib\site-packages\pip\basecommand.py", line 211, in main status = self.run(options, args) File "c:\python35\lib\site-packages\pip\commands\install.py", line 311, in run root=options.root_path, File "c:\python35\lib\site-packages\pip\req\req_set.py", line 646, in install **kwargs File "c:\python35\lib\site-packages\pip\req\req_install.py", line 803, in install self.move_wheel_files(self.source_dir, root=root) File "c:\python35\lib\site-packages\pip\req\req_install.py", line 998, in move_wheel_files isolated=self.isolated, File "c:\python35\lib\site-packages\pip\wheel.py", line 339, in move_wheel_files clobber(source, lib_dir, True) File "c:\python35\lib\site-packages\pip\wheel.py", line 317, in clobber shutil.copyfile(srcfile, destfile) File "c:\python35\lib\shutil.py", line 115, in copyfile with open(dst, 'wb') as fdst: PermissionError: [Errno 13] Permission denied: 'c:\python35\Lib\site-packages\_cffi_backend.cp35-win32.pyd'
How can I have Enterprise and Public version of Django application sharing some code?
34,781,374
0
0
98
0
python,django,git
Probably the best solution is to identify exactly which code is shared between the two projects and make that a reusable app. Then each installation can install that django app, and then has their own site specific code as well.
0
1
0
0
2016-01-14T02:48:00.000
3
0
false
34,780,851
0
0
1
2
I'm building a webapp using Django which needs to have two different versions: an Enterprise version and a standard public version. Up until now, I've been only developing the Enterprise version and am now looking for the best way to separate the two versions in the simplest way while avoiding duplication of code as much as possible. The main difference between the two versions will be that they need different URLs and different Views. I intend to differentiate based on subdomain using a multi-tenant architecture, where the www.example.com is the public version, and company1.example.com hits the enterprise version. I've come up with a couple potential solutions, but I'm not happy with any of them. Separate Git repositories and entirely separate projects, with all common code duplicated. This much duplication of code is bound to be error prone where things will get out of sync and is expected to be ridden with copy-paste mistakes. This is a last-resort solution. Separate Git repositories, with common code shared via Git Submodules (a single common 'base' repository containing base models and shared views). I've read horror stories about git submodules, though, so I'm wary of this solution. Single Git repository containing multiple 'project' folders (public/enterprise) each with their own base urls.py, settings.py, wsgi.py, etc...) and multiple manage.py files to choose which "Project" to run. I'm afraid that this solution would become an utter mess because it wouldn't be possible to have the public and enterprise versions use different versions of the common library if one needs an update before the other. Separate Git repositories, with all shared code developed as 'Re-usable apps' and installed into the python path. This would be a somewhat clean solution, but would be difficult to work with any time changes needed to be made to the common modules. Single project where all features are managed via conditional logic in the views. This would be most prone to bugs and confusion of all, and I'd prefer to avoid this solution. Does anyone have any experience with this type of solution or could anyone help me find the best solution to this problem?
How can I have Enterprise and Public version of Django application sharing some code?
34,781,480
1
0
98
0
python,django,git
What about "a single Git repository, with all shared code developed as 're-usable apps'"? That is, configure which apps are enabled with the INSTALLED_APPS setting. First you need to decide on your release process. If you intend to release both versions simultaneously, using one git repository makes sense. An overriding concern might be if you have different distribution requirements for the code, e.g. if you want the code in the public version to be publicly available and the enterprise version to be private; then you might have to use two git repositories.
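A rough sketch of that idea in settings.py (the app names and the EDITION setting are invented for illustration):

INSTALLED_APPS = [
    "django.contrib.contenttypes",
    "django.contrib.auth",
    "common_app",          # the shared, reusable app
]

EDITION = "enterprise"     # or "public", e.g. read from an environment variable
if EDITION == "enterprise":
    INSTALLED_APPS.append("enterprise_app")
else:
    INSTALLED_APPS.append("public_app")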
0
1
0
0
2016-01-14T02:48:00.000
3
0.066568
false
34,780,851
0
0
1
2
I'm building a webapp using Django which needs to have two different versions: an Enterprise version and a standard public version. Up until now, I've been only developing the Enterprise version and am now looking for the best way to separate the two versions in the simplest way while avoiding duplication of code as much as possible. The main difference between the two versions will be that they need different URLs and different Views. I intend to differentiate based on subdomain using a multi-tenant architecture, where the www.example.com is the public version, and company1.example.com hits the enterprise version. I've come up with a couple potential solutions, but I'm not happy with any of them. Separate Git repositories and entirely separate projects, with all common code duplicated. This much duplication of code is bound to be error prone where things will get out of sync and is expected to be ridden with copy-paste mistakes. This is a last-resort solution. Separate Git repositories, with common code shared via Git Submodules (a single common 'base' repository containing base models and shared views). I've read horror stories about git submodules, though, so I'm wary of this solution. Single Git repository containing multiple 'project' folders (public/enterprise) each with their own base urls.py, settings.py, wsgi.py, etc...) and multiple manage.py files to choose which "Project" to run. I'm afraid that this solution would become an utter mess because it wouldn't be possible to have the public and enterprise versions use different versions of the common library if one needs an update before the other. Separate Git repositories, with all shared code developed as 'Re-usable apps' and installed into the python path. This would be a somewhat clean solution, but would be difficult to work with any time changes needed to be made to the common modules. Single project where all features are managed via conditional logic in the views. This would be most prone to bugs and confusion of all, and I'd prefer to avoid this solution. Does anyone have any experience with this type of solution or could anyone help me find the best solution to this problem?
Smartplug socket programming using Mac Address and UDP port
34,790,670
0
0
362
0
java,python,macos,sockets,udp
You can access the device via a UDP socket, provided you have the IP address of the device as well as the UDP port number. Both Java and Python have socket APIs, so you can use either one. Just make sure you follow the network protocol defined by the device to be able to read from / write to the device properly.
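A bare-bones Python sketch (the address, port and payload are placeholders; the real payload format depends on the smartplug's own protocol):

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(5.0)
sock.sendto(b"\x00\x01", ("192.168.1.50", 9999))   # command bytes -> device
data, addr = sock.recvfrom(1024)                    # response from the device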
0
1
1
0
2016-01-14T07:49:00.000
1
0
false
34,784,149
0
0
0
1
I need to access a smartplug device using socket programming . I have the MAC address and UDP port number of the device . Other information like SSID,password , Apps Id, Dev Id, Cmd ID are also present . Could you please let me know if this can be achieved using Python or Java API . Is there a way in socket programming to access a device using MAC address and get the information sent from a specific UDP port . Thanks in advance for your help .
How to run a python script at a certain time in a tmux terminal?
34,796,268
1
3
2,519
0
python,terminal,cron,tmux
Okay, I see what you're saying. I've done some similar stuff in the past. To have cron run your script at 3pm and append to a log file, you can do it simply like this: 0 15 * * * command >> log # just logs stdout or 0 15 * * * command &>> log # logs both stdout and stderr If you want it in the terminal, I can think of two possibilities: Like you said, you could do a while-true loop that checks the time every n seconds and, when it's 3pm, does something. Alternately, you could set up an API endpoint that's always on and trigger it by some other program at 3pm; this could be triggered by the cron, for example. Personally I also like the convenience of having a tmux or screen session to log in to and see what's been happening, rather than just checking a log file. So I hope you figure out a workable solution for your use case!
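A sketch of the first possibility, which runs fine inside a tmux pane so the output stays visible (run_daily_job is a hypothetical function holding your 3pm work):

import datetime
import time

while True:
    now = datetime.datetime.now()
    if now.hour == 15 and now.minute == 0:
        run_daily_job()          # your 3pm work goes here
        time.sleep(60)           # skip past the rest of the 15:00 minute
    time.sleep(1)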
0
1
0
0
2016-01-14T17:20:00.000
4
0.049958
false
34,795,776
1
0
0
1
I have a python script, or should I say a python service which needs to run every day at 3pm. Basically, there is a while True : time.sleep(1) at the end of the file. I absolutely need this script to execute in a terminal window (because I need the logs). Bonus if the solution makes it possible to run in a tmux window. I tried cron jobs but can't figure out how to put this in a terminal.
App Engine social platform - Content interactions modeling strategy
34,808,818
1
0
19
0
python,google-app-engine,social-networking
I'm guessing you have two entities in your model: User and Content. Your queries seem to aggregate over multiple Content objects. What about keeping these aggregated values on the User object? This way, you don't need to run any queries, but rather only look up the data stored in the User object. At some point, though, you might consider not using the datastore but looking at SQL storage instead. It has a higher constant cost, but I'm guessing at some point (more content/users) it might be worth considering, both in terms of cost and performance.
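A loose sketch of that suggestion with NDB (the property names are invented for illustration; the poster's actual model may differ):

from google.appengine.ext import ndb

class User(ndb.Model):
    liked_content = ndb.KeyProperty(repeated=True)  # keys of Content this user liked

@ndb.transactional
def like(user_key, content_key):
    user = user_key.get()
    if content_key not in user.liked_content:
        user.liked_content.append(content_key)
        user.put()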
0
1
0
0
2016-01-15T10:01:00.000
1
0.197375
false
34,808,553
0
0
1
1
I have a Python server running on Google app engine and implements a social network. I am trying to find the best way (best=fast and cheap) to implement interactions on items. Just like any other social network I have the stream items ("Content") and users can "like" these items. As for queries, I want to be able to: Get the list of users who liked the content Get a total count of the likers. Get an intersection of the likers with any other users list. My Current implementation includes: 1. IntegerProperty on the content item which holds the total likers count 2. InteractionModel - a NdbModel with a key id qual to the content id (fast fetch) and a JsonPropery the holds the likers usernames Each time a user likes a content I need to update the counter and the list of users. This requires me to run and pay for 4 datastore operations (2 reads, 2 writes). On top of that, items with lots of likers results in an InteractionModel with a huge json that takes time to serialize and deserialize when reading/writing (Still faster then RepeatedProperty). None of the updated fields are indexed (built-in index) nor included in combined index (index.yaml) Looking for a more efficient and cost effective way to implement the same requirements.
Get libsass-python to use system libsass library instead of compiling it
39,832,334
0
1
322
0
python,c++,libsass
I did come up with a solution. I created my own packages to install gcc-4.8.2. It was a lot of work and I am not sure if it breaks a bunch of other dependencies down the line, but it worked for the server stack that I needed at the time. I had to create all of the following packages to get it to work. cpp-4.8.2-8.el6.x86_64.rpm gcc-4.8.2-8.el6.x86_64.rpm gcc-c++-4.8.2-8.el6.x86_64.rpm gcc-gfortran-4.8.2-8.el6.x86_64.rpm libgcc-4.8.2-8.el6.x86_64.rpm libgfortran-4.8.2-8.el6.x86_64.rpm libgomp-4.8.2-8.el6.x86_64.rpm libquadmath-4.8.2-8.el6.x86_64.rpm libquadmath-devel-4.8.2-8.el6.x86_64.rpm libstdc++-4.8.2-8.el6.x86_64.rpm libstdc++-devel-4.8.2-8.el6.x86_64.rpm So again, it was a lot of work, but it did work. A few months after figuring this out, I was able to just upgrade to CentOS 7.
0
1
0
1
2016-01-15T17:54:00.000
1
1.2
true
34,816,964
0
0
0
1
Not sure if this is possible but with libsass requiring gcc-c++ >= 4.7 and Centos 6 not having it, I was curious if libsass-python could use the system's libsass instead of compiling it if it exists. I have been able to build a libsass rpm for Centos 6 but python-libsass still tries to compile it itself. I know that I can use devtoolset-1.1 to install python-libsass (that is how I managed to build the libsass rpm) but I am trying to do all of this with puppet. So I thought if the system had libsass then python-libsass wouldn't have to install it. I considered adding an issue in the python-libsass git project but thought I should ask here first.
Best way to store emails from a landing page on google app engine?
34,824,272
1
0
97
0
python,google-app-engine,google-cloud-datastore,app-engine-ndb
Create an email entity, and use the email address as the entity's key. This will immediately prevent duplicates. Fetching all of the email addresses can be very efficient, as you only need to query by kind with a keys-only query, and you can use map_async to process the emails. In addition you could use these entities to store the progress of each email, and maybe provide an audit trail. To increase speed at emailing time, you could periodically build cached lists of the emails, either in the datastore or stored in blob storage.
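A short sketch of the suggested pattern (the model and property names are just examples):

from google.appengine.ext import ndb

class Subscriber(ndb.Model):
    created = ndb.DateTimeProperty(auto_now_add=True)

def add_subscriber(email):
    # Using the address as the key id makes duplicates collapse into one entity.
    return Subscriber.get_or_insert(email.strip().lower())

def all_emails():
    # Keys-only query: a cheap way to pull every stored address back out.
    return [k.id() for k in Subscriber.query().fetch(keys_only=True)]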
0
1
0
0
2016-01-15T22:31:00.000
1
0.197375
false
34,820,966
0
0
1
1
I have a landing page set up and have a html text box (with error checking for valid emails) put together with a submit button. I am currently using NDB to store different entities. What I'm looking for is the best way to store just the email that a person enters. So likely hundreds or thousands of emails will be entered, there shouldn't be duplicates, and eventually we will want to use all of those emails to send a large news update to everyone who entered in their emails. What is the best way to store this email data with these contraints: Fast duplicate checking Quick callback for sending emails en masse
Using Unison "-repeat watch" in FreeBSD (10.2) after installing from ports yields error
36,164,028
1
1
847
0
python,freebsd,ports,unison
I think the message is pretty clear: unison-fsmonitor can't be run on freebsd10 because it's not supported, so you can't use Unison with the -repeat option. Since it's just written in Python, though, I don't see why it shouldn't be supported. Maybe message the developer.
0
1
0
0
2016-01-16T08:56:00.000
1
0.197375
false
34,825,214
0
0
0
1
After installing unison from /usr/ports/net/unison with X11 disabled via make config, running the command unison -repeat watch /dir/mirror/1 /dir/mirror/2 Yields the message: Fatal error: No file monitoring helper program found From here I decided to try using pkg to install unison-nox11 and this yields the same error message. I've also tried copying the fsmonitor.py file from unison-2.48.3.tar.gz to /usr/bin/unison-fsmonitor and I got the following error: Fatal error: Unexpected response 'Usage: unison-fsmonitor [options] root [path] [path]...' from the filesystem watcher (expected VERSION) Running the command unison-fsmonitor version shows the message unsupported platform freebsd10 Anyone have any ideas on how to fix this?
import pandas using IDLE error
34,845,928
0
0
716
0
python,pandas,module
Maybe you are using different Python versions in IDLE and on the command line. If this is the case, you should install pandas for the Python version that you are using in IDLE.
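A quick check you can paste into both IDLE and the terminal to compare interpreters; if the paths differ, install pandas for the one IDLE reports:

import sys
print(sys.executable)   # path of the running interpreter
print(sys.version)      # its version string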
0
1
0
0
2016-01-18T00:51:00.000
1
0
false
34,845,704
1
0
0
1
This is a beginner question. I am using "import pandas as pd" in IDLE, but got the following error message: "ImportError: No module named 'pandas'". I don't know how to install pandas for IDLE. I ran the same code in the Mac command window and it worked. Not sure why it is not working in IDLE. Thanks for the help!