Q_Id (int64) | CreationDate (string) | Users Score (int64) | Other (int64) | Python Basics and Environment (int64) | System Administration and DevOps (int64) | Tags (string) | A_Id (int64) | AnswerCount (int64) | is_accepted (bool) | Web Development (int64) | GUI and Desktop Applications (int64) | Answer (string) | Available Count (int64) | Q_Score (int64) | Data Science and Machine Learning (int64) | Question (string) | Title (string) | Score (float64) | Database and SQL (int64) | Networking and APIs (int64) | ViewCount (int64) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
21,237,833 | 2014-01-20T15:28:00.000 | 3 | 0 | 1 | 0 | python,numpy | 21,238,066 | 2 | false | 0 | 0 | I'd agree that in general you want to do this from within the operating system - only because there's a reliability factor in having "possibly runaway code check itself for possibly runaway behavior"
If a hard and fast requirement is to do this WITHIN the script, then I think we'd need to know more about what you're actually doing. If you have a single large data structure that's consuming the majority of the memory, you can use sys.getsizeof to identify how large that structure is, and throw/catch an error if it gets larger than you want.
But without knowing at least a little more about the program structure, I think it'll be hard to help... | 2 | 8 | 0 | I have a couple of Python/Numpy programs that tend to cause the PC to freeze/run very slowly when they use too much memory. I can't even stop the scripts or move the cursor anymore when it uses too much memory (e.g. 3.8/4GB).
Therefore, I would like to quit the program automatically when it hits a critical limit of memory usage, e.g. 3GB.
I could not find a solution yet. Is there a Pythonic way to deal with this, since I run my scripts on both Windows and Linux machines? | Quit Python program when it hits memory limit | 0.291313 | 0 | 0 | 2,186 |
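A minimal sketch of the in-script check described in the answer above; the limit and function names are hypothetical, and note that sys.getsizeof only counts the container itself, not the objects it references:

```python
import sys

MEMORY_LIMIT = 3 * 1024 ** 3  # hypothetical 3 GB cap on the main structure

def check_size(structure):
    # sys.getsizeof reports the container's own footprint, not nested objects
    if sys.getsizeof(structure) > MEMORY_LIMIT:
        raise MemoryError("data structure exceeded the configured limit")

data = bytearray(10 ** 6)
check_size(data)  # raises MemoryError once the structure grows past the cap
```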
21,238,158 | 2014-01-20T15:43:00.000 | -1 | 0 | 1 | 0 | python,osx-mavericks,pycharm | 21,238,848 | 4 | false | 0 | 0 | Go "Settings -> Project Interpreter -> Python Interpreter" and add your interpreter (the green '+' at up right corner of "settings" windows).
Then, already will available the green Play button (together the 'bug' icon) to run your project from Pycharm. | 1 | 8 | 0 | I know it is not that cleaver question, however I am pretty new to PyCharm and Python.
So as the question tells, I would like to make a shortcut to run the code in the active tab without going to the window "Edit Configurations" and change the running script manually.
I use OS X 10.9.1 and PyCharm 3.0.2. | How to run the code in the active tab in PyCharm? | -0.049958 | 0 | 0 | 3,736 |
21,242,436 | 2014-01-20T19:24:00.000 | 2 | 1 | 0 | 1 | python,shared-libraries,raspberry-pi,ctype | 21,242,449 | 1 | false | 0 | 0 | You'll need to recompile them from source. x86 and ARM are completely different microprocessor architectures, and programs/libraries compiled for one will not work on the other. | 1 | 0 | 0 | I have .so files that work well on my 32-bit Ubuntu; would I need different versions of them to work on my Raspberry Pi? I am loading them using Python. If they won't work, what should I do? | Shared Library files .so x86 would work on ARM? | 0.379949 | 0 | 0 | 346 |
21,242,498 | 2014-01-20T19:28:00.000 | 0 | 0 | 1 | 0 | python,git,repository,pygit2 | 24,710,103 | 2 | false | 0 | 0 | You can do either ways.
I find the index.add() method straightforward.
You can fetch all the files to be added to or removed from the index using Repository.status(), which returns a dictionary. The dictionary contains the filename as the key and the status of the file as the value. Depending on the status values, deleted files will need to be removed from the index using index.remove(filename).
Write this index to in-memory tree using index.write_tree() which will return a tree-id to be used in Repository.commit().
However for changes to be saved to disk use index.write() too. | 1 | 1 | 0 | I'm a little confused about how to get started with PyGit2.
When adding files (plural) to a newly created repo, should I add them to
index.add('path/to/file')
or would I be better off creating a TreeBuilder and using
tb.insert( 'name',oid, GIT_FILEMODE_BLOB ) to add new content ?
In the second case, I am stumped as to how I create the tree object needed to commit to a newly created repo.
Anyone? | PyGit2 - TreeBuilder.insert('name',blobid,GIT_FILEMODE_BLOB) vs index.add( 'path/to/file' )? | 0 | 0 | 0 | 406 |
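A sketch of the index-based flow the answer above describes, assuming the pygit2 API of that era (Repository.status, index.add/remove/write/write_tree, create_commit); the path and signature are placeholders:

```python
import pygit2

repo = pygit2.Repository('/path/to/repo')
sig = pygit2.Signature('Alice', 'alice@example.com')

# stage additions/updates and deletions reported by Repository.status()
for path, flags in repo.status().items():
    if flags & pygit2.GIT_STATUS_WT_DELETED:
        repo.index.remove(path)   # file was deleted in the working tree
    else:
        repo.index.add(path)

repo.index.write()                 # persist the index to disk
tree_id = repo.index.write_tree()  # in-memory tree used by the commit
parents = [] if repo.head_is_unborn else [repo.head.target]
repo.create_commit('HEAD', sig, sig, 'commit message', tree_id, parents)
```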
21,245,341 | 2014-01-20T22:19:00.000 | 0 | 0 | 0 | 0 | python,api,soundcloud | 21,263,821 | 1 | false | 1 | 0 | You can't filter tracks by city. The city is actually stored with the user. So you would have to search for the tracks you want, then perform an additional step to check if the user for each of the tracks is from the city you want.
I wanted to do something similar, but too many users do not have their city saved in their profile so the results are very limited. | 1 | 0 | 0 | I had read about an app called citycounds.fm, which is no longer active, where they made city-based playlists. Unfortunately, I can't seem to find any way to search for tracks by city in the soundcloud api documentation.
Anyone know if this is possible? | Is there any way to search for tracks by city in the SoundCloud API? | 0 | 0 | 1 | 934 |
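A sketch of the two-step lookup described in the answer, assuming the Client/get interface of the old soundcloud Python package; the client id and city are placeholders:

```python
import soundcloud

client = soundcloud.Client(client_id='YOUR_CLIENT_ID')
tracks = client.get('/tracks', q='jazz', limit=50)

# extra step: resolve each track's user, keep tracks whose user lists the city
berlin_tracks = [t for t in tracks
                 if (client.get('/users/%d' % t.user_id).city or '') == 'Berlin']
```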
21,245,447 | 2014-01-20T22:26:00.000 | 1 | 0 | 1 | 0 | python,list,search,match | 21,245,629 | 4 | false | 0 | 0 | I like @inspectorG4dget 's answer, but would invert it:
instead of sorting the long list and searching through it (and having to keep it all in memory),
sort the short list (of numbers you are looking for) then iterate through the long list, seeing if each item matches any of the search terms.
This should be both faster and use less memory. You may wish to use Python's bisect module to do this. | 1 | 4 | 0 | Situation:
I want to do a match: check whether a number is in a list of numbers (a very large list, length over 1e5 or even 2e5), allowing an error of +/- 5.
Example:
match 95 in list [0, 15, 30, 50,60,80,93] -> true
match 95 in list [0,15,30,50,60,70,80,105,231,123123,12312314,...] -> false
PS: the list is not sorted (or I can sort it if that increases efficiency).
I tried to use a dictionary (some key, and a list of numbers), but it was too slow when I did the search in the list.
Are there any better ideas? (There are 3,000+ numbers I need to search.) | Python: search a list of numbers in a list of very large numbers, allowing + or - 5 error | 0.049958 | 0 | 0 | 1,157 |
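A sketch of the inverted approach from the answer above, using the bisect module: sort the ~3,000 search targets once, then test each number of the long list against its nearest target:

```python
import bisect

targets = sorted([95, 200, 1017])  # the numbers being searched for

def matches(x, targets):
    # find the first target >= x - 5; a hit exists iff it is also <= x + 5
    i = bisect.bisect_left(targets, x - 5)
    return i < len(targets) and targets[i] <= x + 5

print(matches(93, targets))   # True  (95 is within +/- 5 of 93)
print(matches(105, targets))  # False
```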
21,248,395 | 2014-01-21T03:10:00.000 | 3 | 0 | 0 | 0 | python,json,rest,flask | 21,251,197 | 2 | true | 1 | 0 | Here are some ideas:
If the source data that you use for your calculations is not likely to change often then you can run the calculations once and save the results. Then you can serve the results directly for as long as the source data remains the same.
You can save the results back to your database, or as you suggest, you can save them in a faster storage such as Redis. Based on your description I suspect the big performance gain will be in not doing calculations so often, the difference between storing in a regular database vs. Redis or similar is probably not significant in comparison.
If the data changes often then you will still need to do calculations frequently. For such a case an option that you have is to push the calculations to the client. Your Flask app can just return the source data in JSON format and then the browser can do the processing on the user's computer.
I hope this helps. | 1 | 1 | 0 | I'm currently implementing a webapp in flask. It's an app that does a visualization of data gathered. Each page or section will always have a GET call and each call will return a JSON response which then will be processed into displayed data.
The current problem is that some calculation is needed before the function can return a JSON response. This causes some of the responses to arrive more slowly than others, making page loads a bit slow. How do I properly deal with this? I have read about caching in Flask and wonder whether that is what the app needs right now. I have also researched implementing a Redis queue. I'm not really sure which is the correct method.
Any help or insights would be appreciated. Thanks in advance | How to speed up JSON for a flask application? | 1.2 | 0 | 0 | 2,480 |
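A minimal sketch of the "calculate once, serve many" idea from the answer; the TTL, route, and expensive_calculation are hypothetical stand-ins:

```python
import time
from flask import Flask, jsonify

app = Flask(__name__)
_cache = {}   # endpoint name -> (timestamp, payload)
TTL = 300     # recompute at most every 5 minutes

def expensive_calculation():
    return {'result': 42}  # stand-in for the slow step

@app.route('/data')
def data():
    entry = _cache.get('data')
    if entry is None or time.time() - entry[0] > TTL:
        entry = (time.time(), expensive_calculation())
        _cache['data'] = entry
    return jsonify(entry[1])
```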
21,248,545 | 2014-01-21T03:25:00.000 | 0 | 0 | 1 | 0 | python,py2app | 21,248,644 | 2 | false | 0 | 0 | If you're running Windows (and it seems that you're) you will need to go to the Run window (Win + R) and...
type cmd
type python setup.py install
...to begin the installation.
If you don't have Python on your PATH, then use:
C:\PythonXY\python.exe setup.py install (for example, in Python 2.7 use C:\Python27\python.exe setup.py install).
Also maybe you need to specify the setup.py path so you should do:
C:\PythonXY\python.exe C:\Some\Path\setup.py install.
If you're running any Linux distro then just open the terminal and type that command.
Hope it helps! | 2 | 0 | 0 | I have downloaded py2app, but the problem is that easy_install seems to be an online installation, yet I am installing this on an offline PC, so I can't use easy_install. I expected to be able to download an EXE file or MSI file to install it on my PC using the normal procedure for installing a Python package, because the Python packages that I have installed before have come as self-running files.
The downloadable version of py2app does not include any such self-running file. It tells me to type $python setup.py install. Where do I type this? Into what command line? | How do I install py2app on an off-internet PC | 0 | 0 | 0 | 110 |
21,248,545 | 2014-01-21T03:25:00.000 | 0 | 0 | 1 | 0 | python,py2app | 21,282,213 | 2 | false | 0 | 0 | You need to download and install the following packages (in this order):
altgraph
macholib
modulegraph
py2app
All of them can be installed by first downloading and extracting the archive and then run "python setup.py install" with the current working directory set to the directory containing the setup.py file.
However... You appear to want to install py2app on a Windows PC, and that won't work because py2app does not support Windows (it cannot cross-"compile" a Mac application bundle). | 2 | 0 | 0 | I have downloaded py2app, but the problem is that easy_install seems to be an online installation, yet I am installing this on an offline PC, so I can't use easy_install. I expected to be able to download an EXE file or MSI file to install it on my PC using the normal procedure for installing a Python package, because the Python packages that I have installed before have come as self-running files.
The downloadable version of py2app does not include any such self-running file. It tells me to type $python setup.py install. Where do I type this? Into what command line? | How do I install py2app on an off-internet PC | 0 | 0 | 0 | 110 |
21,249,217 | 2014-01-21T04:31:00.000 | 2 | 0 | 1 | 0 | python | 21,249,236 | 4 | false | 0 | 0 | Yes, the and operator requires all arguments to be true, and returns the last one checked, which is 5. (If any of the arguments were false, it would return the first false value, since that would be the last one checked in order to verify if all arguments were true.)
The or operator requires only one argument to be true, and returns the last one checked, which is 4, because 4 represents the first true value in the conditional. (If all arguments were false, then the return value would be equal to the last false value, since that would be the last value checked in order to verify if any of the arguments were true.) | 1 | 2 | 0 | When I tested the difference between and and or, I ran into this behavior. Could you please help me understand it? | "4 and 5" is 5, while "4 or 5" is 4. Is there any reason? | 0.099668 | 0 | 0 | 270 |
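A quick interpreter session illustrating the rule from the answer above: each operator returns the last operand it had to evaluate:

```python
>>> 4 and 5   # 4 is truthy, so `and` must also check 5 and returns it
5
>>> 4 or 5    # 4 is truthy, so `or` stops immediately and returns 4
4
>>> 0 and 5   # 0 is falsy, so `and` stops immediately and returns 0
0
>>> 0 or 5    # 0 is falsy, so `or` must also check 5 and returns it
5
```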
21,250,136 | 2014-01-21T05:49:00.000 | 1 | 1 | 0 | 0 | java,python,internet-explorer,activex | 21,260,619 | 1 | false | 1 | 0 | I found one solution to this.
We can make the modification below to the registry so that applets run automatically without pop-ups:
C:\Windows\system32>reg add "HKCU\Software\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_LOCALMACHINE_LOCKDOWN" /v iexplore.exe /t REG_DWORD /d 0 /f | 1 | 1 | 0 | I am working on some applets, and whenever I try to open the applets in IE using my Python script, it stops for manual input to enable the ActiveX controls.
I tried doing it from the IE settings, but I need a command-line way to do it so that I can integrate it into my Python script. | How can I enable activex controls on IE for auto loading of applets | 0.197375 | 0 | 1 | 923 |
21,252,729 | 2014-01-21T08:24:00.000 | 3 | 0 | 0 | 0 | android,python,kivy | 21,260,736 | 2 | true | 0 | 1 | I have recently published an Android application on Google Play written in Python/Kivy
Congratulations. May I ask what app it is?
My question is: is this safe to expose the source code files or is there something that i am missing in the compilation of my project ? What is annoying me more are the temporary files .kv~ and .py~ that are exposing the whole project source code.
As TwilightSun has explained, some of these files are editor backups, which you can remove or exclude from the apk by modifying your buildozer.spec file or the equivalent python-for-android commands if using that directly.
However, more generally, if you are serious about obfuscating your code you will want to take further steps. I'm no expert, but probably this would include things like moving your kv code to a python file (with Builder.load_string) and compiling your whole project with cython. The resulting binaries will be harder to decompile than the python .pyo bytecode that is included by default. | 2 | 3 | 0 | I have recently published an Android application on Google Play written in Python/Kivy. Normally the "build.py" script would wrap the whole project files into one single folder that is the application package folder. But if i check the content of this package on my phone after the installation of the apk i can find the "android.txt" file, the ".kv/.kv~" file and the ".py~"*and *"pyo" files.
My question is: is this safe to expose the source code files or is there something that i am missing in the compilation of my project ? What is annoying me more are the temporary files *.kv~ and .py~ that are exposing the whole project source code.*
But i should mention the gratitude and the respect i have for the Kivy project and the Kivy team. Their efforts allowed me to build and publish a nice Android application with Python that i am really proud of. Thank you so much Kivy team. | How to hide python code file and other related files in a kivy project | 1.2 | 0 | 0 | 1,365 |
21,252,729 | 2014-01-21T08:24:00.000 | 1 | 0 | 0 | 0 | android,python,kivy | 21,253,288 | 2 | false | 0 | 1 | Those files may be editor backups which kivy doesn't recognize.
You can edit build.py and add some patterns to BLACKLIST_PATTERNS. For your issue, you should add '*~' to the blacklist. | 2 | 3 | 0 | I have recently published an Android application on Google Play written in Python/Kivy. Normally the "build.py" script wraps all the project files into one single folder that is the application package folder. But if I check the content of this package on my phone after the installation of the apk, I can find the "android.txt" file, the ".kv"/".kv~" files, and the ".py~" and ".pyo" files.
My question is: is it safe to expose the source code files, or is there something that I am missing in the compilation of my project? What is annoying me more are the temporary files .kv~ and .py~ that expose the whole project source code.
But I should mention the gratitude and the respect I have for the Kivy project and the Kivy team. Their efforts allowed me to build and publish a nice Android application with Python that I am really proud of. Thank you so much, Kivy team. | How to hide python code file and other related files in a kivy project | 0.099668 | 0 | 0 | 1,365 |
21,257,554 | 2014-01-21T12:01:00.000 | 0 | 1 | 0 | 0 | php,python | 21,291,038 | 1 | true | 0 | 0 | Porting the "sdk" to Python is probably your best bet - using python-requests and a custom json deserializer should make it easy. You may also want to have a look at RestORM... | 1 | 2 | 0 | I haven't found a suitable solution for this problem yet.
We haven't started the development, but we chose python for various reasons. So I don't wanna switch to PHP just because of an api-sdk.
Here are my thoughts for a possible solution:
Rewrite the api-sdk in Python. It's not extremely complex; I guess it will take 3-5 days. However, we would have to update the SDK ourselves, and the API that the SDK is made for changes a lot.
Write a wrapper around the SDK. That enables us to call each single SDK function by executing a PHP file from Python, like execfile(filename).
Or I could use a wrapper to make the SDK functions accessible via URL.
The sdk returns result objects (like productResult).
The problem with solutions 2 and 3 is that I can't use these result objects in Python. Solutions 2 and 3 have to return JSON, so I would lose some functionality of the API.
I'm happy to discuss your thoughts on this. | How to use an api-sdk (written in PHP) in a Python app | 1.2 | 0 | 0 | 105 |
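A sketch of option 1 (reimplementing the SDK's HTTP layer with python-requests, as the answer suggests); the class name, endpoint, and auth scheme are hypothetical:

```python
import requests

class ApiClient(object):
    """Hypothetical minimal Python port of the PHP SDK's transport layer."""

    def __init__(self, base_url, api_key):
        self.base_url = base_url.rstrip('/')
        self.session = requests.Session()
        self.session.headers['Authorization'] = 'Bearer %s' % api_key

    def get_product(self, product_id):
        resp = self.session.get('%s/products/%s' % (self.base_url, product_id))
        resp.raise_for_status()
        # plain dicts stand in for the PHP result objects (e.g. productResult)
        return resp.json()
```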
21,258,884 | 2014-01-21T13:00:00.000 | 2 | 1 | 0 | 1 | python,linux,excel | 21,259,963 | 1 | true | 0 | 0 | Yes, xlrd/xlwt work fine on Linux. Most python code and libraries run the same on any platform. | 1 | 2 | 0 | I am working on a project which is based on python windows version. Now the customer wants the project to be extended to linux platform also.
My project uses the packages xlwt and xlrd for writing the results to an Excel sheet.
So, are these packages compatible with the Linux platform as well?
Can I use these packages on Linux? Or is there an equivalent package for Linux to write results to a spreadsheet?
Since my code base is very large, is there any tool to convert the whole code from the Windows platform to the Linux platform? | Will XLWT work in linux platform? | 1.2 | 0 | 0 | 379 |
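xlwt is pure Python, so the same code runs unchanged on Windows and Linux; a minimal example:

```python
import xlwt

wb = xlwt.Workbook()
ws = wb.add_sheet('Results')
ws.write(0, 0, 'metric')   # row, column, value
ws.write(0, 1, 'value')
ws.write(1, 0, 'runtime')
ws.write(1, 1, 12.5)
wb.save('results.xls')     # identical call on both platforms
```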
21,259,553 | 2014-01-21T13:28:00.000 | 7 | 0 | 0 | 0 | python,asynchronous,background,flask,response | 21,262,500 | 1 | true | 1 | 0 | What you ask cannot be done with the HTTP protocol. Each request receives a response synchronously. The closest thing to achieve what you want would be this:
The client sends the request and the server responds with a job id immediately, while it also starts a background task for this long calculation.
The client can then poll the server for status by sending the job id in a new request. The response is again immediate and contains a job status, such as "in progress", "completed", "failed", etc. The server can also return a progress percentage, which the client can use to render a progress bar.
You could also implement web sockets, but that will require a socket-enabled server and client. | 1 | 6 | 0 | I'm using Flask to develop a web server in a Python app. I'm trying to achieve this scenario: the client (it won't be a browser) sends a request, the server does some long task in the background, and on completion sends the response back to the client asynchronously. Is it possible to do that? | Flask: asynchronous response to client | 1.2 | 0 | 0 | 3,502 |
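A minimal sketch of the job-id/polling pattern from the answer; the in-memory jobs dict and the thread-based worker are simplifications (a real deployment would use a proper task queue):

```python
import threading
import uuid
from flask import Flask, jsonify

app = Flask(__name__)
jobs = {}  # job_id -> {'status': ..., 'result': ...}

def long_task(job_id):
    # ... the long calculation goes here ...
    jobs[job_id] = {'status': 'completed', 'result': 42}

@app.route('/start', methods=['POST'])
def start():
    job_id = str(uuid.uuid4())
    jobs[job_id] = {'status': 'in progress'}
    threading.Thread(target=long_task, args=(job_id,)).start()
    return jsonify(job_id=job_id)   # immediate response carrying the job id

@app.route('/status/<job_id>')
def status(job_id):
    return jsonify(jobs.get(job_id, {'status': 'unknown'}))
```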
21,261,458 | 2014-01-21T14:52:00.000 | 1 | 0 | 1 | 0 | python,c,language-agnostic,cpython | 21,262,093 | 3 | false | 0 | 0 | Each computing platform has (or should have) an application binary interface (ABI). This is a specification of how parameters are passed between routines, how values are returned, what state the machine should be in and so on.
The ABI will specify things such as (for example):
The first integer argument (up to some number of bits, say 32) will be passed in a certain register (such as %EAX or R3). The second will be passed in another specific, register, and so on.
After the list of register is used, additional integer arguments will be passed on the stack, starting at a certain offset from the value of the stack pointer when the call is made.
Pointer arguments will be treated the same as integer arguments.
Floating-point arguments will be passed in floating-point registers F1, F2, and so on, until those registers are used up, and then on the stack.
Compound arguments (such as structures) will be passed as integer arguments if they are very small (e.g., four char objects in one structure) or on the stack if they are large.
Each compiler or other language implementation will generate code that conforms to the ABI, at least where its routines call or are called from other routines that might be outside the language. | 1 | 8 | 0 | Sorry if this is too vague. I was recently reading about python's list.sort() method and read that it was written in C for performance reasons.
I'm assuming that the python code just passes a list to the C code and the C code passes a list back, but how does the python code know where to pass it or that C gave it the correct data type, and how does the C code know what data type it was given? | How do programming languages call code written in another language? | 0.066568 | 0 | 0 | 3,219 |
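As a concrete illustration of the ABI discussion above, Python's ctypes can call into a C shared library once you declare the argument and return types so the marshalling matches what the ABI expects (the library name assumes Linux):

```python
from ctypes import CDLL, c_double

libm = CDLL('libm.so.6')        # the C math library on Linux
libm.cos.argtypes = [c_double]  # how the argument is passed per the ABI
libm.cos.restype = c_double     # how the return value is read back
print(libm.cos(0.0))            # 1.0
```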
21,263,589 | 2014-01-21T16:21:00.000 | 1 | 0 | 1 | 1 | python,multiprocess | 21,263,740 | 1 | true | 0 | 0 | Using lock files may be an option. For example, each process checks for a file like "/target_dir/lock" before write. If file exists, process will not write anything. So you have to run separate monitor process, which checks directory size, and creates or deletes lock file. | 1 | 1 | 0 | Several processes are each writing a file to a directory. The goal is to control the size of the directory such that whenever it reaches a size (S), all processes stop writing to the directory and discard the file they are about to write.
If the size then becomes lower than S because some of those files were removed, the processes will resume writing files.
It seems that I need inter-process locking to achieve this design. However, I thought maybe there's an easier way, since inter-process locking is not readily available in Python and there is obviously contention between the processes.
Python 2.7
Platforms (Win, Mac, Linux) | Shared Persistent Storage in Python | 1.2 | 0 | 0 | 240 |
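A sketch of the lock-file scheme from the answer; the lock path and the size threshold are placeholders, and the size check would live in the separate monitor process:

```python
import os

LOCK = '/target_dir/lock'

def try_write(path, data):
    # writers discard their file while the monitor's lock file exists
    if os.path.exists(LOCK):
        return False
    with open(path, 'wb') as f:
        f.write(data)
    return True

def directory_size(d):
    # run by the monitor: create LOCK when this exceeds S, remove it below S
    return sum(os.path.getsize(os.path.join(root, name))
               for root, _, names in os.walk(d) for name in names)
```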
21,264,710 | 2014-01-21T17:11:00.000 | 1 | 0 | 0 | 0 | php,python,ajax,google-app-engine | 21,264,918 | 3 | false | 1 | 0 | The backend is irrelevant when doing Ajax. You could write it in PHP, Python, or even COBOL if that's what floats your boat. The main thing is that your Javascript is asynchronously requesting data, and that your backend is providing it in the format your frontend expects. These days, that's mostly JSON. Python is of course perfectly capable of providing JSON data (via the json module from the standard library). | 1 | 0 | 0 | I'm currently working on a web application, hosted on Google App Engine with the back-end written in Python. Now I feel the need to add Ajax-like features into my website. When I went through some of the Ajax tutorials on the internet, I found that all of them taught in context to having the back-end written in PHP.
So my question is, can't I use Ajax-like features on my application written in Python, hosted on Google App Engine? And if yes, can someone suggest some good tutorials for learning Ajax which uses Python as the back-end example?
EDIT: I'm using webapp2 framework, and am not familiar with Django. | Ajax with Python as backend | 0.066568 | 0 | 0 | 1,351 |
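A minimal webapp2 handler returning JSON for an Ajax call; the route and payload are placeholders:

```python
import json
import webapp2

class DataHandler(webapp2.RequestHandler):
    def get(self):
        # the Ajax caller only needs a JSON body back
        self.response.headers['Content-Type'] = 'application/json'
        self.response.write(json.dumps({'status': 'ok', 'items': [1, 2, 3]}))

app = webapp2.WSGIApplication([('/api/data', DataHandler)])
```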
21,266,405 | 2014-01-21T18:35:00.000 | 0 | 1 | 0 | 1 | python,cron | 21,266,639 | 2 | false | 0 | 0 | Each line that contains a job must end in a newline character. | 1 | 0 | 0 | I wrote a Python script which I need it to run every 5 mins. My server is running CentOS 6.4 Final. Here's what I did in detail.
After logging into the server with an account that has root access, I did cd /var/spool/cron/, where I can see a couple of files with different usernames on them. I edited my file (the one with my username on it) with nano myusername and added this line at the end of the file.
*/5 * * * * /usr/bin/python /home/myusername/Dev/cron/python_sql_image.py
I waited a bit and the cron job works now. But a new question: this Python code generates a png file when executed. When I run it manually, the png file is created in the same folder as the .py script, but when the cron job runs it, the png file is created in /home/myusername. Is there any way I can change the location? | cronjob on CentOS running a python script | 0 | 0 | 0 | 3,668 |
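On the follow-up question: cron starts jobs with the user's home directory as the working directory, so relative output paths land in /home/myusername; one fix is to pin the working directory at the top of the script, as in this sketch:

```python
import os

# cron runs jobs with $HOME as the working directory, so relative paths
# resolve there; switch to the script's own folder before writing the png
os.chdir(os.path.dirname(os.path.abspath(__file__)))
```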
21,270,951 | 2014-01-21T23:07:00.000 | 3 | 0 | 0 | 0 | python,google-cloud-storage | 21,271,776 | 3 | false | 1 | 0 | Try to PUT in larger blocks, since latency is probably the gating factor. You can edit the DEFAULT_CHUNK_SIZE in apiclient/http.py as a workaround. | 3 | 3 | 0 | How would I go about turning off SSL when sending data to Google cloud storage? I'm using their apiclient module.
The data that I'm putting to the cloud is already encrypted. I'm also trying to put data from AWS to GCS in 512k sized blocks. I'm seeing about 600ms+ in putting just one block. I was thinking if I don't have to set up a secure connection then I can cut down that PUT time a little.
The code is server side code that lives on AWS and for some god awful reason my company wants to have two (S3 and GCS) as production storage regions. | Turn off SSL to Google cloud storage | 0.197375 | 0 | 1 | 1,475 |
21,270,951 | 2014-01-21T23:07:00.000 | 1 | 0 | 0 | 0 | python,google-cloud-storage | 21,678,526 | 3 | false | 1 | 0 | You should keep SSL. When using OAuth2 (as GCS does), any request may include an http header (access_token) that you don't want third parties to see. Otherwise, hijacking your account would be extremely easy. | 3 | 3 | 0 | How would I go about turning off SSL when sending data to Google cloud storage? I'm using their apiclient module.
The data that I'm putting to the cloud is already encrypted. I'm also trying to put data from AWS to GCS in 512k sized blocks. I'm seeing about 600ms+ in putting just one block. I was thinking if I don't have to set up a secure connection then I can cut down that PUT time a little.
The code is server side code that lives on AWS and for some god awful reason my company wants to have two (S3 and GCS) as production storage regions. | Turn off SSL to Google cloud storage | 0.066568 | 0 | 1 | 1,475 |
21,270,951 | 2014-01-21T23:07:00.000 | 3 | 0 | 0 | 0 | python,google-cloud-storage | 21,271,640 | 3 | true | 1 | 0 | apiclient uses the Google Cloud Storage JSON API, which requires HTTPS.
Can you say a bit about why you would like to disable SSL?
Thanks. | 3 | 3 | 0 | How would I go about turning off SSL when sending data to Google cloud storage? I'm using their apiclient module.
The data that I'm putting to the cloud is already encrypted. I'm also trying to put data from AWS to GCS in 512k sized blocks. I'm seeing about 600ms+ in putting just one block. I was thinking if I don't have to set up a secure connection then I can cut down that PUT time a little.
The code is server side code that lives on AWS and for some god awful reason my company wants to have two (S3 and GCS) as production storage regions. | Turn off SSL to Google cloud storage | 1.2 | 0 | 1 | 1,475 |
21,274,359 | 2014-01-22T04:43:00.000 | 5 | 0 | 0 | 1 | python,macos,command-line,scrapy,bin | 21,274,416 | 1 | true | 1 | 0 | First, next time you get a Permission Denied from pip uninstall foo, try sudo pip uninstall foo rather than trying to do it manually.
But it's too late to do that now, you've already erased the files that pip needs to do the uninstall.
Also:
Up until this point, I've resisted the urge to just delete it. But I know that those folders are hidden by Apple for a reason...
Yes, they're hidden so that people who don't use command-line programs, write their own scripts, etc. will never have to see them. That isn't you. You're a power-user, and sometimes you will need to see stuff that Apple hides from novices. You already looked into /Library, so why not /usr/local?
The one thing to keep in mind is learning to distinguish stuff installed by OS X itself from stuff installed by third-party programs. Basically, anything in /System/Library or /usr is part of OS X, so you should generally not touch it or you might break the OS; anything installed in /Library or /usr/local is not part of OS X, so the worst you could do is break some program that you installed.
Also, remember that you can always back things up. Instead of deleting a file, move it to some quarantine location under your home directory. Then, it it turns out you made a mistake, just move it back.
Anyway, yes, it's safe to delete /usr/local/bin/scrapy. Of course it will break scrapy, but that's the whole point of what you're trying to do, right?
On the other hand, it's also safe to leave it there, except for the fact that if you accidentally type scrapy at a shell prompt, you'll get an error about scrapy not being able to find its modules, instead of an error about no such program existing. Well, that, and it may get in the way of you trying to re-install scrapy.
Anyway, what I'd suggest is this: pip install scrapy again. When it complains about files that it doesn't want to overwrite, those files must be from the previous installation, so delete them, and try again. Repeat until it succeeds.
If at some point it complains that you already have scrapy (which I don't think it will, given what you posted), do pip install --upgrade scrapy instead.
If at some point it fails because it wants to update some Apple pre-installed library in /System/Library/…/lib/python, don't delete that; instead, switch to pip install --no-deps scrapy. (Combine this with the --upgrade flag if necessary.) Normally, the --no-deps option isn't very useful; all it does is get you a non-working installation. But if you're only installing to uninstall, that's not a problem.
Anyway, once you get it installed, pip uninstall scrapy, and now you should be done, all traces gone. | 1 | 2 | 0 | So I've been having a lot of trouble lately with a messy install of Scrapy. While I was learning the command line, I ended up installing with pip and then easy_install at the same time. Idk what kinda mess that made.
I tried the command pip uninstall scrapy, and it gave me the following error:
OSError: [Errno 13] Permission denied: '/Library/Python/2.6/site-packages/Scrapy-0.22.0-py2.6.egg/EGG-INFO/dependency_links.txt'
so, I followed the path and deleted the text file... along with anything else that said "Scrapy" within that path. There were two versions in the /site-packages/ directory.
When I tried again with the pip uninstall scrapy command, I was given the following error:
Cannot uninstall requirement scrapy, not installed
That felt too easy, so I went exploring through my directory hierarchy and I found the following in the usr/local/bin directory:
-rwxr-xr-x 1 greyelerson staff 173 Jan 21 06:57 scrapy*
Up until this point, I've resisted the urge to just delete it. But I know that those folders are hidden by Apple for a reason...
1.) Is it safe to just delete it?
2.) Will that completely remove Scrapy, or are there more files that I need to remove as well? (I haven't found any robust documentation on how to remove Scrapy once it's installed) | Safely removing program from usr/local/bin on Mac OSX 10.6.8? | 1.2 | 0 | 0 | 10,163 |
21,274,517 | 2014-01-22T04:58:00.000 | 2 | 0 | 0 | 0 | python,django,cookies,analytics | 21,274,744 | 1 | true | 0 | 0 | By default cookies are enabled, so the user has to make the conscious decision to turn them off.
Depending on your location it is a mandatory requirement to tell users that your site uses cookies. I believe it helps educate the user as you would provide further information about the purpose of the cookies and how it helps improve their experience on the website. Granted cookies can be used for malicious purposes but on the whole they help the website provide a better service to the user.
If you are to use cookies, tell the user that you do and provide a 'Cookie policy' which states the use of the cookie. People will be more inclined to allow cookies on your site.
Having said that, if the cookie keeps personal data such as names, locations, passwords etc in plain text it would definitely not be a good idea to store this data in a cookie. You are better off passing the data to the server within a form or storing it in a user session and processing it on the server side. | 1 | 1 | 0 | I'm building my first web app and have decided to use cookies to pass persistent data in my multiple signup pages.
Will using cookies hurt my website's new sign up conversion rate? E.g. will I lose some potential users who have cookies turned off? | Does requiring cookies hurt signup conversion rates? | 1.2 | 0 | 0 | 52 |
21,274,959 | 2014-01-22T05:31:00.000 | 0 | 0 | 0 | 0 | python,opencv,math,geometry | 21,278,176 | 2 | false | 0 | 0 | If your question is related to the points being extracted in random order, the tool you need is probably the so called 2D alpha-shape. It is a generalization of the convex hull and will let you trace the "outline" of your set of points, and from there perform interpolation. | 1 | 0 | 1 | I have a set of points extracted from an image. I need to join these points to from a smooth curve. After drawing the curve on the image, I need to find the tangent to the curve and represent it on the image. I looked at cv2.approxPolyDP but it already requires a curve?? | Draw a curve joining a set of points in opencv python | 0 | 0 | 0 | 2,929 |
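One way to get both the smooth curve and the tangent is to fit a polynomial through the points with numpy and draw it with cv2.polylines; the image and points below are hypothetical and assumed ordered along x:

```python
import cv2
import numpy as np

img = np.zeros((480, 640, 3), np.uint8)  # stand-in for the source image
pts = np.array([(50, 300), (150, 220), (300, 200), (450, 260), (600, 340)], float)

# fit a cubic y(x) through the points and rasterize it
coeffs = np.polyfit(pts[:, 0], pts[:, 1], 3)
xs = np.linspace(pts[0, 0], pts[-1, 0], 200)
curve = np.column_stack([xs, np.polyval(coeffs, xs)]).astype(np.int32)
cv2.polylines(img, [curve], False, (0, 255, 0), 2)

# tangent at x0 from the derivative of the fitted polynomial
x0 = 300.0
slope = np.polyval(np.polyder(coeffs), x0)
y0 = np.polyval(coeffs, x0)
cv2.line(img, (int(x0 - 60), int(y0 - 60 * slope)),
         (int(x0 + 60), int(y0 + 60 * slope)), (0, 0, 255), 2)
```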
21,275,131 | 2014-01-22T05:45:00.000 | 3 | 1 | 1 | 0 | python,clr,ironpython | 21,284,660 | 1 | true | 0 | 1 | You can use pyc.py to create an exe/dll, but it's not well documented. Otherwise, you're basically right. | 1 | 3 | 0 | While doing a breaf research on IronPython I got confused about it's execution model and how it integrates with C#.
Can you please point out which of these assumptions are wrong:
IronPython is not compiled ahead of time (into a CLR exe/dll with IL code).
IronPython is distributed as a script.
When executed, IronPython files are compiled at runtime into IL and then executed in a CLR AppDomain.
Thanks | IronPython and Ahead-of-Time Compilation | 1.2 | 0 | 0 | 185 |
21,279,553 | 2014-01-22T09:56:00.000 | 4 | 1 | 0 | 0 | python,selenium,automation,automated-tests,robotframework | 21,280,237 | 1 | true | 1 | 0 | pybot --suite mytestsuite /path/to/mytestuite-dir So drop the .txt and put path to the directory where the suite is at the end of the command. | 1 | 1 | 0 | I have a robot framework testcase file with the name 'mytestsuite.txt'. It has few test cases..I can run this suite using,
pybot mytestsuite.txt
But when I tried to execute it using the --suite option,
pybot --suite mytestsuite.txt
I get the error:
[ ERROR ] Expected at least 1 argument, got 0.
Is anything wrong here, or can anyone suggest how to execute the test suite file?
Thanks in advance. | Running a testsuite in Robotframework | 1.2 | 0 | 0 | 2,773 |
21,279,742 | 2014-01-22T10:05:00.000 | 0 | 0 | 0 | 0 | python,django | 21,279,991 | 1 | true | 1 | 0 | I think you should use redirect func with passing additional args with it. | 1 | 1 | 0 | I have some details on a customer and I would like to get those details from a view function in app1 to a view in another app, app2. How can this be done? | Pass data from view in app1 to a view in app2 in django | 1.2 | 0 | 0 | 56 |
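A sketch of the redirect approach from the answer: pass the details along as URL kwargs when redirecting from app1 to app2's view (the URL name and kwarg are hypothetical):

```python
from django.shortcuts import redirect

def app1_view(request):
    # app2's URLconf captures customer_id and hands it to its own view
    return redirect('app2:customer_detail', customer_id=42)
```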
21,279,835 | 2014-01-22T10:08:00.000 | 0 | 0 | 0 | 0 | python,django | 21,280,027 | 1 | true | 1 | 0 | You could have base app if you want to, but you don't need one. All apps are wired when you declare them in the INSTALLED_APPS in the settings, each app has a urls.py file that will catch the route and call one of the views in that app if there's a match.
I use a base app to define global templates, global static files, helpers.
Hope this helps | 1 | 0 | 0 | I'm pretty new to Django. I've been reading and watching videos, but there is one thing that is confusing me, related to apps. I watched a video where a guy said it is convenient to have apps that do a single thing, so if I have a big project, I will have a lot of apps. I made an analogy to a bunch of classes, where each app would be a class with its own functions and elements; is this a correct interpretation? In that case, is there an app that acts like a main method in a class? I mean, I don't know how to wire all the applications I have; is there a principal app in charge of managing the others? Or how does it work?
thanks! | how to wire all the django apps | 1.2 | 0 | 0 | 66 |
21,284,817 | 2014-01-22T13:53:00.000 | 0 | 0 | 1 | 0 | python,logging,ipython,cython,ipython-notebook | 21,285,959 | 2 | false | 0 | 0 | C/C++ printing to stdout cannot be redirected, only python one can.
The only way would be to have your C/C++ library accept a parameter that redirects its logging to a file. | 1 | 4 | 0 | I am working on a module which is a hybrid of C++ and Python, connected via Cython. The C++ code includes simple logging functionality which prints log statements to the console.
When using the module via the Python or IPython command line, log statements are printed correctly. However, the module is meant to be used primarily from the IPython HTML Notebook. In the notebook, log statements do not appear. How can I fix this? | How to show log statements of extension module in IPython Notebook | 0 | 0 | 0 | 830 |
21,287,447 | 2014-01-22T15:44:00.000 | 2 | 0 | 1 | 0 | jira,jira-rest-api,python-jira | 21,292,775 | 2 | false | 0 | 0 | Labels are a field that is shared across all issues potentially, but I don't think there is a REST API to get the list of all labels. So you'd either have to write a JIRA add-on to provide such a resource, or retrieve all the issues in question and iterate over them. You can simplify things by excluding issues that have no label
JQL: project = MYPROJ and labels is not empty
And restrict the fields that are returned from the search using the "fields" parameter for search_issues | 1 | 3 | 0 | I am using the python API for Jira. I would like to get a list of all the labels that are being used in a project. I know that issue.fields.labels will get me just the labels of an issue, but I am interested in looping through all the labels used in a project. Found this to list all components in a project
components = jira.project_components(projectID)
I am looking for something similar, but for labels... | In the Jira Python API, how can I get a list of all labels used in a project? | 0.197375 | 0 | 0 | 6,767 |
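Putting the answer's pieces together with the jira Python client, assuming the search_issues keyword arguments (fields, maxResults) it documents:

```python
from jira import JIRA

jira = JIRA('https://jira.example.com', basic_auth=('user', 'password'))

labels = set()
# only issues that have labels, and only the labels field, come back
for issue in jira.search_issues('project = MYPROJ AND labels is not EMPTY',
                                fields='labels', maxResults=False):
    labels.update(issue.fields.labels)
print(sorted(labels))
```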
21,287,667 | 2014-01-22T15:53:00.000 | 0 | 0 | 0 | 0 | python,histogram,data-analysis | 21,287,882 | 3 | false | 0 | 0 | what format is your data in?
Python offers modules to read data from a variety of data formats (CSV, JSON, XML, ...)
CSV is a very common one that suffices for many cases (the csv module is part of the standard library)
Typically you write a small routine that casts the different fields as expected (strings to floating-point numbers, dates, integers, ...) and loads your data into a numpy array (np.array), where each row corresponds to a sample and each column to an observation.
For the plots, check matplotlib. It is really easy to generate graphs, especially if you have some previous experience with Matlab. | 1 | 1 | 1 | I'm new to Python and was curious as to how, given a large set of data consisting of census information, I could plot a histogram or graph of some sort.
Thanks | Plotting in Python | 0 | 0 | 0 | 498 |
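A sketch of the whole pipeline from the answer: read the file with the csv module, cast one field, and hand the values to matplotlib (the filename and column name are hypothetical):

```python
import csv
import matplotlib.pyplot as plt

ages = []
with open('census.csv') as f:
    for row in csv.DictReader(f):
        ages.append(float(row['age']))  # cast the string field to a number

plt.hist(ages, bins=20)
plt.xlabel('age')
plt.ylabel('count')
plt.show()
```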
21,288,595 | 2014-01-22T16:32:00.000 | 1 | 0 | 1 | 0 | python,pycharm,canopy,pane,interactive-shell | 22,088,207 | 1 | false | 0 | 0 | Instead of running the file in PyCharm, debug it and set a breakpoint somewhere after your data structures are created. Then you can play in the console of the debugger just like you would in Canopy, and you can examine your variables in the variable window. Actually this is better than Canopy's editor in my opinion since you can also step through the code and see how it is changing your data and structures.
In PyCharm, so you don't have to make a project, use Control-Shift-R which builds a configuration for that file and runs it.
You are correct, once the process terminates, the variables are gone. You may also look at the Spyder IDE, as it works somewhat like the Canopy editor in this respect. But personally I love the PyCharm IDE best. | 1 | 0 | 0 | I have been using Enthought Canopy for Python up until now. I really like that I can run a .py file, and play around with it in the Python Pane (e.g. make a class, and then play around in the Python Pane, trying to learn how it works, and how I can interact with it).
However, recently, I have fallen in love with pyCharm, specifically the autocomplete functions that Canopy lacks - and also the looks of it. However, when I run my program, there is no similar way of playing around with it afterwards. | Can I use pyCharm in the same way that I use Canopy? Specifically i miss the "data-analysis environment"? | 0.197375 | 0 | 0 | 3,650 |
21,289,974 | 2014-01-22T17:37:00.000 | 0 | 0 | 1 | 1 | python,macos,shell,installation | 23,864,710 | 1 | false | 0 | 0 | pip and easy_install are for python libraries.
apt-get, brew, fink, port, etc. These tools are 'distro style' package management tools.
They have one area of overlap in terms of 'why do i need one of each?' and that is Library dependencies.
pip is the tool endorsed by most Python developers and the Python packaging SIG going forward, so TL;DR: use pip, not easy_install.
These tools also work with virtualenvs, and virtualenvs are great; use them :)
You will however run into occasions where you need other libraries that python doesnt quite know what to do with when you try and build a python package with pip. It is these moments that make it necessary to have one of the other tools. | 1 | 1 | 0 | Every time I tried to install a new package for python on Mac OS X, I had this issue which these packages had different ways to setup with different package management tools. Specially for new versions of Mac OS X 10.9 Mavericks, some of installers are buggy, then I needed to switch between them. I'm asking for a short description and comparison between these main command-line installers: easy_install, pip, port, apt-get, brew, fink, and etc. Of course, sometimes there is no way other than installing through source code make install, python setup.py, or .pkg installer files. But I guess that's not the case when you need to install more complicated packages with lots of dependencies.
What I'm asking has two sides:
Is it safe to use them side by side? Or are there any known conflicts between these command-line tools? (At least brew throws warnings on port availability.)
Are there any known pros and cons based on the nature of these package managers, in cases where we have a choice between them?
21,295,705 | 2014-01-22T22:49:00.000 | 0 | 0 | 0 | 0 | python,django | 21,296,452 | 1 | false | 1 | 0 | You need to make sure their sessions expire after they log out.
Then you need to query the Django session model; to see who's online, you need to match the friend against the sessions in the Django session model.
Sorry, not much help code-wise. | 1 | 0 | 0 | To be clear, I'm not trying to check if the user is authenticated. In my app, I want users to be able to see whether other users they are friends with are currently logged in or not. Can someone point me in the right direction? | Django Check if Users are Logged In | 0 | 0 | 0 | 339 |
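A sketch of the session-table query the answer hints at; it relies on Django storing '_auth_user_id' in each logged-in session:

```python
from django.contrib.sessions.models import Session
from django.utils import timezone

def online_user_ids():
    # decode every non-expired session and collect the logged-in user ids
    ids = set()
    for session in Session.objects.filter(expire_date__gte=timezone.now()):
        uid = session.get_decoded().get('_auth_user_id')
        if uid is not None:
            ids.add(uid)
    return ids
```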
21,296,441 | 2014-01-22T23:39:00.000 | 1 | 0 | 1 | 0 | python,ms-access,ms-office,anaconda | 21,333,377 | 2 | true | 0 | 0 | I finally got it!
Yes, the problem was mixing 32 and 64 bits.
I solved the problem by installing the 64-bit Microsoft ACE drivers from an MS-DOS console, writing:
AccessDatabaseEngine_x64.exe /passive
And everything works! | 1 | 1 | 0 | I have a serious problem, and I don't know how to solve it.
I have a Win 7 64bit laptop, with MS Office 2007 installed (32 bits).
I installed 64-bit Anaconda, BUT when I try to connect to an MS Access MDB file with the ACE drivers, I get an error that there is no driver installed.
Due to MS Office 2007, I was forced to install the 32-bit ACE drivers.
Any help?
The same code runs perfectly under Win XP, with exactly the same things installed: Anaconda, ACE drivers, and MS Office 2007.
Could it be a problem mixing 32 bits and 64 bits? | Python on Win 7 64bits error MS Access | 1.2 | 1 | 0 | 217 |
21,298,834 | 2014-01-23T03:35:00.000 | -1 | 0 | 1 | 0 | python,pyglet | 21,299,063 | 8 | false | 0 | 0 | If you have pip installed, then it should be easy,
just use "pip install pyglet".
Otherwise, install pip; it makes installing Python packages a lot easier. | 3 | 3 | 0 | I am excited to use Pyglet because of all its features; however, I am having problems installing the latest development build of Pyglet on Python 3. I am aware people have already asked this question, but none of the responses helped me at all.
UPDATE:
What I mean is that I am unable to get Pyglet to install for Python 3; whenever I import Pyglet, it shows an error message with some Python 2 code. | How to get Pyglet Working for Python 3? | -0.024995 | 0 | 0 | 17,329 |
21,298,834 | 2014-01-23T03:35:00.000 | 0 | 0 | 1 | 0 | python,pyglet | 59,165,339 | 8 | false | 0 | 0 | If you are installing using pip, try typing pip3 install pyglet and it will install for python 3 instead of python 2.
(Any time you want to install for python3, use pip3 instead of pip)
If it is installed correctly, make sure you save the Python file in the same folder where you have pyglet installed. If it is not installed and you don't have the path set up, it won't import. | 3 | 3 | 0 | I am excited to use Pyglet because of all its features; however, I am having problems installing the latest development build of Pyglet on Python 3. I am aware people have already asked this question, but none of the responses helped me at all.
UPDATE:
What I mean is that I am unable to get Pyglet to install for Python 3; whenever I import Pyglet, it shows an error message with some Python 2 code. | How to get Pyglet Working for Python 3? | 0 | 0 | 0 | 17,329 |
21,298,834 | 2014-01-23T03:35:00.000 | 0 | 0 | 1 | 0 | python,pyglet | 39,190,601 | 8 | false | 0 | 0 | I think it's because you want to use it with python3 and you are installing it with pip, which is for python2, pip3 is for python3. | 3 | 3 | 0 | I am excited to use Pyglet because of all its features, however I am having problems installing the latest development build of Pyglet on Python 3. I am aware people have already asked this question but none of the responses helped me at all.
UPDATE:
What I mean is that I am unable to get Pyglet to install for Python 3; whenever I import Pyglet, it shows an error message with some Python 2 code. | How to get Pyglet Working for Python 3? | 0 | 0 | 0 | 17,329 |
21,302,971 | 2014-01-23T08:28:00.000 | 1 | 0 | 1 | 0 | python,linux,ubuntu | 21,303,034 | 2 | false | 0 | 0 | The version you installed is probably in /usr/local/bin/python . Try calling it with the complete path. You may want to change your path settings or remove the previously installed version using your package manager if it was installed by the system. | 1 | 1 | 0 | I initially had python 2.7.3 i downloaded from the sourcee and did make install
and after installing, I ran python,
but again my system is showing 2.7.3.
I didn't get any error while installing | Why my python still shows 2.7.3 after installing 2.7.6 | 0.099668 | 0 | 0 | 3,498 |
21,307,904 | 2014-01-23T12:16:00.000 | 1 | 0 | 0 | 0 | python,sockets | 27,230,282 | 1 | true | 0 | 0 | virtual hosts in Apache works because it is specified in the HTTP RFC to send the host header. Unless your client similarly sends the name it used to connect, there is really no way to find this out. DNS lookup happens separately and resolves a host name to an IP. The IP is then used to connect. – Kinjal Dixit | 1 | 1 | 0 | I am using Python SocketServer to implement a socket server.
How can I find out whether the client used example.com to connect to me, or used x.x.x.x?
Actually, I need something like virtual hosts in Apache.
Googling didn't come up with any notable result.
Thanks | Find host name used to connect to my socket server | 1.2 | 0 | 1 | 188 |
21,310,373 | 2014-01-23T14:03:00.000 | 2 | 0 | 1 | 0 | python,django,pycharm | 21,310,636 | 1 | true | 0 | 0 | Right click -> refactor -> rename
Then you get a dialog that will preview the changes for you.
Very slick. | 1 | 0 | 0 | I am using Pycharm professional edition for my Python and Django project.
Is there anything in PyCharm so that whenever I change a variable/function/class name where it is defined in the middle of my project, it will automatically change the name wherever it is referenced across the project's multiple files?
So that I don't have to check manually where it is used and change it accordingly.
Any suggestions, please? | Pycharm change Identifier Name where ever it has been used automatically if i change it where it has been defined | 1.2 | 0 | 0 | 74 |
21,312,425 | 2014-01-23T15:24:00.000 | 1 | 0 | 1 | 0 | python,dictionary,key,multiple-value | 21,312,503 | 2 | false | 0 | 0 | What you need is to make a custom dictionary class where the __getitem__ method rounds down the value before calling the standard __getitem__ method. | 1 | 1 | 0 | I want to make a Python dictionary. I want values like 0.25, 0.30, 0.35 to be keys in this dictionary. The problems is that I have values like 0.264, 0.313, 0.367. I want this values to access the keys e.g. I want every value from 0.25(inclusive) to 0.30(exclusive) to access the value under the key 0.25. Any ideas how to do this? I think I've done that before somehow, but I have no ideas right now.
Thanks in advance. | How to make a dictionary, in which multiple values access one value(Python) | 0.099668 | 0 | 0 | 143 |
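A sketch of the custom dictionary from the answer: __getitem__ rounds the lookup key down to the nearest 0.05 bucket floor before delegating to dict (the bucket width is taken from the question):

```python
class BucketDict(dict):
    STEP = 0.05

    def __getitem__(self, key):
        # round(..., 9) absorbs float noise right at a bucket boundary
        n = int(round(key / self.STEP, 9))
        return dict.__getitem__(self, round(n * self.STEP, 2))

d = BucketDict({0.25: 'low', 0.30: 'mid', 0.35: 'high'})
print(d[0.264])  # 'low'  -> every key in [0.25, 0.30) maps to 0.25
print(d[0.313])  # 'mid'
print(d[0.367])  # 'high'
```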
21,315,127 | 2014-01-23T17:20:00.000 | 0 | 0 | 1 | 0 | ipython | 21,315,930 | 1 | true | 0 | 0 | We don't provide any way to make ipython console the default, and we're probably not about to, as it gets much less field testing than the regular terminal IPython. I'd recommend you just alias another convenient name to it. | 1 | 0 | 0 | I often use ipython with vim (vim-ipython plugin). It connects to ipython via ZMQ, so I need to run ipython console. I don't see any purpose to not use ipython console even if I don't use ZMQ features, so I want ipython to start ZMQ-based console without typing console. I know, that this problem could be partly solved with bash aliases, but I think that I would have problems with launching qtconsole or notebook. | How to use ipython ZMQ-based console by default | 1.2 | 0 | 0 | 149 |
21,319,006 | 2014-01-23T20:48:00.000 | 0 | 0 | 0 | 0 | jira,python-jira | 28,475,805 | 1 | false | 1 | 0 | Technically, you can do that with Behaviours plugin.
It needs a Groovy script that looks up the reporter's other issues and transitions them to a paused status.
However, I don't advise doing this, because you'll also need a carefully crafted workflow that supports your "Pause" transition on all statuses and a working Groovy script (which needs programming experience and intense knowledge of how the JIRA API works). You also need another script that reopens the previous issue when the newest one is closed, etc.; there are a lot of pitfalls. | 1 | 1 | 0 | How can I avoid a user having two open issues in Jira?
Is it possible for Jira to treat issue management in this way:
when a user has an issue open and opens another issue, the first one must be automatically paused? | How to avoid that that a user has two opened issues in Jira? | 0 | 0 | 0 | 53 |
21,322,741 | 2014-01-24T01:28:00.000 | 0 | 0 | 0 | 1 | python,google-app-engine | 21,323,348 | 2 | false | 1 | 0 | It entirely depends on what you are currently storing in the BlobProperty. Since it is typically used to store data with an upper size limit of 1 MB, I am assuming that you are storing it for images or even some files, which are under that limit.
In all probability, you will want either to provide a link in your web application for the user to download the document or, if it is an image, to render it yourself in the web application (e.g. a user's avatar or something). | 1 | 0 | 0 | How would I go about selecting and downloading or displaying individual entries from the Datastore, specifically if those entries contain a BlobProperty? | downloading or displaying BlobProperties from Google App Engine | 0 | 0 | 0 | 97 |
21,323,878 | 2014-01-24T03:29:00.000 | 0 | 0 | 0 | 0 | python,opencv,python-2.7,ubuntu,arduino | 21,363,163 | 1 | false | 0 | 0 | After reading the docs and performing many tests, I realized the I had not grabbed a frame before calling retrieve(). My thought was that the USB driver would automatically tick in the web cameras image into a buffer. Then, at any time, I could read that video frame's memory with having to wait. | 1 | 0 | 1 | I am using opencv2. I am able to capture frames from my web cam using cap.read() but not with cap.retrieve(). I like retrieve() because it does not block so my frame speed is faster. My retrieve() used to work but it stopped and now returns a black screen. both functions return true in the return status. I must have installed something like a different usb driver. I am using Ubuntu linux on a pcduino2. | opencv2 capture using retrieve() does not work but read() works | 0 | 0 | 0 | 117 |
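A small sketch of the grab-then-retrieve pattern the answer describes, using the standard cv2 capture API:

```python
import cv2

cap = cv2.VideoCapture(0)
# grab() pulls a frame into the buffer; retrieve() then decodes that frame
if cap.grab():
    ok, frame = cap.retrieve()
    if ok:
        cv2.imwrite('frame.png', frame)
cap.release()
```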
21,324,050 | 2014-01-24T03:48:00.000 | -1 | 0 | 0 | 0 | python,django,heroku,rpc,bitcoin | 21,972,946 | 2 | false | 1 | 0 | You can use SSL with RPC to hide the password.
rpcssl=1 | 1 | 0 | 0 | Hey I was wondering if anyone knew how to connect to a bitcoin wallet located on another server with bitcoinrpc
I am running a web program made in django and using a python library called bitcoinrpc to make connections.
When testing locally, I can use bitcoinrpc.connect_to_local), or even bitcoinrpc.connect_to_remote('account','password') and this works as well as long as the account and password match the values specified in my 'bitcoin.conf' file. I can then use the connection object to get values and do some tasks in my django site.
The third parameter in connect_to_local defaults to localhost. I was wondering:
A) What to specify for this third parameter in order to connect from my webserver to the wallet stored on my home comp (is it my IP address?)
B) Because the wallet is on my PC and not some dedicated server, does that mean that my IP will change and I won't be able to access the wallet?
C) The connection string is in the django app - which is hosted on heroku. Heroku apps are launched by pushing with git, but I believe it is to a private repository. Still, if anyone could see the first few lines of my 'view', they would have all they need to take my BTC (or, more accurately, mBTC). Anyone know how bad this is, or any ways to go about doing BTC payments/movements in a more secure way?
Thanks a lot. | Bitcoinrpc connection to remote server | -0.099668 | 0 | 0 | 1,399 |
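A hedged sketch for (A): the bitcoinrpc documentation of the time describes host/port arguments on connect_to_remote; the IP is a placeholder and the keyword names should be checked against your library version:

```python
import bitcoinrpc

# host/port keyword names are assumed from the bitcoinrpc docs of the era
conn = bitcoinrpc.connect_to_remote('account', 'password',
                                    host='203.0.113.7', port=8332)
print(conn.getbalance())
```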
21,324,095 | 2014-01-24T03:53:00.000 | 1 | 0 | 1 | 1 | python | 21,390,894 | 1 | true | 0 | 0 | Python uses configure tools to configure the build. If you can't run the configure script, you may want to install cygwin in order to run the configure script. You'll want to pass the flag --enable-unicode=ucs4 to get UCS4, and you'll likely need a number of other flags to get it to work with the microsoft compiler.
Of course, if you can tolerate a cygwin-based version, and install all the necessary cygwin bits to build, you'll be able to build under cygwin without supplying a lot of flags, because configure will auto-detect the necessary settings. | 1 | 1 | 0 | How can I configure the Python Interpreter build project to build with UCS4 support on Windows? For example, I want to create Python 2.6.9, 64 bits + UCS4 support for Windows.
We want to produce pre-compiled python files (.pyc files) for multiple platforms. Due to our existing build setup we wish to build all the .pyc files on Windows, with those files in turn being distributed for use on other, Unix-like platforms.
So, we need to have various versions of the Python interpreter on Windows - versions that do not exist as an installer package.
I have built Python 2.6.9 32 bits with UCS2 support using Visual Studio and the unmodified Python source tree and Visual Studio project files. I see in pyconfig.h there is #define Py_UNICODE_SIZE 2. However, changing this 2 to a 4 has no effect on the resulting Python Interpreter. | Building Python Interpreter on Windows with UCS4 support | 1.2 | 0 | 0 | 636 |
21,326,448 | 2014-01-24T07:02:00.000 | 4 | 1 | 1 | 0 | python,vmware | 22,400,004 | 4 | false | 0 | 0 | Also, pyVmomi directly corresponds to the vSphere Managed Object Browser (MOB).
So open the MOB on your vCenter, figure out which properties and methods you need, and the one-to-one naming convention in pyVmomi lets you call them directly.
(In short, once you learn the vSphere API you are good to go with pyVmomi; no mental mapping needed.) | 2 | 19 | 0 | I need to write python scripts to automate time configuration of Virtual Machines running on an ESX/ESXi host.
I don't know which api to use...
I am able to find two Python bindings for the VMware APIs, viz. PySphere and PyVmomi.
Could anyone please explain the difference between them, and which one should be used?
Thanks! | What is the Difference between PySphere and PyVmomi? | 0.197375 | 0 | 0 | 25,180 |
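To make the MOB correspondence concrete, here is a minimal pyVmomi connection sketch (host and credentials are placeholders):

from pyVim.connect import SmartConnect, Disconnect

si = SmartConnect(host='vcenter.example.com', user='root', pwd='secret')
content = si.RetrieveContent()  # the same object tree the MOB shows
Disconnect(si)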
21,326,448 | 2014-01-24T07:02:00.000 | -1 | 1 | 1 | 0 | python,vmware | 22,074,852 | 4 | false | 0 | 0 | Just as Josh suggested, it's a clean interface to the VMware API. It also supports several versions of Python, which is nice as it will allow you to migrate from, say, Python 2.7 to Python 3.3. | 2 | 19 | 0 | I need to write python scripts to automate time configuration of Virtual Machines running on an ESX/ESXi host.
I don't know which api to use...
I am able to find two Python bindings for the VMware APIs, viz. PySphere and PyVmomi.
Could anyone please explain the difference between them, and which one should be used?
Thanks! | What is the Difference between PySphere and PyVmomi? | -0.049958 | 0 | 0 | 25,180 |
21,328,884 | 2014-01-24T09:25:00.000 | 4 | 0 | 0 | 0 | python,excel,openpyxl | 21,352,070 | 1 | true | 0 | 0 | The file size isn't really the issue when it comes to memory use; the number of cells in memory is. Your use case will really push openpyxl to its limits: at the moment it is designed to support either optimised reading or optimised writing, but not both at the same time. One thing you might try is to read in openpyxl with use_iterators=True; this will give you a generator that you can call from xlsxwriter, which should be able to write a new file for you. xlsxwriter is currently significantly faster than openpyxl when creating files. The solution isn't perfect but it might work for you. | 1 | 5 | 0 | I have a big problem here with python, openpyxl and Excel files. My objective is to write some calculated data to a preconfigured template in Excel. I load this template and write the data on it. There are two problems:
I'm talking about writing Excel workbooks with more than 2 million cells, divided into several sheets.
I do this successfully, but the waiting time is unthinkable.
I don't know another way to solve this problem. Maybe openpyxl is not the solution. I have tried to write in xlsb, but I think openpyxl does not support this format. I have also tried with the optimized writer and reader, but the problem comes when I save, due to the big data. However, the output file size is 10 MB, at most. I'm very stuck with this. Do you know if there is another way to do this?
Thanks in advance. | openpyxl: writing large excel files with python | 1.2 | 1 | 0 | 5,110 |
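A rough sketch of the suggested read-with-openpyxl, write-with-xlsxwriter pipeline (filenames are placeholders, and use_iterators=True assumes an openpyxl version that still supports it):

import openpyxl
import xlsxwriter

wb_in = openpyxl.load_workbook('template.xlsx', use_iterators=True)
wb_out = xlsxwriter.Workbook('output.xlsx')

for sheet in wb_in.worksheets:
    out = wb_out.add_worksheet(sheet.title)
    # iter_rows() streams rows instead of holding every cell in memory
    for r, row in enumerate(sheet.iter_rows()):
        for c, cell in enumerate(row):
            out.write(r, c, cell.internal_value)

wb_out.close()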
21,330,232 | 2014-01-24T10:27:00.000 | 2 | 0 | 1 | 0 | python,memory-management,multiprocessing | 21,332,601 | 1 | true | 0 | 0 | (a) yes
(b) Python pretty much doesn't manage it, the OS does
(c) Yes, if the second process exits then its resources are freed, regardless of the persistence of the first process. You can in principle use shared objects to allow the second process to use something that the first process arranges will persist. How that plays with the specific example of the "something" being a database table is another matter.
Running extra Python processes with multiprocessing is a lot like running extra Python (or for that matter Java) processes with subprocess. The difference is that multiprocessing gives you a suite of ways to communicate between the processes. It doesn't change the basic lifecycle and resource-handling of processes by the OS. | 1 | 2 | 0 | The way I understand it, another Python instance is kicked off with multiprocessing. If so,
a. Is a Python instance started for each multiprocessing process?
b. If one process is working on say, an in-memory database table and another process is working on another in-memory database table, how does Python manage the memory allocation for the 2 processes?
c. Wrt b), is the memory allocation persistent between invocations, i.e. if the first process is used continuously but the 2nd process is used infrequently, is the in-memory table re-created between process calls to it? | Memory management with Python multiprocessing | 1.2 | 0 | 0 | 1,517
21,333,102 | 2014-01-24T12:45:00.000 | 1 | 0 | 1 | 0 | python,pandas | 21,333,741 | 1 | false | 0 | 0 | Take the intersection of their indices.
In [1]: import pandas as pd
In [2]: index1 = pd.DatetimeIndex(start='2000-1-1', freq='1T', periods=1000000)
In [3]: index2 = pd.DatetimeIndex(start='2000-1-1', freq='1D', periods=1000)
In [4]: index1
Out[4]:
[2000-01-01 00:00:00, ..., 2001-11-25 10:39:00]
Length: 1000000, Freq: T, Timezone: None
In [5]: index2
Out[5]:
[2000-01-01 00:00:00, ..., 2002-09-26 00:00:00]
Length: 1000, Freq: D, Timezone: None
In [6]: index1 & index2
Out[6]:
[2000-01-01 00:00:00, ..., 2001-11-25 00:00:00]
Length: 695, Freq: D, Timezone: None
In your case, do the following:
index1 = df1.index
index2 = df2.index
Then take the intersection of these as defined before.
Later you may wish to do something like the following to get the df at intersection index.
df1_intersection = df1.ix[index1 & index2] | 1 | 1 | 1 | I have two DataFrames with a DatetimeIndex, df1, df2.
df2's index is a subset of df1.index.
How can I extract the index dates from df1 which are also contained in df2, so I can run my analysis on those dates. | pandas: how to extract a set of dates from a DataFrame with datetime index? | 0.197375 | 0 | 0 | 1,685
21,335,434 | 2014-01-24T14:42:00.000 | 0 | 0 | 0 | 0 | python-2.7,ssl,https | 21,338,087 | 1 | false | 0 | 0 | I found the answer:
I re-installed OpenSSL.
This is the command I used in the terminal:
sudo yum install openssl-devel | 1 | 0 | 0 | I am using Python 2.7 and I need to secure a URL with the SSL protocol (HTTPS). Can we do this in Python 2.7?
When I try to import ssl, I get the following error:
Traceback (most recent call last):
File "", line 1, in
File "/usr/lib64/python2.7/ssl.py", line 60, in
import _ssl # if we can't import it, let the error propagate
ImportError: /usr/lib64/python2.7/lib-dynload/_ssl.so: symbol SSLeay_version, version OPENSSL_1.0.1 not defined in file libcrypto.so.10 with link time reference.
Please help me if anybody knows... Thanks in advance. | How to use ssl in python 2.7 | 0 | 0 | 1 | 2,484
21,338,216 | 2014-01-24T16:59:00.000 | -1 | 1 | 0 | 0 | python,heroku,notifications,worker | 21,348,604 | 2 | false | 1 | 0 | Easiest way: have the worker push a message back to your API, whether that is a log entry or anything else you need to have in your app. | 1 | 1 | 0 | I'm writing a python app which is currently hosted on Heroku. It is in an early development stage, so I'm using a free account with one web dyno. Still, I want my heavier tasks to be done asynchronously, so I'm using the iron worker add-on. I have it all set up and it does the simplest jobs, like sending emails or anything that doesn't require any data being sent back to the application. The question is: how do I send the worker output back to my application from the iron worker? Or even better, how do I notify my app that the worker is done with the job?
I looked at other iron solutions like cache and message queue, but the only thing I can find is that I can explicitly ask for the worker state. Obviously I don't want my web service to poll the worker because it kind of defeats the original purpose of moving the tasks to background. What am I missing here? | Ironworker job done notification | -0.099668 | 0 | 0 | 209 |
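One hedged sketch of the "push back" idea: at the end of the worker script, POST to a callback endpoint on your app (the URL and payload are illustrative):

import requests

requests.post('https://yourapp.herokuapp.com/api/worker-done',
              data={'job_id': 42, 'status': 'finished'})

Your web dyno then records the completion instead of polling the worker's state.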
21,338,613 | 2014-01-24T17:20:00.000 | 2 | 0 | 0 | 0 | python,user-interface,tkinter,window | 21,338,743 | 1 | true | 0 | 1 | You want to call grab_set() on your new window, which will prevent any events from going to the other window. When you're done, call grab_release(). | 1 | 0 | 0 | Can anyone give me a point in the right direction please?
I've created a piece of Python code that creates a GUI. One button opens a 'new window', but if I click on the same button again I get another 'new window'; keep clicking and I've got multiple copies of the 'new window'.
I want the original window to be inaccessible whilst the 'new window' is open, and then accessible again when I close the 'new window'.
Thanks in advance for any pointers. | Python - GUI creates new window, but the original window is still usable. How do I stop this | 1.2 | 0 | 0 | 154 |
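A small sketch of the grab_set()/grab_release() pattern (Python 2 Tkinter; names are illustrative):

import Tkinter as tk

root = tk.Tk()

def open_modal():
    top = tk.Toplevel(root)
    top.grab_set()  # route all events to the new window only
    # release the grab and close when the user dismisses the window
    top.protocol('WM_DELETE_WINDOW',
                 lambda: (top.grab_release(), top.destroy()))

tk.Button(root, text='Open', command=open_modal).pack()
root.mainloop()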
21,342,188 | 2014-01-24T20:46:00.000 | 0 | 0 | 0 | 1 | python,dll,python-3.x,cx-freeze | 23,767,893 | 1 | false | 0 | 0 | Python for Windows really requires that the main pythonXX.dll (in this case, python33.dll) exists in C:\windows\system32\
In all of our various combinations of installing Python to different locations, network drives, etc. we've always had to use a little batch file to copy pythonXX.dll into the system32 dir.
I don't think PATH manipulation will work for you, just try copying the dll to system32 and see if your issues go away.
Then again, if you installed another version of Python to, say, C:\Python33, then that Windows-based installer will do this for you, and you'll be able to run your other Python location. | 1 | 0 | 0 | I am using cx_Freeze to convert Python scripts to a Windows executable. I am using the cxfreeze script present in the Scripts directory. I want the executable generated by cxfreeze to be in a different directory from the .dll's and .pyd's. When I tried to put the two of them in separate directories, the .exe did not work; I got
The application has failed to start because python33.dll was not found. Try reinstalling to fix this problem
I know this happens because the executable and the (.dll's & .pyd's) are located in different directories. Is there a way to keep them in different directories?
I am using the following command to generate the executable
C:\Python33\Scripts>cxfreeze C:\Users\me\Desktop\setup.py --target-name=setup.exe --target-dir=C:\Users\me\Desktop\new_dir | Keeping the .dll and .pyd files in other directory | 0 | 0 | 0 | 853 |
21,343,327 | 2014-01-24T22:04:00.000 | 0 | 0 | 1 | 0 | python,iterator | 21,343,492 | 4 | false | 0 | 0 | __iter__() is intended to return an iterator over the object. What is an iterator over an object that is already an iterator? self, of course. | 2 | 5 | 0 | I understand that the standard says that it does but I am trying to find the underlying reason for this.
If it simply always returns self what is the need for it?
You clearly always have access to the object since you are calling iter on that object, so what's the need for having it? | Why does a Python Iterator need an iter method that simply returns self? | 0 | 0 | 0 | 572 |
21,343,327 | 2014-01-24T22:04:00.000 | 0 | 0 | 1 | 0 | python,iterator | 21,343,407 | 4 | false | 0 | 0 | It's so for loops and other code that needs to work with iterables can unconditionally call iter on the thing they're iterating over, rather than treating iterators and other iterables separately. In particular, non-iterators might reasonably have a method called next, and we want to be able to distinguish them from iterators. | 2 | 5 | 0 | I understand that the standard says that it does but I am trying to find the underlying reason for this.
If it simply always returns self what is the need for it?
You clearly always have access to the object since you are calling iter on that object, so what's the need for having it? | Why does a Python Iterator need an iter method that simply returns self? | 0 | 0 | 0 | 572 |
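A tiny illustration of why returning self makes the protocol work (sketch):

class CountDown(object):
    def __init__(self, n):
        self.n = n

    def __iter__(self):
        return self  # an iterator is its own iterator

    def next(self):  # spelled __next__ on Python 3
        if self.n <= 0:
            raise StopIteration
        self.n -= 1
        return self.n + 1

for i in CountDown(3):  # for calls iter() on its target unconditionally
    print(i)            # 3, 2, 1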
21,346,669 | 2014-01-25T04:33:00.000 | 1 | 0 | 1 | 1 | python,macos,python-3.x | 21,346,715 | 3 | false | 0 | 0 | You don't want to actually update the system version of python.
But also, python3 is the executable name. | 1 | 0 | 0 | This should be a really simple question, but I'm a little new to all this. I am trying to update my version of Python on Mac OS X 10.6.8. So, I downloaded Python 3.3 from www.python.org, and ran the .dmg file. This then created a "Python 3.3" icon in my Applications folder. However, when I type "python -V" into the Terminal, it prints "Python 2.7.6", so clearly my version of Python has not been updated. What am I doing wrong?
Thanks! | Installing Python on Mac OS X | 0.066568 | 0 | 0 | 262 |
21,348,299 | 2014-01-25T08:17:00.000 | 0 | 0 | 1 | 0 | python,python-3.x | 21,348,388 | 2 | false | 0 | 0 | I've just manually corrupted a pickled file. It threw an error. Presumably, if a file does not throw an error it's either the file you pickled, or it's been so carefully tampered with that it fools the pickle module. In that case, I think you're pretty much sunk. | 1 | 0 | 0 | When I load a pickle using pickle.load("foo") how do I know if what's read back is corrupt or not? For example, if I'm pickling a large list using pickle.dump and kill my python process before it's finished, what would the consequences then be and how should I deal with them? | On Pickle Corruption | 0 | 0 | 0 | 2,020
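Since a truncated dump usually surfaces as an exception on load, a hedged sketch of defensive loading (note that pickle.load expects a file object, not a filename):

import pickle

try:
    with open('foo.pkl', 'rb') as f:
        data = pickle.load(f)
except (pickle.UnpicklingError, EOFError) as e:
    # a dump killed midway typically raises EOFError or UnpicklingError
    print('corrupt or incomplete pickle: %s' % e)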
21,349,462 | 2014-01-25T10:39:00.000 | 2 | 0 | 0 | 0 | android,python,web,google-cloud-messaging,bottle | 21,350,980 | 3 | false | 1 | 0 | The problem is that your clients are waiting for your server to send the GCM push notifications before getting a response. There is no reason for this behavior.
You need to change your server-side code to process your API requests, close the connection to your client, and only then send the push notifications. | 2 | 0 | 0 | I'm developing a multiplayer Android game with push notifications by using Google GCM.
My web server has a REST API. Most of the requests sent to this API send a request to Google GCM server to send a notification to the opponent.
The thing is on average, a call to my API is ~140 ms long, and ~100 ms is due to the http request sent to Google server.
What can I do to speed up this? I was thinking (I have full control of my server, my stack is Bottle/gunicorn/nginx) of creating an independent process with a database that will try to send a queue of GCM requests, but maybe there's a much simpler way to do that directly in bottle or in pure python. | Avoiding a 100ms http request to slow down the REST API where it's called from | 0.066568 | 0 | 1 | 1,501 |
21,349,462 | 2014-01-25T10:39:00.000 | 0 | 0 | 0 | 0 | android,python,web,google-cloud-messaging,bottle | 21,349,485 | 3 | false | 1 | 0 | The best thing you can do is to make all networking asynchronous, if you don't do this already.
The issue is that there will always be users with a slow internet connection and there isn't a generic approach to bring them fast internet :/.
Other than that, ideas are to
send only a few small packets, or one large packet instead of many small ones (that's faster)
use UDP over TCP, UDP being connectionless and naturally faster | 2 | 0 | 0 | I'm developing a multiplayer Android game with push notifications by using Google GCM.
My web server has a REST API. Most of the requests sent to this API send a request to Google GCM server to send a notification to the opponent.
The thing is on average, a call to my API is ~140 ms long, and ~100 ms is due to the http request sent to Google server.
What can I do to speed up this? I was thinking (I have full control of my server, my stack is Bottle/gunicorn/nginx) of creating an independent process with a database that will try to send a queue of GCM requests, but maybe there's a much simpler way to do that directly in bottle or in pure python. | Avoiding a 100ms http request to slow down the REST API where it's called from | 0 | 0 | 1 | 1,501 |
21,358,559 | 2014-01-26T01:36:00.000 | 0 | 0 | 1 | 0 | python,python-2.7,floating-point,integer | 62,539,262 | 4 | false | 0 | 0 | Python converts the integer to its floating-point equivalent and then compares the two values, so the answer is true when checking value equivalence; if you check type equivalence instead, the answer is false. | 1 | 0 | 0 | I want to make a condition that the number has to be an integer, and x == int(x) doesn't help in cases when x == 1.0...
Thank you. | Why in Python 1.0 == 1 >>> True; -2.0 == -2 >>> True and etc.? | 0 | 0 | 0 | 6,378 |
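To require an actual integer rather than a whole-valued float, check the type; a quick sketch:

x = 1.0
x == int(x)         # True: value comparison ignores type
isinstance(x, int)  # False: x is a float, so use this to enforce the type
x.is_integer()      # True: float method for 'whole-valued' checks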
21,359,542 | 2014-01-26T04:03:00.000 | 0 | 1 | 0 | 0 | python,heroku,flask | 35,026,440 | 1 | false | 1 | 0 | According to Heroku spec you are supposed to listen to single PORT which is given to your app in env. variable.
In case your application needs only a single port (for the ZeroRPC), you might be in luck.
But you shall expect your ZeroRPC being served on port 80.
Possible problems:
not sure if Heroku allows protocols other than HTTP. It will try to connect to your application after it starts, to test that it is up and running. It is possible the test will attempt an HTTP request, which is likely to fail against a ZeroRPC service.
what about authentication of users? You would have to build some security into ZeroRPC itself or accept providing the service publicly to anonymous clients.
Proposed steps:
try providing the ZeroRPC services on the port Heroku provides you.
rather than setting up HTTP proxy in front of ZeroRPC, check PyPi for "RPC". There is bunch of libraries serving already over HTTP. | 1 | 5 | 0 | We're using Heroku for historical reasons and I have this awesome ZeroRPC based server that I'd love to put up on the Heroku service. I'm a bit naive around exactly the constraints imposed for these 'cloud' based platforms but most do not allow the opening of an arbitrary socket. So I will either have to do some port-forwarding trick or place a web front-end (like Flask) to receive the requests and forward them onto the ZeroRPC backend. The reason I haven't just done Flask/ZeroRPC is that it feels awkward (my front-end experience is basically zero), but I'm assuming I would set up RESTful routes and then just forward stuff to ZeroRPC...head scratch....
Perhaps asking the question in a more open-ended way: I'm looking for suggestions on how best to deploy a ZeroRPC-based service on Heroku (btw I know dotCloud/Docker uses ZeroRPC internally, but I'm also not sure if I can deploy my own ZeroRPC server on it). | Best way to use ZeroRPC on Heroku server | 0 | 0 | 0 | 464
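If the first proposed step works, a minimal sketch of serving ZeroRPC on Heroku's assigned port looks like this (the class and fallback port are illustrative):

import os
import zerorpc

class Api(object):
    def ping(self):
        return 'pong'

server = zerorpc.Server(Api())
# Heroku hands the app exactly one port via the PORT environment variable
server.bind('tcp://0.0.0.0:%s' % os.environ.get('PORT', '4242'))
server.run()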
21,359,906 | 2014-01-26T05:00:00.000 | 0 | 1 | 1 | 0 | python,git,git-diff | 21,361,300 | 1 | true | 0 | 0 | I sense a simple git status or git diff --cached would be enough to make sure that the last commit is indeed the last one, meaning there is no work in progress, possibly with some of it already added to the index, which could constitute a new commit.
If git status doesn't mention any file added to the index, then you can push your last commit. | 1 | 1 | 0 | I am making a tool in python to push, and obviously I would want to push the last commit; so if the diff I am checking is between the current and last HEAD, I want to push, but if it is not, then git push should not work.
How can I check that the git diff being shown is between the current HEAD and the previous HEAD, i.e. git diff HEAD^ HEAD, and not any other pair?
Why do I need this functionality?
Because the diff I am seeing is the diff I am going to send in an email; it would not make sense to see a different diff and still push the last commit.
This is why I am trying to figure out whether the diff being displayed is between the current and last commit; only then should I push, otherwise not. | how to confirm that current git diff is one with current and last and not with current commit and any other | 1.2 | 0 | 0 | 56
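Since the tool is in Python, a sketch of the "clean tree, then push" check with subprocess (the command list is illustrative):

import subprocess

# empty output from --porcelain means nothing staged or modified,
# so HEAD really is the state you are about to push
status = subprocess.check_output(['git', 'status', '--porcelain'])
if status.strip():
    raise SystemExit('working tree not clean; refusing to push')
subprocess.check_call(['git', 'push'])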
21,360,937 | 2014-01-26T07:33:00.000 | 1 | 1 | 1 | 0 | python,dictionary,abc | 21,360,962 | 1 | true | 0 | 0 | Would I be able to craft a dict with purely ABC's?
No. Subclassing an ABC requires you to implement its interface; for example, Mapping requires you to implement __getitem__, __iter__, and __len__. The mixin methods provide default implementations for certain things in terms of the parts you need to implement, but you still need to provide the core. Mapping won't automatically provide a hash table or BST implementation for you. | 1 | 1 | 0 | I am familiar with the concept of Abstract Base Classes (ABCs) as providing sets of properties of the built-in objects, but I don't really have any experience working with them. I can see that there's a Mapping ABC, and a MutableMapping that inherits from it, but I don't see a .fromkeys() method (the only thing missing off the top of my head).
Would I be able to craft a dict with purely ABC's? What would that look like? Would that amount to nearly the same thing as subclassing dict? Would there be any benefit to doing that? What would be the use-case? | Is it possible to craft a Python dict with all (or most) of the properties of a dict with Abstract Base Classes? | 1.2 | 0 | 0 | 82 |
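A minimal read-only mapping built on the ABC (Python 2 spelling; on Python 3 the ABC lives in collections.abc):

import collections

class FrozenDict(collections.Mapping):
    def __init__(self, *args, **kwargs):
        # the ABC provides no storage, so a real dict backs the data
        self._data = dict(*args, **kwargs)

    def __getitem__(self, key):
        return self._data[key]

    def __iter__(self):
        return iter(self._data)

    def __len__(self):
        return len(self._data)

fd = FrozenDict(a=1, b=2)
fd.get('a'); 'b' in fd; fd.items()  # mixin methods come for free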
21,366,217 | 2014-01-26T16:40:00.000 | 1 | 0 | 0 | 0 | python,django,streaming | 21,368,318 | 1 | false | 1 | 0 | You have a couple of options I can see, and for various projects I've used both of them without any significant problems (but not at the same time)
Creating custom management command (as you mentioned). The only issue I've run into was that I had a log file that was by default owned by apache (since that was running django through WSGI) but if someone else was running the manage.py command (e.g. root through crontab), I'd occasionally have an issue that the log file would get rotated and the new owner would be root; workaround was to add a chown to the log file as part of the crontab command or else run everything as the same user. Otherwise, this has been working like a champ.
Have django create your models and then write generic python (or whatever language you'd prefer) to write to the database (and just use django for a front end). You do need to be slightly careful to make sure you're not breaking the django model links (e.g. if you have a many-to-many relationship and add something to one table, also update the corresponding other tables). | 1 | 0 | 0 | I'm in the process of writing a new application.
I was thinking of using Django for the HTTP side, but I was wondering about the best way of handling the data. My problem is that I need to acquire continuous data from different processes, save it as files and insert all the related info into the database.
The main scope is to have a surveillance camera recording video, splitting it on a per-hour basis and saving the files in the data dir. Then a script takes every new file and adds its data to the database so the HTML view can show the new files.
My great doubt is that handling files like
./manage.py do_something_with_new_data
could be a pain.
I googled a lot for other ways of doing it but didn't find anything. Does someone here have the same problem? How did you solve it? | Django / Python, continuosly add data to database via command line | 0.197375 | 0 | 0 | 408
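A skeleton of the custom management command from option 1 (the module path and command name are hypothetical):

# yourapp/management/commands/ingest_videos.py
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = 'Scan the data dir and register new video files in the database'

    def handle(self, *args, **options):
        # find new files here and create the corresponding model rows
        self.stdout.write('done')

It can then be run from cron as ./manage.py ingest_videos.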
21,366,266 | 2014-01-26T16:45:00.000 | 0 | 0 | 1 | 0 | python,nlp,nltk,wordnet,word-sense-disambiguation | 21,422,720 | 1 | false | 0 | 0 | If a word has multiple synsets, then you can say it is a polysemous word. | 1 | 0 | 0 | If my input query is:
"Dog is barking at tree"
here the word "bark" is a polysemous word and we know that, but how do we check that in code, in Python, using WordNet as a lexical database? | How to find polysemous words from given input query? | 0 | 0 | 0 | 261
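A quick sketch with NLTK's WordNet corpus (assumes the wordnet data has been downloaded):

from nltk.corpus import wordnet as wn

def is_polysemous(word):
    # more than one synset means WordNet records more than one sense
    return len(wn.synsets(word)) > 1

print(is_polysemous('bark'))  # True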
21,368,874 | 2014-01-26T20:38:00.000 | 1 | 0 | 1 | 0 | python,image,header,bmp | 41,669,265 | 3 | false | 0 | 0 | Use a hex editor, remove the header bytes and save the altered file, minus the header. There are also a few bytes at the end of a Bitmap file to remove if you are being thorough. Try the XVI32 hex editor or HxD. Both are very good hex editors and they are FREE. Good luck. | 1 | 0 | 0 | How to remove a header from a file that is .bmp, but with no import of any libraries in python and reading byte by byte with f.read(1)? | How to remove a header from a file that is bmp but with no import of any libraries in python | 0.066568 | 0 | 0 | 3,772
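For the pure-Python route the question asks about, a sketch that skips past the header (filenames are placeholders; no end-of-file trailer handling):

with open('image.bmp', 'rb') as f:
    header = [f.read(1) for _ in range(14)]  # 14-byte BMP file header
    # bytes 10-13 hold the little-endian offset of the pixel data
    offset = sum(ord(header[10 + i]) * (256 ** i) for i in range(4))

with open('image.bmp', 'rb') as f:
    f.read(offset)     # skip file header + DIB header + palette
    pixels = f.read()  # everything else is raw pixel data

with open('pixels.raw', 'wb') as out:
    out.write(pixels)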
21,369,120 | 2014-01-26T20:59:00.000 | 1 | 0 | 0 | 0 | python,wxpython | 21,383,591 | 1 | true | 0 | 1 | No. The wx.Panel does not support this feature out of the box. The FloatCanvas widget does support panning and zooming though. You could use that widget instead. Of course, I think you will have to draw the items on it instead of using regular widgets. | 1 | 0 | 0 | Is there a general way for zooming in/out on a wx.Panel (all its content should be zoomed as well) ?
Is it possible to define a new sizer type that would provide this feature ? | Zoom in/out on a wx.Panel | 1.2 | 0 | 0 | 413 |
21,370,889 | 2014-01-26T23:51:00.000 | 6 | 0 | 0 | 0 | python,django,mongodb,mongoengine | 21,498,341 | 1 | true | 1 | 0 | You can set the parameter primary_key=True on a Field. This will make the target Field your _id Field. | 1 | 0 | 0 | is it possible to use custom _id fields with Django and MongoEngine?
The problem is, if I try to save a string to the _id field it throws an Invalid ObjectId error. What I want to do is use my own IDs. This was never a problem without Django, because I caught the DuplicateKeyError on creation if a given id already existed (which was even necessary to tell the program that this ID is already taken).
Now it seems as if Django/MongoEngine won't even let me create a custom _id field :-/
Is there any way to work around this without creating a second field for the ID and letting the _id field create itself?
Greetings Codehai | custom _id fields Django MongoDB MongoEngine | 1.2 | 1 | 0 | 2,413 |
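A small sketch of the accepted approach (the field names are illustrative):

from mongoengine import Document, StringField

class Article(Document):
    # primary_key=True makes this field the document's _id,
    # so plain strings are accepted as IDs
    slug = StringField(primary_key=True)
    title = StringField()

Article(slug='my-custom-id', title='Hello').save(force_insert=True)
# force_insert=True restores the 'fail if the ID is taken' behaviour
# instead of silently overwriting the existing document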
21,372,637 | 2014-01-27T03:38:00.000 | 2 | 0 | 1 | 1 | python-2.7,path,window,installation | 28,439,616 | 9 | false | 0 | 0 | Make sure you don't put a space between the semi-colon and the new folder location that you are adding to the path.
For example it should look like...
{last path entry};C:\Python27;C:\Python27\Scripts;
...not...
{last path entry}; C:\Python27; C:\Python27\Scripts; | 3 | 25 | 0 | So I'm trying python 2.7 on my Windows. It is running Windows 8. I cannot add it to my path. I've done the usual: using the advanced system settings, environment variables, adding C:\Python27 in system variables.
However, when I type Python in command prompt it says 'python is not recognized ..' | Installing Python 2.7 on Windows 8 | 0.044415 | 0 | 0 | 140,370 |
21,372,637 | 2014-01-27T03:38:00.000 | 5 | 0 | 1 | 1 | python-2.7,path,window,installation | 21,372,892 | 9 | false | 0 | 0 | System variables usually require a restart to become effective. Does it still not work after a restart? | 3 | 25 | 0 | So I'm trying python 2.7 on my Windows. It is running Windows 8. I cannot add it to my path. I've done the usual: using the advanced system settings, environment variables, adding C:\Python27 in system variables.
However, when I type Python in command prompt it says 'python is not recognized ..' | Installing Python 2.7 on Windows 8 | 0.110656 | 0 | 0 | 140,370 |
21,372,637 | 2014-01-27T03:38:00.000 | 1 | 0 | 1 | 1 | python-2.7,path,window,installation | 21,373,239 | 9 | false | 0 | 0 | I'm using Python 2.7 on Windows 8 too, with no problem. Maybe you need to restart your computer like wclear said, or you can run the Python command-line program included in the Python installation folder (I think it is listed below the IDLE program). Hope it helps. | 3 | 25 | 0 | So I'm trying python 2.7 on my Windows. It is running Windows 8. I cannot add it to my path. I've done the usual: using the advanced system settings, environment variables, adding C:\Python27 in system variables.
However, when I type Python in command prompt it says 'python is not recognized ..' | Installing Python 2.7 on Windows 8 | 0.022219 | 0 | 0 | 140,370 |
21,372,906 | 2014-01-27T04:12:00.000 | 5 | 0 | 0 | 0 | python,scala,clojure,netty,java-server | 21,372,962 | 1 | true | 1 | 0 | I did something similar with Scala Akka actors. Instead of HTTP requests I had an unlimited number of job requests coming in and getting added to a queue (a regular Queue). A Worker Manager would manage that queue and dispatch work to worker actors whenever they were done processing previous tasks. Workers would notify the Worker Manager that a task was complete, and it would send them a new one from the queue. So in this case there is no busy waiting or looping; everything happens on message reception. You can do the same with your HTTP requests. Akka can be used from Scala or Java, and the process I described is easier to implement than it sounds.
As a web server you could use anything, really. It can be Jetty, some servlet container like Tomcat, or even spray-can. All it needs to do is receive a request and send a message to the Worker Manager. The whole system would be asynchronous and non-blocking. | 1 | 3 | 0 | I want to write a Java server, maybe using Netty or anything else suggested.
The whole purpose is that I want to queue incoming HTTP requests for a while, because the systems I'm targeting are doing super memory- and compute-intensive tasks, so if they are burdened with heavy load they eventually tend to crash.
I want to have a queue in place that will allow at most 5 requests to be passed to the destination at any given time and hold the rest of the requests in the queue.
Can this be achieved using Netty in Java? I'm equally open to an implementation in Scala, Python or Clojure. | Writing a java server for queueing incoming HTTP Request and processing them a little while later? | 1.2 | 0 | 0 | 361
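Since the asker is open to Python, the same bounded-concurrency idea can be sketched with a semaphore (forward_to_backend is a hypothetical pass-through function):

import threading

MAX_CONCURRENT = 5
gate = threading.Semaphore(MAX_CONCURRENT)

def handle_request(request):
    # requests beyond the first 5 block here (i.e. queue up)
    # until one of the in-flight requests releases its slot
    with gate:
        return forward_to_backend(request)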
21,376,619 | 2014-01-27T09:00:00.000 | 2 | 0 | 0 | 0 | python,python-3.x,oauth-2.0,openid,openid-connect | 23,262,902 | 4 | false | 0 | 0 | There are examples in the distribution. Just added another RP example (rp3) which I think should be easier to understand. Also started to add documentation. | 1 | 4 | 0 | I'm looking for a good package that can be used to implement an OpenID Connect Provider. I've found one called pyoidc, but the documentation around it is not great at all. Can anyone suggest a different package, or does anyone have an example implementation of pyoidc? | OpenId Connect Provider Python 3 | 0.099668 | 0 | 1 | 3,016
21,378,559 | 2014-01-27T10:36:00.000 | 0 | 0 | 0 | 0 | python,django,bash,process,virtualenv | 21,577,197 | 1 | true | 1 | 0 | I've solved the mystery: Django was trying to send emails, but it could not because of an improper configuration, so it was hanging there forever trying to send those emails.
Most probably (I'm not sure here) Django calls an OS function or a subprocess to do so. The point is that the main process was forking itself and giving the job to a subprocess or thread, or whatever; I'm no expert in this.
It turns out that when your python is forked and you kill the parent, the children can apparently keep on living after it.
Correct me if I'm wrong. | 1 | 0 | 0 | I'm running a local django development server together with virtualenv, and for a couple of days it has been behaving in a weird way: sometimes I don't see any logs in the console, sometimes I do.
A couple of times I've tried to quit the process and restart it and I've got the port already taken error, so I inspected the running process and there was still an instance of django running.
Other SO answers said that this is due to the autoreload feature, well so why sometimes I have no problem at all and sometimes I do?
Anyway, out of curiosity I ran ps aux | grep python, and the result is always TWO running processes: one from the system python and one from my activated "virtualenv" python:
/Users/me/.virtualenvs/myvirtualenv/bin/python manage.py runserver
python manage.py runserver
Is this supposed to be normal? | Django from Virtualenv Multiple processes running | 1.2 | 0 | 0 | 311 |
21,378,653 | 2014-01-27T10:40:00.000 | 1 | 0 | 0 | 0 | python,django | 21,378,863 | 4 | false | 1 | 0 | If the functionality fits well as a method of some model instance, put it there. After all, models are just classes.
Otherwise, just write a Python module (some .py file) and put the code there, just like in any other Python library.
Don't put it in the views. Views should be the only part of your code that is aware of HTTP, and they should stay as small as possible. | 3 | 7 | 0 | I'd like to know where to put code that doesn't belong to a view, I mean, the logic.
I've been reading a few similar posts, but couldn't arrive at a conclusion.
What I could understand is:
A View is like a controller, and a lot of logic should not be put in the controller.
Models should not have a lot of logic either.
So where is all the logic based stuff supposed to be?
I'm coming from Groovy/Grails and for example if we need to access the DB or if we have a complex logic, we use services, and then those services are injected into the controllers.
Is it a good practice to have .py files containing things other than Views and Models in Django?
PS: I've read that some people use a services.py, but then other people say this is a bad practice, so I'm a little confused... | business logic in Django | 0.049958 | 0 | 0 | 7,262 |
21,378,653 | 2014-01-27T10:40:00.000 | 7 | 0 | 0 | 0 | python,django | 24,603,223 | 4 | false | 1 | 0 | Having a Java background, I can relate to this question.
I have been working on Python for quite some time. Even though I do my best to treat Java as Java and Python as Python, sometimes I mix them both so that I can get a good deal out of both.
In short:
Put all model-related stuff in the models app, from simple model definitions to custom save and pre-save hooks.
Put any request/response-related stuff in views, along with logic like verifying a JSON schema, validating the request body, handling exceptions and so on.
Put your business logic in a separate folder/app, or a module per views directory/app; meaning, have a separate middle layer between your models and views.
There isn't a strict rule for organising your code, as long as you are consistent.
Project : Ci
Models: ci/model/device.py
Views: ci/views/list_device.py
Business logic:
(1) ci/business_logic/discover_device.py
Or
(2) ci/views/discover_device.py | 3 | 7 | 0 | I'd like to know where to put code that doesn't belong to a view, I mean, the logic.
I've been reading a few similar posts, but couldn't arrive to a conclusion.
What I could understand is:
A View is like a controller, and lot of logic should not put in the controller.
Models should not have a lot of logic either.
So where is all the logic based stuff supposed to be?
I'm coming from Groovy/Grails and for example if we need to access the DB or if we have a complex logic, we use services, and then those services are injected into the controllers.
Is it a good practice to have .py files containing things other than Views and Models in Django?
PS: I've read that some people use a services.py, but then other people say this is a bad practice, so I'm a little confused... | business logic in Django | 1 | 0 | 0 | 7,262 |
21,378,653 | 2014-01-27T10:40:00.000 | 7 | 0 | 0 | 0 | python,django | 21,378,901 | 4 | false | 1 | 0 | I don't know why you say
we can't put a lot of logic in the controller, and we cannot have the models with a lot of logic either
You can certainly put logic in either of those places. It depends to a great extent what that logic is: if it's specifically related to a single model class, it should go in the model. If however it's more related to a specific page, it can go in a view.
Alternatively, if it's more general logic that's used in multiple views, you could put it in a separate utility module. Or, you could use class-based views with a superclass that defines the logic, and subclasses which inherit from it. | 3 | 7 | 0 | I'd like to know where to put code that doesn't belong to a view, I mean, the logic.
I've been reading a few similar posts, but couldn't arrive to a conclusion.
What I could understand is:
A View is like a controller, and a lot of logic should not be put in the controller.
Models should not have a lot of logic either.
So where is all the logic based stuff supposed to be?
I'm coming from Groovy/Grails and for example if we need to access the DB or if we have a complex logic, we use services, and then those services are injected into the controllers.
Is it a good practice to have .py files containing things other than Views and Models in Django?
PS: I've read that some people use a services.py, but then other people say this is a bad practice, so I'm a little confused... | business logic in Django | 1 | 0 | 0 | 7,262 |
21,389,767 | 2014-01-27T19:23:00.000 | 8 | 0 | 1 | 0 | python | 21,389,865 | 1 | false | 0 | 0 | Use a tuple to store a sequence of items that will not change.
Use a list to store a sequence of items that may change.
Use a dict when you want to associate pairs of two items. | 1 | 0 | 0 | I was having a discussion with a friend on Facebook today and he's just starting to learn python, as we were discussing he said this,
"I've written a million lines of code over the years and the whole idea of when to use a tuple vs list vs dictionary in python is just vague"
and I am having the exact same issue. Then I suggested he post here with questions and it dawned on me... why don't I post the question, since it's a big block for me as well?
So, programming nerds: in plain English, how would you best answer this? P.S. I love this website. | When to use a tuple vs list vs dictionary in python? | 1 | 0 | 0 | 908
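A three-line illustration of the rule of thumb:

point = (12.5, 7.0)              # tuple: a fixed pair that won't change
todo = ['wash car', 'buy milk']  # list: items come and go
todo.append('pay rent')
ages = {'alice': 31, 'bob': 27}  # dict: associate names with values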
21,390,913 | 2014-01-27T20:24:00.000 | 2 | 0 | 0 | 0 | python,google-drive-api,google-oauth | 21,409,448 | 1 | true | 0 | 0 | "How do you authenticate with the Google Drive API using your own credentials for your own Google Drive (without creating an application on the Google Developers Console)?"
You can't. The premise of OAuth is that the user is granting access to the application, and so the application must be registered. In Google's case, that's the API/Cloud Console.
In your case, there is no need to register each application that uses your helper scripts. Just create an app called helper_scripts, embed the client Id in your script source, and then reuse those scripts in as many applications as you like. | 1 | 0 | 0 | I have not found a satisfactory answer/tutorial for this, but I'm sure it must be out there. My goal is to access Google Drive programmatically using my credentials. A secondary and lower-priority goal is to do this properly and that means using OAuth rather than ClientLogin.
Thus: How do you authenticate with the Google Drive API using your own credentials for your own Google Drive (without creating an application on the Google Developers Console)?
All of the documentation assumes an application, but what I'm writing is merely helper scripts in Python 2.7 for my own benefit. | How to authenticate as myself for Google Drive API? | 1.2 | 0 | 1 | 359 |
21,392,346 | 2014-01-27T21:45:00.000 | 0 | 0 | 0 | 1 | python,linux,functional-programming,puppet,pipeline | 21,394,049 | 1 | false | 0 | 0 | I think you are asking how you can transform multiple files when there are dependencies between the files, and possibly parallelise. The problem of resolving the dependencies is called a topological sort. Fortunately, the make utility will handle all of this for you, and you can use the -j flag to parallelise, which is easier than doing this yourself. By default it will only regenerate files if the input files change, but this is easy enough to get around by ensuring all outputs and intermediate files of each batch are removed / not present prior to invocation. | 1 | 2 | 0 | I am trying to determine the best way to build a sort of pipeline system with many interdependent files that will be put through it, and I am wondering if anyone has specific recommendations regarding tools or approaches. We work mostly in Python and Linux.
We get files of experimental data that are delivered to "inbox" directories on an HPC cluster, and these must be processed in several linear, consecutive steps. The issue is that sometimes there are multiple samples that must be processed at some stages of the pipeline as a group, so e.g. samples can independently be put through steps A and B, but all samples in the group must have completed this process to proceed through step C (which requires all of the samples together).
It strikes me as a kind of functional problem, in that each step is kind of a modular piece and I will mostly only be checking for the existence of the output: if I have Sample 1 Step B output, I need Sample 2 Step B output so that I can then get Sample 1+2 C output.
I don't know a great deal about Puppet but I wonder if this kind of tool might be something I could use for this -- something that handles dependencies and deals with monitoring states? Any ideas?
Thanks,
Mario | How to deal with processing interdependent files in a pipeline | 0 | 0 | 0 | 126 |
21,393,558 | 2014-01-27T23:03:00.000 | 2 | 0 | 0 | 0 | python,ms-access,odbc | 21,393,854 | 3 | false | 0 | 0 | Trial and error showed that installing the "Access Database Engine" 2007 seemed to create a 32-bit ODBC source for Access .accdb files. | 1 | 4 | 0 | I have python 2.7 32 bit running on a Windows 8.1 64 bit machine.
I have Access 2013 and a .accdb file that I'm trying to access from python and pyodbc.
I can create a 64 bit DSN in the 64 bit ODBC manager. However, when I try to connect to it from python, I get the error:
Error: (u'IM002', u'[IM002] [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified')
Presumably, python is only looking for 32-bit DSNs and doesn't find the 64-bit one that I've created.
When I try to create a 32-bit DSN within the 32-bit ODBC manager, there is no driver for a accdb file (just .mdb).
I think I need a 32 bit ODBC driver for Access 2013 files (.accdb), but haven't been able to find one.
Is it possible to do what I'm trying to do? -- 32bit python access a Access 2013 .accdb file? | 32 bit pyodbc reading 64 bit access (accdb) | 0.132549 | 1 | 0 | 10,567 |
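Once the 32-bit ACE driver is installed, a DSN-less pyodbc connection sketch looks like this (the file path is a placeholder):

import pyodbc

conn = pyodbc.connect(
    r'DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};'
    r'DBQ=C:\path\to\database.accdb;')
cursor = conn.cursor()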
21,393,655 | 2014-01-27T23:09:00.000 | 1 | 0 | 0 | 0 | python,django | 21,394,103 | 2 | false | 1 | 0 | This might get tricky,
and a lot of the time it depends on what your constraints are. 1. Write your own save method for the model, and then delete the old image and replace it with the new one.
os.remove(info.photo.path)  # safer than shelling out with os.popen("rm ...")
2. Write a cron job to purge all the unreferenced files. Then again, if disk space is an issue you might want to do the 1st one, though that will add some delay to the page load.
Write a cron job to purge all the unreferenced files. But then again if disk space is a issue, you might want to do the 1st one, that will get you some delay in page load. | 1 | 0 | 0 | How I can replace one image uploaded by an ImageField (in my models) with the new one selected by the same ImageField?
And, can I delete all images uploaded by an ImageField when I delete the model object (bulk delete)? | django replace old image uploaded by ImageField | 0.099668 | 0 | 0 | 1,118 |
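A sketch of approach 1 as a save() override (the model and field names are illustrative):

import os
from django.db import models

class Photo(models.Model):
    image = models.ImageField(upload_to='photos')

    def save(self, *args, **kwargs):
        if self.pk:  # updating an existing row
            old = Photo.objects.get(pk=self.pk)
            # remove the previous file if a new one is replacing it
            if old.image and old.image.name != self.image.name:
                if os.path.isfile(old.image.path):
                    os.remove(old.image.path)
        super(Photo, self).save(*args, **kwargs)

The bulk-delete case can be handled the same way in an overridden delete() or a post_delete signal.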
21,394,321 | 2014-01-27T23:59:00.000 | 1 | 1 | 1 | 0 | c#,python,xml,serialization,data-storage | 21,394,349 | 2 | false | 0 | 0 | What you are trying to do is called serialization. JSON is an excellent option for doing this with support in both languages. | 1 | 1 | 0 | I have a bunch of python objects with fields containing arrays of various dimensions and data types (ints or floats) and I want to write this to a file which I can read into C# objects elsewhere. I'm probably just going to write an XML file, but I thought there might be a quicker/easier way saving and reading it. Also, the resulting XML file will be rather large, which I would like to avoid if it is not too much hassle.
Is there a tried and tested file format that is compatible (and simple to use) with both languages? | Most painless way to write structured data in python and read in C# | 0.099668 | 0 | 0 | 124 |
21,394,447 | 2014-01-28T00:10:00.000 | 1 | 0 | 1 | 0 | python,argparse | 21,394,942 | 2 | false | 0 | 0 | I am not very familiar with parser.parse_known_args() (note that it does exist in the argparse module shipped with Python 2.7). What you could do, though, is save the original sys.argv in, say, arg_list and do
indices = [arg_list.index(a) for a in selected_arguments]
This will return a list of the indices (positions) of the selected arguments. | 1 | 1 | 0 | I would like to get/record the indexes into the sys.argv list as the options are parsed.
I am trying to wrap another program with a python script.
And in the wrapper script I am trying to parse the options that matter to the script
and remove them from the argv list, so that I can pass the remainder of the arguments to the program being wrapped.
To do this, I am using parser.parse_known_args() so that I don't have to track every argument the program may support, just the ones that matter to the wrapper.
Now if the parsing recorded the indexes of arguments that need to be removed
I could remove them after parsing and pass the remaining args to the wrapped program.
How do I record this information during parsing?
Not all arguments that are meaningful to the wrapper should be removed, so I need to be selective. | How do I get/record the index of the option in the arg list during argparse | 0.099668 | 0 | 0 | 574
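Note that for the stated goal (forwarding the leftovers to the wrapped program) the index bookkeeping may be unnecessary, because parse_known_args() already returns the unconsumed argument strings; a sketch:

import argparse
import subprocess

parser = argparse.ArgumentParser()
parser.add_argument('--wrapper-flag', action='store_true')
opts, remaining = parser.parse_known_args()

# `remaining` is sys.argv[1:] minus everything the wrapper consumed
subprocess.call(['wrapped-program'] + remaining)  # hypothetical program name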
21,395,056 | 2014-01-28T01:12:00.000 | 0 | 0 | 0 | 0 | python,pyqt,qt-designer | 28,092,434 | 2 | false | 0 | 1 | Just because I had a similar problem and the reason wasn't a wrong object: The property's content can be accessed with toString(). | 1 | 3 | 0 | I've created a dynamic property in the Designer interface. How do I access this property in my code?
I don't see any properties listed with the name I've provided. I've found a dynamicPropertyNames property that contains a QByteArray object and the name I provided, but I cannot figure out how to access the data I stored (nor do I know if this is even the correct place to be querying).
Thanks! | How do I access a dynamic property in PyQt? | 0 | 0 | 0 | 1,777 |
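A hedged sketch of reading the value ('myDynamicProperty' stands in for the name set in Designer):

value = widget.property('myDynamicProperty')
# under PyQt4's default QVariant API this returns a QVariant:
text = value.toString()
# with the v2 API (or PyQt5) property() returns a plain Python value instead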
21,397,605 | 2014-01-28T05:30:00.000 | 0 | 1 | 0 | 0 | python-2.7,openerp,base | 21,398,386 | 2 | false | 1 | 0 | Why are you going into Installed Modules, searching for the base module and updating it?
You only have to update the module in which you have made changes, and only changes to XML files require an upgrade, not Python-only changes.
If you have changed a module's XML files, you have to update only that module.
If you update the base module, it will update every module installed in your database,
because every module depends on base. We can call base the kernel of all our modules; every module depends on it, so if you update base it will update all modules.
If you have made some changes in sale, then you have to search for sale and update only the
sale module; do not update the base module.
Regards. | 2 | 0 | 0 | I'm new to OpenERP. I have modified the base module, and when I go to Installed Modules, search for the BASE module and click the upgrade button, it takes nearly 5 minutes. Can anyone please tell me how I can reduce the time taken to upgrade an existing module?
Note: I have the Messaging, Sales, Invoicing, Human Resource, Tools and Reporting modules installed; is it because I have more modules installed??
Thanks in advance. | Why is it taking more time, When i upgrade a module in Openerp | 0 | 0 | 0 | 155 |
21,397,605 | 2014-01-28T05:30:00.000 | 1 | 1 | 0 | 0 | python-2.7,openerp,base | 21,398,452 | 2 | true | 1 | 0 | As you have said that you are new to OpenERP, let me tell you something that will be very helpful: never make changes in the standard modules, and especially not in base. If you want to add or remove any functionality of a module, do this by creating a customized module in which you inherit the object you want, and make the changes as per
your requirement.
Now, regarding the time spent when upgrading the base module: this is because when you update base it will automatically update all the other modules which are already installed (in your case Sales, Invoicing, Human Resource, Tools and Reporting), as base is the main module on which all the other modules are dependent.
So it is better to make your changes in a customized module and upgrade that particular module only, not base.
Hope this will help you. | 2 | 0 | 0 | I'm new to OpenERP. I have modified the base module, and when I go to Installed Modules, search for the BASE module and click the upgrade button, it takes nearly 5 minutes. Can anyone please tell me how I can reduce the time taken to upgrade an existing module?
Note: I have the Messaging, Sales, Invoicing, Human Resource, Tools and Reporting modules installed; is it because I have more modules installed??
Thanks in advance. | Why is it taking more time, When i upgrade a module in Openerp | 1.2 | 0 | 0 | 155 |
21,397,757 | 2014-01-28T05:42:00.000 | 1 | 1 | 1 | 0 | python | 21,398,143 | 2 | true | 0 | 0 | If your proprietary bits are inside a binary DLL or SO, then there's no real value in making an interface layer a .pyc (as opposed to a .py). You can drop that all together or distribute it as an uncompiled python file. I don't know of any reasons to distribute compiled python files. In many cases, build environments treat them as stale byproducts and clean them out so your program might disappear. | 1 | 0 | 0 | Personally I think it's better to distribute .py files as these will then be compiled by the end-user's own python, which may be more patched.
What are the pros and cons of distributing .pyc files versus .py files for a commercial, closed-source python module?
In other words, are there any compelling reasons to distribute .pyc files?
Edit: In particular, if the .py/.pyc is accompanied by a DLL/SO module which is compiled against a certain version of Python. | Distributing .pyc files versus .py files for a commercial, closed-source python module | 1.2 | 0 | 0 | 1,333 |
21,399,625 | 2014-01-28T07:43:00.000 | 0 | 1 | 0 | 0 | php,python,web-services,integration | 21,410,252 | 1 | false | 0 | 0 | The easiest way to accomplish this is to build a private API for your PHP app to access your Python app. For example, if using Django, make a page that takes several parameters and returns JSON-encoded information. Load that into your PHP page, use json_decode, and you're all set. | 1 | 0 | 0 | I would like to integrate a Python application and a PHP application for data access. I have a Python app that stores data, and now I want to access that data from the Python app's database in the PHP application. Which methods are used for PHP-Python integration?
Thanks | Integration of PHP-Python applications | 0 | 1 | 0 | 561 |
21,402,952 | 2014-01-28T10:25:00.000 | -1 | 0 | 0 | 0 | python,mongodb,deployment,architecture | 21,403,116 | 3 | false | 1 | 0 | Try Django... I'm not sure, but it's worth trying. | 1 | 0 | 0 | As the basic framework core:
Request Lifecycle REST
Routing
Requests & Input
Views & Responses
Controllers
Errors & Logging
And these features would make our dev easy and fast:
Authentication
Cache
Core Extension
Events
Facades
Forms & HTML
IoC Container
Mail
Pagination
Queues
Security
Session
SSH
Templates
Unit Testing
Validation
And really good support for MongoDB.
Is there any such framework? | Any python web framework with the following features? | -0.066568 | 0 | 0 | 343 |
21,407,006 | 2014-01-28T13:33:00.000 | 0 | 0 | 0 | 0 | networking,python-3.x | 21,416,803 | 1 | false | 0 | 0 | When a device joins the subnet, it usually first sends an ARP request for its gateway.
You can listen for and analyze ARP packets to identify newly joined devices. | 1 | 0 | 0 | I want to write a small tool which tells me every time someone joins the same subnet/network. I could do this with timed network scans, but that wouldn't be real time.
Is there an easy solution for this? | Realtime network scanner in python | 0 | 0 | 0 | 208 |
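A sketch of the ARP-listening approach with scapy (assuming scapy is installed; sniffing usually needs root):

from scapy.all import sniff, ARP

seen = set()

def on_arp(pkt):
    if ARP in pkt and pkt[ARP].op == 1:  # who-has request
        src = pkt[ARP].psrc
        if src not in seen:
            seen.add(src)
            print('new host on subnet: %s' % src)

sniff(filter='arp', prn=on_arp, store=0)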
21,407,492 | 2014-01-28T13:55:00.000 | 0 | 1 | 0 | 0 | python | 21,739,346 | 1 | false | 0 | 0 | Can you break the problem down further?
How should the operator see the results: as a spreadsheet, or some other way?
For example, can you script outside of GeoMagic to fetch result sets and display them to the operator, then write the approved results back to another dataset,
and then, at the end, create the report within GeoMagic from the "approved" dataset? | 1 | 1 | 0 | I am trying to automate the report creation in Geomagics, using the create_report() function.
However, we have several sets of results, which need to be reviewed by a human operator (within the Geomagics interface) before the various reports can be created if the results are considered acceptable. Since create_report() works on the current ResultObject, I'd like to be able to set this to all my results in a loop.
(Alternatively, there might be a way to write a report for a specific object, not just the current result?) | How can I specify the current ResultObject in Geomagics with a python script | 0 | 0 | 0 | 133 |
21,408,290 | 2014-01-28T14:27:00.000 | 2 | 1 | 0 | 0 | python,amazon-web-services,amazon-s3,boto,amazon-iam | 21,409,299 | 1 | true | 0 | 0 | The boto library does handle credential rotation. Or, rather, AWS rotates the credentials and boto automatically picks up the new credentials. Currently, boto does this by checking the expiration timestamp of the temporary credentials. If the expiration is within 5 minutes of the current time, it will query the metadata service on the instance for the IAM role credentials. The service is responsible for rotating the credentials.
I'm not aware of a way to force the service to rotate the credentials but you could probably force boto to look for updated credentials by manually adjusting the expiration timestamp of the current credentials. | 1 | 3 | 0 | I'm just starting exploring IAM Roles. So far I launched an instance, created an IAM Role. Everything seems to work as expected. Currently I'm using boto (Python sdk).
What I don't understand:
Does boto take care of credential rotation? (For example, imagine I have an instance that should be up for a long time, and it constantly has to upload keys to an S3 bucket. If the credentials expire, do I need to 'catch' an exception and reconnect, or will boto silently do this for me?)
Is it possible to manually trigger IAM to change the credentials on the Role? (I want to do this because I want to test the above example. Or is there an alternative to this test case?) | How to manually change IAM Roles credentials? | 1.2 | 0 | 0 | 265
21,409,370 | 2014-01-28T15:11:00.000 | 0 | 0 | 1 | 1 | python,virtualenv | 27,750,040 | 1 | false | 0 | 0 | There is the option for virtualenv of --system-site-packages which will "Give access to the global site-packages modules to the virtual environment." If the parent host already has the mysql-python module installed, it will use that. | 1 | 2 | 0 | I don't have access to gcc on my shared hosting provider (Hostgator), so when I try to install mysql-python from within a virtualenv using pip install MySQL-python, I get unable to execute gcc: Permission denied. Is there another way to install the MySQL-python library in my virtualenv? | How can I install mysql-python in a virtualenv without compiling anything? | 0 | 0 | 0 | 398 |