Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
31,179,989 | 2015-07-02T08:57:00.000 | 0 | 0 | 1 | 1 | python | 31,180,100 | 1 | false | 0 | 0 | Oh, never mind, I already know how.
By using --noconsole when converting the .pyw
I think it works with .py extension too
Sorry for my bad English. | 1 | 0 | 0 | I recently converted a Python program to an executable (.exe), but I still see an output window when I launch the executable. I don't want any output window because the program is a background process. I tried using the .pyw extension before converting to an executable, with no success... I'm using PyInstaller to convert to executables. | Block output window from an executable converted from .pyw using pyinstaller | 0 | 0 | 0 | 135 |
31,182,595 | 2015-07-02T10:54:00.000 | 0 | 0 | 1 | 1 | python,python-2.7,pip,fedora,python-3.4 | 42,448,451 | 4 | false | 0 | 0 | On Fedora 25 you can just do the following:
Copy the file: sudo cp /usr/bin/pip /usr/bin/pip3
Then edit it to change #!/usr/bin/python to #!/usr/bin/python3
Run pip3 -V to check that it works.
This solution can also work on other versions of Fedora. | 3 | 2 | 0 | As I have started to use Python 3.4, I need pip to maintain libraries on both Python 2.7 and Python 3.4.
How to select the appropriate pip quickly using terminal?
Note:
This question is NOT related to Virtualenv but with the default python2.7 and python3.4 that comes with Fedora 22 Workstation.
As a temporary fix, I am using PyCharm to manage libraries. | How to switch between python2 and python3 pip on Fedora 22? | 0 | 0 | 0 | 3,457 |
31,182,595 | 2015-07-02T10:54:00.000 | 3 | 0 | 1 | 1 | python,python-2.7,pip,fedora,python-3.4 | 32,880,093 | 4 | false | 0 | 0 | I never use pip install directly (when outside a venv, at least).
Instead I use python-<version> -m pip install --user <packages>, which always does what I really meant regardless of what version the wrapper scripts are for. This is especially useful if I've locally installed a newer version of pip. | 3 | 2 | 0 | As I have started to use Python 3.4, I need pip to maintain libraries on both Python 2.7 and Python 3.4.
How to select the appropriate pip quickly using terminal?
Note:
This question is NOT related to Virtualenv but with the default python2.7 and python3.4 that comes with Fedora 22 Workstation.
As a temporary fix, I am using PyCharm to manage libraries. | How to switch between python2 and python3 pip on Fedora 22? | 0.148885 | 0 | 0 | 3,457 |
31,182,595 | 2015-07-02T10:54:00.000 | 2 | 0 | 1 | 1 | python,python-2.7,pip,fedora,python-3.4 | 31,182,728 | 4 | true | 0 | 0 | Fedora separates Python 2.x and 3.x's environments. yum install python-pip will give you an executable called pip which you can use for Python 2.x packages, and yum install python3-pip will give you an executable called pip3for managing Python 3.x packages.
You can install either, both or neither - they will not interfere with each other. | 3 | 2 | 0 | As I have started to use Python 3.4, I need pip to maintain libraries on both Python 2.7 and Python 3.4.
How to select the appropriate pip quickly using terminal?
Note:
This question is NOT related to Virtualenv but with the default python2.7 and python3.4 that comes with Fedora 22 Workstation.
As a temporary fix, I am using PyCharm to manage libraries. | How to switch between python2 and python3 pip on Fedora 22? | 1.2 | 0 | 0 | 3,457 |
31,182,671 | 2015-07-02T10:57:00.000 | 1 | 0 | 0 | 0 | python,kivy | 31,183,217 | 2 | false | 0 | 1 | If you're using windows with kivy's portable package, I think you can get a shell with kivy's env by running the kivy executable. Assuming so, I think you can run pip install requests in this shell to install it to kivy's environment.
Edit: I see you've now noted you are using OS X, but something similar may be true. I don't know about this though. | 1 | 2 | 0 | I'd like to import the "requests" library into my Kivy application. How do I go about that? Simply giving import requests is not working out.
Edit:
I'm using Kivy on Mac OS X, 10.10.3. | Importing libraries in Kivy | 0.099668 | 0 | 1 | 1,389 |
31,183,654 | 2015-07-02T11:40:00.000 | 1 | 1 | 0 | 0 | python,jenkins,python-unittest | 31,184,389 | 1 | true | 0 | 0 | When you publish the unit test results in the post build section (If you aren't already, you should), you set the thresholds for failure.
If you don't set thresholds, the build will always fail unless running them returns a non zero exit code.
To always fail the build on any unit test failure, set all failure thresholds to zero.
Note that you can also set thresholds for skipped tests as well. | 1 | 1 | 0 | I have few tests written in Python with unittest module. Tests working properly, but in Jenkins even if test fails, build with this test is still marked as successive. Is there way to check output for python test and return needed result? | How to make jenkins trigger build failure if tests were failed | 1.2 | 0 | 0 | 1,706 |
31,185,207 | 2015-07-02T12:52:00.000 | 0 | 1 | 0 | 1 | python,cron,beautifulsoup,crontab | 31,189,359 | 2 | false | 0 | 0 | ~/.local paths (populated by pip install --user) are available automatically i.e., it is enough if the cron job belongs to the corresponding user.
To configure arbitrary path, you could use PYTHONPATH envvar in the crontab. Do not corrupt sys.path inside your script. | 1 | 1 | 0 | I just wrote a small python script that uses BeautifulSoup in order to extract some information from a website.
Everything runs fine whenever the script is run from the command line. However run as a crontab, the server returns me this error:
Traceback (most recent call last):
File "/home/ws/undwv/mindfactory.py", line 7, in
from bs4 import BeautifulSoup
ImportError: No module named bs4
Since I do not have any root access to the server, BeautifulSoup was installed at the user directory: $HOME/local/lib/python2.7/site-packages
I suppose the cron tab does not look for modules in the user directory. Any ideas how to solve that? | How do I enable local modules when running a python script as a cron tab? | 0 | 0 | 0 | 1,207 |
31,186,959 | 2015-07-02T14:13:00.000 | -2 | 0 | 1 | 0 | python,python-2.7,lambda,list-comprehension | 31,187,126 | 4 | false | 0 | 1 | Ahhh, further Googling found a solution (admittedly one I would not have stumbled upon myself). The desired behavior can be invoked by use of a default argument:
lambdas = [lambda i=i: i for i in range(3)] | 1 | 4 | 0 | This question is distilled from the original application involving callback functions for Tkinter buttons. This is one line that illustrates the behavior.
lambdas = [lambda: i for i in range(3)]
if you then try invoking the lambda functions generated:
lambdas[0](), lambdas[1]() and lambdas[2]() all return 2.
The desired behavior was to have lambdas[0]() return 0, lambdas[1]() return 1, lambdas[2])() return 2.
I see that the index variable is interpreted by reference. The question is how to rephrase to have it treated by value. | How to generate a list of different lambda functions with list comprehension? | -0.099668 | 0 | 0 | 1,153 |
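A short, runnable illustration of the behavior described in this question and of the default-argument fix given in the answer above:

```python
# Late binding: every lambda closes over the same variable i,
# so each one sees its final value (2) when it is finally called.
late = [lambda: i for i in range(3)]
print([f() for f in late])    # [2, 2, 2]

# Default-argument fix: i=i is evaluated at definition time,
# binding each lambda to its own value of i.
bound = [lambda i=i: i for i in range(3)]
print([f() for f in bound])   # [0, 1, 2]
```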
31,188,415 | 2015-07-02T15:12:00.000 | 1 | 0 | 0 | 0 | python-3.x,scikit-learn,random-forest | 31,191,159 | 1 | false | 0 | 0 | Short version: This is all you.
I assume by "subsetting features for every node" you are referring to the random selection of a subset of samples and possibly features used to train individual trees in the forest. If that's what you mean, then you aren't building a random forest; you want to make a nonrandom forest of particular trees.
One way to do that is to build each DecisionTreeClassifier individually using your carefully specified subset of features, then use the VotingClassifier to combine the trees into a forest. (That feature is only available in 0.17/dev, so you may have to build your own, but it is super simple to build a voting classifier estimator class.) | 1 | 2 | 1 | I am trying to change the way that random forest algorithm using in subsetting features for every node. The original algorithm as it is implemented in Scikit-learn way is randomly subsetting. I want to define which subset for every new node from several choices of several subsets. Is there direct way in scikit-learn to control such method? If not, is there any way to update the same code of Scikit-learn? If yes, which function in the source code is what you think should be updated? | How to control feature subsetting in random forest in scikit-learn? | 0.197375 | 0 | 0 | 525 |
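The answer notes that VotingClassifier only landed in 0.17/dev and that a hand-rolled ensemble is easy to build; below is a minimal sketch of that idea. The feature subsets and the assumption of small, non-negative integer class labels are illustrative and not part of the original answer.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hand-picked feature subsets, one per tree (illustrative values).
FEATURE_SUBSETS = [[0, 1], [1, 2], [0, 3]]

def fit_forest(X, y):
    # One decision tree per chosen feature subset.
    return [DecisionTreeClassifier().fit(X[:, cols], y) for cols in FEATURE_SUBSETS]

def predict_forest(trees, X):
    # Majority vote over the individual tree predictions
    # (assumes small, non-negative integer class labels).
    votes = np.array([t.predict(X[:, cols])
                      for t, cols in zip(trees, FEATURE_SUBSETS)]).astype(int)
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```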
31,192,752 | 2015-07-02T19:10:00.000 | 3 | 0 | 0 | 0 | python,algorithm,openstreetmap | 31,197,249 | 2 | false | 0 | 0 | Your idea of processing the segments in bins is not bad. You do need to think through what happens to road segments that traverse bin boundaries.
Another idea is to Hough transform all the road segments. The infinite line that each segment lies on corresponds to a point in 2d Hough space: the polar angle of the line is one axis and the distance to the origin of the line's nearest point is the other. The transformation from two points on a line to a Hough point is simple algebra.
Now you can detect nearly co-linear road segments by using a closest point pair algorithm. Happily this can be done in O(n log n) expected time. E.g. using a k-d tree. Insert all the points in the tree. Use the standard k-d tree algorithm to find each point's nearest neighbor. Sort the pair distances and take a prefix of the result as pairs to consider, stopping where the pairs are too far apart to meet your criterion of "nearby and parallel". There are O(n) of such nearest neighbor pairs.
All that's left is to filter out segment pairs that - though nearly co-linear - don't overlap. These segments lie on or near different parts of the same infinite line, but they're not of interest. This is just a little more algebra.
There are reasonably good Wikipedia articles on all of the algorithms mentioned here. Look them up if they're not familiar. | 1 | 4 | 1 | I am working on a Python program that processes map data from Openstreetmap, and I need to be able to identify pairs of streets (ways) that are close to each other and parallel. Right now, the basic algorithm I'm using is quite inefficient:
Put all of the streets (Street objects) into a large list
Find every possible pair of two streets in the list using nested for loops; for each pair, draw a rectangle around the two streets and calculate the angle at which each street is oriented.
If the rectangles overlap, the overlapping area is big enough, and the angles are similar, the two streets in the pair are considered parallel and close to each other.
This works well for small maps but with large maps, the biggest problem obviously is that there would be a huge number of pairs to iterate through since there could be thousands of streets in a city. I want to be able to run the program on a large area (like a city) without having to split the area into smaller pieces.
One idea I'm thinking of is sorting the list of streets by latitude or longitude, and only comparing pairs of streets that are within, say, 50 positions away from each other in the list. It would probably be more efficient but it still doesn't seem very elegant; is there any better way?
Each Street is composed of Node objects, and I can easily retrieve both the Node objects and the lat/long position of each Node. I can also easily retrieve the angle at which a street is oriented. | Efficient algorithm for finding pairs of nearby, parallel streets in a map | 0.291313 | 0 | 0 | 750 |
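A rough sketch of the Hough-point plus k-d tree idea from the answer above. The Street accessor, the radius threshold, and the unscaled mixing of angle and distance in one metric are assumptions for illustration; a real implementation would also handle the angle wrap-around near 0 and pi.

```python
import numpy as np
from scipy.spatial import cKDTree

def hough_point(street):
    # Map the infinite line through the street's endpoints to
    # (orientation, signed distance of the line from the origin).
    (x1, y1), (x2, y2) = street.endpoints()       # assumed accessor
    theta = np.arctan2(y2 - y1, x2 - x1) % np.pi  # orientation in [0, pi)
    d = x1 * np.sin(theta) - y1 * np.cos(theta)   # distance of the line from the origin
    return theta, d

def candidate_pairs(streets, max_gap=0.5):
    # Nearly parallel, nearly co-linear streets have nearby Hough points,
    # so a radius query on a k-d tree finds candidate pairs in roughly O(n log n).
    pts = np.array([hough_point(s) for s in streets])
    tree = cKDTree(pts)
    return tree.query_pairs(max_gap)
```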
31,192,996 | 2015-07-02T19:25:00.000 | 3 | 0 | 1 | 0 | python,python-3.x,python-asyncio | 31,204,827 | 1 | true | 0 | 0 | Historically run_in_executor appeared very early and it was an event loop's method. It's modeled after twisted's methods for running code in thread pool. After appearing the run_in_executor has never changed.
It's low-level function, that accepts callback and sits pretty close to other functions which accepts callback, not couroutine: call_soon(), call_later(), add_reader() etc. All those are methods of event loop.
asyncio.gather was invited much later, after about a year of the library development. It is placed on higher abstraction level, works with coroutines and pushed along with other coroutine-related functions like wait() or sleep(). | 1 | 1 | 0 | For example, asyncio.gather has signature asyncio.gather(*coros_or_futures, loop=None, return_exceptions=False).
I can pass specific loop or leave None (and default event loop will be used).
Why doesn't BaseEventLoop.run_in_executor defined same way, like: asyncio.run_in_executor(executor, callback, *args, loop=None)?
If there was some important reason to place it into BaseEventLoop? | Why run_in_executor placed in BaseEventLoop? | 1.2 | 0 | 0 | 629 |
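For context, a small sketch (using the Python 3.4-era generator-based coroutine syntax) of the two levels the answer contrasts: run_in_executor is a loop method that takes a plain callable, while gather is a module-level function that works on coroutines.

```python
import asyncio
import time

def blocking_io():
    time.sleep(1)            # a plain callable, not a coroutine
    return "done"

@asyncio.coroutine
def main(loop):
    # Low-level, callback-flavoured API: a method of the event loop.
    result = yield from loop.run_in_executor(None, blocking_io)
    # Higher-level, coroutine-flavoured API: a module-level function.
    yield from asyncio.gather(asyncio.sleep(0.1), asyncio.sleep(0.2))
    return result

loop = asyncio.get_event_loop()
print(loop.run_until_complete(main(loop)))   # "done"
```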
31,196,818 | 2015-07-03T00:40:00.000 | 13 | 0 | 1 | 0 | python,debugging,ipdb | 40,893,062 | 4 | true | 0 | 0 | This could sound obvious: jump makes you jump.
This means that you don't execute the lines you jump: you should use this to skip code that you don’t want to run.
You probably need tbreak (Temporary breakpoint, which is removed automatically when it is first hit. The arguments are the same as break) as I did when I found this page. | 2 | 25 | 0 | Is there a command to step out of cycles (say, for or while) while debugging on ipdb without having to use breakpoints out of them?
I use the until command to step out of list comprehensions, but don't know how could I do a similar thing, if possible, of entire loop blocks. | ipdb debugger, step out of cycle | 1.2 | 0 | 0 | 15,509 |
31,196,818 | 2015-07-03T00:40:00.000 | 23 | 0 | 1 | 0 | python,debugging,ipdb | 32,097,568 | 4 | false | 0 | 0 | You can use j <line number> (jump) to go to another line.
for example, j 28 to go to line 28. | 2 | 25 | 0 | Is there a command to step out of cycles (say, for or while) while debugging on ipdb without having to use breakpoints out of them?
I use the until command to step out of list comprehensions, but don't know how could I do a similar thing, if possible, of entire loop blocks. | ipdb debugger, step out of cycle | 1 | 0 | 0 | 15,509 |
31,197,136 | 2015-07-03T01:28:00.000 | 5 | 0 | 1 | 0 | python,dictionary | 31,197,166 | 1 | true | 0 | 0 | A dictionary is a simpler data structure that takes up less space and is a bit faster. It only needs to maintain a hash table, while an OrderedDict maintains both a hash table and a linked list.
If you don't care about the order of keys, go with the simpler option.
Also not to be overlooked, there's language level support for dicts. It's easy to type {k1: v1, k2: v2}. That's another win for dicts. An unfair one, perhaps, but there you go. | 1 | 5 | 0 | I find it annoying that Python dictionaries do not store keys in insertion order. Recently, I've started using OrderedDict, which is more convenient to use since it covers this drawback (for example, iterating over columns of a CSV file where the column order is supposed to match the key order of a dictionary).
That said, are there any distinct advantages that a dictionary has over an OrderedDict? If so, what are they? | Advantages of Dict over OrderedDict | 1.2 | 0 | 0 | 1,618 |
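A tiny illustration of the trade-off described above, ordering guarantees versus the plain literal syntax, on interpreters where dict order is not guaranteed:

```python
from collections import OrderedDict

plain = {'b': 2, 'a': 1, 'c': 3}                        # easy literal syntax
ordered = OrderedDict([('b', 2), ('a', 1), ('c', 3)])   # keeps insertion order

print(list(ordered))   # always ['b', 'a', 'c']
print(list(plain))     # implementation-defined order on older interpreters
```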
31,200,848 | 2015-07-03T07:17:00.000 | -1 | 0 | 1 | 0 | python,multithreading,pip | 31,200,965 | 3 | false | 0 | 0 | pip being run each time is a waste of bandwidth and resources.
In the virtualenv the installed packages stay installed. Hence you could set a flag or a file in the directory which stores a flag, on checking the flag you can execute pip or not. This is a much better solution. | 1 | 2 | 0 | I have a build script (bash) utilizing python pip to fetch requirements from a remote and put it into a virtual env. This build script can be invoked by another script that will call it with any number of threads and different targets. This causes pip to be re-run for each invocation. It will try to check the same requirements for the same virtual env.
Will this be incompatible with pip? | Is pip Thread Safe? | -0.066568 | 0 | 0 | 592 |
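A minimal sketch of the flag-file idea from the answer; note that the check below is not atomic, so truly concurrent invocations would still need a lock of some kind. The paths and names are illustrative.

```python
import os
import subprocess

def ensure_requirements(venv_dir):
    # Only run pip once per virtualenv; later invocations see the flag and skip it.
    flag = os.path.join(venv_dir, ".requirements_installed")
    if not os.path.exists(flag):
        pip = os.path.join(venv_dir, "bin", "pip")
        subprocess.check_call([pip, "install", "-r", "requirements.txt"])
        open(flag, "w").close()   # not atomic: concurrent callers still need a lock
```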
31,202,124 | 2015-07-03T08:29:00.000 | 1 | 0 | 1 | 0 | python,sorting,numpy,dictionary,scikit-learn | 31,213,214 | 2 | false | 0 | 0 | As you mention in the comments you don't know the size of the words/tweets matrix that you will eventually obtain, so that makes using an array a cumbersome solution.
It feels more natural to use a dictionary here, for the reasons you noted.
The keys of the dictionary will be the words in the tweets, and the values can be lists with (tweet_id, term_frequency) elements.
Eventually you might want to do something else (e.g. classification) with your term frequencies. I suspect this is why you want to use a numpy array from the start.
It should not be too hard to convert the dictionary to a numpy array afterwards though, if that is what you wish to do.
However note that this array is likely to be both very big (1M * number of words) and very sparse, which means it will contain mostly zeros.
Because this numpy array will take a lot of memory to store a lot of zeros, you might want to look at a data structure that is more memory efficient to store sparse matrix (see scipy.sparse).
Hope this helps. | 1 | 6 | 1 | I have to deal with a large data-set. I need to store term frequency of each sentence; which I can do either using a dictionary list or using NumPy array.
But, I will have to sort and append (in case the word already exists)- Which will be better in this case? | NumPy or Dictionary? | 0.099668 | 0 | 0 | 1,345 |
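A hedged sketch of the dictionary layout the answer suggests, plus one possible later conversion to a scipy.sparse matrix; the identifiers and toy data are illustrative assumptions.

```python
from collections import defaultdict
from scipy.sparse import dok_matrix

term_freq = defaultdict(list)        # word -> [(tweet_id, tf), ...]
term_freq['hello'].append((0, 2))
term_freq['world'].append((0, 1))
term_freq['hello'].append((1, 1))

# Later, convert to a sparse tweets-by-words matrix instead of a dense array.
vocab = {w: j for j, w in enumerate(sorted(term_freq))}
n_tweets = 2
mat = dok_matrix((n_tweets, len(vocab)))
for word, postings in term_freq.items():
    for tweet_id, tf in postings:
        mat[tweet_id, vocab[word]] = tf
```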
31,204,230 | 2015-07-03T10:13:00.000 | 2 | 0 | 0 | 1 | python,celery,celery-task | 31,204,413 | 1 | false | 0 | 0 | That's like asking 'how long is a piece of string' and I'm sure there isn't a single simple answer. Certainly it will be more than 8 threads, with a useful upper limit at the maximum concurrent I/O tasks needed, maybe determined by the number of remote users of your service that the I/O tasks are communicating with. Presumably at some number of tasks 'manipulating the data' will start to load up your processor and you won't be i/o bound any more. | 1 | 1 | 0 | If I'm scheduling IO bound task in celery and if my server spec was like Quad Core with 8GB RAM, How many workers and concurrency I can use.
If CPU bound processes are advised to use 4 workers and 8 concurrency for Quad Core processor. Whats the spec for IO bound process.
In my task I will be performing API calls, manipulating the received data and storing the processed data in server. | what is the maximum number of workers and concurrency can be configured in celery | 0.379949 | 0 | 0 | 1,364 |
31,204,723 | 2015-07-03T10:36:00.000 | 0 | 0 | 1 | 0 | python,regex,unix | 31,204,870 | 4 | false | 0 | 0 | No, you can't use fnmatch like that. They are not regex in the way you are using it. You need to give it a specific filename and then a pattern to match it to, it doesn't try to see if two patterns are consistent. | 3 | 1 | 0 | Can we use fnmatch with two regular expressions?
For example, if I use fnmatch("file*", "*", 0), will it match in this case? | fnmatch function with two regex parameters | 0 | 0 | 0 | 737 |
31,204,723 | 2015-07-03T10:36:00.000 | 0 | 0 | 1 | 0 | python,regex,unix | 31,204,795 | 4 | false | 0 | 0 | fnmatch is for matching filenames to "shell-style" expression (not regex to be clear). I have no idea what you're trying to accomplish, but in short, no. You need to give it a specific filename and then a pattern to match it to, it doesn't try to see if two patterns are consistent. | 3 | 1 | 0 | Can we use fnmatch with two regular expressions?
For example, if I use fnmatch("file*", "*", 0), will it match in this case? | fnmatch function with two regex parameters | 0 | 0 | 0 | 737 |
31,204,723 | 2015-07-03T10:36:00.000 | 1 | 0 | 1 | 0 | python,regex,unix | 31,204,839 | 4 | false | 0 | 0 | No. Patterns for fnmatch are NOT regular expressions - they are "Unix shell-style wildcards" according to the python standard lib documents | 3 | 1 | 0 | Can we use fnmatch with two regular expressions?
For example, if I use fnmatch("file*", "*", 0), will it match in this case? | fnmatch function with two regex parameters | 0.049958 | 0 | 0 | 737 |
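To make the point of these answers concrete: fnmatch compares one concrete name against one shell-style pattern; it never checks whether two patterns are mutually consistent.

```python
import fnmatch

# One concrete file name against one shell-style pattern:
print(fnmatch.fnmatch('file123.txt', 'file*'))   # True

# Two "patterns" are not compared with each other; here '*' simply
# matches the literal five-character string 'file*'.
print(fnmatch.fnmatch('file*', '*'))             # True, but not for the reason hoped
```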
31,205,122 | 2015-07-03T10:56:00.000 | 1 | 0 | 0 | 1 | python,pipe,subprocess,popen | 31,207,419 | 1 | true | 0 | 0 | The absolute path of Python in self.runcmd should do the magic!
Try using the absolute path of file name while opening the file in write mode. | 1 | 0 | 0 | I am trying to run a Python program from inside another Python program using these commands:
subprocess.call(self.runcmd, shell=True);
subprocess.Popen(self.runcmd, shell=True); and
self.runcmd = " python /home/john/createRecordSet.py /home/john/sampleFeature.dish "
Now the script runs fine but the file its supposed to write to is not even getting created, i'm using "w" mode for creating and writing | subprocess.popen( ) executing Python script but not writing to a file | 1.2 | 0 | 0 | 572 |
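A small sketch of the advice in the answer (absolute interpreter path plus an absolute output path). The script and data paths come from the question; the interpreter location and output file name are assumptions.

```python
import subprocess

# An absolute interpreter path avoids picking up a different (or no) python
# when the parent process runs with a minimal environment.
runcmd = "/usr/bin/python /home/john/createRecordSet.py /home/john/sampleFeature.dish"
subprocess.call(runcmd, shell=True)

# Inside createRecordSet.py, write with an absolute path too, so the output
# does not land in whatever the current working directory happens to be.
with open("/home/john/output.txt", "w") as f:
    f.write("processed\n")
```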
31,208,102 | 2015-07-03T13:32:00.000 | 1 | 1 | 0 | 1 | python,gpio,messagebroker,iot,hivemq | 31,220,724 | 2 | false | 0 | 0 | Start HiveMQ with the following: ./bin/run.sh &
Yes it is possible to subscribe to two topics from the same application, but you need to create separate subscribers within your python application. | 1 | 0 | 0 | I recently installed HiveMQ on a Ubuntu machine and everything works fine. Being new to Linux( I am more on windows guy) , I am stuck with following question.
I started HiveMQ with command as ./bin/run.sh . A window opens and confirm that HiveMQ is running..Great !!!. I started this with putty and when I close the putty , HiveMQ also stops. How to make HiveMQ run all the time ?.
I am using the HiveMQ for my IoT projects ( raspberry pi). I know to subscribe and publish to HiveMQ broker from python , but what confuses me is , should I be running the python program continuously to make this work ?. Assuming I need to trigger 2+ GPIO on Pi , can I write one program and keep it running by making it subscribe to 2+ topic for trigger events ?.
Any help is greatly appreciated.
Thanks | HiveMQ and IoT control | 0.099668 | 0 | 0 | 257 |
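The answer does not name a client library; the following sketch uses the common paho-mqtt package to show the "one long-running program, several subscriptions" idea. The broker address, topic names, and GPIO handling are assumptions for illustration.

```python
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # One long-running script can react to several topics.
    if msg.topic == "pi/gpio/17":
        print("toggle GPIO 17:", msg.payload)
    elif msg.topic == "pi/gpio/27":
        print("toggle GPIO 27:", msg.payload)

client = mqtt.Client()
client.on_message = on_message
client.connect("hivemq.local", 1883)   # the broker host, assumed
client.subscribe("pi/gpio/17")
client.subscribe("pi/gpio/27")
client.loop_forever()                   # yes, the script keeps running
```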
31,209,635 | 2015-07-03T14:55:00.000 | 0 | 0 | 0 | 1 | python,homebrew | 31,891,599 | 1 | false | 0 | 0 | This happened to me when I installed Python 2.7.10 using brew. My PATH was set to /usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin and which python returned /usr/local/bin/python (which is symlinked to Python 2.7.10.)
Problem went away when I closed and restarted Terminal application. | 1 | 1 | 0 | I used Homebrew to install python, the version is 2.7.10, and the system provided version is 2.7.6. My PATH environment variable is set to /usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin", so my terminal DOES know to look at the Homebrew bin folder first!
However, when I run python, it still defaults to 2.7.6, the system provided version (the interpreter that shows up says 2.7.6 at the top). If I run /usr/local/bin/python, it runs 2.7.10, which is what I want.
If my PATH variable is properly set, then how is it possible that terminal still finds /usr/bin/python first? | Using Homebrew python instead of system provided python | 0 | 0 | 0 | 504 |
31,211,941 | 2015-07-03T17:37:00.000 | 1 | 1 | 1 | 0 | c#,python | 31,212,458 | 2 | false | 0 | 1 | Convert your Python script to executable using py2exe.exe, and call it from C# using Process.
Another way is to execute cmd command "python program.py" from your C# program. But, you've to make sure that environment variable for Python is set. | 1 | 2 | 0 | I have a C# Application and I want to run python script by passing some arguments and i also want to return some values from python. should i make a .dll file of my python script ? or is there any other way.
I can't use ironpython because i am unable to import my python project libraries in ironpython
thanks | Executing Python Script from C# (ironPython is not an option) | 0.099668 | 0 | 0 | 339 |
31,212,059 | 2015-07-03T17:46:00.000 | 0 | 0 | 0 | 0 | python,selenium,automation,webdriver | 31,212,344 | 3 | false | 1 | 0 | Selenium is really designed to be an external control system for a web browser. I don't think of it as being the source of test data, itself. There are other unit-testing frameworks which are designed for this purpose, but I see Selenium's intended purpose to be different. | 1 | 3 | 0 | Well, the title says it all... It is possible to perform an XmlHttpRequest from Selenium/Webdriver and then render the output of that requests in a browser instance ? If so, can you enlight me please ? | It's possible to do an XHR call and render the output with Selenium? | 0 | 0 | 1 | 1,233 |
31,212,405 | 2015-07-03T18:18:00.000 | 5 | 0 | 1 | 0 | python,python-3.x,pycharm,fabric | 52,993,335 | 1 | false | 0 | 0 | This is possible to do (at least with the current PyCharm 2018.2.4), but it takes some manual effort and cannot be done through the GUI.
Exit PyCharm
Navigate to the .idea folder of your project
Edit modules.xml
Duplicate the <module> line and change the fileurl and filepath attributes. Mine looked like this when I was done:
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
<component name="ProjectModuleManager">
<modules>
<module fileurl="file://$PROJECT_DIR$/.idea/dataops.iml" filepath="$PROJECT_DIR$/.idea/dataops.iml" />
<module fileurl="file://$PROJECT_DIR$/.idea/dataops_py27.iml" filepath="$PROJECT_DIR$/.idea/dataops_py27.iml" />
</modules>
</component>
</project>
Copy $PROJECT_NAME.iml to the name you gave your new module. In my case I did:
cp dataops.iml dataops_py27.iml
Open your project back up in PyCharm and go to Preferences > Project > Project Interpreter. You will see the two modules (the initial module and the new one you just created). Select the new one and configure the interpreter by clicking the gear icon in the upper right corner of the window and selecting Add...
Go to Preferences > Project > Project Structure. Make sure your new module is selected, remove the old content root and add a new one.
Note: If you have many files in the folder and do not want the newly added interpreter to apply to all of them you can exclude them in the Exclude files: text box located at the bottom of the Preferences > Project > Project Structure setting. | 1 | 9 | 0 | I'm working on a Django project that is using Python 3 in a virtualenv. I just came across fabric, which only works under Python 2, so I installed it system wide instead of in my virtualenv (is it even possible to put this in my Python 3 virtualenv, btw?).
The problem here is that I've set PyCharm to use Python 3 as interpreter and having fabric installed for Python 2. When I edit my fabric file it says that all imports from fabric are unknown.
Is there any way I can solve this? Any way to assign my fabric file to use the Python 2 interpreter instead of Python 3, or some other solution? | Set different interpreters for specific files in PyCharm | 0.761594 | 0 | 0 | 2,018 |
31,216,203 | 2015-07-04T02:13:00.000 | 2 | 0 | 1 | 1 | python,linux | 31,216,258 | 2 | false | 0 | 0 | This question is based on a mistaken understanding of how kill -9 PID behaves (or kill with any other signal -- even though -9 can't be overridden by a process's signal handler, it can still be delayed if, for instance, the target is in a blocking syscall).
Thus: kill -9 "$pid", in shell, doesn't tell you when the signal is received either. A return code of 0 just means that the signal was sent, same as what Python's os.kill() returning without an exception does.
The underlying kill() system call -- invoked by both os.kill() and the kill shell command -- has no way of returning result information. Thus, that information is not available in any language. | 2 | 0 | 0 | Is there an alternative to the os.kill function in Python 3 that will give me a return code? I'd like to verify that a process and its children actually do get killed before restarting them.
I could probably put a kill -0 loop afterwards or do a subprocess.call(kill -9 pid) if I had to but I'm curious if there's a more elegant solution. | Python alternative to os.kill with a return code? | 0.197375 | 0 | 0 | 1,632 |
31,216,203 | 2015-07-04T02:13:00.000 | 0 | 0 | 1 | 1 | python,linux | 31,216,218 | 2 | false | 0 | 0 | os.kill() sends a signal to the process. The return code will still be sent to the parent process. | 2 | 0 | 0 | Is there an alternative to the os.kill function in Python 3 that will give me a return code? I'd like to verify that a process and it's children actually do get killed before restarting them.
I could probably put a kill -0 loop afterwards or do a subprocess.call(kill -9 pid) if I had to but I'm curious if there's a more elegant solution. | Python alternative to os.kill with a return code? | 0 | 0 | 0 | 1,632 |
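The question already mentions a kill -0 loop; here is a hedged Python sketch of that "send the signal, then poll with signal 0" pattern. The timeout is an arbitrary assumption.

```python
import errno
import os
import signal
import time

def kill_and_wait(pid, timeout=5.0):
    os.kill(pid, signal.SIGKILL)          # request termination; no result comes back
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            os.kill(pid, 0)               # signal 0: existence check only
        except OSError as e:
            if e.errno == errno.ESRCH:    # no such process, so it is gone
                return True
            raise
        # A direct child stays visible as a zombie until reaped with os.waitpid().
        time.sleep(0.1)
    return False
```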
31,217,356 | 2015-07-04T05:45:00.000 | 6 | 1 | 1 | 0 | python | 31,217,400 | 1 | false | 0 | 0 | Add all your imports in a single txt file, say requirements.txt and every time you run your program on a new system, just do a
pip install -r requirements.txt
Most code editors, like PyCharm, do this for you on the first run.
You can do a pip freeze > requirements.txt to get all the installed/required packages. | 1 | 2 | 0 | Instead of having to run a python script and then get errors saying "ImportError: No module named aaa" because that module isn't installed in my system and then install it and then run the script again, and then maybe get the same kind of error for another module, is there any way to discover which modules aren't installed in my system and install them all at once if there're ones that aren't installed yet and which are required for the script? | How to install all imports at once? | 1 | 0 | 0 | 5,506 |
31,217,511 | 2015-07-04T06:08:00.000 | 3 | 0 | 1 | 0 | python | 31,217,528 | 2 | false | 0 | 0 | I always heard them named "dunder functions" as a shortname for "double-underscore functions".
It's a name a bit surprising at first, but easy to say and understand when talking. | 1 | 5 | 0 | Is there a term for functions starting and ending with the double underscore (init or getattr for example)? I understand their purpose, just wondering if there is a good way to refer to them! Thanks! | Is there a name for double underscore functions? | 0.291313 | 0 | 0 | 455 |
31,219,085 | 2015-07-04T09:30:00.000 | 0 | 0 | 0 | 0 | python-3.x | 31,219,138 | 1 | false | 0 | 0 | I guess I can try Symbolic differentiation. Also I can use SymPy with NumPy. | 1 | 0 | 1 | I am looking for any existing code for CreditGrades model.
Any python code is OKAY.
Also, I will need numeric differentiation for hedge ratio.
Any suggestion on numeric differentiation in python.
Thanks a lot. | Is there any python code for CreditGrades model? | 0 | 0 | 0 | 140 |
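Since the answer mentions SymPy for symbolic differentiation and the question asks about numeric differentiation for the hedge ratio, here is a tiny sketch of both; the example functions are placeholders, not CreditGrades formulas.

```python
import sympy

# Symbolic derivative with SymPy, as the answer suggests trying first.
s = sympy.Symbol('s')
print(sympy.diff(s * sympy.log(s), s))   # log(s) + 1

# Simple central-difference fallback for a numeric hedge-ratio derivative.
def numeric_derivative(f, x, h=1e-5):
    return (f(x + h) - f(x - h)) / (2.0 * h)

print(numeric_derivative(lambda x: x ** 2, 3.0))   # ~6.0
```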
31,219,536 | 2015-07-04T10:25:00.000 | 2 | 0 | 1 | 0 | python,tabs,ide | 31,219,572 | 1 | true | 0 | 0 | To tab (indent) multiple lines in IDLE use CTRL + ] (The ] key on the keyboard) .
To tab them in the other direction (to decrease the indentation - dedent) , use CTRL + [ . . | 1 | 1 | 0 | I'm having a problem with the Python IDLE. If I try to tab in multiple lines (Mark lines+Press [Tab]) it just replaces the lines, and doesn't tab them in. If I try to tab the out (Mark lines+Press [Tab+Shift]) the region will turn white while marked. If I try to tab in a single line (Press [Tab]) it tabs way too far (modified 5 spaces, tabs about 20). If I try to tab out a single line (Press [Tab+Shift]), nothing happens. Is there another one having this issue, or an idea to fix it?
Caps and Numlock didn't change anything. | Python IDLE (2.7): multi-tabbing not working | 1.2 | 0 | 0 | 85 |
31,225,861 | 2015-07-04T23:17:00.000 | 1 | 0 | 1 | 0 | python,igraph,anaconda,pkg-config | 31,227,686 | 1 | false | 0 | 0 | I faced some installation issues with Anaconda and my fix was to download manually the components of the Anaconda package.
If you use sudo apt-get install python3-numpy,
for example, it will download the package as well as all its dependencies.
So all you have to do is download the major libraries.
Although I don't believe pkg-config causes conflicts with Anaconda. Give it a shot, should be easy to resolve issues if any at all. | 1 | 1 | 0 | I'm using anaconda python 2.7, and keep finding problems installing python libraries using pip that seem to rely on pkg-config. In particular, python-igraph (although the author of that library kindly added a patch to help conda users) and louvain (which I have yet to fix).
Would installing pkg-config lead to conflicts with anaconda? Is there a way to set them up to play nice?
Thanks! | Anaconda and pkg-config on osx 10.10: how to prevent pip installation problems? | 0.197375 | 0 | 0 | 365 |
31,225,935 | 2015-07-04T23:30:00.000 | 1 | 0 | 0 | 0 | python,machine-learning,svm,random-forest,text-classification | 31,226,011 | 2 | false | 0 | 0 | It is very hard to answer this question without looking at the data in question.
SVM does have a history of working better with text classification - but machine learning by definition is context dependent.
Consider the parameters by which you are running the random forest algorithm. What are your number and depth of trees, are you pruning branches? Are you searching a larger parameter space for SVMs therefore are more likely to find a better optimum. | 2 | 0 | 1 | I am making an application for multilabel text classification .
I've tried different machine learning algorithms.
No doubt the SVM with linear kernel gets the best results.
I have also tried the Random Forest algorithm, and the results I have obtained have been very bad; both the recall and precision are very low.
The fact that the linear kernel responds with better results suggests that the different categories are linearly separable.
Is there any reason the Random Forest results are so low? | Random Forest for multi-label classification | 0.099668 | 0 | 0 | 2,289 |
31,225,935 | 2015-07-04T23:30:00.000 | 2 | 0 | 0 | 0 | python,machine-learning,svm,random-forest,text-classification | 31,234,627 | 2 | false | 0 | 0 | The ensemble of the random forest performs well across many domains and types of data. They are excellent at reducing error from variance and don't over fit if trees are kept simple enough.
I would expect a forest to perform comparably to a SVM with a linear kernel.
The SVM will tend to overfit more because it does not benefit from being an ensemble.
If you are not using cross-validation of some kind (at minimum, measuring performance on unseen data using a train/test split), then I could see you obtaining this type of result.
Go back and make sure performance is measured on unseen data, and it is likelier you'll see the RF performing more comparably.
Good luck. | 2 | 0 | 1 | I am making an application for multilabel text classification .
I've tried different machine learning algorithms.
No doubt the SVM with linear kernel gets the best results.
I have also tried the Random Forest algorithm, and the results I have obtained have been very bad; both the recall and precision are very low.
The fact that the linear kernel responds with better results suggests that the different categories are linearly separable.
Is there any reason the Random Forest results are so low? | Random Forest for multi-label classification | 0.197375 | 0 | 0 | 2,289 |
31,226,223 | 2015-07-05T00:30:00.000 | 1 | 0 | 0 | 0 | python,django,web-deployment,multi-tenant,saas | 31,227,735 | 1 | true | 1 | 0 | You could use different settings files, let's say settings_client_1.py and settings_client_2.py, import common settings from a common settings.py file to keep it DRY. Then add respective database settings.
Do the same with wsgi files, create one for each settings. Say, wsgi_c1.py and wsgi_c2.py
Then, in your web server direct the requests for client1.djangoapp.com to wsgi_c1.py and client2.djangoapp.com to wsgi_c2.py | 1 | 0 | 0 | I'm hoping to be pointed in the right direction as far as what tools to use while in the process of developing an application that runs on two servers per client.
[Main Server][Client db Server]
Each client has their own server which has a django application managing their respective data, in addition to serving as a simple front end.
The main application server has a more feature-rich front end, using the same models/db schemas. It should have full read/write access to the client's database server.
The final desired effect would be a typical SaaS type deal:
client1.djangoapp.com => Connects to mysql database @ client1_IP
client2.djangoapp.com => Connects to mysql database @ client2_IP...
Thanks in advance! | Effectively communicating between two Django applications on two servers (Multitenancy) | 1.2 | 1 | 0 | 843 |
31,230,376 | 2015-07-05T12:32:00.000 | 0 | 0 | 0 | 1 | python-2.7,utf-8,interactive-mode | 31,230,459 | 1 | false | 0 | 0 | I've found a partial solution to that issue: in the terminal.app settings, checking the 'escape non-ascii input' option lets python grab any utf-8 char; unfortunately, it prevents using them at the tcsh prompt as before; yet bash sees them as it should...
goodbye, tcsh! | 1 | 2 | 0 | I'm running Mac OS X 10.6.8;
I had been using python 2.5.4 for 8 years and had NO problem, and neither had I with python 2.6 and python 3.1 as well;
but I recently had to install python 2.7.10, which has become the default interpreter, and now there are issues when the interpreter is running and I need to enter expressions with utf-8 chars in interactive mode: the terminal rings its bell, and, of course, the characters do not show;
yet any python script containing expressions involving utf-8 strings would still be interpreted as usual; it's just that I cannot type directly anything but 7-bit chars, even though I tweaked the site.py script to make sure sys.getdefaultencoding() would yield the 'utf-8' value;
at the tcsh or bash prompt, typing utf-8 works all right, even as arguments to a python -c command; it's just that no python interpreter likes it: none of them — 2.5, 2.6, 2.7... although I haven't given python 3 a try yet!
Can anybody help? | mac os X 10.6.8 python 2.7.10 issues with direct typing of two-bytes utf-8 characters | 0 | 0 | 0 | 61 |
31,231,072 | 2015-07-05T13:53:00.000 | -2 | 0 | 1 | 0 | python,urllib | 31,231,131 | 2 | false | 0 | 0 | I am assuming merriam-webster is a website. Check if they an API. If so you can use it to achieve your task. If they do not have an API, I don't see how you can achieve your task without some highly advanced hacking, crawling algorithm. My suggestion is, as it appears you are trying to develop a dictionary type app, research dictionary websites that have open APIs. | 1 | 2 | 0 | How can I get specific word's definition from merriam-webster using python's script?
I have window with text box and button, and I want to print word's definition on the screen.
thanks | Python script for get data from merriam-webster | -0.197375 | 0 | 0 | 1,050 |
31,235,059 | 2015-07-05T21:14:00.000 | -1 | 0 | 0 | 1 | python,python-2.7,centos,sha | 31,235,259 | 2 | true | 0 | 0 | You can always install a different version of Python using the -altinstall argument, and then run it either in a virtual environment, or just run the commands with python(version) command.
A considerable amount of CentOS is written in Python so changing the core version will most likely break some functions. | 1 | 1 | 0 | I have a dedicated web server which runs CentOS 6.6
I am running some script that uses Python SHA module and I think that this module is deprecated in the current Python version.
I am considering downgrading my Python installation so that I can use this module.
Is there a better option? If not, how should I do it?
These are my Python installation details:
rpm-python-4.8.0-38.el6_6.x86_64
dbus-python-0.83.0-6.1.el6.x86_64
gnome-python2-2.28.0-3.el6.x86_64
gnome-python2-canvas-2.28.0-3.el6.x86_64
libreport-python-2.0.9-21.el6.centos.x86_64
gnome-python2-applet-2.28.0-5.el6.x86_64
gnome-python2-gconf-2.28.0-3.el6.x86_64
gnome-python2-bonobo-2.28.0-3.el6.x86_64
python-urlgrabber-3.9.1-9.el6.noarch
python-tools-2.6.6-52.el6.x86_64
newt-python-0.52.11-3.el6.x86_64
python-ethtool-0.6-5.el6.x86_64
python-pycurl-7.19.0-8.el6.x86_64
python-docs-2.6.6-2.el6.noarch
gnome-python2-libegg-2.25.3-20.el6.x86_64
python-iwlib-0.1-1.2.el6.x86_64
libxml2-python-2.7.6-17.el6_6.1.x86_64
gnome-python2-gnome-2.28.0-3.el6.x86_64
python-iniparse-0.3.1-2.1.el6.noarch
gnome-python2-libwnck-2.28.0-5.el6.x86_64
libproxy-python-0.3.0-10.el6.x86_64
python-2.6.6-52.el6.x86_64
gnome-python2-gnomevfs-2.28.0-3.el6.x86_64
gnome-python2-desktop-2.28.0-5.el6.x86_64
gnome-python2-extras-2.25.3-20.el6.x86_64
abrt-addon-python-2.0.8-26.el6.centos.x86_64
at-spi-python-1.28.1-2.el6.centos.x86_64
python-libs-2.6.6-52.el6.x86_64
python-devel-2.6.6-52.el6.x86_64 | How to downgrade python version on CentOS? | 1.2 | 0 | 0 | 8,690 |
31,241,531 | 2015-07-06T08:54:00.000 | 2 | 0 | 0 | 1 | python,automation,chef-infra,orchestration | 31,248,915 | 3 | false | 0 | 0 | If you have a chef server, you can do a search for the node that runs the ambari-server recipe. Then you use the IP of that machine. Alternately, you can use a DNS name for the ambari-server, and then update you DNS entry to point to the new server when it is available.
Other options include using confd with etcd, or using consul. Each would allow you to update your config post-chef with the ip of the server. | 1 | 5 | 0 | As part of a platform setup orchestration we are using our python package to install various software packages on a cluster of machines in cloud.
We have the following scenario:
Out of many software packages, one of them is Ambari (which helps in managing the Hadoop platform).
It works as follows: 'n' cluster machines report to 1 ambari-server.
For each cluster machine to do reporting, we have to install ambari-agent on it, modify its properties file with the ambari-server it is supposed to report to, and start ambari-agent.
What we are able to do:
We were successful in installing the ambari-server and the ambari-agents separately on our cluster machines with the help of separate Chef cookbooks.
What we are not able to do:
How can we modify each machine's ambari-agent properties file so that it points to our ambari-server IP? In general, what is an elegant way to wire up cluster-based software as part of Chef orchestration?
NB: the ambari-server is created on the fly, and hence its IP is only obtained at run time.
Is it possible? Are there any alternatives for the above problem?
Thanks | how can we wire up cluster based softwares using chef? | 0.132549 | 0 | 0 | 348 |
31,243,476 | 2015-07-06T10:27:00.000 | 1 | 0 | 1 | 0 | python,file,oop | 31,244,797 | 2 | true | 0 | 1 | You are at the design phase.
So, you have to weigh the chances of losing your data due to some error or crash against the importance of your data and the cost of "protecting" it.
Using with protects you from some errors. If you consider that python itself may crash (e.g.), then you still have some risk. Saving after each step is evidently safer. How useful it is depends on the volume of saved data (and there are techniques for reducing this as well), the impact of each save on performance, and the chances of such crashes.
Without any further info, and simply guessing, my answer to your specific question:
... is it ok to open the file in the init function of Labyrinth and
close it at the end of the game...? Or is it better to open and close
the file every time?
is that I would save after each step. | 1 | 1 | 0 | I have a design question. I'm doing an exercise in python (2.7) which is a simple game with a labyrinth. I need to read an write from a specific file every step of the game.
Currently I have 2 classes (Game and Labyrinth). The Labyrinth class is responsible for reading and writing the file.
My question is, is it ok to open the file in the init function of Labyrinth and close it at the end of the game within another function (which can be called from another class)? Or is it better to open and close the file every time?
The reason I don't save the file content into a string with readlines() is because I'm supposed to save to the file each step of the game. | Working with files in an object | 1.2 | 0 | 0 | 62 |
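A minimal sketch of the "save after each step" option the answer leans toward; the class layout and file name are assumptions based on the question.

```python
class Labyrinth(object):
    def __init__(self, path="labyrinth_state.txt"):
        self.path = path          # keep the path; do not hold the file open

    def save_step(self, state):
        # Open, write and close on every step, so a crash loses at most one step.
        with open(self.path, "a") as f:
            f.write(state + "\n")

    def load(self):
        with open(self.path) as f:
            return f.readlines()
```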
31,244,471 | 2015-07-06T11:17:00.000 | 2 | 0 | 1 | 0 | python,multithreading,rest,concurrency | 31,245,355 | 1 | true | 0 | 0 | If you use a WSGI compliant framework (or even just plain WSGI as the "framework") then concurrency is handled by the wsgi "container" (apache + mod_wsgi, nginx+gunicorn, whatever) either as threads, processes or a mix of both. All you have to do is write your code so it does support concurrency (ie : no mutable global state etc). | 1 | 0 | 0 | I am new to Python, so maybe this is a easy question, but I am planning to do an REST API application but I need it to be very fast and allow concurrent queries. Is there a way to do a application Python that allows concurrent execution and each execution responds to an REST API?
I don't know if threads are a way to go in this case? I know threading is not the same as concurrency, but maybe are a solution for this case. | Python concurrent REST API | 1.2 | 0 | 1 | 1,626 |
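As a minimal illustration of the answer's point, a WSGI callable that keeps no mutable global state can be run concurrently by whatever container hosts it; the JSON body is just an example.

```python
import json

def application(environ, start_response):
    # No mutable global state, so the WSGI container (mod_wsgi, gunicorn, ...)
    # can safely run many copies of this callable concurrently.
    body = json.dumps({"path": environ.get("PATH_INFO", "/")}).encode("utf-8")
    start_response("200 OK", [("Content-Type", "application/json"),
                              ("Content-Length", str(len(body)))])
    return [body]
```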
31,246,335 | 2015-07-06T12:49:00.000 | 3 | 0 | 0 | 1 | python,swift,nstask | 31,248,808 | 1 | true | 0 | 0 | This should work:
system("python EXECUTABLE_PATH")
Josh | 1 | 1 | 0 | I'm new to swift and I'm trying to run a Python file from it.
I already got the full path to the file, and my tries with NStask failed so far.
Now I'm somehow stuck launching the python executable with the path to the script as a parameter :-/ I already thought of just creating an .sh file with the appropriate command in it (python $filename) and launch that, but isn't there another way?
Of course I'm running OS X 10.10
Thanks for any help! | Launch Python script from Swift App | 1.2 | 0 | 0 | 3,016 |
31,247,510 | 2015-07-06T13:45:00.000 | 1 | 0 | 1 | 0 | python,easy-install,manual | 31,247,842 | 1 | true | 1 | 0 | User site-package refers to packages installed in ~/.local/lib[64]/python-VERSION/site-packages/
These packages are available like any other installed packages, but only to this specific user. They override system packages too. | 1 | 0 | 0 | I have read at the help page of easy_install that I have the ability to do "install in user site-package". What does this phrase mean, "user site-package"? How does it affect the functionality of the installed software? | Install in user site-package | 1.2 | 0 | 0 | 78
31,250,284 | 2015-07-06T15:53:00.000 | 1 | 0 | 0 | 0 | python,html,beautifulsoup,dom-manipulation | 31,250,400 | 2 | false | 1 | 0 | Beautiful Soup is a Python library for pulling data out of HTML and XML files. You can't directly use it for angular js code. | 1 | 0 | 0 | I have html code embeded with java script code related to angular js. Later I realized that rows and columns of html code need to be inter cahnged. As I have bunch of html files so decided to use Python script. Have tried using BeautifulSoup 4.x. I could able to do interchange of rows and columns but while writing back to disk, it is noticed that few java script tags are missing.
My question is can I use beautiful soup for angular js code? if yes, code snippet would be extremely helpful.
Thanks | manipulating javascript code with BeautifulSoup | 0.099668 | 0 | 0 | 744 |
31,251,808 | 2015-07-06T17:15:00.000 | 1 | 1 | 1 | 0 | python,pytest,nosetests,python-unittest,unittest2 | 31,251,843 | 1 | false | 0 | 0 | py.test -> session scoped fixtures and their finalization should help you
You can use conftest.py to code your fixture. | 1 | 0 | 0 | I'm running series of testcases in multiple files, but I want to run the prereq and cleanup only once through out the run, please let me know is there a way to do it? | Is there a way to run tests prerequisite once and clean up in the end in whole unit test run | 0.197375 | 0 | 0 | 472 |
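A hedged sketch of what the answer points at: a session-scoped, autouse fixture in conftest.py whose teardown runs once after the whole run (older pytest versions spell the decorator pytest.yield_fixture); the fixture name and print statements are illustrative.

```python
# conftest.py
import pytest

@pytest.fixture(scope="session", autouse=True)
def environment():
    print("setting up prerequisites")   # runs once, before the first test anywhere
    yield
    print("cleaning up")                # runs once, after the last test in the run
```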
31,252,360 | 2015-07-06T17:45:00.000 | 0 | 0 | 1 | 0 | python,arrays,image,png,pixel | 31,252,607 | 3 | false | 0 | 0 | You can use the existing pygame module. Import a file into a Surface using pygame.image.load. You can then access the bit array from this using pygame.surfarray.array2d. Please see the Pygame docs for more information. | 1 | 2 | 0 | I would like to convert a PNG image to a 2 dimensional array where each array holds a list of the RGB values of that specific pixel. How could one create a program to read-in a *.png file and convert to this type of data structure? | Converting PNG file to bitmap array in Python | 0 | 0 | 0 | 5,503 |
31,253,300 | 2015-07-06T18:43:00.000 | 1 | 0 | 1 | 0 | dronekit-python | 31,395,660 | 1 | false | 0 | 0 | Assuming you mean the RC Controller (not the flight controller), you'll need to turn off the radio failsafe (FS_THR_ENABLE). | 1 | 1 | 0 | I am trying to arm it using Dronkit-Python and I am able to get it to arm properly through code, however, it requires the controller to be on.
Is there anyway to bypass this? | How to arm a drone using Dronkit-Python without required the controller to be on? | 0.197375 | 0 | 0 | 189 |
31,257,353 | 2015-07-06T23:26:00.000 | 3 | 0 | 0 | 0 | python,excel,openpyxl | 31,262,488 | 2 | true | 0 | 0 | Worksheets have row_dimensions and column_dimensions objects which contain information about particular rows or columns, such as whether they are hidden or not. Column dimensions can also be grouped so you'll need to take that into consideration when looking. | 1 | 5 | 0 | I've been trying to write a script to copy formatting from one workbook to another and, as anyone dealing with openpyxl knows, it's a big script. I've gotten it to work pretty well, but one thing I can't seem to figure out is how to read from the original if columns are hidden.
Can anyone tell me where to look in a workbook, worksheet, column or cell object to see where hidden columns are? | Finding hidden cells using openpyxl | 1.2 | 1 | 0 | 5,323 |
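A short sketch of where the answer says to look; note that only rows and columns with an explicit dimension entry show up here, and the attribute names should be checked against your openpyxl version.

```python
from openpyxl import load_workbook

wb = load_workbook("source.xlsx")
ws = wb.active

# Dimension objects carry the hidden flag for rows and columns.
hidden_cols = [letter for letter, dim in ws.column_dimensions.items() if dim.hidden]
hidden_rows = [idx for idx, dim in ws.row_dimensions.items() if dim.hidden]
print(hidden_cols, hidden_rows)
```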
31,257,354 | 2015-07-06T23:26:00.000 | 0 | 0 | 1 | 1 | python,macos,segmentation-fault,skype4py | 31,281,242 | 3 | false | 0 | 0 | Ok, I was not able to solve the problem with Skype4Py on Mac OS. But perhaps someone will be useful to know that I have found a replacement. I used Ruby gem called skype. It works well on Mac OS. So, if you want to send message from script or anything else, just make gem install skype and start to write some ruby code :) | 1 | 1 | 0 | I have a problem with Skype4Py lib in Mac OS. As I know from documentation in github, in macos skype4py must install with specific arch. But when I try to use arch -i386 pip2 install skype4py I get error message Bad CPU type in executable. I am not experienced user in macos (this is been a remote control in team viewer) but what I doing wrong? Also I tried use virtualenv and at the start all be ok, but when in shell I make client.Attach() I have a segfault. Please help. Thanks in advance. | Bad CPU type in executable when doing arch -i386 pip2 install skype4py | 0 | 0 | 0 | 1,867 |
31,261,879 | 2015-07-07T07:03:00.000 | 8 | 1 | 0 | 0 | python,rar | 31,288,178 | 2 | false | 0 | 0 | os.system('rar a <archive_file_path> <file_path_to_be_added_to_archive>')
Can also be used to achieve this. | 1 | 6 | 0 | I want to create a .rar file by passing file paths of the files to be archived.
I have tried the rarfile package. But it doesn't have a 'w' option to write to the rarfile handler.
Is there any other way? | How to create a .rar file using python? | 1 | 0 | 0 | 9,232 |
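A slightly safer variant of the os.system approach above, using subprocess so the paths are not re-parsed by a shell; it still assumes the rar binary is installed and on PATH.

```python
import subprocess

def add_to_rar(archive_path, file_path):
    # 'rar a <archive> <file>' creates the archive if needed and adds the file.
    subprocess.check_call(["rar", "a", archive_path, file_path])
```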
31,263,032 | 2015-07-07T08:04:00.000 | 0 | 0 | 0 | 0 | python,django,python-3.x,internationalization | 31,278,044 | 1 | false | 1 | 0 | I just needed to add'django.middleware.locale.LocaleMiddleware' to my settings.py file in the MIDDLEWARE_CLASSES section. I figured if internationalization was already on that this wouldn't be necessary. | 1 | 0 | 0 | I have a Django 1.8 project that I would like to internationalize. I have added the code to do so in the application, and when I change the LANGUAGE_CODE tag, I can successfully see the other language used, but when I leave it on en-us, no other languages show up. I have changed my computer's language to the language in question (German), but calls to the site are still in English. What am I doing wrong?
Other things:
USE_I18N = true
LOCALE_PATHS works correctly (since changing the
LANGUAGE_CODE works)
I have also tried setting the LANGUAGES attribute, although I don't think I have to anyway.
EDIT: I have also confirmed that the GET call has the header: Accept-Language:de-DE,de;q=0.8,en-US;q=0.6,en;q=0.4, which contains de like I want. My locale folder has a folder de in it. | Django i18n Problems | 0 | 0 | 0 | 59 |
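A sketch of the settings.py change the answer describes for Django 1.8; middleware ordering matters (LocaleMiddleware after SessionMiddleware and before CommonMiddleware), and the surrounding entries are the usual defaults rather than anything from the question.

```python
# settings.py (Django 1.8); ordering matters for LocaleMiddleware.
MIDDLEWARE_CLASSES = (
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.locale.LocaleMiddleware',   # per-request language detection
    'django.middleware.common.CommonMiddleware',
    # ... the rest of the usual stack ...
)

USE_I18N = True
LANGUAGE_CODE = 'en-us'
```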
31,263,904 | 2015-07-07T08:49:00.000 | 0 | 0 | 1 | 0 | python,django,vagrant,virtualenv | 41,454,312 | 5 | false | 1 | 0 | No, in your case, you don't need to bother with virtualenv. Since you're using a dedicated virtual machine it's just a layer of complexity you, as a noob, don't really need.
Virtualenv is pretty simple, in concept and usage, so you'll layer it on simply enough when the need arises. But, imho, there is added value in learning how a python installation is truly laid out before adding indirection. When you hit a problem that it can solve, then go for it. But for now, keep it simple: don't bother. | 5 | 1 | 0 | This may well stink of newbie but...
I'm working on my first Django project, and am reading a lot that using virtualenv is considered Best Practice. I understand that virtualenv sandboxes all my python dependencies but I just don't know if this is necessary if I'm working in sandboxed VM's anyway? I'm developing in Vagrant, and won't be using these VM's for anything else, and I'll be deploying to a VM server that will only have this Django project on it. Is it possible that in the future further Django apps in this project will require different dependencies and so need to be in different virtualenv's? (Not sure if it works like that tbh?)
Am I just showing my inexperience and shortsightedness? | Do I really need to use virtualenv with Django? | 0 | 0 | 0 | 1,146 |
31,263,904 | 2015-07-07T08:49:00.000 | 0 | 0 | 1 | 0 | python,django,vagrant,virtualenv | 31,268,157 | 5 | false | 1 | 0 | if you develop multiple projects with different django versions, virtualenv is just a must thing, there is no other way (not that i know). you feel in heaven in virtualenv if you once experience the dependency hell. Even if you develop one project I would recommend to code inside virtualenv, you never know what comes next, back in the days, my old laptop was almost crashing because of so many dependency problems, after i discovered virtualenv, my old laptop became a brand new laptop for my eyes.. | 5 | 1 | 0 | This may well stink of newbie but...
I'm working on my first Django project, and am reading a lot that using virtualenv is considered Best Practice. I understand that virtualenv sandboxes all my python dependencies but I just don't know if this is necessary if I'm working in sandboxed VM's anyway? I'm developing in Vagrant, and won't be using these VM's for anything else, and I'll be deploying to a VM server that will only have this Django project on it. Is it possible that in the future further Django apps in this project will require different dependencies and so need to be in different virtualenv's? (Not sure if it works like that tbh?)
Am I just showing my inexperience and shortsightedness? | Do I really need to use virtualenv with Django? | 0 | 0 | 0 | 1,146 |
31,263,904 | 2015-07-07T08:49:00.000 | 1 | 0 | 1 | 0 | python,django,vagrant,virtualenv | 31,264,702 | 5 | false | 1 | 0 | There are many benefit of working with virtual environment on your development machine.
You can go to any version of any supported module to check for issues
Your project runs under separate environment without conflicting with your system wide modules and settings
Testing is easy
Muliple version of same project can co-exist. | 5 | 1 | 0 | This may well stink of newbie but...
I'm working on my first Django project, and am reading a lot that using virtualenv is considered Best Practice. I understand that virtualenv sandboxes all my python dependencies but I just don't know if this is necessary if I'm working in sandboxed VM's anyway? I'm developing in Vagrant, and won't be using these VM's for anything else, and I'll be deploying to a VM server that will only have this Django project on it. Is it possible that in the future further Django apps in this project will require different dependencies and so need to be in different virtualenv's? (Not sure if it works like that tbh?)
Am I just showing my inexperience and shortsightedness? | Do I really need to use virtualenv with Django? | 0.039979 | 0 | 0 | 1,146 |
31,263,904 | 2015-07-07T08:49:00.000 | 5 | 0 | 1 | 0 | python,django,vagrant,virtualenv | 31,264,104 | 5 | false | 1 | 0 | I would always recommend you use a virtualenv as a matter of course. There is almost no overhead in doing so, and it just makes things easier. In conjunction with virtualenvwrapper you can easily just type workon myproject to activate and cd to your virtualenv in one go. You avoid any issues with having to use sudo to install things, as well as any possible version incompatibilities with system-installed packages. There's just no reason not to, really. | 5 | 1 | 0 | This may well stink of newbie but...
I'm working on my first Django project, and am reading a lot that using virtualenv is considered Best Practice. I understand that virtualenv sandboxes all my python dependencies but I just don't know if this is necessary if I'm working in sandboxed VM's anyway? I'm developing in Vagrant, and won't be using these VM's for anything else, and I'll be deploying to a VM server that will only have this Django project on it. Is it possible that in the future further Django apps in this project will require different dependencies and so need to be in different virtualenv's? (Not sure if it works like that tbh?)
Am I just showing my inexperience and shortsightedness? | Do I really need to use virtualenv with Django? | 0.197375 | 0 | 0 | 1,146 |
31,263,904 | 2015-07-07T08:49:00.000 | 3 | 0 | 1 | 0 | python,django,vagrant,virtualenv | 31,264,041 | 5 | true | 1 | 0 | I don't have any knowledge on Vagrant but I use virtualenvs for my Django projects. I would recommend it for anyone.
With that said, if you're only going to be using one Django project on a virtual machine you don't need to use a virtualenv. I haven't come across a situation where apps in the same project have conflicting dependencies. This could be a problem if you have multiple projects on the same machine however. | 5 | 1 | 0 | This may well stink of newbie but...
I'm working on my first Django project, and am reading a lot that using virtualenv is considered Best Practice. I understand that virtualenv sandboxes all my python dependencies but I just don't know if this is necessary if I'm working in sandboxed VM's anyway? I'm developing in Vagrant, and won't be using these VM's for anything else, and I'll be deploying to a VM server that will only have this Django project on it. Is it possible that in the future further Django apps in this project will require different dependencies and so need to be in different virtualenv's? (Not sure if it works like that tbh?)
Am I just showing my inexperience and shortsightedness? | Do I really need to use virtualenv with Django? | 1.2 | 0 | 0 | 1,146 |
31,265,886 | 2015-07-07T10:21:00.000 | 1 | 0 | 1 | 0 | python | 31,267,382 | 1 | false | 0 | 0 | This answer is Linux-centric, seeing your mention of Raspbian console mode implies a Debian GNU/Linux console system.
If you're really talking about the console, this is possible albeit very hacky. Curses has a raw mode to read most keys, but Control is a modifier key, so won't show up that way. The method I can think of is to read the input device, much like X would. Use lsinput to find which device is your keyboard. As input-events demonstrates, you can see events there while they are also processed elsewhere. Among the downsides are that you won't know if the input was actually going to you (unless you track virtual console switching, job status etc) and you need access to the device, as that property implies it might be sensitive data such as a password being entered to login on another virtual console.
It might be simpler to remap what the control key itself sends using loadkeys, thus changing it from a modifier to a detectable key. It will still retain its lower level protocol properties (in USB HID boot protocol keyboard, for instance, it will have a dedicated bit rather than use one of typically only 6 slots for pressed keys).
Either way, this is not easy or portable, and won't work at all over terminal environments such as an ssh session. | 1 | 2 | 0 | I need to detect, in a Python console (text mode) program, when the Ctrl key is pressed on its own, without another key at the same time. I tried with getch from the curses library and stdin, but it waits for any key except Ctrl or Alt. The information I found on Stack Overflow always referred to a Windows/event environment, or to Ctrl pressed simultaneously with another key (Ctrl + C for example), which is not my case. I have found in some forums that it's not possible, but I can't believe it. | How can I capture in Python the Ctrl key (only Ctrl, without another key at the same time) when pressed? | 0.197375 | 0 | 0 | 937
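To make the answer's point concrete, here is a small hedged sketch (not from the original answer) of a curses read loop: getch() only ever sees complete characters, so Ctrl+A arrives as keycode 1 while a press of Ctrl on its own produces no event at all.

    import curses

    def main(stdscr):
        curses.raw()            # pass control characters straight to the program
        stdscr.addstr(0, 0, "Press keys (q quits). Ctrl on its own generates nothing.")
        while True:
            ch = stdscr.getch() # blocks until a character arrives; a lone Ctrl press never sends one
            if ch == ord('q'):
                break
            stdscr.addstr(2, 0, "got keycode: %d   " % ch)   # e.g. Ctrl+A shows 1
            stdscr.refresh()

    curses.wrapper(main)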
31,267,147 | 2015-07-07T11:19:00.000 | 0 | 0 | 0 | 0 | python,django,unit-testing,code-coverage,django-nose | 31,290,685 | 1 | true | 1 | 0 | I fixed this by uninstalling coverage.py with pip and installing it using easy_install. | 1 | 0 | 0 | I'm testing a web application using Django-nose to monitor the code coverage. At first it worked perfectly well, but when trying to generate HTML it fails with the error:
ImportError: No module named copy_reg
It happened after a few times (until then it worked). I tried it on a computer with newly installed django, django-nose and coverage, and the very same code works fine. Re-installing django and django-nose didn't help.
Any suggestions? Should I re-install any library or something?
Thank you in advance! | Django-nose test html error | 1.2 | 0 | 0 | 61 |
31,268,494 | 2015-07-07T12:24:00.000 | 1 | 0 | 0 | 1 | python,django,multithreading,celery | 31,272,086 | 1 | true | 1 | 0 | I'm assuming you don't want to wait because you are using an external service (outside of your control) for sending email. If that's the case then setup a local SMTP server as a relay. Many services such as Amazon SES, SendGrid, Mandrill/Mailchimp have directions on how to do it. The application will only have to wait on the delivery to localhost (which should be fast and is within your control). The final delivery will be forwarded on asynchronously to the request/response. STMP servers are already built to handle delivery failures with retries which is what you might gain by moving to Celery. | 1 | 1 | 0 | I have a use case where I have to send_email to user in my views. Now the user who submitted the form will not receive an HTTP response until the email has been sent . I do not want to make the user wait on the send_mail. So i want to send the mail asynchronously without caring of the email error. I am using using celery for sending mail async but i have read that it may be a overkill for simpler tasks like this. How can i achieve the above task without using celery | Async Tasks for Django and Gunicorn | 1.2 | 0 | 0 | 420 |
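A hedged sketch of the relay setup described above: Django hands mail to a local SMTP relay (for example Postfix forwarding to SES or SendGrid), so the request only waits on localhost delivery. The host, port and addresses below are assumptions for illustration.

    # settings.py
    EMAIL_BACKEND = "django.core.mail.backends.smtp.EmailBackend"
    EMAIL_HOST = "localhost"   # the local relay, not the external provider
    EMAIL_PORT = 25

    # somewhere in the request handling code
    from django.core.mail import send_mail

    def notify_user(address):
        # returns almost immediately; the relay handles retries and final delivery
        send_mail("Form received", "Thanks for your submission.",
                  "noreply@example.com", [address], fail_silently=True)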
31,274,509 | 2015-07-07T16:37:00.000 | 1 | 0 | 0 | 0 | python,webserver,database-connection,data-access-layer,database-server | 31,274,653 | 1 | true | 0 | 0 | Yes, as Python application lives inside of the web server process, this process will establish the connection with database server. | 1 | 0 | 0 | Let's suppose we have a single host where there is a Web Server and a Database Server.
An external application sends an http request to the web server to access to the database.
The data access logic is made for example by Python API.
The web server takes the request and the Python application calls the method to connect to the database, e.g. MySQLdb.connect(...).
Which process establishes the connection with the database server and communicates with it? Is it the web server process? | Which process establishes the connection with the database server? | 1.2 | 1 | 0 | 26 |
31,274,717 | 2015-07-07T16:48:00.000 | 0 | 1 | 0 | 1 | python,linux,crontab,redhat | 31,286,520 | 1 | true | 0 | 0 | Thank you all guys , but I did a little research and I have found a solution , first you have to test sudo python to see if it works with the module , if not you have to do alias for the sudo you put it inside /etc/bashrc [ to make it system wide alias ] , alias sudo='sudo env PATH=$PATH LD_LIBRARY_PATH=$LD_LIBRARY_PATH ORACLE_HOME=$ORACLE_HOME TNS_ADMIN=$TNS_ADMIN'
Then you have to change crontab to call a script to assign these values to the variables , using source /the script && /usr/bin/python script.py | 1 | 0 | 0 | I am using redhat linux platform
I was wondering why when I use python script inside crontab to run every 2 minutes it won't work even though when I do monitor the crond logs using
tail /etc/sys/cron it shows that it called the script , tried to add the path of python , [ I am using python2.6 -- so the path would be /usr/bin/python2.6 ]
the crontab -e [tried user and root same problem ]
*/2 * * * * /usr/bin/python2.6 FULLPATH/myscript.py | How to modify crontab to run python script? | 1.2 | 0 | 0 | 902 |
31,274,926 | 2015-07-07T16:59:00.000 | 2 | 1 | 1 | 0 | python,python-3.x,pyqt,nose,pyqt5 | 31,299,481 | 1 | false | 0 | 0 | You can try nosetests --processes=1 --process-restartworker | 1 | 1 | 0 | We use Python Nose for unit testing our GUI app components and application logic. Nose runs all tests in one process, not a big problem for the application logic but for a complex C++/Python lib like PyQt, this is a problem since there is "application wide" state that Qt creates, and it is hard to ensure that cleanup occurs at the right time such that every test method has a "clean Qt slate".
So I would prefer to have Nose start a separate Python process for each test method/function (or at least those that are flagged as needing this). I realize this would slow down the test suite, but the benefit should outweigh the cost. I have seen the Insulate and the Multiprocess plugins, but neither does this (Insulate only starts a separate process if a crash occurs -- Multiprocess just tries to use N processes for N cores).
Any suggestions? | Python Nose unit testing using separate (sequential) Python processes | 0.379949 | 0 | 0 | 848 |
31,276,150 | 2015-07-07T18:06:00.000 | 2 | 0 | 1 | 0 | python,inheritance,relation,derived-class,many-to-one | 31,276,250 | 1 | false | 0 | 0 | Typically, inheritance represents an 'is a' relationship and composition a 'has a' relationship. Is a dog a breed? Not really. Breed is a property or trait of a dog, a dog has a breed. Inheritance is not appropriate in this case. | 1 | 0 | 0 | I'm having a trouble deciding which to use, derived classes or a many to one relation.
For example, I'd like to have a class of Breed with average data pertaining to the breed, and then I'd like to have the class Dog for individual dog data that can refer to Breed for the average data. From what I understand, I can do that with either class inheritance or a many to one relation. But I'm not quite sure that the nuances between the two that will make me prefer one over the other. | When to use class inheritance or relations? | 0.379949 | 0 | 0 | 48 |
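A small illustration of the 'has a' relationship described in the answer; the class and attribute names are invented for the example.

    class Breed:
        def __init__(self, name, average_weight_kg):
            self.name = name
            self.average_weight_kg = average_weight_kg

    class Dog:
        def __init__(self, name, breed, weight_kg):
            self.name = name
            self.breed = breed          # a Dog has a Breed; it is not a Breed
            self.weight_kg = weight_kg

    labrador = Breed("Labrador", 30)
    rex = Dog("Rex", labrador, 33)
    print(rex.breed.average_weight_kg)  # reach breed-level averages through the dog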
31,277,368 | 2015-07-07T19:12:00.000 | 1 | 0 | 1 | 0 | python,pandas,scrape,yahoo-finance | 32,193,407 | 1 | false | 1 | 0 | I had the same error about html5lib with Python 3.4 in PyCharm 4.5.3, even though I installed html5lib. When I restarted PyCharm console (where I run the code), the error disappeared and options loaded correctly. | 1 | 0 | 0 | I am trying to get stock data from Yahoo! Finance. I have it installed (c:\ pip install yahoo-finance), but the import in the iPython console is not working. This is the error I get: UnicodeDecodeError: 'utf-8' codec can't decode byte 0x83 in position 4: invalid start byte.
I am using Python 3.4 and Spyder 2.3.1.
Has anyone else encountered this?
Update:
The unicode error during import no longer appears, but now it is replaced with the following when trying to use the yahoo_finance tool...
ImportError: html5lib not found, please install it
However, html5lib is listed when I run help('modules'). | Import Error related to Yahoo Finance tool / html5lib install | 0.197375 | 0 | 1 | 1,702 |
31,279,936 | 2015-07-07T21:49:00.000 | 0 | 0 | 0 | 0 | python,png,python-imaging-library,transparent | 31,280,044 | 1 | false | 0 | 1 | Embarrassing answer:
I was using convert('RGB') before calling getcolors(). Without the conversion, a 4-value tuple comes back with an alpha channel. | 1 | 0 | 0 | I'm trying to count the number of distinct colors in an image using img.getcolors(). However, this does not distinguish between transparent and black pixels - they both report as one pixel colored [0,0,0].
How can I distinguish between transparent and black pixels? Many of the images I need to process are largely black on a transparent background.
For test purposes, I'm using a PNG I created which is half transparent, half black. len(img.getcolors()) is 1. | PIL Image.getcolors() cannot distinguish between black and transparent? | 0 | 0 | 0 | 421 |
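A hedged sketch of the fix described in the answer: skip the convert('RGB') step and count colours on the RGBA image, where the alpha value keeps transparent pixels distinct from black ones.

    from PIL import Image

    img = Image.new("RGBA", (4, 2), (0, 0, 0, 0))      # fully transparent canvas
    for x in range(2):
        for y in range(2):
            img.putpixel((x, y), (0, 0, 0, 255))       # paint an opaque black corner

    print(img.getcolors())
    # e.g. [(4, (0, 0, 0, 255)), (4, (0, 0, 0, 0))]  black and transparent counted separately
    print(img.convert("RGB").getcolors())
    # e.g. [(8, (0, 0, 0))]                           after converting, the distinction is lost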
31,280,321 | 2015-07-07T22:17:00.000 | 0 | 0 | 0 | 0 | python,django | 31,280,349 | 2 | false | 1 | 0 | If you're passing data to JavaScript, use json. | 1 | 3 | 0 | Passing "'2015/07/01'" to a django template is rendering this character when queried in my browser: '2015/07/01'.
I would need to have '2015/07/01' (that's to include it into a javascript function, and my browser doesn't interpret it as '2015/07/01' but sets it to '2015/07/01' in the javascript).
How could I print '2015/07/01' ? | How to display a quote in django template | 0 | 0 | 0 | 1,608 |
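One way to apply the "use json" advice from the answer (a sketch; the view, template and variable names are invented): serialise the value in the view and only mark it safe inside the script block, which keeps the quotes from being entity-escaped. This is reasonable for values you control; be careful with user-supplied data.

    import json
    from django.shortcuts import render

    def calendar_view(request):
        context = {"start_date_json": json.dumps("2015/07/01")}
        return render(request, "calendar.html", context)

    # calendar.html, inside a <script> tag:
    #   var startDate = {{ start_date_json|safe }};   // renders as "2015/07/01"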
31,280,465 | 2015-07-07T22:18:00.000 | 7 | 0 | 1 | 0 | python-3.x | 54,966,370 | 3 | false | 0 | 0 | Downgrade Tornado to 5.1.1 from 6.0, that will solve it.
Apparently, Tornado 6.0 breaks notebook. | 2 | 6 | 0 | This is probably very simple; and I am embarrassed to ask; but I spent a long time trying to solve it already. I am trying to use an IPython notebook and on the click to get a Python 3 notebook, I often (but not always) get:
Connection failed
A connection to the notebook server could not be established. The notebook will continue trying to connect, but until it does, you will NOT be able to run code. Check your network connection or notebook server connection."
The obvious answer might appear that I have no internet connection; but I can access the internet and interact with external websites. It seems to be a problem of connecting with something local to my computer.
The really frustrating part is that sometimes in the past this has worked with no problem. That suggests to me that it is a simple setting issue. Does anyone have suggestions about how I can debug this?
My operating system is Windows (both 7, and 8.1). I am also using Anaconda 2.3 and Python 3.4 | IPython notebook connection failed issue | 1 | 0 | 0 | 9,726 |
31,280,465 | 2015-07-07T22:18:00.000 | 0 | 0 | 1 | 0 | python-3.x | 56,492,532 | 3 | false | 0 | 0 | I was also facing the same problem, and as a few people suggested I updated the Tornado version, but it didn't help.
What helped me was running the two following commands:
python2 -m pip install --upgrade ipykernel
python2 -m ipykernel install
In Jupyter Notebooks, the kernel is responsible for executing Python code. When you install the Anaconda System for Python3, this version also becomes the default for the notebooks. In order to enable Python 2.7 in your notebooks, you need to install a new kernel. | 2 | 6 | 0 | This is probably very simple; and I am embarrassed to ask; but I spent a long time trying to solve it already. I am trying to use an IPython notebook and on the click to get a Python 3 notebook, I often (but not always) get:
Connection failed
A connection to the notebook server could not be established. The notebook will continue trying to connect, but until it does, you will NOT be able to run code. Check your network connection or notebook server connection."
The obvious answer might appear that I have no internet connection; but I can access the internet and interact with external websites. It seems to be a problem of connecting with something local to my computer.
The really frustrating part is that sometimes in the past this has worked with no problem. That suggests to me that it is a simple setting issue. Does anyone have suggestions about how I can debug this?
My operating system is Windows (both 7, and 8.1). I am also using Anaconda 2.3 and Python 3.4 | IPython notebook connection failed issue | 0 | 0 | 0 | 9,726 |
31,281,119 | 2015-07-07T23:29:00.000 | 0 | 0 | 0 | 0 | python,django,django-templates | 31,282,108 | 1 | true | 1 | 0 | This type of logic does not belong in a template tag. It belongs in a view that will respond to AJAX requests and return a JSONResponse. You'll need some javascript to handle making the request based on the input as well. | 1 | 0 | 0 | Is it possible to modify data through custom template tag in Django? More specifically, I have a model named Shift whose data I want to display in a calendar form. I figured using a custom inclusion tag is the best way to go about it, but I also want users to be able to click on a shift and buy/sell the shift (thus modifying the database). My guess is that you can't do this with an inclusion tag, but if I were to write a different type of custom template tag from the ground up, would this be possible? If so, can you direct me to a few resources that address how to write such a tag?
Thank you in advance. | Django: modifying data with user input through custom template tag? | 1.2 | 0 | 0 | 333 |
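A minimal sketch of the AJAX-view approach the answer recommends for the Shift model; the field being updated, the import path and the URL wiring are assumptions for illustration.

    from django.http import JsonResponse
    from django.views.decorators.http import require_POST
    from myapp.models import Shift        # hypothetical import path for the Shift model

    @require_POST
    def buy_shift(request, shift_id):
        shift = Shift.objects.get(pk=shift_id)
        shift.owner = request.user         # assumed field; whatever "buying" means for your data
        shift.save()
        return JsonResponse({"ok": True, "shift": shift_id})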
31,281,539 | 2015-07-08T00:17:00.000 | 4 | 0 | 1 | 0 | python,opencv,virtualenv | 31,281,670 | 1 | false | 0 | 0 | I'm not sure I got your question right, but probably your virtualenv has been created without specifying the option --system-site-packages, which gives your virtualenv access to the packages you installed system-wide.
If you run virtualenv --system-site-packages tutorial_venv instead of just virtualenv tutorial_venv when creating your tutorial virtualenv, you might be fine.
Fyi, using a virtualenv with only local dependencies is a fairly widespread practice, which:
gives you isolation and reproducibility in production scenarios
makes it possible for users without the privilege of installing packages system-wide to run and develop a python application
The last benefit might be the reason why your tutorial suggested a virtualenv based approach. | 1 | 1 | 0 | I've recently installed opencv3 on ubuntu 14.04. The tutorial I followed was for some reason using a virtualenv. Now I want to move opencv from the virtual to my global environment. The reason for this is that I can't seem to use the packages that are installed on my global environment which is getting on my nerves. So how can I do that? | I want my already created virtualenv to have access to system packages | 0.664037 | 0 | 0 | 895 |
31,283,419 | 2015-07-08T04:14:00.000 | 0 | 0 | 0 | 0 | python,pdf,graphicsmagick | 31,310,493 | 1 | false | 1 | 0 | Future readers of this, if you're experiencing the same dilemma in GraphicsMagick. Here's the easy solution:
Simply write a big number to represent the "last page".
That is: something like:
convert file.pdf[4-99999] +adjoin file%02d.jpg
will work to convert from the 5th pdf page to the last pdf page, into jpgs.
Note: "+adjoin" & "%02d" have to do with getting all the images rather than just the last. You'll see what I mean if you try it. | 1 | 1 | 0 | To convert a range of, say, the 1st to 5th pages of a multipage PDF into single images is fairly straightforward using:
convert file.pdf[0-4] file.jpg
But how do I convert, say, the 5th to the last page when I don't know the number of pages in the PDF?
In ImageMagick "-1" represents the last page, so:
convert file.pdf[4--1] file.jpg works, great stuff,
but it doesn't work in GraphicsMagick.
Is there a way of doing this easily, or do I need to find the number of pages?
PS: need to use graphicsmagick instead of imagemagick.
Thank you so much in advance. | Convert to PDF's Last Page using GraphicsMagick with Python | 0 | 0 | 0 | 191 |
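If the command is being driven from Python, the same invocation can be wrapped with subprocess; this is only a sketch and assumes the gm binary is on the PATH and that the file names are placeholders.

    import subprocess

    subprocess.check_call(
        ["gm", "convert", "file.pdf[4-99999]", "+adjoin", "file%02d.jpg"]
    )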
31,284,225 | 2015-07-08T05:31:00.000 | 6 | 0 | 0 | 0 | python,flask,flask-httpauth | 31,305,421 | 1 | true | 1 | 0 | The way I intended that to be handled is by creating two HTTPAuth objects. Each gets its own verify_password callback, and then you can decorate each route with the decorator that is appropriate. | 1 | 1 | 0 | Working on a Flask application which will have separate classes of routes to be authenticated against: user routes and host routes(think Airbnb'esque where users and hosts differ substantially).
Creating a single verify_password callback and login_required combo is extremely straightforward, however that isn't sufficient, since some routes will need host authentication and others routes will necessitate user authentication. Essentially I will need to have one verify_password/login_required for user and one for host, but I can't seem to figure out how that would be done since it appears that the callback is global in respect to auth's scope. | Multiple verify_password callbacks on flask-httpauth | 1.2 | 0 | 0 | 228 |
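A sketch of the two-auth-object approach described in the answer; the password checks are stubs and the route names are invented.

    from flask import Flask
    from flask_httpauth import HTTPBasicAuth

    app = Flask(__name__)
    user_auth = HTTPBasicAuth()
    host_auth = HTTPBasicAuth()

    @user_auth.verify_password
    def verify_user(username, password):
        return username == "user" and password == "user-secret"    # stub lookup

    @host_auth.verify_password
    def verify_host(username, password):
        return username == "host" and password == "host-secret"    # stub lookup

    @app.route("/bookings")
    @user_auth.login_required
    def bookings():
        return "data for authenticated users"

    @app.route("/listings")
    @host_auth.login_required
    def listings():
        return "data for authenticated hosts"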
31,285,027 | 2015-07-08T06:27:00.000 | 0 | 0 | 1 | 0 | python,utf-8,nltk | 31,286,279 | 1 | false | 0 | 0 | I'm not aware of such a setting.
But I have similar issues with pos-tagging non-plain-text (text augmented with some xml-like tags in between). These xml-tags are usually not pos-tagged correctly. So I take them out before I start pos-tagging, keep track of their indices and re-insert them after tagging (and then assign them the proper tag manually).
Arguably, the presence or absence of punctuation won't change nltk's pos-tagging output that much, so you could try the same. Especially since I guess your set of 'problematic' punctuation characters is pretty limited? | 1 | 1 | 0 | I just started using NLTK and I noticed that it doesn't work well with non-ascii punctuation. For example, “ is being tagged as a noun. Also, having non-ascii punctuation messes up the POS tagging for the rest of the words because NLTK is interpreting “ as a word instead of a punctuation.
Is there a setting that can allow NLTK to recognize non-ascii punctuation? Since having a single non-unicode punctuation messes up the POS tagging for the entire document, I can't just replace every “ with ". | Making NLTK work for UTF8 punctuation? | 0 | 0 | 0 | 89 |
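A simpler variant of the workaround above, shown as a hedged sketch: normalise the curly quotation marks to ASCII before tagging instead of removing and re-inserting them. The mapping table is just an example, and the usual nltk tokenizer/tagger data is assumed to be installed.

    import nltk

    QUOTE_MAP = {u"\u201c": '"', u"\u201d": '"', u"\u2018": "'", u"\u2019": "'"}

    def tag(text):
        for fancy, plain in QUOTE_MAP.items():
            text = text.replace(fancy, plain)
        return nltk.pos_tag(nltk.word_tokenize(text))

    print(tag(u"He said \u201chello\u201d to everyone."))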
31,286,024 | 2015-07-08T07:22:00.000 | 0 | 0 | 1 | 0 | python,neural-network,artificial-intelligence,caffe,deep-dream | 39,757,574 | 4 | false | 0 | 0 | As a rule of thumb deep learning is hard on both compute and memory resources. A 2gb RAM Core Duo machine is just not a good choice for deep learning. Keep in mind a lot of the people who pioneered this field did much of their research using GTX Titan cards because CPU computation even on xeon servers is prohibitivly slow when training deep learning networks. | 2 | 0 | 0 | I managed to install #DeepDream in my server.
I have a dual-core CPU and 2 GB of RAM, but it is taking about 1 minute to process an image of roughly 100 KB.
Any advice? | DeepDream taking too long to render image | 0 | 0 | 0 | 583
31,286,024 | 2015-07-08T07:22:00.000 | 1 | 0 | 1 | 0 | python,neural-network,artificial-intelligence,caffe,deep-dream | 31,430,476 | 4 | false | 0 | 0 | Do you run it in a Virtual Machine on Windows or OS X? If so, then it's probably not going to work any faster. In a Virtual Machine (I'm using Docker) you're most of the time not able to use CUDA to render the Images. I have the same problem and I'm going to try it by installing Ubuntu and then install the NVidia drivers for CUDA. At the moment I'm rendering 1080p images which are around 300kb and it takes 15 minutes to do 1 image on an Intel core i7 with 8gb of ram. | 2 | 0 | 0 | I managed to install #DeepDream in my server.
I have a dual-core CPU and 2 GB of RAM, but it is taking about 1 minute to process an image of roughly 100 KB.
Any advice? | DeepDream taking too long to render image | 0.049958 | 0 | 0 | 583
31,289,288 | 2015-07-08T09:59:00.000 | 0 | 0 | 0 | 1 | python,packet,sniffer | 31,293,136 | 2 | true | 0 | 0 | I studied the source code of pypcap and as far as I could see there was no way to set the buffer size from it.
Because pypcap is using the libpcap library, I changed the default buffer size in the source code of libpcap and reinstalled it from source. That solved the problem as it seems.
Tcpdump sets the buffer size by calling the set_buffer_size() method of libpcap, but it seems that pypcap cannot do that.
Edit: The buffer size variable is located in the pcap-linux.c file, and the name is opt.buffer_size. It is 2MB by default (2*1024*1024 in the source code) | 1 | 2 | 0 | I created a packet sniffer using the pypcap Python library (in Linux). Using the .stats() method of the pypcap library, I see that from time to time a few packets get dropped by the kernel when the network is busy. Is it possible to increase the buffer size for the pypcap object so that fewer packets get dropped (like it is possible in tcpdump)? | How to set buffer size in pypcap | 1.2 | 0 | 1 | 840
31,295,352 | 2015-07-08T14:15:00.000 | 0 | 0 | 0 | 0 | python,django,postgresql,python-2.7,django-1.8 | 31,309,910 | 1 | true | 1 | 0 | I think you have misunderstood what inspectdb does. It creates a model for an existing database table. It doesn't copy or replicate that table; it simply allows Django to talk to that table, exactly as it talks to any other table. There's no copying or auto-fetching of data; the data stays where it is, and Django reads it as normal. | 1 | 0 | 0 | I'm making an application that will fetch data from a/n (external) postgreSQL database with multiple tables.
Any idea how I can use inspectdb only on a SINGLE table? (I only need that table)
Also, the data in the database would be changing continuously. How do I manage that? Do I have to continuously run inspectdb? But what will happen to junk values then? | Django 1.8 and Python 2.7 using PostgreSQL DB help in fetching | 1.2 | 1 | 0 | 78
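To illustrate the answer, this is roughly what inspectdb produces for one table (the table and column names here are invented). managed = False tells Django the table already exists in the external database, so Django only reads and writes it and never migrates it; no data is copied.

    from django.db import models

    class SensorReading(models.Model):
        recorded_at = models.DateTimeField()
        value = models.FloatField()

        class Meta:
            managed = False                 # Django will not create or alter this table
            db_table = "sensor_reading"     # the existing table in the external database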
31,295,836 | 2015-07-08T14:34:00.000 | 0 | 0 | 1 | 0 | python,ncurses,getstring,python-curses | 31,301,635 | 1 | false | 0 | 0 | Not with getstr(), but it's certainly possible with curses. You just have to read each keypress one at a time, via getch() -- and, if you want an editable buffer, you have to recreate something like the functionality of getstr() yourself. (I'd post an example, but what I have is in C rather than Python.) | 1 | 0 | 0 | I am reading user input text with getstr(). Instead of waiting for the user to press enter, I would like to read the input each time it is changed and re-render other parts of the screen based on the input.
Is this possible with getstr()? How? If not, what's the simplest/easiest alternative? | Python ncurses - how to trigger actions while user is typing? | 0 | 0 | 0 | 145 |
31,296,767 | 2015-07-08T15:12:00.000 | 1 | 0 | 1 | 0 | python,multithreading | 31,298,053 | 1 | true | 0 | 0 | The best way to terminate any threaded application, including python, is a technique known as "cooperative shutdown". With this technique, in each thread you check if the application has been instructed to shutdown on each loop iteration, and if so, exit the loop and finish running each thread. Your shutdown condition is completely up to you but common options include catching a KeyboardInterrupt exception and then setting a shared shutdown variable, timeout, etc, etc... | 1 | 1 | 0 | I have a program with two threaded processes that run on a loop. The problem is, I'm new enough to python that outside of closing the terminal window, I don't know the best method to properly terminate the application. | best way terminate a program with threading | 1.2 | 0 | 0 | 85 |
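A small sketch of the cooperative shutdown idea from the answer, using a shared threading.Event as the flag each worker checks on every loop iteration; the sleep calls stand in for real work.

    import threading
    import time

    stop_event = threading.Event()

    def worker(name):
        while not stop_event.is_set():      # check the shutdown flag every iteration
            time.sleep(0.2)                 # placeholder for real work
        print(name, "shutting down cleanly")

    threads = [threading.Thread(target=worker, args=("worker-%d" % i,)) for i in range(2)]
    for t in threads:
        t.start()

    try:
        time.sleep(1)                       # pretend the program runs for a while
    except KeyboardInterrupt:
        pass
    finally:
        stop_event.set()                    # ask every thread to finish its loop
        for t in threads:
            t.join()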
31,304,788 | 2015-07-08T22:15:00.000 | 2 | 0 | 0 | 1 | python,twisted,deferred,asynchronous-messaging-protocol | 31,305,323 | 1 | true | 0 | 0 | No. There is no way, presently, to cancel an AMP request.
You can't cancel AMP requests because there is no way defined in AMP at the wire-protocol level to send a message to the remote server telling it to stop processing. This would be an interesting feature-addition for AMP, but if it were to be added, you would not add it by allowing users to pass in their own cancellers; rather, AMP itself would have to create a cancellation function that sent a "cancel" command.
Finally, adding this feature would have to be done very carefully because once a request is sent, there's no guarantee that it would not have been fully processed; chances are usually good that by the time the cancellation request is received and processed by the remote end, the remote end has already finished processing and sent a reply. So AMP should implement asynchronous cancellation. | 1 | 1 | 0 | I have a Twisted client/server application where a client asks multiple servers for additional work to be done using AMP. The first server to respond to the client wins -- the other outstanding client requests should be cancelled.
Deferred objects support cancel() and a cancellor function may be passed to the Deferred's constructor. However, AMP's sendRemote() api doesn't support passing a cancellor function. Additionally, I'd want the cancellor function to not only stop the local request from processing upon completion but also remove the request from the remote server.
AMP's BoxDispatcher does have a stopReceivingBoxes method, but that causes all deferreds to error out (not quite what I want).
Is there a way to cancel AMP requests? | How should a Twisted AMP Deferred be cancelled? | 1.2 | 0 | 1 | 186 |
31,305,340 | 2015-07-08T23:04:00.000 | -1 | 0 | 0 | 0 | python,django,django-jsonfield | 31,305,709 | 4 | false | 1 | 0 | Try MyModel.objects.filter(myjsonfield='[]'). | 2 | 6 | 0 | I need to query a model by a JsonField, I want to get all records that have empty value ([]):
I used MyModel.objects.filter(myjsonfield=[]) but it's not working; it returns 0 results even though there are records with myjsonfield=[] | Query by empty JsonField in django | -0.049958 | 0 | 0 | 2,302
31,305,340 | 2015-07-08T23:04:00.000 | 0 | 0 | 0 | 0 | python,django,django-jsonfield | 37,578,853 | 4 | false | 1 | 0 | JSONfield should be default={} i.e., a dictionary, not a list. | 2 | 6 | 0 | I need to query a model by a JsonField, I want to get all records that have empty value ([]):
I used MyModel.objects.filter(myjsonfield=[]) but it's not working; it returns 0 results even though there are records with myjsonfield=[] | Query by empty JsonField in django | 0 | 0 | 0 | 2,302
31,307,147 | 2015-07-09T02:35:00.000 | 1 | 0 | 0 | 0 | javascript,python,html,forms,local | 31,307,321 | 1 | false | 1 | 0 | The browsers security model prevents sending data to local processes. Your options are:
Write a browser extension that calls a python script.
Run a local webserver. Most Python web development frameworks have a simple one included. | 1 | 0 | 0 | I am writing a program that opens an html form in a browser window. From there, I need to get the data entered in the form and use it in python code. This has to be done completely locally. I do not have access to a webserver or I would be using PHP. I have plenty of experience with Python but not as much experience with JavaScript and no experience with AJAX. Please help! If you need any more information to answer the question, just ask. All answers are greatly appreciated. | Sending data from JavaScript to Python LOCALLY | 0.197375 | 0 | 1 | 78 |
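A minimal sketch of the "run a local webserver" option (the route and field names are invented): the HTML form posts to this local Flask app and the Python code reads the submitted fields.

    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/submit", methods=["POST"])
    def submit():
        name = request.form.get("name", "")   # a field from the HTML form
        return "Hello, %s" % name

    if __name__ == "__main__":
        app.run(host="127.0.0.1", port=5000)   # listens on localhost only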
31,308,812 | 2015-07-09T05:24:00.000 | 20 | 0 | 0 | 1 | python,installation,protocols,protocol-buffers,deep-dream | 45,141,001 | 9 | false | 0 | 0 | Locating the google directory in the site-packages directory (for the proper latter directory, of course) and manually creating an (empty) __init__.py resolved this issue for me.
(Note that within this directory is the protobuf directory but my installation of Python 2.7 did not accept the new-style packages so the __init__.py was required, even if empty, to identify the folder as a package folder.)
...In case this helps anyone in the future. | 5 | 32 | 0 | I am trying to run Google's deep dream. For some odd reason I keep getting
ImportError: No module named google.protobuf
after trying to import protobuf. I have installed protobuf using sudo install protobuf. I am running python 2.7 OSX Yosemite 10.10.3.
I think it may be a deployment location issue but i cant find anything on the web about it. Currently deploying to /usr/local/lib/python2.7/site-packages. | No module named google.protobuf | 1 | 0 | 0 | 96,128 |
31,308,812 | 2015-07-09T05:24:00.000 | 2 | 0 | 0 | 1 | python,installation,protocols,protocol-buffers,deep-dream | 31,325,403 | 9 | false | 0 | 0 | According to your comments, you have multiple versions of Python.
What could have happened is that you installed the package with the pip of another Python.
pip is actually a link to a script that downloads and installs your package.
Two possible solutions:
go to $(PYTHONPATH)/Scripts and run pip from that folder; that way you ensure you use the correct pip
create an alias for pip which points to $(PYTHONPATH)/Scripts/pip and then run pip install
How will you know it worked?
Simple: if the new pip is used, the package will be installed successfully; otherwise the package is already installed.
ImportError: No module named google.protobuf
after trying to import protobuf. I have installed protobuf using sudo install protobuf. I am running python 2.7 OSX Yosemite 10.10.3.
I think it may be a deployment location issue but i cant find anything on the web about it. Currently deploying to /usr/local/lib/python2.7/site-packages. | No module named google.protobuf | 0.044415 | 0 | 0 | 96,128 |
31,308,812 | 2015-07-09T05:24:00.000 | 0 | 0 | 0 | 1 | python,installation,protocols,protocol-buffers,deep-dream | 45,384,713 | 9 | false | 0 | 0 | In my case, macOS permission control was the problem, so the install needed elevated permissions.
sudo -H pip3 install protobuf | 5 | 32 | 0 | I am trying to run Google's deep dream. For some odd reason I keep getting
ImportError: No module named google.protobuf
after trying to import protobuf. I have installed protobuf using sudo install protobuf. I am running python 2.7 OSX Yosemite 10.10.3.
I think it may be a deployment location issue but i cant find anything on the web about it. Currently deploying to /usr/local/lib/python2.7/site-packages. | No module named google.protobuf | 0 | 0 | 0 | 96,128 |
31,308,812 | 2015-07-09T05:24:00.000 | 3 | 0 | 0 | 1 | python,installation,protocols,protocol-buffers,deep-dream | 52,287,475 | 9 | false | 0 | 0 | when I command pip install protobuf, I get the error:
Cannot uninstall 'six'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
If you have the same problem as me, you should run the following commands.
pip install --ignore-installed six
sudo pip install protobuf | 5 | 32 | 0 | I am trying to run Google's deep dream. For some odd reason I keep getting
ImportError: No module named google.protobuf
after trying to import protobuf. I have installed protobuf using sudo install protobuf. I am running python 2.7 OSX Yosemite 10.10.3.
I think it may be a deployment location issue but i cant find anything on the web about it. Currently deploying to /usr/local/lib/python2.7/site-packages. | No module named google.protobuf | 0.066568 | 0 | 0 | 96,128 |
31,308,812 | 2015-07-09T05:24:00.000 | 0 | 0 | 0 | 1 | python,installation,protocols,protocol-buffers,deep-dream | 46,490,849 | 9 | false | 0 | 0 | I had this problem too when I had a google.py file in my project files.
It is quite easy to reproduce.
main.py: import tensorflow as tf
google.py: print("Protobuf error due to google.py")
Not sure if this is a bug and where to report it. | 5 | 32 | 0 | I am trying to run Google's deep dream. For some odd reason I keep getting
ImportError: No module named google.protobuf
after trying to import protobuf. I have installed protobuf using sudo install protobuf. I am running python 2.7 OSX Yosemite 10.10.3.
I think it may be a deployment location issue but i cant find anything on the web about it. Currently deploying to /usr/local/lib/python2.7/site-packages. | No module named google.protobuf | 0 | 0 | 0 | 96,128 |
31,311,247 | 2015-07-09T07:50:00.000 | 1 | 0 | 1 | 0 | java,python,ruby,perl,interpreter | 31,311,613 | 3 | true | 0 | 0 | Your Python sourcecode (.py) is "compiled" into a representation that does away with all whitespace (.pyc). Instead of doing this transformation over and over again you can just run the .pyc files. So no, whitespace doesn't matter.
If you want performance the best thing to do is optimize your algorithm.
Don't start peephole optimizing too early; clear, well-designed code is your first aim.
After that, since your code is comprehensible and designed (hopefully) for insight and change, find out where the bottlenecks are: Look at the "big-O" complexity of your algorithms (O(n), O(n^2), etc.) and try to improve that.
After that you might use a profiler to find remaining bottlenecks. You can often improve them easily since your code is well structured.
In short: Leaving out whitespace is no good. Understandable code is the way to optimization. | 2 | 2 | 0 | Since the source code is interpreted while running, I think it might make a difference in Performance. What I mean is:
When you have a long (>9000 lines) code and then cut out as many spaces and linebreaks as possible, does it make the program run faster?
If so, does this apply to languages using bytecode (i.e. Java) too? | Does shorter code make a performance difference in interpreted languages? | 1.2 | 0 | 0 | 368 |
31,311,247 | 2015-07-09T07:50:00.000 | 0 | 0 | 1 | 0 | java,python,ruby,perl,interpreter | 31,311,708 | 3 | false | 0 | 0 | Java is not an interpreted language. Many other languages tagged above are also compiled into an intermediate bytecode instead of interpreted directly. Therefore the length of the source code doesn't relate to performance. Moreover, a program may contain loops, which obviously take fewer lines but are executed multiple times. A single-line program which loops infinitely will run longer than a program with a billion lines | 2 | 2 | 0 | Since the source code is interpreted while running, I think it might make a difference in Performance. What I mean is:
When you have a long (>9000 lines) code and then cut out as many spaces and linebreaks as possible, does it make the program run faster?
If so, does this apply to languages using bytecode (i.e. Java) too? | Does shorter code make a performance difference in interpreted languages? | 0 | 0 | 0 | 368 |
31,311,620 | 2015-07-09T08:07:00.000 | 2 | 0 | 0 | 0 | python,mongodb,tornado,tornado-motor | 31,311,950 | 1 | true | 0 | 0 | If you store session data in Python your application will:
lose it if you stop the Python process;
likely consume more memory as Python isn't very efficient in memory management (and you will have to store all the sessions in memory, not the ones you need right now).
If these are not problems for you, you can go with Python structures. But usually these are serious concerns, and most projects use some external storage for sessions.
I will need to perform some iteration and manipulation of the sessions while the server is running. I keep debating whether to move these to another mongoDB or just keep it as a python structure.
Is there anything wrong with keeping session information this way? | Tornado Application design | 1.2 | 1 | 0 | 120 |
31,312,292 | 2015-07-09T08:37:00.000 | 4 | 0 | 0 | 0 | python,django,amazon-web-services,amazon-s3,boto | 31,314,635 | 1 | true | 1 | 0 | This has nothing to do with timezone of machine or S3 bucket, your machine time is not correct and if machine time is off by more than 15 minutes, AWS will give error because of security. Just check if time is correct on machine. | 1 | 1 | 0 | My server time is set as Asia/India. So when ever I am trying to post an image in S3 bucket I am getting the following error
RequestTimeTooSkewed: The difference between the request time and the current time is too large. (RequestTime: Thu, 09 Jul 2015 17:53:21 GMT; ServerTime: 2015-07-09T08:23:22Z; MaxAllowedSkewMilliseconds: 900000; RequestId: 68B8486508D2695A; HostId: g6EfiNV8uJi8JY/Y2JWCIBi7fROEa/Uw2Yaw3fw3pfAbI+ZtaFZV7PnHhZ6Yxw07)
How can I change the AWS S3 bucket time as IST? | How to change amazon aws S3 time zone setting for a bucket | 1.2 | 0 | 1 | 8,668 |
31,314,526 | 2015-07-09T10:15:00.000 | 0 | 1 | 0 | 0 | python,type-conversion,swig | 31,378,499 | 1 | false | 0 | 1 | Well, sorry to give everyone the run-around on this one.
We found the solution and it's completely unrelated to SWIG.
Problem emanated from the ODBC driver for the Vertica DB we were using which behaved unpredictably when using a long variable to bind the SQL result to.
Tomer and Inbal | 1 | 0 | 0 | I have code in python that integrates c++ code using swig.
The c++ class has a field with the long value 1393685280 which is converted to python.
The problem is that when calling the getter of this field from python I get the int value -2901282016 instead.
How can I fix it?
Thanks | swig converts a positive long to negative int | 0 | 0 | 0 | 103 |
31,314,540 | 2015-07-09T10:16:00.000 | 0 | 0 | 0 | 0 | python,django,pip | 31,645,802 | 1 | false | 1 | 0 | make sure that python.exe is allowed in your firewall settings and any antivirus firewall settings. I had the same problems, and had to allow the program under my AVG firewall settings cause it still wouldn't work even after I had allowed it under Windows firewall. | 1 | 1 | 0 | When trying to install Django through pip we get an error message.
So it's a protocol error, and since the message is in Swedish it says something like:
"a try was made to get access to a socket in a way that is forbidden by the table of access"....
It seems like we need any admin access or something? We tried to run the command prompt as an administrator. By marking the "run as administrator" box in the command prompt settings. We are lost, any help is greatly appreciated.
Retrying (Retry(total=0, connect=None, read=None, redirect=None)) after connection broken by 'ProtocolError('Connection aborted.', error(10013, 'Ett f\xf6rs\xf6k gjordes att f\xe5 \xe5tkomst till en socket p\xe5 ett s\xe4tt som \xe4r f\xf6rbjudet av \xe5tkomstbeh\xf6righeterna'))': /simple/django/
Could not find a version that satisfies the requirement django (from versions: )
No matching distribution found for django | Error when installing through Pip - Windows | 0 | 0 | 0 | 1,486 |
31,315,046 | 2015-07-09T10:40:00.000 | 1 | 0 | 0 | 0 | python,django,forms,post | 31,315,193 | 1 | true | 1 | 0 | You're getting the ID of the related object.
Since you say you're using a form, you shouldn't be accessing data via request.POST, but by form.cleaned_data, which will do the work to translate that into the actual object. | 1 | 0 | 0 | One of my form fields is a Foreign Key drop down. When the form is submitted I need to get the selected value in views.py.
However, instead of getting (using request.POST.get('value', False)) the value I am getting a number (which seems to be arbitrary).
How can I get the selected value?
Thanks in advance! | Django forms - Post | 1.2 | 0 | 0 | 50 |
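A sketch of the form.cleaned_data pattern the answer recommends; the form, model and URL names are invented for illustration. With a ModelChoiceField, cleaned_data gives back the related object itself rather than the raw id that request.POST contains.

    from django import forms
    from django.shortcuts import redirect, render
    from myapp.models import Supplier        # hypothetical app and model

    class OrderForm(forms.Form):
        supplier = forms.ModelChoiceField(queryset=Supplier.objects.all())

    def create_order(request):
        form = OrderForm(request.POST or None)
        if request.method == "POST" and form.is_valid():
            supplier = form.cleaned_data["supplier"]   # the Supplier instance, not its id
            print(supplier.pk, supplier)
            return redirect("order-list")              # hypothetical URL name
        return render(request, "order_form.html", {"form": form})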
31,320,273 | 2015-07-09T14:19:00.000 | 2 | 1 | 0 | 0 | python-2.7,robotframework | 33,709,506 | 4 | false | 1 | 0 | you can use jenkins to run your Robot Framework Testcases. There is a auto-generated mail option in jenkins to send mail with Test results. | 1 | 0 | 0 | We are using ROBOT framework to execute automation test cases.
Could anyone please guide me on writing a script for e-mail notification of the test results?
Note:
I have e-mail server details.
Regards,
-kranti | Email Notification for test results by using Python in Robot Framework | 0.099668 | 0 | 0 | 9,967 |
31,321,872 | 2015-07-09T15:26:00.000 | 0 | 1 | 0 | 0 | python,jira-rest-api | 31,656,989 | 1 | true | 0 | 0 | @ThePavoIC, you seem to be correct. I notice MASSIVE changes in speed if Jira has been restarted and re-indexed recently. Scripts that would take a couple minutes to run would complete in seconds. Basically, you need to make sure Jira is tuned for performance and keep your indexes up to date. | 1 | 3 | 0 | I'm using jira-python to automate a bunch of tasks in Jira. One thing that I find weird is that jira-python takes a long time to run. It seems like it's loading or something before sending the requests. I'm new to python, so I'm a little confused as to what's actually going on. Before finding jira-python, I was sending requests to the Jira REST API using the requests library, and it was blazing fast (and still is, if I compare the two). Whenever I run the scripts that use jira-python, there's a good 15 second delay while 'loading' the library, and sometimes also a good 10-15 second delay sending each request.
Is there something I'm missing with python that could be causing this issue? Anyway to keep a python script running as a service so it doesn't need to 'load' the library each time it's ran? | Jira python runs very slowly, any ideas on why? | 1.2 | 0 | 1 | 923 |
31,321,996 | 2015-07-09T15:31:00.000 | 25 | 0 | 0 | 0 | python,amazon-web-services,boto,amazon-sqs | 31,322,766 | 3 | true | 1 | 0 | The long-polling capability of the receive_message() method is the most efficient way to poll SQS. If that returns without any messages, I would recommend a short delay before retrying, especially if you have multiple readers. You may want to even do an incremental delay so that each subsequent empty read waits a bit longer, just so you don't end up getting throttled by AWS.
And yes, you do have to delete the message after you have read it or it will reappear in the queue. This can actually be very useful in the case of a worker reading a message and then failing before it can fully process the message. In that case, it would be re-queued and read by another worker. You also want to make sure the invisibility timeout of the messages is set to be long enough that the worker has enough time to process the message before it automatically reappears on the queue. If necessary, your workers can adjust the timeout as they are processing if it is taking longer than expected. | 1 | 25 | 0 | I have an SQS queue that is constantly being populated by a data consumer and I am now trying to create the service that will pull this data from SQS using Python's boto.
The way I designed it is that I will have 10-20 threads all trying to read messages from the SQS queue and then doing what they have to do on the data (business logic), before going back to the queue to get the next batch of data once they're done. If there's no data they will just wait until some data is available.
I have two areas I'm not sure about with this design
Is it a matter of calling receive_message() with a long time_out value and if nothing is returned in the 20 seconds (maximum allowed) then just retry? Or is there a blocking method that returns only once data is available?
I noticed that once I receive a message, it is not deleted from the queue, do I have to receive a message and then send another request after receiving it to delete it from the queue? seems like a little bit of an overkill.
Thanks | Best practice for polling an AWS SQS queue and deleting received messages from queue? | 1.2 | 0 | 1 | 19,484 |
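A hedged sketch of the receive/process/delete loop described above. It is written against boto3 rather than the older boto library the question mentions, but the flow is the same: long-poll with a wait time, process each message, then delete it explicitly. The queue URL is a placeholder.

    import boto3

    sqs = boto3.client("sqs")
    queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"   # placeholder

    def process(body):
        print("working on:", body)            # business logic goes here

    while True:
        resp = sqs.receive_message(
            QueueUrl=queue_url,
            MaxNumberOfMessages=10,
            WaitTimeSeconds=20,               # long polling; returns early when messages arrive
        )
        for msg in resp.get("Messages", []):
            process(msg["Body"])
            sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])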
31,323,466 | 2015-07-09T16:38:00.000 | 1 | 0 | 1 | 0 | python,eclipse,pydev | 32,323,599 | 2 | false | 0 | 0 | In PyDev, there is a templates section. From the menu at the top of the Eclipse window, select: Windows / Preferences / PyDev / Editor / Templates
This will open a dialog box listing the templates.
If you scroll down the list of templates, you will find one named main. By selecting it, you can see what will be inserted. If it's not quite what you want, you can edit it. You are also able to add your own, with included variables.
In your source file, place the cursor where you want the "if ..." to go. Begin to type 'main'. A popup menu should popup containing at least the entry: "main - Main function pattern" Press Enter and the if ... will be entered in the source code. | 1 | 2 | 0 | Is there shortcut or quicker way in Pydev to insert if __name__ == '__main__' ? | Pydev, shortcut for inserting if __name__ == '__main__' in code | 0.099668 | 0 | 0 | 1,569 |