Q_Id
int64 337
49.3M
| CreationDate
stringlengths 23
23
| Users Score
int64 -42
1.15k
| Other
int64 0
1
| Python Basics and Environment
int64 0
1
| System Administration and DevOps
int64 0
1
| Tags
stringlengths 6
105
| A_Id
int64 518
72.5M
| AnswerCount
int64 1
64
| is_accepted
bool 2
classes | Web Development
int64 0
1
| GUI and Desktop Applications
int64 0
1
| Answer
stringlengths 6
11.6k
| Available Count
int64 1
31
| Q_Score
int64 0
6.79k
| Data Science and Machine Learning
int64 0
1
| Question
stringlengths 15
29k
| Title
stringlengths 11
150
| Score
float64 -1
1.2
| Database and SQL
int64 0
1
| Networking and APIs
int64 0
1
| ViewCount
int64 8
6.81M
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
34,365,044 | 2015-12-18T22:43:00.000 | -1 | 0 | 1 | 0 | python,windows,opencv | 55,111,881 | 13 | false | 0 | 0 | After installing OpenCV with pip and then pip3 in terminal. It would import when writing python in terminal, but not in PyCharm. I tried the invalidation of cache mentioned above, and it worked for a min until cache was warmed up. Same result....
I fixed it by going to:
PyCharm Menu
Preferences
Project (proj name) -> Project Interpreter
(this time instead of CV2)
Plus sign (to install packages)
searched for opencv-python
installed the package
I didn't even have to dot off that library, it then accepted the
"import cv2" | 7 | 22 | 0 | I am using OpenCV 3 and python 2.7 and coding using PyCharm. The code works fine but PyCharm does not recognize cv2 as a module. It underlines it with a red line, so it doesn't display its functions in the intellisence menu.
I tried to set an environment variable OPENCV_DIR but it didn't work
OpenCV is extracted in
F:\opencv and
Python is installed on
C:\Python27
What is wrong ? | PyCharm does not recognize cv2 as a module | -0.015383 | 0 | 0 | 90,179 |
34,365,044 | 2015-12-18T22:43:00.000 | -1 | 0 | 1 | 0 | python,windows,opencv | 51,626,683 | 13 | false | 0 | 0 | You can install your existing libraries to pycharm by enabling the button "Inherit global site-packages" while creating project.
If you don't have installed libraries then you can install it by going to File>Settings>Project:your project name>project interpreter and then install your required package by searching that. | 7 | 22 | 0 | I am using OpenCV 3 and python 2.7 and coding using PyCharm. The code works fine but PyCharm does not recognize cv2 as a module. It underlines it with a red line, so it doesn't display its functions in the intellisence menu.
I tried to set an environment variable OPENCV_DIR but it didn't work
OpenCV is extracted in
F:\opencv and
Python is installed on
C:\Python27
What is wrong ? | PyCharm does not recognize cv2 as a module | -0.015383 | 0 | 0 | 90,179 |
34,365,044 | 2015-12-18T22:43:00.000 | 0 | 0 | 1 | 0 | python,windows,opencv | 42,989,983 | 13 | false | 0 | 0 | I follow the steps in the webapp response and after that It does not work and I decided to reinstall the pycharm IDE, this works for me.
Hope it helps. | 7 | 22 | 0 | I am using OpenCV 3 and python 2.7 and coding using PyCharm. The code works fine but PyCharm does not recognize cv2 as a module. It underlines it with a red line, so it doesn't display its functions in the intellisence menu.
I tried to set an environment variable OPENCV_DIR but it didn't work
OpenCV is extracted in
F:\opencv and
Python is installed on
C:\Python27
What is wrong ? | PyCharm does not recognize cv2 as a module | 0 | 0 | 0 | 90,179 |
34,365,044 | 2015-12-18T22:43:00.000 | 1 | 0 | 1 | 0 | python,windows,opencv | 58,638,028 | 13 | false | 0 | 0 | Installing opencv-python package from pycharm setting worked for me. | 7 | 22 | 0 | I am using OpenCV 3 and python 2.7 and coding using PyCharm. The code works fine but PyCharm does not recognize cv2 as a module. It underlines it with a red line, so it doesn't display its functions in the intellisence menu.
I tried to set an environment variable OPENCV_DIR but it didn't work
OpenCV is extracted in
F:\opencv and
Python is installed on
C:\Python27
What is wrong ? | PyCharm does not recognize cv2 as a module | 0.015383 | 0 | 0 | 90,179 |
34,369,173 | 2015-12-19T09:24:00.000 | 2 | 1 | 1 | 0 | python,performance,debugging,visual-studio-2015,ptvs | 34,369,441 | 1 | true | 0 | 0 | As a workaround, try mixed-mode debugging - it is significantly faster (but also more limited). | 1 | 2 | 0 | VS2015 with the l&g PTVS looks great.
But any non-trivial project runs about 20-50 times slower under a debugger (F5) than without one (Ctrl-F5), which makes it totally unusable for debugging.
Any idea why? Is there any way to accelerate the debugger? | Why is PTVS so slow? | 1.2 | 0 | 0 | 1,501 |
34,376,936 | 2015-12-20T00:46:00.000 | 2 | 0 | 1 | 0 | collections,ironpython,garbage | 34,381,901 | 1 | true | 1 | 0 | In general managed enviroments relase there memory, if no reference is existing to the object anymore (from connection from the root to the object itself). To force the .net framework to release memory, the garbage collector is your only choice. In general it is important to know, that GC.Collect does not free the memory, it only search for objects without references and put the in a queue of objects, which will be released. If you want to free memory synchron, you also need GC.WaitForPendingFinalizers.
One thing to know about large objects in the .net framework is, that they are stored seperatly, in the Large Object Heap (LOH). From my point of few, it is not bad to free those objects synchron, you only have to know, that this can cause some performance issues. That's why in general the GC decide on it's own, when to collect and free memory and when not to.
Because gc.collect is implemented in Python as well as in IronPython, you should be able to use it. If you take a look at the implementation in IronPython, gc.collect does exactly what you want, call GC.Collect() and GC.WaitForPendingFinalizer. So in your case, i would use it.
Hope this helps. | 1 | 1 | 0 | I am a creating a huge mesh object (some 900 megabytes in size).
Once I am done with analysing it, I would like to somehow delete it from the memory.
I did a bit of search on stackoverflow.com, and I found out that del will only delete the reference to mentioned mesh. Not the mesh object itself.
And that after some time, the mesh object will eventually get garbage collected.
Is gc.collect() the only way by which I could instantly release the memory, and there for somehow remove the mentioned large mesh from the memory?
I've found replies here on stackoverflow.com which state that gc.collect() should be avoided (at least when it comes to regular python, not specifically ironpython).
I've also find comments here on stackoverflow which claim that in IronPython it is not even guaranteed the memory will be released if nothing else is holding a reference.
Any comments on all these issues?
I am using ironpython 2.7 version.
Thank you for the reply. | Delete the large object in ironpython, and instantly release the memory? | 1.2 | 0 | 0 | 499 |
34,377,210 | 2015-12-20T01:43:00.000 | 1 | 0 | 0 | 0 | python,scikit-learn,svm | 34,377,289 | 1 | true | 0 | 0 | If you are using SVC from sklearn then the answer is no. There is no way to do it, this implementation is purely batch training based. If you are training linear SVM using SGDClassifier from sklearn then the answer is yes as you can simply start the optimization from the previous solution (when removing feature - simply with removed corresponding weight, and when adding - with added any weight there). | 1 | 1 | 1 | I have a support vector machine trained on ~300,000 examples, and it takes roughly 1.5-2 hours to train this model, and I pickled(serialized) it. Currently, I want to add/remove a couple of the parameters of the model. Is there a way to do this without having to retrain the entire model? I am using sklearn in python. | Adding and removing SVM parameters without having to totally retrain | 1.2 | 0 | 0 | 44 |
34,385,462 | 2015-12-20T20:25:00.000 | 1 | 0 | 1 | 1 | python,python-2.7,wing-ide | 34,385,561 | 1 | false | 0 | 0 | For Wing IDE:
Try ctrl++ or ctrl+MouseScrollUp for quick changes. You can also just change your font size in the Editor preferences.
For Python IDLE:
Under Options --> Configure IDLE; change the Size.
For 'cmd' prompt or Bash:
Right-Click on the Window bar and select Properties. Change the font size in the 'Font' tab. If you want it to be permanent, do the same in 'Defaults' instead (from the right-click menu). | 1 | 1 | 0 | Is there any way to zoom in on the Python Shell in Wing IDE? I am having trouble seeing the font because it is too small. | Zooming in on the python shell wing_ide | 0.197375 | 0 | 0 | 9,187 |
34,385,530 | 2015-12-20T20:32:00.000 | 0 | 0 | 1 | 0 | python,blender | 34,385,809 | 2 | false | 0 | 0 | The answer seems to be materials[i].active_texture. I asked a little too soon. | 1 | 0 | 0 | I am loading and cleaning a lot of legacy .fbx files. I need to import the fbx file, check for repeated meshes, materials, and textures and then select the material that has textures that are attached to bitmaps. (Out of 5 fbx files, only one has the usable material/texture)
I can import the fbx files, find the redundant materials, but I cannot figure out which textures are attached to the materials, and then which textures have bitmaps.
any help is appreciated. | How can I determine with script which textures are attached to a blender material? | 0 | 0 | 0 | 959 |
34,390,336 | 2015-12-21T06:56:00.000 | 1 | 0 | 0 | 0 | python,algorithm,machine-learning,naivebayes,data-science | 34,391,185 | 2 | false | 0 | 0 | For Naive Bayes you can discretize your continuous numerical properties.
For example, for "% Owner occupied housing" you split all 100% scale into ten partitions(0-10%, 10-20%, ..., 90-100%) and get the frequency table.
For some properties you can move to binary values: Unemployment rate < 30% - yes/no.
Good luck in learning Machine Learning :) | 1 | 1 | 1 | I am trying to build and train a machine learning data science algorithm that correctly predicts what presidential won in what county. I have the following information for training data.
Total population Median age % BachelorsDeg or higher Unemployment rate Per capita income Total households Average household size % Owner occupied housing % Renter occupied housing % Vacant housing Median home value Population growth House hold growth Per capita income growth Winner
I am new to data science. I do know Naive Bayes is a good classifier for algorithms trying to predict with multiple properties. However, I read the first step for a naive bayes classifier requires a frequency table. My problem is all of the above properties are continuous numerical properties and don't fall into "Yes" or "No" categories. Do I not use a Naive Bayes classifier then?
I also considered using a k nearest neighbor algorithm, but that doesn't look like it will be the most accurate and weight the properties correctly for me...I am looking for a supervised algorithm because I have training data. Can anyone give me any recommendations as to what algorithm to use? In addition, being new to the field, how can I figure out what algorithm to use on my own in the future. | What data science programming algorithm is like Naive Bayes for continuous variables? | 0.099668 | 0 | 0 | 229 |
34,393,217 | 2015-12-21T10:11:00.000 | 5 | 0 | 1 | 0 | ipython | 52,896,545 | 2 | false | 0 | 0 | Press Tab+Shift, it works for jupyter notebook 5.6.0 version. | 1 | 17 | 0 | I read that pressing shift+tab after a function displays the function's docstring in an IPython notebook, but this does not seem to work in my IPython (no notebook). I run IPython 4.0.0 on Ubuntu.
Any suggestion? | How to show function parameters in IPython? | 0.462117 | 0 | 0 | 18,011 |
34,393,876 | 2015-12-21T10:47:00.000 | 4 | 0 | 0 | 0 | python,convolution,deep-learning,tensorflow,conv-neural-network | 34,568,528 | 2 | false | 0 | 0 | I am quoting user2576346's comments under the question:
As I understand, either it should be densely connected or be a convolutional layer ...
No this is not true. A more accurate way to phrase that statement would be that layers are either fully connected (dense) or locally connected.
A convolutional layer is an example of a locally connected layer. In general a locally connected layer is a layer in which each of its units is only connected to a local portion of the input. A convolutional layer is a special type of local layer which exhibits a spatial translation invariance as each convolutional feature detector is strided across the entire image in local receptive windows, e.g. of size 3x3 or 5x5 for example. | 1 | 6 | 1 | What is the difference between a "Local" layer and a "Dense" layer in a convolutional neural network? I am trying to understand the CIFAR-10 code in TensorFlow, and I see it uses "Local" layers instead of regular dense layers. Is there any class in TF that supports implementing "Local" layers? | Difference between local and dense layers in CNNs | 0.379949 | 0 | 0 | 2,001 |
34,397,628 | 2015-12-21T14:09:00.000 | 0 | 1 | 0 | 1 | python,linux,path,cron | 34,400,781 | 3 | false | 0 | 0 | Thanks for the responses after further searching, I found this that worked:
*/1 * * * * /home/ranveer/vimbackup.sh >> /home/ranveer/vimbackup.log 2>&1 | 1 | 0 | 0 | This is my first post here. I am a very big fan of Stack Overflow. This is the first time I could not find an answer to one of my questions.
Here is the scenario:
In my Linux system, I am not an admin or root. When I run a Python script, the output appears in the original folder, however when I run the same Python script as a Cron job, it appears in my accounts home folder. Is there anything I can do to direct the output to a desired folder? I do have the proper shebang path.
Thank you! | Cron job output in wrong Linux folder | 0 | 0 | 0 | 120 |
34,398,524 | 2015-12-21T14:55:00.000 | 0 | 0 | 1 | 0 | javascript,python,encoding,beautifulsoup,decoding | 34,399,571 | 2 | false | 0 | 0 | You can convert the characters to strings using the str() builtin function passing the character as argument | 1 | 4 | 0 | I am currently doing a Python Web Scraping project. Something that I am scraping saves symbols like é, à and other symbols (ex. Cyrillic) as codes like \u00e8, \u00e9. I am using BeautifulSoup to format whatever I get from the web and save it as a string. However I want to output the symbols to a file, not in the encoded format but as their actual symbols (ex. é). How can you decode the string so that I can output the symbols to file? | How to decode and output the following code (ex. \u00e8, \u00e9) in the format of a string to their symbols in Python | 0 | 0 | 0 | 2,566 |
34,400,788 | 2015-12-21T16:59:00.000 | 0 | 0 | 1 | 0 | python,file,3d,3dsmax | 34,401,590 | 1 | false | 0 | 1 | According to my understanding, there are some advanced graphics libraries out there for advanced usage, however Blender, (an application developed in python) supports python scripting. There are even a simple drag and drop game engine for simpler tasks. | 1 | 1 | 0 | I am new to 3d world. I would like to open 3ds files with python and visualize the objects.
I could not find any easy and straight forward way to play with 3ds max files.
Can you let me know how can I achieve this? | how to open 3ds files with python | 0 | 0 | 0 | 717 |
34,400,922 | 2015-12-21T17:07:00.000 | 0 | 0 | 0 | 0 | javascript,python,django,forms | 34,403,444 | 1 | false | 1 | 0 | There is some terminology confusion here, as SColvin points out; it's really not clear what you mean by "custom variables", and how those relates to models.
However your main confusion seems to be around forms. There is absolutely no requirement to use them: they are just one method of updating models. It is always possible to edit the models directly in code, and the data from that can of course come from Javascript if you want. The tutorial has good coverage of how to update a model from code without using a form.
If you're doing a lot of work via JS though, you probably want to look into the Django Rest Framework, which simplifies the process of converting Django model data to and from JSON to use in your client-side code. Again though DRF isn't doing anything you couldn't do manually in your own code, all without the use of forms. | 1 | 0 | 0 | I have a contract job for editing a Django application, and Django is not my main framework to use, so I have a question regarding models in it.
The application I am editing has a form that each user can submit, and every single model in the application is edited directly through the form.
From this perspective, it seems every model is directly a form object, I do not see any model fields that I could use for custom variables. Meaning instead of a "string" that I could edit with JS, I only see a TextField where the only way it could be edited is by including it on a form directly.
If I wanted to have some models that were custom variables, meaning I controlled them entirely through JS rather than form submissions, how would I do that in Django?
I know I could, for example, have some "hidden" form objects that I manipulated with JS. But this solution sounds kind of hacky. Is there an intended way that I could go about this?
Thanks!
(Edit: It seems most responses do not know what I am referring to. Basically I want to allow the client to perform some special sorting functions etc, in which case I will need a few additional lists of data. But I do not want these to be visible to the user, and they will be altered exclusively by js.
Regarding the response of SColvin, I understand that the models are a representation of the database, but from how the application I am working on is designed, it looks as if the only way the models are being used is strictly through forms.
For example, every "string" is a "TextField", and lets say we made a string called "myField", the exclusive use of this field would be to use it in templates with the syntax {{ form.myField|attr:"rows:4" }}.
There are absolutely no use of this model outside of the forms. Every place you see it in the application, there is a form object. This is why I was under the impression that is the primary way to edit the data found in the models.
I did the Django tutorial prior to accepting this project but do not remember seeing any way to submit changes to models outside of the forms.
So more specifically what I would like to do in this case: Let's say I wanted to add a string to my models file, and this string will NOT be included/edited on the form. It will be invisible to the user. It will be modified browser-side by some .js functions, and I would like it to be saved along when submitting the rest of the form. What would be the intended method for going about doing this?
If anyone could please guide me to documentation or examples on how to do this, it would be greatly appreciated! )
(Edit2: No responses ever since the first edit? Not sure if this post is not appearing for anyone else. Still looking for an answer!) | Django saving models by JS rather than form submissions? | 0 | 0 | 0 | 51 |
34,401,492 | 2015-12-21T17:44:00.000 | 2 | 0 | 1 | 0 | python,qt,ipython | 49,870,596 | 2 | false | 0 | 0 | you can use the os module in python with the following commands.
To get the current working directory use this - os.getcwd()
To set a new working directory use this - os.chdir(<absolute path>)
Hope this helps. | 1 | 1 | 0 | I installed WinPython-32bit-3.4.3.7 and use Ipython QT console.
Its defalult working directory is "WinPython-32bit-3.4.3.7\notebooks" by the magic command "%pwd".
I would like to change the default directory to "C:\workspace",for example.
I read the configuration files in "settings\.ipython\profile_default\ipython_config.py and ipython_kernel_config.py".
But I don't find any good solution.
Please tell me whatever trick I can change the default directory!
UPDATE: I understand the way I change the default directory to any directory I like with opening notebook but when applying to qt console, it failed.
In case of Qt Console, isn't it thought to be necessary to change the default directory? | ipython qt console: change the default working directory | 0.197375 | 0 | 0 | 1,409 |
34,401,548 | 2015-12-21T17:48:00.000 | 0 | 0 | 1 | 0 | python,logging | 34,401,743 | 3 | false | 0 | 0 | dictConfig takes a dict as its parameter and does not care, how you got it. You can read the dict from file, compute it, decode it, or create any other way you want, as long, as it has proper structure.
So yes, you can, but you have to extract the dict from file yourself (probabelly using some library) | 2 | 6 | 0 | The examples of seen of using dictConfig to set up logging in Python show the dictionary be created in the Python script. I'd like to use a configuration file, but the documentation says fileConfig is older and less capable than dictConfig. Can a dictConfig be set up from a configuration file?
Thank you very much. | Python logging: can dictConfig be read from a file? | 0 | 0 | 0 | 1,749 |
34,401,548 | 2015-12-21T17:48:00.000 | 1 | 0 | 1 | 0 | python,logging | 34,402,034 | 3 | false | 0 | 0 | No, there is no parser in the standard library that'll take a file and spit out a dictConfig compatible dictionary.
You'll have to create your own file format to populate your dictionary. This is what usually happens; application specific configuration translated to a dictConfig setup, where your configuration file offers a subset of the full dictConfig spectrum. | 2 | 6 | 0 | The examples of seen of using dictConfig to set up logging in Python show the dictionary be created in the Python script. I'd like to use a configuration file, but the documentation says fileConfig is older and less capable than dictConfig. Can a dictConfig be set up from a configuration file?
Thank you very much. | Python logging: can dictConfig be read from a file? | 0.066568 | 0 | 0 | 1,749 |
34,405,180 | 2015-12-21T22:10:00.000 | 2 | 0 | 1 | 0 | java,python,json,serialization,ipc | 34,405,239 | 1 | false | 1 | 0 | I don't think so.
You seem like you are heading in the right direction when you said .
I don't want to get into plain text processing that could potentially
be buggy.
Which is absolutely true and why you should consider formatted text like JSON.
And unfortunately any formatting means overhead : increasing the size of the data you are sending.
So you either need to improvise your own format that has the least amount of "extra stuff" in it. Or use the available ones like Json , XML ... | 1 | 1 | 0 | I have a python application that will be talking to a Java server. The python application will be sending out simple messages continuously to the java server with a handful of values [ For eg: Name, studentRollNumber, marks ]
I considered having this communication take place in json format since I don't want to get into plain text processing that could potentially be buggy. However, if I use json I'm going to keep transferring the names of the fields [ such as "name", "studentRollNumber" ] etc. multiple times. Is there a better way to do this ?
TL;DR
What is a good way to serialize/deserialize an object into text that works in both Java and Python without being too verbose ? | What data interchange formats can be used for a python and a java application to talk to each other? | 0.379949 | 0 | 0 | 365 |
34,405,674 | 2015-12-21T22:49:00.000 | 0 | 0 | 1 | 0 | python,opengl,import,module,glfw | 34,445,504 | 1 | true | 0 | 0 | I dont have to put it into the python DLLs folder instead I have to copy the glfw3.dll which I have downloaded from glfw into system32 folder. Then the glfw module from python doesnt show any errors. | 1 | 0 | 0 | Currently I am trying to get glfw to work. Everytime I try to import it in python it raises
Failed to load GLFW3 shared library.
I have downloaded glfw3 precompiled dll and placed it into the python DLLs folder but it still shows this error. I have also tried to install through easy_install but this error still occurs. | cant get glfw correctly imported | 1.2 | 0 | 0 | 1,369 |
34,409,373 | 2015-12-22T05:59:00.000 | 0 | 1 | 0 | 0 | python,pycharm | 34,421,968 | 1 | false | 0 | 0 | Once you have selected that test-oriented run configuration once, the next times you can just do Ctrl-r, which runs the most-recently-run run configuration. | 1 | 0 | 0 | I would like to run all my tests (or part of it) with a single keyboard shortcut so to achieve faster test cycle.
So what I'm doing currently is to press ctrl+shift+R (on OSX) to prompt Run... dialog and then select the run test configuration, but it requires two stroke and a mental load of selecting appropriate configuration.
Is there a way for me to run my tests quickly like how I can run my app( single stroke of ctrl+R)? | Set different keyboard shortcut for running unittest on Pycharm | 0 | 0 | 0 | 40 |
34,410,203 | 2015-12-22T07:04:00.000 | 2 | 0 | 0 | 0 | python,django,mongodb | 34,410,315 | 2 | true | 1 | 0 | models.py is the Django ORM way of inspecting a fixed relational schema and generating the relevant SQL code to initialize (or modify) the database. "ORM" stands for "Object-Relational Mapping".
Mongo is not relational, hence you don't need this type of schema.
(Of course, that can cause a lot of other problems if the needs of your project change later...)
But you don't need a relational schema since you're not using a relational DB. | 1 | 1 | 0 | Recetly I've seen an app powered with django and mongodb as backend,thing is that app doesn't have a models.py file.All the datas are inserted directly in views.py.I Just need a little clarification about this particular things "Using django without models.py with mongodb." | why "models.py" is not required in django when using mongodb as backend? | 1.2 | 0 | 0 | 517 |
34,410,613 | 2015-12-22T07:32:00.000 | 0 | 0 | 1 | 0 | python | 34,428,433 | 3 | true | 0 | 0 | Sometimes pip refers to python 2 if both are installed on the system. You should try pip3.
Generally on Mac you can find pip3 or python3 etc in
/Library/Frameworks/Python.framework/Versions/3.4/bin/ | 1 | 3 | 0 | I'm trying to install html2text and I've used sudo pip install html2text but I get the error ImportError: No module named 'html2text'I'm not sure if i need to install any things before doing the html2text install command. I'm very new to Python. I'm using Python 3.5. (Using Mac) | How to install html2text (Python) | 1.2 | 0 | 0 | 9,907 |
34,412,266 | 2015-12-22T09:18:00.000 | 0 | 0 | 0 | 0 | python,django | 34,412,549 | 1 | false | 1 | 0 | There are really two ways to go here (that I can think of off top of my head):
Create a temporary field to store the current data of videos.Video.machine, remove videos.Video.machine field, add videos.Video.machine back as a m2m field, migrate the data from the temporary field into this new field, and remove the temporary field.
Create a new field, i.e. videos.Video.machines that is m2m, copy the current field videos.Video.machine into it, and then remove the videos.Video.machine field.
I would personally go with the second since it is not only easier but the naming makes more sense anyway! | 1 | 2 | 0 | I'm trying to change one variable from ForeignKey to ManyToManyField. Obtained the following error when I try to do a command migrate:
"ValueError: Cannot alter field videos.Video.machine into videos.Video.machine - they are not compatible types (you cannot alter to or from M2M fields, or add or remove through= on M2M fields)"
How can solve this problem? | How change from ForeignKey to ManyToManyField in Django? | 0 | 0 | 0 | 624 |
34,412,739 | 2015-12-22T09:43:00.000 | 0 | 0 | 1 | 0 | python,python-3.x,fonts,pycharm | 39,433,918 | 4 | false | 0 | 0 | Go to File\Settings\Editor\Color & Fonts and choose save as to save currently used schema by a new name in order to make changes on a new schema. Then in mentioned direction go to console font and set size. | 1 | 47 | 0 | There are terminal and python console in pycharm, which are very convenient. But I found that the font size was too small to recognize in terminal or python console. How can change the font size in the terminal or python console? | Set the font size in pycharm's python console or terminal | 0 | 0 | 0 | 24,007 |
34,412,869 | 2015-12-22T09:50:00.000 | 1 | 0 | 0 | 0 | pythonanywhere | 34,435,254 | 1 | false | 1 | 0 | That's kind of odd; if the SQLite DB was in the git repository, and was uploaded correctly, I'd expect it to work. Perhaps the database is in a different directory? On PythonAnywhere, the working directory of your running web app might be (actually, probably is) different to your local machine. And if you're specifying the database using a relative path (which you probably are) then that might mean that the one you created locally is somewhere different to where it is on PythonAnywhere.
BTW, from my memories of the Django Girls tutorial (I coached for one session a few months ago) you're not actually expected to put the database in your Git repository. It's not how websites are normally managed. You'd normally have one database locally, for testing, where you'd be able to put random testing data, and then a completely different one on your live site, with posts for public consumption. | 1 | 3 | 0 | I am following the Djangogirls tutorial according to which I added new posts in the blog on the Django admin. I created a template using Django templates to display this Dynamic data. I checked it by opening 127.0.0.1:8000 in browser and I was able to see the data. Then for deploying this site on Pythonanywhere, I pushed the data to github from my local rep using git push and did git pull on Pythonanywhere from github.All the files including the db.sqlite3(database) file were updated properly in pythonanywhere but still I could not the see the data after running my webapp on pythonanywhere.Then , I manually removed the db.sqlite3 file from pythonanywhere and uploaded the same file from my local desktop and it worked. Why did this work? and is there an alternative for this? | Regarding the database in Pythonanywhere | 0.197375 | 0 | 0 | 194 |
34,413,900 | 2015-12-22T10:41:00.000 | 1 | 0 | 1 | 0 | python,dictionary | 34,414,371 | 3 | false | 0 | 0 | You don't seem to have bothered benchmarking the alternatives. It turns out that the difference is quite slight and I also find inconsistent differences. Besides this is an implementation detail how it's implemented, since both integers and strings are immutable they could possibly be compared as pointers.
What you should consider is which one is the natural choice of key. For example if you don't interpret the key as a number anywhere else there's little reason to convert it to an integer.
Additionally you should consider if you want to consider keys equal if their numeric value is the same or if they need to be lexically identical. For example if you would consider 00 the same key as 0 you would need to interpret it as integer and then integer is the proper key, if on the other hand you want to consider them different then it would be outright wrong to convert them to integers (as they would become the same then). | 1 | 5 | 0 | Say, I'm going to construct a probably large dictionary in Python 3 for in-memory operations. The dictionary keys are integers, but I'm going to read them from a file as string at first.
As far as storage and retrieval are concerned, I wonder if it matters whether I store the dictionary keys as integers themselves, or as strings.
In other words, would leaving them as integers help with hashing? | Trade-off in Python dictionary key types | 0.066568 | 0 | 0 | 269 |
34,422,015 | 2015-12-22T18:12:00.000 | 2 | 0 | 1 | 0 | python,python-multiprocessing,joblib | 34,423,811 | 3 | false | 0 | 0 | The answer to the specific question is: I don't know of a ready-made utility.
A minimal(*) core refactoring would to be add a named parameter to your function currently creating child processes. The default parameter would be your current behavior, and an other value would switch to a behavior compatible with how you are running tests(**).
(*: there might be other, may be better, design alternatives to consider but we do not have enough information)
(**: one may say that the introduction of a conditional behavior would require to test that as well, and we are back to square one...) | 2 | 15 | 0 | I have a function that uses multiprocessing (specifically joblib) to speed up a slow routine using multiple cores. It works great; no questions there.
I have a test suite that uses multiprocessing (currently just the multiprocessing.Pool() system, but can change it to joblib) to run each module's test functions independently. It works great; no questions there.
The problem is that I've now integrated the multiprocessing function into the module's test suite, so that the pool process runs the multiprocessing function. I would like to make it so that the inner function knows that it is already being multiprocessed and not spin up more forks of itself. Currently the inner process sometimes hangs, but even if it doesn't, obviously there are no gains to multiprocessing within an already parallel routine.
I can think of several ways (with lock files, setting some sort of global variable, etc.) to determine the state we're in, but I'm wondering if there is some standard way of figuring this out (either in PY multiprocessing or in joblib). If it only works in PY3, that'd be fine, though obviously solutions that also work on 2.7 or lower would be better. Thanks! | Can functions know if they are already multiprocessed in Python (joblib) | 0.132549 | 0 | 0 | 485 |
34,422,015 | 2015-12-22T18:12:00.000 | 0 | 0 | 1 | 0 | python,python-multiprocessing,joblib | 58,937,307 | 3 | true | 0 | 0 | Check multiprocessing.current_process().daemon -- it will return True if the current process is a spawned one. (Answering own question) | 2 | 15 | 0 | I have a function that uses multiprocessing (specifically joblib) to speed up a slow routine using multiple cores. It works great; no questions there.
I have a test suite that uses multiprocessing (currently just the multiprocessing.Pool() system, but can change it to joblib) to run each module's test functions independently. It works great; no questions there.
The problem is that I've now integrated the multiprocessing function into the module's test suite, so that the pool process runs the multiprocessing function. I would like to make it so that the inner function knows that it is already being multiprocessed and not spin up more forks of itself. Currently the inner process sometimes hangs, but even if it doesn't, obviously there are no gains to multiprocessing within an already parallel routine.
I can think of several ways (with lock files, setting some sort of global variable, etc.) to determine the state we're in, but I'm wondering if there is some standard way of figuring this out (either in PY multiprocessing or in joblib). If it only works in PY3, that'd be fine, though obviously solutions that also work on 2.7 or lower would be better. Thanks! | Can functions know if they are already multiprocessed in Python (joblib) | 1.2 | 0 | 0 | 485 |
34,427,788 | 2015-12-23T02:38:00.000 | 1 | 0 | 1 | 0 | python,installation,geopandas | 60,911,054 | 17 | false | 0 | 0 | When using pip to install GeoPandas, you need to make sure that all dependencies are installed correctly.
First install shapely, fiona, pyproj and rtree
Then you install geopandas
shapely and fiona provide binary wheels with the dependencies included for Mac and Linux, but not for Windows.
pyproj provides binary wheels with depencies included for Mac, Linux, and Windows.
rtree does not provide wheels.
pip install fiona,
Pip install shapely,pyproj,rtree | 2 | 37 | 0 | I have tried to install geopandas via I python by running !pip install geopandas, but this fails with "python setup.py egg_info" failed with error code 1 and then Path to long directory. I read online that pyproj is required for geopandas and also tried to install it however no luck, similar error. Would anyone be able to point me in the right direction? Thank you.
Oh by the way, if this helps, I was able to install shapely, fiona, and Descartes using this method. | how to successfully install pyproj and geopandas? | 0.011764 | 0 | 0 | 92,205 |
34,427,788 | 2015-12-23T02:38:00.000 | 0 | 0 | 1 | 0 | python,installation,geopandas | 61,111,187 | 17 | false | 0 | 0 | I was running into this same problem (it might not be fully over) but I'll show you what I did. I basically did the same things that a lot of people had mentioned and then by accident stumbled upon something that worked well.
Steps involved:
Remove the following packages: fiona, gdal, pyproj, geoplot, rtree via the command 'conda remove fiona' etc. in Anaconda Prompt
Install geoplot in Anaconda Prompt: conda install geoplot -c conda-forge
This has geopandas and all it's dependencies built into it (fiona, gdal, pyproj, etc). I'm not sure this is an ultimate fix but it worked for me! If this doesn't work for you, I would recommend following Vesanen's instructions as that also worked for me for awhile. The problem I ran into was once I had geopandas installed I couldn't install the package geoplot without Spyder crashing. | 2 | 37 | 0 | I have tried to install geopandas via I python by running !pip install geopandas, but this fails with "python setup.py egg_info" failed with error code 1 and then Path to long directory. I read online that pyproj is required for geopandas and also tried to install it however no luck, similar error. Would anyone be able to point me in the right direction? Thank you.
Oh by the way, if this helps, I was able to install shapely, fiona, and Descartes using this method. | how to successfully install pyproj and geopandas? | 0 | 0 | 0 | 92,205 |
34,428,046 | 2015-12-23T03:13:00.000 | 2 | 0 | 0 | 1 | python,database,rest,concurrency,etag | 34,428,792 | 3 | false | 1 | 0 | This is really a question about how to use ORMs to do updates, not about ETags.
Imagine 2 processes transferring money into a bank account at the same time -- they both read the old balance, add some, then write the new balance. One of the transfers is lost.
When you're writing with a relational DB, the solution to these problems is to put the read + write in the same transaction, and then use SELECT FOR UPDATE to read the data and/or ensure you have an appropriate isolation level set.
The various ORM implementations all support transactions, so getting the read, check and write into the same transaction will be easy. If you set the SERIALIZABLE isolation level, then that will be enough to fix race conditions, but you may have to deal with deadlocks.
ORMs also generally support SELECT FOR UPDATE in some way. This will let you write safe code with the default READ COMMITTED isolation level. If you google SELECT FOR UPDATE and your ORM, it will probably tell you how to do it.
In both cases (serializable isolation level or select for update), the database will fix the problem by getting a lock on the row for the entity when you read it. If another request comes in and tries to read the entity before your transaction commits, it will be forced to wait. | 3 | 6 | 0 | Maybe I'm overlooking something simple and obvious here, but here goes:
So one of the features of the Etag header in a HTTP request/response it to enforce concurrency, namely so that multiple clients cannot override each other's edits of a resource (normally when doing a PUT request). I think that part is fairly well known.
The bit I'm not so sure about is how the backend/API implementation can actually implement this without having a race condition; for example:
Setup:
RESTful API sits on top of a standard relational database, using an ORM for all interactions (SQL Alchemy or Postgres for example).
Etag is based on 'last updated time' of the resource
Web framework (Flask) sits behind a multi threaded/process webserver (nginx + gunicorn) so can process multiple requests concurrently.
The problem:
Client 1 and 2 both request a resource (get request), both now have the same Etag.
Both Client 1 and 2 sends a PUT request to update the resource at the same time. The API receives the requests, proceeds to uses the ORM to fetch the required information from the database then compares the request Etag with the 'last updated time' from the database... they match so each is a valid request. Each request continues on and commits the update to the database.
Each commit is a synchronous/blocking transaction so one request will get in before the other and thus one will override the others changes.
Doesn't this break the purpose of the Etag?
The only fool-proof solution I can think of is to also make the database perform the check, in the update query for example. Am I missing something?
P.S Tagged as Python due to the frameworks used but this should be a language/framework agnostic problem. | Etags used in RESTful APIs are still susceptible to race conditions | 0.132549 | 0 | 0 | 1,416 |
34,428,046 | 2015-12-23T03:13:00.000 | 1 | 0 | 0 | 1 | python,database,rest,concurrency,etag | 63,120,699 | 3 | false | 1 | 0 | You are right that you can still get race conditions if the 'check last etag' and 'make the change' aren't in one atomic operation.
In essence, if your server itself has a race condition, sending etags to the client won't help with that.
You already mentioned a good way to achieve this atomicity:
The only fool-proof solution I can think of is to also make the database perform the check, in the update query for example.
You could do something else, like using a mutex lock. Or using an architecture where two threads cannot deal with the same data.
But the database check seems good to me. What you describe about ORM checks might be an addition for better error messages, but is not by itself sufficient as you found. | 3 | 6 | 0 | Maybe I'm overlooking something simple and obvious here, but here goes:
So one of the features of the Etag header in a HTTP request/response it to enforce concurrency, namely so that multiple clients cannot override each other's edits of a resource (normally when doing a PUT request). I think that part is fairly well known.
The bit I'm not so sure about is how the backend/API implementation can actually implement this without having a race condition; for example:
Setup:
RESTful API sits on top of a standard relational database, using an ORM for all interactions (SQL Alchemy or Postgres for example).
Etag is based on 'last updated time' of the resource
Web framework (Flask) sits behind a multi threaded/process webserver (nginx + gunicorn) so can process multiple requests concurrently.
The problem:
Client 1 and 2 both request a resource (get request), both now have the same Etag.
Both Client 1 and 2 sends a PUT request to update the resource at the same time. The API receives the requests, proceeds to uses the ORM to fetch the required information from the database then compares the request Etag with the 'last updated time' from the database... they match so each is a valid request. Each request continues on and commits the update to the database.
Each commit is a synchronous/blocking transaction so one request will get in before the other and thus one will override the others changes.
Doesn't this break the purpose of the Etag?
The only fool-proof solution I can think of is to also make the database perform the check, in the update query for example. Am I missing something?
P.S Tagged as Python due to the frameworks used but this should be a language/framework agnostic problem. | Etags used in RESTful APIs are still susceptible to race conditions | 0.066568 | 0 | 0 | 1,416 |
34,428,046 | 2015-12-23T03:13:00.000 | 1 | 0 | 0 | 1 | python,database,rest,concurrency,etag | 34,428,187 | 3 | false | 1 | 0 | Etag can be implemented in many ways, not just last updated time. If you choose to implement the Etag purely based on last updated time, then why not just use the Last-Modified header?
If you were to encode more information into the Etag about the underlying resource, you wouldn't be susceptible to the race condition that you've outlined above.
The only fool proof solution I can think of is to also make the database perform the check, in the update query for example. Am I missing something?
That's your answer.
Another option would be to add a version to each of your resources which is incremented on each successful update. When updating a resource, specify both the ID and the version in the WHERE. Additionally, set version = version + 1. If the resource had been updated since the last request then the update would fail as no record would be found. This eliminates the need for locking. | 3 | 6 | 0 | Maybe I'm overlooking something simple and obvious here, but here goes:
So one of the features of the Etag header in a HTTP request/response it to enforce concurrency, namely so that multiple clients cannot override each other's edits of a resource (normally when doing a PUT request). I think that part is fairly well known.
The bit I'm not so sure about is how the backend/API implementation can actually implement this without having a race condition; for example:
Setup:
RESTful API sits on top of a standard relational database, using an ORM for all interactions (SQL Alchemy or Postgres for example).
Etag is based on 'last updated time' of the resource
Web framework (Flask) sits behind a multi threaded/process webserver (nginx + gunicorn) so can process multiple requests concurrently.
The problem:
Client 1 and 2 both request a resource (get request), both now have the same Etag.
Both Client 1 and 2 sends a PUT request to update the resource at the same time. The API receives the requests, proceeds to uses the ORM to fetch the required information from the database then compares the request Etag with the 'last updated time' from the database... they match so each is a valid request. Each request continues on and commits the update to the database.
Each commit is a synchronous/blocking transaction so one request will get in before the other and thus one will override the others changes.
Doesn't this break the purpose of the Etag?
The only fool-proof solution I can think of is to also make the database perform the check, in the update query for example. Am I missing something?
P.S Tagged as Python due to the frameworks used but this should be a language/framework agnostic problem. | Etags used in RESTful APIs are still susceptible to race conditions | 0.066568 | 0 | 0 | 1,416 |
34,428,351 | 2015-12-23T03:50:00.000 | 0 | 0 | 0 | 0 | python,django | 34,428,403 | 1 | false | 1 | 0 | It depends on how you are persisting your sessions. If you are using cookies to persist your sessions which seems likely, and you aren't willing to use a .site.com cookie domain, then you need to offload your session storage to something like Redis or some other key/store sort of server agnostic option. | 1 | 0 | 0 | I have a Django app that has wildcard subdomains. The user has multiple login sessions across these subdomains. For example, he goes to sd1.site.com and logs in(HTTP POST request to sd1.site.com/login/) with credentials username1 and password1. This creates a session for the user on sd1.site.com. He then goes to sd2.site.com and logs in with credentials username2 and password2. This creates a session for the user on sd2.site.com.
My end goal is to tell sd1.site.com that the user is logged in from sd2.site.com as well. My plan is to store a session variable called 'domains_logged_in' with value ['sd1','sd2']. Both sd1 and sd2 should be able to access 'domains_logged_in'.
Setting SESSION_COOKIE_DOMAIN = '.site.com' is not an option as it makes it difficult to manage multiple sessions and is not entirely secure. Am I missing something? | How do I share some session variables across Django Subdomain sessions | 0 | 0 | 0 | 211 |
34,430,982 | 2015-12-23T07:49:00.000 | 0 | 0 | 1 | 0 | python,anaconda | 34,431,271 | 1 | false | 0 | 0 | If you want to start the .py files with a double click you have to associate the python interpreter (its an interpreter not a compiler, because it doesn't generate an binary file (.exe)) with the file extension. You can do this if you right click on the file in the file explorer and select open with. | 1 | 1 | 0 | I have both Anaconda2 and Python2.7 installed on my computer (the Python2.7 was downloaded and installed directly form the www.python.org website). I want to use Anaconda2 and not Python2.7 to run my .py files (because Anaconda2 has some libraries that Python2.7 doesn't). However, the default compiler seems to be the one from Python2.7, even though I've added anaconda2 to my PATH (in Environment Variables). I've also tried deleting python2.7 from the PATH.
Has this happen to anyone and how did you resolve it?
Thank you all!
Edit: I'm using Windows 7. | Set Anaconda2 as default instead of the 'official' python 2.7 on Windows | 0 | 0 | 0 | 726 |
34,436,084 | 2015-12-23T12:48:00.000 | 1 | 0 | 0 | 1 | python,db2,dashdb | 34,651,608 | 3 | false | 0 | 0 | We are able to install the driver successfully and connection to db is established without any problem.
The steps are:
1) Upgraded to OS X El Capitan
2) Install pip - sudo pip install
3) Install ibm_db - sudo pip install ibm_db
4) During installation, below error was hit
Referenced from: /Users/roramana/Library/Python/2.7/lib/python/site-packages/ibm_db.so
Reason: unsafe use of relative rpath libdb2.dylib in /Users/roramana/Library/Python/2.7/lib/python/site-packages/ibm_db.so with restricted binary
After disabling the System Integrity Protection, installation went fine.
From the error sql1042c, it seems like you are hitting some environment setup issue.
You could try setting DYLD_LIBRARY_PATH to the path where you have extracted the odbc and cli driver .
If the problem still persist, please collect db2 traces and share with us:
db2trc on -f trc.dmp
run your repro
db2trc off
db2trc flw trc.dmp trc.flw
db2trc fmt trc.dmp trc.fmt
Share the trc.flw and trc.fmt files. | 1 | 1 | 0 | I can't connect to a DB2 remote server using Python. Here is what I've done:
Created a virtualenv with Python 2.7.10 (On Mac OS X 10.11.1)
installed ibm-db using sudo pip install ibm_db
Ran the following code:
import ibm_db
ibm_db.connect("my_connection_string", "", "")
I then get the following error:
Exception: [IBM][CLI Driver] SQL1042C An unexpected system error
occurred. SQLSTATE=58004 SQLCODE=-1042
I've googled around for hours and trying out different solutions. Unfortunately, I haven't been able to find a proper guide for setting the environment up on Mac OS X + Python + DB2. | Can't connect to DB2 Driver through Python: SQL1042C | 0.066568 | 1 | 0 | 2,550 |
34,437,867 | 2015-12-23T14:31:00.000 | 17 | 0 | 1 | 0 | python,task,python-asyncio | 42,180,040 | 2 | false | 0 | 0 | Adding to the above answer:
If the task at hand is I/O bound and operates on a shared data, coroutines and asyncio are probably the way to go.
If on the other hand, you have CPU-bound tasks where data is not shared, a multiprocessing system like Celery should be better.
If the task at hand is a both CPU and I/O bound and sharing of data is not required, I would still use Celery.You can use async I/O from within Celery!
If you have a CPU bound task but with the need to share data, the only viable option as I see now is to save the shared data in a database. There have been recent attempts like pyparallel but they are still work in progress. | 1 | 26 | 0 | I've been reading about asyncio module in python 3, and more broadly about coroutines in python, and I can't get what makes asyncio such a great tool.
I have the feeling that all you can do with coroutines, you can do it better by using task queues based of the multiprocessing module (celery for example).
Are there usecases where coroutines are better than task queues ? | asyncio and coroutines vs task queues | 1 | 0 | 0 | 10,156 |
34,439,775 | 2015-12-23T16:18:00.000 | 1 | 0 | 0 | 0 | python,angularjs,django,postgresql,python-3.x | 34,440,072 | 3 | false | 1 | 0 | I don't think you need to start worrying about the setup right away. I would discourage premature optimizations. Rather, run the app in production, profile it. See what affects the performance when you hit scale - you would know what's the bottleneck. | 2 | 0 | 0 | So, I'm looking at writing an app with python2 django(-rest-framework), postgres and angular.
I'm aware there are lots of things that can be done
multi-server setup behind load balancer
DB replication/sharding?
caching (in various ways)
swapping DRF serialiser for serpy
running on python3
running on pypy
my question is - Which of these (or other things) should really be done right at the start of the project? | Should I take steps to ensure a Django app can scale before writing it? | 0.066568 | 0 | 0 | 132 |
34,439,775 | 2015-12-23T16:18:00.000 | 1 | 0 | 0 | 0 | python,angularjs,django,postgresql,python-3.x | 34,440,311 | 3 | false | 1 | 0 | The first and main things you have to get right are a clean and correct db schema and clear, readable and correctly factored (DRY... unless it's accidental duplication) and decoupled code. If you know to design a relational DB schema and learn to use Python and Django properly you shouldn't have much problems so far, and if you get both these things right it will (well it should) be easy to scale - by adding cache where needed (Redis, Memcache, or an intermediary NoSQL document database storing "pre-processed" versions of your often accessed data), adding servers, load-balancing etc, depending on your application's needs. Django is built to scale easily, and unless you do stupid things it does scale easily. | 2 | 0 | 0 | So, I'm looking at writing an app with python2 django(-rest-framework), postgres and angular.
I'm aware there are lots of things that can be done
multi-server setup behind load balancer
DB replication/sharding?
caching (in various ways)
swapping DRF serialiser for serpy
running on python3
running on pypy
my question is - Which of these (or other things) should really be done right at the start of the project? | Should I take steps to ensure a Django app can scale before writing it? | 0.066568 | 0 | 0 | 132 |
34,441,206 | 2015-12-23T17:54:00.000 | 1 | 0 | 0 | 0 | python,c | 34,441,467 | 1 | true | 0 | 0 | To compile your code so expression statements invoke sys.displayhook, you need to pass Py_single_input as the start parameter, and you need to provide one statement at a time. | 1 | 1 | 0 | In a python shell, if I type a = 2 nothing is printed. If I type a 2 gets printed automatically. Whereas, this doesn't happen if I run a script from idle.
I'd like to emulate this shell-like behavior using the python C api, how is it done?
For instance, executing this code PyRun_String("a=2 \na", Py_file_input, dic, dic); from C, will not print anything as the output.
I'd like to simulate a shell-like behavior so that when I execute the previous command, the value "2" is stored in a string. Is it possible to do this easily, either via python commands or from the C api? Basically, how does the python shell do it? | Simulate shell behavior (force eval of last command to be displayed) | 1.2 | 0 | 0 | 81 |
34,442,468 | 2015-12-23T19:30:00.000 | 2 | 0 | 1 | 0 | git,python-2.7,python-3.x,bitbucket | 34,442,503 | 2 | false | 0 | 0 | Yes, branching is correct. It is possible that you will want to fix a bug in the Python 2 branch, so it should be a branch, and not a tag. Tags are for releases.
I would name the Python 2 branch python2 and name the Python 3 branch master. This way, it is more obvious which branch is active. | 2 | 0 | 0 | I have a API that was developed using python 2.7. I have some developers that are already using it. I would like to migrate this API to python 3.4.
I will not give support for the python 2.7 API anymore.
My code is stored in bit bucket. What is the best strategy?
Just make a simple branch, e.g., "python3.4"?
Make a tag on the master branch (python 2.7) and start a new branch (python 3.4)? | Git - branch or tag? | 0.197375 | 0 | 0 | 199 |
34,442,468 | 2015-12-23T19:30:00.000 | 0 | 0 | 1 | 0 | git,python-2.7,python-3.x,bitbucket | 34,445,432 | 2 | false | 0 | 0 | Perhaps, as far as the users are concerned, you don't actually have to do anything other than announcing that support for the 2.7 API has ended with the most recent release (which already has a tag). No immediate git action is required.
(If you want to give the users something newer which still supports 2.7, then that calls for one more "last 2.7-based release" before the cut-over.)
A more recent tag denoting the actual commit before the cut-over would be useful for your internal purposes, though. Cutting over to the new API is a significant change, which perhaps deserves to be marked by a tag so you can easily refer to this historic point.
You don't have to make any support branch now. Doing so could signal to users that you intend to support the API, which you don't. ("Oh goodie, I see a python2 branch; that's where I can expect fixes, in spite of the announcement that there won't be any!") It's easy to later create the branch based on a suitable tag, if you change your mind.
That branch could be made from the point before the cut-over, or farther back from the last official release supporting the 2.7 API: there is no need to decide the exact branch point now if you have no intent to support the API at all.
If you later create the branch based on a tag, git won't automatically set up tracking (that is to say, you can't do git branch -t). But in this situation you don't need that anyway, because you won't be rebasing the python2 support branch, only cherry-picking fixes into it. | 2 | 0 | 0 | I have a API that was developed using python 2.7. I have some developers that are already using it. I would like to migrate this API to python 3.4.
I will not give support for the python 2.7 API anymore.
My code is stored in bit bucket. What is the best strategy?
Just make a simple branch, e.g., "python3.4"?
Make a tag on the master branch (python 2.7) and start a new branch (python 3.4)? | Git - branch or tag? | 0 | 0 | 0 | 199 |
34,448,086 | 2015-12-24T06:06:00.000 | 0 | 0 | 1 | 1 | python,command-line-arguments | 34,448,313 | 4 | false | 0 | 0 | You can use the python interpreter as a compiler too to compile your python programs.
Say you have a test.py file which you want to compile; then you can use python test.py to compile the file.
To be true, you are not actually compiling the file, you are executing it line by line (well, call it interpreting)
For command line arguments you can use sys.argv as already mentioned in the above answers. | 1 | 1 | 0 | I have used both Python and C for a while. C is good in a way that i can use Windows cmd or anything like that to compile files and easily read command line arguments. However, the only thing that runs python that I know is IDLE which is like an interpreter and doesnt take command-line arguments and it's hard to work with. Is there anything like the C's cmd and a compiler for python 3.x?
Thanks | Python shell cmd and executable formats | 0 | 0 | 0 | 419 |
34,451,066 | 2015-12-24T10:11:00.000 | 0 | 0 | 0 | 0 | python,xml,openerp | 34,459,305 | 2 | false | 1 | 0 | You can set chat rules via security section in your im_chat addon folder. (/openerp/addons/im_chat/security). | 1 | 1 | 0 | I just wanna ask you that How to remove users from instant messaging in odoo. These are the users are who don't belongs to any of groups in my module.Please help me out.
Thanks in advance | How to remove users from chat in odoo8? | 0 | 0 | 0 | 354 |
34,453,138 | 2015-12-24T12:54:00.000 | 1 | 0 | 1 | 0 | python,function,methods | 34,453,210 | 3 | false | 0 | 0 | Both are logically types of functions, but method or member function specifically refers to the subset of functions that are defined on classes and that operate on specific instances of the class.
In Python, specifically, it may also refer to functions where the self parameter has already been bound to a specific object (as opposed to the free-standing form where self isn't bound). | 2 | 3 | 0 | I'm learning Python. Have knowledge in other languages. There's a difference between methods and functions in python which confuses me. There's a very minute difference. Is my above conclusion on functions and methods true? In what better way can they be differentiated. | In python, functions are blocks of code that perform desired action whereas methods are functions specific to some objects. Is this statement true? | 0.066568 | 0 | 0 | 57 |
34,453,138 | 2015-12-24T12:54:00.000 | 1 | 0 | 1 | 0 | python,function,methods | 34,453,259 | 3 | false | 0 | 0 | Yes, in Python functions and methods are different but similar. Methods needs to take the 'self'(the reference on the caller object) keyword like first parameter,instead functions needs 0 or more parameters. | 2 | 3 | 0 | I'm learning Python. Have knowledge in other languages. There's a difference between methods and functions in python which confuses me. There's a very minute difference. Is my above conclusion on functions and methods true? In what better way can they be differentiated. | In python, functions are blocks of code that perform desired action whereas methods are functions specific to some objects. Is this statement true? | 0.066568 | 0 | 0 | 57 |
34,453,294 | 2015-12-24T13:07:00.000 | 0 | 0 | 1 | 0 | python,processing,jython-2.7 | 34,454,320 | 1 | false | 0 | 0 | Look in the Processing sketchbook directory. You can find the location of this directory by going to File -> Preferences, and looking at the sketchbook location setting at the top.
In your sketchbook directory, you should see a modes directory, and in that you'll find a PythonMode directory. The jython.jar file is in the mode directory.
For example, my jython.jar file is located at C:\Users\Kevin\Documents\Processing3\modes\PythonMode\mode\jython.jar.
You could also just search your computer for jython. | 1 | 0 | 0 | I'm using Processing 3.0.1 on Windows 10. I have installed Python mode.
As I understand Python mode in Processing 3.0.1 uses Jython 2.7.x.
Can some one tell me in which directory I can find the Python/Jython stuff?
Kind regards
Klaus | In which directory are the files for Python mode in Processing 3.0.1 are living? | 0 | 0 | 0 | 193 |
34,455,089 | 2015-12-24T15:48:00.000 | 0 | 0 | 1 | 0 | ipython-notebook | 34,478,609 | 1 | false | 0 | 0 | Maybe use plantuml?
You then choose the UML which gives you something you like ... | 1 | 1 | 0 | I frequently use Jupyter notebook for collaborative purposes. Frequently, people write functions within their own modules that call functions from other modules, all of which are part of our library. An example case would be:
module1.f1 -> module2.f2 -> module3.f3 -> pandas functions.
All the functions, f1/f2/f3 follow docstring format. Is there a way to display the function hierarch f1 - f2 -f3 inside the notebook? | Displaying function dependency graph in iPython Notebook | 0 | 0 | 0 | 183 |
34,466,027 | 2015-12-25T20:08:00.000 | 14 | 1 | 1 | 0 | python,testing,pytest | 51,718,551 | 4 | false | 0 | 0 | I use the conftest.py file to define the fixtures that I inject into my tests, is this the correct use of conftest.py?
Yes, a fixture is usually used to get data ready for multiple tests.
Does it have other uses?
Yes, a fixture is a function that is run by pytest before, and sometimes
after, the actual test functions. The code in the fixture can do whatever you
want it to. For instance, a fixture can be used to get a data set for the tests to work on, or a fixture can also be used to get a system into a known state before running a test.
Can I have more than one conftest.py file? When would I want to do that?
First, it is possible to put fixtures into individual test files. However, to share fixtures among multiple test files, you need to use a conftest.py file somewhere centrally located for all of the tests. Fixtures can be shared by any test. They can be put in individual test files if you want the fixture to only be used by tests in that file.
Second, yes, you can have other conftest.py files in subdirectories of the top tests directory. If you do, fixtures defined in these lower-level conftest.py files will be available to tests in that directory and subdirectories.
Finally, putting fixtures in the conftest.py file at the test root will make them available in all test files. | 2 | 416 | 0 | I recently discovered pytest. It seems great. However, I feel the documentation could be better.
I'm trying to understand what conftest.py files are meant to be used for.
In my (currently small) test suite I have one conftest.py file at the project root. I use it to define the fixtures that I inject into my tests.
I have two questions:
Is this the correct use of conftest.py? Does it have other uses?
Can I have more than one conftest.py file? When would I want to do that? Examples will be appreciated.
More generally, how would you define the purpose and correct use of conftest.py file(s) in a py.test test suite? | In pytest, what is the use of conftest.py files? | 1 | 0 | 0 | 147,291 |
34,466,027 | 2015-12-25T20:08:00.000 | 17 | 1 | 1 | 0 | python,testing,pytest | 34,493,931 | 4 | false | 0 | 0 | In a wide meaning conftest.py is a local per-directory plugin. Here you define directory-specific hooks and fixtures. In my case a have a root directory containing project specific tests directories. Some common magic is stationed in 'root' conftest.py. Project specific - in their own ones. Can't see anything bad in storing fixtures in conftest.py unless they are not used widely (In that case I prefer to define them in test files directly) | 2 | 416 | 0 | I recently discovered pytest. It seems great. However, I feel the documentation could be better.
I'm trying to understand what conftest.py files are meant to be used for.
In my (currently small) test suite I have one conftest.py file at the project root. I use it to define the fixtures that I inject into my tests.
I have two questions:
Is this the correct use of conftest.py? Does it have other uses?
Can I have more than one conftest.py file? When would I want to do that? Examples will be appreciated.
More generally, how would you define the purpose and correct use of conftest.py file(s) in a py.test test suite? | In pytest, what is the use of conftest.py files? | 1 | 0 | 0 | 147,291 |
34,467,177 | 2015-12-25T23:02:00.000 | 0 | 0 | 1 | 0 | python,while-loop,psychopy | 34,479,029 | 2 | false | 0 | 0 | You probably placed your Answerrunning = False at the wrong place. And probably you need to put break at the end of each branch. Please explain more what you want to do, I don't understand.
If you say you need to count tries, then I guess you should have something like number_of_tries = 0 and number_of_tries += 1 somewhere in your code. | 1 | 0 | 1 | I am making an experiment, and the participant must get the possibility to correct himself when he has given the wrong answer.
The goal is that the experiment goes on to the next trial when the correct answer is given. When the wrong answer is given, you get another chance.
For the moment, the experiment crashes after the first trial and it always waits for the second chance answer (even when the right answer was given). | Python: How to give participant the possibility to an answer | 0 | 0 | 0 | 152 |
34,468,024 | 2015-12-26T02:33:00.000 | -2 | 0 | 0 | 1 | python,celery | 34,469,957 | 2 | false | 1 | 0 | Just to answer your second question CELERY_TASK_RESULT_EXPIRES is the time in seconds that the result of the task is persisted. So after a task is over, its result is saved into your result backend. The result is kept there for the amount of time specified by that parameter. That is used when a task result might be accessed by different callers.
This has probably nothing to do with your problem. As for the first solution, as already stated you have to use multiple queues. However be aware that you cannot assign the task to a specific Worker Process, just to a specific Worker which will then assign it to a specific Worker Process. | 1 | 8 | 0 | Celery will send task to idle workers.
I have a task will run every 5 seconds, and I want this task to only be sent to one specify worker.
Other tasks can share the left over workers
Can celery do this??
And I want to know what this parameter is: CELERY_TASK_RESULT_EXPIRES
Does it means that the task will not be sent to a worker in the queue?
Or does it stop the task if it runs too long? | Can celery assign task to specify worker | -0.197375 | 0 | 0 | 12,753 |
34,468,030 | 2015-12-26T02:34:00.000 | 0 | 0 | 0 | 0 | python,mysql,django,multi-master-replication | 34,841,926 | 1 | false | 1 | 0 | Your idea of the router is great! I would add that you need automatically detect whether a databases is [slow] down. You can detect that by the response time and by connection/read/write errors. And if this happens then you exclude this database from your round-robin list for a while, trying to connect back to it every now and then to detect if the databases is alive.
In other words the round-robin list grows and shrinks dynamically depending on the health status of your database machines.
The another important notice is that luckily you don't need to maintain this round-robin list common to all the web servers. Each web server can store its own copy of the round-robin list and its own state of inclusion and exclusion of databases into this list. This is just because a database server can be seen from one web server and can be not seen from another one due to local network problems. | 1 | 0 | 0 | I am working on scaling out a webapp and providing some database redundancy for protection against failures and to keep the servers up when updates are needed. The app is still in development, so I have chosen a simple multi-master redundancy with two separate database servers to try and achieve this. Each server will have the Django code and host its own database, and the databases should be as closely mirrored as possible (updated within a few seconds).
I am trying to figure out how to set up the multi-master (master-master) replication between databases with Django and MySQL. There is a lot of documentation about setting it up with MySQL only (using various configurations), but I cannot find any for making this work from the Django side of things.
From what I understand, I need to approach this by adding two database entries in the Django settings (one for each master) and then write a database router that will specify which database to read from and which to write from. In this scenario, both databases should accept both reads and writes, and writes/updates should be mirrored over to the other database. The logic in the router could simply use a round-robin technique to decide which database to use. From there on, further configuration to set up the actual replication should be done through MySQL configuration.
Does this approach sound correct, and does anyone have any experience with getting this to work? | Multi-master database replication with Django webapp and MySQL | 0 | 1 | 0 | 1,628 |
34,471,080 | 2015-12-26T11:52:00.000 | 0 | 1 | 0 | 1 | python,django,permissions,uwsgi,cherokee | 34,545,562 | 1 | false | 1 | 0 | As I've said in my comments this issue was related to supervisord. I've solved it assigning the right path and user into "environment" variable of supervisord's config file. | 1 | 0 | 0 | I'm trying to generate PDF file from Latex template. I've done it in development environment (running python manage.py straight from eclipse)... but I can't make it work into the server, which is running using cherokee and uwsgi.
We have realized that open(filename) creates a file owning to root (also root group). This isn't taking place in development environment... but the most strange thing about this issue is that somewhere else in our code we are creating a text file (latex uses is a text file too), but it's created with the user cherokee is supposed to use, not root!
What happened? How can we fix it?
We are running this code on ubuntu linux and a virtual environment both in development and production.
We started following some instructions to do it using python's temporary file and folder creation functions, but we thought that it could be something related with them, and created them "manually" in order to try to solve this issue... but it didn't work. | Django uwsgi subprocess and permissions | 0 | 0 | 0 | 240 |
34,472,609 | 2015-12-26T15:18:00.000 | 1 | 0 | 0 | 0 | python,django,database,postgresql,database-migration | 34,480,125 | 1 | true | 1 | 0 | Try those same steps WITHOUT running syncdb and migrate at all. So overall, your steps will be:
heroku pg:backups capture
curl -o latest.dump heroku pg:backups public-url
`scp -P latest.dump [email protected]:/home/myuser
drop database mydb;
create database mydb;
pg_restore --verbose --clean --no-acl --no-owner -U myuser -d mydb latest.dump | 1 | 0 | 0 | I have a Django app with a postgres backend hosted on Heroku. I'm now migrating it to Azure. On Azure, the Django application code and postgres backend have been divided over two separate VMs.
Everything's set up, I'm now at the stage where I'm transferring data from my live Heroku website to Azure. I downloaded a pg_dump to my local machine, transferred it to the correct Azure VM, ran syncdb and migrate, and then ran pg_restore --verbose --clean --no-acl --no-owner -U myuser -d mydb latest.dump. The data got restored (11 errors were ignored, pertaining to 2 tables that get restored, but which my code now doesn't use).
When I try to access my website, I get the kind of error that usually comes in my website if I haven't run syncdb and migrate:
Exception Type: DatabaseError Exception Value:
relation "user_sessions_session" does not exist LINE 1:
...last_activity", "user_sessions_session"."ip" FROM "user_sess...
^
Exception Location:
/home/myuser/.virtualenvs/myenv/local/lib/python2.7/site-packages/django/db/backends/postgresql_psycopg2/base.py
in execute, line 54
Can someone who has experienced this before tell me what I need to do here? It's acting as if the database doesn't exist and I had never run syncdb. When I use psql, I can actually see the tables and the data in them. What's going on? Please advise. | Unable to correctly restore postgres data: I get the same error I usually get if I haven't run syncdb and migrate | 1.2 | 1 | 0 | 119 |
34,473,506 | 2015-12-26T17:20:00.000 | 1 | 0 | 0 | 0 | android,django,python-3.x | 34,477,073 | 1 | true | 1 | 0 | You can try using Django's cache mechanism (either memcached or redis) to store the timestamp of the last communication for a given Android App Client with its ID as cache key and an expiration time of whatever you want the timeout to be.
Setting it up like this you are able to simply check if the cache has a record of the current Android App's ID to determine if it errored out. | 1 | 0 | 0 | I have an android app sending requests to a Django back-end asking whether it should perform a certain operation. These act as heartbeats. There is a client-side page that will allow the user to tell the android app to perform those operations. However, I would like to be able to tell the client-side page, whether the phone app has died for some unexpected reason, or has stopped sending the server heartbeats.
Is there a way in Django to add a timer to a view such that a signal will be triggered if the client doesn't send a request after X seconds? Is there a Android Websockets library for Django that would do this better? | Django determine if client hasn't made a request in X seconds | 1.2 | 0 | 0 | 54 |
34,477,062 | 2015-12-27T02:18:00.000 | 1 | 0 | 0 | 0 | python,mysql,django | 34,477,438 | 2 | false | 1 | 0 | I am pretty sure there is no built-in way for something this specific. Finding single words in a text alone is a quite complex task if you take into consideration misspelled words, hyphen-connected words, quotes, all sorts of punctuation and unicode letters.
Your best bet would be using a regex for each text and save the matches on a second model manually. | 1 | 0 | 0 | Edited to clarify my meaning:
I am trying to find a method using a Django action to take data from one database table and then process it into a different form before inserting it into a second table. I am writing a kind of vocabulary dictionary which extracts data about students' vocabulary from their classroom texts. To do this I need to be able to take the individual words from the table field containing the content and then insert the words into separate rows in another table. I have already written the code to extract the individual words from the record in the first database table, I just need a method for putting it into a second database table as part of the same Django action.
I have been searching for an answer for this, but it seems Django actions are designed to handle the data for only one database table at a time. I am considering writing my own MySQL connection to inject the words into the second table in the database. I thought I would write here first though to see if anyone knows if I am missing a built-in way to do this in Django. | Django way to modify a database table using the contents of another table | 0.099668 | 1 | 0 | 884 |
34,483,277 | 2015-12-27T18:04:00.000 | 1 | 0 | 0 | 0 | python,theano,symbolic-computation | 34,484,383 | 2 | true | 0 | 0 | Theano variables do not have explicit shape information since they are symbolic variables, not numerical. Even dtensor3 = T.tensor3(T.config.floatX) does not have an explicit shape. When you type dtensor3.shape you'll get an object Shape.0 but when you do dtensor3.shape.eval() to get its value you'll get an error.
For both cases however, dtensor.ndim works and prints out 5 and 3 respectively. | 1 | 1 | 1 | I was wondering how to make a 5D tensor in Theano.
Specifically, I tried dtensor = T.TensorType('float32', (False,)*5). However, the only issue is that dtensor.shapereturns: AttributeError: 'TensorType' object has no attribute 'shape'
Whereas if I used a standard tensor type likedtensor = T.tensor3('float32'), I don't get this issue when I call dtensor.shape.
Is there a way to have this not be an issue with a 5D tensor in Theano? | 5D tensor in Theano | 1.2 | 0 | 0 | 506 |
34,483,983 | 2015-12-27T19:25:00.000 | 1 | 0 | 0 | 1 | python,c,arduino,wireless,xbee | 34,498,742 | 1 | false | 0 | 0 | There isn't a minimum size, but the module does make use of a "packetization timeout" setting (ATRO) to decide when to send your data. If you wait longer, you may find that the module sends the frame and it arrives at the destination.
I'm assuming you're using "AT Mode" even though you write "API Mode". If you are in fact using API mode, please post more of your code, and perhaps include a link to the code library you're using to build your API frames. Are you setting the length correctly? Does the library expect a null-terminated string for the payload? Try adding a 0 to the end of your payload array to see if that helps. | 1 | 0 | 0 | i need to ask about xbee packet size. is it there any minimum size for the packet of API.
i'm using Xbee S2 API mode AP1 however when i send below frame from router to coordinator the packet failed to arrive .
Packet : uint8_t payload[] = {'B',200,200,200,200};
However if i send :
Packet : uint8_t payload[] = {'B',200,200,200,200,200,200};
the packet arrived successfully .... weird :(
Test 3:
Packet : uint8_t payload[] = {'B',200,200,200};
the packet arrived successfully
Test 4:
uint8_t payload[] = {'B',200,200};
the packet is failed to arrive :(
i don't know what is the problem | Xbee API packet is failing to arrive from router to coordinator | 0.197375 | 0 | 1 | 134 |
34,486,981 | 2015-12-28T02:26:00.000 | 0 | 0 | 1 | 0 | python-2.7 | 34,493,500 | 3 | false | 0 | 0 | If you wanna check correct in english dictionary then you can use pyenchant ... use pip to install it ... its east to use and gives true if spelling of a word is correct and false if word doesn't exist in english dictionary.
pip install pyenchant | 1 | 3 | 0 | In Python, how do I check that the user has entered a name instead of a number, when asking for user input as string? I want a string input in the form of their name, but I want to use error checking to make sure the user doesn't enter a number. | In Python, how do I check that the user has entered a name instead of a number? | 0 | 0 | 0 | 1,176 |
34,487,269 | 2015-12-28T03:15:00.000 | 0 | 0 | 0 | 1 | python,macos | 34,496,074 | 1 | false | 0 | 0 | flagging python3 on install '--with-tcl-tk' works, but the idle3 launch needs to be linked to it using brew linkapps python3.
thereafter, the warning caveat which accompanies idle3 launch disappears.
I hope this helps other users.
jA | 1 | 1 | 0 | Will someone please give a clear, precise, repeatable method of linking a brewed Python 3 with the correct tcl-tk for a Mac OS?
I am NOT a power user.
I received an answer to this question from a homebrew contributor, but that answer no longer works. | How do I install and link the correct tcl-tk for Python 3 on a Mac? | 0 | 0 | 0 | 141 |
34,488,751 | 2015-12-28T06:35:00.000 | 0 | 0 | 0 | 0 | python-2.7,mongovue,pymongo-2.x | 37,524,187 | 1 | true | 0 | 0 | So, found out the fix for this behavior. Refreshing in MongoVue didn't work. So, I had to close it and open the MongoVue again to see the newly created collections. | 1 | 0 | 0 | I am using MongoVue and Python library Pymongo to insert some documents. I used MongoVue to see the db created. It was not listed. However, I made a find() request in shell. I got all the inserted documents.
Once I manually create DB all the inserted documents appears then.Every other db's inside the localhost is not affected.
What is the reason for this behaviour? | Database is not appearing in MongoVue | 1.2 | 1 | 0 | 117 |
34,491,359 | 2015-12-28T10:02:00.000 | 0 | 1 | 0 | 1 | php,python,linux | 34,491,632 | 1 | true | 0 | 0 | Be sure to use full paths for both python and your script.
$foo = exec('/usr/bin/python /path/script.py');
Also, make sure the file permissions where your script is located can be accessed by www, probably will need to chmod 755 /path. | 1 | 0 | 0 | I want to run a couple of Python scripts from PHP.
On an Ubuntu machine everything looks good right out of the box.
On FreeBSD though I get /usr/local/lib/python2.7: Permission denied
Any idea how to give permissions to Apache to run a Python through shell_exec or exec ?
Also see how I had to name the full path of the Python ?
Is there any way to avoid that too ? | FreeBSD PHP exec permission denied | 1.2 | 0 | 0 | 474 |
34,493,061 | 2015-12-28T11:54:00.000 | 2 | 0 | 0 | 0 | python,amazon-web-services,amazon-s3,boto,boto3 | 34,494,713 | 1 | true | 1 | 0 | The get_all_reserved_instance_offerings method in boto returns a list of all reserved instance types that are available for purchase. So, if you want to purchase reserved instances you would look through the list of offerings, find the instance type, etc. that you want and then you would be able to purchase that offering with the purchase_reserved_instance_offering method or via the AWS console.
So, perhaps a simple way to say it is get_all_reserved_instance_offerings tells you what you can buy and get_all_reserved_instances tells you what you have already bought. | 1 | 0 | 0 | Both belongs to boto.ec2 . From the documentation i found that get_all_reserved_instances returns all reserved instances, but i am not clear about get_all_reserved_instances_offerings . What is it mean by offering.
One other thing that i want to know is,what is recurring_charges ?
Please clarify ? | What is the difference between get_all_reserved_instances and get_all_reserved_instances_offerings? | 1.2 | 0 | 1 | 178 |
34,493,535 | 2015-12-28T12:26:00.000 | -1 | 0 | 1 | 0 | mongodb,python-3.x,pymongo | 34,493,742 | 5 | false | 0 | 0 | It probably depends on your IDE, not the pymongo itself. the pymongo is responsible for manipulating data and communicating with the mongodb. I am using Visual Studio with PTVS and I have such options provided from the Visual Studio. The PyCharm is also a good option for IDE that will allow you to watch your code variables and the JSON in a formatted structure. | 1 | 6 | 0 | I am using pymongo driver to work with Mongodb using Python. Every time when I run a query in python shell, it returns me some output which is very difficult to understand. I have used the .pretty() option with mongo shell, which gives the output in a structured way.
I want to know whether there is any method like pretty() in pymongo, which can return output in a structured way ? | Pretty printing of output in pymongo | -0.039979 | 1 | 0 | 4,368 |
34,493,556 | 2015-12-28T07:14:00.000 | 1 | 0 | 0 | 0 | machine-learning,categorical-data,python,scikit-learn | 34,493,557 | 1 | true | 0 | 0 | It depends on the learning algorithm that you are using. If you are using a method that has been designated for sparse data sets (FTRL, FFM, linear SVM) one possible approach is the following (note that it will introduce collisions in the features and a lot of constant columns).
First allocate for each element of your sample a (as large as possible) vector V, of length D.
For each categorical variable, evaluate hash(var_name + "_" + var_value) % D. This gives you an integer i, and you can store V[i] = 1.
Therefore, V never grows larger as new features appear. However, as soon as the number of features is large enough, some features will collide (i.e. be written at the same place) and this may result in an increased error rate...
Edit. You can write your own vectorizer to avoid collisions. First call L the current number of features. Prepare the same vector V of length 2L (this 2 will allow you to avoid collisions as new features arrive - at least for some time, depending of the arrival rate of new features).
Starting with an emty dictionary<input_type,int>, associate to each feature an integer. If have already seen the feature, return the int corresponding to the feature. If not, create a new entry with an integer corresponding to the new index. I think (but I am not sure) this is what LabelEncoder does for you. | 1 | 1 | 1 | Any binary one-hot encoding is aware of only values seen in training, so features not encountered during fitting will be silently ignored. For real time, where you have millions of records in a second, and features have very high cardinality, you need to keep your hasher/mapper updated with the data.
How can we do an incremental update to the hasher (rather calculating the entire fit() every time we incounter a new feature-value pair)? What is the suggested approach here the tackle this? | Using sklearn DictVectorizer in real-time systems | 1.2 | 0 | 0 | 237 |
34,495,318 | 2015-12-28T14:25:00.000 | 1 | 0 | 0 | 1 | python,django,multithreading,celery | 35,126,618 | 1 | false | 1 | 0 | As far as I know Celery does not rely on RabbitMQ's scheduled queues. It implements ETA/Countdown internally.
It seems that you have enough workers that are able to fetch enough messages and schedule them internally.
Mind that you don't need 200 workers. You have the prefetch multiplier set to the default value so you need less. | 1 | 9 | 0 | I'm using Django 1.6, RabbitMQ 3.5.6, celery 3.1.19.
There is a periodic task which runs every 30 seconds and creates 200 tasks with given eta parameter. After I run the celery worker, slowly the queue gets created in RabbitMQ and I see around 1200 scheduled tasks waiting to be fired. Then, I restart the celery worker and all of the waiting 1200 scheduled tasks get removed from RabbitMQ.
How I create tasks:
my_task.apply_async((arg1, arg2), eta=my_object.time_in_future)
I run the worker like this:
python manage.py celery worker -Q my_tasks_1 -A my_app -l
CELERY_ACKS_LATE is set to True in Django settings. I couldn't find any possible reason.
Should I run the worker with a different configuration/flag/parameter? Any idea? | Celery Tasks with eta get removed from RabbitMQ | 0.197375 | 0 | 0 | 1,055 |
34,498,286 | 2015-12-28T17:51:00.000 | 1 | 0 | 1 | 0 | python,terminal,console,character,edit | 34,498,355 | 2 | false | 0 | 0 | The short answer is No. If you are trying to make a game, you should use a library or engine that will reload the canvas some n times per second, but you can`t reload a terminal output. | 1 | 1 | 0 | After searching the internet quite a bit, I haven't found any simple solution for my kind of problem.
Is it possible to change a specific character in the terminal in python?
For example, lets say i have a 40 x 40 matrix filled with spaces and there is one dot in the middle. is it possible to move the dot (i.e. delete it and put it somewhere else) without clearing the whole terminal and loading the new state of the matrix? | Change specific character in terminal | 0.099668 | 0 | 0 | 79 |
34,500,111 | 2015-12-28T20:14:00.000 | 2 | 1 | 0 | 1 | python,linux,ssh | 34,500,718 | 1 | true | 0 | 0 | You're asking if you can write a program on the server which can access files from the client when someone runs this program through SSH from the client?
If the only program running on the client is SSH, then no. If it was possible, that would be a security bug in SSH. | 1 | 0 | 0 | Is it possible to access local files via remote SSH connection (local files of the connecting client of course, not other clients)?
To be specific, I'm wondering if the app I'm making (which is designed to be used over SSH, i.e. user connects to a remote SSH server and the script (written in Python) is automatically executed) can access local (client's) files. I want to implement an upload system, where user(s) (connected to SSH server, running the script) may be able to upload images, from their local computers, over to other hosting sites (not the SSH server itself, but other sites, like imgur or pomf (the API is irrelevant)). So the remote server would require access to local files to send the file to another remote hosting server and return the link. | Remote SSH server accessing local files | 1.2 | 0 | 1 | 866 |
34,500,369 | 2015-12-28T20:33:00.000 | 1 | 0 | 0 | 1 | python,windows,cmd | 34,500,631 | 3 | true | 0 | 0 | Try something like this: runas /user:administrator regedit. | 1 | 4 | 0 | I have my own python script that manages the IP address on my computer. Mainly it executes the netsh command in the command line (windows 10) which for you must have administrator rights.
It is my own computer, I am the administrator and when running the script I am already logged in with my user (Adrian) which is of type administrator.
I can`t use the right click and "run as administrator" solution because I am executing my netsh command from my python script.
Anybody knows how to get "run as administrator" with a command from CMD ?
Thanks | open cmd with admin rights (Windows 10) | 1.2 | 0 | 0 | 7,654 |
34,500,669 | 2015-12-28T20:57:00.000 | 1 | 0 | 0 | 0 | python,nginx,flask | 52,591,986 | 3 | false | 1 | 0 | On a development machine flask can be run without a webserver (nginx, apache etc) or an application container (eg uwsgi, gunicorn etc).
Things are different when you want to handle the load on a production server. For starters python is relatively very slow when it comes to serving static content where as apache / nginx do that very well.
When the application becomes big enough to be broken into multiple separate services or has to be horizontally scaled, the proxy server capabilities of nginx come in very handy.
In the architectures I build, nginx serves as the entry point where ssl is terminates and the rest of the application is behind a VPN and firewall.
Does this help? | 1 | 1 | 0 | I am a .net developer coming over to python. I have recently started using Flask and have some quick questions about serving files.
I noticed a lot of tutorials focused on nginix and flask. However, I am able to run flask without nginx. I'm just curious as to why this is used together (nginx and flask). Is nginx only for static files? | Flask using Nginx? | 0.066568 | 0 | 0 | 159 |
34,502,194 | 2015-12-28T23:17:00.000 | 0 | 0 | 0 | 1 | python-3.x,pyqt,homebrew | 34,502,219 | 1 | true | 0 | 1 | Answering to help other people who encounter this: The solution was to first upgrade XCode to XCode 7.2 and open it once to accept the license and have it install additional components. Then, a brew update and a brew install pyqt --with-python3 finally worked. | 1 | 0 | 0 | Running brew install pyqt --with-python3, I get Error: Failed to determine the layout of your Qt installation. Adding --verbose to the brew script, the problem is that ld can't find -lgcc_s.10.5.
(This is on Mac OS X 10.10.5 Yosemite) | `Error: Failed to determine the layout of your Qt installation` when installing pyqt for python3 on Mavericks | 1.2 | 0 | 0 | 900 |
34,502,379 | 2015-12-28T23:37:00.000 | 0 | 0 | 0 | 0 | python,django,django-migrations,django-1.9 | 61,643,148 | 1 | false | 1 | 0 | Simply delete 0005-0008 migration files from migrations/ folder.
Re. database tables, you won't need to delete anything from there if migrations weren't applied. You can check yourself django_migrations table entries to be sure. | 1 | 0 | 0 | I set a key that I have now realizes is wrong. It is set at migration 0005. The last migration I did was 0004. I'm now up to 0008. I want to rebuild the migrations with the current models.py against the current database schema. Migration 0005 is no longer relevant and has been deleted from models.py. Migration 0005 is also an IntegrityError, so it cannot be applied without deleting data that shouldn't be deleted.
How do I get past migration 0005 so I can migrate? | Delete migrations that haven't been migrated yet | 0 | 1 | 0 | 512 |
34,502,840 | 2015-12-29T00:37:00.000 | 2 | 0 | 0 | 0 | python,pandas | 34,502,877 | 2 | false | 0 | 0 | To sort by name: df.fruit.value_counts().sort_index()
To sort by counts: df.fruit.value_counts().sort_values() | 1 | 0 | 1 | Let's say that I have pandas DataFrame with a column called "fruit" that represents what fruit my classroom of kindergartners had for a morning snack. I have 20 students in my class. Breakdown would be something like this.
Oranges = 7, Grapes = 3, Blackberries = 4, Bananas = 6
I used sort to group each of these fruit types, but it is grouping based on alphabetical order. I would like it to group based on the largest quantity of entries for that class of fruit. In this case, I would like Oranges to turn up first so that I can easily see that Oranges is the most popular fruit.
I'm thinking that sort is not the best way to go about this. I checked out groupby but could not figure out how to use that appropriately either.
Thanks in advance. | Python pandas: determining which "group" has the most entries | 0.197375 | 0 | 0 | 38 |
34,507,744 | 2015-12-29T08:59:00.000 | 1 | 0 | 1 | 0 | python,python-3.x,beautifulsoup,pip | 70,827,357 | 6 | false | 1 | 0 | I had some mismatch between Python version and Beautifulsoup. I was installing this project
Th3Jock3R/LinuxGSM-Arma3-Mod-Update
to a linux centos8 dedicated Arma3 server. Python3 and Beautifulsoup4 seem to match.So I updated Python3, removed manually Beautifulsoup files and re-installed it with: sudo yum install python3-beautifulsoup4 (note the number 3). Works. Then pointing directories in Th3Jock3R:s script A3_SERVER_FOLDER = "" and A3_SERVER_DIR = "/home/arma3server{}".format(A3_SERVER_FOLDER) placing and running the script in same folder /home/arma3server with python3 update.py. In this folder is also located new folder called 'modlists' Now the lightness of mod loading blows my mind. -Bob- | 2 | 27 | 0 | I have both Python 2.7 and Python 3.5 installed. When I type pip install beautifulsoup4 it tells me that it is already installed in python2.7/site-package directory.
But how do I install it into the python3 dir? | How to install beautifulsoup into python3, when default dir is python2.7? | 0.033321 | 0 | 0 | 73,703 |
34,507,744 | 2015-12-29T08:59:00.000 | 0 | 0 | 1 | 0 | python,python-3.x,beautifulsoup,pip | 63,598,946 | 6 | false | 1 | 0 | If you are on windows, this works for Python3 as well
py -m pip install bs4 | 2 | 27 | 0 | I have both Python 2.7 and Python 3.5 installed. When I type pip install beautifulsoup4 it tells me that it is already installed in python2.7/site-package directory.
But how do I install it into the python3 dir? | How to install beautifulsoup into python3, when default dir is python2.7? | 0 | 0 | 0 | 73,703 |
34,509,593 | 2015-12-29T10:59:00.000 | 11 | 0 | 0 | 0 | java,python,apache-spark,hadoop-yarn | 34,515,890 | 1 | true | 0 | 0 | The Spark executor is set up into 3 regions.
Storage - Memory reserved for caching
Execution - Memory reserved for object creation
Executor overhead.
In Spark 1.5.2 and earlier:
spark.storage.memoryFraction sets the ratio of memory set for 1 and 2. The default value is .6, so 60% of the allocated executor memory is reserved for caching. In my experience, I've only ever found that the number is reduced. Typically when a developer is getting a GC issue, the application has a larger "churn" in objects, and one of the first places for optimizations is to change the memoryFraction.
If your application does not cache any data, then setting it to 0 is something you should do. Not sure why that would be specific to YARN, can you post the articles?
In Spark 1.6.0 and later:
Memory management is now unified. Both storage and execution share the heap. So this doesnt really apply anymore. | 1 | 6 | 1 | According to Spark documentation
spark.storage.memoryFraction: Fraction of Java heap to use for Spark's memory cache. This should not be larger than the "old" generation of objects in the JVM, which by default is given 0.6 of the heap, but you can increase it if you configure your own old generation size.
I found several blogs and article where it is suggested to set it to zero in yarn mode. Why is that better than set it to something close to 1? And in general, what is a reasonable value for it ? | spark.storage.memoryFraction setting in Apache Spark | 1.2 | 0 | 0 | 7,480 |
34,510,980 | 2015-12-29T12:21:00.000 | 1 | 0 | 0 | 0 | python,redis | 34,513,168 | 2 | false | 0 | 0 | Redis pubsub is fire and forget.
In other words, when a publish command is sent, the message is received by online subscribers, and who wasn't subscribed/listening for messages when the publish command was sent, won't never get these messages. | 1 | 0 | 0 | Suppose that there is one redis-client which is subscribing to channel c1 and another redis-client publishes "data" to channel c1.
At this point, does the data published is pending in a redis-server by when the client subscribing to "c1" get the data(by calling pubsub.listen() or pubsub.get_message()) or go directly to the client subscribing to the channel c1 through a redis-server?
In other words, When a redis-client call pubsub.getMessage() or pubsub.listen(), does redis-client send a request to redis-server to get data or just get data from the local socket buffer?
When I read some documents it is saying that pubsub.get_message() use select module for a socket internally.
It seems to mean that the data subscribed already is in the client local buffer, not server.
Could you give me any suggestion? | How does redis publish work? | 0.099668 | 0 | 1 | 381 |
34,514,164 | 2015-12-29T15:34:00.000 | 0 | 0 | 0 | 0 | wxpython | 44,302,082 | 1 | false | 0 | 0 | Use EVT_AUINOTEBOOK_TAB_RIGHT_DOWN to catch the event. The event.page will give you the clicked page. | 1 | 1 | 0 | I create a menu that popups after a right on a tab. The menu contains three options: close, close other and close all. Right clicking on a tabs does not display its content (it not already displayed), it just show the menu that control the clicked tab. The issue is that right clicking on another tab popups the menu but the program does not know which tab was clicked.
Is there any built-in methods to get the index of a tabs in AuiNotebook after a right click event? | How to get the index of a tab in AuiNotebook after a right click on a non-active tab? | 0 | 0 | 0 | 174 |
34,514,532 | 2015-12-29T15:56:00.000 | 1 | 0 | 0 | 0 | python,amazon-web-services,amazon-ec2,flask | 34,514,724 | 1 | true | 1 | 0 | You will not be able to upload directly to the /dev/xvda/upload/hello.txt path, as this is a block device, not a mounted file system (raw hard drive).
You will need to use the path like /upload.
It is likely you are running into permission issues with the /upload folder. As a test I would suggest using the /tmp/ folder for your uploads, that should have open file permissions. If that works then you know it was permission issues preventing /upload from working. To make the /upload folder work, you will need to chown it to the same user that your flask app is running as. (There are other ways to make it work, but this is probably the easiest).
chown flask_user /upload | 1 | 0 | 0 | I have a web app that has to upload a file from local system to flask app on ec2 instance. I defined the upload path and when I access it I get an IOError saying:
IOError: [Errno 20] Not a directory: '/dev/xvda/upload/hello.txt'
I've also tried to use only: /upload
Both of them do not work, I have created the folder on the instance using mkdr command | How to upload files to ec2 instance through flask | 1.2 | 0 | 0 | 747 |
34,520,233 | 2015-12-29T22:40:00.000 | 2 | 0 | 0 | 1 | python,api,heroku,oauth-2.0,spotify | 34,520,316 | 2 | false | 1 | 0 | I once ran into a similar issue with Google's Calendar API. The app was pretty low-importance so I botched a solution together by running through the auth locally in my browser, finding the response token, and manually copying it over into an environment variable on Heroku. The downside of course was that tokens are set to auto-expire (I believe Google Calendar's was set to 30 days), so periodically the app stopped working and I had to run through the auth flow and copy the key over again. There might be a way to automate that.
Good luck! | 1 | 6 | 0 | Working on a small app that takes a Spotify track URL submitted by a user in a messaging application and adds it to a public Spotify playlist. The app is running with the help of spotipy python on a Heroku site (so I have a valid /callback) and listens for the user posting a track URL.
When I run the app through command line, I use util.prompt_for_user_token. A browser opens, I move through the auth flow successfully, and I copy-paste the provided callback URL back into terminal.
When I run this app and attempt to add a track on the messaging application, it does not open a browser for the user to authenticate, so the auth flow never completes.
Any advice on how to handle this? Can I auth once via terminal, capture the code/token and then handle the refreshing process so that the end-user never has to authenticate?
P.S. can't add the tag "spotipy" yet but surprised it was not already available | Completing Spotify Authorization Code Flow via desktop application without using browser | 0.197375 | 0 | 1 | 1,073 |
34,520,291 | 2015-12-29T22:45:00.000 | 0 | 0 | 1 | 0 | python,virtualenv,pycharm | 34,536,770 | 11 | false | 0 | 0 | Open up Preferences -> Project -> Project Interpreter, do you see the module there?
If yes, you might have another file somewhere in your project have the same name as flask.ext.login, this prevents pycharm from locating the actual module.
If no, you can click on the ... beside your interpreter and select more..., select your interpreter and at the bottom (beside the filter), click the Show paths for the selected interpreter, you can add the path of your module there. | 6 | 39 | 0 | I have the latest PyCharm CE and am using it with virtualenv. I have defined the interpreter as the interpreter in the virtualenv. The Project Interpreter window in PyCharm lists all the packages I have installed. I confirmed this by running pip freeze > requirements.txt and running through the packages manually.
My problem is that PyCharm won't find certain includes in its editor windows, like Flask-Login:
In from flask.ext.login import current_user, login_user, logout_user, login_required the includes current_user, login_user, logout_user, login_required are all marked as unresolved references.
Am I missing something? | PyCharm cannot find the packages in virtualenv | 0 | 0 | 0 | 43,919 |
34,520,291 | 2015-12-29T22:45:00.000 | -1 | 0 | 1 | 0 | python,virtualenv,pycharm | 52,166,451 | 11 | false | 0 | 0 | Goto /venv/bin/ and check all activate scripts. You venv path might be wrong. | 6 | 39 | 0 | I have the latest PyCharm CE and am using it with virtualenv. I have defined the interpreter as the interpreter in the virtualenv. The Project Interpreter window in PyCharm lists all the packages I have installed. I confirmed this by running pip freeze > requirements.txt and running through the packages manually.
My problem is that PyCharm won't find certain includes in its editor windows, like Flask-Login:
In from flask.ext.login import current_user, login_user, logout_user, login_required the includes current_user, login_user, logout_user, login_required are all marked as unresolved references.
Am I missing something? | PyCharm cannot find the packages in virtualenv | -0.01818 | 0 | 0 | 43,919 |
34,520,291 | 2015-12-29T22:45:00.000 | -1 | 0 | 1 | 0 | python,virtualenv,pycharm | 52,805,753 | 11 | false | 0 | 0 | I was not able to assign existing virtual environment to my project, but after going to
File -> Settings -> project interpreter-> show all-> click on '+'
to create a new virtual environment or we can choose the existing virtual environment, I am able to assign and use the existing virtual enviroments. | 6 | 39 | 0 | I have the latest PyCharm CE and am using it with virtualenv. I have defined the interpreter as the interpreter in the virtualenv. The Project Interpreter window in PyCharm lists all the packages I have installed. I confirmed this by running pip freeze > requirements.txt and running through the packages manually.
My problem is that PyCharm won't find certain includes in its editor windows, like Flask-Login:
In from flask.ext.login import current_user, login_user, logout_user, login_required the includes current_user, login_user, logout_user, login_required are all marked as unresolved references.
Am I missing something? | PyCharm cannot find the packages in virtualenv | -0.01818 | 0 | 0 | 43,919 |
34,520,291 | 2015-12-29T22:45:00.000 | 0 | 0 | 1 | 0 | python,virtualenv,pycharm | 53,817,399 | 11 | false | 0 | 0 | For me, the easiest solution was to open the project in the root directory (my project has a server and client directories, thus the root directory contained both of them). When you open the project in the root directory, it is able to find the dependencies without messing with pycharm settings as it uses them by convention. | 6 | 39 | 0 | I have the latest PyCharm CE and am using it with virtualenv. I have defined the interpreter as the interpreter in the virtualenv. The Project Interpreter window in PyCharm lists all the packages I have installed. I confirmed this by running pip freeze > requirements.txt and running through the packages manually.
My problem is that PyCharm won't find certain includes in its editor windows, like Flask-Login:
In from flask.ext.login import current_user, login_user, logout_user, login_required the includes current_user, login_user, logout_user, login_required are all marked as unresolved references.
Am I missing something? | PyCharm cannot find the packages in virtualenv | 0 | 0 | 0 | 43,919 |
34,520,291 | 2015-12-29T22:45:00.000 | 3 | 0 | 1 | 0 | python,virtualenv,pycharm | 54,056,397 | 11 | false | 0 | 0 | Also note the accepted answer is no longer applicable to PyCharm menu structure. It is now File > Settings > Project > Project Interpreter > Gear Icon > Show All
The following steps detail the "nuclear" option:
Delete your project virtual environment directory (e.g. /venv)
Delete all other interpreters listed in menu option accessible by the route listed at the top of this post.
Close PyCharm
Delete the .idea directory in your project folder
Restart PyCharm, opening the project folder.
Go through the process of configuring a new interpreter.
That will pretty much get you starting from scratch. | 6 | 39 | 0 | I have the latest PyCharm CE and am using it with virtualenv. I have defined the interpreter as the interpreter in the virtualenv. The Project Interpreter window in PyCharm lists all the packages I have installed. I confirmed this by running pip freeze > requirements.txt and running through the packages manually.
My problem is that PyCharm won't find certain includes in its editor windows, like Flask-Login:
In from flask.ext.login import current_user, login_user, logout_user, login_required the includes current_user, login_user, logout_user, login_required are all marked as unresolved references.
Am I missing something? | PyCharm cannot find the packages in virtualenv | 0.054491 | 0 | 0 | 43,919 |
34,520,291 | 2015-12-29T22:45:00.000 | 3 | 0 | 1 | 0 | python,virtualenv,pycharm | 57,003,938 | 11 | false | 0 | 0 | I noticed that every time I open a different project it still has the venv from the project I was previously working on.
What I do is:
ctrl-alt-s (to go into preferences), then Project Interpreter/settings (gear icon), show all, then remove all the venv environments that aren't your current project (use the - sign). Restart, and you should be good to go. | 6 | 39 | 0 | I have the latest PyCharm CE and am using it with virtualenv. I have defined the interpreter as the interpreter in the virtualenv. The Project Interpreter window in PyCharm lists all the packages I have installed. I confirmed this by running pip freeze > requirements.txt and running through the packages manually.
My problem is that PyCharm won't find certain includes in its editor windows, like Flask-Login:
In from flask.ext.login import current_user, login_user, logout_user, login_required the includes current_user, login_user, logout_user, login_required are all marked as unresolved references.
Am I missing something? | PyCharm cannot find the packages in virtualenv | 0.054491 | 0 | 0 | 43,919 |
34,520,985 | 2015-12-29T23:54:00.000 | 16 | 0 | 1 | 0 | python,inheritance,python-decorators | 34,521,136 | 2 | false | 0 | 0 | The problem with trying to add @override is that at method definition time, the decorator has no way to tell whether or not the method actually overrides another method. It doesn't have access to the parent classes (or the current class, which doesn't even exist yet!).
If you want to add @override, the @override decorator can't actually do any override checking. You then have two options. Either there is no override checking, in which case @override is no better than a comment, or the type constructor needs to specifically know about @override and check it at class creation time. A convenience feature like @override really shouldn't need to complicate core parts of the type system implementation like that. Also, if you accidentally put @override on a non-method, the bug will go undetected until you try to call the decorated function and get a weird TypeError. | 2 | 36 | 0 | I've been using abstract classes in Python with ABCMeta. When you write an abstract method you tag it with the decorator @abstractmethod. One thing that I found odd (and unlike other languages) is that when the subclass overrides the superclass method, no decorator like @override is provided. Does anyone know what the logic behind this might be?
This makes it slightly confusing for someone reading the code to quickly establish which methods override/implement abstract methods versus methods that only exist in the subclass. | Why no @override decorator in Python to help code readability? | 1 | 0 | 0 | 29,666 |
34,520,985 | 2015-12-29T23:54:00.000 | 10 | 0 | 1 | 0 | python,inheritance,python-decorators | 34,521,125 | 2 | false | 0 | 0 | You're confusing Python decorators with Java annotations. Despite the similar syntax, they are completely different things. A Java annotation is an instruction to the compiler. But a Python decorator is executable code that does something concrete: it wraps the function in another function which can change what it does. This is the case for abstractmethod just as much as any other decorator; it does something, namely tell the ABC that there is a method that needs overriding. | 2 | 36 | 0 | I've been using abstract classes in Python with ABCMeta. When you write an abstract method you tag it with the decorator @abstractmethod. One thing that I found odd (and unlike other languages) is that when the subclass overrides the superclass method, no decorator like @override is provided. Does anyone know what the logic behind this might be?
This makes it slightly confusing for someone reading the code to quickly establish which methods override/implement abstract methods versus methods that only exist in the subclass. | Why no @override decorator in Python to help code readability? | 1 | 0 | 0 | 29,666 |
34,521,078 | 2015-12-30T00:04:00.000 | 0 | 0 | 1 | 0 | python,python-2.7,python-3.x,python-internals | 34,521,222 | 3 | false | 0 | 0 | Your understanding is generally correct, but it's worth noting that python lists are totally different animals compared to arrays in C or C++. From the documentation:
id(obj)
Return the “identity” of an object. This is an integer (or long integer) which is guaranteed to be unique and constant for this object during its lifetime. Two objects with non-overlapping lifetimes may have the same id() value.
The simple answer to your question is that lists in python are actually references. This results in their memory addresses being different as the address is that of the reference as opposed to the object as one might expect. | 2 | 9 | 0 | Okay a very silly question I'm sure. But how does python assign value to variables?
Say there is a variable a and is assigned the value a=2. So python assigns a memory location to the variable and a now points to the memory location that contains the value 2. Now, if I assign a variable b=a the variable b also points to the same location as variable a.
Now. If I assign a variable c=2 it still points to the same memory location as a instead of pointing to a new memory location. So, how does python work? Does it check first check all the previously assigned variables to check if any of them share the same values and then assign it the memory location?
Also, it doesn't work the same way with lists. If I assign a=[2,3] and then b=[2,3] and check their memory locations with the id function, I get two different memory locations.But c=b gives me the same location. Can someone explain the proper working and reason for this?
edit :-
Basically my question is because I've just started learning about the is operator and apparently it holds True only if they are pointing to the same location. So, if a=1000 and b=1000 a is b is False but, a="world" b="world" it holds true. | How does python assign values after assignment operator | 0 | 0 | 0 | 3,735 |
34,521,078 | 2015-12-30T00:04:00.000 | 0 | 0 | 1 | 0 | python,python-2.7,python-3.x,python-internals | 34,521,176 | 3 | false | 0 | 0 | This is an optimization that python performs for small integers. In general, you can't count on a and c pointing to the same location. If you try this experiment with progressively larger integers you'll see that it stops working at some point. I'm pretty sure 1000 is large enough but I'm not near a computer; I thought I remembered it being all integers from -128 to 127 are handled this way (or some other "round number"). | 2 | 9 | 0 | Okay a very silly question I'm sure. But how does python assign value to variables?
Say there is a variable a and is assigned the value a=2. So python assigns a memory location to the variable and a now points to the memory location that contains the value 2. Now, if I assign a variable b=a the variable b also points to the same location as variable a.
Now. If I assign a variable c=2 it still points to the same memory location as a instead of pointing to a new memory location. So, how does python work? Does it check first check all the previously assigned variables to check if any of them share the same values and then assign it the memory location?
Also, it doesn't work the same way with lists. If I assign a=[2,3] and then b=[2,3] and check their memory locations with the id function, I get two different memory locations.But c=b gives me the same location. Can someone explain the proper working and reason for this?
edit :-
Basically my question is because I've just started learning about the is operator and apparently it holds True only if they are pointing to the same location. So, if a=1000 and b=1000 a is b is False but, a="world" b="world" it holds true. | How does python assign values after assignment operator | 0 | 0 | 0 | 3,735 |
34,522,603 | 2015-12-30T03:43:00.000 | 0 | 0 | 1 | 0 | python,python-3.x,python-idle | 34,522,889 | 4 | false | 0 | 0 | Looks like the best you can do is save as a .py file. Open that in a text editor and continue working in IDLE. With each save, the text editor will refresh with all updates, including errors. At least TextWrangler will. | 1 | 2 | 0 | Is there a way to copy text out of Python IDLE on a Mac? When I highlight text and copy then past into a text editor, I get the same text pasted. It is some of the first text I start with in IDLE. None of the other text will copy out. | How to copy text from IDLE? | 0 | 0 | 0 | 2,625 |
34,522,605 | 2015-12-30T03:44:00.000 | 2 | 0 | 0 | 0 | python,twilio,payment | 34,529,283 | 1 | true | 0 | 0 | Phone numbers are purchased from your account. So you need to have atleast 1$(i think 1$ is required to purchase a phone number) of twilio credit to purchase a number. If you didnt have credit, you cannot purchase a numnber. And i think no Api is available to credit your account.
Best way to implement twilio number purchase webportal are
1) Have some credit in your twilio account
2) Charge users when they purchase twilio number
3) Set a twilio recharge trigger so that your twilio account is recharged from your bank account when credit goes below a limit | 1 | 0 | 0 | How can I let users purchase phone numbers without having credit on my account? I'd like the users to pay for the numbers directly themselves. Is it possible to have the users payment sent to my account as a credit and then use it to pay for the number? How can I do this with Twilio API? | How can I let users purchase Twilio numbers from my site? | 1.2 | 0 | 1 | 82 |
34,525,410 | 2015-12-30T08:05:00.000 | 2 | 0 | 1 | 0 | python,escaping,pycharm,sequence | 34,525,766 | 1 | false | 0 | 0 | Did not work with \ but with / instead.
Thanks | 1 | 0 | 0 | I would like to set a path as a string to some random variable. When i do that, Pycharm thinks the path has valid escape sequences.
Instead of taking the path as is, it changes parts of it to different patterns:
\f changes to \x0c and \a to \x07 and so on.
How do I prevent it to do so?
Sorry for not linking the code, I am not allowed. | Pycharm valid escape sequence messes with path strings | 0.379949 | 0 | 0 | 465 |
34,526,093 | 2015-12-30T08:55:00.000 | 0 | 0 | 0 | 0 | python,pymc,pymc3 | 34,560,260 | 1 | true | 0 | 0 | Found it... a bit silly of me. pymc3.Normal(mu,sd).random(), which basically just calls scipy.stats.norm | 1 | 0 | 1 | Is there a PyMC3 equivalent to the pymc.rnormal function, or has it been dropped in favor of numpy.random.normal? | What is the PyMC3 equivalent of the 'pymc.rnormal' function? | 1.2 | 0 | 0 | 93 |
34,528,107 | 2015-12-30T10:56:00.000 | 14 | 0 | 1 | 1 | python,macos,python-2.7,python-3.x | 34,529,150 | 7 | true | 0 | 0 | Since Python 2 and 3 can happily coexist on the same system, you can easily switch between them by specifying in your commands when you want to use Python 3.
So for Idle, you need to type idle3 in the terminal in order to use it with Python 3 and idle for using it with Python 2.
Similarly, if you need to run a script or reach a python prompt from the terminal you should type python3 when you want to use Python 3 and python when you want to use Python 2. | 6 | 8 | 0 | I have just installed Python 3.5.1 on my Mac (running the latest version of OSX). My system came with Python 2.7 installed. When I type IDLE at the Terminal prompt my system pulls up the original Python 2.7 rather than the newly installed Python 3.5. How do I get my system to default to Python 3.5.1 when I open the IDLE window from Terminal? | How do I make Python 3.5 my default version on MacOS? | 1.2 | 0 | 0 | 36,713 |
34,528,107 | 2015-12-30T10:56:00.000 | 1 | 0 | 1 | 1 | python,macos,python-2.7,python-3.x | 54,570,625 | 7 | false | 0 | 0 | Do right thing, do thing right!
Open your terminal,
input python -V, It likely shows:Python 2.7.10
input python3 -V, It likely shows:Python 3.7.2
input where python or which python, It likely shows:/usr/bin/python
input where python3 or which python3, It likely shows:
/usr/local/bin/python3
add the following line at the bottom of your PATH environment variable file in ~/.profile file or ~/.bash_profile under Bash or ~/.zshrc under zsh.
alias python='/usr/local/bin/python3'
OR
alias python=python3
input source ~/.bash_profile under Bash or source ~/.zshrc under zsh.
Quit the terminal.
Open your terminal, and input python -V, It likely shows:
Python 3.7.2
Note, the ~/.bash_profile under zsh is not that ~/.bash_profile.
The PATH environment variable under zsh instead ~/.profile (or ~/.bash_file) via ~/.zshrc.
Hope this helped you all! | 6 | 8 | 0 | I have just installed Python 3.5.1 on my Mac (running the latest version of OSX). My system came with Python 2.7 installed. When I type IDLE at the Terminal prompt my system pulls up the original Python 2.7 rather than the newly installed Python 3.5. How do I get my system to default to Python 3.5.1 when I open the IDLE window from Terminal? | How do I make Python 3.5 my default version on MacOS? | 0.028564 | 0 | 0 | 36,713 |
34,528,107 | 2015-12-30T10:56:00.000 | 1 | 0 | 1 | 1 | python,macos,python-2.7,python-3.x | 42,657,534 | 7 | false | 0 | 0 | You can switch to any python version in your project by creating a virtual environment.
virtualenv -p /usr/bin/python2.x (or python 3.x)
In case you just want to run a program in a specific version just open shell and enter python2.x or python3.x | 6 | 8 | 0 | I have just installed Python 3.5.1 on my Mac (running the latest version of OSX). My system came with Python 2.7 installed. When I type IDLE at the Terminal prompt my system pulls up the original Python 2.7 rather than the newly installed Python 3.5. How do I get my system to default to Python 3.5.1 when I open the IDLE window from Terminal? | How do I make Python 3.5 my default version on MacOS? | 0.028564 | 0 | 0 | 36,713 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.