Dataset schema (column, dtype, observed range or string-length range):
Q_Id (int64): 337 to 49.3M
CreationDate (string): length 23
Users Score (int64): -42 to 1.15k
Other (int64): 0 to 1
Python Basics and Environment (int64): 0 to 1
System Administration and DevOps (int64): 0 to 1
Tags (string): lengths 6 to 105
A_Id (int64): 518 to 72.5M
AnswerCount (int64): 1 to 64
is_accepted (bool): 2 classes
Web Development (int64): 0 to 1
GUI and Desktop Applications (int64): 0 to 1
Answer (string): lengths 6 to 11.6k
Available Count (int64): 1 to 31
Q_Score (int64): 0 to 6.79k
Data Science and Machine Learning (int64): 0 to 1
Question (string): lengths 15 to 29k
Title (string): lengths 11 to 150
Score (float64): -1 to 1.2
Database and SQL (int64): 0 to 1
Networking and APIs (int64): 0 to 1
ViewCount (int64): 8 to 6.81M
39,691,679
2016-09-25T20:46:00.000
1
0
0
0
python,django,django-models
39,692,799
1
true
1
1
If I understood your description correctly, you want a relationship where there can be many emailWidget or TextWidget instances for one instance of widgetManager. What you can do in this case is add a ForeignKey field pointing to widgetManager on both emailWidget and TextWidget. This way, you can have many instances of the widgets while they all refer to the same manager. I think you may have confused inheritance with model relationships when you said you want to extend widgets from a base class. Perhaps I'm wrong? I'm also not sure what you meant about the order of the widgets being important.
1
0
0
I have a model called widgetManager and 2 widget models called emailWidget and TextWidget. A single instance of widgetManager can have multiple instances of emailWidget and TextWidget. How can this be achieved with the following in mind: so far I only have two widget types, but there can be more in the future, and the order of the widgets is very important. I have tried adding two many-to-many relations in widgetManager, but that seems impractical and not the best way to go because of the first condition. What I have in mind is that maybe I can somehow make a base widget class and extend all the widgets from that class, but I am not sure about that. It would be super helpful if someone could point me in the right direction. Thanks in advance.
Proper model definition in Django for a widget manager
1.2
0
0
27
39,691,860
2016-09-25T21:08:00.000
1
0
1
0
python-2.7,pip
43,962,413
2
false
0
0
Install pip for Python 2.7 with easy_install: sudo easy_install-2.7 pip. Now you can use pip for that same specific version of Python: sudo pip2.7 install BeautifulSoup
1
2
0
I am using macOS Sierra 10.12 and after I upgraded my OS I can no longer install packages for python 3 using pip. Before I used to use pip for python2 and pip3 for python 3 as I have both versions of Python. But now I can no longer use pip to install libraries for python2. Can anyone help me how can I change my default pip installer to python2? So that I can just use pip install in order to install for python 2. For your information - when I only type python on terminal it says my default is python 2.7.
pip command by default using python 3...how to change it to python 2?
0.099668
0
0
4,506
39,691,902
2016-09-25T21:12:00.000
1
0
0
0
python,neural-network,tensorflow,conv-neural-network
54,791,471
9
false
0
0
The correct order is: Conv > Normalization > Activation > Dropout > Pooling
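As a toy illustration of that ordering (a dense matmul and per-feature standardization stand in for real convolution and batch-norm layers; the function and its parameters are illustrative, not TensorFlow API):

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_block(x, w, training, drop_p=0.5):
    x = x @ w                                           # 1. Conv (matmul stand-in)
    x = (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-5)   # 2. Normalization
    x = np.maximum(x, 0.0)                              # 3. Activation (ReLU)
    if training:                                        # 4. Dropout (inverted:
        mask = rng.random(x.shape) >= drop_p            #    scale at train time,
        x = x * mask / (1.0 - drop_p)                   #    identity at test)
    return x.reshape(x.shape[0], -1, 2).max(axis=2)     # 5. Pooling (max over pairs)

x = rng.standard_normal((8, 4))
w = rng.standard_normal((4, 4))
out = conv_block(x, w, training=True)
```

Note that because normalization happens before dropout, the batch statistics it learns are not distorted by the dropout mask, which is the concern raised in the question.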
2
206
1
The original question was in regard to TensorFlow implementations specifically. However, the answers are for implementations in general. This general answer is also the correct answer for TensorFlow. When using batch normalization and dropout in TensorFlow (specifically using the contrib.layers) do I need to be worried about the ordering? It seems possible that if I use dropout followed immediately by batch normalization there might be trouble. For example, if the shift in the batch normalization trains to the larger scale numbers of the training outputs, but then that same shift is applied to the smaller (due to the compensation for having more outputs) scale numbers without dropout during testing, then that shift may be off. Does the TensorFlow batch normalization layer automatically compensate for this? Or does this not happen for some reason I'm missing? Also, are there other pitfalls to look out for when using these two together? For example, assuming I'm using them in the correct order with regard to the above (assuming there is a correct order), could there be trouble with using both batch normalization and dropout on multiple successive layers? I don't immediately see a problem with that, but I might be missing something. Thank you much! UPDATE: An experimental test seems to suggest that ordering does matter. I ran the same network twice with only the batch norm and dropout reversed. When the dropout is before the batch norm, validation loss seems to be going up as training loss is going down. They're both going down in the other case. But in my case the movements are slow, so things may change after more training and it's just a single test. A more definitive and informed answer would still be appreciated.
Ordering of batch normalization and dropout?
0.022219
0
0
130,653
39,691,902
2016-09-25T21:12:00.000
0
0
0
0
python,neural-network,tensorflow,conv-neural-network
63,051,525
9
false
0
0
Conv/FC > BN > Sigmoid/tanh > Dropout. If the activation function is ReLU or similar, the order of normalization and dropout depends on your task.
2
206
1
The original question was in regard to TensorFlow implementations specifically. However, the answers are for implementations in general. This general answer is also the correct answer for TensorFlow. When using batch normalization and dropout in TensorFlow (specifically using the contrib.layers) do I need to be worried about the ordering? It seems possible that if I use dropout followed immediately by batch normalization there might be trouble. For example, if the shift in the batch normalization trains to the larger scale numbers of the training outputs, but then that same shift is applied to the smaller (due to the compensation for having more outputs) scale numbers without dropout during testing, then that shift may be off. Does the TensorFlow batch normalization layer automatically compensate for this? Or does this not happen for some reason I'm missing? Also, are there other pitfalls to look out for when using these two together? For example, assuming I'm using them in the correct order with regard to the above (assuming there is a correct order), could there be trouble with using both batch normalization and dropout on multiple successive layers? I don't immediately see a problem with that, but I might be missing something. Thank you much! UPDATE: An experimental test seems to suggest that ordering does matter. I ran the same network twice with only the batch norm and dropout reversed. When the dropout is before the batch norm, validation loss seems to be going up as training loss is going down. They're both going down in the other case. But in my case the movements are slow, so things may change after more training and it's just a single test. A more definitive and informed answer would still be appreciated.
Ordering of batch normalization and dropout?
0
0
0
130,653
39,694,646
2016-09-26T04:15:00.000
0
0
0
0
python,c++,boost,fortran,gfortran
39,716,307
1
false
0
0
I built Boost.Python libraries 1.61.0 from source for Python 2.7 using VC 14.0. Then used those in the build process for Netgen (again using VC 14.0) and pointed to the Python 2.7 library and include directories (as opposed to Python 3.5). This has thus far worked in Python 2.7 with my existing code.
1
2
0
I have a Python 2.7 project that has thus far been using gfortran and MinGW to build extensions. I use MinGW because it seems to support write statements and allocatable arrays in the Fortran code while MSVC does not. There is another project I would like to incorporate into my own (Netgen) but it is currently set up for Python 3.5 using Boost.Python. I first tried to transfer my own program to Python 3.5 but that is where I was reminded of the MSVC issues and apparently MinGW is not supported. For that reason, I've been trying to think of a way to compile Netgen + Boost.Python for deployment in Python 2.7. I think the Boost part is straightforward, but it seems I need Visual C++ 2008 to get it integrated with Python 2.7. I have the Visual C++ Compiler for Python 2.7 from Microsoft, but I haven't gotten it to work inside the CMake build system. I point it to the cl.exe compilers in the VC for Python folders and CMake always tells me that building a simple test program fails. Since I don't actually have (and can't find) Visual Studio 2008, not sure how far I'd get anyway. There's a lot of places that could have issues here, but I'm just looking for a go/no-go answer if that's what it is. Any solutions would obviously be welcomed. I am running Windows 10 64-bit. I'm not a C/C++ expert, but it seems like I have all the tools I need to compile Netgen using the VC for Python tools (cl, link, etc). I just don't have/not sure how to put it all together into a project or something like that.
Building Fortran extension for Python 3.5 or C extension for 2.7
0
0
0
168
39,700,254
2016-09-26T10:19:00.000
1
0
0
0
python,django
39,736,367
1
false
1
0
A partial answer. After some time with the WingIDE debugger, and some profiling with cProfile, I have located the main CPU-hogging issue. During initial Django startup there's a cascade of imports, in which the module validators.py prepares some compiled regular expressions for later use. One in particular, URLValidator.regex, is complicated and also involves five instances of the unicode character set (variable ul). This causes re.compile to perform a large amount of processing, notably in sre_compile.py _optimize_charset() and in a large number of calls to the fixup() function. As it happens, the particular combination of calls and data structures apparently hits a special slowness in the WingIDE 6.0b2 debugger. It's considerably faster in the WingIDE 5.1 debugger (though still much slower than when run from the command line). Not sure why yet, but Wingware is looking into it. This doesn't explain the occasional slowness when launched from the command line on Windows; there's an outside chance this was waiting for a sleeping drive to awaken. Still observing.
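The cProfile technique used above can be reproduced in a few lines; here a trivial `re.compile` stands in for the heavy import being investigated (the stand-in and the function name are illustrative):

```python
import cProfile
import io
import pstats

def profile_block():
    pr = cProfile.Profile()
    pr.enable()
    # --- code under investigation goes here; a regex compile as a stand-in ---
    import re
    re.compile(r"[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+")
    # -------------------------------------------------------------------------
    pr.disable()
    s = io.StringIO()
    # Sort by cumulative time to surface the expensive call chains
    pstats.Stats(pr, stream=s).sort_stats("cumulative").print_stats(5)
    return s.getvalue()

report = profile_block()
```

Printing `report` shows the top five call chains by cumulative time, which is how functions like `_optimize_charset()` show up.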
1
0
0
I'm trying to discover the cause of delays in Django 1.8 startup, especially, but not only, when run in a debugger (WingIDE 5 and 6 in my case). Minimal test case: the Django 1.8 tutorial "poll" example, completed just to the first point where 'manage.py runserver' works. All default configuration, using SQLite. Python 3.5.2 with Django 1.8.14, in a fresh venv. From the command line, on Linux (Mint 18) and Windows (7-64), this may run as fast as 2 seconds to reach the "Starting development server" message. But on Windows it sometimes takes 10+ secs. And in the debugger on both machines, it can take 40 secs. One specific issue: By placing print statements at the beginning and end of django/__init__.py setup(), I note that this function is called twice before the "Starting... " message, and again after that message; the first two times contribute half the delay each. This suggests that django is getting started three times. What is the purpose of that, or does it indicate a problem? (I did find that I could get rid of one of the first two setup()s using the runserver --noreload option. But why does it happen in the first place? And there's still a setup() call after the "Starting..." message.) To summarize the question: -- Any insights into what might be responsible for the delay? -- Why does django need to start three times? (Or twice, even with --noreload).
Django 1.8 startup delay troubleshooting
0.197375
0
0
268
39,710,675
2016-09-26T19:17:00.000
1
0
1
0
python,module,pycharm,package-managers
39,711,232
4
false
0
0
Did you change the interpreter in PyCharm? If not, go to File -> Settings -> Project -> Project Interpreter and change the interpreter to the one in Anaconda. It should find the package, unless it's installed in a weird location. If you don't have the Anaconda interpreter in the list of available interpreters, you can easily add it in that dialog as well. Click the gear icon, select "Add local" and navigate to the python executable from Anaconda.
3
2
0
I am trying to install a package called "quantecon" through PyCharm. If I have Python 3.5 as an interpreter then I can find the package in the settings menu. But I need to run Anaconda, it has a bunch of other packages I need like scipy, numpy, etc. Once I install Anaconda and use it as the interpreter (it runs on Python 3.5 and a bunch of other packages) quantecon disappears from the menu of modules in PyCharm. Why does quantecon appear with one interpreter and not with another when they both run on python 3.5? This only happens with PyCharm. If I use jupyter/ipython notebook I can have both Anaconda and quantecon. I prefer working with PyCharm, it would be ideal to be able to have both Anaconda and quantecon there. How can I install quantecon and have Anaconda as the interpreter? Thanks
Pycharm does not find module with one interpreter but does with another, why?
0.049958
0
0
2,380
39,710,675
2016-09-26T19:17:00.000
1
0
1
0
python,module,pycharm,package-managers
57,080,380
4
false
0
0
In PyCharm 2019, I had to remove all interpreters, then re-add the one that should have worked all along. Go to File -> Settings (or Ctrl+Alt+S), type 'interpreter' in the search box, and select 'Project Interpreter' on the left pane. Press the cog icon -> Show All, then remove all interpreters using the '-' button as many times as needed. Now add your interpreter. That solved it for me.
3
2
0
I am trying to install a package called "quantecon" through PyCharm. If I have Python 3.5 as an interpreter then I can find the package in the settings menu. But I need to run Anaconda, it has a bunch of other packages I need like scipy, numpy, etc. Once I install Anaconda and use it as the interpreter (it runs on Python 3.5 and a bunch of other packages) quantecon disappears from the menu of modules in PyCharm. Why does quantecon appear with one interpreter and not with another when they both run on python 3.5? This only happens with PyCharm. If I use jupyter/ipython notebook I can have both Anaconda and quantecon. I prefer working with PyCharm, it would be ideal to be able to have both Anaconda and quantecon there. How can I install quantecon and have Anaconda as the interpreter? Thanks
Pycharm does not find module with one interpreter but does with another, why?
0.049958
0
0
2,380
39,710,675
2016-09-26T19:17:00.000
0
0
1
0
python,module,pycharm,package-managers
39,711,203
4
false
0
0
Inside PyCharm, in Ubuntu, go to File -> Settings -> Project -> Project Interpreter and change the interpreter. If Anaconda is not there, click on the gear, add local and then go to /home/user/anaconda2/bin/python
3
2
0
I am trying to install a package called "quantecon" through PyCharm. If I have Python 3.5 as an interpreter then I can find the package in the settings menu. But I need to run Anaconda, it has a bunch of other packages I need like scipy, numpy, etc. Once I install Anaconda and use it as the interpreter (it runs on Python 3.5 and a bunch of other packages) quantecon disappears from the menu of modules in PyCharm. Why does quantecon appear with one interpreter and not with another when they both run on python 3.5? This only happens with PyCharm. If I use jupyter/ipython notebook I can have both Anaconda and quantecon. I prefer working with PyCharm, it would be ideal to be able to have both Anaconda and quantecon there. How can I install quantecon and have Anaconda as the interpreter? Thanks
Pycharm does not find module with one interpreter but does with another, why?
0
0
0
2,380
39,713,297
2016-09-26T22:18:00.000
9
0
1
0
python-2.7,installation
39,721,704
2
false
0
0
shutil has been in the standard library all along; there is nothing to install. Just import shutil.
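For example, copying a file needs nothing beyond the import (paths here are temporary files created for the demonstration):

```python
import os
import shutil
import tempfile

# Create a throwaway source file
src = tempfile.NamedTemporaryFile(delete=False, suffix=".txt")
src.write(b"hello")
src.close()

# shutil ships with Python 2.7 and 3.x alike: just import and use it
dst = src.name + ".copy"
shutil.copy(src.name, dst)
```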
1
14
0
In Python 2.7, I find I can't install the shutil library through pip or easy_install; they say they can't find it. How can I install it?
can not install shutil library in Python 2.7
1
0
0
27,430
39,713,338
2016-09-26T22:23:00.000
1
0
1
0
python,python-3.x,dictionary
39,713,537
2
true
0
0
You could use two separate data structures: One to define unchanging information such as card names and values (as you have already done), and another to keep track of changing information, such as how many of each card remains in the deck.
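A sketch of that two-structure idea (names and the abbreviated value table are illustrative): the dict holds the fixed card values, while a Counter tracks how many of each card remain across the three decks.

```python
from collections import Counter

# Unchanging information: one entry per rank (abbreviated here)
CARD_VALUES = {"ace": [1, 11], "2": 2, "3": 3}

# Changing information: remaining copies of each rank, 4 suits x 3 decks
deck = Counter({rank: 4 * 3 for rank in CARD_VALUES})

def draw(rank):
    """Remove one copy of `rank` from the deck and return its value(s)."""
    if deck[rank] == 0:
        raise ValueError("no %s left in the deck" % rank)
    deck[rank] -= 1
    return CARD_VALUES[rank]
```

This avoids needing duplicate keys in a dictionary entirely: the key appears once, and the count carries the multiplicity.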
1
0
0
I'm creating a blackjack game with a deck of cards like so: cards = {'ace':[1,11],'2':2,'3':3}... and so on. I want to have 3 decks of cards, so is it possible to have more than one ace, etc.? If so, how? I still want each card to have the keypair of its value, but I don't care if it's in a dictionary.
Possible to have the same key more than once in a dictionary
1.2
0
0
44
39,713,433
2016-09-26T22:34:00.000
1
0
1
0
python,keyboard,spyder
60,240,641
2
false
0
0
Set this configuration in Spyder: Run > Run Configuration Per File > Execute In An External System Terminal In my experience "msvcrt.kbhit" only works in CMD.
1
2
0
Has anyone come across a way to emulate kbhit() in the Spyder environment on Windows? Somehow the development environment gets between the Python program and the keyboard, so any simple way of doing it (i.e. msvcrt.kbhit()) does not work.
How to make kbhit() work in the Spyder environment
0.099668
0
0
478
39,713,540
2016-09-26T22:46:00.000
1
0
0
0
python,tcp,scapy
40,023,525
1
true
0
0
You cannot directly write the TCP options field byte per byte; however, you can either write your entire TCP segment byte per byte, e.g. TCP("\x01...\x0n"), or manually add an option to the TCPOptions structure in scapy/layers/inet.py in Scapy's code. These are workarounds; a definitive solution would of course be to implement a byte-per-byte TCP options field and submit it on Scapy's GitHub.
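If you go the raw-bytes route, the option encoding itself is standard TCP (kind, total length, data; EOL and NOP are single bytes), so you can build the bytes yourself before splicing them into a hand-built segment. A small helper, independent of Scapy:

```python
import struct

def tcp_option(kind, value=b""):
    """Encode one TCP option as raw bytes.

    EOL (0) and NOP (1) are single-byte options; every other kind is
    encoded as kind, total length (header + data), then the data bytes.
    """
    if kind in (0, 1):
        return struct.pack("!B", kind)
    return struct.pack("!BB", kind, 2 + len(value)) + value

# MSS option (kind 2) advertising 1460 bytes
mss = tcp_option(2, struct.pack("!H", 1460))
```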
1
0
0
I want to read and write custom data to TCP options field using Scapy. I know how to use TCP options field in Scapy in "normal" way as dictionary, but is it possible to write to it byte per byte?
Read/Write TCP options field
1.2
0
1
402
39,713,636
2016-09-26T22:59:00.000
0
0
0
0
python,neo4j,neo4jclient,neo4jrestclient
39,718,452
1
false
0
0
No. Especially if you use one of the clients to do the migration, they will automatically escape anything that needs to be escaped; beyond that, there's no conflict I've come across.
1
0
0
I'm attempting to transfer all data over to Neo4j, and am wondering if it would be alright to name all properties on nodes the same as in Postgres exactly. E.g id will be id, name will be name, and so on. Are there any conflicts with doing something like this?
Neo4j Transferring All Data from Postgres
0
1
0
68
39,713,752
2016-09-26T23:12:00.000
2
0
1
0
python,python-3.x,export,processing
40,960,287
1
true
0
1
processing.py is not compatible with CPython (native python), nor with any c-language modules.
1
1
0
I'm currently using PyGame to build games on python, however exporting becomes rather difficult as Py2Exe and Py2App are almost the only ways to do so and are not very effective. I tried to using the wonderful exporting mechanic in Processing (in Python mode), but this opened a Pandora's box of problems. How do you import modules in Processing.py ? I read that if the module is in the form of a PY file it is simply to be located in the same folder as the sketch. But some modules, like PyGame, are more complex and require an installer or a wheel (WHL file, which is installed through PIP). During some testing, I tried to export a simple one-line program ( print('a') ) but the 'application.windows64' folder was missing an EXE file. I'm not an expert, but I think that might be a problem :) Thanks in advance!
Importing Complex Modules into Processing.py
1.2
0
0
218
39,713,798
2016-09-26T23:18:00.000
0
0
1
0
python,graph,kruskals-algorithm
39,714,032
2
false
0
0
Actually the running time of the algorithm is O(E log V). The key to its performance lies in your point 4: more specifically, the implementation of determining, for a light edge e = (a, b), whether 'a' and 'b' belong to the same set and, if not, performing the union of their respective sets. For more clarification on the topic I recommend the book "Introduction to Algorithms" from MIT Press, ISBN 0-262-03293-7, p. 561 (for the general topic of MSTs) and p. 568 (for Kruskal's algorithm). As it states, and I quote: "The running time of Kruskal's algorithm for a graph G = (V, E) depends on the implementation of the disjoint-set data structure. We shall assume the disjoint-set-forest implementation of Section 21.3 with the union-by-rank and path-compression heuristics, since it is the asymptotically fastest implementation known." A few lines later, with some simple time-complexity calculation, it proves the time complexity to be O(E log V).
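A compact sketch of the disjoint-set forest with union by rank and path compression that the quoted analysis assumes (class and method names are my own); in Kruskal's loop, a `union` returning False is exactly the "same subset, skip the edge" case:

```python
class DisjointSet:
    def __init__(self, items):
        self.parent = {x: x for x in items}
        self.rank = {x: 0 for x in items}

    def find(self, x):
        # Path compression: point every node on the path at the root
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])
        return self.parent[x]

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False  # same set already: adding this edge makes a cycle
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra  # union by rank: attach shorter tree under taller
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
        return True
```

This replaces the linear scan over `subsets` in the question with near-constant-time operations, which is what makes the O(E log V) bound hold.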
1
0
1
Please help me fill any gaps in my knowledge (I'm teaching myself): So far I understand that given a graph of N vertices and edges, we want to form an MST that will have N-1 edges. We order the edges by their weight. We create a set of subsets where each vertex is given its own subset. So if we have {A,B,C,D} as our initial set of vertices, we now have {{A}, {B}, {C}, {D}}. We also create a set A that will hold the answer. We go down the list of ordered edges and look at each edge's vertices, V1 and V2. If they are in separate subsets, we can join the two subsets and add the edge into the set A that holds our edges. If they are in the same subset, we go to the next option (because it's a cycle). We continue this pattern until we reach the end of the edge list or the length of our set A reaches the number of vertices minus 1. If the above assertions are true, my following questions regard the implementation: If we use a list[] to hold the subsets of the set that contains the vertices: subsets = [[1][2][3][4][5][6][7]] and each edge requires looking up two subsets, so if we need to find (6,7) the result would be my_path = [(6,7)] #holds all paths subsets = [[1][2][3][4][5][6,7]] wouldn't finding the subset in subsets take too long to stay within O(n log(n))? Is there a better approach, or am I doing this correctly?
Need some clarification on Kruskals and Union-Find
0
0
0
329
39,714,476
2016-09-27T00:49:00.000
0
0
1
0
pdf,latex,python-sphinx
63,270,705
2
false
0
0
What is in your index.rst file? In index.rst, it is necessary to insert "modules" manually in order for the PDF to work correctly (assuming your build process creates modules.rst in the same folder as index.rst). Otherwise, the PDF output just includes nothing.
2
2
0
Sphinx can generate a general index as well as language-specific module indices, but when I use Sphinx to generate a PDF I cannot find the automatic indices. What should I do?
Using Sphinx to build a PDF: how to generate automatic indices
0
0
0
333
39,714,476
2016-09-27T00:49:00.000
0
0
1
0
pdf,latex,python-sphinx
39,732,119
2
false
0
0
Have you run the build twice? LaTeX may get the indexes right only after the second build. During the first run, the document with the table of contents and indices is generated; during the second build that information is used in the LaTeX file.
2
2
0
Sphinx can generate a general index as well as language-specific module indices, but when I use Sphinx to generate a PDF I cannot find the automatic indices. What should I do?
Using Sphinx to build a PDF: how to generate automatic indices
0
0
0
333
39,715,472
2016-09-27T03:22:00.000
0
0
0
0
python,opencv,video,overlay
39,716,115
2
false
0
0
What you need are two Mat objects: one to stream the camera (say Mat_cam), and the other to hold the overlay (Mat_overlay). When you draw on your main window, save the line and Rect objects on Mat_overlay, and make sure it is not affected by the streaming video. When the next frame is received, Mat_cam will be updated and will hold the next video frame, but Mat_overlay will stay the same, since it is not cleared/refreshed on every loop iteration. Adding Mat_overlay and Mat_cam using weighted addition will give you the desired result.
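A toy numpy sketch of the idea (tiny grayscale arrays stand in for real frames, and the weighted sum is what `cv2.addWeighted` computes for you in OpenCV):

```python
import numpy as np

h, w = 4, 4
# Mat_overlay: created once, outside the capture loop, so drawings persist
overlay = np.zeros((h, w), dtype=np.float32)
overlay[1, 1] = 255.0  # pretend the user drew a dot here

for frame_no in range(3):
    # Mat_cam: a fresh frame arrives on every iteration of the capture loop
    frame = np.full((h, w), 100.0, dtype=np.float32)
    # Weighted addition blends the persistent overlay onto the current frame
    shown = 0.7 * frame + 0.3 * overlay
```

Each iteration replaces `frame`, but the drawn dot survives in `overlay` and reappears in every blended `shown` image, which is the behavior the question asks for.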
2
1
1
I am currently working in Python and using OpenCV's videocapture and cv.imshow to show a video. I am trying to put an overlay on this video so I can draw on it using cv.line, cv.rectangle, etc. Each time the frame changes it clears the image that was drawn so I am hoping if I was to put an overlay of some sort on top of this that it would allow me to draw multiple images on the video without clearing. Any advice? Thanks ahead!
How to put an overlay on a video
0
0
0
1,160
39,715,472
2016-09-27T03:22:00.000
0
0
0
0
python,opencv,video,overlay
39,721,387
2
false
0
0
I am not sure that I have understood your question properly. What I got from your question is that you want the overlay to remain on your frame streamed from VideoCapture. One simple solution is to declare your "Mat_cam" (camera streaming) variable outside the loop that is used to capture frames, so that "Mat_cam" will not be freed every time you loop through it.
2
1
1
I am currently working in Python and using OpenCV's videocapture and cv.imshow to show a video. I am trying to put an overlay on this video so I can draw on it using cv.line, cv.rectangle, etc. Each time the frame changes it clears the image that was drawn so I am hoping if I was to put an overlay of some sort on top of this that it would allow me to draw multiple images on the video without clearing. Any advice? Thanks ahead!
How to put an overlay on a video
0
0
0
1,160
39,716,983
2016-09-27T05:55:00.000
0
0
0
0
python-2.7,skype-for-business,skypedeveloper
39,847,739
1
false
0
0
Currently there is no way to present content through the Skype Web SDK. This might be something we add in a future release.
1
0
0
I have made a Python desktop app which takes voice commands. Now I want to present it via Skype to someone, and I want the people to hear the response. Is there a way to do this, so that everyone on the call can hear the response and give voice commands to it?
Skype integration with python desktop app
0
0
1
439
39,717,464
2016-09-27T06:27:00.000
3
0
0
0
python,tableau-api
39,732,617
2
true
0
0
The simplest approach is to issue an HTTP GET request from Python to your Tableau Server and append a format string to the URL such as ".png" or ".pdf". There are size options you can experiment with as well -- press the Share button to see the syntax. You can also pass filter settings in the URL as query parameters
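The URL shape below is illustrative only (the exact path depends on your server and site; as the answer says, press Share on a view to see the real syntax). A small helper that appends the format suffix and filter query parameters:

```python
from urllib.parse import urlencode

def export_url(server, workbook, view, fmt="png", filters=None):
    """Build a Tableau view URL with a format suffix (.png, .pdf, ...)
    and optional filter settings as query parameters."""
    url = "https://%s/views/%s/%s.%s" % (server, workbook, view, fmt)
    if filters:
        url += "?" + urlencode(filters)
    return url
```

You would then issue an authenticated HTTP GET against the returned URL and write the response body to a file.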
1
2
0
I need to download an image from the Tableau server using a Python script. The Tableau REST API doesn't provide any option to do so. I would like to know the proper way of downloading a high-resolution/full-size image from Tableau Server using Python or any other server scripting language.
Tableau download/export images using Rest api python
1.2
0
1
6,844
39,717,737
2016-09-27T06:42:00.000
-1
0
1
0
python,python-2.7,testing,automated-tests
40,362,623
2
false
0
0
It is of course possible since Python is Turing complete. However, you should use one of the available open source or commercial libraries to handle the STDF writing if you are not familiar with STDF. Even one mis-placed byte in the binary output will wreck your file. It is impossible to say whether an existing tool can do this for you because a text file can have anything in it. Your text file will need to adhere to the tool's expectations of where the necessary header data (lot id, program name, etc.), test names and numbers, part identifiers, test results and so on will be in the text file.
1
0
0
I'm trying to convert a .txt data file to an STDF file (ATE Standard Test Data Format, commonly used in semiconductor tests). Is there any way to do that? Are there any libraries in Python which would help in cases like this? Thanks!
How to create / Write STDF file with python?
-0.099668
0
0
2,633
39,718,367
2016-09-27T07:15:00.000
1
0
1
0
python,installation,anaconda,plotly,jupyter
51,005,546
3
false
0
0
I had a similar issue. Go to the Anaconda prompt, type pip install plotly and hit enter. Then restart the kernel and try importing it again.
1
2
0
I am new to Python. I installed the plotly library using the Anaconda prompt with the command pip install plotly, and it shows as installed in the list of installed libraries, but it doesn't get imported when I try to import it in a Jupyter notebook, which says the module is not found.
Unable to import plotly into Jupyter Notebook
0.066568
0
0
4,619
39,722,984
2016-09-27T11:02:00.000
1
0
1
0
python,numpy,compilation
39,728,900
3
false
0
0
Python can execute functions written in Python (interpreted) and compiled functions. There are whole API docs about writing code for integration with Python. cython is one of the easier tools for doing this. Libraries can be any combination - pure Python, Python plus interfaces to compiled code, or all compiled. The interpreted files end with .py, the compiled stuff usually is .so or .dll (depending on the operating system). It's easy to install pure Python code - just load, unzip if needed, and put it in the right directory. Mixed code requires a compilation step (and hence a C compiler, etc), or downloading a version with binaries. Typically developers get the code working in Python, and then rewrite speed-sensitive portions in C. Or they find some external library of working C or Fortran code, and link to that. numpy and scipy are mixed. They have lots of Python code, core compiled portions, and use external libraries. And the C code can be extraordinarily hard to read. As a numpy user, you should first try to get as much clarity and performance with Python code. Most of the optimization SO questions discuss ways of making use of the compiled functionality of numpy - all the operations that work on whole arrays. It's only when you can't express your operations in efficient numpy code that you need to resort to using a tool like cython or numba. In general, if you have to iterate extensively then you are using low-level operations. Either replace the loops with array operations, or rewrite the loop in cython.
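One way to see which kind a given module is (the function and its labels are my own; CPython built-ins compiled into the interpreter have no `__file__` at all, while extension modules carry a platform-specific suffix such as `.so` or `.pyd`):

```python
import importlib.machinery as machinery

def module_kind(mod):
    """Classify an imported module as built-in, compiled extension, or source."""
    fname = getattr(mod, "__file__", None)
    if fname is None:
        return "built-in"           # compiled into the interpreter itself
    if any(fname.endswith(s) for s in machinery.EXTENSION_SUFFIXES):
        return "compiled extension"  # e.g. a .so/.pyd built from C
    return "python source"           # ordinary interpreted .py code

import json
import sys
```

For instance, `module_kind(sys)` reports a built-in, while `module_kind(json)` reports plain Python source.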
1
4
0
Trying to understand whether python libraries are compiled because I want to know if the interpreted code I write will perform the same or worse. e.g. I saw it mentioned somewhere that numpy and scipy are efficient because they are compiled. I don't think this means byte code compiled so how was this done? Was it compiled to c using something like cython? Or was it written using a language like c and compiled in a compatible way? Does this apply to all modules or is it on a case-by-case basis?
Are Python modules compiled?
0.066568
0
0
2,959
39,726,495
2016-09-27T13:50:00.000
1
0
0
0
python,excel,email,sap,business-objects
39,727,668
1
false
1
0
It's kind of hack-ish, but it can be done. Have the program (exe) write out the bytes of the Excel file to standard output. Then configure the program object for email destination, and set the filename to a specific name (ex. "whatever.xlsx"). When emailing a program object, the attached file will contain the standard output/error of the program. Generally this will just be text but it works for binary output as well. As this is a hack, if the program generates any other text (such as error message) to standard out, it will be included in the .xlsx file, which will make the file invalid. I'd suggest managing program errors such that they get logged to a file and NOT to standard out/error. I've tested this with a Java program object; but an exe should work just as well.
1
0
0
I have been trying to schedule a report in SAP BO CMC. This report was initially written in Python and built into a .exe file. This .exe application runs to save the report into an .xlsx file in a local folder. I want to utilize the convenient scheduling functions in SAP BO CMC to send the report in Emails. I tried and created a "Local Program" in CMC and linked it to the .exe file, but you can easily imagine the problem I am faced with -- the application puts the file in the folder as usual but CMC won't be able to grab the Excel file generated. Is there a way to re-write the Python program a bit so that the output is not a file in some folder, but an object that CMC can get as an attachment to the Emails? I have been scheduling Crystal reports in CMC and this happens naturally. The Crystal output can be sent as an attachment to the Email. Wonder if the similar could happen for a .exe , and how? Kindly share your thoughts. Thank you very much! P.S. Don't think it possible to re-write the report in Crystal though, as the data needs to be manipulated based on inputs from different data sources. That's where Python comes in to help. And I hope I don't need to write the program as to cover the Emailing stuff and schedule it in windows' scheduled tasks. Last option... This would be too inconvenient to maintain. We don't get access to the server easily.
How can I out put an Excel file as Email attachment in SAP CMC?
0.197375
1
0
744
39,726,577
2016-09-27T13:54:00.000
1
0
1
0
python,rest,pycurl,http-status-code-422
39,726,781
1
true
0
0
No this does not exist in general. Some services support an OPTIONS request to the route in question, which should return you documentation about the route. If you are lucky this is machine generated from the same source code that implements the route, so is more accurate than static documentation. However, it may just return a very simple summary, such as which HTTP verbs are supported, which you already know. Even better, some services may support a machine description of the API using WSDL or WADL, although you probably will only find that if the service also supports XML. This can be better because you will be able to find a library that can parse the description and generate a local object model of the service to use to interact with the API. However, even if you have OPTIONS or WADL file, the kind of error you are facing could still happen. If the documents are not helping, you probably need to contact the service support team with a demonstration of your problem and request assistance.
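As noted above, an OPTIONS request is the closest thing to self-description that many services offer. A minimal sketch using only Python's standard library (the URL is a placeholder, and whether the service returns anything useful depends entirely on the vendor):

```python
import urllib.request

# Build an OPTIONS request; urllib defaults to GET/POST, so the
# method must be set explicitly.
req = urllib.request.Request("https://api.example.com/servers", method="OPTIONS")
print(req.get_method())  # OPTIONS

# Actually sending it is left commented out since the endpoint here is
# fictional; a cooperative service would answer with an Allow header
# and possibly a documentation body:
# with urllib.request.urlopen(req) as resp:
#     print(resp.headers.get("Allow"))  # e.g. "GET, POST, PATCH"
```

If the appliance supports it, the same idea works with the Requests library via requests.options(url).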
1
3
0
I'm working with a networking appliance that has vague API documentation. I'm able to execute PATCH and GET requests fine, but POST isn't working: I receive HTTP status 422 as a response, indicating I'm missing a field in the JSON request, but I am providing the required fields as specified in the documentation. I have tried the Python Requests module and the vendor-provided PyCurl module in their sample code, but have encountered the same error. Does the REST API have a debug method that returns the required fields, and their value types, for a specific POST? I'm speaking of what the template is configured to expect in the request (such as JSON {str(ServerName) : int(ServerID)}), not what the API developer may have created.
How to determine what fields are required by a REST API, from the API?
1.2
0
1
2,845
39,726,921
2016-09-27T14:09:00.000
0
0
1
0
python
39,727,471
1
false
0
0
Let's start from the second point: if the list you store in memory is larger than the available RAM, the computer starts using the hard disk as RAM, and this severely slows down everything. The optimal way of outputting in your situation is to fill the RAM as much as possible (always keeping enough space for the rest of the software running on your PC) and then write to the file all at once. The fastest way to store a list in a file would be to use pickle, so that you store binary data that takes much less space than formatted data (so even the read/write process is much faster). When you write to a file, you should keep the file open the whole time, using something like with open('namefile', 'w') as f. In this way, you save the time to open/close the file and the cursor will always be at the end. If you decide to do that, use f.flush() once you have written to the file, to avoid losing data if something bad happens. The append mode is a good alternative anyway. If you provide some code it would be easier to help you...
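A short sketch of the append-mode approach described above (the path and chunk contents are placeholders): opening with 'ab' never reads the existing 5GB, and flushing after each 100MB batch limits how much can be lost on a crash.

```python
import os
import tempfile

def append_chunk(path, chunk):
    # 'ab' opens for appending in binary mode: the existing bytes are
    # never read or moved; the OS simply positions the write cursor at
    # the end, so the size of the file already on disk is irrelevant.
    with open(path, "ab") as f:
        f.write(chunk)
        f.flush()             # push Python's buffer out to the OS
        os.fsync(f.fileno())  # optionally force the bytes onto disk

path = os.path.join(tempfile.mkdtemp(), "data.bin")
append_chunk(path, b"first chunk of data")
append_chunk(path, b" -- appended later")

with open(path, "rb") as f:
    print(f.read())  # b'first chunk of data -- appended later'
```

The same open file handle works with pickle.dump(obj, f) if each batch is a pickled object rather than raw bytes.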
1
0
0
Say I have a data file of size 5GB on the disk, and I want to append another set of data of size 100MB at the end of the file -- just simply append, I don't want to modify nor move the original data in the file. I know I can read the whole file into memory as a long list and append my small new data to it, but it's too slow. How can I do this more efficiently? I mean, without reading the whole file into memory? I have a script that generates a large stream of data, say 5GB, as a long list, and I need to save this data into a file. I tried to generate the list first and then output it all at once, but as the list grew, the computer slowed down very severely. So I decided to output it in several batches: each time I have a list of 100MB, I output it and clear the list. (This is why I have the first problem.) I have no idea how to do this. Is there any lib or function that can do this?
modify and write large file in python
0
0
0
1,079
39,736,740
2016-09-28T01:43:00.000
0
1
0
0
python,python-2.7,testing,automated-tests,robotframework
39,766,125
1
false
0
0
I may be incorrect, but perhaps you want to capture the data sent/received between the computer and the device through the serial port. If this is true, then a serial port sniffer will be required. Linux and Mac OS X do not support sniffing; however, sniffing tools are available for Windows.
1
0
0
We are testing networking devices to which test interaction is done using serial ports. Python 2.7 with Windows is used to achieve this using the PySerial module of Python. The scripts are run using Robot framework. We observe that the Robot logs do not contain the serial device interaction dialogues. We tried checking on Robot framework forums and it is unlikely that such support exists at Robot framework level. We need to implement this in Python. How can the following be achieved: I) Basic requirement: All script interaction with the (multiple) test devices on serial port needs to be captured into a log file II) Advanced requirement: while the script is not actively interacting with the test device there has to be continuous background monitoring of the device under test over serial ports for any errors/crashes Thanks!
Python2.7(on Windows) Need to capture serial port output into log files during Python/Robot script run
0
0
0
466
39,736,788
2016-09-28T01:49:00.000
0
1
0
0
python,linux,redhat,yum
39,917,999
1
false
0
0
Once you get your system back to normal, add Python 2.7 as a Software Collection - it installs alongside the original Python 2.6 rather than replacing it, so both are available without collision. Get 2.7 and others from softwarecollections.org.
1
0
0
I'm using Scientific Linux on a remote machine. I tried to install Python 2.7 on it. After that, yum and some other Python packages stopped working (it says "no module named yum"). I searched online and it seems I should not have touched the system Python, as it breaks some of the system tools. Is there a way to reinstall the previous Python (which was 2.6)? I already tried to install Python 2.6 by downloading the package, but yum is still not working.
Yum is not working anymore in scientific linux after python update
0
0
0
165
39,737,321
2016-09-28T02:58:00.000
0
0
1
0
python-3.x,pycharm
39,739,300
1
true
0
0
I compared the intersection '∩' character and 'n' by their Unicode code points, and found they are different although they look similar.
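A quick way to confirm that the two glyphs are distinct code points is Python's unicodedata module:

```python
import unicodedata

# The two characters look alike but have different code points and names.
print(hex(ord('∩')), unicodedata.name('∩'))  # 0x2229 INTERSECTION
print(hex(ord('n')), unicodedata.name('n'))  # 0x6e LATIN SMALL LETTER N
```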
1
0
0
I want to input the intersection '∩' character in PyCharm. After I copy the intersection '∩' character into PyCharm, it becomes 'n'.
Pycharm: how to input the intersection '∩' character in Pycharm
1.2
0
0
85
39,738,301
2016-09-28T04:52:00.000
0
0
1
0
python,ubuntu,vim,conda,miniconda
46,126,075
2
false
0
0
In Short Just go to the ycmd submodule inside the YouCompleteMe folder, or to be exact YouCompleteMe/third_party/ycmd, and then run the git submodule command below. git submodule update --init --recursive Explanation I got the same issue as yours; it was caused by the submodules of YouCompleteMe not being cloned properly. This command should be able to solve the problem: git submodule update --init --recursive But unfortunately the problem may still persist, with urllib3 not found, and installing the library using pip won't solve the issue. The problem is actually located in the ycmd submodule, which needs urllib3 -- or to be more precise, the requests submodule of ycmd needs it. After some experimenting, the main problem was that the git submodule command was unable to properly clone the submodule, which raised an error about the module not being found. Hope this can be a help for you :)
1
0
0
I am trying to install YouCompleteMe plugin on a source compiled Vim instance. I have a server without sudo privileges, hence I had to compile new Vim (7.4+) in order to make most plugins work. Also, I have installed miniconda and thus refer to the python in miniconda for all installations. While following all steps how to install YouCompleteMe plugin (via Vundle or even manually), I faced this issue : "Cannot find module urllib3". So I installed urllib3 via pip, and then the error changed to "cannot import name _compare_digest". Point to note that conda virtualenv (I have just made the miniconda bin to $PATH) cannot start and it still shows "Cannot find module urllib3" even after installing it explicitly. Is there something wrong with the way I installed vim? I had been extra careful to point to miniconda python wherever it's needed. How do I mitigate this issue and get the plugin running again?
YouCompleteMe post install error : cannot import name _compare_digest
0
0
0
1,206
39,738,703
2016-09-28T05:27:00.000
0
0
0
0
python,machine-learning,scikit-learn
39,743,987
2
false
0
0
Random forests do indeed give P(Y|x) for multiple classes. In most cases P(Y|x) can be taken as: P(Y|x) = the number of trees which vote for the class / total number of trees. However, you can play around with this. For example, in one case the highest class has 260 votes, the 2nd class 230 votes, and the other 5 classes 10 votes each; in another case class 1 has 260 votes and the other classes have 40 votes each. You might feel more confident in your prediction in the 2nd case as compared to the 1st, so you can come up with a confidence metric according to your use case.
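The vote-fraction estimate described above can be sketched without any library (in scikit-learn itself, RandomForestClassifier.predict_proba returns essentially this per-tree average). A minimal plain-Python sketch; the vote counts are made up for illustration:

```python
from collections import Counter

def class_probabilities(tree_votes):
    """tree_votes: list with one predicted class label per tree."""
    counts = Counter(tree_votes)
    total = len(tree_votes)
    # Probability of each class = fraction of trees voting for it.
    return {cls: n / total for cls, n in counts.items()}

# 500 trees: 260 vote class 0, 230 vote class 1, 10 vote class 2
votes = [0] * 260 + [1] * 230 + [2] * 10
probs = class_probabilities(votes)
print(probs)  # {0: 0.52, 1: 0.46, 2: 0.02}
```

The probabilities always sum to 1, so this gives P(y=c|x) for every class from a single trained forest, with no per-class classifiers needed.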
1
1
1
Given a classification problem, sometimes we do not just predict a class, but need to return the probability that it is a class. i.e. P(y=0|x), P(y=1|x), P(y=2|x), ..., P(y=C|x) Without building a new classifier to predict y=0, y=1, y=2... y=C respectively. Since training C classifiers (let's say C=100) can be quite slow. What can be done to do this? What classifiers naturally can give all probabilities easily (one I know is using neural network with 100 out nodes)? But if I use traditional random forests, I can't do that, right? I use the Python Scikit-Learn library.
How do you get a probability of all classes to predict without building a classifier for each single class?
0
0
0
1,945
39,738,872
2016-09-28T05:42:00.000
2
0
1
0
python,compilation,comparison,abstract-syntax-tree
39,738,985
2
false
0
0
One approach would be to count the number of functions, objects, and keywords, possibly grouped into categories such as branching, creating, manipulating, etc., and the number of variables of each type -- without relying on the methods and variables being given the same name(s). For a given problem, similar approaches will tend to come out with similar scores for these, e.g.: a student who used a decision tree would have a high number of branch statements, while one who used a decision table would have a much lower one. This approach would be much quicker to implement than parsing the code structure and comparing the results.
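The counting idea above can be sketched with Python's ast module, which tallies node types (function definitions, branches, loops, and so on) so two code blocks can be compared without relying on identifier names. The sample snippets and the choice of features are illustrative assumptions:

```python
import ast
from collections import Counter

def feature_counts(source):
    """Count AST node types appearing in a block of Python source."""
    tree = ast.parse(source)
    return Counter(type(node).__name__ for node in ast.walk(tree))

a = "def f(x):\n    if x > 0:\n        return x\n    return -x"
b = "def g(y):\n    return y if y > 0 else -y"

ca, cb = feature_counts(a), feature_counts(b)
# Same problem, different style: one uses an `if` statement,
# the other a conditional expression.
print(ca["If"], cb["IfExp"])  # 1 1
```

From these counters, a similarity score between two submissions could then be computed, for example as cosine similarity over the count vectors; that scoring choice is a suggestion, not something prescribed by the answer.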
1
2
0
Many would want to measure code similarity to catch plagiarisms, however my intention is to cluster a set of python code blocks (say answers to the same programming question) into different categories and distinguish different approaches taken by students. If you have any idea how this could be achieved, I would appreciate it if you share it here.
How to measure similarity between two python code blocks?
0.197375
0
0
960
39,744,688
2016-09-28T10:20:00.000
4
0
0
0
python,flask-sqlalchemy,flask-migrate
39,761,658
1
true
1
0
If you have made no changes to your model since the current migration, but a non-empty migration file is generated, it suggests that for some reason your models became out of sync with the database, and the contents of this new migration are just the things that are mismatched. If you say that the migration contains code that drops some constraints and adds some others, it makes me think that the constraint names have probably changed, or maybe you upgraded SQLAlchemy to a newer version that generates constraints with different names.
1
1
0
I am using Flask, Flask-SqlAlchemy and Flask-Migrate to manage my models. I just realized that in my latest database state, when I create a new migration file with python manage.py db migrate -m 'test migration', it will not create an empty migration file. Instead it tries to create and drop several unique key and foreign key constraints. Any ideas why it behaves like this?
Why Flask Migrate doesn't create an empty migration file?
1.2
1
0
1,587
39,747,645
2016-09-28T12:32:00.000
0
0
0
1
python,airflow
39,801,249
1
false
0
0
Could it be that you just need to restart the webserver and the scheduler? That happens when you change your code, like adding new tasks. Please post more details and some code.
1
1
0
My DAG has 3 tasks and we are using Celery executor as we have to trigger individual tasks from UI.We are able to execute the individual task from UI. The problem which we are facing currently, is that we are unable to execute all the steps of DAG from UI in one go, although we have set the task dependencies. We are able to execute the complete DAG from command line but is there any way to execute the same via UI ?
Airflow unable to execute all the dependent tasks in one go from UI
0
0
0
157
39,747,900
2016-09-28T12:42:00.000
5
0
0
0
python,numpy
39,747,938
2
false
0
0
Just numpy.transpose(U) or U.T.
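To make the suggestion above concrete: for a 3D array, a plain transpose reverses the axis order, which is exactly the (Nz, Ny, Nx) to (Nx, Ny, Nz) change asked for. A small sketch (the array sizes are chosen arbitrarily):

```python
import numpy as np

Nz, Ny, Nx = 2, 3, 4
U = np.arange(Nz * Ny * Nx).reshape(Nz, Ny, Nx)  # shape (Nz, Ny, Nx)

V = U.T                  # same as np.transpose(U): reverses the axes
print(V.shape)           # (4, 3, 2), i.e. (Nx, Ny, Nz)
assert V[1, 2, 0] == U[0, 2, 1]  # element at (z, y, x) moves to (x, y, z)
```

Note that U.T is a view, not a copy, so it is essentially free; if a contiguous copy is needed afterwards, np.ascontiguousarray(V) will make one.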
1
0
1
How to change the ARRAY U(Nz,Ny, Nx) to U(Nx,Ny, Nz) by using numpy? thanks
The efficient way of Array transformation by using numpy
0.462117
0
0
79
39,749,107
2016-09-28T13:34:00.000
0
0
0
0
python,django,mezzanine
47,832,443
1
false
1
0
Add a rich text page, call it Blog or however you want, then in meta data group, in the field URL add /blog/ or whichever is the url to the main blog app. Mezzanine will match the url with the page and will add the Page object to the rendering context, so you can use it in templates.
1
0
0
How do you point blogs to a menu item in Mezzanine? I am able to point my blogs to Home using urls.py but how about to page types like link and richtextpage?
How to point blog to menu item in Mezzanine?
0
0
0
59
39,751,570
2016-09-28T15:20:00.000
0
0
1
0
python,pycharm
39,753,232
2
false
0
0
Seems like a problem with one of my network servers. It is fixed after talking with an IT professional
1
0
0
I don't know if I can post this here or not, but a strange thing happened recently. I use PyCharm to run my Python code, and surprisingly, when I opened a piece of my code, it got deleted. The file is 0KB now, for some reason. I have been using this file for over a month, and this happened when I opened it: the file automatically got deleted from PyCharm and then became 0KB. When I tried to delete this file, I get the following Error 0xx800710FE: This file is currently not available for use on this computer
PyCharm - Python Strange behavior
0
0
0
76
39,752,700
2016-09-28T16:13:00.000
1
0
1
0
python,ipython,spyder,mplot3d
47,186,550
2
false
0
0
I initially faced the same issue: everything seemed to be alright, but I couldn't rotate the picture. After toggling between graphical and automatic in Tools > preferences > IPython console > Graphics > Graphics backend > Backend, I could rotate the image.
2
0
1
I am developing a Python program that involves displaying X-Y-Z Trajectories in 3D space. I'm using the Spyder IDE that naturally comes with Anaconda, and I've been running my scripts in IPython Consoles. So I've been able to generate the 3D plot successfully and use pyplot.show() to display it on the IPython Console. However, when displayed in IPython, only one angle of the graph is shown. And I've read that MPlot3D can be used to create interactive plots. Am I correct in believing that I should be able to rotate and zoom the 3D graph? Or does IPython and/or the Spyder IDE not support this feature? Should I work on rotating the plot image within the script? How do I interact with this plot?
MPlot3D Image Manipulation in IPython
0.099668
0
0
472
39,752,700
2016-09-28T16:13:00.000
4
0
1
0
python,ipython,spyder,mplot3d
46,331,190
2
false
0
0
Yes, you can rotate and interact with Mplot3d plots in Spyder, you just have to change the setting so that plots appear in a separate window, rather than in the IPython console. Just change the inline setting to automatic: Tools > preferences > IPython console > Graphics > Graphics backend > Backend: Automatic Then click Apply, close Spyder, and restart.
2
0
1
I am developing a Python program that involves displaying X-Y-Z Trajectories in 3D space. I'm using the Spyder IDE that naturally comes with Anaconda, and I've been running my scripts in IPython Consoles. So I've been able to generate the 3D plot successfully and use pyplot.show() to display it on the IPython Console. However, when displayed in IPython, only one angle of the graph is shown. And I've read that MPlot3D can be used to create interactive plots. Am I correct in believing that I should be able to rotate and zoom the 3D graph? Or does IPython and/or the Spyder IDE not support this feature? Should I work on rotating the plot image within the script? How do I interact with this plot?
MPlot3D Image Manipulation in IPython
0.379949
0
0
472
39,753,285
2016-09-28T16:45:00.000
0
1
0
1
python,django,git,heroku
39,754,555
1
false
1
0
Not sure but try: heroku run --app cghelper python bot.py &
1
0
0
I have created a bot for my website and I currently host it on heroku.com. I run it by executing the command heroku run --app cghelper python bot.py. This executes the command perfectly through CMD and runs that specific .py file in my GitHub repo. The issue is that when I close the CMD window, this stops bot.py. How can I get this to run automatically? Thanks
Automatically running app .py in Heroku
0
0
0
43
39,754,283
2016-09-28T17:41:00.000
2
0
0
0
python,django,python-3.x,pythonanywhere,django-1.9
39,775,664
2
true
1
0
We don't change the request timeout for individual users on PythonAnywhere. In the vast majority of cases, a request that takes 5 min (or even, really, 1 min) indicates that something is very wrong with the app.
2
2
0
I have a webpage made in Django that feeds data from a form to a script that takes quite a long time to run (1-5 minutes) and then returns a detailview with the results of that scripts. I have problem with getting a request timeout. Is there a way to increase time length before a timeout so that the script can finish? [I have a spinner to let users know that the page is loading].
Django: Request timeout for long-running script
1.2
0
0
3,180
39,754,283
2016-09-28T17:41:00.000
0
0
0
0
python,django,python-3.x,pythonanywhere,django-1.9
39,754,475
2
false
1
0
Yes, the timeout value can be adjusted in the web server configuration. Does anyone else but you use this page? If so, you'll have to educate them to be patient and not click the Stop or Reload buttons on their browser.
2
2
0
I have a webpage made in Django that feeds data from a form to a script that takes quite a long time to run (1-5 minutes) and then returns a detailview with the results of that scripts. I have problem with getting a request timeout. Is there a way to increase time length before a timeout so that the script can finish? [I have a spinner to let users know that the page is loading].
Django: Request timeout for long-running script
0
0
0
3,180
39,755,063
2016-09-28T18:28:00.000
1
0
1
1
python,environment-variables
39,755,064
2
false
0
0
I put '\\' at the end of the line to permit multiline values.
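Per the answer, a .env entry using backslash continuation would look like this (the variable name and values are hypothetical; each continued line ends in a backslash):

```
SECRET_VALUE=first-part\
second-part\
third-part
```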
1
2
0
I am trying to add a multiline value for an env var in .env so that my process, run by honcho, will have access to it. Bash uses a '\' to permit multilines. But this gives errors in honcho/python code. How to do this?
How do I add a multiline variable in a honcho .env file?
0.099668
0
0
1,091
39,756,822
2016-09-28T20:12:00.000
2
0
0
0
python,macos,tkinter,tkmessagebox
69,845,490
2
false
0
1
You can use icon='warning' instead of icon=tkMessageBox.WARNING I just tried that on Windows. Sorry I don't have OSX to test
1
4
0
tkMessageBox.askyesno('Title', 'Message', icon=tkMessageBox.WARNING) on OS X just gives me the rocket icon. I know there is some weirdness with OS X and tkMessageBox icons because tkMessageBox.showerror() just shows the rocket icon, but tkMessageBox.showwarning shows a yellow triangle (with a small rocket in the corner) Is this is a bug? Is there some workaround to get a warning triangle and Yes/No buttons without having to resort to making my own message box window from scratch?
Why can't I change the icon on a tkMessagebox.askyesno() on OS X?
0.197375
0
0
1,338
39,758,606
2016-09-28T22:25:00.000
2
0
1
0
python,pycharm
39,758,651
1
true
0
0
To import it as a module: from root.folder1.folder2 import script1. To import a function from your script: from root.folder1.folder2.script1 import NameOfTheFunction.
1
0
0
In my project, the file structure is as follows: root/folder1/folder2/script1.py root/folder1/folder2/script2.py I have a statement in script2.py that says "import script1", and Pycharm says no module is found. How do I fix this?
How do I import a module from within a Pycharm project?
1.2
0
0
97
39,759,271
2016-09-28T23:47:00.000
0
0
1
1
python,python-3.4,anaconda
41,580,566
1
true
0
0
update-alternatives will do the trick. Just remember to switch to Py3.4 when you need it, and to switch back after you have finished!
1
0
0
I have to keep using the system's default python3.4 on a centos server. I'm wondering if there is any way to configure system's python using anaconda's python packages:P
Configure system default python to use anaconda's packages
1.2
0
0
33
39,759,680
2016-09-29T00:44:00.000
0
0
1
0
python
39,778,818
2
false
0
0
I think I figured it out. Apparently SLES 11.4 does not include the development headers for numpy 1.8 in the default install from its SDK, and of course it doesn't offer matplotlib along with a bunch of other common Python packages. The Python packages from the SLES SDK are the system default and are located under /usr/lib64/python2.6/site-packages/, and it is under here that I see numpy version 1.8. So, using the YaST software manager, if you choose various Python packages this is where they are located. Up to this point, without having the PYTHONPATH environment variable set, I can launch python, type import numpy, and for the most part use it. But if I try to build matplotlib 0.99.1, it responds that it cannot find the header files for numpy version 1.8 -- so it knows numpy 1.8 is installed, but the development package needs to be installed. Assuming that by development headers they mean .h files: if I search under /usr/lib64/python2.6/site-packages, I have no .h files for anything! I just downloaded the source for numpy-1.8.0.tar.gz and easily did a python setup.py build followed by python setup.py install, and noticed it installed under /usr/local/lib64/python2.6/site-packages/. Without the PYTHONPATH environment variable set, if I try to build matplotlib I still get the error about header files not found; but in my bash shell, as root, after I do export PYTHONPATH=/usr/local/lib64/python2.6/site-packages, I can successfully do the build and install of matplotlib 0.99.1, which also installs to /usr/local/lib64/python2.6/site-packages. Notes: I also just did a successful build and install of numpy-1.11, and that got placed under /usr/local/lib64/python2.6/site-packages; however, when I then try to build matplotlib 0.99.1 with PYTHONPATH set, it reports outright that numpy is not installed and that version 1.1 or greater is needed. So here it seems this older version of matplotlib needs a certain version range of numpy, with which the latest numpy 1.11 is not compatible.
And the only other environment variable I have which is set by the system is PYTHONSTARTUP, which points to the file /etc/pythonstart.
1
0
1
My system is SLES 11.4, which has Python 2.6.9. I know little about Python and have not found where to download RPMs that give me the needed Python packages. I acquired numpy 1.4 and 1.11 and I believe did a successful python setup.py build followed by python setup.py install on numpy. Going from memory, I think this installed under /usr/local/lib64/python2.6/... Next I tried building and installing matplotlib (which requires numpy), and when I do python setup.py build it politely responds that it cannot find numpy. So my questions are: do I need to set some kind of Python-related environment variable, something along the lines of LD_LIBRARY_PATH or PATH? As I get more involved with using Python and installing packages that I have to build from source, I need to understand where things currently are per the default install of Python, where new things should go, and where the core settings for Python are, to know how and where it recognizes new packages.
manually building installing python packages in linux so they are recognized
0
0
0
44
39,760,733
2016-09-29T03:04:00.000
6
0
0
1
python,google-cloud-dataflow,dataflow,apache-beam
39,776,373
2
true
1
0
There is not currently built-in value sorting in Beam (in either Python or Java). Right now, the best option is to sort the values yourself in a DoFn like you mentioned.
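Since there is no built-in sort, the DoFn approach mentioned above simply sorts each group's values in plain Python after the GroupByKey. A sketch of the per-group logic, shown here without the Beam pipeline scaffolding (the timestamped-tuple layout and key name are assumptions):

```python
def sort_group_by_timestamp(element):
    """element: (key, iterable of (timestamp, value)) tuples, as a
    GroupByKey would produce; returns the values in timestamp order."""
    key, timestamped_values = element
    ordered = [v for _, v in sorted(timestamped_values, key=lambda tv: tv[0])]
    return key, ordered

group = ("sensor-1", [(3, "c"), (1, "a"), (2, "b")])
print(sort_group_by_timestamp(group))  # ('sensor-1', ['a', 'b', 'c'])
```

Inside a real pipeline this body would live in a beam.DoFn.process (or a beam.Map) applied right after the GroupByKey; note that each group's values must fit in memory on a single worker.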
1
3
0
I noticed that java apache beam has class groupby.sortbytimestamp does python have that feature implemented yet? If not what would be the way to sort elements in a window? I figure I could sort the entire window in a DoFn, but I would like to know if there is a better way.
How can I order elements in a window in python apache beam?
1.2
0
0
1,665
39,767,160
2016-09-29T09:57:00.000
0
0
0
0
python,flask,request
39,767,391
1
false
1
0
Besides performance, you want an outward-facing service (like a webserver) to be as secure as possible. The Flask development server is not developed with high security as a goal, so there are probably security-relevant bugs.
1
0
0
I implemented a REST API using flask and I am wondering what is the limit of the dev. server? I mean why investing time and money to deploy the api on a prod server while the dev. server can support the traffic. To avoid marking the question as duplicate, I am not asking for security risks, I want to know what are the limits of the flask dev. server in term of request/seconds. Thanks in advance.
Flask dev server limits
0
0
0
412
39,768,925
2016-09-29T11:23:00.000
2
0
0
0
python,tkinter,raspberry-pi,touchscreen,raspberry-pi3
39,770,561
2
true
0
1
There is always a widget with the keyboard focus. You can query that with the focus_get method of the root window. It will return whatever widget has keyboard focus. That is the window that should receive input from your keypad.
1
1
0
I'm making a program on the Raspberry Pi with a touchscreen display. I'm using Python Tkinter that has two entry widgets and one on screen keypad. I want to use the same keypad for entering data on both entry widgets. Can anyone tell me how can i check if an entry is selected? Similar like clicking on the Entry using the mouse and the cursor appears. How can I know that in Python Tkinter? Thank you.
Check if Entry widget is selected
1.2
0
0
1,831
39,771,998
2016-09-29T13:42:00.000
0
0
1
0
python,linux,pycharm
39,772,172
2
false
0
0
Click on the top-right tab with your project name, then go Edit Configurations and there you can change the interpreter.
2
0
0
I haven't been able to find anything and I am not sure if this is the place I should be asking... But I want to include the path to my interpreter in every new project I create. The reason being is that I develop locally and sync my files to a linux server. It is annoying having to manually type #! /users/w/x/y/z/bin/python every time I create a new project. Also would be nice to include certain imports I use 90% of the time. I got to thinking, in the program I produce music with you can set a default project file. Meaning, when you click new project it is set up how you have configured (include certain virtual instruments, effects, etc). Is it possible to do this or something similar with IDE, and more specifically, Pycharm?
Is it possible to include interpreter path (or set any default code) when I create new python file in Pycharm?
0
0
0
59
39,771,998
2016-09-29T13:42:00.000
1
0
1
0
python,linux,pycharm
39,772,630
2
true
0
0
You should open File in the main menu and click Default Settings, collapse Editor, then click File and Code Templates. In the Files tab, click the + sign to create a new template; give the new template a name and extension, and in the editor box put your template content -- in your case #! /users/w/x/y/z/bin/python -- then Apply and OK. After that, every time you create a new file, select that template to include the default lines you want. You can make any number of templates.
2
0
0
I haven't been able to find anything and I am not sure if this is the place I should be asking... But I want to include the path to my interpreter in every new project I create. The reason being is that I develop locally and sync my files to a linux server. It is annoying having to manually type #! /users/w/x/y/z/bin/python every time I create a new project. Also would be nice to include certain imports I use 90% of the time. I got to thinking, in the program I produce music with you can set a default project file. Meaning, when you click new project it is set up how you have configured (include certain virtual instruments, effects, etc). Is it possible to do this or something similar with IDE, and more specifically, Pycharm?
Is it possible to include interpreter path (or set any default code) when I create new python file in Pycharm?
1.2
0
0
59
39,772,952
2016-09-29T14:22:00.000
1
1
1
0
python,automation,camera-calibration
41,273,193
1
true
0
0
Look into the In-Sight Native Mode commands. ASCII native mode commands are sent to the camera over Ethernet on port 23 (telnet). These commands are documented in the In-Sight Explorer help file which can be accessed from the In-Sight Explorer Help menu. Look under the 'Communications Reference -> Native Mode Commands' section. The command RB (Read BMP) will send the current image from the camera in ASCII hexadecimal format. Using In-Sight Explorer, you can set the cameras trigger mode to 'Continuous' to continuously acquire images, or you can set it to 'Manual' and trigger the camera via a native mode command. The command to trigger the camera is SE8 (Set event 8). The camera must be online for either trigger method to work (Sensor menu -> Online).
1
0
0
I am new to Cognex's In-Sight Explorer. I am using it for test automation and I want to know whether it is possible to capture an image using a script [preference is Python, but other scripting methods are welcome]. I have my test cases [TC] running using Python scripts; in case a TC fails, I want to capture the camera image at run time and store it on my host PC. I don't want to use any webcam or anything else. I want to use my existing system of Cognex's camera and In-Sight Explorer.
Automate Image capturing using Cognex's In-sight Explorer
1.2
0
0
1,101
39,773,328
2016-09-29T14:39:00.000
0
0
1
0
python,merge,spss
39,777,762
2
false
0
0
If you just match files regardless of what variables are missing, only the variables that exist in the table and do not exist in the file will be added to the file. Note though you'll have trouble if you have text vars in both files with identical names but different widths.
1
3
0
I have an SPSS file that I am removing unwanted variables from, but want to bring in variables from elsewhere if they don't exist. So, I am looking for some Python code to go into my syntax to say - keep all the variables from a list and if any of these don't exist in the first file, then merge them in from the second file. (Python rookie here..) Thanks!
Merge SPSS variables if they don't exist in the original file, using Python
0
0
0
132
39,773,544
2016-09-29T14:49:00.000
0
0
0
0
python,openpyxl
39,774,351
2
false
0
0
I'm not sure what you mean by "text box". In theory you can add pretty much anything covered by the DrawingML specification to a chart but the practice may be slightly different. However, there is definitely no built-in API for this so you'd have to start by creating a sample file and working backwards from it.
1
2
0
I'm trying to add a text box to a chart I've generated with openpyxl, but can't find documentation or examples showing how to do so. Does openpyxl support it?
Adding a text box to an excel chart using openpyxl
0
1
0
3,047
39,776,791
2016-09-29T17:43:00.000
-1
0
0
0
python,flask
41,841,741
2
false
1
0
I was having a similar issue and deleting the .pyc files solved it for me.
1
3
0
A little background: I've been working on this project for about six months now and it's been running on Flask the whole time. Everything has been fine, multiple versions of the backend have been deployed live to support an app that's been in production for months now. The development cycle involves writing everything locally and using Flask-Script's runserver command to test everything locally on localhost:8080 before deploying to a dev server and then finally to the live server. The Problem: The other day my local flask instance, which runs on localhost:8080 apparently stopped respecting my local files. I tried adding a new view (with a new template) and I got a 404 error when trying to view it in my browser. I then tried making some test changes to one of the existing pages by adding a few extra words to the title. I restarted flask and none of those changes appeared. I then went as far as deleting the entire views.py file. After restarting flask again, much to my dismay, I could still view the pages that were there originally (i.e. before this behavior started). Finally, I made some changes to the manage.py file, which is where I put all of the Flask-Script commands, and they weren't recognized either. It's as if flask started reading from a cached version of the filesystem that won't update (which very well might be the case but I have no idea why it started doing this or how to fix the issue). FYI: Browser caching shouldn't be an issue b/c I have the dev tools open with caching disabled. Plus the fact that changes to manage.py aren't being noticed shouldn't have anything to do with the browser.
Flask doesn't seem to recognize file changes
-0.099668
0
0
1,804
39,778,909
2016-09-29T19:55:00.000
0
0
1
0
python,url,encoding
39,779,578
2
false
0
0
Unfortunately this depends heavily on the encoding of the site you parsed, as well as your local IO encoding. I'm not really sure if you can translate it after parsing, and if it's really worth the work. If you have the chance to parse it again, you can try using Python's decode() function, like: text.decode('utf8') Besides that, check that the encoding used above is the same as in your local environment. This is especially important on Windows environments, since they use cp1252 as their standard encoding. On Mac and Linux: export PYTHONIOENCODING=utf8 On Windows: set PYTHONIOENCODING=utf8 It's not much, but I hope it helps.
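If the \u00e4 from the question is a literal backslash escape inside the string (rather than mis-decoded bytes), Python 3's 'unicode_escape' codec can interpret it. A small sketch, assuming the URL is otherwise plain ASCII:

```python
# The scraped URL contains a literal backslash escape sequence.
raw = r"http://foo.com/h\u00e4ppo"

# 'unicode_escape' turns the six characters \u00e4 into the real character ä.
fixed = raw.encode('ascii').decode('unicode_escape')
assert fixed == "http://foo.com/häppo"
```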
1
1
0
I wrote a little Python script that parses a website. I got an "ä" character in the form of \u00e4 in a URL from a link like http://foo.com/h\u00e4ppo, and I need http://foo.com/häppo.
UTF8 Character in URL String
0
0
1
552
39,779,412
2016-09-29T20:27:00.000
0
0
0
0
python,unix
39,813,792
2
true
0
0
I have found the solution. It might be because I am using Spyder from Anaconda. As long as I use "\\" instead of "\", Python can recognize the location.
1
0
1
I am trying to write the file to my company's project folder, which is a Unix system, and the location is /department/projects/data/. So I used the following code: df.to_csv("/department/projects/data/Test.txt", sep='\t', header = 0) The error message shows it cannot find the location. How do I specify the file location on Unix using Python?
how to export data to unix system location using python
1.2
0
0
43
39,779,744
2016-09-29T20:48:00.000
3
0
1
0
python,anaconda,jupyter-notebook
54,257,512
13
false
0
0
It could be as simple as opening a new Terminal window.
6
41
0
I have installed Anaconda on my Mac laptop and tried to run jupyter notebook, but I get the error jupyter: command not found.
After installing anaconda - command not found: jupyter
0.046121
0
0
82,245
39,779,744
2016-09-29T20:48:00.000
0
0
1
0
python,anaconda,jupyter-notebook
60,017,782
13
false
0
0
Open a new terminal and try again; it worked for me. This is written somewhere in the installation guide: "For this change to become active, you have to open a new terminal."
6
41
0
I have installed Anaconda on my Mac laptop and tried to run jupyter notebook, but I get the error jupyter: command not found.
After installing anaconda - command not found: jupyter
0
0
0
82,245
39,779,744
2016-09-29T20:48:00.000
1
0
1
0
python,anaconda,jupyter-notebook
61,518,568
13
false
0
0
If your issue is happening after running conda install jupyter, you can use conda init zsh to configure the ~/.zshrc automatically, so that when you just type jupyter notebook in the terminal, it can find it.
6
41
0
I have installed Anaconda on my Mac laptop and tried to run jupyter notebook, but I get the error jupyter: command not found.
After installing anaconda - command not found: jupyter
0.015383
0
0
82,245
39,779,744
2016-09-29T20:48:00.000
1
0
1
0
python,anaconda,jupyter-notebook
65,643,726
13
false
0
0
For Windows: after you can successfully run conda from PowerShell, you can install jupyter with the conda install jupyter command. Then re-open PowerShell and run conda run jupyter notebook.
6
41
0
I have installed Anaconda on my Mac laptop and tried to run jupyter notebook, but I get the error jupyter: command not found.
After installing anaconda - command not found: jupyter
0.015383
0
0
82,245
39,779,744
2016-09-29T20:48:00.000
7
0
1
0
python,anaconda,jupyter-notebook
50,165,697
13
false
0
0
@ffledgling's answer did not work for me. What did solve it was to install jupyter using conda: conda install jupyter That did the trick. Right after the installation finished I went with jupyter notebook as my next command and saw the server setup and the browser page opening.
6
41
0
I have installed Anaconda on my Mac laptop and tried to run jupyter notebook, but I get the error jupyter: command not found.
After installing anaconda - command not found: jupyter
1
0
0
82,245
39,779,744
2016-09-29T20:48:00.000
0
0
1
0
python,anaconda,jupyter-notebook
67,445,276
13
false
0
0
If it's a fresh installation, close the terminal and re-open it. You don't have to install jupyter explicitly; Anaconda does it for you. Ensure the environment is activated first. If you selected Yes when prompted "Do you wish the installer to initialize Anaconda3 by running conda init? [yes|no]" during installation, activate the environment with conda first: mycomp@55:~$ conda activate (base) mycomp@55:~$ jupyter notebook
6
41
0
I have installed Anaconda on my Mac laptop and tried to run jupyter notebook, but I get the error jupyter: command not found.
After installing anaconda - command not found: jupyter
0
0
0
82,245
39,780,715
2016-09-29T22:02:00.000
2
0
0
0
python,html,django,pdf-generation,weasyprint
39,792,862
1
false
1
0
PDF is not built to be responsive, it is built to display the same no matter where it is viewed. As @alxs pointed out in a comment, there are a few features that PDF viewing applications have added to simulate PDFs being responsive. Acrobat's Reflow feature is the best example of this that I am aware of and even it struggles with most PDFs that users come across in the wild. One of the components (if not the only one) that matters, is that in order for a PDF to be useful in Acrobat's Reflow mode is to make sure that the PDFs you are creating contain structure information, this would be a Tagged PDF. Tagged PDF contains content that has been marked, similar to HTML tags, where text that makes up a paragraph is tagged in the PDF as being a paragraph. A number of PDF tools (creation or viewing) do not interpret the structure of a PDF though.
1
1
0
How can I generate a responsive PDF with Django? I want to generate a PDF with Django, but I need it to be responsive; that is to say, the text of the PDF has to adapt so that no empty space is left. For example, for an agreement whose text changes, I need the text to adapt to the space on the paper sheet.
how to generate a responsive PDF with Django?
0.379949
0
0
252
39,784,418
2016-09-30T05:39:00.000
1
0
1
1
python,linux,shell,anaconda,spyder
39,787,575
2
true
0
0
You could use virtualenv 1) create a virtual env using the python version you need for anaconda virtualenv -p /usr/bin/pythonX.X ~/my_virtual_env 2) source ~/my_virtual_env/bin/activate 3) Run anaconda, then deactivate
1
3
0
I want to install Anaconda locally in my home directory ~/.Anaconda3 (Arch Linux) without setting the path in the shell, because I like to keep my system Python as the default. So I'd like to launch Spyder (or other Anaconda apps) as an app isolated from the system binaries. I mean, when I launch for example .Anaconda3/bin/spyder it launches Spyder and this app uses Anaconda's binaries, but when I use python ThisScript.py in my shell it uses the system Python installed from packages (e.g. /bin/python). I managed to update Anaconda using .Anaconda3/bin/conda update --all in my shell without setting the Anaconda binaries path (.Anaconda3/bin/), but this way running some apps like Spyder obviously doesn't work.
How to isolate Anaconda from system python without set the shell path
1.2
0
0
766
39,786,630
2016-09-30T08:07:00.000
2
1
1
0
python,ftp,pip
40,282,271
1
false
0
0
As far as I know, it is not possible to install packages through an FTP server. Pip works fine using an HTTP server or by downloading it locally. To use it with an HTTP server, just do pip install <package-name> -i <url-of-pip-packages>.
1
3
0
We (our company) have a webserver which is fully hosted, I can only access it through FTP and put files on it that way. It has a couple of Python scripts installed in a /cgi-bin/ folder, and I've been asked to upgrade the functionality of one of them (it uses Apache and regular old CGI, if that helps). Unfortunately, to do this I need to use a package from pip, one with several dependencies. Is there any way to install packages from pip to a server I only have FTP access to? Can I maybe just copy all the folders containing the pip package and its dependencies to some location on the server? I imagine putting it in /cgi-bin/ would be unsafe, but I do have access to non-public_html folders. How would I have to configure my python scripts to be able to import these, if it is even possible to do it this way? Any advice would be greatly appreciated.
Installing pip packages on a server where I have only ftp access?
0.379949
0
0
1,529
39,797,329
2016-09-30T17:52:00.000
19
0
1
0
python,groovy
39,797,926
1
false
0
0
The answer is: a[2..3]. Another example: if you wanted [1,2,3,4], it would be a[1..4].
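One difference worth noting: Groovy ranges are inclusive at both ends, while Python slices exclude the end index. In Python terms:

```python
a = [0, 1, 2, 3, 4, 5]

# Python's a[2:4] stops before index 4...
assert a[2:4] == [2, 3]

# ...so the Groovy equivalent is a[2..3], and Python's a[1:5]
# corresponds to Groovy's a[1..4].
assert a[1:5] == [1, 2, 3, 4]
```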
1
11
0
Given the following list: a = [0,1,2,3,4,5] In python I can do this: a[2:4] which will get me [2,3] Given that same list in groovy, is there a similar slicing mechanism I can use?
Python like list slicing in Groovy
1
0
0
11,895
39,800,075
2016-09-30T21:04:00.000
0
0
0
0
python,postgresql
41,320,600
5
false
0
0
plpython3.dll in the official package is built against Python 3.3, not Python 3.4. What it expects is python33.dll in the system32 folder. You need to install Python 3.3 for your system. Since py33 has been phased out, you may soon get frustrated due to the lack of pre-built binary packages; lxml, pyzmq etc. all need to be built from source. If you need any binary module, make sure you have a correctly set up compiler.
3
3
0
I have installed PostgreSQL Server 9.6.0 and Python 3.4.2 on Windows 2012 R2 Server. I copied plpython3.dll to C:/Program Files/PostgreSQL/9.6/lib/ Then in PostgreSQL I try running this command: CREATE EXTENSION plpython3u; And I receive this message: ERROR: could not load library "C:/Program Files/PostgreSQL/9.6/lib/plpython3.dll": The specified module could not be found. Under this folder: C:\Program Files\PostgreSQL\9.6\share\extension there are plpython3u files. How can I get PostgreSQL to recognize this Python 3 extension? Thanks!
Error during: CREATE EXTENSION plpython3u; on PostgreSQL 9.6.0
0
1
0
5,532
39,800,075
2016-09-30T21:04:00.000
1
0
0
0
python,postgresql
54,007,128
5
false
0
0
I faced exactly the same situation with Postgres 9.6 on Windows 10. PL/Python3U would not get through. I worked around it: installed the 64-bit Windows 10 version of Python 3.4, copied Python34.dll to c:\windows\system32 as Python33.dll, and it worked.
3
3
0
I have installed PostgreSQL Server 9.6.0 and Python 3.4.2 on Windows 2012 R2 Server. I copied plpython3.dll to C:/Program Files/PostgreSQL/9.6/lib/ Then in PostgreSQL I try running this command: CREATE EXTENSION plpython3u; And I receive this message: ERROR: could not load library "C:/Program Files/PostgreSQL/9.6/lib/plpython3.dll": The specified module could not be found. Under this folder: C:\Program Files\PostgreSQL\9.6\share\extension there are plpython3u files. How can I get PostgreSQL to recognize this Python 3 extension? Thanks!
Error during: CREATE EXTENSION plpython3u; on PostgreSQL 9.6.0
0.039979
1
0
5,532
39,800,075
2016-09-30T21:04:00.000
6
0
0
0
python,postgresql
46,281,240
5
false
0
0
Copy the python34.dll file to c:\windows\system32 and name the copy python33.dll. The CREATE LANGUAGE plpython3u command should then work without a problem.
3
3
0
I have installed PostgreSQL Server 9.6.0 and Python 3.4.2 on Windows 2012 R2 Server. I copied plpython3.dll to C:/Program Files/PostgreSQL/9.6/lib/ Then in PostgreSQL I try running this command: CREATE EXTENSION plpython3u; And I receive this message: ERROR: could not load library "C:/Program Files/PostgreSQL/9.6/lib/plpython3.dll": The specified module could not be found. Under this folder: C:\Program Files\PostgreSQL\9.6\share\extension there are plpython3u files. How can I get PostgreSQL to recognize this Python 3 extension? Thanks!
Error during: CREATE EXTENSION plpython3u; on PostgreSQL 9.6.0
1
1
0
5,532
39,800,665
2016-09-30T21:59:00.000
4
0
1
0
python,visual-studio-code
64,365,373
3
false
0
0
For VS Code and Python, select the block of code. For commenting press CTRL + K + C; for uncommenting press CTRL + K + U.
2
2
0
My Visual Studio Code comments Python code with ''' instead of using # when I try to comment a block of code with the key combination Ctrl + Shift + A. I have Ubuntu 16.04.
comment python code in visual studio code
0.26052
0
0
10,414
39,800,665
2016-09-30T21:59:00.000
2
0
1
0
python,visual-studio-code
55,812,681
3
false
0
0
Under a Windows environment this works for me: select the block of text and press CTRL + /.
2
2
0
My Visual Studio Code comments Python code with ''' instead of using # when I try to comment a block of code with the key combination Ctrl + Shift + A. I have Ubuntu 16.04.
comment python code in visual studio code
0.132549
0
0
10,414
39,801,748
2016-10-01T00:19:00.000
2
0
1
0
python
39,801,759
3
false
0
0
Examine the text preceding your desired position and count the number of \n characters.
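A minimal sketch of that counting approach — str.count accepts start/end offsets, so no copy of the prefix is made:

```python
def line_number(text, pos):
    """Return the 1-based line number of the character at offset `pos`."""
    return text.count('\n', 0, pos) + 1

sample = "first line\nsecond line\nthird line\n"
assert line_number(sample, 0) == 1    # start of file
assert line_number(sample, 12) == 2   # inside "second line"
```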
1
0
0
If I have a text that I've read into memory by using open('myfile.txt').read(), and if I know a certain location in this file, say, at character 10524, how can I find the line number of that location?
In Python, how can I get a line number corresponding to a given character location?
0.132549
0
0
156
39,805,033
2016-10-01T09:32:00.000
1
0
0
0
javascript,python,html,django,image-compression
39,805,099
2
false
1
0
I advise you to compress in the browser in order to: avoid loading the server with CPU- and RAM-heavy calculations (as numerous as the number of clients); reduce the bandwidth needed when transferring the image over the network.
1
0
0
I have a Django project and i allow users to upload images. I don't want to limit image upload size for users. But want to compress the image after they select and store them. I want to understand which is better: Compress using java-script on the browser. Back end server using python libraries. Also it will be helpful if links can be provided to implement the better approach.
Which is the better location to compress images? In the browser or on the server?
0.099668
0
0
63
39,805,237
2016-10-01T09:56:00.000
1
0
0
0
python,postgresql,web-scraping,scrapy
39,805,342
1
false
1
0
For example: I have a site with 100 pages and 10 records each. So I scrape page 1, and then go to page 2. But on fast growing sites, at the time I do the request for page 2, there might be 10 new records, so I would get the same items again. Nevertheless I would get all items in the end. BUT next time scraping this site, how would I know where to stop? I can't stop at the first record I already have in my database, because this might be suddenly on the first page, because there a new reply was made. Usually each record has a unique link (permalink) e.g. the above question can be accessed by just entering https://stackoverflow.com/questions/39805237/ & ignoring the text beyond that. You'll have to store the unique URL for each record and when you scrape next time, ignore the ones that you already have. If you take the example of tag python on Stackoverflow, you can view the questions here : https://stackoverflow.com/questions/tagged/python but the sorting order can't be relied upon for ensuring unique entries. One way to scrape would be to sort by newest questions and keep ignoring duplicate ones by their URL. You can have an algorithm that scrapes first 'n' pages every 'x' minutes until it hits an existing record. The whole flow is a bit site specific, but as you scrape more sites, your algorithm will become more generic and robust to handle edge cases and new sites. Another approach is to not run scrapy yourself, but use a distributed spider service. They generally have multiple IPs and can spider large sites within minutes. Just make sure you respect the site's robots.txt file and don't accidentally DDoS them.
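The dedupe-by-permalink idea above can be sketched like this (in production the `seen` set would be a UNIQUE URL column in PostgreSQL, and the crawl would stop once a whole page yields nothing new):

```python
seen = set()

def filter_new(records):
    """Keep only records whose permalink we have not stored yet."""
    fresh = [r for r in records if r['url'] not in seen]
    seen.update(r['url'] for r in records)
    return fresh

page1 = [{'url': '/q/1'}, {'url': '/q/2'}]
page2 = [{'url': '/q/2'}, {'url': '/q/3'}]  # /q/2 drifted onto page 2

assert [r['url'] for r in filter_new(page1)] == ['/q/1', '/q/2']
assert [r['url'] for r in filter_new(page2)] == ['/q/3']
```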
1
0
0
I want to scrape a lot (a few hundred) of sites, which are basically like bulletin boards. Some of these are very large (up to 1.5 million) and also growing very quickly. What I want to achieve is: scrape all the existing entries scrape all the new entries near real-time (ideally around 1 hour intervals or less) For this we are using scrapy and save the items in a postresql database. The problem right now is, how can I make sure I got all the records without scraping the complete site every time? (Which would not be very agressive traffic-wise, but also not possible to complete within 1 hour.) For example: I have a site with 100 pages and 10 records each. So I scrape page 1, and then go to page 2. But on fast growing sites, at the time I do the request for page 2, there might be 10 new records, so I would get the same items again. Nevertheless I would get all items in the end. BUT next time scraping this site, how would I know where to stop? I can't stop at the first record I already have in my database, because this might be suddenly on the first page, because there a new reply was made. I am not sure if I got my point accross, but tl;dr: How to fetch fast growing BBS in an incremental way? So with getting all the records, but only fetching new records each time. I looked at scrapy's resume function and also at scrapinghubs deltafetch middleware, but I don't know if (and how) they can help to overcome this problem.
How to go about incremental scraping large sites near-realtime
0.197375
0
1
265
39,805,675
2016-10-01T10:46:00.000
-1
1
0
0
python,nltk
39,810,288
4
true
0
0
The problem probably arises because you don't have a default directory created for your nltk downloads. If you are on a Windows platform, all you need to do is create a directory named "nltk_data" in any of your root directories and grant write permissions to that directory. The Natural Language Toolkit initially searches for a destination named "nltk_data" in all of the root directories. For instance: create a folder in your C:\ drive named "nltk_data". After making sure everything is done fine, execute your script to get rid of this error. Hope this helps. Regards.
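On any platform, an alternative to creating a root-level folder is to point NLTK at an explicit, writable directory via the NLTK_DATA environment variable before importing it, which is handy in restricted CGI environments. A sketch (the temp-dir location is just an illustration):

```python
import os
import tempfile

# Pick a writable directory and announce it before `import nltk`;
# NLTK consults the NLTK_DATA environment variable when building its
# search path for data and downloads.
nltk_dir = os.path.join(tempfile.gettempdir(), 'nltk_data')
os.makedirs(nltk_dir, exist_ok=True)
os.environ['NLTK_DATA'] = nltk_dir

assert os.path.isdir(nltk_dir)
# import nltk  # would now see NLTK_DATA (left commented in case nltk is absent)
```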
1
1
0
I have a problem with importing nltk. I configured Apache and ran some sample Python code; it worked well in the browser. The URL is: /localhost/cgi-bin/test.py. When I import nltk in test.py it's not running. The execution does not continue after the "import nltk" line, and it gives me the error ValueError: Could not find a default download directory But when I run it in the command prompt it works perfectly. How do I remove this error?
ValueError: Could not find a default download directory of nltk
1.2
0
0
1,011
39,807,281
2016-10-01T13:35:00.000
1
0
0
0
python,selenium,selenium-chromedriver
39,807,531
1
false
0
0
I don't think hiding the address bar and other GUI elements will have any effect. I would like to suggest using PhantomJS, a headless browser without a GUI at all. This will certainly speed up your tests.
1
0
0
So I'm currently using chromedriver for Selenium with Python. Responses are quite slow, so I'm trying to reduce how much chromedriver loads. Is there any way I can remove the address bar, toolbar and most of the GUI from Chrome itself using Chrome arguments?
Python Selenium CHROMEDRIVER
0.197375
0
1
206
39,811,929
2016-10-01T22:10:00.000
1
0
1
0
python,anaconda,theano,conda
66,140,271
5
false
0
0
The problem is that in the code editor you are using, you are running the default interpreter. Based on your code editor, change the python interpreter to the conda interpreter and it will work.
2
21
0
I tried to install Theano with Anaconda. It works, but when I enter python -i, import theano shows No module named 'theano'. Do I need to switch to another Python interpreter, and how? Also, for the packages installed by conda, if I don't install them a second time, can Python find them? How is Python related to Anaconda's Python? Thanks!!!
Package installed by Conda, Python cannot find it
0.039979
0
0
48,222
39,811,929
2016-10-01T22:10:00.000
0
0
1
0
python,anaconda,theano,conda
68,217,464
5
false
0
0
In my workstation, I was able to solve the No module named <module name> error in two different ways. First method, I solved this temporarily by: (1) Open a Terminal (2) $ conda activate <Conda environment name> (3) $ export PYTHONPATH=/home/<user name>/anaconda3/envs/<Conda environment name>/lib/<Python package version>/site-packages:$PYTHONPATH It is a temporary solution. Whenever you run your virtual environment, you have to do this. My runtime environment: OS: Ubuntu 18.04, Conda version: 4.8.2, Conda-build version: 3.18.11, Python version 3.7.6.final.0 Second method, I removed the alias python=/usr/bin/python3.6 line in the bashrc file. Somehow this line blocks using Python tools installed in an Anaconda virtual environment if the Python version in the virtual environment is different.
2
21
0
I tried to install Theano with Anaconda. It works, but when I enter python -i, import theano shows No module named 'theano'. Do I need to switch to another Python interpreter, and how? Also, for the packages installed by conda, if I don't install them a second time, can Python find them? How is Python related to Anaconda's Python? Thanks!!!
Package installed by Conda, Python cannot find it
0
0
0
48,222
39,815,633
2016-10-02T09:30:00.000
2
0
0
0
python,sockets
39,815,976
1
true
0
0
'localhost' (or '127.0.0.1') is used to connect with a program on the same computer - i.e. database viewer <-> local database server, Doom client <-> local Doom server. This way you don't have to write a different method to connect to a local server. A computer can have more than one network card (NIC) and every NIC has its own IP address. You can use this IP in a program and then the program will use only this one NIC to receive requests/connections. This way you may have a server which receives requests only from the LAN but not from the Internet - it is very popular for databases used by web servers. An empty string means '0.0.0.0', which means that the program will receive requests from all NICs.
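A quick demonstration of the empty string meaning "all interfaces" — bind to '' and ask the socket what address it actually got:

```python
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(('', 0))               # '' = INADDR_ANY; port 0 = let the OS pick one
host, port = srv.getsockname()

assert host == '0.0.0.0'        # the wildcard address: every NIC
assert port > 0                 # an ephemeral port was assigned
srv.close()
```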
1
2
0
I'm using Python. I don't understand the purpose of the empty string in the IP to connect to, if it's not to connect between two computers behind the same LAN router. My knowledge of networking is close to zero, so when I read something on the internet like this: the empty string represents INADDR_ANY, and the string '<broadcast>' represents INADDR_BROADCAST So if you please, explain to me, like you would explain to a baby that knows nothing - what is the purpose of any of the following in the IP position of a socket object: broadcast '' localhost And if there are more, I will be glad to know about them too. Thanks.
I have get really confused in IP types with sockets (empty string, 'local host', etc...)
1.2
0
1
1,707
39,817,545
2016-10-02T13:33:00.000
1
0
0
0
python,sorting,plot,bar-chart,seaborn
39,817,575
1
true
0
0
Did you save the changes returned by sort_values? If not, you probably have to add the inplace keyword: mydf.sort_values(['myValueField'], ascending=False, inplace=True)
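A quick check of the difference (toy data; column name taken from the question):

```python
import pandas as pd

mydf = pd.DataFrame({'myValueField': [2, 5, 1]})

# Without inplace=True, sort_values returns a sorted copy; mydf is untouched:
_ = mydf.sort_values(['myValueField'], ascending=False)
assert mydf['myValueField'].tolist() == [2, 5, 1]

# With inplace=True the frame itself is reordered, so seaborn sees the order:
mydf.sort_values(['myValueField'], ascending=False, inplace=True)
assert mydf['myValueField'].tolist() == [5, 2, 1]
```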
1
1
1
I want to use seaborn to perform a sns.barplot where the values are ordered e.g. in ascending order. In case the order parameter of seaborn is set the plot seems to duplicate the labels for all non-NaN labels. Trying to pre-sort the values like mydf.sort_values(['myValueField'], ascending=False) does not change the result as seaborn does not seem to interpret it.
Python sorted plot
1.2
0
0
144
39,819,220
2016-10-02T16:38:00.000
0
0
0
1
python,distutils,macos-sierra
39,820,702
1
true
0
0
I have to pass cc -F /Library/Frameworks for clang 7.2.0 and 8.0.0. Then it can find the headers.
1
0
0
I've been working on an extension module for Python but in OSX Sierra it no longer finds headers belonging to the frameworks I'm linking to. It always found them before without any special effort. Has something changed lately regarding include paths in this tool chain?
Does Python's distutils set include paths for frameworks (osx) when compiling extensions?
1.2
0
0
34
39,826,735
2016-10-03T07:48:00.000
0
0
1
1
python,homebrew,anaconda,miniconda,homebrew-cask
39,827,107
1
false
0
0
Anaconda comes with python for you but do not remove the original python that comes with the system -- many of the operating system's libs depend on it. Anaconda manages its python executable and packages in its own (conda) directory. It changes the system path so the python inside the conda directory is the one used when you access python.
1
0
0
With python3 previously installed via homebrew on macOS, I just downloaded miniconda (via homebrew cask), which brought in another full python setup, I believe. Is it possible to install anaconda/miniconda without reinstalling python? And, if so, would that be a bad idea?
Using an existing python3 install with anaconda/miniconda
0
0
0
351
39,830,272
2016-10-03T11:14:00.000
0
0
1
1
windows,python-2.7,windows-7
39,831,155
1
false
0
0
More explanation is needed. For example: where did you download the Python package binary from? What was the installation path when you installed it? What was the Python 2.6.5 installation path? Is the old environment variable still present?
1
1
0
I am using the 64-bit Windows 7 version. I recently installed Python 2.7 and was able to see the Python27 folder inside the C drive. I even updated the environment variable to use C:\Python27 and C:\Python27\Scripts. python --version Python 2.6.5 which python /usr/bin/python How can I update the system to use the Python 2.7 version?
after python 2.7 installation version still showing as 2.6.5
0
0
0
111
39,832,735
2016-10-03T13:23:00.000
4
0
0
0
python,arrays,pandas,numpy,dataframe
39,833,244
2
true
0
0
If A is the dataframe and col the column: import pandas as pd output = pd.np.column_stack((A.index.values, A.col.values))
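For example, with a datetime index — casting both sides to object dtype sidesteps NumPy's refusal to promote datetime64 and int64 to a common dtype (pd.np is just an older alias for numpy; importing numpy directly works the same):

```python
import numpy as np
import pandas as pd

A = pd.DataFrame({'col': [10, 20, 30]},
                 index=pd.to_datetime(['2016-10-01', '2016-10-02', '2016-10-03']))

# object dtype keeps Timestamps and ints side by side in one 2-D array
output = np.column_stack((A.index.to_numpy(dtype=object),
                          A['col'].to_numpy(dtype=object)))

assert output.shape == (3, 2)
assert output[0, 1] == 10
```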
1
2
1
How can I convert one column and the index of a Pandas dataframe with several columns to a NumPy array, with the dates lining up with the correct column value from the dataframe? There are a few issues here with data types, and it's driving me nuts trying to get both the index and the column out and into the one array!! Help would be much appreciated!
Convert Pandas Dataframe Date Index and Column to Numpy Array
1.2
0
0
4,044
39,836,893
2016-10-03T17:11:00.000
0
0
0
0
python,automation,imacros
39,837,450
1
false
1
0
There is a Python package called mechanize. It helps you automate the processes that can be done in a browser. So check it out. I think mechanize should give you all the tools required to solve the problem.
1
0
0
I have a .csv file with a list of URLs I need to extract data from. I need to automate the following process: (1) Go to a URL in the file. (2) Click the chrome extension that will redirect me to another page which displays some of the URL's stats. (3) Click the link in the stats page that enables me to download the data as a .csv file. (4) Save the .csv. (5) Repeat for the next n URLs. Any idea how to do this? Any help greatly appreciated!
Automate file downloading using a chrome extension
0
0
1
282
39,837,656
2016-10-03T17:58:00.000
0
0
1
1
gdb,gdb-python
39,838,415
1
false
0
0
Yes, you can do this in gdb. Rather than trying to set a breakpoint on the next instruction, you can instead use the si command to single-step to the next instruction.
1
0
0
I am trying to write a Python script for GDB to trace a function. The idea is to set a breakpoint on an address location, let the program run and then, when it breaks, log registers, vectors and the stack to a file, find out what address the next instruction will be at, set a breakpoint on that location, and rinse and repeat. I read through the documentation and I'm pretty confident registers, vectors and memory locations can be easily dumped. The actual problem is finding what the next instruction location will be, as it requires analyzing the disassembly of the current instruction to determine where the next breakpoint should be placed. Update: I am doing all this without using stepi or nexti, because the target I'm debugging works only with hardware breakpoints, and as far as I know those commands use software breakpoints to break at the next instruction. Is there anything like that in GDB?
Tracing instructions with GDB Python scripting
0
0
0
502
39,840,736
2016-10-03T21:31:00.000
0
0
0
0
python,django
39,844,257
3
false
1
0
Explicit is better than implicit. Wrap your interactivity in a function that's called only if the __name__ == "__main__" part was executed. From the django parts, just use it as a library. Most ways of doing these kinds of checks are semi-magical and hence flaky.
1
2
0
I have a Python script that pauses for user input (using raw_input). Recently I created a Django web UI for this script. Now when I execute the script via Django it pauses as it's waiting for input in the backend. How can I determine if the script was run from Django or terminal/cmd/etc? I don't want to maintain 2 streams of code, one for web and another one for terminal.
How to detect if script ran from Django or command prompt?
0
0
0
219
39,844,772
2016-10-04T05:30:00.000
13
0
0
1
python,tensorflow
39,856,855
4
false
0
0
I solved this problem by copying the libstdc++.so.6 file which contains version CXXABI_1.3.8. Try running the following search command first: $ strings /usr/lib/x86_64-linux-gnu/libstdc++.so.6 | grep CXXABI_1.3.8 If it returns CXXABI_1.3.8, then you can do the copying: $ cp /usr/lib/x86_64-linux-gnu/libstdc++.so.6 /home/jj/anaconda2/bin/../lib/libstdc++.so.6
1
12
1
I have re-installed Anaconda2. And I got the following error when 'python -c 'import tensorflow'' ImportError: /home/jj/anaconda2/bin/../lib/libstdc++.so.6: version `CXXABI_1.3.8' not found (required by /home/jj/anaconda2/lib/python2.7/site-packages/tensorflow/python/_pywrap_tensorflow.so) environment CUDA8.0 cuDNN 5.1 gcc 5.4.1 tensorflow r0.10 Anaconda2 : 4.2 the following is in bashrc file export PATH="/home/jj/anaconda2/bin:$PATH" export CUDA_HOME=/usr/local/cuda-8.0 export PATH=/usr/local/cuda-8.0/bin${PATH:+:${PATH}} export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
"'CXXABI_1.3.8' not found" in tensorflow-gpu - install from source
1
0
0
24,320
39,845,034
2016-10-04T05:53:00.000
0
0
1
0
python,regex
46,521,567
6
false
0
0
I got it to work by putting the stuff between the caret and the dollar in parentheses, like so: re.compile(r'^(\d{1,3}(,\d{3})*)$') but I find this regex pretty useless for finding these numbers in a document, because the string has to begin and end with the exact phrase.
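For the record, the anchored pattern does satisfy the exercise's match/no-match cases:

```python
import re

pattern = re.compile(r'^\d{1,3}(,\d{3})*$')

# must match
for good in ('42', '1,234', '6,368,745'):
    assert pattern.match(good)

# must not match
for bad in ('12,34,567', '1234'):
    assert not pattern.match(bad)
```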
1
0
0
I am a beginner in Python and in regular expressions, and now I am trying to deal with one exercise that sounds like this: How would you write a regex that matches a number with commas for every three digits? It must match the following: '42' '1,234' '6,368,745' but not the following: '12,34,567' (which has only two digits between the commas) '1234' (which lacks commas) I thought it would be easy, but I've already spent several hours and still don't have the right answer. And even the answer that was in the book with this exercise doesn't work at all (the pattern in the book is ^\d{1,3}(,\d{3})*$) Thank you in advance!
How to make regex that matches a number with commas for every three digits?
0
0
0
5,536
39,851,220
2016-10-04T11:34:00.000
0
0
1
0
python,openerp,odoo-8
39,851,575
1
false
1
0
You can get the selected record ids from ids instead of active_ids.
1
1
0
I've been using the module "web_o2m_delete_multi", which lets me select multiple lines in a one2many list view and delete them all. Is there a way to use the selected lines in a Python function? I tried active_ids but it's not working.
Is it possible to use the selected lines of a one2many list in a function?
0
0
0
179
39,851,566
2016-10-04T11:51:00.000
12
0
1
0
python,python-2.7,python-3.x,pip
39,852,126
8
true
0
0
You will have to use the absolute path of pip. E.g: if I installed python 3 to C:\python35, I would use: C:\> python35\Scripts\pip.exe install packagename Or if you're on linux, use pip3 install packagename If you don't specify a full path, it will use whichever pip is in your path.
4
15
0
I am using Windows 10. Currently, I have Python 2.7 installed. I would like to install Python 3.5 as well. However, if I have both 2.7 and 3.5 installed, when I run pip, how do I get the direct the package to be installed to the desired Python version?
Using pip on Windows installed with both python 2.7 and 3.5
1.2
0
0
31,750
39,851,566
2016-10-04T11:51:00.000
1
0
1
0
python,python-2.7,python-3.x,pip
39,852,599
8
false
0
0
The answer from Farhan.K will work. However, I think a more convenient way would be to rename python35\Scripts\pip.exe to python35\Scripts\pip3.exe assuming python 3 is installed in C:\python35. After renaming, you can use pip3 when installing packages to python v3 and pip when installing packages to python v2. Without the renaming, your computer will use whichever pip is in your path.
4
15
0
I am using Windows 10. Currently, I have Python 2.7 installed. I would like to install Python 3.5 as well. However, if I have both 2.7 and 3.5 installed, when I run pip, how do I get the direct the package to be installed to the desired Python version?
Using pip on Windows installed with both python 2.7 and 3.5
0.024995
0
0
31,750
39,851,566
2016-10-04T11:51:00.000
-1
0
1
0
python,python-2.7,python-3.x,pip
48,870,834
8
false
0
0
I tried many things, and finally pip3 install --upgrade pip worked for me, as I was facing this issue because I had both Python 3 and Python 2.7 installed on my system. Mind the pip3 at the beginning and pip at the end. And yes, you do have to run the command prompt in admin mode and make sure the path is set properly.
4
15
0
I am using Windows 10. Currently, I have Python 2.7 installed. I would like to install Python 3.5 as well. However, if I have both 2.7 and 3.5 installed, when I run pip, how do I get the direct the package to be installed to the desired Python version?
Using pip on Windows installed with both python 2.7 and 3.5
-0.024995
0
0
31,750
39,851,566
2016-10-04T11:51:00.000
-1
0
1
0
python,python-2.7,python-3.x,pip
53,885,123
8
false
0
0
1- open a command prompt and change directory using the command cd C:\Python35\Scripts 2- write the command pip3 install --upgrade pip 3- close the command prompt and reopen it to return to the default directory, then use the command pip3.exe install package_name to install any package you want
4
15
0
I am using Windows 10. Currently, I have Python 2.7 installed. I would like to install Python 3.5 as well. However, if I have both 2.7 and 3.5 installed, when I run pip, how do I get the direct the package to be installed to the desired Python version?
Using pip on Windows installed with both python 2.7 and 3.5
-0.024995
0
0
31,750
39,852,551
2016-10-04T12:37:00.000
4
0
1
0
python
39,852,575
3
true
0
0
No need for regexes here, why don't you simply go for current_part = current_part.split('/')[0] ?
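A quick sketch of that suggestion (the sample value and helper name are mine, not from the question):

```python
# split('/') breaks the string at each slash; [0] keeps the part
# before the first one. Strings without a slash come back unchanged.
def before_slash(part):
    return str(part).split('/')[0]

print(before_slash('1234/gg'))  # 1234
print(before_slash('1234'))     # 1234
```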
1
0
0
I have a for loop that changes the string current_part. current_part should have the format 1234, but sometimes it has the format 1234/gg. Other formats exist, but in all of them anything after the backslash needs to be deleted. I found a similar example below, so I tried it, but it didn't work. How can I fix this? Thanks current_part = re.sub(r"\B\\\w+", "", str(current_part))
Delete all characters after a backslash in python?
1.2
0
0
899
39,855,273
2016-10-04T14:44:00.000
1
0
0
0
python,css,gtk
39,872,085
1
false
0
1
The CSS 'api' was basically undocumented and unstable before 3.20 so there isn't really any reasonable way to support all versions before it unless you make a separate theme for each version.
1
0
0
I finished a GTK interface with GTK 3.18 and it works well, but when I change to GTK 3.14 the interface turns out very badly: the size and the color of the widgets change, and I find there is not enough information about the GTK 3.14 version.
what's the difference between GTK 3.14 and 3.18 on the css load
0.197375
0
0
48