Q_Id
int64 2.93k
49.7M
| CreationDate
stringlengths 23
23
| Users Score
int64 -10
437
| Other
int64 0
1
| Python Basics and Environment
int64 0
1
| System Administration and DevOps
int64 0
1
| DISCREPANCY
int64 0
1
| Tags
stringlengths 6
90
| ERRORS
int64 0
1
| A_Id
int64 2.98k
72.5M
| API_CHANGE
int64 0
1
| AnswerCount
int64 1
42
| REVIEW
int64 0
1
| is_accepted
bool 2
classes | Web Development
int64 0
1
| GUI and Desktop Applications
int64 0
1
| Answer
stringlengths 15
5.1k
| Available Count
int64 1
17
| Q_Score
int64 0
3.67k
| Data Science and Machine Learning
int64 0
1
| DOCUMENTATION
int64 0
1
| Question
stringlengths 25
6.53k
| Title
stringlengths 11
148
| CONCEPTUAL
int64 0
1
| Score
float64 -1
1.2
| API_USAGE
int64 1
1
| Database and SQL
int64 0
1
| Networking and APIs
int64 0
1
| ViewCount
int64 15
3.72M
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
39,659,748 | 2016-09-23T11:32:00.000 | 1 | 1 | 1 | 0 | 0 | python,eclipse,python-3.x,autocomplete | 0 | 39,773,761 | 0 | 1 | 0 | true | 0 | 0 | If you are using PyDev, make sure that interpreter grammar is set to 3.0 (right click project -> Properties -> PyDev - Interpreter/Grammar) | 1 | 0 | 0 | 0 | I changed the interpreter for my python projects from 2.x to 3.5 recently. The code interpretes correctly with the 3.5 version.
I noticed that the autocompletion function of Eclipse still autocompletes as if I am using 2.x Python version. For example: print gets autocompleted without parenthesis as a statement and not as a function. Any idea how to notify the Eclipse that it need to use 3.5 autocompletion? | force eclipse to use Python 3.5 autocompletion | 0 | 1.2 | 1 | 0 | 0 | 82 |
39,665,029 | 2016-09-23T15:59:00.000 | 1 | 1 | 0 | 0 | 0 | python | 0 | 39,665,400 | 0 | 2 | 0 | false | 0 | 0 | shutil is a very usefull thing to use when copying files.
I once needed to have a python script that moved all .mp3 files from a directory to a backup, deleted the original directory, created a new once, and moved the .mp3 files back in. shutil was perfect for thise.
The formatting for the command is how @Kieran has stated earlier.
If you're looking to keep file metadata, then use shutil.copy2(src, dest), as that is the equivalent of running copy() and copystat() one after another. | 1 | 1 | 0 | 0 | I am very new to Python. I am curious to how I would be able to copy some files from my directory to another Users directory on my computer using python script? And would I be correct in saying I need to check the permissions of the users and files? So my question is how do I send files and also check the permissions at the same time | Python sending files from a user directory to another user directory | 0 | 0.099668 | 1 | 0 | 0 | 639 |
39,679,167 | 2016-09-24T17:38:00.000 | 3 | 0 | 0 | 0 | 0 | python,django | 0 | 39,679,214 | 0 | 1 | 0 | false | 1 | 0 | You're confusing two different things here. A class can easily have an attribute that is a list which contains instances of another class, there is nothing difficult about that.
(But note that there is no way in which a Message should extend MessageBox; this should be composition, not inheritance.)
However then you go on to talk about Django models. But Django models, although they are Python classes, also represent tables in the database. And the way you represent one table containing a list of entries in another table is via a foreign key field. So in this case your Message model would have a ForeignKey to MessageBox.
Where you put the send method depends entirely on your logic. A message should probably know how to send itself, so it sounds like the method would go there. | 1 | 0 | 0 | 0 | I have been having trouble using django. Right now, I have a messagebox class that is suppose to hold messages, and a message class that extends it. How do I make it so messagebox will hold messages?
Something else that I cannot figure out is how classes are to interact. Like, I have a user that can send messages. Should I call its method to call a method in messagebox to send a msg or can I have a method in user to make a msg directly.
My teacher tries to accentuate cohesion and coupling, but he never even talks about how to implement this in django or implement django period. Any help would be appreciated. | How can a class hold an array of classes in django | 0 | 0.53705 | 1 | 0 | 0 | 121 |
39,691,679 | 2016-09-25T20:46:00.000 | 1 | 0 | 0 | 0 | 0 | python,django,django-models | 0 | 39,692,799 | 0 | 1 | 0 | true | 1 | 1 | If I understood your description correctly, you want a relationship where there can be many emailWidget or TextWidget for one instance of widgetManager.
What you can do in this case is add a ForeignKey field for widgetManager to emailWidget and TextWidget. This way, you can have many instances of the widgets while they refer to the same manager.
I think you may have confused inheritance with model relationships when you said you want to extend widgets from a base class. Perhaps I'm wrong?
Not sure what you mean't about order of the widget being important either.. | 1 | 0 | 0 | 0 | I have a model called widgetManager and 2 widget models called emailWidget and TextWidget. Now a single instance of widgetManager can have multiple instances of emailWidget and TextWidget. How can this be achieved with the following in mind
Till now i only have two but there can be more in future
The order of widget is very important
I have tried with adding two many-many relations in widgetManager but that seems impractical and not the best way to go around because if first condition.
What i have in mind is maybe i can somehow make a base widget class and extend all the widgets from that class but i am not sure about that. Would be super helpful if someone can point me in the right direction. Thanks in advance. | Proper model defination in django for a widget manager | 0 | 1.2 | 1 | 0 | 0 | 27 |
39,691,860 | 2016-09-25T21:08:00.000 | 1 | 0 | 1 | 0 | 0 | python-2.7,pip | 0 | 43,962,413 | 1 | 2 | 0 | false | 0 | 0 | install pip for Python2.7 with easy_install:
sudo easy_install-2.7 pip
now you can use pip for the same specific version of Python:
sudo pip2.7 install BeautifulSoup | 1 | 2 | 0 | 0 | I am using macOS Sierra 10.12 and after I upgraded my OS I can no longer install packages for python 3 using pip. Before I used to use pip for python2 and pip3 for python 3 as I have both versions of Python. But now I can no longer use pip to install libraries for python2.
Can anyone help me how can I change my default pip installer to python2? So that I can just use pip install in order to install for python 2.
For your information - when I only type python on terminal it says my default is python 2.7. | pip command by default using python 3...how to change it to python 2? | 0 | 0.099668 | 1 | 0 | 0 | 4,506 |
39,694,646 | 2016-09-26T04:15:00.000 | 0 | 0 | 0 | 0 | 0 | python,c++,boost,fortran,gfortran | 0 | 39,716,307 | 0 | 1 | 0 | false | 0 | 0 | I built Boost.Python libraries 1.61.0 from source for Python 2.7 using VC 14.0. Then used those in the build process for Netgen (again using VC 14.0) and pointed to the Python 2.7 library and include directories (as opposed to Python 3.5). This has thus far worked in Python 2.7 with my existing code. | 1 | 2 | 0 | 0 | I have a Python 2.7 project that has thus far been using gfortran and MinGW to build extensions. I use MinGW because it seems to support write statements and allocatable arrays in the Fortran code while MSVC does not.
There is another project I would like to incorporate into my own (Netgen) but it is currently set up for Python 3.5 using Boost.Python. I first tried to transfer my own program to Python 3.5 but that is where I was reminded of the MSVC issues and apparently MinGW is not supported. For that reason, I've been trying to think of a way to compile Netgen + Boost.Python for deployment in Python 2.7.
I think the Boost part is straightforward, but it seems I need Visual C++ 2008 to get it integrated with Python 2.7. I have the Visual C++ Compiler for Python 2.7 from Microsoft, but I haven't gotten it to work inside the CMake build system. I point it to the cl.exe compilers in the VC for Python folders and CMake always tells me that building a simple test program fails. Since I don't actually have (and can't find) Visual Studio 2008, not sure how far I'd get anyway.
There's a lot of places that could have issues here, but I'm just looking for a go/no-go answer if that's what it is. Any solutions would obviously be welcomed.
I am running Windows 10 64-bit.
I'm not a C/C++ expert, but it seems like I have all the tools I need to compile Netgen using the VC for Python tools (cl, link, etc). I just don't have/not sure how to put it all together into a project or something like that. | Building Fortran extension for Python 3.5 or C extension for 2.7 | 0 | 0 | 1 | 0 | 0 | 168 |
39,713,433 | 2016-09-26T22:34:00.000 | 1 | 0 | 1 | 0 | 1 | python,keyboard,spyder | 0 | 60,240,641 | 0 | 2 | 0 | false | 0 | 0 | Set this configuration in Spyder:
Run > Run Configuration Per File > Execute In An External System Terminal
In my experience "msvcrt.kbhit" only works in CMD. | 1 | 2 | 0 | 0 | Has anyone come across a way to emulate kbhit() in the Spyder environment on Windows? Somehow the development environment gets between the Python program and the keyboard, so any simple way of doing it (i.e. msvcrt.kbhit()) does not work. | How to make kbhit() work in the Spyder environment | 0 | 0.099668 | 1 | 0 | 0 | 478 |
39,713,540 | 2016-09-26T22:46:00.000 | 1 | 0 | 0 | 0 | 0 | python,tcp,scapy | 0 | 40,023,525 | 0 | 1 | 0 | true | 0 | 0 | You can not directly write the TCP options field byte per byte, however you can either:
write your entire TCP segment byte per byte: TCP("\x01...\x0n")
add an option to Scapy's code manually in scapy/layers/inet.py TCPOptions structure
These are workarounds and a definitive solution to this would be to implement a byte per byte TCP options field and commit on Scapy's github of course. | 1 | 0 | 0 | 0 | I want to read and write custom data to TCP options field using Scapy. I know how to use TCP options field in Scapy in "normal" way as dictionary, but is it possible to write to it byte per byte? | Read/Write TCP options field | 1 | 1.2 | 1 | 0 | 1 | 402 |
39,715,472 | 2016-09-27T03:22:00.000 | 0 | 0 | 0 | 0 | 0 | python,opencv,video,overlay | 0 | 39,716,115 | 0 | 2 | 0 | false | 0 | 0 | What you need are 2 Mat objects- one to stream the camera (say Mat_cam), and the other to hold the overlay (Mat_overlay).
When you draw on your main window, save the line and Rect objects on Mat_overlay, and make sure that it is not affected by the streaming video
When the next frame is received, Mat_cam will be updated and it'll have the next video frame, but Mat_overlay will be the same, since it will not be cleared/refreshed with every 'for' loop iteration. Adding Mat_overlay and Mat_cam using Weighted addition will give you the desired result. | 2 | 1 | 1 | 0 | I am currently working in Python and using OpenCV's videocapture and cv.imshow to show a video. I am trying to put an overlay on this video so I can draw on it using cv.line, cv.rectangle, etc. Each time the frame changes it clears the image that was drawn so I am hoping if I was to put an overlay of some sort on top of this that it would allow me to draw multiple images on the video without clearing. Any advice? Thanks ahead! | How to put an overlay on a video | 0 | 0 | 1 | 0 | 0 | 1,160 |
39,715,472 | 2016-09-27T03:22:00.000 | 0 | 0 | 0 | 0 | 0 | python,opencv,video,overlay | 0 | 39,721,387 | 0 | 2 | 0 | false | 0 | 0 | I am not sure that I have understood your question properly.What I got from your question is that you want the overlay to remain on your frame, streamed from Videocapture, for that one simple solution is to declare your "Mat_cam"(camera streaming variable) outside the loop that is used to capture frames so that "Mat_cam" variable will not be freed every time you loop through it. | 2 | 1 | 1 | 0 | I am currently working in Python and using OpenCV's videocapture and cv.imshow to show a video. I am trying to put an overlay on this video so I can draw on it using cv.line, cv.rectangle, etc. Each time the frame changes it clears the image that was drawn so I am hoping if I was to put an overlay of some sort on top of this that it would allow me to draw multiple images on the video without clearing. Any advice? Thanks ahead! | How to put an overlay on a video | 0 | 0 | 1 | 0 | 0 | 1,160 |
39,722,984 | 2016-09-27T11:02:00.000 | 1 | 0 | 1 | 0 | 0 | python,numpy,compilation | 0 | 39,728,900 | 0 | 3 | 0 | false | 0 | 0 | Python can execute functions written in Python (interpreted) and compiled functions. There are whole API docs about writing code for integration with Python. cython is one of the easier tools for doing this.
Libraries can be any combination - pure Python, Python plus interfaces to compiled code, or all compiled. The interpreted files end with .py, the compiled stuff usually is .so or .dll (depending on the operating system). It's easy to install pure Python code - just load, unzip if needed, and put the right directory. Mixed code requires a compilation step (and hence a c compiler, etc), or downloading a version with binaries.
Typically developers get the code working in Python, and then rewrite speed sensitive portions in c. Or they find some external library of working c or Fortran code, and link to that.
numpy and scipy are mixed. They have lots of Python code, core compiled portions, and use external libraries. And the c code can be extraordinarily hard to read.
As a numpy user, you should first try to get as much clarity and performance with Python code. Most of the optimization SO questions discuss ways of making use of the compiled functionality of numpy - all the operations that work on whole arrays. It's only when you can't express your operations in efficient numpy code that you need to resort to using a tool like cython or numba.
In general if you have to iterate extensively then you are using low level operations. Either replace the loops with array operations, or rewrite the loop in cython. | 1 | 4 | 0 | 0 | Trying to understand whether python libraries are compiled because I want to know if the interpreted code I write will perform the same or worse.
e.g. I saw it mentioned somewhere that numpy and scipy are efficient because they are compiled. I don't think this means byte code compiled so how was this done? Was it compiled to c using something like cython? Or was it written using a language like c and compiled in a compatible way?
Does this apply to all modules or is it on a case-by-case basis? | Are Python modules compiled? | 0 | 0.066568 | 1 | 0 | 0 | 2,959 |
39,726,921 | 2016-09-27T14:09:00.000 | 0 | 0 | 1 | 0 | 0 | python | 0 | 39,727,471 | 0 | 1 | 0 | false | 0 | 0 | Let's start from the second point: if the list you store in memory is larger than the available ram, the computer starts using the hd as ram and this severely slow down everything. The optimal way of outputting in your situation is fill the ram as much as possible (always keeping enough space for the rest of the software running on your pc) and then writing on a file all in once.
The fastest way to store a list in a file would be using pickle so that you store binary data that take much less space than formatted ones (so even the read/write process is much much faster).
When you write to a file, you should keep the file always open, using something like with open('namefile', 'w') as f. In this way, you save the time to open/close the file and the cursor will always be at the end. If you decide to do that, use f.flush() once you have written the file to avoid loosing data if something bad happen. The append method is good alternative anyway.
If you provide some code it would be easier to help you... | 1 | 0 | 0 | 0 | Say I have a data file of size 5GB in the disk, and I want to append another set of data of size 100MB at the end of the file -- Just simply append, I don't want to modify nor move the original data in the file. I know I can read the hole file into memory as a long long list and append my small new data to it, but it's too slow. How I can do this more efficiently?
I mean, without reading the hole file into memory?
I have a script that generates a large stream of data, say 5GB, as a long long list, and I need to save these data into a file. I tried to generate the list first and then output them all in once, but as the list increased, the computer got slow down very very severely. So I decided to output them by several times: each time I have a list of 100MB, then output them and clear the list. (this is why I have the first problem)
I have no idea how to do this.Is there any lib or function that can do this? | modify and write large file in python | 0 | 0 | 1 | 0 | 0 | 1,079 |
39,738,872 | 2016-09-28T05:42:00.000 | 2 | 0 | 1 | 0 | 0 | python,compilation,comparison,abstract-syntax-tree | 0 | 39,738,985 | 0 | 2 | 0 | false | 0 | 0 | One approach would be to count then number of functions, objects, keywords possibly grouped into categories such as branching, creating, manipulating, etc., and number variables of each type. Without relying on the methods and variables being called the same name(s).
For a given problem the similar approaches will tend to come out with similar scores for these, e.g.: A students who used decision tree would have a high number of branch statements while one who used a decision table would have much lower.
This approach would be much quicker to implement than parsing the code structure and comparing the results. | 1 | 2 | 0 | 0 | Many would want to measure code similarity to catch plagiarisms, however my intention is to cluster a set of python code blocks (say answers to the same programming question) into different categories and distinguish different approaches taken by students.
If you have any idea how this could be achieved, I would appreciate it if you share it here. | How to measure similarity between two python code blocks? | 0 | 0.197375 | 1 | 0 | 0 | 960 |
39,759,680 | 2016-09-29T00:44:00.000 | 0 | 0 | 1 | 0 | 0 | python | 0 | 39,778,818 | 0 | 2 | 0 | false | 0 | 0 | think i figured it out. Apparently SLES 11.4 does not include the development headers in the default install from their SDK for numpy 1.8.
And of course they don't offer matplotlib along with a bunch of common python packages.
The python packages per the SLES SDK are the system default are located under/usr/lib64/python2.6/site-packages/ and it is under here I see numpy version 1.8. So using the YAST software manager if you choose various python packages this is where they are located.
To this point without having the PYTHONPATH environment variable set I can launch python, type import numpy, and for the most part use it. But if I try to build matplotlib 0.99.1 it responds that it cannot find the header files for numpy version 1.8, so it knows numpy 1.8 is installed but the development package needs to be installed.
Assuming by development headers they mean .h files,
If I search under /usr/lib64/python2.6/site-packages I have no .h files for anything!
I just downloaded the source for numpy-1.8.0.tar.gz and easily did a python setup.py.build followed by python setup.py install and noticed it installed under /usr/local/lib64/python2.6/site-packages/
Without the PYTHONPATH environment variable set, if i try to build matplotlib I still get the error about header files not found.
but in my bash shell, as root, after I do export PYTHONPATH=/usr/local/lib64/python2.6/site-packages I can successfully do the build and install of matplotlib 0.99.1 which also installs /usr/local/lib64/python2.6/site-packages
Notes:
I also just did a successful build & install of numpy-1.11 and that got thrown in under /usr/local/lib64/python2.6/site-packages however when i try to then build matplotlib 0.99.1 with PYTHONPATH set it reports outright that numpy is not installed that version 1.1 or greater is needed. So here it seems this older version of matplotlib needs to use a certain version range of numpy, that the latest numpy 1.11 is not compatible.
And the only other environment variable i have which is set by the system is PYTHONSTARTUP which points to the file /etc/pythonstart. | 1 | 0 | 1 | 0 | My system is SLES 11.4 having python 2.6.9.
I know little about python and have not found where to download rpm's that give me needed python packages.
I acquired numpy 1.4 and 1.11 and I believe did a successful python setup.py build followed by python setup.py install on numpy.
Going from memory I think this installed under /usr/local/lib64/python2.6/...
Next I tried building & installing matplotlib (which requires numpy) and when I do python setup.py build it politely responds with cannot find numpy. So my questions are
do i need to set some kind of python related environment variable, something along the lines of LD_LIBRARY_PATH or PATH ?
As I get more involved with using python installing packages that I have to build from source I need to understand where things currently are per the default install of python, where new things should go, and where the core settings for python are to know how and where to recognize new packages. | manually building installing python packages in linux so they are recognized | 1 | 0 | 1 | 0 | 0 | 44 |
39,768,925 | 2016-09-29T11:23:00.000 | 2 | 0 | 0 | 0 | 0 | python,tkinter,raspberry-pi,touchscreen,raspberry-pi3 | 0 | 39,770,561 | 0 | 2 | 0 | true | 0 | 1 | There is always a widget with the keyboard focus. You can query that with the focus_get method of the root window. It will return whatever widget has keyboard focus. That is the window that should receive input from your keypad. | 1 | 1 | 0 | 0 | I'm making a program on the Raspberry Pi with a touchscreen display.
I'm using Python Tkinter that has two entry widgets and one on screen keypad. I want to use the same keypad for entering data on both entry widgets.
Can anyone tell me how can i check if an entry is selected? Similar like clicking on the Entry using the mouse and the cursor appears. How can I know that in Python Tkinter?
Thank you. | Check if Entry widget is selected | 0 | 1.2 | 1 | 0 | 0 | 1,831 |
39,771,998 | 2016-09-29T13:42:00.000 | 0 | 0 | 1 | 0 | 0 | python,linux,pycharm | 0 | 39,772,172 | 0 | 2 | 0 | false | 0 | 0 | Click on the top-right tab with your project name, then go Edit Configurations and there you can change the interpreter. | 2 | 0 | 0 | 0 | I haven't been able to find anything and I am not sure if this is the place I should be asking...
But I want to include the path to my interpreter in every new project I create. The reason being is that I develop locally and sync my files to a linux server. It is annoying having to manually type #! /users/w/x/y/z/bin/python every time I create a new project. Also would be nice to include certain imports I use 90% of the time.
I got to thinking, in the program I produce music with you can set a default project file. Meaning, when you click new project it is set up how you have configured (include certain virtual instruments, effects, etc).
Is it possible to do this or something similar with IDE, and more specifically, Pycharm? | Is it possible to include interpreter path (or set any default code) when I create new python file in Pycharm? | 1 | 0 | 1 | 0 | 0 | 59 |
39,771,998 | 2016-09-29T13:42:00.000 | 1 | 0 | 1 | 0 | 0 | python,linux,pycharm | 0 | 39,772,630 | 0 | 2 | 0 | true | 0 | 0 | You should open File in the main menu and click Default Settings, collapse the Editor then click File and Code Templates, in the Files tab click on the + sign and create a new Template, give the new template a name and extension, in the editor box put your template content, in your case #! /users/w/x/y/z/bin/python apply and OK. After that everytime you open a project, select that template to include default lines you want. You could make number of templates. | 2 | 0 | 0 | 0 | I haven't been able to find anything and I am not sure if this is the place I should be asking...
But I want to include the path to my interpreter in every new project I create. The reason being is that I develop locally and sync my files to a linux server. It is annoying having to manually type #! /users/w/x/y/z/bin/python every time I create a new project. Also would be nice to include certain imports I use 90% of the time.
I got to thinking, in the program I produce music with you can set a default project file. Meaning, when you click new project it is set up how you have configured (include certain virtual instruments, effects, etc).
Is it possible to do this or something similar with IDE, and more specifically, Pycharm? | Is it possible to include interpreter path (or set any default code) when I create new python file in Pycharm? | 1 | 1.2 | 1 | 0 | 0 | 59 |
39,773,544 | 2016-09-29T14:49:00.000 | 0 | 0 | 0 | 0 | 0 | python,openpyxl | 0 | 39,774,351 | 0 | 2 | 0 | false | 0 | 0 | I'm not sure what you mean by "text box". In theory you can add pretty much anything covered by the DrawingML specification to a chart but the practice may be slightly different.
However, there is definitely no built-in API for this so you'd have to start by creating a sample file and working backwards from it. | 1 | 2 | 0 | 1 | I'm trying to add a text box to a chart I've generated with openpyxl, but can't find documentation or examples showing how to do so. Does openpyxl support it? | Adding a text box to an excel chart using openpyxl | 0 | 0 | 1 | 1 | 0 | 3,047 |
39,779,412 | 2016-09-29T20:27:00.000 | 0 | 0 | 0 | 0 | 0 | python,unix | 1 | 39,813,792 | 0 | 2 | 0 | true | 0 | 0 | I have find the solution. It might because I am using Spyder from anaconda. As long as I use "\" instead of "\", python can recognize the location. | 1 | 0 | 1 | 0 | I am trying to write the file to my company's project folder which is unix system and the location is /department/projects/data/. So I used the following code
df.to_csv("/department/projects/data/Test.txt", sep='\t', header = 0)
The error message shows it cannot find the locations. how to specify the file location in Unix using python? | how to export data to unix system location using python | 0 | 1.2 | 1 | 0 | 0 | 43 |
39,780,715 | 2016-09-29T22:02:00.000 | 2 | 0 | 0 | 0 | 0 | python,html,django,pdf-generation,weasyprint | 0 | 39,792,862 | 0 | 1 | 0 | false | 1 | 0 | PDF is not built to be responsive, it is built to display the same no matter where it is viewed.
As @alxs pointed out in a comment, there are a few features that PDF viewing applications have added to simulate PDFs being responsive. Acrobat's Reflow feature is the best example of this that I am aware of and even it struggles with most PDFs that users come across in the wild.
One of the components (if not the only one) that matters, is that in order for a PDF to be useful in Acrobat's Reflow mode is to make sure that the PDFs you are creating contain structure information, this would be a Tagged PDF. Tagged PDF contains content that has been marked, similar to HTML tags, where text that makes up a paragraph is tagged in the PDF as being a paragraph. A number of PDF tools (creation or viewing) do not interpret the structure of a PDF though. | 1 | 1 | 0 | 0 | how to generate a responsive PDF with Django?.
I want to generate a PDF with Django but i need that is responsive, that is to say, the text of the PDF has that adapted to don't allow space empty.
for example to a agreement this change in the text, then, i need to adapt the to space of paper leaf. | how to generate a responsive PDF with Django? | 0 | 0.379949 | 1 | 0 | 0 | 252 |
39,801,748 | 2016-10-01T00:19:00.000 | 2 | 0 | 1 | 0 | 0 | python | 0 | 39,801,759 | 0 | 3 | 0 | false | 0 | 0 | Examine the text preceding your desired position and count the number of \n characters. | 1 | 0 | 0 | 0 | If I have a text that I've read into memory by using open('myfile.txt').read(), and if I know a certain location in this file, say, at character 10524, how can I find the line number of that location? | In Python, how can I get a line number corresponding to a given character location? | 0 | 0.132549 | 1 | 0 | 0 | 156 |
39,805,237 | 2016-10-01T09:56:00.000 | 1 | 0 | 0 | 0 | 0 | python,postgresql,web-scraping,scrapy | 0 | 39,805,342 | 0 | 1 | 0 | false | 1 | 0 | For example: I have a site with 100 pages and 10 records each. So I scrape page 1, and then go to page 2. But on fast growing sites, at the time I do the request for page 2, there might be 10 new records, so I would get the same items again. Nevertheless I would get all items in the end. BUT next time scraping this site, how would I know where to stop? I can't stop at the first record I already have in my database, because this might be suddenly on the first page, because there a new reply was made.
Usually each record has a unique link (permalink) e.g. the above question can be accessed by just entering https://stackoverflow.com/questions/39805237/ & ignoring the text beyond that. You'll have to store the unique URL for each record and when you scrape next time, ignore the ones that you already have.
If you take the example of tag python on Stackoverflow, you can view the questions here : https://stackoverflow.com/questions/tagged/python but the sorting order can't be relied upon for ensuring unique entries. One way to scrape would be to sort by newest questions and keep ignoring duplicate ones by their URL.
You can have an algorithm that scrapes first 'n' pages every 'x' minutes until it hits an existing record. The whole flow is a bit site specific, but as you scrape more sites, your algorithm will become more generic and robust to handle edge cases and new sites.
Another approach is to not run scrapy yourself, but use a distributed spider service. They generally have multiple IPs and can spider large sites within minutes. Just make sure you respect the site's robots.txt file and don't accidentally DDoS them. | 1 | 0 | 0 | 0 | I want to scrape a lot (a few hundred) of sites, which are basically like bulletin boards. Some of these are very large (up to 1.5 million) and also growing very quickly. What I want to achieve is:
scrape all the existing entries
scrape all the new entries near real-time (ideally around 1 hour intervals or less)
For this we are using scrapy and save the items in a postresql database. The problem right now is, how can I make sure I got all the records without scraping the complete site every time? (Which would not be very agressive traffic-wise, but also not possible to complete within 1 hour.)
For example: I have a site with 100 pages and 10 records each. So I scrape page 1, and then go to page 2. But on fast growing sites, at the time I do the request for page 2, there might be 10 new records, so I would get the same items again. Nevertheless I would get all items in the end. BUT next time scraping this site, how would I know where to stop? I can't stop at the first record I already have in my database, because this might be suddenly on the first page, because there a new reply was made.
I am not sure if I got my point accross, but tl;dr: How to fetch fast growing BBS in an incremental way? So with getting all the records, but only fetching new records each time. I looked at scrapy's resume function and also at scrapinghubs deltafetch middleware, but I don't know if (and how) they can help to overcome this problem. | How to go about incremental scraping large sites near-realtime | 0 | 0.197375 | 1 | 0 | 1 | 265 |
39,805,675 | 2016-10-01T10:46:00.000 | -1 | 1 | 0 | 0 | 0 | python,nltk | 1 | 39,810,288 | 0 | 4 | 0 | true | 0 | 0 | The Problem is raised probably because you don't have a default directory created for your ntlk downloads. If you are on a Windows Platform, All you need to do is to create a directory named "nltk_data" in any of your root directory and grant write permissions to that directory. The Natural Language Tool Kit initially searches for the destination named "nltk_data" in all of the root directories.
For Instance: Create a folder in your C:\ drive named "nltk_data"
After Making sure everything is done fine, execute your script to get rid of this error.
Hope this helps.
Regards. | 1 | 1 | 0 | 0 | I have problem on import nltk.
I configured apache and run some sample python code, it worked well on the browser.
The URL is : /localhost/cgi-bin/test.py.
When I import the nltk in test.py its not running. The execution not continue after the "import nltk" line.And it gives me that error ValueError: Could not find a default download directory
But when I run in the command prompt its working perfect.
how to remove this error? | ValueError: Could not find a default download directory of nltk | 0 | 1.2 | 1 | 0 | 0 | 1,011 |
39,836,893 | 2016-10-03T17:11:00.000 | 0 | 0 | 0 | 0 | 0 | python,automation,imacros | 0 | 39,837,450 | 0 | 1 | 0 | false | 1 | 0 | There is a python package called mechanize. It helps you automate the processes that can be done on a browser. So check it out.I think mechanize should give you all the tools required to solve the problem. | 1 | 0 | 0 | 0 | I have a .csv file with a list of URLs I need to extract data from. I need to automate the following process: (1) Go to a URL in the file. (2) Click the chrome extension that will redirect me to another page which displays some of the URL's stats. (3) Click the link in the stats page that enables me to download the data as a .csv file. (4) Save the .csv. (5) Repeat for the next n URLs.
Any idea how to do this? Any help greatly appreciated! | Automate file downloading using a chrome extension | 0 | 0 | 1 | 0 | 1 | 282 |
39,851,566 | 2016-10-04T11:51:00.000 | 12 | 0 | 1 | 0 | 0 | python,python-2.7,python-3.x,pip | 0 | 39,852,126 | 0 | 8 | 0 | true | 0 | 0 | You will have to use the absolute path of pip.
E.g: if I installed python 3 to C:\python35, I would use:
C:\> python35\Scripts\pip.exe install packagename
Or if you're on linux, use pip3 install packagename
If you don't specify a full path, it will use whichever pip is in your path. | 4 | 15 | 0 | 0 | I am using Windows 10. Currently, I have Python 2.7 installed. I would like to install Python 3.5 as well. However, if I have both 2.7 and 3.5 installed, when I run pip, how do I get the direct the package to be installed to the desired Python version? | Using pip on Windows installed with both python 2.7 and 3.5 | 0 | 1.2 | 1 | 0 | 0 | 31,750 |
39,851,566 | 2016-10-04T11:51:00.000 | 1 | 0 | 1 | 0 | 0 | python,python-2.7,python-3.x,pip | 0 | 39,852,599 | 0 | 8 | 0 | false | 0 | 0 | The answer from Farhan.K will work. However, I think a more convenient way would be to rename python35\Scripts\pip.exe to python35\Scripts\pip3.exe assuming python 3 is installed in C:\python35.
After renaming, you can use pip3 when installing packages to python v3 and pip when installing packages to python v2. Without the renaming, your computer will use whichever pip is in your path. | 4 | 15 | 0 | 0 | I am using Windows 10. Currently, I have Python 2.7 installed. I would like to install Python 3.5 as well. However, if I have both 2.7 and 3.5 installed, when I run pip, how do I get the direct the package to be installed to the desired Python version? | Using pip on Windows installed with both python 2.7 and 3.5 | 0 | 0.024995 | 1 | 0 | 0 | 31,750 |
39,851,566 | 2016-10-04T11:51:00.000 | -1 | 0 | 1 | 0 | 0 | python,python-2.7,python-3.x,pip | 0 | 48,870,834 | 0 | 8 | 0 | false | 0 | 0 | I tried many things , then finally
pip3 install --upgrade pip worked for me as i was facing this issue since i had both python3 and python2.7 installed on my system.
mind the pip3 in the beginning and pip in the end.
And yes you do have to run in admin mode the command prompt and make sure if the path is set properly. | 4 | 15 | 0 | 0 | I am using Windows 10. Currently, I have Python 2.7 installed. I would like to install Python 3.5 as well. However, if I have both 2.7 and 3.5 installed, when I run pip, how do I get the direct the package to be installed to the desired Python version? | Using pip on Windows installed with both python 2.7 and 3.5 | 0 | -0.024995 | 1 | 0 | 0 | 31,750 |
39,851,566 | 2016-10-04T11:51:00.000 | -1 | 0 | 1 | 0 | 0 | python,python-2.7,python-3.x,pip | 0 | 53,885,123 | 0 | 8 | 0 | false | 0 | 0 | 1-open command prompt and change direction using the command cd C:\Python35\Scripts
2- write the command pip3 install --upgrade pip
3- close the command prompt and reopen it again to return to the default direction and use the command pip3.exe install package_name to install any package you want | 4 | 15 | 0 | 0 | I am using Windows 10. Currently, I have Python 2.7 installed. I would like to install Python 3.5 as well. However, if I have both 2.7 and 3.5 installed, when I run pip, how do I get the direct the package to be installed to the desired Python version? | Using pip on Windows installed with both python 2.7 and 3.5 | 0 | -0.024995 | 1 | 0 | 0 | 31,750 |
39,861,106 | 2016-10-04T20:22:00.000 | 3 | 0 | 1 | 0 | 0 | python,matplotlib,ipython,jupyter | 0 | 50,086,042 | 0 | 2 | 0 | false | 0 | 0 | %matplotlib auto should switch to the default backend. | 2 | 7 | 0 | 0 | Well, I know I can use %matplotlib inline to plot inline.
However, how to disable it?
Sometime I just want to zoom in the figure that I plotted. Which I can't do on a inline-figure. | How to DISABLE Jupyter notebook matplotlib plot inline? | 0 | 0.291313 | 1 | 0 | 0 | 9,341 |
39,861,106 | 2016-10-04T20:22:00.000 | 1 | 0 | 1 | 0 | 0 | python,matplotlib,ipython,jupyter | 0 | 39,861,256 | 0 | 2 | 0 | true | 0 | 0 | Use %matplotlib notebook to change to a zoom-able display. | 2 | 7 | 0 | 0 | Well, I know I can use %matplotlib inline to plot inline.
However, how to disable it?
Sometime I just want to zoom in the figure that I plotted. Which I can't do on a inline-figure. | How to DISABLE Jupyter notebook matplotlib plot inline? | 0 | 1.2 | 1 | 0 | 0 | 9,341 |
39,880,906 | 2016-10-05T18:06:00.000 | 0 | 0 | 1 | 0 | 0 | python,package,environment-variables | 0 | 39,918,729 | 0 | 1 | 0 | false | 0 | 0 | I think I'll just detail in the readme file what to insert and where. I tried to find a difficult solution when it was really simple and straightforward | 1 | 0 | 0 | 0 | I created a python package for in-house use which relies upon some environmental variables (namely, the user and password to enter an online database). for my company, the convenience of installing a package rather than having it in every project is significant as the functions inside are used in completely separate projects and maintainability is a primary issue.
so, how do I "link" the package with the environmental variables? | In-house made package and environmental variables link | 0 | 0 | 1 | 0 | 0 | 23 |
39,882,504 | 2016-10-05T19:49:00.000 | 2 | 0 | 0 | 0 | 1 | python,django,django-tables2 | 0 | 39,882,505 | 0 | 2 | 1 | true | 1 | 0 | Im posting this as a future reference for myself and other who might have the same problem.
After searching for a bit I found out that django-tables2 was sending a single query for each row. The query was something like SELECT * FROM "table" LIMIT 1 OFFSET 1 with increasing offset.
I reduced the number of sql calls by calling query = list(query) before i create the table and pass the query. By evaluating the query in the python view code the table now seems to work with the evaulated data instead and there is only one database call instead of hundreds. | 1 | 3 | 0 | 0 | im using django-tables2 in order to show values from a database query. And everythings works fine. Im now using Django-dabug-toolbar and was looking through my pages with it. More out of curiosity than performance needs. When a lokked at the page with the table i saw that the debug toolbar registerd over 300 queries for a table with a little over 300 entries. I dont think flooding the DB with so many queries is a good idea even if there is no performance impact (at least not now). All the data should be coming from only one query.
Why is this happening and how can i reduce the number of queries? | django-tables2 flooding database with queries | 0 | 1.2 | 1 | 1 | 0 | 348 |
39,891,681 | 2016-10-06T08:54:00.000 | 1 | 0 | 0 | 0 | 1 | python,bitmap,wxpython | 0 | 39,901,257 | 0 | 1 | 0 | true | 0 | 1 | Take a look at the wx.lib.agw.supertooltip module. It should help you to create a tooltip-like window that displays custom rich content.
As for triggering the display of the tooltip, you can catch mouse events for the tree widget (be sure to call Skip so the tree widget can see the events too) and reset a timer each time the mouse moves. If the timer expires because the mouse hasn't been moved in that long then you can use tree.HitTest to find the item that the cursor is on and then show the appropriate image for that item. | 1 | 0 | 0 | 0 | So i'm programming python program that uses wxPython for UI, with wx.TreeCtrl widget for selecting pictures(.png) on selected directory. I would like to add hover on treectrl item that works like tooltip, but instead of text it shows bitmap picture.
Is there something that already allows this, or would i have to create something with wxWidgets?
I am not too familiar with wxWidgets, so if i have to create something like that how hard would it be, lot of code is already using the treectrl, so it needs to be able to work same way.
So how would i have to go about doing this? And if there might be something i might be missing id be happy to know. | wxpython treectrl show bitmap picture on hover | 0 | 1.2 | 1 | 0 | 0 | 160 |
39,906,167 | 2016-10-06T21:51:00.000 | 0 | 0 | 0 | 0 | 0 | java,python,rest,api | 0 | 39,906,371 | 0 | 2 | 0 | false | 1 | 0 | Furthermore, in the future you might want to separate them from the same machine and use network to communicate.
You can use http requests.
Make a contract in java of which output you will provide to your python script (or any other language you will use) send the output as a json to your python script, so in that way you can easily change the language as long as you send the same json. | 1 | 1 | 0 | 0 | I have a Java process which interacts with its REST API called from my program's UI. When I receive the API call, I end up calling the (non-REST based) Python script(s) which do a bunch of work and return me back the results which are returned back as API response.
- I wanted to convert this interaction of UI API -> JAVA -> calling python scripts to become end to end a REST one, so that in coming times it becomes immaterial which language I am using instead of Python.
- Any inputs on whats the best way of making the call end-to-end a REST based ? | Inputs on how to achieve REST based interaction between Java and Python? | 0 | 0 | 1 | 0 | 1 | 2,327 |
39,906,620 | 2016-10-06T22:31:00.000 | 1 | 0 | 1 | 0 | 1 | python,file,binaryfiles,file-writing | 0 | 39,906,690 | 0 | 1 | 0 | false | 0 | 0 | What you're doing wrong is assuming that it can be done. :-)
You don't get to insert and shove the existing data over; it's already in that position on disk, and overwrite is all you get.
What you need to do is to mark the insert position, read the remainder of the file, write your insertion, and then write that remainder after the insertion. | 1 | 0 | 0 | 0 | I've tried to do this using the 'r+b', 'w+b', and 'a+b' modes for open(). I'm using with seek() and write() to move to and write to an arbitrary location in the file, but all I can get it to do is either 1) write new info at the end of the file or 2) overwrite existing data in the file.
Does anyone know of some other way to do this or where I'm going wrong here? | how do I insert data to an arbitrary location in a binary file without overwriting existing file data? | 0 | 0.197375 | 1 | 0 | 0 | 56 |
39,940,303 | 2016-10-09T05:21:00.000 | 0 | 0 | 1 | 0 | 1 | python,windows-server-2008-r2,msvcr100.dll | 0 | 54,189,960 | 0 | 1 | 0 | false | 0 | 0 | The "Failed to write all bytes for (random DLL name)" error generally indicates that the disk is full. Would be nice if Microsoft had bothered to add an extra sentence indicating such, but this is usually the problem.
If your disk isn't full, then it may be a permissions issue -- make sure the user you're running the program as has write access to wherever it's trying to write to. | 1 | 0 | 0 | 0 | I made a python console exe. It cannot work on windows2008 R2 server.
I copy MSVCR100.dll and MSVCP100.dll from another computer onto the dir containing the exe file. It has been working correctly a long time.
Today, when start it show that "Failed to write all bytes for MSVCR100.dll"
I don't know what caused it and how to deal with it.
Thanks for any suggestions. | Failed to write all bytes for MSVCR100.dll | 0 | 0 | 1 | 0 | 0 | 2,132 |
39,949,845 | 2016-10-10T00:18:00.000 | 0 | 0 | 1 | 0 | 1 | python-3.x | 0 | 39,950,009 | 0 | 6 | 0 | false | 0 | 0 | Figured it out, if you just started python then you probably did not add python to your path.
To do so uninstall python and then reinstall it. This time click "add python to path" at the bottom of the install screen. | 5 | 4 | 0 | 0 | I just downloaded Python and Visual Studio. I'm trying to test the debugging feature for a simple "Hello World" script and I'm receiving this error:
Failed to launch the Python Process, please validate the path 'python'
followed by this in the debug console:
Error: spawn python ENOENT
Could someone please help me out and tell me how to fix this?
I'm running on windows 10.
Thanks! | Visual Studio Python "Failed to launch the Python Process, please validate the path 'python'' & Error: spawn python ENOENT | 0 | 0 | 1 | 0 | 0 | 27,825 |
39,949,845 | 2016-10-10T00:18:00.000 | 1 | 0 | 1 | 0 | 1 | python-3.x | 0 | 43,901,308 | 0 | 6 | 0 | false | 0 | 0 | Simply restart your VB studio code. Those show that some packages have been downloaded but not yet installed until reboot it. | 5 | 4 | 0 | 0 | I just downloaded Python and Visual Studio. I'm trying to test the debugging feature for a simple "Hello World" script and I'm receiving this error:
Failed to launch the Python Process, please validate the path 'python'
followed by this in the debug console:
Error: spawn python ENOENT
Could someone please help me out and tell me how to fix this?
I'm running on windows 10.
Thanks! | Visual Studio Python "Failed to launch the Python Process, please validate the path 'python'' & Error: spawn python ENOENT | 0 | 0.033321 | 1 | 0 | 0 | 27,825 |
39,949,845 | 2016-10-10T00:18:00.000 | 1 | 0 | 1 | 0 | 1 | python-3.x | 0 | 43,998,477 | 0 | 6 | 0 | false | 0 | 0 | Add python path by following these steps.
1. Go to uninstall a program.
2. Go to Python 3.6.1 (this is my python version). Select and click on Uninstall/change.
3.Click on Modify.
4. Click next > In advanced options > tick add Python to environment variable. Click install. Restart VS code. | 5 | 4 | 0 | 0 | I just downloaded Python and Visual Studio. I'm trying to test the debugging feature for a simple "Hello World" script and I'm receiving this error:
Failed to launch the Python Process, please validate the path 'python'
followed by this in the debug console:
Error: spawn python ENOENT
Could someone please help me out and tell me how to fix this?
I'm running on windows 10.
Thanks! | Visual Studio Python "Failed to launch the Python Process, please validate the path 'python'' & Error: spawn python ENOENT | 0 | 0.033321 | 1 | 0 | 0 | 27,825 |
39,949,845 | 2016-10-10T00:18:00.000 | 2 | 0 | 1 | 0 | 1 | python-3.x | 0 | 44,814,591 | 0 | 6 | 0 | false | 0 | 0 | For those who are having this error after the recent (May-June of 2017) update of Visual Studio Code.
Your old launch.json file might be causing this issue, due to the recent updates of launch.json file format and structure.
Try to delete launch.json file in the .vscode folder. The .vscode folder exists in your workspace where your source code exists, not to be confused with the one in your user home folder (C:\Users\{username}\.vscode).
This workaround worked fine for me with Windows10 + Visual Studio Code + Python extension. Just delete the existing launch.json and restart Visual Studio Code, and then start your debugging. The launch.json file might be regenerated again, but this time it should be in the correct shape. | 5 | 4 | 0 | 0 | I just downloaded Python and Visual Studio. I'm trying to test the debugging feature for a simple "Hello World" script and I'm receiving this error:
Failed to launch the Python Process, please validate the path 'python'
followed by this in the debug console:
Error: spawn python ENOENT
Could someone please help me out and tell me how to fix this?
I'm running on windows 10.
Thanks! | Visual Studio Python "Failed to launch the Python Process, please validate the path 'python'' & Error: spawn python ENOENT | 0 | 0.066568 | 1 | 0 | 0 | 27,825 |
39,949,845 | 2016-10-10T00:18:00.000 | 7 | 0 | 1 | 0 | 1 | python-3.x | 0 | 41,195,399 | 0 | 6 | 0 | false | 0 | 0 | Do not uninstall!
1) Go to location that you installed the program.
*example: C:\Program Files (x86)\Microsoft VS Code
copy the location.
2) right click on computer> properties >Advanced System Settings> Environment variables > under user variables find "path" click> edit> under variable value: go to the end of the line add ; then paste your location>ok > then go under system variables find "path"> do the same thing.... add ; then paste your location.
FOR EXAMPLE" ;C:\Program Files (x86)\Microsoft VS Code
3) Restart your Visual Studio Code | 5 | 4 | 0 | 0 | I just downloaded Python and Visual Studio. I'm trying to test the debugging feature for a simple "Hello World" script and I'm receiving this error:
Failed to launch the Python Process, please validate the path 'python'
followed by this in the debug console:
Error: spawn python ENOENT
Could someone please help me out and tell me how to fix this?
I'm running on windows 10.
Thanks! | Visual Studio Python "Failed to launch the Python Process, please validate the path 'python'' & Error: spawn python ENOENT | 0 | 1 | 1 | 0 | 0 | 27,825 |
39,958,650 | 2016-10-10T12:46:00.000 | 0 | 0 | 1 | 0 | 1 | python,multithreading,console-application | 0 | 39,958,943 | 0 | 1 | 0 | true | 0 | 0 | In a console, standard output (produced by the running program(s)) and standard input (produced by your keypresses) are both sent to screen, so they may end up all mixed.
Here your thread 1 writes 1 x by line every second, so if your take more than 1 second to type HELLO then that will produce the in-console output that you submitted.
If you want to avoid that, a few non-exhaustive suggestions:
temporarily interrupt thread1 output when a keypress is detected
use a library such as ncurses to create separates zones for your program output and the user input
just suppress thread1 input, or send it to a file instead. | 1 | 0 | 0 | 0 | I have a problem with console app with threading. In first thread i have a function, which write symbol "x" into output. In second thread i have function, which waiting for users input. (Symbol "x" is just random choice for this question).
For ex.
Thread 1:
while True:
print "x"
time.sleep(1)
Thread 2:
input = null
while input != "EXIT":
input = raw_input()
print input
But when i write text for thread 2 to console, my input text (for ex. HELLO) is rewroted.
x
x
HELx
LOx
x
x[enter pressed here]
HELLO
x
x
Is any way how can i prevent rewriting my input text by symbol "x"?
Thanks for answers. | One of threads rewrites console input in Python | 0 | 1.2 | 1 | 0 | 0 | 201 |
39,969,168 | 2016-10-11T01:29:00.000 | 2 | 0 | 0 | 0 | 0 | python,machine-learning,recommendation-engine,data-science | 0 | 40,001,529 | 0 | 1 | 0 | true | 1 | 0 | I would keep it simple and separate:
Your focus is collaborative filtering, so your recommender should generate scores for the top N recommendations regardless of location.
Then you can re-score using distance among those top-N. For a simple MVP, you could start with an inverse distance decay (e.g. final-score = cf-score * 1/distance), and adjust the decay function based on behavioral evidence if necessary. | 1 | 0 | 0 | 0 | I am currently building a recommender engine in python and I faced the following problem.
I want to incorporate collaborative filtering approach, its user-user variant. To recap, its idea is that we have an information on different users and which items they liked (if applicable - which ratings these users assigned to items). When we have new user who liked couple of things we just find users who liked same items and recommend to this new user items which were liked by users similar to new user.
But I want to add some twist to it. I will be recommending places to users, namely 'where to go tonight'. I know user preferences, but I want to also incorporate the distance to each item I could recommend. The father the place I am going to recommend to the user - the least attractive it should be.
So in general I want to incorporate a penalty into recommendation engine and the amount of penalty for each place will be based on the distance from user to the place.
I tried to googleif anyone did something similar but wasn't able to find anything. Any advice on how to properly add such penalty? | Recommender engine in python - incorporate custom similarity metrics | 0 | 1.2 | 1 | 0 | 0 | 114 |
39,970,515 | 2016-10-11T04:29:00.000 | 0 | 0 | 0 | 0 | 0 | python,matplotlib | 0 | 39,971,118 | 0 | 3 | 0 | false | 0 | 0 | It seems to me heatmap is the best candidate for this type of plot. imshow() will return u a colored matrix with color scale legend.
I don't get ur stretched ellipses problem, shouldnt it be a colored squred for each data point?
u can try log color scale if it is sparse. also plot the 12 classes separately to analyze if theres any inter-class differences. | 2 | 1 | 1 | 0 | I have a sparse matrix X, shape (6000, 300). I'd like something like a scatterplot which has a dot where the X(i, j) != 0, and blank space otherwise. I don't know how many nonzero entries there are in each row of X. X[0] has 15 nonzero entries, X[1] has 3, etc. The maximum number of nonzero entries in a row is 16.
Attempts:
plt.imshow(X) results in a tall, skinny graph because of the shape of X. Using plt.imshow(X, aspect='auto) will stretch out the graph horizontally, but the dots get stretched out to become ellipses, and the plot becomes hard to read.
ax.spy suffers from the same problem.
bokeh seems promising, but really taxes my jupyter kernel.
Bonus:
The nonzero entries of X are positive real numbers. If there was some way to reflect their magnitude, that would be great as well (e.g. colour intensity, transparency, or across a colour bar).
Every 500 rows of X belong to the same class. That's 12 classes * 500 observations (rows) per class = 6000 rows. E.g. X[:500] are from class A, X[500:1000] are from class B, etc. Would be nice to colour-code the dots by class. For the moment I'll settle for manually including horizontal lines every 500 rows to delineate between classes. | Python: Plot a sparse matrix | 0 | 0 | 1 | 0 | 0 | 2,877 |
39,970,515 | 2016-10-11T04:29:00.000 | 0 | 0 | 0 | 0 | 0 | python,matplotlib | 0 | 40,127,976 | 0 | 3 | 0 | false | 0 | 0 | plt.matshow also turned out to be a feasible solution. I could also plot a heatmap with colorbars and all that. | 2 | 1 | 1 | 0 | I have a sparse matrix X, shape (6000, 300). I'd like something like a scatterplot which has a dot where the X(i, j) != 0, and blank space otherwise. I don't know how many nonzero entries there are in each row of X. X[0] has 15 nonzero entries, X[1] has 3, etc. The maximum number of nonzero entries in a row is 16.
Attempts:
plt.imshow(X) results in a tall, skinny graph because of the shape of X. Using plt.imshow(X, aspect='auto) will stretch out the graph horizontally, but the dots get stretched out to become ellipses, and the plot becomes hard to read.
ax.spy suffers from the same problem.
bokeh seems promising, but really taxes my jupyter kernel.
Bonus:
The nonzero entries of X are positive real numbers. If there was some way to reflect their magnitude, that would be great as well (e.g. colour intensity, transparency, or across a colour bar).
Every 500 rows of X belong to the same class. That's 12 classes * 500 observations (rows) per class = 6000 rows. E.g. X[:500] are from class A, X[500:1000] are from class B, etc. Would be nice to colour-code the dots by class. For the moment I'll settle for manually including horizontal lines every 500 rows to delineate between classes. | Python: Plot a sparse matrix | 0 | 0 | 1 | 0 | 0 | 2,877 |
39,972,261 | 2016-10-11T07:25:00.000 | 0 | 0 | 1 | 0 | 0 | python,pandas,upgrade,arcmap | 1 | 39,972,738 | 1 | 1 | 0 | false | 0 | 0 | I reinstalled python again directly from python.org and then installed pandas which seems to work.
I guess this might stop the ArcMap version of python working properly but since I'm not using python with ArcMap at the moment it's not a big problem. | 1 | 0 | 1 | 0 | I recently installed ArcGIS10.4 and now when I run python 2.7 programs using Idle (for purposes unrelated to ArcGIS) it uses the version of python attached to ArcGIS.
One of the programs I wrote needs an updated version of the pandas module. When I try to update the pandas module in this verion of python (by opening command prompt as an administrator, moving to C:\Python27\ArcGIS10.4\Scripts and using the command pip install --upgrade pandas) the files download ok but there is an access error message when PIP tries to upgrade. I have tried restarting the computer in case something was open. The error message is quite long and I can't cut and paste from command prompt but it finishes with
" Permission denied: 'C:\Python27\ArcGIS10.4\Lib\site-packages\numpy\core\multiarray.pyd' "
I've tried the command to reinstall pandas completely which also gave an error message. I've tried installing miniconda in the hope that I could get a second version of python working and then use that version instead of the version attached to ArcMap. However I don't know how to direct Idle to choose the newly installed version.
So overall I don't mind having 2 versions of python if someone could tell me how to choose which one runs or if there's some way to update the ArcMap version that would be even better. I don't really want to uninstall ArcMap at the moment.
Any help is appreciated! Thanks! | How to update pandas when python is installed as part of ArcGIS10.4, or another solution | 0 | 0 | 1 | 0 | 0 | 212 |
39,989,680 | 2016-10-12T02:41:00.000 | 0 | 0 | 0 | 0 | 0 | python | 0 | 39,989,752 | 0 | 1 | 0 | false | 0 | 0 | I think you're going to have to extract your own snippets by opening and reading the url in the search result. | 1 | 1 | 0 | 0 | I am now trying the python module google which only return the url from the search result. And I want to have the snippets as information as well, how could I do that?(Since the google web search API is deprecated) | How can I get the google search snippets using Python? | 0 | 0 | 1 | 0 | 1 | 167 |
40,001,836 | 2016-10-12T14:55:00.000 | 1 | 0 | 1 | 0 | 0 | python,python-3.5,projects-and-solutions,spyder | 0 | 44,144,280 | 0 | 1 | 0 | false | 0 | 0 | There is an easy solution to this, at least for simple cases and as of May 2017 (Spyder 3.1.2): Create a new empty project in Spyder 3. The new project directory will then have a subdirectory named ".spyproject" with these files in it: codestyle.ini, encoding.ini, vcs.ini, workspace.ini. Copy the entire .spyproject subdirectory to the old Spyder 2 project directory.
This allows Spyder 3 to at least see the source files in the old project directory, even if all the settings don't track.
I only have dumb use cases (e.g. no "related projects") in my Spyder 2 projects. But this way I don't have to generate 75 new projects and manually import the old files. | 1 | 1 | 0 | 0 | The past few months, I've been working on a project using Spyder2 IDE with Python 2.7. However, now I'm being instructed to look into ways of translating the program from Python 2.7 to Python 3.5, which means I'm using Anaconda3 now instead of Anaconda2, and that means I'm using Spyder3 as the default IDE instead of Spyder2. I want to be able to import the entire project, but Spyder3 does not recognize it as such. So how to I import a Spyder2 Project into the Spyder3 IDE? | Import Project from Spyder2 to Spyder3 | 0 | 0.197375 | 1 | 0 | 0 | 505 |
40,013,849 | 2016-10-13T06:34:00.000 | 0 | 1 | 0 | 0 | 0 | php,python,web,messagebox | 0 | 40,015,293 | 0 | 1 | 0 | false | 0 | 0 | I think you will need to read about pub/sub for messaging services. For php, you can use libraries such as redis.
So for e.g, user1 subscribe to topic1, any user which publish to topic1, user1 will be notified, and you can implement what will happen to the user1. | 1 | 0 | 0 | 0 | I am running a website where user can send in-site message (no instantaneity required) to other user, and the receiver will get a notification about the message.
Now I am using a simple system to implement that, detail below.
Table Message:
id
content
receiver
sender
Table User:
some info
notification
some info
When a User A send message to User B, a record will be add to Message and the B.notification will increase by 1. When B open the message box the notification will decrease to 0.
It's simple but does well.
I wonder how you/company implement message system like that.
No need to care about UE(like confirm which message is read by user), just the struct implement.
Thank a lot :D | How to implement a message system? | 0 | 0 | 1 | 0 | 0 | 123 |
40,020,767 | 2016-10-13T12:17:00.000 | 2 | 0 | 0 | 0 | 0 | python,apache-spark,ibm-cloud,ibm-cloud-plugin | 0 | 40,021,035 | 0 | 1 | 0 | true | 0 | 0 | In a Python notebook:
!pip install <package>
and then
import <package> | 1 | 0 | 1 | 0 | 1) I have Spark on Bluemix platform, how do I add a library there ?
I can see the preloaded libraries but cant add a library that I want.
Any command line argument that will install a library?
pip install --package is not working there
2) I have Spark and Mongo DB running, but I am not able to connect both of them.
con ='mongodb://admin:ITCW....ssl=true'
ssl1 ="LS0tLS ....."
client = MongoClient(con,ssl=True)
db = client.mongo11
collection = db.mongo11
ff=db.sammy.find()
Error I am getting is :
SSL handshake failed: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590) | Add a library in Spark in Bluemix & connect MongoDB , Spark together | 0 | 1.2 | 1 | 1 | 0 | 106 |
40,033,066 | 2016-10-14T00:21:00.000 | 2 | 0 | 0 | 1 | 0 | python,linux | 0 | 40,033,097 | 0 | 1 | 0 | true | 0 | 0 | You can set up the script to run via cron, configuring time as @reboot
With python scripts, you will not need to compile it. You might need to install it, depending on what assumptions your script makes about its environment. | 1 | 2 | 0 | 0 | I've been learning Python for a project required for work. We are starting up a new server that will be running linux, and need a python script to run that will monitor a folder and handle files when they are placed in the folder.
I have the python "app" working, but I'm having a hard time finding how to make this script run when the server is started. I know it's something simple, but my linux knowledge falls short here.
Secondary question: As I understand it I don't need to compile or install this application, basically just call the start script to run it. Is that correct? | Run a python application/script on startup using Linux | 0 | 1.2 | 1 | 0 | 0 | 87 |
40,046,656 | 2016-10-14T15:18:00.000 | 1 | 0 | 1 | 1 | 0 | python,python-2.7,centos | 0 | 40,047,015 | 1 | 1 | 0 | false | 0 | 0 | Replacing 2.7.6 with 2.7.12 would be fine using the procedure you linked.
There should be no real problems with libraries installed with pip easy_install as the version updates are minor.
Worst comes to worst and there is a library conflict it would be because the python library used for compiling may be different and you can always reinstall the library which would recompile against the correct python library if required. This is only problematic if the library being installed is actually compiled against the python library. Pure python packages would not be affected.
If you were doing a major version change this would be okay as well as on CentOS you have to call python with python2.7 and not python, so a new version would call with python2.8 | 1 | 0 | 0 | 0 | I run a script on several CentOS machines that compiles Python 2.7.6 from source and installs it. I would now like to update the script so that it updates Python to 2.7.12, and don't really know how to tackle this.
Should I do this exactly the same way, just with source code of higher version, and it will overwrite the old Python version?
Should I first uninstall the old Python version? If so, then how?
Sorry if this is trivial - I tried Googleing and searching through Stack, but did not found anyone with a similar problem. | Updating Python version that's compiled from source | 0 | 0.197375 | 1 | 0 | 0 | 240 |
40,071,987 | 2016-10-16T15:27:00.000 | 1 | 0 | 1 | 0 | 0 | python,linux,virtualenv,archlinux-arm,pacman-package-manager | 0 | 40,072,017 | 0 | 1 | 0 | true | 0 | 0 | You can create the virtualenv with the --system-site-packages switch to use system-wide packages in addition to the ones installed in the stdlib. | 1 | 0 | 0 | 0 | My system is Archlinux.
My project will use NumPy, and my project is in a virtual environment created by virtualenv.
As it is difficult to install NumPy by pip, I install it by Pacman:
sudo pacman -S python-scikit-learn
But how can I use it in virtualenv? | How to use the NumPy installed by Pacman in virtualenv? | 0 | 1.2 | 1 | 0 | 0 | 492 |
40,099,001 | 2016-10-18T03:26:00.000 | 1 | 0 | 0 | 0 | 0 | python-xarray | 1 | 40,099,554 | 1 | 1 | 0 | false | 0 | 0 | Use "conda install xarray==0.8.0" if you're using anaconda, or "pip install xarray==0.8.0" otherwise. | 1 | 1 | 1 | 0 | I am reading other's pickle file that may have data type based on xarray. Now I cannot read in the pickle file with the error "No module named core.dataset".
I guess this maybe a xarray issue. My collaborator asked me to change my version to his version and try again.
My version is 0.8.2, and his version 0.8.0. So how can I change back to his version?
Thanks! | how to install previous version of xarray | 0 | 0.197375 | 1 | 0 | 0 | 999 |
40,109,065 | 2016-10-18T13:02:00.000 | 1 | 0 | 0 | 0 | 0 | python,openerp,odoo-9 | 0 | 40,124,695 | 0 | 1 | 0 | true | 1 | 0 | Your smart button on partners should use a new action, like the button for customer or vendor bills. This button definition should include context="{'default_partner_id': active_id} which will allow to change the partner filter later on, or the upcoming action definition should include the partner in its domain.
The action should be for model account.invoice and have to have the following domain:
[('date_due', '<', time.strftime('%Y-%m-%d')), ('state', '=', 'open')]
If you want to filter only outgoing (customer invoices) add a filter tuple for field type. | 1 | 2 | 0 | 0 | In accounting -> Customer Invoices, there is a filter called Overdue. Now I want to calculate the overdue payments per user and then display it onto the customer form view.
I just want to know how can we apply the condition of filter in python code. I have already defined a smart button to display it with a (total invoice value) by inheriting account.invoice.
"Overdue" filter in invoice search view:
['&', ('date_due', '<', time.strftime('%Y-%m-%d')), ('state', '=', 'open')] | Display Sum of overdue payments in Customer Form view for each customer | 1 | 1.2 | 1 | 0 | 0 | 270 |
40,120,312 | 2016-10-19T00:48:00.000 | -3 | 0 | 0 | 1 | 1 | python,django,celery,amazon-elastic-beanstalk,celerybeat | 0 | 40,166,437 | 0 | 2 | 0 | true | 1 | 0 | In case someone experience similar issues: I ended up switching to a different Queue / Task framework for django. It is called django-q and was set up and working in less than an hour. It has all the features that I needed and also better Django integration than Celery (since djcelery is no longer active).
Django-q is super easy to use and also lighter than the huge Celery framework. I can only recommend it! | 2 | 13 | 0 | 0 | I am trying to figure out the best way to structure a Django app that uses Celery to handle async and scheduled tasks in an autoscaling AWS ElasticBeanstalk environment.
So far I have used only a single instance Elastic Beanstalk environment with Celery + Celerybeat and this worked perfectly fine. However, I want to have multiple instances running in my environment, because every now and then an instance crashes and it takes a lot of time until the instance is back up, but I can't scale my current architecture to more than one instance because Celerybeat is supposed to be running only once across all instances as otherwise every task scheduled by Celerybeat will be submitted multiple times (once for every EC2 instance in the environment).
I have read about multiple solutions, but all of them seem to have issues that don't make it work for me:
Using django cache + locking: This approach is more like a quick fix than a real solution. This can't be the solution if you have a lot of scheduled tasks and you need to add code to check the cache for every task. Also tasks are still submitted multiple times, this approach only makes sure that execution of the duplicates stops.
Using leader_only option with ebextensions: Works fine initially, but if an EC2 instance in the enviroment crashes or is replaced, this would lead to a situation where no Celerybeat is running at all, because the leader is only defined once at the creation of the environment.
Creating a new Django app just for async tasks in the Elastic Beanstalk worker tier: Nice, because web servers and workers can be scaled independently and the web server performance is not affected by huge async work loads performed by the workers. However, this approach does not work with Celery because the worker tier SQS daemon removes messages and posts the message bodies to a predefined urls. Additionally, I don't like the idea of having a complete additional Django app that needs to import the models from the main app and needs to be separately updated and deployed if the tasks are modified in the main app.
How to I use Celery with scheduled tasks in a distributed Elastic Beanstalk environment without task duplication? E.g. how can I make sure that exactly one instance is running across all instances all the time in the Elastic Beanstalk environment (even if the current instance with Celerybeat crashes)?
Are there any other ways to achieve this? What's the best way to use Elastic Beanstalk's Worker Tier Environment with Django? | Multiple instances of celerybeat for autoscaled django app on elasticbeanstalk | 1 | 1.2 | 1 | 0 | 0 | 1,251 |
40,120,312 | 2016-10-19T00:48:00.000 | 1 | 0 | 0 | 1 | 1 | python,django,celery,amazon-elastic-beanstalk,celerybeat | 0 | 54,745,929 | 0 | 2 | 0 | false | 1 | 0 | I guess you could single out celery beat to different group.
Your auto scaling group runs multiple django instances, but celery is not included in the ec2 config of the scaling group.
You should have different set (or just one) of instance for celery beat | 2 | 13 | 0 | 0 | I am trying to figure out the best way to structure a Django app that uses Celery to handle async and scheduled tasks in an autoscaling AWS ElasticBeanstalk environment.
So far I have used only a single instance Elastic Beanstalk environment with Celery + Celerybeat and this worked perfectly fine. However, I want to have multiple instances running in my environment, because every now and then an instance crashes and it takes a lot of time until the instance is back up, but I can't scale my current architecture to more than one instance because Celerybeat is supposed to be running only once across all instances as otherwise every task scheduled by Celerybeat will be submitted multiple times (once for every EC2 instance in the environment).
I have read about multiple solutions, but all of them seem to have issues that don't make it work for me:
Using django cache + locking: This approach is more like a quick fix than a real solution. This can't be the solution if you have a lot of scheduled tasks and you need to add code to check the cache for every task. Also tasks are still submitted multiple times, this approach only makes sure that execution of the duplicates stops.
Using leader_only option with ebextensions: Works fine initially, but if an EC2 instance in the enviroment crashes or is replaced, this would lead to a situation where no Celerybeat is running at all, because the leader is only defined once at the creation of the environment.
Creating a new Django app just for async tasks in the Elastic Beanstalk worker tier: Nice, because web servers and workers can be scaled independently and the web server performance is not affected by huge async work loads performed by the workers. However, this approach does not work with Celery because the worker tier SQS daemon removes messages and posts the message bodies to a predefined urls. Additionally, I don't like the idea of having a complete additional Django app that needs to import the models from the main app and needs to be separately updated and deployed if the tasks are modified in the main app.
How to I use Celery with scheduled tasks in a distributed Elastic Beanstalk environment without task duplication? E.g. how can I make sure that exactly one instance is running across all instances all the time in the Elastic Beanstalk environment (even if the current instance with Celerybeat crashes)?
Are there any other ways to achieve this? What's the best way to use Elastic Beanstalk's Worker Tier Environment with Django? | Multiple instances of celerybeat for autoscaled django app on elasticbeanstalk | 1 | 0.099668 | 1 | 0 | 0 | 1,251 |
40,126,407 | 2016-10-19T08:47:00.000 | 2 | 0 | 0 | 0 | 0 | python,image,image-processing,rgb | 0 | 40,127,791 | 0 | 1 | 0 | true | 0 | 0 | Per color plane, replace the pixel at (X, Y) by the pixel at (X-1, Y+3), for example. (Of course your shifts will be different.)
You can do that in-place, taking care to loop by increasing or decreasing coordinate to avoid overwriting.
There is no need to worry about transparency. | 1 | 1 | 0 | 0 | What I'm trying to do is recreating what is commonly called an "RGB shift" effect, which is very easy to achieve with image manipulation programs.
I imagine I can "split" the channels of the image by either opening the image as a matrix of triples or opening the image three times and every time operate just on one channel, but I wouldn't know how to "offset" the channels when merging them back together (possibly by creating a new image and position each channel's [0,0] pixel in an offsetted position?) and reduce each channel's opacity as to not show just the last channel inserted into the image.
Has anyone tried to do this? Do you know if it is possible? If so, how did you do it?
Thanks everyone in advance! | Split and shift RGB channels in Python | 1 | 1.2 | 1 | 0 | 0 | 1,180 |
40,141,313 | 2016-10-19T20:51:00.000 | 0 | 1 | 0 | 0 | 0 | python,amazon-web-services,aws-lambda | 0 | 56,790,848 | 0 | 1 | 0 | true | 0 | 0 | In IoT Code, Create a rule for invoking a Lambda to accept JSON data. Then you can do anything with that data. | 1 | 0 | 0 | 0 | I am Publishing data from Raspberry Pi to AWS IoT and I can see the updates there.
Now, I need to get that data into AWS Lambda and connect it to AWS SNS to send a message above a threshold. I know about working with SNS and IoT.
I just want to know that how I can get the data from AWS IoT to AWS Lambda ??
Please Help !!
Thanks :) | Stream data from AWS IoT to AWS Lambda using Python? | 0 | 1.2 | 1 | 0 | 0 | 471 |
40,142,959 | 2016-10-19T23:10:00.000 | 5 | 0 | 0 | 0 | 0 | python | 0 | 41,708,567 | 0 | 1 | 0 | false | 0 | 0 | Just looking for the answer to this myself. gmplot was updated to June 2016 to include a hovertext functionality for the marker method, but unfortunately this isn't available for the scatter method. The enthusiastic user will find that the scatter method simply calls the marker method over and over, and could modify the scatter method itself to accept a title or range of titles.
If like myself you are using an older version, make sure to run
pip install --upgrade gmplot
and to place a marker with hovertext (mouse hovering over pin without clicking)
gmap=gmplot.GoogleMapPlotter("Seattle")
gmap.marker(47.61028142523736, -122.34147349538826, title="A street corner in Seattle")
st="testmap.html"
gmap.draw(st) | 1 | 3 | 1 | 1 | I plotted some points on google maps using gmplot's scatter method (python). I want to add some text to the points so when someone clicks on those points they can see the text.
I am unable to find any documentation or example that shows how to do this.
Any pointers are appreciated. | Add text to scatter point using python gmplot | 0 | 0.761594 | 1 | 0 | 0 | 8,147 |
40,148,265 | 2016-10-20T07:39:00.000 | 0 | 0 | 1 | 0 | 0 | python,jupyter,jupyter-notebook | 0 | 42,150,227 | 0 | 2 | 0 | false | 0 | 0 | Did you install python by Anaconda?
Try to install under Anaconda2/envs when choosing destination folder,
like this:
D:/Anaconda2/envs/py3
then"activate py3" by cmd, py3 must be then same name of installation folder | 2 | 0 | 0 | 0 | I have anaconda2 and anaconda3 installed on windows machine, have no access to internet and administrator rights. How can I switch between python 2 and 3 when starting jupyter? Basic "jupyter notebook" command starts python 2. With internet I would just add environment for python 3 and select it in jupyter notebook after start but how can I do this in this situation? | jupyter notebook select python | 0 | 0 | 1 | 0 | 0 | 1,469 |
40,148,265 | 2016-10-20T07:39:00.000 | 1 | 0 | 1 | 0 | 0 | python,jupyter,jupyter-notebook | 0 | 42,550,420 | 0 | 2 | 0 | false | 0 | 0 | There's important points to consider:
you have to have jupyter notebook installed in each environment you want to run it from
if jupyter is only installed in one environment, your notebook will default to that environment no matter from which environment your start it, and you will have no option to change the notebook kernel (i.e. the conda package, and therefore which python version to use for your notebook)
You can list the packages in your environment with conda list. You can also check what environments exist with conda info --envs to make sure there's indeed one with python 3 (and use conda list to check it has jupyter installed).
From what you write, since your notebook defaults to python2, conda list should show you python 2 related packages.
So, as has been pointed, first activate the anaconda environment for python3 with the command activate your_python3_environment then restart your Notebook.
You don't need internet for this but you do need to be able to swap between anaconda2 and 3 (which you say are both installed) and both should have jupyter installed. | 2 | 0 | 0 | 0 | I have anaconda2 and anaconda3 installed on windows machine, have no access to internet and administrator rights. How can I switch between python 2 and 3 when starting jupyter? Basic "jupyter notebook" command starts python 2. With internet I would just add environment for python 3 and select it in jupyter notebook after start but how can I do this in this situation? | jupyter notebook select python | 0 | 0.099668 | 1 | 0 | 0 | 1,469 |
40,193,388 | 2016-10-22T14:38:00.000 | 0 | 0 | 0 | 0 | 0 | python,python-2.7,csv | 0 | 65,651,852 | 0 | 8 | 0 | false | 0 | 0 | I think the best way to check this is -> simply reading 1st line from file and then match your string instead of any library. | 1 | 11 | 1 | 0 | I have a CSV file and I want to check if the first row has only strings in it (ie a header). I'm trying to avoid using any extras like pandas etc. I'm thinking I'll use an if statement like if row[0] is a string print this is a CSV but I don't really know how to do that :-S any suggestions? | How to check if a CSV has a header using Python? | 0 | 0 | 1 | 0 | 0 | 24,553 |
40,194,021 | 2016-10-22T15:47:00.000 | 0 | 0 | 1 | 0 | 1 | python-2.7,spyder | 0 | 41,088,731 | 0 | 2 | 0 | false | 0 | 0 | You Could try to uninstall that version of spyder and download Anaconda, a free package manager that comes pre-installed with spyder and should work fine, as it did for me as I have windows 7 x64 | 1 | 0 | 0 | 0 | After installing Winpython on windows 7 64 bits, when I launch Spyder I face this:
ImportError: No module named encodings
Python 2.7.12 Shell works well but Spyder don't.
Do you know how to solve this problem?
I really appreciate any help you can provide | Can't launch Spyder on windows 7 | 0 | 0 | 1 | 0 | 0 | 609 |
40,195,188 | 2016-10-22T17:45:00.000 | 0 | 1 | 0 | 0 | 0 | php,python,asp.net,raspberry-pi,sms | 0 | 40,195,564 | 0 | 1 | 0 | true | 0 | 0 | What are the (logical) pitfalls of this scenario?
My opion would be to pass the data and the two fields (phoneNumber and SmsType) through a POST request rather then an GET request because you can send more data in an post request and encapsulate it with JSON making it easier to handle the data.
What would be a simpler approch ?
Maybe not simple but more elegant, extend the python script with something like flask and build the webserver right into the python script, saves you running a webserver with php! | 1 | 0 | 0 | 0 | Just discovered the amazing Raspberry Pi 3 and I am trying to learn how to use it in one of my projects.
Setup:
ASP.NET app on Azure.
RPi:
software: Raspbian, PHP, Apache 2, and MariaDB.
has internet access and a web server a configured.
3G dongle for SMS sending, connected to the RPi.
Desired scenario:
when a specific button within the ASP app is clicked:
through jQuery $.ajax() the RPi's ip is called with the parameters phoneNumber and smsType.
then the RPi:
fetches the SMS text from a MariaDB database based on the smsType parameter.
invokes a Python script using the PHP exec("python sendSms.py -p phoneNumber -m fetchedText", $output) (i.e. with the phone number and the fetched text):
script will send the AT command(s) to the dongle.
script will return true or false based on the action of the dongle.
echo the $output to tell the ASP what is the status.
finally, the ASP will launch a JavaScript alert() saying if it worked or not.
This is what I need to accomplish. For most of the parts I found resources and explanations. However, before starting on this path I want to understand few things:
General questions (if you think they are not appropriate, please ignore this category):
What are the (logical) pitfalls of this scenario?
What would be a simpler way to approach this?
Specific questions:
Is there a size limit to consider when passing parameters through the url? | Calling Raspberry Pi from ASP.NET to send a SMS remotely | 0 | 1.2 | 1 | 0 | 0 | 98 |
40,200,840 | 2016-10-23T08:04:00.000 | 1 | 0 | 1 | 0 | 0 | python | 0 | 40,201,438 | 0 | 3 | 0 | false | 0 | 0 | the else suite is executed after the for terminates normally (not by a break).
so it will definitely execute the else statement in your code, because you don't break in the for loop. | 1 | 1 | 0 | 0 | I am writing a program to search a txt file for a certain line based only on part of the string. If the string isn't found, it should print not found once, but it is printing it multiple times. Even after indenting and using a correct code it still prints: | how do i stop the invalid code message repeating while also occuring at the right time | 0 | 0.066568 | 1 | 0 | 0 | 41 |
40,214,784 | 2016-10-24T09:19:00.000 | 0 | 0 | 0 | 0 | 0 | python-2.7,tkinter | 1 | 50,161,426 | 0 | 2 | 0 | false | 0 | 1 | I had the same exact issue with Python-3.4.3. I followed Brice's solution and got halfway there. Not only did I require the -l flags after the -L flag as he suggested, but I discovered my LD_LIBRARY_PATH was inadequate when performing the 'make altinstall'. Be sure to include the same directory in LD_LIBRARY_PATH as used in your -L flag entry. | 1 | 0 | 0 | 0 | python2.7
when I import Tkinter, it prompt no module named _tkinter, I don't have the limits of administrator, so I install tcl and tk, then recompile python with --with-tcltk-includes and --with-tcltk-libs parameter, but when running 'make', the error """*** WARNING: renaming "_tkinter" since importing it failed: build/lib.linux-x86_64-2.7/_tkinter.so: undefined symbol: Tk_Init""" occurred, I really don't know how to deal with it
can somebody help me?
thanks! | undefined symbol: Tk_Init | 0 | 0 | 1 | 0 | 0 | 648 |
40,223,807 | 2016-10-24T17:07:00.000 | 2 | 0 | 1 | 1 | 0 | python,pycharm,pickle | 1 | 40,224,304 | 0 | 1 | 0 | true | 0 | 0 | As suggested in the comments, this is most likely because Python is not added to your environment variables. If you do not want to touch your environment variables, and assuming your Python is installed in C:\Python35\,
Navigate tp C:\Python35\ in Windows Explorer
Go to the address bar and type cmd to shoot up a command prompt in that directory
Alternatively to steps 1 and 2, directly shoot up a command prompt, and cd to your Python installation Directory (default: C:\Python35)
Type python -m pip install pip --upgrade there | 1 | 0 | 0 | 0 | When trying to install cPickle using pycharm I get this:
Command "python setup.py egg_info" failed with error code 1 in C:\Users\Edwin\AppData\Local\Temp\pycharm-packaging\cpickle
You are using pip version 7.1.2, however version 8.1.2 is available.
You should consider upgrading via the 'python -m pip install --upgrade pip' command.
So then when I go command prompt to type in:
python -m pip install --upgrade pip
I get this:
'python' is not recognized as an internal or external command, operable program or batch file.
So how do I install cPickle?
BTW: I am using windows & python 3.5.1 | Why can't I install cPickle | Pip needs upgrading? | 0 | 1.2 | 1 | 0 | 0 | 4,792 |
40,236,281 | 2016-10-25T09:29:00.000 | 0 | 1 | 0 | 0 | 0 | php,python,python-3.x,pip,composer-php | 0 | 40,252,615 | 0 | 1 | 0 | true | 0 | 0 | I've decided to create separate PHP package for my PHP library, and upload it to a packagist.org, so, user could get it using php composer, but not forced to, as it would be in case of including library.php into python package. | 1 | 0 | 0 | 0 | I've created tool, that runs as a server, and allow clients to connect to it through TCP, and run some commands. It's written on python 3
Now I'm going to build package and upload it to Pypi, and have conceptual problem.
This tool have python client library inside, so, after installation of the package, it'll be possible to just import library into python script, and use for connection to the daemon without dealing with raw TCP/IP.
Also, I have PHP library, for connection to me server, and the problem is - I don't know how to include it into my python package the right way.
Variants, that I found and can't choose the right one:
Just include library.php file into package, and after running "pip install my_package", I would write "require('/usr/lib/python3/dist-packages/my_package/library.php')" into my php file. This way allows to distribute library with the server, and update it synchronously, but add long ugly paths to php require instruction.
As library.php in placed on github repository, I could just publish it's url in the docs, and it'll be possible to just clone repository. It makes possible to clone repo, and update library by git pull.
Create separate package with my library.php, upload it into packagist, and use composer to download it when it's needed. Good for all composer users, and allow manual update, but doens't update with server's package.
Maybe I've missed some other variants.
I want to know what would be true python'ic and php'ic way to do this. Thanks. | How to include PHP library in Python package (on not do it) | 0 | 1.2 | 1 | 0 | 1 | 605 |
40,251,259 | 2016-10-25T23:19:00.000 | 1 | 0 | 0 | 0 | 0 | python,python-3.x,tkinter,console | 1 | 40,251,792 | 0 | 2 | 0 | true | 0 | 1 | You can apply modifiers to the text widget indicies, such as linestart and lineend as well as adding and subtracting characters. The index after the last character is "end".
Putting that all together, you can get the start of the last line with "end-1c linestart". | 2 | 3 | 0 | 0 | I am working on a virtual console, which would use the systems builtin commands and then do the action and display output results on next line in console. This is all working, but how do I get the contents of the last line, and only the last line in the tkinter text widget? Thanks in advance. I am working in python 3.
I have treed using text.get(text.linestart, text.lineend) To no avail. Have these been deprecated? It spits out an error saying that AttributeError: 'Text' object has no attribute 'linestart' | How to get the contents of last line in tkinter text widget (Python 3) | 0 | 1.2 | 1 | 0 | 0 | 2,994 |
40,251,259 | 2016-10-25T23:19:00.000 | 0 | 0 | 0 | 0 | 0 | python,python-3.x,tkinter,console | 1 | 56,634,803 | 0 | 2 | 0 | false | 0 | 1 | Test widget has a see(index) method.
text.see(END) will scroll the text to the last line. | 2 | 3 | 0 | 0 | I am working on a virtual console, which would use the systems builtin commands and then do the action and display output results on next line in console. This is all working, but how do I get the contents of the last line, and only the last line in the tkinter text widget? Thanks in advance. I am working in python 3.
I have treed using text.get(text.linestart, text.lineend) To no avail. Have these been deprecated? It spits out an error saying that AttributeError: 'Text' object has no attribute 'linestart' | How to get the contents of last line in tkinter text widget (Python 3) | 0 | 0 | 1 | 0 | 0 | 2,994 |
40,266,219 | 2016-10-26T15:24:00.000 | 1 | 0 | 0 | 0 | 0 | python,proxy,web-scraping,scrapy,web-crawler | 1 | 40,296,417 | 0 | 1 | 1 | true | 1 | 0 | Thanks.. I figure out here.. the problem is that some proxy location doesn't work with https.. so I just changed it and now it is working. | 1 | 0 | 0 | 0 | I am using a proxy (from proxymesh) to run a spider written in scrapy python, the script is running normally when I don't use the proxy, but when I use it, I am having the following error message:
Could not open CONNECT tunnel with proxy fr.proxymesh.com:31280 [{'status': 408, 'reason': 'request timeout'}]
Any clue about how to figure out?
Thanks in advance. | Proxy Error 408 when running a script written in Scrapy Python | 0 | 1.2 | 1 | 0 | 1 | 497 |
40,269,957 | 2016-10-26T18:46:00.000 | 0 | 0 | 0 | 0 | 0 | python,opencv,tkinter | 0 | 40,271,580 | 0 | 1 | 0 | true | 0 | 1 | OpenCV window in Tkinter window is not good idea. Both windows use own mainloop (event loop) which can't work at the same time (or you have to use threading) and don't have contact one with other.
Probably it is easier to get video frame and display in Tkinter window on Label or Canvas. You can use tk.after(miliseconds, function_name) to run periodically function which will update video frame in Tkinter window. | 1 | 0 | 0 | 0 | I am making a hand controlled media player application in Python and through OpenCV.I want to embed gesture window of OpenCV in Tkinter frame so I can add further attributes to it.
Can someone tell how to embed OpenCV camera window into Tkinter frame? | opencv window in tkinter frame | 0 | 1.2 | 1 | 0 | 0 | 747 |
40,277,199 | 2016-10-27T06:01:00.000 | 0 | 0 | 0 | 0 | 0 | python,flask | 0 | 40,277,255 | 0 | 1 | 0 | false | 1 | 0 | If you want the user to stay in place, you should send the form using JavaScript asynchronously. That way, the browser won't try to fetch and render a new page.
You won't be able to get this behavior from the Flask end only. You can return effectively nothing but the browser will still try to get it and render that nothing for the client. | 1 | 0 | 0 | 0 | I have written a python function using flask framework to process some data submitted via a web form. However I don't want to re-render the template, I really just want to process the data and the leave the web form, it the state it was in, when the POST request was created. Not sure how to do this ... any suggestions ? | Flask return nothing, instead of having the re-render template | 0 | 0 | 1 | 0 | 0 | 795 |
40,284,296 | 2016-10-27T12:12:00.000 | 4 | 0 | 1 | 0 | 0 | python,numpy,matrix | 0 | 40,284,356 | 0 | 1 | 0 | true | 0 | 0 | The Hermitian part is (A + A.T.conj())/2, the anti-hermitian part is (A - A.T.conj())/2 (it is quite easy to prove).
If A = B + C with B Hermitian and C anti-Hermitian, you can take the conjugate (I'll denote it *) on both sides, uses its linearity and obtain A* = B - C, from which the values of B and C follow easily. | 1 | 0 | 1 | 0 | Imagine I have a numpy array in python that has complex numbers as its elements.
I would like to know if it is possible to split any matrix of this kind into a hermitian and anti-hermitian part? My intuition says that this is possible, similar to the fact that any function can be split into an even and an uneven part.
If this is indeed possible, how would you do this in python? So, I'm looking for a function that takes as input any matrix with complex elements and gives a hermitian and non-hermitian matrix as output such that the sum of the two outputs is the input.
(I'm working with python 3 in Jupyter Notebook). | python: split matrix in hermitian and anti-hermitian part | 1 | 1.2 | 1 | 0 | 0 | 309 |
40,287,113 | 2016-10-27T14:20:00.000 | 4 | 0 | 0 | 0 | 1 | python,python-3.x,numpy,linear-regression | 0 | 40,293,068 | 0 | 1 | 1 | false | 0 | 0 | some brief answers
1) Calling statsmodels repeatedly is not the fastest way. If we just need parameters, prediction and residual and we have identical explanatory variables, then I usually just use params = pinv(x).dot(y) where y is 2 dimensional and calculate the rest from there. The problem is that inference, confidence intervals and similar require work, so unless speed is crucial and only a restricted set of results is required, statsmodels OLS is still more convenient.
This only works if all y and x have the same observations indices, no missing values and no gaps.
Aside: The setup is a Multivariate Linear Model which will be supported by statsmodels in, hopefully not very far, future.
2) and 3) The fast simple linear algebra of case 1) does not work if there are missing cells or no complete overlap of observation (indices). In the analog to panel data, the first case requires "balanced" panels, the other cases imply "unbalanced" data. The standard way is to stack the data with the explanatory variables in a block-diagonal form. Since this increases the memory by a large amount, using sparse matrices and sparse linear algebra is better. It depends on the specific cases whether building and solving the sparse problem is faster than looping over individual OLS regressions.
Specialized code: (Just a thought):
In case 2) with not fully overlapping or cellwise missing values, we would still need to calculate all x'x, and x'y matrices for all y, i.e. 500 of those. Given that you only have two regressors 500 x 2 x 2 would still not require a large memory. So it might be possible to calculate params, prediction and residuals by using the non-missing mask as weights in the cross-product calculations.
numpy has vectorized linalg.inv, as far as I know. So, I think, this could be done with a few vectorized calculations. | 1 | 4 | 1 | 0 | I think I have a pretty reasonable idea on how to do go about accomplishing this, but I'm not 100% sure on all of the steps. This question is mostly intended as a sanity check to ensure that I'm doing this in the most efficient way, and that my math is actually sound (since my statistics knowledge is not completely perfect).
Anyways, some explanation about what I'm trying to do:
I have a lot of time series data that I would like to perform some linear regressions on. In particular, I have roughly 2000 observations on 500 different variables. For each variable, I need to perform a regression using two explanatory variables (two additional vectors of roughly 2000 observations). So for each of 500 different Y's, I would need to find a and b in the following regression Y = aX_1 + bX_2 + e.
Up until this point, I have been using the OLS function in the statsmodels package to perform my regressions. However, as far as I can tell, if I wanted to use the statsmodels package to accomplish my problem, I would have to call it hundreds of times, which just seems generally inefficient.
So instead, I decided to revisit some statistics that I haven't really touched in a long time. If my knowledge is still correct, I can put all of my observations into one large Y matrix that is roughly 2000 x 500. I can then stick my explanatory variables into an X matrix that is roughly 2000 x 2, and get the results of all 500 of my regressions by calculating (X'Y)/(X'X). If I do this using basic numpy stuff (matrix multiplication using * and inverses using matrix.I), I'm guessing it will be much faster than doing hundreds of statsmodel OLS calls.
Here are the questions that I have:
Is the numpy stuff that I am doing faster than the earlier method of calling statsmodels many times? If so, is it the fastest/most efficient way to accomplish what I want? I'm assuming that it is, but if you know of a better way then I would be happy to hear it. (Surely I'm not the first person to need to calculate many regressions in this way.)
How do I deal with missing data in my matrices? My time series data is not going to be nice and complete, and will be missing values occasionally. If I just try to do regular matrix multiplication in numpy, the NA values will propagate and I'll end up with a matrix of mostly NAs as my end result. If I do each regression independently, I can just drop the rows containing NAs before I perform my regression, but if I do this on the large 2000 x 500 matrix I will end up dropping actual, non-NA data from some of my other variables, and I obviously don't want that to happen.
What is the most efficient way to ensure that my time series data actually lines up correctly before I put it into the matrices in the first place? The start and end dates for my observations are not necessarily the same, and some series might have days that others do not. If I were to pick a method for doing this, I would put all the observations into a pandas data frame indexed by the date. Then pandas will end up doing all of the work aligning everything for me and I can extract the underlying ndarray after it is done. Is this the best method, or does pandas have some sort of overhead that I can avoid by doing the matrix construction in a different way? | Fastest way to calculate many regressions in python? | 0 | 0.664037 | 1 | 0 | 0 | 3,105 |
40,296,765 | 2016-10-28T01:49:00.000 | 1 | 0 | 0 | 0 | 0 | python,nlp,gensim,doc2vec | 0 | 41,733,461 | 0 | 1 | 0 | false | 0 | 0 | Gensim's Word2Vec/Doc2Vec models don't store the corpus data – they only examine it, in multiple passes, to train up the model. If you need to retrieve the original texts, you should populate your own lookup-by-key data structure, such as a Python dict (if all your examples fit in memory).
Separately, in recent versions of gensim, your code will actually be doing 1,005 training passes over your taggeddocs, including many with a nonsensically/destructively negative alpha value.
By passing it into the constructor, you're telling the model to train itself, using your parameters and defaults, which include a default number of iter=5 passes.
You then do 200 more loops. Each call to train() will do the default 5 passes. And by decrementing alpha from 0.025 by 0.002 199 times, the last loop will use an effective alpha of 0.025-(200*0.002)=-0.375 - a negative value essentially telling the model to make a large correction in the opposite direction of improvement each training-example.
Just use the iter parameter to choose the desired number of passes. Let the class manage the alpha changes itself. If supplying the corpus when instantiating the model, no further steps are necessary. But if you don't supply the corpus at instantiation, you'll need to do model.build_vocab(tagged_docs) once, then model.train(tagged_docs) once. | 1 | 1 | 1 | 0 | I am preparing a Doc2Vec model using tweets. Each tweet's word array is considered as a separate document and is labeled as "SENT_1", SENT_2" etc.
taggeddocs = []
for index,i in enumerate(cleaned_tweets):
if len(i) > 2: # Non empty tweets
sentence = TaggedDocument(words=gensim.utils.to_unicode(i).split(), tags=[u'SENT_{:d}'.format(index)])
taggeddocs.append(sentence)
# build the model
model = gensim.models.Doc2Vec(taggeddocs, dm=0, alpha=0.025, size=20, min_alpha=0.025, min_count=0)
for epoch in range(200):
if epoch % 20 == 0:
print('Now training epoch %s' % epoch)
model.train(taggeddocs)
model.alpha -= 0.002 # decrease the learning rate
model.min_alpha = model.alpha # fix the learning rate, no decay
I wish to find tweets similar to a given tweet, say "SENT_2". How?
I get labels for similar tweets as:
sims = model.docvecs.most_similar('SENT_2')
for label, score in sims:
print(label)
It prints as:
SENT_4372
SENT_1143
SENT_4024
SENT_4759
SENT_3497
SENT_5749
SENT_3189
SENT_1581
SENT_5127
SENT_3798
But given a label, how do I get original tweet words/sentence? E.g. what are the tweet words of, say, "SENT_3497". Can I query this to Doc2Vec model? | How to extract words used for Doc2Vec | 0 | 0.197375 | 1 | 0 | 0 | 1,260 |
40,309,777 | 2016-10-28T16:55:00.000 | 2 | 0 | 0 | 0 | 0 | python,regex,pandas | 0 | 40,310,470 | 0 | 2 | 0 | false | 0 | 0 | The problem is that the match function does not return True when it matches, it returns a match object. Pandas cannot add this match object because it is not an integer value. The reason you get a sum when you are using 'not' is because it returns a boolean value of True, which pandas can sum the True value and return a number. | 1 | 3 | 1 | 0 | I can find the number of rows in a column in a pandas dataframe that do NOT follow a pattern but not the number of rows that follow the very same pattern!
This works:
df.report_date.apply(lambda x: (not re.match(r'[0-9]{4}-[0-9]{1,2}-[0-9]{1,2}', x))).sum()
This does not: removing 'not' does not tell me how many rows match but raises a TypeError. Any idea why that would be the case?
df.report_date.apply(lambda x: (re.match(r'[0-9]{4}-[0-9]{1,2}-[0-9]{1,2}', x))).sum() | cannot sum rows that match a regular expression in pandas / python | 0 | 0.197375 | 1 | 0 | 0 | 883 |
40,321,096 | 2016-10-29T16:16:00.000 | 1 | 0 | 1 | 0 | 0 | python,string,format,python-requests,bs4 | 0 | 40,321,224 | 0 | 1 | 0 | false | 0 | 0 | Just use repr()
Like:
print(repr(<variable with string>)) | 1 | 0 | 0 | 0 | So, I have been playing around with requests and bs4 for a project I'm working on and have managed to return the following in a variable:
"----------
Crossways Inn
Withy Road
West Huntspill
Somerset
TA93RA
01278783756
www.crosswaysinn.com
----------"
This was scraped from a site, using .text function within the bs4 module.
Is there any way I can format this within my program to look like the following:
"----------\n
Crossways Inn\n
Withy Road\n
West Huntspill\n
Somerset\n
TA93RA\n
01278783756\n
www.crosswaysinn.com\n
----------\n"
Sorry for the vague explanation of what I want to do, but do not know how to explain it better. Thanks! | How do I format the contents of the following variable in Python? | 0 | 0.197375 | 1 | 0 | 0 | 45 |
40,321,668 | 2016-10-29T17:23:00.000 | 1 | 0 | 1 | 0 | 0 | c++,python-2.7,struct,namedtuple | 0 | 40,336,177 | 0 | 2 | 0 | false | 0 | 0 | namedtuple is implemented purely in Python. You can see its full source in collections.py. It's very short. The thing to keep in mind is that namedtuple itself is a function which creates a class in the frame in which it is called and then returns this class (not an instance of this class). And it is this returned class that is then used to create instances. So the object which you get is not what you want to pass into C++ if you want to pass individual instances.
C++ creates struct definitions at compile time. namedtuple creates namedtuple classes at run time. If you want to bind them to C++ structs, either use the PyObject to create your newly minted class' instances inside of C++ and assign them to struct elements at compile time. Or create the newly minted class' instances in Python and pass them to C++.
Or you can use _asdict method (provided by namedtuple factory method for all classes it builds) and pass that to C++ to then do the binding of run-time defined data to compile-time defined data.
If you really want to do the bulk of the work in C++, you may also use the Struct module instead of using namedtuple.
namedtuple is really the swiss-army knife of Python for data which stays in Python. It gives positional access, named access, and all the elements are also "properties" (so they have fget accessor method which can be used in maps, filters, etc. instead of having to write your own lambdas).
It's there for things like DB binding (when you don't know which columns will be there at run time). It's less clunky than OrderedDict for converting data from one format into another. When it's used that way, the overhead of processing strings is nothing compared to actual access of the db (even embedded). But I wouldn't use namedtuple for large arrays of structs which are meant to be used in calculations. | 1 | 1 | 0 | 0 | I have searched the internet for hours at this point. Does anyone know how to parse a namedtuple returned from a python function into a struct or just into separate variables. The part I am having trouble with is getting the data out of the returned pointer. I am calling a python function embedded in C++ using the PyObject_CallFunction() call and I don't know what to do once I have the PyObject* to the returned data.
I am using Python 2.7 for reference.
EDIT: I ended up moving all of the functionality I was trying to do in both Python and C++ to just Python for now. I will update in the near future about attempting the strategy suggested in the comments of this question. | Python NamedTuple to C++ Struct | 0 | 0.099668 | 1 | 0 | 0 | 1,655 |
40,323,573 | 2016-10-29T20:54:00.000 | 0 | 0 | 1 | 0 | 0 | python,python-2.7 | 0 | 40,323,605 | 0 | 2 | 0 | false | 0 | 0 | Square brackets typically mean that the value is optional. Here, varname refers to the environment variable you want to get and value is an optional value that is return if the environment variable doesn't exist. | 1 | 0 | 0 | 0 | I have a problem understanding some description of functions in Python.
I understand simply functions like os.putenv(varname, value) but I have no idea how to use this: os.getenv(varname[, value]). How to pass arguments to that function, what does those square brackets mean? | How to read Python function documentation | 1 | 0 | 1 | 0 | 0 | 161 |
40,329,307 | 2016-10-30T12:52:00.000 | 0 | 0 | 0 | 0 | 0 | python,neural-network,concatenation,convolution,keras | 0 | 52,020,891 | 0 | 2 | 0 | false | 0 | 0 | I do not understand why to have 3 CNNs because you would mostly have the same results than on a single CNN. Maybe you could train faster.
Perhaps you could also do pooling and some resnet operation (I guess this could prove similar to what you want).
Nevertheless, for each CNN you need a cost function in order to optimize the "heuristic" you use (eg: to improve recognition). Also, you could do something as in the NN Style Transfer in which you compare results between several "targets" (the content and the style matrices); or simply train 3 CNNs then cutoff the last layers (or freeze them) and train again with the already trained weights but now with your target FN layer... | 1 | 4 | 1 | 0 | I want to implement a multiscale CNN in python. My aim is to use three different CNNs for three different scales and concatenate the final outputs of the final layers and feed them to a FC layer to take the output predictions.
But I don't understand how can I implement this. I know how to implement a single scale CNN.
Could anyone help me in this? | Multiscale CNN - Keras Implementation | 0 | 0 | 1 | 0 | 0 | 1,527 |
40,340,100 | 2016-10-31T10:09:00.000 | 4 | 0 | 0 | 0 | 0 | python,unit-testing,apache-kafka,kafka-python | 0 | 40,342,318 | 0 | 3 | 0 | true | 0 | 0 | If you need to verify a Kafka specific feature, or implementation with a Kafka-specific feature, then the only way to do it is by using Kafka!
Does Kafka have any tests around its deduplication logic? If so, the combination of the following may be enough to mitigate your organization's perceived risks of failure:
unit tests of your hash logic (make sure that the same object does indeed generate the same hash)
Kafka topic deduplication tests (internal to Kafka project)
pre-flight smoke tests verifying your app's integration with Kafka
If Kafka does NOT have any sort of tests around its topic deduplication, or you are concerned about breaking changes, then it is important to have automated checks around Kafka-specific functionality. This can be done through integration tests. I have had much success recently with Docker-based integration test pipelines. After the initial legwork of creating a Kafka docker image (one is probably already available from the community), it becomes trivial to set up integration test pipelines. A pipeline could look like:
application-based unit tests are executed (hash logic)
once those pass, your CI server starts up Kafka
integration tests are executed, verifying that duplicate writes only emit a single message to a topic.
I think the important thing is to make sure Kafka integration tests are minimized to ONLY include tests that absolutely rely on Kafka-specific functionality. Even using docker-compose, they may be orders of magnitude slower than unit tests, ~1ms vs 1 second? Another thing to consider is the overhead of maintaining an integration pipeline may be worth the risk of trusting that Kakfa will provide the topic deduplication that it claims to. | 1 | 11 | 0 | 0 | We have a message scheduler that generates a hash-key from the message attributes before placing it on a Kafka topic queue with the key.
This is done for de-duplication purposes. However, I am not sure how I could possibly test this deduplication without actually setting up a local cluster and checking that it is performing as expected.
Searching online for tools for mocking a Kafka topic queue has not helped, and I am concerned that I am perhaps thinking about this the wrong way.
Ultimately, whatever is used to mock the Kafka queue, should behave the same way as a local cluster - i.e. provide de-deuplication with Key inserts to a topic queue.
Are there any such tools? | Python: how to mock a kafka topic for unit tests? | 0 | 1.2 | 1 | 0 | 0 | 11,023 |
40,341,471 | 2016-10-31T11:41:00.000 | 0 | 0 | 1 | 0 | 0 | python,image,python-3.x,python-3.5 | 0 | 40,343,522 | 0 | 1 | 0 | true | 0 | 1 | Is that a homework ? Working with a new target image, as suggested in the comments, is the easiest.
But theoretically, assuming your original image is represented as some 2 dimension table of pixels, you could do it without creating a new image:
First double both dimensions of the original image (with the original image staying on "upper left" and occupying 1/4 of the new image, and filling the other 3/4 with blank or any value).
Then take the lower right pixel from the original image, and write 4 identical pixels in the lower right of the resized image.
Then take the original pixel directly at the the left of the previous original pixel, copy it on the 4 pixels directly at the left of the 4 previous new pixels. Repeat until you reach the left end of the line, then start the process again on the line above.
At some point you will overwrite pixels from the original image, but that doesn't matter since you will already have duplicated those in the new image.
That's pure theory, assuming you are not allowed to use external libraries such as Pillow. | 1 | 1 | 0 | 0 | i have to double an image using python,
So i think i can replace each pixel of the image with a square formed by 4 pixels
how do i can do that and assign to each pixel of the little square different colors? | Double an image using python | 0 | 1.2 | 1 | 0 | 0 | 1,504 |
40,370,004 | 2016-11-01T23:07:00.000 | 0 | 0 | 0 | 0 | 0 | python-2.7,tensorflow | 0 | 40,370,096 | 0 | 1 | 0 | false | 0 | 0 | You should just make your label (y) in your reduced sum format (i.e. 3 bits), and train to that label. The neural net should be smart enough to adjust the weights to imitate your reduce_sum logic. | 1 | 0 | 1 | 0 | I have a question about tensorflow tensor.
If I have a NeuralNet like y=xw+b as an example.
then x is placeholder([7,7] dims), w is Variable([7,1]) and b is Variable([1,1])
So, y is tensorflow tensor with [7,1] dims.
then, in this case. can I make a new tensor like
new_y = [tf.reduce_sum(y[0:3]), tf.reduce_sum(y[3:5]), tf.reduce_sum(y[5:])]
and use it for training step?
If possible, how can I make it? | Generate new tensorflow tensor according to the element index of original tensor | 0 | 0 | 1 | 0 | 0 | 54 |
40,384,700 | 2016-11-02T16:14:00.000 | 1 | 0 | 1 | 1 | 1 | python,macos,pip | 0 | 40,386,891 | 0 | 1 | 0 | true | 0 | 0 | Resolved the problem. Turns out that this is Homebrew's behavior. I must have recently ran brew upgrade and it installed a newer version of python3. It seems that something got weird with re-linking the new python3, so all binaries for the new installs ended up somewhere deep in /usr/local/Cellar/python3.
I expect that re-linking python3 would solve this, but I ended up removing all versions of python3 and reinstalling. After that all I had to do was re-install any and all packages that had binary files in them.
Not sure if this is the intended behavior or a bug in python3 package. | 1 | 1 | 0 | 0 | Suddenly, my pip install commands stopped installing binaries into /usr/local/bin. I tried to upgrade pip to see if that might be the problem, it was up to date and a forced re-install deleted my /usr/local/pip3 and didn't install it back, so now I have to use python3 -m pip to do any pip operations. I am running OS X Sierra with the latest update (that is the main thing that changed, so I think the OS X upgrade might have caused this) with python3 installed by homebrew. How do I fix this?
Edit: I am still trying to work this out. python3 -m pip show -f uwsgi actually shows the uwsgi binary as being installed to what amounts to /usr/local/bin (it uses relative paths). Yet the binary is not there and reinstalling doesn't put it there and doesn't produce any errors. So either pip records the file in its manifest, but doesn't actually put it there or the OS X transparently fakes the file creation (did Apple introduce some new weird security measures?) | pip3 stopped installing executables into /usr/local/bin | 0 | 1.2 | 1 | 0 | 0 | 1,216 |
40,389,402 | 2016-11-02T20:39:00.000 | -1 | 0 | 0 | 0 | 0 | python,machine-learning,scikit-learn,k-means | 0 | 40,390,555 | 0 | 2 | 0 | false | 0 | 0 | Clustering users makes sense. But if your only feature is the rating, I don't think it could produce a useful model for prediction. Below are my assumptions to make this justification:
The quality of movie should be distributed with a gaussion distribution.
If we look at the rating distribution of a common user, it should be something like gaussian.
I don't exclude the possibility that a few users only give ratings when they see a bad movie (thus all low ratings); and vice versa. But on a large scale of users, this should be unusual behavior.
Thus I can imagine that after clustering, you get small groups of users in the two extreme cases; and most users are in the middle (because they share the gaussian-like rating behavior). Using this model, you probably get good results for users in the two small (extreme) groups; however for the majority of users, you cannot expect good predictions. | 1 | 0 | 1 | 0 | I have a file called train.dat which has three fields - userID, movieID and rating.
I need to predict the rating in the test.dat file based on this.
I want to know how I can use scikit-learn's KMeans to group similar users given that I have only feature - rating.
Does this even make sense to do? After the clustering step, I could do a regression step to get the ratings for each user-movie pair in test.dat
Edit: I have some extra files which contain the actors in each movie, the directors and also the genres that the movie falls into. I'm unsure how to use these to start with and I'm asking this question because I was wondering whether it's possible to get a simple model working with just rating and then enhance it with the other data. I read that this is called content based recommendation. I'm sorry, I should've written about the other data files as well. | Clustering before regression - recommender system | 0 | -0.099668 | 1 | 0 | 0 | 733 |
40,425,856 | 2016-11-04T15:03:00.000 | 1 | 0 | 0 | 1 | 0 | javascript,python,cookies,bokeh | 0 | 40,429,660 | 0 | 1 | 0 | true | 1 | 0 | The cookies idea might work fine. There are a few other possibilities for sharing data:
a database (e.g. redis or something else, that can trigger async events that the app can respond to)
direct communication between the apps (e.g. with zeromq or similiar) The Dask dashboard uses this kind of communication between remote workers and a bokeh server.
files and timestamp monitoring if there is a shared filesystem (not great, but sometimes workable in very simple cases)
Alternatively if you can run both apps on the same single server (even though they are separate apps) then you could probably communicate by updating some mutable object in a module that both apps import. But this would not work in a scale-out scenario with more than one Bokeh server running.
Any/all of these somewhat advanced usages, an working example would make a great contribution for the docs so that others can use them to learn from. | 1 | 0 | 0 | 0 | I have two Bokeh apps (on Ubuntu \ Supervisor \ Nginx), one that's a dashboard containing a Google map and another that's an account search tool. I'd like to be able to click a point in the Google map (representing a customer) and have the account search tool open with info from the the point.
My problem is that I don't know how to get the data from A to B in the current framework. My ideas at the moment:
Have an event handler for the click and have it both save a cookie and open the account web page. Then, have some sort of js that can read the cookie and load the account.
Throw my hands up, try to put both apps together and just find a way to pass it in the back end. | Transfer Data from Click Event Between Bokeh Apps | 0 | 1.2 | 1 | 0 | 0 | 157 |
40,430,960 | 2016-11-04T20:07:00.000 | 2 | 0 | 1 | 0 | 1 | python-2.7,python-3.x,spyder,graphlab | 0 | 40,952,297 | 0 | 1 | 0 | true | 0 | 0 | Following method will solve this:
Open Spyder --> tools --> preferences --> python interpreter --> change from default to custom and select the python executable under gl-env environment.
Restart spyder. It will work. | 1 | 2 | 0 | 0 | I can run my python file with imported functionalities from GraphLab from the Terminal (first use the source activate gl-env and then run the file). So the file and installations are alright in that sense.
However, I can't figure out how to run the file directly in Spyder IDE. I only get ImportError: No module named 'graphlab'. The Spyder runs with python3.5 and I've tried to change to 2.7 as the GraphLap seems to, but it doesn't work either (I redirected to the same python2.7 'scientific_startup.py' used in GraphLab lib ).
I wonder if anyone knows how to run the file directly from Spyder?? | run graphlab from Spyder | 0 | 1.2 | 1 | 0 | 1 | 391 |
40,431,073 | 2016-11-04T20:16:00.000 | 1 | 1 | 1 | 0 | 0 | python,hashlib,sparse-file | 0 | 40,431,152 | 0 | 1 | 1 | false | 0 | 0 | The hashlib module doesn't even work with files. You have to read the data in and pass blocks to the hashing object, so I have no idea why you think it would handle sparse files at all.
The I/O layer doesn't do anything special for sparse files, but that's the OS's job; if it knows the file is sparse, the "read" operation doesn't need to do I/O, it just fills in your buffer with zeroes with no I/O. | 1 | 0 | 0 | 0 | I wanted to know how does python hashlib library treat sparse files. If the file has a lot of zero blocks then instead of wasting CPU and memory on reading zero blocks does it do any optimization like scanning the inode block map and reading only allocated blocks to compute the hash?
If it does not do it already, what would be the best way to do it myself.
PS: Not sure it would be appropriate to post this question in StackOverflow Meta.
Thanks. | Python hashlib and sparse files | 0 | 0.197375 | 1 | 0 | 0 | 96 |
40,438,629 | 2016-11-05T13:10:00.000 | 0 | 0 | 0 | 0 | 0 | android,python | 0 | 40,438,672 | 0 | 2 | 0 | false | 0 | 0 | Your problem is not Android-related definitely.
You simply need to educated yourself about networking. Yes, it will cost you some money - you spend them buying few books and some hardware for building home network.
After about 3-6-12 months of playing with your home network you will find your question rather simple to answer. | 1 | 0 | 0 | 0 | I have a second laptop running kali linux which is not used at all, meaning it can be running anytime as a server for my application. So what I actually want to do is connect from my application to my server and send some data, on the server run a python program that uses this code and return some data back. I never tried to work with servers, can I even turn my computer into a server for my application? does this cost any money? can I run a python code on the server and return the results?
I know I haven't published any code but I actually don't know how to start this project and I can use some help so can someone refer me to something to start with? Thanks.. | Using a server in android to run code | 0 | 0 | 1 | 0 | 1 | 259 |
40,442,688 | 2016-11-05T20:07:00.000 | 4 | 1 | 1 | 0 | 0 | python | 0 | 40,442,773 | 0 | 1 | 0 | true | 0 | 0 | When you import a C extension, python uses the platform's shared library loader to load the library and then, as you say, jumps to a function in the library. But you can't load just any library or jump to any function this way. It only works for libs specifically implemented to support python and to functions that are exported by the library as a python object. The lib must understand python objects and use those objects to communicate.
Alternately, instead of importing, you can use a foreign-function library like ctypes to load the library and convert data to the C view of data to make calls. | 1 | 1 | 0 | 0 | In Python ( CPython) we can import module:
import module and module can be just *.py file ( with a python code) or module can be a module written in C/C++ ( be extending python). So, a such module is just compiled object file ( like *.so/*.o on the Unix).
I would like to know how is it executed by the interpreter exactly.
I think that python module is compiled to a bytecode and then it will be interpreted. In the case of C/C++ module functions from a such module are just executed. So, jump to the address and start execution.
Please correct me if I am wrong/ Please say more. | C/C++ module vs python module. | 0 | 1.2 | 1 | 0 | 0 | 191 |
40,452,529 | 2016-11-06T17:50:00.000 | 3 | 0 | 0 | 0 | 0 | python,django | 0 | 40,452,589 | 0 | 3 | 0 | false | 1 | 0 | meaning it's really static
Use nginx to serve static files. Do not use django. You will setup project structure when it will be required. | 2 | 1 | 0 | 0 | It is a little oxymoron now that I am making a small Django project that it is hard to decide how to structure such project. Before I will at least will have 10 to 100 apps per project. Now my project is just a website that presents information about a company with no use database, meaning it's really static, with only 10 to 20 pages. Now how do you start, do you create an app for such project. | How to structure a very small Django Project? | 1 | 0.197375 | 1 | 0 | 0 | 318 |
40,452,529 | 2016-11-06T17:50:00.000 | 1 | 0 | 0 | 0 | 0 | python,django | 0 | 40,452,810 | 0 | 3 | 0 | false | 1 | 0 | Frankly I won't use Django in that case, I would use Flask for such small projects. it's easy to learn and setup a small website.
PS: I use Flask in small and large apps! | 2 | 1 | 0 | 0 | It is a little oxymoron now that I am making a small Django project that it is hard to decide how to structure such project. Before I will at least will have 10 to 100 apps per project. Now my project is just a website that presents information about a company with no use database, meaning it's really static, with only 10 to 20 pages. Now how do you start, do you create an app for such project. | How to structure a very small Django Project? | 1 | 0.066568 | 1 | 0 | 0 | 318 |
40,454,897 | 2016-11-06T21:47:00.000 | 1 | 0 | 0 | 0 | 0 | python,algorithm,path-finding | 0 | 40,458,170 | 0 | 4 | 0 | false | 0 | 1 | For this problem, simply doing a breadth-first search is enough (Dijkstra and BFS work in same way for unweighted graphs). To ensure that only the chess knight's moves are used, you'll have to define the moves in a proper way.
Notice that a chess knight moves two squares to any direction, then one square perpendicular to that. This means it can move two squares left of right then one square up or down, or two squares up or down then one square left or right.
The calculation will be much easier if you identify the cells by rows (0 - 7) and columns (0 - 7) instead of 0 - 63. This can be done easily by dividing the cell index by 8 and using the quotient and remainder as row and column indices. So, if the knight is at position (x, y) now, its next possible positions can be any of (x - 2, y - 1), (x - 2, y + 1), (x + 2, y - 1), (x + 2, y + 1), (x - 1, y - 2), (x - 1, y + 2), (x + 1, y - 2), (x + 1, y + 2). Be careful that all of these 8 cells may not be inside the grid, so discard the locations that falls out of the board. | 3 | 1 | 0 | 0 | I have a problem shown below that wants to find the quickest way to get between any two points by using only the moves of a knight in chess. My first thought was to us the A* algorithm or Dijkstra's algorithm however, I don't know how to make sure only the moves of a knight are used. I would appreciate it if you could suggest a better algorithm or just some tips to help me. Thank you.
Write a function called answer(src, dest) which takes in two parameters: the source square, on which you start, and the destination square, which is where you need to land to solve the puzzle. The function should return an integer representing the smallest number of moves it will take for you to travel from the source square to the destination square using a chess knight's moves (that is, two squares in any direction immediately followed by one square perpendicular to that direction, or vice versa, in an "L" shape). Both the source and destination squares will be an integer between 0 and 63, inclusive, and are numbered like the example chessboard below:
-------------------------
| 0| 1| 2| 3| 4| 5| 6| 7|
-------------------------
| 8| 9|10|11|12|13|14|15|
-------------------------
|16|17|18|19|20|21|22|23|
-------------------------
|24|25|26|27|28|29|30|31|
-------------------------
|32|33|34|35|36|37|38|39|
-------------------------
|40|41|42|43|44|45|46|47|
-------------------------
|48|49|50|51|52|53|54|55|
-------------------------
|56|57|58|59|60|61|62|63|
------------------------- | Simple algorithm to move from one tile to another using only a chess knight's moves | 0 | 0.049958 | 1 | 0 | 0 | 1,507 |
40,454,897 | 2016-11-06T21:47:00.000 | 2 | 0 | 0 | 0 | 0 | python,algorithm,path-finding | 0 | 40,458,703 | 0 | 4 | 0 | true | 0 | 1 | Approach the problem in the following way:
Step 1: Construct a graph where each square of the chess board is a vertex.
Step 2: Place an edge between vertices exactly when there is a single knight-move from one square to another.
Step 3: Apply Dijkstra's algorithm. Dijkstra's algorithm is an algorithm to find the length of a path between two vertices (squares). | 3 | 1 | 0 | 0 | I have a problem shown below that wants to find the quickest way to get between any two points by using only the moves of a knight in chess. My first thought was to us the A* algorithm or Dijkstra's algorithm however, I don't know how to make sure only the moves of a knight are used. I would appreciate it if you could suggest a better algorithm or just some tips to help me. Thank you.
Write a function called answer(src, dest) which takes in two parameters: the source square, on which you start, and the destination square, which is where you need to land to solve the puzzle. The function should return an integer representing the smallest number of moves it will take for you to travel from the source square to the destination square using a chess knight's moves (that is, two squares in any direction immediately followed by one square perpendicular to that direction, or vice versa, in an "L" shape). Both the source and destination squares will be an integer between 0 and 63, inclusive, and are numbered like the example chessboard below:
-------------------------
| 0| 1| 2| 3| 4| 5| 6| 7|
-------------------------
| 8| 9|10|11|12|13|14|15|
-------------------------
|16|17|18|19|20|21|22|23|
-------------------------
|24|25|26|27|28|29|30|31|
-------------------------
|32|33|34|35|36|37|38|39|
-------------------------
|40|41|42|43|44|45|46|47|
-------------------------
|48|49|50|51|52|53|54|55|
-------------------------
|56|57|58|59|60|61|62|63|
------------------------- | Simple algorithm to move from one tile to another using only a chess knight's moves | 0 | 1.2 | 1 | 0 | 0 | 1,507 |
40,454,897 | 2016-11-06T21:47:00.000 | 1 | 0 | 0 | 0 | 0 | python,algorithm,path-finding | 0 | 41,089,879 | 0 | 4 | 0 | false | 0 | 1 | While User_Targaryen's answer is the best because it directly answers your question, I would recommend an algebraic solution, if your goal is computing is the delivery of an answer in the shortest amount of time.
To shorten the algorithm, use reflections about the x, y, and xy axes, so as to consider only positive (x, y) where x >= y, and place the starting move at the origin, coordinate (0, 0). This is one octant (one eighth) of the possible directions.
A hint to discovering the solution is to use graph-paper or Dijkstra's algorithm with the restriction of reaching all points in the first octant up to 5 moves, and display this as a grid. Each cell of the grid should be labeled with a digit representing the minimum number of moves.
Let me know if you would like to broaden your question and would like additional information. | 3 | 1 | 0 | 0 | I have a problem shown below that wants to find the quickest way to get between any two points by using only the moves of a knight in chess. My first thought was to us the A* algorithm or Dijkstra's algorithm however, I don't know how to make sure only the moves of a knight are used. I would appreciate it if you could suggest a better algorithm or just some tips to help me. Thank you.
Write a function called answer(src, dest) which takes in two parameters: the source square, on which you start, and the destination square, which is where you need to land to solve the puzzle. The function should return an integer representing the smallest number of moves it will take for you to travel from the source square to the destination square using a chess knight's moves (that is, two squares in any direction immediately followed by one square perpendicular to that direction, or vice versa, in an "L" shape). Both the source and destination squares will be an integer between 0 and 63, inclusive, and are numbered like the example chessboard below:
-------------------------
| 0| 1| 2| 3| 4| 5| 6| 7|
-------------------------
| 8| 9|10|11|12|13|14|15|
-------------------------
|16|17|18|19|20|21|22|23|
-------------------------
|24|25|26|27|28|29|30|31|
-------------------------
|32|33|34|35|36|37|38|39|
-------------------------
|40|41|42|43|44|45|46|47|
-------------------------
|48|49|50|51|52|53|54|55|
-------------------------
|56|57|58|59|60|61|62|63|
------------------------- | Simple algorithm to move from one tile to another using only a chess knight's moves | 0 | 0.049958 | 1 | 0 | 0 | 1,507 |
40,457,331 | 2016-11-07T03:20:00.000 | 1 | 0 | 0 | 0 | 0 | python,information-retrieval,information-extraction | 0 | 40,603,239 | 0 | 2 | 0 | false | 0 | 0 | Evaluation has two essentials. First one is a test resource with the ranking of documents or their relevancy tag (relevant or not-relevant) for specific queries, which is made with an experiment (like user click, etc. and is mostly used when you have a running IR system), or made through crowd-sourcing. The second essential part of evaluation is which formula to use for evaluating an IR system with the test collection.
So based on what you said, if you don't have a labeled test collection you cant evaluate your system. | 1 | 3 | 1 | 0 | i wrote one program to do the information retrieval and extraction. user enter the query in the search bar, the program can show the relevant txt result such as the relevant sentence and the article which consists the sentence.
I did some research for how to evaluate the result. I might need to calculate the precision, recall, AP, MAP....
However, I am new to that. How to calculate the result. Since my dataset is not labeled and i did not do the classification. The dataset I used was the article from BBC news. there were 200 articles. i named it as 001.txt, 002.txt ...... 200.txt
It would be good if u have any ideas how to do the evaluation in python. Thanks. | information retrieval evaluation python precision, recall, f score, AP,MAP | 0 | 0.099668 | 1 | 0 | 0 | 6,407 |
40,471,132 | 2016-11-07T17:33:00.000 | 2 | 0 | 0 | 0 | 0 | python,django | 0 | 40,474,452 | 0 | 2 | 0 | true | 1 | 0 | Yes, generally POST is a better way of submitting data than GET. There is a bit of a confusion about terminology in Django. While Django is, indeed MVC, models are models, but views are in fact controllers and views are templates. Since you are going to use AJAX to submit and retrieve the data, you don't care about templates. So what you most likely want is something like this
in your urls.py as part of your urlpatterns variable
url(r'mything/$', MyView.as_view())
in your views.py
from django.views import View
from django.http import HttpResponse
class MyView(View):
def post(self, request):
data = request.POST
... do your thing ...
return HttpResponse(results)
and in your javascript
jQuery.post('/mything/', data, function() { whatever you do here }) | 1 | 2 | 0 | 1 | I am working on my first django project which is also my first backend project. In the tutorials/reading I have completed, I haven't come across passing information back to django without a modelform.
My intention is to calculate a value on a page using javascript and pass it to django when a user hits a submit button on that page. The submit button will also be a link to another page. I know I could process the information in a view via the url if I knew how to pass the information back to django.
I'm aware that django uses MVC and as I have my models and views in place, I am lead to believe that this has something to do with controllers.
Basically, I would like to know how to pass information from a page to django as a user follows a link to another page. I understand that this isn't the place for long step by step tutorials on specific topics but I would appreciate any links to resources on this subject. I don't know what this process is even called so I can't search documentation for it.
EDIT:
From further reading, I think that I want to be using the submit button to GET or POST the value. In this particular case, POST is probably better. Could someone confirm that this is true? | Django - beginner- what is the process for passing information to a view via a url? | 0 | 1.2 | 1 | 0 | 0 | 47 |
40,482,242 | 2016-11-08T08:31:00.000 | 0 | 0 | 0 | 1 | 1 | python,angularjs,templates,tornado | 0 | 40,517,136 | 0 | 1 | 0 | false | 1 | 0 | This is not really a Tornado question, as this is simply how Web works.
One possible solution is to have only one form, but display its fields so that they look like two forms; in addition, have two separate submit buttons, each with its own name and value. Now, when you click on either button the whole form will be submitted, but in the handler you can process only the fields associated with the clicked button, while still displaying values in all the fields. | 1 | 0 | 0 | 0 | I have two forms, when I submit form#1 I get some corresponding file, but when I submit form#2 thenafter, the corresponding file gets shown but form#1 goes empty. So basically I want some thing like a SPA(e.g angular) but I am taking form#1 and form#2 as separate requests routes and each render my index.html every time, so form#2 is wiped off when I submit form#1 and vice-versa.
I dont want a working code but any ideas on how I do that with Tornado (not angular, or say Tornado + Angular ? )
I think one way for example is to handle these requests via a controller and do an AJAX post to corresponding Tornado Handler, which after the file is rendered, displays / serves the very file back again. But this uses AngularJS as a SPA. Any other solution possible?
Thanks in Advance | ways to avoid previous reload tornado | 0 | 0 | 1 | 0 | 0 | 47 |
Subsets and Splits