Q_Id (int64) | CreationDate (string) | Users Score (int64) | Other (int64) | Python Basics and Environment (int64) | System Administration and DevOps (int64) | DISCREPANCY (int64) | Tags (string) | ERRORS (int64) | A_Id (int64) | API_CHANGE (int64) | AnswerCount (int64) | REVIEW (int64) | is_accepted (bool) | Web Development (int64) | GUI and Desktop Applications (int64) | Answer (string) | Available Count (int64) | Q_Score (int64) | Data Science and Machine Learning (int64) | DOCUMENTATION (int64) | Question (string) | Title (string) | CONCEPTUAL (int64) | Score (float64) | API_USAGE (int64) | Database and SQL (int64) | Networking and APIs (int64) | ViewCount (int64) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
30,261,701 | 2015-05-15T14:12:00.000 | 1 | 1 | 0 | 0 | 0 | python,unix,path | 0 | 30,261,822 | 0 | 2 | 0 | true | 0 | 0 | Python doesn't share its own path with the general $PATH, so to be able to do what you're looking for, you must add your scripts' directory to $PYTHONPATH instead. | 1 | 0 | 0 | 0 | I have a collection of Python scripts that import from each other. If I want to use these in a location where the scripts are not physically present, how can I do this? I tried adding the path of the dir with the scripts to my $PATH but got no joy. Any help appreciated, thanks. | Calling Python scripts from anywhere | 0 | 1.2 | 1 | 0 | 0 | 52 |
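A minimal runnable sketch of the $PYTHONPATH idea at the interpreter level; the directory path is a made-up example:

```python
import sys

# Hypothetical directory holding the shared scripts (an assumption).
SCRIPTS_DIR = "/home/user/my_scripts"

# Runtime equivalent of: export PYTHONPATH="$SCRIPTS_DIR:$PYTHONPATH"
if SCRIPTS_DIR not in sys.path:
    sys.path.insert(0, SCRIPTS_DIR)
# Modules living in SCRIPTS_DIR are now importable from anywhere.
```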
30,264,100 | 2015-05-15T16:10:00.000 | 0 | 0 | 1 | 0 | 1 | python,virtualenv,zipline | 1 | 45,375,832 | 0 | 3 | 0 | false | 0 | 0 | These are the requirements and steps to make zipline work:
Install Microsoft Visual C++ 2010 Express
Download and install python 3.4
Download zipline from github and Extract in C:/
Set Anaconda as project interpreter
Since zipline is compatible with Python 3.4, you need to create an environment with Python 3.4
Run this command in console of IDE:
$conda create -n python34 python=3.4 anaconda  (this creates a Python 3.4 environment named python34)
Now run this command in console:
$activate python34 #Activates the python 3.4 environment
$pip install -e C:\GitHub\zipline (Directory where you extracted zipline)
Ingest data from Quandl with the command below
$zipline ingest
Hope this helps others who visit this page! | 2 | 3 | 0 | 0 | I installed the zipline package via Enthought Canopy. Now I try to run a script using it in the command prompt, but get the error ImportError: No module named zipline.
I also tried to run the same code using IPython, with the same output.
I think it is related to python virtual environments, but don't know how to fix that. | zipline error. No module named zipline | 0 | 0 | 1 | 0 | 0 | 2,894 |
30,264,100 | 2015-05-15T16:10:00.000 | 2 | 0 | 1 | 0 | 1 | python,virtualenv,zipline | 1 | 30,345,896 | 0 | 3 | 0 | true | 0 | 0 | I figured it out. The problem was the version of Python I had: I had 32-bit Python, while Enthought Canopy came with 64-bit Python, so the installed zipline package was under the 64-bit Python while the command prompt was using the 32-bit version. Installing 64-bit Python fixed the issue. | 2 | 3 | 0 | 0 | I installed the zipline package via Enthought Canopy. Now I try to run a script using it in the command prompt, but get the error ImportError: No module named zipline.
I also tried to run the same code using IPython, with the same output.
I think it is related to Python virtual environments, but don't know how to fix that. | zipline error. No module named zipline | 0 | 1.2 | 1 | 0 | 0 | 2,894 |
30,265,151 | 2015-05-15T17:12:00.000 | 0 | 0 | 1 | 0 | 0 | python,matplotlib,six | 0 | 30,265,202 | 0 | 1 | 0 | false | 0 | 0 | You can install the package by running pip install six-1.9.0.tar.gz, which requires you to copy six-1.9.0.tar.gz to the machine somehow first. | 1 | 0 | 0 | 0 | I am trying to install matplotlib on my lab computer that does not have internet access. Since it requires six for its full implementation, I am unable to run the scripts that use the matplotlib module.
I know how to install six using pip but am stuck when there is no internet access.
Thanks in advance! | Is there a way to install six if I do not have access to the internet? | 1 | 0 | 1 | 0 | 0 | 237 |
30,315,466 | 2015-05-19T01:47:00.000 | 0 | 0 | 0 | 0 | 0 | python,pygame | 0 | 30,315,673 | 0 | 2 | 0 | true | 0 | 1 | Not sure if this is the problem, but the stretching might be caused by not redrawing the background. Blitting is like painting, you can't erase stuff. When you want to move something, you need to repaint all the places that have changed - also the background that is not visible after movement.
The quickest way to do this is to redraw the whole background. | 1 | 0 | 0 | 0 | I've got my own drawing loaded on to the screen and scaled it to the size I want, but the background of the program I used to make the drawing is still on the image. I noticed that when I move the image, the background doesn't move with the picture, but it actually looks like it's stretching out and it will cover wherever I move the picture to. I think this is because I used the .blit feature when getting my picture on screen, but I can't find a clear enough answer on how to get the picture on screen any other way. Can someone point me in the right direction, please? | Getting my drawing into pygame without .blit() | 0 | 1.2 | 1 | 0 | 0 | 79 |
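A sketch of the redraw-everything-per-frame loop the answer describes; the image file name and background colour are assumptions:

```python
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
image = pygame.image.load("drawing.png").convert_alpha()  # assumed file
x, y = 100, 100
clock = pygame.time.Clock()

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    x += 1                        # move the picture
    screen.fill((0, 0, 0))        # repaint the whole background first
    screen.blit(image, (x, y))    # then blit the picture at its new spot
    pygame.display.flip()
    clock.tick(60)
pygame.quit()
```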
30,318,168 | 2015-05-19T06:24:00.000 | 0 | 0 | 0 | 1 | 0 | python,linux,ubuntu,cron,crontab | 0 | 30,319,952 | 0 | 5 | 0 | false | 0 | 0 | If all you want to do is reset the content of the user's crontab file, then just remove the crontab file (or overwrite it with your default) and reload the cron service. | 2 | 2 | 0 | 0 | We know the crontab command is used for scheduled tasks in Linux.
I want to write a Python script. Its function is to receive some data (these data are related to crontab setting) and execute a 'crontab' command in order to reset the content of the user's crontab file.
I know how to execute external Linux commands in Python. But when you execute the crontab command (e.g. crontab -u xxx -e), you need to interact with an editor to modify the user's crontab file. (Suppose I don't know where the file is. For new users, crontab will generate a new file anyway. And I don't execute the command as the root user).
So the question is, how can I just execute crontab in Python? Is there any way to avoid interacting with an editor to modify the user's crontab file in Python?
My OS is ubuntu 14.01. | Python: How to handle 'crontab' command? | 0 | 0 | 1 | 0 | 0 | 6,083 |
30,318,168 | 2015-05-19T06:24:00.000 | 0 | 0 | 0 | 1 | 0 | python,linux,ubuntu,cron,crontab | 0 | 30,319,851 | 0 | 5 | 0 | false | 0 | 0 | You could/should first dump your current crontab with crontab -l, edit it the way you want (e. g. add some lines, or modify) and then install the new one.
This usually works with crontab <filename>, but should as well work with crontab - and then piping the new contents into the process's stdin. | 2 | 2 | 0 | 0 | We know the crontab command is used for scheduled tasks in Linux.
I want to write a Python script. Its function is to receive some data (these data are related to crontab setting) and execute a 'crontab' command in order to reset the content of the user's crontab file.
I know how to execute external Linux commands in Python. But when you execute the crontab command (e.g. crontab -u xxx -e), you need to interact with an editor to modify the user's crontab file. (Suppose I don't know where the file is. For new users, crontab will generate a new file anyway. And I don't execute the command as the root user).
So the question is, how can I just execute crontab in Python? Is there any way to avoid interacting with an editor to modify the user's crontab file in Python?
My OS is ubuntu 14.01. | Python: How to handle 'crontab' command? | 0 | 0 | 1 | 0 | 0 | 6,083 |
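A sketch of that dump-and-reinstall cycle from Python via subprocess; the appended job line is a made-up example:

```python
import subprocess

# Dump the current crontab ("crontab -l" exits non-zero if none exists yet).
proc = subprocess.Popen(["crontab", "-l"],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
current, _ = proc.communicate()
current = current.decode() if proc.returncode == 0 else ""

# Append a new entry and install the result by piping it into
# "crontab -", so no editor is ever involved.
new_tab = current + "0 * * * * /usr/bin/touch /tmp/heartbeat\n"
install = subprocess.Popen(["crontab", "-"], stdin=subprocess.PIPE)
install.communicate(new_tab.encode())
```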
30,320,265 | 2015-05-19T08:20:00.000 | 0 | 0 | 1 | 0 | 0 | python,django,heroku,version | 0 | 30,321,377 | 0 | 4 | 0 | false | 1 | 0 | If you didn't have access to source code, from headers I can see that server is using Gunicorn 0.17.2, which is compatible with Python 2.x >= 2.6, which rules out i.e. Python 3. | 1 | 1 | 0 | 0 | We run a site called inteokej.nu and I need to find out which version of Python it runs on. I have pulled all the files to my computer but I don't know if and how I can find out the version number from them? The site is hosted on Heroku and maybe there's a way to find out the version with some kind of Heroku command?
As for now I don't have any possibilities to change any code (e.g. add a code snippet to get the version).
Thanks in advance! | Find out Python version from source code (or Heroku) | 0 | 0 | 1 | 0 | 0 | 1,237 |
30,324,638 | 2015-05-19T11:41:00.000 | 3 | 0 | 1 | 0 | 0 | python,file,seek | 0 | 30,324,707 | 0 | 4 | 0 | false | 0 | 0 | Assuming the file isn't too big and memory isn't a concern
open('file.txt').readlines()[-2] | 1 | 1 | 0 | 0 | I am wondering if there is a simple way to get to the penultimate line of an open file. f.seek is giving me no end of trouble. I can easily get to the final line, but I can't figure out how to get to the line above that. | Printing to the penultimate line of a file | 1 | 0.148885 | 1 | 0 | 0 | 490 |
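For files too big to read fully into memory, a constant-memory variant of the same idea using collections.deque:

```python
from collections import deque

# Stream the file, keeping only the last two lines in memory.
with open("file.txt") as f:
    last_two = deque(f, maxlen=2)

penultimate = last_two[0] if len(last_two) == 2 else None
print(penultimate)
```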
30,324,761 | 2015-05-19T11:47:00.000 | 0 | 0 | 0 | 0 | 0 | python,django,windows-7-x64 | 1 | 30,326,360 | 0 | 1 | 0 | true | 1 | 0 | To summarize the comments:
This happens because you're in C:\Windows\system32. By default, when you open a command prompt it starts in that system32 directory, so you need to change directory first. To change directory use cd <the path you want>.
Once you're in the correct directory, you can start the project with the django-admin startproject command.
For example: cd c:\ to change to your root directory, then type C:\Python34\Scripts\django-admin startproject mysite. It will then create the directory under C:\mysite
Last but not least, you can also put C:\Python34\Scripts in your system environment variable PATH, so that you don't have to type the full path of django-admin. Instead you can then use django-admin startproject mysite. | 1 | 0 | 0 | 0 | I am currently learning how to use Python/Django. I have successfully installed and set up Django. I am experiencing problems creating a project using the django-admin startproject mysite command. I have executed the command from the command prompt using the path: C:\Python34\Scripts\django-admin startproject mysite I tried searching for it in the current directory but could not find the folder. I tried executing the command again and found out that the directory exists in:
'C:\Windows\system32\mysite'
I tried searching for it in this directory, but could not find it. How can I make this directory visible and set it under the Python directory?
Here is the exact error I am getting in Command Prompt:
CommandError: 'C:\Windows\system32\mysite' already exists
I am currently running Windows 7 - 64 Bit | Django - Cannot find Directory 'mysite' | 0 | 1.2 | 1 | 0 | 0 | 1,391 |
30,330,543 | 2015-05-19T15:58:00.000 | 1 | 1 | 0 | 0 | 0 | python,wifi,tethering | 0 | 30,333,512 | 0 | 2 | 0 | false | 0 | 0 | Normally not: from the computer's perspective, the tethered cell phone is simply another wifi router/provider.
You might be able to detect some of the phone carrier networks from the traceroute info to some known servers (DNS names or even IP address ranges of network nodes - they don't change that often).
If you have control over the phone tethering you could also theoretically use the phone's wifi SSID (or even IP address ranges) to identify tethering via individual/specific phones (not 100% reliable either unless you know that you can't get those parameters from other sources). | 1 | 1 | 0 | 0 | I was wondering if there is a way to detect, from a python script, if the computer is connected to the internet using a tethered cell phone?
I have a Python script which runs a bunch of network measurement tests. Ideally, depending on the type of network the client is using (ethernet, wifi, LTE, ...) the tests should change. The challenge is how to get this information from the Python client, without asking the user to provide it. Especially detecting tethering. | Detect if connected to the internet using a tethered phone in Python | 1 | 0.099668 | 1 | 0 | 0 | 605 |
30,357,663 | 2015-05-20T18:28:00.000 | 3 | 0 | 1 | 1 | 0 | python,http,tornado,web-frameworks | 0 | 30,362,810 | 0 | 1 | 0 | true | 0 | 0 | The vast majority of Tornado apps should have only one IOLoop, running in the main thread. You can run multiple HTTPServers (or other servers) on the same IOLoop.
It is possible to create multiple IOLoops and give each one its own thread, but this is rarely useful, because the GIL ensures that only one thread is running at a time. If you do use multiple IOLoops you must be careful to ensure that the different threads only communicate with each other through thread-safe methods (i.e. IOLoop.add_callback). | 1 | 5 | 0 | 0 | I have been working with the Tornado web framework for some time, but I still haven't understood the IOLoop functionality clearly, especially how to use it with multithreading.
Is it possible to create a separate IOLoop instance for each of multiple servers? | Tornado ioloop + threading | 1 | 1.2 | 1 | 0 | 0 | 4,428 |
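A Tornado 4.x-era sketch of both points: two servers sharing the one main-thread IOLoop, and a worker thread talking to that loop only via add_callback (handler and ports are placeholders):

```python
import threading

import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("hello")

app = tornado.web.Application([(r"/", MainHandler)])
app.listen(8000)  # first server
app.listen(8001)  # second server, same IOLoop

loop = tornado.ioloop.IOLoop.current()

def on_loop():
    print("ran on the IOLoop thread")

def poke_loop():
    # The only thread-safe way to reach the loop from another thread.
    loop.add_callback(on_loop)

threading.Thread(target=poke_loop).start()
loop.start()
```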
30,363,813 | 2015-05-21T03:00:00.000 | 0 | 0 | 1 | 0 | 0 | python,python-2.7,python-3.x,pip,anaconda | 0 | 42,434,840 | 0 | 4 | 0 | false | 0 | 0 | (in a command prompt) C:\Python34\Scripts\pip.exe install pytz
This assumes your path is similar to mine; I used the default install location for all my Pythons (2.7, 3.4). | 1 | 11 | 0 | 0 | I have the following Python distributions installed on my Windows computer:
Python 2.7 (IDLE)
Python 3.4 (IDLE)
Anaconda (Python 3.4)
Obviously, they all store their libraries in different locations.
So, how can I easily make a targeted installation to (a different) one of them each time I need to do so?
For example, right now, I am trying to install pytz to Python 3.4 (IDLE), and pip install seems to be defaulting to Python 2.7 (IDLE), which is the first distribution of Python I had installed on my computer. | How can I control which Python distribution to pip install a package to when I have Python 2, Python 3, and Anaconda on my computer? | 0 | 0 | 1 | 0 | 0 | 16,412 |
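A sketch of targeting each interpreter explicitly by running pip as a module of that interpreter; the Windows paths are typical defaults and are assumptions:

```python
import subprocess

# "python -m pip" installs into that interpreter's own site-packages.
interpreters = [
    r"C:\Python27\python.exe",    # Python 2.7 (IDLE)
    r"C:\Python34\python.exe",    # Python 3.4 (IDLE)
    r"C:\Anaconda3\python.exe",   # Anaconda
]
for exe in interpreters:
    subprocess.check_call([exe, "-m", "pip", "install", "pytz"])
```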
30,367,541 | 2015-05-21T07:43:00.000 | 3 | 0 | 1 | 0 | 1 | python,pycharm | 1 | 30,636,214 | 0 | 2 | 0 | false | 0 | 0 | Just put the hello.py file in a subdirectory of your project directory.
All the files in the project directory essentially provide relevant "system information" for the project. One project may contain a couple of applications. Each application should be put in one subdirectory.
Set "Settings/Editor/File Encodings/Project Encoding" to:
"UTF-8" | 1 | 0 | 0 | 0 | I just started to us Python and Pycharm today. I installed Python 3.4.3 and pycharm 4.5, and I'm using Windows 7 OS on a acer TravelMate 8471 laptop.
When I try to print("hello"), the error is:
"Fatal Python error: Py_Initialize: can't initialize sys standard streams
LookupError: unknown encoding: x-windows-950"
Does anyone know how to fix this issue ? | Issue with pycharm:(Pycharm:Py_Initialize: can't initialize sys standard streams) | 0 | 0.291313 | 1 | 0 | 0 | 4,364 |
30,368,271 | 2015-05-21T08:22:00.000 | 2 | 1 | 0 | 0 | 0 | python,django | 0 | 30,369,384 | 0 | 3 | 0 | false | 1 | 0 | Another way to look at it: send the mail to a backup email account of yours, e.g. [email protected]. That way you can store the email and check whether it was sent or not.
Other than that, having an extra model for logged emails is the way to go. | 2 | 3 | 0 | 0 | I was wondering how I can store sent emails.
I have a send_email() function in a pre_save() and now I want to store the emails that have been sent so that I can check when an email was sent and whether it was sent at all. | Python / Django | How to store sent emails? | 0 | 0.132549 | 1 | 0 | 0 | 559 |
30,368,271 | 2015-05-21T08:22:00.000 | 5 | 1 | 0 | 0 | 0 | python,django | 0 | 30,368,302 | 0 | 3 | 0 | true | 1 | 0 | I think the easiest way, before messing with middleware or anything else, is to simply create a model for your logged emails and add a new record whenever a send was successful. | 2 | 3 | 0 | 0 | I was wondering how I can store sent emails.
I have a send_email() function in a pre_save() and now I want to store the emails that have been sent so that I can check when an email was sent and whether it was sent at all. | Python / Django | How to store sent emails? | 0 | 1.2 | 1 | 0 | 0 | 559 |
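A minimal sketch of such a logging model plus a wrapper around Django's send_mail; all names here are assumptions, not an established API:

```python
from django.core.mail import send_mail
from django.db import models

class SentEmail(models.Model):            # lives in an app's models.py
    recipient = models.EmailField()
    subject = models.CharField(max_length=255)
    body = models.TextField()
    sent_at = models.DateTimeField(auto_now_add=True)
    delivered = models.BooleanField(default=False)

def send_and_log(recipient, subject, body):
    # send_mail returns the number of successfully delivered messages.
    ok = send_mail(subject, body, "noreply@example.com", [recipient]) == 1
    SentEmail.objects.create(recipient=recipient, subject=subject,
                             body=body, delivered=ok)
```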
30,387,974 | 2015-05-22T03:38:00.000 | 3 | 0 | 0 | 0 | 0 | python,django,django-models,django-syncdb | 0 | 30,392,918 | 0 | 2 | 0 | true | 1 | 0 | Thanks Amyth for the hints.
By the way, the commands are a bit different; I will post a tested result here.
Using south
1. setup the model
python manage.py schemamigration models --initial
dump data if you have to
python manage.py dumpdata -e contenttypes -e auth.Permission --natural > data.json
syncdb
python manage.py syncdb
python manage.py migrate models
load the data back into the db
python manage.py loaddata data.json
Afterwards, you may use
python manage.py schemamigration models --auto
python manage.py migrate models
after every change you make to the models schema.
A few notes
1. Unloading the database and reloading it is essential, because otherwise the first migration will tell you that you already have those models.
2. The -e contenttypes -e auth.Permission --natural parameters to dumpdata are essential, otherwise an exception will be thrown when doing loaddata. | 1 | 1 | 0 | 0 | How to do syncdb in django 1.4.2?
i.e. having data in the database, how do I load the models again when the data schema is updated?
Thanks in advance | How to do django syncdb in version 1.4.2? | 0 | 1.2 | 1 | 1 | 0 | 2,775 |
30,408,098 | 2015-05-23T01:50:00.000 | 0 | 0 | 0 | 0 | 0 | python,algorithm,dijkstra,libtcod | 0 | 54,982,870 | 0 | 2 | 0 | false | 0 | 1 | As far as I understand, what you want to achieve is very well possible with tcod's built-in pathfinding function.
path_new_using_function will call your path_func with adjacent cells, so you can simply have it return the values you listed above depending on the terrain below (xFrom, yFrom) and/or (xTo, yTo). | 1 | 2 | 0 | 0 | I am building a turn-based strategy game using Libtcod and Python. The game map has variable terrain and each tile can be 1 of 5 types:
Plains - Costs 1 to move over
Forest - Costs 2
River - Costs 4
Hill - Costs 3
Mountain - Impassable
Each type has its own movement cost, so that it costs less "move points" to move through plains than through a forest, for example. I want to display all the squares a unit can move to given its movement range/starting move points.
Libtcod has pathfinding functionality built in for both A* and Dijkstra, and it is trivial to display all the squares in a given range without accounting for terrain.
However, I cannot figure out how I can implement the terrain costs without having to write my own pathfinding algorithm. Looking at the docs, I know it has something to do with:
def path_func(xFrom,yFrom,xTo,yTo,userData) : ...
path_new_using_function(width, height, path_func, user_data=0, diagonalCost=1.41)
dijkstra_new_using_function(width, height, path_func, user_data=0, diagonalCost=1.41)
but I cannot figure out what the custom function is supposed to do. According to the docs, it supposed to
...return the walk cost from coordinates xFrom,yFrom to coordinates
xTo,yTo. The cost must be > 0.0f if the cell xTo,yTo is walkable. It
must be equal to 0.0f if it's not.
but, isn't that the point of the dijtskra algorithm to begin with? That is, the algorithm is supposed to take into account the variable costs of each tile and then build a path accordingly.
The map itself already has the terrain and move costs applied to it, I just need a way to bridge that data with the pathfinding. | Python Libtcod: How to do pathfinding with variable move cost terrain? | 0 | 0 | 1 | 0 | 0 | 965 |
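A sketch of what that callback could look like for the question's tile costs; the `terrain` grid is hypothetical, and returning 0.0 marks a cell unwalkable, per the quoted docs:

```python
MOVE_COST = {"plains": 1.0, "forest": 2.0, "hill": 3.0,
             "river": 4.0, "mountain": 0.0}  # 0.0 == impassable

terrain = [["plains", "forest"],
           ["hill", "mountain"]]  # toy 2x2 map for illustration

def path_func(x_from, y_from, x_to, y_to, user_data):
    # libtcod asks for the cost of stepping onto (x_to, y_to);
    # only the destination tile matters in this scheme.
    return MOVE_COST[terrain[y_to][x_to]]
```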
30,421,373 | 2015-05-24T08:12:00.000 | 2 | 0 | 1 | 0 | 0 | python,ipython,jupyter | 0 | 37,065,235 | 0 | 3 | 0 | false | 0 | 0 | For me to resolve this issue, I had to stop my anti-virus program. | 3 | 59 | 0 | 0 | I use Windows 7, Python 2.7.9 plus latest version of IPython 3.1.
I ran %python inside an IPython Notebook and ran the cell; instead of returning the Python version, it did not run, jumped to a new line, and printed an In [*] instead of a line number. Now nothing runs in IPython; everything is ignored when I try to run a cell.
Anyone know what has happened? | What does In [*] in IPython Notebook mean and how to turn it off? | 0 | 0.132549 | 1 | 0 | 0 | 114,689 |
30,421,373 | 2015-05-24T08:12:00.000 | 7 | 0 | 1 | 0 | 0 | python,ipython,jupyter | 0 | 54,258,699 | 0 | 3 | 0 | false | 0 | 0 | The issue causing your kernel to be busy can be a specific line of code. If that is the case, your computer may just need time to work through that line.
To find out which line or lines are taking so long, as mentioned by Mike Muller you need to restart the program or interrupt the kernel. Then go through carefully running one line at a time until you reach the first one with the asterisk.
If you do not restart or interrupt your busy program, you will not be able to tell which line is the problem line and which line is not, because it will just stay busy while it works on that problem line. It will continue to give you the asterisk on every line until it finishes running that one line of code even if you start back at the beginning. This is extremely confusing, because lines that have run and produced output suddenly lose their output when you run them on the second pass. Also confusing is the fact that you can make changes to your code while the kernel is busy, but you just can't get any new output until it is free again.
Your code does not have to be wrong to cause this. You may just have included a time-consuming command. Bootstrapping has caused this for me.
If your code is what it needs to be, it doesn't actually matter which line is the problem line, and you just need to give all of your code time to run. The main reasons to find out which is the problem line would be if some lines were expendable, or in case you were getting the asterisk for some other reason and needed to rule this one out.
If you are writing code on an internet service that times out when you aren't giving it input, your code might not have enough time to finish running if you just wait on it. Scrolling every few minutes is usually enough to keep those pages from timing out. | 3 | 59 | 0 | 0 | I use Windows 7, Python 2.7.9 plus latest version of IPython 3.1.
I ran %python inside an IPython Notebook and ran the cell; instead of returning the Python version, it did not run, jumped to a new line, and printed an In [*] instead of a line number. Now nothing runs in IPython; everything is ignored when I try to run a cell.
Anyone know what has happened? | What does In [*] in IPython Notebook mean and how to turn it off? | 0 | 1 | 1 | 0 | 0 | 114,689 |
30,421,373 | 2015-05-24T08:12:00.000 | 74 | 0 | 1 | 0 | 0 | python,ipython,jupyter | 0 | 30,421,412 | 0 | 3 | 0 | true | 0 | 0 | The kernel is busy. Go to the menu Kernel and click Interrupt. If this does not work click Restart. You need to go in a new cell and press Shift + Enter to see if it worked. | 3 | 59 | 0 | 0 | I use Windows 7, Python 2.7.9 plus latest version of IPython 3.1.
I ran %python inside an IPython Notebook and ran the cell; instead of returning the Python version, it did not run, jumped to a new line, and printed an In [*] instead of a line number. Now nothing runs in IPython; everything is ignored when I try to run a cell.
Anyone know what has happened? | What does In [*] in IPython Notebook mean and how to turn it off? | 0 | 1.2 | 1 | 0 | 0 | 114,689 |
30,427,300 | 2015-05-24T19:12:00.000 | 0 | 1 | 0 | 0 | 0 | javascript,php,python,email | 0 | 30,427,829 | 0 | 1 | 0 | false | 1 | 0 | What you're describing would be handled largely by your backend. If this were my project, I would choose the following simple route:
Store the messages the buyers/sellers send in your own database, then simply send notification emails when messages are sent. Have them reply to each other on your own site, like Facebook and eBay do.
An example flow would go like this:
(Gather the user and buyer's email addresses via registration)
Buyer enters a message and clicks 'Send Message' button on seller's page
Form is posted (via AJAX or via POST) to a backend script
Your backend code generates an email message
Sets the 'To' field to the seller
Your seller gets an email alert of the potential buyer which shows the buyer's message
The Seller then logs on to your site (made easy by a URL in the email) to respond
The Seller enters a response message on your site and hits 'Reply'
Buyer gets an email notification with the message body and a link to your site where they can compose a reply.
...and so on. So, replies would have to be authored on-site, rather than as an email 'reply.'
If you choose to go this route, there are some simple 3rd party "transactional email" providers, which is the phrase you'd use to find them. I've personally used SendGrid and found them easy to set up and use. They have a simple plug in for every major framework. There is also Mandrill, which is newer but gets good reviews as well. | 1 | 0 | 0 | 0 | I'm looking into what it would take to add a feature to my site so this is a pretty naive question.
I'd like to be able to connect buyers and sellers via an email message once the buyer clicks "buy".
I can see how I could do this in JavaScript, querying the user database and sending an email with both parties involved. What I'm wondering is if there's a better way I can do this, playing monkey-in-the-middle so they only receive an email from my site, and then it's automatically forwarded to the other party. That way they don't have to remember to hit reply-all, just reply. Also their email addresses remain anonymous.
Again assuming I generate a unique subject line with the transaction ID I could apply some rules here to just automatically forward the email from one party to the other but is there an API or library which can already do this for you? | Email API for connecting in a marketplace? | 1 | 0 | 1 | 0 | 1 | 74 |
30,428,292 | 2015-05-24T21:01:00.000 | 0 | 0 | 0 | 1 | 0 | python,sublimetext3,sublimetext,sublime-text-plugin | 0 | 30,431,423 | 0 | 1 | 0 | false | 0 | 0 | After another 5 hours of reading, I figured it out. As I assumed, it was a lack of Python knowledge on my part.
All I needed to do was create a module-level variable to use as a flag. | 1 | 0 | 0 | 0 | I'm creating a plugin for Sublime Text 3, and I've hit a snag that I can't figure out. This is my first time using Python, and the first time I've done event-driven desktop development in over a decade, so hopefully this is just a lack of knowledge on my part.
The plugin I'm writing uses text commands to gather data and then uses that data to call another text command that starts a subprocess that can run for a significant period of time depending on the arguments passed.
The following is some simplified code:
import os
import subprocess

import sublime
import sublime_plugin

class BlaOneCommand(sublime_plugin.TextCommand):
    def run(self, edit):
        commandArgs = []
        self.view.run_command('run_command', {"args": commandArgs})

class BlaTwoCommand(sublime_plugin.TextCommand):
    def run(self, edit):
        commandArgs = []
        self.view.run_command('run_command', {"args": commandArgs})

class BlaThreeCommand(sublime_plugin.TextCommand):
    def run(self, edit):
        commandArgs = []
        self.view.run_command('run_command', {"args": commandArgs})

class BlaRunCommand(sublime_plugin.TextCommand):
    def run(self, edit, args):
        self.commandArgs = args
        sublime.set_timeout_async(self.runCommand, 0)

    def runCommand(self):
        proc = None
        if os.name == 'nt':
            # Hide the console window on Windows.
            startupinfo = subprocess.STARTUPINFO()
            startupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW
            proc = subprocess.Popen(self.commandArgs, stdout=subprocess.PIPE,
                                    stderr=subprocess.PIPE, shell=False,
                                    startupinfo=startupinfo)
        else:
            proc = subprocess.Popen(self.commandArgs, stdout=subprocess.PIPE,
                                    stderr=subprocess.PIPE, shell=False)
        # Stream the subprocess output line by line until it exits.
        while proc.poll() is None:
            try:
                data = proc.stdout.readline().decode(encoding='UTF-8')
                print(data, end="")
            except:
                return
BlaOne, BlaTwo, & BlaThree are set up in a context menu, and what I need to do is disable some or all of them while the subprocess is running. I know this can be done by overriding the is_enabled method. However, I'm struggling with how to tie them all together.
How can I make all the objects aware of each other, so they can enable/disable each other? | How do you disable a text command from another text command in Sublime Text | 0 | 0 | 1 | 0 | 0 | 65 |
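A sketch of the module-level flag the self-answer describes, combined with the is_enabled override from the question; exactly how the author wired it up is an assumption:

```python
import sublime_plugin

_COMMAND_RUNNING = False  # module-level flag shared by all commands

class BlaOneCommand(sublime_plugin.TextCommand):
    def is_enabled(self):
        # Sublime grays out the menu entry while this returns False.
        return not _COMMAND_RUNNING

# Inside BlaRunCommand.runCommand, wrap the subprocess work:
#     global _COMMAND_RUNNING
#     _COMMAND_RUNNING = True
#     try:
#         ...  # spawn and poll the subprocess
#     finally:
#         _COMMAND_RUNNING = False
```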
30,437,566 | 2015-05-25T11:41:00.000 | 3 | 0 | 1 | 0 | 0 | python,string | 0 | 56,108,440 | 0 | 3 | 0 | false | 0 | 0 | Split string on consecutive whitespace† at most maxsplit times††
† Resulting list will contain no leading or trailing empty strings ("") if the string has leading or trailing whitespace
†† Splits are made left to right. To split the other way (right to left), use the str.rsplit() method (requires Python 2.4+)
Python 2
str.split(sep[, maxsplit])
Use str.split(None, maxsplit)
Note:
Specifying sep as None ≝ not specifying sep
str.split(None, -1) ≝ str.split() ≝ str.split(None)
Python 3
str.split(sep=None, maxsplit=-1)
Option A: Stick with positional arguments (Python 2 option): str.split(None, maxsplit)
>>> ' 4 2 0 '.split(None, 420)
['4', '2', '0']
Option B (personal preference, using keyword arguments): str.split(maxsplit=maxsplit)
>>> ' 4 2 0 '.split(maxsplit=420)
['4', '2', '0'] | 1 | 14 | 0 | 0 | I want to split a string on whitespaces (default behavior), but I want it to split it only once - I.e. I want it to return an array with 2 items at most.
If it is not possible - i.e. if for specifying the limit I have to also specify the pattern - could you please tell how to specify the default one? | Python: str.split() - is it possible to only specify the "limit" parameter? | 1 | 0.197375 | 1 | 0 | 0 | 17,995 |
30,441,107 | 2015-05-25T14:57:00.000 | 0 | 0 | 0 | 0 | 0 | python,file,memory,merge | 0 | 30,441,330 | 0 | 3 | 0 | false | 0 | 0 | As @Marc B said, reading one row at a time is the solution.
About the join I would do the following (pseudocode: I don't know python).
"Select distinct Model from A" on first file A.csv
Read all rows, search for Model field and collect distinct values in a list/array/map
"Select distinct Model from B" on second file B.csv
Same operation as 1, but using another list/array/map
Find matching models
Compare the two lists/arrays/maps finding only matching models (they will be part of the join)
Do the join
Read the rows of file A which match a model, read all the rows of file B which match the same model, and write a file C with the join result. Do this for all models.
Note: it's not particularly optimized.
For point 2 just choose a subset of matching models and/or read a part of rows of file A and/or B with maching models. | 1 | 0 | 1 | 0 | I have a large A.csv file (~5 Gb) with several columns. One of the columns is Model.
There is another large B.csv file (~15 Gb) with Vendor, Name and Model columns.
Two questions:
1) How can I create result file that combines all columns from A.csv and corresponding Vendor and Name from B.csv (join on Model). The trick is - how to do it when my RAM is 4 Gb only, and I'm using python.
2) How can I create a sample (say, 1 Gb) result file that combines random subsample from A.csv (all columns) joined with Vendor and Name from B.csv. The trick is, again, in 4 Gb of RAM.
I know how to do it in pandas, but 4 Gb is a limiting factor I can't overcome ( | Concatenate large files in sql-like way with limited RAM | 0 | 0 | 1 | 1 | 0 | 862 |
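A sketch of the streaming hash-join the answer outlines, assuming the distinct Model -> (Vendor, Name) mapping from B.csv fits in 4 Gb of RAM even though the full file does not:

```python
import csv

lookup = {}
with open("B.csv", newline="") as f:
    for row in csv.DictReader(f):
        # Keep only the three needed columns, one entry per Model.
        lookup.setdefault(row["Model"], (row["Vendor"], row["Name"]))

with open("A.csv", newline="") as fa, open("C.csv", "w", newline="") as fc:
    reader = csv.DictReader(fa)
    writer = csv.writer(fc)
    writer.writerow(reader.fieldnames + ["Vendor", "Name"])
    for row in reader:  # stream A one row at a time
        vendor, name = lookup.get(row["Model"], ("", ""))
        writer.writerow([row[c] for c in reader.fieldnames] + [vendor, name])
```

For the 1 Gb sample in point 2, the same loop can simply keep each row of A with some probability (e.g. random.random() < 0.2) before writing it.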
30,448,027 | 2015-05-26T01:20:00.000 | 1 | 0 | 0 | 0 | 0 | python,web,scrapy | 0 | 30,448,526 | 0 | 1 | 0 | false | 1 | 0 | Item refers to an item of data that is scraped. You can also call it a record or an entry.
Spider is the thing that does crawling (starting requests and following links) and scraping (extracting data items from responses). A spider can schedule as many requests and extract as many items as you want; there isn't any limit.
Item pipelines are an abstraction to process the items that are extracted by a spider. The idea is that you can combine different "pipes" through which the data items will come through, and then you'll arrange them in a way that will accomplish whatever you need. Examples of use cases for pipelines are applying validation constraints, saving data into a database, doing some clean-up on the data (e.g., remove HTML tags), etc.
So, recapping:
Spiders extract data items, which Scrapy sends one by one to the configured item pipelines (if any are configured) to do post-processing on the items. | 1 | 1 | 0 | 0 | I'm new to Scrapy. There are some things that confuse me: what's the relationship between spiders, pipelines and items?
1. Should one pipeline handle only one specific item, or can it handle multiple items?
2. How do I use one spider to crawl multiple items, or should I use one spider just to crawl one item? | can one spider handle multiple items and multiple pipelines? | 0 | 0.197375 | 1 | 0 | 0 | 722 |
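A minimal sketch of a pipeline receiving every item a spider yields and dispatching on the item's type; the item class is hypothetical:

```python
import scrapy

class ProductItem(scrapy.Item):  # hypothetical item type
    name = scrapy.Field()

class CleanupPipeline(object):
    def process_item(self, item, spider):
        # One pipeline sees all items; branch on type only if
        # different items need different handling.
        if isinstance(item, ProductItem):
            item["name"] = item["name"].strip()
        return item
```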
30,464,163 | 2015-05-26T16:32:00.000 | 12 | 0 | 0 | 0 | 0 | python,haskell,functional-programming,ocaml,sml | 0 | 30,464,615 | 0 | 4 | 0 | false | 0 | 0 | You have to keep track of the nodes you visit. Lists are not king in the ML family, they're just one of the oligarchs. You should just use a set (tree based) to track the visited nodes. This will add a log factor compared to mutating the node state, but is so much cleaner it's not funny. If you know more about your nodes you can possibly eliminate the log factor by using a set not based on a tree (a bit vector say). | 2 | 26 | 0 | 0 | Functional depth first search is lovely in directed acyclic graphs.
In graphs with cycles however, how do we avoid infinite recursion? In a procedural language I would mark nodes as I hit them, but let's say I can't do that.
A list of visited nodes is possible, but will be slow because using one will result in a linear search of that list before recurring. A better data structure than a list here would obviously help, but that's not the aim of the game, because I'm coding in ML - lists are king, and anything else I will have to write myself.
Is there a clever way around this issue? Or will I have to make do with a visited list or, god forbid, mutable state? | Functional Breadth First Search | 1 | 1 | 1 | 0 | 0 | 5,901 |
30,464,163 | 2015-05-26T16:32:00.000 | 3 | 0 | 0 | 0 | 0 | python,haskell,functional-programming,ocaml,sml | 0 | 30,465,604 | 0 | 4 | 0 | false | 0 | 0 | It is pretty OK to have mutable state hidden inside the function. If it is not visible, then it doesn't exist. I usually use hash sets for this. But in general, you should only resort to this if your profiling pinpointed it. Otherwise, just use a set data structure. OCaml has an excellent Set based on eagerly balanced AVL trees. | 2 | 26 | 0 | 0 | Functional depth first search is lovely in directed acyclic graphs.
In graphs with cycles however, how do we avoid infinite recursion? In a procedural language I would mark nodes as I hit them, but let's say I can't do that.
A list of visited nodes is possible, but will be slow because using one will result in a linear search of that list before recurring. A better data structure than a list here would obviously help, but that's not the aim of the game, because I'm coding in ML - lists are king, and anything else I will have to write myself.
Is there a clever way around this issue? Or will I have to make do with a visited list or, god forbid, mutable state? | Functional Breadth First Search | 1 | 0.148885 | 1 | 0 | 0 | 5,901 |
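For comparison, a cycle-safe Python BFS with the explicit visited set both answers recommend; `graph` is assumed to map each node to an iterable of neighbours:

```python
from collections import deque

def bfs(graph, start):
    visited, order = {start}, []
    queue = deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for nxt in graph[node]:
            if nxt not in visited:   # the cycle guard
                visited.add(nxt)
                queue.append(nxt)
    return order

print(bfs({"a": ["b"], "b": ["a", "c"], "c": []}, "a"))  # ['a', 'b', 'c']
```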
30,464,980 | 2015-05-26T17:18:00.000 | 4 | 1 | 0 | 1 | 0 | python,linux,macos,centos,version | 0 | 30,465,953 | 0 | 12 | 0 | false | 0 | 0 | As someone mentioned in a comment, you can use which python if it is supported by CentOS. Another command that could work is whereis python. In the event neither of these work, you can start the Python interpreter, and it will show you the version, or you could look in /usr/bin for the Python files (python, python3 etc). | 4 | 75 | 0 | 0 | I just started setting up a centos server today and noticed that the default version of python on centos is set to 2.6.6. I want to use python 2.7 instead. I googled around and found that 2.6.6 is used by system tools such as YUM so I should not tamper with it. Then I opened up a terminal on my mac and found that I had python 2.6.8 and 2.7.5 and 3.3.3 installed. Sorry for the long story. In short I just want to know how to lookup all the version of python installed on centos so I don't accidentally install it twice. | How to check all versions of python installed on osx and centos | 0 | 0.066568 | 1 | 0 | 0 | 184,308 |
30,464,980 | 2015-05-26T17:18:00.000 | 4 | 1 | 0 | 1 | 0 | python,linux,macos,centos,version | 0 | 58,172,493 | 0 | 12 | 0 | false | 0 | 0 | COMMAND: python --version && python3 --version
OUTPUT:
Python 2.7.10
Python 3.7.1
ALIAS COMMAND: pyver
OUTPUT:
Python 2.7.10
Python 3.7.1
You can make an alias like "pyver" in your .bashrc file or else using a text accelerator like AText maybe. | 4 | 75 | 0 | 0 | I just started setting up a centos server today and noticed that the default version of python on centos is set to 2.6.6. I want to use python 2.7 instead. I googled around and found that 2.6.6 is used by system tools such as YUM so I should not tamper with it. Then I opened up a terminal on my mac and found that I had python 2.6.8 and 2.7.5 and 3.3.3 installed. Sorry for the long story. In short I just want to know how to lookup all the version of python installed on centos so I don't accidentally install it twice. | How to check all versions of python installed on osx and centos | 0 | 0.066568 | 1 | 0 | 0 | 184,308 |
30,464,980 | 2015-05-26T17:18:00.000 | 20 | 1 | 0 | 1 | 0 | python,linux,macos,centos,version | 0 | 56,606,519 | 0 | 12 | 0 | false | 0 | 0 | We can directly see all the Pythons installed, both by the current user and by root, with the following:
whereis python | 4 | 75 | 0 | 0 | I just started setting up a centos server today and noticed that the default version of python on centos is set to 2.6.6. I want to use python 2.7 instead. I googled around and found that 2.6.6 is used by system tools such as YUM so I should not tamper with it. Then I opened up a terminal on my mac and found that I had python 2.6.8 and 2.7.5 and 3.3.3 installed. Sorry for the long story. In short I just want to know how to lookup all the version of python installed on centos so I don't accidentally install it twice. | How to check all versions of python installed on osx and centos | 0 | 1 | 1 | 0 | 0 | 184,308 |
30,464,980 | 2015-05-26T17:18:00.000 | 7 | 1 | 0 | 1 | 0 | python,linux,macos,centos,version | 0 | 30,466,232 | 0 | 12 | 0 | true | 0 | 0 | Use, yum list installed command to find the packages you installed. | 4 | 75 | 0 | 0 | I just started setting up a centos server today and noticed that the default version of python on centos is set to 2.6.6. I want to use python 2.7 instead. I googled around and found that 2.6.6 is used by system tools such as YUM so I should not tamper with it. Then I opened up a terminal on my mac and found that I had python 2.6.8 and 2.7.5 and 3.3.3 installed. Sorry for the long story. In short I just want to know how to lookup all the version of python installed on centos so I don't accidentally install it twice. | How to check all versions of python installed on osx and centos | 0 | 1.2 | 1 | 0 | 0 | 184,308 |
30,493,289 | 2015-05-27T21:16:00.000 | 3 | 0 | 0 | 0 | 0 | python,django | 0 | 30,493,396 | 0 | 2 | 0 | true | 1 | 0 | This seems like a bad idea.
Out of the box, the information architecture presented by the admin interface is going to be very flat, with views essentially mirroring what's in the database 1:1. I can imagine a slim subset of users in internal IT apps or similar for whom that may be appropriate, but the compromises in usability are serious unless you modify the admin interface so much that you'll probably wish you had built your app the traditional way by the time you were done.
If usability and information architecture are not serious concerns or requirements for your app, then you may proceed apace. | 1 | 0 | 0 | 1 | I am about to begin work on a project that will use Django to create a system with three tiers of users. Each user will login into the dashboard type interface (each user will have different types of tools on the dashboard). There will be a few CRUD type interfaces for each user tier among other things. Only users with accounts will be able to interact with the system (anyone visiting is greeted with a login screen).
It seems that many people recommend simply modifying the default Admin app to fit the requirements. Is this an ideal solution, and if so, how do I set things up so the admin interface is at the site's root (instead of admin/)? Also, any documentation on in-depth and secure modification of the admin interface (along with the addition of different user tiers) would be appreciated. | Django 1.8, Using Admin app as the main site | 0 | 1.2 | 1 | 0 | 0 | 165 |
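For the root-mounting part of the question, a sketch of a Django 1.8-style urls.py; mounting the admin at '^' is the assumption being illustrated, not a recommendation:

```python
from django.conf.urls import include, url
from django.contrib import admin

urlpatterns = [
    # Serve the admin at the site root instead of under admin/.
    url(r'^', include(admin.site.urls)),
]
```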
30,495,979 | 2015-05-28T01:59:00.000 | 0 | 0 | 0 | 0 | 0 | python,django,authentication | 0 | 60,831,884 | 0 | 2 | 0 | false | 1 | 0 | You should just use Django's Groups mechanism: create groups like userA and, let's say, common, then check whether the user is in the first or second group and show them the appropriate view | 2 | 22 | 0 | 0 | I am creating a Django site that will have 4 types of users:
Super Admin: Essentially manages all types of users
UserTypeA: Can login to site and basically will have a CRUD interface for some arbitrary model. Also has extra attributes (specific info that only pertains to TypeA users)
UserTypeB: Can login to site and has extra information that pertains specifically to TypeB users, also can create and manage TypeC user accounts
UserTypeC: Can login to site and has extra information that pertains only to TypeC users.
Noting that I plan on creating a custom interface and not utilizing the built-in admin interface. What would be the best method for implementing this user structure? I would prefer to use Django's built-in authentication system (which manages password salting and hashing, along with session management etc), but I would be open to using some other authentication system if it is secure and production ready.
EDIT: Also, my plan is to use 1 log-in screen to access the site and utilize the permissions and user type to determine what will be available on each user's dashboard.
EDIT: Would it be possible to accomplish this by using an app for each user type, and if so how would this be done with a single log-in screen. | Django 1.8, Multiple Custom User Types | 1 | 0 | 1 | 0 | 0 | 14,610 |
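One common sketch of the single-login idea (an assumption, not the only viable pattern): one auth user plus a profile recording the tier, which the dashboard view branches on:

```python
from django.conf import settings
from django.db import models

class Profile(models.Model):
    USER_TYPES = (("A", "Type A"), ("B", "Type B"), ("C", "Type C"))
    user = models.OneToOneField(settings.AUTH_USER_MODEL)
    user_type = models.CharField(max_length=1, choices=USER_TYPES)

# In the dashboard view, branch on request.user.profile.user_type
# to decide which tools to render.
```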
30,503,356 | 2015-05-28T10:04:00.000 | 2 | 0 | 0 | 0 | 0 | python,json,python-2.7,data-modeling,ietf-netmod-yang | 0 | 30,657,742 | 0 | 2 | 0 | false | 0 | 0 | YANG is a modeling language, not a data generation language.
What you are asking for is a simulator that contains the same (or pseudo) logic as your application in order to generate data. | 1 | 5 | 0 | 0 | How do I convert a "YANG" data model to "JSON"?
There are many docs available on the web in which the YANG syntax is converted to JSON, but how are the values for a leaf or leaf-list obtained? From where, and how, does the JSON get the actual data from the YANG model? | how to convert YANG data model to JSON data? | 0 | -1 | 1 | 0 | 1 | 13,938 |
30,516,510 | 2015-05-28T20:28:00.000 | 0 | 1 | 0 | 0 | 0 | python,python-2.7,python-unittest | 0 | 30,516,764 | 0 | 2 | 0 | false | 0 | 0 | unittest.shortDescription() takes no arguments. You would have to override it to get the entire docstring. | 1 | 1 | 0 | 0 | unittest.shortDescription() returns only the first line of the test method's docstring.
Is there a way to change this behavior, e.g. to display the entirety of the docstring, or to display another message ?
Would I need to override shortDescription() ?
EDIT: I did know that shortDescription() takes no arguments (besides the implicit object reference), but I was very unclear in the wording of my question. What I'm really looking for is pointers to how to override shortDescription() and get at, say, the entire contents of the docstring. Thanks ! | How to extend/customize unittest's shortDescription()? | 0 | 0 | 1 | 0 | 0 | 291 |
30,519,299 | 2015-05-29T00:34:00.000 | 0 | 0 | 0 | 0 | 1 | python,postgresql,amazon-rds,gevent | 0 | 30,519,353 | 0 | 2 | 0 | false | 1 | 0 | You could try this from within psql to get more details on query timing
EXPLAIN sql_statement (or EXPLAIN ANALYZE sql_statement to include actual run times)
Also turn on more database logging. MySQL has a slow query log; PostgreSQL's equivalent is the log_min_duration_statement setting. | 1 | 0 | 0 | 0 | First, the server setup:
nginx frontend to the world
gunicorn running a Flask app with gevent workers
Postgres database, connection pooled in the app, running from Amazon RDS, connected with psycopg2 patched to work with gevent
The problem I'm encountering is inexplicably slow queries that are sometimes running on the order of 100ms or so (ideal), but which often spike to 10s or more. While time is a parameter in the query, the difference between the fast and slow query happens much more frequently than a change in the result set. This doesn't seem to be tied to any meaningful spike in CPU usage, memory usage, read/write I/O, request frequency, etc. It seems to be arbitrary.
I've tried:
Optimizing the query - definitely valid, but it runs quite well locally, as well as any time I've tried it directly on the server through psql.
Running on a larger/better RDS instance - I'm currently working on an m3.medium instance with PIOPS and not coming close to that read rate, so I don't think that's the issue.
Tweaking the number of gunicorn workers - I thought this could be an issue, if the psycopg2 driver is having to context switch excessively, but this had no effect.
More - I've been working for a decent amount of time at this, so these were just a couple of the things I've tried.
Does anyone have ideas about how to debug this problem? | Inconsistently slow queries in production (RDS) | 0 | 0 | 1 | 1 | 0 | 1,181 |
30,519,737 | 2015-05-29T01:34:00.000 | 3 | 0 | 1 | 0 | 0 | python,python-3.x | 0 | 30,519,763 | 0 | 3 | 0 | false | 0 | 0 | You can't change the values in tuples, tuples are immutable. You would need to make them be lists or create a new tuple with the value you you want and store that. | 1 | 11 | 0 | 0 | Suppose that I have tuples of the form
[(('d',0),('g',0)),(('d',0),('d',1)),(('i',0),('g',0))]
Then how do I increment the numbers inside the tuples so that they are of the form:
[(('d',1),('g',1)),(('d',1),('d',2)),(('i',1),('g',1))]
?
I am able to do this in a single for loop. But I am looking for shorter methods.
P.S. You are allowed to create new tuples | how to increment inside tuple in python? | 0 | 0.197375 | 1 | 0 | 0 | 13,989 |
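One shorter method, applied to the question's data: rebuild each inner tuple in a nested comprehension, since tuples can't be mutated in place:

```python
data = [(('d', 0), ('g', 0)), (('d', 0), ('d', 1)), (('i', 0), ('g', 0))]

bumped = [tuple((ch, n + 1) for ch, n in pair) for pair in data]
print(bumped)
# [(('d', 1), ('g', 1)), (('d', 1), ('d', 2)), (('i', 1), ('g', 1))]
```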
30,533,263 | 2015-05-29T15:18:00.000 | 3 | 0 | 1 | 1 | 0 | python,vim,nerdtree,python-mode,netrw | 0 | 30,533,662 | 0 | 3 | 0 | true | 0 | 0 | But having a file opened, if I open netrw by typing :E and open another file by hitting <enter> VIM closes the old one and opens the new one in the same window.
[...]
How can I open multiple files/buffers in the same window using netrw?
Buffers are added to a list, the buffer list, and facultatively displayed in one or more window in one or more tab pages.
Since a window can only display one buffer, the only way to see two separate buffers at the same time is to display them in two separate windows. That's what netrw's o and v allow you to do.
When you use <CR>to edit a file, the previous buffer doesn't go away: it is still in the buffer list and can be accessed with :bp[revious]. | 1 | 2 | 0 | 0 | I have recently switched to VIM using NERDTree and python-mode. As NERDTree seems to have a conflict with python-mode and breaks my layout if I close one out of multiple buffers, I decided to switch to netrw since it is shipped with VIM anyway.
But having a file opened, if I open netrw by typing :E and open another file by hitting <enter> VIM closes the old one and opens the new one in the same window. And if I hit <o> in the same situation VIM adds another buffer but adds a new window in a horizontal split.
How can I add multiple files/buffers to the buffer list and only show the last added buffer in the active window (without new splits) using netrw? #edited#
Thanks in advance! I hope I haven't missed something trivial from the manual.. ;-) | VIM + netrw: open multiple files/buffers in the same window | 0 | 1.2 | 1 | 0 | 0 | 3,905 |
30,538,356 | 2015-05-29T20:17:00.000 | 1 | 1 | 0 | 0 | 0 | python,robotframework | 0 | 30,659,802 | 0 | 2 | 0 | false | 0 | 0 | I took a relatively quick look through the sources, and it seems that the execution context does have any reference to currently executing keyword. So, the only way I can think of resolving this is:
Your library needs also to be a listener, since listeners get events when a keyword is started
You need to go through robot.libraries.BuiltIn.EXECUTION_CONTEXT._kw_store.resources to find out which resource file contains the keyword currently executing.
I did not do a POC, so I am not sure whether this is actually doable, but that's the solution that comes to my mind currently.
30,543,041 | 2015-05-30T06:23:00.000 | 1 | 0 | 1 | 1 | 0 | python,macos | 0 | 30,543,046 | 0 | 1 | 0 | true | 0 | 0 | python is installed on OSX by default
and you just need to open terminal and write ‘python’ command, then you can start your python coding | 1 | 1 | 0 | 0 | I'm new to computer programming!
I want to learn python and write my program and run it on my mac OS X machine.
How can i setup python programming tools on OS X and how can i use that ?
I before this never use any other programming language. | How can i install python on OSX? | 1 | 1.2 | 1 | 0 | 0 | 36 |
30,547,102 | 2015-05-30T14:02:00.000 | 2 | 0 | 0 | 0 | 0 | python,django,python-2.7,numpy | 0 | 30,549,380 | 0 | 1 | 0 | true | 1 | 0 | I assume your question is about "how do I do these calculations in the restful framework for django?", but I think in this case you need to move away from that idea.
You did everything correctly but RESTful APIs serve resources -- basically your model.
A computation however is nothing like that. As I see it, you have two ways of achieving what you want:
1) Write a model that represents the results of a computation and is served using the RESTful framework, thus your computation being a resource (can work nicely if you store the results in your database as a way of caching)
2) Add a route/endpoint to your api, that is meant to serve results of that computation.
Path 1: Computation as Resource
Create a model, that handles the computation upon instantiation.
You could even set up an inheritance structure for computations and implement an interface for your computation models.
This way, when the resource is requested and the restful framework wants to serve this resource, the computational result will be served.
Path 2: Custom Endpoint
Add a route for your computation endpoints like /myapi/v1/taxes/compute.
In the underlying controller of this endpoint, you will load up the models you need for your computation, perform the computation, and serve the result however you like it (probably a json response).
You can still implement computations with the above mentioned inheritance structure. That way, you can instantiate the Computation object based on a parameter (in the above case taxes).
Does this give you an idea? | 1 | 1 | 0 | 0 | I have developed a RESTful API using the Django-rest-framework in python. I developed the required models, serialised them, set up token authentication and all the other due diligence that goes along with it.
I also built a front-end using Angular, hosted on a different domain. I setup CORS modifications so I can access the API as required. Everything seems to be working fine.
Here is the problem. The web app I am building is a financial application that should allow the user to run some complex calculations on the server and send the results to the front-end app so they can be rendered into charts and other formats. I do not know how or where to put these calculations.
I chose Django for the back-end as I expected that python would help me run such calculations wherever required. Basically, when I call a particular api link on the server, I want to be able to retrieve data from my database, from multiple tables if required, and use the data to run some calculations using python or a library of python (pandas or numpy) and serve the results of the calculations as response to the API call.
If this is a daunting task, I at least want to be able to use the API to retrieve data from the tables to the front-end, process the data a little using JS, and send it to a python function located on the server with this processed data, and this function would run the necessary complex calculations and respond with results which would be rendered into charts / other formats.
Can anyone point me to a direction to move from here? I looked for resources online but I think I am unable to find the correct keywords to search for them. I just want a shell code kind of a thing to integrate into my current backed using which I can call some python scripts that I write to run these calculations.
Thanks in advance. | Running complex calculations (using python/pandas) in a Django server | 1 | 1.2 | 1 | 0 | 0 | 2,516 |
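A sketch of the "custom endpoint" path: a plain Django view that pulls rows out of the ORM, crunches them with numpy, and returns JSON. The model and the tax formula are made up:

```python
import numpy as np
from django.db import models
from django.http import JsonResponse

class Holding(models.Model):       # hypothetical model
    value = models.FloatField()

def compute_taxes(request):
    values = np.array(Holding.objects.values_list("value", flat=True))
    total_tax = float(values.sum() * 0.30) if values.size else 0.0
    return JsonResponse({"total_tax": total_tax})
```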
30,553,766 | 2015-05-31T04:12:00.000 | 11 | 0 | 1 | 0 | 0 | python | 0 | 30,553,899 | 0 | 5 | 0 | true | 0 | 0 | Types aren't used the same way in Python as statically types languages. A hashable object is simply one with a valid hash method. The interpreter simply calls that method, no type checking or anything. From there on out, standard hash map principles apply: for an object to fulfill the contract, it must implement both hash and equals methods. | 1 | 8 | 0 | 0 | I understand that the following is valid in Python: foo = {'a': 0, 1: 2, some_obj: 'c'}
However, I wonder how the internals work. Does it treat everything (object, string, number, etc.) as an object? Does it type check to determine how to compute the hash code given a key? | Python: how does a dict with mixed key type work? | 0 | 1.2 | 1 | 0 | 0 | 16,230 |
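A sketch showing the contract in action: any object defining consistent __hash__ and __eq__ can sit next to strings and ints as a key:

```python
class Point(object):
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __hash__(self):
        return hash((self.x, self.y))          # called by the dict
    def __eq__(self, other):
        return isinstance(other, Point) and \
               (self.x, self.y) == (other.x, other.y)

foo = {'a': 0, 1: 2, Point(0, 0): 'c'}  # mixed key types coexist
print(foo[Point(0, 0)])                  # 'c'
```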
30,560,147 | 2015-05-31T17:05:00.000 | 0 | 0 | 0 | 0 | 1 | python,widget,pyqt,pyqt4 | 0 | 30,563,200 | 0 | 1 | 0 | false | 0 | 1 | Use a Qt layout (like a QVBoxLayout, QHBoxLayout, or a QGridLayout) | 1 | 0 | 0 | 0 | I'm working on developing a PyQt4 application that will require a lot of widgets and I have run into an issue. When you say where to move the widget to (such as: btn.move(100, 100) it moves it properly, but if you resize the window, you can't see it). I'm not sure how to fix this. I don't want to restrict resizing of the window from the user, but I can't have widgets not showing up on screen.
So if the user resizes the program window to 600x600, how can I have widgets automatically change their location? | How to update PyQt4 widget locations based on window size? | 0 | 0 | 1 | 0 | 0 | 42 |
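A minimal PyQt4 sketch of the layout approach: widgets placed in a layout are repositioned automatically when the window is resized, unlike absolute move() coordinates:

```python
import sys
from PyQt4 import QtGui

app = QtGui.QApplication(sys.argv)
win = QtGui.QWidget()
layout = QtGui.QVBoxLayout(win)      # the layout owns placement
layout.addWidget(QtGui.QPushButton("Button 1"))
layout.addWidget(QtGui.QPushButton("Button 2"))
win.resize(600, 600)
win.show()
sys.exit(app.exec_())
```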
30,564,015 | 2015-06-01T00:10:00.000 | 8 | 0 | 0 | 0 | 0 | python,distribution,point | 0 | 30,564,059 | 0 | 7 | 0 | false | 0 | 0 | FIRST ANSWER:
An easy solution would be to do a check to see if the result satisfies your equation before proceeding.
Generate x, y (there are ways to randomize into a select range)
Check if ((x−500)^2 + (y−500)^2 < 250000) is true
if not, regenerate.
The only downside would be inefficiency.
SECOND ANSWER:
OR, you could do something similar to riemann sums like for approximating integrals. Approximate your circle by dividing it up into many rectangles. (the more rectangles, the more accurate), and use your rectangle algorithm for each rectangle within your circle. | 1 | 14 | 1 | 0 | I am wondering how i could generate random numbers that appear in a circular distribution.
I am able to generate random points in a rectangular distribution such that the points are generated within the square of (0 <= x < 1000, 0 <= y < 1000):
How would I go about generating the points within a circle such that:
(x−500)^2 + (y−500)^2 < 250000 ? | How to generate random points in a circular distribution | 0 | 1 | 1 | 0 | 0 | 31,584 |
30,565,431 | 2015-06-01T03:57:00.000 | 2 | 0 | 0 | 0 | 0 | python,amazon-web-services,websocket,webserver,amazon-elastic-beanstalk | 0 | 30,565,453 | 0 | 1 | 0 | true | 1 | 0 | AWS doesn't "know" anything about your content. The webserver that you install will be configured to point to the "root" directory in which index.html (or something equivalent) should be.
Since it depends on which web framework (Django, Flask, Jinja, etc.) you install - you should look up its documentation! | 1 | 1 | 0 | 0 | I'm deploying a Python web server on AWS now and I have some questions about it. I'm using websockets to communicate between the back end and the front end.
Do I have to use a framework like Django or Flask?
If not, where should I put the index.html file? In other words, after deploying, how does AWS know the default page of my application?
Thanks in advance. | Deploy python web server on AWS Elastic Beanstalk | 0 | 1.2 | 1 | 0 | 0 | 284 |
30,575,409 | 2015-06-01T13:53:00.000 | 1 | 0 | 0 | 0 | 0 | python,django,django-queryset | 0 | 30,579,869 | 0 | 1 | 0 | true | 1 | 0 | You can reload each object at the start of the loop body. Just use TheModel.objects.get(pk=curr_instance.pk) to do this. | 1 | 0 | 0 | 0 | I have a queryset which I iterate through in a loop. I use data from the queryset to change data inside the queryset which might be needed in a later step of the loop.
Now the queryset is only loaded once at the beginning of the loop. How can I make Django reload the data in every iteration? | Django how to not use cached queryset while iterating? | 0 | 1.2 | 1 | 0 | 0 | 49
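A minimal sketch of the reload-per-iteration idea from the answer (the model and field names are hypothetical):

    for obj in MyModel.objects.filter(active=True):
        # re-fetch the row so changes made in earlier iterations are visible
        fresh = MyModel.objects.get(pk=obj.pk)
        fresh.counter += 1
        fresh.save()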
30,579,494 | 2015-06-01T17:23:00.000 | -1 | 1 | 0 | 0 | 0 | python,p2p | 0 | 30,579,803 | 0 | 1 | 0 | false | 0 | 0 | I think the simplest way to do this is to run a socket server inside the battleship game. But here is a problem: you will have trouble connecting when your IP is not visible from the internet (e.g., behind a NAT router). | 1 | 0 | 0 | 0 | This is a conceptual question. As part hobby, part art project, I'm looking to build a Python script that allows two people to play battleships between their computers (across the net, without being on the same network).
The idea would be you could run the program something like:
python battleships.py 192.168.1.1
Where the IP address would be the computer you wanted to do battle with.
I have some modest Python coding abilities but I'm curious how hard it would be to build this and how one might go about it?
One key goal is that it must require almost zero set-up: I'm hoping anyone can download the python script, open the terminal and play battleships with someone else.
Thanks! | Conceptual: how to code battleships between two computers in Python? | 0 | -0.197375 | 1 | 0 | 0 | 282 |
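A minimal sketch of the socket approach from the answer (one player hosts, the other connects to the IP given on the command line; the port number is arbitrary):

    import socket, sys

    PORT = 5005

    def host():
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(('', PORT))
        srv.listen(1)
        conn, addr = srv.accept()              # wait for the opponent
        conn.sendall(b'B4')                    # send a shot coordinate
        print(conn.recv(1024))                 # read the opponent's reply

    def join(ip):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.connect((ip, PORT))
        print(s.recv(1024))                    # read the opponent's shot
        s.sendall(b'miss')

    if len(sys.argv) > 1:
        join(sys.argv[1])                      # python battleships.py 192.168.1.1
    else:
        host()

The "zero set-up" goal only holds when at least one player's machine is reachable from the internet, which is the NAT caveat above.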
30,591,965 | 2015-06-02T09:16:00.000 | 1 | 0 | 0 | 0 | 1 | python,django,database-design | 0 | 30,592,097 | 0 | 1 | 0 | false | 1 | 0 | django.contrib.auth has groups and group permissions, so all you have to do is define landlords and tenants groups with the appropriate permissions, then in your model's save() method (or using signals or similar) add your Landlord and Tenant instances to their respective groups. | 1 | 0 | 0 | 0 | I know how permissions/groups/users work together in a "normal" way.
However, I feel uncomfortable doing it this way in my case; let me explain why.
In my Django models, all my users are extended with models like "Landlord" or "Tenant".
Every landlord will have the same permissions, and every tenant will share another set of permissions. So it seems to me there is no point in handling permissions on a user-by-user basis.
What I'd like to do is link my Tenant and Landlord models (not the instances) to lists of permissions (or groups).
Is there a way to do this? Am I missing something in my modeling? How would you do that? | Can I link Django permissions to a model class instead of User instances? | 1 | 0.197375 | 1 | 0 | 0 | 111
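A minimal sketch of the save()-hook approach from the answer (assumes the 'tenants' group was created beforehand and that the profile model has a user field):

    from django.contrib.auth.models import Group
    from django.db import models

    class Tenant(models.Model):
        user = models.OneToOneField('auth.User')

        def save(self, *args, **kwargs):
            super(Tenant, self).save(*args, **kwargs)
            # every Tenant's user joins the shared 'tenants' group
            self.user.groups.add(Group.objects.get(name='tenants'))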
30,592,411 | 2015-06-02T09:36:00.000 | 3 | 0 | 0 | 0 | 0 | python,html,boto,bottle | 0 | 30,595,791 | 0 | 1 | 0 | false | 1 | 0 | What I ended up doing to fix this issue is using Bottle to make a URL which runs the needed function. Then I just made an HTML button that links to the relevant URL. | 1 | 1 | 0 | 0 | I'm currently trying to write a Python script that turns off all of our EC2 instances overnight; then in the morning my QA team can go to a webpage and press a button to turn the instances back on.
I have written my Python script that turns the servers off using boto. I also have a function which, when run, turns them back on.
I have an html doc with buttons on it.
I'm just struggling to work out how to get these buttons to call the function. I'm using Bottle rather than Flask and I have no JavaScript experience. So I would like to avoid Ajax if possible. I don't mind if the whole page has to reload after the button is pressed. After the single press the webpage isn't needed anyway. | html buttons calling python functions using bottle | 0 | 0.53705 | 1 | 0 | 1 | 617
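A minimal sketch of the accepted approach (Bottle's route/run are real API; start_instances stands in for the existing boto function):

    from bottle import route, run

    def start_instances():
        pass                                   # stand-in for the existing boto call

    @route('/start')
    def start():
        start_instances()
        return 'Instances starting'

    @route('/')
    def index():
        # a plain HTML form button; no JavaScript or Ajax needed
        return '<form action="/start"><button>Start instances</button></form>'

    run(host='0.0.0.0', port=8080)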
30,595,908 | 2015-06-02T12:20:00.000 | 1 | 0 | 0 | 0 | 1 | python,user-interface,events,kivy | 0 | 56,684,592 | 0 | 3 | 1 | false | 0 | 1 | I've dealt with a similar problem, and creating a new thread didn't do the trick. I had to use the Clock.schedule_once(new_func) function. It schedules the function call for the next frame, so it is going to run almost immediately after the callback ends. | 1 | 9 | 0 | 0 | I am writing a Kivy UI for a cmd line utility I have developed. Everything works fine, but some of the processes can take from a few seconds to a few minutes to process, and I would like to provide some indication to the user that the process is running. Ideally, this would be in the form of a spinning wheel or loading bar or something, but even if I could update my display to show the user that a process is running, it would be better than what I have now.
Currently, the user presses a button in the main UI. This brings up a popup that verifies some key information with the user, and if they are happy with those options, they press a 'run' button. I have tried opening a new popup to tell them that the process is running, but because the display doesn't update until the process finishes, this doesn't work.
I have a lot of coding experience, but mostly in the context of math and engineering, so I am very new to the designing of UIs and having to handle events and threads. A simple self-contained example would be greatly appreciated. | Building a simple progress bar or loading animation in Kivy? | 0 | 0.066568 | 1 | 0 | 0 | 12,311 |
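A minimal, self-contained sketch combining a worker thread with Kivy's Clock so the UI stays responsive (the time.sleep stands in for the real work):

    import threading, time
    from kivy.app import App
    from kivy.clock import Clock
    from kivy.uix.button import Button

    class DemoApp(App):
        def build(self):
            self.btn = Button(text='Run')
            self.btn.bind(on_press=self.start_task)
            return self.btn

        def start_task(self, *args):
            self.btn.text = 'Running...'
            threading.Thread(target=self.slow_task).start()

        def slow_task(self):
            time.sleep(5)                      # stand-in for the slow process
            # UI updates must happen on the main thread, hence the Clock:
            Clock.schedule_once(lambda dt: setattr(self.btn, 'text', 'Done'))

    DemoApp().run()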
30,596,353 | 2015-06-02T12:38:00.000 | 1 | 0 | 1 | 0 | 0 | python,python-3.x,pyside | 0 | 38,494,564 | 0 | 1 | 1 | false | 0 | 0 | Right-click the file, click properties, under general it says "opens with:"...
Click the "Change" button to the right of that, and then click more options. On that menu there should be an option called "pythonw" click that. Then on the bottom-right click "apply", then "OK". Then just double-click on the file and it should run with no console window so you won't be able to see it running. | 1 | 1 | 0 | 0 | Someone gave me a python file to open and use as a resource. The only issue is I don't know anything about python, it's very different from my basic knowledge of coding.
The file is not a normal .py file, but rather a console-less .pyw file. I have imported the newest version of python and installed PySide, but I have had no successful attempts at opening the file.
I was wondering if someone might know how to open this kind of file? Does it need to be somewhere specific? | How to open PYW files in Windows 8 | 0 | 0.197375 | 1 | 0 | 0 | 3,111 |
30,603,407 | 2015-06-02T18:05:00.000 | 4 | 0 | 0 | 0 | 0 | python,ibm-cloud | 0 | 30,603,436 | 0 | 3 | 0 | false | 1 | 0 | The cause for this problem was that I was not correctly telling my Python app the needed configuration information when I pushed it out to Bluemix.
What I ended up having to do was add a requirements.txt file and a Procfile file into the root directory of my Python application, to draw that connection between my Python app and the needed libraries/packages.
In the requirements.txt file I specified the library packages needed by my Python app. These are the file contents:
web.py==0.37
wsgiref==0.1.2
where web.py==0.37 is the version of the web.py library that will be downloaded, and wsgiref==0.1.2 is the version of the web server gateway interface that is needed by the version of web.py I am using.
My Procfile contains the following information:
web: python myappname.py $PORT
where myappname is the name of my Python app, and $PORT is the port number that my Python app uses to receive requests.
I also found out that $PORT is optional: when I did not specify $PORT, my app ran with the port number from the VCAP_APP_PORT environment variable for my app.
From there it was just a matter of pushing my app out to Bluemix again only this time it ran fine. | 1 | 2 | 0 | 0 | My Python app needs web.py to run but I'm unable to figure out how to get it up to bluemix. I see no options using cf push. I tried to "import web" and added some additional code to my app without success.
When I push my Python app to bluemix without web.py it fails (naturally) since it does not have what it needs to run.
I'm sure I'm just missing an import mechanism. Any help? | How to import a 3rd party Python library into Bluemix? | 0 | 0.26052 | 1 | 0 | 0 | 1,450 |
30,610,892 | 2015-06-03T04:31:00.000 | 5 | 1 | 1 | 0 | 0 | python,api,github | 0 | 30,612,182 | 0 | 1 | 0 | true | 0 | 0 | Why do you need to post your API key? Why not post your app code to Github without your API key and have a configuration parameter for your users to add their own API key? | 1 | 7 | 0 | 0 | I'm writing a Python application that utilizes the Tumblr API and was wondering how I would go about hiding, or encrypting, the API key.
Github warns against pushing this information to a repo, so how would I make the application available to the public and still follow that policy? | API key encryption for Github? | 1 | 1.2 | 1 | 0 | 0 | 248 |
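A minimal sketch of the configuration-parameter idea from the answer: keep the key out of the committed code and read it at runtime (the variable name is arbitrary):

    import os

    API_KEY = os.environ.get('TUMBLR_API_KEY')
    if API_KEY is None:
        raise SystemExit('Set the TUMBLR_API_KEY environment variable first')

Each user supplies their own key, and nothing secret ever lands in the repository (a config file listed in .gitignore works the same way).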
30,615,536 | 2015-06-03T09:06:00.000 | 0 | 0 | 1 | 0 | 0 | python,algorithm,iterator,permutation | 0 | 30,615,998 | 0 | 2 | 0 | false | 0 | 0 | This is a generic issue rather than a Python-specific one. In most languages, even when iterators are used over structures, the whole structure is kept in memory. So, iterators are mainly used as "functional" tools and not as "memory-optimization" tools.
In Python, a lot of people end up using a lot of memory due to having really big structures (dictionaries etc.). However, all the variables and objects of the program will be stored in memory anyway. The only solution is serialization of the data (saving to the filesystem, a database, etc.).
So, in your case, you could create a customized function that would create the permutation. But, instead of adding each element of the permutation to a list, it would save the element in a file (or in a database with the corresponding structure). Then, you would be able to retrieve the elements one by one from the file (or the database), without bringing the whole list into memory.
However, as mentioned before, you will always have to know where in the permutation you currently are. To avoid retrieving all the stored elements from the database at once (which would create the same bottleneck), you could keep an index of your current position and fetch only the next element each time. | 1 | 6 | 1 | 0 | I'd like to create a random permutation of the numbers [1,2,...,N] where N is a big number. So I don't want to store all elements of the permutation in memory, but rather iterate over the elements of my particular permutation without holding former values in memory.
Any idea how to do that in Python? | Generate random permutation of huge list (in Python) | 0 | 0 | 1 | 0 | 0 | 1,933 |
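Beyond the serialization approach above, one named alternative (not in the answer) is an affine index mapping: i -> (a*i + b) mod N is a bijection on range(N) whenever gcd(a, N) == 1, so a pseudo-random permutation can be iterated lazily in O(1) memory. The trade-off is that it is only pseudo-random - it cannot reach every possible permutation. A minimal sketch (for N > 1):

    import random

    def _gcd(a, b):
        while b:
            a, b = b, a % b
        return a

    def lazy_permutation(n, seed=None):
        rng = random.Random(seed)
        a = rng.randrange(1, n)
        while _gcd(a, n) != 1:                 # gcd 1 makes the map a bijection
            a = rng.randrange(1, n)
        b = rng.randrange(n)
        for i in range(n):
            yield ((a * i + b) % n) + 1        # each of 1..N exactly once

    for value in lazy_permutation(10, seed=42):
        print(value)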
30,631,062 | 2015-06-03T21:25:00.000 | 1 | 0 | 0 | 0 | 0 | python,html,css,django | 0 | 30,631,241 | 0 | 3 | 0 | false | 1 | 0 | You can do this in many ways.
In general you need to return some variable from your view to the HTML and, depending on this variable, select a style sheet. If your variable matches your style sheet's name you can do "{{variable}}.css"; if not, you can use jQuery. | 1 | 4 | 0 | 0 | Sorry in advance if there is an obvious answer to this, I'm still learning the ropes with Django.
I'm creating a website which has 6 predetermined subjects (not stored in the DB):
english, civics, literature, language, history, bible
each subject is going to be associated with a unique color.
I've got a template for a subject.html page and a view that loads from the url appname/subject/subjectname
What I need to do is apply particular CSS to style the page according to the subject accessed. For example, if the user goes to appname/subject/english, I want the page to be "themed" to English.
I hope I've made myself clear, also I would like to know if there is a way I can add actual css code to the stylesheet and not have to change attributes one by one from the back-end.
thanks very much! | Changing css styles from view in Django | 1 | 0.066568 | 1 | 0 | 0 | 6,764 |
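A minimal sketch of the variable-based stylesheet selection from the answer (view and template fragments; the names are illustrative):

    # views.py
    from django.shortcuts import render

    VALID_SUBJECTS = {'english', 'civics', 'literature',
                      'language', 'history', 'bible'}

    def subject(request, name):
        if name not in VALID_SUBJECTS:
            name = 'english'                   # fallback theme
        return render(request, 'subject.html', {'subject': name})

Then in subject.html, load the matching per-subject stylesheet:

    <link rel="stylesheet" href="/static/css/{{ subject }}.css">

Each subject gets its own small CSS file (english.css, history.css, ...) holding its colour theme, so no attributes need changing one by one from the back-end.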
30,642,356 | 2015-06-04T11:12:00.000 | 5 | 0 | 0 | 0 | 0 | python,pandas,dataframe | 0 | 30,648,685 | 0 | 1 | 0 | true | 0 | 0 | Generally, creating a new object and binding it to a variable will allow the deletion of any object the variable previously referred to. del, mentioned in @EdChum's comment, removes the variable, and the object it referred to is freed once nothing else references it.
This is an over-simplification, but it will serve. | 1 | 2 | 1 | 0 | There are plenty of tips for dropping columns and rows depending on some condition.
But I want to drop the whole dataframe created in pandas.
like in R : rm(dataframe) or in SQL: drop table
This will help release the RAM it was using. | how to drop dataframe in pandas? | 0 | 1.2 | 1 | 0 | 0 | 9,398
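A minimal sketch of the del approach (gc.collect() is optional but makes the reclamation prompt):

    import gc
    import pandas as pd

    df = pd.DataFrame({'a': range(10 ** 6)})
    del df                                     # remove the name; the object is
    gc.collect()                               # freed once nothing references it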
30,649,428 | 2015-06-04T16:35:00.000 | 0 | 0 | 0 | 0 | 0 | python,django,url-routing | 0 | 30,649,463 | 0 | 1 | 0 | true | 1 | 0 | Well, that really is not how it works. Each view is separate and is only called from the URLs that map to it. If you have shared code, you probably want to either factor it out into separate functions that you can call from each view, or use something like a template tag or context processor to add the relevant information to the template automatically. | 1 | 0 | 0 | 0 | I was wondering how to call my index(request) function that's in views.py upon every page reload. Currently index(request) only gets called when the app originally loads. Every other page reload after that calls another function in views.py called filter_report(request). The problem I am running into is that 85% of the code in filter_report(request) is also in index(request) and from my understanding you don't really want 2 functions that do a lot of the same stuff. What I would like to do is take that 15% of code that isn't in index(request) but is in filter_report(request) and split it into different methods and just have index(request) call those other methods based on certain conditionals. | Django: call index function on page reload | 0 | 1.2 | 1 | 0 | 0 | 519
30,655,378 | 2015-06-04T22:35:00.000 | 0 | 0 | 1 | 0 | 1 | python,arrays,numpy,multidimensional-array,netcdf | 0 | 30,713,394 | 0 | 1 | 0 | false | 0 | 0 | After talking to a few people where I work we came up with this solution:
First we made an array of zeros using the following call:
array1=np.zeros((28,5,24,4))
Then we assigned into this array by specifying where in the array we wanted the change:
array1[:,0,0,0]=list1
This inserted the values of the list along the elevation axis for the first latitude, time, and variable.
Next, to write the array to a netCDF file, I created a netCDF file in the same program where I made the array, made a single variable, and gave it values like this:
netcdfvariable[:]=array1
Hope that helps anyone who finds this. | 1 | 0 | 1 | 0 | This question has potentially two parts but maybe only one if the first part can be encapsulated by the second. I am using Python with numpy and netCDF4
First:
I have four lists of different variable values (hereafter referred to as elevation values), each of which has a length of 28. These four lists form one set for each of 5 different latitude values, which in turn form one set for each of the 24 different time values.
So 24 times...each time with 5 latitudes...each latitude with four lists...each list with 28 values.
I want to create an array with the following dimensions (elevation, latitude, time, variable)
In words, I want to be able to specify which of the four lists I access, which index in the list, and specify a specific time and latitude. So an index into this array would look like this:
array(0,1,2,3) where 0 specifies the first index of the 4th list specified by the 3, 1 specifies the 2nd latitude, and 2 specifies the 3rd time, and the output is the value at that point.
I won't include my code for this part since literally the only things worth mentioning are the lists:
list1=[...]
list2=[...]
list3=[...]
list4=[...]
How can I do this, is there an easier structure for the array, or is there anything else I am missing?
Second:
I have created a netCDF file with variables with these four dimensions. I need to set those variables to the array structure made above. I have no idea how to do this, and the netCDF4 documentation only covers a 1-d array in a fairly cryptic way. If the arrays can be written directly into the netCDF file, bypassing the need to use numpy first, by all means show me how.
Thanks! | Creating and Storing Multi-Dimensional Array in a netCDF File | 1 | 0 | 1 | 0 | 0 | 1,886 |
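A minimal end-to-end sketch of the accepted approach using the netCDF4 package (dimension and variable names are illustrative):

    import numpy as np
    from netCDF4 import Dataset

    array1 = np.zeros((28, 5, 24, 4))
    array1[:, 0, 0, 0] = range(28)             # fill one elevation profile

    nc = Dataset('output.nc', 'w')
    nc.createDimension('elevation', 28)
    nc.createDimension('latitude', 5)
    nc.createDimension('time', 24)
    nc.createDimension('variable', 4)
    var = nc.createVariable('data', 'f8',
                            ('elevation', 'latitude', 'time', 'variable'))
    var[:] = array1                            # write the whole 4-d array at once
    nc.close()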
30,669,568 | 2015-06-05T14:51:00.000 | 1 | 0 | 0 | 0 | 0 | python,sql,django,git,workflow | 0 | 30,669,779 | 0 | 4 | 0 | false | 1 | 0 | You should track migrations. The only thing that you must keep an eye out for is at branch merge. If everyone uses a feature branch and develops on his branch then the changes are applied once the branch is integrated. At that point (pull request time or integration time) you need to make sure that the migrations make sense and if not fix them. | 2 | 1 | 0 | 0 | Suppose you write a Django website and use git to manage the source code. Your website has various instances (one for each developer, at least).
When you perform a change on the model in a commit, everybody needs to update their own database. In some cases it is enough to run python manage.py migrate; in some other cases you need to run a few custom SQL queries and/or run some Python code to update values at various places.
How to automate this? Is there a clean way to bundle these "model updates" (for instance small shell scripts that do the appropriate actions) in the associated commits? I have thought about using git hooks for that, but as the code to be run changes over time, it is not clear to me how to use them for that purpose. | How to track Django model changes with git? | 1 | 0.049958 | 1 | 0 | 0 | 216 |
30,669,568 | 2015-06-05T14:51:00.000 | 4 | 0 | 0 | 0 | 0 | python,sql,django,git,workflow | 0 | 30,669,896 | 0 | 4 | 0 | true | 1 | 0 | All changes to models should be in migrations. If you "need to run a few custom SQL queries and/or run some Python code to update values" then those are migrations too, and should be written in a migration file. | 2 | 1 | 0 | 0 | Suppose you write a Django website and use git to manage the source code. Your website has various instances (one for each developer, at least).
When you perform a change on the model in a commit, everybody needs to update their own database. In some cases it is enough to run python manage.py migrate; in some other cases you need to run a few custom SQL queries and/or run some Python code to update values at various places.
How to automate this? Is there a clean way to bundle these "model updates" (for instance small shell scripts that do the appropriate actions) in the associated commits? I have thought about using git hooks for that, but as the code to be run changes over time, it is not clear to me how to use them for that purpose. | How to track Django model changes with git? | 1 | 1.2 | 1 | 0 | 0 | 216 |
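A minimal sketch of a data migration as the second answer describes, using Django's migrations.RunPython so custom SQL/Python updates travel with the commit (app, model, and field names are illustrative):

    from django.db import migrations

    def fill_defaults(apps, schema_editor):
        # use the historical model, not a direct import
        Article = apps.get_model('myapp', 'Article')
        Article.objects.filter(status='').update(status='draft')

    class Migration(migrations.Migration):
        dependencies = [('myapp', '0002_add_status_field')]
        operations = [migrations.RunPython(fill_defaults)]

Everyone then just runs python manage.py migrate after pulling, and both schema and data changes are applied.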
30,699,683 | 2015-06-07T23:56:00.000 | 0 | 0 | 1 | 1 | 0 | python,exe,py2exe | 0 | 30,699,958 | 0 | 2 | 0 | false | 0 | 0 | Py2exe is a tool that produces an exe application which can be run without installing a Python interpreter; after packaging you find your exe and the DLLs of the interpreter and all modules in the dist folder. It does not provide an all-in-one exe; use PyInstaller instead (its --onefile option bundles everything into a single executable). | 1 | 0 | 0 | 0 | After I make an exe file, there are many files, such as .pyd files, that my exe depends on.
I want to make a program with only one exe file, which will be handy.
please help me | PYTHON py2exe makes too many files.. how do I execute only one .EXE file? | 0 | 0 | 1 | 0 | 0 | 280 |
30,713,927 | 2015-06-08T15:58:00.000 | 0 | 0 | 1 | 0 | 0 | python,terminal,ipython | 0 | 30,714,067 | 0 | 1 | 0 | false | 0 | 0 | You can't make os.system or subprocess unavailable, and users could use these to build themselves terminals even if you disable the built-in terminals. However, if you run the IPython instance in a sandbox then it won't matter that they have command line access. | 1 | 0 | 0 | 0 | I am using IPython notebook to build an online interactive teaching website, but I don't want users to run the command line. Any idea how to remove the IPython notebook command line function? Is there any configuration or something? I have been stuck here for 3 days! | How to stop the iPython notebook to run the command line, run only python code | 0 | 0 | 1 | 0 | 0 | 750
30,724,143 | 2015-06-09T06:05:00.000 | 0 | 0 | 0 | 0 | 0 | python,mysql,sql,sql-server,csv | 0 | 30,724,975 | 0 | 1 | 0 | true | 0 | 0 | My answer is to work with bulk-insert.
1. Make sure you have bulk-admin permission on the server.
2. Use a SQL authentication login for the bulk-insert operation (for me, Windows authentication logins have mostly not worked). | 1 | 0 | 1 | 0 | I have tried importing a CSV file using bulk insert but it failed. Is there another way in a query to import a CSV file without using bulk insert?
So far this is my query, but it uses bulk insert:
bulk insert [dbo].[TEMP]
from 'C:\Inetpub\vhosts\topimerah.org\httpdocs\SNASPV0374280960.txt'
with (firstrow = 2, fieldterminator = '~', rowterminator = '\n'); | how to import file csv without using bulk insert query? | 1 | 1.2 | 1 | 1 | 0 | 777
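Since the question is tagged python, a hedged alternative that avoids bulk insert entirely: read the file with the csv module and insert via pyodbc (the connection string and placeholder count are assumptions to adapt):

    import csv
    import pyodbc

    conn = pyodbc.connect('DRIVER={SQL Server};SERVER=localhost;'
                          'DATABASE=mydb;UID=user;PWD=secret')
    cur = conn.cursor()
    with open(r'C:\Inetpub\vhosts\topimerah.org\httpdocs\SNASPV0374280960.txt') as f:
        reader = csv.reader(f, delimiter='~')
        next(reader)                           # skip the header row (firstrow=2)
        rows = list(reader)
    cur.executemany('INSERT INTO dbo.TEMP VALUES (?, ?, ?)', rows)
    conn.commit()

This is slower than bulk insert but does not require bulk-admin permission.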
30,736,765 | 2015-06-09T15:41:00.000 | 2 | 0 | 1 | 0 | 0 | python,multithreading,numpy,intel,intel-mkl | 0 | 30,737,140 | 0 | 1 | 0 | true | 0 | 0 | Upon further investigation, it looks like you are able to set the environment variable MKL_NUM_THREADS to achieve this. | 1 | 2 | 1 | 0 | I just installed an Intel MKL-optimized version of scipy and when running my benchmarks, I got remarkable speedup with it. I then looked closer and found out it was running on 20 cores ... how do I restrict it to single threaded mode? Is there a way I could have installed it to single threaded mode by default, while leaving the option open to run on a specified number of cores? | Restrict MKL optimized scipy to single thread | 0 | 1.2 | 1 | 0 | 0 | 225
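A minimal sketch: the variable must be set before numpy/scipy load MKL, so either export it in the shell or set it at the very top of the script:

    import os
    os.environ['MKL_NUM_THREADS'] = '1'        # must come before the imports below

    import numpy as np
    import scipy.linalg

To keep the option of more cores open per run, setting it in the shell works too: MKL_NUM_THREADS=4 python benchmark.py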
30,769,851 | 2015-06-11T00:59:00.000 | 2 | 0 | 0 | 0 | 0 | python,lambda,tkinter,command | 0 | 30,770,368 | 0 | 2 | 0 | true | 0 | 1 | A good way to look at it is to imagine the button or binding asking you the question "what command should I call when the button is clicked?". If you give it something like self.red(), you aren't telling it what command to run, you're actually running the command. Instead, you have to give it the name (or more accurately, a reference) to a function.
I recommend this rule of thumb: never use lambda. Like all good rules of thumb, it only applies for as long as you have to ask the question. Once you understand why you should avoid lambda, it's OK to use it whenever it makes sense. | 1 | 1 | 0 | 0 | I'm confused as to the difference between using a function in commands of tkinter items. say I have self.mb_BO.add_radiobutton(label= "Red", variable=self.BO, value=2, command=self.red)
What is the difference in how the add statement works compared to this:
self.mb_BO.add_radiobutton(label= "Red", variable=self.BO, value=2, command=self.red())
where func red(self) changes the color to red.
And self.mb_BO.add_radiobutton(label= "Red", variable=self.BO, value=2, command=lambda: self.red())
Essentially I don't understand what these commands are doing and when to use the callback or function reference. I've spent hours looking online for an easy to follow summary to no avail and I am still just as confused. | commands in tkinter when to use lambda and callbacks | 1 | 1.2 | 1 | 0 | 0 | 881 |
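A minimal sketch of the three forms and what each actually does (the 0-argument case; lambda only becomes necessary when the callback needs arguments):

    import Tkinter as tk                       # Python 2; use tkinter in Python 3

    def red():
        print('turning red')

    root = tk.Tk()
    # 1. reference: red is called later, when the button is clicked
    tk.Button(root, text='ok', command=red).pack()
    # 2. call: red() runs NOW, and its return value (None) becomes the command
    tk.Button(root, text='bad', command=red()).pack()
    # 3. lambda: a tiny wrapper, equivalent to form 1 here, but it lets you
    #    pass arguments, e.g. command=lambda: paint('red')
    tk.Button(root, text='ok too', command=lambda: red()).pack()
    root.mainloop()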
30,770,219 | 2015-06-11T01:45:00.000 | 1 | 0 | 1 | 0 | 0 | python,qt,pyside,cx-freeze | 0 | 38,494,564 | 0 | 1 | 1 | false | 0 | 0 | In Python there's no static linking. All imports require the correct dependencies to be installed on the target machine. The choice of version for such libraries is up to us.
Now let's come to the binary builders for Python. In this case, we'll have to determine the linking type based on the GNU definitions. If the user can replace the dependency as they like, it's dynamic. If the dependency is attached to the binary itself, it's static linking. In the case of cx_Freeze or PyInstaller, if we build this as one file, it's static linking. If we build this in normal mode, where all the dependencies are collected as separate files, it's dynamic linking. The idea is whether we can replace the dependency on the target machine or not. | 1 | 12 | 0 | 0 | I know the difference between static and dynamic linking in C or C++. But what does it mean in Python? Since it's just an interpreter, with only one style of import mechanism for modules, how does this make sense?
If I freeze my Python application with cx_freeze by excluding a specific library, is it a kind of dynamic linking? Because, users have to download and install that library by themselves in order to run my application.
Actually my problem is, I'm using the PySide library (with LGPL v2.1) to develop a Python GUI application. The library says I should dynamically link to the library to obey their legal terms (same as Qt). In this case, how do I link to PySide dynamically? | What does it mean for statically linking and dynamically linking in Python? | 1 | 1.2 | 1 | 0 | 0 | 2,509 |
30,786,979 | 2015-06-11T16:58:00.000 | 0 | 0 | 0 | 0 | 0 | android,python,django,networking,android-networking | 0 | 30,787,714 | 0 | 2 | 0 | false | 1 | 0 | Django doesn't care what the client is, and Android's HttpClient doesn't care whether the URLs are served by Django, Tomcat, Rails, Apache, or whatever. It's only HTTP. IOW:
learn to write a Django App (it's fully documented)
learn to use the Android's HttpClient (it's fully documented too IIRC)
connect the dots... | 1 | 0 | 0 | 0 | I am working on a sensor app in Android and I have to store the accelerometer readings on a Django server and then retrieve them on my device. I am new to Django and I don't know how to make Android's HttpClient and a Django server communicate. | I want to write a django application that can respond to the HttpClient requests from an android device. Can I have an example code? | 0 | 0 | 1 | 0 | 0 | 131
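A minimal hedged sketch of a Django view an Android HttpClient could POST readings to and GET them back from (URL wiring and names are illustrative; a real app would use a model instead of the in-memory list):

    # views.py
    import json
    from django.http import JsonResponse
    from django.views.decorators.csrf import csrf_exempt

    READINGS = []                              # stand-in for a real model

    @csrf_exempt
    def readings(request):
        if request.method == 'POST':
            READINGS.append(json.loads(request.body))
            return JsonResponse({'ok': True})
        return JsonResponse({'readings': READINGS})

On the Android side it is plain HTTP: POST a JSON body to the mapped URL and GET the same URL to retrieve.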
30,787,397 | 2015-06-11T17:20:00.000 | 0 | 0 | 1 | 0 | 1 | python,pycharm,kivy | 0 | 30,808,862 | 0 | 1 | 0 | false | 0 | 0 | I found a way to set the environment variables found in the kivy.bat. I simply created a new .bat that sets the environment variables and then runs PyCharm from the command line. This allows the variables to persist between projects. | 1 | 1 | 0 | 0 | I'm moving to PyCharm from Sublime Text and can't get it working with Kivy and virtualenv. I've created a virtualenv with a new project in PyCharm but I can't figure out how to get Kivy working. The Kivy help shows using the kivy.bat as the Python interpreter, but I want to use the virtualenv. One possible option would be to add all the environment variables from the kivy.bat, but this doesn't sound like fun to do with multiple virtualenvs. Any help or tips would be greatly appreciated. | Pycharm, virtualenv and kivy setup | 0 | 0 | 1 | 0 | 0 | 295
30,804,783 | 2015-06-12T13:50:00.000 | 0 | 0 | 0 | 0 | 1 | python,django,django-south | 1 | 30,805,141 | 0 | 1 | 0 | false | 1 | 0 | I figured out the last migration files were accidentally corrupted and caused the KeyError. | 1 | 0 | 0 | 0 | I am trying to do a schemamigration in Django with south using the following command where core is the app I would like to migrate.
$ python manage.py schemamigration core --auto
Unfortunately this throws the following KeyError:
KeyError: u"The model 'externaltoolstatus' from the app 'core' is not available in this migration."
Does anybody know a way to figure out what went wrong or where/when this error was thrown during the migration? | Django south schemamigration KeyError | 0 | 0 | 1 | 0 | 0 | 92
30,838,875 | 2015-06-15T06:48:00.000 | 1 | 0 | 0 | 0 | 0 | python | 0 | 30,839,105 | 0 | 2 | 0 | false | 0 | 0 | The pattern makes sense in some cases, but for me it's when you want to be able to run each module as a self-sustained executable.
E.g., should you want to use the script from within FORTRAN or a similar language, the easiest way is to build the Python module into an executable and then call it from FORTRAN.
That would not mean that one module is by definition one Python file, just that it only has one entry point and is in fact executable.
The one-module-per-script requirement could be there to make it easier to locate the code. Or to mail it to someone for code inspection or peer review (done often in scientific communities).
So the requirements may be a mix of technical and social requirements.
Anyway back to the problem.
I would use the subprocess module to call the next module (with close_fds set to true).
If close_fds is true, all file descriptors except 0, 1 and 2 will be
closed before the child process is executed. (Unix only). Or, on
Windows, if close_fds is true then no handles will be inherited by the
child process. Note that on Windows, you cannot set close_fds to true
and also redirect the standard handles by setting stdin, stdout or
stderr. | 1 | 0 | 0 | 0 | How would the output of one script be passed as the input to another? For example if a.py outputs format.xml then how would a.py call b.py and pass it the argument format.xml? I think it's supposed to work like piping done on the command line.
I've been hired by a bunch of scientists with domain-specific knowledge, but sometimes their computer programming requirements don't make sense. There's a long chain of "modules" and my boss is really adamant about 1 module being 1 Python script, with the output of one module being the input of the next. I'm very new to Python, but if this design pattern rings a bell for anyone, let me know.
Worse yet the project is to be converted to executable format (using py2exe) and there still has to be the same number of executable files as .py files. | output of one file input to next | 0 | 0.099668 | 1 | 0 | 0 | 360 |
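A minimal sketch of the chaining in a.py using subprocess as the answer suggests (the script and file names come from the question):

    # a.py
    import subprocess

    def main():
        with open('format.xml', 'w') as out:
            out.write('<data/>')               # produce this module's output
        # hand the file name to the next module in the chain
        subprocess.check_call(['python', 'b.py', 'format.xml'],
                              close_fds=True)

    if __name__ == '__main__':
        main()

b.py then reads its input path from sys.argv[1], which mirrors command-line piping.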
30,856,274 | 2015-06-15T22:53:00.000 | 0 | 0 | 1 | 0 | 0 | python,windows,directory,project,virtualenv | 0 | 30,856,299 | 0 | 2 | 0 | false | 0 | 0 | testenv/bin/pip and testenv/bin/python (on Windows: testenv\Scripts\pip and testenv\Scripts\python)
I'd check it in a local repository and check it out in the virtualenv.
No, you have not. | 1 | 1 | 0 | 0 | I am new to Python development using virtualenv. I have installed Python 2.7, pip, virtualenv, and virtualenvwrapper on Windows, and I am using Windows PS. I have referred to lots of tutorials for setting this up. Most of them contained the same steps, and almost all of them stopped short of explaining what to do after the virtualenv was created.
How do I actually work in a virtualenv? Suppose I want to create a new Flask application after installing that package in my new virtualenv (e.g. testenv).
If I already have an existing project and I want to put it inside a newly created virtualenv, how do I do that? What should the folder structure look like?
My understanding of virtualenv is that it provides a sandbox for your application by isolating it and keeping all its dependencies to itself in that particular env (and not sharing them with others). Have I understood it wrong?
Please help me clear this. | Run an existing python web application inside a virtalenv | 0 | 0 | 1 | 0 | 0 | 192 |
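A minimal command sketch of the day-to-day workflow implied by the answers (Windows PowerShell, since that is what the question uses; the app name is illustrative):

    virtualenv testenv                  # create the environment
    testenv\Scripts\Activate.ps1        # activate it in PowerShell
    pip install flask                   # installs only into testenv
    python app.py                       # runs with testenv's interpreter
    deactivate                          # leave the environment

An existing project can live in any folder you like; activating the env before running it is what ties it to that env's packages.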
30,870,391 | 2015-06-16T14:21:00.000 | 0 | 1 | 0 | 0 | 0 | python,python-2.7,raspberry-pi | 0 | 30,873,178 | 0 | 1 | 0 | false | 0 | 0 | Maybe an easier way would be to use the shell to kill the process in question? Each process in linux has a number assigned to it, which you can see by typing
pstree -p
In your terminal. You can then kill the process by typing in
sudo kill <process number>
Does that help, or were you thinking of something a bit more complicated? | 1 | 0 | 0 | 0 | I am using 2 Python programs started via rc.local on my Raspberry Pi; the first program is my main program and the other is the second program. The second program shuts down the Raspberry Pi, but when I run the second program my first program is still running and won't be stopped until the Raspberry Pi has truly shut down.
I want to make the second program kill the first program before the Raspberry Pi truly shuts down. How can I do it? | how to kill a python programs using another python program on raspberry? | 0 | 0 | 1 | 0 | 0 | 526
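A minimal Python sketch of one common pattern (the first program writes its PID to a file; the second reads it and sends SIGTERM before shutting down; the path is an assumption):

    # in the first program, at startup:
    import os
    with open('/tmp/mainprog.pid', 'w') as f:
        f.write(str(os.getpid()))

    # in the second program, before triggering shutdown:
    import os, signal
    with open('/tmp/mainprog.pid') as f:
        pid = int(f.read())
    os.kill(pid, signal.SIGTERM)               # ask the first program to exit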
30,880,392 | 2015-06-17T00:25:00.000 | 0 | 0 | 0 | 0 | 0 | python,sockets | 0 | 30,880,515 | 0 | 1 | 0 | true | 0 | 0 | Could you make your server log heartbeats, and also post heartbeats to the clients on the socket?
If so, have a monitor check the server heartbeats and restart the server application if the time since the last heartbeat exceeds the threshold value.
Also, check for heartbeats on the client and re-establish the connection when you have not heard a heartbeat. | 1 | 0 | 0 | 0 | I was trying to implement a multiuser chat (group chat) with sockets in Python.
It basically works like this: each message that a user sends is received by the server, and the server sends it back to the rest of the users.
The problem is that if the server closes the program, it crashes for everyone else.
So, how can you handle the departure of the server? Should you change the server somehow, or is there another way around it?
Thank you | Creating Multi-user chat with sockets on python, how to handle the departure of the server? | 0 | 1.2 | 1 | 0 | 1 | 560 |
30,886,340 | 2015-06-17T08:35:00.000 | 0 | 0 | 0 | 0 | 0 | python,apache-spark | 0 | 30,887,058 | 0 | 2 | 0 | false | 0 | 0 | I think when you only use the map action on FIRST_RDD (the logs) you will get a SECOND_RDD; the count of this new SECOND_RDD will be equal to the count of FIRST_RDD. But if you use distinct on SECOND_RDD, the count will decrease to the number of distinct tuples present in SECOND_RDD. | 1 | 0 | 1 | 0 | I was working with an Apache log file. I created an RDD with tuples (day, host) from each log line. The next step was to group up hosts and then display the result.
I used distinct() after mapping the first RDD into (day, host) tuples. When I don't use distinct I get a different result than when I do. So how does the result change when using distinct() in Spark? | How does the result change by using .distinct() in spark? | 0 | 0 | 1 | 0 | 0 | 172
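A minimal PySpark sketch showing the effect (assumes a SparkContext named sc is already available, as in the pyspark shell):

    lines = ['01/Jan host1', '01/Jan host1', '01/Jan host2']
    logs = sc.parallelize(lines)
    pairs = logs.map(lambda l: tuple(l.split()))   # (day, host) tuples
    print(pairs.count())                           # 3 -- same as the input
    print(pairs.distinct().count())                # 2 -- duplicates removed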
30,896,343 | 2015-06-17T15:44:00.000 | 0 | 0 | 1 | 0 | 0 | python,installation,ldap,python-2.6,gssapi | 1 | 52,655,111 | 0 | 3 | 0 | false | 0 | 0 | For me, the issue got resolved after installing the package "krb5-libs" in Centos.
Basically we need to have libgssapi_krb5.so file for installing gssapi. | 2 | 16 | 0 | 0 | I am trying to install the GSSAPI module through pip but I receive this error that I don't know how to resolve.
Could not find main GSSAPI shared library. Please try setting
GSSAPI_MAIN_LIB yourself or setting ENABLE_SUPPORT_DETECTION to
'false'
I need this to work on Python 2.6 for LDAP3 authentication. | How to install GSSAPI Python module? | 0 | 0 | 1 | 0 | 0 | 20,242 |
30,896,343 | 2015-06-17T15:44:00.000 | 14 | 0 | 1 | 0 | 0 | python,installation,ldap,python-2.6,gssapi | 1 | 50,410,443 | 0 | 3 | 0 | false | 0 | 0 | sudo apt install libkrb5-dev
actually installs /usr/bin/krb5-config and /usr/lib/libgssapi_krb5.so
so none of the symlinking was needed, just install libkrb5-dev and you should be good. | 2 | 16 | 0 | 0 | I am trying to install the GSSAPI module through pip but I receive this error that I don't know how to resolve.
Could not find main GSSAPI shared library. Please try setting
GSSAPI_MAIN_LIB yourself or setting ENABLE_SUPPORT_DETECTION to
'false'
I need this to work on Python 2.6 for LDAP3 authentication. | How to install GSSAPI Python module? | 0 | 1 | 1 | 0 | 0 | 20,242 |
30,937,667 | 2015-06-19T12:01:00.000 | 3 | 0 | 0 | 0 | 0 | python,scikit-learn,gaussian,naivebayes | 1 | 30,938,653 | 0 | 2 | 0 | true | 0 | 0 | Yes, you will need to convert the strings to numerical values
The naive Bayes classifier cannot handle strings, as there is no way a string can enter a mathematical equation.
If your strings have some "scalar value", for example "large, medium, small", you might want to encode them as "3, 2, 1".
However, if your strings are things without order, such as colours or names, you can assign binary variables, with every variable referring to a colour or name, if there are not many of them.
For example, if you are classifying cars and they can be red, blue, or green, you can define the variables 'Red', 'Blue', 'Green' that take the values 0/1, depending on the colour of your car. | 1 | 1 | 1 | 0 | I am trying to implement a Naive Bayes classifier in Python. My attributes are of different data types: strings, int, float, Boolean, ordinal.
I could use the Gaussian Naive Bayes classifier (sklearn.naive_bayes, a Python package), but I do not know how the different data types are to be handled. The classifier throws an error, stating it cannot handle data types other than int or float.
One way I can think of is encoding the strings as numerical values. But I also doubt how well the classifier would perform if I do this. | NaiveBayes classifier handling different data types in python | 0 | 1.2 | 1 | 0 | 0 | 3,764
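A minimal sketch of the ordinal and one-hot encoding the answer describes, using pandas to prepare mixed columns before GaussianNB (column names are illustrative):

    import pandas as pd
    from sklearn.naive_bayes import GaussianNB

    df = pd.DataFrame({
        'size':   ['small', 'large', 'medium', 'small'],   # ordered -> 1/2/3
        'colour': ['red', 'blue', 'green', 'red'],         # unordered -> one-hot
        'price':  [1.0, 3.5, 2.2, 1.1],                    # already numeric
        'label':  [0, 1, 1, 0],
    })
    df['size'] = df['size'].map({'small': 1, 'medium': 2, 'large': 3})
    X = pd.get_dummies(df[['size', 'colour', 'price']])
    clf = GaussianNB().fit(X, df['label'])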
30,947,172 | 2015-06-19T20:58:00.000 | 4 | 0 | 1 | 0 | 1 | python,eclipse | 0 | 30,955,933 | 0 | 1 | 0 | true | 0 | 0 | I'm assuming you're using PyDev. I don't know if there are other alternatives but that's what I use for Python in Eclipse.
Right-click on your project folder in the Package Explorer view and select "Properties".
Select "PyDev - Interpreter/Grammar"
Select the appropriate Grammar Version and Interpreter, if those options contain the Python version you want.
If not, click on "Click here to configure an interpreter not listed."
Click "New" and provide an interpreter name (e.g. python3.4) and path to the executable (C:\Python34)
Once you've done that, you should see the option to select your Python 3.4 interpreter under Run Configurations > Interpreter. It'll be displayed using the interpreter name you provided in step 5. | 1 | 4 | 0 | 0 | I've installed Python 3.4 and am currently using Python 2.7. I want to create a Project in Python 3.4, but, when I go to Run-->Run Configurations and then look to make a new entry under Python Run , I see that C:\Python34 doesn't show up. Also, when I try to create a new Project, the "Grammar Version" goes only up to 3.0. I don't know how to resolve this.
Edit: Could this be because I haven't installed Python 3.4 correctly?
Thanks | Trouble trying to run Python 2.7 and 3.4 in Eclipse | 0 | 1.2 | 1 | 0 | 0 | 1,700 |
30,954,589 | 2015-06-20T13:36:00.000 | 2 | 0 | 1 | 1 | 0 | python,c++ | 0 | 30,954,837 | 0 | 2 | 0 | false | 0 | 1 | Look for Python.NET, which is capable of making calls to interfaces written in .NET-supported languages.
All you need to do is follow these steps:
Download the two files Python.Runtime.dll and clr.pyd and put them in your DLLs folder.
From your Python prompt (>>>), try:
>>>import clr
If it doesn't give any error, you are good to go.
Next, put your C++ DLL inside the Lib/site-packages folder.
(This is not mandatory, but good for beginners.)
Next to import clr, try importing your DLL as a module: import YourDllName
If step 5 doesn't give you an error: hola, you are done. That's all, folks :) | 1 | 0 | 0 | 0 | I need to execute C++ code to acquire images to process in Python.
I need to use these commands from python:
make and
./name_of_the_executable
Could anybody please tell me how to do it? | Executing cpp file from Python | 0 | 0.197375 | 1 | 0 | 0 | 70
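For the two specific commands in the question, a minimal sketch using the standard subprocess module:

    import subprocess

    subprocess.check_call(['make'])                        # build the C++ code
    out = subprocess.check_output(['./name_of_the_executable'])
    print(out)                                             # its captured stdout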
30,976,120 | 2015-06-22T09:13:00.000 | 0 | 0 | 0 | 0 | 0 | python,scikit-learn,tf-idf | 0 | 56,899,773 | 0 | 3 | 0 | false | 0 | 0 | @kinkajou, no, TF and IDF are not the same, but they belong to the same algorithm - TF-IDF, i.e. Term Frequency-Inverse Document Frequency. | 1 | 7 | 1 | 0 | I have code that runs a basic TF-IDF vectorizer on a collection of documents, returning a sparse matrix of D x F, where D is the number of documents and F is the number of terms. No problem.
But how do I find the TF-IDF score of a specific term in the document? i.e. is there some sort of dictionary between terms (in their textual representation) and their position in the resulting sparse matrix? | Find the tf-idf score of specific words in documents using sklearn | 0 | 0 | 1 | 0 | 0 | 11,861 |
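A minimal sketch answering the mapping question directly: scikit-learn's TfidfVectorizer exposes vocabulary_, a dict from term to column index in the sparse matrix:

    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = ['the cat sat', 'the dog sat', 'the cat ran']
    vec = TfidfVectorizer()
    X = vec.fit_transform(docs)                # (D, F) sparse matrix
    col = vec.vocabulary_['cat']               # column index for the term 'cat'
    print(X[0, col])                           # tf-idf score of 'cat' in doc 0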
30,985,798 | 2015-06-22T17:04:00.000 | 0 | 0 | 0 | 0 | 0 | python,django | 0 | 30,986,554 | 0 | 1 | 0 | false | 1 | 0 | Definitely go for "way #1". Keeping an independent layer for your service(s) API will help a lot later when you have to enhance that layer for reliability or extending the API.
Stay away from singletons, they're just global variables with a new name. Use an appropriate life cycle for your interface objects. The obvious "instantiate for each request" is not the worst idea, and it's easier to optimize that if needed (by caching and/or memoization) than it's to unroll global vars everywhere.
Keep in mind that web applications are supposed to be several processes on a "shared nothing" design; the only shared resources must be external to the app: database, queue managers, cache storage.
Finally, try to avoid using this API directly from the view functions/CBV. Either use them from your models, or write a layer conceptually similar to the way models are used from the views. No need for an ORM-like API, but keep any 'business process' away from the views, which should be concerned only with the presentation and high level operations. Think "thick model, thin views" with your APIs as a new kind of model. | 1 | 1 | 0 | 0 | I am building my first Django web application and I need a bit of advice on code layout.
I need to integrate it with several other applications that are exposed through RESTful APIs and additionally Django's internal data. I need to develop the component that will pull data from various sources, django's DB, format it consistently for output and return it for rendering by the template.
I am thinking of the best way to write this component. There are a couple of ways to proceed and I wanted to solicit some feedback from more experienced web developers on things I may have missed.
Way #1
Develop completely standalone objects for interacting with other applications via their APIs, etc. These would not have anything related to Django and would be tested independently. Then in Django, import this module in the views that need it, run object methods to get required data, etc.
If I need to access any of this functionality via a GET request (like through JavaScript), I can have a dedicated view that imports the module and returns json.
Way #2
Develop this completely as Django view(s), exposed as a series of GET/POST calls that would all call each other to get the work done. This would be directly tied into the application.
Way #3
Start with Way #1, but instead of creating a view, package it as a django app. Develop unit tests on the app as well as the individual objects.
I think that way 1 or 3 would be very much encapsulated and controlled.
Way 2 would be more complicated, but facilitate higher component re-use.
What is better from a performance standpoint? If I roll with Way #1 or 3, would an instance of the object be instantiated for each request?
If so this approach may be a bit too heavy for this. If I proceed with this, can they be singletons?
Anyway, I hope this makes sense.
thanks in advance. | Python Module in Django | 1 | 0 | 1 | 0 | 0 | 765 |
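A minimal sketch of the "Way #1/#3" layering the answer endorses (module, URL, and function names are illustrative):

    # services.py -- standalone, no Django imports, testable on its own
    import requests

    def fetch_report(api_base, report_id):
        resp = requests.get('%s/reports/%s' % (api_base, report_id))
        resp.raise_for_status()
        return resp.json()

    # views.py -- thin: delegate to the service layer, just render the result
    from django.http import JsonResponse
    import services

    def report(request, report_id):
        data = services.fetch_report('https://api.example.com', report_id)
        return JsonResponse(data)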
30,988,075 | 2015-06-22T19:17:00.000 | 2 | 0 | 1 | 0 | 0 | python,ipython,ipython-notebook | 0 | 63,175,447 | 0 | 8 | 0 | false | 0 | 0 | If someone stumbles here as of 2020: it's now possible to move .ipynb or other kinds of files by simply checking the file and clicking Move.
Nevertheless, for .ipynb files you must be sure that the notebook isn't running (gray icon). If it's running it should be green and you must shut it down before moving. | 4 | 14 | 0 | 0 | After creating a .ipynb file in the root directory /, how can you move that .pynb file into a deeper directory ie: /subdirectory using the web UI? | Move .ipynb using the IPython Notebook Web Interface | 0 | 0.049958 | 1 | 0 | 0 | 23,019 |
30,988,075 | 2015-06-22T19:17:00.000 | 0 | 0 | 1 | 0 | 0 | python,ipython,ipython-notebook | 0 | 40,560,790 | 0 | 8 | 0 | false | 0 | 0 | Ran into this issue and solved it by :
Create a new folder in jupyter notebooks.
Go to the folder/directory and click the "Upload" button, which is next to the "New" button.
Once you click "Upload", your PC's file explorer window will pop up; now simply find where your Jupyter notebooks are saved on your local machine and upload them to the desired file/directory.
Although this doesn't technically move your python files to your desired directory, it does however make a copy in that directory. So next time you can be more organized and just click on a certain directory that you want and create/edit/view the files you chose to be in there instead of looking for them through your home directory. | 4 | 14 | 0 | 0 | After creating a .ipynb file in the root directory /, how can you move that .pynb file into a deeper directory ie: /subdirectory using the web UI? | Move .ipynb using the IPython Notebook Web Interface | 0 | 0 | 1 | 0 | 0 | 23,019 |
30,988,075 | 2015-06-22T19:17:00.000 | 0 | 0 | 1 | 0 | 0 | python,ipython,ipython-notebook | 0 | 43,175,463 | 0 | 8 | 0 | false | 0 | 0 | Duplicate the notebook and delete the original, was my workaround. | 4 | 14 | 0 | 0 | After creating a .ipynb file in the root directory /, how can you move that .pynb file into a deeper directory ie: /subdirectory using the web UI? | Move .ipynb using the IPython Notebook Web Interface | 0 | 0 | 1 | 0 | 0 | 23,019 |
30,988,075 | 2015-06-22T19:17:00.000 | 1 | 0 | 1 | 0 | 0 | python,ipython,ipython-notebook | 0 | 46,796,512 | 0 | 8 | 0 | false | 0 | 0 | Ipython 5.1:
1. Make new folder -- with IPython running, New, Folder, select 'Untitled folder' just created, rename (and remember the name!)
2. Go to the file you want to move, Move, write new directory name at prompt
Note: If the folder exists, skip 1.
Note: If you want to leave a copy in the original directory, Duplicate and then move. | 4 | 14 | 0 | 0 | After creating a .ipynb file in the root directory /, how can you move that .pynb file into a deeper directory ie: /subdirectory using the web UI? | Move .ipynb using the IPython Notebook Web Interface | 0 | 0.024995 | 1 | 0 | 0 | 23,019 |
31,003,551 | 2015-06-23T13:02:00.000 | 0 | 0 | 0 | 0 | 0 | python,matplotlib | 0 | 31,004,307 | 0 | 2 | 0 | false | 0 | 1 | One solution is to make MatPlotLib react to key events immediately.
Another solution is to draw a 'cursor' or marker line on the plot and change its coordinates with the key events. E.g. draw a vertical line, and update its X coordinate with the left and right keys. You can then add a label with the X coordinate along the line, and other nice tricks. | 1 | 1 | 0 | 0 | I would like to move the cursor in a Python matplotlib window by a (data) pixel at a time (I'm displaying a 2D image), using the keyboard arrow keys. I can trap the keyboard keypress events, but how do I reposition the cursor based on these events? | move cursor in matplotlib window with keyboard stroke | 0 | 0 | 1 | 0 | 0 | 533
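A minimal sketch of the marker-line idea: a vertical line stands in for the cursor and moves one data pixel per arrow key:

    import matplotlib.pyplot as plt
    import numpy as np

    fig, ax = plt.subplots()
    ax.imshow(np.random.rand(20, 20))
    line = ax.axvline(10, color='r')           # the movable 'cursor'

    def on_key(event):
        x = line.get_xdata()[0]
        if event.key == 'left':
            line.set_xdata([x - 1, x - 1])
        elif event.key == 'right':
            line.set_xdata([x + 1, x + 1])
        fig.canvas.draw()

    fig.canvas.mpl_connect('key_press_event', on_key)
    plt.show()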
31,009,555 | 2015-06-23T17:30:00.000 | 0 | 0 | 0 | 0 | 0 | python,django,selenium,functional-testing | 0 | 31,042,109 | 0 | 2 | 0 | true | 1 | 0 | I've decided that I am just going to call the nightly methods directly, instead of making a truly functional test. | 1 | 0 | 0 | 0 | I am learning TDD and am using Django and Selenium to do some functional testing. I wanted to make it so that a user selects a checkbox that essentially says "add 1 to this number nightly". Then, for all of the users that have this setting on, a nightly process will automatically increment the number on these accounts.
I want to be able to test this functionality in my functional test in selenium, but I don't know how I would go about it. I obviously don't want to wait a day for the test to finish either. Can someone help me think about how I can get started? | Django Functional Test for Time Delayed Procedure | 0 | 1.2 | 1 | 0 | 0 | 74 |
31,022,379 | 2015-06-24T09:09:00.000 | 0 | 0 | 0 | 0 | 0 | python,pandas | 0 | 31,022,952 | 0 | 4 | 0 | false | 0 | 0 | You cannot use any of these chars in a file name:
/:*?\"| | 1 | 0 | 1 | 0 | I want to save a dataframe to a .csv file with the name '123/123', but it will split it in to two strings if I just type like df.to_csv('123/123.csv').
Anyone knows how to keep the slash in the name of the file? | How to save a dataframe as a csv file with '/' in the file name | 0 | 0 | 1 | 0 | 0 | 373 |
31,024,852 | 2015-06-24T11:04:00.000 | 0 | 0 | 1 | 0 | 0 | python | 0 | 31,025,251 | 0 | 1 | 0 | false | 0 | 0 | There's no way to "listen" for changes other than repeatedly requesting that URL to check if something has changed; i.e. classic "pull updates". To get actual live notifications of changes, that other server needs to offer a way to push such a notification to you. If it's just hosting the file and is not offering any active notifications of any sort, then repeatedly asking is the best you can do. Try not to kill the server when doing so. | 1 | 0 | 0 | 0 | I am new to Python. I want to read JSON data from a URL and, if there is any change in the JSON data on the server, update my JSON file on the client. How can I do that through Python?
Actually I am plotting a graph in Django using JSON data which is on another server. That JSON data is updated frequently, so I want to update my charts based on the updated JSON data. For that I have to listen to the URL for changes. How can I do that? I know I could with the select() system call, but I need some other way. | Asynchronous way to listen web service for data change | 0 | 0 | 1 | 0 | 1 | 190
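A minimal polling sketch of the "pull updates" approach from the answer (the URL and interval are illustrative):

    import hashlib, time
    import requests

    URL = 'http://example.com/data.json'
    last_hash = None
    while True:
        body = requests.get(URL).content
        digest = hashlib.md5(body).hexdigest()
        if digest != last_hash:                # content changed since last poll
            last_hash = digest
            with open('local.json', 'wb') as f:
                f.write(body)
        time.sleep(60)                         # be gentle with the server

Sending an If-Modified-Since or ETag header, when the server supports it, makes the polling much cheaper.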
31,060,517 | 2015-06-25T20:48:00.000 | 5 | 1 | 1 | 0 | 1 | python | 1 | 31,061,329 | 0 | 1 | 0 | true | 0 | 0 | Look at sys.excepthook. As its name suggests, it's a hook into the exception. You can use it to send you an email when exceptions are raised. | 1 | 5 | 0 | 0 | I have a scenario where some unknown exceptions may be raised during program execution and I can't catch most of them. I want an email sent to me every time any exception is raised, since exceptions cause the program to terminate if not properly caught!
I have read that Python provides the atexit module, but it did not work with exceptions. So my question is: is there any way to make atexit work with exceptions, so that whenever an exception is raised and the program terminates, it sends me a mail?
thanks | how to use atexit when exception is raised | 1 | 1.2 | 1 | 0 | 0 | 2,151 |
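A minimal sketch of the sys.excepthook approach (the send_mail body is a stand-in; wire in smtplib or whatever mail API you use):

    import sys, traceback

    def send_mail(subject, body):
        print('MAIL:', subject)                # stand-in for real mail delivery

    def notify_hook(exc_type, exc_value, exc_tb):
        body = ''.join(traceback.format_exception(exc_type, exc_value, exc_tb))
        send_mail('Unhandled exception: %s' % exc_type.__name__, body)
        sys.__excepthook__(exc_type, exc_value, exc_tb)   # keep default output

    sys.excepthook = notify_hook
    raise RuntimeError('boom')                 # triggers the hook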
31,068,690 | 2015-06-26T08:43:00.000 | 0 | 0 | 0 | 0 | 0 | python,flask,scalability,binaryfiles | 0 | 31,069,570 | 0 | 2 | 1 | false | 1 | 0 | Pickle files are a good way to load data in Python. By the way, consider using the C implementation: cPickle.
If you want to scale while using pickle files, ideally you want to look for a partition key that fits your data and your project needs.
Let's say, for example, you have historical data: instead of having a single file with all the historical data, you can create a pickle file per date. | 1 | 2 | 0 | 0 | I am working on my first web application. I am doing it in Python with Flask, and I want to run a piece of Python code that takes an object, with Pickle, from a (binary) file of several MB which contains all the necessary data. I have been told that using Pickle in a web app is not a good idea because of scalability; why?
Obviously, for a given purpose it's better to take just the necessary data. However, if I do this, with an Elasticsearch database and in the fastest way I can, the whole process takes about 100 times more time than if I take all the data at once from the binary; once the binary is unpickled, which will take at most one second, the computations are very fast, so I am wondering if I should use the binary file, and if so, how to do it in a scalable way. | Is it feasible to unpickle large amounts of data in a Python web app that needs to be scalable? | 0 | 0 | 1 | 0 | 0 | 1,053 |
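A minimal sketch of the per-date partitioning idea from the answer (the file layout is an assumption):

    import cPickle as pickle                   # Python 2; plain pickle in Python 3

    def load_partition(date_str):
        # load only the partition this request needs, not the whole binary
        with open('data/%s.pkl' % date_str, 'rb') as f:
            return pickle.load(f)

    rows = load_partition('2015-06-26')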
31,073,993 | 2015-06-26T13:07:00.000 | 0 | 1 | 0 | 1 | 0 | python,git,salt-stack | 0 | 31,254,893 | 0 | 3 | 0 | true | 0 | 0 | If you need the highstate on the minion to cause something to occur on the master, then you are going to want to look into using Salt's Reactor (which is designed to do exactly this kind of multi-machine stuff). | 2 | 0 | 0 | 0 | I allow myself to write to you because I am stuck on a Salt problem.
I made a bash script that adds a host to my Zabbix monitoring server. It works perfectly when I run the .sh directly.
The idea is that I want to automate this configuration through Salt. When I do a highstate, the state that contains the script needs to run on the master before the minion, because my login authentication is in my bash script.
Is there a special configuration for this? Do you have ideas on how to do this kind of setup? According to my research, a salt-runner could be used, but I do not know if this is good or not.
While awaiting your reply, I wish you a good weekend. | Run script bash on saltstack master before minion | 0 | 1.2 | 1 | 0 | 0 | 905
31,073,993 | 2015-06-26T13:07:00.000 | 0 | 1 | 0 | 1 | 0 | python,git,salt-stack | 0 | 31,083,687 | 0 | 3 | 0 | false | 0 | 0 | Run a minion on the same box as your master; then you can run the script on your master's minion and then on the other server. | 2 | 0 | 0 | 0 | I allow myself to write to you because I am stuck on a Salt problem.
I made a bash script that adds a host to my Zabbix monitoring server. It works perfectly when I run the .sh directly.
The idea is that I want to automate this configuration through Salt. When I do a highstate, the state that contains the script needs to run on the master before the minion, because my login authentication is in my bash script.
Is there a special configuration for this? Do you have ideas on how to do this kind of setup? According to my research, a salt-runner could be used, but I do not know if this is good or not.
While awaiting your reply, I wish you a good weekend. | Run script bash on saltstack master before minion | 0 | 0 | 1 | 0 | 0 | 905
31,082,231 | 2015-06-26T21:12:00.000 | 0 | 0 | 1 | 0 | 0 | python,algorithm | 0 | 31,084,184 | 0 | 1 | 0 | false | 0 | 0 | I just don't see how I can find the exact intervals where there are anomalies using windows.
You can't. How do you know there aren't two anomalies?
Is there an approximation method that anyone can come up with?
For each interval of windows where there is continuously an anomaly detected, assume that there is an anomaly lasting from the last day of the first window to the first day of the last window. | 1 | 0 | 0 | 0 | I've been trying to determine how to detect point-anomalies given window-anomalies.
In more detail, I know for each 30-day window whether it contains an anomaly. For example, window 1 starts at 1/1/2009, window 2 at 1/2/2009, and so on.
Now I'm trying to use this knowledge to determine on which dates these anomalies lie. If I have an anomaly on dates 1/5/2009 to 1/8/2009, a signal will be raised for every window overlapping it, from the window whose last day is 1/5/2009 through the window starting on 1/8/2009.
I just don't see how I can find the exact intervals where there are anomalies using windows. Is there an approximation method that anyone can come up with? Feel free to include some code in Python if you'd like.
Thanks! | how to determine point anomalies given window anomalies? | 0 | 0 | 1 | 0 | 0 | 43 |
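A Python sketch of the approximation described in the answer above (not from the original thread): merge each run of consecutive flagged windows into one estimated anomaly interval. Here flags[i] is assumed to say whether the window starting on day i raised a signal, and window_len is the window length in days; both names are hypothetical.

def anomaly_intervals(flags, window_len):
    """Merge runs of consecutive anomalous windows into estimated anomaly intervals."""
    intervals = []
    start = None  # index of the first window in the current run
    for i, flagged in enumerate(flags):
        if flagged and start is None:
            start = i
        elif not flagged and start is not None:
            a = start + window_len - 1  # last day of the first window in the run
            b = i - 1                   # first day of the last window in the run
            intervals.append((min(a, b), max(a, b)))
            start = None
    if start is not None:  # a run that extends to the end of the data
        a = start + window_len - 1
        b = len(flags) - 1
        intervals.append((min(a, b), max(a, b)))
    return intervals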
31,098,228 | 2015-06-28T09:40:00.000 | 2 | 0 | 0 | 0 | 0 | python,numpy,matrix,equation-solving | 0 | 48,433,206 | 0 | 3 | 0 | false | 0 | 0 | You could append a row of ones to A and append a 1 to B. After that, use
result = linalg.lstsq(A, B)[0]
Or you can replace one of A's rows with a row of ones, and replace the corresponding value in B with a 1. Then use
result = linalg.solve(A, B) | 1 | 7 | 1 | 0 | I want to solve a linear system in matrix form using linalg, but the resulting solution components should sum to 1. For example, suppose there are 3 unknowns, x, y, z. After solving the system, their values should sum to 1, e.g. .3, .5, .2. Can anyone please tell me how I can do that?
Currently, I am using something like result = linalg.solve(A, B), where A and B are matrices. But this doesn't return solutions in the range [0, 1]. | Solving system using linalg with constraints | 0 | 0.132549 | 1 | 0 | 0 | 9,737 |
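A runnable NumPy sketch of the first variant in the answer above; the toy A and B are illustrative, not from the question. Note that the appended row enforces the sum-to-one constraint only in the least-squares sense, while the row-replacement variant enforces it exactly; rcond=None just silences a deprecation warning on recent NumPy versions.

import numpy as np

# Toy 2-equation system in 3 unknowns; stand-ins for the question's matrices.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0]])
B = np.array([1.1, 1.7])

# Append the extra equation x0 + x1 + x2 = 1.
A_aug = np.vstack([A, np.ones(A.shape[1])])
B_aug = np.append(B, 1.0)

# Solve the augmented system in the least-squares sense.
result = np.linalg.lstsq(A_aug, B_aug, rcond=None)[0]
print(result, result.sum())  # the components should sum to (approximately) 1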
31,118,668 | 2015-06-29T14:44:00.000 | 0 | 0 | 0 | 0 | 0 | python,events,grid,wxpython,wxwidgets | 0 | 31,124,993 | 0 | 1 | 0 | true | 0 | 1 | Adding event.Skip() at the end of my custom handler passes the event on to the default handler. | 1 | 0 | 0 | 0 | In the program I'm writing, the user needs to be able to select a cell in the grid and edit its value. The program also shows the value of the currently selected cell in hexadecimal (so (0,0) is 0x00, (1,3) is 0x19, etc.). I originally had this display updated through a binding to the wx.grid.EVT_GRID_SELECT_CELL event. However, upon doing this, the GridCursor would no longer move; it would stay on (0,0). So I added a SetGridCursor call to the handler to move the cursor when the handler was called. However, this generated an infinite loop, as SetGridCursor apparently generates an EVT_GRID_SELECT_CELL event when called.
My question is, how do I have code that executes when a new cell is selected while still maintaining the old cell selection functionality? | In wxPython Grid, creating event handler for Cell Selection disables moving the GridCursor | 0 | 1.2 | 1 | 0 | 0 | 137 |
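A minimal wxPython sketch of the accepted answer's fix (the frame layout and print call are illustrative): update your display in the handler, then call event.Skip() so the grid's default handler still moves the cursor.

import wx
import wx.grid

class Frame(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self, None, title='Grid demo')
        self.grid = wx.grid.Grid(self)
        self.grid.CreateGrid(4, 4)
        self.grid.Bind(wx.grid.EVT_GRID_SELECT_CELL, self.on_select)

    def on_select(self, event):
        row, col = event.GetRow(), event.GetCol()
        print('selected (%d, %d)' % (row, col))  # update the hex display here
        event.Skip()  # let the default handler move the GridCursor

app = wx.App(False)
Frame().Show()
app.MainLoop()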
31,145,860 | 2015-06-30T18:35:00.000 | 5 | 0 | 1 | 0 | 0 | python,regex | 0 | 31,145,877 | 0 | 1 | 0 | true | 0 | 0 | Don't use regexes for that. s.split(',') will do exactly what you want. | 1 | 0 | 0 | 0 | There are several questions regarding this topic, but none of them seem to answer this question specifically.
If I have the pattern p='([0-9]+)(,([0-9]+))*' and s='1,2,3,4,5' and I run m = re.match(p, s, 0) I get a match (as expected). However, I would like to be able to print the list ('1', '2', '3', '4', '5'). I can't seem to do this with the re.match output. It gives me ('1', ',5', '5').
Also, how do I get the number of matches (in this case 5)? | How to match and print list of comma separated numbers in python? | 0 | 1.2 | 1 | 0 | 0 | 507 |
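A quick sketch of the split-based approach from the answer above, with a regex alternative in case one is really required:

import re

s = '1,2,3,4,5'
parts = s.split(',')   # ['1', '2', '3', '4', '5']
print(parts)
print(len(parts))      # number of matches: 5
print(re.findall(r'[0-9]+', s))  # regex alternative yielding the same list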
31,186,959 | 2015-07-02T14:13:00.000 | -2 | 0 | 1 | 0 | 0 | python,python-2.7,lambda,list-comprehension | 0 | 31,187,126 | 0 | 4 | 0 | false | 0 | 1 | Ahhh, further Googling found a solution (admittedly one I would not have stumbled upon myself). The desired behavior can be invoked by use of a default argument:
lambdas = [lambda i=i: i for i in range(3)] | 1 | 4 | 0 | 0 | This question is distilled from the original application involving callback functions for Tkinter buttons. This is one line that illustrates the behavior.
lambdas = [lambda: i for i in range(3)]
If you then try invoking the generated lambda functions:
lambdas[0](), lambdas[1]() and lambdas[2]() all return 2.
The desired behavior was to have lambdas[0]() return 0, lambdas[1]() return 1, and lambdas[2]() return 2.
I see that the index variable is captured by reference (late binding). The question is how to rephrase this so that the current value is captured instead. | How to generate a list of different lambda functions with list comprehension? | 0 | -0.099668 | 1 | 0 | 0 | 1,153
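A runnable sketch contrasting the two list comprehensions from this entry:

# Late binding: every lambda reads the same i, which is 2 after the loop ends.
broken = [lambda: i for i in range(3)]
print([f() for f in broken])  # [2, 2, 2]

# Default-argument fix: i's current value is bound at definition time.
fixed = [lambda i=i: i for i in range(3)]
print([f() for f in fixed])   # [0, 1, 2]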
31,196,818 | 2015-07-03T00:40:00.000 | 13 | 0 | 1 | 0 | 0 | python,debugging,ipdb | 0 | 40,893,062 | 0 | 4 | 0 | true | 0 | 0 | This may sound obvious: jump makes you jump.
This means that you don't execute the lines you jump over; use it to skip code that you don't want to run.
You probably need tbreak (a temporary breakpoint, removed automatically when it is first hit; the arguments are the same as for break), as I did when I found this page. | 2 | 25 | 0 | 0 | Is there a command to step out of loops (say, for or while) while debugging in ipdb, without having to set breakpoints after them?
I use the until command to step out of list comprehensions, but I don't know how I could do a similar thing, if possible, for entire loop blocks. | ipdb debugger, step out of cycle | 0 | 1.2 | 1 | 0 | 0 | 15,509
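A hedged sketch of the tbreak approach from the accepted answer above; the line number in the comments refers to this snippet and is illustrative only.

import ipdb

total = 0
ipdb.set_trace()  # execution pauses here, at an ipdb> prompt
for i in range(1000):
    total += i
print(total)

# At the ipdb> prompt, to get past the whole loop in one go:
#   tbreak 7    -> temporary breakpoint on the print line (number is illustrative)
#   continue    -> runs the loop to completion, stops once at print,
#                  and the breakpoint removes itself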