Q_Id (int64) | CreationDate (string) | Users Score (int64) | Other (int64) | Python Basics and Environment (int64) | System Administration and DevOps (int64) | Tags (string) | A_Id (int64) | AnswerCount (int64) | is_accepted (bool) | Web Development (int64) | GUI and Desktop Applications (int64) | Answer (string) | Available Count (int64) | Q_Score (int64) | Data Science and Machine Learning (int64) | Question (string) | Title (string) | Score (float64) | Database and SQL (int64) | Networking and APIs (int64) | ViewCount (int64) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
35,584,309 | 2016-02-23T17:32:00.000 | 0 | 0 | 0 | 0 | python,text,colors,xlwings | 70,505,321 | 4 | false | 0 | 1 | This solution is working fine for me.
from xlwings.utils import rgb_to_int
import xlwings as xw

# assumes an open workbook, e.g. xw.Book('book.xlsx') for an existing file
wb = xw.Book()
sht = wb.sheets[0]
sht.range('A1').api.Font.Color = rgb_to_int((192, 192, 192)) | 1 | 3 | 0 | I'm using xlwings on a Mac and would like to set the foreground color of text in a cell from Python. I see that range.color will change background color which I could use but it has an additional problem that the cell borders are overwritten by the new BG color.
Is there any way to change foreground text color from Python and/or prevent the cell borders being overwritten by a new BG color? | xlwings: set foreground text color from Python | 0 | 0 | 0 | 5,576 |
35,584,728 | 2016-02-23T17:54:00.000 | 0 | 0 | 1 | 0 | python-2.7 | 35,585,139 | 3 | false | 0 | 0 | my_string = "Today is Tuesday"
"".join(reversed(my_string))
and
my_string[::-1]
both will give the string reversed character by character
Like for example: "Python is the best programming language", the output should be "language programming best the is Python", but the problem is that there shouldn't be any predefined function included. | Reversing a string in python (WORD by WORD) | 0 | 0 | 0 | 884 |
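For the word-by-word reversal the question actually asks about, here is a hedged sketch that avoids helpers like split(), reversed() and slicing tricks (list.append is still used to collect the words):

```python
# Scan the string once to collect words, then rebuild the sentence
# from the last word to the first, without split() or reversed().
def reverse_words(sentence):
    words = []
    current = ""
    for ch in sentence:
        if ch == " ":
            if current:           # skip repeated spaces
                words.append(current)
                current = ""
        else:
            current += ch
    if current:
        words.append(current)

    result = ""
    i = len(words) - 1
    while i >= 0:                 # walk the word list backwards
        result += words[i]
        if i > 0:
            result += " "
        i -= 1
    return result

print(reverse_words("Python is the best programming language"))
# language programming best the is Python
```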
35,584,780 | 2016-02-23T17:56:00.000 | 1 | 0 | 1 | 1 | python,import,module,pyyaml | 35,584,960 | 2 | false | 0 | 0 | You should be able to run import yaml if you installed pyyaml.
Did you try pip install pyyaml? | 2 | 1 | 0 | The software I have requires yaml, based on the import yaml at the top. I installed pyyaml on the mac I am using and it still threw the import error. I tried to change the code in the program to import pyyaml but that still didn't help. Any idea what the module is called to import it? If you need more information just ask. | cannot import yaml on mac | 0.099668 | 0 | 0 | 2,463 |
35,584,780 | 2016-02-23T17:56:00.000 | -1 | 0 | 1 | 1 | python,import,module,pyyaml | 56,364,791 | 2 | true | 0 | 0 | For python2:
sudo yum install python-yaml
For python3:
sudo yum install python3-yaml | 2 | 1 | 0 | The software I have requires yaml, based on the import yaml at the top. I installed pyyaml on the mac I am using and it still threw the import error. I tried to change the code in the program to import pyyaml but that still didn't help. Any idea what the module is called to import it? If you need more information just ask. | cannot import yaml on mac | 1.2 | 0 | 0 | 2,463 |
35,585,002 | 2016-02-23T18:07:00.000 | 1 | 1 | 1 | 0 | python | 35,585,229 | 2 | false | 0 | 0 | We usually raiseError when we expect a certain value or input from the user. For eg: If a program requires the user to enter a positive integer and they enter a negative integer, we raise an error and ask them to enter the A POSITIVE INTEGER.
We handle errors when it's not up to the user for it. For eg: If the website to access requires email verification and the email entered by the user is not recognized, you raiseError and ask them to put in a valid email address, but if, the website has a search bar and the string put in does not split properly for the search and we get a keyValueError, it's up to the programmer to handle it. | 2 | 1 | 0 | Should you generally pass errors that occur in functions and class methods back to the caller to handle? What are cases when you might not? I'm asking because I am creating a module to perform the oauth dance, and if you get a negative response from the websites you are trying to access I'm not sure if I should pass it up to the caller, or handle it there. | What is best practice regarding passing errors to the caller? | 0.099668 | 0 | 0 | 51 |
35,585,002 | 2016-02-23T18:07:00.000 | 3 | 1 | 1 | 0 | python | 35,585,134 | 2 | true | 0 | 0 | It generally depends on the answer to two questions:
What layer has the information to explain the error, and present it to users or developers?
What layer can correct the error, in a way that the upper layer cannot tell it ever happened?
Examine the problem layer by layer. Find where the error can be caught, corrected and transparently handled. Failing that, find where the error can be explained in useful terms, and enriched with relevant information.
It's often the case that the function that actually encounters the error can neither explain it adequately nor correct it. It should raise an exception, delegating the decision to the upper layer, possibly attaching additional data to the error.
When the exception has climbed high enough, you'll find yourself in one of the 2 cases I described above, in a position where you can either correct the error transparently or report it in clear language, with the information needed to track down the cause.
In the case of your OAuth module, you should:
Decide whether retrying the action makes sense (eg network error)
Determine the cause of the problem (eg wrong credentials), and raise an exception that clearly conveys that. | 2 | 1 | 0 | Should you generally pass errors that occur in functions and class methods back to the caller to handle? What are cases when you might not? I'm asking because I am creating a module to perform the oauth dance, and if you get a negative response from the websites you are trying to access I'm not sure if I should pass it up to the caller, or handle it there. | What is best practice regarding passing errors to the caller? | 1.2 | 0 | 0 | 51 |
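To make the layering described above concrete, here is a hypothetical sketch (the exception name and fields are invented for illustration): the low layer raises a rich exception instead of deciding policy, and the caller chooses between retrying and re-raising.

```python
# Hypothetical: a rich exception carries the status and whether a
# retry makes sense; only the caller decides what to do with it.
class OAuthError(Exception):
    def __init__(self, message, status=None, retryable=False):
        super(OAuthError, self).__init__(message)
        self.status = status
        self.retryable = retryable

def exchange_token(response_status):
    # Low layer: it can classify the failure, but not decide policy.
    if response_status == 401:
        raise OAuthError("wrong credentials", status=401, retryable=False)
    if response_status == 503:
        raise OAuthError("service unavailable", status=503, retryable=True)
    return "token"

def obtain_token(statuses):
    # Upper layer: retry transient errors, re-raise everything else.
    for status in statuses:
        try:
            return exchange_token(status)
        except OAuthError as exc:
            if not exc.retryable:
                raise             # delegate upward with context intact
    raise OAuthError("gave up after retries")

print(obtain_token([503, 503, 200]))  # token
```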
35,585,580 | 2016-02-23T18:41:00.000 | 1 | 0 | 0 | 0 | python,formatting,bokeh | 35,589,047 | 1 | false | 0 | 0 | To elaborate on what bigreddot proposed: PrintfTickFormatter(format='%0.0f %%') worked. One thing to note is the %% to properly escape the %. | 1 | 0 | 1 | I am creating a plot in Bokeh with percentages on the y-axis. The data is represented as a percent (e.g. '99.0') as opposed to a likelihood (e.g. '0.990'). I want to add a '%' sign after each number on the axis, but when using NumeralTickFormatter(format='0 %') my values are multiplied by 100 because it expects a likelihood. I don't want to change the data representation to a likelihood, so is there some other way I can get the '%' sign to appear on the axis ticks? | Bokeh: add a % sign to axis ticks without changing numeric value | 0.197375 | 0 | 0 | 1,425 |
35,588,159 | 2016-02-23T21:06:00.000 | 0 | 0 | 1 | 1 | python,parallel-processing,multiprocessing,pool | 35,588,384 | 2 | false | 0 | 0 | Python, before starts the execution of the process that you specify in applyasync/asyncmap of Pool, assigns to each worker a piece of the work.
For example, lets say that you have 8 files to process and you start a Pool with 4 workers.
Before starting the file processing, two specific files will be assigned to each worker. This means that if some worker ends its job earlier than the others, will simply "have a break" and will not start helping the others. | 1 | 1 | 0 | I wrote a python program to launch parallel processes (16) using pool, to process some files. At the beginning of the run, the number of processes is maintained at 16 until almost all files get processed. Then, for some reasons which I don't understand, when there're only a few files left, only one process runs at a time which makes processing time much longer than necessary. Could you help with this? | Python multiprocessing pool number of jobs not correct | 0 | 0 | 0 | 1,011 |
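A toy illustration of that up-front assignment (this is not Pool's actual internal code): with static chunks, a worker that finishes early cannot take over another worker's remaining files. Passing a small chunksize to Pool.map, or submitting items individually with apply_async, makes the hand-out more dynamic.

```python
# Toy sketch of static chunk assignment: 8 files, 4 workers, 2 files
# each, decided before any processing starts.
def static_chunks(items, n_workers):
    size = len(items) // n_workers
    return [items[i * size:(i + 1) * size] for i in range(n_workers)]

files = ["f%d" % i for i in range(8)]
print(static_chunks(files, 4))
# [['f0', 'f1'], ['f2', 'f3'], ['f4', 'f5'], ['f6', 'f7']]
```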
35,588,202 | 2016-02-23T21:09:00.000 | 1 | 0 | 0 | 0 | python,django,dashboard | 35,698,893 | 2 | true | 1 | 0 | I wasn't able to elaborate my question coz am a django newbie, but after a week of trying a lot of different things I found a way out. For the dealers and other non staff users I created a dashboard and also overiding the registration to suit my project.
Its now working fine. | 2 | 1 | 0 | I have a system that has two types of users with different privileges, the first user is the admin who can access all objects from the database, the second one is a dealer who can only view information pertaining to them alone.(There are many dealers)
This is how the system works: the admin creates a coupon code and issues it to a person (already done), then that person goes to a dealer who is supposed to check if that coupon code exists.
When a dealer logs in, he is supposed to be redirected to a dashboard that has the number of items he has sold and to whom. To sell a new item he needs to check if that coupon code exists, and if it does, then access a form to fill in the item details (I have a model for issued_items).
How would I implement a custom admin page for the dealer without affecting the admin dashboard?
I created a dealer with super-admin and changed his permissions so that he is only able to change specific models; the problem is, the models appear with all objects in that model, even the ones by other dealers.
I have thought (not tried yet) of creating a view and a template and redirecting login, but if I do this then I override the admin.
(not so sure) Probably create a new app for the dealer? | Custom user dashboard in django | 1.2 | 0 | 0 | 1,209 |
35,588,202 | 2016-02-23T21:09:00.000 | 2 | 0 | 0 | 0 | python,django,dashboard | 35,592,084 | 2 | false | 1 | 0 | This sounds like a situation where you want the functionality to be loosely coupled to prevent headaches down the road, so I'd go with option 3. Leave the admin for the admins and create a new dealer app for the dealers to go to, with a regular view/model/template that they'll be required to login to see. | 2 | 1 | 0 | I have a system that has two types of users with different privileges, the first user is the admin who can access all objects from the database, the second one is a dealer who can only view information pertaining to them alone.(There are many dealers)
This is how the system works: the admin creates a coupon code and issues it to a person (already done), then that person goes to a dealer who is supposed to check if that coupon code exists.
When a dealer logs in, he is supposed to be redirected to a dashboard that has the number of items he has sold and to whom. To sell a new item he needs to check if that coupon code exists, and if it does, then access a form to fill in the item details (I have a model for issued_items).
How would I implement a custom admin page for the dealer without affecting the admin dashboard?
I created a dealer with super-admin and changed his permissions so that he is only able to change specific models; the problem is, the models appear with all objects in that model, even the ones by other dealers.
I have thought (not tried yet) of creating a view and a template and redirecting login, but if I do this then I override the admin.
(not so sure) Probably create a new app for the dealer? | Custom user dashboard in django | 0.197375 | 0 | 0 | 1,209 |
35,591,797 | 2016-02-24T01:49:00.000 | 0 | 0 | 0 | 0 | python,django | 35,593,731 | 1 | false | 1 | 0 | First of all, if you want to override almost every part of redux, wont it be better to use built-in django authentication and to extend it as you wish?
Yes, you are on the right way. You need to override those things you do not like by coping them to your project and then by changing the copy. Though it's will be a cleaner code if you place templates in templates/registration, views in views.py and etc, actually you can do in some other way you wish. | 1 | 0 | 0 | The first thing I did was of course customizing forms, views and templates in site-packages. And then I learned that everything will be reset to default after upgrading the package.
So now I decided to create a new application "accounts" and make customizations there.
My question is which approach is better (haven't tried any, sorry)
First approach:
Set INCLUDE_REGISTER_URL = False
in accounts.views import RegistrationView and create MyRegistrationView (same thing with forms)
in accounts.urls include registration.backends.default.urls and create my own urlpattern for MyRegistrationView
create custom templates in templates/registration
put registration above django.contrib.admin in INSTALLED_APPS
Second approach:
in accounts.views import RegistrationView and create MyRegistrationView (same thing with forms)
Create complete replica of registration.backends.default.urls in accounts.urls with my new custom template names
put custom templates inside my accounts app
Or are there any better approaches? (probably are) | Which approach to use for customizing django-registration-redux | 0 | 0 | 0 | 102 |
35,592,092 | 2016-02-24T02:22:00.000 | 0 | 0 | 0 | 0 | python,mysql,postgresql | 35,598,628 | 1 | false | 0 | 0 | I believe that the problem is that you are inserting each row in a separate transaction (which is the default behavior when you run SQL-queries without explicitly starting a transaction). In that case, the database must write (flush) changes to disk on every INSERT. It can be 100x times slower than inserting data in a single transaction. Try to run BEGIN before importing data and COMMIT after. | 1 | 0 | 0 | I need to migrate data from MySQL to Postgres. It's easy to write a script that connects to MySQL and to Postgres, runs a select on the MySQL side and inserts on the Postgres side, but it is veeeeery slow (I have + 1M rows). It's much faster to write the data to a flat file and then import it.
The MySQL command line can download tables pretty fast and output them as tab-separated values, but that means executing a program external to my script (either by executing it as a shell command and saving the output to a file or by reading directly from the stdout). I am trying to download the data using Python instead of the MySQL client.
Does anyone know what steps and calls does the MySQL command line perform to query a large dataset and output it to stdout? I thought it could be just that the client is in C and should be much faster than Python, but the Python binding for MySQL is itself in C so... any ideas? | Why is MySQL command line so fast vs. Python? | 0 | 1 | 0 | 181 |
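Much of the per-row cost is usually transaction and round-trip overhead rather than Python itself. A sketch of batching with executemany inside a single transaction — sqlite3 stands in here for the Postgres connection, but the pattern is the same with psycopg2:

```python
import sqlite3

# Insert many rows in ONE transaction via executemany instead of one
# implicit transaction per INSERT.
rows = [(i, 'name%d' % i) for i in range(1000)]

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE users (id INTEGER, name TEXT)')

with conn:  # the context manager commits the whole batch at once
    conn.executemany('INSERT INTO users VALUES (?, ?)', rows)

print(conn.execute('SELECT COUNT(*) FROM users').fetchone()[0])  # 1000
```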
35,592,602 | 2016-02-24T03:19:00.000 | 2 | 0 | 0 | 0 | python,selenium,redirect,selenium-webdriver,event-handling | 35,671,022 | 4 | true | 0 | 0 | Answer my own question.
If the redirect chain is very long, consider to try the methods @alecxe and @Krishnan provided. But in this specific case, I've found a much easier workaround:
When the page finally landed c.com, use
driver.execute_script('return window.document.referrer') to get the
intermediate URL | 1 | 5 | 0 | I'm using Selenium with Python API and Firefox to do some automatic stuff, and here's my problem:
Click a link on original page, let's say on page a.com
I'm redirected to b.com/some/path?arg=value
And immediately I'm redirected again to the final address c.com
So is there a way to get the intermediate redirect URL b.com/some/path?arg=value with Selenium Python API? I tried driver.current_url but when the browser is on b.com, seems the browser is still under loading and the result returned only if the final address c.com is loaded.
Another question is that is there a way to add some event handlers to Selenium for like URL-change? Phantomjs has the capacity but I'm not sure for Selenium. | How can I get a intermediate URL from a redirect chain from Selenium using Python? | 1.2 | 0 | 1 | 14,337 |
35,594,591 | 2016-02-24T06:12:00.000 | 1 | 0 | 1 | 0 | python,virtualenv | 35,594,757 | 1 | false | 0 | 0 | Generally we can define the python path for virtual environment using -p option
For windows:
virtualenv -p c:\python2.x <virtual env path>
For Ubuntu:
virtualenv -p /usr/bin/python2.x <virtual env path> | 1 | 1 | 0 | I've installed and setup virtualenv in my project folder and I want to use 2.7.10 rather than the default 2.7.9
How do I do that in virtualenv? | How to use Python 2.7.10 in virtualenv | 0.197375 | 0 | 0 | 2,382 |
35,596,059 | 2016-02-24T07:41:00.000 | 0 | 1 | 0 | 0 | python,django,email,mandrill,django-allauth | 35,596,437 | 1 | true | 1 | 0 | Turns out I needed to add DEFAULT_FROM_EMAIL to my settings.py file. I don't understand why it works with a gmail address and not a custom one, but this fixed it. | 1 | 0 | 0 | I started my app with a gmail account, and have recently upgraded to Mandrill. I am not using the API, just changed my smtp settings through env variables.
When I add the new mandrill smtp provider, my in-app mails work perfectly, but allauth's mails do not work at all. (I can see they are not rejected or bounced through mandrill's data, they're just not sent).
Any help? | Django Allauth mails stop working when I change my smtp provider (mandrill) | 1.2 | 0 | 0 | 86 |
35,597,157 | 2016-02-24T08:41:00.000 | 10 | 1 | 0 | 1 | python,vim,crash | 35,620,795 | 2 | true | 0 | 0 | Finally solved the problem.
It turned out that Python uses the PYTHONPATH variable to resolve the Python folder (used to load Python libraries and so on). Here is the default value for Python 2.7:
C:\Python27\Lib;C:\Python27\DLLs;C:\Python27\Lib\lib-tk
The variable can be set using one of the following:
1. Windows registry
Set the default value of HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Python\PythonCore\2.7\PythonPath key
2. Environment variable
Create environment variable PYTHONPATH and set the value (same as you edit global PATH)
3. _vimrc file
This is the most portable way. Edit your _vimrc (i.e. open vim and enter :e $MYVIMRC command) and set the variable:
let $PYTHONPATH = "C:\\Python27\\Lib;C:\\Python27\\DLLs;C:\\Python27\\Lib\\lib-tk" | 1 | 7 | 0 | I cannot use python in GVIM. When I type:
:python print 1, it just closes GVIM without any message. I tried to run it with -V90logfile but I couldn't find any information about the crash.
GVIM is compiled with python (:version shows +python/dyn +python3/dyn).
GVIM version: 7.3.46 (32 bit with OLE).
Python version: 2.7.3
Initially GVIM couldn't find python27.dll so I edited $MYVIMRC and added:
let $Path = "C:\\Program Files (x86)\\Python27;".$Path
Both GVIM and Python have been installed using corporate standards - not manually via installers. Asking here as IT were not able to help me and redirected to external support.
I could reproduce the error on my personal computer, where I copied both GVIM & PYTHON without installing them. Any further suggestions? | GVIM crashes when running python | 1.2 | 0 | 0 | 1,440 |
35,598,908 | 2016-02-24T10:02:00.000 | 1 | 0 | 1 | 0 | python,windows,macos,qt,pyqt5 | 35,684,613 | 1 | false | 0 | 1 | PyInstaller brings together every dependancy of your python script. But you need to install all dependancies before running pyinstaller.
So, before running it on windows, you must install qt5, sip and pyqt5. | 1 | 1 | 0 | I want to create an app for Windows but I've to developed it on Mac OS X.
I've created a desktop app with Python using PyQt5. The steps that I followed are:
Step 1. Create a desktop app (On Mac OS X):
Install Qt.
Install Sip.
Install PyQt5.
¡Develop!
Step 2. Package the Python app with PyInstaller(On Windows 8):
Install Python.
Install Pip-Win.
Install PyWin32.
Install PyInstaller.
Create the executable.
The problem is that, when I execute the app on Windows, it shows a window with the following message:
Fatal error: app returned -1.
Anybody knows what is wrong? Maybe I need to do the Step 1 on Windows too? | Desktop app created with PyInstaller doesn't run | 0.197375 | 0 | 0 | 520 |
35,600,560 | 2016-02-24T11:13:00.000 | 0 | 0 | 0 | 0 | python,selenium,deployment,scrapy | 35,968,793 | 1 | true | 1 | 0 | As for my problem with phantomjs it was solved by reinstalling it and increasing the droplet memory .. and for the other message with the other browsers I used "xvfb-run -a" it was a temporary solution but it worked ... | 1 | 0 | 0 | I am new to scrapping.
I have a scrapy spider that uses selenium for items interaction
I tried to run it on a digitalocean droplet but it fails to runs the phantomjs driver all the time like it's kinda blocked raising exception:
BadStatusLine: ''
and any other webdrivers are unstable due to the display issue and xvfb,
irregularly raising
Message: The browser appears to have exited before we could connect.
Is there any idea what I should do or where I can deploy it? | deployment of scrapy selenium project | 1.2 | 0 | 1 | 146 |
35,604,173 | 2016-02-24T13:57:00.000 | 1 | 0 | 0 | 0 | python,arrays,numpy,curves,curvesmoothing | 35,604,894 | 2 | false | 0 | 0 | take the length of the line on every axes, the split as you want.
example:
point 1: [0,0]
point 2: [1,1]
then:
length of the line on X axes: 1-0 = 1
also in the Y axes.
now, if you want to split it in two, just divide these lengths, and create a new array.
[0,0],[.5,.5],[1,1] | 1 | 4 | 1 | I have a 2D numpy array that represents the coordinates (x, y) of a curve, and I want to split that curve into parts of the same length, obtaining the coordinates of the division points.
The easiest example is a line defined by two points, for example [[0,0],[1,1]], and if I want to split it in two parts the result would be [0.5,0.5], and for three parts [[0.33,0.33],[0.67,0.67]] and so on.
How can I do that in a large array where the data is less simple? I'm trying to split the array by its length but the results aren't good. | Split numpy array into similar array based on its content | 0.099668 | 0 | 0 | 279 |
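One way to handle the general case is to walk the cumulative arc length of the polyline and interpolate the division points. A hedged pure-Python sketch (numpy's cumsum/interp can replace the explicit loops for large arrays; assumes no zero-length segments):

```python
import math

# Split a polyline into n equal-arc-length parts by finding the n-1
# interior points at fractions k/n of the total length.
def split_points(points, n):
    cum = [0.0]  # cumulative arc length at each vertex
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        cum.append(cum[-1] + math.hypot(x1 - x0, y1 - y0))
    total = cum[-1]

    result = []
    for k in range(1, n):
        target = total * k / n
        i = 1
        while cum[i] < target:  # segment containing the target length
            i += 1
        t = (target - cum[i - 1]) / (cum[i] - cum[i - 1])
        (x0, y0), (x1, y1) = points[i - 1], points[i]
        result.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return result

print(split_points([(0, 0), (1, 1)], 2))  # [(0.5, 0.5)]
```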
35,604,605 | 2016-02-24T14:17:00.000 | 1 | 0 | 0 | 0 | python,google-api-python-client | 35,604,757 | 3 | false | 0 | 0 | You can install google drive on your local machine and copy the file into the google drive directory at the correct position. then google drive (the client software) will update the file. | 1 | 1 | 0 | I first had a updating problem with using google drive api, Even I followed the example of Quickstart, and after making some changes on it, the file on google drive is updated successfully. But now here comes a new problem after updating, I am not sure if it is because my change to the Quickstart is not proper, or something else. The problem is after updating the an excel file on google drive with an excel file on my local machine, the excel file on my local mahine is not editable if I don't close the IDLE terminal; but if I close the IDLE window, I can do everything with the excel file and save the changes. Such as, without closing the IDLE file, and I made some changes on the excel file and try to save it, then the system says something like sharing violation, and save the file as a temporary file 62635600...., if I try to delete the excel file, then the system says the file is being used by pythonw.exe. After closing the IDLE window, the excel goes back to normal, same as a normal excel file. Anybody has any idea? | After updating file on google drive through google api, the file on local machine is not editable without closing IDLE window | 0.066568 | 1 | 0 | 597 |
35,604,937 | 2016-02-24T14:31:00.000 | 0 | 0 | 0 | 0 | python,sql-server,flask,jinja2 | 35,608,084 | 1 | true | 1 | 0 | Well, this feels like a hack, but since the only time I'm ever using these guid's is when i'm reading them from the database, I just did:
CAST(REC_GUID_ID as VARCHAR(36)) as REC_GUID_ID
And now they are in a format that everything seems to read just fine. | 1 | 0 | 0 | I have a flask app that recently had to start using mssql generated guid's as primary keys (previously it was just integers). The guid's are latin-1 encoding. Also, I am not using sqlalchemy. Now, when I'm trying to display the queried mssql guid's in a flask jinja2 template, I get the following error:
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc1 in position 0: ordinal not in range(128).
I've tried:
unsetting the LANG on the linux host
Forcing utf-8 in FreeTDS config (this was already done)
escaping in the jinja template
using python3, no luck
switching from pypyodbc to pyodbc3, but the problem persists
Nothing seems to work. If I import sys and set the decoding to utf-8, the error changes replacing ascii with utf-8, but the jinja template will not render the guid's.
Any thoughts? Thanks for reading. Also to note, my dev environment is on windows 7 and this issue does not crop up there. It's only on the linux server. | Unicode issue using flask and mssql guids with FreeTDS | 1.2 | 1 | 0 | 167 |
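For reference, the raw value coming back for a uniqueidentifier column is SQL Server's 16-byte mixed-endian GUID layout, which is why it looks like latin-1 garbage to an ASCII decoder. If you ever need to convert on the Python side instead of in SQL, the stdlib uuid module understands that layout via bytes_le (a hedged sketch with made-up bytes):

```python
import uuid

# 16 raw bytes as they might come back for a uniqueidentifier column.
raw = b'\xc1\x0b\x8c\x12\x9a\x5e\x4f\x3a\xb2\x01\x02\x03\x04\x05\x06\x07'

guid = uuid.UUID(bytes_le=raw)  # bytes_le = SQL Server / COM field order
text = str(guid)

print(len(text))                          # 36, same as CAST(... AS VARCHAR(36))
assert uuid.UUID(text).bytes_le == raw    # round-trips losslessly
```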
35,606,542 | 2016-02-24T15:44:00.000 | 1 | 1 | 1 | 0 | python,itk,image-registration | 35,612,218 | 2 | false | 0 | 0 | The wrapped SimpleITK interface for Python does not provide an interface to extend from or derive from. The options for the SimpleITK ImageRegistrationMethods are the options available.
Deriving classes and tweaking algorithms is best done with ITK at the C++ level.
You may be able to put together a little registration framework with components of SimpleITK and Python. For example you could use the ResampleImageFilter and the Transform classes from SimpleITK along with a scipy optimizer and a custom metric. | 1 | 0 | 0 | SimpleITK provides easy to use Python interface. Can I extend the class from there?
I need to solve a registration problem, which requires me to write my customized registration class, especially the similarity metric. How can I extend SimpleITK in Python for my use? | How to extend an ITK class in Python? | 0.099668 | 0 | 0 | 751 |
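To illustrate the "little registration framework" idea without SimpleITK, here is a deliberately tiny 1-D toy: a custom similarity metric (sum of squared differences), a toy "resampler" (integer shift), and a brute-force search standing in for the scipy optimizer. The real version would use SimpleITK's ResampleImageFilter and Transform classes for the resampling step.

```python
# Not SimpleITK itself — just the three composable pieces in miniature.
def ssd(a, b):
    # custom similarity metric: sum of squared differences
    return sum((x - y) ** 2 for x, y in zip(a, b))

def shift(signal, t):
    # toy "resampler": translate right by t samples, zero-padding left
    return [0] * t + signal[:len(signal) - t]

def register(fixed, moving, max_shift=5):
    # exhaustive search stands in for a real optimizer
    return min(range(max_shift + 1),
               key=lambda t: ssd(fixed, shift(moving, t)))

fixed  = [0, 0, 0, 1, 2, 3, 2, 1, 0, 0]
moving = [0, 1, 2, 3, 2, 1, 0, 0, 0, 0]
print(register(fixed, moving))  # 2
```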
35,607,753 | 2016-02-24T16:32:00.000 | 0 | 0 | 0 | 0 | python,django,django-rest-framework,django-rest-auth | 35,726,205 | 1 | true | 1 | 0 | To anyone that stumbles onto this question, I couldn't figure out how to make the hybrid approach work. Having Django serve pages that each contained API calls seemed OK, but I never saw any requests made to the API- I believe due to some other security issues. I'm sure it's possible, but I decided to go for the single page app implementation after all to make things simpler. | 1 | 5 | 0 | I'm building an app with a Django backend, Angular frontend, and a REST API using Django REST Framework for Angular to consume. When I was still working out backend stuff with a vanilla frontend, I used the provided Django authentication to handle user auth- but now that I'm creating a REST based app, I'm not sure how to approach authentication.
Since all user data will be either retrieved or submitted via the API, should API authentication be enough? If so, do I need to remove the existing Django authentication middleware?
Right now, when I try to hit API endpoints on an early version of the app, I'm directed to what looks like the normal Django login form. If I enter a valid username and password, it doesn't work- just prompts to login again. Would removing the basic Django authentication prevent this? I want to be prompted to login, however I'm not sure how to handle that with these technologies.
The package django-rest-auth seems useful, and the same group makes an Angular module- but the docs don't go much past installation and the provided endpoints. Ultimately, I think the core of this question is: how do I entirely switch authentication away from what's provided by Django to something like django-rest-auth or one of the other 3rd party packages recommended by DRF?
edit: I made this comment below, but I realized that I need to figure out how combined auth will work. I'm not building a single page app, so individual basic pages will be served from Django, but each page will hit various API endpoints to retrieve the data it needs. Is there a way to have something like django-rest-auth handle all authentication? | Django, Angular, & DRF: Authentication to Django backend vs. API | 1.2 | 0 | 0 | 698 |
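For the hybrid setup described (server-rendered pages plus API calls from those pages), one common arrangement is to keep Django's session auth for the pages and let DRF accept both session and token credentials on the API. A hedged settings.py sketch — these are standard DRF class paths, but treat the combination as a starting point:

```python
# settings.py (fragment) — sketch, not a drop-in configuration.
REST_FRAMEWORK = {
    'DEFAULT_AUTHENTICATION_CLASSES': (
        # Session auth lets pages served by Django call the API with
        # the user's existing login; token auth covers other clients.
        'rest_framework.authentication.SessionAuthentication',
        'rest_framework.authentication.TokenAuthentication',
    ),
    'DEFAULT_PERMISSION_CLASSES': (
        'rest_framework.permissions.IsAuthenticated',
    ),
}
```

Note that with SessionAuthentication, in-page AJAX calls must also send Django's CSRF token, or they will be rejected in the way described above.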
35,609,592 | 2016-02-24T17:59:00.000 | 0 | 0 | 1 | 0 | python,python-3.x,pygame | 39,562,265 | 6 | false | 0 | 1 | Even when you have windows 64 bit you need to get the win32.whl file , then follow the standard instructions | 1 | 6 | 0 | I am unable to find a pygame download for Python 3.5 and the ones I have downloaded don't seem to work when I import to the shell. Help?
This is the message I receive on the shell:
import pygame
Traceback (most recent call last):
File "", line 1, in
import pygame
ImportError: No module named 'pygame' | How do I download Pygame for Python 3.5.1? | 0 | 0 | 0 | 27,913 |
35,611,473 | 2016-02-24T19:39:00.000 | 0 | 0 | 0 | 0 | javascript,python,ruby,cassandra | 35,617,799 | 1 | false | 1 | 0 | Not sure you tried this approach, add tenant info with all your table keys, this way you write your class method to prepare query with tenant will append to it.
1) "ClientA:Name" ....
2) "ClientB:Name"... | 1 | 2 | 0 | I'm wondering about the best way to manage multitenancy in Cassandra. I have a web app, and I want all users in the web app to have a namespaced area in Cassandra. I can do this via cqlsh (create user with password + grants), but I can't find documentation on it for the ruby, python, or javascript drivers. Any help?
EDIT:
Right now, I'm using the ruby driver and just using session.execute but this seems suboptimal for security reasons. | Programmatically managing Cassandra permissions | 0 | 0 | 0 | 54 |
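A minimal sketch of the key-prefixing idea from the answer (hypothetical helper, not part of any driver): the tenant name is baked into every partition key value, so each client only ever touches rows under its own prefix.

```python
# Hypothetical helper: namespaces every key as "<tenant>:<key>".
class TenantKeyspace:
    def __init__(self, tenant):
        self.tenant = tenant

    def key(self, name):
        return "%s:%s" % (self.tenant, name)

    def select_by_key(self, table, name):
        # returns a parameterized statement plus its bound values
        return ("SELECT * FROM %s WHERE key = ?" % table,
                (self.key(name),))

client_a = TenantKeyspace("ClientA")
print(client_a.key("Name"))  # ClientA:Name
```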
35,612,338 | 2016-02-24T20:23:00.000 | 0 | 0 | 1 | 0 | python,ipython,pycharm,anaconda | 52,747,634 | 1 | false | 0 | 0 | Short answer:
Go to File > Default settings > Build, Execution, Deployment > Console and select Use IPython if available
Go to Run > Edit Configurations and select Show command line afterwards
Tip: Run selected parts of your code with ALT + SHIFT + E
The details:
If you've selected Anaconda as the project interpreter, IPython will most likely be the selected console even though it neither looks nor behaves like the IPython console you are used to in Spyder.
I guess you are used to how the IPython console appears in Spyder, versus what you're seeing in the PyCharm Console window (screenshots omitted).
Unlike Spyder, PyCharm has no graphical indicator showing that this is an IPython console. So, to make sure it's an IPython console and make it behave more or less like the IPython console you are used to from Spyder, you should try to follow these two steps:
Go to File > Default Settings > Build, Execution, Deployment > Console and make sure to select Use IPython if available.
Go to Run > Edit Configurations and select Show command line afterwards.
Now you can run selected parts of your code with ALT+SHIFT+E more or less exactly like in Spyder.
If this doesn't do the trick, you should check out these other posts on SO:
Interacting with program after execution
Disable ipython console in pycharm | 1 | 10 | 0 | I am using PyCharm IDE with Anaconda distribution.
When I run Tools > Python Console..., PyCharm uses the IPython console, which is part of the Anaconda distribution.
But it is using the default profile.
I already tried adding the option --profile=myProfileName in Environment variables and in Interpreter options in Settings > Build, Execution, Deployment > Console > Python Console
But it keeps using the default profile.
My question is: how do I set a different IPython profile in PyCharm? | How to set ipython profile in PyCharm | 0 | 0 | 0 | 1,263 |
35,615,273 | 2016-02-24T23:23:00.000 | -2 | 0 | 0 | 0 | python,django,postgresql,sqlite,heroku | 35,615,302 | 3 | false | 1 | 0 | sure you can deploy with sqlite ... its not really recommended but should work ok if you have low network traffic
you set your database engine to sqlite in settings.py ... just make sure you have write access to the path that you specify for your database | 1 | 0 | 0 | I've built a Django app that uses sqlite (the default database), but I can't find anywhere that allows deployment with sqlite. Heroku only works with postgresql, and I've spent two days trying to switch databases and can't figure it out, so I want to just deploy with sqlite. (This is just a small application.)
A few questions:
Is there anywhere I can deploy with sqlite?
If so, where/how? | Is it possible to deploy Django with Sqlite? | -0.132549 | 1 | 0 | 3,302 |
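For reference, a minimal sketch of the settings.py change the answer above describes. The keys are the standard Django settings names; BASE_DIR is assumed to be defined the way the default project template defines it:

```python
import os

# As in the default Django project template.
BASE_DIR = os.path.dirname(os.path.abspath(__file__))

# Point Django at a SQLite file; the server process needs write access
# to this path (and to the directory, so SQLite can create journal files).
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": os.path.join(BASE_DIR, "db.sqlite3"),
    }
}
```

On most hosts the catch is exactly the write-access point from the answer: the database file must live somewhere the deployed process can write, which is why ephemeral filesystems like Heroku's make SQLite a poor fit.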
35,617,670 | 2016-02-25T03:28:00.000 | 1 | 1 | 0 | 0 | python,mysql,database,synchronization,raspberry-pi | 38,479,349 | 1 | false | 0 | 0 | I went with my first thought:
store the sensor data on a local DB (SQLite3 for its small footprint). Records are created every half minute.
a separate script - run regularly via cron - compares the last timestamp entry in the cloud DB with the local one and updates the cloud DB.
Even though the comparison would ideally mean a doubling of DB transactions (a read + a write), if the last timestamp recorded on the online DB is stored locally for reference the remote read becomes unnecessary, thus being more efficient. | 1 | 1 | 0 | I have a Raspberry Pi collecting data from sensors attached to it. I would like to have this data - collected every minute - accessible from an online DB (Amazon RDS | MySQL).
Currently, a python script running on the Pi pushes this data to an Amazon RDS instance every 50 seconds (~per minute). However, I have no records when internet is down. I will appreciate any suggestions on how to fix this.
Here are my thoughts so far:
store data on a local MySQL DB, run a separate script that checks for differences between the online and local DB and updates the online one where needed. This will run every minute and write only one record to the online DB every minute if all is well.
Utilize some sort of feature within MySQL itself - a replication job? | Syncing locally collected regular data to online DB over unreliable internet connection | 0.197375 | 1 | 0 | 476 |
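A sketch of the first idea (write locally first, then sync what has not been pushed yet). The remote push is stubbed out as a plain callable; in a real deployment it would be a MySQL insert over the network and would raise when the link is down, leaving the rows marked unsynced:

```python
import sqlite3
import time

def init_local(conn):
    # Local buffer table; `synced` marks rows already pushed to the cloud DB.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS readings ("
        " ts REAL PRIMARY KEY, value REAL, synced INTEGER DEFAULT 0)"
    )

def record(conn, value, ts=None):
    # Always write locally first, so nothing is lost while offline.
    conn.execute("INSERT INTO readings (ts, value) VALUES (?, ?)",
                 (ts if ts is not None else time.time(), value))
    conn.commit()

def sync(conn, push_remote):
    # Push every not-yet-synced row; mark it only after the push succeeds.
    rows = conn.execute(
        "SELECT ts, value FROM readings WHERE synced = 0 ORDER BY ts").fetchall()
    for ts, value in rows:
        push_remote(ts, value)  # would raise if the network is down
        conn.execute("UPDATE readings SET synced = 1 WHERE ts = ?", (ts,))
    conn.commit()
    return len(rows)
```

Running `sync` from cron every minute gives the catch-up behaviour described above: after an outage, all buffered rows go out in one pass.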
35,618,159 | 2016-02-25T04:10:00.000 | 5 | 1 | 0 | 0 | python,deployment,pyc | 35,619,259 | 1 | false | 1 | 0 | Sure, you can go ahead and precompile to .pyc's as it won't hurt anything.
Will it affect the first or nth pageload? Assuming Flask/WSGI runs as a persistent process, not at all. By the time the first page has been requested, all of the Python modules will have already been loaded into memory (as bytecode). Thus, server startup time will be the only thing affected by not having the files pre-compiled.
However, if for some reason a new Python process is invoked for each page request, then yes, there would (probably) be a noticeable difference in performance and it would be better to pre-compile.
As Klaus said in the comments above, the only other time a pageload might be affected is if a function happens to try and import a module that hasn't already been imported. This will require the module to be parsed and converted to bytecode then loaded into memory before being able to continue. | 1 | 5 | 0 | When developing a Python web app (Flask/uWSGI) and running it on my local machine, *.pyc files are generated by the interpreter. My understanding is that these compiled files can make things load faster, but not necessarily run faster.
When I deploy this same app to production, it runs under a user account that has no write permissions on the local file system. There are no *.pyc files committed to source control, and no effort is made to generate them during the deploy. Even if Python wanted to write a .pyc file at runtime, it would not be able to.
Recently I started wondering if this has any tangible effect on the performance of the app, either in terms of the very first pageview after the process starts, or consistently throughout its entire lifetime.
Should I throw a python -m compileall in as part of my deploy scripts? | Should I generate *.pyc files when deploying? | 0.761594 | 0 | 0 | 2,979 |
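If you do add pre-compilation to the deploy, the same thing python -m compileall does can also be driven from Python via the standard-library compileall module. A small self-contained demo against a throwaway directory (the file and directory names are invented for the example):

```python
import compileall
import pathlib
import tempfile

def precompile(app_dir):
    # Compile every .py under app_dir to bytecode ahead of time,
    # equivalent to running `python -m compileall app_dir`.
    return compileall.compile_dir(app_dir, quiet=1)

# Demo: create a tiny package in a temp dir and pre-compile it.
demo = pathlib.Path(tempfile.mkdtemp())
(demo / "hello.py").write_text("GREETING = 'hi'\n")
ok = precompile(str(demo))
```

On Python 3 the bytecode lands in a __pycache__ subdirectory, so the deploy user only needs write access at build time, not at runtime.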
35,623,515 | 2016-02-25T09:41:00.000 | 0 | 0 | 1 | 0 | python,django | 35,623,671 | 1 | true | 1 | 0 | Check PATH environment variable in console2. It should contains path to the pip executable.
In Windows Command-Prompt the syntax is echo %PATH%
To get a list of all environment variables enter the command set | 1 | 1 | 0 | I have downloaded console2 for windows. Using that I wanted to setup django framework, but console2 couldn't recognize 'pip' as a command. When I tried the same command in windows command prompt, 'pip' was recognized by windows command prompt. Why so? | 'pip' command doesn't execute in console2 but executes in windows command prompt | 1.2 | 0 | 0 | 36 |
35,624,808 | 2016-02-25T10:34:00.000 | 0 | 0 | 1 | 0 | python,macos,console,pycharm,jetbrains-ide | 54,680,604 | 1 | true | 0 | 0 | It's in the same window know so much easier to go to it. | 1 | 3 | 0 | Hi I am wondering how I can have the python console pop up automatically after I run a script in Pycharm. Currently it opens in the background and I have to either command-tab to it, or click manually. Maybe there is a way to edit the configuration to allow it to pop up, I haven't found one.
Thanks | How to automatically switch focus to python console when running script in Pycharm? | 1.2 | 0 | 0 | 360 |
35,628,756 | 2016-02-25T13:32:00.000 | 1 | 0 | 0 | 0 | python-3.x,matplotlib | 35,629,019 | 2 | false | 0 | 0 | Forgive me for I'm not familiar with matplotlib but I'm presuming that you're reading the csv file directly into matplotlib. If so is there an option to read the csv file into your app as a list of ints or as a string and then do the data validation before passing that string to the library?
Apologies if my idea is not applicable. | 2 | 0 | 1 | I am building graphics using Matplotlib and I sometimes have wrong values in my Csv files, it creates spikes in my graph that I would like to suppress, also sometimes I have lots of zeros ( when the sensor is disconnected ) but I would prefer the graph showing blank spaces than wrong zeros that could be interpreted as real values. | Hide wrong values of a graph | 0.099668 | 0 | 0 | 21 |
35,628,756 | 2016-02-25T13:32:00.000 | 0 | 0 | 0 | 0 | python-3.x,matplotlib | 35,629,514 | 2 | true | 0 | 0 | I found a way that works:
I used xlim to set my max and min x values, and then I set all the values that I didn't want to NaN! | 2 | 0 | 1 | I am building graphics using Matplotlib and I sometimes have wrong values in my Csv files, it creates spikes in my graph that I would like to suppress, also sometimes I have lots of zeros ( when the sensor is disconnected ) but I would prefer the graph showing blank spaces than wrong zeros that could be interpreted as real values. | Hide wrong values of a graph | 1.2 | 0 | 0 | 21
35,630,725 | 2016-02-25T14:54:00.000 | 0 | 0 | 1 | 0 | javascript,python,json,security | 35,648,324 | 2 | true | 0 | 0 | There is no such thing as sanitized and unsanitized data, without context.
Data is only considered unsafe if user controlled data is used in a context where it has special meaning.
e.g. ' in SQL, and <script> in HTML.
Contrary to <script> in SQL, which is completely safe.
The upshot is to encode/sanitize when the data is used, not when it is received from JSON. | 2 | 4 | 0 | Our infrastructure is using Python for everything in the backend and Javascript for our "front-end" (it's a library we serve to other sites). The communication between the different components of the infrastructure is done via JSON messages.
In Python, json.load() and json.dump() are a safe way of dealing with a JSON string. In Javascript, JSON.parse() would be used instead. But these functions only guarantee that the string has a proper JSON format, am I right?
If I'm concerned about injection attacks, I would need to sanitize every field of the JSON by other means. Am I right in this assumption? Or just by using the previously mentioned functions we would be safe? | JSON parsing and security | 1.2 | 0 | 1 | 7,532 |
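A small illustration of the point in the question: json.loads only rejects malformed JSON, so any check on the structure of the message is an explicit extra step you write yourself. The field names here are made up for the example:

```python
import json

def parse_message(raw):
    # json.loads only rejects malformed JSON; it says nothing about content.
    data = json.loads(raw)
    # Structural validation is a separate, explicit step.
    if not isinstance(data, dict):
        raise ValueError("expected a JSON object")
    if not isinstance(data.get("user"), str):
        raise ValueError("'user' must be a string")
    if not isinstance(data.get("count"), int):
        raise ValueError("'count' must be an integer")
    return data
```

Even with this in place, the validated strings are only "safe" relative to where they are later used, which is the context point made in the other answer: encode for HTML when emitting HTML, parameterize when building SQL, and so on.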
35,630,725 | 2016-02-25T14:54:00.000 | 5 | 0 | 1 | 0 | javascript,python,json,security | 35,630,951 | 2 | false | 0 | 0 | JSON.parse will throw an exception if the input string is not in valid JSON format.
It is safe to use, I can't think of any way to attack your code with just JSON.parse. It does not work like eval.
Of course you can check the resulting json object to make sure it has the structure you're expecting. | 2 | 4 | 0 | Our infrastructure is using Python for everything in the backend and Javascript for our "front-end" (it's a library we serve to other sites). The communication between the different components of the infrastructure is done via JSON messages.
In Python, json.load() and json.dump() are a safe way of dealing with a JSON string. In Javascript, JSON.parse() would be used instead. But these functions only guarantee that the string has a proper JSON format, am I right?
If I'm concerned about injection attacks, I would need to sanitize every field of the JSON by other means. Am I right in this assumption? Or just by using the previously mentioned functions we would be safe? | JSON parsing and security | 0.462117 | 0 | 1 | 7,532 |
35,633,187 | 2016-02-25T16:39:00.000 | 0 | 0 | 0 | 0 | python,django | 35,633,712 | 2 | false | 1 | 0 | You will have to use a different separator character between parameters than inside parameters. After parameters have been matched, you can always replace that separator by the slash that should actually be there inside the parameters.
So, either those parameters that use a slash internally allow for some other safe character like a dash or a dot (meaning that there is a character that cannot occur otherwise due to the nature of the respective parameter), or you have to decide on some separator character and create some escaping rule. | 1 | 1 | 0 | I'm using Django as a restful api, and i have urls like url(r'^datavore/(?P<configuration>.*)/(?P<dataset>.*)/(?P<varname>.*)/(?P<region>[a-z-A-Z\_]+)/(?P<date_range>.*)/filelist/$', views.filelist,name="filelist"),
my problem is that when the dataset parameter contains '/', it modifies the structure of my URL; the dataset parameter then contains only the string after the /. Any idea how to fix this? | Slash in parameter Django | 0 | 0 | 0 | 979
35,635,870 | 2016-02-25T18:53:00.000 | 1 | 0 | 0 | 0 | python,dictionary,pandas,dataframe,panel | 35,638,383 | 1 | false | 0 | 0 | I don't think you need a panel. I recommend a nested dataframe approach. | 1 | 5 | 1 | I hope this doesn't sound like an open question for discussion. I am going to give some details for my specific case.
I am new to Pandas and I need to store several 2D arrays, where columns represent frequencies and rows represent directions (2D waves spectra, if you are curious). Each array represent a specific time.
I am storing these arrays as Pandas DataFrames, but for keeping them in a single object I thought of 2 options:
Storing the DataFrames in a dictionary where the key is the time stamp.
Storing the DataFrames in a Pandas Panel where the item is the time stamp.
The first option seems simple and has the flexibility to store arrays with different sizes, indexes and column names. The second option seems better for processing the data, since Panels have specific methods, and can also be easily saved or exported (e.g. to csv or pickle).
Which of the two options is better suited in terms of: speed, memory use, flexibility and data analysis?
Regards | Is it better to store Pandas Data Frames in a dictionary or in a Panel? | 0.197375 | 0 | 0 | 1,554 |
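A sketch of the dictionary option from the question, plus the usual trick for collapsing it into one MultiIndex frame when Panel-style processing is wanted. The timestamps and array shapes are invented for the example:

```python
import numpy as np
import pandas as pd

# One 2D spectrum (directions x frequencies) per timestamp.
spectra = {
    "2016-02-25 12:00": pd.DataFrame(np.zeros((3, 4))),
    "2016-02-25 13:00": pd.DataFrame(np.ones((3, 4))),
}

# Concatenating with the dict keys as an outer index level gives a single
# frame with (timestamp, direction) rows, covering most of what a Panel
# (deprecated in later pandas versions) would have offered.
combined = pd.concat(spectra, names=["time", "direction"])
```

The dictionary keeps the flexibility of differently shaped spectra per timestamp; the concatenated frame regains the vectorized operations and easy export (to_csv, pickle) in one object.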
35,637,501 | 2016-02-25T20:19:00.000 | 0 | 0 | 0 | 0 | python,psychopy | 35,664,986 | 2 | true | 0 | 0 | difference = 1.0 - (RT - int(RT)) | 1 | 0 | 1 | I'm building a fMRI paradigm and I have a stimulus that disappears when a user presses a button (up to 4s), then a jitter (0-12s), then another stimulus presentation. I'm locking the stimuli presentation to the 1s TR of the scanner so I'm curious how I can round up the jitter time to the nearest second.
So, the task is initialized as:
stimulus 1 ( ≤4 s) -- jitter (e.g. 6 s) -- stimulus 2
But if the user responds to stimulus-1 at 1.3 seconds, then the task becomes
stimulus-1 (1.3 s) -- jitter (6.7 s) -- stimulus-2
Does that make sense? Thanks for the help! | Rounding up a jitter in psychopy | 1.2 | 0 | 0 | 188 |
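The one-liner in the accepted answer can be wrapped with math.ceil so the response-plus-padding interval always lands on a whole-second TR boundary before the planned jitter starts. A sketch; the function and variable names are invented:

```python
import math

def jitter_padding(rt):
    """Extra time to add after a response at `rt` seconds so the
    next event starts on the next whole second (the 1 s TR)."""
    return math.ceil(rt) - rt

def total_interval(rt, planned_jitter):
    # Pad the response time up to the next TR, then add the planned jitter.
    return rt + jitter_padding(rt) + planned_jitter
```

With the numbers from the question, a response at 1.3 s plus a planned 6 s jitter gives an 8 s total (the 6.7 s effective jitter described above, aligned to the TR).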
35,637,516 | 2016-02-25T20:20:00.000 | 2 | 0 | 0 | 1 | python,django,bash | 35,637,631 | 2 | false | 1 | 0 | I assume you're using virtualenv. If so, do you know where it put the bin directory? If so, run source bin/activate. After that, when you try runserver, it should use the correct Python instance.
More complete:
source /path/to/bin/activate
But I typically run source bin/activate from the directory that contains the related bin. | 1 | 1 | 0 | I am working on a django app on my macbook with Yosemite.
My app was in a virtual environment.
I restarted my terminal and when I cd'd to my app it was no longer in the virtual environment and now doesn't run. And all my virtual environment commands give me -bash: command not found.
I fully recognize this is a very noobie question but I really want to work on my app and I have tried everything I could find on google and stackoverflow.
Please help.
Preferably with the commands I need to type from my command line - thank you! | Django virtual environment disaster | 0.197375 | 0 | 0 | 108 |
35,637,968 | 2016-02-25T20:46:00.000 | 2 | 0 | 0 | 0 | python,machine-learning,classification,random-forest | 35,638,661 | 1 | true | 0 | 0 | Just do it yourself. Each classifier in scikit-learn gives you access to decision_function or predict_proba, both of which provide the support behind the predict operation (predict is just the argmax of these). Thus, just select the 100 with the highest support. | 1 | 0 | 1 | I have a dataset with events, where each event has data with 1000 possible items and only 100 are correct for each event. How do I force my classifier to select only 100 for each event?
After I run it through my training model (with 18 features, where each event always has 100 targets flagged as 1), the classifier selects anywhere between 60-80 items instead of 100. Even if I give each event an event number, that doesn't help.
I'm using python sklearn gradient boosting and random forest method. | Force classifer to select a fixed number of targets | 1.2 | 0 | 0 | 33 |
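A sketch of the approach the answer describes: take the classifier's support scores for one event (e.g. clf.predict_proba(X)[:, 1] in scikit-learn) and keep exactly the k highest, instead of relying on predict's default threshold. NumPy stands in for the real model output here:

```python
import numpy as np

def select_top_k(scores, k):
    # Indices of the k items with the highest support, regardless of
    # where the default decision threshold would have cut them off.
    order = np.argsort(scores)[::-1]
    return np.sort(order[:k])

# Pretend these came from clf.predict_proba(X)[:, 1] for one event.
scores = np.array([0.9, 0.1, 0.8, 0.4, 0.95])
picked = select_top_k(scores, 3)
```

Applied per event, this guarantees exactly 100 selections out of the 1000 candidates, which predict alone cannot do.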
35,639,333 | 2016-02-25T22:06:00.000 | 1 | 0 | 1 | 1 | python,line-endings | 35,639,400 | 1 | false | 0 | 0 | In general newlines (as typically used in Linux) are more portable than carriage return and then newline (as used in Windows). Note also that if you store your code on GitHub or another Git repository, it will convert it properly without you having to do anything. | 1 | 1 | 0 | Which line endings should be used for platform independent code (not file access)?
My next project must run on both Windows and Linux.
I wrote code on Linux and used Hg to clone to Windows and it ran fine with Linux endings. The downside is that if you open the file in something other than a smart editor the line endings are not correct. | Python Code Line Endings | 0.197375 | 0 | 0 | 885 |
35,643,223 | 2016-02-26T04:01:00.000 | 0 | 0 | 0 | 0 | javascript,php,jquery,python,html | 35,643,411 | 2 | false | 1 | 0 | One way to do it is to get the country name of the select tag with $( "#myselect option:selected" ).text(); or its value with $( "#myselect" ).val(); and then use a switch statement to change the date time field. | 1 | 0 | 0 | I have an HTML drop-down where country names are specified. And there is another field where a date time field is specified.
When a user selects a country, I want the date-time field to be filled with the current date-time of that particular country. Please let me know in what all ways this can be done and how.
Thanks in advance. | Fill the current time based on the selection of the country in html dropdown | 0 | 0 | 0 | 397 |
35,644,542 | 2016-02-26T05:56:00.000 | 1 | 0 | 1 | 0 | windows,python-3.x,virtualenv | 35,644,869 | 1 | true | 0 | 0 | Yes, it's done: I just updated my virtualenv through pip install --upgrade virtualenv | 1 | 1 | 0 | I am new to Python, so please forgive me if the question is silly.
I am using python version 3.4.
I have install virtualenv.
I am typing the command virtualenv env and I get a response like command not found.
Does this require any environment variable to be set? | Python virtualenv command not regonized on windows 7 | 1.2 | 0 | 0 | 193
35,645,776 | 2016-02-26T07:20:00.000 | 1 | 0 | 0 | 0 | python,statistics,mathematical-optimization | 35,646,195 | 1 | true | 0 | 0 | As I'm understanding it, you want some measure of central tendency. There are three of these: mean, median, and mode. Which one you want to use depends on your goals and priorities. Mean is very popular and understandable to people. It has a lot of useful statistical properties. However, it is subject to outliers. On the other hand, mode and median are not (as) influenced by outliers, but they have fewer statistical usages. Further, in the case of the median and mean, the value you calculate may not actually be in your data set, whereas the mode will.
Which of these considerations matter for you?
But even after you pick the measure of central tendency you like, how are you going to determine when something is "too far" out of the set? In your question you're doing it as just a percentage, but this might not be the best way.
For most problems, I would probably use the mean as my measure of central tendency and use standard deviation as the statistic to determine if a figure is "off the mark." But something else might work better for you. | 1 | 0 | 1 | I am trying to find a good statistical method to compare a given value with an existing set of values. Currently I am considering mean of the existing numbers and comparing it with the given value. If the value is off by 50% of the mean then I would say it is off the flow. I am using python programming language for all calculations. Is there any other method possible which is more efficient?
Ex: 1,4,7,0,0,0 are the values that exist currently.
I get the mean of these : 2
If the given value is 10, I would say it is off the mark.
Can there be a more efficient way? | Statistic to consider while analysing a value with previous set of values | 1.2 | 0 | 0 | 37 |
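The mean-plus-standard-deviation idea from the answer, sketched with the standard library. The 2-sigma cutoff is an arbitrary example choice, not a recommendation from the original answer:

```python
import statistics

def off_the_mark(history, value, n_sigma=2.0):
    """Flag `value` if it lies more than n_sigma standard
    deviations away from the mean of the existing values."""
    mean = statistics.mean(history)
    sd = statistics.pstdev(history)
    if sd == 0:
        # All history values identical: anything different is off the mark.
        return value != mean
    return abs(value - mean) > n_sigma * sd

history = [1, 4, 7, 0, 0, 0]  # mean 2, population stdev ~2.65
```

Unlike the flat 50%-of-mean rule in the question, this adapts to how spread out the existing values are, so a noisy history tolerates bigger deviations than a tight one.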
35,647,221 | 2016-02-26T08:49:00.000 | 4 | 0 | 0 | 0 | python,django | 35,650,368 | 2 | false | 1 | 0 | Based on itzmeontv's answer:
To override the original templates in the registration application:
create a templates folder inside your base app if it doesn't exist
create a registration folder inside it, so the folder looks like <yourapp>/templates/registration
Inside <yourapp>/templates/registration, create HTML files with the same names as in the registration app. For example: password_change_form.html. So it will look like <yourapp>/templates/registration/password_change_form.html.
Make sure that your base app comes before registration in INSTALLED_APPS. | 1 | 0 | 0 | As the question says, I'm using django-registration-redux and I've made templates for the registration emails and pages but can't figure out how to make the template for the password reset email.
I'm using django 1.9 | How do I make a custom email template for django-registration-redux password reset? | 0.379949 | 0 | 0 | 606 |
35,647,516 | 2016-02-26T09:06:00.000 | 2 | 0 | 0 | 0 | python,html,flask | 35,647,647 | 1 | true | 1 | 0 | You can't trigger anything on the server without making a request to a URL.
If you don't want the page to reload, you can either redirect back to the original page after your action is finished, or you can use Ajax to make the request without changing the page; but the request itself is always to a URL. | 1 | 0 | 0 | In flask programming, people usually use 'url_for' such as {{url_for = 'some url'}}
This way you have to make a URL (@app.route) and a template (HTML) and map them to each other.
But I just want to send an email when I click the submit button in an HTML modal.
For this, there is no page reloading. To do this, I think I have to connect the button and a Python function without a URL and return(response)
I wonder how to do this; please help me, I'm a beginner in Flask programming. | How to connect between HTML button and python(or flask) function? | 1.2 | 0 | 0 | 1,304
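A sketch of the Ajax pattern the answer describes: the button still hits a URL, but the page never reloads, because JavaScript posts to it in the background and the view returns a small response instead of rendering a template. The route name and the send_email stub are invented for the example:

```python
from flask import Flask, jsonify

app = Flask(__name__)

def send_email():
    # Stand-in for the real email-sending code.
    return True

@app.route("/send-email", methods=["POST"])
def send_email_view():
    # Called from the page's JS without a reload, e.g.:
    #   fetch("/send-email", {method: "POST"})
    ok = send_email()
    return jsonify({"sent": ok})
```

The modal's submit handler just calls fetch (or jQuery's $.post) against "/send-email" and reads the JSON reply, so the user never leaves the page.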
35,647,765 | 2016-02-26T09:20:00.000 | 9 | 0 | 1 | 0 | python,ipython,jupyter-notebook | 35,648,512 | 2 | false | 0 | 0 | It runs in sequence. You will actually see the progress, as the cells that are in the queue will show a star In [*]:, while the cells that have run successfully will show a number, e.g.: In [123]. | 1 | 5 | 0 | I have code that applies changes to a dataset and then the next cell picks this up to continue with another set of changes. This is done for my own readability and troubleshooting in data munging.
I think I finished the code and want to apply it to the initial data. Ipython notebook has an option run all cells.
My question is does it run them one after another or simultaneously ? | Does ipython notebook 'run all cells' execute simultaneously or in sequence? | 1 | 0 | 0 | 4,608 |
35,648,212 | 2016-02-26T09:42:00.000 | 0 | 0 | 0 | 0 | python,numerical-methods,nonlinear-functions,nonlinear-optimization | 35,835,011 | 2 | false | 0 | 0 | You can square the function and use global optimization software to locate all the minima inside a domain and pick those with a zero value.
Stochastic multistart based global optimization methods with clustering are quite proper for this task. | 1 | 0 | 1 | Let us assume I have a smooth, nonlinear function f: R^n -> R with the (known) maximum number of roots N. How can I find the roots efficiently? Right now I have calculated the function on the grid on a preselected area, refined the grid where the function is below a predefined threshold and continued that routine, but this does not seem to be very efficient, though, because I have noticed that it is difficult to select the area correctly before and to define the threshold accordingly. | Find all roots of a nonlinear function | 0 | 0 | 0 | 866 |
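The grid-plus-refinement idea from the question can be made systematic in the one-dimensional case: scan a grid for sign changes and bisect each bracket. This is only a sketch under strong assumptions (a single scalar variable and simple roots that cross zero); the multistart global-optimization methods in the answer are the n-dimensional analogue:

```python
def find_roots_1d(f, lo, hi, n=1000, tol=1e-10):
    """Roots of f on [lo, hi] found by sign-change scan plus bisection."""
    roots = []
    step = (hi - lo) / n
    a = lo
    fa = f(a)
    for i in range(1, n + 1):
        b = lo + i * step
        fb = f(b)
        if fa == 0:
            roots.append(a)              # grid point is an exact root
        elif fa * fb < 0:                # sign change brackets a root
            x0, x1 = a, b
            while x1 - x0 > tol:         # plain bisection
                mid = 0.5 * (x0 + x1)
                if f(x0) * f(mid) <= 0:
                    x1 = mid
                else:
                    x0 = mid
            roots.append(0.5 * (x0 + x1))
        a, fa = b, fb
    return roots
```

Note the same limitation raised in the question applies: a root that the grid steps over without a sign change (e.g. a tangent root) is missed, which is why the squared-function global-minimization view can be more robust.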
35,648,909 | 2016-02-26T10:12:00.000 | 2 | 1 | 1 | 0 | python,eclipse,pydev | 35,649,823 | 1 | true | 0 | 0 | Disclaimer: I am not familiar with PyDev, just with Eclipse in general. You definitely should not check in the .metadata folder. That one is for your Eclipse workspace as a whole and contains your personal configuration. (That's why your workspace appeared empty after you deleted that folder.) In fact, you should not check in your workspace folder at all, but just the several project folders within it.
Whether to check in the .project files is sort of a matter of taste. Those contain project-specific information and settings, and with those it's easier to import the project into Eclipse, but you can import the project without those, too; it's just a bit more work. If other developers are not using Eclipse, those are useless for them. In the worst case, your co-developers will delete those files from source control and when you update your project later, they are deleted on your end, too, messing up your project.
About deleting the files: Note that there is a difference between not checking files into version control and deleting them locally. So in short: Do not commit those files into version control, but don't delete them locally, either. Depending on what sort of version control you are using, you can set it to ignore those files. | 1 | 2 | 0 | I've create a pydev project in eclipse.
At the top level of my workspace I can see these two files:
.project
.pydevproject
I can also see these in each of my subfolders that contain my actual projects.
At the top of my workspace there is also a
.metadata. folder.
What should I commit to source control?
Ie what can I delete and still be able to open the project with minimal effort (hopefully entirely automated regeneration of files)? If this was Visual Studios C++ project the answer would be to keep just the ".sln", "vcxproj" and "vcxproj.filters" because the "vs" folder and "suo" files will autogenerate on openning. I've tried to delete the ".metadata" folder, but after that nothing appears to load in my workspace.
Also, I am working with someone not using an IDE. What eclipse files do we need to update to keep in sync? | What to commit to source control in Eclipse Pydev | 1.2 | 0 | 0 | 339 |
35,649,215 | 2016-02-26T10:26:00.000 | 2 | 0 | 0 | 0 | python,debugging,plugins,mysql-workbench | 35,668,094 | 1 | true | 0 | 0 | There is the GRT scripting shell, which you can reach via menu -> Scripting -> Scripting Shell. This shell is mostly useful for python plugins, but also shows some useful informations from the GRT (classes, the current tree with all settings, open editors, models etc.) | 1 | 1 | 0 | I am currently developing an export plugin for MySQL Workbench 6.3. It is my first one.
Is there any developer tool that I can use to help me (debug console, watches, variables state, etc.) | MySQL Workbench developer tools | 1.2 | 1 | 0 | 156 |
35,651,059 | 2016-02-26T11:55:00.000 | 1 | 1 | 0 | 0 | python,c++,ipc | 35,651,676 | 1 | true | 0 | 0 | First of all, this question is highly opinion-based!
The cleanest way would be to use them in the same process and have them communicate directly. The only complexity is to implement a proper API and the C++ -> Python calls. Drawbacks are maintainability as you noted and potentially lower robustness (both crash together, not a problem in most cases) and lower flexibility (are you sure you'll never need to run them on different machines?). Extensibility is the best, as it's very simple to add more communication or to change the existing communication.
Then shared memory is the next choice with better maintainability but same other drawbacks. Extensibility is a little bit worse but still not so bad. It can be complicated, I don't know Python support for shared memory operation, for C++ you can have a look at Boost.Interprocess. The main question I'd check first is synchronisation between processes.
Then, network communication. Lots of choices here, from the simplest possible binary protocol implemented on socket level to higher-level options mentioned in comments. It depends how complex your C++ <-> Python communication is and can be in the future. This approach can be more complicated to implement, can require 3rd-party libraries but once done it's extensible and flexible. Usually 3rd-party libraries are based on code generation (Thrift, Protobuf) that doesn't simplify your build process.
I wouldn't seriously consider file system or database for this case. | 1 | 7 | 0 | I have 2 code bases, one in python, one in c++. I want to share real time data between them. I am trying to evaluate which option will work best for my specific use case:
many small data updates from the C++ program to the python program
they both run on the same machine
reliability is important
low latency is nice to have
I can see a few options:
One process writes to a flat file, the other process reads it. It is non scalable, slow and I/O error prone.
One process writes to a database, the other process reads it. This makes it more scalable, slightly less error prone, but still very slow.
Embed my python program into the C++ one or the other way round. I rejected that solution because both code bases are reasonably complex, and I prefered to keep them separated for maintainability reasons.
I use some sockets in both programs, and send messages directly. This seems to be a reasonable approach, but does not leverage the fact that they are on the same machine (it will be optimized slightly by using local host as destination, but still feels cumbersome).
Use shared memory. So far I think this is the most satisfying solution I have found, but has the drawback of being slightly more complex to implement.
Are there other solutions I should consider? | Sharing information between a python code and c++ code (IPC) | 1.2 | 0 | 0 | 1,529 |
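Of the options listed, the socket route is the easiest to sketch in a self-contained way. The example below uses a socketpair inside one process just to demonstrate the framing; in the real setup one end would be the C++ program writing the same length-prefixed format to a local socket:

```python
import json
import socket
import struct

def send_msg(sock, obj):
    # Length-prefixed JSON frame: 4-byte big-endian length, then payload.
    payload = json.dumps(obj).encode("utf-8")
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def _recv_exact(sock, n):
    # TCP/local sockets deliver byte streams, not messages, so loop
    # until exactly n bytes have arrived.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        buf += chunk
    return buf

def recv_msg(sock):
    (length,) = struct.unpack(">I", _recv_exact(sock, 4))
    return json.loads(_recv_exact(sock, length).decode("utf-8"))
```

For the many-small-updates-on-one-machine case in the question, a Unix domain socket (AF_UNIX) carrying frames like these keeps the two processes decoupled while avoiding most of the overhead of going through the full network stack.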
35,651,534 | 2016-02-26T12:18:00.000 | 0 | 0 | 1 | 0 | python,windows,python-2.7,pyinstaller | 64,098,310 | 1 | false | 0 | 0 | I had the same problem with only windows 10 machines.
I built it with the command
pyinstaller --onedir --noconsole --noupx Reserv.py
but there was another dependency on libiomp5md.dll which was not included in my dist build directory. So when I added this file, my exe started to work with Win10. | 1 | 3 | 0 | I have a Windows 7 32-bit virtual machine on my Windows 10 64-bit machine. I created the .exe file for the hello world program using pyinstaller; the exe runs fine in the Windows 7 machine, but when I tried it on Windows 10 and on my sister's laptop with Windows 8.1 it just won't open. If I open the file through the cmd it just gets stuck on loading.
Any idea what is going wrong there ?
thanx. | Pyinstaller .exe not working in windows 10 and windows 8.1 | 0 | 0 | 0 | 2,422 |
35,654,314 | 2016-02-26T14:30:00.000 | 1 | 0 | 0 | 0 | python | 35,654,506 | 1 | true | 0 | 0 | Whatever the context, having the password stored in plain-text is not good at all, even though I like your idea.
I don't think there could be another way to achieve this, except by having an agreement with the power company. | 1 | 0 | 0 | I want to create a website that you can register on, and enter your Power/Gas company login info.
I will then have a Python scraper that gets your power/gas usage and creates bar charts and other information about it. The Python script will run monthly to update with the latest info.
Is this a terrible idea? I believe the passwords for the power company login will have to be saved in plaintext; there wouldn't be any other way to use them again later if they were encrypted.
I'm not sure if anyone will trust my website with their logins either.
Is this a bad thing to do, and should it just not be done, or is there a better way to do this? | Python HTTPS Login to account to scrape data, is this bad practice? | 1.2 | 0 | 1 | 44 |
35,655,334 | 2016-02-26T15:15:00.000 | 2 | 0 | 1 | 0 | matlab,python-3.x,shortcut,spyder | 35,659,592 | 3 | false | 0 | 0 | It's not exactly what you're looking for, but you can jump to the function definition itself by right-clicking and selecting "Go to definition" from the context menu. | 1 | 2 | 0 | In matlab whenever the cursor is on a function, ex: for or plot etc..., and you hit f1, a small window opens with the basic syntax and usage. Is something like that there for python spyder.
ctrl+i is not useful for general commands like for or int.
I searched a lot online... And I am new to Python and am going through a huge piece of code and would appreciate finding a shortcut for the same. | spyder alternative for f1 matlab help feature | 0.132549 | 0 | 0 | 193 |
35,655,346 | 2016-02-26T15:16:00.000 | 0 | 1 | 0 | 0 | python,garbage-collection | 38,498,181 | 2 | false | 0 | 0 | You can attach to a running Python script with a debugger and issue any command within it, like inside an interactive console. I used PyCharm's debugger, but there are a variety of them. | 1 | 5 | 0 | In order to investigate some issues with unreleased system resources I'd like to force immediate garbage collection on an already running python script.
Is this somehow possible, e.g. by sending some kind of signal that Python would understand as an order to run gc; or any other similar way? Thanks.
I'm running Python 2.7 on a Linux server. | How to force Python garbage collection on a running script | 0 | 0 | 0 | 2,451 |
35,655,693 | 2016-02-26T15:32:00.000 | 2 | 1 | 1 | 0 | python,numpy,apache-spark,pyspark | 35,658,350 | 1 | false | 0 | 0 | Yes, putting the packages on a NAS mount to which all the datanodes are connected will work up to dozens and perhaps 100 nodes if you have a good NAS. However, this solution will break down at scale as all the nodes try to import the files they need. The Python import mechanism uses a lot of os.stat calls to the file-system and this can cause bottlenecks when all the nodes are trying to load the same code. | 1 | 0 | 1 | We want to use Python 3.x with packages like NumPy, Pandas,etc. on top of Spark.
We know the Python distribution with these packages needs to be present/distributed on all the datanodes for Spark to use these packages.
Instead of setting up this Python distro on all the datanodes, will putting it on a NAS mount to which all datanodes are connected work?
Thanks | Python packages for Spark on datanodes | 0.379949 | 0 | 0 | 144 |
35,656,229 | 2016-02-26T15:56:00.000 | 3 | 0 | 1 | 0 | python,django | 35,656,339 | 1 | false | 1 | 0 | Well, I think this a lot based on your opinion.
Personally I would say it's a good idea to use a lot of third-party packages. They enable you to develop faster, and why reinvent the wheel?
Advantages:
faster development
DRY, don't reinvent the wheel
higher likelihood that the tools are time-tested and have bugs worked out (@erip)
Disadvantages:
support of third party packages could be dropped.
sometimes they don't fit your needs exactly
if open source license changes, you're suddenly without support or liable to a legal battle (@Sayse) | 1 | 0 | 0 | My organization uses django for our website so we have the opportunity to use pypi packages, but we don't seem to have used many in the past and developers have written there own solutions instead. I've always used lots of packages in my own project. Is there really any downside to using these packages? | Is it ok to rely on lots of third party packages? | 0.53705 | 0 | 0 | 96 |
35,657,332 | 2016-02-26T16:49:00.000 | 8 | 0 | 0 | 0 | python,django,gunicorn,django-manage.py | 35,657,627 | 2 | false | 1 | 0 | manage.py runserver is only a development server, it is not meant for production under any circumstance. You need to use something like Apache, uWSGI, NGINX or some other server to serve your django project once it's ready for deployment. | 2 | 33 | 0 | I have been running my beginner Django projects with manage.py runserver. I see suggestions to use gunicorn instead. What is the difference ? | Django: Difference between using server through manage.py and other servers like gunicorn etc. Which is better? | 1 | 0 | 0 | 10,890 |
35,657,332 | 2016-02-26T16:49:00.000 | 79 | 0 | 0 | 0 | python,django,gunicorn,django-manage.py | 35,660,663 | 2 | true | 1 | 0 | nginx and gunicorn are probably the most popular configuration for production deployments. Before detailing why gunicorn is recommended over runserver, let's quickly clarify the difference between nginx and gunicorn, because both state that they are web servers.
NGINX should be your entrance point to the public: it's the server listening on port 80 (http) and 443 (https). Its main purpose is handling HTTP requests, that is, applying redirects, HTTP auth if required, managing TLS/SSL certificates and - among other things - deciding where your request finally goes. E.g. there may be a node.js app living on localhost:3000 that awaits requests on /foo/api while gunicorn is waiting at localhost:8000 to serve your awesome app. This functionality of proxying incoming requests to so-called upstream services (in this case node.js and gunicorn) is called reverse proxying.
GUNICORN is a server that translates HTTP requests into Python. WSGI is the interface specification that defines this translation (e.g., the text parts of HTTP headers are transformed into key-value dicts).
Django's built-in development web server (what you get when you run manage.py runserver) provides that functionality also, but it targets a development environment (e.g., restart when the code changes), whereas Gunicorn targets production.
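That dict-based interface can be shown with a minimal WSGI application (names are illustrative; gunicorn, runserver, and any other WSGI server can all serve the same callable):

```python
def app(environ, start_response):
    # environ is the request expressed as a plain dict
    # (PATH_INFO, QUERY_STRING, HTTP_* headers, ...)
    body = ("Hello from " + environ.get("PATH_INFO", "/")).encode("utf-8")
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# gunicorn would serve this with:  gunicorn mymodule:app
# here we invoke the callable directly, the way a server would:
status_seen = []
response = app({"PATH_INFO": "/demo"},
               lambda status, headers: status_seen.append(status))
```

The server's job is exactly this plumbing: build `environ` from the socket, call the app, and write the returned bytes back to the client.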
Gunicorn has many features that Django's built-in server is lacking:
gunicorn can spawn multiple worker processes to parallelize incoming requests to multiple CPU cores
gunicorn has better logging
gunicorn is generally optimized for speed
gunicorn can be configured in fine-grained detail depending on your setup
gunicorn is actively designed and maintained with security in mind
There are web servers other than gunicorn, but gunicorn (inspired by ruby's unicorn) is very popular and easy to setup, and hence is not only a good starting point, but a professional solution that is used by large projects. | 2 | 33 | 0 | I have been running my beginner Django projects with manage.py runserver. I see suggestions to use gunicorn instead. What is the difference ? | Django: Difference between using server through manage.py and other servers like gunicorn etc. Which is better? | 1.2 | 0 | 0 | 10,890 |
35,658,436 | 2016-02-26T17:48:00.000 | 9 | 0 | 1 | 1 | python,python-2.7,homebrew,pyinstaller | 36,139,384 | 1 | true | 0 | 0 | The pyinstaller docs are poorly worded and you may be misunderstanding their meaning.
PyInstaller works with the default Python 2.7 provided with current
Mac OS X installations. However, if you plan to use a later version of
Python, or if you use any of the major packages such as PyQt, Numpy,
Matplotlib, Scipy, and the like, we strongly recommend that you
install THESE using either MacPorts or Homebrew.
It means to say "install later versions of Python as well as python packages with Homebrew", and not to say "install pyinstaller itself with homebrew". In that respect you are correct, there is no formula for pyinstaller on homebrew.
You can install pyinstaller with pip though: pip install pyinstaller or pip3 install pyinstaller. Then confirm the install with pyinstaller --version. | 1 | 1 | 0 | I am using python 2.7.0 and pygame 1.9.1, on OS X 10.10.5. The user guide for PyInstaller dictates that Mac users should use Homebrew, and I have it installed. I used it to install both Python and Pygame. But 'brew install PyInstaller' produces no formulae at all when typed into Terminal! So how can I use homebrew to install PyInstaller? This seems like it should be simple, and I'm sorry to bother you, but I have searched high and low with no result. | Using homebrew to install pyinstaller | 1.2 | 0 | 0 | 3,559 |
35,659,462 | 2016-02-26T18:50:00.000 | 4 | 0 | 1 | 1 | python,virtualenv | 35,659,549 | 1 | true | 0 | 0 | cron.nightly.py is not what you want. Module names passed to -m do not include the .py extension. Just as you wouldn't import math.py, you don't run python -m something.py. Change it to python -m cron.nightly
python -m cron.nightly.py
Everything runs fine, but after the last line completes, I get an error:
/Users/user/.virtualenvs/vrn/bin/python: No module named cron.nightly.py
Which is fine, except that because the script doesn't exit with a 0 (I think) every time it runs Jenkins marks the job as failed and so I can't tell when the code actually fails or not without looking at each individual console output, which is not ideal to say the least.
If someone could help me explain why I get this error (there's no other traceback) and how to fix it I would really appreciate it. | Python script with -m completes but errors out at very end | 1.2 | 0 | 0 | 25 |
35,661,419 | 2016-02-26T20:46:00.000 | 1 | 0 | 0 | 0 | python,opencv,frames | 35,663,902 | 2 | false | 0 | 0 | Are your objects filled, or just an outline?
In either case the approach I would take is to detect the vertices by finding the maximum gradient or just by the bounding box. The vertices will be on the bounding box. Once you have the vertices, you can say whether the object is a square or a rectangle just by finding the distances between the consecutive vertices. | 2 | 0 | 1 | I have a video consisting of different objects such as square, rectangle , triangle. I somehow need to detect and show only square objects. So in each frame, if there is a square, it is fine but if there is a triangle or rectangle then it should display it. I am using background subtraction and I am able to detect all the three objects and create a bounding box around them. But I am not able to figure out how to display only square object. | How to detect objects in a video opencv with Python? | 0.099668 | 0 | 0 | 948 |
35,661,419 | 2016-02-26T20:46:00.000 | 3 | 0 | 0 | 0 | python,opencv,frames | 35,788,514 | 2 | true | 0 | 0 | You can use the following algorithm:
-Perform Background subtraction, as you're doing currently
-enclose foreground in contours (using findContours(,,,) then drawContours(,,,) function)
-enclose obtained contours in bounding boxes (using boundingRect(,,,) function)
-if area of bounding box is approximately equal to that of enclosed contour, then the shape is a square or rectangle, not a triangle.
(A large part of the box enclosing a triangle will lie outside the triangle)
-if boundingBox height is approximately equal to its width, then it is a square. (access height and width by Rect.height and Rect.width) | 2 | 0 | 1 | I have a video consisting of different objects such as square, rectangle , triangle. I somehow need to detect and show only square objects. So in each frame, if there is a square, it is fine but if there is a triangle or rectangle then it should display it. I am using background subtraction and I am able to detect all the three objects and create a bounding box around them. But I am not able to figure out how to display only square object. | How to detect objects in a video opencv with Python? | 1.2 | 0 | 0 | 948 |
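The area-ratio test from the algorithm above can be sketched without OpenCV (on real frames you would use cv2.findContours, cv2.contourArea and cv2.boundingRect; plain vertex lists keep this sketch self-contained and runnable):

```python
def polygon_area(pts):
    # shoelace formula: the same quantity cv2.contourArea() computes
    s = 0.0
    n = len(pts)
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def classify(pts):
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    # a triangle fills only about half of its bounding box
    if polygon_area(pts) < 0.9 * (w * h):
        return "not rectangular"
    # contour fills the box: rectangle; near-equal sides: square
    return "square" if abs(w - h) <= 0.1 * max(w, h) else "rectangle"
```

The 0.9 and 0.1 thresholds are illustrative tolerances; tune them for noisy contours from background subtraction.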
35,662,365 | 2016-02-26T21:47:00.000 | 4 | 0 | 1 | 0 | python,string,unicode,encode,python-2.x | 35,662,457 | 3 | false | 0 | 0 | Python realizes that it can't do an encode on a str type, so it tries to decode it first! It uses the 'ascii' codec, which will fail if you have any characters with a codepoint above 0x7f.
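To make that implicit step visible: in Python 2, calling .encode() on a str first runs the equivalent of .decode('ascii'). Python 3 removed the implicit conversion, but the failing decode can be reproduced explicitly (a sketch in Python 3 syntax):

```python
# UTF-8 bytes for 'é': contains a codepoint above 0x7f
data = b'\xc3\xa9'

try:
    data.decode('ascii')      # the codec Python 2 applies behind the scenes
    error = None
except UnicodeDecodeError as exc:
    error = exc               # this is the "decode error during encode" people see
```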
This is why you sometimes see a decode error raised when you were trying to do an encode. | 1 | 7 | 0 | I got the point about unicode, encoding and decoding. But I don't understand why the encode function works on str type. I expected it to work only on unicode type.
Therefore my question is : what is the behavior of encode when it's used on a str rather than unicode ? | What happens when encode is used on str in python? | 0.26052 | 0 | 0 | 919 |
35,664,972 | 2016-02-27T02:28:00.000 | 1 | 0 | 1 | 0 | python,jupyter-notebook | 63,410,851 | 10 | false | 0 | 0 | list all magic command %lsmagic
show current directory %pwd | 3 | 67 | 0 | I couldn't find a place for me to change the working directory in Jupyter Notebook, so I couldn't use the pd.read_csv method to read in a specific csv document.
Is there any way to make it? FYI, I'm using Python3.5.1 currently.
Thanks! | How to change working directory in Jupyter Notebook? | 0.019997 | 0 | 0 | 183,147 |
35,664,972 | 2016-02-27T02:28:00.000 | 1 | 0 | 1 | 0 | python,jupyter-notebook | 60,910,519 | 10 | false | 0 | 0 | Open jupyter notebook click upper right corner new and select terminal then type cd + your desired working path and press enter this will change your dir. It worked for me | 3 | 67 | 0 | I couldn't find a place for me to change the working directory in Jupyter Notebook, so I couldn't use the pd.read_csv method to read in a specific csv document.
Is there any way to make it? FYI, I'm using Python3.5.1 currently.
Thanks! | How to change working directory in Jupyter Notebook? | 0.019997 | 0 | 0 | 183,147 |
35,664,972 | 2016-02-27T02:28:00.000 | 4 | 0 | 1 | 0 | python,jupyter-notebook | 61,349,704 | 10 | false | 0 | 0 | It's simple: every time you open Jupyter Notebook in your current work directory, open a Terminal from the New menu near the top right corner (the same place where you create a new Python file). The terminal in Jupyter will appear in a new tab.
Type command cd <your new work directory> and enter, and then type Jupyter Notebook in that terminal, a new Jupyter Notebook will appear in the new tab with your new work directory. | 3 | 67 | 0 | I couldn't find a place for me to change the working directory in Jupyter Notebook, so I couldn't use the pd.read_csv method to read in a specific csv document.
Is there any way to make it? FYI, I'm using Python3.5.1 currently.
Thanks! | How to change working directory in Jupyter Notebook? | 0.07983 | 0 | 0 | 183,147 |
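Besides the terminal and magic-command routes in the answers above, a plain code cell can change the working directory with the standard library (the target path below is just a placeholder for your project folder):

```python
import os
import tempfile

old_cwd = os.getcwd()
target = tempfile.gettempdir()      # stand-in for your project directory

os.chdir(target)                    # same effect as the %cd magic
new_cwd = os.getcwd()               # pd.read_csv would now resolve relative paths here

os.chdir(old_cwd)                   # restore, so the rest of the session is unaffected
```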
35,667,182 | 2016-02-27T07:50:00.000 | -2 | 0 | 1 | 1 | python,python-2.7,python-3.x,anaconda | 35,667,217 | 2 | false | 0 | 0 | You can try going to:
/Library/Python
and manually delete the versions you do not want. This isn't recommended though. | 1 | 4 | 0 | I made a mistake and installed many different versions of python on my linux machine. I installed all the versions of python with the help of anaconda. My default python version shows 2.7.11.
Now I want to remove all the versions of python and its dependencies from my linux system. What should I do? | Uninstall different versions of python | -0.197375 | 0 | 0 | 2,491 |
35,669,264 | 2016-02-27T11:42:00.000 | 0 | 0 | 0 | 0 | python,canvas,tkinter | 35,670,592 | 2 | false | 0 | 1 | myscrollbar = Scrollbar(myframe,orient="vertical",command=canvas.yview)
canvas.configure(yscrollcommand=myscrollbar.set) | 1 | 0 | 0 | I am currently trying to create a canvas which will have other objects on it and I want to use a scroll-bar for when the objects on the canvas take too much space so that the size of the window will stay the same.
The problem is that although I set an initial size for the canvas it changes to fit all of the objects on it. Can I make the canvas to keep a specific size so that I can use the scroll-bar instead? | tkinter fixed canvas size | 0 | 0 | 0 | 1,958 |
35,670,348 | 2016-02-27T13:24:00.000 | 0 | 0 | 0 | 0 | python,pandas,apply,next | 35,670,515 | 2 | false | 0 | 0 | While it's not the most "fancy" way - I would just use a numeric iterator and access lines i and i+1 | 1 | 1 | 1 | I have a df in pandas
import pandas as pd
df = pd.DataFrame(['AA', 'BB', 'CC'], columns = ['value'])
I want to iterate over rows in df. For each row i want rows value and next rows value.
Here is the desired result.
0 1 AA BB
1 2 BB CC
I have tried a pairwise() function with itertools.
from itertools import tee, izip
def pairwise(iterable):
"s -> (s0,s1), (s1,s2), (s2, s3), ..."
a, b = tee(iterable)
next(b, None)
return izip(a, b)
import pandas as pd
df = pd.DataFrame(['AA', 'BB', 'CC'], columns = ['value'])
for (i1, row1), (i2, row2) in pairwise(df.iterrows()):
print i1, i2, row1["value"], row2["value"]
But, its too slow. Any idea how to achieve the output with iterrows ? I would like to try pd.apply for a large dataset. | python pandas ... alternatives to iterrows in pandas to get next rows value (NEW) | 0 | 0 | 0 | 908 |
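A vectorised alternative (assuming pandas is available, as in the question) avoids iterrows entirely by pairing each row with the next via shift:

```python
import pandas as pd

df = pd.DataFrame(['AA', 'BB', 'CC'], columns=['value'])

# shift(-1) aligns each row with its successor; the last row gets NaN
df['next_value'] = df['value'].shift(-1)
pairs = df.dropna(subset=['next_value'])       # drop the row with no successor
result = list(zip(pairs['value'], pairs['next_value']))
```

Because shift works on the whole column at once, this scales to large DataFrames far better than a Python-level pairwise loop.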
35,672,982 | 2016-02-27T17:27:00.000 | 3 | 0 | 0 | 0 | python,pycharm,sublimetext3 | 35,673,049 | 1 | true | 0 | 0 | When the cursor is in a line, pressing Ctrl+C will select and copy that line's text into the buffer (I am on OS X). Please give more details regarding your OS, PyCharm version, etc. if it doesn't work for you.
If you have Cmd+C (OS X), you can set up PyCharm to copy with Ctrl+C by adjusting the PyCharm keymap. To do that, perform these steps:
Go to PyCharm Preferences
Open Keymap Settings
Find the Copy shortcut in the shortcuts list
Double click on it
Choose the "Add Keyboard Shortcut" option
Change it to Ctrl+C
Save and apply the changes
You are done. | 1 | 1 | 0 | Is there a way to make PyCharm use Ctrl/C to copy the line where the cursor is if no text is selected (which is what Sublime Text does out of the box). It is really handy feature. | Select line with Ctrl/C in PyCharm like Sublime Text does | 1.2 | 0 | 0 | 1,108 |
35,677,767 | 2016-02-28T01:47:00.000 | 0 | 0 | 1 | 0 | python,matplotlib,plot,figure | 35,677,835 | 3 | false | 0 | 0 | pyplot is the MATLAB-like API, for those who are familiar with MATLAB and want to make quick-and-dirty plots
figure is the object-oriented API, for those who don't care about MATLAB-style plotting
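The two styles side by side (a sketch; the Agg backend is chosen only so it runs without a display):

```python
import matplotlib
matplotlib.use("Agg")               # headless backend, no window required
import matplotlib.pyplot as plt

# pyplot (MATLAB-style) interface: implicit state is kept for you
plt.figure()
plt.plot([1, 2, 3], [1, 4, 9])

# object-oriented interface: you hold the figure and axes explicitly
fig, ax = plt.subplots()
line, = ax.plot([1, 2, 3], [1, 4, 9])
ax.set_title("same plot, explicit objects")
```

Holding `fig` and `ax` explicitly is what makes multi-panel figures and reusable plotting functions manageable, which is why the OO style is usually recommended for anything beyond quick exploration.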
So you can use either one but perhaps not both together. | 1 | 39 | 1 | I'm not really new to matplotlib and I'm deeply ashamed to admit I have always used it as a tool for getting a solution as quick and easy as possible. So I know how to get basic plots, subplots and stuff and have quite a few code which gets reused from time to time...but I have no "deep(er) knowledge" of matplotlib.
Recently I thought I should change this and work myself through some tutorials. However, I am still confused about matplotlibs plt, fig(ure) and ax(arr). What is really the difference?
In most cases, for some "quick'n'dirty' plotting I see people using just pyplot as plt and directly plot with plt.plot. Since I am having multiple stuff to plot quite often, I frequently use f, axarr = plt.subplots()...but most times you see only code putting data into the axarr and ignoring the figure f.
So, my question is: what is a clean way to work with matplotlib? When to use plt only, what is or what should a figure be used for? Should subplots just containing data? Or is it valid and good practice to everything like styling, clearing a plot, ..., inside of subplots?
I hope this is not to wide-ranging. Basically I am asking for some advice for the true purposes of plt <-> fig <-> ax(arr) (and when/how to use them properly).
Tutorials would also be welcome. The matplotlib documentation is rather confusing to me. When one searches something really specific, like rescaling a legend, different plot markers and colors and so on the official documentation is really precise but rather general information is not that good in my opinion. Too much different examples, no real explanations of the purposes...looks more or less like a big listing of all possible API methods and arguments. | Understanding matplotlib: plt, figure, ax(arr)? | 0 | 0 | 0 | 8,825 |
35,679,136 | 2016-02-28T05:36:00.000 | 0 | 0 | 1 | 0 | python | 35,679,176 | 5 | false | 0 | 0 | Encapsulate both in a function or class so that reads and writes go through it, thus assuring the behavior you require | 1 | 4 | 0 | I have two integers x and y that are equal to each other. Whenever I change x I want the change to also be reflected in y. How can I do that with Python?
I think this can be achieved somehow by pointing both variables to the 'memory location' where the value is stored. | Python - when one integer variable is changed, make the second variable the same | 0 | 0 | 0 | 156 |
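Plain ints are immutable, so rebinding x can never affect y; one concrete form of the encapsulation suggested above is a small mutable holder that both names share:

```python
class Shared:
    """Mutable holder so two names can observe the same value."""
    def __init__(self, value):
        self.value = value

x = Shared(10)
y = x              # y is the very same object, not a copy
x.value = 42       # a change through x ...
```

After the last line, `y.value` is 42 as well, because both names point at the same object, which is the "same memory location" behaviour the question asks for.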
35,681,330 | 2016-02-28T10:26:00.000 | 0 | 0 | 0 | 0 | python,validation,amazon-web-services,amazon-s3,boto3 | 36,065,806 | 1 | false | 0 | 0 | If you don't want the token credentials to become a show-stopper before a massive batch run, you can try executing a small put_object() to test whether access is denied.
Because many different roles can be specified inside IAM policies and ACLs, this is the only feasible way to validate the access. | 1 | 1 | 0 | I'm using amazon s3-boto3 with python to put some files at amazon s3 bucket
Before running the code I want to validate the
access token, secret, region and bucket,
to ensure all the data are correct without waiting for an exception when executing my code.
I checked the documentation and searched Google, with no luck.
any help ? | Amazon S3 validating access token , secret , region and bucket | 0 | 0 | 1 | 161 |
35,682,844 | 2016-02-28T13:01:00.000 | 0 | 0 | 1 | 0 | python,thread-safety,github3.py | 35,702,676 | 2 | true | 1 | 0 | To give you a more thorough answer, Aviv: since you're simply sharing instances and calling methods, it is absolutely thread-safe. The questions around requests' thread-safety are mostly about cookies, their expiration, and their revocation. Cookies aren't used by github3.py to talk to the GitHub API, so you should be fine. | 1 | 0 | 0 | Does anyone know if github3py is thread-safe?
Specifically:
GitHub.repository()
Repository.iter_pulls()
Repository.branch()
Repository.create_status()
None of the threads edit the objects, just share the instances and call the methods.
Thanks | Is github3py thread-safe? | 1.2 | 0 | 0 | 77 |
35,685,734 | 2016-02-28T17:23:00.000 | 0 | 0 | 1 | 0 | python,user-interface,python-3.x | 35,685,792 | 1 | false | 0 | 1 | Since your question mentions .exe executables, I'wll assume you work in the Windows environment. Try using a .pywextension instead of a .py extension for the python program. | 1 | 0 | 0 | So recently I made a script, and I also finished gui and managed to merge those two together. Now i wish when I start the exe file that cmd doesn't appear but instead only GUI? Any idea on how to manage this? So far my searching didn't yield any satisfying results. Some more info is: Python 3.5, using pyinstaller to convert to exe, Tkinter Gui, pycharm 5.0.1. Thanks! | How do I stop cmd from appearing when i run exe file (python 3, gui) | 0 | 0 | 0 | 89 |
35,686,522 | 2016-02-28T18:31:00.000 | 0 | 0 | 1 | 1 | python,cygwin,pip | 39,111,999 | 1 | false | 0 | 0 | Likely you don't need the python -m part. If pip is in your path, then just typing pip install --upgrade pip should work. Where is pip installed? which pip will tell you where it's located, if it's in your path | 1 | 1 | 0 | In cygwin I can't upgrade pip, it worked find in cmd:
$ python -m pip install --upgrade pip
/usr/bin/python: No module named pip | Cannot upgrade pip in cygwing running on Windows 10: /usr/bin/python: No module named pip | 0 | 0 | 0 | 136 |
35,686,627 | 2016-02-28T18:38:00.000 | 1 | 0 | 1 | 0 | javascript,python,mongodb,special-characters | 35,686,982 | 1 | false | 0 | 0 | Have you tried var myEncodedString = encodeURIComponent('your string or whatever'); in your js code | 1 | 2 | 0 | I'm having a lot of problems with properly processing certain special characters in my application.
Here's what I'm doing at the moment:
User enters a location, data is retrieved via the Google Geolocation API (full name, lat and lon) and is sent via ajax (as a JSON string) to a python script
The python script parses the JSON string, reads the parameters and executes a http request to an API, which runs on nodejs
Data is inserted into MongoDB
The problem is when there's a special character in the location name. The original location name I'm testing on has the character è. When the json is parsed in Python I get a \xc9 and after the process completes I always end up with a completely different (mostly invalid) character in the database than what was originally entered. I've tried all sorts of encoding and decoding (mostly from similar questions on stackoverflow), but I don't properly understand what exactly I should do.
Any tips would be appreciated. Thanks! | encoding special characters in javascript/python | 0.197375 | 0 | 1 | 290 |
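On the Python side, one robust pattern (a sketch, not necessarily the poster's exact pipeline) is to parse the request body strictly as JSON, whose \uXXXX escapes decode to real unicode text, rather than doing ad-hoc string handling:

```python
import json

# what the browser sends after JSON.stringify (è escaped as \u00e8);
# the payload literal below uses a doubled backslash so Python keeps
# the JSON escape intact for json.loads to decode
payload = '{"name": "Gen\\u00e8ve"}'

data = json.loads(payload)   # -> {'name': 'Genève'}, proper unicode text
```

Keeping the data as JSON end to end (JS encodeURIComponent/JSON.stringify, Python json, MongoDB's UTF-8 storage) avoids the mojibake that appears when one hop re-encodes bytes with the wrong codec.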
35,688,599 | 2016-02-28T21:37:00.000 | 0 | 1 | 0 | 1 | python,shell,cron | 35,688,902 | 1 | false | 0 | 0 | Difficult to answer without more colour on your environment.
Here's how to solve this though: do not redirect your output to /dev/null. Then read in your cron log what happened. It seems very likely that your script fails, and therefore does not return anything to standard out, so does not create a file.
I highly suspect it is because you are using a python module, or python version or python path that is loaded in your bashrc. Crontab does not execute your bashrc, it's an independent environment, so you cannot assume that a script that runs correctly when you manually launch will work in your cron.
Try sourcing your bashrc in your cron task, and it's very likely to solve your problem. | 1 | 1 | 0 | I have this .sh which starts a python file. This python file generates a .txt when started via the commandline with sudo but doesn't when started via the .sh
Why doesn't the python file give me a .txt when started via cron and the .sh?
When I use su -c "python /var/www/html/readdht.py > /var/www/html/dhtdata.txt" 2>&1 >/dev/null, .sh gives me output, but omits the newlines, so I get one big string.
The python file creates a .txt correctly when started from the commandline with sudo python readdht.py.
If the python file is started by the .sh with su -c "python /var/www/html/readdht.py", no .txt is created
What's going on? | .sh started by cron does not create file via python | 0 | 0 | 0 | 43 |
35,690,072 | 2016-02-29T00:19:00.000 | 0 | 0 | 1 | 1 | python,zip,archive | 35,690,298 | 4 | false | 0 | 0 | I got the answer. It is that we can use two commands: archive.getall_members() and archive.getfile_members().
We iterate over each of them and store the names in two arrays: a1 (contains file and folder names) and a2 (contains file names only). If both arrays contain an element, it is a file; otherwise it is a folder.
os.path.isdir and os.path.isfile do not work because I am working on archive. The archive can be anyone of tar,bz2,zip or tar.gz(so I cannot use their specific libraries). Plus, the code should work on any platform like linux or windows. Can anybody help me how to do it? | How to check if it is a file or folder for an archive in python? | 0 | 0 | 0 | 20,515 |
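The getall_members()/getfile_members() calls in the self-answer come from whichever archive library the poster used; with the standard library the distinction is direct: zipfile's ZipInfo.is_dir() (Python 3.6+) and tarfile's TarInfo.isdir()/isfile(). A zip sketch:

```python
import io
import zipfile

# build a small zip in memory with one directory entry and one file
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("docs/", "")               # names ending in '/' are directories
    zf.writestr("docs/readme.txt", "hi")

# inspect the archive without extracting anything
with zipfile.ZipFile(buf) as zf:
    kinds = {info.filename: ("dir" if info.is_dir() else "file")
             for info in zf.infolist()}
```

The tarfile equivalent iterates `tar.getmembers()` and checks `member.isdir()` / `member.isfile()`; both work on Linux and Windows.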
35,690,489 | 2016-02-29T01:17:00.000 | 1 | 0 | 0 | 0 | python,django,list,model | 35,691,578 | 1 | false | 1 | 0 | I think the best way to organize your 'SysApp - documents' relationship, assuming that each document is related to only one sysapp, is to use ForeignKey, as you mentioned.
In that case you'll only have to create 2 models: the first one is SysApp with a name field and the second is Document with fields title, url to file, description and a foreignkey to SysApp. Now you can create documents and attach them to the sys you want. So you do not need to specify document_2, document_3 etc. fields.
If you need to attach one document to more than one sysapp use ManyToMany instead of ForeignKey. | 1 | 0 | 0 | Let's say I have a model called 'SysApp'. Each system has 5 documents. Each document has fields:
Title
URL to the file (external url)
Description
Rather than defining multiple fields like
title_1,
url_1,
description_1,
title_2,
url_2,
description_2
(Hardcoded approach)
is there a better way to handle this type of use case?
One way of doing is to create a model storing each document and then SysApp will reference each document using a ForeignKey. However I still have to create field like document_1, document_2 etc. Also it would be quite difficult for editors to manage when there are 100+ SysApp and 3-400+ documents.
Is it possible to manage these fields like a list or dictionary?
Thank you | What is the best approach to design model fields that is like a list in Django? | 0.197375 | 0 | 0 | 33 |
35,692,719 | 2016-02-29T05:44:00.000 | 0 | 0 | 1 | 0 | python,mongodb,python-2.7,pymongo | 70,957,482 | 9 | false | 0 | 0 | len(list(cursor.clone()))
worked really well for me; it does not consume the cursor, so it can still be used with your variable afterwards | 2 | 29 | 0 | I'm looking for a feasible way to get the length of a cursor from MongoDB.
35,692,719 | 2016-02-29T05:44:00.000 | 2 | 0 | 1 | 0 | python,mongodb,python-2.7,pymongo | 35,693,917 | 9 | false | 0 | 0 | I find that using cursor.iter().count() is a feasible way to resolve this problem | 2 | 29 | 0 | I'm looking for a feasible way to get the length of cursor got from MongoDB. | How to get the length of a cursor from mongodb using python? | 0.044415 | 0 | 0 | 40,200 |
35,692,925 | 2016-02-29T06:03:00.000 | 1 | 0 | 1 | 0 | python,string,list,integer,literate-programming | 35,693,035 | 3 | false | 0 | 0 | When you save to the file, you'll want to do so cleanly-- make sure that you write in a consistent format.
When you read from the file, I suggest using literal_eval from the ast package. This will turn any strings contained in the file into Python types. | 1 | 0 | 0 | I am developing a Twitch Python IRC Bot that has a currency system. The balances of users are stored in a list, which is loaded from and saved to a text file. However, because the text file's content is loading into one singular string that is then placed inside the list, I cannot iterate through each entry in the string to add points to it. For example, here is the iterable list I'm looking for: [10, 21, 42, 5]
But, when this is saved to a text file and then loaded into the list, it turns out like this: ['10, 21, 42, 5', '0' (with a 0 entry added on). As you can see, this would iterate through the list with only recognizing 2 sections, which is where the quotations surround. Is there a way I can convert the string fetched from the text file into a list of integers that I can iterate through? Thanks in advance! | Python - Convert String into Iterable Integer List | 0.066568 | 0 | 0 | 6,049 |
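The round trip the literal_eval answer describes can be sketched as follows (assuming the file stores the list's repr, e.g. written with str(balances)):

```python
from ast import literal_eval

raw = "[10, 21, 42, 5]"        # the line as read back from the text file
balances = literal_eval(raw)   # a real list of ints (safe, unlike eval)

# every entry is now iterable and updatable, e.g. award one point each:
balances = [b + 1 for b in balances]
```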
35,693,644 | 2016-02-29T07:01:00.000 | 0 | 0 | 0 | 0 | python,django,django-models,multi-tenant | 35,694,308 | 1 | false | 1 | 0 | You need to look at foreign-key-based fields, in particular the many-to-many field. You can then use a many-to-many relation through a role object which captures information about roles.
See the django docs for excellent examples. | 1 | 1 | 0 | I have a SaaS application which needs a main user (like the owner of the business who would use the SaaS) to be the admin of that particular tenancy. The main user then needs to have multiple sub-users (like one user looking after sales, another after purchase, etc.).
Now my question is: single-level tenancy is possible in Django, but how can I implement the second (sub-user) level?
Any help will be highly appreciated. | Tenancy and sub-tenancy in Django | 0 | 0 | 0 | 50 |
35,694,595 | 2016-02-29T08:09:00.000 | 1 | 0 | 0 | 0 | java,python,algorithm,matrix | 35,694,812 | 1 | false | 0 | 0 | From your comment, I understand your question to be a matter of understanding the task description.
You are to sample data from one stream for one minute, then wait two minutes, then select another stream for sampling and repeat, for one hour. | 1 | 0 | 0 | I have an interview task to do program. But, i am confused to understand the question in right manner.
Write the program to monitor call quality metrics for a duration of 1 min every 3 mins over a 1 hour (60 mins) time frame. i.e. Monitor 20 calls of 1 min duration each.
Your program should log data for the number of dropouts, clicks in audio and other relevant call quality metrics.
Implement your solution using Python. You may use any other supporting technologies in your solution.
What exactly does the interviewer want the program to do? | Write a program to monitor Call Quality Metrics for a particular time frame | 0.197375 | 0 | 0 | 62
35,697,527 | 2016-02-29T10:43:00.000 | 2 | 0 | 1 | 0 | python-2.7,python-3.x,pip | 35,698,580 | 1 | true | 0 | 0 | You can use $pip2 install [package name] as it says in the comments but down the road your life will be much easier if you use virtual environments to compartmentalize your code. That way you specify which python version to use in your project only once (at the beginning) and then you can configure pip to always install packages for that version. | 1 | 1 | 0 | Now
pip -V
shows that it's python3.5's pip.
What to do if I want python2's package? | how to install python2 packages via pip when I have both python2 and python3? | 1.2 | 0 | 0 | 1,902 |
35,702,619 | 2016-02-29T14:54:00.000 | 2 | 0 | 0 | 0 | python,wsgi | 35,702,780 | 1 | false | 0 | 0 | WSGI is completely unconcerned with IP versions; it is a specification for communication between a webserver and Python code. It is up to your server - Apache, nginx, gunicorn, uwsgi, whatever - to listen to the port. | 1 | 1 | 0 | I am using python WSGI module for servicing IPv4 http requests. Does WGSI support IPv6 requests? Can it listen to IPv6 IP port combination? | Does python WSGI supports IPv6? | 0.379949 | 0 | 1 | 394 |
35,705,211 | 2016-02-29T17:01:00.000 | 3 | 0 | 0 | 0 | python,database,postgresql,sqlalchemy,flask-sqlalchemy | 35,707,179 | 2 | false | 1 | 0 | Just the message "killed" appearing in the terminal window usually means the kernel was running out of memory and killed the process as an emergency measure.
Most libraries which connect to PostgreSQL will read the entire result set into memory, by default. But some libraries have a way to tell it to process the results row by row, so they aren't all read into memory at once. I don't know if flask has this option or not.
Perhaps your local machine has more available RAM than the server does (or fewer demands on the RAM it does have), or perhaps your local machine is configured to read from the database row by row rather than all at once. | 1 | 1 | 0 | I am using a postgres database with sql-alchemy and flask. I have a couple of jobs which I have to run through the entire database to updates entries. When I do this on my local machine I get a very different behavior compared to the server.
E.g. there seems to be an upper limit on how many entries I can get from the database?
On my local machine I just query all elements, while on the server I have to query 2000 entries step by step.
If I have too many entries the server gives me the message 'Killed'.
I would like to know
1. Who is killing my jobs (sqlalchemy, postgres)?
2. Since this does seem to behave differently on my local machine there must be a way to control this. Where would that be?
thanks
carl | postgres database: When does a job get killed | 0.291313 | 1 | 0 | 2,227 |
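One way to avoid reading everything at once is keyset pagination: fetch one page, process it, then ask for the rows past the last seen id. A minimal pure-Python sketch of the loop; the in-memory TABLE and fetch_page are hypothetical stand-ins for a real query such as SELECT * FROM entries WHERE id > :last_id ORDER BY id LIMIT :size:

```python
# Hypothetical stand-in for a table ordered by primary key.
TABLE = [{"id": i, "value": i * 2} for i in range(1, 10001)]

def fetch_page(last_id, size):
    # With a real database this would be a single LIMITed SELECT.
    rows = [r for r in TABLE if r["id"] > last_id]
    return rows[:size]

def iter_entries(page_size=2000):
    """Yield rows one at a time while only holding one page in memory."""
    last_id = 0
    while True:
        page = fetch_page(last_id, page_size)
        if not page:
            return
        for row in page:
            yield row
        last_id = page[-1]["id"]

processed = sum(1 for _ in iter_entries())
print(processed)  # 10000
```

With this pattern the process's memory footprint is bounded by the page size instead of the table size, which is what keeps the OOM killer away.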
35,711,344 | 2016-02-29T22:51:00.000 | 0 | 0 | 0 | 0 | python,charts,flask,bokeh | 38,001,749 | 1 | false | 1 | 0 | I also face this problem. It turns out Bokeh stores data in its curdoc across requests.
I fix this by manually clearing the Bokeh document every time at the end of each request with: curdoc().clear() | 1 | 1 | 0 | I am running a flask application which makes calls to a bokeh api for generating charts to be rendered in html. The first time I generate the chart, it takes about 0.07s. The second time, about 0.14s. The third time about 0.21s, and so on. I must be doing something wrong. Wondering if anyone has any thoughts on how to fix this. Thank You. Neela. | Calling bokeh api from python / flask for charts -- becoming slower with each call | 0 | 0 | 0 | 125
35,712,731 | 2016-03-01T00:53:00.000 | 1 | 0 | 1 | 0 | python,neural-network,neuroscience,biological-neural-network,neuron-simulator | 52,202,141 | 2 | true | 0 | 0 | If you are using the NEURON GUI, you can also find section properties in NEURON's control menu:
Tools-> Model View
This will open a ModelView window with section and segment details such as:
Type of cells: real cells/ artificial cells
NetCon objects
LinearMechanisms objects
Density Mechanisms
Point Processes
If you click on each property, a drop-down menu appears showing the details of the property selected
You could also view the structure of the model if you click on the cell type (real/artificial cells) | 1 | 2 | 0 | In NEURON simulator, is there an easier way to list properties of a section other than iterating over each property individually? | Easy way to list NEURON section properties/information? | 1.2 | 0 | 0 | 219 |
35,712,823 | 2016-03-01T01:02:00.000 | 0 | 0 | 0 | 0 | python,django | 35,730,732 | 1 | false | 1 | 0 | As Peter was saying, it is definitely a bad idea. If you must do it, then the approach below might be a little better:
Create a view that renders a form where the user can choose the task name and class name, plus a text area to write the function or task (or an entirely new tasks.py itself, as per your requirement).
Once submitted, check the code for PEP 8 formatting in the backend, copy the new file into a spare copy of your entire source tree kept on the server, and run python manage.py shell there to do a basic sanity check of the code.
Restrict access to this form based on the users model.
Again, as Peter was saying, this is a totally spectacular security hole. | 1 | 0 | 0 | I have some functions as tasks in my tasks.py file in Django, and I want to be able to edit the code of each task in my administration panel. Is there any way of doing this? If possible, I would also like to be able to add more tasks to my tasks.py file directly through the administration panel, without having to go into the tasks.py file to add a new task function. If anyone can point me in the right direction, that would be really appreciated. | How to make django task.py code editable in admin panel? | 0 | 0 | 0 | 53
35,714,894 | 2016-03-01T04:51:00.000 | 0 | 1 | 0 | 0 | python,json,tweepy | 35,715,292 | 1 | false | 0 | 0 | The _json attribute started working once I upgraded my tweepy version to 3.5.0. | 1 | 0 | 0 | I'm trying to build a Twitter crawler that would crawl all the tweets of a specified user and save them in json format. While trying to convert the Status object into json format using the _json attribute of Status, I'm getting the following error:
AttributeError : 'Status' object has no attribute '_json'
Can anyone please help me with this? | Tweepy : AttributeError : Status object has no attribute _json | 0 | 0 | 1 | 1,607 |
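The idea of serializing via _json can be sketched as follows; the Status class here is a hypothetical stand-in for tweepy's (recent tweepy versions expose the raw API payload on _json), just to show a getattr guard you can use while older tweepy versions are still around:

```python
import json

class Status:
    """Hypothetical stand-in for tweepy's Status: recent tweepy versions
    expose the raw Twitter API payload on the _json attribute."""
    def __init__(self, payload):
        self._json = payload

def status_to_json(status):
    payload = getattr(status, "_json", None)  # guard for older tweepy
    if payload is None:
        raise AttributeError("no _json attribute; upgrade tweepy (>= 3.5.0)")
    return json.dumps(payload)

tweet = Status({"id": 1, "text": "hello"})
print(status_to_json(tweet))  # {"id": 1, "text": "hello"}
```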
35,715,313 | 2016-03-01T05:27:00.000 | 0 | 0 | 0 | 0 | python,c++,opencv,visualization,dicom | 35,716,718 | 1 | true | 0 | 1 | I would say go with VTK, ITK and Qt. Python brings nothing to the table in terms of GUI compared with Qt. VTK/ITK can read DICOM for you and probably also do segmentation and registration as needed. | 1 | 1 | 0 | I have a problem statement: to develop an application for processing DICOM images for diagnostic applications from scratch. It includes:
Image processing- model based segmentation and registration (C++)
Visualization- 2D and 3D image visualization (C++)
Graphical User Interface (Python)
Is it feasible to develop this using C++ (for large chunks of image data) and Python for the GUI?
What are the pros and cons of using OpenCV?
Which libraries would be suitable for this? | DICOM image processing, visualization and GUI- C++ and Python | 1.2 | 0 | 0 | 803 |
35,716,086 | 2016-03-01T06:28:00.000 | 1 | 1 | 0 | 0 | java,android,python | 59,486,412 | 1 | true | 1 | 0 | Running a Python script inside an Android app is not practical at the moment, but what you can do is create an HTTP web service that interprets the Python and sends the results back to the Android application.
Then it's just the Android app communicating with an HTTP web service, which is simpler than packing in an interpreter.
This also keeps the app lighter.
The main app will be Java but some cryptography should be done in "python"
Is it possible to do this? | Run python script inside java on android | 1.2 | 0 | 0 | 3,858 |
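A minimal sketch of that web-service approach, using only the Python stdlib. The "reverse the text" handler is a hypothetical placeholder for the real cryptography code; the Java side would issue the same POST with HttpURLConnection or OkHttp:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class CryptoHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length) or b"{}")
        # Placeholder "cryptography": reverse the text. The real service
        # would call into your Python crypto code here.
        payload = json.dumps({"result": body.get("text", "")[::-1]}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), CryptoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The Android app would issue the equivalent POST from Java:
url = "http://127.0.0.1:%d/" % server.server_address[1]
req = urllib.request.Request(url,
                             data=json.dumps({"text": "secret"}).encode(),
                             headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req) as resp:
    answer = json.loads(resp.read())
print(answer)  # {'result': 'terces'}
server.shutdown()
```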
35,716,642 | 2016-03-01T07:04:00.000 | 1 | 0 | 1 | 0 | python,mongodb | 35,765,568 | 2 | true | 0 | 0 | I have done something like this, replacing the dot:
'To': 'test@gmail(dot)com'
post_id = posts.insert_one({msg["To"]:a}
Now here, the "To" consists of an email address (which contains a dot). I researched a few documents online and learned that the "To" of a mail cannot be used as a key, because MongoDB uses "." and "$" for nested-document access.
So now how can I proceed? | How to set a generated key in mongodb using python? | 1.2 | 1 | 0 | 77 |
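That substitution can be automated with a small helper; the (dot)/(dollar) placeholders are just one possible convention, chosen so the original key can be recovered later:

```python
def sanitize_key(key, dot="(dot)", dollar="(dollar)"):
    """Make a string safe to use as a MongoDB document key.

    MongoDB reserves '.' (nested-field access) and a leading '$'
    (operators), so encode them with reversible placeholders."""
    key = key.replace(".", dot)
    if key.startswith("$"):
        key = dollar + key[1:]
    return key

def restore_key(key, dot="(dot)", dollar="(dollar)"):
    """Invert sanitize_key to get the original key back when reading."""
    if key.startswith(dollar):
        key = "$" + key[len(dollar):]
    return key.replace(dot, ".")

email = "test@gmail.com"
safe = sanitize_key(email)
print(safe)                         # test@gmail(dot)com
print(restore_key(safe) == email)   # True
```

You would then insert with posts.insert_one({sanitize_key(msg["To"]): a}) and run restore_key over the keys when reading documents back.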
35,720,890 | 2016-03-01T10:48:00.000 | 0 | 0 | 1 | 0 | python,matplotlib,virtualenv | 35,730,153 | 1 | false | 0 | 0 | I don't think you're mixing things:
Jupyter (previously known as IPython) can tell that the object has a representation (whether it's a string, a float, an image, a js script...) after calling plt.plot(). This representation is embedded into an html page, which can be displayed in a browser. Do mind that all this happens without using DISPLAY
In that virtual machine, are you using a graphical interface ("desktop") or a bare-bones terminal? If the former, DISPLAY should be set. If the latter, then you cannot display on a bare-bones terminal, and should use other resources to forward the display or output your image as a file.
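A small sketch of the resulting decision, under the usual matplotlib conventions (TkAgg needs a reachable X display, Agg only writes files); the backend names are common defaults, so adjust them to taste:

```python
import os

def pick_backend(environ=None):
    """Return a matplotlib backend name suited to the environment:
    the interactive TkAgg backend when an X display is reachable,
    the file-only Agg backend otherwise (headless VM, ssh session
    without X forwarding, CI, ...)."""
    env = os.environ if environ is None else environ
    if os.name != "nt" and not env.get("DISPLAY"):
        return "Agg"
    return "TkAgg"

backend = pick_backend()
print(backend)
# With matplotlib installed, apply the choice *before* importing pyplot:
#   import matplotlib
#   matplotlib.use(backend)
#   import matplotlib.pyplot as plt
#   plt.plot([1, 2, 3]); plt.savefig("out.png")
```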
35,721,894 | 2016-03-01T11:38:00.000 | 0 | 0 | 0 | 0 | python,amazon,amazon-product-api | 35,895,419 | 1 | true | 0 | 0 | The only answer I've found so far, is manually choosing BrowseNode and SearchIndex | 1 | 0 | 0 | At the moment I'm writing on a program to fill a database with product information. I've got a list of categories, from which i need the 20 best products from Amazon. Since i only got a keyword list of them and no BrowseNodes, I can't search in other SearchIndexes then "All" with the Amazon API. This leads to alot of false Products.
For example the categorie "tennis bat" gives me about 12 real tennis bat's and 8 other things for tennis, for example a damper, tennis bags or sometimes things completly unrelated.
I've also found no reliable way of getting the BrowseNode of a categorie.
I've tried to search for Items with the Keyword and getting the BrowseNode of the first Item, but not only does those products have many different BrowseNodes, but sometimes they dont relate to the keyword at all.
I also can't search for them manually, because the list of categories is over 700 by now.
Would love any input on my problem. | Get best products of an Amazon categorie | 1.2 | 0 | 1 | 93 |
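Short of finding the right BrowseNode, one stopgap is to post-filter the "All" results by the keyword itself. A rough heuristic sketch (the item dicts are made up; real ItemSearch responses would need their titles extracted first, and substring matching will still miss synonyms and overmatch words like "bats" vs "battery"):

```python
def relevant(items, keyword):
    """Keep only results whose title mentions every token of the keyword."""
    tokens = keyword.lower().split()
    return [i for i in items if all(t in i["title"].lower() for t in tokens)]

items = [
    {"title": "Wilson Tennis Bat Pro"},
    {"title": "Vibration Damper"},
    {"title": "Tennis Bag XL"},
]
print(relevant(items, "tennis bat"))  # keeps only the first item
```

To still end up with 20 products per category you would over-fetch (say, 60 results) and keep the first 20 that pass the filter.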
35,725,478 | 2016-03-01T14:25:00.000 | 8 | 0 | 1 | 0 | python,elixir,data-science | 47,420,729 | 2 | false | 0 | 0 | I'm an advocate for using the right tool for the job. There are typically two requirements to do data science:
Libraries (because you don't want to reinvent the wheel at every corner)
Performance (particularly if dealing with large amounts of data)
Python and R are the right tools. They offer the largest number of high-quality libraries, and though slow on their own, they perform well thanks to libraries written and optimized in fast languages like C and Fortran.
Some like alternatives like Julia and Scala. These are faster languages on their own and have a decent amount of libraries, though you might still run into some situations where suitable libraries are available in Python or R, but not Julia or Scala.
With languages like Elixir, you're for the most part on your own. The number of data-science-specific libraries is limited, and the Elixir community - though wonderful - is mostly focused on distributed computing and web development, so don't count on lots of support there.
In short, can you? Technically yes, and there is no harm in experimenting, but you're making your life significantly harder.
Keep also in mind that, contrary to popular belief, Elixir is not a fast language when it comes to single-thread performance. Depending on the task at hand, you'll find that Ruby is just as fast or even faster in some instances.
Don't get me wrong, Elixir is a great language and it's amazing at what it does best, it's just that it's not the kind of language I'd reach out to first for mathematical computations. | 1 | 2 | 0 | I recently started playing with Elixir and some patterns remind me of Python, which is widely used in data science projects. For example list comprehensions or anonymous functions.
Considering the high performance of Elixir and the ability to run multiple processes and deal with asynchronous tasks, it seems to me to be a very good fit for data science projects.
Am I missing a point? Does somebody have experience with this? | Elixir for Data Science | 1 | 0 | 0 | 3,241 |
35,726,924 | 2016-03-01T15:31:00.000 | 4 | 0 | 0 | 0 | python,ckan | 35,729,219 | 1 | true | 1 | 0 | Backup CKAN's databases (the main one and Datastore one if you use it) with pg_dump. If you use Filestore then you need to take a backup copy of the files in the directory specified by ckan.storage_path (default is /var/lib/ckan/default)
Restore the database backups (after doing createdb) using psql -f. Then run paster db upgrade just in case it was from an older ckan version. Then paster --plugin=ckan search-index rebuild. In an emergency use rebuild_fast instead of rebuild, but I think it might create some duplicates entries, so to be certain you could then do rebuild -r to do it again carefully but slowly.
initialize [the datastore database] from the resources folder (if there is a way)
I don't think the CKAN Data Pusher has a command-line interface to push all the resources. It would be a good plan for you to write one and submit a PR for everyone's benefit. | 1 | 5 | 0 | I'm trying to write some documentation on how to restore a CKAN instance in my organization.
I have successfully backed up and restored the CKAN database and resources folder, but I don't know what I have to do with the datastore db.
Which is the best practice?
Use pg_dump to dump the database or initialize it from the resources folder (if there is a way)?
Thanks.
Alex | Ckan backup and restore | 1.2 | 1 | 0 | 1,289 |
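The backup steps can be collected into a small helper that assembles the shell commands. The database names and paths below are illustrative defaults only, so adjust them to your ckan.ini settings; the plain .sql dumps it produces are the ones you would restore with psql -f:

```python
import shlex

def backup_commands(db="ckan_default", datastore="datastore_default",
                    storage_path="/var/lib/ckan/default", dest="/backup"):
    """Build the shell commands for a full CKAN backup: both Postgres
    databases as plain SQL dumps (restorable with `psql -f`) plus a
    tarball of the Filestore directory (ckan.storage_path)."""
    return [
        ["pg_dump", "-f", f"{dest}/{db}.sql", db],
        ["pg_dump", "-f", f"{dest}/{datastore}.sql", datastore],
        ["tar", "czf", f"{dest}/filestore.tar.gz", storage_path],
    ]

for cmd in backup_commands():
    print(" ".join(shlex.quote(part) for part in cmd))
```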