Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
42,525,463 | 2017-03-01T07:17:00.000 | 0 | 0 | 1 | 1 | python,python-3.x,ubuntu | 42,525,853 | 2 | false | 0 | 0 | You can use sudo dpkg -i panda3d1.9_1.9.3-xenial_amd64.deb; it won't affect your default package. | 2 | 0 | 0 | I downloaded a deb package panda3d1.9_1.9.3-xenial_amd64.deb and I want to install it for Python 3. My OS is Linux Ubuntu 16.04. The default python is 2.7.12 and I would prefer to keep it as default, but Python 3 is installed too and available to use. How do I install this package for Python 3 only?
I am not sure whether pip would help. | How do I install a python deb package for Python3 on Ubuntu? | 0 | 0 | 0 | 1,157 |
42,525,463 | 2017-03-01T07:17:00.000 | 1 | 0 | 1 | 1 | python,python-3.x,ubuntu | 42,528,081 | 2 | true | 0 | 0 | If the package was built to only support Python 2, there is no straightforward way to install it for Python 3. You will want to ask the packager to provide a package built for Python 3 if there isn't one already.
(This replaces my earlier answer, which was incorrect or at least misleading. Thanks to @Goyo in particular for setting me straight.) | 2 | 0 | 0 | I downloaded a deb package panda3d1.9_1.9.3-xenial_amd64.deb and I want to install it for Python 3. My OS is Linux Ubuntu 16.04. The default python is 2.7.12 and I would prefer to keep it as default, but Python 3 is installed too and available to use. How do I install this package for Python 3 only?
I am not sure whether pip would help. | How do I install a python deb package for Python3 on Ubuntu? | 1.2 | 0 | 0 | 1,157 |
42,526,695 | 2017-03-01T08:27:00.000 | 0 | 0 | 0 | 0 | python,openerp,open-source,erp,odoo-10 | 44,158,222 | 3 | false | 1 | 0 | The function that changes the title_part is:
_title_changed in the js file: addons/web/static/src/js/abstract_web_client.js
It uses this.set('title_part', {"zopenerp": "Odoo"});
so you can replace the word "Odoo" with whatever you need, and customize the order of the title in _title_changed to display it however you want. | 1 | 1 | 0 | I have changed the Odoo 10 login page title by using the website builder app, but it does not work on the other pages after login. After login, when I access different installed apps, the page title shows Odoo with the app's name, like "Products - Odoo" or "Customers - Odoo". | How to change the title in Odoo 10? | 0 | 0 | 0 | 5,989
42,527,324 | 2017-03-01T08:59:00.000 | 0 | 0 | 0 | 0 | python,html,mysql,beautifulsoup | 42,529,552 | 3 | false | 1 | 0 | If you are using the data for normal use, you can store it in an SQLite DB instead of MySQL; SQLite has built-in support in Python. If your site is mostly static, then you can use BeautifulSoup for scraping, and there are lots of Python libraries like numpy for statistical analysis. If your target site has dynamically generated content, then it is better to use a PhantomJS or Selenium driver to retrieve that content | 3 | 0 | 0 | I have coded python to scrape a webpage and retrieve listing prices.
I want to store the data and conduct a statistical analysis on the dataset.
Would this work?
Python -> beautifulsoup -> mySQL -> html
Data set:
$10 , $20, $10
I want to be able to calculate averages and then display them on the html page. | Scrape and store data for html display | 0 | 0 | 0 | 475 |
42,527,324 | 2017-03-01T08:59:00.000 | 1 | 0 | 0 | 0 | python,html,mysql,beautifulsoup | 42,527,517 | 3 | false | 1 | 0 | You could stay in Python for the analysis (for example with Python Pandas dataframes) before storing in mySQL:
Python -> Beautifulsoup -> pandas -> mySQL -> html | 3 | 0 | 0 | I have coded python to scrape a webpage and retrieve listing prices.
I want to store the data and conduct a statistical analysis on the dataset.
Would this work?
Python -> beautifulsoup -> mySQL -> html
Data set:
$10 , $20, $10
I want to be able to calculate averages and then display them on the html page. | Scrape and store data for html display | 0.066568 | 0 | 0 | 475 |
42,527,324 | 2017-03-01T08:59:00.000 | 1 | 0 | 0 | 0 | python,html,mysql,beautifulsoup | 42,527,571 | 3 | false | 1 | 0 | Beautifulsoup is an HTML parser. You can feed it an HTML page using Python, and extract the data you need from it. Then you can post-process the data in Python, and load it into MySQL once you're ready. I'm a bit confused about the step MySQL -> HTML, since neither is a programming language (HTML is a markup language that can't talk to MySQL, and MySQL is a database management system that can't directly output HTML), but sure, displaying MySQL data in an HTML page is a trivial step.
It might be a good idea to separate these steps a bit better, by the way. You have some code that extracts data and loads it into a database, and you have some code that displays data from the database. Keeping these two separated might increase your code quality. | 3 | 0 | 0 | I have coded python to scrape a webpage and retrieve listing prices.
I want to store the data and conduct a statistical analysis on the dataset.
Would this work?
Python -> beautifulsoup -> mySQL -> html
Data set:
$10 , $20, $10
I want to be able to calculate averages and then display them on the html page. | Scrape and store data for html display | 0.066568 | 0 | 0 | 475 |
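To make the flow above concrete, here is a minimal sketch of the "store in a database, then compute the average for the HTML page" half, using Python's built-in sqlite3 in place of MySQL and the question's sample prices ($10, $20, $10) in place of the actual BeautifulSoup scraping step; the table name `listings` is just an illustration:

```python
import sqlite3

# Prices as they might come back from the scraping step (the question's sample data).
scraped = ["$10", "$20", "$10"]

conn = sqlite3.connect(":memory:")  # use a file path (or MySQL) in real use
conn.execute("CREATE TABLE listings (price REAL)")
conn.executemany("INSERT INTO listings VALUES (?)",
                 [(float(p.lstrip("$")),) for p in scraped])

# Let the database do the statistics, then build the HTML snippet to display.
(avg,) = conn.execute("SELECT AVG(price) FROM listings").fetchone()
print("<p>Average listing price: ${:.2f}</p>".format(avg))  # -> $13.33
```

The same SELECT can back whatever page-rendering step serves the HTML.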
42,533,392 | 2017-03-01T13:43:00.000 | 0 | 0 | 1 | 0 | python,python-2.7 | 42,533,424 | 2 | false | 0 | 0 | Extract the .egg/.tar to *python_installation_path*\Lib\site-packages using 7-Zip | 1 | 0 | 0 | How to import modules from self created packages(wheel,eggs,tar.gz etc) local/or in artifactory in another python package? (Possibly a code snippet would be helpful)
Requirement -
There is a developed Python package abc.whl located on a local intranet repository, and another file in a second Python project, /xyz/def.py, needs to import a module from package abc.whl | How to import modules from self created packages(wheel,eggs,tar.gz etc) local/or in artifactory in another python package? | 0 | 0 | 0 | 1,100
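Since a wheel is just a zip archive, Python can import from it directly once the archive is on sys.path (via the built-in zipimport machinery). A minimal sketch — the archive and the module name abcmod below are stand-ins for the real abc.whl from the intranet repository; for real deployments you would normally run pip install /path/to/abc.whl instead:

```python
import pathlib
import sys
import tempfile
import zipfile

# Build a tiny stand-in "wheel" (a zip archive containing one module).
whl = pathlib.Path(tempfile.mkdtemp()) / "abc-1.0-py3-none-any.whl"
with zipfile.ZipFile(whl, "w") as zf:
    zf.writestr("abcmod.py", "def greet():\n    return 'hello from the wheel'\n")

# Putting the archive itself on sys.path makes its modules importable.
sys.path.insert(0, str(whl))
import abcmod

print(abcmod.greet())  # -> hello from the wheel
```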
42,539,906 | 2017-03-01T19:02:00.000 | 0 | 0 | 1 | 0 | python,scikit-learn | 52,912,100 | 3 | false | 0 | 0 | If someone is working with via bash here are the steps :
For ubunutu :
sudo apt-get install python-sklearn | 2 | 0 | 1 | Why am I not able to import sklearn?
I downloaded Anaconda Navigator and it has scikit-learn in it. I even pip installed sklearn , numpy and scipy in Command Prompt and it shows that it has already been installed, but still when I import sklearn in Python (I use PyCharm for coding) it doesn't work. It says 'No module named sklearn'. | Why can i not import sklearn | 0 | 0 | 0 | 7,012 |
42,539,906 | 2017-03-01T19:02:00.000 | 0 | 0 | 1 | 0 | python,scikit-learn | 42,574,660 | 3 | false | 0 | 0 | Problem solved! I didn't know that I was supposed to change my interpreter to Anaconda's interpreter(I am fairly new to Python). Thanks for the help! | 2 | 0 | 1 | Why am I not able to import sklearn?
I downloaded Anaconda Navigator and it has scikit-learn in it. I even pip installed sklearn , numpy and scipy in Command Prompt and it shows that it has already been installed, but still when I import sklearn in Python (I use PyCharm for coding) it doesn't work. It says 'No module named sklearn'. | Why can i not import sklearn | 0 | 0 | 0 | 7,012 |
42,540,201 | 2017-03-01T19:19:00.000 | 3 | 0 | 1 | 0 | python,c++ | 42,540,332 | 1 | true | 0 | 1 | There are many ways two executables can exchange data.
Some examples:
write/read data to/from a shared file (don't forget locking so they don't stumble on each other).
use TCP or UDP sockets between the processes to exchange data.
use shared memory.
if one application starts the other you can pass data via commandline arguments or in the environment.
use pipes between the processes.
use Unix domain sockets between the processes.
And there are more options but the above are probably the most common ones.
What you need to research is IPC (Inter-Process Communication). | 1 | 0 | 0 | I'm relatively inexperienced with C++, but I need to build a framework to shuffle some data around. Not necessarily relevant, but the general flow path of my data needs to go like this:
Data is generated in a python script
The python object is passed to a compiled C++ extension
The C++ extension makes some changes and passes the data (presumably a pointer?) to compiled C++/CUDA code (.exe)
C++/CUDA .exe does stuff
Data is handled back in the python script and sent to more python functions
Step 3. is where I'm having trouble. How would I go about calling the .exe containing the CUDA code in a way that it can access the data that is seen in the C++ python extension? I assume I should be able to pass a pointer somehow, but I'm having trouble finding resources that explain how. I've seen references to creating shared memory, but I'm unclear on the details there, as well. | Pass Data from One .exe to Another | 1.2 | 0 | 0 | 636 |
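Of the IPC options listed in the answer, pipes are often the simplest when one process starts the other. A minimal Python sketch — the child program here is just a stand-in for the CUDA executable, and the data format (whitespace-separated numbers over stdin/stdout) is purely illustrative:

```python
import subprocess
import sys

# Child process: reads numbers from stdin (a pipe), writes their sum to stdout.
child_code = ("import sys; "
              "nums = [int(x) for x in sys.stdin.read().split()]; "
              "print(sum(nums))")

# Parent process: spawns the child, sends data down one pipe, reads the other.
result = subprocess.run([sys.executable, "-c", child_code],
                        input="1 2 3 4", capture_output=True, text=True)
print(result.stdout.strip())  # -> 10
```

For large buffers (as in the CUDA case), shared memory avoids copying the data through the pipe, at the cost of needing explicit synchronization.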
42,541,297 | 2017-03-01T20:21:00.000 | 15 | 0 | 1 | 0 | python,go,utf-8,language-comparisons | 42,541,495 | 2 | false | 0 | 0 | In Python, str.encode('utf8') converts a string to bytes. In Go, strings are utf-8 encoded already, if you need bytes, you can do: []byte(str). | 1 | 5 | 0 | How can I convert a string in Golang to UTF-8 in the same way as one would use str.encode('utf8') in Python? (I am trying to translate some code from Python to Golang; the str comes from user input, and the encoding is used to calculate a hash)
As far as I understand, the Python code converts unicode text into a string. The string is a collection of UTF-8 bytes. This sounds similar to strings in Go. So is this encoding already done for me when I store some text as a Go string?
Should I walk over the string and try utf8.EncodeRune in go? I'm really confused. | Equivalent of python's encode('utf8') in golang | 1 | 0 | 0 | 10,999 |
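On the Python side of the comparison, a quick check (Python 3) that encoding yields a bytes object, with multi-byte characters expanding just as they do inside a Go string:

```python
text = "héllo"
data = text.encode("utf8")   # str -> bytes, the counterpart of Go's []byte(str)

print(type(data).__name__)   # -> bytes
print(len(text), len(data))  # 5 characters but 6 bytes: 'é' is 2 bytes in UTF-8
```

For the hashing use case, it is these bytes (not the str) that get fed to the hash function, which is exactly what Go's []byte(str) gives you for free.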
42,542,214 | 2017-03-01T21:17:00.000 | 1 | 1 | 1 | 0 | python,python-module,python-importlib | 42,545,218 | 3 | false | 0 | 0 | I studied importlib's source code and since I don't intend to make a reusable Loader, it seems like a lot of unnecessary complexity. So I just settled on creating a module with types.ModuleType, adding bld to the module's __dict__, compiling and caching the bytecode with compile, and executing the module with exec. At a low level, that's basically all importlib does anyway. | 1 | 3 | 0 | I'm trying to put together a small build system in Python that generates Ninja files for my C++ project. Its behavior should be similar to CMake; that is, a bldfile.py script defines rules and targets and optionally recurses into one or more directories by calling bld.subdir(). Each bldfile.py script has a corresponding bld.File object. When the bldfile.py script is executing, the bld global should be predefined as that file's bld.File instance, but only in that module's scope.
Additionally, I would like to take advantage of Python's bytecode caching somehow, but the .pyc file should be stored in the build output directory instead of in a __pycache__ directory alongside the bldfile.py script.
I know I should use importlib (requiring Python 3.4+ is fine), but I'm not sure how to:
Load and execute a module file with custom globals.
Re-use the bytecode caching infrastructure.
Any help would be greatly appreciated! | How do I load a Python module with custom globals using importlib? | 0.066568 | 0 | 0 | 1,152 |
42,544,689 | 2017-03-02T00:22:00.000 | 2 | 0 | 0 | 0 | python,django,django-models,syntax | 42,545,133 | 2 | true | 1 | 0 | Thanks to answers below I found the answer to my question:
Django has a constant LOOKUP_SEP = '__' and uses split to break the parameter into a key/value pair | 1 | 4 | 0 | I noticed that Django uses a double underscore to define a lookup in a Model.objects.filter call.
For example:
Room.objects.filter(width__lte = 10)
How does it work? How can I create my own function like Django's and know that width__lte is actually separated into width and lower than or equal to 10? | How does django implement double underscore in filter? | 1.2 | 0 | 0 | 2,764
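The splitting the answer describes can be sketched in a few lines; this is a simplification (Django's real parsing also walks related fields, which can chain several `__` segments):

```python
LOOKUP_SEP = "__"  # the constant Django defines for this purpose

def parse_lookup(param):
    """Split a filter kwarg like 'width__lte' into (field, lookup)."""
    if LOOKUP_SEP in param:
        field, lookup = param.split(LOOKUP_SEP, 1)
    else:
        field, lookup = param, "exact"  # Django's default lookup
    return field, lookup

print(parse_lookup("width__lte"))  # -> ('width', 'lte')
print(parse_lookup("width"))       # -> ('width', 'exact')
```

A custom function can accept the same style via **kwargs and run each key through such a parser.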
42,546,031 | 2017-03-02T02:58:00.000 | 1 | 0 | 1 | 0 | python,anaconda,spyder | 42,579,694 | 1 | true | 0 | 0 | This was fixed in Spyder 3.2, which was released in July of 2017. | 1 | 1 | 0 | Spyder's Variable explorer only shows variables when I run a python script. But while debugging, there is nothing in the Variable explorer.
How do I set this up? | how to show variable in Spyder Variable explore while debugging? | 1.2 | 0 | 0 | 1,166
42,546,701 | 2017-03-02T04:09:00.000 | 1 | 0 | 1 | 0 | python,python-3.x,ipython-notebook,jupyter-notebook | 42,558,220 | 2 | true | 0 | 0 | A link to a gist is by far the superior option from those you have listed as that means helpers can run your code pretty easily and debug it from there.
An alternative option is to post the code that creates your DataFrame (or at least a minimal example of it) so that we can recreate it. This is advantageous over a gist since helpers don't have to open and download the gist because the code is in the body of the question. Also, this method is superior since you may later delete the gist, making the question useless for future reference, but if your code is in the body of the question then all future users can enjoy it as long as SO lives :) | 1 | 1 | 1 | I just ran across a problem when trying to ask for help using Pandas DataFrames in a Jupyter notebook.
More specifically my problem is what is the best way to embed iPython notebook input and output to StackOverflow question?
Simply copying and pasting breaks DataFrame output formatting so badly that it becomes impossible to read.
Which would be preferred way to handle notebooks with StackOverflow:
screenshot
link to gist with the notebook
converting notebook to HTML and embedding it
Something else? | Best way to embed Jupyter/IPython notebook information | 1.2 | 0 | 0 | 432
42,548,500 | 2017-03-02T06:41:00.000 | 0 | 0 | 1 | 0 | python,python-3.x | 42,549,604 | 1 | false | 0 | 0 | Is attribute reference a bottom-up or top down
I wouldn't classify it as bottom-up or top-down. Python's __getattribute__ first searches the class dictionary in order to find data descriptors if they exist; it then also searches the instance dictionary for instance variables (if no data descriptors have been found).
Looks like both will give similar results
No, if it was strictly bottom-up (instance first) then an instance variable with the same name as a data descriptor would mask it.
If it was top-down then a non-data descriptor with the same name as an instance variable would mask it. | 1 | 0 | 0 | I have some doubts regarding attribute reference in Python. I used to think attribute reference, as in instance.attribute, is a bottom-up approach.
First, the attribute is looked up in the instance dictionary. But I was reading some article which claims that attribute lookup is a top-down approach, i.e. when an attribute is referenced, Class.__getattribute__ is called as the first step. Here instance is an instance of class Class
My question is (considering class may contain a data descriptor or non data descriptor)
Is attribute reference a bottom-up or top down
Looks like both will give similar results. Am I correct here? | Attribute reference in Python | 0 | 0 | 0 | 843 |
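The masking behaviour the answer describes can be observed directly; here is a small self-contained demonstration (the class names are arbitrary):

```python
class Loud:
    """A data descriptor: defines both __get__ and __set__."""
    def __get__(self, obj, owner=None):
        return "from the data descriptor"
    def __set__(self, obj, value):
        raise AttributeError("read-only")

class C:
    width = Loud()

c = C()
c.__dict__["width"] = "from the instance dict"  # sneak past __set__

shadowed = c.width    # the data descriptor wins over the instance variable
del C.width           # with the descriptor gone, the instance dict is found
unshadowed = c.width

print(shadowed)       # -> from the data descriptor
print(unshadowed)     # -> from the instance dict
```

So the two approaches do not always give the same result: a data descriptor on the class shadows an instance variable of the same name, which is only explained by the top-down (class-first-for-data-descriptors) order.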
42,550,180 | 2017-03-02T08:23:00.000 | 0 | 0 | 1 | 0 | python-3.x,python-idle | 49,953,050 | 3 | false | 0 | 0 | I developed the bad habit of writing/editing python files with IDLE from watching intro videos when I was still relatively new to programming. I have since learned that file editors like Sublime or IDEs like PyCharm are a significantly better way to go and would highly recommend them to anyone reading this. | 2 | 1 | 0 | I have a number of python files with .py extensions that I was working on, closed, and tried to come back to later. When I tried to open them by right clicking and selecting “Edit with IDLE,” instead of opening, a pycache folder was created.
I have a work around in which I go to edit the file with Notepad++, copy the text into a new python editor, delete the old file, and resave the new file with the same name. My research has turned up questions related to pycache and IDLE, but none specifically addressing the issue. Has anyone encountered a similar problem/know how to solve it? I’m running Python 3.5.2 on Windows 7. | Trying to open a python file in IDLE. Instead, a pycache folder is created. How do I fix this? | 0 | 0 | 0 | 1,584 |
42,550,180 | 2017-03-02T08:23:00.000 | 2 | 0 | 1 | 0 | python-3.x,python-idle | 47,170,350 | 3 | false | 0 | 0 | What did you name the .py file as? If you named it something like "string.py", Python might interpret the file as one of those in the "Lib" folder. Why you can resave it with the same name and have it working afterwards is anyone's guess. I suggest just renaming the python file to something else. | 2 | 1 | 0 | I have a number of python files with .py extensions that I was working on, closed, and tried to come back to later. When I tried to open them by right clicking and selecting “Edit with IDLE,” instead of opening a pycache folder was created.
I have a work around in which I go to edit the file with Notepad++, copy the text into a new python editor, delete the old file, and resave the new file with the same name. My research has turned up questions related to pycache and IDLE, but none specifically addressing the issue. Has anyone encountered a similar problem/know how to solve it? I’m running Python 3.5.2 on Windows 7. | Trying to open a python file in IDLE. Instead, a pycache folder is created. How do I fix this? | 0.132549 | 0 | 0 | 1,584 |
42,550,910 | 2017-03-02T09:02:00.000 | 0 | 0 | 0 | 0 | python,csv,hyperlink | 42,551,702 | 1 | false | 0 | 0 | For HYPERLINK you need to use only an absolute URL | 1 | 0 | 0 | I am creating a csv file in which I need to give hyperlinks to files in the same folder as the csv file.
I have tried with an absolute url like =HYPERLINK("file:///home/user/Desktop/myfolder/clusters.py") and it's working fine. But can I give the relative path, like
=HYPERLINK("file:///myfolder/clusters.py") because that is what my project requires. The user will download this csv along with some other files onto his machine, so I can't give the absolute path of other files in the csv. | Give relative path of file in csv python | 0 | 0 | 0 | 215
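For what it's worth, writing such a formula cell from Python is straightforward with the csv module; whether a relative target like the one below resolves correctly is up to the spreadsheet application that opens the file:

```python
import csv
import io

buf = io.StringIO()
writer = csv.writer(buf)
# The csv module doubles the inner quotes so the formula survives intact.
writer.writerow(["cluster script", '=HYPERLINK("myfolder/clusters.py")'])

print(buf.getvalue().strip())  # -> cluster script,"=HYPERLINK(""myfolder/clusters.py"")"
```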
42,552,096 | 2017-03-02T09:55:00.000 | 0 | 0 | 0 | 0 | python,windows,tkinter,size,display | 42,555,744 | 1 | true | 0 | 1 | This is one of the reasons why place is a poor choice. You should switch to using grid and/or pack. They are specifically designed to handle different screen sizes, different resolutions, different widget styles, and different fonts. | 1 | 0 | 0 | My problem is when somebody runs my tkinter gui (in Windows 7) and has larger display settings (125%), the gui doesn't look well (buttons are closer to each other, end of text cannot be seen, etc.). I use place method with x - y coordinates to place the widgets.
Maybe using pack method could solve this, but it is easier to use place for me, because there are lots of labels and buttons with exact places.
Another solution can be if the display settings could be checked with pywin32 and resize everything if needed. If it is possible, please confirm and help, what is the related function or if you have any other idea/advice, please share it. | Python tkinter - windows larger display settings | 1.2 | 0 | 0 | 213 |
42,552,650 | 2017-03-02T10:18:00.000 | 1 | 0 | 0 | 0 | python,django,database | 42,555,104 | 2 | true | 1 | 0 | When doing a bulk delete, neither the models' delete() methods nor the eventual pre_delete and post_delete signals are invoked, so if your code relies on either of those you are in trouble. Hence the very sensible choice to loop over instances and call their delete() method individually. No need to report it as a bug (nor to submit a patch), it's actually a feature ;) | 1 | 0 | 0 | I have been adding items from a txt to my database in a django view - with and without the @transaction.atomic decorator, i.e. with a loop over db-writes or one db-write -- the performance difference is near infinite!^^
Now my observation: the default delete-action in the admin panel clearly does the (inferior) loop over db-writes, which takes really long for deleting 1000 entries.
Why is this, is there a reason, am I missing something?!
Or should I fix this and open a pull request ;) (would be my first oss-contribution :))
As mentioned in the first answer, there is a confirmation step between choosing the action and the actual delete. But even after the confirmation it takes several minutes (for a few thousand entries) to delete the items, during which the database is locked, so there is no way back at that point... | django built-in admin action delete - reason for bad performance? | 1.2 | 0 | 0 | 299
42,553,676 | 2017-03-02T11:04:00.000 | 1 | 0 | 0 | 1 | python,ibm-cloud,iot-for-automotive,iot-driver-behavior | 42,668,207 | 1 | true | 0 | 0 | I think your procedure is OK.
There are the following possible reasons for not getting a valid analysis result:
(1) In current Driving Behavior Analysis, it requires at least 10 valid gps points within a trip (trip_id) on a vehicle (trip_id). Please check your data which is used on "sendCarProbe" API.
(2) Please check that the "sendJobRequest" API's from and to dates (yyyy-mm-dd) really match your car probe timestamps. | 1 | 0 | 0 | I want to explore and use the driver behavior service in my application. Unfortunately I got stuck, as I'm getting an empty response from the getAnalyzedTripSummary API instead of a Trip UUID.
Here are the steps I've followed.
I've added the services called Driver behavior and Context Mapping to my application @Bluemix.
Pushed multiple sample data packets to the Driver Behavior using "sendCarProbe" API
Sent Job Request using "sendJobRequest" API with from and to dates as post data.
Tried the "getJobInfo" API, which returns the job status ("job_status": "SUCCEEDED").
Tried "getAnalyzedTripSummaryList" to get trip_uuid. But
it's returning an empty result: []
Could someone help me to understand what's wrong and why I'm getting empty response? | IBM Bluemix IoT Watson service Driver Behaviour | 1.2 | 0 | 0 | 114 |
42,556,596 | 2017-03-02T13:19:00.000 | 1 | 0 | 1 | 0 | python-2.7,readline | 42,571,573 | 1 | false | 0 | 0 | I tried the following and it worked.
Navigate to the ncurses directory that contains configure and run the following:
./configure --with-shared --without-debug
make
make install
Then install readline again. The error disappeared. The module was successfully installed. | 1 | 0 | 0 | I am trying to install readline using the command "python setup.py install" by navigating to the directory where I untarred readline 6.2.4.1.
ncurses-6.0 is currently installed in the server.
I have tried recompiling with -fPIC but that also doesn't seem to solve the problem.
Command: sudo CFLAGS="-fPIC" python setup.py install
Error Log:
building 'readline' extension
gcc -pthread -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -fPIC -DHAVE_RL_CALLBACK -DHAVE_RL_CATCH_SIGNAL -DHAVE_RL_COMPLETION_APPEND_CHARACTER -DHAVE_RL_COMPLETION_DISPLAY_MATCHES_HOOK -DHAVE_RL_COMPLETION_MATCHES -DHAVE_RL_COMPLETION_SUPPRESS_APPEND -DHAVE_RL_PRE_INPUT_HOOK -I. -I/home/roaming/i332346/opt/Python-local/include/python2.7 -c Modules/2.x/readline.c -o build/temp.linux-x86_64-2.7/Modules/2.x/readline.o -Wno-strict-prototypes
In file included from /home/roaming/i332346/opt/Python-local/include/python2.7/Python.h:126:0,
from Modules/2.x/readline.c:8:
/home/roaming/i332346/opt/Python-local/include/python2.7/modsupport.h:27:1: warning: ‘PyArg_ParseTuple’ is an unrecognized format function type [-Wformat=]
PyAPI_FUNC(int) PyArg_ParseTuple(PyObject *, const char *, ...) Py_FORMAT_PARSETUPLE(PyArg_ParseTuple, 2, 3);
^
gcc -pthread -shared -fPIC build/temp.linux-x86_64-2.7/Modules/2.x/readline.o readline/libreadline.a readline/libhistory.a -lncurses -o build/lib.linux-x86_64-2.7/readline.so
/usr/lib64/gcc/x86_64-suse-linux/4.8/../../../../x86_64-suse-linux/bin/ld: /usr/lib/libncurses.a(lib_termcap.o): relocation R_X86_64_32 against `_nc_globals' can not be used when making a shared object; recompile with -fPIC
/usr/lib/libncurses.a: error adding symbols: Bad value
collect2: error: ld returned 1 exit status
error: command 'gcc' failed with exit status 1 | relocation R_X86_64_32 against `_nc_globals' can not be used when making a shared object; recompile with -fPIC while installing readline | 0.197375 | 0 | 0 | 755 |
42,558,294 | 2017-03-02T14:38:00.000 | 0 | 0 | 0 | 0 | python,python-2.7,request,timeoutexception,http-status-code-504 | 42,558,358 | 1 | false | 1 | 0 | The gateway timeout means the connected server had some sort of timeout after receiving your request(i.e. you did make a connection). However, the requests timeout exception means your script never connected to the server and timed out waiting on a response from the server (i.e. you did not make a connection). | 1 | 1 | 0 | requests.exceptions.Timeout VS
requests.models.Response.status_code = 504 [gateway timeout]
What is the actual difference between the two, as both deal with a timeout having occurred?
Let us say Service s1 makes call to S2
In s1:
request.post( url=s2,..., timeout=60 )
When will requests.exceptions.Timeout be raised, and in what scenario is 504 received?
Can retries be made for all of those exceptions? I believe the answer to the above question might give a lead on this.
Thanks in advance. | python: requests.exceptions.Timeout vs requests.models.Response.status_code 504 ( gateway timeout ) | 0 | 0 | 1 | 1,354 |
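One way to act on the distinction in the answer: treat both "never connected" (the exception) and "connected but the gateway gave up" (504) as retryable — with the usual caveat that retrying a non-idempotent POST may not be safe. This is a pure-Python sketch with hypothetical names (RetryableTimeout and fake_send stand in for requests.exceptions.Timeout and the real requests.post call):

```python
class RetryableTimeout(Exception):
    """Stands in for requests.exceptions.Timeout (no connection was made)."""

def post_with_retries(send, max_attempts=3):
    """Call send() until it neither raises nor returns a 504 status."""
    for _ in range(max_attempts):
        try:
            status = send()
        except RetryableTimeout:
            continue           # never connected; try again
        if status != 504:
            return status      # a real answer (2xx, 4xx, ...)
    return None                # give up after max_attempts

# Simulated server: times out once, returns 504 once, then succeeds.
responses = iter([RetryableTimeout(), 504, 200])
def fake_send():
    r = next(responses)
    if isinstance(r, Exception):
        raise r
    return r

result = post_with_retries(fake_send)
print(result)  # -> 200
```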
42,560,179 | 2017-03-02T16:01:00.000 | 1 | 0 | 1 | 0 | python,pyinstaller | 53,063,002 | 1 | false | 0 | 1 | Had the same issue; pyinstaller -F script.py without the build on the command prompt worked for me. | 1 | 3 | 0 | I am having some problems with using PyInstaller to package a project. I have used it successfully in the past for simpler scripts, but I am attempting to package a larger project (pyqt4 gui that calls multiple scripts and modules) and I get the following error:
IOError: [Errno 13] Permission denied: 'C\Users\username\AppData\Roaming\pyinstaller\bincache01_py27_64bi\qt4_plugins\imageformats\qsvg4.dll'
I'm running pyinstaller from a command prompt with admin privileges. I've checked the permission on the file in question and I definitely have all permissions for that file.
I haven't been able to find anything that's helped; most of the people reporting similar issues seemed to solve them by running from a command prompt with admin privileges. If anyone has any ideas or advice, that would be greatly appreciated.
Thank you. | Permissions Error with PyInstaller (running as admin) | 0.197375 | 0 | 0 | 2,161 |
42,563,683 | 2017-03-02T18:56:00.000 | 0 | 0 | 0 | 0 | python,bash,curl,sed,web-scraping | 42,564,318 | 2 | false | 0 | 0 | With Python you can also scrape sites rendered with JavaScript using Selenium and a headless browser like PhantomJS. Maybe this is possible with bash scripting too, but the more complex your code gets, the bigger the advantage of the clarity of Python, IMHO. | 1 | 0 | 0 | I am trying to get more information from experienced people doing web scraping in general, I am getting into web scraping using Python libraries. At the same time, I noticed some people are using simple Bash, and using commands for web scraping such as wget, curl, sed, grep, awk.
These commands seem to be much cleaner in scripting than using Python libraries for web scraping.
What are your takes on this? Do you see any advantage of using python libraries over Bash that I am not getting? Or even using Python with Bash to accomplish web scraping? | Using Bash scripting for web scraping over python libraries? | 0 | 0 | 1 | 707 |
42,564,069 | 2017-03-02T19:17:00.000 | 1 | 0 | 0 | 0 | python,apache-spark,pyspark | 50,958,242 | 1 | false | 0 | 0 | I don't think it's feasible during an interactive session. You will have to restart your session to use the modified module. | 1 | 8 | 1 | Within an interactive pyspark session you can import python files via sc.addPyFile('file_location'). If you need to make changes to that file and save them, is there any way to "re-broadcast" the updated file without having to shut down your spark session and start a new one?
Simply adding the file again doesn't work. I'm not sure if renaming the file works, but I don't want to do that anyways.
As far as I can tell from the spark documentation there is only a method to add a pyfile, not update one. I'm hoping that I missed something!
Thanks | How can you update a pyfile in the middle of a PySpark shell session? | 0.197375 | 0 | 0 | 1,140 |
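For completeness, here is how a plain local module can be picked up again after an edit with importlib.reload — note this only updates the driver-side copy; as far as is known it does not re-broadcast the file to the executors, which is why a session restart is needed. The module name helper and the temp directory are illustrative:

```python
import importlib
import pathlib
import sys
import tempfile

sys.dont_write_bytecode = True  # keep the demo free of stale .pyc caches

workdir = tempfile.mkdtemp()
mod_path = pathlib.Path(workdir) / "helper.py"
mod_path.write_text("VALUE = 1\n")

sys.path.insert(0, workdir)
import helper
before = helper.VALUE

mod_path.write_text("VALUE = 2\n")  # simulate editing the file
importlib.reload(helper)            # re-executes the module in place
after = helper.VALUE

print(before, after)  # -> 1 2
```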
42,568,374 | 2017-03-03T00:14:00.000 | 5 | 0 | 1 | 0 | python,command-line | 58,019,890 | 2 | false | 0 | 0 | raw_input() was renamed to input() in Python 3.0. | 1 | 7 | 0 | Say I have a python script, call it "script.py".
Normally, in the command line, the file executes when the user types "python script.py".
What I want is to add an "are you sure? (y/n)" prompt after the user types "python script.py". And only after typing y [enter] should the script execute. | Add "are you sure? (y/n)" prompt before executing python script | 0.462117 | 0 | 0 | 6,392
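A small Python 3 sketch of such a guard (on Python 2 you would use raw_input instead of input, as the answer notes); the ask parameter exists only so the example can run with canned answers instead of a live keyboard:

```python
import sys

def confirm(prompt="are you sure? (y/n) ", ask=input):
    """Return True only when the user answers y (case-insensitive)."""
    return ask(prompt).strip().lower() == "y"

# Canned answers instead of real keyboard input, so the example is runnable:
print(confirm(ask=lambda _: "y"))  # -> True
print(confirm(ask=lambda _: "N"))  # -> False

# In script.py itself you would put, right at the top:
# if not confirm():
#     sys.exit("aborted")
```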
42,570,494 | 2017-03-03T04:07:00.000 | -1 | 1 | 0 | 1 | python,import,sshfs,fedora-25,remote-host | 43,487,256 | 1 | false | 0 | 0 | In my case it was the CERN ROOT libraries' import. When importing, they look in the current directory, no matter what I do. So the solution is to:
store the current directory
cd to some really local directory, like "/" or "/home" before imports
come back to the stored directory after imports | 1 | 0 | 0 | I have trouble running my a little bit complex python program in a remote directory, mounted by SSHFS. It takes a few seconds to perform imports when executing in a remote directory and a fraction of a second in a local directory. The program should not access anything in the remote directory on its own, especially in the import phase.
By default, there is current (remote) directory I sys.path, but when I remove it before (other) imports, speed does not change. I confirmed with python -vv that this remote directory is not accessed in the process of looking for modules. Still, I can see a stable flow of some data from the network with an external network monitor during the import phase.
Moreover, I can't really identify what exectly it is doing when consuming most time. It seems to happen after one import is finished, according to my simple printouts, and before a next import is started...
I'm running Fedora 25 Linux | Python script very slow in a remote directory | -0.197375 | 0 | 0 | 1,102 |
42,573,079 | 2017-03-03T07:29:00.000 | 0 | 1 | 0 | 0 | php,python | 42,573,390 | 1 | false | 0 | 0 | If you can see a page which contains the data you need with your eyes, you can use web scraping to gather it. | 1 | 0 | 0 | I've read the API documentation and there seems to be no way to get a user's email. | Is there any way to extract a list of users and their email ids from wattpad? | 0 | 0 | 1 | 67
42,579,614 | 2017-03-03T13:03:00.000 | 2 | 0 | 0 | 0 | python,tkinter | 42,579,716 | 1 | false | 0 | 1 | It is not possible to add buttons to the titlebar of a window in Tkinter. | 1 | 0 | 0 | I am creating a game as a school project with tkinter. I have an 800-line program, and I want to add a 4th button in the title bar of one of my windows (not the main one). Is it possible and, if yes, how can I manage to do it? | Add a button in the title bar in Tkinter | 0.379949 | 0 | 0 | 328
42,580,176 | 2017-03-03T13:28:00.000 | 1 | 0 | 0 | 0 | python,pyqt4,qpixmap | 42,604,577 | 1 | false | 0 | 1 | When you load image into QPixmap or QImage, it is converted from file format to internal representation. Because of that, QImage.byteCount() returns number of bytes used to store image. As you already mentioned, it is equals to width*height*4. Here, digit 4 is color depth (bytes per pixel). You can get it via QImage.depth() method. Note that it will return number of bits, so you have to divide it by 8 to get bytes.
So, if you want to get file size, you can either use len(data) (as suggested by ekhumoro) or load it to QFile and call size() (if you have/save it on hard drive). | 1 | 0 | 0 | I get images from aws and assign them to QPixmap variables. I want to show their information as width height and file size. However I could not find a way to get file size of them. I converted them to QImage and used byteCount method however, although the file size of the image is 735 byte, it returns 3952 byte which is equal to width*height*4. | How can I find the file size of image which is read as QPixmap? | 0.197375 | 0 | 0 | 985 |
42,580,235 | 2017-03-03T13:31:00.000 | 0 | 0 | 1 | 0 | python-2.7,python-3.x,digital-signature | 45,190,212 | 2 | false | 0 | 0 | There are many packages available.
pdfrw: Read and write PDF files; watermarking, copying images from one PDF to another. Includes sample code.
slate : Simplifies extracting text from PDF files. Wrapper around PDFMiner.
PDFQuery : PDF scraping with Jquery or XPath syntax. Requires PDFMiner, pyquery and lxml libraries.
PDFMiner : Extracting text, images, object coordinates, metadata from PDF files. Pure Python.
PyPDF2 : Split, merge, crop, etc. of PDF files. | 1 | 4 | 0 | How to add a digital signature to any given file using python and verify the same . That is input a file and output a digitally signed file and giving a digitally signed file with a key verify the digital signature. How to do this using python? | How to add a digital signature to any given file using python | 0 | 0 | 0 | 2,588 |
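The packages listed in the answer are PDF-manipulation libraries; for signing and verifying arbitrary files, a true asymmetric digital signature needs a crypto library such as the third-party `cryptography` package. A standard-library-only stand-in is an HMAC — strictly speaking a message authentication code with a shared key, not a public-key signature, but the sign/verify flow is the same shape:

```python
import hmac
import hashlib

def sign(data: bytes, key: bytes) -> str:
    """Return a hex HMAC over the data (symmetric stand-in for a signature)."""
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def verify(data: bytes, key: bytes, signature: str) -> bool:
    """Constant-time check that the signature matches the data."""
    return hmac.compare_digest(sign(data, key), signature)

key = b"shared-secret"                       # hypothetical key, for illustration
tag = sign(b"file contents", key)
print(verify(b"file contents", key, tag))    # -> True
print(verify(b"tampered", key, tag))         # -> False
```

To sign a file, read its bytes and pass them through the same functions.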
42,582,335 | 2017-03-03T15:09:00.000 | 3 | 0 | 0 | 0 | python,python-2.7,tkinter,listbox | 42,582,872 | 1 | true | 0 | 1 | The way to get the line number of the active line of the listbox is to use the index method: listbox.index(tk.ACTIVE). | 1 | 1 | 0 | Let's get the formalities out of the way: This is about Python 2.7.x running on Windows 7.
I'm creating a subclass of the Tkinter listbox widget, and one of the things I need the subclass to provide is a property containing the index (line number) of the line that is currently active (i.e. has the focus). I know that Tkinter supports the constant tk.ACTIVE for all listbox methods that take an index, but in my custom widget this property needs to always be an integer, never a string.
I've scoured the documentation but there doesn't seem to be a method that will return the index of the active line, nor a way to "convert" tk.ACTIVE to its effective index number. Methods like curselection() or selection_includes() are not helpful because this listbox is always going to have a selectmode of tk.EXTENDED -- which means any number of lines may be selected and further that the current active line may or may not be among them.
I considered using an event binding to wait for arrow keys, mouse clicks, etc. and look for changes to the curselection() tuple, but this is not quite helpful or straightforward either. E.g. suppose the user shift-clicks to select a range -- he may go top-to-bottom or bottom-to-top, and either way the tuple will just show the range, not which line is active.
So then: is there any way at all (overt or sneaky, simple or complex) to get the equivalent index number for the tk.ACTIVE line? | "Convert" tk.ACTIVE to the index (line number) in a listbox | 1.2 | 0 | 0 | 717 |
42,582,399 | 2017-03-03T15:11:00.000 | 0 | 0 | 0 | 0 | python-3.x,python-3.4 | 55,909,536 | 2 | false | 1 | 0 | you could try this:
app.PostCommand("exit") | 2 | 2 | 0 | I am new to Python scripting and I am currently trying to get acquainted with python scripting and DigSILENT Powerfactory.
I have managed thus far to execute PowerFactory, activate projects and execute a load flow, but after my code ends, when I try to rerun it, it won't run. In order for it to run I need to close Spyder and reopen it. I believe this is related to the fact that PowerFactory is still running in the background, so I was wondering if there is any command that "forces" PowerFactory to shut down after the code execution.
Any tip would be greatly appreciated :) | How can I exit powerfactory using Python in Unattended mode? | 0 | 0 | 0 | 826 |
42,582,399 | 2017-03-03T15:11:00.000 | 0 | 0 | 0 | 0 | python-3.x,python-3.4 | 43,653,385 | 2 | false | 1 | 0 | I had the same problem. But faster than shutdown/reopen spyder is to restart the kernel with ctrl + . | 2 | 2 | 0 | I am new to Python scripting and I am currently trying to get acquainted with python scripting and DigSILENT Powerfactory.
I have managed thus far to execute PowerFactory, activate projects and execute a load flow, but after my code ends, when I try to rerun it, it won't run. In order for it to run I need to close Spyder and reopen it. I believe this is related to the fact that PowerFactory is still running in the background, so I was wondering if there is any command that "forces" PowerFactory to shut down after the code execution.
Any tip would be greatly appreciated :) | How can I exit powerfactory using Python in Unattended mode? | 0 | 0 | 0 | 826 |
42,582,938 | 2017-03-03T15:39:00.000 | 0 | 0 | 1 | 0 | python,image,photo,editing | 42,583,042 | 2 | false | 0 | 0 | Online photo editor means that most of the processing will be done on the client side (i.e. in browser). Python is mostly a server-side language, so I would suggest using some other, more browser-friendly, language (perhaps JavaScript?) | 1 | 0 | 0 | I am going to build an online photo editor using python, but I don't know how to start. My plan is to create a platform online. Users can upload their photos and the system can transform their photos into a style like Ukiyo-e from Japan, the ancient woodblock printing, so the photo outcomes are similar to that. Are there any similar works that have already been done or any libraries that can help to do this work?
Thanks for answering. | Creating a online photo editor in Python | 0 | 0 | 0 | 2,065 |
42,583,082 | 2017-03-03T15:46:00.000 | 0 | 0 | 1 | 1 | python,python-2.7,python-3.x | 42,583,156 | 7 | false | 0 | 0 | In my case, /usr/bin/python is a symlink that points to /usr/bin/python2.7.
Usually, there is a relevant symlink for python2 and python3.
So, if you type python2 you get a python-2 interpreter and if you type python3 you get a python-3 one. | 5 | 0 | 0 | I have installed both versions of python that is python 2.7 and python 3.5.3. When I run python command in command prompt, python 3.5.3 interpreter shows up. How can I switch to python 2.7 interpreter? | how to switch python interpreter in cmd? | 0 | 0 | 0 | 10,377 |
42,583,082 | 2017-03-03T15:46:00.000 | 0 | 0 | 1 | 1 | python,python-2.7,python-3.x | 42,583,347 | 7 | false | 0 | 0 | It depends on OS (and the way Python has been installed).
For most current installations:
on Windows, Python 3.x installs a py command in the path that can be used that way:
py -2 launches Python2
py -3 launches Python3
On Unix-likes, the most common way is to have different names for the executables of different versions (or to have different symlinks to them). So you can normally call python2.7 or python2 directly to start that version (and python3 or python3.5 for the alternate one). By default only some of those symlinks may have been installed, but at least one per version. Search your path to find them
42,583,082 | 2017-03-03T15:46:00.000 | 0 | 0 | 1 | 1 | python,python-2.7,python-3.x | 71,958,209 | 7 | false | 0 | 0 | As has been mentioned in other answers to this and similar questions, if you're using Windows, cmd reads down the PATH variable from the top down. On my system I have Python 3.8 and 3.10 installed. I wanted my cmd to solely use 3.8, so I moved it to the top of the PATH variable and the next time I opened cmd and used python --version it returned 3.8.
Hopefully this is useful for future devs researching this specific question. | 5 | 0 | 0 | I have installed both versions of python that is python 2.7 and python 3.5.3. When I run python command in command prompt, python 3.5.3 interpreter shows up. How can I switch to python 2.7 interpreter? | how to switch python interpreter in cmd? | 0 | 0 | 0 | 10,377 |
42,583,082 | 2017-03-03T15:46:00.000 | 0 | 0 | 1 | 1 | python,python-2.7,python-3.x | 42,583,188 | 7 | false | 0 | 0 | Usually on all major operating systems the commands python2 and python3 run the correct version of Python respectively. If you have several versions of e.g. Python 3 installed, python32 or python35 would start Python 3.2 or Python 3.5. python usually starts the lowest version installed I think.
Hope this helps! | 5 | 0 | 0 | I have installed both versions of python that is python 2.7 and python 3.5.3. When I run python command in command prompt, python 3.5.3 interpreter shows up. How can I switch to python 2.7 interpreter? | how to switch python interpreter in cmd? | 0 | 0 | 0 | 10,377 |
42,583,082 | 2017-03-03T15:46:00.000 | 0 | 0 | 1 | 1 | python,python-2.7,python-3.x | 42,583,177 | 7 | false | 0 | 0 | If you use Windows OS:
py -2.7 for python 2.7
py -3 for python 3.x
But first you need to check your PATH | 5 | 0 | 0 | I have installed both versions of python that is python 2.7 and python 3.5.3. When I run python command in command prompt, python 3.5.3 interpreter shows up. How can I switch to python 2.7 interpreter? | how to switch python interpreter in cmd? | 0 | 0 | 0 | 10,377 |
42,584,551 | 2017-03-03T16:58:00.000 | 0 | 0 | 1 | 1 | python,linux,nano | 70,208,117 | 3 | false | 0 | 0 | Try just "M-I" (Esc-I) to switch off autoindent before pasting with Ctrl-Ins (or right mouse click) | 1 | 7 | 0 | I am a beginner programmer as well as linux user. Before, I was using windows and the python IDLE was so good. I need not needed to press tab button after the "If" statement or any other loops.
Now, I am using Linux and have started to write programs in Ubuntu's command-line text editor called "nano". Here, I need to press Tab every time I use an "if" statement. It is very tedious, especially when there is a bunch of nested loops and it becomes difficult to remember the tab count. I was wondering if there was any way to make it work like IDLE on Windows. I also tried to google the problem but I couldn't explain it in few words. I hope you've got what my problem actually is, and I need a decent solution for this. | How to make auto-indentation work in nano while programming in Python on Linux? | 0 | 0 | 0 | 18,957
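For the behaviour the question asks for — nano indenting new lines automatically, IDLE-style — nano has a built-in option that can be enabled permanently in `~/.nanorc` (a sketch; option names as in mainline GNU nano):

```
# ~/.nanorc -- enable automatic indentation of new lines
set autoindent
# optionally make Tab insert spaces, 4 per press (handy for Python)
set tabstospaces
set tabsize 4
```

With `set autoindent`, pressing Enter starts the new line at the same indentation as the previous one; Esc-I toggles the option off temporarily (useful before pasting).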
42,589,584 | 2017-03-03T22:32:00.000 | 11 | 0 | 1 | 1 | python,subprocess | 42,589,699 | 1 | true | 0 | 0 | Processes that produce color output do it by sending escape codes to the terminal(-emulator) intermixed with the output. Programs that handle the output of these programs as data would be confused by the escape codes, so most programs that produce color output on terminals do so only when they are writing to a terminal device. If the program's standard output is connected to a pipe rather than a terminal device, they don't produce the escape codes. When Python reads the output of a sub-process, it does it through a pipe, so the program you are calling in a sub-process is not outputting escape codes.
If all you are doing with the output is sending it to a terminal, you might want the escape codes so the color is preserved. It's possible that your program has a command-line switch to output escape codes regardless of the output device. If it does not, you might run your sub-process against a virtual terminal device instead of a pipe to have it output escape codes; which is too complex a topic to delve into in this answer. | 1 | 11 | 0 | I'm trying to run a process within another python process. The program I run normally has colored output when run in an ANSI terminal emulator. When I have my controlling python program print the output from the sub-process, I don't see any color. The color from the subprocess is lost when I read from it and print to screen.
print(subp.stdout.readline()) | ANSI color lost when using python subprocess | 1.2 | 0 | 0 | 2,656 |
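The detection described in the answer can be observed directly: a child Python process reports that its stdout is not a tty when captured through a pipe, which is exactly why color-aware programs suppress their ANSI escape codes under subprocess.

```python
import subprocess
import sys

# Ask a child interpreter whether its stdout looks like a terminal.
# Captured through a pipe, it does not -- so color-capable programs
# connected this way drop their ANSI escape codes.
child = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.stdout.isatty())"],
    stdout=subprocess.PIPE,
)
print(child.stdout.decode().strip())  # -> False
```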
42,590,945 | 2017-03-04T01:12:00.000 | 0 | 0 | 0 | 0 | python,python-2.7,cx-oracle | 42,602,641 | 2 | false | 0 | 1 | Make sure that your Python, cx_Oracle and Oracle Client are all 64-bit or all 32-bit. If one of them is different you can get this error. | 1 | 0 | 0 | ImportError: DLL load failed: %1 is not a valid Win32 application. I have researched for 2 days but unable to resolve. Need help!! | ImportError when importing cx_Oracle python | 0 | 0 | 0 | 99 |
42,592,803 | 2017-03-04T06:12:00.000 | 0 | 0 | 1 | 0 | python,dictionary,data-structures,heap | 42,592,884 | 3 | false | 0 | 0 | If your data will not fit in memory, you need to be particularly mindful of how it's stored. Is it in a database, a flat file, a csv file, JSON, or what?
If it is in a "rectangular" file format, you might do well to simply use a standard *nix sorting utility, and then just read in the first k lines. | 1 | 2 | 1 | I have a very large dictionary with entries of the form {(Tuple) : [int, int]}. For example, dict = {(1.0, 2.1):[2,3], (2.0, 3.1):[1,4],...} that cannot fit in memory.
I'm only interested in the top K values in this dictionary sorted by the first element in each key's value. Is there a data structure that would allow me to keep only the largest K key-value pairs? As an example, I only want 3 values in my dictionary. I can put in the following key-value pairs; (1.0, 2.1):[2,3], (2.0, 3.1):[1,4], (3.1, 4.2):[8,0], (4.3, 4.1):[1,1] and my dictionary would be: (3.1, 4.2):[8,0], (1.0, 2.1):[2,3], (2.0, 3.1):[1,4] (in case of key-value pairs with the same first element, the second element will be checked and the largest key-value pair based on the second element will be kept) | Data structure: Top K ordered dictionary keys by value | 0 | 0 | 0 | 698
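Ranking entries by the first element of each value, with the second element as tie-breaker, and keeping only the K largest is exactly what `heapq.nlargest` does over the dict's items — a sketch using the question's own example:

```python
import heapq

d = {(1.0, 2.1): [2, 3], (2.0, 3.1): [1, 4],
     (3.1, 4.2): [8, 0], (4.3, 4.1): [1, 1]}

# Rank by value: first element, then second element as tie-breaker.
top3 = heapq.nlargest(3, d.items(), key=lambda kv: (kv[1][0], kv[1][1]))
print(top3)
# -> [((3.1, 4.2), [8, 0]), ((1.0, 2.1), [2, 3]), ((2.0, 3.1), [1, 4])]
```

For data too large for memory, the same idea works streaming: push each incoming pair onto a min-heap and pop the smallest whenever its size exceeds K, so only K pairs are ever held at once.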
42,596,399 | 2017-03-04T13:09:00.000 | 0 | 0 | 1 | 1 | python-3.6 | 42,596,435 | 1 | false | 0 | 0 | You need to add the location of the python.exe to your $PATH variable. This depends on your installation location. In my case it is C:\Anaconda3. The default is C:\Python as far as I know.
To edit your path variable you can do the following thing. Go to your Control Panel then search for system. You should see something like: "Edit the system environment variables". Click on this and then click on environment variables in the panel that opened. There you have a list of system variables. You should now look for the Path variable. Now click edit and add the Python path at the end. Make sure that you added a semicolon before adding the path to not mess with your previous configuration. | 1 | 1 | 0 | I am trying to install the pyperclip module for Python 3.6 on Windows (32 bit). I have looked at various documentations (Python documentation, pypi.python.org and online courses) and they all said the same thing.
1) Install and update pip
I downloaded get-pip.py from python.org and it ran immediately, so pip should be updated.
2) Use the command python -m pip install SomePackage
Okay here is where I'm having issues. Everywhere says to run this in the command line, or doesn't specify a place to run it.
I ran this in the command prompt: python -m pip install pyperclip. But I got the error message "'python' is not recognized as an internal or external command, operable program or batch file."
If I run it in Python 3.6, it says pip is an invalid syntax. Running it in IDLE gives me the same message.
I have no idea where else to run it. I have the pyperclip module in my python folder. It looks like a really simple problem, but I have been stuck on this for ages! | Installing Python modules | 0 | 0 | 0 | 614 |
42,597,497 | 2017-03-04T15:05:00.000 | 0 | 0 | 0 | 1 | twitter-bootstrap,python-2.7,google-app-engine,google-cloud-datastore,app-engine-ndb | 43,709,266 | 1 | false | 1 | 0 | When rendering the page just check if the assessment exists by retrieving it from the Consult (I imagine you store the assessment key inside the Consult).
That's it | 1 | 0 | 0 | I have a page View-Consult with 4 bootstrap tabs.
There are two entities retrieved from the Datastore on this page (Consult and Assessment). The consult is created first and the assessment later (by a different user).
Note: Consults have a property called "consult_status" that is PENDING before the Assessment is added, and COMPLETED after. This may be useful as a condition.
The properties from the Consult populate the first 3 bootstrap tabs. The Assessment properties are displayed in the 4th tab.
There will be a period where the Assessment has not been completed and the View-Consult page will need to display a message in the 4th tab saying "This consult is currently awaiting assessment. You will be notified by email when it is complete."
How would I create and test for this condition and render the appropriate output inside tab 4, depending if the Assessment is complete or not.
Note also: The Consult and Assessment have the same ID, so perhaps a better condition would be to check if there exists an Assessment with the current Consult ID. If not, display the message "awaiting assessment". | Appengine Python - How to filter tab content depending if entity has been created | 0 | 0 | 0 | 29
42,597,646 | 2017-03-04T15:20:00.000 | 3 | 0 | 1 | 0 | ipython | 43,457,170 | 1 | true | 0 | 0 | To run configuration commands at iPython startup locate the following line in your .ipython/profile_default/ipython_config.py
c.InteractiveShellApp.exec_lines = ['%alias echo show']
You can also have a file loaded with a bunch of iPython commands using c.InteractiveShellApp.exec_files = []
Hope this helps | 1 | 1 | 0 | I use IPython 5.1.0 with Python 3.5.2 at the command line on OSX 10.11.6.
I would like to define bookmarks and command aliases to be loaded when I start the IPython interactive shell.
I thought I would be able to run magic commands in the ipython_config.py configuration file (such as %bookmark FOLDERS /Users/tbaker/folders), but this does not work -- with or without the leading percent sign %.
Ideally, I would get these bookmarks from an external file of bookmark lines that is supposed to be shared between ipython and bash (for which I wrote a bookmark function with the same functionality as IPython's bookmark).
Nothing I have tried has worked, including an attempt to enclose the magic commands in various wrappers, e.g., get_ipython().magic(.... Indeed, aliases and bookmarks appear to be designed for definition on the fly, at the IPython prompt, and not batch-loaded at startup time. Does anyone see a way either to define bookmarks in the IPython startup configuration files, or to have IPython fetch the bookmark definitions from another file on startup? | Defining bookmarks and aliases in ``ipython_config.py`` configuration file | 1.2 | 0 | 0 | 324 |
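The `exec_lines` approach from the answer can batch-load the bookmark definitions from `ipython_config.py` — a sketch (the bookmark names, paths and startup-file name are illustrative):

```python
# in ~/.ipython/profile_default/ipython_config.py
c = get_config()

# Run these magics at every interactive startup:
c.InteractiveShellApp.exec_lines = [
    '%bookmark FOLDERS /Users/tbaker/folders',
    '%alias echo show',
]

# Or keep the commands in a separate file of IPython commands:
c.InteractiveShellApp.exec_files = ['/Users/tbaker/.ipython_startup.ipy']
```

This runs the magics without the leading-`%` issue the question hit, because `exec_lines` entries are executed by the interactive shell itself rather than evaluated as plain Python in the config file.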
42,600,500 | 2017-03-04T19:38:00.000 | 0 | 0 | 1 | 1 | python,macos,exe,software-distribution,dmg | 46,517,871 | 1 | false | 0 | 0 | This is a pretty broad question. The "best way" to distribute any software is to use a software distribution/systems management suite. It takes time to implement but once done the time savings is enormous. There are several suites that will do this; I believe that AirWatch will work as will ThingWorx, Helix Device Cloud, and others. These solutions can do what is called a "required distribution" which will simple force the software down. There's not even a click; the software is just there as of the date you specify.
Now, if you don't want to invest time in a solution like this, then use the MSI format for Windows. That is a superior way to install software - the user double-clicks on the software and, if you've done the MSI a certain way, the install happens. There's a user decision to install which some won't take advantage of. Again, that only works on Windows, sorry. I'm not versed in Mac installations but I'm sure that there's a way to build installers for Mac as well.
If your users get scared during a normal software install, well, you've got a different problem. If they're familiar with computers at all, they've seen software install before and have most likely done it. | 1 | 1 | 0 | What could be the best way to distribute a python application to both windows and mac user without scaring them away during the installation process?
I'm writing software which will be of help to my university's students. This software will be used by students of various disciplines, many of whom have little to no programming background.
It would be best if there are some one click magic happens solution to the installation.
How should I go about doing them? Please advice! | User friendly way to distribute Python Application | 0 | 0 | 0 | 75 |
42,602,039 | 2017-03-04T22:17:00.000 | 2 | 0 | 0 | 0 | qt,python-3.x,user-interface,pyqt5,qmediaplayer | 42,602,957 | 1 | true | 0 | 1 | Your question is a bit broad, but in general this is what you should do:
Create a QProgressBar
Create your QMediaPlayer
Listen to the currentMediaChanged() signal of your QMediaPlayer module; in your handler fetch the duration of the current media, divide by 1000 to get the length in seconds, set this as the maximum value of your QProgressBar; reset the progressbar.
Listen to the positionChanged() signal of your QMediaPlayer; in the handler fetch the current position; again divide by 1000 and set the value in your QProgressBar with setValue.
This should give you a progressbar that is automatically updated by the QMediaPlayer.
You may wish to disable the text in the progressbar as a percentage isn't really useful for a song playback. Unfortunately there doesn't seem to be an easy way to print the time in the progressbar. | 1 | 1 | 0 | I want to know how to get a progress-bar/seeker for the QMediaPlayer module on PyQt5... So on my music player application I can have a progress bar for the songs. Thank You in Advance | Connect QProgressBar or QSlider to QMediaPlayer for song progress | 1.2 | 0 | 0 | 1,095 |
42,602,059 | 2017-03-04T22:20:00.000 | 4 | 0 | 1 | 0 | python,python-2.7,python-3.x,remote-access | 57,720,826 | 4 | false | 0 | 0 | Depending on your Python interpreter's location in the PATH environment variable, you need to run either py -m pip install win32api
or
Use python -m pip install win32api
In my case, py -m pip install win32api worked but python -m pip install win32api didn't | 1 | 21 | 0 | I have tried using pip -m install win32api, but I still get the error "can't open file 'pip': [Errno 2] No such file or directory"
Can anyone help me on this?
Note: I have renamed the python.exe file as python2 and python3, since I have both versions installed on my pc. | pip install gives me this error "can't open file 'pip': [Errno 2] No such file or directory" | 0.197375 | 0 | 0 | 76,715 |
42,602,126 | 2017-03-04T22:28:00.000 | 0 | 0 | 1 | 1 | python,cygwin,anaconda,python-module,pythonpath | 43,129,047 | 1 | false | 0 | 0 | Python builds sys.path at startup using the site.py available in the PYTHONHOME directory. I appended an addsitedir() call to that file. That worked for me. If there is a space in the path, use double quotes around the path. | 1 | 0 | 0 | This might be trivial, but I can't identify the reason for not being able to import user-defined python modules into my python environment. I use an Anaconda installation of python in cygwin. I have made entries in bash_profile to append the module directory path to PYTHONPATH in this format.
export PYTHONPATH=$PYTHONPATH:"<dirpath>"
dirpath starts with /cygdrive/c/Users/
I have an __init__.py file available in the module directory to identify it is a python package.
Kindly provide your inputs. Thanks. | How to import user-defined python modules in cygwin? | 0 | 0 | 0 | 378 |
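The same effect described in the answer can be had without editing site.py itself, by calling `site.addsitedir` at runtime (or from a `sitecustomize`/`usercustomize` module). A sketch, using a temporary directory as the stand-in for the module directory:

```python
import site
import sys
import tempfile

# Directory whose packages we want importable (a temp dir here, for demo).
extra = tempfile.mkdtemp()

site.addsitedir(extra)    # appends to sys.path and honours any .pth files

print(extra in sys.path)  # -> True
```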
42,603,373 | 2017-03-05T01:22:00.000 | 0 | 0 | 1 | 1 | python,linux,encoding,utf-8 | 42,617,322 | 2 | false | 0 | 0 | I think you're overdoing it. Python comes with batteries included; just use them.
A correctly configured terminal session has the LANG environment variable set; it describes which encoding the terminal expects as output from programs running in this session.
Python interpreter detects this setting and sets sys.stdout.encoding according to it. It then uses that encoding to encode any Unicode output into a correct byte sequence. (If you're sending a byte sequence, you're on your own, and likely know what you're doing; maybe you're sending a binary stream, not text at all.)
So, if you output your text as Unicode, it must appear correctly automatically, provided that all the characters can be encoded.
If you need a finer control, pick the output encoding, encode with your own error handling, and output the bytes.
You're not in a business of changing the terminal session's settings, unless you're writing a tool specifically to do that. The user has configured the session; your program has to adapt to this configuration, not alter it, if it's a well-behaved program. | 2 | 1 | 0 | I'm writing a script in python that generates output that contains utf-8 characters, and even though most linux terminals use utf-8 by default, I'm writing the code presuming it isn't in utf-8 (in case the user changed it, for some reason).
From what I tested, os.environ["LANG"] = "en_US.utf-8" does not change the system environment variable, it only changes in the data structure inside Python. | How do I change the environment variable LANG from within a Python script? | 0 | 0 | 0 | 2,637 |
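The observation in the question is half right: assigning to os.environ does not change the parent shell's session, but it does change the environment that child processes inherit — which is usually what matters. A quick demonstration:

```python
import os
import subprocess
import sys

os.environ["LANG"] = "en_US.utf-8"   # visible to this process and its children

# Spawn a child interpreter and read LANG from its environment.
child = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ.get('LANG'))"],
    stdout=subprocess.PIPE,
)
print(child.stdout.decode().strip())  # -> en_US.utf-8
```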
42,603,373 | 2017-03-05T01:22:00.000 | 0 | 0 | 1 | 1 | python,linux,encoding,utf-8 | 42,617,801 | 2 | false | 0 | 0 | It is not clear what you want to see happen when you change the LANG environment. If you want to test your Python code with other character encodings, you will need to set LANG before starting the Python code, as I believe LANG is read when Python first starts.
There might(?) be a function call you can call to change the LANG after Python has started, however if this is for testing purposes I recommend setting it before running the Python code.
An even better approach however would be to change the LANG in your terminal program. So that it has the correct encoding. Although almost everyone should be using UTF8, so I am not really sure you need to test non-UTF8 anymore. | 2 | 1 | 0 | I'm writing a script in python that generates output that contains utf-8 characters, and even though most linux terminals use utf-8 by default, I'm writing the code presuming it isn't in utf-8 (in case the user changed it, for some reason).
From what I tested, os.environ["LANG"] = "en_US.utf-8" does not change the system environment variable, it only changes in the data structure inside Python. | How do I change the environment variable LANG from within a Python script? | 0 | 0 | 0 | 2,637 |
42,605,931 | 2017-03-05T08:00:00.000 | 0 | 0 | 1 | 0 | python,c++,boost,memory-leaks,shared-libraries | 42,615,864 | 1 | false | 0 | 0 | I have finally found the segfault.
The header I was using for compiling the program was different from the one used by the library. One class member was not declared, so no memory was allocated for that member. | 1 | 0 | 0 | I'm developing the backend of an app and I am trying to wrap my C++ code in Python. I've used Boost Python3 to link C++ to Python. I'm able to get a shared library and call it from Python. For the moment, everything is working.
The problem arises when I'm trying to export this library. I would like to be able to use it from another location or computer without recompiling the c++ code.
To try this library, I'm just moving the library to another folder with its dependencies and checking with ldd that all the dependencies are resolved (no problem there).
Then, I'm trying to call some object from python3. At the beginning, I'm able to run many functions, but if I quit and relaunch python3, I start to have some segmentation fault, memory corruption, ... As an example: * Error in `python3': free(): invalid next size (normal): 0x0000000001ebeb50 *
I've tried to use valgrind to find any memory leaks. My program in c++ doesn't have any memory leak. When I try valgrind with my python code, I don't have any leaks for the library located in its original folder. However, after having moved the library, I start to have some leaks as:
Invalid write of size 4
==22695== at 0x6DCA0F9: Test::Test(std::string, std::string, std::string, int) (maintests.cpp:71)
==22695== by 0x6933E5B: boost::python::objects::value_holder<Test>::value_holder(_object*) (value_holder.hpp:137)
==22695== by 0x6934D8D: boost::python::objects::make_holder<0>::apply<boost::python::objects::value_holder<ritmo::Test>, boost::mpl::joint_view<boost::python::detail::drop1<boost::python::detail::type_list<boost::python::optional<std::string, std::string, std::string, int, mpl_::void_, mpl_::void_, mpl_::void_, mpl_::void_, mpl_::void_, mpl_::void_, mpl_::void_, mpl_::void_, mpl_::void_, mpl_::void_, mpl_::void_>, mpl_::void_, mpl_::void_, mpl_::void_, mpl_::void_, mpl_::void_, mpl_::void_, mpl_::void_, mpl_::void_, mpl_::void_, mpl_::void_, mpl_::void_, mpl_::void_, mpl_::void_, mpl_::void_, mpl_::void_, mpl_::void_> >, boost::python::optional<std::string, std::string, std::string, int, mpl_::void_, mpl_::void_, mpl_::void_, mpl_::void_, mpl_::void_, mpl_::void_, mpl_::void_, mpl_::void_, mpl_::void_, mpl_::void_, mpl_::void_> > >::execute(_object*) (make_holder.hpp:94)
==22695== by 0x693924E: _object* boost::python::detail::invoke<int, void (*)(_object*), boost::python::arg_from_python<_object*> >(boost::python::detail::invoke_tag_<true, false>, int const&, void (*&)(_object*), boost::python::arg_from_python<_object*>&) (invoke.hpp:81)
==22695== by 0x6936942: boost::python::detail::caller_arity<1u>::impl<void (*)(_object*), boost::python::default_call_policies, boost::mpl::vector2<void, _object*> >::operator()(_object*, _object*) (caller.hpp:223)
==22695== by 0x6935D88: boost::python::objects::caller_py_function_impl<boost::python::detail::caller<void (*)(_object*), boost::python::default_call_policies, boost::mpl::vector2<void, _object*> > >::operator()(_object*, _object*) (py_function.hpp:38)
==22695== by 0x71CE139: boost::python::objects::function::call(_object*, _object*) const (in /usr/lib/x86_64-linux-gnu/libboost_python-py34.so.1.54.0)
==22695== by 0x71CE4A7: ??? (in /usr/lib/x86_64-linux-gnu/libboost_python-py34.so.1.54.0)
==22695== by 0x71D8742: boost::python::handle_exception_impl(boost::function0<void>) (in /usr/lib/x86_64-linux-gnu/libboost_python-py34.so.1.54.0)
==22695== by 0x71CCDB2: ??? (in /usr/lib/x86_64-linux-gnu/libboost_python-py34.so.1.54.0)
==22695== by 0x53493C: ??? (in /usr/bin/python3.4)
==22695== by 0x4F14F9: PyObject_Call (in /usr/bin/python3.4)
==22695== Address 0x6333fe0 is 16 bytes after a block of size 32 in arena "client"
I'm struggling with this issue. Any idea or tips will be more than welcome.
Thank you | Memory leak in c++ python wrapper | 0 | 0 | 0 | 742 |
42,606,584 | 2017-03-05T09:22:00.000 | 0 | 0 | 0 | 0 | python,opencv | 42,611,673 | 2 | false | 0 | 0 | There is no way that your camera or software will be able to look at a flat image and decide what is foreground and what is background. Is that parrot sitting on a perch and staring at the camera or is it a picture of a parrot on the wall?
In the past I've made a collection of frames from the camera and formed a reference image by taking the median value of every pixel. Hopefully, this is now an image that can be compared with every subsequent frame, and subtracting the two can be used to isolate where change has occurred. The difference isn't what you want but can be turned into a mask that will select what you want from the frame in question. | 1 | 0 | 1 | I'm trying to create a program that removes the background and gets the foreground in color. For example, if a face appears in front of my webcam I need to get the face only. I tried using BackgroundSubtractorMOG in opencv 3. But that didn't solve my problem. Can anyone tell me where to look or what to use. I'm a newbie in opencv.
P.S. I use opencv3 in python | Get the Foreground in opencv | 0 | 0 | 0 | 713
42,607,012 | 2017-03-05T10:07:00.000 | 0 | 0 | 1 | 0 | python,file,concurrency,atomic | 42,607,066 | 1 | false | 0 | 0 | You can try opening the file with the "a+" option; it will append data if the file exists or create a new one if it is not there. | 1 | 0 | 0 | I have a Python script that must write a new line (containing a number) to a file each second.
I have another program that will regularly need to archive that file, so it will probably move the file to another location (Python can for example re-create the file if it doesn't exist anymore), but any other solution is possible (the file can be copied, left in place and emptied f.e.).
What would be the proper way to ensure that everything happens atomically, i.e. that no data is lost? | Proper way to regularly write to file which can be cleared by another process? | 0 | 0 | 0 | 34 |
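A sketch of one atomic-handoff pattern for the question above: the writer re-opens the file for every append, and the archiver hands the file off with a single atomic rename, so a line written at the wrong moment simply lands in the archive instead of being lost (the file names here are made up):

```python
import os

LOG = "counter.log"  # hypothetical file name

def append_line(value):
    # Re-open for every write; "a" mode re-creates the file if it was archived away.
    with open(LOG, "a") as f:
        f.write("%d\n" % value)

def archive(dest):
    # os.replace is an atomic rename on the same filesystem, so the writer
    # either appends to the old file (now the archive) or starts a fresh one.
    try:
        os.replace(LOG, dest)
    except FileNotFoundError:
        pass  # nothing has been written since the last archive
```

Because `os.replace` is atomic, a concurrent append either goes into the file just before it becomes the archive, or into a freshly created file — no data is lost either way.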
42,607,360 | 2017-03-05T10:42:00.000 | 0 | 0 | 0 | 0 | python-2.7,ssl,ssl-certificate,embed | 42,621,585 | 1 | false | 0 | 0 | Apparently there isn't any answer to this question... so I figured out something else. I saved the certificate in a variable in my pythonic code :P and then before connecting to the server, the client saves the certificate to a temp file, and at the end deletes it. | 1 | 1 | 0 | I am coding a program (server-client) in python 2.7 that exchanges data through sockets. I use SSL to secure the connection. But here is the thing. I want to make the client and the server executables with pyinstaller, and I want the SSL certificate and the key to be "hidden" somewhere inside the python code... so I can have only ONE file, and not several. I tried to load the certificate through a variable that contained the certificate, but apparently the certificate needs to be loaded through a file. What options do I have? | Embed SSL certificates | 0 | 0 | 1 | 228
42,608,100 | 2017-03-05T11:58:00.000 | 0 | 0 | 0 | 0 | python,python-2.7,selenium,click,element | 42,665,717 | 1 | false | 0 | 0 | Technically speaking, you could set an explicit wait targeting the "presence_of_element_located" or "visibility_of_element_located" condition. However, keep in mind that the action fired by the click on the element could be bound in many ways, and some of them could take place after the DOM is ready (when the complete DOM is loaded although not yet completely rendered).
Think at these scenarios:
The element has an "onClick" attribute which fires a javascript function: in this case, the action could take place before the complete load (but only if it does not concern elements that are not rendered yet)
The element is an anchor with an "href" attribute with a plain url inside it: in this case I think it could be pretty safe to click before the complete load
The element has an action binded through javascript at some point: in this case you should check the js code to make sure the element has the action already binded when you want to click it. | 1 | 0 | 0 | Can you click on an element, while the page is not fully load but the element is already loaded/visible? If yes then how? If no then is there any other solution? | Python 2.7 Selenium, Click on button while the page isn't fully load | 0 | 0 | 1 | 122 |
42,608,362 | 2017-03-05T12:28:00.000 | 0 | 1 | 0 | 1 | python,bash,nginx,webserver,cgi | 42,608,590 | 2 | false | 0 | 0 | Why not try passing the address/location of the file you want to download as an argument to the class, and then use that in the <a href> tag to turn it into a link and implement the download functionality? | 1 | 0 | 0 | I have a simple python webserver but I want to use a CGI script for file download and upload according to the client request. But I couldn't find any way of adjusting the CGI except using apache2, nginx, etc. Is there any way to attach a CGI script to my python webserver with a Bash script or in some other way? Can you give me any advice about it? | CGI Script For Python Webserver | 0 | 0 | 0 | 74
42,609,943 | 2017-03-05T15:05:00.000 | 39 | 1 | 1 | 0 | python,pip | 68,885,989 | 3 | false | 0 | 0 | For those who don't have time:
If you install your project with an -e flag (e.g. pip install -e mynumpy) and use it in your code (e.g. from mynumpy import some_function), when you make any change to some_function, you should be able to use the updated function without reinstalling it. | 2 | 113 | 0 | When I need to work on one of my pet projects, I simply clone the repository as usual (git clone <url>), edit what I need, run the tests, update the setup.py version, commit, push, build the packages and upload them to PyPI.
What is the advantage of using pip install -e? Should I be using it? How would it improve my workflow? | What is the use case for `pip install -e`? | 1 | 0 | 0 | 71,689 |
42,609,943 | 2017-03-05T15:05:00.000 | 89 | 1 | 1 | 0 | python,pip | 59,667,164 | 3 | true | 0 | 0 | I find pip install -e extremely useful when simultaneously developing a product and a dependency, which I do a lot.
Example:
You build websites using Django for numerous clients, and have also developed an in-house Django app called locations which you reuse across many projects, so you make it available on pip and version it.
When you work on a project, you install the requirements as usual, which installs locations into site-packages.
But you soon discover that locations could do with some improvements.
So you grab a copy of the locations repository and start making changes. Of course, you need to test these changes in the context of a Django project.
Simply go into your project and type:
pip install -e /path/to/locations/repo
This will overwrite the directory in site-packages with a symbolic link to the locations repository, meaning any changes to code in there will automatically be reflected - just reload the page (so long as you're using the development server).
The symbolic link looks at the current files in the directory, meaning you can switch branches to see changes or try different things etc...
The alternative would be to create a new version, push it to pip, and hope you've not forgotten anything. If you have many such in-house apps, this quickly becomes untenable. | 2 | 113 | 0 | When I need to work on one of my pet projects, I simply clone the repository as usual (git clone <url>), edit what I need, run the tests, update the setup.py version, commit, push, build the packages and upload them to PyPI.
What is the advantage of using pip install -e? Should I be using it? How would it improve my workflow? | What is the use case for `pip install -e`? | 1.2 | 0 | 0 | 71,689 |
42,610,000 | 2017-03-05T15:10:00.000 | 1 | 0 | 0 | 0 | python,machine-learning,scikit-learn | 42,610,453 | 1 | false | 0 | 0 | You can set random_state parameter to some constant value to reproduce data splits. On the other hand, it's generally a good idea to test exactly what you are trying to know - i.e. run your training at least twice with different randoms states and compare the results. If they differ a lot it's a sign that something is wrong and your solution is not reliable. | 1 | 0 | 1 | According to the resources online "train_test_split" function from sklearn.cross_validation module returns data in a random state.
Does this mean if I train a model with the same data twice, I am getting two different models since the training data points used in the learning process is different in each case?
In practice can the accuracy of such two models differ a lot? Is that a possible scenario? | model evaluation with "train_test_split" not static? | 0.197375 | 0 | 0 | 86 |
42,610,590 | 2017-03-05T16:02:00.000 | 6 | 0 | 0 | 0 | python,node.js,scikit-learn,child-process | 62,075,227 | 1 | true | 1 | 0 | My recommendation: write a simple python web service (I personally recommend flask) and deploy your ML model. Then you can easily send requests to your python web service from your node back-end. You wouldn't have a problem with the initial model loading: it is done once at app startup, and then you're good to go.
DO NOT GO FOR SCRIPT EXECUTIONS AND CHILD PROCESSES!!! I just wrote it in bold-italic all caps to be sure you wouldn't do that. Believe me... it can potentially go very, very south, with all those zombie processes upon job termination and other stuff. Let's just simply say it's not the standard way to do that.
You need to think about multi-request handling. I think flask now has it by default
I am just giving you general hints because your problem has been generally introduced. | 1 | 11 | 1 | I have a web server using NodeJS - Express and I have a Scikit-Learn (machine learning) model pickled (dumped) in the same machine.
What I need is to demonstrate the model by sending/receiving data from it to the server. I want to load the model on startup of the web server and keep "listening" for data inputs. When it receives data, it executes a prediction and sends it back.
I am relatively new to Python. From what I've seen I could use a "Child Process" to execute that. I also saw some modules that run Python script from Node.
The problem is I want to load the model once and let it be for as long as the server is on. I don't want to keep loading the model every time due to its size. What is the best way to perform that?
The idea is running everything on an AWS machine.
Thank you in advance. | Sklearn Model (Python) with NodeJS (Express): how to connect both? | 1.2 | 0 | 0 | 3,643 |
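A minimal sketch of the "load once, then serve" idea from the answer above, using only the standard library so the shape is visible without Flask; the lambda below is a stand-in for a pickled sklearn model's predict method, which is an assumption, not the asker's actual model:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Loaded once at startup; in a real service this would be
# pickle.load(open("model.pkl", "rb")) and MODEL.predict(features).
MODEL = lambda xs: [x * 2 for x in xs]  # stand-in predictor

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        features = json.loads(body)["features"]
        payload = json.dumps({"prediction": MODEL(features)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):
        pass  # keep stdout quiet

def serve(port):
    server = HTTPServer(("127.0.0.1", port), PredictHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

The Node back-end then just POSTs JSON to this process; the model stays in memory for the life of the server, so nothing is re-loaded per request.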
42,612,439 | 2017-03-05T18:43:00.000 | 0 | 0 | 0 | 0 | python,math | 42,655,243 | 1 | false | 0 | 0 | So the solution I have is :
linear_closeness = 1 - (difference / max_deviation)
exponential_closeness = 10^linear_closeness / 10
This is suitable for me. I am open to better solutions. | 1 | 0 | 1 | I have two, time value series (using pandas) and would like to represent the "closeness" of the last value in each series in regards to each other on a logarithmic scale between 0 and 1. 0 being very far away and 1 being the same.
I am not sure how to approach this and any help would be appreciated. | Python - Closeness of two values on a logarithmic scale | 0 | 0 | 0 | 101 |
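The two formulas in the answer above can be wrapped into one helper; note that the exponential variant bottoms out at 0.1 rather than 0 when the difference equals max_deviation:

```python
def closeness(a, b, max_deviation):
    """Map |a - b| onto a 0..1 closeness scale, per the formulas above."""
    difference = abs(a - b)
    linear = 1 - (difference / max_deviation)
    exponential = 10 ** linear / 10  # 1.0 for identical values, 0.1 at max_deviation
    return linear, exponential
```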
42,615,538 | 2017-03-05T23:56:00.000 | 1 | 0 | 0 | 0 | django,python-3.x | 42,618,333 | 1 | false | 1 | 0 | do I need to deploy it with all the Python 'come-with' modules
Never do that. It might conflict with the dependencies on the server. Instead issue the following command to create a dependency file (requirements.txt).
pip freeze > requirements.txt (issue this command where manage.py is located)
On the server create a new virtual environment. Now copy django project to the server (you can do this using git clone or just plain old Filezilla). Activate virtual environment. Then change you current working directory to the where manage.py is located. to install all the dependencies issue the following command.
pip install -r requirements.txt
This will install the required dependencies on on server. | 1 | 0 | 0 | Good day.
I'm a newbie to Django and I have a slight confusion:
When deploying my Django app, do I need to deploy it with all the Python 'come-with' modules, or the hosts already have them installed.
Also, I installed PIL for image manipulation. Would they also have it installed or i have to find a way to install it on their servers. Thanks in advance | confusion in deploying module's with django | 0.197375 | 0 | 0 | 28 |
42,616,958 | 2017-03-06T03:13:00.000 | 1 | 0 | 1 | 0 | python | 42,618,800 | 1 | true | 0 | 0 | Save it in something next to or related to __file__, which is the path to the file the module was loaded from. I believe in some cases it can be a relative path, so you might want to store the memos in that path directly, or turn it into an absolute path or something. | 1 | 0 | 0 | I want to write a decorator that does persistent memoization (memoizing to disk). Since I want to use this decorator for many functions, I have to decide where to save memoizing data for these functions. I googled around and found two solutions:
let the functions decide where to store the memoizing data
automatically determine where to store the data by function names
However, in these two solutions, it is necessary for every function to "know" each other in case of colliding of names (or destinations), which is a smell of bad design.
Thus, my question is, how to avoid such collidings? | destination of python persistent memoization | 1.2 | 0 | 0 | 74 |
42,617,816 | 2017-03-06T04:57:00.000 | 0 | 1 | 0 | 0 | python,c++ | 42,618,540 | 1 | false | 0 | 0 | If you are using Linux, & will release bash session and in this case, CollorFlow and fileToXex.py will run in different bash sessions.
At the same time, composition ./ColorFollow | python fileToHex.py looks interesting, cause you redirect stdout of ColorFollow to fileToHex.py stdin - it can syncronize scripts by printing some code string upon exit, then reading it by fileToHex.py and exit as well.
I would create some empty file like /var/run/ColorFollow.flag and write there 1 when one of processes exit. Not a pipe - cause we do not care which process will start first. So, if next loop step of ColorFollow sees 1 in the file, it deletes it and exits (means that fileToHex already exited). The same - for fileToHex - check flag file each loop step and exit if it exists, after deleting flag file. | 1 | 0 | 0 | So this one is a doozie, and a little too specific to find an answer online.
I am writing to a file in C++ and reading that file in Python at the same time to move a robot. Or trying to.
When I try running both programs at the same time, the C++ one runs first and then the Python one runs.
Here's the command I use:
./ColorFollow & python fileToHex.py
This happens even if I switch the order of commands.
Even if I run them in different terminals (which is the same thing, just covering all bases).
Both the Python and C++ code read / write in 'infinite' loops, so these two should run until I say stop.
The code works fine; when the Python script finally runs the robot moves as intended. It's just that the code doesn't run at the same time.
Is there a way to make this happen, or is this impossible?
If you need more information, lemme know, but the code is pretty much what you'd expect it to be. | Simultaneous Python and C++ run with read and write files | 0 | 0 | 0 | 281 |
42,617,832 | 2017-03-06T04:58:00.000 | 0 | 0 | 0 | 0 | python,python-3.x,opengl,pygame | 47,053,412 | 2 | false | 0 | 1 | Do not forget: Display objects inherit from Surfaces. So you can blit the screen to another Surface and scale it down! Create a subprocess using the corresponding module, initialize Display with the dummy video driver (as seen in the headless_no_windows_needed.py Pygame example), send the Surface converted to a simple list using PixelArray, through IPC, recieve it in the main process, and blit it to a Display without OPENGL flag. You can also use FBOs. | 1 | 1 | 0 | I'm making a lo-fi, low-resolution (1024x576) game and I was hoping I could get away with just doing supersampling (render the game at 2048x1152 then scale down) instead of proper anti-aliasing.
Trouble is, I don't see any way to render the OpenGL commands to a memory surface instead of the display surface. Is there a way? | Using Pygame + PyOpenGL, draw to a Surface instead of straight to the display? | 0 | 0 | 0 | 1,021 |
42,619,331 | 2017-03-06T06:57:00.000 | 1 | 0 | 0 | 1 | php,python,python-2.7,amazon-web-services,ubuntu | 42,709,943 | 1 | true | 0 | 0 | Finally i resolved the issue.
First upgrade the pip and then pip install --upgrade --user awsebcli. | 1 | 3 | 0 | I am trying to install AWS elasticbeanstalk command line tool in my ubuntu machine
Installed with pip install --upgrade --user awsebcli
But when i try to get the eb version with eb --version i got the following error
Traceback (most recent call last): File
"/home/shamon/.local/bin/eb", line 6, in
from pkg_resources import load_entry_point File "/usr/lib/python2.7/dist-packages/pkg_resources/init.py", line
2927, in
@_call_aside File "/usr/lib/python2.7/dist-packages/pkg_resources/init.py", line
2913, in _call_aside
f(*args, **kwargs) File "/usr/lib/python2.7/dist-packages/pkg_resources/init.py", line
2940, in _initialize_master_working_set
working_set = WorkingSet._build_master() File "/usr/lib/python2.7/dist-packages/pkg_resources/init.py", line
635, in _build_master
ws.require(requires) File "/usr/lib/python2.7/dist-packages/pkg_resources/init.py", line
943, in require
needed = self.resolve(parse_requirements(requirements)) File "/usr/lib/python2.7/dist-packages/pkg_resources/init.py", line
829, in resolve
raise DistributionNotFound(req, requirers) pkg_resources.DistributionNotFound: The 'awsebcli==3.10.0'
distribution was not found and is required by the application | Installing AWS elasticbeanstalk command line tool in ubuntu:error The 'awsebcli==3.10.0' distribution was not found and is required by the application | 1.2 | 0 | 0 | 418 |
42,619,597 | 2017-03-06T07:13:00.000 | 0 | 1 | 1 | 0 | python,asynchronous,localization,internationalization,gettext | 42,623,529 | 1 | true | 1 | 0 | Okay, we solved the problem using an approach that provides us with context for all inner function calls. | 1 | 2 | 0 | We have an asynchronous python application (telegram bot), and we want to add localization: the user selects a language when he starts a dialog with the bot, then the bot translates all messages for him.
Django allows changing the language for every request; that works normally because Django creates a separate process for each request. But it will not work in an async bot — there is only one process and we should handle multiple users with different languages inside of it.
We can do a simple thing — store the user's preferences in a database, load the preferred language from the DB with each incoming message, and then pass these settings to all inner functions — but it is quite complicated, because our bot is complex and there can be more than a dozen nested function calls.
How can we implement language switching in an asynchronous application in an elegant way? | Localization for Async Python Telegram bot | 1.2 | 0 | 0 | 703
42,620,942 | 2017-03-06T08:36:00.000 | 0 | 0 | 0 | 0 | python,web2py | 42,701,682 | 1 | false | 1 | 0 | If you want to do it manually, just:
go to your application admin interface
click on the "database admin" button (might be translated to your OS language)
click on db.auth_user
then use the Import/Export feature form at the bottom of the page
That's it. | 1 | 0 | 0 | Basically, I would like to deploy a web2py application with a set of default users already registered/created in the application.
Can this be accomplished by importing a CSV file containing default username/password and other details into the auth_user table?
Any help is welcome.
Thanks. | Creating default set of username/password from a csv file in a web2py application | 0 | 0 | 0 | 183 |
42,625,724 | 2017-03-06T12:30:00.000 | 0 | 0 | 0 | 0 | python,django | 42,626,309 | 2 | false | 1 | 0 | The CSRF token is rotated after the user logs in. If the user goes back in their browser after logging in and submits a form with the old token, then you will get a CSRF error.
If you refresh the page with the form after logging in, then the page will be reloaded with the new token, and CSRF verification should pass. | 1 | 1 | 0 | I am getting CSRF verification failed when I go back after login using the django method.
It returns the sign-in page even after I successfully log in, and it posts the error as CSRF verification failed.
Is there anything I could set in the Django settings? | Django - CSRF verification failed - after successfully logged in | 0 | 0 | 0 | 528
42,625,825 | 2017-03-06T12:35:00.000 | 0 | 0 | 0 | 0 | python,neural-network,cluster-analysis,image-recognition,self-organizing-maps | 47,734,394 | 2 | false | 0 | 0 | I have been wondering if there is any mileage to training a separate supervised neural network for the inputs which map to each node in the SOM. You'd then have separate supervised learning on the subset of the input data mapping to each SOM node. The networks attached to each node would perhaps be smaller and more easily trained than one large network which had to deal with the whole input space. There may also be benefit from including input vectors which map to the adjacent SOM nodes.
Is anyone aware of this being the subject of research? | 2 | 0 | 1 | I am working on a image recognition project in python. I have read in journals that if clustering performed by a self-organizing map (SOM) is input into a supervised neural network the accuracy of image recognition improves as opposed to the supervised network on its own. I have tried this myself by using the SOM to perform clustering and then using the coordinates of the winning neuron after each iteration as input to a multilayer perceptron from the keras. However the accuracy is very poor.
What output of SOM should be used as input to a multilayer perceptron? | How to combine a Self-organising map and a multilayer perceptron in python | 0 | 0 | 0 | 1,430 |
42,625,825 | 2017-03-06T12:35:00.000 | 1 | 0 | 0 | 0 | python,neural-network,cluster-analysis,image-recognition,self-organizing-maps | 42,777,643 | 2 | false | 0 | 0 | Another way to use SOM is for vector quantisation. Rather than using the winning SOM coordinates, use the codebook values of the winning neuron. Not sure which articles you are reading, but I would have said that SOM into MLP will only provide better accuracy in certain cases. Also, you will need to choose parameters like dimensionality and map size wisely.
For image processing, I would have said that Autoencoders or Convolutional Neural Networks (CNNs) are more cutting-edge alternatives to SOM to investigate if you're not determined on the SOM + MLP architecture. | 2 | 0 | 1 | I am working on a image recognition project in python. I have read in journals that if clustering performed by a self-organizing map (SOM) is input into a supervised neural network the accuracy of image recognition improves as opposed to the supervised network on its own. I have tried this myself by using the SOM to perform clustering and then using the coordinates of the winning neuron after each iteration as input to a multilayer perceptron from the keras. However the accuracy is very poor.
What output of SOM should be used as input to a multilayer perceptron? | How to combine a Self-organising map and a multilayer perceptron in python | 0.099668 | 0 | 0 | 1,430 |
42,626,496 | 2017-03-06T13:09:00.000 | 9 | 0 | 0 | 0 | python,python-3.x,button,tkinter | 42,626,791 | 1 | true | 0 | 1 | You can use anchor="w" when defining the button. However, some platforms may ignore that. For example, on older version of OSX the text will always be centered. | 1 | 3 | 0 | By default text in Button is centered but I want it to be aligned to the left so when I type more text than the button can display it wont cut the start of the sentence/word. Thanks for help. | Align text in tkinter Button | 1.2 | 0 | 0 | 8,956 |
42,628,408 | 2017-03-06T14:41:00.000 | 0 | 0 | 1 | 0 | python,file-io,byte | 42,628,789 | 2 | false | 0 | 0 | I'd use mmap for something like this. mmap.mmap() returns an bytearray object you can index. | 1 | 0 | 0 | I have a byte file which consists of integers that take up 4 bytes of space each. I also have function in my python code that is supposed to switch two elements in the file given their indexes.
index 0 is the first 4 byte integer, index 1 is the second batch of 4 bytes and so on.
How would I implement this in my code?
I am stuck on both - decoding and then writing back only the changes. I figured I could use fileinput to process this single long line as a string, but I'm not sure if this is the best way. Also, if I process this as a string, I'm not sure how to decode string back to bytes and then back to string properly.
EDIT: Is struct.unpack and the convenient way of opening file is a way to go? | How do I swap certain bytes in a file with python? | 0 | 0 | 0 | 135 |
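Yes — struct together with opening the file in "r+b" mode is a reasonable way to go; a sketch assuming the file holds little-endian 4-byte signed integers:

```python
import struct

INT_SIZE = 4

def swap_ints(path, i, j):
    """Swap the i-th and j-th 4-byte integers of a binary file in place."""
    with open(path, "r+b") as f:
        f.seek(i * INT_SIZE)
        a = f.read(INT_SIZE)
        f.seek(j * INT_SIZE)
        b = f.read(INT_SIZE)
        # Write back only the two affected slots; the rest of the file is untouched.
        f.seek(i * INT_SIZE)
        f.write(b)
        f.seek(j * INT_SIZE)
        f.write(a)
```

struct.unpack("<i", chunk)[0] decodes a slot to a Python int when the values themselves are needed; for very large files, the mmap suggestion in the answer works the same way with slice assignment.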
42,628,638 | 2017-03-06T14:52:00.000 | 1 | 0 | 1 | 0 | python,amazon-web-services,aws-lambda | 43,783,169 | 1 | false | 0 | 0 | You could use CodeBuild, which will build your code in the AWS Linux environment. Then it won't matter if the environment is windows or linux.
CodeBuild will put the artifacts directly on S3; from there you can directly upload it to Lambda. | 1 | 1 | 0 | I have developed a lambda function which hits an API url and gets the data in Json Format. So I need to use modules/libraries like requests which is not available in the AWS online editor using Python 2.7.
So I need to upload the code in a Zip file. How can we deploy the Lambda function from a local Windows machine to the AWS console, step by step? What are the requirements? | AWS lambda function deployment | 0.197375 | 0 | 1 | 614
42,629,891 | 2017-03-06T15:49:00.000 | 0 | 0 | 1 | 0 | python-3.x,download | 42,630,045 | 2 | false | 0 | 0 | This should not pose a problem, as long as the directory is specified correctly in you PATH. | 2 | 0 | 0 | I am a 65 year old "newbie" and generally use default options when downloading. Python.org wants to download to an obscure directory such as
C:\Users\Facdev\AppData\Local\Programs\Python\Python36-32".
Is there anything wrong with downloading instead to "C:"? | Is it OK to modify the default location for Python-3.6.0.exe to just the C drive? | 0 | 0 | 0 | 49 |
42,629,891 | 2017-03-06T15:49:00.000 | 0 | 0 | 1 | 0 | python-3.x,download | 42,631,188 | 2 | false | 0 | 0 | It is OK to modify the location where you will download and install Python. However, I would advise against doing so if you are unfamiliar with how system environment variables and PATH locations work in Windows.
Why does it matter?
Once you have the python executable (in your case Python-3.6.0.exe) on your system, your computer needs to know where it is in order to execute it! If you place the executable in a location like the main directory on the C: drive your computer does not care. Your computer also does not care if the executable is deep down in the AppData\ directory.
By changing the default behavior you run a risk when troubleshooting unexpected behavior that instructions will not be written for your situation. This is OK as long as you understand what you will need to change in order to apply the troubleshooting techniques listed on documentation, blog posts, and forums.
Because of those factors and this being a new process for you, I recommend sticking to the default. You can change the location later, once you understand what doing so means. Learning to program can be frustrating and trying to grasp managing the software environment only adds to the frustration. Tackle that issue later.
Good luck on your new adventure! I hope you learn to enjoy writing your own programs in python! | 2 | 0 | 0 | I am a 65 year old "newbie" and generally use default options when downloading. Python.org wants to download to an obscure directory such as
C:\Users\Facdev\AppData\Local\Programs\Python\Python36-32".
Is there anything wrong with downloading instead to "C:"? | Is it OK to modify the default location for Python-3.6.0.exe to just the C drive? | 0 | 0 | 0 | 49 |
42,630,191 | 2017-03-06T16:04:00.000 | 7 | 0 | 1 | 1 | python,windows,python-3.x,unicode,console | 42,631,022 | 2 | true | 0 | 0 | Add /k chcp 65001 to the shortcut launching the cmd window. Alternatively, use Python 3.6 which uses Windows Unicode APIs to write to the console and ignores the code page. You do still need font support for what you are printing, however. | 1 | 4 | 0 | I use a python library that prints out a Unicode character to windows console. If I call a function on the library that prints out Unicode character, it will throw an exception 'charmap' codec can't encode characters.
So this is what I tried to solve that error:
Call "chcp 65001" windows console command from python using os.system("chcp 65001") before calling the library function.
I know there are questions similar to this and that is why I tried the above solution. It successfully calls the command on the console and tells me that it activated the code page.
However, the exception showed up again. if I try to run the program again without closing the previous console, the program executes successfully without any exception. Which means the above console command takes effect after the first try.
My question is: is there a way to launch windows console by pre-activating Unicode support so that I don't have to call the program twice. | Launch console window pre-activated with chcp 65001 using python | 1.2 | 0 | 0 | 4,940 |
42,632,411 | 2017-03-06T17:56:00.000 | 4 | 0 | 0 | 0 | python,image,keras,convolution | 42,632,755 | 2 | true | 0 | 0 | No issues with a rectangle image... Everything will work properly as for square images. | 1 | 2 | 1 | Let's say I have a 360px by 240px image. Instead of cropping my (already small) image to 240x240, can I create a convolutional neural network that operates on the full rectangle? Specifically using the Convolution2D layer.
I ask because every paper I've read doing CNNs seems to have square input sizes, so I wonder if what I propose will be OK, and if so, what disadvantages I may run into. Are all the settings (like border_mode='same') going to work the same? | Can Convolution2D work on rectangular images? | 1.2 | 0 | 0 | 2,468 |
42,633,643 | 2017-03-06T19:08:00.000 | -1 | 0 | 1 | 0 | python | 42,633,996 | 2 | false | 0 | 0 | You are not doing anything wrong. This is the behavior of virtualenv: it creates a new python environment with the current system's site-packages.
To avoid that behavior you can use the --no-site-packages flag. It removes the standard site-packages directory from sys.path. | 1 | 0 | 0 | I setup a virtualenv and went ahead and activated the virtualenv and when I did a pip freeze for some reason it gave me a list of all global modules installed. What have I done wrong? | Python - newly created virtualenv gives a list of global modules | -0.099668 | 0 | 0 | 32
42,633,892 | 2017-03-06T19:22:00.000 | 7 | 0 | 0 | 1 | python-3.6,airflow,airflow-scheduler | 42,646,246 | 2 | false | 0 | 0 | You can run a task independently by using -i/-I/-A flags along with the run command.
But yes the design of airflow does not permit running a specific task and all its dependencies.
You can backfill the dag by removing non-related tasks from the DAG for testing purposes | 1 | 12 | 0 | I suspected that
airflow run dag_id task_id execution_date
would run all upstream tasks, but it does not. It will simply fail when it sees that not all dependent tasks are run. How can I run a specific task and all its dependencies? I am guessing this is not possible because of an airflow design decision, but is there a way to get around this? | How to run one airflow task and all its dependencies? | 1 | 0 | 0 | 12,887 |
42,636,462 | 2017-03-06T22:02:00.000 | 11 | 0 | 1 | 0 | python | 42,636,531 | 1 | false | 0 | 0 | In languages like Java, you implement getters and setters so that you do not need to change the class's interface if you want to add additional processing (such as value validation, access control, or logging) when getting or setting an attribute. This is particularly important if your classes will be used by applications you didn't write.
In Python, you can add code to attributes with @property without changing the interface, so you should do that instead. Use regular attribute access to start, and add @property later if you need to add a behavior.
Even then, keep your getter and setter functions simple. Attribute access is expected to be reasonably quick and you should avoid violating that expectation. If you need to do significant work to obtain a value, a method is appropriate, but give it a descriptive name: calculateFoo() rather than getFoo(), for instance. | 1 | 0 | 0 | I am working on a Python project and since the instance variable names can be directly accessed by specifying class.variablename, I was wondering if it is recommended to implement getter functions for the same. I have not declared the variables as private. | Is it a bad practice in Python to avoid implementing getter functions? | 1 | 0 | 0 | 265 |
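A small illustration of the @property point above — callers keep plain attribute access, and validation can be added later without touching them (the class is a made-up example):

```python
class Account:
    def __init__(self, balance):
        self._balance = balance

    @property
    def balance(self):
        # Read access stays a plain attribute lookup: account.balance
        return self._balance

    @balance.setter
    def balance(self, value):
        # Validation added later, with no change to the calling code
        if value < 0:
            raise ValueError("balance cannot be negative")
        self._balance = value
```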
42,637,127 | 2017-03-06T22:54:00.000 | 0 | 0 | 1 | 1 | python,opencv | 42,637,351 | 1 | false | 0 | 0 | You need to update your environment variables.
In the Windows search bar, open the Control Panel.
Click the Advanced system settings link.
Click Environment Variables. In the System variables section, find the PYTHONPATH variable.
Click Edit, and add the absolute path to your Lib directory | 1 | 0 | 0 | I'm having some trouble installing open-cv
I've tried several approaches but only succeeded in installing open-cv by downloading the wheel file from a website which I don't remember and running this command in the command prompt: pip3 install opencv_python-3.2.0-cp35-cp35m-win32.whl;
I can now import cv2 ONLY if I'm in the site-packages directory. If I move out of that folder (in CMD, of course) I won't be able to import cv2 (I get a "no module found" message).
In case I didn't express myself well, these are the commands I run to be able to import cv2 inside the site-packages directory using CMD:
python
import cv2
If I try this in another directory, it doesn't work. The same happens if I create a .py file and try to import cv2 | Installing open-cv on Windows 10 with Python 3.5 | 0 | 0 | 0 | 591 |
42,641,657 | 2017-03-07T06:29:00.000 | 1 | 0 | 0 | 0 | python,mxnet | 43,152,282 | 1 | true | 0 | 0 | Use "y = mod.predict(val_iter, num_batch=1)" instead of "y = mod.predict(val_iter)"; then you get the labels for only one batch. For example, if your batch_size is 10, you will only get 10 labels. | 1 | 1 | 1 | I am using MXNet on the IRIS dataset, which has 4 features and classifies the flowers as 'setosa', 'versicolor', 'virginica'. My training data has 89 rows. My label data is a row vector of 89 columns. I encoded the flower names into the numbers 0, 1, 2, as it seems mx.io.NDArrayIter does not accept a numpy ndarray with string values. Then I tried to predict using
re = mod.predict(test_iter)
I get a result which has the shape 14 * 10.
Why am I getting 10 columns when I have only 3 labels, and how do I map these results to my labels? The result of predict is shown below:
[[0.11760861 0.12082944 0.1207106  0.09154381 0.09155304 0.09155869 0.09154817 0.09155204 0.09154914 0.09154641]
 [0.1176083  0.12082954 0.12071151 0.09154379 0.09155323 0.09155825 0.0915481  0.09155164 0.09154923 0.09154641]
 [0.11760829 0.1208293  0.12071083 0.09154385 0.09155313 0.09155875 0.09154838 0.09155186 0.09154932 0.09154625]
 [0.11760861 0.12082901 0.12071037 0.09154388 0.09155303 0.09155875 0.09154829 0.09155209 0.09154959 0.09154641]
 [0.11760896 0.12082863 0.12070955 0.09154405 0.09155299 0.09155875 0.09154839 0.09155225 0.09154996 0.09154646]
 [0.1176089  0.1208287  0.1207095  0.09154407 0.09155297 0.09155882 0.09154844 0.09155232 0.09154989 0.0915464 ]
 [0.11760896 0.12082864 0.12070941 0.09154408 0.09155297 0.09155882 0.09154844 0.09155234 0.09154993 0.09154642]
 [0.1176088  0.12082874 0.12070983 0.09154399 0.09155302 0.09155872 0.09154837 0.09155215 0.09154984 0.09154641]
 [0.11760852 0.12082904 0.12071032 0.09154394 0.09155304 0.09155876 0.09154835 0.09155209 0.09154959 0.09154631]
 [0.11760963 0.12082832 0.12070873 0.09154428 0.09155257 0.09155893 0.09154856 0.09155177 0.09155051 0.09154671]
 [0.11760966 0.12082829 0.12070868 0.09154429 0.09155258 0.09155892 0.09154858 0.0915518  0.09155052 0.09154672]
 [0.11760949 0.1208282  0.12070852 0.09154446 0.09155259 0.09155893 0.09154854 0.09155205 0.0915506  0.09154666]
 [0.11760952 0.12082817 0.12070853 0.0915444  0.09155261 0.09155891 0.09154853 0.09155206 0.09155057 0.09154668]
 [0.1176096  0.1208283  0.12070892 0.09154423 0.09155267 0.09155882 0.09154859 0.09155172 0.09155044 0.09154676]] | mod.predict gives more columns than expected | 1.2 | 0 | 0 | 158 |
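Assuming each of the 10 columns is a per-class score, mapping each row back to an encoded label is usually done with an argmax over the columns; a sketch with NumPy, reusing the first row of the output above (rounded for brevity):

```python
import numpy as np

# First row of mod.predict's output, rounded to 4 decimals.
probs = np.array([[0.1176, 0.1208, 0.1207, 0.0915, 0.0916,
                   0.0916, 0.0915, 0.0916, 0.0915, 0.0915]])
idx = probs.argmax(axis=1)  # index of the highest-scoring column per row
print(idx)  # -> [1]
# Note: these indices map back to the 3 encoded flower labels only if the
# network's output layer has 3 units; 10 columns suggests it was left at 10.
```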
42,646,540 | 2017-03-07T11:01:00.000 | 0 | 1 | 0 | 0 | python,rabbitmq | 42,646,951 | 1 | false | 0 | 0 | It depends on the specific details so I'll try to address some example scenarios. I assume messages are not all created equal and the last one is the right one (e.g. updated data).
X is fixed, the time delta between first and last message is m seconds 99% of the time, processing a message more than one time is ok
In this case you know you'll get not 1 but n messages, so every time you receive a new (unique) message you queue it up, and when you have received all n you can process them; if the process takes more than m seconds you process right away; if a late message arrives it will be processed again after the m-second timeout with no problem.
X is variable, the time delta between first and last message is m seconds < 50% of the time, processing a message more than one time is ok
Same as previous case, you just need to tune the timeout carefully to achieve acceptable time-to-message-processed time while still keeping re-processing of messages low enough.
processing a message more than once is NOT ok, the last message after a timeout is the good one
I relaxed the requirement to process the last message and made it into the last message in a time interval.
This way you can queue messages up and wait for the last in the allowed time frame, then process it. You'll also need to keep a cache of processed messages so late ones can get recognized and discarded without being re-processed, as that would not be ok.
These are some scenarios, which can of course be combined to achieve more possibilities, but in the end it all depends on the specific details on how messages are generated and delivered and the requirements on which message gets to be processed. | 1 | 1 | 0 | For some reasons, my producer sends at the same time X messages to rabbit MQ. These messages are just notification that say "i update something, do your stuff"
But my consumer shall not call his callback for all messages, only one (and not wait all messages indefinitely). Is it possible to do that? | Wait all message and run callback | 0 | 0 | 0 | 57 |
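A minimal sketch of the "process only the last message in a time window" pattern from the answer above, using a plain queue.Queue as a stand-in for the broker (the function name and messages are illustrative, not pika API):

```python
import queue

def process_last_in_window(q, window=0.2):
    """Block for the first message, then keep draining until the queue
    stays empty for `window` seconds; return only the last message."""
    last = q.get()                        # wait for the burst to start
    while True:
        try:
            last = q.get(timeout=window)  # more messages within the window?
        except queue.Empty:
            break                         # window elapsed: burst is over
    return last

q = queue.Queue()
for i in range(5):                        # producer sends X messages "at once"
    q.put(f"update-{i}")
print(process_last_in_window(q))  # -> update-4
```

With a real broker you would drain with basic_get (or buffer deliveries in the callback) and invoke the expensive work only once per burst.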
42,648,610 | 2017-03-07T12:41:00.000 | 1 | 0 | 1 | 1 | python-3.x,jupyter-notebook | 49,881,600 | 12 | false | 0 | 0 | For me the fix was simply running pip install notebook
Somehow the original Jupyter install got borked along the way.
Error executing Jupyter command 'notebook': [Errno 2] No such file or directory
My Python version is 3.6, and my Jupyter version is 4.3.0
How can I resolve this issue? | Error when executing `jupyter notebook` (No such file or directory) | 0.016665 | 0 | 0 | 125,723 |
42,648,610 | 2017-03-07T12:41:00.000 | 67 | 0 | 1 | 1 | python-3.x,jupyter-notebook | 47,619,339 | 12 | false | 0 | 0 | For me the issue was that the command jupyter notebook changed to jupyter-notebook after installation.
If that doesn't work, try python -m notebook, and if it opens, close it, then
export PATH=$PATH:~/.local/bin/, then refresh your path by opening a new terminal, and try jupyter notebook again.
And finally, if that doesn't work, take a look at vim /usr/local/bin/jupyter-notebook, vim /usr/local/bin/jupyter, vim /usr/local/bin/jupyter-lab (if you have JupyterLab) and edit the #!python version at the top of the file to match the version of python you are trying to use. As an example, I installed Python 3.8.2 on my mac, but those files still had the path to the 3.6 version, so I edited it to #!/Library/Frameworks/Python.framework/Versions/3.8/bin/python3 | 6 | 123 | 0 | When I execute jupyter notebook in my virtual environment in Arch Linux, the following error occurred.
Error executing Jupyter command 'notebook': [Errno 2] No such file or directory
My Python version is 3.6, and my Jupyter version is 4.3.0
How can I resolve this issue? | Error when executing `jupyter notebook` (No such file or directory) | 1 | 0 | 0 | 125,723 |
42,648,610 | 2017-03-07T12:41:00.000 | 4 | 0 | 1 | 1 | python-3.x,jupyter-notebook | 47,279,945 | 12 | false | 0 | 0 | Since both pip and pip3.6 were installed and
pip install --upgrade --force-reinstall jupyter
was failing, so I used
pip3.6 install --upgrade --force-reinstall jupyter
and it worked for me.
Running jupyter notebook also worked after this installation. | 6 | 123 | 0 | When I execute jupyter notebook in my virtual environment in Arch Linux, the following error occurred.
Error executing Jupyter command 'notebook': [Errno 2] No such file or directory
My Python version is 3.6, and my Jupyter version is 4.3.0
How can I resolve this issue? | Error when executing `jupyter notebook` (No such file or directory) | 0.066568 | 0 | 0 | 125,723 |
42,648,610 | 2017-03-07T12:41:00.000 | 5 | 0 | 1 | 1 | python-3.x,jupyter-notebook | 53,039,574 | 12 | false | 0 | 0 | If the Jupyter installation is not working on macOS,
run the notebook with: python -m notebook
Error executing Jupyter command 'notebook': [Errno 2] No such file or directory
My Python version is 3.6, and my Jupyter version is 4.3.0
How can I resolve this issue? | Error when executing `jupyter notebook` (No such file or directory) | 0.083141 | 0 | 0 | 125,723 |
42,648,610 | 2017-03-07T12:41:00.000 | 2 | 0 | 1 | 1 | python-3.x,jupyter-notebook | 54,565,364 | 12 | false | 0 | 0 | Deactivate your virtual environment if you are currently in;
Run following commands:
python -m pip install jupyter
jupyter notebook | 6 | 123 | 0 | When I execute jupyter notebook in my virtual environment in Arch Linux, the following error occurred.
Error executing Jupyter command 'notebook': [Errno 2] No such file or directory
My Python version is 3.6, and my Jupyter version is 4.3.0
How can I resolve this issue? | Error when executing `jupyter notebook` (No such file or directory) | 0.033321 | 0 | 0 | 125,723 |
42,648,610 | 2017-03-07T12:41:00.000 | 0 | 0 | 1 | 1 | python-3.x,jupyter-notebook | 50,421,008 | 12 | false | 0 | 0 | I'm trying to get this going on VirtualBox on Ubuntu. Finally on some other post it said to try jupyter-notebook. I tried this and it told me to do sudo apt-get jupyter-notebook and that installed a bunch of stuff. Now if I type command jupyter-notebook, it works. | 6 | 123 | 0 | When I execute jupyter notebook in my virtual environment in Arch Linux, the following error occurred.
Error executing Jupyter command 'notebook': [Errno 2] No such file or directory
My Python version is 3.6, and my Jupyter version is 4.3.0
How can I resolve this issue? | Error when executing `jupyter notebook` (No such file or directory) | 0 | 0 | 0 | 125,723 |
42,649,784 | 2017-03-07T13:37:00.000 | 0 | 1 | 0 | 1 | php,python | 45,730,408 | 1 | false | 0 | 0 | If your Python file and PHP file are in the same folder:
$command = "python test.pyw $arg1 $arg2";
exec($command, $output, $result);
Otherwise, use the full path to the script:
$command = "python <php_folder>/scripts/python/test.pyw $arg1 $arg2";
exec($command, $output, $result);
exec (python test.py $arg1 $arg2 , $output, $result)
It executes successfully when I put the test.py in the document root directory. However, I wanted to run the python script from another subdirectory so that it would be easy for me to manage the outout of the python script.
what the python script does is
creates a folder
copy a file from the same directory the python script resides into the folder created (1)
zip the folder
The document root and the subdirectory for the python script have the same permission.
since it keeps on looking for the files to be copied from the documentroot, it generates "no such file or directory" error (in the apache error log file) | From which directory to run python Script in webserver | 0 | 0 | 0 | 341 |
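Independent of the PHP side, a common fix in the Python script itself is to resolve file paths relative to the script's own location instead of the current working directory; a sketch (the source.xlsx and out names are placeholders):

```python
import os

# Directory containing this script, regardless of Apache's working directory.
script_dir = os.path.dirname(os.path.abspath(__file__))

# Build absolute paths from it instead of relying on relative paths.
source_file = os.path.join(script_dir, "source.xlsx")  # placeholder filename
output_dir = os.path.join(script_dir, "out")           # placeholder folder

print(script_dir)
```

With this, the PHP caller can run the script from any directory and the copy/zip steps still find their files.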
42,650,548 | 2017-03-07T14:13:00.000 | 1 | 0 | 0 | 0 | python,django,python-3.x | 42,651,965 | 1 | true | 1 | 0 | Add the app name into installed_apps in Django settings file. Then you can use all the features of the app in django. | 1 | 1 | 0 | I have multiple python apps that are gathering and doing manipulation with data.
At the moment I use peewee for the database, but I am considering integrating the apps with Django, mostly because of the Django admin (it is easier for me and non-devs to manually check/modify data) instead of building an interface myself.
The external apps run independently of one another and do a lot of database operations. They gather data from online and offline (Excel, CSV) sources, and do not run on a server (they can be considered desktop apps).
Is it possible to start an app from the Django admin by adding a button?
What is the best way to add them, from a project-organization point of view? | Integrating other python apps in Django | 1.2 | 0 | 0 | 68 |
42,653,958 | 2017-03-07T16:57:00.000 | 0 | 0 | 0 | 0 | python,html,selenium,iframe,scrapy | 42,658,773 | 2 | true | 1 | 0 | If you want to find this iframe using Selenium, try something like
driver.find_element_by_xpath('//iframe[@mytubeid="mytube1"]').
For a more explicit answer, please provide some code and the site URL. | 1 | 0 | 0 | What does the mytubeid attribute (as in <iframe src="/portal/corporateEventsCalendarIframe.html" mytubeid="mytube1" width="820" height="1600" frameborder="0"/>) do in an iframe?
Note that the iframe does not have an id or similar attribute in it!
How can it be referenced in code? I am using Python + Selenium + Scrapy to build a web-scraping tool. | Referencing mytubeid in iframe using Python Selenium | 1.2 | 0 | 1 | 159 |
42,654,075 | 2017-03-07T17:04:00.000 | 0 | 0 | 0 | 0 | python,algorithm,dynamic-programming,knapsack-problem,bin-packing | 42,657,727 | 1 | true | 0 | 0 | This task can be reduced to solving several knapsack problems. To solve them, the principle of greedy search is usually used, with the number of cuts as the search criterion.
The first obvious step of the algorithm is checking the balance: the total length of the bars must cover the total need.
The second step is to sort the arrays of bars and of chocolate needs, which simplifies further calculations. This implements the greedy-search principle.
The third obvious step is to find and use all bars whose sizes exactly match a need.
The next step is to find and use all combinations of bars that exactly satisfy the needs. This requires a greedy search in descending order of needs, which continues through the rest of the calculation. This criterion is not optimal, but it allows forming a basic solution.
If not all the children have received chocolate, then cuts become necessary. The search should proceed in descending order of the remaining bar sizes. First, check every possibility of giving a cut bar to two children at once, then the same but also using one existing bar, and so on.
After that there is the obvious "one cut, one need" variant, which completes the base solution. If computational resources remain, they can first be used to evaluate options of the type "two cuts, three needs", etc.
Further optimization consists of returning to the earlier steps and recomputing for these additional variants. | 1 | 1 | 1 | Here's the problem statement:
I have m chocolate bars, of integer length, and n children who
want integer amounts of chocolate. Where the total chocolate needs of
the children are less than or equal to the total amount of chocolate
you have. You need to write an algorithm that distributes chocolate to
the children by making the least number of cuts to the bars.
For example, for M = {1,3,7}, N = {1,3,4}, the least number of cuts would be 1.
I don't have any formal experience with algorithms; could anyone give me hints on what I should start reading to tackle this problem in an efficient way? | Do I need to use a bin packing algorithm, or knapsack? | 1.2 | 0 | 0 | 770 |
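A rough sketch of the first steps the answer outlines (balance check, sorting, exact matches, then greedy cutting with leftovers reused); this is a baseline, not an optimal solver, and it assumes each remaining need fits in some single piece:

```python
def min_cuts_greedy(bars, needs):
    """Greedy baseline: balance check, exact matches, then cut the
    largest fitting bar, reusing leftovers. Not guaranteed optimal."""
    if sum(needs) > sum(bars):
        raise ValueError("not enough chocolate for all children")
    bars = sorted(bars, reverse=True)
    needs = sorted(needs, reverse=True)
    cuts = 0
    # Step 1: serve every need that matches a bar exactly (no cut needed).
    unmet = []
    for n in needs:
        if n in bars:
            bars.remove(n)
        else:
            unmet.append(n)
    # Step 2: for each remaining need, cut a piece off a fitting bar;
    # the leftover goes back into the pool for later needs.
    for n in unmet:
        for i, b in enumerate(bars):
            if b >= n:
                bars.pop(i)
                if b > n:
                    cuts += 1              # one cut splits b into n and b - n
                    bars.append(b - n)
                    bars.sort(reverse=True)
                break
    return cuts

print(min_cuts_greedy([1, 3, 7], [1, 3, 4]))  # -> 1
```

For M = {1, 3, 7}, N = {1, 3, 4}, the exact matches serve 1 and 3, and one cut of the 7-bar serves the need of 4, matching the example's answer of 1 cut.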
42,656,915 | 2017-03-07T19:40:00.000 | 1 | 0 | 0 | 0 | python,matplotlib,plot,pyqt4 | 42,697,952 | 2 | false | 0 | 0 | Two possible solutions:
Don't show a scatter plot, but a hexbin plot instead.
Use blitting.
(In case someone wonders about the quality of this answer: mind that the questioner specifically asked for this kind of structure in the comments below the question.) | 1 | 1 | 1 | I want to work with a scatter plot within a FigureCanvasQTAgg. The scatter plot may have 50,000 or more data points. The user wants to draw a polygon in the plot to select the data points inside it. I've implemented that by setting points via mouse clicks and connecting them with lines using Axes.plot(). When the user has set all points, the polygon is drawn. Each time I add a new point I call FigureCanvasQTAgg.draw() to render the current version of the plot. This is slow, because the scatter plot has so much data.
Is there a way to make this faster? | How can I make matplotlib plot rendering faster | 0.099668 | 0 | 0 | 1,682 |
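A minimal sketch of option 2 (blitting): draw the heavy scatter once, cache the rendered background, and redraw only the polygon line on each click. The Agg backend is used here only so the snippet is self-contained; the same copy_from_bbox / restore_region / blit calls work on a FigureCanvasQTAgg:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, just for illustration
import matplotlib.pyplot as plt
import numpy as np

# Draw the expensive scatter once and cache the rendered background.
x, y = np.random.rand(2, 50_000)
fig, ax = plt.subplots()
ax.scatter(x, y, s=1)
fig.canvas.draw()                                  # full render, done once
background = fig.canvas.copy_from_bbox(ax.bbox)    # cached scatter pixels

(line,) = ax.plot([], [], "r-", animated=True)     # polygon outline

def add_vertex(xs, ys):
    """Redraw only the polygon line on top of the cached scatter."""
    line.set_data(xs, ys)
    fig.canvas.restore_region(background)          # cheap: paste cached pixels
    ax.draw_artist(line)                           # draw just the line
    fig.canvas.blit(ax.bbox)                       # push to screen

add_vertex([0.1, 0.5, 0.9], [0.2, 0.8, 0.3])
```

On a Qt canvas you would call add_vertex from the mouse-click handler instead of calling FigureCanvasQTAgg.draw() each time.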