Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
37,181,491 | 2016-05-12T08:35:00.000 | 2 | 0 | 1 | 0 | python,spyder | 65,368,826 | 2 | false | 0 | 0 | First, I will just reconfirm what has been said above: Python is ambiguous about what the correct indent should be. Unfortunately, coming from MATLAB, I also like Ctrl-I.
That said, check how Tab and Shift-Tab work in practice; they did a little better than I expected. When I ended up with two tabs too many after rearranging code, one Shift-Tab brought it back to the proper position. | 2 | 5 | 0 | Is there any shortcut to automatically indent marked lines in the editor? For example, in MATLAB there is the Ctrl+I shortcut. | Spyder IDE automatic indentation | 0.197375 | 0 | 0 | 27,958
37,184,618 | 2016-05-12T10:44:00.000 | 4 | 0 | 0 | 0 | python,c++,macos,numpy,blas | 37,185,292 | 3 | false | 0 | 0 | numpy.show_config() just reports that the info is not available on my Debian Linux.
However, /usr/lib/python3/dist-packages/scipy/lib has a subdirectory for BLAS which may tell you what you want. There are also a couple of test programs for BLAS in the tests subdirectory.
Hope this helps. | 1 | 33 | 1 | I use numpy and scipy in different environments (MacOS, Ubuntu, RedHat).
Usually I install numpy by using the package manager that is available (e.g., mac ports, apt, yum).
However, if you don't compile NumPy manually, how can you be sure that it uses a BLAS library? Using MacPorts, ATLAS is installed as a dependency, but I am not sure if it is really used. When I perform a simple benchmark, the numpy.dot() function takes approximately twice as long as a dot product computed using the Eigen C++ library. I am not sure if this is a reasonable result. | Find out if/which BLAS library is used by Numpy | 0.26052 | 0 | 0 | 28,680
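To make both checks above concrete, here is a small sketch (not from the original answers; matrix sizes are arbitrary):

```python
import time
import numpy as np

# Print the BLAS/LAPACK configuration NumPy was built against
# (may be empty on some distribution-packaged builds, as noted above).
np.show_config()

# Rough timing of a large matrix product; an optimized BLAS (ATLAS,
# OpenBLAS, Accelerate, ...) is typically far faster than a plain C fallback.
a = np.random.rand(2000, 2000)
b = np.random.rand(2000, 2000)
t0 = time.time()
np.dot(a, b)
print("dot took %.3f s" % (time.time() - t0))
```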
37,186,159 | 2016-05-12T11:55:00.000 | 0 | 1 | 1 | 1 | shell,python-3.x,import | 43,708,606 | 1 | false | 0 | 0 | You have to be in the directory in which the file is located in order to import it from another script or from the interactive shell.
So you should either put the script trying to import ch09 in the same folder as ch09.py, or you should use os.chdir to cd to the directory internally. | 1 | 0 | 0 | I looked around the web but couldn't really find anything; I guess I'm searching wrong.
I'm trying to import a file I built.
In cmd, to use it, I ran a cd command first and then just imported it.
In the shell it keeps telling me:
[ Traceback (most recent call last): File "<stdin>", line 1, in <module>
from ch09 import * ImportError: No module named 'ch09' ]
(I'm just learning Python myself, hence ch09.)
Please, if someone can help me with this (ideally also how to avoid using cd in cmd, though that part is fine), the shell case is the more important one.
Thanks, Josh. | #python3.x Importing in Shell | 0 | 0 | 0 | 46 |
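A minimal sketch of the two workarounds from the answer; the folder path is a made-up example, use wherever ch09.py actually lives:

```python
import os
import sys

# Option 1: change the working directory; the interactive shell searches
# the current directory for modules.
os.chdir(r"C:\learning")          # hypothetical folder containing ch09.py

# Option 2: add the folder to the module search path explicitly.
sys.path.append(r"C:\learning")   # hypothetical

from ch09 import *
```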
37,188,623 | 2016-05-12T13:37:00.000 | 1 | 0 | 1 | 1 | python,opencv,ubuntu | 37,188,746 | 9 | false | 0 | 0 | This is because you have multiple installations of Python on your machine. You should make python3 the default; right now python2.7 is the default. | 1 | 27 | 0 | I want to install OpenCV for Python 3 in Ubuntu 16.04. First I tried running sudo apt-get install python3-opencv, which is how I pretty much install all of my Python software. This could not find a repository. The install does work, however, if I do sudo apt-get install python-opencv; the issue with this is that without the 3 after python it installs for Python 2, which I do not use. I would really prefer not to have to build and install from source, so is there a way I can get a repository? I also tried installing it with pip3 and it could not find it either. | Ubuntu, how to install OpenCV for python3? | 0.022219 | 0 | 0 | 54,785
37,189,635 | 2016-05-12T14:16:00.000 | 0 | 0 | 1 | 0 | python,letters | 37,214,333 | 1 | false | 0 | 0 | After getting some help from friends: '$\delta^{18}O_{SMOW}$'
N. | 1 | 0 | 0 | I am new to Python, and it seems like a small issue, but I was unable to find the answer for it on the web.
I want to write the delta-O-18 sign, the one from the famous isotope ratio expression, but with no luck so far.
Basically, I need to make the "18" in the following expression "go" up: plt.ylabel('$\delta18O$')
Thanks in advance to all of you.
Best,
N. | Trying to write the delta-O-18 sign in Python | 0 | 0 | 0 | 72 |
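Putting the string from the answer to work, a minimal matplotlib sketch using mathtext (the plotted values are made up):

```python
import matplotlib.pyplot as plt

plt.plot([0, 1, 2], [3.1, 2.8, 3.4])      # made-up data
plt.ylabel(r'$\delta^{18}O_{SMOW}$')      # superscript 18, subscript SMOW
plt.show()
```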
37,193,143 | 2016-05-12T16:57:00.000 | 5 | 0 | 0 | 0 | python,postgresql,openerp,odoo-9 | 37,199,710 | 3 | false | 1 | 0 | This helped me.
sudo nano /etc/postgresql/9.3/main/pg_hba.conf
then add
local all odoo trust
then restart postgres
sudo service postgresql restart | 1 | 4 | 0 | I'm on Odoo 9. I have an issue when launching the Odoo server with $ odoo.py -r odoo -w password: localhost:8069 doesn't load and I get an error in the terminal, "Peer authentication failed for user "odoo"".
I already created a user "odoo" in postgres.
When launching $ odoo.py I can load the Odoo page in the browser, but I can't create a database (as the default user).
It was working, and I had already created a database, but when I logged out I couldn't connect to my database account anymore.
Any ideas ? | Peer authentication failed for user "odoo" | 0.321513 | 1 | 0 | 23,159 |
37,198,467 | 2016-05-12T22:26:00.000 | 0 | 0 | 0 | 0 | python,django | 37,198,563 | 1 | false | 1 | 0 | This question is a little broad for a specific answer, but in general, one can:
Have the button access an API which will, on the server, in another thread, run your main.py file (a sketch follows after this question).
Once the application is finished, move the generated files to a deterministic location that serves static files on your web server.
Provide the user a URL to the newly created file's location.
Have a cron job run to clear out old files in the static directory. | 1 | 0 | 0 | I've written a scientific program in python which outputs a .png and a .pdf
I would like to execute this main.py file from a web interface, with a nice big button saying GO and then display the .png and download a .pdf
I'm using a Django framework to serve the page saying GO. How do I get it to:
run my main.py file?
return the .png file to html template?
download the file which is generated by the main.py script?
Thank you internet | Getting Django to run a main.py file | 0 | 0 | 0 | 1,358 |
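For illustration, a rough sketch of step 1 of the answer above: a Django view that launches the script in a background thread. The paths and names are hypothetical.

```python
# views.py -- minimal sketch; script path and view name are hypothetical
import subprocess
import threading

from django.http import JsonResponse

def _run_script():
    # Runs main.py; it should write the .png/.pdf into the static directory.
    subprocess.call(["python", "/path/to/main.py"])

def go(request):
    threading.Thread(target=_run_script).start()
    return JsonResponse({"status": "started"})
```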
37,199,342 | 2016-05-13T00:10:00.000 | 1 | 0 | 0 | 0 | python,couchdb | 37,200,473 | 2 | false | 0 | 0 | I'm not sure what you mean by "parsing the contents using json". The data should already be parsed and you can refer to any attributes by doing something like row.value["_id"] where row is the name of the variable referencing the Row object. | 2 | 1 | 0 | I am getting a query result from the couchdb emit function in Python as follows:
<Row id=u'c0cc622ca2d877432a5ccd8cbc002432', key=u'eric', value={u'_rev': u'1-e327a4c2708d4015e6e89efada38348f', u'_id': u'c0cc622ca2d877432a5ccd8cbc002432', u'email': u'yap', u'name': u'eric'}>
How do I parse the content of value item as:
{u'_rev': u'1-e327a4c2708d4015e6e89efada38348f', u'_id': u'c0cc622ca2d877432a5ccd8cbc002432', u'email': u'yap', u'name': u'eric'}
using json? | Parsing couchdb query result using json in Python | 0.099668 | 0 | 1 | 335 |
37,199,342 | 2016-05-13T00:10:00.000 | 0 | 0 | 0 | 0 | python,couchdb | 50,189,612 | 2 | false | 0 | 0 | The value is already converted to a Python dictionary. So, if you need to change it to a JSON string, just use json.dumps (json.loads goes the other way, parsing JSON text); otherwise you can access the key with row.key and value attributes with row.value['some_attribute']. | 2 | 1 | 0 | I am getting a query result from the couchdb emit function in Python as follows:
<Row id=u'c0cc622ca2d877432a5ccd8cbc002432', key=u'eric', value={u'_rev': u'1-e327a4c2708d4015e6e89efada38348f', u'_id': u'c0cc622ca2d877432a5ccd8cbc002432', u'email': u'yap', u'name': u'eric'}>
How do I parse the content of value item as:
{u'_rev': u'1-e327a4c2708d4015e6e89efada38348f', u'_id': u'c0cc622ca2d877432a5ccd8cbc002432', u'email': u'yap', u'name': u'eric'}
using json? | Parsing couchdb query result using json in Python | 0 | 0 | 1 | 335 |
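For illustration, a small sketch using the couchdb package (server, database, and view names are assumptions); the value is already a dict, and json.dumps is only needed if you want JSON text back:

```python
import json
import couchdb

db = couchdb.Server()["mydb"]             # hypothetical database name
for row in db.view("users/by_name"):      # hypothetical view
    value = row.value                     # already a parsed Python dict
    print(value["email"], value["name"])
    print(json.dumps(value))              # back to a JSON string, if needed
```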
37,207,589 | 2016-05-13T10:25:00.000 | 2 | 0 | 0 | 0 | algorithm,python-3.x,discrete-mathematics,binomial-coefficients | 37,212,468 | 2 | true | 0 | 0 | First you could start with the fact that C(n,k) = (n/k) C(n-1,k-1).
You can prove that C(n,k) is divisible by n/gcd(n,k).
If n is prime then n divides C(n,k).
Check Kummer's theorem: if p is a prime number, n a positive number, and k a positive number with 0 < k < n, then the greatest exponent r for which p^r divides C(n,k) is the number of carries (borrows) needed in the subtraction n-k in base p.
Let us suppose that n>4 :
if p>n then p cannot divide C(n,k) because in base p, n and k are only one digit wide → no carry in the subtraction
so we have to check for prime divisors in [2;n]. As C(n,k)=C(n,n-k) we can suppose k≤n/2 and n/2≤n-k≤n
for the prime divisors in the range [n/2;n] we have n/2 < p ≤ n, or equivalently p ≤ n < 2p. We have p ≥ 2, so p ≤ n < p², which implies that n has exactly 2 digits when written in base p and the first digit has to be 1. As k ≤ n/2 < p, k can only be one digit wide. Either the subtraction has one carry (and one only), when n-k < p ⇒ p divides C(n,k); or the subtraction has no carry and p does not divide C(n,k).
The first result is :
every prime number in [n-k;n] is a prime divisor of C(n,k) with exponent 1.
no prime number in [n/2;n-k] is a prime divisor of C(n,k).
in [sqrt(n); n/2] we have 2p ≤ n < p², so n is exactly 2 digits wide in base p, and k < n implies k has at most 2 digits. There are two cases: exactly one carry or no carry at all. A carry exists iff the last digit of k (in base p) is greater than the last digit of n, i.e. iff n mod p < k mod p.
The second result is:
For every prime number p in [sqrt(n);n/2]:
p divides C(n,k) with exponent 1 iff n mod p < k mod p
p does not divide C(n,k) iff n mod p ≥ k mod p
in the range [2; sqrt(n)] we have to check all the prime numbers; it's only in this range that a prime divisor can have an exponent greater than 1. | 1 | 3 | 1 | I'm interested in tips for my algorithm that I use to find the divisors of a very large number, more specifically "n over k", or C(n, k). The number itself can get very high, so the solution really needs to take time complexity into the 'equation', so to say.
The formula for n over k is n! / (k!(n-k)!) and I understand that I must try to exploit the fact that factorials are kind of 'recursive' somehow - but I haven't yet read much discrete mathematics, so the problem is both of a mathematical and a programming nature.
I guess what I'm really looking for are just some tips heading me in the right direction - I'm really stuck. | Smart algorithm for finding the divisors of a binomial coefficient | 1.2 | 0 | 0 | 846 |
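A small sketch of the Kummer's theorem step described in the answer, counting carries when adding k and n-k in base p (equivalent to the borrows in the subtraction n-k); looping p over the primes up to n is left out:

```python
def p_exponent_in_binomial(n, k, p):
    """Exponent of the prime p in C(n, k): carries when adding k and n-k in base p."""
    carries, carry, a, b = 0, 0, k, n - k
    while a or b or carry:
        s = a % p + b % p + carry
        carry = 1 if s >= p else 0
        carries += carry
        a //= p
        b //= p
    return carries

# Example: C(10, 4) = 210 = 2 * 3 * 5 * 7
print(p_exponent_in_binomial(10, 4, 2))  # -> 1
print(p_exponent_in_binomial(10, 4, 7))  # -> 1
```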
37,213,568 | 2016-05-13T15:10:00.000 | 0 | 1 | 0 | 0 | php,python,mysql | 37,213,822 | 1 | false | 0 | 0 | You can check your web server logs (e.g. /var/log/apache2/error.log if you have Apache as your web server). | 1 | 0 | 0 | I'm trying to use a PHP site button to kick off a Python script on my server. When I run it, everything seems fine; on the server I can "ps ax" and see that the script is running.
The Python script attempts to process some files and write the results to a MySQL database. When I ultimately check to see that the changes were made to the DB, nothing has happened. Also, redirecting output shows no errors.
I have checked to make sure that it's executing (the ps ax)
I've made sure that all users have access to writing to the output directory (for saving the error report, if there is one)
I've made sure that the logon to MySql is correct...
I'm not sure what else to do or check. | Running a python (with MySQL) script from PHP | 0 | 1 | 0 | 40 |
37,214,983 | 2016-05-13T16:25:00.000 | 5 | 0 | 0 | 0 | python,openpyxl | 37,215,619 | 2 | true | 0 | 0 | openpyxl version 2.5 will preserve charts in existing files. | 1 | 5 | 0 | I just created a script with openpyxl to update a xlsx file that we usually update manually every month.
It works fine, but the new file lost all the graphs and images that were in the workbook. Is there a way to keep them? | Openpyxl updating sheet and maintaining graphs and images | 1.2 | 1 | 0 | 3,060
37,215,765 | 2016-05-13T17:13:00.000 | 0 | 0 | 0 | 0 | python,amazon-web-services,amazon-s3,boto | 37,215,966 | 1 | false | 0 | 0 | Solved: first call bucket.new_key, which locally creates a key object and does not incur an API call.
Then use key.get_contents_as_string (or a similar get_contents method). | 1 | 0 | 0 | I am using Python and boto2 for an S3 project.
There is a file in s3, I want to get its contents by path name.
Correct me if I'm wrong but I think it can't be done with one API call.
First I need to call bucket.get_key and then key.get_content.
I would like to download the file contents with just one API call (the file is not big and should comfortably fit in an in-memory string) | boto2: download object from s3 with one API call | 0 | 0 | 1 | 337 |
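A sketch of the accepted trick in boto2 (bucket and key names are made up); passing validate=False to get_bucket also avoids a round trip:

```python
import boto

conn = boto.connect_s3()
conn_bucket = conn.get_bucket("my-bucket", validate=False)  # no API call
key = conn_bucket.new_key("path/to/file.txt")               # local object only
data = key.get_contents_as_string()                         # the single GET request
```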
37,219,045 | 2016-05-13T20:43:00.000 | 1 | 0 | 1 | 0 | python,windows,bash,cygwin | 68,331,346 | 6 | false | 0 | 0 | A better way (in my opinion):
Create a shortcut:
Set the target to
%systemroot%\System32\cmd.exe /c "python C:\Users\MyUsername\Documents\MyScript.py"
Start In:
C:\Users\MyUsername\Documents\
Obviously, change the path to the location of your script. You may need to add escaped quotes if there is a space in it. | 1 | 8 | 0 | I have a python script I run using Cygwin and I'd like to create a clickable icon on the windows desktop that could run this script without opening Cygwin and entering in the commands by hand. How can I do this? | Windows: run python command from clickable icon | 0.033321 | 0 | 0 | 33,470
37,221,133 | 2016-05-14T00:24:00.000 | 8 | 0 | 1 | 0 | python-3.x,multiprocessing,distributed-computing,hpc | 37,321,401 | 1 | false | 0 | 0 | Unfortunately I wasn't able to find an answer in the community. However, through experimentation, I was able to better isolate the problem and find a workable solution.
The problem arises from the nature of Python's multiprocessing implementation. When a Pool object is created (i.e. the manager class that controls the processing cores for the parallel work), a new Python run-time is started for each core. There are multiple places in my code where the multiprocessing package is used and a Pool object instantiated... every function that requires it creates a Pool object as needed and then joins and terminates before exiting. Therefore, if I call the function 3 times in the code, 8 instances of Python are spun up and then closed 3 times. On a single machine, the overhead of this was not significant at all compared to the computational load of the functions... however on the HPC it was absurdly high.
I re-architected the code so that a Pool object is created at the very beginning of the calling process and then passed to each function as needed. It is closed, joined, and terminated at the end of the overall process.
We found that the bulk of the time was spent in the creation of the Pool object on each node. This was an improvement though because it was only being created once! We then realized that the underlying problem was that multiple nodes were trying to access Python at the same time in the same place from over the network (it was only installed on the head node). We installed Python and the application on all nodes, and the problem was completely fixed.
This solution was the result of trial and error... unfortunately our knowledge of cluster computing is pretty low at this point. I share this answer in the hopes that it will be critiqued so that we can obtain even more insight. Thank you for your time. | 1 | 4 | 1 | I am running a Python script on a Windows HPC cluster. A function in the script uses starmap from the multiprocessing package to parallelize a certain computationally intensive process.
When I run the script on a single non-cluster machine, I obtain the expected speed boost. When I log into a node and run the script locally, I obtain the expected speed boost. However, when the job manager runs the script, the speed boost from multiprocessing is either completely mitigated or, sometimes, even 2x slower. We have noticed that memory paging occurs when the starmap function is called. We believe that this has something to do with the nature of Python's multiprocessing, i.e. the fact that a separate Python interpreter is kicked off for each core.
Since we had success running from the console from a single node, we tried to run the script with HPC_CREATECONSOLE=True, to no avail.
Is there some kind of setting within the job manager that we should use when running Python scripts that use multiprocessing? Is multiprocessing just not appropriate for an HPC cluster? | Using Python multiprocessing on an HPC cluster | 1 | 0 | 0 | 1,975 |
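A minimal sketch of the re-architecture described in the answer: one Pool created up front and passed into each function (the work function is a placeholder):

```python
from multiprocessing import Pool

def heavy(a, b):             # placeholder for the real per-item work
    return a * b

def stage_one(pool):
    return pool.starmap(heavy, [(i, i + 1) for i in range(100)])

def stage_two(pool):
    return pool.starmap(heavy, [(i, 2) for i in range(100)])

if __name__ == "__main__":
    pool = Pool()            # created exactly once per run
    r1 = stage_one(pool)
    r2 = stage_two(pool)
    pool.close()
    pool.join()
```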
37,222,588 | 2016-05-14T04:38:00.000 | 3 | 0 | 1 | 0 | python,c++,github,graphviz,pygraphviz | 38,644,196 | 2 | false | 0 | 0 | Using Graphviz-2.38 default installation path, I've been able to install pygraphviz-1.3.1 using this command:
pip install --global-option=build_ext --global-option="-IC:\Program Files (x86)\Graphviz2.38\include" --global-option="-LC:\Program Files (x86)\Graphviz2.38\lib\release\lib" pygraphviz | 1 | 5 | 0 | While trying to install pygraphviz 1.3 with pip the below error msg is coming
Fatal error 1083 Cannot open file graphviz/cgraph.h: No such file or directory
error: command C:\Users\Appdata\Local\Programs\Common\Microsoft\Visual C++
for python\9.0\VC\Bin\cl.exe failed with status 2
I already have Microsoft Visual C++. I am using Python27. Pip is working fine and I have successfully installed graphviz-2.38.
I have also tried with this command:
pip install --install-option="--include-path=\C:\graphviz-2.38\release\include\graphviz" --install-option="--library-path=\C:\graphviz-2.38\release\lib\graphviz" pygraphviz
Please let me know how to solve the problem. | Pygraphviz Installation Failed with error code 1083 Cannot open file graphviz/cgraph.h: No such file or directory | 0.291313 | 0 | 0 | 3,080 |
37,227,938 | 2016-05-14T14:37:00.000 | 0 | 0 | 0 | 0 | python,deep-learning | 37,228,094 | 1 | false | 0 | 0 | Why not add a preprocessing step, where you would either (a) physically move the images to folders associated with their bucket and/or rename them, or (b) first scan through all images (headers only) to build an in-memory table of image filenames and their sizes/buckets; then the random sampling step would be quite simple to implement (see the sketch after this question). | 1 | 0 | 1 | For a Deep Learning application I am building, I have a dataset of about 50k grayscale images, ranging from about 300*2k to 300*10k pixels. Loading all this data into memory is not possible, so I am looking for a proper way to handle reading in random batches of data. One extra complication is that I need to know the width of each image before building my Deep Learning model, to define different size-buckets within the data (for example: [2k-4k, 4k-6k, 6k-8k, 8k-10k]).
Currently, I am working with a smaller dataset and just load each image from a png file, bucket them by size and start learning. When I want to scale up this is no longer possible.
To train the model, each batch of the data should be (ideally) fully random from a random bucket. A naive way of doing this would be saving the sizes of the images beforehand, and just loading each random batch when it is needed. However, this would result in a lot of extra loading of data and not very efficient memory management.
Does anyone have a suggestion how to handle this problem efficiently?
Cheers! | Proper way of loading large amounts of image data | 0 | 0 | 0 | 811 |
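A sketch of option (b) from the answer using Pillow, which reads just the header to get the size, followed by sampling one batch from a random bucket (paths and batch size are assumptions):

```python
import glob
import random
from PIL import Image

buckets = {(2000, 4000): [], (4000, 6000): [], (6000, 8000): [], (8000, 10000): []}

for path in glob.glob("images/*.png"):        # hypothetical location
    width, _ = Image.open(path).size          # header only; pixels stay on disk
    for (lo, hi), names in buckets.items():
        if lo <= width < hi:
            names.append(path)
            break

bucket = random.choice([b for b in buckets.values() if b])
batch_paths = random.sample(bucket, min(32, len(bucket)))
batch = [Image.open(p) for p in batch_paths]  # only this batch is loaded
```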
37,231,032 | 2016-05-14T19:35:00.000 | 1 | 0 | 0 | 0 | python,django,django-migrations,django-database,django-1.9 | 37,232,610 | 1 | false | 1 | 0 | When you exported the data and re-imported it on the other database, part of that export would have included the django_migrations table. This is basically a log of all the migrations successfully executed by Django.
Since you have left out only the log table according to you, that should really be the only table that's missing from your schema. Find the entries in django_migrations that correspond to this table and delete them. Then run ./manage.py migrate again and the table will be created. | 1 | 1 | 0 | I am trying to get a project from one machine to another. This project contains a massive log db table. It is too massive. So I exported and imported all db tables except this one via phpmyadmin.
Now if I run the migrate command, I expect Django to create everything missing. But it does not.
How to make django check for and create missing db tables?
What am I missing, why is it not doing this? I feel like the old syncdb did the job. But the new --run-syncdb does not.
Thank you for your help. | How to create missing DB tables in django? | 0.197375 | 1 | 0 | 2,300 |
37,234,114 | 2016-05-15T02:59:00.000 | 1 | 0 | 0 | 0 | python-2.7,tensorflow | 37,234,653 | 1 | true | 0 | 0 | To lock the ones that you don't want to train, you can use tf.Variable(..., trainable=False) | 1 | 0 | 1 | In TensorFlow, tf.train.GradientDescentOptimizer does gradient descent for all variables by default. Can I do gradient descent for only a few of my variables and 'lock' the others? | How to do gradient descent for not all variables in tensorflow | 1.2 | 0 | 0 | 206
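Two equivalent sketches for the TensorFlow ~1.x API: mark a variable as non-trainable, or pass an explicit var_list to the optimizer (variable names are made up):

```python
import tensorflow as tf

w = tf.Variable(tf.zeros([10]))                    # will be updated
b = tf.Variable(tf.zeros([10]), trainable=False)   # 'locked'

loss = tf.reduce_sum(tf.square(w - 1.0))
opt = tf.train.GradientDescentOptimizer(0.1)

# Alternative: keep everything trainable but restrict the update explicitly.
train_op = opt.minimize(loss, var_list=[w])
```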
37,236,677 | 2016-05-15T09:36:00.000 | 0 | 0 | 1 | 0 | python,anaconda,tensorflow,spyder | 40,741,488 | 1 | false | 0 | 0 | To solve your issue you have 3 options here:
1. Just start Spyder from the terminal.
2. Move the PATH variable definition from .bash_profile to the session init scripts.
3. Duplicate your PATH in Spyder's run configuration.
Installed CUDA 7.5 and cudnn
Installed tensorflow (GPU version) through a DEB package. Note that I don't want to use the conda package of tensorflow since that one is not the GPU version.
Added Anaconda, CUDA and cudnn to path.
Created a conda environment for tensorflow (conda create -n tensorflow python=2.7)
Now if I start python or IDLE from the terminal, I can import tensorflow and it will find all the CUDA dependencies, great!
...however, if I start ipython or spyder from the same terminal, running "import tensorflow as tf" gives me a cold-hearted "ImportError: No module named tensorflow".
My question: How can I get ipython and spyder to find the tensorflow library just like in an IDLE and a python instance? | Use GPU installation of tensorflow/cuda in spyder under ubuntu 14.04 | 0 | 0 | 0 | 1,863 |
37,240,431 | 2016-05-15T15:55:00.000 | 1 | 0 | 0 | 0 | python,visual-studio,ptvs | 49,196,161 | 2 | false | 1 | 0 | Workaround rather than a full answer.
I encountered this problem while importing my own module which ran code (which was erroring) on import.
By setting the imported module to the startup script, I was able to step through the startup code in the module and debug.
My best guess is that visual studio 2015 decided the imported module was a python standard library, but it really isn't viable to turn on the 'debug standard library option' as many standard library modules generate errors on import themselves. | 1 | 2 | 0 | I am trying to debug a scrapy project , built in Python 2.7.1
in visual studio 2013.
I am able to reach breakpoints, but when I do step into/ step over
the debugger seems to continue the execution as if I pressed resume (F5).
I am working with standard python launcher.
Any idea how to make the step into/over functionality work? | python tools visual studio - step into not working | 0.099668 | 0 | 0 | 2,148 |
37,242,259 | 2016-05-15T18:42:00.000 | -1 | 0 | 1 | 0 | python,windows,configuration,python-3.5 | 39,355,263 | 2 | true | 0 | 0 | In the end, I bought a new computer that didn't have the problem. So there's your solution. | 1 | 5 | 0 | When I was using Python 3.4 I used MinGW to compile modules. Unfortunately in 3.5 MinGW support no longer works. I've installed the correct Visual C++ stuff, but pip still tries to use the MinGW compiler and fails.
How do I tell it to use the correct compiler? | Cannot configure Python 3.5 to use Visual C++ compiler on Windows | 1.2 | 0 | 0 | 480 |
37,245,042 | 2016-05-16T00:15:00.000 | 1 | 0 | 1 | 0 | python,regex | 37,245,129 | 2 | true | 0 | 0 | As long as you intend to include the trailing character in the resulting match, the regex pattern to do that is very simple. For instance, if you want to capture any series of digits followed by a letter A, the pattern would be \d+A | 1 | 0 | 0 | I can't seem to find an example of this, but I doubt the regex is that sophisticated. Is there a simple way of getting the immediately preceding digits of a certain character in Python?
For the character "A" and the string:
"îA"
It should return 238A | match any decimals appearing immediately before a character in python | 1.2 | 0 | 0 | 47 |
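For illustration, the pattern in action (the sample string is an assumption, since the one in the question came through garbled):

```python
import re

m = re.search(r"\d+A", u"foo 238A bar")   # hypothetical input
if m:
    print(m.group(0))                     # -> 238A
```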
37,245,832 | 2016-05-16T02:22:00.000 | 0 | 0 | 0 | 0 | python,path,geometry | 37,246,380 | 3 | false | 0 | 0 | I'm sure there's an elegant way to do this with pandas, but until then, here's a simple idea if you can live with some error. You can do this a few different ways but here's the gist of it:
Treat each tuple as a node in a linked list. Define the desired length, D, between each point. As you move through the list, if the next node is not a distance D from the current node, adjust its x,y coordinates accordingly (or insert/delete nodes as necessary) so that it is a distance D from the current node along the line segments that connect the nodes.
Like I said, you'll have to live with some error because your original points will be adjusted/deleted. If you generate points to create more resolution prior to this, you can probably lessen the error. | 2 | 4 | 1 | I have a (long) list of x,y tuples that collectively describe a path (ie. mouse-activity sampled at a constant rate despite non-constant velocities).
My goal is to animate that path at a constant rate. So I have segments that are curved, and segments that are straight, and the delta-d between any two points isn't guaranteed to be the same.
Given data like:
[(0,0), (0,2), (4,6).... ] where the length of that list is ~1k-2k points, is there any way besides brute-force counting line-segment lengths between each point and then designating every n-length a "frame"? | How to calculate even distance along an interpolated path (Python2.7)? | 0 | 0 | 0 | 159 |
37,245,832 | 2016-05-16T02:22:00.000 | 1 | 0 | 0 | 0 | python,path,geometry | 37,246,950 | 3 | true | 0 | 0 | If you use Numpy arrays to represent your data then you can vectorise the computation. That's as efficient as you're going to get. | 2 | 4 | 1 | I have a (long) list of x,y tuples that collectively describe a path (ie. mouse-activity sampled at a constant rate despite non-constant velocities).
My goal is to animate that path at a constant rate. So I have segments that are curved, and segments that are straight, and the delta-d between any two points isn't guaranteed to be the same.
Given data like:
[(0,0), (0,2), (4,6).... ] where the length of that list is ~1k-2k points, is there any way besides brute-force counting line-segment lengths between each point and then designating every n-length a "frame"? | How to calculate even distance along an interpolated path (Python2.7)? | 1.2 | 0 | 0 | 159 |
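Combining both answers, a NumPy sketch that resamples the polyline at a constant arc-length step (the step size is arbitrary):

```python
import numpy as np

def resample(points, step):
    """Return points spaced `step` apart along the polyline `points` (Nx2)."""
    pts = np.asarray(points, dtype=float)
    seg = np.hypot(*np.diff(pts, axis=0).T)          # per-segment lengths
    dist = np.concatenate([[0.0], np.cumsum(seg)])   # cumulative arc length
    targets = np.arange(0.0, dist[-1], step)         # evenly spaced 'frames'
    x = np.interp(targets, dist, pts[:, 0])
    y = np.interp(targets, dist, pts[:, 1])
    return np.column_stack([x, y])

frames = resample([(0, 0), (0, 2), (4, 6)], step=0.5)
```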
37,246,342 | 2016-05-16T03:38:00.000 | 1 | 0 | 1 | 0 | python,sql,arrays,mongodb,database | 37,246,905 | 3 | false | 0 | 0 | Have you considered HDF5? It's very efficient for numerical data, and is supported by both Python and Matlab. | 1 | 1 | 1 | I am retrieving structured numerical data (float 2-3 decimal spaces) via http requests from a server. The data comes in as sets of numbers which are then converted into an array/list. I want to then store each set of data locally on my computer so that I can further operate on it.
Since there are very many of these data sets which need to be collected, simply writing each data set that comes in to a .txt file does not seem very efficient. On the other hand, I am aware that there are various solutions such as MongoDB, Python-to-SQL interfaces, etc., but I'm unsure which one I should use and which would be the most appropriate and efficient for this scenario.
Also the database that is created must be able to interface and be queried from different languages such as MATLAB. | Python, computationally efficient data storage methods | 0.066568 | 1 | 0 | 989 |
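A minimal h5py sketch of the HDF5 suggestion (file and dataset names are made up); the resulting file can be read from MATLAB with h5read:

```python
import h5py
import numpy as np

with h5py.File("results.h5", "a") as f:          # hypothetical file name
    data = np.round(np.random.rand(1000), 3)     # one incoming data set
    f.create_dataset("run_0001", data=data, compression="gzip")
```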
37,250,466 | 2016-05-16T09:18:00.000 | 1 | 0 | 1 | 0 | python,unicode,ipython,jupyter-notebook | 37,250,690 | 3 | false | 0 | 0 | Really, it is a matter of personal opinion. Keep in mind that Unicode character support for variable names is ONLY in Python 3, so make sure that any external libraries support Python 3. Other than that, there isn't a reason to say no. | 1 | 2 | 0 | I am getting into machine learning, and to document my code, I will write LaTeX math versions of my functions, right next to the code in an Jupyter/IPython notebook. The mathematical definitions include many Greek symbols, so I thought that I might as well use the Greek symbols in function and variable names, since that's possible in python. Would this be bad practice? | Are unicode identifiers in python bad practice? | 0.066568 | 0 | 0 | 1,265 |
37,252,527 | 2016-05-16T11:11:00.000 | 36 | 0 | 0 | 0 | python,pyspark,py4j | 37,252,533 | 3 | false | 1 | 0 | using the logging module run:
logging.getLogger("py4j").setLevel(logging.ERROR) | 1 | 22 | 0 | Once logging is started in INFO level I keep getting bunch of py4j.java_gateway:Received command c on object id p0 on your logs. How can I hide it? | how to hide "py4j.java_gateway:Received command c on object id p0"? | 1 | 0 | 0 | 6,581 |
37,255,003 | 2016-05-16T13:22:00.000 | 1 | 0 | 0 | 0 | python,iis,gcc,g++,theano | 37,435,858 | 1 | false | 0 | 0 | I've solved the problem. I had two g++ executables in my WinPython environment, at the following paths:
WinPythonDir\python-2.7.10.amd64\Scripts\g++.exe
WinPythonDir\python-2.7.10.amd64\share\mingwpy\bin\g++.exe
Spyder used the correct one (2) and IIS seems to use the one mentioned in 1. I explicitly added the path to 2 to my PATH env variable on IIS. Spyder didn't have 2 in PATH (that's strange), but it used the one mentioned in 2 (I confirmed that after adding logging to the Theano files).
After this my MissingGxx error was gone, but now Theano was unable to create the compilation directory, because IIS uses the System profile and Theano uses that profile to generate the path to the compile dir. It was somewhere in C:\Windows\System32\config\SystemsProfile\Theano\compile_dir and IIS didn't have rights to it (Spyder uses my local USERPROFILE). I changed the default_base_compiledir path in Theano's configdefaults.py and gave IIS rights to access and modify it. I wasn't able to assign rights to the previous compiledir in SystemsProfile because that location is pretty sensitive and the OS restricted me from doing so.
PS: I copied PATH by doing
echo %PATH%
from my WinPython console, concatenated the g++ path mentioned at 2 with it, and added that to the PATH variable on IIS, because the WinPython PATH variable didn't have 2 in it. | 1 | 3 | 1 | I've been trying to deploy my Python Flask app, which uses conv nets via Theano, on local IIS. When I try to load a pickled neural network, I get the following errors:
Unable to Create compiledir.
I solved this by changing compiledir path in configdefaults.py and giving read/write rights to IIS on that directory. Now compiledir gets created.
Now I'm getting MissingGXX error "g++ not available! We can't compile
c code.". G++ is there in my PythonFolder\Scripts and I've added this
path to my environmental variable PATH.
I just want to know what causes this error. Is it because Theano can't find g++ (i.e., it's all about path issues), or does it have something to do with the compiledir lock?
PS: I can run the code from my Winpython console and everything works fine. I've seen the contents of %PATH% and %PYTHONPATH% from my Win python console and replicated the same on my deployed IIS web App.
Here's the header of the stack trace :
(MissingGXX('The following error happened while compiling the node',
Shape_i{0}(input.input), '\n', "g++ not available! We can't compile c
code.", '[Shape_i{0}(input.input)]'), , ( | Theano : MissingGxx , g++ not available | 0.197375 | 0 | 0 | 1,208 |
37,255,548 | 2016-05-16T13:49:00.000 | 7 | 0 | 0 | 1 | python,celery | 54,003,787 | 12 | false | 0 | 0 | I have run celery task using RabbitMQ server.
RabbitMq is better and simple than redis broker
while running celery use this command "celery -A project-name worker --pool=solo -l info"
and avoid this command "celery -A project-name worker --loglevel info" | 5 | 42 | 0 | How to run celery worker on Windows without creating Windows Service? Is there any analogy to $ celery -A your_application worker? | How to run celery on windows? | 1 | 0 | 0 | 46,000 |
37,255,548 | 2016-05-16T13:49:00.000 | 7 | 0 | 0 | 1 | python,celery | 64,753,882 | 12 | false | 0 | 0 | Compile Celery with --pool=solo argument.
Example:
celery -A your-application worker -l info --pool=solo | 5 | 42 | 0 | How to run celery worker on Windows without creating Windows Service? Is there any analogy to $ celery -A your_application worker? | How to run celery on windows? | 1 | 0 | 0 | 46,000 |
37,255,548 | 2016-05-16T13:49:00.000 | 5 | 0 | 0 | 1 | python,celery | 37,277,253 | 12 | true | 0 | 0 | It's done the same way as in Linux. Changing directory to module containing celery task and calling "c:\python\python" -m celery -A module.celery worker worked well. | 5 | 42 | 0 | How to run celery worker on Windows without creating Windows Service? Is there any analogy to $ celery -A your_application worker? | How to run celery on windows? | 1.2 | 0 | 0 | 46,000 |
37,255,548 | 2016-05-16T13:49:00.000 | 4 | 0 | 0 | 1 | python,celery | 60,682,458 | 12 | false | 0 | 0 | You can still use celery 4 0+ with Windows 10+
Just use this command "celery -A projet worker - -pool=solo - l info" instead of "celery - A project worker -l info | 5 | 42 | 0 | How to run celery worker on Windows without creating Windows Service? Is there any analogy to $ celery -A your_application worker? | How to run celery on windows? | 0.066568 | 0 | 0 | 46,000 |
37,255,548 | 2016-05-16T13:49:00.000 | 3 | 0 | 0 | 1 | python,celery | 68,523,385 | 12 | false | 0 | 0 | You can run celery on windows without an extra library by using threads
celery -A your_application worker -P threads | 5 | 42 | 0 | How to run celery worker on Windows without creating Windows Service? Is there any analogy to $ celery -A your_application worker? | How to run celery on windows? | 0.049958 | 0 | 0 | 46,000 |
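For completeness, a minimal app the commands above could point at (the broker URL is an assumption):

```python
# tasks.py -- run with: celery -A tasks worker --pool=solo -l info
from celery import Celery

app = Celery("tasks", broker="amqp://guest@localhost//")  # RabbitMQ broker

@app.task
def add(x, y):
    return x + y
```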
37,255,757 | 2016-05-16T14:00:00.000 | 1 | 0 | 1 | 0 | python | 37,256,251 | 2 | false | 0 | 0 | Your result is becoming too large. Python cannot allocate enough memory to add to it.
More than this is impossible to say without the source code and an explanation of what you're trying to accomplish. | 1 | 0 | 0 | Having an issue in Python and not sure where to start to debug a memory issue. Using the suggestions from answers I have made the changes in code and commented the lines which were previously in place: | Python - memory issue | 0.099668 | 0 | 0 | 87 |
37,258,045 | 2016-05-16T15:51:00.000 | 0 | 0 | 0 | 1 | python,django | 37,266,854 | 6 | false | 1 | 0 | You need to add django to your path variables and then restart the terminal. | 3 | 4 | 0 | I started Django in Mac OS and after installing Django using pip, I tried to initiated a new project using the command django-admin startproject mysite. I get the error -bash: django-admin: command not found. I make quick search in Google and haven't get any solution that works.
How to start a new project using Django using django-admin ? | django-admin command not working in Mac OS | 0 | 0 | 0 | 13,509 |
37,258,045 | 2016-05-16T15:51:00.000 | 0 | 0 | 0 | 1 | python,django | 48,470,351 | 6 | false | 1 | 0 | I know I'm jumping in a little late, but my installations seem to all reside away from /usr/local/bin/... . What worked for me was adding an export path in bash_profile for my django installation.
This also made me realize that it was installed globally. From what I've heard, it's better to install django locally within a venv as you work on different projects. That way each virtual environment can contain its own versions and dependencies for django (and whatever else you're using). Big thanks to @Arefe. | 3 | 4 | 0 | I started Django on Mac OS and, after installing Django using pip, I tried to initiate a new project using the command django-admin startproject mysite. I get the error -bash: django-admin: command not found. I made a quick search on Google and haven't found any solution that works.
How to start a new project using Django using django-admin ? | django-admin command not working in Mac OS | 0 | 0 | 0 | 13,509 |
37,258,045 | 2016-05-16T15:51:00.000 | 6 | 0 | 0 | 1 | python,django | 37,266,721 | 6 | false | 1 | 0 | I solved the issue after reading a webpage about the mentioned issue.
In the Python shell, write the following:
>>> import django
>>> django.__file__
(typing just django also works)
It will print the installation location of django.
Then create a symlink at the new path /usr/local/bin/django-admin.py:
sudo ln -s <the complete path of django-admin.py> /usr/local/bin/django-admin.py
In Mac OS, the call then needs to be django-admin.py startproject mysite rather than django-admin startproject mysite | 3 | 4 | 0 | I started Django on Mac OS and, after installing Django using pip, I tried to initiate a new project using the command django-admin startproject mysite. I get the error -bash: django-admin: command not found. I made a quick search on Google and haven't found any solution that works.
How to start a new project using Django using django-admin ? | django-admin command not working in Mac OS | 1 | 0 | 0 | 13,509 |
37,258,911 | 2016-05-16T16:38:00.000 | 2 | 0 | 1 | 0 | python,numpy,slice,ellipsis | 37,767,201 | 2 | false | 0 | 0 | arr[:,:,1] is fancy indexing used by numpy that selects the first element of the last column in arr. Fancy indexing can only be used in numpy arrays and not in python's traditional lists.
Also, like its pointed out in the comments, a[,:,:,] is a syntax error.
It is helpful because you can select columns easily | 1 | 0 | 1 | I have been reading a very old documentation of Numpy and found out a weird notation which eludes my understanding. The documentation says a[i:...] is a shortcut for a[i,:,:,:].
The documentation being old is very vague and I would welcome any comments.
Thanks,
Prerit | How slicing and ellipsis works in numpy? | 0.197375 | 0 | 0 | 1,494 |
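A quick demonstration of the shorthand (note the documented form is a[i, ...], with a comma):

```python
import numpy as np

a = np.arange(2 * 3 * 4 * 5).reshape(2, 3, 4, 5)
print(np.array_equal(a[1, ...], a[1, :, :, :]))   # True
print(np.array_equal(a[..., 2], a[:, :, :, 2]))   # True
```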
37,265,069 | 2016-05-17T00:05:00.000 | 1 | 0 | 0 | 0 | python,gephi | 37,534,874 | 1 | false | 0 | 0 | I believe I've found a reason for my inability to size nodes in Gephi via a properly formatted gexf file: in the last two versions of Gephi - 0.9.0 and 0.9.1, the Autosize feature of Gephi is "stuck" in the "on" position. So, until that gets fixed, it does not seem possible to size nodes in advance for visualization in Gephi. | 1 | 2 | 0 | My question is perhaps not so much a coding "how to" question than it is asking for insight into the "internal" workings of Gephi.
I have successfully used the gexf spec to add "viz" tags to a gexf file using Python - specifically, "color" and "size" tags. However, when opening the file in Gephi, the node size as rendered in Gephi seems anything but linear - or predictable.
For example, in my gexf file, I can set one node size to 3, and another to 2. When I open the my gexf file in Gephi, the node sized to 3 is readily visible, but the node sized to 2 is barely visible.
I've tried various "remappings" of node size, from linear to natural log to square root in my code that generates the gexf, but nothing seems to make Gephi "behave" in a predictable manner.
Does anyone have any success in using the gexf file format to generate node sizes that Gephi will render in a predictable manner? Thanks in advance. | How does one effectively scale node size in gexf for Gephi | 0.197375 | 0 | 0 | 264 |
37,266,479 | 2016-05-17T03:13:00.000 | 4 | 0 | 0 | 1 | python,django,google-app-engine,google-compute-engine | 37,267,336 | 1 | true | 1 | 0 | DigitalOcean is IaaS (infrastructure as a service). I guess the corresponding offer from Google is Google Compute Engine, GCE.
Google App Engine is more like Heroku, a PaaS offer (platform as a service).
In practice, what is the difference between PaaS and IaaS?
IaaS: if you have a competent system administrator in your team he probably will choose IaaS - because this kind of service will give him more control at the cost of more decisions and setup - but that is his work.
PaaS: if you are willing to pay more (like double it) to avoid most of the management work and don't mind a more opinionated platform, than a PaaS may be the right product for you. You are a programmer and just want to deploy your code (and you are happy to pay an extra in order to avoid dealing with those dickheads in operations).
Probably you can find a more elegant comparison if you google for it. | 1 | 2 | 0 | I am trying to transfer my current server from DO to GCP but not sure what to use between App engine and Compute engine.
Currently using:
django 1.8
postgres (connected using psycopg2)
python 2.7
Thanks in advance! | App engine vs Compute engine : django project | 1.2 | 0 | 0 | 314 |
37,269,117 | 2016-05-17T06:55:00.000 | 0 | 0 | 0 | 0 | java,python,jsp,web,pygame | 37,271,937 | 1 | false | 1 | 0 | The way I see it you have two options:
If network connection is intermittent (not so frequent), you can use javascript (AJAX specifically) to make HTTP calls to the server whenever you need to access your database.
If you are expecting frequent (continuous) requests (i.e. multiplayer games), you would need to keep a connection alive using either
Persistent HTTP request
TCP Socket
Websocket : You probably want to use this if you want cross-browser support.
Let me know if you have any other questions regarding the options above. | 1 | 0 | 0 | So I'm creating this game using pygame, and I have to put it up on a website, but it has to run server-side since the game is going to be using a local database. The client will be able to open a web page and click a button, and the game will run. I can't make the game run completely client-side, because then the game won't be able to connect to my local database.
Any ideas? | Streaming a python game through a jsp | 0 | 0 | 0 | 111 |
37,270,679 | 2016-05-17T08:16:00.000 | 0 | 0 | 0 | 1 | python-2.7,windows-7,bandwidth,ifconfig | 37,288,046 | 1 | false | 0 | 0 | problem solved C:\windows\system32\ipconfig on cmd | 1 | 0 | 0 | i'm still confusing find bandwidth value using ifconfig eth0 , because it's very different if i see tutorial from os linux but my pc still os window 7 ..
Is there any other way to find bandwidth value in window 7 ?
because i must find byte from ifconfig for convert to Kbps | How to get bandwidth from ifconfig eth0 on win7 | 0 | 0 | 0 | 205 |
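If the goal is to read the byte counters from Python on Windows rather than parsing command output, a hedged sketch using psutil (assuming psutil is installed):

```python
import time
import psutil

before = psutil.net_io_counters().bytes_recv
time.sleep(1)
after = psutil.net_io_counters().bytes_recv
print("download: %.1f kbit/s" % ((after - before) * 8 / 1000.0))
```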
37,271,162 | 2016-05-17T08:41:00.000 | 1 | 0 | 0 | 0 | python,scons | 37,272,718 | 1 | true | 0 | 1 | You can't turn off the duplicate method for only a subset of files (based on their extension for example).
Installing a subset of source and target files to a specific directory is usually handled by calling the Install() method. Independent of whether you plan to use duplicate=0 or duplicate=1 for your actual builds, I'd suggest not interfering with what happens in the variant dirs... and just letting SCons do its thing. | 1 | 0 | 0 | In my build I am required to copy header files in a flat structure, while source code files are copied in a hierarchical structure.
By default when specifying duplicate = 1 (my build is a variant dir build) in SCons all header files and .c/.cpp files are duplicated in a hierarchical structure.
Is there a way to deactivate duplication of header files ?
What I tried until now:
To provide an empty list to CPPPATH.
To remove source scanners from my builders.
I want to install them by myself in a separate folder. I don't want to turn off duplication since I need that for .c/.cpp files. | How to deactivate header files duplication in SCons? | 1.2 | 0 | 0 | 82 |
37,275,831 | 2016-05-17T12:11:00.000 | 0 | 0 | 0 | 0 | python,http,ftp,twisted | 37,276,337 | 1 | true | 0 | 0 | In my opinion, you could get this data stream by any protocol you desire (set up your own server using sockets, rest api and so forth), save it as a file and offer it to the customer via FTP.
This is the high-level design. What exactly are you struggling with?
(By the way, keep in mind you should prefer sFTP over FTP.)
Hope this helps. | 1 | 0 | 0 | I'm wondering if anyone can offer me any guidance on this.. hope it's being posted in the right place :|.
I want to have a web server that can initiate a download stream via HTTP and serve it out the other side via the FTP protocol.
So a user of the program would request the file and the server would initiate an http stream from another source and pass this transfer on via FTP back to the user.
Thoughts? | HTTP to FTP data stream conversion? | 1.2 | 0 | 1 | 302 |
37,277,713 | 2016-05-17T13:33:00.000 | 1 | 0 | 0 | 0 | python,algorithm,dynamic-programming | 37,280,915 | 4 | false | 0 | 0 | Dynamically build a table of dimensions 6 x n. The entry table[w_i][d_j] will denote the maximal reachable value when Bob has worked for the last i days consecutively (including today) and it is day j.
The first column is easy to fill in:
table[1][0] = x_0 if Bob decides to work on the first day, all other values are 0 (table[0][0] => Bob doesn't work on the first day, table[2..5][0] => Bob can't work for multiple consecutive days on day 1.)
Go on to complete the table column-by-column according to the following rules:
The maximum value on day j with 0 consecutive days of work is the maximum of any value of the previous day and not working today:
table[0][d_j] = max{ table[0..5][d_j-1] }
The maximum value on day j with 1 consecutive day of work is the maximum of the previous 2 days with no consecutive days of work plus x_j. (It never makes sense to rest more than 2 days, as we could have just worked the day(s) in between.):
table[1][d_j] = max{ table[0][d_j-2], table[0][d_j-1] } + x[d_j]
Otherwise, table[w_i][d_j] = table[w_i-1][d_j-1] + x[d_j].
The solution will be the maximum value in the last column. | 1 | 3 | 1 | The problem:
There is a set of n days that Bob is planning to work, and on each day i there is a mission; each mission lasts exactly one day, must be done on day i on which it is given, and pays Bob x_i dollars. Bob cannot work more than 5 consecutive missions at a time. That is, he must take at least 1 rest day every 5 days.
Given numbers x_1...x_n, on which days should Bob perform missions, and on which days should he rest, in order to make as much money as possible and never work more than 5 days in a row? Your solution should be O(n).
My issue:
I am having trouble coming up with the recurrence for this problem. I have been thinking about this problem for a long time. My original thought was to let p[i] = max{x_i + x_(i-1) + ... + x_(i-4)}, where p[i] is the max profit earnable from days i-4 to i. However, I realize, one, that this does not take into account that the optimal solution might not work on all of those consecutive days, and two, that I am not building off previous solutions.
My Question: Can anyone give me insight on understanding the structure of this problem? I feel like I am just not understanding the key properties that would make the solution easy to see.
Thanks in advance! | Dynamic Programming: Mission Per Day, Scheduling for Maximum Profit | 0.049958 | 0 | 0 | 766 |
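A direct Python translation of the table described in the answer (rows are 0..5 consecutive work days; NEG marks unreachable states):

```python
def max_profit(x):
    n = len(x)
    NEG = float("-inf")
    table = [[NEG] * n for _ in range(6)]
    table[0][0] = 0          # rest on day 0
    table[1][0] = x[0]       # work on day 0
    for j in range(1, n):
        # Rest today: best of any state yesterday.
        table[0][j] = max(t[j - 1] for t in table)
        # First work day after resting.
        prev = [table[0][j - 1]] + ([table[0][j - 2]] if j >= 2 else [])
        table[1][j] = max(prev) + x[j]
        # Continue a work streak (up to 5 consecutive days).
        for w in range(2, 6):
            if table[w - 1][j - 1] > NEG:
                table[w][j] = table[w - 1][j - 1] + x[j]
    return max(t[n - 1] for t in table)

print(max_profit([3, 1, 4, 1, 5, 9, 2, 6]))  # rest on the cheapest day
```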
37,281,218 | 2016-05-17T16:01:00.000 | 1 | 0 | 1 | 0 | python,mysql,json,database | 37,282,081 | 2 | true | 0 | 0 | As your required requirements i preferred you to use NoSQL database e.g Mongowith implementation of redisfor memory data storage that gives you more flexibility and performance.It based on objects so helps you for fetching faster | 2 | 1 | 0 | I'm creating a little own game, and I need to solve the problem: where I need to store player's inventory: in database with JSON(text field) or directly in database tables. Which method consume less RAM, and which is faster at all?
Game server will be written on Python | What is better - store player's inventory in JSON(text field in database) or database directly? | 1.2 | 1 | 0 | 285 |
37,281,218 | 2016-05-17T16:01:00.000 | 1 | 0 | 1 | 0 | python,mysql,json,database | 37,284,304 | 2 | false | 0 | 0 | It's probably better to use database tables directly. That way you can take advantage of other database features such as foreign keys, unique constraints, triggers, and so on. | 2 | 1 | 0 | I'm creating a little own game, and I need to solve the problem: where I need to store player's inventory: in database with JSON(text field) or directly in database tables. Which method consume less RAM, and which is faster at all?
Game server will be written on Python | What is better - store player's inventory in JSON(text field in database) or database directly? | 0.099668 | 1 | 0 | 285 |
37,283,786 | 2016-05-17T18:30:00.000 | 4 | 0 | 1 | 0 | python,python-3.x,division,integer-division | 37,283,878 | 4 | false | 0 | 0 | Floor division will also round down to the next lowest number, not the next lowest absolute value.
6 // 4 = 1.5, which rounds down to 1, and up to 2.
-6 // 4 = -1.5, which rounds down to -2, and up to -1. | 1 | 10 | 0 | The expression 6 // 4 yields 1, where floor division produces the whole number after dividing a number.
But with a negative number, why does -6 // 4 return -2? | Floor division with negative number | 0.197375 | 0 | 0 | 12,821 |
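In code, floor division matches math.floor of the true quotient for both signs:

```python
import math

print(6 // 4, math.floor(6 / 4.0))     # 1 1
print(-6 // 4, math.floor(-6 / 4.0))   # -2 -2
print(divmod(-6, 4))                   # (-2, 2): remainder takes the divisor's sign
```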
37,287,631 | 2016-05-17T23:09:00.000 | 1 | 0 | 0 | 1 | python,google-app-engine | 37,287,718 | 2 | true | 1 | 0 | You can only see which version of your app was deployed and when - unless you deleted the older version. | 1 | 0 | 0 | I've deployed code changes to a GAE app under development and broken the app. My IDE isn't tracking history to the point in time where things still worked, and I didn't commit my code to a repo as often as I updated the app, so I can't be sure what the state of the deployed code was at the point in time when it was working, though I do know a date when it was working. Is there a way to either:
Rollback the app to a specific date?
See what code was deployed at a specific deployment or date?
I see that deployments are logged - I'm hoping that GAE keeps a copy of code for each deployment allowing me to at least see the code or diffs. | Does Google App Engine keep code snapshots of past deployments? | 1.2 | 0 | 0 | 26 |
37,289,244 | 2016-05-18T02:29:00.000 | 1 | 0 | 1 | 0 | c#,python,debugging,visual-studio-2015,ptvs | 37,337,168 | 1 | true | 0 | 0 | There seems to be a bug/issue in PTVS and/or Visual Studio in that the watch window does not realize that the context has switched to Python, unless there is at least one call to a python method in the call stack.
So if the embedded script does:
print ('foo')
the watch window thinks it's still in the C# context.
If the embedded script has this instead, the watch window switches to Python:
def Test():
    print ('foo')
Test() | 1 | 1 | 0 | How does Visual Studio switch between python and C# expressions when debugging a process that mixes both C# an Python by embedding and invoking python interpreter?
For background: My Visual Studio 2015 with PTVS 2.2.2 did not allow me to specify any python expressions in the watch window (on at least two machines), until something switched, and now it only allows using Python expressions in the same watch window (but not C#).
I am not sure what I did, is there a proper way to switch between the two languages?
Once Python expressions started working, the C# expressions now all fall back on 'internal error in expression evaluator' both in watch and immediate window. The whole thing might have been related to me playing around with Python Debug Interactive window, but it feels very ad hoc and I am wondering how to properly configure this. | How to specify language for watch window expressions in a multi-language debugging environment? | 1.2 | 0 | 0 | 55 |
37,291,961 | 2016-05-18T06:36:00.000 | 5 | 1 | 0 | 0 | python,session,ssh,sftp,conceptual | 37,292,169 | 2 | false | 0 | 0 | The (local) operating system closes any pending TCP/IP connection opened by the process, when the process closes (even if it merely crashes).
So in the end the SSH session is closed, even if you do not close it. Obviously, it's closed abruptly, without proper SSH cleanup. So it may trigger some warning in the server log.
Closing the session is particularly important, when the process is long running, but the session itself is used only shortly.
Anyway, it's a good practice to close the session no matter what. | 2 | 4 | 0 | I'm new to Python. Currently I'm using it to connect to remote servers via ssh, do some stuff (including copying files), then close the session.
After each session I do ssh.close() and sftp.close(). I'm just doing this because that's the typical way I found on the internet.
I'm wondering what would happen if I just finishes my script without closing the session. Will that affect the server? Will this make some kind of (or even very little) load on the server? I mean why we are doing this in the first place? | What would happen if I don't close an ssh session? | 0.462117 | 0 | 1 | 1,588 |
37,291,961 | 2016-05-18T06:36:00.000 | 3 | 1 | 0 | 0 | python,session,ssh,sftp,conceptual | 37,292,405 | 2 | false | 0 | 0 | We close session after use so that the clean up(closing of all running processes associated to it) is done correctly/easily.
When you call ssh.close(), it generates a SIGHUP signal. This signal kills all the tasks/processes under the terminal automatically/instantly.
When you abruptly end the session, that is, without the close(), the OS eventually gets to know that the connection is lost/disconnected and initiates the same SIGHUP signal, which closes most open processes/sub-processes.
Even with all that there are possible issues like few processes continue running even after SIGHUP because they were started with a nohup option(or have somehow been disassociated from the current session). | 2 | 4 | 0 | I'm new to Python. Currently I'm using it to connect to remote servers via ssh, do some stuff (including copying files), then close the session.
After each session I do ssh.close() and sftp.close(). I'm just doing this because that's the typical way I found on the internet.
I'm wondering what would happen if I just finish my script without closing the session. Will that affect the server? Will it put some kind of (even very small) load on the server? I mean, why are we doing this in the first place? | What would happen if I don't close an ssh session? | 0.291313 | 0 | 1 | 1,588
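A sketch of tidy cleanup with paramiko (host and credentials are placeholders); try/finally guarantees the close even if the copy fails:

```python
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("example.com", username="user", password="secret")  # placeholders
try:
    sftp = client.open_sftp()
    try:
        sftp.get("/remote/file.txt", "local.txt")
    finally:
        sftp.close()
finally:
    client.close()
```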
37,292,381 | 2016-05-18T07:01:00.000 | 3 | 0 | 1 | 1 | python,windows,macos,wxpython,cross-platform | 37,292,590 | 3 | true | 0 | 0 | You can not generate Windows executable files on OS X. You must use the platform which you want the program to run on to compile the program. If you own a copy of Windows, you could run a virtual machine on you Mac and compile it on that. | 1 | 2 | 0 | I am programming python with my Mac laptop, however, the final executable will be run on final users' Windows system. And the Windows system hasn't set up the python environment specifically.
Can I generate an executable file on my Mac laptop, and Windows user can directly run on Windows system?
I looked at py2exe, but it seems it must build the python on Windows in order to run the exe on Windows. | Can I generate a python executable file on my Mac that can be used on Windows | 1.2 | 0 | 0 | 2,758 |
37,296,105 | 2016-05-18T09:53:00.000 | 6 | 0 | 1 | 0 | python,pycharm | 37,296,638 | 2 | false | 0 | 0 | You can go to the last visited method with Alt+← (left arrow) and again to the next last, but i have no idea how to show a list of them. | 1 | 6 | 0 | I like the ctrl+e "recent files" dialog of PyCharm and use it often.
Is there a way to list the recent methods I visited during the last minutes too? | PyCharm: Navigate to recent method? | 1 | 0 | 0 | 151 |
37,297,917 | 2016-05-18T11:13:00.000 | 0 | 0 | 1 | 0 | python,pyqt | 37,298,956 | 2 | false | 0 | 1 | Managed it thanks to the link above. Uninstalled Python 3.5 and set my PATH variable to C:\Python34. Downloaded PyInstaller and installed it using pip. Then navigated to Python34/Scripts and dragged myFile.py (the one to be made into an .exe) into it. Ran pyinstaller.exe --windowed myFile.py to create the exe, which then went to my dist folder. Created a shortcut and it worked perfectly. | 2 | 0 | 0 | I'm creating a program that I would like to use as a normal program as well as continue to code on the side. To do this I first tried creating a shortcut of the .py file in my PyCharm project folder and sent it to the desktop. When I double-clicked the shortcut, the command prompt would open for a second and then close. It's a PyQt4 program, so I'm not sure if this has any bearing. The program has been coded in Python 3.4. I've noticed that when I open the command prompt and type 'python' it shows Python 3.5 for some reason, so I'm not sure if this has any bearing on the situation.
If you've ever programmed in C#: I'd like to be able to build a solution and then rebuild it when I've updated the code, so that I can access the program like a normal program while continuing to improve its code.
Thanks for any help. | How can I run my Python program like a normal program from the desktop? | 0 | 0 | 0 | 883 |
37,297,917 | 2016-05-18T11:13:00.000 | 0 | 0 | 1 | 0 | python,pyqt | 37,299,236 | 2 | false | 0 | 1 | Go to your environmental variables (Right click on Computer > Properties > Advanced system settings > Environment Variables...). Find Path in System variables, select it, and click edit. Remove the Python 3.5 path and replace it with your python 3.4 or virtual environment folder that has python.exe in it.
Make a shortcut on your desktop that points to the .py file that you are editing.
If you have all of the dependencies right, then double-clicking the .py file's shortcut should run your program.
Otherwise you can pip install cx_Freeze and use cx_Freeze like setuptools: create a setup.py file and build the executable.
If you want to install this executable, I suggest using Inno Setup. It is pretty straightforward to use and has an easy wizard that helps you build a basic installer. | 2 | 0 | 0 | I'm creating a program that I would like to use as a normal program as well as continue to code on the side. To do this I first tried creating a shortcut of the .py file in my PyCharm project folder and sent it to the desktop. When I double-clicked the shortcut, the command prompt would open for a second and then close. It's a PyQt4 program, so I'm not sure if this has any bearing. The program has been coded in Python 3.4. I've noticed that when I open the command prompt and type 'python' it shows Python 3.5 for some reason, so I'm not sure if this has any bearing on the situation.
If you've ever programmed in C#: I'd like to be able to build a solution and then rebuild it when I've updated the code, so that I can access the program like a normal program while continuing to improve its code.
Thanks for any help. | How can I run my Python program like a normal program from the desktop? | 0 | 0 | 0 | 883 |
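A minimal cx_Freeze setup.py sketch for the approach in the second answer (the app name and version are illustrative; myFile.py is the script from the question):
# setup.py -- build with: python setup.py build
from cx_Freeze import setup, Executable
setup(
    name='MyApp',
    version='0.1',
    description='PyQt4 application',
    executables=[Executable('myFile.py', base='Win32GUI')],  # Win32GUI hides the console window
)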
37,298,803 | 2016-05-18T11:52:00.000 | 0 | 0 | 0 | 1 | python,macos | 41,800,751 | 3 | false | 0 | 0 | To install the cryptography package on Mac OS El Capitan, as explained in the cryptography installation docs:
env LDFLAGS="-L$(brew --prefix openssl)/lib" CFLAGS="-I$(brew --prefix openssl)/include" pip install cryptography | 1 | 0 | 0 | I tried
sudo pip install cryptography
And the error message is
Collecting cryptography
Using cached cryptography-1.3.2-cp27-none-macosx_10_6_intel.whl
Requirement already satisfied (use --upgrade to upgrade): cffi>=1.4.1 in /Library/Python/2.7/site-packages (from cryptography)
Requirement already satisfied (use --upgrade to upgrade): pyasn1>=0.1.8 in /Library/Python/2.7/site-packages (from cryptography)
Collecting setuptools>=11.3 (from cryptography)
Using cached setuptools-21.0.0-py2.py3-none-any.whl
Requirement already satisfied (use --upgrade to upgrade): six>=1.4.1 in /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python (from cryptography)
Requirement already satisfied (use --upgrade to upgrade): idna>=2.0 in /Library/Python/2.7/site-packages (from cryptography)
Requirement already satisfied (use --upgrade to upgrade): ipaddress in /Library/Python/2.7/site-packages (from cryptography)
Requirement already satisfied (use --upgrade to upgrade): enum34 in /Library/Python/2.7/site-packages (from cryptography)
Requirement already satisfied (use --upgrade to upgrade): pycparser in /Library/Python/2.7/site-packages (from cffi>=1.4.1->cryptography)
Installing collected packages: setuptools, cryptography
Found existing installation: setuptools 1.1.6
Uninstalling setuptools-1.1.6:
Exception:
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/pip/basecommand.py", line 215, in main
status = self.run(options, args)
File "/Library/Python/2.7/site-packages/pip/commands/install.py", line 317, in run
prefix=options.prefix_path,
File "/Library/Python/2.7/site-packages/pip/req/req_set.py", line 736, in install
requirement.uninstall(auto_confirm=True)
File "/Library/Python/2.7/site-packages/pip/req/req_install.py", line 742, in uninstall
paths_to_remove.remove(auto_confirm)
File "/Library/Python/2.7/site-packages/pip/req/req_uninstall.py", line 115, in remove
renames(path, new_path)
File "/Library/Python/2.7/site-packages/pip/utils/__init__.py", line 267, in renames
shutil.move(old, new)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 299, in move
copytree(src, real_dst, symlinks=True)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 208, in copytree
raise Error, errors
Error: [('/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.py', '/tmp/pip-p7Ywro-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.py', "[Errno 1] Operation not permitted: '/tmp/pip-p7Ywro-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.py'"), ('/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.pyc', '/tmp/pip-p7Ywro-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.pyc', "[Errno 1] Operation not permitted: '/tmp/pip-p7Ywro-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.pyc'"), ('/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.py', '/tmp/pip-p7Ywro-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.py', "[Errno 1] Operation not permitted: '/tmp/pip-p7Ywro-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.py'"), ('/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.pyc', '/tmp/pip-p7Ywro-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.pyc', "[Errno 1] Operation not permitted: '/tmp/pip-p7Ywro-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.pyc'"), ('/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib', '/tmp/pip-p7Ywro-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib', "[Errno 1] Operation not permitted: '/tmp/pip-p7Ywro-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib'")]
Then I searched some posts and tried
brew install pkg-config libffi openssl
Warning: pkg-config-0.28 already installed
Warning: libffi-3.0.13 already installed
Warning: openssl-1.0.2d_1 already installed
and
CFLAGS="-I/usr/local/opt/openssl/include" sudo pip install cryptography==0.8
I got this error message:
src/cryptography/hazmat/bindings/__pycache__/_Cryptography_cffi_f3e4673fx399b1113.c:217:10: fatal error: 'openssl/aes.h' file not found
#include <openssl/aes.h>
^
1 error generated.
error: command 'cc' failed with exit status 1
Command "/usr/bin/python -u -c "import setuptools, tokenize;__file__='/private/tmp/pip-build-MxT6op/cryptography/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-G6b8Y_-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /private/tmp/pip-build-MxT6op/cryptography/
I also tried
brew install pkg-config libffi openssl
env LDFLAGS="-L$(brew --prefix openssl)/lib" CFLAGS="-I$(brew --prefix openssl)/include" pip install cryptography
and got this
Found existing installation: setuptools 1.1.6
Uninstalling setuptools-1.1.6:
Exception:
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/pip/basecommand.py", line 215, in main
status = self.run(options, args)
File "/Library/Python/2.7/site-packages/pip/commands/install.py", line 317, in run
prefix=options.prefix_path,
File "/Library/Python/2.7/site-packages/pip/req/req_set.py", line 736, in install
requirement.uninstall(auto_confirm=True)
File "/Library/Python/2.7/site-packages/pip/req/req_install.py", line 742, in uninstall
paths_to_remove.remove(auto_confirm)
File "/Library/Python/2.7/site-packages/pip/req/req_uninstall.py", line 115, in remove
renames(path, new_path)
File "/Library/Python/2.7/site-packages/pip/utils/__init__.py", line 267, in renames
shutil.move(old, new)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 299, in move
copytree(src, real_dst, symlinks=True)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 208, in copytree
raise Error, errors
Error: [('/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.py', '/tmp/pip-aYpqDT-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.py', "[Errno 1] Operation not permitted: '/tmp/pip-aYpqDT-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.py'"), ('/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.pyc', '/tmp/pip-aYpqDT-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.pyc', "[Errno 1] Operation not permitted: '/tmp/pip-aYpqDT-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/__init__.pyc'"), ('/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.py', '/tmp/pip-aYpqDT-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.py', "[Errno 1] Operation not permitted: '/tmp/pip-aYpqDT-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.py'"), ('/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.pyc', '/tmp/pip-aYpqDT-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.pyc', "[Errno 1] Operation not permitted: '/tmp/pip-aYpqDT-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib/markers.pyc'"), ('/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib', '/tmp/pip-aYpqDT-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib', "[Errno 1] Operation not permitted: '/tmp/pip-aYpqDT-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/_markerlib'")]
Please help me to get this fixed. Thanks a lot. | OSX El Capitan python install cryptography fail | 0 | 0 | 0 | 930 |
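A possible workaround, not from the answers above: the "[Errno 1] Operation not permitted" lines come from pip trying to modify Apple's system Python under /System, which El Capitan's System Integrity Protection blocks. Installing into a virtualenv, so pip never touches /System, avoids that; a sketch:
pip install --user virtualenv
python -m virtualenv ~/venv
source ~/venv/bin/activate
env LDFLAGS="-L$(brew --prefix openssl)/lib" CFLAGS="-I$(brew --prefix openssl)/include" pip install cryptography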
37,299,089 | 2016-05-18T12:05:00.000 | 0 | 0 | 0 | 0 | python,python-3.x,pycharm,turtle-graphics | 37,306,132 | 1 | true | 0 | 0 | The turtle.py module is unusual. To make it easier for beginning programmers to use, I assume, all methods of the Turtle class are also available as top level functions that operate on the default (unnamed) turtle instance. All methods of the Screen class are also available as top level functions that operate on the default (sole) screen instance.
This doesn't help you know which methods PyCharm can/will autocomplete but when you call turtle.Screen(), you aren't creating an instance of the Screen class, you're just getting a pointer to the existing instance.
Because of its unusual design, I feel that turtle.py is one module that can violate the no import * rule since it's aimed at beginners. Doing this may help PyCharm distinguish between functions and methods and do the right thing. Or not. | 1 | 0 | 0 | It's possible to call turtle.mainloop() or turtle.exitonclick() without creating instance of Turtle or Screen class.
But PyCharm seems to have trouble auto-completing these functions after pressing Ctrl + Space.
I noticed the turtle module has a function called _make_global_funcs, which I think makes class methods callable using module.function syntax.
How can I know which methods are available for direct use and which ones are not? If I can directly call turtle.exitonclick(), then I don't have to create an instance of the Screen class. | Why can't PyCharm autocomplete turtle.mainloop() function? How to know which function from module is available to use? | 1.2 | 0 | 0 | 483
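A quick illustration of those generated module-level functions:
import turtle
turtle.forward(100)   # module-level function acting on the default (unnamed) turtle
turtle.exitonclick()  # module-level function acting on the sole Screen instance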
37,302,892 | 2016-05-18T14:40:00.000 | 0 | 0 | 1 | 0 | python,excel,python-2.7,xlwings | 37,306,695 | 1 | true | 0 | 0 | I figured out that when passing a cusip through excel I left the single quotes in the cell, for example I passed '/cusip/...@BGN' instead of /cusip/...@BGN. Not sure why this changed the value so drastically but everything is calculating correctly now | 1 | 0 | 0 | I am pulling data from Bloomberg through TIA to python in order to calculate beta. When I run the code in the Python Compiler I get a correct calculation. When I hard code a cusip into the Python code, I also get the right calculation. However, when I pass the cusip through excel to python I get an incorrect value. The value seems to have no correlation with any calculations done. Is there a wrapper I need to provide in order to correctly pass data from Excel to Python other than the @xw.func identifier? For those who do not know the cusip needs to look something like this for TIA, '/cusip/abcde123@BGN' Where abcde123 are 8 characters representing the cusip and BGN is the pricing location. | xlwings calculation incorrect when passing value to UDF | 1.2 | 0 | 0 | 64 |
37,304,354 | 2016-05-18T15:43:00.000 | 2 | 0 | 1 | 0 | python,winapi,pywin32 | 37,307,651 | 1 | false | 0 | 0 | Figured it out: win32api is part of win32, so I just changed the import to win32.win32api. | 2 | 1 | 0 | I downloaded pywin32 (using pip) and want to use win32api; however, the module doesn't exist. win32 and win32com exist, but not win32api. How can I access win32api (specifically, I want to be able to access the GetVolumeInformation function)? | Win32 vs Win32api | 0.379949 | 0 | 0 | 936
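Once the import resolves, usage looks like this (a sketch; the drive letter is a placeholder):
import win32api
# returns (volume label, serial number, max component length, flags, filesystem name)
print(win32api.GetVolumeInformation('C:\\'))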
37,308,794 | 2016-05-18T19:44:00.000 | 0 | 0 | 0 | 1 | google-app-engine-python | 44,471,398 | 2 | false | 1 | 0 | Create the project in your Google App Engine account by specifying the
application identifier and title (say you have given your application identifier
as helloworld).
Then go to the Google App Engine Launcher and make sure the application name in your
app.yaml
matches the identifier you created in your Google App Engine account, and then deploy it. | 1 | 0 | 0 | I'm new to this and trying to deploy a first app to App Engine. However, when I try to, I get this message:
"This application does not exist (app_id=u'udacity')."
I fear it might have to do with the app.yaml file, so I'll just leave here what I have there:
application: udacity
version: 1
runtime: python27
api_version: 1
threadsafe: yes
handlers:
- url: /favicon.ico
  static_files: favicon.ico
  upload: favicon.ico
- url: /.*
  script: main.app
libraries:
- name: webapp2
  version: "2.5.2"
Thanks in advance. | Deploying app to Google App Engine (python) | 0 | 0 | 0 | 52 |
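For reference (per the answer above), the application line in app.yaml must name an identifier that actually exists in your App Engine account; your-registered-app-id below is a placeholder:
application: your-registered-app-id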
37,310,576 | 2016-05-18T21:42:00.000 | 2 | 0 | 0 | 0 | python,regex | 37,310,823 | 2 | false | 0 | 0 | Regular expressions try to match as many characters as possible (informally called "maximal munch").
A plain-English description of your regex .*\/posts\/(.*)[/?]+.* would be something like:
Match anything, followed by /posts/, followed by anything, followed by one or more /?, followed by anything.
When we apply that regex to this text:
.../posts/d78fa5da-4cbb-43b5-9fae-2b5c86f883cb/uid/7034
... the maximal munch rule demands that the second "anything" match be as long as possible, therefore it ends up matching more than you wanted:
d78fa5da-4cbb-43b5-9fae-2b5c86f883cb/uid
... because there is still the /7034 part remaining, which matches the remainder of the regex.
The best way to fix it is to use a regex which only matches characters that can actually occur in a UUID (as suggested by @alecxe). | 1 | 1 | 0 | I want to extract UUIDs from URLs.
for example:
/posts/eb8c6d25-8784-4cdf-b016-4d8f6df64a62?mc_cid=37387dcb5f&mc_eid=787bbeceb2
/posts/d78fa5da-4cbb-43b5-9fae-2b5c86f883cb/uid/7034
/posts/5ff0021c-16cd-4f66-8881-ee28197ed1cf
I have thousands of strings of this kind.
My regex now is ".*\/posts\/(.*)[/?]+.*"
which gives me results like this:
d78fa5da-4cbb-43b5-9fae-2b5c86f883cb/uid
84ba0472-926d-4f50-b3c6-46376b2fe9de/uid
6f3c97c1-b877-40e0-9479-6bdb826b7b8f/uid
f5e5dc6a-f42b-47d1-8ab1-6ae533415d24
f5e5dc6a-f42b-47d1-8ab1-6ae533415d24
f7842dce-73a3-4984-bbb0-21d7ebce1749
fdc6c48f-b124-447d-b4fc-bb528abb8e24
As you can see, my regex can't get rid of /uid, though it handles the ?xxxx query parameters fine.
What did I miss? How can I make it right?
Thanks | extract uuid from url | 0.197375 | 0 | 1 | 3,800 |
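A sketch of that fix: constrain the captured group to UUID characters so the greedy .* can't swallow the /uid/... tail:
import re
url = '/posts/d78fa5da-4cbb-43b5-9fae-2b5c86f883cb/uid/7034'
m = re.search(r'/posts/([0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12})', url)
if m:
    print(m.group(1))  # d78fa5da-4cbb-43b5-9fae-2b5c86f883cb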
37,310,589 | 2016-05-18T23:44:00.000 | 1 | 0 | 1 | 1 | python-2.7 | 37,423,139 | 1 | false | 0 | 0 | I work in Windows only, and I have discovered that while UNIX and Mac filenames use forward slashes for file paths, in Python string literals backslashes are used as escape characters. \b means "backspace".
By writing the path with an r prefix, like this: (r'M:\projects\EGU\BS\bsofab.txt', 'w'), it works perfectly.
r is for "raw" and makes Python treat the backslashes literally instead of as escape sequences.
I did find some more complicated solutions, but this was by far the easiest to implement. | 1 | 0 | 0 | If I run this code I get the error below:
fout = open ('M:\projects\EGU\BS\bsofab.txt', 'w')
IOError: [Errno 22] invalid mode ('w') or filename: 'M:\projects\EGU\BS\x08sofab.txt'
If I change the file name to ('M:\projects\EGU\BS\ofbsab.txt', 'w') it works fine. Can someone please tell me what is going on?
thanks | I am opening a file for writing but the file name I have chosen creates an error 22: | 0.197375 | 0 | 0 | 41 |
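A quick interpreter demonstration of what's happening (output shown with explanatory comments):
>>> 'M:\projects\EGU\BS\bsofab.txt'
'M:\\projects\\EGU\\BS\x08sofab.txt'    # \b became the backspace character \x08
>>> r'M:\projects\EGU\BS\bsofab.txt'
'M:\\projects\\EGU\\BS\\bsofab.txt'     # raw string: every backslash kept literally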
37,311,172 | 2016-05-18T22:33:00.000 | 3 | 0 | 0 | 0 | python,xml,django | 37,311,205 | 1 | true | 1 | 0 | You might want to consider using PycURL or Twisted. These should have the asynchronous capabilities you're looking for. | 1 | 1 | 0 | Using the form I create several strings that look like XML data. One part of these strings I need to send to several servers using urllib, and another part to a SOAP server, for which I use the suds library. When I receive the responses, I need to compare all of this data and show it to the user. The number of servers is currently nine and can grow. When I make these requests successively, it takes a lot of time. Given this, I have a question: is there some Python library that can make different requests at the same time? Thank you for your answer. | Django sending xml requests at the same time | 1.2 | 0 | 1 | 112
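Besides PycURL/Twisted, a lighter sketch using a plain thread pool from the standard library (Python 2 urllib2; the URLs are placeholders):
import urllib2
from multiprocessing.dummy import Pool  # thread pool, not processes

urls = ['http://server1/endpoint', 'http://server2/endpoint']

def fetch(url):
    return urllib2.urlopen(url).read()

pool = Pool(len(urls))
responses = pool.map(fetch, urls)  # all requests run concurrently
pool.close()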
37,311,699 | 2016-05-18T23:33:00.000 | 3 | 0 | 0 | 0 | python,numpy,pandas,machine-learning,multi-index | 37,311,742 | 3 | false | 0 | 0 | If you need labelled arrays and pandas-like smart indexing, you can use the xarray package, which is essentially an n-dimensional extension of the pandas Panel (Panels are being deprecated in pandas in favour of xarray).
Otherwise, it may sometimes be reasonable to use plain numpy arrays, which can be of any dimensionality; you can also have arbitrarily nested numpy record arrays of any dimension. | 1 | 6 | 1 | What is the best way to store and analyze high-dimensional data in Python? I like the pandas DataFrame and Panel, where I can easily manipulate the axes. Now I have a hyper-cube (dim >= 4) of data. I have been thinking of things like a dict of Panels, or tuples as Panel entries. I wonder if there is a high-dim panel thing in Python.
update 20/05/16:
Thanks very much for all the answers. I have tried MultiIndex and xarray; however, I am not able to comment on either of them. For my problem I will try to use a plain ndarray instead, as I found the labels are not essential and I can save them separately.
update 16/09/16:
I ended up using MultiIndex. The ways to manipulate it are pretty tricky at first, but I have kind of gotten used to it now. | High-dimensional data structure in Python | 0.197375 | 0 | 0 | 2,444
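A small sketch of the MultiIndex approach the asker settled on (level names and labels are made up):
import numpy as np
import pandas as pd

# 4-D labelled data flattened into a Series with a 4-level MultiIndex
index = pd.MultiIndex.from_product(
    [['a', 'b'], [2015, 2016], ['x', 'y', 'z'], [0, 1]],
    names=['group', 'year', 'item', 'run'])
data = pd.Series(np.random.randn(len(index)), index=index)

print(data.loc['a', 2016])         # slice away the two outer dimensions
print(data.xs('y', level='item'))  # slice on an inner level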
37,311,877 | 2016-05-18T23:57:00.000 | 0 | 0 | 1 | 1 | python,machine-learning,scikit-learn,windows-10,anaconda | 68,398,311 | 3 | false | 0 | 0 | I had the same issue. The "internal or external command" error can be solved in just 3 steps:
(Before doing this, make sure hidden folders are visible.)
Find the path of the Scripts folder on your local computer (e.g. C drive --> Users --> (a folder with your name) --> AppData --> Local --> keep going until you find Scripts).
Copy that path.
Open "Edit the system environment variables" --> go to Path --> add new --> paste the copied Scripts path and click OK. Close all the prompts and run again.
It worked for me | 2 | 2 | 0 | I'm having trouble importing Machine Learning algorithms from scikit-learn.
I have it installed, but whenever I type, for example, "from sklearn.naive_bayes import GaussianNB", it says "'from' is not recognized as an internal or external command, operable program or batch file."
I'm using Anaconda on Windows 10. Is it compatibility issue? Am I missing something? Idk I'm still new to Python so I feel lost. Thanks | 'From/import' is not recognized as an internal or external command, operable program or batch file | 0 | 0 | 0 | 39,591 |
37,311,877 | 2016-05-18T23:57:00.000 | 3 | 0 | 1 | 1 | python,machine-learning,scikit-learn,windows-10,anaconda | 37,311,900 | 3 | false | 0 | 0 | That needs to be run in the Python REPL, not at a command line. Be sure to start one before typing Python statements. | 2 | 2 | 0 | I'm having trouble importing Machine Learning algorithms from scikit-learn.
I have it installed, but whenever I type, for example, "from sklearn.naive_bayes import GaussianNB", it says "'from' is not recognized as an internal or external command, operable program or batch file."
I'm using Anaconda on Windows 10. Is it compatibility issue? Am I missing something? Idk I'm still new to Python so I feel lost. Thanks | 'From/import' is not recognized as an internal or external command, operable program or batch file | 0.197375 | 0 | 0 | 39,591 |
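In other words, start the interpreter first, then type the import (a minimal transcript):
C:\> python
>>> from sklearn.naive_bayes import GaussianNB
>>> clf = GaussianNB()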
37,312,465 | 2016-05-19T01:22:00.000 | 0 | 1 | 0 | 0 | python,python-3.x,hbase,thrift,happybase | 40,366,307 | 1 | false | 1 | 0 | Thrift servers are generally only run in a trusted network. That said, Thrift can run over SSL, but support in happybase is limited because no one has stepped up to properly design and implement an API for it. Feel free to contribute. | 1 | 0 | 0 | I'm using happybase to access HBase. However, the only parameter I need is the host name. How does Thrift work without authentication? How do I add security to my code? | How do I add authentication/security to access HBase using happybase? | 0 | 0 | 0 | 410
37,312,734 | 2016-05-19T01:56:00.000 | 1 | 0 | 1 | 0 | python,regex | 37,312,823 | 2 | false | 0 | 0 | In REGEX:
[] will error with: Incomplete character class
] matches the character ] literally
[ will error with: Incomplete character class
This is a parsing limitation common to many bracketed notations, and it is not Python-specific. If you start a meta-sequence ([ <- open bracket), the parser has to match it to a closing bracket or raise an error, whereas a stray ] with no open class to close is simply treated as a literal character. | 1 | 0 | 0 | So if I have a regex, e.g. '[ab]c]', then I expect re.compile() to throw an error about a missing '[', but instead it treats the latter ']' literally and matches 'ac]' as a correct string.
I do not follow this behavior and am thus unable to add validation for the regexes that a user can input.
Please help. | Python re.compile uses incorrect ] literally rather than throwing error | 0.099668 | 0 | 0 | 136 |
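A quick demonstration of both behaviors:
import re
p = re.compile('[ab]c]')   # compiles: the trailing ']' is taken as a literal
print(p.match('ac]'))      # a match object -- 'ac]' matches
re.compile('[ab')          # raises sre_constants.error: the '[' opens a class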
37,313,320 | 2016-05-19T03:10:00.000 | 1 | 0 | 0 | 0 | python,audio | 37,313,404 | 2 | false | 0 | 0 | I handle this by using Matlab; Python can do the same: (left_channel + right_channel) / 2.0 | 2 | 1 | 1 | I am playing around with some audio processing in Python. Right now I have the audio as a 2x(Large Number) numpy array. I want to combine the channels since I only want to try some simple stuff. I am just unsure how I should do this mathematically. At first I thought this is kind of like converting an RGB image to gray-scale, where you would average each of the color channels to create a gray pixel. Then I thought that maybe I should add them due to the superposition principle of waves (then again, averaging is just adding and dividing by two). Does anyone know the best way to do this? | How to convert two channel audio into one channel audio | 0.099668 | 0 | 0 | 3,971
37,313,320 | 2016-05-19T03:10:00.000 | 1 | 0 | 0 | 0 | python,audio | 37,313,414 | 2 | true | 0 | 0 | To convert any stereo audio to mono, what I have always seen is the following:
For each pair of left and right samples:
Add the values of the samples together in a way that will not overflow
Divide the resulting value by two
Use this resulting value as the sample in the mono track; make sure to round it properly if you are converting it to an integer value from a floating-point value | 2 | 1 | 1 | I am playing around with some audio processing in Python. Right now I have the audio as a 2x(Large Number) numpy array. I want to combine the channels since I only want to try some simple stuff. I am just unsure how I should do this mathematically. At first I thought this is kind of like converting an RGB image to gray-scale, where you would average each of the color channels to create a gray pixel. Then I thought that maybe I should add them due to the superposition principle of waves (then again, averaging is just adding and dividing by two). Does anyone know the best way to do this? | How to convert two channel audio into one channel audio | 1.2 | 0 | 0 | 3,971
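Since the audio is already a 2 x n numpy array, both answers reduce to one line (the random signal below is a stand-in for real data):
import numpy as np
audio = np.random.randn(2, 44100)   # placeholder stereo signal, shape (2, n_samples)
mono = audio.mean(axis=0)           # average the two channels sample by sample
# for integer sample formats, widen first to avoid overflow when adding:
# mono = ((audio[0].astype(np.int32) + audio[1]) // 2).astype(audio.dtype)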
37,318,490 | 2016-05-19T08:59:00.000 | 0 | 0 | 0 | 0 | python,django,uwsgi,grpc | 38,935,407 | 1 | false | 1 | 0 | As of grpcio 0.15.0 clients no longer need to be closed. | 1 | 0 | 0 | I am writing a Django project with uwsgi(multiple progress and multiple threading mode) using grpc.
Right now, when the server exits, the grpc Python client cannot be closed because it uses threading and overrides the exit function. So I have to make a cleanup function that closes the grpc client when the uwsgi server is reloaded or exits, and I wish to organise it so that this function is called automatically when the server quits.
Any help would be greatly appreciated. | Django with uwsgi(multiple progress and multiple threading): How to run a function when server exits? | 0 | 0 | 0 | 641 |
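Although the accepted answer notes that grpcio >= 0.15.0 clients no longer need closing, uWSGI does expose an exit hook if you still need one; a sketch (grpc_channel is a stand-in for whatever you need to clean up):
try:
    import uwsgi

    def cleanup():
        grpc_channel.close()   # hypothetical: release your grpc client here

    uwsgi.atexit = cleanup     # uWSGI calls this when the worker exits or reloads
except ImportError:
    pass                       # not running under uWSGI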
37,319,345 | 2016-05-19T09:37:00.000 | 1 | 0 | 1 | 0 | python,user-interface,go,communication,core | 37,328,359 | 1 | true | 0 | 1 | There are many ways you can tackle this but what I would recommend is having one of the parts (GUI/Core) as the main application that does all of the initializations and starts the other part. I would recommend using the core for this.
Here's a sample architecture you can use, though the architecture you choose is highly dependent on the application and your goals.
The core runs first, performing initialization actions including starting the GUI, sets up the communication with the GUI (using pipes, sockets, etc.), then waits for commands from the GUI. If the GUI signals to close, the core can perform whatever cleanup is necessary and then exit. In this scenario, the lifetime of the exe is controlled by the GUI (the GUI sends a signal to the core when the user hits the exit button, to let the core know it should exit).
If the core starts the GUI, then it can set up the STDIN/STDOUT pipes for it and listen for commands on STDOUT while sending results on STDIN. You can also take the server approach, having the core listen on a socket while the GUI sends requests to it and waits for a response. With the server approach you can have some sort of concurrency, unlike the serial pipes, but I think it might be slower than the pipes (the difference might be negligible, but it's hard to say without knowing what exactly you're doing). | 1 | 2 | 0 | I am working on a GUI-based application that is developed using Python and Go. I am using Python (+ Kivy) to implement the UI and Go to implement the middleware/core on Windows OS.
My problem statement is:
1) I want to run the exe of the core on launching the application and it should remain in the background till my application is closed.
2) When an event is triggered from the application, a command is sent to the core, which in turn executes the command on the remote device and returns the result of the command execution.
I want to know how I can control the lifetime of the exe and how I can establish communication between the UI and the core.
Any ideas!! | Communication between UI and Core on windows machine | 1.2 | 0 | 0 | 105 |
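A minimal sketch of the pipe-based variant on the GUI side (core.exe and the 'quit' command are hypothetical protocol details):
import subprocess

# launch the Go core once and keep pipes to it for the lifetime of the GUI
core = subprocess.Popen(['core.exe'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)

def send_command(cmd):
    core.stdin.write(cmd + '\n')
    core.stdin.flush()
    return core.stdout.readline().strip()   # block until the core replies

def shutdown():                             # call from the GUI's close handler
    send_command('quit')                    # hypothetical: core exits on 'quit'
    core.wait()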
37,327,132 | 2016-05-19T15:01:00.000 | 1 | 0 | 0 | 0 | python,facebook,facebook-graph-api | 37,327,933 | 1 | true | 0 | 0 | No, that is not possible.
What is visible/accessible to you as a user on facebook.com or in their official apps has little correlation with what you can see via the API.
If you want to access anyone's likes via the API, then they have to log in to your app first and grant it the appropriate permission. | 1 | 0 | 0 | I know that in the API users must allow the app permissions to use their specific user data.
But as a user, I have friends who have their own likes. Is there a possible way to query another user's or a friend's profile (using their user ID) to get a list of their likes via the Graph API?
I.e., because they've decided to friend me, they've given me permission on a user basis to access their data on Facebook, so can I query this via the Graph API as myself instead of as an external app? | How to use Facebook graph api as myself instead of external app, to query friend's data? | 1.2 | 0 | 1 | 38
37,328,773 | 2016-05-19T16:12:00.000 | 3 | 1 | 1 | 1 | python,plugins,jenkins | 37,616,786 | 2 | true | 0 | 0 | As far as my experiments with Jenkins and Python go, the ShiningPanda plug-in doesn't install Python on slave machines; in fact, it uses the existing Python installation set in the Jenkins configuration to run Python commands.
In order to install Python on slaves, I would recommend using a Python virtual environment, which comes along with ShiningPanda and allows you to run the Python commands and then close the virtual environment. | 1 | 7 | 0 | The Jenkins ShiningPanda plugin provides a Manage Jenkins - Configure System setting for Python installations... which includes the ability to Install automatically. This should allow me to automatically set up Python on my slaves.
But I'm having trouble figuring out how to use it. When I use the Add Installer drop down it gives me the ability to
Extract .zip/.tar.gz
Run Batch Command
Run Shell Command
But I can't figure out how people use these options to install Python. Especially as I need to install Python on Windows, Mac, & Linux.
Other plugins like Ant provide an Ant installations... option which installs Ant automatically. Is this possible with Python? | How to configure the Jenkins ShiningPanda plugin Python Installations | 1.2 | 0 | 0 | 4,515
37,330,662 | 2016-05-19T17:51:00.000 | 0 | 0 | 0 | 0 | python,authorization,access-control,rbac | 37,330,762 | 1 | false | 1 | 0 | From a UX standpoint, Option A is better. If Abe has to perform a variety of tasks, then he may have to log out and log in multiple times under different roles, increasing his cognitive load and potentially delaying the completion of his tasks.
From a development standpoint, Option B may be slightly easier, if only because with Option A you must track the user's permissions in a slightly more complex way on the backend, but I'd largely consider this negligible.
I personally would go with Option A for these reasons. | 1 | 0 | 0 | In RBAC, a user can have multiple roles. Say a user named Abe has a couple of roles, Driver and Mechanic. Abe wants to check whether the truck needs an oil change (an operation only Mechanics can do).
Option A) When Abe logs in to the application (mobile app or webpage), the backend authenticates Abe and determines that he has two roles. The application iteratively checks each of the roles and sees which role has the permission to do that operation.
Option B) When Abe logs in to the application (mobile app or webpage), he chooses a role along with it. The backend authenticates Abe and uses the role sent by Abe to check whether he has the permission to do that operation.
Or is there a better way to choose a role for an action performed by the user?
Thanks. | Which Role to choose for an User | 0 | 0 | 0 | 29
37,331,559 | 2016-05-19T18:40:00.000 | 0 | 0 | 0 | 0 | python,plot,bokeh,glyph | 37,333,670 | 1 | true | 0 | 0 | It's always possible to use .add_glyph directly, but it is a bit of a pain. The feature to add all the "glyph methods" e.g. .circle, .rect, etc. is in GitHub master, and will be available in the upcoming 0.12 release. | 1 | 1 | 1 | Is it possible to annotate or add markers of any form to Bokeh charts (specifically Bar graphs)? Say I want to add an * (asterisks) on top of a bar; is this possible? | Annotating bokeh chart plot | 1.2 | 0 | 0 | 240 |
37,333,887 | 2016-05-19T20:58:00.000 | 2 | 0 | 1 | 0 | python,temporary-files | 37,333,919 | 1 | true | 0 | 0 | tempfile.NamedTemporaryFile() doesn't return the filename, only the handle. You need to access the name attribute in order to get the filename. | 1 | 1 | 0 | I haven't found a solution to this by searching existing questions, so here goes:
New-ish to python. Trying to make a temporary file using the tempfile package. Here is the line of code that is failing with a ValueError:
(temp_file, self.bucket) = tempfile.NamedTemporaryFile(suffix='.py', prefix='Custom_', dir=[mydir], delete=False)
I'm getting this ValueError when I run my script:
ValueError: need more than 0 values to unpack
Why? | Getting a ValueError when making a NamedTemporaryFile in python | 1.2 | 0 | 0 | 73 |
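For reference, the working pattern looks like this (mydir is a placeholder directory path):
import tempfile
tmp = tempfile.NamedTemporaryFile(suffix='.py', prefix='Custom_', dir=mydir, delete=False)
path = tmp.name   # the filename lives on the .name attribute; note dir takes a string, not a list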
37,334,162 | 2016-05-19T21:18:00.000 | 2 | 0 | 0 | 1 | ipython,jupyter,jupyter-notebook,jupyterhub,jupyter-console | 37,334,386 | 1 | false | 0 | 0 | Turns out that when you edit /etc/bash.bashrc on the machine to include the PATH variable, it then persists in the Jupyterhub terminal | 1 | 2 | 0 | When I open a terminal on JupyterHub and try to run commands that I've placed in the machine's path, it says "command not found."
The default PATH variable on Jupyterhub terminal seems to be /sbin:/usr/sbin:/bin:/usr/bin:/usr/local/bin and I'm wondering why it doesn't use the PATH as defined on the machine. Is there a way to get it to inherit this PATH, or to run some command like source /etc/environment whenever a terminal is generated? | Jupyterhub terminal path variable | 0.379949 | 0 | 0 | 600 |
37,336,002 | 2016-05-20T00:08:00.000 | 0 | 0 | 0 | 0 | python,qt,qcombobox | 37,339,381 | 1 | true | 0 | 1 | The easiest way is to use setAutoCompletion(False) on your QComboBox. | 1 | 0 | 0 | I have a QComboBox and say I type the word "info" in the box, followed by "INFO".
Why doesn't it remember both as two distinct words, instead of converting "INFO" to "info"?
How can I go about solving this issue so that both words would end up in the list? Also, the back-end is in Python.
Note: I would prefer a solution without setDuplicatesEnabled.
I read the documentation and searched the web with no luck, and I cannot figure this one out. Most answers on SO were outdated (5+ years). | QComboBox: same words with different cases are seen as duplicates | 1.2 | 0 | 0 | 47
37,338,448 | 2016-05-20T05:18:00.000 | 4 | 0 | 1 | 0 | python,memory-management,cpu-usage,diskspace | 37,338,506 | 1 | false | 0 | 0 | No, virtual environments are a solution for managing Python library paths. A virtualenv is not a virtualization system and does not enforce any restrictions on the process. | 1 | 2 | 0 | By definition, virtualenv is a tool to create isolated Python environments. virtualenv creates a folder which contains all the necessary executables to use the packages that a Python project would need.
Is there a way to create an environment that can enforce restrictions on resources: CPU, memory and disk usage? | Can we setup cpu memory requirement with python virtualenv | 0.664037 | 0 | 0 | 1,111
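If per-process limits are what you're after, they come from the OS rather than from virtualenv; a minimal sketch using the standard-library resource module (Unix only; the 1 GiB figure is arbitrary):
import resource
soft = hard = 1024 * 1024 * 1024          # 1 GiB address-space cap
resource.setrlimit(resource.RLIMIT_AS, (soft, hard))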
37,340,848 | 2016-05-20T07:43:00.000 | 0 | 0 | 0 | 0 | javascript,jquery,python,raspberry-pi,rfid | 37,479,489 | 3 | false | 1 | 0 | To update page data without delay you need to use websockets.
There is no need to use heavy frameworks.
When the page is first loaded, you open a websocket with JS and listen on it.
Every time you read a tag, you post all the necessary data to this open socket and it instantly appears on the client side. | 1 | 0 | 0 | So I'm using a Raspberry Pi 2 with an RFID scanner and wrote a script in Python that logs people in and out of our attendance system, connects to our PostgreSQL database and returns some data, like how much overtime they have and whether their action was a login or a logout.
This data is meant to be displayed on a very basic webpage (that is not even on a server or anything) that just serves as a graphical interface to display said data.
My problem is that I cannot figure out how to dynamically display that data that my python script returns on the webpage without having to refresh it. I'd like it to simply fade in the information, keep it there for a few seconds and then have it fade out again (at which point the system becomes available again to have someone else login or logout).
Currently I'm using BeautifulSoup4 to edit the Html File and Chrome with the extension "LivePage" to then automatically update the page which is obviously a horrible solution.
I'm hoping someone here can point me in the right direction as to how I can accomplish this in a comprehensible and reasonably elegant way.
TL;DR: I want to display the results of my python script on my web page without having to refresh it. | Dynamically update static webpage with python script | 0 | 0 | 0 | 1,371 |
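A minimal broadcast sketch, assuming Python 3.5+ and the third-party websockets package (neither is named in the answer); host, port and the broadcast trigger are placeholders:
import asyncio
import websockets

clients = set()

async def handler(websocket, path):
    clients.add(websocket)
    try:
        await websocket.recv()            # keep the connection open
    finally:
        clients.remove(websocket)

async def broadcast(message):             # call this whenever a tag is scanned
    for ws in clients:
        await ws.send(message)

asyncio.get_event_loop().run_until_complete(websockets.serve(handler, 'localhost', 8765))
asyncio.get_event_loop().run_forever()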
37,342,486 | 2016-05-20T09:04:00.000 | 0 | 0 | 1 | 0 | python,anaconda,jupyter | 37,367,182 | 2 | false | 0 | 0 | These comments are based on a windows 10 install.
Are you sure Jupyter Notebook did not install? Go to the command prompt and type "jupyter notebook" without the quotes. In my case that launched the notebook. If that doesn't work, type "conda update jupyter" without the quotes and try launching the notebook again. If that doesn't work, you may have to re-install Anaconda.
As for launching the notebook offline: simply use the command prompt, go to the directory/folder you want the notebook to launch in (for example, I launch using a folder on a flash drive), and type "jupyter notebook". Hope this helps. | 1 | 0 | 0 | I want to learn Python; hence, I'm a newbie.
I tried to install Jupyter offline. I downloaded Anaconda3 for Windows, installed it, then ran the command conda install jupyter, but nothing happened; perhaps it tries to go online to fetch some dependency files, etc.
Please guide me through an offline installation of Jupyter for Python.
thanks in advance
maddy | Installing anaconda3 offline | 0 | 0 | 0 | 520 |
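A concrete two-machine recipe (assuming you can briefly use a machine that does have internet access):
pip download jupyter -d ./jupyter_pkgs                         # on the online machine
pip install --no-index --find-links=./jupyter_pkgs jupyter     # on the offline machine, after copying the folder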
37,346,844 | 2016-05-20T12:32:00.000 | 0 | 0 | 0 | 1 | python-2.7,kivy,startup,python-idle | 48,363,806 | 1 | false | 0 | 1 | The reason for a subprocess startup error can be a new Python file that you have created recently with the same name as an existing library or module,
e.g. 're.py', 'os.py', etc.,
because re and os are predefined libraries.
Just go and find the file and rename it;
hopefully the error will be resolved. | 0 | 0 | 0 | I have just installed Kivy in Python 2.7.11. After installing it, whenever I try to open IDLE, it gives a subprocess startup error.
Actually, I installed Kivy on my Windows 7 PC through the command prompt. After installation, I copied Kivy programs from my Android tablet to run them on my PC. When I tried to open them, IDLE didn't respond for some time, and after a while it gave the startup error. Since then IDLE is not starting.
It is quite strange, though, that on running a Python built-in module there is no error.
I have reinstalled Python, but still there is no change. | Python IDLE giving startup error | 0 | 0 | 0 | 566
37,351,300 | 2016-05-20T16:15:00.000 | 0 | 0 | 0 | 1 | python,django,url-routing | 37,352,987 | 1 | false | 1 | 0 | After doing more research and talking with coworkers, I realized that reverse does exactly what I need. | 1 | 1 | 0 | I need to redirect my clients to another endpoint in my django app. I know I can use relative urls with request.build_absolute_uri() to do this, but I am searching for a generic solution that doesn't require the redirecting handler to know its own place in the URL hierarchy.
As an example, I have handlers at the following two URLs:
https://example.com/some/other/namespace/MY_APP/endpoint_one
https://example.com/some/other/namespace/MY_APP/foo/bar/endpoint_two
Both handlers need to redirect to this URL:
https://example.com/some/other/namespace/MY_APP/baz/destination_endpoint
I would like for endpoint_one and endpoint_two to both be able to use the exact same logic to redirect to destination_endpoint.
My app has no knowledge of the /some/other/namespaces/ part of the URL, and that part of the URL can change depending on the deployment (or might not be there at all in a development environment).
I know I could use different relative URLs from each endpoint and redirect to the destination URL. However, that requires that the handlers for endpoint_one and endpoint_two know their relative position in the URL hierarchy, which is something I am trying to avoid. | Can I tell where my django app is mounted in the url hierarchy? | 0 | 0 | 0 | 33
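A sketch of the reverse() approach the asker settled on ('destination' is a hypothetical URL pattern name for destination_endpoint; in Django >= 1.10 the import moves to django.urls):
from django.core.urlresolvers import reverse
from django.shortcuts import redirect

def endpoint_one(request):
    return redirect(reverse('destination'))   # resolves the full mounted path at runtime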
37,352,294 | 2016-05-20T17:14:00.000 | 1 | 0 | 1 | 0 | python | 37,386,212 | 1 | true | 0 | 0 | A simple solution is to ask the logging module to log to syslog (typically /dev/log, which won't block your application) locally (so your application is not bound to your logging system: it's still portable), then let rsyslog (I prefer syslog-ng personally) transmit the logs to a main log server.
Another solution is to use a tool like Logstash to push your logs to an Elasticsearch server/cluster so you can browse and graph them easily. In this case, if your log lines are JSON objects, it's a big win, because on the Elasticsearch side (typically via Kibana) you'll be able to query, filter, and aggregate on fields of your JSON documents: typically graphing info vs warnings vs errors, frequency of errors per file, per user, etc. | 1 | 0 | 0 | I need advice on the organization of logging in a clustered application written in Python (asyncio). The applications use the logging module and store logs in a local file.
Viewing the logs across 3 servers is uncomfortable.
I would like to use rsyslog, but I fear that it will block the application. Another way would be to use aioredis (push to a channel) and have another application collect the data into a single file. | asyncio logging for cluster app | 1.2 | 0 | 0 | 382
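A minimal sketch of the local-syslog approach (Linux path shown; /dev/log is a datagram socket, so the application doesn't block on a remote server):
import logging
import logging.handlers

handler = logging.handlers.SysLogHandler(address='/dev/log')
logging.getLogger().addHandler(handler)
logging.getLogger().setLevel(logging.INFO)
logging.info('hello from the cluster app')   # rsyslog/syslog-ng forwards it onward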
37,352,895 | 2016-05-20T17:49:00.000 | 1 | 0 | 1 | 0 | python,multidimensional-array | 37,352,954 | 1 | false | 0 | 0 | NumPy arrays actually do allow non-numerical contents, so you can just use NumPy. | 1 | 0 | 1 | I would like to make a numpy-like multi-dimensional array of non-numerical objects in Python. I believed NumPy arrays only allowed numerical values. Lists of lists are much less convenient to index; for example, I'd like to be able to ask for myarray[1,:,2], which requires much more complicated calls with lists of lists.
Is there a good tool for this? | Python arrays for objects | 0.197375 | 0 | 0 | 42 |
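A short illustration (the stored dict is arbitrary):
import numpy as np

myarray = np.empty((2, 3, 4), dtype=object)   # holds arbitrary Python objects
myarray[1, 0, 2] = {'any': 'object'}
print(myarray[1, :, 2])                       # numpy-style slicing works as usual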
37,353,915 | 2016-05-20T18:51:00.000 | 2 | 0 | 1 | 1 | python,python-2.7,batch-file,ipython,python-idle | 37,354,377 | 3 | true | 0 | 0 | Why not run the script directly from the command line with "python script_file.py"? | 1 | 0 | 0 | I have a Python script which creates some images. The script runs in IPython/IDLE as expected; however, when I call IDLE from the cmd line and include the name of the script, the script loads but does not execute. If I hit F5 (Run Module) the program runs, but I wonder if it is possible to make the script run without having to press F5. | I need to run a python script in IDLE from a BAT file | 1.2 | 0 | 0 | 1,630
37,354,292 | 2016-05-20T19:16:00.000 | 0 | 1 | 0 | 0 | python,unity3d,blender | 39,065,689 | 2 | false | 0 | 0 | Another approach is to compute a CRC-like signature on meaningful values (mesh geometry, materials, whatever it is you change often) and store that somewhere (in each object as a custom property, for instance).
Then you can easily skip objects whose signatures did not change since last export. | 1 | 2 | 0 | I have a Blender file called Assets.blend containing over 100 objects for a game I'm developing in Unity.
Whenever I make modifications, I run a script that exports each root object as a separate fbx file.
However I have no way of detecting which ones have been updated, so every time I have to re-export every single object even though I've only created/modified 1.
The time it takes to run the script is about 10 seconds, but then Unity detects the changes and spends over 30 seconds processing mostly unchanged prefabs.
How can I improve my script so that it knows which objects have been altered since the last export?
There does not appear to be any date_modified variable for objects or meshes. | Detect changes in Blender object for more efficient export script | 0 | 0 | 0 | 649 |
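A sketch of that signature idea in Blender's Python API (export_fbx stands in for your existing per-object export; hashing only vertex coordinates is an assumption, extend it to materials etc. as needed):
import hashlib
import bpy

def mesh_signature(obj):
    # CRC-like signature over the mesh geometry (vertex coordinates)
    h = hashlib.md5()
    for v in obj.data.vertices:
        h.update(str(tuple(v.co)).encode())
    return h.hexdigest()

for obj in bpy.data.objects:
    if obj.type != 'MESH':
        continue
    sig = mesh_signature(obj)
    if obj.get('export_sig') != sig:   # changed since the last export
        # export_fbx(obj)              # hypothetical: your existing per-object export
        obj['export_sig'] = sig        # stored on the object as a custom property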
37,357,318 | 2016-05-20T23:34:00.000 | 0 | 0 | 0 | 1 | python,pymssql | 37,366,489 | 1 | false | 0 | 0 | No, pymssql does not use ODBC. | 1 | 0 | 0 | I want to use pymssql on a 24/7 Linux production app and am worried about stability. As soon as I hear ODBC I start to have reservations, especially on Linux.
Does pymssql use ODBC, or does it go straight to FreeTDS? | Does Linux pymssql use ODBC? | 0 | 1 | 0 | 184
37,361,976 | 2016-05-21T10:54:00.000 | 1 | 1 | 1 | 1 | python,installation | 47,179,582 | 1 | false | 0 | 0 | Type .\python.exe followed by the script name, for example:
PS C:\Python27> .\python.exe .\PRACTICE.py
Typing only python will not be recognized by Windows. | 1 | 1 | 0 | I've installed Python and added its path ("C:\Python27") to the system variables, but when typing "python" into PowerShell I get the error mentioned in the title. I also can't run it from cmd.
And yes, my Python folder is in the C directory. | The term 'python' is not recognized as the name of a cmdlet | 0.197375 | 0 | 0 | 4,380
37,367,028 | 2016-05-21T19:11:00.000 | 0 | 0 | 1 | 0 | python | 37,367,050 | 2 | false | 0 | 0 | Yes - pip. Running pip freeze will give you the list of all the installed modules. Then all you need to do is install all those modules by running pip install -r your_file_with_modules_list. | 1 | 0 | 0 | I will format my PC and I would like to somehow collect all the Python modules that I currently have and package them (zip or rar, etc.), or create an index file of them, so that when I'm done formatting the PC I can reinstall them all in one go, either by using the package or by using the created index to pip install them all in a batch.
Is there any Python module that allows me to do that? | Reinstall python with modules | 0 | 0 | 0 | 167
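Concretely, the two commands are:
pip freeze > requirements.txt        # before formatting: save the module list
pip install -r requirements.txt      # after reinstalling Python: restore them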
37,368,441 | 2016-05-21T22:00:00.000 | 3 | 1 | 1 | 1 | python,c++,autotools,automake,distutils | 37,369,042 | 1 | true | 0 | 0 | Based on your description, I would suggest that you have your project built using stock autotools-generated configure and Makefile, i.e. autoconf and automake, and have either your configure or your Makefile take care of executing your setup.py, in order to set up your Python bits.
I have a project that's mostly C/C++ code, together with a Perl module. This is very similar to what you're trying to do, except that it's Perl instead of Python.
In my Makefile (generated from Makefile.am) I have a target that executes the Perl module's Makefile.PL, which is analogous to Python's setup.py, and in that manner I build the Perl module together with the rest of the C++ code, seamlessly together, as a single build. Works fairly well.
automake's Makefile.am is very open-ended and flexible, and can be easily adapted and extended to incorporate foreign bits like these. | 1 | 2 | 0 | I'm working on transitioning a project from SCons to autotools, since it seems to automatically generate a lot of features that are annoying to write in a SConscript (e.g. make uninstall).
The project is mostly C++ based, but also includes some modules that have been written in Python. After a lot of reading about autotools, I can finally create a shared library, compile and link an executable against it, and install C++ header files. Lovely. Now comes the Python part. By including AM_PATH_PYTHON in configure.ac, I'm also installing the Python modules with Makefile.am files such as
autopy_PYTHON=autopy/__init__.py autopy/noindent.py autopy/auto.py
submoda_PYTHON=autopy/submoda/moda.py autopy/submoda/modb.py autopy/submoda/modc.py autopy/submoda/__init__.py
submodb_PYTHON=autopy/submodb/moda.py autopy/submodb/modb.py autopy/submodb/modc.py autopy/submodb/__init__.py
autopydir=$(pythondir)/autopy
submodadir=$(pythondir)/submoda
submodbdir=$(pythondir)/submodb
dist_bin_SCRIPTS=scripts/script1 scripts/script2 scripts/script3
This seems to place all my modules and scripts in the appropriate locations, but I wonder if this is "correct", because the way to install python modules seems to be through a setup.py script via distutils. I do have setup.py scripts inside the python modules and scons was invoking them until I marched in with autotools. Is one method preferred over the other? Should I still be using setup.py when I build with autotools? I'd like to understand how people usually resolve builds with c++ and python modules using autotools. I've got plenty of other autotools questions, but I'll save those for later. | Are there any disadvantages in using a Makefile.am over setup.py? | 1.2 | 0 | 0 | 580 |
37,369,691 | 2016-05-22T01:35:00.000 | 0 | 0 | 0 | 0 | python,django,django-forms,django-admin | 37,629,111 | 1 | false | 1 | 0 | There is only one requirement: the select field must get a list of values or a list of (value, description) tuples; saving those options is, of course, the second part of the problem.
How you do this is up to you; you need to find a solution that fits your needs and skills.
The simplest is to have a secondary table, referenced from the primary table as a foreign key; you can create an admin for this model very quickly and it will work straightforwardly (problems will start when you get a lot of choices...).
Another option is to use any Django module that provides "dynamic settings"; some of them have options to store lists, some provide only simple objects, but even then you can store comma-separated lists in a text field.
Another option is to store the data in files (even a .py file that can be imported).
Each of those options will be fine. You have the tools; now choose wisely! | 1 | 1 | 0 | I have a contact form that is posting values to a model called Enquiry. The Enquiry model has an attribute named enquiry_type which is a dropdown select box on the form. The current options are 'general enquiry' & 'request a call back'.
My question is what is the best practice to allow an admin to add to this dropdown? I was thinking of creating another model named EnquiryType with an attribute .text, & iterate through this in the form? This would allow an admin to create new objects with a .text value of their choice. E.g they could add 'Request an email quote' to the dropdown.
Is this correct thinking, or is there a simpler/more accepted protocol? Sorry, I'm still new to Django! | Allowing a Django-admin user to add an option to a dropdown | 0 | 0 | 0 | 728
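A minimal sketch of the foreign-key option from the answer, using the model names from the question (field details are illustrative):
from django.db import models

class EnquiryType(models.Model):
    text = models.CharField(max_length=100)

    def __str__(self):
        return self.text

class Enquiry(models.Model):
    enquiry_type = models.ForeignKey(EnquiryType)   # rendered as a dropdown; admins add rows to EnquiryType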
37,372,854 | 2016-05-22T09:57:00.000 | 0 | 0 | 1 | 0 | python,python-2.7,permissions,chmod | 37,373,078 | 2 | false | 0 | 0 | Use os.chmod() (you will need import os and import stat):
os.chmod(path, stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH)  # read-only for user, group and others | 1 | 1 | 0 | I need to change the permissions of an existing folder to be "without permission" - meaning it cannot be read or written.
Is there a way to do this in Python (Windows)? | How to change permission of folder to not read/write in python | 0 | 0 | 0 | 1,126
37,374,206 | 2016-05-22T12:11:00.000 | 0 | 0 | 0 | 0 | python,django,django-models,django-channels | 37,388,078 | 1 | false | 1 | 0 | As mentioned by knbk, restarting the worker processes made the changes show up on my admin portal. That was the only thing I hadn't tried. | 1 | 1 | 0 | I have implemented django-channels. Earlier I was using Apache to serve the Django application, but now Channels uses Daphne (a server) to serve my application. After adding two new models to the models.py file, I migrated the changes to the database. I also registered the models in the admin.py file.
Even so, the models are not showing up in the Django-admin panel.
I tried the following:
Stopped Daphne process.
Started Apache server. The Admin panel started showing the new models.
Stopped Apache server. Started Daphne on port 80. This time the admin panel did not show the new models.
I am wondering what might be the cause. As far as I can guess, whenever the application is served by Apache, the updated files are used, whereas whenever the application is served by Django Channels (Daphne), the old configuration (without the new models) is used.
I would appreciate any help solving this issue. How can I make Django Channels (Daphne) reflect the changes (the new models) in my Django admin console? | Django-Channels - /admin/ portal not displaying new models created | 0.197375 | 0 | 0 | 205
37,375,019 | 2016-05-22T13:32:00.000 | 3 | 0 | 1 | 1 | python,linux,pip,fedora,pyperclip | 37,375,107 | 1 | true | 0 | 0 | you can use python3 -m pip install pyperclip | 1 | 1 | 0 | I'll start by saying I am a complete novice and am likely overlooking something obvious. Don't assume I have any idea about anything related to linux or python.
Anyway, I installed Python 3.5 onto my computer, which runs Fedora 23. Fedora comes prepackaged with 2.7. When I installed 3.5, I somehow installed it into my /home/user/Documents directory. I have since deleted that directory with rm -r -f /home/user/Documents/Python-3.5.1. Yet I can still open 3.5 when I type python3. Originally I created an alias to point to the python command in the /home/user/Documents/Python-3.5.1 directory, so being able to still open 3.5 after deleting that directory and removing the alias is confusing, and must mean I had two Python 3.5 installs. That's some backstory that isn't really my problem, but maybe it's related.
The issue I'm having is that I cannot install a module that I want to import for use in a Python 3.5 program.
When I type pip install pyperclip (I'm working through AutomateTheBoringStuff) pyperclip is installed for 2.7. If I open the python2.7 command line and type import pyperclip everything is fine, but if I try the same thing in the python3.5 command line I get an error saying the module does not exist.
I assume this is because pip installs the pyperclip module to the subdirectories associated with 2.7. How can I install modules for 3.5 using pip? | pip works for python2.7 but not 3.5 | 1.2 | 0 | 0 | 800 |
37,378,730 | 2016-05-22T19:30:00.000 | 0 | 0 | 1 | 0 | python | 37,378,811 | 3 | false | 0 | 0 | Python 3.5 packages do not work with Python 2.7, and vice versa.
And I don't think you can install packages from Linux onto Windows at all. | 2 | 0 | 0 | I'm a newbie to Python programming.
I have a Windows 7 laptop, and I'm also running Ubuntu in VirtualBox.
I manually installed python3.5 on Win7 through Eclipse first. The default python version in my Virtualbox Ubuntu is 2.7.11.
Is it possible for me to install python packages through my VirtualBox to my Win7 python3.5?
Will having multiple versions of Python cause any problems? | Virtual box python different python version | 0 | 0 | 0 | 1,880
37,378,730 | 2016-05-22T19:30:00.000 | 0 | 0 | 1 | 0 | python | 37,379,150 | 3 | false | 0 | 0 | The Ubuntu VirtualBox is not just a linux-looking frontend to your existing Win7 install. It's a completely separate OS that runs as if it were on a separate computer. You can set up shared folders to share data between the host and guest, but trying to use Linux to install programs on Windows would be error-prone and risky even if you could do it.
If you plan to be using Python on Windows, you'll need to install the packages on Windows, and a Linux VirtualBox isn't going to help you with that at all. If you're uncomfortable with the Windows command line, read up on some tutorials. There are also some alternative Windows shells that provide some advantages over the built-in CMD.
If you'd rather work on Linux, you can do that, but you'd want to do it entirely inside the VirtualBox. Inside Linux, you could install the version of Python you want, install the packages you want, and use whatever dev tools you want, as if you were working on a Linux computer. (Although in this case I have to wonder why you aren't actually using a Linux computer instead of a Windows one.) | 2 | 0 | 0 | I'm a newbie to Python programming.
I have a Windows 7 laptop, and I'm also running Ubuntu in VirtualBox.
I manually installed python3.5 on Win7 through Eclipse first. The default python version in my Virtualbox Ubuntu is 2.7.11.
Is it possible for me to install python packages through my VirtualBox to my Win7 python3.5?
Will having multiple versions of Python cause any problems? | Virtual box python different python version | 0 | 0 | 0 | 1,880