Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | DISCREPANCY | Tags | ERRORS | A_Id | API_CHANGE | AnswerCount | REVIEW | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | DOCUMENTATION | Question | Title | CONCEPTUAL | Score | API_USAGE | Database and SQL | Networking and APIs | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
38,626,409 | 2016-07-28T03:00:00.000 | 0 | 0 | 1 | 0 | 0 | python,python-2.7,pip | 1 | 38,627,143 | 0 | 3 | 0 | false | 0 | 0 | Try navigating to ~/Python[version]/Scripts in cmd, then use pip[version] [command] [module] (i.e. pip3 install themodulename or pip2 install themodulename) | 2 | 0 | 0 | 0 | I keep trying to install pip using get-pip.py and only get the wheel file in the Scripts folder. When I try running "pip" in the command prompt, it just comes out with an error. Running Windows 8 in case you need it.
Edit: the error is 'pip' is not recognized as an internal or external command... | Python pip installation not working how to do? | 0 | 0 | 1 | 0 | 0 | 17,163 |
38,626,409 | 2016-07-28T03:00:00.000 | 1 | 0 | 1 | 0 | 0 | python,python-2.7,pip | 1 | 38,627,346 | 0 | 3 | 0 | false | 0 | 0 | If you are using the latest version of Python:
In Computer Properties, go to Advanced System Settings -> Advanced tab -> Environment Variables.
In the System variables section, there is a variable called PATH. Append c:\Python27\Scripts (note: append, not replace).
Then open a new command prompt and try "pip". | 2 | 0 | 0 | 0 | I keep trying to install pip using get-pip.py and only get the wheel file in the Scripts folder. When I try running "pip" in the command prompt, it just comes out with an error. Running Windows 8 in case you need it.
Edit: the error is 'pip' is not recognized as an internal or external command... | Python pip installation not working how to do? | 0 | 0.066568 | 1 | 0 | 0 | 17,163 |
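The PATH change above can be sanity-checked from Python itself; a minimal sketch (the c:\Python27\Scripts entry is the one the answer tells you to append):

```python
import os

# List the directories the OS searches for executables; after appending
# c:\Python27\Scripts and opening a NEW prompt, it should show up here.
for entry in os.environ.get("PATH", "").split(os.pathsep):
    print(entry)
```

If the Scripts directory is missing from this list in a fresh prompt, the PATH edit did not take effect.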
38,657,054 | 2016-07-29T10:55:00.000 | 3 | 0 | 0 | 0 | 0 | python-2.7,pdf,jupyter-notebook | 0 | 41,493,134 | 0 | 2 | 0 | false | 1 | 0 | When I want to save a Jupyter Notebook I right-click the mouse, select Print, then change Destination to Save as PDF. This does not save the analysis outputs, though. So if I want to save a regression output, for example, I highlight the output in Jupyter Notebook, right-click, Print, Save as PDF. This process creates fairly nice-looking documents with code, interpretation and graphics all in one. There are programs that allow you to save more directly, but I haven't been able to get them to work. | 1 | 6 | 1 | 0 | I am doing some data science analysis in Jupyter and I wonder how to get all the output of my cell saved into a PDF file?
thanks | how to save jupyter output into a pdf file | 0 | 0.291313 | 1 | 0 | 0 | 19,046 |
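A scriptable alternative to the browser print dialog (not from the answer above) is nbconvert, which keeps cell outputs too; a hedged sketch where "notebook.ipynb" is a placeholder, and the PDF target would additionally require a LaTeX install:

```python
import os
import shutil
import subprocess

# Convert a notebook, outputs included, without the browser print dialog.
# Guarded so this is a no-op when jupyter or the notebook file is absent.
if shutil.which("jupyter") and os.path.exists("notebook.ipynb"):
    subprocess.run(["jupyter", "nbconvert", "--to", "html", "notebook.ipynb"],
                   check=True)
else:
    print("jupyter or notebook.ipynb not found; run this in your project directory")
```

Swapping `--to html` for `--to pdf` gives a PDF directly when LaTeX is available.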
38,672,768 | 2016-07-30T10:15:00.000 | 2 | 0 | 1 | 0 | 1 | ipython,anaconda,seaborn,conda | 1 | 38,698,060 | 0 | 1 | 0 | true | 0 | 0 | conda is a command line tool, not a Python function. You should be typing these commands in a bash (or tcsh, etc.) shell, not in the IPython interpreter. | 1 | 1 | 0 | 0 | I've recently tried to install seaborn on ipython, which the latter was installed using anaconda. However, when I ran conda install seaborn, i was returned with a syntax error. I tried again with conda install -c anaconda seaborn=0.7.0 this time but syntax error was returned again. Apologies for my limited programming knowledge, but could anyone provide advice on how to resolve this issue? | Syntax error while installing seaborn using conda | 0 | 1.2 | 1 | 0 | 0 | 494 |
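To illustrate the accepted answer's point: conda belongs in a terminal, though a Python script may shell out to it; a sketch (guarded so it is a no-op when conda is absent):

```python
import shutil
import subprocess

# 'conda install seaborn' is a shell command; typed at the IPython prompt it is
# a SyntaxError. From Python code you would have to shell out instead:
if shutil.which("conda"):
    subprocess.run(["conda", "--version"], check=True)  # proves the shell sees it
else:
    print("conda is not on PATH; run 'conda install seaborn' in a bash/cmd shell")
```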
38,688,707 | 2016-07-31T21:54:00.000 | 0 | 0 | 1 | 0 | 1 | python,python-2.7,sublimetext3,virtualenv | 0 | 38,689,463 | 0 | 1 | 0 | false | 0 | 0 | Solved. In ST3, use Virtualenv: Add Directory instead of Virtualenv: New. The latter creates a new virtualenv (hence the new Scripts folder). | 1 | 0 | 0 | 0 | I'm trying to run Python scripts inside a virtualenv from Sublime Text 3. When I activate the virtualenv in ST3 and choose the .py, ST3 creates a Scripts folder inside the preexisting Scripts folder (for a new .py). What is causing this problem and how do I stop this from happening?
Following are the detailed steps I follow:
Create `virtualenv Somevenv` from CMD
Navigate to `Somevenv\Scripts`
activate
pip install somePackage
Select Virtualenv:New (Virtualenv: Activate does nothing)
Paste \path\to\Someenv\Scripts under Virtualenv Path
Select c:\Python27
ST3 does its thing and produces this message:
New python executable in C:\Users\Gandalf\Documents\Python_Virtual_Env\Legolas\Scripts\Scripts\python.exe
Installing setuptools, pip, wheel...done.
As you see, ST3 creates a Scripts folder inside the previous Scripts folder. As a result, the packages installed in step 4 are not used. I want to stop the creation of this second Scripts folder. | Sublime Text3 creates Scripts inside Scripts folder inside virtualenv | 0 | 0 | 1 | 0 | 0 | 36 |
38,696,575 | 2016-08-01T10:30:00.000 | 1 | 0 | 1 | 0 | 0 | python,python-2.7 | 0 | 38,696,887 | 0 | 4 | 0 | false | 0 | 0 | Python itself does not include any facilities to allow the programmer direct access to memory. This means that sadly (or happily, depending on your outlook) the answer to your question is "no". | 2 | 8 | 0 | 0 | I know python is a high level language and manages the memory allocation etc. on its own user/developer doesn't need to worry much unlike other languages like c, c++ etc.
but is there a way through which we can write some value to a particular memory address in Python?
I know about id(), which if cast to hex gives the hexadecimal location, but how can we write at a location?
38,696,575 | 2016-08-01T10:30:00.000 | 0 | 0 | 1 | 0 | 0 | python,python-2.7 | 0 | 51,573,628 | 0 | 4 | 0 | false | 0 | 0 | I can't advise how but I do know (one) why. Direct writing to registers allows one to set up a particular microcontroller. For example, configuring ports or peripherals. This is normally done in C (closer to hardware) but it would be a nice feature if you wanted to use Python for other reasons. | 2 | 8 | 0 | 0 | I know python is a high level language and manages the memory allocation etc. on its own user/developer doesn't need to worry much unlike other languages like c, c++ etc.
but is there a way through will we can write some value in a particular memory address in python
i know about id() which if hex type casted can give hexadecimal location but how can we write at a location. | Writing to particular address in memory in python | 1 | 0 | 1 | 0 | 0 | 5,817 |
38,703,892 | 2016-08-01T16:37:00.000 | 0 | 0 | 0 | 0 | 0 | python,sql,google-forms | 0 | 39,074,108 | 0 | 1 | 0 | false | 1 | 0 | You can add a script in the Google spreadsheet with an onsubmit trigger. Then you can do whatever you want with the submitted data. | 1 | 0 | 0 | 0 | I am creating a web project where I take in Form data and write to a SQL database.
The forms will be a questionnaire with logic branching. Due to the nature of the form, and the fact that this is an MVP project, I've opted to use an existing form service (e.g Google Forms/Typeform).
I was wondering if it's feasible to have form data submitted to multiple different tables (e.g CustomerInfo, FormDataA, FormDataB, etc.). While this might be possible with a custom form application, I do not think it's possible with Google Forms and/or Typeform.
Does anyone have any suggestions on how to parse user submitted Form data into multiple tables when using Google Forms or Typeform? | Using Google Forms to write to multiple tables? | 1 | 0 | 1 | 1 | 0 | 638 |
38,707,350 | 2016-08-01T20:14:00.000 | 1 | 0 | 0 | 1 | 0 | python,subprocess,command-line-arguments | 0 | 38,707,485 | 0 | 2 | 0 | false | 0 | 0 | There are two things this needs to do: you need to handle command-line flags, and you need to send signals to another process. For the flags, you could use the argparse library, or simply sys.argv.
For sending signals, you will need the Process ID (pid) of the already running process. Under Linux you can call ps, and check to see if there is another instance of the script running. If there is, send it a signal.
Another alternative to signal handling is DBus. This is less cross-platform capable, though. | 1 | 0 | 0 | 0 | I'm trying to make a python script run in the background, and listen to commands. For example if I run my script:
python my_script.py it will start running (and wait for commands).
Then I wish to run:
python my_script.py --do_something, which opens a different Python process and runs the function do_something() in the previously started process.
I've seen this works on programs like PDPlayer where the flag --play causes the player to start playing a video. Can this also be done in python?
I know how to handle the command line arguments using argparse. I need help regarding the communication between the two python processes.
Also, I plan to cx_freeze the app, so the PID of the app can be found using psutil by searching the executable name.
Thanks.
btw, I'm using Windows... | Send commands to running script by running it with flags | 0 | 0.099668 | 1 | 0 | 0 | 1,144 |
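The two halves the answer describes can be sketched together: argparse for the flags, and (on POSIX) a signal to poke the running instance. The names --do-something and --pid are hypothetical, and Windows, which the asker uses, lacks SIGUSR1, so a socket or named pipe would be needed there instead:

```python
import argparse
import os
import signal

def do_something(signum=None, frame=None):
    print("do_something() triggered")

def main(argv):
    parser = argparse.ArgumentParser()
    parser.add_argument("--do-something", action="store_true")
    parser.add_argument("--pid", type=int, help="PID of the already-running instance")
    args = parser.parse_args(argv)
    if args.do_something and args.pid:
        # Second invocation: poke the first one (POSIX only; no SIGUSR1 on Windows).
        os.kill(args.pid, signal.SIGUSR1)
    else:
        # First invocation: register the handler and wait for commands.
        if hasattr(signal, "SIGUSR1"):
            try:
                signal.signal(signal.SIGUSR1, do_something)
            except ValueError:
                pass  # signal handlers can only be set from the main thread
        print("started; waiting for commands")

main([])  # no flags: behave as the long-running instance
```

Finding the PID of the frozen executable via psutil, as the question plans, would replace the manual --pid flag.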
38,727,035 | 2016-08-02T17:35:00.000 | 0 | 0 | 1 | 0 | 0 | python,matplotlib,ipython | 1 | 38,729,045 | 0 | 1 | 0 | false | 0 | 0 | You need to install pyGTK. How to do so depends on what you're using to run Python. You could also not use '%matplotlib inline' and then it'll default to whatever is installed on your system. | 1 | 0 | 1 | 0 | I got two questions when I was plotting graph in ipython.
once, i implement %matplotlib inline, I don't know how to switch back to use floating windows.
when I search for the method to switch back, people told me to implement
%matplotlib osx or %matplotlib, however, I finally get an error, which is
Gtk* backend requires pygtk to be installed.
Can anyone help me, giving me some idea?
p.s. I am using windows 10 and python 2.7 | How to turn off matplotlib inline function and install pygtk? | 0 | 0 | 1 | 0 | 0 | 336 |
38,741,327 | 2016-08-03T10:43:00.000 | 1 | 0 | 0 | 1 | 0 | python,google-app-engine | 0 | 38,788,820 | 1 | 1 | 0 | true | 1 | 0 | You have multiple layers of caches beyond memcache,
Googles edge cache will definitely cache static content especially if you app is referenced by your domain and not appspot.com .
You will probably need to use some cache busting techniques.
You can test this by requesting the url that is presenting old content with the same url but appending something like ?x=1 to the url.
If you then get current content then the edge cache is your problem and therefore the need to use cache busting techniques. | 1 | 0 | 0 | 0 | I've deployed a new version which contains just one image replacement. After migrating traffic (100%) to the new version I can see that only this version now has active instances. However 2 days later and App engine is still intermittently serving the old image. So I assume the previous version. When I ping the domain I can see that the latest version has one IP address and the old version has another.
My question is how do I force App Engine to only server the new version? I'm not using traffic splitting either.
Any help would be much appreciated
Regards,
Danny | App Engine serving old version intermittently | 0 | 1.2 | 1 | 0 | 0 | 190 |
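The cache-busting idea above reduces to serving the asset under a URL the edge cache has never seen; a hypothetical helper (the v parameter name is arbitrary):

```python
# Append a version token so a stale cached copy can never be served for it.
def bust(url: str, version: str) -> str:
    sep = "&" if "?" in url else "?"
    return f"{url}{sep}v={version}"

print(bust("https://example.com/static/app.css", "20160803"))
# -> https://example.com/static/app.css?v=20160803
```

Bumping the version on each deploy forces every client and cache to fetch the new asset.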
38,759,647 | 2016-08-04T06:08:00.000 | 2 | 0 | 0 | 0 | 0 | python-2.7,machine-learning,neural-network | 0 | 38,763,173 | 0 | 3 | 0 | true | 0 | 0 | Yes - this is a really important issue. Basically there are two ways to do that:
Try different topologies and choose the best: because the number of neurons and layers are discrete parameters, you cannot differentiate your loss function with respect to these parameters in order to use gradient descent methods. So the easiest way is to simply set up different topologies and compare them using either cross-validation or a division of your data into training / testing / validation parts. You can also use grid / random search schemes to do that. Libraries like scikit-learn have appropriate modules for this.
Dropout: the training framework called dropout could also help. In this case you set up a relatively big number of nodes in your layers and try to adjust a dropout parameter for each layer. In this scenario, e.g. assuming a two-layer network with 100 nodes in the hidden layer and dropout_parameter = 0.6, you are learning a mixture of models, where every model is a neural network of size 40 (approximately 60 nodes are turned off). This might also be considered as figuring out the best topology for your task. | 2 | 0 | 1 | 0 | In neural network theory, setting up the size of hidden layers seems to be a really important issue. Are there any criteria for how to choose the number of neurons in a hidden layer? | How should we set the number of the neurons in the hidden layer in neural network? | 0 | 1.2 | 1 | 0 | 0 | 638 |
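The first option can be sketched with scikit-learn's grid-search module, assuming scikit-learn is installed; the candidate layer sizes and the iris data are arbitrary stand-ins:

```python
# Try a few hidden-layer topologies and keep the one with the best CV score.
try:
    from sklearn.datasets import load_iris
    from sklearn.model_selection import GridSearchCV
    from sklearn.neural_network import MLPClassifier

    X, y = load_iris(return_X_y=True)
    grid = GridSearchCV(
        MLPClassifier(max_iter=500, random_state=0),
        param_grid={"hidden_layer_sizes": [(10,), (40,), (40, 40)]},
        cv=3,
    )
    grid.fit(X, y)
    print("best topology:", grid.best_params_["hidden_layer_sizes"])
except ImportError:
    print("scikit-learn is not installed")
```

RandomizedSearchCV works the same way when the candidate list is large.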
38,759,647 | 2016-08-04T06:08:00.000 | 1 | 0 | 0 | 0 | 0 | python-2.7,machine-learning,neural-network | 0 | 38,776,068 | 0 | 3 | 0 | false | 0 | 0 | You have to set the number of neurons in hidden layer in such a way that it shouldn't be more than # of your training example. There are no thumb rule for number of neurons.
Ex: If you are using MINIST Dataset then you might have ~ 78K training example. So make sure that combination of Neural Network (784-30-10) = 784*30 + 30*10 which are less than training examples. but if you use like (784-100-10) then it exceeds the # of training example and highly probable to over-fit.
In short, make sure you are not over-fitting and hence you have good chances to get good result. | 2 | 0 | 1 | 0 | In neural network theory - setting up the size of hidden layers seems to be a really important issue. Is there any criteria how to choose the number of neurons in a hidden layer? | How should we set the number of the neurons in the hidden layer in neural network? | 0 | 0.066568 | 1 | 0 | 0 | 638 |
38,776,447 | 2016-08-04T20:02:00.000 | 5 | 0 | 1 | 0 | 1 | python,git | 1 | 38,776,587 | 0 | 2 | 0 | true | 0 | 0 | Don't do that!
Suppose that git stash save saves nothing, but there are already some items in the stash. Then, when you're all done, you pop the most recent stash, which is not one you created.
What did you just do to the user?
One way to do this in shell script code is to check the result of git rev-parse refs/stash before and after git stash save. If it changes (from failure to something, or something to something-else), you have created a new stash, which you can then pop when you are done.
More recent versions of Git have git stash create, which creates the commit-pair as usual but does not put them into the refs/stash reference. If there is nothing to save, git stash create does nothing and outputs nothing. This is a better way to deal with the problem, but is Git-version-dependent. | 2 | 2 | 0 | 0 | I am creating a post-commit script in Python and calling git commands using subprocess.
In my script I want to stash all changes before I run some commands and then pop them back. The problem is that if there was nothing to stash, stash pop returns a non-zero error code, resulting in an exception in subprocess.check_output(). I know how I can ignore the error return code, but I don't want to do it this way.
So I have been thinking. Is there any way to get the number of items currently in stash? I know there is a command 'git stash list', but is there something more suited for my needs or some easy and safe way to parse the output of git stash list?
Also appreciate other approaches to solve this problem. | Only call 'git stash pop' if there is anything to pop | 0 | 1.2 | 1 | 0 | 0 | 441 |
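The rev-parse check from the accepted answer can be sketched in Python with subprocess, matching the post-commit script in the question; pop only when a new stash was actually created (with_stash and its commands callback are hypothetical names):

```python
import subprocess

def stash_ref():
    # Full hash of refs/stash, or None when the stash is empty.
    r = subprocess.run(
        ["git", "rev-parse", "-q", "--verify", "refs/stash"],
        capture_output=True, text=True,
    )
    return r.stdout.strip() or None

def with_stash(commands):
    before = stash_ref()
    subprocess.run(["git", "stash", "save"], check=True)
    created = stash_ref() != before
    try:
        commands()
    finally:
        if created:  # only pop a stash we made ourselves
            subprocess.run(["git", "stash", "pop"], check=True)
```

This avoids ever popping a pre-existing stash entry that the user created.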
38,776,447 | 2016-08-04T20:02:00.000 | 2 | 0 | 1 | 0 | 1 | python,git | 1 | 38,776,533 | 0 | 2 | 0 | false | 0 | 0 | You can simply try calling git stash show stash@{0}. If this returns successfully, there is something stashed. | 2 | 2 | 0 | 0 | I am creating a post-commit script in Python and calling git commands using subprocess.
In my script I want to stash all changes before I run some commands and then pop them back. The problem is that if there was nothing to stash, stash pop returns a non-zero error code, resulting in an exception in subprocess.check_output(). I know how I can ignore the error return code, but I don't want to do it this way.
So I have been thinking. Is there any way to get the number of items currently in stash? I know there is a command 'git stash list', but is there something more suited for my needs or some easy and safe way to parse the output of git stash list?
Also appreciate other approaches to solve this problem. | Only call 'git stash pop' if there is anything to pop | 0 | 0.197375 | 1 | 0 | 0 | 441 |
38,796,441 | 2016-08-05T19:20:00.000 | 1 | 0 | 1 | 1 | 0 | python,wing-ide,python-packaging | 0 | 38,796,484 | 0 | 2 | 0 | false | 0 | 0 | pip install hypothesis
Assuming you have pip.
If you want to install it from the downloaded package just open command prompt and cd to the directory where you downloaded it and do
python setup.py install | 1 | 0 | 0 | 0 | I'm using Wing IDE, how do I install hypothesis Python package to my computer?
I have already download the zip file, do I use command prompt to install it or there is an option in Wing IDE to do it? | how to install hypothesis Python package? | 0 | 0.099668 | 1 | 0 | 0 | 1,175 |
38,807,068 | 2016-08-06T17:41:00.000 | 0 | 0 | 1 | 0 | 1 | python,qt,python-3.x,pyqt5 | 1 | 40,221,119 | 0 | 1 | 0 | false | 0 | 1 | Had the exact same issue. Looks like Eric wants pyuic5.bat (somewhere in the path)
I created such a batch file with the following contents, and it worked
@"pyuic5.exe" %1 %2 %3 %4 %5 %6 %7 %8 %9
PS: In my setup these files are both located in a folder:
C:\Python35-32\Scripts | 1 | 0 | 0 | 0 | I will compile form designed by qt designer in eric6, but show "Could not start pyuic5, Ensure that it is in the search path." But actually the PATH of pyuic5.exe has been in the system PATH, and also the pyuic5.exe can be run by typing pyuic5 in the cmd of window7 .
The environment is python3.5+qt5.7+pyqt5.7+eric6.
Why can I not compile the form in eric6? How can I fix the error? | Compile form in eric6, but show "Could not start pyuic5, Ensure that it is in the search path." | 0 | 0 | 1 | 0 | 0 | 974 |
38,819,322 | 2016-08-07T22:58:00.000 | 28 | 0 | 1 | 0 | 0 | ipython-notebook,jupyter-notebook,recovery | 0 | 44,044,643 | 0 | 12 | 0 | false | 0 | 0 | This is bit of additional info on the answer by Thuener,
I did the following to recover my deleted .ipynb file.
The cache is in ~/.cache/chromium/Default/Cache/ (I use chromium)
used grep in binary search mode, grep -a 'import math' (replace search string by a keyword specific in your code)
Edit the binary file in vim (it doesn't open in gedit)
The python ipynb should file start with '{ "cells":' and
ends with '"nbformat": 4, "nbformat_minor": 2}'
remove everything outside these start and end points
Rename the file as .ipynb, open it in your jupyter-notebook, it works. | 4 | 23 | 0 | 0 | I have iPython Notebook through Anaconda. I accidentally deleted an important notebook, and I can't seem to find it in trash (I don't think iPy Notebooks go to the trash).
Does anyone know how I can recover the notebook? I am using Mac OS X.
Thanks! | How to recover deleted iPython Notebooks | 0 | 1 | 1 | 0 | 0 | 68,420 |
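The trimming steps above (keep only the span from '{ "cells":' to the closing brace after '"nbformat_minor"') can be sketched as a small carving function; the demo string stands in for the cache data, and a real cache entry may need byte-level handling first:

```python
import json
import re

def carve_notebook(raw: str):
    # Keep only the span from '{ "cells":' to the brace after '"nbformat_minor"'.
    m = re.search(r'\{\s*"cells":', raw)
    tail = raw.rfind('"nbformat_minor"')
    if m is None or tail == -1:
        return None
    end = raw.find("}", tail)  # brace closing the notebook object
    try:
        return json.loads(raw[m.start():end + 1])
    except json.JSONDecodeError:
        return None

demo = 'CACHE-JUNK{ "cells": [], "nbformat": 4, "nbformat_minor": 2}MORE-JUNK'
print(carve_notebook(demo))
```

Parsing the carved text as JSON doubles as a check that the recovered notebook is intact.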
38,819,322 | 2016-08-07T22:58:00.000 | 11 | 0 | 1 | 0 | 0 | ipython-notebook,jupyter-notebook,recovery | 0 | 59,777,233 | 0 | 12 | 0 | false | 0 | 0 | On linux:
I made the same mistake and finally found the deleted file in the trash:
/home/$USER/.local/share/Trash/files | 4 | 23 | 0 | 0 | I have iPython Notebook through Anaconda. I accidentally deleted an important notebook, and I can't seem to find it in trash (I don't think iPy Notebooks go to the trash).
Does anyone know how I can recover the notebook? I am using Mac OS X.
Thanks! | How to recover deleted iPython Notebooks | 0 | 1 | 1 | 0 | 0 | 68,420 |
38,819,322 | 2016-08-07T22:58:00.000 | 0 | 0 | 1 | 0 | 0 | ipython-notebook,jupyter-notebook,recovery | 0 | 52,046,886 | 0 | 12 | 0 | false | 0 | 0 | Sadly my file was neither in the checkpoints directory, nor chromium's cache. Fortunately, I had an ext4 formatted file system and was able to recover my file using extundelete:
Figure out the drive your missing deleted file was stored on:
df /your/deleted/file/diretory/
Switch to a folder located on another you have write access to:
cd /your/alternate/location/
It is proffered to run extundlete on an unmounted partition. Thus, if your deleted file wasn't stored on the same drive as your operating system, it's recommended you unmount the partition of the deleted file (though you may want to ensure extundlete is already installed before proceeding):
sudo umount /dev/sdax
where sdax is the partition returned by your df command earlier
Use extundelete to restore your file:
sudo extundelete --restore-file /your/deleted/file/diretory/delted.file /dev/sdax
If successful your recovered file will be located at:
/your/alternate/location/your/deleted/file/diretory/delted.file | 4 | 23 | 0 | 0 | I have iPython Notebook through Anaconda. I accidentally deleted an important notebook, and I can't seem to find it in trash (I don't think iPy Notebooks go to the trash).
Does anyone know how I can recover the notebook? I am using Mac OS X.
Thanks! | How to recover deleted iPython Notebooks | 0 | 0 | 1 | 0 | 0 | 68,420 |
38,819,322 | 2016-08-07T22:58:00.000 | 1 | 0 | 1 | 0 | 0 | ipython-notebook,jupyter-notebook,recovery | 0 | 53,780,758 | 0 | 12 | 0 | false | 0 | 0 | If you're using windows, it sends it to the recycle bin, thankfully. Clearly, it's a good idea to make checkpoints. | 4 | 23 | 0 | 0 | I have iPython Notebook through Anaconda. I accidentally deleted an important notebook, and I can't seem to find it in trash (I don't think iPy Notebooks go to the trash).
Does anyone know how I can recover the notebook? I am using Mac OS X.
Thanks! | How to recover deleted iPython Notebooks | 0 | 0.016665 | 1 | 0 | 0 | 68,420 |
38,828,829 | 2016-08-08T12:08:00.000 | 1 | 0 | 1 | 0 | 0 | python,pycharm,tensorflow,anaconda,ubuntu-16.04 | 0 | 39,021,770 | 0 | 3 | 0 | true | 0 | 0 | Anaconda defaults doesn't provide tensorflow yet, but conda-forge do, conda install -c conda-forge tensorflow should see you right, though (for others reading!) the installed tensorflow will not work on CentOS < 7 (or other Linux Distros of a similar vintage). | 1 | 2 | 1 | 0 | I am using Ubuntu 16.04 . I tried to install Tensorflow using Anaconda 2 . But it installed a Environment inside ubuntu . So i had to create a virtual environment and then use Tensorflow . Now how can i use both Tensorflow and Sci-kit learn together in a single environment . | How to use Tensorflow and Sci-Kit Learn together in one environment in PyCharm? | 1 | 1.2 | 1 | 0 | 0 | 2,492 |
38,841,865 | 2016-08-09T04:07:00.000 | 2 | 0 | 1 | 0 | 0 | python | 0 | 38,842,389 | 0 | 3 | 0 | false | 0 | 0 | You can use logging with debug level and once the debugging is completed, change the level to info. So any statements with logger.debug() will not be printed. | 2 | 2 | 0 | 0 | To python experts:
I put lots of print() to check the value of my variables.
Once I'm done, I need to delete the print(). It quite time-consuming and prompt to human errors.
Would like to learn how do you guys deal with print(). Do you delete it while coding or delete it at the end? Or there is a method to delete it automatically or you don't use print()to check the variable value? | How do you deal with print() once you done with debugging/coding | 0 | 0.132549 | 1 | 0 | 0 | 306 |
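The logging approach can be sketched like this: the debug statements stay in the source forever and are silenced by changing one level, rather than deleted by hand:

```python
import logging

logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
log = logging.getLogger(__name__)

x = 42
log.debug("x = %r", x)          # visible while debugging
log.info("processing finished")

log.setLevel(logging.INFO)      # done debugging: one line silences every debug()
log.debug("this no longer appears")
```

The same level switch can be driven by an environment variable or a --verbose flag instead of an edit.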
38,841,865 | 2016-08-09T04:07:00.000 | 0 | 0 | 1 | 0 | 0 | python | 0 | 38,843,280 | 0 | 3 | 0 | false | 0 | 0 | What I do is put print statements in with with a special text marker in the string. I usually use print("XXX", thething). Then I just search for and delete the line with that string. It's also easier to spot in the output. | 2 | 2 | 0 | 0 | To python experts:
I put lots of print() to check the value of my variables.
Once I'm done, I need to delete the print() calls. It's quite time-consuming and prone to human error.
I would like to learn how you deal with print(). Do you delete it while coding or delete it at the end? Or is there a method to delete it automatically, or do you not use print() to check variable values? | How do you deal with print() once you done with debugging/coding | 0 | 0 | 1 | 0 | 0 | 306 |
38,856,271 | 2016-08-09T16:42:00.000 | 1 | 0 | 0 | 1 | 0 | python,r,shell,command-line | 0 | 38,856,331 | 0 | 3 | 0 | true | 0 | 0 | You probably already have R, since you can already run your script.
All you have to do is find its binaries (the Rscript.exe file).
Then open the Windows command line ([Win] + [R] > type in "cmd" > [Enter]).
Enter the full path to Rscript.exe, followed by the full path to your script.
How do I execute this from the Windows command prompt? Do I need to download something called Rscript.exe? Do I just save my R script as an .exe file? Please advise on the easiest way to achieve this.
R: version 3.3 python: version 3.x os: windows | Running an R script from command line (to execute from python) | 0 | 1.2 | 1 | 0 | 0 | 3,887 |
38,856,271 | 2016-08-09T16:42:00.000 | 3 | 0 | 0 | 1 | 0 | python,r,shell,command-line | 0 | 38,856,393 | 0 | 3 | 0 | false | 0 | 0 | You already have Rscript, it came with your version of R. If R.exe, Rgui.exe, ... are in your path, then so is Rscript.exe.
Your call from Python could just be Rscript myFile.R. Rscript is much better than R BATCH CMD ... and other very old and outdated usage patterns. | 2 | 4 | 0 | 0 | I'm currently trying to run an R script from the command line (my end goal is to execute it as the last line of a python script). I'm not sure what a batch file is, or how to make my R script 'executable'. Currently it is saved as a .R file. It works when I run it from R.
How do I execute this from the Windows command prompt? Do I need to download something called Rscript.exe? Do I just save my R script as an .exe file? Please advise on the easiest way to achieve this.
R: version 3.3 python: version 3.x os: windows | Running an R script from command line (to execute from python) | 0 | 0.197375 | 1 | 0 | 0 | 3,887 |
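The handoff the question asks for ("execute it as the last line of a python script") can then be sketched with subprocess; "analysis.R" is a placeholder path, and the guard makes the sketch a no-op when R is absent:

```python
import os
import shutil
import subprocess

# Rscript (installed with R) runs a .R file directly; no batch file or
# .exe renaming of the script is needed.
if shutil.which("Rscript") and os.path.exists("analysis.R"):
    subprocess.run(["Rscript", "analysis.R"], check=True)
else:
    print("Rscript or analysis.R not found; check that R's bin directory is on PATH")
```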
38,862,088 | 2016-08-09T23:38:00.000 | 3 | 1 | 0 | 0 | 0 | python,linux,yocto,bitbake,openembedded | 0 | 38,865,576 | 0 | 4 | 0 | false | 0 | 0 | The OE layer index at layers.openembedded.org lists all known layers and the recipes they contain, so searching that should bring up the meta-python layer that you can add to your build and use recipes from. | 1 | 13 | 0 | 0 | I wish to add more python modules to my yocto/openembedded project but I am unsure how to? I wish to add flask and its dependencies. | How do I add more python modules to my yocto/openembedded project? | 0 | 0.148885 | 1 | 0 | 0 | 20,667 |
38,865,708 | 2016-08-10T06:23:00.000 | 0 | 0 | 1 | 0 | 0 | python,raspberry-pi,scikit-learn | 0 | 38,866,597 | 0 | 2 | 0 | false | 0 | 0 | scikit-learn will run on a Raspberry Pi just as well as any other Linux machine.
To install it, make sure you have pip3 (sudo apt-get install python3-pip), and use sudo pip3 install scikit-learn.
All Python scripts utilizing scikit-learn will now run as normal. | 1 | 0 | 0 | 0 | I'm new in embedded programming, and would like to understand what I need to do to run python scikit-learn on a capable embedded processor.
See Raspberry Pi as an example. | How can I run python scikit-learn on Raspberry Pi? | 0 | 0 | 1 | 0 | 0 | 18,641 |
38,882,845 | 2016-08-10T20:22:00.000 | 0 | 0 | 1 | 1 | 0 | python,python-2.7,python-3.x,ubuntu,anaconda | 0 | 46,602,056 | 1 | 2 | 1 | false | 0 | 0 | Use anaconda version Anaconda3-4.2.0-Linux-x86_64.sh from the anaconda installer archive.This comes with python 3.5. This worked for me. | 1 | 1 | 0 | 0 | Anaconda for python 3.5 and python 2.7 seems to install just as a drop in folder inside my home folder on Ubuntu. Is there an installed version of Anaconda for Ubuntu 16? I'm not sure how to ask this but do I need python 3.5 that comes by default if I am also using Anaconda 3.5?
It seems like the best solution is Docker these days. I mean, I understand virtualenv and virtualenvwrapper. However, sometimes I try to indicate in my .bashrc that I want to use Python 3.5, and yet when I use the command mkvirtualenv it will start installing the Python 2.7 version of Python.
Should I choose either Anaconda or the version of python installed with my OS from python.org or is there an easy way to manage many different versions of Python?
Thanks,
Bruce | How to get Python 3.5 and Anaconda 3.5 running on ubuntu 16.04? | 1 | 0 | 1 | 0 | 0 | 3,184 |
38,887,061 | 2016-08-11T04:02:00.000 | 0 | 1 | 0 | 0 | 1 | python,amazon-web-services,amazon-dynamodb,aws-lambda,alexa-skills-kit | 1 | 38,895,935 | 0 | 4 | 0 | false | 1 | 0 | My guess would be that you missed a step on setup. There's one where you have to set the "event source". IF you don't do that, I think you get that message.
But the debug options are limited. I wrote EchoSim (the original one on GitHub) before the service simulator was written and, although it is a bit out of date, it does a better job of giving diagnostics.
Lacking debug options, the best is to do what you've done. Partition and re-test. Do static replies until you can work out where the problem is. | 3 | 2 | 0 | 0 | I'm starting with ASK development. I'm a little confused by some behavior and I would like to know how to debug errors from the "service simulator" console. How can I get more information on the The remote endpoint could not be called, or the response it returned was invalid. errors?
Here's my situation:
I have a skill and three Lambda functions (ARN:A, ARN:B, ARN:C). If I set the skill's endpoint to ARN:A and try to test it from the skill's service simulator, I get an error response: The remote endpoint could not be called, or the response it returned was invalid.
I copy the lambda request, I head to the lambda console for ARN:A, I set the test event, paste the request from the service simulator, I test it and I get a perfectly fine ASK response. Then I head to the lambda console for ARN:B and I make a dummy handler that returns exactly the same response that ARN:A gave me from the console (literally copy and paste). I set my skill's endpoint to ARN:B, test it using the service simulator and I get the anticipated response (therefore, the response is well formatted) albeit static. I head to the lambda console again and copy and paste the code from ARN:A into a new ARN:C. Set the skill's endpoint to ARN:C and it works perfectly fine. Problem with ARN:C is that it doesn't have the proper permissions to persist data into DynamoDB (I'm still getting familiar with the system, not sure whether I can share an IAM role between different Lambdas; I believe not).
How can I figure out what's going on with ARN:A? Is that logged somewhere? I can't find any entry in cloudwatch/logs related to this particular lambda or for the skill.
Not sure if relevant, I'm using python for my lambda runtime, the code is (for now) inline on the web editor and I'm using boto3 for persisting to DynamoDB. | Troubleshooting Amazon's Alexa Skill Kit (ASK) Lambda interaction | 1 | 0 | 1 | 0 | 1 | 3,387 |
38,887,061 | 2016-08-11T04:02:00.000 | 3 | 1 | 0 | 0 | 1 | python,amazon-web-services,amazon-dynamodb,aws-lambda,alexa-skills-kit | 1 | 38,902,127 | 0 | 4 | 0 | true | 1 | 0 | tl;dr: "The remote endpoint could not be called, or the response it returned was invalid." can also mean there was a timeout waiting for the endpoint.
I was able to narrow it down to a timeout.
It seems the Alexa service simulator (and Alexa itself) is less tolerant of long responses than the lambda testing console. During development I had increased the timeout of ARN:1 to 30 seconds (whereas I believe the default is 3 seconds). The DynamoDB table used by ARN:1 has more data and takes slightly longer to process than ARN:3, which has an almost empty table. As soon as I commented out some of the data-loading code it ran slightly faster and the Alexa service simulator was working again. I can't find the time budget documented anywhere; I'm guessing 3 seconds? I most likely need to move to another backend, as DynamoDB+Python on lambda is too slow for very trivial requests. | 3 | 2 | 0 | 0 | I'm starting with ASK development. I'm a little confused by some behavior and I would like to know how to debug errors from the "service simulator" console. How can I get more information on the "The remote endpoint could not be called, or the response it returned was invalid." errors?
Here's my situation:
I have a skill and three Lambda functions (ARN:A, ARN:B, ARN:C). If I set the skill's endpoint to ARN:A and try to test it from the skill's service simulator, I get an error response: The remote endpoint could not be called, or the response it returned was invalid.
I copy the lambda request, head to the lambda console for ARN:A, set the test event, paste in the request from the service simulator, run it, and get a perfectly fine ASK response. Then I head to the lambda console for ARN:B and make a dummy handler that returns exactly the same response that ARN:A gave me from the console (literally copy and paste). I set my skill's endpoint to ARN:B, test it using the service simulator and get the anticipated response (therefore, the response is well formatted), albeit static. I head to the lambda console again and copy and paste the code from ARN:A into a new ARN:C. Set the skill's endpoint to ARN:C and it works perfectly fine. The problem with ARN:C is that it doesn't have the proper permissions to persist data into DynamoDB (I'm still getting familiar with the system, and I'm not sure whether I can share an IAM role between different lambdas; I believe not).
How can I figure out what's going on with ARN:A? Is that logged somewhere? I can't find any entry in cloudwatch/logs related to this particular lambda or for the skill.
Not sure if relevant, I'm using python for my lambda runtime, the code is (for now) inline on the web editor and I'm using boto3 for persisting to DynamoDB. | Troubleshooting Amazon's Alexa Skill Kit (ASK) Lambda interaction | 1 | 1.2 | 1 | 0 | 1 | 3,387 |
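Since the root cause here turned out to be a timeout, one way to confirm it (a sketch, not something from the original post) is to wrap the real handler and log how long each invocation takes, then compare that against the skill-side budget:

```python
import functools
import time

def timed(handler):
    """Decorator: print elapsed wall-clock time for each Lambda invocation."""
    @functools.wraps(handler)
    def wrapper(event, context):
        start = time.time()
        try:
            return handler(event, context)
        finally:
            print("handled in %.2f s" % (time.time() - start))
    return wrapper

@timed
def handler(event, context):
    # placeholder for the real skill logic
    return {"ok": True}
```

If the printed times regularly exceed a few seconds, the simulator's generic error is consistent with a timeout even when the lambda console test succeeds.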
38,887,061 | 2016-08-11T04:02:00.000 | 1 | 1 | 0 | 0 | 1 | python,amazon-web-services,amazon-dynamodb,aws-lambda,alexa-skills-kit | 1 | 39,245,816 | 0 | 4 | 0 | false | 1 | 0 | I think the problem you're having with ARN:1 is that you probably didn't add an Alexa Skills Kit trigger to your lambda function.
Or it could be the Alexa session timeout, which is set to 8 seconds by default. | 3 | 2 | 0 | 0 | I'm starting with ASK development. I'm a little confused by some behavior and I would like to know how to debug errors from the "service simulator" console. How can I get more information on the "The remote endpoint could not be called, or the response it returned was invalid." errors?
Here's my situation:
I have a skill and three Lambda functions (ARN:A, ARN:B, ARN:C). If I set the skill's endpoint to ARN:A and try to test it from the skill's service simulator, I get an error response: The remote endpoint could not be called, or the response it returned was invalid.
I copy the lambda request, head to the lambda console for ARN:A, set the test event, paste in the request from the service simulator, run it, and get a perfectly fine ASK response. Then I head to the lambda console for ARN:B and make a dummy handler that returns exactly the same response that ARN:A gave me from the console (literally copy and paste). I set my skill's endpoint to ARN:B, test it using the service simulator and get the anticipated response (therefore, the response is well formatted), albeit static. I head to the lambda console again and copy and paste the code from ARN:A into a new ARN:C. Set the skill's endpoint to ARN:C and it works perfectly fine. The problem with ARN:C is that it doesn't have the proper permissions to persist data into DynamoDB (I'm still getting familiar with the system, and I'm not sure whether I can share an IAM role between different lambdas; I believe not).
How can I figure out what's going on with ARN:A? Is that logged somewhere? I can't find any entry in cloudwatch/logs related to this particular lambda or for the skill.
Not sure if relevant, I'm using python for my lambda runtime, the code is (for now) inline on the web editor and I'm using boto3 for persisting to DynamoDB. | Troubleshooting Amazon's Alexa Skill Kit (ASK) Lambda interaction | 1 | 0.049958 | 1 | 0 | 1 | 3,387 |
38,931,064 | 2016-08-13T09:00:00.000 | 0 | 0 | 0 | 1 | 0 | python,sockets,batch-file,portforwarding | 0 | 38,932,875 | 0 | 2 | 0 | false | 0 | 0 | I'm not sure if that's possible. As far as I know, ports aren't actually a physical thing; they're an abstraction convention defined by today's protocols and supported by your operating system, which allows you to have multiple connections on one machine.
Now, sockets are objects provided by the operating system that implement a protocol stack and let you communicate with other systems. The OS exposes this through the socket API, which you use to talk to other computers. Port forwarding is not an actual protocol operation: when the router's operating system receives incoming packets destined for some port, it simply drops them if that port is not open. Think of your router as a bouncer or doorman standing at the entrance of a building: the building is your LAN, your apartment is your machine, and the rooms in your apartment are ports. A package arrives addressed to port X; a port rule means "on IP Y and port X of the router, forward to IP Z and port A of some computer within the LAN" (this is what NAT/PAT implements). Going back to the analogy: the doorman receives mail destined for some port, checks whether that port is open, drops the mail if it isn't, and otherwise lets it through to a room in some apartment (apologies, I know this sounds complex). My point is that every router implements port rules and port blocking a little differently, there is no standard protocol for doing it, and a socket is just an object that allows your program to communicate with others. You could create a server/client with sockets, but automating the router this way would mean programming the router itself, and I'm not sure that's possible.
what you COULD do is:
every router provides an HTTP (web) interface that is used to create port-forwarding rules; if you read up on your router, you may be able to access that interface and write a Python HTTP script that creates forwarding rules automatically
another point I forgot: you need to make sure your own firewall isn't blocking the ports, but there's no need for sockets / Python to do that; just configure it manually | 2 | 3 | 0 | 0 | Purpose:
I'm making a program that will set up a dedicated server (software made by game devs) for a game with minimal effort. One common step in making the server functional is port forwarding by making a port forward rule on a router.
My friends and I have been port forwarding through conventional means for many years with mixed results. As such, I am hoping to build a function that will forward a port on a router when given the internal IP of the router, the internal IP of the current computer, the port, and the protocol. I have looked for solutions to similar problems, but I found them difficult to understand since I'm not really familiar with the socket module. I would prefer not to use any programs that are not generally installed on Windows, since I plan to have this function work on systems other than my own.
Approaches I have explored:
Creating a bat file that issues commands by means of netsh, then running the bat.
Making additions to the settings in a router found under Network -> Network Infrastructure (I do not know how to access these settings programmatically).
(I'm aware programs such as GameRanger do this)
Using the Socket Module.
If anyone can shed some light on how I can accomplish any of the above approaches, or give me some insight into another way to approach this problem, I would greatly appreciate it.
Thank you.
Edit: Purpose | How to make a port forward rule in Python 3 in windows? | 0 | 0 | 1 | 0 | 1 | 707 |
38,931,064 | 2016-08-13T09:00:00.000 | 0 | 0 | 0 | 1 | 0 | python,sockets,batch-file,portforwarding | 0 | 38,932,807 | 0 | 2 | 0 | false | 0 | 0 | You should first read up on UPnP (router port forwarding), and note that it's normally disabled.
Depending on your needs, you could also take a look at SSH reverse tunnels, and at SSH in general, as it can solve many problems.
But you will find that Windows is a bad fit for advanced networking tasks like these.
At least you should use cygwin.
And if you're really interested in network traffic at all, Wireshark should be installed. | 2 | 3 | 0 | 0 | Purpose:
I'm making a program that will set up a dedicated server (software made by game devs) for a game with minimal effort. One common step in making the server functional is port forwarding by making a port forward rule on a router.
My friends and I have been port forwarding through conventional means for many years with mixed results. As such, I am hoping to build a function that will forward a port on a router when given the internal IP of the router, the internal IP of the current computer, the port, and the protocol. I have looked for solutions to similar problems, but I found them difficult to understand since I'm not really familiar with the socket module. I would prefer not to use any programs that are not generally installed on Windows, since I plan to have this function work on systems other than my own.
Approaches I have explored:
Creating a bat file that issues commands by means of netsh, then running the bat.
Making additions to the settings in a router found under Network -> Network Infrastructure (I do not know how to access these settings programmatically).
(I'm aware programs such as GameRanger do this)
Using the Socket Module.
If anyone can shed some light on how I can accomplish any of the above approaches, or give me some insight into another way to approach this problem, I would greatly appreciate it.
Thank you.
Edit: Purpose | How to make a port forward rule in Python 3 in windows? | 0 | 0 | 1 | 0 | 1 | 707 |
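The UPnP route mentioned in the answer starts with SSDP discovery: you multicast an M-SEARCH request to 239.255.255.250:1900 and UPnP-capable routers answer with their control URL. A sketch of just the datagram construction (the WANIPConnection service type is the common one for port mapping, but verify against your router; actually sending it over a UDP socket is left out):

```python
def ssdp_msearch(st="urn:schemas-upnp-org:service:WANIPConnection:1", mx=2):
    """Build the SSDP M-SEARCH datagram used to discover UPnP routers
    on the standard multicast address 239.255.255.250:1900."""
    lines = [
        "M-SEARCH * HTTP/1.1",
        "HOST: 239.255.255.250:1900",
        'MAN: "ssdp:discover"',
        "MX: %d" % mx,          # seconds a device may wait before replying
        "ST: %s" % st,          # search target: the service we want
        "", "",                 # request ends with a blank line
    ]
    return "\r\n".join(lines).encode("ascii")
```

The bytes returned would then be sent with a UDP socket and the unicast replies parsed for the router's description URL.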
38,936,584 | 2016-08-13T20:12:00.000 | 1 | 0 | 0 | 0 | 0 | python,python-3.x,python-module | 0 | 38,937,103 | 0 | 1 | 0 | false | 0 | 0 | My flow chart looks something like this:
Reading the published documentation (or using help(moduleName), which gives you the same information without an internet connection, in a harder-to-read format). This can be overly verbose if you're only looking for one tidbit of information, in which case I move on to...
Finding tutorials or similar stack overflow posts using specific keywords in your favorite search engine. This is generally the approach you will use 99% of the time.
Just recursively poking around with dir() and __doc__ if you think the answer for what you're looking for is going to be relatively obvious (usually if the module has relatively simple functions such as math that are obvious by the name)
Looking at the source of the module if you really want to see how things work. | 1 | 1 | 0 | 1 | I can't seem to find a good explanation of how to use Python modules. Take, for example, the urllib module. It has commands such as
req = urllib.request.Request().
How would I find out what specific commands, like this one, are in certain Python modules?
For all the examples I've seen of people using specific Python modules, they just know what to type, and how to use them.
Any suggestions will be much appreciated. | How to find good documentation for Python modules | 0 | 0.197375 | 1 | 0 | 1 | 138 |
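Step 3 of the answer above (recursively poking around with dir() and __doc__) can be wrapped in a small helper that lists a module's public names alongside the first line of each docstring (the helper name is made up; urllib.request is just the example module from the question):

```python
def public_api(module):
    """Map each public name in `module` to the first line of its docstring."""
    api = {}
    for name in dir(module):
        if name.startswith("_"):
            continue  # skip private/dunder names
        doc = (getattr(module, name).__doc__ or "").strip().splitlines()
        api[name] = doc[0] if doc else ""
    return api
```

For example, public_api(urllib.request) includes Request among its entries, which answers "what specific commands are in this module" without leaving the interpreter.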
38,944,204 | 2016-08-14T16:27:00.000 | 5 | 0 | 1 | 0 | 1 | ipython,jupyter,jupyter-notebook | 0 | 38,944,682 | 0 | 1 | 0 | false | 1 | 0 | Reposting as an answer:
When your changes don't seem to be taking effect in an HTML interface, browser caching is often a culprit. The browser saves time by not asking for files again. You can:
Try force-refreshing with Ctrl-F5. It may get some things from the cache anyway, though sometimes mashing it several times is effective.
Use a different browser profile, or private browsing mode, to load the page.
There may be a setting to disable caching under developer options. I think Chrome has this. May only apply while developer tools are open.
If all else fails, load the page using a different browser. If it still doesn't change, it's likely the problem is not (just) browser caching. | 1 | 2 | 0 | 0 | By mistake, I updated this file to customize css.
D:\Continuum\Anaconda2\Lib\site-packages\notebook\static\custom\custom.css
To rollback the above change,
1) I put back the original file that I saved before. Still the new CSS shows up in Jupyter.
2) I removed all .ipython and .jupyter dir and it didn't work either.
3) I even uninstalled anaconda and still that css shows up.
I'm really stuck here. Does anyone know how to go back to the default CSS of Jupyter? | jupyter custom.css removal | 0 | 0.761594 | 1 | 0 | 0 | 770 |
38,948,430 | 2016-08-15T02:12:00.000 | 1 | 0 | 1 | 0 | 0 | python,bit-shift | 0 | 38,948,543 | 0 | 1 | 0 | false | 0 | 0 | Python does not have registers and you cannot declare the type of anything.
The shift operators operate on unlimited-precision integers. If you shift left, the number will continue to get larger indefinitely (or until out of memory). If you shift right, the least-significant bit is dropped, as you would expect. There is no "carry flag"; that's the kind of thing you see in assembly language, and Python is not assembly. Since the integers have unlimited precision, logical and arithmetic shifts are equivalent, in a sense (if you imagine that the sign bit repeats indefinitely).
Any time you want fixed width operations you will just have to mask the results of the unlimited-precision operations.
As for the "smartest" way to do something, that's not really an appropriate question for Stack Overflow. | 1 | 0 | 0 | 0 | Let's say I want to write a 16bit linear feedback shift register LFSR in Python using its native shift operator.
Does the operator itself have a feature to specify the bit to be shifted into the new MSB position?
Does the operator have a carry flag or the like to catch the LSB falling out of the register?
How to set up the register to 16-bit size? Not sure how to do this in Python, where variables are not explicitly typed.
What's the smartest way to compute the multi-bit XOR function for the feedback? Actual bit extraction or a lookup table?
Thanks,
Gert | Using shift operator for LFSR in python | 0 | 0.197375 | 1 | 0 | 0 | 384 |
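To make the answer concrete: since Python integers have unlimited precision, you emulate the 16-bit register by masking, the "carry flag" is just the bit you save before shifting, and the feedback XOR needs no lookup table in the Galois form. A sketch (the taps value 0xB400 is the textbook maximal-length choice for 16 bits, an assumption rather than anything from the question):

```python
MASK16 = 0xFFFF

def lfsr16_step(state, taps=0xB400):
    """One Galois LFSR step: return (new_state, bit_shifted_out).
    The LSB falling out plays the role of the carry flag; masking with
    MASK16 keeps the register at 16 bits."""
    out = state & 1
    state >>= 1
    if out:
        state ^= taps
    return state & MASK16, out
```

Starting from any nonzero seed, this configuration cycles through all 65535 nonzero 16-bit states before repeating.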
38,954,505 | 2016-08-15T11:48:00.000 | 0 | 0 | 0 | 0 | 0 | python,django,caching,static,cdn | 0 | 38,956,967 | 0 | 2 | 0 | true | 1 | 0 | Here's my workaround:
On deployment (from a bash script), I get the shasum of my CSS file.
I put this variable inside the environment.
I have a context processor for the template engine that will read from the environment. | 1 | 1 | 0 | 0 | I'm working on a website built with Django.
When I'm doing updates on the static files, the users have to hard refresh the website to get the latest version.
I'm using a CDN server to deliver my static files so using the built-in static storage from Django.
I don't know about the best practices but my idea is to generate a random string when I redeploy the website and have something like style.css?my_random_string.
I don't know how to handle such a global variable through the project (Using Gunicorn in production).
I have a RedisDB running, I can store the random string in it and clear it on redeployment.
I was thinking to have this variable globally available in templates with a context_processors.
What are your thoughts on this? | Cache busting with Django | 0 | 1.2 | 1 | 0 | 0 | 865 |
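The accepted workaround above can be sketched in two pieces: compute a hash of the stylesheet at deploy time, and expose it to templates via a context processor so URLs become style.css?&lt;hash&gt;. Names and the environment-variable transport are illustrative (the answer stores the string in Redis instead):

```python
import hashlib
import os

def asset_hash(path):
    """Deploy-time helper: short sha1 of a static file's contents."""
    with open(path, "rb") as f:
        return hashlib.sha1(f.read()).hexdigest()[:8]

def cache_bust(request):
    """Django-style context processor: expose the deploy hash to templates."""
    return {"ASSET_VERSION": os.environ.get("ASSET_VERSION", "dev")}
```

In a template this would be used as &lt;link href="style.css?{{ ASSET_VERSION }}"&gt;, so every redeploy that changes the file forces clients past their cache.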
38,966,114 | 2016-08-16T03:16:00.000 | 0 | 0 | 1 | 1 | 1 | python,django,intellij-idea,ide,pycharm | 0 | 53,610,811 | 0 | 2 | 1 | false | 0 | 0 | Go to File > Settings > Plugins > Browse repositories > Search and Install Native Terminal
This will install a terminal which will use the Windows Native terminal.
A small black button will appear on the tool bar.
If you did not enable the toolbar, here is the trick: go to View | Toolbar,
check that toolbar option, and the cmd button will be shown on the bar | 2 | 1 | 0 | 0 | First time posting, let me know how I can improve my questions.
I have installed PyCharm Edu 3.0 and Anaconda 3 on an older laptop. I am attempting to access the embedded terminal in the IDE and I am unable to launch it.
I have searched through similar questions here and the JetBrains docs, and the common knowledge seems to be installing the "Terminal" Plugin. My version of PyCharm does not have this plugin, and I am unable to find it in the JetBrains plugin list or community repositories.
If anyone has experienced this before or knows where I am going wrong attempting to launch the terminal I would appreciate the feedback. | Pycharm edu terminal plugin missing | 0 | 0 | 1 | 0 | 0 | 628 |
38,966,114 | 2016-08-16T03:16:00.000 | -1 | 0 | 1 | 1 | 1 | python,django,intellij-idea,ide,pycharm | 0 | 43,782,480 | 0 | 2 | 1 | false | 0 | 0 | Click Preferences and choose Plugins. Next, click Install JetBrains plugin and choose Command Line Tool Support. I hope this will help you. | 2 | 1 | 0 | 0 | First time posting, let me know how I can improve my questions.
I have installed PyCharm Edu 3.0 and Anaconda 3 on an older laptop. I am attempting to access the embedded terminal in the IDE and I am unable to launch it.
I have searched through similar questions here and the JetBrains docs, and the common knowledge seems to be installing the "Terminal" Plugin. My version of PyCharm does not have this plugin, and I am unable to find it in the JetBrains plugin list or community repositories.
If anyone has experienced this before or knows where I am going wrong attempting to launch the terminal I would appreciate the feedback. | Pycharm edu terminal plugin missing | 0 | -0.099668 | 1 | 0 | 0 | 628 |
38,966,528 | 2016-08-16T04:09:00.000 | 0 | 0 | 1 | 0 | 0 | python-3.x,process,multiprocessing,parent-child,atexit | 0 | 38,981,627 | 0 | 1 | 0 | false | 0 | 0 | The functions registered via atexit are inherited by the children processes.
The simplest way to prevent that is to call atexit.register after you have spawned the child processes. | 1 | 0 | 0 | 0 | I have a program that spawns multiple child processes; how would I make the program only call atexit.register(function) in the main process and not in the child processes as well?
Thanks | Python3 Running atexit only on the main process | 0 | 0 | 1 | 0 | 0 | 168 |
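A robust form of the answer's suggestion is to guard the registration so it only ever happens in the parent; this also survives the spawn start method, where module-level code re-runs in each child (helper name is made up):

```python
import atexit
import multiprocessing as mp

def register_main_only(func):
    """Register `func` with atexit only when running in the parent process.
    Returns True if the handler was registered, False in a child."""
    if mp.current_process().name == "MainProcess":
        atexit.register(func)
        return True
    return False
```

Calling register_main_only(cleanup) anywhere in the program means child processes skip the registration, so only the main process runs the handler at exit.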
38,976,431 | 2016-08-16T13:35:00.000 | 0 | 0 | 0 | 0 | 0 | python,pandas,numpy,large-data | 0 | 38,976,616 | 0 | 1 | 0 | false | 0 | 0 | Out of curiosity, is there a reason you want to use Pandas for this? Image analysis is typically handled in matrices, making NumPy a clear favorite. If I'm not mistaken, both sk-learn and PIL/IMAGE use NumPy arrays to do their analysis and operations.
Another option: avoid the in-memory step! Do you need to access all 1K+ images at the same time? If not, and you're operating on each one individually, you can iterate over the files and perform your operations there. For an even more efficient step, break your files into lists of 200 or so images, then use Python's MultiProcessing capabilities to analyze in parallel.
Just in case: do you have PIL or IMAGE installed, or sk-learn? Those packages have some nice image analysis algorithms already packaged in, which may save you time by not having to re-invent the wheel. | 1 | 1 | 1 | 0 | Background: I have a sequence of images. In each image, I map a single pixel to a number. Then I want to create a pandas dataframe where each pixel is in its own column and images are rows. The reason I want to do that is so that I can use things like forward fill.
Challenge: I have transformed each image into a one-dimensional array of numbers, each of which has about 2 million entries, and I have thousands of images. Simply doing pd.DataFrame(array) is very slow (testing it on a smaller number of images). Is there a faster solution for this? Other ideas on how to do this efficiently are also welcome, but using non-core libraries may be a challenge (corporate environment). | Initializing a very large pandas dataframe | 0 | 0 | 1 | 0 | 0 | 310 |
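Since the stated goal is forward fill, one way to follow the answer's advice and skip the DataFrame entirely is a NumPy-only forward fill down the rows (this assumes the question's layout of images as rows and pixels as columns; the indexing trick is a common idiom, not anything from the original post):

```python
import numpy as np

def ffill_rows(a):
    """Forward-fill NaNs along axis 0: each pixel column carries its
    last seen value down through subsequent images."""
    mask = np.isnan(a)
    # per column, the row index of the most recent non-NaN value
    idx = np.where(~mask, np.arange(a.shape[0])[:, None], 0)
    np.maximum.accumulate(idx, axis=0, out=idx)
    return a[idx, np.arange(a.shape[1])[None, :]]
```

This avoids the slow pd.DataFrame construction for thousands of 2-million-entry rows; a leading NaN (no earlier value to carry forward) stays NaN, matching pandas' ffill behavior.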
38,980,544 | 2016-08-16T16:57:00.000 | 0 | 0 | 0 | 0 | 0 | python,machine-learning,scikit-learn,random-forest | 0 | 38,980,686 | 0 | 1 | 0 | true | 0 | 0 | They will be treated in the same manner as the minimal value already encountered in the training set. RF is just a bunch of voting decision trees, and (basic) DTs can only form decisions of the form "if feature X > T, go left; otherwise go right". Consequently, if you fit it to data which, for a given feature, has only values in [0, inf], it will either not use this feature at all or use it in the form given above (a decision of the form "if X > T", where T has to be from (0, inf) to make any sense for the training data). Consequently, if you simply take your new data and change negative values to "0", the result will be identical. | 1 | 0 | 1 | 0 | When I built my random forest model using scikit-learn in Python, I set a condition (a WHERE clause in the SQL query) so that the training data only contains values greater than 0.
I am curious to know how random forest handles test data whose value is less than 0, which the random forest model has never seen before in the training data. | What does Random Forest do with unseen data? | 0 | 1.2 | 1 | 0 | 0 | 709 |
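The "if feature X > T" point from the answer can be seen with a toy stump rather than a full forest (a pure-Python illustration, not scikit-learn): any threshold learned from non-negative training data is positive, so every negative test value lands in the same leaf as 0.

```python
def stump(threshold):
    """A single learned split of the form 'x <= T goes left, else right'.
    Fitted on data in [0, inf), T is necessarily positive."""
    return lambda x: "left" if x <= threshold else "right"

# hypothetical threshold a tree might learn from non-negative data
predict = stump(2.5)
```

Here predict(-100), predict(-0.001) and predict(0.0) all fall into the same leaf, which is why clamping negatives to 0 before prediction changes nothing.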
38,984,387 | 2016-08-16T20:59:00.000 | 1 | 0 | 1 | 0 | 1 | python,nltk | 1 | 40,272,424 | 0 | 3 | 0 | false | 0 | 0 | While I am not sure exactly where the problem arises, I had this same error happen to me (it started 'overnight': the code had been working, I had not re-installed nltk, so I have no idea what caused it to start happening). I still had the problem after upgrading to the latest version of nltk (3.2.1) and re-downloading the nltk data.
shiratori's answer helped me solve my problem, although at least for me it was slightly more complicated. Specifically, my nltk data was stored in C:\Users\USERNAME\AppData\Roaming\nltk_data (I think this is a default location). This is where it had always been stored, and it had always worked fine; however, suddenly nltk did not seem to be recognizing this location, and hence looked in the next drive. To solve it, I copied and pasted all the data in that folder to C:\nltk_data and now it is running fine again.
Anyway, I'm not sure if this is a Windows-induced problem, or what exactly changed to cause working code to stop working, but this solved it. | 1 | 4 | 0 | 0 | I am trying to use nltk in Python, but am receiving a pop-up error (Windows) saying that I am missing a drive the moment I call import nltk
Does anyone know why or how to fix this?
The error is below:
"There is no disk in the drive. Please insert a disk into drive \Device\Harddisk4\DR4." | Drive issue with python NLTK | 1 | 0.066568 | 1 | 0 | 0 | 895 |
38,989,896 | 2016-08-17T06:49:00.000 | 0 | 0 | 1 | 0 | 0 | python,windows,scikit-learn | 0 | 46,249,521 | 0 | 2 | 0 | false | 0 | 0 | Old post, but the right answer is:
'sudo pip install -U numpy matplotlib --upgrade' for python2 or 'sudo pip3 install -U numpy matplotlib --upgrade' for python3 | 2 | 1 | 1 | 0 | I know how to install external modules using the pip command but for Scikit-learn I need to install NumPy and Matplotlib as well.
How can I install these modules using the pip command? | How to install scikit-learn | 0 | 0 | 1 | 0 | 0 | 2,435 |
38,989,896 | 2016-08-17T06:49:00.000 | -1 | 0 | 1 | 0 | 0 | python,windows,scikit-learn | 0 | 38,990,089 | 0 | 2 | 0 | false | 0 | 0 | Using Python 3.4, I run the following from the command line:
c:\python34\python.exe -m pip install package_name
So you would substitute "numpy" and "matplotlib" for 'package_name' | 2 | 1 | 1 | 0 | I know how to install external modules using the pip command but for Scikit-learn I need to install NumPy and Matplotlib as well.
How can I install these modules using the pip command? | How to install scikit-learn | 0 | -0.099668 | 1 | 0 | 0 | 2,435 |
39,003,909 | 2016-08-17T18:30:00.000 | 3 | 0 | 1 | 0 | 0 | python,tensorflow,gpu | 0 | 39,004,702 | 0 | 2 | 0 | true | 0 | 0 | Do something like this before running your main script
export TF_MIN_GPU_MULTIPROCESSOR_COUNT=4
Note though that the default is set for a reason -- if you enable a slower GPU by changing that variable, your program may run slower than it would without any GPU available, because TensorFlow will try to run everything on that GPU | 2 | 0 | 0 | 0 | I get a message that says my GPU device is ignored because its multiprocessor count is lower than the minimum set. However, it points me to the environment variable TF_MIN_GPU_MULTIPROCESSOR_COUNT, which doesn't seem to exist: I keep getting "command not found", and when I look at the environment variables using set or printenv and grep for the variable name, it isn't there. Does anyone know where I can find it or how I can change its value? | Can't find TF_MIN_GPU_MULTIPROCESSOR_COUNT | 1 | 1.2 | 1 | 0 | 0 | 2,221 |
39,003,909 | 2016-08-17T18:30:00.000 | 0 | 0 | 1 | 0 | 0 | python,tensorflow,gpu | 0 | 56,837,569 | 0 | 2 | 0 | false | 0 | 0 | In Windows, create a new environment variable with this name and assign its value.
You can do that by right clicking on the This PC in File Explorer, select Properties at bottom, then select Advanced system settings on left. That will get you to the System Properties dialog. Also you can type "environmental properties" in Cortana Search.
From there, click the Environment Variables button. Once in the Environment Variables dialog, select New to create the variable and assign its value, then back out. You may have to restart your IDE or open a new DOS window for the environment variable to be visible. | 2 | 0 | 0 | 0 | I get a message that says my GPU device is ignored because its multiprocessor count is lower than the minimum set. However, it points me to the environment variable TF_MIN_GPU_MULTIPROCESSOR_COUNT, which doesn't seem to exist: I keep getting "command not found", and when I look at the environment variables using set or printenv and grep for the variable name, it isn't there. Does anyone know where I can find it or how I can change its value? | Can't find TF_MIN_GPU_MULTIPROCESSOR_COUNT | 1 | 0 | 1 | 0 | 0 | 2,221 |
39,007,823 | 2016-08-17T23:37:00.000 | 1 | 0 | 1 | 0 | 0 | java,python,type-conversion,reserved-words,jpype | 0 | 39,027,662 | 0 | 2 | 0 | false | 1 | 0 | Figured out that jpype appends an "_" at the end for those methods/fields in its source code. So you can access it by Jpype.JClass("Foo").pass_
Wish it were documented somewhere. | 2 | 1 | 0 | 0 | Any idea how this can be done? I.e., if we have a variable defined in Java as below:
public class Foo {
String pass = "foo";
}
how can I access this via jpype since pass is a reserved keyword? I tried
getattr(Jpype.JClass(Foo)(), "pass"), but it fails to find the attribute named pass | jpype accessing java method/variable whose name is a reserved name in python | 0 | 0.099668 | 1 | 0 | 0 | 358 |
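The convention from the accepted answer (JPype appends "_" to Java members whose names collide with Python keywords) can be mimicked with Python's keyword module, so the right attribute name is computed rather than guessed (the helper is illustrative, not part of JPype's API):

```python
import keyword

def jpype_attr(java_name):
    """Return the Python-side attribute name JPype exposes for `java_name`:
    keyword-clashing names get a trailing underscore appended."""
    return java_name + "_" if keyword.iskeyword(java_name) else java_name
```

With that, getattr(Jpype.JClass("Foo")(), jpype_attr("pass")) resolves to the mangled pass_ attribute the answer describes.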
39,007,823 | 2016-08-17T23:37:00.000 | 0 | 0 | 1 | 0 | 0 | java,python,type-conversion,reserved-words,jpype | 0 | 39,007,952 | 0 | 2 | 0 | false | 1 | 0 | Unfortunately, fields or methods conflicting with a Python keyword can't be accessed | 2 | 1 | 0 | 0 | Any idea how this can be done? I.e., if we have a variable defined in Java as below:
public class Foo {
String pass = "foo";
}
how can I access this via jpype since pass is a reserved keyword? I tried
getattr(Jpype.JClass(Foo)(), "pass"), but it fails to find the attribute named pass | jpype accessing java method/variable whose name is a reserved name in python | 0 | 0 | 1 | 0 | 0 | 358 |
39,017,678 | 2016-08-18T11:59:00.000 | 0 | 0 | 0 | 1 | 0 | python,django,asynchronous,rabbitmq,celery | 0 | 39,065,804 | 0 | 2 | 0 | false | 1 | 0 | I've used the following set up on my application:
Task is initiated from Django: information is extracted from the model instance and passed to the task as a dictionary. NB: this will be more future-proof, as Celery 4 will default to JSON encoding
Remote server runs task and creates a dictionary of results
Remote server then calls an update task that is only listened for by a worker on the Django server.
The Django worker reads the results dictionary and updates the model.
The Django worker listens to a separate queue, though this isn't strictly necessary. The results backend isn't used; the data needed is just passed to the task | 1 | 5 | 0 | 0 | I have a Django project where I am using Celery with RabbitMQ to perform a set of async tasks. So the setup I have planned goes like this.
Django app running on one server.
Celery workers and rabbitmq running from another server.
My initial issue being: how do I access Django models from the Celery tasks resting on another server?
And assuming I am not able to access the Django models, is there a way, once a task gets completed, to send a callback to the Django application passing values, so that I can update Django's database based on the values passed? | Django and celery on different servers and celery being able to send a callback to django once a task gets completed | 1 | 0 | 1 | 0 | 0 | 1,298 |
39,022,629 | 2016-08-18T15:53:00.000 | 0 | 1 | 1 | 0 | 1 | python,pycharm,pythonpath | 0 | 39,022,921 | 0 | 2 | 0 | false | 0 | 0 | Not sure how much effort you want to put into this temporary Python path thing, but you could always use a Python virtual environment for running scripts or whatever you need. | 1 | 8 | 0 | 0 | I'm thinking of something like
python3 my_script.py --pythonpath /path/to/some/necessary/modules
Is there something like this? I know (I think) that Pycharm temporarily modifies PYTHONPATH when you use it to execute scripts; how does Pycharm do it?
Reasons I want to do this (you don't really need to read the following)
The reason I want to do this is that I have some code that usually needs to run on my own machine (which is fine because I use Pycharm to run it) but sometimes needs to run on a remote server (on the commandline), and it doesn't work because the remote server doesn't have the PYTHONPATHs that Pycharm automatically temporarily adds. I don't want to export PYTHONPATH=[...] because it's a big hassle to change it often (and suppose it really does need to change often). | Provide temporary PYTHONPATH on the commandline? | 0 | 0 | 1 | 0 | 0 | 3,666 |
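The hypothetical --pythonpath flag from the question can also be implemented inside the script itself, before any imports that need the extra directories (a sketch; os.pathsep keeps the separator portable between the remote server and Windows):

```python
import os
import sys

def apply_pythonpath_arg(argv):
    """Consume a '--pythonpath DIR1:DIR2' argument, prepending each
    directory to sys.path, and return the remaining argv."""
    if "--pythonpath" in argv:
        i = argv.index("--pythonpath")
        # reversed so the first listed directory ends up first on sys.path
        for entry in reversed(argv[i + 1].split(os.pathsep)):
            sys.path.insert(0, entry)
        del argv[i:i + 2]  # strip the flag and its value
    return argv
```

Calling apply_pythonpath_arg(sys.argv) at the top of my_script.py gives the `python3 my_script.py --pythonpath /path/to/some/necessary/modules` invocation from the question, with no environment variables to export.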
39,023,270 | 2016-08-18T16:30:00.000 | 0 | 0 | 1 | 0 | 0 | python,scikit-learn,ipython | 0 | 39,031,052 | 0 | 1 | 0 | false | 0 | 0 | If the code is in a file called file.py, you should just be able to do import file (if you're not in the right folder, just run cd folder in IPython first). | 1 | 0 | 0 | 0 | I have downloaded a package (scikit-learn) from GitHub and put the source code in a repository folder (Windows 7 64-bit).
After modifying the source code, how can I load the package into the IPython notebook for testing?
Should I copy-paste the modified code into the site-packages folder?
(what about the current original scikit-learn package)
Can I add the modified folder to the Python path?
How do I manage versioning when loading the package in Python, since both have the same name?
(ie: the original package vs the package I modified)
Sorry, these look like beginner questions, but I could not find anything on how to start. | How to load a code source modified package in Python? | 0 | 0 | 1 | 0 | 0 | 85 |
39,062,605 | 2016-08-21T09:08:00.000 | 0 | 0 | 0 | 0 | 0 | django,amazon-s3,python-django-storages | 0 | 39,062,626 | 0 | 1 | 0 | false | 1 | 0 | The URL is relative to the amazon storage address you provide in your settings. so you only need to move the images to a new bucket and update your settings. | 1 | 0 | 0 | 0 | I have a Django application where I use django-storages and amazon s3 to store images.
I need to move those images to a different account: different user, different bucket.
I wanted to know how I can migrate those pictures.
My main concern is the links in my database to all those images; how do I update them? | changing s3 storages with django-storages | 0 | 0 | 1 | 1 | 0 | 82 |
39,064,796 | 2016-08-21T13:36:00.000 | 0 | 0 | 1 | 1 | 0 | python,python-2.7,terminal | 0 | 39,064,972 | 0 | 2 | 0 | false | 0 | 0 | You would need to have python implemented into the software.
Also, I believe this is a task for GCSE Computing this year as I was privileged enough to choose what test we are doing and there was a question about serial numbers. | 1 | 0 | 0 | 0 | I'm writing a code to read serial input. Once the serial input has been read, I have to add a time stamp below it and then the output from a certain software. To get the output from the software, I want python to write a certain command to the terminal, and then read the output that comes on the terminal. Could you suggest how do I go about doing the last step: namely, writing to the terminal then reading the output? I'm a beginner in python, so please excuse me if this sounds trivial. | Giving input to terminal in python | 0 | 0 | 1 | 0 | 0 | 338 |
39,086,420 | 2016-08-22T18:32:00.000 | 1 | 0 | 0 | 1 | 0 | python,python-requests | 0 | 39,086,692 | 0 | 1 | 0 | true | 0 | 0 | requests is a HTTP request library, while Spark's wordcount example provides a raw socket server, so no, requests is not the right package to communicate with your Spark app. | 1 | 0 | 0 | 0 | I have an application (spark based service), which when starts..works like following.
At localhost:9000
if I do nc -lk localhost 9000
and then start entering the text, it takes the text entered in the terminal as input and does a simple wordcount computation on it.
how do I use the requests library to programmatically send the text, instead of manually typing it in the terminal.
Not sure if my question is making sense.. | Using requests package to make request | 0 | 1.2 | 1 | 0 | 1 | 32 |
39,128,100 | 2016-08-24T16:03:00.000 | 0 | 0 | 0 | 0 | 0 | python,database,caching,redis,memcached | 0 | 39,128,415 | 0 | 2 | 0 | false | 1 | 0 | I had this exact question myself, with a PHP project, though. My solution was to use ElasticSearch as an intermediate cache between the application and database.
The trick to this is the ORM. I designed it so that when Entity.save() is called it is first stored in the database, then the complete object (with all references) is pushed to ElasticSearch and only then the transaction is committed and the flow is returned back to the caller.
This way I maintained full functionality of a relational database (atomic changes, transactions, constraints, triggers, etc.) and still have all entities cached with all their references (parent and child relations) together with the ability to invalidate individual cached objects.
Hope this helps. | 1 | 2 | 0 | 0 | For my app, I am using Flask, however the question I am asking is more general and can be applied to any Python web framework.
I am building a comparison website where I can update details about products in the database. I want to structure my app so that 99% of users who visit my website will never need to query the database; information is instead retrieved from the cache (memcached or Redis).
I require my app to be realtime, so any update I make to the database must be instantly available to any visitor to the site. Therefore I do not want to cache views/routes/html.
I want to cache the entire database. However, because there are so many different variables when it comes to querying, I am not sure how to structure this. For example, if I were to cache every query and then later need to update a product in the database, I would basically need to flush the entire cache, which isn't ideal for a large web app.
I would prefer is to cache individual rows within the database. The problem is, how do I structure this so I can flush the cache appropriately when an update is made to the database? Also, how can I map all of this together from the cache?
I hope this makes sense. | How do I structure a database cache (memcached/Redis) for a Python web app with many different variables for querying? | 1 | 0 | 1 | 1 | 0 | 488 |
39,154,611 | 2016-08-25T20:57:00.000 | 0 | 0 | 1 | 0 | 0 | python,python-2.7,math,equation,exp | 0 | 71,583,338 | 0 | 6 | 0 | false | 0 | 0 | Just to add, numpy also has np.e | 1 | 48 | 0 | 0 | How can i write x.append(1-e^(-value1^2/2*value2^2)) in python 2.7?
I don't know how to use power operator and e. | How can I use "e" (Euler's number) and power operation in python 2.7 | 0 | 0 | 1 | 0 | 0 | 248,925 |
39,160,816 | 2016-08-26T07:34:00.000 | 0 | 0 | 0 | 0 | 0 | python,web,openerp | 0 | 39,180,410 | 0 | 1 | 0 | false | 1 | 0 | There is no such thing as 'flag/state'.
What you are probably trying to say is that you want to know which operations are taking place on a record. The easiest method is to take a look at your log. There will be statements there in the form /web/dataset/call_kw/model/operation where model is your ORM model and operation could be a search, read, unlink etc. RPC calls are logged in there as well. The format of the log output is a little bit different between different versions of odoo. You can go to a lower level by monitoring sql transactions on postgresql but I do not think that this is what you want. | 1 | 1 | 0 | 0 | I am new in odoo, I want to know how we get the current flag/state of every operation.
For example: when we create a new record, how do we know the current flag/state is "add"? Or when we view a record, how do we know the current flag/state is "view"?
It's something like the current user id that is stored in the session named "uid"; is there something similar to get the current flag/state in every operation? | How to get the flag/state of current operation in Odoo 9? | 0 | 0 | 1 | 0 | 0 | 107 |
39,168,025 | 2016-08-26T13:54:00.000 | 2 | 0 | 0 | 0 | 1 | python,neural-network,tensorflow,lstm | 0 | 39,177,157 | 0 | 1 | 0 | false | 0 | 0 | If you are using tf.rnn_cell.BasicLSTMCell , the variable you are looking for will have the following suffix in its name : <parent_variable_scope>/BasicLSTMCell/Linear/Matrix . This is a concatenated matrix for all the four gates. Its first dimension matches the sum of the second dimensions of the input matrix and the state matrix (or output of the cell to be exact). The second dimension is 4 times the number of cell size.
The other complementary variable is <parent_variable_scope>/BasicLSTMCell/Linear/Bias that is a vector of the same size as the second dimension of the abovementioned tensor (for obvious reasons).
You can retrieve the parameters for the four gates by using tf.split() along dimension 1. The split matrices would be in the order [input], [new input], [forget], [output]. I am referring to the code here from rnn_cell.py.
Keep in mind that the variable represents the parameters of the Cell and not the output of the respective gates. But with the above info, I am sure you can get that too, if you so desire.
Edit:
Added more specific information about the actual tensors Matrix and Bias | 1 | 5 | 1 | 0 | I am using the LSTM model that comes by default in tensorflow. I would like to check, or to know how to save or show, the values of the forget gate in each step; has anyone done this before, or at least something similar to this?
Till now I have tried with tf.print, but many values appear (even more than the ones I was expecting). I would try plotting something with tensorboard, but I think those gates are just variables and not extra layers that I can print (also because they are inside the TF script).
Any help will be well received | Tensorflow: show or save forget gate values in LSTM | 0 | 0.379949 | 1 | 0 | 0 | 1,695 |
39,172,559 | 2016-08-26T18:23:00.000 | 0 | 0 | 0 | 0 | 0 | python,scipy,interpolation | 1 | 39,174,418 | 0 | 1 | 0 | false | 0 | 0 | As long as you can assume that your errors represent one-sigma intervals of normal distributions, you can always generate synthetic datasets, resample and interpolate those, and compute the 1-sigma errors of the results.
Or just interpolate values+err and values-err, if all you need is a quick and dirty rough estimate. | 1 | 0 | 1 | 0 | I have a number of data sets, each containing x, y, and y_error values, and I'm simply trying to calculate the average value of y at each x across these data sets. However the data sets are not quite the same length. I thought the best way to get them to an equal length would be to use scipy's interoplate.interp1d for each data set. However, I still need to be able to calculate the error on each of these averaged values, and I'm quite lost on how to accomplish that after doing an interpolation.
I'm pretty new to Python and coding in general, so I appreciate your help! | Python: How to interpolate errors using scipy interpolate.interp1d | 0 | 0 | 1 | 0 | 0 | 403 |
39,185,570 | 2016-08-27T20:42:00.000 | 11 | 0 | 0 | 1 | 0 | python,django,amazon-web-services,amazon-ec2,amazon-elastic-beanstalk | 1 | 42,735,371 | 0 | 2 | 0 | true | 1 | 0 | I've realised that the problem was that Elastic Beanstalk, for some reasons, kept the unsuccessfully deployed versions under .elasticbeanstalk. The solution, at least in my case, was to remove those temporal (or whatever you call them) versions of the application. | 1 | 18 | 0 | 0 | I'm trying to deploy a new version of my Python/Django application using eb deploy.
It unfortunately fails due to an unexpected version of the application. The problem is that somehow eb deploy screwed up the version and I don't know how to override it. The application I upload is working fine; only the version number is not correct, hence Elastic Beanstalk marks it as Degraded.
When executing eb deploy, I get this error:
"Incorrect application version "app-cca6-160820_155843" (deployment
161). Expected version "app-598b-160820_152351" (deployment 159). "
The same says in the health status at AWS Console.
So, my question is the following: How can I force Elastic Beanstalk to make the uploaded application version the current one so it doesn't complain? | How to force application version on AWS Elastic Beanstalk | 0 | 1.2 | 1 | 0 | 0 | 10,464 |
39,217,639 | 2016-08-30T01:50:00.000 | 1 | 0 | 1 | 0 | 1 | python | 0 | 39,217,670 | 0 | 7 | 0 | false | 0 | 0 | You can do your list comprehension logic with tuples and then flatten the resulting list:
[n for pair in [(x, x+1) for x in [1,5,7]] for n in pair] | 2 | 5 | 0 | 0 | For example,
how to convert [1, 5, 7] into [1,2,5,6,7,8] in python?
[x, x+1 for x in [1,5,7]] can't work for sure... | How to convert a list by mapping an element into multiple elements in python? | 0 | 0.028564 | 1 | 0 | 0 | 93 |
39,217,639 | 2016-08-30T01:50:00.000 | 0 | 0 | 1 | 0 | 1 | python | 0 | 39,217,679 | 0 | 7 | 0 | false | 0 | 0 | If you just want to fill the list with the numbers between the min and max+1 values you can use [i for i in range (min(x),max(x)+2)] assuming x is your list. | 2 | 5 | 0 | 0 | For example,
how to convert [1, 5, 7] into [1,2,5,6,7,8] in python?
[x, x+1 for x in [1,5,7]] can't work for sure... | How to convert a list by mapping an element into multiple elements in python? | 0 | 0 | 1 | 0 | 0 | 93 |
39,236,025 | 2016-08-30T19:53:00.000 | 0 | 1 | 0 | 1 | 0 | python,linux,installation,hdf5 | 0 | 67,224,754 | 0 | 4 | 0 | false | 1 | 0 | For Centos 8, I got the below warning message :
Warning: Couldn't find any HDF5 C++ libraries. Disabling HDF5 support.
and I solved it using the command :
sudo yum -y install hdf5-devel | 1 | 6 | 0 | 0 | I want to use h5py which needs libhdf5-dev to be installed. I installed hdf5 from the source, and thought that any options with compiling that one would offer me the developer headers, but doesn't look like it.
Anyone know how I can do this? Is there some other source I need to download? (I can't find any though)
I am on amazon linux, yum search libhdf5-dev doesn't give me any result and I can't use rpm nor apt-get there, hence I wanted to compile it myself. | how to install libhdf5-dev? (without yum, rpm nor apt-get) | 0 | 0 | 1 | 0 | 0 | 20,706 |
39,243,626 | 2016-08-31T07:45:00.000 | 0 | 0 | 0 | 0 | 0 | php,python,mysql,database | 0 | 39,244,221 | 0 | 1 | 0 | true | 0 | 0 | No, it is not possible to call external scripts from MySQL.
The only thing you can do is adding an ON UPDATE trigger that will write into some queue. Then you will have the python script POLLING the queue and doing whatever it's supposed to do with the rows it finds. | 1 | 1 | 0 | 0 | I want to execute script(probably written in python), when update query is executed on MySQL database. The query is going to be executed from external system written in PHP to which I don't have access, so I can't edit the source code. The MySQL server is installed on our machine. Any ideas how I can accomplish this, or is it even possible? | Executing script when SQL query is executed | 1 | 1.2 | 1 | 1 | 0 | 54 |
39,256,378 | 2016-08-31T18:13:00.000 | 0 | 0 | 1 | 0 | 1 | python,python-3.x,bpy | 1 | 39,298,240 | 0 | 1 | 0 | false | 0 | 0 | Found the error:
the pyd file was compiled with a 32-bit Python, but was called with a 64-bit Python
I compiled the blender source code, result is a bpy.pyd file.
This file is placed in the python\lib folder.
In the source code I have
import bpy
The file is found at runtime, but I get a runtime error that the module could not be imported
Does someone have a good documentation on importing compiled python modules?
I searched ~100 entries, but found only general definitions of how to do this. I tried all suggestions without success.
Thanks! | How to Import compiled libs (pyd) in python | 0 | 0 | 1 | 0 | 0 | 703 |
39,257,759 | 2016-08-31T19:42:00.000 | 0 | 0 | 1 | 0 | 0 | python,regex,format,expression | 0 | 39,257,814 | 0 | 5 | 0 | false | 0 | 0 | For the letter use [a-zA-Z], and if it's only upper case then [A-Z] is sufficient. | 2 | 2 | 0 | 0 | I need to match things that format something along the lines of
657432-76, 54678-01, 54364A-12
I got (r'^\d{6}-\d{2}$')
and (r'^\d{5}-\d{2}$')
but how do you get the letter?
thanks!! | Regular Expression with letter | 0 | 0 | 1 | 0 | 0 | 47 |
39,257,759 | 2016-08-31T19:42:00.000 | 0 | 0 | 1 | 0 | 0 | python,regex,format,expression | 0 | 39,257,845 | 0 | 5 | 0 | false | 0 | 0 | it seems the pattern generically is 6 characters with possible letter or number at last char max then - then 2 numbers? so then you'd use this pattern
pattern = r'^\d{5}.?-\d{2}$' | 2 | 2 | 0 | 0 | I need to match things that format something along the lines of
657432-76, 54678-01, 54364A-12
I got (r'^\d{6}-\d{2}$')
and (r'^\d{5}-\d{2}$')
but how do you get the letter?
thanks!! | Regular Expression with letter | 0 | 0 | 1 | 0 | 0 | 47 |
39,274,850 | 2016-09-01T14:54:00.000 | 1 | 1 | 0 | 0 | 0 | python,django,testing,rpc,spyne | 0 | 39,275,854 | 0 | 1 | 0 | false | 1 | 0 | I believe if you are using a service inside a test, that test should not be a unit test.
you might want to consider using factory_boy or mock; both are Python modules for mocking or faking an object, for instance to fake an object that returns a response to your RPC call. | 1 | 1 | 0 | 1 | I am currently learning to build SOAP web services with Django and Spyne. I have successfully tested my model using unit tests. However, when I tried to test all those @rpc functions, I have no luck there at all.
What I have tried in testing those @rpc functions:
1. Get dummy data in model database
2. Start a server at localhost:8000
3. Create a suds.Client object that can communicate with localhost:8000
4. Try to invoke @rpc functions from the suds.Client object, and test if the output matches what I expected.
However, when I run the test, I believe the test got blocked by the running server at localhost:8000 thus no test code can be run while the server is running.
I tried to make the server run on a different thread, but that messed up my test even more.
I have searched as much as I could online and found no materials that can answer this question.
TL;DR: how do you test @rpc functions using unit tests? | How to test RPC of SOAP web services? | 0 | 0.197375 | 1 | 0 | 0 | 458 |
39,280,278 | 2016-09-01T20:21:00.000 | 0 | 0 | 0 | 0 | 1 | python,pandas | 1 | 39,280,341 | 0 | 2 | 0 | false | 0 | 0 | data[c] does not return a value, it returns a series (a whole column of data).
You can apply the strip operation to an entire column with df.apply. | 1 | 0 | 1 | 0 | I'm trying to remove spaces, apostrophes, and double quotes in each column's data using this for loop
for c in data.columns:
data[c] = data[c].str.strip().replace(',', '').replace('\'', '').replace('\"', '').strip()
but I keep getting this error:
AttributeError: 'Series' object has no attribute 'strip'
data is the data frame and was obtained from an excel file
xl = pd.ExcelFile('test.xlsx');
data = xl.parse(sheetname='Sheet1')
Am I missing something? I added the str but that didn't help. Is there a better way to do this?
I don't want to use the column labels, like so data['column label'], because the text can be different. I would like to iterate each column and remove the characters mentioned above.
incoming data:
id city country
1 Ontario Canada
2 Calgary ' Canada'
3 'Vancouver Canada
desired output:
id city country
1 Ontario Canada
2 Calgary Canada
3 Vancouver Canada | Pandas - how to remove spaces in each column in a dataframe? | 0 | 0 | 1 | 0 | 0 | 5,363 |
39,303,681 | 2016-09-03T05:48:00.000 | 1 | 1 | 1 | 1 | 1 | ubuntu,python-appium | 1 | 41,982,234 | 0 | 1 | 0 | false | 0 | 0 | Try to use nosetest.
Install:
pip install nose
Run:
nosetests (name of the file containing test) | 1 | 0 | 0 | 0 | I got the following error while executing a python script on appium
ImportError: No module named appium
I am running appium in one terminal and tried executing the test in another terminal. Does anyone know what the reason for this error is, and how to resolve it? | Python and Appium | 0 | 0.197375 | 1 | 0 | 0 | 571 |
39,316,449 | 2016-09-04T11:31:00.000 | 1 | 0 | 1 | 0 | 0 | python,testing | 0 | 39,316,539 | 0 | 2 | 0 | true | 0 | 0 | The code you test and the code you run should be the same.
I do not recommend using a filename, because now you are dealing with (in one function) opening the file - and the errors associated with that part, and then confirming the file format (the actual purpose of the function).
It sounds to me that your function's job is to check if the file's contents contain a specific string. So, this function should take any type of content element (an iterable) and then as long as the key string is not found, the function should return None/False/Fail condition - and your test should check for that. | 1 | 0 | 0 | 0 | I am writing tests for a program I intend to write that checks for certain lines in configuration files.
For example, the program might check that the line: AllowConnections-
is contained in the file SomeFile.conf.
My function stub does not take any arguments because I know the file that I am going to be checking.
I am trying to write a tests for this function that check the behavior for different SomeFile.conf files, but I don't see how I could do this. It is possible to change SomeFile.conf in the setup and teardown test functions, but this seems like a bad way to test. Should I change the function so that it can accept a file argument just for the sake of testing? | Should functions take extra arguments for the sake of testing? | 1 | 1.2 | 1 | 0 | 0 | 56 |
39,319,275 | 2016-09-04T16:49:00.000 | 0 | 0 | 0 | 0 | 0 | python,amazon-web-services,lua,server,torch | 0 | 48,819,337 | 0 | 2 | 0 | false | 1 | 0 | It does make sense to look at the whole task and how it fits to your actual server, Nginx or Lighttpd or Apache since you are serving static content. If you are going to call a library to create the static content, the integration of your library to your web framework would be simpler if you use Flask but it might be a fit for AWS S3 and Lambda services.
It may be worth it to roughly design the whole site and match your content to the tools at hand. | 1 | 0 | 0 | 0 | I am building am application to process user's photo on server. Basically, user upload a photo to the server and do some filtering processing using deep learning model. Once it's done filter, user can download the new photo. The filter program is based on the deep learning algorithm, using torch framework, it runs on python/lua. I currently run this filter code on my local ubuntu machine. Just wonder how to turn this into a web service. I have 0 server side knowledge, I did some research, maybe I should use flask or tornado, or other architecture? | how to build an deep learning image processing server | 0 | 0 | 1 | 0 | 0 | 906 |
39,324,217 | 2016-09-05T05:04:00.000 | 1 | 0 | 1 | 1 | 0 | python,oracle,pyinstaller,cx-oracle | 0 | 39,349,805 | 0 | 1 | 0 | true | 0 | 0 | The error "Unable to acquire Oracle environment handle" means there is something wrong with your Oracle configuration. Check to see what libclntsh.so file you are using. The simplest way to do that is by using the ldd command on the cx_Oracle module that PyInstaller has bundled with the executable. Then check to see if there is a conflict due to setting the environment variable ORACLE_HOME to a different client!
If PyInstaller picked up the libclntsh.so file during its packaging you will need to tell it to stop doing that. There must be an Oracle client (either full client or the much simpler instant client) on the target machine, not just the one file (libclntsh.so).
You can also verify that your configuration is ok by using the cx_Oracle.so module on the target machine to establish a connection -- independently of your application. If that doesn't work or you don't have a Python installation there for some reason, you can also use SQL*Plus to verify that your configuration is ok as well. | 1 | 0 | 0 | 0 | I am building an application in Python using cx_Oracle (v5) and Pyinstaller to package up and distribute the application. When I built and packaged the application, I had the Oracle 12c client installed. However, when I deployed it to a machine with the 11g client installed, it seems not to work. I get the message "Unable to acquire Oracle environment handle". I assume this is as the result of the application being packaged with Pyinstaller while my ORACLE_HOME was pointed to a 12c client. I know that the cx_Oracle I have was built against both 11g and 12 libraries. So, I'm wondering how I deploy an application using Pyinstaller so it can run with either 11 or 12c client libraries installed?
By the way, I am building this on Linux (debian/Mint 17.2), and deploying to Linux (CentOS 7). | How do I build a cx_oracle app using pyinstaller to use multiple Oracle client versions? | 0 | 1.2 | 1 | 0 | 0 | 738 |
39,332,901 | 2016-09-05T14:36:00.000 | 0 | 0 | 0 | 0 | 1 | python,installation,attributes,scikit-learn,theano | 1 | 39,334,690 | 0 | 1 | 0 | false | 0 | 0 | Apparently it was caused by some issue with Visual Studio. The import worked when I reinstalled VS and restarted the computer.
Thanks @super_cr7 for the prompt reply! | 1 | 0 | 1 | 0 | I'm trying to use scikit-learn's neural network module in iPython... running Python 3.5 on a Win10, 64-bit machine.
When I try to import from sknn.mlp import Classifier, Layer , I get back the following AttributeError: module 'theano' has no attribute 'gof' ...
The command line highlighted for the error is class DisconnectedType(theano.gof.type.Type), within theano\gradient.py
Theano version is 0.8.2, everything installed via pip.
Any lights on what may be causing this and how to fix it? | Failure to import sknn.mlp / Theano | 0 | 0 | 1 | 0 | 0 | 331 |
39,337,545 | 2016-09-05T20:48:00.000 | 0 | 0 | 0 | 0 | 0 | python,pyqtgraph | 0 | 39,413,892 | 0 | 1 | 0 | false | 0 | 1 | Sorry, there was a bug in my code which was handling the case where a part of one LinearRegionItem overlapped with another LinearRegionItem.
Now I see that one linearRegionItem can lie on top of another one.
Consider this solved | 1 | 1 | 0 | 0 | I have added 2 LinearRegionItems to a pyqtgraph plot. When I move the boundary of 1 over the other, the boundary never overlaps the other.
I would like to know how to allow overlapping. This is a functionality that I need, where I am selecting different regions of the data plot to be used later on. | How to get 2 or more LinearRegionItem to overlap each other | 0 | 0 | 1 | 0 | 0 | 78 |
39,345,313 | 2016-09-06T09:22:00.000 | 1 | 0 | 1 | 0 | 0 | python,anaconda,32bit-64bit,development-environment,conda | 0 | 39,345,619 | 1 | 1 | 0 | true | 0 | 0 | As I understand, Anaconda installs into a self-contained directory (<pwd>/anaconda3). Since 64-bit and 32-bit builds of Python can not be mixed or converted into each other (in terms of the compiled Python binaries and libraries in site-packages or other PYTHONPATH location), you have to go with a second (64-bit) Anaconda installation in another directory.
If you have 32-bit code that needs to call 64-bit code, you have to rely subprocesses and pipes (or other IPC mechanisms). You probably have to be careful about your environment variables, e.g. PATH and PYTHONPATH when doing so. | 1 | 2 | 0 | 1 | I have a 32 bit installation of the Anaconda Python distribution.
I know how to create environments for different python versions.
What I need is to have a 64 bit version of python.
Is it possible to create a conda env with the 64 bit version?
Or do I have to reinstall anaconda or install a different version of anaconda and then switch the values of the PATH when I need the different versions?
I looked and searched the documentation and the conda create -h help page, and did not find any mention of this. | How to create python conda 64 bit environment in existing 32bit install? | 1 | 1.2 | 1 | 0 | 0 | 4,169 |
39,361,214 | 2016-09-07T04:29:00.000 | 1 | 0 | 1 | 0 | 0 | python,python-3.x | 0 | 39,361,292 | 0 | 5 | 0 | false | 0 | 0 | This regular expression will detect a single character surrounded by spaces, if the character is a plus or minus or mult or div sign: r' ([+-*/]) '. Note the spaces inside the apostrophes. The parentheses "capture" the character in the middle. If you need to recognize a different set of characters, change the set inside the brackets.
If you haven't dealt with regular expressions before, read up on the re module. They are very useful for simple text processing. The two relevant features here are "character classes" (the square brackets in my example) and "capturing parentheses" (the round parens). | 1 | 0 | 0 | 0 | Anyone know how I can find the character in the center that is surrounded by spaces?
1 + 1
I'd like to be able to separate the + in the middle to use in an if/else statement.
Sorry if I'm not too clear, I'm a Python beginner. | Python detect character surrounded by spaces | 0 | 0.039979 | 1 | 0 | 0 | 2,051 |
39,368,789 | 2016-09-07T11:33:00.000 | 0 | 0 | 1 | 0 | 0 | python,git,pip | 0 | 39,369,148 | 0 | 2 | 0 | false | 0 | 0 | The best way to do this would be to clone the repository, or just donwload the requirements.txt file, and then run pip install -r requirements.txt to install all the modules dependencies. | 1 | 2 | 0 | 0 | For example, we have project Foo with dependency Bar (that in private Git repo) and we want install Bar into Foo directory via pip from requirements.txt.
We can manually install Bar with console command:
pip install --target=. git+ssh://git.repo/some_pkg.git#egg=SomePackage
But how do we install Bar into the current directory from requirements.txt? | How to install package via pip requirements.txt from VCS into current directory? | 0 | 0 | 1 | 0 | 0 | 1,394 |
39,380,527 | 2016-09-07T23:40:00.000 | 0 | 0 | 1 | 0 | 0 | python | 0 | 39,380,622 | 0 | 1 | 0 | true | 0 | 1 | Keep positions of snake segments on list (first segment is head).
Later new position of head insert before first segment (and remove last segment).
Use this list to blit snake segments. | 1 | 0 | 0 | 0 | I'm trying reproduce the game "Snake" in pygame, using the pygame.blit function, instead of pygame.draw. My question is how to make an image follow another image. I mean, make the snake's body photo follow your head. In the current state of my program the head moves on its own. | Pygame: how to blit an image that follows another image | 0 | 1.2 | 1 | 0 | 0 | 42 |
39,382,725 | 2016-09-08T04:42:00.000 | 1 | 0 | 1 | 0 | 0 | python,json,tensorflow | 0 | 39,395,872 | 0 | 1 | 0 | false | 0 | 0 | A Tensor in TensorFlow is a node in the graph which, when run, will produce a tensor. So you can't save the SparseTensor directly because it's not a value (you can serialize the graph). If you do evaluate the sparsetensor, you get a SparseTensorValue object back which can be serialized as it's just a tuple. | 1 | 0 | 1 | 0 | I'm creating a list of Sparsetensors in Tensorflow. I want to access them in later sessions of my program. I've read online that you can store Python lists as json files but how do I save a list of Sparsetensors to a json file and then use that later on?
Thanks in advance | Saving Python list containing Tensorflow Sparsetensors to file for later access? | 0 | 0.197375 | 1 | 0 | 0 | 70 |
39,383,557 | 2016-09-08T06:03:00.000 | 21 | 0 | 0 | 0 | 0 | python,apache-spark,pyspark,apache-spark-sql | 0 | 44,253,561 | 0 | 10 | 0 | false | 0 | 0 | You can use df.dropDuplicates(['col1','col2']) to get only distinct rows based on colX in the array. | 2 | 149 | 1 | 0 | With pyspark dataframe, how do you do the equivalent of Pandas df['col'].unique().
I want to list out all the unique values in a pyspark dataframe column.
Not the SQL type way (registertemplate then SQL query for distinct values).
Also I don't need groupby then countDistinct, instead I want to check distinct VALUES in that column. | Show distinct column values in pyspark dataframe | 1 | 1 | 1 | 0 | 0 | 344,799 |
39,383,557 | 2016-09-08T06:03:00.000 | 1 | 0 | 0 | 0 | 0 | python,apache-spark,pyspark,apache-spark-sql | 0 | 60,578,769 | 0 | 10 | 0 | false | 0 | 0 | If you want to select ALL(columns) data as distinct frrom a DataFrame (df), then
df.select('*').distinct().show(10,truncate=False) | 2 | 149 | 1 | 0 | With pyspark dataframe, how do you do the equivalent of Pandas df['col'].unique().
I want to list out all the unique values in a pyspark dataframe column.
Not the SQL type way (registertemplate then SQL query for distinct values).
Also I don't need groupby then countDistinct, instead I want to check distinct VALUES in that column. | Show distinct column values in pyspark dataframe | 1 | 0.019997 | 1 | 0 | 0 | 344,799 |
39,387,983 | 2016-09-08T10:01:00.000 | 1 | 0 | 0 | 0 | 1 | python,mysql,django,pootle | 1 | 39,447,127 | 0 | 1 | 0 | true | 1 | 0 | Install django debug toolbar, you can easily check all of the queries that have been executed | 1 | 0 | 0 | 0 | I am trying to debug a Pootle (pootle is build on django) installation which fails with a django transaction error whenever I try to add a template to an existing language. Using the python debugger I can see that it fails when pootle tries to save a model as well as all the queries that have been made in that session.
What I can't see is what specifically causes the save to fail. I figure Pootle/Django must have added some database constraint; how do I figure out which one? MySQL (the database being used) apparently can't log just failed transactions. | How do I get Django to log why an sql transaction failed? | 1 | 1.2 | 1 | 1 | 0 | 223
39,394,328 | 2016-09-08T15:02:00.000 | 1 | 1 | 0 | 1 | 0 | python,performance,file,io | 0 | 39,395,013 | 0 | 5 | 0 | false | 0 | 0 | If you can find a way to take advantage of hash tables your task will change from O(N^2) to O(N). The implementation will depend on exactly how large your files are and whether or not you have duplicate job IDs in file 2. I'll assume you don't have any duplicates. If you can fit file 2 in memory, just load the thing into pandas with job as the index. If you can't fit file 2 in memory, you can at least build a dictionary of {Job #: row # in file 2}. Either way, finding a match should be substantially faster. | 2 | 0 | 0 | 0 | I need to compare two files of differing formats quickly and I'm not sure how to do it. I would very much appreciate it if someone could point me in the right direction.
I am working on CentOS 6 and I am most comfortable with Python (both Python 2 and Python 3 are available).
The problem
I am looking to compare the contents of two large files (quickly). The files, unfortunately, differ in content; I will need to modify the contents of one before I can compare them. They are not particularly well-organized, so I can't move linearly down each and compare line-by-line.
Here's an example of the files:
File 1 File 2
Job,Time Job,Start,End
0123,3-00:00:00 0123,2016-01-01T00:00:00,2016-01-04T00:00:00
1111,05:30:00 1111,2016-01-01T00:00:00,2016-01-01T05:30:00
0000,00:00:05 9090.abc,2016-01-01T12:00:00,2016-01-01T22:00:00
9090.abc,10:00:00 0000,2015-06-01T00:00:00,2015-06-01T00:00:05
... ...
I would like to compare the contents of lines with the same "Job" field, like so:
Job File 1 Content File 2 Content
0123 3-00:00:00 2016-01-01T00:00:00,2016-01-04T00:00:00
1111 05:30:00 2016-01-01T00:00:00,2016-01-01T05:30:00
0000 00:00:05 2015-06-01T00:00:00,2015-06-01T00:00:05
9090.abc 10:00:00 2016-01-01T12:00:00,2016-01-01T22:00:00
... ... ...
I will be performing calculations on the File 1 Content and File 2 Content and comparing the two (for each line).
What is the most efficient way of doing this (matching lines)?
The system currently in place loops through one file in its entirety for each line in the other (until a match is found). This process may take hours to complete, and the files are always growing. I am looking to make the process of comparing them as efficient as possible, but even marginal improvements in performance can have a drastic effect.
I appreciate any and all help.
Thank you! | Comparing the contents of very large files efficiently | 0 | 0.039979 | 1 | 0 | 0 | 1,140 |
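A stdlib sketch of the hash-table join the first answer describes, using the sample rows from the question (the file contents are inlined with io.StringIO purely for illustration; on disk you would pass open file handles instead):

```python
import io

# Stand-ins for the two files on disk, taken from the question's sample data.
file1 = io.StringIO(
    "Job,Time\n"
    "0123,3-00:00:00\n"
    "1111,05:30:00\n"
    "0000,00:00:05\n"
    "9090.abc,10:00:00\n"
)
file2 = io.StringIO(
    "Job,Start,End\n"
    "0123,2016-01-01T00:00:00,2016-01-04T00:00:00\n"
    "1111,2016-01-01T00:00:00,2016-01-01T05:30:00\n"
    "9090.abc,2016-01-01T12:00:00,2016-01-01T22:00:00\n"
    "0000,2015-06-01T00:00:00,2015-06-01T00:00:05\n"
)

# One O(N) pass over file 2 builds a hash index: {job_id: "start,end"}.
next(file2)  # skip header
index = {}
for line in file2:
    job, _, rest = line.rstrip("\n").partition(",")
    index[job] = rest

# One O(N) pass over file 1 then looks each job up in O(1),
# replacing the O(N^2) nested scan of the current system.
next(file1)  # skip header
matches = []
for line in file1:
    job, _, time_spec = line.rstrip("\n").partition(",")
    if job in index:
        matches.append((job, time_spec, index[job]))

for row in matches:
    print(row)
```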
39,394,328 | 2016-09-08T15:02:00.000 | 1 | 1 | 0 | 1 | 0 | python,performance,file,io | 0 | 39,396,201 | 0 | 5 | 0 | false | 0 | 0 | I was trying to develop something where you'd split one of the files into smaller files (say 100,000 records each) and keep a pickled dictionary of each file that contains all Job_id as a key and its line as a value. In a sense, an index for each database and you could use a hash lookup on each subfile to determine whether you wanted to read its contents.
However, you say that the file grows continually and each Job_id is unique. So, I would bite the bullet and run your current analysis once. Have a line counter that records how many lines you analysed for each file and write to a file somewhere. Then in future, you can use linecache to know what line you want to start at for your next analysis in both file1 and file2; all previous lines have been processed so there's absolutely no point in scanning the whole content of that file again, just start where you ended in the previous analysis.
If you run the analysis at sufficiently frequent intervals, who cares if it's O(n^2) since you're processing, say, 10 records at a time and appending it to your combined database. In other words, the first analysis takes a long time but each subsequent analysis gets quicker and eventually n should converge on 1 so it becomes irrelevant. | 2 | 0 | 0 | 0 | I need to compare two files of differing formats quickly and I'm not sure how to do it. I would very much appreciate it if someone could point me in the right direction.
I am working on CentOS 6 and I am most comfortable with Python (both Python 2 and Python 3 are available).
The problem
I am looking to compare the contents of two large files (quickly). The files, unfortunately, differ in content; I will need to modify the contents of one before I can compare them. They are not particularly well-organized, so I can't move linearly down each and compare line-by-line.
Here's an example of the files:
File 1 File 2
Job,Time Job,Start,End
0123,3-00:00:00 0123,2016-01-01T00:00:00,2016-01-04T00:00:00
1111,05:30:00 1111,2016-01-01T00:00:00,2016-01-01T05:30:00
0000,00:00:05 9090.abc,2016-01-01T12:00:00,2016-01-01T22:00:00
9090.abc,10:00:00 0000,2015-06-01T00:00:00,2015-06-01T00:00:05
... ...
I would like to compare the contents of lines with the same "Job" field, like so:
Job File 1 Content File 2 Content
0123 3-00:00:00 2016-01-01T00:00:00,2016-01-04T00:00:00
1111 05:30:00 2016-01-01T00:00:00,2016-01-01T05:30:00
0000 00:00:05 2015-06-01T00:00:00,2015-06-01T00:00:05
9090.abc 10:00:00 2016-01-01T12:00:00,2016-01-01T22:00:00
... ... ...
I will be performing calculations on the File 1 Content and File 2 Content and comparing the two (for each line).
What is the most efficient way of doing this (matching lines)?
The system currently in place loops through one file in its entirety for each line in the other (until a match is found). This process may take hours to complete, and the files are always growing. I am looking to make the process of comparing them as efficient as possible, but even marginal improvements in performance can have a drastic effect.
I appreciate any and all help.
Thank you! | Comparing the contents of very large files efficiently | 0 | 0.039979 | 1 | 0 | 0 | 1,140 |
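The resume idea in the second answer can be sketched with the stdlib linecache module (assumptions: a temporary file stands in for the growing log, and the persisted line counter is just a variable here):

```python
import linecache
import os
import tempfile

# A small sample "growing" file, standing in for one of the real logs.
fd, path = tempfile.mkstemp(suffix=".csv")
with os.fdopen(fd, "w") as f:
    f.write("0123,3-00:00:00\n1111,05:30:00\n0000,00:00:05\n")

# Suppose a previous run recorded that it already processed 2 lines;
# start directly at line 3 instead of rescanning the whole file.
last_processed = 2
new_lines = []
n = last_processed + 1
while True:
    line = linecache.getline(path, n)  # 1-indexed; returns "" past end of file
    if not line:
        break
    new_lines.append(line.rstrip("\n"))
    n += 1

last_processed = n - 1     # persist this counter somewhere for the next run
linecache.clearcache()     # drop the cached contents before the file grows again
os.remove(path)
print(new_lines, last_processed)
```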
39,410,656 | 2016-09-09T11:34:00.000 | 1 | 0 | 0 | 0 | 0 | python,django | 0 | 39,411,527 | 0 | 1 | 0 | true | 1 | 0 | You could create a custom middleware (called after AuthenticationMiddleware), that checks if the user if logged in or not, and if not, replaces the current user object attached to request, with the the user of your choice. | 1 | 1 | 0 | 0 | Is there a way to globally provide a custom instance of User class instead of AnonymousUser?
It is not possible to assign AnonymousUser instances when a User is expected (for example in forms, there is a need to check for authentication and so on), and therefore an ordinary User instance with the name 'anonymous' (so that we could search for it in the DB) would be globally returned when a non-authenticated user visits the page. Would somehow implementing a custom authentication mechanism do the trick? And I also want to ask if such an idea is a standard approach before diving into this. | Providing 'default' User when not logged in instead of AnonymousUser | 1 | 1.2 | 1 | 0 | 0 | 49
39,419,108 | 2016-09-09T20:15:00.000 | 2 | 0 | 1 | 0 | 0 | python,build,pygame,pyinstaller | 0 | 68,652,885 | 0 | 2 | 0 | false | 0 | 1 | Not trying to dig up this old question, but this was at the top of my Google search so it may be for others as well.
If you intend to distribute the program in some kind of folder, you can always just mark everything unnecessary as hidden in Windows, and it will remain hidden even if you compress or extract it.
For a program that I designed to be very user friendly, I just selected each file and folder that was not necessary to the user and hid them. Unless the user has "show hidden files" turned on (rarely the default), they aren't likely to be intimidated by the mess of files that PyInstaller creates.
Problem is, alongside the executable, there is so much clutter. So many files, like pyds and dlls accompany the exe in the same directory, making it look so ugly.
Now, I know that these files are important; the modules I used, such as Pygame, need them to work. Still, how do I make PyInstaller build my game, so that it puts the clutter into its own folder? I could just manually make a folder and move the files in there, but it stops the exe from working.
If this info would help any, I used Python 3.4.3 and am on Windows. | How to remove clutter from PyInstaller one-folder build? | 0 | 0.197375 | 1 | 0 | 0 | 4,472 |
39,436,020 | 2016-09-11T12:20:00.000 | 1 | 0 | 1 | 0 | 0 | python-3.4 | 0 | 39,436,077 | 0 | 1 | 0 | true | 0 | 0 | Use the built-in method list.index
If you want to know where 'c' is:
l = ['a','b','c']
l.index('c') | 1 | 0 | 0 | 0 | I'm trying to find out the position of where X is in the list, e.g.:
if I had a list like ['a','b','c','d'] and I have 'c', how would I find where it is in the list, so that it would print '2' (as that's where it is in the list)
thanks | How to find in a list what thats number python3 | 0 | 1.2 | 1 | 0 | 0 | 29 |
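A runnable version of the list.index suggestion above, including the case the answer doesn't mention: index() raises ValueError when the item is absent, so guard with `in` (or catch the exception) if that can happen:

```python
letters = ["a", "b", "c", "d"]

# index() returns the position of the first matching element.
pos = letters.index("c")
print(pos)  # 2

# Guarded lookup for an item that might be missing.
target = "z"
pos_or_none = letters.index(target) if target in letters else None
print(pos_or_none)  # None
```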
39,438,003 | 2016-09-11T16:01:00.000 | 1 | 0 | 0 | 0 | 0 | python,sockets,tcp,ddos,python-sockets | 0 | 39,438,366 | 0 | 1 | 0 | true | 0 | 0 | What you describe are internals of the TCP stack of the operating system. Python just uses this stack via the socket interface. I doubt that any of these settings can be changed specific to the application at all, i.e. these are system wide settings which can only be changed with administrator privileges. | 1 | 1 | 0 | 0 | More in detail, would like to know:
what is the default SYN_RECEIVED timer,
how do I get to change it,
are SYN cookies or SYN caches implemented.
I'm about to create a simple special-purpose publicly accessible server. I must choose between using built-in TCP sockets or raw sockets and re-implementing the TCP handshake if these security mechanisms are not present. | what anti-ddos security systems python use for socket TCP connections? | 0 | 1.2 | 1 | 0 | 1 | 216
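To make the answer above concrete: Python's socket API exposes only per-socket knobs such as the listen() backlog; the SYN_RECEIVED timer, SYN backlog size, and SYN cookies remain system-wide Linux sysctls (net.ipv4.tcp_synack_retries, net.ipv4.tcp_max_syn_backlog, net.ipv4.tcp_syncookies) that require administrator privileges to change. A small loopback sketch:

```python
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
srv.listen(128)              # caps this socket's accept queue only
port = srv.getsockname()[1]

# One loopback handshake, performed entirely by the OS TCP stack;
# the application never sees the SYN_RECEIVED state at all.
cli = socket.create_connection(("127.0.0.1", port))
conn, addr = srv.accept()
peer = addr[0]
conn.close()
cli.close()
srv.close()
print(peer, port)
```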
39,438,843 | 2016-09-11T17:36:00.000 | 1 | 0 | 1 | 0 | 0 | python,regex | 0 | 39,438,883 | 0 | 4 | 0 | true | 0 | 0 | Use parentheses
Like
re.findall("(foo)bar","foobar foogy woogy") | 1 | 0 | 0 | 0 | I've been reading the documentation but can't find what I'm looking for.
I'm simply trying to match foo inside foobar but can't seem to see how to do it. Any guidance would be helpful! | Python Regex matching word inside words | 0 | 1.2 | 1 | 0 | 0 | 162 |
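A few runnable variations on the accepted answer; the lookahead form at the end is an addition, not part of the original answer:

```python
import re

text = "foobar foogy woogy"

# A capturing group returns only the 'foo' part of each 'foobar' match.
groups = re.findall(r"(foo)bar", text)
print(groups)  # ['foo']

# re.search gives the same capture plus its position in the string.
m = re.search(r"(foo)bar", text)
print(m.group(1), m.start(1))  # foo 0

# A lookahead matches 'foo' only when 'bar' follows, without consuming it.
lookahead = re.findall(r"foo(?=bar)", text)
print(lookahead)  # ['foo']
```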
39,482,504 | 2016-09-14T04:30:00.000 | 1 | 0 | 0 | 0 | 1 | python,pyinstaller,cx-oracle | 1 | 39,503,038 | 0 | 1 | 0 | true | 0 | 0 | One thing that you may be running into is the fact that if you used the instant client RPMs when you built cx_Oracle an RPATH would have been burned into the shared library. You can examine its contents and change it using the chrpath command. You can use the special path $ORIGIN in the modified RPATH to specify a path relative to the shared library.
If an RPATH isn't the culprit, then you'll want to examine the output from the ldd command and see where it is looking and then adjust things to make it behave itself! | 1 | 0 | 0 | 0 | I wrote a python application that uses cx_Oracle and then generates a pyinstaller bundle (folder/single executable). I should note it is on 64 bit linux. I have a custom spec file that includes the Oracle client libraries so everything that is needed is in the bundle.
When I run the bundled executable on a freshly installed CentOS 7.1 VM, (no Oracle software installed), the program connects to the database successfully and runs without error. However, when I install the bundled executable on another system that contains RHEL 7.2, and I try to run it, I get
Unable to acquire Oracle environment handle.
My understanding is this is due to an Oracle client installation that has some sort of conflict. I tried unsetting ORACLE_HOME on the machine giving me errors. It's almost as though the program is looking for the Oracle client libraries in a location other than in the location where I bundled the client files.
It seems like it should work on both machines or neither machine. I guess I'm not clear on how the Python application/cx_Oracle finds the Oracle client libraries. Again, it seems to have found them fine on a machine with a fresh operating system installation. Any ideas on why this is happening? | Why does pyinstaller generated cx_oracle application work on fresh CentOS machine but not on one with Oracle client installed? | 0 | 1.2 | 1 | 1 | 0 | 306 |
39,500,513 | 2016-09-14T22:18:00.000 | 1 | 0 | 0 | 0 | 0 | python,django,amazon-web-services,amazon-elastic-beanstalk,django-migrations | 0 | 39,500,763 | 0 | 1 | 0 | false | 1 | 0 | Seems that you might have deleted the table or migrations at some point of time.
When you run makemigrations, Django creates migrations, and when you run migrate, it applies them to whichever database is specified in the settings file.
If you keep creating migrations and do not run them against a particular database, it will be absolutely fine. Whenever you switch to a database and run migrations, it will handle it, as every database stores the point up to which migrations have been run in the django_migrations table and will only run the migrations that come after that point.
To solve your problem, you can delete all databases and migration files and start afresh, as you are perhaps testing right now. Things will go fine until you delete a migration or a database on any of the servers.
If you have precious data, you should get into migration files and tables to analyse and manage things. | 1 | 1 | 0 | 0 | I am developing a small web application using Django and Elasticbeanstalk.
I created a EB application with two environments (staging and production), created a RDS instance and assigned it to my EB environments.
For development I use a local database, because deploying to AWS takes quite some time.
However, I am having troubles with the migrations. Because I develop and test locally every couple of minutes, I tend to have different migrations locally and on the two environments.
So once I deploy the current version of the app to a certain environment, the "manage.py migrate" fails most of the time because tables already exist or do not exist even though they should (because another environment already created the tables).
So I was wondering how to handle the migration process when using multiple environments for development, staging and production with some common and some exclusive database instances that might not reflect the same structure all the time?
Should I exclude the migration files from the code repository and the eb deployment and run makemigrations & migrate after every deployment? Should I not run migrations automatically using the .ebextensions and apply all the migrations manually through one of the instances?
What's the recommended way of using the same Django application with different database instances on different environments? | Django Migration Process for Elasticbeanstalk / Multiple Databases | 1 | 0.197375 | 1 | 0 | 0 | 795 |
39,525,214 | 2016-09-16T06:41:00.000 | 0 | 0 | 0 | 1 | 0 | java,python,scala,apache-spark,pyspark | 0 | 39,530,209 | 0 | 2 | 0 | false | 0 | 0 | Your question is unclear. If the data are on your local machine, you should first copy your data to the cluster's HDFS filesystem. Spark can work in 3 modes with YARN (are you using YARN or Mesos?): cluster, client and standalone. What you are looking for is client mode or cluster mode. But if you want to start the application from your local machine, use client mode. If you have SSH access, you are free to use both.
The simplest way is to copy your code directly onto the cluster (if it is properly configured) and then start the application with the ./spark-submit script, providing the class to use as an argument. It works with Python scripts and Java/Scala classes (I only use Python, so I don't really know the details). | 1 | 0 | 1 | 0 | 0 | I have to send some applications in Python to an Apache Spark cluster. A cluster manager is given, along with some worker nodes and the addresses to send the application to.
My question is: how do I set up and configure Spark on my local computer to send those requests, along with the data to be worked on, to the cluster?
I am working on Ubuntu 16.xx and have already installed Java and Scala. I have searched the internet, but most of what I find is how to build the cluster, or some old advice on how to do it, which is out of date. | Configurate Spark by given Cluster | 0 | 0 | 1 | 0 | 0 | 35
39,537,700 | 2016-09-16T18:10:00.000 | -1 | 0 | 1 | 0 | 1 | python,pycharm,xml-rpc | 1 | 44,059,972 | 0 | 1 | 0 | false | 0 | 0 | Hi, I had the same problem as you. I solved it by making the line 127.0.0.1 localhost the first line in /etc/hosts. The reason the Python console does not run is that it tries to connect to localhost:pycharm-port, but localhost is resolved to the IPv6 address ::1, and the connection is refused. | 1 | 4 | 0 | 0 | Exception in XML-RPC listener loop (java.net.SocketException: Socket closed).
When I run PyCharm from bash, I get this error. As a result, I can't use the Python console in PyCharm. Does anybody know how to fix it?
OS: ubuntu 16.04 | How to fix python console error in pycharm? | 0 | -0.197375 | 1 | 0 | 1 | 602 |
39,571,659 | 2016-09-19T11:05:00.000 | 0 | 0 | 0 | 0 | 0 | javascript,python,html | 0 | 69,865,240 | 0 | 2 | 0 | false | 1 | 0 | If you are not gonna use any sensitive data like password you can use localStorage or Url Hash . | 1 | 0 | 0 | 0 | I want to post data from html to another html
I know how to post data html->python and python-> html
I have a dictionary in the html (I get it from Python - return render_to_response('page.html', locals())).
How can I use the dictionary in the second html file? | Post data from html to another html | 0 | 0 | 1 | 0 | 1 | 95
39,598,572 | 2016-09-20T15:49:00.000 | 1 | 0 | 1 | 0 | 0 | python,multithreading,queue,python-multithreading,pyzmq | 0 | 70,651,650 | 0 | 1 | 0 | false | 0 | 0 | Q : "Is there a better approach?"
A :
Well, my ultimate performance-candidate would be this :
the sampler will operate two or more separate, statically preallocated "circular" buffers, one for storing in phase one, the other thus free to get sent, and vice versa
once the sampler's filling reaches the end of the first buffer, it starts filling the other, sending the first one and vice versa
ZeroMQ zero-copy, zero-blocking .send( zmq.NOBLOCK ) over an inproc:// transport-class uses just memory-pointer mapping, without moving data in-RAM ( or we can even further reduce the complexity, if moving the filled-up buffer right from here directly to the client, w/o any mediating party ( if not needed otherwise ) for doing so, if using a pre-allocated, static storage,like a numpy.array( ( bufferSize, nBuffersInRoundRobinCYCLE ), dtype = np.int32 ), we can just send an already packed-block of { int32 | int64 }-s or other dtype-mapped data using .data-buffer, round-robin cycling along the set of nBuffersInRoundRobinCYCLE-separate inplace storage buffers (used for sufficient latency-masking, filling them one after another in cycle and letting them get efficiently .send( zmq.NOBLOCK )-sent in the "background" ( behind the back of the Python-GIL-lock blocker tyrant ) in the meantime as needed ).
Tweaking the Python interpreter, disabling garbage collection altogether via gc.disable() and tuning the default GIL-lock smooth processing "meat-chopper" switch interval from the default 5 [ms] to somewhere reasonably above that via sys.setswitchinterval(), as no threading is needed anymore, and moving several acquired samples in lump multiples of CPU-words ~up~to~ CPU-cache-line lengths ( aligned for reducing the fast-cache-to-slow-RAM-memory cache-consistency management mem-I/O updates ) are left for the next LoD of bleeding performance boosting candidates | 1 | 7 | 0 | 0 | I acquire samples (integers) at a very high rate (several kilosamples per second) in a thread and put() them in a threading.Queue. The main thread get()s the samples one by one into a list of length 4096, then msgpacks them and finally sends them via ZeroMQ to a client. The client shows the chunks on the screen (print or plot). In short, the original idea is: fill the queue with single samples, but empty it in large chunks.
Everything works 100% as expected, but the latter part, i.e. accessing the queue, is very, very slow. The queue gets larger and the output always lags behind by several to tens of seconds.
My question is: how can I do something to make queue access faster? Is there a better approach? | Python threading queue is very slow | 0 | 0.197375 | 1 | 0 | 0 | 3,388 |
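Independently of the buffer scheme above, the "fill with single samples, empty in large chunks" part can be done with the stdlib alone by draining the queue in batches with get_nowait() rather than one blocking get() per sample; a sketch (the drain helper is a hypothetical name):

```python
import queue

q = queue.Queue()
for sample in range(10000):      # pretend the sampler thread put() these
    q.put(sample)

def drain(q, max_items=4096):
    """Pull up to max_items from the queue in one tight loop."""
    chunk = []
    try:
        while len(chunk) < max_items:
            chunk.append(q.get_nowait())
    except queue.Empty:
        pass
    return chunk

chunks = []
while True:
    chunk = drain(q)
    if not chunk:
        break
    chunks.append(chunk)

print([len(c) for c in chunks])  # [4096, 4096, 1808]
```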
39,646,077 | 2016-09-22T18:11:00.000 | 0 | 0 | 1 | 0 | 0 | python,anaconda | 0 | 39,646,217 | 0 | 2 | 0 | false | 0 | 0 | You can use any text editor to open a .py file, e.g. TextMate, TextWrangler, TextEdit, PyCharm, AquaMacs, etc. | 2 | 1 | 0 | 0 | I have installed Anaconda, but I do not know how to open a .py file..
If it is possible, please explain plainly; I browsed several threads, but I understood none of them.
Thanks a lot for your help.
Best, | How to read a .py file after I install Anaconda? | 1 | 0 | 1 | 0 | 0 | 8,545 |
39,646,077 | 2016-09-22T18:11:00.000 | 3 | 0 | 1 | 0 | 0 | python,anaconda | 0 | 39,646,328 | 0 | 2 | 0 | true | 0 | 0 | In the menu structure of your operating system, you should see a folder for Anaconda. In that folder is an icon for Spyder. Click that icon.
After a while (Spyder loads slowly) you will see the Spyder integrated environment. You can choose File then Open from the menu, or just click the Open icon that looks like an open folder. In the resulting Open dialog box, navigate to the relevant folder and open the relevant .py file. The Open dialog box will see .py, .pyw, and .ipy files by default, but clicking the relevant list box will enable you to see and load many other kinds of files. Opening that file will load the contents into the editor section of Spyder. You can view or edit the file there, or use other parts of Spyder to run, debug, and do other things with the file.
As of now, there is no in-built way to load a .py file in Spyder directly from the operating system. You can set that up in Windows by double-clicking a .py file, then choosing the spyder.exe file, and telling Windows to always use that application to load the file. The Anaconda developers have said that a soon-to-come version of Anaconda will modify the operating system so that .py and other files will load in Spyder with a double-click. But what I said above works for Windows.
This answer was a bit condensed, since I do not know your level of understanding. Ask if you need more details. | 2 | 1 | 0 | 0 | I have installed Anaconda, but I do not know how to open a .py file..
If it is possible, please explain plainly; I browsed several threads, but I understood none of them.
Thanks a lot for your help.
Best, | How to read a .py file after I install Anaconda? | 1 | 1.2 | 1 | 0 | 0 | 8,545 |