Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
34,198,892 | 2015-12-10T10:04:00.000 | 2 | 0 | 1 | 1 | python,ubuntu | 55,406,526 | 6 | false | 0 | 0 | Neither try any of the ways above nor sudo apt autoremove python3, because it will remove all GNOME-based applications from your system, including gnome-terminal. In case you have already made that mistake and are left with the bare console only, try sudo apt install gnome from the console.
Try to change your default Python version instead of removing it. You can do this through the .bashrc file or the export PATH command. | 4 | 13 | 0 | I have recently got hold of a RackSpace Ubuntu server and it has Pythons all over the place:
IPython in 3.5, Pandas in 3.4 & 2.7, and modules I need, like pyodbc etc., are only in 2.7.
Therefore, I am keen to clean up the box and, as a 2.7 user, keep everything in 2.7.
So the key question is: is there a way to remove both 3.4 and 3.5 efficiently at the same time while keeping Python 2.7? | Ubuntu, how do you remove all Python 3 but not 2 | 0.066568 | 0 | 0 | 127,783 |
34,198,892 | 2015-12-10T10:04:00.000 | 8 | 0 | 1 | 1 | python,ubuntu | 34,220,703 | 6 | true | 0 | 0 | So I worked out in the end that you cannot uninstall 3.4, as it is the default on Ubuntu.
All I did was simply remove Jupyter, then alias python=python2.7 and install all the packages on Python 2.7 again.
Arguably, I could install virtualenv, but my colleagues and I are only using 2.7. I am just going to be lazy in this case :) | 4 | 13 | 0 | I have recently got hold of a RackSpace Ubuntu server and it has Pythons all over the place:
IPython in 3.5, Pandas in 3.4 & 2.7, and modules I need, like pyodbc etc., are only in 2.7.
Therefore, I am keen to clean up the box and, as a 2.7 user, keep everything in 2.7.
So the key question is: is there a way to remove both 3.4 and 3.5 efficiently at the same time while keeping Python 2.7? | Ubuntu, how do you remove all Python 3 but not 2 | 1.2 | 0 | 0 | 127,783 |
34,198,892 | 2015-12-10T10:04:00.000 | 9 | 0 | 1 | 1 | python,ubuntu | 34,198,961 | 6 | false | 0 | 0 | EDIT: As pointed out in recent comments, this solution may BREAK your system.
You most likely don't want to remove python3.
Please refer to the other answers for possible solutions.
Outdated answer (not recommended)
sudo apt-get remove 'python3.*' | 4 | 13 | 0 | I have recently got hold of a RackSpace Ubuntu server and it has Pythons all over the place:
IPython in 3.5, Pandas in 3.4 & 2.7, and modules I need, like pyodbc etc., are only in 2.7.
Therefore, I am keen to clean up the box and, as a 2.7 user, keep everything in 2.7.
So the key question is: is there a way to remove both 3.4 and 3.5 efficiently at the same time while keeping Python 2.7? | Ubuntu, how do you remove all Python 3 but not 2 | 1 | 0 | 0 | 127,783 |
34,199,233 | 2015-12-10T10:19:00.000 | 7 | 0 | 0 | 0 | python,tensorflow,tensorflow2.0,tensorflow2.x,nvidia-titan | 44,128,902 | 16 | false | 0 | 0 | Shameless plug: if you install the GPU-supported TensorFlow, the session will first allocate all GPUs, whether you set it to use only the CPU or the GPU. My tip is that even if you set the graph to use the CPU only, you should set the same configuration (as answered above) to prevent unwanted GPU occupation.
In an interactive interface like IPython or Jupyter, you should also set that configuration; otherwise it will allocate all the memory and leave almost none for others. This is sometimes hard to notice. | 2 | 349 | 1 | I work in an environment in which computational resources are shared, i.e., we have a few server machines equipped with a few Nvidia Titan X GPUs each.
For small to moderate size models, the 12 GB of the Titan X is usually enough for 2–3 people to run training concurrently on the same GPU. If the models are small enough that a single model does not take full advantage of all the computational units of the GPU, this can actually result in a speedup compared with running one training process after the other. Even in cases where the concurrent access to the GPU does slow down the individual training time, it is still nice to have the flexibility of having multiple users simultaneously train on the GPU.
The problem with TensorFlow is that, by default, it allocates the full amount of available GPU memory when it is launched. Even for a small two-layer neural network, I see that all 12 GB of the GPU memory is used up.
Is there a way to make TensorFlow only allocate, say, 4 GB of GPU memory, if one knows that this is enough for a given model? | How to prevent tensorflow from allocating the totality of a GPU memory? | 1 | 0 | 0 | 237,456 |
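A sketch of the kind of configuration the answers above allude to, using the TF 1.x ConfigProto API; the helper function and the 4 GB / 12 GB fraction are illustrative, not part of TensorFlow:

```python
def limited_gpu_session(fraction=4.0 / 12.0):
    """Return a TF 1.x Session capped at `fraction` of each GPU's memory.

    Sketch only: assumes the TF 1.x API (tf.ConfigProto / tf.Session);
    TF 2.x exposes the equivalent under tf.config.
    """
    import tensorflow as tf
    config = tf.ConfigProto()
    # Allocate at most this fraction of each GPU's memory.
    config.gpu_options.per_process_gpu_memory_fraction = fraction
    # Alternative: start small and grow the allocation on demand
    # instead of using a fixed cap.
    # config.gpu_options.allow_growth = True
    return tf.Session(config=config)
```

With a fixed fraction, several users can share one 12 GB Titan X as long as their caps add up to less than the card's total.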
34,199,233 | 2015-12-10T10:19:00.000 | 1 | 0 | 0 | 0 | python,tensorflow,tensorflow2.0,tensorflow2.x,nvidia-titan | 52,828,871 | 16 | false | 0 | 0 | I tried to train U-Net on the VOC data set, but because of the huge image size, memory runs out. I tried all the above tips, even batch size == 1, yet with no improvement. Sometimes the TensorFlow version also causes memory issues. Try using
pip install tensorflow-gpu==1.8.0 | 2 | 349 | 1 | I work in an environment in which computational resources are shared, i.e., we have a few server machines equipped with a few Nvidia Titan X GPUs each.
For small to moderate size models, the 12 GB of the Titan X is usually enough for 2–3 people to run training concurrently on the same GPU. If the models are small enough that a single model does not take full advantage of all the computational units of the GPU, this can actually result in a speedup compared with running one training process after the other. Even in cases where the concurrent access to the GPU does slow down the individual training time, it is still nice to have the flexibility of having multiple users simultaneously train on the GPU.
The problem with TensorFlow is that, by default, it allocates the full amount of available GPU memory when it is launched. Even for a small two-layer neural network, I see that all 12 GB of the GPU memory is used up.
Is there a way to make TensorFlow only allocate, say, 4 GB of GPU memory, if one knows that this is enough for a given model? | How to prevent tensorflow from allocating the totality of a GPU memory? | 0.012499 | 0 | 0 | 237,456 |
34,200,159 | 2015-12-10T10:58:00.000 | 0 | 1 | 0 | 0 | python,c++,raspberry-pi,gpio | 40,125,816 | 2 | false | 0 | 1 | If you clean up the GPIO headers in both scripts, it should be possible; otherwise it won't work.
You can clean up in Python by using GPIO.cleanup(); then it should work, because the pins are released again for your C++ code. | 1 | 0 | 0 | I have a Python script and a C++ program running at the same time, both accessing the GPIO pins (not the same ones, though) in this order:
C++
Python
C++
The access of the C++ program worked (I used wireless transmitters and received the message). After that the Python access (light up an LED) worked as well. But when I tried to send another message using the wireless transmitters with C++, nothing happened, I don't receive messages anymore.
Is there a way to find out, whether the GPIO pins are blocked or something? | Is it possible to access GPIO pins from a Python script and a C++ program at the same time? | 0 | 0 | 0 | 250 |
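A sketch of the cleanup suggested above. RPi.GPIO is only importable on a Raspberry Pi, and the example pin numbers are hypothetical:

```python
def release_pins(pins=None):
    """Release GPIO channels so the other process (the C++ program here)
    can claim them. Sketch: requires RPi.GPIO on a Raspberry Pi."""
    import RPi.GPIO as GPIO
    if pins is None:
        GPIO.cleanup()      # no argument: release every channel this script set up
    else:
        GPIO.cleanup(pins)  # e.g. release_pins([17, 27]) for specific pins
```

Calling this at the end of the Python script (or in a finally block) leaves the pins in a clean state for the C++ side.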
34,200,551 | 2015-12-10T11:18:00.000 | 0 | 0 | 0 | 1 | python,windows,python-2.7,batch-file,scheduled-tasks | 34,218,832 | 1 | false | 0 | 0 | Thank you guys for your help. It was indeed "just" the working directory that I had to set to the location of the bat file. | 1 | 0 | 0 | Hi folks, so I have the following problem:
I have the following code in a batch file:
..\python-2.7.10.amd64\python.exe ./bin/bla.py ./conf/config.conf > ./logs/output.txt
This works like a charm by double-clicking the batch. Next, my plan was to automate the call of this batch by adding it to the Task Scheduler in Windows. So I changed all the relative paths to absolute paths:
D:\path\to\python-2.7.10.amd64\python.exe D:\path\to\bin\bla.py D:\path\to\conf\config.conf > D:\path\to\logs\output.txt
This also still works by double clicking the batch file.
So my next step was adding the batch to the task scheduler but when I run it from there I get this error message:
Traceback (most recent call last): File "D:\path\to\bin\bla.py", line 159, in logging.config.fileConfig(logFile) File "D:\path\to\python-2.7.10.amd64\lib\logging\confi eConfig formatters = _create_formatters(cp) File "D:\path\to\python-2.7.10.amd64\lib\logging\confi reate_formatters flist = cp.get("formatters", "keys") File "D:\path\to\python-2.7.10.amd64\lib\ConfigParser. raise NoSectionError(section) ConfigParser.NoSectionError: No section: 'formatters'
So for some reason, I think, the Python script can't find the conf file by the absolute path, but I don't understand why. I also tried it with the relative paths in the Task Scheduler; it obviously also doesn't work.
Does anyone of you have a clue why it works straight from the batch but not from the task scheduler ? | Python script from batch file won't run in task scheduler | 0 | 0 | 0 | 1,575 |
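A common fix for this symptom is that Task Scheduler starts the process in a different working directory, so relative paths no longer resolve. A hedged sketch of making the script independent of the caller's directory; in the Task Scheduler UI the equivalent is filling in the "Start in" field with the batch file's folder:

```python
import os

def script_dir(script_path):
    """Absolute folder containing `script_path` (pass __file__ from the script)."""
    return os.path.dirname(os.path.abspath(script_path))

# At the top of bla.py one could do:
#     os.chdir(script_dir(__file__))
# so that ./conf/config.conf and ./logs/output.txt resolve the same way
# they do when the batch file is double-clicked.
print(script_dir("/opt/app/bin/bla.py"))  # -> /opt/app/bin
```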
34,204,574 | 2015-12-10T14:35:00.000 | 0 | 0 | 1 | 0 | python,pycharm | 34,209,473 | 1 | true | 1 | 0 | PyCharm support explained that (as of v 4.5.3), there is a checkbox option in Deployment Settings for "Visible only for this project". | 1 | 1 | 0 | I have two projects in PyCharm 4.5.2. When I change some deployment settings (e.g., add or delete a server) in one project, they also change in the other. Is this the way it is supposed to work? Is there a way I should be doing this so that the settings are specific to each project? | Are settings unique to projects in PyCharm | 1.2 | 0 | 0 | 71 |
34,206,946 | 2015-12-10T16:27:00.000 | 0 | 0 | 1 | 0 | python,virtualenv,conda | 34,273,042 | 1 | true | 0 | 0 | No, this will not break your system's Python, as long as you don't tick the option "register Miniconda as the default system Python" (or whatever that option is called, depending on your OS).
One of the key benefits of conda is that you can create isolated Python environments, fully independent of each other. | 1 | 1 | 0 | I have a system with a certain Python version and packages installed using the distribution repositories. For some project (calculation) I need newer versions of the packages. I am thinking of installing Anaconda and using conda virtual environments. Will this break programs that must use the system packages?
(note: I tried virtualenv, but I couldn't install a newer version of matplotlib, because of problems with pygtk) | Do anaconda packages interfere with system python | 1.2 | 0 | 0 | 1,386 |
34,209,697 | 2015-12-10T19:02:00.000 | 0 | 0 | 1 | 1 | python,windows-10 | 34,210,329 | 1 | false | 0 | 0 | Choose "Open with", then scroll down and click something like "Choose another application from this computer" (I don't know exactly; I use Windows in a different language). Then just select your Python executable and click OK. | 1 | 0 | 0 | I just upgraded to Windows 10 and downloaded the Anaconda Python distribution, choosing the option for it to add everything to my PATH etc. Back in Windows 8, when I created a .py file I could execute it from the file explorer just by clicking on it, but for some reason Windows 10 won't recognise .py files, and when I try to run them it opens them in Notepad. I am able to run them from the command line. What's gone wrong?
UPDATE: When I choose another application to open the file, I click on the python application and it says "Cannot Execute as Python27.dll is not found", I installed python 3, why is it trying to open in python2.7? | Python Executables on Windows | 0 | 0 | 0 | 71 |
34,212,036 | 2015-12-10T21:26:00.000 | 1 | 0 | 1 | 1 | python,python-3.x | 34,212,369 | 4 | false | 0 | 0 | You have to add the Python bin folder to your PATH. You can do it manually, but I remember that when you install Python you have an option to do that. | 1 | 11 | 0 | Just curious: is there a particular reason why Python 3.x is not installed on Windows to run by default with the command line "python3", like it does on Mac OS X and Linux? Is there some kind of way to configure Python so that it runs like this? Thanks.
EDIT: Just to add, the reason I am asking is because I have both the Python 2 and 3 interpreter installed on my computer, and so it is ambiguous, as both are run using the command "python". | Python 3 installation on windows running from command line | 0.049958 | 0 | 0 | 25,234 |
34,213,429 | 2015-12-10T23:09:00.000 | 0 | 0 | 0 | 0 | python,qt,pyqt4,poppler | 34,292,822 | 2 | false | 0 | 1 | In C:\poppler-0.24.5\include\src replace it C:\poppler-0.24.5\include\qt4\src | 1 | 1 | 0 | Trying to install python-poppler-qt4 on Windows (8.1) but i've been having issues building/installing it
(error fatal error C1083: Cannot open include file: 'QMetaType' : No such file or directory. #include QMetaType)
Before this error I had a missing poppler-qt4.dll issue. After locating and installing the dll I get the error above
It's pretty annoying that I can't install this package like any other one. Suggestions? I really need to use Poppler for PDF rendering. | Python Poppler install issues | 0 | 0 | 0 | 627 |
34,213,644 | 2015-12-10T23:26:00.000 | 1 | 0 | 0 | 0 | python,django,django-forms,django-views | 34,215,469 | 1 | false | 1 | 0 | You could use either option.
Option #1: In the post method (if using Class-based-views, otherwise check for "post" as the request type), just instantiate the form with MessageForm(request.POST), and then check the form's is_valid() method. If the form is valid, save the Message object and redirect back to the same view using HttpResponseRedirect within the if form.is_valid(): code block.
If you're checking for the related Messages objects in your template, the newly created message should be there.
Option #2: Very similar to Option #1, except if the form is not valid, re-render the same template that is used for the product_view with the non-valid form instance included in the template context. | 1 | 0 | 0 | I am developing a web-site using Django/Python. I am quite new to this technology and I want to do the web-site in a right way.
So here is my problem:
Imagine, that there is a Product entity and product view to display the Product info.
I use (product_view in my views.py ).
There is also Message entity and the Product might have multiple of them.
In Product view page ( I use "product_view" action in my views.py ) I also query for the messages and display them.
Now, there should be a form to submit a new message ( in product view page ).
Question #1: what action name should form have ( Django way, I do understand I might assign whatever action I want )?
Option #1: it might be the same action "product_view". In product_view logic I might check for the HTTP method ( get or post ) and handle form submit or just get request. But it feels a bit controversial for me to submit a message to the "product_view" action.
Option #2: create an action named "product_view_message_save". ( I don't want to create just "message_save", because there might be multiple ways to submit a message ). So I handle the logic there and then I make a redirect to product_view. Now the fun part is: if the form is invalid, I try to put this form to the session, make the redirect to the "product_view", get the form there and display an error near the message field. However, the form in Django is not serializable. I can find a workaround, but it just doesn't feel right again.
What would you say?
Any help/advice would be highly appreciated!
Best Regards,
Maksim | Django - guidance needed | 0.197375 | 0 | 0 | 34 |
34,213,706 | 2015-12-10T23:32:00.000 | 0 | 0 | 0 | 1 | python,mysql,twisted | 35,131,551 | 1 | false | 0 | 0 | I think the best way to accomplish this is to first make a select for the id (or ids) of the row/rows you want to update, then update the row with a WHERE condition matching the id of the item to update. That way you are certain that you only updated the specific item.
An UPDATE statement can update multiple rows that matches your criteria. That is why you cannot request the last updated id by using a built in function. | 1 | 3 | 0 | Python, Twistd and SO newbie.
I am writing a program that organises seating across multiple rooms. I have only included related columns from the tables below.
Basic Mysql tables
Table
id
Seat
id
table_id
name
Card
seat_id
The Seat and Table tables are pre-populated with the 'name' columns initially NULL.
Stage One
I want to update a seat's name by finding the first available seat given a group of table ids.
Stage Two
I want to be able to get the updated row id from Stage One (because I don't already know this) to add to the Card table. Names can be assigned to more than one seat so I can't just find a seat that matches a name.
I can do Stage One but have no idea how to do Stage Two because lastrowid only works for inserts not updates.
Any help would be appreciated.
Using twisted.enterprise.adbapi if that helps.
Cheers | Python Twistd MySQL - Get Updated Row id (not inserting) | 0 | 1 | 0 | 311 |
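The select-then-update pattern from the answer above can be sketched with the stdlib sqlite3 module (the schema is a trimmed version of the tables in the question; with MySQL via twisted.enterprise.adbapi the SQL shape is the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE seat (id INTEGER PRIMARY KEY, table_id INTEGER, name TEXT)")
conn.executemany("INSERT INTO seat (table_id, name) VALUES (?, ?)",
                 [(1, None), (1, None), (2, None)])

# Stage One: pick the first free seat across the given group of tables.
row = conn.execute(
    "SELECT id FROM seat WHERE table_id IN (1, 2) AND name IS NULL "
    "ORDER BY id LIMIT 1").fetchone()
seat_id = row[0]

# Stage Two: update by that id, so the updated row's id is already known
# and can be inserted into the card table afterwards.
conn.execute("UPDATE seat SET name = ? WHERE id = ?", ("Alice", seat_id))

print(seat_id)  # -> 1
```

Because the UPDATE is constrained by the id selected in Stage One, there is no need for a lastrowid equivalent for updates.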
34,214,908 | 2015-12-11T01:49:00.000 | 1 | 1 | 0 | 0 | python,mysql,linux,django,passwords | 34,215,479 | 4 | false | 1 | 0 | I'd place the password file in a directory with 600 permissions owned by the Django user. The Django user would be able to do what it needed to and nobody else would even be able to look in the directory (except root and Django)
Another thing you could do would be to store it in a database and set it so that the root user and the Django user in the DB have unique passwords, that way only the person with those passwords could access it. IE system root is no longer the same as DB root. | 1 | 1 | 0 | I've got a Django project running on an Ubuntu server. There are other developers who have the ability to ssh into the box and look at files. I want to make it so that the mysql credentials and api keys in settings.py are maybe separated into a different file that's only viewable by the root user, but also usable by the django project to run.
My previous approach was to make the passwords file only accessible to root:root with chmod 600, but my settings.py throws an ImportError when it tries to import the password file's variables. I read about setuid, but that doesn't seem very secure at all. What's a good approach for what I'm trying to do? Thanks. | Linux/Python: How can I hide sensitive information in a Python file, so that developers on the environment won't be able to access it? | 0.049958 | 0 | 0 | 885 |
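One way to implement the restricted-file idea without the ImportError described in the question: keep the credentials as data (e.g. JSON) owned by the user Django runs as, rather than importable Python owned by root, and check the permissions at load time. The path and key names here are hypothetical:

```python
import json
import os

def load_secrets(path):
    """Read credentials from a JSON file that should be chmod 600 and owned
    by the user the Django process runs as. Keeping secrets as data avoids
    importing a .py file the process cannot read."""
    if os.stat(path).st_mode & 0o077:
        raise RuntimeError("secrets file must not be group/world accessible")
    with open(path) as f:
        return json.load(f)

# settings.py could then do (names are hypothetical):
#     SECRETS = load_secrets("/etc/myapp/secrets.json")
#     DATABASES["default"]["PASSWORD"] = SECRETS["db_password"]
```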
34,217,236 | 2015-12-11T06:17:00.000 | 1 | 0 | 1 | 0 | python,regex | 34,222,140 | 2 | true | 0 | 0 | There are a few important things to understand here:
First, p* matches zero or more, while p+ matches one or more.
Second, you will get the first match, no matter if that match is an empty string or not.
Third, regex is greedy by default, so once it finds the first match it will include as many p's as possible.
So, as a result of this,
p* on blackpink matches the zero p at the very beginning of the string, that is ''.
p* on pinkpink matches the first p (not the second).
p+ on blackpink matches the sixth letter, the p, since the empty string is no longer a match because of the +.
p+ on pinkpink matches the first p. | 1 | 1 | 0 | When I use regex p* on string blackpink it returns the empty string as a match even though p is inside the string.
When I use the same regex p* on string pinkpink, it matches and returns p, indicating it's matching only at the start of the string, even though I have not specified anything of the kind.
The peculiar behavior is that, when I use p+ on string pink and blackpink, in both cases it returns p , indicating it does not care if the match is in the beginning or inside a string.
Can anyone explain this? | Python regex * matches occurrences only in the starting of the string | 1.2 | 0 | 0 | 285 |
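The behaviour described in the answer can be verified directly:

```python
import re

# '*' allows an empty match, and re.search returns the FIRST match,
# so on "blackpink" the zero-length match at index 0 wins.
assert re.search(r"p*", "blackpink").group() == ""
assert re.search(r"p*", "blackpink").start() == 0

# On "pinkpink" there is a 'p' at index 0, so the first match is "p".
assert re.search(r"p*", "pinkpink").group() == "p"

# '+' needs at least one 'p', so search skips ahead to the first real 'p'.
assert re.search(r"p+", "blackpink").group() == "p"
assert re.search(r"p+", "blackpink").start() == 5

print("all matches behave as described")
```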
34,221,468 | 2015-12-11T10:36:00.000 | 0 | 1 | 0 | 1 | python,ubuntu,debian,32bit-64bit,ctypes | 34,221,649 | 2 | false | 0 | 0 | I am not sure if you can do this in the same process - we are talking about arithmetic here: 32bit pointers are different from 64bit pointers, so trying to reference them in the same process ... well, I am not sure what happens when trying to access a memory area which is not accessible or which is not supposed to be accessed (I guess Segmentation fault? ).
The only solution I can think of is to have a separate 32-bit Python instance that runs in its own process. Then, with some form of IPC, you can call the 32-bit Python instance from your 64-bit instance. | 1 | 1 | 0 | I've been trying for ages to access a 32-bit compiled C lib on 64-bit Ubuntu. I'm using Python and the CDLL lib in order to make it happen, but with no success so far. I can easily open the same 32-bit lib on a 32-bit OS, and the 64-bit version on a 64-bit OS.
So, what I'm asking is if anyone knows a way to encapsulate/sandbox/wrap the lib so I can achieve my goal. That way I can use a single 64bit server to access the 32 and 64bit versions of those libs.
If someone knows another python lib that can make the trick please let me know. | Accessing a 32bit with python on a Debian 64bit with CDLL lib (or other) | 0 | 0 | 0 | 285 |
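The separate-process idea from the answer can be sketched like this. The inline child program is a stand-in for a helper script that a real 32-bit interpreter would run (loading the 32-bit lib via ctypes); sys.executable is used here only so the sketch is runnable:

```python
import json
import subprocess
import sys

# Stand-in for a helper script that a 32-bit interpreter would run; in
# reality it would load the library with ctypes.CDLL("lib32.so") and call it.
HELPER = ("import sys, json; "
          "arg = json.loads(sys.argv[1]); "
          "print(json.dumps({'result': arg['x'] * 2}))")

def call_lib32(x, interpreter=sys.executable):
    """Run the helper in a separate process and decode its JSON reply.
    In the real setup `interpreter` would be the 32-bit Python binary."""
    out = subprocess.check_output([interpreter, "-c", HELPER,
                                   json.dumps({"x": x})])
    return json.loads(out)["result"]

print(call_lib32(21))  # -> 42
```

The same shape works over ssh if the 32-bit helper has to live on another machine.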
34,223,068 | 2015-12-11T11:59:00.000 | 5 | 1 | 0 | 0 | python,vim,syntax-highlighting,vim-syntax-highlighting | 34,225,084 | 2 | true | 0 | 0 | Overly long lines can dramatically slow down Vim's syntax highlighting; usually, this is a fault of the syntax script, and you should inform its author (found in the $VIMRUNTIME/syntax/python.vim script header).
Vim 7.4 includes the :syntime command, which greatly helps with troubleshooting and finding the problematic regular expression.
It might help to :set synmaxcol=... to a value lower than the default 3000. | 1 | 2 | 0 | I have this python script, in one line I have a 1000 character long string. I have syntax highlighting on, vim hangs on this line. If I change the file extension to c++ than it works. I suspect problems with syntax highlighting plugin is is causing the hang.
Can this be fixed somehow? I'm using vim version 7.4.52 | Vim python syntax highlighting hangs for very long lines | 1.2 | 0 | 0 | 995 |
34,224,258 | 2015-12-11T13:04:00.000 | 0 | 0 | 1 | 0 | python,go,interpreter,multiple-instances | 34,311,874 | 1 | true | 0 | 0 | As Chris Townsend and pie-o-pah said,
Trying to implement a sub-interpreter is much more complicated.
Creating a language interface makes more sense in my case.
In this situation os/exec is the way to go.
I can even ssh to my Python module remotely if my main server is overloaded. | 1 | 0 | 0 | Currently I'm doing a project in Golang which needs to call Python.
In Python it's a library with a singleton-like instance.
But I can't modify that library, because it's too complicated (for me).
The most I can do is wrap it with my own Python script.
So I'm looking for a way to create multiple Python interpreters in Go,
or maybe multiple sub-interpreters in Python,
which would mean I can create many Python instances (of the same application).
Any ways I can do this? | How to create multiple Python's instances within a Go application | 1.2 | 0 | 0 | 126 |
34,225,530 | 2015-12-11T14:13:00.000 | 1 | 0 | 1 | 0 | python,docker,virtualenv | 34,226,943 | 2 | false | 0 | 0 | When using Docker, it makes sense to adopt the microservices concept. With microservices, each microservice is aligned with a specific business function and only defines the operations necessary to that business function. This means that each application runs in one or more separate Docker images with its specific dependencies (Python modules). This makes the use of virtualenv unnecessary. | 2 | 5 | 0 | Almost all Python tutorials suggest that virtualenv be set up as step one to maintain consistency. In working with Docker containers, why should or shouldn't this standard be maintained? | Is there a good reason for setting up virtualenv for python in Docker containers? | 0.099668 | 0 | 0 | 150 |
34,225,530 | 2015-12-11T14:13:00.000 | 4 | 0 | 1 | 0 | python,docker,virtualenv | 34,225,592 | 2 | true | 0 | 0 | If you intend to run only one version on the container and it is the container's system version, there's no technical reason to use virtualenv in a container. But there could still be non-technical reasons. For example, if your team is used to finding python libraries in ~/some-env, or understands the virtualenv structure better than the container's libs, then you may want to keep using virtualenv anyway.
On the "cons" side, virtualenv on top of an existing system Python may make your images slightly larger, too. | 2 | 5 | 0 | Almost all Python tutorials suggest that virtualenv be set up as step one to maintain consistency. In working with Docker containers, why should or shouldn't this standard be maintained? | Is there a good reason for setting up virtualenv for python in Docker containers? | 1.2 | 0 | 0 | 150 |
34,228,132 | 2015-12-11T16:25:00.000 | 0 | 0 | 1 | 0 | python,jupyter-notebook | 41,070,427 | 1 | false | 0 | 0 | If you opened the notebook in IPython, it has an option to download the file as a PDF. | 1 | 4 | 0 | I want to download my Jupyter notebook as a PDF. At first, I was reminded that I had to install something, so I went to the download page to install pandocs-1.15.2-windows. However, when I tried to download it again, another error message showed up: "nbconvert failed: 'ascii' codec can't decode byte 0xb4 in position 1: ordinal not in range(128)". How can I fix it? Did I download the wrong package? | jupyter notebook downloaded as pdf | 0 | 0 | 0 | 751 |
34,228,646 | 2015-12-11T16:55:00.000 | 1 | 1 | 1 | 0 | python,c++ | 34,230,076 | 2 | false | 0 | 0 | You can use standard python api , Cython or Boost.python. It is much easier to work with boost.python. You have to add very little code to your c++ library and compile it as a module library which then you can call from python.
With Boost you can easily add your classes and their methods. Additionally, you can introduce a vector of an object, which makes it easier to pass data to Python and back to your library.
I recommend Boost.Python, but you can look for yourself. There are a lot of tutorials on both Cython and Boost.Python if you google it. | 1 | 1 | 0 | I am studying machine learning now and I want to build a recommender system. First, I would like to make a top-N recommendation using two existing methods, and they are both written in C++. As the files are huge and complex, I want to call them from Python instead of adding code to them. Which tool is suitable for my case? Thank you in advance! | How can I call C++ code using Python? | 0.099668 | 0 | 0 | 200 |
34,233,767 | 2015-12-11T22:44:00.000 | 2 | 0 | 0 | 0 | python,tensorflow,tensorboard | 35,853,184 | 1 | false | 0 | 0 | The best solution from a TensorBoard perspective is to have a root directory for your experiment, e.g. ~/tensorflow/mnist_experiment, and then to create a new subdirectory for each run, e.g. ~/tensorflow/mnist_experiment/run1/...
Then run TensorBoard against the root directory, and every time you invoke your code, setup the SummaryWriter pointing to a new subdirectory. TensorBoard will then interpret all of the event files correctly, and it will also make it easy to compare between your different runs. | 1 | 3 | 1 | I am using Tensorflow to build up the Neural Network, and I would like to show training results on the Tensorboard. So far everything works fine. But I have a question on "event file" for the Tensorboard. I notice that every time when I run my python script, it generates different event files. And when I run my local server using
$ python /usr/local/lib/python2.7/dist-packages/tensorflow/tensorboard/tensorboard.py --logdir=/home/project/tmp/, it shows up error if there are more than 1 event files. It seems to be annoying since whenever I run my local server, I have to delete all previous event files to make it work. So I'm wondering if there is any solution to prevent this issue. I would really appreciate it. | Event files in Google Tensorflow | 0.379949 | 0 | 0 | 1,651 |
34,235,225 | 2015-12-12T01:38:00.000 | 12 | 0 | 0 | 0 | python,tensorflow | 44,353,399 | 4 | false | 0 | 0 | Yes, tf.Graph are build in an append-only fashion as @mrry puts it.
But there's workaround:
Conceptually you can modify an existing graph by cloning it and perform the modifications needed along the way. As of r1.1, Tensorflow provides a module named tf.contrib.graph_editor which implements the above idea as a set of convinient functions. | 1 | 24 | 1 | TensorFlow graph is usually built gradually from inputs to outputs, and then executed. Looking at the Python code, the inputs lists of operations are immutable which suggests that the inputs should not be modified. Does that mean that there is no way to update/modify an existing graph? | Is it possible to modify an existing TensorFlow computation graph? | 1 | 0 | 0 | 13,045 |
34,237,068 | 2015-12-12T06:35:00.000 | 0 | 0 | 0 | 1 | python,vim,ultisnips | 46,685,085 | 1 | false | 0 | 0 | Problem solved. I downgraded python from 2.7.11 to 2.7.9 and it worked well. – Cicero | 1 | 1 | 0 | Environments
OS:Windos 7, x64 bit
Vim: gvim74 from vim.org
Python: Python 2.7.11
UltiSnips: just downloaded from github
Gvim worked perfectly for me with SnipMate for a long time, and lately I wanted to use UltiSnips instead. So I newly installed Python on my PC, installed UltiSnips with Pathogen, and just deleted SnipMate, hoping it would work well, but it doesn't.
The problem is: when I open gvim, it exits as soon as I press "i". Then I restored gvim to before I installed UltiSnips and simply executed a command in gvim like "python print "Hello"" or "python 1" or so. It does nothing but causes gvim to exit instantly, just as if I had executed the "q!" command.
OK, it is probably a problem that happens when gvim encounters Python, and has nothing to do with UltiSnips. I hope for suggestions or methods to solve this. Thanks, everyone. | UltiSnips not work: PYTHON caused GVim to EXIT | 0 | 0 | 0 | 120 |
34,241,400 | 2015-12-12T15:14:00.000 | 0 | 0 | 0 | 1 | python,terminal,pickle | 34,241,511 | 2 | false | 0 | 0 | Check out atexit().
Add a function and decorate it with @atexit.register. | 1 | 0 | 0 | I have just made a script that I want to turn off from the terminal, but instead of just ending it I want it to pickle a file. Is there a correct way to do this? | How can I turn my python script off from the terminal? | 0 | 0 | 0 | 457 |
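A minimal sketch of the atexit approach the answer suggests (the state dict and file location are made up for illustration):

```python
import atexit
import os
import pickle
import tempfile

STATE = {"progress": 0}                                   # made-up state to persist
STATE_FILE = os.path.join(tempfile.gettempdir(), "state.pkl")

def save_state(path=STATE_FILE):
    """Pickle the current state. Once registered, this runs on normal
    interpreter exit (including an unhandled Ctrl+C, but not SIGKILL)."""
    with open(path, "wb") as f:
        pickle.dump(STATE, f)

atexit.register(save_state)  # or write @atexit.register above the def
```

So killing the script from the terminal with Ctrl+C still pickles the state on the way out.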
34,242,017 | 2015-12-12T16:11:00.000 | 1 | 0 | 0 | 0 | python,mysql,database,lamp,mysql-python | 34,242,082 | 1 | false | 0 | 0 | "are never on the same internet network."
Let me clarify the question: the problem is that they are never on the same internet network. First, you need to fix the network issue: add routing between the two sides that you want to communicate. This has no relation to Python or LAMP.
Assuming your DB is MySQL, if you can make that DB accessible from outside servers, you can just talk to it directly from other servers.
As another solution, I recommend you use an API that covers all requests on top of the DB; then you can talk to that API to handle the data. | 1 | 0 | 0 | I have Python code which needs to retrieve and store data to/from a database on a LAMP server. The LAMP server and the device running the Python code are never on the same internet network. The devices running the Python code can be Linux, Windows or Mac systems. Any idea how I could implement this? | How to fetch or store data into a database on a LAMP server from devices over the internet? | -0.197375 | 0 | 0 | 155 |
34,242,949 | 2015-12-12T17:39:00.000 | 2 | 0 | 1 | 0 | python,django,pip,virtualenv | 34,242,990 | 3 | false | 1 | 0 | Can you try this command:
sudo pip install django==1.8 | 1 | 0 | 0 | I have been trying to install django 1.8 on virtualenv, i performed the following steps:
changed to my project directory
changed to scripts folder of virtual environment which I created
activated the virtual env
typed command: pip install django == 1.8
Nothing worked
I also tried pip install django and easy_install django; however, none worked.
Could you please help me out ? | Unable to install a specific version of django on virtualenv | 0.132549 | 0 | 0 | 487 |
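A likely culprit in step 4 above: the spaces around == make the shell split the requirement into three separate words, so pip sees three package names (django, ==, 1.8) instead of one pinned requirement. A quick demonstration of the splitting:

```shell
# Wrong: three separate words would reach pip.
set -- django == 1.8
echo "$# argument(s): $*"

# Right: one pinned requirement string, as in: pip install django==1.8
set -- django==1.8
echo "$# argument(s): $*"
```

The fix is simply to write the requirement with no spaces: pip install django==1.8.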
34,243,376 | 2015-12-12T18:24:00.000 | -1 | 0 | 0 | 0 | python,django,django-models,django-forms,django-uploads | 34,243,402 | 2 | false | 1 | 0 | Have a quick look at the source code. No, it doesn't provide support for that. | 1 | 0 | 0 | As the title says, is there a way to only allow images to be uploaded when using django-multiupload? At the moment my users can upload any file but I want to limit them to only images.
Any help/advice would be much appreciated :-) | Is there a way to only allow for images to be uploaded when using django-multiupload? | -0.099668 | 0 | 0 | 389 |
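One possible approach, sketched without the django-multiupload specifics (the ALLOWED set and function name are my own; in a Django form you would run a check like this inside the field or form clean step):

```python
import mimetypes

ALLOWED = {"image/jpeg", "image/png", "image/gif"}

def validate_image_name(filename):
    # Guess the MIME type from the file name and reject non-images.
    ctype, _encoding = mimetypes.guess_type(filename)
    if ctype not in ALLOWED:
        raise ValueError("%s: only image uploads are allowed" % filename)
    return ctype
```

Note that name-based checks are easy to spoof; for stronger validation, inspect the file contents (e.g. with Pillow) rather than the extension.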
34,247,918 | 2015-12-13T04:14:00.000 | 1 | 0 | 1 | 1 | python,unzip,win32com,winzip | 34,455,025 | 1 | false | 0 | 0 | Forget win32com. Instead,
create a destination folder
loop over zipx archives; for each one:
create a temp folder using Python's tempfile module
use the subprocess module to run your unzip utility (that handles zipx format) on the zipx archive, with command line option to extract to the temp folder created
use the shutil module to copy each unzipped file in that folder to the common destination folder created in the first step, if the file meets the condition. For file size, use Path.stat().st_size or os.path.getsize().
erase temp folder
So each archive gets unzipped to a different temp folder, but all extracted files get moved to one common folder. Alternately, you could use the same temp folder for all archives, but empty the folder at end of each iteration, and delete the temp folder at end of script.
create a destination folder
create a temp archive extraction folder using Python's tempfile module
loop over zipx archives; for each one:
use the subprocess module to run your unzip utility (that handles zipx format) on the zipx archive, with command line option to extract to the temp folder created
use the shutil module to copy each unzipped file in that folder to the common destination folder created in the first step, if the file meets the condition. For file size, use Path.stat().st_size or os.path.getsize().
erase contents of temp folder
erase temp folder | 1 | 0 | 0 | I need to unzip numerous zipx files into a directory while checking on the fly whether the unzipped files comply with a condition. The condition is "if there is a file with the same name, overwrite it only if the unzipped file is larger".
I wanted to control winzip with win32com but I couldn't find Object.Name with COM browser (win32com\client\combrowse.py). Also would be nice to find methods I could use with this winzip object.
Could anyone help with the approach I chose, or advise an easier option to solve the described problem?
Thanks. | unzipping zipx by controlling winzip with python | 0.197375 | 0 | 0 | 777
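The answer's procedure, sketched in Python. The wzunzip command is a placeholder of my own; substitute whatever command-line tool on your machine can extract .zipx archives.

```python
import os
import shutil
import subprocess
import tempfile

def copy_if_larger(src, dest_dir):
    # Overwrite a same-named file only when the new one is larger.
    dest = os.path.join(dest_dir, os.path.basename(src))
    if not os.path.exists(dest) or os.path.getsize(src) > os.path.getsize(dest):
        shutil.copy2(src, dest)

def extract_all(archives, dest_dir, unzip_cmd=("wzunzip",)):
    # unzip_cmd is a placeholder for your zipx-capable extractor.
    os.makedirs(dest_dir, exist_ok=True)
    for archive in archives:
        tmp = tempfile.mkdtemp()
        try:
            subprocess.check_call(list(unzip_cmd) + [archive, tmp])
            for name in os.listdir(tmp):
                copy_if_larger(os.path.join(tmp, name), dest_dir)
        finally:
            shutil.rmtree(tmp)
```

Each archive is extracted to its own temporary folder, and only files that pass the size test reach the destination folder.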
34,247,930 | 2015-12-13T04:16:00.000 | 6 | 0 | 1 | 0 | python-3.x,windows-7-x64 | 41,469,993 | 10 | false | 0 | 0 | Run it at the cmd window, not inside the Python window. It took me forever to realize my mistake. | 7 | 18 | 0 | I have installed python 3.5 on my Windows 7 machine. When I installed it, I marked the check box to install pip. After the installation, I wanted to check whether pip was working, so I typed pip on the command line and hit enter, but it did not respond. The cursor blinks but it does not display anything.
Please help.
Regards. | pip not working in python 3.5 on Windows 7 | 1 | 0 | 0 | 86,592 |
34,247,930 | 2015-12-13T04:16:00.000 | 8 | 0 | 1 | 0 | python-3.x,windows-7-x64 | 41,629,695 | 10 | false | 0 | 0 | go to the Windows cmd prompt
go to the python directory
then type python -m pip install package-name | 7 | 18 | 0 | I have installed python 3.5 on my Windows 7 machine. When I installed it, I marked the check box to install pip. After the installation, I wanted to check whether pip was working, so I typed pip on the command line and hit enter, but it did not respond. The cursor blinks but it does not display anything.
Please help.
Regards. | pip not working in python 3.5 on Windows 7 | 1 | 0 | 0 | 86,592 |
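The point of the answer above is that running pip through the interpreter itself (python -m pip) works even when the Scripts directory is not on PATH. A quick check (shown here with python3; on Windows, py -m pip works the same way):

```shell
python3 -m pip --version
# Install packages the same way, e.g.:
#   python3 -m pip install <package-name>
```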
34,247,930 | 2015-12-13T04:16:00.000 | 1 | 0 | 1 | 0 | python-3.x,windows-7-x64 | 40,418,960 | 10 | false | 0 | 0 | I had the same problem with Version 3.5.2.
Have you tried py.exe -m pip install package-name? This worked for me. | 7 | 18 | 0 | I have installed python 3.5 on my Windows 7 machine. When I installed it, I marked the check box to install pip. After the installation, I wanted to check whether pip was working, so I typed pip on the command line and hit enter, but it did not respond. The cursor blinks but it does not display anything.
Please help.
Regards. | pip not working in python 3.5 on Windows 7 | 0.019997 | 0 | 0 | 86,592 |
34,247,930 | 2015-12-13T04:16:00.000 | -1 | 0 | 1 | 0 | python-3.x,windows-7-x64 | 46,616,075 | 10 | false | 0 | 0 | For those with several versions of Python 3 installed on Windows: I solved this issue by executing the pip install command directly from my python35 Scripts folder in cmd... for some reason pip3 pointed to Python 3.4 even though Python 3.5 was set first in the environment variables. | 7 | 18 | 0 | I have installed python 3.5 on my Windows 7 machine. When I installed it, I marked the check box to install pip. After the installation, I wanted to check whether pip was working, so I typed pip on the command line and hit enter, but it did not respond. The cursor blinks but it does not display anything.
Please help.
Regards. | pip not working in python 3.5 on Windows 7 | -0.019997 | 0 | 0 | 86,592 |
34,247,930 | 2015-12-13T04:16:00.000 | 2 | 0 | 1 | 0 | python-3.x,windows-7-x64 | 60,048,577 | 10 | false | 0 | 0 | I was having the same problem on Windows 10. This is how I fixed it:
Click the search icon and type System Environment
In System Properties click on Environment Variables
In System Variables tab click New
Enter PYTHON3_SCRIPTS for the variable name and C:\Users\YOUR USER NAME\AppData\Local\Programs\Python\Python38-32\Scripts for the variable value. Don't forget to change (YOUR USER NAME) in the path to your user name, and to change the Python version; or just go to C:\Users\YOUR USER NAME\AppData\Local\Programs\Python to check it.
Click OK
Click NEW again!
Enter PYTHON3_HOME for the variable name and C:\Users\YOUR USER NAME\AppData\Local\Programs\Python\Python38-32\ for the variable value. Don't forget to change (YOUR USER NAME) in the path to your user name, and to change the Python version; or just go to C:\Users\YOUR USER NAME\AppData\Local\Programs\Python to check it.
Click OK
Find Path in the same tab, select it, and click Edit
Click New and type %PYTHON3_SCRIPTS% Then click OK
Now, everything is set. Restart your terminal and pip should be working now. | 7 | 18 | 0 | I have installed python 3.5 on my Windows 7 machine. When I installed it, I marked the check box to install pip. After the installation, I wanted to check whether pip was working, so I typed pip on the command line and hit enter, but it did not respond. The cursor blinks but it does not display anything.
Please help.
Regards. | pip not working in python 3.5 on Windows 7 | 0.039979 | 0 | 0 | 86,592 |
34,247,930 | 2015-12-13T04:16:00.000 | 0 | 0 | 1 | 0 | python-3.x,windows-7-x64 | 61,621,951 | 10 | false | 0 | 0 | If you are working in PyCharm, an easy way is to go to File > Settings > Project Interpreter. Click on the + icon (on the right side), then search for and install the required library. | 7 | 18 | 0 | I have installed python 3.5 on my Windows 7 machine. When I installed it, I marked the check box to install pip. After the installation, I wanted to check whether pip was working, so I typed pip on the command line and hit enter, but it did not respond. The cursor blinks but it does not display anything.
Please help.
Regards. | pip not working in python 3.5 on Windows 7 | 0 | 0 | 0 | 86,592 |
34,247,930 | 2015-12-13T04:16:00.000 | -1 | 0 | 1 | 0 | python-3.x,windows-7-x64 | 51,876,900 | 10 | false | 0 | 0 | I had the issue, and answered this same question some time ago. Open a cmd (Windows) or terminal (Linux) with administrative privileges. Then python -m pip install --upgrade pip
then
python -m pip install <package-name> | 7 | 18 | 0 | I have installed python 3.5 on my Windows 7 machine. When I installed it, I marked the check box to install pip. After the installation, I wanted to check whether pip was working, so I typed pip on the command line and hit enter, but it did not respond. The cursor blinks but it does not display anything.
Please help.
Regards. | pip not working in python 3.5 on Windows 7 | -0.019997 | 0 | 0 | 86,592 |
34,251,045 | 2015-12-13T12:25:00.000 | 2 | 0 | 1 | 0 | python,coding-style | 34,251,079 | 2 | false | 0 | 0 | Normally the semicolon is only used in Python if you want to have multiple commands in one line. So you should omit the semicolons where they are not necessary. | 1 | 2 | 0 | So far I have coded mainly in Java. So I am more used to the fact that a statement ends with a semicolon. If I continue the same habit here in Python (which by the way helps me to keep a nice continuity), how will that be perceived as far as programming practices are concerned? | Putting semicolons at the end of the statements. Is this a good/bad programming practice in Python? | 0.197375 | 0 | 0 | 1,092 |
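For illustration of the point made in the answer above: both forms below are legal Python, but PEP 8 prefers one statement per line with no trailing semicolons.

```python
# Discouraged: two statements joined by a semicolon.
x = 1; y = 2

# Preferred: one statement per line, no semicolons.
a = 1
b = 2
```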
34,251,551 | 2015-12-13T13:24:00.000 | 0 | 1 | 1 | 0 | python,html,python-2.7 | 34,251,582 | 3 | false | 1 | 0 | It is not possible to import Python code in HTML the way you import JavaScript code. JavaScript is executed by the client's browser, and browsers don't have an included Python interpreter. You have to do it with JavaScript if you want to do it on the client side. | 1 | 4 | 0 | I'm working on a School Project. I've done a lot of Python scripting before and I was wondering if I could import Python in HTML like JavaScript? How should I do it? For example, importing time: I want to show a time clock in my webpage from a Python script. | Running Python Script in HTML | 0 | 0 | 0 | 281
34,251,778 | 2015-12-13T13:47:00.000 | 1 | 0 | 1 | 0 | python,virtualenv,anaconda,canopy,conda | 34,251,914 | 1 | false | 0 | 0 | conda's environments are isolated from each other and the system. Normally you should not try to access packages from a different environment. This defeats the whole purpose of virtual environments.
That said, conda's environments are by design light weight. If you have two environments using the same interpreter and the same version of a package, only one physical package will exist on your hard drive. The individual environments merely link to the physical package. | 1 | 0 | 0 | I am familiar with creating virtualenv with Enthought Canopy's venv command. One of the features that i like about it is the --system-site-packages option which allows me to link all my system libraries to the new environment. Hence allowing me to create "light" environments. This saves a lot of disk space when creating multiple environments. I am trying to find a similar option with conda but it seem to me that it doesn't provide such an option. Is there anyway to create canopy like virtual environments with anaconda? | Enthought style virtualenv in Anaconda | 0.197375 | 0 | 0 | 207 |
34,264,314 | 2015-12-14T10:09:00.000 | 0 | 0 | 1 | 0 | python,algorithm,list,search | 34,266,212 | 2 | false | 0 | 0 | If you can afford the storage and the preprocessing time, you can insert all triples, quadruples and quintuples found from the lists in three distinct dictionaries. The dictionary entries will store the sets of lists where these tuples occur, and where in the lists.
Then a query will be performed in time just proportional to the number of matches. | 1 | 0 | 0 | I'm looking for an efficient python implementation of the following:
I have a large set of integer lists between 4 and >100 integers in length but mostly around a length of 4-10. There could be up to a million in total depending on the dataset. They are order specific. The integers themselves will range from 0 to <=99999.
I will have input search lists of between 3 and 5 integers in length, again order specific. I need to find all examples from the larger set of integer lists, where the list contains an input search list.
e.g.: example large set of integer lists [1,40, 98, 32, 778], [7, 9347, 21, 98345, 632, 444], [87567, 4563, 97, 40, 87], [1, 40, 98, 32, 778], [4563, 97, 40, 87, 76], [935, 57342, 86, 213, 89674, 4327, 9641, 13283], [4563, 40, 87, 76, 97]
Example query [4563, 97, 40].
Result [87567, 4563, 97, 40, 87], [4563, 97, 40, 87, 76] but NOT [4563, 40, 87, 76, 97].
I can store the set of integer lists in a dict and search the keys for the query integer list but this is slow. I can write the integer lists to flat file and use grep to search them which is fast but a nasty hack. Ultimately I have further code I need to run on the results (matched lists) so I'd prefer to stay in my current python workflow.
I am aware of search algorithms like aho corasick but I'm working with integers not text and I am doing the reverse (searching whole strings for a substring). | fast search for integer list in much larger set of longer integer lists | 0 | 0 | 0 | 298 |
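A minimal sketch of the preprocessing the answer describes, here collapsing the three dictionaries into one keyed by tuples of length 3-5, and using the example data from the question:

```python
from collections import defaultdict

def build_index(lists, lengths=(3, 4, 5)):
    # Map every contiguous run of 3-5 integers to the indices of the
    # lists that contain it.
    index = defaultdict(set)
    for i, seq in enumerate(lists):
        for n in lengths:
            for j in range(len(seq) - n + 1):
                index[tuple(seq[j:j + n])].add(i)
    return index

def query(index, lists, needle):
    # Lookup time is proportional to the number of matches.
    return [lists[i] for i in sorted(index.get(tuple(needle), ()))]
```

The index is built once; each query is then a single dict lookup, which preserves order sensitivity because the keys are ordered tuples.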
34,264,710 | 2015-12-14T10:28:00.000 | 3 | 0 | 1 | 0 | python | 56,504,927 | 6 | false | 0 | 0 | float('inf') can be used in comparisons, thus making the code simpler and clearer. For instance, in merge sort, a float('inf') can be added to the end of subarrays as a sentinel value. Don't confuse this with the usage of infinity in maths; after all, programming is not all about maths. | 1 | 101 | 0 | Just wondering over here, what is the point of having a variable store an infinite value in a program? Is there any actual use and is there any case where it would be preferable to use foo = float('inf'), or is it just a little snippet they stuck in for the sake of putting it in? | What is the point of float('inf') in Python? | 0.099668 | 0 | 0 | 163,152
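The merge-sort sentinel mentioned in the answer, sketched: padding each sublist with float('inf') means the comparison never runs past the end of a real sublist, so no bounds checks are needed.

```python
def merge(left, right):
    # float('inf') compares greater than every real number, so it acts
    # as an end-of-list sentinel for both sublists.
    left = left + [float('inf')]
    right = right + [float('inf')]
    out = []
    i = j = 0
    for _ in range(len(left) + len(right) - 2):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    return out
```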
34,266,083 | 2015-12-14T11:37:00.000 | 1 | 1 | 0 | 0 | php,python,session,flask | 34,272,457 | 1 | true | 1 | 0 | I'm not sure this is the answer you are looking for, but I would not try to have the Flask API access session data from PHP. Sessions and API do not go well together, a well designed API does not need sessions, it is instead 100% stateless.
What I'm going to propose assumes both PHP and Flask have access to the user database. When the user logs in to the PHP app, generate an API token for the user. This can be a random sequence of characters, a uuid, whatever you want, as long as it is unique. Write the token to the user database, along with an expiration date if you like. The login process should pass that token back to the client (use https://, of course).
When the client needs to make an API call, it has to send that token in every request. For example, you can include it in the Authorization header, or you can use a custom header as well. The Flask API gets the token and searches the user database for it. If it does not find the token, it returns 401. If the token is found, it now knows who the user is, without having to share sessions with PHP. For the API endpoints you will be looking up the user from the token for every request.
Hope this helps! | 1 | 2 | 0 | As the title says, I am trying to run Flask alongside a PHP app.
Both of them are running under Apache 2.4 on Windows platform. For Flask I’m using wsgi_module.
The Flask app is actually an API. The PHP app controls user login and therefore user access to the API. Keep in mind that I cannot drop the use of the PHP app because it controls much more than the login functionality [invoicing, access logs etc].
The flow is:
User logs in via PHP app
PHP stores user data to a database [user id and a flag indicating if user is logged in]
User makes a request to Flask API
Flask checks if user data are in database: If not, redirects to PHP login page, otherwise let user use the Flask API.
I know that between steps 2 and 3, PHP has to share a session variable/cookie [user id] with Flask in order for the Flask app to check if the user is logged in.
Whatever I try fails. I cannot pass PHP session variables to Flask.
I know that I can't pass PHP variables to Flask directly, but I'm not sure about that.
Has anyone tried something similar?
What kind of user login strategy should I implement to the above setup? | Run Flask alongside PHP [sharing session] | 1.2 | 1 | 0 | 1,770 |
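A framework-agnostic sketch of the token flow the answer describes: the dict below stands in for the shared user database, and in the Flask view you would run authenticate() against request.headers at the start of every API endpoint.

```python
import secrets

TOKENS = {}  # token -> user_id; in production this lives in the shared DB

def issue_token(user_id):
    # Called by the login flow (the PHP side, in this setup).
    token = secrets.token_hex(16)
    TOKENS[token] = user_id
    return token

def authenticate(headers):
    # Called at the start of every API request; returns user_id or None.
    auth = headers.get("Authorization", "")
    if auth.startswith("Bearer "):
        return TOKENS.get(auth[len("Bearer "):])
    return None  # the view would then respond with 401
```

Because both sides only share the database row holding the token, no PHP session data ever needs to reach Flask.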
34,266,159 | 2015-12-14T11:40:00.000 | 5 | 0 | 1 | 1 | python,linux,debian,pip | 61,732,256 | 10 | false | 0 | 0 | Here's how,
pip3 show numpy | grep "Location:"
this will return path/to/all/packages
du -h path/to/all/packages
last line will contain size of all packages in MB
Note: You may put any package name in place of numpy | 1 | 46 | 0 | I'm not sure this is possible. Google does not seem to have any answers.
Running Linux Debian, can I list all pip packages along with the size (amount of disk space used) of each that's installed?
i.e. List all pip packages with size on disk? | How to see pip package sizes installed? | 0.099668 | 0 | 0 | 34,529 |
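For a per-package breakdown rather than a single total, one option is to sum file sizes under each top-level directory of site-packages (note: a package's directory name does not always match its pip name, so treat this as an approximation):

```python
import os
import site

def dir_size(path):
    # Total bytes of all regular files under `path`.
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            fp = os.path.join(root, name)
            if os.path.isfile(fp):
                total += os.path.getsize(fp)
    return total

def package_sizes():
    sp = site.getsitepackages()[0]
    entries = (os.path.join(sp, d) for d in os.listdir(sp))
    dirs = [p for p in entries if os.path.isdir(p)]
    return sorted(((os.path.basename(p), dir_size(p)) for p in dirs),
                  key=lambda kv: kv[1], reverse=True)
```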
34,267,461 | 2015-12-14T12:47:00.000 | -1 | 0 | 0 | 0 | python,django,google-chrome | 37,083,575 | 1 | false | 1 | 0 | This could be specific to the server you are using. First try clearing your cookies; if that does not work, it means you have a faulty server, and I don't know how to fix that other than getting another one. | 1 | 0 | 0 | I have a Django API which returns content fine on my localhost. But when I run it in production, it gives me a 324 error [empty response error].
I printed the API response, which is fine. But even before the API call completes, the Chrome browser throws the 324 error.
When I researched a bit, it looks like the socket connection is dead on the client side. I am not sure how to fix it. | 324 error::empty response, django | -0.197375 | 0 | 1 | 649
34,271,634 | 2015-12-14T16:11:00.000 | 0 | 0 | 1 | 0 | python,caching,ipython | 34,275,216 | 1 | false | 0 | 0 | (should be a comment) Set the PYTHONDONTWRITEBYTECODE environment variable, which should do the same thing. | 1 | 3 | 0 | When using Python from the command line, one can suppress the creation of the __pycache__ directory using the command line option -B. Unfortunately, I wasn't able to find how to suppress this output in IPython.
What I have to do when I change a cached module with iPython is the following:
Exit from the interpreter
Remove the __pycache__ folder manually
Enter the interpreter again
As you can imagine, this procedure is really annoying!
Is there any way to suppress the __pycache__ folder with IPython? | Avoid __pycache__ with iPython | 0 | 0 | 0 | 206
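Besides the environment variable mentioned in the answer, the same switch can be flipped from inside a running interpreter, e.g. at the top of an IPython session or in an IPython startup file:

```python
import sys

# In-process equivalent of the -B flag / PYTHONDONTWRITEBYTECODE:
# modules imported after this point will not write __pycache__ entries.
sys.dont_write_bytecode = True
```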
34,271,752 | 2015-12-14T16:18:00.000 | 1 | 0 | 0 | 0 | python,django,pydev | 34,288,921 | 1 | true | 1 | 0 | ok, I found a solution which always works :
uninstall and reinstall everything (Python, Django, PyDev) without using pip | 1 | 0 | 0 | I'm working with Eclipse and suddenly I could not use Django anymore.
I tried to make a new project, but an error occurred : "Django not found".
I checked the interpreters like it is said in the forums.
I have uninstalled and installed Django multiple times, changed the PYTHONPATH thousands of times, and reinstalled PyDev; nothing has fixed the issue.
I really don't understand the fact that I was just typing usual code, and suddenly nothing worked again.
Edit: In the Python console, I can import django.config but I cannot import django.config.admin, for example. | Error : "Django not found" | 1.2 | 0 | 0 | 405
34,275,096 | 2015-12-14T19:30:00.000 | 0 | 0 | 0 | 0 | python,math,matplotlib | 34,277,060 | 2 | false | 0 | 0 | The function is evaluated at every grid node, and compared to the iso-level. When there is a change of sign along a cell edge, a point is computed by linear interpolation between the two nodes. Points are joined in pairs by line segments. This is an acceptable approximation when the grid is dense enough. | 1 | 0 | 1 | I want to know how the contours levels are chosen in pyplot.contour. What I mean by this is, given a function f(x, y), the level curves are usually chosen by evaluating the points where f(x, y) = c, c=0,1,2,... etc. However if f(x, y) is an array A of nxn points, how do the level points get chosen? I don't mean how do the points get connected, just simply the points that correspond to A = c | How are the points in a level curve chosen in pyplot? | 0 | 0 | 0 | 976 |
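The linear interpolation step the answer describes, sketched for a single cell edge (p1, p2 are node coordinates and f1, f2 the sampled values there; a sign change of f - level across the edge guarantees f1 != f2):

```python
def edge_crossing(p1, f1, p2, f2, level):
    # Point where the iso-level crosses the edge, assuming f varies
    # linearly between the two grid nodes.
    t = (level - f1) / (f2 - f1)
    return (p1[0] + t * (p2[0] - p1[0]),
            p1[1] + t * (p2[1] - p1[1]))
```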
34,277,148 | 2015-12-14T21:43:00.000 | 0 | 0 | 0 | 0 | python,python-2.7,matplotlib | 35,905,473 | 1 | false | 0 | 0 | I have the exact same problem. I'm not sure what the issue is but every once in a while, when trying to import matplotlib inside ipython I encounter this error and restarting the computer solves the issue. Maybe that would help in locating the issue? | 1 | 0 | 1 | I am using OSX El Capitan and trying to import matplotlib.pyplot
when I do that I get a recursive error and at the end it says "ValueError: insecure string pickle"
Here is the whole log:
--------------------------------------------------------------------------- ValueError Traceback (most recent call
last) in ()
4 stats = Statistics("HumanData.xlsx")
5
----> 6 get_ipython().magic(u'matplotlib inline')
7
8 #matplotlib.pyplot.hist(stats.getActionData("Human", "Pacman", "Left"))
/Users/AhmedKhalifa/anaconda/lib/python2.7/site-packages/IPython/core/interactiveshell.pyc
in magic(self, arg_s) 2334 magic_name, _, magic_arg_s =
arg_s.partition(' ') 2335 magic_name =
magic_name.lstrip(prefilter.ESC_MAGIC)
-> 2336 return self.run_line_magic(magic_name, magic_arg_s) 2337 2338
-------------------------------------------------------------------------
/Users/AhmedKhalifa/anaconda/lib/python2.7/site-packages/IPython/core/interactiveshell.pyc
in run_line_magic(self, magic_name, line) 2255
kwargs['local_ns'] = sys._getframe(stack_depth).f_locals 2256
with self.builtin_trap:
-> 2257 result = fn(*args,**kwargs) 2258 return result 2259
/Users/AhmedKhalifa/anaconda/lib/python2.7/site-packages/IPython/core/magics/pylab.pyc
in matplotlib(self, line)
/Users/AhmedKhalifa/anaconda/lib/python2.7/site-packages/IPython/core/magic.pyc
in (f, *a, **k)
191 # but it's overkill for just that one bit of state.
192 def magic_deco(arg):
--> 193 call = lambda f, *a, **k: f(*a, **k)
194
195 if callable(arg):
/Users/AhmedKhalifa/anaconda/lib/python2.7/site-packages/IPython/core/magics/pylab.pyc
in matplotlib(self, line)
98 print("Available matplotlib backends: %s" % backends_list)
99 else:
--> 100 gui, backend = self.shell.enable_matplotlib(args.gui)
101 self._show_matplotlib_backend(args.gui, backend)
102
/Users/AhmedKhalifa/anaconda/lib/python2.7/site-packages/IPython/core/interactiveshell.pyc
in enable_matplotlib(self, gui) 3130 gui, backend =
pt.find_gui_and_backend(self.pylab_gui_select) 3131
-> 3132 pt.activate_matplotlib(backend) 3133 pt.configure_inline_support(self, backend) 3134
/Users/AhmedKhalifa/anaconda/lib/python2.7/site-packages/IPython/core/pylabtools.pyc
in activate_matplotlib(backend)
272 matplotlib.rcParams['backend'] = backend
273
--> 274 import matplotlib.pyplot
275 matplotlib.pyplot.switch_backend(backend)
276
/Users/AhmedKhalifa/anaconda/lib/python2.7/site-packages/matplotlib/pyplot.py
in ()
27 from cycler import cycler
28 import matplotlib
---> 29 import matplotlib.colorbar
30 from matplotlib import style
31 from matplotlib import _pylab_helpers, interactive
/Users/AhmedKhalifa/anaconda/lib/python2.7/site-packages/matplotlib/colorbar.py
in ()
32 import matplotlib.artist as martist
33 import matplotlib.cbook as cbook
---> 34 import matplotlib.collections as collections
35 import matplotlib.colors as colors
36 import matplotlib.contour as contour
/Users/AhmedKhalifa/anaconda/lib/python2.7/site-packages/matplotlib/collections.py
in ()
25 import matplotlib.artist as artist
26 from matplotlib.artist import allow_rasterization
---> 27 import matplotlib.backend_bases as backend_bases
28 import matplotlib.path as mpath
29 from matplotlib import _path
/Users/AhmedKhalifa/anaconda/lib/python2.7/site-packages/matplotlib/backend_bases.py
in ()
60
61 import matplotlib.tight_bbox as tight_bbox
---> 62 import matplotlib.textpath as textpath
63 from matplotlib.path import Path
64 from matplotlib.cbook import mplDeprecation, warn_deprecated
/Users/AhmedKhalifa/anaconda/lib/python2.7/site-packages/matplotlib/textpath.py
in ()
13 from matplotlib.path import Path
14 from matplotlib import rcParams
---> 15 import matplotlib.font_manager as font_manager
16 from matplotlib.ft2font import FT2Font, KERNING_DEFAULT, LOAD_NO_HINTING
17 from matplotlib.ft2font import LOAD_TARGET_LIGHT
/Users/AhmedKhalifa/anaconda/lib/python2.7/site-packages/matplotlib/font_manager.py
in () 1418 verbose.report("Using
fontManager instance from %s" % _fmcache) 1419 except:
-> 1420 _rebuild() 1421 else: 1422 _rebuild()
/Users/AhmedKhalifa/anaconda/lib/python2.7/site-packages/matplotlib/font_manager.py
in _rebuild() 1403 def _rebuild(): 1404 global
fontManager
-> 1405 fontManager = FontManager() 1406 if _fmcache: 1407 pickle_dump(fontManager, _fmcache)
/Users/AhmedKhalifa/anaconda/lib/python2.7/site-packages/matplotlib/font_manager.py
in init(self, size, weight) 1041 # Load TrueType fonts
and create font dictionary. 1042
-> 1043 self.ttffiles = findSystemFonts(paths) + findSystemFonts() 1044 self.defaultFamily = { 1045
'ttf': 'Bitstream Vera Sans',
/Users/AhmedKhalifa/anaconda/lib/python2.7/site-packages/matplotlib/font_manager.py
in findSystemFonts(fontpaths, fontext)
321 fontfiles[f] = 1
322
--> 323 for f in get_fontconfig_fonts(fontext):
324 fontfiles[f] = 1
325
/Users/AhmedKhalifa/anaconda/lib/python2.7/site-packages/matplotlib/font_manager.py
in get_fontconfig_fonts(fontext)
273 pipe = subprocess.Popen(['fc-list', '--format=%{file}\n'],
274 stdout=subprocess.PIPE,
--> 275 stderr=subprocess.PIPE)
276 output = pipe.communicate()[0]
277 except (OSError, IOError):
/Users/AhmedKhalifa/anaconda/lib/python2.7/subprocess.pyc in
init(self, args, bufsize, executable, stdin, stdout, stderr, preexec_fn, close_fds, shell, cwd, env, universal_newlines,
startupinfo, creationflags)
708 p2cread, p2cwrite,
709 c2pread, c2pwrite,
--> 710 errread, errwrite)
711 except Exception:
712 # Preserve original exception in case os.close raises.
/Users/AhmedKhalifa/anaconda/lib/python2.7/subprocess.pyc in
_execute_child(self, args, executable, preexec_fn, close_fds, cwd, env, universal_newlines, startupinfo, creationflags, shell, to_close,
p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite) 1332
if e.errno != errno.ECHILD: 1333 raise
-> 1334 child_exception = pickle.loads(data) 1335 raise child_exception 1336
/Users/AhmedKhalifa/anaconda/lib/python2.7/pickle.pyc in loads(str)
1386 def loads(str): 1387 file = StringIO(str)
-> 1388 return Unpickler(file).load() 1389 1390 # Doctest
/Users/AhmedKhalifa/anaconda/lib/python2.7/pickle.pyc in load(self)
862 while 1:
863 key = read(1)
--> 864 dispatchkey
865 except _Stop, stopinst:
866 return stopinst.value
/Users/AhmedKhalifa/anaconda/lib/python2.7/pickle.pyc in
load_string(self)
970 if rep.startswith(q):
971 if len(rep) < 2 or not rep.endswith(q):
--> 972 raise ValueError, "insecure string pickle"
973 rep = rep[len(q):-len(q)]
974 break
ValueError: insecure string pickle
Any help with that? | Matplotlib error in importing | 0 | 0 | 0 | 704 |
34,281,404 | 2015-12-15T04:57:00.000 | 2 | 0 | 1 | 0 | python,oop | 34,281,474 | 4 | true | 0 | 0 | If it's meant to be the "same" method visible in both B and C, it sounds like you need to add another class into the hierarchy below A, but above B and C. Let's call it E. You'll have D and E as subclasses of A, and B and C as subclasses of E. | 1 | 1 | 0 | For example, we have a base class called A, and three subclasses called B, C and D, all inheriting from A. If I want some method to appear only in B and C, but not in D, where should I put this method?
If I put it in A, D will have the method it doesn't need.
If I put it in B and C, I repeat myself. | Class inheritance: where to put a method? | 1.2 | 0 | 0 | 50 |
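The hierarchy the accepted answer proposes, in code (the method name is illustrative):

```python
class A:
    pass

class E(A):
    # Home for behaviour that B and C share but D must not inherit.
    def shared(self):
        return "shared"

class B(E):
    pass

class C(E):
    pass

class D(A):
    pass
```

B and C get the method through E without repetition, while D (a direct subclass of A) never sees it.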
34,284,335 | 2015-12-15T08:34:00.000 | 0 | 0 | 1 | 0 | python,multithreading,kill-process | 34,284,612 | 2 | false | 0 | 1 | First you can use subprocess.Popen() to spawn child processes, then later you can use Popen.terminate() to terminate them.
Note that you could also do everything in a single Python thread, without subprocesses, if you want to. It's perfectly possible to "multiplex" reading from multiple ports in a single event loop. | 1 | 4 | 0 | Kind all, I'm really new to python and I'm facing a task which I can't completely grasp.
I've created an interface with Tkinter which should accomplish a couple of apparently easy feats.
By clicking a "Start" button two threads/processes will be started (each calling multiple subfunctions) which mainly read data from a serial port (one port per process, of course) and write them to file.
The I/O actions are looped within a while loop with a very high counter to allow them to go onward almost indefinitely.
The "Stop" button should stop the acquisition and essentially it should:
Kill the read/write Thread
Close the file
Close the serial port
Unfortunately I still do not understand how to accomplish point 1, i.e.: how to create killable threads without killing the whole GUI. Is there any way of doing this?
Thank you all! | How to integrate killable processes/thread in Python GUI? | 0 | 0 | 0 | 1,596 |
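A minimal sketch of the subprocess route from the answer above; the worker command is a stand-in for the script that would read the serial port and write to file.

```python
import subprocess
import sys

def start_worker():
    # Stand-in worker: replace the -c snippet with your serial-port
    # reader/writer script.
    return subprocess.Popen(
        [sys.executable, "-c", "import time; time.sleep(3600)"]
    )

def stop_worker(proc, timeout=5):
    # What the GUI's "Stop" button would call: ask politely first,
    # then force-kill if the process ignores the request.
    proc.terminate()
    try:
        proc.wait(timeout=timeout)
    except subprocess.TimeoutExpired:
        proc.kill()
        proc.wait()
```

The worker can be started once per serial port; because each worker is a separate OS process, terminating it cannot take the Tkinter GUI down with it.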
34,284,385 | 2015-12-15T08:37:00.000 | 1 | 0 | 0 | 0 | python,twitter,nltk,document-classification | 34,300,675 | 2 | false | 0 | 0 | I wouldn't be so quick to write off Naive Bayes. It does fine in many domains where there are lots of weak clues (as in "overlapping words"), but no absolutes. It all depends on the features you pass it. I'm guessing you are blindly passing it the usual "bag of words" features, perhaps after filtering for stopwords. Well, if that's not working, try a little harder.
A good approach is to read a couple of hundred tweets and see how you know which category you are looking at. That'll tell you what kind of things you need to distill into features. But be sure to look at lots of data, and focus on the general patterns.
An example (but note that I haven't looked at your corpus): Time expressions may be good clues on whether you are pre- or post-sale, but they take some work to detect. Create some features "past expression", "future expression", etc. (in addition to bag-of-words features), and see if that helps. Of course you'll need to figure out how to detect them first, but you don't have to be perfect: You're after anything that can help the classifier make a better guess. "Past tense" would probably be a good feature to try, too. | 1 | 3 | 1 | I've this CSV file which has comments (tweets, comments). I want to classify them into 4 categories, viz.
Pre Sales
Post Sales
Purchased
Service query
Now the problems that I'm facing are these :
There is a huge number of overlapping words between each of the categories, hence using NaiveBayes is failing.
The size of tweets being only 160 chars, what is the best way to prevent words from one category falling into another?
What ways should I use to select features that can take care of both the 160-char tweets and the somewhat lengthier Facebook comments?
Please let me know of any reference/tutorial links to follow up on this, being a newbie in this field.
Thanks | Classifying sentences with overlapping words | 0.099668 | 0 | 0 | 622 |
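A toy version of the "past expression" / "future expression" features suggested in the answer; the word lists are illustrative only and would need tuning against the real corpus.

```python
import re

PAST = re.compile(r"\b(bought|purchased|received|returned|was|were)\b", re.I)
FUTURE = re.compile(r"\b(will|gonna|going to|planning|want to)\b", re.I)

def extra_features(text):
    # Added alongside the usual bag-of-words features before training.
    return {
        "past_expression": bool(PAST.search(text)),
        "future_expression": bool(FUTURE.search(text)),
    }
```

Even crude cues like these can help a Naive Bayes classifier separate pre-sale questions from post-sale complaints when the vocabularies overlap heavily.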
34,284,421 | 2015-12-15T08:39:00.000 | 2 | 1 | 0 | 1 | python,c++,c,language-binding | 34,284,538 | 1 | true | 0 | 1 | You can call between C, C++, Python, and a bunch of other languages without spawning a separate process or copying much of anything.
In Python basically everything is reference-counted, so if you want to use a Python object in C++ you can simply use the same reference count to manage its lifetime (e.g. to avoid copying it even if Python decides it doesn't need the object anymore). If you want the reverse, you may need to use a C++ std::shared_ptr or similar to hold your objects in C++, so that Python can also reference them.
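That reference count is observable from the Python side too; it is the same count a C or C++ extension would increment while it holds the object:

```python
import sys

obj = [1, 2, 3]
before = sys.getrefcount(obj)   # includes a temporary ref from the call itself

holder = obj                    # like a C++ holder keeping the object alive
after = sys.getrefcount(obj)

print(after - before)  # 1
```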
In some cases things are even simpler than this, such as if you have a pure function in C or C++ which takes some values from Python and returns a result with no side effects and no storing of the inputs. In such a case, you certainly do not need to copy anything, because you can read the Python values directly and the Python interpreter will not be running while your C or C++ code is running (because they are all in a single thread).
There is an extensive Python (and NumPy, by the way) C API for this, plus the excellent Boost.Python for C++ integration including smart pointers. | 1 | 2 | 0 | There are multiple questions about "how to" call C C++ code from Python. But I would like to understand what exactly happens when this is done and what are the performance concerns. What is the theory underneath? Some questions I hope to get answered by understanding the principle are:
When considering data (especially large data) being processed (e.g. 2GB) which needs to be passed from python to C / C++ and then back. How are the data transferred from python to C when function is called? How is the result transferred back after function ends? Is everything done in memory or are UNIX/TCP sockets or files used to transfer the data? Is there some translation and copying done (e.g. to convert data types), do I need 2GB memory for holding data in python and additional +-2GB memory to have a C version of the data that is passed to C function? Do the C code and Python code run in different processes? | How does calling C or C++ from python work? | 1.2 | 0 | 0 | 397 |
34,287,867 | 2015-12-15T11:19:00.000 | 0 | 0 | 0 | 0 | python,django,bdd,lettuce | 34,333,554 | 1 | false | 1 | 0 | I feel like a lonely person asking and answering her own question :D
The problem was an import we were not even using, so deleting this line resolved our problem. Hope it will be helpful for someone in the future:
from sure import basestring | 1 | 0 | 0 | After pulling changes with rebase from VCS, I am getting a KeyError when trying to run my Aloe_Django (ported from Lettuce) tests. Before, it was working fine; now we cannot figure out what we did wrong.
The Error is
KeyError: <sure.AssertionBuilder object at 0x7fbf588172e8>
The error occurs in the registry.py file, in these lines:
def append_to(self, what, when, function, name=None, priority=0):
"""
Add a callback for a particular type of hook.
"""
if name is None:
name = self._function_id(function)
funcs = self[what][when].setdefault(priority, OrderedDict()) #HAPPENS HERE
funcs.pop(name, None)
funcs[name] = function
# pylint:enable=too-many-arguments | Key Error sure.AssertionBuilder object at | 0 | 0 | 0 | 77 |
34,291,760 | 2015-12-15T14:28:00.000 | 2 | 0 | 1 | 0 | python,c,modulo | 46,969,300 | 2 | false | 0 | 0 | easily implement a C-like modulo in python.
Since C does truncation toward zero when integers are divided, the sign of the remainder is always the sign of the first operand. There are several ways to implement that; pick one you like:
def mod(a, b): return abs(a)%abs(b)*(1,-1)[a<0]
def mod(a, b): return abs(a)%abs(b)*(1-2*(a<0)) | 1 | 2 | 0 | Modulo operator % on negative numbers is implemented in different way in python and in C. In C:
-4 % 3 = -1, while in Python: -4 % 3 = 2.
I know how to implement python-like modulo in C. I wonder how to do the reverse, that is: easily implement a C-like modulo in python. | How to easily implement C-like modulo (remainder) operation in python 2.7 | 0.197375 | 0 | 0 | 1,018 |
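A quick sanity check of such a helper against C's truncating semantics:

```python
def mod(a, b):
    # C-style remainder: magnitude from the absolute values,
    # sign taken from the dividend a.
    return abs(a) % abs(b) * (1 - 2 * (a < 0))

print(mod(-4, 3))  # -1, matching C's -4 % 3
print(mod(4, -3))  # 1, matching C's 4 % -3
print(-4 % 3)      # 2, Python's native floored result
```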
34,295,198 | 2015-12-15T17:04:00.000 | 1 | 1 | 0 | 1 | python,bash,raspberry-pi2,usb-drive,udev | 34,300,132 | 1 | true | 0 | 0 | Well you probably described you problem. The mount process is too slow. You can mount your usb device from your script.sh
Also, you probably need to disable automatic USB device mounting, either system-wide or for this specific device only.
If you add a symlink to your udev rule e.g. SYMLINK+="backup", then you can mount this device by:
mkdir -p /path/to/foo
mount -t ext4 /dev/backup /path/to/foo | 1 | 1 | 0 | I am trying to run a script from a udev rule after any USB drive has been plugged in.
When I run the script manually, after the USB is mounted normally, it will run fine. The script calls a python program to run and the python program uses a file on the USB drive. No issues there.
If I make the script simply log the date to a file, that works just fine.
So I know my UDEV rule and my script work fine, each on their own.
The issue seems to come up when udev calls the script, then script calling the python program and the python program does not run right. I believe it to be that the USB drive has not finished mounting before the python script runs. When watching top, my script begins to run, then python begins to run, they both end, and then I get the window popup of my accessing the files on my USB drive.
So I tried having script1.sh call script2.sh call python.py. I tried having script.sh call python1.py call python2.py. I tried adding sleep function both in the script.sh and python.py. I tried in the rule, RUN+="/home/pi/script.sh & exit". I tried exit in the files. I tried disown in the files.
What else can I try? | Run script with udev after USB plugged in on RPi | 1.2 | 0 | 0 | 1,205 |
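Besides mounting from the script itself, the Python side can be made to tolerate the race by polling until the mount point actually appears. A sketch; the path is hypothetical:

```python
import os
import time

def wait_for_mount(path, timeout=10.0, interval=0.5):
    # Poll until the kernel reports `path` as a mount point, or give up.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.ismount(path):
            return True
        time.sleep(interval)
    return False

if wait_for_mount("/media/usb0", timeout=5.0):
    print("drive ready")
else:
    print("gave up waiting for the mount")
```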
34,296,703 | 2015-12-15T18:25:00.000 | 0 | 0 | 1 | 0 | python-2.7 | 34,296,873 | 1 | false | 0 | 0 | Maybe:
str.decode("utf-8").replace(u"\u0113", "e") | 1 | 4 | 0 | How to replace in a string character (latin-1) 'ē' whose code is u0113 with 'e' (code UTF-8:u0065)
I get the following Python error: | How to replace in a string character (latin-1) 'ē' whose code is u0113 with 'e' (code UTF-8:u0065) | 0 | 0 | 0 | 143
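For reference, in Python 3 (where str is already Unicode) the replacement for the question above looks like this; the second form strips the combining macron generically:

```python
import unicodedata

s = "caf\u0113"  # 'cafē'
print(s.replace("\u0113", "e"))  # cafe

# More general: decompose, then drop all combining marks.
decomposed = unicodedata.normalize("NFD", s)
stripped = "".join(c for c in decomposed if not unicodedata.combining(c))
print(stripped)  # cafe
```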
34,301,518 | 2015-12-15T23:38:00.000 | 1 | 0 | 0 | 0 | python,sql,generator | 63,923,673 | 3 | false | 0 | 0 | You can also use faker.
just pip install faker
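If you only need throwaway values rather than realistic-looking ones, a standard-library-only sketch works too (Faker just gives you nicer values); here with an in-memory SQLite table:

```python
import random
import sqlite3
import string

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, age INTEGER)")

def random_name(length=8):
    return "".join(random.choice(string.ascii_lowercase) for _ in range(length))

rows = [(random_name(), random.randint(18, 90)) for _ in range(100)]
conn.executemany("INSERT INTO users VALUES (?, ?)", rows)
conn.commit()

count, = conn.execute("SELECT COUNT(*) FROM users").fetchone()
print(count)  # 100
```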
Just go through the documentation and check it out | 1 | 0 | 0 | I need to create random entries for a given SQL schema with the help of the Python programming language.
Is there a simple way to do that or do I have to write own generators? | How to create random entries in database with python | 0.066568 | 1 | 0 | 1,307 |
34,302,705 | 2015-12-16T01:51:00.000 | 1 | 1 | 1 | 0 | python | 34,302,742 | 3 | false | 0 | 0 | This isn't a very high-quality question, but the answer is quite simple. Always yes.
The more languages you learn, the more you'll find similarities between them. It will eventually be a matter of applying different algorithms and data structures to get work done instead of choosing programming languages, for general purpose programming, anyway.
When you get into application-specific things, such as embedded programming (imagine, cars, planes, military), then "learning that specific language" will become inescapable and valuable as employable skills. Also, ancient languages such as COBOL apparently fetch a pretty penny.
Enjoy the life of computer science and/or software engineering, kid! | 3 | 0 | 0 | So I've always wanted to learn code/program since I was 14 or so. I took YouTube and even website tutorials for Java, tried to follow along and everything but just didn't get it. I thought that Java was the best and easiest language to learn for a beginner. Well, not for me.
Fast forward to the beginning of this school year, I'm in high school and 16. I started taking programming class and we're going to learn Python. A month or two later and I actually understand the syntax of it, how if-else statements work, variables, functions, I even programmed a function to solve my Physics homework.
Do you think now, that having a basic understanding of Python, it would be easier for me to learn another programming language like C, or Java, or something else? | After learning one language, are other languages easier? | 0.066568 | 0 | 0 | 74 |
34,302,705 | 2015-12-16T01:51:00.000 | 1 | 1 | 1 | 0 | python | 34,302,725 | 3 | false | 0 | 0 | Definitely, it certainly helped in my experience, where my first language was Liberty BASIC, and then Python. I found learning Python easier than learning LB. It's really more to with how you think about your programs, your logical thinking/problem solving skills. | 3 | 0 | 0 | So I've always wanted to learn code/program since I was 14 or so. I took YouTube and even website tutorials for Java, tried to follow along and everything but just didn't get it. I thought that Java was the best and easiest language to learn for a beginner. Well, not for me.
Fast forward to the beginning of this school year, I'm in high school and 16. I started taking programming class and we're going to learn Python. A month or two later and I actually understand the syntax of it, how if-else statements work, variables, functions, I even programmed a function to solve my Physics homework.
Do you think now, that having a basic understanding of Python, it would be easier for me to learn another programming language like C, or Java, or something else? | After learning one language, are other languages easier? | 0.066568 | 0 | 0 | 74 |
34,302,705 | 2015-12-16T01:51:00.000 | 1 | 1 | 1 | 0 | python | 34,302,757 | 3 | false | 0 | 0 | Absolutely.
Knowing one language always helps when learning a new one, especially if they are similar. If you have learned Python, I'd suggest moving to Java. Avoid C and C++ for now; they are very theoretical and much harder to learn without a strict teacher and mandatory homework.
Be aware that Python is much less strict than other languages, so you will need to work a bit harder, but yeah, do it now! Learning programming takes a little time and A LOT of practice, but it is surely not impossible.
I would suggest watching some wonderful online lectures on Udacity or Coursera.
Good luck ;) | 3 | 0 | 0 | So I've always wanted to learn code/program since I was 14 or so. I took YouTube and even website tutorials for Java, tried to follow along and everything but just didn't get it. I thought that Java was the best and easiest language to learn for a beginner. Well, not for me.
Fast forward to the beginning of this school year, I'm in high school and 16. I started taking programming class and we're going to learn Python. A month or two later and I actually understand the syntax of it, how if-else statements work, variables, functions, I even programmed a function to solve my Physics homework.
Do you think now, that having a basic understanding of Python, it would be easier for me to learn another programming language like C, or Java, or something else? | After learning one language, are other languages easier? | 0.066568 | 0 | 0 | 74 |
34,304,044 | 2015-12-16T04:39:00.000 | 0 | 0 | 1 | 0 | python,path,pycharm | 50,773,172 | 9 | false | 0 | 0 | Sometimes it is different. I solved my problem by clicking "Run" on PyCharm's toolbar, then "Edit Configurations...", and changing my interpreter to another, working one. Just changing it in the settings does not help, but this operation already does ;) | 6 | 48 | 0 | Recently, I'm unable to use relative paths in my code while using PyCharm. For instance, a simple open('test.txt', 'r') will not work - whereupon I am sure the file exists in the same level as the running py file. PyCharm will return this error.
FileNotFoundError: [Errno 2] No such file or directory:
After reading answers online on StackOverflow, I have tried multiple options including:
Changing test.txt to ./test.txt
Closing project, deleting the .idea folder, open the folder with code.
Reinstalling as well as installing the latest version of PyCharm.
Invalidating caches and restarting.
None of these options have worked for me. Is there someway I can tell PyCharm to refresh the current working directory (or even to see where it thinks the current working directory is)?
Thanks in advance!
Edit: I should note that running the script in a terminal window will work. This appears to be a problem with PyCharm and not the script. | PyCharm current working directory | 0 | 0 | 0 | 99,363 |
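To see where PyCharm (or any interpreter) thinks the current working directory is, and to make the open() call independent of it:

```python
import os

# Relative paths like open('test.txt') resolve against this directory:
print("cwd:", os.getcwd())

# Anchor paths to the script file itself instead (falls back to the cwd
# when __file__ is undefined, e.g. in an interactive console):
base = os.path.dirname(os.path.abspath(globals().get("__file__", ".")))
print("data file:", os.path.join(base, "test.txt"))
```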
34,304,044 | 2015-12-16T04:39:00.000 | 63 | 0 | 1 | 0 | python,path,pycharm | 34,567,518 | 9 | false | 0 | 0 | Change:
Run > Edit Configurations > Working directory,
which sets the working directory for a specific project. (This is on a Mac) | 6 | 48 | 0 | Recently, I'm unable to use relative paths in my code while using PyCharm. For instance, a simple open('test.txt', 'r') will not work - whereupon I am sure the file exists in the same level as the running py file. PyCharm will return this error.
FileNotFoundError: [Errno 2] No such file or directory:
After reading answers online on StackOverflow, I have tried multiple options including:
Changing test.txt to ./test.txt
Closing project, deleting the .idea folder, open the folder with code.
Reinstalling as well as installing the latest version of PyCharm.
Invalidating caches and restarting.
None of these options have worked for me. Is there someway I can tell PyCharm to refresh the current working directory (or even to see where it thinks the current working directory is)?
Thanks in advance!
Edit: I should note that running the script in a terminal window will work. This appears to be a problem with PyCharm and not the script. | PyCharm current working directory | 1 | 0 | 0 | 99,363 |
34,304,044 | 2015-12-16T04:39:00.000 | 0 | 0 | 1 | 0 | python,path,pycharm | 53,367,250 | 9 | false | 0 | 0 | A little clarification for mac users. In mac, what @andere said above is correct for setting working directory. However, if your code is in a different folder, say working_dir/src/ (like classic java/scala file structure) in that case you still need to set your Sources Root. In mac's PyCharm this can be done by right clicking on the src/ folder > Mark Directory as > Sources Root. Helped me with lot of similar import issues. Hope this helps someone. | 6 | 48 | 0 | Recently, I'm unable to use relative paths in my code while using PyCharm. For instance, a simple open('test.txt', 'r') will not work - whereupon I am sure the file exists in the same level as the running py file. PyCharm will return this error.
FileNotFoundError: [Errno 2] No such file or directory:
After reading answers online on StackOverflow, I have tried multiple options including:
Changing test.txt to ./test.txt
Closing project, deleting the .idea folder, open the folder with code.
Reinstalling as well as installing the latest version of PyCharm.
Invalidating caches and restarting.
None of these options have worked for me. Is there someway I can tell PyCharm to refresh the current working directory (or even to see where it thinks the current working directory is)?
Thanks in advance!
Edit: I should note that running the script in a terminal window will work. This appears to be a problem with PyCharm and not the script. | PyCharm current working directory | 0 | 0 | 0 | 99,363 |
34,304,044 | 2015-12-16T04:39:00.000 | 2 | 0 | 1 | 0 | python,path,pycharm | 51,742,696 | 9 | false | 0 | 0 | In PyCharm, click on "run/edit configurations..."
Then find your script file in the "Python" dropdown menu. Check the "Working Directory" entry and change it if necessary. | 6 | 48 | 0 | Recently, I'm unable to use relative paths in my code while using PyCharm. For instance, a simple open('test.txt', 'r') will not work - whereupon I am sure the file exists in the same level as the running py file. PyCharm will return this error.
FileNotFoundError: [Errno 2] No such file or directory:
After reading answers online on StackOverflow, I have tried multiple options including:
Changing test.txt to ./test.txt
Closing project, deleting the .idea folder, open the folder with code.
Reinstalling as well as installing the latest version of PyCharm.
Invalidating caches and restarting.
None of these options have worked for me. Is there someway I can tell PyCharm to refresh the current working directory (or even to see where it thinks the current working directory is)?
Thanks in advance!
Edit: I should note that running the script in a terminal window will work. This appears to be a problem with PyCharm and not the script. | PyCharm current working directory | 0.044415 | 0 | 0 | 99,363 |
34,304,044 | 2015-12-16T04:39:00.000 | 0 | 0 | 1 | 0 | python,path,pycharm | 68,215,017 | 9 | false | 0 | 0 | EXACT ANSWER TO SOLVE THIS ISSUE ,,
GO TO EDIT CONFIGURATION (just LEFT side of GREEN CODE RUNNER ICON)
click on python (not any specific python script) ONLY SELECT PYTHON
then below right side click on [edit configuration templetes]
select current working dir by going into those blocks
It will change the CWD of all python file that exists in project folder..
then all file will understand the RELATIVE PATH that starts from your actual project name..
i hope this will resolve all your issue related path. | 6 | 48 | 0 | Recently, I'm unable to use relative paths in my code while using PyCharm. For instance, a simple open('test.txt', 'r') will not work - whereupon I am sure the file exists in the same level as the running py file. PyCharm will return this error.
FileNotFoundError: [Errno 2] No such file or directory:
After reading answers online on StackOverflow, I have tried multiple options including:
Changing test.txt to ./test.txt
Closing project, deleting the .idea folder, open the folder with code.
Reinstalling as well as installing the latest version of PyCharm.
Invalidating caches and restarting.
None of these options have worked for me. Is there someway I can tell PyCharm to refresh the current working directory (or even to see where it thinks the current working directory is)?
Thanks in advance!
Edit: I should note that running the script in a terminal window will work. This appears to be a problem with PyCharm and not the script. | PyCharm current working directory | 0 | 0 | 0 | 99,363 |
34,304,044 | 2015-12-16T04:39:00.000 | 0 | 0 | 1 | 0 | python,path,pycharm | 51,396,970 | 9 | false | 0 | 0 | I too had the same issue few minutes ago...but,with the latest version of PyCharm it is resolved by simply using the relative path of that file..
For instance, a simple f = open('test', 'r') will work. | 6 | 48 | 0 | Recently, I'm unable to use relative paths in my code while using PyCharm. For instance, a simple open('test.txt', 'r') will not work - whereupon I am sure the file exists in the same level as the running py file. PyCharm will return this error.
FileNotFoundError: [Errno 2] No such file or directory:
After reading answers online on StackOverflow, I have tried multiple options including:
Changing test.txt to ./test.txt
Closing project, deleting the .idea folder, open the folder with code.
Reinstalling as well as installing the latest version of PyCharm.
Invalidating caches and restarting.
None of these options have worked for me. Is there someway I can tell PyCharm to refresh the current working directory (or even to see where it thinks the current working directory is)?
Thanks in advance!
Edit: I should note that running the script in a terminal window will work. This appears to be a problem with PyCharm and not the script. | PyCharm current working directory | 0 | 0 | 0 | 99,363 |
34,305,656 | 2015-12-16T06:53:00.000 | 1 | 0 | 1 | 0 | python,ironpython,tibco,spotfire | 34,326,443 | 3 | false | 0 | 0 | Spotfire has its own IDE for developing scripts, but it is a very poor one when analysing its functionalities. I don't think you can use any other IDE to debug the scripts, but you can at least use the one suggested by BendEg to make creation of the code more 'pleasant'. | 1 | 4 | 0 | Can we use any IRONPython editor to develop scripts for Tibco Spotfire controls.
Can we use IDLE editor to develop IRONPython scripts for Tibco Spotfire? If yes then how to integrate the tibco module with IDLE editor, Can anyone help on this?? | Which IronPython editor I can use to develop scripts for Tibco Spotfire controls | 0.066568 | 0 | 0 | 1,385 |
34,307,237 | 2015-12-16T08:35:00.000 | 1 | 0 | 1 | 0 | python,python-2.7,weak-references,python-module | 61,902,272 | 1 | false | 0 | 0 | I was having same issue than you. The problem was that I was naming the file I was trying to run/edit as weakref.py Then, only change the name. I changed name to "weakref_example.py" | 1 | 0 | 0 | There came up strange error from python today. Whatever i want to launch or do, i can't getting error : 'module' has no attribute 'weakvaluedictionary'.
I even tried to run pip install/uninstall and got the same error.
Nothing has changed since yesterday, when everything was working perfectly.
I checked __init__.py and did not see anything strange with weakref:
there are the import weakref and _handlers = weakref.WeakValueDictionary()  # map of handler names to handlers lines.
Please help!! | weakref module has no attribute 'weakvaluedictionary' | 0.197375 | 0 | 0 | 799 |
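A quick way to check whether a local file is shadowing the standard-library module, which is the usual cause of this error as the answer found:

```python
import weakref

# If this path points inside your project rather than the Python
# installation, a local weakref.py is shadowing the real module.
print(weakref.__file__)

# The real module has the class (note the exact capitalization):
d = weakref.WeakValueDictionary()
print(type(d).__name__)  # WeakValueDictionary
```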
34,308,221 | 2015-12-16T09:26:00.000 | 1 | 0 | 0 | 1 | python,django,celery,celery-task | 34,311,859 | 1 | true | 1 | 0 | Executing Celery tasks from a command line utility is the same as executing them from views. If you have a task called foo, then in both cases:
Calling foo(...) executes the code of the task as if foo were just a plain Python function.
Calling foo.delay(...) executes the code of the task asynchronously, through a Celery worker. | 1 | 1 | 0 | I'm trying to run a task, using celery 3.1, from a custom management command.
If I call my task from a view it works fine but when starting the same task from my management command, the task will only run synchronous in current context (not async via celery).
I don't have djcelery installed.
What do I need to add to my management command to get async task processing on command line? | run celery task using a django management command | 1.2 | 0 | 0 | 1,782 |
34,310,736 | 2015-12-16T11:23:00.000 | 1 | 0 | 0 | 0 | python,google-app-engine,caching,server-side,multilingual | 34,314,025 | 1 | false | 1 | 0 | I assume the individual product rendering in a particular language accounts for the majority (or at least a big chunk) of the rendering effort for the entire page.
You could cache server-side the rendered product results for a particular language, prior to assembling them in a complete results page and sending them to the client, using a 2D product x language lookup scheme.
You could also render individual product info offline, on a task queue, whenever products are added/modified, and store/cache them on the server ahead of time. Maybe just for the most heavily used languages?
This way you avoid individual product rendering on the critical path - in response to the client requests, at the expense of added memcache/storage.
You just need to:
split your rendering in 2 stages (individual product info and complete results page assembly)
add logic for cleanup/update of the stored/cached rendered product info when products add/change/delete ops occur
(maybe) add logic for on-demand product info rendering when pre-rendered info is not yet available when the client request comes in (if not acceptable to simply not display the info)
You might want to check if it's preferable to cache/store the rendered product info compressed (html compresses well) - balancing memcache/storage costs vs instance runtime costs vs response time performance (I have yet to do such experiment). | 1 | 0 | 0 | Question:
What are the most efficient approaches to multi-lingual data caching on a web server, given that clients want the same base set of data but in their locale format. So 1000 data items max to be cached and then rendered on demand in specific locale format.
My current approach is as follows:
I have a multilingual python Google App Engine project. The multi-lingual part uses Babel and various language .po and .mo files for translation. This is all fine and dandy. Issues, start to arise when considering caching of data. For example, let's say I have 1000 product listings that I want clients to be able to access 100 at a time. I use memcache with a datastore backup entity if the memcache gets blasted. Again, all is fine and dandy, but not multilingual. Each product has to be mapped to match the key with the particular locale of any client, English, French, Turkish, whatever. The way I do it now is to map the products under a specific locale, say 'en_US', and render server side using jinja2 templates. Each bit of data that is multilingual specific is rendered using the locale settings for date, price formatting title etc. etc. in the 'en_US' format and placed into the datastore and memcache all nicely mapped out ready for rendering. However, I have an extra step to take for getting those multilingual data into the correct format for a clients locale, and that is by way of standard {{ }} translations and jinja2 filters, generally for stuff like price formatting and dates. Problem is this is slowing things up as this all has to be rendered on the server and then passed back to the client. The initial 100 products are always server side rendered, however, before caching I was rendering the rest client side from JSON data via ajax calls to the server. Now it's all server side rendering.
I don't want to get into a marathon discussion regarding server vs client side rendering, but I would appreciate any insights into how others have successfully handled multi-lingual caching | Issues with Multi-lingual website data caching - Python - Google App Engine | 0.197375 | 0 | 0 | 47 |
34,315,470 | 2015-12-16T15:08:00.000 | 0 | 0 | 1 | 1 | python,c,ipc | 34,316,940 | 4 | false | 0 | 0 | If your struct is simple enough, you could even not use IPC at all. Provided, you can serialize it as string parameters that could be used as program arguments and provided the int value to return can be in the range 0-127, you could simply:
in C code:
prepare the command arguments to pass to the Python script
fork-exec (assuming a Unix-like system) a Python interpretor with the script path and the script arguments
wait for child termination
read what the script passed as code termination
in Python:
get the arguments from command line and rebuild the elements of the struct
process it
end the script with exit(n) where n is an integer in the range 0-127 that will be returned to caller.
If the above does not meet your requirements, the next level would be to use pipes:
in C code:
prepare 2 pipe pairs one for C->Python (let's call it input), one for Python->C (let's call it output)
serialize the struct into a char buffer
fork
in child
close write side of input pipe
close read side of output pipe
dup the read side of the input pipe to file descriptor 0 (stdin) (see dup2)
dup write side of output pipe to file descriptor 1 (stdout)
exec a Python interpreter with the name of the script
in parent
close read side of input pipe
close write side of output pipe
write the buffer (eventually preceded by its size if it cannot be known a priori) to the write side on input file
wait for the child to terminate
read the return value from the read side of output pipe
in Python:
read the serialized data from standard input
process it
write the output integer to standard output
exit | 1 | 7 | 0 | So I am relatively new to IPC and I have a c program that collects data and a python program that analyses the data. I want to be able to:
Call the python program as a subprocess of my main c program
Pass a c struct containing the data to be processed to the python process
Return an int value from the python process back to the c program
I have been briefly looking at Pipes and FIFO, but so far cannot find any information to address this kind of problem, since as I understand it, a fork() for example will simply duplicate the calling process, so not what I want as I am trying to call a different process. | IPC between C application and Python | 0 | 0 | 0 | 10,304 |
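The simplest scheme from the answer above (arguments in, exit status out) can be tried end to end. Here the C parent is simulated with subprocess purely for illustration; in C you would fork/exec the interpreter and read the status from wait():

```python
import subprocess
import sys

# Python side: rebuild the "struct" fields from argv, process them, and
# report the int result via the exit status (must fit in 0-127).
child = r"""
import sys
a, b = int(sys.argv[1]), int(sys.argv[2])
sys.exit((a + b) % 128)
"""

rc = subprocess.run([sys.executable, "-c", child, "3", "4"]).returncode
print(rc)  # 7
```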
34,315,691 | 2015-12-16T15:19:00.000 | 1 | 0 | 1 | 0 | python | 34,315,885 | 2 | false | 0 | 0 | Since this is a very broad question I will just give you a general answer. You are probably going to want to make a new class that will contain the data for a room. In this class you could have variables that store randomly generated numbers (using the random module) and then have methods use those numbers to determine the layout, monsters, and items in each room. All you would then have to do is to have a 2D or 3D grid (probably using lists of the room class) and randomly fill the grid with rooms that each contain random data. | 1 | 1 | 0 | I'm trying to figure out how to create a completely random maze/dungeon for a small text game I'm working on. I'm really not sure where to start since I've never done anything like this before. How do I do this? I need the rooms to know what mobs they hold, what items are on the ground, where the exits are and what other rooms they go to. Any help is appreciated. | Randomly generated dungeon with Python | 0.099668 | 0 | 0 | 663
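A minimal sketch of that room-class design; the mob and item pools are invented for illustration:

```python
import random

class Room:
    def __init__(self):
        self.mobs = random.sample(["rat", "goblin", "slime", "bat"], random.randint(0, 2))
        self.items = random.sample(["key", "potion", "sword", "coin"], random.randint(0, 2))
        self.exits = {}  # direction -> neighbouring Room

def build_dungeon(width, height):
    grid = [[Room() for _ in range(width)] for _ in range(height)]
    for y in range(height):
        for x in range(width):
            if x + 1 < width:   # link east/west neighbours
                grid[y][x].exits["east"] = grid[y][x + 1]
                grid[y][x + 1].exits["west"] = grid[y][x]
            if y + 1 < height:  # link north/south neighbours
                grid[y][x].exits["south"] = grid[y + 1][x]
                grid[y + 1][x].exits["north"] = grid[y][x]
    return grid

dungeon = build_dungeon(4, 3)
start = dungeon[0][0]
print(sorted(start.exits))  # ['east', 'south']
```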
34,316,369 | 2015-12-16T15:50:00.000 | 0 | 0 | 0 | 0 | python,django,redirect,logging,django-middleware | 34,317,060 | 1 | false | 1 | 0 | Not really, at least in any official way. HTTP requests are independent of each other. You can't tell that one request followed another. That is the reason why, if you need to maintain state between pages, you end up using sessions and passing session IDs around.
For your purposes using session ID to track pages is not reliable since a user can have multiple pages open.
The only semi-reliable solution I can think of is appending a tracking querystring to the URL upon redirects.
For example if some view processing request to /foo/ returns a redirect to /bar/ in your middleware you change that URL to /bar/?tracking=<something random>. The random part can be a uuid or something similar. Then when the user will go to that page, you can match the random bit and hence correlate that the request came from the original page /foo/. Note that in order for this to work the random bit will have to be unique for all requests.
Should you use the above approach? Probably not. It's probably not very reliable and probably has many edge cases where it will break. Maybe you can change your requirements to reflect the nature of HTTP a bit better, so that you will not need to come up with hacks to do what you are trying to do? | 1 | 2 | 0 | I have a Django event logging application with middleware, which logs user page views. Currently, if the response code is 200, the log "User X visited page Y" is saved, but in case of a redirect the log should be "User X has been redirected to page Y".
Is it possible to determine if response (200) occurred after 302 response redirect? | Determine if response results from redirect | 0 | 0 | 0 | 62 |
34,319,011 | 2015-12-16T18:01:00.000 | 0 | 0 | 0 | 0 | python,csv,pandas | 34,323,744 | 1 | false | 0 | 0 | I found the mistake. The problem was a thousands separator.
When writing the CSV file, most numbers were below a thousand and were correctly written to the CSV file. However, this one value was greater than a thousand and was written as "1,123", which pandas did not recognize as a number but as a string. | 1 | 2 | 1 | I'm trying to read a large and complex CSV file with pandas.read_csv.
The exact command is
pd.read_csv(filename, quotechar='"', low_memory=True, dtype=data_types, usecols= columns, true_values=['T'], false_values=['F'])
I am pretty sure that the data types are correct. I can read the first 16 million lines (setting nrows=16000000) without problems but somewhere after this I get the following error
ValueError: could not convert string to float: '1,123'
It seems that, for some reason, pandas thinks two columns are one.
What could be the problem? How can I fix it? | Pandas: Read CSV: ValueError: could not convert string to float | 0 | 0 | 0 | 8,440 |
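For anyone hitting the same thing: pandas.read_csv has a thousands parameter for exactly this case:

```python
import io

import pandas as pd

# "1,123" is quoted and uses a thousands separator; telling read_csv
# about it lets the column parse as numbers instead of strings.
data = io.StringIO('value\n12\n998\n"1,123"\n')
df = pd.read_csv(data, thousands=",")
print(df["value"].tolist())  # [12, 998, 1123]
```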
34,319,068 | 2015-12-16T18:04:00.000 | 0 | 0 | 0 | 1 | python,numpy,matplotlib,setup.py | 34,319,997 | 1 | true | 0 | 0 | This sounds hacky and quite possibly evil, but if you don't have shell access but do have Python access, I suppose you could write a Python script that writes the library files to the proper location.
You can determine the location by examining the __file__ value in each module. If this is a file-system location the Python process has permission to write to (possibly the site-packages directory), it could be done. If it is under a location you can't write to, then no. Be careful, this is quite hacky. | 1 | 0 | 0 | The problem is like this:
the python on the server is version 2.4.3 (somewhat obsolete),
numpy is version 1.2.1 (obsolete) and
matplotlib is version 0.99.1.1
(devastatingly obsolete + lacks pyplot for some unknown reason).
I cannot use shell/bash on the server. How can I update numpy and matplotlib to current versions? E.g., can I upload some folders of my Python install to certain server locations so that they will magically work? Or something different?
Thank you for your attention.
P.S. I can manipulate python path on server during script execution. | Uploading python library to server | 1.2 | 0 | 0 | 49 |
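If the write-the-files-yourself route is attempted, the first step is finding where packages for the server's interpreter live. A stdlib-only sketch (whether the process may actually write there is up to the server's permissions):

```python
import os
import sysconfig

# Where third-party packages for this interpreter live; a script running on
# the server could copy library folders here, if the process may write there.
site_dir = sysconfig.get_paths()["purelib"]
print(site_dir)
print("writable:", os.access(site_dir, os.W_OK))
```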
34,322,216 | 2015-12-16T21:15:00.000 | 1 | 0 | 1 | 0 | python,django | 34,322,974 | 4 | false | 1 | 0 | It depends on the scope of the Alphabet class. If it is a utility class then I would suggest to put in a utils.py file, for example. But it is perfectly fine to have classes in the views.py file, mainly those dealing with UI processing. Up to you. | 2 | 6 | 0 | I'm learning Django on my own and I can't seem to get a clue of where I implement a regular Python class. What I mean is, I don't know where do the Python classes I write go. Like they go in a separate file and then are imported to the views.py or are the classes implemented inside the views.py file?
Example I want to implement a Class Alphabet, should I do this in a separate file inside the module or just implement the functions inside the views.py file? | Where to implement python classes in Django? | 0.049958 | 0 | 0 | 1,687 |
34,322,216 | 2015-12-16T21:15:00.000 | 1 | 0 | 1 | 0 | python,django | 34,324,293 | 4 | false | 1 | 0 | Unlike similar frameworks, Django lets you put your Python code anywhere in your project, provided you can reference it later by its import path (model classes are partially an exception, though):
Applications are referenced by their import path (or an AppConfig import path). Although there's some magic involving test.py and models.py, most of the times the import / reference is quite explicit.
Views are referenced by urls.py files, but imported as regular python import path.
Middlewares are referenced by strings which denote an import path ending with their class name.
Other settings you normally don't configure are also full import paths.
The exception to this explicitness is:
models.py, tests.py, admin.py: they have special purposes and may not exist, provided that:
You will not need any model in your app, and will provide an AppConfig (instead of just the app name) in your INSTALLED_APPS.
You will not rely on autodiscovery for admin classes in your app.
You don't want to write tests for your app, or you will specify a non-default path when running the test command for your app.
templates and static files: your project will rely on per-app loaders for your static files and for your templates files, and ultimately there's a brute-force search in each of your apps: their inner static/ and templates/ directories, if exist, are searched for those files.
Everything else is just normal python code and, if you need to import them from any view, you just do a normal import sentence for them (since view code is imported with the normal Python import mechanism). | 2 | 6 | 0 | I'm learning Django on my own and I can't seem to get a clue of where I implement a regular Python class. What I mean is, I don't know where do the Python classes I write go. Like they go in a separate file and then are imported to the views.py or are the classes implemented inside the views.py file?
Example I want to implement a Class Alphabet, should I do this in a separate file inside the module or just implement the functions inside the views.py file? | Where to implement python classes in Django? | 0.049958 | 0 | 0 | 1,687 |
34,322,297 | 2015-12-16T21:19:00.000 | 0 | 0 | 1 | 0 | python,logging | 34,322,394 | 1 | false | 0 | 0 | OK, I found the answer. The master parent of all loggers is the root logger, even though its name doesn't appear in the canonical logger name. | 1 | 0 | 0 | I understand and like the idea of a hierarchical structure of loggers with the canonical module name as the name of the logger. But I don't know how to tie everything up at the top level.
Supposing I have application using
package1.subpackage1.module1 and
package2.subpackage2.module2.
And now I'd like to define one handler and one formatter for all. But I don't want to enumerate all module's loggers and setup them separately.
It seems that all module loggers should be automagically attached somewhere to "master" logger, where the only handler is defined.
How to achieve this? | how to gather all module's loggers under one parent? | 0 | 0 | 0 | 15 |
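As the answer says, attaching one handler and formatter to the root logger is enough; module loggers propagate to it automatically. A sketch using the question's module names:

```python
import io
import logging

buf = io.StringIO()  # stand-in for stderr so the output can be inspected
handler = logging.StreamHandler(buf)
handler.setFormatter(logging.Formatter("%(name)s %(levelname)s %(message)s"))

root = logging.getLogger()  # the nameless root logger: parent of every logger
root.addHandler(handler)
root.setLevel(logging.INFO)

# Module loggers need no handlers of their own; records propagate up to root.
logging.getLogger("package1.subpackage1.module1").info("from module1")
logging.getLogger("package2.subpackage2.module2").info("from module2")
print(buf.getvalue())
```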
34,323,027 | 2015-12-16T22:08:00.000 | 1 | 0 | 0 | 0 | python,python-2.7,numpy,pca | 34,323,069 | 2 | false | 0 | 0 | So if scikit's third eigenvector is (a,-b,-c,-d) then mine is (-a,b,c,d).
That's completely normal. If v is an eigenvector of a matrix, then -v is an eigenvector with the same eigenvalue. | 1 | 0 | 1 | I have written a simple PCA code that calculates the covariance matrix and then uses linalg.eig on that covariance matrix to find the principal components. When I use scikit's PCA for three principal components I get almost the equivalent result. My PCA function outputs the third column of transformed data with flipped signs to what scikit's PCA function does. Now I think there is a higher probability that scikit's built-in PCA is correct than to assume that my code is correct. I have noticed that the third principal component/eigenvector has flipped signs in my case. So if scikit's third eigenvector is (a,-b,-c,-d) then mine is (-a,b,c,d). I might a bit shabby in my linear algebra, but I assume those are different results. The way I arrive at my eigenvectors is by computing the eigenvectors and eigenvalues of the covariance matrix using linalg.eig. I would gladly try to find eigenvectors by hand, but doing that for a 4x4 matrix (I am using iris data set) is not fun.
Iris data set has 4 dimensions, so at most I can run PCA for 4 components. When I run for one component, the results are equivalent. When I run for 2, also equivalent. For three, as I said, my function outputs flipped signs in the third column. When I run for four, again signs are flipped in the third column and all other columns are fine. I am afraid I cannot provide the code for this. This is a project, kind of. | Alternative to numpy's linalg.eig? | 0.099668 | 0 | 0 | 593 |
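A quick numpy check of the sign ambiguity described above, plus one common normalization convention (the matrix here is made up):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])          # symmetric, so the decomposition is real
vals, vecs = np.linalg.eig(A)
v, lam = vecs[:, 0], vals[0]

print(np.allclose(A.dot(v), lam * v))     # True: v is an eigenvector
print(np.allclose(A.dot(-v), lam * -v))   # True: so is -v, same eigenvalue

# Common convention for comparing implementations: flip each vector so its
# largest-magnitude component is positive.
v_fixed = v * np.sign(v[np.argmax(np.abs(v))])
```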
34,324,386 | 2015-12-16T23:56:00.000 | -1 | 0 | 0 | 0 | python,kivy | 34,325,646 | 1 | false | 0 | 1 | You can just create another image with a faded edge and load it. | 1 | 0 | 0 | I have an interface made with Kivy, which has an image in part of it. I want to fade one edge of this image. What's the easiest way of achieving this? | How to add a gradient to an image | -0.197375 | 0 | 0 | 243
34,327,655 | 2015-12-17T06:03:00.000 | 2 | 0 | 0 | 0 | python,xml,openerp | 34,328,053 | 2 | false | 1 | 0 | Providing an access rule is one part of the solution. If you look at "Access Control List" in "Settings > Technical > Security > Access Controls Lists", you can see that the group Hr Employee has only read access to the model hr.employee. So first you also have to provide write access to the model hr.employee for the group Employee. After you have allowed write access to the group Employee for the model hr.employee,
Create a new record rule from Settings > Technical > Security > Record Rules named User_edit_own_employee_rule (As you wish).
Provide the domain for this rule User_edit_own_employee_rule as [('user_id', '=', user.id)]. This domain should apply for Read and Write, i.e., check the "Apply for Read" and "Apply for Write" Boolean fields.
Create another record rule named User_edit_own_employee_rule_1
Provide the domain for this rule User_edit_own_employee_rule_1 as [('user_id', '!=', user.id)]. This domain should apply for Read only, i.e., check "Apply for Read".
Now, by creating two record rules for the group Employee, a user can read and write his/her own record but only read other employees' records.
Detail:
Provide write access in access control list to model hr.employee for group Employee. Create two record rule:
User_edit_own_employee_rule :
Name : User_edit_own_employee_rule
Object : Employee
Apply for Read : Checked
Apply for Write : Checked
Rule Definition : [('user_id', '=', user.id)]
Groups : Human Resources / Employee
User_edit_own_employee_rule_1 :
Name : User_edit_own_employee_rule_1
Object : Employee
Apply for Read : Checked
Apply for Write : Unchecked
Rule Definition : [('user_id', '!=', user.id)]
Groups : Human Resources / Employee
I hope this will help you. | 1 | 0 | 0 | I have created groups to give access rights everything seems fine but I want to custom access - rights for module issue. When user of particular group logins, I want that user only able to create/edit their own issue and can't see other users issue.Please help me out!!
Thanks | How to make user can only access their own records in odoo? | 0.197375 | 1 | 0 | 6,109 |
34,333,808 | 2015-12-17T11:45:00.000 | 0 | 0 | 1 | 0 | python,build,scons | 34,335,139 | 2 | false | 0 | 1 | Create two Environments, one with each compiler, use where necessary.
Then use whichever Environment you need to link the object files produced by either Environment. | 1 | 0 | 0 | I'm trying to set up a complete build environment with SCons and I came across this problem:
My project can be compiled with two different compilers (c or cpp compilers) and the resulting object files linked with the same linker.
Because of this, I need to know how to split the compilation part from the linking part.
Also, there are cases when I only need the .o files so I want to avoid linking.
Is this possible using the same environment ? | How to compile with two different compilers using SCons? | 0 | 0 | 0 | 657 |
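A minimal SConstruct along the lines of this answer (file names and compiler choices are assumptions; SCons injects Environment into the build script, so this is not standalone-runnable Python):

```python
# SConstruct sketch -- executed by scons, not directly by python.
env_c = Environment(CC="gcc")        # environment for the C compiler
env_cpp = Environment(CXX="g++")     # environment for the C++ compiler

# Object() compiles without linking, so you can stop here when you
# only need the .o files.
c_objs = env_c.Object(["foo.c", "bar.c"])
cpp_objs = env_cpp.Object(["baz.cpp"])

# Link objects from both environments with a single linker invocation.
env_cpp.Program("app", c_objs + cpp_objs)
```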
34,337,788 | 2015-12-17T15:06:00.000 | 1 | 0 | 0 | 1 | python,windows,docker,tensorflow | 34,340,617 | 1 | false | 0 | 0 | If you're using one of the devel tags (:latest-devel or :latest-devel-gpu), the file should be in /tensorflow/tensorflow/models/image/imagenet/classify_image.py.
If you're using the base container (b.gcr.io/tensorflow/tensorflow:latest), it's not included -- that image just has the binary installed, not a full source distribution, and classify_image.py isn't included in the binary distribution. | 1 | 2 | 1 | I have installed tensorflow on Windows using Docker, I want to go to the folder "tensorflow/models/image/imagenet" that contains "classify_image.py" python file..
Can someone please tell me how to reach this path? | Location of tensorflow/models.. in Windows | 0.197375 | 0 | 0 | 1,104
34,339,172 | 2015-12-17T16:08:00.000 | 0 | 0 | 0 | 0 | python,django,django-authentication,django-sites | 34,339,800 | 1 | true | 1 | 0 | Had the same issue, created default site entry (Id=1) and never had any issue ever since | 1 | 5 | 0 | The following warning appears twice when I run ./manage.py runserver after upgrading Django from 1.7 to 1.8.
.../django/contrib/sites/models.py:78: RemovedInDjango19Warning: Model class django.contrib.sites.models.Site doesn't declare an explicit app_label and either isn't in an application in INSTALLED_APPS or else was imported before its application was loaded. This will no longer be supported in Django 1.9.
The project still runs fine, but I want to get rid of the warning. I'm not using the Sites framework in my project, but the warning disappeared when I added 'django.contrib.sites' to the INSTALLED_APPS list in the project's settings.py. So that took care of the warning and I was happy.
But then the project starts demanding a Site in the database at the login prompt. Now, the whole thing is that I don't want the Sites framework at all. But now I seem forced to manage a database entry and need to consider it during installation and such when I'm just trying to get rid of a warning.
It appears that the login code in django.contrib.auth relies on it from the code. However, in Django's documentation I found this assertion: "site_name: An alias for site.name. If you don’t have the site framework installed, this will be set to the value of request.META['SERVER_NAME']. For more on sites, see The “sites” framework."
So it appears that the authors of django.contrib.auth consider the Sites framework optional, but judging from my situation, it isn't.
Hence my question. Is it possible to use Django's (presumably contributed) authentication system without using the Sites framework at all and still getting rid of that warning and everything related to the Sites framework? | Upgrading Django to 1.8 produces irrelevant Sites-framework warnings | 1.2 | 0 | 0 | 78 |
34,340,496 | 2015-12-17T17:15:00.000 | 0 | 0 | 0 | 0 | python,django,django-models,multi-tenant,saas | 34,340,595 | 1 | false | 1 | 0 | I'm not familiar with Django. But if you're going to build a SAAS, one of the main things you need to think about from the beginning is scalability, which of course suggests the 2nd option. The 1st one will be a nightmare when your SAAS is expanding. | 1 | 0 | 0 | I have been mainly focused on other frameworks like Laravel and Express.js, so I am new to Django but have built several projects. I need to build a SAAS product. So which is the best approach?
Separate database for each customer
Same db with tenant_id mapping
Or any other best solutions by SO gurus? | Multitenant SAAS using Django | 0 | 0 | 0 | 363 |
34,341,396 | 2015-12-17T18:08:00.000 | 0 | 1 | 0 | 0 | python-3.x,python-requests | 34,343,323 | 1 | false | 0 | 0 | This seems to work now. I believe the issue was linked to an old version of openssl. Once I upgraded even the 301 for a different domain goes through with no errors and that was with verify set to True. | 1 | 0 | 0 | I am using the Python requests module (requests (2.7.0)) and tracking URL requests.
Most of these URLs are supposed to trigger a 301 redirect; however, for some the domain changes as well. For URLs where the 301 causes a domain name change (i.e. x.y.com ends up as a.b.com) I get a "certificate verify failed" error. However, I have checked and the cert on that site is valid and it is not a self-signed cert.
For the others where the domain remains the same I do not get any errors so it does not seem to be linked to SSL directly else the others would fail as well.
Also what is interesting is that if I run the same script using curl instead of requests I do not get any errors.
I know I can suppress the request errors by setting verify=False but I am wondering why the failure occurs only when there is a domain name change.
Regards,
AB | Python requests module results in SSL error for 301 redirects to a different domain | 0 | 0 | 1 | 188 |
34,341,489 | 2015-12-17T18:13:00.000 | 3 | 0 | 0 | 0 | python,django,python-3.x | 34,341,868 | 1 | true | 0 | 0 | PyMySQL is a pure-python database connector for MySQL, and can be used as a drop-in replacement using the install_as_MySQLdb() function. As a pure-python implementation, it will have some more overhead than a connector that uses C code, but it is compatible with other versions of Python, such as Jython and PyPy.
At the time of writing, Django recommends using the mysqlclient package on Python 3. This fork of MySQLdb is partially written in C for performance, and is compatible with Python 3.3+. You can install it using pip install mysqlclient. As a fork, it uses the same module name, so you only have to install it and Django will use it in its MySQL database engine. | 1 | 0 | 0 | MySQLdb as I understand doesn't support Python 3. I've heard about PyMySQL as a replacement for this module. But how does it work in a production environment?
Is there a big difference in speed between these two? I asking because I will be managing a very active webapp that needs to create entries in the database very often. | MySQL module for Python 3 | 1.2 | 1 | 0 | 79 |
34,343,996 | 2015-12-17T20:51:00.000 | 1 | 0 | 1 | 0 | python,python-2.7,debugging,pdb | 34,935,331 | 1 | false | 0 | 0 | On my Debian system, various versions of /usr/bin/pdb (including pdb3.5 and pdb2.7) are symlinks pointing at ../lib/python?.?/pdb.py (for the two versions of pdb I mentioned ?.? is 3.5 or 2.7). So for me the module and the script are literally the same file (with two different pathnames). The script conditionally calls pdb.main() in the usual "a python module is a script" manner.
If a debugged python program uses the pdb module without the pdb command, a common way of doing that is to insert a call to pdb.set_trace() in a suitable location (it has the same intent as a pdb breakpoint when using the pdb command).
Another common way of invoking pdb is to use pdb.run; I have used a call to pdb.run within gdb's python interpreter to debug gdb extension code written in python. | 1 | 0 | 0 | I find there are two ways to invoke pdb.
In the OS shell, run pdb myscript.py, which invokes pdb immediately and allows you to run pdb commands on the running myscript.py.
In myscript.py, import the pdb module and add calls to some pdb functions. Then run myscript.py without pdb as python myscript.py; when execution reaches the first pdb function call, pdb will be invoked, allowing you to run pdb commands on the running myscript.py.
My questions are:
Are the pdb script (run in a shell in the first way) and the pdb module (imported into myscript.py in the second way) both in the same script pdb.py?
In the second way, how is pdb invoked when running a debugged program till a function from pdb module, so that the two ways look the same after invoking pdb? | How is pdb invoked when running a debugged program till a function from pdb module? | 0.197375 | 0 | 0 | 160 |
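For question 1: the pdb command and the pdb module are the same code, which can be seen by driving python -m pdb non-interactively (this uses Python 3's -c flag; the script below is a throwaway temp file):

```python
import os
import subprocess
import sys
import tempfile

# `python -m pdb` runs pdb.main() just like the standalone pdb script does.
# `-c continue` makes the session non-interactive so the script runs through.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("print('hello from the debugged script')\n")
    script = f.name

proc = subprocess.run(
    [sys.executable, "-m", "pdb", "-c", "continue", script],
    capture_output=True, text=True, stdin=subprocess.DEVNULL,
)
os.unlink(script)
print(proc.stdout)
```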
34,344,544 | 2015-12-17T21:27:00.000 | 0 | 0 | 1 | 0 | python-unittest,python-3.5 | 34,344,545 | 1 | true | 0 | 0 | setUpClass was added in 2.7 and 3.2, and should be ignored in 3.1 and 2.6-. So the best option is to create your own subclass of unittest.TestCase and add the warnings.filterwarnings code to the setUpClass function*.
*Don't forget to use the classmethod decorator on setUpClass. | 1 | 1 | 0 | In 3.5 DeprecationWarning is explicitly set to show during testing.
My package is for 2.7 - 3.5 and uses functions present in 2.7 - 3.4 that were deprecated in 3.5. Using the replacement method would be a pain since it didn't exist before 3.5, plus it isn't going anywhere for years (2020 at the earliest).
How do I get the DeprecationWarning silenced during the 3.5 test?
I have tried setting PYTHONWARNING, warnings.filterwarning, creating my own TestCase with warnings.filterwarnings in __init__, all to no avail. | How to ignore DeprecationWarning during testing in Pythons 2.7 - 3.5? | 1.2 | 0 | 0 | 260 |
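A sketch of the subclass idea from the answer (catch_warnings keeps the filter change scoped to the class; this form works on 2.7+):

```python
import unittest
import warnings


class QuietTestCase(unittest.TestCase):
    """Silences DeprecationWarning for every test in subclasses."""

    @classmethod
    def setUpClass(cls):
        cls._warn_ctx = warnings.catch_warnings()
        cls._warn_ctx.__enter__()              # save the current filter state
        warnings.simplefilter("ignore", DeprecationWarning)

    @classmethod
    def tearDownClass(cls):
        cls._warn_ctx.__exit__(None, None, None)   # restore the filter state


class DemoTest(QuietTestCase):
    def test_deprecated_call_is_silent(self):
        with warnings.catch_warnings(record=True) as caught:
            warnings.warn("old API", DeprecationWarning)
        self.assertEqual(caught, [])           # filtered out, never shown
```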
34,344,624 | 2015-12-17T21:32:00.000 | 1 | 0 | 0 | 0 | user-interface,python-3.x,tkinter | 34,344,977 | 1 | true | 0 | 1 | That usually means that you're trying to call a method on a widget that has been destroyed. The string .52674064 is the internal name of a specific widget.
This can easily happen if you call a function via a binding or via after, if the widget is destroyed before the binding or after call has been triggered. | 1 | 0 | 0 | I am trying to run a Tkinter GUI on Python 3.x, and when I use the .get command to get the number off a scale, this error pops up
Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Python34\lib\tkinter\__init__.py", line 1482, in __call__
return self.func(*args)
File "C:\Users\Danny\Downloads\Space RPG. Alpha 0.2 (2) (1).py", line 39, in close
print (w1.get(), w2.get())
File "C:\Python34\lib\tkinter\__init__.py", line 2840, in get
value = self.tk.call(self._w, 'get')
_tkinter.TclError: invalid command name ".52674064"
What is happening? | tkinter.TclError: invalid command name ".52674064 | 1.2 | 0 | 0 | 642 |
34,357,617 | 2015-12-18T14:15:00.000 | 2 | 0 | 1 | 0 | python,arrays,numpy,append | 70,073,546 | 2 | false | 0 | 0 | Using np.stack should work,
but the catch is that both input arrays must have exactly the same shape.
np.stack([A,B]) | 1 | 36 | 1 | I have an array A that has shape (480, 640, 3), and an array B with shape (480, 640).
How can I append these two as one array with shape (480, 640, 4)?
I tried np.append(A,B) but it doesn't keep the dimension, while the axis option causes the ValueError: all the input arrays must have same number of dimensions. | Append 2D array to 3D array, extending third dimension | 0.197375 | 0 | 0 | 65,088 |
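For the exact shapes in the question, np.stack on the raw arrays fails because it requires identical shapes; the usual route is to give B a trailing axis and concatenate, or use np.dstack. A sketch:

```python
import numpy as np

A = np.zeros((480, 640, 3))
B = np.ones((480, 640))

# Give B a trailing length-1 axis so both arrays are 3-D, then join on axis 2.
C = np.concatenate([A, B[..., np.newaxis]], axis=2)
print(C.shape)                    # (480, 640, 4)

# np.dstack does the same promotion implicitly.
D = np.dstack([A, B])
print(np.array_equal(C, D))       # True
```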
34,357,680 | 2015-12-18T14:19:00.000 | 0 | 0 | 1 | 0 | python,pdf,encryption,pdfminer | 71,596,480 | 3 | false | 0 | 0 | This is pdfminer's error; use pdfplumber.open(file_name, password="".encode()) to get past this error (or the related TypeError: can only concatenate str (not "bytes") to str). | 1 | 11 | 0 | I'm trying to extract text from PDF files and later try to identify the references. I'm using pdfminer 20140328. With unencrypted files it's running well, but now I got a file where I get:
File "C:\Tools\Python27\lib\site-packages\pdfminer\pdfdocument.py", line 348, in _initialize_password
raise PDFEncryptionError('Unknown algorithm: param=%r' % param)
pdfminer.pdfdocument.PDFEncryptionError: Unknown algorithm: param={'CF': {'StdCF': {'Length': 16, 'CFM': /AESV2, 'AuthEvent': /DocOpen}}, 'O': '}\xe2>\xf1\xf6\xc6\x8f\xab\x1f"O\x9bfc\xcd\x15\xe09~2\xc9\\x87\x03\xaf\x17f>\x13\t^K\x99', 'Filter': /Standard, 'P': -1548, 'Length': 128, 'R': 4, 'U': 'Kk>\x14\xf7\xac\xe6\x97\xb35\xaby!\x04|\x18(\xbfN^Nu\x8aAd\x00NV\xff\xfa\x01\x08', 'V': 4, 'StmF': /StdCF, 'StrF': /StdCF}
I checked with pdfinfo that this file seems to be AES encrypted, but I can open it without any problems.
So I have two questions:
first: how is it possible that a document is encrypted but I can open it without a password?
and secondly: how do I make PDFMiner read that file properly? Somewhere I read that installing pycrypto adds additional algorithms, but it didn't fix my problem.
Many thanks. | PDF Miner PDFEncryptionError | 0 | 0 | 0 | 5,581 |
34,357,818 | 2015-12-18T14:27:00.000 | 0 | 0 | 0 | 0 | python,qt,pyqt | 34,370,666 | 1 | true | 0 | 1 | The view asks the model for data by calling data function for all table cells that are visible. The model then queries the database if needed. How this happens in detail depends on the underlying database. The QSqlQueryModel documentation says that: "if the database doesn't return the number of selected rows in a query, the model will fetch rows incrementally. See fetchMore() for more information". Apparently this is the case for sqlite.
The query happens in the fetchMore method of the model; I don't think there are any signals involved. You should be able to use the canFetchMore method to test if all records have been retrieved (not sure if this is what you want; perhaps you can tell us what you are ultimately trying to achieve).
Edit after discussion:
You can inherit from the model and override the data method. I don't know where it's called in the view, probably in many places. You could try to look in the Qt source. If you need to do this from the view: when I look in the documentation of QAbstractScrollArea (which is an ancestor of QTableView) it seems that you can connect to a scroll bar's QAbstractSlider.actionTriggered signal to detect if the scroll bar is at the maximum. | 1 | 0 | 0 | I have a QTableView on top of a QSqlTableModel. My database is sqlite.
I know QSqlTableModel lazily loads the data from the database (it actually loads 256 rows at a time), so when the user scrolls the view to the bottom, the model loads the next 256 rows.
I would like to know:
What is the signal emitted when the user reaches the bottom of the view?
What is the model's method called to load the 256 next rows ? | What model's slot is called when Qtableview scrolled | 1.2 | 0 | 0 | 304 |
34,360,265 | 2015-12-18T16:46:00.000 | 0 | 0 | 1 | 0 | python,matplotlib,3d | 34,361,488 | 1 | false | 0 | 0 | You can use numpy's roll function to shift the data so that your plane becomes parallel to a base plane; then you can slice out that plane and plot it. The only problem is that close to the edges, values from one side wrap around to the opposite side. | 1 | 0 | 1 | I have a 3D regular grid of data. I would like to write a routine allowing the user to specify a plane slicing through the data with arbitrary orientation and returning a contour plot of the data in the plane. Is there a ready-made way in matplotlib to do this? Couldn't find anything in the docs. | plotting 2D slice of arbitrary orientation through 3D data in matplotlib | 0 | 0 | 0 | 494
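An alternative to rolling, avoiding the wrap-around caveat: sample the tilted plane directly with index arrays (nearest-neighbour; the shape and plane coefficients below are made up):

```python
import numpy as np

# Extract the tilted plane z = a*x + b*y + c from a regular grid.
data = np.random.rand(20, 30, 40)            # (nx, ny, nz) grid
nx, ny, nz = data.shape
a, b, c = 0.3, 0.5, 5.0

x, y = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
z = np.clip(np.rint(a * x + b * y + c).astype(int), 0, nz - 1)

plane = data[x, y, z]                        # (nx, ny), ready for contourf
print(plane.shape)                           # (20, 30)
```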
34,361,728 | 2015-12-18T18:20:00.000 | 0 | 0 | 1 | 0 | python,windows,ui-automation,pyautogui | 34,362,035 | 3 | false | 0 | 0 | I would go with option one, but I would sleep for 2 seconds if that is the minimum average time required to open a window. After 2 seconds, I would check if the window has appeared and, if not, sleep again for 2 seconds. That would possibly save more time than sleeping for 5 seconds.
But since trying to check for the window is CPU intensive and time consuming, I think waiting for 5 seconds would be better overall. | 1 | 4 | 0 | I am using the PyAutoGUI library of Python to automate a GUI. The application which I am automating opens a new window after I am done with data entry in my current window. Everything is taken care of by Python automation (data entry in my current window and the click required to open the window).
When the click is performed in the current window, the new window takes some time to open (which may range from 2 - 5 seconds). So there are two options that I can think of here:
Sleep using time.sleep(5) (Con: 3 seconds might be wasted unnecessarily)
Spin in a tight loop till the window appears on the screen. PyAutoGUI offers a locateOnScreen function which could be used to find out if the window has actually appeared on the screen. (However, this is CPU intensive and the function itself is CPU intensive and takes almost 2 seconds to return)
So it looks [1] is a better option to me. Is there some other technique that I may have missed that would be better than either of these two methods? Thanks. | Windows Desktop GUI Automation using Python - Sleep vs tight loop | 0 | 0 | 0 | 9,680 |
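A middle ground between the two options is to poll with a short sleep under an overall timeout; locateOnScreen (or any other check) can serve as the predicate. A stdlib-only sketch with a time-based stand-in condition:

```python
import time


def wait_until(predicate, timeout=5.0, interval=0.25):
    """Poll `predicate` until it returns truthy or `timeout` elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = predicate()
        if result:
            return result
        time.sleep(interval)  # short nap keeps CPU usage low between checks
    return None


# Example with a time-based condition standing in for the locateOnScreen call.
start = time.monotonic()
found = wait_until(lambda: time.monotonic() - start > 0.3,
                   timeout=2.0, interval=0.05)
print(found)   # True
```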
34,365,044 | 2015-12-18T22:43:00.000 | -2 | 0 | 1 | 0 | python,windows,opencv | 59,330,392 | 13 | false | 0 | 0 | While installing PyCharm, don't select a virtual environment unless you want it. If you select it, PyCharm will create a venv and you will need to install all the modules from the command prompt. Tick the existing interpreter instead; it will make everything easy. | 7 | 22 | 0 | I am using OpenCV 3 and Python 2.7 and coding using PyCharm. The code works fine but PyCharm does not recognize cv2 as a module. It underlines it with a red line, so it doesn't display its functions in the intellisense menu.
I tried to set an environment variable OPENCV_DIR but it didn't work
OpenCV is extracted in
F:\opencv and
Python is installed on
C:\Python27
What is wrong ? | PyCharm does not recognize cv2 as a module | -0.03076 | 0 | 0 | 90,179 |
34,365,044 | 2015-12-18T22:43:00.000 | 13 | 0 | 1 | 0 | python,windows,opencv | 49,818,214 | 13 | false | 0 | 0 | Try File->Invalidate Caches / Restart...
I can't say definitively why this works, but it may have something to do with the cached module definitions that PyCharm uses to provide code hints. In some cases they aren't updated or get corrupted. I just know that it's worked for me. | 7 | 22 | 0 | I am using OpenCV 3 and Python 2.7 and coding using PyCharm. The code works fine but PyCharm does not recognize cv2 as a module. It underlines it with a red line, so it doesn't display its functions in the intellisense menu.
I tried to set an environment variable OPENCV_DIR but it didn't work
OpenCV is extracted in
F:\opencv and
Python is installed on
C:\Python27
What is wrong ? | PyCharm does not recognize cv2 as a module | 1 | 0 | 0 | 90,179 |
34,365,044 | 2015-12-18T22:43:00.000 | 0 | 0 | 1 | 0 | python,windows,opencv | 55,831,215 | 13 | false | 0 | 0 | Just install the opencv-python package from Settings. | 7 | 22 | 0 | I am using OpenCV 3 and Python 2.7 and coding using PyCharm. The code works fine but PyCharm does not recognize cv2 as a module. It underlines it with a red line, so it doesn't display its functions in the intellisense menu.
I tried to set an environment variable OPENCV_DIR but it didn't work
OpenCV is extracted in
F:\opencv and
Python is installed on
C:\Python27
What is wrong ? | PyCharm does not recognize cv2 as a module | 0 | 0 | 0 | 90,179 |