Q_Id (int64) | CreationDate (string) | Users Score (int64) | Other (int64) | Python Basics and Environment (int64) | System Administration and DevOps (int64) | Tags (string) | A_Id (int64) | AnswerCount (int64) | is_accepted (bool) | Web Development (int64) | GUI and Desktop Applications (int64) | Answer (string) | Available Count (int64) | Q_Score (int64) | Data Science and Machine Learning (int64) | Question (string) | Title (string) | Score (float64) | Database and SQL (int64) | Networking and APIs (int64) | ViewCount (int64) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
46,360,132 | 2017-09-22T08:24:00.000 | 0 | 0 | 1 | 0 | python,x86,64-bit,x86-64,pyscripter | 46,360,812 | 3 | false | 0 | 0 | I used PyScripter too for a while, but eventually you have to switch to something more modern. As far as I remember it works up to Python 3.4.4 and then stops. It is also no longer being developed.
My advice is to start with an editor that is still being developed (I use PyCharm, which has a free-of-charge Community Edition). You will eventually need all those plugins, version control support, database integration and so on, all in one package.
This way you learn an editor that has a future. Don't repeat my mistake of learning PyScripter and then realising it is no longer supported and that Python 3.6 does not work on it :).
I started not so long ago as well.
Good luck! | 3 | 0 | 0 | So basically I'm trying to install PyScripter on my computer, and I am aware that Python 2.4 or higher is needed to run the program.
My computer specs, first of all, are:
Windows 10 (64bit)
Intel CPU
4GB ram
(or at least the important ones)
Now when I go to python.org, there are about a thousand different downloads available, like 'Python 3.7.0a1', '3.6.3rc1' or '2.7.14', most of them being x86 and some having x64 next to them, which I assume means 64-bit; some of these files are a .zip file, an executable installer, an MSI installer, etc. What I want to know is:
Which one of these do I have to download for my system?
Does MSI matter?
Does x64 mean that the file is going to be 64 bit?
Does installing version 2 or version 3 (I am aware of the differences between version 2 and version 3) change the way that pyscripter runs? | Which version of python do I have to download for pyscripter? | 0 | 0 | 0 | 1,277 |
46,361,173 | 2017-09-22T09:20:00.000 | 0 | 0 | 1 | 0 | python-3.x,py2exe | 46,374,319 | 1 | true | 0 | 0 | @Jeronimo's answer worked correctly.
Wherever you took that command from, it looks like a command-line invocation, or maybe an IPython magic command "%py" that I don't know of. Anyway, try opening a command prompt in your Anaconda folder and run python.exe -m py2exe.build_exe C:/full/path/to/myscript.py. | 1 | 1 | 0 | Using Anaconda 3.5/Jupyter/Spyder.
I have installed Py2exe 0.9.2.2 which supports Python 3.3 and above.
I am interested in creating an executable file from a Python script.
The command py -3.4 -m py2exe.build_exe myscript.py does not work in Jupyter Notebook or in the Anaconda prompt.
I see the python.exe file located in the folder:
C:\Users\me\AppData\Local\Continuum\Anaconda2
Thanks in advance for the help. | py2exe while using anaconda | 1.2 | 0 | 0 | 5,145 |
46,361,494 | 2017-09-22T09:37:00.000 | 20 | 0 | 0 | 0 | python,selenium,selenium-webdriver,local-storage,selenium-chromedriver | 46,361,873 | 4 | true | 0 | 0 | I solved using:
driver.execute_script("return window.localStorage;")
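If you only need one entry rather than the whole store, the same execute_script approach works; "token" here is just a made-up example key:
driver.execute_script("return window.localStorage.getItem(arguments[0]);", "token")  # read one key
driver.execute_script("window.localStorage.setItem(arguments[0], arguments[1]);", "token", "abc123")  # write one key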
EDIT: this is a quick and short answer. See Florent B.'s answer for a more detailed one. | 1 | 31 | 0 | What's the equivalent of:
driver.get_cookies()
to get the localStorage instead of cookies? | How to get the localStorage with Python and Selenium WebDriver | 1.2 | 0 | 1 | 40,399 |
46,362,884 | 2017-09-22T10:46:00.000 | 0 | 0 | 1 | 0 | python-2.7,azure,azure-functions | 46,399,217 | 2 | false | 0 | 0 | In my experience, the time you measured is expected when you install and import packages in Azure Functions.
This is logical, since the files in your package might contain decorators, library calls, inner constants, etc. So it can take a long time if there is module-level code that runs on import, or when the package itself imports a large number of additional libraries.
Furthermore, although the code itself is not executed, the interpreter will analyze the functions in your Azure Function: it transforms the source code into a syntax tree and does some analysis (which variables are local, etc.).
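If you want to see which package is the main culprit, a rough way to time each import yourself (a sketch, not Azure-specific) is:
import importlib
import time

for name in ("numpy", "cv2", "azure", "pydocumentdb"):
    start = time.time()
    importlib.import_module(name)   # same work as a plain `import <name>`
    print(name, "took", round(time.time() - start, 2), "seconds")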
You could also note that a package typically has an __init__.py file that initializes the package. That file is executed as well and can take considerable time. For instance, some packages that open a database connection will already set up that connection at import time, and it can take a while before the database responds. | 1 | 0 | 0 | In my Azure Function, I install and import some packages such as cv2, numpy, azure and pydocumentdb. I calculated the time it takes to import those libraries; it is about 20s.
What are the reasons? Do you have any solution for this? I am using Python 2.7.
Thank you. | Importing Python package to Azure Function takes too long time | 0 | 0 | 0 | 546 |
46,362,927 | 2017-09-22T10:48:00.000 | 2 | 1 | 0 | 0 | python,networking,python-requests | 46,373,450 | 1 | true | 0 | 0 | I can provide two ways to do what you want: a bad way and a clean way.
Let's talk about the bad way:
If your script is using the system's default gateway, then setting up a dedicated route on your system for the destination your script talks to, so that this traffic bypasses the default gateway, may be enough to solve your problem. This is the bad way because not only your script will be affected: all traffic generated from your host to that destination will be affected.
Let's talk about the clean way. I suppose you are running Linux, but you should be able to do the same on any OS whose kernel supports multiple routing tables.
Here is the summary of the steps you need to do:
prepare the second (or third, ...) routing table using the ip command and its 'table table_name' option
create a traffic-selector rule using ip rule with the uidrange option. This option lets you select traffic based on the UID of the user that generates it.
When you have done these steps, you can test your setup by creating a user account whose UID falls within the range selected in the ip rule command and checking the routing from that account.
The last step is to modify your Python script so that it switches to the UID selected in the ip rule before generating traffic, and switches back (or exits) once it has finished.
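A minimal sketch of that last step; the UID 1500 is an invented value that would have to match the uidrange in your ip rule, and the process must start with enough privileges to call setuid:
import os

os.setuid(1500)   # assumed UID covered by the `ip rule ... uidrange` selector
# from here on, sockets opened by this process are routed via the second table
# ... run your download / requests code here ...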
To summarize, the clean way does the same as the bad way, but only for a limited set of user UIDs on your Linux system, by using the kernel's multi-routing-table feature and switching to the selected user inside your script.
The multi-routing-table feature of the kernel is, I think, the only feature able to merge the two systems you describe (your host and its VM) without needing the VM at all. And you can activate it simply by switching from one user to another inside a script written in any language. | 1 | 1 | 0 | I want to simulate a file download via http(s) from within a Python script. The script is used to test a router/modem appliance in my local LAN.
Currently, I have a virtual machine's network configured to use the appliance as default gateway and DNS. The Python script on my test machine SSHes into the VM using the paramiko library. From within the Python script, over this SSH connection, I run wget https://some.websi.te/some_file.zip. That seems like overkill, but I don't want to reconfigure the network on the machine running the test script.
Is there a way to eliminate the extra VM (the one that runs wget via ssh) and still simulate the download? Of course that should run within my python test script on my local machine?
In other words can I get python to use another gateway than the system default from within a script? | Is there a way to make a http request in python using another gateway than the system default? | 1.2 | 0 | 1 | 431 |
46,363,175 | 2017-09-22T11:00:00.000 | 1 | 0 | 1 | 1 | python,windows,virtualenv | 46,363,515 | 2 | false | 0 | 0 | You could run: SET PATH=%PATH%;C:\Python27\ from a command prompt and it will add the python to path temporary (i.e will be gone when the command prompt is closed) | 1 | 0 | 0 | I have to do some work with a company with security restrictions on their computers.
I need to set up a Python 2.7 virtualenv on their Windows 10 machine but can't add Python to the Windows path. I installed Python through the Windows Software Centre. The interpreter is in the usual C:\Python27\python.exe but it is not added to the Windows path. When I run python in CMD it is not recognized, although C:\Python27\python opens the interpreter.
The problem is that to add it to Windows path I need admin privileges. It is simply not possible. I know the obvious answer is to contact admin but again it is not an option.
So the problem is: with this setup I need to install virtualenv, create my whole environment inside it and work in it.
I can't find a way to do it without Python in the path. | working with virtualenv without Python in Windows path | 0.099668 | 0 | 0 | 4,036 |
46,369,458 | 2017-09-22T16:38:00.000 | 1 | 1 | 0 | 1 | python,python-2.7,z3,z3py | 47,165,922 | 2 | true | 0 | 0 | I found out that I have to add the paths everytime I open a new terminal window. Then only z3 can be imported from anywhere. | 1 | 1 | 0 | I tried installing z3 theorem prover.
I am using Ubuntu 16.04.
And I am using Python 2.7.12
I did the installation in two ways:
I used sudo apt-get install z3
But when I tried to import z3 by opening Python from the terminal, using from z3 import * and also import z3 as z, I got an error saying No module named z3
I used
python scripts/mk_make.py
cd build
make
sudo make install
and also added build/python to PYTHONPATH and build to LD_LIBRARY_PATH, but I got the same problem when I tried to import z3 the same way.
Now I tried running examples.py
which is the folder build/python
And lo!!! No Error!!!
I also tried running other example files and I didn't get any error for them too.
Can anybody help me with the problem why I cannot import z3 when I open Python from terminal or any other folder outside of build/python?
EDIT:
I found out that I have to add the folders to the path every time I open a terminal outside of build/python | Cannot import z3 in python-2.7.12 in ubuntu even after installation | 1.2 | 0 | 0 | 1,158 |
46,375,886 | 2017-09-23T03:58:00.000 | 5 | 1 | 1 | 0 | python,batch-file,extendscript | 46,386,083 | 1 | true | 0 | 0 | In After Effects you can use the "system" object's callSystem() method. This gives you access to the system's shell, so you can run any script from your code. So you can write a Python script that echoes or prints the array, and that is essentially what is returned by the system.callSystem() method. It's a synchronous call, so it has to complete before the next line in ExtendScript executes.
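On the Python side, a minimal sketch of such a script could be (the folder argument and comma delimiter are just assumptions):
import os
import sys

folder = sys.argv[1] if len(sys.argv) > 1 else "."
names = os.listdir(folder)
# whatever is printed to stdout becomes the return value of callSystem()
print(",".join(names))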
The actual ExtendScript code might be something like:
var stdOut = system.callSystem("python my-python-script.py") | 1 | 1 | 0 | I am working on an extendscript code (Adobe After Effects - but it is basically just javascript) which needs to iterate over tens of thousands of file names on a server. This is extremely slow in extendscript but I can accomplish what I need to in just a few seconds with python, which is my preferred language anyway. So I would like to run a python file and return an array back into extendscript. I'm able to run my python file and pass an argument (the root folder) by creating and executing a batch file, but how would pass the result (an array) back into extendscript? I suppose I could write out a .csv and read this back in but that seems a bit "hacky". | extendscript return argument from python script | 1.2 | 0 | 0 | 656 |
46,376,713 | 2017-09-23T06:22:00.000 | 0 | 0 | 0 | 0 | android,python-3.x,kivy,2d-games,qpython3 | 61,050,929 | 2 | false | 0 | 1 | Look at using Linux (a different Kernal) distro on your phone. Many will basically act as a pc, just lacking power. I'm sure there is one you can run regular Python with. Then you may also look at renpy for porting it to android, or even kivy but i'm thinking kivy would be very different than anything you have, and require learning a new language basically.
Sadly changing the OS on your phone is the only option I can think of, and probably the best. I cannot imagine any frameworks for android to be as vast as those that have been developed for PC. You may be able to find some hacked, port version of something that may help, a tool something idk. | 1 | 2 | 0 | So, I'm programming in Python 3.2 on android (the app I'm using is QPython3) and I wonder if I could make a game/app with a graphic interface for android, using only a smartphone. Is it possible? If yes, how do I do that? | Pygame/Kivy on Android? | 0 | 0 | 0 | 3,395 |
46,377,331 | 2017-09-23T07:49:00.000 | 0 | 0 | 0 | 0 | python,numpy,matrix | 72,120,529 | 3 | false | 0 | 0 | Can use:
np.linalg.lstsq(x, y)
np.linalg.pinv(X) @ y
LinearRegression().fit(X, y)
Where 1 and 2 are from numpy and 3 is from sklearn.linear_model.
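A short sketch of option 1, already including the column of ones for the bias mentioned in the note below (the data is a toy example):
import numpy as np

X = np.array([[0.0], [1.0], [2.0], [3.0]])     # toy data, one feature
y = np.array([1.0, 3.0, 5.0, 7.0])             # y = 2*x + 1
A = np.hstack([X, np.ones((X.shape[0], 1))])   # append the bias column
coef, residuals, rank, sv = np.linalg.lstsq(A, y, rcond=None)
print(coef)                                    # approximately [2., 1.]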
As a side note, for options 1 and 2 you will need to concatenate a column of ones (use np.ones_like) to represent the bias from the equation y = ax + b | 1 | 8 | 1 | I'm focusing on the special case where A is an n x d matrix (where k < d) representing an orthogonal basis for a subspace of R^d and b is known to be inside the subspace.
I thought of using the tools provided with numpy, however they only work with square matrices. I had the approach of filling in the matrix with some linearly independent vectors to "square" it and then solve, but I could not figure out how to choose those vectors so that they will be linearly independent of the basis vectors, plus I think it's not the only approach and I'm missing something that can make this easier.
Is there indeed a simpler approach than the one I mentioned? If not, how do I choose those vectors that would complete A into a square matrix? | solving Ax =b for a non-square matrix A using python | 0 | 0 | 0 | 23,050 |
46,379,901 | 2017-09-23T13:06:00.000 | 2 | 0 | 1 | 0 | python,python-3.x,math | 46,379,942 | 5 | false | 0 | 0 | Yes, division and multiplication are calculated first, but multiplication isn't performed before division or vice versa. So:
2 + 4/2 * 3 = 2+2*3 = 2+6 = 8
1. ()
2. %, /, *
3. +, - | 4 | 2 | 0 | please can someone explain to me why the expression 2 + 4 / 2 * 3 evaluates to 8.0 and not 2.66?
I thought that multiplication was performed before division, however in this instance it seems that the division operation is being performed before the multiplication. | Python Mathematical Order of Operation | 0.07983 | 0 | 0 | 3,356 |
46,379,901 | 2017-09-23T13:06:00.000 | 2 | 0 | 1 | 0 | python,python-3.x,math | 46,380,082 | 5 | false | 0 | 0 | Python gives multiplication and division the same priority.
As a rule, same priority operations are executed in order from left to right. | 4 | 2 | 0 | please can someone explain to me why the expression 2 + 4 / 2 * 3 evaluates to 8.0 and not 2.66?
I thought that multiplication was performed before division, however in this instance it seems that the division operation is being performed before the multiplication. | Python Mathematical Order of Operation | 0.07983 | 0 | 0 | 3,356 |
46,379,901 | 2017-09-23T13:06:00.000 | 1 | 0 | 1 | 0 | python,python-3.x,math | 69,658,035 | 5 | false | 0 | 0 | The order of python operations follows the same rules. You can remember it using the mnemonic " please excuse my dear aunt Sally." or PEMDAS when executing math. PEMDAS stands for Parenthesis, Exponentiation, Multiplication, Division, Addition, and Subtraction. However, Multiplication and division can have the same precedence but are differentiated by their orders from left to right. Addition and Subtraction also behave in the same way. | 4 | 2 | 0 | please can someone explain to me why the expression 2 + 4 / 2 * 3 evaluates to 8.0 and not 2.66?
I thought that multiplication was performed before division, however in this instance it seems that the division operation is being performed before the multiplication. | Python Mathematical Order of Operation | 0.039979 | 0 | 0 | 3,356 |
46,379,901 | 2017-09-23T13:06:00.000 | 0 | 0 | 1 | 0 | python,python-3.x,math | 46,379,983 | 5 | false | 0 | 0 | Python basic operaters follow the BODMAS rule and due to accordance of that priority of division is higher than multiplication.So it goes this way:
2+(4/2)*3
Now if you want to get 2.66 as your answer it must be like 2+4/(2*3) | 4 | 2 | 0 | please can someone explain to me why the expression 2 + 4 / 2 * 3 evaluates to 8.0 and not 2.66?
I thought that multiplication was performed before division, however in this instance it seems that the division operation is being performed before the multiplication. | Python Mathematical Order of Operation | 0 | 0 | 0 | 3,356 |
46,380,101 | 2017-09-23T13:31:00.000 | 0 | 0 | 0 | 0 | python,oracle,cx-oracle | 46,380,171 | 1 | false | 0 | 0 | This is too long for a comment.
You would need a trigger in the database to correctly implement this functionality. If you try to do it in the application layer, then you will be subject to race conditions in a multi-client environment.
Within Oracle, I would recommend just using an auto-generated column for the primary key. Don't try inserting it yourself. In Oracle 12C, you can define this directly using generated always as. In earlier versions, you need to use a sequence to define the numbers and a trigger to assign them. | 1 | 0 | 0 | can I get some advice, how to make mechanism for inserts, that will check if the values of PK is used?
If it is not used in the table, it will insert row with number. If it is used, it will increment value and check next value if it's used. So on... | cx_oracle PK autoincrementarion | 0 | 1 | 0 | 26 |
46,383,060 | 2017-09-23T18:40:00.000 | 1 | 0 | 1 | 0 | python,png,dds-format | 53,226,414 | 3 | false | 0 | 0 | I found that I can use the command line app in python, so image magic covers all my needs. | 1 | 1 | 0 | Need to convert 50 images, and doing it manually would be too long.
Сan't find a library with such function.
I tried Pillow, but it can't save in dds. | Convert png to dds with python? | 0.066568 | 0 | 0 | 2,474 |
46,383,189 | 2017-09-23T18:53:00.000 | 0 | 0 | 1 | 0 | python,jupyter-notebook | 47,618,302 | 2 | false | 0 | 0 | Simple, just find where the script jupyter-notebook resides, for example ~/.local/bin if you installed it locally.
Then just edit the first line to: #!/usr/bin/python3 will be fine. | 1 | 2 | 0 | I installed jupyter notebook along with anaconda python as the only python on my PC (Windows 10). However I recently installed python 3.6.2 and I wonder if I can somehow add it to jupyter so as I can use them changeably.
I remember having both on my other machine when I installed python first and then after that I installed the whole anaconda package with the jupyter notebook (so I had python 3 and python(conda) option for kernels).
So how can I add it to jupyter? | How do I add a python3 kernel to my jupyter notebook? | 0 | 0 | 0 | 6,917 |
46,383,369 | 2017-09-23T19:15:00.000 | 1 | 0 | 1 | 0 | python,python-3.x,python-module | 46,383,961 | 2 | false | 0 | 0 | Use a container tech.
Docker, for example, gives you the ability to port your code with its dependencies to any machine you want, without having to install anything new on that machine, and it saves a lot of time too. | 1 | 0 | 0 | I want to use a Python module like urllib.request but have all the module's dependencies in a file where I can use them on a computer without having the entire Python installation.
Is there a module or tool I can use to just copy a specific module into a file and it's dependencies without having to go through the entire script and copying it manually. I'm using Python 3. | How to copy a Python module and it's dependencies to a file | 0.099668 | 0 | 1 | 1,569 |
46,384,607 | 2017-09-23T21:52:00.000 | 0 | 0 | 0 | 1 | python,concurrency,gevent,cpython,greenlets | 46,387,137 | 1 | false | 0 | 0 | It's underlying dispatch model, is the event loop in libevent, which uses the event base, which monitors for the different events, and reacts to them accordiningly, then from what I gleamed , it will take the greenlets do some fuckery with semaphores, and then dispatch it onto libevent. | 1 | 0 | 0 | I am trying to understand the way Gevent/Greenlet chooses the next greenlet to be run. Threads use the OS Scheduler. Go Runtime uses 2 hierarchical queues.
By default, Gevent uses libevent for its plumbing. But how does libevent choose the next greenlet to run, if many are ready?
Is it random?
I have already read their docs and had a look at the source code, but I still do not know.
Updated: Text changed to recognize that Gevent uses libevent. The question still applies over libevent. | How do libevent chooses the next Gevent greenlet to run? | 0 | 0 | 0 | 174 |
46,384,630 | 2017-09-23T21:55:00.000 | 2 | 0 | 1 | 0 | python,visual-studio-code | 46,384,705 | 2 | false | 0 | 0 | I am not sure how you try to run this program, but you can just go to menu View → Terminal and type python your_program.py in TERMINAL from the folder where your program is located.
And please check if you have not accidentally installed Visual Studio instead of Visual Studio Code (those are two completely different programs). | 2 | 3 | 0 | I have this very beginner question that I happened to install Visual Studio Code on my Mac, and every time I tried to run a simple Python program on that, it said that I need a workspace to run, so how do I create the workspace? | How can I create a workspace in Visual Studio Code? | 0.197375 | 0 | 0 | 5,816 |
46,384,630 | 2017-09-23T21:55:00.000 | 0 | 0 | 1 | 0 | python,visual-studio-code | 69,560,073 | 2 | false | 0 | 0 | VSCode workspaces are basically just folders. If you open an empty folder in VSCode it will get treated as a workspace, and you can add any scripts you want to it. VSCode will create a new hidden folder in the folder you chose that will hold settings for the workspace. For python, make sure you install the python extension (just grab the one with the most downloads) and follow the instructions there to make sure your python environment is properly configured. If you're using git, you might want to add that hidden folder to the gitignore. | 2 | 3 | 0 | I have this very beginner question that I happened to install Visual Studio Code on my Mac, and every time I tried to run a simple Python program on that, it said that I need a workspace to run, so how do I create the workspace? | How can I create a workspace in Visual Studio Code? | 0 | 0 | 0 | 5,816 |
46,385,250 | 2017-09-23T23:41:00.000 | 1 | 0 | 0 | 0 | python,arrays,numpy | 46,385,537 | 2 | false | 0 | 0 | You can write your own matrix initializer.
Go through array[i][j] and, for each i, j, pick a random number between 0 and 7.
If the number equals either the left element array[i][j-1] or the upper one array[i-1][j], regenerate it.
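A plain-Python sketch of that procedure (not vectorised; numpy could fill the rows the same way):
import random

def init_grid(rows, cols, values=8):
    grid = [[None] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            v = random.randrange(values)
            # redraw while the value collides with the left or upper neighbour
            while (j > 0 and v == grid[i][j - 1]) or (i > 0 and v == grid[i - 1][j]):
                v = random.randrange(values)
            grid[i][j] = v
    return grid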
You have a 2/7 probability of encountering such a bad case, 4/49 of hitting it twice in a row, 8/343 for three in a row, etc.; the probability drops very quickly.
The average case complexity for n elements in a matrix would be O(n). | 1 | 0 | 1 | I have a matrix or a multiple array written in python, each element in the array is an integer ranged from 0 to 7, how would I randomly initalize this matrix or multiple array, so that for each element holds a value, which is different from the values of its 4 neighbours(left,right, top, bottom)? can it be implemented in numpy? | how to init a array with each element holding the value different from its neighbours | 0.099668 | 0 | 0 | 62 |
46,385,689 | 2017-09-24T01:24:00.000 | 0 | 0 | 0 | 0 | python,python-2.7,numpy,scikit-learn,anaconda | 46,430,717 | 1 | false | 0 | 0 | It looks to me like you may have two versions of Python installed. In your original stack trace, you can see that the version of Python that is complaining about scipy is coming from "C:/Python27/". However, your install of Anaconda looks like it's coming from "C:/Users/james/Anaconda2".
I would recommend putting Anaconda's python.exe first in your PATH. | 1 | 2 | 1 | Windows 10
Python 2.7
Anaconda
pip
I am having big problems installing SciKit.
I have tried every installation option I can find.
tried installing with pip and anaconda. It says it is successfully installed but I can't import it to my script - I get error -
Traceback (most recent call last):
File "C:/Python27/trash.py", line 1, in
from sklearn import datasets
File "C:\Python27\lib\site-packages\sklearn__init__.py", line 134, in
from .base import clone
File "C:\Python27\lib\site-packages\sklearn\base.py", line 10, in
from scipy import sparse
ImportError: No module named scipy
I have installed numpy, pandas, ipython, sympy, scipy etc .... everything that any post or forum says is needed. My pc says I already have scipy installed. I was told the easiest option was to do it with Anaconda. Anaconda also says it is all already installed.
///////////////////////////////////////////////////////////////////////
If I try install it with pip install scipy or pip -U install scipy I get this error ---
Command "c:\python27\python.exe -u -c "import setuptools, tokenize;file='c:\users\james\appdata\local\temp\pip-build-g1vohj\scipy\setup.py';f=getattr(tokenize, 'open', open)(file);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, file, 'exec'))" install --record c:\users\james\appdata\local\temp\pip-xjacl_-record\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in c:\users\james\appdata\local\temp\pip-build-g1vohj\scipy\
///////////////////////////////////////////
Anaconda using conda install scipy I get --
(C:\Users\james\Anaconda2) C:\Users\james>conda install scipy Fetching package metadata ........... Solving package specifications: . # All requested packages already installed. # packages in environment at C:\Users\james\Anaconda2: # scipy 0.19.1 np113py27_0
I get the same response when installing all the stuff that is required like numpy.
//////////////////////////////////////////////////////////
I am trying to get started on machine learning but this is just a nightmare.
please help me... | PYTHON: I can't get scipy/sklearn to work. No scipy module | 0 | 0 | 0 | 507 |
46,386,122 | 2017-09-24T02:53:00.000 | 2 | 1 | 1 | 0 | python,unit-testing,testing,procedural-programming | 46,386,323 | 1 | true | 0 | 0 | If you write your code in the form of functions that operate on file objects (streams) or, if the data is small enough, that accept and return strings, you can easily write tests that feed the appropriate data and check the results. If the real data is large enough to need streams, but the test data is not, use the StringIO function in the test code to adapt.
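A minimal sketch of that pattern; the upper_case function is just an invented example of a function that operates on streams:
import io
import unittest

def upper_case(in_stream, out_stream):
    # example function under test: upper-cases every line
    for line in in_stream:
        out_stream.write(line.upper())

class UpperCaseTest(unittest.TestCase):
    def test_upper_case(self):
        src, dst = io.StringIO("hello\n"), io.StringIO()
        upper_case(src, dst)
        self.assertEqual(dst.getvalue(), "HELLO\n")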
Then use the __name__=="__main__" trick to allow your unit test driver to import the file without running the user-facing script. | 1 | 0 | 0 | I'm pretty new to Python. However, I am writing a script that loads some data from a file and generates another file. My script has several functions and it also needs two user inputs (paths) to work.
Now, I am wondering, if there is a way to test each function individually. Since there are no classes, I don't think I can do it with Unit tests, do I?
What is the common way to test a script, if I don't want to run the whole script all the time? Someone else has to maintain the script later. Therefore, something similar to unit tests would be awesome.
Thanks for your inputs! | How can I test a procedural python script properly | 1.2 | 0 | 0 | 293 |
46,386,596 | 2017-09-24T04:28:00.000 | 18 | 1 | 0 | 0 | python,django,django-channels | 53,784,973 | 2 | false | 1 | 0 | I think the one major downside you will find is that the ASGI servers are newer and therefore tested less, may have less features, fewer in number, and probably have a smaller community behind them. However, I use an ASGI server (Daphne) for everything and feel that websockets offer so much in terms of user experience that everything will eventually shift to ASGI.
Being able to use asyncio in your code is a major benefit for web programming. Instead of running 10 queries one after the other and waiting for each one to come back, you can run 10 queries at the same time, while hitting your cache and making a HTTP request simultaneously on a single thread. | 2 | 30 | 0 | What is the explicit and clear disadvantages of using ASGI instead of WSGI for HTTP request handling in Django in general?
I know ASGI is for asynchronous tasks, but it can also handle synchronous HTTP requests via http.* channels. Is it slower than normal WSGI or is there any unsupported features comparing to WSGI?
One more, to provide both REST API and websocket handling in same project, which way do you prefer and why?
WSGI for REST + ASGI for websocket in different server instances
WSGI for REST + ASGI for websocket in same machine
ASGI for both | Disadvantages of using ASGI instead of WSGI | 1 | 0 | 0 | 26,075 |
46,386,596 | 2017-09-24T04:28:00.000 | 9 | 1 | 0 | 0 | python,django,django-channels | 49,812,130 | 2 | false | 1 | 0 | I didn't do any benchmarking but use both WSGI and ASGI in several project and didn't see any sufficient differences between their performance, so if the Django WSGI performance is acceptable for you then ASGI will work too.
For the REST + websockets API I used ASGI for both. There is no reason to use WSGI if you have ASGI enabled in your project (WSGI works over ASGI). | 2 | 30 | 0 | What is the explicit and clear disadvantages of using ASGI instead of WSGI for HTTP request handling in Django in general?
I know ASGI is for asynchronous tasks, but it can also handle synchronous HTTP requests via http.* channels. Is it slower than normal WSGI or is there any unsupported features comparing to WSGI?
One more, to provide both REST API and websocket handling in same project, which way do you prefer and why?
WSGI for REST + ASGI for websocket in different server instances
WSGI for REST + ASGI for websocket in same machine
ASGI for both | Disadvantages of using ASGI instead of WSGI | 1 | 0 | 0 | 26,075 |
46,386,612 | 2017-09-24T04:32:00.000 | 1 | 0 | 1 | 0 | python | 46,386,633 | 1 | true | 0 | 0 | You're correct that, for single list elements, the value or lack thereof is the only difference. However, del has a broader range of applicability: it can remove slices from a list, and it can also destroy variables, attributes, and mapping entries. | 1 | 0 | 0 | Is my understanding correct?
pop() removes and returns the last item in the list, so it can be used to implement a "First In Last Out" structure
by pop(), and “First In First Out” structure by pop(0)
del can also be used to delete specified index’s item.
So, my question is what’s the difference between those two. If those two are the same,
why would Python creator bother to create both? Is there only difference that pop() returns the
remove item while del does not? | the difference between pop() and del keyword | 1.2 | 0 | 0 | 319 |
46,387,317 | 2017-09-24T06:37:00.000 | 0 | 0 | 1 | 0 | python-2.7,pseudocode | 46,387,465 | 3 | false | 0 | 0 | T should be 28. It will loop till m>7 (since n=7) and in each iteration T adds m to itself, since T is 0 initially it is only summing up m after incrementing it by 1 in each iteration.So if you add 1+2+3.....+7 you get 28 and that is when the loop breaks since m is now equal to 8. | 3 | 0 | 0 | Please help me to understand the following code and what will be the possiable output.
What will be the output of the following pseudo code for input 7?
1.Input n
2.Set m = 1, T = 0
3.if (m > n)
Go to step 9
5.else
T = T + m
m = m + 1
8.Go to step 3
9.Print T | What will be the output of the following pseudo code for input 7? | 0 | 0 | 0 | 5,897 |
46,387,317 | 2017-09-24T06:37:00.000 | 0 | 0 | 1 | 0 | python-2.7,pseudocode | 46,387,347 | 3 | false | 0 | 0 | 0
n is less than n so go to step 9 which is print T which is equal to 0 as set in step 2. | 3 | 0 | 0 | Please help me to understand the following code and what will be the possiable output.
What will be the output of the following pseudo code for input 7?
1.Input n
2.Set m = 1, T = 0
3.if (m > n)
Go to step 9
5.else
T = T + m
m = m + 1
8.Go to step 3
9.Print T | What will be the output of the following pseudo code for input 7? | 0 | 0 | 0 | 5,897 |
46,387,317 | 2017-09-24T06:37:00.000 | 0 | 0 | 1 | 0 | python-2.7,pseudocode | 61,960,043 | 3 | false | 0 | 0 | for m = 1 2 3 4 5 6 7 and for 8 m>n will be true and it will go to step 9
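A direct Python translation of the pseudo code:
n = 7
m, T = 1, 0
while m <= n:   # step 3: keep looping until m > n
    T = T + m   # step 6
    m = m + 1   # step 7
print(T)        # step 9: prints 28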
T=(T+M)= 1 3 6 10 15 21 28 basically T is a series where next is added as 2,3,4,5,6,7 to prev number 2 3 4 5 6 7 if one look from other angle | 3 | 0 | 0 | Please help me to understand the following code and what will be the possiable output.
What will be the output of the following pseudo code for input 7?
1.Input n
2.Set m = 1, T = 0
3.if (m > n)
Go to step 9
5.else
T = T + m
m = m + 1
8.Go to step 3
9.Print T | What will be the output of the following pseudo code for input 7? | 0 | 0 | 0 | 5,897 |
46,387,431 | 2017-09-24T06:55:00.000 | 0 | 0 | 0 | 0 | python,tkinter,collections,widget | 46,395,861 | 1 | false | 0 | 1 | I found a couple of packages for pure Python custom widget creation with a little more searching online. One is Python megawidgets, at pmw.sourceforge.net, which, according to their documentation:
"is a toolkit for building high-level compound widgets in Python using the Tkinter module. It consists of a set of base classes and a library of flexible and extensible megawidgets built on this foundation. These megawidgets include notebooks, comboboxes, selection widgets, panes widgets, scrollable widgets, and dialog windows."
A different approach is writing custom widgets yourself using the Widget Construction Kit, at effbot.org/zone/wck.htm. This provides a base Widget class with primitive drawing methods, such as for borders, text, and colors, along with a basic but complete set of event definitions for binding your event handlers to your custom widgets. It has some good advice on doing animated widgets, such as drag and drop.
If anybody knows of any other packages of widgets or construction toolkit APIs, feel free to post it here. Developers will appreciate having a larger selection in a single location. | 1 | 0 | 0 | I'm searching for a tkinter custom widget collection that I can include in a application designer I'm writing in 100% Python but haven't had much luck yet. I figured out a way to do a table for instance, but would like to save myself the work if there's a good implementation out there. | tkinter custom widget collecting | 0 | 0 | 0 | 277 |
46,390,149 | 2017-09-24T12:42:00.000 | 1 | 0 | 0 | 0 | python,csv,heroku,web-scraping,download | 46,390,214 | 1 | true | 1 | 0 | Heroku's filesystem is per-dyno and ephemeral. You must not save things to it for later use.
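A sketch of the S3 alternative described in the next paragraph, using boto3 (the current form of the boto library); the bucket and file names are placeholders:
import boto3

s3 = boto3.client("s3")
# upload the CSV the scraper just wrote; "my-scraper-bucket" is a made-up bucket name
s3.upload_file("results.csv", "my-scraper-bucket", "results.csv")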
One alternative is to write it to somewhere permanent, such as Amazon S3. You can use the boto library for this. Although you do have to pay for S3 storage and data, it's very inexpensive. | 1 | 0 | 0 | I have a python script that scrapes data from different websites then writes some data into CSV files.
I run this every day from my local computer but I would like to make this automatically running on a server and possibly for free.
I tried PythonAnywhere but it looks that their whitelist stops me from scraping bloomberg.com.
I then moved to Heroku and deployed my worker (the python script). Everything seems to work, but looking with Heroku bash at the directory where the python script is supposed to write the CSV files, nothing appears.
I also realised that I have no idea of how I would download those CSV files in case those were written.
I am wondering if I can actually achieve what I am trying to achieve with Heroku or if the only way to get a python script to work on a server is by paying for PythonAnywhere and avoid scraping restriction? | Can I use Heroku to Scrape Data to later Download? | 1.2 | 0 | 1 | 484 |
46,392,625 | 2017-09-24T17:04:00.000 | 0 | 0 | 0 | 0 | python,random | 46,392,727 | 5 | false | 0 | 0 | If you need random integer values between 0 and c use random.randint(0, c). For random floating point values between 0 anc c use random.uniform(0, c). | 2 | 1 | 1 | I happen to have a list y = [y1, y2, ... yn]. I need to generate random numbers ai such that 0<=ai<=c (for some constant c) and sum of all ai*yi = 0. Can anyone help me out on how to code this in python? Even pseudocode/logic works, I am having a conceptual problem here. While the first constraint is easy to satisfy, I cannot get anything for the second constraint.
EDIT: Take all yi = +1 or -1, if that helps, but a general solution to this would be interesting. | Generating Random Numbers Under some constraints | 0 | 0 | 0 | 1,846 |
46,392,625 | 2017-09-24T17:04:00.000 | 0 | 0 | 0 | 0 | python,random | 46,440,005 | 5 | false | 0 | 0 | I like splitting this problem up. Note that there must be some positive and some negative values of y (otherwise sum(ai*yi) can't equal zero).
Generate random positive coefficients ai for the negative values of y, and construct the sum of ai*yi over only the negative values of y (let's say this sum is -R).
Assuming there are "m" remaining positive values, choose random numbers for the first m-1 of the ai coefficients for positive yi values, according to ai = uniform(0, R/(m*max(y))).
Use your constraint to determine am = (R-sum(aiyi | yi> 0))/ym. Notice that, by construction, all ai are positive and the sum of aiyi = 0.
Also note that multiplying all ai by the same amount k will also satisfy the constraint. Therefore, find the largest ai (let's call it amax), and if amax is greater than c, multiply all values of ai by c/(amax + epsilon), where epsilon is any number greater than zero.
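A rough Python translation of those steps; it assumes y contains at least one positive and one negative value and ignores zeros:
import random

def random_coeffs(y, c):
    neg = [i for i, v in enumerate(y) if v < 0]
    pos = [i for i, v in enumerate(y) if v > 0]
    a = [0.0] * len(y)
    for i in neg:                                  # free choice on the negative side
        a[i] = random.uniform(0, c)
    R = -sum(a[i] * y[i] for i in neg)             # sum over negatives is -R
    m = len(pos)
    for i in pos[:-1]:                             # bounded choice for m-1 positives
        a[i] = random.uniform(0, R / (m * max(y)))
    a[pos[-1]] = (R - sum(a[i] * y[i] for i in pos[:-1])) / y[pos[-1]]
    amax = max(a)
    if amax > c:                                   # rescale everything into [0, c]
        a = [v * c / (amax + 1e-12) for v in a]
    return a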
Did I miss anything? | 2 | 1 | 1 | I happen to have a list y = [y1, y2, ... yn]. I need to generate random numbers ai such that 0<=ai<=c (for some constant c) and sum of all ai*yi = 0. Can anyone help me out on how to code this in python? Even pseudocode/logic works, I am having a conceptual problem here. While the first constraint is easy to satisfy, I cannot get anything for the second constraint.
EDIT: Take all yi = +1 or -1, if that helps, but a general solution to this would be interesting. | Generating Random Numbers Under some constraints | 0 | 0 | 0 | 1,846 |
46,394,539 | 2017-09-24T20:29:00.000 | 0 | 0 | 1 | 0 | python-3.x,random | 46,394,799 | 2 | false | 0 | 0 | What part of MWE is responsible for Heads apperaring on the output? | 1 | 0 | 0 | I am partitioning a random number to be in one of 2 cases to simulate a roll of a die. The problem is that sometimes, there is more than one step per loop. Please see the MWE below:
count = 0
n = random.random()
while count = 1/2:
n = random.random() # generate a new random number
print(" Tails")
count = count + 1
Output
Count = 0
Heads
Tails
Count = 1
Heads
Count = 2
Heads
Tails
Count = 3
Heads
Tails
Count = 4
Heads
Count = 5
Heads
Count = 6
Heads
Tails
Count = 7
Tails
Count = 8
Tails
Count = 9
Tails
Count = 10
Tails | Multiple if statements to categorize a random number in a partitioned interval | 0 | 0 | 0 | 18 |
46,394,659 | 2017-09-24T20:42:00.000 | 0 | 0 | 0 | 0 | python,tensorflow | 68,513,454 | 2 | false | 0 | 0 | I had the similar problem. There are two ways you can try: 1. After the output layer, add an extra layer to produce the histogram; 2. Use something like tf.RegisterGradient or tf.custom_gradient to define your own gradients for the operations. | 1 | 3 | 1 | I want to use the histogram of the output from a CNN to compute the loss. I am wondering whether tf.histogram_fixed_width() supports the gradient to flow back to its former layer. Only it works, I can add a loss layer after calculating the histogram. | Does tf.histogram_fixed_width() support back propagation? | 0 | 0 | 0 | 670 |
46,394,954 | 2017-09-24T21:16:00.000 | 3 | 1 | 1 | 0 | python,mp3,music21 | 46,396,673 | 3 | false | 0 | 0 | There are ways of doing this in music21 (audioSearch module) but it's more of a proof of concept and not for production work. There are much better software packages for analyzing audio (try sonic visualizer or jMIR or a commercial package). Music21's strength is in working with scores. | 3 | 3 | 0 | I am looking for python library to find out a key and tempo of the song recorded in MP3 format. I've found the music21 lib that allows doing that. But it seems like it works only with midi files.
Does somebody know how to parse MP3 files using music21 and get the required sound characteristics? If it is impossible, please suggest another library. | Is it possible to analyze mp3 file using music21? | 0.197375 | 0 | 0 | 995 |
46,394,954 | 2017-09-24T21:16:00.000 | 4 | 1 | 1 | 0 | python,mp3,music21 | 46,395,044 | 3 | true | 0 | 0 | No, this is not possible. Music21 can only process data stored in musical notation data formats, like MIDI, MusicXML, and ABC.
Converting a MP3 audio file to notation is a complex task, and isn't something that software can reliably accomplish at this point. | 3 | 3 | 0 | I am looking for python library to find out a key and tempo of the song recorded in MP3 format. I've found the music21 lib that allows doing that. But it seems like it works only with midi files.
Does somebody know how to parse MP3 files using music21 and get the required sound characteristics? If it is impossible, please suggest another library. | Is it possible to analyze mp3 file using music21? | 1.2 | 0 | 0 | 995 |
46,394,954 | 2017-09-24T21:16:00.000 | 1 | 1 | 1 | 0 | python,mp3,music21 | 46,417,595 | 3 | false | 0 | 0 | Check out librosa. It can read mp3s and give some basic info such as tempo. | 3 | 3 | 0 | I am looking for python library to find out a key and tempo of the song recorded in MP3 format. I've found the music21 lib that allows doing that. But it seems like it works only with midi files.
Does somebody know how to parse MP3 files using music21 and get the required sound characteristics? If it is impossible, please suggest another library. | Is it possible to analyze mp3 file using music21? | 0.066568 | 0 | 0 | 995 |
46,395,323 | 2017-09-24T22:06:00.000 | 0 | 1 | 1 | 0 | python | 46,398,934 | 1 | false | 0 | 0 | If you understood what Big O notation means, you should be able to "measure the running times" of increasingly longer input.
Try with input size 10, 100, 1000, 10000, ... and plot the result. This is a fine approximation of the behaviour of your function.
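For example, a rough timing harness (the membership test is just an arbitrary list operation to measure):
import time

def timings(sizes=(10, 100, 1000, 10000, 100000)):
    for n in sizes:
        data = list(range(n))
        start = time.perf_counter()
        (n - 1) in data                  # worst-case membership test on a list
        print(n, time.perf_counter() - start)

timings()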
YOu should make friend with pytho's time module :) | 1 | 1 | 0 | Is there a way to check the complexity or performance (i.e. Big O notation) of python data structure's methods offline? | Python data structure's complexity/performance check | 0 | 0 | 0 | 58 |
46,397,244 | 2017-09-25T03:36:00.000 | 0 | 0 | 0 | 0 | python,iphone,mdm | 46,548,859 | 1 | true | 1 | 0 | Did you remove the Profile from the Device settings or through the Apple Configurator?
Many times, even if you remove the profile from the device settings, it is still there on the device.
You can see if its still there in the Device using Apple Configurator and try removing it from there. | 1 | 0 | 0 | I have a strange problem at my iphone,
when I remove the mdm profile,
the camera always be disabled, it's not come back , the mdm profile is only one , how to reset it? | remove the profile but the camera always be disabled | 1.2 | 0 | 0 | 27 |
46,397,373 | 2017-09-25T03:55:00.000 | 0 | 0 | 0 | 1 | python,airflow | 46,435,566 | 2 | false | 0 | 0 | In the UI, when you clear a instance task, the downstream case is checked by default.
If you unchecked it, it will clear only this one and not re-run the downstream tasks | 1 | 2 | 0 | I have tasks A -> B -> C in Airflow and when I run the DAG and all complete with success, I'd like to be able to clear B alone (while leaving C marked as success). B clears and gets put into the 'no_status' state but then when I try to re-run B, nothing happens. I've tried --ignore_dependencies, --ignore_depends_on_past and --force but to no avail. B seems to only re-run if C is also cleared and then everything re-runs as expected.
The reason why I'd like to be able re-run B specifically without changing the pipeline is that some of B's external inputs may change slightly (file changed, or tweak) and I'd like to run it and evaluate it's output before restarting the downstream tasks (to mitigate any potential interruption). | Airflow force re-run of upstream task when cleared even though downstream if marked success | 0 | 0 | 0 | 2,112 |
46,398,738 | 2017-09-25T06:19:00.000 | -1 | 1 | 0 | 0 | python,ssh,paramiko | 46,398,816 | 2 | false | 0 | 1 | Whenever you have ssh to server 1 port is block from both the side, if you have connection when client starts then it will block that port and no one could communicate to server , also you increasing server load just keeping the connection open.
So, my advice is to start ssh whenever the need is and stop once task is completed. | 1 | 1 | 0 | I am working on a client application with python. GUI is created with PyQt.
Basically, the application connects to the server via ssh and retrieves information thereby reading files generated by the server software. I am using paramko module.
My question is:
Should I open an ssh connectivity right when the client application is started and keep until it quits? Or I should create a new ssh connectivity whenever a button in client app triggers information retrieval?
How would it affect the performance?
Any suggestion and reference would be highly appreciated. | Client application's connectivity with server | -0.099668 | 0 | 0 | 64 |
46,401,486 | 2017-09-25T09:09:00.000 | 6 | 0 | 1 | 0 | python,python-3.x,data-structures,linked-list,xor | 46,404,964 | 1 | false | 0 | 0 | You can't build an XOR linked list in Python, since Python doesn't let you mess with the bits in pointers.
You don't want to implement that anyway -- it's a dirty trick that makes your code hard to understand for little benefit.
If you're worried about memory, it's almost always better to use a doubly-linked list with more than 1 element per node, like a linked list of arrays.
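A bare-bones sketch of one node of such a list (16 items per node is just the figure used in the next paragraph):
class Block:
    """One node of an array-backed ("unrolled") doubly linked list."""
    __slots__ = ("items", "prev", "next")

    def __init__(self):
        self.items = []      # holds up to 16 payload items
        self.prev = None     # previous Block, or None
        self.next = None     # next Block, or None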
For example, while an XOR linked list costs 1 pointer per item, plus the item itself, a doubly-linked list with 16 items per node costs 3 pointers for each 16 items, or 3/16 of a pointer per item (the extra pointer is the cost of the integer that records how many items are in the node). That is less than 1. In Python there are additional overheads, but it still works out better.
In addition to the memory savings, you get advantages in locality because all 16 items in the node are next to each other in memory. Algorithms that iterate through the list will be faster.
Note that an XOR-linked list also requires you to allocate or free memory each time you add or delete a node, and that is an expensive operation. With the array-linked list, you can do better than this by allowing nodes to be less than completely full. If you allow 5 empty item slots, for example, then you only have allocate or free memory on every 3rd insert or delete at worst. | 1 | 2 | 0 | Given that python objects are only a reference to the actual memory Objects and
the memory address of objects cannot be retrieved.
Is it possible to implement an XOR linked list in Python ? if yes how ? | How to implement an XOR Linked List in Python? | 1 | 0 | 0 | 2,397 |
46,404,243 | 2017-09-25T11:35:00.000 | 0 | 0 | 0 | 0 | python,numpy,pyspark,spark-streaming | 46,404,352 | 2 | false | 0 | 0 | Pyspark is used to run program/code/algorithm in spark, which are coded in python language.
For machine learning, Spark has the MLlib library packages.
For streaming purposes, Spark has the Spark Streaming packages.
You can explore Storm as well for real time streaming. | 2 | 0 | 1 | I am new to spark and have to write a streaming application that has to perform tasks like fast fourier transformations and some machine learning stuff like classification/regression with svms etc. I want to do this in pyspark, because of python's huge variety of modules like numpy, scikit-learn etc. My question is, is it possible to do such stuff in a streaming application? As far as I know, spark uses dstreams. Are these streams convertible to something like numpy arrays or anything similar that can serve as an input for python functions?
Thx | Is pyspark streaming suitable for machine learning/ scientific computing? | 0 | 0 | 0 | 189 |
46,404,243 | 2017-09-25T11:35:00.000 | 0 | 0 | 0 | 0 | python,numpy,pyspark,spark-streaming | 46,480,253 | 2 | false | 0 | 0 | Machine learning is process of learn from data. First you train your model and then use that on top of data stream.
Data can be processed in mini-batches, micro-batches or even in real time, depending on how much data is generated in a given amount of time.
Flume and Kafka are used to fetch data on real time and store on HDFS or can be fed to Spark with Spark streaming pointing to flume sink. | 2 | 0 | 1 | I am new to spark and have to write a streaming application that has to perform tasks like fast fourier transformations and some machine learning stuff like classification/regression with svms etc. I want to do this in pyspark, because of python's huge variety of modules like numpy, scikit-learn etc. My question is, is it possible to do such stuff in a streaming application? As far as I know, spark uses dstreams. Are these streams convertible to something like numpy arrays or anything similar that can serve as an input for python functions?
Thx | Is pyspark streaming suitable for machine learning/ scientific computing? | 0 | 0 | 0 | 189 |
46,405,832 | 2017-09-25T12:56:00.000 | 1 | 0 | 1 | 0 | tensorflow,python-3.5,python-3.6 | 46,405,924 | 1 | true | 0 | 0 | It all depends on what code you used, and if the syntax was changed in later versions. For example, if your version of Python uses print "Hello World!" and the user's version is print("Hello World"), then you would have to change it to the later versions specification. | 1 | 0 | 0 | I've written a program on the python version 3.5.2, because i need a 64 bit version of python for my tensorflow-gpu library.
It's also possible to use the normal tensorflow library, which doesn't require a 64-bit Python, but in my case I wanted to use my GPU.
My question is: If some users have a higher version installed (of python) and use the normal tensorflow library, will they still be able to execute it?
Fabian | Python - program written in Python 3.5.2 also executable on higher python versions? | 1.2 | 0 | 0 | 54 |
46,406,527 | 2017-09-25T13:32:00.000 | 0 | 0 | 0 | 0 | python,rgb,pca | 53,309,575 | 1 | false | 0 | 0 | Separate three channels i.e., Red, Blue, Green and apply PCA on each. After applying PCA on each channel again join them. | 1 | 0 | 1 | I'm trying to reduce dimension of RGB images using PCA on python. But it seems to me that all codes I found only work on a greyscale image. Is there any way to do PCA on RGB image using any python library like sklearn or opencv?
Thanks | PCA for RGB image in python | 0 | 0 | 0 | 1,645 |
46,410,009 | 2017-09-25T16:32:00.000 | 2 | 0 | 0 | 0 | sql-server,python-3.x,flask,pymssql | 46,436,613 | 1 | false | 0 | 0 | Well, our answer was to switch to pyodbc. A few utility functions made it more or less a cut-and-paste with a few gotchas here and there, but pymssql has been increasingly difficult to build, upgrade, and use for the last few years. | 1 | 2 | 0 | We've had a Flask application using pymssql running for 1.5 years under Python 2.7 and SQL Server 2012. We moved the application to a new set of servers and upgraded the Flask app to Python 3.6 and a new database server to SQL Server 2016. They're both Windows servers.
Since then, we've been getting intermittent 20017 errors:
pymssql.OperationalError(20017, b'DB-Lib error message 20017, severity 9:\nUnexpected EOF from the server (xx.xx.xx.xx:1433)\nDB-Lib error message 20002, severity 9:\nAdaptive Server connection failed (xx.xx.xx.xx:1433)\n')
Only a small percentage of the calls return this, but enough to be causing problems. I can provide specific versions of everything we're running.
One solution proposed is to switch to pyodbc, but we have hundreds of queries and stored procedure calls, many with UUIDs, which pyodbc doesn't handle nearly as cleanly as pymssql.
I've installed pymssql via a precompiled wheel (pymssql-2.1.3-cp36-cp36m-win_amd64) because pip can't build it without an older version.
Any ideas on debugging or fixing this would be helpful. | Pymssql Error 20017 after upgrading to Python 3.6 and SQL Server 2016 | 0.379949 | 1 | 0 | 933 |
46,410,780 | 2017-09-25T17:20:00.000 | 0 | 1 | 0 | 0 | python,python-2.7,raspberry-pi,raspberry-pi3 | 46,410,824 | 1 | false | 0 | 0 | The only way i know to do this is to from turtle import *, which will import the turtle graphics module, which will allow you to use listen(), onkey(), and more... the syntax for onkey is onkey("FUNCTION", "KEY") | 1 | 0 | 0 | I would like to do a script that would be able to read the taps from another keyboard. That script would run at the start, then I couldn't use raw_input at its own.
Thank you in advance | How to do a script to read keyboard taps | 0 | 0 | 0 | 28 |
46,413,323 | 2017-09-25T20:04:00.000 | 1 | 0 | 1 | 0 | python,string,datetime | 46,413,391 | 3 | true | 0 | 0 | In that particular format, yes. More generally, any format in which bigger units appear first (e.g. years before months) and in which numbers are always the same length by padding with zeroes on the left is safe. | 2 | 1 | 0 | I understand when comparing two strings via certain operators like ==, !=, >, <, etc.. python uses the ASCII values of the strings under the hood.
My question is, is it safe to compare the ASCII values of dates rather than converting the object to a datetime object in python?
For instance , u'2017-01-01' > u'2016-12-01' = True | Comparing strings in python which contain dates | 1.2 | 0 | 0 | 72 |
46,413,323 | 2017-09-25T20:04:00.000 | 3 | 0 | 1 | 0 | python,string,datetime | 46,413,371 | 3 | false | 0 | 0 | with 2016-12-01 (year+zero-padded month+zero-padded day), you've picked a format where lexicographical order is the same as chronological order.
The most important data comes first (the year), then the month, then the day. It is not possible for an earlier date to sort after a later one, because of that property (the zero padding is very important here).
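A quick check of that property:
from datetime import date

print(u'2017-01-01' > u'2016-12-01')              # True (plain string comparison)
print(date(2017, 1, 1) > date(2016, 12, 1))       # True (real date comparison agrees)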
So in that case, comparing lexicographically is safe. | 2 | 1 | 0 | I understand when comparing two strings via certain operators like ==, !=, >, <, etc.. python uses the ASCII values of the strings under the hood.
My question is, is it safe to compare the ASCII values of dates rather than converting the object to a datetime object in python?
For instance , u'2017-01-01' > u'2016-12-01' = True | Comparing strings in python which contain dates | 0.197375 | 0 | 0 | 72 |
46,416,052 | 2017-09-26T00:25:00.000 | 20 | 0 | 0 | 1 | python,bash,pipenv | 46,416,282 | 1 | true | 0 | 0 | Aliases are never inherited. .bash_profile is only sourced for login shells, and pipenv apparently creates a nonlogin interactive shell. Aliases should be defined in .bashrc, and on a Mac (where terminal emulators starts login shells by default), add [[ -f ~/.bashrc ]] && source ~/.bashrc to the end of your .bash_profile. | 1 | 14 | 0 | I have a core set of bash aliases defined in my .bash_profile (Mac). But when I activate a pipenv with pipenv shell, my aliases don't work and the bash alias command returns nothing.
Is there a configuration step needed to spawn pipenv shells that inherit bash aliases from the parent shell? | pipenv and bash aliases | 1.2 | 0 | 0 | 3,809 |
46,418,373 | 2017-09-26T05:21:00.000 | 2 | 0 | 0 | 0 | python,tensorflow,deep-learning,keras,keras-2 | 46,418,511 | 2 | true | 0 | 0 | I would use Repeat to add one element and implement the interpolation as a new lambda layer. I don't think there's an existing layer for this in keras. | 1 | 1 | 1 | I want to resize a tensor (between layers) of size say (None, 2, 7, 512) to (None, 2, 8, 512), by interpolating it (say using nearest neighbor), similar to this function tf.image.resize_nearest_neighbor available in Tensorflow.
Is there any way to do that?
I tried directly using the Tensorflow function tf.image.resize_nearest_neighbor and the pass the tensors to the next Keras layer, but with the next layer this error was thrown:
AttributeError: 'Tensor' object has no attribute '_keras_history'
I believe this is due to some attributes that are missing in Tensorflow tensors, which makes sense as the layer expects Keras tensors to be passed. | How to resize (interpolate) a tensor in Keras? | 1.2 | 0 | 0 | 5,492 |
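A minimal sketch (assuming Keras 2 with the TensorFlow backend; the target size (2, 8) matches the shapes in the question) of wrapping the resize in a Lambda layer, which keeps the Keras metadata such as _keras_history intact:
import tensorflow as tf
from keras.layers import Input, Lambda
from keras.models import Model

inp = Input(shape=(2, 7, 512))
# nearest-neighbour resize wrapped so the output is a proper Keras tensor
resized = Lambda(lambda x: tf.image.resize_nearest_neighbor(x, (2, 8)))(inp)
model = Model(inp, resized)   # output shape: (None, 2, 8, 512)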
46,420,245 | 2017-09-26T07:29:00.000 | 3 | 1 | 1 | 0 | python | 46,420,296 | 2 | true | 0 | 0 | At the end of the day, you need to store the score in one type of database or the other, whether it's a file-based database or relational or any other. For one execution of your code, you can certainly keep it in the RAM, but for persistence, there's no way around it. If your use case is simple, you may consider sqlite instead of explicit file-based storage. | 2 | 1 | 0 | I have recently made a game with python that makes the user decipher an anagram and based on difficulty it increases their score. Is there any way to implement a high score into this without the use of a text file?
The overall goal is for the program to compare the users score to the high score and if the score is greater it would edit the high score to the score gained. This would need to stay like that for the next run through of the program after I close it. | Python: is it possible to set a high score without a text file? | 1.2 | 0 | 0 | 52 |
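A minimal sqlite3 sketch of the idea in the accepted answer (the file name scores.db, the table layout and new_score are made up for illustration):
import sqlite3

new_score = 42                             # example score produced by the game
conn = sqlite3.connect("scores.db")        # persists between runs, no text file
conn.execute("CREATE TABLE IF NOT EXISTS highscore (score INTEGER)")
best = conn.execute("SELECT MAX(score) FROM highscore").fetchone()[0] or 0
if new_score > best:
    conn.execute("INSERT INTO highscore VALUES (?)", (new_score,))
    conn.commit()
conn.close()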
46,420,245 | 2017-09-26T07:29:00.000 | 0 | 1 | 1 | 0 | python | 46,421,366 | 2 | false | 0 | 0 | Don't forget: there's always HKCU and winreg module from stdlib. It can be useful. It's well documented. | 2 | 1 | 0 | I have recently made a game with python that makes the user decipher an anagram and based on difficulty it increases their score. Is there any way to implement a high score into this without the use of a text file?
The overall goal is for the program to compare the users score to the high score and if the score is greater it would edit the high score to the score gained. This would need to stay like that for the next run through of the program after I close it. | Python: is it possible to set a high score without a text file? | 0 | 0 | 0 | 52 |
46,421,437 | 2017-09-26T08:29:00.000 | 3 | 0 | 0 | 0 | django,python-3.x,django-rest-framework,django-serializer | 46,424,881 | 1 | true | 1 | 0 | That's because you are using the browsable API.
JSON renderer will only call it once.
Browsable API needs several calls:
for the result itself
for the raw data tab when you can modify a resource through PUT
for the raw data tab when you can modify a resource through PATCH
for the HTML form tab | 1 | 0 | 0 | Lets say I have a model called Thingy, and there are 20 Thingies in my database. When I retrieve all Thingies, serializer.to_represenatation() is executed 20 times. This is good.
However, when I retrieve just a single Thingy from /api/thingies/1, I observe that serializer.to_representation() is executed four (4!!!) times.
Why does this happen, and how can I get away with just one call to to_representation()? | Why does retrieving a single resource execute serializer.to_representation() multiple times in Django REST framework? | 1.2 | 0 | 0 | 229 |
46,422,692 | 2017-09-26T09:29:00.000 | 2 | 1 | 1 | 0 | python,c++,json,serialization,floating-point | 46,424,992 | 2 | true | 0 | 0 | You can use float.hex in python to get the hexadecimal representation of your number, then read it using the std::hexfloat stream manipulator in C++. | 1 | 1 | 0 | I have a bunch of 32-bit floating point values in a Python script, need to store them to disc and load them in a C++ tool.
Currently they are written in human-readable format. However the loss of precision is too big for my (numeric) application.
How do I best store (and load) them without loss? | serializing float32 values in python and deserializing them in C++ | 1.2 | 0 | 0 | 393 |
46,424,912 | 2017-09-26T11:13:00.000 | 0 | 0 | 0 | 0 | python,tensorflow,lstm,embedding,rnn | 46,461,915 | 1 | false | 0 | 0 | Alright! So, I have found the answer to the question.
The main source of confusion was in the dimensions
[output_size x num_decoder_symbols] of the W matrix itself.
The output_size here doesn't refer to the output_size that you want, but is the output_size (same as the size of the hidden vector) of the LSTM cell. Thus the matrix multiplication u x W will result in a vector of size num_decoder_symbols that can be considered as the logits for the output symbols. | 1 | 1 | 1 | The official documentation for tf.contrib.legacy_seq2seq.embedding_rnn_seq2seq has the following explanation for the output_projection argument:
output_projection: None or a pair (W, B) of output projection weights and biases; W has shape [output_size x num_decoder_symbols] and B has shape [num_decoder_symbols]; if provided and feed_previous=True, each fed previous output will first be multiplied by W and added B.
I don't understand why the B argument should have the size of [num_decoder_symbols]? Since the output is first multiplied by W and then the biases are added, Shouldn't it be [output_size]? | tf.contrib.legacy_seq2seq.embedding_rnn_seq2seq output projection | 0 | 0 | 0 | 141 |
46,425,678 | 2017-09-26T11:46:00.000 | 0 | 0 | 0 | 0 | python-2.7,odoo-10 | 46,458,232 | 1 | false | 1 | 0 | I finally found the problem: it was due to an indentation error.
When I added the field, I used a line break in the editor so that it would be at the same level as the other fields. But the editor automatically used a tab, while the other fields were indented with spaces.
It was that simple, but gave me a headache. | 1 | 0 | 0 | Odoo10 / Windows
I installed a module "my_app" in Odoo10 and want to change the model app_model.py, following these steps:
add field to app_model.py and save it
restart the Odoo server from Windows "services"
Activate developer mode
Update the application list
Upgrade the application
The problem is that when I open my app model in the configuration menu => Database structure => Models, the added field doesn't appear.
I tried to uninstall the app (to install it later with the new model), but Odoo didn't remove the app_model from the list of models (after updating the app list, restarting the service and even restarting the server).
Did I miss something? | Odoo10 not updating model | 0 | 0 | 0 | 554
46,426,875 | 2017-09-26T12:41:00.000 | 6 | 0 | 1 | 0 | python,pandas,dataframe | 57,285,250 | 2 | false | 0 | 0 | iat and at gives only a single value output, while iloc and loc can give multiple row output. Example: iloc[1:2,5:8] is valid but iat[1:2,5:8] will throw error | 1 | 13 | 1 | I've recently noticed that a function where I iterate over a DataFrame rows using .iloc is very slow. I found out that there's a faster method called .iat, that's said to be equivalent to .iloc. I tried it and it cut the run time down by about 75%.
But I'm a little hesitant: why is there an "equivalent" method that's faster? There must be some difference between the inner workings of these two and a reason why they both exist and not just the faster one. I've tried looking everywhere but even the pandas documentation just states that
DataFrame.iat
Fast integer location scalar accessor.
Similarly to iloc, iat provides integer based lookups. You can also set using these indexers.
And that doesn't help.
Are there limits to using .iat? Why is it faster; is it sloppier? Or do I just switch to using .iat and happily forget .iloc ever existed? | Difference between pandas .iloc and .iat? | 1 | 0 | 0 | 5,846
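A small illustration of the difference described above (the DataFrame contents are arbitrary): .iat only accepts a single scalar position, while .iloc also accepts slices.
import pandas as pd

df = pd.DataFrame([[1, 4], [2, 5], [3, 6]], columns=["a", "b"])
print(df.iat[0, 1])      # 4 -- scalar lookup, fast path
print(df.iloc[0, 1])     # 4 -- same value, more general machinery
print(df.iloc[0:2, 1])   # a Series of two rows; df.iat[0:2, 1] would raise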
46,427,830 | 2017-09-26T13:25:00.000 | 0 | 0 | 1 | 0 | python,jupyter-notebook,jupyter | 46,428,961 | 2 | false | 0 | 0 | You could try which jupyter to see where it's installed and then change the permission on that. | 1 | 0 | 0 | I am unable to create jupyter notebook in Ubuntu. I checked others answers, they were saying to change the permission setting of '/home/ubuntu/.local/share/jupyter'. But, there is no '/home/ubuntu/.local/share/jupyter' with my installation. My Jupyter version is 5.0.
What shall I do? | Unable to create Jupyter notebook in Ubuntu | 0 | 0 | 0 | 54 |
46,428,168 | 2017-09-26T13:42:00.000 | 0 | 0 | 0 | 0 | python,excel,openpyxl | 55,336,278 | 2 | false | 0 | 0 | I had the same issue and found that while i was getting reasonable times initially (opening and closing was taking maybe 2-3 seconds), this suddenly increased to over a minute. I had introduced logging, so thought that may have been the cause, but after commenting this out, there was still a long delay
I copied the data from the Excel spreadsheet and just saved to a new excel spreadsheet which fixed it for me. Seems like it must have got corrupted somehow.
Note - saving the same filename as another filename didn't work; neither did saving the same filename on a local drive. | 1 | 0 | 0 | I'm using openpyxl for Python 2.7 to open and then modify an existing .xlsx file. This Excel file has about 2500 columns and just 10 rows. The problem is openpyxl takes too long to load the file (almost 1 minute). Is there any way to speed up the loading process of openpyxl? From other threads I found some tips with read_only and write_only, but I have to read and write the Excel file at the same time, so I can't apply those tips. Does anyone have any suggestion? Thank you very much | Openpyxl loading existing excel takes too long | 0 | 1 | 0 | 1,358
46,431,306 | 2017-09-26T16:12:00.000 | 0 | 0 | 1 | 1 | python,macos,python-2.7,pip,python-2.x | 54,506,145 | 2 | false | 0 | 0 | It's worth mentioning (for Windows Users), once you have multiple versions of Python installed, you can easily manage packages for each specific version by calling pip<major>.<minor> from a cmd window.
for example, I have Python 2.7, 3.6 and 3.7 currently installed, and I can manage my packages for each installation using pip2.7, pip3.6 and pip3.7 respectively ...
On Windows 10, $ pip3.7 install <module> works for me - haven't tested it with venv instances yet though | 1 | 1 | 0 | Currently, I am running two versions of python on Mac. The native one (2.7.10) (/usr/bin/python), and another one, which has been downloaded via home-brew (2.7.14).
I want to download two versions of pip and download packages depending on the python version I want to use.
Is this possible? | How to install pip associated to different versions of Python | 0 | 0 | 0 | 2,371 |
46,432,544 | 2017-09-26T17:29:00.000 | 3 | 0 | 1 | 0 | python,automation,pywinauto | 46,433,055 | 2 | true | 0 | 0 | You can add found_index=0 or other index to the window specification object. This is the first way to disambiguate the search.
Also there are methods .children() and .descendants() with additional params like control_type or title (as I remember title should work), but some window specification params are not supported in these methods. | 1 | 1 | 0 | I'm using a WPF application that has custom stack panel, which is basically a list. The item in the list is exactly the same so I'm not able to select a specific text to uniquely identify the elements. And some other values such as time are dynamic.
Is there a way for me to get the list of elements returned? I know it's possible because an error was thrown: ElementAmbiguousError states the count.
If I could do that, then from that list I can use the index and validate what I need. | Pywinauto how do I get the list of returned elements | 1.2 | 0 | 0 | 6,545 |
46,436,522 | 2017-09-26T22:04:00.000 | 1 | 0 | 1 | 0 | python,pillow | 58,201,396 | 3 | false | 0 | 0 | You can use
from PIL import Image
testimage = Image.open(filename)   # filename is the path to one multi-page .tiff file
print(testimage.n_frames)          # the number of pages (frames) in that file
Blockquote | 1 | 0 | 0 | I was wondering if it is possible in Python to create a program that is able to take the number of pages inside .tiff files, then output exactly how many pages it is all together. I'm new to Python, but want to try writing code that can do this. Is this possible? If so could you please point me in the write direction? From my googling, it seems like I would need to use PIL.
I don't think this is possible but...
Is there any metadata information that Python can take from a .tiff file and simply add them all together from all files?
Thank you for the help! | Python count total number of pages in group of multi-page TIFF files | 0.066568 | 0 | 0 | 1,178 |
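A minimal sketch that sums n_frames over every TIFF in a folder, building on the Pillow answer above (the folder path is a placeholder):
import glob
from PIL import Image

total = 0
for name in glob.glob("tiffs/*.tif*"):   # matches .tif and .tiff
    with Image.open(name) as img:
        total += img.n_frames           # pages in this file
print("total pages:", total)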
46,437,863 | 2017-09-27T00:54:00.000 | 7 | 0 | 1 | 1 | python,batch-file,scheduled-tasks,anaconda,conda | 57,192,970 | 4 | false | 0 | 0 | I had a similar problem a few days ago.
What I discovered is that anaconda prompt is nothing but your usual cmd prompt after running an 'activate.bat' script which is located in the anaconda 'Scripts' folder.
So to run your python scripts in anaconda all you need to do is write 2 lines in a batch file. (Open notepad and write the lines mentioned below. Save the file with .bat extension)
call C:\....path to anaconda3\Scripts\activate.bat
call python C:\path to your script\Script.py
Then you schedule this batch file to run as you wish and it will run without problems. | 2 | 26 | 0 | I have a script that i run every day and want to make a schedule for it, i have already tried a batch file with:
start C:\Users\name\Miniconda3\python.exe C:\script.py
And I'm able to run some basic Python commands in it. The problem is that my actual script uses some libraries that were installed with Anaconda, and I'm unable to use them in the script since Anaconda will not load.
Im working on windows and can't find a way to start Anaconda and run my script there automatically every day. | Schedule a Python script via batch on windows (using Anaconda) | 1 | 0 | 0 | 27,782 |
46,437,863 | 2017-09-27T00:54:00.000 | 1 | 0 | 1 | 1 | python,batch-file,scheduled-tasks,anaconda,conda | 46,438,132 | 4 | false | 0 | 0 | Found a solution: I copied the "activate.bat" file in "C:\Users\yo\Miniconda3\Scripts", renamed it schedule.bat, and added my script (copy-pasted it) at the end of the file.
Then I can schedule a task on Windows that executes schedule.bat every day | 2 | 26 | 0 | I have a script that I run every day and want to make a schedule for it. I have already tried a batch file with:
start C:\Users\name\Miniconda3\python.exe C:\script.py
And I'm able to run some basic Python commands in it. The problem is that my actual script uses some libraries that were installed with Anaconda, and I'm unable to use them in the script since Anaconda will not load.
Im working on windows and can't find a way to start Anaconda and run my script there automatically every day. | Schedule a Python script via batch on windows (using Anaconda) | 0.049958 | 0 | 0 | 27,782 |
46,439,814 | 2017-09-27T05:01:00.000 | -1 | 0 | 0 | 0 | python,pandas,dataframe,machine-learning | 46,443,938 | 1 | false | 0 | 0 | It depends on the purpose of the transformation. Converting categories to numerical labels may not make sense if the ordinal representation does not correspond to the logic of the categories. In this case, the "one-hot" encoding approach you have adopted is the best way to go, if (as I surmise from your post) the intention is to use the generated variables as the input to some sort of regression model. You can achieve what you are looking to do using pandas.get_dummies. | 1 | 1 | 1 | I am working with a medical data set that contains many variables with discrete outputs. For example: type of anesthesia, infection site, Diabetes y/n. And to deal with this I have just been converting them into multiple columns with ones and zeros and then removing one to make sure there is not a direct correlation between them but I was wondering if there was a more efficient way of doing this | Machine Learning dataset with many discrete features | -0.197375 | 0 | 0 | 89 |
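A minimal sketch of the pandas.get_dummies approach mentioned in the answer above; drop_first=True drops one column per category, like the manual step described in the question (the column names are invented):
import pandas as pd

df = pd.DataFrame({"anesthesia": ["general", "local", "general"],
                   "diabetes": ["y", "n", "y"]})
encoded = pd.get_dummies(df, columns=["anesthesia", "diabetes"], drop_first=True)
print(encoded)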
46,444,065 | 2017-09-27T09:20:00.000 | 0 | 0 | 1 | 0 | javascript,java,python | 46,444,255 | 1 | false | 0 | 0 | Write protection normally only exists for complete files. So you could revoke write permissions for the file, but then also appending isn't possible anymore.
For ensuring that no tampering has taken place, the standard way would be to cryptographically sign the data. In principle, you can do this as follows:
Take the contents of the file.
Add a secret key (any arbitrary string or random characters will do, the longer the better) to this string.
Create a cryptographic checksum (SHA256 hash or similar).
Append this hash to the file. (Newlines before and after.)
You can do this again every time you append something to the file. Because nobody except you knows your secret key, nobody except you will be able to produce the correct hash codes of the part of the file above the hash code.
This will not prevent tampering but it will be detectable.
This is relatively easily done using shell utilities like sha256sum for mere text files. But you have a JSON structure in a file. This is a complex case because the position in the file does not correlate with the age of the data anymore (unlike in a text file which is only being appended to).
To still achieve what you want you need to have an age information on the data. Do you have this? If you provide the JSON structure as @Rohit asked for we might be able to give more detailed advice. | 1 | 0 | 0 | I need to store some date stamped data in a JSON file. It is a sensor output. Each day the same JSON file is updated with the additional data. Now, is it possible to put some write protection on already available data to ensure that only new lines could be added to the document and no manual tampering should occur with it?
I suspect that creating checksums after every update may help, but I am not sure how to implement it. I mean, if some part of the JSON file is editable, then probably the checksum is also editable.
Any other way for history protection? | Is it possible to write protect old data of JSON Files and only enable appending? | 0 | 0 | 0 | 87 |
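A minimal sketch of the signing idea, using the standard library's hmac module instead of the hand-rolled "content + secret key" hash from the answer (the key and file name are placeholders):
import hmac, hashlib

SECRET_KEY = b"replace-with-a-long-random-secret"

def sign(data):
    # keyed hash over the current file contents
    return hmac.new(SECRET_KEY, data, hashlib.sha256).hexdigest()

with open("sensor_log.json", "rb") as f:
    payload = f.read()
with open("sensor_log.json", "a") as f:
    f.write("\n" + sign(payload) + "\n")   # append the signature line, as in step 4 above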
46,444,903 | 2017-09-27T09:58:00.000 | 3 | 0 | 1 | 1 | python,jupyter-notebook,icloud | 57,729,567 | 5 | false | 0 | 0 | Mac: Access your iCloud Documents folder with Jupyter Notebook or JupyterLab
Hi!
In my case, what I do is:
Open a Terminal in Your Mac
Type:
cd ~/Library/Mobile*Documents/com~apple~CloudDocs/Documents
Verify you are at the right folder. Type
pwd
Open Jupyter Notebook or JupyterLab. Type:
jupyter notebook
or type:
jupyter lab
Your browser will open a Jupyter Notebook (/Lab) and you'll see your iCloud Documents Folder and all the subfolders included on it | 2 | 7 | 0 | I am using Jupyter Notebooks (through Anaconda) on a Mac OS X Sierra. Now I would like to access a notebook saved in my iCloud Drive. However, I have been unable to find a way to access iCloud via the Juypter interface. I can of course upload a file with the "Upload" button, but not access the file sitting in my iCloud directly, which I would much prefer. How can I do that? | Access iCloud Drive in Jupyter Notebook | 0.119427 | 0 | 0 | 8,894 |
46,444,903 | 2017-09-27T09:58:00.000 | 12 | 0 | 1 | 1 | python,jupyter-notebook,icloud | 59,936,184 | 5 | false | 0 | 0 | While someone can launch the Jupyter Lab or Jupyter Notebook software from the iCloud directory, as described in the accepted response, this will need to be done each and every time you want to access Jupyter notebooks within iCloud. Instead I prefer to create a symbolic link (symlink...a shortcut) to my iCloud folder, in my home directory. This will always be present. In doing this I can launch Jupyter Lab or a Jupyter Notebook from any folder or from Anaconda Navigator and I will always see a "folder" to iCloud.
To do this you need to open a Terminal and copy and paste the statement below into the Terminal window and hit return:
ln -s ~/Library/Mobile*Documents/com~apple~CloudDocs/ iCloud
After creating this symlink, there will always be an iCloud folder that you can browse in your home directory. You can use whatever name you want by replacing the "iCloud" at the on the statement above with your own name. | 2 | 7 | 0 | I am using Jupyter Notebooks (through Anaconda) on a Mac OS X Sierra. Now I would like to access a notebook saved in my iCloud Drive. However, I have been unable to find a way to access iCloud via the Juypter interface. I can of course upload a file with the "Upload" button, but not access the file sitting in my iCloud directly, which I would much prefer. How can I do that? | Access iCloud Drive in Jupyter Notebook | 1 | 0 | 0 | 8,894 |
46,445,758 | 2017-09-27T10:39:00.000 | 0 | 0 | 1 | 0 | python,string,loops,directory,text-files | 46,446,384 | 3 | false | 0 | 0 | I think using the os library will do.
import os
import re

path = r"<directory>"
# now you could print everything inside the directory
print(os.listdir(path))

# to walk each item separately:
# roots is the folder currently being visited, dirs the sub-directories
# inside your directory and files the file names inside it
for roots, dirs, files in os.walk(path):
    # to get the files
    for file in files:
        # you can print all filenames for test purposes
        print("File = %s" % file)
        # now you have the string names of all files; all that is left to do
        # is parse them, e.g. with a regular expression on the periods
        parts = re.split(r"\.", file)
        # parts now holds the name pieces, with the file extension last
        if parts[-1] == "txt":
            # join with the folder path and open for reading/updating ("w" would truncate the file)
            f = open(os.path.join(roots, file), "r+")
            # now you can do whatever you want with the file
Hope this works out for you | 1 | 0 | 0 | I'm using python 3.6.1.
I have a directory filled with thousands of text files.
I want to remove the first 2 lines in each text file, without making a new copy of each text file.
Also, I want to remove the last line in each file if it contains a certain "keyword"; if not, the last line is not removed and stays there.
How to do it?
Thanks in advance. | Manipulating Many Text Files In a Directory Using Python | 0 | 0 | 0 | 153 |
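A sketch of the in-place edit the question asks for (the directory path and the keyword are placeholders); note that "without making a new copy" still means rewriting the file's contents:
import os

directory = "texts"          # placeholder
keyword = "keyword"          # placeholder

for name in os.listdir(directory):
    if not name.endswith(".txt"):
        continue
    path = os.path.join(directory, name)
    with open(path, "r") as f:
        lines = f.readlines()
    lines = lines[2:]                      # drop the first two lines
    if lines and keyword in lines[-1]:
        lines = lines[:-1]                 # drop the last line only if it has the keyword
    with open(path, "w") as f:
        f.writelines(lines)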
46,453,761 | 2017-09-27T17:20:00.000 | 0 | 0 | 0 | 0 | django,python-3.x,iis | 46,498,424 | 2 | true | 1 | 0 | After doing some trial and error tests we found out the following:
For question 1, it seems that this is not an isolated problem because in other web server applications such as Apache, IIS or nginx, the time it takes for pages to be reflected depends on the default parameters that are established by these web server applications. How to modify these parameters? This is an off-topic question and more research should be devoted to solve it.
For question 2, we did not find where to modify these parameters of "refreshing" time in IIS so the alternative solution was to run our Django web app in localhost inside of our server with python manage.py runserver and we started noticing the effects immediately for debugging purposes. | 2 | 2 | 0 | Currently we have our production Django web app hosted in IIS and we have noticed that when we add new changes into the views.py or, specially, the url.py files, the changes take a while to be reflected in the server, that is, it can take an hour to see the changes. This does not happen when I modify the html files, changes take effect immediately.
We also broke on purpose the url.py file by adding incorrect lines such as url1234567(r'^tickets/$', views.tickets_calculate), but this line did not take effect until several minutes later.
Questions
Why do the changes in these files take so long?
How can I speed up this process of "refreshing of views.py and urls.py? | How to speed up Django when doing new changes to the views.py and urls.py? | 1.2 | 0 | 0 | 103 |
46,453,761 | 2017-09-27T17:20:00.000 | 1 | 0 | 0 | 0 | django,python-3.x,iis | 53,764,817 | 2 | false | 1 | 0 | Old question, but consider: sudo /etc/init.d/apache2 graceful to restart the webserver just after pulling the changes. | 2 | 2 | 0 | Currently we have our production Django web app hosted in IIS and we have noticed that when we add new changes into the views.py or, specially, the url.py files, the changes take a while to be reflected in the server, that is, it can take an hour to see the changes. This does not happen when I modify the html files, changes take effect immediately.
We also broke on purpose the url.py file by adding incorrect lines such as url1234567(r'^tickets/$', views.tickets_calculate), but this line did not take effect until several minutes later.
Questions
Why do the changes in these files take so long?
How can I speed up this process of "refreshing of views.py and urls.py? | How to speed up Django when doing new changes to the views.py and urls.py? | 0.099668 | 0 | 0 | 103 |
46,457,545 | 2017-09-27T21:32:00.000 | 13 | 0 | 1 | 0 | python,pip,conda,package-management | 46,459,614 | 2 | false | 0 | 0 | They will be installed in the same directory such as /home/user/miniconda/env/envname/lib/python3.6/site-packages/requests.
So if you install a package by both conda and pip, then uninstall it by pip, the source code is gone. And that means you cannot use this package anymore.
When installing packages, pip will check the dist-info or egg-info directory while conda will check the conda-meta directory. In this case, you can install the same package both by conda and pip if you install it by pip first and then install it by conda. In the reverse case, pip will consider that the package has already been installed.
To completely uninstall a package installed both by conda and pip, you need to run both conda remove to remove information in conda-meta and pip uninstall to remove dist-info directory. | 1 | 13 | 0 | What happens if the same package is installed by both pip and conda in the same environment? Is conda designed to cope with this? Can you safely pip uninstall the pip version without messing up the conda version? | same package installed by both pip and conda | 1 | 0 | 0 | 5,449 |
46,459,441 | 2017-09-28T01:11:00.000 | 3 | 0 | 0 | 0 | python,arrays,numpy,data-structures | 46,460,039 | 1 | true | 0 | 0 | ndarray vs list: both can hold a 1-dimensional collection of elements; however, in an ndarray the elements would usually all be of the same type (e.g., 64-bit floating point numbers), and numpy provides operators (and behind-the-scenes optimizations) for calculations on these vectors. For example, you can (quickly) add elements in nda1 and nda2 via nda3 = nda1 + nda2. With lists, you would need to do lst3 = [a + b for (a, b) in zip(lst1, lst2)]. On the other hand, you can easily insert and remove items in lists. ndarrays are designed for high-performance computations on vectors of numbers; lists are designed for ad hoc operations on arbitrary collections of objects.
ndarray vs dictionary: these are quite dissimilar. Dictionaries allow you to select objects from an arbitrary collection by name; ndarrays usually only hold numbers, and only allow lookup via index number (unless you get into recarrays, which you didn't ask about).
ndarray vs Pandas dataframe: dataframes are somewhat similar to multidimensional ndarrays, in that they are designed to hold similar types of data in each column. However, different columns of a dataframe would often hold different types of data, while all the elements in a multidimensional ndarray would usually be numbers of the same type. In addition, dataframes provide name-based indexing across rows and columns. I like to think of dataframes as something like a dictionary of 1-dimensional ndarrays, i.e., each column of the dataframe is like a 1-dimensional ndarray, and you can retrieve the column by name and then manipulate it. But Pandas provides additional indexing goodness, so you can also give a name to each row, and then pull elements out of the table based on both their row and column names. Pandas also provides operators for element-wise operations (e.g., adding two columns together), similar to numpy. But it generally does this by matching index terms, not row/column numbers. So data manipulations in Pandas are slower but more reliable.
ndarrays vs structured arrays: structured arrays are somewhat like the rows of a Pandas dataframe (you can have different standardized types of data in each column). But the semantics for manipulating them are more like standard numpy operations -- you have to make sure the right data is in the right spot in the array before you operate on it. (Pandas will re-sort the tables so the row-names match if needed.)
ndarray vs sequence of lists: ndarrays are initialized and displayed like sequences of lists, and if you take the nth element of a 2D array, you get a list (the row). But internally, in an ndarray, every element has the same datatype (unlike lists), and the values are packed tightly and uniformly in memory. This allows processors to quickly perform operations on all the values together. Lists are more like pointers to values stored elsewhere, and mathematical computations on lists or lists-of-lists are not optimized. Also, you can't use 2D or 3D indexing with lists-of-lists (you have to say lst[1][2][3], not lst[1, 2, 3]), nor can you easily do elementwise operations (lst1+lst2 does not do elementwise addition like nda1+nda2).
higher dimensions: you can create an ndarray with any number of dimensions. This is sort of similar to a list of lists of lists. e.g., this makes a 3D array: np.array([[[1, 2], [3, 4]], [[5, 6], [7,8]]]) | 1 | 1 | 1 | So I've read the manual - but the structure still comes confusing to me. Specifically, what is the relationship between:
nd-array and Python list?
nd-array and Python dictionary?
nd-array and Pandas DataFrame?
nd-arrays and Numpy "structured arrays"?
Also, is nd-array just like a sequence of lists?
Where does the "n-dimension" come into the picture? Because it looks just like a matrix, which is just two dimensions.
Thanks! | Intuitive understanding of Numpy nd-array | 1.2 | 0 | 0 | 184 |
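A tiny illustration of the elementwise-arithmetic point made in the answer above:
import numpy as np

nda1 = np.array([1.0, 2.0, 3.0])
nda2 = np.array([10.0, 20.0, 30.0])
print(nda1 + nda2)                           # array([11., 22., 33.]) -- vectorised

lst1, lst2 = [1.0, 2.0, 3.0], [10.0, 20.0, 30.0]
print([a + b for a, b in zip(lst1, lst2)])   # the list equivalent, done element by element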
46,462,152 | 2017-09-28T06:14:00.000 | 10 | 0 | 0 | 0 | python,sqlalchemy | 46,462,502 | 1 | true | 0 | 0 | In SQL, tables are related to each other via foreign keys. In an ORM, models are related to each other via relationships. You're not required to use relationships, just as you are not required to use models (i.e. the ORM). Mapped classes give you the ability to work with tables as if they are objects in memory; along the same lines, relationships give you the ability to work with foreign keys as if they are references in memory.
You want to set up relationships for the same purpose as wanting to set up models: convenience. For this reason, the two go hand-in-hand. It is uncommon to see models with raw foreign keys but no relationships. | 1 | 9 | 0 | I've noticed that many SQLAlchemy tutorials would use relationship() in "connecting" multiple tables together, may their relationship be one-to-one, one-to-many, or many-to-many. However, when using raw SQL, you are not able to define the relationships between tables explicitly, as far as I know.
In what cases is relationship() required and not required? Why do we have to explicitly define the relationship between tables in SQLAlchemy? | Is it necessary to use `relationship()` in SQLAlchemy? | 1.2 | 1 | 0 | 1,079 |
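A minimal sketch of a foreign key plus the optional relationship() convenience discussed above (the model and table names are invented):
from sqlalchemy import Column, Integer, ForeignKey
from sqlalchemy.orm import relationship
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Parent(Base):
    __tablename__ = "parent"
    id = Column(Integer, primary_key=True)
    children = relationship("Child", back_populates="parent")   # in-memory convenience

class Child(Base):
    __tablename__ = "child"
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey("parent.id"))         # the actual SQL foreign key
    parent = relationship("Parent", back_populates="children")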
46,466,345 | 2017-09-28T10:01:00.000 | 0 | 0 | 0 | 0 | scrapy,python-3.5,scrapy-spider | 46,466,842 | 1 | true | 1 | 0 | You need to add dont_filter=True when re-queueing the request. Even though the request may not match any other request, Scrapy remembers which requests it has already made and will filter the request out if you re-queue it, assuming it was re-queued by mistake. | 1 | 0 | 0 | Scrapy seems to complete without processing all the requests. I know this because I am logging before and after queueing the request and I can clearly see that.
I am logging in both parse and error callback methods and none of them got called for those missing requests.
How can I debug what happened to those requests? | requests disappear after queueing in scrapy | 1.2 | 0 | 1 | 43 |
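In code, the re-queueing mentioned in the answer would look roughly like this (the spider name and callback are placeholders):
import scrapy

class MySpider(scrapy.Spider):
    name = "example"

    def parse(self, response):
        # re-queue the same URL without it being dropped by the duplicate filter
        yield scrapy.Request(response.url, callback=self.parse, dont_filter=True)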
46,468,549 | 2017-09-28T11:57:00.000 | 0 | 1 | 0 | 0 | python,c++,io | 46,468,692 | 3 | false | 0 | 1 | First of all, the efficiency of I/O operations is limited by the buffer size, so if you want to achieve higher throughput you might have to play with the input/output buffers. As for which way to output the files, it depends on your data and on what delimiters you want to use to separate the data in the files. | 2 | 1 | 0 | I want to write output files containing tabular data (float values in lines and columns) from a C++ program.
I need to open those files later on with other languages/softwares (here python and paraview, but might change).
What would be the most efficient output format for tabular files (efficient for files memory sizes efficiency) that would be compatible with other languages ?
E.g., txt files, csv, xml, binarized or not ?
Thanks for advices | Which data files output format from C++ to be read in python (or other) is more size efficient? | 0 | 0 | 0 | 408 |
46,468,549 | 2017-09-28T11:57:00.000 | 2 | 1 | 0 | 0 | python,c++,io | 46,473,158 | 3 | false | 0 | 1 | 1- Your output files contain tabular data (float values in lines and columns), in other words, a kind of matrix.
2- You need to open those files later on with other languages/softwares
3- You want to have files memory sizes efficiency
That said, you have to consider one of the two formats below:
CSV: if your data are very simple (a matrix of float without particualr structure)
JSON if you need a minimum structure for your files
These two formats are standard, supported by almost all known languages and maintained software.
Lastly, if your data has a very complex structure, prefer a format like XML, but the price to pay is then the size of your files!
Hope this helps! | 2 | 1 | 0 | I want to write output files containing tabular datas (float values in lines and columns) from a C++ program.
I need to open those files later on with other languages/softwares (here python and paraview, but might change).
What would be the most efficient output format for tabular files (efficient for files memory sizes efficiency) that would be compatible with other languages ?
E.g., txt files, csv, xml, binarized or not ?
Thanks for advices | Which data files output format from C++ to be read in python (or other) is more size efficient? | 0.132549 | 0 | 0 | 408 |
46,472,119 | 2017-09-28T14:48:00.000 | 0 | 0 | 1 | 0 | python,python-3.x,anaconda | 46,472,222 | 1 | false | 0 | 0 | In your code1.py and code2.py files, you should include a variable named __all__ = ['function1'] and __all__ = ['function2'] respectively. This list contains the names of the functions that are to be imported. When importing in Main.py, use from code1 import * (and the same for code2.py) and it should work fine. | 1 | 0 | 0 | I have three python files, named: Main.py, code1.py, code2.py. "Main.py" is calling the other two by "import code1" and "import code2" statements. code1 and code2 are receiving some numerical arrays, and returning some other numerical arrays by using function1 and function2, respectively.
Here is my problem: when I open main.py and run it, it says:
"name function1 is not defined"
Then, after running code1.py and code2.py, it works. After getting work done, I quit python. Next day, when I open and run main.py, same problem again.
Question: Why do I need to run code1.py and code2.py everytime before running main.py? Is there any way to solve this problem?
Thanks!
Note: I am using Python 3.6.1 on Anaconda 4.4.0 using Spyder 3.1.4 | Need to run python modules before importing? | 0.53705 | 0 | 0 | 45 |
46,478,672 | 2017-09-28T21:34:00.000 | 3 | 0 | 0 | 0 | python,tkinter,tkinter-canvas | 46,478,717 | 1 | true | 0 | 1 | You need to call update on your canvas object instead of tkinter, i.e. canvas.update(). | 1 | 2 | 0 | I need to update my canvas but tkinter.update() says tkinter has no attribute "update". What might be the possible solution. My thanks in advance. | How to update the canvas in python using tkinter? | 1.2 | 0 | 0 | 4,656 |
46,480,925 | 2017-09-29T02:31:00.000 | 0 | 0 | 1 | 0 | python,multithreading,dictionary,multiprocessing | 46,480,946 | 2 | false | 0 | 0 | A lock is used to avoid race conditions, so that no two threads can change the dict at the same time. It is advisable that you use the lock; otherwise you might run into a race condition that causes the program to fail. A mutex lock can be used to deal with 2 threads. | 1 | 2 | 0 | When I use the dictionary.get() function, is it locking the whole dictionary? I am developing a multiprocess and multithreading program. The dictionary is used to act as a state table to keep track of data. I have to impose a size limit on the dictionary, so whenever the limit is hit, I have to do garbage collection on the table, based on the timestamp. The current implementation will delay the adding operation while garbage collection is iterating through the whole table.
I will have 2 or more threads, one just to add data and one just to do garbage collection. Performance is critical in my program to handle streaming data. My program is receiving streaming data, and whenever it receives a message, it has to look for it in the state table, then add the record if it's non-existent in the first place, or copy certain information and then send it along the pipe.
I have thought of using multiprocessing to do the search and adding operation concurrently, but if I used processes, I have to make a copy of state table for each process, in that case, the performance overhead for synchronization is too high. And I also read that multiprocessing.manager.dict() is also locking the access for each CRUD operation. I could not spare the overhead for it so my current approach is using threading.
So my question is while one thread is doing .get(), del dict['key'] operation on the table, will the other insertion thread be blocked from accessing it?
Note: I have read through most SO's python dictionary related posts, but I cannot seem to find the answer. Most people only answer that even though python dictionary operations are atomic, it is safer to use a Lock for insertion/update. I'm handling a huge amount of streaming data so Locking every time is not ideal for me. Please advise if there is a better approach. | Python dict.get() Lock | 0 | 0 | 0 | 2,615 |
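A minimal sketch of guarding the shared dict with one threading.Lock for both the insert path and the garbage-collection path (the names and the (timestamp, value) layout are invented for illustration):
import threading

state = {}
state_lock = threading.Lock()

def upsert(key, timestamp, value):
    with state_lock:
        state[key] = (timestamp, value)

def collect_garbage(cutoff):
    with state_lock:
        expired = [k for k, (ts, _) in state.items() if ts < cutoff]
        for k in expired:
            del state[k]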
46,481,176 | 2017-09-29T03:02:00.000 | 1 | 0 | 1 | 0 | python,gremlin,janusgraph,gremlinpython | 46,571,131 | 1 | false | 0 | 0 | The issue you are seeing is more at the Apache TinkerPop level. JanusGraph 0.1.1 ships with TinkerPop 3.2.3, and the error you are seeing was resolved with TinkerPop 3.2.5.
The master branch of JanusGraph is already at TinkerPop 3.2.6, so it is compatible with Jupyter notebooks, but you'd have to build JanusGraph from source code. I'd expect the next release of JanusGraph to be out later this month. | 1 | 3 | 0 | I`m trying to use tinkerpop3.2.3 to conncet janusgraph0.1.1 on my centOS7, everything works fine in gremlin shell. I tried to use gremlin-python3.2.3 in python shell, it also works well. But when I moved my codes to jupyter notebook, I got RuntimeError:IOLoop is already running when excuting
g = graph.traversal().withRemote(DriverRemoteConnection('ws://localhost:8182/gremlin','g'))
So, is that possible to make gremlinpython work in jupyter notebook?
I tried both python2.7 & python3.5 | Gremlin-python in jupyter notebook | 0.197375 | 0 | 0 | 1,060 |
46,481,314 | 2017-09-29T03:20:00.000 | 0 | 0 | 1 | 1 | python-3.x,debugging,pycharm | 50,509,969 | 2 | false | 0 | 0 | Possibilities:
An unhandled exception was raised by code long before we ever got to the code containing a breakpoint. That is, the code which gets executed before the code containing a break-point contains an error.
The code containing a break-point is never executed, even when the script runs from start to finish. For example, if the break-point is inside an if-block and the condition of the if-statement is never true, then the breakpoint will never be reached.
You are not running the script you think you are running. Look in the upper right corner of the UI. What is the name of the file next to the run button (green triangle)? | 1 | 4 | 0 | When I start debugging (hitting the bug button to top-right), it gets connected and below message is shown:
Connected to pydev debugger (build 172.3968.37)
But it doesn't stop at break points. Can anyone help me with this problem?
I am using PyCharm CE on a Mac with python 3.6.2 | Debugger doesn't stop at break points | 0 | 0 | 0 | 2,634 |
46,487,500 | 2017-09-29T11:03:00.000 | 0 | 0 | 1 | 0 | python-3.x,cmd,path,pip | 46,490,397 | 1 | false | 0 | 1 | So, if Windows doesn't recognize something as a script (for example, if you're using .py extension), then it will try to open the file in whatever it has as a default editor. In your case, it's Atom. In my case (when I used Windows) it was Notepad++.
Regardless, you need to use the command prompt to get this working:
Press Windows-r. A prompt will show up.
Type cmd in the prompt.
Press Enter. This will drop you into the command prompt.
Type python, then hit Enter. If you get an error, then you don't have Python.
Paste/type cd \Users\jmcco\Desktop\python-3.6.2-embed-amd64\
Type python get-pip.py
Alternatively, you could use Anaconda, which should allow you to use a GUI to install. | 1 | 1 | 0 | I'm trying to install kivy, but my computer doesn't recognise me as having installed pip so I run get-pip.py through my cmd and this is what happens:
When I type: cd \Users\jmcco\Desktop\python-3.6.2-embed-amd64\ and then type: python get-pip.py I get an error message that reads: TypeError: expected str, bytes or os.PathLike object, not NoneType
The whole cmd output reads:
File "get-pip.py", line 20061, in
main()
File "get-pip.py", line 194, in main
bootstrap(tmpdir=tmpdir)
File "get-pip.py", line 82, in bootstrap
import pip
File "", line 961, in _find_and_load
File "", line 950, in _find_and_load_unlocked
File "", line 646, in _load_unlocked
File "", line 616, in _load_backward_compatible
File "C:\Users\jmcco\AppData\Local\Temp\tmp8xsq68ol\pip.zip\pip__init__.py", line 26, in
File "", line 961, in _find_and_load
File "", line 950, in _find_and_load_unlocked
File "", line 646, in _load_unlocked
File "", line 616, in _load_backward_compatible
File "C:\Users\jmcco\AppData\Local\Temp\tmp8xsq68ol\pip.zip\pip\utils__init__.py", line 23, in
File "", line 961, in _find_and_load
File "", line 950, in _find_and_load_unlocked
File "", line 646, in _load_unlocked
File "", line 616, in _load_backward_compatible
File "C:\Users\jmcco\AppData\Local\Temp\tmp8xsq68ol\pip.zip\pip\locations.py", line 88, in
File "ntpath.py", line 75, in join
TypeError: expected str, bytes or os.PathLike object, not NoneType
C:\Users\jmcco\Desktop\python-3.6.2-embed-amd64> | Python 3.6.2 - Trying to install pip through cmd, get the following TypeError: | 0 | 0 | 0 | 627 |
46,487,707 | 2017-09-29T11:17:00.000 | 0 | 0 | 1 | 0 | python,dde | 46,489,746 | 2 | false | 0 | 0 | Update:
I found a file "dde.pyd" in "Python27\Lib\site-packages\pythonwin".
Now the error is:
import dde
ImportError: DLL load failed: | 1 | 1 | 0 | How can I import dde in python 2.7?
I took over a job about measurement automation using Python. However, the author of the code imports a module "dde". PyCharm tells me that there is "no module named dde".
Maybe I need to install dde module? But I cannot find dde module installation.
I have already installed pywin32(214version).
new to this, hope an answer.
Thanks a lot. | Python dde module install | 0 | 0 | 0 | 1,381 |
46,487,871 | 2017-09-29T11:29:00.000 | 0 | 0 | 1 | 1 | python | 46,487,918 | 1 | false | 0 | 0 | open cmd
go to D:\python\Scripts (your installation directory)
Run The Command pip install -U pip | 1 | 0 | 0 | I plan on using python in the LiClipse IDE to play around with AI. However i require a few libraries. The libraries can be installed with pip. They mention the commands to install and upgrade pip(e.g. python -m pip install -U pip), however I am not sure where I should write this command anymore because it does not work in either the CMD or python shell.
Is there any condition I should think about while using these commands?
Thank you | How to (and where) install/upgrade pip for Python? | 0 | 0 | 0 | 43 |
46,488,989 | 2017-09-29T12:36:00.000 | 1 | 0 | 0 | 0 | python,flask | 46,489,288 | 3 | false | 1 | 0 | If you look at most of the REST APIs, they will return 400 and appropriate error message back to the client if the user sends request parameters of a different type than is expected.
So, you should go with your 2nd option. | 2 | 1 | 0 | Whats the best way to handle invalid parameters passed in with a GET or POST request in Flask+Python?
Let's say for the sake of argument I handle a GET request using Flask+Python that requires a parameter that needs to be an integer but the client supplies it as a value that cannot be interpreted as an integer. So, obviously, an exception will be thrown when I try to convert that parameter to an integer.
My question is: should I let that exception propagate, thus letting Flask do its default thing of returning an HTTP status code of 500 back to the client? Or should I handle it and return a proper (IMO) status code of 400 back to the client?
The first option is the easier of the two. The downside is that the resulting error isn't clear as to who's at fault here. A system admin might look at the logs and, not knowing anything about Python or Flask, might conclude that there's a bug in the code. However, if I return a 400 then it becomes clearer that the problem might be on the client's end.
What do most people do in these situations? | Best way to handle invalid GET/POST request parameters with Flask+Python? | 0.066568 | 0 | 1 | 1,882 |
46,488,989 | 2017-09-29T12:36:00.000 | 1 | 0 | 0 | 0 | python,flask | 46,489,291 | 3 | false | 1 | 0 | A status code of 400 means you tell the client "hey, you've messed up, don't try that again". A status code of 500 means you tell the client "hey, I've messed up, feel free to try again later when I've fixed that bug".
In your case, you should return a 400 since the party that is at fault is the client. | 2 | 1 | 0 | Whats the best way to handle invalid parameters passed in with a GET or POST request in Flask+Python?
Let's say for the sake of argument I handle a GET request using Flask+Python that requires a parameter that needs to be an integer but the client supplies it as a value that cannot be interpreted as an integer. So, obviously, an exception will be thrown when I try to convert that parameter to an integer.
My question is: should I let that exception propagate, thus letting Flask do its default thing of returning an HTTP status code of 500 back to the client? Or should I handle it and return a proper (IMO) status code of 400 back to the client?
The first option is the easier of the two. The downside is that the resulting error isn't clear as to who's at fault here. A system admin might look at the logs and, not knowing anything about Python or Flask, might conclude that there's a bug in the code. However, if I return a 400 then it becomes clearer that the problem might be on the client's end.
What do most people do in these situations? | Best way to handle invalid GET/POST request parameters with Flask+Python? | 0.066568 | 0 | 1 | 1,882 |
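A minimal Flask sketch of the second option (the route and parameter names are invented); request.args.get(..., type=int) returns None instead of raising when the value is not an integer:
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

@app.route("/items")
def items():
    count = request.args.get("count", type=int)
    if count is None:
        # the client sent a non-integer value, so blame the client
        abort(400, description="'count' must be an integer")
    return jsonify(count=count)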
46,489,303 | 2017-09-29T12:56:00.000 | 1 | 0 | 1 | 0 | python,python-wheel | 46,489,393 | 1 | true | 0 | 0 | DESCRIPTION.rst contains whatever you passed to the long_description argument of setup() in your setup.py. This argument is intended to be used for supplying your package's README, and most packages set it with the equivalent of long_description=open('README.rst').read(). | 1 | 0 | 0 | When I generate wheel for my python package with command python setup.py bdist_wheel I get a whl file DESCRIPTION.rst with only UNKNOWN string inside. How can I fill DESCRIPTION.rst with information that I want to? | How the contents of the DESCRIPTION.rst in python wheel are generated? | 1.2 | 0 | 0 | 331 |
46,492,279 | 2017-09-29T15:33:00.000 | 0 | 1 | 0 | 0 | python,telegram-bot,python-telegram-bot | 46,501,988 | 2 | true | 0 | 0 | Just edit the message sent without providing a reply_markup. | 1 | 0 | 0 | I would like to hide a InlineKeyboardMarkup if no answer is provided. I'm using Python and python-telegram-bot. Is this possible?
Thank you. | Hide InlineKeyboardMarkup in no answer provided | 1.2 | 0 | 1 | 545 |
46,492,388 | 2017-09-29T15:39:00.000 | 3 | 0 | 0 | 0 | python,database,sqlite | 46,492,537 | 2 | false | 0 | 0 | SQLite3 is embedded-only database so it does not have network connection capabilities. You will need to somehow mount the remote filesystem.
With that being said, SQLite3 is not meant for this. Use PostgreSQL or MySQL (or anything else) for such purposes. | 1 | 4 | 0 | I have a question about sqlite3. If I were to host a database online, how would I access it through python's sqlite3 module?
E.g. Assume I had a database hosted at "www.example.com/database.db". Would it be as simple as just forming a connection with sqlite3.connect ("www.example.com/database.db") or is there more I need to add so that the string is interpreted as a url and not a filename? | Connecting to an online database through python sqlite3 | 0.291313 | 1 | 0 | 1,622 |
46,493,886 | 2017-09-29T17:15:00.000 | 1 | 0 | 0 | 0 | python,amazon-web-services,amazon-ec2,boto,aws-cli | 46,494,159 | 1 | false | 1 | 0 | This sounds like an AWS credentials question, not specifically a "create ec2 instances" question. The answer is to assign the appropriate AWS permissions to the EC2 instance via an IAM role. Then your boto/boto3 code and/or the AWS CLI running on that instance will have permissions to make the necessary AWS API calls without having an access key and secret key stored in your code. | 1 | 0 | 0 | I'm looking to create an AWS system with one master EC2 instance which can create other instances.
For now, I managed to create python files with boto able to create ec2 instances.
The script works fine in my computer environment but when I try to deploy it using Amazon BeanStalk with Django (Python 3.4 included) the script doesn't work. I can't configure aws cli (and so Boto) through SSL because the only user I can access is ec2-user and the web server uses another user.
I could simply hard-code my access key ID and secret key in the Python file, but that would not be secure. What can I do to solve this problem?
I also discovered AWS CloudFormation today; is it a better idea to create new instances with that rather than with the boto run function? | How to create ec2 instances from another instance? boto awscli | 0.197375 | 0 | 1 | 161
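A minimal boto3 sketch of the role-based approach described in the answer — no keys in the code, the instance profile supplies the credentials (the AMI ID, instance type and region are placeholders):
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # credentials come from the instance's IAM role
response = ec2.run_instances(
    ImageId="ami-12345678",      # placeholder AMI
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])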
46,495,000 | 2017-09-29T18:32:00.000 | 0 | 0 | 0 | 0 | python,django | 46,495,157 | 2 | false | 1 | 0 | The NoReverseMatch error has to do with URLs, so check your URL patterns
url(...).
Also make sure your Django version is 1.11; if not, try to update it to 1.11 | 1 | 2 | 0 | Django raises
django.urls.exceptions.NoReverseMatch: 'en-us' is not a registered namespace
How can I fix it? | Django raises NoReverseMatch: 'en-us' is not a registered namespace | 0 | 0 | 0 | 2,378
46,495,647 | 2017-09-29T19:22:00.000 | 1 | 1 | 1 | 1 | python,micropython | 46,496,039 | 3 | false | 0 | 0 | I don't know whether MicroPython has compile() and exec() built-in.
But when the embedded Python has them and when the MCU has enough RAM, then I do the following:
Send a line to the embedded shell to start the creation of a variable holding a multiline string.
'_code = """\'
Send the code I wish executed (line by line or however)
Close the multiline string with """
Send exec command to run the transfered code stored in the variable on MCU and pick up the output.
If your RAM is small and you cannot transfer whole code at once, you should transfer it in blocks that would be executed. Like functions, loops, etc.
If you can compile bytecode for MicroPython on a PC, then you should be able to transfer it and prepare it for execution. This would use a lot less of RAM.
But whether you can inject the raw bytecode into shell and run it depends on how much MicroPython resembles CPython.
And yep, there are differences. As explained in another answer line by line execution can be tricky. So blocks of code is your best bet. | 1 | 1 | 0 | I'm integrating MicroPython into a microcontroller and I want to add a debug step-by-step execution mode to my product (via a connection to a PC).
Thankfully, MicroPython includes a REPL aka Python shell functionality: I can feed it one line at a time and execute.
I want to use this feature to single-step on the PC-side and send in the lines in the Python script one-by-one.
Is there ANY difference, besides possibly timing, between running a Python script one line at a time vs python my_script.py? | Run Python script line-by-line in shell vs atomically | 0.066568 | 0 | 0 | 1,057 |
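A rough sketch of the block-transfer idea from the first answer, as it would look on the receiving side in standard Python (MicroPython support for compile(), as the answer notes, may vary; the code string is just an example): build the code up as one string, then compile and exec it as a block.
block = "\n".join([
    "x = 1",
    "x += 1",
    "print(x)",
])
compiled = compile(block, "<remote>", "exec")
exec(compiled)   # runs the whole block at once, like a script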
46,497,450 | 2017-09-29T22:00:00.000 | 1 | 0 | 1 | 0 | python,machine-learning,speech-recognition | 46,498,089 | 1 | false | 0 | 0 | I'm thinking autocorrelation isn't working here. You should cut the sentence into words (by finding parts with a low signal during x ms) and then check their similarity. | 1 | 0 | 0 | I am using the Python Librosa library. Can someone point me to a way I could find out whether there are repeated words in a spoken sentence? I was thinking of using an autocorrelation function to uncover the repeats. | Speech procession looking for repeats in spoken words in a sentence | 0.197375 | 0 | 0 | 31
46,500,405 | 2017-09-30T07:04:00.000 | 3 | 0 | 0 | 0 | python,openerp,odoo-9,odoo-10 | 46,501,313 | 2 | false | 1 | 0 | You can restart the server, starting it with python odoo-bin -d database_name -u module_name
or -u all to update all module | 2 | 4 | 0 | I have two databases in odoo DB1 and DB2. I made some changes to existing modules(say module1 and module2) in DB1 through GUI(web client). All those changes were stored to DB1 and were working correctly when I am logged in through DB1.
Now, I made some changes in few files(in same two modules module1 and module2). These modules need to be upgraded in order to load those changes. So, i logged in from DB2 and upgraded those modules. My changes in file loaded correctly and were working correctly when I am logged in through DB2.
But those file changes were loaded only for DB2 and not for DB1.
So, I wanted to know:
How does upgrading of a module work? Does it upgrade only the database through which the user is logged in and upgraded the module?
And if that is so, is there a way that I can upgrade my module while retaining all the previous changes that I made through the GUI in that same module?
What are the things that are changed when a module is upgraded? | How upgrading of a Odoo module works? | 0.291313 | 1 | 0 | 1,295 |
46,500,405 | 2017-09-30T07:04:00.000 | 4 | 0 | 0 | 0 | python,openerp,odoo-9,odoo-10 | 46,513,745 | 2 | false | 1 | 0 | There are 2 steps for upgrading an addon in Odoo:
First, restart the service. It will upgrade your .py files.
Second, click the upgrade button in Apps > youraddonsname. It will upgrade your .xml files.
I created a script for upgrading the XML files; the name is upgrade.sh
#!/bin/sh
for db in $(cat /opt/odoo/scripts/yourlistdbfiles);
do
odoo --addons-path=/opt/odoo/youraddonspath -d $db -u youraddonsname --no-xmlrpc > /opt/odoo/logs/yourlogfiles.log 2>&1 &
sleep 20s && exit &
done
So you just run sh /opt/odoo/script/upgrade.sh after editing your addons, and there is no need to click the upgrade button anymore.
hope this help | 2 | 4 | 0 | I have two databases in odoo DB1 and DB2. I made some changes to existing modules(say module1 and module2) in DB1 through GUI(web client). All those changes were stored to DB1 and were working correctly when I am logged in through DB1.
Now, I made some changes in few files(in same two modules module1 and module2). These modules need to be upgraded in order to load those changes. So, i logged in from DB2 and upgraded those modules. My changes in file loaded correctly and were working correctly when I am logged in through DB2.
But those file changes were loaded only for DB2 and not for DB1.
So, I wanted to know:
How does upgrading of a module work? Does it upgrade only the database through which the user is logged in and upgraded the module?
And if that is so, is there a way that I can upgrade my module while retaining all the previous changes that I made through the GUI in that same module?
What are the things that are changed when a module is upgraded? | How upgrading of a Odoo module works? | 0.379949 | 1 | 0 | 1,295 |