Q_Id (int64) | CreationDate (string) | Users Score (int64) | Other (int64) | Python Basics and Environment (int64) | System Administration and DevOps (int64) | Tags (string) | A_Id (int64) | AnswerCount (int64) | is_accepted (bool) | Web Development (int64) | GUI and Desktop Applications (int64) | Answer (string) | Available Count (int64) | Q_Score (int64) | Data Science and Machine Learning (int64) | Question (string) | Title (string) | Score (float64) | Database and SQL (int64) | Networking and APIs (int64) | ViewCount (int64) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
48,656,958 | 2018-02-07T06:00:00.000 | 0 | 1 | 0 | 0 | python,probability,blockchain | 48,712,283 | 1 | false | 0 | 0 | I figured this out, the solution is below:
int(hashlib.sha256(block_header + userID).hexdigest(), 16) / float(2**256)
You convert the hash into an integer, then divide that integer by 2**256. This gives you a decimal between 0 and 1 that can be used in place of a random.random() draw to decide the prize. | 1 | 0 | 0 | I'm building a project that does a giveaway every Monday. There are three prizes: the basic prize, the "lucky" prize, and the jackpot prize. The basic prize is given out 70% of the time, the lucky prize is given out 25% of the time, and the jackpot prize is awarded the remaining 5% of the time.
There are multiple people that get prizes each Monday. Each person in the giveaway gets at least the basic prize.
Right now I'm just generating a random number and then assigning the prizes to each participant on my local computer. This works, but the participants have to trust that I'm not rigging the giveaway.
I want to improve this giveaway by using the blockchain to be the random number generator. The problem is that I don't know how to do this technically.
Here is my start at figuring out how to do it: When the giveaway is created, a block height is defined as the source of randomness. Each participant has a userID. When the specific block number is found, the block hash is concatenated with the userID and then hashed. The resulting hash is then "ranged" across the odds defined in the giveaway.
The part I can't figure out is how to do the "ranging". I think it might involve the modulo operator, but I'm not sure. If the resulting hash falls into the 5% range, then that user gets the jackpot prize. If the hash falls into the 70% range, then he gets the basic prize, etc. | Assigning giveaways based on cryptocurrency block header | 0 | 0 | 0 | 27 |
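A minimal sketch of the approach described in the answer above, assuming hypothetical block-hash and user-ID values and the 70/25/5 odds from the question:

```python
import hashlib

def assign_prize(block_hash: bytes, user_id: bytes) -> str:
    # Hash the block hash concatenated with the user ID, then map
    # the digest onto a decimal in [0, 1).
    digest = hashlib.sha256(block_hash + user_id).hexdigest()
    r = int(digest, 16) / float(2**256)
    # "Range" the decimal across the giveaway odds:
    # 5% jackpot, 25% lucky, 70% basic.
    if r < 0.05:
        return "jackpot"
    if r < 0.30:
        return "lucky"
    return "basic"

print(assign_prize(b"example_block_hash", b"user42"))  # made-up inputs
```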
48,658,738 | 2018-02-07T08:06:00.000 | 2 | 0 | 1 | 0 | python-3.x,installation,windows-10,anaconda,conda | 55,585,726 | 2 | false | 0 | 0 | I had the same issue in installation.
Issue: anaconda.exe file not found
Solution that worked for me: installed with admin rights, after which anaconda.exe was found in the installation folder and the Start menu.
Status: Resolved | 1 | 2 | 0 | conda.exe was missing from my Continuum\anaconda3\Scripts folder, so the conda command doesn't work. This is not a PATH problem. I then uninstalled Anaconda, rebooted, and installed the latest version of Anaconda that I had just downloaded. The installation was done without admin rights. After that I still have conda missing from that folder.
What can I do? Is there a way to install conda eventually? Is it better to open an issue?
Windows 10, anaconda 5.01
PS: I had previously installed Anaconda, which I believe was installed with admin rights for all users. Could that cause such a problem? | Anaconda installed without conda.exe | 0.197375 | 0 | 0 | 19,450 |
48,659,860 | 2018-02-07T09:13:00.000 | 1 | 0 | 1 | 0 | python-3.x,jupyter-notebook | 57,488,016 | 2 | false | 1 | 0 | You can do: !jt -l
To change the theme: !jt -t 'themes name' and reload | 2 | 1 | 0 | I have installed jupyterthemes by pip install --upgrade jupyterthemes under Windows7. But when I type in cmd jt -l,it turns out jt:command not found.
How can I solve this problem? | jupyterthemes jt command not found | 0.099668 | 0 | 0 | 599 |
48,659,860 | 2018-02-07T09:13:00.000 | 0 | 0 | 1 | 0 | python-3.x,jupyter-notebook | 68,090,027 | 2 | false | 1 | 0 | Exit() python or ipython interpreter and try again e.g on PC using Anaconda prompt: (base) C:\Users\YourAccount>jt -l that should work if the themes pack was installed. That is what worked for me when I encountered this issue. | 2 | 1 | 0 | I have installed jupyterthemes by pip install --upgrade jupyterthemes under Windows7. But when I type in cmd jt -l,it turns out jt:command not found.
How can I solve this problem? | jupyterthemes jt command not found | 0 | 0 | 0 | 599 |
48,664,649 | 2018-02-07T13:07:00.000 | 2 | 0 | 0 | 0 | python,tensorflow,keras,conv-neural-network,data-processing | 48,674,801 | 1 | true | 0 | 0 | In fact - you need to reshape a single data point to have a 3D shape - as keras expects your dataset to have shape (number examples, width, height, channels). If you don't wont to make your image RGB - you can simply leave it with only one channel (and interpret it as greyscale channel). | 1 | 1 | 1 | I am interested in training a Keras CNN and I have some data in the form of 2D matrices (e.g. width x height). I normally represent, or visualize the data like a heatmap, with a colorbar.
However, in training the CNN and formatting the data input, I'm wondering if I should keep this matrix as a 2D matrix, or convert it into an RGB image that is essentially a 3D matrix?
What is the best practice and some considerations people should take into account? | Keras Training a CNN - Should I Convert Heatmap Data As Image or 2D Matrix | 1.2 | 0 | 0 | 691 |
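A quick sketch of the reshape described in the answer above, using a made-up stack of heatmaps:

```python
import numpy as np

# Hypothetical dataset: 500 heatmaps, each a (width, height) matrix
data = np.random.rand(500, 64, 64)

# Add a trailing channel axis so Keras sees
# (num_examples, width, height, channels) with a single channel.
data = data[..., np.newaxis]
print(data.shape)  # (500, 64, 64, 1)
```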
48,666,310 | 2018-02-07T14:31:00.000 | 1 | 1 | 0 | 0 | python,cluster-analysis,sequence,bioinformatics | 50,987,919 | 2 | false | 0 | 0 | It also depends on the question for which tool you need(data reduction, otu clustering, making a tree, etc..). These days you see a shift in cluster tools that uses a more dynamic approach instead of a fixed similarity cutoff.
Example:
DADA2
UNOISE
Seekdeep
Fixed clustering:
CD-HIT
uclust
vsearch | 1 | 1 | 0 | I am trying to find a new method to cluster sequence data. I implemented my method and got an accuracy rate for it. Now I should compare it with available methods to see whether it works as I expected or not.
Could you tell me which methods are the most widely used in the bioinformatics domain, and which Python packages correspond to those methods? I am an engineer and have no idea which methods in this field are the most accurate ones to compare my method against. | bioinformatics sequence clustering in Python | 0.099668 | 0 | 0 | 1,216 |
48,667,581 | 2018-02-07T15:33:00.000 | 3 | 0 | 1 | 1 | python,ubuntu,anaconda,conda | 51,832,344 | 2 | true | 0 | 0 | You can create a Python 3.7 environment via:
$conda create -n Py37 python=3.7
This will create a new anaconda environment named Py37 containing the python=3.7 package.
Afterwards, you can activate the environment by running:
$conda activate Py37
and it should work right out of the box! | 1 | 1 | 0 | I have anaconda installed on my ubuntu 17.10. It works fine for python 2.7 but I can't use it in python 3.7. Is there a way I can add python3.7 to the anaconda path? | Adding python3.7 to Anaconda path | 1.2 | 0 | 0 | 2,945 |
48,668,031 | 2018-02-07T15:54:00.000 | 1 | 0 | 0 | 0 | python,tensorflow,neural-network,deep-learning,classification | 48,668,122 | 2 | true | 0 | 0 | There are many ways to import images for training, you can use Tensorflow but these will be imported as Tensorflow objects, which you won't be able to visualize until you run the session.
My favorite tool for importing images is skimage.io.imread. The imported images will have shape (height, width, channels).
Or you can use importing tool from scipy.misc.
To resize images, you can use skimage.transform.resize.
Before training, you will need to normalize all the images to have the values between 0 and 1. To do that, you simply divide the images by 255.
The next step is to one hot encode your labels to be an array of 0s and 1s.
Then you can build and train your CNN. | 1 | 0 | 1 | I am new to Tensorflow and to implementing deep learning. I have a dataset of images (images of the same object).
I want to train a Neural Network model using python and Tensorflow for object detection.
I am trying to import the data to Tensorflow but I am not sure what is the right way to do it.
Most of the tutorials available online use public datasets (e.g. MNIST), whose import is straightforward but not helpful in the case where I need to use my own data.
Is there a procedure or tutorial that I can follow? | How to import data into Tensorflow? | 1.2 | 0 | 0 | 864 |
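A minimal sketch of the loading pipeline described in the answer above, with made-up file paths and labels:

```python
import numpy as np
from skimage.io import imread
from skimage.transform import resize

paths = ["images/img_0.png", "images/img_1.png"]  # placeholder paths
labels = np.array([0, 1])                          # placeholder labels

# Load, resize to a fixed shape, and normalize to [0, 1].
# skimage.transform.resize already returns floats in [0, 1];
# if you load with another tool, divide by 255 instead.
images = np.stack([resize(imread(p), (128, 128)) for p in paths])

# One-hot encode the labels
num_classes = 2
one_hot = np.eye(num_classes)[labels]
```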
48,668,686 | 2018-02-07T16:27:00.000 | 4 | 0 | 0 | 0 | python,apache-beam | 48,674,833 | 1 | true | 0 | 0 | PCollection is simply a logical node in the execution graph and its contents are not necessarily actually stored anywhere, so this is not possible directly.
However, you can ask your pipeline to write the PCollection to a file (e.g. convert elements to strings and use WriteToText with num_shards=1), run the pipeline and wait for it to finish, and then read that file from your main program. | 1 | 2 | 1 | I am using Apache-Beam with the Python SDK.
Currently, my pipeline reads multiple files, parses them, and generates pandas dataframes from their data.
Then, it groups them into a single dataframe.
What I want now is to retrieve this single fat dataframe, assigning it to a normal Python variable.
Is it possible to do this? | How to retrieve the content of a PCollection and assign it to a normal variable? | 1.2 | 0 | 0 | 1,669 |
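A rough sketch of the workaround described in the answer above (write the PCollection out with a single shard, then read the file back in the main program); the element values and output prefix are placeholders:

```python
import apache_beam as beam

with beam.Pipeline() as p:
    (p
     | beam.Create([{"a": 1}, {"a": 2}])          # stand-in for the real pipeline
     | beam.Map(str)                               # convert elements to strings
     | beam.io.WriteToText("out", num_shards=1))   # single output shard

# After the pipeline has finished, read the shard back into a normal variable.
with open("out-00000-of-00001") as f:
    rows = f.read().splitlines()
```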
48,669,430 | 2018-02-07T17:04:00.000 | 2 | 0 | 1 | 1 | python-3.x,installation,windows-10,lpsolve,openturns | 54,786,256 | 1 | false | 0 | 0 | lpsolve is available for python 3.x as well on conda-forge:
conda config --add channels conda-forge
conda install openturns lpsolve55 | 1 | 2 | 0 | I want to use the packages lpsolve55 and openturns but they are not working on python 3.6, is there any way to install them for this version of python? | how can I install lpsolve55 and openturns for python 3.6 in windows 10? | 0.379949 | 0 | 0 | 438 |
48,670,780 | 2018-02-07T18:20:00.000 | 0 | 0 | 0 | 0 | python,pandas,csv,encoding,iso-8859-1 | 48,681,568 | 1 | false | 0 | 0 | Basically the column looks like this
Column_ID
10
HGF6558
059
KP257
0001 | 1 | 0 | 1 | I have a problem with reading in a csv with an id field with mixed dtypes from the original source data, i.e. the id field can be 11, 2R399004, BL327838, 7 etc. but the vast majority of them being 8 characters long.
When I read it with multiple versions of pd.read_csv and encoding='iso-8859-1' it always converts the 7 and 11 to 00000007 or the like. I've tried using utf-8 but I get the following error:
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xc9 in position 40: unexpected end of data
I have tried setting the dtype={'field': object}, and to string, and various iterations of latin-1 and the like, but it continually does this.
Is there any way to get around this error, without going through every individual file and fixing the dtypes? | Pandas read csv adds zeros | 0 | 0 | 0 | 85 |
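A short sketch of forcing the ID column to be read as text so leading zeros and mixed values survive; the file name and column name below are placeholders:

```python
import pandas as pd

# Read the ID column as strings so values like "0001" keep their form.
df = pd.read_csv("data.csv", encoding="iso-8859-1", dtype={"Column_ID": str})
```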
48,671,331 | 2018-02-07T18:56:00.000 | -1 | 0 | 0 | 0 | python,mysql,pip,installation | 48,671,377 | 2 | true | 1 | 0 | You should be able to type it in the command line for your operating system (ie. CMD/bash/terminal) as long as you have pip installed and the executable location is in your PATH. | 1 | 0 | 0 | Am implementing a sign up using python & mysql.
I am getting the error no module named flask.ext.mysql, and research implies that I should install Flask first. They say it's very simple: you simply type
pip install flask-mysql
but where do I type this? In the MySQL command line for my database, or in the Python app? | Where do i enter pip install flask-mysql? | 1.2 | 1 | 0 | 702 |
48,673,104 | 2018-02-07T20:57:00.000 | 2 | 0 | 0 | 0 | c#,python,c++,opencv,object-detection | 48,678,366 | 3 | false | 0 | 0 | What features are you using to detect your books? Are you training a CNN and deploying it with OpenCV? In that case adding rotation image augmentation to your training would make it easy to detect rotated books.
If you are using traditional computer vision techniques instead, you can try to use some rotation invariant feature extractors like SURF, however, the results will not be as good as using CNNs which are now the state of the art for this kind of problems. | 1 | 5 | 1 | I have been training OpenCV classifier for recognition of books.The requirement is recognize book from an image. I have used 1000+ images and OpenCV is able to detect books with no rotation. However, when I try to detect books with rotations it does not work properly.So I am wondering if their anyway to detect objects with rotations in images using OpenCV? | How to detect rotated object from image using OpenCV? | 0.132549 | 0 | 0 | 5,762 |
48,674,084 | 2018-02-07T22:08:00.000 | 0 | 0 | 0 | 0 | python,gensim,lda,topic-modeling | 49,749,946 | 1 | false | 0 | 0 | In LDA all words are part of all topics, but with a different probability. You could define a minimum probability for your words to print, but I would be very surprised if mallet didn't come up with at least a couple of "duplicate" words across topics as well. Make sure to use the same parameters for both gensim and mallet. | 1 | 0 | 1 | I am using gensim lda for topic modeling and getting the results like so:
Topic 1: word1 word2 word3 word4
Topic 2: word4 word1 word2 word5
Topic 3: word1 word4 word5 word6
However, using Mallet on the same LDA does not produce duplicate words across topics. I have ~20 documents with >1000 words each that I train the LDA on. How can I get rid of words appearing across multiple topics? | Words appearing across all topics in lda | 0 | 0 | 0 | 675 |
48,674,966 | 2018-02-07T23:20:00.000 | 0 | 0 | 1 | 0 | python,virtual-machine,virtualbox,jupyter | 51,632,871 | 1 | false | 1 | 0 | Yes, it's possible.
To begin with, your host PC must have a shared folder with your VM (explained elsewhere).
A method I do is to
First create a folder myfolder for the (.pdf or .csv) files.
Secondly, load the folder by: going to the created myfolder --> right-click on the myfolder --> choose "Open in Terminal".
Thirdly, at the terminal it should look like this: myfolder]$. Then I type "jupyter notebook" as such: [username@localhost myfolder]$ jupyter notebook and hit ENTER.
The folder should be opened and containing my .pdf or .csv files. | 1 | 0 | 0 | Is it possible to upload a file (say a .pdf or .csv) saved in my hard disk into Jupyter running on VirtualBox? | Uploading files to Jupyter running on VM VitualBox | 0 | 0 | 0 | 277 |
48,675,077 | 2018-02-07T23:32:00.000 | 0 | 0 | 0 | 1 | python | 48,675,155 | 1 | false | 0 | 0 | Use os.system() or give a folder path to change dir in os.chdir() not an executable file (why would you want to change into an executable? You can't do that) | 1 | 1 | 0 | Context: Basically I want to create a python program that asks me what Software I want to run, as soon as I start my computer.
Useful code: os.chdir(r'C:/Program Files (x86)/Google/Chrome/Application/chrome.exe'). And this is the error I got:
FileNotFoundError Traceback (most recent call last)
in ()
----> 1 os.chdir(r"C:/Program Files (x86)/Google/Chrome/Application/chrome.exe'")
FileNotFoundError: [WinError 2] Das System kann die angegebene Datei nicht finden: "C:/Program Files (x86)/Google/Chrome/Application/chrome.exe'"
The translation for "Das System kann die angegebene Datei nicht finden:" is the system was unable to find the file.
Problem: My problem is that I can't run any ".exe", or any type of files, because of the white spaces in "C:\Program Files (x86) ", so my question would be how can I bypass that? / What could I use to make it work?
PS: I searched, on various forums, but this problem doesn't seem to be very common... Or I didn't search enough. | How to run an ".exe" file from "C:\Program Files (x86)" using python? | 0 | 0 | 0 | 621 |
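A brief sketch of the distinction made in the answer above: os.chdir() only changes into a directory, while launching an executable (even one whose path contains spaces) is done with subprocess; the Chrome path is just an example:

```python
import subprocess

# Launch an executable whose path contains spaces: pass the full path
# as a single list element, so no shell quoting is needed.
chrome = r"C:\Program Files (x86)\Google\Chrome\Application\chrome.exe"
subprocess.Popen([chrome])

# os.chdir() is only for changing into a *directory*, e.g.:
# os.chdir(r"C:\Program Files (x86)\Google\Chrome\Application")
```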
48,675,264 | 2018-02-07T23:51:00.000 | 0 | 0 | 0 | 0 | python,ftp,ftplib | 48,761,226 | 1 | false | 0 | 0 | No. Using FTP.retrlines('LIST') is the only solution with FTP protocol.
If you need a faster approach, you would have to use another interface. Like shell, some web API, etc. | 1 | 0 | 0 | I was wondering if there was an effective way to recursively search for a given filename using ftplib. I know that I could use FTP.cwd() and FTP.retrlines('LIST), but I think that it would be rather repetitive and inefficient. Is there something that lets me find files on an FTP server in Python, such as how os can do os.path.walk()? | Finding a file in an FTP server using ftplib | 0 | 0 | 0 | 198 |
48,675,954 | 2018-02-08T01:13:00.000 | 2 | 0 | 0 | 0 | python,random,statistics,probability,probability-density | 48,676,209 | 1 | false | 0 | 0 | There a few different paths one can follow here.
(1) If P(x,y,z) factors as P(x,y,z) = P(x) P(y) P(z) (i.e., x, y, and z are independent) then you can sample each one separately.
(2) If P(x,y,z) has a more general factorization, you can reduce the number of variables that need to be sampled to whatever's conditional on the others. E.g. if P(x,y,z) = P(z|x, y) P(y | x) P(x), then you can sample x, y given x, and z given y and x in turn.
(3) For some particular distributions, there are known ways to sample. E.g. for multivariate Gaussian, if x is a sample from a mean 0 and identity covariance Gaussian (i.e. just sample each x_i as N(0, 1)), then y = L x + m has mean m and covariance S = L L' where L is the lower-triangular Cholesky decomposition of S, which must be positive definite.
(4) For many multivariate distributions, none of the above apply, and a more complicated scheme such as Markov chain Monte Carlo is applied.
Maybe if you say more about the problem, more specific advice can be given. | 1 | 2 | 1 | I have a multivariate probability density function P(x,y,z), and I want to sample from it. Normally, I would use numpy.random.choice() for this sort of task, but this function only works for 1-dimensional probability densities. Is there an equivalent function for multivariate PDFs? | Sampling from a multivariate probability density function in python | 0.379949 | 0 | 0 | 965 |
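A small sketch of point (3) from the answer above, with a made-up mean and covariance:

```python
import numpy as np

m = np.array([1.0, -2.0, 0.5])                 # target mean (made up)
S = np.array([[2.0, 0.3, 0.1],                 # target covariance, positive definite
              [0.3, 1.0, 0.2],
              [0.1, 0.2, 0.5]])

L = np.linalg.cholesky(S)                      # lower-triangular factor, S = L @ L.T
x = np.random.standard_normal((3, 10000))      # N(0, I) samples
y = (L @ x).T + m                              # samples with mean m, covariance S

print(y.mean(axis=0))
print(np.cov(y.T))                             # should be close to S
```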
48,676,037 | 2018-02-08T01:26:00.000 | 0 | 0 | 0 | 0 | python-3.x,amazon-web-services,h5py | 48,676,038 | 1 | false | 1 | 0 | just solve it using sudo pip install hypy. | 1 | 0 | 0 | i have the below error while running my code on amazon ec2 instance and when trying to import the h5py package i have permission denied error
ImportError: load_weights requires h5py | Import error requires h5py | 0 | 0 | 1 | 539 |
48,676,157 | 2018-02-08T01:42:00.000 | 2 | 0 | 1 | 0 | python,virtual-machine,virtualbox,jupyter,sage | 48,676,340 | 1 | false | 0 | 0 | Just worked out a solution (pretty obvious): a shared folder as to be defined in VirtualBox with the "auto-mount" option ticked. Then the 'upload' button in Jupyter can be used. | 1 | 1 | 0 | Is it possible to access a file stored in my hard disk from SageMath/Jupyter running on VirtualBox? | How to access files stored in the hard disk using SageMath/Jupyter running on Virtual Machine | 0.379949 | 0 | 0 | 204 |
48,677,643 | 2018-02-08T04:38:00.000 | 0 | 0 | 0 | 0 | python,django | 48,680,841 | 1 | false | 1 | 0 | Load balancing is not anything to do with Django. It is something you implement at a much higher layer, via servers that sit in front of the machines that Django is running on.
However, if you've just started creating your site, it is much too early to start thinking about this. | 1 | 2 | 0 | I have a Django server which responds to a call like this 127.0.0.1:8000/ao/. Before adding further applications to the server, I would like to experiment with the load balancing that can be used with Django.
Can anyone please explain how to implement load balancing? I spent some time understanding the architecture but was unable to find a solution.
I work on Windows OS and I am new to servers. | load balancing in django | 0 | 0 | 0 | 2,681 |
48,680,155 | 2018-02-08T07:45:00.000 | 0 | 1 | 0 | 0 | c#,python,mono | 48,683,700 | 2 | true | 0 | 0 | Is there a solution to pass the data to the Python script by avoiding
file writing/reading?
I can only think of two approaches:
You could open a socket for communication between both programs through localhost. The C# program would send data to that socket, and the Python program would read it.
Write both programs in Python or C#. The single program would capture
data and process it.
I've read something about memory mapping, but I do not know if it's a
solution.
Memory mapping is about loading a file in memory and, once the work on it is finished, writing it back at once. Since you have two different processes, I don't think it is applicable here.
Hope this helps. | 1 | 0 | 0 | I have a C# (Mono) app that gets data from a sensor and passes it to a Python script for processing. It runs on a Raspberry Pi/Raspbian. The script is started at program initialization and is running in background waiting for data to be passed to it.
The data consists of 1000 samples, 32 bit double (in total, 32 kbits). Currently I write the data in a file and pass the file path to the script.
I was thinking of speeding up processing time by passing the data directly to the script, avoiding writing the file. However, I see that the number of characters in command line arguments is limited.
Is there a solution to pass the data to the Python script by avoiding file writing/reading? I've read something about memory mapping, but I do not know if it's a solution. | How to pass large data to Python script? | 1.2 | 0 | 0 | 309 |
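A minimal sketch of the localhost-socket approach suggested in the answer above, showing only the Python (receiver) side; the port number and the fixed batch size of 1000 doubles are assumptions taken from the question:

```python
import socket
import struct

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 5005))       # the C# app would connect to this port
srv.listen(1)
conn, _ = srv.accept()

# Read exactly 1000 doubles (8000 bytes) per batch.
payload = b""
while len(payload) < 8000:
    chunk = conn.recv(8000 - len(payload))
    if not chunk:
        break
    payload += chunk

samples = struct.unpack("<1000d", payload)   # little-endian 64-bit doubles
print(len(samples), samples[:3])
```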
48,680,696 | 2018-02-08T08:19:00.000 | 0 | 0 | 1 | 0 | python,anaconda,jupyter-notebook,jupyter | 54,463,928 | 2 | false | 0 | 0 | You need to change your keyboard language. I had the same issue while using "U.S. International - PC" keyboard. When I changed to "US" it starts working with quotes as well. | 2 | 1 | 0 | I am new to python. I am using Jupyter notebook through anaconda.
I do not get the auto quotes while writing code.("",'')
But auto braces or parentheses works.
Could you please help me fix this. | I don't get auto quotes in jupyter notebook | 0 | 0 | 0 | 674 |
48,680,696 | 2018-02-08T08:19:00.000 | 1 | 0 | 1 | 0 | python,anaconda,jupyter-notebook,jupyter | 70,317,144 | 2 | false | 0 | 0 | It's part of the "Auto Close Brackets" setting, which is a toggle menu item that you should be able to find in the "Settings" menu. | 2 | 1 | 0 | I am new to python. I am using Jupyter notebook through anaconda.
I do not get the auto quotes while writing code.("",'')
But auto braces or parentheses works.
Could you please help me fix this. | I don't get auto quotes in jupyter notebook | 0.099668 | 0 | 0 | 674 |
48,680,916 | 2018-02-08T08:32:00.000 | 1 | 0 | 1 | 0 | python,datetime | 48,680,951 | 1 | true | 0 | 0 | DateTime.Today only gives today's date and default time as 12:00:00.
DateTime.Today : 1/1/2018 12:00:00 AM.
DateTime.Now gives current time of running thread with today's date. (Date and Time of the computer where code is running)
DateTime.Now : 1/1/2018 8:49:52 AM. | 1 | 3 | 0 | Is there any difference between datetime.today() and datetime.now()?
Using them lately has proved that there is none. | What is the difference between datetime.now() and datetime.today() in Python | 1.2 | 0 | 0 | 5,800 |
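Note that in Python's own datetime module, both calls return the current local date and time; the practical difference is that now() accepts a timezone argument. A quick check (the outputs in the comments are illustrative):

```python
from datetime import datetime, timezone

print(datetime.today())             # e.g. 2018-02-08 08:32:10.123456
print(datetime.now())               # same local wall-clock time
print(datetime.now(timezone.utc))   # timezone-aware UTC timestamp; today() cannot do this
```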
48,682,817 | 2018-02-08T10:09:00.000 | 1 | 0 | 0 | 0 | python,variables,flask | 48,683,430 | 1 | true | 1 | 0 | app.config is global. Don't store user-specific configuration there. Likely you'll want to use a cookie or a URL parameter for the display language. | 1 | 0 | 0 | I am new and just start learning flask. In my project, I need all the pages are printed in two languages, say 'LAN_A' and 'LAN_B'. So I need a variant to save it. Now I save it in app.config['language'] = 'LAN_A'/'LAN_B'. When debugging locally, I browse the page by two devices (thus also two IP). In device1, I change the language then I found all the pages in device2 will also be changed(after the setting in device1). It seems understandable since there is only one main.py is running.
My question: is it because my approach is inappropriate? Or, in the future real service, is there some method to let each user run their own main.py? | Flask variants can be visible by different users? | 1.2 | 0 | 0 | 21 |
48,683,621 | 2018-02-08T10:47:00.000 | 0 | 0 | 0 | 0 | python,opencv,image-processing | 48,683,822 | 3 | false | 0 | 0 | As long as you don't change the extension of the image file, the pixel values don't change because they're stored in memory and your display or printer are just the way you want to see the image and often you don't get the same thing because it depends on the technology and different filters applied to you image before they're displayed or printed.. | 1 | 0 | 1 | Assume we are reading and loading an image using OpenCV from a specific location on our drive and then we read some pixels values and colors, and lets assume that this is a scanned image.
Usually, if we open a scanned image we will notice some differences between the printed image (before scanning) and the image as we see it on the display screen.
The question is:
The pixel color values that we get from OpenCV: are they in our display screen's color space, or do we get exactly the same colors as in the scanned image (printed version)? | RGB in OpenCV. What does it mean? | 0 | 0 | 0 | 135 |
48,683,976 | 2018-02-08T11:04:00.000 | 1 | 0 | 0 | 0 | python-3.x,graphite,grafana-api | 62,026,844 | 1 | false | 0 | 0 | The only way I found in grafana 7.1 was to:
Open the dashboard and then inspect the panel
Open the query tab and click on refresh
Use the url and parameters on your own query to the api
note: First you need to create an API key in the UI with the proper role and add the bearer to the request headers | 1 | 4 | 0 | Need to extract specific data from grafana dashboard. Grafana is connected to graphite in the backend. Seems there is no API to make calls to grafana directly.
Any help?
Ex: I need to extract AVG CPU value from graph of so and so server. | How can we extract data from grafana dashboard? | 0.197375 | 0 | 1 | 2,210 |
48,685,070 | 2018-02-08T12:04:00.000 | 2 | 0 | 0 | 0 | python,http,request,tornado | 48,722,882 | 1 | false | 1 | 0 | The content-type header is what tells the browser how to display a given file. It doesn't know how to display text/csv, so it has no choice but to treat it as an opaque download. If you want the file to be displayed as plain text, you need to tell the browser that it has content-type text/plain.
If you need to tell other clients that the content type is text/csv, you need some way to distinguish clients that understand that content type from those that do not. The best way to do this is with the Accept request header. Clients that understand CSV would send Accept: text/csv in their request, and then the server would respond with content-type text/plain or text/csv depending on whether CSV appears in the accept header.
Using the Accept header may require modifications to the client, which may or may not be possible for you. If you can't update the clients to send the Accept header, then you'll have to use a hackier workaround. You can either use a different url (add ?type=plain or ?type=csv) or try to detect browsers based on their user agent. | 1 | 0 | 0 | My server in Python (Tornado) send a csv content on a GET request.
I want to specify the content type of the response as "text/csv", but when I do this the file is downloaded when I send the GET request from my browser.
How can I specify the header "Content-Type: text/csv" without making it a downloadable file, and instead just show the content in my browser? | GET response - Do NOT send a downlaodable file | 0.379949 | 0 | 1 | 50 |
48,685,075 | 2018-02-08T12:04:00.000 | 2 | 0 | 1 | 0 | python,pyinstaller | 48,685,532 | 1 | true | 0 | 0 | I feel like pyinstaller should include a FAQ page that states...
Do not use Anaconda/Miniconda, use virtual environment or
default python
… | 1 | 2 | 0 | Pyinstaller keeps importing scipy when I exclude it through exclude module. That did not work.
I am using Miniconda on Windows. What is going on here? | Importing modules I do not use | 1.2 | 0 | 0 | 32 |
48,686,826 | 2018-02-08T13:38:00.000 | 6 | 0 | 1 | 0 | reactjs,python-decorators,higher-order-components | 48,688,832 | 3 | false | 0 | 0 | Two differences would be decorators are used to mutate the variable while HOC's are recommended not to. Another is specific to React, HOC's are supposed to render a component, while decorators can return different things depending on implementation. | 1 | 32 | 0 | Can someone explain what is the difference between these two?
I mean, apart from the syntactic difference, are both of these techniques used to achieve the same thing (which is to reuse component logic)? | React js - What is the difference betwen HOC and decorator | 1 | 0 | 0 | 12,156 |
48,687,566 | 2018-02-08T14:13:00.000 | 0 | 0 | 0 | 0 | python,django,csv | 48,688,068 | 1 | false | 1 | 0 | You state the the database is blank, if so you're going to have to have at least 1 record in the parent table, where the primary key is defined. After that you can use the parent.primary-key value in your CSV file as a dummy value for your foreign key. That should at least allow you to populate the table. Later on, as your parent table grows you're going to have to write some sort of script to add the correct parent.primary-key value to each record in the CSV file and then import it. | 1 | 0 | 0 | I'm a little new to this, and figuring it out as I go. So far so good, however I have having trouble importing a field that has a foreign key. I have about 10,000 rows in a csv file that I want to add to the database. As you can imagine, entering 10,000 items at a time is too labour intensive. When I try for an import I get this error: ValueError: invalid literal for int() with base 10
This is because (i think) it is trying to match the related model with the id. However the database is empty, so there is no id, and furthermore, the "author" field in my csv (the one with the foreign key) doesn't have an id yet. ( i assume this is created when the record is). Any suggestions?
Sorry in advance for the newbie question. | CSV import into empty Django database with foreign key field | 0 | 1 | 0 | 423 |
48,690,736 | 2018-02-08T16:51:00.000 | 0 | 0 | 0 | 0 | python,mongodb,python-2.7,pymongo | 48,690,738 | 1 | false | 0 | 0 | It turns out, after some tinkering, that using the MongoClient argument socketTimeoutMS, if set to something quicker than that observed by AutoReconnect, will supersede AutoReconnect.
Contrary to my initial concern, this does not interfere with long running queries as the socket connected just fine.
I discovered that after the first NetworkTimeout exception is raised as a result of this setting it can take 10 seconds or so to attempt again. That is solved by also passing the connectTimeoutMS argument to MongoClient with, most likely, the same value as that of socketTimeoutMS.
Please post a response here if anyone discovers any caveats to this solution. | 1 | 1 | 0 | I'm using a 3-node MongoDB replica set and connecting to it using Pymongo v3.3.1.
In testing the handling of errors like AutoReconnect and ServerSelectionTimeout and such I find that I can't (safely/reliably) control how long it takes to raise an AutoReconnect exception.
If I instantiate MongoClient with the argument serverSelectionTimeoutMS set to, for instance, 2000, I do see the ServerSelectionTimeout exceptions return in about 2 seconds. However, when the conditions are just right to trigger an AutoReconnect it always takes at least 20 seconds, sometime closer to 30 seconds!
How can I constrain this behavior? I'm shooting for relatively high availability and want to detect network/replica set anomalies and commence my retry logic rather rapidly. | Is there any way to decrease the time it takes for a pymongo.errors.AutoReconnect to occur? | 0 | 0 | 0 | 58 |
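A minimal sketch of the client configuration described in the answer above, using a hypothetical replica-set URI (all timeouts are in milliseconds):

```python
from pymongo import MongoClient

client = MongoClient(
    "mongodb://host1,host2,host3/?replicaSet=rs0",
    serverSelectionTimeoutMS=2000,
    socketTimeoutMS=2000,
    connectTimeoutMS=2000,
)
```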
48,692,101 | 2018-02-08T18:10:00.000 | -1 | 0 | 1 | 0 | python,colors,detection | 48,692,751 | 1 | false | 0 | 0 | It depends on what module you are working with. I.e; Turtle Graphics, Tkinter, etc. | 1 | 0 | 0 | Hi,
i am working on a project at the time and i am trying to see if a speciffic area on screen matches a speciffic color of my choice. i've tried a couple of things but none of them seemed to be working. is there any easy way to do this in python 2.7? | Easy Color Detection in python 2.7 | -0.197375 | 0 | 0 | 25 |
48,692,483 | 2018-02-08T18:32:00.000 | 0 | 1 | 0 | 0 | python,amazon-s3,boto,boto3 | 63,806,918 | 3 | false | 0 | 0 | It's true that the AWS CLI uses boto, but the cli is not a thin wrapper, as you might expect. When it comes to copying a tree of S3 data (which includes the multipart chunks behind a single large file), it is quite a lot of logic to make a wrapper that is as thorough and fast, and that does things like seamlessly pick up where a partial download has left off, or efficiently sync down only the changed data on the server.
The implementation in the awscli code is more thorough than anything in the Python or Java SDKs, as far as I have seen. I've seen several developers who were too proud to call the CLI from their code, but zero thus far all such attempts I have seen have failed to measure up. Love to see a counter example, though. | 1 | 3 | 0 | I have packages stored in s3 bucket. I need to read metadata file of each package and pass the metadata to program. I used boto3.resource('s3') to read these files in python. The code took few minutes to run. While if I use aws cli sync, it downloads these metafiles much faster than boto. My guess was that if I do not download and just read the meta files, it should be faster. But it isn't the case. Is it safe to say that aws cli is faster than using boto? | Is aws CLI faster than using boto3? | 0 | 0 | 1 | 4,228 |
48,693,266 | 2018-02-08T19:23:00.000 | 0 | 0 | 0 | 0 | python,c++,opencv | 48,694,488 | 2 | false | 0 | 0 | The "without calibration" bit dooms you, sorry.
Without knowing the focal length (or, equivalently, the field of view) you cannot "convert" a pixel into a ray.
Note that you can sometimes get an approximate calibration directly from the camera - for example, it might write a focal length for its lens into the EXIF header of the captured images. | 2 | 0 | 1 | I am trying to convert X,Y position of a tracked object in an image to 3D coordinates.
I got the distance to the object based on the size of the tracked object (A marker) but now I need to convert all of this to a 3D coordinate in the space. I have been reading a lot about this but all of the methods I found require a calibration matrix to achieve this.
In my case I don't need a lot of precision but I need this to work with multiple cameras without calibration. Is there a way to achieve what I'm trying to do? | Get 3D coordinates in OpenCV having X,Y and distance to object | 0 | 0 | 0 | 630 |
48,693,266 | 2018-02-08T19:23:00.000 | 0 | 0 | 0 | 0 | python,c++,opencv | 48,694,562 | 2 | false | 0 | 0 | If you're using some sort of micro controller, it may be possible to point a sensor towards that object that's seen through the camera to get the distance.
You would most likely have to have a complex algorithm to get multiple cameras to work together to return the distance. If there's no calibration, there would be no way for those cameras to work together, as Francesco said. | 2 | 0 | 1 | I am trying to convert X,Y position of a tracked object in an image to 3D coordinates.
I got the distance to the object based on the size of the tracked object (A marker) but now I need to convert all of this to a 3D coordinate in the space. I have been reading a lot about this but all of the methods I found require a calibration matrix to achieve this.
In my case I don't need a lot of precision but I need this to work with multiple cameras without calibration. Is there a way to achieve what I'm trying to do? | Get 3D coordinates in OpenCV having X,Y and distance to object | 0 | 0 | 0 | 630 |
48,694,790 | 2018-02-08T21:07:00.000 | 0 | 0 | 0 | 0 | python,pandas,biopython | 49,092,434 | 5 | false | 0 | 0 | There is no good way to do this.
BioPython alone seems to be sufficient, over a hybrid solution involving iterating through a BioPython object, and inserting into a dataframe | 1 | 1 | 1 | i have a dataset (for compbio people out there, it's a FASTA) that is littered with newlines, that don't act as a delimiter of the data.
Is there a way for pandas to ignore newlines when importing, using any of the pandas read functions?
sample data:
>ERR899297.10000174
TGTAATATTGCCTGTAGCGGGAGTTGTTGTCTCAGGATCAGCATTATATATCTCAATTGCATGAATCATCGTATTAATGC
TATCAAGATCAGCCGATTCT
every entry is delimited by the ">"
data is split by newlines (nominally limited to 80 chars per line, but that limit is not actually respected everywhere) | pandas read csv ignore newline | 0 | 0 | 0 | 9,964 |
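A small sketch of the BioPython-plus-pandas route hinted at in the answer above ("reads.fasta" is a placeholder path):

```python
from Bio import SeqIO
import pandas as pd

# Parse the FASTA with Biopython, then build a dataframe from the records;
# multi-line sequences are joined automatically by the parser.
records = SeqIO.parse("reads.fasta", "fasta")
df = pd.DataFrame({"id": r.id, "seq": str(r.seq)} for r in records)
```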
48,696,556 | 2018-02-08T23:36:00.000 | 0 | 0 | 0 | 0 | python,regex,subset | 48,696,674 | 2 | false | 0 | 0 | For character handling at such micro level the query will end up being clunky with high response time — if you're lucky to write a working one.
This's more of a script kind of operation. | 1 | 0 | 1 | I'm trying to match text strings (gene names) in a column from one file to text strings in a column of another, in order to create a subset the second.
For simplicity, the first will look more or less like this:
hits = ["IL1", "NRC31", "AR", etc.]
However, the column of interest in the second df looks like this:
68 NFKBIL1;NFKBIL1;ATP6V1G2;NFKBIL1;NFKBIL1;NFKBI
236 BARHL2
272 ARPC2;ARPC2
324 MARCH5
...
11302 NFKBIL1;NFKBIL1;ATP6V1G2;NFKBIL1;NFKBIL1;NFKBI
426033 ABC1;IL1;XYZ2
...
425700 IL17D
426295 RAB3IL1
426474 IL15RA;IL15RA
I came up with:
df2[df2.UCSC_RefGene_Name.str.contains('|'.join(hits), na=False)]
But I need to match the gene IL1 when it falls in the middle of the string (e.g. row 426033 above) but not similar genes (e.g. row 426295 above).
How do I use regular expressions to say:
"Match any of the strings in hits when they have ';' or a blank space at either the beginning or the end of the gene name, but not when they have other letters or numbers on either side (which would indicate a different gene with a similar name)?
I also need to exclude any rows with NA in dataframe 2.
Yes, I know there are regex syntax documents, but there are too many moving parts here for me to make sense of them. | Matching genes in string in Python | 0 | 0 | 0 | 271 |
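A sketch of one way to express that rule, assuming the hits list and the df2 dataframe from the question:

```python
import re

hits = ["IL1", "NRC31", "AR"]

# Match a hit only when it is bounded by start/end of string, ';' or whitespace,
# so "ABC1;IL1;XYZ2" matches IL1 but "RAB3IL1" and "IL17D" do not.
pattern = r"(?:^|[;\s])(?:" + "|".join(map(re.escape, hits)) + r")(?=$|[;\s])"

# na=False also drops the NA rows, as required.
subset = df2[df2["UCSC_RefGene_Name"].str.contains(pattern, na=False, regex=True)]
```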
48,696,759 | 2018-02-09T00:01:00.000 | 1 | 1 | 0 | 1 | python-2.7,tryton | 48,704,012 | 1 | false | 0 | 0 | Tryton search modules only in the subdirectory modules from its installation path or for the entry points trytond.modules But it does not use the PYTHONPATH to detect modules.
So you must move the modules under the "modules" directory or you must install the modules with python setup.py install (or python setup.py develop). | 1 | 0 | 0 | tryton V3.8 on MAC. I have a database "steve" with trytond running TCP/IP. When doing database connect in tryton client, I get the following result as the last line in the error window: "IOError: File not found : /site-packages/trytond/modules/product/tryton.cfg"
In ~/.bash_profile, I have:
export PYTHONPATH=~/bryants:$PYTHONPATH
where ~/bryants has a init.py file, and all modules are beneath it.
There is a tryton.cfg file in the ~/bryants/product directory. Why isn't it looking in the ~/bryants directory? Why isn't it being found? | tryton file not found when opening database - looking in site-packages rather than directory in PYTHONPATH | 0.197375 | 0 | 0 | 96 |
48,697,322 | 2018-02-09T01:15:00.000 | 0 | 0 | 0 | 0 | javascript,php,python,html,css | 60,838,059 | 3 | false | 1 | 0 | You can use domdocument and domxpath to parse the html file(you can use php file_get_contents to read the file )
it looks like i can't post links | 1 | 0 | 0 | I have some 1000 html pages. I need to update the names which is present at the footer of every html page. What is the best possible efficient way of updating these html pages instead of editing each name in those html pages one by one.
Edit: Even if we use some sort of scripts, we have to make changes to every html file. One possible way could be using Editor function. Please share if anybody has any another way of doing it. | How to update static content in multiple HTML pages | 0 | 0 | 1 | 1,154 |
48,697,423 | 2018-02-09T01:28:00.000 | 1 | 1 | 0 | 0 | python-3.x,raspberry-pi,google-assistant-sdk | 48,698,224 | 1 | true | 1 | 0 | Yes, you cannot do remote execution of device actions. | 1 | 0 | 0 | Is it possible to trigger a custom device action from another device that is attached to the same account?
e.g.:
Trigger a custom device action registered to one PI from a second PI that doesn't have any custom device actions.
Trigger a custom device action registered to one PI from an Android phone attached to the same email address.
Basically I'm trying to figure out if you have to speak directly to the device that has the custom device action. | Trigger a device action from another device on the same account | 1.2 | 0 | 0 | 95 |
48,697,967 | 2018-02-09T02:41:00.000 | 1 | 0 | 0 | 0 | python,normalization,image-preprocessing | 48,697,994 | 1 | true | 0 | 0 | Without more information about your source code and the packages you're using, this is really more of a data science question than a python question.
To answer your question, a more than satisfactory method in most circumstances it min-max scaling. Simply normalize each coordinate of your images between 0 and 1. Whether or not that is appropriate for your circumstance depends on what you intend to do next with that data. | 1 | 0 | 1 | I am trying to normalize MR image.
There is a negative value in the MR image.
So the MR image was normalized using the Gaussian method, resulting in a negative area.
But i don't want to get negative area.
My question:
What is the normalization method so that there is no negative value?
Thanks in advance | What is the normalization method so that there is no negative value? | 1.2 | 0 | 0 | 424 |
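A tiny sketch of the min-max scaling suggested in the answer above, using a random array as a stand-in for an MR slice:

```python
import numpy as np

def min_max_scale(img):
    # Map the intensities to [0, 1]; negative inputs end up non-negative.
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo)

img = np.random.randn(64, 64)        # stand-in MR slice containing negatives
scaled = min_max_scale(img)
print(scaled.min(), scaled.max())     # 0.0 1.0
```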
48,698,110 | 2018-02-09T03:04:00.000 | 2 | 0 | 0 | 0 | python,rest,http,react-native,server | 48,718,794 | 2 | false | 0 | 0 | You have to create a flask proxy, generate JSON endpoints then use fetch or axios to display this data in your react native app. You also have to be more specific next time. | 1 | 2 | 0 | I'm learning about basic back-end and server mechanics and how to connect it with the front end of an app. More specifically, I want to create a React Native app and connect it to a database using Python(simply because Python is easy to write and fast). From my research I've determined I'll need to make an API that communicates via HTTP with the server, then use the API with React Native. I'm still confused as to how the API works and how I can integrate it into my React Native front-end, or any front-end that's not Python-based for that matter. | How to create a Python API and use it with React Native? | 0.197375 | 0 | 1 | 6,331 |
48,699,357 | 2018-02-09T05:38:00.000 | 0 | 0 | 0 | 0 | python,lmfit | 48,707,014 | 1 | true | 0 | 0 | More detail about what you are actually doing would be helpful. That is, vague questions can really only get vague answers.
Assuming you are doing curve fitting with lmfit's Model class, then once you have your Model and a set of Parameters (say, after a fit has refined them to best match some data), you can use those to evaluate the Model (with the model.eval() method) for any values of the independent variable (typically called x). That allows evaluating on a finer grid or extending past the range of the data you actually used in the fit.
Of course, predicting past the end of the data range assumes that the model is valid outside the range of the data. It's hard to know when that assumption is correct, especially when you have no data ;). "It's tough to make predictions, especially about the future." | 1 | 0 | 1 | I have fitted a curve using lmfit but the trendline/curve is short. Please how do I extend the trendline/curve in both directions because the trendline/curve is hanging. Sample codes are warmly welcome my senior programmers. Thanks. | Extending a trendline in a lmfit plot | 1.2 | 0 | 0 | 213 |
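A small sketch of that evaluation step with a made-up Gaussian dataset (the model choice and grid limits are assumptions):

```python
import numpy as np
from lmfit.models import GaussianModel

x = np.linspace(0, 10, 50)
y = np.exp(-(x - 5)**2 / 2) + 0.05 * np.random.randn(50)

model = GaussianModel()
result = model.fit(y, model.guess(y, x=x), x=x)

# Evaluate the fitted model on a wider, finer grid to extend the trendline.
x_ext = np.linspace(-2, 12, 400)
y_ext = result.eval(x=x_ext)
```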
48,700,373 | 2018-02-09T07:03:00.000 | 0 | 0 | 1 | 0 | python-3.x | 65,051,879 | 4 | false | 0 | 0 | You want to get a subset using indexes. set is an unordered collection with non-duplicated items. The set's pop method removes a random item from the set. So, it is impossible in general, but if you want to remove a limited number of random items (I can't imagine why anybody needs it), you can call pop multiple times in a loop. | 1 | 4 | 0 | Can we do indexing in a python set, to attain an element from a specific index?
Like accessing a certain element from the below set:
st = {'a', 'b', 'g'}
How to return the second element by indexing? | Set indexing python | 0 | 0 | 0 | 16,216 |
48,700,554 | 2018-02-09T07:15:00.000 | 14 | 0 | 0 | 0 | python,rasa-nlu,rasa-core | 49,782,296 | 1 | true | 0 | 0 | RASA NLU is the natural language understanding piece, which is used for taking examples of natural language and translating them into "intents." For example: "yes", "yeah", "yep" and "for sure" would all be translated into the "yes" intent.
RASA CORE on the other hand is the engine that processes the flow of conversation after the intent of the user has already been determined. RASA CORE can use other natural language translators as well, so while it pairs very nicely with RASA NLU they don't both have to be used together.
As an example if you were using both:
User says "hey there" to RASA core bot
Rasa core bot calls RASA NLU to understand what "hey there" means
RASA NLU translates "hey there" into intent = hello (with 85% confidence)
Rasa core receives "hello" intent
Rasa core runs through it's training examples to guess what it should do when it receives the "hello" intent
Rasa core predicts (with 92% confidence) that it should respond with the "utter_hello" template
Rasa core responds to user "Hi, I'm your friendly Rasa bot"
Hope this helps. | 1 | 2 | 1 | I am new to chatbot application and RASA as well, can anyone please help me to understand how should i use RASA NLU with RASA CORE. | How to use RASA NLU with RASA CORE | 1.2 | 0 | 0 | 1,115 |
48,700,571 | 2018-02-09T07:16:00.000 | 0 | 0 | 1 | 0 | python,object,websocket | 48,713,829 | 1 | true | 0 | 0 | The answer to this question comes from @A.Wenn. By using Pickle, I can serialize my objects in one script and save them. From any other portion of my application, I can then import the pickled object and use it as though I had instantiated the object from scratch! This makes my application significantly faster and more pleasant to use.
Who doesn't love pickles!!! | 1 | 0 | 0 | An App I've written in Python instantiates several classes. Other portions of the App live in other Python files and as a result, they must instantiate their own versions of the class before running.
Is there a way that I can instantiate the classes I want to use within the App and keep those classes "Alive" for use by other parts of the Application that live in separate files? Some of these Classes are objects that handle API GET/POST requests and don't have web sockets available for their endpoints. | Instantiate Python Object and Keep Alive for Use in Other Apps | 1.2 | 0 | 0 | 201 |
48,702,061 | 2018-02-09T08:58:00.000 | 0 | 0 | 0 | 0 | python,amazon-web-services,http,server | 48,702,358 | 2 | false | 1 | 0 | This is a commonly used pattern when separating Web Servers and App Servers in traditional Web Application setup, keeping the Web Servers in public subnets (Or keeping internet accessible) and the business rules kept in App Servers in the private network.
However, it also depends on the complexity of the system to justify the separation having multiple servers. One of the advantages of this approach is you can scale these servers separately. | 1 | 0 | 0 | I have two instances. One is on the Public Subnet & the other is on the Private subnet of AWS. In the private system, I am performing some computation. And the public system is acting as the API endpoint.
My total flow idea is like this: When some request comes to the public server, the parameters should be forwarded to the private system, the computation will be done there and the result will be sent back to the public and from there the result will be fed back to the user.
In the private system, some python code is running. I did setup Apache-Flask in the private system. So the idea is when some requests are coming, from the public server the parameters will be extracted and another HTTP request will be fired to the private system. Computation will be done there and the response will return which will return to the client system.
I have two questions: is this a good approach? And is there a better way to implement the overall scenario? | Best way for communicating between two servers | 0 | 0 | 1 | 59 |
48,702,629 | 2018-02-09T09:28:00.000 | 0 | 0 | 1 | 0 | python,queue,priority-queue | 48,710,792 | 1 | false | 0 | 0 | Since all items are arranged in a prioritised manner, just extract them in the way they come out...as it is a queue. Or you could apply more conditions on how you would like to arrange the items with same priorities among themselves. | 1 | 0 | 0 | Basically, no counter is allowed. No iterables (arrays, dictionaries, etc.) allowed either to store insertion. There are two linked lists: one storing odd insertions and even insertions. Each node has the priority, and the object. Is there any pattern you can find? Or is this impossible?
Edit: sorry for not mentioning, we need to extract the first one that was inserted. | Priority Queue: If two objects have same priority, how do I determine which to extract? | 0 | 0 | 0 | 240 |
48,708,133 | 2018-02-09T14:33:00.000 | -1 | 0 | 1 | 0 | python,sympy | 48,713,292 | 1 | false | 0 | 0 | after reading more on the subject ,
Multiplicative group Z_p^*
In classical cyclic-group cryptography we usually use the multiplicative group Z_p^*, where p is prime:
Z_p^* = {1, 2, ..., p - 1}, combined with multiplication of integers mod p.
So it is simply
G2= [ i for i in range(1, n-1 )] #G2 multiplicativ Group of ordern | 1 | 1 | 1 | I'm trying to create a multiplicative group of order q.
This code generates an additive cyclic group of order 5
from sympy.combinatorics.generators import cyclic
list(cyclic(5))
[(4), (0 1 2 3 4), (0 2 4 1 3), (0 3 1 4 2), (0 4 3 2 1)]
Any help ? | multiplicative group using SymPy | -0.197375 | 0 | 0 | 463 |
48,709,350 | 2018-02-09T15:43:00.000 | 0 | 1 | 0 | 0 | python,python-3.x,twitter,login,splinter | 48,716,354 | 1 | false | 0 | 0 | What's the purpose of doing this rather than using the official API?
Scripted logins to Twitter.com are against the Terms of Service, and Twitter employs multiple techniques to detect and disallow them. Accounts showing signs of automated login of this kind are liable to suspension or requests for security re-verification. | 1 | 0 | 0 | I'm working with splinter and Python and I'm trying to setup some automation and log into Twitter.com
Having trouble though...
For example the password field's "name=session[password]" on Twitter.com/login
and the username is similar. I'm not exactly sure of the syntax or what this means, something with a cookie...
But I'm trying to fill in this field with splinters:
browser.fill('exampleName','exampleValue')
It isn't working... Just curious if there is a work around, or a way to fill in this form?
Thanks for any help! | How to log into Twitter with Splinter Python | 0 | 0 | 1 | 59 |
48,709,540 | 2018-02-09T15:53:00.000 | 2 | 0 | 1 | 1 | python,windows,filesystems | 48,709,541 | 1 | false | 0 | 0 | As of Python 3.3, the function os.replace is available. It should provide cross-platform mv-like semantics.
Sadly it fails when trying to move files across file systems.
Even though the documentation says, that shutil.move depends on the semantics of os.rename, it also works on Windows, and supports moving files across file-system boundaries. Sadly, I have seen cases where for some reason it still denied overwriting a file, so when possible, os.replace should be safer. | 1 | 1 | 0 | For the purpose of safely updating a file, I am writing an updated version to a temporary file and then try to overwrite the original file with it. In a shell unix script I would use mv FROM TO for this.
With python on Linux, the functions os.rename and shutil.move perform an atomic replace operation when the target filename exists. On Windows, they fail.
The behavior can be approximated by a sequence of copy, rename and remove operations, but it cannot be guaranteed, that such a self-written replace operation is performed either finished or completely reverted.
Is it possible to get a reliable “rename and overwrite” operation on Windows? | Move file and overwrite existing in Python 3.6 on Windows | 0.379949 | 0 | 0 | 1,028 |
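A compact sketch of the safe-update pattern the question describes, using os.replace as recommended in the answer above ("config.json" and its contents are placeholders):

```python
import os
import tempfile

target = "config.json"

# Write the new content to a temp file in the same directory, then
# atomically replace the original (works on Windows and Linux, same filesystem).
fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(target) or ".")
with os.fdopen(fd, "w") as tmp:
    tmp.write('{"updated": true}')
os.replace(tmp_path, target)
```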
48,712,186 | 2018-02-09T18:41:00.000 | 8 | 0 | 1 | 0 | python,h5py | 48,713,373 | 1 | true | 0 | 0 | No, you do not need to flush the file before closing. Flushing is done automatically by the underlying HDF5 C library when you close the file.
As to the point of flushing. File I/O is slow compared to things like memory or cache access. If programs had to wait before data was actually on the disk each time a write was performed, that would slow things down a lot. So the actual writing to disk is buffered by at least the OS, but in many cases by the I/O library being used (e.g., the C standard I/O library). When you ask to write data to a file, it usually just means that the OS has copied your data to its own internal buffer, and will actually put it on the disk when it's convenient to do so.
Flushing overrides this buffering, at whatever level the call is made. So calling h5py.File.flush() will flush the HDF5 library buffers, but not necessarily the OS buffers. The point of this is to give the program some control over when data actually leaves a buffer.
For example, writing to the standard output is usually line-buffered. But if you really want to see the output before a newline, you can call fflush(stdout). This might make sense if you are piping the standard output of one process into another: that downstream process can start consuming the input right away, without waiting for the OS to decide it's a good time.
Another good example is making a call to fork(2). This usually copies the entire address space of a process, which means the I/O buffers as well. That may result in duplicated output, unnecessary copying, etc. Flushing a stream guarantees that the buffer is empty before forking. | 1 | 7 | 0 | In the Python HDF5 library h5py, do I need to flush() a file before I close() it?
Or does closing the file already make sure that any data that might still be in the buffers will be written to disk?
What exactly is the point of flushing? When would flushing be necessary? | In h5py, do I need to call flush() before I close a file? | 1.2 | 0 | 0 | 3,618 |
48,712,383 | 2018-02-09T18:56:00.000 | 0 | 0 | 1 | 0 | python,audio,wav | 50,443,585 | 1 | true | 1 | 0 | I think that it is hard or even impossible to add tags to WAV so I will convert the wav files to mp3 files using ffmpy and then use eyed3 to add the tags. | 1 | 2 | 0 | I have a wav file (test.wav)
I am trying to apply tags to it:
title
artist
album
year / release date
I have had no success with pydub or eyed3
Is there a way I can do this? | Applying tags to a wav file in Python 2.7 on Windows | 1.2 | 0 | 0 | 304 |
48,713,679 | 2018-02-09T20:33:00.000 | 3 | 1 | 1 | 0 | python,google-cloud-platform,iot,paho | 48,819,342 | 1 | false | 0 | 0 | Turns out it's the python version. It was just about impossible to debug. I ended trying on many different devices/os to slowly isolate the issue. Looks like newly created projects rely on newer versions of google cloud sdk and python. Must have python 2.7.14. | 1 | 3 | 0 | I'm seeing a hard to track down error with Google Cloud IoT. I have 2 projects set up with IoT API enabled. I'm using the same "quickstart" instructions to test, by setting up "my-registry" and "my-device". Let's call them projects A and B. I then run both the python and node examples pulled straight from git. The node example runs fine for both projects A and B, validated by pulling using gcloud. However, the python example works for project A, but with project b gets a "on disconnect. 1. out of memory" error. There is never a "connection accepted" message for B, but yes for A. "Out of memory" seems to be a generic error, not really the issue. It could represent a variety of issues in connectivity. Nothing on the server side Anyone encountered this one before? I appreciate any help in resolving or at least trace the eror. | Google Cloud IoT Python MQTT "out of memory" error | 0.53705 | 0 | 0 | 1,506 |
48,715,867 | 2018-02-10T00:08:00.000 | 0 | 0 | 0 | 0 | python,scikit-learn,time-series,svm | 48,715,905 | 2 | false | 0 | 0 | given multi-variable regression, y =
Regression is a multi-dimensional separation, which can be hard to visualize in one's head since it is not 3D.
The better question might be: which of the inputs are consequential to the output value y'.
Since you have the code to the loadavg in the kernel source, you can use the input parameters. | 1 | 0 | 1 | I have a dataset of peak load for a year. Its a simple two column dataset with the date and load(kWh).
I want to train it on the first 9 months and then let it predict the next three months. I can't get my head around how to implement SVR. I understand my 'y' would be the predicted value in kWh, but what about my X values?
Can anyone help? | support vector regression time series forecasting - python | 0 | 0 | 0 | 2,181 |
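A hedged sketch of one common way to choose the X values: lagged load values (and, optionally, calendar features) become the inputs, the first nine months are the training set, and the rest is predicted. Column names and parameters below are assumptions, not from the original post:

import pandas as pd
from sklearn.svm import SVR

df = pd.read_csv("peak_load.csv", parse_dates=["date"])   # hypothetical file and columns

# Lag features: the load of the previous 7 days is the input for today's load
for lag in range(1, 8):
    df["lag_%d" % lag] = df["load_kwh"].shift(lag)
df = df.dropna()

split = int(len(df) * 0.75)                    # roughly the first 9 months
x_cols = [c for c in df.columns if c.startswith("lag_")]
x_train, y_train = df[x_cols][:split], df["load_kwh"][:split]
x_test, y_test = df[x_cols][split:], df["load_kwh"][split:]

model = SVR(kernel="rbf", C=10.0, gamma="scale")
model.fit(x_train, y_train)
predictions = model.predict(x_test)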
48,718,313 | 2018-02-10T07:22:00.000 | 0 | 0 | 1 | 1 | python,ffmpeg,path,environment-variables,jupyter-notebook | 48,719,791 | 1 | false | 0 | 0 | Rebooting session solved the problem. Somehow on my work computer reboot wasn't needed for environment variable to take effect. | 1 | 2 | 0 | I have PATH variable set for ffmpeg which I tested with CMD command echo %PATH% but Python's os.environ['PATH'] doesn't contain registry of it. I'm using Jupyter notebook on Windows. | os.environ['PATH'] doesn't match echo %PATH% | 0 | 0 | 0 | 481 |
48,720,266 | 2018-02-10T11:29:00.000 | 1 | 1 | 1 | 0 | python,python-3.x,unit-testing,dependencies,setuptools | 48,720,426 | 1 | false | 0 | 0 | OK, I think this is actually quite easy and I was thinking to complicated: I made it work by just setting PYTHONPATH to the relative path to the directory containing package A. | 1 | 0 | 0 | I want to develop two packages A and B in parallel, and B depends on A. I need to import something from A when running my unit tests for B.
So how can I configure things in setup.py so that when I run the unit tests for B, the local directory (in parallel to the one for B) gets added to the modules path and A can be imported?
I do not want to constantly install A just so that B can depend on it like an installed package. | How to depend on local directory for setuptools unit tests? | 0.197375 | 0 | 0 | 23 |
48,720,833 | 2018-02-10T12:35:00.000 | 9 | 0 | 0 | 0 | python,python-3.x,python-2.7,tensorflow,pip | 48,735,246 | 23 | false | 0 | 0 | Uninstalling Python and then reinstalling solved my issue and I was able to successfully install TensorFlow. | 13 | 305 | 1 | I installed the latest version of Python (3.6.4 64-bit) and the latest version of PyCharm (2017.3.3 64-bit). Then I installed some modules in PyCharm (Numpy, Pandas, etc), but when I tried installing Tensorflow it didn't install, and I got the error message:
Could not find a version that satisfies the requirement TensorFlow (from versions: )
No matching distribution found for TensorFlow.
Then I tried installing TensorFlow from the command prompt and I got the same error message.
I did however successfully install tflearn.
I also installed Python 2.7, but I got the same error message again. I googled the error and tried some of the things which were suggested to other people, but nothing worked (this included installing Flask).
How can I install Tensorflow? Thanks. | Could not find a version that satisfies the requirement tensorflow | 1 | 0 | 0 | 663,737 |
48,720,833 | 2018-02-10T12:35:00.000 | 228 | 0 | 0 | 0 | python,python-3.x,python-2.7,tensorflow,pip | 51,831,928 | 23 | true | 0 | 0 | As of October 2020:
Tensorflow only supports the 64-bit version of Python
Tensorflow only supports Python 3.5 to 3.8
So, if you're using an out-of-range version of Python (older or newer) or a 32-bit version, then you'll need to use a different version. | 13 | 305 | 1 | I installed the latest version of Python (3.6.4 64-bit) and the latest version of PyCharm (2017.3.3 64-bit). Then I installed some modules in PyCharm (Numpy, Pandas, etc), but when I tried installing Tensorflow it didn't install, and I got the error message:
Could not find a version that satisfies the requirement TensorFlow (from versions: )
No matching distribution found for TensorFlow.
Then I tried installing TensorFlow from the command prompt and I got the same error message.
I did however successfully install tflearn.
I also installed Python 2.7, but I got the same error message again. I googled the error and tried some of the things which were suggested to other people, but nothing worked (this included installing Flask).
How can I install Tensorflow? Thanks. | Could not find a version that satisfies the requirement tensorflow | 1.2 | 0 | 0 | 663,737 |
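Several of the answers to this question come down to the same constraint: a 64-bit interpreter and a Python version TensorFlow actually supports. A small check (my own illustration, not from any of the answers) to confirm that before running pip:

import struct
import sys

print("Python version:", sys.version_info[:3])
print("Interpreter is %d-bit" % (struct.calcsize("P") * 8))
# The TensorFlow wheels discussed above need a 64-bit interpreter and a
# Python version inside the supported range (3.5-3.8 at the time of that answer).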
48,720,833 | 2018-02-10T12:35:00.000 | 7 | 0 | 0 | 0 | python,python-3.x,python-2.7,tensorflow,pip | 65,537,792 | 23 | false | 0 | 0 | (as of Jan 1st, 2021)
For any version over 3.9.x there is no support for TensorFlow 2. If you are installing packages via pip with 3.9, you simply get a "package doesn't exist" message. It worked after reverting to the latest 3.8.x. Thought I would drop this here; I will update when 3.9.x is working with TensorFlow 2.x | 13 | 305 | 1 | I installed the latest version of Python (3.6.4 64-bit) and the latest version of PyCharm (2017.3.3 64-bit). Then I installed some modules in PyCharm (Numpy, Pandas, etc), but when I tried installing Tensorflow it didn't install, and I got the error message:
Could not find a version that satisfies the requirement TensorFlow (from versions: )
No matching distribution found for TensorFlow.
Then I tried installing TensorFlow from the command prompt and I got the same error message.
I did however successfully install tflearn.
I also installed Python 2.7, but I got the same error message again. I googled the error and tried some of the things which were suggested to other people, but nothing worked (this included installing Flask).
How can I install Tensorflow? Thanks. | Could not find a version that satisfies the requirement tensorflow | 1 | 0 | 0 | 663,737 |
48,720,833 | 2018-02-10T12:35:00.000 | 71 | 0 | 0 | 0 | python,python-3.x,python-2.7,tensorflow,pip | 55,988,352 | 23 | false | 0 | 0 | I installed it successfully by pip install https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-1.8.0-py3-none-any.whl | 13 | 305 | 1 | I installed the latest version of Python (3.6.4 64-bit) and the latest version of PyCharm (2017.3.3 64-bit). Then I installed some modules in PyCharm (Numpy, Pandas, etc), but when I tried installing Tensorflow it didn't install, and I got the error message:
Could not find a version that satisfies the requirement TensorFlow (from versions: )
No matching distribution found for TensorFlow.
Then I tried installing TensorFlow from the command prompt and I got the same error message.
I did however successfully install tflearn.
I also installed Python 2.7, but I got the same error message again. I googled the error and tried some of the things which were suggested to other people, but nothing worked (this included installing Flask).
How can I install Tensorflow? Thanks. | Could not find a version that satisfies the requirement tensorflow | 1 | 0 | 0 | 663,737 |
48,720,833 | 2018-02-10T12:35:00.000 | 5 | 0 | 0 | 0 | python,python-3.x,python-2.7,tensorflow,pip | 59,836,416 | 23 | false | 0 | 0 | Looks like the problem is with Python 3.8. Use Python 3.7 instead. Steps I took to solve this.
Created a python 3.7 environment with conda
Installed rasa using pip install rasa within the environment.
Worked for me. | 13 | 305 | 1 | I installed the latest version of Python (3.6.4 64-bit) and the latest version of PyCharm (2017.3.3 64-bit). Then I installed some modules in PyCharm (Numpy, Pandas, etc), but when I tried installing Tensorflow it didn't install, and I got the error message:
Could not find a version that satisfies the requirement TensorFlow (from versions: )
No matching distribution found for TensorFlow.
Then I tried installing TensorFlow from the command prompt and I got the same error message.
I did however successfully install tflearn.
I also installed Python 2.7, but I got the same error message again. I googled the error and tried some of the things which were suggested to other people, but nothing worked (this included installing Flask).
How can I install Tensorflow? Thanks. | Could not find a version that satisfies the requirement tensorflow | 0.043451 | 0 | 0 | 663,737 |
48,720,833 | 2018-02-10T12:35:00.000 | 1 | 0 | 0 | 0 | python,python-3.x,python-2.7,tensorflow,pip | 69,111,030 | 23 | false | 0 | 0 | using pip install tensorflow --user did it for me | 13 | 305 | 1 | I installed the latest version of Python (3.6.4 64-bit) and the latest version of PyCharm (2017.3.3 64-bit). Then I installed some modules in PyCharm (Numpy, Pandas, etc), but when I tried installing Tensorflow it didn't install, and I got the error message:
Could not find a version that satisfies the requirement TensorFlow (from versions: )
No matching distribution found for TensorFlow.
Then I tried installing TensorFlow from the command prompt and I got the same error message.
I did however successfully install tflearn.
I also installed Python 2.7, but I got the same error message again. I googled the error and tried some of the things which were suggested to other people, but nothing worked (this included installing Flask).
How can I install Tensorflow? Thanks. | Could not find a version that satisfies the requirement tensorflow | 0.008695 | 0 | 0 | 663,737 |
48,720,833 | 2018-02-10T12:35:00.000 | 36 | 0 | 0 | 0 | python,python-3.x,python-2.7,tensorflow,pip | 49,432,863 | 23 | false | 0 | 0 | I am giving it for Windows
If you are using python-3
Upgrade pip to the latest version using py -m pip install --upgrade pip
Install package using py -m pip install <package-name>
If you are using python-2
Upgrade pip to the latest version using py -2 -m pip install --upgrade pip
Install package using py -2 -m pip install <package-name>
It worked for me | 13 | 305 | 1 | I installed the latest version of Python (3.6.4 64-bit) and the latest version of PyCharm (2017.3.3 64-bit). Then I installed some modules in PyCharm (Numpy, Pandas, etc), but when I tried installing Tensorflow it didn't install, and I got the error message:
Could not find a version that satisfies the requirement TensorFlow (from versions: )
No matching distribution found for TensorFlow.
Then I tried installing TensorFlow from the command prompt and I got the same error message.
I did however successfully install tflearn.
I also installed Python 2.7, but I got the same error message again. I googled the error and tried some of the things which were suggested to other people, but nothing worked (this included installing Flask).
How can I install Tensorflow? Thanks. | Could not find a version that satisfies the requirement tensorflow | 1 | 0 | 0 | 663,737 |
48,720,833 | 2018-02-10T12:35:00.000 | 42 | 0 | 0 | 0 | python,python-3.x,python-2.7,tensorflow,pip | 53,488,421 | 23 | false | 0 | 0 | if you are using anaconda, python 3.7 is installed by default, so you have to downgrade it to 3.6:
conda install python=3.6
then:
pip install tensorflow
it worked for me in Ubuntu. | 13 | 305 | 1 | I installed the latest version of Python (3.6.4 64-bit) and the latest version of PyCharm (2017.3.3 64-bit). Then I installed some modules in PyCharm (Numpy, Pandas, etc), but when I tried installing Tensorflow it didn't install, and I got the error message:
Could not find a version that satisfies the requirement TensorFlow (from versions: )
No matching distribution found for TensorFlow.
Then I tried installing TensorFlow from the command prompt and I got the same error message.
I did however successfully install tflearn.
I also installed Python 2.7, but I got the same error message again. I googled the error and tried some of the things which were suggested to other people, but nothing worked (this included installing Flask).
How can I install Tensorflow? Thanks. | Could not find a version that satisfies the requirement tensorflow | 1 | 0 | 0 | 663,737 |
48,720,833 | 2018-02-10T12:35:00.000 | 0 | 0 | 0 | 0 | python,python-3.x,python-2.7,tensorflow,pip | 67,496,288 | 23 | false | 0 | 0 | This issue also happens with other libraries such as matplotlib(which doesn't support Python > 3.9 for some functions) let's just use COLAB. | 13 | 305 | 1 | I installed the latest version of Python (3.6.4 64-bit) and the latest version of PyCharm (2017.3.3 64-bit). Then I installed some modules in PyCharm (Numpy, Pandas, etc), but when I tried installing Tensorflow it didn't install, and I got the error message:
Could not find a version that satisfies the requirement TensorFlow (from versions: )
No matching distribution found for TensorFlow.
Then I tried installing TensorFlow from the command prompt and I got the same error message.
I did however successfully install tflearn.
I also installed Python 2.7, but I got the same error message again. I googled the error and tried some of the things which were suggested to other people, but nothing worked (this included installing Flask).
How can I install Tensorflow? Thanks. | Could not find a version that satisfies the requirement tensorflow | 0 | 0 | 0 | 663,737 |
48,720,833 | 2018-02-10T12:35:00.000 | 0 | 0 | 0 | 0 | python,python-3.x,python-2.7,tensorflow,pip | 60,302,029 | 23 | false | 0 | 0 | use python version 3.6 or 3.7 but the important thing is you should install the python version of 64-bit. | 13 | 305 | 1 | I installed the latest version of Python (3.6.4 64-bit) and the latest version of PyCharm (2017.3.3 64-bit). Then I installed some modules in PyCharm (Numpy, Pandas, etc), but when I tried installing Tensorflow it didn't install, and I got the error message:
Could not find a version that satisfies the requirement TensorFlow (from versions: )
No matching distribution found for TensorFlow.
Then I tried installing TensorFlow from the command prompt and I got the same error message.
I did however successfully install tflearn.
I also installed Python 2.7, but I got the same error message again. I googled the error and tried some of the things which were suggested to other people, but nothing worked (this included installing Flask).
How can I install Tensorflow? Thanks. | Could not find a version that satisfies the requirement tensorflow | 0 | 0 | 0 | 663,737 |
48,720,833 | 2018-02-10T12:35:00.000 | -2 | 0 | 0 | 0 | python,python-3.x,python-2.7,tensorflow,pip | 61,057,983 | 23 | false | 0 | 0 | I solved the same problem with python 3.7 by installing one by one all the packages required
Here are the steps:
Install the package
See the error message:
couldn't find a version that satisfies the requirement -- the name of the module required
Install the module required.
Very often, installation of the required module requires the installation of another module, and another module - a couple of the others and so on.
This way I installed more than 30 packages and it helped. Now I have tensorflow of the latest version in Python 3.7 and didn't have to downgrade the kernel. | 13 | 305 | 1 | I installed the latest version of Python (3.6.4 64-bit) and the latest version of PyCharm (2017.3.3 64-bit). Then I installed some modules in PyCharm (Numpy, Pandas, etc), but when I tried installing Tensorflow it didn't install, and I got the error message:
Could not find a version that satisfies the requirement TensorFlow (from versions: )
No matching distribution found for TensorFlow.
Then I tried installing TensorFlow from the command prompt and I got the same error message.
I did however successfully install tflearn.
I also installed Python 2.7, but I got the same error message again. I googled the error and tried some of the things which were suggested to other people, but nothing worked (this included installing Flask).
How can I install Tensorflow? Thanks. | Could not find a version that satisfies the requirement tensorflow | -0.01739 | 0 | 0 | 663,737 |
48,720,833 | 2018-02-10T12:35:00.000 | 3 | 0 | 0 | 0 | python,python-3.x,python-2.7,tensorflow,pip | 62,932,939 | 23 | false | 0 | 0 | For version TensorFlow 2.2:
Make sure you have python 3.8
try:
python --version
or
python3 --version
or
py --version
Upgrade the pip of the python which has version 3.8
try:
python3 -m pip install --upgrade pip
or
python -m pip install --upgrade pip
or
py -m pip install --upgrade pip
Install TensorFlow:
try:
python3 -m pip install TensorFlow
or python -m pip install TensorFlow
or py -m pip install TensorFlow
Make sure to run the file with the correct python:
try:
python3 file.py
or python file.py
or py file.py | 13 | 305 | 1 | I installed the latest version of Python (3.6.4 64-bit) and the latest version of PyCharm (2017.3.3 64-bit). Then I installed some modules in PyCharm (Numpy, Pandas, etc), but when I tried installing Tensorflow it didn't install, and I got the error message:
Could not find a version that satisfies the requirement TensorFlow (from versions: )
No matching distribution found for TensorFlow.
Then I tried installing TensorFlow from the command prompt and I got the same error message.
I did however successfully install tflearn.
I also installed Python 2.7, but I got the same error message again. I googled the error and tried some of the things which were suggested to other people, but nothing worked (this included installing Flask).
How can I install Tensorflow? Thanks. | Could not find a version that satisfies the requirement tensorflow | 0.026081 | 0 | 0 | 663,737 |
48,720,833 | 2018-02-10T12:35:00.000 | 0 | 0 | 0 | 0 | python,python-3.x,python-2.7,tensorflow,pip | 64,305,430 | 23 | false | 0 | 0 | In case you are using Docker, make sure you have
FROM python:x.y.z
instead of
FROM python:x.y.z-alpine. | 13 | 305 | 1 | I installed the latest version of Python (3.6.4 64-bit) and the latest version of PyCharm (2017.3.3 64-bit). Then I installed some modules in PyCharm (Numpy, Pandas, etc), but when I tried installing Tensorflow it didn't install, and I got the error message:
Could not find a version that satisfies the requirement TensorFlow (from versions: )
No matching distribution found for TensorFlow.
Then I tried installing TensorFlow from the command prompt and I got the same error message.
I did however successfully install tflearn.
I also installed Python 2.7, but I got the same error message again. I googled the error and tried some of the things which were suggested to other people, but nothing worked (this included installing Flask).
How can I install Tensorflow? Thanks. | Could not find a version that satisfies the requirement tensorflow | 0 | 0 | 0 | 663,737 |
48,727,820 | 2018-02-11T02:51:00.000 | 0 | 1 | 0 | 0 | python,multithreading,logging,cron,bots | 48,728,192 | 1 | false | 0 | 0 | Use multiprocessing if you do not want the data between the processes to be shared. You can give a name to your processes. | 1 | 0 | 0 | I'm going to have to run the same python script multiple of times (>500) at the same time using cron, I'm looking for ways to best handle this to avoid problems in future and if there is any better way that I can use for reporting and logging to alert me if one script is down , please advise me with it.
The plan so far is to make a copy of the same scripts folder a couple of times, renaming each to a unique name to distinguish them later, then run them using the cron scheduler.
Please advise. | Running the same script multiple times , easier way to manage and report | 0 | 0 | 0 | 101 |
48,729,174 | 2018-02-11T07:03:00.000 | 0 | 0 | 0 | 1 | python,numpy,ubuntu,matplotlib,installation | 48,729,205 | 3 | false | 0 | 0 | U cannot install it using apt-get. u need to install pip first. After you install pip, just google about how to install different packages using pip | 1 | 0 | 1 | I am having problems trying to install the following packages on Ubuntu:
scipy
numpy
matplotlib
pandas
sklearn
When I execute the command:
sudo apt-get install python-numpy python-scipy python-matplotlib ipython ipython-notebook python-pandas python-sympy python-nose
I get the following message:
Reading package lists... Done
Building dependency tree
Reading state information... Done
build-essential is already the newest version (12.1ubuntu2).
You might want to run 'apt-get -f install' to correct these:
The following packages have unmet dependencies.
libatlas-dev : Depends: libblas-dev but it is not going to be installed
libatlas3-base : Depends: libgfortran3 (>= 4.6) but it is not going to be installed
Depends: libblas-common but it is not going to be installed
python-dev : Depends: libpython-dev (= 2.7.11-1) but it is not going to be installed
Depends: python2.7-dev (>= 2.7.11-1~) but it is not going to be installed
python-scipy : Depends: python-decorator but it is not going to be installed
Depends: libgfortran3 (>= 4.6) but it is not going to be installed
Recommends: python-imaging but it is not going to be installed
python-setuptools : Depends: python-pkg-resources (= 20.7.0-1) but it is not going to be installed
rstudio : Depends: libjpeg62 but it is not going to be installed
Depends: libgstreamer0.10-0 but it is not going to be installed
Depends: libgstreamer-plugins-base0.10-0 but it is not going to be installed
Recommends: r-base (>= 2.11.1) but it is not going to be installed
E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).
So the packages are failing to install, but I really need these packages to begin a new project, how can I successfully install these packages? | How do I install packages for python ML on ubuntu? | 0 | 0 | 0 | 428 |
48,729,532 | 2018-02-11T08:01:00.000 | 0 | 0 | 0 | 1 | python-2.7 | 48,741,455 | 1 | false | 0 | 0 | You can install passlib module using pip
sudo pip install passlib | 1 | 0 | 0 | I am trying to install odoo in mac OS. I have installed all necessary requirements by referring a video on youtube and finally I tried to run server but I am getting an error like this:
lalits-MacBook-Pro:odoo lalitpatidar$ ./odoo-bin
Traceback (most recent call last):
File "./odoo-bin", line 5, in
__import__('pkg_resources').declare_namespace('odoo.addons')
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 1926, in declare_namespace
declare_namespace(parent)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 1942, in declare_namespace
_handle_ns(packageName, path_item)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 1912, in _handle_ns
loader.load_module(packageName); module.__path__ = path
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/pkgutil.py", line 246, in load_module
mod = imp.load_module(fullname, self.file, self.filename, self.etc)
File "/Users/lalitpatidar/Desktop/v10/odoo/odoo/__init__.py", line 60, in
import modules
File "/Users/lalitpatidar/Desktop/v10/odoo/odoo/modules/__init__.py", line 8, in
from . import db, graph, loading, migration, module, registry
File "/Users/lalitpatidar/Desktop/v10/odoo/odoo/modules/graph.py", line 13, in
import odoo.osv as osv
File "/Users/lalitpatidar/Desktop/v10/odoo/odoo/osv/__init__.py", line 4, in
import osv
File "/Users/lalitpatidar/Desktop/v10/odoo/odoo/osv/osv.py", line 4, in
from ..exceptions import except_orm
File "/Users/lalitpatidar/Desktop/v10/odoo/odoo/exceptions.py", line 15, in
from tools.func import frame_codeinfo
File "/Users/lalitpatidar/Desktop/v10/odoo/odoo/tools/__init__.py", line 8, in
from misc import *
File "/Users/lalitpatidar/Desktop/v10/odoo/odoo/tools/misc.py", line 16, in
import passlib.utils
ImportError: No module named passlib.utils
I am new to Odoo and Stack Overflow as well, so please excuse any small mistakes. | Error while running odoo in mac | 0 | 0 | 0 | 661 |
48,730,108 | 2018-02-11T09:32:00.000 | 1 | 0 | 0 | 0 | python,websocket,pip | 70,608,299 | 2 | false | 0 | 0 | Just installing websocket-client==1.2.0 is ok.
I encountered this problem when I was using websocket-client==1.2.3 | 1 | 1 | 0 | I am trying to use websocket.WebSocketApp, however it's coming up with the error: module 'websocket' has no attribute 'WebSocketApp'
I had a look at previous solutions for this, and tried uninstalling websocket and installing websocket-client, but it still comes up with the same error.
My file's name is MyWebSocket, so I don't think it has anything to do with that.
Can anyone help me, please? | AttributeError: module 'websocket' has no attribute 'WebSocketApp' pip | 0.099668 | 0 | 1 | 7,871 |
48,730,129 | 2018-02-11T09:36:00.000 | 0 | 0 | 1 | 0 | python,django,virtualenv | 48,733,188 | 1 | false | 1 | 0 | Thank you phd and Kevin L.!
The virtualenv -p python3 my_env_name solved the issue.
Then it was important to restore the dependencies via pip.
Anyone doing the migration (to newer linux or another pc) I also recommend to dump the dependencies with pip freeze > requirements.txt. | 1 | 3 | 0 | On my Fedora 25 I have configured the virtual environment with python 3.5
and after upgrading the system to Fedora 27 I can no longer launch the django app within the virtual env (python manage.py runserver) nor check the version of python:
error while loading shared libraries: libpython3.5m.so.1.0: cannot open shared object file: No such file or directory
Could you please advise what to do next? I'm not an advanced user in terms of python configuration. Shall I reinstall python 3.5 or try to set up the virtual environment once again?
Any help greatly appreciated. | Python error while loading shared libraries: libpython3.5m.so.1.0: cannot open shared object file: No such file or directory | 0 | 0 | 0 | 4,951 |
48,730,694 | 2018-02-11T10:49:00.000 | 0 | 0 | 0 | 0 | python,jquery,django,django-rest-framework | 48,730,768 | 2 | false | 1 | 0 | as per my experience and knowledge, you are almost going towards correct direction.
My recommendation: for building the backend REST API, Django with Django REST Framework is the best option. For consuming those APIs you can look at Angular or React; both work very well for consuming an API. | 1 | 0 | 0 | I am currently developing my first more complex Web Application and want to ask for directions from more experienced Developers.
First I want to explain the most important requirements.
I want to develop a Web App (no mobile apps or desktop apps) and want to use as much django as possible. Because I am comfortable with the ecosystem right now and don't have that much time to learn something new that is too complex. I am inexperienced in the Javascript World, but I am able to do a little bit of jQuery.
The idea is to have one database and many different Frontends that are branded differently and have different users and administrators. So my current approach is to develop a Backend with Django and use Django Rest Framework to give the specific data to the Frontends via REST. Because I have not that much time to learn a Frontend-Framework I wanted to use another Django instance to use as a Frontend, as I really like the Django Template language. This would mean one Django instance one Frontend, where there would be mainly TemplateViews. The Frontends will be served on different subdomains, while the backend exposes the API Endpoints on the top level domain.
It is not necessary to have a Single Page App. A Normal Website with mainly the normal request/response-cycle is fine.
Do you think this is a possible approach to do things? I am currently thinking about how to use the data in the frontend sites in the best way. As I am familiar with the Django template language I thought about writing a middleware that asks about the user details in every request cycle from the backend. The thought is to use a request.user as normally as possible while getting the data from the backend.
Or is it better to ask for these details via jQuery and Ajax calls and not use the Django template language very much?
Maybe there is also a way to make different Frontends for the same database without using REST?
Or what would you think about using a database with each frontend, which changes every time I make a change in the main database in the backend? Although I don't really like this approach due to the possibility of differences in data if I make a mistake.
Hopefully this is not too confusing for you. If there are questions I will answer them happily. Maybe I am also totally on the wrong track. Please don't hesitate to point that out, too.
I thank you very much in advance for your guidance and wish you a nice day. | Design Decision Django Rest Framework - Django as Frontend | 0 | 0 | 0 | 548 |
48,731,124 | 2018-02-11T11:44:00.000 | 4 | 0 | 0 | 0 | python,tensorflow,google-colaboratory | 48,739,426 | 4 | true | 0 | 0 | Even if you will install gpu version !pip install tensorflow-gpu==1.5.0 it will still fail to import it because of the cuda libraries. Currently I have not found a way to use 1.5 version with GPU. So I would rather use 1.4.1 with gpu than 1.5 without gpu.
You can send them a feedback ( Home - Send Feedback ) and hopefully if enough people will send something similar, they will update the new gpu version. | 1 | 5 | 1 | Currently google colaboratory uses tensorflow 1.4.1. I want to upgrade it to 1.5.0 version. Each time when i executed !pip install --upgrade tensorflow command, notebook instance succesfully upgrades the tensorflow version to 1.5.0. But after upgrade operation tensorflow instance only supports "CPU".
When I have executed this command it shows nothing:
from tensorflow.python.client import device_lib
device_lib.list_local_devices()
Should there be another way of upgrading tensorflow, such as upgrading to the tensorflow-gpu package? Also, when will notebooks come with an upgraded tensorflow? | How to upgrade tensorflow with GPU on google colaboratory | 1.2 | 0 | 0 | 15,191 |
48,732,470 | 2018-02-11T14:20:00.000 | 1 | 0 | 0 | 0 | python,json,validation,marshmallow | 48,732,969 | 1 | false | 1 | 0 | For now I solved it by subclassing my model schema, adding the additional field there, and then - before loading my data through the model schema - validating it through the subclassed schema.
If there is a more elegant solution I'd still love to hear it. | 1 | 1 | 0 | I'm trying to build a Marshmallow schema based on a model, but with one additional field. While this seemed to work by declaring the special field by itself and then setting meta.model to my assigned model, I fail to find a solution so that the additional field gets validated (it is marked as required), but does not turn up in the resulting, deserialized object.
I tried setting it as excluded and dump_only, but to no avail, either the validation does not take place, or the deserialized object also contains the additional field (which then clashes with my ORM). | Validate field in Marshmallow but don't deserialize it | 0.197375 | 0 | 0 | 507 |
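A hedged sketch of the workaround described in the answer (all names invented): validate with a subclass that adds the required extra field, then load through the plain model schema so the extra field never reaches the ORM:

from marshmallow import Schema, fields

class ItemSchema(Schema):              # stands in for the model schema
    name = fields.Str(required=True)

class ItemInputSchema(ItemSchema):     # subclass used only for validation
    extra_flag = fields.Bool(required=True)

payload = {"name": "widget", "extra_flag": True}

errors = ItemInputSchema().validate(payload)   # checks extra_flag as well
if not errors:
    data = ItemSchema().load({k: v for k, v in payload.items() if k != "extra_flag"})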
48,733,840 | 2018-02-11T16:42:00.000 | -1 | 1 | 0 | 0 | python,user-interface,pytest | 70,441,874 | 2 | false | 0 | 0 | So far I have not found any GUI to see the status of the tests. In fact I am working on developing one with Qt5 | 1 | 0 | 0 | I'm trying to find a way to execute test-cases with help of Graphical user interface in Pytest.
I found a few plugins like pytest-sugar, which only display failed/passed status. But I actually need to select the test-cases that I want to run in a GUI display.
Is there a way to achieve this?
Thanks in advance !! | How execute pytest-test cases with GUI | -0.099668 | 0 | 0 | 1,806 |
48,734,600 | 2018-02-11T17:56:00.000 | 2 | 1 | 0 | 0 | javascript,python,node.js | 48,735,967 | 1 | false | 0 | 0 | For anyone who might ever find himself in the same situation, I have found a workaround that works for me and might even work for you!
So, this is actually quite simple, but only works for a specific application. Here, I have a sensor which is read by a javascript script, but I want a python script to handle the sensor's output value. What I did was:
import subprocess
import os
deadly_string = subprocess.getoutput('npm run test')
not_so_deadly_string = deadly_string[-6:]
print ("value: ", not_so_deadly_string)
I captured the output of the terminal with "subprocess.getoutput". The output of the terminal looks something like this:
nodejs-qmc5883l@....
.....
Declination angle correction: 1
.
.
.
Azimuth= 300.00
The value "300.00" is what I want, so I cut the last 5 characters of the string and that's it.... | 1 | 1 | 0 | I have accomplished running a js file with the following python script:
import subprocess
subprocess.check_call('npm run test')
The file test.js reads sensor data and is only written in javascript because the only available library for this sensor is a NodeJS library. Now, I want test.js to return those values every time it is executed within my python script. How do I do that?
And if it is not possible via this method, are there others? I cannot write this js script in python as the library uses NodeJS.
I want to thank everyone that tries to help me in advance, and if you need more info, simply contact me! | Running a javascript script through python using the subprocess library on a raspberry pi and returning values from it | 0.379949 | 0 | 0 | 779 |
48,738,338 | 2018-02-12T01:22:00.000 | 0 | 0 | 0 | 0 | python,artificial-intelligence | 51,712,117 | 1 | false | 0 | 0 | You can just measure the loudness of the voice command spoken in mDb. You can then process these numbers to change the UI.
For example, the higher the loudness, the longer the bar. Let me know which speech-to-text engine you are using so that I can provide further help. | 1 | 0 | 0 | So basically I have created an A.I assistant and was wondering if anyone had suggestions on how to make visuals that react with the sound? | Making my Artificial Intelligence visuals that react with sound change | 0 | 0 | 0 | 41 |
48,738,584 | 2018-02-12T02:00:00.000 | 1 | 0 | 1 | 0 | python | 48,738,613 | 1 | true | 0 | 0 | There is NO ERROR in syntax.
+4 signifies that 4 is positive. If you put -4 it is negative 4 and you will get True. | 1 | 0 | 0 | Ok, I get that four is not greater than four but what is confusing me is the + what does that mean? Why doesn't it cause an error? I know its false but whats up with the +? | 4 > + 4 I would like to know what the plus stands for? | 1.2 | 0 | 0 | 34 |
48,738,650 | 2018-02-12T02:10:00.000 | 2 | 1 | 1 | 0 | python,algorithm,pow,modular-arithmetic,cryptanalysis | 48,738,710 | 2 | false | 0 | 0 | It sounds like you are trying to evaluate pow(a, b) % c. You should be using the 3-argument form, pow(a, b, c), which takes advantage of the fact that a * b mod c == a mod c * b mod c, which means you can reduce subproducts as they are computed while computing a ^ b, rather than having to do all the multiplications first. | 1 | 1 | 0 | I am trying to calculate something like this: a^b mod c, where all three numbers are large.
Things I've tried:
Python's pow() function is taking hours and has yet to produce a result. (if someone could tell me how it's implemented that would be very helpful!)
A right-to-left binary method that I implemented, with O(log e) time, would take about 30~40 hours (don't wanna wait that long).
Various recursion methods are producing segmentation faults (after I changed the recursion limits)
Any optimizations I could make? | How to implement modular exponentiation? | 0.197375 | 0 | 0 | 1,213 |
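To make the answer's point concrete (my own illustration): the three-argument pow reduces modulo c at every step, and an explicit right-to-left square-and-multiply loop does exactly the same thing:

def mod_pow(a, b, c):
    # Right-to-left binary (square-and-multiply), reducing mod c at each step
    result = 1
    a %= c
    while b > 0:
        if b & 1:
            result = (result * a) % c
        a = (a * a) % c
        b >>= 1
    return result

# The built-in performs the same reduction internally:
assert mod_pow(7, 10**6 + 3, 10**9 + 7) == pow(7, 10**6 + 3, 10**9 + 7)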
48,738,979 | 2018-02-12T03:01:00.000 | 0 | 0 | 0 | 0 | python-2.7,tensorflow,out-of-memory,gpu | 51,146,080 | 1 | false | 0 | 0 | You can try using model.fit_generator instead. | 1 | 0 | 1 | I am trying to train my deep learning code using Keras with tensorflow backend on a remote server with GPU. However, even the GPU server states OOM.
This was the output:
2018-02-09 14:19:28.918619: I tensorflow/core/common_runtime/bfc_allocator.cc:685] Stats:
Limit: 10658837300
InUse: 10314885120
MaxInUse: 10349312000
NumAllocs: 8762
MaxAllocSize: 1416551936
2018-02-09 14:19:28.918672: W tensorflow/core/common_runtime/bfc_allocator.cc:277] ************__********************************************************************************xxxxxx 2018-02-09 14:19:28.918745: W tensorflow/core/framework/op_kernel.cc:1182] Resource exhausted: OOM when allocating tensor of shape [13772,13772] and type float 2018-02-09 14:19:29.294784: E tensorflow/core/common_runtime/executor.cc:643] Executor failed to create kernel. Resource exhausted: OOM when allocating tensor of shape [13772,13772] and type float [[Node: training_4/RMSprop/zeros = Constdtype=DT_FLOAT, value=Tensor, _device="/job:localhost/replica:0/task:0/device:GPU:0"]]
Would there be any way to resolve this issue? I tried adjusting the batch size; it initially worked with a batch size of 100, but when I reduced it to 50 it showed this error. After which I tried batch size 100 again, but it displayed this same error.
I tried to search on how to suspend training binary while running evaluation but did not get much.
Would greatly appreciate your help in this! Thank you!! | OOM when training on GPU external server | 0 | 0 | 0 | 102 |
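A minimal, hypothetical illustration of the fit_generator suggestion (the toy data and model below are placeholders): a Python generator feeds small batches so only one batch is in memory at a time:

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

x_train = np.random.rand(1000, 20)     # placeholder data
y_train = np.random.rand(1000, 1)

model = Sequential([Dense(64, activation="relu", input_shape=(20,)), Dense(1)])
model.compile(optimizer="rmsprop", loss="mse")

def batch_generator(x, y, batch_size=32):
    while True:                        # Keras expects an endless generator
        for start in range(0, len(x), batch_size):
            yield x[start:start + batch_size], y[start:start + batch_size]

steps = int(np.ceil(len(x_train) / 32.0))
model.fit_generator(batch_generator(x_train, y_train), steps_per_epoch=steps, epochs=2)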
48,739,050 | 2018-02-12T03:12:00.000 | 0 | 0 | 0 | 1 | python,python-3.x | 48,739,078 | 1 | false | 0 | 0 | You are running on Windows and trying to use semantics from a POSIX-like shell.
Instead of running ./test.py try start test.py | 1 | 0 | 0 | Tried searching many posts but unable to find the answer.
I have a simple python script (test.py) written as:
#!/usr/bin/env python (tried with #!/usr/bin/python)
print("Hellow World")
but when I am trying to run this script from command line (from the script location) as ./test.py, it is always giving error
"'.' is not recognized as an internal or external command, operable program or batch file."
"which python" is giving me below path:
!/cygdrive/c/Users/User>/AppData/Local/Programs/Python/Python36-32/python
I am able to run the script using "python test.py" but unable to understand the issue with "./" | Unable to run python script from command line using ./ | 0 | 0 | 0 | 58 |
48,739,521 | 2018-02-12T04:24:00.000 | 0 | 0 | 1 | 0 | python,anaconda,spyder | 63,882,538 | 4 | false | 0 | 0 | In Windows 10 Anaconda installs itself into a hidden folder called ".anaconda" which is placed in the Users directory under your own profile sub directory.
When you first try to use the right-click menu "Open with" it opens up in C:\Program Files so you have to go up one folder and down into Users. You may need to have previously set one of the options in the View Menu of the file manager so that you can see hidden files. You can't do this from the "right-click open with" place, you have to set that in the regular file manager.
You will find a file called Spyder.bat a couple of folders down within that, e.g. C:\Users\Your_profile.anaconda\navigator\scripts
It will take forever to open each time. | 2 | 3 | 0 | I installed Spyder using Anaconda, and I am able to launch the IDE using the Spyder icon in my start menu (Win10). I wanted to set my preferences to open all .py files with Spyder, so I followed the Spyder start menu button to an executable, pythonw.exe. The problem is that I cannot launch pythonw.exe by clicking it.
How does the start menu icon for Spyder, which points to pythonw.exe, launch Spyder, but clicking the executable does not yield the same results? Also, when I double click spyder.exe in Anaconda\Scripts a command prompt opens along with the IDE, which does not happen when I click the start menu icon.
Why does this application behave so much differently than any other application I've used before (if this is just how things are in python, I apologize as I'm new!) and is it possible to set Spyder as the default application to open .py files in the same way I can open source files with IDEs in other languages?
Cheers | How do I make all .py files launch with Spyder? | 0 | 0 | 0 | 6,094 |
48,739,521 | 2018-02-12T04:24:00.000 | 1 | 0 | 1 | 0 | python,anaconda,spyder | 48,739,754 | 4 | true | 0 | 0 | You can right click any of your *.py file, go to properties and choose Spyder as "Opens with" choice. | 2 | 3 | 0 | I installed Spyder using Anaconda, and I am able to launch the IDE using the Spyder icon in my start menu (Win10). I wanted to set my preferences to open all .py files with Spyder, so I followed the Spyder start menu button to an executable, pythonw.exe. The problem is that I cannot launch pythonw.exe by clicking it.
How does the start menu icon for Spyder, which points to pythonw.exe, launch Spyder, but clicking the executable does not yield the same results? Also, when I double click spyder.exe in Anaconda\Scripts a command prompt opens along with the IDE, which does not happen when I click the start menu icon.
Why does this application behave so much differently than any other application I've used before (if this is just how things are in python, I apologize as I'm new!) and is it possible to set Spyder as the default application to open .py files in the same way I can open source files with IDEs in other languages?
Cheers | How do I make all .py files launch with Spyder? | 1.2 | 0 | 0 | 6,094 |
48,739,909 | 2018-02-12T05:14:00.000 | 3 | 0 | 1 | 0 | python,gem5 | 48,739,937 | 1 | false | 0 | 0 | When you run the first instance it will use the first Python config file. Then when you run the second instance it will run the newly updated config file. It should all work fine as long as its not running on a localhost port, because it will try to use the same port and wont run. If this is not the case then it should run with the updated config file. | 1 | 3 | 0 | I'm running a simulation (more on this in a sec) and one of the files for this simulation is a python file that contains configuration details. I want to run multiple versions of this simulation, but I don't know if running one simulation, opening another terminal, then editing the python file and running a second simulation will ruin the first simulation. Does python have a concept of "source code" is separate from the actual process? Can I edit the file safely? These simulations take a while.
(So, I'm running a gem5 simulation and I need to run multiple versions of the simulation, varying properties of the CPU. Just in case that helps.) | Is it ok to run python file, edit it, then run it in second terminal | 0.53705 | 0 | 0 | 117 |
48,742,249 | 2018-02-12T08:32:00.000 | 0 | 0 | 1 | 0 | python,command-line,pycharm,python-venv | 66,214,395 | 3 | false | 0 | 0 | Or
source <path to your environment>/bin/activate
on linux | 2 | 7 | 0 | I'm using Python 3.6 in Windows and using PyCharm. I have a .py file that is using packages installed on a venv which is in a different folder to the .py file.
I'm trying to run this .py from command line and when I do it gives me a ModuleNotFoundError: No module named '<module>'. The file works fine from PyCharm just not from the command line because the packages are in the venv.
How can I get the file to run without errors from command line and keep the packages in the venv?
Many thanks. | Running Python File from Command Line with Libraries in venv | 0 | 0 | 0 | 8,300 |
48,742,249 | 2018-02-12T08:32:00.000 | 1 | 0 | 1 | 0 | python,command-line,pycharm,python-venv | 68,630,370 | 3 | false | 0 | 0 | I Think the easiest way to do that is using shebang, and it works both linux and windows.
For windows you just have to add #!.\venv\Scripts\python.exe at the very first line at your .py script file. | 2 | 7 | 0 | I'm using Python 3.6 in Windows and using PyCharm. I have a .py file that is using packages installed on a venv which is in a different folder to the .py file.
I'm trying to run this .py from command line and when I do it gives me a ModuleNotFoundError: No module named '<module>'. The file works fine from PyCharm just not from the command line because the packages are in the venv.
How can I get the file to run without errors from command line and keep the packages in the venv?
Many thanks. | Running Python File from Command Line with Libraries in venv | 0.066568 | 0 | 0 | 8,300 |
48,749,538 | 2018-02-12T15:11:00.000 | 1 | 0 | 0 | 1 | python,django,celery,celery-task | 48,769,758 | 1 | false | 1 | 0 | After helping comment from Danila, I have tried celery.result library, which I didnt know anything about before (surprisingly I also havent found any mention of it before). After importing ResultBase from celery.result, which captures the incoming results from task, I was finally able to get the URL of my generated file and therefore I got what I needed. Thank you | 1 | 1 | 0 | I got a task to make exports of several big tables in DB asynchronous because right now the amount of records is so big that it takes too much time. So I've successfully moved the code of exporting to the Celery task. It exports and saves the file in a folder on a server. But I am not able to get from the task the whole filename, which I could pass to the rest of code and therefore download it after completing the export process because the only thing Celery task can return is only the state of the result of performing the task (done or not yet or failed). I am using Django/Python + Celery + Redis. Thank you for any advice, it already borders me for several days. | What is the best way to get URl of generated file within Celery task | 0.197375 | 0 | 0 | 379 |
48,750,055 | 2018-02-12T15:36:00.000 | 0 | 0 | 1 | 0 | python,jupyter-notebook,scheduled-tasks,papermill | 66,136,980 | 10 | false | 0 | 0 | You can download the notebook in the form of .py and then create a batch file to execute the .py script. Then schedule the batch file in the task scheduler | 1 | 36 | 0 | I have some Python code in a Jupyter notebook and I need to run it automatically every day, so I would like to know if there is a way to set this up. I really appreciate any advice on this. | How to run a Jupyter notebook with Python code automatically on a daily basis? | 0 | 0 | 0 | 63,876 |
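Since the question is tagged papermill, a hedged alternative to converting the notebook to .py: run the notebook itself on a schedule (cron, Task Scheduler, etc.) with papermill. The paths and parameter below are placeholders:

import papermill as pm

pm.execute_notebook(
    "daily_report.ipynb",            # hypothetical input notebook
    "daily_report_output.ipynb",     # executed copy with that day's results
    parameters={"run_date": "2018-02-12"},
)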
48,750,404 | 2018-02-12T15:54:00.000 | 1 | 0 | 0 | 0 | python,django | 48,750,740 | 1 | true | 1 | 0 | When you use defer or only to load an instance, the get_deferred_fields() method returns a list of field names that have not been loaded; you should be able to use this to work out which ones will be saved. | 1 | 1 | 0 | I am using Django 1.11, in one of my models I have added actions when the model is saved.
However I don't want these actions to be done when only a part of the model is saved.
I know that update_fields=('some_field',) can be used to specify which field must be saved.
But, when the object has been fetched in the database using the methods only() or defer() I don't see any information about the fields updated in the save() method, update_fields is empty.
Thus my question: How can I get the fields saved by Django when only some fields have been fetched ? | How to know which field is saved when only() has been used in Django? | 1.2 | 0 | 0 | 189 |
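A hedged sketch (the model and fields are invented) of using get_deferred_fields() inside save() to work out which fields were actually loaded and will therefore be written:

from django.db import models

class Article(models.Model):           # hypothetical model
    title = models.CharField(max_length=100)
    body = models.TextField()

    def save(self, *args, **kwargs):
        deferred = self.get_deferred_fields()      # names not loaded by only()/defer()
        loaded = [f.attname for f in self._meta.concrete_fields
                  if f.attname not in deferred]
        # `loaded` approximates what this save() will write when
        # update_fields is not passed explicitly.
        super(Article, self).save(*args, **kwargs)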
48,753,128 | 2018-02-12T18:28:00.000 | 0 | 0 | 0 | 0 | python,rstudio,r-markdown,python-import | 49,017,753 | 1 | false | 0 | 0 | First I had to make a setup.py file for my project.
activate the virtual environment corresponding to my project source activate, then run python setup.py develop
Now, I can import my own python library from R as I installed it in my environment. | 1 | 0 | 1 | I have developed few modules in python and I want to import them to rstudio RMarkdown file. However, I am not sure how I can do it.
For example, I can't do from code.extract_feat.cluster_blast import fill_df_by_blast as fill_df as I am used to do it in pycharm.
Any hint?
Thanks. | Import my python module to rstudio | 0 | 0 | 0 | 378 |
48,754,352 | 2018-02-12T19:45:00.000 | 0 | 0 | 1 | 0 | python,numpy,package,jupyter-notebook,python-import | 48,758,996 | 4 | false | 0 | 0 | Try checking if it is in the 'bin' folder. Alternatively, add the the path to numpy in the environment variables. | 1 | 4 | 0 | I want to use numpy in my Jupyter notebook. However, I came across a weird problem. Despite the fact that:
I have a numpy installed in my default python3.
From Jupyter I've run a command !pip install numpy and !pip3 install numpy.
I restarted my notebook multiple times.
After running a command import numpy I get a message that package is not found:
ImportError: No module named 'numpy'
I use python3 kernel of course.
Has anyone come across similar problem? | Python package not found in jupyter even after running pip install | 0 | 0 | 0 | 2,730 |
48,754,469 | 2018-02-12T19:52:00.000 | 0 | 0 | 0 | 0 | python-3.x | 49,515,251 | 1 | false | 0 | 0 | Try the below:
pd.read_csv("filepath",encoding='cp1252').
This one should work as it worked for me. | 1 | 2 | 1 | I have been trying to upload a csv file using pandas .read() function. But as you can see from my title this is what I get
"'utf-8' codec can't decode byte 0x92 in position 16: invalid start byte"
And it's weird because from the same folder I was able to upload a different csv file without problems.
Something that might have caused this is that the file was previously xlsx and I manually converted it to csv?
Please Help
Python3 | Uploading CSV - 'utf-8' codec can't decode byte 0x92 in position 16: invalid start byte | 0 | 0 | 0 | 603 |
48,755,384 | 2018-02-12T20:56:00.000 | 1 | 0 | 1 | 1 | python,virtualenv,deb | 48,755,624 | 1 | true | 0 | 0 | I found that it's possible to add an option --deb-systemd FILEPATH which points to which file should be put in systemd for the service and solved my issue. | 1 | 0 | 0 | I'm working on distributing a python package that uses uwsgi and falcon to run an API.
To build it as a deb package I am using fpm. After some tinkering I have managed to get my package to include everything I need for my virtualenv, however now I am running into the problem that my service files aren't installing properly and I can't start the service with systemctl
I build the package using:
fpm -s virtualenv -t deb --prefix /opt/venvs/{project_name} --version {$VERS} --name {project_name} path/to/setup.py path/to/requirements.txt
Inside my package I have systemd/{service_name}.service, however the service files aren't in my package when I check the contents with dpkg -c {service_name}.deb | grep service
How can I get fpm to build the deb package with the service files correctly?
Thanks. | fpm python uwsgi service | 1.2 | 0 | 0 | 117 |
48,757,083 | 2018-02-12T23:11:00.000 | 3 | 0 | 0 | 0 | python,nginx,flask,uwsgi | 48,763,573 | 2 | true | 1 | 0 | Look at the 502 spec:
The HyperText Transfer Protocol 502 Bad Gateway server error response
code indicates that the server, while acting as a gateway or proxy,
received an invalid response from the upstream server.
If your app responds anything while hitting an exception, it's most likely garbage and nginx raises (correctly) the 502, meaning "I won't (as opposed to can't) talk to the backend".
If you want 500s, you have to catch any possible underlying exception, wrap it in a valid response and it will be processed by nginx. | 2 | 0 | 0 | I am running a Flask application and using uwsgi socket nginx configuration.
This may be a stupid question but the issue I am facing is that whenever there is exception raised in Flask code (non handled exception for example 1/0), my nginx gives 502 instead of 500. I wanted to know if raising the exception is not getting propagated to nginx as 500 from uwsgi unix socket by default or do I need to specify this explicitly? Somewhere I read that for exceptions, Flask doesn't raise 500 error message automatically. Any comments will be helpful. Thanks. | nginx gives 502 for non handled exception 500 error in flask | 1.2 | 0 | 0 | 1,072 |
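A hedged illustration of "catch the exception and wrap it in a valid response" so that uWSGI hands nginx a well-formed 500 instead of nothing usable (the route is invented for the example):

from flask import Flask, jsonify

app = Flask(__name__)

@app.errorhandler(Exception)
def handle_unexpected_error(exc):
    # Any unhandled exception becomes a proper 500 response
    return jsonify(error="internal server error"), 500

@app.route("/boom")
def boom():
    return str(1 / 0)    # deliberately raises ZeroDivisionError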