Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
44,213,136 | 2017-05-27T05:02:00.000 | 1 | 0 | 1 | 1 | python,apache-zeppelin | 44,213,558 | 1 | false | 0 | 0 | It worked.
I have set "zeppelin.python" as /mnt/reference/softwares/anaconda27/bin/python in %python interpreter section. | 1 | 0 | 0 | Where is the python installation location specified for Python interpreter in Zeppelin?
How to change the python installation location for Python interpreter with new python installation directory in Zeppelin?
Help would be much appreciated.
Thanks. | How to change the python installation location for Python interpreter in Zeppelin? | 0.197375 | 0 | 0 | 81 |
44,214,153 | 2017-05-27T07:23:00.000 | 0 | 0 | 0 | 0 | django,python-3.x | 44,214,492 | 2 | false | 1 | 0 | There is effectively no difference between the two lines: '%s' % value simply coerces the value to a string, and request.POST values are already strings. | 1 | 0 | 0 | I'm getting parameters through GET or POST.
On the book I'm reading,
there's
Word = '%s' % self.request.POST['word']
I wonder whether there's a specific reason for that.
I mean, why not just write it simply like this?
Word = self.request.POST['word'] | Django python, differences between self.request and '%s' % self.request | 0 | 0 | 0 | 60 |
44,215,404 | 2017-05-27T09:52:00.000 | 0 | 1 | 0 | 0 | python,mysql,apache,raspberry-pi | 44,215,522 | 1 | false | 0 | 0 | Your question is quite unclear, but from my understanding, here is what you should try doing. (Note: I am assuming you want to connect your Pi to a database to collect and store data in an IoT-based application.)
Get a server. Any basic server would do. I recommend DigitalOcean or AWS LightSail. They have usable servers for just $5 per month. I recommend Ubuntu 16.04 for ease of use.
SSH into the server with your terminal using the IP address you got when you created the server.
Install Apache, MySQL, Python, and PHPMyAdmin on the server.
Write your web application in any language/framework you want.
Deploy it and write a separate program to make HTTP calls to the said web server.
MySQL is the database server. Python is the language used to execute any instructions. PHPMyAdmin is the interface to view MySQL databases and tables. Apache is the web server that serves the application you have written to deal with requests.
I strongly recommend understanding the basics of the client-server model of computing over HTTP.
Alternatively, you could also use the approach of using a Database-as-a-Service from any popular cloud service provider (e.g., AWS RDS) to make calls directly into the DB. | 1 | 0 | 0 | Yesterday, I installed an Apache web server and phpMyAdmin on my Raspberry Pi. How can I connect my Raspberry Pi to databases in phpMyAdmin with Python? Can I use MySQL? Thanks, I hope you understand my question, and sorry for my bad English. | connect my raspberry-pi to MySQL | 0 | 1 | 0 | 110 |
44,215,505 | 2017-05-27T10:04:00.000 | 1 | 0 | 1 | 0 | python,python-3.x,random,generator | 44,216,018 | 7 | false | 0 | 0 | For a large number of non-repeating random numbers, use an encryption. With a given key, encrypt the numbers 0, 1, 2, 3, ... Since encryption is uniquely reversible, each encrypted number is guaranteed to be unique, provided you use the same key. For 64 bit numbers use DES. For 128 bit numbers use AES. For other size numbers use some Format Preserving Encryption. For pure numbers you might find the Hasty Pudding cipher useful, as it allows a large range of different bit sizes and non-bit sizes as well, like [0..5999999].
Keep track of the key and the last number you encrypted. When you need a new unique random number, just encrypt the next number you haven't used so far. | 1 | 4 | 1 | Does Python have a random number generator that returns only one random integer each time the next() function is called? Numbers should not repeat, and the generator should return random integers in the interval [1, 1 000 000] that are unique.
I need to generate more than a million different numbers, and that sounds very memory consuming if all the numbers are generated at the same time and stored in a list. | Random number generator that returns only one number each time | 0.028564 | 0 | 0 | 5,066 |
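The encrypt-a-counter idea above can be sketched in pure Python. A toy Feistel network stands in here for a real cipher such as DES, AES, or Hasty Pudding, so treat it as illustrative rather than cryptographically sound; the key and round function are made up for the example:

```python
def make_unique_generator(key, n=1_000_000):
    """Yield every integer in [1, n] exactly once, in a key-dependent
    pseudo-random order.  A toy Feistel network stands in for a real
    cipher; 'cycle-walking' keeps results inside the range.  Memory
    use is O(1): only the current counter is kept."""
    half = max(1, ((n - 1).bit_length() + 1) // 2)  # bits per Feistel half
    mask = (1 << half) - 1

    def feistel(x):
        left, right = x >> half, x & mask
        for round_key in key:  # each round is invertible regardless of the mix
            left, right = right, left ^ (hash((right, round_key)) & mask)
        return (left << half) | right              # hence a permutation

    def permute(x):
        x = feistel(x)
        while x >= n:              # cycle-walk back into [0, n)
            x = feistel(x)
        return x

    for counter in range(n):
        yield permute(counter) + 1                 # map [0, n) onto [1, n]

# Small demonstration: the output is a permutation of 1..100.
print(sorted(make_unique_generator(key=(11, 22, 33, 44), n=100)) == list(range(1, 101)))  # True
```

Because the mapping is a bijection on the range, every yielded number is unique without storing previously issued numbers.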
44,217,141 | 2017-05-27T12:58:00.000 | 0 | 0 | 1 | 0 | python-2.7,gnuplot | 44,217,197 | 2 | false | 0 | 0 | One option is to use scipy to fit the data from within python. Another option is to write gnuplot output to a file, and parse it. | 1 | 1 | 0 | So I’m using Gnuplot to fit some data sets to some function and I’m using python for most of the programming stuff. The problem is I have to calculate the best fit parameter each time in Gnuplot and then manually type it in a python list. Is there a better way of doing this? | Is there a way to directly import the best fit parameters of a function, obtained using Gnuplot, to python? (Please read details) | 0 | 0 | 0 | 37 |
44,217,507 | 2017-05-27T13:36:00.000 | -1 | 0 | 1 | 1 | python,macos,homebrew | 44,484,184 | 5 | false | 0 | 0 | This is not a direct answer to the question, rather it explains a solution to avoid touching the system python.
The general idea is that you should always install the independent python for your projects. Each project needs its own python version (for compatibility reasons with libraries), and it's not practical to keep one python version and trying to make it work with multiple projects.
I assume this issue in your system happened because another project required a higher python version and now for your other project you need a lower version python.
The best way to handle python versions is to use virtualenv.
Each project will have it's own python, so you can have projects that work with python 2.7 and python 3 and they never touch each other's dependency.
Install different python versions with homebrew and then for each project when you create virtualenv you decide which python to pick. Everytime you work with that project, the python version would be the one you picked yourself when created the virtualenv. | 1 | 18 | 0 | I'm running MacOS Sierra 10.12.4 and I've realized that homebrew python was upgraded to version 2.7.13. How can I switch back to 2.7.10? | MacOS: How to downgrade homebrew Python? | -0.039979 | 0 | 0 | 47,780 |
44,217,644 | 2017-05-27T13:50:00.000 | 1 | 0 | 1 | 0 | python,regex,django | 44,217,707 | 2 | false | 1 | 0 | Using the caret inside square brackets (e.g. [^...]) makes the class an inverse. So, for example, [A-Za-z0-9_] would match alphanumerics and underscores, whereas [^A-Za-z0-9_] would match anything that is not an alphanumeric or underscore. In your case, the pattern you seem to want is r'[^\w\s.@+-]' (used with findall it pulls out the offending characters; anchoring it as ^...+$ would instead match only strings made up entirely of invalid characters).
It might be an obvious question, but how do I capture characters that do not pass regex validation? I want to display such characters in a tool tip to my users. | Catching characters that do not pass regex validation | 0.099668 | 0 | 0 | 26 |
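A sketch of one direct way to do this: use the complement of the validator's character class to pull out the offending characters (the patterns mirror the '^[\w\s.@+-]+$' rule from the question):

```python
import re

ALLOWED = re.compile(r'^[\w\s.@+-]+$')      # the validation rule
INVALID_CHARS = re.compile(r'[^\w\s.@+-]')  # complement of the allowed class

def invalid_characters(username):
    """Return the distinct characters that make a username fail validation."""
    if ALLOWED.match(username):
        return []
    return sorted(set(INVALID_CHARS.findall(username)))

print(invalid_characters('alice_99'))    # []
print(invalid_characters('bob!#smith'))  # ['!', '#']
```

The returned list can then be joined into the tooltip message shown to the user.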
44,218,229 | 2017-05-27T14:46:00.000 | 0 | 0 | 1 | 0 | python-3.x | 44,222,391 | 2 | false | 0 | 0 | Here is what I have done :
string = re.findall('^[(++)? (1-9d+) (++)?]+$','0 1 +')
it wouldn't capture the 0 as an individual number in this case, though...
thanks!
Erik | 1 | 0 | 0 | I am just wondering, in regex, why can we not have a character class within a character class? For example, as below, I need to exclude all numbers that start with 0, but not 0 itself.
However, a square bracket within a square bracket is not valid: [[]]
string = re.findall('^[(++)? [1-9][0-9] (++)?]+$',' 01 + 2')
Why is that? Wouldn't it be much easier if we could write such expressions as above?
Many thanks!
Erik | Character Class Regular Expression Python | 0 | 0 | 0 | 274 |
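For the stated goal (matching numbers that do not start with 0, while still allowing 0 itself), nested classes are not needed; plain alternation works. A sketch:

```python
import re

# Either the single digit 0, or a number whose first digit is 1-9.
NUMBER = re.compile(r'\b(?:0|[1-9][0-9]*)\b')

print(NUMBER.findall('01 + 2 and 0 and 10 and 007'))  # ['2', '0', '10']
```

The word boundaries prevent the trailing digits of `01` or `007` from being matched on their own.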
44,220,587 | 2017-05-27T18:58:00.000 | 2 | 0 | 1 | 0 | python,rounding | 44,220,620 | 4 | false | 0 | 0 | Simple: because int() works as "floor" , it simply cuts of instead of doing rounding. Or as "ceil" for negative values.
You want to use the round() function instead. | 2 | 1 | 0 | I noticed that when I type int(0.9) it becomes 0 instead of 1.
I was just wondering why Python wouldn't round the numbers. | Why is int(0.9) rounded to 0? | 0.099668 | 0 | 0 | 1,520 |
44,220,587 | 2017-05-27T18:58:00.000 | 2 | 0 | 1 | 0 | python,rounding | 44,220,606 | 4 | false | 0 | 0 | If you use the int() function, it ignores the decimal part and gives you only the integer part. So use the round() function if you want to round figures. | 2 | 1 | 0 | I noticed that when I type int(0.9) it becomes 0 instead of 1.
I was just wondering why Python wouldn't round the numbers. | Why is int(0.9) rounded to 0? | 0.099668 | 0 | 0 | 1,520 |
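A quick sketch of the difference between truncation and rounding:

```python
import math

print(int(0.9))          # 0   int() truncates toward zero
print(round(0.9))        # 1   round() rounds to the nearest integer
print(int(-0.9))         # 0   truncation toward zero, not floor
print(round(-0.9))       # -1
print(math.floor(-0.9))  # -1  true floor, for comparison
```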
44,221,542 | 2017-05-27T20:57:00.000 | 0 | 0 | 0 | 0 | javascript,python,html,selenium,audio | 44,233,382 | 1 | false | 1 | 0 | There is no way to do it with just Selenium, but it would not be hard to implement with Python; just search for a Python module or an external program that suits your purpose. | 1 | 5 | 0 | I'm using Selenium in Python for a project and I was wondering if there was a way to record or stream audio that is being played in the browser. Essentially I want to use Selenium to get the audio and pass it to my application in Python so that I can process it.
Is there a way to do this in Selenium, or using another Python package?
Ideally I would want to be able to get any audio playing in the browser window regardless of the source. | Selenium, streaming audio from browser | 0 | 0 | 1 | 1,354 |
44,225,160 | 2017-05-28T08:14:00.000 | 0 | 0 | 1 | 0 | python,numpy,anaconda | 44,225,303 | 1 | false | 0 | 0 | When you install Anaconda, it installs Python and other packages and might add Python to the PATH. Hence, when you type python it opens Anaconda's interpreter, which has the necessary packages. But you are opening the IDLE of your previous Python interpreter, which doesn't have numpy. | 1 | 0 | 0 | I have an odd problem. I installed Python 2.7 and afterwards installed Anaconda. If I use the terminal and type "python" followed by "import numpy", everything is fine, no error.
However if I try to import numpy in the IDLE I get this:
"ImportError: No module named numpy"
Why is this? I thought the numpy was installed on the computer once and for all when I installed Anaconda. | Python can't find numpy in Idle | 0 | 0 | 0 | 711 |
44,225,170 | 2017-05-28T08:14:00.000 | 1 | 0 | 1 | 0 | python,jira | 44,225,250 | 2 | true | 1 | 0 | Without having used jira-python, I guess you can't. That would break the meaning of the updated field: setting "updated" to a new value although there hasn't been any update. In which situation would that make sense? | 1 | 0 | 0 | Could I change the "updated" field for a Jira issue without changing any other fields of the issue?
I want to update this field, let's say in "silent mode", without changing values in the other fields. | Python + Jira (lib) How to change "updated" field | 1.2 | 0 | 0 | 38 |
44,226,932 | 2017-05-28T11:43:00.000 | 22 | 0 | 0 | 0 | python,tensorflow,deep-learning,conv-neural-network | 45,108,482 | 3 | true | 0 | 0 | I am no expert on this, but as far as I understand the difference is this:
Let's say you have an input colour image with length 100 and width 100, so the dimensions are 100x100x3. For both examples we use the same filter of width and height 5. Let's say we want the next layer to have a depth of 8.
In tf.nn.conv2d you define the kernel shape as [width, height, in_channels, out_channels]. In our case this means the kernel has shape [5,5,3,out_channels].
The weight-kernel that is strided over the image has a shape of 5x5x3, and it is strided over the whole image 8 times to produce 8 different feature maps.
In tf.nn.depthwise_conv2d you define the kernel shape as [width, height, in_channels, channel_multiplier]. Now the output is produced differently. Separate filters of 5x5x1 are strided over each dimension of the input image, one filter per dimension, each producing one feature map per dimension. So here, a kernel size [5,5,3,1] would produce an output with depth 3. The channel_multiplier tells you how many different filters you want to apply per dimension. So the original desired output of depth 8 is not possible with 3 input dimensions. Only multiples of 3 are possible. | 1 | 10 | 1 | What is the difference between tf.nn_conv2d and tf.nn.depthwise_conv2d in Tensorflow? | Difference between tf.nn_conv2d and tf.nn.depthwise_conv2d | 1.2 | 0 | 0 | 8,789 |
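The shape difference can be checked with a naive NumPy re-implementation (a sketch assuming NumPy is available; 'valid' padding, stride 1, and loop-based for clarity rather than speed):

```python
import numpy as np

def conv2d(image, kernel):
    """Standard convolution: kernel (k, k, C_in, C_out) -> output depth C_out."""
    h, w, _ = image.shape
    k, c_out = kernel.shape[0], kernel.shape[3]
    out = np.zeros((h - k + 1, w - k + 1, c_out))
    for i in range(h - k + 1):
        for j in range(w - k + 1):
            # one (k, k, C_in) patch contracted against the whole kernel
            out[i, j] = np.tensordot(image[i:i + k, j:j + k, :], kernel, axes=3)
    return out

def depthwise_conv2d(image, kernel):
    """Depthwise convolution: kernel (k, k, C_in, mult) -> depth C_in * mult.
    Every input channel is filtered independently by its own 2-D filters."""
    h, w, c_in = image.shape
    k, mult = kernel.shape[0], kernel.shape[3]
    out = np.zeros((h - k + 1, w - k + 1, c_in * mult))
    for c in range(c_in):
        for m in range(mult):
            for i in range(h - k + 1):
                for j in range(w - k + 1):
                    patch = image[i:i + k, j:j + k, c]   # one channel only
                    out[i, j, c * mult + m] = np.sum(patch * kernel[:, :, c, m])
    return out

img = np.random.rand(10, 10, 3)
print(conv2d(img, np.random.rand(5, 5, 3, 8)).shape)            # (6, 6, 8)
print(depthwise_conv2d(img, np.random.rand(5, 5, 3, 1)).shape)  # (6, 6, 3)
```

With 3 input channels, the depthwise output depth is always a multiple of 3, which is exactly the point made above.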
44,227,380 | 2017-05-28T12:35:00.000 | 0 | 0 | 0 | 0 | python,layout,kivy,kivy-language | 55,109,529 | 2 | false | 0 | 1 | I don't think you can have lines with different viewclasses in a single RecycleView. RecycleView ,by design, has only one viewclass because it is meant for a large collection of homogeneous items.
For what you are looking for the most straightforward way is probably to use ScrollView and to define a custom add_line(self, type): function to dynamically add each line specifying the type. | 2 | 2 | 0 | I'm implementing a recycleview in kivy. It is possible have multiple (one or more) viewclass depending the dataset data? i would like to have in the same list multiple layouts (eg. one line viewclass1 (one label and two buttons) and another line viewclass2 (one label and two TextInput). Thank you. | multiple viewclass (Kivy - Recycleview) | 0 | 0 | 0 | 1,077 |
44,227,380 | 2017-05-28T12:35:00.000 | 1 | 0 | 0 | 0 | python,layout,kivy,kivy-language | 44,798,573 | 2 | false | 0 | 1 | You can create a widget that extends a layout and then you could add the widgets you need programmatically. | 2 | 2 | 0 | I'm implementing a recycleview in kivy. It is possible have multiple (one or more) viewclass depending the dataset data? i would like to have in the same list multiple layouts (eg. one line viewclass1 (one label and two buttons) and another line viewclass2 (one label and two TextInput). Thank you. | multiple viewclass (Kivy - Recycleview) | 0.099668 | 0 | 0 | 1,077 |
44,227,748 | 2017-05-28T13:18:00.000 | 8 | 0 | 1 | 0 | python,string,pandas,split | 47,000,213 | 5 | false | 0 | 0 | in messy data it might to be a good idea to remove all whitespaces df.replace(r'\s', '', regex = True, inplace = True). | 1 | 34 | 1 | I've used multiple ways of splitting and stripping the strings in my pandas dataframe to remove all the '\n'characters, but for some reason it simply doesn't want to delete the characters that are attached to other words, even though I split them. I have a pandas dataframe with a column that captures text from web pages using Beautifulsoup. The text has been cleaned a bit already by beautifulsoup, but it failed in removing the newlines attached to other characters. My strings look a bit like this:
"hands-on\ndevelopment of games. We will study a variety of software technologies\nrelevant to games including programming languages, scripting\nlanguages, operating systems, file systems, networks, simulation\nengines, and multi-media design systems. We will also study some of\nthe underlying scientific concepts from computer science and related\nfields including"
Is there an easy python way to remove these "\n" characters?
Thanks in advance! | removing newlines from messy strings in pandas dataframe cells? | 1 | 0 | 0 | 88,228 |
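A sketch of the replacement itself, shown on a plain string (in a DataFrame the same pattern can be applied with df['col'].str.replace(r'\s+', ' ', regex=True), assuming the column holds strings):

```python
import re

text = ("hands-on\ndevelopment of games. We will study a variety of "
        "software technologies\nrelevant to games")

# Collapse every run of whitespace (including '\n') into a single space,
# so words joined by a newline end up separated instead of glued together.
cleaned = re.sub(r'\s+', ' ', text).strip()
print(cleaned)
```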
44,227,908 | 2017-05-28T13:37:00.000 | 4 | 0 | 0 | 0 | python,scikit-learn,regression | 44,228,127 | 1 | true | 0 | 0 | If you have a similar pipeline to feed the same data into the models, then the metrics are comparable. You can choose the SVR Model without any doubt.
By the way, it could be really interesting for you to "redevelop" this "R_squared" Metric, it could be a nice way to learn the underlying mechanic. | 1 | 1 | 1 | I am getting different score values from different estimators from scikit.
SVR(kernel='rbf', C=1e5, gamma=0.1) 0.97368549023058548
Linear regression 0.80539997869990632
DecisionTreeRegressor(max_depth = 5) 0.83165426563946387
Since all regression estimators should use R-square score, I think they are comparable, i.e. the closer the score is to 1, the better the model is trained. However, each model implements score function separately so I am not sure. Please clarify. | Is it correct to compare score of different estimators? | 1.2 | 0 | 0 | 94 |
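Re-implementing the metric is only a few lines (a sketch of the standard R² definition, which is what the score() method of scikit-learn regressors reports):

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

print(r_squared([3.0, -0.5, 2.0, 7.0], [2.5, 0.0, 2.0, 8.0]))  # ~0.9486
```

Since every estimator above reports the same quantity on the same data, the closer-to-1-is-better comparison holds.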
44,228,698 | 2017-05-28T14:58:00.000 | 0 | 0 | 0 | 0 | python-3.x | 46,943,022 | 1 | false | 0 | 0 | I had seen this same exact error message, and it was because python was confused about some other file names in the same folder that is was loading instead of library files. Try cleaning the folder, renaming your files, etc. | 1 | 0 | 1 | i'm trying to impute missing values with KNN in python so i have downloaded a package named fancyimpute that contain the methode KNN then when i want to import it i get this Error ImportError: cannot import name 'KNN'
please help me | Impute Missing Values Using K-Nearest Neighbors | 0 | 0 | 0 | 427 |
44,230,567 | 2017-05-28T18:22:00.000 | 1 | 1 | 0 | 0 | android,python,scripting,embed | 44,276,418 | 1 | false | 0 | 1 | I finally found a project which fits to my needs: BeanShell (beanshell.org) .
It is Java-like but more easy. It has direct access to Java libraries and sharing variables between the host app and the script is very easy. It is lightweight. It is quiet popular and has a good documentation (even for android purposes) | 1 | 1 | 0 | I know, that's a long title. However any shorter title would be a duplicate of the tons of other questions of this kind here on StackOverflow.
I am searching for a scripting language I could embed into my Android application. The user would write a script. Later on I trigger a specific function of this script and process its returned result. The triggering and processing should run without any console-like GUI, just in the background while the user is viewing a beautiful Android activity. This is what I would like to achieve. However, I did not find a language for Android that suits my needs. It should be:
Easy (like Python, etc.)
Provide direct access to popular android/java methods (like sl4a does with the Android() module)
lightweight (as far as possible)
There must be a way to call the script (or if not possible, a function defined in the script) and get the returned value from Java
It would be good if the language had good documentation
I found no language, which suits all my needs.
I found:
deelang - no built-in access to popular Android SDK methods and bad documentation
sl4a - I found no way to embed it directly into my app. I cant let the user download the sl4a app. I don't know whether I can get the result of a script and pass paramaters to it.
android-python27 - bad documentation, same as sl4a
rhino - does not provide built-ins to access popular Android SDK methods
androlua - bad documentation, I don't know whether it provides built-in access to the Android SDK
Is there really no language available that suits my needs? Wouldn't we need a language like it which is easy to embed? | Embeddable Easy Lightweight Scripting Language with direct bindings to java for Android | 0.197375 | 0 | 0 | 511 |
44,233,042 | 2017-05-28T23:46:00.000 | 2 | 0 | 0 | 0 | python,memory-management,tensorflow,keras,theano | 51,102,921 | 1 | false | 0 | 0 | You can do this thing, but it will cause your training time to approach sizes that will only make the results useful for future generations.
Let's consider what all we have in our memory when we train with a batch size of 1 (assuming you've only read in that one sample into memory):
1) that sample
2) the weights of your model
3) the activations of each layer #your model stores these for backpropogation
None of this stuff is unnecessary for training. However, you could, theoretically, do a forward pass on the first half of the model, dump the weights and activations to disk, load the second half of the model, do a forward pass on that, then the backward pass on that, dump those weights and activations to disk, load back the weights and activations of the first half, then complete the backward pass on that. This process could be split up even more to the point of doing one layer at a time.
OTOH, this is akin to what swap space does, without you having to think about it. If you want a slightly less optimized version of this (which, optimization is clearly moot at this point), you can just increase your swap space to 500GB and call it a day. | 1 | 2 | 1 | I've got a model in Keras that I need to train, but this model invariably blows up my little 8GB memory and freezes my computer.
I've come to the limit of training just one single sample (batch size = 1) and still it blows up.
Please assume my model has no mistakes or bugs and this question is not about "what is wrong with my model". (Yes, smaller models work ok with the same data, but aren't good enough for the task).
How can I split my model in two and train each part separately, but propagating the gradients between them?
Is there a possibility? (There is no limitation about using theano or tensorflow)
Using CPU only, no GPU. | Can I train a model in steps in Keras? | 0.379949 | 0 | 0 | 957 |
44,233,219 | 2017-05-29T00:21:00.000 | 1 | 0 | 0 | 0 | python,directx,mouseevent,mouse,mousemove | 44,331,240 | 2 | true | 0 | 1 | Found a perfect solution, it will require some extra stuff to run but its the shortest way.
I haven't found anyone who actually knows how to simulate the mouse, but I decided to ask Sentdex for help and he recommended using vJoy to simulate a controller.
So you need to simulate a controller instead of the mouse, by using a combination of vJoy (a controller driver) and FreePIE (an input emulator).
After doing some research, for my purpose the best solution for moving on the (x, y) axes is to bind controller (x, y) axis movements to keyboard shortcuts (e.g. W, A, S, D) and make the main script press these shortcuts; correct me if I'm looking in the wrong direction.
tl;dr Simulate a controller. NEED: vJoy, FreePIE | 1 | 1 | 0 | I've just started learning Python a few days ago and I'd like to know how to simulate mouse movements inside games that have forced mouse coordinates.
directx environments?
I've currently tested pyautogui, ctypes, and wxPython. I've also tried using directpython11, but I've had trouble installing it: a ton of DLL errors.
Can't find any topics helping with this in google, lots of pages on how to click or write in these kinds of cases but nothing about moving the mouse. | How to simulate mouseMovement in-game? | 1.2 | 0 | 0 | 1,950 |
44,233,777 | 2017-05-29T02:09:00.000 | 2 | 0 | 0 | 0 | python,amazon-web-services,amazon-s3,aws-lambda | 44,234,884 | 2 | true | 1 | 0 | You could certainly write an AWS Lambda function that would:
Download the file from the URL and store it in /tmp
Upload to Amazon S3 using the AWS S3 SDK
It would be easiest to download the complete file rather than attempting to stream it in 'bits'. However, please note that there is a limit of 500MB of disk space available for storing data. If your download is larger than 500MB, you'll need to do some creative programming to download portions of it and then upload it as a multi-part upload.
As for how to download it, use whatever library you prefer for downloading a web file. | 1 | 7 | 0 | I want to upload a video into S3 using an AWS Lambda function. This video is not available on my local computer; I have a 'download URL'. I don't want to download it onto my local computer and then upload it into S3. I am looking for a solution to directly put this video file into S3 using a Lambda function. If I use a buffer or streaming, I will consume lots of memory. Is there a better, more efficient solution for this?
I really appreciate your help. | Use AWS lambda to upload video into S3 with download URL | 1.2 | 0 | 1 | 7,999 |
44,235,633 | 2017-05-29T06:09:00.000 | 0 | 0 | 1 | 0 | python,python-2.7,integer-division | 44,235,692 | 4 | false | 0 | 0 | I guess Python 2.7 automatically gives the floor values.
You could import numpy and then do a np.ceil(-7/3) | 2 | 0 | 0 | In Python 2.x ,the integer division-7/3 should output -2,right?.Reason:since both the numerators and denominators are integers,the division is carried out and digits after decimal are truncated.so in this case,
-7/3=-2.3333333....so,python should output -2.but it gives output -3.can anyone please explain why? | Division in Python 2.x | 0 | 0 | 0 | 1,098 |
44,235,633 | 2017-05-29T06:09:00.000 | 1 | 0 | 1 | 0 | python,python-2.7,integer-division | 44,235,728 | 4 | false | 0 | 0 | It is important to realize that -7/3 is parsed as (-7)/3 rather than as -(7/3), whereupon the floor division rule explained in other answers results in the quotient evaluating to -3. | 2 | 0 | 0 | In Python 2.x ,the integer division-7/3 should output -2,right?.Reason:since both the numerators and denominators are integers,the division is carried out and digits after decimal are truncated.so in this case,
-7/3=-2.3333333....so,python should output -2.but it gives output -3.can anyone please explain why? | Division in Python 2.x | 0.049958 | 0 | 0 | 1,098 |
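A short sketch of the floor rule in action (run under Python 3, where // is the explicit floor-division operator that matches Python 2's integer /):

```python
import math

# Python 2's integer `/` behaves like Python 3's `//`: floor division.
print(-7 // 3)             # -3  (floor of -2.333...)
print(7 // 3)              # 2
print(math.floor(-7 / 3))  # -3
print(int(-7 / 3))         # -2  (truncation toward zero, the result the asker expected)
```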
44,240,416 | 2017-05-29T10:37:00.000 | 1 | 0 | 1 | 0 | python,dynamic | 44,240,524 | 1 | false | 0 | 0 | As far as I understand, by "dynamic strings" you mean format strings (as in 'such {}'.format('constructions')). Their purpose is contrary to that of regular expressions: using format strings you convert structured data to strings; using regular expressions you convert strings to structured data. The former task is usually a lot more deterministic and simple, but it's not exactly a subset of the task solved by regular expressions (they are as different as text-to-speech and speech-to-text systems), so the syntax for the two tasks is just different. | 1 | 0 | 0 | I am learning regular expressions in Python, but I already saw dynamic strings in the Android SDK, and I am wondering if dynamic strings are a subset of regular expressions and/or what the differences are
cheers | easy: differences between dynamic strings and regular expressions? | 0.197375 | 0 | 0 | 35 |
44,241,532 | 2017-05-29T11:34:00.000 | 1 | 0 | 1 | 1 | python,linux,apt-get,xmltodict | 44,242,318 | 2 | false | 0 | 0 | if you want to do it with pip or easy-install, you can run pip2/pip2.7 and install it. It will be installed for Python which was used to run pip... | 1 | 0 | 0 | I am trying to add xmltodict package to Python on Linux.
I have 2 installations of Python on my Linux build; Python 2.7 (default) and Python 3.5 (in the form of an Anaconda installation). I want to add xmltodict to the Python 3 installation, but when I use sudo apt-get install python-xmltodict it adds it to the default Python 2.7 installation.
How can I get apt to add this package to my Python 3 installation without changing the default or using pip? I do not want to have to rebuild my installation with a virtual environment either | Add package to specific install of Python in Linux | 0.099668 | 0 | 0 | 138 |
44,241,716 | 2017-05-29T11:45:00.000 | 0 | 0 | 1 | 0 | python,virtualenv,setup.py,data-files | 44,242,501 | 2 | false | 0 | 0 | virtualenv is more for development, not for deployment. There are many scenarios to deploy Python app. but if you prefer virtualenv usage and you have common files, they can be anywhere IMHO, because virtualenv is not real isolation, it's only Python paths mangling/modification mechanism (not like "chroot"), so you decide where to place your common files, even /usr/share/my-app-1.0/datafiles/. Also, virtualenv is used to isolate binaries/scripts, but if data files are static, you can place them when you prefer. | 1 | 0 | 0 | Here is my problem, I am trying to make an application which copies data files during its setup. When i am doing pip install the setup copies a few files to a directory.
Now my question is, When inside a virtual environment, what is the behaviour that the customer expects- does he want all the created data files inside the virtual environment directory for each virtualenv or copy all the files into a common directory outside the virtual environment directory.
While running the application, new files will be created and stored alongside these copied files. What is the ideal behaviour that is expected from a Python virtualenv: common or isolated? | What is the expected behaviour in a virtualenv python? | 0 | 0 | 0 | 61 |
44,242,207 | 2017-05-29T12:12:00.000 | 0 | 0 | 0 | 0 | python,opencv | 44,246,824 | 4 | false | 0 | 0 | As the card symbol is at fixed positions, you may try below (e.g. in OpenCV 3.2 Python):
Crop the symbol at top left corner, image = image[h1:h2,w1:w2]
Threshold the symbol colors to black, the rest to white, thresh = mask = cv2.inRange(image,(0,0,0),(100,100,100))
Perform a find contour detection, _, contours, hierarchy = cv2.findContours()
Get the area size of the contour. area = cv2.contourArea(contour)
Compare the area to determine which one of the 4 symbols it belongs to.
In which you have to build the area thresholds of each symbol for step 4 to compare. All the cv2 functions above are just for reference.
Hope this helps. | 1 | 0 | 1 | I'm trying to detect the difference between a spade, club, diamond and heart. The number on the card is irrelevant; just the suit matters.
I've tried color detection by looking at just the red or black colors, but that still leaves me with two results per color. How could I make sure I can detect each symbol individually?
For instance: I have a picture of a red heart, a red diamond, a black spade and a black club. I want to draw the contours of each symbol in a different color.
I'm using my webcam as a camera. | detect card symbol using opencv python | 0 | 0 | 0 | 4,838 |
44,242,760 | 2017-05-29T12:40:00.000 | -1 | 0 | 0 | 1 | python,opencv,docker,video-capture | 44,807,718 | 1 | false | 0 | 0 | There might be 2 problems:
1) In your container OpenCV is not installed properly. To check that, do print(ret, frame). If they come back as (False, None), then OpenCV has not been installed properly.
2) The file you are using is corrupted. To check that, try copying an image file (jpg) into the container and use cv2.imread to read the image, then print it. If a numpy array is printed, the file is not getting corrupted while copying.
The best option is to pull any opencv + python image and then create a container from it. The better option is to use the Dockerfiles of these images to create the container.
import cv2
vid = cv2.VideoCapture('path\to\video')
ret, frame = vid.read()
In terms of the video file,
I have tried
either mounting the file with docker -v
or using docker cp to copy the video file into the container,
but both with no luck (ret returns False).
Should I add any command when launching the container?
Thanks in advance. | cv2.VideoCapture doesn't work within docker container | -0.197375 | 0 | 0 | 1,491 |
44,242,810 | 2017-05-29T12:42:00.000 | 0 | 0 | 1 | 0 | python,fft,dft | 44,309,273 | 1 | false | 0 | 0 | For calculating periods I would just find the peak of the Fourier transformed data, to do that in python look into script.fft. Could be computationally intensive though. | 1 | 3 | 1 | This question seems so trivial but I didn't find any suitable answer so I am asking!
Lets say I have a two column data(say, {x, sin(x)} )
X Y(X)
0.0 0.0
0.1 0.099
0.2 0.1986
How do I find the period of the function Y(X);
I have some experience in Mathematica where(roughly)
I just interpolate the data as a function say y(x), then
Calculate y'(x) and set y'(x_p)=0;
collect all (x_p+1 - x_p)'s and take average to get the period.
In python, however I am stuck after step 1 as I can find out x_p for a particular guess value but not all the x_p's. Also this procedure doesn't seem very elegant to me. Is there a better way to do things in python? | Python: Finding period of a two column data | 0 | 0 | 0 | 1,459 |
44,243,446 | 2017-05-29T13:15:00.000 | 1 | 0 | 0 | 0 | python,django | 44,285,373 | 3 | false | 1 | 0 | Make sure that your settings.py, which is located at the project root directory, has sys.path.append(os.path.dirname(os.path.abspath(__file__)))
from web -> app -> here's where I want to import things from:
import Theorem_prover.here_are_the_things_I_want_to_import | 1 | 1 | 0 | I am developing a Django project for which I wrote some non-web related libraries.
My directory structure looks something like this:
Main folder
Theorem prover
here are the things I want to import
web
app
here's where I want to import things from
The place where I'm running the app is the web/ folder. What would be the proper way of doing this? | Python import from beyond top-level package | 0.066568 | 0 | 0 | 738 |
44,243,853 | 2017-05-29T13:35:00.000 | 0 | 0 | 0 | 1 | python,shell,subprocess | 44,243,964 | 2 | false | 0 | 0 | Make sure you execute this in the directory where the files exist. If you just fire up Idle to run this code, you will not be in that directory. | 1 | 0 | 0 | I have couple of file with same name, and I wanted to get the latest file
[root@xxx tmp]# ls -t1 abclog*
abclog.1957
abclog.1830
abclog.1799
abclog.1742
I can accomplish that by executing below command.
[root@xxx tmp]# ls -t1 abclog*| head -n 1
abclog.1957
But when I am trying to execute the same in python , getting error :
subprocess.check_output("ls -t1 abclog* | head -n 1",shell=True)
ls: cannot access abclog*: No such file or directory
''
It seems it is not able to recognize '*' as a special parameter. How can I achieve the same? | how to use subprocess.check_output() in Python to list any file name as "abc*" | 0 | 0 | 0 | 260
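A shell-free way to get the same result as `ls -t1 abclog* | head -n 1` is to glob and sort by modification time in Python itself (a sketch; the `abclog*` pattern is taken from the question, and this also sidesteps the working-directory problem):

```python
import glob
import os

def newest_matching(pattern):
    """Return the most recently modified file matching a glob pattern, or None."""
    matches = glob.glob(pattern)
    if not matches:
        return None
    # Newest first, which is what `ls -t1 ... | head -n 1` prints.
    return max(matches, key=os.path.getmtime)
```

If you keep the shell version instead, passing cwd='/tmp' (or whatever directory holds the files) to check_output should fix the "No such file or directory" error.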
44,244,711 | 2017-05-29T14:17:00.000 | 1 | 0 | 0 | 0 | python,machine-learning,classification,random-forest,supervised-learning | 44,250,976 | 3 | true | 0 | 0 | oversampling or
under sampling or
over sampling the minority and under sampling the majority
is a hyperparameter. Do cross-validation to see which one works best.
But use a Training/Test/Validation set. | 2 | 1 | 1 | I have binary classification problem where one class represented 99.1% of all observations (210 000). As a strategy to deal with the imbalanced data, I choose sampling techniques. But I don't know what to do: undersampling my majority class or oversampling the less represented class.
If anybody has any advice?
Thank you.
P.s. I use random forest algorithm from sklearn. | Imbalanced data: undersampling or oversampling? | 1.2 | 0 | 0 | 4,671 |
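For illustration, a minimal sketch of random oversampling of the minority class using only the standard library (libraries such as imbalanced-learn provide ready-made RandomOverSampler and SMOTE implementations):

```python
import random

def oversample_minority(rows, labels, seed=0):
    """Randomly duplicate minority-class rows until all classes are balanced."""
    rng = random.Random(seed)
    by_class = {}
    for row, label in zip(rows, labels):
        by_class.setdefault(label, []).append(row)
    target = max(len(v) for v in by_class.values())
    out_rows, out_labels = [], []
    for label, members in by_class.items():
        # Keep every original row, then draw extras with replacement.
        resampled = members + [rng.choice(members) for _ in range(target - len(members))]
        out_rows.extend(resampled)
        out_labels.extend([label] * len(resampled))
    return out_rows, out_labels
```

As the answer says, treat the choice as a hyperparameter; and resample only the training fold inside cross-validation, so duplicated rows never leak into the validation data.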
44,244,711 | 2017-05-29T14:17:00.000 | 0 | 0 | 0 | 0 | python,machine-learning,classification,random-forest,supervised-learning | 51,599,898 | 3 | false | 0 | 0 | Undersampling:
Undersampling is typically performed when we have billions (lots) of data points and we don’t have sufficient compute or memory(RAM) resources to process the data. Undersampling may lead to worse performance as compared to training the data on full data or on oversampled data in some cases. In other cases, we may not have a significant loss in performance due to undersampling.
Undersampling is mainly performed to make the training of models more manageable and feasible when working within a limited compute, memory and/or storage constraints.
Oversampling:
oversampling tends to work well as there is no loss of information in oversampling unlike undersampling. | 2 | 1 | 1 | I have binary classification problem where one class represented 99.1% of all observations (210 000). As a strategy to deal with the imbalanced data, I choose sampling techniques. But I don't know what to do: undersampling my majority class or oversampling the less represented class.
If anybody has any advice?
Thank you.
P.s. I use random forest algorithm from sklearn. | Imbalanced data: undersampling or oversampling? | 0 | 0 | 0 | 4,671 |
44,246,527 | 2017-05-29T16:00:00.000 | -1 | 0 | 0 | 0 | python,windows,windows-10,wifi | 54,468,292 | 4 | false | 0 | 0 | All you need to do is take the space out and it works for me.
For example: use interface name = "Wi-Fi2" with the code above. | 2 | 4 | 0 | I have been looking for a way to toggle my wifi on and off using a script. I figured this could be done maybe by entering airplane mode or some other method. When I was googleing for the answer though I could not find anything useful for windows, just for android and even macOS.
Anyone have a 2 functions, one for turning wifi off, and one for turning it back on? Or connecting/disconnecting from a specific one works too. It is my default wifi if this is relevant. | Turn WiFi off using Python on Windows 10? | -0.049958 | 0 | 0 | 10,493 |
44,246,527 | 2017-05-29T16:00:00.000 | 0 | 0 | 0 | 0 | python,windows,windows-10,wifi | 70,149,001 | 4 | false | 0 | 0 | netsh interface set interface name = "Wi-fi" enabled
netsh interface set interface name = "Wi-fi" disabled
Just replace ' with "
You need administrator rights to use this command | 2 | 4 | 0 | I have been looking for a way to toggle my wifi on and off using a script. I figured this could be done maybe by entering airplane mode or some other method. When I was googleing for the answer though I could not find anything useful for windows, just for android and even macOS.
Anyone have a 2 functions, one for turning wifi off, and one for turning it back on? Or connecting/disconnecting from a specific one works too. It is my default wifi if this is relevant. | Turn WiFi off using Python on Windows 10? | 0 | 0 | 0 | 10,493 |
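The netsh commands above can be wrapped in Python. This is a sketch only: it assumes the interface is named "Wi-Fi" (check yours with `netsh interface show interface`) and, as noted in the answer, the script must run from an elevated (administrator) prompt on Windows:

```python
import subprocess

def wifi_command(enable, interface="Wi-Fi"):
    """Build the netsh argv that enables or disables a network interface."""
    state = "enabled" if enable else "disabled"
    return ["netsh", "interface", "set", "interface",
            f"name={interface}", f"admin={state}"]

def set_wifi(enable, interface="Wi-Fi"):
    # Requires administrator rights; raises CalledProcessError on failure.
    subprocess.check_call(wifi_command(enable, interface))
```

Toggling is then `set_wifi(False)` to turn the adapter off and `set_wifi(True)` to turn it back on.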
44,251,610 | 2017-05-29T23:59:00.000 | 2 | 0 | 0 | 0 | python,api,bloomberg | 44,255,713 | 1 | false | 0 | 0 | Yes, you need to log in to be able to run the API. Once you log off, you can still pull data as long as you don't log onto a different computer. That's by design.
So there is no other solution than either manually logging onto your terminal in the morning or making sure that you don't use Bloomberg Anywhere on another PC. | 1 | 2 | 0 | Is there any way to automatically log on through the Bloomberg API to fetch data? From my testing with Python the API can pull data when I am logged into Bloomberg through the terminal and some time after I log off.
The next morning I try to run my code and get an error, but the problem goes away as soon as I log on to the terminal.
Thanks. | Automatically log onto Bloomberg through API Python | 0.379949 | 0 | 1 | 1,291 |
44,252,108 | 2017-05-30T01:25:00.000 | 0 | 1 | 0 | 0 | java,python,google-chrome | 44,575,770 | 1 | true | 1 | 0 | After 2 weeks of waiting for any messages from support or a new build, I decided to request a new challenge (the 3rd one) and voilà - the system accepted my solution and promoted me to the 3rd level. | 1 | 1 | 0 | I'm on Level 2, part 2: (Level 2, 50%)
After I submitted the challenge (Java) I got nothing - the timer stops and the status doesn't update.
Logging out / clearing the cache doesn't work; it looks like I need to request a new challenge, but I did that twice! And it doesn't work at all!
I posted a bug through the Feedback command in foo.bar, but who knows where it goes.
Does anyone know where I can find the foo.bar bug tracker? I'd like to understand what is going on.
Also I will appreciate any help and advice.
Thanks!
google-chrome, macos, foobar version 53-17-g6bfce4dd-beta (2017-03-21-22:12+0000) | Google Foobar Bug chahallenge resets after submit | 1.2 | 0 | 0 | 405 |
44,253,243 | 2017-05-30T04:06:00.000 | 0 | 0 | 1 | 0 | python,python-3.x | 44,256,171 | 3 | false | 0 | 0 | If you want to keep the file open, you can also call :
file.truncate() | 1 | 0 | 0 | I have a text file namefile.txt which has more character. I want to delete all text and keep the file in order to see nothings when we open namefile.txt after run the python script? | How to delete all characters in text file by python | 0 | 0 | 0 | 1,455 |
44,253,840 | 2017-05-30T05:06:00.000 | 1 | 0 | 0 | 0 | python-2.7,amazon-s3,aws-lambda,doc2vec | 44,291,881 | 1 | false | 0 | 0 | Gensim's Doc2Vec is not designed to distribute training over multiple-machines. It'd be a significant and complex project to adapt its initial bulk training to do that.
Are you sure your dataset and goals require such distribution? You can get a lot done on a single machine with many cores & 128GB+ RAM.
Note that you can also train a Doc2Vec model on a smaller representative dataset, then use its .infer_vector() method on the frozen model to calculate doc-vectors for any number of additional texts. Those frozen models can be spun up on multiple machines – allowing arbitrarily-distributed calculation of doc-vectors. (That would be far easier than distributing initial training.) | 1 | 0 | 1 | I'm using python Gensim to train doc2vec. Is there any possibility to allow this code to be distributed on AWS (s3).
Thank you in advance | How to train doc2vec on AWS cluster using spark | 0.197375 | 0 | 0 | 632 |
44,256,443 | 2017-05-30T07:46:00.000 | 0 | 0 | 0 | 0 | python,web,pyramid | 44,275,394 | 2 | false | 1 | 0 | If you can manage to view this as a permission/security problem then you can deal with it by adding appropriate permissions/ACLs to your application and then handling the issue in a forbidden view by likely returning a redirect. | 1 | 0 | 0 | I have a situation where if the user visits any route on my Pyramid application I would like to serve them one particular view. Basically "force" them to a single page. A redirect would be fine too.
Is there something in Pyramid to achieve this? | Changing view or route after the request has been started in Pyramid | 0 | 0 | 0 | 52 |
44,259,535 | 2017-05-30T10:19:00.000 | 1 | 0 | 0 | 0 | revit-api,revitpythonshell | 44,261,410 | 3 | false | 0 | 0 | To answer my own question: I actually never thought of looking through the code of other Revit Python scripts... in this case PyRevit, which is in my opinion far more eloquently written than RPS; really looking forward to their console work being done!
Basically, I had mistakenly used GetParameter('parameter') instead of LookupParameter('parameter').
As I said, it was something stupidly simple that I just didn't understand.
If anyone has sufficient knowledge to coherently clarify this, please do answer!
Many thanks! | 1 | 1 | 0 | I'm just trying to find a way to access the name property of an Area element inside Revit Python Shell, tried looking on Jeremy Tammik's amazingly informative blog, tried AUGI, Revit API docs, been looking for 2 days now...
Tried accessing via a bunch of ways, FilteredElementsCollector(doc).OfCategory(BuiltInCategory.OST_Areas), tried by Area class, tried through AreaTag, every single time I get an error under every circumstance and it's driving me nuts, it seems like such a simple issue that I can't seem to grasp!
EDIT: Also tried by element id, through tags, through area schemes, nada, no go...
Can anyone please tell me how to access this property via RPS? | Accessing Area.Name Throws Error | 0.066568 | 0 | 0 | 127 |
44,260,553 | 2017-05-30T11:07:00.000 | 0 | 0 | 0 | 0 | python-2.7 | 44,584,830 | 1 | false | 0 | 0 | Create a folder dataset and then create two sub-folders train and test. Then inside Train if you wish create sub-folders with images labels (e.g. fish - holds all fish images, lion - holds lion images etc) and in test you can populate with some images. Finally train the model pointing to Dataset - > Train. | 1 | 0 | 1 | I am very new to the keras. currently working with CNN in keras using theano as backend. I would like to train my network with own images( around 25000 images),which all are in same folder and test it. How could I do that? (please help me, i am not familiar with deep learning) | CNN in keras using theano as backend | 0 | 0 | 0 | 70 |
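The folder layout this answer describes can be created with a short standard-library sketch (the class names "fish" and "lion" are just the answer's examples). This one-subfolder-per-label layout is exactly what e.g. Keras's ImageDataGenerator().flow_from_directory(...) expects for training:

```python
import os

def make_image_dataset_layout(root, classes):
    """Create dataset/train/<class> and dataset/test/<class> directories."""
    for split in ("train", "test"):
        for cls in classes:
            os.makedirs(os.path.join(root, split, cls), exist_ok=True)

# make_image_dataset_layout("dataset", ["fish", "lion"])
# Copy your labelled images into the matching folders, then point your
# training pipeline (e.g. flow_from_directory) at dataset/train.
```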
44,260,604 | 2017-05-30T11:09:00.000 | 0 | 1 | 0 | 0 | python,rest,pyresttest | 44,262,032 | 1 | false | 1 | 0 | You need to use template headers, like
headers: {template: {Content-Type: application/json, auth_token: "$auth_key"}}
Example:
headers: {template: {'$headername': '$headervalue', '$cache': '$cachevalue'}}
It works fine now. | 1 | 1 | 0 | Hi, I declared variables in the config and I am using them in the test case below. The variables are not passing, and I am not able to figure out the actual issue.
config:
variable_binds: {'user_id': 'ravitej', 'pass': 'password', 'auth': 'plain'}
test:
name:"login success"
url: "/api/login"
method: "POST"
body: '{"username": "$user_id", "password": "$pass", "authtype": "$auth"}'
headers: {Content-Type: application/json}
expected_status: [200]
group: "User"
In the case below: I'm running the 1st test set and I'm getting an auth_token in the response; this auth_token is saved to auth_key for use in another test set, but this auth_key is not passing to the 2nd test set - it was passing an empty variable. I tried all the possible ways which are posted on GitHub and I am still getting the same problem.
test:
name: "Registration"
url: "/api/register"
method: "POST"
body: '{"device_id": "0080", "device_key": "imAck", "device_name": "teja", "device_description": "Added By Ravi", "project_tag": "MYSTQ-16", "attributes": "{}"}'
headers: {Content-Type: application/json}
expected_status: [200]
group: "Register"
extract_binds:
- 'auth_key': {'jsonpath_mini': 'auth_token'}
test:
name: "Status"
url: "/api/status"
method: "POST"
body: '{"attributes": {"num_ports": "2", "model": "Phillips", "firmware_ver": "8.0.1", "ipaddress": "192.168.0.0", "rssi": "Not Available"}}'
headers: {Content-Type: application/json, auth_token: "$auth_key"}
expected_status: [200]
group: "Status" | Variables are not passing from one test set to another? | 0 | 0 | 0 | 202 |
44,261,941 | 2017-05-30T12:12:00.000 | 1 | 0 | 1 | 0 | python,json,concurrency,rabbitmq,producer-consumer | 44,268,755 | 1 | true | 0 | 0 | Using some sort of message queue could help to cleanly address this problem, and decouple downloading a JSON and processing a JSON.
In this setup:
[download] -> [MQ] -> [process] -> ??
Each [] would represent a separate process and -> will represent sending some sort of interprocess data.
Your download script could be modified to save each file to a cloud file storage service, and publish a message with the location of that file, when the download is complete.
There could then be a consumer process which reads from the message queue and processes the file.
This will allow you to process each file as it is download. Additionally, it lets you scale the download and the process steps separately.
While this pattern is very common, it comes with operational complexity. You will need to manage 3 separate processes.
If you would like to run this on a single machine you could apply the same pattern locally by having two separate processes:
[download] - writes to stdout
[process_json] - reads from stdin and processes the json
then you could link them using os pipes download.py | process_json.py
download.py would download the file and write the file path. and process_json would operate on a single filepath. | 1 | 1 | 0 | I have a script which downloads a lot of JSONs. I am processing the JSONs after they are downloaded and I am sending them to some other functions. Presently, I am just waiting till all the JSONs are downloaded and then processing each of them. Is there any way to do this parallelly? Like, as soon as each JSON is downloaded, move it to perform some tasks on it.
I am thinking to use RabbitMQ which sends the consumer the path of the JSON after it has been fully downloaded. I have no clue how to determine whether a JSON has been downloaded and is ready to use it.
I have looked at other answers, but I couldn't find anything clear. I just want an idea on how to continue with the concurrency part or how to take the just downloaded JSON to next process. | Process files as soon as they are downloaded | 1.2 | 0 | 0 | 87 |
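On a single machine, the download -> queue -> process idea from the answer can be sketched with the standard library, using threads in place of the separate processes. This is illustrative only: `download` and `process` are whatever callables you supply (fetching a JSON and handling it), and the queue hands each item to a consumer as soon as it is ready:

```python
import queue
import threading

def run_pipeline(urls, download, process, workers=2):
    """Process each item as soon as it is downloaded, instead of waiting for all."""
    q = queue.Queue()
    done = object()  # sentinel telling consumers to stop

    def consumer():
        while True:
            item = q.get()
            if item is done:
                q.put(done)          # re-post so the other consumers stop too
                break
            process(item)

    threads = [threading.Thread(target=consumer) for _ in range(workers)]
    for t in threads:
        t.start()
    for url in urls:
        q.put(download(url))         # enqueue as soon as each download finishes
    q.put(done)
    for t in threads:
        t.join()
```

RabbitMQ plays the role of `q` in the distributed version: the downloader publishes a message (e.g. the file path) on completion, and consumers process it, so there is no need to "detect" when a JSON is ready.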
44,262,586 | 2017-05-30T12:41:00.000 | 1 | 0 | 0 | 0 | python,django,python-import | 44,262,726 | 4 | false | 1 | 0 | It means from the current directory import views.py module | 3 | 8 | 0 | I started to learn Django, I don't know python very well, so please forgive me, if the question is quite stupid).
from . import views
what is "." in this statement? Module's name? | Django tutorial. from . import views | 0.049958 | 0 | 0 | 15,432 |
44,262,586 | 2017-05-30T12:41:00.000 | 1 | 0 | 0 | 0 | python,django,python-import | 44,263,397 | 4 | false | 1 | 0 | While one dot means the current directory, if you want the upper directory then use a double dot, just for your reference | 3 | 8 | 0 | I started to learn Django, I don't know python very well, so please forgive me, if the question is quite stupid).
from . import views
what is "." in this statement? Module's name? | Django tutorial. from . import views | 0.049958 | 0 | 0 | 15,432 |
44,262,586 | 2017-05-30T12:41:00.000 | 6 | 0 | 0 | 0 | python,django,python-import | 44,262,623 | 4 | true | 1 | 0 | The single dot is a convention from command line applications. It means the current directory. In terms of Django it stands for the directory/module the current file is on. | 3 | 8 | 0 | I started to learn Django, I don't know python very well, so please forgive me, if the question is quite stupid).
from . import views
what is "." in this statement? Module's name? | Django tutorial. from . import views | 1.2 | 0 | 0 | 15,432 |
44,262,826 | 2017-05-30T12:51:00.000 | 0 | 0 | 0 | 0 | python,python-3.x,sockets,unix,sqlmap | 44,585,660 | 1 | false | 0 | 0 | Update :
I finally managed to do what I had in mind.
In SQLmap, the "dictionary" can be found in the "/xml" directory.
By using several greps I was able to follow the creation flow: the global payload is built first, and then split into the real SQL payloads. | 1 | 0 | 0 | I'm currently programming in python:
- a graphical interface
- my own simple client-server based on sockets (also in python).
The main purpose here is to receive a message (composed of several fields), apply any changes to it, and then send the message back to the client.
What I want to achieve is to link the modification of a field of this message with some script (I just saw radamsa, which could do the job; I still need to test it).
For example the message : John/smith/London would be John/(request by script)/London.
I don't know if it's even possible to use just a part of code of something like sqlmap (or something else, I don't mind), without the whole thing.
I hope I have given enough details, if not, feel free to ask for any =)
Edit: the use of sqlmap is not mandatory; it's just one of the few fuzzing scripts I was aware of | How to link sqlmap with my own script | 0 | 1 | 0 | 156
44,265,851 | 2017-05-30T15:08:00.000 | 0 | 0 | 0 | 0 | python,django,settings | 44,270,896 | 2 | false | 1 | 0 | Make sure that in the test you import the settings like this:
from django.conf import settings | 1 | 0 | 0 | I have different settings for different environments for my application, such as local, dev, stage, production. So, when I run my apps, say, locally, I pass the settings as the parameter to manage.py, e.g. python3 manage.py runserver 0.0.0.0:8080 --settings=myapp.settings.local, and indeed all my settings are correctly initialised, e.g. DEBUG is True as it is set in my settings file, and not False as it is in defaultsettings.py.
However, when I try to run tests with python3 manage.py test --settings=myapp.settings.local, the value of DEBUG is set to false, that is it is loaded from defaultsettings.py. Why does that happen and how can I fix this? | Django settings are not loaded when running tests | 0 | 0 | 0 | 293 |
44,266,677 | 2017-05-30T15:47:00.000 | 0 | 0 | 0 | 0 | python,machine-learning | 44,267,916 | 2 | false | 0 | 0 | The train set determines what features you can use for recognition. If you're lucky, your recognizer will just ignore unknown features (I believe NaiveBayes does), otherwise you'll get an error. So save the set of feature names you created during training, and use them during testing/recognition.
Some recognizers will treat a missing binary feature the same as a zero value. I believe this is what the NLTK's NaiveBayesClassifier does, but other engines might have different semantics. So for binary present/absent features, I would write my feature extraction function to always put the same keys in the feature dictionary. | 1 | 3 | 1 | guys.
I was developing an ML model and I got a doubt. Let's assume that my train data has the following data:
ID | Animal | Age | Habitat
0 | Fish | 2 | Sea
1 | Hawk | 1 | Mountain
2 | Fish | 3 | Sea
3 | Snake | 4 | Forest
If I apply One-hot Encoding, it will generate the following matrix:
ID | Animal_Fish | Animal_Hawk | Animal_Snake | Age | ...
0 | 1 | 0 | 0 | 2 | ...
1 | 0 | 1 | 0 | 1 | ...
2 | 1 | 0 | 0 | 3 | ...
3 | 0 | 0 | 1 | 4 | ...
That's beautiful and work in most of the cases. But, what if my test set contains fewer (or more) features than the train set? What if my test set doesn't contain "Fish"? It will generate one less category.
Can you guys help me how can I manage this kind of problem?
Thank you | Machine Learning - test set with fewer features than the train set | 0 | 0 | 0 | 5,636 |
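The usual fix is the one the answers hint at: learn the category vocabulary from the training column only and reuse it at test time, so the encoded width never changes and unseen test categories become all-zero rows. In practice sklearn's OneHotEncoder(handle_unknown='ignore'), or pandas' get_dummies followed by reindex against the training columns with fill_value=0, does this for you; a minimal standard-library sketch of the idea:

```python
def fit_vocabulary(values):
    """Learn the category list from the *training* column only."""
    return sorted(set(values))

def one_hot(values, vocab):
    """Encode with the training vocabulary; unseen categories map to all zeros."""
    index = {cat: i for i, cat in enumerate(vocab)}
    rows = []
    for v in values:
        row = [0] * len(vocab)
        if v in index:               # unseen test categories are simply ignored
            row[index[v]] = 1
        rows.append(row)
    return rows
```

With this, a test set that never contains "Fish" still produces an Animal_Fish column (all zeros), so the feature matrix shape matches the one the model was trained on.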
44,268,109 | 2017-05-30T17:09:00.000 | 1 | 0 | 0 | 1 | python,snakebite | 63,066,582 | 2 | false | 0 | 0 | You must kinit with proper keytab is enough. It will take automatically by the principal name to fetch results. | 1 | 0 | 0 | I have been struggling with how to pass requisite parameters to snakebite utility for it to be able to access a kerberized cluster. I have tried setting the necessary conf dir in the /usr/local/etc/hadoop path, as well as initialising and getting a ticket using kinit.
Any help or a working example in this regard would be greatly appreciated.
NOTE: I have tested that the environment setup is proper by using 'hadoop' CLI to access the cluster from the same machine. | How to access kerberized cluster using snakebite python client | 0.099668 | 0 | 0 | 387 |
44,269,711 | 2017-05-30T18:46:00.000 | 3 | 0 | 1 | 0 | python,django | 44,269,738 | 2 | false | 1 | 0 | You could do this by modifying template processors, but if you want to add this to all <p> elements which are generated, why not just use CSS? | 1 | 1 | 0 | I use the following line in my template: {{ description|linebreaks }}
How can I add the same class to all <p> elements which will be generated? | How add class to p elements which generated by linebreaks? | 0.291313 | 0 | 0 | 153
44,270,585 | 2017-05-30T19:38:00.000 | 1 | 1 | 0 | 0 | php,python,api,server | 44,271,429 | 2 | true | 1 | 0 | If the result of your algorithm only depends on the LAST scrape and not a history of several scrapes, then you could just scrape "on demand" and deploy your algo as an AWS lambda function. | 2 | 0 | 0 | I'm working on a project and I just wanted to know what route (If I'm not going down the correct route) I should go down in order to successfully complete this project.
Ferry Time: This application uses boat coordinates and ETA algorithms to determine when a boat will be approaching a certain destination.
Now, I've worked up a prototype that works, but not the way I want it to. In order for my ETA's to show up on my website accurately, I have a python script that runs every minute to webscrape these coordinates from a specific site, do an algorithm, and spit out an ETA. The ETA is then sent to my database, where I display it on my website using PHP and SQL.
The ETA's only update as long as this script is running (I literally run the script on Eclipse and just leave it there)
Now my question: Is there a way I can avoid running the script? Almost like an API. Or to not use a database at all?
Thanks ! | Project using Python Webscraping | 1.2 | 0 | 1 | 71 |
44,270,585 | 2017-05-30T19:38:00.000 | 0 | 1 | 0 | 0 | php,python,api,server | 44,270,817 | 2 | false | 1 | 0 | Even if they had an API you'd still need to run something against it to get the results. If you don't want to leave your IDE open you could use cron to call your script. | 2 | 0 | 0 | I'm working on a project and I just wanted to know what route (If I'm not going down the correct route) I should go down in order to successfully complete this project.
Ferry Time: This application uses boat coordinates and ETA algorithms to determine when a boat will be approaching a certain destination.
Now, I've worked up a prototype that works, but not the way I want it to. In order for my ETA's to show up on my website accurately, I have a python script that runs every minute to webscrape these coordinates from a specific site, do an algorithm, and spit out an ETA. The ETA is then sent to my database, where I display it on my website using PHP and SQL.
The ETA's only update as long as this script is running (I literally run the script on Eclipse and just leave it there)
Now my question: Is there a way I can avoid running the script? Almost like an API. Or to not use a database at all?
Thanks ! | Project using Python Webscraping | 0 | 0 | 1 | 71 |
44,271,059 | 2017-05-30T20:10:00.000 | 7 | 0 | 0 | 0 | python-2.7,logging,websocket | 44,287,918 | 1 | true | 0 | 0 | websocket.enableTrace(False) solved this. | 1 | 4 | 0 | I am using the websocket-client Python Package. This works fine. But when I send messages to the server the client prints this message to stdout. Is there a possibility to disable this? | Websocket-client Python disable output | 1.2 | 0 | 1 | 1,702 |
44,271,088 | 2017-05-30T20:12:00.000 | 0 | 0 | 0 | 0 | python,tkinter | 44,272,087 | 2 | false | 0 | 1 | "overlay a grid on top of pack" makes no sense. Tkinter has neither a grid nor a pack object. grid and pack are functions.
As functions, you can not use them both to manage widgets that have a common parent.
Since your goal is to have a background image, the simplest solution is to use a label rather than a frame. You can call grid to arrange widgets inside a label in exactly the same way that you arrange widgets inside a frame. The benefit of using a label is that you can give it an image.
Another option would be to continue to use your frame, and then use place to add your background image to the frame. place does not share the same limitation as grid and pack since it doesn't affect the geometry of the parent widget. | 1 | 0 | 0 | Is it possible to lay a grid on top of a pack? I ask this because I am trying to make an interactive map. I want a background image to take up the entire window and then place buttons on top of it in specific locations. The grid makes it is easy to place the buttons where they need to go, but I cannot get the background image to work properly unless I use a pack. Then I cannot easily place the buttons where I would like for them to go (they need to go over the top of specific places on a map i.e. buildings) | overlay grid on top of pack | 0 | 0 | 0 | 355 |
44,271,174 | 2017-05-30T20:17:00.000 | 1 | 0 | 0 | 1 | python,angular,google-app-engine,angular-cli,google-app-engine-python | 44,271,728 | 2 | true | 1 | 0 | You can use enviroment on angular in order to point you python rest service and another enviroment to production.
Example:
environment.ts
export const environment = {
production: false,
urlServices: 'http://190.52.112.41:8075'
};
environment.prod.ts
export const environment = {
production: true,
urlServices: 'http://localhost:8080'
};
This way, you don't need to recompile to test your application because Angular will always point to your Python app. | 1 | 0 | 0 | I am currently using an Angular 4 project generated by angular-CLI to create the project structure and I was able to serve it using ng serve and develop and see changes. Now I would like to move it to being hosted on my own backend, using Google App Engine and webapp2, and run it using dev_appserver.py app.yaml. Currently the only way I can get it to work is by doing an ng build and serving the files out of the dist folder. I would like to make it so I can easily make changes but not have to wait for it to rebuild each time. | Angular4 and WebApp2 on google App engine | 1.2 | 0 | 0 | 468
44,272,954 | 2017-05-30T22:34:00.000 | 1 | 0 | 1 | 0 | python-3.x,go,concurrency,python-asyncio,coroutine | 44,298,289 | 2 | false | 0 | 0 | I suppose you could think of it working that way underneath, sure. It's not really accurate, but, close enough.
But there is a big difference: in Go you can write straight line code, and all the I/O blocking is handled for you automatically. You can call Read, then Write, then Read, in simple straight line code. With Python asyncio, as I understand it, you need to queue up a function to handle the reads, rather than just calling Read. | 2 | 6 | 0 | Are goroutines roughly equivalent to python's asyncio tasks, with an additional feature that any CPU-bound task is routed to a ThreadPoolExecutor instead of being added to the event loop (of course, with the assumption that we use a python interpreter without GIL)?
Is there any substantial difference between the two approaches that I'm missing? Of course, apart from the efficiencies and code clarity that result from the concurrency being an integral part of Go. | Goroutines vs asyncio tasks + thread pool for CPU-bound calls | 0.099668 | 0 | 0 | 3,316 |
44,272,954 | 2017-05-30T22:34:00.000 | 12 | 0 | 1 | 0 | python-3.x,go,concurrency,python-asyncio,coroutine | 44,298,225 | 2 | false | 0 | 0 | I think I know part of the answer. I tried to summarize my understanding of the differences, in order of importance, between asyncio tasks and goroutines:
1) Unlike under asyncio, one rarely needs to worry that their goroutine will block for too long. OTOH, memory sharing across goroutines is akin to memory sharing across threads rather than asyncio tasks since goroutine execution order guarantees are much weaker (even if the hardware has only a single core).
asyncio will only switch context on explicit await, yield and certain event loop methods, while Go runtime may switch on far more subtle triggers (such as certain function calls). So asyncio is perfectly cooperative, while goroutines are only mostly cooperative (and the roadmap suggests they will become even less cooperative over time).
A really tight loop (such as with numeric computation) could still block the Go runtime (well, the thread it's running on). If it happens, it's going to have less of an impact than in Python - unless it occurs in multiple threads.
2) Goroutines have off-the-shelf support for parallel computation, which would require a more sophisticated approach under asyncio.
Go runtime can run threads in parallel (if multiple cores are available), and so it's somewhat similar to running multiple asyncio event loops in a thread pool under a GIL-less python runtime, with a language-aware load balancer in front.
3) Go runtime will automatically handle blocking syscalls in a separate thread; this needs to be done explicitly under asyncio (e.g., using run_in_executor).
That said, in terms of memory cost, goroutines are very much like asyncio tasks rather than threads. | 2 | 6 | 0 | Are goroutines roughly equivalent to python's asyncio tasks, with an additional feature that any CPU-bound task is routed to a ThreadPoolExecutor instead of being added to the event loop (of course, with the assumption that we use a python interpreter without GIL)?
Is there any substantial difference between the two approaches that I'm missing? Of course, apart from the efficiencies and code clarity that result from the concurrency being an integral part of Go. | Goroutines vs asyncio tasks + thread pool for CPU-bound calls | 1 | 0 | 0 | 3,316 |
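Point 3 of the answer above — that blocking syscalls need explicit handling under asyncio, e.g. via run_in_executor — can be sketched as follows (blocking_io stands in for any blocking or CPU-bound call):

```python
import asyncio
import time

def blocking_io(x):
    """Stands in for a blocking syscall or CPU-bound call."""
    time.sleep(0.01)
    return x * 2

async def main():
    loop = asyncio.get_running_loop()
    # Hand the blocking calls to the default ThreadPoolExecutor so the
    # event loop keeps servicing other tasks meanwhile; gather preserves order.
    return await asyncio.gather(
        *(loop.run_in_executor(None, blocking_io, n) for n in range(3))
    )
```

asyncio.run(main()) returns [0, 2, 4] without ever blocking the event loop; under Go, the runtime would do this hand-off for you automatically.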
44,273,555 | 2017-05-30T23:44:00.000 | 0 | 0 | 0 | 0 | python,pandas | 44,274,041 | 2 | false | 0 | 0 | Building on from piRSquared, a possible method of treating NaN values (if applicable to your problem) is to replace the NaN inputs with the mean of the column.
df = df.fillna(df.mean()) | 1 | 2 | 1 | I have a pandas dataframe with a variable, which, when I print it, shows up as mostly containing NaN. It is of dtype object. However, when I run the isnull function, it returns "FALSE" everywhere. I am wondering why the NaN values are not encoded as missing, and if there is any way of converting them to missing values that are treated properly.
Thank you. | Unable to handle NaN in pandas dataframe | -0.099668 | 0 | 0 | 1,285 |
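A common cause of the behaviour described in that question (isnull() returning False everywhere on an object-dtype column) is that the NaNs are actually the string "NaN" rather than real missing values. A hedged sketch of the fix (the column name is illustrative):

```python
import pandas as pd

df = pd.DataFrame({"col": ["1.5", "NaN", "2.0", "NaN"]})  # object dtype
print(df["col"].isnull().sum())  # 0 -- these "NaN"s are plain strings

# Coerce to numeric: anything unparseable becomes a real NaN
df["col"] = pd.to_numeric(df["col"], errors="coerce")
print(df["col"].isnull().sum())  # 2

# Now fillna behaves as expected, e.g. with the column mean
df["col"] = df["col"].fillna(df["col"].mean())
```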
44,274,141 | 2017-05-31T01:10:00.000 | 0 | 0 | 0 | 0 | python,twilio | 44,371,828 | 1 | false | 0 | 0 | Twilio developer evangelist here.
It seems that the latest Python package only implemented partial support for input="speech" and the related TwiML attributes which makes it unusable at the moment.
I am raising a bug internally and will get this fixed as soon as possible. I will notify you here when it is ready. | 1 | 0 | 0 | Need some help in Twilio. Twilio Gather verb supports input="speech" which recognizes speech as mentioned in their site. Does the Twilio python package support this speech recognition yet? Couldn't find any help documentation regarding this. | Twilio Speech Recognition support in twilio python package | 0 | 0 | 1 | 220 |
44,276,329 | 2017-05-31T05:24:00.000 | 3 | 0 | 0 | 0 | javascript,jquery,python,django-forms,django-templates | 44,276,395 | 1 | true | 1 | 0 | Yes, it's possible. Django gives an id in the format id_fieldname to each field you define in your Django form.
E.g. for a name field, the id in HTML will be id_name, and for an email_address field, the id in HTML will be id_email_address.
Now you can simply use getElementById() in your JS with these IDs for validation. | 1 | 0 | 0 | I have a Django form with name and email address as form fields in my forms.py file.
I am implementing that form in HTML Page with the help of {form.as_p}.
Now I have JavaScript inside that same HTML page with...
{% block javascript %}
<script>
</script>
{% endblock %}
Now How can I access both form variable inside this script tag. I want to validate data at client-side and than wants to send at server.
Will getElementById() method works? and If so what I need to write for access my form variables or any other good alternative.
Thanks | accessing django form variable in javascript which is inside html page | 1.2 | 0 | 0 | 2,842 |
44,276,962 | 2017-05-31T06:10:00.000 | 0 | 0 | 0 | 0 | python,opencv,disparity-mapping | 44,306,111 | 2 | false | 0 | 1 | Unlike c++, Python doesn't work well with pointers. So the arguments are
Filtered_disp = ximgproc_DisparityWLSFilter.filter(left_disp,left, None,right_disp)
Note that it's no longer a void function in Python!
I figured this out through trial and error though. | 1 | 2 | 1 | I get a ximgproc_DisparityWLSFilter from cv2.ximgproc.createDisparityWLSFilter(left_matcher),
but I cannot get ximgproc_DisparityWLSFilter.filter() to work.
The error I get is
OpenCV Error: Assertion failed (!disparity_map_right.empty() && (disparity_map_right.depth() == CV_16S) && (disparity_map_right.channels() == 1)) in cv::ximgproc::DisparityWLSFilterImpl::filter, file ......\opencv_contrib\modules\ximgproc\src\disparity_filters.cpp, line 262
In general, how do I figure out how to use this, when there isn't a single google result for "ximgproc_DisparityWLSFilter"? | What are the ximgproc_DisparityWLSFilter.filter() Arguments? | 0 | 0 | 0 | 2,604 |
44,282,569 | 2017-05-31T10:42:00.000 | 1 | 0 | 1 | 0 | python,visual-studio-code | 44,314,981 | 2 | false | 0 | 0 | I found the answer more or less randomly. So basically, you can't configure it globally, but you can create (or modify) a Python debugging environment. Once you do that, you can configure it and set stopOnEntry to false on that environment. After that, if you choose the right debugger, stopOnEntry will never be enabled (for that environment). | 1 | 1 | 0 | I'm using VS Code for my Python projects, but most of the time I don't need the debugger to stop the script on run. By default, when I hit F5, the script is stopped at the first line and I have to press F5 again to start it.
This can be changed by editing the project-related launch.json; though I'd like it to be that way by default (and also for non-project files, like test scripts for example).
Is there any way to do that? There's not parameter related to that feature that I can set in my settings.json.
Thanks | Disable stop on entry by default? | 0.099668 | 0 | 0 | 67 |
44,288,336 | 2017-05-31T15:05:00.000 | 0 | 0 | 0 | 0 | python,trello | 58,631,278 | 3 | false | 0 | 0 | You can try a POST to
https://api.trello.com/1/cards/cardID/actions/comments?text=yourComment&key=apiKey&token=trelloToken
where you must change the variables cardID, yourComment, apiKey and trelloToken to your own values | 1 | 0 | 0 | I am testing Trello API via Postman and I have a problem adding a comment to a card.
I'm submitting a PUT request to https://api.trello.com/1/cards/card id/actions/commentCard/comments?&key=my_key&token=my_token&text=comment, but I receive an error:
invalid value for idAction
Could someone point out what I'm doing wrong? | How to add a comment to Trello card via API calls? | 0 | 0 | 1 | 1,702 |
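A hedged Python sketch of the same call (the card id and credentials are placeholders). Note that creating a comment is a POST, not a PUT:

```python
def comment_request(card_id, comment, api_key, token):
    """Build (url, params) for Trello's add-comment endpoint."""
    url = "https://api.trello.com/1/cards/{}/actions/comments".format(card_id)
    params = {"text": comment, "key": api_key, "token": token}
    return url, params

def add_comment(card_id, comment, api_key, token):
    import requests  # third-party; deferred import so the helper stays stdlib-only
    url, params = comment_request(card_id, comment, api_key, token)
    # requests url-encodes the params, so comments with spaces work too.
    return requests.post(url, params=params)
```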
44,290,736 | 2017-05-31T17:01:00.000 | 5 | 0 | 0 | 0 | python,machine-learning | 44,968,506 | 8 | false | 0 | 0 | The question is really a vague one.
Still, since you mentioned the machine-learning tag, I'll take it as a machine-learning problem. In that case, there is no specific model or algorithm available to decide which algorithm/function best suits your data.
It's a trial-and-error process to decide which model is best for your data: you write a wrapper program that tests your data against every candidate model and, based on their accuracy scores, decides which one fits best. | 1 | 6 | 1 | Is there functionality in Machine Learning (using Python) that I could use to feed it a group of inputs, tell it what the end product should be, then have the code figure out what function it should use in order to reach the end product?
Thanks. | Python Machine Learning Functions | 0.124353 | 0 | 0 | 1,870 |
44,290,736 | 2017-05-31T17:01:00.000 | 1 | 0 | 0 | 0 | python,machine-learning | 49,795,930 | 8 | false | 0 | 0 | I don't think this is a perfect place to ask this kind of questions. There are some other websites where you can ask this kind of questions.
For learning Machine Learning (ML), do a basic ML course and follow blogs. | 4 | 6 | 1 | Is there functionality in Machine Learning (using Python) that I could use to feed it a group of inputs, tell it what the end product should be, then have the code figure out what function it should use in order to reach the end product?
Thanks. | Python Machine Learning Functions | 0.024995 | 0 | 0 | 1,870 |
44,290,736 | 2017-05-31T17:01:00.000 | 1 | 0 | 0 | 0 | python,machine-learning | 52,089,251 | 8 | false | 0 | 0 | If you have just started learning ML, then you should first get familiar with the different scientific libraries that Python provides. Most importantly, you have to start with a basic understanding of machine-learning modelling, either from the various online materials available or by doing an ML course.
FYI, there is no such functionality available in Python that tells you which model is perfect for a particular set of data. It solely depends on how analytical you are in choosing a good model for your dataset by checking different statistical/model parameters. | 1 | 6 | 1 | Is there functionality in Machine Learning (using Python) that I could use to feed it a group of inputs, tell it what the end product should be, then have the code figure out what function it should use in order to reach the end product?
Thanks. | Python Machine Learning Functions | 0.024995 | 0 | 0 | 1,870 |
44,290,736 | 2017-05-31T17:01:00.000 | 1 | 0 | 0 | 0 | python,machine-learning | 56,123,050 | 8 | false | 0 | 0 | From your question, I gather that you have a result and are trying to find the optimal algorithm to get there. Unfortunately, as far as I'm aware, you have to compare the different algorithms themselves to understand which one has better performance.
However, if you only wish to obtain a suitable algorithm for your use-case but are unsure what broad classification to start looking for an algorithm in, then I'd suggest you read up on the different types of machine learning (classification/regression) and then establish relations between your use case and how each algorithm performs its tasks. With that as a foundation, you can fine-tune your search. | 1 | 6 | 1 | Is there functionality in Machine Learning (using Python) that I could use to feed it a group of inputs, tell it what the end product should be, then have the code figure out what function it should use in order to reach the end product?
Thanks. | Python Machine Learning Functions | 0.024995 | 0 | 0 | 1,870 |
44,291,111 | 2017-05-31T17:22:00.000 | 2 | 0 | 0 | 0 | python,python-3.x,ubuntu,scrapy | 44,655,448 | 3 | false | 1 | 0 | If you run your spider with scrapy crawl:
If you want to keep the logs: scrapy crawl my_spider > /path/to/logfile.txt 2>&1 &
If you want to dismiss the logs: scrapy crawl my_spider > /dev/null 2>&1 & | 2 | 5 | 0 | I managed to run a scrapy program in Ubuntu terminal. However, I cannot use Ctrl+Z and bg command to let it run in the background. It will close the spider connection everytime I press Ctrl + Z.
Is there any workaround or ways to solve the issue? | Run scrapy in background (Ubuntu) | 0.132549 | 0 | 0 | 2,613 |
44,291,111 | 2017-05-31T17:22:00.000 | 0 | 0 | 0 | 0 | python,python-3.x,ubuntu,scrapy | 44,650,889 | 3 | false | 1 | 0 | You can use screen to run one or more tasks in the background | 2 | 5 | 0 | I managed to run a scrapy program in Ubuntu terminal. However, I cannot use Ctrl+Z and the bg command to let it run in the background. It will close the spider connection every time I press Ctrl + Z.
Is there any workaround or ways to solve the issue? | Run scrapy in background (Ubuntu) | 0 | 0 | 0 | 2,613 |
44,292,280 | 2017-05-31T18:31:00.000 | 0 | 0 | 1 | 0 | sql,database,python-3.x,datetime,psycopg2 | 44,292,411 | 3 | false | 0 | 0 | I imagine in order to use string stripping methods, you must first convert it to a string, then strip it, then convert back to whatever format you want to use. Cheers | 1 | 2 | 0 | I'm calling a database, and returning a datetime.datetime object as the 3rd item in a tuple.
The SQL statement is: SELECT name,name_text,time_received FROM {}.{} WHERE DATE(time_received)=CURRENT_DATE LIMIT 1
If I do print(mytuple[2]) it returns something like: 2017-05-31 17:21:19+00:00 but always with that "+00:00" at the end for every value. How do I remove that trailing "+00:00" from the datetime.datetime object?
I've tried different string-stripping methods like print(mytuple[2][:-6]) but it gives me an error saying that datetime.datetime object is not subscriptable. Thanks for any help! | How to remove "+00:00" from end of datetime.datetime object. (not remove time) | 0 | 1 | 0 | 4,484 |
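A minimal sketch of both routes for the question above: dropping the timezone info directly on the (timezone-aware) datetime object, or formatting it as a string without the offset:

```python
from datetime import datetime, timezone

dt = datetime(2017, 5, 31, 17, 21, 19, tzinfo=timezone.utc)
print(dt)                                 # 2017-05-31 17:21:19+00:00

# Option 1: make the object naive, dropping the +00:00 entirely
naive = dt.replace(tzinfo=None)
print(naive)                              # 2017-05-31 17:21:19

# Option 2: keep the object, format it without the offset
print(dt.strftime("%Y-%m-%d %H:%M:%S"))  # 2017-05-31 17:21:19
```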
44,293,170 | 2017-05-31T19:24:00.000 | 0 | 0 | 0 | 0 | python,django | 44,293,598 | 1 | false | 1 | 0 | I deleted the CommunityRhyme model and added a "is_community" Boolean field to Rhyme. | 1 | 0 | 0 | I have 3 models. Rhyme, CommunityRhyme, and AssortedRhymes.
Rhyme(s) are collected by a bot.
CommunityRhyme(s) are rhymes added by the community.
AssortedRhymes contains a ManyToManyField that references only the Rhyme model.
How can I make it so that Rhyme and CommunityRhyme are of the same "type" so that the AssortedRhymes ManyToManyField that references the Rhyme model can also reference (add) CommunityRhymes?
Thank you in advance | How can I make two models the same "type"? | 0 | 0 | 0 | 33 |
44,293,241 | 2017-05-31T19:30:00.000 | 1 | 0 | 0 | 0 | python,firebase | 62,375,312 | 10 | false | 1 | 0 | In my case, I faced a similar error, but my issue was that I had initialized the app twice in my Python file. That crashed my whole script and returned a
ValueError: The default Firebase app already exists. This means you called initialize_app() more than once without providing an app name as the second argument. In most cases you only need to call initialize_app() once. But if you do want to initialize multiple apps, pass a second argument to initialize_app() to give each app a unique name.
I solved this by removing one of my Firebase app initializations. I hope this will help someone! | 1 | 30 | 0 | I get the following error:
ValueError: The default Firebase app already exists. This means you called initialize_app() more than once without providing an app name as the second argument. In most cases you only need to call initialize_app() once. But if you do want to initialize multiple apps, pass a second argument to initialize_app() to give each app a unique name.
How Can I check if the default firebase app is already initialized or not in python? | check if a Firebase App is already initialized in python | 0.019997 | 0 | 0 | 23,579 |
44,293,780 | 2017-05-31T20:05:00.000 | 0 | 1 | 0 | 0 | python,ssl,pyopenssl | 44,293,925 | 1 | false | 0 | 0 | In order to use a client certificate, you must have a private key. You will either sign some quantity with your private key or engage in encrypted key agreement in order to mutually agree on a key. Either way, the private key is required.
It's possible though that rather than using client certificates they use something like username/password and what they have given you is not a certificate for you to use, but the certificate you should expect from them. | 1 | 0 | 0 | i'm using a python client that makes request to a push server, and has the option to use certificates with the lib OpenSSL for python(pyopenssl), but this client ask me for the private key(can be on the same file or different path. already checked with another server that i have both cert and key). but for the "real" server they are using self-signed cert and they don't have any private key in the same file or another one as far as they told me, they just share with me this cert file, is there any way to use this kind of cert with OpeenSSL in python?, thanks | OpenSSL ask for private key | 0 | 0 | 1 | 95 |
44,295,013 | 2017-05-31T21:31:00.000 | 1 | 0 | 1 | 0 | python,pip,anaconda,pulp | 44,297,399 | 2 | false | 0 | 0 | You have installed an standalone Python, and other Python with Anaconda.
First, if you installed a package by the pip, it is O.K. that it is not displayed in installed packages in Anaconda environment(s) (as you installed it out of Anaconda) and at the same time it is fully functional.
Second, you have to install PuLP again for your standalone Python (with the same pip install command - this time not in conda command prompt, but directly in you OS console / terminal / DOS-prompt (depends of your OS). | 2 | 2 | 0 | Python newbie here. I am trying to install PuLP. So I do a pip install Pulp in conda command prompt and looks like pulp gets installed. I run a conda list and I can see that PuLP is there in the list (please see image).
However, when I try importing PuLP in IDLE, I get an error and I also can't see PuLP in the list of environments in Anaconda Navigator (not sure if I am supposed to).
If Pulp is in the list, why can't I import it or see it in Navigator. Any inputs will be appreciated. | Can't install PuLP in Anaconda | 0.099668 | 0 | 0 | 9,530 |
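A quick way to diagnose the two-Pythons situation described in the answer above is to print the interpreter path from inside IDLE; if it differs from the Python that pip installed into, the import will fail there. A minimal stdlib sketch (the example paths in the comment are illustrative):

```python
import sys

# Run this inside IDLE (and inside the conda prompt's python) and compare:
# two different paths mean two different Python installations.
print(sys.executable)   # e.g. C:\Python36\pythonw.exe vs C:\Anaconda3\python.exe
print(sys.version)
```

You can then install into the exact interpreter IDLE uses with <that-path> -m pip install pulp.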
44,295,013 | 2017-05-31T21:31:00.000 | 2 | 0 | 1 | 0 | python,pip,anaconda,pulp | 52,325,382 | 2 | false | 0 | 0 | You can try:
conda install -c conda-forge pulp | 2 | 2 | 0 | Python newbie here. I am trying to install PuLP. So I do a pip install Pulp in conda command prompt and looks like pulp gets installed. I run a conda list and I can see that PuLP is there in the list (please see image).
However, when I try importing PuLP in IDLE, I get an error and I also can't see PuLP in the list of environments in Anaconda Navigator (not sure if I am supposed to).
If Pulp is in the list, why can't I import it or see it in Navigator. Any inputs will be appreciated. | Can't install PuLP in Anaconda | 0.197375 | 0 | 0 | 9,530 |
44,295,661 | 2017-05-31T22:30:00.000 | 1 | 0 | 0 | 0 | python,sql,postgresql | 44,295,921 | 1 | true | 0 | 0 | It's difficult to say without knowing what the database load and latency requirements are. I would typically avoid using the same table for source and output: I would worry about the cost and contention of simultaneously reading from the table and then writing back to the table but this isn't going to be a problem for low load scenarios. (If you're only running a single process that is doing both the reading and writing then this isn't a problem at all.)
Separating Input/Output may also make the system more understandable, especially if something unexpected goes wrong. | 1 | 0 | 0 | There is a data model(sql) with a scenario where it uses one input table and it is fed into the prediction model(python) and some output variables are generated onto another table and final join is done between input table and output table to get the complete table(sql). Note: there could be chances that output table doesn't predict values for every primary key in the input table.
Is it good approach to create one table that is fed as input to prediction model and the same table is updated with the output variables i.e. we pass those columns as complete null during input and they are updated as the values are predicted? If not, then what are the disadvantages for the same. | Best way to use input and output tables | 1.2 | 1 | 0 | 123 |
44,295,930 | 2017-05-31T22:58:00.000 | 0 | 0 | 1 | 1 | python | 44,295,964 | 2 | false | 0 | 0 | In command line application documentation, square brackets usually denote optional arguments (that goes beyond Python). | 2 | 0 | 0 | Greeting all!
I'm learning Google's Python class on my own. I have one quick question about the Baby Names Exercise.
In the py file provided, there are lines like this:
if not args:
print 'usage: [--summaryfile] file [file ...]'
sys.exit(1)
I can understand it wants to show you what to type in cmd when using the code. However the format of "[--summaryfile] file [file ...]" confuses me. What does the square brackets in "[--summaryfile]" and "[file ...]" mean? Lists? Or something else? What should I type in the Windows cmd when running the code? Some examples would be very helpful.
Thank you very much in advance! | Python usage: [--summaryfile] file [file ...] | 0 | 0 | 0 | 241 |
44,295,930 | 2017-05-31T22:58:00.000 | 1 | 0 | 1 | 1 | python | 44,295,966 | 2 | true | 0 | 0 | It means that those parts are optional. You can supply or omit the option name --summaryfile. Then you supply a list of as many files as you want.
This is ancient command description syntax, going back to the days of IBM mainframe operating systems in the 1960s. Most documentation assumes that you've seen (or can find) a description somewhere, and most supply it in the front of the global reference manual ... wherever that might be. | 2 | 0 | 0 | Greeting all!
I'm learning Google's Python class on my own. I have one quick question about the Baby Names Exercise.
In the py file provided, there are lines like this:
if not args:
print 'usage: [--summaryfile] file [file ...]'
sys.exit(1)
I can understand it wants to show you what to type in cmd when using the code. However the format of "[--summaryfile] file [file ...]" confuses me. What does the square brackets in "[--summaryfile]" and "[file ...]" mean? Lists? Or something else? What should I type in the Windows cmd when running the code? Some examples would be very helpful.
Thank you very much in advance! | Python usage: [--summaryfile] file [file ...] | 1.2 | 0 | 0 | 241 |
44,297,372 | 2017-06-01T02:06:00.000 | 1 | 0 | 0 | 0 | python,amazon-web-services,flask,aws-lambda,aws-api-gateway | 44,307,224 | 2 | false | 1 | 0 | The recommended way to host a server with Lambda and without EC2 is:
Host your front-end static files on S3 (html, css, js).
Configure your S3 bucket to be a static web server.
Configure your lambdas for dynamic processing and expose them to the outside with API Gateway.
Your JS calls the lambdas through API Gateway, so don't forget to activate CORS (on the S3 bucket AND on API Gateway).
Configure Route 53 to link it with your bucket (your Route 53 record must have the same name as your bucket) so you can use your own DNS name, not the generic S3 web-server URL. | 1 | 2 | 0 | I'm new to AWS in general, and would like to learn how to deploy a dynamic website with AWS. I'm coming from a self-hosted perspective (digitalocean + flask app), so I'm confused on what exactly the process would be for AWS.
With self-hosting solution, the process is something like:
User makes a request to my server (nginx)
nginx then directs the request to my flask app
flask app handles the specific route (ie, GET /users)
flask does db operations, then builds an html page using jinja2 with the results from the db operation
returns html to user and user's browser renders the page.
With AWS, I understand the following:
User makes a request to Amazon's API Gateway (ie, GET /users)
API Gateway can call a AWS Lambda function
AWS Lambda function does db functions or whatever, returns some data
API Gateway returns the result as JSON (assuming I set the content-type to JSON)
The confusing part is how do I generate the webpage for the user, not just return the JSON data? I see two options:
1) Somehow get AWS Lambda to use Jinja2 module, and use it to build the HTML pages after querying the db for data. API Gateway will just return the finished HTML text. Downside is this will no longer be a pure api, and so I lose flexibility.
2) Deploy Flask app onto Amazon Beanstalk. Flask handles application code, like session handling, routes, HTML template generation, and makes calls to Amazon's API Gateway to get any necessary data for the page.
I think (2) is the 'correct' way to do things; I get the benefit of scaling the flask app with Beanstalk, and I get the flexibility of calling an API with the API Gateway.
Am I missing anything? Did I misunderstand something in (2) for serving webpages? Is there another way to host a dynamic website without using a web framework like Flask through AWS, that I don't know about? | Serving dynamic webpages using aws? | 0.099668 | 0 | 0 | 818 |
44,297,372 | 2017-06-01T02:06:00.000 | 1 | 0 | 0 | 0 | python,amazon-web-services,flask,aws-lambda,aws-api-gateway | 44,312,884 | 2 | false | 1 | 0 | You definitely have to weigh the pros and cons of serving the dynamic website via API GW and Lambda.
Pros:
Likely cheaper at low volume
Don't have to worry about scaling
Lambda Functions are easier to manage than even beanstalk.
Cons:
There will be some latency overhead
In some ways less flexible, although Python is well supported and you should be able to import the jinja2 module.
Both of your proposed solutions would work well, it kind of depends on how you view the pros and cons. | 2 | 2 | 0 | I'm new to AWS in general, and would like to learn how to deploy a dynamic website with AWS. I'm coming from a self-hosted perspective (digitalocean + flask app), so I'm confused on what exactly the process would be for AWS.
With self-hosting solution, the process is something like:
User makes a request to my server (nginx)
nginx then directs the request to my flask app
flask app handles the specific route (ie, GET /users)
flask does db operations, then builds an html page using jinja2 with the results from the db operation
returns html to user and user's browser renders the page.
With AWS, I understand the following:
User makes a request to Amazon's API Gateway (ie, GET /users)
API Gateway can call a AWS Lambda function
AWS Lambda function does db functions or whatever, returns some data
API Gateway returns the result as JSON (assuming I set the content-type to JSON)
The confusing part is how do I generate the webpage for the user, not just return the JSON data? I see two options:
1) Somehow get AWS Lambda to use Jinja2 module, and use it to build the HTML pages after querying the db for data. API Gateway will just return the finished HTML text. Downside is this will no longer be a pure api, and so I lose flexibility.
2) Deploy Flask app onto Amazon Beanstalk. Flask handles application code, like session handling, routes, HTML template generation, and makes calls to Amazon's API Gateway to get any necessary data for the page.
I think (2) is the 'correct' way to do things; I get the benefit of scaling the flask app with Beanstalk, and I get the flexibility of calling an API with the API Gateway.
Am I missing anything? Did I misunderstand something in (2) for serving webpages? Is there another way to host a dynamic website without using a web framework like Flask through AWS, that I don't know about? | Serving dynamic webpages using aws? | 0.099668 | 0 | 0 | 818 |
44,298,778 | 2017-06-01T04:52:00.000 | 5 | 0 | 0 | 0 | python,browser | 44,317,597 | 1 | true | 0 | 0 | Lower-case v should work (unless you use the QtWebEngine backend). Otherwise, Ctrl-A in insert mode does.
However, Stackoverflow is for programming questions - this isn't about programming. ;-) | 1 | 2 | 0 | I need to map to select the document like in a normal browser. I tried using ggVG ( vim equivalent) but 'V' didn't work. Does anyone know how to do it? | How to select all the text in a page in QuteBrowser? | 1.2 | 0 | 0 | 958 |
44,300,678 | 2017-06-01T07:03:00.000 | 0 | 0 | 1 | 0 | python,windows | 44,622,795 | 2 | true | 0 | 0 | with windows 7 sp1 python 3.5.2 got successfully installed. Pls also chk if windows visual c++ 2015 redistributable is properly installed.. | 2 | 0 | 0 | While I am trying to install Python 3.6.1 it is showing Setup failed.. windows 7 service pack 1 and all applicable updates are required to install Python 3.6.1.
I am using windows 7 professional OS.
Waiting for a solution !! | windows 7 service pack 1 and all applicable updates are required to install Python 3.6.1 | 1.2 | 0 | 0 | 14,012 |
44,300,678 | 2017-06-01T07:03:00.000 | 2 | 0 | 1 | 0 | python,windows | 60,004,698 | 2 | false | 0 | 0 | Sometimes this type of error shows up while installing a certain application
(like Python) on the Windows 7 platform.
As a solution, you have to install Service Pack 1 on your OS. You can also set the service-pack level for Windows manually, as described below:
Open regedit (press Windows+R, type regedit, then press Enter)
Go to HKEY_LOCAL_MACHINE -> SYSTEM -> Control -> Windows.
Open CSDVersion
Set the value you want (like 100 for Service Pack 1, 200 for Service Pack 2)
and then click 'Ok'.
Restart your PC
Now you can install application | 2 | 0 | 0 | While I am trying to install Python 3.6.1 it is showing Setup failed.. windows 7 service pack 1 and all applicable updates are required to install Python 3.6.1.
I am using windows 7 professional OS.
Waiting for a solution !! | windows 7 service pack 1 and all applicable updates are required to install Python 3.6.1 | 0.197375 | 0 | 0 | 14,012 |
44,302,025 | 2017-06-01T08:12:00.000 | 0 | 0 | 0 | 0 | python,linux,python-2.7,web2py | 44,313,012 | 1 | false | 1 | 0 | You have two options. First, you can put the two sets of items in two different files in the /models folder. Model files are executed in alphabetical order, so you can put together the final response.menu object in the second of the two files (any variables defined in the first model file will be available globally in the second file, without any need to import).
Alternatively, you can put one of the sub-menus in a module (in the /modules folder) and simply import it where needed. | 1 | 0 | 0 | I have a menu.py which contains all my menus.
I would like to use the menu.py file and in case someone from another team needs to add additional sub-menus they can add them on their own file, then import menu.py as well.
for example: i have 2 sub-menus under /models/menu.py:
system_sub_menu = [
... ....
... ....
]
file_sub_menu = [
... ...
... ...
]
Can i separate them into 2 files?
Thanks
Yaron | web2py - is it possible to import content of menu.py from another file? | 0 | 0 | 1 | 72 |
44,305,806 | 2017-06-01T11:03:00.000 | 3 | 0 | 1 | 1 | python,windows,python-2.7,windows-10,python-idle | 44,307,874 | 1 | true | 0 | 0 | Okay, I've installed Process Monitor and set up filters to monitor the Python folder. Turns out Avast has been deleting pythonw.exe for a couple of months now.
It happened sparsely in the beginning, but the last 12 deletes were from the past few days.
FIXED | 1 | 2 | 0 | When I try to open .py files with IDLE (right click and "edit with IDLE"), Windows will ask whether to use python.exe to open this file. If I choose python, IDLE won't start properly.
If I use "open with" and navigate to Python27\Lib\idlelib\idle.bat I'll get an error:
Windows cannot find 'C:\Python27\Lib\idlelib....\pythonw.exe
If I start IDLE from start menu, Windows will open the installer and say "Please wait while Windows configures Python 2.7.13 (64-bit)" and if I have the Python installation file available, it will say "Error writing to file C:\Python27\pythonw.exe".
Starting IDLE from CMD (eg. >>python.exe C:\Python27\Lib\idlelib\idle.py) works normally.
I don't see pythonw.exe under C:\Python27\. Should it be there and how would it delete itself?
I'm using Windows 10 and this issue started a while back. I did a complete reinstall of python and packages and the issue was fixed for a short while, but it returned today. | Python IDLE doesn't work because python.exe is missing | 1.2 | 0 | 0 | 1,122 |
44,306,468 | 2017-06-01T11:37:00.000 | 0 | 0 | 1 | 0 | python,pip | 67,401,053 | 3 | false | 0 | 0 | Another option would be to switch to that user and run it with
pip install --user package
There is one issue (which you probably ran into): you can't switch to some users because they have a nologin shell. But you can always switch to such a user by specifying the shell you want, like this:
sudo su -s /bin/bash www-data | 1 | 4 | 0 | I wish to install Numpy for the www-data user, but I cannot log into this user using login. How can I make www-data make use of the Numpy module?
To clarify. Numpy is available for root, and for my default user. | Can I use pip install to install a module for another users? | 0 | 0 | 0 | 5,849 |
44,306,765 | 2017-06-01T11:51:00.000 | 1 | 0 | 0 | 0 | python,machine-learning,tensorflow,gpu | 44,316,105 | 1 | true | 0 | 0 | TensorFlow sessions allocate ~all GPU memory on startup, so they can bypass the cuda allocator.
Do not run more than one cuda-using library in the same process or weird things (like this stream executor error) will happen. | 1 | 4 | 1 | I have tensorflow's gpu version installed, as soon as I create a session, it shows me this log:
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0
with properties: name: GeForce GTX TITAN Black major: 3 minor: 5
memoryClockRate (GHz) 0.98 pciBusID 0000:01:00.0 Total memory: 5.94GiB
Free memory: 5.31GiB I
tensorflow/core/common_runtime/gpu/gpu_device.cc:906] DMA: 0 I
tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 0: Y I
tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating
TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX TITAN
Black, pci bus id: 0000:01:00.0)
And when I check my GPU memory usage, around 90% of it gets consumed.
Tensorflow documentation does not say anything about this. Does it take control of the gpu ? Why does it consume most of the memory ? | What does a tensorflow session exactly do? | 1.2 | 0 | 0 | 526 |
44,308,060 | 2017-06-01T12:50:00.000 | -1 | 0 | 1 | 0 | python,ocr,tesseract,python-tesseract | 50,696,761 | 2 | false | 0 | 1 | Usually pytesseract is effective only for high-resolution images. Certain morphological operations such as dilation, erosion, and Otsu binarization can help improve pytesseract's performance. | 1 | 2 | 0 | I need to extract text from a screenshot taken from a game window in Python. So far, I have been using tesseract (pytesseract), but while the recognition itself is great, the performance is not optimal.
As I have read that tesseract is best used for high resolution images, I'm wondering if there is a better (faster) way? | Is there an alternative to (py)tesseract for extracting text from game screenshots? | -0.099668 | 0 | 0 | 2,713 |
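The preprocessing steps named in the answer (dilation, erosion, Otsu binarization) are normally done with OpenCV before handing the image to tesseract. As a dependency-free illustration of the Otsu step only, here is a minimal sketch that picks a binarization threshold by maximizing between-class variance (a real pipeline would use `cv2.threshold` with `cv2.THRESH_OTSU` instead):

```python
def otsu_threshold(pixels):
    """Pick the 0-255 threshold that maximizes between-class variance."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(256))
    sum_bg, weight_bg, best_var, best_t = 0.0, 0, -1.0, 0
    for t in range(256):
        weight_bg += hist[t]
        if weight_bg == 0:
            continue
        weight_fg = total - weight_bg
        if weight_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / weight_bg
        mean_fg = (sum_all - sum_bg) / weight_fg
        var_between = weight_bg * weight_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def binarize(pixels, threshold):
    return [255 if p > threshold else 0 for p in pixels]
```

For a bimodal image (e.g. dark text on a light background) the threshold lands between the two modes, so text pixels separate cleanly before OCR.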
44,308,195 | 2017-06-01T12:58:00.000 | 7 | 0 | 1 | 0 | python,matplotlib | 44,308,255 | 1 | true | 0 | 0 | It is an abbreviation indicating a "super" title. It is a title which appears at the top of the figure, whereas a normal title only appears above a particular axes. If you only have one axes object, then there is unlikely to be an appreciable difference, but the difference matters when you have multiple subplots on the same figure and you would like a title at the top of the figure, not on each of the axes objects. | 1 | 5 | 1 | I understand that matplotlib.figure.suptitle() adds a title to a figure.
But what does the "sup" stand for? | matplotlib.figure.suptitle(), what does 'sup' stand for? | 1.2 | 0 | 0 | 370 |
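A minimal illustration of the difference the answer describes (requires matplotlib; not run here):

```python
import matplotlib.pyplot as plt

fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.set_title("axes title (left)")    # title above one subplot only
ax2.set_title("axes title (right)")
fig.suptitle("super title for the whole figure")  # title above everything
plt.show()
```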
44,313,288 | 2017-06-01T17:00:00.000 | 1 | 0 | 0 | 0 | python,django,django-parler,django-shop | 50,614,495 | 2 | false | 1 | 0 | The simplest way to localize prices in django-SHOP is to use the MoneyInXXX class. This class can be generated for each currency using the MoneyMaker factory.
Whenever an amount of a Money class is formatted, it is localized properly. | 1 | 1 | 0 | I know an easy way: make a few different fields for the needed currencies, but that's not only ugly, the currencies will also be hardcoded. It seems to me it would be more elegant through django-parler, but I do not quite understand how to do it. | What is the right way to localize the price in django-SHOP? | 0.099668 | 0 | 0 | 210
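django-SHOP's MoneyMaker classes handle this for you; to illustrate what "localized when formatted" means, here is a tiny self-contained sketch with a hypothetical two-entry format table (real code would rely on the framework or on locale data, not a hand-written table):

```python
from decimal import Decimal

# (symbol, decimal separator, grouping separator, symbol goes in front?)
FORMATS = {
    "en": ("$", ".", ",", True),
    "de": ("€", ",", ".", False),
}

def format_price(amount, lang):
    symbol, dec_sep, grp_sep, prefix = FORMATS[lang]
    int_part, frac = "{:.2f}".format(amount).split(".")
    grouped = ""
    while len(int_part) > 3:          # insert the grouping separator every 3 digits
        grouped = grp_sep + int_part[-3:] + grouped
        int_part = int_part[:-3]
    body = int_part + grouped + dec_sep + frac
    return symbol + body if prefix else body + " " + symbol

print(format_price(Decimal("1234.5"), "en"))  # $1,234.50
print(format_price(Decimal("1234.5"), "de"))  # 1.234,50 €
```

The point is that the stored amount stays a plain `Decimal`; localization happens only at formatting time, exactly as the answer describes for Money objects.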
44,313,699 | 2017-06-01T17:26:00.000 | 0 | 0 | 1 | 0 | python,windows,date | 44,313,753 | 4 | false | 0 | 0 | See the Python os package for basic system commands, including directory listings with options. You'll be able to extract the file date. See the Python datetime package for date manipulation.
Also, check the available Windows commands on your version: most of them have search functions with a date parameter; you could simply have an OS system command return the needed file names. | 1 | 1 | 0 | I am relatively new to Python and I'm trying to make a script that finds files (photos) that have been created between two dates and puts them into a folder.
For that, I need to get the creation date of the files somehow (I'm on Windows).
I already have everything coded, but I just need to get the date of each picture. It would also be interesting to see in which form the date is returned; ideally something like m/d/y or d/m/y (d=day, m=month, y=year).
Thank you all in advance! I am new to this forum | Python: finding files (png) with a creation date between two set dates | 0 | 0 | 0 | 1,222 |
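Putting the two answers together, here is a sketch of the date filter using only the standard library. Note that `st_ctime` is the creation time on Windows (which is what the asker needs) but the metadata-change time on Unix:

```python
import os
from datetime import datetime

def files_created_between(folder, start, end, ext=".png"):
    """Return paths of files in `folder` whose creation date falls in [start, end]."""
    hits = []
    for name in os.listdir(folder):
        path = os.path.join(folder, name)
        if not (os.path.isfile(path) and name.lower().endswith(ext)):
            continue
        created = datetime.fromtimestamp(os.stat(path).st_ctime)
        if start <= created <= end:
            hits.append(path)
    return hits
```

The returned `datetime` can then be rendered in whatever form is wanted, e.g. `created.strftime("%d/%m/%Y")` for the d/m/y layout mentioned in the question.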
44,317,175 | 2017-06-01T21:14:00.000 | 3 | 0 | 1 | 1 | windows,python-3.x,cmd | 44,373,756 | 1 | true | 0 | 0 | I had to install PyQt5. It is a requirement for Spyder on Windows and I was missing it. After I installed it, I could launch spyder3 from cmd.exe | 1 | 1 | 0 | I have a Windows 7 64-bit machine with Python 3.6.1 (32-bit) installed on it. I wanted to try Spyder as an IDE for Python. I don't have Anaconda or anything like that, so I installed Spyder from the command line (cmd.exe); it installed successfully and the prompt returned.
I think it is installed because
I can see spyder3.exe under C:\Users\UserName\AppData\Local\Programs\Python\Python36-32\Scripts
When I enter spyder3 from cmd.exe, it doesn't throw any error, and a rotating circle appears, which indicates something is processing. But nothing is launched.
Although nothing gets launched after running spyder3 from cmd.exe except the rotating circle for a couple of seconds, I see a spyder.lock folder under C:\Users\UserName\.spyder-py3
When I delete the spyder.lock folder under C:\Users\UserName\.spyder-py3 and run spyder3 again in cmd.exe, the folder is created again.
Question: How can I make spyder launch? Did I do something wrong while installing spyder or am I trying to launch it with an incorrect method? | Issue in launching spyder that was installed using pip install on Python 3.6.1 | 1.2 | 0 | 0 | 983 |
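The silent-failure pattern here (the command "runs" but no window ever appears) is typical of a missing GUI dependency, which is what the accepted answer found. A quick way to check whether the needed packages are even importable, before digging further, is `importlib`; the dependency list below is illustrative:

```python
from importlib.util import find_spec

def missing_deps(names):
    """Return the subset of module names that cannot be imported."""
    return [n for n in names if find_spec(n) is None]

# e.g. Spyder on Windows needs PyQt5 among others
print(missing_deps(["PyQt5", "spyder"]))  # a non-empty list means: install these first
```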
44,318,030 | 2017-06-01T22:24:00.000 | 0 | 1 | 0 | 0 | python,raspberry-pi,led | 44,318,102 | 1 | false | 0 | 0 | Keeping your code modular is indeed a good practice! If your code is not object-oriented, the best way is to create another Python file (let's call it util.py) in the same directory as your "main". You can simply include util.py with the following command at the beginning of your main code:
import util
And then, when you want to use a function that you've defined in your util.py file, just use:
util.myFunction(param1, param2,...) | 1 | 0 | 0 | I don't have a lot of experience coding so I'm sorry if this has been answered before; I couldn't find anything that helped.
I just completed a project on a Raspberry Pi that runs some RGB LED strips via PWM. I have a program that runs the lights and works fine with a few different modes (rainbow shifting, strobe, solid color), but with each new mode I add, the program gets longer and more convoluted. I would like to have each separate mode be its own script that gets started or stopped by a sort of master script. That way I could easily add a new mode by simply writing a separate program and adding it to the list in the master script, instead of mucking around inside a giant program with everything in it and hoping I don't break something. I guess what I want is a simple way to start a Python script with some specific setting (determined by variables passed from the master script) and be able to kill that script when the master script receives the command to change modes. | How to start and stop scripts using another script? | 0 | 0 | 0 | 576
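A runnable end-to-end sketch of the answer's suggestion, where a throwaway directory stands in for the project folder and `greet()` is a placeholder for one of the LED-mode functions:

```python
import os
import sys
import tempfile

# stand-in for the project folder that holds main.py and util.py
project = tempfile.mkdtemp()
with open(os.path.join(project, "util.py"), "w") as f:
    f.write("def greet(name):\n    return 'hello ' + name\n")

# when main.py lives in the same folder as util.py this step is implicit
sys.path.insert(0, project)
import util

print(util.greet("pi"))  # hello pi
```

In the LED project, each mode (rainbow, strobe, solid) could be one function in util.py, and the master script just calls `util.rainbow()`, `util.strobe()`, etc.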
44,318,239 | 2017-06-01T22:47:00.000 | 0 | 1 | 1 | 0 | python-3.x,python-module | 44,318,361 | 1 | false | 0 | 0 | I'm afraid it is impossible to import single Python files using their paths.
In Python you have to deal with modules. The cleanest solution would be to create a module from your outside-folder script and put it in your system modules path. | 1 | 1 | 0 | I know that I can import a Python module by using its file path, but is there a way to import a file relative to the current file's position?
For example, it seems like I can "import file" only if the script is in the same folder as the one I am working with. How can I import the file if it's in a folder outside of mine, or one lower in the hierarchy, without giving the full file path? I would like it so that I can move the project folder around and still have everything work. | Importing modules in the same project folder | 0 | 0 | 0 | 446
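For completeness: despite the answer above, the standard library can import a single file by its path, which also covers the "folder outside of mine" case. A minimal sketch:

```python
import importlib.util

def load_module_from_path(name, path):
    """Import a .py file from an arbitrary filesystem path."""
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module
```

To stay relocatable, the path can be built relative to the importing file, e.g. `os.path.join(os.path.dirname(__file__), "..", "shared", "helpers.py")`, so moving the whole project folder keeps everything working.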
44,319,954 | 2017-06-02T02:37:00.000 | 1 | 0 | 0 | 0 | python,django,database,development-environment,wagtail | 69,073,212 | 3 | false | 1 | 0 | I struggled mightily with the dumpdata / loaddata and had to assemble the answer from various sources, so I'll write a short summary of what worked for me in the end:
On Prod: ./manage.py dumpdata -e contenttypes -e auth.Permission --output data.json
On Dev: flush your database with ./manage.py flush; if flushing does not work, see below
On Dev: ./manage.py loaddata data.json
If flush does not work due to some data inconsistency on Dev which happened to me:
delete the db file
delete everything except the __init__.py files from the migrations folders: rm */migrations/0*.py
./manage.py makemigrations && ./manage.py migrate
Then you should be able to load the data | 1 | 6 | 0 | I have a Django website with 3 environments (Local, Staging, Production).
Production contains some data that I don't want my developers to have access to (Personal data and financial data of users).
Doing a DB backup restore is not an option for compliance reason.
However we also have some content pages on this website that we manage with Wagtail CMS.
I am looking for a way to sync production data (only some models but specifically the wagtail pages) back to Staging and Developers local environment when they need to.
Ideally I would have a manage command that I can run in another environment to copy data:
Example: ./manage.py sync_from_prod BlogPost. This would find all missing blog posts in the local or staging environment and create them in the database. I cannot find any library doing this for Wagtail or Django.
This seems like a common problem, and I am surprised to find no Stack Overflow questions or open-source libraries addressing it.
If nothing exists, I will probably try to write my own django-model-sync (I found this project, but it is 3 years old and only compatible up to Django 1.7, while I am on Python 3 / Django 1.11).
To manage security, a dev could use a secret to access a production API exposing the data (over SSL), for example.
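Since no ready-made library turned up, the core of a hand-rolled sync_from_prod command is small: fetch rows from the trusted side, then create whatever is missing locally. A sketch with plain dicts (in real code `prod_rows` would come from the authenticated production API and creation would go through `Model.objects.create`, both of which are omitted here):

```python
def rows_missing_locally(prod_rows, local_rows, key="id"):
    """Return prod rows whose key is absent from the local environment."""
    have = {row[key] for row in local_rows}
    return [row for row in prod_rows if row[key] not in have]

prod = [{"id": 1, "title": "Hello"}, {"id": 2, "title": "Wagtail tips"}]
local = [{"id": 1, "title": "Hello"}]
print(rows_missing_locally(prod, local))  # [{'id': 2, 'title': 'Wagtail tips'}]
```

Restricting the API to whitelisted models (BlogPost, Wagtail pages) rather than all tables is what keeps the personal and financial data out of the lower environments.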
44,323,816 | 2017-06-02T08:05:00.000 | 0 | 0 | 1 | 0 | python,gensim,word2vec | 44,324,291 | 1 | false | 0 | 0 | I am not intimately familiar with the word2vec implementation in gensim but the model, once trained, should basically boil down to a dictionary of (word -> vector) pairs. This functionality is provided by the gensim.models.KeyedVectors class and is independent of the training algorithm used to derive the vectors.
You could extend that class such that it loads the vectors from a database (SQLite for instance) on demand instead of into memory upon creation.
Probably best if you open an issue on GitHub and start a discussion with the core developers on the matter. | 1 | 0 | 1 | I have about 30 word2vec models. When loading them in a Python script, each consumes a few GB of RAM, so it is impossible to use all of them at once. Is there any way to use the models without loading the complete model into RAM? | word2vec - reduce RAM consumption when loading model | 0 | 0 | 0 | 467
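Following the answer's suggestion of a load-on-demand KeyedVectors, here is a minimal sketch of a disk-backed lookup: vectors live in SQLite and are fetched one word at a time, so RAM usage stays flat no matter how many models are kept around. The table layout and float32 packing are illustrative, not gensim's own storage format:

```python
import array
import sqlite3

class DiskBackedVectors:
    """(word -> vector) lookups served from SQLite on demand."""

    def __init__(self, conn):
        self.conn = conn

    def __getitem__(self, word):
        row = self.conn.execute(
            "SELECT vec FROM vectors WHERE word = ?", (word,)
        ).fetchone()
        if row is None:
            raise KeyError(word)
        return array.array("f", row[0])  # unpack the float32 blob

# build a toy database in memory
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vectors (word TEXT PRIMARY KEY, vec BLOB)")
conn.execute(
    "INSERT INTO vectors VALUES (?, ?)",
    ("king", array.array("f", [0.5, 0.25]).tobytes()),
)

kv = DiskBackedVectors(conn)
print(list(kv["king"]))  # [0.5, 0.25]
```

The trade-off is lookup latency (a disk query per word instead of a dict hit), which is usually acceptable when only a few thousand words per model are actually queried.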