Q_Id: int64 (337 to 49.3M)
CreationDate: string (length 23)
Users Score: int64 (-42 to 1.15k)
Other: int64 (0 to 1)
Python Basics and Environment: int64 (0 to 1)
System Administration and DevOps: int64 (0 to 1)
Tags: string (length 6 to 105)
A_Id: int64 (518 to 72.5M)
AnswerCount: int64 (1 to 64)
is_accepted: bool (2 classes)
Web Development: int64 (0 to 1)
GUI and Desktop Applications: int64 (0 to 1)
Answer: string (length 6 to 11.6k)
Available Count: int64 (1 to 31)
Q_Score: int64 (0 to 6.79k)
Data Science and Machine Learning: int64 (0 to 1)
Question: string (length 15 to 29k)
Title: string (length 11 to 150)
Score: float64 (-1 to 1.2)
Database and SQL: int64 (0 to 1)
Networking and APIs: int64 (0 to 1)
ViewCount: int64 (8 to 6.81M)
46,219,800
2017-09-14T12:55:00.000
-1
1
1
0
python,raspberry-pi
61,260,234
2
false
0
0
I'm using Python 3.7.3 on a Raspberry Pi. The import keyboard line is no longer required; just comment it out or remove it, and everything should work fine.
1
4
0
I want to use the keyboard module in Python 3.5.3. pip, import and my program all work fine on Windows. On my Raspberry Pi, pip install works and pip list shows keyboard. However, when I try to run import keyboard I get the error: "ImportError: No module named 'keyboard'". I even tried using sudo, as the keyboard documentation suggests, with the same result. What am I missing?
Python module "keyboard" works on Windows, not found on Raspberry
-0.099668
0
0
12,939
46,219,917
2017-09-14T13:01:00.000
1
0
0
0
python,python-requests,pickle
46,219,989
1
true
0
0
You might want to look into serialization, something like pickle. For example, you would open a file and dump the session with pickle.dump(sess, f), then read it back into a session object with pickle.load(f).
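A minimal sketch of that round trip, assuming the requests session pickles cleanly (the session.pkl filename and URL are just examples; note the binary file modes, which pickle needs):

    import pickle
    import requests

    sess = requests.Session()
    sess.get("https://example.com")  # cookies/headers accumulate on the session

    # Dump the session to a file.
    with open("session.pkl", "wb") as f:
        pickle.dump(sess, f)

    # Later, possibly in another script, load it back.
    with open("session.pkl", "rb") as f:
        sess = pickle.load(f)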
1
0
0
I am trying to find a way to save a session object requests.session() to a file and then continue my session from another Python script from the same place, including the cookies, the headers, etc. I've tried doing that with pickle, but for some reason the session's cookies and all other attributes are not loaded from the file. Is there any way to do this?
python requests library, save session to file
1.2
0
1
1,027
46,222,606
2017-09-14T15:07:00.000
0
0
1
0
python,pandas,azure,anaconda,azure-machine-learning-studio
46,853,504
2
false
0
0
The Azure Machine Learning Workbench allows for much more flexibility with setting up environments using Docker. I moved to using that tool.
1
5
1
I would really like to get access to some of the updated functions in pandas 0.19, but Azure ML studio uses pandas 0.18 as part of the Anaconda 4.0 bundle. Is there a way to update the version that is used within the "Execute Python Script" components?
Updating pandas to version 0.19 in Azure ML Studio
0
0
0
1,862
46,223,080
2017-09-14T15:29:00.000
1
0
1
0
python,python-3.x,debugging,visual-studio-2017
46,230,480
1
true
0
0
In Visual Studio 2017, we can start debugging a Python project with the shortcuts F5 (Start with Debugging) and Ctrl + F5 (Start without Debugging). These debug the project. Debugging a single Python file doesn't have shortcuts by default; you need to right-click the file and choose Start with Debugging or Start without Debugging. If you just want to debug one of the Python files, you can set it as the startup file by right-clicking the file and choosing Set as Startup File. Then you can debug this file with F5 or Ctrl + F5 until you set another file as the startup file.
1
0
0
I am using Visual Studio 2017 with Python. When I want to run my file, I have to do Project > 'Start with Debugging' or Project > 'Start without Debugging' and there are no shortcuts for these, so it's very inconvenient to run my files. It is fine to set projects as 'StartUp project' and then run with F5 (Debug > Start Debugging) but can't I do it with just my files (File > New > File..)?
Visual Studio 2017 Python can't debug files
1.2
0
0
1,696
46,223,120
2017-09-14T15:32:00.000
-1
0
1
0
java,c#,python,garbage-collection,programming-languages
46,223,163
4
false
0
0
In Java, objects reside only in the heap. Hope this helps.
4
0
0
I am reading Programming Language Pragmatics, by Scott, and wonder about the following question: Is garbage collection (e.g. by reference count) used for reclaiming just heap objects or objects in both heap and stack? For example, in Python, Java and C#, which use garbage collection, are stack objects deallocated automatically without garbage collection once they are out of scope?
Is garbage collection used for reclaiming just heap objects or objects in both heap and stack?
-0.049958
0
0
333
46,223,120
2017-09-14T15:32:00.000
1
0
1
0
java,c#,python,garbage-collection,programming-languages
46,223,263
4
false
0
0
There is no "deallocation" on the stack at all, so yes, the garbage collector is not involved in this process. Everything you put on the stack is simply forgotten by the program once you no longer need it. The app usually just subtracts the object's size in bytes from the stack pointer, which is equivalent to cleaning up whatever was placed there last.
4
0
0
I am reading Programming Language Pragmatics, by Scott, and wonder about the following question: Is garbage collection (e.g. by reference count) used for reclaiming just heap objects or objects in both heap and stack? For example, in Python, Java and C#, which use garbage collection, are stack objects deallocated automatically without garbage collection once they are out of scope?
Is garbage collection used for reclaiming just heap objects or objects in both heap and stack?
0.049958
0
0
333
46,223,120
2017-09-14T15:32:00.000
2
0
1
0
java,c#,python,garbage-collection,programming-languages
46,224,198
4
true
0
0
Is garbage collection (e.g. by reference count) used for reclaiming just heap objects or objects in both heap and stack? It's used just for heap objects. The stack is used precisely for objects with guaranteed short lifetimes (i.e. they do not live past the function which created them). For those objects, garbage collection is unnecessary. Figuring out which objects are "heap objects" may not be trivial, though. For example, the JVM uses escape analysis to detect objects which can be allocated on the stack instead of the heap. Another example is C#, where a value-type local variable in an async method is usually going to be allocated on the heap.
4
0
0
I am reading Programming Language Pragmatics, by Scott, and wonder about the following question: Is garbage collection (e.g. by reference count) used for reclaiming just heap objects or objects in both heap and stack? For example, in Python, Java and C#, which use garbage collection, are stack objects deallocated automatically without garbage collection once they are out of scope?
Is garbage collection used for reclaiming just heap objects or objects in both heap and stack?
1.2
0
0
333
46,223,120
2017-09-14T15:32:00.000
1
0
1
0
java,c#,python,garbage-collection,programming-languages
46,235,165
4
false
0
0
The garbage collector in Java only reclaims objects in the heap, but it does trace the stack memory to see whether any of the heap objects are still being referenced before they can be reclaimed. Stack memory is reclaimed once the method call ends.
4
0
0
I am reading Programming Language Pragmatics, by Scott, and wonder about the following question: Is garbage collection (e.g. by reference count) used for reclaiming just heap objects or objects in both heap and stack? For example, in Python, Java and C#, which use garbage collection, are stack objects deallocated automatically without garbage collection once they are out of scope?
Is garbage collection used for reclaiming just heap objects or objects in both heap and stack?
0.049958
0
0
333
46,228,138
2017-09-14T20:59:00.000
0
0
0
0
python,apache-spark,dataframe,pyspark
53,010,988
5
false
0
0
You can persist the dataframe in memory and trigger an action such as df.count(). You will then be able to check the size under the Storage tab in the Spark web UI. Let me know if it works for you.
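A minimal sketch of that approach (spark.range() is just a stand-in for whatever dataframe you actually have):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()
    df = spark.range(1000000)  # stand-in for your dataframe

    df.persist()  # default storage level keeps partitions in memory
    df.count()    # any action works; count() forces full materialization
    # Now check the Storage tab of the Spark web UI for the cached size.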
2
13
1
For a pandas dataframe, the info() function provides memory usage. Is there any equivalent in PySpark? Thanks
How to find pyspark dataframe memory usage?
0
0
0
18,349
46,228,138
2017-09-14T20:59:00.000
8
0
0
0
python,apache-spark,dataframe,pyspark
60,070,654
5
false
0
0
I have something in mind; it's just a rough estimate. As far as I know, Spark doesn't have a straightforward way to get dataframe memory usage, but a pandas dataframe does. So what you can do is: select 1% of the data with sample = df.sample(fraction = 0.01), convert it with pdf = sample.toPandas(), get the pandas dataframe's memory usage from pdf.info(), and multiply that value by 100. This should give a rough estimate of your whole Spark dataframe's memory usage. Correct me if I am wrong :|
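The same estimate as a sketch, using memory_usage() instead of reading the number off info() (spark.range() again stands in for the real dataframe):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()
    df = spark.range(1000000)  # stand-in for your dataframe

    sample = df.sample(fraction=0.01)
    pdf = sample.toPandas()

    # deep=True also counts the payload of object (string) columns.
    sample_bytes = pdf.memory_usage(deep=True).sum()
    print("estimated total bytes:", sample_bytes * 100)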
2
13
1
For a pandas dataframe, the info() function provides memory usage. Is there any equivalent in PySpark? Thanks
How to find pyspark dataframe memory usage?
1
0
0
18,349
46,232,338
2017-09-15T05:36:00.000
0
0
0
0
python,windows,login,automation,wake-on-lan
46,238,592
1
true
0
0
I found a way to do this with Python and Windows Task Scheduler. With Task Scheduler you can run tasks based on events like startup. I created a Python script that changes the password to a blank one and restarts the computer; this way it doesn't require the password for login. In Task Scheduler you have to check "Run with highest privileges" and "Run whether the user is logged on or not". Then I created another script that, after the log-on event, changes the password back to what it was. In my opinion this is some kind of security flaw, but more experienced people can give their opinion on that.
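A rough sketch of the two scripts described above, assuming an elevated Python 3.5+ process and a hypothetical account name and password (net user and shutdown are standard Windows commands; blanking a password this way is, as noted, a security trade-off):

    import subprocess

    ACCOUNT = "someuser"          # hypothetical account name
    ORIGINAL_PASSWORD = "secret"  # hypothetical original password

    # Script 1, run at startup by Task Scheduler:
    # blank the password, then reboot so login needs no password.
    subprocess.run(["net", "user", ACCOUNT, ""], check=True)
    subprocess.run(["shutdown", "/r", "/t", "0"], check=True)

    # Script 2, run on the logon event by Task Scheduler:
    # restore the original password.
    subprocess.run(["net", "user", ACCOUNT, ORIGINAL_PASSWORD], check=True)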
1
0
0
I have got the Wake On LAN feature working, but after I start the computers they get stuck at the Windows login screen, obviously, as the computers have passwords. Is there any way to auto-login after Wake On LAN, or somehow automate the login process, without removing the password? I found threads discussing the possibility that you could run a script as a service or something, but couldn't find a working solution.
Auto login after Wake On Lan
1.2
0
0
1,270
46,235,088
2017-09-15T08:30:00.000
0
1
0
0
python,selenium,automated-tests,selenium-ide,web-testing
46,241,635
3
false
0
0
The best way is to learn how your app is constructed, and to work with the developers to add an id element and/or distinctive names and classes to the items you need to interact with. With that, you aren't dependent on using the fragile xpaths and css paths that the inspector returns and instead can create short, concise expressions that will hold up even when the underlying structure of the page changes.
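For example, once an element has a stable id, the locator stays short (a sketch; the URL and the "submit-btn" id are hypothetical):

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Firefox()
    driver.get("https://example.com")

    # A stable id beats a brittle copied XPath/CSS path.
    driver.find_element(By.ID, "submit-btn").click()
    driver.quit()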
1
0
0
Selenium IDE is a very useful tool for creating Selenium tests quickly. Sadly, it has been unmaintained for a long time and is now not compatible with new Firefox versions. Here is my work routine to create Selenium tests without Selenium IDE: Open the Inspector. Find and right-click on the element. Select Copy CSS Selector. Paste into the IDE/code editor. Type some code. Go back to step 2. That is a lot of manual work, switching back and forth. How can I write Selenium tests faster?
Since Selenium IDE is unmaintained, how to write Selenium tests quickly?
0
0
1
116
46,235,675
2017-09-15T09:00:00.000
0
1
0
0
python,algorithm,performance,edit-distance
46,241,767
1
false
0
0
It seems, from Wikipedia, that edit distance counts three operations (insertion, deletion, substitution) performed on a starting string. Why not systematically generate all strings up to N edits from a starting string, then stop when you reach your limit? There would be no need to check the actual edit distance, as the strings would be correct by construction. For randomness, you could shuffle the generated strings.
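A sketch of the generation step: all single-edit neighbours of one sequence over an assumed 4-letter alphabet (apply it repeatedly for N edits):

    import random

    ALPHABET = "ACGT"  # assumed 4-letter alphabet

    def one_edit_neighbours(seq):
        """All strings one deletion, substitution or insertion away."""
        out = set()
        for i in range(len(seq)):
            out.add(seq[:i] + seq[i + 1:])           # deletion
            for c in ALPHABET:
                out.add(seq[:i] + c + seq[i + 1:])   # substitution
        for i in range(len(seq) + 1):
            for c in ALPHABET:
                out.add(seq[:i] + c + seq[i:])       # insertion
        out.discard(seq)  # drop the unedited original
        return out

    neighbours = list(one_edit_neighbours("ACGTACGTACGTACGTACGT"))
    random.shuffle(neighbours)  # randomness via shuffling, as suggested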
1
0
1
I need to create a program/script for creating a high number of random sequences (20-letter-long sequences based on 4 different letters) with a minimum edit distance between all sequences. "High" would here be a minimum of 100k sequences, but if possible up to 1 million. I started with a naive approach of just generating random 20-letter sequences and, for each sequence, calculating the edit distance between it and all other sequences already created and stored. If the new sequence passes my threshold value, store it; otherwise discard it. As you understand, this scales very badly for higher numbers of sequences. Up to 10k is reasonably fine, but trying to get to 100k starts to get troublesome. I really only need to create the sequences once and store the output, so I'm really not that fussy about speed, but making 1 million at this rate is just not possible today. I've been trying to think of alternatives to speed up the process, like building the sequences in "blocks" of minimal ED and then combining them, but haven't come up with any solution. Does anyone have any smart idea/method that could be implemented to create such a high number of sequences with minimal ED more time-efficiently? Cheers, JB
Create a high number of random sequences with minimum edit distance, time-efficiently
0
0
0
79
46,242,451
2017-09-15T14:55:00.000
0
1
0
0
python-2.7,timestamp,wireshark,scapy
46,265,590
1
false
0
0
Short answer: Scapy is not really good at replaying a PCAP file (if you want to be fast, I mean). If you need better performance than Scapy can offer, you should probably use tcpreplay; you can do that directly from Scapy, using the sendpfast() function.
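A minimal sketch of that call (it requires the tcpreplay tool to be installed; the capture.pcap path is an example):

    from scapy.all import rdpcap, sendpfast

    packets = rdpcap("capture.pcap")
    # sendpfast() hands the packets off to tcpreplay for fast replay.
    sendpfast(packets, pps=1000)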
1
0
0
I am trying to measure packet departure and arrival times via Scapy. Is it possible to measure the time when a packet leaves the node? If so, how good is Scapy at replaying those exact timestamps, and how can I verify its credibility? Is it possible to compare Scapy timestamps with Wireshark's? If so, how? I know these are a lot of questions, but I really need these answers. Thank you in advance for your patience and effort.
How good is Scapy at replaying a pcap file?
0
0
0
529
46,244,095
2017-09-15T16:34:00.000
4
0
0
0
python
48,470,963
1
false
0
0
I'm the creator of pylogit. I don't have built-in utilities for estimating conditional logits with fixed effects. However, you can use pylogit to estimate this model. Simply: create dummy variables for each decision maker, being sure to leave out one decision maker for identification; then, for each created dummy variable, add the dummy variable's column name to the utility specification.
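The dummy-variable step might look like this with pandas (a sketch; the toy dataframe and the person_id column name are assumptions):

    import pandas as pd

    # Toy long-format panel; person_id identifies the decision maker.
    df = pd.DataFrame({"person_id": [1, 1, 2, 2, 3, 3],
                       "choice":    [0, 1, 1, 0, 0, 1]})

    # One dummy per decision maker; drop_first leaves one decision
    # maker out for identification, as recommended above.
    dummies = pd.get_dummies(df["person_id"], prefix="person", drop_first=True)
    df = pd.concat([df, dummies], axis=1)

    # These column names would then be added to the pylogit
    # utility specification.
    print(list(dummies.columns))  # ['person_2', 'person_3']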
1
1
1
I am trying to estimate a logit model with individual fixed effects in a panel data setting, i.e. a conditional logit model, with Python. I have found the pylogit library. However, the documentation I could find explained how to use the conditional logit model for multinomial models with varying choice attributes. This does not seem to be the same use case as a simple binary panel model. So my questions are: Does pylogit allow estimating conditional logits for panel data? If so, is there documentation? If not, are there other libraries that allow you to estimate this type of model? Any help would be much appreciated.
conditional logit for panel data in python
0.664037
0
0
2,528
46,245,240
2017-09-15T17:58:00.000
0
0
0
0
python,directory,shutil,copytree
46,245,531
1
false
0
0
I made a mistake with the file paths I passed into the copytree function. The function works as expected, in the way I said I wanted it to in my question.
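For reference, a sketch of the working call: copytree copies the contents of src into dst, not a nested dst/src (dirs_exist_ok needs Python 3.8+ and is only required when dst already exists):

    import shutil

    # After this, /dst contains a.png, file.text, sub_dir1, sub_dir2.
    shutil.copytree("/src", "/dst", dirs_exist_ok=True)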
1
3
0
So I have a directory called /src, which has several contents: a.png, file.text, /sub_dir1, /sub_dir2. I want to copy these contents into a destination directory /dst such that the inside of /dst looks like this (assuming /dst was previously empty): a.png, file.text, /sub_dir1, /sub_dir2. I have tried shutil.copytree(src, dst), but when I open up /dst I see: /src, which, although it contains everything I want, should not be there, as I do not want the /src directory itself copied over, only its inner contents. Does anyone know how to do this?
Using shutil library to copy contents of a directory from src to dst, but not the directory itself
0
0
0
89
46,245,946
2017-09-15T18:57:00.000
0
0
1
0
python,gdal
67,923,125
3
false
0
0
Try "from osgeo import gdal", hope it helps!
1
1
0
I used conda install -c conda-forge gdal to install the GDAL package. However, I got the following error when importing the package. >>> import gdal Traceback (most recent call last): File "", line 1, in File "/Users/name/anaconda/lib/python3.6/site-packages/gdal.py", line 2, in from osgeo.gdal import deprecation_warn File "/Users/name/anaconda/lib/python3.6/site-packages/osgeo/__init__.py", line 21, in _gdal = swig_import_helper() File "/Users/name/anaconda/lib/python3.6/site-packages/osgeo/__init__.py", line 17, in swig_import_helper _mod = imp.load_module('_gdal', fp, pathname, description) File "/Users/name/anaconda/lib/python3.6/imp.py", line 242, in load_module return load_dynamic(name, filename, file) File "/Users/name/anaconda/lib/python3.6/imp.py", line 342, in load_dynamic return _load(spec) ImportError: dlopen(/Users/name/anaconda/lib/python3.6/site-packages/osgeo/_gdal.cpython-36m-darwin.so, 2): Library not loaded: @rpath/libicui18n.58.dylib Referenced from: /Users/name/anaconda/lib/libgdal.20.dylib Reason: image not found I tried the following, but they didn't work for me: conda upgrade numpy conda install libpng Does anyone know what I should do?
Import Gdal not working
0
0
0
2,041
46,246,197
2017-09-15T19:17:00.000
0
1
0
0
python,raspberry-pi,pygame
46,246,348
1
false
0
1
This ought to work, but it's purely hypothetical: use a parallel circuit to set a pin "high" and "low". "High" means start the timer; "low" means stop the timer. The next "high" resets and restarts the timer. You could use two circuits: one for start/stop and one for "reset". You'd probably need some code to avoid resetting while running. The parallel circuit can be controlled manually (for testing) or automatically (perhaps with a master program?).
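A rough sketch of reading such a start/stop pin with RPi.GPIO (BCM pin 17 and the wiring are assumptions):

    import RPi.GPIO as GPIO

    START_PIN = 17  # assumed wiring

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(START_PIN, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)

    # Block until the pin goes high: the "start the timer" signal.
    GPIO.wait_for_edge(START_PIN, GPIO.RISING)
    print("start signal received")

    # Block until it drops low again: the "stop the timer" signal.
    GPIO.wait_for_edge(START_PIN, GPIO.FALLING)
    print("stop signal received")

    GPIO.cleanup()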
1
0
0
I am trying to set up a gameshow-type system where I have 5 stations, each having a monitor and a button. The monitor will show a countdown timer and various animations. My plan is to program the timer, animations, and button control through pygame and put a Pi at each station, each running its own pygame script, waiting for the start signal (can be keypress or GPIO). I'm having trouble figuring out how to send that signal simultaneously to all stations. Additionally, I need to be able to send a 'self destruct' signal to each station to stop the timer. I can ssh into each station, but I don't know how to send keypress/GPIO signals through the command line to a running pygame script. I was thinking of putting an RF receiver on each Pi, all at the same wavelength, and using a common transmitter, but that seems very hacky and not necessarily so simultaneous.
Control Multiple Raspberry Pis Remotely / Gameshow Type System
0
0
0
114
46,246,907
2017-09-15T20:14:00.000
0
0
1
0
python,ironpython,pylint,flake8
46,254,492
1
false
0
0
I believe IronPython has the exact same syntax as CPython (or any other Python implementation), so Flake8 and pylint should just work. You'll need to install them, and both packages strongly suggest using pip. I've not used IronPython before, so you should install pip into your IronPython environment and then use that to install Flake8 and PyLint. (You can and should use both. They're complementary, not mutually exclusive; in fact, Flake8 is itself a bundle of several checkers.)
1
0
0
I have looked into a couple of linting tools so far; flake8 and pylint would both be great. Unfortunately, I need to use these linting tools with IronPython instead of CPython. How do I go about using pylint (or alternatively flake8) with IronPython?
IronPython Linting Tool
0
0
0
362
46,248,507
2017-09-15T23:10:00.000
-1
0
1
0
python,eval
46,248,856
1
false
0
0
I use built-in functions like .isalpha(), and tuples, to limit the use of eval(). I personally built a simple calculator and filtered out all words and letters via a tuple, checked by a loop over the input string. This prevents code from being passed through where only a mathematical expression should be. Good luck :)
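A minimal sketch of that character-whitelist idea (this is still not a real security boundary; eval() remains risky, and even pure math input can be abused, e.g. huge exponents):

    # Allow only digits, whitespace, and basic math symbols.
    ALLOWED = tuple("0123456789+-*/(). ")

    def safe_calc(expr):
        for ch in expr:
            if ch not in ALLOWED:
                raise ValueError("disallowed character: %r" % ch)
        return eval(expr)

    print(safe_calc("3 + 4 - (77 ** 3)"))  # -456526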
1
0
0
I am making a program that takes in a bunch of input and returns something based on it, like if the person does 3 + 4 - (77 ** 3). But how can I limit it so that the person using it is only able to do that? As in, can I limit it so the person can't type in print(""), because that will return ""? Can I make it so that they can only do math operations? Or is that not possible and too much of a question?
Am I able to limit the use of eval()? (PYTHON)
-0.197375
0
0
95
46,249,389
2017-09-16T02:04:00.000
1
0
1
0
python-2.7
46,249,425
1
true
0
0
Most modern text viewers do not interpret the backspace character as removing the previous character, so if you use one of those you won't be able to get it to work.
1
0
0
I want to write a string with backspace escape characters into a text file, but when I open the .txt it has the symbol that represents the backspace. Is there any way for the backspace character to work in text files? I have tried \n and it works fine, so I don't get it.
Backspace character in .txt files
1.2
0
0
757
46,250,806
2017-09-16T06:25:00.000
2
1
1
0
python,cryptography
46,251,597
1
true
0
0
I got the solution for this: the pycrypto library has to be installed using pip3 instead of pip. sudo pip3 install crypto; sudo pip3 install pycrypto. Import works fine afterwards.
1
0
0
from Crypto.Cipher import AES This gives me the following error when executed with Python 3 (it works fine in Python 2): ImportError: No module named 'Crypto' What is the problem here?
importing PyCrypto Library is not working in Python3
1.2
0
0
826
46,252,155
2017-09-16T09:07:00.000
1
0
0
0
r,python-2.7,facebook-graph-api
46,254,610
1
false
1
0
Without a Page Token of the Page, it is impossible to get the reviews/ratings. You can only get those for Pages you own. There is no paid service either, you can only ask the Page owners to give you access.
1
0
0
I want to extract reviews from public Facebook pages, like an airline page or a hospital page, to perform sentiment analysis. I have an app ID and app secret ID which I generated from the Facebook Graph API using my Facebook account, but to extract the reviews I need a page access token, and as I am not the owner/admin of the page I cannot generate that page access token. Does anyone know how to do it, or does it require some paid service? Kindly help. Thanks in advance.
How to extract reviews from facebook public page without page access token?
0.197375
0
1
259
46,253,238
2017-09-16T11:11:00.000
0
1
0
0
python,python-telegram-bot
46,253,303
2
false
0
0
You can't currently know how long ago your bot was added to a group. :( You need to log it in your own database, and there is the leaveChat method if you need it.
2
0
0
I built a Telegram bot for groups. When the bot is added to a group, it deletes messages containing ads. How can I change the bot to work for only 30 days in each group and then stop? For example, the bot is added to group 1 today and to group 2 next week; I need the bot to stop after 30 days in group 1 and to stop in group 2 after another 37 days. How can I do that?
How can a telegram bot be changed so that it works only for a specific time in a group?
0
0
1
174
46,253,238
2017-09-16T11:11:00.000
0
1
0
0
python,python-telegram-bot
46,253,626
2
false
0
0
Simply, all you need is a database at the back end. Just store the group_id and join_date in each row. At any time, you can query your database; if more than 30 days have passed since join_date, stop the bot or leave the group. You can also use any other storage rather than a database: a file, an index, etc.
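A sketch of that with sqlite3 (the table layout and names are invented; datetime.fromisoformat needs Python 3.7+):

    import sqlite3
    from datetime import datetime, timedelta

    conn = sqlite3.connect("groups.db")
    conn.execute("CREATE TABLE IF NOT EXISTS groups (group_id INTEGER, join_date TEXT)")

    def record_join(group_id):
        # Call this when the bot is added to a group.
        conn.execute("INSERT INTO groups VALUES (?, ?)",
                     (group_id, datetime.utcnow().isoformat()))
        conn.commit()

    def expired(group_id, days=30):
        row = conn.execute("SELECT join_date FROM groups WHERE group_id = ?",
                           (group_id,)).fetchone()
        if row is None:
            return False
        return datetime.utcnow() - datetime.fromisoformat(row[0]) > timedelta(days=days)

    # When expired(group_id) is True, stop handling that group,
    # or have the bot leave the chat via the leaveChat method.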
2
0
0
I built a Telegram bot for groups. When the bot is added to a group, it deletes messages containing ads. How can I change the bot to work for only 30 days in each group and then stop? For example, the bot is added to group 1 today and to group 2 next week; I need the bot to stop after 30 days in group 1 and to stop in group 2 after another 37 days. How can I do that?
How can a telegram bot be changed so that it works only for a specific time in a group?
0
0
1
174
46,254,811
2017-09-16T14:11:00.000
0
0
0
0
django,python-2.7,static,connection
46,255,626
1
true
1
0
Just immediately do a page refresh by adding <meta http-equiv="refresh" content="0"> to the <head> of your page. Then sleep instead of responding until you actually want to update the page. That's still an ugly hack, and you should probably reconsider your design: client-side updates and no JS don't mix.
1
0
0
I need to make a static page that keeps a constant connection to the server to receive messages, or that updates the page at intervals, with Django and Python 2.7 and without JS or jQuery.
How to update a client-side page without js and jquery on django?
1.2
0
0
49
46,257,172
2017-09-16T18:31:00.000
3
0
1
0
python,augmented-assignment
46,257,281
2
true
0
0
Is there anything like this available in Python? No Is this completely absurd or confusing to you? No. That said, it would be somewhat different from the existing augmented assignment operators (like +=, *=, etc.). For those operators, you can define a special magic method (__iadd__, __imul__, etc.) to implement them. A key feature of these is that, because a separate method is called, they may update the object in place. For instance, if x is a list, then x += [1, 2, 3] will actually mutate the object x rather than creating a new list. For your proposed .= operator, it's not clear how this could work. If there were an __imeth__ operator for "augmented method assignment", what would it take as arguments? If it took the name of the method as an argument, you would need a giant if-block inside __imeth__ to decide what to do for various methods (i.e., if method == 'lower' to handle .lower() and so on). If it didn't take the name of the method as an argument, how would it know what method is being called? More importantly, though, a fundamental feature of the existing operators is that they accept an expression as their operands. With your proposed .=, what would happen if you did x .= 3? Or x .= (foo+bar).blah()/7? Or even x .= lower (with no parentheses)? It would seem that .= would require its right-hand argument to be syntactically restricted to just a single function call (which would be interpreted as a method call). That is quite different from any existing Python operator. It seems the only way to handle all of that would be to reduce the scope of the proposal so that it indeed only accepts a single function call on the right, and make it non-customizable, so that x .= method(...) is pure syntactic sugar for x = x.method(...). But, as described above, that is much weaker than what current augmented assignment allows, so I don't think it would be a big win.
1
2
0
Let: a = 5, b = 10, and hello_world = 'Hello World'. To my understanding: Python allows us to utilize assignment operators to prevent us from having to repeat the left operand. For example, a = a + b can be rewritten as a += b where both would return 15. So with some Python objects it could be somewhat similar, depending on what the method being called returns. With a string, str, or this case our string hello_world there are a multitude of methods for you to use to modify it in some way such as hello_world.lower() and sometimes I would call it to assign the variable the result of the method within. For example, hello_world = hello_world.lower() could be rewritten as something like hello_world .= lower() where both would return hello world. Is there anything like this available in Python? Is this completely absurd or confusing to you? Curious what people think of this and/or if it exists already.
The possibility of an assignment operator concept for an object method
1.2
0
0
116
46,257,627
2017-09-16T19:21:00.000
-1
0
0
0
python,scikit-learn,scale
49,274,464
2
false
0
0
My understanding is that scale transforms the data to the min-max range of the data, while StandardScaler transforms the data to the range [-1, 1].
1
21
1
I understand that scaling means centering the mean (mean = 0) and making unit variance (variance = 1). But what is the difference between preprocessing.scale(x) and preprocessing.StandardScaler() in scikit-learn?
Scikit-learn: preprocessing.scale() vs preprocessing.StandardScaler()
-0.099668
0
0
9,544
46,257,848
2017-09-16T19:47:00.000
1
0
0
0
python,r,spss
46,259,970
1
true
0
0
For R, you can perhaps use the haven package. Of course, the results will depend on the files being imported, but the package does include functions for dealing with/viewing labels (presuming the labels actually exist).
1
0
1
How can I extract a mapping of numbers to labels from a .sav file without access to SPSS? I am working with a non-profit who uses SPSS, but I don't have access to SPSS (and they are not technical). They've sent me some SPSS files, and I was able to extract these into csv files which have correct information with an R package called foreign. However for some files the R package extracts textual labels and for other files the R package extracts numbers. The files are for parallel case studies of different individuals, and when I count the labels vs. the numbers they don't even match exactly (say 15 labels vs. 18 enums because the underlying records were made across many years and by different personnel, so I assume the labels probably don't match in any case). So I really need to see the number to label matching in the underlying enum. How can I do this without access to SPSS? (p.s. I also tried using scipy.io to read the .sav file and got the error Exception: Invalid SIGNATURE: b'$F' when testing on multiple files before giving up so that seems like a non-starter)
How to extract enumerated labels and corresponding numerical values from a .sav file?
1.2
0
0
102
46,259,836
2017-09-17T01:21:00.000
2
0
0
0
django,amazon-web-services,nginx,python-3.4,gunicorn
46,266,596
2
false
1
0
You definitely still need some reverse proxy at your application level. While ELB has no specific reverse proxy, an Application Load Balancer (ALB) would partly replace a proper reverse proxy, as it allows you to define path-based routing. Nevertheless, it's not a full substitute for nginx in this case. With nginx you are equipped with tools that allow you to do almost anything your application may require as it keeps growing and serious traffic comes into play. What's more, a Django application in production should definitely run under something like uwsgi, which is capable of handling traffic that the "development" server shipped with Django could not. With all the things described above, you're in charge here, having all that nginx and uwsgi machinery ready to go with your application. I love having all the applications we build on a daily basis containerized with Docker in an Elastic Beanstalk multi-container environment; there we've got nginx and uwsgi, so we can do anything we need.
1
4
0
I am working on creating a Django web app using resources on AWS. I am new to deployment and in my production setup (Elastic Beanstalk i.e. ELB based) I would like to move away from the Django development web server and instead use Nginx + Gunicorn. I have been reading about them and also about ELB. Is Nginx + Gunicorn needed if I deploy my Django app on ELB? As ELB does come with reverse proxy, auto scaling, load balancing etc. Appreciate the inputs.
AWS elastic beanstalk + Nginx + Gunicorn
0.197375
0
0
5,381
46,261,736
2017-09-17T07:34:00.000
1
1
1
0
python-2.7,file,file-io
59,353,987
2
true
0
0
I use os.access('your_file_path', os.W_OK) to check for write mode. file.mode always returned 'r' for me, even while the file was actually in write mode.
1
1
0
Is there a quick "pythonic" way to check if a file is in write mode, whether the mode is r+, w, w+, etc. I need to run a function when __exit__ is called, but only if the file is open in write mode and not just read-only mode. I am hoping some function exists to obtain this information but I can't seem to find anything. Is there a way to do this without having to build a separate function to interpret the list of mode types?
Python check if file object is in write mode
1.2
0
0
1,269
46,263,313
2017-09-17T10:58:00.000
1
1
1
0
python,programming-languages,computer-science,typing
46,263,361
3
false
0
0
This is an aspect of duck typing. Python, as a dynamically-typed language, cares less about the actual types of objects than about their behaviour. As the saying goes, if it quacks like a duck, then it's probably a duck; in the case of your descriptor, Python just wants to know it defines the special methods, and if it does then it accepts that it is a descriptor.
2
0
0
This is a programming language concept question, e.g. similar to the level of Programming Language Pragmatics, by Scott. In Python, the classes of some kinds of objects are defined in terms of having some methods with special names, for example, a descriptors' class is defined as a class which has a method named __get__, __set__, or __delete__(). an iterators' class is defined as a class which has a method named __next__. Questions: What is the language feature in Python called in programming language design? Is it duck typing? How does the language feature work underneath? In C++, C#, and Java, is it correct that a descriptor's class and an iterator's class would have been defined as subclasses of some particular classes? (similarly to C# interface IDisposable) In Python, Can descriptors' classes be defined as subclasses of some particular class? Can iterators' classes be defined as subclasses of some particular class?
What is the language feature that some kinds of classes are defined in terms of having some methods with special names?
0.066568
0
0
63
46,263,313
2017-09-17T10:58:00.000
1
1
1
0
python,programming-languages,computer-science,typing
46,263,607
3
true
0
0
What is the language feature in Python called in programming language design? Is it duck typing? "Any object with a member with a specific name (or signature) can work here" is duck typing. I don't think there is a more specific term for "any object with a member with a specific name (or signature) can work for this language feature", if that's what you were asking. How does the language feature work underneath? I don't understand the question. If a language feature means that it calls a method with a specific name, it calls a method with that name. That's it. In C++, C#, and Java, is it correct that a descriptor's class and an iterator's class would have been defined as subclasses of some particular classes? I'm not aware of anything similar to descriptors in any of these languages, and I don't think it makes sense to speculate on how it would look if it did exist. As for iterators, each of these languages has a foreach loop, so you can look at that: In C++, the range-based for loop works on any type that has instance members begin and end, or for which the begin and end functions exist. The returned type has to support the ++, != and * operators. In C#, the foreach loop works on any type that has an instance method GetEnumerator(), which returns a type with a MoveNext() method and a Current property. There is also the IEnumerable<T> interface, which describes the same shape. Enumerable types commonly implement this interface, but they're not required to do so. In Java, the enhanced for loop works on any type that implements Iterable. So, there are no subclasses anywhere (C# and Java differentiate between implementing an interface and deriving from a base class). Java requires you to implement an interface. C# uses duck typing, but also optionally allows you to implement an interface. C++ uses duck typing; there is no interface or base class at all. Note that, depending on the language, the decision whether to use duck typing for a certain language feature might be complicated. As an extreme example, one feature of C# (collection initializers) requires implementing a specific interface (IEnumerable) and also the presence of a method with a specific name (Add). So this feature is partially duck typed.
2
0
0
This is a programming language concept question, e.g. similar to the level of Programming Language Pragmatics, by Scott. In Python, the classes of some kinds of objects are defined in terms of having some methods with special names, for example, a descriptors' class is defined as a class which has a method named __get__, __set__, or __delete__(). an iterators' class is defined as a class which has a method named __next__. Questions: What is the language feature in Python called in programming language design? Is it duck typing? How does the language feature work underneath? In C++, C#, and Java, is it correct that a descriptor's class and an iterator's class would have been defined as subclasses of some particular classes? (similarly to C# interface IDisposable) In Python, Can descriptors' classes be defined as subclasses of some particular class? Can iterators' classes be defined as subclasses of some particular class?
What is the language feature that some kinds of classes are defined in terms of having some methods with special names?
1.2
0
0
63
46,267,265
2017-09-17T18:05:00.000
-1
0
0
0
python,html
68,627,124
4
false
1
0
Try trinket.io! Trinket lets you embed, share and save python code for free!
1
3
0
I'm trying to find a way to embed Python code inside an HTML page. I'm NOT talking about using Python as the back end with tools like Django or Flask. I would like to implement a very, very basic console on my webpage so that I can show off Python scripts running, not just pure code. The user would then be able to modify the Python, then re-run it to see the changes. Suppose I'm making a Python programming tutorial website, and I want the user to see that print("hello world"), when run, outputs "hello world". Is there a way to achieve this?
Embed Python in an HTML page
-0.049958
0
0
7,680
46,267,910
2017-09-17T19:14:00.000
0
0
1
0
python,text-files
46,267,961
4
false
0
0
Because you've asked to focus on how to handle the updates in a text file, I've focused on that part of your question. So, in effect, I've focused on answering how you would go about having something change in a text file when those changes impact the length and structure of the text file. That question is independent of the thing in the text file being a password. There are significant concerns related to whether you should store a password at all, or whether you should store some quantity that can be used to verify a password. All that depends on what you're trying to do, what your security model is, and what else your program needs to interact with. You've ruled all that out of scope for your question by asking us to focus on the text-file update part of the problem. You might adopt the following pattern to accomplish this task: at the beginning, see if the text file is present; if so, read it and assume you are doing an update rather than creating a new user. Ask for the username and password; if it is an update, prompt with the old values and allow them to be changed. Then write out the text file. Most strategies for updating things stored in text files involve rewriting the text file entirely on every update.
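A sketch of that rewrite-the-whole-file pattern (the filename, file layout and prompts are illustrative only, and storing plaintext passwords is unsafe in practice):

    import os

    CREDS_FILE = "creds.txt"  # hypothetical layout: username, then password

    def load_creds():
        if not os.path.exists(CREDS_FILE):
            return None  # new user
        with open(CREDS_FILE) as f:
            username, password = f.read().splitlines()[:2]
        return username, password

    def save_creds(username, password):
        # Rewrite the file entirely on every update.
        with open(CREDS_FILE, "w") as f:
            f.write(username + "\n" + password + "\n")

    existing = load_creds()
    if existing:
        print("Updating existing user:", existing[0])
    username = input("Username: ")
    password = input("Password: ")
    save_creds(username, password)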
2
0
0
I am trying to create a program that asks the user for, in this example, let's say a username and password, then stores this (I assume in a text file). The area I am struggling with is how to allow the user to update their password stored in the text file. I am writing this in Python.
Storing a username and password, then allowing user update. Python
0
0
0
3,881
46,267,910
2017-09-17T19:14:00.000
-2
0
1
0
python,text-files
46,267,983
4
false
0
0
Is this a single-user application? It would help if you could provide more information on where you're struggling. You can read the password file (which has usernames and passwords). When a user authenticates, match the username and password to the combination in the text file. When a user wants to change their password, the user provides the old and new passwords; the username and old-password combination is compared to the one in the text file and, if it matches, the new password is stored.
2
0
0
I am trying to create a program that asks the user for, in this example, let's say a username and password, then stores this (I assume in a text file). The area I am struggling with is how to allow the user to update their password stored in the text file. I am writing this in Python.
Storing a username and password, then allowing user update. Python
-0.099668
0
0
3,881
46,268,174
2017-09-17T19:41:00.000
0
0
0
0
python,python-3.x,webbrowser-control,opera
46,268,349
2
false
0
0
You can use Selenium to launch a web browser from Python.
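A hedged sketch for Opera with the Selenium 3-era API (it assumes the operadriver binary is installed and on PATH; Opera support has changed in newer Selenium versions):

    from selenium import webdriver

    # Selenium drives Opera through the operadriver executable.
    driver = webdriver.Opera()
    driver.get("https://example.com")
    driver.quit()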
1
0
0
I need to launch my web browser (Opera) from Python code. How can I do that? Python version 3.6, OS MS Windows 7.
How can I execute my webbrowser in Python?
0
0
1
737
46,271,314
2017-09-18T03:44:00.000
1
0
1
0
python,python-3.x
46,272,801
2
false
0
0
join is a method of the str class. str() constructs an object (an empty string) of the str class, and that object also has the join method. str().join(['abc', 'def']) will return abcdef and is equivalent to ''.join(['abc', 'def']); what str() does is create an empty string object. The method can also be called on the class itself to produce the same result: you can use str.join('', ['abc', 'def']) to generate the same output. In summary, 'string'.join([string_list]) is equivalent to str.join('string', [string_list]).
1
2
0
I'm new to Python and I'm using Python 3. I'm trying to join list items into a string. I found that when I try str().join(lst) I successfully get the list items as a string, but when I do str.join(lst) I get: TypeError: descriptor 'join' requires a 'str' object but received a 'list'. What is the difference between str and str() in this case?
What is the difference between str.join() and str().join() in Python?
0.099668
0
0
3,566
46,275,319
2017-09-18T08:58:00.000
2
0
0
0
python,html,django,seo
46,275,441
1
true
1
0
Building on top of what you had in mind, you could just create a separate place to keep the images that you don't want to be indexed, write a script to move files to that location once they're "expired", and just add the URL to the robots.txt file. Perhaps something like /expired_images*.
1
0
0
I want to stop crawlers from indexing specific images on my website, but only if they're older than a specific date. However, the crawler shall not stop indexing the page where the image is currently linked. My initial approach was to write a script which adds the URL of the image to robots.txt, but I think the file will become huge, as we are talking about a really huge number of potential images. My next idea was to use the <meta name="robots" content="noimageindex"> tag, but I think this approach can be error-prone, as I could forget to add this tag to a template where I might want to stop crawlers from indexing the image. It's also redundant, and the crawler will ignore all images. My question is: do you know a programmatic way to force a crawler not to index an image if a condition (in my case the date) is true? Or is my only possibility to stop the crawler from indexing the whole page?
Is there a programmatically way to force a crawler to not index specific images?
1.2
0
1
66
46,277,726
2017-09-18T11:02:00.000
1
0
1
0
python
46,277,867
2
true
1
0
As long as one process has a file open, the file will remain available to that process even after it has been removed. No other process can open it (it no longer has a name), and its existence will not be visible with normal OS utilities such as ls. In the case of Python, I think the compiled version of the script remains in memory; removing it from the filesystem does nothing to change that. So your Flask app is not affected until it terminates, at which point the resident memory is freed and any files it held open are removed by the OS.
2
0
0
This may sound stupid, but here is my scenario. I have a Python/Flask website which is made live using nohup. Now, even after I removed all files in __pycache__ and the original .py files, the program is still running without any errors. So where does it run from? Is there any cache of files getting created anywhere else? Note: I know I can kill the process; I just wanted to understand this behaviour.
Python program is running even after the files are removed
1.2
0
0
113
46,277,726
2017-09-18T11:02:00.000
1
0
1
0
python
46,277,897
2
false
1
0
So where does it run from ? It runs from memory. Once your program has been compiled to byte code (or the byte code has been loaded from .pyc files), it is shipped off for execution to the Python Virtual Machine and the original source file is closed. Removing it doesn't affect the running process.
2
0
0
This may sound stupid, but here is my scenario. I have a Python/Flask website which is made live using nohup. Now, even after I removed all files in __pycache__ and the original .py files, the program is still running without any errors. So where does it run from? Is there any cache of files getting created anywhere else? Note: I know I can kill the process; I just wanted to understand this behaviour.
Python program is running even after the files are removed
0.099668
0
0
113
46,281,845
2017-09-18T14:28:00.000
1
0
0
0
python,matplotlib,pyqt4,ubuntu-16.04,python-3.6
46,406,790
3
false
0
0
For Python 3.6 (since I had that on my computer), go to the command line and type: conda install -c anaconda pyqt=5.6.0. If you are unsure about the Python and PyQt versions, type conda info pyqt; this will output the relevant PyQt version. Hence you can check your PyQt version and then install with the command mentioned first.
1
0
1
When I import matplotlib.pyplot in any python 3.6 program, I get the following error: $ python kernel1.py Traceback (most recent call last): File "kernel1.py", line 13, in <module> import matplotlib.pyplot as plt File "/home/atul/anaconda3/lib/python3.6/site-packages/matplotlib/pyplot.py", line 115, in <module> _backend_mod, new_figure_manager, draw_if_interactive, _show = pylab_setup() File "/home/atul/anaconda3/lib/python3.6/site-packages/matplotlib/backends/__init__.py", line 32, in pylab_setup globals(),locals(),[backend_name],0) File "/home/atul/anaconda3/lib/python3.6/site-packages/matplotlib/backends/backend_qt5agg.py", line 16, in <module> from .backend_qt5 import QtCore File "/home/atul/anaconda3/lib/python3.6/site-packages/matplotlib/backends/backend_qt5.py", line 26, in <module> import matplotlib.backends.qt_editor.figureoptions as figureoptions File "/home/atul/anaconda3/lib/python3.6/site-packages/matplotlib/backends/qt_editor/figureoptions.py", line 20, in <module> import matplotlib.backends.qt_editor.formlayout as formlayout File "/home/atul/anaconda3/lib/python3.6/site-packages/matplotlib/backends/qt_editor/formlayout.py", line 56, in <module> from matplotlib.backends.qt_compat import QtGui, QtWidgets, QtCore File "/home/atul/anaconda3/lib/python3.6/site-packages/matplotlib/backends/qt_compat.py", line 137, in <module> from PyQt4 import QtCore, QtGui ModuleNotFoundError: No module named 'PyQt4' However, if I use python 3.5, matplotlib.pyplot works perfectly. I have tried using sudo apt-get install python-qt4. Still I get the same error. I am using Ubuntu 16.04.
No module named PyQt4 in python 3.6 when I use matplotlib.pyplot
0.066568
0
0
4,436
46,281,879
2017-09-18T14:30:00.000
3
0
0
0
python,quickfix,fix-protocol
46,290,095
2
true
0
0
Is it common practice to write a program to connect via FIX and leave it running for the entire session time? Or is it acceptable to close the program, given I don't send a logout message, and reconnect at a later time in the day? I don't know what others have done, but I used QuickFIX with Python for years and never had any problem running my system all day, OR shutting it down periodically for whatever reason and reconnecting. In the end I wound up leaving the system connected for weeks at a time, since that allowed me to record data. I would say that the answer to both of your questions is YES. It is common to leave it running. Also, it is acceptable to just close the program. There can always be edge cases and idiosyncratic features of your implementation and your counterparty, so you should seek to understand more why they have asked you not to disconnect. That sounds very strange to me. Is their FIX engine not capable of something very simple and standard?
1
4
0
I wrote a program in Python using the quickfix package which connects to a vendor via FIX. We log in in the morning, but don't actually send messages through the connection until the end of the day. The issue is, we don't want to keep the program open for the entirety of the day, but would rather log in again in the afternoon when we need to send the messages. The vendor is requesting we stay logged in for the full duration between our start and stop times specified in our configurations. This is only possible by leaving my program on for the entire day, because if I close it then the messages the vendor sends aren't registered as received by me. I don't send a logout message, though. Is it common practice to write a program to connect via FIX and leave it running for the entire session time? Or is it acceptable to close the program, given I don't send a logout message, and reconnect at a later time in the day? Any design or best-practice advice would be helpful here.
Is it standard practice to keep a FIX connection connected all day long, or relogin periodically?
1.2
0
0
525
46,286,077
2017-09-18T18:37:00.000
0
0
1
0
python,tensorflow
46,287,762
2
false
0
0
Make sure your Tensorflow folder is somewhere that the environment will look, such as [Python install directory]/Lib/Site-packages
1
0
1
I am trying to install Tensorflow on a Windows 7 laptop in order to use jupyter notebook to play around with the object detection notebook in Github. I am facing this error: ImportError Traceback (most recent call last) in () 4 import sys 5 import tarfile ----> 6 import tensorflow as tf 7 import zipfile 8 ImportError: No module named tensorflow I am getting the above error when I start the Jupyter Notebook from inside the Conda environment in Windows 7. I have installed Python 3.5.4 & within conda environment, tensorflow as well. I am also getting ... not recognized as an internal/external... for $ command while giving $ python and sometimes also for pip3 I have included several file paths in Environment Variables. Can you please suggest me what to do. I am using the Conda env as I feel I have a problem in having Windows Service Pack 1.
Error installing Tensorflow in Windows 7
0
0
0
820
46,286,465
2017-09-18T19:04:00.000
0
0
1
0
python,regex,parsing,nested
46,287,892
2
false
0
0
With some editing of your input, it can be loaded by yaml, and the resulting data object is a set of nested dictionaries, as you requested. How was the input string created? The specific edits are: change '"data dict" id:' to '"data dict"\nid:', change '\n group_id' to '\ngroup_id', change all { to :, and remove all }.
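A rough sketch of those edits followed by a YAML load (depending on the exact whitespace in the real string, further indentation cleanup may be needed):

    import yaml

    def to_dict(raw):
        # Apply the edits described above, then parse as YAML.
        raw = raw.replace('"data dict" id:', '"data dict"\nid:')
        raw = raw.replace('\n group_id', '\ngroup_id')
        raw = raw.replace('{', ':').replace('}', '')
        return yaml.safe_load(raw)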
1
0
0
I have a complex string with a nested dictionary in it. This dictionary further has a list of three similar dictionaries inside it. How do I convert this into a Python dictionary? Please help. Input: 'name: "data dict" id: 2\nv6: false\nstats {\n hosts {\n cnt1: 256\n cnt2: 0\n }\n groups {\n cnt1: 1\n cnt2: 0\n }\n main_groups {\n cnt1: 1\n cnt2: 0\n }\n main_hosts {\n cnt1: 256\n cnt2: 0\n }\n}\n group_id: "None"' Expected result: { name: "data dict", id: 2, v6: false, stats: { hosts: { cnt: 1, cnt: 2 } groups: { cnt: 1, cnt: 2 } main: { cnt: 1, cnt: 2 } main_hosts: { cnt: 1, cnt: 2 } } }
Extract nested Python dictionary from complex string
0
0
0
1,888
46,287,979
2017-09-18T20:49:00.000
0
0
1
1
python,macos
46,288,007
2
false
0
0
If you use Anaconda, you are fine; if you don't, uninstall it. Systems get very confused about which Python you want to update/use, so just pick one and use it!
1
0
0
I am attempting to update pip for IDLE (Python 3.5) on mac using the terminal. It tells me that pip is up to date in anaconda: Daniels-MacBook-Pro-3:~ danielsellers$ pip install --upgrade pip Requirement already up-to-date: pip in ./anaconda/lib/python3.6/site-packages Daniels-MacBook-Pro-3:~ danielsellers$ But IDLE is recommending I update pip, which I am inclined to do because it keeps crashing while trying to install modules. How do I update the version of pip which IDLE is running? I'm somewhat new to python, thanks in advance
Pip update is failing in terminal because Anaconda is up to date - idle is not
0
0
0
112
46,289,025
2017-09-18T22:26:00.000
1
0
1
0
python,mysql,python-2.7,mysql-python,nohup
46,289,041
2
true
0
0
You could run your code in a screen or tmux session.
1
2
0
I have a long code that extracts data from a file, stores it in a dictionary, and inserts it into a mysql table. I need to loop this over a folder of nearly 1000 files, and this will take hours. I have seen a lot of conflicting advice and am not sure which is the simplest and most safe. Is there a command I can run that'll let the code keep running even if I log out of my user on the computer (which means the terminals will be quit out of)? I have not started running it yet.
Let code run after logout
1.2
0
0
848
46,289,557
2017-09-18T23:27:00.000
2
0
1
0
python
46,289,744
1
false
0
0
You should use import json again to explicitly declare the dependency. Python will optimize the way it loads the modules, so you don't have to be concerned about inefficiency. If you later don't need xyz.py anymore and drop that import, you still want import json to be there without having to re-analyze your dependencies.
1
0
0
Is it a good idea to use a standard library function through an imported module? For example, I write an xyz.py module, and within xyz.py I have this statement: import json. I have another script where I import xyz. In this script I need to make use of json functions. I can for sure import json in my script, but the json lib was already imported when I imported xyz. So can I use xyz.json, or is that bad practice?
Calling a standard library from imported module
0.379949
0
1
39
46,289,627
2017-09-18T23:35:00.000
0
1
0
1
python,apache
46,290,001
1
false
0
0
A Python CGI script gets executed when Apache gets a request: Apache redirects the request to Python. Since the 'apache' user runs this script, you get that as the ID. You would only get 'operator' if the user 'operator' were running the script. Users connect to your script using a web browser, and the request is handled by Apache; there is no way to determine which local user is making the request from the web browser, as they never log in to the machine where Apache is running. You can get their IP address and port from the CGI environment variables (for example os.environ['REMOTE_ADDR']).
1
0
0
In Python CGI, when I call name = os.popen('whoami').read(), the name returns as Apache. How can I get the original login name that was used to log in to this machine? For example, in a terminal window, when I run whoami, the login name returns as "operator". In the Apache server, is there a way to get the login name "operator"? Thanks! Tom Wang
How to get original login name while in Python CGI Apache web server?
0
0
0
183
46,289,679
2017-09-18T23:43:00.000
0
0
1
0
python,time,compare,timedelta
46,289,757
2
false
0
0
It becomes "difficult" only when the end time is smaller than the start time (i.e., the range crosses into the next day). But you could test for that: if this is the case, add 12 hours (modulo 24) to all three items, so that you can easily verify that, for example, 11am is between 9am and 3pm.
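An alternative sketch that handles the wrap-around directly, treating times as timedelta offsets from midnight:

    from datetime import timedelta

    def in_window(t, start, end):
        """True if t falls in [start, end], even when the window crosses midnight."""
        if start <= end:
            return start <= t <= end
        return t >= start or t <= end

    start = timedelta(hours=21)  # 21:00:00
    end = timedelta(hours=3)     # 03:00:00
    print(in_window(timedelta(hours=23), start, end))  # True
    print(in_window(timedelta(hours=5), start, end))   # False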
1
0
0
So if I have two timedelta values, such as 21:00:00 (PM) as the start time and 03:00:00 (AM) as the end time, and I wish to check whether 23:00:00 falls between these two, without having a date value, how could I do so? Although in this example it will see 23:00:00 as larger than 21:00:00, it will not see 23:00:00 as less than 03:00:00, which is my issue. I need to be able to compare times on a 24-hour circle using timedelta, or a conversion away from timedelta is fine. Note: the start/end time ranges can change, so either could be AM/PM; I cannot just add a day to the end time, for example.
Checking if time-delta value is in range past midnight
0
0
0
972
46,293,283
2017-09-19T06:23:00.000
0
1
1
0
python,python-2.7,file-permissions,file-ownership
46,293,476
1
false
0
0
You can use the rsync facility to copy the file to the remote location with the same permissions. A simple os.system("rsync -av SRC <DEST_IP>:~/location/") call can do this. Other methods include using subprocess.
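The save-and-restore idea from the question can also be sketched directly with os.stat/os.chown/os.chmod (the paths are placeholders, and chown needs sufficient privileges):

    import os
    import shutil

    target = "/path/to/file"
    incoming = "/tmp/incoming_file"

    # Save the current ownership and permission bits.
    st = os.stat(target)

    # Replace the file with the one received over the network.
    shutil.copyfile(incoming, target)

    # Restore owner/group and mode.
    os.chown(target, st.st_uid, st.st_gid)
    os.chmod(target, st.st_mode)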
1
0
0
I have the following problem: I need to replace a file with another one, and when the new file is transferred over the network, the owner and group bits are lost. So I have the following idea: save the current permissions and file owner bits, and then, after replacing the file, restore them. Could you please suggest how to do this in Python, or maybe propose a better way to achieve this?
Python save file permissions & owner/group and restore later
0
0
0
120
46,293,320
2017-09-19T06:26:00.000
2
0
1
0
python-3.x,ubuntu,pycharm
47,638,054
1
false
0
0
"How do I actually make PyCharm save my interpreter settings and stop asking me about it?" I was having a similar issue when I used PyCharm Community 2017.3 on Ubuntu 16.04 for the first time. The solution was to open the project folder rather than a specific script.
1
0
0
I use Pycharm for a while now and I'm getting really annoyed that my Pycharm interpreter settings always resets for some reason. Meaning that whenever I open up a new/old project it will always tell me that: No Python interpreter configured... even after I change and apply the settings in File > Settings > Project: ProjectName > Project Interpreter or File > Default Settings > Project Interpreter. (These changes only apply for as long as Pycharm is open. Once it's closed I need to repeat the whole procedure, which is my problem here.) Then I noticed that all my projects that I open for some reason end up being opened in the tmp folder. (e.g. "/tmp/Projectname.py") Which is also the reason why I cant open recent projects via the menu. So my question is, how do I actually make Pycharm save my interpreter settings and stop asking me about it. I know that there seems to be similar questions about it, but either they are not solved or the solution doesn't work. And I hope that this tmp folder thing might be of use to solve this problem.
Pycharm default interpreter and tmp working directory on Ubuntu
0.379949
0
0
583
46,294,026
2017-09-19T07:06:00.000
7
0
0
1
python,parameter-passing,airflow,apache-airflow
46,296,899
1
true
0
0
Unfortunately, it's not possible to wait for user input, say, in the Airflow UI. DAGs are authored programmatically, which means they are defined as code and should not be dynamic, since they are imported by the web server, the scheduler and the workers at the same time and have to stay identical. There are two workarounds I came up with, and we have used the first in production for a while. 1) Create a small wrapper around Variables: for each DAG, load the Variables and compose the arguments, which are then passed into Operators via default_args (a sketch follows below). 2) Add a Slack operator which can be programmatically configured to wait for user input; afterwards, propagate that information via XCom into the next Operator.
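A rough sketch of workaround 1 (the Variable key and its fields are made up): read an Airflow Variable at DAG-definition time and pass it through default_args:

    from airflow.models import Variable

    # Users edit this Variable in the Airflow UI before triggering the DAG
    run_config = Variable.get(
        'my_dag_config',                           # hypothetical key
        deserialize_json=True,
        default_var={'target_date': '2017-01-01'},
    )

    default_args = {
        'owner': 'airflow',
        'params': run_config,  # operators can read this via params
    }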
1
8
0
I have searched many links but didn't find any solution to my problem. I have seen the option to pass keys/variables into the Airflow UI, but it is really confusing for an end user to work out which key is associated with which DAG. Is there any way to implement functionality like this: while running an Airflow job, the end user is asked for values for some parameters, and after entering those details Airflow runs the job?
In airflow can end user pass parameters to keys which are associated with some specific dag
1.2
0
0
1,915
46,299,246
2017-09-19T11:27:00.000
0
0
0
0
python-3.x,machine-learning,scikit-learn,sound-recognition
46,299,601
1
false
0
0
Yes, you can use a 1D convolutional neural network. The convolutional filters can exploit consecutive parts of the signal, so it can be useful. You can also look into recurrent neural networks, which are more complex; a sketch of the CNN option follows below.
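scikit-learn itself has no convolutional layers, so a 1D CNN would need a library such as Keras; a hedged sketch, assuming fixed-length 8000-sample waveforms and 10 digit classes (all sizes are illustrative):

    from keras.models import Sequential
    from keras.layers import Conv1D, MaxPooling1D, Flatten, Dense

    model = Sequential([
        # filters slide over consecutive samples of the waveform
        Conv1D(16, kernel_size=9, activation='relu', input_shape=(8000, 1)),
        MaxPooling1D(pool_size=4),
        Conv1D(32, kernel_size=9, activation='relu'),
        MaxPooling1D(pool_size=4),
        Flatten(),
        Dense(10, activation='softmax'),  # one output per digit
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])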
1
0
1
What I'm trying to do is conceptually similar to the famous MNIST classification example, except that each digit is a computer-generated sound wave. I'll be adding some background noise to the data in order to improve real-world accuracy. My question is: considering the sequential data, what model is best suited for this? Am I right in assuming a convolutional net would work? I favour a simpler model in exchange for a few percentage points of performance, and preferably it could be written with the scikit-learn library.
Sound digit classification
0
0
0
58
46,300,696
2017-09-19T12:40:00.000
1
0
0
0
python,excel,amazon-web-services,amazon-ec2,xlsxwriter
46,301,367
1
false
0
0
I guess they are kept in cache but I would like them to be added to the server straight away. Is there a "commit()" to add to the code? No. It isn't possible to stream or write a partial xlsx file the way you can a CSV or HTML file, since the xlsx format is a collection of XML files in a Zip container and it can't be generated until the file is closed.
1
1
0
I am running my Python script in which I write excel files to put them into my EC2 instance. However, I have noticed that these excel files, although they are created, are only put into the server once the code stops. I guess they are kept in cache but I would like them to be added to the server straight away. Is there a "commit()" to add to the code? Many thanks
xlsxwriter on an EC2 instance
0.197375
1
0
116
46,301,015
2017-09-19T12:53:00.000
3
0
1
0
pycharm,python-3.6
46,301,224
1
true
0
0
CreateProcess error=740, The requested operation requires elevation: this line indicates that the program must be run by an administrator of the computer. Try running it from an admin account or with admin rights.
1
1
0
I use the PyCharm IDE for Python development and today I was trying to run a very simple program and got an error stating: "Cannot run program "C:\Users\Ahmed\AppData\Local\Programs\Python\Python36-32\python.exe" (in directory "C:\Users\Ahmed\PycharmProjects\mypython"): CreateProcess error=740, The requested operation requires elevation".
Cannot run python.exe from PyCharm: requires elevation
1.2
0
0
2,039
46,303,776
2017-09-19T14:59:00.000
1
0
0
0
python,algorithm,csv,search
46,303,902
2
false
0
0
An efficient way would be to read each line from the first file (the one with fewer lines) and save the lines in an object like a set or dictionary, which gives O(1) lookups. Then read lines from the second file and check whether each exists in the set; a sketch follows below.
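A minimal sketch of that approach (file names are placeholders); it holds the smaller file's lines in memory for O(1) membership tests:

    # build a set from the first (smaller) file
    with open('first.csv') as f:
        first_lines = set(f)

    # stream the second file and collect lines missing from the first
    with open('second.csv') as f:
        diff = [line for line in f if line not in first_lines]

    print('%d lines of second.csv not found in first.csv' % len(diff))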
1
0
1
I have two CSV's, each with about 1M lines, n number of columns, with identical columns. I want the most efficient way to compare the two files to find where any difference may lie. I would prefer to parse this data with Python rather than use any excel-related tools.
Most efficient way to compare two near identical CSV's in Python?
0.099668
0
0
147
46,306,032
2017-09-19T17:00:00.000
1
1
0
0
python,youtube,embed,discord.py
50,806,683
3
false
0
0
The only forms of media you can embed are images and GIFs. It’s not possible to embed a video.
2
1
0
I am playing around with a discord.py bot, I have pretty much everything working that I need (at this time), but I can't for the life of me figure out how to embed a YouTube video using Embed(). I don't really have any code to post per se, as none of it has worked correctly. Note: I've tried searching everywhere (here + web), I see plenty of info on embedding images which works great. I do see detail in the discord API for embedding video, as well as the API documentation for discord.py, but no clear example of how to pull it off. I am using commands (the module I am working on is a cog). Any help would be greatly appreciated. Environment: discord.py version: 0.16.11, Python: 3.5.2, Platform: OS X, VirtualEnv: Yes
discord.py embed youtube video without just pasting link
0.066568
0
1
7,927
46,306,032
2017-09-19T17:00:00.000
4
1
0
0
python,youtube,embed,discord.py
46,324,897
3
false
0
0
What you are asking for is unfortunately impossible, since Discord API does not allow you to set custom videos in embeds.
2
1
0
I am playing around with a discord.py bot, I have pretty much everything working that I need (at this time), but I can't for the life of me figure out how to embed a YouTube video using Embed(). I don't really have any code to post per se, as none of it has worked correctly. Note: I've tried searching everywhere (here + web), I see plenty of info on embedding images which works great. I do see detail in the discord API for embedding video, as well as the API documentation for discord.py, but no clear example of how to pull it off. I am using commands (the module I am working on is a cog). Any help would be greatly appreciated. Environment: discord.py version: 0.16.11, Python: 3.5.2, Platform: OS X, VirtualEnv: Yes
discord.py embed youtube video without just pasting link
0.26052
0
1
7,927
46,306,331
2017-09-19T17:18:00.000
13
0
1
0
python,sublimetext
51,280,447
8
false
0
0
Tools->Build System->Python or Ctrl+B
1
3
0
So I'm trying to run python code from Sublime Text 3, but I'm not sure how. Even if it was only from the console, that would be fine. Anybody know how???
How to run python code in Sublime Text 3?
1
0
0
51,441
46,307,447
2017-09-19T18:28:00.000
1
0
0
0
django,amazon-s3,boto3,python-django-storages
61,942,402
2
false
1
0
The docs now explain this: If AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are not set, boto3 internally looks up IAM credentials.
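A settings sketch under that assumption (bucket name is a placeholder); the point is simply to leave the key settings out:

    # settings.py
    DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
    AWS_STORAGE_BUCKET_NAME = 'my-bucket'  # placeholder
    # No AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY here:
    # boto3 falls back to the EC2 instance profile credentials.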
1
3
0
Django-Storages provides an S3 file storage backend for Django. It lists AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as required settings. If I am using an AWS Instance Profile to provide S3 access instead of a key pair, how do I configure Django-Storages?
Use Django-Storages with IAM Instance Profiles
0.099668
1
0
1,069
46,307,874
2017-09-19T18:54:00.000
0
0
1
0
python,tensorflow,pip,anaconda,virtualenv
46,308,361
2
false
0
0
Yes. After a short read into the topic, I simply uninstalled TensorFlow using sudo pip uninstall tensorflow within my virtualenv and then deactivated the virtualenv. I don't know how to fully uninstall the virtualenv itself, but I guess that is already enough and I can proceed with the installation of Anaconda? I have also installed some additional packages like matplotlib, ipython etc., but I can keep those as well without problems? Thanks
2
1
1
I am new to all the tensorflow and Python programming and have installed tensorflow through pip and virtualenv but now I read that in order to use Spyder for Python it is best to use Anaconda. I know that tf can be installed through conda as well but how do I go about it now? Do I have to completely remove the existing installations first and if yes, can someone explain in detail which and how I can do it?
Completely removing Tensorflow, pip and virtualenv
0
0
0
633
46,307,874
2017-09-19T18:54:00.000
0
0
1
0
python,tensorflow,pip,anaconda,virtualenv
46,308,280
2
false
0
0
Just install Anaconda; it will take care of everything. Uninstalling the existing ones is up to you; they won't harm anything.
2
1
1
I am new to all the tensorflow and Python programming and have installed tensorflow through pip and virtualenv but now I read that in order to use Spyder for Python it is best to use Anaconda. I know that tf can be installed through conda as well but how do I go about it now? Do I have to completely remove the existing installations first and if yes, can someone explain in detail which and how I can do it?
Completely removing Tensorflow, pip and virtualenv
0
0
0
633
46,312,129
2017-09-20T02:00:00.000
2
0
1
0
python,django,opencv,ubuntu,virtualenv
46,312,773
1
false
1
0
Evert's comment was correct. I followed his steps and got a different, but similar, error. It turns out I had to install libxrender1. Here are the steps I used: 1) activate my virtual environment; 2) uninstall opencv-python; 3) sudo apt-get install libsm6; 4) reinstall opencv-python; 5) sudo apt-get install libxrender1.
1
0
0
I'm new to Ubuntu, and also fairly new to web development, so I am hoping there is some obvious thing I am missing. My problem is as follows: I have a box running Ubuntu 16.04 and I have my Django project with a virtualenv. With the virtualenv activated, I ran pip install opencv-python, and it seemed to work (all the files seem to be where I would think they need to be (env/lib/python3.5/site-packages/{cv2,numpy}). But when I run manage.py, I get an error that traces back to __init__.py in the opencv package: ImportError: libSM.so.6: cannot open shared object file: No such file or directory. I get the same error when I run python interactively in the virtualenv and try to import cv2. Is .cv2 in the error a namespace? Is there an way I can get more information or do a python search for the namespace?
Ubuntu 16.04 Django 1.11.5 virtualenv opencv
0.379949
0
0
454
46,312,140
2017-09-20T02:01:00.000
0
0
1
0
python,python-3.x,pillow
47,043,263
1
true
0
0
I figured this out a while back but forgot to come back to the question. Running conda install ipykernel nb_conda_kernels worked like a charm.
1
0
0
I'm using python3 and I ran conda install pillow. When I run conda list I can see that it is installed. In my jupyter notebook, however, I am getting the following error: ---> 10 from PIL import Image ModuleNotFoundError: No module named 'PIL' I saw other posts on StackOverflow that said to use pip install pillow instead. I tried that but had the same results. Any help is appreciated! Thanks~
Imported Pillow But I Still Get "No Module Named PIL"
1.2
0
0
758
46,314,148
2017-09-20T05:47:00.000
0
1
0
0
python,email,apple-mail
46,314,807
1
false
0
0
This is okay now. It turns out the content type of the email I sent from Apple Mail was multipart, containing a "text/plain" part with the text of my email and a "multipart/related" part with the image I attached. So I just needed to check whether the email is multipart and, if so, loop over it to print all the payloads; a sketch follows below.
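A rough sketch of that loop, assuming msg is an email.message.Message parsed from the raw mail:

    # msg comes from e.g. email.message_from_bytes(raw_bytes)
    if msg.is_multipart():
        for part in msg.walk():
            if part.get_content_type() == 'text/plain':
                print(part.get_payload(decode=True))
            elif part.get_filename():  # this part is an attachment
                with open(part.get_filename(), 'wb') as f:
                    f.write(part.get_payload(decode=True))
    else:
        print(msg.get_payload(decode=True))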
1
0
0
I am developing a program in Python that can read emails. My program can now read emails from Gmail, even with attachments, but it can't read the email that was sent from Apple Mail. Unlike the email I sent from Gmail, when I use Message.get_payload() on the Apple Mail message it does not have a value. I'm a newbie at Python and this is my first project, so please bear with me. Thank you in advance. Note: the email I sent from Apple has an attachment on it. Update: I can now read the text inside the email; my only problem now is how to get the attachment, since when I loop over all the payloads (it is multipart) it only prints the text inside plus this: "[<email.message.Message object at 0x000001ED832BF6D8>, <email.message.Message object at 0x000001ED832BFDD8>]"
Python: Apple Email Content
0
0
0
471
46,314,983
2017-09-20T06:42:00.000
0
0
0
1
python,linux,cassandra
47,167,910
1
false
0
0
Cassandra uses the python driver bundled in-tree in a zip file. If your Python runtime was not built with zlib support, it cannot use the zip archive in the PYTHONPATH. Either install the driver directly (pip install) as suggested, or put a correctly configured Python runtime in your path.
1
1
0
I am getting the below error while running the cqlsh in cassandra 2.2.10 ?? Can somebody help me to pass this hurdle: [root@rac1 site-packages]# $CASSANDRA_PATH/bin/cqlsh Python Cassandra driver not installed, or not on PYTHONPATH. You might try “pip install cassandra-driver”. Python: /usr/local/bin/python Module load path: [‘/opt/cassandra/apache-cassandra-2.2.10/bin/../lib/six-1.7.3-py2.py3-none-any.zip’, ‘/opt/cassandra/apache-cassandra-2.2.10/bin/../lib/futures-2.1.6-py2.py3-none-any.zip’, ‘/opt/cassandra/apache-cassandra-2.2.10/bin/../lib/cassandra-driver-internal-only-3.5.0.post0-d8d0456.zip/cassandra-driver-3.5.0.post0-d8d0456’, ‘/opt/cassandra/apache-cassandra-2.2.10/bin’, ‘/usr/local/lib/python2.7/site-packages’, ‘/usr/local/lib/python27.zip’, ‘/usr/local/lib/python2.7’, ‘/usr/local/lib/python2.7/plat-linux2’, ‘/usr/local/lib/python2.7/lib-tk’, ‘/usr/local/lib/python2.7/lib-old’, ‘/usr/local/lib/python2.7/lib-dynload’] Error: can’t decompress data; zlib not available
Python Cassandra driver not installed, or not on PYTHONPATH
0
1
0
2,223
46,317,830
2017-09-20T09:09:00.000
2
0
1
0
c#,python,pyd
46,322,006
2
false
0
1
A .pyd file IS a DLL, but with a function init*(), where * is the name of your .pyd file. For example, spam.pyd has a function named initspam(). They are not just similar, but the same! They are usable on PCs without Python. Do not forget: builtins will NOT be available; search for them in C, add them as extern declarations in your Cython code, and compile!
1
3
0
I am developing a program using C#, but I just figured out that what I am going to program is very very difficult in C# yet easy in Python. So what I want to do is make a .PYD file and use it in C# program, but I don't know how. I've been searching about it but I couldn't find anything. So these are my questions: How do I use .PYD file in C#? I know that .PYD files are similar to .DLL files, so are .PYD files still usable in computers that have no Python installed?
Using .PYD file in C#?
0.197375
0
0
2,953
46,319,434
2017-09-20T10:19:00.000
0
0
1
1
python-2.7,jsondb
46,363,482
1
true
0
0
So my solution to this was quite simple (just protect data integrity): before a write, back up the file; on a successful write, delete the backup (this avoids doubling the size of the DB); wherever a corrupted file is encountered, revert to the backup. The idea is that if the system closes the script during the file backup, it doesn't matter, since we still have the original; and if the system closes the script during the write to the original file, the backup never gets deleted and we can just use that instead. All in all it was just an extra 6 lines of code, and it appears to have solved the issue; a sketch follows below.
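A minimal sketch of that pattern (paths and data format are placeholders):

    import os
    import shutil

    DB = 'db.json'        # placeholder
    BACKUP = DB + '.bak'

    def safe_write(text):
        if os.path.exists(DB):
            shutil.copy(DB, BACKUP)   # 1) backup before write
        with open(DB, 'w') as f:
            f.write(text)
        if os.path.exists(BACKUP):
            os.remove(BACKUP)         # 2) drop backup on success

    def load():
        # 3) a leftover backup means the last write never finished,
        #    so fall back to it instead of the (possibly corrupt) DB
        path = BACKUP if os.path.exists(BACKUP) else DB
        with open(path) as f:
            return f.read()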
1
0
0
So I have a rather large json database that I'm maintaining with python. It's basically scraping data from a website on an hourly basis and I'm running daily restarts on the system (Linux Mint) via crontab. My issue is that if the system happens to restart during the database updating process I get corrupted json files. My question is if there is anyway to delay the system restart in my script to ensure the system shuts down at a safe time? I could issue the restart command inside the script itself but if I decide to run multiple scripts that are similar to this in the future I'll obviously have a problem. Any help here would be greatly appreciated. Thanks Edit: Just to clarify I'm not using the python jsondb package. I am doing all file handling myself
Delaying system shutdown during json DB update in python
1.2
0
0
33
46,321,590
2017-09-20T12:04:00.000
1
0
0
0
python,locust
46,342,647
2
false
0
0
1) I don't think there's built-in support for it, but you could set an id yourself in the on_start method (a sketch follows below). 2) Each Locust will trigger a new task after the previous task has finished (taking the wait period into account); if your response time has little variation, you can assume the requests are equally distributed. 3) No, the Locusts pick tasks one after the other but do not wait their turn: when a Locust is available, it picks a task and executes it. I don't think there is support for your requirement.
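For point 1, a hedged sketch of assigning each simulated client its own id in on_start (written against the Locust 0.x TaskSet API; names are illustrative):

    import itertools
    from locust import HttpLocust, TaskSet, task

    _ids = itertools.count(1)

    class UserBehavior(TaskSet):
        def on_start(self):
            # every hatched client gets a unique number
            self.client_id = next(_ids)

        @task
        def index(self):
            self.client.get('/', name='client-%d' % self.client_id)

    class WebsiteUser(HttpLocust):
        task_set = UserBehavior
        min_wait = 1000
        max_wait = 2000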
1
2
0
Is there any way to get information about the currently executing client inside the TaskSet class? We can get the thread number in JMeter; similarly, can we get the client number in Locust? Suppose I'm executing 20 requests with 5 clients. Can I say that each client is executing 4 requests (20/5 = 4 requests each)? What is the internal mechanism used here to execute those 20 requests with 5 clients? This question is related to the data given in question 2: does that execution happen iteration-wise? Like, in the 1st iteration, clients 1, 2, 3, 4 and 5 execute requests 1, 2, 3, 4 and 5 respectively; in the next iteration, clients 1, 2, 3, 4 and 5 execute requests 6, 7, 8, 9 and 10 respectively. How could we achieve this type of execution mechanism in Locust? Is this possible? Please help me clarify the above questions.
How do we control the Clients in the Task set execution in Locust?
0.099668
0
1
1,436
46,322,184
2017-09-20T12:30:00.000
0
0
1
0
python-3.x
46,322,253
1
false
0
0
Create a repository on GitHub, make a new pull/clone from GitHub using SourceTree, GitKraken or similar, then put the .git folder into your project folder, update SourceTree/GitKraken etc. with the new project, then commit the changes and push to GitHub. :) It always works for me if I'm stuck...
1
0
0
I am new to programming and just got started a few months ago. I am using Spyder in Anaconda for Python programming. I want to upload my work to GitHub, but it seems less presentable than the other files uploaded there, which are written using Jupyter notebooks. Can anyone suggest some way to get over this? I have been working in Spyder for one year, so I don't think moving away from it is a good idea.
uploading work in jupyter to github
0
0
0
56
46,322,899
2017-09-20T13:05:00.000
3
1
0
0
python,amazon-web-services,amazon-ec2,aws-lambda,serverless-framework
46,323,508
3
false
1
0
You can use CloudWatch for this. Create a CloudWatch rule with Service Name: EC2, Event Type: EC2 Instance State-change Notification, Specific state(s): shutting-down; then use an SNS target to deliver the email.
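If you would rather set this up from Python than the console, a hedged boto3 sketch (the rule name and SNS topic ARN are placeholders):

    import json
    import boto3

    events = boto3.client('events')
    events.put_rule(
        Name='ec2-shutting-down',
        EventPattern=json.dumps({
            'source': ['aws.ec2'],
            'detail-type': ['EC2 Instance State-change Notification'],
            'detail': {'state': ['shutting-down']},
        }),
    )
    events.put_targets(
        Rule='ec2-shutting-down',
        Targets=[{
            'Id': 'email-topic',
            'Arn': 'arn:aws:sns:us-east-1:123456789012:ec2-alerts',  # placeholder
        }],
    )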
1
1
0
I am new to the Serverless framework and AWS, and I need to create a Lambda function in Python that will send an email whenever an EC2 instance is shut down, but I really don't know how to do it using Serverless. So please, could anyone help me do that, or at least give me some tracks to start with?
send email whenever ec2 is shut down using serverless
0.197375
0
1
203
46,326,262
2017-09-20T15:34:00.000
0
0
0
0
python,django
46,329,111
1
true
1
0
Apparently, according to @DanielRoseman, recreating the virtualenv should fix the problem.
1
0
0
I have a Django project and I messed up with the src/lib/python folder (I know I shouldn't have but that's the case) and now I have missing files and folders. How can I re-install the python folder in order to retrieve the missing files?
Re-install src/lib/python folder in a Django project
1.2
0
0
26
46,326,595
2017-09-20T15:51:00.000
0
0
1
0
python,pyqt4
46,413,682
1
false
0
1
For Windows, if you have installed the PyQt4 package already, you should have the pyuic tool available. You can call it using: python C:\Python\Python2.7\Lib\site-packages\PyQt4\uic\pyuic.py UIFileName.ui -o PythonFileName.py -x
1
0
0
I am trying to convert a .ui file generated in Qt4 into an executable Python file. I've tried installing the PyQt4 dev tools with the following command: pip install pyqt4-dev-tools, but got the following error: Could not find a version that satisfies the requirement pyqt4-dev-tools (from versions: ) No matching distribution found for pyqt4-dev-tools. I will be pleased if anyone provides a solution. Thank you.
Python27 - PyQt4
0
0
0
436
46,328,579
2017-09-20T17:42:00.000
0
0
1
0
python,h2o,categorical-data
46,359,347
1
true
0
0
The best way to see inside a model is to export the POJO and look at the Java source code; you should see how it is processing enums. But if I understand the rest of your question correctly, it should be fine: as long as the training data contains all possible values of a category, it will work as you expect. If a categorical value not seen in training is presented in production, it will be treated as an NA.
1
1
1
Is there a way to see how the categorical features are encoded when we allow h2o to automatically create categorical data by casting a column to enum type? I am implementing holdout stacking where my underlying training data differs for each model. I have a common feature that I want to make sure is encoded the same way across both sets. The feature contains names (str). It is guaranteed that all names that appear in one data set will be appear in the other.
Encoded categorical features in h2o in python
1.2
0
0
246
46,329,561
2017-09-20T18:42:00.000
1
0
1
0
python,pandas,amazon-web-services,aws-lambda,aws-glue
51,174,183
13
false
0
0
As of now, you can use Python extension modules and libraries with your AWS Glue ETL scripts as long as they are written in pure Python. C libraries such as pandas are not supported at the present time, nor are extensions written in other languages.
3
15
1
What is the easiest way to use packages such as NumPy and Pandas within the new ETL tool on AWS called Glue? I have a completed script within Python I would like to run in AWS Glue that utilizes NumPy and Pandas.
Use AWS Glue Python with NumPy and Pandas Python Packages
0.015383
0
0
29,917
46,329,561
2017-09-20T18:42:00.000
2
0
1
0
python,pandas,amazon-web-services,aws-lambda,aws-glue
46,416,040
13
false
0
0
When you click Run job, there is a button, Job parameters (optional), that is collapsed by default. When you click on it, you get the following options, which you can use to reference libraries saved in S3; this works for me: Python library path: s3://bucket-name/folder-name/file-name; Dependent jars path: s3://bucket-name/folder-name/file-name; Referenced files path: s3://bucket-name/folder-name/file-name
3
15
1
What is the easiest way to use packages such as NumPy and Pandas within the new ETL tool on AWS called Glue? I have a completed script within Python I would like to run in AWS Glue that utilizes NumPy and Pandas.
Use AWS Glue Python with NumPy and Pandas Python Packages
0.03076
0
0
29,917
46,329,561
2017-09-20T18:42:00.000
1
0
1
0
python,pandas,amazon-web-services,aws-lambda,aws-glue
46,414,546
13
false
0
0
If you go to edit a job (or when you create a new one) there is an optional section that is collapsed called "Script libraries and job parameters (optional)". In there, you can specify an S3 bucket for Python libraries (as well as other things). I haven't tried it out myself for that part yet, but I think that's what you are looking for.
3
15
1
What is the easiest way to use packages such as NumPy and Pandas within the new ETL tool on AWS called Glue? I have a completed script within Python I would like to run in AWS Glue that utilizes NumPy and Pandas.
Use AWS Glue Python with NumPy and Pandas Python Packages
0.015383
0
0
29,917
46,331,570
2017-09-20T20:52:00.000
2
0
0
0
python,jwt
46,333,523
2
true
0
0
You could encrypt the token before handing it off to the client, either using their own public key, or delivering them the key out of band. That secures the delivery, but still does not cover everything. In short, there's no easy solution. You can perform due diligence and require use of security features, but once the client has decrypted the token, there is still no way to ensure they won't accidentally or otherwise expose it anyway. Good security requires both participants practice good habits. The nice thing about tokens is you can just give them a preset lifespan, or easily revoke them and generate new ones if you suspect they have been compromised.
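A minimal PyJWT sketch of the preset-lifespan idea (the secret is a placeholder and must stay server-side):

    import datetime
    import jwt

    SECRET = 'server-side-secret'  # placeholder, keep out of source control

    token = jwt.encode(
        {'sub': 'user123',
         'exp': datetime.datetime.utcnow() + datetime.timedelta(minutes=30)},
        SECRET,
        algorithm='HS256',
    )

    # raises jwt.ExpiredSignatureError once the token has outlived 'exp'
    claims = jwt.decode(token, SECRET, algorithms=['HS256'])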
2
2
0
I have a python application that needs to give users a JSON web token for authentication. The token is built using the PyJWT library (import jwt). From what I have been reading it seems like an acceptable practice to give the token to a client after they have provided some credentials, such as logging in. The client then uses that token in the HTTP request header in the Authorization Bearer field which must happen over TLS to ensure the token is not exposed. The part I do not understand is what if the client exposes that token accidentally? Won't that enable anybody with that token to impersonate them? What is the most secure way to hand off the token to a client?
How to give clients JSON web token (JWT) in secure fashion?
1.2
0
1
550
46,331,570
2017-09-20T20:52:00.000
1
0
0
0
python,jwt
46,337,636
2
false
0
0
The token is built from user-provided information and whatever your back end decides to put into it. For higher security you can simply widen the token with user-specific data such as the current IP address or device MAC address. This gives you more secure authentication but restricts the user to always using the same device; additionally, you can send a confirmation email whenever a new login happens.
2
2
0
I have a python application that needs to give users a JSON web token for authentication. The token is built using the PyJWT library (import jwt). From what I have been reading it seems like an acceptable practice to give the token to a client after they have provided some credentials, such as logging in. The client then uses that token in the HTTP request header in the Authorization Bearer field which must happen over TLS to ensure the token is not exposed. The part I do not understand is what if the client exposes that token accidentally? Won't that enable anybody with that token to impersonate them? What is the most secure way to hand off the token to a client?
How to give clients JSON web token (JWT) in secure fashion?
0.099668
0
1
550
46,335,180
2017-09-21T03:51:00.000
0
0
1
0
c#,python,azure,azure-web-app-service,ironpython
46,335,205
2
false
0
0
I wouldn't write stuff to C:\Windows\system32 as those are operating system files. I would feel more comfortable writing to the local temp directory for whatever user you are running the program as: %USERPROFILE%\AppData\Local\Temp
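From Python, the portable way to find and use that per-user temp location is the tempfile module; a small sketch:

    import tempfile

    # resolves to something like C:\Users\<user>\AppData\Local\Temp on Windows
    print(tempfile.gettempdir())

    # creates the scratch file there and removes it on close
    with tempfile.NamedTemporaryFile(suffix='.tmp') as f:
        f.write(b'scratch data')
        print(f.name)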
1
2
0
I have a specific function that utilizes IronPython, and inside the Python code it accesses the current directory and creates a temporary file. When it later tries to access that file based on the relative directory path, it can't get it, and I receive an error back from IronPython that states "Access to the path 'D:\Windows\system32\file is denied" ('file' being the unique temp file created). This all works when I run VS locally as an administrator. If I'm running it locally not as an administrator, I receive the same error. When I publish the application to an app service on Azure, it gives me the access-denied error. Thank you very much ahead of time and let me know if you have any additional questions.
"Access to the path 'D:\\Windows\\system32\\file is denied" Azure Web App
0
0
0
2,907
46,335,246
2017-09-21T03:59:00.000
3
0
0
0
android,python-3.x,pygame,kivy
46,348,642
1
true
0
1
It is also more flexible (based on my limited exp. on this), because the pygame activity can be controlled using while and can get info of the events with one line of code. It isn't more flexible, just a different API. Kivy's drawing API is much more modern and closer to how drawing with opengl actually works. Is it possible to package a kivy app, that uses pygame module, for Android? Kivy used to use a modified pygame backend on Android, which is still available using --bootstrap=pygame when using python-for-android. I think at least some pygame commands worked when this was used, including drawing commands. However, use of the pygame api was never really supported, and the pygame bootstrap is nowadays deprecated in favour of SDL2 - we won't deliberately break it, but it has issues that will probably never be fixed.
1
4
0
To put it in perspective: the user interface features in Kivy are easier to handle compared to pygame. But in pygame it is convenient to manipulate graphics with blit: do a blit, then clear all graphics on the surface after finishing an event, then blit again, etc. It is also more flexible (based on my limited experience with this), because the pygame activity can be controlled using while loops and you can get info about the events with one line of code. Is it possible to package a Kivy app that uses the pygame module for Android? Thanks in advance
Use Pygame in Kivy to be packaged to an Android app, is this Ok?
1.2
0
0
4,711
46,339,134
2017-09-21T08:34:00.000
0
0
0
1
python,opencv,anaconda,conda
46,645,511
2
false
0
0
You are providing the Python package and library paths to an environment-specific location; to make OpenCV available to every environment, try using the top-level anaconda bin and lib paths instead. (I can't make this a comment due to low reputation.)
1
3
1
I have an Ubuntu 16.04 system with an Anaconda installation. I want to compile and install OpenCV 3.3 and also use the Python bindings. I used the following CMake command: cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D WITH_CUDA=ON -D WITH_FFMPEG=1 -D WITH_CUBLAS=ON -D WITH_TBB=ON -D WITH_V4L=ON -D WITH_QT=ON -D WITH_OPENGL=ON -D INSTALL_PYTHON_EXAMPLES=ON -D INSTALL_C_EXAMPLES=OFF -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib-3.3.0/modules -D BUILD_EXAMPLES=ON -D BUILD_TIFF=ON -D PYTHON_EXECUTABLE=/home/guel/anaconda2/envs/py27/bin/python -D PYTHON2_LIBRARIES=/home/guel/anaconda2/envs/py27/lib/libpython2.7.so -D PYTHON2_PACKAGES_PATH=/home/guel/anaconda2/envs/py27/lib/python2.7/site-packages -DWITH_EIGEN=OFF -D BUILD_opencv_cudalegacy=OFF .. The command does the job, but then, of course, OpenCV is installed only for the specific conda environment that I created. However, I want to be able to use it also from different environments without having to go through the compilation for each and every environment. Is there a way to achieve that in a simple way? Since the OpenCV libraries are actually installed in /usr/local, I can imagine that there must be a simple way to link the libraries to each new conda environment, but I couldn't figure out exactly how.
Installing OpenCV for all conda environments
0
0
0
2,609
46,344,722
2017-09-21T13:04:00.000
2
0
0
0
python,flask
46,345,733
1
false
1
0
I think today's default solution for providing realtime capabilities to web applications is websockets (please google Flask + websockets). If concurrency is an issue (i.e. many long-lasting, simultaneous connections), the "chain" of software handling the websockets should be non-blocking (otherwise many threads will eat up your resources while doing nothing). You can continue to use Flask, as it supports gevent (a non-blocking monkey-patch for the Python stdlib), but you are probably better off with nginx than Apache as the reverse proxy, since it has always been non-blocking. For example, nginx + gevent + Flask would give you a non-blocking setup. Personally, I like Tornado a lot, but I would be reluctant to introduce another framework into your application if you have already come a long way with Flask.
1
0
0
Currently we have developed an API using Flask that works in a traditional request-response style (client is a mobile application). This API is hosted using Apache on the remote server. What we would like to do is integrate the real-time connections between server and client (just like a chat app). This is where I got stuck. I need to make choice for the framework here. Using Tornado with Flask will not be a good choice since Tornado is non-blocking but Flask is blocking. What would be the best choice in this case?
What would be the best choice for integrating a real-time connection in flask API?
0.379949
0
0
68
46,345,699
2017-09-21T13:47:00.000
0
1
0
0
python-3.x,serial-port,pyqt5,raspbian
46,519,334
1
false
0
1
I decided to reflash the image, and my problem was solved.
1
1
0
I designed a Python application that needs to start right after booting into the system. I used sudo nano /home/pi/.config/lxsession/LXDE-pi/autostart to add in @python3 /home/pi/X/exe.py. Before I included serial communication in the app, everything worked fine, but after I added serial, the autostart failed. So, how do I autostart on boot a PyQt5-based, serial-capable app on Raspbian Jessie? I suspect this weird behavior is caused by the serial communication I added, which would be used before the Pi logon.
Raspbian autostarting a PyQT5 UI that has Serial communication
0
0
0
61
46,346,745
2017-09-21T14:34:00.000
0
1
0
1
python,linux,unit-testing
46,359,839
1
true
0
0
OK, I found the reason. On my Fedora 24 machine an old version of xmlrunner (1.14.0-something) was installed. I used pip to install the latest xmlrunner (1.7.7) for python3, and now I do get the output directly in the terminal.
1
0
0
I'm using xmlrunner in combination with unittest in python for testing purposes by running xmlrunner.XMLTestRunner(outsuffix="",).run(suite) where suite is standard unittest.suite.TestSuite When I run the tests on my windows machine I get an output by using the standard print() function in my tests. Unfortunately I don't get any output to my terminal when running the tests on my fedora machine. The output is correctly logged to an XML file but I would like to have the output directly to stdout / terminal. Did I miss something that explains this behaviour?
No print to stdout when running xmlrunner in python
1.2
0
0
249
46,347,262
2017-09-21T14:59:00.000
0
0
0
0
python,kivy
63,662,212
3
false
0
1
There's a dirty method, though: try requesting google.com or any other reliable website in the background with urllib, requests or socket. If you don't get any reply, it most likely means the system is not connected to the internet; a sketch follows below.
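A rough sketch of that check (the host is just a well-known, normally reachable address):

    import socket

    def is_online(host='8.8.8.8', port=53, timeout=3):
        # try a TCP connection; failure suggests there is no internet
        try:
            socket.create_connection((host, port), timeout=timeout).close()
            return True
        except OSError:
            return False

    if is_online():
        print('connected - safe to load ads')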
1
0
0
I've created an Android app with Python and Kivy that works offline; the app shows landscape photos. How can I make my app open only when Wi-Fi is enabled, so it can load ads? Have patience with me, I'm new. Thank you.
How can i open my app only when wifi is enabled for upload Ads?
0
0
0
127
46,347,289
2017-09-21T15:00:00.000
0
0
0
0
python-2.7,jupyter-notebook,rows,nearest-neighbor,graphlab
46,347,870
3
false
0
0
OK, well, it seems I have to define the number of neighbours with: tfidf_model.query(Test_AD, k=100).show() so that I can get a list of the first 100 in the canvas.
1
0
1
I am using a Jupyter notebook and GraphLab/Turi for a tfidf nearest-neighbors model, which works fine so far. However, when I query the model like tfidf_model.query(Test_AD), I always just get the head - [5 rows x 4 columns]. I am supposed to use "print_rows(num_rows=m, num_columns=n)" to print more rows and columns, like: tfidf_model.query(Test_AD).print_rows(num_rows=50, num_columns=4). However, when I used it, I didn't get any rows anymore, only the summary field: Starting pairwise querying. +--------------+---------+-------------+--------------+ | Query points | # Pairs | % Complete. | Elapsed Time | +--------------+---------+-------------+--------------+ | 0 | 1 | 0.00519481 | 13.033ms | | Done | | 100 | 106.281ms | +--------------+---------+-------------+--------------+ That's it. No error message, nothing. Any ideas how to get all/more rows? I tried converting to pandas, the .show() command, etc.; it didn't help.
print_rows(num_rows=m, num_columns=n) in graphlab / turi not working
0
0
0
1,900
46,349,624
2017-09-21T17:13:00.000
14
0
1
0
python,pip,virtualenv
46,349,941
2
true
0
0
Pip does use the default cache, whether you're working inside a virtualenv or not. This means that after removing your virtualenv, any pip cache related to it is not removed. Mind that the installed packages themselves are deleted; just not the download cache. Why would that be a problem? I think this is expected behaviour, since you gain an advantage when installing the same package in another virtualenv later on.
1
14
0
Where is pip's cache when using a virtual environment? Is it the default cache? If so, won't those downloaded packages/wheels remain if you delete the virtual environment?
Where is pip's cache in a virtualenv?
1.2
0
0
11,199
46,350,778
2017-09-21T18:21:00.000
2
1
1
0
python-2.7,testing,automation,automated-tests,robotframework
72,298,130
1
false
1
0
You can mark the variable "j" with the @ symbol (i.e. write it as @{j}).
1
0
0
I have a test case where I am iterating using a FOR loop, where the variable in the loop is "j". I am then using this "j" in a user-defined keyword, but the test case fails and the error is "Variable j not found". This exact same test case works on another machine without any error, and I'm not sure why. In my machine where it fails, there is no problem with libraries or the setup, and this variable is not being saved anywhere. Could someone please suggest why this could happen?
Variable not found in Robot Framework-RIDE
0.379949
0
0
1,010
46,351,068
2017-09-21T18:40:00.000
1
1
0
0
internationalization,python-sphinx,restructuredtext
46,535,864
1
false
0
0
We tested and verified that the CSV content is automatically extracted into the PO files, and that building a localized version places the translated strings from the MO files back into the table.
1
0
0
My team uses .rst/sphinx for tech doc. We've decided to do tables in csv files, using the .. csv-table:: directive. We are beginning to using sphinx-intl module for translation. Everything seems to work fine, except that I don't see any our tables int he extracted .po files. Has anyone had this experience? What are best practices for doing csv tables and using sphinx-intl?
How do I use sphinx-intl if I am using the .. csv-table:: directives for my tables?
0.197375
1
0
89
46,351,581
2017-09-21T19:13:00.000
0
0
1
0
python,python-3.x,python-import,pyinstaller,python-module
46,371,122
1
true
0
0
The problem was that the module was named "code": code is already the name of a module in the Python standard library, and PyInstaller got confused by the collision.
1
0
0
Problem: I have a Python3 project using Anaconda and PyCharm which runs fine from within PyCharm. When building a deployable version using pyinstaller, the building process seems to work, but the generated .exe file crashes with the following error: Traceback (most recent call last): File "code\main.py", line 10, in <module> ImportError: No module named 'code.libs'; 'code' is not a package Details: main.py:10 states from code.libs.hugelib.important import ImportantClass The directory structure looks like (all __init__.py files are empty): code/ __init__.py libs/ __init__.py hugelib/ __init__.py important.py whatever.py stuff.py main.py data/ I create the executable using pyinstaller main.spec. main.spec has been created using --paths=libs --paths=code --paths=code/libs --hidden-import=code --hidden-import=code.libs Question: Why is 'code' not seen as a package, even though the __init__.py files are there, and why does PyCharm execute everything just fine while pyinstaller's bundled version does not?
ImportError for pyinstaller-packed PyCharm project
1.2
0
0
202
46,353,305
2017-09-21T21:17:00.000
0
0
1
1
python,pip,pyperclip
46,353,348
3
false
0
0
I think you should restart your computer. If that doesn't work, go to Control Panel -> System -> Advanced Settings -> Environment Variables. In the system variables you should go to Path and add the folder containing the pip.exe to your path.
2
0
0
I am having trouble installing any PIP module. Steps/Precautions I have taken: I uninstalled Python and downloaded the most recent Python 3.6.2. PIP seems to be installed already C:\Users\Danc2>C:\Users\Danc2\AppData\Local\Programs\Python\Python36-32\scripts\pip3.6 (also included are files: pip, pip3). pip install pyperclip returns 'pip' is not recognized as an internal or external command, operable program or batch file. In using many different forums and typing commands into CMD I come up with results like: "'pip' is not recognized as an internal or external command, operable program or batch file." When trying to refer to my folder location: "C:\Users\Danc2>C:\Users\Danc2>C:\Users\Danc2\AppData\Local\Programs\Python\Python36-32\scripts Access is denied." Sorry for the common question, but I just cannot figure it out for my individual problem. I appreciate any kind effort to help. Daniel.
Installing PIP Modules
0
0
0
147
46,353,305
2017-09-21T21:17:00.000
0
0
1
1
python,pip,pyperclip
46,353,369
3
false
0
0
If your Python installation works at all with the command line, then replacing pip with python -m pip in the command line is likely to fix the issue for you.
2
0
0
I am having trouble installing any PIP module. Steps/Precautions I have taken: I uninstalled Python and downloaded the most recent Python 3.6.2. PIP seems to be installed already C:\Users\Danc2>C:\Users\Danc2\AppData\Local\Programs\Python\Python36-32\scripts\pip3.6 (also included are files: pip, pip3). pip install pyperclip returns 'pip' is not recognized as an internal or external command, operable program or batch file. In using many different forums and typing commands into CMD I come up with results like: "'pip' is not recognized as an internal or external command, operable program or batch file." When trying to refer to my folder location: "C:\Users\Danc2>C:\Users\Danc2>C:\Users\Danc2\AppData\Local\Programs\Python\Python36-32\scripts Access is denied." Sorry for the common question, but I just cannot figure it out for my individual problem. I appreciate any kind effort to help. Daniel.
Installing PIP Modules
0
0
0
147
46,354,897
2017-09-21T23:59:00.000
0
0
1
1
python
48,916,162
1
false
0
0
In my case, removing the .vscode folder in my user folder and reinstalling the debugger extension helped.
1
1
0
I'm using Visual Studio Code (Version 1.16.1) with Python Extension (Don Jayamanne version 0.7.0). As I finish debugging a script, I consistently get an error - "Debug adapter process has terminated unexpectedly". This happens regardless of execution process (Integrated or External terminal/console). I'm an instructor for a Python class and all of my students are having to clear this error every time they debug. I and my students would appreciate any help with this. Thanks, John
Debug adapter process has terminated unexpectedly
0
0
0
613
46,356,186
2017-09-22T03:04:00.000
0
0
1
0
python,pandas,datetime,spyder
46,450,459
1
true
0
0
The problem was related to having several Python installations on my PC. After removing all of them and installing a single instance, it worked well. Thanks for the tip, Carlos Cordoba!
1
0
1
I have written a little CSV parser based on pandas. It works like a charm in Spyder 3. Yesterday I tried to put it into production and run it with a .bat file, like: python my_parser.py. In the console it doesn't work at all. pandas behaves differently: the read_csv method lost the "quotechar" keyword argument, for example. Especially date comparisons break all the time. I read the dates with pandas as per pd.read_csv(parse_dates=[col3, col5, col8]). Then I try a date calculation by subtracting pd.to_datetime('now'). I tested everything, and as said, in Spyder no failure is thrown; it works and produces results as it should. As soon as I start it in the console, it throws type errors. Most often one of the two dates is a mere string and the other stays a datetime, so the minus operation fails. I could now rewrite the code and find a procedure that works in both Spyder and the console. However, I prefer to ask you guys here: what could be a possible reason that Spyder and the console Python behave completely differently from each other? It's really annoying to debug code that does not throw any failures, so I really would like to understand the cause.
Python: Date comparing works in Spyder but not in Console
1.2
0
0
164
46,356,893
2017-09-22T04:29:00.000
0
0
0
0
python,scala,apache-spark,pyspark,user-defined-functions
46,358,100
2
false
0
0
In your driver application, you don't necessarily have to collect a ton of records. Maybe you're just doing a reduce down to some statistics. This is just typical behavior: Drivers usually deal with statistical results. Your mileage may vary. On the other hand, Spark applications typically use the executors to read in as much data as their memory allows and process it. So memory management is almost always a concern. I think this is the distinction the book is getting at.
1
4
1
In the book "Spark: The definitive guide" (currently early release, text might change), the authors advise against the use of Pyspark for user-defined functions in Spark: "Starting up this Python process is expensive but the real cost is in serializing the data to Python. This is costly for two reasons, it is an expensive computation but also once the data enters Python, Spark cannot manage the memory of the worker. This means that you could potentially cause a worker to fail if it becomes resource constrained (because both the JVM and python are competing for memory on the same machine)." I understand that the competition for worker node resources between Python and the JVM can be a serious problem. But doesn't that also apply to the driver? In this case, it would be an argument against using Pyspark at all. Could anyone please explain what makes the situation different on the driver?
Spark: Dangers of using Python
0
0
0
134
46,358,525
2017-09-22T06:52:00.000
0
0
1
0
python,security
46,359,011
2
true
0
0
Here comes a big one: make an account server and open an internet port on it. Make a lib which sends the product key, login and password (all three user input) over TLS (use the Python ssl lib). If the server accepts them (the account exists and the password is correct), it registers that key to that account. The key lives in the database, and when it is redeemed it is exchanged for a game-copy license on your server. The game then checks whether the logged-in account owns that game, and if it does, the game starts. For anti-tamper, use Cython; Denuvo can also be used. A rough client-side sketch follows below.
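A very rough client-side sketch of the TLS hand-off (host, port and payload format are all made up for illustration):

    import json
    import socket
    import ssl

    HOST = 'accounts.example.com'  # placeholder
    context = ssl.create_default_context()

    with socket.create_connection((HOST, 8443)) as raw:
        with context.wrap_socket(raw, server_hostname=HOST) as tls:
            payload = {'key': 'XXXX-YYYY', 'login': 'user', 'password': 'pw'}
            tls.sendall(json.dumps(payload).encode())
            print(tls.recv(1024))  # server's accept/reject reply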
1
0
0
I have had trouble trying to protect my software from being copied by another person. I want to be able to give the software to someone and then send them a one-time key to unlock it, making it workable while preventing it from being copied and distributed. Any help appreciated. I want to make a login server for customers to log in to.
How to add security to a python script so that the script can be opened using a one time key
1.2
0
0
102
46,360,132
2017-09-22T08:24:00.000
0
0
1
0
python,x86,64-bit,x86-64,pyscripter
49,209,770
3
false
0
0
PyScripter supports all python versions from 2.5 up to 3.7 both 32-bit and 64-bit. Just make sure you use the 32-bit version of PyScripter with 32-bit versions of Python and the 64-bit version of PyScripter with 64-bit versions of Python.
3
0
0
so basically I'm trying to install pyscripter for my computer, and am aware that Python 2.4 or higher is needed to run the program. My computer specs, first of all, are: Windows 10 (64bit) Intel CPU 4GB ram (or at least the important ones) Now when I go to python.org, there are about a thousand different downloads available like 'Python 3.7.0a1' or '3.6.3rc1' or '2.7.14', most of them being x86, and some of them having x64 next to them which I am assuming is 64 bit, and some of these files are a .zip file, executable file, MSI installer etc. What I want to know is: Which one of these do I have to download for my system? Does MSI matter? Does x64 mean that the file is going to be 64 bit? Does installing version 2 or version 3 (I am aware of the differences between version 2 and version 3) change the way that pyscripter runs?
Which version of python do I have to download for pyscripter?
0
0
0
1,277
46,360,132
2017-09-22T08:24:00.000
1
0
1
0
python,x86,64-bit,x86-64,pyscripter
46,360,274
3
true
0
0
Which one of these do I have to download for my system? You can install version 2.7.14 of Python to run PyScripter. On a separate note, you can install/run multiple versions of Python on your machine if you want/require it. Does MSI matter? It's an installer for Microsoft operating systems. Does x64 mean that the file is going to be 64-bit? Yes. Does installing version 2 or version 3 (I am aware of the differences between version 2 and version 3) change the way that PyScripter runs? No; however, you can configure PyScripter to use a specific version of Python as per your requirement.
3
0
0
so basically I'm trying to install pyscripter for my computer, and am aware that Python 2.4 or higher is needed to run the program. My computer specs, first of all, are: Windows 10 (64bit) Intel CPU 4GB ram (or at least the important ones) Now when I go to python.org, there are about a thousand different downloads available like 'Python 3.7.0a1' or '3.6.3rc1' or '2.7.14', most of them being x86, and some of them having x64 next to them which I am assuming is 64 bit, and some of these files are a .zip file, executable file, MSI installer etc. What I want to know is: Which one of these do I have to download for my system? Does MSI matter? Does x64 mean that the file is going to be 64 bit? Does installing version 2 or version 3 (I am aware of the differences between version 2 and version 3) change the way that pyscripter runs?
Which version of python do I have to download for pyscripter?
1.2
0
0
1,277