Q_Id (int64, 337 to 49.3M) | CreationDate (string, length 23) | Users Score (int64, -42 to 1.15k) | Other (int64, 0 to 1) | Python Basics and Environment (int64, 0 to 1) | System Administration and DevOps (int64, 0 to 1) | Tags (string, length 6 to 105) | A_Id (int64, 518 to 72.5M) | AnswerCount (int64, 1 to 64) | is_accepted (bool, 2 classes) | Web Development (int64, 0 to 1) | GUI and Desktop Applications (int64, 0 to 1) | Answer (string, length 6 to 11.6k) | Available Count (int64, 1 to 31) | Q_Score (int64, 0 to 6.79k) | Data Science and Machine Learning (int64, 0 to 1) | Question (string, length 15 to 29k) | Title (string, length 11 to 150) | Score (float64, -1 to 1.2) | Database and SQL (int64, 0 to 1) | Networking and APIs (int64, 0 to 1) | ViewCount (int64, 8 to 6.81M)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
48,389,159 | 2018-01-22T19:45:00.000 | 0 | 0 | 1 | 0 | python,python-3.x,pip,jupyter,python-3.7 | 50,757,220 | 3 | false | 0 | 0 | You should downgrade to python 3.6.5 (after trying all ways you can find on the Internet).
Today I installed Python 3.7.0b5 on a new computer (Windows 10), and pip install failed with this error code 1. I tried every method I could find on the Internet and was still stuck, so I downgraded to 3.6.5, which works well on my MacBook Pro. Finally, it works! | 2 | 2 | 0 | Recently switched from MacOSX to a ThinkPad with Windows 10.
Installed Python 3.7, Pip 9
Attempted pip install jupyter and received the following error:
Command python setup.py egg_info failed with error code 1 in
C:\Users\BRIANM~1\AppData\Local\Temp\pip-build-greiazb7\pywinpty\
Uninstalled setup tools, upgraded setup tools, upgraded pip, ran as admin, all the traditional fixes are not working.
Anyone have a fix? | Pip Install throws error code 1 | 0 | 0 | 0 | 18,193 |
48,389,159 | 2018-01-22T19:45:00.000 | 1 | 0 | 1 | 0 | python,python-3.x,pip,jupyter,python-3.7 | 49,367,599 | 3 | false | 0 | 0 | I had the same problem and I found a solution that worked for me. Instead of pip I used: py -m easy_install textract | 2 | 2 | 0 | Recently switched from MacOSX to a ThinkPad with Windows 10.
Installed Python 3.7, Pip 9
Attempted pip install jupyter and received the following error:
Command python setup.py egg_info failed with error code 1 in
C:\Users\BRIANM~1\AppData\Local\Temp\pip-build-greiazb7\pywinpty\
Uninstalled setup tools, upgraded setup tools, upgraded pip, ran as admin, all the traditional fixes are not working.
Anyone have a fix? | Pip Install throws error code 1 | 0.066568 | 0 | 0 | 18,193 |
48,391,044 | 2018-01-22T22:09:00.000 | 9 | 0 | 1 | 0 | python,json,jsonpickle | 63,633,348 | 3 | false | 0 | 0 | New way of doing. Above answer is old.
jsonpickle.encode(my_object, unpicklable=False) | 1 | 4 | 0 | When an object is serialized to json using jsonpickle, I noticed objects such as datetime are stored once then future uses are stored as references value such as {"py/id":1}. Is it possible store actual value instead of reference? This reference seems hidden and would be confusing when interacting directly with database.
Ex.
class MyClass:
def __init__(self, eee):
now = datetime.datetime.utcnow()
self.ddd = now
self.ddd2 = now
self.ddd3 = now
Json is
{"py/object": "__main__.MyClass", "py/state": {"ddd": {"py/object": "datetime.datetime", "__reduce__": [{"py/type": "datetime.datetime"}, ["B+IBFhYJCwx9oQ=="]]}, "ddd2": {"py/id": 1}, "ddd3": {"py/id": 1}, "eee": "fwaef"}} | Avoid jsonpickle using py/id pointer to another object | 1 | 0 | 0 | 2,974 |
48,392,454 | 2018-01-23T00:36:00.000 | 3 | 0 | 1 | 0 | python,regex | 48,392,464 | 1 | true | 0 | 0 | In a character class (the square bracket syntax in regexes), a hyphen means a range of characters. You have ,-/ in your square brackets, which means it will match any of , - . / | 1 | 0 | 0 | I've been trying to use regex in python to match either individual punctuation marks or groups of them. For example, I want to split out punctuation marks like '!?!' and just '@'.
I have the following regex: (["#$%&()*+,-/:;<=>@[\]^_`{|}~]|[.?!]+), which does what I want, mostly, except that it seems to capture periods individually (so I get . . . instead of ...)
What I don't understand is that if I move the , character in the first [] group somewhere else, it works fine... even if its just one character right or left.
Is there some significance there? Why doesn't it work properly when I have it where it is? (taken from string.punctuation)
Thanks in advance. I've searched around and couldn't find anything... so hopefully this isn't too dumb of a question... | Python regex seems to be incorrectly matching strings | 1.2 | 0 | 0 | 67 |
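A hedged illustration of the accepted answer: inside a character class a bare hyphen forms a range, so ,-/ matches any of , - . / and swallows the periods. Moving the hyphen to the end (or escaping it) keeps it literal while everything else behaves as before:

    import re

    # hyphen placed last so it is treated as a literal character, not a range
    pattern = re.compile(r'(["#$%&()*+,/:;<=>@\[\]^_`{|}~-]|[.?!]+)')
    print(pattern.findall("Wait... what?! See @ 5, ok-ish"))
    # '...' now stays grouped as one token; ',' and '-' still match individually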
48,394,464 | 2018-01-23T04:51:00.000 | 1 | 0 | 1 | 0 | python,class,flask | 48,394,534 | 2 | false | 1 | 0 | Flask Requests are stateless, so to preserve data for a user across requests the options are limited. Here are some ideas:
Serialize the class instance, store it in a flask session (just a wrapper for browser session cookies), retrieve later.
Store it in a database, retrieve later when needed
Pickle it, dump it using user name, retrieve when needed.
Alternatively, depending on the application, a cache solution might be good enough (e.g.
Flask-caching). The route/view would instantiate the class the first time it's called and return a value.
If the view is called again with the same arguments/data, the previous return value is returned without running the view function again. | 1 | 1 | 0 | I am currently building a web application built on Flask Framework with around 10 user accounts in the future when the application has been finished.
There is a Class with heavy module (Compute-intensive), built and used in this application, served as one of the frequently used key features, and I have run into some issues and am seeking for some solutions (let's named it as Class A in file a.py)
Originally, I imported the Class A directly into one of the view file, and created a route function for it, that once an user clicks the button which invokes this route, the route function will then create an instance of Class A, and this instance runs based on received data (like Json). But I found the system can be slow down as the instance of Class A has to be created every single time when the user uses the feature frequently, (also there can be 10 users), and Class A is too heavy to be created again and again.
Therefore I am wondering: is there any way to create the instance of Class A only once (e.g., when the Flask application starts), so that each logged-in user can access that instance rather than creating it over and over again?
Thanks in advance | How to maintain a Class instance for each User's session in Flask? | 0.099668 | 0 | 0 | 1,798 |
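As a rough sketch of the "create it once" idea from the answer above (HeavyClass and compute are placeholders, not names from the question): instantiate the expensive object at module import time, before the routes, so every request reuses the same instance:

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    class HeavyClass:                      # stand-in for the heavy "Class A"
        def __init__(self):
            self.model = "expensive setup done once at startup"
        def compute(self, payload):
            return {"echo": payload, "model": self.model}

    heavy = HeavyClass()                   # created once when the app process starts

    @app.route("/run", methods=["POST"])
    def run():
        return jsonify(heavy.compute(request.get_json()))

Note that with several worker processes each process holds its own copy, so a cache or an external service may still be needed, as the answer suggests.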
48,397,015 | 2018-01-23T08:18:00.000 | 2 | 1 | 0 | 0 | telegram,telegram-bot,python-telegram-bot,php-telegram-bot,telegram-webhook | 48,397,565 | 1 | true | 0 | 0 | Telegram don't provide paid plan at this time.
To send a massive number of messages, it is better to use a channel and ask users to join.
If you really want to send via PM, you can send 1,800 messages per minute, I think this limit is enough for most use case. | 1 | 1 | 0 | My telegram bot needs to send a message to all the users at the same time. However, Telegram claims a max of 30 calls/sec so it gets really slow. I am sure that there is a telegram bot which sends over 30 calls/sec. Is there a paid plan for this? | API call limitation on my telegram bot | 1.2 | 0 | 1 | 1,442 |
48,398,158 | 2018-01-23T09:25:00.000 | 0 | 0 | 1 | 0 | python,encoding,pyinstaller | 48,398,408 | 1 | false | 0 | 0 | The path of the script contained some Non English characters. | 1 | 0 | 0 | When I am trying to run the command pyinstaller myscript.py I get the following error whatever my script contains, I tried with a script with a single line of code like x=1 or print('Hello'). Everything gives the same error.
139 INFO: PyInstaller: 3.3.1
139 INFO: Python: 3.6.4
140 INFO: Platform: Windows-10-10.0.16299-SP0
Traceback (most recent call last):
  File "c:\anaconda3\envs\py\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "c:\anaconda3\envs\py\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Anaconda3\envs\py\Scripts\pyinstaller.exe\__main__.py", line 9, in <module>
  File "c:\anaconda3\envs\py\lib\site-packages\PyInstaller\__main__.py", line 92, in run
    spec_file = run_makespec(**vars(args))
  File "c:\anaconda3\envs\py\lib\site-packages\PyInstaller\__main__.py", line 39, in run_makespec
    spec_file = PyInstaller.building.makespec.main(filenames, **opts)
  File "c:\anaconda3\envs\py\lib\site-packages\PyInstaller\building\makespec.py", line 385, in main
    specfile.write(onedirtmplt % d)
  File "c:\anaconda3\envs\py\lib\encodings\cp1252.py", line 19, in encode
    return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode characters in position 127-137: character maps to <undefined> | PyInstaller gives the same encoding error | 0 | 0 | 0 | 438
48,400,191 | 2018-01-23T11:07:00.000 | 2 | 0 | 0 | 0 | python-3.x,numpy,tensorflow | 48,401,126 | 1 | true | 0 | 0 | Note: This answer does not answer the OP's exact question, but addresses the actual need of the OP as clarified in the comments (i.e., generate image patches, quickly). I just thought this would fit better here than in a badly-formatted comment.
If all you need to do is generating image patches, Tensorflow (and generally GPU acceleration) is not the right tool for this, because the actual computation is trivial (extract a sub-area of an image) and the bottleneck would be memory transfer between GPU and CPU.
My suggestion is, then, to write CPU-only code that uses view_as_windows and parallelize it via multiprocessing to split the workload on all your CPU cores.
Should you need to feed those patches to a Tensorflow graph afterwards, the way to go would be to first generate the patches on the CPU (with whatever input pipeline you like), batch them and then feed them to the GPU for the graph computation. | 1 | 1 | 1 | I am working with python3 and tensorflow to generate image patches by using numpy view_as_windows but because of numpy can't run on GPU, is there any way to do it with tensorflow?
ex: view_as_windows(array2d, window_shape, stride)
Thanks | Is there any tensorflow version of numpy.view_as_windows? | 1.2 | 0 | 0 | 133 |
48,400,682 | 2018-01-23T11:31:00.000 | 1 | 0 | 1 | 0 | python,cython | 48,406,563 | 2 | false | 0 | 0 | You need an injection mechanism, because you correctly assume that whatever is stored in the files accessible to the user will be extracted.
This definitely means that whatever way of passing the key material to the user code you choose, the key will be MITM-ed and exfiltrated. Yes, the user-accessible code should never access the key.
This is, of course, a solved problem. Use an API that accepts a challenge from your cloud service, and optionally user input ("password", etc), and returns a temporary authorization token. The token should expire soon enough to make session stealing impractical. The re-authorization process should repeat periodically, as long as the user is using the access.
This is how e.g. SSH works. You can safely hide the SSH key using the OS permissions, or a hardware token (see U2F). You can also try to use U2F with GPG; some cloud services may support U2F natively.
By now you surely remember the idea that security cannot be bolted onto an existing solution, but must be built into it form the start. You might need to rethink a wider swath of your app with security in mind, and come up with proper authorization mechanics. I don't know the specifics of your problem, so I cannot guess a specific approach. | 1 | 0 | 0 | I want to embed configuration secrets (eg access keys) within a module compiled with Cython. These values should not be easily accessible from the compiled code and isolated from the actual source code. Is there a way of injecting this values from a different source than the main Python code (like a comand line option to cython or the c compiler)?
NB: I don't want to have a separate Cython module just containing the access keys, because the access key could then be found easily. | Is there a way to inject constants into a Cython module from the command line? | 0.099668 | 0 | 0 | 536
48,403,207 | 2018-01-23T13:42:00.000 | 11 | 0 | 1 | 0 | python,terminology | 61,780,616 | 2 | false | 0 | 0 | The Stacktrace is the trace of the methods call stack, exactly as it is in the memory of the computer that is executing your program.
So the most recent method calls are at the top, and the root of the problem is likely at the top as well.
Virtually all programming languages do it this way.
The traceback is something Python has "invented": it is the above reversed. So, to find the root of your problem, you need to start reading from the bottom, which is apparently easier for Pythonistas to read.
To make this clear, they had to specify "most recent call last".
Calling a "stacktrace" a "traceback" is simply wrong: a traceback is not a trace of a stack. It is a stacktrace reversed, and the "back" in the name probably refers to that.
At the top of a stack, in every meaning, you have the most recent item. | 1 | 21 | 0 | In the Python world there are two terms which seem to be equal:
Stacktrace
Traceback
Is there any difference between the two? | Python: Stacktrace vs Traceback | 1 | 0 | 0 | 1,600 |
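A small standard-library sketch of the "most recent call last" ordering described in the answer above:

    import traceback

    def inner():
        raise ValueError("boom")

    def outer():
        inner()

    try:
        outer()
    except ValueError:
        traceback.print_exc()   # header reads "Traceback (most recent call last):"
                                # outer() is printed first and inner() last,
                                # the reverse of a classic top-down stacktrace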
48,403,728 | 2018-01-23T14:08:00.000 | 0 | 0 | 0 | 0 | python,django,heroku | 48,406,138 | 1 | false | 1 | 0 | For each request, Django create a new HttpRequest object and load new instances of corresponding views. You can't share data between requests without putting them in a persistent storage. | 1 | 0 | 0 | I've been working on a website which allows users to play a game against a "Machine" player and I decided to do this using django 1.12 and python 3.6 in an attempt to develop skills in this area. The game & ML algorithms run on the backend in python and during testing/dev this all worked fine. When pushing this to heroku it became apparent that the instance from the game and other classes were being instantiated correctly but then as the page refreshes, in order to get the machine player's choice from the server, the request would go to another server which didn't have the instantiated objects. I tried using the default cache to allow the player to access the same instance but I believe it might be too large. After some reading it sounds like memcached is the way forward, but I wondered whether anyone might have any suggestions or know if there's a simpler solution? | Accessing the same instance of a class django, python, heroku | 0 | 0 | 0 | 29 |
48,404,347 | 2018-01-23T14:41:00.000 | 0 | 0 | 0 | 1 | python,linux,bash,shell,qsub | 48,405,314 | 2 | false | 0 | 0 | Get the scripts to work by
#!/usr/bin/bash
python_cpu='/path/ld-linux-x86-64.so.2 --library-path /path/other_libs /path/python'
$python_cpu python_script.py different_params
instead of using alias | 1 | 1 | 0 | I have tensorflow 1.4.1 installed by pip, but the system-default gcc libs are not latest, that running
import tensorflow
will cause this error
ImportError: /lib64/libc.so.6: version 'GLIBC_2.16' not found
Since I don't have root permission, I built the gcc libs, and use
alias python_cpu='/path/ld-linux-x86-64.so.2 --library-path /path/other_libs /path/python'
to run tensorflow on CPU.
Now I've generated thousands of bash scripts and want to run them with qsub
within each script oo.sh writes
#!/usr/bin/bash
python_cpu python_script.py different_params
I've tried the below ideas but all failed.
Use qsub -V oo.sh to pass the alias into oo.sh.
Use alias python_cpu="" within the bash script.
Without alias, use '/path/ld-linux-x86-64.so.2 --library-path /path/other_libs /path/python' python_script.py params.
By the way the alias of TF works well with the bash shell command line. Any suggestions of what to do now? | How to use ld-linux-x86-64.so.2 in bash script for qsub | 0 | 0 | 0 | 658 |
48,404,474 | 2018-01-23T14:48:00.000 | 0 | 0 | 0 | 0 | django,python-2.7,unicode,python-unicode,pyenchant | 48,405,718 | 1 | false | 1 | 0 | Oh boy... false alarm, here! It actually works, but I entered some incorrect character codes. I'm going to leave the question up since that code is the only thing that seemed to let me complete this particular task in this environment. | 1 | 0 | 0 | I'm using Python 2.7 here (which is very relevant).
Let's say I have a string containing an "em" dash, "—". This isn't encoded in ASCII. Therefore, when my Django app processes it, it complains. A lot.
I want to to replace some such characters with unicode equivalents for string tokenization and use with a spell-checking API (PyEnchant, which considers non-ASCII apostrophes to be misspellings), for example by using the shorter "-" dash instead of an em dash. Here's what I'm doing:
s = unicode(s).replace(u'\u2014', '-').replace(u'\u2018', "'").replace(u'\u2019', "'").replace(u'\u201c', '"').replace(u'\u201d', '"')
Unfortunately, this isn't actually replacing any of the unicode characters, and I'm not sure why.
I don't really have time to upgrade to Python 3 right now, importing unicode_literals from future at the top of the page or setting the encoding there does not let me place actual unicode literals in the code, as it should, and I have tried endless tricks with encode() and decode().
Can anyone give me a straightforward, failsafe way to do this in Python 2.7? | Replacing unicode characters with ascii characters in Python/Django | 0 | 0 | 0 | 407 |
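Under Python 2.7 the chained replace calls only work once s is actually a unicode object; a hedged alternative is a single translation table applied in one pass (the character codes are the ones listed in the question, and the UTF-8 decode is an assumption about the incoming bytes):

    # -*- coding: utf-8 -*-
    # Python 2.7 sketch: map typographic punctuation to ASCII equivalents
    table = {
        0x2014: u'-',                    # em dash
        0x2018: u"'", 0x2019: u"'",      # curly single quotes
        0x201c: u'"', 0x201d: u'"',      # curly double quotes
    }

    def asciify(s):
        if isinstance(s, str):           # a byte string in Python 2
            s = s.decode('utf-8')        # assumes the bytes are UTF-8 encoded
        return s.translate(table)        # unicode.translate takes ordinal -> replacement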
48,406,918 | 2018-01-23T16:48:00.000 | 0 | 0 | 1 | 0 | python-3.x | 48,451,358 | 1 | false | 0 | 0 | So, I had to do more research and here is the answer:
In the right pane of Visual Studio there are a few lines of references to Python.
To keep your original project and add a new tab:
1. Right-click on your project in the right pane.
2. Click Add New Item.
3. In the new window, click Empty Python File.
4. Rename that tab (at the bottom of the window).
You are golden! | 1 | 0 | 0 | I am brand new to programming, and I think I have a simple question for those of you that have been programming for quite some time.
How do I add multiple python scripts (as multiple tabs) within Visual Studio?
I keep going to File, New Project and that gives me one tab to work from.
I would like to have multiple tabs so I can go back and forth looking at the files I have created.
Thank you in advance.
-Robert | Multiple Tabs within Visual Studio | 0 | 0 | 0 | 122 |
48,408,263 | 2018-01-23T18:13:00.000 | 2 | 0 | 1 | 0 | python,python-3.x,list,loops | 48,408,389 | 1 | false | 0 | 0 | You only need to remember the sum and the number of inputs in two variables that are updated when the user writes a number.
When the user enters 'done', compute the mean (sum / number_of_inputs). | 1 | 0 | 0 | I'm a beginner and my textbook just covered iterations and loops in Python. Lists have only been given cursory coverage at this point.
The exercise I'm struggling with is this: Write a program which repeatedly reads numbers until the user enters "done". Once "done" is entered print out the total, count and average of all the numbers. If the user enters anything other than a number, detect their mistake using try and except and print an error message and skip to the next number.
All of this I can manage, except for how to get the program to store multiple user inputs. No matter what I write I only end manipulating the last number entered. Considering we haven't formally covered lists yet I find it hard to believe I should be using append, and therefore must be overthinking this problem to death.
Any and all advice is much appreciated. | Possible to write a loop in Python that stores user input for future manipulation without using append? | 0.379949 | 0 | 0 | 89 |
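A hedged sketch of the answer's idea: keep only a running total and a count, so no list (and no append) is needed:

    total = 0.0
    count = 0
    while True:
        line = input("Enter a number (or 'done'): ")
        if line == 'done':
            break
        try:
            total += float(line)
            count += 1
        except ValueError:
            print("Invalid input, skipping")
    if count:
        print("total:", total, "count:", count, "average:", total / count)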
48,414,156 | 2018-01-24T03:01:00.000 | 0 | 0 | 0 | 0 | python,selenium,webdriver,selenium-chromedriver | 48,414,334 | 2 | true | 0 | 1 | driver.close() and driver.quit() are two different methods for closing the browser session in Selenium WebDriver.
driver.close() - It closes the browser window on which the focus is set.
driver.quit() – It basically calls driver.dispose method which in turn closes all the browser windows and ends the WebDriver session gracefully.
You should use driver.quit whenever you want to end the program. It will close all opened browser window and terminates the WebDriver session. If you do not use driver.quit at the end of program, WebDriver session will not close properly and files would not be cleared off memory. This may result in memory leak errors. | 1 | 0 | 0 | I've a made a selenium test using python3 and selenium library.
I've also used Tkinter to make a GUI to put some input on (account, password..).
I've managed to hide the console window for python by saving to the .pyw extension; and when I make an executable with my code, the console doesn't show up even if it's saved with .py extension.
However, everytime the chromedriver starts, it also starts a console window, and when the driver exists, this window does not.
so in a loop, i'm left with many webdriver consoles.
Is there a work around this to prevent the driver from launching a console everytime it runs ? | Python3, Selenium, Chromedriver console window | 1.2 | 0 | 1 | 617 |
48,415,348 | 2018-01-24T05:25:00.000 | 0 | 0 | 1 | 0 | python,parallel-processing,cluster-computing,hpc,mpi4py | 48,505,541 | 1 | false | 0 | 0 | Actually, you may submit one script to some nodes, specified in a given file. But the results are given based on each script. You cannot combine the results of more than one script on run-time, since each result is saved in some particular file (if any job scheduler used). | 1 | 0 | 1 | I've read some tutorials and documentation on MPI for python. However, I'm still not clear on how it is supposed to be used for sending jobs to separate nodes in a cluster, then combining/processing the results. It seems that you only specify the number of different processes.
Is it possible to use MPI for sending versions of the same script to separate nodes which run separately with multiprocessing, then combine the combine the results later? If this is an inappropriate use for MPI, what could do something like this? | How to use mpi for python for parallel cluster computing/ hpc? | 0 | 0 | 0 | 207 |
48,417,561 | 2018-01-24T08:08:00.000 | 0 | 0 | 0 | 0 | python | 48,418,721 | 1 | false | 1 | 0 | This is not the kind of problem that will be solved in a couple of lines of Python. The problem is under-specified - there's no guarantee that there will even be silence between songs, ads and announcers on any given radio stream, as they try to make it harder to usefully record full songs from their streams for piracy purposes.
To do this robustly, it's likely that you'll need to apply AI / deep learning techniques to distinguish music from ads and announcements. Even then it's tricky, as some music will have regular talking in it, some songs are short, and some ads are long and contain music. | 1 | 0 | 0 | My task is to extract full songs from radio streaming using python 2.7.
I have managed to record radio streaming, but I can't find a good way to detect if the audio that I record is music, ads, or just talking.
I tried to detect by threshold, but it wasn't good because there are not enough silence between the talking or the ads to the songs.
If someone knows a good solution for me I would love to hear about it.
import pydub
streamAudio = pydub.AudioSegment.from_mp3("justRadioStream.mp3")
listMp3 = pydub.silence.detect_silence(streamAudio, min_silence_len=400, silence_thresh=-38)
print listMp3
I tried adjusting min_silence_len and silence_thresh, but there is not enough silence between the songs, ads and talking (or the voices are too loud) to detect the boundaries properly.
thanks a lot! | python extract songs from radio streaming | 0 | 0 | 1 | 203 |
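If silence-based splitting is still worth trying despite the answer's caveats, pydub also exposes detect_nonsilent, which returns the candidate segments instead of the silent gaps. The thresholds below are guesses to tune, not known-good values:

    import pydub
    from pydub.silence import detect_nonsilent

    audio = pydub.AudioSegment.from_mp3("justRadioStream.mp3")
    # returns [[start_ms, end_ms], ...] for regions that are not silent
    segments = detect_nonsilent(audio, min_silence_len=700,
                                silence_thresh=audio.dBFS - 16)
    for start_ms, end_ms in segments:
        if end_ms - start_ms > 2 * 60 * 1000:   # keep only chunks longer than ~2 minutes
            audio[start_ms:end_ms].export("chunk_%d.mp3" % start_ms, format="mp3")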
48,417,764 | 2018-01-24T08:21:00.000 | 0 | 0 | 0 | 0 | python,tensorflow,lstm,recurrent-neural-network | 48,419,553 | 2 | false | 0 | 0 | In the first iteration add some tf.assign operations to assign the values you want to the internal variables.
Make sure this only happens in the first iteration otherwise you'll overwrite any training you do.
The cell has a method called get_trainable_variables to help you if you want. | 1 | 1 | 1 | I want to initial the value for weight and bias in BasicLSTMcell in Tensorflow with my pre-trained value (I get them by .npy). But as I use get_tensor_by_name to get the tensor, it seems that it just returns a copy for me and the raw value is never changed. I need your help! | How to modify the initial value in BasicLSTMcell in tensorflow | 0 | 0 | 0 | 488 |
48,419,029 | 2018-01-24T09:33:00.000 | 0 | 0 | 0 | 0 | python,anaconda,spyder | 48,419,369 | 2 | false | 0 | 0 | I'm assuming you already tried pip3 install cx_Oracle? If you're on a mac and have done this and still receiving ModuleNotFoundError then try pip3 install --user cx_Oracle. To confirm the module is installed run pip3 freeze to list all python3 modules currently installed. | 1 | 0 | 0 | I want to connect Python 3.6 (Spyder) to Oracle.
When connecting I get the error as
ModuleNotFoundError: No module named 'cx_Oracle' | ModuleNotFoundError: No module named 'cx_Oracle' spyder | 0 | 0 | 0 | 8,004 |
48,420,005 | 2018-01-24T10:20:00.000 | 2 | 0 | 0 | 0 | python,keras,lstm | 48,605,766 | 2 | true | 0 | 0 | I'd say that one way for it to do this is for your network to simply predict C or have C as the label
I have been seeing this again and again. Don't confuse a NN with something more than it actually is. You simply approximate the output Y given an input X by learning a function F. That is your NN.
In your case the output could very easily be C + Other_Output
Depending on what that other output is, your network could converge and have good results. It could very well not, so your question is at this point simply incomplete. You have to ask yourself some questions like:
Does C + Other_Output make sense for the given input?
Is there a good way for me to serialize C + Other_Output? For example, having the first K of N output array elements describe C and the remaining N-K describe Other_Output?
Is C a multiclass problem, and if so, is Other_Output a different kind of problem, or could it be turned into a multiclass problem of the same kind that converges along with C, or could both be framed as a single multilabel problem?
These are at least some of the questions you need to ask yourself before even choosing the architecture.
That being said, no, unless you train your network to learn about patterns between A B D and C it will not be able to predict a missing input.
Good luck,
Gabriel | 1 | 1 | 1 | Assume that I have five columns in my dataset (A,B,C,D,E) and I want to build an LSTM model by training just on A,B,D,E (i.e. I want to exclude C)
My problem is that I still want to use this model to predict C. Is it possible if I didn't train my model with this variable? How can I do that?
EDIT 1
I'm working with categorical and numerical data modeled as time series. In this specific case, C is a categorical time series (given in a one-hot representation). | Predict a variable that is not in the input sequences with LSTM-Keras | 1.2 | 0 | 0 | 686 |
48,424,813 | 2018-01-24T14:26:00.000 | 0 | 0 | 0 | 0 | python,performance,dask | 48,425,028 | 2 | false | 0 | 0 | Try going through the data in chunks with:
import pandas as pd   # required for read_csv

chunksize = 10 ** 6   # rows per chunk
for chunk in pd.read_csv(filename, chunksize=chunksize):
    process(chunk)    # replace with your own per-chunk logic
Is it expected? I would thought the data would be spilled to disk and there are plenty of disk space left.
Is there a way to limit its total memory usage? Thanks
EDIT:
I also tried:
dask.set_options(available_memory=12e9)
It did not work. It did not seemed to limit its memory usage. Again, when memory usage reach 100%, the job gets killed. | dask job killed because memory usage? | 0 | 0 | 0 | 1,009 |
48,425,546 | 2018-01-24T15:01:00.000 | 0 | 0 | 0 | 1 | python-3.x,celery,celery-task,flower | 53,119,548 | 1 | false | 0 | 0 | You can use celery autoscaling. For example setting autoscale to 8 will mean it will fire up to 8 processes to process your queue(s). It will have a master process sitting waiting though. You can also set a minimum, for example 2-8 which will have 2 workers waiting but fire up some more (up to 8) if it needs to (and then scale down when the queue is empty).
This is the process based autoscaler. You can use it as a reference if you want to create a cloud based autoscaler for example that fires up new nodes instead of just processes.
As to your flower issue it's hard to say without knowing your broker (redis/rabbit/etc). Flower doesn't capture everything as it relies on the broker doing that and some configuration causes the broker to delete information like what tasks have run. | 1 | 1 | 0 | I am putting together a Celery based data ingestion pipeline. One thing I do not see anywhere in the documentation is how to build a flow where workers are only running when there is work to be done. (seems like a major flaw in the design of Celery honestly)
I understand Celery itself won't handle autoscaling of actual servers, thats fine, but when I simulate this Flower doesn't see the work that was submitted unless the worker was online when the task was submitted. Why? I'd love a world where I'm not paying for servers unless there is actual work to be done.
Workflow:
Imagine a While loop thats adding new data to be processed using the celery_app.send_task method.
I have custom code that sees theres N messages in the queue. It spins up a Server, and starts a Celery worker for that task.
Celery worker comes online, and does the work.
BUT.
Flower has no record of that task, even though I see the broker has a "message", and while watchings the output of the worker, I can see it did its thing.
If I keep the worker online, and then submit a task, it monitors everything just fine and dandy.
Anyone know why? | Celery with dynamic workers | 0 | 0 | 0 | 1,419 |
48,429,246 | 2018-01-24T18:25:00.000 | 1 | 1 | 1 | 1 | python,atom-editor | 48,435,723 | 2 | true | 0 | 0 | Having run C on atom should not interfere with you running python. Make sure you've installed the python extension and you name your file with the py extension. Also, install the 'script' extension. Enter your script and hit command-I. The script extension should then run your script. Command-I is just a shortcut to run script. You can install these extensions (add-ons) by going to Preferences under the Atom menu item. This opens a window in Atom and you can install from a list of available extensions. | 1 | 0 | 0 | I have been using atom code editor to write C code and run it using a gcc compiler, recently I started out on python code and have been trying to run python script using atom code editor but i keep on getting errors, is there a way to fix this? | How to run python script using atom? | 1.2 | 0 | 0 | 1,966 |
48,430,380 | 2018-01-24T19:38:00.000 | 4 | 0 | 1 | 0 | python,spyder | 48,431,392 | 1 | true | 0 | 0 | (Spyder developer here) This problem has been reported before but we haven't had time to solve it. Although it sounds simple, it's really hard to solve correctly in all situations because indentation spaces matters in Python (whereas in R they are irrelevant). | 1 | 6 | 0 | I usually use RStudio to code in R, and when running code line by line, if running the first line of a loop, the content of the loop runs with it. It is very convenient for debugging code, to be able to run code line by line without manually selecting the whole loop for example. In Spyder, if I run the first line of a loop, it runs just that, as if I was trying to run an empty loop, and gives an error. How can a run a code line by line properly in Spyder? I have researched the question but did not find an answer. Thank you for your help! | How to run python code line by line in Spyder and include loop/if statement contents | 1.2 | 0 | 0 | 6,876 |
48,432,038 | 2018-01-24T21:39:00.000 | 0 | 0 | 1 | 0 | python,pandas,ram,conda | 48,432,486 | 1 | false | 0 | 0 | Ok, so starting from a brand new environment with latest python 3.5.4 and latest panda seems to cut it....so I think I'll close this wonderful thread for now and re-open if after reinstalling all other needed libs I end up with the same problem. | 1 | 0 | 1 | I'm fairly new to python but I'm pretty sure I didn't get this behaviour before.
A couple of days back I've noticed that if I open a new python console and simply do:
import pandas as pd
Then python.exe ram usage grows steadily in about 5 seconds to reach about 96% utilisation (ie about 15.5G of my 16G total ram).
That's not normal, right?
I'm using anaconda3 python 3.5 on windows 10....I've updated my conda and pandas but to no avail...
Cheers | Why would importing pandas use up almost all my ram? | 0 | 0 | 0 | 61 |
48,433,297 | 2018-01-24T23:21:00.000 | 0 | 0 | 0 | 0 | python,amazon-web-services,apache-spark,pyspark,emr | 50,046,220 | 1 | false | 0 | 0 | bootstrap is the solution. write a shell script, pip install all your required packages and put it in the bootstrap option. It will be executed on all nodes when you create a cluster. just keep in mind that if the bootstrap takes too long time (1 hour or so?), it will fail. | 1 | 4 | 0 | I am currently running spark-submit jobs on an AWS EMR cluster. I started running into python package issues where a module is not found in during imports.
One obvious solution would be to go into each individual node and install my dependencies. I would like to avoid this if possible. Another solution I can do is write a bootstrap script and create a new cluster.
Last solution that seems to work is I can also pip install my dependencies and zip them and pass them through the spark-submit job through --py-files. Though that may start becoming cumbersome as my requirements increase.
Any other suggestions or easy fixes I may be overlooking? | Module not found in AWS EMR slave nodes | 0 | 0 | 1 | 724 |
48,434,150 | 2018-01-25T01:12:00.000 | 2 | 0 | 1 | 0 | python,shell,pip,anaconda | 48,434,570 | 2 | false | 0 | 0 | A shell is a command line interface that lets you give your computer commands using a syntax specific to the OS and shell program. PIP (acronym for "PIP Installs Packages") is simply a program designed to be used within a shell environment like CMD.
Anaconda is a Python package distribution which happens to include a Python IDLE, which has a both a command line interface as well as text editor.
Hope this helps your understanding. | 2 | 1 | 0 | Good evening! I was wondering why you have to use a shell, such as cmd, to install packs in Pip but when it comes to Anaconda you can use its own shell. Or, rephrasing: what impedes Pip to be considered a shell (considering it even has the appearance of one if you open it by itself)? Thank you! | Installing packs with Pip and Anaconda (Windows) | 0.197375 | 0 | 0 | 211 |
48,434,150 | 2018-01-25T01:12:00.000 | 1 | 0 | 1 | 0 | python,shell,pip,anaconda | 48,434,384 | 2 | false | 0 | 0 | Pip is actually running from the shell when double clicking! When you double click pip you will probably see that it quickly closes. The inplamenters chose to this. Not exactly sure why (you would have to ask them) but I would guess because of one (or more) of these reasons:
1) Running from a shell is more portable. No matter where you are in your file path, you can open up cmd and, as long as pip has been added to PATH, run it. Running by double-clicking is not always convenient.
2) Other architectures. Under Linux the terminal plays a much greater part than on Windows, and the implementers would have wanted pip to be cross-platform. The double-click method does not exist in the same way under Linux, so the only consistent option is to run through the terminal. Remember that the implementers wanted to be as cross-platform as possible, and running from a shell is the safest, most concise way of doing things.
3) They did not have only Windows in mind when building. Python was built under C and although cross platform it has not been built for a single OS. This means the inplamenters could not use all the attractive features because many would not work as soon as the OS changed. | 2 | 1 | 0 | Good evening! I was wondering why you have to use a shell, such as cmd, to install packs in Pip but when it comes to Anaconda you can use its own shell. Or, rephrasing: what impedes Pip to be considered a shell (considering it even has the appearance of one if you open it by itself)? Thank you! | Installing packs with Pip and Anaconda (Windows) | 0.099668 | 0 | 0 | 211 |
48,434,325 | 2018-01-25T01:35:00.000 | 1 | 0 | 1 | 0 | python,conda,catboost | 69,830,191 | 2 | false | 0 | 0 | conda install -c conda-forge catboost | 2 | 3 | 0 | Does Yandex support Anaconda environments? I'm trying to get CatBoost working in PyCharm using an Anaconda environment as Python interpreter, but I continue to get the ModuleNotFoundError: No module named 'catboost'. I'm able to install CatBoost using pip, but not with 'conda install', and especially not 'conda install' with an -n flag specifying a particular conda environment. | Can CatBoost be installed in a Conda environment? | 0.099668 | 0 | 0 | 960 |
48,434,325 | 2018-01-25T01:35:00.000 | 1 | 0 | 1 | 0 | python,conda,catboost | 48,917,417 | 2 | false | 0 | 0 | I'm using anaconda and just pip installed it.
Any particular reason you need it installed by Conda? | 2 | 3 | 0 | Does Yandex support Anaconda environments? I'm trying to get CatBoost working in PyCharm using an Anaconda environment as Python interpreter, but I continue to get the ModuleNotFoundError: No module named 'catboost'. I'm able to install CatBoost using pip, but not with 'conda install', and especially not 'conda install' with an -n flag specifying a particular conda environment. | Can CatBoost be installed in a Conda environment? | 0.099668 | 0 | 0 | 960 |
48,434,492 | 2018-01-25T01:59:00.000 | 3 | 0 | 1 | 1 | python,pythonpath,os.path | 48,434,543 | 1 | false | 0 | 0 | If filenameis relative, the current working directory (available with os.getcwd()) is used.
PYTHONPATH is only used when importing modules. | 1 | 0 | 0 | Which path is used in os.path.isfile(filename) in python.
I have file in /home/debian and I have added that path in PYTHONPATH variable still os.path.isfile(filename) returns FALSE. | why os.path.isfile returns false if file is other directory? | 0.53705 | 0 | 0 | 875 |
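A quick way to see what is actually being checked, following the answer above (the file name is an assumption):

    import os

    filename = "myfile.txt"                        # relative name, as in the question's scenario
    print(os.getcwd())                             # directory a relative path is resolved against
    print(os.path.isfile(filename))                # False unless the file is in the cwd
    print(os.path.isfile(os.path.join("/home/debian", filename)))  # explicit absolute path works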
48,435,165 | 2018-01-25T03:29:00.000 | 3 | 0 | 1 | 0 | python,generator,yield | 48,435,262 | 2 | true | 0 | 0 | Simply put, yield delays the execution but remembers where it left off. However, more specifically, when yield is called, the variables in the state of the generator function are saved in a "frozen" state. When yield is called again, the built in next function sends back the data in line to be transmitted. If there is no more data to be yielded (hence a StopIteration is raised), the generator data stored in its "frozen" state is discarded. | 1 | 3 | 0 | I understand generator generates value once at a time, which could save a lot memory and not like list which stores all value in memory.
I want to know how, in Python, yield knows which value should be returned during the iteration without storing all the data in memory at once.
In my understanding, if I want to print 1 to 100 using yield, doesn't yield need to know or store 1 to 100 first and then move a pointer one step at a time to return each value?
If not, then how yield return value once at a time, but without storing all value in memory? | where does the yield store value in python | 1.2 | 0 | 0 | 1,899 |
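A tiny sketch of that laziness: each next() call resumes the function, computes one value, and suspends again, so nothing like a full 1..100 list ever exists in memory:

    def numbers(limit):
        n = 1
        while n <= limit:
            yield n          # pause here; the local variable n is the only saved state
            n += 1

    gen = numbers(100)
    print(next(gen))   # 1
    print(next(gen))   # 2  (computed on demand, not read from a stored list)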
48,441,559 | 2018-01-25T11:14:00.000 | 2 | 1 | 0 | 1 | python,linux,cron,sleep | 48,441,730 | 2 | false | 0 | 0 | Undoubtedly, whatever memory is consumed by Python and the startup of your script will stay in memory for the duration of the sleep, but since you have written the code you can organise things to minimise the memory usage until the sleep is over.
As to cpu performance, I'm sure that you will incur no overhead for the duration of the sleep. | 1 | 2 | 0 | I'm using a cron task to schedule many jobs every 2 min.
Since cron has no resolution finer than a minute, I make the Python code call a randomised sleep (between 0 and 60 seconds) so the execution time is spread across the minute.
This works out fine for me.
I'm just wondering that if I have a process which sleep for 50 seconds, does it keep hold of the memory during these 50 seconds? Can it cause performance problems? | Does sleep command slow the performance? | 0.197375 | 0 | 0 | 664 |
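For reference, the randomised-delay trick from the question above is just a couple of lines; consistent with the answer, the process keeps its memory while it waits but uses essentially no CPU:

    import random
    import time

    time.sleep(random.randint(0, 59))   # spread out jobs started by the same cron minute
    # ... the actual job runs here ...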
48,446,164 | 2018-01-25T15:14:00.000 | 0 | 0 | 0 | 0 | python,pentaho,kettle | 48,450,589 | 1 | true | 0 | 0 | You can´t. The execution relies on a product developer by Hitachi Vantara, so unless you take all the source code from their repository and convert that to Python, the only way you have to execute a Pentaho transformation is to use their tools.
Either way, you can create a python application that calls on Pentaho's tools to execute the transformation you need.
The console application used to run a .ktr file is "Pan". | 1 | 1 | 0 | I have a .ktr file to execute where I can not use spoon or any other tool to execute it.
How to execute that ktr file using python? | Execute a pentaho kettle (.ktr) using python | 1.2 | 0 | 0 | 847 |
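One way to drive this from Python without Spoon is to shell out to Pan (the command-line transformation runner named in the answer above) via subprocess; the install path and flags below are assumptions to adapt to your setup:

    import subprocess

    # assumed location of the PDI client; adjust to where data-integration is installed
    result = subprocess.run(
        ["/opt/data-integration/pan.sh", "-file=/path/to/transformation.ktr", "-level=Basic"],
        capture_output=True, text=True)
    print(result.returncode)   # 0 usually means the transformation finished without errors
    print(result.stdout)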
48,446,691 | 2018-01-25T15:41:00.000 | 3 | 1 | 0 | 0 | javascript,python,nginx,web,raspberry-pi | 48,447,114 | 1 | true | 1 | 0 | If the frontend will run on a browser executed on a laptop o desktop it will run fine, but if the interface will run on a browser executed on the Pi maybe it will be too expensive in terms of GPU/CPU usage and it will require fine tuning in order to avoid unnecessary re-renders.
So if the browser is on a remote machine ok, if not think about a something like TkInter for UI. | 1 | 0 | 0 | I have a web application that I developed for use on a Raspberry Pi written in Python and hosted on nginx. It's a bit slow to serve new pages, even when there is very little to no logic being processed for the page that's loading (4-5 seconds+).
I know that's a common problem as Pi's aren't exactly equipped to handle the load required to deliver web pages super quickly, but I was wondering if anyone had any experience with this and if it would be worthwhile to recreate the app in some other environment? I was wondering if perhaps a nodejs server would be significantly (a few seconds) quicker in general, or building a single page application using react would be worthwhile? Or if there is some other solution that would be even faster?
EDIT:
more info: raspberry pi 3, json for storing/reading data (very small amounts of data), running chrome, only one user interacting directly with the app, and on the device itself (not from the internet or another network) | Web Interface on the Raspberry Pi | 1.2 | 0 | 0 | 99 |
48,447,814 | 2018-01-25T16:35:00.000 | 1 | 0 | 0 | 0 | python,numpy,ffmpeg | 48,448,182 | 1 | false | 0 | 0 | You are exactly right, JPEG images are compressed (this is even a lossy compression, PNG would be a format with lossless compression), and JPEG files are much smaller than the data in uncompressed form.
When you load the images to memory, they are in uncompressed form, and having several GB of data with 14400 images is not surprising.
Basically, my advice is don't do that. Load them one at a time (or in batches), process them, then load the next images. If you load everything to memory beforehand, there will be a point when you run out of memory.
I'm doing a lot of image processing, and I have trouble imagining a case where it is necessary to have that many images loaded at once. | 1 | 0 | 1 | I have to read thousands of images in memory.This has to be done.When i extract frames using ffmpeg from a video,the disk space for the 14400 files =92MB and are in JPG format.When I read those images in python and append in a python list using libraries like opencv,scipy etc the same 14400 files=2.5 to 3GB.Guess the decoding is the reason?any thoughts on this will be helpful? | Can I use ffmpeg to output jpgs to a numpy array in python without writing the files to disk etc? | 0.197375 | 0 | 0 | 420 |
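Since the title above asks whether the decode can skip the disk entirely: one common approach (sketched below; the frame size and file name are assumptions) is to have ffmpeg write raw RGB frames to a pipe and wrap each one with numpy.frombuffer, processing frames one at a time instead of keeping them all in a list:

    import subprocess
    import numpy as np

    width, height = 640, 360          # must match the video, or add an ffmpeg scale filter
    cmd = ["ffmpeg", "-i", "video.mp4", "-f", "rawvideo", "-pix_fmt", "rgb24", "pipe:1"]
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)

    frame_bytes = width * height * 3
    while True:
        raw = proc.stdout.read(frame_bytes)
        if len(raw) < frame_bytes:
            break
        frame = np.frombuffer(raw, dtype=np.uint8).reshape((height, width, 3))
        # process the frame here instead of appending it to a list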
48,448,863 | 2018-01-25T17:37:00.000 | 7 | 0 | 1 | 0 | python,json,decoding,diacritics | 48,449,404 | 1 | true | 0 | 1 | Try This:
(self.Lines = json.load(open("Data/Lines.json","rb"), encoding="utf-8"))
The difference is loading the file in bytes and reading it in utf-8 format (assuming that's the file format). | 1 | 3 | 0 | I have a string like this ("Theres loads of adventures to be had here;\nYou'll get your own Kobémon\nand get to catch more!") in my .JSON file and when I read from it into the python file and into a Tkinter textbox I get "é" instead of é. Is there a way to stop this. Im reading the .JSON using this :(self.Lines = json.load(open("Data/Lines.json"))) | Python Reading from JSON accented characters coming out wrong | 1.2 | 0 | 0 | 2,107 |
48,449,788 | 2018-01-25T18:41:00.000 | 3 | 0 | 1 | 0 | python,compilation | 48,449,856 | 1 | true | 0 | 0 | No. py2exe and similar tools just create a bundle including the Python interpreter, the bytecode of your Python sources and their dependencies. It's just a deploy convenience, there's no speed advantage (besides skipping the initial parsing of the .py files; in this respect, it's like running your code the second time when the .pyc files are already created).
For "out of the box" performance improvement you can try running your script with PyPy instead of CPython - for "all interpreted" (=> no numpy & co.) numerical Python code I saw very often 20x speedups. | 1 | 0 | 0 | I am working on a script someone created for modifying 3D digital models that was written in Python code. The original author compiles the file into a Windows executable before distributing it. I'm guessing he uses py2exe or some similar tool.
My question is, is there any speed benefit in doing so? The script is very slow, and I'm hoping for better performance after compiling the script. Thanks. | Any speed benefit to compiling Python code? | 1.2 | 0 | 0 | 1,505 |
48,450,010 | 2018-01-25T18:55:00.000 | 2 | 0 | 0 | 0 | python,django,deployment | 48,450,115 | 1 | true | 1 | 0 | I recommend using virtual environment for your Django project.
Activating your virtual environment on the server (source bin/activate) gives you the same setup as on your local machine.
In your project
List all your dependencies in requirements.txt and settings in my_settings.py apart from django settings.py
In your server
just pull/transfer your code via git or any other means and activate virtual environment.
pip install -r requirements.txt and make any minor changes required in my_settings
Take care of your migrations and Db setup. You may have to run migrations if you are migrating to your server for the first time.
And thats it you are up and running. | 1 | 2 | 0 | I'm getting ready to move my Django project from my laptop to my server. What is the recommended way to do this? E.g., is there a Django command that will package everything up (and select the correct settings file for test vs prod servers) and create a zip or tar file that can be moved over to the server? Sort of like Ant for building Java projects. | Packaging Django code for deployment | 1.2 | 0 | 0 | 120 |
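A hedged sketch of the settings split mentioned in the answer above (file and variable names are illustrative): keep environment-specific values in my_settings.py and import them at the end of settings.py, so the same code base runs on the laptop and on the server:

    # my_settings.py -- one version per machine; keep production secrets out of version control
    DEBUG = False
    ALLOWED_HOSTS = ["example.com"]
    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.postgresql",
            "NAME": "mydb",
            "USER": "myuser",
            "PASSWORD": "change-me",
            "HOST": "localhost",
        }
    }

    # at the bottom of settings.py:
    # try:
    #     from my_settings import *   # override defaults with machine-specific values
    # except ImportError:
    #     pass                        # fall back to the defaults defined above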
48,453,620 | 2018-01-25T23:32:00.000 | 1 | 0 | 0 | 0 | java,python,arrays,jython | 48,453,704 | 3 | false | 1 | 0 | If you want a simple solution then I suggest that you write and read the integers to a file. Perhaps not the most elegant way but it would only take a couple of minutes to implement. | 3 | 0 | 0 | I have a java program and I need it to get some data calculated by a python script.
I've already got Java to send an integer to Python via Jython's PythonInterpreter and displayed it, but I can't get the result back to perform other operations. Also, it would be great to send a full integer array rather than a single integer, but I can't wrap my mind around PyObjects and how to use them.
Is there any useful tutorial that covers arrays? I've been searching for a while but I just find integer and float related tutorials. | How can I send a data array back and forth between java and python? | 0.066568 | 0 | 0 | 298 |
48,453,620 | 2018-01-25T23:32:00.000 | 0 | 0 | 0 | 0 | java,python,arrays,jython | 48,453,799 | 3 | false | 1 | 0 | If the solution of writing/reading the numbers to a file somehow is not sufficient, you can try the following:
Instead of using Jython, you can use Pyro4 (and the Pyrolite client library for your java code) to call a running Python program from your java code.
This allows you to run your python code in a 'normal' python 3.6 interpreter for instance, rather than being limited to what version Jython is stuck on.
You'll have to launch the Python interpreter in a separate process though (but this could very well even be on a different machine) | 3 | 0 | 0 | I have a java program and I need it to get some data calculated by a python script.
I've already got Java to send an integer to Python via Jython's PythonInterpreter and displayed it, but I can't get the result back to perform other operations. Also, it would be great to send a full integer array rather than a single integer, but I can't wrap my mind around PyObjects and how to use them.
Is there any useful tutorial that covers arrays? I've been searching for a while but I just find integer and float related tutorials. | How can I send a data array back and forth between java and python? | 0 | 0 | 0 | 298 |
48,453,620 | 2018-01-25T23:32:00.000 | 1 | 0 | 0 | 0 | java,python,arrays,jython | 48,454,007 | 3 | false | 1 | 0 | I've worked on similar project. Here's brief outline of what Java and Python was doing respectively.
Java
We used Java as a main server for receiving requests from clients and sending back responses after some data manipulation.
Python
Python was in charge of data manipulation and calculation. Data was sent from Java over a socket. We first defined the data we needed in string format, then converted it into bytes in order to have it sent over the socket.
Since there were limitations, though, using socket network, I changed it to Rest Api using Python Flask. In that way we could easily communicate with, not only but in this case mainly, Java with key-value json format. In this way, I was able to recieve any data type that could be passed through Api including array object you mentioned. | 3 | 0 | 0 | I have a java program and I need it to get some data calculated by a python script.
I've already got Java to send an integer to Python via Jython's PythonInterpreter and displayed it, but I can't get the result back to perform other operations. Also, it would be great to send a full integer array rather than a single integer, but I can't wrap my mind around PyObjects and how to use them.
Is there any useful tutorial that covers arrays? I've been searching for a while but I just find integer and float related tutorials. | How can I send a data array back and forth between java and python? | 0.066568 | 0 | 0 | 298 |
48,455,129 | 2018-01-26T02:48:00.000 | 0 | 0 | 1 | 0 | python,python-3.x,pandas,python-import | 48,455,544 | 1 | false | 0 | 0 | SOLVED. So when I did:
import sys
sys.path.append(path to pandas library)
it worked! so now I can fully use pandas. I guess I will just have to do this anytime I download a new library and it doesn't work. Thank you for all the help | 1 | 0 | 1 | I installed (with pip) MatPlotLib and Pandas, and they are both not working properly in programs. Here is the strange thing...
When I type the following into the interactive environment of IDLE
import pandas as pd
pd.Series([1, 2, 3, 4, 5])
I get this as output: (indicating that it works properly)
0 1
1 2
2 3
3 4
4 5
dtype: int64
But when I use that very same code in a python program, it crashes and says "AttributeError: module 'pandas' has no attribute 'Series'"
Can anyone tell me what's going on?
I can also successfully import matplotlib in the interactive environment but get errors when I do it in a program and run it.
EDIT: There is no shebang because I am running the program through IDLE. I am using python 3.6, the only python I have on my computer. I am executing this file by clicking run in IDLE.
Currently In my command prompt paths I have
C:\Users\Karl\AppData\Local\Programs\Python\Python36
C:\Users\Karl\AppData\Local\Programs\Python\Python36\Scripts\
EDIT 2: I think we are closer to finding out the problem. If I run a python program with the code above in the command line (this time with the proper shebang(I forgot one before)), it works! So this must be an idle issue.
Currently, one of IDLE's paths is to
C:\Users\Karl\AppData\Local\Programs\Python\Python36\lib\site-packages
which contains all of the libraries for python.
EDIT FINAL: SOLVED. So when I did:
import sys
sys.path.append(path to pandas library) it worked! so now I can fully use pandas. I guess I will just have to do this anytime I download a new library and it doesn't work. Thank you for all the help | Pip is correctly installing libraries to the proper directory, but I cannot import those packages properly in program | 0 | 0 | 0 | 53 |
48,455,275 | 2018-01-26T03:11:00.000 | 1 | 1 | 1 | 0 | python,visual-studio,keyboard-shortcuts | 48,455,444 | 1 | true | 0 | 0 | Download the extension 'Code Runner'. You may need to restart visual studio code after loading. Open your script in an editor window. Hit the keys 'control-alt-n' and your script should run. I just checked it on my mac and it ran fine. | 1 | 0 | 0 | Is there a way to easily run a currently active python file in Visual Studio? I'm used to Notepad++ in which I had customized it to run an active python file in cmd on ctrl+r which made testing code very easy and fast. If there was something similar I could do for Visual Studio, that would be wonderful.
Thanks! | How to easily run a python script from Visual Studio | 1.2 | 0 | 0 | 1,054 |
48,459,764 | 2018-01-26T10:41:00.000 | 0 | 0 | 0 | 0 | python,amazon-ec2,websocket,amazon-elb | 48,469,075 | 1 | false | 1 | 0 | To access the application via Load Balancer you have to make sure first that your target in Target Group is healthy. The health status is displayed in AWS Web console on your target group instance details on Targets tab.
If there are no targets in your Target Group, add one by pressing Edit button and selecting your EC2 instance from the list. Don't forget to use the appropriate port. Also make sure health checks are configured correctly (path, port...). You can find them on Health Checks tab of your target group details page.
If all above is ok and you have a healthy target in TG, but the ELB doesn't show your application, I'd recommend you to SSH to your EC2 instance with Flask app and check if that one is running correctly. | 1 | 1 | 0 | I have a Flask application running on AWS Application Load Balancer, but can't get web sockets to work. After reading several posts and configuring Load Balancers, Target Groups, stickiness on EC2, I came to the conclusion that it might be that ALB is not staring the application correctly.
Flask-SocketIo says to use socketio.run(application, host='0.0.0.0', port=port) to start up the web server as it encapsulates application.run(). But after further reading I found that EC2 already calls application.run() without the need of explicitly doing so in the start up script, and therefore it might just bypassing my socketio.run() and not be starting my web server.
Could this be the case? How can I verify it and make sure socketio is started properly? | flask-socketio run on Application Load Balancer | 0 | 0 | 1 | 648 |
48,466,626 | 2018-01-26T17:34:00.000 | 0 | 0 | 0 | 1 | python,linux,server,nohup,foreground | 48,466,757 | 1 | false | 0 | 0 | If you nohup a process, when you log out the parent of the process switches to being init (1) and you can't get control of it again.
The best approach is to have the program open a socket and then use that for ipc. You probably want to split your code in to 2 pieces - a daemon that runs in the background and keeps a socket open, and a client which connects to the socket to control the daemon. | 1 | 1 | 0 | I am working on a chat program with Python. I would like to use nohup because users always can access server when I am logout.
I could run nohup very well, and it was great. But I am an admin: I can write messages and see online users using Python. After I started the program with nohup and logged out, I can't access the Python process when I log back in. I want to bring it to the foreground again.
Yeah, I can see it in background with ps -aux . I see its PID,STAT but I don't know how to access it. I should access it.jobs doesn't see it. fg don't work. or I can't do. How can I do? | Access a progress that work background with nohup (LINUX) -get foreground | 0 | 0 | 0 | 122 |
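A minimal sketch of the daemon/client split suggested in the answer above, using the standard library's multiprocessing.connection; the port, authkey and command names are all hypothetical:

    # daemon.py - run this under nohup; it keeps listening after you log out
    from multiprocessing.connection import Listener

    with Listener(("localhost", 6000), authkey=b"secret") as listener:
        while True:
            with listener.accept() as conn:
                command = conn.recv()
                if command == "users":
                    conn.send(["alice", "bob"])   # placeholder data
                elif command == "quit":
                    conn.send("bye")
                    break

    # client.py - run this from any later login session to control the daemon
    # from multiprocessing.connection import Client
    # with Client(("localhost", 6000), authkey=b"secret") as conn:
    #     conn.send("users")
    #     print(conn.recv())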
48,467,301 | 2018-01-26T18:20:00.000 | 1 | 0 | 0 | 0 | python,sql,django,security | 48,467,379 | 1 | true | 1 | 0 | It is not safe to connect to a remote database in a scenario that you are describing.
For a potential hacker its a piece of cake to figure out the credentials of the remote database that you are using.
And to answer your question: it would be difficult for the hacker to replace the DB with a fake one, but that won't stop him from getting all the data from your DB and modifying it.
What you should do is have a REST API or GraphQL endpoint to interact with the database, and hit that endpoint from the client app. | 1 | 1 | 0 | I have a desktop app that is built on top of the Django framework and frozen to .exe using PyInstaller. The idea behind it is that the application should connect to a remote database (PostgreSQL) on a VPS. That VPS is serving static files for this application too. So here is the question - is that option secure? Can potential hackers connect to my database and make a mess in it or replace the original DB with a fake one? If they can, how should I fix that?
48,467,961 | 2018-01-26T19:06:00.000 | 2 | 0 | 0 | 0 | python,selenium,headless,browser-automation | 48,468,760 | 3 | false | 1 | 0 | What you are asking for is currently not possible. Further, such a "feature" would have nothing to do with Selenium, but the vendor of the browser. You can search their bug tracker to see if such a feature has already been requested.
The only currently available option is to run a full GUI browser during debugging / development of your tests. | 1 | 4 | 0 | I know this is sort of counter to the purpose of headless automation, but...
I've got an automation test running using Selenium and Chromedriver in headless mode. I'd prefer to keep it running headless, but occasionally it runs into an error that really needs to be looked at and interacted with. Is it possible to render and interact with a headless session? Maybe by duplicating the headless browser in a non-headless one? I can connect through remote debugging, but the Dev Tools don't seem to allow me to view the rendered page or interact with anything.
I am able to take screenshots, which sort of helps. But I'm really looking for the ability to interact--there's some drag-and-drop elements that aren't working well with Selenium that are causing issues occasionally. | Possible to open/display/render a headless Selenium session? | 0.132549 | 0 | 1 | 3,716 |
48,470,828 | 2018-01-26T23:07:00.000 | 0 | 0 | 1 | 0 | python-3.x,date,fuzzy-comparison | 48,471,254 | 1 | false | 0 | 0 | The problem you're describing is categorically known as "Natural Language Parsing", or NLP.
Googling for Python NLP Date Parsing libraries yields several results. You should do that, and evaluate them for your needs. | 1 | 2 | 1 | I have web scraped data in a column of a pandas dataframe that represents when different pieces of art were created. This data was entered in as strings by various people in many many different formats. Some examples:
1998
circa 1995
c. 2003-5
March 2, 1904
1st quarter of 19th century
19th to 20th century
ca. late 19th and early 20th Century
BCE 500
206 BCE-240 CE
1995-99
designed 1950, produced 1969
designed 1935, produced circa 1946-1968
1990; and 1989
1975/97
618-907 CE
2001; 2006 and 2008
1937-42/48
no date
n.d.
mid 1900s
late 1940's
I've spent a couple days writing a long transformer class that attempts to handle every combination in my current dataset, which is semi-successful, but I figured this must be something people have done in the past.
So does there exist any way in Python to handle date information that is extremely fuzzy in this way? | How to parse EXTREMELY fuzzy dates? | 0 | 0 | 0 | 290 |
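A hedged sketch of one common starting point, using python-dateutil's fuzzy parsing; it can recover explicit dates such as "March 2, 1904" or "circa 1995", but ranges like "19th to 20th century" still need custom rules on top:

    from dateutil import parser

    def try_parse(text):
        try:
            return parser.parse(text, fuzzy=True)
        except (ValueError, OverflowError):
            return None   # fall back to the hand-written transformer class

    for s in ["March 2, 1904", "circa 1995", "no date"]:
        print(s, "->", try_parse(s))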
48,473,417 | 2018-01-27T06:55:00.000 | 0 | 0 | 0 | 0 | python,tensorflow,deep-learning,convolution,image-segmentation | 48,475,567 | 1 | false | 0 | 0 | No, it's your decision how you calculate the kernel-map in every single convolutional layer. It's a matter of designing your model. | 1 | 2 | 1 | I am implementing some variants of FCN for Segmentation. In particular, I have implemented a U-net architecture. Within the architecture, I am applying valid convolution with a 3x3 kernel and then I apply transposed convolution for upsampling with a 2x2 kernel and stride of 2.
My question is, if using valid or same padding for the convolution, does this determine whether we use valid or same padding for the transposed convolution?
Currently I use valid padding for convolution and same padding for transposed convolution. | Transposed convolution TensorFlow padding for FCN style networks | 0 | 0 | 0 | 330 |
48,476,742 | 2018-01-27T14:24:00.000 | 0 | 1 | 0 | 0 | python,amazon-web-services,amazon-lex | 48,495,195 | 2 | false | 0 | 0 | If you are using your own website or an app for integrating the chatbot, then you can send some unique welcome text from that website/app when it loads for the first time i.e on load method to the amazon lex. And in amazon lex you can create a welcome intent and put exact same text as utterance.
This way, when the website/app loads, it will send that text to Amazon Lex, and Lex can fire the welcome intent and reply to it. | 0 | 2 | 0 | I am using AWS Lex for constructing chatbots. I had a scenario where I need to have a welcome message initially, without user input, so that I can give a direction to the user in my chatbot. | How to get welcome messages in AWS Lex (lambda in Python)? | 0 | 0 | 1 | 4,953
48,476,742 | 2018-01-27T14:24:00.000 | 5 | 1 | 0 | 0 | python,amazon-web-services,amazon-lex | 48,477,745 | 2 | false | 0 | 0 | You need to work that scenario using API call to start a context with your user.
You can follow these steps:
Create an intent called AutoWelcomeMessage.
Create a slot type with only one value, i.e. HelloMe.
Create an utterance HelloMessage.
Create a slot as follows: Required, name: answer, slot type: HelloMe, prompt: 'AutoWelcomePrompt'.
Pick an AWS Lambda function for your fulfillment that will send a response to your user, e.g.:
Hello user, may I help? (Here the user will enter another intent and your bot will respond.)
Now, to start a conversation with your user, just call your Lex bot via the API and send an intention with the intent AutoWelcomeMessage. That call starts a context with your Lex bot and the fulfillment will execute your Lambda (a minimal boto3 sketch of that call follows below). | 2 | 2 | 0 | I am using AWS Lex for constructing chatbots. I had a scenario where I need to have a welcome message initially, without user input, so that I can give a direction to the user in my chatbot. | How to get welcome messages in AWS Lex (lambda in Python)? | 0.462117 | 0 | 0 | 4,953
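A minimal boto3 sketch of the API call described in the answer above (Lex V1 runtime); the bot name, alias and user id are hypothetical:

    import boto3

    client = boto3.client("lex-runtime")

    response = client.post_text(
        botName="MyBot",           # hypothetical
        botAlias="prod",           # hypothetical
        userId="user-123",         # any id identifying the conversation
        inputText="HelloMessage",  # the utterance tied to AutoWelcomeMessage
    )
    print(response["message"])     # the reply produced by the fulfillment Lambda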
48,477,179 | 2018-01-27T15:16:00.000 | 0 | 0 | 1 | 0 | python,pygame,sprite | 48,477,360 | 2 | false | 0 | 1 | I would make a scraper-like tool:
1 From each red pixel, check for connecting white/blue pixels and put these in a list.
2 From each white pixel in the list, check if a blue pixel can be reached; if so, return True.
3 Add any white pixels that connect to white pixels already in the list (but don't add any that are already in the list).
4 If no new white pixels were added and no blue pixels were found, return False; else, go back to step 2 (a BFS sketch of this idea follows below). | 1 | 0 | 0 | I'm making a pipe connection game in Pygame where you rotate a set of given pipe pieces to connect the start to the end, and am having trouble making a way for the game to actually be completed. I need to find if any path of white pixels (from the pipe piece sprites) connects any red pixel to any blue pixel (the colours of the start and end pieces). How could I go about doing this? The background colour is black, if that helps. | How can I find if there is a white path connecting pixels of two different colours in Python / Pygame? | 0 | 0 | 0 | 69
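A sketch of the breadth-first search described in that answer, assuming pure red, blue and white pixels; real sprites with anti-aliased edges would need colour tolerances:

    from collections import deque
    import pygame

    RED, BLUE, WHITE = (255, 0, 0), (0, 0, 255), (255, 255, 255)

    def red_connects_to_blue(surface):
        width, height = surface.get_size()
        colour = lambda x, y: tuple(surface.get_at((x, y)))[:3]
        # start from every red pixel and flood through white pixels
        seen = {(x, y) for x in range(width) for y in range(height)
                if colour(x, y) == RED}
        queue = deque(seen)
        while queue:
            x, y = queue.popleft()
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if 0 <= nx < width and 0 <= ny < height and (nx, ny) not in seen:
                    c = colour(nx, ny)
                    if c == BLUE:
                        return True
                    if c == WHITE:
                        seen.add((nx, ny))
                        queue.append((nx, ny))
        return False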
48,477,469 | 2018-01-27T15:47:00.000 | 0 | 0 | 1 | 0 | python,batch-file | 48,477,630 | 1 | false | 0 | 0 | sys.argv returns a list of command line arguments you pass to the program. Like if you run the command as python demo.py var1 var2 var3 the sys.argv will return the list ['demo.py', 'var1', 'var2', 'var3'] just choose the index of whatever you want. Example sys.argv[0] will return 'demo.py', sys.argv[1] will return 'var1' | 1 | 0 | 0 | I want that my set /p Input1 = does something like py "Python.py" %Input1% but with multiple inputs. I also want to get the input from that very batch file into a python code. How do I take the input? I normally do it with a single input like sys.argv() but that doesn't seem to work. Any suggestions? | How to insert Batch input into Python | 0 | 0 | 0 | 673 |
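A small sketch of the pattern in that answer, assuming a hypothetical demo.py called from the batch file with py demo.py %Input1% %Input2%:

    # demo.py
    import sys

    args = sys.argv[1:]   # everything after the script name
    print("received", len(args), "arguments:", args)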
48,477,832 | 2018-01-27T16:23:00.000 | 2 | 1 | 0 | 0 | python,amazon-web-services,security,boto3 | 48,478,740 | 1 | false | 0 | 0 | There is nothing special or secure with a csv file. Its security risks are same as credentials file since both are text files. If you are worried about security and prefer a file option, one alternative I can think of:
Encrypt the credentials and store them as binary data in a file
In your Boto3 script, read the file, decrypt the data and supply the credentials to Boto3
You can use simple symmetric keys to encrypt the creds | 1 | 1 | 0 | TL;DR : Is passing auth data to a boto3 script in a csv file named as an argument (and not checked in) less secure than a plaintext shared credentials file (the default answer in docs) for any reason?
I want to write a boto3 script intended to run from my laptop that uses an IAM key. The main accepted way to initialize your session is to include the API key, the secret, the region, and (if applicable) your session key in a shared credentials file identified by AWS_SHARED_CREDENTIALS_FILE, or to have the key and secret be environment variables themselves (AWS_ACCESS_KEY_ID, etc.) What I would like to do is load these values in a dictionary auth from a csv or similar file, and then use the keys and values of this dictionary to initialize my boto3.Session. This is easy to do; but, because a utility to load auth data from csv is so obvious and because so few modules provide this utility, I assume there is some security problem with it that I don't know.
Is there a reason the shared credentials file is safer than a csv file with the auth data passed as an argument to the boto3 script? I understand that running this from an EC2 instance with a role assignment is best, but I'm looking for a way to test libraries locally before adding them to one run through role security. | AWS BOTO3 : Handling API keys | 0.379949 | 0 | 1 | 931 |
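A hedged sketch of the CSV approach the asker describes; the column names in credentials.csv are assumptions:

    import csv
    import boto3

    with open("credentials.csv", newline="") as handle:
        auth = next(csv.DictReader(handle))   # first data row as a dict

    session = boto3.Session(
        aws_access_key_id=auth["aws_access_key_id"],
        aws_secret_access_key=auth["aws_secret_access_key"],
        region_name=auth.get("region", "us-east-1"),
    )
    s3 = session.client("s3")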
48,478,283 | 2018-01-27T17:08:00.000 | 1 | 0 | 1 | 0 | python,get,web.py | 48,512,664 | 2 | false | 0 | 0 | GET takes the same number of arguments as captures from the url (plus "self"). If you capture two bits /something/(.*)/(.*), then GET takes self + 2 arguments.
You can use default values, if you want to use two different sets of URL but handle with the same GET().
You can also match (but not capture) within the URL -- if you want to match a URL, but don't need to pass a portion on to your GET().
For example support url /(dog|cat|pig)/([0-9]+), which would process URLs /dog/1, /cat/45, /pig/123 all with the same GET function: GET(self, animal, code). Two captures, two (plus "self") values.
Maybe you don't need to pass in the first bit (because you handle any of these three animals the same way), so instead use the url /(?:dog|cat|pig)/([0-9]+). You then use GET(self, code). This is useful to allow /zebra/234 to be processed by a different function. One capture, one (plus "self") value. | 1 | 0 | 0 | I'm having a little trouble understanding how GET(self,arg) takes arguments from the URL.
I couldn't find any complete documentation or reference for webpy.
From my understanding, webpy takes arguments from the URL based on the rules we define:
urls = ( '/something/(.*), 'some_class' )
So if we supply url like http://server.com/something/item, the arg for GET will be item.
I have tried the following, but didn't work:
urls = ( /something/(.*), 'some_class' ), GET(self,arg1,arg2) and http://server.com/something/item1/item2
urls = ( /something/(.*)/(.*), 'some_class' ), GET(self,arg1,arg2) and http://server.com/something/item1/item2
So, what's happening in the background, and what are the rules defining how GET takes its arguments? Does it take only one argument other than self?
Update:
Actually, the second example works. So, I think that's it. GET can take any number of arguments from URL. | Can GET(self,arg) in webpy take more than one argument other than self? | 0.099668 | 0 | 0 | 30 |
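A small runnable sketch of the capture rules discussed above (two captures, so GET takes two arguments besides self); the class name is hypothetical:

    import web

    urls = ('/(dog|cat|pig)/([0-9]+)', 'AnimalHandler')

    class AnimalHandler:
        def GET(self, animal, code):
            return "animal=%s code=%s" % (animal, code)

    if __name__ == "__main__":
        app = web.application(urls, globals())
        app.run()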
48,478,792 | 2018-01-27T17:57:00.000 | 1 | 0 | 0 | 0 | python-3.x,video,pyqt5 | 49,198,231 | 1 | false | 0 | 1 | Okay, so I couldn't find anything on "MP4 and green line" so I looked at how to modify the PyQt5 interface as a way of hiding the issue.
The option I chose was QGroupBox and changing the padding in the stylesheet to -9 (in my particular case - you may find another value works better but it depends on the UI).
I did attempt to use QFrame, as my other option, but this didn't personally work for me. | 1 | 0 | 0 | I've made a desktop app using Python 3 and PyQt 5 and it works except for the playback of the MP4 video files (compiled by pyrcc5). They are visible and play on the video widget but there is a green line down the right side. I tried to put a green frame (using a Style Sheet) around the QVideoWidget but with no success.
Does anyone have any advice on how to resolve this issue?
Thanks | Python 3, PyQt 5 - MP4 as resource file issue | 0.197375 | 0 | 0 | 273 |
48,480,183 | 2018-01-27T20:24:00.000 | 0 | 0 | 0 | 0 | python,python-3.x,postgresql,psycopg2,tor | 48,602,297 | 1 | false | 0 | 0 | This would be easy enough if I simply opened the database VPS to accept connections from anywhere
Here lies your issue. Simply lock down your VPS using fail2ban and ufw. Create a ufw rule that only allows connections to your Postgres port from the IP address you want to grant access to.
This way, you don't open your Postgres port to anyone (from *) but only to a specific server or servers that you control. Don't run an onion service to expose Postgres, because that will only complicate things and slow down the reads to your Postgres database, which I am assuming an API will eventually be consuming to get to the "useful data" you will be scraping.
I hope that at least points you in the right direction. Your question was pretty general, so I am keeping my answer along the same vein. | 1 | 0 | 0 | I'm creating a Python 3 spider that scrapes Tor hidden services for useful data. I'm storing this data in a PostgreSQL database using the psycopg2 library. Currently, the spider script and the database are hosted on the same network, so they have no trouble communicating. However, I plan to migrate the database to a remote server on a VPS so that I can have a team of users running the spider script from a number of remote locations, all contributing to the same database. For example, I could be running the script at my house, my friend could run it from his VPS, and my professor could run the script from a few different systems in the lab at the university, and all of these individual systems could synchronize with the PostgreSQL server runnning on my remote VPS.
This would be easy enough if I simply opened the database VPS to accept connections from anywhere, making the database public. However, I do not want to do this, for security reasons. I know I could tunnel the connection through SSH, but that would require giving each person a username and password that would grant them access to the server itself. I don't wish to do this. I'd prefer simply giving them access to the database without granting access to a shell account.
I'd prefer to limit connections to the local system 127.0.0.1 and create a Tor hidden service .onion address for the database, so that my remote spider clients can connect to the database .onion through Tor.
The problem is, I don't know how to connect to a remote database through a proxy using psycopg2. I can connect to remote databases, but I don't see any option for connecting through a proxy.
Does anyone know how this might be done? | Connect to remote PostgreSQL server over Tor? [python] [Tor] | 0 | 1 | 0 | 367 |
48,481,203 | 2018-01-27T22:28:00.000 | 0 | 1 | 1 | 1 | python,atom-editor | 48,512,683 | 1 | true | 0 | 0 | Only way I can change Atom python is to run it from a directory that has a different default python version. if I type python from a terminal window, whichever version of python that opens is the version Atom uses. I use virtual environments so I can run python 2.7.13 or python 3.6. If I want Atom to run python 3, I activate my python 3 environment and then run atom.
There may be a way to do this from within Atom but I haven't found it yet. | 1 | 0 | 0 | I have downloaded Anaconda on my computer however Anaconda is installed for all users on my mac therefore when I try and access python2.7 by typing in the path: /anaconda3/envs/py27/bin:/anaconda3/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin
Even if I open from terminal the path above is not in the current directory since:
machintoshHD/anaconda3/....
machintoshHD/Users/adam/desktop....
How can I redirect the configure-script feature in the Atom script package so that I can run Python 2? | Atom script configure script run python 2.7 | 1.2 | 0 | 0 | 423
48,481,327 | 2018-01-27T22:44:00.000 | 0 | 0 | 0 | 0 | python,niftynet | 58,831,150 | 2 | false | 0 | 0 | Please Install tensorflow using this command
pip install tensorflow
After that, install NiftyNet using the command below:
pip install niftynet
If neither of those helps, reinstall Python itself and repeat the two installs above.
If the problem is still there, please describe it in more detail.
please make sure your environment variable is set before executing the command from niftynet page. | 1 | 3 | 1 | I'm trying out NiftyNet and got stuck at the first step.
Trying to run the quickstart command
python net_download.py dense_vnet_abdominal_ct_model_zoo
python net_segment.py inference -c ~/niftynet/extensions/dense_vnet_abdominal_ct/config.ini
gives me
KeyError: "Registering two gradient with name 'FloorMod' !(Previous registration was in _find_and_load_unlocked :955)"
Could any one help? I'm using Ubuntu 16.04 with Nvidia GPU. Tried tensorflow:1.4.1-py3 docker image, Anaconda with CPU version of tensorflow
and native python with CPU version of tensorflow and I get the same error.
I'm pretty sure it's something I did wrong because I get the same error from those different environments, but I'm not sure what...
Thanks! | Error while trying to run NiftyNet quick start command | 0 | 0 | 0 | 599 |
48,481,765 | 2018-01-27T23:52:00.000 | 0 | 0 | 0 | 0 | django,python-3.x,django-templates | 68,304,143 | 2 | false | 1 | 0 | {{ value|rjust:"10" }} didn't work for me so I had to format the string before being passed to the HTML page.
I first right aligned the string with an arbitrary character:
amount.rjust(12, '$')
And then replace the character with " "
amount.replace('$', ' ') | 1 | 0 | 0 | I have a table in my HTML and I want to put a float variable into a column. I also want this variable to justify itself to the right.
Currently, I am trying {{ float|floatformat:2|rjust }}, but it keeps throwing up a TemplateSyntaxError. Is it even possible to do this via the template system, or will I just have to use some CSS styling for this? | Format a float to justify to the right | 0 | 0 | 0 | 855 |
48,481,801 | 2018-01-27T23:59:00.000 | 1 | 0 | 1 | 0 | python,encoding | 48,481,828 | 2 | false | 0 | 0 | Found the solution. repr() will do. | 1 | 1 | 0 | I am encoding Chinese characters using gb18030 in python. I want to access part of the encoded string. For example, the string for 李 is: '\xc0\xee'. I want to extract 'c0' and 'ee' out of this. However, python is not treating '\xc0\xee' as a 8 character string, but as a 2 character string. How I do turn it into a 8 character string so that I could access the individual roman letters in it? | how to access part of a encoded (gb18020) string in python | 0.099668 | 0 | 0 | 16 |
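A Python 3 sketch of what the accepted idea boils down to: get the raw gb18030 bytes and work with their hex text instead of the escaped representation:

    encoded = "李".encode("gb18030")   # b'\xc0\xee'
    hex_text = encoded.hex()           # 'c0ee' - a plain 4-character ASCII string
    print(hex_text[:2], hex_text[2:])  # c0 ee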
48,481,873 | 2018-01-28T00:08:00.000 | 2 | 0 | 0 | 0 | python,tensorflow,neural-network,conv-neural-network | 48,482,421 | 1 | false | 0 | 0 | A unity stride is the same as not having a stride (turning it off), as its the normal way a convolution works.
As the stride is the amount of pixels the sliding window moves, one is the minimum value, and zero would not be valid as then the sliding window wouldn't move at all. | 1 | 0 | 1 | is there a way that I can turn off stride in tensor flow when using: tf.layers.conv2d()? According to the docs, the default is (1,1) but when I try to change this to (0,0) I get an error telling me that it has to be a positive number.
Thanks. | how to set stride to zero when using tf.layers.conv2d | 0.379949 | 0 | 0 | 336 |
48,484,272 | 2018-01-28T07:49:00.000 | 0 | 1 | 0 | 0 | telegram-bot,python-telegram-bot | 72,498,718 | 2 | false | 0 | 0 | The pyrogram have DeletedMessagesHandler / @Client.on_deleted_messages(). If used as Userbot, It handles in all chat groups channels. I failed to filter. Maybe it will work in a bot | 2 | 8 | 0 | Is there any way that can handle deleted message by user in one-to-one chat or groups that bot is member of it ?
there is method for edited message update but not for deleted message . | handle deleted message by user in telegram bot | 0 | 0 | 1 | 1,993 |
48,484,272 | 2018-01-28T07:49:00.000 | 9 | 1 | 0 | 0 | telegram-bot,python-telegram-bot | 48,485,447 | 2 | true | 0 | 0 | No. There is no way to track whether messages have been deleted or not. | 2 | 8 | 0 | Is there any way that can handle deleted message by user in one-to-one chat or groups that bot is member of it ?
there is method for edited message update but not for deleted message . | handle deleted message by user in telegram bot | 1.2 | 0 | 1 | 1,993 |
48,486,923 | 2018-01-28T13:34:00.000 | 0 | 0 | 1 | 0 | python,console,pycharm,mkstemp | 48,487,528 | 1 | false | 0 | 0 | Maybe there is an issue with PyCharms configuration for your project. Try and delete .idea folder. This helped me when I had similar issues with PyCharm console. | 1 | 0 | 0 | I can't connect to the console process when I press Python console in Tools.
The interpreter tries to "import mkdtemp, mkstemp" but can't import mkdtemp.
I have tried to check if there is something wrong with the project's settings but couldn't find something. | Pycharm - Couldnt connect to console process | 0 | 0 | 0 | 1,131 |
48,491,623 | 2018-01-28T21:41:00.000 | 0 | 0 | 1 | 0 | python,kivy | 50,039,603 | 2 | false | 0 | 1 | i got the same issue (error)
pip couldn't find any version that satisfies the requirement because pywin32 doesn't support that version of Python.
Type in the console (cmd): pip install pypiwin32==219
Version 219 supports nearly all versions of Python (not all of them!).
If that still doesn't work, choose one of these versions of Python and install it, then it will work:
2.7.X
3.1.X
3.2.X
3.3.X
3.4.X
3.5.X
notice that this version doesn't support 3.6 , anyway good luck with your programming see you again boys ! | 1 | 1 | 0 | I am using pip install to install kivy on windows 10. I keep getting an error in the command prompt that states:
(could not find a version that satisfies the requirement pywin32 (from
versions: ) no matching distribution found for pywin32 (from
pypiwin32)
Do you have any ideas? | Error installing kivy | 0 | 0 | 0 | 1,882 |
48,494,296 | 2018-01-29T04:13:00.000 | -1 | 0 | 1 | 0 | python,loops,binary | 48,494,319 | 4 | false | 0 | 0 | A binary string has been defined as a string that only contains "0" or "1". So, how about checking each 'character' in the string, and if it's not a "0" or "1" you will know that the string is not a binary string. | 1 | 0 | 0 | The question on my assignment is as follows:
Write a function that takes, as an argument, a string, identified by the variable aString. If the string only contains digits 0 and 1, return the string formed by concatenating the argument with the string "is a binary string." Otherwise, return a string indicating the length of the argument, as specified in the examples that follow. Name this function AmIBinary(aString).
I am having trouble figuring out how to form a loop which searches through a string and determines whether or not the string is a binary string. I understand how to get the length of a string, I just don't understand how to figure out if it is a binary string. | Python Binary String Loop | -0.049958 | 0 | 0 | 2,689 |
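A minimal sketch of the function described in the assignment; the wording of the length message is an assumption, since the examples it refers to were not included here:

    def AmIBinary(aString):
        if aString and all(ch in "01" for ch in aString):
            return aString + " is a binary string."
        return "The argument has length " + str(len(aString))   # assumed format

    print(AmIBinary("010110"))
    print(AmIBinary("0123"))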
48,501,645 | 2018-01-29T12:47:00.000 | 0 | 0 | 0 | 0 | windows,python-3.x,powershell,winrm,wsman | 64,877,092 | 1 | false | 0 | 0 | Actually, I had a quick look at the code of wirm (as of 20201117)
and the "Session" is not an actual session in the traditional sense, but only an object holding the creds to authenticate.
Each time run_cmd or run_ps is invoked, a session in opened on the target and closed on completion of the task. So there's nothing to close, really. | 1 | 3 | 0 | hello I'm using PyWinRM to poll a remote windows server.
s = winrm.Session('10.10.10.10', auth=('administrator', 'password'))
As there is no s.close() function available, I am worried about leaking file descriptors.
I've checked by using lsof -p <myprocess> | wc -l and my fd count is stable
but my google searches show that ansible had fd leaks previously; ansible relies on pywinrm to manage remote window hosts as well
kindly advice, thanks! | how do you close a Pywinrm session? | 0 | 0 | 0 | 987 |
48,503,130 | 2018-01-29T14:05:00.000 | 2 | 0 | 1 | 0 | python,pip,setuptools,pypi | 48,504,790 | 1 | true | 0 | 0 | It's possible with data_files but not recommended. Think about a package installed into a separate environment created with virtualenv — users will be surprised if such a package installs files outside that separate environment.
Hence advice: distinguish pip-installable package that must be self-contained and must not install anything beyond python code and files required for the code (could be installed with package_data) from full-blown installable package created with installer builders like RPM or DEB. | 1 | 3 | 0 | Is it possible to install icons and launchers using a setup.py file, setuptools and PyPI? Like, I'm talking about including .desktop launcher files for Python scripts included in the package and .svg icons for those launchers. Usually the .desktop files would be installed at /usr/share/applications and the icons would be installed at /usr/share/icons. | Can pip and setuptools install icons and launchers? | 1.2 | 0 | 0 | 741 |
48,503,493 | 2018-01-29T14:25:00.000 | 1 | 1 | 1 | 0 | python,git,gitignore | 48,503,972 | 1 | true | 0 | 0 | I want to keep files in my git master, which should not be downloaded when the git is cloned.
Impossible. Either store the files in a different branch or in an entirely different repository from where a script will clone them. | 1 | 0 | 0 | I want to keep files in my git master, which should not be downloaded when the git is cloned. Only running an additional script should download these optional files (some trained models).
Is there any git way (e.g. a special command)/pythonic way to do this? | How to clone/load special data mentioned in gitignore from master | 1.2 | 0 | 0 | 17 |
48,503,540 | 2018-01-29T14:27:00.000 | 0 | 0 | 0 | 0 | python,graph,networkx | 48,504,499 | 2 | false | 0 | 0 | I guess you can use a directed graph and store the direction as an attribute if you don't need to represent that directed graph. | 2 | 0 | 0 | I'm looking for a way to implement a partially undirect graph. This is, graphs where edges can be directed (or not) and with different type of arrow (>, *, #, etc.).
My problem is that when I try to use an undirected graph from NetworkX and store the arrow type as an attribute, I don't find an efficient way to tell NetworkX whether that attribute (arrow type) goes from a to b or from b to a.
Does anyone know how to handle this? | Partially undirect graphs in Networkx | 0 | 0 | 1 | 215 |
48,503,540 | 2018-01-29T14:27:00.000 | 0 | 0 | 0 | 0 | python,graph,networkx | 48,583,218 | 2 | true | 0 | 0 | After search it in a lot of different sources, the only way to do a partial undirect graph I've found it is this is through adjacent matrices.
NetworkX has good tools to move between a graph and an adjacency matrix (in pandas and NumPy array formats).
The disadvantage is if you need networkx functions you have to program it yourself or convert the adjacent matrix to networkx format and then return it back to your previous adjacent matrix. | 2 | 0 | 0 | I'm looking for a way to implement a partially undirect graph. This is, graphs where edges can be directed (or not) and with different type of arrow (>, *, #, etc.).
My problem is that when I try to use an undirected graph from NetworkX and store the arrow type as an attribute, I don't find an efficient way to tell NetworkX whether that attribute (arrow type) goes from a to b or from b to a.
Does anyone know how to handle this? | Partially undirect graphs in Networkx | 1.2 | 0 | 1 | 215 |
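One possible sketch combining both answers (assuming NetworkX 2.x and pandas): keep the arrow type and direction as edge attributes, and move to an adjacency matrix when needed:

    import networkx as nx

    G = nx.Graph()
    G.add_edge("a", "b", directed=True, head="b", arrow=">")   # a -> b
    G.add_edge("b", "c", directed=False, arrow=None)           # plain undirected edge

    for u, v, data in G.edges(data=True):
        print(u, v, data)

    A = nx.to_pandas_adjacency(G)    # adjacency-matrix view, as the second answer suggests
    H = nx.from_pandas_adjacency(A)  # and back (edge attributes are not preserved)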
48,503,593 | 2018-01-29T14:30:00.000 | 2 | 0 | 1 | 0 | python,pip | 53,537,810 | 2 | false | 0 | 0 | At the time of writing, OP has 4 thumbs down because the question was worded in granular manner not indicative of the real problem, paraphrased below:
"Why does the --help option not display all possible flags that the pip command supports, and their usages?"
Answer:
More are covered in the manual page link that user:plaes provided, such as "-r" for requirements. | 1 | 9 | 0 | I was advised to use pip install module-name -t /path/to/project-dir, but I did not understand what the t flag is for. Can someone help me? | What is the -t flag for pip? | 0.197375 | 0 | 0 | 7,308 |
48,508,135 | 2018-01-29T18:56:00.000 | 0 | 1 | 1 | 0 | python,scheduled-tasks | 62,406,225 | 1 | false | 0 | 0 | please select the task from the task scheduler library and click history. from there select "Action completed" task category and right click.. then select event properties.. you will find the resulting code.. if this shows 0 that is okay if there is something else you need to look for that code!
this applies for Windows 2012 R2 servers | 1 | 1 | 0 | I have searched StackOverflow and haven't found an answer to my problem.
I run a Python script on Task Scheduler that runs a few times per day and sends out an email to various people. It ran well for the past year, but over the past week it sometimes started getting stuck halfway through, and so it didn't send out the email to everyone. I'm trying to figure out what is causing the error and what it is getting stuck on, but I can't find any way to save or output the Python console log with error messages while running in Task Scheduler. How do I see what is causing the error?
Thanks for your help. | Logging Python console errors in Task Scheduler | 0 | 0 | 0 | 998 |
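A hedged sketch of a simple way to capture errors when the script runs under Task Scheduler: log to a file and record full tracebacks. The log path and the job function are hypothetical placeholders:

    import logging

    logging.basicConfig(
        filename=r"C:\jobs\email_job.log",   # hypothetical log path
        level=logging.INFO,
        format="%(asctime)s %(levelname)s %(message)s",
    )

    def send_emails():
        # placeholder for the real job logic
        logging.info("sending emails...")

    if __name__ == "__main__":
        try:
            send_emails()
            logging.info("job finished")
        except Exception:
            logging.exception("job failed")  # full traceback goes to the log file
            raise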
48,509,766 | 2018-01-29T20:52:00.000 | 6 | 0 | 0 | 0 | python,matplotlib | 53,142,990 | 1 | true | 0 | 0 | I solved this by running the following imports in the following order:
import matplotlib.pyplot as plt
import mpl_toolkits
from mpl_toolkits.mplot3d import Axes3D
Note: This only worked for me on python3. So first, I had to install python3 and pip3. Then I did "pip3 install matplotlib" in Terminal. If you already have matplotlib then try "pip3 install --upgrade matplotlib" | 1 | 2 | 1 | I have a python program and am trying to plot something using the mplot3d from mpl toolkits, but whenever I try to import the Axes3D from mpl_toolkits from mpl_toolkits.mplot3d import Axes3D
I get the following error: ImportError: No module named mpl_toolkits | ImportError: No module named mpl_toolkits | 1.2 | 0 | 0 | 11,673 |
48,512,013 | 2018-01-30T00:10:00.000 | 7 | 1 | 0 | 1 | python,heroku | 50,388,192 | 6 | false | 1 | 0 | Make Sure Procfile should not have any extension like .txt
otherwise this will be the error
remote: -----> Discovering process types
remote: Procfile declares types -> (none)
To create a file without an extension, type the following in cmd:
notepad Procfile.
Now add web: gunicorn dep:app and save
Now when you git push heroku master, the lines above will look like:
remote: -----> Discovering process types
remote: Procfile declares types -> web
And the error is gone when you run
C:\Users\Super-Singh\PycharmProjects\URLShortener>heroku ps:scale web=1
Scaling dynos... done, now running web at 1:Free | 3 | 22 | 0 | I'm trying to deploy a simple python bot on Heroku but I get the error
couldn't find that process type
When I try to scale the dynos. I already made a procfile and it looks like this:
web: gunicorn dep:app, where "dep" is the name of my python code
What could be the reason? | Couldn't find that process type, Heroku | 1 | 0 | 0 | 49,734 |
48,512,013 | 2018-01-30T00:10:00.000 | 0 | 1 | 0 | 1 | python,heroku | 71,761,780 | 6 | false | 1 | 0 | While it's not Python, in my case, I had heroku/java followed by heroku/pgbouncer. In Heroku's Settings, I switched them so heroku/pgbouncer was on top. This fixed the issue. Perhaps your buildpacks need to be ordered differently if you are using multiple. | 3 | 22 | 0 | I'm trying to deploy a simple python bot on Heroku but I get the error
couldn't find that process type
When I try to scale the dynos. I already made a procfile and it looks like this:
web: gunicorn dep:app, where "dep" is the name of my python code
What could be the reason? | Couldn't find that process type, Heroku | 0 | 0 | 0 | 49,734 |
48,512,013 | 2018-01-30T00:10:00.000 | 56 | 1 | 0 | 1 | python,heroku | 53,184,918 | 6 | false | 1 | 0 | This may happen if your procfile is misspelt, such as "procfile" or "ProcFile" etc. The file name should be "Procfile" (with a capital P).
Sometimes changing the file name is not enough, because git wouldn't spot the change. I had to delete the Procfile completely, commit the change, then add it again with the right name, and then commit that again:
remove your procfile
git commit
add a new procfile with the exact name "Procfile"
commit again
git push heroku master (or main - new heroku projects now uses main)
should work! | 3 | 22 | 0 | I'm trying to deploy a simple python bot on Heroku but I get the error
couldn't find that process type
When I try to scale the dynos. I already made a procfile and it looks like this:
web: gunicorn dep:app, where "dep" is the name of my python code
What could be the reason? | Couldn't find that process type, Heroku | 1 | 0 | 0 | 49,734 |
48,514,039 | 2018-01-30T04:46:00.000 | 0 | 0 | 1 | 0 | python | 48,514,101 | 2 | false | 0 | 0 | It is part Python and part the editor/terminal you are using. First try using something like IDLE instead of your terminal. Then you will want to ensure you are using the correct code-page (likely Big5 Chinese) Unicode instead of the default byte string.
Hope this helps. | 1 | 0 | 0 | I read a string containing Chinese characters and it displays as "❤12💳✈"
How to display the string correctly?
Thank you. | How to display Chinese characters correctly in Python? | 0 | 0 | 0 | 311 |
48,517,407 | 2018-01-30T08:57:00.000 | 0 | 0 | 0 | 0 | python,amazon-web-services,amazon-s3,boto3,pre-signed-url | 52,748,706 | 1 | false | 1 | 0 | Have you tried cycling through the list of items in the bucket?
do a aws s3 ls <bucket_name_with_Presigned_URL> and then use a for loop to get each item.
Hope this helps. | 1 | 0 | 0 | I am trying to give temporary download access to a bucket in my s3.
using boto3.generate_presigned_url(), I have only managed to download a specific file from that bucket but not the bucket itself.
is there any option to do so or my only option is to download the bucket content, zip it, upload it, and give access to the zip? | boto3 python generate pre signed url for a whole bucket | 0 | 0 | 1 | 1,170 |
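Since a single pre-signed URL cannot cover a whole bucket, a common workaround is to generate one URL per object; a hedged boto3 sketch with a hypothetical bucket name:

    import boto3

    s3 = boto3.client("s3")
    bucket = "my-bucket"   # hypothetical

    urls = []
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            urls.append(
                s3.generate_presigned_url(
                    "get_object",
                    Params={"Bucket": bucket, "Key": obj["Key"]},
                    ExpiresIn=3600,   # one hour of temporary access
                )
            )
    print(len(urls), "temporary links generated")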
48,517,784 | 2018-01-30T09:18:00.000 | 0 | 0 | 0 | 0 | python,hdf5,h5py | 49,185,722 | 2 | false | 0 | 0 | If you need consistency and avoid corrupted hdf5 files, you may like to:
1) Use a write-ahead log: append what is being added/updated to the log each time; there is no need to write to the HDF5 file at that moment.
2) Periodically, or at the time you need to shut down, replay the logs to apply them one by one and write them to the HDF5 file.
3) If your process goes down during 1), you won't lose data; after you start up next time, just replay the logs and write them to the HDF5 file.
4) if your process is down during 2), you will not lose data, just remove the corrupted hdf5 file, replay the logs and write it again. | 1 | 2 | 1 | I have some data in memory that I want to store in a HDF file.
My data are not huge (<100 MB, so they fit in memory very comfortably), so for performance it seems to make sense to keep them there. At the same time, I also want to store it on disk. It is not critical that the two are always exactly in sync, as long as they are both valid (i.e. not corrupted), and that I can trigger a synchronization manually.
I could just keep my data in a separate container in memory, and shovel it into an HDF object on demand. If possible I would like to avoid writing this layer. It would require me to keep track of what parts have been changed, and selectively update those. I was hoping HDF would take care of that for me.
I know about the driver='core' with backing store functionality, but AFAICT it only syncs the backing store when closing the file. I can flush the file, but does that guarantee the object is written to storage?
From looking at the HDF5 source code, it seems that the answer is yes. But I'd like to hear a confirmation.
Bonus question: Is driver='core' actually faster than normal filesystem back-ends? What do I need to look out for? | Fine control over h5py buffering | 0 | 0 | 0 | 870 |
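A small sketch of the driver='core' setup being discussed; whether flush() pushes the in-memory image to the backing store is exactly the point the asker wants confirmed, so treat the comment as the answer's reading of the HDF5 source rather than a guarantee:

    import h5py
    import numpy as np

    # keep the file image in memory, with an on-disk backing store
    f = h5py.File("cache.h5", "w", driver="core", backing_store=True)
    f.create_dataset("data", data=np.arange(100))
    f.flush()   # request that the in-memory image be written out
    f.close()   # closing definitely writes the backing store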
48,524,573 | 2018-01-30T15:06:00.000 | 0 | 0 | 1 | 1 | python,linux,windows | 48,548,392 | 1 | false | 0 | 0 | The if is okay but don't litter that all over the code but try to isolate an API. See for instance how the os module handles importing the appropriate platform dependent path module. | 1 | 0 | 0 | Apologies if this has been answered on here before, but I did some searching and was unable to find an answer.
I'm taking over a Python application that runs on a remote Linux box, but need to do development locally on Windows. Naturally, I want the code I develop locally to match what gets deployed in production, but there are sections of the code that need to be handled differently between environments (due to library dependencies and OS minutiae).
Right now I'm simply handling this via if statements and sys.platform like the following:
if sys.platform == "linux":
and this works but seems to me like there would be a better way to implement it.
Is there a more canonical or fault-tolerant way to do this? | Is there a best-practice way to implement a Python script that handles Windows and Linux environments differently? | 0 | 0 | 0 | 41 |
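A minimal sketch of the "isolate an API" advice from the answer, in the spirit of how os picks its path module; the function and paths are hypothetical:

    import sys

    if sys.platform.startswith("linux"):
        def default_config_dir():
            return "/etc/myapp"
    elif sys.platform.startswith("win"):
        def default_config_dir():
            return r"C:\ProgramData\myapp"
    else:
        raise RuntimeError("unsupported platform: " + sys.platform)

    # the rest of the code base imports default_config_dir
    # and never checks sys.platform directly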
48,525,733 | 2018-01-30T16:03:00.000 | 0 | 0 | 0 | 0 | python,tensorflow,amazon-s3,amazon-sagemaker | 48,531,968 | 1 | false | 1 | 0 | Create an object in S3 and enable versioning to the bucket. Everytime you change the model and save it to S3, it will be automatically versioned and stored in the bucket.
Hope it helps. | 1 | 0 | 1 | I am executing a Python-Tensorflow script on Amazon Sagemaker. I need to checkpoint my model to the S3 instance I am using, but I can't find out how to do this without using the Sagemake Tensorflow version.
How does one checkpoint to an S3 instance without using the Sagemaker TF version? | TensorFlow Checkpoints to S3 | 0 | 0 | 0 | 895 |
48,527,451 | 2018-01-30T17:42:00.000 | 1 | 0 | 1 | 0 | python,jupyter-notebook | 48,527,617 | 2 | false | 0 | 0 | usually, yes, so long as the kernel is still up. the return values of all expressions evaluated are stored in the Out global list. If you are now executing statement number n, then Out[n-1] will have the last thing you successfully finished.
if your output was not returned, but rather printed. You're out of luck... | 1 | 3 | 1 | Is there any way to see the previous output without rerunning the program? For example, I left my ML algorithm to work overnight and in the morning I got the results. But, for some reason, when I pressed Enter on the original code, it started to run again and the original output disappeared. | Jupyter Notebook and previous output | 0.099668 | 0 | 0 | 3,493 |
48,527,785 | 2018-01-30T18:02:00.000 | 0 | 0 | 1 | 0 | python,bokeh | 63,316,299 | 2 | false | 0 | 0 | Renaming the bokeh.py file and deleting bokeh.pyc file solved it for me. | 2 | 8 | 1 | I am trying to import curdoc. I have tried from bokeh.io import curdoc and from bokeh.plotting import curdocbut neither works.
I've tried pip install -U bokeh and pip install bokeh but it still returns no module named 'bokeh.plotting; 'bokeh' is not a package'. What is happening?
I have reverted back to 0.12.1 currently. | No module named 'bokeh.plotting'; bokeh is not a package | 0 | 0 | 0 | 6,323 |
48,527,785 | 2018-01-30T18:02:00.000 | 18 | 0 | 1 | 0 | python,bokeh | 52,453,673 | 2 | true | 0 | 0 | Check your folder if any of the program named bokeh.py please rename it because it's picking bokeh.plotting from your program bokeh.py not from the library. | 2 | 8 | 1 | I am trying to import curdoc. I have tried from bokeh.io import curdoc and from bokeh.plotting import curdocbut neither works.
I've tried pip install -U bokeh and pip install bokeh but it still returns no module named 'bokeh.plotting; 'bokeh' is not a package'. What is happening?
I have reverted back to 0.12.1 currently. | No module named 'bokeh.plotting'; bokeh is not a package | 1.2 | 0 | 0 | 6,323 |
48,527,808 | 2018-01-30T18:04:00.000 | 1 | 0 | 0 | 0 | python,optimization,neural-network,deep-learning,pytorch | 48,529,697 | 1 | true | 0 | 0 | Probably your learning rate is too high. Try decreasing your learning rate. A too large learning rate is the most common reason for loss increasing from the first epoch.
Also, your loss is very high. It is unusual to have such a high loss. You probably have a sum in your loss function; it might be wiser to replace that sum with a mean. While this makes no difference if you use the Adam optimizer, if you use plain SGD (with or without momentum), using a sum instead of a mean means you will need to tune your learning rate differently whenever the dimensions of your system (or the length of the sequence processed by your LSTM) change. | 1 | 0 | 1 | I am training my Siamese network for NLP. I have used an LSTM in it, and BCELoss. My loss is increasing from the first epoch. The losses for the first 36 epochs are:
(each value below is the error after the given epoch, reported as a torch.FloatTensor of size 1)
0: 272.4357, 1: 271.8972, 2: 271.5598, 3: 271.6979, 4: 271.7315, 5: 272.3965, 6: 273.3982, 7: 275.1197, 8: 275.8228, 9: 278.3311,
10: 277.1054, 11: 277.8418, 12: 279.0189, 13: 278.4090, 14: 281.8813, 15: 283.4077, 16: 286.3093, 17: 287.6292, 18: 297.2318, 19: 307.4176,
20: 304.6649, 21: 328.9772, 22: 300.0669, 23: 292.3902, 24: 300.8633, 25: 305.1822, 26: 333.9984, 27: 346.2062, 28: 354.6148, 29: 341.3568,
30: 369.7580, 31: 366.1615, 32: 368.2455, 33: 391.4102, 34: 394.3190, 35: 401.0990, 36: 422.3723 | Loss is increasing from first epoch itself | 1.2 | 0 | 0 | 1,124
48,528,112 | 2018-01-30T18:23:00.000 | 6 | 0 | 1 | 0 | python,string | 48,528,199 | 4 | true | 0 | 0 | Using split(" in "), you can split the string from the "in".
This produces a list with the two ends. Now take the first part by using [0]:
string.split(" in ")[0]
If you don't want the space character at the end, then use rstrip():
string.split(" in ")[0].rstip()
Welcome. | 1 | 5 | 0 | I've got the following string: blah blah blah blah in Rostock
What's the pythonic way for removing all the string content from the word 'in' until the end, leaving the string like this: 'blah blah blah blah' | Remove string characters from a given found substring until the end in Python | 1.2 | 0 | 0 | 4,890 |
48,528,477 | 2018-01-30T18:46:00.000 | 0 | 0 | 1 | 0 | python,anaconda,packages,python-idle | 48,535,641 | 2 | false | 0 | 0 | The IDLE that comes with python 3.5.2 can only be run by python 3.5.2. Code you submit to python 3.5.2 through that IDLE can normally only access packages installed for 3.5.2, plus your own code. I believe Anaconda 3.6.3 comes with Python 3.6.3 and the 3.6.3 standard library, including the 3.6.3 version of idlelib.
In order for your code to use the packages installed with anaconda, you must run your code with the anaconda binary. To run your code from IDLE with the anaconda binary, you must run IDLE with that binary instead of some other binary (like the 3.5.2 binary.
When running Python 3.6.3 interactively, you can start IDLE 3.6.3 at a >>> prompt with import idlelib.idle. If you can start python 3.6.3 in a terminal (Command Prompt on Windows), then adding the arguments -m idlelib will start IDLE.
On Windows, I have no idea whether or not Anaconda adds 'Edit with IDLE 3.6.3' to the right-click context menu for .py and .pyw files, the way the python.org installer does. On any system, you should be able to create a file or icon that will start 3.6.3 with IDLE, but details depend heavily on OS and version. | 1 | 1 | 0 | The title says it all i want to be able to use the packages that are installed with anaconda with idle so are there any ways of making this work?
When i try to import packages with idle that i installed using anaconda it says package not found.
I need some help please and thank you in advance. | how can i get IDLE (Python) to use packages installed by anaconda (windows 7 32bit)? | 0 | 0 | 0 | 3,178 |
48,529,455 | 2018-01-30T19:50:00.000 | 0 | 0 | 0 | 0 | javascript,jquery,python,ajax,django | 48,529,535 | 2 | false | 1 | 0 | I think it's bad practice to continuously ping the server like that. Especially considering that with something like a game, a lot of things can be changing very quickly. Your best bet is to keep track of what's going on in the front end, and then ping the server as needed to send or request information, query the database, etc (in other words, go with number 2, except you won't need to "compare" it with what's going on in the session, you'd simply be saving the finished game information to your database.)
All in all it depends on the complexity of the game and how you decide to structure it. | 1 | 0 | 0 | Here's my problem.
I have a small javascript game, which I am trying to pair with Django backend, to store highscores in sessions, maybe implement a highscore board, login, etc. in future. Just as a practice of interaction between front/backend.
So far I have the game running in javascript, and my plan is this: upon getting a score, it sends an ajax (jQuery) request to django, telling django to increment the score. Now here's few questions I have in mind:
Is it possible to render the highscore to template with Django/DTL? Score is set at 0, and each time a player scores, ajax-call to django backend will increment the score by one, and render it in template without refreshing the page. (Also compare it to highscore in sessions, and overwrite it if the new score is higher)
To my understanding, it requires a page refresh, which is not what I want.
Should I increment the score in frontend with JS, and after a game ends, send the score to django backend to compare it with the one in sessions etc etc.
Can javascript access values of django view? JSON maybe?
I'm still hesitating if I am heading to the right direction, so if you guys could just point me to the right direction. What would be the best way to implement this? | Pairing up Django backend with frontend | 0 | 0 | 0 | 733 |
48,529,883 | 2018-01-30T20:22:00.000 | 2 | 0 | 1 | 0 | python-3.x,pyomo | 48,531,487 | 1 | false | 0 | 0 | Pyomo looks for solvers on your system PATH.
Instructions for how to set the PATH can be found quite easily by Googling "How to set system PATH". | 1 | 0 | 0 | I am trying to install BARON solver on Pyomo + Anaconda. However, I have difficulty linking the solver executables with Pyomo?
Any suggestion? | BARON Optimization Solver on Pyomo | 0.379949 | 0 | 0 | 977 |
48,531,314 | 2018-01-30T22:06:00.000 | 0 | 0 | 0 | 1 | python,vnc,openai-gym | 49,056,767 | 1 | true | 0 | 0 | I found out that it closes without crashing if I type in the terminal ctrl+c instead of ctrl+z as I did before. | 1 | 0 | 0 | I'm trying to create a simple program using universe by openai but every time I close the VNC, the python launcher doesn't respond anymore and I have to force quit it. What can I do to solve this? Thanks | Python launcher not responding after closing VNC (mac) | 1.2 | 0 | 0 | 65 |
48,532,069 | 2018-01-30T23:20:00.000 | 1 | 0 | 0 | 0 | python-3.x,machine-learning,deep-learning,computer-vision,imblearn | 48,550,016 | 2 | false | 0 | 0 | Thanks for the clarification. In general, you don't oversample with Python. Rather, you pre-process your data base, duplicating the short-handed classes. In the case you cite, you might duplicate everything in class B, and make 5 copies of everything in class C. This gives you a new balance of 1000:600:500, likely more palatable to your training routines. Instead of the original 1400 images, you now shuffle 2100.
Does that solve your problem? | 1 | 3 | 1 | I am working on a multiclass classification problem with an unbalanced dataset of images(different class). I tried imblearn library, but it is not working on the image dataset.
I have a dataset of images belonging to 3 classes, namely A, B and C. A has 1000 images, B has 300 and C has 100. I want to oversample classes B and C so that I can avoid data imbalance. Please let me know how to oversample the image dataset using Python. | How to oversample image dataset using Python? | 0.099668 | 0 | 0 | 2,828
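A hedged sketch of the duplication idea from the answer above, applied to lists of image file names (the names are hypothetical); in practice you would usually pair this with augmentation rather than exact copies:

    import random

    random.seed(0)
    class_a = ["A_%d.jpg" % i for i in range(1000)]
    class_b = ["B_%d.jpg" % i for i in range(300)]
    class_c = ["C_%d.jpg" % i for i in range(100)]

    target = len(class_a)
    class_b += random.choices(class_b, k=target - len(class_b))   # duplicate minority samples
    class_c += random.choices(class_c, k=target - len(class_c))

    print(len(class_a), len(class_b), len(class_c))   # 1000 1000 1000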
48,534,423 | 2018-01-31T04:19:00.000 | 0 | 0 | 1 | 0 | python,json,sqlite,python-idle | 48,535,308 | 2 | false | 0 | 0 | IDLE is part of the CPython standard library and is usually installed when tkinter (and turtle) are. It provides an optional alternate interface for running your code with a particular python binary. When you run code through IDLE, it is run by the python binary that is running IDLE. So the same modules are available whether you run code through the terminal interface or through IDLE. There is no separate installation. | 1 | 1 | 0 | I'm more specifically wanting to know whether sqlite3 and json comes with python IDLE or do I have to install them separately to use them inside IDLE, If so can anyone link me to those installing procedures of sqlite3 and json on Python IDLE?
I also want to know where I can find the list of other pre-installed packages that comes with basic Python IDLE (i.e. Python 2.7.14) . I am a Beginner and it would be really helpful.
Thank you. | Where can I find pre-installed packages that comes with Python IDLE? | 0 | 0 | 0 | 546 |
48,537,478 | 2018-01-31T08:15:00.000 | 1 | 0 | 0 | 0 | python-2.7,amazon-s3,boto3,s3-bucket | 48,553,966 | 2 | false | 0 | 0 | This is not possible.
There is no way to discover the names of all of the millions of buckets that exist. There are known to be at least 2,000,000,000,000 objects stored in S3, a number announced several years ago and probably substantially lower than the real number now. If each bucket had 1,000,000 of those objects, that would mean 2,000,000 buckets to hold them.
You lack both the time and the permission to scan them all, and intuition suggests that AWS Security would start to ask questions, if you tried. | 1 | 0 | 0 | I was working on boto3 module in python and I have had created a bot which would find the publicly accessible buckets, but this is done for a single user with his credentials. I am thinking of advancing the features and make the bot fetch all the publicly accessible buckets throughout every user accounts. I would like to know if this is possible, if yes how, if not why? | Find all the s3 public buckets | 0.099668 | 0 | 1 | 2,004 |
48,539,256 | 2018-01-31T09:56:00.000 | 1 | 0 | 0 | 0 | python,machine-learning,neural-network,keras | 48,560,107 | 1 | false | 0 | 0 | What I noticed in my tests is that increasing the number of parameters require sometime to review how you prepare your input data or how you initialize your weights. I found that often increasing the number of parameteres requires to initialize the weights differently (meaning initializing with smaller values) or you need to normalize the input data (I guess you have done that), or even dividing them by a constant factor to make them smaller.
Sometimes reducing the learning rate helps, since your cost function becomes more complex with more parameters, and it may happen that the learning rate that was working fine before is too big for your new case. But it is very difficult to give a precise answer.
Something else: what do you mean with bigger error? Are you doing classification or regression? In addition are you talking about error on the train set or the dev/test sets? That is a big difference. It may well be that (if you are talking about the dev/test sets) that you are overfitting your data and therefore gets a bigger error on the dev/tests sets (bias-variance tradeoff)... Can you give us more details? | 1 | 0 | 1 | I am training neural networks using the great Keras library for Python. I got curious about one behaviour I don't understand.
Often even slightly bigger models converge to a bigger error than smaller ones.
Why does this happen? I would expect bigger model just to train longer, but converge to smaller or same error.
I hyperoptimized the model, tried different amounts of dropout regularization and let it train for sufficient time. I experimented with models about 10-20k parameters, 5 layers, 10M data samples and 20-100 epochs with decreasing LR. Models contained Dense and sometimes LSTM layers. | Bigger neural network converges to bigger error than smaller | 0.197375 | 0 | 0 | 407 |
48,541,801 | 2018-01-31T12:07:00.000 | 9 | 0 | 1 | 0 | python,django,python-3.x,msbuild,misaka | 52,150,652 | 4 | false | 0 | 0 | I was getting the same error when trying to install biopython with Python 3.7 on Windows 10.
Installing just the Build Tools (instead of the full Community Edition as suggested in the other answer), with the options "C++/CLI support" and "VC++ 2015.3 v14.00 (v140) toolset for desktop" checked in addition to the defaults, solved the problem. | 1 | 33 | 0 | I have tried all methods mentioned on the internet but there is no use.
I am trying to install misaka by writing pip install misaka, but it keeps complaining with the same message. I have downloaded and installed MS Build Tools 2015 and 2017 and restarted my laptop. Whatever I did, I couldn't figure out why it complains.
Python version 3.6.4
Windows 10 | Microsoft Visual C++ 14.0 is required. Get it with "Microsoft Visual C++ Build Tools": http://landinghub.visualstudio.com/visual-cpp-build-tools | 1 | 0 | 0 | 102,349 |
48,542,243 | 2018-01-31T12:29:00.000 | 2 | 0 | 1 | 0 | python,ipython,arch | 48,542,300 | 1 | true | 0 | 0 | Restart the Kernel
To isolate the problem, try importing it in a normal Python environment you start from the command line. | 1 | 1 | 1 | I am facing this error in an IPython notebook even after running the pip install arch command and successfully installing it.
Any help would be highly appreciated. | Import Error: No module named arch | 1.2 | 0 | 0 | 5,273 |