Q_Id: int64, 337 to 49.3M
CreationDate: string, lengths 23 to 23
Users Score: int64, -42 to 1.15k
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
Tags: string, lengths 6 to 105
A_Id: int64, 518 to 72.5M
AnswerCount: int64, 1 to 64
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: string, lengths 6 to 11.6k
Available Count: int64, 1 to 31
Q_Score: int64, 0 to 6.79k
Data Science and Machine Learning: int64, 0 to 1
Question: string, lengths 15 to 29k
Title: string, lengths 11 to 150
Score: float64, -1 to 1.2
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 8 to 6.81M
47,067,456
2017-11-02T04:07:00.000
0
0
1
0
python,pdfminer,pypdf2
47,067,638
2
false
0
0
I don't think this can be done directly, although I have done something similar using the following approach: use Ghostscript to convert the PDF to page images, then on each page use computer vision (OpenCV) to extract the areas of interest (in your case, images).
1
0
0
Is there a way to count the number of images (JPEG, PNG, JPG) in a PDF document through Python?
Count Images in a pdf document through python
0
0
0
2,378
47,069,678
2017-11-02T07:20:00.000
0
0
0
0
java,python,apache-kafka,bokeh,apache-kafka-streams
50,312,611
1
false
1
0
Try kafka-python. You can set up a simple consumer to read the data from your cluster.
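For illustration, a minimal sketch of the suggested kafka-python consumer; the topic name and broker address are placeholders for your own cluster.

from kafka import KafkaConsumer  # pip install kafka-python

# Connect to the cluster and read messages from a topic (names are placeholders).
consumer = KafkaConsumer(
    "my-topic",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: raw.decode("utf-8"),
)

for message in consumer:
    # Each message value could then be pushed into a Bokeh data source for the dashboard.
    print(message.value)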
1
0
1
Here is a problem I am stuck with presently. Recently I have been exploring bokeh for plotting and kafka for streaming. And I thought of making a sample live dashboard using both of them. But the problem is I use bokeh with python and kafka stream api's with Java. Is there a way to use them together by any chance. The only way I can see is both of them can be used with scala. But presently I don't want to get into scala.
Using bokeh plotting with kafka streaming
0
0
0
824
47,074,881
2017-11-02T12:06:00.000
0
0
0
0
django,python-2.7,amazon-web-services,amazon-elastic-beanstalk
47,075,247
1
true
1
0
You should be able to do this, no problem, by updating the requirements.txt file. Please post the error details you are getting and your requirements.txt file.
1
0
0
I have a Django 1.9.12 project running on EBS. I'd like to upgrade to Django 1.11 which I've done in the dev environment. How can I force EBS to update to 1.11? I hoped it might be a simple case of updating the requirements.txt but that hasn't worked with eb deploy Would it be easier just to create a new EBS project?
Upgrade Django installation on AWS elasticbeanstalk
1.2
0
0
89
47,075,629
2017-11-02T12:43:00.000
2
0
0
1
jquery,python,ajax,django,celery
47,076,457
2
true
1
0
"How do I prevent the task from firing up each time on refresh" First make sure you use the "POST/redirect/GET" pattern: your view should only fire the task on a "POST" request (GET request must not have side-effects), and then return a HttpResponseRedirect. This won't prevent the user from firing a second task while the first is still running, but it does prevent re-submitting the form each time you refresh ("GET") the page... Ideally, a user can only run one task of this particular type. When firing a task, store it's id in the session, and before firing a task check the session for a task id. If there's one, use it to check the task's state. If it's done (whatever the result), remove the task id and proceed with the new task, else just display a message telling your user he already has a task running. Just make sure you correctly handle the case of a "ghost" task_id in the session (it might be a long finished task for which celery already discarded the results, or a task that got lost in a worker or broker crash - yeah, sh!t happens - etc). A working solution is to actually store a (timestamp, task_id) pair and if celery has no state for the task_id - actually "no state" is (mis)named celery.states.PENDING, which really means "I don't have a clue about this task's state ATM" - check the timestamp. If it's way older than it should then you can probably consider it as long dead and 6 feets under. and prevent navigating to another page. Why would you want to prevent your user to do something else in the meantime, actually ? From a UI/UX point of view, once the task is fired, your user should be redirected to a "please wait" page with (as much as possible) some progress bar or similar feedback. The simple (if a bit heavy) solution here is to do some polling using ajax : you setup a view that takes a task_id, check results / progress and returns them as json, and the 'please wait' page calls this views (using ajax) every X seconds to update itself (and possibly redirect the user to the next page when the task is done). Now if there are some operations (apart from re-launching the same task) your user couldn't do while a task is running, you can use the same "check session for a current task" mechanism for those operations (making it a view decorator really helps).
1
0
0
I created a django app with a celery task that organises database tables. The celery task takes about 8 seconds to do the job (quite heavy table moves and lookups in over 1 million lines). This celery task starts through a button click by a registered user. I created a task-view in views.py and a separate task-url in urls.py. When navigating to this task-url, the task starts. My problem: when you refresh the task-url while the task is not finished, you fire up a new task. With this set-up, you can also navigate to another page or reload the view. How do I prevent the task from firing up each time on refresh and prevent navigating to another page? Ideally, a user can only run one task of this particular type. Might this be doable with JQuery/Ajax? Could some hero point me in the right direction, as I am not an expert and have no experience with JQuery/Ajax. Thanks
run celery task only once on button click
1.2
0
0
1,196
47,079,459
2017-11-02T15:50:00.000
7
1
0
1
python,docker
47,079,624
1
true
0
0
In your Dockerfile, either of these should work: Use the ENV instruction (ENV PYTHONPATH="/:$PYTHONPATH") Use a prefix during the RUN instruction (RUN export PYTHONPATH=/:$PYTHONPATH && <do something>) The former will persist the changes across layers. The latter will take effect in that layer/RUN command
1
2
0
Using a docker build file (Dockerfile), how can I do something like export PYTHONPATH=/:$PYTHONPATH, using the RUN directive or another option?
Adding path to pythonpath in dockerbuild file
1.2
0
0
2,449
47,080,385
2017-11-02T16:39:00.000
3
1
1
0
python,list,append,psychopy,del
47,080,508
1
false
0
0
I would take a slightly different approach: I would wrap the items you're inserting into the list with a thin object that has a timestamp field. Then I'd just leave it there, and when you iterate the list to find an object to pop - check the timestamp first, and if it's older than 10 seconds, discard it. Do this iteratively until you find the next element that is younger than 10 seconds and use it for your needs. Implementing this approach should be considerably simpler than triggering events based on time and making sure they run accurately, etc.
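A minimal sketch of the timestamp-wrapping idea; the 10-second limit and names are illustrative only.

import time

MAX_AGE = 10  # seconds

entries = []  # each entry is a (timestamp, value) pair


def add_item(value):
    entries.append((time.monotonic(), value))


def pop_fresh_item():
    """Discard entries older than MAX_AGE and return the first fresh one, if any."""
    while entries:
        stamp, value = entries.pop(0)
        if time.monotonic() - stamp <= MAX_AGE:
            return value
    return None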
1
1
0
I'm building an experiment in Psychopy in which, depending on the participants response, I append an element to a list. I'd need to remove/pop/del it after a specific amount of time has passed after it was appended (e.g. 10 seconds). I was considering creating a clock to each element added, but as I need to give a name to each clock and the number of elements created is unpredictable (dependent on the participants responses), I think I'd have to create names to each of the clocks created on the go. However, I don't know how to do that and, on my searches about this, people usually say this isn't a good idea. Would anyone see a solution to the issue: remove/pop/del after a specific time has passed after appending the element? Best, Felipe
Append an element to a list and del/pop/remove it after a a specific amount of time has passed since it was appended
0.53705
0
0
95
47,081,126
2017-11-02T17:19:00.000
0
0
1
0
python,virtualenv,redhat
47,081,425
2
false
0
0
What error is shown when you try to activate it? Make sure the Python version and environment PATH are consistent with the ones on the previous system.
2
2
0
I need to deploy python application to a no internet server. I have created a virtual environment on my host machine which uses Ubuntu. This contains python script with a variety of non-standard libraries. I have used option --relocatable to make the links relative. I have copied over the environment to my client machine which uses RedHat and has no access to the internet. After activating it using source my_project/bin/activate the environment does not seem to be working - the python used is standard system one and the libraries don't work. How can the virtual environment be deployed on a different server? Edit: this is normally done through the creation of requirement.txt file and then using pip to install the libraries on the target machine, however in this case it's not possible as the machine is offline.
Deploy Python virtualenv on no internet machine with different system
0
0
0
1,866
47,081,126
2017-11-02T17:19:00.000
3
0
1
0
python,virtualenv,redhat
47,333,648
2
true
0
0
For anyone dealing with the same problem: the quickest way for me was to: create a VirtualBox VM with the target system on the internet machine; download wheel files using pip download; migrate them to the target machine; install with pip install --no-index --find-links pip_libs/ requests
2
2
0
I need to deploy python application to a no internet server. I have created a virtual environment on my host machine which uses Ubuntu. This contains python script with a variety of non-standard libraries. I have used option --relocatable to make the links relative. I have copied over the environment to my client machine which uses RedHat and has no access to the internet. After activating it using source my_project/bin/activate the environment does not seem to be working - the python used is standard system one and the libraries don't work. How can the virtual environment be deployed on a different server? Edit: this is normally done through the creation of requirement.txt file and then using pip to install the libraries on the target machine, however in this case it's not possible as the machine is offline.
Deploy Python virtualenv on no internet machine with different system
1.2
0
0
1,866
47,081,149
2017-11-02T17:20:00.000
0
0
0
0
python,machine-learning,nlp,word2vec,doc2vec
47,099,951
1
false
0
0
They're very similar, so just as with a single approach, you'd try tuning parameters to improve results in some rigorous manner, you should try them both, and compare the results. Your dataset sounds tiny compared to what either needs to induce good vectors – Word2Vec is best trained on corpuses of many millions to billions of words, while Doc2Vec's published results rely on tens-of-thousands to millions of documents. If composing some summary-vector-of-the-document from word-vectors, you could potentially leverage word-vectors that are reused from elsewhere, but that will work best if the vectors' original training corpus is similar in vocabulary/domain-language-usage to your corpus. For example, don't expect words trained on formal news writing to work well with, or even cover the same vocabulary as, informal tweets, or vice-versa. If you had a larger similar-text corpus of documents to train a Doc2Vec model, you could potentially train a good model on the full set of documents, but then just use your small subset, or re-infer vectors for your small subset, and get better results than a model that was only trained on your subset. Strictly for clustering, and with your current small corpus of short texts, if you have good word-vectors from elsewhere, it may be worth looking at the "Word Mover's Distance" method of calculating pairwise document-to-document similarity. It can be expensive to calculate on larger docs and large document-sets, but might support clustering well.
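A small gensim sketch of both options mentioned above (training Doc2Vec on the short descriptions, and Word Mover's Distance with word vectors); parameter values are placeholders and a recent gensim is assumed, with wmdistance needing the optional pyemd/POT dependency.

from gensim.models.doc2vec import Doc2Vec, TaggedDocument

descriptions = [["pump", "failed", "after", "restart"],
                ["sensor", "reported", "wrong", "temperature"]]

# Option 1: train Doc2Vec directly on the (tokenised) report descriptions.
docs = [TaggedDocument(words, tags=[str(i)]) for i, words in enumerate(descriptions)]
model = Doc2Vec(docs, vector_size=50, min_count=1, epochs=40)
vectors = [model.infer_vector(words) for words in descriptions]  # feed these to a clusterer

# Option 2: with word vectors (model.wv here for brevity; ideally from a larger,
# domain-similar corpus), compute pairwise Word Mover's Distance for clustering.
distance = model.wv.wmdistance(descriptions[0], descriptions[1])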
1
1
1
So, I have close to 2000 reports and each report has an associated short description of the problem. My goal is to cluster all of these so that we can find distinct trends within these reports. One of the features I'd like to use some sort of contextual text vector. Now, I've used Word2Vec and think this would be a good option but I also so Doc2Vec and I'm not quite sure what would be a better option for this use case. Any feedback would be greatly appreciated.
Looking to cluster short descriptions of reports. Should I use Word2Vec or Doc2Vec
0
0
0
380
47,082,490
2017-11-02T18:45:00.000
2
0
1
0
python,python-2.7,recursion,lambda
47,082,533
1
true
0
0
They are not the same. In the second variant both expressions in the list are evaluated, and only then the appropriate one is picked from it. But this does not stop the recursion from happening even when x == 0. And so the recursion continues into the negative numbers bumping into the memory limit. ... if ... else ... on the other hand will first evaluate the condition following the if keyword, and only evaluate the expression that corresponds to that result.
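For illustration, wrapping the two branches in lambdas restores the lazy evaluation that if/else gives you; this is a sketch, not from the original post.

# Eager: both list elements are evaluated before indexing, so the recursion never stops.
# fact_bad = lambda x: [x * fact_bad(x - 1), 1][x == 0]

# Lazy: wrap each branch in a lambda and only call the selected one.
fact = lambda x: [lambda: x * fact(x - 1), lambda: 1][x == 0]()

print(fact(5))  # 120, no RecursionError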
1
0
0
Consider the two following recursive functions that calculate the factorial of an input, which I naively thought would be the same: fact = lambda x: 1 if x == 0 else x * fact(x-1) fact = lambda x: [x*fact(x-1),1][x==0] The first runs fine but the second gives me the error RuntimeError: maximum recursion depth exceeded. This is true for inputs where x==0 and x!=0. Why can't the lambda function handle the second case?
Why does list selection in a recursive lambda function throw recursion limit errors?
1.2
0
0
48
47,082,736
2017-11-02T18:58:00.000
0
0
1
1
python,macos,homebrew
47,084,214
2
false
0
0
I agree with using virtualenv; it allows you to manage different Python versions separately for different projects and clients. This basically allows each project to have its own dependencies, which are isolated from the others.
2
0
0
What's the best way to manage multiple Python installations (long-term) if I've already installed Python 3 via brew? In the past Python versions were installed here, there, and everywhere, because I used different tools to install various updates. As you can imagine, this eventually became a problem. I once was in a situation where a package used in one of my projects only worked with Python 3.4, but I had recently updated to 3.6. My code no longer ran, and I had to scour the system for Python 3.4 to actually fire up the project. It was a huge PITA. I recently wiped my computer and would like to avoid some of my past mistakes. Perhaps this is naïve, but I'd like to limit version installation to brew. (Unless that's non-sensical — I'm open to other suggestions!) Furthermore, I'd like to know how to resolve my past version management woes (i.e. situations like the one above). I've heard of pyenv, but would that conflict with brew? Thanks!
Managing multiple Python versions on OSX
0
0
0
432
47,082,736
2017-11-02T18:58:00.000
2
0
1
1
python,macos,homebrew
47,083,151
2
true
0
0
Use virtualenvs to reduce package clash between independent projects. After activating the venv use pip to install packages. This way each project has an independent view of the package space. I use brew to install both Python 2.7 and 3.6. The venv utility from each of these will build a 2 or 3 venv respectively. I also have pyenv installed from brew which I use if I want a specific version that is not the latest in brew. After activating a specific version in a directory, I will then create a venv and use this to manage the package isolation. I can't really say what is best. Let's see what other folks say.
2
0
0
What's the best way to manage multiple Python installations (long-term) if I've already installed Python 3 via brew? In the past Python versions were installed here, there, and everywhere, because I used different tools to install various updates. As you can imagine, this eventually became a problem. I once was in a situation where a package used in one of my projects only worked with Python 3.4, but I had recently updated to 3.6. My code no longer ran, and I had to scour the system for Python 3.4 to actually fire up the project. It was a huge PITA. I recently wiped my computer and would like to avoid some of my past mistakes. Perhaps this is naïve, but I'd like to limit version installation to brew. (Unless that's non-sensical — I'm open to other suggestions!) Furthermore, I'd like to know how to resolve my past version management woes (i.e. situations like the one above). I've heard of pyenv, but would that conflict with brew? Thanks!
Managing multiple Python versions on OSX
1.2
0
0
432
47,085,598
2017-11-02T22:28:00.000
0
0
1
0
python,pyqt5,python-sip
47,085,913
2
false
0
1
That is not a valid version number. You have to use a final version of python (as stated in Klaus D.'s comment).
2
0
0
I am new to all of this and I am trying to install PyQt5, I entered "pip install pyqt5" and this is what happened - ( its cached because of previous download attempt) C:\Users\Liam>pip install pyqt5 Collecting pyqt5 Using cached PyQt5-5.9.1-5.9.2-cp35.cp36.cp37-none-win_amd64.whl Collecting sip<4.20,>=4.19.4 (from pyqt5) Could not find a version that satisfies the requirement sip<4.20,>=4.19.4 (from pyqt5) (from versions: ) No matching distribution found for sip<4.20,>=4.19.4 (from pyqt5) Can anyone give me some pointers I am completely lost. Thank you for any help that you can give me Liam
How do i install PyQt5 with python 3.7.0a2 on windows 10
0
0
0
3,282
47,085,598
2017-11-02T22:28:00.000
1
0
1
0
python,pyqt5,python-sip
57,754,547
2
true
0
1
Use Python 3.4; PyQt5 works with it.
2
0
0
I am new to all of this and I am trying to install PyQt5, I entered "pip install pyqt5" and this is what happened - ( its cached because of previous download attempt) C:\Users\Liam>pip install pyqt5 Collecting pyqt5 Using cached PyQt5-5.9.1-5.9.2-cp35.cp36.cp37-none-win_amd64.whl Collecting sip<4.20,>=4.19.4 (from pyqt5) Could not find a version that satisfies the requirement sip<4.20,>=4.19.4 (from pyqt5) (from versions: ) No matching distribution found for sip<4.20,>=4.19.4 (from pyqt5) Can anyone give me some pointers I am completely lost. Thank you for any help that you can give me Liam
How do i install PyQt5 with python 3.7.0a2 on windows 10
1.2
0
0
3,282
47,088,034
2017-11-03T03:26:00.000
0
0
0
0
python-2.7,beautifulsoup
47,088,273
1
false
1
0
Make sure that you have added the required headers, such as 'User-Agent', before firing the GET request. In most cases, if 'User-Agent' is not provided, you'll end up with a 403 response.
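A minimal sketch of adding a User-Agent header with requests; the URL and header string are placeholders.

import requests

headers = {"User-Agent": "Mozilla/5.0 (compatible; my-scraper/1.0)"}
response = requests.get("https://example.com/page", headers=headers)
print(response.status_code)  # without the header some sites answer 403

# soup = BeautifulSoup(response.text, "html.parser")  # then parse with bs4 as usual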
1
0
0
When scraping a website using bs4, the response object shows "access denied" and "Forbidden". How do I solve this?
403 Forbidden or access denied for some website why?
0
0
1
194
47,089,489
2017-11-03T06:10:00.000
7
0
1
1
python-3.x,celery
47,645,248
1
false
0
0
It means that when celery has executed more tasks than the limit on one worker (the "worker" is a process if you use the default process pool), it will restart the worker automatically. Say you use celery for database manipulation and you forget to close the database connection; the auto-restart mechanism will help you close all pending connections.
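For illustration, the limit can be set in the app configuration or on the worker command line; a sketch assuming Celery 4+ setting names, with placeholder app and broker values.

from celery import Celery

app = Celery("proj", broker="redis://localhost:6379/0")

# Replace each worker process after it has executed 100 tasks,
# e.g. to release leaked resources such as forgotten DB connections.
app.conf.worker_max_tasks_per_child = 100

# Equivalent CLI form: celery -A proj worker --max-tasks-per-child=100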
1
4
0
The doc says: "With this option you can configure the maximum number of tasks a worker can execute before it’s replaced by a new process." In what condition will a worker be replaced by a new process? Does this setting mean that a worker, even with multiple processes, can only process one task at a time?
What does [Max tasks per child setting] exactly mean in Celery?
1
0
0
3,499
47,092,416
2017-11-03T09:33:00.000
2
0
1
0
python,anaconda,virtualenv
47,094,340
1
true
0
0
So the problem could be solved in two ways: the cleaner way and the clever way. The Cleaner Way: You have the virtual environment monolith which you want to use in every project. For every project, copy the virtual environment monolith under the project name and use it as that project's virtual environment. The advantage of this way is that you get a clean and separate virtual environment to use. The cost is the large amount of space taken up by the same data, since you are copying the monolith into every project. The Clever Way: Create a copy of the monolith virtual environment (only for safety). Make the folder containing the virtual environment packages a local git repo; the following commands will be useful: git init, git add ., git commit -m "Master Project". Now, for every new project, create a new branch using git checkout -b PROJECT_NAME, and don't forget to switch to the branch you want to use - most importantly when you are installing any package. P.S.: The clever way may or may not work depending on your system; I would suggest going with the cleaner way, since the project domains will not number more than 6 or 7 (i.e. one for ML, another for CV, ...). Also, please comment on which worked for you.
1
3
0
I want to use multiple conda environments together. I have a huge conda environment containing a lot of packages (lets call it the monolith) which I use in all my projects and don't want to create again. I want to create a separate smaller conda environments for each project and work use it along with my huge monolith. So that I can keep the monolith clean and use for multiple projects safely. Following are a few things I think should be taken care of, Update PATH, PYTHONPATH and LD_LIBRARY_PATH variables. When installing a new package run a try dry run on all the environments in the stack and only then install it to the top environment. So that any version conflicts can be caught. While executing the dry runs keep track of all the packages conda lists for installation. And when running the final install on top environment install only the intersection of packages listed in each of the dry run with --no-deps flag. This way we can avoid reinstallation of packages. Would this approach work?
Using stacked conda environments
1.2
0
0
1,071
47,094,705
2017-11-03T11:28:00.000
0
0
0
0
python,numpy,scipy
51,358,087
2
false
0
0
Most library methods offering low discrepancy methods for arbitrary dimensions won't include arguments that allow you to define arbitrary intervals for each of the separate dimensions/components. However, in virtually all of these cases, you can adapt the existing method to suit your requirements with the addition of a single line of code. Understanding this will dramatically increase the number of libraries you can choose to use! For nearly all low discrepancy (quasirandom) sequences, each term is equidistributed in the half-open range [0,1). Similarly, for d-dimensional sequences, each component of each term falls in [0,1). This includes the Halton sequence (which is a generalization of the van der Corput), Hammersley, Weyl/Kronecker, Sobol, and Niederreiter sequences. Converting a value from [0,1) to [a,b) can be achieved via the linear transformation x = a + (b-a) z. Thus if the n-th term of the canonical low discrepancy sequence is (z_1, z_2, z_3), then your desired sequence is (2+2*z_1, 2+2*z_2, 1+6*z_3).
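A small self-contained sketch of the idea: generate Halton points in [0,1)^3 and rescale each component to the ranges from the question (x1 and x2 in [2,4), x3 in [1,7)); round or floor the results if you need integer levels.

def van_der_corput(n, base):
    """n-th term of the van der Corput sequence in the given base."""
    q, denom = 0.0, 1.0
    while n:
        n, rem = divmod(n, base)
        denom *= base
        q += rem / denom
    return q


def halton(n, bases=(2, 3, 5)):
    """n-th d-dimensional Halton point in [0,1)^d (one prime base per dimension)."""
    return tuple(van_der_corput(n, b) for b in bases)


lows, highs = (2, 2, 1), (4, 4, 7)
points = [tuple(lo + (hi - lo) * z for z, lo, hi in zip(halton(i), lows, highs))
          for i in range(1, 11)]
print(points)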
1
3
1
I am trying to construct Hammersley and Halton quasi random sequences. I have, for example, three variables x1, x2 and x3. They all have integer values. x1 has a range from 2-4, x2 from 2-4 and x3 from 1-7. Is there any Python package which can create those sequences? I saw that there are some projects like sobol or SALib, but they have not implemented Halton and Hammersley. Best regards
Python : Halton and Hammersley quasi random sequences
0
0
0
1,801
47,096,120
2017-11-03T12:44:00.000
1
0
1
0
python,dll,source-code-protection
51,919,783
4
false
0
0
One other option, of course, is to expose the functionality over the web, so that the user can interact through the browser without ever having access to the actual code.
1
1
0
I have written a python code which takes an input data file, performs some processing on the data and writes another data file as output. I should distribute my code now but the users should not see the source code but be able to just giving the input and getting the output! I have never done this before. I would appreciate any advice on how to achieve this in the easiest way. Thanks a lot in advance
How to protect my Python code before distribution?
0.049958
0
0
11,558
47,102,090
2017-11-03T18:10:00.000
0
0
0
0
python,django,xml,django-rest-framework
47,103,539
1
false
1
0
You don't really do this in view part of Django. What you should do is take the json, find the uri, get the content of uri through urllib, requests etc, get the relevant content from the response, add a new field to the json and then pass it to your view.
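A rough sketch of the flow this answer describes, fetching the URI server-side so the browser (and CORS) is never involved; the view name and JSON field names are assumptions, not from the original post.

import json
import xml.etree.ElementTree as ET

import requests  # fetches the uri on the server, avoiding the CORS problem
from django.http import JsonResponse
from django.views.decorators.http import require_POST


@require_POST
def enrich(request):
    data = json.loads(request.body)             # incoming JSON object
    uri = data["uri"]                           # field name is an assumption
    xml_text = requests.get(uri, timeout=10).text
    root = ET.fromstring(xml_text)
    data["xml_title"] = root.findtext("title")  # pick out whatever element you need
    return JsonResponse(data)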
1
0
0
I have a django view that takes in a json object and from that object I am able to get a uri. The uri contains an xml object. What I want to do is get the data from the xml object but I am not sure how to do this. I'm using django rest, which I am fairly inexperienced in using, but I do not know the uri until I search the json object in the view. I have tried parsing it in the template but ran into CORS issues amongst others. Any ideas on how this could be done in the view? My main issue is not so much parsing the xml but how to get around the CORS issue which I have no experience in dealing with
Getting data from a uri django
0
0
1
68
47,104,930
2017-11-03T21:56:00.000
0
0
0
1
google-cloud-platform,google-cloud-storage,google-cloud-python
65,637,184
2
false
1
0
The solution for me was that both google-cloud-storage and pkg_resources need to be in the same directory. It sounds like your google-cloud-storage is in venv and your pkg_resources is in the lib folder
1
7
0
As in the title, when running the appserver I get a DistributionNotFound exception for google-cloud-storage: File "/home/[me]/Desktop/apollo/lib/pkg_resources/__init__.py", line 867, in resolve raise DistributionNotFound(req, requirers) DistributionNotFound: The 'google-cloud-storage' distribution was not found and is required by the application Running pip show google-cloud-storage finds it just fine, in the site packages dir of my venv. Everything seems to be in order with python -c "import sys; print('\n'.join(sys.path))" too; the cloud SDK dir is in there too, if that matters. Not sure what to do next.
google-cloud-storage distribution not found despite being installed in venv
0
0
0
1,810
47,106,830
2017-11-04T02:37:00.000
3
0
0
0
python,machine-learning,tensorflow,keras
56,950,297
2
false
0
0
Maybe you are asking for weights before they are created. Weights are created when the Model is first called on inputs or build() is called with an input_shape. For example, if you load weights from checkpoint but you don't give an input_shape to the model, then get_weights() will return an empty list.
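A tiny illustration of the behaviour described above, using tf.keras as an assumed stand-in for the asker's Keras setup.

import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(3)])
print(model.get_weights())          # [] -- no input shape yet, so no weights created

model.build(input_shape=(None, 4))  # or call the model / fit it once
print([w.shape for w in model.get_weights()])  # [(4, 3), (3,)]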
1
8
1
I am teaching myself data science and something peculiar has caught my eyes. In a sample DNN tutorial I was working on, I found that the Keras layer.get_weights() function returned empty list for my variables. I've successfully cross validated and used model.fit() function to compute the recall scores. But as I'm trying to use the get_weights() function on my categorical variables, it returns empty weights for all. I'm not looking for a solution to code but I am just curious about what would possibly cause this. I've read through the Keras API but it did not provide me with the information I was hoping to see. What could cause the get_weights() function in Keras to return empty list except for of course the weights not being set?
Why does get_weights return an empty list?
0.291313
0
0
2,799
47,106,848
2017-11-04T02:40:00.000
2
1
0
1
python,django,amazon-web-services,aws-cli,amazon-elastic-beanstalk
47,276,695
3
true
1
0
You should start with the EBCLI and then involve the AWSCLI where the EBCLI falls short. The AWSCLI (aws) allows you to run commands from a bunch of different services, whereas, the EBCLI (eb) is specific to Elastic Beanstalk. The EBCLI makes a lot of tedious tasks easier because it is less hands on than the AWS CLI. I have observed, for most of my tasks, the EBCLI is sufficient; I use the AWS CLI and the AWS SDKs otherwise. Consider deploying your Django app. You could start off by performing eb init, which would take you through an interactive set of menus, from which you would choose your region, and solution stack (Python). Next, you would perform eb create, which creates an application version and subsequently an Elastic Beanstalk environment for you. The above two EBCLI steps translate to half a dozen or more AWSCLI steps. Furthermore, a lot of the processes that the EBCLI hides from you involve multiple AWS services, which can make the task of replicating the EBCLI through the AWS CLI all the more tedious and error-prone.
2
3
0
What is the difference between "AWS Command Line Interface" and "AWS Elastic Beanstalk Command Line Interface"? Do I need both to deploy a Django project through AWS Elastic Beanstalk? Thank you!
What is the difference between AWSCLI and AWSEBCLI?
1.2
0
0
1,072
47,106,848
2017-11-04T02:40:00.000
0
1
0
1
python,django,amazon-web-services,aws-cli,amazon-elastic-beanstalk
47,106,874
3
false
1
0
You only need eb to deploy and control Elastic Beanstalk. You can use aws to control any other resource in AWS. You can also use aws for lower-level control of Elastic Beanstalk.
2
3
0
What is the difference between "AWS Command Line Interface" and "AWS Elastic Beanstalk Command Line Interface"? Do I need both to deploy a Django project through AWS Elastic Beanstalk? Thank you!
What is the difference between AWSCLI and AWSEBCLI?
0
0
0
1,072
47,109,343
2017-11-04T09:36:00.000
0
0
1
0
python,tensorflow,anaconda,spyder,code-completion
47,526,695
2
false
0
0
For now I have to use a temporary alternative: I installed an Anaconda version without TensorFlow in Anaconda's envs, and I use that when I don't need TensorFlow. I hope this question can be answered more completely; please see my other answer.
2
0
1
I am a data scientist in Beijing working with Anaconda on Win7. After I pip-installed tensorflow v1.4, code completion in my IDE Spyder in Anaconda stopped working; before that, the code completion function worked perfectly. Now even after I uninstall tensorflow, Spyder's code completion still does not work. Any help? My environment: win7, anaconda3 v5.0 for win64 (py3.6), tensorflow v1.4 for win (tf_nightly-1.4.0.dev20171006-cp36-cp36m-win_amd64.whl). So two questions: 1. How can I fix it so as to make Anaconda3 Spyder code completion work again? 2. After uninstalling tensorflow, Anaconda3 Spyder code completion still does not work; what can I do?
how can i make anaconda spyder code completion work again after installing tensorflow
0
0
0
487
47,109,343
2017-11-04T09:36:00.000
0
0
1
0
python,tensorflow,anaconda,spyder,code-completion
47,525,945
2
false
0
0
I tried pip-installing rope_py3k, jedi and readline, and reset the tool settings, but none of them helped. My Spyder code editing area also cannot auto-complete after the installation of tensorflow; I re-installed and found the same problem. However, when I re-installed all envs except tensorflow, it worked!! My environment is win10, anaconda3.5, python3.6.3, tensorflow1.4. Did you resolve it? I hope you can teach me.
2
0
1
I am a data scientist in Beijing working with Anaconda on Win7. After I pip-installed tensorflow v1.4, code completion in my IDE Spyder in Anaconda stopped working; before that, the code completion function worked perfectly. Now even after I uninstall tensorflow, Spyder's code completion still does not work. Any help? My environment: win7, anaconda3 v5.0 for win64 (py3.6), tensorflow v1.4 for win (tf_nightly-1.4.0.dev20171006-cp36-cp36m-win_amd64.whl). So two questions: 1. How can I fix it so as to make Anaconda3 Spyder code completion work again? 2. After uninstalling tensorflow, Anaconda3 Spyder code completion still does not work; what can I do?
how can i make anaconda spyder code completion work again after installing tensorflow
0
0
0
487
47,110,412
2017-11-04T11:50:00.000
0
0
1
1
python,python-3.x
47,110,895
2
true
0
0
You can use 'Task Scheduler' if you are using Windows, or 'crontab' if you are using Linux; those will run the script in the background for you. There are different ways to do the thing that you are asking for. For example, you can run the script that counts the files and, each time it runs, write the number of files it found to a txt file somewhere.
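A minimal sketch of the "keep running and write the count somewhere" idea; the paths and polling interval are placeholders.

import os
import time

WATCH_DIR = "/path/to/watch"      # placeholder
OUT_FILE = "/tmp/file_count.txt"  # placeholder

while True:
    count = len([name for name in os.listdir(WATCH_DIR)
                 if os.path.isfile(os.path.join(WATCH_DIR, name))])
    with open(OUT_FILE, "w") as fh:
        fh.write(str(count))
    time.sleep(5)  # re-check every few seconds; cron/Task Scheduler could also relaunch it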
2
0
0
For example, I want to count the number of files in a directory in real time. As the Python script will be running continuously in the background, when I add a file to the directory, the value of the count variable updates in my script. So, I need a solution with which the Python script keeps running and updates the value at runtime.
Python script to keep a python file running all the time
1.2
0
0
152
47,110,412
2017-11-04T11:50:00.000
0
0
1
1
python,python-3.x
47,110,985
2
false
0
0
You can use the standard-library "threading" module, which lets you run the code every n seconds (minutes, hours, etc.), for example with a threading.Timer that reschedules itself.
2
0
0
For example, I want to count the number of files in a directory in real time. As the Python script will be running continuously in the background, when I add a file to the directory, the value of the count variable updates in my script. So, I need a solution with which the Python script keeps running and updates the value at runtime.
Python script to keep a python file running all the time
0
0
0
152
47,111,084
2017-11-04T13:08:00.000
1
0
1
0
python,visual-studio,formatting,indentation
47,117,731
1
false
0
0
Found it. In the VS2015 menu, under Edit > Advanced, there is "Tabify Selected Lines" to convert spaces to tabs. "Untabify Selected Lines" will replace tabs with spaces. No keyboard shortcuts. :(
1
0
0
Editing python in VS2015, my current code is from internet so it has indentation of 4 spaces. When I'm editing it, any new line will have indentation of a tab. The short cut ctrl+k , ctrl+F doesn't work. Is there any quick way to fix this (My guess would be a find/replace all)?
Fixed the indentation of VS2015 for python
0.197375
0
0
26
47,112,803
2017-11-04T16:20:00.000
0
0
1
0
python-3.x,google-oauth,youtube-data-api
47,530,288
1
true
0
0
pip3 was using the wrong Python, so I had to specify exactly which Python version I had to use; in my case, python3.5.
1
0
0
when I did sudo pip3 install google-api-python-client it successefully installs but when I try to do import google.oauth2 it doesn't find it. It just says ImportError: No module named 'google'
google youtube api installation problems
1.2
0
1
37
47,114,021
2017-11-04T18:20:00.000
10
0
0
0
python,pandas,dataframe,scikit-learn
47,115,148
1
true
0
0
I feel the imputer class has its own benefits because you can simply specify mean or median to perform the action, unlike fillna, where you need to supply the values yourself. But with the imputer you need to fit and transform the dataset, which means more lines of code. It may give you better speed over fillna, but unless the dataset is really big it doesn't matter. fillna has something which is really cool, though: you can fill the NaNs even with a custom value, which you may sometimes need. This makes fillna better IMHO, even if it may perform slower.
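A short comparison sketch with toy data; note that SimpleImputer is the current name of scikit-learn's imputer class (the older sklearn.preprocessing.Imputer discussed at the time is deprecated).

import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

df = pd.DataFrame({"a": [1.0, np.nan, 3.0], "b": [np.nan, 2.0, 4.0]})

# pandas: one line, and the fill value can be anything (column means here, or a constant).
filled = df.fillna(df.mean())

# scikit-learn: fit/transform, handy inside an ML pipeline.
imputer = SimpleImputer(strategy="mean")
filled_sk = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)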
1
5
1
I found 2 ways to replace nan values in Python: one using sklearn's imputer class and the other using df.fillnan(); the latter seems easy, with less code. But efficiency-wise, which is better? Can anyone explain the use cases of each?
Sklearn's imputer v/s df.fillnan to replace nan values with mean of the column
1.2
0
0
2,979
47,116,006
2017-11-04T21:55:00.000
0
0
0
0
python,turtle-graphics
70,328,375
3
false
0
1
Use the title("Custom title") function
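A tiny example of setting the window title with the standard-library turtle module, here via the Screen object's title method:

import turtle

screen = turtle.Screen()
screen.title("KJR")   # window now shows "KJR" instead of the default title
turtle.forward(50)
turtle.done()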
1
1
0
I would like to provide a specific graphics window title in turtle graphics, similar to how title KJR works in cmd.exe. I am currently in the process of creating a game, and would like it to display the name, KJR, rather than turtle graphics. Is there a way this can be done?
How to rename the graphics window when using turtle graphics?
0
0
0
3,295
47,116,246
2017-11-04T22:28:00.000
2
0
0
0
python,django,python-2.7,python-requests
47,116,517
1
true
1
0
It does seem that your server cannot resolve the hostname into an IP; this is probably not a Django or Python problem but a server network setup issue. Try to reach the same URL with ping / wget / curl, or troubleshoot DNS with nslookup.
1
0
0
Everytime I make an external request (including to google.com) I get this response: HTTPConnectionPool(host='EXTERNALHOSTSITE', port=8080): Max retries exceeded with url: EXTERNALHOSTPARAMS (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x105d8d6d0>: Failed to establish a new connection: [Errno 8] nodename nor servname provided, or not known',))
Django can't make external connections with requests or urllib2 on development server
1.2
0
1
138
47,116,344
2017-11-04T22:41:00.000
2
0
0
0
python,flask
47,116,699
1
true
1
0
Application shutdown is not part of the WSGI server standard. There is no general way to know when the server stops completely from within the application code.
1
3
0
I’m using Flask on a Raspberry Pi for an IoT application. My problem is, that I need to cleanly close the connection to my external device before the application is shutdown or restarted by the Flask auto-reloader. Is there any callback / handler / event / etc. I can use for a clean shutdown? (that also works with the auto-reloader)
Flask restart / shutdown callback
1.2
0
0
1,206
47,116,912
2017-11-05T00:06:00.000
3
0
0
0
python,mysql,flask
47,117,043
2
true
0
0
flask.ext. is a deprecated pattern which was used prevalently in older extensions and tutorials. The warning is telling you to replace it with the direct import, which it guesses to be flask_mysql. However, Flask-MySQL is using an even more outdated pattern, flaskext.. There is nothing you can do about that besides convincing the maintainer to release a new version that fixes it. from flaskext.mysql import MySQL should work and avoid the warning, although preferably the package would be updated to use flask_mysql instead.
1
4
0
When I run from flask.ext.mysql import MySQL I get the warning Importing flask.ext.mysql is deprecated, use flask_mysql instead. So I installed flask_mysql using pip install flask_mysql,installed it successfully but then when I run from flask_mysql import MySQL I get the error No module named flask_mysql. In the first warning I also get Detected extension named flaskext.mysql, please rename it to flask_mysql. The old form is deprecated. .format(x=modname), ExtDeprecationWarning. Could you please tell me how exactly should I rename it to flask_mysql? Thanks in advance.
Python flask.ext.mysql is deprecated?
1.2
1
0
3,318
47,118,678
2017-11-05T06:03:00.000
26
0
1
0
python,nlp,deep-learning,word2vec,fasttext
49,439,794
2
true
0
0
The .vec files contain only the aggregated word vectors, in plain-text. The .bin files in addition contain the model parameters, and crucially, the vectors for all the n-grams. So if you want to encode words you did not train with using those n-grams (FastText's famous "subword information"), you need to find an API that can handle FastText .bin files (most only support the .vec files, however...).
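For illustration, the .vec file is plain word2vec text format and can be loaded with gensim's generic loader, while the .bin needs a FastText-aware loader to get the subword information; file names follow the question and a recent gensim (3.8+) is assumed.

from gensim.models import KeyedVectors
from gensim.models.fasttext import load_facebook_model

# .vec: plain-text word vectors only; out-of-vocabulary words raise KeyError.
vec = KeyedVectors.load_word2vec_format("wiki.en.vec")
print(vec["king"][:5])

# .bin: full model including n-gram vectors, so unseen words can still be encoded.
ft = load_facebook_model("wiki.en.bin")
print(ft.wv["kingliness"][:5])  # works even if the word was never seen in training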
1
25
0
I recently downloaded fasttext pretrained model for english. I got two files: wiki.en.vec wiki.en.bin I am not sure what is the difference between the two files?
Difference between Fasttext .vec and .bin file
1.2
0
0
11,988
47,121,053
2017-11-05T11:31:00.000
0
1
0
0
python,python-3.x,streaming,audio-streaming,shoutcast
61,761,330
2
false
0
0
Use liquidsoap to generate your audio stream(s) and output them to shoutcast and/or icecast2 servers. I currently have liquidsoap, shoutcast, icecast2 and apache2 all running on the same Ubuntu 18.04 server. liquidsoap generates the audio stream and outputs it to both shoutcast and icecast2. Listeners can use their browser to access either the shoutcast stream at port 8000 or the icecast2 stream at port 8010. It works very well 24 x 7. You can have multiple streams and liquidsoap has many features including playlists and time-based (clock) actions. See the liquidsoap documentation for examples to create audio streams from your mp3 or other format audio files. Best of all liquidsoap is free.
1
1
0
I've been looking around for a solution to this and I'm completely stuck. The icecast/shoutcast libs all seem to be Python 2.7 which is an issue as I'm using 3.6 Any ideas for where to start with broadcasting and authenticating would be very useful. I'm looking to stream mp3 files. TIA.
Python broadcast to shoutcast (DNAS) or icecast
0
0
1
2,107
47,122,453
2017-11-05T14:04:00.000
0
0
0
0
python,tkinter,window,desktop,topmost
47,123,502
2
true
0
1
No, there is no method in tkinter that forces the window to be below all other windows on the desktop.
1
1
0
I already know tkinter.Tk().attributes("-topmost", True) which makes the window stay on top all the time. But is there some way to make the window stay at the bottom all the time? I mean something like tkinter.Tk().attributes("-bottommost", True) or something like that.
Tkinter window at the bottom all the time
1.2
0
0
703
47,123,188
2017-11-05T15:19:00.000
0
0
0
0
python,openpyxl
47,125,299
2
false
0
0
The warning is exactly that, a warning about some aspect of the file being removed. But it has nothing to do with the rest of the question. I suspect you are running out of memory. How much memory is openpyxl using when the laptop freezes?
2
1
0
I'm trying to use wb = load_workbook(filename) but either I work in Python console or call it from a script, it hangs for a while, then my laptop completely freezes. I can't switch to console to reboot, can't restart X etc. (UPD: CPU consumption is 100% in this moment, memory consump. is 5% only). Has anybody met such issue? Python 2.7, openpyxl 2.4.9
openpyxl load_workbook() freezes
0
1
0
1,046
47,123,188
2017-11-05T15:19:00.000
0
0
0
0
python,openpyxl
64,410,841
2
false
0
0
I had this issue kinda.... I had been editing my excel workbook. I ended up accidentally pasting a space into an almost infinite amount of rows. ya know... like a lot. I selected all empty cells and hit delete, saved workbook, problem gone.
2
1
0
I'm trying to use wb = load_workbook(filename) but either I work in Python console or call it from a script, it hangs for a while, then my laptop completely freezes. I can't switch to console to reboot, can't restart X etc. (UPD: CPU consumption is 100% in this moment, memory consump. is 5% only). Has anybody met such issue? Python 2.7, openpyxl 2.4.9
openpyxl load_workbook() freezes
0
1
0
1,046
47,125,132
2017-11-05T18:34:00.000
0
0
1
0
python,anaconda,conda,python-packaging
48,739,485
3
false
0
0
The error is typical because Python by default is installed only for the current user. With little effort on our side during Python installation, i.e. changing the installation to be for all users, we can get rid of this error. In conjunction with the above step, the environment variable needs to be updated to the installed location.
1
4
0
after trying to update Anaconda using conda update --all, the downloading successfully ends but when trying to install the packages, the error message: " Windows cannot find 'pythonw'. Make sure you typed the name correctly, and then try again " appears. anyone knows how to deal with it? thanks in advance P.S. I installed Anaconda somewhere other that C:\, might have something to do with that? Environment variables?
Error appears when installing python packages: pythonw not found
0
0
0
7,582
47,126,418
2017-11-05T20:42:00.000
3
1
0
0
python,amazon-web-services,nginx,flask,aws-lambda
47,126,544
1
true
1
0
You need to build an API on your server. Probably a REST API, or at least some HTTP endpoints on your Flask server that will accept some JSON data in the request. Then your Lambda function will make HTTP requests to your server to interact with it.
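A rough sketch of the idea: a small Flask endpoint on the home server plus a Lambda handler that POSTs JSON to it; the route, URL and field names are made-up placeholders, not from the original post.

# --- on the home server, behind nginx/gunicorn (sketch) ---
from flask import Flask, jsonify, request

app = Flask(__name__)


@app.route("/api/records", methods=["POST"])
def add_record():
    data = request.get_json()
    # ... insert `data` into the database here ...
    return jsonify({"status": "ok"})


# --- in the AWS Lambda function (sketch, standard library only) ---
import json
import urllib.request


def lambda_handler(event, context):
    payload = json.dumps({"utterance": event.get("request", {})}).encode()
    req = urllib.request.Request(
        "https://example.com/api/records",  # placeholder URL for the home server
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.loads(resp.read())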
1
0
0
I am looking for some help knowing what to research/look into for connecting an AWS Lambda function to a homemade server. I am building an Amazon Echo skill that uses AWS Lambda. The end goal is to have the Echo skill get information from my server and contribute to a database sitting on the server. I am using Nginx and Gunicorn to help serve my Flask application. Are there any tools or concepts I can look into to make this work? Currently, I am kind of lost and am seeing AWS Lambda and my server as two unique, silo-ed entities. But surely this isn't the case! Thank you for your help!
How do I connect AWS Lambda to homemade server?
1.2
0
0
54
47,127,649
2017-11-05T23:14:00.000
0
0
0
0
python,pyqt5,qt-designer,pypi
63,687,408
1
false
0
1
Qt designer doesn't come bundled with PyQt5. You can install it separately using pip install pyqt5-tools (or pip3 install pyqt5-tools). You will find designer.exe at: ...\Lib\site-packages\pyqt5_tools\Qt\bin
1
1
0
how can I install Qt Designer for PyQt5. I installed SIP from PyPi by running: pip3 install SIP and I installed PyQt5 also from PyPi by running: pip3 install PyQt5 but I didn't find QT Designer
how can I install Qt Designer for PyQt5 for WINDOWS 10
0
0
0
4,343
47,128,046
2017-11-06T00:17:00.000
4
0
1
0
python,io,read-write
47,128,082
1
true
0
0
Reading a text file doesn't corrupt it. There can be access errors if the writing program doesn't keep the file open all the time and tries to open the file while it is read or the reading program can't open it while it is opened by the writing program. If this happens depends on some settings (exclusive lock, shared lock) when opening the file and on the operating system. But the file itself won't be corrupted in any case.
1
3
0
I want to read contents of a text file that is continuously being written by another program. How safe it is to read contents of that file. Will it corrupt the text file?
Python IO - Is it safe to read a text file in python while other program is writing to it?
1.2
0
0
341
47,129,383
2017-11-06T03:37:00.000
0
0
0
0
python,machine-learning,tensorflow,conv-neural-network
47,134,221
1
true
0
0
No, it won't be the same, because the number of channels of the input layer determines the shape of the convolutional filters, hence the number of parameters and how they are applied. Compare these two convolutional networks: [32x32x3] input shape, batch_size=5, 5x5 receptive field, then each neuron in the conv layer will have weights to a [5x5x3] region in the input volume, for a total of 5*5*3 = 75 weights (and +1 bias parameter). [32x32x15] input shape, batch_size=1, same 5x5 receptive field, then each neuron will have weights to a [5x5x15] region, for a total of 5*5*15 = 375 weights (and the same +1 bias parameter). In both cases, the two conv layers see 15 RGB images, but the second network will use 5 times more parameters to learn the same data, while the first one will reuse the same parameters for different images. Also note that the second network will have dedicated parameters for each first image, each second image, etc. Obviously, the first approach is better, not only because it saves resources, it's also invariant to the order of training images in a batch.
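For illustration, the parameter counts above can be checked quickly with tf.keras (an assumption about the asker's setup):

import tensorflow as tf

# One 5x5 filter over a 3-channel input: 5*5*3 weights + 1 bias = 76 parameters.
m3 = tf.keras.Sequential([tf.keras.layers.Conv2D(1, (5, 5), input_shape=(32, 32, 3))])
print(m3.count_params())   # 76

# Same filter size over a 15-channel input: 5*5*15 weights + 1 bias = 376 parameters.
m15 = tf.keras.Sequential([tf.keras.layers.Conv2D(1, (5, 5), input_shape=(32, 32, 15))])
print(m15.count_params())  # 376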
1
2
0
Normally, we set input images 'height','width','channels'. In 'channels', we set 1 for gray level pictures, and 3 for RGB pictures. My question is, should it be the same picture using this channels? Like mentioned above. Or I can set series of images to channels? (e.g. I have 10 images discrete in space at one moment, so I set channels 10 as one input) Will there be any problem and is it the right way to do? Or I should just set 10 input for these 10 images? Thanks for answering!
Can I put several different images as channels?
1.2
0
0
776
47,131,361
2017-11-06T06:55:00.000
1
0
0
0
python,pandas,merge,compare,diff
47,131,429
5
false
0
0
Set df2.columns = df1.columns. Now, set every column as the index: df1 = df1.set_index(df1.columns.tolist()), and similarly for df2. You can now do df1.index.difference(df2.index) and df2.index.difference(df1.index), and the two results are your distinct rows.
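A minimal sketch of the steps in this answer with a toy example; the data is made up for illustration.

import pandas as pd

df1 = pd.DataFrame({"a": [1, 2, 3], "b": ["x", "y", "z"]})
df2 = pd.DataFrame({"a": [2, 3, 4], "b": ["y", "z", "w"]})

df2.columns = df1.columns                        # align column names
i1 = df1.set_index(df1.columns.tolist()).index   # every column becomes part of the index
i2 = df2.set_index(df2.columns.tolist()).index

only_in_df1 = df1[i1.isin(i1.difference(i2))]    # rows of df1 not present in df2
only_in_df2 = df2[i2.isin(i2.difference(i1))]    # rows of df2 not present in df1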
1
7
1
I have two dataframes both of which have the same basic schema. (4 date fields, a couple of string fields, and 4-5 float fields). Call them df1 and df2. What I want to do is basically get a "diff" of the two - where I get back all rows that are not shared between the two dataframes (not in the set intersection). Note, the two dataframes need not be the same length. I tried using pandas.merge(how='outer') but I was not sure what column to pass in as the 'key' as there really isn't one and the various combinations I tried were not working. It is possible that df1 or df2 has two (or more) rows that are identical. What is a good way to do this in pandas/Python?
Diff between two dataframes in pandas
0.039979
0
0
32,562
47,135,747
2017-11-06T11:18:00.000
1
0
0
0
python-3.x,python-pptx
47,142,619
1
true
1
0
The term semantics is sometimes used in a programming context to describe the set of behaviors an object has. So in this case, as Zero mentioned in his comment, it means "has the same behaviors as a list". You could alternately interpret the phrase as "is a list-like thing" or "is interacted with the same way you would a list object". The term semantics means "meaning". In the case of an object, its meaning comprises "what you can do with it" and how. In this case you can do "list-like" things with it, like iterate its members with for x in xs, get its length with len(xs), and access elements by index with xs[i].
1
0
0
In context of the following para : prs.slide_layouts is the collection of slide layouts contained in the presentation and has list semantics, at least for item access which is about all you can do with that collection at the moment.
What is list semantics in Python?
1.2
0
0
33
47,136,965
2017-11-06T12:22:00.000
0
0
1
1
python,linux,ipc,python-multiprocessing
47,141,169
2
false
0
0
"I would like to avoid file based locks, since I don't know where I should put the lock file." - You can lock the existing file or directory (the one being processed). "Required feature: If one process dies, then the lock/semaphore should get released by the operating system." - That is exactly how file locks work.
1
2
0
I need a way to ensure only one python process is processing a directory. The lock/semaphore should be local to the machine (linux operating system). Networking or NFS is not involved. I would like to avoid file based locks, since I don't know where I should put the lock file. There are libraries which provide posix IPC at pypi. Is there no way to use linux semaphores with python without a third party library? The lock provided by multiprocessing.Lock does not help, since both python interpreter don't share one the same parent. Threading is not involved. All processes have only one thread. I am using Python 2.7 on linux. How to to synchronize two python scripts on linux (without file based locking)? Required feature: If one process dies, then the lock/semaphore should get released by the operating system.
Linux IPC: Locking, but not file based locking
0
0
0
1,095
47,137,372
2017-11-06T12:45:00.000
1
0
0
1
python-2.7,pywinauto
47,142,927
2
false
0
0
There are several points to improve. It's better to use standard Python module subprocess with stdin re-direction to communicate with a command line application. I'd highly recommend you this way which is resistant to RDP minimizing. RDP doesn't provide GUI context in minimized state (any GUI automation tool will give up here). To workaround it simply switch RDP from full-screen mode to restored window state (non-minimized!), run your GUI automation script inside RDP window and quickly switch to your local machine (to another window) and continue your work without affecting the automation script. Just don't ever minimize RDP. It's a manual quick hack, if you do it rarely. Third thing to automate is using command psexec with key -i (interactive). This way you can run remote commands with GUI context automatically without manual hacks. Just find and download PsexecTools (recommended) or learn similar commands for Power Shell. To eliminate this problem at all just use VNC Server software like TightVNC instead of RDP. If you used RDP at least once, you have to reboot the remote machine though. One more possible pitfall is the fact that VNC display is not virtual (like RDP session), hence it requires to have relevant display drivers for your video card. Otherwise you may face with black screen or small resolution. The big plus of VNC that it keeps GUI context even if you disconnect from current session (i.e. closed your laptop before going home).
1
0
0
I have been using pywinauto for opening a command prompt (Mingw-64) and was passing commands using type_keys It was working properly in my local system but, when i hosted my code into RDP server, i am not able to restore the window and pass the commands when RDP is in minimized state Please give me a proper solution and let me know if any package does the same purpose. Thanks!
How to restore a window in RDP using pywinauto, when RDP is in Minimized state
0.099668
0
0
1,048
47,141,019
2017-11-06T16:02:00.000
0
0
1
0
python,global-variables
47,141,168
1
false
0
0
Just do not do it: put all the functions, if they are related, in a class and make the variables accessible with self. Using a different module is worse.
1
0
0
I have to share around 10 variables between functions, which are contained in the same .py file. The variables will be modified in almost every function. I know that global variables are evil, but unfortunately for now I have to keep few of them as global, while the rest I have been able to change the implementation and to pass them as an argument. One way of doing this would be using the "global" keyword, but I have run into another option, that would be placing them in an empty module, and importing the module every time. I am just a beginner in python, what would be the best way to do this? EDIT: This is a rewriting of a code based almost completely on global variables. Almost all the functions are now in a class, the variables are used with self.name_var. However, since we are using multiprocess with Array, few variables have to remain globals. Thanks, Andrea
Safest way of using global variables in python: module or "global" keyword?
0
0
0
50
47,142,323
2017-11-06T17:14:00.000
0
0
1
0
python,windows,numpy,anaconda
47,259,321
1
false
0
0
basteflp, thanks for your response. I managed to solve it. The module-not-found error was due to running the script outside of a specific Anaconda environment. Running the script after loading the Anaconda environment resolved the error.
1
0
1
On a windows machine with Anaconda installed. Script B runs correctly and produces the correct result. Script B is called from a Windows console app. When script A imports script B, script B fails with the error "ModuleNotFoundError: No module named 'numpy'". When script B is passed directly to Python executable, script B works and executes without error. (I'm new to python) Any help pointing me in the right direction would be appreciated. Thanks
ModuleNotFound Error in python script but only when imported into a parent script
0
0
0
52
47,145,152
2017-11-06T20:25:00.000
1
0
1
0
python,pip,virtualenv
47,145,537
2
false
0
0
Just install Python 3 and use an alias. Removing Python 2 from your system is a very bad idea.
1
0
0
I am trying to upgrade python version 2.6 to 3.5 using pip on a virtual environment but don't know correct command.
How to update/upgrade Python (2.6 to 3.5) version using Pip?
0.099668
0
0
3,851
47,146,493
2017-11-06T21:54:00.000
2
0
0
1
python,amazon-web-services,boto3,aws-cli
47,146,546
1
false
1
0
Why is it really an issue? I assume you installed the AWS CLI tool by downloading the installer directly. If you want to "fix" it then uninstall the CLI tool, and then install it through pip with pip install awscli.
1
2
0
I have installed Python3 and pip3 on my Macbook pro. Running python --version shows Python 3.6.3 Running pip --version shows pip 9.0.1 from /usr/local/lib/python3.6/site-packages (python 3.6) however running aws --version shows aws-cli/1.11.170 Python/2.7.10 Darwin/16.7.0 botocore/1.7.28 Looks like it's using python2. How do I fix this?
AWS CLI is using Python 2 instead of Python 3
0.379949
0
0
3,869
47,146,728
2017-11-06T22:13:00.000
0
0
0
1
python,audio,pydub
47,147,058
1
true
1
0
No. Playing an audio file doesn't modify the file. Unless your car stereo writes some sort of log to the flash drive of what it's played -- which is unlikely; I've never seen or heard of a stereo that did that -- there's no way to determine which files have been played.
1
0
0
I'm trying to create a python script to help me manage my car radio's music library. The idea is the following: I have a USB flash drive with 2-hour podcast mp3 files. Since I never drive such long journeys, the script splits the files in 5-minute fragments and removes the originals. Now, the next thing I would want to do is automatically remove the ones I've already played. My first idea was something like DRM or a self-destructing file that erased itself once it's played, but from what I have found online that's pretty much impossible. So, the question is, can I check with pydub if the file has been already played, so when I arrive home I can plug the USB in the computer, run the script, detect the played files and erase them? Thanks and sorry if it's a dumb question!
Can pydub know if a file has ever been played?
1.2
0
0
57
47,146,997
2017-11-06T22:35:00.000
2
0
1
0
python-3.x,anaconda,spyder
49,118,826
1
true
0
0
I had the same problem and noticed that unless the Working directory setting is set to "The directory of the file being executed", it won't work. You can change it in "Run" > "Run configuration per file".
1
2
0
I am not sure if this is a right place to ask this type of questions. I am a python beginner or programmer overall at this point. I am using Spyder to use python 3.6 (via Anaconda). I wrote a code that works fine when I run it in the current Ipython console. But I really need to run it in an external system terminal. In order to do so, I chose the following path: Run-> configuration per file -> execute in an external system terminal. That has been working fine. But now it refuses to work! I validated that there is nothing wrong with my code by running something simple and saw that running via external system terminal does not work. So far I deleted Anaconda and re-installed it. Could someone suggest what I should be looking for to diagnose the problem and fix it? Thanks!
Spyder external system terminal does not work (Python3.6)
1.2
0
0
2,797
47,147,414
2017-11-06T23:13:00.000
0
0
0
0
python,pandas
47,147,531
6
false
0
0
I believe that reduce(lambda acc, f: acc & (df[f[0]] < f[1]), list_of_filters, pd.Series(True, index=df.index)) will do it. Note the all-True initial value: without it the first (column, value) tuple would be used as the starting accumulator and the & would fail.
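A fuller, runnable sketch of that idea, with a made-up DataFrame and the (column, upper bound) filter list from the question; the all-True starting mask is what makes the reduction work:

from functools import reduce
import pandas as pd

df = pd.DataFrame({'field1': [1, 5, 2], 'field2': [1, 1, 4], 'field3': [3, 3, 3]})
list_of_filters = [('field1', 3), ('field2', 2), ('field3', 4)]

# AND one "column < value" condition per filter into a boolean mask
mask = reduce(lambda acc, f: acc & (df[f[0]] < f[1]),
              list_of_filters,
              pd.Series(True, index=df.index))
print(df[mask])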
1
4
1
I am comfortable with basic filtering and querying using Pandas. For example, if I have a dataframe called df I can do df[df['field1'] < 2] or df[df['field2'] < 3]. I can also chain multiple criteria together, for example: df[(df['field1'] < 3) & (df['field2'] < 2)]. What if I don't know in advance how many criteria I will need to use? Is there a way to "chain" an arbitrary number of these operations together? I would like to pass a list of filters such as [('field1', 3), ('field2', 2), ('field3', 4)] which would result in the chaining of these 3 conditions together. Thanks!
Filter Pandas Dataframe using an arbitrary number of conditions
0
0
0
813
47,148,516
2017-11-07T01:23:00.000
0
0
1
0
python,mongodb,datetime,pymongo
47,148,634
1
false
0
0
You are experiencing the defined behavior. MongoDB has a single datetime type (datetime). There are no separate, discrete types of just date or just time. Workarounds: Plenty, but food for thought: Storing just date is straightforward: assume Z time, use a time component of 00:00:00, and ignore the time offset upon retrieval. Storing just time is trickier but doable: establish a base date like the epoch and only vary the time component, and ignore the date component upon retrieval.
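A small sketch of both workarounds with pymongo (it assumes a reachable local MongoDB; the database and collection names are placeholders):

import datetime
from pymongo import MongoClient

logs = MongoClient()['test']['logs']
now = datetime.datetime.now()

# "date only": keep a midnight time component and ignore it on retrieval
log_date = datetime.datetime.combine(now.date(), datetime.time.min)
# "time only": pin the date to an arbitrary base (the epoch) and ignore it on retrieval
log_time = datetime.datetime.combine(datetime.date(1970, 1, 1), now.time())

logs.insert_one({'log_date_time': now, 'log_date': log_date, 'log_time': log_time})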
1
0
0
I want to insert date and time into mongo ,using pymongo. However, I can insert datetime but not just date or time . here is the example code : now = datetime.datetime.now() log_date = now.date() log_time = now.time() self.logs['test'].insert({'log_date_time': now, 'log_date':log_date, 'log_time':log_time}) it show errors : bson.errors.InvalidDocument: Cannot encode object: datetime.time(9, 12, 39, 535769) in fact , i don't know how to insert just date or time in mongo shell too. i know insert datetime is new Date(), but I just want the date or time filed.
questions about using pymongo to insert date and time into mongo
0
1
0
515
47,148,615
2017-11-07T01:36:00.000
0
0
0
0
python-3.x,nlp,gensim,doc2vec
47,172,267
1
false
0
0
Bulk training only creates vectors for the tags you supplied. If you want to read out a bulk-trained vector per paragraph (as if by model.docvecs['paragraph000']), you have to give each paragraph a unique tag during training (like 'paragraph000'). You can give docs other tags as well - but bulk training only creates and remembers doc-vectors for the supplied tags. After training, you can infer vectors for any other texts you supply to infer_vector() - and of course you could supply the same paragraphs that were used during training.
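A rough sketch of that tagging scheme with gensim, giving every paragraph its own tag plus the author tag; the tag names and hyperparameters are illustrative and parameter names can vary slightly between gensim versions:

from gensim.models.doc2vec import Doc2Vec, TaggedDocument

paragraphs = [('some tokenized paragraph text'.split(), 'author_a'),
              ('another paragraph of text here'.split(), 'author_b')]

# one unique tag per paragraph, plus the author tag
docs = [TaggedDocument(words=words, tags=['para_%d' % i, author])
        for i, (words, author) in enumerate(paragraphs)]

model = Doc2Vec(docs, vector_size=50, min_count=1, epochs=20)

para_vec = model.docvecs['para_0']                 # vector remembered from bulk training
inferred = model.infer_vector(paragraphs[0][0])    # or re-infer for any text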
1
0
1
This is my first time using Doc2Vec I'm trying to classify works of an author. I have trained a model with Labeled Sentences (paragraphs, or strings of specified length), with words = the list of words in the paragraph, and tags = author's name. In my case I only have two authors. I tried accessing the docvecs attribute from the trained model but it only contains two elements, corresponding to the two tags I have when I trained the model. I'm trying to get the doc2vec numpy representations of each paragraph I fed in to the training so I can use that as training data later on. How can I do this? Thanks.
Getting numpy vector from a trained Doc2Vec model for each document
0
0
0
773
47,148,672
2017-11-07T01:42:00.000
1
0
1
0
python,multiprocessing
47,148,814
1
false
0
0
Pool() provides a specified number of worker processes. When a new task is submitted to the Pool and the Pool is not full, a new process is created to execute it; but if the number of processes in the pool has already reached the specified maximum, the task waits until a worker in the pool becomes free. For a workload that supports multi-threading, the recommended ratio of workers to cores is at least 1:1.5; this allows some workers to do I/O. And if your process doesn't use a full core, it won't take up another core.
1
0
0
I have two Xeon processors (each has 8 cores, so total 32 threads) on my MOBO, and I ran a simple code using multiprocessing.Pool(processors=30). When I monitor using htop, I find that only 12 threads are utilized. Does anyone know why that might be happening?
python multiprocessing with multiple cpus?
0.197375
0
0
229
47,149,250
2017-11-07T02:55:00.000
0
1
1
0
python,python-3.x,deployment,pyinstaller
61,566,212
2
false
0
0
You can create a Python package with all the uncommon modules in your project and upload it to a feed. This could be a public feed like www.pypi.org, where all the pip installs are downloaded from, or it could be your organization's Azure DevOps artifacts. After uploading your package, you only need to pip install from the feed that you have chosen.
1
3
0
Suppose I have a python script that uses many uncommon modules. I want to deploy this script to sites which are unlikely to have these uncommon modules. What are some convenient way to install and deploy this python script without having to run "pip" at all the sites? I am using python v3.x
Deploying python script that uses uncommon modules
0
0
0
164
47,152,610
2017-11-07T07:54:00.000
5
0
0
0
python,machine-learning,scikit-learn,regression,xgboost
64,742,353
3
false
0
0
In my opinion the main difference is the training/prediction speed. For further reference I will call xgboost.train the 'native_implementation' and XGBClassifier.fit the 'sklearn_wrapper'. I have made some benchmarks on a dataset of shape (240000, 348). Fit/train time: sklearn_wrapper time = 89 seconds; native_implementation time = 7 seconds. Prediction time: sklearn_wrapper = 6 seconds; native_implementation = 3.5 milliseconds. I believe this is because the sklearn_wrapper is designed to use pandas/numpy objects as input, whereas the native_implementation needs the input data to be converted into an xgboost.DMatrix object. In addition, one can optimise n_estimators using the native_implementation.
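A minimal side-by-side sketch of the two interfaces on random data (the hyperparameters are arbitrary and only meant to show the calling conventions):

import numpy as np
import xgboost as xgb

X, y = np.random.rand(100, 5), np.random.rand(100)

# sklearn wrapper: works directly on numpy/pandas objects
reg = xgb.XGBRegressor(n_estimators=50, max_depth=3)
reg.fit(X, y)
pred_wrapper = reg.predict(X)

# native implementation: requires an explicit DMatrix
dtrain = xgb.DMatrix(X, label=y)
booster = xgb.train({'max_depth': 3}, dtrain, num_boost_round=50)
pred_native = booster.predict(xgb.DMatrix(X))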
1
30
1
I already know "xgboost.XGBRegressor is a Scikit-Learn Wrapper interface for XGBoost." But do they have any other difference?
What is the difference between xgb.train and xgb.XGBRegressor (or xgb.XGBClassifier)?
0.321513
0
0
17,925
47,155,758
2017-11-07T10:36:00.000
0
0
0
0
python,user-interface,design-patterns,model-view-controller
47,155,904
1
true
1
0
I believe you can do this by using MVC. You don't have to create a separate controller for each model; you can just design a GET parameter like ?view= or specify an endpoint that reads the view type, and then handle it in a single controller.
1
0
0
I am trying to create a python gui application where I need an MVC like pattern to display and control models. My issue is that I will create and modify the models over time and I need to create several different "view types" (like a form view on one window and a map view on an other), each "view type" should be able to show each of my models. If I use an MVC pattern (which I am not even sure is relevant), I should then create a view-controller for each of my model and "view type". So if I create a new model, I will have to create a view-controller for each of the existing "view types", and if I want to create a new "view type" I will have to create a new view-controller for each model. Creating a generic view is hard because the models are quite independant and differents. Is there a good pattern or example I could use so I can make this smarter ? I'm stuck with this model / view design... Thanks for ideas.
python application link models and views
1.2
0
0
74
47,160,587
2017-11-07T14:39:00.000
-1
0
0
0
python,scrapy,scrapy-spider
61,992,037
1
false
1
0
Yes, it's possible to do what you're trying with Scrapy's LinkExtractor library. This will help you document the URLs for all of the pages on your site. Once this is done, you can iterate through the URLs and the source (HTML) for each page using the urllib Python library. Then you can use RegEx to find whatever patterns you're looking for within the HTML for each page in order to perform your analysis.
1
5
0
Is it possible to use Scrapy to generate a sitemap of a website including the URL of each page and its level/depth (the number of links I need to follow from the home page to get there)? The format of the sitemap doesn't have to be XML, it's just about the information. Furthermore I'd like to save the complete HTML source of the crawled pages for further analysis instead of scraping only certain elements from it. Could somebody experienced in using Scrapy tell me whether this is a possible/reasonable scenario for Scrapy and give me some hints on how to find instructions? So far I could only find far more complex scenarios but no approach for this seemingly simple problem. Addon for experienced webcrawlers: Given it is possible, do you think Scrapy is even the right tool for this? Or would it be easier to write my own crawler with a library like requests etc.?
Sitemap creation with Scrapy
-0.197375
0
1
1,378
47,166,301
2017-11-07T19:49:00.000
2
0
0
0
python,database,postgresql
47,166,411
2
false
0
0
When your script quits your connection will close and the server will clean it up accordingly. Likewise, it's often the case in garbage collected languages like Python that when you stop using the connection and it falls out of scope it will be closed and cleaned up. It is possible to write code that never releases these resources properly, that just perpetually creates new handles, something that can be problematic if you don't have something server-side that handles killing these after some period of idle time. Postgres doesn't do this by default, though it can be configured to, but MySQL does. In short Postgres will keep a database connection open until you kill it either explicitly, such as via a close call, or implicitly, such as the handle falling out of scope and being deleted by the garbage collector.
1
0
0
I wonder how does Postgres sever determine to close a DB connection, if I forgot at the Python source code side. Does the Postgres server send a ping to the source code? From my understanding, this is not possible.
How does Postges Server know to keep a database connection open
0.197375
1
0
41
47,169,033
2017-11-07T23:21:00.000
0
0
1
0
python,string,function,parameters,arguments
70,921,285
4
false
0
0
A parameter is the placeholder; an argument is what holds the place. Parameters are conceptual; arguments are actual. Parameters are the function-call signatures defined at compile-time; Arguments are the values passed at run-time. Mnemonic: "Pee" for Placeholder Parameters, "Aeigh" for Actual Arguments.
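For the concrete part of the question, a tiny illustration: name below is the parameter (the placeholder), and the string passed at the call site is the argument (the actual value):

def greet(name):          # 'name' is the parameter
    return 'Hello, ' + name

print(greet('world'))     # 'world' is the argument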
1
31
0
So I'm still pretty new to Python and I am still confused about using a parameter vs an argument. For example, how would I write a function that accepts a string as an argument?
Parameter vs Argument Python
0
0
0
28,483
47,171,293
2017-11-08T03:54:00.000
0
0
1
0
python,function,import,module,restrictions
47,173,602
4
false
0
0
You can try using a _single_leading_underscore. This convention is used for declaring private variables, functions, methods and classes in a module. Anything with this convention is ignored by from module import *. However, Python does not support truly private names, so we cannot force anything to be private; it can still be called directly from other modules. That is why it is sometimes called a "weak internal use indicator".
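A small sketch of that convention using the A.py / B.py names from the question; __all__ controls what from A import * brings in, but remember this is convention, not enforcement:

# A.py
__all__ = ['c']        # only c() is exported by "from A import *"

def _a():              # leading underscore: internal use
    pass

def _b():
    pass

def c():
    return 'public'

# B.py
from A import *        # only c is brought in
c()
# A._a() would still work after "import A" - Python does not enforce privacy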
2
0
0
For example, in file A.py I have functions: a (), b () and c (). I import A.py to B.py, but I want to restrict the functions a () and b (). so that from B.py I will be able to call only c (). How can I do that? Are there public, privates functions?
How can I restrict which functions are callable from an imported module? Can I make a function private? (Python)
0
0
0
789
47,171,293
2017-11-08T03:54:00.000
3
0
1
0
python,function,import,module,restrictions
47,171,462
4
false
0
0
Really, in Python everything is public. So if you wish, you can call anything. The standard hiding approach is to name methods with double underscores, like __method. This way Python mangles their names as _class__method, so they cannot be found as __method, but are indeed available under the long name.
2
0
0
For example, in file A.py I have functions: a (), b () and c (). I import A.py to B.py, but I want to restrict the functions a () and b (). so that from B.py I will be able to call only c (). How can I do that? Are there public, privates functions?
How can I restrict which functions are callable from an imported module? Can I make a function private? (Python)
0.148885
0
0
789
47,171,592
2017-11-08T04:26:00.000
0
0
1
0
python,file
47,171,646
1
true
0
0
import os
os.rename('foo.bar', 'foo.baz')
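For the whole folder described in the question, a sketch that walks the directory tree and renames every .sf1 file to .txt (the root path is a placeholder):

import os

root = '/path/to/giant/folder'   # placeholder
for dirpath, dirnames, filenames in os.walk(root):
    for name in filenames:
        if name.lower().endswith('.sf1'):
            old = os.path.join(dirpath, name)
            os.rename(old, os.path.splitext(old)[0] + '.txt')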
1
0
0
I read some information about changing file extension using python code. I have giant folder with several file types like .csv, .json and .sf1. All I need to do is to replace all .sf1 to .txt file. Any suggested code in python would help.
How to change specific file extensions in a folder using Python ?
1.2
0
0
92
47,172,525
2017-11-08T05:56:00.000
0
0
1
0
python,tensorflow,jupyter
47,173,103
2
false
0
0
It's done... I tried installing it within the TensorFlow environment and that worked.
2
0
1
I am already having tensorflow in my anaconda.Still when i run the ipython notebook ,it shows No module named tensorflow.
No module named tensorflow even after it is present in the local
0
0
0
263
47,172,525
2017-11-08T05:56:00.000
0
0
1
0
python,tensorflow,jupyter
47,172,877
2
false
0
0
Are you using a virtual environment? If yes, there might be a difference in versions. Try "pip install ipython", and then import tensorflow. It may work.
2
0
1
I am already having tensorflow in my anaconda.Still when i run the ipython notebook ,it shows No module named tensorflow.
No module named tensorflow even after it is present in the local
0
0
0
263
47,173,204
2017-11-08T06:45:00.000
0
0
0
0
python,django,web-applications,connection-pool
47,176,595
1
false
1
0
Your understanding of how things work is wrong, unfortunately. The way Django runs is very much dependent on the way you are deploying it, but in almost all circumstances it does not load code or initiate globals on every request. Certainly, uWSGI does not behave that way; it runs a set of long-lived workers that persist across many requests. In effect, uWSGI is already a connection pool. In other words, you are trying to solve a problem that does not exist.
1
0
0
The purpose is to implement a pool like database connection pool in my web application. My application is write by Django. The problem is that every time a http request come, my code will be loaded and run through. So if I write some code to initiate a pool. These code will be run per http request. And the pool will be initiated per request. So it is meaningless. So how should I write this?
How to implement a connection pool in web application like django?
0
0
0
917
47,173,286
2017-11-08T06:50:00.000
0
0
0
0
python,pyspark,apache-spark-sql,spark-dataframe
47,173,911
1
false
0
0
If I understand correctly, then what you will get on the client side is an int. At least it should be, if set up correctly. So the answer is no, the DataFrame is not going to hit your local RAM. You are interacting with the cluster via SparkSession (SparkContext for earlier versions). Even though you are developing - i.e. writing code - on the client machine, the actual computation of Spark operations - i.e. running pyspark code - will not be performed on your local machine.
1
0
1
I am deploying a Jupyter notebook(using python 2.7 kernel) on client side which accesses data on a remote and does processing in a remote Spark standalone cluster (using pyspark library). I am deploying spark cluster in Client mode. The client machine does not have any Spark worker nodes. The client does not have enough memory(RAM). I wanted to know that if I perform a Spark action operation on dataframe like df.count()on client machine, will the dataframe be stored in Client's RAM or will it stored on Spark worker's memory?
Where is RDD or Spark SQL dataframe stored or persisted in client deploy mode on a Spark 2.1 Standalone cluster?
0
1
0
187
47,173,777
2017-11-08T07:21:00.000
2
0
1
0
python-3.x,powerpoint,python-pptx
47,189,291
2
true
0
0
A picture shape in python-pptx has four crop properties (.crop_left, .crop_top, etc.). These each take a float value, with e.g. 0.1 corresponding to 10%, but unfortunately these are read-only at present. If you need to crop your photos, you'll need to do it by hand or perhaps pre-process the images with something like the Python Imaging Library (PIL/Pillow) to modify their extents before inserting them. An image can be added to a slide in two ways. Either you can add it as a separate shape at an arbitrary location using slide.shapes.add_picture(), or you can add an image placeholder to the layout you use to create the slide and use placeholder.insert_picture(). This latter approach automatically draws the position and size from the placeholder, which helps keep those consistent across slides using that layout.
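A rough sketch of both points, pre-cropping with Pillow and then adding the picture either at an arbitrary position or via a placeholder; the file names, layout index, placeholder index and crop box are all placeholders:

from pptx import Presentation
from pptx.util import Inches
from PIL import Image

# pre-crop the photo, since the crop_* attributes are read-only here
img = Image.open('photo.jpg')
img.crop((0, 0, 800, 600)).save('photo_cropped.jpg')    # (left, top, right, bottom)

prs = Presentation('template.pptx')
slide = prs.slides.add_slide(prs.slide_layouts[5])

# option 1: arbitrary position and size
slide.shapes.add_picture('photo_cropped.jpg', Inches(1), Inches(1), width=Inches(4))

# option 2: let a picture placeholder in the layout control position/size
# slide.placeholders[1].insert_picture('photo_cropped.jpg')

prs.save('out.pptx')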
1
1
0
I am doing process automation. it involve adding of images to the slides (total 8 images, 2 per slide, even some text below it.). I could add the image using pptx. but images need some cropping. How do I go about cropping it ? I also need to use some specific format for the slide. How to use that layout ? If someone can give process flow for doing it, would be grateful.
How to add images to the slides using python?
1.2
0
0
2,654
47,175,167
2017-11-08T08:49:00.000
0
0
0
0
python,django,mod-wsgi,vps,mod-python
47,175,245
1
false
1
0
You should use mod_wsgi. mod_python is the old and direct interface to Apache from Python. wsgi is a standard interface between any webserver and Python (mod_wsgi is the Apache implementation).
1
0
0
What's the difference between mod_wsgi and mod_python. In order to publish django websites on VPS, which one should I install on VPS?
Django website on VPS with WHM CPanel
0
0
0
363
47,181,684
2017-11-08T14:01:00.000
1
0
1
0
python,encoding,utf-8,iso-8859-1
47,188,166
3
false
0
0
libiconv has a "TRANSLIT" feature which does what you want
1
3
0
Does anyone know of Python libraries that allows you to convert a UTF-8 string to ISO-8859-1 encoding in a smart way? By smart, I mean replacing characters like "–" by "-" or so. And for the many characters for which an equivalent really cannot be thought of, replace with "?" (like encode('iso-8859-1', errors='replace') does).
UTF-8 to ISO-8859-1 encoding: replace special characters with closest equivalent
0.066568
0
0
4,764
47,183,145
2017-11-08T15:10:00.000
0
1
1
0
python,komodo
47,184,856
1
false
0
0
In the Properties and Settings just go to File Preferences. Here you can change the language setting.
1
0
0
I use Komodo-edit to edit usually python files, and so far the syntax highlighting works fine. But currently I have created two files in the same(!) session which are in different tabs: test2.py and test3.py. In one case the code is syntax highlighted, for the other code it is not. Everything is just in black font colour. What is going on? How to have syntax highlighting for this tab? When I right-click in a tab I can select Properties and Settings which opens a new window. In this window, I select File Preferences -> Languages -> Syntax Checking. In there the selected language is text and I cannot change it. The section is called Language-specific syntax checking properties.
Why do I have partly no code syntax highlighting in Komodo-edit?
0
0
0
85
47,184,096
2017-11-08T15:54:00.000
1
0
0
0
python,django,forms
47,184,290
2
true
1
0
Writing a form in Django will ultimately produce a HTML form. The Django form can be bound to a model which will then apply validation to the form based on the model structure, this saves on having to manually code the validation and also helps keep everything aligned when changes are made to the model.
1
3
0
I am working my django projects based on html form submission method . But recently, I came to know that there exist django forms. Please let me know what is the difference between them.
what is the difference between Django form and html form
1.2
0
0
1,130
47,184,513
2017-11-08T16:13:00.000
0
0
1
0
python,python-3.x,function,recursion
47,184,616
1
false
0
0
Why do you want to do that? You could just run find . -iname '*.py' | wc -l on any Linux box. Alternatively, you could import walk from os and use it to walk the directory tree, checking the extension of each file name.
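A minimal recursive version with os.walk, as hinted above (pass whatever directory you want to scan):

import os

def count_py_files(directory):
    total = 0
    for dirpath, dirnames, filenames in os.walk(directory):
        total += sum(1 for name in filenames if name.endswith('.py'))
    return total

print(count_py_files('.'))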
1
0
0
I am quite new to python and i am trying to figure out the way we can write a function which can compute the number of python files(.py extension) in a specified directory recursively.
Count number of files with .py extension using python function
0
0
0
157
47,185,422
2017-11-08T16:55:00.000
0
0
1
0
python,message-queue,python-2.2
47,205,971
1
false
0
0
I made a .jar that reads the queue and inserts into our database, and a listener in Python that looks at the table. Not Python 2.2, but it works.
1
0
0
How to do a Message Queuing in python 2.2? Must be in python 2.2, I need for a legacy system. I've already looked a lot and found nothing, I'm already thinking that it can not be done. Which I do not doubt, since python 2.2 is quite old.
Message Queuing on python 2.2
0
0
0
34
47,187,750
2017-11-08T19:09:00.000
6
0
0
0
python-3.x,machine-learning,nlp,sentiment-analysis,lightgbm
47,379,830
2
true
0
0
Are there any approaches to follow to handle this type of datasets that are so imbalanced? Your dataset is almost balanced. 70/30 is close to equal. With gradient boosted trees it is possible to train on much more unbalanced data, like credit scoring, fraud detection, and medical diagnostics, where the percentage of positives may be less than 1%. Your problem might be not in class imbalance, but in the wrong metric you use. When you calculate accuracy, you implicitly penalize your model equally for false negatives and false positives. But is it really the case? When classes are imbalanced, or just not comparable from the business or physical point of view, other metrics, like precision, recall, or ROC AUC, might be of more use than accuracy. For your problem I would recommend ROC AUC. Maybe what you really want is probabilistic classification. And if you want to keep it binary, play with the threshold used for the classification. How can I further improve my model? Because it is analysis of text, I would suggest more accurate data cleaning. Some directions to start with: Did you try different regimes of lemmatization/stemming? How did you preprocess special entities, like numbers, smileys, abbreviations, company names, etc.? Did you exploit collocations, by including bigrams or even trigrams into your model along with words? How did you handle negation? A single "no" could change the meaning dramatically, and CountVectorizer catches that poorly. Did you try to extract semantics from the words, e.g. match synonyms or use word embeddings from a pretrained model like word2vec or fastText? Maybe tree-based models are not the best choice. In my own experience, the best sentiment analysis was performed by linear models like logistic regression or a shallow neural network. But you should heavily regularize them, and you should scale your features wisely, e.g. with TF-IDF. And if your dataset is large, you can try deep learning and train an RNN on your data. LSTM is often the best model for many text-related problems. Should I try down-sampling? No, you should never down-sample, unless you have too much data to process on your machine. Down-sampling creates biases in your data. If you really want to increase the relative importance of the minority class for your classifier, you can just reweight the observations. As far as I know, in LightGBM you can change class weights with the scale_pos_weight parameter. Or is it the maximum possible accuracy? How can I be sure of it? You can never know. But you can do an experiment: ask several humans to label your test samples, and compare them with each other. If only 90% of labels coincide, then even humans cannot reliably classify the remaining 10% of samples, so you have reached the maximum. And again, don't focus on accuracy too much. Maybe, for your business application it is okay if you incorrectly label some positive reviews as negative, as long as all the negative reviews are successfully identified.
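A small sketch of the reweighting and ROC AUC evaluation mentioned above, on synthetic 70/30 data standing in for the vectorized text; every parameter value here is a placeholder to tune:

import numpy as np
import lightgbm as lgb
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X = np.random.rand(1000, 20)
y = (np.random.rand(1000) < 0.3).astype(int)          # ~30% positives
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

ratio = (y_train == 0).sum() / float((y_train == 1).sum())   # ~2.3 for a 70/30 split

clf = lgb.LGBMClassifier(n_estimators=200, learning_rate=0.05, scale_pos_weight=ratio)
clf.fit(X_train, y_train)

probs = clf.predict_proba(X_val)[:, 1]
print('ROC AUC:', roc_auc_score(y_val, probs))

preds = (probs >= 0.4).astype(int)   # play with the threshold instead of the default 0.5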
1
2
1
I am trying to perform sentiment analysis on a dataset of 2 classes (Binary Classification). Dataset is heavily imbalanced about 70% - 30%. I am using LightGBM and Python 3.6 for making the model and predicting the output. I think imbalance in dataset effect performance of my model. I get about 90% accuracy but it doesn't increase further even though I have performed fine-tuning of the parameters. I don't think this the maximum possible accuracy as there are others who scored better than this. I have cleaned the dataset with Textacy and nltk. I am using CountVectorizer for encoding the text. I have tried up-sampling the dataset but it resulted in poor model (I haven't tuned that model) I have tried using the is_unbalance parameter of LightGBM, but it doesn't give me a better model. Are there any approaches to follow to handle this type of datasets that are so imbalanced.? How can I further improve my model.? Should I try down-sampling.? Or is it the maximum possible accuracy.? How can I be sure of it.?
Sentiment Analysis with Imbalanced Dataset in LightGBM
1.2
0
0
3,502
47,190,414
2017-11-08T21:59:00.000
1
0
0
0
python,alamofire,webapp2
47,213,275
1
true
1
0
To access the trip dictionary you would need to use:
class CreateMediaPostTaskHandler(webapp2.RequestHandler):
    def post(self):
        params = self.request.params
        start_address_city = params['trip[startAddress][city]']
1
1
0
I have a dictionary in Swift like so: let params : [String : Any] = ["user_key": "ag1kZXZ-Z29hbC1yaXNlchELEgRVc2VyGICAgICAgIAKDA", "post_text": "Lol", "trip": ["posted_by": "", "endAddress": "", "post_text": "Lol", "startAddress": ["state": "IL", "city": "Oak Park", "address1": "6503 West North Avenue", "address2": "", "zipCode": ""], "time": "", "role": "", "eta": ""]] When I send this in the params object of Alamofire, the webapp2 accesses the params by self.request.get('user_key') and so forth, however, it does not get the 'trip' parameter. ```self.request.get('trip') returns nothing. How do I send this dictionary to webapp2 Request Handler?
How to send a dictionary through params webapp2 python?
1.2
0
0
56
47,191,192
2017-11-08T23:01:00.000
0
0
1
0
python,jupyter-notebook,statsmodels
48,732,416
1
false
0
0
Use PyCharm instead of Jupyter Notebook; you may get some warning messages, but without this ImportError.
1
2
0
I am trying to import statsmodels.api to run a logistic regression and i get the following error: ImportError: cannot import name 'getargspec' I have tried to updgrde: pip install statsmodels --upgrade Still nothing.
Python: cannot import statsmodels.api --> ImportError: cannot import name 'getargspec'
0
0
0
849
47,193,079
2017-11-09T02:47:00.000
13
0
1
0
python,module,package
47,193,123
2
true
0
0
x may be a package or a module and y is something inside that module/package. A module is a .py file, a package is a folder with an __init__.py file. When you import a package as a module, the contents of __init__.py module are imported.
1
7
0
Whenever I do from 'x' import 'y' I was wondering which one is considered the 'module' and which is the 'package', and why it isn't the other way around?
Module vs. Package?
1.2
0
0
6,342
47,193,190
2017-11-09T03:02:00.000
1
0
0
1
python,cloud,publish-subscribe
51,531,222
1
false
0
0
Update your google-cloud-pubsub to the latest version. It should resolve the issue
1
2
0
After running sudo pip install google.cloud.pubsub I am running the following python code in ubtunu google compute engine: import google.cloud.pubsub_v1 I get the following error when importing this: ImportError: No module named pubsub_v1 attribute 'SubscriberClient' Can anyone tell me how to fix this?
AttributeError: 'module' object has no attribute 'SubscriberClient'
0.197375
0
0
558
47,194,189
2017-11-09T04:58:00.000
0
0
0
0
python,windows,google-chrome,selenium,freeze
47,257,116
1
true
0
0
Found that the issue was due to AV. Stopped the anti virus for some time and it is running properly now.
1
0
0
We are running a Python Selenium script in Windows Chrome, and are now faced with an issue which we cannot resolve. The script (medium sized) runs to completion only say 1 in 3 times. Other times, it freezes in the middle, maybe after 10 steps, or 15 steps- after that there is no response whatsoever. The only clue we got was that there is an error printed out (usually after 10 minutes of waiting) access denied. After this hanging the only option is to Kill the browser or Kill the process We tried --disable-extensions, and having an user-dir etc, but to no avail. There is an anti-virus (Symantec) running in the machine, which cannot be disabled (enterprise level security settings). Has anyone faced this issue? Is there any solution? Please let me know.
Chrome gets hung randomly - Python Selenium Windows
1.2
0
1
95
47,194,826
2017-11-09T05:56:00.000
0
1
0
1
python,unix,win32com
47,194,889
2
false
0
0
No. win32com can only be installed and used on a Windows OS.
1
0
0
I wanted to know whether win32com.client package be used in an python app if I want to run that application in the Unix server. My goal is to set a cron in the unix server to automate some mail related task using win32com.client. All I want to know is that will this whole win32com will work smoothly in the Unix server.
Small query regarding win32com.client
0
0
0
624
47,199,421
2017-11-09T10:27:00.000
5
0
0
0
python,amazon-web-services,amazon-ec2,boto3
47,202,059
2
true
1
0
No way! You cannot change your max price once the instance is running. In order to change the price of your bid, you must cancel it and place another bid.
2
3
0
Can I keep my spot instance in use by modifying my max bid price pro-grammatically(Python boto) as soon as the bid price increases, so as to stop it from terminating itself and manually terminate it once am done with my work. I know I can use the latest spot block to use the spot instance upto 6 hours, but it reduces the profit margin. So I wanted to know if I can modify my bid pricing on the go based on the current demand. Thanks.
Changing max bid price of AWS Spot instance
1.2
0
1
2,432
47,199,421
2017-11-09T10:27:00.000
1
0
0
0
python,amazon-web-services,amazon-ec2,boto3
47,204,123
2
false
1
0
No. It is not possible to change the bid price on an existing spot request. You will need to create a new spot request with the new bid price. However, any EC2 instances allocated with the first request will always be tied to that first request. If your work cannot handle an EC2 instance terminating prematurely, then spot instances are not right for your work and you should use OnDemand or Reserved instances.
2
3
0
Can I keep my spot instance in use by modifying my max bid price pro-grammatically(Python boto) as soon as the bid price increases, so as to stop it from terminating itself and manually terminate it once am done with my work. I know I can use the latest spot block to use the spot instance upto 6 hours, but it reduces the profit margin. So I wanted to know if I can modify my bid pricing on the go based on the current demand. Thanks.
Changing max bid price of AWS Spot instance
0.099668
0
1
2,432
47,200,623
2017-11-09T11:24:00.000
1
0
0
1
python,gcloud,gsutil,google-cloud-sdk
49,879,820
5
false
0
0
I had the same issue. I am using a Mac. Looking into /usr/local/lib/python2.7/site-packages I found a Homebrew protobuf link. I removed it with "rm homebrew-protobuf.pth", and then gsutil started working.
2
9
0
I've been using gcloud and gsutil for a while but now suddenly for any gsutil command I run I get errors: Traceback (most recent call last): File "/Users/julian/google-cloud-sdk/bin/bootstrapping/gsutil.py", line 12, in import bootstrapping File "/Users/julian/google-cloud-sdk/bin/bootstrapping/bootstrapping.py", line 22, in from googlecloudsdk.core.credentials import store as c_store File "/Users/julian/google-cloud-sdk/lib/googlecloudsdk/core/credentials/store.py", line 27, in from googlecloudsdk.core import http File "/Users/julian/google-cloud-sdk/lib/googlecloudsdk/core/http.py", line 31, in from googlecloudsdk.core.resource import session_capturer File "/Users/julian/google-cloud-sdk/lib/googlecloudsdk/core/resource/session_capturer.py", line 32, in from googlecloudsdk.core.resource import yaml_printer File "/Users/julian/google-cloud-sdk/lib/googlecloudsdk/core/resource/yaml_printer.py", line 17, in from googlecloudsdk.core.resource import resource_printer_base File "/Users/julian/google-cloud-sdk/lib/googlecloudsdk/core/resource/resource_printer_base.py", line 38, in from googlecloudsdk.core.resource import resource_projector File "/Users/julian/google-cloud-sdk/lib/googlecloudsdk/core/resource/resource_projector.py", line 34, in from google.protobuf import json_format as protobuf_encoding ImportError: cannot import name json_format I tried gcloud update and gcloud reinstall but still get same problem. Is there a conflict with the python installation? Any other ideas?
gsutil no longer works?
0.039979
0
0
2,067
47,200,623
2017-11-09T11:24:00.000
0
0
0
1
python,gcloud,gsutil,google-cloud-sdk
51,340,051
5
false
0
0
For CentOS 7.5 (probably earlier as well) using the Google Cloud SDK rpm install, removing the protobuf-python package yum remove protobuf-python will solve this.
2
9
0
I've been using gcloud and gsutil for a while but now suddenly for any gsutil command I run I get errors: Traceback (most recent call last): File "/Users/julian/google-cloud-sdk/bin/bootstrapping/gsutil.py", line 12, in import bootstrapping File "/Users/julian/google-cloud-sdk/bin/bootstrapping/bootstrapping.py", line 22, in from googlecloudsdk.core.credentials import store as c_store File "/Users/julian/google-cloud-sdk/lib/googlecloudsdk/core/credentials/store.py", line 27, in from googlecloudsdk.core import http File "/Users/julian/google-cloud-sdk/lib/googlecloudsdk/core/http.py", line 31, in from googlecloudsdk.core.resource import session_capturer File "/Users/julian/google-cloud-sdk/lib/googlecloudsdk/core/resource/session_capturer.py", line 32, in from googlecloudsdk.core.resource import yaml_printer File "/Users/julian/google-cloud-sdk/lib/googlecloudsdk/core/resource/yaml_printer.py", line 17, in from googlecloudsdk.core.resource import resource_printer_base File "/Users/julian/google-cloud-sdk/lib/googlecloudsdk/core/resource/resource_printer_base.py", line 38, in from googlecloudsdk.core.resource import resource_projector File "/Users/julian/google-cloud-sdk/lib/googlecloudsdk/core/resource/resource_projector.py", line 34, in from google.protobuf import json_format as protobuf_encoding ImportError: cannot import name json_format I tried gcloud update and gcloud reinstall but still get same problem. Is there a conflict with the python installation? Any other ideas?
gsutil no longer works?
0
0
0
2,067
47,202,472
2017-11-09T12:59:00.000
0
0
0
0
python,ubuntu,scrapy,web-crawler,scrapyd
54,874,106
1
false
1
0
You need to install the following packages:
apt-get update
apt-get install gcc
apt-get install python-dev
pip install scrapyd
pip install scrapyd-client
pip install beautifulsoup4
pip install dateparser
Then try to deploy again.
1
0
0
I have 2 pc, PC A doesn't have errors and crawlers are deployed successfully, but on PC B, the error happens. My Scrapyd server is running but then when I tried to deploy my crawlers, these error occurs. {"status": "error", "message":Traceback (most recent call last):\\n File \"/usr/lib/python2.7/runpy.py\", line 162, in _run_module_as_main\\n \"__main__\", fname, loader, pkg_name)\\n File \"/usr/lib/python2.7/runpy.py\", line 72, in _run_code\\n exec code in run_globals\\n File \"/usr/local/lib/python2.7/dist-packages/scrapyd/runner.py\", line 40, in <module>\\n main()\\n File \"/usr/local/lib/python2.7/dist-packages/scrapyd/runner.py\", line 37, in main\\n execute()\\n File \"/usr/local/lib/python2.7/dist-packages/scrapy/cmdline.py\", line 148, in execute\\n cmd.crawler_process = CrawlerProcess(settings)\\n File \"/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py\", line 243, in __init__\\n super(CrawlerProcess, self).__init__(settings)\\n File \"/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py\", line 134, in __init__\\n self.spider_loader = _get_spider_loader(settings)\\n File \"/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py\", line 330, in _get_spider_loader\\n return loader_cls.from_settings(settings.frozencopy())\\n File \"/usr/local/lib/python2.7/dist-packages/scrapy/spiderloader.py\", line 61, in from_settings\\n return cls(settings)\\n File \"/usr/local/lib/python2.7/dist-packages/scrapy/spiderloader.py\", line 25, in __init__\\n self._load_all_spiders()\\n File \"/usr/local/lib/python2.7/dist-packages/scrapy/spiderloader.py\", line 47, in _load_all_spiders\\n for module in walk_modules(name):\\n File \"/usr/local/lib/python2.7/dist-packages/scrapy/utils/misc.py\", line 71, in walk_modules\\n submod = import_module(fullpath)\\n File \"/usr/lib/python2.7/importlib/__init__.py\", line 37, in import_module\\n __import__(name)\\n File \"spiderman/spiders/scraper.py\", line 16, in <module>\\n mail = input('Email : ')\\nEOFError: EOF when reading a line\\n", "node_name": "MY PC"}
Scrapyd Deploy Error: EOFError: EOF when reading a line
0
0
0
229
47,206,395
2017-11-09T16:03:00.000
2
0
0
0
python,amazon-web-services,amazon-s3,boto3
47,210,265
1
false
1
0
You are out of luck. If the S3 bucket was created by a CFT, then you can either not add new tags, or add new tags and lose the tags created by the CFT (and then your delete-stack will fail unless you exclude that S3 resource from deletion). You can try updating the stack with new tags as suggested by @jarmod.
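If you go with the "add new tags and lose the aws: ones" option, a boto3 sketch of the merge-then-put approach looks roughly like this (bucket name and tags are placeholders; the aws:-prefixed keys are skipped because S3 refuses to accept them back):

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client('s3')
bucket = 'my-bucket'                                  # placeholder
new_tags = {'Owner': 'data-team', 'Env': 'prod'}      # placeholder tags to add

try:
    existing = s3.get_bucket_tagging(Bucket=bucket)['TagSet']
except ClientError:
    existing = []                                     # bucket had no tags yet

merged = {t['Key']: t['Value'] for t in existing if not t['Key'].startswith('aws:')}
merged.update(new_tags)

s3.put_bucket_tagging(
    Bucket=bucket,
    Tagging={'TagSet': [{'Key': k, 'Value': v} for k, v in merged.items()]})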
1
2
0
Currently I'm in the process of tagging every S3 bucket I have using boto3. Compared to a resource like Lambdas, doing s3.put_bucket_tagging overwrites any previous tags, compared to Lambdas which only add extra tags while keeping old ones. Is there a way to only add tags, rather than overwrite them? Secondly, I have created a method to take the current tags, add the new tags on, and then overwrite the tags with those values, so I don't lose any tags. But some of these S3 buckets are created by CloudFormation and thus are prefixed with aws: which gives me the error Your TagKey cannot be prefixed with aws: when I try to take the old tags and re-put them with the new tags. A fix for either of these to give me the ability to automate tagging of every s3 bucket would be the best solution.
Using boto3 to add extra tags to S3 buckets
0.379949
0
1
1,424
47,209,114
2017-11-09T18:29:00.000
0
0
0
0
python,django,postgresql,database-trigger
47,226,539
1
false
1
0
The problem resulted from an error in time zone management.
1
0
0
I have made several tables in a Postgres database in order to acquire data with time values and do automatic calculation in order to have directly compiled values. Everything is done using triggers that will update the right table in case of modification of values. For example, if I update or insert a value measured @ 2017-11-06 08:00, the trigger will detect this and do the update for daily calculations; another one will do the update for monthly calculations, and so... Right now, everything is working well. Data acquisition is done in python/Qt to update the measured values using pure SQL instruction (INSERT/UPDATE/DELETE) and automatic calculation are working. Everything is working well too when I use an interface like pgAdmin III to change values. My problem comes with development in django to display and modify the data. Up to now, I did not have any problem as I just displayed data without trying to modify them. But now I don't understand what's going on... If I insert a new value using model.save(), eveything is working: the hourly measure is written, the daily, monthly and yearly calculation are done. But if I update an existing value, the triggers seem to not see the modification: the hourly measure is updated (so model.save() do the job), but the daily calculation trigger seems not to be launched as the corresponding table is not updated. As said previously, manually updating the same value with pgAdmin III works: the hourly value is updated, the daily calculation is done. I do not understand why the update process of django seems to disable my triggers... I have tried to use the old save algorithm (select_on_save = True), but without success. The django account of the database is owning all the tables, triggers and functions. He has execute permission on all triggers and functions. And again, inserting an item with django is working using the same triggers and functions. My solution for the moment is to use direct SQL instruction with python/Qt to do the job, but I feel a bit frustrating not to be able to use only django API... Does anybody have some idea to debug or solve this issue?
Issues with django and postgresql triggers
0
1
0
1,413
47,209,532
2017-11-09T18:55:00.000
0
0
0
0
python,numpy,lapack,blas
48,100,987
2
false
0
0
Another option is ArrayFire. While this package does not contain a complete BLAS and LAPACK implementation, it does offer much of the same functionality. It is compatible with OpenCL and CUDA, and hence, is compatible with AMD and Nvidia architectures. It has wrappers for Python, making it easy to use.
1
4
1
We have a Python code which involves expensive linear algebra computations. The data is stored in NumPy arrays. The code uses numpy.dot, and a few BLAS and LAPACK functions which are currently accessed through scipy.linalg.blas and scipy.linalg.lapack. The current code is written for CPU. We want to convert the code so that some of the NumPy, BLAS, and LAPACK operations are performed on a GPU. I am trying to determine the best way to do this is. As far as I can tell, Numba does not support BLAS and LAPACK functions on the GPU. It appears that PyCUDA may the best route, but I am having trouble determining whether PyCUDA allows one to use both BLAS and LAPACK functions. EDIT: We need the code to be portable to different GPU architectures, including AMD and Nvidia. While PyCUDA appears to offer the desired functionality, CUDA (and hence, PyCUDA) cannot run on AMD GPUs.
NumPy + BLAS + LAPACK on GPU (AMD and Nvidia)
0
0
0
2,168
47,209,988
2017-11-09T19:25:00.000
5
0
1
0
python,kdb,qpython,google-colaboratory
47,210,437
1
true
0
0
!pip install qpython is the recommended approach: we can't hope to have every possible dep installed, so users should just install what they need.
1
2
0
I am testing out Googles colaboratory and I am getting an error ImportError: No module named qpython I know because its a virtual machine the modules are installed there but if one is missing is there a way to get it installed? Thanks!
How can I install a module on colaboaratory?
1.2
0
0
191
47,211,305
2017-11-09T20:51:00.000
2
0
0
0
android,python,cpython
47,213,499
1
true
0
1
I didn't find out how to build it. But I downloaded the CrystaX NDK, found the Python libraries already compiled there, and just copied them into my project.
1
1
0
I want to port some algorithms written on Python to Android. The algorithms don't use any OS specific staff, only several CPython modules for data processing. And I don't want to use some heavy frameworks like kivy. Are there any easy way to build cpython for Android?
How to build CPython for Android?
1.2
0
0
389
47,215,245
2017-11-10T03:31:00.000
2
0
0
1
python,apache-kafka,confluent-platform
47,280,609
1
true
0
0
librdkafka will do its best to automatically recover from any errors it hits, so the error_cb is mostly informational and it is generally not advisable for the application to do anything drastic upon such an error. _MSG_TIMED_OUT and _TIMED_OUT - Kafka protocol requests timed out, typically due to network or broker issues. The requests will be retried according to the retry configuration, or the corresponding API / functionality will propagate a more detailed error (e.g., failure to commit offsets). This error can safely be ignored. _TRANSPORT - the broker connection went down or could not be established; again, this is a temporary network or broker problem and may too be safely ignored.
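In practice that means error_cb can be a plain logging hook rather than a place to raise; a sketch with a producer (broker address and topic are placeholders, and the same config key works for consumers):

import logging
from confluent_kafka import Producer, KafkaError

def on_error(err):
    # transient librdkafka-level problems: just log, the client retries/reconnects itself
    if err.code() in (KafkaError._MSG_TIMED_OUT, KafkaError._TIMED_OUT, KafkaError._TRANSPORT):
        logging.warning('transient kafka error: %s', err)
    else:
        logging.error('kafka error: %s', err)

producer = Producer({'bootstrap.servers': 'broker:9092', 'error_cb': on_error})
producer.produce('my-topic', b'payload')
producer.flush()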
1
0
0
I'm wondering what is the proper reaction to evens that lead to the error_cb callback being called. Initially our code was always throwing an Exception from the error_cb regardless of anything. We're running our stuff in Kubernetes, so restarting a consumer/producer is (technically) not a big deal. But the number of restart were quite significant, so we added a couple of exceptions, which we just log without quitting: KafkaError._MSG_TIMED_OUT (both consumer and producer) KafkaError._TRANSPORT (consumer) These are the ones that we see a lot, and confluent-kafka-python seems to be able to recover from them without any extra help. Now I'm wondering if we were right to throw any exceptions in error_cb to begin with. Should we start treating error_cb just as a logging function, and only react to exceptions thrown explicitly by poll and flush?
error_cb in confluent-kafka-python producers and consumers
1.2
0
0
1,415
47,215,370
2017-11-10T03:47:00.000
0
1
0
0
python,python-2.7,python-3.x,recaptcha,captcha
47,215,397
1
true
0
0
His IP is probably flagged. Also recaptcha will automatically throw a captcha if an outdated or odd user agent is detected.
1
0
0
I wrote a Python script for mass commenting on a certain website.It works perfectly on my computer but when my friend tried running it on his then a Captcha would appear on the login page where as on running it on my machine no captcha appears.I tried resetting the caches,cookies but still no captcha. Tried resetting the browser settings but still no luck and on the other system the captcha always appears.If you could list down the reasons of why this happening that would be great.Thanks
Captcha not Appearing or Malfunctioning on one system and working on the other
1.2
0
1
40
47,217,046
2017-11-10T06:44:00.000
0
0
1
0
python,regex-alternation
47,217,120
2
false
0
0
Group the delimiter and the roman numeral and treat it the same way you treat the decimal point in the float / int (you don't know whether or not it will appear but it will only appear once at most). Hope this helps!
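Building on that hint, one possible pattern that makes a second '/'-separated numeral optional (a sketch, not exhaustively tested):

import re

pattern = re.compile(r'^[ACKMF]\d+(?:\.\d+)?(?:IV|V|I{1,3})(?:/(?:IV|V|I{1,3}))?$')

for test_str in ['C5.3IV', 'C5.3IV/V', 'A2III', 'B5.3IV', 'C5.3VI']:
    print(test_str, bool(pattern.match(test_str)))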
1
0
0
I want to match a string for which the string elements should contain specific characters only: First character from [A,C,K,M,F] Followed by a number (float or integer). Allowed instances: 1,2.5,3.6,9,0,6.3 etc. Ending at either of these roman numerals [I, II, III, IV, V]. The regex that I am supplying is the following bool(re.match(r'(A|C|K|M|F){1}\d+\.?\d?(I|II|III|IV|V)$', test_str)) "(I|II|III|IV|V)" part will return true for test_str='C5.3IV' but I want to make it true even if two of the roman numerals are present at the same time with a delimiter / i.e. the regex query should retrun true for test_str='C5.3IV/V' also. How should I modify the regex? Thanks
Python regex: Using Alternation for sets of words with delimiter
0
0
0
242
47,217,314
2017-11-10T07:01:00.000
0
0
1
0
python
47,217,501
2
false
0
0
The code looks right. If you call or print gene_line outside of the for loops, then it is only going to print the last assigned value of the gene_line variable, which would be the last line of the file.
1
1
0
I have this function that opens a txt file containing 1 string of characters on each line. It doesn't seem to be working properly. When I test it to call the line of text (gene_line) it is returning the bottom-most line of text when it should be the first line of text.
Not iterating through lines of text file properly
0
0
0
45