Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
47,807,709 | 2017-12-14T07:07:00.000 | 12 | 0 | 1 | 0 | python,installation,pycharm,anaconda | 56,228,150 | 6 | false | 0 | 0 | You have mentioned that you have already installed the Anaconda in your system. You can try the following,
Go to the Project Interpreter settings in PyCharm.
Under Conda Environment, choose Existing Environment and select python.exe from the Anaconda installation folder in the Interpreter field.
Enable the option "Make available to all projects" and click OK.
You should now be able to see the libraries in the Project Interpreter. | 4 | 5 | 0 | I am new to Python. Installed Anaconda on my system.
I installed PyCharm too.
When I try to run a file from PyCharm I get this error message:
C:\Python\Test\venv\Scripts\python.exe python 3.6
C:/Python/Test/while.py C:\Python\Test\venv\Scripts\python.exe: can't
open file 'python': [Errno 2] No such file or directory | PyCharm not finding Anaconda Python, giving "can't open file 'python': [Errno 2] No such file or directory?" | 1 | 0 | 0 | 24,660 |
47,807,709 | 2017-12-14T07:07:00.000 | 1 | 0 | 1 | 0 | python,installation,pycharm,anaconda | 56,841,154 | 6 | false | 0 | 0 | I had the same issue and then solved it by running PyCharm as administrator. | 4 | 5 | 0 | I am new to Python. Installed Anaconda on my system.
I installed PyCharm too.
When I try to run a file from PyCharm I get this error message:
C:\Python\Test\venv\Scripts\python.exe python 3.6
C:/Python/Test/while.py C:\Python\Test\venv\Scripts\python.exe: can't
open file 'python': [Errno 2] No such file or directory | PyCharm not finding Anaconda Python, giving "can't open file 'python': [Errno 2] No such file or directory?" | 0.033321 | 0 | 0 | 24,660 |
47,807,709 | 2017-12-14T07:07:00.000 | 2 | 0 | 1 | 0 | python,installation,pycharm,anaconda | 56,523,032 | 6 | false | 0 | 0 | Under Conda Environment, you can try to choose X:\Anaconda3\Scripts\conda.exe. | 4 | 5 | 0 | I am new to Python. Installed Anaconda on my system.
I installed PyCharm too.
When I try to run a file from PyCharm I get this error message:
C:\Python\Test\venv\Scripts\python.exe python 3.6
C:/Python/Test/while.py C:\Python\Test\venv\Scripts\python.exe: can't
open file 'python': [Errno 2] No such file or directory | PyCharm not finding Anaconda Python, giving "can't open file 'python': [Errno 2] No such file or directory?" | 0.066568 | 0 | 0 | 24,660 |
47,810,891 | 2017-12-14T10:19:00.000 | 2 | 0 | 0 | 0 | python,cuda,numba | 47,820,153 | 1 | true | 0 | 0 | The Numba CUDA Python implementation presently doesn't support any sort of function pointer or objects within kernels. So what you would have ambitions to do is not possible. | 1 | 0 | 1 | I am working in python with the numba library and wondered if there is a solution to write a parallel version of a previous work. I have a function f(X, S, F) where X and S are scalar arrays, and F is a list of functions.
I am almost sure that passing an array of functions is not possible with numba (and cuda in general?). What would be an alternative solution to this? If there is one.
Thanks in advance for your help | Passing functions to CUDA blocks with numba | 1.2 | 0 | 0 | 227 |
47,812,002 | 2017-12-14T11:16:00.000 | 0 | 0 | 1 | 1 | python,pyperclip | 47,812,094 | 2 | false | 0 | 0 | Fixed it, I tried out "C:\Users\Scott\AppData\Local\Programs\Python\Python36-32\Scripts\pip.exe install pyperclip", and it worked. | 1 | 0 | 0 | I'm following a course on Udemy: Automate the Boring Stuff using Python. For one of the courses the guide asks me to install Pyperclip. Now I've tried this, but cmd keeps returning: " 'pip' is not recognized as an internal or external command, operable or batch file
The path I use is "C:\Users\myname\AppData\Local\Programs\Python\Python36-32\Scripts\pip.exe", I've tried it without the pip.exe, but then it also returns an error. | PYTHON - pyperclip installing issue | 0 | 0 | 0 | 169 |
47,814,599 | 2017-12-14T13:32:00.000 | 1 | 1 | 1 | 0 | python,mod-wsgi,pyramid | 47,821,932 | 1 | true | 1 | 0 | Presuming you are using recommended separate mod_wsgi daemon process group for each application instance, set the python-path option of the WSGIDaemonProcess directive for each to include the directory where your instance specific settings module is. A normal import should then work. | 1 | 0 | 0 | I am using mod_wsgi with pyramid and have different wsgi files per environment/server like the pyramid-test.wsgi and pyramid-prod.wsgi
These files contain code to set environment variables that differ per environment. Example:
os.environ['SQLALCHEMY_URL'] = 'TODO'
I am trying to move this code to a file called settings.py that will be called in the .wsgi file. These settings files will be held next to the .wsgi file, or preferably in a secure subdirectory, such that others can't read the settings (like the DB password) but can still deploy a new version and overwrite the .wsgi file so that the app is automatically reloaded by Apache.
How can I call the python code in the settings.py file from the .wsgi file?
When I try to do that, it can't find it, as it's not part of the app module. | How to call a function in another python file in my wsgi file? | 1.2 | 0 | 0 | 443 |
47,814,955 | 2017-12-14T13:50:00.000 | -1 | 0 | 0 | 0 | django,python-3.x,google-chrome | 47,827,088 | 1 | false | 1 | 0 | if you are using LoginRequieredMiddleware, browser shows a blank page | 1 | 0 | 0 | I had a Django project running perfectly using Python 2.7. Now I switched to Python 3 and whenever I click the link shown when I start the Django development server, google chrome opens a new, empty window where previously it opened a new tab in the existing window (which also contained the web application I was trying to run).
I'm not sure this has anything to do with switching Python versions because I remember seeing this issue before when I wasn't doing anything with Python.
Does anybody have a clue? It's probably something very simple.
Cheers | Google Chrome opens empty new window when clicking Django dev server link | -0.197375 | 0 | 0 | 61 |
47,816,469 | 2017-12-14T15:05:00.000 | 0 | 0 | 0 | 0 | python,django,paypal | 47,881,662 | 2 | false | 1 | 0 | PROBLEM SOLVED, needed to add SESSION_COOKIE_NAME with my domain name in settings.py file | 1 | 2 | 0 | I'm kind of new to django, I'm having an issue to keep the logged in user still logged in after he made a payment via PayPal.
So, the user purchases something on my platform via PayPal payment; he is redirected to PayPal (currently to the sandbox PayPal domain), PayPal executes the payment and redirects the user back to my platform using the redirect_url I generate when sending the payment request JSON to the PayPal API.
After the user is redirected back to my platform he is not logged in anymore.
For example, in another scenario, let's say the user logs in and closes the browser; when he reopens the platform he is still logged in.
What am I missing here? | Django keep logged in lost when going outside to paypal payment | 0 | 0 | 0 | 66 |
47,819,441 | 2017-12-14T17:56:00.000 | 0 | 0 | 0 | 0 | python-2.7,common-table-expression,impala,impyla | 47,851,949 | 1 | true | 0 | 0 | Based on the tests I have conducted it does not appear to be possible to use common table expressions with Python 2.7 using impala.dbapi. This is because the CTEs do not stay in memory with subsequent cursor.execute commands, and running two SQL commands in one cursor.execute instance returns an error. | 1 | 0 | 0 | It appears when using Python's impala.dbapi connect, you can only run one command per execute. I am using Python 2.7.
I would like to create two common table expressions then join them, but I am unable to get this to work.
If I run the SQL as it would work in HUE using Impala it fails because only one command can be run per execute.
If I create the common table expressions in two separate executes in python and try to join the two CTEs in a third execute I get error unable to resolve "cte..." it appears the CTE does not stay in memory after the first execute is complete. The work around has been to create temporary tables in Impala instead of using CTEs. Eventually I will use Spark data frames and join those, but a permission issue is preventing Spark API from reading from Impala tables for the near future. | Is it possible to use common table expressions with impala using Python? | 1.2 | 0 | 0 | 816 |
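Supplementing the answer above: a WITH clause is scoped to the single statement that contains it, so one alternative to temporary tables is to combine both CTEs and the join into a single cursor.execute() call (one statement rather than several commands). A rough sketch; the host, port, and table names are invented, not from the original post:

```python
# Both CTEs and the join form ONE SQL statement, so a single execute() sees them all.
from impala.dbapi import connect

sql = """
WITH a AS (SELECT id, val FROM db.table_one WHERE val > 10),
     b AS (SELECT id, val FROM db.table_two WHERE val < 100)
SELECT a.id, a.val, b.val
FROM a JOIN b ON a.id = b.id
"""

conn = connect(host='impala-host.example.com', port=21050)
cur = conn.cursor()
cur.execute(sql)          # single statement, so the CTEs are visible to the join
for row in cur.fetchall():
    print(row)
```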
47,820,287 | 2017-12-14T18:54:00.000 | 0 | 1 | 0 | 0 | python,exploit,metasploit | 47,822,711 | 4 | false | 1 | 0 | Because metasploit is purely written in Ruby. | 2 | 1 | 0 | I've added an exploit from www.exploit-db.com to /.msf4/modules/exploit/windows/remote/41987.py following the naming convention. I updated the database with the command updatedb and rebooted.
Metasploit does not detect the newly added exploit. However, if i add 41891.rb, it detects it no problem. Why does Metasploit not see the python files? | Metasploit does't detect added exploit from exploit-db | 0 | 0 | 0 | 2,281 |
47,820,287 | 2017-12-14T18:54:00.000 | 0 | 1 | 0 | 0 | python,exploit,metasploit | 47,952,094 | 4 | false | 1 | 0 | DO as follow to add python extension (if you havent done yet):
- meterpreter > use python
- try python_import to import your python code module | 2 | 1 | 0 | I've added an exploit from www.exploit-db.com to /.msf4/modules/exploit/windows/remote/41987.py following the naming convention. I updated the database with the command updatedb and rebooted.
Metasploit does not detect the newly added exploit. However, if i add 41891.rb, it detects it no problem. Why does Metasploit not see the python files? | Metasploit does't detect added exploit from exploit-db | 0 | 0 | 0 | 2,281 |
47,821,346 | 2017-12-14T20:14:00.000 | 1 | 0 | 0 | 0 | python,scipy,pyomo | 47,821,531 | 1 | true | 0 | 0 | At the moment (Dec 2017), there is no built-in support for passing a Pyomo model to scipy.optimize. That said, it would not be a very difficult task to write a reasonably general purpose object that could generate the necessary (value, Jacobian, Hessian) evaluation functions to pass to scipy.optimize.minimize(). | 1 | 1 | 1 | Can I integrate scipy.optimize.minimize solver with method=SLSQP inside pyomo? Modeling in pyomo is much faster than in scipy but pyomo documentation does not seem to say explicitly if this is feasible. | Call scipy.optimize inside pyomo | 1.2 | 0 | 0 | 588 |
47,827,106 | 2017-12-15T06:41:00.000 | 5 | 1 | 0 | 1 | python,aptana,aptana3,pipenv | 53,565,839 | 1 | false | 0 | 0 | Pipenv and python folders
Get the location where pipenv is storing the virtual environment with:
pipenv --venv
It will return a folder: [environment_folder]
Also, get at hand the location of the original python executable, the one used by pipenv: [python_folder]
Eclipse configuration
Go to the existing project where you want to use pipenv or create a new python project.
Go to Eclipse menu: Project > Properties
Choose: PyDev - Interpreter
Click: "Click here to configure an interpreter not listed"
Choose "New" from python interpreter
Use the environment_folder output of pipenv --venv
Set Interpreter Executable to: [environment_folder]\Scripts\python.exe
When prompted, select folders to add to the system PYTHONPATH:
[python_folder]\Lib
[python_folder]\DLLs
[environment_folder]\
[environment_folder]\lib
[environment_folder]\Scripts
[environment_folder]\lib\site-packages
Add those folders.
Accept and apply.
And everything should be ready. | 1 | 5 | 0 | I have a python project working fine in Aptana.
I then started to use pipenv for the project's environment management and now I can't get Aptana to use that environment.
I also set up a new project then added it to Aptana and Aptana uses a 3.7 version of python instead of the pipenv python 2.7.
Any suggestions on where to start looking? | How do I configure Aptana IDE (Eclipse) to work with pipenv? | 0.761594 | 0 | 0 | 1,372 |
47,827,130 | 2017-12-15T06:44:00.000 | 0 | 0 | 0 | 0 | python-3.x,nlp,gensim,text-analysis | 47,830,720 | 1 | false | 0 | 0 | I have worked on similar lines. This approach can work till 300 such documents. But, taking it to higher scale you need to replicate the approach using spark.
Here it goes:
1) Prepare a TF-IDF matrix: represent the documents as term vectors. Why not LDA? Because LDA requires you to supply the number of themes up front, which you don't know yet. You can use other ways of representing documents if you want to be more sophisticated (and capture more semantics): try word2vec, GloVe, Google News vectors, etc.
2) Prepare a latent semantic space (LSA) from the TF-IDF matrix above. LSA is built with an SVD approach (you can use the Kaiser criterion to choose the number of dimensions).
Why do we do 2)?
a) TF-IDF is very sparse, and step 3 (t-SNE) is computationally expensive.
b) The LSA space can also be used to build a semantic search engine.
You can bypass 2) when your TF-IDF matrix is very small, but given your situation I don't think that would be the case, and only if you also don't have other needs such as semantic search over these documents.
3) Use t-SNE (t-distributed stochastic neighbor embedding) to represent the documents in 3 dimensions. Prepare a spherical plot from the Euclidean coordinates.
4) Apply K-means iteratively to find the optimal number of clusters.
Once decided. Prepare word clouds for each categories. Have your themes. | 1 | 0 | 1 | I am trying to do textual analysis on a bunch (about 140 ) of textual documents. Each document, after preprocessing and removing unnecessary words and stopwords, has about 7000 sentences (as determined by nlkt's sentence tokenizer) and each sentence has about 17 words on average. My job is to find hidden themes in those documents.
I have thought about doing topic modeling. However, I cannot decide whether the data I have is enough to obtain meaningful results via LDA, or whether there is anything else that I can do.
Also, how do I divide the texts into different documents? Are 140 documents (each with roughly 7000 x 17 words) enough, or should I consider each sentence as a document? But then each document would have only 17 words on average, much like tweets.
Any suggestions would be helpful.
Thanks in advance. | Suggestion on LDA | 0 | 0 | 0 | 66 |
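A rough scikit-learn sketch of the TF-IDF, LSA, t-SNE, K-means pipeline described in the answer above. The dimension and cluster counts are illustrative assumptions, and load_documents() is a hypothetical helper returning the ~140 preprocessed document strings:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans

docs = load_documents()  # hypothetical helper: list of ~140 preprocessed documents

tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)   # step 1: TF-IDF term vectors
n_comp = min(100, tfidf.shape[1] - 1)                                # keep SVD rank feasible
lsa = TruncatedSVD(n_components=n_comp).fit_transform(tfidf)         # step 2: LSA via SVD
coords = TSNE(n_components=3).fit_transform(lsa)                     # step 3: t-SNE to 3 dimensions

for k in range(2, 10):                                               # step 4: try several cluster counts
    km = KMeans(n_clusters=k, n_init=10).fit(coords)
    print(k, km.inertia_)
```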
47,832,693 | 2017-12-15T12:44:00.000 | 0 | 0 | 0 | 0 | python,selenium,selenium-webdriver | 47,832,860 | 3 | false | 0 | 0 | Reliably determining whether a page has been fully loaded can be challenging. There is no way to know if all the elements have been loaded just like that. You must define some "anchor" points in each page so that as far as you aware, if these elements has been loaded, it is fair to assume the whole page has been loaded. Usually this involves a combination of tests. So for example, you can define that if the below combination of tests passes, the page is considered loaded:
JavaScript document.readyState === 'complete'
"Anchor" elements
All kinds of "spinners", if exist, disappeared. | 1 | 1 | 0 | I am asking for generally checking if all elements of a page has been loaded. Is there a way to check that basically?
In the concrete example there is a page, I click on some button, and then I have to wait until I click on the 'next' button. However, this 'Next' button is available, selectable and clickable ALL THE TIME. So how to check with selenium that 'state'(?) of a page?
As a reminder: This is a question about selenium and not the quality of the webpage in question.... | Is there a way with python-selenium to wait until all elements of a page has loaded? | 0 | 0 | 1 | 5,281 |
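A sketch of how the three checks above can be combined with explicit waits in Python Selenium; the anchor and spinner locators are placeholders for whatever the page actually uses:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.com/page")
wait = WebDriverWait(driver, 30)

# 1) document.readyState === 'complete'
wait.until(lambda d: d.execute_script("return document.readyState") == "complete")
# 2) a known "anchor" element of this page is present
wait.until(EC.presence_of_element_located((By.ID, "main-content")))
# 3) any loading spinner has disappeared
wait.until(EC.invisibility_of_element_located((By.CSS_SELECTOR, ".spinner")))
```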
47,833,775 | 2017-12-15T13:52:00.000 | 0 | 0 | 1 | 0 | python,matplotlib,python-dateutil | 47,834,107 | 4 | false | 0 | 0 | do you have multiple python version on your computer ?
Python 2.7 and 3.5, for example.
Maybe there is some confusion here,
or even multiple versions of Python 3.5?
i often have this problem when I forget which python I am using. | 2 | 12 | 0 | I installed Python-Dateutil package, but when i import it in my script , it's throwing error:
import dateutil
ImportError: No module named 'dateutil'
when i checked the lib folder, dateutil.eggs files are there , because of this i can not run matplotlib module. Please provide a solution. | Import Error: "No module named 'dateutil' " | 0 | 0 | 0 | 15,030 |
47,833,775 | 2017-12-15T13:52:00.000 | 2 | 0 | 1 | 0 | python,matplotlib,python-dateutil | 55,591,573 | 4 | false | 0 | 0 | if you are using the anaconda
open anaconda prompt (go to start and search anaconda prompt)
type the following command on prompt
conda install -c anaconda dateutil
if you are using python 3.x version
pip3 install python-dateutil
if you are using python 2 version
pip2 install python-dateutil | 2 | 12 | 0 | I installed Python-Dateutil package, but when i import it in my script , it's throwing error:
import dateutil
ImportError: No module named 'dateutil'
when i checked the lib folder, dateutil.eggs files are there , because of this i can not run matplotlib module. Please provide a solution. | Import Error: "No module named 'dateutil' " | 0.099668 | 0 | 0 | 15,030 |
47,834,750 | 2017-12-15T14:50:00.000 | 0 | 0 | 1 | 0 | python,tox | 47,835,517 | 2 | false | 0 | 0 | My suggestion is to not list the dependency in deps but instead install it in commands using a command or a shell script of your own. | 2 | 0 | 0 | One of the required Python package needs to be compiled with having a specific environment variable set. I looked at tox documentation either PASSENV or setenv only impacts the execution of tests commands. What can I do? Anything I misunderstand? Thanks. | Set environment variable in tox that deps require | 0 | 0 | 0 | 488 |
47,834,750 | 2017-12-15T14:50:00.000 | 0 | 0 | 1 | 0 | python,tox | 48,002,758 | 2 | true | 0 | 0 | I have found a solution without the need to handle environment variables, that is using --install-option of pip, for example, pip install --install-option="--with-openssl" pycurl, which passes --with-openssl to underlying setup.py install. | 2 | 0 | 0 | One of the required Python package needs to be compiled with having a specific environment variable set. I looked at tox documentation either PASSENV or setenv only impacts the execution of tests commands. What can I do? Anything I misunderstand? Thanks. | Set environment variable in tox that deps require | 1.2 | 0 | 0 | 488 |
47,835,322 | 2017-12-15T15:26:00.000 | 0 | 0 | 0 | 1 | python,exe,cx-freeze | 47,848,710 | 1 | true | 0 | 0 | cx_Freeze calls shutil.copystat() on all files that it copies, so the source time is used on the target file as well. | 1 | 0 | 0 | I built a simple python script as an exe file - first build didn't display expected results so I rebuilt it with changes, exe works fine now however the exe file is still displaying is original build date. | Date modified of an executable displays the first build date when building with cxfreeze. However the executable runs the most recent version | 1.2 | 0 | 0 | 73 |
47,837,082 | 2017-12-15T17:22:00.000 | 1 | 0 | 0 | 0 | python,aws-lambda,webhooks | 47,854,491 | 1 | false | 1 | 0 | I would use SQS / SNS depending on exact architecture design. Maybe Apache Kafka, if you need to store events longer...
So an incoming event would be placed on SQS, and then another Lambda would be used to do the processing. The problem is that processing time is limited to 5 minutes. Also, delivery can't be parallel.
Another option is to have one input queue and one output queue per receiver. The Lambda function that processes the input just spreads it across the other queues, and then other Lambdas are responsible for delivery. That approach has its own obvious problems.
Finally, your Lambda, while processing input, can generate messages on an outgoing queue, recording which message should be delivered to which users. Then you can have one Lambda triggered for each message on the outgoing queue, and there you can have a small loop delivering messages. Note that in case of problems you need to send back whatever was not delivered.
Good point is that SQS has something like dead letter queue, so that problematic messages would not stay there forever. | 1 | 0 | 0 | I'm implementing a webhooks provider and trying to solve some problems while minimizing the added complexity to my system:
Not blocking processing of the API call that triggered the event while calling all the hooks so the response to that call will not be delayed
Not making a flood of calls to my listeners if some client is quickly calling my APIs that trigger hooks (i.e. wait a couple seconds and throw away any earlier calls if duplicates come in later)
My environment is Python (Chalice) and AWS Lambda. Ideal solution will be easy to integrate and cheap. | Design for implementing web hooks (including not blocking and ignoring superseding repeat events) | 0.197375 | 0 | 1 | 55 |
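A minimal sketch of the "enqueue first, deliver later" part of the suggestion above using boto3 and SQS; the queue URL is a placeholder and not part of the answer:

```python
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/webhook-events"  # placeholder

def api_handler(event, context):
    # Don't call listeners inline: just enqueue the event and return immediately,
    # so the triggering API call is not blocked by slow hook deliveries.
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(event))
    return {"statusCode": 202, "body": "queued"}
```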
47,837,591 | 2017-12-15T17:59:00.000 | 1 | 0 | 1 | 1 | python,linux,gdb,pyenv | 58,419,198 | 1 | false | 0 | 0 | Probably your python version in the virtual env is incompatible with the python dbg package, which is compatible with the python at /usr/bin/. I've solved this problem by copying the python at /usr/bin to the virtual env and re-running. Even if both pythons are the same version the build date seems to make a big difference for gdb debugging. | 1 | 3 | 0 | I ty to get a stacktrace from the running python process, using gdb. The python is running in an virtualenv managed by pyenv, on Ubuntu 16.4.
I tried this:
sudo gdb ~/.pyenv/versions/bla/bin/python -p <PID>
Then I do not have the extensions available, so I do
symbol-file /usr/bin/python3.5-dbg
Then when I run py-list, I get the following error:
Unable to locate gdb frame for python bytecode interpreter.
Also tried:
sudo gdb /usr/bin/python3.5-dbg -p <PID> and same error.
Any other way? Or an easier approach? | Unable to locate gdb frame for python bytecode interpreter | 0.197375 | 0 | 0 | 3,323 |
47,838,719 | 2017-12-15T19:25:00.000 | 7 | 0 | 0 | 0 | python-3.x,gensim | 51,581,196 | 2 | false | 0 | 0 | tl;dr Use a dimensionality reduction technique like PCA or t-SNE.
This is not a trivial operation that you are attempting. In order to understand why, you must understand what these word vectors are.
Word embeddings are vectors that attempt to encode information about what a word means, how it can be used, and more. What makes them interesting is that they manage to store all of this information as a collection of floating point numbers, which is nice for interacting with models that process words. Rather than pass a word to a model by itself, without any indication of what it means, how to use it, etc, we can pass the model a word vector with the intention of providing extra information about how natural language works.
As I hope I have made clear, word embeddings are pretty neat. Constructing them is an area of active research, though there are a couple of ways to do it that produce interesting results. It's not incredibly important to this question to understand all of the different ways, though I suggest you check them out. Instead, what you really need to know is that each of the values in the 300 dimensional vector associated with a word were "optimized" in some sense to capture a different aspect of the meaning and use of that word. Put another way, each of the 300 values corresponds to some abstract feature of the word. Removing any combination of these values at random will yield a vector that may be lacking significant information about the word, and may no longer serve as a good representation of that word.
So, picking the top 100 values of the vector is no good. We need a more principled way to reduce the dimensionality. What you really want is to sample a subset of these values such that as much information as possible about the word is retained in the resulting vector. This is where a dimensionality reduction technique like Principal Component Analysis (PCA) or t-distributed Stochastic Neighbor Embedding (t-SNE) comes into play. I won't describe in detail how these methods work, but essentially they aim to capture the essence of a collection of information while reducing the size of the vector describing it. As an example, PCA does this by constructing a new vector from the old one, where the entries in the new vector correspond to combinations of the main "components" of the old vector, i.e. those components which account for most of the variation in the old data.
To summarize, you should run a dimensionality reduction algorithm like PCA or t-SNE on your word vectors. There are a number of python libraries that implement both (e.g scipy has a PCA algorithm). Be warned, however, that the dimensionality of these word vectors is already relatively low. To see how this is true, consider the task of naively representing a word via its one-hot encoding (a one at one spot and zeros everywhere else). If your vocabulary size is as big as the google word2vec model, then each word is suddenly associated with a vector containing hundreds of thousands of entries! As you can see, the dimensionality has already been reduced significantly to 300, and any reduction that makes the vectors significantly smaller is likely to lose a good deal of information. | 1 | 2 | 1 | I loaded google's news vector -300 dataset. Each word is represented with a 300 point vector. I want to use this in my neural network for classification. But 300 for one word seems to be too big. How can i reduce the vector from 300 to say 100 without compromising on the quality. | reducing word2vec dimension from Google News Vector Dataset | 1 | 0 | 0 | 2,005 |
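A short sketch of the PCA route for this case, assuming gensim is used to load the pretrained vectors and scikit-learn for the reduction; the file path is a placeholder:

```python
from gensim.models import KeyedVectors
from sklearn.decomposition import PCA

kv = KeyedVectors.load_word2vec_format("GoogleNews-vectors-negative300.bin", binary=True)
words = list(kv.index_to_key)          # vocabulary (in older gensim versions: kv.index2word)
vectors_300 = kv[words]                # shape (vocab_size, 300)

vectors_100 = PCA(n_components=100).fit_transform(vectors_300)
print(vectors_100.shape)               # (vocab_size, 100)
```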
47,838,811 | 2017-12-15T19:30:00.000 | 0 | 0 | 0 | 0 | python,arrays,numpy,matplotlib,multidimensional-array | 47,839,249 | 2 | false | 0 | 0 | use x and y as coordinates and put a dot sized for the energy z.
a table would work as well, since you haven't stated that x and y geometries have any numeric purpose other than as lablels. | 1 | 0 | 1 | I have a set of about 2000 files that look like: 10_20.txt, 10_21.txt, 10_21.txt, ... ,10_50.txt, ... , 11.20.txt, ... , 11.50.txt , ... , 20_50.txt
The first value in the file name, which we'll call x, goes from 10-20 in steps of 1, and the second value, which we'll call y, goes from 20-50 in steps of 1.
Within these files, there is a load of values and another value I want to extract, which we'll call z.
I have written a program to cycle through the files and extract z from each file and add it to a list.
My question now is, if I have 2 numpy arrays that look like:
x = np.arange(10,20,1)
y = np.arange(20,50,1)
and a z list that has ~2000 floats in it, what is the best way to plot how z depends on x and y? Is there a standard way to do this?
I had been thinking it would be best to extract x, y and z from a file, and add them to a multidimensional array. If this is the case could anyone point me in the right direction of how to extract the x and y values out of the file name. | Plotting a 3d surface in Python from known values | 0 | 0 | 0 | 470 |
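A sketch covering the file-name parsing and a basic 3D scatter with matplotlib; read_z_from_file is a hypothetical stand-in for the asker's existing extraction code:

```python
import glob
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401  (enables the 3D projection)

xs, ys, zs = [], [], []
for path in glob.glob("*_*.txt"):
    name = path.rsplit("/", 1)[-1].rsplit(".", 1)[0]   # e.g. "10_20"
    x_str, y_str = name.split("_")
    xs.append(int(x_str))
    ys.append(int(y_str))
    zs.append(read_z_from_file(path))  # hypothetical helper: your existing z extraction

fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
ax.scatter(xs, ys, zs)
ax.set_xlabel("x"); ax.set_ylabel("y"); ax.set_zlabel("z")
plt.show()
```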
47,841,384 | 2017-12-16T00:14:00.000 | 1 | 0 | 0 | 0 | python,turtle-graphics,python-turtle | 62,399,262 | 7 | false | 0 | 1 | You can use:
turtle.right(angle)
or:
turtle.left(angle).
Hope this helps! | 1 | 4 | 0 | How can I tell a turtle to face a direction in turtle graphics?
I would like the turtle to turn and face a direction no matter its original position, how can I achieve this? | Turtle direction in turtle graphics? | 0.028564 | 0 | 0 | 9,625 |
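For the "face a direction regardless of the current heading" part of the question, the standard turtle module also provides setheading(), which sets an absolute heading rather than a relative turn; a tiny sketch:

```python
# setheading() points the turtle at an absolute angle (0 = east, 90 = north in
# the default mode); right()/left() turn relative to the current heading.
import turtle

t = turtle.Turtle()
t.right(37)          # some arbitrary prior rotation
t.setheading(90)     # now face "up" no matter what came before
turtle.done()
```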
47,842,966 | 2017-12-16T05:53:00.000 | 1 | 0 | 0 | 0 | python,mysql,types,double,storage | 47,843,118 | 2 | true | 0 | 0 | All python floats have the same precision and take the same amount of storage. If you want to reduce overall storage numpy arrays should do the trick. | 1 | 2 | 0 | So I'm trying to store a LOT of numbers, and I want to optimize storage space.
A lot of the numbers generated have pretty high precision floating points, so:
0.000000213213 or 323224.23125523 - long, high memory floats.
I want to figure out the best way, either in Python with MySQL(MariaDB) - to store the number with smallest data size.
So 2.132e-7 or 3.232e5, just to basically store it with as little footprint as possible, with a decimal range that I can specify - removing the information after n decimals.
I assume storing as a DOUBLE is the way to go, but can I truncate the precision and save on space too?
I'm thinking some number formatting / truncating in Python followed by just normal storage as a DOUBLE would work - but would that actually save any space as opposed to just immediately storing the double with N decimals attached.
Thanks! | Most efficient way to store scientific notation in Python and MySQL | 1.2 | 1 | 0 | 317 |
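A small illustration of the point in the answer above: truncating decimals does not shrink a Python float, but NumPy lets you trade precision for space with a smaller dtype (the sample values are just examples):

```python
import sys
import numpy as np

values = [2.132e-7, 3.232e5, 0.000000213213]
print(sys.getsizeof(values[0]))             # a Python float object is ~24 bytes on CPython

arr64 = np.array(values, dtype=np.float64)  # 8 bytes per number, full double precision
arr32 = np.array(values, dtype=np.float32)  # 4 bytes per number, reduced precision
print(arr64.nbytes, arr32.nbytes)
```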
47,843,529 | 2017-12-16T07:25:00.000 | 1 | 0 | 0 | 0 | python,django | 47,844,457 | 2 | false | 1 | 0 | There won't be any issue with having the different name of parent directory and the project directory (the one with settings.py).
You can try renaming the parent directory and it would still work the same way but the name of project directory should not be touched since the same is linked with project settings which are used to run the project. | 1 | 3 | 0 | I've pulled my django repo from bitbucket onto my digital ocean server. My project folder name in my server is project whereas my initial app from my repo (the one with settings.py is called app. Does this cause any problems? Because I know when you create a django project offline, the first directory is always the same as the parent directory. | Does my Django app have to be the same name as the project directory? | 0.099668 | 0 | 0 | 1,866 |
47,844,612 | 2017-12-16T10:20:00.000 | 1 | 0 | 0 | 0 | python,django | 47,845,087 | 3 | false | 1 | 0 | Yes, you can copy the whole content of settings.py, but first remove the SECRET_KEY and set DEBUG to FALSE. | 1 | 1 | 0 | So I'm moving my project from my offline directory to my remote server on Digital Ocean. Is there anything I need to be concerned about? For example am I safe keeping the same SECRET_KEY that was generated offline? Anything else I need to worry about? | Am I safe to copy my settings.py file which was generated offline to my remote server? | 0.066568 | 0 | 0 | 44 |
47,852,254 | 2017-12-17T04:45:00.000 | 1 | 0 | 1 | 1 | qt,pyqt,debian,python-sip | 61,470,215 | 5 | false | 0 | 0 | I got this after upgrading Ubuntu to 20.04. Only thing that worked was backing up my GNS3 folder with my project files, uninstalling gns3-server and gns3-gui, then re-installing. Everything works now. | 2 | 1 | 0 | Have installed GNS3 on my Linux (Debian Strech) and getting below error message, please help, installed from package, OS updated. qt and sip at their newest version on my machine (installed).
Fail update installation: No module named 'sip'
Can't import Qt modules: Qt and/or PyQt is probably not installed correctly...
Any help/direction to solve the problem will be highly appreciated.
Thanks in-advance. | GNS3 sip/qt error | 0.039979 | 0 | 0 | 3,641 |
47,852,254 | 2017-12-17T04:45:00.000 | 1 | 0 | 1 | 1 | qt,pyqt,debian,python-sip | 48,118,242 | 5 | false | 0 | 0 | i found out that the problem was a source in source.list.
in my case was the firefox quantum | 2 | 1 | 0 | Have installed GNS3 on my Linux (Debian Strech) and getting below error message, please help, installed from package, OS updated. qt and sip at their newest version on my machine (installed).
Fail update installation: No module named 'sip'
Can't import Qt modules: Qt and/or PyQt is probably not installed correctly...
Any help/direction to solve the problem will be highly appreciated.
Thanks in-advance. | GNS3 sip/qt error | 0.039979 | 0 | 0 | 3,641 |
47,858,129 | 2017-12-17T18:21:00.000 | 1 | 0 | 0 | 0 | python,windows,firewall | 47,858,149 | 1 | false | 0 | 0 | The whole idea of a firewall is that it decides who gets through and who doesn't. So in principle, this is not possible, and that's a good thing!
Most firewalls, however, are configured to allow e.g. web traffic (port 80) to pass. So you have to find out what ports your firewall has open, and use these. | 1 | 1 | 0 | I am trying to create a simple chat application using sockets(UDP) and i would like to make it automatically allow itself through firewall, like every other application does. Is there a simple way to do this? | How do i make a python program allow itself through firewall? | 0.197375 | 0 | 1 | 1,029 |
47,858,150 | 2017-12-17T18:23:00.000 | 0 | 0 | 0 | 0 | scipy,python-3.6 | 48,814,986 | 2 | false | 0 | 0 | Looks like I am the only one on earth to have this issue. Fortunately, I got it to work with endless attempts. In case someone in the future gets the same error, you can try this:python -m pip install scipy. I have no idea why pip install scipy doesn't work. | 2 | 3 | 1 | I have python 3.6, Mac OS X El Capitan.
I installed scipy by pip install scipy. But when I import scipy, I get the following error:
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/scipy/init.py in ()
116 del _NumpyVersion
117
--> 118 from scipy._lib._ccallback import LowLevelCallable
119
120 from scipy._lib._testutils import PytestTester
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/scipy/_lib/_ccallback.py in ()
----> 1 from . import _ccallback_c
2
3 import ctypes
4
5 PyCFuncPtr = ctypes.CFUNCTYPE(ctypes.c_void_p).bases[0]
ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/scipy/_lib/_ccallback_c.cpython-36m-darwin.so, 2): no suitable image found. Did find:
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/scipy/_lib/_ccallback_c.cpython-36m-darwin.so: mach-o, but wrong architecture
I don't get this error in Python2. | scipy ImportError: dlopen no suitable image found in Python 3 | 0 | 0 | 0 | 5,874 |
47,858,150 | 2017-12-17T18:23:00.000 | 0 | 0 | 0 | 0 | scipy,python-3.6 | 54,552,819 | 2 | false | 0 | 0 | What I did find on MacOS 10.14.2 is that I had installed Scipy 1.1. After executing python -m pip install scipy I got Scipy 1.2 and get rid of "ImportError: dlopen". | 2 | 3 | 1 | I have python 3.6, Mac OS X El Capitan.
I installed scipy by pip install scipy. But when I import scipy, I get the following error:
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/scipy/init.py in ()
116 del _NumpyVersion
117
--> 118 from scipy._lib._ccallback import LowLevelCallable
119
120 from scipy._lib._testutils import PytestTester
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/scipy/_lib/_ccallback.py in ()
----> 1 from . import _ccallback_c
2
3 import ctypes
4
5 PyCFuncPtr = ctypes.CFUNCTYPE(ctypes.c_void_p).bases[0]
ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/scipy/_lib/_ccallback_c.cpython-36m-darwin.so, 2): no suitable image found. Did find:
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/scipy/_lib/_ccallback_c.cpython-36m-darwin.so: mach-o, but wrong architecture
I don't get this error in Python2. | scipy ImportError: dlopen no suitable image found in Python 3 | 0 | 0 | 0 | 5,874 |
47,858,956 | 2017-12-17T20:04:00.000 | 0 | 0 | 0 | 0 | python,tkinter | 47,860,104 | 2 | false | 0 | 1 | There is no way to get this information from tkinter. | 1 | 1 | 0 | I'm creating a GUI that uses the Tkinter Canvas widget to allow a user to draw a line over an image. I would like to convert that line object to a list of points that make up that line. I'm able to get the coordinates and bounding box and other things described under the documentation, but I wasn't able to find the answer to this.
As an example, if I had a line that started at point (0,0) and ended at (3,3), I would like a list that includes points [(0,0), (1,1), (2,2), (3,3)]
Any help is greatly appreciated. | How to convert Tkinter line object to list of points | 0 | 0 | 0 | 668 |
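Since the canvas itself won't enumerate the points, one workaround is to read the line's endpoints with coords() and interpolate the intermediate integer points yourself; a sketch for straight two-point lines:

```python
import tkinter as tk

def line_points(canvas, item):
    # coords() returns the endpoints of the line item as floats.
    x0, y0, x1, y1 = map(int, canvas.coords(item))
    steps = max(abs(x1 - x0), abs(y1 - y0), 1)
    return [(round(x0 + (x1 - x0) * i / steps),
             round(y0 + (y1 - y0) * i / steps)) for i in range(steps + 1)]

root = tk.Tk()
canvas = tk.Canvas(root)
canvas.pack()
line = canvas.create_line(0, 0, 3, 3)
print(line_points(canvas, line))   # [(0, 0), (1, 1), (2, 2), (3, 3)]
root.destroy()
```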
47,860,271 | 2017-12-17T22:57:00.000 | 0 | 0 | 0 | 0 | python,django | 47,860,862 | 1 | true | 1 | 0 | Make a table. Remember, Django makes it really easy. Just create another class for the settings table and populate it. You should be able to access it with just about any other language/system - e.g., PHP. The data gets backed up with everything else in the database, if you move to a different server the data moves along with everything else, etc. Yes, the overhead is technically a little more than a plain text file, but if it is really that small then that overhead is insignificant. If the list of settings grows over time then having it in a searchable database will make updates & retrieval much easier that a text file. | 1 | 0 | 0 | I am currently developing an app for django that needs to have some custom settings that can be changed at runtime by admin users, and those settings have to be accessible to another separate system that uses the same database.
On one hand, we could store those settings in a JSON file and have it accessible to both systems, as only the Django system will actually make any changes to the settings. On the other hand, we could just store those settings as a lone row in a 'settings' table in the database.
The first choice seems quite cumbersome to deal with, and might result in some problems of multiple accesses, while the other would need a whole table in the database for a single row.
Is any of these ideas any good, or is there something I'm overlooking? | Django custom settings for app | 1.2 | 0 | 0 | 348 |
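A minimal sketch of the "settings table" suggestion above: a key/value model that the other system can read from the shared database. The field sizes and the usage lines are assumptions:

```python
from django.db import models

class AppSetting(models.Model):
    key = models.CharField(max_length=100, unique=True)
    value = models.TextField()

    def __str__(self):
        return f"{self.key}={self.value}"

# usage, e.g. in a view or management command:
# AppSetting.objects.update_or_create(key="items_per_page", defaults={"value": "25"})
# per_page = int(AppSetting.objects.get(key="items_per_page").value)
```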
47,860,803 | 2017-12-18T00:34:00.000 | 2 | 0 | 1 | 0 | python,tensorflow,tensorflow-gpu | 49,210,718 | 3 | false | 0 | 0 | I had a similar issue, but with the version 9.1 which I had on my machine.
The one which was missing 'cudart64_90.dll', whereas there was 'cudart64_91.dll'. So I did a 'downgrade' from CUDA 9.1 to 9.0 and it solved my problem. Hope it helps. | 3 | 5 | 1 | I am trying to import tensorflow (with GPU) and keep getting the following error:
ImportError: Could not find 'cudart64_80.dll'. TensorFlow requires that this DLL be installed in a directory that is named in your %PATH% environment variable
Setup:
NVIDIA GTX 1080
CUDA Development Tool v8.0
cuDNN 6.0
tensorflow-gpu 1.4
Environment variables:
CUDA_HOME: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0
CUDA_PATH: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0
CUDA_PATH_V8.0: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0
I have also added the following to the %PATH% variable:
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\bin
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\libnvvp
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\extras\CUPTI\libx64
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\lib\x64
What am I missing? Why it can't find cudart64_80.dll despite its location is explicitly specified in %PATH%?
Any help would be much appreciated. | Error loading tensorflow - Could not find "cudart64_80.dll" | 0.132549 | 0 | 0 | 12,211 |
47,860,803 | 2017-12-18T00:34:00.000 | 1 | 0 | 1 | 0 | python,tensorflow,tensorflow-gpu | 47,864,449 | 3 | true | 0 | 0 | In certain cases you may need to restart the computer to propagate all the changes.
If you are using intellij or pycharm, make sure to restart that as it may not take the correct path environment variables otherwise. | 3 | 5 | 1 | I am trying to import tensorflow (with GPU) and keep getting the following error:
ImportError: Could not find 'cudart64_80.dll'. TensorFlow requires that this DLL be installed in a directory that is named in your %PATH% environment variable
Setup:
NVIDIA GTX 1080
CUDA Development Tool v8.0
cuDNN 6.0
tensorflow-gpu 1.4
Environment variables:
CUDA_HOME: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0
CUDA_PATH: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0
CUDA_PATH_V8.0: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0
I have also added the following to the %PATH% variable:
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\bin
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\libnvvp
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\extras\CUPTI\libx64
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\lib\x64
What am I missing? Why it can't find cudart64_80.dll despite its location is explicitly specified in %PATH%?
Any help would be much appreciated. | Error loading tensorflow - Could not find "cudart64_80.dll" | 1.2 | 0 | 0 | 12,211 |
47,860,803 | 2017-12-18T00:34:00.000 | 0 | 0 | 1 | 0 | python,tensorflow,tensorflow-gpu | 57,690,869 | 3 | false | 0 | 0 | I just changed cudart64_90 to cudart64_80. It worked | 3 | 5 | 1 | I am trying to import tensorflow (with GPU) and keep getting the following error:
ImportError: Could not find 'cudart64_80.dll'. TensorFlow requires that this DLL be installed in a directory that is named in your %PATH% environment variable
Setup:
NVIDIA GTX 1080
CUDA Development Tool v8.0
cuDNN 6.0
tensorflow-gpu 1.4
Environment variables:
CUDA_HOME: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0
CUDA_PATH: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0
CUDA_PATH_V8.0: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0
I have also added the following to the %PATH% variable:
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\bin
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\libnvvp
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\extras\CUPTI\libx64
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\lib\x64
What am I missing? Why it can't find cudart64_80.dll despite its location is explicitly specified in %PATH%?
Any help would be much appreciated. | Error loading tensorflow - Could not find "cudart64_80.dll" | 0 | 0 | 0 | 12,211 |
47,861,097 | 2017-12-18T01:34:00.000 | 2 | 1 | 0 | 0 | python,api,twitter | 47,863,424 | 1 | false | 0 | 0 | The User Credentials is what determines permissions. With OAuth a user gives your app permission to act on their behalf. | 1 | 1 | 0 | I'm using Python to get texts of tweets from twitter using tweepy and is it possible to get ID and password from user, pass it to twitter api, and access to the tweets and get json data.
I read "User timelines belonging to protected users may only be requested when the authenticated user either “owns” the timeline or is an approved follower of the owner." but not sure whether it means the programmer must be accessible to the protected account or the api can access to protected account by receiving ID and password. | Is it able to view protected accounts using twitter api? | 0.379949 | 0 | 1 | 1,041 |
47,861,813 | 2017-12-18T03:34:00.000 | -1 | 0 | 0 | 0 | python-3.x,selenium,session,selenium-webdriver,webdriver | 47,862,340 | 3 | false | 0 | 0 | Without getting into why do you think that leaving an open browser windows will solve the problem of being slow, you don't really need a handle to do that. Just keep running the tests without closing the session or, in other words, without calling driver.quit() as you have mentioned yourself. The question here though framework that comes with its own runner? Like Cucumber?
In any case, you must have some "setup" and "cleanup" code. So what you need to do is to ensure during the "cleanup" phase that the browser is back to its initial state. That means:
Blank page is displayed
Cookies are erased for the session | 1 | 17 | 0 | For some unknown reasons ,my browser open test pages of my remote server very slowly. So I am thinking if I can reconnect to the browser after quitting the script but don't execute webdriver.quit() this will leave the browser opened. It is probably kind of HOOK or webdriver handle.
I have looked up the selenium API doc but didn't find any function.
I'm using Chrome 62,x64,windows 7,selenium 3.8.0.
I'll be very appreciated whether the question can be solved or not. | How can I reconnect to the browser opened by webdriver with selenium? | -0.066568 | 0 | 1 | 23,577 |
47,863,416 | 2017-12-18T06:52:00.000 | 8 | 0 | 1 | 0 | python,python-3.x,anaconda | 47,868,393 | 1 | true | 0 | 0 | Download the full installer: Provided that you uninstall your existing Anaconda, this method will be least likely to cause upgrade problems. It will also probably be slower. Note that I think you should uninstall the old Anaconda so that you don't end up with two conda[.exe] files, two Anaconda Prompt shortcuts, and so forth. You may end up trying to install a package with the wrong conda and be very confused about what's happening.
conda update --all: This will update all of your packages in the environment to their latest version, regardless of their version in the Anaconda installer. This is not recommended because you will end up with package versions that are different from the ones in the Anaconda installer and you may end up with an error message about packages that are incompatible.
conda update anaconda: This will update the "metapackage" called anaconda to the latest version. This package has dependencies on specific versions of all of the packages and Anaconda (the company) give some assurance that these will all work together. So, updating the anaconda package will update all your packages to the version used in the latest version of the Anaconda installer.
My suggestion (based on some experience, I am not an employee of Anaconda) would be to try #3 and if it fails, try #1. | 1 | 6 | 0 | I have Anaconda 4.4.0 (Windows, Python 3.6., 64 bit).
I would like to upgrade to latest Anaconda 5.0.1
Few options:
Download the full installer and run it
From existing installation (of 4.4.0) run "conda update --all"
From existing installation run "conda update anaconda"
What is the tradeoff among these options? What is the recommended one? | How to upgrade to the latest Anaconda 5.0.1 | 1.2 | 0 | 0 | 12,491 |
47,863,640 | 2017-12-18T07:11:00.000 | 0 | 0 | 1 | 0 | python,pyinstaller | 61,203,249 | 1 | false | 0 | 0 | Do not mention the pickle file to pyinstaller and put the pickle file in the same folder than the bundled onefile. For me this works even if you have two files in the folder instead of one. | 1 | 1 | 0 | I'm making onefile executables with pyinstaller on windows 10. I'm including data files (pickle files) by editing the .spec file...
How can I store changes made to these files during run time? My understanding is that the data files are copied to a temp directory during execution. I can read from the files using the path I get from sys._MEIPASS but the changes I write are lost after restarting the program.
Is there a way to write to pickle files stored inside the exe? | how to change bundled data file pyinstaller on execute | 0 | 0 | 0 | 590 |
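A sketch of the "keep the pickle next to the exe" idea from the answer above: at run time, pick a writable path based on whether the script is frozen by PyInstaller (the file name is just an example):

```python
import os
import sys
import pickle

if getattr(sys, "frozen", False):
    base = os.path.dirname(sys.executable)           # folder next to the bundled .exe (writable)
else:
    base = os.path.dirname(os.path.abspath(__file__))

data_path = os.path.join(base, "state.pickle")        # example file name

def save_state(obj):
    with open(data_path, "wb") as f:
        pickle.dump(obj, f)
```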
47,865,082 | 2017-12-18T09:06:00.000 | 1 | 1 | 0 | 0 | python,nltk,sentiment-analysis,senti-wordnet,vader | 49,966,573 | 2 | false | 0 | 0 | What I did for my research is take a small random sample of those tweets and manually label them as either positive or negative. You can then calculate the normalized scores using VADER or SentiWordNet and compute the confusion matrix for each which will give you your F-score etc.
Although this may not be a particularly good test, as it depends on the sample of tweets you use. For example you may find that SentiWordNet classes more things as negative than VADER and thus appears to have the higher accuracy if your random sample are mostly negative. | 2 | 1 | 0 | I'm performing different sentiment analysis techniques for a set of Twitter data I have acquired. They are lexicon based (Vader Sentiment and SentiWordNet) and as such require no pre-labeled data.
I was wondering if there was a method (like F-Score, ROC/AUC) to calculate the accuracy of the classifier. Most of the methods I know require a target to compare the result to. | Accuracy of lexicon-based sentiment analysis | 0.099668 | 0 | 0 | 1,755 |
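A sketch of the evaluation described in the answer above, scoring a small hand-labelled sample with NLTK's VADER and comparing against the manual labels using scikit-learn. The sample tweets are made up, and the vader_lexicon resource must be downloaded once:

```python
# Requires: nltk.download("vader_lexicon") once before first use.
from nltk.sentiment.vader import SentimentIntensityAnalyzer
from sklearn.metrics import f1_score, confusion_matrix

labelled = [("I love this phone", "pos"), ("worst service ever", "neg")]  # your hand-labelled tweets

sia = SentimentIntensityAnalyzer()
predicted = ["pos" if sia.polarity_scores(text)["compound"] >= 0 else "neg"
             for text, _ in labelled]
actual = [label for _, label in labelled]

print(confusion_matrix(actual, predicted, labels=["pos", "neg"]))
print(f1_score(actual, predicted, pos_label="pos"))
```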
47,865,082 | 2017-12-18T09:06:00.000 | 0 | 1 | 0 | 0 | python,nltk,sentiment-analysis,senti-wordnet,vader | 47,885,084 | 2 | false | 0 | 0 | The short answer is no, I don't think so. (So, I'd be very interested if someone else posts a method.)
With some unsupervised machine learning techniques you can get some measurement of error. E.g. an autoencoder gives you an MSE (representing how accurately the lower-dimensional representation can be reconstructed back to the original higher-dimensional form).
But for sentiment analysis all I can think of is to use multiple algorithms and measure agreement between them on the same data. Where they all agree on a particular sentiment you mark it as a more reliable prediction; where they disagree you mark it as an unreliable prediction. (This relies on none of the algorithms having the same biases, which is probably unlikely.)
The usual approach is to label some percentage of your data, and assume/hope it is representative of the whole data. | 2 | 1 | 0 | I'm performing different sentiment analysis techniques for a set of Twitter data I have acquired. They are lexicon based (Vader Sentiment and SentiWordNet) and as such require no pre-labeled data.
I was wondering if there was a method (like F-Score, ROC/AUC) to calculate the accuracy of the classifier. Most of the methods I know require a target to compare the result to. | Accuracy of lexicon-based sentiment analysis | 0 | 0 | 0 | 1,755 |
47,866,365 | 2017-12-18T10:23:00.000 | 0 | 0 | 0 | 1 | python,ftplib | 47,869,378 | 1 | false | 0 | 0 | If you have access to linux server, and the file generated on windows automatically, you can do the folowing:
Generate an SSH key on your Windows machine
Add it to authorized_keys on the Linux machine
Install a simple console scp tool on Windows
Write a simple cmd script to copy the file with the help of scp, something like:
scp c:\path\to\file.txt [email protected]:/home/user/file.txt
Run this script automatically every time, then the file is generated on windows host. | 1 | 0 | 0 | What are the different modules/ways to copy file from a windows computer to a linux server available in python
I tried using the ftplib API to connect to the Windows server but am unable to, getting the error socket.error: [Errno 111] Connection refused
What are the other modules that i can connect to a windows computer to copy or list the files under a directory | Copy File from Windows Host to Linux [Python] | 0 | 0 | 1 | 2,296 |
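Besides ftplib and scp, another commonly used option is SFTP over SSH with the third-party paramiko package; a sketch with placeholder host, credentials and paths:

```python
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect("linux-server.example.com", username="user", password="secret")

sftp = ssh.open_sftp()
sftp.put(r"C:\data\report.txt", "/home/user/report.txt")   # local Windows file -> remote Linux path
print(sftp.listdir("/home/user"))                          # list a remote directory
sftp.close()
ssh.close()
```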
47,870,331 | 2017-12-18T14:15:00.000 | 2 | 0 | 1 | 0 | python,oop,inheritance | 47,870,442 | 4 | false | 0 | 0 | Yes, inheriting from multiple classes in perfectly legal. However, if you have 10 super classes, it seems that you might need to tweak your inheritance tree. Perhaps some of the classes can be inherited by intermediate classes so that the final class inherits from fewer classes.
You should also learn about composition which might provide a cleaner solution for what you are trying to do. | 1 | 0 | 0 | Is that a pythonic way to create a class that inherits from more than 10 classes? Or maybe I should consider a different approach? | Python inheritance - multiple superclasses | 0.099668 | 0 | 0 | 110 |
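A tiny illustration of the composition alternative mentioned above: hold instances of the helpers and delegate to them, instead of inheriting from ten base classes (the class names are invented):

```python
class Logger:
    def log(self, msg):
        print(f"[log] {msg}")

class Storage:
    def save(self, data):
        print(f"saving {data!r}")

class Service:
    def __init__(self):
        self.logger = Logger()      # has-a, not is-a
        self.storage = Storage()

    def handle(self, data):
        self.logger.log("handling request")
        self.storage.save(data)

Service().handle({"id": 1})
```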
47,877,726 | 2017-12-18T23:05:00.000 | 2 | 0 | 1 | 0 | python,powershell,cmd,pip,pycharm | 47,878,274 | 1 | true | 0 | 0 | PyCharm is a nice IDE because when you set up a project you can configure your local interpreter (python.exe) and it will remember. The Windows command prompt defaults to your environmental settings unless you tell it explicitly the path of the pip/python you want - which will always be in the Scripts folder of your virtualenv.
So for instance, to use the virtualenv version of pip you can path to your environment and type Scripts\python -m pip install <package> (noting that I have had trouble using Scripts\pip install <package> directly before; but given that the former has always worked, I haven't bothered to figure out why). Similarly, you can use the virtualenv python interpreter on the command line simply by typing Scripts\python.
You DON'T want to change your environmental settings to point to your virtualenv .exe because things can get pretty messed up that way. It also negates the point of keeping your python environments isolated.
As an aside - usually people interact with a virtualenv by "activating" it. This is optional and just puts everything relative to the Scripts folder (so you don't have to keep typing Scripts\ in front). The other benefit is that you can start pathing around to other directories and the command line will remember (while it is activated) that you want to use those particular versions of pip and python. If you're using the command line you can activate with Scripts\activate. If you're using PowerShell you'll need to use Scripts\activate.ps1. | 1 | 1 | 0 | I think I may have screwed myself.
I had pip working in my venv in PyCharm fine
but whenever I try to access pip from the PowerShell or cmd line, it doesn't recognize the command. I double-checked the path variables and everything, and am now further discouraged as the GUI platform I was working with won't open in the venv.
What are my options here? I need to get pip working in the powershell, it says it's there when I upgrade it, but says its missing when I try to use it, which is unbelievably frustrating. tried also uninstalling pip from the venv but that didn't help anything either. Any help is greatly appreciated.
when I try to install pip normally, i get this
PS C:\Users\lerug\Downloads> py .\get-pip.py
Requirement already up-to-date: pip in c:\users\lerug\appdata\local\programs\python\python36-32\lib\site-packages
PS C:\Users\lerug\Downloads> | Why wont pip work outside of PyCharm | 1.2 | 0 | 0 | 428 |
47,878,076 | 2017-12-18T23:46:00.000 | -1 | 0 | 0 | 0 | python,sql,python-3.x,postgresql,pandas | 55,399,149 | 2 | false | 0 | 0 | I was also facing same issue because dot was added in header. remove dot then it will work. | 2 | 1 | 1 | Pandas .to_sql is not inserting any records for a dataframe I want to send to sql. Are there any generic reasons why this might be the case?
I am not getting any error messages. The column names appear fine, but the table is entirely empty.
When I try to send over a single column (i.e. data.ix[2]), it actually works.
However, if I try to send over more than one column (data.ix[1:3]), I again get a completely blank table in sql.
I have been using this code for other dataframes and have never encountered this problem. It still runs for other dataframes in my set. | Pandas .to_sql is not inserting any records for a dataframe I want to send to sql. Are there any generic reasons why this might be the case? | -0.099668 | 1 | 0 | 2,183 |
47,878,076 | 2017-12-18T23:46:00.000 | 0 | 0 | 0 | 0 | python,sql,python-3.x,postgresql,pandas | 47,896,038 | 2 | false | 0 | 0 | I fixed this problem - it was becomes some of the column headers had '%' in it.
I accidentally discovered this reason for the empty tables when I tried to use io and copy_from a temporary csv, instead of to_sql. I got a transaction error based on a % placeholder error.
Again, this is specific to passing to PSQL; it went through to SQL Server without a hitch. | 2 | 1 | 1 | Pandas .to_sql is not inserting any records for a dataframe I want to send to sql. Are there any generic reasons why this might be the case?
I am not getting any error messages. The column names appear fine, but the table is entirely empty.
When I try to send over a single column (i.e. data.ix[2]), it actually works.
However, if I try to send over more than one column (data.ix[1:3]), I again get a completely blank table in sql.
I have been using this code for other dataframes and have never encountered this problem. It still runs for other dataframes in my set. | Pandas .to_sql is not inserting any records for a dataframe I want to send to sql. Are there any generic reasons why this might be the case? | 0 | 1 | 0 | 2,183 |
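A sketch of working around the cause identified above by sanitising column names before calling to_sql; the connection string and table name are placeholders:

```python
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://user:secret@localhost:5432/mydb")  # placeholder URL
df = pd.DataFrame({"growth %": [1.2, 3.4], "region": ["a", "b"]})

# Replace '%' (which collides with the driver's parameter placeholders) and spaces.
df.columns = [c.replace("%", "pct").strip().replace(" ", "_") for c in df.columns]
df.to_sql("metrics", engine, if_exists="replace", index=False)
```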
47,878,639 | 2017-12-19T01:01:00.000 | 2 | 0 | 1 | 0 | python,time,timeit | 47,878,761 | 4 | false | 0 | 0 | Depending on the OS you're running and how messy solution can you accept, you can do this without imports.
Ordered by increasing insanity:
Some systems provide virtual files which contain various timers. You can get a sub-second resolution at least on a Linux system by reading a counter from that kind of file before and after execution. Not sure about others.
Can you reuse existing imports? If the file already contains any of threading, multiprocessing, signal, you can construct a timer out of them.
If you have some kind of scheduler running on your system (like cron) you can inject a job into it (by creating a file), which will print out timestamps every time it's run.
You can follow a log file on a busy system and assume the last message was close to the time you read it.
Depending on what accuracy you want, you could measure the amount of time each python bytecode operation takes, then write an interpreter for the code available via function.__code__.co_code. While you run the code, you can sum up all the expected execution times. This is the only pure-python solution which doesn't require a specific OS / environment.
If you're running on a system which allows process memory introspection, you can open it and inject any functionality without technically importing anything. | 2 | 2 | 0 | So, I have recently been tasked with writing a function in python 2 that can time the execution of another function. This is simple enough, but the catch is I have to do it WITHOUT importing any modules; this naturally includes time, timeit, etc.
Using only built in functions and statements (e.g. sum(), or, yield) is this even possible?
I don't want to see a solution, I need to work that out for myself, but I would greatly appreciate knowing if this is even possible. If not, then I'd rather not waste the time bashing my head against the proverbial brick wall. | Time a python 2 function takes to run WITHOUT ANY imports; is it possible? | 0.099668 | 0 | 0 | 364 |
47,878,639 | 2017-12-19T01:01:00.000 | 1 | 0 | 1 | 0 | python,time,timeit | 47,878,926 | 4 | false | 0 | 0 | Two "cheating" methods.
If you're avoiding the import keyword, you can use __import__ to import time, which is actually a module builtin to the python2 executable.
If you know the location of the Python installation, you can use execfile on os.py and use the times function. | 2 | 2 | 0 | So, I have recently been tasked with writing a function in python 2 that can time the execution of another function. This is simple enough, but the catch is I have to do it WITHOUT importing any modules; this naturally includes time, timeit, etc.
Using only built in functions and statements (e.g. sum(), or, yield) is this even possible?
I don't want to see a solution, I need to work that out for myself, but I would greatly appreciate knowing if this is even possible. If not, then I'd rather not waste the time bashing my head against the proverbial brick wall. | Time a python 2 function takes to run WITHOUT ANY imports; is it possible? | 0.049958 | 0 | 0 | 364 |
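A tiny sketch of the __import__ trick from the answer above (it still loads the time module under the hood, it just avoids the import statement):
def timed(func, *args, **kwargs):
    clock = __import__('time').time  # grab time.time without an import statement
    start = clock()
    result = func(*args, **kwargs)
    return result, clock() - start
result, elapsed = timed(sum, range(1000000))
print(elapsed)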
47,878,687 | 2017-12-19T01:11:00.000 | 0 | 0 | 0 | 0 | python-3.x,sms,sms-gateway,messagebird | 68,136,708 | 1 | false | 0 | 0 | Messagebird have a feature of forward incoming sms data through webhook(get or post method). if you set an url then Messagebird will forward every incoming sms to you(or your server). You can easily read get/post response. | 1 | 1 | 0 | I am evaluating MessageBird service. I got a Virtual Mobile Number. I am able to send message to dummy numbers (until i get approval for sending messages to real USA number)
Unknown: My problem is about reading the messages received by a VMN.
Details: If I, as the VMN owner, send a message to a consumer, e.g. +1(111)111-1111, and I am interested in reading the response from that consumer, how do I get it?
The MessageBird documentation expects me to know the ID of the response message object (or my understanding is wrong). The documentation is good, but I don't see a way to achieve this programmatically. Any suggestions on how to achieve it?
Thanks in advance! | MessageBird: How to read a response from consumer | 0 | 0 | 0 | 146 |
47,884,227 | 2017-12-19T09:53:00.000 | 0 | 0 | 0 | 0 | google-bigquery,google-cloud-platform,google-cloud-storage,google-python-api | 47,884,399 | 1 | false | 0 | 0 | You have set this up the wrong way. You need to grant the user account access on both projects so that resources are accessible across projects. So there needs to be a user authorized to do the BQ work and also the Cloud Storage work on the other project.
Also, bucket names must be globally unique; that means you can't create that name again either, since it's global (you reserved that name for the entire planet, not just for your project). | 1 | 1 | 0 | I have two projects under the same account:
projectA with BQ and projectB with cloud storage
projectA has BQ with dataset and table - testDataset.testTable
prjectB has cloud storage and bucket - testBucket
I use python, google cloud rest api
account key credentials for every project, with different permissions: projectA key has permissions only for BQ; projectB has permissions only for cloud storage
What I need:
import data from projectA testDataset.testTable to projectB testBucket
Problems
of course, I'm running into a Permission denied error while trying to do it, because apparently the projectA key does not have permissions for projectB storage, etc.
another strange issue: as I have testBucket in projectB, I can't create a bucket with the same name in projectA, and I get
This bucket name is already in use. Bucket names must be globally
unique. Try another name.
So, it looks like all accounts are connected; I guess that means it should be possible to import data from one account to another via the API
What can I do in this case? | Import data from BigQuery to Cloud Storage in different project | 0 | 1 | 0 | 716 |
47,885,832 | 2017-12-19T11:19:00.000 | 0 | 0 | 0 | 0 | python,networking | 47,886,143 | 2 | false | 0 | 0 | try this on Unix:
you need the subprocess module to find out the local hostname
import subprocess
hn = subprocess.Popen(['hostname'], stdout=subprocess.PIPE)
hn_out = hn.stdout.readline().strip('\n')
Test if "host" string is the IPv4 loop back address, IPv6 loop back address or if it's the local host name
if host == '127.0.0.1' or host == '::1' or host == hn_out:
print("It's localhost") | 1 | 4 | 0 | I have a string host which can be a hostname (without domain), ipv4 address or ipv6 address.
Is there a simple way to determine if this refers to localhost loop-back device?
Python Version: 2.7 | Determine if host is localhost | 0 | 0 | 0 | 3,231 |
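As a lighter-weight variant of the answer above, the standard socket module can do the same check without spawning the hostname command. Still only a sketch, since a machine can have aliases this does not cover:
import socket
def is_localhost(host):
    if host in ('localhost', '127.0.0.1', '::1'):
        return True
    try:
        # resolve the name and compare against the loopback address or this machine's own hostname
        return socket.gethostbyname(host) == '127.0.0.1' or host == socket.gethostname()
    except socket.error:
        return False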
47,887,030 | 2017-12-19T12:26:00.000 | 2 | 0 | 1 | 0 | python,objective-c,nsoperationqueue | 47,887,914 | 2 | false | 0 | 0 | have you looked celery as an option? This is what celery website quotes
Celery is an asynchronous task queue/job queue based on distributed message passing. It is focused on real-time operation, but supports scheduling as well. | 1 | 0 | 0 | I'm looking into concurrency options for Python. Since I'm an iOS/macOS developer, I'd find it very useful if there was something like NSOperationQueue in python.
Basically, it's a queue to which you can add operations (every operation is Operation-derived class with run method to implement) which are executed either serially, or in parallel or ideally various dependencies can be set on operations (ie that some operation depends on others being executed before it can start). | Is there something like NSOperationQueue from ObjectiveC in Python? | 0.197375 | 0 | 0 | 354 |
47,887,942 | 2017-12-19T13:19:00.000 | 1 | 0 | 1 | 0 | python,regex,python-2.7 | 47,888,419 | 1 | false | 0 | 0 | Though not the most optimal way to do it, but one way could be to split your long string into list of words. Then for each word query the database using LIKE regex.
Eg: SELECT * FROM table WHERE city LIKE '%word%' | 1 | 0 | 0 | I have a large database of cities and towns (around 300 000) and I am trying -using python- to check if a given string contains one of these cities.
What is the optimal way to achieve this? | Find matching words in a database and a long string | 0.197375 | 0 | 0 | 65 |
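If the 300,000 city names fit in memory, one approach that avoids issuing a LIKE query per word is to load the names into a set once and intersect it with the words of the string. A rough sketch (multi-word names are handled only up to two words here):
cities = {"london", "paris", "new york"}  # in practice, load all 300,000 names from the database once
def find_cities(text, cities):
    words = text.lower().split()
    # single words plus adjacent two-word combinations, to catch names like "new york"
    candidates = set(words) | {" ".join(pair) for pair in zip(words, words[1:])}
    return candidates & cities
print(find_cities("I flew from New York to Paris", cities))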
47,892,038 | 2017-12-19T17:13:00.000 | 2 | 0 | 1 | 0 | python | 58,295,830 | 3 | false | 0 | 0 | I was looking for a similar issue (unable to import tensorflow in jupyter) and found that maybe most answers are outdated because now conda installs tf in its own environment.
The most useful thing I found is:
https://docs.anaconda.com/anaconda/user-guide/tasks/tensorflow/
which explains in very few steps how to install tf or tf-gpu in its own environment.
Then my problem was that the jupyter notebook is in its own base environment, not in the tf-gpu environment. How to work with that from a jupyter notebook based on the base environment?
The solution comes from the very helpful answer from Nihal Sangeeth to this question
https://stackoverflow.com/questions/53004311/how-to-add-conda-environment-to-jupyter-lab
conda activate tf-gpu
(tf-gpu)$ conda install ipykernel
(tf-gpu)$ ipython kernel install --user --name=<any_name_you_like_for_kernel>
(tf-gpu)$ conda deactivate
Close and reopen your jupyter notebook.
Then in your jupyter notebook you will find the option, under "kernel" of "change kernel". Change kernel to your newly created kernel and you will be able to import tensorflow as tf and go on from there.
Hope it helps somebody | 1 | 2 | 1 | ModuleNotFoundError Traceback (most recent call last)
in ()
11 import numpy as np
12
---> 13 import tensorflow as tf
14
15
ModuleNotFoundError: No module named 'tensorflow' | import tensorflow in Anaconda prompt | 0.132549 | 0 | 0 | 9,109 |
47,892,039 | 2017-12-19T17:13:00.000 | 2 | 1 | 1 | 0 | python,anaconda,spyder | 47,895,902 | 2 | true | 0 | 0 | (Spyder maintainer here) Sorry, Spyder doesn't have this functionality at the moment (February 2020). | 1 | 4 | 0 | Is it possible to export my color scheme, so I can import back and forth on other computers and Anaconda envs? I can't seem to find a practical way of doing it. | How to import/export syntax coloring scheme | 1.2 | 0 | 0 | 1,031 |
47,894,980 | 2017-12-19T20:51:00.000 | 1 | 0 | 0 | 0 | python,selenium,wait | 47,895,032 | 1 | false | 0 | 0 | Sounds like you could just get the value of the span and have python check once a second (or whatever) to see if it is different than before. | 1 | 0 | 0 | I have clicked on a button that will update a span. So now I want to wait for that span to change from a 1 to a 2. It sounds like I'm trying to do exactly what text_to_be_present_in_element does, except without a locator. I already have the WebElement I need. Is it possible to use these expected_conditions functions with an actual WebElement? | Is it possible to wait for text to change to a specific value in Selenium (Python) with a WebElement? | 0.197375 | 0 | 1 | 108 |
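Building on the answer above, the polling can be expressed as an explicit wait by giving WebDriverWait a plain callable instead of one of the expected_conditions helpers. A sketch, where driver and element are the WebDriver and the span WebElement you already have:
from selenium.webdriver.support.ui import WebDriverWait
# wait up to 10 seconds for the span's text to become "2"
WebDriverWait(driver, 10).until(lambda d: element.text == "2")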
47,895,059 | 2017-12-19T20:56:00.000 | 0 | 0 | 1 | 0 | python,ubuntu,importerror | 47,895,217 | 2 | false | 0 | 0 | One of the reason could be, that you have different versions of python in your system. The version the spyder is using has pyscreenshot installed in it, however the terminal version does not have it. Check which version is being used in both the cases. | 2 | 0 | 0 | while running a python file with the terminal the following ImportError occurs:
ImportError: No module named pyscreenshot
if I run it with spyder everything works fine...
I don't know why this ImportError occurs or how to fix it.
Also I execute the python file in the terminal with sudo.. if it helps. | Terminal ImportError: No module named pyscreenshot | 0 | 0 | 0 | 725 |
47,895,059 | 2017-12-19T20:56:00.000 | 0 | 0 | 1 | 0 | python,ubuntu,importerror | 47,895,121 | 2 | false | 0 | 0 | It means that pyscreenshot is not installed for your terminal's main Python version, so you can install it with pip install pyscreenshot. It seems your terminal and Spyder are using different Python installations. | 2 | 0 | 0 | while running a python file with the terminal the following ImportError occurs:
ImportError: No module named pyscreenshot
if I run it with spyder everything works fine...
I don't know why this ImportError occurs or how to fix it.
Also I execute the python file in the terminal with sudo.. if it helps. | Terminal ImportError: No module named pyscreenshot | 0 | 0 | 0 | 725 |
47,898,566 | 2017-12-20T03:56:00.000 | 1 | 0 | 0 | 0 | python,tensorflow,keras | 47,898,790 | 1 | true | 0 | 0 | I don't think 'None' can be accepted in any reshape function. Reshape functions usually require numeric size. But you can reshape
'None' part to 1 to fit the model you are using.
For reshaping,
considering x is the variable that needs to be reshaped,
Using Numpy,
x = np.reshape(x, (1000, 1, 1, 1))
Using Tensorflow,
x = tf.reshape(x, (1000, 1, 1, 1)) | 1 | 0 | 1 | I want to ask about how I reshape an array with a different number of elements, For example, the size of the array is (1000,). I want to reshape it to be (1000, None,None,1) to be the input to CNN, I'm using keras. | Variable length array reshape for input to CNN | 1.2 | 0 | 0 | 373 |
47,901,878 | 2017-12-20T08:41:00.000 | 0 | 1 | 0 | 0 | python,wmi | 59,811,597 | 1 | false | 0 | 0 | if your file name is wmi.py:
rename it | 1 | 0 | 0 | I have downloaded python 3.6.3 & installed wmi extension, wmi module. I'm getting error like "AttributeError: module wmi has no attribute WMI". Please help. | Python wmi module attribute error | 0 | 0 | 0 | 723 |
47,905,139 | 2017-12-20T11:36:00.000 | 0 | 0 | 0 | 0 | python,text-classification | 50,867,759 | 1 | false | 0 | 0 | If you have 3 classes and labelled data and have trained the model, then you have "told the classifier" everything you can (ie trained).
If you're saying you want to tell the classifier about the 2/6 test cases that failed, then no, it's not possible with Logistic Regression (maybe with some other feedback model?). What you need is to train the model more, or add more test cases. You could add those 2 failed cases to the training data and try different test data.
You may have an underfit model you can try to tune, but with the experiments I've done with text similar to yours it can be difficult to obtain really high accuracy with limited data and just tf-idf since the "model" is just word frequencies. | 1 | 0 | 1 | I am planning to classify emails. I am using tfidf vectorizer and logistic regression algorithm to do this. I took very small training and testing sets. My training set consists of 150 emails( 3 classes, 50 emails/class) and testing set consists of 6 emails. Now my classifier is predicting 4 out of 6 correctly. Now my doubt is, can I tell the classifier that this document belongs to class X not class Y? If yes, What is this process called?
Thank you. | text classification using logistic regression | 0 | 0 | 0 | 422 |
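To make the answer concrete, "telling" the classifier usually just means appending the corrected examples to the training set and refitting. A hedged sketch with toy data (the texts and labels are made up):
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
train_texts = ["invoice attached", "meeting at noon", "win a free prize"]   # toy stand-ins for the 150 emails
train_labels = ["billing", "scheduling", "spam"]
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_texts, train_labels)
# a test email that was misclassified: append it with its true label and refit
train_texts.append("please reschedule our meeting")
train_labels.append("scheduling")
clf.fit(train_texts, train_labels)
print(clf.predict(["free prize if you reply"]))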
47,905,576 | 2017-12-20T11:59:00.000 | 2 | 0 | 0 | 0 | python,gensim,doc2vec | 47,918,220 | 2 | false | 0 | 0 | Doc2Vec infer_vector() only takes individual text examples, as lists-of-word-tokens. So you can't pass in a batch of examples. (And, you shouldn't be passing in non-tokenized strings – but lists-of-tokens, preprocessed in the same manner as your training data was preprocessed.)
But, you might be able to use a function that multiply-applies infer_vector() for you, as the @COLDSPEED comment suggests. Still, the column should have lists-of-tokens, rather than strings-of-characters, if you want meaningful results.
Also, most users find infer_vector() works much better using non-default values for its steps parameter (much larger than its default of 5), and perhaps smaller values for its starting alpha parameter (such as more like the training default of 0.025 than the inference default of 0.1). | 2 | 2 | 1 | I have created document vectors for a large corpus using Gensim's doc2vec.
sentences=gensim.models.doc2vec.TaggedLineDocument('file.csv')
model = gensim.models.doc2vec.Doc2Vec(sentences,size = 10, window = 800, min_count = 1, workers=40, iter=10, dm=0)
Now I am using Gensim's infer_vector() using those document vectors to create document vectors for another sample corpus
Eg: model.infer_vector('This is a string')
Is there a way to pass the entire DataFrame through infer_vector and get the output vectors for each line in the DataFrame? | How to use Gensim Doc2vec infer_vector() for large DataFrame? | 0.197375 | 0 | 0 | 1,915 |
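A sketch of the row-by-row application suggested in the answer, where model is the trained Doc2Vec model and the 'text' column name is an assumption (in newer gensim versions the steps argument is called epochs):
# tokenize each row the same way the training corpus was tokenized, then infer per row
df['tokens'] = df['text'].str.lower().str.split()
df['doc_vector'] = df['tokens'].apply(lambda tokens: model.infer_vector(tokens, steps=50, alpha=0.025))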
47,905,576 | 2017-12-20T11:59:00.000 | 0 | 0 | 0 | 0 | python,gensim,doc2vec | 47,925,223 | 2 | false | 0 | 0 | @gojomo: Thanks for the answer, but I tried inferring using both tokenized rows as well as raw strings, and got the same document vector.
Is there a way to know whether the document vectors being created are meaningful or not? | 2 | 2 | 1 | I have created document vectors for a large corpus using Gensim's doc2vec.
sentences=gensim.models.doc2vec.TaggedLineDocument('file.csv')
model = gensim.models.doc2vec.Doc2Vec(sentences,size = 10, window = 800, min_count = 1, workers=40, iter=10, dm=0)
Now I am using Gensim's infer_vector() using those document vectors to create document vectors for another sample corpus
Eg: model.infer_vector('This is a string')
Is there a way to pass the entire DataFrame through infer_vector and get the output vectors for each line in the DataFrame? | How to use Gensim Doc2vec infer_vector() for large DataFrame? | 0 | 0 | 0 | 1,915 |
47,912,529 | 2017-12-20T18:47:00.000 | 0 | 0 | 0 | 0 | python,postgresql,psycopg2 | 48,137,413 | 2 | false | 0 | 0 | Eventually I ended up adding AUTOCOMMIT = true
This is the only way I can make sure all workers see when a table is created | 1 | 2 | 0 | I have a problem figuring out how I can create a table using psycopg2, with IF NOT EXISTS statement, and getting the NOT EXISTS result
The issue is that I'm creating a table, and running some CREATE INDEX / UNIQUE CONSTRAINT after it was created. If the table already exists - there is no need to create the indexes or constraints | psycopg2 create table if not exists and return exists result | 0 | 1 | 0 | 1,166 |
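To get the "did it already exist" answer explicitly (rather than inferring it), one option is to check the catalog before the CREATE. A sketch: the DSN, table and index names are placeholders, and to_regclass needs PostgreSQL 9.4+:
import psycopg2
conn = psycopg2.connect("dbname=mydb user=me")   # placeholder DSN
conn.autocommit = True                           # so other workers see the new table immediately
with conn.cursor() as cur:
    cur.execute("SELECT to_regclass('public.my_table') IS NOT NULL")
    already_existed = cur.fetchone()[0]
    cur.execute("CREATE TABLE IF NOT EXISTS my_table (id serial PRIMARY KEY)")
    if not already_existed:
        cur.execute("CREATE INDEX my_table_id_idx ON my_table (id)")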
47,914,980 | 2017-12-20T22:01:00.000 | 10 | 0 | 1 | 1 | python,anaconda | 53,872,163 | 6 | false | 0 | 0 | I added "\Anaconda3_64\" and "\Anaconda3_64\Scripts\" to the PATH variable. Then I can use conda from powershell or command prompt. | 4 | 49 | 0 | I had to install the 64-bit version of Anaconda with python 3.5 in Windows 10. I followed the default settings (AppData/Continuum/Anaconda3). However, after installation, I am unsure how to access the Anaconda command prompt so that I can use conda to install packages. I also attempted to install Anaconda 64 bit in C:/Program Files, but several of the python script did not like the space and it failed to install.
What can I do to access the Anaconda prompt? | How to access Anaconda command prompt in Windows 10 (64-bit) | 1 | 0 | 0 | 214,781 |
47,914,980 | 2017-12-20T22:01:00.000 | 46 | 0 | 1 | 1 | python,anaconda | 47,915,189 | 6 | false | 0 | 0 | Go with the mouse to the Windows Icon (lower left) and start typing "Anaconda". There should show up some matching entries. Select "Anaconda Prompt". A new command window, named "Anaconda Prompt" will open. Now, you can work from there with Python, conda and other tools. | 4 | 49 | 0 | I had to install the 64-bit version of Anaconda with python 3.5 in Windows 10. I followed the default settings (AppData/Continuum/Anaconda3). However, after installation, I am unsure how to access the Anaconda command prompt so that I can use conda to install packages. I also attempted to install Anaconda 64 bit in C:/Program Files, but several of the python script did not like the space and it failed to install.
What can I do to access the Anaconda prompt? | How to access Anaconda command prompt in Windows 10 (64-bit) | 1 | 0 | 0 | 214,781 |
47,914,980 | 2017-12-20T22:01:00.000 | 4 | 0 | 1 | 1 | python,anaconda | 56,891,764 | 6 | false | 0 | 0 | If Anaconda Prompt is missing, you can create it by creating a shortcut file of Command Prompt (cmd.exe) and change its target to:
%windir%\System32\cmd.exe "/K" <Anaconda Location>\anaconda3\Scripts\activate.bat
Example:
%windir%\system32\cmd.exe "/K" C:\Users\user_1\AppData\Local\Continuum\anaconda3\Scripts\activate.bat | 4 | 49 | 0 | I had to install the 64-bit version of Anaconda with python 3.5 in Windows 10. I followed the default settings (AppData/Continuum/Anaconda3). However, after installation, I am unsure how to access the Anaconda command prompt so that I can use conda to install packages. I also attempted to install Anaconda 64 bit in C:/Program Files, but several of the python script did not like the space and it failed to install.
What can I do to access the Anaconda prompt? | How to access Anaconda command prompt in Windows 10 (64-bit) | 0.132549 | 0 | 0 | 214,781 |
47,914,980 | 2017-12-20T22:01:00.000 | 16 | 0 | 1 | 1 | python,anaconda | 55,545,141 | 6 | false | 0 | 0 | To run Anaconda Prompt using an icon, I made an icon and put:
%windir%\System32\cmd.exe "/K" C:\ProgramData\Anaconda3\Scripts\activate.bat C:\ProgramData\Anaconda3 (The file location would be different in each computer.)
at icon -> right click -> Property -> Shortcut -> Target
I see %HOMEPATH% at icon -> right click -> Property -> Start in
Test environment
OS: Windows 10
Library: Anaconda 10 (64 bit) | 4 | 49 | 0 | I had to install the 64-bit version of Anaconda with python 3.5 in Windows 10. I followed the default settings (AppData/Continuum/Anaconda3). However, after installation, I am unsure how to access the Anaconda command prompt so that I can use conda to install packages. I also attempted to install Anaconda 64 bit in C:/Program Files, but several of the python script did not like the space and it failed to install.
What can I do to access the Anaconda prompt? | How to access Anaconda command prompt in Windows 10 (64-bit) | 1 | 0 | 0 | 214,781 |
47,925,668 | 2017-12-21T13:06:00.000 | 3 | 1 | 0 | 0 | python,rabbitmq | 47,937,389 | 1 | true | 0 | 0 | The best way to consume the messages is using basic_consume.
basic_get is slow | 1 | 0 | 0 | I'm writing a Python cron-job script which periodically gets updates from RabbitMQ and processes them. Every time I process them I only need the current snapshot of the RMQ queue. I use queue_declare to get the number of messages in it.
I know how to get messages one by one with basic_get. Also, I can use basic_consume/start_consuming and get messages in the background, store them in some list and periodically read from that list, but what if the script fails with some error? I would lose all the messages read into the list. Also, I can use a few consumers (a pool of connections to RMQ) and get messages one by one. Maybe there is some other approach to do it?
So, my question is - what is the best (i.e. secure and fast) way to get the current messages from a RabbitMQ queue? | The best way to read a lot of messages from RabbitMQ queue? | 1.2 | 0 | 0 | 517
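To illustrate, one concrete pattern with pika for the periodic-snapshot case is to drain the queue with basic_get, acknowledging each message only after it is processed so nothing is lost if the script crashes mid-run. The answer above prefers basic_consume for throughput; this is just the simpler sketch, with a made-up queue name:
import pika
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
while True:
    method, properties, body = channel.basic_get(queue="my_queue")
    if method is None:                      # queue is empty: the snapshot has been consumed
        break
    process(body)                           # your own processing function
    channel.basic_ack(method.delivery_tag)  # ack only after successful processing
connection.close()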
47,928,371 | 2017-12-21T15:48:00.000 | 2 | 0 | 0 | 0 | python,ubuntu,tensorflow,cublas | 47,928,522 | 1 | false | 0 | 0 | A likely explanation is that your path is not set up correctly.
Try echo $LD_LIBRARY_PATH and let us know what you get.
Another explanation is that it is not in that directory. Yes, libcublas.so should normally be in /usr/local/cuda-8.0/lib64 but double check if it is there or another directory by using find. | 1 | 3 | 1 | I am new to tensorflow and I am working on shared linux (Ubuntu 16.04), it means I don't have root access. Cuda 8.0 and Cudnn 8 are already installed by admin as root. I have installed python 3.5 using anaconda and then installed tensorflow using pip. I have added the cuda-8.0/bin and cuda-8.0/lib64 to PATH and LD_PATH_LIBRARY using following exports.
export PATH="$PATH:/usr/local/cuda-8.0/bin"
export LD_LIBRARY_PATH="/usr/local/cuda-8.0/lib64"
But when I try to run the program it gives the following error.
ImportError: libcublas.so.8.0: cannot open shared object file: No such file or directory
However these files exist in LD_LIBRARY_PATH, and nvcc -V is also working.
Is it even possible to refer to the system installed Cuda and CuDnn ? If yes, can you help to clear the above error. Thanks in advance. | ImportError: libcublas.so.8.0: cannot open shared object file: No such file or directory (Shared Linux) | 0.379949 | 0 | 0 | 1,016 |
47,929,215 | 2017-12-21T16:39:00.000 | 1 | 0 | 1 | 0 | python,mathematical-optimization,pulp,mixed-integer-programming | 47,929,278 | 1 | true | 0 | 0 | One fairly simple approach:
introduce an integer-variable I
build your constraint as: probl += lpSum([vars[h] for h in varSKU if h[2] == b]) == I*100
(constrain I as needed: e.g. I >= 1; I <= N)
Keep in mind: when having multiple constraints and the multiples of 100 are not necessarily the same for your constraints, you will need one auxiliary variable I_x for each constraint!
(And: you can't use python's operators in general within pulp or any other LP-modelling sytem (round, int, mod, ceil, ...)! You have to accept the rules/form those modelling-systems allow: in this case -> LpAffineExpression) | 1 | 1 | 0 | I am writing a LpProblem and I need to create a constraint where the sum of some variables is multiples of 100... 100, 200, 300...
I am trying the next expressions using mod(), round() and int() but none works because they don't support LpAffineExpression.
probl += lpSum([vars[h] for h in varSKU if h[2] == b]) % 100 == 0
probl += lpSum([vars[h] for h in varSKU if h[2] == b]) / 100 == int(lpSum([vars[h] for h in varSKU if h[2] == b]) / 100)
probl += lpSum([vars[h] for h in varSKU if h[2] == b]) / 100 == round(lpSum([vars[h] for h in varSKU if h[2] == b]) / 100)
Can you give me some ideas for writing this constraint?
Thank you! | Use mod function in a constraint using Python Pulp | 1.2 | 0 | 0 | 864 |
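Putting the auxiliary-variable idea from the answer into a complete toy model (the variable names and bounds here are illustrative, not from the original problem):
from pulp import LpProblem, LpVariable, LpMaximize, LpInteger, lpSum
prob = LpProblem("multiples_of_100", LpMaximize)
x = [LpVariable("x%d" % i, lowBound=0) for i in range(3)]
I = LpVariable("I", lowBound=1, upBound=10, cat=LpInteger)  # auxiliary integer variable
prob += lpSum(x)                  # objective: maximise the total
prob += lpSum(x) == 100 * I       # forces the total to be a multiple of 100
prob += lpSum(x) <= 550           # some other constraint
prob.solve()
print(sum(v.value() for v in x))  # 500.0 with the data above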
47,929,909 | 2017-12-21T17:25:00.000 | 1 | 0 | 0 | 0 | python,django,google-api,django-socialauth,social-authentication | 47,943,497 | 1 | true | 1 | 0 | My bad, I used a wrong key on settings.py | 1 | 0 | 0 | I recently implanted the connection with facebook and google on my local server, and everything worked.
But, when I tried to do it in production, the connection with google returns: "Your credentials aren't allowed". (Facebook works)
I don't know why, because I'm pretty sure that my application is confirmed by Google.
Do you have some ideas ?
Thanks in advance ! | "Your credentials aren't allowed" - Google+ API - Production | 1.2 | 0 | 1 | 1,548 |
47,932,656 | 2017-12-21T21:02:00.000 | 2 | 0 | 1 | 0 | python | 47,932,681 | 1 | true | 0 | 0 | No, it doesn't have such a number as a native type. You could use int, and just never store a negative number. Or you could create your own type and limit its value in its methods. | 1 | 3 | 0 | I have a question: Does python have a whole number data type> Unlike integers and floats, whole numbers are non-negative (0 to infinity). Does python have a data type declaration for that? | Does Python have a whole number data type? | 1.2 | 0 | 0 | 332 |
47,932,725 | 2017-12-21T21:08:00.000 | 10 | 0 | 1 | 0 | python-3.x,pycharm | 49,899,116 | 10 | false | 0 | 0 | I ran into this issue when trying to get docker up and running with Pycharm 2018.1 and using the container's Interpreter. I would get the error below.
"Cannot Save Settings please use a different SDK name"
The issue I had was due to having multiple python interpreters of the same name.
Under Pycharm || Preferences || Project Interpreter
Click "show all" within the Project Interpreter dropdown and then delete any / all interpreters that you don't need. | 6 | 22 | 0 | I have been using Pycharm for years and have never had any problem. However, after my most recent PyCharm update I can no longer configure the interpreter.
Also each time I create a new project it creates a venv directory under my project. When I go to File/Default Settings/Project Interpreter, I am provided with new options.
In this window it allows you to configure a virtual environment, the conda environment, and the system interpreter. I am assuming that I should configure the system interpreter. From there I point PyCharm to the interpreter on my Mac at /usr/local/Cellar/python3/3.6.3/bin/python3 and hit OK.
It then takes me back to the main window where it shows the path in the project interpreter. At this point I hit apply and get a message:
Cannot Save Settings please use a different SDK name
It doesn't matter which interpreter I choose, I get the same message. Has anyone else come up with the same problem and how do I fix this?
Interestingly my old projects still work correctly. | Configuring interpreter in PyCharm: "please use a different SDK name" | 1 | 0 | 0 | 16,186 |
47,932,725 | 2017-12-21T21:08:00.000 | 35 | 0 | 1 | 0 | python-3.x,pycharm | 48,535,675 | 10 | false | 0 | 0 | I had the same problem while setting up the virtual environment for my project and no matter if I create a new virtual environment or select an existing one, I get the warning:
"Cannot Save Settings please use a different SDK name"
Finally I found the solution:
Click on the project interpreter dropdown and select show all.... There you might be having multiple virtual environments with same name. Now here is the conflict you need to fix manually by renaming them so every item has the unique name. | 6 | 22 | 0 | I have been using Pycharm for years and have never had any problem. However, after my most recent PyCharm update I can no longer configure the interpreter.
Also each time I create a new project it creates a venv directory under my project. When I go to File/Default Settings/Project Interpreter, I am provided with new options.
In this window it allows you to configure a virtual environment, the conda environment, and the system interpreter. I am assuming that I should configure the system interpreter. From there I point PyCharm to the interpreter on my Mac at /usr/local/Cellar/python3/3.6.3/bin/python3 and hit OK.
It then takes me back to the main window where it shows the path in the project interpreter. At this point I hit apply and get a message:
Cannot Save Settings please use a different SDK name
It doesn't matter which interpreter I choose, I get the same message. Has anyone else come up with the same problem and how do I fix this?
Interestingly my old projects still work correctly. | Configuring interpreter in PyCharm: "please use a different SDK name" | 1 | 0 | 0 | 16,186 |
47,932,725 | 2017-12-21T21:08:00.000 | 1 | 0 | 1 | 0 | python-3.x,pycharm | 57,827,910 | 10 | false | 0 | 0 | Go to Project > Project Interpreter > Select the dropdown menu > "Show All".
For me, there were several Python environments, two of which were red / flagged as invalid. Remove the envs that are red / flagged as invalid, select the remaining valid one, and re-apply settings. | 6 | 22 | 0 | I have been using Pycharm for years and have never had any problem. However, after my most recent PyCharm update I can no longer configure the interpreter.
Also each time I create a new project it creates a venv directory under my project. When I go to File/Default Settings/Project Interpreter, I am provided with new options.
In this window it allows you to configure a virtual environment, the conda environment, and the system interpreter. I am assuming that I should configure the system interpreter. From there I point PyCharm to the interpreter on my Mac at /usr/local/Cellar/python3/3.6.3/bin/python3 and hit OK.
It then takes me back to the main window where it shows the path in the project interpreter. At this point I hit apply and get a message:
Cannot Save Settings please use a different SDK name
It doesn't matter which interpreter I choose, I get the same message. Has anyone else come up with the same problem and how do I fix this?
Interestingly my old projects still work correctly. | Configuring interpreter in PyCharm: "please use a different SDK name" | 0.019997 | 0 | 0 | 16,186 |
47,932,725 | 2017-12-21T21:08:00.000 | 4 | 0 | 1 | 0 | python-3.x,pycharm | 57,171,648 | 10 | false | 0 | 0 | How to fix this in Windows 10:
Close PyCharm.
delete this file: C:\Users\<username>\.PyCharmCE2018.3\config\options\jdk.table.xml
Open PyCharm again and load all Python interpreters again. | 6 | 22 | 0 | I have been using Pycharm for years and have never had any problem. However, after my most recent PyCharm update I can no longer configure the interpreter.
Also each time I create a new project it creates a venv directory under my project. When I go to File/Default Settings/Project Interpreter, I am provided with new options.
In this window it allows you to configure a virtual environment, the conda environment, and the system interpreter. I am assuming that I should configure the system interpreter. From there I point PyCharm to the interpreter on my Mac at /usr/local/Cellar/python3/3.6.3/bin/python3 and hit OK.
It then takes me back to the main window where it shows the path in the project interpreter. At this point I hit apply and get a message:
Cannot Save Settings please use a different SDK name
It doesn't matter which interpreter I choose, I get the same message. Has anyone else come up with the same problem and how do I fix this?
Interestingly my old projects still work correctly. | Configuring interpreter in PyCharm: "please use a different SDK name" | 0.07983 | 0 | 0 | 16,186 |
47,932,725 | 2017-12-21T21:08:00.000 | 0 | 0 | 1 | 0 | python-3.x,pycharm | 55,686,153 | 10 | false | 0 | 0 | In my case, I moved my project to a different location and PyCharm started complaining about Cannot Save Settings please use a different SDK name. At the top of the main editor, it asks me to Configure Project Interpreter. I clicked it, and then ...
My solution
Remove all existing interpreters that are marked as invalid in the preference.
Select the interpreter in the moved venv subfolder in my project.
Without doing both, I kept getting the same "SDK name" error. It seemed that the project thinks that it already has an interpreter called "python.exe", if you don't actively remove all "invalid" ones. | 6 | 22 | 0 | I have been using Pycharm for years and have never had any problem. However, after my most recent PyCharm update I can no longer configure the interpreter.
Also each time I create a new project it creates a venv directory under my project. When I go to File/Default Settings/Project Interpreter, I am provided with new options.
In this window it allows you to configure a virtual environment, the conda environment, and the system interpreter. I am assuming that I should configure the system interpreter. From there I point PyCharm to the interpreter on my Mac at /usr/local/Cellar/python3/3.6.3/bin/python3 and hit OK.
It then takes me back to the main window where it shows the path in the project interpreter. At this point I hit apply and get a message:
Cannot Save Settings please use a different SDK name
It doesn't matter which interpreter I choose, I get the same message. Has anyone else come up with the same problem and how do I fix this?
Interestingly my old projects still work correctly. | Configuring interpreter in PyCharm: "please use a different SDK name" | 0 | 0 | 0 | 16,186 |
47,932,725 | 2017-12-21T21:08:00.000 | 1 | 0 | 1 | 0 | python-3.x,pycharm | 51,556,018 | 10 | false | 0 | 0 | You cannot have 2 or more virtual environments with same name. Even if you have projects with same name stored at 2 different places, please give unique name to its venv. This will solve your problem.
To check all the virtual environments:
Go to File >> Settings >> Project: your_project_name >> Project Interpreter
And rename the venv name. | 6 | 22 | 0 | I have been using Pycharm for years and have never had any problem. However, after my most recent PyCharm update I can no longer configure the interpreter.
Also each time I create a new project it creates a venv directory under my project. When I go to File/Default Settings/Project Interpreter, I am provided with new options.
In this window it allows you to configure a virtual environment, the conda environment, and the system interpreter. I am assuming that I should configure the system interpreter. From there I point PyCharm to the interpreter on my Mac at /usr/local/Cellar/python3/3.6.3/bin/python3 and hit OK.
It then takes me back to the main window where it shows the path in the project interpreter. At this point I hit apply and get a message:
Cannot Save Settings please use a different SDK name
It doesn't matter which interpreter I choose, I get the same message. Has anyone else come up with the same problem and how do I fix this?
Interestingly my old projects still work correctly. | Configuring interpreter in PyCharm: "please use a different SDK name" | 0.019997 | 0 | 0 | 16,186 |
47,934,376 | 2017-12-22T00:10:00.000 | 0 | 0 | 0 | 0 | python,pandas,dataframe | 47,934,883 | 2 | false | 0 | 0 | If you have both dataframes of same length you can also use:
print df1.loc[df1['ID'] != df2['ID']]
assign it to a third dataframe. | 1 | 1 | 1 | Does anyone know of an efficient way to create a new dataframe based off of two dataframes in Python/Pandas?
What I am trying to do is check if a value from df1 is in df2, then do not add the row to df3. I am working with student IDS, and if a student ID from df1 is in df2, I do not want to include it in the new dataframe, df3.
So does anybody know an efficient way to do this? I have googled and looked on SO, but found nothing that works so far. | Create a dataframe by discarding intersections of two dataframes (Pandas) | 0 | 0 | 0 | 226 |
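For reference, the usual idiom for exactly this filter, assuming both frames have an 'ID' column (unlike an element-wise != comparison, isin does not require the frames to have the same length):
import pandas as pd
df1 = pd.DataFrame({"ID": [1, 2, 3, 4], "name": ["a", "b", "c", "d"]})
df2 = pd.DataFrame({"ID": [2, 4]})
# keep only the rows of df1 whose ID does not appear anywhere in df2
df3 = df1[~df1["ID"].isin(df2["ID"])]
print(df3)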
47,937,244 | 2017-12-22T07:08:00.000 | 0 | 0 | 0 | 0 | python,gtk | 48,007,102 | 1 | true | 0 | 1 | Turns out active was the wrong signal to connect to a RadioMenuItem, even though it works perfectly fine for a regular MenuItem.
Instead, connecting the toggled signal, and then checking in the callback function whether the widget's get_active() function returns True, results in the desired behaviour. | 1 | 0 | 0 | I have a menu with a few RadioMenuItems.
After the user selects an option, my program reloads the menu and therefore also resets the pointer to the selected item.
I need to programmatically set it back but without activating the function connected to it. RadioMenuItem.set_active(True) will activate the function. In fact, it seems that my function is called even when I do not call set_active, even just when the menu is drawn.
How can I do this? | how to set a Gtk RadioMenuItem as 'selected' without activating it | 1.2 | 0 | 0 | 217
47,937,566 | 2017-12-22T07:37:00.000 | -3 | 0 | 0 | 0 | python,django | 47,937,728 | 4 | false | 1 | 0 | You can see it in the internet browsers (F12) or if you use POSTMAN, it show the time.
Also, you can use the standard Python time library for measuring the execution time of a piece of code. | 1 | 6 | 0 | How do I calculate the response time from the moment the user inputs the search criteria until the relevant information is loaded/displayed onto the portal? | How to calculate response time in Django | -0.148885 | 0 | 0 | 9,889
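On the server side, a small piece of middleware is the usual way to measure this per request. A sketch in the Django 1.10+ middleware style (the header name is just an example):
import time
class ResponseTimeMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response
    def __call__(self, request):
        start = time.time()
        response = self.get_response(request)
        # expose the elapsed time, e.g. as a response header
        response["X-Response-Time-ms"] = str(int((time.time() - start) * 1000))
        return response
Add the class's dotted path to MIDDLEWARE in settings.py to activate it.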
47,939,696 | 2017-12-22T10:13:00.000 | 0 | 0 | 1 | 0 | python,pip,setuptools,entry-point | 47,959,297 | 2 | true | 0 | 0 | The only thing that fixed this issue was to create a new virtualenv.
Apparently my virtualenv/bin had compiled (.pyc) and non-compiled (.py) references to the old version for some reason - they were probably not upgraded / removed when I installed the new version.
Once I created a new virtualenv and re-installed required packages I was able to resolve this issue. | 1 | 1 | 0 | I am trying to develop a python package that is importable and also has entry_point to call from shell.
When trying to call the entry point I get:
pkg_resources.VersionConflict: (pysec-aws 0.1.dev1 (/Users/myuser/PycharmProjects/pysec-aws), Requirement.parse('pysec-aws==0.1.dev0'))
Essentially what I did before getting this error, is incrementing the version from 0.1.dev0 to 0.1.dev1 in setup.py, and running python setup.py sdist and then pip install -e .
What am I doing wrong? What is the proper way to install development versions of packages you are actively developing and bundling with setuptools? | Unable to load python entry_point when developing python package | 1.2 | 0 | 0 | 294 |
47,940,052 | 2017-12-22T10:36:00.000 | 0 | 0 | 1 | 0 | android,python,api | 47,940,628 | 1 | true | 1 | 0 | The mobile vision API is designed only for Android and iOS. As far as I know, Pycharm does not work well with Java, so I would say that you would have to create an Android/iOS project in order to test it (It would be a lot harder trying to make it work with python than simply installing Android studio and cloning a mock project). | 1 | 0 | 0 | I am exploring the APIs provided by Google. Firstly, I was experimenting with Google Cloud Vision API with Python in PyCharm in order to try to perform Optical Character Recognition with various texts.
So I wrote a basic program in Python in PyCharm which was calling this API, I gave to it as an input an image which included text e.g. the image/photo of an ice-cream bucket and then takes the text written on this bucket as an output.
Now I want to test the barcode scanner of Google Mobile Vision API. So ideally I would like to call the Google Mobile Vision API in a python program in PyCharm which calls this API, give as an input an image/photo of a barcode and take as an output the details saved in this barcode.
My question is if this can be (easily) done with PyCharm or if I should download Android Studio to do this simple task?
In other words, can I call easily a mobile API in an IDE which is not for mobile app development like Android Studio but in an IDE for desktop applications like Pycharm?
It may be a very basic question but I do not know if I missing something important. | Can I use Google Mobile Vision API in PyCharm? | 1.2 | 0 | 0 | 383 |
47,941,250 | 2017-12-22T12:06:00.000 | 0 | 0 | 1 | 0 | javascript,python,image-processing,ocr,image-conversion | 47,942,101 | 1 | false | 0 | 0 | I don't if I am understanding your problem correctly, but I'm assuming each dictionary in your json is giving you the coordinates for a word.
My approach would be to first find the pixel difference for the space between any 2 words, and then use this value to detect the sequence of words.
For example:
img1 = {'coordinates': 'information'}
img2 = {'coordinates': 'information'}
space_value = 10 # for example
if img1['Height'] == img2['Height'] and (img1['Left'] + img1['Width'] + space_value) == img2['Left']:
next_word = True | 1 | 0 | 1 | I am using online library and able to fetch words from an image with their locations.
Now I want to form sentences exactly as they appear in the image.
Any idea how I can do that?
Earlier I used the distance between two words: if they are pretty close, then they are part of the same sentence. But this approach is not working well.
Please help
This is the json I am receiving I have...
"WordText": "Word 1",
"Left": 106,
"Top": 91,
"Height": 9,
"Width": 11
},
{
"WordText": "Word 2",
"Left": 121,
"Top": 90,
"Height": 13,
"Width": 51
}
.
.
.
More Words | Able to fetch text with their locations from an image...How can I form sentence? | 0 | 0 | 0 | 41 |
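Building on the answer above, grouping words into lines by their Top coordinate and then ordering each line by Left is a workable first pass. A sketch over the JSON shown (the 5-pixel tolerance is an assumption):
def words_to_lines(words, tolerance=5):
    # words: list of dicts with WordText, Left, Top, Height, Width
    lines = []
    for w in sorted(words, key=lambda w: (w["Top"], w["Left"])):
        for line in lines:
            # same line if the vertical position is close enough
            if abs(line[0]["Top"] - w["Top"]) <= tolerance:
                line.append(w)
                break
        else:
            lines.append([w])
    return [" ".join(w["WordText"] for w in sorted(line, key=lambda w: w["Left"])) for line in lines]
words = [{"WordText": "Word 1", "Left": 106, "Top": 91, "Height": 9, "Width": 11},
         {"WordText": "Word 2", "Left": 121, "Top": 90, "Height": 13, "Width": 51}]
print(words_to_lines(words))  # ['Word 1 Word 2']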
47,941,694 | 2017-12-22T12:37:00.000 | 4 | 0 | 1 | 0 | python,boolean,operators | 47,941,734 | 2 | false | 0 | 0 | The and operator converts the first operands to boolean using __bool__, and then does a predefined action to the booleans (if first.__bool__() is True, return second, else return first). There is no way to change this behavior. | 1 | 5 | 0 | So I've been messing around with the standard operators in classes to try and see what i can make, but i haven't been able to find how to edit the boolean and operator.
I can edit the bitwise & operator by defining __and__(self), but not the way that and behaves. Does anyone know how I can change the behavior of a and b where a and b are instances of the class I'm making?
Thanks in advance! | Edit boolean and operator | 0.379949 | 0 | 0 | 576 |
47,946,518 | 2017-12-22T19:34:00.000 | 3 | 0 | 0 | 0 | python-3.x,time-series,facebook-prophet | 51,340,436 | 2 | false | 0 | 0 | There is a simple solution in the current version of the library. You can use from the predicted model fc. What you want for the value of yearly can be found with fc['yearly'] without using the functions in the above solution.
Moreover, if you want all the other components like trend, you can use fc['trend']. | 1 | 7 | 1 | Any ideas on how to export the yearly seasonal trend using fbprophet library?
The plot_components() function plots the trend, yearly, and weekly.
I want to obtain the values for yearly only. | Python fbprophet - export values from plot_components() for yearly | 0.291313 | 0 | 0 | 3,166 |
47,947,718 | 2017-12-22T21:43:00.000 | 0 | 0 | 0 | 0 | python,opencv,imshow | 47,976,344 | 1 | false | 0 | 0 | I think the GUI you're talking about is Tkinter, but opencv's cv.imshow is just playing frame.I don't think it's necessary to use GUI to show frame.Because cv2.imshow is good at handling. | 1 | 0 | 1 | I have been trying to use OpenCV to display my camera stream in the same window as another (GUI) window. However, the imshow() method opens its own window. Is there another way to do this so that it displays in the same window as my GUI? | How to display OpenCV camera stream in the same window as another file? | 0 | 0 | 0 | 223 |
47,948,518 | 2017-12-22T23:39:00.000 | 0 | 1 | 1 | 0 | python-3.x,unicode,utf-8 | 57,127,693 | 2 | false | 0 | 0 | Do I need to call decode('utf-8') when reading a text file?
You would need to try reading the file to confirm that it is actually UTF-8 encoded. | 1 | 8 | 0 | Ok, so python3 and unicode. I know that all python3 strings are actually unicode strings and all python3 code is stored as utf-8. But how does python3 read text files? Does it assume that they are encoded in utf-8? Do I need to call decode('utf-8') when reading a text file? What about pandas read_csv() and to_csv()? | Reading UTF-8 Encoded Files and Text Files in Python3 | 0 | 0 | 0 | 11,947
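To make this concrete: in Python 3 the encoding is passed to open() (otherwise the locale's default applies), and pandas accepts the same keyword, so no manual decode('utf-8') is needed. A quick sketch with placeholder file names:
import pandas as pd
with open("data.txt", encoding="utf-8") as f:
    text = f.read()                       # already a str, decoded from UTF-8
df = pd.read_csv("data.csv", encoding="utf-8")
df.to_csv("out.csv", encoding="utf-8", index=False)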
47,952,552 | 2017-12-23T12:36:00.000 | 1 | 0 | 0 | 1 | python,windows,ssh,paramiko | 47,953,486 | 2 | false | 0 | 0 | Normally the (SSH) servers run as a Windows service.
Window services run in a separate Windows session (google for "Session 0 isolation"). They cannot access interactive (user) Windows sessions.
Also note that there can be multiple user sessions (multiple logged in users) in Windows. How would the SSH server know, what user session to display the GUI on (even if it could)?
You can run the SSH server in an interactive Windows session, instead as a service. It has its limitations though.
In general, all this (running GUI application on Windows remotely through SSH) does not look like a good idea to me. | 1 | 1 | 0 | Install bitvise(ssh server services) in windows7, but i use python paramiko remote call program, are executed in the backend. i want it executed in front end. how to solve this problem? | How to configure Bitvise SSH Server,make the process run in the front end | 0.099668 | 0 | 0 | 438 |
47,952,659 | 2017-12-23T12:52:00.000 | 1 | 1 | 0 | 0 | python,telegram-bot | 47,952,757 | 1 | false | 0 | 0 | I don't think this is possible. As far as I know, bots can't communicate with other bots.
But you can add bots to groups | 1 | 0 | 0 | I want to create a telegram bot which would immediately forward message from another bot, that regularly posts some info. I've tried to find some ready templates, but alas... I would be grateful for any useful info. Thanks! | Is it possible to create a telegram bot which forwards messages from another bot to telegram group or channel? | 0.197375 | 0 | 1 | 497 |
47,954,890 | 2017-12-23T18:20:00.000 | 2 | 1 | 1 | 0 | python,algorithm,numpy,combinations,permutation | 47,955,527 | 3 | false | 0 | 0 | There is no "efficient" way to do this. There are 2.8242954e+19 different possible combonations, or 28,242,954,000,000,000,000. If each combination is 16 characters long, storing this all in a raw text file would take up 451,887,264,000 gigabytes, 441,296,156.25 terabytes, 430,953.2775878906 petabytes, or 420.8528101444 exabytes. The largest hard drive available to the average consumer is 16TB (Samsung PM1633a). They cost 12 thousand US dollars. This puts the total cost of storing all of this data to 330,972,117,600 US dollars (3677.46797 times Bill Gates' net worth). Even ignoring the amount of space all of these drives would take up, and ignoring the cost of the hardware you would need to connect them to, and assuming that they could all be running at highest performance all together in a lossless RAID array, this would make the write speed 330,972,118 gigabytes a second. Sounds like a lot, doesn't it? Even with that write speed, the file would take 22 minutes to write, assuming that there were no bottlenecks from CPU power, RAM speed, or the RAID controller itself.
Sources - a calculator. | 1 | 1 | 0 | I want to generate all possible sequences of alternating digits and letters. For example
5j1c6l2d4p9a9h9q
6d5m7w4c8h7z4s0i
3z0v5w1f3r6b2b1z
NumberSmallletterNumberSmallletter
NumberSmallletterNumberSmallletter
NumberSmallletterNumberSmallletter
NumberSmallletterNumberSmallletter
I can do it by using 16 loops, but it will take 30+ hours (rough idea). Is there any efficient way? I hope there will be one in Python. | Generate all possible sequence of altrenate digits and alphabets | 0.132549 | 0 | 0 | 264
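If the sequences only need to be consumed one at a time rather than stored, itertools.product can generate the digit/letter alternation lazily instead of 16 nested loops. A sketch of that pattern (iterating all of them is still astronomically slow, as the answer explains):
import itertools
import string
digits = string.digits            # '0'-'9'
letters = string.ascii_lowercase  # 'a'-'z'
def sequences():
    # 8 digit/letter pairs give a 16-character string with digits and letters alternating
    for combo in itertools.product(*([digits, letters] * 8)):
        yield "".join(combo)
gen = sequences()
print(next(gen))  # '0a0a0a0a0a0a0a0a'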
47,956,027 | 2017-12-23T21:08:00.000 | 1 | 0 | 1 | 0 | ipython,jupyter-notebook,jupyter | 47,959,962 | 2 | false | 0 | 0 | Listing directory may be slow for a couple of reasons:
- Antivirus checking the behavior of the Jupyter and throttling listing of directories. Try disabling it temporarily to check.
- Using an old Python. Newer Python (3.6+) has extended os.listdir() with os.scandir(), which newer versions of the Jupyter notebook make use of.
- A lot of hidden files: Jupyter has to list them but won't show them.
We cannot be as fast as Explorer, as explorer can do a lot of optimisations than Jupyter cannot (like explorer can get notified of files changes and thus use efficient caching), while Jupyter can't. | 1 | 2 | 0 | I'm using Jupyter on Windows 7 to browse my local directories, view various files and open jupyter notebooks. However, changing from one to another directory takes around 3 up to 60 seconds, while opening the same folder using File Explorer is close to instantaneous.
To be specific, the interface showing the current directory and the 'Files', 'Running', 'Clusters' tabs is fast to load. It is the list of files in the current directory that takes a very long time to load.
Is there a way to speed up browsing directories in the Jupyter dashboard? What may be reasons that make the Jupyter dashboard so slow? | Browsing directories in Jupyter dashboard very slow | 0.099668 | 0 | 0 | 1,352 |
47,956,052 | 2017-12-23T21:12:00.000 | 0 | 1 | 0 | 0 | python,cgi | 47,956,075 | 2 | false | 1 | 0 | Short answer is: no, this isn't possible.
Sure, they can download your script and run it themselves if they have a compatible version of Python installed, but you won't be able to run it from the browser (that would be a severe security problem!)
Your options are either to write it in JS, or create an API on your server, or find and use an existing API. | 1 | 0 | 0 | So, my project does a lot of mathematics for the user. It lets them enter equations and then solves them with some fairly complicated items like eigenvalues. I do some of this is javascript, but I have also written a python script utilizing numpy. I would like the user to be able to have the option of having the script on their local machine and then solving the mathematics there instead of on my server.
So, the user would enter an equation and hit enter. The javascript would then call a python script running on the user's local machine. The equation is solved there with my code and the result is returned to the web page.
I thought that this would be possible with CGI, but I cannot seem to find clear documentation on how this would be accomplished. Is there a better way?
I do not want to run third party software and I do not want to run the python code in the browser.
Thanks | Call python script from web page and get results | 0 | 0 | 0 | 399 |
47,956,527 | 2017-12-23T22:28:00.000 | 18 | 0 | 0 | 0 | python,web-scraping,web,bots | 47,956,652 | 1 | true | 0 | 0 | There's a large array of techniques that internet service providers use to detect and combat bots and scrapers. At the core of all of them is to build heuristics and statistical models that can identify non-human-like behavior. Things such as:
Total number of requests from a certain IP per specific time frame, for example, anything more than 50 requests per second, or 500 per minute, or 5000 per day may seem suspicious or even malicious. Counting number of requests per IP per unit of time is a very common, and arguably effective, technique.
Regularity of incoming requests rate, for example, a sustained flow of 10 requests per second may seem like a robot programmed to make a request, wait a little, make the next request, and so on.
HTTP Headers. Browsers send predictable User-Agent headers with each request that helps the server identify their vendor, version, and other information. In combination with other headers, a server might be able to figure out that requests are coming from an unknown or otherwise exploitative source.
A stateful combination of authentication tokens, cookies, encryption keys, and other ephemeral pieces of information that require subsequent requests to be formed and submitted in a special manner. For example, the server may send down a certain key (via cookies, headers, in the response body, etc) and expect that your browser include or otherwise use that key for the subsequent request it makes to the server. If too many requests fail to satisfy that condition, it's a telltale sign they might be coming from a bot.
Mouse and keyboard tracking techniques: if the server knows that a certain API can only be called when the user clicks a certain button, they can write front-end code to ensure that the proper mouse-activity is detected (i.e. the user did actually click on the button) before the API request is made.
And many many more techniques. Imagine you are the person trying to detect and block bot activity. What approaches would you take to ensure that requests are coming from human users? How would you define human behavior as opposed to bot behavior, and what metrics can you use to discern the two?
There's a question of practicality as well: some approaches are more costly and difficult to implement. Then the question will be: to what extent (how reliably) would you need to detect and block bot activity? Are you combatting bots trying to hack into user accounts? Or do you simply need to prevent them (perhaps in a best-effort manner) from scraping some data from otherwise publicly visible web pages? What would you do in case of false-negative and false-positive detections? These questions inform the complexity and ingenuity of the approach you might take to identify and block bot activity. | 1 | 2 | 0 | I am learning python and i am currently scraping reddit. Somehow reddit has figured out that I am a bot (which my software actually is) but how do they know that? And how we trick them into thinking that we are normal users.
I found a practical solution for that, but I am asking for a bit more in-depth theoretical understanding. | How do websites detect bots? | 1.2 | 0 | 1 | 6,861
47,958,697 | 2017-12-24T07:01:00.000 | -1 | 0 | 0 | 0 | python,scikit-learn,one-hot-encoding | 47,958,997 | 1 | true | 0 | 0 | simple use of pickel for OHE will do for me. | 1 | 4 | 1 | Is there anyway of saving OneHotencoder object in python? . Reason is being I used that object in preprocessing of training data and test data and we are building a API containing the same trained model and that will be injected by real data from the website when user created. So first that data needs to be preprocessed and then model can predict o/p for the same.
Thanks | Save OneHot Encoder object python | 1.2 | 0 | 0 | 1,953 |
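Spelling out the pickle suggestion, joblib is the usual choice for persisting fitted scikit-learn objects. A sketch (the file name is a placeholder, and passing string categories directly requires a reasonably recent scikit-learn):
import joblib
from sklearn.preprocessing import OneHotEncoder
enc = OneHotEncoder(handle_unknown="ignore")
enc.fit([["red"], ["green"], ["blue"]])
joblib.dump(enc, "encoder.joblib")           # save the fitted encoder at training time
enc_loaded = joblib.load("encoder.joblib")   # reload it inside the API process
print(enc_loaded.transform([["red"]]).toarray())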
47,962,314 | 2017-12-24T16:34:00.000 | 1 | 1 | 0 | 0 | python,telegram,telegram-bot,python-telegram-bot | 47,962,389 | 1 | true | 0 | 0 | There have no method to get recently actions by bot. And bots won't get notified when users joined channel.
If you want to know whether a user is in your channel, there is the getChatMember method. | 1 | 0 | 0 | My bot is an admin in a channel and I want to read the channel's recent actions (like who joined the channel, etc.) using python-telegram-bot. How can I achieve this? | Read channel recent actions with telegram bot | 1.2 | 0 | 1 | 967
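With python-telegram-bot, the getChatMember call mentioned above looks roughly like this in the older synchronous versions of the library (the token, channel and user id are placeholders):
from telegram import Bot
bot = Bot(token="YOUR_BOT_TOKEN")
member = bot.get_chat_member(chat_id="@my_channel", user_id=123456789)
print(member.status)  # e.g. 'member', 'administrator', 'left', 'kicked'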