Q_Id (int64, 337 to 49.3M) | CreationDate (string, length 23) | Users Score (int64, -42 to 1.15k) | Other (int64, 0 to 1) | Python Basics and Environment (int64, 0 to 1) | System Administration and DevOps (int64, 0 to 1) | Tags (string, length 6 to 105) | A_Id (int64, 518 to 72.5M) | AnswerCount (int64, 1 to 64) | is_accepted (bool, 2 classes) | Web Development (int64, 0 to 1) | GUI and Desktop Applications (int64, 0 to 1) | Answer (string, length 6 to 11.6k) | Available Count (int64, 1 to 31) | Q_Score (int64, 0 to 6.79k) | Data Science and Machine Learning (int64, 0 to 1) | Question (string, length 15 to 29k) | Title (string, length 11 to 150) | Score (float64, -1 to 1.2) | Database and SQL (int64, 0 to 1) | Networking and APIs (int64, 0 to 1) | ViewCount (int64, 8 to 6.81M)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
47,632,891 | 2017-12-04T11:59:00.000 | -1 | 0 | 1 | 0 | python,python-3.x,pip | 47,635,832 | 17 | false | 0 | 0 | On Windows, you have to run the pip install command from the (Python path)/Scripts directory in the cmd prompt, e.g.:
C:/python27/scripts
pip install pandas | 7 | 19 | 0 | I've just installed Python 3.6, which comes with pip.
However, in the Windows command prompt, when I do 'pip install bs4' it returns 'SyntaxError: invalid syntax' under the word install.
Typing 'python' returns the version, which means it is installed correctly. What could be the problem? | pip install returning invalid syntax | -0.011764 | 0 | 0 | 173,878 |
47,632,891 | 2017-12-04T11:59:00.000 | -2 | 0 | 1 | 0 | python,python-3.x,pip | 47,632,918 | 17 | false | 0 | 0 | You need to run pip install in the command prompt, outside of the Python interpreter! Exit Python and retry. :) | 7 | 19 | 0 | I've just installed Python 3.6, which comes with pip.
However, in the Windows command prompt, when I do 'pip install bs4' it returns 'SyntaxError: invalid syntax' under the word install.
Typing 'python' returns the version, which means it is installed correctly. What could be the problem? | pip install returning invalid syntax | -0.023525 | 0 | 0 | 173,878 |
47,632,891 | 2017-12-04T11:59:00.000 | 1 | 0 | 1 | 0 | python,python-3.x,pip | 62,642,631 | 17 | false | 0 | 0 | The problem is the OS can't find pip. Pip helps you install packages.
(Adapted from some great answers to make them better.)
Method 1: Go to the path of Python, then invoke pip from there.
Open cmd.exe and write the following command,
e.g.:
cd C:\Users\Username\AppData\Local\Programs\Python\Python37-32
In this directory, invoke pip with python -m pip, then install the package,
e.g.:
python -m pip install ipywidgets
(-m module-name searches sys.path for the named module and runs the corresponding .py file as a script.)
OR
Method 2: Go to the Scripts folder from cmd; this is where pip lives :)
cd C:\Users\User name\AppData\Local\Programs\Python\Python37-32\Scripts
Then:
pip install anypackage | 7 | 19 | 0 | I've just installed Python 3.6, which comes with pip.
However, in the Windows command prompt, when I do 'pip install bs4' it returns 'SyntaxError: invalid syntax' under the word install.
Typing 'python' returns the version, which means it is installed correctly. What could be the problem? | pip install returning invalid syntax | 0.011764 | 0 | 0 | 173,878 |
47,632,891 | 2017-12-04T11:59:00.000 | 0 | 0 | 1 | 0 | python,python-3.x,pip | 69,875,607 | 17 | false | 0 | 0 | I had this error too!
I found the problem:
it gave the error because I opened cmd, entered python, and then typed pip!!
You shouldn't enter python first!
Just open cmd and write "pip install bs4". | 7 | 19 | 0 | I've just installed Python 3.6, which comes with pip.
However, in the Windows command prompt, when I do 'pip install bs4' it returns 'SyntaxError: invalid syntax' under the word install.
Typing 'python' returns the version, which means it is installed correctly. What could be the problem? | pip install returning invalid syntax | 0 | 0 | 0 | 173,878 |
47,632,891 | 2017-12-04T11:59:00.000 | 2 | 0 | 1 | 0 | python,python-3.x,pip | 51,194,147 | 17 | false | 0 | 0 | Try running cmd as administrator (in the menu that pops up after right-clicking) and/or
entering "pip" alone first, to check that it is recognized. | 7 | 19 | 0 | I've just installed Python 3.6, which comes with pip.
However, in the Windows command prompt, when I do 'pip install bs4' it returns 'SyntaxError: invalid syntax' under the word install.
Typing 'python' returns the version, which means it is installed correctly. What could be the problem? | pip install returning invalid syntax | 0.023525 | 0 | 0 | 173,878 |
47,634,574 | 2017-12-04T13:34:00.000 | 3 | 0 | 1 | 1 | python,os.system | 47,634,691 | 1 | true | 0 | 0 | One example of many is that subprocess.run() can capture the output, while os.system() only captures the return code.
subprocess.run() is simply far more flexible. It can do everything that os.system() can, and much more. If you KNOW that you will never use any of the benefits of subprocess.run(), then by all means use os.system(), but most people would say that it's a bit of a waste of time to learn two different tools for the same thing.
os.system() is pretty much a copy of system() in C. | 1 | 1 | 0 | I read many answers on this topic.
It seems that they either explain it with overly difficult illustrations or just say it's deprecated, pointing to the official documentation.
os.system is handy for a beginner.
Could the reason be explained with an easy example or a metaphor? | The reason 'subprocess.run' is better than 'os.system' for beginner | 1.2 | 0 | 0 | 36 |
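A minimal sketch of the difference described in the answer above (the shell command is illustrative; subprocess.run() needs Python 3.5+):

```python
import os
import subprocess

# os.system() only gives you the exit status; output goes straight to the terminal.
status = os.system('echo hello')

# subprocess.run() returns an object that can also carry the captured output.
result = subprocess.run(['echo', 'hello'], stdout=subprocess.PIPE)
print(result.returncode)       # exit status, like os.system()
print(result.stdout.decode())  # captured output, which os.system() cannot give you
```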
47,644,110 | 2017-12-05T00:17:00.000 | 0 | 0 | 0 | 0 | python,tensorflow | 51,676,877 | 1 | true | 0 | 0 | I think that as of today that is not possible. However, you can call methods of the queue object by getting them using get_operation_by_name. For example get_operation_by_name("queue_name/enqueue"). You can also make your own Queue object which internally simply calls the corresponding graph operations. | 1 | 0 | 1 | In TensorFlow I have a graph that has in it a string_input_producer that is used in it's input pipeline.
I am loading the graph from a checkpoint file, so I do not have access to the original object when it was created. Nonetheless, I need to run the enqueue method of this object.
I have tried getting the FIFOQueue object using get_operation_by_name and get_tensor_by_name, which obviously did not work because the queue is neither an operation nor a tensor. Is there any function like those mentioned that would do what I want (a hypothetical get_queue_by_name)? How can I solve my problem otherwise? | How to get a Queue object from a TensorFlow graph? | 1.2 | 0 | 0 | 79 |
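A minimal sketch of the workaround from the answer, assuming a TF 1.x graph restored from a checkpoint; the checkpoint path and operation name are illustrative (list the real op names with graph.get_operations()):

```python
import tensorflow as tf

saver = tf.train.import_meta_graph('model.ckpt.meta')  # hypothetical checkpoint
graph = tf.get_default_graph()

# The queue itself is neither an op nor a tensor, but its methods exist as ops:
enqueue_op = graph.get_operation_by_name('input_producer/input_producer_EnqueueMany')

with tf.Session() as sess:
    saver.restore(sess, 'model.ckpt')
    sess.run(enqueue_op)  # feed its input tensors via feed_dict if required
```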
47,646,159 | 2017-12-05T04:34:00.000 | 0 | 0 | 0 | 0 | python-3.x,tensorflow | 48,959,163 | 2 | false | 0 | 0 | I would recommend tf.feature_column.categorical_column_with_vocabulary_list.
I would think you would want to treat the data as categories, rather than as scalars. | 1 | 3 | 1 | For TensorFlow, my feature columns contain the boolean values 0 and 1.
Should I be using tf.feature_column.numeric_column
or tf.feature_column.categorical_column_with_vocabulary_list? | Tensorflow Feature Column for 0 and 1 | 0 | 0 | 0 | 1,012 |
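A minimal sketch of the recommendation above; the column name 'flag' is illustrative, and categorical_column_with_vocabulary_list accepts integer vocabularies:

```python
import tensorflow as tf

# Treat the 0/1 values as two categories rather than as a scalar feature.
flag = tf.feature_column.categorical_column_with_vocabulary_list(
    key='flag', vocabulary_list=[0, 1], dtype=tf.int64)

# Wrap it for estimators that need dense input (e.g. DNNClassifier):
flag_one_hot = tf.feature_column.indicator_column(flag)
```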
47,647,252 | 2017-12-05T06:20:00.000 | 0 | 0 | 0 | 0 | python,selenium,server,automated-tests,load-testing | 47,676,817 | 1 | false | 0 | 0 | Have you considered the W3C HTTP access log field "time-taken"? It reports the time for every single request, with at most millisecond precision; on some platforms the precision reported is more granular. For a web server, an application server with an HTTP access layer, or an enterprise service bus with an HTTP access layer (for SOAP and REST calls) to be fully W3C standards compliant, this log value must be available for inclusion in the HTTP access logs.
You will see every single request and the time required for processing, from the first byte received at the server to the last byte sent, minus the final TCP ACK at the end. | 1 | 0 | 0 | I'm trying to test a web application using Selenium with Python. I've written a script to mimic a user. It logs in to the server, generates some reports and so on. It is working fine.
Now, I need to see how much time the server is taking to process a specific request.
Is there a way to find that from the same python code?
Any alternate method is acceptable.
Note:
The server is in the same LAN
Also I don't have privileges to do anything at the server side. So anything I can do is from outside the server.
Any sort of help is appreciated. Thank you. | Request processing time in python | 0 | 0 | 1 | 527 |
47,650,139 | 2017-12-05T09:29:00.000 | 0 | 0 | 0 | 0 | python,mongodb,nosql,mongodb-query,microservices | 47,653,516 | 1 | false | 0 | 0 | FindandModify() along with progress field. So we will call this function with query of progress field to be "0". We will update the value to 1 so other services can not access it. On getting the success from Rest call we can delete the record. On getting the failure we will again call the FindandModify() with update value to be "0" so other service can access at later time.
This first solution is OK, and it is simpler and faster than the second.
On the second option (calling Find() to get one document, storing its "_id" in another collection, and relying on a duplicate "_id" not being inserted again so that the other service calls Find() once more):
This solution (as you presented it) would not work unless the document is removed from the first collection after it is inserted in the second; otherwise the processes will be caught in an infinite loop. The drawback is that this solution implies two collections and the document-moving process.
In any case, you must take into account the failure of the processes in any of the phases, and you must be able to detect failure and recover from it. | 1 | 0 | 0 | I have multiple instances of a service. This service accesses a collection of unprocessed documents. We are using MongoDB. The role of the service is:
Fetch the first unprocessed document from collection A.
Make a Rest call using uuid.
Get the response and store the response in another collection B.
Multiple services may access the same document, leading to duplicates. The steps I can think of to deal with this situation:
FindAndModify() along with a progress field: we call this function with a query for the progress field to be "0" and update the value to 1 so other services cannot access the document. On success from the REST call we delete the record. On failure we call FindAndModify() again with an update value of "0" so another service can access it at a later time.
We call the Find() function, which gives us one document. We get the "_id" of the document and store it in another collection. If another service also gets the same document and that document's "_id" is already present, then it will not be inserted again and that service calls the Find() function again.
What would be the performance and bottlenecks of these approaches? Also, do we have any other, better approach which would enhance the performance? | Prevent multiple service access same document in MongoDB till processing is done | 0 | 0 | 0 | 271 |
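A minimal sketch of the first option with pymongo; collection and field names are illustrative, and call_rest is a hypothetical stand-in for the REST call:

```python
from pymongo import MongoClient

client = MongoClient()
coll_a = client.mydb.unprocessed   # illustrative names
coll_b = client.mydb.responses

doc = coll_a.find_one_and_update(
    {'progress': 0},               # only claim unclaimed documents
    {'$set': {'progress': 1}},     # atomic claim: no other worker can take it
)
if doc is not None:
    try:
        response = call_rest(doc['uuid'])          # hypothetical REST call
        coll_b.insert_one({'uuid': doc['uuid'], 'response': response})
        coll_a.delete_one({'_id': doc['_id']})     # done: remove the claimed doc
    except Exception:
        # release the claim so another worker can retry later
        coll_a.update_one({'_id': doc['_id']}, {'$set': {'progress': 0}})
```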
47,651,326 | 2017-12-05T10:27:00.000 | 0 | 0 | 0 | 0 | python,aws-lambda,zappa | 47,662,032 | 1 | false | 1 | 0 | Zappa does not currently support that model. | 1 | 1 | 0 | when I define a new stage in zappa_settings.json a new api-gateway-backend gets created. What I would need is the same gateway but a different stage, ie. /prod instead of /dev.
Is there any way of accomplishing this in zappa? | zappa deployment: deploy to same api gateway with different stages | 0 | 0 | 0 | 605 |
47,652,443 | 2017-12-05T11:23:00.000 | 0 | 0 | 0 | 0 | jquery,python,json,ajax,django | 47,655,518 | 1 | true | 1 | 0 | Django maintains sessions in a very good way: you can use Django session variables for this. Or you can pass the filter data in the POST along with your data. | 1 | 0 | 0 | When passing JSON to Django views, you're supposed to use POST. Using GET appends the data to the URL, which can throw a 'URI too long' error.
I am trying to implement filter-on-JSON-objects functionality using AJAX. I have two filter options, by text and by dropdown, where users can filter by one, the other, or both.
To do this I pass these two values using GET.
So essentially it's:
JSON : POST
filter1 & filter2 : GET
The main problem is that I can't keep the POST-ed JSON (of course, since it's POST), but it's also unconventional to make a global variable in Python (unlike in Java), so I can't keep it around until GET gets called so that the passed filters can be applied to those objects (not directly from the models).
Thank you so much in advance for giving suggestions! | Django - JQuery AJAX : How to filter "POST"-ed json on "GET"? | 1.2 | 0 | 0 | 95 |
47,652,642 | 2017-12-05T11:34:00.000 | 2 | 0 | 0 | 1 | python,scheduled-tasks | 47,653,341 | 1 | true | 0 | 0 | Windows Task Scheduler has a default working directory of C:\Windows\System32. If you set a relative path to the file you are trying to write, it will likely be written into that directory. If you open a Command Prompt in the directory of your script and run it, the relative path will be that directory. So, you actually have two copies of the pickle file.
If you set an absolute path in your script to the file you want to write to, both methods of running your script will write to the same file. | 1 | 0 | 0 | I currently have a Python script that scrapes some data from the internet, then saves it as a pickle file. When running this from a terminal with python filename.py it saves correctly, since the Date Modified field of the pickle file changes. However, when running with the built-in scheduler it doesn't actually save the pickle file, since the Date Modified isn't changed, despite seeing the Python script being executed (terminal opens up and I see the script running).
I ticked the Run with highest privileges box in the scheduler, and despite that it doesn't save the pickle file. I thought it had to do with it not having write permission, but if it has the highest privileges, surely it can save a file?
At the scheduled time a terminal opens, so I know it is actually being executed (print a message to make sure), but it doesn't show an error about the fact that it couldn't save the file or anything like that. The only reason I know it's not working is the Date Modified field not changing. How can I fix this? | Windows scheduler with Python script that saves a file | 1.2 | 0 | 0 | 730 |
47,655,813 | 2017-12-05T14:20:00.000 | 3 | 0 | 0 | 0 | python-3.x,imputation | 48,248,566 | 3 | false | 0 | 0 | So as per documentation SMOTE doesn't support Categorical data in Python yet, and provides continuous outputs.
You can instead employ a workaround where you convert the categorical variables to integers and use SMOTE.
Then use np.round(X_train[categorical_variables]) to convert them back to the respective categorical values. | 1 | 16 | 1 | I would like to apply SMOTE to an unbalanced dataset which contains binary, categorical and continuous data. Is there a way to apply SMOTE to binary and categorical data? | Oversampling: SMOTE for binary and categorical data in Python | 0.197375 | 0 | 0 | 22,277 |
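A minimal sketch of the workaround, assuming imbalanced-learn; the method was named fit_sample in versions of that era (fit_resample in newer releases), and the data and column indices are illustrative:

```python
import numpy as np
from imblearn.over_sampling import SMOTE

# Illustrative data: column 0 is a 0/1 categorical encoded as an integer.
X = np.array([[0, 1.2], [1, 0.7], [0, 3.1], [0, 2.2],
              [1, 0.9], [0, 1.8], [1, 1.1], [0, 2.9]])
y = np.array([0, 1, 0, 0, 1, 0, 1, 0])

sm = SMOTE(random_state=0, k_neighbors=2)
X_res, y_res = sm.fit_sample(X, y)      # fit_resample in newer imbalanced-learn

cat_idx = [0]                            # indices of the categorical columns
X_res[:, cat_idx] = np.round(X_res[:, cat_idx])  # snap back to valid categories
```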
47,656,709 | 2017-12-05T15:03:00.000 | 0 | 1 | 0 | 0 | python,email,organizer,exchangelib | 47,670,437 | 1 | true | 0 | 0 | The meeting organizer is available on CalendarItem objects as item.organizer. | 1 | 1 | 0 | I'm currently using the exchangelib library in python. I would like to compare the mail of the account connected to exchangelib and the mail of a meeting organizer. I can have the account mail by typing "account.primary_smtp_address" but I don't know how I could get the meeting organizer mail.
For now I can only get the organizer's name by typing "item.subject", where "item" is my meeting.
Conversely, is it possible to get the account's name (the complete name, "Michael JORDAN" for example), which I could compare with the meeting organizer's name?
Thank you! | Exchangelib - Get meeting organizer mail | 1.2 | 0 | 0 | 134 |
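A minimal sketch of the comparison, assuming item is a CalendarItem fetched from account.calendar; organizer is a Mailbox, and a case-insensitive compare is safest:

```python
organizer = item.organizer                      # a Mailbox object, or None
if organizer is not None:
    is_own_meeting = (organizer.email_address.lower()
                      == account.primary_smtp_address.lower())
    print(organizer.name, organizer.email_address, is_own_meeting)
```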
47,657,995 | 2017-12-05T16:11:00.000 | 0 | 0 | 1 | 0 | python,gtk3 | 47,842,208 | 1 | false | 0 | 1 | Gtk3 already has Gtk.TextView and Gtk.TextBuffer which has the requirements you mention (though might be missing some of the more sophisticated ones of a real Office suite). It can also insert images, and do many other tricks. Of course, you have to provide the commands to do steer the widget into executing each of them.
Another possibility is using a web-based editor, and including WebKit in your project. | 1 | 0 | 0 | I want to create a text editor that has coloring, bold, italic, and basically many Office-style features for the users of my program, but creating this will take too much of my time. So is there a module, package or code for this?
Thanks. | Python GTK3 Basic Text Editor Like MS Office, LibreOffice | 0 | 0 | 0 | 87 |
47,660,523 | 2017-12-05T18:38:00.000 | 3 | 0 | 0 | 0 | python,selenium | 47,660,624 | 3 | false | 0 | 0 | Try converting class name to a CSS selector.
With a CSS selector, a class named x-form-text x-form-field x-form-num-field
turns into .x-form-text.x-form-field.x-form-num-field
So basically just replace spaces with dots and you're good to go. | 1 | 6 | 0 | I'm writing a Python program that uses Selenium to navigate to and enter information into search boxes on an advanced search page. This website uses Javascript, and the IDs and Names for each search box change slightly each time the website is loaded, but the Class Names remain consistent. Class names are frequently reused though, so my goal is to use find_elements_by_class_name(classname) and then index through that list.
One box, for example, has the class name x-form-text x-form-field x-form-num-field x-form-empty-field, but I can't use this because selenium considers it a compound class name and throws an error. If I use just a portion of it, such as x-form-text, it can't find the element. My hope is to either find a way to allow the spaces or, if that can't be done, find a way to search for all elements whose class name contains a section of text without spaces, such as x-form-text.
Any help or thoughts would be greatly appreciated!
Edit:
I tried this code:
quantminclass = 'x-form-text.x-form-field.x-form-num-field.x-form-empty-field'
quantmin = '25'
browser.find_elements_by_css_selector(quantminclass)[0].send_keys(quantmin)
But got an error that the list index was out of range, implying that it can't find anything. I inspected the element and that is definitely its class name, so I'm not sure how to proceed. | Selenium Python - Finding Elements by Class Name With Spaces | 0.197375 | 0 | 1 | 19,954 |
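A minimal sketch of the converted selector; note that in the asker's attempt the leading dot is missing, so the first token is read as a tag name rather than a class, which would explain the empty result list (browser is assumed to be the webdriver instance):

```python
# Each class becomes ".classname"; spaces between classes become dots.
selector = '.x-form-text.x-form-field.x-form-num-field.x-form-empty-field'
boxes = browser.find_elements_by_css_selector(selector)
if boxes:                      # guard against an empty match list
    boxes[0].send_keys('25')
```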
47,662,143 | 2017-12-05T20:29:00.000 | 24 | 0 | 0 | 0 | python,tensorflow | 47,662,958 | 2 | true | 0 | 0 | The difference involves computational speed. If a large tensor has many, many zeroes, it's faster to perform computation by iterating through the non-zero elements. Therefore, you should store the data in a SparseTensor and use the special operations for SparseTensors.
The relationship is similar for matrices and sparse matrices. Sparse matrices are common in dynamic systems, and mathematicians have developed many special methods for operating on them. | 1 | 22 | 1 | I am having trouble understanding the meaning and usage of TensorFlow Tensors and Sparse Tensors.
According to the documentation
Tensor
Tensor is a typed multi-dimensional array. For example, you can represent a mini-batch of images as a 4-D array of floating point numbers with dimensions [batch, height, width, channels].
Sparse Tensor
TensorFlow represents a sparse tensor as three separate dense tensors: indices, values, and shape. In Python, the three tensors are collected into a SparseTensor class for ease of use. If you have separate indices, values, and shape tensors, wrap them in a SparseTensor object before passing to the ops below.
My understanding is that Tensors are used for operations, input and output, and that a Sparse Tensor is just another representation of a (dense?) Tensor. I hope someone can further explain the differences and the use cases for them. | What is the difference between tensors and sparse tensors? | 1.2 | 0 | 0 | 13,330 |
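A minimal sketch of the three-tensor representation quoted above, using the TF 1.x API that was current for this question:

```python
import tensorflow as tf

# Dense equivalent: [[1, 0, 0],
#                    [0, 0, 2]]
sp = tf.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[2, 3])

dense = tf.sparse_tensor_to_dense(sp)   # convert back to an ordinary Tensor
with tf.Session() as sess:
    print(sess.run(dense))
```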
47,665,047 | 2017-12-06T00:45:00.000 | 0 | 0 | 1 | 0 | python,configuration,visual-studio-code | 47,665,163 | 1 | false | 0 | 0 | Open launch.json (gear icon when on debugger tab). Find the section for the python debugger you are using. Change "stopOnEntry" from true to false. Save. | 1 | 0 | 0 | When I open a python file without a workspace and start debugging it, the debugger breaks on the first line. I would like to change this behaviour, but I don't know how to do it for files that are opened without a workspace. | How to disable stopOnEntry for files that are opened without a workspace? | 0 | 0 | 0 | 32 |
47,665,910 | 2017-12-06T02:32:00.000 | -1 | 0 | 0 | 0 | python-2.7,machine-learning,scikit-learn,classification,perceptron | 47,674,949 | 1 | true | 0 | 0 | I'm sure someone will correct me if I'm wrong but I do not believe Averaged Perceptron is implemented in sklearn. If I recall correctly, Perceptron in sklearn is simply SGD with certain default parameters.
With that said, have you tried good old logistic regression? While it may not be the sexiest algorithm around, it often does provide good results and can serve as a baseline to see if you need to explore more complicated methods. | 1 | 2 | 1 | I am a new learner to machine learning and I want to do a 2-class classification with only a few attributes. I have learned by researching online that two-class averaged perceptron algorithm is good for two-class classification with a linear model.
However, I have been reading through the documentation of scikit-learn, and I am a bit confused about whether scikit-learn provides an averaged perceptron algorithm.
I wonder if the sklearn.linear_model.Perceptron class can be implemented as the two-class averaged perceptron algorithm by setting up the parameters correctly.
I very much appreciate your kind help. | scikit learn averaged perceptron classifier | 1.2 | 0 | 0 | 635 |
47,669,663 | 2017-12-06T08:15:00.000 | 3 | 0 | 1 | 0 | python,console,spyder | 47,675,703 | 1 | false | 0 | 0 | (Spyder maintainer here) Sorry, there is no way to not print cells in the IPython console in any version of Spyder 3.
However, we are working to make this possibility available in Spyder 4, to be released in 2019. | 1 | 6 | 0 | When I use "run current cell" or "run selection", Spyder prints ALL the code it will run to the console and then runs it.
Any way to suppress this? I don't want to flood the console with all this non-useful information. I know what code I ran, I don't need it repeated (I just need it to run :)). | Spyder python: Skip printing all code when using "run current cell" | 0.53705 | 0 | 0 | 2,031 |
47,671,023 | 2017-12-06T09:37:00.000 | -1 | 0 | 1 | 0 | python,project,executable,pyinstaller | 53,550,927 | 3 | false | 0 | 0 | 1) Open the command prompt
2) Install pyinstaller
pip install pyinstaller
3) Test your Python script first, just to check that it works as a normal
.py file (saurabh is the name of the Python file):
python saurabh.py
4) Convert the Python script into an executable file:
pyinstaller --onefile saurabh.py
Notice we have passed "--onefile" as an argument, which tells PyInstaller to create only one file.
5) Go to the project directory and navigate to the dist folder.
Prerequisite: install PyQt5 to avoid errors:
pip install PyQt5 | 2 | 3 | 0 | I have a Python project that I want to convert into an executable. I have installed Pyinstaller. I only know how to convert one single script into .exe, but I have multiple packages with multiple python scripts. | How to convert Python project into executable | -0.066568 | 0 | 0 | 3,628 |
47,671,023 | 2017-12-06T09:37:00.000 | -1 | 0 | 1 | 0 | python,project,executable,pyinstaller | 47,671,205 | 3 | false | 0 | 0 | Converting the main script into .exe should solve the problem, use -onefile to convert it to one exe and rest of the .py files should be included. | 2 | 3 | 0 | I have a Python project that I want to convert into an executable. I have installed Pyinstaller. I only know how to convert one single script into .exe, but I have multiple packages with multiple python scripts. | How to convert Python project into executable | -0.066568 | 0 | 0 | 3,628 |
47,671,602 | 2017-12-06T10:06:00.000 | 1 | 0 | 0 | 0 | python,django,bash,manage.py,virtual-environment | 47,673,367 | 3 | false | 1 | 0 | You can't "run manage.py runserver from outside of virtual environment" if your project relies on a virtual environment... But nothing prevents you from writing a wrapper bash script that cd into your project's root, activate the virtualenv and launch the dev server.
This being said, I really don't see the point - and I even see a couple of reasons not to do so, the first one being that you definitely want to keep the terminal where you run your dev server from open so you can read all the logs in real time. | 1 | 3 | 0 | I have created a Django project inside a virtual environment.
Let's say the virtual environment is DJangoProject and the project name is Mysite.
I am tired of running ./manage.py runserver every time, so I wanted to automate running the server when I log into Ubuntu. I tried many ways but was always failing to access the path of the virtual environment. So help me run manage.py runserver from outside of the virtual environment using bash or shell scripting.
Any help is appreciated! | Run DJango server from outside of virtual environment by using script | 0.066568 | 0 | 0 | 1,012 |
47,673,851 | 2017-12-06T12:02:00.000 | 2 | 0 | 1 | 0 | python,spyder | 47,675,820 | 2 | false | 0 | 0 | (Spyder maintainer here) This is not a bug, it's the way Python works. However, the normal behavior can be improved by making our IPython consoles to load the %autoreload IPython magic. With that change, after saving a file you'll immediately get the code on it reloaded in your console.
We'll do this in our 3.2.5 version (to be released in December 2017). | 1 | 2 | 0 | Using Spyder 3.2.4 with Python.
When I call a function (in another file), let's call it my_func(), from my main script, it uses the "old version" of that function. Clarification:
If I change my_func and save that file, the new version of the function is used if called from my main script, BUT only if I run the entire main script.
If I just run the lines with my_func (using run cell or run selected lines), the OLD version of my_func (before the changes were made) is used.
I thought my_func had to be "reimported" (from myOtherFile import my_func), so I tried to run those lines as well, but the old version of the function (without the changes) is still used.
Did I misunderstand something or is this a really, really serious bug? If I close down Spyder and restart it, it works (it uses the new version of the function), but that's an unacceptable solution. | Python spyder: changing a function in another file takes no effect | 0.197375 | 0 | 0 | 914 |
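A minimal sketch of the workaround the maintainer describes, usable today in any IPython console; the module and function names are the asker's:

```python
# In the IPython console, load the autoreload extension once:
#   %load_ext autoreload
#   %autoreload 2
# After that, saved changes to myOtherFile.py are picked up automatically:
from myOtherFile import my_func
my_func()  # runs the current on-disk version after every save
```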
47,676,721 | 2017-12-06T14:31:00.000 | 4 | 0 | 1 | 0 | python,python-3.x,package,pypi | 47,724,121 | 2 | false | 0 | 0 | Does not make to much sense to register in the community index a package that is not public for the community.
To reduce the chances of future conflicts, I would prefix the package name with something related to your company (i.e., the name or an alias). For instance: mycompany-eventualconflictingname.
Eventually, if you want to make the package public, you will need to update your internal clients' requirements. But that seems less worrying than having a name conflict. | 1 | 15 | 0 | I read somewhere that if you make an internal Python package for proprietary work, you should still register the name on PyPi to avoid potential future dependency issues.
How do I do this without posting my code publicly? This package contains code to be used internally at my work. Should I make an empty python package using the name I want to reserve and upload that to PyPi? And then install my package at work using git instead of PyPi?
Uploading an empty package seems like a silly thing to do that would just annoy other people. But I can't find a way to just register the name. | Register an internal package on Pypi | 0.379949 | 0 | 0 | 3,903 |
47,679,966 | 2017-12-06T17:17:00.000 | 0 | 0 | 0 | 0 | python,genetic-algorithm,traveling-salesman,mutation,crossover | 48,290,905 | 1 | false | 0 | 0 | A problem in GA's is narrowing your search space too quickly and reaching a local maxima solution. You need to ensure that you are not leading your solution in any way other than in the selection/fitness function. So when you say,
"why would you take a good solution and then perform a function that will most likely make it a worse solution":
the reason is that you WANT a chance for the solution to take a step back; it will likely need to get worse before it gets better. So really, you should remove any judgement logic from your genetic operators and leave this to the selection process.
Also, crossover and mutation should be seen as 2 different ways of generating a child individual, you should use one or the other. In practice, this means you have a chance of performing either a mutation of a single parent or a crossover between 2 parents. Usually the mutation chance is only 5% with crossover being used to generate the other 95%.
A crossover keeps the genetic information from both parents (children are mirror images), so one child will be worse than the parents and the other better (or both the same). So in this sense with crossover, if there is a change, you will always get a better individual.
Mutation on the other hand offers no guarantee of a better individual, yet it exists for the purpose of introducing new data, helping to move the GA from the local maxima scenario. If the mutation fails to improve the individual and makes it worse, then it has less chance of being selected for parenting a child anyway (i.e. you don't need this logic in the mutation operator itself).
you select the best one to mutate
This is not strictly true: good individuals should have a higher CHANCE of being selected. Here there is a subtle difference, in that BAD individuals may also be selected as parents. Again, this helps reduce the chance of reaching a local-maximum solution. This also means that the best individual of a generation could (and often does) actually get worse. To solve this we usually implement 'elitism', whereby the best individual is always copied to the next generation (as it is, undergoing no operations).
It would also be beneficial if I could comment on which genetic operators you are using. I have found cycle crossover and inversion mutation to work well in my experience. | 1 | 0 | 1 | This is my genetic algorithm, step by step:
Generate two initial populations randomly, and select the fittest tour from both.
Perform an ordered crossover, which selects a random portion of the first fit tour and fills in the rest from the second, in order.
Mutates this tour by randomly swapping two cities if the tour is only 1.3 times as good as the top 10% tour in the initial population (which I have literally just done by induction, singling out poor tours that are produced) - I would love to change this but can't think of a better way.
The mutation is selected from a population of several mutations.
Returns the tour produced.
The mutation, however, is almost ALWAYS worse than, if not the same as, the crossover.
I'd greatly appreciate some help. Thanks! | How can I improve this genetic algorithm for the TSP? | 0 | 0 | 0 | 469 |
47,682,666 | 2017-12-06T20:09:00.000 | 0 | 0 | 0 | 1 | python,bash,shell | 47,682,968 | 2 | true | 0 | 0 | Best practice for Shell would be shebang at the begging like @Veda suggested.
Execute the shell script using bash, like bash shell.sh; as the link suggests, use relative locations rather than absolute ones. | 1 | 0 | 0 | To start off, I am a complete noob to Debian, Python, and shell scripts, so please speak to me like I am a toddler.
I have a python script I am running through a virtualenv, and I want to execute it via a shell script. Here is what I'm typing in to the terminal to do so:
source .profile
workon cv
cd Desktop/Camera
python main.py
I tried turning this into a shell script, but I receive the error -- source: not found
I've found no answer to my problem, at least not in any terms I can understand. Any advice would be appreciated. Furthermore, before you answer, I also have no idea why it is I need to execute source .profile, I'm simply following a beginner guide for the project which can be found here: https://www.hackster.io/hackerhouse/smart-security-camera-90d7bd
Thanks in advance, and sorry if this is a dumb question. | Unable to source .profile in shell script | 1.2 | 0 | 0 | 977 |
47,684,387 | 2017-12-06T22:18:00.000 | 21 | 0 | 0 | 0 | python,django | 47,684,488 | 1 | true | 1 | 0 | Django uses INSTALLED_APPS as a list of all of the places to look for models, management commands, tests, and other utilities.
If you made two apps (say myapp and myuninstalledapp), but only one was listed in INSTALLED_APPS, you'd notice the following behavior:
The models contained in myuninstalledapp/models.py would never trigger migration changes (or generate initial migrations). You wouldn't be able to interact with them on the database level either because their tables will have never been created.
Static files listed within myapp/static/ would be discovered as part of collectstatic or the test server's staticfiles serving, but myuninstalledapp/static files wouldn't be.
Tests within myapp/tests.py would run but myuninstalledapp/tests.py wouldn't.
Management commands listed in myuninstalledapp/management/commands/ wouldn't be discovered.
So really, you're welcome to have folders within your Django project that aren't installed apps (you can even create them with python manage.py startapp) but just know that certain auto-discovery Django utilities won't work for that application. | 1 | 15 | 0 | Most documentation simply tells you to add the name of each of your apps to the INSTALLED_APPS array in your Django project's settings. What is the benefit/purpose of this? What different functionality will I get if I create 2 apps, but only include the name of one in my INSTALLED_APPS array? | What is the purpose of adding to INSTALLED_APPS in Django? | 1.2 | 0 | 0 | 4,206 |
47,685,816 | 2017-12-07T00:42:00.000 | 0 | 0 | 0 | 0 | python,classification,imagenet | 47,710,171 | 2 | false | 0 | 0 | There are several ways of applying Transfer Learning and it's trial & error what works best. However, ImageNet includes multiple types of cats and dogs in its 1000 classes, which is why I would do the following:
Add a single Dense layer to your model with 2 outputs
Set only the last layer to trainable
Retrain the network using only images of cats and dogs
This will get solid results rather quickly because you're only training one layer. Meaning, you don't have to backpropagate through the entire network. In addition, your model only has to learn a rather linear mapping from the original cats and dogs subclasses to this binary output. | 1 | 0 | 1 | How do I finetune ResNet50 Keras to only classify images in 2 classes (cats vs. dogs) instead of all 1000 imagenet classes? I used Python and was able to classify random images with ResNet50 and keras with the 1000 ImageNet classes. Now I want to fine tune my code so that it only classifies cats vs. dogs, using the Kaggle images instead of ImageNet. How do I go about doing that? | How do I finetune ResNet50 Keras to only classify images in 2 classes (cats vs. dogs) instead of all 1000 imagenet classes? | 0 | 0 | 0 | 544 |
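A minimal sketch of the head-replacement approach from the answer above, using the Keras API of that era; the optimizer and other hyperparameters are illustrative:

```python
from keras.applications.resnet50 import ResNet50
from keras.layers import Dense
from keras.models import Model

base = ResNet50(weights='imagenet', include_top=False, pooling='avg')
for layer in base.layers:
    layer.trainable = False                 # train only the new head

out = Dense(2, activation='softmax')(base.output)   # cats vs. dogs
model = Model(inputs=base.input, outputs=out)
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
# model.fit(...) on the Kaggle cats-vs-dogs images
```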
47,687,046 | 2017-12-07T03:18:00.000 | 0 | 0 | 0 | 0 | python,ajax,django,server | 47,687,089 | 1 | false | 1 | 0 | Reading from csv is not good solution,message quene is suitable solution for this context。
The Python program can post a data message to Django when it adds lines
to the CSV.
The Django web app can send the data message to the browser when getting the
message by SSE or socket.io lib | 1 | 0 | 0 | I have a python program appending lines to a .csv file every few seconds. I also have a web app that was created using Django. My goal is to continuously read the data from the python program as it runs and display it on the website without having to refresh.
My proposed solution is to send the python output to a server using requests.put(), and then read from the server using AJAX.
1.) Is this the best solution, or is there a better manner to connect the program and the site.
2.) If this is a good solution, what's the easiest way to get a server running to POST, PUT, and GET from? It can be local and will never expect heavy traffic.
Thank you! | Sending data stream from python program to web app | 0 | 0 | 1 | 173 |
47,688,813 | 2017-12-07T06:20:00.000 | 0 | 0 | 1 | 0 | python-3.x,opencv,image-processing,tensorflow,python-tesseract | 47,689,124 | 2 | false | 0 | 0 | Tesseract has script detection within "OSD", but not language Detection , you cannot detect language automatically you have to specify language. | 1 | 3 | 0 | I am building a software using python in which the image is uploaded.The software will extract the text using tesseract ocr.
But I want my software to detect the languages in the images automatically and extract the detected text.
Please suggest some ways to do that. I am ready to do machine learning as well, but I can't determine a perfect pipeline for the process.
Thanks In-advance. | Automatic Language detection from Images for OCR character Extraction | 0 | 0 | 0 | 7,119 |
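A sketch of what Tesseract's OSD can and cannot do through pytesseract, assuming a recent pytesseract and a Tesseract install with OSD data; script detection works, but the recognition language still has to be specified, and the file name is illustrative:

```python
import pytesseract
from PIL import Image

img = Image.open('page.png')               # illustrative file name

# Orientation and script detection: reports e.g. "Latin" or "Han", not the language.
print(pytesseract.image_to_osd(img))

# Recognition itself needs the language(s) specified up front:
text = pytesseract.image_to_string(img, lang='eng+deu')
```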
47,692,135 | 2017-12-07T09:50:00.000 | 0 | 0 | 1 | 0 | python,unit-testing,automated-tests,enumerate | 47,692,312 | 1 | false | 0 | 0 | You could use the keyword "assert" to forbid a certain behaviour, or to check that you got the expected value. Actually, your question is not that clear. Another idea would be a try/except statement. | 1 | 0 | 0 | Is there a way to automatically (script) test if someone implemented a function with something that they were not allowed to use or if they used something stupid...
Example A -- check values of a with range(len()):
Bad: idx in range(len(a)) followed by a[idx]
Ok: i, v in enumerate(a)
Example B -- use int() when you are not allowed:
Bad: int('1')
Ok: a = ord('1')-ord('0') | python 3 code quality testing - improper function usage detection | 0 | 0 | 0 | 34 |
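Beyond runtime asserts, a static check over the submission's source is also possible; a minimal sketch using the standard ast module, with an illustrative banned-name set:

```python
import ast

BANNED = {'int', 'range'}   # illustrative: calls the students may not use

def banned_calls(source):
    """Return the names of banned functions called anywhere in the source."""
    tree = ast.parse(source)
    return [node.func.id for node in ast.walk(tree)
            if isinstance(node, ast.Call)
            and isinstance(node.func, ast.Name)
            and node.func.id in BANNED]

print(banned_calls("a = int('1')"))   # -> ['int']
```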
47,692,738 | 2017-12-07T10:20:00.000 | 1 | 0 | 0 | 0 | python,machine-learning,deep-learning,xgboost | 47,720,583 | 1 | false | 0 | 0 | A possible explanation: you have all non-missing values going to the 'text < 4' branch, and all missing values to the other - 'text > 4' - branch. Can you verify? | 1 | 1 | 1 | My xgboost model trained for a regression task in python using the xgboost package version 0.6 is using strange values for splits. Some values used as a splitting criteria are not present in the training dataset at all.
Example:
- there's a variable 'text' with values in the train set of [Missing,1,2]
- yet, a derived splitting criteria of a node in the trained model is 'text < 4'
What could be a possible reason for such a split when no such value (-> 4) can be found in the data set? The split does not increase the information gain, since all samples follow one branch after this decision node. | XGBoost unreasonable splitting value in nodes | 0.197375 | 0 | 0 | 579 |
47,700,585 | 2017-12-07T17:29:00.000 | 0 | 0 | 0 | 0 | wxpython | 49,250,297 | 1 | false | 0 | 1 | Try like this
self.toolbar.SetToolDisabledBitmap(self.recoveryBtn.Id,wx.Bitmap(RECOVERY_BTN_BMP)) | 1 | 0 | 0 | I have a list of bitmap buttons in a vertical sizer on the right.
I want to move the button and resize it when I click on it.
How can I proceed? | Resizing a BitmapButton image in wxpython | 0 | 0 | 0 | 174 |
47,702,450 | 2017-12-07T19:31:00.000 | -2 | 0 | 0 | 0 | python,sqlite | 47,702,482 | 1 | true | 0 | 0 | You could read all the tables into DataFrames with Pandas, though I'm surprised it's slow. sqlite has always been really fast for me. | 1 | 0 | 0 | I have an existing sqlite db file, on which I need to make some extensive calculations. Doing the calculations from the file is painfully slow, and as the file is not large (~10 MB), so there should be no problem to load it into memory.
Is there a way in Python to load the existing file into memory in order to speed up the calculations? | Load existing db file to memory Python sqlite? | 1.2 | 1 | 0 | 231 |
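A minimal sketch of copying the on-disk database into memory with the standard library (on Python 3.7+, disk.backup(mem) does the same in one call); the file name is illustrative:

```python
import sqlite3

disk = sqlite3.connect('data.db')          # illustrative file name
mem = sqlite3.connect(':memory:')

# iterdump() yields the SQL statements that recreate the whole database.
mem.executescript('\n'.join(disk.iterdump()))
disk.close()

# All further queries now run against the in-memory copy.
cur = mem.execute('SELECT count(*) FROM sqlite_master')
```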
47,703,965 | 2017-12-07T21:18:00.000 | 1 | 0 | 0 | 1 | python,cx-freeze | 47,725,348 | 1 | true | 0 | 1 | It is simply because that is how it started. Even now it is called the "Win32" subsystem, even though it is 64-bit! | 1 | 0 | 0 | The command line of an application built with cx_Freeze can be hidden with Win32GUI.
Why does cx_Freeze choose to reference Win32GUI rather than Win64GUI?
This suggests that cx_Freeze has either not been updated to include this feature, or is calling a 32-bit command, and yet almost all Windows computers are now 64-bit.
Is there a reason why the implementers chose to call it this?
I have researched for a long time but have not found any answer, any thoughts would be appreciated. Thank you. | Why does cx_Freeze use Win32GUI rather than Win64GUI? | 1.2 | 0 | 0 | 1,399 |
47,704,158 | 2017-12-07T21:33:00.000 | 4 | 0 | 0 | 0 | python,scipy,statistics | 47,704,296 | 1 | true | 0 | 0 | I cannot speak for the scipy stats.describe people, but the general answer is this: mean, variance, and even kurtosis can be computed in one or two O(n) passes through the data, while median requires an O(n*log(n)) sort. | 1 | 2 | 1 | It might be a naive question but I couldn't find a reasonable answer. Why the median is not included in the return of stats.describe ? Even kurtosis is included but why not median ?
Thanks | Why does scipy stats describe does not have median? | 1.2 | 0 | 0 | 573 |
47,704,301 | 2017-12-07T21:42:00.000 | 0 | 0 | 0 | 1 | python,windows,python-3.x | 47,705,260 | 1 | true | 0 | 0 | Well thanks for the tip about adding the shebang line. I now know what the issue was. I had python2.7 installed also and the scripts were trying to start with this instead of 3.6. The guest account had the files associatted correctly, that why it worked. All good now. Thanks guys. | 1 | 0 | 0 | Wondering if anyone can shed light on this issue I am having.
I have two python scripts which I want to run when anyone logs into the computer. I have added the scripts to the Startup folder for all users. When I log in the scripts should run with pythonw.exe, both scripts have the pyw extension.
When I log in to my own account, it will only start one script. When I log in to a guest account both scripts start fine.
I have checked AV and the script is not being blocked. Both files access a txt file on the C: drive. I have ensured all users have permissions to access the files.
I can run the second script manually and it works as expected but cannot figure out why it will not run at startup on the admin account. | Running Multiple Scripts from Startup Folder on Windows 7 | 1.2 | 0 | 0 | 58 |
47,707,185 | 2017-12-08T03:17:00.000 | 0 | 0 | 0 | 0 | python-3.x,amazon-web-services,amazon-s3,aws-lambda,aws-api-gateway | 47,707,562 | 1 | true | 1 | 0 | Here is how we did it:
Check and Redirect:
API Gateway --> Lambda (return 302)
Deliver Content:
CloudFront --> S3
Check for S3 existence with Lambda, returning a 302 to CloudFront. You can also return a signed URL from Lambda, with a limited validity time, to access the URL from CloudFront.
Hope it helps. | 1 | 1 | 0 | The setup is using S3 as a storage, API gateway for the rest endpoint and Lambda (Python) for get/fetch of file in S3.
I'm using Boto3 in the Lambda function (Python) to check if the file exists in S3, and I was able to download it, but it is stored on the Lambda machine ("/tmp"). The API Gateway can trigger the Lambda function already. Is there a way that, once the Lambda function is triggered, the download will happen in the browser?
Thanks! | Retrieve a file from browser using API gateway, AWS LAMBDA and S3 | 1.2 | 0 | 1 | 802 |
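A minimal sketch of the answer's pattern inside a Lambda handler behind API Gateway (proxy integration assumed; the bucket name and path-parameter key are illustrative):

```python
import boto3

s3 = boto3.client('s3')

def handler(event, context):
    key = event['pathParameters']['key']        # illustrative: key from the URL
    url = s3.generate_presigned_url(
        'get_object',
        Params={'Bucket': 'my-bucket', 'Key': key},
        ExpiresIn=300,                           # signed URL valid for 5 minutes
    )
    # A 302 makes the *browser* fetch the file directly from S3/CloudFront.
    return {'statusCode': 302, 'headers': {'Location': url}}
```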
47,707,489 | 2017-12-08T03:59:00.000 | 0 | 0 | 1 | 0 | jupyter-notebook,vpython | 47,760,954 | 2 | false | 0 | 0 | Try running some of the demo jupyter vpython notebooks in the Demo directory on the github repository for "jupyter vpython" . See if your problem still persists when running these demo vpython notebooks. | 1 | 2 | 0 | I am trying to run a simple ball moving demo program using vpython 7 on jupyter notebook, however, the ball didn't move smoothly, but like jumping from one position to the next position, there are big delays between frames. I ran the same code in glowscript.org, it demonstrated very smooth motions. After I start the cell, the statue of the jupyter notebook kernel quickly jumps between busy and idle, even if the loop ends, it still jumping. If I interrupt that kernel, the notebook prompt kernel's have died and I have to restart the kernel. I believe this reason results in the delays of the 3D render of the vpython.
I am using Ubuntu 14.04, python 2.7, vpython 7.2.0, jupyter notebook 4.2.3 and chrome. | vpython 7 with jupyter notebook kernel jump between busy and idle | 0 | 0 | 0 | 209 |
47,707,608 | 2017-12-08T04:13:00.000 | 0 | 0 | 0 | 0 | python,node.js,postgresql,orm | 47,708,755 | 2 | false | 1 | 0 | It is possible, but it may cause conflicts with table names, constraint names, sequence names and other names which are depend on ORM naming strategy. | 2 | 0 | 0 | If I have two backends, one NodeJS and one Python both of them are accessing the same database. Is it possible to use an ORM for both or is that really bad practice? It seems like that would lead to a maintenance nightmare. | Node ORM and Python ORM for same DB? | 0 | 1 | 0 | 184 |
47,707,608 | 2017-12-08T04:13:00.000 | 0 | 0 | 0 | 0 | python,node.js,postgresql,orm | 47,708,025 | 2 | false | 1 | 0 | so long as both ORMs put few constraints on the database structure it should be fine. | 2 | 0 | 0 | If I have two backends, one NodeJS and one Python both of them are accessing the same database. Is it possible to use an ORM for both or is that really bad practice? It seems like that would lead to a maintenance nightmare. | Node ORM and Python ORM for same DB? | 0 | 1 | 0 | 184 |
47,709,408 | 2017-12-08T07:09:00.000 | 0 | 0 | 0 | 1 | html,linux,python-3.x,web,terminal | 47,709,772 | 1 | false | 1 | 0 | A quick solution I would do (without any libraries or stuff):
Invoke your CLI program and set it up to continuously write its output to a file on the server. Make sure you know the file name.
Use AJAX calls to fetch the output to the client, OR (if you are up for harder stuff) use a WebSocket to stream the output to the client. | 1 | 1 | 0 | I am trying to create a web application using Python which has an HTML form, and the form data is used as command-line input to start an application. I know how to initiate the CLI tool from the web app, but I want a way to get the output of the CLI tool from the Linux terminal in real time and show it in the web application.
The CLI tool will run for a day and the terminal output will change in real time. Is there a way to display the changing Linux terminal output in the web application? Information about any web terminal, or a way to get and store the real-time screen output of a Linux application, would be helpful.
Whenever the screen output of the application running in Linux changes, the web application should also be updated with the change.
Are there any Python libs or tools that would be useful and easy to integrate? | Getting a Linux terminal output of a CLI tool and displaying in Web page | 0 | 0 | 0 | 515 |
47,709,687 | 2017-12-08T07:30:00.000 | 0 | 0 | 1 | 0 | python,regex | 47,709,796 | 1 | false | 0 | 0 | You could just capture all Chinese characters between quotes, capture any comment start ´--´ indicators, and then throw away all the matches after that indicator. | 1 | 2 | 0 | I want to match any Chinese character in "", but not in the comment of Lua, i.e: after --.
For example, in the string Tips("中文") -- "注释", 中文 should be matched, but not 注释.
The regex I wrote is ur'(?<!--.*?)"([\u4e00-\u9fff]+)"', but Python gives an error: look-behind requires fixed-width pattern.
So how to fix this? | Regex in Python. How to match a string but not in comments? | 0 | 0 | 0 | 70 |
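A minimal sketch of the split-then-match idea from the answer above, assuming '--' never occurs inside a quoted string:

```python
# -*- coding: utf-8 -*-
import re

line = u'Tips("中文") -- "注释"'
code_part = line.split('--', 1)[0]               # drop the Lua comment
matches = re.findall(u'"([\u4e00-\u9fff]+)"', code_part)
print(matches)                                    # [u'中文']; 注释 is ignored
```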
47,714,544 | 2017-12-08T12:34:00.000 | 2 | 1 | 0 | 0 | python,callback,mqtt,paho | 47,719,711 | 1 | true | 0 | 0 | What does "is better" mean for you?
The callback registered with on_message() will get all messages for all your subscriptions, whereas with message_callback_add you can have different callbacks for each topic that you subscribe to.
Do you need your callbacks to do different things based on the topic name? If not, you use on_message, else you use message_callback_add. | 1 | 0 | 0 | I have subscribed to multiple (around 4) topics using paho.mqtt.
On receiving a message from each topic, I want to buffer the messages until they reach some threshold and then insert them in bulk into a MySQL database. I want to gather some 1000 messages, check whether the threshold of 1000 is reached, and then finally insert into the database at a certain time interval (every 1 minute).
For each topic, there is a corresponding table in the database. Which callback should I use: the on_message() callback or message_callback_add()? Which is better in such a scenario? | Paho mqtt callbacks on multiple subscription | 1.2 | 0 | 0 | 1,202 |
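A minimal sketch of both callbacks in paho-mqtt; the broker address and topic names are illustrative:

```python
import paho.mqtt.client as mqtt

def on_temperature(client, userdata, msg):    # topic-specific handler
    print('temp:', msg.payload)

def on_message(client, userdata, msg):        # catch-all for everything else
    print(msg.topic, msg.payload)

client = mqtt.Client()
client.on_message = on_message
client.message_callback_add('sensors/temperature', on_temperature)
client.connect('localhost', 1883)
client.subscribe([('sensors/temperature', 0), ('sensors/humidity', 0)])
client.loop_forever()
```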
47,720,160 | 2017-12-08T18:23:00.000 | 0 | 1 | 0 | 1 | php,python,linux,webserver,file-permissions | 47,720,634 | 1 | true | 1 | 0 | As suggested by @wpercy you have figure out which user is executing the file. Usually that user is called www-data !
To find out which user is calling the service use
ps aux | grep -E '[a]pache|[h]ttpd|[_]www|[w]ww-data|[n]ginx' | grep -v root | head -1 | cut -d\ -f1
After you have figured out the user, you have to change the permissions for the folder. That command should be something like chown -R www-data:www-data /var/www/html/Projectfolder
Special thanks to wpercy! | 1 | 0 | 0 | I'm currently working on a project which will dynamically fetch some information about my job and display it in an HTML page. To accomplish this I wrote a Python script, which will be invoked using a PHP webservice. The script needs to edit some files in order to work.
Basically PHP executes the script using
$output = shell_exec('python script.py');
The problem is that when the webservice is called, the script does not have the needed permissions to edit the files.
So the webserver should call the script using something like $output = shell_exec('sudo python script.py');
I may need to change the permissions of the project folder, but I don't know how.
Some additional information:
I'm using a Raspberry Pi 3 with a LAMP installation on Raspbian as the webserver
The folder structure is the following:
projectfolder
- style (containing css)
- script.py
- script2.py
- filetoedit1.txt
- filetoedit2.html
Any help is appreciated! | Linux editing privilege over webservice | 1.2 | 0 | 0 | 25 |
47,721,121 | 2017-12-08T19:40:00.000 | 0 | 0 | 0 | 0 | python,urllib,urllib3 | 47,721,219 | 1 | true | 0 | 0 | It's not clear from your question, but I'm assuming that the Python code in question is not running on the web server. (Otherwise, it would be a matter of using the regular open() call.)
The answer is no: HTTP servers do not usually provide the ability to update files, and urllib does not support writing files over FTP/SCP. You will need to either run some sort of upload service on the server that exposes an API over HTTP (via a POST entry point or otherwise) and allows you to make requests to the server in a way that causes the file to be updated on the server's filesystem. Alternatively, you would need to use a protocol other than HTTP, such as FTP or SCP, and use an appropriate Python library for that protocol, such as ftplib. | 1 | 0 | 0 | I am wondering if there is a way to write into a .txt file on a web server using urllib or urllib3. I tried using urllib3 POST but that doesn't do anything. Do any of these libraries have the ability to write into files, or do I have to use some other library? | Writing into text file on web server using urllib/urllib3 | 1.2 | 0 | 0 | 164 |
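A minimal sketch of the FTP alternative with the standard library's ftplib; the host, credentials, and file names are illustrative:

```python
from ftplib import FTP

ftp = FTP('ftp.example.com')          # illustrative host
ftp.login('user', 'password')         # illustrative credentials
with open('local.txt', 'rb') as f:
    ftp.storbinary('STOR remote.txt', f)   # upload/overwrite the remote file
ftp.quit()
```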
47,724,417 | 2017-12-09T01:28:00.000 | 0 | 0 | 0 | 0 | python,turtle-graphics,listen | 67,288,344 | 2 | false | 0 | 1 | turtle and screen have separate inputs.
For example:
turtle.onclick() and screen.onclick() may seem the same, but screen.onclick() refers to the general window, while turtle.onclick() refers to the turtle object itself.
Turtle
When calling turtle.onclick() you are activating the (onclick) function, to call your argument function whenever the user clicks specifically on the turtle object.
Screen
When calling screen.onclick() you are activating the (onclick) function, to call your argument function whenever the user clicks anywhere on the window.
This is equivalent to turtle.onscreenclick() because onscreenclick() refers to the entire screen. Hence the name screenclick rather than just click which refers to the turtle object.
Listen
So because turtle and screen have separate input functions, you'll need separate listening functions.
So
turtle.listen() listens for the entire turtle module's inputs overall, whilst screen.listen() listens for screen/window inputs. | 1 | 0 | 0 | Is there any difference between window.listen(), screen.listen(), and turtle.listen()?
I see them used differently in various programs but assume they can be called interchangeably. | Difference between turtle.listen() and screen.listen() | 0 | 0 | 0 | 3,369 |
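A minimal sketch of the two click bindings; note that listen() is strictly needed before keyboard bindings (it gives the window focus), while the click handlers fire on their own:

```python
import turtle

t = turtle.Turtle()
screen = turtle.Screen()

def clicked_turtle(x, y):
    print('clicked the turtle shape at', x, y)

def clicked_window(x, y):
    print('clicked anywhere in the window at', x, y)

t.onclick(clicked_turtle)        # only fires on the turtle itself
screen.onclick(clicked_window)   # fires anywhere in the window
screen.listen()                  # needed before keyboard bindings like onkey()
turtle.done()
```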
47,725,736 | 2017-12-09T05:48:00.000 | 0 | 1 | 0 | 0 | python,pytest,allure | 47,764,766 | 1 | true | 0 | 0 | Found the solution.
First of all, you need to remove ALL the old libraries and packages, since they may cause problems.
Then you need to install the Allure command line via Brew.
That will allow you to generate reports from the output of your tests.
Then what is left is to install via Pip the package for Python, which is called Allure-pytest.
I did notice that some imports are different from the previous version of Allure; not sure what the problem is though, since importing Allure works fine in Python, but when I generate reports, I get errors because "Object of type bytearray is not JSON serializable".
Was working with the previous version of Allure API on Python 2.7, so I am not sure what is wrong, but most likely it is user error. | 1 | 0 | 0 | I am trying to use Allure report with Python3, although the libraries used for Python Pytest are not supported, from what I can see.
The documentation says that the Allure plugin for pytest supports the previous version of Allure only.
Is there a workaround to use pytest on python3, and get the Allure reports created? | Use Allure report V2 via pytest on Python3 | 1.2 | 0 | 1 | 614 |
47,725,773 | 2017-12-09T05:53:00.000 | 0 | 0 | 1 | 0 | python | 47,725,836 | 5 | false | 0 | 0 | I don't see any other way to do it other than iterating over the entire dictionary and then seeing which key was the closest by recording min_diff, as well as the key itself as variables.
Alternatively, you could try using an ordered dict to save some time, but either way it'll run in linear time. | 1 | 0 | 0 | I have a dictionary where the keys are integers. I have another integer. I want to find the value corresponding to the key in the dict that is closest to the given integer. Is there an efficient way to do this?
Maybe a different data structure (binary tree) would be more efficient? | Finding an integer key of a Python dict that is closest to a given integer | 0 | 0 | 0 | 505 |
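A minimal sketch of the linear scan the answer describes, in one line; the data is illustrative:

```python
d = {10: 'a', 22: 'b', 35: 'c'}   # illustrative data
target = 25

closest_key = min(d, key=lambda k: abs(k - target))
print(closest_key, d[closest_key])   # 22 b
```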
47,728,923 | 2017-12-09T13:11:00.000 | 0 | 0 | 1 | 0 | python,mysql | 47,729,223 | 1 | false | 0 | 0 | Looks like SQLs: SELECT ... FOR UPDATE would lock the selected row and other processes can't read/update it until I commit changes.. if I understand correctly | 1 | 1 | 0 | I have MySQL DB/table with column "name" containing one value. Multiple python scripts are accessing the same DB/table and the same column. There are also two more columns called "locked" and "locked_by", each script is reading the table and selects 10 entries from "name" where "locked" is false and update the locked value to True so other script can't take them and do the same work again. At least that is the solution I have for multiple script accessing one column and not tripping all over each other.. BUT!
I'm worried that, in the time while one script is updating the "locked" status, another script takes that value and tries to update it too, and so on... ending in a mess.
Is there some solution to this, or am I just worried about a non-existent issue? | How to prevent conflicts while multiple python scripts accessing MySQL DB at once ? | 0 | 0 | 0 | 79 |
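A minimal sketch of the SELECT ... FOR UPDATE pattern confirmed in the answer above, assuming an InnoDB table and a DB-API driver such as PyMySQL; the table, column, and connection details are illustrative:

```python
import pymysql

conn = pymysql.connect(host='localhost', user='user',
                       password='password', db='mydb')
try:
    with conn.cursor() as cur:
        # Rows matched here stay locked until COMMIT; a concurrent script
        # running the same statement blocks instead of claiming them too.
        cur.execute("SELECT id, name FROM entries "
                    "WHERE locked = 0 LIMIT 10 FOR UPDATE")
        ids = [row[0] for row in cur.fetchall()]
        if ids:
            placeholders = ','.join(['%s'] * len(ids))
            cur.execute("UPDATE entries SET locked = 1, locked_by = %s "
                        "WHERE id IN (" + placeholders + ")",
                        ['worker-1'] + ids)
    conn.commit()    # releases the locks
finally:
    conn.close()
```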
47,729,340 | 2017-12-09T14:01:00.000 | 0 | 0 | 0 | 0 | python-2.7,kivy,ubuntu-16.04,buildozer | 47,729,379 | 2 | false | 0 | 1 | Change the icon.filename setting in the buildozer.spec file. | 1 | 0 | 0 | How can I change the default kivy icon logo? I tried in buildozer spec but nothing happens - when I convert my app in the apk the icon does not change. | How to change default kivy logo with another image logo? | 0 | 0 | 0 | 2,472 |
47,730,994 | 2017-12-09T17:06:00.000 | 0 | 0 | 0 | 0 | python,sql,database,list,append | 47,731,116 | 1 | false | 0 | 0 | Though not intended, you could join the list with a specific separator. In turn, when you query the selected field, you have to convert it into a list again. | 1 | 0 | 0 | I want to store a list within another list in a database (SQL) without previous data being lost. This is one example of the values I have in my database: (1, 'Haned', 15, 11, 'Han15', 'password', "['easymaths', 6]"). What I want to do is store another piece of information/data within the list [] without it getting rid of "['easymaths', 6]", so it would look something like "['easymaths', 6, 'mediummaths', 6]" and so on. Thank you | Storing a list within another list in a database without previous information in that list getting lost | 0 | 1 | 0 | 20
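A sketch of that round trip; json.dumps/json.loads is used here as a variant of the separator idea, since it survives mixed strings and integers (this is a suggested approach, not the only one):

    import json

    progress = ["easymaths", 6]
    stored = json.dumps(progress)          # text that fits in the column

    # Later: SELECT the column, parse it, append, and UPDATE it back,
    # so the earlier entries are never lost.
    progress = json.loads(stored)
    progress.extend(["mediummaths", 6])
    stored = json.dumps(progress)          # '["easymaths", 6, "mediummaths", 6]'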
47,733,280 | 2017-12-09T21:16:00.000 | 0 | 0 | 1 | 0 | python-2.7,reverse-engineering,offset,ida | 47,740,142 | 1 | false | 0 | 0 | In order to run a python script as an IDAPython script, it must be run from within IDA, either manually or using the basic headless command line arguments IDA supports. | 1 | 0 | 0 | I have a Python script that automatically reads and exports particular offsets from a game. It's made with the help of IDA 6.6 and its Python scripts/libs. Now I am not very experienced with Python and I don't know why I get these errors:
Could not import idaapi. Running in 'pydoc mode'.
Traceback (most recent call last):
  File "C:\Users\1234\Desktop\idapyhon\offsets.py", line 1, in <module>
    from idc import BADADDR, INF_BASEADDR, SEARCH_DOWN, FUNCATTR_START, FUNCATTR_END
  File "D:\prg\IDA 6.6\python\idc.py", line 41, in <module>
    EA64 = idaapi.BADADDR == 0xFFFFFFFFFFFFFFFFL
NameError: name 'idaapi' is not defined
offsets.py:
pastebin.com/sp08SiS9
idc.py:
pastebin.com/6eJRtphF
What this script must do is get all the offsets from the game and place them in a txt file in the "output" dir.
Let me know if you need any other code. | Could not import idaapi. Name error: name 'idaapi' is not defined | 0 | 0 | 0 | 2,684 |
47,733,364 | 2017-12-09T21:25:00.000 | 0 | 0 | 0 | 0 | python,flask,virtualenv,pocketsphinx | 47,735,058 | 1 | true | 1 | 0 | I was able to fix this issue by simply passing the flask app.root_path variable into my python script and adding it in front of 'reqs/model/en-us/en-us' so I guess I needed to have an absolute path instead of a relative one. | 1 | 0 | 0 | Just started with Flask and Python yesterday so this may be a stupid question, but what is the difference between running a Python script via flask:5000 server and running it locally?
I have a script that uses pocketsphinx and it works 100% correctly when I run it in my terminal but when I call it from my flask site it gets an error.
The error is:
"acmod.c", line 83: Folder 'reqs/model/en-us/en-us' does not contain
acoustic model definition 'mdef'
It doesn't make sense to me since my views.py script is in the same folder that 'reqs' is in and the mdef file IS located in 'reqs/model/en-us/en-us' so I'd think the path would work.
And it works when run in the terminal, just not on the Flask site. | Flask and running a script on server VS running in terminal : Different results? | 1.2 | 0 | 0 | 82
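A sketch of the fix described in this answer, using Flask's app.root_path to build an absolute model path (the route and use of the path are illustrative):

    import os
    from flask import Flask

    app = Flask(__name__)

    # Absolute path, independent of the directory the server was started from
    MODEL_DIR = os.path.join(app.root_path, "reqs", "model", "en-us", "en-us")

    @app.route("/transcribe")
    def transcribe():
        # pass MODEL_DIR (not the relative 'reqs/model/en-us/en-us')
        # to the pocketsphinx configuration here
        ...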
47,735,137 | 2017-12-10T01:59:00.000 | 1 | 0 | 1 | 0 | python | 47,735,230 | 1 | true | 0 | 0 | I would recommend using libraries, even though it may feel like "cheating" at first.
Thousands of hours of work have gone into developing high-quality libraries for Python that allow you to finish more sophisticated projects with greater ease. As some commenters have noted, there's no use in "re-inventing the wheel". At first, when you work on small simple programs, a few well-placed library function calls may compose the bulk of your code. It will take some time to learn which functions are readily available and how to use them, but this will assist you in future projects as well.
On the other hand, if there is a concept that you wish to better understand, it could be worth implementing yourself.
For example, in almost all contexts, implementing matrix multiplication from scratch in Python is definitely a waste of your time, since the numpy library provides this functionality and will perform faster than any code you write. If you're interested in how matrix multiplication is performed, and want to investigate more efficient algorithms for doing so, only then is it worth implementing yourself (why you would be doing this in Python is beside the point).
In short, for educational purposes, when you want to know how something works, you can try building it yourself. Otherwise, please, use the libraries! They are there for a reason. | 1 | 1 | 0 | I started with python a few months ago and I have several projects in mind, but what has been stopping me a lot is the fact of using libraries, because It seems to me like I'm just learning to use a library instead of improving my skills.
So... should I worry about that as a beginner? | Use libraries or write my own code as a beginner? | 1.2 | 0 | 0 | 73 |
47,737,020 | 2017-12-10T08:10:00.000 | 1 | 0 | 1 | 0 | windows-10,pyinstaller,python-3.6 | 47,795,678 | 2 | true | 0 | 0 | It seems that this is a known issue with cx_Freeze which has been resolved in the source. It will be fixed in the new release (5.1.1) | 1 | 0 | 0 | I am trying to convert my Python file into a .exe, and I found out about pyinstaller. I ran pip install pyinstaller in the command prompt. After a few seconds, the last line said something like "pyinstaller was successfully installed", but when I run just "pyinstaller" in the command prompt, it prints "failed to create process.". I tried running it in the directory with the scripts, and I tried doing "pyinstaller myprogram.py" in the directory of my program. I even went to Windows 10 "Advanced system settings" and added the directory of my Python scripts, but it always returns this "failed to create process." I looked at other questions on Stack Overflow. I even looked at the first lines of the pyinstaller scripts, but they already had quotes, so I do not know what is wrong.
Any reply would be appreciated. | Running pyinstaller in a command prompt returns "failed to create process" | 1.2 | 0 | 0 | 434 |
47,738,554 | 2017-12-10T11:52:00.000 | 0 | 0 | 0 | 0 | python,algorithm,data-structures,hash | 47,738,572 | 3 | false | 0 | 0 | Because in order to insert or delete, you first need to search, and search takes O(n) in the worst case. Therefore, insert and delete also take at least O(n) in the worst case. | 3 | 3 | 0 | In the book grokking algorithms, the author said that
In the worst case, a hash table takes O(n)—linear time—for everything, which is really slow.
In the worst case, I understand that the hash function will map all the keys to the same slot, and the hash table starts a linked list at that slot to store all the items. So search will take linear time because you have to scan all the items one by one.
What I don't understand is why insert and delete take the hash table linear time to perform. In the worst case, all the items are stored in the same slot, which points to a linked list. And for a linked list, delete and insert take constant time. Why does the hash table take linear time?
For insert, can the hash table just append the item at the end of the linked list? That would take constant time. | Why in worst case insert and delete take linear time for hash table? | 0 | 1 | 0 | 2,136
47,738,554 | 2017-12-10T11:52:00.000 | 0 | 0 | 0 | 0 | python,algorithm,data-structures,hash | 47,738,575 | 3 | true | 0 | 0 | And for a linked list, delete and insert take constant time.
They don't. They take linear time, because you have to find the item to delete (or the place to insert) first. | 3 | 3 | 0 | In the book grokking algorithms, the author said that
In the worst case, a hash table takes O(n)—linear time—for everything, which is really slow.
In the worst case, I understand that the hash function will map all the keys to the same slot, and the hash table starts a linked list at that slot to store all the items. So search will take linear time because you have to scan all the items one by one.
What I don't understand is why insert and delete take the hash table linear time to perform. In the worst case, all the items are stored in the same slot, which points to a linked list. And for a linked list, delete and insert take constant time. Why does the hash table take linear time?
For insert, can the hash table just append the item at the end of the linked list? That would take constant time. | Why in worst case insert and delete take linear time for hash table? | 1.2 | 1 | 0 | 2,136
47,738,554 | 2017-12-10T11:52:00.000 | 3 | 0 | 0 | 0 | python,algorithm,data-structures,hash | 47,738,580 | 3 | false | 0 | 0 | Delete will not be constant: you will have to visit the whole worst-case linked list to find the object you want to delete. So this would also be O(n) complexity.
You will have the same problem with insert: you don't want any duplicates, therefore, to be sure not to create any, you will have to check the whole linked list. | 3 | 3 | 0 | In the book grokking algorithms, the author said that
In the worst case, a hash table takes O(n)—linear time—for everything, which is really slow.
In the worst case, I understand that the hash function will map all the keys to the same slot, and the hash table starts a linked list at that slot to store all the items. So search will take linear time because you have to scan all the items one by one.
What I don't understand is why insert and delete take the hash table linear time to perform. In the worst case, all the items are stored in the same slot, which points to a linked list. And for a linked list, delete and insert take constant time. Why does the hash table take linear time?
For insert, can the hash table just append the item at the end of the linked list? That would take constant time. | Why in worst case insert and delete take linear time for hash table? | 0.197375 | 1 | 0 | 2,136
47,740,206 | 2017-12-10T15:16:00.000 | 0 | 0 | 0 | 0 | python | 47,741,851 | 2 | false | 0 | 1 | I want to continue Jakub's great answer:
Using C++ or something might be a good idea in some cases.
I created a CS:GO aimbot with opencv, PIL, pyautogui, numpy and a few other modules; watching sentdex's videos on this subject might give you some ideas (link above).
There might be a custom module for GPU-accelerated cv2.
tips for py:
1) Lower the ImageGrab resolution.
2) In ImageGrab.grab's bbox you can give the coordinates of the region where the grabs will happen, instead of the full screen.
3) Grayscale is a single 0-255 channel, so it will speed up the process; if you know which colors this reaction thing uses, convert to grayscale and you will know which values to respond to. | 1 | 0 | 1 | I just started to study OpenCV with Python and was trying to build my first game bot. My thought is: capture the game window frame by frame and analyze the pixels in some specific locations, if the color of those pixels has changed, then press a key to do some automatic operations.
This game needs quick reactions, so the FPS is quite important. I have tried mss and PIL, but the FPS with both methods is not enough (30+ with mss and 10+ with PIL's ImageGrab), so I'm wondering if there is any better way to deal with pixels in real time. Thanks! | Dealing with pixels in real-time using OpenCV | 0 | 0 | 0 | 1,166
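A hedged sketch of the region/grayscale tips above, using mss and OpenCV (the region, pixel coordinates, and threshold are illustrative):

    import cv2
    import numpy as np
    from mss import mss

    # Grab only the small region you care about, not the full screen
    region = {"top": 300, "left": 500, "width": 200, "height": 100}

    with mss() as sct:
        while True:
            frame = np.array(sct.grab(region))                # BGRA ndarray
            gray = cv2.cvtColor(frame, cv2.COLOR_BGRA2GRAY)   # one channel to scan
            if gray[50, 100] > 200:       # react when this pixel turns bright
                print("react!")           # e.g. press a key with pyautogui here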
47,740,320 | 2017-12-10T15:27:00.000 | 1 | 0 | 1 | 0 | python,intellij-idea | 47,830,967 | 1 | true | 0 | 0 | I found the answer, and I'm including it here for completeness even though it's a rather specific case.
I have a project shared between a docker instance and my local machine. The 'env' I was trying to use on the local machine had been created in docker, and therefore referred to an instance of python that existed on docker.
I had to create a second environment on my machine, and now all is working well. | 1 | 1 | 0 | I've checked out a Python project with a local environment. I am trying to add the local env in IntelliJ on the Project Structure > Platform Settings > SDKs screen: I select 'Add local' and navigate to [my_project]/env/bin/python.
I then see the message "Invalid Python SDK - Cannot set up a python SDK at Unknown at '[my_project]/env/bin/python'. The SDK seems invalid."
If I then click ok, I see the message "Cannot Detect SDK Version - Probably SDK installed in '[my_project]/env/bin/python' is corrupt"
In the logs I see the messages
"ERROR - ns.python.sdk.PythonSdkUpdater - Failed to determine Python's sys.path value"
and
"...env/bin/python: cannot execute binary file".
Any advice would be much appreciated. | Using local Python environment in IntelliJ | 1.2 | 0 | 0 | 334 |
47,744,506 | 2017-12-10T23:17:00.000 | -1 | 0 | 1 | 1 | python,windows,python-3.x,windows-10 | 47,744,895 | 2 | true | 0 | 0 | Solution:
Python's install path was overridden by Visual Studio to its "shared" location (more like pain-in-the-ass location) located at:
C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\
Changing both system variables (Path and PYTHONPATH) to this location resolved this issue.
Thank you to all who commented/responded.
EDIT: Apparently it is bad practice to manually set PYTHONPATH, as it is automatically set by Python. | 1 | 0 | 0 | Problem:
I have included Python's install path (C:\Program Files\Python36) on both
the "Path" and "PYTHONPATH" system variables. I haven't needed to use Python for a while, and I am certain it worked last time (close to a few months ago). Python IDLE still works, but I need to use Python through the command prompt.
Every other similar question I found online was usually resolved by making the proper changes to the system variables.
Error message when attempting to execute any Python-related task:
python : The term 'python' is not recognized as the name of a cmdlet, function, script file, or operable program.
Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:1
+ python
+ ~~~~~~
+ CategoryInfo : ObjectNotFound: (python:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException
What I've done so far:
Set system variables to Python Path via Control Panel>System and Security>System>Advanced System Settings>Environment Variables
Tried using cd to use python directly from its install location | Python 3.6 isn't runnable through Command Prompt | 1.2 | 0 | 0 | 2,345
47,751,125 | 2017-12-11T10:43:00.000 | 7 | 0 | 1 | 1 | python,macos,terminal | 47,751,976 | 2 | true | 0 | 0 | Just type python3 where you would have typed python.
For example, to open a REPL, type python3. To run an app.py program with Python 3.6, type python3 app.py. | 1 | 2 | 0 | When I run python someCode.py in my terminal, I am actually using python 2.7
I have installed python 3.6 on my Mac and would like to use it in terminal. how can I do that? | How do I use python 3 on terminal Mac | 1.2 | 0 | 0 | 15,774 |
47,751,125 | 2017-12-11T10:43:00.000 | 1 | 0 | 1 | 1 | python,macos,terminal | 54,866,894 | 2 | false | 0 | 0 | Type python3 in the command/terminal window and press Enter.
To exit from Python, type quit() and press Enter. | 2 | 2 | 0 | When I run python someCode.py in my terminal, I am actually using python 2.7
I have installed python 3.6 on my Mac and would like to use it in terminal. how can I do that? | How do I use python 3 on terminal Mac | 0.099668 | 0 | 0 | 15,774 |
47,755,807 | 2017-12-11T15:09:00.000 | 1 | 0 | 1 | 0 | python,visual-studio,pyinstaller,nuitka | 47,764,771 | 1 | false | 0 | 0 | xaav is correct.
I cannot comment, so instead I will post this as a solution in the hope it will direct you to the right path.
Cython exists for a reason. You get your python code, add a few changes and bam, your code is cythonised.
This is good for two reasons: it obfuscates the code, and it can speed up the code (it depends).
Why do you not use cython and pyinstaller? This is tried and tested. Pyinstaller even says that it supports it. The approach you are taking can be done in theory but it is so overly complicated and not even needed.
Possible concerns:
But can't they steal my source code? It's cythonised, so in theory yes, but not easily.
Can't I use Nuitka? Yes, if you want it to be buggy and not work as intended.
What about the libraries, they do not work on another pc? Spec files exist for a reason. A bit of manual handling and this can work.
Can't I compile to c++ and then make it standalone? Take a look at the number of unanswered questions and people who could not get it to work. Also, it is not needed when pyinstaller and cython exist and does the same thing. Cython is widely supported. It just feels like you are doing things the long and hard way.
But won't compiling to C++ be easier? No; pyinstaller already does most of the legwork. You might have to adjust the spec file here and there, but otherwise it's the only way to go. Keep in mind it also has integration with pyupdater too. | 1 | 0 | 0 | I am using cython to generate *.c files, to be later compiled with MS Visual Studio 2017 as C/C++. It all works splendidly, with the minor exception that all Python *.lib files were dynamically linked.
Since my goal is to produce a self-contained exe (large exe size is not a problem), I would like to ask if it is possible to static-link all the Python *.lib. I already tried specifying the /MT release option and defining all Python libraries in the debugger include.
Unfortunately, all my efforts were futile, since the dynamically linked executable can't find the python3.dll when copied to another computer. Currently I plan to copy the entire Python install directory together with the executable and specify the proper include links when compiling.
Therefore, I am interested in any option, if it exists, to produce a self-contained portable executable.
I would appreciate your help and advice. | How to remove the dependence of Python extensions on the UCRT | 0.197375 | 0 | 0 | 151 |
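A minimal sketch of the cython-then-pyinstaller route recommended in the answer (module names are placeholders):

    # setup.py -- compile your own modules into C extensions first
    from setuptools import setup
    from Cython.Build import cythonize

    setup(ext_modules=cythonize(
        ["mymodule.py"],
        compiler_directives={"language_level": 3},
    ))

    # Then, from a shell:
    #   python setup.py build_ext --inplace
    #   pyinstaller --onefile main.py
    # PyInstaller bundles the interpreter and the compiled extensions into
    # one self-contained executable, which is the goal stated above.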
47,757,009 | 2017-12-11T16:17:00.000 | 2 | 0 | 1 | 0 | python,spyder | 47,757,379 | 1 | true | 0 | 0 | (Spyder maintainer here) You need to go to the little cog menu on the right of the Editor and select the option New window to get a proper window.
This will be improved in Spyder 4 by allowing all our panes to generate a new window when undocked (Note: This is already implemented in Spyder master branch, in case you want to try it). | 1 | 0 | 0 | I'm running Spyder 3.2.4 in Windows 10
Despite my best efforts, my code is getting too wide to view in Spyder's default layout. I have therefore tried undocking the editor window. However, the undocked editor does not show up in the taskbar. Which leads me to the question, is there a way of switching between the main Spyder window and the undocked editor window? | Python: Spyder switching between undocked editor window and spyder console | 1.2 | 0 | 0 | 1,931 |
47,757,753 | 2017-12-11T17:00:00.000 | 1 | 0 | 1 | 0 | python,jupyter | 49,182,635 | 1 | true | 0 | 0 | Grammarly was the cause of this.
If you use Jupyter notebooks and have the Grammarly extension, it will cause problems. | 1 | 0 | 1 | I don't know how this happens or why,
but I'll be in a Jupyter notebook grouping by things, and I will very consciously type in dataframe.groupby, write some other code and hit ctrl+enter,
and there will be that damn error. Every single time, I will go back and delete the 'groupyouby' and type in groupby.
I doubt that anyone has run into this error, and I don't know how long it will be until someone else creates the mess of libraries that I have that resulted in this Chinese-water-torture-like nightmare. I am here to let you know that you are not alone.
Also if someone has a fix that would be great. I got nothing for you other than that description above. | Jupyternotebook corrects groupby to groupyouby Python | 1.2 | 0 | 0 | 39 |
47,758,329 | 2017-12-11T17:36:00.000 | 1 | 1 | 0 | 0 | python,automated-tests,aws-lambda | 47,763,008 | 1 | false | 0 | 0 | When it comes to unit testing, there's nothing special you have to do about AWS Lambda projects.
Your Lambda handler is a Python function. Therefore, you can import it in your tests, call it with some input, and assert on the output. Just like a normal Python function. | 1 | 1 | 0 | I have implemented a lambda project which has 4 modules/projects inside it. Each module/project has Python files which implement the module functionality.
I have to write the test cases for each module so that they go through CircleCI and execute on their own:
The module is starting and stopping a Step Function.
The module is calling a REST service.
It is writing/reading files from an S3 bucket.
Everywhere, unit tests are written in a test-driven-development style, but now that I have completed the project implementation, how do I write automated test cases for my modules? | How do I write test cases for my AWS Python-based lambda project? | 0.197375 | 0 | 0 | 1,070
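Since the handler is a plain function, a sketch of what such a test can look like with pytest (the module path, handler name, and event shape are hypothetical; S3/Step Functions calls can be stubbed with unittest.mock or the moto library):

    # test_handler.py
    import json
    from mymodule.handler import lambda_handler   # hypothetical import path

    def test_handler_returns_ok_for_valid_event():
        event = {"Records": [{"body": json.dumps({"id": 123})}]}
        result = lambda_handler(event, None)      # context is unused here
        assert result["statusCode"] == 200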
47,763,078 | 2017-12-11T23:25:00.000 | 1 | 1 | 0 | 0 | python,azure,azure-iot-hub,azure-iot-sdk | 57,144,824 | 2 | false | 0 | 0 | It is really disappointing that, as of July 2019, no documentation/API reference is available for the Azure IoT suite's Python SDK. What we have is a git repository with uncommented C code and some sample programs.
To answer your question: no, there is no documentation as of now. I personally had to add dotnet to my stack just to get it working. Working with a Python SDK would make life marginally better, if there WAS a real Python SDK; the current one is simply a wrapper created with Boost. However, you can look into the API references for the dotnet SDK, and then moving back to the so-called Python SDK gets relatively easier.
-Cheers (: | 1 | 0 | 0 | Is there any real documentation for the Python Azure IoTHub SDK? I have found a few code samples but nothing comprehensive. It's also hard to correlate with the documentation for other language SDK's as they all seem slightly different. Even the source code is not good documentation as it's actually a wrapper over C++.
I'd like to be able to take advantage of all the features, but I can't. For instance in the code samples we see send_confirmation_callback, receive_message_callback, device_twin_callback, device_method_callback, and blob_upload_conf_callback, and it's not clear to me what these all do, or what other kinds of callbacks there might be.
Am I missing it or does it not exist? | Documentation for Python Azure IoTHub SDK | 0.099668 | 0 | 0 | 271 |
47,764,837 | 2017-12-12T03:19:00.000 | 0 | 1 | 0 | 1 | python,shell,cron,scheduled-tasks,aix | 47,767,204 | 2 | false | 0 | 0 | One way I can think of is to write a daemon (another script perhaps) and keep checking the file for changes.
It is possible to write a script with an infinite loop, and the loop can contain the file-check logic. | 1 | 0 | 0 | I am trying to monitor a file, and if there is a change in the file I need to execute a script. The catch is that a cron job is not allowed; also, this will be executed on an AIX system.
Can somebody help me here on how to proceed? Thank you. | how to monitor changes in file without cron job in AIX | 0 | 0 | 0 | 73
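A sketch of the polling daemon the answer suggests, using only the standard library so it runs on AIX without cron (paths and the interval are placeholders):

    import os
    import subprocess
    import time

    WATCHED = "/path/to/file"           # file to monitor
    HANDLER = "/path/to/script.sh"      # script to run on change

    last_mtime = os.path.getmtime(WATCHED)
    while True:
        time.sleep(5)                           # poll interval in seconds
        mtime = os.path.getmtime(WATCHED)
        if mtime != last_mtime:                 # modification time changed
            last_mtime = mtime
            subprocess.call([HANDLER])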
47,765,237 | 2017-12-12T04:13:00.000 | 3 | 0 | 1 | 0 | console,pycharm,ipython | 48,452,720 | 2 | true | 0 | 0 | This is probably because IPython is not recognized as a package in your project interpreter.
This usually happens when you configure a virtualenv for your project, but don't check the Inherit global site-packages checkbox.
Fixing is quite easy:
Go to Settings -> Project: <project name> -> Project interpreter
At the bottom of the packages list you should see a + sign.
Press that button, find ipython and install it.
The next time you open your console, IPython will be detected and used automatically. | 1 | 5 | 0 | I wanted to use IPython as the default console in PyCharm, but it doesn't work.
I have selected the "use IPython when available" console option under Build, Deployment and Execution in Settings. I did the same in Default Settings also. But it doesn't seem to work.
I am using Python 3.6.3, IPython 6.2.1 and PyCharm Professional 2017.3 | Can't use IPython console in PyCharm | 1.2 | 0 | 0 | 2,558
47,766,709 | 2017-12-12T06:42:00.000 | 0 | 0 | 1 | 0 | python-3.x | 47,809,640 | 1 | false | 0 | 0 | This is because Python's default path encoding on Windows is ASCII, not UTF-8, and therefore Chinese symbols are treated as something like D:\Ko&z\Za@e -_~u\:;\clip.cpython-36.pyc. NTFS just cannot find such a path, and this results in a WinError, which is masked by "python: can't reopen .pyc file" in the console window. In fact, you just cannot open .pyc files in non-ASCII paths.
Workaround
Move your .pyc file into a directory which does NOT contain non-ASCII characters in its pathname, or create a symbolic link, junction point or hardlink to reflect your .pyc file into some other directory. But the most preferred way is to rename the path to one without any non-ASCII characters; this will also help others understand you in the Stack Overflow community. | 1 | 0 | 0 | Running the pyc in cmd:
1. With the Chinese path:
py -3.6 "D:\实施项目\牡丹江高分农业示范\资料\clip.cpython-36.pyc"
python: Can't reopen .pyc file
2. With the English path:
py -3.6 "D:\Download\clip.cpython-36.pyc"
program running
Problem reading input | python3.6 can not reopen .pyc file with Chinese path | 0 | 0 | 0 | 915 |
47,769,236 | 2017-12-12T09:31:00.000 | 1 | 0 | 1 | 0 | python,configuration,pycharm,interpreter | 47,935,804 | 1 | true | 0 | 0 | From the comments, it sounds like the problem was that you didn't have write access to your .idea folder after copying it from another user.
I'm just copying the comments into an answer so the question doesn't look unanswered anymore. | 1 | 2 | 0 | Each time I start PyCharm after a laptop restart, the configurations forget the Python interpreter. I don't understand how I can save the interpreter permanently.
p.s.: PyCharm Community Edition 2017.2 | Pycharm doesn't save the python interpreter in the configurations | 1.2 | 0 | 0 | 2,256 |
47,769,299 | 2017-12-12T09:34:00.000 | 1 | 0 | 0 | 0 | python,scrapy,scrapy-spider | 47,769,467 | 1 | true | 1 | 0 | Thanks, rubber duck debugging: probably a simple pipeline with a process_item() method. Earlier I thought only about Exporters and FeedStorage. | 1 | 1 | 0 | What is the best way to export/store items in a REST API?
I want to send scraped items to a REST API; where should I put my
requests.post(...)? Any examples? | Scrapy - best way to export/store items in REST API | 1.2 | 0 | 1 | 180
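A sketch of such a pipeline; the endpoint URL and project name are placeholders, and the pipeline still has to be enabled via ITEM_PIPELINES:

    # pipelines.py
    import requests

    class RestApiPipeline:
        API_URL = "https://api.example.com/items"   # placeholder endpoint

        def process_item(self, item, spider):
            # Called once per scraped item; forward it to the REST API.
            response = requests.post(self.API_URL, json=dict(item))
            response.raise_for_status()
            return item

    # settings.py
    # ITEM_PIPELINES = {"myproject.pipelines.RestApiPipeline": 300}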
47,769,641 | 2017-12-12T09:53:00.000 | 44 | 0 | 1 | 0 | python,pycharm | 47,772,901 | 2 | true | 0 | 0 | This depends on your project settings, specifically the project interpreter.
The project interpreter can be set to one of the following:
an interpreter installed globally on your system
an interpreter in a shared virtual environment
an interpreter in a virtual environment associated with a project
Now, the approach I'd recommend would be to create a shared virtual environment where you install your packages, and use this environment for all your projects.
That way, you have the desired result of needing to install your packages only once, but still have an environment isolated from your system environment.
To create such an environment, follow these steps:
Settings -> Project -> Project Interpreter
Click the cogwheel / gear icon to the right of the interpreter dropdown
Select "Add Local..." -> Virtualenv Environment
Select a path as a root directory for the new environment
Select base interpreter you want to use
Tick the checkbox "Make available to all projects"
Click the "OK" button to save the new environment | 1 | 31 | 0 | I use PyCharm and all the initial settings are okay.Simple package installation is working. Then why do I need to reinstall a package for each project? Is there any way to install the packages for all projects from now on? | How do I install packages in PyCharm for all projects? | 1.2 | 0 | 0 | 51,569 |
47,774,073 | 2017-12-12T13:39:00.000 | 3 | 0 | 0 | 1 | java,python,grails,jenkins | 47,774,670 | 1 | true | 0 | 0 | Just dir, not ${workspace}.
bat 'dir' in a pipeline script.
Select a "execute batch command" option and write the commands you want to execute in there.
Then check the console output of the build which was successful. | 1 | 3 | 0 | I have created a workspace and clone all files in to that. Now, I'm running my code through jenkins. Can anyone assist how to display all the files in my workspace through any command. I tried with ${workspace} dir in Jenkins but it is showing any output.
Basically, if you are at any folder in your system and you open cmd and exceute dir in windows, it will display all the files in it.
The same thing I want to see in Jenkins for windows.
Thanks. | Jenkins command to display all file in directory? | 1.2 | 0 | 0 | 2,946 |
47,774,652 | 2017-12-12T14:09:00.000 | 0 | 0 | 0 | 0 | python,opencv | 47,775,119 | 1 | false | 0 | 0 | It looks like you have not given a valid file location.
The error is saying that it thinks you have given it a whl file to install but it can't find it.
I suggest you check your file path again.
Make sure it is located at C:\Users\Om and it is called exactly the same as the filename you have put down. It should match opencv_python-3.2.0-cp36-cp36m-win32.whl exactly.
Ensure you do not rename the whl file.
Correct that and it will attempt an install and if that does not work you will get another error. | 1 | 0 | 1 | C:\Users\Om>pip install opencv_python-3.2.0-cp36-cp36m-win32.whl
Requirement 'opencv_python-3.2.0-cp36-cp36m-win32.whl' looks like a filename, but the file does not exist
Processing c:\users\om\opencv_python-3.2.0-cp36-cp36m-win32.whl
Exception:
Traceback (most recent call last):
File "c:\users\om\appdata\local\programs\python\python36-32\lib\site-packages\pip\basecommand.py", line 215, in main
status = self.run(options, args)
File "c:\users\om\appdata\local\programs\python\python36-32\lib\site-packages\pip\commands\install.py", line 324, in run
requirement_set.prepare_files(finder)
File "c:\users\om\appdata\local\programs\python\python36-32\lib\site-packages\pip\req\req_set.py", line 380, in prepare_files
ignore_dependencies=self.ignore_dependencies))
File "c:\users\om\appdata\local\programs\python\python36-32\lib\site-packages\pip\req\req_set.py", line 620, in _prepare_file
session=self.session, hashes=hashes)
File "c:\users\om\appdata\local\programs\python\python36-32\lib\site-packages\pip\download.py", line 809, in unpack_url
unpack_file_url(link, location, download_dir, hashes=hashes)
File "c:\users\om\appdata\local\programs\python\python36-32\lib\site-packages\pip\download.py", line 715, in unpack_file_url
unpack_file(from_path, location, content_type, link)
File "c:\users\om\appdata\local\programs\python\python36-32\lib\site-packages\pip\utils__init__.py", line 599, in unpack_file
flatten=not filename.endswith('.whl')
File "c:\users\om\appdata\local\programs\python\python36-32\lib\site-packages\pip\utils__init__.py", line 482, in unzip_file
zipfp = open(filename, 'rb')
FileNotFoundError: [Errno 2] No such file or directory: 'C:\Users\Om\opencv_python-3.2.0-cp36-cp36m-win32.whl'
What is the problem? | Python package installation error for openCV | 0 | 0 | 0 | 266
47,778,403 | 2017-12-12T17:34:00.000 | 3 | 0 | 0 | 0 | python,machine-learning,scikit-learn,nlp,tf-idf | 57,437,066 | 3 | false | 0 | 0 | As we are talking about text data, we have to make sure that the model is trained only on the vocabulary of the training set: when we deploy a model in real life, it will encounter words it has never seen before, so we have to do the validation on the test set keeping that in mind.
We have to make sure that the new words in the test set are not a part of the vocabulary of the model.
Hence we have to use fit_transform on the training data and transform on the test data.
If you think about doing cross validation, then you can use this logic across all the folds. | 1 | 20 | 1 | In chapter seven of the book "TensorFlow Machine Learning Cookbook", while pre-processing data the author uses scikit-learn's fit_transform function to get the tfidf features of text for training. The author gives all text data to the function before separating it into train and test. Is this a correct action, or must we separate the data first and then perform fit_transform on train and transform on test? | Computing TF-IDF on the whole dataset or only on training data? | 0.132549 | 0 | 0 | 12,402
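A sketch of the split-first workflow this answer argues for (texts is a placeholder list of documents):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.model_selection import train_test_split

    texts_train, texts_test = train_test_split(texts, test_size=0.2, random_state=0)

    vectorizer = TfidfVectorizer()
    X_train = vectorizer.fit_transform(texts_train)  # vocabulary/IDF from train only
    X_test = vectorizer.transform(texts_test)        # reuse them; unseen words ignored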
47,778,403 | 2017-12-12T17:34:00.000 | 4 | 0 | 0 | 0 | python,machine-learning,scikit-learn,nlp,tf-idf | 55,300,936 | 3 | false | 0 | 0 | Author gives all text data before separating train and test to function. Is it a true action or we must separate data first then perform tfidf fit_transform on train and transform on test?
I would consider this as already leaking some information about the test set into the training set.
I tend to always follow the rule that, before any pre-processing, the first thing to do is to separate the data and create a hold-out set. | 2 | 20 | 1 | In chapter seven of the book "TensorFlow Machine Learning Cookbook", while pre-processing data the author uses scikit-learn's fit_transform function to get the tfidf features of text for training. The author gives all text data to the function before separating it into train and test. Is this a correct action, or must we separate the data first and then perform fit_transform on train and transform on test? | Computing TF-IDF on the whole dataset or only on training data? | 0.26052 | 0 | 0 | 12,402
47,783,328 | 2017-12-12T23:40:00.000 | 0 | 0 | 0 | 0 | python,mysql,django,django-filter,django-tables2 | 47,835,665 | 2 | false | 1 | 0 | What django-filter does from the perspective of django-tables2 is supplying a different (filtered) queryset. django-tables2 does not care about who composed the queryset; it will just iterate over it and render rows using the models from the queryset.
So whether you add a checkbox column to the table or not, or use django-filter or not, django-tables2 will just render any queryset it gets.
If you want to use the checked records for some custom filter, you'll have to do some manual coding, it's not supported out of the box.
Short answer: yes, you can use django-tables2 with a CheckboxColumn together with django-filter. | 1 | 1 | 0 | I want to have a checkboxcolumn in my returned table via Django-filter, then select certain rows via checkbox, and then do something with these rows.
This is Django-filter: django-filter.readthedocs.io/en/1.1.0 This is an example of checkboxcolumn being used in Django-tables2: stackoverflow.com/questions/10850316/…
My question is: can I use the checkboxcolumn for a table returned via Django-filter?
Thanks | Django-filter AND Django-tables2 CheckBoxColumn compatibility | 0 | 1 | 0 | 1,965 |
47,783,481 | 2017-12-12T23:56:00.000 | 0 | 0 | 1 | 0 | python,tensorflow,keras,jupyter-notebook | 51,338,424 | 2 | false | 0 | 0 | If you are a Windows/Mac user who is working on Jupyter notebook, pip install keras doesn't help you. Instead, try the steps below:
In the command prompt, navigate to the "site-packages" directory of your Anaconda installation.
Now use conda install tensorflow and afterwards conda install keras
Restart your Jupyter notebook and run the packages. | 1 | 0 | 1 | I have setup Tensorflow and Keras on Mac OS. I also have Jupyter that came as part of my Anaconda installation.
When I try to import Tensorflow or Keras in a Jupyter notebook, I get a "no module named <...>" error.
Am I missing a step ? | Can import TensorFlow and Keras in Jupyter, even though I have them installed? | 0 | 0 | 0 | 661 |
47,785,763 | 2017-12-13T04:58:00.000 | 0 | 0 | 0 | 1 | python,gunicorn | 47,850,707 | 1 | true | 1 | 0 | If you use threads, you must write your application to behave well, e.g. by always using locks to coordinate access to shared resources.
If you use events (e.g. gevent), then you generally don't need to worry about accessing shared resources, because your application is effectively single-threaded.
To answer your second question: if you use a pure python library to access your database, then gevent's monkey patching should successfully render that library nonblocking, which is what you want. But if you use a C library wrapped in Python, then monkey patching is of no use and your application will block when accessing the database. | 1 | 0 | 0 | Am I correct in understanding that if I use the default worker type (sync), then if the app blocks for any reason, say while waiting for the result of a database query, the associated worker process will not be able to handle any further requests during this time?
I am looking for a model which doesn't require too much special coding in my app code. I understand there are two async worker types, gevent and gthread, which can solve this problem. What is the difference between these two, and does my app need to be thread safe to use them?
UPDATE - I did some reading on gevent; it seems it works by monkey patching std library functions. I would think that in the case of a database query it probably wouldn't patch whatever db library I am using, so I would need to program my app to cooperatively yield control when I am waiting on the database. Is this correct? | If I use async workers workers with Gunicorn does my app been to be thread safe? | 1.2 | 0 | 0 | 874
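A sketch of a gevent setup matching this answer; gunicorn config files are themselves Python, and the values below are illustrative:

    # gunicorn.conf.py
    worker_class = "gevent"    # event-loop workers: blocked I/O yields instead of stalling
    workers = 4
    worker_connections = 1000

    # Start with:  gunicorn -c gunicorn.conf.py app:app
    # As the answer notes, prefer a pure-Python DB driver (e.g. PyMySQL)
    # so gevent's monkey patching can make its socket calls non-blocking.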
47,787,413 | 2017-12-13T07:11:00.000 | 0 | 0 | 1 | 0 | python,pycharm | 47,787,770 | 1 | true | 0 | 0 | On pycharm go to
File > Settings > project interpreter.. Under this add your env as an interpreter. | 1 | 0 | 0 | I tried import apiai on pycharm, but it didn't work so I ran pip install apiai then freezed to the requirements.txt it still didn't work on pycharm.
I went and tested my code in the prompt after i ran it in the env environment and it worked just fine.
My question is how can i make it work on pycharm too ? | I am unable to import apiai on pycharm? | 1.2 | 0 | 0 | 101 |
47,789,320 | 2017-12-13T09:13:00.000 | 0 | 0 | 0 | 0 | python,postgresql,sqlalchemy | 47,789,777 | 1 | true | 0 | 0 | Have you tried EXPLAIN (VERBOSE)? That shows the column names.
But I think it will be complicated – you'd have to track table aliases to figure out which column belongs to which table. | 1 | 0 | 0 | I am looking for a way to require users of a SQL query system to include certain columns in the SELECT query; for example, require the select to have a transaction_id column, else return an error. This is to ensure compatibility with other functions.
I'm using EXPLAIN (FORMAT JSON) to parse the query plan as a dictionary, but it doesn't provide information about the column names. | Requiring certain columns in SELECT SQL query for it to go through? | 1.2 | 1 | 0 | 32
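A sketch of the EXPLAIN (VERBOSE) idea from the answer, assuming psycopg2 (which parses the json column into Python objects); note the caveat above: Output entries are alias-qualified, e.g. 't.transaction_id':

    def output_columns(cur, query):
        cur.execute("EXPLAIN (VERBOSE, FORMAT JSON) " + query)
        plan = cur.fetchone()[0][0]["Plan"]

        cols = set()
        def walk(node):
            cols.update(node.get("Output", []))    # e.g. {'t.transaction_id', ...}
            for child in node.get("Plans", []):    # recurse into nested plan nodes
                walk(child)
        walk(plan)
        return cols

    # Reject queries whose plan output never mentions the required column:
    # if not any(c.endswith("transaction_id") for c in output_columns(cur, q)): ...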
47,789,968 | 2017-12-13T09:50:00.000 | -1 | 0 | 1 | 0 | python,pandas,numpy,opencv,visual-studio-code | 47,800,231 | 1 | false | 0 | 0 | There's no explicit technique for improving the import time short of using lazy loading, but there are technical considerations which have to be taken into account before going down that route (e.g. some modules simply don't work when loaded lazily). | 1 | 1 | 1 | Is it possible to decrease the debug startup time of a Python script in Visual Studio Code? If I have a lot of imported libraries (like opencv, numpy, pandas and so on), every time I start debugging the script by pressing the F5 button, the environment waits for seconds to reload them. Is it possible to reduce this time? Thanks. | Decrease the debug startup time of the python code in Visual Studio Code | -0.197375 | 0 | 0 | 80
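A sketch of the lazy-loading idea mentioned in the answer, with its caveat that not every module tolerates it; the import cost moves from debugger startup to first use:

    # Instead of importing everything at module level...
    # import cv2, numpy, pandas

    def read_frame(path):
        import cv2            # imported on first call only; cached in sys.modules
        return cv2.imread(path)

    def load_table(path):
        import pandas as pd
        return pd.read_csv(path)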
47,794,365 | 2017-12-13T13:37:00.000 | 0 | 0 | 0 | 0 | python,opencv,graphics,computer-vision | 47,808,839 | 1 | false | 0 | 0 | If the camera motion is approximately a rotation about its optical center / lens entrance pupil (for example, pan-tilt-roll on a tripod with the subject distance much larger than the translation of the optical center), then images taken from rotated viewpoints are related by a homography.
If you know the 3D rotation (pan/tilt/roll), then you can explicitly compute the homography and apply it to the image. If not, but you have two images upon which you can identify 4 corresponding points or more, then you can estimate the homography directly from those correspondences. | 1 | 1 | 1 | I am performing some perspective transforms on camera images, however in certain cases the roll and pitch of the camera are not zero. In other words, the camera is not level and I would like to be able to correct for this.
I have some questions:
1) Can the transformation matrix (from M = cv2.getPerspectiveTransform(...) ) be corrected for the pitch and roll angles?
2) Should I just transform the source points and get a new transformation matrix? Roll seems like a simple enough correction since it's analogous to rotating the image, but how can I get the proper transformation for both roll and pitch? | OpenCV perspective transform with camera roll and pitch correction | 0 | 0 | 0 | 715 |
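For question 1: a pure camera rotation induces the homography H = K R K^-1, which can be applied on its own or composed (by matrix multiplication) with the matrix from getPerspectiveTransform. A sketch, where K is the 3x3 intrinsic matrix and the angles are in radians:

    import cv2
    import numpy as np

    def level_image(img, K, pitch, roll):
        cp, sp = np.cos(pitch), np.sin(pitch)
        cr, sr = np.cos(roll), np.sin(roll)
        Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])   # pitch about x
        Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])   # roll about z
        R = Rz @ Rx
        H = K @ R @ np.linalg.inv(K)       # rotation-only homography
        return cv2.warpPerspective(img, H, (img.shape[1], img.shape[0]))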
47,795,020 | 2017-12-13T14:09:00.000 | 0 | 0 | 1 | 1 | python,azure,azure-data-factory | 47,874,795 | 3 | false | 0 | 0 | With python you can use the API to create, configure and schedule the data factory pipelines. There won't be any python code running, data factory is configured only with json files. The Python library will only help you to create these json files in a language you are familiar with, the same goes for .net, powershell and every other supported language. The end result is always a bunch of json files.
I dont know the specifics for your case, but in general you need to create linked services, datasets (that will use those linked services), and pipelines that will be a group of logical activities (that will make use of those datasets).
If you are using ADFv1, you can configure the schedule within the dataset's properties and you wont need a gateway as you are not using on-premise data. If you are using ADFv2, you will need an Azure Integration Runtime (type "managed") and you can configure the schedule with triggers.
I hope I was able to clarify these concepts a bit.
Cheers. | 1 | 2 | 0 | I want to create a Data Warehouse in Azure that contains information from several sources. The input data comes from different APIs, which I want to access using Python, and the output should be stored in the warehouse. This process should be updated every day.
I have read lots of documents from Azure, but I can't understand how I need to design this process.
The first question is: Where should the python processes, to collect the data from the different APIs, be created? In a pipeline of the Azure Data Factory or somewhere else?
Regards | Get data from API using Python and load into Azure SQL Data Warehouse using Azure Data Factory | 0 | 0 | 0 | 1,508 |
47,797,170 | 2017-12-13T15:50:00.000 | 2 | 0 | 1 | 0 | python,wlst | 47,809,130 | 1 | true | 0 | 0 | This command will print the Python version used by WLST (run import sys first if it is not already in scope):
print(sys.version) | 1 | 2 | 0 | In WebLogic, how do I find the WLST Python version? It's known that WLST is built on Python.
Python 2 and Python 3 are different from one another, architecture-wise and bit-wise. Python 2 is available for 64-bit, considering the Windows OS; Python 3 is still 32-bit as far as I know. Python is a well-known, simple high-level language that is widely used in many industries, but it is split between versions 2 and 3.
When I work in WLST, I wonder what Python version it is built on.
Does it differ when the WebLogic version differs? If so, how do I find out? Is there a specific command or function for it while operating in the WLST shell? | How to find WLST python version | 1.2 | 0 | 0 | 1,545
47,799,657 | 2017-12-13T18:13:00.000 | 0 | 0 | 0 | 0 | python,gensim,doc2vec | 47,806,250 | 1 | false | 0 | 0 | Which parameters are best can vary with the quality & size of your training data, and exactly what your downstream goals are. (There's no one set of best-for-everything parameters.)
Starting with the gensim defaults is a reasonable first guess, as are other values you've seen someone else use successfully on a similar dataset/problem.
But really you'll need to experiment, ideally by creating an automated evaluation based on some held-back testing set, then meta-optimizing the Doc2Vec parameters by searching over many small adjustments to the parameters for the best ranges/combinations. | 1 | 0 | 1 | My task is to assign tags (descriptive words) to documents or posts from the list of available tags. I'm working with Doc2vec, available in Gensim. I read that doc2vec can be used for document tagging, but I could not get suitable parameter values for this task. Till now, I have tested it by changing the values of the parameters named 'size' and 'window'. The results I'm getting are nonsense, and by changing the values of these parameters I haven't found any trend in the results, i.e. at some values the results improved a little and at some values the results fell. Can anyone suggest suitable parameter values for this task? I found that 'size' (which defines the size of the feature vector) should be large if we have enough training data, but about the rest of the parameters I am not sure! | Parameter values of Doc2vec for Document Tagging - Gensim | 0 | 0 | 0 | 374
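A hedged sketch of the experiment loop this answer recommends, using gensim's Doc2Vec (the hyperparameter values are starting points, not recommendations; older gensim releases call vector_size "size" and model.dv "model.docvecs"):

    from gensim.models.doc2vec import Doc2Vec, TaggedDocument

    # docs: iterable of (token_list, tag_list) pairs -- placeholder data
    corpus = [TaggedDocument(words=toks, tags=tags) for toks, tags in docs]

    model = Doc2Vec(vector_size=100, window=5, min_count=2, epochs=40)
    model.build_vocab(corpus)
    model.train(corpus, total_examples=model.corpus_count, epochs=model.epochs)

    # Evaluate on a held-back set: infer a vector for an unseen post and
    # check whether its true tags rank near the top.
    vec = model.infer_vector(new_post_tokens)
    print(model.dv.most_similar([vec], topn=5))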
47,800,794 | 2017-12-13T19:29:00.000 | -5 | 0 | 1 | 0 | python,powershell,command-line,anaconda,conda | 48,412,773 | 6 | false | 0 | 0 | Here is a workaround: start a cmd shell; run activate; check with conda env list; then start PowerShell by running powershell. | 4 | 40 | 0 | I have two environments in anaconda, namely: root, 2env. When I open anaconda prompt, I can switch from root to 2env by typing: activate 2env. I also have conda in my powershell, but when I open powershell and try to run conda activate 2env it gives the following error:
CommandNotFoundError: 'activate'
Any suggestions on how to fix this? | How to activate different anaconda environment from powershell | -1 | 0 | 0 | 49,258 |
47,800,794 | 2017-12-13T19:29:00.000 | 8 | 0 | 1 | 0 | python,powershell,command-line,anaconda,conda | 52,237,127 | 6 | false | 0 | 0 | I found this command while using vs code & cmd /k "activate <env> & powershell" .
It is working | 4 | 40 | 0 | I have two environments in anaconda, namely: root, 2env. When I open anaconda prompt, I can switch from root to 2env by typing: activate 2env. I also have conda in my powershell, but when I open powershell and try to run conda activate 2env it gives the following error:
CommandNotFoundError: 'activate'
Any suggestions on how to fix this? | How to activate different anaconda environment from powershell | 1 | 0 | 0 | 49,258 |
47,800,794 | 2017-12-13T19:29:00.000 | 1 | 0 | 1 | 0 | python,powershell,command-line,anaconda,conda | 51,418,747 | 6 | false | 0 | 0 | I have been battling this issue for a while. I found a solution by using a batch script and calling call activate %env%. I hope this can help somebody. | 4 | 40 | 0 | I have two environments in anaconda, namely: root, 2env. When I open anaconda prompt, I can switch from root to 2env by typing: activate 2env. I also have conda in my powershell, but when I open powershell and try to run conda activate 2env it gives the following error:
CommandNotFoundError: 'activate'
Any suggestions on how to fix this? | How to activate different anaconda environment from powershell | 0.033321 | 0 | 0 | 49,258 |
47,800,794 | 2017-12-13T19:29:00.000 | 0 | 0 | 1 | 0 | python,powershell,command-line,anaconda,conda | 67,298,072 | 6 | false | 0 | 0 | Open PowerShell.
Run conda init (not conda init powershell as the accepted answer suggests).
Close and re-open PowerShell.
Use conda normally. | 4 | 40 | 0 | I have two environments in anaconda, namely: root, 2env. When I open anaconda prompt, I can switch from root to 2env by typing: activate 2env. I also have conda in my powershell, but when I open powershell and try to run conda activate 2env it gives the following error:
CommandNotFoundError: 'activate'
Any suggestions on how to fix this? | How to activate different anaconda environment from powershell | 0 | 0 | 0 | 49,258 |
47,801,519 | 2017-12-13T20:21:00.000 | -2 | 0 | 1 | 0 | python | 47,801,694 | 2 | false | 0 | 0 | If you need to handle broken HTML/XML, I recommend you check Beautiful Soup 4. | 1 | 0 | 0 | Note:
I can't use third party modules so bs4 and lxml are not an option.
I need to parse HTML with the Python 3 std lib. I thought xml.minidom would be the way to go, but it doesn't seem to be able to parse invalid XML/HTML without throwing an exception like a syntax error.
Am I missing something within the xml module that can do what I'm looking for?
Am I missing something in the std lib? | Can xml.minidom parse broken XML | -0.197375 | 0 | 1 | 78 |
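For reference, the std lib does ship a tolerant, event-based parser in html.parser; unlike xml.minidom it does not raise on malformed markup. A minimal sketch:

    from html.parser import HTMLParser   # standard library

    class LinkCollector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self.links.extend(v for k, v in attrs if k == "href")

    p = LinkCollector()
    p.feed("<html><body><a href='x.html'>broken<p></a>")  # unclosed tags: no error
    print(p.links)   # ['x.html']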
47,803,986 | 2017-12-14T00:06:00.000 | 0 | 0 | 0 | 0 | python,function,loops | 47,804,074 | 2 | false | 0 | 0 | You can use the module pandas. Do the following:
import pandas as pd
# Load your data from the file; the log has no header row and the
# columns are separated by a single space
data = pd.read_csv("path_to_the_file", sep=" ", header=None, names=["ip", "count", "mark"])
And you're done!
Have a look at your data if you want:
data.head() | 1 | 1 | 0 | I'm a newbie at Python and got this task, which I'm having trouble starting.
I've an IP.log file with this info:
12.0.0.1 120 x
188.1.1.1 12 x
199.1.1.1 3
99.1.5.5 1
Basically I have to make an app with these functionalities: sorting the file by IP, deleting records, adding records, marking/unmarking the IP address, editing the records, resetting the count to zero. The read file should display something like this:
1 | 12.0.0.1 | 120 | x
2 | 188.1.1.1 | 12 | x
3 | 199.1.1.1 | 3 |
4 | 99.1.5.5 | 1 |
Any help to get me on track would be appreciated. | Python IP address management module | 0 | 0 | 0 | 148
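Building on the pandas answer above, one sketch of the sorting requirement: a plain string sort would put "99.…" after "188.…", so use the std-lib ipaddress module as the sort key (the column layout is assumed from the sample data):

    import ipaddress

    rows = data.values.tolist()    # or rows read with the csv module
    rows.sort(key=lambda r: ipaddress.ip_address(r[0]))   # r[0] is the IP field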
47,804,206 | 2017-12-14T00:32:00.000 | 10 | 0 | 1 | 0 | python,pip,setuptools,python-packaging | 47,804,476 | 1 | true | 0 | 0 | use "pip freeze > requirements.txt" command; then "packages='requirements.txt'" in the setup script
Even assuming that by packages='requirements.txt' you mean packages=open('requirements.txt').read().splitlines(), that is absolutely the wrong thing to do, and I hope that you've simply been misreading whatever sources you've consulted rather than such blatantly wrong information actually being posted somewhere.
The purpose of the packages keyword to the setup() function is to tell setuptools what directories of Python code in your repository are to be included when distributing & installing your project. For most simple cases, packages=find_packages() is all you need.
requirements.txt, on the other hand, is supposed to contain a list of other people's projects that your project depends on (and it should really be hand-crafted rather than redirecting pip freeze into it like a lobotomized chimp). The correct setup() keyword to pass its contents to is install_requires, which is what causes your project's dependencies to also be installed whenever someone installs your project. | 1 | 4 | 0 | I've been working on packaging a python project so I can install it on other systems in a lab. In my research on how to go about creating the setup.py script, I've seen two methods.
1) use "pip freeze > requirements.txt" command; then "packages='requirements.txt'" in the setup script
2) Simply using "packages=find_packages()" in the setup script
My question is, what is the difference between these two methods? It seems like "find_packages" does the same as "pip freeze" but does nothing in terms of installing modules where there are none to begin with.
Can anyone explain how these two methods differ, or just explain what each one is doing so I can make a more informed decision on which method to use?
Thanks! | Difference between using find_packages() vs "requirements.txt" for setup.py script | 1.2 | 0 | 0 | 1,774 |
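The two keywords side by side, as described in the accepted answer (metadata values are placeholders):

    from setuptools import setup, find_packages

    setup(
        name="myproject",
        version="0.1.0",
        packages=find_packages(),       # which of *your* packages ship
        install_requires=[              # which *other* projects get installed too
            "requests>=2.18",
            "numpy",
        ],
    )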
47,804,792 | 2017-12-14T01:56:00.000 | 0 | 0 | 0 | 1 | python,tensorflow,tensorflow-gpu | 54,758,338 | 2 | false | 0 | 0 | When distributed TF code is run on a cluster, the other nodes can be accessed through "private_ip:port".
But the problem with AWS is that the other nodes cannot be easily launched, and this needs extra configuration. | 1 | 4 | 1 | I am using distributed TensorFlow on AWS using GPUs. When I train the model on my local machine, I indicate ps_host/workers_host as something like 'localhost:2225'. What are the ps/workers hosts I need to use in the case of AWS? | Distributed Tensorflow: ps/workers hosts on aws ? | 0 | 0 | 0 | 186
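A sketch of what those host lists look like with the TF 1.x distributed API; the private IPs and ports are placeholders, and the instances' security group must allow traffic between them on those ports:

    import tensorflow as tf

    cluster = tf.train.ClusterSpec({
        "ps":     ["172.31.10.1:2222"],
        "worker": ["172.31.10.2:2223", "172.31.10.3:2223"],
    })
    # On each node, start a server with that node's own role and index:
    server = tf.train.Server(cluster, job_name="worker", task_index=0)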
47,807,709 | 2017-12-14T07:07:00.000 | -3 | 0 | 1 | 0 | python,installation,pycharm,anaconda | 62,389,507 | 6 | false | 0 | 0 | I had the same problem with the Pycharm Community Edition (CE). Solved it by switching to Pycharm Professional. | 4 | 5 | 0 | I am new to Python. Installed Anaconda on my system.
I installed PyCharm too.
When I try to run a file from PyCharm I get this error message:
C:\Python\Test\venv\Scripts\python.exe python 3.6 C:/Python/Test/while.py
C:\Python\Test\venv\Scripts\python.exe: can't open file 'python': [Errno 2] No such file or directory | PyCharm not finding Anaconda Python, giving "can't open file 'python': [Errno 2] No such file or directory?" | -0.099668 | 0 | 0 | 24,660