Column schema (dtype and observed range or string length): Q_Id int64 (337 to 49.3M); CreationDate string (length 23); Users Score int64 (-42 to 1.15k); Other int64 (0 to 1); Python Basics and Environment int64 (0 to 1); System Administration and DevOps int64 (0 to 1); Tags string (length 6 to 105); A_Id int64 (518 to 72.5M); AnswerCount int64 (1 to 64); is_accepted bool (2 classes); Web Development int64 (0 to 1); GUI and Desktop Applications int64 (0 to 1); Answer string (length 6 to 11.6k); Available Count int64 (1 to 31); Q_Score int64 (0 to 6.79k); Data Science and Machine Learning int64 (0 to 1); Question string (length 15 to 29k); Title string (length 11 to 150); Score float64 (-1 to 1.2); Database and SQL int64 (0 to 1); Networking and APIs int64 (0 to 1); ViewCount int64 (8 to 6.81M).

Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
37,729,594 | 2016-06-09T14:59:00.000 | 0 | 0 | 0 | 0 | django,python-3.x,concurrency | 37,775,758 | 1 | true | 1 | 0 | I ended up enforcing the unicity at the database (model) level, and catching the resulting IntegrityError in the code. | 1 | 0 | 0 | I am running a Django app with 2 processes (Apache + mod_wsgi).
When a certain view is called, the content of a folder is read and the process adds entries to my database based on what files are new/updated in the folder.
When 2 such views execute at the same time, both see the new file and both want to create a new entry. I cannot manage to have only one of them write the new entry.
I tried to use select_on_update, with transaction.atomic(), get_or_create, but without any success (maybe used wrongly?).
What is the proper way of locking to avoid writing an entry with the same content twice with get_or_create ? | Django processes concurrency | 1.2 | 0 | 0 | 35 |
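A minimal sketch of the approach in the accepted answer above (the model and field names are hypothetical, not taken from the question):

```python
# Hedged sketch of the accepted answer: model/field names are hypothetical.
from django.db import IntegrityError, models

class FolderEntry(models.Model):
    # Database-level uniqueness: the second process that tries to insert the
    # same file name gets an IntegrityError instead of a duplicate row.
    file_name = models.CharField(max_length=255, unique=True)

def register_file(name):
    try:
        FolderEntry.objects.create(file_name=name)
    except IntegrityError:
        # Another process already inserted this file; safe to ignore.
        pass
```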
37,732,918 | 2016-06-09T17:48:00.000 | -2 | 1 | 1 | 0 | python,raspberry-pi,pycharm | 40,196,547 | 2 | false | 0 | 0 | You can run Pycharm directly on your Raspberry Pi:
- Using your Raspberry Pi, download the installation file directly from the Pycharm website (JetBrains). It will be a tarball, i.e., a file ending in ".tar.gz".
- Extract the file to a folder of your choice.
- Browsing through the extracted files and folders, you will find a folder named "bin". Inside "bin" you will find a file named Pycharm.sh
- Using your terminal window, go to the "bin" folder and launch the Pycharm application by typing: sudo ./Pycharm.sh
After several seconds (it's a little slow to load on my RPi3), Pycharm will load. Have fun! | 1 | 3 | 0 | I've been using IDLE with my raspberry for a while, it's nice at the beginning, but Pycharm provides lots more of features and I'm used to them since I've been also using Android Studio.
The problem is I couldn't figure out how to install the RPi module to control the pins of my Raspberry. Does anyone know how to do this?
In case it matters, it's python3 on a raspberry 2B. | Install RPi module on Pycharm | -0.197375 | 0 | 0 | 18,522 |
37,736,033 | 2016-06-09T20:57:00.000 | 20 | 0 | 0 | 1 | python,pudb | 37,736,539 | 1 | false | 0 | 0 | Ctrl-n/p - browse command line history | 1 | 13 | 0 | I'm on Linux and expected it to work like pdb, gdb, i.e., press enter to repeat the last command. I understand the debugger has a Variables watch window. | How to repeat the last command on the command-line in the python debugger, PuDB | 1 | 0 | 0 | 2,124 |
37,738,404 | 2016-06-10T00:53:00.000 | 0 | 0 | 0 | 0 | python,maya | 37,891,556 | 1 | false | 0 | 0 | Yellow means the attribute has an incoming connection, which means setAttr with lock won't work there. You need to use disconnectAttr, e.g. cmds.disconnectAttr('your_source_object.attribute', 'camera01:cameraShape.horizontalFilmAperture'). But in this case I seriously think that you have a reference, and in that case the story will be different. | 1 | 0 | 0 | I am trying to edit this particular Camera's node - Film Gate in my scene.
However while doing so, I was given the following error The attribute 'main_camera01:main_cameraShape.horizontalFilmAperture' is locked or connected and cannot be modified and The attribute 'main_camera01:main_cameraShape.verticalFilmAperture' is locked or connected and cannot be modified
These 2 particular attributes are highlighted in yellow with some connections and the camera is from a referenced file.
I tried something like cmds.setAttr('camera01:cameraShape.horizontalFilmAperture', lock=0) to unlock the attribute, however I got back the same error. Is there any way in which I can tweak this Film Gate attribute without affecting its other attribute connections? | Editing an attribute of a reference file without affecting its other attributes | 0 | 0 | 0 | 223
37,740,394 | 2016-06-10T05:16:00.000 | -1 | 1 | 1 | 0 | python,c++,database,game-engine | 37,740,518 | 2 | false | 0 | 0 | I never worked with python. But I think this is one of the main features of any programming/script languages: Call a function multiple times with it's own instances as many times as you need. | 1 | 1 | 0 | I'm in the process of trying to engineer a data structure for a game engine, and allow a scripting language to grab data from it. Due to some limitations of design, the data would need to be stored on the C++ side of the program in a database like structure. Main reason being that I'm not sure if Python's serialization base can compensate for modders suddenly adding and removing data fields.
I am wondering if is possible to call a python script, and have it act as it's own object with it's own data? If not, can you instantiate a python class from C++ without knowing the class's name until runtime? | Can Python run multiple instances of a script with each instance containing it's own data? | -0.099668 | 0 | 0 | 1,053 |
37,743,940 | 2016-06-10T08:52:00.000 | -1 | 0 | 1 | 0 | python,string,date,python-3.x | 56,444,902 | 4 | false | 0 | 0 | Easy way
Convert from regular date to Julian date
print datetime.datetime.now().strftime("%y%j")
Convert from Julian date to regular date
print datetime.datetime.strptime('19155', '%y%j').strftime("%d-%m-%Y") | 1 | 8 | 0 | I have a string as Julian date like "16152" meaning 152'nd day of 2016 or "15234" meaning 234'th day of 2015.
How can I convert these Julian dates to format like 20/05/2016 using Python 3 standard library?
I can get the year 2016 like this: date = 20 + julian[0:1], where julian is the string containing the Julian date, but how can I calculate the rest according to 1th of January? | How to convert Julian date to standard date? | -0.049958 | 0 | 0 | 29,645 |
37,746,658 | 2016-06-10T11:05:00.000 | -1 | 0 | 1 | 1 | python-3.x,ubuntu,exe | 70,057,993 | 1 | false | 0 | 0 | in my opinion its not possible to create some executables in linux for windows | 1 | 1 | 0 | I have made a python script using python3.5, it uses many packages like tkinter, matplotlib, pylab etc. I want to convert .py file to .exe so that I can give it to people to run on windows. However I need to do the following conversion from Ubuntu only. | How to make a .py file to .exe from Ubuntu to run it on Windows? | -0.197375 | 0 | 0 | 216 |
37,750,405 | 2016-06-10T14:10:00.000 | 1 | 0 | 1 | 0 | python-2.7 | 40,962,470 | 1 | false | 0 | 0 | I found alternative: pip install --user --upgrade grammar-check | 1 | 1 | 0 | I am having problem of installing python package language-check to my python 2.7 environment.
I tried the pip install language-check --upgrade command, but it was to no avail. It gave me an error saying "Command "python setup.py egg_info" failed with error code 1 in /private/var/folders/fn/g0nd0gb54d5c__5fjhb7rp0w0000gn/T/pip-build-zMuFbc/language-check/". I have a problem understanding what it is saying. If you know what it is saying, please give me a hint on how to fix it.
I also tried to download the language-check tar.gz to my Mac, gunzip it, ran the tar -xwf command on it, went to the language-check directory and ran setup install, but it did not work either. It gave me an error saying "error in language-check setup command: package_data must be a dictionary mapping package names to lists of wildcard patterns". So if you know how to fix the problem, please let me know.
Thank you so much in advance,
Tom | error on installing language-check packages to python 2.7 environment | 0.197375 | 0 | 0 | 387 |
37,751,120 | 2016-06-10T14:45:00.000 | 0 | 0 | 1 | 0 | ipython,jupyter,jupyter-notebook | 70,173,372 | 4 | false | 0 | 0 | Try print(chr(12)).
I am not sure what this function does behind the scenes, but if you are looking for a way to hide all previous outputs (such as in-memory 'card' game), it works. | 1 | 35 | 0 | Is it possible to restart an ipython Kernel NOT by selecting Kernel > Restart from the notebook GUI, but from executing a command in a notebook cell? | Restart ipython Kernel with a command from a cell | 0 | 0 | 0 | 55,150 |
37,751,430 | 2016-06-10T15:00:00.000 | 1 | 0 | 0 | 0 | python,machine-learning,scikit-learn,computer-science,k-means | 37,752,325 | 1 | false | 0 | 0 | Before answering which is better, here is a quick reminder of the algorithm:
"Choose" the number of clusters K
Initialize your first centroids
For each point, find the closest centroid according to a distance function D
When all points are attributed to a cluster, calculate the barycenter of the cluster, which becomes its new centroid
Repeat step 3. and step 4. until convergence
As stressed previously, the algorithm depends on various parameters:
The number of clusters
Your initial centroid positions
A distance function to calculate distance between any point and centroid
A function to calculate the barycenter of each new cluster
A convergence metric
...
If none of the above is familiar to you, and you want to understand the role of each parameter, I would recommend to re-implement it on low-dimensional data-sets. Moreover, the implemented Python libraries might not match your specific requirements - even though they provide good tuning possibilities.
If your point is to use it quickly with a big-picture understanding, you can use existing implementation - scikit-learn would be a good choice. | 1 | 0 | 1 | Is it better to implement my own K-means Algorithm in Python or use the pre-implemented K-mean Algorithm in Python libraries like for example Scikit-Learn? | K-Means Implementation in Python | 0.197375 | 0 | 0 | 614 |
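To illustrate the "use an existing implementation" option from the answer above, a minimal scikit-learn sketch; the data and cluster count are arbitrary toy values:

```python
# Minimal scikit-learn K-means sketch; data and n_clusters are toy values.
import numpy as np
from sklearn.cluster import KMeans

X = np.array([[1.0, 2.0], [1.5, 1.8], [5.0, 8.0], [8.0, 8.0], [1.0, 0.6], [9.0, 11.0]])

km = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = km.fit_predict(X)          # cluster index for every point
print(labels)
print(km.cluster_centers_)          # the final centroids (barycenters)
```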
37,753,121 | 2016-06-10T16:31:00.000 | 0 | 0 | 1 | 0 | python,python-imaging-library,pillow | 38,139,040 | 1 | false | 0 | 1 | Solved:
I finally figured it out! I had a 32 bit and a 64 bit version of python installed. I had pillow installed to my 32 bit directory, so when python ran from 32 bit was why I got the win32 error. And when I ran python 64bit it said module not found. Uninstalled both and reinstalled the 64 bit to the normal install direc in C, reinstalled 64 bit pillow and it worked!
After updating pip it was able to read the wheel files. | 1 | 0 | 0 | I am having a problem with running pillow for python 3.4.2.
I tried installing Pillow using 3 different files:
Pillow-3.2.0.win-amd64-py3.4.exe,
Pillow-3.2.0-cp34-cp34m-win_amd64.whl,
Pillow-3.2.0-cp34-none-win_amd64.whl.
Every time I try to import Image or tkimage I get the not a valid win32 error in shell. Also, the 2 wheel files said they were incompatible with my system when using pip to install so I had to extract the data and place manually.
I am running windows 10 64bit but when I request what platform I'm running with python it says windows 8! In my system info from control panel it says the correct information though.
Please help if anyone knows the solution! | Unable to use Pillow (not a valid WIN32 error) | 0 | 0 | 0 | 115 |
37,753,322 | 2016-06-10T16:43:00.000 | 0 | 0 | 1 | 0 | python,excel,mathcad,vba | 37,847,104 | 2 | false | 0 | 0 | I have no idea what mathcad is, but in Excel, if you click on Data, you will see that you can import From Text file, and From Other Sources. Do any of those make sense? Try one and see how you get along.
Also, if you want to automate the process, turn on the Macro Recorder and then click through all the necessary steps. Your Macro will be created and saved, so you can automatically repeat the process whenever you want. | 2 | 0 | 0 | I have a mathcad program with thousands of variables. Is there a way I can use Python or Visual Basic to loop over all the varibles in my mathcad program and then assign them and there values to an excel spreadsheet?
A python method is preferred as I have never used VB. | Importing variables from mathcad to excel | 0 | 0 | 0 | 459 |
37,753,322 | 2016-06-10T16:43:00.000 | 1 | 0 | 1 | 0 | python,excel,mathcad,vba | 48,646,084 | 2 | false | 0 | 0 | Firstly, you can get a value from a Mathcad worksheet through the Mathcad Automation API (the getValue method of the Worksheet object; for details refer to the Developer Reference in the installation folder).
Secondly, you can assign the value you just retrieved to the Excel worksheet using the xlwt package in Python.
A python method is preferred as I have never used VB. | Importing variables from mathcad to excel | 0.099668 | 0 | 0 | 459 |
37,753,380 | 2016-06-10T16:47:00.000 | 2 | 1 | 1 | 0 | python,encryption,passwords | 37,772,728 | 2 | false | 0 | 0 | PyNaCl supports multiple types of crypto primitives, but a password hashing scheme is none of them.
Using encryption for password storage is an anti-pattern. Where is the key stored to decrypt the encrypted password? If the key is stored somewhere in code or in some file in the file system, then the whole thing is nothing more than obfuscation. What if the key is lost? An attacker can directly decrypt the password and log in.
I'm assuming here that users don't actually type in keys, but rather passwords. If they would type in keys, then those keys could be used directly for PyNaCl encryption.
Instead, passwords should be hashed repeatedly and the hash stored. If a user tries to log in again, the password is hashed again with the same parameters (salt, iteration count, cost factor) and compared to the stored value. This is how it is commonly solved in client-server applications, but it is not necessary to store the password hash anywhere, because PyNaCl's symmetric encryption also provides authentication (integrity). It means that you can detect a wrong password by deriving a key from it and attempting to decrypt the container. The password was wrong when PyNaCl produces an error (or the container was tampered with).
There are multiple schemes (PBKDF2, bcrypt, scrypt, Argon2) that can be used for this purpose, but none of them are included in PyNaCl. Although, the underlying libsodium supports two of them. | 1 | 1 | 0 | I have the following scenario:
given a Python application on some client machine which enables several users. It encrypts and decrypts user passwords. What would be the currently most recommended approach?
Attempts of using PyNaCl lead to the insight that it is not a good approach due to the fact that PyNaCl is used for communication encryption and decryption. Here we have passwords which shall be encrypted, stored to a file, and then decrypted on request (e.g. if a specific user wants to re-login). Storing the passwords in a database is for our current experiment not an option (although it would be possibly a better solution).
According to your experiences: what would be a good way to approach this issue of encrypting and decrypting user data from e.g. text files? (Again: this is experimental and not meant for productive use in the current stage) | How to handle password management via PyNaCl? | 0.197375 | 0 | 0 | 665 |
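A hedged sketch of the "derive a key from the password and let authenticated decryption detect a wrong password" idea from the answer above. It assumes a later PyNaCl release that ships the nacl.pwhash module (the answer predates it); verify the exact names against your installed version:

```python
# Hedged sketch: assumes a PyNaCl release that ships nacl.pwhash (roughly 1.2+);
# check these names against your installed version before relying on them.
import nacl.pwhash
import nacl.secret
import nacl.utils
from nacl.exceptions import CryptoError

SALT_LEN = nacl.pwhash.argon2id.SALTBYTES

def lock(password: bytes, plaintext: bytes) -> bytes:
    salt = nacl.utils.random(SALT_LEN)
    key = nacl.pwhash.argon2id.kdf(nacl.secret.SecretBox.KEY_SIZE, password, salt)
    # Prepend the salt so the same key can be re-derived when decrypting.
    return salt + nacl.secret.SecretBox(key).encrypt(plaintext)

def unlock(password: bytes, blob: bytes) -> bytes:
    salt, ciphertext = blob[:SALT_LEN], blob[SALT_LEN:]
    key = nacl.pwhash.argon2id.kdf(nacl.secret.SecretBox.KEY_SIZE, password, salt)
    try:
        return nacl.secret.SecretBox(key).decrypt(ciphertext)
    except CryptoError:
        raise ValueError("wrong password or tampered data")
```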
37,753,680 | 2016-06-10T17:05:00.000 | 0 | 0 | 1 | 0 | python,dictionary | 37,771,544 | 3 | true | 0 | 0 | TheLazyScripter gave a nice workaround solution for the problem, but the runtime characteristics are not good because for each reconstructed word you have to loop through the whole dict.
I would say you chose the wrong dict design: To be efficient, lookup should be done in one step, so you should have the numbers as keys and the words as items.
Since your problem looks like a great computer science homework (I'll consider it for my students ;-) ), I'll just give you a sketch for the solution:
use word in my_dict.values() #(adapt for py2/py3) to test whether the word is already in the dictionary.
If no, insert the next available index as key and the word as value.
you are done.
For reconstructing the sentence, just
loop through your list of numbers
use the number as key in your dict and print(my_dict[key])
Prepare exception handling for the case a key is not in the dict (which should not happen if you are controlling the whole process, but it's good practice).
This solution is much more efficient than your approach (and easier to implement). | 1 | 0 | 0 | I am creating a code where I need to take a string of words, convert it into numbers where hi bye hi hello would turn into 0 1 0 2. I have used dictionaries to do this and this is why I am having trouble on the next part. I then need to compress this into a text file, to then decompress and reconstruct it into a string again. This is the bit I am stumped on.
The way I would like to do it is by compressing the indexes of the numbers, so the 0 1 0 2 bit into the text file with the dictionary contents, so in the text file it would have 0 1 0 2 and {hi:0, bye:1, hello:3}.
Now what I would like to do to decompress or read this into the python file, to use the indexes(this is how I will refer to the 0 1 0 2 from now on) to then take each word out of the dictionary and reconstruct the sentence, so if a 0 came up, it would look into the dictionary and then find what has a 0 definition, then pull that out to put into the string, so it would find hi and take that.
I hope that this is understandable and that at least one person knows how to do it, because I am sure it is possible, however I have been unable to find anything here or on the internet mentioning this subject. | How to take a word from a dictionary by its definition | 1.2 | 0 | 0 | 103 |
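A short sketch of the encode/decode flow outlined in the answer above (file I/O left out; the names are made up for illustration):

```python
# Sketch of the encode/decode idea above; names are made up for illustration.
def encode(text):
    mapping = {}            # word -> index
    indexes = []
    for word in text.split():
        if word not in mapping:
            mapping[word] = len(mapping)   # next available index
        indexes.append(mapping[word])
    return indexes, mapping

def decode(indexes, mapping):
    reverse = {index: word for word, index in mapping.items()}  # index -> word
    return " ".join(reverse[i] for i in indexes)

indexes, mapping = encode("hi bye hi hello")
print(indexes)                    # [0, 1, 0, 2]
print(decode(indexes, mapping))   # "hi bye hi hello"
```

Writing indexes and mapping to a file and reading them back is then just serialization of these two objects.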
37,754,771 | 2016-06-10T18:13:00.000 | 4 | 0 | 0 | 0 | python | 37,754,845 | 4 | true | 0 | 0 | One simple way is to make a request to Google search, then parse the HTML result. You can use a Python library such as Beautiful Soup to parse the HTML content easily and finally extract the URL links you need (a sketch follows this row). | 1 | 2 | 0 | My goal is to create a small script that finds all the results of a Google search, but in "raw" form.
I don't speak English very well, so I prefer to give an example to show you what I would like:
I type: elephant
The script returns:
www.elephant.com
www.bluelephant.com
www.ebay.com/elephant
.....
I was thinking about urllib.request, but the return value would not be usable for that!
I found some tutorials, but none of them were adapted to what I want!
As I told you, my goal is to have a .txt file as output which contains all the websites that match my query!
Thanks all | Python - Get Result of Google Search | 1.2 | 0 | 1 | 10,567 |
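A hedged sketch of the answer above using requests and Beautiful Soup. Google's result markup changes often and scraping it may be blocked or disallowed by its terms of service, so the link-filtering pattern below is an assumption that may need adjusting:

```python
# Hedged sketch: Google's HTML changes frequently and scraping it may be blocked;
# the "/url?q=" filtering below is a rough assumption, not a stable selector.
import requests
from bs4 import BeautifulSoup

query = "elephant"
resp = requests.get(
    "https://www.google.com/search",
    params={"q": query},
    headers={"User-Agent": "Mozilla/5.0"},
)
soup = BeautifulSoup(resp.text, "html.parser")

links = []
for a in soup.find_all("a", href=True):
    href = a["href"]
    if href.startswith("/url?q="):   # result links are often wrapped like this
        links.append(href.split("/url?q=")[1].split("&")[0])

with open("results.txt", "w") as f:
    f.write("\n".join(links))
```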
37,757,233 | 2016-06-10T21:12:00.000 | 0 | 0 | 0 | 0 | python,scrapy | 63,207,972 | 9 | false | 1 | 0 | Make sure you activate your virtual environment first; the activate script is
"Scripts\activate.bat" | 5 | 6 | 0 | I'm trying to run a scraping program I wrote for in python using scrapy on an ubuntu machine. Scrapy is installed. I can import until python no problem and when try pip install scrapy I get
Requirement already satisfied (use --upgrade to upgrade): scrapy in /system/linux/lib/python2.7/dist-packages
When I try to run scrapy from the command, with scrapy crawl ... for example, I get.
The program 'scrapy' is currently not installed.
What's going on here? Are the symbolic links messed up? And any thoughts on how to fix it? | Scrapy installed, but won't run from the command line | 0 | 0 | 0 | 24,069 |
37,757,233 | 2016-06-10T21:12:00.000 | 7 | 0 | 0 | 0 | python,scrapy | 40,753,182 | 9 | false | 1 | 0 | I tried the following: sudo pip install scrapy, however I was promptly advised by Ubuntu 16.04 that it was already installed.
I had to first use sudo pip uninstall scrapy, then sudo pip install scrapy for it to successfully install.
Now you should successfully be able to run scrapy. | 5 | 6 | 0 | I'm trying to run a scraping program I wrote for in python using scrapy on an ubuntu machine. Scrapy is installed. I can import until python no problem and when try pip install scrapy I get
Requirement already satisfied (use --upgrade to upgrade): scrapy in /system/linux/lib/python2.7/dist-packages
When I try to run scrapy from the command, with scrapy crawl ... for example, I get.
The program 'scrapy' is currently not installed.
What's going on here? Are the symbolic links messed up? And any thoughts on how to fix it? | Scrapy installed, but won't run from the command line | 1 | 0 | 0 | 24,069 |
37,757,233 | 2016-06-10T21:12:00.000 | 3 | 0 | 0 | 0 | python,scrapy | 60,684,096 | 9 | false | 1 | 0 | I faced the same problem and solved it using the following method. I think scrapy is not usable by the current user.
Uninstall scrapy.
sudo pip uninstall scrapy
Install scrapy again using -H.
sudo -H pip install scrapy
Should work properly. | 5 | 6 | 0 | I'm trying to run a scraping program I wrote for in python using scrapy on an ubuntu machine. Scrapy is installed. I can import until python no problem and when try pip install scrapy I get
Requirement already satisfied (use --upgrade to upgrade): scrapy in /system/linux/lib/python2.7/dist-packages
When I try to run scrapy from the command, with scrapy crawl ... for example, I get.
The program 'scrapy' is currently not installed.
What's going on here? Are the symbolic links messed up? And any thoughts on how to fix it? | Scrapy installed, but won't run from the command line | 0.066568 | 0 | 0 | 24,069 |
37,757,233 | 2016-06-10T21:12:00.000 | 16 | 0 | 0 | 0 | python,scrapy | 55,285,170 | 9 | false | 1 | 0 | I had the same error. Running scrapy in a virtual environment solved it.
Create a virtual env : python3 -m venv env
Activate your env : source env/bin/activate
Install Scrapy with pip : pip install scrapy
Start your crawler : scrapy crawl your_project_name_here
For example my project name was kitten, I just did the following in step 4
scrapy crawl kitten
NOTE: I did this on Mac OS running Python 3+ | 5 | 6 | 0 | I'm trying to run a scraping program I wrote for in python using scrapy on an ubuntu machine. Scrapy is installed. I can import until python no problem and when try pip install scrapy I get
Requirement already satisfied (use --upgrade to upgrade): scrapy in /system/linux/lib/python2.7/dist-packages
When I try to run scrapy from the command, with scrapy crawl ... for example, I get.
The program 'scrapy' is currently not installed.
What's going on here? Are the symbolic links messed up? And any thoughts on how to fix it? | Scrapy installed, but won't run from the command line | 1 | 0 | 0 | 24,069 |
37,757,233 | 2016-06-10T21:12:00.000 | 0 | 0 | 0 | 0 | python,scrapy | 37,914,201 | 9 | false | 1 | 0 | I had the same issue. sudo pip install scrapy fixed my problem, although I don't know why must use sudo. | 5 | 6 | 0 | I'm trying to run a scraping program I wrote for in python using scrapy on an ubuntu machine. Scrapy is installed. I can import until python no problem and when try pip install scrapy I get
Requirement already satisfied (use --upgrade to upgrade): scrapy in /system/linux/lib/python2.7/dist-packages
When I try to run scrapy from the command, with scrapy crawl ... for example, I get.
The program 'scrapy' is currently not installed.
What's going on here? Are the symbolic links messed up? And any thoughts on how to fix it? | Scrapy installed, but won't run from the command line | 0 | 0 | 0 | 24,069 |
37,757,686 | 2016-06-10T21:57:00.000 | 0 | 0 | 1 | 0 | python,json,validation,parsing | 37,757,873 | 2 | false | 0 | 0 | If I understand you right, the validator uses a json library to read the file, and then makes some additional checks.. This sounds like a good design to me; why reinvent the wheel? If invalid JSON gives error messages that are too cryptic, how about catching them in a try ... except block and formulating the error message your own way? In addition to the exception message, you can recover quite a bit of information about the error by inspecting the exception.
If you could be more specific about what kind of error message you find unhelpful and what you would like to see instead (with suitable input that triggers the error), maybe someone can explain how. | 1 | 1 | 0 | I'm using a file format. The format is, effectively, JSON with a particular structure. The format comes with a validator, which is great, and gives helpful error messages. However, the validator fails when the error causes the input to be invalid JSON, and gives a very poor error message.
I can use this with a normal JSON validator, but what I really want to do is to be able to put a JSON structure into a tool, and get a (python) parser out of the other end. Obviously there are various ways of doing this, my question is: are there any ways of defining a JSON format that let me avoid writing a parser for JSON itself?
The use case is this: I would like to build a 'proper' validator for the format, so that a user can upload their file and have it checked. I can just write the BNF, but I'd like to write the BNF for a tool that understand it was BNF-within-JSON. | Generate a python parser for a particular JSON format | 0 | 0 | 0 | 104 |
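A small sketch of the try/except approach suggested in the answer above; on Python 3.5+ the exception object carries the line and column of the failure:

```python
# Sketch: json.JSONDecodeError (Python 3.5+) exposes msg, lineno and colno,
# which is usually enough to build a friendlier error message.
import json

def load_with_friendly_errors(text):
    try:
        return json.loads(text)
    except json.JSONDecodeError as exc:
        raise ValueError(
            "Not valid JSON: {} (line {}, column {})".format(exc.msg, exc.lineno, exc.colno)
        )

# Example: a trailing comma triggers the friendlier message.
# load_with_friendly_errors('{"a": 1,}')
```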
37,757,927 | 2016-06-10T22:22:00.000 | 1 | 0 | 0 | 0 | php,python,mysql | 37,759,626 | 2 | false | 0 | 0 | It's a database. It doesn't care what language or application you're using to access it. That's one of the benefits of having standards like the MySQL protocol, SQL in general, or even things like TCP/IP: they allow different systems to seamlessly inter-operate. | 2 | 0 | 0 | Is it possible to access the same MySQL database using Python and PHP? I am developing a video searching website based on semantics. For that purpose I have to use Python and Java EE, so I have to make a database to store video data, and it should be accessible through both Python and Java EE. I can use PHP to interface between Java EE and the MySQL database, but my problem is whether Python can access the same database.
I am new here and to development. I appreciate your kindness and think I can get a good solution here. | can python and php access same mysqldb? | 0.099668 | 1 | 0 | 96
37,757,927 | 2016-06-10T22:22:00.000 | 0 | 0 | 0 | 0 | php,python,mysql | 37,760,285 | 2 | false | 0 | 0 | Like @tadman said, yes.
All you care about is making a new connection and obtaining a cursor in each of your program (no matter what language).
The cursor is what does what you want (analogous to executing an actual query in whatever program you're using). | 2 | 0 | 0 | Is it possible to access the same MySQL database using Python and PHP? I am developing a video searching website based on semantics. For that purpose I have to use Python and Java EE, so I have to make a database to store video data, and it should be accessible through both Python and Java EE. I can use PHP to interface between Java EE and the MySQL database, but my problem is whether Python can access the same database.
I am new here and to development. I appreciate your kindness and think I can get a good solution here. | can python and php access same mysqldb? | 0 | 1 | 0 | 96
37,767,790 | 2016-06-11T19:33:00.000 | 1 | 0 | 0 | 0 | python,scala,apache-spark,machine-learning,scikit-learn | 37,768,933 | 1 | false | 0 | 0 | According to the discussion at https://issues.apache.org/jira/browse/SPARK-2336, MLlib (the machine learning library for Apache Spark) does not have an implementation of KNN.
You could try https://github.com/saurfang/spark-knn. | 1 | 1 | 0 | I have been working on the machine learning KNN (K Nearest Neighbors) algorithm with Python and Python's Scikit-learn machine learning API.
I have created sample code with a toy dataset simply using Python and Scikit-learn, and my KNN is working fine. But as we know, the Scikit-learn API is built to work on a single machine, and hence once I replace my toy data with millions of records it will degrade my output performance.
I have searched for many options, help and code examples that would distribute my machine learning processing in parallel using Spark with the Scikit-learn API, but I have not found any proper solution or examples.
Can you please let me know how I can achieve and increase my performance with Apache Spark and Scikit-learn's K Nearest Neighbors?
Thanks in advance!! | Scikit-learn KNN(K Nearest Neighbors ) parallelize using Apache Spark | 0.197375 | 0 | 0 | 5,059 |
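Not from the answer above, but a common hedged workaround: fit the scikit-learn model on the driver, broadcast it, and let Spark parallelize prediction (training itself still runs on one machine); all data below is toy data:

```python
# Hedged sketch: broadcast a driver-side scikit-learn model and score partitions
# in parallel. This only distributes prediction, not training.
from pyspark import SparkContext
from sklearn.neighbors import KNeighborsClassifier

sc = SparkContext("local[*]", "knn-scoring")

X_train = [[0.0], [1.0], [10.0], [11.0]]
y_train = [0, 0, 1, 1]
model = KNeighborsClassifier(n_neighbors=2).fit(X_train, y_train)
bc_model = sc.broadcast(model)

def predict_partition(rows):
    rows = list(rows)
    return bc_model.value.predict(rows).tolist() if rows else []

new_points = [[0.5], [10.5], [1.2], [9.9]]
print(sc.parallelize(new_points, 2).mapPartitions(predict_partition).collect())
```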
37,773,568 | 2016-06-12T11:12:00.000 | 0 | 0 | 0 | 1 | python,sockets,tcpclient,tcpserver | 37,773,623 | 1 | true | 0 | 0 | There are most likely two reasons for that:
1.) Your server application is not listening on that particular ip/port
2.) A firewall is blocking that ip/port
I would recommend checking your firewall settings. You could start with turning your firewall off to determine if it really is a firewall issue.
If so, just add an accept rule for your webservice (ip:port).
edit: And check your routing configuration if you are in a more or less complex network. Make sure that both networks can reach each other (e.g. ping the hosts or try to connect via telnet). | 1 | 0 | 0 | When I try to run tcpServer and tcpClient on the same local network, it works, but I can't run them on the external network. The OS refuses the connection.
Main builtins.ConnectionRefusedError: [WinError 10061] No connection could be made because the target machine actively refused it
I checked whether tcpServer is running or not using netstat, and it is in the listening state.
What am I supposed to do? | Python, tcpServer tcpClient, [WinError 10061] | 1.2 | 0 | 1 | 331 |
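As a quick way to exercise the "try to connect via telnet" suggestion from the answer above, a small reachability check; the host and port are placeholders:

```python
# Quick reachability check, roughly what "try to connect via telnet" does;
# HOST and PORT are placeholders for your server's public address.
import socket

HOST, PORT = "203.0.113.10", 5000

try:
    with socket.create_connection((HOST, PORT), timeout=5):
        print("port is reachable")
except OSError as exc:
    # "refused" usually means a firewall or nothing listening on that ip/port
    print("connection failed:", exc)
```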
37,781,380 | 2016-06-13T03:23:00.000 | 1 | 0 | 1 | 0 | python,python-2.7,shell,ubuntu | 37,781,709 | 2 | true | 0 | 0 | I am not sure what you mean by run the module.
If you just want to open fb.com, then try webbrowser.open('http://fb.com'). Give the http:// so it doesn't try to open with file:// or something. | 1 | 0 | 0 | I am using python 2 on Ubuntu and
when writing import webbrowser
webbrowser.open("fb.com")
and run the module, the shell restarts and nothing happens. What is the problem here? | Python shell is restarted every time I do “run module” inside editor? | 1.2 | 0 | 0 | 84 |
37,781,940 | 2016-06-13T04:43:00.000 | 5 | 0 | 1 | 0 | python,python-sphinx | 37,800,352 | 2 | true | 1 | 0 | You can use
:Version: |version|
in your rst | 1 | 5 | 0 | I am using sphinx and I would like to show the version of my project from conf.py on my main page for documentation. | How to show version info on index.html page | 1.2 | 0 | 0 | 1,277 |
37,784,053 | 2016-06-13T07:33:00.000 | 0 | 0 | 1 | 0 | python,multithreading,algorithm,ros | 37,784,509 | 1 | false | 0 | 0 | You should be able to build a system like you suggested. As long as it's small you shouldn't have any problems, but you should reconsider if you want something that works in larger environments or for longer runs.
The main problems I've seen with designs like this:
Clutter. There's always that one variable that doesn't have a good name and you want to use it in two modules. In this setup it ends up in the vars module making it harder to understand.
Ownership. In this setup any module can change any variable. If you need to debug then it can get really hard to figure out who did the change and why.
Hard to test. When you write tests (and you probably should write some) then you need to setup this entire large module which can get tricky.
As improvements, I would suggest not to store values, store classes that own them and are responsible for them. You can extract interfaces if it makes it easier, but you always know which module is responsible for what. This also helps when adding strange variables with imperfect names, since at least you have the module name to give you some hint of what it's for.
For testing it helps if you don't take the entire module, but only the data you need. You can have a tiny wrapper that takes the module, extracts the needed values and passes them to more testable methods.
Also take care of the global interpreter lock in python (GIL). You may not get the parallelism you expect.
Again, all this makes sense when you have a somewhat larger setup or one that you want to keep around for longer. For prototypes and experiments the setup you proposed should work just fine. | 1 | 0 | 0 | In my current algorithm, I am doing some interaction with a robot. The robot is running a ROS master, which publishes all the datas itself. My computer, where the algorithm is running, is connected to the ROS master and controls the robot.
The calculations of the algorithm is based on the current state of the robot (which will be published over ROS). For this case, it is necessary, always having the current state of the robot. However, simultaneously, my algorithm needs to make decisions based on the current robot state continuously. Because I need to do some stuff in parallel, I thought about using threads.
My idea was the following
I have a central storage (basically a python module) "vars", which only contains different kind of variables, among others the current robot state. My idea is, to update these variables from ROS so I always have up-to-date data in these variables and I can do my calculations based on this central stored variables.
My question
What do you think about my basic structure?
Is it safe, to use python module variables within threads?
Thanks for any advice. | Python module variable as central storage for data sharing and access in within threads | 0 | 0 | 0 | 110 |
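A minimal sketch of the "classes own their data" suggestion from the answer above, with a lock guarding the shared state so the ROS callback thread and the algorithm thread do not race (the field names are made up):

```python
# Sketch of a small owner class for the shared robot state; field names are made up.
import threading

class RobotState(object):
    def __init__(self):
        self._lock = threading.Lock()
        self._pose = None

    def update_pose(self, pose):   # called from the ROS subscriber callback
        with self._lock:
            self._pose = pose

    def get_pose(self):            # called from the algorithm thread
        with self._lock:
            return self._pose
```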
37,785,380 | 2016-06-13T08:50:00.000 | 0 | 0 | 0 | 0 | python,c++,matlab,nao-robot | 37,796,440 | 1 | false | 0 | 0 | Using NAO C++ SDK, it may be possible to make a MEX-FILE in Matlab that "listens" to NAO. Then NAO just has to raise an event in its memory (ALMemory) that Matlab would catch to start running the script. | 1 | 0 | 1 | I have a Wizard of Oz experiment using Choregraphe to make a NAO perform certain tasks running on machine A. The participant interacting with the NAO also interacts with a machine B. When I start the experiment (in Choregraphe on machine A) I want a certain MATLAB script to start on machine B. I.e. Choregraphe will initiate the MATLAB script.
Do you have any suggestions of how to do this? My programming are limited to that of MATLAB and R, while Choregraphe is well integrated with Python and C++ hence my question here on Stack.
Kind Regards,
KD | Sync Choregraphe and Matlab | 0 | 0 | 0 | 107 |
37,787,435 | 2016-06-13T10:32:00.000 | 3 | 0 | 1 | 0 | python,django,sudo,port80 | 37,787,526 | 4 | false | 1 | 0 | No, you don't need to do this. You shouldn't be trying to run the development server on port 80; if you're setting up a production environment, use a proper server. | 1 | 0 | 0 | I have a django website setup and configured in a python virtual environment (venv) on Ubuntu and all is working fine. Now in order to to run my server on port80 I need to use "sudo" which does not execute in the context of the virtual environment, raising errors (i.e no module named django ...)
Is there a way to get "sudo" to execute in the context of the python virtual environment?! | How to make sudo execute in current python virtual environment? | 0.148885 | 0 | 0 | 340 |
37,792,956 | 2016-06-13T14:54:00.000 | 2 | 0 | 1 | 0 | python,python-3.x,pyodbc,jaydebeapi | 37,793,124 | 1 | true | 0 | 0 | Simple answer - until more details given in question:
In case you want to speak ODBC with the database: Go with pyodbc or for a pure python solution with pypyodbc
Else if you want to talk JDBC with the database try jaydebeapi
This should depend more on the channel you want to use between python and the database and less on the version of python you are using.
When to use pyodbc and when to use jaydebeapi in Python 2/3?
Let me elaborate with a couple of example scenarios...
If I were a solution architect and am looking at a Pyramid Web Server looking to access multiple RDBMS types (HSQLDB, Maria, Oracle, etc) with the expectation of heavy to massive concurrency and need for performance in latency in a monolithic web server, which paradigm would be chosen? And why?
If I were to implement an Enterprise Microservice solution (a.k.a. the new SOA) with each microservice accessing specific targeted RDBMS but with heavy load and perfomance latency requirements each, which paradigm would be chosen? And why?
Traditionally JDBCs performed significantly better in large Enterprise solutions requiring good concurrency. Are the same idiosyncracies prevalent in Python ? Is there another way besides the two mentioned above?
I am new to Python so please be patient if my question doesn't make sense and I'll attempt to elaborate further. It is best to think about my question from a high-level solution design then going from the ground up as a developer. What would you mandate as the paradigm if you were the sol-architect? | When to use pyodbc and when to use jaydebeapi in Python 2/3? | 1.2 | 1 | 0 | 1,110 |
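For reference, minimal connection sketches for both routes named in the answer above; every DSN, URL, class name and path below is a placeholder, not a working value:

```python
# Hedged sketch: the DSN, JDBC URL, driver class and jar path are placeholders;
# adjust them for your actual database and driver.
import pyodbc
import jaydebeapi

# ODBC route (pypyodbc exposes essentially the same connect call)
odbc_conn = pyodbc.connect("DSN=mydsn;UID=user;PWD=secret")
odbc_cur = odbc_conn.cursor()
odbc_cur.execute("SELECT 1")
print(odbc_cur.fetchone())

# JDBC route
jdbc_conn = jaydebeapi.connect(
    "org.hsqldb.jdbcDriver",                 # driver class name (placeholder)
    "jdbc:hsqldb:hsql://localhost/testdb",   # JDBC URL (placeholder)
    ["user", "secret"],                      # credentials
    "/path/to/hsqldb.jar",                   # driver jar (placeholder)
)
jdbc_cur = jdbc_conn.cursor()
jdbc_cur.execute("SELECT 1 FROM INFORMATION_SCHEMA.SYSTEM_USERS")
print(jdbc_cur.fetchone())
```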
37,793,923 | 2016-06-13T15:40:00.000 | 1 | 0 | 0 | 0 | python,bots | 37,794,163 | 2 | true | 0 | 0 | You could save the content to a local file and use webbrowser.open_new("file://yourlocalfile.html") but this has one major flaw:
Because of the browser's same-origin policy, this page could not load any external JS, CSS or pictures. | 1 | 0 | 0 | I'm writing a script that makes a POST request to a URL; I'd then like to open the response page in the system's browser. I'm having trouble finding out how. | Open a url response in the browser | 1.2 | 0 | 1 | 2,708
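A short sketch of the approach in the answer above: write the POST response to a local file and open it via a file:// URL (the target URL and payload are placeholders):

```python
# Sketch: save the POST response body locally and open it; URL/payload are placeholders.
import pathlib
import webbrowser
import requests

resp = requests.post("https://example.com/search", data={"q": "elephant"})

out = pathlib.Path("response.html").resolve()
out.write_text(resp.text, encoding="utf-8")

# Relative links, CSS and images in the page will usually break under file://.
webbrowser.open(out.as_uri())
```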
37,797,709 | 2016-06-13T19:29:00.000 | 1 | 0 | 1 | 0 | python,jupyter-notebook,ipython,nbconvert | 64,225,256 | 15 | false | 0 | 0 | One way to do that would be to upload your script on Colab and download it in .py format from File -> Download .py | 1 | 96 | 0 | How do you convert an IPython notebook file (json with .ipynb extension) into a regular .py module? | Convert JSON IPython notebook (.ipynb) to .py file | 0.013333 | 0 | 0 | 201,893 |
37,798,552 | 2016-06-13T20:27:00.000 | 0 | 0 | 1 | 0 | python,python-3.5,xlwt | 37,798,884 | 1 | false | 0 | 0 | Sounds like you are typing pip install ... into a Python prompt and not a shell command prompt. This is not a Python statement but a shell command that has to be executed at the command-line prompt. | 1 | 0 | 0 | I'm trying to download the package xlwt to my Python 3.5.1 but typing 'pip install xlwt' isn't working and gives me an error at the word install that says invalid syntax, though all the websites I've checked told me to do exactly this.
I mostly have a theoretical knowledge of Python and can code pretty decently, but don't really know how to set the technology up in order to do the actual coding.
Any help would be appreciated!!! | Downloading xlwt to Python 3.5.1 | 0 | 0 | 0 | 1,235 |
37,799,065 | 2016-06-13T21:03:00.000 | 0 | 0 | 1 | 1 | python,avro,google-cloud-dataflow | 37,866,731 | 2 | true | 0 | 0 | You are correct: the Python SDK does not yet support this, but it will soon. | 1 | 1 | 0 | I am looking to ingest and write Avro files in GCS with the Python SDK. Is this currently possible with Avro leveraging the Python SDK? If so how would I do this? I see TODO comments in the source regarding this so I am not too optimistic. | Dataflow Python SDK Avro Source/Sync | 1.2 | 0 | 0 | 761 |
37,804,327 | 2016-06-14T06:20:00.000 | 0 | 0 | 0 | 0 | python,amazon-web-services,mrjob,bigdata | 38,152,994 | 1 | true | 1 | 0 | python mr_statistics.py -r emr s3://bimalucmbucket/inputFile.txt --output-dir=s3://bimalucmbucket/output --no-output -c ~/mrjob.conf | 1 | 0 | 0 | I put the mrjob.conf file in /home directory and tried to run the job from command and I am getting this error:
File "/Users/bimalthapa/anaconda/lib/python2.7/site-packages/mrjob-0.4.6- py2.7.egg/mrjob/conf.py", line 283, in conf_object_at_path
with open(conf_path) as f:
IOError: [Errno 2] No such file or directory: 'mrjob.conf'
This is my command:
python mr_statistics.py -c ~/mrjob.conf -r emr s3://bimalucmbucket/inputFile.txt --output-dir=s3://bimalucmbucket/output --no-output
What is correct way of placing mrjob.conf and correct command ? | Error while running MRJOB on AWS | 1.2 | 0 | 0 | 137 |
37,809,163 | 2016-06-14T10:15:00.000 | -1 | 0 | 0 | 0 | python,mysql | 37,839,311 | 2 | false | 0 | 0 | Moving to MySQLdb (instead of mysql.connector) solved all the issues :-) | 1 | 1 | 0 | I'm using a python driver (mysql.connector) and do the following:
_db_config = {
'user': 'root',
'password': '1111111',
'host': '10.20.30.40',
'database': 'ddb'
}
_connection = mysql.connector.connect(**_db_config) # connect to a remote server
_cursor = _connection.cursor(buffered=True)
_cursor.execute("""SELECT * FROM database LIMIT 1;""")
In some cases, the call to _cursor.execute() hangs with no exception
By the way, when connecting to a local MySQL server it seems to be ok | Call to MySQL cursor.execute() (Python driver) hangs | -0.099668 | 1 | 0 | 1,546 |
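A hedged sketch of the switch described in the answer above (MySQLdb instead of mysql.connector); note the different parameter names (passwd, db). The connect_timeout is an extra suggestion, not part of the answer, and the credentials are the question's own placeholders:

```python
# Sketch of the MySQLdb (mysqlclient) equivalent; parameter names differ from
# mysql.connector (passwd/db instead of password/database). Values are placeholders.
import MySQLdb

_connection = MySQLdb.connect(
    host="10.20.30.40",
    user="root",
    passwd="1111111",
    db="ddb",
    connect_timeout=10,   # fail fast instead of hanging on a bad network path
)
_cursor = _connection.cursor()
_cursor.execute("SELECT 1")
print(_cursor.fetchone())
```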
37,811,518 | 2016-06-14T12:05:00.000 | 1 | 0 | 1 | 0 | python,python-2.7,pycharm,project | 37,812,234 | 2 | false | 0 | 0 | If you do not want to publish the project and just use it for yourself:
Create a new __main__.py in your project root directory and start your program from there (by importing the old main.py etc.)
Make sure the program runs as expected: Run python __main__.py [arguments] in the root directory.
If this works, zip the whole project directory using an archiver and use like this:
python myproject.zip [arguments] | 1 | 0 | 0 | I have a Python project created in PyCharm. Now that it's finished I want to make it available without PyCharm (making it an executable is not relevant). It consists of different packages and quite a few files inside each package. How can I export the project so I can run it from one file that will call the rest? | How to export a Python program from PyCharm | 0.099668 | 0 | 0 | 12,292 |
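A tiny sketch of the __main__.py entry point described in the answer above; the package and function names are hypothetical:

```python
# __main__.py at the project root; "myproject.gui" and "main" are hypothetical names.
import sys
from myproject.gui import main

if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))
```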
37,811,767 | 2016-06-14T12:16:00.000 | 1 | 0 | 1 | 0 | python | 37,812,075 | 2 | false | 0 | 0 | You can install ipython, it has the history function on steroids it does exactly what you ask, without adding code anywhere. If you make a typo or type enter twice, just use the "up arrow" and you get back all the class or function. | 1 | 1 | 0 | I like to use python interpreter, as it shows the result instantly. But I sometimes make mistakes. Like misspelling or typing 'enter' twice during writing class or function. It's really annoying work to rewrite the code.
Is it possible to add some code to a predefined class or function in the interpreter? | add code at class or function in python interperter | 0.099668 | 0 | 0 | 58 |
37,815,371 | 2016-06-14T14:49:00.000 | 33 | 0 | 1 | 0 | python,opencv,ffmpeg,pyinstaller | 59,979,390 | 8 | false | 0 | 1 | Extending Vikash Kumar's answer, build the application by adding the --hidden-import argument to the command.
For example, running the command given below worked for me.
"pyinstaller --hidden-import=pkg_resources.py2_warn example.py"
update: added missing "=" | 2 | 33 | 0 | This is my first time posting a question here as most of my questions have already been answered by someone else! I am working on a GUI application in python and am attempting to use pyinstaller to package it into a single folder and .exe for easier portability. Currently, I am using windows 10 and anaconda3 to manage my python packages. The application relies on tkinter, pillow, numpy, scikit-learn, opencv, ffmpeg, and matplotlib. The application is formatted with a main GUI.py file that creates objects of a number of other files (many of which are stored in a subfolder as this GUI is replacing a command line utility that served the same purpose). The issue I am running into (as you can see in the title) is that the .exe is throwing the error block:
Traceback (most recent call last):
File "site-packages\PyInstaller\loader\rthooks\pyi_rth_pkgres.py", line 11, in
File "c:\users\gurnben\anaconda3\envs\opencv\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 389, in load_module
exec(bytecode, module.dict)
File "site-packages\setuptools-20.7.0-py3.5.egg\pkg_resources__init__.py", line 68, in
File "site-packages\setuptools-20.7.0-py3.5.egg\pkg_resources\extern__init__.py", line 60, in load_module
ImportError: The 'packaging' package is required; normally this is bundled with this package so if you get this warning, consult the packager of your distribution.
Failed to execute script pyi_rth_pkgres
When I look at the warn.txt it gives a massive list of missing packages including parts of some packages that are actually in the single folder package.
I have, however, successfully gotten it to recognize the dll files from opencv and it is not listed among the missing (nor is ffmpeg however I did not see any ffmpeg files in the folder). I had to pass in a custom path to get it to include the opencv files as they are not in anaconda at this time.
Any hints or ideas for next troubleshooting steps? I am overly greatful for all of the help you an offer and I can upload any code, files, etc. that would help you diagnose the issue. In the meantime I will continue searching for a solution myself! | Pyinstaller "Failed to execute script pyi_rth_pkgres" and missing packages | 1 | 0 | 0 | 55,660 |
37,815,371 | 2016-06-14T14:49:00.000 | 33 | 0 | 1 | 0 | python,opencv,ffmpeg,pyinstaller | 62,305,689 | 8 | false | 0 | 1 | This happens because a dependency was not copied. I solved it like this.
pyinstaller my_program.py
This creates a my_program.spec file; it is a base configuration file.
Open it with any text editor and search for
hiddenimports=[]
then edit it to
hiddenimports=["pkg_resources.py2_warn"]
Now call pyinstaller, passing the edited spec file instead of the program:
pyinstaller my_program.spec | 2 | 33 | 0 | This is my first time posting a question here as most of my questions have already been answered by someone else! I am working on a GUI application in python and am attempting to use pyinstaller to package it into a single folder and .exe for easier portability. Currently, I am using windows 10 and anaconda3 to manage my python packages. The application relies on tkinter, pillow, numpy, scikit-learn, opencv, ffmpeg, and matplotlib. The application is formatted with a main GUI.py file that creates objects of a number of other files (many of which are stored in a subfolder as this GUI is replacing a command line utility that served the same purpose). The issue I am running into (as you can see in the title) is that the .exe is throwing the error block:
Traceback (most recent call last):
File "site-packages\PyInstaller\loader\rthooks\pyi_rth_pkgres.py", line 11, in
File "c:\users\gurnben\anaconda3\envs\opencv\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 389, in load_module
exec(bytecode, module.dict)
File "site-packages\setuptools-20.7.0-py3.5.egg\pkg_resources__init__.py", line 68, in
File "site-packages\setuptools-20.7.0-py3.5.egg\pkg_resources\extern__init__.py", line 60, in load_module
ImportError: The 'packaging' package is required; normally this is bundled with this package so if you get this warning, consult the packager of your distribution.
Failed to execute script pyi_rth_pkgres
When I look at the warn.txt it gives a massive list of missing packages including parts of some packages that are actually in the single folder package.
I have, however, successfully gotten it to recognize the dll files from opencv and it is not listed among the missing (nor is ffmpeg however I did not see any ffmpeg files in the folder). I had to pass in a custom path to get it to include the opencv files as they are not in anaconda at this time.
Any hints or ideas for next troubleshooting steps? I am overly greatful for all of the help you an offer and I can upload any code, files, etc. that would help you diagnose the issue. In the meantime I will continue searching for a solution myself! | Pyinstaller "Failed to execute script pyi_rth_pkgres" and missing packages | 0.07486 | 0 | 0 | 55,660 |
37,818,938 | 2016-06-14T17:52:00.000 | 30 | 0 | 1 | 0 | python,pycharm,console | 37,819,230 | 1 | true | 0 | 0 | Settings → Build Execution Deployment → Console → Python Console | 1 | 18 | 0 | How do I change the default working directory when I open a new Python Console? I have multiple projects open in my PyCharm view and the Python Console seems to be defaulting to an arbitrary one. Of course I can work around by modifying sys.path but I want a definite solution. Using Windows. | Change working directory of console in PyCharm | 1.2 | 0 | 0 | 19,467 |
37,820,234 | 2016-06-14T19:08:00.000 | 0 | 0 | 0 | 0 | python,django,session,web | 37,820,294 | 1 | true | 1 | 0 | Coookies are your answer. They will work for any URL for the same browser. assuming your user has agreed to use them.
An alternative would be to tag the links with parameters, but that is specific to the link and could be shared with others. | 1 | 0 | 0 | I'm new to web programming, so I guess my question would seem very stupid :)
I have simple website on Python/Django. There is some url, which users may open without any authentication.
I need to remember this user somehow and recognize him when he re-opens this url once again (not for a long time - say, for several hours).
By "same user" I mean "user uses same browser on same device".
How can I achieve this? Thanks in advance :) | Recognize user returning to URL from same device | 1.2 | 0 | 0 | 42 |
37,820,668 | 2016-06-14T19:36:00.000 | 1 | 0 | 0 | 0 | python,scala,apache-spark,pyspark | 37,821,215 | 2 | false | 0 | 0 | An experienced developer will be able to pick up a new language and become productive fairly quickly.
I would only consider using the two languages together if:
The deadlines are too tight to allow for the developer to get up to speed,
The integration between the modules is quite limited (and you're confident that won't change) and
There is a clear deployment strategy.
I would suggest doing a small-scale test first to confirm the deployment and integration plans you have will work. | 1 | 2 | 1 | I'm working on a project with another person. My part of the project involves analytics with Spark's Machine Learning, while my teammate is using Spark Streaming to pipeline data from the source to the program and out to an interface.
I am planning to use Scala since it has the best support for Spark. However, my teammate does not have any experience with Scala, and would probably prefer to use Python.
Given that our parts of the program are doing two different things, would it be a good idea for us to have his Python script call my Scala executable? Or would using different languages raise complications later on? | Using Multiple Languages while developing a Spark application | 0.099668 | 0 | 0 | 452 |
37,821,521 | 2016-06-14T20:29:00.000 | 3 | 0 | 1 | 0 | python,unit-testing,nose | 37,824,188 | 2 | false | 0 | 0 | Nose shouldn't run your tests in parallel by default; it should require you to explicitly pass in the --processes flag. What else is going in your DB, How many connections does it have? How many does it support? Where are the connections coming from? | 2 | 0 | 0 | I have several test objects that I am connecting to DB in each of their setups. Apparently, my DB connection limits number of accesses per IP, since I am getting this error telling me that a connection is already established when I run all the tests/ folder, but if I run them separately, they all pass. So, I'm wondering if it is the case that nosetests runs them in parallel? If so, is there a way to disable this feature? | Different test objects pass individually but have error running together. [Does Python nosetests run different test classes in parallel?] | 0.291313 | 0 | 0 | 181 |
37,821,521 | 2016-06-14T20:29:00.000 | 0 | 0 | 1 | 0 | python,unit-testing,nose | 37,869,289 | 2 | false | 0 | 0 | We figured the problem out. It was not about nose running tests in parallel. It was one of tests changing an attribute of sys, which was not used in that particular test but was affecting others. | 2 | 0 | 0 | I have several test objects that I am connecting to DB in each of their setups. Apparently, my DB connection limits number of accesses per IP, since I am getting this error telling me that a connection is already established when I run all the tests/ folder, but if I run them separately, they all pass. So, I'm wondering if it is the case that nosetests runs them in parallel? If so, is there a way to disable this feature? | Different test objects pass individually but have error running together. [Does Python nosetests run different test classes in parallel?] | 0 | 0 | 0 | 181 |
37,823,426 | 2016-06-14T22:55:00.000 | 0 | 0 | 1 | 0 | python-idle,python-3.5,auto-indent | 37,849,464 | 1 | true | 0 | 0 | There is not way to turn auto indent off. The Shell currently uses tabs because of the '>>> ' prompt (I hope to improve this someday). The editor uses spaces according to the configuration setting on the Options / Configure IDLE / Fonts/Tab tab. The default is now 4 spaces per tab. The smart indent is usually right according to PEP 8. 'Wrong indents' are usually a sign of a syntax error. IDLE indents according to what you write, not what you intend. | 1 | 0 | 0 | I've been coding in IDLE for a few days now. I have version 3.5.1 of python if that helps.
I was coding today and I noticed that whenever I start a new line it will be indented. It's quite annoying actually, because most of the time I don't even need the code to be further indented, and when I do it never puts in the correct amount of space. It tends to be a tab and a space too far.
Is there any known method to prevent this? | How can I stop IDLE from auto indenting? | 1.2 | 0 | 0 | 1,135
37,823,965 | 2016-06-14T23:50:00.000 | 0 | 0 | 1 | 0 | python,windows-10,python-idle | 37,823,985 | 1 | false | 0 | 0 | While this won't help you get IDLE open, I would strongly suggest looking into using another IDE, such as Pycharm's free edition. | 1 | 0 | 0 | I am very new to programming and I am trying to use Python on my computer. I downloaded and installed the program, but when I try to open IDLE, the Windows blue loading circle pops up and then disappears and nothing else happens. I'm using Windows 10 and Python version 3.4.3. I've tried downloading it from different sources, changing the path in environmental variables, repairing Python, and googling answers, but nobody seems to have the answer or even a similar problem. I would greatly appreciate any help. | IDLE won't open | 0 | 0 | 0 | 851 |
37,825,196 | 2016-06-15T02:38:00.000 | 1 | 0 | 1 | 0 | python | 37,825,264 | 2 | false | 0 | 0 | A few solutions:
Open a terminal, type python, and see what it says in the preamble.
In linux, type which python
On windows, type where python | 2 | 3 | 0 | I'm not just looking for the version but specifically the distribution, i.e. whether it's Anaconda, Python(x,y), etc. | How can I find out what distribution of Python I'm using? | 0.099668 | 0 | 0 | 3,400 |
37,825,196 | 2016-06-15T02:38:00.000 | 3 | 0 | 1 | 0 | python | 37,826,030 | 2 | false | 0 | 0 | Open a terminal (or command line on Windows) and type python --version or python -V (capital "V" for the second one). For instance, on my Windows machine this returns:
Python 3.4.4 :: Anaconda 4.0.0 (64-bit)
Unless I'm in my Python 2.7 virtual env, in which case it returns:
Python 2.7.11 :: Anaconda 4.0.0 (64-bit)
which python tells you where the binary is located, but often does not give you much of an idea about which version it is (although if it's in an anaconda folder, you know it's anaconda, and that sort of thing). | 2 | 3 | 0 | I'm not just looking for the version but specifically the distribution, i.e. whether it's Anaconda, Python(x,y), etc. | How can I find out what distribution of Python I'm using? | 0.291313 | 0 | 0 | 3,400 |
37,825,965 | 2016-06-15T04:08:00.000 | 1 | 1 | 1 | 0 | python,python-idle | 37,849,395 | 2 | false | 0 | 0 | You did not specify the exact release you are using, but currently (Since about Sept 2014), IDLE makes changing the popup delay easy. Select Options and Configure Extensions if you see that choice. Otherwise select Configure IDLE and then the Extensions tab (since Fall 2015). In either case, select AutoComplete and change the popupwait. I happen to have reset to to 0 for myself. I think 2 seconds is too long, but changing the default is problematical. | 1 | 1 | 0 | I am playing with IDLE. But it seems that intellisense in IDLE is a bit slow. When we type time. I need to wait a second or more for the intellisense to appear. What is the reason for this? I have heard that IDLE is developed in Python itself and that Python is a bit slower than other languages (slower but not notably so).
Now, is the slowness of Python the reason? | Intellisense in IDLE is slow. Is the slowness of python the reason? | 0.099668 | 0 | 0 | 273 |
37,826,018 | 2016-06-15T04:13:00.000 | 0 | 0 | 0 | 0 | python,hybrid-mobile-app,python-appium | 37,913,958 | 1 | true | 1 | 0 | Question has resolved via update chrome driver to latest version(51.0.2704.103 m (64-bit)). | 1 | 0 | 0 | My python scripts as below:
wrong at the red arrow: | Python scripts not working after switch_to.context('webview') when testing on hybrid Android app? | 1.2 | 0 | 0 | 89 |
37,826,093 | 2016-06-15T04:21:00.000 | 0 | 1 | 0 | 0 | python,selenium | 37,828,643 | 2 | false | 0 | 0 | You won't need a Selenium Grid for this. The Grid is used to distribute the test execution across multiple machines. Since you're only using one machine you don't need to use it.
You are running tests so I'm assuming you are using a test framework. You should do some research on how you can run tests in parallel using this framework.
There will probably also be a way to execute a function before test execution. In this function you can start the driver.
I'd be happy to give you a more detailed answer, but your question doesn't mention which framework you are using to run the tests. | 1 | 0 | 0 | I am using Selenium with Python and I would like to speed up my tests, say by running 5 tests simultaneously. How can I achieve that on a single machine with the help of Selenium Grid? | How to Speed up Test Execution Using Selenium Grid on single machine | 0 | 0 | 1 | 303 |
37,826,135 | 2016-06-15T04:25:00.000 | 1 | 0 | 1 | 0 | python-2.7,powershell,python-idle,repr | 37,826,553 | 2 | false | 0 | 0 | repr() - Return a string containing a printable representation of an object.
The output of print repr('foo') is 'foo'.
When you evaluate repr('foo') on its own in the interactive shell you get the output shown within quotes (as a string) => "'foo'"
When you run print repr('foo') from a python script you get the out string printed as => 'foo'
When you just put repr('foo') in your script you get nothing as you don't have a print statement to print the output. | 2 | 0 | 0 | Executing python ex1.py with ex1.py contents: print repr('foo') yields
'foo'
But executing repr('foo') on IDLE yields
"'foo'"
Alternatively, executing print repr('foo') on IDLE yields
'foo'
And executing python ex1.py with ex1.py contents: repr('foo') clearly yields
\n
For the former three cases...what's going on here? | repr() output different in IDLE and Windows Powershell | 0.099668 | 0 | 0 | 211 |
37,826,135 | 2016-06-15T04:25:00.000 | 2 | 0 | 1 | 0 | python-2.7,powershell,python-idle,repr | 37,826,488 | 2 | true | 0 | 0 | repr('foo') is an expression whose value is the 5-character string 'foo'.
Therefore:
Printing the result of repr('foo') will display 'foo'.
Typing repr('foo') in a Python interpreter (such as IDLE's shell) will show the repr of 'foo', which is "'foo'".
Running a Python script containing just the code repr('foo') won't print anything, so you just get an empty output (the \n is likely added by your shell). | 2 | 0 | 0 | Executing python ex1.py with ex1.py contents: print repr('foo') yields
'foo'
But executing repr('foo') on IDLE yields
"'foo'"
Alternatively, executing print repr('foo') on IDLE yields
'foo'
And executing python ex1.py with ex1.py contents: repr('foo') clearly yields
\n
For the former three cases...what's going on here? | repr() output different in IDLE and Windows Powershell | 1.2 | 0 | 0 | 211 |
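A short script reproducing the three cases discussed above; the extra quotes in IDLE appear because the interactive shell displays the repr of the result, while print displays the string itself:

    s = repr('foo')   # the 5-character string 'foo' (quotes included)
    print(s)          # prints: 'foo'
    print(repr(s))    # prints: "'foo'"  -- what the interactive shell shows for repr('foo')
    repr('foo')       # in a script this value is simply discarded, so nothing is printed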
37,829,169 | 2016-06-15T07:46:00.000 | 2 | 0 | 0 | 0 | python,neural-network,tensorflow | 37,846,825 | 4 | false | 0 | 0 | Trying to squeeze blood from a stone!
I'm skeptical that with 4283 training examples your net will learn 62 categories...that's a big ask for such a small amount of data. Especially since your net is not a conv net...and it's forced to reduce its dimensionality to 100 at the first layer. You may as well pca it and save time.
Try this:
Step 1: download an MNIST example and learn how to train and run it.
Step 2: use the same mnist network design and throw your data at it...see how it goes. you may need to pad your images. Train and then run it on your test data.
Now step 3: take your fully trained step 1 mnist model and "finetune" it by continuing to train with your data(only) and with a lower learning rate for a few epochs(ultimately determine #epochs by validation). Then run it on your test data again and see how it does. Look up "transfer learning"...and a "finetuning example" for your toolkit.(Note that for finetuning you need to mod the output layer of the net)
I'm not sure how big your original source images are, but you can resize them and throw a pre-trained cifar100 net at it (finetuned) or even an imagenet one if the source images are big enough. Hmm, cifar/imagenet are for colour images...but you could replicate your greyscale to each rgb band for fun.
Mark my words...these steps may "seem simple"...but if you can work through it and get some results(even if they're not great results) by finetuning with your own data, you can consider yourself a decent NN technician.
One good tutorial for finetuning is on the Caffe website...flickr style(I think)...there's gotta be one for TF too.
The last step is to design your own CNN...be careful when changing filter sizes--you need to understand how it affects outputs of each layer and how information is preserved/lost.
I suppose another thing to do is to do "data augmentation" to get yourself some more of it. slight rotations/resizing/lighting...etc. Tf has some nice preprocessing for doing some of this...but some will need to be done by yourself.
good luck! | 3 | 0 | 1 | I'm building a neural net using TensorFlow and Python, and using the Kaggle 'First Steps with Julia' dataset to train and test it. The training images are basically a set of images of different numbers and letters picked out of Google street view, from street signs, shop names, etc. The network has 2 fully-connected hidden layers.
The problem I have is that the network will very quickly train itself to only give back one answer: the most common training letter (in my case 'A'). The output is in the form of a (62, 1) vector of probabilities, one for each number and letter (upper- and lower-case). This vector is EXACTLY the same for all input images.
I've then tried to remove all of the 'A's from my input data, at which point the network changed to only give back the next most common input type (an 'E').
So, is there some way to stop my network stopping at a local minima (not sure if that's the actual term)? Is this even a generic problem for neural networks, or is it just that my network is broken somehow?
I'm happy to provide code if it would help.
EDIT: These are the hyperparameters of my network:
Input size : 400 (20x20 greyscale images)
Hidden layer 1 size : 100
Hidden layer 2 size : 100
Output layer size : 62 (Alphanumeric, lower- and upper-case)
Training data size : 4283 images
Validation data size : 1000 images
Test data size : 1000 images
Batch size : 100
Learning rate : 0.5
Dropout rate : 0.5
L2 regularisation parameter : 0 | Neural network only learns most common training image | 0.099668 | 0 | 0 | 379 |
37,829,169 | 2016-06-15T07:46:00.000 | 0 | 0 | 0 | 0 | python,neural-network,tensorflow | 37,846,462 | 4 | false | 0 | 0 | Which optimizer are you using? If you've only tried gradient descent, try using one of the adaptive ones (e.g. adagrad/adadelta/adam). | 3 | 0 | 1 | I'm building a neural net using TensorFlow and Python, and using the Kaggle 'First Steps with Julia' dataset to train and test it. The training images are basically a set of images of different numbers and letters picked out of Google street view, from street signs, shop names, etc. The network has 2 fully-connected hidden layers.
The problem I have is that the network will very quickly train itself to only give back one answer: the most common training letter (in my case 'A'). The output is in the form of a (62, 1) vector of probabilities, one for each number and letter (upper- and lower-case). This vector is EXACTLY the same for all input images.
I've then tried to remove all of the 'A's from my input data, at which point the network changed to only give back the next most common input type (an 'E').
So, is there some way to stop my network stopping at a local minima (not sure if that's the actual term)? Is this even a generic problem for neural networks, or is it just that my network is broken somehow?
I'm happy to provide code if it would help.
EDIT: These are the hyperparameters of my network:
Input size : 400 (20x20 greyscale images)
Hidden layer 1 size : 100
Hidden layer 2 size : 100
Output layer size : 62 (Alphanumeric, lower- and upper-case)
Training data size : 4283 images
Validation data size : 1000 images
Test data size : 1000 images
Batch size : 100
Learning rate : 0.5
Dropout rate : 0.5
L2 regularisation parameter : 0 | Neural network only learns most common training image | 0 | 0 | 0 | 379 |
37,829,169 | 2016-06-15T07:46:00.000 | 0 | 0 | 0 | 0 | python,neural-network,tensorflow | 37,829,933 | 4 | false | 0 | 0 | Your learning rate is way too high. It should be around 0.01, you can experiment around it but 0.5 is too high.
With a high learning rate, the network is likely to get stuck in a configuration and output something fixed, like you observed.
EDIT
It seems the real problem is the unbalanced classes in the dataset. You can try:
to change the loss so that less frequent examples get a higher loss
change your sampling strategy by using balanced batches of data. When selecting the 64 examples in your batch, select randomly in the dataset but with the same probability for each class. | 3 | 0 | 1 | I'm building a neural net using TensorFlow and Python, and using the Kaggle 'First Steps with Julia' dataset to train and test it. The training images are basically a set of images of different numbers and letters picked out of Google street view, from street signs, shop names, etc. The network has 2 fully-connected hidden layers.
The problem I have is that the network will very quickly train itself to only give back one answer: the most common training letter (in my case 'A'). The output is in the form of a (62, 1) vector of probabilities, one for each number and letter (upper- and lower-case). This vector is EXACTLY the same for all input images.
I've then tried to remove all of the 'A's from my input data, at which point the network changed to only give back the next most common input type (an 'E').
So, is there some way to stop my network stopping at a local minima (not sure if that's the actual term)? Is this even a generic problem for neural networks, or is it just that my network is broken somehow?
I'm happy to provide code if it would help.
EDIT: These are the hyperparameters of my network:
Input size : 400 (20x20 greyscale images)
Hidden layer 1 size : 100
Hidden layer 2 size : 100
Output layer size : 62 (Alphanumeric, lower- and upper-case)
Training data size : 4283 images
Validation data size : 1000 images
Test data size : 1000 images
Batch size : 100
Learning rate : 0.5
Dropout rate : 0.5
L2 regularisation parameter : 0 | Neural network only learns most common training image | 0 | 0 | 0 | 379 |
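A rough sketch of the balanced-batch idea from the last answer (names are hypothetical; it assumes images and labels are numpy arrays, with labels holding integer class ids):

    import numpy as np

    def balanced_batch(images, labels, batch_size):
        # Pick classes uniformly, then a random example of each chosen class,
        # so rare letters are seen as often as the very common 'A'.
        classes = np.unique(labels)
        chosen = np.random.choice(classes, size=batch_size)
        idx = np.array([np.random.choice(np.where(labels == c)[0]) for c in chosen])
        return images[idx], labels[idx]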
37,830,020 | 2016-06-15T08:28:00.000 | 0 | 0 | 1 | 0 | python,python-3.x,methods,packages,class-method | 37,830,779 | 1 | false | 0 | 0 | For better readability, I got a few suggestions.
Collapse all function definitions, which can easily be achieved by one click in most popular text editors.
Place related methods next to each other.
Give proper names to categorise and differentiate methods, for
example.
def search_board_first(): pass
def search_deep_first(): pass
Regarding splitting a huge class into smaller object-oriented pieces, my rule of thumb is to consider reusability. If functions can be reused by other classes, they should be put in separate files and made independent (static) of other classes.
If the methods are tied to the class and nowhere else, it is better to just enclose those methods within the class itself. Think of it this way: to review the code you need to refer to the class properties anyway, so logically it doesn't quite make sense to split files just for the sake of splitting. | 1 | 0 | 0 | I wrote a class with many different parameters; depending on the parameter, the class uses different actions.
I can easily differentiate my methods into different cases, each set of methods belonging to certain parameters.
This resulted in a huge .py file, implementing all methods in the one class. For better readability, is it possible to write multiple methods in an own file and load it (similar as a package) into the class to treat them as class methods?
To give more details, my class is a decision tree. A parameter for example is the pruning method, used to shrink the tree. As I use different pruning methods, this takes a lot of lines in my class. I need to have a set of methods for each pruning parameter. It would be nice to simply load the methods for pruning from another file into the class and therefore shrinking the size of my decision tree .py file. | Python: Separate class methods from the class in an own package | 0 | 0 | 0 | 228 |
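One hedged sketch of how the split asked about here could look in practice, using a mixin per pruning strategy (all module, class and method names below are made up for illustration):

    # pruning_reduced_error.py
    class ReducedErrorPruningMixin(object):
        def prune(self, validation_data):
            # pruning-specific methods live in their own file
            pass

    # decision_tree.py
    from pruning_reduced_error import ReducedErrorPruningMixin

    class DecisionTree(ReducedErrorPruningMixin):
        def fit(self, data):
            pass

This keeps the main class file small while the pruning methods still behave as ordinary methods of the tree class.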
37,830,836 | 2016-06-15T09:03:00.000 | 3 | 0 | 0 | 0 | python,command,exit,trace32 | 37,845,692 | 1 | true | 0 | 0 | To close the PowerView main window use TRACE32 command QUIT | 1 | 2 | 0 | I load and execute a cmm script inside trace32 application using bmm commands.
When the execution is over I need to close the entire T32 application window itself (similar to File -> Exit) using a cmm command. How can I do that? | how to close trace 32 application itself through cmm command? | 1.2 | 0 | 0 | 993 |
37,832,937 | 2016-06-15T10:33:00.000 | 0 | 0 | 0 | 0 | python,pyspark,caffe | 37,876,437 | 1 | true | 1 | 0 | Found that you can add the additional files to all the workers by using --files argument in spark-submit. | 1 | 0 | 1 | I am using PySpark 1.6.1 for my spark application. I have additional modules which I am loading using the argument --py-files. I also have a h5 file which I need to access from one of the modules for initializing the ApolloNet.
Is there any way I could access those files from the modules if I put them in the same archive? I tried this approach but it was throwing an error because the files are not there in every worker. I can think of copying the file to each of the workers but I want to know if there are better ways to do it? | Adding h5 files in a zip to use with PySpark | 1.2 | 0 | 0 | 84 |
37,833,638 | 2016-06-15T11:05:00.000 | 1 | 0 | 0 | 0 | android,python,opencv,qpython | 45,597,300 | 2 | false | 0 | 1 | You can install opencv from qpython libraries(install from QPypi).
QPython --> Libraries --> QPypi --> opencv-qpython --> install
Use it as
import cv | 2 | 3 | 0 | I'm developing a project (in Python) that does video processing using OpenCV. Now I'm planning to implement that in my android phone. I read that Qpython supports python on android. So is there any way to import third party libs like OpenCV in Qpython.
Thanks in advance. | OpenCV in Qpython | 0.099668 | 0 | 0 | 7,472 |
37,833,638 | 2016-06-15T11:05:00.000 | 0 | 0 | 0 | 0 | android,python,opencv,qpython | 38,175,636 | 2 | false | 0 | 1 | In Qpython3 there is a pip program (not sure if on qpython too) but running that program you can "pip install [your module]" Worked for me installing sqlalchemy and youtube_dl | 2 | 3 | 0 | I'm developing a project (in Python) that does video processing using OpenCV. Now I'm planning to implement that in my android phone. I read that Qpython supports python on android. So is there any way to import third party libs like OpenCV in Qpython.
Thanks in advance. | OpenCV in Qpython | 0 | 0 | 0 | 7,472 |
37,836,077 | 2016-06-15T12:53:00.000 | 1 | 0 | 0 | 0 | python,openerp,fileopendialog | 37,839,268 | 2 | false | 1 | 0 | You can define binary fields in Odoo, like other fields. Look into ir.attachment model definition and its view definitions to get a good hint, how do it for such fields. | 1 | 1 | 0 | does anybody knows how to open a filedialog on Odoo? I've add a button on a custom view, now I would like to browse for a file on THE CLIENT when this button is clicked.
Any ideas?
Thanks! | Odoo python fileopendialog | 0.099668 | 0 | 0 | 108 |
37,836,380 | 2016-06-15T13:07:00.000 | 1 | 0 | 1 | 0 | python-2.7,openmdao | 37,845,674 | 2 | true | 0 | 0 | Thanks to swryan for the link, i found the answer.
One of the possible answers was to put libgfortran.so.3 in anaconda2/lib, but I already had it.
They were also saying the problem was solved by installing Anaconda 4.0+, but I already had the latest version.
What worked for me was to run : conda update libgfortran --force
Doing it without the --force retrogrades the scipy, which seems to disable scipy.optimize.least_squares. If you did that, you can then run conda update scipy --force | 1 | 0 | 0 | Hello everyone,
I have installed Openmdao, pyOpt and pyoptsparse on my computer. As my program works with the Scipy optimizer, I tried it with a random optimizer of pyoptsparse (that was 'ALPSO'). It worked and I was happy. But it turns out it seems to be the only one working.
Every time I try to use another one (like 'SLSQP', which is the default optimizer !), i get this message "pyOptSparse Error : There was an error importing the compiled SLSQP module", inside a frame made of '-' and '+'.
Does anybody know what to do ? I am using Ubuntu if it changes something. | Import Error with certain modules in pyoptsparse | 1.2 | 0 | 0 | 582 |
37,838,526 | 2016-06-15T14:37:00.000 | 0 | 1 | 1 | 0 | python,python-2.7,perforce,p4python | 56,567,412 | 2 | false | 0 | 0 | Note that running using Dirs and Files to recursively iterate through a directory tree is inefficient if you're planning to populate the entire tree.
If you need file info for all files under a directory, including its children, it's orders of magnitude faster to just issue the "files" command to include the entire tree (i.e. path/... as opposed to path/*).
I suspect this is because the P4 server has no concept of directories, internally. A file's "directory" in P4 is just the last path-separated token in the file's path. So, it has to do extra work to slice its file set into a directory-specific list. | 1 | 2 | 0 | I'd like to read folders and files structure inside a specified folder path on the P4 depot without syncing it. Is it possible? | How to read depot's folders structure by p4python without syncing? | 0 | 0 | 0 | 2,045 |
37,839,265 | 2016-06-15T15:08:00.000 | 0 | 0 | 0 | 1 | python,plot,gnuplot | 37,839,374 | 1 | false | 0 | 0 | You can plot the data as it is being processed, but there's a couple of issues that come along with it in terms of efficiency.
Gnuplot needs to do work each time to process your data
Gnuplot needs to wait for your operating system to paint your screen whenever it updates
Your program needs to wait for Gnuplot to do any of this in order to go to the next step
All of these will greatly impact the amount of time you spend waiting for your data. You could potentially have it run every x iterations (eg. every 5 iterations), but even this wouldn't give you much of a speed-up. | 1 | 1 | 1 | I'm in the process of converting a large (several GBs) bin file to csv format using Python so the resulting data can be plotted. I'm doing this conversion because the bin file is not in a format that a plotting tool/module could understand, so there needs to be some decoding/translation. Right now it seems like Gnuplot is the way to go for such large data size.
I'm wondering if instead of waiting for the whole file to finish converting and then running Gnuplot, is there a way to plot the data as it's being processed? Perhaps I could bypass the csv file altogether.
Everything I've read so far points to plotting a file with data, but I have not seen any ways of plotting/appending individual data points. | How to plot data while it's being processed | 0 | 0 | 0 | 67 |
37,841,276 | 2016-06-15T16:46:00.000 | 1 | 0 | 1 | 0 | python,notepad++ | 37,841,503 | 2 | false | 0 | 0 | Choose the Run Menu, and the Run… command. Enter the path to your Python executable with the script in parameter. | 1 | 0 | 0 | Exactly, I need to execute some functions after programming a script(Like you can do in the IDLE) | How do I enable a console of Python in Notepad++? | 0.099668 | 0 | 0 | 5,172 |
37,842,741 | 2016-06-15T18:08:00.000 | 0 | 1 | 0 | 0 | python,python-3.5 | 37,842,951 | 1 | false | 0 | 0 | You would have to compile the code into an exe file. The py2exe library can help you out with this | 1 | 0 | 0 | So I am creating a program that takes input, processes the data, then puts it in Excel. In order to do this, I am using the "xlwt" package (and possibly xlrd). How do I then give this program to other people without making them download python and the packages associated with my program? I considered utilizing an online python interpreter and giving the username/password to my coworkers, but xlwt isn't on any of the ones I've tried, and they don't offer a way (that I can see) to download new packages. | Universalizing my program/Making it accessible to other users | 0 | 0 | 0 | 32 |
37,845,130 | 2016-06-15T20:25:00.000 | 1 | 0 | 0 | 0 | python,django | 37,864,661 | 1 | true | 1 | 0 | It seems that what I am looking for doesn't exist. Django trusts the user to deal with migrations and such and doesn't check the database on load. So there is no place in the system where you can load some data on system start and be sure that you can actually load it. What I ended up doing is loading the data in ready(), but do a sanity check first by doing MyModel.objects.exist() in a try: except: block and returning if there was an exception. This is not ideal, but I haven't found any other way. | 1 | 1 | 0 | I have a file with a bunch of data common between several projects. The data needs to be loaded into the Django database. The file doesn't change that much, so loading it once on server start is sufficient. Since the file is shared between multiple projects, I do not have full control over the format, so I cannot convert this into a fixture or something.
I tried loading it in ready(), but then I run into a problem when creating a new database or migrating an existing database, since apparently ready() is called before migrations are complete and I get errors from using models that do not have underlying tables. I tried to set it in class_prepared signal handler, but the loading process uses more than one model, so I cannot really be sure all required model classes are prepared. Also it seems that ready() is not called when running tests, so unit tests fail because the data is missing. What is the right place to do something like this? | Load data on startup | 1.2 | 0 | 0 | 112 |
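A minimal sketch of the guarded ready() approach described in the answer (app, model and loader names are hypothetical):

    # apps.py
    from django.apps import AppConfig

    class MyAppConfig(AppConfig):
        name = 'myapp'

        def ready(self):
            from .models import MyModel
            try:
                # Fails or is skipped while the tables do not exist yet,
                # e.g. before the first migrate has been applied.
                if not MyModel.objects.exists():
                    load_shared_file_into_db()  # hypothetical loader function
            except Exception:
                return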
37,845,256 | 2016-06-15T20:33:00.000 | 0 | 0 | 0 | 0 | python,matplotlib,gnuplot,visualization | 37,890,119 | 2 | true | 0 | 0 | No, gnuplot cannot really move the viewing point, for the good reason that the viewing point is at infinity: all you can do is set an angle and magnification (using set view) and an offset within the viewing window (with set origin). That means, you can move the viewing point on a sphere at infinity, but not among the points you're plotting.
(Question 2 is off-topic as a software advice, but you're looking for a rendering software such as paraview) | 1 | 2 | 1 | I have a list of points given by their x, y, z coordinates. I would like to plot these points on a computer. I have managed to do this with gnuplot and with the python library matplotlib separately. However for these two solutions, it seems hard to change the 'viewing point', or the point from which the projection of the 3D point cloud to the 2D screen is done.
1) Is there any easy way to, preferably continuously, move the viewing point in gnuplot (the splot command) or with matplotlib (the plot command)?
2) What other libraries are there for which this is an easy task?
EDIT: I want to move the viewing point (like the player in an first-person shooter, say), not change the viewing angles. | Plotting a point cloud and moving the camera | 1.2 | 0 | 0 | 1,343 |
37,845,389 | 2016-06-15T20:42:00.000 | 0 | 0 | 0 | 0 | python,django,python-3.x | 37,845,897 | 2 | false | 1 | 0 | I assume that your models look something like this
class Contest(Model):
    ... something ...

class Picture(Model):
    user = ForeignKey(User)
    contest = ForeignKey(Contest)
    ... something ...
So, Picture.objects.filter(user=user) gives you pictures by a particular user (don't have to specify _id, filters operate on model objects just fine). And to get contests with pictures by a particular user you can do
pics_by_user = Picture.objects.filter(user=user)
contests_by_user = Contest.objects.filter(id__in=pics_by_user.values_list('contest', flat=True))
There might be an easier way though | 1 | 1 | 0 | I have a queryset from Picture.objects.filter(user_ID=user). The Picture model has "contest_ID" as a foreign key.
I'm looking to get a queryset of Contests which have Pictures, so from the queryset I already have, how do I pull a list of Contest objects? | Get a queryset from a queryset | 0 | 0 | 0 | 71 |
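For reference, the "easier way" hinted at in the answer is usually a reverse-relation lookup; assuming the default related name (the lowercased model name), something like:

    contests_with_user_pics = Contest.objects.filter(picture__user=user).distinct()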
37,846,097 | 2016-06-15T21:29:00.000 | 1 | 0 | 1 | 0 | python,security,token | 37,846,257 | 3 | true | 0 | 0 | What you can do is have a code that generates a random 32 bit string from just letters (Upper case and lower case) and numbers and save it in the DB with a expire time (A few hours will be enough).
This way, even if the "hacker" starts brute-forcing, by the time they are even close the token will have expired and they will have to start from the beginning.
Note:
If you want to make it even stronger you can also use special characters in your token. | 1 | 0 | 0 | I need to generate a token that will allow someone to do a unique action.
The difficulty is that nobody must be able to find this token and perform the action on someone else's behalf. Using Python, I would like to know if generating a random token is enough or not. Can someone tell me what is necessary for generating a strong token that can't be found easily?
Thank you for your help ! | Generate token for an action | 1.2 | 0 | 0 | 413 |
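A possible sketch of such a generator (expiry enforcement is left to the storage layer; names are illustrative):

    import os
    import binascii
    from datetime import datetime, timedelta

    def generate_token(n_bytes=16, ttl_hours=2):
        # 16 random bytes -> 32 hex characters; os.urandom is suitable for security tokens
        token = binascii.hexlify(os.urandom(n_bytes)).decode('ascii')
        expires_at = datetime.utcnow() + timedelta(hours=ttl_hours)
        return token, expires_at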
37,850,154 | 2016-06-16T04:55:00.000 | 1 | 0 | 0 | 0 | python-2.7,openerp,odoo-9 | 37,852,231 | 2 | false | 1 | 0 | User Signup is a standard feature provided by Odoo, and it seems that you already found it.
The database selector shows because you have several PostgreSQL databases.
The easiest way is to set a filter that limits it to the one you want:
start the server with the option --dbfilter=^MYDB$, where MYDB is the database name.
User data is stored both in res.users and res.partner: the user-specific data, such as login and password, is stored in res.users. Other data, such as the name, is stored in a related res.partner record. | 2 | 0 | 0 | How can I create a signup page in odoo website. The auth_signup module seems to do the job (according to their description). I don't know how to utilize it.
In the signup page there shouldn't be database selector
Where should I store the user data(including password); res.users or res.partner | Odoo website, Creating a signup page for external users | 0.099668 | 0 | 0 | 1,173 |
37,850,154 | 2016-06-16T04:55:00.000 | 2 | 0 | 0 | 0 | python-2.7,openerp,odoo-9 | 37,852,264 | 2 | true | 1 | 0 | you can turn off db listing w/ some params in in odoo.cfg conf
db_name = mydb
list_db = False
dbfilter = mydb
auth_signup takes care of the registration, you don't need to do anything. A res.user will be created as well as a partner related to it.
The pwd is stored in the user. | 2 | 0 | 0 | How can I create a signup page in odoo website. The auth_signup module seems to do the job (according to their description). I don't know how to utilize it.
In the signup page there shouldn't be database selector
Where should I store the user data(including password); res.users or res.partner | Odoo website, Creating a signup page for external users | 1.2 | 0 | 0 | 1,173 |
37,850,869 | 2016-06-16T05:50:00.000 | 2 | 1 | 1 | 0 | python,memory,memory-management,genetic-programming | 37,853,966 | 2 | true | 0 | 0 | Given the RAM constraint, I'd change the population model from generational to steady state.
The idea is to iteratively breed a new child or two, assess their fitness and then reintroduce them directly into the population itself, killing off some preexisting individuals to make room for them.
Steady state uses half the memory of a traditional genetic algorithm because there is only one population at a time.
Changing the implementation shouldn't be too hard, but you have to pay attention to premature convergence (i.e. tweak parameters like mutation rate, tournament size...).
The island model is another / additional possibility: population is broken into separate sub-populations (demes). Demes send individuals to one another to help spread news of newly-discovered fit areas of the space.
Usually it's an asynchronous mechanism, but you could use a synchronous algorithm, loading demes one by one, with a great reduction of the required memory resources.
Of course you can write the population to a file and you can load just the needed individuals. If you choose this approach, it's probably a good idea to compute a hash signature of individuals to optimize the identification / loading speed.
Anyway you should consider that, depending on the task your GP system is performing, you could register a massive performance hit. | 1 | 3 | 0 | I've created a genetic programming system in Python, but am having troubles related to memory limits. The problem is with storing all of the individuals in my population in memory. Currently, I store all individuals in memory, then reproduce the next generation's population, which then gets stored in to memory. This means that I have two populations worth of individuals loaded in memory. After some testing, I've found that I exceed the default 2GB application memory size for Windows fairly quickly.
Currently, I write out the entire population's individual trees to a file, which I can then load and recreate the population if I want. What I have been considering is instead of having all of the individuals loaded in memory, access individual information by pulling the individual from the file and only instantiating that single individual. From my understanding of Python's readline functionality, it should only load a single line from the file at a time, instead of the entire file. If I did this, I think I would be able to only store in memory the individuals that I was currently manipulating.
My question is, is there an underlining problem with doing this that I'm not seeing right now? I understand that because I am dealing with data on disk instead of in memory my performance is going to take a hit, but for this situation memory is more important than speed. Also I don't want to increase the allotted 2GB of memory given to Python programs.
Thanks! | Storing objects in file instead of in memory | 1.2 | 0 | 0 | 132 |
37,851,796 | 2016-06-16T06:50:00.000 | 1 | 0 | 0 | 0 | python,pandas,ipython,spyder | 37,851,865 | 1 | false | 0 | 0 | This come from the docs themselves. Have you read them?
low_memory : boolean, default True
Internally process the file in chunks, resulting in lower memory use
while parsing, but possibly mixed type inference. To ensure no mixed
types either set False, or specify the type with the dtype parameter.
Note that the entire file is read into a single DataFrame regardless,
use the chunksize or iterator parameter to return the data in chunks.
(Only valid with C parser) | 1 | 1 | 1 | What does the low_memory parameter do in the read_csv function from the pandas library? | low_memory parameter in read_csv function | 0.197375 | 0 | 0 | 113 |
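In practice the documentation quoted above usually translates into one of these calls (column names are hypothetical):

    import pandas as pd

    # Turn chunked type inference off ...
    df = pd.read_csv('data.csv', low_memory=False)

    # ... or, better, declare the ambiguous dtypes explicitly
    df = pd.read_csv('data.csv', dtype={'account_id': str, 'amount': float})

    # ... or genuinely stream the file in chunks if it does not fit in memory
    for chunk in pd.read_csv('data.csv', chunksize=100000):
        pass  # process each chunk here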
37,852,560 | 2016-06-16T07:27:00.000 | -1 | 1 | 1 | 0 | python | 50,852,984 | 4 | false | 0 | 0 | So uncommon it is that I have learned about it today (and I'm long ago into python).
Memory is deallocated, files closed, ... by the GC. But you might need to perform some task with effects outside of the class.
My use case is about implementing some sort of RAII for temporary directories: I'd like them to be removed no matter what.
Instead of removing it after the processing (which, after some change, was no longer run) I've moved it to the __del__ method, and it works as expected.
This is a very specific case, where we don't really care about when the method is called, as long as it's called before leaving the program. So, use with care. | 2 | 17 | 0 | I do things mostly in C++, where the destructor method is really meant for destruction of an acquired resource. Recently I started with python (which is really a fun and fantastic), and I came to learn it has GC like java.
Thus, there is no heavy emphasis on object ownership (construction and destruction).
As far as I've learned, the __init__() method makes more sense to me in python than it does for ruby too, but the __del__() method, do we really need to implement this built-in function in our class? Will my class lack something if I miss __del__()? The one scenario I could see __del__() useful is, if I want to log something when destroying an object. Is there anything other than this? | Is __del__ really a destructor? | -0.049958 | 0 | 0 | 22,561 |
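A small sketch of the temporary-directory use case from the first answer; note that a with-statement/context manager is usually the more predictable way to express this than __del__:

    import shutil
    import tempfile

    class TempWorkspace(object):
        def __init__(self):
            self.path = tempfile.mkdtemp()

        def __del__(self):
            # Best-effort cleanup; __del__ runs at an unspecified time, if at all
            shutil.rmtree(self.path, ignore_errors=True)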
37,852,560 | 2016-06-16T07:27:00.000 | 2 | 1 | 1 | 0 | python | 37,852,707 | 4 | false | 0 | 0 | Is del really a destructor?
No, the __del__ method is not a destructor; it is just a normal method you can call whenever you want to perform any operation, but it is always called before the garbage collector destroys the object.
Think of it like a clean or last will method. | 2 | 17 | 0 | I do things mostly in C++, where the destructor method is really meant for destruction of an acquired resource. Recently I started with python (which is really a fun and fantastic), and I came to learn it has GC like java.
Thus, there is no heavy emphasis on object ownership (construction and destruction).
As far as I've learned, the __init__() method makes more sense to me in python than it does for ruby too, but the __del__() method, do we really need to implement this built-in function in our class? Will my class lack something if I miss __del__()? The one scenario I could see __del__() useful is, if I want to log something when destroying an object. Is there anything other than this? | Is __del__ really a destructor? | 0.099668 | 0 | 0 | 22,561 |
37,855,059 | 2016-06-16T09:24:00.000 | 4 | 0 | 0 | 0 | python,numpy,numpy-ufunc | 37,855,371 | 2 | true | 0 | 0 | Because max is associative, but argmax is not:
max(a, max(b, c)) == max(max(a, b), c)
argmax(a, argmax(b, c)) != argmax(argmax(a, b), c) | 1 | 3 | 1 | These two look like they should be very much equivalent and therefore what works for one should work for the other? So why does accumulate only work for maximum but not argmax?
EDIT: A natural follow-up question is then how does one go about creating an efficient argmax accumulate in the most pythonic/numpy-esque way? | Why does accumulate work for numpy.maximum but not numpy.argmax | 1.2 | 0 | 0 | 1,719 |
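For the EDIT above, one way to build a running argmax on top of the associative maximum.accumulate (a sketch for 1-D arrays; ties keep the earliest index, matching argmax):

    import numpy as np

    def argmax_accumulate(a):
        a = np.asarray(a, dtype=float)
        # running max of all *previous* elements, so a strict new maximum is detectable
        prev_max = np.maximum.accumulate(np.concatenate(([-np.inf], a[:-1])))
        new_max = a > prev_max
        idx = np.where(new_max, np.arange(a.size), 0)
        return np.maximum.accumulate(idx)  # forward-fill the index of the last new maximum

    # argmax_accumulate([3, 1, 5, 5, 2]) -> array([0, 0, 2, 2, 2])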
37,857,927 | 2016-06-16T11:31:00.000 | 0 | 0 | 0 | 0 | python,selenium,webdriver,video-streaming,selenium-chromedriver | 37,862,602 | 2 | false | 0 | 0 | You need to create a script with vb or python which will close the popups on the basis of their titles.
You can even minimise them instead of closing them.
Code in VB:
Set wshShell = CreateObject("WScript.Shell")
Do
    ret = wshShell.AppActivate("title of the popup")
    If ret = True Then
        wshShell.SendKeys "%N"
        Exit Do
    End If
    WScript.Sleep 500
Loop | 1 | 1 | 0 | I'm currently writinng a script to interact with a live stream, mainly taking screenshots.
I'm using Selenium Webdriver for Python to open Chromedriver and go from there.
However, I want to build this behavior into a bigger program and hide the whole process of opening chromedriver, waiting for the stream to load and then taking a screenshot, so the user only gets the screenshot once it's done.
From what I've found online, it's not possible to hide the command-line console within my script with something like setVisible and I'm okay with the console showing up, but I really have to hide the website popup, so the screenshot will be taken in the background.
Is it possible to do so in Python/Selenium or do I have to switch to another language? | Webdriver without window popup | 0 | 0 | 1 | 1,090 |
37,861,296 | 2016-06-16T13:52:00.000 | 1 | 0 | 1 | 1 | python,homebrew | 37,862,696 | 2 | true | 0 | 0 | As others mentioned already: It's not the best idea - at least not in general - to install python packages via your systems package manager. It's better to use pip (ideally in conjunction with virtualenvs).
Apart from that, it should be possible to use the package you installed with homebrew from PyCharm / Python in general. In PyCharm you can switch between different interpreters (Settings / Project / Interpreter). You need to choose the one you installed the package for with homebrew (e.g. the one in /usr/local/Cellar/python//..).
If you used brew link the currently active one should be symlinked to /usr/local/bin/python. | 1 | 1 | 0 | I downloaded the QJson python library using brew install qjson.
What are the next steps I need to take in order to be able to work with it in PyCharm? | How to install a library using homebrew | 1.2 | 0 | 0 | 1,993 |
37,862,112 | 2016-06-16T14:28:00.000 | 0 | 0 | 0 | 0 | django,python-3.x,ldap,django-authentication | 37,865,795 | 1 | true | 1 | 0 | From the documentation:
When a user attempts to authenticate, a connection is made to the LDAP
server, and the application attempts to bind using the provided
username and password. If the bind attempt is successful, the user
details are loaded from the LDAP server and saved in a local Django
User model. The local model is only created once, and the details will
be kept updated with the LDAP record details on every login.
It authenticates by binding each time, and updates the information from LDAP (as you have it configured) each time. The Django user won't be removed from Django's user table if removed from LDAP; if you set multiple auth backends to also use the Django default auth, the user should be able to log in (perhaps after a password reset) even if removed from LDAP. If you look in your auth_user table you will notice that users using Django auth have their passwords hashed with pbkdf2_sha256, and the LDAP users' passwords do not. | 1 | 1 | 0 | I am using django-python3-ldap for LDAP authentication in Django. This works completely fine, but whenever an LDAP user is (successfully) authenticated the user details are stored in the local Django database (auth_user table).
My question now is when the same
(LDAP) user tries to authenticate next time, the user will be authenticated by LDAP or by the default Django authentication (since the user details are now stored in the local Django database)?
If the user is authenticated using the local Django database, then the user can still get access even after the user is removed from the LDAP server? This is a real concern for me.
If this is the case is there a way, so that the LDAP user details is removed from the database (auth_user table) everytime the user is logged out and created every time the user is logged in?. Any help in the right direction is highly appreciated. Thank you for your valuable inputs. | django-python3-ldap authentication | 1.2 | 0 | 0 | 1,417 |
37,865,055 | 2016-06-16T16:49:00.000 | 0 | 0 | 0 | 0 | python,flask | 37,865,699 | 1 | false | 1 | 0 | As others mentioned, you can secure the endpoint so that a user has to provide credentials to issue a successful request to that endpoint.
In addition, your endpoint should be using proper HTTP semantics if it's creating / updating data, i.e. POST to create a drink, PUT to update a drink. This will also protect you from someone just putting the URL into a browser, since that is a GET request.
TL;DR
Secure the endpoint (if possible)
Add checks that the proper request body is provided
Use proper HTTP semantics | 1 | 0 | 0 | I am making my first web app with Flask wherein a database of drinks is displayed on the front-end based on selected ingredients. Then the user selects a drink and an in-page pop-up appears with some drink info and a button "make", when the user hits "make" it calls some python code on the back end (Flask) to control GPIO on my raspberry pi to control some pumps.
Does this "make" need to call some route (e.g. /make/<drink>) in order to call the python function on the back end? I don't like the idea that any user could just enter the URL (example.com/make/<drink>) where is something in the database to force the machine to make the drink even if the proper ingredients are not loaded. Even if I did checking on the back end to ensure the user had selected ingredients were loaded, I want the user to have to user the interface instead of just entering URLs.
Is there a way so that the make button calls the python code without using a "dummy URL" for routing the button to the server? | Are "dummy URLs" required to make function calls to Flask from the front-end? | 0 | 0 | 0 | 381 |
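A hedged sketch of the POST idea (route, form field and helper functions are made up; add authentication on top if the device is reachable by other people):

    from flask import Flask, request, abort

    app = Flask(__name__)

    @app.route('/api/make', methods=['POST'])
    def make_drink():
        drink = request.form.get('drink')
        if drink is None or not ingredients_loaded_for(drink):  # hypothetical check
            abort(400)
        start_pumps(drink)  # hypothetical GPIO call
        return 'ok'

Because the route only accepts POST, pasting the URL into a browser (a GET request) will not trigger the pumps.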
37,865,079 | 2016-06-16T16:50:00.000 | 0 | 0 | 0 | 0 | django,python-2.7 | 37,871,359 | 1 | true | 1 | 0 | Never mind, I solved this problem by creating a python27_env virtual environment and pip installed all required modules, and then it worked.
I'm guessing it's due to something getting messed up in my desktop setup for python27.
Thanks guys. | 1 | 0 | 0 | I was trying to start my django server, but constantly getting the above error
django version is 1.5 (due to my project's backward compatibility issue, we cannot upgrade it to a newer version)
python version is 2.7.7
I've searched online and found that usually this is due to the Django version; once switched to 1.5 it should be fine, but for me it's still there.
Any help please? | attributeerror 'module' object has no attribute 'python_2_unicode_compatible' | 1.2 | 0 | 0 | 700 |
37,866,346 | 2016-06-16T18:02:00.000 | 2 | 0 | 1 | 0 | python,multithreading,events,garbage-collection | 37,866,866 | 2 | false | 0 | 0 | Since other objects hold a reference to the event, the event itself won't be deleted or garbage collected. It has no idea that your object is being deleted. Whether you want your class to have a __del__ that sets the event when the object is deleted (either naturally through having its ref count go to zero or though garbage collection) is entirely dependent on your event system design. Suppose I have a dozen objects referencing the event. Do I want the event fired when each one goes away? Depends! | 1 | 0 | 0 | The title of this post pretty much sums up my question - will threads waiting on an Event be notified if that event has been garbage collected? In my particular case I have a class whose instances have an Event as an attribute, and I'm wondering whether I should implement a __del__ method on this class that calls self.event.set() before it's garbage collected.
I'm new to asynchronicity, so if events don't set() when they're garbage collected, perhaps it's bad practice to do so, and better to let threads hang? Thanks in advance for any responses. | Python - is `threading.Event` "set" during garbage collection? | 0.197375 | 0 | 0 | 317 |
37,869,718 | 2016-06-16T21:35:00.000 | 0 | 0 | 1 | 0 | python,sorting | 37,869,959 | 1 | true | 0 | 0 | The first one asks you to implement a QUEUE , the second one asks you to implement a STACK, and the third and fourth are variations of PRIORITY QUEUE data structures. | 1 | 0 | 0 | I just begin to study python 3. There is a project about doing a simple restaurant simulator to see which four given approaches (1. Fist-in,fist-served, 2. last-in, first-served, 3. serve the most expensive order first, 4. serve the one with least preparation time first) yield the best results(profit and number of customer served)
The idea is to have a customer class and a restaurant class to assist the simulation.
The restaurant class have two main methods, add_customer(new_comer) and process(unit_time).
I wrote my codes in a way that each time when adding new customers, Ill add new_comer to a waiting_list and then sorted accordingly base on the four subclasses' approach, then use the same block of codes to process different approaches. However, my TA told me that it may not be the best way to keep a sorted.
Hence, I am wondering if there is a better or more efficient way to add and process customer without using sorted list?
Regards,
Sebastian | Why is not a good idea to keep a sorted list in a extremely simple restaurant simulator | 1.2 | 0 | 0 | 83 |
37,870,371 | 2016-06-16T22:26:00.000 | 0 | 0 | 1 | 1 | python,windows,file,rename | 37,871,733 | 1 | false | 0 | 0 | In pseudocode:
Find all files in locations Source and Target.
For each file in Source, calculate (checksum, size), and use this as a key in a dict, where the value is the filename.
For each file in Target, calculate (checksum, size), and look it up in the dict created previously. If it exists, rename it. | 1 | 0 | 0 | Path a/b/c/d where folder d has a bunch of files and folders, something like this.
Folder E which has abc.txt, def.jpg, and ghi.pst
Folder D also has loose files like jkl.pst, mno.jpg, pqr.txt
Path w/x/y/z where folder z has a bunch of folders named like
a3cj85zblahblahblah
asdfljklqwpeoriu833
Each of these folders contains a file, or files from one of the folders in path a/b/c/d. They are the same files except they've been renamed. So the file named abc.txt from a/b/c/d/FolderE is now in any folder in the path w/x/y/z renamed as bf6241b7c8b1.txt.
I know it's the same file because I not only compared them but they also have the same modified date, type, and size. I was thinking about using os.rename and os.walk but I don't know where to start. I'm fairly new to Python and need to get this done ASAP in Windows. | What's the best way to rename files with the same modified date and time? | 0 | 0 | 0 | 49 |
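A rough sketch of the pseudocode answer above (source and target paths are taken from the question; it keys files by content hash plus size and renames matches in the target tree back to their source names):

    import os
    import hashlib

    def file_key(path):
        h = hashlib.md5()
        with open(path, 'rb') as f:
            for block in iter(lambda: f.read(65536), b''):
                h.update(block)
        return (h.hexdigest(), os.path.getsize(path))

    source_names = {}
    for root, dirs, files in os.walk(r'a\b\c\d'):
        for name in files:
            source_names[file_key(os.path.join(root, name))] = name

    for root, dirs, files in os.walk(r'w\x\y\z'):
        for name in files:
            p = os.path.join(root, name)
            original = source_names.get(file_key(p))
            if original and original != name:
                os.rename(p, os.path.join(root, original))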
37,871,418 | 2016-06-17T00:38:00.000 | 2 | 0 | 0 | 0 | python,beautifulsoup,digital-ocean | 37,871,468 | 1 | true | 0 | 0 | If you're using ubuntu, it's way easier to install the pre-packaged version using apt-get install python-bs4or apt-get install python3-bs4 | 1 | 0 | 0 | I'm using the most basic service, running with Ubuntu (the standard config), I have developed some python scripts in my own PC that uses bs4, when I upload them it says the classical error:
bs4.FeatureNotFound: Couldn't find a tree builder with the features you requested: xml. Do you need to install a parser library?
So I try pip install lxml, and it asks that libxml2 should be installed, and so on, and so on...
I'm not a Linux person, I'm more a Windows guy, I know maybe I have to compile something but I have no idea what or how. I've been looking for tutorials all noon, but I can't find nothing helpful. | How can I install BeautifulSoup in a Digital Ocean droplet? | 1.2 | 0 | 1 | 105 |
37,877,325 | 2016-06-17T09:00:00.000 | 5 | 0 | 1 | 0 | python,ubuntu,vagrant,pycharm,virtualenv | 39,578,272 | 1 | false | 0 | 0 | I also had this issue setting up a remote interpreter with Vagrant.
It appears that for a remote interpreter you need to mark Python source root folders as "Source Folders" under Project Structure in Preferences. They should then show up as blue in your Project browser. You don't need to mark all the sub folders, just the root folder for each python project/package.
Without doing this it seems like Pycharm can't find the source files and takes you to the readonly cached code derived from the remote interpreter environment. | 1 | 6 | 0 | I setuup Project Interpreter pointing virtualenv on vagrant virtual machine (Settings / Project Interpreter / Add Remote), but when I click ctrl+B or use 'go to definition' I always end up in location like this: /home/<my_user_name>/.PyCharm50/system/remote_sources/1174787026/154306353/django/...
how to avoid such pycharm behaviour? How to force it to use virtualenvs code when go to declaration?
Using Pycharm 5.0 on Ubuntu 14.04
UPDATE: with pycharm 2017.2.* it works now good! | pycharm not using virtualenv from vagrant box when 'go to declaration' instead uses some outdated stuff from its remote_sources | 0.761594 | 0 | 0 | 458 |
37,879,558 | 2016-06-17T10:42:00.000 | 1 | 0 | 0 | 0 | python,numpy,scipy | 37,879,801 | 1 | true | 0 | 0 | If you grid is regular:
You first calculate dx = x[i+1]-x[i], dy = y[i+1]-y[i], dz = z[i+1]-z[i].
Then calculate new arrays of points:
x1[i] = x[i]-dx/2, y1[i] = y[i]-dy/2, z1[i] = z[i]-dz/2.
If mesh is irregular you have to do the same but dx,dy,dz you have to define for every grid cell. | 1 | 1 | 1 | I have a question, I have been given x,y,z coordinate values at cell centers of a grid. I would like to create structured grid using these cell center coordinates.
Any ideas how to do this? | Creating a 3D grid using X,Y,Z coordinates at cell centers | 1.2 | 0 | 0 | 425 |
37,884,856 | 2016-06-17T15:05:00.000 | 0 | 0 | 0 | 0 | python,python-3.x,signals,signal-processing,fft | 37,889,465 | 3 | false | 0 | 0 | To use an FFT, you will need to created a vector of samples evenly spaced in time.
If the signal was bandlimited to below a sample rate implied by the widest sample spacings, you can try polynomial interpolation between your unevenly spaced samples to create a grid of about the same number of equally spaced samples in time. But, depending on polynomial degree, this might be highly sensitive to any noise in the bandlimiting or sampling process. | 1 | 0 | 0 | I have a data with unevenly spaced (time) samples. How can I find the FFT of the signal and plot it. | How to find the FFT of an unevenly sampled signal in python3? | 0 | 0 | 0 | 2,622 |
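A minimal sketch of the resample-then-FFT approach described in the answer, using simple linear interpolation (it assumes t is sorted; the noise and bandlimit caveats above still apply):

    import numpy as np

    def fft_uneven(t, y, n=None):
        # interpolate onto an evenly spaced time grid, then take the FFT
        n = n or len(t)
        t_even = np.linspace(t.min(), t.max(), n)
        y_even = np.interp(t_even, t, y)
        freqs = np.fft.rfftfreq(n, d=t_even[1] - t_even[0])
        return freqs, np.fft.rfft(y_even)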
37,888,384 | 2016-06-17T18:45:00.000 | 0 | 0 | 1 | 0 | python,operators | 37,888,465 | 8 | false | 0 | 0 | foo = bar sets foo to the same value as bar- if foo is 5 and bar is 3, foo is now 3.
foo += bar is shorthand for foo = foo + bar. So if foo is 5, and bar is 3, foo is now 8
This does in fact use whatever + means in context, so if foo is "A String" and bar is "bar", foo += bar makes foo == "A Stringbar" | 2 | 0 | 0 | What is the difference between = and +=?
I've been experimenting, and I haven't found the difference. | What is the difference between "=" and "+=" in Python? | 0 | 0 | 0 | 1,592 |
37,888,384 | 2016-06-17T18:45:00.000 | 0 | 0 | 1 | 0 | python,operators | 37,888,479 | 8 | false | 0 | 0 | = is used to assign a value to a variable. e.g.: c=1+c (which assigns the value of c+1 to c, so it increments c by 1 )
+= is used to increment a variable by a specific value. e.g.: c+=1 (which | 2 | 0 | 0 | What is the difference between = and +=?
I've been experimenting, and I haven't found the difference. | What is the difference between "=" and "+=" in Python? | 0 | 0 | 0 | 1,592 |
37,889,196 | 2016-06-17T19:38:00.000 | 3 | 0 | 1 | 0 | python-2.7,boolean-logic,boolean-operations | 37,889,350 | 1 | true | 0 | 0 | and returns the first 'falsy' (False, zero, empty string or list, etc.) value it sees, or the final value if none were falsy. Further values are not even evaluated, since they can't change the result.
or likewise returns the first 'truthy' (True, non-zero, non-empty string or list, etc.) value it sees (or the final one if there were none), and doesn't evaluate the rest.
This behavior is sometimes more convenient than strictly returning only True or False. | 1 | 2 | 0 | I am currently in the course of learning Python 2.7 and have come across the Equality and Boolean operators
My question is:
Why False and 1 is False but True and 1 is 1
Likewise, False or 1 is 1 but True or 1 is True
Can someone kindly explain why this is happening
Many thanks | Python 2.7 Boolean Operators Logic | 1.2 | 0 | 0 | 668 |
37,890,849 | 2016-06-17T21:49:00.000 | 5 | 0 | 0 | 0 | python,pandas,normalization | 37,905,017 | 2 | false | 0 | 0 | If your data is in the range (-1;+1) (assuming you lost the minus in your question) then log transform is probably not what you need. At least from a theoretical point of view, it's obviously the wrong thing to do.
Maybe your data has already been preprocessed (inadequately)? Can you get the raw data? Why do you think log transform will help?
If you don't care about what is the meaningful thing to do, you can call log1p, which is the same as log(1+x) and which will thus work on (-1;∞). | 1 | 13 | 1 | I have a Pandas Series, that needs to be log-transformed to be normal distributed. But I can´t log transform yet, because there are values =0 and values below 1 (0-4000). Therefore I want to normalize the Series first. I heard of StandardScaler(scikit-learn), Z-score standardization and Min-Max scaling(normalization).
I want to cluster the data later, which would be the best method?
StandardScaler and Z-score standardization use mean, variance etc. Can I use them on "not yet normal distibuted" data? | Pandas Series: Log Normalize | 0.462117 | 0 | 0 | 64,605 |
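A small example of the log1p suggestion on a Series containing zeros and values below 1; min-max scaling afterwards is optional:

    import numpy as np
    import pandas as pd

    s = pd.Series([0, 0.5, 3, 4000])
    log_s = np.log1p(s)  # log(1 + x): defined at 0 and monotonic, unlike plain log
    scaled = (log_s - log_s.min()) / (log_s.max() - log_s.min())  # optional [0, 1] scaling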
37,890,898 | 2016-06-17T21:53:00.000 | 6 | 0 | 1 | 0 | python,bash,environment-variables,jupyter-notebook | 47,326,270 | 10 | false | 0 | 0 | If you need the variable set before you're starting the notebook, the only solution which worked for me was env VARIABLE=$VARIABLE jupyter notebook with export VARIABLE=value in .bashrc.
In my case tensorflow needs the exported variable for successful importing it in a notebook. | 1 | 143 | 0 | I've a problem that Jupyter can't see env variable in bashrc file, is there a way to load these variables in jupyter or add custome variable to it? | How to set env variable in Jupyter notebook | 1 | 0 | 0 | 199,897 |
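For a kernel that is already running, the usual alternatives are os.environ or the %env magic; both affect only that kernel process, not the shell that started it:

    import os
    os.environ['MY_VARIABLE'] = 'value'  # visible to code/libraries imported afterwards

    # or, in a notebook cell:
    # %env MY_VARIABLE=value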
37,891,520 | 2016-06-17T23:04:00.000 | 0 | 0 | 1 | 0 | python-3.x | 51,901,645 | 1 | false | 0 | 0 | How about you try and copy it to C:\Users\chuck\AppData\Local\Programs\Python\Python35-32\Lib\site-packages ?
It should work I am not sure though. Try it. | 1 | 0 | 0 | Having thoroughly researched my question and tried many routes, here I am.
the module I want is openpyxl
I have tried
1.) extracting and copying it to the C:\Users\chuck\AppData\Local\Programs\Python\Python35-32\Lib
THEN pip install openpyxl
no luck.
2.) pip install in the python shell . no luck.
I have actually done this before, though on a mac and it was as easy as pip3 install openpyxl in cmd line. | How to install 3rd party modules in Python 3.5? | 0 | 0 | 0 | 986 |
37,892,784 | 2016-06-18T02:51:00.000 | -2 | 0 | 0 | 0 | python,python-2.7,opencl,tensorflow,keras | 64,930,005 | 8 | false | 0 | 0 | Technically you can if you use something like OpenCL, but Nvidia's CUDA is much better and OpenCL requires other steps that may or may not work. I would recommend if you have an AMD gpu, use something like Google Colab where they provide a free Nvidia GPU you can use when coding. | 1 | 87 | 1 | I'm starting to learn Keras, which I believe is a layer on top of Tensorflow and Theano. However, I only have access to AMD GPUs such as the AMD R9 280X.
How can I setup my Python environment such that I can make use of my AMD GPUs through Keras/Tensorflow support for OpenCL?
I'm running on OSX. | Using Keras & Tensorflow with AMD GPU | -0.049958 | 0 | 0 | 121,342 |