Column                              Dtype          Min      Max
Q_Id                                int64          337      49.3M
CreationDate                        stringlengths  23       23
Users Score                         int64          -42      1.15k
Other                               int64          0        1
Python Basics and Environment       int64          0        1
System Administration and DevOps    int64          0        1
Tags                                stringlengths  6        105
A_Id                                int64          518      72.5M
AnswerCount                         int64          1        64
is_accepted                         bool           2 classes
Web Development                     int64          0        1
GUI and Desktop Applications        int64          0        1
Answer                              stringlengths  6        11.6k
Available Count                     int64          1        31
Q_Score                             int64          0        6.79k
Data Science and Machine Learning   int64          0        1
Question                            stringlengths  15       29k
Title                               stringlengths  11       150
Score                               float64        -1       1.2
Database and SQL                    int64          0        1
Networking and APIs                 int64          0        1
ViewCount                           int64          8        6.81M
38,203,988
2016-07-05T12:52:00.000
1
0
0
0
python,pandas,encoding,sas
44,310,145
3
false
0
0
read_sas from pandas seems to not like encoding="utf-8". I had a similar problem. Using SAS7BDAT('foo.sas7bdat').to_data_frame() solved the decoding issues of SAS files for me (see the sketch after this record).
1
2
1
I am trying to import a SAS dataset (.sas7bdat format) using the pandas function read_sas (pandas version 0.17), but it is giving me the following error: UnicodeDecodeError: 'ascii' codec can't decode byte 0xc2 in position 12: ordinal not in range(128)
Encoding in .sas7bdat
0.066568
0
0
9,854
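A minimal sketch of the workaround from the answer in the record above, assuming the third-party sas7bdat package is installed (pip install sas7bdat); 'foo.sas7bdat' is a placeholder file name:

    from sas7bdat import SAS7BDAT

    # Read the SAS file with the pure-Python reader instead of pandas.read_sas;
    # it handles the decoding that read_sas chokes on here.
    with SAS7BDAT('foo.sas7bdat') as reader:
        df = reader.to_data_frame()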
38,204,346
2016-07-05T13:10:00.000
1
0
0
0
python,excel,csv
38,206,084
2
false
0
0
If you want to do somewhat more selective fishing for particular rows, then the python csv module will allow you to read the csv file row by row into Python data structures. Consult the documentation. This may be useful if just grabbing the first hundred lines reveals nothing about many of the columns because they are blank in all those rows. So you could easily write a program in Python to read as many rows as it takes to find and write out a few rows with non-blank data in particular columns. Likewise if you want to analyze a subset of the data matching particular criteria, you can read all the rows in and write only the interesting ones out for further analysis. An alternative to csv is pandas. Bigger learning curve, but it is probably the right tool for analyzing big data. (1Gb is not very big these days).
1
2
1
I have a ~1.0gb CSV file, and when trying to load it into Excel just to view, Excel crashes. I don't know the schema of the file, so it's difficult for me to load it into R or Python. The file contains restaurant reviews and has commas in it. How can I open just a portion of the file (say, the first 100 rows, or 1.0mb's worth) in Windows Notepad or Excel?
Viewing a portion of a very large CSV file?
0.099668
0
0
1,514
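A minimal sketch of the row-by-row csv approach described in the answer in the record above; 'big.csv' and 'sample.csv' are placeholder file names:

    import csv

    # Copy the first 100 rows of a large CSV into a small file for inspection.
    with open('big.csv', newline='') as src, open('sample.csv', 'w', newline='') as dst:
        writer = csv.writer(dst)
        for i, row in enumerate(csv.reader(src)):
            if i >= 100:
                break
            writer.writerow(row)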
38,205,364
2016-07-05T13:56:00.000
1
0
1
0
python,performance,select,time,sleep
38,205,678
3
false
0
0
How are you defining efficiency? In most cases, sleep and select are used to wait until space or a buffer becomes available: if the space is not available, you can wait and act once the buffer empties. sleep() is often implemented using select() internally, so I think it comes down to which interface you are more comfortable with.
2
6
0
What would be better? time.sleep(delayTime) or select.select([],[],[],delayTime)? Are they equivalent? Is select more efficient?
What's the most efficient way to sleep in Python?
0.066568
0
0
6,573
38,205,364
2016-07-05T13:56:00.000
4
0
1
0
python,performance,select,time,sleep
38,205,795
3
true
0
0
The answer depends on what you're trying to achieve. time.sleep(delayTime): suspends execution of the current thread for the given number of seconds; any caught signal will terminate the sleep() following execution of that signal's catching routine. select.select([],[],[],delayTime): a straightforward interface to the Unix select() system call; the first three arguments are sequences of 'waitable objects' (rlist: wait until ready for reading; wlist: wait until ready for writing; xlist: wait for an "exceptional condition"). Now that we understand the two interfaces, the answer depends on the purpose: if all you want to do is suspend the current thread, the first option is simpler; but if there are objects to wait on, use the second method. In terms of efficiency, I don't think there are differences if all you are looking for is the simplest use case (just suspending the main thread). See the sketch after this record.
2
6
0
What would be better? time.sleep(delayTime) or select.select([],[],[],delayTime)? Are they equivalent? Is select more efficient?
What's the most efficient way to sleep in Python?
1.2
0
0
6,573
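A minimal sketch of the two calls compared in the accepted answer above; the delay value is a placeholder (note that on Windows, select() rejects three empty lists, so the second form is Unix-only):

    import select
    import time

    delayTime = 1.5
    time.sleep(delayTime)                 # simply suspend the current thread
    select.select([], [], [], delayTime)  # block with no waitable objects (Unix)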
38,206,614
2016-07-05T14:54:00.000
1
0
0
0
python,database,module,path,openerp
38,218,401
1
false
0
0
In your openerp-server.conf file, add your custom path to the addons_path line: just append a comma and your path. Restart the server after that and update the module list (see the config sketch after this record).
1
1
0
I am looking for a way to update the currently installed module's path. I want to move the module from one addons folder to a different one, and my attempt to just move the module and then Update Modules List gives me nothing. The module is not found and only a grayed-out module name is left in the modules list. Maybe there is a database table with all the paths already in it, and it is possible to change? Update: I should mention that I have moved more than one module from its original directory to a different one and only a single module is not found; the rest work just fine. Also, this has occurred more than once. I have restored a database from another server, and while all of the modules should have been found among those set in addons_path, a single module was not (a different one from the module mentioned before, although that one is present and recognized).
How to update module path that is already installed? Odoo
0.197375
0
0
620
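A sketch of the addons_path change described in the answer in the record above; the paths are placeholders:

    ; openerp-server.conf
    [options]
    addons_path = /opt/odoo/addons,/opt/odoo/custom_addons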
38,207,914
2016-07-05T16:01:00.000
1
0
0
0
python,django,git,django-rest-framework
38,208,333
2
false
1
0
I would go with the Python package; after all, this is what packaging is for (see the sketch after this record).
1
0
0
I have to find a solution for sharing code between two big Django projects. The main things to share are models, serializers, and template tags. I've come up with 3 different solutions and I need you to find pros and cons to be able to make a choice. I'll list the solutions I found: git submodules Create a repository where to store my *.py files and include them as a django app such as 'common_deps'. Even if this is the purpose of git submodules, they are a bit hard to use and it's easy to fall into traps. python package Create a python package to store my *.py files. It seems to be the best option to me, even if that means that I'll need to change my requirements.txt file on my projects on each new release. Simple git repository Create a new repository to store my *.py files, include them as a django app such as 'common_deps', then add it to my PYTHONPATH. I need some advice, I haven't chosen yet. I'm just telling myself that git submodules seem to be a bad idea. Tell me, guys.
Most efficient solution for sharing code between two django projects
0.099668
0
0
1,082
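A minimal sketch of the Python-package option endorsed above; the package name 'common_deps' comes from the question, while the version and dependency list are placeholders:

    # setup.py
    from setuptools import setup, find_packages

    setup(
        name='common_deps',
        version='0.1.0',
        packages=find_packages(),
        install_requires=['Django'],
    )

Each project would then pin it in requirements.txt (e.g. common_deps==0.1.0) and bump that line on each new release, as the question anticipates.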
38,207,942
2016-07-05T16:03:00.000
1
0
1
0
python,r,jupyter,cairo
38,773,030
1
false
0
0
I had the exact same issue and fixed it by installing the Cairo package: install.packages("Cairo") followed by library(Cairo). Thanks to my colleague Bartek for providing this solution.
1
2
1
It's my first time using R in Jupyter. I've installed everything: Jupyter, Python, R, and IRkernel. I can do plain typing and calculation in Jupyter, but whenever I want to use a graph library like plot or ggplot2 it shows: Error in loadNamespace(name): there is no package called 'Cairo', followed by a traceback and a plot without a title. Someone please guide me about how to handle this issue.
Error in loadNamespace(name) Cairo
0.197375
0
0
411
38,209,343
2016-07-05T17:32:00.000
0
0
1
0
python,windows,installation,iexpress
38,213,178
1
false
0
0
Ultimately, I decided it was simpler to just put all of the source code and directory trees in a ZIP file and give the user careful instructions on how to add those trees to their PATH. Thanks anyway for the help guys.
1
0
0
I am trying to create an installation package for my program that I can send to other people who don't have any programming background and presumably cannot use the command line. The program itself is fairly user friendly, but it requires the installation of several Python packages from the Python Package Index to function. I was able to install these packages with pip (though that did not work for one, so I had to download and manually install it from the .tar.gz), but I don't think my users will be able to do this or troubleshoot installation problems like I did. My users and I are working in a Windows environment, so I tried using iExpress wizard to create an installation file. But the wizard does not include any options for installing a large number of files in a directory tree format or automatically adding those directories to PATH; so that does not seem to be an option. What is the simplest approach I can take to automatically and properly install these Python modules for my users? EDIT I am also considering the possibility of using a batch file included in the installation package, but I would still like to know if there is a more streamlined way of doing this kind of installation.
Creating an Installation Package That Grabs Python Modules From the Web
0
0
0
133
38,210,820
2016-07-05T19:04:00.000
0
0
0
0
python,apache-spark-mllib,bayesian,lda
53,956,744
1
false
0
0
I think the matrix is m*n, where m is the number of words and n is the number of topics.
1
3
1
The output of LDAModel.topicsMatrix() is unclear to me. I think I understand the concept of LDA and that each topic is represented by a distribution over terms. In the LDAModel.describeTopics() it is clear (I think): The highest sum of likelihoods of words of a sentence per topic, indicates the evidence of this tweet belonging to a topic. With n topics, the output of describeTopics() is a n times m matrix where m stands for the size of the vocabulary. The values in this matrix are smaller or equal to 1. However in the LDAModel.topicsMatrix(), I have no idea what I am looking at. The same holds when reading the documentation. The matrix is a m times n matrix, the dimensions have changed and the values in this matrix are larger than zero (and thus can take the value 2, which is not a probability value). What are these values? The occurrence of this word in the topic perhaps? How do I use these values do calculate the distance of a sentence to a topic?
What is the output of Spark MLLIB LDA topicsmatrix?
0
0
0
391
38,213,691
2016-07-05T22:13:00.000
0
0
0
1
python,windows,git,clearcase
38,217,644
1
false
0
0
Since a git hook executes itself in a git bash, check if that affects how the parameters are passed to the Windows executable clearfsimport (especially the path). One simple test: wrap your clearfsimport call in a .bat script, set the source and destination paths in that script, then make your git hook call said script.
1
1
0
I am running in Windows and have a git post-receive hook that calls a python 3 script. This Python script does a number of things. One of these is to output the username running the git hook. This username is MACHINENAME$ (where the machine name is MACHINENAME), which I think is the Network Service account, but I could be wrong here. After that it calls subprocess.run which execs a call to the ClearCase command clearfsimport. Note that I use the clearfsimport 'nsetevent' switch which does allow other users to check-in to this view, but this doesn't seem to work for the Network Service account. If I run the python command directly as the ClearCase view owner, the clearfsimport succeeds. If I run it as another user, the clearfsimport succeeds. If I run it as a git hook, however, it fails with the following error message: subprocess.CalledProcessError: Command '['clearfsimport', '-recurse', '-nsetevent', '-rmname', '-comment', "This is my comment", '/path/to/clearfsimport/source', '/path/to/ClearCase/view']' returned non-zero exit status 1 What can I do to get this git hook to work correctly? It does not matter if I have to adjust python, git, ClearCase, or Windows, or some combination.
Change git hook credentials in Windows
0
0
0
70
38,215,657
2016-07-06T02:48:00.000
0
0
0
0
python,django,redirect,oauth-2.0,mailchimp
38,216,891
1
false
1
0
Do your email host, user, password, and port match your Mailchimp credentials? Second, you need to check the Mailchimp API log for the status; you will get some insight from there.
1
0
0
I am trying to set up Oauth2 with the Mailchimp API. So far things seem to be working correctly except that after having the user login at Mailchimp, the browser doesn't redirect back to my redirect_uri. It just stays on the Mailchimp login page. For the code: I redirect the user to the authorize url/mailchimp login: authorize_uri = 'https://login.mailchimp.com/oauth2/authorize? response_type=code&client_id=%s&client_secret=%s&redirect_uri=%s' % (settings.MAILCHIMP_CLIENT_ID, settings.MAILCHIMP_CLIENT_SECRET, redirect_uri) my redirect_uri is redirect_uri = 'http://127.0.0.1:8000/mailchimp/connect' So the authorize_url redirects to the login page, and I login with credentials that absolutely work to login the regular non-oauth way. Also I see the 302 redirect with the code I need in my logs, but the browser seems to just refresh the Mailchimp login page and the view(I'm using django) for processing the GET request below is never triggered. [06/Jul/2016 02:31:43] "GET /mailchimp/connect?code=36ad22daa3d0f8b3804f7e340e5d50f1 HTTP/1.1" 302 0 I have no idea what I'm doing wrong...
why am I not being redirected in the browser after Mailchimp API oauth2 initial request is sent?
0
0
0
356
38,216,399
2016-07-06T04:36:00.000
1
0
1
0
python,windows,python-3.x,command-line,jupyter
38,217,022
1
false
0
0
You probably do not have Python in your system variables. Press Windows+Pause/Break, click Advanced System Settings (you need admin rights here), then click Environment Variables. In the lower part, where it says "System Variables", click on the line that says PATH and then on "Edit". Append to that sequence of paths your Python path, where python.exe is located, and also the Scripts subfolder of your Python directory, where jupyter.exe is located. Separate the paths with a semicolon and do not add spaces or trailing slashes, like so: C:\path1;C:\path2\subfolder;D:\path3\subfolder\subsubfolder Click OK and you're done. Now Python and jupyter should work from the command line.
1
0
0
Running on Windows 10. New to Python. Just installed python 3 and installed jupyter via pip. I cannot get jupyter notebook to run. Everything is under this path: C:\Users\user\AppData\Local\Programs\Python\Python35-32> I figured I could just type jupyter notebook in the prompt but it's not working. Do I have to install Anaconda to get this to run or something?
Running Jupyter notebook
0.197375
0
0
1,225
38,216,690
2016-07-06T05:11:00.000
1
0
0
0
python,robotframework
38,220,509
1
false
0
0
As the Collections library is a built-in library, you will need to upgrade to Robot Framework 3.0 to get those changes; you won't be able to upgrade the library individually. It should work with Python 2.7 without issue.
1
0
0
I am using RIDE 1.5.2.1 running on Python 2.7.11 but found that some features from Collections 3.0 are missing. Can I upgrade RIDE but still use Python 2.7? Or is there a way to upgrade only the Collections library? Thanks.
Is it possible for RIDE 1.5.2.1 to work with Collections 3.0?
0.197375
0
0
44
38,217,297
2016-07-06T06:10:00.000
0
0
1
0
python,dronekit
38,236,476
1
false
0
0
Try flying with APM:Copter 3.3
1
0
0
Running dronekit-python with ArduCopter as SITL was successful (APM:Copter V3.4-dev). Then I ran the same code on the real copter (APM:Copter V3.2.1) and it did not work. The code is from dronekit's examples. Any ideas or pointers are appreciated.
Dronekit python goto_position_target_local_ned()
0
0
0
148
38,218,132
2016-07-06T07:09:00.000
-1
0
1
0
python,python-3.x,pip
38,218,276
3
false
1
0
pip uninstall currently doesn't support removing the dependencies. You can manually go to the folder where scrapy is installed and delete it, for example: /usr/local/lib/python2.7/dist-packages/scrapy. If it is at '/PATH/TO/SCRAPY', run this command in the terminal: sudo rm -rf /PATH/TO/SCRAPY
1
0
0
I had installed Scrapy with pip install scrapy. It also installed all of its required packages: Installing collected packages: zope.interface, Twisted, six, cssselect, w3lib, parsel, pycparser, cffi, pyasn1, idna, cryptography, pyOpenSSL, attrs, pyasn1-modules, service-identity, queuelib, PyDispatcher, scrapy. So, is it possible to uninstall scrapy and all its required packages with a terminal command?
Pip uninstall Scrapy with all its dependencies
-0.066568
0
0
5,640
38,220,530
2016-07-06T09:25:00.000
0
0
0
1
python,docker
38,221,268
1
false
1
0
You should copy your production-ready config file into the docker container as part of your image-building process (COPY directive in your dockerfile), and then proceed with the same deployment steps you would normally use.
1
0
0
I used to deploy Python web applications on AWS EC2 instances with Ansible. In my development environment, I use config from a module local_config.py, but in the deployment process, I use an Ansible task to replace this file with a production-ready config. How do I do something similar when building a Docker image?
How to add a production config when deploying a docker container?
0
0
0
38
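A hedged sketch of the COPY-based approach from the answer in the record above; the base image, file names, and entry point are placeholders (local_config.py comes from the question):

    # Dockerfile
    FROM python:2.7
    WORKDIR /app
    COPY . /app
    # Overwrite the development config with the production-ready one at build time.
    COPY production_config.py /app/local_config.py
    CMD ["python", "app.py"]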
38,222,375
2016-07-06T10:59:00.000
0
0
0
0
python,sql,django,migration
38,225,266
1
false
1
0
The first approach I'd try would be to check out the last good commit and recreate the model changes in question so the migration could be regenerated and checked in. And while it's good to have a contingency plan for things like this, if it's a real concern I'd suggest evaluating your deployment process to make this issue less likely.
1
0
0
While this shouldn't happen, it's not impossible. So what to do in the event that a migration has been run into a database and the migration file has then been deleted and is not recoverable? This assumes that the database cannot just be dropped.
If a django migration is migrated to db, what is the best practice if the migration is deleted at a later date?
0
0
0
37
38,223,546
2016-07-06T12:04:00.000
1
0
0
0
java,python-2.7,machine-learning,scikit-learn,jepp
43,283,828
2
false
0
0
The _PyThreadState_Current error implies that it's using the wrong Python. You should be able to fix it by setting PATH and LD_LIBRARY_PATH to the python/bin and python/lib directories you want to use (and built Jep and sklearn against) before launching the process. That will ensure that Python, Jep, and sklearn are all using the same libraries. If that doesn't work, it's possible that Jep or sklearn were built with different versions of Python than you're running.
1
0
1
I am using jep for running python script in java, I basically need to run the script that uses scikit package. But it shows me error when I try to run, which I couldn't understand. This is the piece of code in my program, Jep jep = new Jep(); jep.eval("import sklearn"); It shows the below error,but sklearn works perfectly well in python. Jul 06, 2016 5:31:50 PM JepEx main SEVERE: null jep.JepException: jep.JepException: : /usr/local/lib/python2.7/dist-packages/sklearn/__check_build/_check_build.so: undefined symbol: _PyThreadState_Current Contents of /usr/local/lib/python2.7/dist-packages/sklearn/check_build: setup.py __init.pyc _check_build.so build init.py setup.pyc It seems that scikit-learn has not been built correctly. If you have installed scikit-learn from source, please do not forget to build the package before using it: run python setup.py install or make in the source directory. If you have used an installer, please check that it is suited for your Python version, your operating system and your platform. at jep.Jep.eval(Jep.java:485) at JepEx.executeCommand(JepEx.java:26) at JepEx.main(JepEx.java:38) Caused by: jep.JepException: : /usr/local/lib/python2.7/dist-packages/sklearn/__check_build/_check_build.so: undefined symbol: _PyThreadState_Current Contents of /usr/local/lib/python2.7/dist-packages/sklearn/check_build: setup.py __init.pyc _check_build.so build init.py setup.pyc It seems that scikit-learn has not been built correctly. If you have installed scikit-learn from source, please do not forget to build the package before using it: run python setup.py install or make in the source directory. If you have used an installer, please check that it is suited for your Python version, your operating system and your platform. at /usr/local/lib/python2.7/dist-packages/sklearn/check_build/__init.raise_build_error(init.py:41) at /usr/local/lib/python2.7/dist-packages/sklearn/check_build/__init.(init.py:46) at /usr/local/lib/python2.7/dist-packages/sklearn/init.(init.py:56)
jep for using scikit model in java
0.099668
0
0
860
38,223,687
2016-07-06T12:10:00.000
1
0
0
0
python,matlab,matplotlib,spectrogram
38,223,850
2
false
0
0
The value in Matlab is a scalar as it represents the size of the window, and Matlab uses a Hamming window by default. The Window argument also accepts a vector, so you can pass in any windowing function you want.
2
1
1
I am trying to incorporate a preexisting spectrogram from Matlab into my python code, using Matplotlib. However, when I enter the window value, there is an issue: in Matlab, the value is a scalar, but Matplotlib requires a vector. Why is this so?
Difference between Matlab spectrogram and matplotlib specgram?
0.099668
0
0
865
38,223,687
2016-07-06T12:10:00.000
1
0
0
0
python,matlab,matplotlib,spectrogram
38,225,574
2
true
0
0
The arguments are just organized differently. In matplotlib, the window size is specified using the NFFT argument. The window argument, on the other hand, is only for specifying the window itself, rather than the size. So, like MATLAB, the window argument accepts a vector. However, unlike MATLAB, it also accepts a function that should take an arbitrary-length vector and return another vector of the same size. This allows you to use functions for windows instead of just vectors. So to put it in MATLAB terms, the MATLAB window argument is split into the window and NFFT arguments in matplotlib, while the MATLAB NFFT argument is equivalent to the matplotlib pad_to argument. As for the reason, specifying the window and window size independently allows you to use a function as the argument for window (which, in fact, is the default). This is impossible with the MATLAB arguments. In Python, functions are first-class objects, which isn't the case in MATLAB. So it tends to be much more common to use functions as arguments to other functions in Python compared to MATLAB. Python also allows you to specify arguments by name, something MATLAB really doesn't. So in MATLAB it is much more common to have arguments that do different things depending on the inputs, while similar functions in Python tend to split those into multiple independent arguments.
2
1
1
I am trying to incorporate a preexisting spectrogram from Matlab into my python code, using Matplotlib. However, when I enter the window value, there is an issue: in Matlab, the value is a scalar, but Matplotlib requires a vector. Why is this so?
Difference between Matlab spectrogram and matplotlib specgram?
1.2
0
0
865
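A minimal sketch of the argument split described in the accepted answer above; the test signal is a placeholder:

    import numpy as np
    import matplotlib.pyplot as plt

    fs = 1000
    t = np.arange(0, 2, 1.0 / fs)
    x = np.sin(2 * np.pi * 100 * t)  # placeholder 100 Hz tone

    # NFFT is the window size, window is the window itself (a vector or a
    # function), and pad_to plays the role of MATLAB's NFFT argument.
    plt.specgram(x, NFFT=256, Fs=fs, window=np.hanning(256), pad_to=512)
    plt.show()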
38,227,501
2016-07-06T15:11:00.000
3
0
1
1
ipython
38,227,674
2
false
0
0
So I figured out a solution - I changed the environment variable PATH to the subfolder with the .exe files. Although the path including this subfolder was listed under %env, it did not work without being referenced directly in the System settings.
1
2
0
I changed the environment variable PATH to a new value and then back to what I believe was the original one. But now I can't open a .ipynb file through the Windows command line the way I was used to. After changing the directory in the command line and running ipython notebook notebook_name.ipynb I get the following message: 'ipython' is not recognized as an internal or external command. My environment variable is set to a folder with python.exe, and this folder includes a subfolder with ipython.exe and jupyter-notebook.exe. When I open the iPython command line and type %env, I can see the full path to the correct subfolder under PATH. Can someone point to a solution? Thanks.
ipython from windows command line
0.291313
0
0
6,717
38,228,088
2016-07-06T15:40:00.000
2
0
0
0
python,numpy,scipy,scikit-learn,anaconda
38,233,222
2
false
0
0
If your code uses linear algebra, check that first. Round-off errors are generally not deterministic across machines, and if you have badly conditioned matrices, that can be the cause.
1
10
1
I have a very strange problem: I get different results from the same code and the same data on different machines. I have Python code based on numpy/scipy/sklearn, and I use Anaconda as my base Python distribution. Even when I copy the entire project directory (which includes all the data and code) from my main machine to another machine and run it, the results I get are different. Specifically, I'm doing a classification task and I get a 3 percent difference in accuracy. I am using the same version of Python and Anaconda on the two machines. My main machine is Ubuntu 16.04, and the results on it are lower than on several other machines with various OSes on which I tried (OSX, Ubuntu 14.04, and CentOS). So there should be something wrong with my current system configuration, because all the other machines show consistent results. Since the version of my Anaconda is consistent among all machines, I have no idea what else could be the problem. Any ideas what else I should check or what could be the source of the problem? I also removed and reinstalled Anaconda from scratch, but it didn't help.
Same Python code, same data, different results on different machines
0.197375
0
0
6,808
38,230,178
2016-07-06T17:31:00.000
1
0
0
0
python,ruby-on-rails,heroku,transfer
38,232,752
1
false
1
0
I would suggest writing a secured JSON or XML API to transfer the data from app to app. Once the data is received, I would then generate the .csv or .html files from the received data. It keeps things clean and easy to modify for future revisions, because now you'll have an API to interact with.
1
0
0
I need to set up a Heroku app (Python) which would perform scheduled tasks that include fetching a set of data files (.csv and .html) from another Heroku app (RoR) and returning a result back to that app. It should also be restricted so that only my app is able to connect to the RoR app, because it deals with sensitive information. There would be from 20 to 100 files each time, so I want them to be compressed somehow to transfer them quickly (to avoid bothering the server for too long). I'm interested in possible ways to accomplish this. The first thought is to send an HTTP GET request to the RoR app and fetch the necessary files, yet that is generally not secured at all. Would SCP work in some way in this situation, or do you have any other ideas? Thanks in advance!
Securely transfer a bunch of files from one Heroku app to another
0.197375
0
0
46
38,230,462
2016-07-06T17:46:00.000
2
0
0
0
python,matplotlib,graph,data-science
38,230,601
2
true
0
0
Matplotlib gives you a nice level of access: you can change all details of the plots and modify ticks, labels, spacing, and so on. It has many sensible defaults, so a one-liner plot(mydata) produces fairly nice plots. It also plays well with numpy and other numerical tools, so you can pass your data science objects directly to the plotting tool without going through some intermediate I/O.
2
0
1
Hi, I feel like this question might be completely stupid, but I am still going to ask it, as I have been thinking about it. What are the advantages of using a plotter like matplotlib instead of an existing piece of software or grapher? For now, I have guessed that although it takes a lot more time to use such a library, you have more possibilities? Please let me know what your opinion is. I am just starting to learn about data science with Python, so I would like to make things clear.
Why use matplotlib instead of some existing software/grapher
1.2
0
0
146
38,230,462
2016-07-06T17:46:00.000
3
0
0
0
python,matplotlib,graph,data-science
38,230,638
2
false
0
0
Adding to Robin's answer, I think reproducibility is key. When you make your graphs with matplotlib, since you are coding everything rather than using an interface, all of your work is reproducible: you can just run your script again. Using other software, specifically programs with user interfaces, means that each time you want to remake your graphs you have to start from scratch, and if someone asks you the specifics of your graph (i.e., what scale an axis used, or what units something is in that might not be labeled), it is difficult for you to go back and figure it out, since there isn't code to examine.
2
0
1
Hi, I feel like this question might be completely stupid, but I am still going to ask it, as I have been thinking about it. What are the advantages of using a plotter like matplotlib instead of an existing piece of software or grapher? For now, I have guessed that although it takes a lot more time to use such a library, you have more possibilities? Please let me know what your opinion is. I am just starting to learn about data science with Python, so I would like to make things clear.
Why use matplotlib instead of some existing software/grapher
0.291313
0
0
146
38,233,057
2016-07-06T20:20:00.000
0
0
1
0
python
38,233,116
8
false
0
0
There's a couple options here. The simplest would be to iterate over the array and stop when a) you've reached the end of the array OR b) the value at the current position is larger than the one you're looking for. Then simply return the last value found. (One edge case to consider: what happens if the number requested is smaller than ALL values of the array?) This solution assumes that the array is ordered. If not, it will not work.
2
3
0
Considering a list like [0,3,7,10,12,15,19,21], I want to get the nearest value below any given value, so if I pass 4, I would get 3, and if I pass 18, I would get 15, etc.
python recipe: list item nearest equal to value
0
0
0
143
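A minimal sketch of the linear scan described in the answer in the record above, assuming a sorted input list; it returns None for the edge case where the target is smaller than all values:

    def nearest_below(values, target):
        best = None
        for v in values:
            if v > target:  # stop at the first value past the target
                break
            best = v
        return best  # None if target is smaller than every value

    print(nearest_below([0, 3, 7, 10, 12, 15, 19, 21], 4))   # 3
    print(nearest_below([0, 3, 7, 10, 12, 15, 19, 21], 18))  # 15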
38,233,057
2016-07-06T20:20:00.000
0
0
1
0
python
38,233,148
8
false
0
0
If your list is small, iterating through it is the best bet. You'll avoid some overhead, plus there's no point in over-engineering this task. If you want your code to be a bit more "correct", or if you want this to scale well on larger inputs, I'd recommend a binary search approach. This is, of course, assuming that your input is guaranteed to be sorted. If unsorted, you have no choice other than to iterate through the values while keeping track of a delta. Here's a relatively high-level explanation of the binary search strategy: Do a binary search If equal to the value you were searching for, return the value at the previous index. If less than the value, return that index. If greater than the value, return the previous index. It might help to run through a few examples to prove to yourself that this will work.
2
3
0
Considering a list like [0,3,7,10,12,15,19,21], I want to get the nearest value below any given value, so if I pass 4, I would get 3, and if I pass 18, I would get 15, etc.
python recipe: list item nearest equal to value
0
0
0
143
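A sketch of the binary-search steps above using the standard bisect module, again assuming a sorted list; per the listed steps, an exact match returns the value at the previous index:

    import bisect

    def nearest_below(values, target):
        # bisect_left finds the insertion point, so values[i - 1] is the
        # nearest element strictly less than target.
        i = bisect.bisect_left(values, target)
        return values[i - 1] if i else None

    print(nearest_below([0, 3, 7, 10, 12, 15, 19, 21], 18))  # 15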
38,235,099
2016-07-06T22:54:00.000
0
0
1
0
python,dll,environment,cx-freeze,unqlite
38,306,009
1
false
0
0
Well, I have finally found the solution. I used monitoring software to look at the DLLs loaded on my machine, filtering on my process and the DLLs in the load path. On the one hand, one Python DLL was missing. On the other hand, Cython expected one library from the Microsoft Visual C++ runtime environment. After adding these DLLs manually to the program folder and to the setup, my program worked.
1
0
0
On my machine (Windows), I can use the executable of my Python program. But if I try on the machine of another person (Windows), it doesn't work. The executable blocks at the line: from unqlite import UnQLite I have fixed this dependency in the packages variable: options={'build_exe':{'include_files':includefiles,'packages': ['Cython'],'includes':['unqlite']}} And if I look at the folder where it puts the exe, the unqlite.pyd is there...
Python exe - Cx_Freeze - ImportError DLL load failed
0
0
0
349
38,235,295
2016-07-06T23:14:00.000
0
0
0
0
sockets,python-3.x,graphite,plaintext
38,251,602
1
true
0
0
My code actually worked. For some reason, the graph itself did not update. So sock.sendall(message.encode()) actually does work for the plaintext protocol.
1
0
0
Looking at Graphite's latest documentation, I see that I can feed data into Graphite via plaintext. But I can't seem to find a way in Python 3 to send plaintext via the server ip address and port 2003. All I can seem to do is send bytes via sock.sendall(message.encode()) and Graphite does not seem to read that. Is there a way for Python 3 to feed data into Graphite?
How do I send plaintext via Python 3 socket library?
1.2
0
1
740
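A minimal sketch of the plaintext feed the answer above confirms working; the host and metric names are placeholders, and 2003 is the plaintext port from the question:

    import socket
    import time

    # Graphite's plaintext protocol is one "metric_path value timestamp" per line.
    message = 'my.metric 42 %d\n' % int(time.time())
    with socket.create_connection(('graphite.example.com', 2003)) as sock:
        sock.sendall(message.encode())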
38,236,636
2016-07-07T02:12:00.000
5
1
1
0
python,git
38,236,672
3
true
0
0
There is, it's called a rebase - however, you'd probably want to add spaces to every file retroactively in each commit where you edited that file, which would be extremely tedious. However, there is absolutely nothing wrong with having a commit like this. A commit represents a distinct functioning state of your project, and replacing tabs with spaces is definitely a distinct state. One place where you would want to use rebase is if you accidentally make a non-functional commit. For example, you might have committed only half the files you need to. One last thing: never edit history (i.e. with rebase) once you've pushed your changes to another machine. The machines will get out of sync, and your repo will start slowly exploding.
3
3
0
I have a Python project that I'm working on for research. I've been working on two different machines, and I recently discovered that half of my files used tabs and the other half used spaces. Python objected to this when I attempted to edit and run a file from one machine on the other, so I'd like to switch everything to spaces instead of tabs. However, this seems like a waste of a Git commit - running 'git diff' on the uncommitted-but-correct files makes it look like I'm wiping out and replacing the entire file. Is there a way around this? That is, is there some way that I can "hide" these (IMO) frivolous changes?
Is a format-only update a frivolous Git commit?
1.2
0
0
89
38,236,636
2016-07-07T02:12:00.000
0
1
1
0
python,git
38,284,795
3
false
0
0
It is perfectly valid. Coupling a whitespace reformat with other changes in the same file could obfuscate the non-whitespace changes. The commit has a single responsibility: to reformat whitespace.
3
3
0
I have a Python project that I'm working on for research. I've been working on two different machines, and I recently discovered that half of my files used tabs and the other half used spaces. Python objected to this when I attempted to edit and run a file from one machine on the other, so I'd like to switch everything to spaces instead of tabs. However, this seems like a waste of a Git commit - running 'git diff' on the uncommitted-but-correct files makes it look like I'm wiping out and replacing the entire file. Is there a way around this? That is, is there some way that I can "hide" these (IMO) frivolous changes?
Is a format-only update a frivolous Git commit?
0
0
0
89
38,236,636
2016-07-07T02:12:00.000
3
1
1
0
python,git
38,236,665
3
false
0
0
Unfortunately there is no way around the fact that at the textual level, this is a big change. The best you can do is not mix whitespace changes with any other changes. The topic of such a commit should be nothing but the whitespace change. If this screwup is unpublished (only in your private repos), you can go back in time and fix the mess at the point in the history where it was introduced, and then go through the pain of fixing up the subsequent changes (which have to be re-worked in the correct indentation style). For the effort, you end up with a clean history.
3
3
0
I have a Python project that I'm working on for research. I've been working on two different machines, and I recently discovered that half of my files used tabs and the other half used spaces. Python objected to this when I attempted to edit and run a file from one machine on the other, so I'd like to switch everything to spaces instead of tabs. However, this seems like a waste of a Git commit - running 'git diff' on the uncommitted-but-correct files makes it look like I'm wiping out and replacing the entire file. Is there a way around this? That is, is there some way that I can "hide" these (IMO) frivolous changes?
Is a format-only update a frivolous Git commit?
0.197375
0
0
89
38,238,858
2016-07-07T06:34:00.000
1
0
0
0
python,sqlalchemy,migration
38,949,945
1
true
0
0
The best method is to just execute SQL. In this case: session.execute("DROP INDEX ...") (see the sketch after this record).
1
1
0
I want to do a bulk insertion in SQLAlchemy and would prefer to remove an index prior to making the insertion, re-adding it when the insertion is complete. I see that adding and removing indexes is supported by Alembic for migrations, but is this possible with SQLAlchemy? If so, how?
Drop and read index using SQLAlchemy
1.2
1
0
1,562
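A hedged sketch of the execute-raw-SQL approach from the accepted answer above; the engine URL and the index/table/column names are placeholders:

    from sqlalchemy import create_engine, text

    engine = create_engine('postgresql://user:secret@localhost/mydb')
    with engine.begin() as conn:
        conn.execute(text('DROP INDEX my_index'))
        # ... perform the bulk insert here ...
        conn.execute(text('CREATE INDEX my_index ON my_table (my_column)'))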
38,241,463
2016-07-07T09:05:00.000
0
0
0
1
python,openstack,openflow,opendaylight
38,247,806
2
false
0
0
Hmm, well, you could consult the OPNFV (Open Platform for Network Function Virtualization) project. OPNFV builds an integrated release of OpenStack with OpenDaylight as the SDN controller.
1
0
0
I need your help. I would like to do an integration between OpenStack Liberty (devstack) and OpenDaylight Beryllium. Does someone know how? I have checked a lot on the internet but always find the same documentation.
Integration of OpenStack and OpenDaylight
0
0
0
69
38,247,070
2016-07-07T13:42:00.000
1
0
1
0
python,vpn,tor
38,247,424
1
true
0
0
This has very little to do with your connection. The server is simply drowning in requests. More requests from different locations won't help you. A faster connection might help you get into the queue before anyone else, but multiple connections won't help. If you really want tickets, figure out how to move through the website in an automated way such that you submit a request to move through the menus faster than any human could.
1
0
0
I have a hypothesis that you could increase your chances of getting tickets for sell-out events by attempting to access the website from multiple locations. Just to be clear, I'm not trying to be that guy who buys ALL of the tickets for events and then sells them on at 10X the price; incidentally, I'm talking specifically about one event, Glastonbury festival, for which I have tried many years to buy a ticket and never been successful. The problem is that you literally can't get on the site when the tickets get released. So I guess there are a few qualifying questions to work out if I even need to ask the main question. What is actually happening on the website's server(s) at these times? Does the sheer volume of traffic cause some users to get 'rejected'? Is it down to chance who gets through to the site? Would trying to access the site multiple times increase your chances? If so, would you have to try to access it from multiple locations, i.e. as opposed to just opening multiple tabs in the same browser? Which brings me to the actual question: could this be achieved as simply as using Python to open multiple instances of Tor?
Programmatically access one website from multiple locations?
1.2
0
1
206
38,248,928
2016-07-07T15:03:00.000
2
1
1
0
python,git
38,249,044
3
true
0
0
Include code in your git repo that reads those things from environment variables. On the target machine, set those environment variables; this could be done by hand or with a script that you don't include in your repo. Include instructions in your readme file (see the sketch after this record).
2
4
0
I need to push a Python script to a Git repo. The script contains variables with personal data like USER_NAME = "some_name" and USER_PASS = "some_password". This data could be accessible to other users, and I want to hide it. I found the following approach: create a separate data.py module with USER_NAME = "some_name" and USER_PASS = "some_password"; import it once to generate the compiled version, data.pyc; change the main script source so the variables are accessed like import data username = data.USER_NAME password = data.USER_PASS; remove data.py and push data.pyc to the repo. This was promising, but actually the data in data.pyc appears like ???some_namet???some_passwords??? and could still be recognized as a username and password. So what is the best practice to hide data in a Python script?
How to hide personal data in Python script
1.2
0
0
940
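A minimal sketch of the environment-variable approach from the accepted answer above; the variable names are placeholders:

    import os

    # Read credentials from the environment instead of hard-coding them.
    USER_NAME = os.environ['APP_USER_NAME']
    USER_PASS = os.environ['APP_USER_PASS']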
38,248,928
2016-07-07T15:03:00.000
1
1
1
0
python,git
38,296,218
3
false
0
0
You should never put sensitive pieces of information inside your code neither store them into a public repository. It's better to follow Joel Goldstick's suggestion and modify your code to get passwords from private sources, e.g. local environment variables or local modules. Try googling for "python store sensitive information" to look at some ideas (and involved issues).
2
4
0
I need to push a Python script to a Git repo. The script contains variables with personal data like USER_NAME = "some_name" and USER_PASS = "some_password". This data could be accessible to other users, and I want to hide it. I found the following approach: create a separate data.py module with USER_NAME = "some_name" and USER_PASS = "some_password"; import it once to generate the compiled version, data.pyc; change the main script source so the variables are accessed like import data username = data.USER_NAME password = data.USER_PASS; remove data.py and push data.pyc to the repo. This was promising, but actually the data in data.pyc appears like ???some_namet???some_passwords??? and could still be recognized as a username and password. So what is the best practice to hide data in a Python script?
How to hide personal data in Python script
0.066568
0
0
940
38,249,110
2016-07-07T15:10:00.000
2
0
0
0
python,panda3d
38,314,470
2
true
0
0
I am also new to Panda3D, and I solved a problem similar to yours just a few hours ago. There are two ways to solve your problem: download another version of Blender (the last version working with YABEE is 2.66), or just export your model as a .x file (DirectX native); this works great with Panda3D.
2
2
0
I am a beginner in Panda3D. Recently I came across Blender 2.77 for modelling. I was disappointed to find that it cannot export egg files supported by Panda3D. I searched online and found YABEE and Chicken, but even after installing those addons in Blender, I didn't find an egg file exporter in the exporters list. I tried using obj and dae files in Blender and then converting them to egg through obj2egg (it did not load mtl files) and dae2egg (I cannot see any other color than white in pview). I have no idea where I am wrong. I am sure I have done the conversion correctly. Any help will be appreciated.
cannot convert to egg file for panda3d
1.2
0
0
1,220
38,249,110
2016-07-07T15:10:00.000
2
0
0
0
python,panda3d
40,102,258
2
false
0
0
Common error when using YABEE: Not only do you need to copy it into the addons directory, you also need to activate it in Blender.
2
2
0
I am a beginner in Panda3D. Recently I came across Blender 2.77 for modelling. I was disappointed to find that it cannot export egg files supported by Panda3D. I searched online and found YABEE and Chicken, but even after installing those addons in Blender, I didn't find an egg file exporter in the exporters list. I tried using obj and dae files in Blender and then converting them to egg through obj2egg (it did not load mtl files) and dae2egg (I cannot see any other color than white in pview). I have no idea where I am wrong. I am sure I have done the conversion correctly. Any help will be appreciated.
cannot convert to egg file for panda3d
0.197375
0
0
1,220
38,252,931
2016-07-07T18:35:00.000
0
0
0
0
python,linux,windows,scikit-learn,pickle
39,263,619
1
false
0
0
Python pickle should work between Windows/Linux. There may be incompatibilities if: the Python versions on the two hosts are different (if so, try installing the same version of Python on both hosts); and/or one machine is 32-bit and the other is 64-bit (I don't know of any fix for this problem so far).
1
0
1
I am trying to save a sklearn model on a Windows server using sklearn.joblib.dump and then joblib.load the same file on a linux server (centOS71). I get the error below: ValueError: non-string names in Numpy dtype unpickling This is what I have tried: Tried both python27 and python35 Tried the built in open() with 'wb' and 'rb' arguments I really don't care how the file is moved, I just need to be able to move and load it in a reasonable amount of time.
Dump Python sklearn model in Windows and read it in Linux
0
0
0
610
38,252,992
2016-07-07T18:39:00.000
0
0
1
0
python,python-2.7,python-3.x,pip,cloud9-ide
38,257,510
3
true
0
0
You can't use pip to install Python 3. In any case, since you specifically mentioned the Cloud9 IDE, it already comes with both Python 2 and 3. python is symlinked to python2 though, so if you want to call Python 3, you have to type python3 (instead of just python) in the terminal.
1
0
0
Currently using Cloud9's ACE IDE dev environment for some learning opportunities, however, I ran into an API that is exclusively made for Python 3. How can I pip install Python 3 while keeping python 2.7.6 (the current version) intact?
How can I use pip to pip install Python 3 in Ubuntu terminal?
1.2
0
0
248
38,256,104
2016-07-07T22:12:00.000
0
0
0
0
python,pandas,join,merge,concat
65,132,518
7
false
0
0
Only the concat function has an axis parameter. merge is used to combine dataframes side by side based on values in shared columns, so there is no need for an axis parameter (see the sketch after this record).
2
141
1
What's the essential difference(s) between pd.DataFrame.merge() and pd.concat()? So far, this is what I found, please comment on how complete and accurate my understanding is: .merge() can only use columns (plus row-indices) and it is semantically suitable for database-style operations. .concat() can be used with either axis, using only indices, and gives the option for adding a hierarchical index. Incidentally, this allows for the following redundancy: both can combine two dataframes using the rows indices. pd.DataFrame.join() merely offers a shorthand for a subset of the use cases of .merge() (Pandas is great at addressing a very wide spectrum of use cases in data analysis. It can be a bit daunting exploring the documentation to figure out what is the best way to perform a particular task. )
Difference(s) between merge() and concat() in pandas
0
0
0
114,669
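A minimal sketch of the axis difference described in the answer in the record above; the frames are placeholders:

    import pandas as pd

    left = pd.DataFrame({'key': [1, 2], 'a': ['x', 'y']})
    right = pd.DataFrame({'key': [1, 2], 'b': ['u', 'v']})

    stacked = pd.concat([left, right], axis=0)       # stack rows
    side_by_side = pd.concat([left, right], axis=1)  # align on index
    joined = pd.merge(left, right, on='key')         # no axis: joins on shared values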
38,256,104
2016-07-07T22:12:00.000
14
0
0
0
python,pandas,join,merge,concat
49,564,930
7
false
0
0
pd.concat takes an iterable as its argument; hence, it cannot take DataFrames directly. Also, the dimensions of the DataFrames should match along the axis while concatenating. pd.merge can take DataFrames as its arguments and is used to combine two DataFrames with the same columns or index, which can't be done with pd.concat, since it would show the repeated column in the DataFrame. Whereas join can be used to join two DataFrames with different indices.
2
141
1
What's the essential difference(s) between pd.DataFrame.merge() and pd.concat()? So far, this is what I found, please comment on how complete and accurate my understanding is: .merge() can only use columns (plus row-indices) and it is semantically suitable for database-style operations. .concat() can be used with either axis, using only indices, and gives the option for adding a hierarchical index. Incidentally, this allows for the following redundancy: both can combine two dataframes using the rows indices. pd.DataFrame.join() merely offers a shorthand for a subset of the use cases of .merge() (Pandas is great at addressing a very wide spectrum of use cases in data analysis. It can be a bit daunting exploring the documentation to figure out what is the best way to perform a particular task. )
Difference(s) between merge() and concat() in pandas
1
0
0
114,669
38,257,055
2016-07-07T23:57:00.000
1
1
0
1
python,network-programming
38,258,777
2
false
0
0
Backing up and copying configs to a server. Automating certain config changes to adhere to standards. Scripts to run 'copy run start' on all devices at will. Finding various config entries on all devices that may need to be altered. There are so many possibilities. Search GitHub, Pastebin, and the Stack Overflow sites for anything using import netmiko, import paramiko, or import ciscoconfparse. Scripts using any of those libraries will typically be network related and offer up ideas.
1
1
0
I originally posted this on Network Engineering, but it was suggested that I post it here. I've been learning Python, and since I work in networking I'd like to start writing a few scripts that I can use in my day-to-day work tasks with switches. So my question is this: what network projects have you used Python in? What sort of tasks have you written scripts for? I'm not asking for source code; I'm interested in what projects people have done to inspire my own coding adventures! Thanks all!
Python Network Projects
0.099668
0
0
411
38,263,384
2016-07-08T09:35:00.000
1
0
0
0
python,caching,spacy
41,644,953
1
false
1
0
First of all, if you only do NER, you can install the parser without vectors. This is possible by giving the argument parser to: python -m spacy.en.download parser This will prevent the 700MB+ GloVe vectors from being downloaded, slimming the memory needed for a single run. Then, well, it depends on the usage you make of the library. If you call it often, it will be better to assign spacy.load('en') to a module/class variable loaded at the beginning of your stack (see the sketch after this record). This will slow down your boot time a bit, but spaCy will be ready (in memory) to be called. (If the boot time is a big problem, you can do lazy loading.)
1
1
0
I'm using spaCy with Python for Named Entity Recognition, but the script requires the model to be loaded on every run and takes about 1.6GB memory to load it. But 1.6GB is not dispensable for every run. How do I load it into the cache or temporary memory so as to enable the script to run faster?
How to save spaCy model onto cache?
0.197375
0
0
1,333
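A minimal sketch of the load-once suggestion above, assuming the spaCy 1.x API used in the question; the entity-extraction helper is a placeholder:

    import spacy

    # Load the model once at import time so repeated calls reuse the
    # in-memory model instead of paying the load cost on every run.
    NLP = spacy.load('en')

    def extract_entities(text):
        doc = NLP(text)
        return [(ent.text, ent.label_) for ent in doc.ents]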
38,264,090
2016-07-08T10:10:00.000
2
0
1
0
python,windows,python-2.7,pip,kivy
38,264,328
1
true
0
0
I think you have to set PATH in the environment variables for the python command to work. If you're on Windows, follow this: My Computer > Properties > Advanced System Settings (you need admin rights here) > Environment Variables. Under "System Variables" create a new variable named PATH, and for its value type your Python installation directory, which includes python.exe, plus its Scripts subfolder. Then it'll work. Or open a cmd prompt in the Python folder (shift + context-menu click) and then enter the command.
1
0
0
I have been looking for the past hour at installing modules with pip. It says that Python 2.7 comes with pip. However, when I type python -m pip install --upgrade pip wheel setuptools into the command line, I just get the error NameError: name 'python' is not defined. I can't even enter python --version. The module I am trying to install is kivy. On their website it just says to download Python and type that string into the command line and it should work. I can't find anywhere whether I need to do something beforehand.
Cant find version or use pip using python27 command line
1.2
0
0
485
38,265,773
2016-07-08T11:43:00.000
4
0
1
0
python,vlc
38,265,916
5
true
0
0
I had the same issue. You should try sudo pip install python-vlc
2
7
0
I am trying to create a media player using vlc and Python, but it throws an error: No module named vlc. How do I fix this?
Import Vlc module in python
1.2
0
0
30,496
38,265,773
2016-07-08T11:43:00.000
0
0
1
0
python,vlc
60,589,679
5
false
0
0
The answer above didn't work for me using Mu 1.0.2 on a Raspberry Pi; this did, however: sudo pip3 install vlc
2
7
0
I am trying to create a media player using vlc and Python, but it throws an error: No module named vlc. How do I fix this?
Import Vlc module in python
0
0
0
30,496
38,270,552
2016-07-08T15:37:00.000
0
0
0
0
python,sql-server,excel,hyperion,essbase
38,299,572
1
true
0
0
There are a couple of ways to go. The most straightforward is to export all of the data from your Essbase database using column export, then designing a process to load the data into SQL Server (such as using the import functionality or BULK IMPORT, or SSIS...). Another approach is to use the DataExport calc script command to export either to a file (that you then load into SQL) or directly to the relational database (DataExport can be configured to export data directly to relational). In either case, you will need privileges that are greater than normal user privileges, and either approach involves Essbase automation that may require you to coordinate with the Essbase admin.
1
0
0
I'm currently using a mix of Smart View and Power Query (SQL) to load data into Excel models; however, my Excel always crashes when Smart View is used. I'm required to work in Excel, but I'm now looking at finding a way to periodically load data from Essbase into my SQL Server database and only use Power Query (SQL) for all my models. What would be my best options in doing this? Being a Python enthusiast, I found essbasepy.py; however, there isn't much documentation on it. Please help.
Load data from Essbase into SQL database
1.2
1
0
2,203
38,275,148
2016-07-08T20:46:00.000
3
1
1
0
python,class,oop,inheritance
38,275,281
5
true
0
0
In Python3, override the special method __getattribute__. This gives you almost complete control over attribute lookups. There are a few corner cases so check the docs carefully (it's section 3.3.2 of the Language Reference Manual).
2
3
0
I'd like to create an Python class that superficially appears to be a subclass of another class, but doesn't actually inherit its attributes. For instance, if my class is named B, I'd like isinstance(B(), A) to return True, as well as issubclass(B, A), but I don't want B to have the attributes defined for A. Is this possible? Note: I don't control the implementation of A. Why I care: The module I'm working with checks that a passed object is a subclass of A. I want to define the necessary attributes in B without inheriting the superfluous attributes defined in A (whose implementation I do not control) because I'm using __getattr__ to pass some attribute calls onto a wrapped class, and if these attributes are defined by inheritance from A, __getattr__ won't be called.
Python subclass that doesn't inherit attributes
1.2
0
0
2,382
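A hedged sketch of the __getattribute__ idea from the accepted answer above: B really does subclass A (so isinstance/issubclass checks pass) but refuses to resolve attributes through A. A here is a stand-in for the real base class, and corner cases such as implicitly invoked dunders are ignored:

    class A(object):
        def helper(self):
            return 'from A'

    class B(A):
        def __getattribute__(self, name):
            # Resolve only names defined on B itself or on the instance.
            if name in B.__dict__ or name in object.__getattribute__(self, '__dict__'):
                return object.__getattribute__(self, name)
            raise AttributeError(name)

    b = B()
    print(isinstance(b, A))      # True
    print(issubclass(B, A))      # True
    print(hasattr(b, 'helper'))  # False - A's attribute is hidden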
38,275,148
2016-07-08T20:46:00.000
1
1
1
0
python,class,oop,inheritance
38,275,187
5
false
0
0
As long as you're defining attributes in the __init__ method and you override that method, B will not run the code from A's __init__ and will thus not define attributes et al. Removing methods would be harder, but seem beyond the scope of the question.
2
3
0
I'd like to create an Python class that superficially appears to be a subclass of another class, but doesn't actually inherit its attributes. For instance, if my class is named B, I'd like isinstance(B(), A) to return True, as well as issubclass(B, A), but I don't want B to have the attributes defined for A. Is this possible? Note: I don't control the implementation of A. Why I care: The module I'm working with checks that a passed object is a subclass of A. I want to define the necessary attributes in B without inheriting the superfluous attributes defined in A (whose implementation I do not control) because I'm using __getattr__ to pass some attribute calls onto a wrapped class, and if these attributes are defined by inheritance from A, __getattr__ won't be called.
Python subclass that doesn't inherit attributes
0.039979
0
0
2,382
38,275,243
2016-07-08T20:53:00.000
1
0
1
0
python,pycharm
38,285,332
1
true
0
0
That's a numbered bookmark. You can remove it by selecting the file in Project view and pressing Ctrl-Shift-9.
1
4
0
I noticed a number inside a square that looks like [9] next to a file that I have in Pycharm Community Edition on the dock on the left side that I do not understand. I am unable to click on it or read any more information about it. I couldn't find anything on Pycharm's website or any of their documentation. I am slightly concerned about this as it could effect subversion if I were to commit this if there is an issue I do not understand. I cannot show a screenshot as my company is very strict on what can be published to the internet but it looks something like this. [python icon]filename1.py [python icon][9]filename2.py [python icon]filename3.py This is a unix file with 775 mode.
PyCharm File Number Denotation
1.2
0
0
77
38,276,184
2016-07-08T22:23:00.000
0
0
1
1
python-2.7,flask,windows-7,ubuntu-14.04
38,276,238
1
false
0
0
Are you accessing or manipulating operating-system-specific information? For example, some file attributes, other than the basic attributes, are different on the two systems.
1
0
0
Is it fine to work on Ubuntu while another person working with me on the same project is working on Windows 7? We are using Python 2.7, and Flask, and have set up matching virtual environments.
Is it okay to work on Ubuntu while another person working on the same project is working on Windows 7?
0
0
0
26
38,278,626
2016-07-09T05:16:00.000
1
0
1
0
android,qpython,qpython3
38,304,956
1
true
0
1
Open the "qpython3" app, then touch "Console". In the top left corner touch "No. 1" or "No. 2" (or ...), then select your background-running scripts and kill them by touching the "X" sign.
1
0
0
I am using Qpython3 on my Android tablet. I have a Python script for a talking alarm clock that I would like to run in the background and then go off at the time the user sets. The problem is, once I set the console running in the background, I can't figure out how to get back to it to stop the script (i.e. get the message to stop repeating).
How do I stop a script that is running in the background in Qpython3?
1.2
0
0
1,123
38,280,859
2016-07-09T10:30:00.000
2
0
0
0
python,reactjs,django-forms
38,281,765
1
true
1
0
The {{ form }} statement belongs to Django's template language. Django templates are responsible for rendering HTML, and so is React, so you don't have to mix the two together. What you probably want to do is use the Django form validation mechanism server-side and let React render the form client-side. In your Django view, simply return a JSON object that you can use in your React code to initialize your form component.
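A rough sketch of the server side; the view name and fields are made up, not from the question:

    from django.http import JsonResponse

    def form_initial(request):
        # React fetches this and uses it to seed the form component's state
        return JsonResponse({
            "name": "",
            "date": request.GET.get("date", ""),  # prefilled when coming from the other page
        })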
1
4
0
Is there any way I can use Django forms inside a ReactJS script, like including {{ form }} in the JSX file? I have a view which displays a form, and it is rendered using React. When I load this page from one page the data in these fields should be empty, but when I hit this view from another page I want data to be prefilled in this form. I know how to do this using Django forms and form views, but I am clueless about where to bring in React.
Django forms in ReactJs
1.2
0
0
1,649
38,288,839
2016-07-10T05:17:00.000
12
0
0
0
python,caching,redis,push,empty-list
38,289,365
1
false
0
0
Empty lists do not exist in Redis - a list must have one or more items. Empty lists (e.g. as a result of popping a non-empty one) are automatically removed.
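You can see this behaviour from redis-py, for instance (a quick sketch assuming a local Redis server; the key name is arbitrary):

    import redis

    r = redis.StrictRedis()
    r.rpush("mykey", "only-item")
    r.lpop("mykey")           # the list is now empty...
    print(r.exists("mykey"))  # ...and the key itself is gone (0)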
1
6
0
We'd like to RPUSH/LPUSH a key with an empty list. This is for consistency reasons: when the key is read with LRANGE than whether the list is empty or not, the rest of the code behaves the same. Why the fact that if a key has an empty list it is deleted is a problem? Because we are using Redis as a cache and would like to differentiate the 2 situations: 1. A specific key with corresponding values was not cached yet. In which case we want to calculate the values (takes a long time) and cache them. The outcome of the calculation might be an empty list. 2. A key with an empty list was already cached. In which case we would like not to perform the calculations and return an empty list. The following options don't work: 1. rpush key --> no list value results with "wrong number of arguments". 2. rpush key [] --> adds a '[]' item The (ugly) solution we are currently using is storing a one-item-list with an "EMPTY-ITEM" item, and checking that when we read the list. Any ideas? Thank you
Redis - how to RPUSH/LPUSH an empty list
1
0
0
4,614
38,291,388
2016-07-10T11:29:00.000
2
0
0
0
python,html,django,dynamic,jinja2
38,301,898
2
true
1
0
I found a solution that works out pretty well. I use <link rel="stylesheet" href="{% block css %}{% endblock %}"> in the template and then {% block css %}{% static 'home/css/file.css' %}{% endblock %} in each page.
1
2
0
I am trying to make my stylesheets dynamic with django (jinja2) and I want to do something like this: <link rel="stylesheet" href="{% static 'home/css/{{ block css }}{{ endblock }}.css' %}"> Apparently, I can't use Jinja in Jinja :), and I don't know how to make this work another way.
Dynamic css import with Jinja2
1.2
0
0
1,671
38,291,701
2016-07-10T12:07:00.000
1
0
0
0
python,csv
38,291,737
2
false
0
0
You should always try to build as much as possible on work that other people have already done for you (such as the pandas library). This saves you a lot of time. Pandas has a lot to offer when you want to process such files, so it seems to me to be the best way to deal with them. Since the question is very general, I can also only give a general answer... When you use pandas you will, however, need to read more of the documentation. But I would not say that this is a downside.
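For illustration, a minimal pandas version of the described workflow; the file name and column names below are made up:

    import pandas as pd

    df = pd.read_csv("input.csv")                      # load the large dataset
    subset = df.iloc[100:200].copy()                   # "copy and paste" a range of rows
    subset["total"] = subset["qty"] * subset["price"]  # perform a calculation on them
    subset.to_csv("output.csv", index=False)           # save to a new CSV file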
1
1
1
Forgive me if my questions is too general, or if its been asked before. I've been tasked to manipulate (e.g. copy and paste several range of entries, perform calculations on them, and then save them all to a new csv file) several large datasets in Python3. What are the pros/cons of using the aforementioned libraries? Thanks in advance.
Using pandas over csv library for manipulating CSV files in Python3
0.099668
0
0
114
38,292,311
2016-07-10T13:20:00.000
2
0
1
0
python,class,object,dictionary
38,292,336
2
false
0
0
All values in Python are objects, dictionaries included. Dictionaries have methods too! For example, dict.keys() is a method, so Python's dict instances are not objects with only data members. There is a class definition for dictionaries; just because it is defined in the Python C code doesn't make dict any less a class. You can subclass dict if you need to add more methods, for example.
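For example, subclassing dict to add a method of your own is a short sketch:

    class CountingDict(dict):
        def increment(self, key):
            # Extra behaviour layered on top of the built-in dict methods
            self[key] = self.get(key, 0) + 1

    d = CountingDict()
    d.increment("a")
    print(d)         # {'a': 1}
    print(d.keys())  # dict has methods of its own, too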
2
0
0
I am studying Python right now and I am learning about dictionaries. Is it correct to compare dictionaries with objects from C++ that only contain data members and no methods? Of course there is no class definition, so every object instance can be declared differently in Python, but I still think it is a good analogy to associate dictionaries with objects with no methods for the purpose of learning. Or is there something that I am missing here?
Can Dictionaries in Python be considered objects from C++ with only Data Members?
0.197375
0
0
224
38,292,311
2016-07-10T13:20:00.000
3
0
1
0
python,class,object,dictionary
38,292,385
2
false
0
0
Is it correct to compare dictionaries with objects from C++ which only contain data members and no methods? NO. A dictionary is a data structure analogous to std::unordered_map, though the implementation might differ. An object is an instance of a class. Both C++ and Python support object-oriented programming to a large extent, though there are differences which are out of scope for this answer. Of course, in both Python and C++, the dict or the std::unordered_map is implemented as a class which has methods and data members. In Python, though, dict is itself a type (an instance of <type 'type'>). Considering that in Python everything is an object, even a class or a function, it is too loose to talk about "objects" in the C++ sense when in the Python world.
2
0
0
I am studying Python right now and I am learning about dictionaries. Is it correct to compare dictionaries with objects from C++ that only contain data members and no methods? Of course there is no class definition, so every object instance can be declared differently in Python, but I still think it is a good analogy to associate dictionaries with objects with no methods for the purpose of learning. Or is there something that I am missing here?
Can Dictionaries in Python be considered objects from C++ with only Data Members?
0.291313
0
0
224
38,292,478
2016-07-10T13:37:00.000
0
0
0
0
python,particle-filter
44,927,182
1
false
0
0
It is completely normal for particles to get distributed everywhere; otherwise it would not be a probabilistic approach. In addition, note that the particles are sampled based on the posterior probability at time t-1 and the current motion distribution. However, even if it is not generally recommended in filtering, you can restrict your search space in the sampling step. For backtracking, you may use at each time t the same approach as forward tracking, changing only the direction of the velocity (on all axes). You can start from the state which maximises the probability distribution. Finally, you compare the obtained trajectories (the results of forward/backward tracking) and, based on the result, decide which further filtering is needed to get the best result.
1
1
0
I have just implemented a particle filter for Indoor Tracking. It looks good but at some points the particles go in a room and are trapped there. What's a smart way to do backtracking? I save the state of the particles for their last 10 movements. Thank you
Backtracking with Particle Filter
0
0
0
304
38,295,148
2016-07-10T18:24:00.000
0
0
1
0
python,database,kivy
38,304,824
1
false
0
0
Both approaches have pros and cons. A database is designed to store and query data, and you can query it easily (SQL) from multiple processes. If you don't have multiple processes and no complicated queries, a database doesn't really offer that much; maybe persistence, if that is a concern for you. If you don't need the features a database offers, don't use one. If you simply want to store a bit of data, a list is better. It's probably faster because you don't need inter-process communication. Also, if you store the data in the database you will still need to get it into the Python process somehow, and then you will probably put it in a list anyway. Based on your requirements a database doesn't offer any features you need, so you should go with a simple list.
1
0
0
I work on a Raspberry Pi project and use Python + Kivy, for these reasons: I read string values coming from a device installed in the field every 300 ms. As soon as I see a certain value I trigger a Python thread to run another function, which takes the string, stores it in a list, and timestamps it. My Kivy app displays the value stored in the list and runs some other functions. The question is: is it a better approach to save received strings into a DB and let Kivy read the DB, or is it better for Python to append to the list and run another function that walks the list and triggers the Kivy task?
is it better to read from LIST or from Database?
0
1
0
56
38,297,010
2016-07-10T22:31:00.000
0
0
1
0
python,python-3.x
63,151,807
4
false
0
0
Closing the console in Windows and reopening it fixes the issue for me. I got the error when doing sudo apt update.
1
8
0
The full error is: OverflowError: timestamp too large to convert to C _PyTime_t I have no idea what this means, and have not been able to find it anywhere else online. I am new to python so it may be something really simple that I'm missing. The error is coming from this line of code within a function: time.sleep(t) t is a variable
What does this overflow error in python mean?
0
0
0
3,308
38,297,765
2016-07-11T00:52:00.000
2
0
0
0
python,image-processing,scikit-image,glcm
38,297,891
1
true
0
0
The simplest way of binning 8-bit images is to divide each value by 32. Each pixel value is then in [0, 8). By the way, beyond avoiding sparse matrices (which are not really an issue), binning makes the GLCM more robust to noise.
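A sketch of that binning with scikit-image; the random image below is a stand-in for your real 8-bit grayscale array:

    import numpy as np
    from skimage.feature import greycomatrix

    image = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in input
    binned = (image // 32).astype(np.uint8)  # integer division maps 0..255 -> 0..7
    glcm = greycomatrix(binned, distances=[1], angles=[0], levels=8)
    print(glcm.shape)  # (8, 8, 1, 1) instead of a sparse 256x256 matrix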
1
1
1
I am trying to find the GLCM of an image using greycomatrix from skimage library. I am having issues with the selection of levels. Since it's an 8-bit image, the obvious selection should be 256; however, if I select values such as 8 (for the purpose of binning and to prevent sparse matrices from forming), I am getting errors. QUESTIONS: Does anyone know why? Can anyone suggest any ideas of binning these values into a 8x8 matrix instead of a 256x256 one?
Grey Level Co-Occurrence Matrix // Python
1.2
0
0
2,092
38,298,459
2016-07-11T02:47:00.000
7
0
0
0
javascript,python,selenium,web-scraping
38,298,895
2
false
1
0
Ideally you don't even need to click buttons in these kinds of cases. All you need is to see which web service the form sends a request to when the submit button is clicked. For that, open your browser's developer tools, go to the Network tab and select 'Preserve log'. Now submit the form manually and look for the first XHR GET/POST request sent. It will be a POST request 90% of the time. When you select that request, the request parameters will show the values that you entered while submitting the form. Bingo! Now all you need to do is mimic this request, with the relevant request headers and parameters, in your Python code using requests. Hope it helps.
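Once you have the URL and parameters from the Network tab, the Python side is a short requests call. Everything below is a placeholder you would replace with what you actually observed:

    import requests

    resp = requests.post(
        "https://example.com/form/submit",           # URL seen in the Network tab
        data={"username": "foo", "comment": "bar"},  # form fields from the captured request
        headers={"User-Agent": "Mozilla/5.0"},       # copy any relevant request headers
    )
    print(resp.status_code, resp.text[:200])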
1
5
0
I am working on a small project where I have to submit a form to a website. The website is, however, using onclick event to submit the form (using javascript). How can the onclick event be simulated in python? Which modules can be used? I have heard about selenium and mechanize modules. But, which module can be used or in case of both, which one is better? I am new to web scraping and automation.So,it would be very helpful. Thanks in advance.
How can I simulate onclick event in python?
1
0
1
13,226
38,298,752
2016-07-11T03:31:00.000
0
0
0
1
python,macos
71,665,360
3
false
0
0
I [believe I] resolved this by chown'ing and chgrp'ing everything in /usr/local/Cellar and then unlinking and relinking:

% brew link python3
Linking /usr/local/Cellar/python/3.6.5... Error: Could not symlink bin/2to3
Target /usr/local/bin/2to3 is a symlink belonging to [email protected]. You can unlink it:
  brew unlink [email protected]
To force the link and overwrite all conflicting files:
  brew link --overwrite python
To list all files that would be deleted:
  brew link --overwrite --dry-run python
kevcool@MacBook-Pro-2 Cellar % brew unlink [email protected]
Unlinking /usr/local/Cellar/[email protected]/3.9.12... 24 symlinks removed.
kevcool@MacBook-Pro-2 Cellar % brew link python3
Linking /usr/local/Cellar/python/3.6.5... 25 symlinks created.
% python3 --version
Python 3.6.5
1
1
0
I am trying to re-install python3 on my Mac using brew, via brew install python3. But when proceeding to the link step, it threw an error:

The brew link step did not complete successfully
The formula built, but is not symlinked into /usr/local
Could not symlink bin/2to3-3.5
Target /usr/local/bin/2to3-3.5 already exists. You may want to remove it:
  rm '/usr/local/bin/2to3-3.5'
To force the link and overwrite all conflicting files:
  brew link --overwrite python3
To list all files that would be deleted:
  brew link --overwrite --dry-run python3

But after using rm '/usr/local/bin/2to3-3.5' and brew link --overwrite python3, another error occurred: Error: Permission denied - /usr/local/Frameworks. I don't know why this happened, because I cannot see the Frameworks directory under /usr/local/.
Linking /usr/local/Cellar/python3/3.5.1... Error: Permission denied - /usr/local/Frameworks
0
0
0
1,544
38,301,047
2016-07-11T07:09:00.000
1
0
1
0
python,environment-variables,kivy
38,301,272
2
false
0
1
Make sure you're running the command from the folder where the *.py file is located, "kivy *.py" should run from there.
1
0
0
After the installation of Kivy 1.9.1 on Windows using the commands of Kivy installation tutorials, I can't run the program using "kivy ***.py". I don't know how to set up the environment variables, and I can't find it on the official websites. Kivy: 1.9.1 Python: 3.4.4 Windows 10 Please HELP! Thanks
How to run kivy after 1.9.1 on windows?
0.099668
0
0
662
38,301,675
2016-07-11T07:47:00.000
1
0
1
0
python,ipython
38,302,182
1
false
0
0
Have you considered trying the IPython notebook? (I've used it on a couple of training courses.) You access interactive Python via a web browser, creating 'boxes' of runnable code. If you run a block, it gives the answer you want and throws no errors, you can then create another block and move on (preserving the bit you wish to keep). If it throws an error or doesn't produce the expected result, you can edit and re-run until it does. Your boxes of code can be interspersed with HTML markup notes, graphs, diagrams - you name it. The whole notebook can then be saved for later access and re-run when re-loaded.
1
1
0
I am learning python using IPython. When I did something nice I like to copy this into some kind of private CheatSheet-file. Therefore I use the iPython %hist command. I am wondering, if there is a way to only print those commands which had been syntactically correct and did not raise an error. Any ideas?
iPython history of valid commands
0.197375
0
0
104
38,303,322
2016-07-11T09:19:00.000
0
0
1
0
ipython,anaconda
38,400,193
1
true
0
0
The latest version of ipython has now been added to the anaconda package list. Choose: Anaconda navigator > Environment > Update index Then search for the ipython package: right click it and select version 5.0.0
1
0
0
How can I install ipython5 in the anaconda navigator? It currently is running 4.2.0, but the latest ipython release is not available as an option.
How can I install ipython5 in the anaconda navigator?
1.2
0
0
367
38,304,942
2016-07-11T10:42:00.000
0
0
0
0
python,scikit-learn,classification,multilabel-classification,voting
53,769,151
1
false
0
0
I think VotingClassifier only accepts different static weights for each estimator. However, you may solve the problem by assigning class weights via the class_weight parameter of the random forest estimator, calculating the class weights on your training set.
1
1
1
I was wondering if it is possible to use dynamic weights in sklearn's VotingClassifier. Overall i have 3 labels 0 = Other, 1 = Spam, 2 = Emotion. By dynamic weights I mean the following: I have 2 classifiers. First one is a Random Forest which performs best on Spam detection. Other one is a CNN which is superior for topic detection (good distinction between Other and Emotion). What I would like is a VotingClassifier that gives a higher weight to RF when it assigns the label "Spam/1". Is VotingClassifier the right way to go? Best regards, Stefan
Ensembling with dynamic weights
0
0
0
139
38,307,982
2016-07-11T13:15:00.000
0
0
1
0
python,python-3.x,utf-8
38,308,361
4
false
0
0
What do you mean exactly by "special utf-8 characters"? If you mean every non-ASCII character, then you can try: s.encode('ascii', 'strict'). It will raise a UnicodeEncodeError if the string is not 100% ASCII.
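If the goal really is the U+0021 to U+00FF range from the question, a direct check is also possible. A sketch, where the sample strings are made up:

    def in_range(s):
        return all("\u0021" <= ch <= "\u00ff" for ch in s)

    strings = ["abc!", "caf\u00e9", "smile\u263a"]
    kept = [s for s in strings if in_range(s)]
    print(kept)  # the third string is dropped: U+263A is outside the range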
1
0
0
I have a list of Strings in python. Now I want to remove all the strings from the list that are special utf-8 characters. I want just the strings which include just the characters from "U+0021" to "U+00FF". So, do you know a way to detect if a String just contains these special characters? Thanks :) EDIT: I use Python 3
How to detect if a String has specific UTF-8 characters in it? (Python)
0
0
0
4,331
38,308,310
2016-07-11T13:31:00.000
0
0
1
0
python
38,308,473
1
false
0
0
By default, Python looks for the requested file relative to the current working directory (the directory the program was launched from, which is not necessarily where the script file lives). If you want Python to look for the file in some other location, you have to specify an absolute path. About your error, nothing can be said unless you share your code.
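A sketch of building an absolute path from the script's own location; 'info.txt' stands in for whatever file you are reading:

    import os

    here = os.path.dirname(os.path.abspath(__file__))  # directory of this script
    path = os.path.join(here, "info.txt")
    with open(path) as f:
        print(f.read())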
1
0
0
From where does Python pick its file for reading? Is there any specific folder, or does it pick from anywhere on the system given just the filename and extension? Is there a need to mention an absolute path? I am getting an error while reading txt and csv files: no such file or directory. With f = open('info.csv'); print f, I get a handle for the above file, but I don't get a handle for the .txt file, even though both are in the same folder. Why does it give an error?
IOError while reading files
0
0
0
178
38,314,964
2016-07-11T19:43:00.000
1
0
0
0
python,tensorflow
38,317,151
1
false
0
0
Note that this exercise only speeds up the first step, by skipping the prefetching of a larger fraction of the data. This exercise does not speed up the overall training. That said, the tutorial text needs to be updated. It should read: Search for min_fraction_of_examples_in_queue in cifar10_input.py. If you lower this number, the first step should be much quicker because the model will not attempt to prefetch the input.
1
1
1
The TensorFlow tutorial for using CNN for the cifar10 data set has the following advice: EXERCISE: When experimenting, it is sometimes annoying that the first training step can take so long. Try decreasing the number of images that initially fill up the queue. Search for NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN in cifar10.py. In order to play around with it, I tried decreasing this number by a lot but it doesn't seem to change the training time. Is there anything I can do? I tried even changing it to something as low as 5 and the training session still continued very slowly. Any help would be appreciated!
Speeding up TensorFlow Cifar10 Example for Experimentation
0.197375
0
0
174
38,317,462
2016-07-11T22:55:00.000
0
0
1
0
python,multithreading,multiprocessing
38,317,881
1
false
0
0
Based on Harp's second comment to his original post (which was posted after your answer), I suspect that you would now agree with me that processes are probably called for here, given this newly-supplied information. However, I find myself questioning just how much truly effective concurrency is likely to be found here. This sounds to me like a pipeline: a "script 1" (with its sub-scripts 1.1, 1.2, etc.) prepares a file of inputs that is then delivered to "script 2." Especially since "script 2" is utterly beholden to an external web site for what it does, I'm just not yet persuaded that the added complexity of multi-threading is genuinely justifiable here.
1
0
0
I wanted to get some help with an application... Currently I have a script that saves certain information to a database table; we'll call this table "x". I have another script that gets and saves other info to a different database table; we'll call this one "y". I also have a script that runs formulas on the information found in table y, and I have another script that opens the link found in table x and saves certain information into table "z". The problem I have is that the first script doesn't end, and neither does the third script. So I know now that I need to have either threads or multiple processes running, but which one do I choose? Script 1 accesses tables W & X. Script 2 accesses tables X & Y. Script 3 accesses table Y. Script 4 accesses table Z. Can you please give me some guidance on how to proceed?
Should I use Threads or multiple processess?
0
0
0
44
38,321,248
2016-07-12T06:15:00.000
1
0
0
0
python,numpy,theano,deep-learning,keras
38,353,930
1
false
0
0
This expression should do the trick: theano.tensor.tanh((x * y).sum(2)) The dot product is computed 'manually' by doing element-wise multiplication, then summing over the last dimension.
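Spelled out with symbolic variables, as a sketch:

    import theano
    import theano.tensor as T

    x = T.tensor3("x")  # (n_batch, n_length, n_dim)
    y = T.tensor3("y")
    out = T.tanh((x * y).sum(axis=2))  # elementwise product, summed over n_dim -> (n_batch, n_length)
    f = theano.function([x, y], out)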
1
0
1
I am trying to apply tanh(dot(x,y)); x and y are batch data of my RNN. x and y have shape (n_batch, n_length, n_dim), e.g. (2,3,4): 2 samples with 3 sequence steps, each 4-dimensional. I want to take the inner (dot) product over the last dimension. Then tanh(dot(x,y)) should have shape (n_batch, n_length) = (2, 3). Which function should I use?
How to use a dot product on batch data?
0.197375
0
0
291
38,321,820
2016-07-12T06:49:00.000
0
1
0
1
python,ssh,twisted.conch,password-prompt
38,324,034
1
false
0
0
The password prompt is part of keyboard-authentication which is part of the ssh protocol and thus cannot be changed. Technically, the prompt is actually client side. However, you can bypass security (very bad idea) and then output "your codes is"[sic] via the channel
1
0
0
I wrote an SSH server with Twisted Conch. When I execute the "ssh [email protected]" command on the client side, my Twisted SSH server returns a prompt requesting a password, like "[email protected]'s password: ". But now I want to change this password prompt to something like "your codes is:". Does anyone know how to do it?
Python SSH Server(twisted.conch) change the password prompt
0
0
0
122
38,323,412
2016-07-12T08:16:00.000
0
0
1
1
python,python-2.7,packages,installation-package
58,928,704
3
false
0
0
Install with pip: pip3 install pyclustering. It even works from the Anaconda prompt.
1
2
0
This is the error I am getting when trying to install PyCluster. I am using Python 2.7 with Anaconda in the Spyder IDE, on Windows.

Downloading/unpacking PyCluster
  Getting page http://pypi.python.org/simple/PyCluster
  URLs to search for versions for PyCluster:
  * https://pypi.python.org/simple/PyCluster/
  Getting page https://pypi.python.org/simple/PyCluster/
  Analyzing links from page https://pypi.python.org/simple/pycluster/
  Could not find any downloads that satisfy the requirement PyCluster
No distributions at all found for PyCluster
Exception information:
Traceback (most recent call last):
  File "C:\Users\anankuma\AppData\Local\Continuum\Anaconda\lib\site-packages\pip-1.2.1-py2.7.egg\pip\basecommand.py", line 107, in main
    status = self.run(options, args)
  File "C:\Users\anankuma\AppData\Local\Continuum\Anaconda\lib\site-packages\pip-1.2.1-py2.7.egg\pip\commands\install.py", line 256, in run
    requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
  File "C:\Users\anankuma\AppData\Local\Continuum\Anaconda\lib\site-packages\pip-1.2.1-py2.7.egg\pip\req.py", line 1011, in prepare_files
    url = finder.find_requirement(req_to_install, upgrade=self.upgrade)
  File "C:\Users\anankuma\AppData\Local\Continuum\Anaconda\lib\site-packages\pip-1.2.1-py2.7.egg\pip\index.py", line 157, in find_requirement
    raise DistributionNotFound('No distributions at all found for %s' % req)
DistributionNotFound: No distributions at all found for PyCluster

Please suggest a workaround. Thanks
PyCluster unable to install package
0
0
0
4,769
38,327,327
2016-07-12T11:15:00.000
0
1
0
0
python,amazon-web-services,aws-lambda,amazon-sns
38,329,761
1
false
0
0
Exceptions from Lambda functions will be in your logs, which are streamed to AWS CloudWatch Logs. Execution time is stored as the Lambda Duration metric in CloudWatch. You would need to set up CloudWatch alarms on those items to be notified.
1
0
0
I'm using SNS to trigger lambda functions in AWS. This works fine and as expected, but I'm wondering if there is a way to get feedback about the execution times and if an exception was raised within the function. Obviously there is exception handling code in my lambda functions for most of the business logic, but I'm thinking about cases like an external program (like Ghostscript) that might end up in an endless loop and eventually get terminated by the 10 minute Lambda limit. As far as I know you can do this easily if you invoke the method in an synchronous fashion, but I can't seem to find a way to get information about how long the execution lasted and if something bad happened. Is there a way to subscribe to execution errors or similar, or have a callback from AWS (not my code) when an exception or timeout occurs?
Getting information about function execution time/billing and if the function call was successful
0
0
0
85
38,328,159
2016-07-12T11:52:00.000
0
0
1
0
python,scikit-learn
43,412,660
1
false
0
0
Several scikit-learn tools such as GridSearchCV and cross_val_score rely internally on Python’s multiprocessing module to parallelize execution onto several Python processes by passing n_jobs > 1 as argument. Taken from Sklearn documentation: The problem is that Python multiprocessing does a fork system call without following it with an exec system call for performance reasons. Many libraries like (some versions of) Accelerate / vecLib under OSX, (some versions of) MKL, the OpenMP runtime of GCC, nvidia’s Cuda (and probably many others), manage their own internal thread pool. Upon a call to fork, the thread pool state in the child process is corrupted: the thread pool believes it has many threads while only the main thread state has been forked. It is possible to change the libraries to make them detect when a fork happens and reinitialize the thread pool in that case: we did that for OpenBLAS (merged upstream in master since 0.2.10) and we contributed a patch to GCC’s OpenMP runtime (not yet reviewed).
1
4
1
Does anybody use the "n_jobs" parameter of sklearn classes? I work with sklearn in Anaconda 3.4, 64-bit. Spyder version is 2.3.8. My script can't finish its execution after setting the "n_jobs" parameter of some sklearn class to a non-zero value. Why is this happening?
n_jobs don't work in sklearn-classes
0
0
0
1,402
38,330,752
2016-07-12T13:47:00.000
2
0
0
0
python,sublimetext2,sublimetext3,sublimetext,text-editor
38,330,853
2
false
1
0
In the menu bar: View > Layout > Single Or from the keyboard (on Windows): Alt + Shift + 1 To find your default shortcuts, Preferences > Key Bindings - Default, and search for "set_layout".
2
0
0
I was wondering how you exit the multiple row layout on Sublime Text. I switched to a 3 row layout when editing one of my Django projects, but how do I exit from it (remove the extra rows I have added). Thanks, Henry
Sublime Text: How do you exit the multiple row layout
0.197375
0
0
98
38,330,752
2016-07-12T13:47:00.000
2
0
0
0
python,sublimetext2,sublimetext3,sublimetext,text-editor
38,330,833
2
true
1
0
Use View -> Layout menu. If you choose View -> Layout -> Single, other rows will be removed. Short keys depends on OS.
2
0
0
I was wondering how you exit the multiple row layout on Sublime Text. I switched to a 3 row layout when editing one of my Django projects, but how do I exit from it (remove the extra rows I have added). Thanks, Henry
Sublime Text: How do you exit the multiple row layout
1.2
0
0
98
38,331,175
2016-07-12T14:04:00.000
0
0
1
0
python,ide,pycharm
38,331,228
1
false
0
0
PyCharm displays the Welcome screen when no project is open. From this screen, you can quickly access the major starting points of PyCharm. The Welcome screen appears when you close the current project in the only instance of PyCharm. If you are working with multiple projects, usually closing a project results in closing the PyCharm window in which it was running, except for the last project, closing this will show the Welcome screen.
1
1
0
I switched to PyCharm a couple of months ago, but I can't figure out how to get rid of the welcome screen when I open files. More specifically, I've set up my mac to open all .py files using PyCharm. However, when I double click on a .py file, it's the Welcome screen that opens up and not the .py file. How do I get PyCharm to just open the python script in the editor, without showing me a welcome screen?
PyCharm Directly Open Python File
0
0
0
393
38,340,729
2016-07-13T00:12:00.000
-1
0
1
1
python,python-3.x,stdin,piping
38,340,837
1
false
0
0
You can test if your program's input is connected to a tty, which may help, depending on your use case:

$ python -c 'import sys; print(sys.stdin.isatty())'
True
$ echo hi | python -c 'import sys; print(sys.stdin.isatty())'
False
$ python -c 'import sys; print(sys.stdin.isatty())' < foo
False
1
0
0
If my script needs to behave differently when it is being piped to versus when it is being called normally, how can I determine whether or not it is being piped to? This is necessary for avoiding hanging. I am not talking about merely checking whether or not stdin is empty or not.
Determine if script is being piped to in Python 3
-0.197375
0
0
172
38,341,325
2016-07-13T01:36:00.000
2
0
0
0
python,django,django-forms,django-admin
38,363,690
1
true
1
0
I fixed the issue by using an iframe to embed the page itself. I used the ?_popup=1 argument so that the navbar and other parts of the admin site wouldn't show up.
1
0
0
I am trying to embed the exact form that appears in the Django admin when I edit a model in a different page on my website. My plan is to have an Edit button that, when clicked, displays a modal with the edit page inside of it. The issue with using a ModelForm is that this particular model has two generic foreign keys. The admin handles this perfectly, providing the ability to add, edit, or remove these objects. If I could embed the admin page (with its HTML or perhaps my own), that would be all I need. Thanks!
Django Embed Admin Form
1.2
0
0
674
38,344,740
2016-07-13T07:03:00.000
0
0
1
0
java,python,search,nlp,semantics
38,437,943
1
false
0
0
Your question is somewhat vague, but I will try nonetheless... If I understand you correctly, then what you want to do (depending on the effort you want to spend) is the following:

1. Expand the keyword to a synonym list that you will use to search the topics (you can use WordNet for this).
2. Use collocations (an n-gram model) to extend the keyword to the likely bi- and tri-grams and search for these in the texts.
3. Depending on the availability of the data, you may also want to create a classifier (e.g. using good old SVM or CRF) that maps lists of keywords into topics (where a topic is a class).
4. Assuming that you have a number of documents per topic, you may also want to create a list of the most frequent words per topic (eliminating stop-words).

Most of the functionality is available via NLTK, Pandas, etc. for Python, and OpenNLP, libsvm, LingPipe in Java.
1
0
1
I want to do SEMANTIC keyword search on a list of topics with NLP (Natural Language Processing). I would appreciate any reference links or ideas.
How to do semantic keyword search with nlp
0
0
0
296
38,348,268
2016-07-13T09:52:00.000
1
0
0
1
python,sockets,zeromq
38,349,487
1
true
0
0
The simplest approach is to create a pool-of-ports manager (and rather avoid attempts to share or pass ZeroMQ sockets to, or among, other processes). Create a persistent, a-priori known access point, e.g. a tcp://A.B.C.D:8765 transport-class based .bind(), exposed to all client processes as a port-assignment service. Client processes .connect() to it, handshake in whatever manner is needed to prove identity/credentials/purpose/etc., and .recv(), in a coordinated manner, one currently free messaging/signalling port number that is guaranteed system-wide not to be in use at that moment, until it is returned to the port manager. A rotating pool of ports is centrally managed under your code's control, whereas all the sockets are created locally in the distributed processes/threads that .connect()/.bind() to the pool-manager-announced port number, and thus remain, consistent with ZeroMQ advice, unshared per se.
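A bare-bones sketch of such a port-assignment service with pyzmq; the endpoint and the port pool below are assumptions, not values from the question:

    import zmq

    ctx = zmq.Context()
    rep = ctx.socket(zmq.REP)
    rep.bind("tcp://127.0.0.1:8765")  # the a-priori known manager endpoint
    pool = list(range(9000, 9010))    # hypothetical pool of free ports

    while True:
        msg = rep.recv()  # client handshake / port request
        if pool:
            rep.send_string(str(pool.pop()))  # hand out an unused port
        else:
            rep.send_string("none")           # pool exhausted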
1
0
0
I have a Python program which spawns several other Python programs as subprocesses. One of these subprocesses is supposed to open and bind a ZMQ publisher socket, such that other subprocesses can subscribe to it. I cannot give guarantees about which tcp ports will be available, so when I bind to a random port in the subprocess, my main program will not know what to tell the other subprocesses. Is there a way to bind the socket in the main process and then somehow pass the socket to my subprocess? Or either some other way to preregister the socket or a standard way to pass the port information from the subprocess back to my main process (stdout and stderr are already used by other data)? Just checking for a free port in the main process and passing that to the subprocess is not really optimal, because this could still fail if the socket is being assigned in the meantime. Also, since my program should work on Unix and Windows, I cannot really use ipc sockets, which would otherwise solve my problem.
Moving a bound ZMQ socket to another process
1.2
0
0
324
38,354,633
2016-07-13T14:33:00.000
7
0
1
0
python,cyclomatic-complexity,code-metrics
38,357,619
4
false
0
0
Python isn't special when it comes to cyclomatic complexity. CC measures how much branching logic is in a chunk of code. Experience shows that when the branching is "high", that code is harder to understand and change reliably than code in which the branching is lower. With metrics, it typically isn't absolute values that matter; it is relative values as experienced by your organization. What you should do is to measure various metrics (CC is one) and look for a knee in the curve that relates that metric to bugs-found-in-code. Once you know where the knee is, ask coders to write modules whose complexity is below the knee. This is the connection to long-term maintenance. What you don't measure, you can't control.
1
12
0
I have a relatively large Python project that I work on, and we don't have any cyclomatic complexity tools as a part of our automated test and deployment process. How important are cyclomatic complexity tools in Python? Do you or your project use them and find them effective? I'd like a nice before/after story if anyone has one so we can take a bit of the subjectiveness out of the answers (i.e. before we didn't have a cyclo-comp tool either, and after we introduced it, good thing A happened, bad thing B happened, etc). There are a lot of other general answers to this type of question, but I didn't find one for Python projects in particular. I'm ultimately trying to decide whether or not it's worth it for me to add it to our processes, and what particular metric and tool/library is best for large Python projects. One of our major goals is long term maintenance.
Cyclomatic complexity metric practices for Python
1
0
0
10,581
38,356,263
2016-07-13T15:44:00.000
-3
0
0
0
python,tkinter
38,357,614
2
false
0
1
The forms toolkit offers precisely the components that it offers. If you are not happy with round radio buttons, then code in OSF/Motif, which offers diamond-shaped radio buttons. Either that, or you could hack the internals of the widget (sorry, "control": I am so accustomed to professional [= UNIX] terminology). The round button is probably represented as a pixmap somewhere: just overwrite that in place, lickety-split, with your own two-tone pixmap that effects a rough diamond shape.
1
0
0
I am creating a GUI for an application, modeled off of one I have seen. This other application uses diamond-shaped radiobutton indicators from Python Tkinter, and I can't seem to find out how to use a diamond-shaped radiobutton in my program. All of my attempts at creating a radiobutton result in a circular-shaped radiobutton. Any thoughts? I'm running my GUI on Red Hat and Windows; same problem on both.
Diamond Shaped Radiobuttons in Python Tkinter
-0.291313
0
0
494
38,357,398
2016-07-13T16:44:00.000
2
0
1
1
ipython,keras
38,408,813
1
false
0
0
I don't think keras is the only problem. If you are using theano as a backend, it will create $HOME/.theano/ as well. One dirty trick is to export HOME=/data/username/, but programs other than keras or ipython will then also treat /data/username/ as $HOME. To avoid that, you can do it locally by calling HOME=/data/username/ ipython or HOME=/data/username/ python kerasProgram.py.
1
2
0
When I'm in ipython and try to import keras, I get the error No space left on device: /home/username/.keras. How can I change this so that Keras does not use my HOME directory, and instead use /data/username/? I did the same for the directory ~/.ipython. I moved it to the desired location and then did export IPYTHONDIR=/data/username/.ipython, can I do something similar with Keras? More generally, how can I do this for any app that wants to use HOME? Note: Please don't give answers like "you can clean your home" etc. I am asking this for a reason. Thanks!
Move .keras directory in Ubuntu
0.379949
0
0
680
38,358,671
2016-07-13T17:56:00.000
0
0
1
0
python,visual-studio-2015,ptvs
42,372,571
1
false
0
0
Similar problem here (VS Enterprise 2015 Update 3, PTVS 2.2.50113.00): I have a solution with many different project types (most of them C#) and ONE Python project. Opening the solution loads ALL projects but the one-and-only Python project. The Solution Explorer entry for that (not-yet-loaded) project displays an error message like "opening the project requires manual input". Right-clicking on the project and choosing the "Load Project" menu item then successfully loads it. Somewhat annoying. Thanks for any help.
1
0
0
Wondering if others are experiencing this... I've got a fresh install of VS 2015 with PTVS 2016, and I almost always have to open the .sln file twice: the first time it opens, the project fails to load (this failure is shown in the Solution Explorer in VS). Perhaps there is a quick fix, but I didn't see anything during a cursory search on the web. Maybe there is a setting somewhere that I've got wrong?
loading of Python program in Visual Studio frequently fails first or second time
0
0
0
81
38,363,640
2016-07-14T00:15:00.000
3
0
1
0
python,hash
38,363,706
1
false
0
0
-1 is not "reserved as an error" in Python. Not sure what that would even mean. There are a huge number of programs you couldn't write simply and clearly if you weren't allowed to use -1. "Is there a problem?" No. Hash functions do not need to return a different hash for every object. In fact, this is not possible, since there are many more possible objects than there are hashes. CPython's hash() has the nice property of returning its argument for non-negative numbers up to sys.maxint, which is why in your second question hash(hash('s')) == hash('s'), but that is an implementation detail. The fact that -1 and -2 have the same hash simply means that using those values as, for example, dictionary keys will result in a hash conflict. Hash conflicts are an expected situation and are automatically resolved by Python, and the second key added would simply go in the next available slot in the dictionary. Accessing the key that was inserted second would then be slightly slower than accessing the other one, but in most cases, not enough slower that you'd notice. It is possible to construct a huge number of unequal objects all with the same hash value, which would, when stored in a dictionary or a set, cause the performance of the container to deteriorate substantially because every object added would cause a hash collision, but it isn't something you will run into unless you go looking for it.
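A quick interactive check of both points (these are CPython implementation details, not language guarantees):

    >>> hash(-1), hash(-2)      # -1 is used as an error code in CPython's C API
    (-2, -2)
    >>> d = {-1: "a", -2: "b"}  # the hash collision is resolved internally
    >>> d[-1], d[-2]
    ('a', 'b')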
1
7
0
I use Spyder, running Python 2.7. I just found something interesting: hash(-1) and hash(-2) both return -2; is there a problem? I thought a hash function on different objects should return different values. I read in previous posts that -1 is reserved as an error in Python. hash('s') returns 1835142386, and then hash(1835142386) returns the same value. Is this another problem? Thanks.
Why hash function on two different objects return same value?
0.53705
0
0
1,486
38,363,949
2016-07-14T00:59:00.000
1
0
0
0
python,sas,dataset
53,167,909
1
true
0
0
With the help of the sas7bdat package you can read any SAS dataset on a local drive; for datasets on a server, use an FTP or SFTP connection to read the file as a file object, which makes them just as easy to access.
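For the local-drive case, a minimal sketch with the sas7bdat package; the file name is made up:

    from sas7bdat import SAS7BDAT  # pip install sas7bdat

    with SAS7BDAT("mydata.sas7bdat") as f:
        df = f.to_data_frame()  # into pandas for the Python side
    print(df.head())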
1
0
0
I am writing some code in Python, with all the data available in SAS datasets both on a local hard drive and on a SAS server. The problem is how to access/import these datasets directly in my Python program and then write back. Can anybody help? I have seen a recommendation for the Python package "sas7bdat" but am not sure about it. Is there any other way to get connected, especially to the datasets available on the local drive (not on the server)?
How to access SAS datasets (available both on a local drive and a SAS server) from Python code?
1.2
0
0
958
38,364,162
2016-07-14T01:31:00.000
2
0
1
0
python,function
38,364,393
4
false
0
0
One way is to declare your required parameters positionally and give the optional ones defaults, like func(a, b, c=1); a and b are then required, because the code will error out at runtime if either is missing. For the remaining optional parameters you would then use Python's *args and **kwargs. Of course, any time you use args and kwargs you need additional code to pull the parameters out of them, and for each combination of optional parameters you may need to code a bunch of conditional control flow. You also don't want too many optional arguments, because they make the code's API too complex to describe, and the control flow ends up with too many lines of code: the number of possible combinations grows very quickly with each additional optional parameter. AND your test code grows EVEN faster...
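A compact illustration of the pattern; all names here are hypothetical:

    def connect(host, port, timeout=30, *args, **kwargs):
        # host and port are required; timeout is optional with a default;
        # anything else lands in args/kwargs and needs explicit handling
        retries = kwargs.get("retries", 3)
        return (host, port, timeout, retries)

    connect("db.local", 5432)             # OK: required args supplied
    connect("db.local", 5432, retries=5)  # optional extras via kwargs
    # connect("db.local")  -> TypeError: missing required argument 'port'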
1
9
0
especially when there are so many parameters (10+, 20+). What are good ways of enforcing required/optional parameters to a function? What are some good books that deal with this kind of question for Python? (like Effective C++ for C++) ** EDIT ** I think it's very impractical to list def foo(self, arg1, arg2, arg3, .. arg20, .....): when there are so many required parameters.
in python, how do you denote required parameters and optional parameters in code?
0.099668
0
0
19,827
38,364,568
2016-07-14T02:30:00.000
0
1
0
0
python,bdd,scenarios,python-behave
38,643,609
2
false
0
0
What I've been doing might give you an idea: in before_all, create a list on the context (e.g. context.teardown_items = []). Then, in the various steps of various scenarios, add to that list (accounts, orders or whatever). Finally, in after_all, I log in as a superuser and clean up everything recorded in that list. Could something like that work for you?
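As a sketch, the environment.py side of that idea could look like this; the delete() call is a stand-in for whatever your real cleanup is:

    # environment.py
    def before_all(context):
        context.teardown_items = []  # steps append whatever they create

    def after_all(context):
        # log in as superuser here, then undo everything that was recorded
        for item in reversed(context.teardown_items):
            item.delete()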
1
0
0
I am running multiple scenarios and would like to incorporate some sort of dynamic scenario dispatcher which would allow me to have specific steps to execute after a test is done based on the scenario executed. When I was using PHPUnit, I used to be able to subclass the TestCase class and add my own setup and teardown methods. For behave, what I have been doing is adding an extra "Then" step at the end of the scenario which would be executed once the scenario finishes to clean up everything - clean up the configuration changes made by scenario, etc. But since every scenario is different, the configuration changes I need to make are specific to a scenario so I can't use the after_scenario hook that I have in my environment.py file. Any ideas on how to implement something similar?
Dynamic scenario dispatcher for Python Behave
0
0
0
697
38,365,948
2016-07-14T05:13:00.000
1
0
1
1
python-3.x,image-processing,anaconda,conda,ubuntu-16.04
48,430,119
4
false
0
0
You can try this conda install -c mlgill imutils
1
0
0
Hi, I am working on computer vision projects. I installed Python 3.5 using Anaconda on my laptop (Ubuntu 16.04 LTS). Can you please tell me how to install imutils using conda on Ubuntu 16.04 LTS?
Install imutils using conda in Ubuntu 16.04 LTS
0.049958
0
0
3,202
38,367,206
2016-07-14T06:44:00.000
0
0
0
0
python,openerp
38,368,546
1
false
1
0
The difference is that when you set a field's required argument to True in the Python .py file, it creates a NOT NULL constraint directly on the database. This means that, no matter what happens (provided data didn't already exist in the table), you can never insert data into that table without that field containing a value; if you try to do so directly from psql or Odoo's XML-RPC or JSON-RPC API, you'll get an SQL NOT NULL error, something like: ERROR: null value in column "xxx" violates not-null constraint. On the other hand, if you set a field to be required on the view (XML), then no constraint is set on the database. This means that the only restriction is the view, and you can bypass it and write to the database directly; if you're building an external web service, you can also use Odoo's ORM methods to write to the database directly. If you really want to make sure a column is not null and is required, then it's better to set that in the Python code itself instead of the view.
1
0
0
What is the difference between giving required field in python file and xml file in openerp? In xml file :field name="employee_id" required="1" In python file: 'employee_id' : fields.char('Employee Name',required=True),
required field difference in python file and xml file
0
0
1
227
38,369,858
2016-07-14T09:00:00.000
0
0
1
0
python
38,370,024
5
false
0
0
You could do out = st.replace('Product=Product', 'Product'), or more generally out = st.replace('Product=', '', 1), where the count argument 1 removes only the first occurrence. I tend to find that simple and readable.
1
1
0
I have a string like: st = 'Product=Product Name 25'. I want to lstrip it. Desired output: out = 'Product Name 25'. For this I am doing out = st.lstrip('Product='). Here I am getting out = 'Name 25', which I don't want: it removes every leading occurrence of those characters in my string, but I need to remove only the first occurrence. The desired output is: out = 'Product Name 25'.
Python: Remove a word from string using lstrip
0
0
0
1,271
38,373,407
2016-07-14T11:48:00.000
0
0
0
0
python,google-app-engine,full-text-search
38,381,484
1
false
1
0
If you just say "no", you'll search all fields in the document. However, if you prefix your term with a field name like "field2:no" you will only search the values of that field.
1
0
0
In the search document I have two fields which have values of Yes or No. field1 has a value of Yes or No; field2 has a value of Yes or No. From function foo() I want to search for a document which has the value "no", and it should not search in field1. How do I achieve this?
Appengine Search API: Exclude one document field from search
0
0
0
34
38,375,062
2016-07-14T13:04:00.000
0
0
0
0
python,scikit-learn,k-means
38,375,229
1
true
0
0
The way you're defining the orientation to us seems like you've got the right idea. If you use the farthest distance from the center as the denominator, then you'll get 0 as your minimum (cluster center) and 1 as your maximum (the farthest distance) and a linear distance in-between.
1
0
1
I've got some clustered classes, and a sample with a prediction. Now I want to know the "orientation" of the sample, which varies from 0 to 1, where 0 is right at the class center and 1 is right on the class border (radius). I guess it's going to be orientation = dist_from_center / class_radius. So I'm struggling to find the class radius. The first idea is to take the distance from the center to the most distant sample, but I would like to use something more 'academic' and less custom.
Sample orientation in the class, clustered by k-means in Python
1.2
0
0
41
38,376,478
2016-07-14T14:08:00.000
5
0
0
0
python,tensorflow,conv-neural-network
38,376,532
6
false
0
0
sigmoid(tensor) * 255 should do it.
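In TensorFlow terms, roughly (a sketch; the constant stands in for your tensor):

    import tensorflow as tf

    t = tf.constant([[-3.0, 0.0, 3.0]])  # stand-in for your tensor of values
    scaled = tf.sigmoid(t) * 255.0       # each element independently squashed into (0, 255)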
1
22
1
Sorry if I messed up the title, I didn't know how to phrase this. Anyways, I have a tensor of a set of values, but I want to make sure that every element in the tensor has a range from 0 - 255, (or 0 - 1 works too). However, I don't want to make all the values add up to 1 or 255 like softmax, I just want to down scale the values. Is there any way to do this? Thanks!
Changing the scale of a tensor in tensorflow
0.16514
0
0
25,931