Columns (dtype: min to max; string columns show length range):
Q_Id (int64): 337 to 49.3M
CreationDate (stringlengths): 23 to 23
Users Score (int64): -42 to 1.15k
Other (int64): 0 to 1
Python Basics and Environment (int64): 0 to 1
System Administration and DevOps (int64): 0 to 1
Tags (stringlengths): 6 to 105
A_Id (int64): 518 to 72.5M
AnswerCount (int64): 1 to 64
is_accepted (bool): 2 classes
Web Development (int64): 0 to 1
GUI and Desktop Applications (int64): 0 to 1
Answer (stringlengths): 6 to 11.6k
Available Count (int64): 1 to 31
Q_Score (int64): 0 to 6.79k
Data Science and Machine Learning (int64): 0 to 1
Question (stringlengths): 15 to 29k
Title (stringlengths): 11 to 150
Score (float64): -1 to 1.2
Database and SQL (int64): 0 to 1
Networking and APIs (int64): 0 to 1
ViewCount (int64): 8 to 6.81M
41,200,785
2016-12-17T16:40:00.000
-1
0
1
0
python,nlp,nltk,spacy
44,993,090
2
false
0
0
Go through the spacy 2.0 nightly build. It should have the solution you're looking for.
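A minimal sketch of what the question asks for, assuming spaCy 2+ with the small English model (en_core_web_sm) installed; nsubjpass is the dependency label for a passive subject:

```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("John was accused of crimes by David")

# passive subject(s) via the dependency parse
passive_subjects = [tok.text for tok in doc if tok.dep_ == "nsubjpass"]
# named entities via the NER component
entities = [(ent.text, ent.label_) for ent in doc.ents]

print(passive_subjects)   # expected: ['John']
print(entities)           # expected: something like [('John', 'PERSON'), ('David', 'PERSON')]
```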
1
2
0
Using Python spaCy, how do I extract an entity from a simple passive-voice sentence? In the following sentence, my intention is to extract "John" both as nsubjpass and via .ent_. sentence = "John was accused of crimes by David"
Extract entities from Simple passive voice sentence by Python Spacy
-0.099668
0
0
1,693
41,208,473
2016-12-18T12:48:00.000
0
0
0
0
python,django,cron,serial-port
41,216,333
1
false
1
0
To detect a system shutdown, compare the timestamp of the last reading taken to the current reading's timestamp. If they differ by more than 15 minutes, then something went wrong during operation.
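A minimal sketch of that check, assuming each reading is stored with a datetime timestamp (the names here are illustrative):

```python
from datetime import timedelta

EXPECTED_INTERVAL = timedelta(minutes=15)

def reading_gap_detected(previous_ts, current_ts, slack=timedelta(minutes=1)):
    """Return True if the gap between two readings exceeds the 15-minute cycle (plus slack)."""
    return (current_ts - previous_ts) > EXPECTED_INTERVAL + slack
```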
1
0
0
The application should read data from a serial port every 15 minutes (using the Modbus protocol) and put the readings into a database. The data can then be viewed and manipulated in a web interface. I'm using Windows (no server) with a RAID system to prevent data loss. My current setup looks like this: using pyserial and minimalmodbus for reading the data and putting it into a MySQL database; setting a cron job to run the script every 15 minutes (alternatives?); using Django in order to have a neat interface where one can view stats and download the data as a *.csv file. My questions are: Does this setup make sense concerning reliability; do you have any improvements? How can I detect if the system has experienced a shutdown and I lost some data?
Python reliable project setup: write data from a serial port to a database in a certain time interval
0
1
0
120
41,211,357
2016-12-18T18:10:00.000
0
1
0
0
python,testing,settings
41,211,379
2
false
1
0
I suppose the easiest way would be to move your settings.py to another folder for safekeeping and then make a new one, and edit that for debugging.
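An alternative sketch that avoids moving files around, assuming a hypothetical APP_ENV environment variable and a settings_test.py module next to settings.py (both names are illustrative, not from the question):

```python
# settings.py
import os

HOST = "prod-db.example.com"
DB_NAME = "appdb"

if os.environ.get("APP_ENV") == "test":
    # test runs export APP_ENV=test and get these overrides instead
    from settings_test import *  # noqa: F401,F403
```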
1
1
0
I have my project settings in settings.py file. There are database names, user names, host names etc. defined there. In project files I do import settings and then use the constants where needed like settings.HOST. For unit testing I would like to use different settings. How should I override the settings? I am not using django.
Override settings.py for testing
0
0
0
431
41,212,272
2016-12-18T19:55:00.000
4
0
0
1
python,etl,google-cloud-dataflow
41,229,251
1
true
0
0
Yes, this can absolutely be done. Right now, it's a little klutzy at the beginning, but upcoming work on a new primitive called SplittableDoFn should make this pattern much easier in the future. Start by using Create to make a dummy PCollection with a single element. Process that PCollection with a DoFn that downloads the file, reads out the subfiles, and emits those. [Optional] At this point, you'll likely want work to proceed in parallel. To allow the system to easily parallelize, you'll want to do a semantically unnecessary GroupByKey followed by a ParDo to 'undo' the grouping. This materializes these filenames into temporary storage, allowing the system to have different workers process each element. Process each subfile by reading its contents and emit into PCollections. If you want different file contents to be processed differently, use Partition to sort them into different PCollections. Do the relevant processing.
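A rough sketch of that pipeline shape with the Apache Beam Python SDK; read_index and read_subfile are hypothetical helpers standing in for the actual XML download/parse logic, and the element name "link" is an assumption about the index format:

```python
import urllib.request
import xml.etree.ElementTree as ET

import apache_beam as beam

def read_index(index_url):
    # download the index XML and yield the sub-file URLs it references
    tree = ET.parse(urllib.request.urlopen(index_url))
    for link in tree.iter("link"):
        yield link.text

def read_subfile(subfile_url):
    # download one sub-file and yield its raw content (parse further as needed)
    yield urllib.request.urlopen(subfile_url).read()

with beam.Pipeline() as p:
    (p
     | "Seed" >> beam.Create(["http://example.com/index.xml"])
     | "ListSubfiles" >> beam.FlatMap(read_index)
     # semantically unnecessary group/ungroup so workers can pick up files in parallel
     | "Key" >> beam.Map(lambda url: (url, None))
     | "Group" >> beam.GroupByKey()
     | "Ungroup" >> beam.Map(lambda kv: kv[0])
     | "ReadSubfiles" >> beam.FlatMap(read_subfile))
```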
1
0
0
So I am having a bit of an issue with the concepts behind Dataflow, especially regarding the way the pipelines are supposed to be structured. I am trying to consume an external API that delivers an index XML file with links to separate XML files. Once I have the contents of all the XML files I need to split those up into separate PCollections so additional PTransforms can be done. It is hard to wrap my head around the fact that the first XML file needs to be downloaded and read before the product XMLs can be downloaded and read, as the documentation states that a pipeline starts with a Source and ends with a Sink. So my questions are: Is Dataflow even the right tool for this kind of task? Is a custom Source meant to incorporate this whole process, or is it supposed to be done in separate steps/pipelines? Is it OK to handle this in a pipeline and let another pipeline read the files? How would a high-level overview of this process look? Things to note: I am using the Python SDK for this, but that probably isn't really relevant as this is more an architectural problem.
Google Cloud Dataflow consume external source
1.2
0
1
614
41,215,036
2016-12-19T02:32:00.000
2
1
0
0
python,apache,flask,mod-wsgi
41,215,183
1
true
1
0
I would not run the web server as root for security reasons. Instead, I suggest: add the web user to /etc/sudoers with no password (ideally, only allow the specific commands you want to run as root), then run the command with sudo [command]. You mention deployment; if you are packaging this into an RPM, I would put the sudo definitions in /etc/sudoers.d/yourpackage. Another option would be to split your app and use some sort of messaging system, either by having rows in a database table or by using a messaging server such as RabbitMQ (there are other servers, but I find it very easy to set up). A separate process running as root would do the actual turning of the lights on and off. Your frontend would simply send a message like "lights off", and the other process (which could be running as root) would get the message when needed. The advantage of this approach is that the web process never has any root privilege, and even if it has a hole, the damage is limited.
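A minimal sketch of the sudo route; /usr/local/bin/lights and the sudoers entry are hypothetical stand-ins for whatever script actually drives the LED strip:

```python
# Assumes a sudoers entry such as:
#   www-data ALL=(root) NOPASSWD: /usr/local/bin/lights
# so the web user may run only that one command as root.
import subprocess
from flask import Flask

app = Flask(__name__)

@app.route("/lights/<state>")
def lights(state):
    if state not in ("on", "off"):
        return "bad state", 400
    subprocess.check_call(["sudo", "/usr/local/bin/lights", state])
    return "ok"
```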
1
3
0
I host a Flask web app on a Raspberry Pi that has controls for my LED light strip. It all works great when I run the server with Python as the root user, but I am having difficulty deploying it with Apache mod_wsgi. I want to use https, so deploying it seems to be necessary, but Apache doesn't seem to allow running servers with root privileges. Root is necessary to control the lights through a library that is imported in the Flask server. Is there any way to deploy a Flask server with root privileges? If not, is it possible to use https (from letsencrypt.org) without deploying? Are there any other ways to get around this problem?
Deploying a Flask app with root privileges
1.2
0
0
4,412
41,215,169
2016-12-19T02:55:00.000
1
0
0
0
python,machine-learning,scikit-learn,cross-validation,grid-search
41,252,942
1
false
0
0
Try setting the random seed if you want to get the same result each time.
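A minimal sketch of pinning the randomness in the setup described in the question (loss='log' matches scikit-learn of that era; newer releases call it 'log_loss'); both the SGD shuffling and the fold assignment get a fixed random_state:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import GridSearchCV, KFold

sgd_ela = SGDClassifier(loss="log", penalty="elasticnet", fit_intercept=True,
                        random_state=42)                      # fixes the per-epoch shuffling
tune_para = [{"l1_ratio": np.linspace(0.1, 1, 10).tolist(),
              "alpha": [1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 1]}]
cv = KFold(n_splits=8, shuffle=True, random_state=42)         # fixes the fold assignment
search = GridSearchCV(estimator=sgd_ela, param_grid=tune_para, cv=cv)
```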
1
1
1
I am performing parameter selection using GridSearchCv (sklearn package in python) where the model is an Elastic Net with a Logistic loss (i.e a logistic regression with L1- and L2- norm regularization penalties). I am using SGDClassifier to implement this model. There are two parameters I am interested in searching the optimal values for: alpha (the constant that multiplies the regularization term) and l1_ratio (the Elastic Net mixing parameter). My data set has ~300,000 rows. I initialize the model as follows: sgd_ela = SGDClassifier(alpha=0.00001, fit_intercept=True, l1_ratio=0.1,loss='log', penalty='elasticnet') and the searching fxn. as follows: GridSearchCV(estimator=sgd_ela, cv=8, param_grid=tune_para), with tuning parameters: tune_para = [{'l1_ratio': np.linspace(0.1,1,10).tolist(),'alpha':[0.00001, 0.0001, 0.001, 0.01, 0.1, 1]}]. I get the best_params (of alpha and l1_ratio) upon running the code. However, in repeated runs, I do not get the same set of best parameters. I am interested to know why is this the case, and if possible, how can I overcome it?
Why does GridSearchCV give different optimums on repeated runs?
0.197375
0
0
907
41,217,464
2016-12-19T07:11:00.000
-1
1
0
0
python,c++,linux,ipc
41,218,658
3
false
0
0
You can bridge the two applications' memory with tools like SWIG; you can also use a named pipe.
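For the shared-memory route mentioned in the question, a minimal Python-side sketch, assuming the C++ process writes the raw 240x320x3 frame as bytes into a POSIX shared-memory file at /dev/shm/frame (path and synchronization are illustrative):

```python
import mmap
import numpy as np

FRAME_BYTES = 240 * 320 * 3

with open("/dev/shm/frame", "r+b") as f:
    mm = mmap.mmap(f.fileno(), FRAME_BYTES)
    # zero-copy view of the shared buffer as an image array
    frame = np.frombuffer(mm, dtype=np.uint8, count=FRAME_BYTES).reshape(240, 320, 3)
    # in a real setup, coordinate reads/writes with the C++ side (e.g. a semaphore)
    print(frame.mean())
```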
1
0
0
What is a fast enough way (about 40~50Hz) to send large data (RGB image data: 320*240*3) from a C++ process to a Python process (and a small amount of float data from Python to C++) on Linux? Note: the two processes run on the same PC. I have tried UDP and shared memory. For UDP: the message to be sent is larger than the UDP message constraint (65535), so directly using sendto() gives the error: Message too long. I also doubt whether it is a fast way (about 40~50Hz is ok). For shared memory: shared memory seems to be a fast way to send an image from C++ to C++, but since there are no pointers in Python, I have not found a way to read and write data in shared memory. So is there a fast way to do the IPC described above? Or maybe a good way to read and write unsigned char and float values to shared memory in Python?
How to send LARGE data from c++ to python in a fast way on Linux?
-0.066568
0
0
899
41,219,281
2016-12-19T09:19:00.000
4
0
0
0
python,django,django-views
41,219,364
1
false
1
0
Django doesn't do anything at all. It is entirely up to the server, which has already determined (according to its configuration) whether to run Django in multiple processes and/or threads, and so distributes incoming requests among those.
1
1
0
Let's say two users access the same URL, leading to two requests being sent to Django views. The question is, how does Django deal with these two requests? Are they handled in two different threads simultaneously, or is one handled through the full request-middleware-response life cycle before the other is handled?
Django view execution order for same url pattern requests?
0.664037
0
0
64
41,219,491
2016-12-19T09:31:00.000
1
1
0
0
python,git,twitter
41,219,615
2
false
0
0
Yes, anyone can see the old files in the version history in GitHub's free version. If you want to make your project secure, you have to pay for a private repository on GitHub. If you don't want to pay, follow what @Stijin suggested.
1
0
0
I'm working on a project mainly for a bit of fun. I set up a Twitter account and wrote a Python script to write tweets. Initially I hard-coded the Twitter credentials for my app into my script (tweet.py). Now I want to share the project, so I have removed my app's credentials from tweet.py and added them to a config file, and I have added the config file to .gitignore. My question is: if someone forks my project, can they somehow check out an old version of tweet.py which has the credentials? If so, what steps can I take to cover myself in this case?
Git security - private information in previous commits
0.099668
0
1
36
41,221,779
2016-12-19T11:39:00.000
-1
0
0
0
python,pelican
41,222,267
4
false
0
0
This is probably not the answer you are looking for, but if you are already customizing the CSS, think about using CSS to hide the section.
2
1
0
When generating content using Pelican, everything is Ok except that I see in the footer "Proudly powered by Pelican ..." I want to get rid of it. I know I can remove it from the generated files manually, but that is tedious. Is there a way to prevent the generation of the above phrase by asking Pelican to do that for me? Some magic Pelican command or settings, maybe?
how to automatically remove "Powered by ..." in Pelican CMS?
-0.049958
0
0
1,230
41,221,779
2016-12-19T11:39:00.000
3
0
0
0
python,pelican
47,449,423
4
false
0
0
In your theme template there will be a line like {% extends "!simple/base.html" %}. This base.html is used as the foundation for creating the theme. The file is available in %PYTHON%\Lib\site-packages\pelican\themes\simple\templates. You can edit this file to remove the "Powered by..." text.
2
1
0
When generating content using Pelican, everything is Ok except that I see in the footer "Proudly powered by Pelican ..." I want to get rid of it. I know I can remove it from the generated files manually, but that is tedious. Is there a way to prevent the generation of the above phrase by asking Pelican to do that for me? Some magic Pelican command or settings, maybe?
how to automatically remove "Powered by ..." in Pelican CMS?
0.148885
0
0
1,230
41,227,982
2016-12-19T17:31:00.000
0
1
0
1
macos,python-2.7,amazon-elastic-beanstalk
41,230,657
1
true
0
0
The tilde character wasn't being expanded within the double-quoted string. If you had tried to execute "~/Library/Python/2.7/bin/eb" --version in your second example it wouldn't have worked either. You could have set your path using something like export PATH="/Users/peter/Library/Python/2.7/bin:$PATH", or potentially export PATH=~/"Library/Python/2.7/bin:$PATH" (notice the tilde is outside the double-quotes.) I'd prefer the former, however.
1
0
0
I am trying to add the Python 2.7 bin folder to my path in order to run elastic beanstalk. Here is some output from my Terminal: ➜ ~ echo $PATH ~/Library/Python/2.7/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin ➜ ~ ~/Library/Python/2.7/bin/eb --version EB CLI 3.8.9 (Python 2.7.1) ➜ ~ eb --version zsh: command not found: eb And here is my export statement in .zshrc: export PATH="/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin" export PATH="~/Library/Python/2.7/bin:$PATH" Can anyone tell me what's wrong? EB seems to be installed fine, and the path variable seems to point to it.
I am having issues adding Python folder to my path
1.2
0
0
106
41,228,983
2016-12-19T18:42:00.000
0
0
1
0
python,ubuntu,tensorflow,pip,ipython
41,230,349
2
false
0
0
Each major version of python has its own site-packages directory. It seems that you have both python 3.4 and 3.5 and you have jupyter installed in 3.5 and tensorflow in 3.4. The easy solution is to install tensorflow in 3.5 as well. This should allow you to use it with the 3.5 notebook kernel. You could attempt to add 3.4 as a kernel, but I am not sure how to do that.
2
0
1
I have been running in circles trying to get tensorflow to work in a jupyter notebook. I installed it via pip on ubuntu and also tried a conda environment (but unless I'm mistaken, getting that to work with ipython is beyond my ability). Tensorflow works fine in python3.4, but not python 3.5, which is used when I load ipython. I'm not sure if this question makes any sense, but can I make it so that ipython uses only python 3.4? The reason I need to use ipython instead of going through the python shell is that I am trying to use the kadenzie tutorial. Thank you. Edit: I'm not sure how applicable this is to other people with my problem, but I solved it by changing my conda python version (conda install python=3.4.3), uninstalling ipython, and then reinstalling it.
can import tensorflow in python 3.4 but not in ipython notebook
0
0
0
255
41,228,983
2016-12-19T18:42:00.000
0
0
1
0
python,ubuntu,tensorflow,pip,ipython
50,797,957
2
false
0
0
The best way to set up TensorFlow with Jupyter: install Anaconda, create an environment named "tensorflow", and activate it from the command prompt with activate tensorflow. Then type conda install ipykernel, and when it is installed paste the following command: python -m ipykernel install --user --name myenv --display-name "Python[Tensorflow]". Then run jupyter notebook from the command prompt; when you go to create a new notebook you will see two kernel types, and you just select the TensorFlow one.
2
0
1
I have been running in circles trying to get tensorflow to work in a jupyter notebook. I installed it via pip on ubuntu and also tried a conda environment (but unless I'm mistaken, getting that to work with ipython is beyond my ability). Tensorflow works fine in python3.4, but not python 3.5, which is used when I load ipython. I'm not sure if this question makes any sense, but can I make it so that ipython uses only python 3.4? The reason I need to use ipython instead of going through the python shell is that I am trying to use the kadenzie tutorial. Thank you. Edit: I'm not sure how applicable this is to other people with my problem, but I solved it by changing my conda python version (conda install python=3.4.3), uninstalling ipython, and then reinstalling it.
can import tensorflow in python 3.4 but not in ipython notebook
0
0
0
255
41,233,635
2016-12-20T01:33:00.000
2
0
1
0
python,parallel-processing,tensorflow,distributed-computing
60,222,616
3
false
0
0
Tensorflow 2.0 Compatible Answer: If we want to execute in Graph Mode of Tensorflow Version 2.0, the function in which we can configure inter_op_parallelism_threads and intra_op_parallelism_threads is tf.compat.v1.ConfigProto.
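A minimal sketch of setting both options through tf.compat.v1.ConfigProto; the thread counts are illustrative:

```python
import tensorflow as tf

config = tf.compat.v1.ConfigProto(
    inter_op_parallelism_threads=2,   # threads used to run independent ops concurrently
    intra_op_parallelism_threads=4)   # threads used inside a single op (e.g. a large matmul)

with tf.compat.v1.Session(config=config) as sess:
    pass  # build and run the graph here
```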
1
86
1
Can somebody please explain the following TensorFlow terms inter_op_parallelism_threads intra_op_parallelism_threads or, please, provide links to the right source of explanation. I have conducted a few tests by changing the parameters, but the results have not been consistent to arrive at a conclusion.
Meaning of inter_op_parallelism_threads and intra_op_parallelism_threads
0.132549
0
0
54,304
41,234,380
2016-12-20T03:22:00.000
0
0
0
0
python,opencv,computer-vision,simplecv
41,242,570
1
false
0
0
What I am trying to say is: consider that you want to perform blob detection on a certain region of interest (ROI) of an image, but you also want to keep the rest of the image. ZdaR's comment helps if you want that ROI area alone. My suggestion would be: create a binary mask of the contour region you want to perform blob detection on; apply the mask to the image, so you obtain only the ROI; now perform blob detection on it; finally, use the same binary mask to place the ROI for which blobs have been detected back onto the original image.
1
0
0
I'm working on a project where I want to run simple blob detection, but only on areas inside a contour. I know contours can return bounding rectangles or circles, but I don't see how to limit a simple blob detection to the area inside that contour. Any thoughts? I'm stuck.
How can I force simpleblobdetection to only search inside a contour area?
0
0
0
88
41,235,051
2016-12-20T04:46:00.000
0
0
0
0
python,mysql,apache-spark,pyspark
41,235,443
1
true
0
0
I found a solution: add &sessionVariables=sql_mode='NO_UNSIGNED_SUBTRACTION' to the JDBC URL.
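A minimal sketch of where that parameter goes; host, credentials, and table name are placeholders:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

url = ("jdbc:mysql://dbhost:3306/mydb"
       "?user=me&password=secret"
       "&sessionVariables=sql_mode='NO_UNSIGNED_SUBTRACTION'")

df = (spark.read.format("jdbc")
      .option("url", url)
      .option("dbtable", "my_table")
      .load())
```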
1
0
0
I want to change mysql's sql_mode to 'NO_UNSIGNED_SUBTRACTION' , when using pyspark. Is there any way?
Is there any way to set mysql's sql_mode in pyspark?
1.2
1
0
156
41,235,497
2016-12-20T05:31:00.000
0
0
1
0
python,shell
54,035,064
1
false
0
0
"I just want to print the last line." To reprint the same line repeatedly, just override the default line ending \n by passing the keyword argument end='\r' to print().
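A minimal sketch of cycling colours on a single line this way; the ANSI codes follow the question's \033[...m pattern and the loop length is arbitrary:

```python
import itertools
import sys
import time

colors = ["\033[1;31;40m", "\033[1;32;40m", "\033[1;33;40m"]  # red, green, yellow on black

for color in itertools.islice(itertools.cycle(colors), 30):
    print(color + "pulsating crystal" + "\033[0m", end="\r")  # \r rewinds to line start
    sys.stdout.flush()
    time.sleep(0.2)
print()
```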
1
1
0
I am trying to build a game. The game will have an item called a "pulsating crystal" (I am using \033[1;31;40m] to change the items colour), I want to it to be rainbow, so it keeps changing colours, without deleting everything else in the terminal. I used print(\033c) to clear the terminal but I just want to print the last line. I am sorry if the question is unclear or repetitive, or has another answer but I couldn't find another clear answer for my problem. PS I use Linux.
Removing printed lines in Python 3
0
0
0
61
41,240,754
2016-12-20T11:01:00.000
0
0
0
0
windows,python-2.7,import,permissions,administrator
41,244,020
3
false
0
0
You should either use virtualenv, as stated before, or set the proper permissions on the site-packages folder. It should be in C:\Python27\Lib.
1
2
0
I'm struggling with some strange issues in Python 2.7. I wrote a very long tool where I import different modules, which I had to install first using pip. The tool is to be shared within the company, where different users have different rights on their specific machines. The problem occurred when another user logged into my machine (I'm having administrator rights there) and tried to use the tool. He was unable to run it, because specific modules could not be imported because of his status as a "non-admin". The error message is simply "No module named XY". When we looked into the file system, we found that we were not able to look into the folder where the module had been installed, simply because the access was denied by the system. We also got this error message when trying to run pip from the cmd; it prints "Access denied" and won't do anything. How is it possible, that some modules can be accessed by anyone, while others can't? And how can I get around this problem? Specifically, I'm talking about sqlalchemy and pyodbc. Thanks a lot in advance. EDIT 1: Oh, and we're talking about Windows here, not Linux... EDIT 2: Due to company policy it is not possible to set administrator permissions to all users. I tried, as suggested, but it didn't work and I learned that it's not possible within the company.
Why can I import certain modules in Python only with administrator rights?
0
0
0
4,340
41,243,944
2016-12-20T13:51:00.000
1
0
1
0
python,pyc
41,243,973
2
false
0
0
.pyc files contain byte code, which is what the Python interpreter compiles the source to. This code is then executed by Python's virtual machine. Python's documentation explains the definition like this: Python is an interpreted language, as opposed to a compiled one, though the distinction can be blurry because of the presence of the bytecode compiler. This means that source files can be run directly without explicitly creating an executable which is then run The .pyc files are created (and possibly overwritten) only when that python file is imported by some other script. If the import is called, Python checks to see if the .pyc file's internal timestamp matches the corresponding .py file. If it does, it loads the .pyc; if it does not or if the .pyc does not yet exist, Python compiles the .py file into a .pyc and loads it.
2
0
0
I have a test case for an algorithm, which gives a different result after the first execution. The test imports the algorithm and the test data from two files. The first execution returns the correct results and creates a .pyc file for the test data file. The second and all following executions return incorrect results. When I delete the test data's .pyc file the next execution returns the correct results again (and creates a new .pyc file again). When I move the test data into the same file as the test case itself (i.e. avoiding the creation of a .pyc file) the test always passes. I cannot apply this fix to my full program. Is this a known issue, is there a fix?
Why does the existence of .pyc file change the result of my code?
0.099668
0
0
1,456
41,243,944
2016-12-20T13:51:00.000
0
0
1
0
python,pyc
49,460,261
2
false
0
0
One thing I found that changes is the value of __file__ (.pyc vs .py), which tripped me up when writing a utility that works with stack traces.
2
0
0
I have a test case for an algorithm, which gives a different result after the first execution. The test imports the algorithm and the test data from two files. The first execution returns the correct results and creates a .pyc file for the test data file. The second and all following executions return incorrect results. When I delete the test data's .pyc file the next execution returns the correct results again (and creates a new .pyc file again). When I move the test data into the same file as the test case itself (i.e. avoiding the creation of a .pyc file) the test always passes. I cannot apply this fix to my full program. Is this a known issue, is there a fix?
Why does the existence of .pyc file change the result of my code?
0
0
0
1,456
41,246,248
2016-12-20T15:50:00.000
1
0
0
0
python,django,module,connection,access-denied
41,260,630
1
true
1
0
I found an answer for the first part of my question: I just used the login_required import. Just before the homepage function in /helpdesk/public.py and the index function in /helpdesk/kb.py, I put @login_required(login_url='/helpdesk/login/?next=/helpdesk/') and it worked! Now I am trying to find the answer to the second part (redirection).
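A minimal sketch of that decorator usage; homepage stands in for the view in /helpdesk/public.py and the response body is a placeholder:

```python
from django.contrib.auth.decorators import login_required
from django.http import HttpResponse

@login_required(login_url="/helpdesk/login/?next=/helpdesk/")
def homepage(request):
    return HttpResponse("helpdesk home")
```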
1
0
0
I'm working on the Helpdesk module for Django. I want people who try to access 192.168.x.xxx:8000/helpdesk/ to be redirected to the login page: 192.168.x.xxx:8000/helpdesk/login/?next=/helpdesk/ I also want people who try to access a nonexistent page to be redirected to the homepage if they are logged in, or to the login page if not.
Redirects in Django
1.2
0
0
46
41,249,703
2016-12-20T19:18:00.000
-2
0
1
0
python,c++,cython
41,251,029
1
false
0
1
Thank you Maverick for your question. Cython is not a way of using C++ classes from Python. Cython is simply Python with C data types, meaning you can use C data types in your Python program, which in turn makes your Python code execute faster; you can also build extensions in Python using Cython. Now, coming to the problem in the question: you can use the Boost library for your task. With Boost you just need to write a wrapper to use your C++ object in Python, or vice versa. You then compile it to a shared library, which can be done by tools like jam, Cython, etc. I am not going into the details of how you would do it, as you can find many tutorials for the same.
1
0
0
I have a C++ project that invokes the same function in each python script. But each script does very different things that need access to the C++ project's internal classes. So I need a python wrapper so I pass a C++ object to the python script and I need a way to run the python script's function from the C++ project too. From what I understand with Cython and Shed Skin they are utilities to make C++ classes into a python class but not necessarily share run time objects back and forth between the languages. What can I do?
Compile python scripts with C++ project
-0.379949
0
0
115
41,250,358
2016-12-20T20:00:00.000
1
0
1
0
python
41,250,667
2
true
0
0
If you used the newer syntax supported by 2.7, e.g. around exceptions, and/or, better yet, worked with new features imported from __future__, you'll have much easier time converting your code to Python 3 (up to no changes at all). I'd suggest to follow this path first, for it can be trod gradually, without an abrupt jump to Python 3. I suppose Python processes with different versions can interoperate, because object pickling format is compatible, and you can explicitly use a specific pickling protocol version on both sides to ensure that. I don't think multiprocessing packages on either side would be too useful, though. Consider using e.g. ZeroMQ as a more general solution.
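A minimal standard-library sketch of such a hand-off from the 3.5 side; legacy_worker.py is a hypothetical Python 2.7 script that unpickles stdin and writes a pickled result to stdout:

```python
import pickle
import subprocess

payload = pickle.dumps({"numbers": [1, 2, 3]}, protocol=2)   # protocol 2 is readable by 2.7

proc = subprocess.run(["python2.7", "legacy_worker.py"],
                      input=payload, stdout=subprocess.PIPE, check=True)
result = pickle.loads(proc.stdout)
```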
1
1
0
I'd like to upgrade to python 3.5, but I use legacy python 2.7 packages. Is it easy to run legacy packages in python 3.5? I have been under the impression that this isn't easy, but I did a few searches to see if I'm wrong and didn't come up with much. I would expect there to be a multiprocessing package that allows standardized hand-offs between 3.5 code and 2.7 packages, allowing them to run independently under their own environments, but being somewhat seamless to the developer. I'm not talking about converting my own code to 3.5, I'm talking about libraries that I use that won't be updated for or by me.
Can we run legacy python 2.7 code under python 3.5?
1.2
0
0
526
41,251,298
2016-12-20T21:09:00.000
1
0
1
0
python,multithreading,cpu-usage
41,251,390
2
false
0
0
For multi-core applications you should use the multiprocessing module instead of threading. Python has some (well documented) performance issues with threads. Search google for Global Interpreter Lock for more information about Python and thread performance
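A minimal sketch of spreading CPU-bound work over all cores with multiprocessing; heavy() is a placeholder for the real computation:

```python
from multiprocessing import Pool

def heavy(chunk):
    # stand-in for the CPU-bound work done on one slice of the data
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    chunks = [range(i, i + 1000000) for i in range(0, 8000000, 1000000)]
    with Pool() as pool:                 # one worker process per core by default
        results = pool.map(heavy, chunks)
    print(sum(results))
```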
1
0
0
I'm running a computation heavy program in Python that takes about 10 minutes to run on my system. When I look at CPU usage, one of eight cores is hovering at about 70%, a second is at about 20%, and the rest are close to 0%.Is there any way I can force the program into 100% usage of a single core? edit: I recognize that utilizing all 8 cores isn't a great option, but is there a way to force the one core into 100% usage?
Controlling CPU Usage and Threading In Python
0.099668
0
0
3,451
41,252,907
2016-12-20T23:33:00.000
0
0
0
0
python,excel,openpyxl
43,180,932
1
false
0
0
As far as I know, openpyxl does not allow you to access only one cell, or a limited number of cells for that matter. In order to access any information in a given worksheet, openpyxl will create it in memory. This is the reason you will be unable to add a sheet without opening the entire document in memory and overwriting it at the end.
1
0
0
I want to generate Excel spreadsheets with Python. The first few tabs are exactly the same and all refer to the last page, so how can I insert the last page with openpyxl? Because the first few tabs are too complex, load_workbook always fails. Is there any other way to insert tabs without loading?
How to use openpyxl to insert one sheet to a template?
0
1
0
198
41,253,187
2016-12-21T00:05:00.000
1
0
0
1
google-app-engine,google-app-engine-python
41,268,471
2
false
1
0
If you are using the standard environment, the answer is no, you can't really inspect or see the code directly. You've mentioned looking at it via Stackdriver Debugger, which is one way to see a representation of it. It sounds like, if you have a reason to be looking at the code, someone in your organization should grant you the appropriate level of access to your source code management system. I'd imagine if your deployment practices are mature, they'd likely branch the code to map to your deployed versions and you could inspect it in detail locally.
1
5
0
(Background: I am new to Google App Engine, familiar with other cloud providers' services) I am looking for access/view similar to shell access to production node. With a Python/Django based Google App Engine App, I would like to view the code in production. One view I could find is the StackDriver 'Debug' view. However, apparently the code shown in the Debug view doesn't reflect the updated production code (based on what is showing on the production site, for example, the text on the home page different). Does Google App Engine allow me to ssh into the VM where the application/code is running? If not, how can check the code that's running in production? Thanks.
Accessing Google App Engine Python App code in production
0.099668
0
0
1,805
41,253,846
2016-12-21T01:36:00.000
2
0
1
0
python,multithreading
41,253,895
2
false
0
0
The global interpreter lock (in CPython) is a measure put in place so that only one active Python thread executes at the same time. As frustrating as it can be, this is a good thing because it avoids interpreter corruption. When a blocking operation is encountered, the current thread yields the lock and thus allows other threads to execute while the first thread is blocked. However, with CPU-bound threads (when purely Python calls are made), only one thread executes no matter how many threads are running. It is interesting to note that in Python 3.2, code was added to mitigate the effects of the global interpreter lock, and that other implementations of Python do not have a global interpreter lock at all. Please note this is a limitation of the Python code; the underlying libraries may still be processing data. Also, in many cases, when it comes to I/O, a useful way to avoid blocking is to use polling and eventing: polling involves checking whether the operation would block and testing whether there is data (for example, if you are trying to get data from a socket, you would use select() and poll()); eventing involves using callbacks in such a way that your thread is triggered when a relevant I/O operation has just occurred.
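A minimal sketch of two blocking calls overlapping in threads, because the GIL is released while each call waits on the network; the URLs are placeholders:

```python
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URLS = ["https://example.com", "https://example.org"]

def fetch(url):
    with urlopen(url, timeout=10) as resp:   # blocking I/O; the GIL is released while waiting
        return url, len(resp.read())

with ThreadPoolExecutor(max_workers=2) as pool:
    for url, size in pool.map(fetch, URLS):
        print(url, size)
```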
2
0
0
Newbie on Python and multiple threading. I read some articles on what is blocking and non-blocking I/O, and the main difference seems to be the case that blocking I/O only allows tasks to be executed sequentially, while non-blocking I/O allows multiple tasks to be executed concurrently. If that's the case, how blocking I/O operations (some Python standard built-in functions) can do multiple threading?
Blocking I/O in Python
0.197375
0
0
1,343
41,253,846
2016-12-21T01:36:00.000
3
0
1
0
python,multithreading
41,253,876
2
false
0
0
Blocking I/O blocks the thread it's running in, not the whole process. (at least in this context, and on a standard PC) Multithreading is not affected by definition - only current thread gets blocked.
2
0
0
Newbie on Python and multiple threading. I read some articles on what is blocking and non-blocking I/O, and the main difference seems to be the case that blocking I/O only allows tasks to be executed sequentially, while non-blocking I/O allows multiple tasks to be executed concurrently. If that's the case, how blocking I/O operations (some Python standard built-in functions) can do multiple threading?
Blocking I/O in Python
0.291313
0
0
1,343
41,254,715
2016-12-21T03:40:00.000
1
0
1
0
python,rabbitmq,mqtt,bottle,hivemq
41,259,154
1
true
0
0
I'm not familiar with bottle, but from a quick look at the docs it doesn't look like there is any way to start its event loop apart from the run() function. Paho provides loop_start(), which kicks off its own background thread to run the MQTT network event loop. Given there looks to be no way to run the bottle loop manually, I would suggest calling loop_start() before run() and letting the app run on 2 separate threads, as there is no way to combine them and you probably wouldn't want to anyway. The only thing to be careful of is if MQTT subscriptions update data that the REST service is sending out, but as long as you are not streaming large volumes of data that is unlikely to be an issue.
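A minimal sketch of that two-thread arrangement with paho-mqtt and bottle; the broker address and topic are placeholders:

```python
import bottle
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.on_message = lambda c, userdata, msg: print(msg.topic, msg.payload)
client.connect("localhost", 1883)      # assumed broker
client.subscribe("sensors/#")          # assumed topic
client.loop_start()                    # MQTT network loop on a background thread

@bottle.route("/status")
def status():
    return {"ok": True}

bottle.run(host="0.0.0.0", port=8080)  # bottle's own blocking loop on the main thread
```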
1
1
0
I am a noob to web and mqtt programming, I am working on a python application that uses mqtt (via hivemq or rabbitmq broker) and also needs to implement http rest api for clients. I realized using python bottle framework is pretty easy to provide a simple http server, however both bottle and mqtt have their event loop, how do I combine these 2 event loop, I want to have a single threaded app to avoid complexity.
Using http and mqtt together in a single threaded python app
1.2
0
1
544
41,254,869
2016-12-21T04:00:00.000
6
0
0
0
python,django
41,256,129
2
true
1
0
Better to reset the console frequently. This is not a big problem; it occurs when multiple terminals are not reset for long durations.
1
9
0
I want to change my models in Django. When I execute python manage.py makemigrations, it asks a question: Did you rename the demoapp.Myblog model to Blog? [y/N] y^M^M^M That is, I input y and press Enter, but it adds ^M to the line. I've looked around but haven't found anything that helps. Can anybody tell me how to fix it?
django press Enter and it show ^M
1.2
0
0
786
41,256,392
2016-12-21T06:22:00.000
0
1
0
0
python,multithreading
48,301,814
2
false
0
0
Besides scheduling, I recommend using a preemptive OS, which is the only way to assure a fixed timebase. One free option is Wind River Linux. I would be grateful if someone could point out other good free alternatives.
1
0
0
Currently I have written a PID controller (with pulse width modification) which sends the current temperature to a server each duty cycle. However, this is written in series and so there is a brief delay between each cycle which makes the temperature control less effective. Furthermore it is hard to terminate the temperature controller externally once the program is called. Is there an alternative way where I can run the PID controller and the server communication separately to reduce this delay? I could always write the data to the csv file and have another script read from said file. However it doesn't strike me as the most elegant or effective solution.
How to deal with delays when using Python for control systems
0
0
0
57
41,257,336
2016-12-21T07:29:00.000
11
0
0
0
python,python-2.7,opencv,opencv3.1,connected-components
41,257,567
1
false
0
0
Let us analyze it. Assertion failed (L.channels() == 1 && I.channels() == 1): the images you are passing to the function should have 1 channel (gray, not color). __extractPlantArea(plant_img): that happened in your code exactly at the function called __extractPlantArea. cv2.connectedComponentsWithStats: while you were calling the OpenCV function connectedComponentsWithStats. Conclusion: do not pass a color (BGR) image to connectedComponentsWithStats.
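A minimal sketch of the fix inside __extractPlantArea; plant.jpg stands in for the image actually loaded, and Otsu thresholding is one illustrative way to get the expected single-channel input:

```python
import cv2

plant_img = cv2.imread("plant.jpg")                              # BGR, 3 channels
plant_gray = cv2.cvtColor(plant_img, cv2.COLOR_BGR2GRAY)         # 1 channel
_, plant_bin = cv2.threshold(plant_gray, 0, 255,
                             cv2.THRESH_BINARY + cv2.THRESH_OTSU)
output = cv2.connectedComponentsWithStats(plant_bin, 4, cv2.CV_32S)
```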
1
1
1
I got the following error in OpenCV (python) and have googled a lot but have not been able to resolve. I would be grateful if anyone could provide me with some clue. OpenCV Error: Assertion failed (L.channels() == 1 && I.channels() == 1) in connectedComponents_sub1, file /home/snoopy/opencv- 3.1.0/modules/imgproc/src/connectedcomponents.cpp, line 341 Traceback (most recent call last): File "test.py", line 30, in plant = analyzeplant.analyzeSideView(plant) File "/home/snoopy/Desktop/Leaf-201612/my-work- editing/ripps/src/analyzePlant.py", line 229, in analyzeSideView plant_img = self.__extractPlantArea(plant_img) File "/home/snoopy/Desktop/Leaf-201612/my-work- editing/ripps/src/analyzePlant.py", line 16, in __extractPlantArea output = cv2.connectedComponentsWithStats(plant, 4, cv2.CV_32S) cv2.error: /home/snoopy/opencv- 3.1.0/modules/imgproc/src/connectedcomponents.cpp:341: error: (-215) > L.channels() == 1 && I.channels() == 1 in function connectedComponents_sub1
OpenCV Error: Assertion failed (L.channels() == 1 && I.channels() == 1) in connectedComponents_sub1
1
0
0
9,922
41,258,141
2016-12-21T08:20:00.000
1
0
0
0
python-2.7,nfc,pyinstaller
41,260,740
1
false
0
0
Finally found a solution: I needed to add 'nfc.clf.acr122' as a hidden import (PyInstaller's --hidden-import option).
1
0
0
I am working on project that reads near field communication card and identifies the card user, I am using nfcpy with python 2.7, my nfc card reader is acr122 by ACS and I am working on windows 10. The application seems to be working fine when I run the python script. However, when I convert the python script to .exe using pyinstaller I have error "no module named acr122". Is there any specific protocol I have to follow to bundle nfcpy in exe file. Any help would be much appreciated, Thank you in advance.
nfcpy python 2.7 pyinstaller
0.197375
0
0
257
41,259,308
2016-12-21T09:30:00.000
2
1
0
1
python,unit-testing,jenkins,environment-variables,tox
41,262,440
1
true
0
0
I thought about a workaround: create a build step in the Jenkins job that executes a bash script, which opens tox.ini, finds the [testenv] line, and inserts passenv = HTTP_PROXY HTTPS_PROXY one line below it. That would solve the problem. I am working on this right now, but if you guys know a better solution please let me know. Cheers. OK, so this is the solution: add an Execute shell build step and input this: sed -i.bak '/\[testenv\]/a passenv = HTTP_PROXY HTTPS_PROXY' tox.ini This will update the tox.ini file (insert the desired passenv line under [testenv] and save the change), and it will create a tox.ini.bak backup file with the original content before sed's change.
1
2
0
I am trying to run python unit tests in jenkins using tox's virtualenv. I am behind a proxy so I need to pass HTTP_PROXY and HTTPS_PROXY to tox, else it has problems with downloading stuff. I found out that I can edit tox.ini and add passenv=HTTP_PROXY HTTPS_PROXY under [testenv], and than using the Create/Update Text File Plugin I can override the tox.ini(as a build step) whenever Jenkins job fetches the original file from repository. This way I can manually copy content of tox.ini from workspace, add the passenv= line below [testenv] and update the file with the plugin mentioned above. But this is not the proper solution. I don't want to edit the tox.ini file this way, because the file is constantly updated. Using this solution would force me to update the tox.ini content inside the plugin everytime it is changed on the git repository and I want the process of running unit tests to be fully automated. And no, I can't edit the original file on git repository. So is there a way that I can pass the passenv = HTTP_PROXY HTTPS_PROXY in the Shell nature command? This is how my command in Virtualenv Builder looks like: pip install -r requirements.txt -r test-requirements.txt pip install tox tox --skip-missing-interpreter module/tests/ I want to do something like this: tox --skip-missing-interpreter --[testenv]passenv=HTTP_PROXY HTTPS_PROXY module/tests How to solve this? NOTE: I think there might be a solution with using the {posargs}, but I see that there is a line in the original tox.ini containing that posargs already: python setup.py testr --testr-args='{posargs}' help...
How to add passenv to tox.ini without editing the file but by running tox in virtualenv shell nature script in Jenkins behind proxy (python)
1.2
0
0
1,397
41,263,383
2016-12-21T12:59:00.000
0
0
0
0
python,sqlite,cursor
41,485,462
1
false
0
0
The close() method allows you to close a cursor object before it is garbage collected. The connection's execute() method is exactly the same as conn.cursor().execute(...), so the return value is the only reference to the temporary cursor object. When you just ignore it, CPython will garbage-collect the object immediately (other Python implementations might work differently).
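A minimal sketch of both cases against an in-memory database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (1)")          # non-SELECT: the returned cursor can be ignored
conn.commit()

cur = conn.execute("SELECT x FROM t")             # the shortcut still returns a cursor
rows = cur.fetchall()
cur.close()                                       # optional; it would be garbage-collected anyway
print(rows)
```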
1
1
0
Is closing a cursor needed when the shortcut conn.execute is used in place of an explicitly named cursor in SQLite? If so how is this done? Also, is closing a cursor only need for SELECT, when a recordset is returned, or is it also needed for UPDATE, etc.?
How does closing SQLite cursor apply when conn.execute is used in place of named cursor
0
1
0
127
41,266,019
2016-12-21T15:12:00.000
0
0
1
0
python,nlp,text-classification,data-cleaning
41,266,971
1
false
0
0
Must the regular expression allow something like __abc__? If not: (\b_[a-zA-Z]+\s)|(\s[a-zA-Z]+_\b)|(\s_[a-zA-Z]+_\b) What problem are you solving? Are you preparing texts for classification, etc.? You have to distinguish mistakes from legitimate symbol sequences. There are some scientific ways to do this, for example comparison against corpus words, annotated suffix trees, etc.
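An illustrative sketch (not the exact expression above) of stripping underscores only at the edges of whitespace-separated tokens while keeping internal ones:

```python
import re

text = "_male female_ _something_something_something keep_this"

# token-based: strip leading/trailing underscores per token
cleaned_tokens = " ".join(tok.strip("_") for tok in text.split())

# regex-based equivalent: remove underscores only at a token's start or end
cleaned_regex = re.sub(r"(?:(?<=\s)|^)_+|_+(?=\s|$)", "", text)

print(cleaned_tokens)
print(cleaned_regex)
```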
1
0
0
I needed help with a couple of things.. I am new to NLP and unstructured data cleaning.. can someone answer the following questions... Thanks need help with regex to identify words like _male and female_ or more generic like _word and word_ or _something_something_something and get rid of the underscore that is present in the beginning or the end but not in the middle. I wanted to know the formal process of cleaning the data, like are there any steps that we have to follow for cleaning unstructured data, im asking this because I am doing lemmatization (with POS) and replacing the commonly occurring words like (something, something) to something_something. so what steps should I follow? I am doing the following right now-tokenize_clean>remove_numbers>remove_url>remove_slash>remove_cross>remove_garbage>replace_hypen_with_underscore>lemmatize_sentence>change_words_to_bigrams>remove_smaller_than_3(words with len smaller then 3)>remove_simlutaneous( words that occurred simultaneously many times eg, death death death)>remove_location>remove_bullets>remove_stop>remove_simlutaneous Should I do something different in these steps? I also have words like (group'shealthplanbecauseeitheroneofthefollowingqualifyingeventshappens) , (whenyouuseanon_networkprovider) ,(per\xad) ,(vlfldq\x10vxshuylvhg) how should I handle them? ignore them completely or try to improve them? My final goal is to classify the documents into Yes and No class. Any suggestions are welcomed. Will provide more examples and explanation if required.
What is the formal process of cleaning unstructured data
0
0
0
574
41,266,569
2016-12-21T15:38:00.000
1
0
1
0
python-2.7,web2py
41,273,813
1
true
0
0
As you seem to conclude, there isn't much reason to be sending requests back and forth to the server given that the server isn't generating any new data that isn't already held in the browser. Just do all the filtering directly in the browser. If you did need to do some manipulation on the server, though, don't necessarily assume it would be more efficient to search/filter a large dataset in Python rather than querying the database. You should do some testing to figure out which is more efficient (and whether any efficiency gains are worth the hassle of adding complexity to your code).
1
0
0
I researched up and down and I'm not seeing any answers that I'm quite understanding so I thought to post my own question. I building a web application (specifically in web2py but that shouldn't matter I don't believe) on Python 2.7 to be hosted on Windows. I have a list of about 2000 items in a database table. The user will be opening the application which will initially select all 2000 items into Python and return the list to the user's browser. After that the user will be filtering the list based on one-to-many values of one-to-many attributes of the items. I'm wanting Python to hold the unadulterated list of 2000 items in-memory between the user's changes of filtering options. Every time the user changes their filter options, trip the change back to Python, apply the filter to the in-memory list and return the subset to the user's browser. This is to avoid hitting the database with every change of filter options. And to avoid passing the list in session over and over. Most of this I'm just fine with. What I'm seeking you advise on is how to get Python to hold the list in-memory. In c# you would just make it a static object. How do you do a static (or whatever other scheme that applies) in Python? Thanks for your remarks. While proofreading this I see I'm still probably passing at least big portions of the list back and forth anyway so I will probably manage the whole list in the browser. But I still like to hear you suggestions. Maybe something you say will help.
Python 2.7 list of dictionaries in memory between page trips
1.2
0
0
35
41,268,863
2016-12-21T17:46:00.000
-5
0
1
0
python,setuptools,setup.py
62,582,679
4
false
0
0
This is a very good question. I was looking for an answer myself, but couldn't find one that satisfied me. So after gaining some experience, here are some examples that can help you understand: Suppose our package is foo and it integrates with a user's package bar, extending its functionality. Our package foo cannot work without bar, so it seems like it should be in install_requires, but there is a problem with that. If, for example, the user had version 1.1 of bar installed and then installed our package foo, our package might install version 1.2 of bar, which would override the user's version. Instead, we put bar in a bar section in extras_require. In this case the user can safely install foo, knowing that it will integrate with their existing version of bar. But what if the user doesn't have bar installed? In this case the user will run pip install foo[bar]. Another good example is tests. Very often the tests of your package use packages like mock or specific data types (like DataFrame) which are not mandatory for the use of the package itself. In this case, you can put all packages required for tests in a tests section in extras_require. When you want to run tests in a virtual environment (tox), you can simply write deps=my_package[tests] in the tox.ini file. I hope this answer helps.
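A minimal setup.py sketch of the foo/bar example above; package names and version pins are illustrative:

```python
from setuptools import setup

setup(
    name="foo",
    version="0.1",
    # hard dependencies: installed automatically by `pip install foo`
    install_requires=["requests>=2.0"],
    # optional groups: installed only on request,
    # e.g. `pip install foo[bar]` or `pip install foo[tests]`
    extras_require={
        "bar": ["bar>=1.1"],
        "tests": ["mock", "pytest"],
    },
)
```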
1
80
0
I am trying to understand the difference between extras_require() and install_requires() in setup.py but couldn't get it. Both are used for installing Python dependencies, but what's the difference between them?
Difference between extras_require() and install_requires() in setup.py?
-1
0
0
43,935
41,268,864
2016-12-21T17:46:00.000
0
0
1
0
python,notepad++
41,268,898
1
true
0
0
To change your tab settings in Notepad++ you need to go to Settings > Preferences..., then select Tab Settings. Then select python in the Tab Settings box, uncheck Use default value and check Replace by space. If you want a more useful IDE for Python, you might want to consider another IDE, for example Eclipse with the PyDev plugin and Django plugin. Eric is nice too.
1
0
0
Are there any plugin that can help writing Python program on Notepad++. Currently, I have to manually control the indentation. For example, I need to click the "space" four times to ensure the indentation is right. Thanks.
plugin for helping writing python program on notepad++
1.2
0
0
70
41,273,756
2016-12-21T23:47:00.000
0
0
0
0
python,tensorflow
41,295,384
1
false
0
0
You should probably get parallel execution of the first 5 iterations and the second 5 iterations. I could say for sure if you provided a code sample.
1
0
1
I don't exactly understand how the while_loop parallelization works. Suppose I have a TensorArray having 10 Tensors all of same shape. Now suppose the computations in the loop body for the first 5 Tensors are independent of the computations in the remaining 5 Tensors. Would TensorFlow run these two in parallel? Also if I use a Tensor instead of a TensorArray and made the updates to it using scatter_update, would it pass the gradients properly during backprop?
TensorFlow while_loop parallelization TensorArray
0
0
0
558
41,276,039
2016-12-22T04:59:00.000
0
0
0
0
python,nlp,nltk
41,279,184
1
true
0
0
I'm not sure if you can say that sentence splitting is harder than (word) tokenisation. But tokenisation depends on sentence splitting, so errors in sentence splitting will propagate to tokenisation. Therefore you'd want to have reliable sentence splitting, so that you don't have to make up for it in tokenisation. And it turns out that once you have good sentence splitting, tokenisation works pretty well with regexes. Why is that? – One of the major ambiguities in tokenisation (in Latin script languages, at least) is the period ("."): It can be a full stop (thus a token of its own), an abbreviation mark (belonging to that abbreviation token), or something special (like part of a URL, a decimal fraction, ...). Once the sentence splitter has figured out the first case (full stops), the tokeniser can concentrate on the rest. And identifying stuff like URLs is exactly what you would use a regex for, isn't it? The sentence splitter's main job, on the other hand, is to find abbreviations with a period. You can create a list for that by hand – or you can train it on a big text collection. The good thing is, it's unsupervised training – you just feed in the plain text, and the splitter collects abbreviations. The intuition is: If a token almost always appears with a period, then it's probably an abbreviation.
1
0
1
I am using NLTK in python. I understood that it uses regular expressions in its word tokenization functions, such as TreebankWordTokenizer.tokenize(), but it uses trained models (pickle files) for sentence tokenization. I don't understand why they don't use training for word tokenization? Does it imply that sentence tokenization is a harder task?
Why NLTK uses regular expressions for word tokenization, but training for sentence tokenization?
1.2
0
0
198
41,277,033
2016-12-22T06:31:00.000
-2
0
0
0
python,gtk,kivy,chromium-embedded,cefpython
41,285,599
1
false
0
1
Another way you can use another toolkit or framework alongside Kivy is by using threads; I tried that with tkinter and it worked.
1
2
0
I am building a Kivy application that makes use of the cefpython widget for kivy. Upon execution of my program, whenever I add a Text Input widget into the view, my application crashes with the error : Gtk-ERROR **: GTK+ 2.x symbols detected. Using GTK+ 2.x and GTK+ 3 in the same process is not supported I'm in a fix as I can't seem to figure out how to work around all this. cefpython version : 31.2 kivy version : 1.9.1 kivy-garden version : 0.1.4 pygame version : 1.9.1release
Gtk-ERROR **: GTK+ 2.x symbols detected. Using GTK+ 2.x and GTK+ 3 in the same process is not supported (Kivy Application)
-0.379949
0
0
1,223
41,279,547
2016-12-22T09:17:00.000
7
0
1
0
python,django,web-applications,web
41,279,917
6
true
1
0
Well, this is one of the most common questions among beginners. I, myself have faced the question and did build multiple projects without worrying about the virtual environment. But, of late, I have realized the importance of using virtual environments. Some of the benefits of using virtual environments are: Dependency Management: Prevent conflicts between dependencies of multiple projects. Ease of installation and setting up new project on different machines: Store your dependencies in requirements.txt file and run pip install -r requirements.txt to install the dependencies wherever you want.
4
8
0
I'm currently a novice in web programming. I've been working on this Django project lately, and I've been reading about virtual environments. At the start of my project, I was unable to set up a virtual environment, and so I proceeded with the project without it. My questions are Whether or not this virtual environment is really necessary? If I want to make more Django projects in the future, will I need this virtual environment to differentiate the projects since right now I'm running all the commands in the command prompt from my main C: directory? Does this virtual environment differentiate multiple projects or does it separate each project with respect to the version of Django/Python it's coded with or both? I'm wondering because I currently input commands such as python manage.py runserver (without the virtual environment) in my main C:drive directory. So does that mean I can't do multiple projects at once without a virtual environment for each? Can I still work on multiple projects without a virtual environment? (I've been confused about this especially) Should I just try to set up a virtual environment for my next project or can I still do it for this current one (I'm halfway through the project already, I've already made models, views, templates, etc.)? Any answers to clarify my confusion is greatly appreciated!
Virtual Environment for Python Django
1.2
0
0
15,405
41,279,547
2016-12-22T09:17:00.000
0
0
1
0
python,django,web-applications,web
58,122,857
6
false
1
0
First install the virtualenv wrapper: pip install virtualenvwrapper-win. After the wrapper environment is set up, create a virtual env with mkvirtualenv envname (this command runs only on 64-bit Python). And if you want to start the virtual env later, set your workplace (directory) in the Command Prompt and run workon envname.
4
8
0
I'm currently a novice in web programming. I've been working on this Django project lately, and I've been reading about virtual environments. At the start of my project, I was unable to set up a virtual environment, and so I proceeded with the project without it. My questions are Whether or not this virtual environment is really necessary? If I want to make more Django projects in the future, will I need this virtual environment to differentiate the projects since right now I'm running all the commands in the command prompt from my main C: directory? Does this virtual environment differentiate multiple projects or does it separate each project with respect to the version of Django/Python it's coded with or both? I'm wondering because I currently input commands such as python manage.py runserver (without the virtual environment) in my main C:drive directory. So does that mean I can't do multiple projects at once without a virtual environment for each? Can I still work on multiple projects without a virtual environment? (I've been confused about this especially) Should I just try to set up a virtual environment for my next project or can I still do it for this current one (I'm halfway through the project already, I've already made models, views, templates, etc.)? Any answers to clarify my confusion is greatly appreciated!
Virtual Environment for Python Django
0
0
0
15,405
41,279,547
2016-12-22T09:17:00.000
0
0
1
0
python,django,web-applications,web
52,409,650
6
false
1
0
A virtual environment creates a virtual installation of Python and packages on your computer. Say you have a web application. With the passage of time the packages get updated, and there are changes that sometimes break the backwards compatibility that your web application or web project may depend on. So what do you do if you want to test out the new features of a package update but you also don't want to break your web application? After all, you can't just take down your web site every time a package gets updated. That's where the virtual environment comes in: you can create a virtual environment that contains the newer version of the package, or a virtual environment for your older version of the package. Luckily Anaconda makes this really easy for us (a virtual environment handler is already included in Anaconda).
4
8
0
I'm currently a novice in web programming. I've been working on this Django project lately, and I've been reading about virtual environments. At the start of my project, I was unable to set up a virtual environment, and so I proceeded with the project without it. My questions are Whether or not this virtual environment is really necessary? If I want to make more Django projects in the future, will I need this virtual environment to differentiate the projects since right now I'm running all the commands in the command prompt from my main C: directory? Does this virtual environment differentiate multiple projects or does it separate each project with respect to the version of Django/Python it's coded with or both? I'm wondering because I currently input commands such as python manage.py runserver (without the virtual environment) in my main C:drive directory. So does that mean I can't do multiple projects at once without a virtual environment for each? Can I still work on multiple projects without a virtual environment? (I've been confused about this especially) Should I just try to set up a virtual environment for my next project or can I still do it for this current one (I'm halfway through the project already, I've already made models, views, templates, etc.)? Any answers to clarify my confusion is greatly appreciated!
Virtual Environment for Python Django
0
0
0
15,405
41,279,547
2016-12-22T09:17:00.000
3
0
1
0
python,django,web-applications,web
41,280,016
6
false
1
0
In Java, all the libs used can be packed into a war or jar file. The advantage is that you don't need to worry about the environment of the OS. Python is a purely dynamic language. Without a virtual environment, all the Python libs need to be installed into the system path and shared among all Python projects. Imagine that you are developing a Django 1.10 project. You find a demo project and want to run it on your machine, but it is compatible only with Django 1.8. You cannot install two versions of the same lib on the same machine, so you get stuck. A virtual environment solves problems like this. Of course, a virtual environment is not perfect: there are Python libs like mysql-python that depend on libmysqld, and if those libs are used in your project, it cannot be totally independent of the settings in the OS. The best practice, I think, is to use a virtual machine combined with Docker. IDEs like PyCharm support running remotely via Docker.
4
8
0
I'm currently a novice in web programming. I've been working on this Django project lately, and I've been reading about virtual environments. At the start of my project, I was unable to set up a virtual environment, and so I proceeded with the project without it. My questions are Whether or not this virtual environment is really necessary? If I want to make more Django projects in the future, will I need this virtual environment to differentiate the projects since right now I'm running all the commands in the command prompt from my main C: directory? Does this virtual environment differentiate multiple projects or does it separate each project with respect to the version of Django/Python it's coded with or both? I'm wondering because I currently input commands such as python manage.py runserver (without the virtual environment) in my main C:drive directory. So does that mean I can't do multiple projects at once without a virtual environment for each? Can I still work on multiple projects without a virtual environment? (I've been confused about this especially) Should I just try to set up a virtual environment for my next project or can I still do it for this current one (I'm halfway through the project already, I've already made models, views, templates, etc.)? Any answers to clarify my confusion is greatly appreciated!
Virtual Environment for Python Django
0.099668
0
0
15,405
41,280,318
2016-12-22T09:56:00.000
0
0
1
0
python,finally,ctrl
41,280,395
3
false
0
0
Yes, it will usually raise a KeyboardInterrupt exception, but remember that your application can be unexpectedly terminated at any time, so you shouldn't rely on that.
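A minimal sketch of that behaviour, with the loop and messages made up purely for illustration: the finally block runs when Ctrl+C raises KeyboardInterrupt, but a hard kill of the process (e.g. SIGKILL or power loss) would skip it.

```python
import time


def main():
    try:
        while True:
            time.sleep(1)   # press Ctrl+C while this runs
    except KeyboardInterrupt:
        print("caught KeyboardInterrupt")
    finally:
        # runs on normal exit, on KeyboardInterrupt and on most exceptions,
        # but not if the process is killed hard (e.g. SIGKILL) or power is lost
        print("cleanup in finally")


if __name__ == "__main__":
    main()
```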
1
13
0
If you stop a python script with Ctrl+C, will it execute any finally blocks, or will it literally stop the script where it is?
Will Python execute finally block after receiving Ctrl+C
0
0
0
7,329
41,281,590
2016-12-22T11:00:00.000
0
0
1
0
python,visual-studio
44,493,312
1
false
0
0
Assuming you configured it correctly, it can still take some time for IntelliSense to finish scanning and analysing your Python libraries before completions appear.
1
0
0
When I installed Visual Studio 2015 I also installed PTVS (Python Tools for Visual Studio) version 2.2.4; But after I installed it, autocomplete won't work for my Python code. I have installed an interpreter (CPython 3.5) and that still doesn't solve the problem.
Python on Visual Studio
0
0
0
97
41,285,298
2016-12-22T14:23:00.000
0
1
0
0
telegram,telegram-bot,python-telegram-bot,php-telegram-bot
41,792,018
4
false
0
0
It will just show a preview of the link, and if it's an audio file, an audio player bar will be shown. So the answer is yes, but it will not start playing automatically; the user has to download and play it.
2
3
0
Telegram BOT API has functions to send audio files and documents ,But can it play from an online sound streaming URL?
Can the Telegram bot API play sound from an online audio streaming URL?
0
0
1
3,630
41,285,298
2016-12-22T14:23:00.000
0
1
0
0
telegram,telegram-bot,python-telegram-bot,php-telegram-bot
41,324,053
4
false
0
0
No, you can't with the Telegram Bot API. You must download the file and upload it to Telegram's servers.
2
3
0
Telegram BOT API has functions to send audio files and documents ,But can it play from an online sound streaming URL?
Can the Telegram bot API play sound from an online audio streaming URL?
0
0
1
3,630
41,285,440
2016-12-22T14:30:00.000
1
1
0
0
python,optimization,tensorflow,deep-learning,jupyter-notebook
54,832,398
2
false
0
0
If you find it, you'll realize it just jumps to python/framework, where the actual update is just an assign operation that then gets grouped.
1
2
1
I'm working on a new optimizer, and I managed to work out most of the process. Only thing I'm stuck on currently is finding gen_training_ops. Apparently this file is crucial, because in both implementations of Gradient Descent, and Adagrad optimizers they use functions that are imported out of a wrapper file for gen_training_ops (training_ops.py in the python/training folder). I can't find this file anywhere, so I suppose I don't understand something and search in the wrong place. Where can I find it? (Or specifically the implementations of apply_adagrad and apply_gradient_descent) Thanks a lot :)
Can't find "gen_training_ops" in the tensorflow GitHub
0.099668
0
0
719
41,287,312
2016-12-22T16:08:00.000
0
1
0
1
python,trac,bitnami
41,288,887
1
true
0
0
I found out that there was another service running on the 8080 port that I had setup trac on and that was causing the trouble. The error in the logs was not pointing to that as being the issue however.
1
0
0
I installed trac using BitNami the other day and after restarting my computer I'm not able to get it running as a service today. I see in the error log this error [Fri Dec 02 08:52:40.565865 2016] [:error] [pid 4052:tid 968] C:\Bitnami\trac-1.0.13-0\python\lib\site-packages\setuptools-7.0-py2.7.egg\pkg_resources.py:1045: UserWarning: C:\WINDOWS\system32\config\systemprofile\AppData\Roaming\Python-Eggs is writable by group/others and vulnerable to attack when used with get_resource_filename. Consider a more secure location (set with .set_extraction_path or the PYTHON_EGG_CACHE environment variable). Everyone's suggestion is to move the folder path PYTHON_EGG_CACHE to the C:\egg folder or to suppress the warning at the command line. I've already set the PYTHON_EGG_CACHE for the system, I set it in trac's setenv.bat file, and in the trac.wsgi file but it's not picking up on the changes when I try to start the service. Alternately I can't change the permissions on the folder in Roaming using chmod like in Linux, and I can't remove any more permissions on the folder in Roaming (myself, Administrators, System) as Corporate IT doesn't allow for Administrators to be removed and this isn't an unreasonable policy.
"Python-Eggs is writable by group/others" Error in Windows 7
1.2
0
0
400
41,287,609
2016-12-22T16:25:00.000
2
0
0
0
python,django,database,standards
41,287,774
1
true
1
0
A common philosophy in Django is "fat models, thin views", so preferably you would put as much of this DML functionality as functions on your model classes. Since models.py already defines the structure of your data, it makes sense to put functions that manipulate your data in the same file as much as possible.
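A hedged sketch of the "fat models, thin views" idea; the Order model, its fields and the mark_paid() method below are hypothetical, not taken from the question, and assume a configured Django project.

```python
# models.py -- hypothetical example of putting data-changing logic on the model
from django.db import models


class Order(models.Model):
    status = models.CharField(max_length=20, default="open")
    paid_at = models.DateTimeField(null=True, blank=True)

    def mark_paid(self, when):
        # DML-like behaviour lives on the model, not in the view
        self.status = "paid"
        self.paid_at = when
        self.save(update_fields=["status", "paid_at"])
```

A view would then just call order.mark_paid(timezone.now()) rather than updating the fields inline.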
1
0
0
'Methods that perform like DML' means methods that change data in the database. Is there a standard or guideline for that? Below are my own guesses. Collect all functions in file of which name is like 'data_access.py' Contain functions in each class of models.py There is no standard. No one will blame even if I make them in views.py Above all are wrong.
Python/Django - Where do I have to make methods that perform like DML?
1.2
0
0
117
41,288,716
2016-12-22T17:26:00.000
0
0
0
0
python,django,pep8
41,290,255
1
false
1
0
There certainly is something you can do about it, if it bothers you that much. All relationship fields that automatically define a reverse relation also allow you to override that default by specifying related_name. So if you really don't like profile.profileattributegroup, define the one-to-one field with related_name='profile_attribute_group'.
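A minimal sketch of what that could look like, assuming models similar to the ones named in the question and a configured Django project:

```python
# models.py -- sketch; field names are assumptions for illustration
from django.db import models


class Profile(models.Model):
    name = models.CharField(max_length=100)


class ProfileAttributeGroup(models.Model):
    profile = models.OneToOneField(
        Profile,
        on_delete=models.CASCADE,
        related_name="profile_attribute_group",
    )
```

Reverse lookups then read as profile.profile_attribute_group, and filters as Profile.objects.filter(profile_attribute_group__isnull=False).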
1
1
0
As PEP8 states, one should name classes that have several words with camelcase (e.g. ProfileAttributeGroup), and use underscores for variables (profile_attribute_group). However when it comes to django and reverse relations (and templates), we are forced to use lowercased name of classes. For example, if our ProfileAttributeGroup has a one-to-one key to a Profile model, the reverse lookup would be profile.profileattributegroup. Okay, we can override that one; but this lowercasing also happens in DetailView and UpdateView templates and in sql joins (e.g. someattr.filter(profileattributegroup__isnull=False)), and there's nothing we can do about it. This makes me think that it makes sense to just lowercase foreign key names, without adding any underscores there. This way I won't have to remember when to use profile_attribute_group or profileattributegroup. But explicit ignoring of underscores contradicts PEP8. My question is, has anyone else had my doubts? And are there any future downsides of ignoring underscores that I haven't thought about?
django - underscoring foreign keys - pros and cons
0
0
0
76
41,289,031
2016-12-22T17:45:00.000
0
0
0
1
python,google-cloud-dataflow
43,789,865
1
true
0
0
As determined by thylong and jkff: The extra_package was binary-incompatible with Dataflow's packages. The requirements.txt in the root directory and the one in the extra_package were different, causing the exec.go in DataFlow container failing again and again. To fix, we recreated the venv with the same frozen dependencies.
1
0
0
With the Python SDK, the job seems to hang forever (I have to kill it manually at some point) if I use the extra_package option to use a custom ParDo. Here is a job id for example : 2016-12-22_09_26_08-4077318648651073003 No explicit logs or errors are thrown... I noticed that It seems related to the extra_package option because if I use this option without actually triggering the ParDo (code commented), it doesn't work either. The initial Bq query with a simple output schema and no transform steps works. Did it happen to someone ? P.S : I'm using the DataFlow 0.4.3 version. I tested inside a venv and it seems to work with a DirectPipelineRunner
Job hangs forever with no logs
1.2
0
0
88
41,296,179
2016-12-23T06:21:00.000
2
0
0
0
python,google-chrome,selenium,google-search,browser-automation
41,296,329
1
false
0
0
Use this url - https://www.google.com/ncr. This will not redirect to your location specific site.
1
0
0
When I'm opening google.com and doing a search in Chrome Selenium WebDriver, it redirects me to my local google domain, although the search string I'm using is "google.com ....." How can I remain on the "com" domain?
How can I avoid being redirected to a local google domain in Selenium Chrome?
0.379949
0
1
202
41,297,839
2016-12-23T08:31:00.000
0
0
1
0
python-3.x
41,312,156
2
false
0
0
Thank you for answering my question. I tried your example: print("a\rb") # => b and print("abcdef\rLOL") # => LOLdef, but I still got b and LOL as the answers. I don't know why this keeps happening. I am using PyCharm, by the way. Let me know how I can fix it.
1
0
0
I am running codes in python 3.5.2 in mac. (pycharm) When I run print("abcz\rdef"), I was expecting to get defz, but I got def... What is happening right now?
what is \r in python 3.5.2? (pycharm & Mac)
0
0
0
330
41,298,588
2016-12-23T09:23:00.000
0
0
0
0
python,c++,windows,opencv,usb
41,299,177
3
false
0
0
If you can differentiate the cameras by their serial number or device and vendor id, you can loop through all video devices before opening with opencv and search for the camera device you want to open.
1
3
1
I have a python environment (on Windows 10) that uses OpenCV VideoCapture class to connect to multiple usb cameras. As far as I know, there is no way to identify a specific camera in OpenCV other than the device parameter in the VideoCapture class constructor / open method. The problem is that the device parameter changes depending on how many cameras are actually connected and to which usb ports. I want to be able to identify a specific camera and find its "device index" or "camera index" no matter how many cameras are connected and to which usb ports. Can somebody please suggest a way to achieve that functionality? python code is preferable but C++ will also do.
OpenCV VideoCapture device index / device number
0
0
0
10,992
41,298,599
2016-12-23T09:24:00.000
0
0
0
0
python,cumsum,prop
41,499,342
1
false
0
0
It seems that you invoked sort_index instead of sort_values. The by='prop' doesn't make sense in such a context (you sort the index by the index, not by columns in the data frame). Also, in my early release copy of the 2nd edition, this appears near the top of page 43. But since this is early release, the page numbers may be fluid.
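A small runnable sketch of the corrected call, using a toy frame in place of the book's names data (only the 'prop' column matters here):

```python
import pandas as pd

# toy frame standing in for the book's names data; only the 'prop' column matters here
df = pd.DataFrame({"name": ["Anna", "Bob", "Cara"], "prop": [0.5, 0.3, 0.2]})

# sort by the 'prop' column with sort_values, instead of sort_index(by=...)
prop_cumsum = df.sort_values(by="prop", ascending=False).prop.cumsum()
print(prop_cumsum)
```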
1
0
1
I'm working on this book and keep running error when i'm run "Prop_cumsum" > prop_cumsum = df.sort_index(by='prop', ascending=False).prop.cumsum() /Users/anaconda/lib/python3.5/site-packages/ipykernel/main.py:1: FutureWarning: by argument to sort_index is deprecated, pls use .sort_values(by=...) if name == 'main': --------------------------------------------------------------------------- KeyError Traceback (most recent call last) /Users/anaconda/lib/python3.5/site-packages/pandas/indexes/base.py in get_loc(self, key, method, tolerance) 1944 try: -> 1945 return self._engine.get_loc(key) 1946 except KeyError: pandas/index.pyx in pandas.index.IndexEngine.get_loc (pandas/index.c:4154)() pandas/index.pyx in pandas.index.IndexEngine.get_loc (pandas/index.c:4018)() pandas/hashtable.pyx in pandas.hashtable.PyObjectHashTable.get_item (pandas/hashtable.c:12368)() pandas/hashtable.pyx in pandas.hashtable.PyObjectHashTable.get_item (pandas/hashtable.c:12322)() KeyError: 'prop' During handling of the above exception, another exception occurred: KeyError Traceback (most recent call last) in () ----> 1 prop_cumsum = df.sort_index(by='prop', ascending=False).prop.cumsum() /Users/anaconda/lib/python3.5/site-packages/pandas/core/frame.py in sort_index(self, axis, level, ascending, inplace, kind, na_position, sort_remaining, by) 3237 raise ValueError("unable to simultaneously sort by and level") 3238 return self.sort_values(by, axis=axis, ascending=ascending, -> 3239 inplace=inplace) 3240 3241 axis = self._get_axis_number(axis) /Users/anaconda/lib/python3.5/site-packages/pandas/core/frame.py in sort_values(self, by, axis, ascending, inplace, kind, na_position) 3149 3150 by = by[0] -> 3151 k = self[by].values 3152 if k.ndim == 2: 3153 /Users/anaconda/lib/python3.5/site-packages/pandas/core/frame.py in getitem(self, key) 1995 return self._getitem_multilevel(key) 1996 else: -> 1997 return self._getitem_column(key) 1998 1999 def _getitem_column(self, key): /Users/anaconda/lib/python3.5/site-packages/pandas/core/frame.py in _getitem_column(self, key) 2002 # get column 2003 if self.columns.is_unique: -> 2004 return self._get_item_cache(key) 2005 2006 # duplicate columns & possible reduce dimensionality /Users/anaconda/lib/python3.5/site-packages/pandas/core/generic.py in _get_item_cache(self, item) 1348 res = cache.get(item) 1349 if res is None: -> 1350 values = self._data.get(item) 1351 res = self._box_item_values(item, values) 1352 cache[item] = res /Users/anaconda/lib/python3.5/site-packages/pandas/core/internals.py in get(self, item, fastpath) 3288 3289 if not isnull(item): -> 3290 loc = self.items.get_loc(item) 3291 else: 3292 indexer = np.arange(len(self.items))[isnull(self.items)] /Users/anaconda/lib/python3.5/site-packages/pandas/indexes/base.py in get_loc(self, key, method, tolerance) 1945 return self._engine.get_loc(key) 1946 except KeyError: -> 1947 return self._engine.get_loc(self._maybe_cast_indexer(key)) 1948 1949 indexer = self.get_indexer([key], method=method, tolerance=tolerance) pandas/index.pyx in pandas.index.IndexEngine.get_loc (pandas/index.c:4154)() pandas/index.pyx in pandas.index.IndexEngine.get_loc (pandas/index.c:4018)() pandas/hashtable.pyx in pandas.hashtable.PyObjectHashTable.get_item (pandas/hashtable.c:12368)() pandas/hashtable.pyx in pandas.hashtable.PyObjectHashTable.get_item (pandas/hashtable.c:12322)() KeyError: 'prop'
Python for Data Analysis: Chp 2 Pg 38 "prop_cumsum" error
0
0
0
187
41,299,126
2016-12-23T09:57:00.000
0
0
0
0
python,machine-learning,tensorflow,deep-learning,google-developer-tools
50,601,700
2
false
1
0
Doesn't TF actually also use Google's models from the cloud? I'm pretty sure Google uses cloud data to provide better models for TF. I'd recommend you stay away from it. Only by writing your models from scratch will you learn to do useful stuff with it long term. I can also recommend Weka for Java; it's open source, and you can look at the code of the models there and implement them yourself, adjusting for your needs.
1
0
1
Does anyone have any idea whether Google collects data that one supplies to Tensorflow? I mean it is open source, but it falls under their licences.
TensorFlow from Google - Data Security
0
0
0
861
41,303,027
2016-12-23T14:12:00.000
0
0
1
0
python,python-3.x,crash,anaconda,spyder
59,740,947
3
false
0
0
Run the command conda update --all in the Anaconda Prompt to solve this issue. In my case, it ended up downgrading Spyder from 4.0.1 to 3.3.5 (2 versions behind). If the version does not matter much to you, this will fix the error.
3
3
0
I used to compile on Spyder at work, so I downloaded Anaconda 4.2 with Python 3.5 64bit for installing it on my own PC too. But it doesn't work! Every time I try to open spyder or a Notebook, or even the Anaconda Navigator, it crashes and an error message of "Python stop working" I tried to open spyder from command prompt too, to no avail. On the other hand, if I open a Python shell from Windows Command Prompt, it works. Any ideas?
Python stops working while trying to open Spyder or Anaconda Navigator
0
0
0
3,410
41,303,027
2016-12-23T14:12:00.000
0
0
1
0
python,python-3.x,crash,anaconda,spyder
41,307,185
3
false
0
0
You should tell us the exact and complete error message that you got. But you should probably uninstall Anaconda and all its parts (including Python), delete the .anaconda and related sub-folders in your Users folder, then reinstall Anaconda. If that doesn't work, let us know. Don't skip deleting those sub-folders: I solved one of my problems by doing that.
3
3
0
I used to compile on Spyder at work, so I downloaded Anaconda 4.2 with Python 3.5 64bit for installing it on my own PC too. But it doesn't work! Every time I try to open spyder or a Notebook, or even the Anaconda Navigator, it crashes and an error message of "Python stop working" I tried to open spyder from command prompt too, to no avail. On the other hand, if I open a Python shell from Windows Command Prompt, it works. Any ideas?
Python stops working while trying to open Spyder or Anaconda Navigator
0
0
0
3,410
41,303,027
2016-12-23T14:12:00.000
0
0
1
0
python,python-3.x,crash,anaconda,spyder
42,871,777
3
false
0
0
Check your firewall (try disabling it temporarily). That turned out to be the issue in my case (after trying to reinstall different versions of spyder/anaconda 20 times).
3
3
0
I used to compile on Spyder at work, so I downloaded Anaconda 4.2 with Python 3.5 64bit for installing it on my own PC too. But it doesn't work! Every time I try to open spyder or a Notebook, or even the Anaconda Navigator, it crashes and an error message of "Python stop working" I tried to open spyder from command prompt too, to no avail. On the other hand, if I open a Python shell from Windows Command Prompt, it works. Any ideas?
Python stops working while trying to open Spyder or Anaconda Navigator
0
0
0
3,410
41,304,164
2016-12-23T15:38:00.000
2
1
0
0
python-3.5,python-embedding
41,304,207
1
true
0
1
It can be done with Py_SetPythonHome.
1
1
0
I am currently embedding Python3 into my C++ application. We also ships a defined version of Python3. Currently Py_Initialize finds the system python in /usr/lib/python3.5 (which we do not want). I could not yet figure out how I can strip the search path before calling Py_Initialize and force it to search in my custom path.
Embedding specific Python
1.2
0
0
34
41,304,621
2016-12-23T16:12:00.000
1
0
0
0
python,pip,lxml
45,203,932
1
false
0
0
Install the missing dependency: sudo apt-get install zlib1g-dev
1
0
0
I know this question has been asked before but I've tried pretty much every solution listed on every question I can find, all to no avail. Pip install lxml doesn't work, nor does easy_install lxml. I have downloaded and tried a handful of different versions of lxml: lxml-3.6.4-cp27-cp27m-win32 (WHL file) lxml-3.7.0-cp36-cp36m-win32 (WHL file) lxml-lxml-lxml-3.7.0-0-g826ca60.tar (GZ file) I have also downloaded, and extracted everything from, both libxml2 and libxslt. Now they are both sitting in their own unzipped folders. When I run the installations from the command line, it appears to be working for a few seconds but eventually just fails. It either fails with exit status 2 or failed building wheel for lxml or could not find function xmlCheckVersion in library libxml2. Is libxm12 installed?.....I think it's installed but I have no clue what an installed libxm12 should look like. I unzipped and extracted everything from the libxm12 download. I've also tried all the following commands from other SO posts, and each has failed: sudo apt-get install python-lxml apt-get install python-dev libxml2 libxml2-dev libxslt-dev pip install --upgrade lxml pip3 install lxml I have also looked up and attempted installing "prebuilt binaries" but those also don't seem to work....... I don't want this post to just be me complaining that it wouldn't work so my question is: what is the simplest most straightforward way to put lxml onto my computer so I can use it in Python?
Can't install lxml module
0.197375
0
1
1,539
41,305,139
2016-12-23T16:53:00.000
0
0
1
0
python,dynamic-typing
41,306,745
2
false
0
0
The simple answer is probably to start without any type checking and see what happens. You'll probably find out - as I did 17 years ago, and much to my surprise - that you need it way less than you believe. For the record, I used to think you couldn't hope to write anything but quick scripts and toy programs without strict type checking, and I lost a lot of time fighting against the language until I started reading serious Python apps' code and the stdlib code and found out it just worked without it.
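For illustration, a tiny sketch (function names made up) of the trade-off between duck typing and an explicit check:

```python
def scaled(value, factor):
    # duck typing: just use the value; anything supporting '*' works
    return value * factor


def scaled_checked(value, factor):
    # explicit check: fails early with a clearer message, but rejects valid duck types
    if not isinstance(value, (int, float)):
        raise TypeError(f"expected a number, got {type(value).__name__}")
    return value * factor


print(scaled(3, 2))       # 6
print(scaled(3.5, 2))     # 7.0
print(scaled([1, 2], 2))  # [1, 2, 1, 2] -- allowed by duck typing, rejected by the checked version
```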
1
6
0
I'm starting to learn Python and as a primarily Java developer the biggest issue I am having is understanding when and when not to use type checking. Most people seem to be saying that Python code shouldn't need type checking, but there are many cases when I believe it is necessary. For example, let's say I need to use a method parameter to perform an arithmetic operation, why shouldn't I make sure the argument is a numeric data type? This issue is not only limited to functions. The same thought process occurs to me for class variables. Why and when should I or shouldn't I use properties (using @property) to check type instead of regularly implemented class variables? This is a new way of approaching development for me so I would appreciate help understanding.
When should I use type checking (if ever) in Python?
0
0
0
569
41,305,432
2016-12-23T17:16:00.000
2
0
1
0
windows,python-2.7,python-3.x,pip,urllib3
42,188,924
4
false
0
0
May be worth double checking your PYTHONPATH Environment Variable in: Control Panel\System and Security\System -> Advanced System Settings -> Environment Variables. I had a rogue copy of Python that caused this exact error
2
0
0
Everytime I use pip command the command failed with error: "ImportError: No module named 'urllib3'". I do got urllib3 installed, and when I'm trying to install urllib3 again I got the same error. What can I do? I'm using windows 10. I cant run "pip install virtualenv", I got the same error with any pip command.
ImportError: No module named 'urllib3'
0.099668
0
1
6,193
41,305,432
2016-12-23T17:16:00.000
0
0
1
0
windows,python-2.7,python-3.x,pip,urllib3
41,306,324
4
false
0
0
To get around this error, try installing virtualenv with "pip install virtualenv" and creating a virtual environment directory with "python3 -m venv myvenv", which creates a folder named myvenv. Then activate it ("source myvenv/bin/activate" on Linux/macOS, or "myvenv\Scripts\activate" on Windows). Now you have your virtual environment set up and can install whatever you want inside it, which will not conflict with the programs installed for your base OS Python. Try some googling to explore pip and virtualenv setup and use. Happy coding :)
2
0
0
Everytime I use pip command the command failed with error: "ImportError: No module named 'urllib3'". I do got urllib3 installed, and when I'm trying to install urllib3 again I got the same error. What can I do? I'm using windows 10. I cant run "pip install virtualenv", I got the same error with any pip command.
ImportError: No module named 'urllib3'
0
0
1
6,193
41,310,316
2016-12-24T04:26:00.000
1
0
1
0
python,date,nlp
41,347,322
1
true
0
0
There is! (Python is amazing) dateutil.parser.parse("today is 21 jan 2016", fuzzy=True)
1
1
0
when I use this code for extracting the date dateutil.parser.parse('today is 21 jan 2016') It throws an error -> ValueError: unknown string format is there any way to extract dates and time from a sentence???
how to find unstructured date and time from a sentence in python?
1.2
0
0
1,640
41,310,818
2016-12-24T06:10:00.000
0
0
0
0
python-3.x,sockets
54,252,278
1
false
0
0
Working with MAC addresses should do it, login/pass, or pass-phrases. – Papipone
1
0
0
I am currently working on a test application involving a server and several client. The communication is achieved through the use of the TCP/IP protocol. The server has several slots available. When a client connects, this one is affected to a slot. Is there a reliable way to identify if a disconnected client has reconnected? I would like to reassign the disconnected client to its previous slot. I do not really ask for code, but just clues that could help me to solve this problem. Thanks for your answers. Edit Working with MAC addresses should do it, login/pass, or pass-phrases.
Python3 knwowing if a disconnected client has reconnected
0
0
1
55
41,312,197
2016-12-24T09:54:00.000
2
0
1
0
python,computer-vision,tensorflow,conv-neural-network
41,315,329
2
true
0
0
Only two classes. "Not food" is your background class. If you were trying to detect food or dogs, you could have 3 classes: "food", "dog", "neither food nor dog".
1
1
1
I see that a background class is used as a bonus class. So this class is used in case of not classifying an image in the other classes? In my case, I have a binary problem and I want to understand if an image contains food or not. I need to use 2 classes + 1 background class = 3 classes or only 2 classes?
Number of classes for inception network (Tensorflow)
1.2
0
0
397
41,318,435
2016-12-25T03:15:00.000
1
0
1
0
python-2.7,numpy,scipy,voice-recognition,voice
41,329,052
1
true
0
0
It is not possible to compare two speech samples at the sample level (or in the time domain). Each part of the spoken words might vary in length, so they won't match up, and the levels of each part will also vary, and so on. Another problem is that the phase of the individual components that the sound signal consists of can change too, so two signals that sound the same can look very different in the time domain. So likely the best solution is to move the signal into the frequency domain. One common way to do this is the Fast Fourier Transform (FFT). You can look it up; there is a lot of material about this on the net, and good support for it in Python. Then you could proceed like this: divide the sound sample into small segments of a few milliseconds, find the principal coefficients of the FFT of the segments, and compare the sequences of some selected principal coefficients.
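A rough NumPy sketch of the frequency-domain comparison outlined above; the segment length, number of coefficients and the cosine-similarity measure are arbitrary assumptions, not a tuned recipe for voice passwords.

```python
import numpy as np


def spectral_features(signal, segment_len=2048, n_coeffs=128):
    # split the signal into fixed-length segments and keep the low-frequency
    # FFT magnitudes of each, normalised by the segment's peak magnitude
    feats = []
    for start in range(0, len(signal) - segment_len + 1, segment_len):
        seg = signal[start:start + segment_len]
        mags = np.abs(np.fft.rfft(seg))
        feats.append(mags[:n_coeffs] / (np.max(mags) + 1e-9))
    return np.array(feats)


def similarity(sig_a, sig_b):
    # mean cosine similarity over corresponding segments
    a, b = spectral_features(sig_a), spectral_features(sig_b)
    n = min(len(a), len(b))
    if n == 0:
        return 0.0
    dots = np.sum(a[:n] * b[:n], axis=1)
    norms = np.linalg.norm(a[:n], axis=1) * np.linalg.norm(b[:n], axis=1) + 1e-9
    return float(np.mean(dots / norms))


# toy check with synthetic tones: identical signals score close to 1.0
t = np.linspace(0, 1, 8000)
print(similarity(np.sin(2 * np.pi * 220 * t), np.sin(2 * np.pi * 220 * t)))
```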
1
0
1
I want to record a word beforehand and when the same password is spoken into the python script, the program should run if the spoken password matches the previously recorded file. I do not want to use the speech recognition toolkits as the passwords might not be any proper word but could be complete gibberish. I started with saving the previously recorded file and the newly spoken sound as numpy arrays. Now I need a way to determine if the two arrays are 'close' to each other. Can someone point me in the right direction for this?
Voice activated password implementation in python
1.2
0
0
851
41,324,742
2016-12-25T21:46:00.000
2
0
1
0
python
41,324,764
1
true
0
0
This means that bio should be assigned to user.bio if the expression after the if is truthy. If the expression after the if is falsy, the result of the expression after else is assigned to user.bio instead. It is the Python equivalent of the C and C++ ?: ternary operator expression.
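A tiny sketch of the expression with made-up values, showing why an emptied input field leaves the old value in place:

```python
bio = ""                # e.g. the user submitted an empty textarea
current_bio = "old bio"

# conditional expression: keep the old value when 'bio' is falsy (empty string, None, ...)
current_bio = bio if bio else current_bio
print(current_bio)      # -> "old bio"
```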
1
0
0
a user is trying to update their info. I have been reading this line of code but I still do not understand what does the bio in front of the if statement does. the varaible bio is actually from an input bio = data.get('bio', '') user.bio = bio if bio else user.bio P.S. I just realised, could this be like a shorthand for if bio exist return bio else return user.bio? Then in this way, if user try to empty their input field, it actually wouldn't work?
what does this variable mean in front of the if statement? python
1.2
0
0
226
41,326,384
2016-12-26T04:18:00.000
6
1
0
0
python,rfid,mifare
42,299,452
2
false
0
0
There are 2 types of UID-writable cards: block 0 writable cards, where you can write block 0 at any moment, and backdoored cards. If writing block 0 does not work, you probably have a backdoored card. To enable the backdoor, you need to send the following sequence to the card (everything in hexadecimal): RC522 > Card: 50 00 57 cd (HLTA + CRC); RC522 > Card: 40 (7 bits only); Card > RC522: A (4 bits only); RC522 > Card: 43; Card > RC522: A (4 bits only). Then you can write to block 0 without authentication. If it still does not work, your card is probably not UID-changeable. To answer your question: there is no reason for Python libraries to refuse writing block 0. If your library can write any block except block 0, it's because your card refuses to write that block. Does your card send back a NACK or nothing when you try to write block 0?
1
3
0
Here is my issue: my RC522 module is connected to my Pi2 via SPI and I'm able to read all [64 blocks / 16 sectors] using both MFRC522-python and pi-rc522 libraries. Also I'm able to write and change all the blocks(63 blocks) except for Block 0 (including UID) of a Chinese Mifare 1K card that I bought from ebay and it supposed to be Block 0 / UID writable. Question is: using the available python libraries(mentioned above), is it possible to write Block 0 on a Chinese writable Mifare 1K card at all or not. Note: when I received the card the sector trailer access bits were on transport configuration (FF 07 80 -> 001 for sector trailer and 000 for data blocks), which it means normally I could be able to change the data blocks (including Block 0) using KeyA or KeyB, but I couldn't. I changed the access bits to (7F 0F 88 -> 000 for data blocks) and used KeyA/KeyB, it didn't work, and block 0 remained unchanged. I also tried (78 77 88 -> 000 for data blocks) with KeyA or KeyB, same result. Again, setting proper access bits, I'm able to read/write all the other blocks except for block 0. Thanks, A.
re-writing uid and block 0 on Chinese (supposed to be writable) MIFARE 1K card in python
1
0
0
12,984
41,328,993
2016-12-26T09:25:00.000
0
0
0
0
django,python-2.7
41,331,613
2
false
1
0
There is no way to do what you want, by design.
1
0
0
I want to display all the Internet History Information of a system using Python?
How do i retrieve browsers history in python django?
0
0
0
399
41,329,665
2016-12-26T10:15:00.000
2
0
0
0
python-2.7,opencv3.0
65,705,673
4
false
0
0
The old implementation is no longer available. It is now available as follows: fld = cv2.ximgproc.createFastLineDetector() followed by lines = fld.detect(image)
1
13
1
Can a sample implementation code or a pointer be provided for implementing LSD with opencv 3.0 and python? HoughLines and HoughLinesP are not giving desired results in python and want to test LSD in python but am not getting anywhere. I have tried to do the following: LSD=cv2.createLineSegmentDetector(0) lines_std=LSD.detect(mixChl) LSD.drawSegments(mask,lines_std) However when I draw lines on the mask I get an error which is: LSD.drawSegments(mask,lines_std) TypeError: lines is not a numerical tuple Can someone please help me with this? Thanks in advance.
LineSegmentDetector in Opencv 3 with Python
0.099668
0
0
18,293
41,329,994
2016-12-26T10:42:00.000
1
0
1
0
python,derivative,quantlib
41,336,999
1
false
0
0
FX forward is currently not supported. Not sure about the swap.
1
0
0
Is there any possibility to value fx instruments in QuantLib-Python (especially, fx forwards, fx swaps)? For the last two days I have been looking through the documentation but I have only found "one currency" instruments like "VanillaSwap" and etc. Probably, one could use other libraries that are based on QuantLib... Any help is highly appreciated.
Fx forward valuation in QuantLib-Python
0.197375
0
0
806
41,330,221
2016-12-26T10:59:00.000
3
0
1
0
python,python-3.x
41,330,324
2
true
0
0
Wrapping the whole code in a try ... except is the simplest way, but it doesn't fix the problem. I think the best way is to find out the root cause of the crash and take care of it, even if that just means printing an error message and exiting.
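A hedged sketch of one common pattern: release resources in finally close to where they are acquired, and keep a single last-resort except at the top level that logs the root cause. The job body and database path below are made up.

```python
import logging
import sqlite3


def run_job(db_path="example.db"):
    conn = sqlite3.connect(db_path)   # stand-in for resources such as a db connection
    try:
        pass                          # the actual work of the script would go here
    finally:
        conn.close()                  # released whether the job succeeds or crashes


if __name__ == "__main__":
    try:
        run_job()
    except Exception:
        # last resort: record the root cause instead of silently swallowing it
        logging.exception("job crashed")
        raise SystemExit(1)
```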
1
4
0
Say, I have a big application in python running on a server a few times per day and sometimes it crashes. I want, when it crashes, it at least to release all the resources such as db connection, file handlers, etc. I've got some "try ... except" blocks in it, but who knows where it's going to crash next? It's possible it'll crash somewhere where the code isn't wrapped into "try ... except". What's a recommended way to improve it? Should I wrap the whole script body into "try ... except" as the last resort? Or what?
Is it ok to wrap the whole code into "try ... except" as the last resort?
1.2
0
0
2,449
41,331,567
2016-12-26T12:49:00.000
0
0
0
0
python,django,profiling,django-channels,daphne
46,773,111
2
false
1
0
Why not add a monitoring tool, something like Kibana or New Relic, and monitor why and what's taking so long for a small-payload response? It can tell you the time spent in Python, PostgreSQL and Memcache (Redis).
1
18
0
My technology stack is Redis as a channels backend, Postgresql as a database, Daphne as an ASGI server, Nginx in front of a whole application. Everything is deployed using Docker Swarm, with only Redis and Database outside. I have about 20 virtual hosts, with 20 interface servers, 40 http workers and 20 websocket workers. Load balancing is done using Ingress overlay Docker network. The problem is, sometimes very weird things happen regarding performance. Most of requests are handled in under 400ms, but sometimes request can take up to 2-3s, even during very small load. Profiling workers with Django Debug Toolbar or middleware-based profilers shows nothing (timing 0.01s or so) My question: is there any good method of profiling a whole request path with django-channels? I would like how much time each phase takes, i.e when request was processed by Daphne, when worker started processing, when it finished, when interface server sent response to the client. Currently, I have no idea how to solve this.
How to profile django channels?
0
0
0
1,604
41,332,955
2016-12-26T14:57:00.000
0
0
0
0
python,python-3.x,tkinter
60,816,826
2
false
0
1
I recommend using one main window with the buttons and put the rest of the widgets in different labelframes that appear and disappear upon execution of different functions by the buttons
1
3
0
I'm creating a Wizard in Tkinter. Almost each of the steps shoud I have the same footer with the button for navigation and cancelling. How can I achieve this? Should I create a Frame? And in general, should all the steps be created as different frames?
Creating a wizard in Tkinter
0
0
0
4,441
41,338,208
2016-12-27T02:16:00.000
1
0
0
0
database,sqlite,python-3.x,django-models
41,338,383
2
false
1
0
I think it's not a good idea to create a table for each user. This may cause bad performance and low security. Why don't you create a table named userInfo and put user.userID as a foreign key?
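A minimal sketch of that suggestion; the UserInfo model and its fields below are hypothetical and assume a configured Django project.

```python
# models.py -- hypothetical sketch: one shared table keyed by the user, not one table per user
from django.conf import settings
from django.db import models


class UserInfo(models.Model):
    user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    item_id = models.IntegerField()
    quantity = models.IntegerField(default=0)
    created_at = models.DateTimeField(auto_now_add=True)
```

A new member then simply gets rows in this one table, retrieved with UserInfo.objects.filter(user=some_user); no new table is created per user.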
1
0
0
I am wondering if there is any way to create a new model table in SQLite with Django 1.10 (like writing general python code) without having to specify in the models.py. The situation is if there is a new member registered on my website, then I will create a new model table for them to hold their data. Specifically: step 1: John Doe registered on my site step 2: The system create a model table named db_johnDoe (with same set of fields as of the others) step 3: The system can insert and edit data of the db_johnDoe according to John's behavior on the website. Any idea? Thanks a lot.
Django 1.10 Create Model Tables Automatically
0.099668
1
0
292
41,338,354
2016-12-27T02:38:00.000
1
0
0
0
python,animation,scripting,maya
41,349,764
2
false
0
0
It worked when I passed the names of the ik handles instead of custom-defined variables for them: ankle_grp = cmds.group('ankle_ik', 'ball_ik', n='ankle_grp')
1
0
0
I'm trying to automate the foot rig process in maya using python. When I try to group the ikHandles using this line of code, ankle_grp=cmds.group( ankle_ik, ball_ik,n='ankle_grp'), the effectors of the ikHandles are also coming into the ankle_grp. I do not want that. I want the ankle_grp to have only the ik Handles and not it's effectors. How do i do that? Thanks in advance.
How to select only ikHandle and not it's effector using python in Maya?
0.099668
0
0
268
41,338,712
2016-12-27T03:39:00.000
1
0
0
0
wxpython,wxformbuilder
41,374,082
1
false
0
1
Okay, this is what I did: I added a regular wxTreeCtrl from wxFB, then in the property pane under "subclass" changed the name to TreeListCtrl and the header to wx.gizmos. Additional settings (like adding multiple columns) need to be done manually in the derived frame class's __init__. Event handlers can be added through wxFB and work correctly.
1
0
0
I am planning to use wxformbuilder to design application user interface. It's great tool and I like wxPython. But i don't see gizmos widgets anywhere on the UI. I am particularly interested in gizmos.TreeListCtrl widget which supports tree with multi column entries. Anybody has tried intergrating gizmos widget with wxformbuilder? is it possible even?
how to integrate gizmos widgets in wxformbuilder
0.197375
0
0
201
41,339,450
2016-12-27T05:23:00.000
2
0
0
0
wxpython,wxwidgets,wxformbuilder
41,344,951
1
false
0
1
In FormBuilder, the wxSplitterWindow control accepts 2 (and only 2) children. Those children can be either wxPanels or wxScrolledWindows. You can then add a sizer and any other controls you want to those children. If you use panels for the children, make sure to use the wxPanel from the "Containers" page and not the panel from the "Forms" page. If you want to know which items are allowed as children of a certain control, you can look at the file objtypes.xml in the xml folder of the wxFormBuilder application.
1
0
0
i am trying to design UI using wxFormBuilder. I have added wxFrame -> wxGridBagSizer -> wxSplitterWindow. After this point wxFormBuilder is not allowing me to put any windows under splitter window. I tried putting almost every widget. I also tried putting sizers under splitter window. But nothing is working. All the widgets go at the same level as spliter window.
wxformbuilder does not allow putting windows under wxsplitterwindow
0.379949
0
0
556
41,340,495
2016-12-27T06:59:00.000
2
0
1
0
python,python-2.7,pathos
41,346,636
1
false
0
0
I'm the pathos author. pathos has a few dependencies, and if you are installing pathos by hand (as you are doing now), you have to get all of the dependencies for it to fully work. The easiest thing to do is to use pip, and pip install pathos. Or, you can first install setuptools and then repeat what you have already done (install by hand), and setuptools will grab and install the dependencies for you. Or, if you do want to do everything the hard way, then you need to install dill, pox, ppft, and multiprocess. Install them before installing pathos. It's typically much easier if you have setuptools installed.
1
0
0
I try to use pathos for multiprocessing,but when I start the process, I get the ImportError:no module named pp,please help me. I download pathos from github, install by python setup.py, python version 2.7.8,
importerror when I use pathos
0.379949
0
0
1,876
41,341,113
2016-12-27T07:45:00.000
0
0
0
0
python,html,django
41,341,521
1
true
1
0
In your view, add handling for POST requests and, in that case, render your template with a template variable holding your POST data.
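A minimal sketch of such a view; the template name 'feedback.html' and the field name 'message' are assumptions for illustration, and a configured Django project is assumed.

```python
# views.py -- minimal sketch
from django.shortcuts import render


def feedback(request):
    message = ""
    if request.method == "POST":
        message = request.POST.get("message", "")
    # the template can display {{ message }} right below the <textarea>
    return render(request, "feedback.html", {"message": message})
```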
1
0
0
How to get input from textarea and display them onto my webpage immediately after submit on Django?
Django: Get user's input in textarea and display them on page
1.2
0
0
375
41,343,316
2016-12-27T10:24:00.000
7
0
0
0
python,sqlalchemy,flask-sqlalchemy,alembic,flask-migrate
41,357,149
3
false
1
0
You can figure out if your project is at the latest migration with the current subcommand. Example output when you are at the latest migration: (venv) $ python app.py db current -> f4b4aa1dedfd (head). The key thing is the (head) that appears after the revision number. That tells you that this is the most recent migration. Here is how things change after I add a new migration, but before I upgrade the database: (venv) $ python app.py db current -> f4b4aa1dedfd. And after I run db upgrade I get: (venv) $ python app.py db current -> f3cd9734f9a3 (head). Hope this helps!
1
8
0
We're using SQLAlchemy and Alembic (along with Flask-SQLAlchemy and Flask-Migrate). How to check if there are pending migrations? I tried to check both Alembic's and Flask-Migrate's documentation but failed to find the answer.
How to check if there are pending migrations when using SQLAlchemy/Alembic?
1
1
0
4,164
41,344,017
2016-12-27T11:07:00.000
0
0
1
0
python,ubuntu,anaconda,environment,spyder
41,346,692
2
false
0
0
(Posted on behalf of the OP). It is solved: I reinstalled spyder and it works properly now. Thank you.
1
0
1
I have not used Linux/Unix for more a decade. Why does the 'tensorflow' module import fail in Spyder and not in Jupyter Notebook and not in Python prompt? SCENARIO: [terminal] spyder [spyder][IPython console] Type 'import tensorflow as tf' in the IPython console CURRENT RESULT: [spyder][IPython console] Message error: 'ImportError: No module named 'tensorflow'' ADDITIONAL INFORMATION: OS: Ubuntu 14.04 (VMWare) Python: Python 3.5.2 :: Anaconda custom (64-bit) Install of TensorFlow: [terminal] sudo -s [terminal] conda create --name=IntroToTensorFlow python=3 anaconda [terminal] source activate IntroToTensorFlow [terminal] conda install -c conda-forge tensorflow PATH = $PATH:/home/mo/anaconda3/envs/IntroToTensorFlow/bin COMMENTS: When I replay the following scenario, it works fine: [terminal] sudo -s [terminal] source activate IntroToTensorFlow [terminal] python [python] import tensorflow as tf When I replay the tensorflow import in Jupyter Notebook, it works fine too WHAT I HAVE DONE SO FAR: I Googled it but I did not find a suitable anwser I searched in the Stack Overflow questions
Why does the 'tensorflow' module import fail in Spyder and not in Jupyter Notebook and not in Python prompt?
0
0
0
1,193
41,346,055
2016-12-27T13:25:00.000
0
0
0
0
python,machine-learning,scikit-learn,logistic-regression
66,783,030
1
false
0
0
You can use the warm_start option (with a solver other than liblinear), and manually set coef_ and intercept_ prior to fitting. From the docs: warm_start : bool, default=False. When set to True, reuse the solution of the previous call to fit as initialization; otherwise, just erase the previous solution. Useless for the liblinear solver.
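A hedged sketch of that idea; exactly how a manually set coef_ interacts with warm_start can differ between scikit-learn versions, so treat this as a starting point rather than a guaranteed recipe.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# warm_start=True makes fit() start from the current coef_/intercept_
# (not supported by the liblinear solver)
clf = LogisticRegression(solver="lbfgs", warm_start=True, max_iter=200)

# hand the solver an initial guess before the first fit
clf.coef_ = np.zeros((1, X.shape[1]))   # shape (1, n_features) for a binary problem
clf.intercept_ = np.zeros(1)

clf.fit(X, y)
print(clf.coef_, clf.intercept_)
```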
1
4
1
I am using scikit-learn's linear_model.LogisticRegression to perform multinomial logistic regress. I would like to initialize the solver's seed value, i.e. I want to give the solver its initial guess as the coefficients' values. Does anyone know how to do that? I have looked online and sifted through the code too, but haven't found an answer. Thanks!
Feeding a seed value to solver in Python Logistic Regression
0
0
0
493
41,347,484
2016-12-27T15:02:00.000
0
0
1
0
python
41,347,592
4
false
0
0
The hard way: get the ASCII equivalent of each letter, generate a random letter in the range between the highest and lowest ASCII values, and join the random characters together.
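A short sketch of the steps described above, plus (as a plainly separate alternative, not part of that answer) the usual random.shuffle approach that permutes the actual letters:

```python
import random


def jumble_ascii_range(a, b):
    # the "hard way" above: random characters drawn from the ASCII range
    # spanned by the inputs (note: not a permutation of the original letters)
    combined = a + b
    low = min(ord(ch) for ch in combined)
    high = max(ord(ch) for ch in combined)
    return "".join(chr(random.randint(low, high)) for _ in combined)


def jumble_shuffle(a, b):
    # alternative: shuffle the actual characters of both strings together
    chars = list(a + b)
    random.shuffle(chars)
    return "".join(chars)


print(jumble_ascii_range("hello", "world"))
print(jumble_shuffle("hello", "world"))
```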
1
0
0
How to shuffle to inputted strings, and to mix together to jumble the two strings up. For example "hello" and "world" shuffle together to become "wherd llohe"
Anybody know how to shuffle two strings together?
0
0
0
1,312
41,352,692
2016-12-27T21:58:00.000
2
1
0
0
python,maximo,escalation
41,363,700
1
false
0
0
Applying the conditional approval of labor records (all labor started over 21 days ago) can be done in the escalation. I'm not saying it can't be done in the automation script, but I found out it is easy enough to write the SQL filter in the "Condition" box. I had started down this path first, but had used the wrong database field in my expression. Note: while using the "Condition" writer tool, Maximo shows a drop-down of fields to choose from for the filter. Don't use these. Go to the database itself and find the correct field you need to use - in this case, 'StartDate' instead of 'StartDateTime'. Here is my updated expression used in the escalation: GENAPPRSERVRECEIPT=0 and (STARTDATE < (TRUNC(SYSDATE) - 21))
1
0
0
Thanks in advance. We have a Maximo automation script (python) that approves all labor transactions when it runs from a scheduled escalation. "mbo.approveLaborTransaction()" is the entire script. No problems with the automation script or the escalation. But, instead of approving ALL labor, when it runs, we would like to only approve labor where the start date is over 21 days ago. (This will give employees time to edit their labor records. Approved labor can't be edited.) Is this conditional approval of labor records possible through the python script? And if so, how? If not, is it possible to put conditions on the escalation that calls the automation script? Currently, there is a condition, 'GENAPPRSERVRECEIPT=0', on the escalation. (which means where labor NOT approved) I tried adding '...AND (STARTDATETIME < (SYSDATE - 21))', but that did not work. I'm open to other methods as well. Thanks. Ryan
Maximo 7.6 - Conditionally approving labor transactions with automation script and escalation
0.379949
0
0
905
41,357,580
2016-12-28T07:43:00.000
0
1
0
0
python,eclipse,selenium,automated-tests
41,359,710
1
false
0
0
Which unit test framework are you using, and why are you running from Eclipse? I mean, it's fine for testing, but eventually you will have to integrate it with Jenkins or other software, so can you try running from the command line and see what's happening? By the way, what error are you getting?
1
0
0
I'm using eclipse program to run selenium python, but there is an issue that when I run over 1000 TCs in one times, only 1000 first TC have test result. If I separate these TCs to many parts with each part is less than 1000 TC, the test result is received completely. I think the issue is not from coding, how can I fix this ? :(
Cannot get test result if running over 1000 Test cases in selenium python
0
0
1
40
41,359,091
2016-12-28T09:29:00.000
0
0
1
0
python,pip,fedora,requirements.txt
41,496,794
3
false
0
0
Since we're discussing options - I'll throw in one that hasn't been discussed. Docker containers. You can install a base image with the OS you want, and then the Docker file will install all the dependencies you need. It can also pip install the requirements. This keeps the server clean of any installations it doesn't need, and versioning is easy since you'll just have a new container with new code/dependencies without overlapping with the old version.
2
13
0
So I have this project with many dependencies that are being installed from pip and are documented in requirements.txt I need to add another dependency now that doesn't exist on pip and I have it as an RPM in some address. What is the most Pythonic way to install it as a requirement? Thanks! The code will run on RHEL and Fedora
Add module from RPM as a requirement
0
0
0
1,375
41,359,091
2016-12-28T09:29:00.000
7
0
1
0
python,pip,fedora,requirements.txt
41,404,348
3
true
0
0
In this case, the Pythonic thing to do would be to simply fail if a dependency cannot be met. It's okay, and your users will appreciate a helpful error if they haven't satisfied a prerequisite for the installation. Consider the numerous Python packages out there with C library dependencies in order to be built and installed correctly. In your project, still declare all of your Python dependencies in your "setup.py" and "requirements.txt" files, but there's nothing in the Python packaging toolchain which will install an RPM for you (nor should it!), so just stop there and let the install fail if an RPM wasn't installed. Aside from that, you may want to consider packaging your Python application itself as an RPM. You have RPM dependencies, and your target platform is Fedora/RHEL. By packaging your application as an RPM, you can declare dependencies on other RPMs which automates installation of those required packages. Worry about being Pythonic within the build phase of your RPM, and use RPM magic to do the rest. I recommend against the use of configuration management tools (Puppet, Ansible, etc.), as they will overly complicate your build process. Those tools are great for their intended uses, but here it would be like using a cannon to swat a fly.
2
13
0
So I have this project with many dependencies that are being installed from pip and are documented in requirements.txt I need to add another dependency now that doesn't exist on pip and I have it as an RPM in some address. What is the most Pythonic way to install it as a requirement? Thanks! The code will run on RHEL and Fedora
Add module from RPM as a requirement
1.2
0
0
1,375
41,360,265
2016-12-28T10:39:00.000
5
0
0
0
python,pandas
41,402,234
1
true
0
0
@JohnGalt posted an answer to this in the comments. Thanks a lot. I just wanted to put the answer here in case people are looking for similar information in the future: df.shift(1) followed by df.loc[0] = new_row. df.shift(n) will shift the rows down n times, filling the first n rows with NaN and discarding the values of the last n rows. The number of rows of df will not change with df.shift. I hope this is helpful.
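A small runnable sketch of that push operation on a toy 5-row buffer (the question keeps 500 rows; the column name is made up):

```python
import pandas as pd

# toy rolling buffer of 5 rows
df = pd.DataFrame({"price": [10, 11, 12, 13, 14]})

new_price = 99

df = df.shift(1)                 # values move down one row; the oldest value is discarded,
                                 # row 0 becomes NaN, the row count stays the same
df.loc[0, "price"] = new_price   # push the newest value into row 0
print(df)
```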
1
1
1
I need to maintain a Pandas dataframe with 500 rows, and as the next row becomes available I want to push that new row in and throw out the oldest row from the dataframe. e.g. Let's say I maintain row 0 as newest, and row 500 as oldest. When I get a new data, I would push data to row 0, and it will shift row 0 to row 1, and so on until it pushes row 499 to row 500 (and row 500 gets deleted). Is there a way to do such a FIFO operation on Pandas? Thanks guys!
How to do a FIFO push-operation for rows on Pandas dataframe in Python?
1.2
0
0
2,326
41,361,080
2016-12-28T11:25:00.000
1
0
0
0
python,pyspark,collaborative-filtering
41,366,334
1
true
0
0
It is not necessary (for implicit) and shouldn't be done (for explicit), so in this case pass only the data you actually have.
1
1
1
I`m trying to make a recommender system based on purchase history using trainImplicit. My input is in domain [1, +inf) (the sum of views and purchases). So the element of my input RDD looks like this: [(user_id,item_id),rating] --> [(123,5564),6] - the user(id = 123) interacted with the item(id=5564) 6 times. Should I add to my RDD elements such as [(user_id,item_id),rating] --> [(123,2222),0], meaning that given user has never interacted with given item or the ALS.implicitTrain does this implicitly?
Understanding Spark MLlib ALS.trainImplicit input format
1.2
0
0
363
41,361,091
2016-12-28T11:25:00.000
1
0
0
0
python,amazon-web-services,snapshot,boto3,rds
41,363,444
1
false
1
0
This is not possible using the AWS RDS snapshot mechanism, and it isn't possible using the AWS SDK. It is possible using the API for the specific database engine you are using. You would need to specify what database you are using for further help.
1
0
0
I'm trying to save a snapshot of several tables programatically in python, instead of all DB. I couldn't find the API (in boto/boto3) to do that. Is it possible to do?
AWS RDS save snapshot of selected tables
0.197375
1
0
540
41,361,151
2016-12-28T11:29:00.000
2
0
0
0
python,pandas
41,361,442
3
true
0
0
The constructor pd.DataFrame must be called like a function, i.e. followed by parentheses (). As written, you are referring to pd.Dataframes, which doesn't exist (note the capitalisation and the final 's'). The "for x" construction you're using creates a sequence; in this form you can't assign it to the variable x. Instead, enclose everything right of the equal sign '=' in () or []. It's usually not a good idea to use the same variable x both on the left hand side and on the right hand side of the assignment, although it won't give you a language error (but possibly much confusion!). To connect the names in dfnames to the dataframes, use e.g. a dict: dataFrames = {name: pd.DataFrame() for name in dfnames}
2
2
1
I am trying to create multiple empty pandas dataframes in the following way: dfnames = ['df0', 'df1', 'df2'] x = pd.Dataframes for x in dfnames The above mentionned line returns error syntax. What would be the correct way to create the dataframes?
python pandas: create multiple empty dataframes
1.2
0
0
3,447
41,361,151
2016-12-28T11:29:00.000
0
0
0
0
python,pandas
41,361,512
3
false
0
0
You can't have many data frames within a single variable name; here you are trying to save all the empty data frames in x. Plus, you are using the wrong attribute name: it is pd.DataFrame and not pd.Dataframes, and it must be called with parentheses to actually create a frame. This works: dfnames = ['df0', 'df1', 'df2'] followed by x = [pd.DataFrame() for name in dfnames]
2
2
1
I am trying to create multiple empty pandas dataframes in the following way: dfnames = ['df0', 'df1', 'df2'] x = pd.Dataframes for x in dfnames The above mentionned line returns error syntax. What would be the correct way to create the dataframes?
python pandas: create multiple empty dataframes
0
0
0
3,447