Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
40,613,427 | 2016-11-15T15:17:00.000 | 1 | 1 | 1 | 0 | python,cron,aws-lambda,amazon-cloudwatch,cloudwatch | 40,633,940 | 1 | true | 0 | 0 | You could just have a single CloudWatch alarm trigger hourly and the Lambda function could check to see if any scheduled tasks need to run. If any evaluate to true then execute them. | 1 | 1 | 0 | I wrote 4 Lambda functions in Python: one stops my dev instances at 19:00 (7 p.m.) from Monday to Friday, the second starts them at 7 a.m. from Monday to Friday, the third stops the test instances at 17:00 (5 p.m.) every day of the week, and the fourth starts them at 8 a.m. every day of the week.
I put CloudWatch as a trigger and I created a rule with the cron expression for each one of them.
I wonder if there is any way to put them all together in only one function and specify the cron expressions in Python inside the function. If that's possible, what would be the trigger here? Thank you.
Edited by: Cloudgls on Nov 16, 2016 12:29 PM | AWS Lambda function with multiple cron expressions | 1.2 | 0 | 0 | 353 |
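A minimal sketch of the single-function approach from the answer: one hourly CloudWatch-triggered Lambda (e.g. cron(0 * * * ? *)) that dispatches on the current time. The instance IDs are hypothetical, and the hour checks assume the schedule is expressed in the timezone the Lambda sees (UTC by default):

```python
import datetime
import boto3  # available by default in the AWS Lambda Python runtime

DEV_IDS = ['i-0dev11111', 'i-0dev22222']    # hypothetical instance IDs
TEST_IDS = ['i-0test1111']

def handler(event, context):
    ec2 = boto3.client('ec2')
    now = datetime.datetime.utcnow()
    is_weekday = now.weekday() < 5          # Monday (0) to Friday (4)
    if is_weekday and now.hour == 19:
        ec2.stop_instances(InstanceIds=DEV_IDS)
    if is_weekday and now.hour == 7:
        ec2.start_instances(InstanceIds=DEV_IDS)
    if now.hour == 17:                      # test instances: every day
        ec2.stop_instances(InstanceIds=TEST_IDS)
    if now.hour == 8:
        ec2.start_instances(InstanceIds=TEST_IDS)
```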
40,615,878 | 2016-11-15T17:16:00.000 | 1 | 0 | 0 | 0 | python,django,django-models | 40,617,047 | 1 | true | 1 | 0 | Based on what you're describing you should probably be setting up parallel stacks and using either your DNS, Apache, your whatever your HTTP routing tech of choice is to do the separation.
Use a separate database, possibly even a separate server (or WSGI configuration), and keep your code clean.
Creating duplicate "models" based on the value of a field, as you're describing, violates the DRY principle. | 1 | 0 | 0 | So I have a Django site that works perfectly and displays everything I want it to in the US. It automatically displays the data from the US data model.
What I want to be able to do is basically have an exact clone of my site, maybe under like mysite.com/canada for example, that displays the data from canada.
One approach was for me to just add all the data into the database and add a field that says which country it's from, but I'd rather have each country's data in a completely different model.
With pure HTML/CSS this would be easy: I would just copy the entire site directory into a sub-directory and that would be it for that country. I was wondering if there is something similar I can do with Django. | How to display data depending on country in Django | 1.2 | 0 | 0 | 24 |
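If you do keep a single codebase rather than the parallel stacks recommended in the answer, a hedged sketch of the per-country-model layout the question describes: an abstract base model keeps the shared fields defined once while each country still gets its own table (all model names here are made up):

```python
from django.db import models

class PlaceBase(models.Model):
    """Shared fields live here; no table is created for an abstract model."""
    name = models.CharField(max_length=100)

    class Meta:
        abstract = True

class USPlace(PlaceBase):
    pass        # gets its own table for US data

class CanadaPlace(PlaceBase):
    pass        # separate table for Canadian data
```

A view for mysite.com/canada would then simply query CanadaPlace instead of USPlace.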
40,616,036 | 2016-11-15T17:24:00.000 | 0 | 0 | 0 | 0 | python,mysql,django | 40,616,612 | 1 | false | 1 | 0 | SHORT ANSWER: Yes.
MEDIUM ANSWER: Yes. But you will have to figure out how Django would have created the table, and do it by hand. That's not terribly hard.
Django may also spit out some warnings on startup about migrations being needed...but those are warnings, and if the app works, then you're OK.
LONG ANSWER: Yes. But for the sake of your sanity and sleep quality, get a completely separate development environment and test your backups. (But you knew that already.) | 1 | 0 | 0 | I'm working on a project that I inherited, and I want to add a table to my database that is very similar to one that already exists. Basically, we have a table to log users for our website, and I want to create a second table to specifically log users that our site fails to do a task for.
Since I didn't write the site myself, and am pretty new to both SQL and Django, I'm wary of running a migration (we have a lot of really sensitive data that I'm paranoid about wiping).
Instead of having a Django migration create the table itself, can I create the second table in MySQL, and the corresponding model in Django, and then have this model "recognize" the SQL table without explicitly using a migration? | Do I Need to Migrate to Link my Database to Django | 0 | 1 | 0 | 38 |
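A hedged sketch of the "create the table by hand, then map it" approach: managed = False tells Django never to create, alter, or drop the table, so no migration will touch it. The field names below are assumptions and must mirror the hand-created MySQL table exactly:

```python
from django.db import models

class FailedUserLog(models.Model):
    # Hypothetical fields -- match your hand-created table column for column.
    user_id = models.IntegerField()
    created = models.DateTimeField()

    class Meta:
        managed = False                 # Django leaves this table alone
        db_table = 'failed_user_log'    # the table you created in MySQL
```

To see the SQL Django would have run for a migration without executing anything, python manage.py sqlmigrate <app> <migration> just prints it.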
40,617,544 | 2016-11-15T18:53:00.000 | 1 | 0 | 0 | 0 | python,gtk3 | 40,634,755 | 1 | false | 0 | 1 | Have you tried EntryCompletion.set_inline_completion(True) ?
This may not be exactly what you were looking for as it will not select the complete first match. However, if you type far enough (to only have one choice), you can press Enter to autocomplete the rest.
Tell me your thoughts on this and/or more details on what you are trying to do. Maybe there is another way to achieve the same functionality. | 1 | 0 | 0 | Using Python 2.7, Ubuntu 16.04, Gtk3 (gi.repository). I have an Entry with an associated EntryCompletion and a ListStore. I would like to let the user autoselect the first result when pressing the Enter/Intro/Return key, without having to use the arrow keys to select an item, and then press Enter. How could this be done? | Python GTK EntryCompletion select first result when Intro | 0.197375 | 0 | 0 | 163 |
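A minimal sketch of the suggestion, using standard PyGObject calls (the word list is made up):

```python
from gi.repository import Gtk

entry = Gtk.Entry()
completion = Gtk.EntryCompletion()
store = Gtk.ListStore(str)
for word in ["apple", "apricot", "banana"]:
    store.append([word])
completion.set_model(store)
completion.set_text_column(0)
completion.set_inline_completion(True)  # completes the common prefix in-place
entry.set_completion(completion)
```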
40,618,186 | 2016-11-15T19:31:00.000 | 0 | 0 | 1 | 0 | python-2.7 | 40,618,331 | 1 | false | 0 | 0 | Probably you can do:
try:
    import elevation
except ImportError:
    import pip
    import sys
    pip.main(['install', '--target=%s' % self.elevation_dir, 'elevation'])
    if self.elevation_dir not in sys.path:
        sys.path.append(self.elevation_dir)
But normally, this is an installation issue!
Such issues should not be handled in the source code...
The pythonic way is defining your dependencies properly inside the setup.py of your package using the install_requires configuration. | 1 | 0 | 0 | My problem is as follows:
I'm working on a plugin for the QGIS desktop application at the moment. In my Python code I use the enum34 and pyproj modules. For the implementation phase I installed the modules in my Python environment.
For testing the functionality I set up a VM based on the target environment of the plugin. As far as I know, there will be no possibility to install modules in the future environment of the plugin. The Python environment which is installed with the installation of QGIS doesn't include the necessary modules.
I already read about appending it to sys.path but it won't work...
So, how can I import a module without installing it? Or is it essential to build the .egg for it to work? | (Python 2.7.5) How to use a python module without installing it | 0 | 0 | 0 | 314 |
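For completeness, a minimal setup.py sketch of the install_requires approach mentioned at the end of the answer (the package name is hypothetical):

```python
from setuptools import setup

setup(
    name='my_qgis_plugin',            # hypothetical package name
    version='0.1',
    packages=['my_qgis_plugin'],
    install_requires=['enum34', 'pyproj'],  # installed automatically with the package
)
```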
40,620,021 | 2016-11-15T21:31:00.000 | 1 | 0 | 0 | 0 | python,google-maps | 48,936,468 | 2 | false | 0 | 0 | An answer that worked for me was to use Google Earth and load a position file using its and styles. Use this to refresh a KML file containing the position. Then when a new position is received, overwrite the KML file with the new position. Google Earth will then refresh the GE display. The position file can contain a series of coordinate statements and thus create a trail of recent positions. | 1 | 1 | 0 | I would like to show the track of a moving vehicle on a map in real time. The track information would be provide by an external GPS receiver, and I can convert NMEA sentences to KML or other text based formats.
I have previously created Python scripts to generate KML files (and others) for use with Google Earth and other map software but of course these are not real-time.
So far I have found many solutions for non-real-time display on Google Earth, but not real time. Also, real-time using web or cloud facilities, but my need is for an entirely local solution.
Any advice gratefully received! | Use Python to put a track on a map | 0.099668 | 0 | 0 | 2,219 |
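A minimal sketch of the local KML-overwrite approach from the answer: each new GPS fix rewrites the file that Google Earth re-reads via a refreshing NetworkLink (the template is trimmed to a single placemark):

```python
KML_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>Vehicle</name>
    <Point><coordinates>{lon},{lat},0</coordinates></Point>
  </Placemark>
</kml>"""

def write_position(path, lat, lon):
    # Overwrite in place; the NetworkLink in Google Earth picks up the change.
    with open(path, 'w') as f:
        f.write(KML_TEMPLATE.format(lat=lat, lon=lon))
```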
40,625,749 | 2016-11-16T06:51:00.000 | 2 | 0 | 1 | 0 | python,triggers,camera,ethernet,acquisition | 41,272,947 | 2 | true | 0 | 0 | I'm not very familiar with the Advantage series, but I am quite familiar with the other In-Sight cameras. I'm going to assume the Advantage is similar to other In-Sight cameras.
You should be able to achieve a trigger from python by opening a telnet connection to the camera (on port 23), logging in (default username: admin, blank password), and sending the command 'SE8'. The camera trigger mode must be set to External, Manual or Network. If the command is successful, it will respond with a '1'. I'd suggest trying this with a telnet client before trying it in python. Suggested telnet clients: Putty or Hercules.
More information can be found in the In-Sight Explorer help file. From the contents, go to 'Communications Reference -> Native Mode Communications'. | 1 | 1 | 0 | I have got a Cognex Advantage 100 camera connected to my PC via ethernet.
After pressing F5 in the inSight Explorer to trigger the camera I can use the captured image in a Python script.
Can I make the Python script trigger the image capture itself? | Trigger a Cognex camera by script | 1.2 | 0 | 0 | 3,446 |
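A hedged sketch of the telnet trigger described in the answer, using the standard-library telnetlib. The camera IP and the exact prompt strings are assumptions; verify them with Putty or Hercules first, as the answer suggests:

```python
import telnetlib

tn = telnetlib.Telnet('192.168.0.10', 23)   # camera IP is an assumption
tn.read_until('User: ')
tn.write('admin\r\n')                       # default username
tn.read_until('Password: ')
tn.write('\r\n')                            # default password is blank
tn.read_until('Logged In')                  # prompt text is an assumption
tn.write('SE8\r\n')                         # trigger an acquisition
print(tn.read_some())                       # '1' indicates success
```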
40,626,429 | 2016-11-16T07:37:00.000 | 31 | 0 | 1 | 0 | python,visual-studio-code,vscode-settings,pylint | 56,183,059 | 5 | false | 0 | 0 | If you just want to disable pylint then the updated VSCode makes it much more easier.
Just hit CTRL + SHIFT + P > Select linter > Disabled Linter.
Hope this helps future readers. | 2 | 32 | 0 | Simple question - but any steps on how to remove pylint from a Windows 10 machine with Python 3.5.2 installed.
I got an old version of pylint installed that's spellchecking on old Python 2 semantics and it's bugging the heck out of me when the squigglies show up in Visual Studio Code. | Visual Studio Code - removing pylint | 1 | 0 | 0 | 42,819 |
40,626,429 | 2016-11-16T07:37:00.000 | 0 | 0 | 1 | 0 | python,visual-studio-code,vscode-settings,pylint | 72,494,136 | 5 | false | 0 | 0 | i had this problem but it was fixed with this solution CTRL + SHIFT + P > Selecionar linter > Linter desabilitado. | 2 | 32 | 0 | Simple question - but any steps on how to remove pylint from a Windows 10 machine with Python 3.5.2 installed.
I got an old version of pylint installed that's spellchecking on old Python 2 semantics and it's bugging the heck out of me when the squigglies show up in Visual Studio Code. | Visual Studio Code - removing pylint | 0 | 0 | 0 | 42,819 |
40,627,395 | 2016-11-16T08:38:00.000 | 0 | 1 | 0 | 0 | java,python,amazon-web-services,amazon-s3,aws-lambda | 40,632,303 | 3 | false | 1 | 0 | Lambda would not be a good fit for the actual processing of the files for the reasons mentioned by other posters. However, since it integrates with S3 events it could be used as a trigger for something else. It could send a message to SQS where another process that runs on EC2 (ECS, ElasticBeanstalk, ECS) could handle the messages in the queue and then process the files from S3. | 1 | 0 | 0 | I have an s3 bucket which is used for users to upload zipped directories, often 1GB in size. The zipped directory holdes images in subfolders and more.
I need to create a Lambda function that will get triggered upon new uploads, unzip the file, and upload the unzipped content back to an S3 bucket so I can access the individual files via HTTP - but I'm pretty clueless as to how I can write such a function.
My concerns are:
Python or Java probably offers better performance than Node.js?
Avoid running out of memory when unzipping files of a GB or more (can I stream the content back to S3?) | AWS lambda function to retrieve any uploaded files from s3 and upload the unzipped folder back to s3 again | 0 | 0 | 0 | 1,257 |
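A minimal sketch of the trigger-only Lambda the answer describes: it just forwards each S3 event to SQS, leaving the memory-heavy unzipping to a worker on EC2 (the queue URL is a placeholder):

```python
import json
import boto3

QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/123456789012/unzip-jobs'  # placeholder

def handler(event, context):
    sqs = boto3.client('sqs')
    for record in event['Records']:                  # one per S3 object created
        bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']
        sqs.send_message(QueueUrl=QUEUE_URL,
                         MessageBody=json.dumps({'bucket': bucket, 'key': key}))
```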
40,628,945 | 2016-11-16T09:54:00.000 | 1 | 0 | 1 | 1 | macos,python-2.7,build,intel | 41,964,143 | 1 | false | 0 | 0 | You could edit the line in getcompiler.c that it is complaining about:
e.g. to
return "[Intel compiler]";
If you wanted to get fancier you could add in the compiler version, using e.g. the __INTEL_COMPILER macro. | 1 | 1 | 0 | I've been trying to build Python from source on my mac with the Intel compiler suite (Intel Parallel Studio) and link it against Intel's MKL.
The reason for that is that I want to use exactly the same environment on my Mac for developing Python code as on our Linux cluster.
As long as I am not telling the configure script to use Intel's parallel studio, Python builds fine (configure and make: ./configure --with(out)-gcc). But as soon as I include --with-icc, or if I set the appropriate environment variables, mentioned in ./configure --help, to the Intel compilers and linkers, make fails with:
icc -c -fno-strict-aliasing -fp-model strict -g -O2 -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -IInclude -I./Include -DPy_BUILD_CORE -o Python/getcompiler.o Python/getcompiler.c
Python/getcompiler.c(27): error: expected a ";"
return COMPILER;
^
compilation aborted for Python/getcompiler.c (code 2)
make: *** [Python/getcompiler.o] Error 2
I've searched everywhere, but nobody seems to be interested in building Python on a mac with intel compilers, or I am the only one who has problems with it. I've also configured my environment according to Intel's instructions: source /opt/intel/bin/compilervars.sh intel64, in ~/.bash_profile.
In any case, my environment is:
OS X 10.11.6
Xcode 8.1 / Build version 8B62
Intel Parallel Studio XE 2017.0.036 (C/C++, Fortran)
Thanks,
François | Build Python 2.7.12 on a Mac with Intel compiler | 0.197375 | 0 | 0 | 109 |
40,629,548 | 2016-11-16T10:19:00.000 | 0 | 0 | 0 | 0 | python,rest,curl | 40,684,339 | 1 | false | 1 | 0 | I am nominally embarrassed. The issue was NOT the python code at all, it was within Curl. So I both switched to HTTPie and changed the format to Schema=LONGSCHEMANAME
All of my tests started working so clearly I was not specifying the right string in curl. The -d option was beating me. So I apologize for wasting time. Thanks | 1 | 0 | 0 | I am clearly confused but not sure if I am screwing up the code, or curl
I would like to use a rest api to pass a schemaname, a queryname, and a number of rows. I've written the python code using a simple -s schemaname -q queryname -r rows structure. Thats seems easy enough. But I am having trouble finding a good example of passing multiple arguments in a restapi. No matter which version of the todos example I choose as a model, I just cannot figure out how to extend for the second and 3rd argument. If it uses a different structure (JSON) for input, I am fine. The only requirement is that it run from CURL. I can find examples of passing lists, but not multiple arguments.
If there is a code example that does it and i have missed it, please send me along. As long as it has a curl example I am good.
Thank you | Passing and receiving multiple arguments with Python in Flask RestAPI | 0 | 0 | 1 | 171 |
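A minimal Flask sketch of reading several arguments from the query string, which keeps the curl side trivial (the parameter names mirror the -s/-q/-r options from the question):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route('/query')
def run_query():
    schema = request.args.get('schema')
    query = request.args.get('query')
    rows = request.args.get('rows', default=10, type=int)
    return jsonify(schema=schema, query=query, rows=rows)

# curl "http://localhost:5000/query?schema=LONGSCHEMANAME&query=q1&rows=5"
```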
40,630,506 | 2016-11-16T11:03:00.000 | -2 | 0 | 0 | 0 | python,pandas | 62,375,979 | 3 | false | 0 | 0 | Just adding list() should help but the structure isn't super useful in its raw form. It returns a list of tuples with index. | 1 | 11 | 1 | Can we check the data in a pandas.core.groupby.SeriesGroupBy object? | Can we see the group data in pandas.core.groupby.SeriesGroupBy object | -0.132549 | 0 | 0 | 12,663 |
40,630,602 | 2016-11-16T11:08:00.000 | 1 | 1 | 0 | 1 | python,linux,bash,ssh | 40,630,667 | 1 | false | 0 | 0 | The proper way of doing this is using a configuration management solution, like ansible, You can use become directive to run script/command with sudo privileges from a remote client.
If there is only one box, or you need scheduling, you can use /etc/crontab and run it as the root user at the desired interval. | 1 | 1 | 0 | On a remote server I can execute a certain command only via "sudo". The authentication is done via a certificate. So when I log in via SSH from a bash script, Python, Ruby or something else, how can I execute a certain command with sudo? | How can I execute a command with "sudo" in a python/bash/ruby script via SSH? | 0.197375 | 0 | 0 | 210 |
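A hedged sketch of doing it from Python without a configuration-management tool: shell out to ssh and prefix the remote command with sudo. The key path and command are placeholders, and sudo -n assumes passwordless sudo is configured on the server for that command:

```python
import subprocess

output = subprocess.check_output(
    ['ssh', '-i', '/path/to/key.pem', 'user@remote-host',
     'sudo -n /usr/sbin/privileged-command'])
print(output)
```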
40,631,040 | 2016-11-16T11:31:00.000 | 0 | 0 | 1 | 0 | python,machine-learning,nlp,ocr | 42,022,673 | 1 | false | 0 | 0 | You could build extraction zones to fetch this content.
In other words, group documents that have the required content within a given area in the image and then fetch the contents from that area for all images. | 1 | 1 | 0 | I'm trying to apply NLP to an OCR document. To extract named entities, how can I use features like position of the word in the document?
For example, given a health report, I need to extract the chemical terms from a particular area of the report and ignore their occurrences elsewhere. Can I define a position feature for this in terms of {top: x, left: y} values?
Are there any sklearn libraries? | NLP: Position feature of a word in an OCR of a document | 0 | 0 | 0 | 393 |
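One hedged way to encode such a feature with sklearn is DictVectorizer, which one-hot encodes string features and passes numeric ones (like top/left) through unchanged; the token dicts below are made up:

```python
from sklearn.feature_extraction import DictVectorizer

# One dict per token: OCR coordinates become ordinary numeric features.
tokens = [
    {'word': 'paracetamol', 'top': 120, 'left': 40},
    {'word': 'patient', 'top': 30, 'left': 10},
]
vec = DictVectorizer()
X = vec.fit_transform(tokens)   # strings one-hot encoded, numbers kept as-is
print(vec.get_feature_names())
```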
40,632,110 | 2016-11-16T12:23:00.000 | 0 | 0 | 0 | 0 | python,csv,jenkins,hudson,store | 40,676,992 | 1 | true | 0 | 0 | I ran the windows command line to execute the python script and it worked. | 1 | 0 | 1 | I have a python script that creates a csv file and I have a Hudson job that builds every day. I want to store the csv file as an artifact but when I start the job nothing is stored or created. The build is successful and all tests are done but no file is created.
Do you know why this happens? | Store a csv file created by a python script on Hudson | 1.2 | 0 | 0 | 42 |
40,633,399 | 2016-11-16T13:28:00.000 | 0 | 0 | 0 | 0 | python-2.7 | 40,635,363 | 1 | false | 0 | 0 | Finally, I found what was the problem. I was passing the same contour to every step ([l1[0][2]]), so I need to change to operate with index, like [l1[i][2]]).
Sorry for that mistake and for your patience.
Have a good day, Jaime | 1 | 0 | 1 | I am having trouble filling a mask using cv2.drawContours, I dont know what could it be, or how to fix it.
I have a list with three elements's sublists [coordX,coordY, contour], so for every pair of coordinates, there is an if/else decision, each one with this stament: cv2.drawContours(mask,[l1[0][2]],0,c1[cla],-1)
[l1[0][2]]: contour, like array([[[437, 497]],[[436, 498]],[[439, 498]], [[438, 497]]])
c1[cla]: tuple, like (25,250,138)
The script runs well without errors, but the resulting image is almost black, just having 4 green pixels.
Any suggestion or advice? | why python cv2.drawContours() just draw the first contours? | 0 | 0 | 0 | 92 |
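A sketch of the corrected loop from the self-answer: the key change is drawing the contour for the current index instead of always drawing l1[0][2] (l1, mask and c1 are the question's variables; the class decision is a stand-in):

```python
import cv2

for x, y, contour in l1:               # unpack each [coordX, coordY, contour]
    cla = 0 if x < 300 else 1          # stand-in for the question's if/else
    cv2.drawContours(mask, [contour], 0, c1[cla], -1)
```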
40,633,748 | 2016-11-16T13:45:00.000 | 0 | 0 | 1 | 0 | string,visual-studio,python-3.x,split | 40,672,518 | 1 | true | 0 | 0 | The changes were not introduced by the interpreter but rather by a prankster in the network, I debuuged him. So no problem. | 1 | 0 | 0 | I'm using lxml to extract a long string value.
I pass this value to a function located in another module that does string.split('|').
When I got the list back, its len() was 0.
The problem was somehow the pipe was interpreted as '\n\t\t\t\t'.
When I do string.split('\n\t\t\t\t'), problem solved, crisis averted.
I know it's a representation of escape sequences.
But why?
I know I didn't voluntarily change this in any of my code.
!!! EDIT !!!
Sorry for the trouble, someone kept editing my XML file from the network...
I guess they thought it was funny... | string.split() value passed to python3 module interprets '|' like \n\t\t\t\t. Why? | 1.2 | 0 | 1 | 48 |
40,637,774 | 2016-11-16T16:51:00.000 | 0 | 0 | 1 | 0 | python,numpy | 63,427,934 | 3 | false | 0 | 0 | Just use python -m before pip command, so that you won't get any problem while doing so in IDLE..
ex. python -m pip install scipy/numpy | 2 | 3 | 1 | I have python3.5.2 installed on a windows10 machine(Adding into the pythonpath is included in the setup with new python). I ,then, installed the Anaconda(4.2.0) version. At the command prompt when i run the python interpreter and import numpy it works fine. But when i save it as a script and try running from the IDLE, it gives
Traceback (most recent call last):
File "C:\Users\pramesh\Desktop\datascience code\test.py", line 1, in <module>
from numpy import *
ImportError: No module named 'numpy'
I don't know what the problem is. I donot have any other python version installed. | import numpy not working in IDLE | 0 | 0 | 0 | 6,809 |
40,637,774 | 2016-11-16T16:51:00.000 | 2 | 0 | 1 | 0 | python,numpy | 40,638,108 | 3 | false | 0 | 0 | You do have two versions of python installed: the CPython 3.5.2 distribution you mention first, and the Anaconda 4.2.0 Python distribution you then mention. Anaconda packages a large number of 3rd party packages, including Numpy. However, the CPython 3.5.2 installation available on python.org only ships with the standard library.
These two python installs have separate package installations, so having Anaconda's numpy available doesn't make it available for the CPython install. Since you're starting the Idle with shipped with CPython, which doesn't have numpy, you're seeing this error. You have two options:
Install numpy for CPython. See numpy documentation for details on how to do this, but it may be difficult.
Use the version of Idle included with Anaconda. This should be available in the Anaconda programs folder. | 2 | 3 | 1 | I have python3.5.2 installed on a windows10 machine(Adding into the pythonpath is included in the setup with new python). I ,then, installed the Anaconda(4.2.0) version. At the command prompt when i run the python interpreter and import numpy it works fine. But when i save it as a script and try running from the IDLE, it gives
Traceback (most recent call last):
File "C:\Users\pramesh\Desktop\datascience code\test.py", line 1, in <module>
from numpy import *
ImportError: No module named 'numpy'
I don't know what the problem is. I donot have any other python version installed. | import numpy not working in IDLE | 0.132549 | 0 | 0 | 6,809 |
40,642,132 | 2016-11-16T20:56:00.000 | 1 | 0 | 0 | 0 | python,machine-learning,statistics,cross-validation,data-science | 40,642,637 | 1 | true | 0 | 0 | You have different ways to create an ensemble using your weak classifiers:
-Bagging: You can average the output of the 4 classifiers.
-Stacking: Your final output could be a linear combination of the 4 individual outputs. You can use the output of your 4 models as the input of another algorithm, or you can directly use different weights, e.g. choosing larger weights for the models with better accuracy. | 1 | 1 | 1 | I have implemented 4 classifiers using scikit-learn in Python. But the performance of all of them is not very good. I want to implement an ensemble of these classifiers. I looked up the ensembles in scikit-learn, but it has Random Forests and AdaBoost. How should I create an ensemble of my weak classifiers? | Ensembles in Python | 1.2 | 0 | 0 | 513 |
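A minimal sketch of the voting/stacking idea with scikit-learn's built-in VotingClassifier; the four estimators are placeholders for your actual classifiers, and X_train/y_train stand for your existing data:

```python
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

ensemble = VotingClassifier(estimators=[
    ('lr', LogisticRegression()),
    ('nb', GaussianNB()),
    ('knn', KNeighborsClassifier()),
    ('dt', DecisionTreeClassifier()),
], voting='soft')                  # 'soft' averages predicted probabilities
ensemble.fit(X_train, y_train)     # X_train/y_train: your existing data
print(ensemble.score(X_test, y_test))
```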
40,644,986 | 2016-11-17T00:51:00.000 | 0 | 1 | 0 | 0 | android,python,git,android-studio,appium | 57,672,871 | 1 | false | 0 | 0 | I put my autotests in separate repository just to have them safe from deleting from my work computer or something else.
When I'm sure that my tests are stable I clone dev branch of the main project and making a pull request including my tests. When my request is merged we have a project repository with autotests.
Don't know any better ways, but for me it works great and very comfortable. | 1 | 2 | 0 | I have some Appium testing scripts that needed to be put through a repository for version control, mainly Git.
I looked through Google to figure out the best way to go about this if you have an Android app project in Android Studio that you're writing the tests for (which happens to be in its own Git repository), and so far I haven't found anything in my search.
My question is: Would it be better to include the test scripts inside the Android Studio project in its Git repository, or would it be better to put the test scripts in their own repository? If putting the scripts in the Android project is better, where in the project's file structure should I include the test scripts?
Any input is greatly appreciated. | Best Practice for putting Appium Python UI test scripts in repository | 0 | 0 | 0 | 181 |
40,645,002 | 2016-11-17T00:54:00.000 | 1 | 0 | 0 | 1 | python,python-2.7,subprocess | 40,645,069 | 1 | true | 0 | 0 | The problem was that swipl is under /opt/local/bin/ and Intellij was running in a virtual environment. Changing the python interpreter under configurations seemed to solve it. | 1 | 0 | 0 | I am trying to execute result_b = subprocess.check_output(['swipl'])
where swipl is the name of a process. I constantly get the 'No such file or directory' error.
However, if I execute that same statement within the Python interpreter, it works. What's going on here? Both are running in the same directory and on the same version. I tried all the things mentioned in other Stack Overflow posts, but to no avail. Is this some kind of $PATH problem?
result_b = subprocess.check_output(['ls']) does seem to work. | Intellij Subprocess: No such file or directory | 1.2 | 0 | 0 | 96 |
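A sketch of the two usual fixes once the PATH turns out to be the culprit: either extend PATH for the child process, or bypass it with an absolute path (--version is used so swipl exits instead of starting its interactive prompt):

```python
import os
import subprocess

env = dict(os.environ)
env['PATH'] = '/opt/local/bin:' + env.get('PATH', '')
result_b = subprocess.check_output(['swipl', '--version'], env=env)

# or bypass PATH entirely:
result_b = subprocess.check_output(['/opt/local/bin/swipl', '--version'])
```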
40,645,689 | 2016-11-17T02:22:00.000 | 0 | 0 | 0 | 0 | python,r,statistics,regression | 41,656,919 | 1 | false | 0 | 0 | how plm and xtreg,fedo behind the screen are well documented in their instructions, I didn't read them through but I think that when you run pgmm or some thing like logit regression in plm,the algorithm actually is maximum likelihood given the nature of these problems.
and it would be interesting to write all these by yourself. | 1 | 0 | 1 | In a regressions, I know instead of using fixed effects (say on country or year), you can specify these effects with dummy variables and this is a perfectly legitimate method. However, when the number of effects (e.g. countries) is large, then it becomes very computational difficult. Instead you would run the fixed effects model. I am curious what R's plm or Stata's xtreg,fe does behind the scenes. Specifically, I wanted to try custom rolling my own panel regression...looking for some likelihood function (or way to condition the likelihood) I can throw into optim and have some fun. Ideas? | Custom roll fix effects in regressions | 0 | 0 | 0 | 63 |
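For the "custom roll" part: the within (fixed-effects) estimator can be written in a few lines by demeaning within each entity and running pooled OLS on the demeaned data — a minimal sketch with made-up numbers:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'id': [1, 1, 2, 2],
                   'x': [1., 2., 3., 5.],
                   'y': [2., 3., 7., 11.]})
# Demean y and x within each entity, then pooled OLS on the demeaned data.
dm = df.groupby('id').transform(lambda s: s - s.mean())
beta = np.linalg.lstsq(dm[['x']].values, dm['y'].values)[0]
print(beta)   # slope estimate net of the entity fixed effects
```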
40,645,942 | 2016-11-17T02:53:00.000 | 2 | 0 | 1 | 0 | python,spyder | 52,139,511 | 14 | false | 0 | 0 | On mac using Spyder 3.3.1 run from Anaconda.
Cmd + I was not working for me at first to show the object inspector in the right pane for help on a particular function. So I typed Cmd + , (which opens the preferences panel in any Mac app), and went down to "Help" on the left side.
Then, I checked the boxes for "Editor" and "IPython Console" under the description that says
"This pane can automatically show an object's help information after a left parenthesis is written next to it. Below you can decide to which plugin you want to connect it to turn on this feature."
After checking these boxes and pressing OK, Cmd + I still did not work for getting the object information.
I restarted Spyder, closing it and reopening it from Anaconda navigator.
Now Cmd + I works and shows the information for whatever function I click on.
Hope this helps someone. I'm still not quite sure what happened here (since those checkboxes were for the left parenthesis function), but I thought that sharing the steps would be useful to some people.
I have seen videos and tutorials that show the Object Inspector. What is happening? | Spyder missing Object Inspector | 0.028564 | 0 | 0 | 42,012 |
40,645,942 | 2016-11-17T02:53:00.000 | 1 | 0 | 1 | 0 | python,spyder | 49,936,007 | 14 | false | 0 | 0 | After pressing Ctrl+H , a help window will come in that in [Source] dropdown select Console | 12 | 30 | 0 | I just installed Anaconda and running Spyder I cannot find the Object Inspector. Hitting Ctrl+I has no effect and in the View/Panes menu there is no item Object Inspector.
I have seen videos and tutorials that show the Object Inspector. What is happening? | Spyder missing Object Inspector | 0.014285 | 0 | 0 | 42,012 |
40,645,942 | 2016-11-17T02:53:00.000 | 0 | 0 | 1 | 0 | python,spyder | 48,121,484 | 14 | false | 0 | 0 | Note that in Spyder version 3.2.4 under Tools>Preferences>Help>Automatic Connections it clearly now states: "This pane can automatically show an object's help information after a left parenthesis is written next to it. Below you can decide to which plugin you want to connect it to turn on this feature." Then you can select Editor and/or IPython Console.
When I tried this, typing a left parenthesis after the term was the only way I could get the help to bring up an example and a definition.
I have seen videos and tutorials that show the Object Inspector. What is happening? | Spyder missing Object Inspector | 0 | 0 | 0 | 42,012 |
40,645,942 | 2016-11-17T02:53:00.000 | 1 | 0 | 1 | 0 | python,spyder | 47,424,875 | 14 | false | 0 | 0 | One way to go about this is to go to View > Panes > Online Help. Then in the search box insert the module or package like so (sklearn.preprocessing.Imputer) and you will have all the docs related to the package.(**Shortest way: click on package....then Cmd + i )
Alternatively, right clicking the Object in the editor, select Go to Definition
A third way: in your console, type help(<your class here>), like help(Imputer), or just help() to get the interactive help console, then type your package there (sklearn.preprocessing.Imputer).
Hope this helps someone.
I have seen videos and tutorials that show the Object Inspector. What is happening? | Spyder missing Object Inspector | 0.014285 | 0 | 0 | 42,012 |
40,645,942 | 2016-11-17T02:53:00.000 | 0 | 0 | 1 | 0 | python,spyder | 52,231,481 | 14 | false | 0 | 0 | Just left click on the top right corner, beside the close tab of editor and below the working directory tab
I tried it, and it successfully worked. | 12 | 30 | 0 | I just installed Anaconda and running Spyder I cannot find the Object Inspector. Hitting Ctrl+I has no effect and in the View/Panes menu there is no item Object Inspector.
I have seen videos and tutorials that show the Object Inspector. What is happening? | Spyder missing Object Inspector | 0 | 0 | 0 | 42,012 |
40,645,942 | 2016-11-17T02:53:00.000 | 1 | 0 | 1 | 0 | python,spyder | 46,621,237 | 14 | false | 0 | 0 | In Windows, Ctrl+Shift+H worked after making changes to preferences as suggested by Ibrahem | 12 | 30 | 0 | I just installed Anaconda and running Spyder I cannot find the Object Inspector. Hitting Ctrl+I has no effect and in the View/Panes menu there is no item Object Inspector.
I have seen videos and tutorials that show the Object Inspector. What is happening? | Spyder missing Object Inspector | 0.014285 | 0 | 0 | 42,012 |
40,645,942 | 2016-11-17T02:53:00.000 | 0 | 0 | 1 | 0 | python,spyder | 45,386,925 | 14 | false | 0 | 0 | Please check the spelling of your command, if you type wrong spelling it wont display the help | 12 | 30 | 0 | I just installed Anaconda and running Spyder I cannot find the Object Inspector. Hitting Ctrl+I has no effect and in the View/Panes menu there is no item Object Inspector.
I have seen videos and tutorials that show the Object Inspector. What is happening? | Spyder missing Object Inspector | 0 | 0 | 0 | 42,012 |
40,645,942 | 2016-11-17T02:53:00.000 | 3 | 0 | 1 | 0 | python,spyder | 44,524,704 | 14 | false | 0 | 0 | Since they changed "Object Inspector" to "Help", as Jitse Niesen says, they might have changed the shortcut too. In my Mac version the shortcut for "Help" is Shift+Cmd+H so the combination you are looking for is probably Ctrl+H. | 12 | 30 | 0 | I just installed Anaconda and running Spyder I cannot find the Object Inspector. Hitting Ctrl+I has no effect and in the View/Panes menu there is no item Object Inspector.
I have seen videos and tutorials that show the Object Inspector. What is happening? | Spyder missing Object Inspector | 0.042831 | 0 | 0 | 42,012 |
40,645,942 | 2016-11-17T02:53:00.000 | 3 | 0 | 1 | 0 | python,spyder | 40,829,813 | 14 | false | 0 | 0 | I had the same problem. I found the help and then discovered that I got a message saying No Documentation. I tried changing the setting from Rich Text to Plain Text and for some reason that worked and I'm able to use the Object Inspector. | 12 | 30 | 0 | I just installed Anaconda and running Spyder I cannot find the Object Inspector. Hitting Ctrl+I has no effect and in the View/Panes menu there is no item Object Inspector.
I have seen videos and tutorials that show the Object Inspector. What is happening? | Spyder missing Object Inspector | 0.042831 | 0 | 0 | 42,012 |
40,645,942 | 2016-11-17T02:53:00.000 | 37 | 0 | 1 | 0 | python,spyder | 43,257,901 | 14 | false | 0 | 0 | go to preferences > Help and enable the Automatic connections for Editor and restart the Spyder
This worked for me!! | 12 | 30 | 0 | I just installed Anaconda and running Spyder I cannot find the Object Inspector. Hitting Ctrl+I has no effect and in the View/Panes menu there is no item Object Inspector.
I have seen videos and tutorials that show the Object Inspector. What is happening? | Spyder missing Object Inspector | 1 | 0 | 0 | 42,012 |
40,645,942 | 2016-11-17T02:53:00.000 | 6 | 0 | 1 | 0 | python,spyder | 50,115,156 | 14 | false | 0 | 0 | Although it's given in the tutorials but I'll explain.
1) Object Inspector is now known as Help.
2) I'm using Spyder 3.6; go to Tools --> Preferences --> Help --> check Editor under Automatic Connections
3) Select your parameter and Ctrl+I
That'll do it. | 12 | 30 | 0 | I just installed Anaconda and running Spyder I cannot find the Object Inspector. Hitting Ctrl+I has no effect and in the View/Panes menu there is no item Object Inspector.
I have seen videos and tutorials that show the Object Inspector. What is happening? | Spyder missing Object Inspector | 1 | 0 | 0 | 42,012 |
40,645,942 | 2016-11-17T02:53:00.000 | 1 | 0 | 1 | 0 | python,spyder | 46,930,793 | 14 | false | 0 | 0 | Go to preferences->Help and tick the option of showing object info on Editor , then ctrl+I will work with any object | 12 | 30 | 0 | I just installed Anaconda and running Spyder I cannot find the Object Inspector. Hitting Ctrl+I has no effect and in the View/Panes menu there is no item Object Inspector.
I have seen videos and tutorials that show the Object Inspector. What is happening? | Spyder missing Object Inspector | 0.014285 | 0 | 0 | 42,012 |
40,647,363 | 2016-11-17T05:31:00.000 | 0 | 0 | 1 | 1 | python,process,python-multiprocessing | 40,647,910 | 1 | false | 0 | 0 | Once a task is begun by a process in one pool, you can't pause it and add it to another pool. What you could do, however, is have the handler in the first process return a tuple instead of just the value it's computing. The first thing in the tuple could be a boolean representing whether or not the task has finished, and the second thing could be the answer, or partial answer if it's not complete. Then, you could write some additional logic to take any returned values that are marked unfinished, and pass them to the second process, along with the data that's already been computed and returned from the first process.
Unfortunately, this will require you to come up with a way to store partial work, which could be very easy or very hard depending on what you're doing. | 1 | 0 | I want to do some task scheduling using the Python multiprocessing module. I have two pools, p1 and p2, one with high priority and one with low priority. A task is first put into the high-priority pool. If after a certain amount of time, say 10 s, the task still hasn't finished, I will migrate it to the lower-priority pool. The question is: can I migrate the task from one pool to another without wasting the work that has already been done in the first pool? Basically, I want to pause a running subprocess in one pool, add it to another pool, and then resume it. If the second pool is busy, the task will wait until a free slot is available. | Can we migrate processes across different multiprocessing pools in python? | 0 | 0 | 0 | 395 |
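A minimal sketch of the (finished, partial_result) protocol the answer suggests: the worker does a bounded chunk, reports whether it is done, and unfinished state is resubmitted to the low-priority pool (the "work" here is a trivial counter standing in for real computation):

```python
import multiprocessing

def work(state):
    state['progress'] += 1              # one bounded chunk of work
    done = state['progress'] >= 10
    return done, state                  # (finished?, partial result)

if __name__ == '__main__':
    high = multiprocessing.Pool(2)
    low = multiprocessing.Pool(2)
    done, state = high.apply_async(work, ({'progress': 0},)).get()
    while not done:                     # unfinished work migrates to low pool
        done, state = low.apply_async(work, (state,)).get()
```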
40,650,154 | 2016-11-17T08:42:00.000 | 1 | 0 | 0 | 0 | python,selenium,beautifulsoup,aem | 40,650,411 | 1 | true | 1 | 0 | I would recommend Selenium as it provides complete browser interface and is mostly used for automation. Selenium will make it more easy to implement and most importantly maintain. | 1 | 0 | 0 | I'm trying to figure out how to scrape dynamic AEM sign-in forms using python.
The thing is I've been trying to figure out which module would be best to use for a sign-in form field that dynamically pops up over a webpage.
I've been told Selenium is a good choice, but so is BeautifulSoup.
Any pointers to which one would be best to use for dynamically scraping these? | How to scrape AEM forms? | 1.2 | 0 | 1 | 85 |
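A hedged Selenium sketch of driving such a sign-in form; the URL, field names and selector are pure assumptions — inspect the rendered AEM form to find the real ones:

```python
from selenium import webdriver

driver = webdriver.Firefox()
driver.get('https://example.com/signin')          # URL is an assumption
# Field names below are assumptions -- inspect the live form to find them.
driver.find_element_by_name('username').send_keys('user')
driver.find_element_by_name('password').send_keys('secret')
driver.find_element_by_css_selector('button[type=submit]').click()
print(driver.page_source)                          # the post-login DOM
driver.quit()
```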
40,651,064 | 2016-11-17T09:26:00.000 | 1 | 0 | 0 | 0 | python,unit-testing,selenium,selenium-webdriver | 40,651,174 | 1 | false | 1 | 0 | This is how is suppose to work.
Tests should be independent else they can influence each other.
I think you would want to have a clean browser each time and not having to clean session/cookies each time, maybe now not, but when you will have a larger suite you will for sure.
Each scenario has will start the browser and it will close it at the end, you will have to research which methods are doing this and do some overriding, this is not recommended at all. | 1 | 1 | 0 | I have created a testsuite which has 2 testcases that are recorded using selenium in firefox. Both of those test cases are in separate classes with their own setup and teardown functions, because of which each test case opens the browser and closes it during its execution.
I am not able to use the same web browser instance for every testcase called from my test suite. Is there a way to achieve this? | Using same webInstance which executing a testsuite | 0.197375 | 0 | 1 | 35 |
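If you do want one browser per test class rather than per test (against the advice above), unittest's setUpClass/tearDownClass hooks are the usual mechanism — a minimal sketch (the URL is a placeholder); sharing one instance across all classes would need a module-level driver instead:

```python
import unittest
from selenium import webdriver

class BaseTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.driver = webdriver.Firefox()   # started once for the whole class

    @classmethod
    def tearDownClass(cls):
        cls.driver.quit()                  # closed once, after all its tests

class TestLogin(BaseTest):
    def test_open_home(self):
        self.driver.get('https://example.com')  # placeholder URL
```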
40,655,912 | 2016-11-17T13:14:00.000 | 0 | 0 | 0 | 1 | python,subprocess,stack-trace,backtrace | 40,656,500 | 2 | false | 0 | 0 | What you can do is to redirect stdout and stderr of your subprocess.Popen() to a file and later on check that.
Doing like that it should be possible to check the backtrace later the "process Termination".
Good logging mechanism will give you that :-) Hope this help enough. | 1 | 1 | 0 | I want to run a process in a loop and if the process returns 0, I must rerun it. If it aborts, I have to capture its stack trace (backtrace). I'm using subprocess.Popen() and .communicate() to run the process. Now .returncode is 134, i.e. child has received SIGABRT, is there any way I can capture the backtrace (stack trace) of child?
Since this is a testing tool, I have to capture all the necessary information before I forward it to the dev team. | Python subprocess.Popen - How to capture a child's backtrace upon abort | 0 | 0 | 0 | 952 |
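A minimal sketch of the stderr-to-file approach; note that Popen reports a SIGABRT death as returncode -6 (134 is the shell's 128+6 encoding), and whether a backtrace appears in stderr depends on the child actually printing one when it aborts:

```python
import signal
import subprocess

with open('child_stderr.log', 'w') as err:
    # './my_tool' is a placeholder for the tested binary.
    proc = subprocess.Popen(['./my_tool'], stdout=subprocess.PIPE, stderr=err)
    out, _ = proc.communicate()

if proc.returncode == -signal.SIGABRT:      # Popen encodes signals as -N
    print(open('child_stderr.log').read())  # backtrace, if the child printed one
```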
40,658,012 | 2016-11-17T14:53:00.000 | 2 | 0 | 0 | 0 | python,django | 40,658,108 | 2 | false | 1 | 0 | Whenever you have a login mask in a browser and transfer user credentials from browser to webserver it is highly recommended to use https, because otherwise the credentials can easily be read by others.
This applies to everything, not just django admin. | 1 | 0 | 0 | I just finished my first experience with Django on real application and we are running it on apache2.
Since I am a newbie, I am wondering whether it is acceptable to serve the admin page over HTTP.
Is HTTPS a better solution? How much risk am I taking by not running it over HTTPS? | Is there a security risk to serve Django admin page on regular http rather than Https? | 0.197375 | 0 | 0 | 139 |
40,658,287 | 2016-11-17T15:05:00.000 | 1 | 0 | 0 | 0 | python,sql,database,pandas | 40,658,564 | 1 | true | 1 | 0 | Database is place where you store data collection. You can manipulate data by DML statement and some statements can be more difficult (like pivots or functions). Data framework is tool to make your computations, pivots and other manipulating much more easier (for example with drag and drop option). | 1 | 0 | 0 | I am interesting in building databases and have been reading about SQL engines and Pandas framework, and how they interact, but am still confused about the difference between a database and a data framework.
I wonder if somebody could point me to links which clarify the distinction between them, and which is the best starting point for a data analysis project. | Database and Data Framework | 1.2 | 1 | 0 | 66 |
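A small sketch of the two side by side, with sqlite3 playing the database (storage and DML) and pandas the data framework (analysis); the rows are made up:

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect(':memory:')                   # the database: stores rows
conn.execute('CREATE TABLE sales (region TEXT, amount REAL)')
conn.executemany('INSERT INTO sales VALUES (?, ?)',
                 [('east', 10.0), ('west', 5.0), ('east', 7.5)])

df = pd.read_sql_query('SELECT * FROM sales', conn)  # the framework: analysis
print(df.groupby('region')['amount'].sum())
```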
40,659,253 | 2016-11-17T15:48:00.000 | 1 | 0 | 0 | 0 | python-2.7,google-api-client | 40,680,753 | 3 | false | 0 | 0 | Actually, found out that e.resp.status is where the HTTP error code is stored (e being the caught exception). Still don't know how to isolate the error message. | 1 | 1 | 0 | I can't find a way to retrieve the HTTP error code and the error message from a call to a Google API using the Google API Client (in Python).
I know the call can raise an HttpError (if it's not a network problem), but that's all I know.
Hope you can help | Retrieve error code and message from Google Api Client in Python | 0.066568 | 0 | 1 | 1,914 |
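A hedged sketch of catching it: e.resp.status carries the code as the self-answer found, and e.content holds the raw response body, which for most Google APIs is JSON with the text under error.message (request stands for any API client call):

```python
from googleapiclient.errors import HttpError

try:
    response = request.execute()   # `request` from any Google API client call
except HttpError as e:
    print(e.resp.status)           # HTTP status code, e.g. 403
    print(e.content)               # raw body; usually JSON with the message
```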
40,663,830 | 2016-11-17T19:51:00.000 | 1 | 1 | 0 | 1 | python,import,undefined-symbol,dawg | 40,664,272 | 1 | false | 0 | 0 | I found the answer for my very specific case, for anyone that may run into this case as well:
I am using Anaconda (python 3 version) and installing the package with conda install -c package package worked instead of pip install package.
I hope this helps someone. | 1 | 1 | 0 | I am attempting to run a package after installing it, but am getting this error:
ImportError: /home/brownc/anaconda3/lib/python3.5/site-packages/dawg.cpython-35m-x86_64-linux-gnu.so: undefined symbol: _ZTVNSt7__cxx1118basic_stringstreamIcSt11char_traitsIcESaIcEEE
The dawg....gnu.so file is binary and so it doesn't give much information when opened in sublime. I don't know enough about binary files in order to go in and remove the line or fix it. Is there a simple fix for this that I am not aware of? | Import Error from binary dependency file | 0.197375 | 0 | 0 | 317 |
40,664,845 | 2016-11-17T20:55:00.000 | 1 | 0 | 0 | 0 | android,python,opencv,apk | 40,665,287 | 1 | false | 0 | 1 | There is SL4A / PythonForAndroid but unfortunately it uses hardcoded Java RMI invocations for anything os related. That means that there are no OpenCV bindings.
I guess you'll have to learn Java | 1 | 1 | 1 | I want to transform my Python code into an Android app. The problem I saw is that I'm using OpenCV in the code, and I didn't find anything related to generating an APK while using OpenCV.
Is there a way to generate an APK from Python and OpenCV? | Python and openCV to android APK | 0.197375 | 0 | 0 | 292 |
40,666,853 | 2016-11-17T23:24:00.000 | 0 | 1 | 0 | 1 | python,file,directory,relative-path | 40,666,955 | 3 | false | 0 | 0 | Try this one:
os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(__file__))), "config") | 1 | 1 | 0 | Say I am running a Python script in C:\temp\templates\graphics. I can get the current directory using currDir = os.getcwd(), but how can I use relative path to move up in directories and execute something in C:\temp\config (note: this folder will not always be in C:\)? | Move up in directory structure | 0 | 0 | 0 | 930 |
40,667,077 | 2016-11-17T23:45:00.000 | 1 | 0 | 0 | 0 | python,rubiks-cube | 40,667,361 | 1 | false | 0 | 1 | Why don't you try indexing the cube's faces according to which color they have at the center? Then you can just check whether the white-centered face on one cube matches the white-centered face on the other cube.
In other words, the north face will always have a white square at the center, the south face will always have a yellow square at the center, etc. Only operations that keep the orientation of the centers are allowed. | 1 | 0 | 0 | I'm creating a Rubik's cube in Python and am running into the problem of checking whether 2 cubes are the same. I'm representing the sides of the cube as north, east, south, west, front, and back. I originally just had my function check if cube1.north = cube2.north, cube1.south = cube2.south, etc., and if all were true then they are the same. This leaves out the cubes where cube1.north = cube2.south, cube1.south = cube2.north, etc., and many other scenarios where they are equal but the specific faces don't match up exactly. Does anyone have an idea on how to check if any 2 cubes are equal without tons of if statements for every possibility? | Python Rubiks Cube How to tell if 2 states are equal | 0.197375 | 0 | 0 | 151 |
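A heavily hedged sketch of the centre-colour idea, assuming each cube is a dict mapping face names to 3x3 colour grids. Note this only normalises which face is called what; it does not handle whole-cube rotations that also permute the stickers within each face:

```python
def canonical(cube):
    """Re-key every face by its fixed centre colour instead of north/south/..."""
    return {face[1][1]: face for face in cube.values()}

def same_state(cube1, cube2):
    return canonical(cube1) == canonical(cube2)

# cube1 = {'north': [['w'] * 3 for _ in range(3)], ...}  # hypothetical representation
```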
40,670,370 | 2016-11-18T06:05:00.000 | 2 | 0 | 0 | 0 | python,tensorflow,dot-product | 40,670,426 | 9 | false | 0 | 0 | You can do tf.mul(x,y), followed by tf.reduce_sum() | 2 | 28 | 1 | I was wondering if there is an easy way to calculate the dot product of two vectors (i.e. 1-d tensors) and return a scalar value in tensorflow.
Given two vectors X=(x1,...,xn) and Y=(y1,...,yn), the dot product is
dot(X,Y) = x1 * y1 + ... + xn * yn
I know that it is possible to achieve this by first broadcasting the vectors X and Y to a 2-d tensor and then using tf.matmul. However, the result is a matrix, and I am after a scalar.
Is there an operator like tf.matmul that is specific to vectors? | Dot product of two vectors in tensorflow | 0.044415 | 0 | 0 | 64,462 |
40,670,370 | 2016-11-18T06:05:00.000 | 26 | 0 | 0 | 0 | python,tensorflow,dot-product | 40,672,159 | 9 | false | 0 | 0 | In addition to tf.reduce_sum(tf.multiply(x, y)), you can also do tf.matmul(x, tf.reshape(y, [-1, 1])). | 2 | 28 | 1 | I was wondering if there is an easy way to calculate the dot product of two vectors (i.e. 1-d tensors) and return a scalar value in tensorflow.
Given two vectors X=(x1,...,xn) and Y=(y1,...,yn), the dot product is
dot(X,Y) = x1 * y1 + ... + xn * yn
I know that it is possible to achieve this by first broadcasting the vectors X and Y to a 2-d tensor and then using tf.matmul. However, the result is a matrix, and I am after a scalar.
Is there an operator like tf.matmul that is specific to vectors? | Dot product of two vectors in tensorflow | 1 | 0 | 0 | 64,462 |
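A minimal sketch of both variants from the answers, in the Session-based TensorFlow API of that era (the values are made up; the expected dot product is 32.0):

```python
import tensorflow as tf

x = tf.constant([1.0, 2.0, 3.0])
y = tf.constant([4.0, 5.0, 6.0])

dot = tf.reduce_sum(tf.multiply(x, y))          # scalar: 32.0
dot_mm = tf.matmul(tf.reshape(x, [1, -1]),
                   tf.reshape(y, [-1, 1]))      # 1x1 matrix: [[32.0]]

with tf.Session() as sess:
    print(sess.run(dot), sess.run(dot_mm))
```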
40,671,158 | 2016-11-18T07:00:00.000 | 5 | 0 | 1 | 0 | python,algorithm,api,data-structures | 40,671,334 | 2 | true | 0 | 0 | Why to use public APIs:
The built-in methods were written and reviewed by many very experienced coders, and a lot of effort was invested in optimizing them to be as efficient as possible.
Since the built-in methods are public APIs, it also means they are constantly used, which gives you massive "free" testing. Issues are much more likely to be detected in public APIs than in private ones, and once something is discovered, it will be fixed for you.
Don't reinvent the wheel. Someone already programmed it for you, use it. If your profiler says there is a problem, think about replacing it. Not before.
Why to use custom made methods:
That said, the public APIs handle the general case. If you need something very specific for your scenario, you might find a solution that will be more efficient, but it will take you quite some time to actually beat the already optimized general-purpose public API.
tl;dr: Use public APIs unless you:
Need it and can afford a lot of time to replace it.
Know what you are doing pretty well.
Intend to maintain it and do robust testing for it. | In a programming language like Python, which will be more efficient: using a sorting algorithm like merge sort to sort an array, or using a built-in API like sort()? If algorithms are independent of programming languages, then what is the advantage of writing algorithms over using built-in methods or APIs? | Use APIs for sorting or algorithm? | 1.2 | 0 | 0 | 200 |
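A small sketch that makes the point measurable, timing a hand-written merge sort against the built-in sorted() on the same data (the built-in, implemented in C, typically wins by a wide margin):

```python
import random
import timeit

def merge_sort(a):
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out = []
    while left and right:
        out.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return out + left + right

data = [random.random() for _ in range(10000)]
print(timeit.timeit(lambda: merge_sort(list(data)), number=5))
print(timeit.timeit(lambda: sorted(data), number=5))   # typically far faster
```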
40,671,319 | 2016-11-18T07:10:00.000 | 2 | 0 | 1 | 0 | python | 40,671,776 | 2 | false | 0 | 0 | It heavily depends on the type of analysis. Here is a simple rule of thumb to give hints:
if the process is memory bound, keep it serial
if it is IO-bound, use multithreading - the optimal number of threads depends on the percentage of time spent waiting on IO
if it is CPU-bound, use multiprocessing with a number of processes equal to the number of available cores
If you cannot be sure a priori, just experiment... But never forget that no method is absolutely better than the others in all possible use cases
I am not sure what should I use in python : Multi-thread or multi-process to run file analysis in parallel.
Please suggest.
Thank you. | What to use Multiprocessing or multi-threading in Python? | 0.197375 | 0 | 0 | 1,567 |
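Since per-file analysis is usually CPU-bound, a minimal multiprocessing sketch of the per-core pool from the rules above; the glob pattern and the "analysis" body are placeholders for the real code:

```python
import glob
import multiprocessing

def analyze(path):
    with open(path) as f:
        return path, len(f.read())       # stand-in for the real analysis

if __name__ == '__main__':
    files = glob.glob('data/*.log')      # path pattern is an assumption
    pool = multiprocessing.Pool()        # defaults to one process per core
    for path, result in pool.map(analyze, files):
        print(path, result)
```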
40,671,611 | 2016-11-18T07:30:00.000 | 1 | 0 | 0 | 0 | python,post,flask | 40,671,727 | 1 | true | 1 | 0 | Short answer: The client should add the query parameter when submitting the form data (e.g. in the action parameter of the form tag).
Explanation: The server is responding to a request to a particular URL. There is no way for the server to "change the URL" of the request. The only thing the server can do is ask the client to send another request to a different URL by returning a redirect. The problem with this approach, as you mentioned, is that the form data will be lost. You could save the form data using cookies or some similar mechanism, but it's much easier to just have the client submit the form to the correct URL in the first place. | 1 | 0 | 0 | I have page with form which. It is working perfect, after form submitted it is replaced with "thank you" message. Initially form is accessible by url http://localhost:5000/contact and after submit it has the same URL. I want that after submit url changed to http://localhost:5000/contact?aftersubmit. I.e. add query string parameter on server side.
I know that I ca do it with redirection, but thus I am losing post-submit rendered content. Also I do not want that if user input http://localhost:5000/contact?aftersubmit could see post-submit content, i.e. I cannot analyze query string on client side and update HTML. It must be done on server side.
How it could be done? | Add query string parameter in Flask POST response | 1.2 | 0 | 0 | 1,516 |
40,672,436 | 2016-11-18T08:29:00.000 | 1 | 0 | 0 | 0 | python,django,django-rest-framework,django-filter | 40,674,813 | 1 | true | 1 | 0 | However if I actually pass this url it returns rows that filters against last parameter value
This is because ForeignKey fields default to ModelChoiceFilter, which just takes a single value from the GET QueryDict.
If you declare your fields as ModelMultipleChoiceFilter they will take the list of values you require. | 1 | 0 | 0 | I have model with following fields:
loading_port
discharge_port
carrier
supplier
All these fields are ForeignKey to models that contains name field.
Also I have a viewset which uses the DjangoFilter backend for filtering. I want to make it possible to filter on multiple values for each field, like:
loading_port__name=PORT_1&loading_port__name=PORT_2&supplier__name=SUPP_NAME_1&supplier__name=SUPP_NAME_2 and so on. However, if I actually pass this URL, it returns rows filtered against only the last value of each parameter (in this example PORT_2 for loading_port and SUPP_NAME_2 for supplier).
How can I fix the filtering so it meets my requirements? | How to filter against multiple values for ForeignKey using DjangoFilterBackend | 1.2 | 0 | 0 | 365 |
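A hedged sketch of the FilterSet; the model and queryset names are made up, and the keyword is name= in django-filter versions of that era (renamed field_name= in 2.0):

```python
import django_filters
from .models import Port, Supplier, Shipment   # hypothetical app models

class ShipmentFilter(django_filters.FilterSet):
    loading_port__name = django_filters.ModelMultipleChoiceFilter(
        name='loading_port__name', to_field_name='name',
        queryset=Port.objects.all())
    supplier__name = django_filters.ModelMultipleChoiceFilter(
        name='supplier__name', to_field_name='name',
        queryset=Supplier.objects.all())

    class Meta:
        model = Shipment
        fields = ['loading_port__name', 'supplier__name']
```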
40,672,551 | 2016-11-18T08:36:00.000 | 0 | 0 | 0 | 0 | python,mysql,escaping,load-data-infile,pymysql | 40,672,954 | 2 | false | 0 | 0 | I think the problem is with the SQL statement you print. The single quote in ''' should be escaped: '\''. Your backslash escapes the quote at Python level, and not the MySQL level. Thus the Python string should end with ENCLOSED BY '\\'';
You may also use the raw string literal notation:
r"""INTO TABLE table1 FIELDS TERMINATED BY ',' ENCLOSED BY '\'';""" | 2 | 1 | 0 | I built a simple statement to run a load data local infile on a MySQL table. I'm using Python to generate the string and also run the query. So I'm using the Python package pymysql.
This is the line to build the string. Assume metadata_filename is a proper string:
load_data_statement = """load data local infile """ + metadata_filename + """INTO TABLE table1 FIELDS TERMINATED BY ',' ENCLOSED BY '\'';"""
I originally had string substitution, and wanted to see if that was the issue, but it isn't. If I edit the statement above and comment out the ENCLOSED BY part, it is able to run, but it does not properly load the data, since I need the enclosing character
If I print(load_data_statement), I get what appears to be proper SQL code, but it doesn't seem to be read by the SQL connector. This is what's printed:
load data local infile 'filename.txt' INTO TABLE table1 FIELDS TERMINATED BY ',' ENCLOSED BY ''';
That all appears to be correct, but the MySQL engine is not taking it. What should I edit in Python to escape the single quote or just write it properly?
Edit:
I've been running the string substitution alternative, but still getting issues: load_data_statement = """load data local infile 'tgt_metadata_%s.txt' INTO TABLE table1 FIELDS TERMINATED BY ',' ENCLOSED BY '\'';""" % metadata_filename
Also tried raw strings:load_data_statement = r"""load data local infile 'tgt_metadata_%s.txt' INTO TABLE table1 FIELDS TERMINATED BY ',' ENCLOSED BY '\'';""" % metadata_filename
And tried adding extra escapes: load_data_statement = r"""load data local infile 'tgt_metadata_%s.txt' INTO TABLE table1 FIELDS TERMINATED BY ',' ENCLOSED BY '\\'';""" % metadata_filename
And tried escaping the filename quotes as well: load_data_statement = r"""load data local infile \'tgt_metadata_%s.txt\' INTO TABLE table1 FIELDS TERMINATED BY ',' ENCLOSED BY '\'';""" % metadata_filename
The execute line is simply cur.execute(load_data_statement).
And the error I'm getting is odd: `pymysql.err.ProgrammingError: (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'tgt_metadta_mal.txt'.txt' INTO table tgt_metadata FIELDS TERMINATED BY ','; ENC' at line 1")
I don't understand why the message starts at 'tgt_metadata_mal.txt and shows only the first 3 letters of ENCLOSED BY... | In Python how do I escape a single quote within a string that will be used as a SQL statement? | 0 | 1 | 0 | 4,539 |
40,672,551 | 2016-11-18T08:36:00.000 | 3 | 0 | 0 | 0 | python,mysql,escaping,load-data-infile,pymysql | 40,672,606 | 2 | false | 0 | 0 | No need for escaping that string.
cursor.execute("SELECT * FROM Codes WHERE ShortCode = %s", text)
You should use %s placeholders instead of building the string yourself; the parameter (in this case text) is then passed separately. This is the most secure way of protecting against SQL injection.
This is the line to build the string. Assume metadata_filename is a proper string:
load_data_statement = """load data local infile """ + metadata_filename + """INTO TABLE table1 FIELDS TERMINATED BY ',' ENCLOSED BY '\'';"""
I originally had string substitution, and wanted to see if that was the issue, but it isn't. If I edit the statement above and comment out the ENCLOSED BY part, it is able to run, but it does not properly load the data, since I need the enclosing character
If I print(load_data_statement), I get what appears to be proper SQL code, but it doesn't seem to be read by the SQL connector. This is what's printed:
load data local infile 'filename.txt' INTO TABLE table1 FIELDS TERMINATED BY ',' ENCLOSED BY ''';
That all appears to be correct, but the MySQL engine is not taking it. What should I edit in Python to escape the single quote or just write it properly?
Edit:
I've been running the string substitution alternative, but still getting issues: load_data_statement = """load data local infile 'tgt_metadata_%s.txt' INTO TABLE table1 FIELDS TERMINATED BY ',' ENCLOSED BY '\'';""" % metadata_filename
Also tried raw strings: load_data_statement = r"""load data local infile 'tgt_metadata_%s.txt' INTO TABLE table1 FIELDS TERMINATED BY ',' ENCLOSED BY '\'';""" % metadata_filename
And tried adding extra escapes: load_data_statement = r"""load data local infile 'tgt_metadata_%s.txt' INTO TABLE table1 FIELDS TERMINATED BY ',' ENCLOSED BY '\\'';""" % metadata_filename
Also tried raw strings:load_data_statement = r"""load data local infile \'tgt_metadata_%s.txt\' INTO TABLE table1 FIELDS TERMINATED BY ',' ENCLOSED BY '\'';""" % metadata_filename
The execute line is simply `cur.execute(load_data_statement)`.
And the error I'm getting is odd: `pymysql.err.ProgrammingError: (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'tgt_metadta_mal.txt'.txt' INTO table tgt_metadata FIELDS TERMINATED BY ','; ENC' at line 1")`
I don't understand why the message starts at 'tgt_metadata_mal.txt and shows only the first 3 letters of ENCLOSED BY... | In Python how do I escape a single quote within a string that will be used as a SQL statement? | 0.291313 | 1 | 0 | 4,539 |
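A minimal sketch of the parameterized alternative this answer recommends, applied to the LOAD DATA case (connection details, table and file names are placeholders; using double quotes in the SQL for the ENCLOSED BY character sidesteps the nested single quotes entirely):

```python
import pymysql

conn = pymysql.connect(host="localhost", user="user", password="pass",
                       db="mydb", local_infile=True)
try:
    with conn.cursor() as cur:
        # pymysql escapes and quotes the parameter client-side, so the
        # quotes in the filename never collide with quotes in the statement.
        sql = ("LOAD DATA LOCAL INFILE %s INTO TABLE table1 "
               "FIELDS TERMINATED BY ',' ENCLOSED BY \"'\"")
        cur.execute(sql, ("tgt_metadata_mal.txt",))
    conn.commit()
finally:
    conn.close()
```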
40,673,629 | 2016-11-18T09:36:00.000 | 0 | 0 | 0 | 0 | python,numpy,machine-learning,scikit-learn | 40,673,779 | 1 | true | 0 | 0 | You cannot use NaN values because the input vector will, for instance, be multiplied with a weight matrix. The result of such operations needs to be defined.
What you typically do if you have gaps in your input data is, depending on the specific type and structure of the data, fill the gaps with "artificial" values. For instance, you can use the mean or median of the same column in the remaining training data instances. | 1 | 0 | 1 | I have a classifier that has been trained using a given set of input training data vectors. There are missing values in the training data which is filled as numpy.Nan values and Using imputers to fill in the missing values.
However, in the case of my input vector for prediction, how do I pass in the input where a value is missing? Should I pass the value as NaN? Does the imputer play a role in this?
If I have to fill in the value manually, how do I do so for such a case? Will I need to calculate the mean/median/frequency from the existing data?
Note: I am using sklearn. | Pass a NAN in input vector for prediction | 1.2 | 0 | 0 | 787
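A minimal sketch of the fit-then-reuse idea the answer describes, using scikit-learn's current SimpleImputer (the toy data is an assumption; the key point is that the fill values learned from the training set are reused at prediction time):

```python
import numpy as np
from sklearn.impute import SimpleImputer

X_train = np.array([[1.0, 2.0], [np.nan, 3.0], [7.0, 6.0]])
imputer = SimpleImputer(strategy="mean")
imputer.fit(X_train)  # learns per-column fill values from the training data

# At prediction time, pass np.nan for the missing feature and transform
# before calling clf.predict(...); the same training means are reused.
x_new = np.array([[np.nan, 4.0]])
x_filled = imputer.transform(x_new)  # -> [[4.0, 4.0]] here (column mean)
```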
40,677,670 | 2016-11-18T12:54:00.000 | 0 | 1 | 0 | 1 | python,linux,centos | 40,677,984 | 1 | true | 0 | 0 | Python is required by many of the Linux distributions. Many system utilities the distro providers combine (both GUI based and not), are programmed in Python.
The version of python the system utilities are programmed in I will call the "main" python. For Ubuntu 12.04 e.g. this is 2.7.3, the version that you get when invoking python on a freshly installed system.
Because of the system utilities that are written in Python, it is impossible to remove the main python without breaking the system. It even takes a lot of care to update the main python with a later version in the same major.minor series, as you need to compile it with the same configuration specs as the main python. This is needed to get the correct search paths for the libraries that the main python uses, which is often not exactly what a ./configure without options would get you when you download Python and compile it from source.
Installing a different version from the major.minor version the system uses (i.e. the main python) normally is not a problem. I.e. you can compile a 2.6 or 3.4 python and install it without a problem, as this is installed next to the main (2.7.X) python. Sometimes a distro provides these different major.minor packages, but they might not be the latest bug-release version in that series.
The problems start when you want to use the latest release in the main python series (e.g. 2.7.8 on a system whose main python version is 2.7.3). I recommend not trying to replace the main python, but instead compiling and installing the 2.7.8 in a separate location (mine is in /opt/python/2.7.8). This will keep you on the security fix schedule of your distribution and guarantees someone else tests compatibility of the python libraries against that version (as used by system utilities!). | 1 | 0 | 0 | How to remove python 2.6 from Cent OS?
I tried the command yum remove python. Afterwards I ran python --version and still got a version back. | How to remove Python in Cent OS? | 1.2 | 0 | 0 | 210
40,678,200 | 2016-11-18T13:21:00.000 | 1 | 0 | 1 | 0 | python,numpy | 47,094,209 | 2 | false | 0 | 0 | I also got this error. If you google for it, you will find lot's of similar issues. The problem can happen when you have multiple Python versions. In my case, I had the Ubuntu 16.04 Python 2.7 via /usr/bin/python and another Python 2.7 via Linuxbrew. type python gave me /u/zeyer/.linuxbrew/bin/python2, i.e. the Linuxbrew one. type pip2.7 gave me /u/zeyer/.local/bin/pip2.7, and looking into that file, it had the shebang #!/usr/bin/python, i.e. it was using the Ubuntu Python.
So, there are various solutions. You could just edit the pip2.7 file and change the shebang to #!/usr/bin/env python2.7. Or reinstall pip in some way.
In my case, I found that the Python 2.7 via Linuxbrew was incompatible to a few packages I needed (e.g. Tensorflow), so I unlinked it and use only the Ubuntu 16.04 Python 2.7 now. | 1 | 1 | 1 | I am a python beginner and I would like some help with this. I am using Ubuntu and I had installed python using Anaconda, but then I tried to install it again using pip and now when I'm trying to run my code, at import numpy as np, I see this error
ImportError: /home/dev/.local/lib/python2.7/site-packages/numpy/core/multiarray.so: undefined symbol: _PyUnicodeUCS4_IsWhitespace
How can I fix this? | ImportError: undefined symbol: _PyUnicodeUCS4_IsWhitespace | 0.099668 | 0 | 0 | 3,200 |
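When two interpreters get mixed up like this, a quick diagnostic is to ask the running interpreter which files it is actually using; a short sketch:

```python
import sys
print(sys.prefix)        # which Python installation is running
import numpy
print(numpy.__file__)    # which site-packages the broken module came from
```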
40,680,022 | 2016-11-18T14:53:00.000 | 0 | 1 | 0 | 0 | python,automated-tests,hudson,testlink | 40,722,012 | 1 | true | 1 | 0 | I found how to do it :
I used testLink-API-Python-client. | 1 | 0 | 0 | I want to run automated tests on my Python script using Hudson and TestLink. I configured Hudson with my TestLink server, but the test results are always "not run". Do you know how to do this? | Automate python test with testlink and hudson | 1.2 | 0 | 0 | 299
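A hedged sketch of how that client is typically used; the URL, dev key and all identifiers below are placeholders, and the method shown mirrors TestLink's reportTCResult XML-RPC call (check the client's docs for the exact signature in your version):

```python
import testlink

tls = testlink.TestlinkAPIClient(
    "http://testlink.example.com/lib/api/xmlrpc/v1/xmlrpc.php",
    "YOUR_DEV_KEY")
# Push one automated result so the Hudson run shows up in TestLink
# instead of staying "not run" ('p' = passed).
tls.reportTCResult(testcaseexternalid="PROJ-1", testplanid=42,
                   buildname="hudson-build-17", status="p",
                   notes="automated run from Hudson")
```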
40,681,689 | 2016-11-18T16:19:00.000 | 6 | 0 | 0 | 0 | python,mongodb,ip,eve | 40,690,062 | 1 | true | 0 | 0 | Add a parameter to your app.run(). By default it runs on localhost, change it to app.run(host= '0.0.0.0') to run on your machine IP address. | 1 | 2 | 0 | When I run the api.py the default IP address is 127.0.0.1:5000 which is the local host. I am running the eve scripts on the server side. Am I able to change that IP address to server's address? or Am I just access it using server's address.
For example,
if the server's address is 11.5.254.12, then I run the api.py.
Am I able to access it outside of the server using 11.5.254.12:5000 or is there any way to change it from 127.0.0.1 to 11.5.254.12? | How to change eve's IP address? | 1.2 | 0 | 1 | 1,424 |
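A minimal run-script sketch for the accepted answer (Eve settings are assumed to live in a settings.py next to this file):

```python
from eve import Eve

app = Eve()

if __name__ == "__main__":
    # Bind to all interfaces so the API is reachable at the server's own
    # address, e.g. 11.5.254.12:5000, instead of only at 127.0.0.1.
    app.run(host="0.0.0.0", port=5000)
```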
40,685,275 | 2016-11-18T20:15:00.000 | 0 | 0 | 0 | 0 | python | 40,699,877 | 1 | true | 1 | 0 | I was able to solve this just now!
I went through the DNS settings of my domain and pointed the DNS A record at the IP address that my Flask application is running on.
Previously, I was using a redirect on the domain, which was not working. | 1 | 0 | 0 | I have built a flask web application that makes use of Google's authenticated login to authenticate users. I currently have it running on localhost 127.0.0.1:5000 however I would like to point a custom domain name to it that I would like to purchase.
I have used custom domains with Flask applications before, I'm just not sure how to do it with this one. I'm confused as to what I should do with my OAuth callback.
My callback is set to http://127.0.0.1:5000/authorized in my Google OAuth client credentials. I don't think it would be as easy as just running the app on 0.0.0.0.
I would need to be able to match the Flask routes to the domain, i.e. be able to access www.mydomain.com/authorized. | Use custom domain for flask app with Google authenticated login | 1.2 | 0 | 0 | 447
40,686,773 | 2016-11-18T22:14:00.000 | 0 | 0 | 0 | 0 | python,algorithm,graph,shortest-path,breadth-first-search | 40,687,262 | 1 | false | 0 | 0 | As you've hinted, this will depend a lot on the data access characteristics of your system. If you were restricted to single-element accesses, then you'd be truly stuck, as trincot observes. However, if you can manage block accesses, then you have a chance of parallel operations.
However, I think that would be outside your control: the hash function owns the adjacency characteristics -- and, in fact, will likely "pessimize" (opposite of "optimize") that characteristic.
I do see one possible hope: use iteration instead of recursion, maintaining a list of nodes to visit. When you place a new node on the list, get its hash value. If you can organize the nodes clustered by location, you can perhaps do a block transfer, accessing several values in one read operation. | 1 | 0 | 1 | Consider I have an adjacency list of billion of nodes structured using hash table arranged in the following manner:
key = source node
value = hash_table { node1, node2, node3}
The input values are from text file in the form of
from,to
1,2
1,5
1,11
... so on
eg.
key = '1'
value = {'2','5','11'}
means 1 is connected to nodes 2,5,11
I want to know an algorithm or approach to find a destination node from a source node within exactly k edges in an undirected graph of a billion nodes, without cycles.
For example, from node 1 I want to find node 50 only up to depth 3, i.e., within 3 edges.
My assumption is that the algorithm finds 1 - 2 - 60 - 50, which is the shortest path, but how would the traversal be efficient using the above adjacency list structure?
I do not want to use Hadoop/Map Reduce.
I came up with a naive solution, shown below in Python, but it is not efficient. The only advantage is that a hash table finds a key in O(1), so I can search the neighbours, and their billion neighbours, directly for the key. The following algorithm takes a lot of time.
start with source node
use hash table search for finding key
go 1 level deeper with hash table of neighbor nodes and find their values for destination nodes until node found
Stop if the node is not found at depth k
 1
|
{2 5 11}
| | |
{3,6,7} {nodes} {nodes} .... connected nodes
| | | | |
{nodes} {nodes} {nodes} .... million more connected nodes.
Please suggest. The algorithm above, implemented similarly to BFS, takes more than 3 hours to search all the possible key-value relationships. Can it be reduced with another search method? | Algorithm/Approach to find destination node from source node of exactly k edges in undirected graph of billion nodes without cycle | 0 | 0 | 0 | 105
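A sketch of the iterative, depth-limited BFS the answer suggests, with adj being the question's hash-table adjacency list ({node: set_of_neighbours}):

```python
from collections import deque

def find_within_k(adj, source, target, k):
    queue = deque([(source, 0)])
    seen = {source}
    while queue:
        node, depth = queue.popleft()
        if node == target:
            return depth              # number of edges used
        if depth == k:
            continue                  # never expand past k edges
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, depth + 1))
    return None                       # unreachable within k edges
```

With the question's example, find_within_k(adj, '1', '50', 3) visits each node at most once, so the work is bounded by the neighbourhood within k hops of the source rather than by the whole graph.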
40,689,085 | 2016-11-19T03:59:00.000 | 0 | 0 | 1 | 0 | python,terminal,pip,xlwings,numexpr | 40,692,275 | 1 | false | 0 | 0 | xlwings imports pandas, if it is installed. Pandas in turn imports numexpr if it's available. numexpr seems to be incorrectly installed here. I would reinstall numexpr using conda (as you are using Anaconda), and if that doesn't help, reinstall pandas and xlwings as well. You could also create a new conda environment and conda install xlwings to try it out in a fresh environment. | 1 | 0 | 0 | I'm trying to get started with xlwings, but am receiving a few errors when I go to import it.
I pulled up my OSX terminal, ran
pip install xlwings
no problem there. Fired up python
$ python
then ran
import xlwings as xw
And it gave me this:
/users/Joshua/anaconda/lib/python3.5/site-packages/numexpr/cpuinfo.py:53: UserWarning: [Errno 2] No such file or directory: 'arch'
stacklevel=stacklevel + 1)
/users/Joshua/anaconda/lib/python3.5/site-packages/numexpr/cpuinfo.py:53: UserWarning: [Errno 2] No such file or directory: 'machine'
stacklevel=stacklevel + 1)
/users/Joshua/anaconda/lib/python3.5/site-packages/numexpr/cpuinfo.py:76: UserWarning: [Errno 2] No such file or directory: 'sysctl'
stacklevel=stacklevel + 1):
I tried uninstalling and reinstalling the numexpr package
pip uninstall numexpr
pip install numexpr
and doing the same with xlwings, but still receiving this error. :/
Any ideas on how to get the missing files? | import xlwings, missing files from numexpr package | 0 | 0 | 0 | 179 |
40,689,819 | 2016-11-19T06:14:00.000 | 1 | 1 | 0 | 0 | telegram,telegram-bot,python-telegram-bot,php-telegram-bot,lua-telegram-bot | 41,173,517 | 3 | false | 0 | 0 | When a user registers on Telegram, the server assigns a unique chat_id to that user automatically. So when the user sends the /start message to your bot for the first time, you can store this chat_id in the bot's database (for example, in a webhook that tracks user statistics).
The answer is: as long as the user hasn't blocked your bot, you can successfully send him/her a message. On the other hand, if the user has deleted their account, there is no way to send a message to the new chat_id!
I hope you got it. | 1 | 1 | 0 | I want to ask clients to start my chat bot and send me a username and password; then I store their chat_id and use it whenever I want to send a message to one of them.
Is this possible, or will the chat_id expire? | Can I use chat_id to send message to clients in Telegram bot after a period of time? | 0.066568 | 0 | 1 | 2,478
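A hedged sketch using python-telegram-bot (token and chat_id are placeholders; in library versions 20+ the call is async and must be awaited). The point is that a chat_id stored when the user first sent /start can be reused indefinitely, as long as the user hasn't blocked the bot:

```python
import telegram

bot = telegram.Bot(token="BOT_TOKEN")
stored_chat_id = 123456789  # persisted when the user sent /start
bot.send_message(chat_id=stored_chat_id, text="Hello again!")
```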
40,690,016 | 2016-11-19T06:45:00.000 | 1 | 0 | 0 | 0 | python,artificial-intelligence | 40,690,032 | 1 | false | 0 | 0 | Treat them like human players; don't give them the internal guts, just give them an interface to use.
E.g. give them an object that contains only the information they're allowed to access, and have the AI return a choice of which action they wish to perform. | 1 | 0 | 0 | I'm making an AI for a card game in Python and was wondering how I can keep players' decision functions from accessing the information given to them by the game that they shouldn't be able to access (for example, other players' hands). Currently, the game object itself is being passed into the players' decision functions.
I can only see two avenues of improvement: to either carefully choose what you pass in (although even things like one's own deck shouldn't be able to be manipulated by oneself, sadly, so this might not work), or to somehow filter using some obfuscation method, but I can't really think of one. Can you think of a better way to design this?
Thanks!
Andrew | Obfuscating/Hiding Information from other players when writing an AI for a game | 0.197375 | 0 | 0 | 31 |
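A sketch of the restricted-interface idea from the answer; class and attribute names are made up for illustration. The decision function receives a PlayerView, never the Game itself, so it physically cannot read other players' hands or mutate its own deck:

```python
class PlayerView:
    def __init__(self, game, player_id):
        self._game = game
        self._player_id = player_id

    @property
    def hand(self):
        # A copy: the AI can look at its own hand but not rearrange it.
        return tuple(self._game.hands[self._player_id])

    @property
    def discard_pile(self):
        return tuple(self._game.discard_pile)

def run_turn(game, player_id, decision_fn):
    view = PlayerView(game, player_id)
    action = decision_fn(view)        # the AI only ever sees the view
    game.apply(player_id, action)     # assumed engine hook that validates
```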
40,690,598 | 2016-11-19T08:04:00.000 | 3 | 0 | 1 | 0 | python,machine-learning,tensorflow,keras | 44,625,370 | 7 | false | 0 | 0 | I just spent some time figuring it out.
Thoma's answer is not complete.
Say your program is test.py, you want to use GPU 0 to run this program, and keep the other GPUs free.
You should write CUDA_VISIBLE_DEVICES=0 python test.py
Notice it's DEVICES, not DEVICE. | 1 | 113 | 1 | I have Keras installed with the TensorFlow backend and CUDA. I'd like to sometimes, on demand, force Keras to use the CPU. Can this be done without, say, installing a separate CPU-only TensorFlow in a virtual environment? If so, how? If the backend were Theano, the flags could be set, but I have not heard of TensorFlow flags accessible via Keras. | Can Keras with Tensorflow backend be forced to use CPU or GPU at will? | 0.085505 | 0 | 0 | 147,257
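The same switch can be flipped from inside the script, provided it happens before TensorFlow is imported; a short sketch:

```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = ""    # hide all GPUs -> CPU only
# os.environ["CUDA_VISIBLE_DEVICES"] = "0" # or expose only GPU 0
import tensorflow as tf  # must come after the environment variable is set
```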
40,690,675 | 2016-11-19T08:14:00.000 | 1 | 0 | 0 | 0 | python,opencv,colors,shape,contour | 40,767,562 | 2 | false | 0 | 0 | To find the shape of a particular contour, we can draw a bounding rectangle around the contour.
Now we can compare the area of the contour with the area of the bounding rectangle.
If the area of the contour is equal to half the area of the bounding rectangle, the shape is a triangle.
If the area of the contour is less than the area of the bounding rectangle but greater than half of it, then it's a circle.
Note: This method is limited to regular triangles and circles; it doesn't apply to polygons like hexagons, heptagons, etc. | 1 | 2 | 1 | I am new to opencv using python and trying to get the shape of a contour in an image.
Considering only regular shapes like square, rectangle, circle and triangle, is there any way to get the contour shape using only the numpy and cv2 libraries?
Also, I want to find the colour inside a contour. How can I do it?
For finding area of a contour there is an inbuilt function: cv2.contourArea(cnt).
Are there inbuilt functions for "contour shape" and "color inside contour" also?
Please help!
Note: The images I am considering contain multiple regular shapes. | Detecting shape of a contour and color inside | 0.099668 | 0 | 0 | 5,369
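A sketch of the area-ratio heuristic from the answer, plus a masked mean for the colour inside the contour (the contour is assumed to come from cv2.findContours on a thresholded image; the threshold values are rough assumptions):

```python
import cv2
import numpy as np

def classify(contour, image):
    area = cv2.contourArea(contour)
    x, y, w, h = cv2.boundingRect(contour)
    ratio = area / float(w * h)
    if ratio <= 0.55:          # ~half the box -> triangle
        shape = "triangle"
    elif ratio < 0.95:         # between half and full box -> circle (~0.785)
        shape = "circle"
    else:
        shape = "square or rectangle"
    # Mean BGR colour over the filled contour region only.
    mask = np.zeros(image.shape[:2], dtype=np.uint8)
    cv2.drawContours(mask, [contour], -1, 255, thickness=-1)
    colour = cv2.mean(image, mask=mask)[:3]
    return shape, colour
```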
40,691,002 | 2016-11-19T08:57:00.000 | 2 | 0 | 0 | 0 | python,django | 40,691,076 | 1 | false | 1 | 0 | The solution is to have a ManyToMany relation between User and Company.
All Users are admin of their own Company (happens when they create their account), but in addition they are also candidates of other companies.
They can add Items for all companies they are in, but only invite new people for the company they're owner of, all using the same user account.
You'll need some way to switch the company they're currently working as, or showing all of them on the same screen, etc. | 1 | 1 | 0 | Workflow:
The user, in the registration form, gives an email, password and company name. A company with the same name is automatically created during the registration process (model Company). This user is automatically the admin of this company (in the User model I have a role field).
The company admin can invite candidates. In a form, they give the candidate's email, first and last name. The application sends an email with an activation link to the candidate.
By clicking the link, the candidate is transferred to a page with a form where they set their password, and then they are redirected to the login page.
Candidate can log in and add new items to database(model Item)
The problem is that many companies should be able to have the same user (the same email address). Currently the application returns that the email is already in use (in another company, but it shouldn't work like that). So this is something like Software as a Service. Any ideas how to solve this problem? | Software as a service in Django - many companies should be able to have the same users | 0.379949 | 0 | 0 | 50
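A hedged model sketch of the ManyToMany layout the answer proposes; field and model names here are assumptions, not the asker's actual code:

```python
from django.conf import settings
from django.db import models

class Company(models.Model):
    name = models.CharField(max_length=100, unique=True)
    owner = models.ForeignKey(settings.AUTH_USER_MODEL,
                              related_name="owned_companies",
                              on_delete=models.CASCADE)
    # The same user account can be a candidate in many companies.
    members = models.ManyToManyField(settings.AUTH_USER_MODEL,
                                     related_name="companies")

class Item(models.Model):
    company = models.ForeignKey(Company, on_delete=models.CASCADE)
    added_by = models.ForeignKey(settings.AUTH_USER_MODEL,
                                 on_delete=models.CASCADE)
```

With this, the uniqueness constraint lives on the email alone; which company a user is currently acting for is resolved per request rather than per account.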
40,693,141 | 2016-11-19T12:54:00.000 | 0 | 0 | 1 | 0 | list,python-3.x,loops,iteration | 40,693,343 | 2 | false | 0 | 0 | Set a variable counter to 0
Loop through each item in the word list and compare the given word with each word in the list
If the given word is greater than the item in the list, then increment the counter
So when you are out of the loop, the counter value is the index you are looking for.
You can convert this to code. | 1 | 0 | 0 | I understand how to index a word in a given list, but if given a set list and a word not in the list, how do I find the index position of the new word without appending or inserting the new word to the sorted list?
For example:
def find_insert_position:
a_list = ['Bird', 'Dog', 'Alligator']
new_animal = 'Cow'
Without altering the list, how would I determine where the new word would be inserted within a sorted list, so that if you entered the new word, the list would stay in alphabetical order? Keep in mind this is a given list and word, so I would not know any of the words beforehand. I am using Python3. | How to iterate a list without inserting the new word to list? | 0 | 0 | 0 | 43
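The counter loop from the answer, plus the stdlib shortcut that computes the same index in O(log n):

```python
import bisect

def find_insert_position(a_list, new_word):
    counter = 0
    for word in sorted(a_list):
        if new_word > word:
            counter += 1
    return counter

a_list = ['Bird', 'Dog', 'Alligator']
print(find_insert_position(a_list, 'Cow'))        # 2
print(bisect.bisect_left(sorted(a_list), 'Cow'))  # 2 as well
```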
40,694,528 | 2016-11-19T15:22:00.000 | -1 | 0 | 1 | 0 | python,anaconda,jupyter-notebook,jupyter | 67,112,315 | 8 | false | 0 | 0 | Check the Python Version
import sys
print(sys.version) | 2 | 203 | 0 | I use Jupyter notebook in a browser for Python programming, I have installed Anaconda (Python 3.5). But I'm quite sure that Jupyter is running my python commands with the native python interpreter and not with anaconda. How can I change it and use Anaconda as interpreter? | How to know which Python is running in Jupyter notebook? | -0.024995 | 0 | 0 | 330,999 |
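Printing the interpreter's path is even more direct for this question, since an Anaconda path settles it at a glance:

```python
import sys
print(sys.executable)  # e.g. /Users/me/anaconda/bin/python -> Anaconda it is
```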
40,694,528 | 2016-11-19T15:22:00.000 | 5 | 0 | 1 | 0 | python,anaconda,jupyter-notebook,jupyter | 67,410,197 | 8 | false | 0 | 0 | Looking up the Python version
Jupyter menu help/about will show the Python version | 2 | 203 | 0 | I use Jupyter notebook in a browser for Python programming, I have installed Anaconda (Python 3.5). But I'm quite sure that Jupyter is running my python commands with the native python interpreter and not with anaconda. How can I change it and use Anaconda as interpreter? | How to know which Python is running in Jupyter notebook? | 0.124353 | 0 | 0 | 330,999 |
40,695,276 | 2016-11-19T16:34:00.000 | 1 | 0 | 1 | 0 | python | 50,421,462 | 1 | false | 0 | 0 | you can search like this.
m.uid('search', None, "(OR (SUBJECT baseball) (SUBJECT basketball))") | 1 | 2 | 0 | I have trouble using impalib to search email that contain more than two subjects, for example:
import imaplib
m = imaplib.IMAP4_SSL("imap.gmail.com")
m.login('myname', 'mypwd')
m.select("Inbox")
resp, items = m.uid('search', None, "(SUBJECT baseball SUBJECT basketball)")
will have no problem getting data from searching those subjects. However, if I search more than two subjects
resp, items = m.uid('search', None, "(SUBJECT baseball SUBJECT basketball SUBJECT football)")
it won't return any data. Also, subjects like "space jam" or "matchbox 20" cause parsing trouble in the search field. | Python Imaplib search multiple SUBJECT criteria and special characters | 0.197375 | 0 | 1 | 2,629
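IMAP's OR takes exactly two operands, so three subjects need nested ORs, and phrases with spaces must be quoted as a single string; a sketch (credentials as in the question):

```python
import imaplib

m = imaplib.IMAP4_SSL("imap.gmail.com")
m.login('myname', 'mypwd')
m.select("Inbox")
# Three subjects: OR is binary, so nest one OR inside another.
resp, items = m.uid(
    'search', None,
    '(OR (OR (SUBJECT baseball) (SUBJECT basketball)) (SUBJECT football))')
# Multi-word subjects must be quoted.
resp, items = m.uid('search', None, '(SUBJECT "space jam")')
```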
40,695,393 | 2016-11-19T16:44:00.000 | 3 | 0 | 1 | 0 | python,jupyter-notebook,jupyter | 40,695,507 | 4 | false | 1 | 0 | Such functionality is (to my knowledge) not available in Jupyter as of yet. However, if you are really worried about having a lot of function definitions at the beginning and want to hide them, you can use the following alternative:
Define the functions in a Python script.
Add the script execution to the first coding cell of your notebook
Add the rest of the code to the subsequent cells of the notebook
Optionally, show its contents at the end of the notebook for viewers' convenience. | 2 | 10 | 0 | I have a Jupyter notebook.
In cell 1, I defined a lot of functions, which need to run before other things. Then in the following cells, I start to present results.
However, when I convert to HTML, this layout is ugly. Readers have to scroll a long time to see the result and they may not care about the functions at all.
But I have to put the code in that order because I need those functions.
So my question is, is there a way I could control the run order of cells after I click Run All? Or is there a way I could do something like the following:
I put all my function definitions in cell 20; then in cell 1, I could tell Jupyter something like "run cell 20".
Just curious if this is doable.
Thanks. | Python Jupyter Notebook: Specify cell execution order | 0.148885 | 0 | 0 | 2,975 |
40,695,393 | 2016-11-19T16:44:00.000 | 4 | 0 | 1 | 0 | python,jupyter-notebook,jupyter | 40,695,575 | 4 | false | 1 | 0 | I would save the functions as a separate module, then import this module at the beginning. | 2 | 10 | 0 | I have a Jupyter notebook.
In cell 1, I defined a lot of functions, which need to run before other things. Then in the following cells, I start to present results.
However, when I convert to HTML, this layout is ugly. Readers have to scroll a long time to see the result and they may not care about the functions at all.
But I have to put the code in that order because I need those functions.
So my question is, is there a way I could control the run order of cells after I click Run All? Or is there a way I could do something like the following:
I put all my function definitions in cell 20; then in cell 1, I could tell Jupyter something like "run cell 20".
Just curious if this is doable.
Thanks. | Python Jupyter Notebook: Specify cell execution order | 0.197375 | 0 | 0 | 2,975 |
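A sketch of both answers' approach, assuming the definitions were moved into a helpers.py next to the notebook:

```python
# First cell of the notebook:
%run helpers.py        # IPython magic: runs the script in this namespace
# or treat it as a regular module instead:
# from helpers import *
```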
40,700,472 | 2016-11-20T03:44:00.000 | 1 | 0 | 0 | 0 | python,tkinter,resize | 40,700,579 | 1 | true | 0 | 1 | No, there is no way to constrain the geometry propagation in only one direction.
In my experience there are almost always better ways to solve layout problems than turning propagation off. There is no single best solution, it all depends on your specific layout needs. | 1 | 0 | 0 | By default, Tkinter widgets will resize based on their children's sizes (i.e., Tkinter will not respect my width and height configurations). I know that using parent.pack_propagate(False) will prevent parent's children from modifying its dimensions, but what if I only want to prevent the children from changing one dimension, allowing it to modify another dimension? For example, how would I prevent a widget's children from modifying its width but allow it to change its height?
One hack-ish solution I came up with was to have a 1px tall frame with my requested width and add that to the parent, which prevented the other children from shrinking the width of the parent, but this seems like an inelegant solution. Is there any built-in solution to this? | Tkinter - Prevent children from changing parent width | 1.2 | 0 | 0 | 828 |
40,702,556 | 2016-11-20T09:39:00.000 | 0 | 0 | 1 | 0 | python,python-3.x,asynchronous,io | 40,770,608 | 2 | false | 0 | 0 | In my question I separated the I/O handling into two categories: polling represented by non-blocking select, and "callback" represented by the blocking select. (The blocking select sleeps the thread, so it's not strictly speaking a callback; but conceptually it is similar to a callback, since it doesn't use CPU cycles until the I/O is ready. Since I don't know the precise term, I'll just use "callback").
I assumed that asynchronous model cannot use "callback" I/O. It now seems to me that this assumption was incorrect. While an asynchronous program should not be using non-blocking select, and it cannot strictly request a traditional callback from the OS either, it can certainly provide OS with its main event loop and say a coroutine, and ask the OS to create a task in that event loop using that coroutine when an I/O socket is ready. This would not use any of the program's CPU cycles until the I/O is ready. (It might use OS kernel's CPU cycles if it uses polling rather than interrupts for I/O, but that would be the case even with a multi-threaded program.)
Of course, this requires that the OS supports the asynchronous framework used by the program. It probably doesn't. But even then, it seems quite straightforward to add an middle layer that uses a single separate thread and blocking select to talk to the OS, and whenever I/O is ready, creates a task to the program's main event loop. If this layer is included in the interpreter, the program would look perfectly asynchronous. If this layer is added as a library, the program would be largely asynchronous, apart from a simple additional thread that converts synchronous I/O to asynchronous I/O.
I have no idea whether any of this is done in python, but it seems plausible conceptually. | 2 | 0 | 0 | In an asynchronous program (e.g., asyncio, twisted etc.), all system calls must be non-blocking. That means a non-blocking select (or something equivalent) needs be executed in every iteration of the main loop. That seems more wasteful than the multi-threaded approach where each thread can use a blocking call and sleep (without wasting CPU resource) until the socket is ready.
Does this sometimes cause asynchronous programs to be slower than their multi-threaded alternatives (despite thread switching costs), or is there some mechanism that makes this not a valid concern? | CPU utilization while waiting for I/O to be ready in asynchronous programs | 0 | 0 | 0 | 251 |
40,702,556 | 2016-11-20T09:39:00.000 | 1 | 0 | 1 | 0 | python,python-3.x,asynchronous,io | 40,702,725 | 2 | false | 0 | 0 | When working with select in a single thread program, you do not have to continuously check the results. The right way to work with it is to let it block until the relevant I/O has arrived, just like in the case of multi threads.
However, instead of waiting for a single socket (or other I/O), the select call gets a list of relevant sockets, and blocks until any of them becomes ready.
Once that happens, select wakes-up and returns a list of the sockets (or I/Os) that are ready. It is up to the coder to handle those ready sockets in the required way, and then, if the code has nothing else to do, it might start another iteration of the select.
As you can see, no polling loop is required; the program does not require CPU resources until one or more of the required sockets are ready. Moreover, if a few sockets were ready almost together, then the code wakes-up once, handle all of them, and only then start select again. Add to that the fact that the program does not requires the resources overhead of a few threads, and you can see why this is more effective in terms of OS resources. | 2 | 0 | 0 | In an asynchronous program (e.g., asyncio, twisted etc.), all system calls must be non-blocking. That means a non-blocking select (or something equivalent) needs be executed in every iteration of the main loop. That seems more wasteful than the multi-threaded approach where each thread can use a blocking call and sleep (without wasting CPU resource) until the socket is ready.
Does this sometimes cause asynchronous programs to be slower than their multi-threaded alternatives (despite thread switching costs), or is there some mechanism that makes this not a valid concern? | CPU utilization while waiting for I/O to be ready in asynchronous programs | 0.099668 | 0 | 0 | 251 |
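A sketch of the blocking-select loop the second answer describes, using the stdlib selectors module: the single thread sleeps inside sel.select(), consuming no CPU, until the OS reports one or more ready sockets (port and host are placeholders):

```python
import selectors
import socket

sel = selectors.DefaultSelector()

def accept(server):
    conn, _ = server.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, echo)

def echo(conn):
    data = conn.recv(1024)
    if data:
        conn.sendall(data)
    else:
        sel.unregister(conn)
        conn.close()

server = socket.socket()
server.bind(("localhost", 9999))
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept)

while True:
    for key, _ in sel.select():    # blocks here; zero CPU until I/O is ready
        key.data(key.fileobj)      # dispatch to the registered handler
```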
40,703,741 | 2016-11-20T11:56:00.000 | 3 | 1 | 1 | 0 | python,c++,python-2.7 | 40,703,761 | 3 | false | 0 | 1 | Make a game installer which will install all required dependencies. | 1 | 0 | 0 | I have written game with base on c++ and which uses python scripts to calculate object properties and draw them. But main problem is that it won't run on pc's without python2.7 installed on them. I'm out of ideas. How i should do that, to make it run on pc's without python on them? | Running program on computers without python | 0.197375 | 0 | 0 | 158 |
40,703,762 | 2016-11-20T11:58:00.000 | 0 | 0 | 1 | 0 | python,c++,epoll | 40,704,285 | 1 | true | 0 | 1 | You can always maintain a separate map from fd to A or B. Then when an event gets triggered, lookup based on fd.
Doesn't look like epoll has a richer interface, even in Python 3+ | 1 | 0 | 0 | I use python's epoll but it can't use event.data.ptr like in c++.
Sometimes I will register class A.fd and sometimes I will register class B.fd.
So, when epoll.poll() returns, how can I know whether the fd belongs to class A or B? | python,epoll.register(fd, eventmask) only has two parameters, how could I to use event.data_ptr like c++? | 1.2 | 0 | 0 | 171
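A sketch of the fd-to-object map the answer suggests (Linux-only, since select.epoll is Linux-specific); fd_map plays the role of C's event.data.ptr, and obj stands for any A or B instance with a .fd attribute and a handle() method:

```python
import select

ep = select.epoll()
fd_map = {}

def register(obj):            # obj: an instance of A or B with a .fd
    ep.register(obj.fd, select.EPOLLIN)
    fd_map[obj.fd] = obj

while True:
    for fd, events in ep.poll():
        owner = fd_map[fd]    # recover the A or B instance
        owner.handle(events)  # dispatch without caring which class it is
```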
40,703,961 | 2016-11-20T12:22:00.000 | 2 | 0 | 1 | 0 | python,azure,jupyter-notebook,azure-machine-learning-studio | 40,713,966 | 1 | true | 0 | 0 | The algorithms used as modules in Azure ML Studio are not currently able to be used directly as code for Python programming.
That being said: you can attempt to publish the outputs of the out-of-the-box algorithms as web services, which can be consumed by Python code in the Azure ML Studio notebooks. You can also create your own algorithms and use them as custom Python or R modules. | 1 | 0 | 1 | I wonder if it is possible to design ML experiments without using the drag&drop functionality (which is very nice btw)? I want to use Python code in the notebook (within Azure ML studio) to access the algorithms (e.g., matchbox recommender, regression models, etc) in the studio and design experiments? Is this possible?
I appreciate any information and suggestions! | Creating azure ml experiments merely using python notebook within azure ml studio | 1.2 | 0 | 0 | 236
40,704,035 | 2016-11-20T12:31:00.000 | 2 | 0 | 1 | 0 | python,class,oop | 40,704,089 | 3 | true | 0 | 0 | External dictionary holds the data (seems against the spirit of OOP)
Or maybe you can implement all approaches at once: you define both associations (Person has a Room, and Room has many Persons), and you also index everything using a proper data structure to access your objects at light speed!
That is, you can walk across your object graph using object-oriented associations (i.e. composition), while also storing your graph in data structures that fulfill your specific requirements.
This isn't against the spirit of OOP. It's in favor of good coding practices instead of artificial dogmas. | 2 | 1 | 0 | How would you set up the following class structure, and why?
Class Person represents a Person
Class Room represents a Room
Where I would like to store the Room location of a Person, with the ability to efficiently ask the following two questions:
1) What Room is Person X in? and,
2) Which Persons are in Room Y?
The options I can see are:
Class Person stores the Room location (question 2 would be inefficient),
Class Room stores its Persons (question 1 would be inefficient),
Both of the above (opportunity for data integrity issues) and,
External dictionary holds the data (seems against the spirit of OOP) | What would be the most efficient way to set up classes that belong to a class | 1.2 | 0 | 0 | 60 |
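A sketch of the "both directions, kept consistent" option this answer endorses: Room is the single writer of the relationship, so the two views can never disagree, and both questions are O(1):

```python
class Person:
    def __init__(self, name):
        self.name = name
        self.room = None            # written only by Room.add()

class Room:
    def __init__(self, name):
        self.name = name
        self.occupants = set()

    def add(self, person):
        if person.room is not None:         # moving rooms? leave the old one
            person.room.occupants.discard(person)
        person.room = self
        self.occupants.add(person)

# 1) What room is person X in?    -> x.room       (O(1))
# 2) Which persons are in room Y? -> y.occupants  (O(1))
```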
40,704,035 | 2016-11-20T12:31:00.000 | 1 | 0 | 1 | 0 | python,class,oop | 40,704,086 | 3 | false | 0 | 0 | Definately an external structure. That is the spirit of OOP. Classes represent objects. Neither a person is a property of a room, nor a room a property of a person, so there is no reason putting one in another's class. Instead, the dictionary represents a relationship between them, so it should be a new entity.
A simple format would be a dictionary where each key is a room and the value is a list of the people inside it. However, you could probably create a class around it if you need more complex functionality | 2 | 1 | 0 | How would you set up the following class structure, and why?
Class Person represents a Person
Class Room represents a Room
Where I would like to store the Room location of a Person, with the ability to efficiently ask the following two questions:
1) What Room is Person X in? and,
2) Which Persons are in Room Y?
The options I can see are:
Class Person stores the Room location (question 2 would be inefficient),
Class Room stores its Persons (question 1 would be inefficient),
Both of the above (opportunity for data integrity issues) and,
External dictionary holds the data (seems against the spirit of OOP) | What would be the most efficient way to set up classes that belong to a class | 0.066568 | 0 | 0 | 60 |
40,704,339 | 2016-11-20T13:07:00.000 | 0 | 0 | 1 | 0 | python,string,matrix,coordinates,words | 40,761,500 | 2 | false | 0 | 0 | I would rather look at words that are already there and then randomly select a word from the set of words that fit there.
Of course you might not fill the whole matrix like this. If you have put one word somewhere where it blocks all other words (no other word fits), you might have to backtrack, but that will kill the running time.
If you really want to fill the whole matrix, I would iterate over all possible starting positions, see how many words fit there, and then recurse over the possibilities of the starting position with the least number of candidates. That will cause your program recognize and leave "dead ends" early, which improves the running time drastically. That is a powerful technique from fixed parameter algorithms, which I like to call Branching-vector minimization. | 2 | 1 | 1 | I'm not sure of the rules to create a matrix for a word search puzzle game. I am able to create a matrix with initial values of 0.
Is it correct that I'll randomly select a starting point (coordinates) and a random direction (horizontal, vertical, or diagonal) for a word, and then check whether it would overlap with another word in the matrix? If it does, I check whether the overlapping characters are the same (although there's only a small chance); if there is no conflict, I assign the word there. The problem is that this seems to lessen the chance of words overlapping.
I have also read that I need to check first the words that have the same characters. But if that's the case, it seems like the words that I am going to put in the matrix are always overlapping. | Generate a word search puzzle matrix | 0 | 0 | 0 | 936 |
40,704,339 | 2016-11-20T13:07:00.000 | 0 | 0 | 1 | 0 | python,string,matrix,coordinates,words | 40,761,650 | 2 | false | 0 | 0 | Start with the longest word.
First of all, you must find all points and directions where this word may fit. For example, the word 'WORD' may fit when the first position holds NULL or W, the second NULL or O, the third NULL or R, and the fourth NULL or D.
Then you should group them into positions with no NULLs, with one NULL, with two NULLs, and so on.
Then randomly select a position from the group with the smallest number of NULLs. If there are no possible positions, skip the word.
This attempt will allow you to put more words and prevent situations, where random search can't find the proper place (when there is only a few of them). | 2 | 1 | 1 | I'm not sure of the rules to create a matrix for a word search puzzle game. I am able to create a matrix with initial values of 0.
Is it correct that I'll randomly select a starting point (coordinates) and a random direction (horizontal, vertical, or diagonal) for a word, and then check whether it would overlap with another word in the matrix? If it does, I check whether the overlapping characters are the same (although there's only a small chance); if there is no conflict, I assign the word there. The problem is that this seems to lessen the chance of words overlapping.
I have also read that I need to check first the words that have the same characters. But if that's the case, it seems like the words that I am going to put in the matrix are always overlapping. | Generate a word search puzzle matrix | 0 | 0 | 0 | 936 |
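A sketch of the second answer's rule on a grid initialised with 0s: a word fits along a direction if every cell is empty or already holds the same letter, and the placement is chosen from the group with the fewest empty cells (i.e. the most overlap):

```python
import random

def candidates(grid, word):
    n, out = len(grid), []
    for r in range(n):
        for c in range(n):
            for dr, dc in [(0, 1), (1, 0), (1, 1)]:   # right, down, diagonal
                cells = [(r + i * dr, c + i * dc) for i in range(len(word))]
                if all(0 <= x < n and 0 <= y < n and
                       grid[x][y] in (0, word[i])
                       for i, (x, y) in enumerate(cells)):
                    blanks = sum(grid[x][y] == 0 for x, y in cells)
                    out.append((blanks, cells))
    return out

def place(grid, word):
    cands = candidates(grid, word)
    if not cands:
        return False                                  # skip the word
    fewest = min(b for b, _ in cands)
    _, cells = random.choice([c for c in cands if c[0] == fewest])
    for (x, y), ch in zip(cells, word):
        grid[x][y] = ch
    return True
```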
40,708,834 | 2016-11-20T20:10:00.000 | 2 | 0 | 0 | 0 | python,proxy,routing,vpn | 40,709,023 | 1 | true | 0 | 0 | The short answer is no.
The long answer is: network routing is managed by OS and could be managed by other utilities, like iptables. Adding such capabilities to standard libraries is out of the scope of programming language. So, what you are probably looking for is a binding of a VPN library (e.g. libpptp) or making syscalls in Cython, which is not much different than writing in C. | 1 | 1 | 0 | Okay so i know that you can route web-requests through a proxy in Python, is there any way to route ALL traffic from your system through a server. Much like a VPN client such as Hotspot Shield or CyberGhost, but a custom-build client using the Python language?
Any links/help is greatly appreciated.
Thanks. | How to connect to a VPN/Proxy server via Python? | 1.2 | 0 | 1 | 3,290 |
40,711,286 | 2016-11-21T00:50:00.000 | 1 | 1 | 0 | 1 | python,amazon-ec2 | 40,711,336 | 1 | false | 0 | 0 | If your EC2 instance is running a linux OS, you can use the following command to install python:
sudo apt-get install python*.*
Where the * represents the version you want to install, such as 2.7 or 3.4. Then use the python command with the first argument as the location of the python file to run it. | 1 | 1 | 0 | I've ssh'd into my EC2 instance and have the python script and .txt files I'm using on my local system. What I'm trying to figure out is how to transfer the .py and .txt files to my instance and then run them there? I've not even been able to install python on the instance yet | How can I run a python script on an EC2 instance? | 0.197375 | 0 | 0 | 3,179 |
40,712,162 | 2016-11-21T03:06:00.000 | 0 | 0 | 0 | 0 | python,django,migration | 40,712,191 | 1 | true | 1 | 0 | Is your database file also included with "project files". If you use the local sqlite3 file generated by Django or really any other local database file that isn't in production and the other developers don't have this why would they see your updates to the DB? | 1 | 0 | 0 | I have manually added entries (rows) to the models in my Django project through the admin interface.
I have also run the following commands
python3 manage.py makemigrations &
python3 manage.py migrate
The issue is I am the only one that can see the data in the database, and other developers cannot see them.
They are all using the same project files as present on my computer. | Django Model Entries Not Available to Other Developers | 1.2 | 0 | 0 | 24 |
40,712,203 | 2016-11-21T03:12:00.000 | 0 | 0 | 1 | 0 | python,pycharm | 58,099,676 | 4 | false | 0 | 0 | Please note that this button does not reset all the variables. But you can then enter reset at the console prompt. | 2 | 32 | 0 | I've seen other IDEs have the option to right click and reinitialize the environment. Does anyone know if this is possible in PyCharm, and if so, how it's done? | How to reinitialize the Python console in PyCharm? | 0 | 0 | 0 | 31,725
40,712,203 | 2016-11-21T03:12:00.000 | 0 | 0 | 1 | 0 | python,pycharm | 71,106,939 | 4 | false | 0 | 0 | I tried "rerun" and found that it didn't reload the new environment.
I suggest that "new console" button could help you to reload the environment that it had installed the new packages. | 2 | 32 | 0 | I've seen other IDEs have the option to right click and reinitialize the environment. Does anyone know if this is possible in PyCharm, and if so, how it's done? | How to reinitialize the Python console in PyCharm? | 0 | 0 | 0 | 31,725 |
40,712,660 | 2016-11-21T04:15:00.000 | 0 | 0 | 1 | 0 | python,regex,python-3.x | 40,712,720 | 1 | false | 0 | 0 | If the letter is an O, O-10 is valid. Regex: O-10
If the letter is an O, O-1 through O-9 is valid. Regex: O-[1-9]
If the letter is an E, E-1 through E-9 is valid. Regex: E-[1-9]
If the letter is a W, W-1 through W-5 are valid. Regex: W-[1-5]
Put them all together using alternation: O-10|O-[1-9]|E-[1-9]|W-[1-5]
The only trick is to make sure the longest match is first. You have to put O-10 first because otherwise O-10 would be matched by O-[1-9] and it wouldn't match the zero.
Finally, since O-[1-9] and E-[1-9] are both valid, we can combine them, giving: O-10|[EO]-[1-9]|W-[1-5] | 1 | 0 | 0 | I am stuck trying to make the right range of numbers correlate to each of the three letters!
This is what I have so far: \w*[EWO]-\d{1,2}
The question is as follows:
Personnel employed by the United States government are assigned a pay grade that determines their annual salary. The pay grade is a code that begins with the letter E, W, or O (each of which is uppercase). The first letter is followed by a single dash and then exactly one or two digits. Valid E pay grades move from 1 up through 9. Valid W pay grades move from 1 up through 5. Valid O pay grades move from 1 up through 10. Some valid pay grade classifications include E-1, W-5, O-1, and O-10. Examples of invalid pay grade classifications would be E-10, W-6, and O-20. | Trying to get the correct regular expression | 0 | 0 | 0 | 79 |
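A quick check of the final pattern with Python's re, using fullmatch so a partial hit (like the E-1 inside E-10) can't count as valid:

```python
import re

pattern = re.compile(r"O-10|[EO]-[1-9]|W-[1-5]")
for grade in ["E-1", "W-5", "O-1", "O-10", "E-10", "W-6", "O-20"]:
    ok = pattern.fullmatch(grade) is not None
    print(grade, "valid" if ok else "invalid")
```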
40,713,063 | 2016-11-21T04:57:00.000 | 0 | 0 | 1 | 0 | python | 40,713,136 | 1 | false | 0 | 0 | In Python 2 the internal method for an iterator is next(); in Python 3 it is __next__(). The builtin function next() is aware of this and always calls the right method, making the code compatible with both versions. It also adds the default argument for easier handling of the end of iteration. | 1 | 0 | 0 | When you make a generator by calling a function or method that has the yield keyword in it, you get an object that has a next method.
So far as I can tell, there isn't a difference between using this method and using the next builtin function.
e.g. my_generator.next() vs next(my_generator)
So is there any difference? If not, why are there two ways of calling next on a generator? | difference between the next function and the next method | 0 | 0 | 0 | 636 |
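A short sketch of the one practical difference, the default argument that only the builtin accepts:

```python
gen = (x * x for x in range(3))
print(next(gen))      # 0
print(next(gen, -1))  # 1
print(next(gen, -1))  # 4
print(next(gen, -1))  # -1 instead of raising StopIteration
```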
40,717,835 | 2016-11-21T10:22:00.000 | 0 | 0 | 1 | 0 | python,package,pip | 40,717,894 | 3 | false | 0 | 0 | For Python 2.x: In C:\Python27\Scripts folder you will have some executables like pip2.7.exe or pip2.exe
similarly for Python 3.x.
cmd> pip install <module>
Instead of just pip, you can use the respective version of pip. | 1 | 0 | 0 | I need some advice:
I have 2 versions of Python: 2 and 3.
How can I install a package through pip for a specific version? What should I use? | Python: install some packages | 0 | 0 | 0 | 84
40,719,659 | 2016-11-21T11:59:00.000 | 1 | 0 | 0 | 0 | python,lua,jwt,sip,freeswitch | 40,747,794 | 2 | false | 1 | 0 | It seems that the following solution should be used.
In order to allow FS to work with JWT for authentication, it is necessary to send the JWT inside a custom header from the user agent to FS. It is also important to set some known password on the user agent.
When the UA connects to FS, and when the directory is built dynamically using a Lua script (xml-handler-script, xml-handler-bindings), it is possible to validate the JWT and provide the right directory entry for the user simply by reading the custom header fields.
If the JWT is valid, the correct (known) password is returned, allowing FS to proceed; otherwise an invalid password is returned and FS drops the connection.
Hope that helps to somebody, | 1 | 1 | 0 | I am trying to make an integration between a sip client and FS system. SIP Client sends a JWT token as a password during the authentication stage.
In order to authenticate a client, FS creates a directory entry with the password field and compares it to the password received from the client, in my case I need to override this behaviour by getting the "token" which appears as password, verifying it and returning the answer to FS about the result of the verification so it will know whether to accept or to reject the user.
I am not sure how to override this behaviour in FS without change of the source code. I would prefer to write a python or lua plugins to deal with that.
Many thanks, | Freeswitch JWT Integration | 0.099668 | 0 | 0 | 533 |
40,722,481 | 2016-11-21T14:26:00.000 | 0 | 0 | 0 | 0 | apache-kafka,kafka-consumer-api,kafka-python | 40,737,084 | 1 | true | 0 | 0 | You need to make sure that the data is fully processed before you commit it, to avoid "data loss" in case of a consumer failure.
Thus, if you enable auto.commit, make sure that you process all data completely after a poll() before you issue the next poll() because each poll() implicitly commits all data from its previous poll().
If this is not possible, you should disable auto.commit and commit manually after data got completely processed via consumer.commit(...). For this, keep in mind that you do not need to commit each message individually, and that a commit with offset X implicitly commits all messages with offsets < X (e.g., after processing the message at offset 5, you commit offset 6 -- the committed offset is not the last successfully processed message, but the next message you want to process). And a commit of offset 6 commits all messages with offsets 0 to 5. Thus, you should not commit offset 6 before all messages with smaller offsets got completely processed. | 1 | 0 | 0 | I am facing a problem while using the consumer.poll() method. After fetching data with poll(), the consumer won't have any data to commit, so please help me remove a specific number of lines from the Kafka topic. | How to delete specific number of lines from kafka topic by using python or using any inbuilt method? | 1.2 | 0 | 0 | 450
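A sketch of the manual-commit pattern with kafka-python; the topic, servers and processing step are placeholders. Committing only after the batch is fully handled implements the answer's rule:

```python
from kafka import KafkaConsumer

def process(value):
    print(value)                    # stand-in for the real processing step

consumer = KafkaConsumer("my-topic",
                         bootstrap_servers="localhost:9092",
                         group_id="my-group",
                         enable_auto_commit=False)
while True:
    batch = consumer.poll(timeout_ms=1000)
    for tp, records in batch.items():
        for record in records:
            process(record.value)
    if batch:
        consumer.commit()           # safe: everything polled above is done
```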
40,724,726 | 2016-11-21T16:21:00.000 | 3 | 1 | 1 | 0 | python,string,algorithm,big-o,anagram | 40,732,241 | 1 | false | 0 | 0 | I like your idea of filtering the word list down to just the words that could possibly be made with the input letters, and I like the idea of trying to string them together, but I think there are a few major optimizations you could put into place that would likely speed things up quite a bit.
For starters, rather than choosing a word and then rescanning the entire dictionary for what's left, I'd consider just doing a single filtering pass at the start to find all possible words that could be made with the letters that you have. Your dictionary is likely going to be pretty colossal (150,000+, I'd suspect), so rescanning it after each decision point is going to be completely infeasible. Once you have the set of words you can legally use in the anagram, from there you're left with the problem of finding which combinations of them can be used to form a complete anagram of the sentence.
I'd begin by finding unordered lists of words that anagram to the target rather than all possible ordered lists of words, because there's many fewer of them to find. Once you have the unordered lists, you can generate the permutations from them pretty quickly.
To do this, I'd use a backtracking recursion where at each point you maintain a histogram of the remaining letter counts. You can use that to filter out words that can't be added in any more, and this essentially saves you the cost of having to check the whole dictionary each time. I'd imagine this recursion will dead-end a lot, and that you'll probably find all your answers without too much effort.
You might consider some other heuristics along the way. For example, you might want to start with larger words first to pull out as many letters as possible and keep the branching factor low. To do that, you could sort your word list from longest to shortest and try the words in that order. You could alternatively try to use the most constrained letters up first to decrease the branching factor. These sorts of heuristics will probably work really well in practice.
Overall you're still looking at exponential work in the worst case, but it shouldn't be too bad for shorter strings. | 1 | 2 | 0 | I want to find all possible anagrams from a phrase, for example if I input "Donald Trump" I should get "Darn mud plot", "Damp old runt" and probably hundreds more.
I have a dictionary of around 100,000 words, no problems there.
But the only way I can think of is to loop through the dictionary and add all words that can be built from the input to a list. Then loop through the list, and if the word length is less than the length of the input, loop through the dictionary again and add all possible words that can be made from the remaining letters that would make it the length of the input or less. And keep looping through until I have all combinations of valid words of length equal to the input length.
But this is O(n!) complexity, and it would take almost forever to run. I've tried it.
Is there any way to approach this problem such that the complexity will be less? I may have found something on the net for Perl, but I absolutely cannot read Perl code, especially not Perl golf. | Python: Find all anagrams of a sentence | 0.53705 | 0 | 0 | 1,207
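A sketch of the filtering pass plus histogram backtracking the answer describes (Counter is the letter-count histogram; sorting longest-first is the branching heuristic):

```python
from collections import Counter

def fits(word, counts):
    need = Counter(word)
    return all(counts[c] >= n for c, n in need.items())

def anagrams(phrase, dictionary):
    counts = Counter(c for c in phrase.lower() if c.isalpha())
    # Single filtering pass: keep only words buildable from these letters.
    words = sorted((w for w in dictionary if fits(w, counts)),
                   key=len, reverse=True)
    out = []

    def backtrack(remaining, start, chosen):
        if not remaining:                 # all letters used up
            out.append(chosen[:])
            return
        for i in range(start, len(words)):
            if fits(words[i], remaining):
                # start=i (not i+1) allows the same word to repeat.
                backtrack(remaining - Counter(words[i]), i,
                          chosen + [words[i]])
        # otherwise: dead end, unwind

    backtrack(counts, 0, [])
    return out                            # unordered word multisets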
40,726,899 | 2016-11-21T18:22:00.000 | 15 | 0 | 0 | 0 | python,scikit-learn | 40,727,147 | 1 | true | 0 | 0 | In general, different models have score methods that return different metrics. This is to allow classifiers to specify what scoring metric they think is most appropriate for them (thus, for example, a least-squares regression classifier would have a score method that returns something like the sum of squared errors). In the case of GaussianNB the docs say that its score method:
Returns the mean accuracy on the given test data and labels.
The accuracy_score method says its return value depends on the setting for the normalize parameter:
If False, return the number of correctly classified samples. Otherwise, return the fraction of correctly classified samples.
So it would appear to me that if you set normalize to True you'd get the same value as the GaussianNB.score method.
One easy way to confirm my guess is to build a classifier and call both score and accuracy_score (the latter with normalize=True) and see if they match. Do they? | 1 | 19 | 1 | What's the difference between the score() method in the sklearn.naive_bayes.GaussianNB module and the accuracy_score method in the sklearn.metrics module? Both appear to be the same. Is that correct? | Difference between score and accuracy_score in sklearn | 1.2 | 0 | 0 | 12,527
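The confirmation the answer suggests, as a sketch on a toy dataset:

```python
from sklearn.datasets import load_iris
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
clf = GaussianNB().fit(X, y)
print(clf.score(X, y))                    # mean accuracy
print(accuracy_score(y, clf.predict(X)))  # same value (normalize=True default)
```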
40,727,598 | 2016-11-21T19:09:00.000 | 1 | 0 | 1 | 0 | python-2.7,debugging,spyder,pdb | 40,733,182 | 1 | true | 0 | 0 | (Spyder developer here) pdb.set_trace() was not supported when Spyder 3.0 was released in September 2016. We didn't support it officially before that, and the fact that it was working was a matter of luck.
However, that was solved in Spyder 3.2.0, released in July 2017. | 1 | 2 | 0 | When I write (at any script):
import pdb;
pdb.set_trace()
Sometimes when I press n+enter the program sends me to "interactiveshell".
Other times, even though pressing n+enter lets me move forward, I can no longer see what's going on with the variables being generated in the Variable Explorer, as I used to a few days ago (even after completely stopping the debugging process).
This wasn't happening a few days ago (the debugger worked properly with the same usage), but I haven't been able to use the debugger as usual since.
Thanks in advance.
Raúl | pdb set_trace() is not working properly in spyder 3 | 1.2 | 0 | 0 | 791 |
40,729,919 | 2016-11-21T21:42:00.000 | 0 | 0 | 0 | 1 | python,linux,lttng | 40,730,003 | 1 | false | 0 | 0 | Handling read, write, pread, pwrite, readv, writev should be enough.
You just have to check whether the FD refers to the cache or disk. I think it would be easier in kernelspace, by writing a module, but... | 1 | 0 | 0 | I have a Python program that reads Linux kernel system calls (using LTTng), so with this program I can read all kernel calls. I run some operations and then analyze the system calls with the Python program. The operations include some I/O work, and I need to know how many bytes are read from the cache and how many from disk. Which system calls show me the bytes read from cache and disk? | Which linux kernel system calls shows bytes read from disk | 0 | 0 | 0 | 130
40,729,995 | 2016-11-21T21:46:00.000 | 0 | 0 | 0 | 1 | python,macos,scrapy,pip | 40,731,300 | 2 | false | 1 | 0 | Temporarily (just for this module), you could manually install it. Download it from wherever you can, extract it if it is zipped, then use python setup.py install. | 1 | 0 | 0 | Hi, I am currently trying to install Scrapy on my macOS, but everything is a problem. The first thing I enter in the terminal is:
pip install scrapy
And it returns me:
You are using pip version 7.0.1, however version 9.0.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Requirement already satisfied (use --upgrade to upgrade): scrapy in /usr/local/lib/python2.7/site-packages/Scrapy-1.2.1-py2.7.egg
Collecting Twisted>=10.0.0 (from scrapy)
Retrying (Retry(total=4, connect=None, read=None, redirect=None)) after connection broken by 'ConnectTimeoutError(, 'Connection to pypi.python.org timed out. (connect timeout=15)')': /simple/twisted/
Could not find a version that satisfies the requirement Twisted>=10.0.0 (from scrapy) (from versions: )
No matching distribution found for Twisted>=10.0.0 (from scrapy)
Following the suggestion to upgrade, I run it ...
pip install --upgrade pip
And it returns me the following:
You are using pip version 7.0.1, however version 9.0.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Requirement already up-to-date: pip in /usr/local/lib/python2.7/site-packages/pip-7.0.1-py2.7.egg
The truth is that yesterday I was doing a thousand tests, and it gave me another type of error:
"SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed"
But this last error no longer appears. | Install Scrapy on Mac OS X error SSL pip | 0 | 0 | 0 | 490
40,731,567 | 2016-11-21T23:53:00.000 | 0 | 0 | 0 | 0 | python,selenium,pdf,salesforce,pdfkit | 40,732,242 | 1 | false | 1 | 0 | log in using requests
use requests session mechanism to keep track of the cookie
use session to retrieve the HTML page
parse the HTML (use BeautifulSoup)
identify img tags and css links
download locally the images and css documents
rewrite the img src attributes to point to the locally downloaded images
rewrite the css links to point to the locally downloaded css
serialize the new HTML tree to a local .html file
use whatever "HTML to PDF" solution to render the local .html file | 1 | 1 | 0 | I have been exploring ways to use python to log into a secure website (eg. Salesforce), navigate to a certain page and print (save) the page as pdf at a prescribed location.
I have tried using:
pdfkit.from_url: Use Requests to get a session cookie, parse it, then pass it as a cookie into wkhtmltopdf's options settings. This method does not work because pdfkit is not able to recognise the cookie I passed.
pdfkit.from_file: Use requests.get to fetch the HTML of the page I want to print, then use pdfkit to convert the HTML file to PDF. This works, but the page formatting and images are all missing.
Selenium: Use a webdriver to log in, then navigate to the wanted page and call the window.print function. This does not work because I can't pass any arguments to the window's SaveAs dialog.
Does anyone have an idea to get around this? | Log into secured website, automatically print page as pdf | 0 | 0 | 1 | 237
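A sketch of the step list in the answer; the login URL, form fields and the wkhtmltopdf local-file option are placeholders/assumptions, and CSS links can be localized the same way as the images:

```python
import os
import requests
from bs4 import BeautifulSoup
import pdfkit

session = requests.Session()
session.post("https://example.com/login",
             data={"username": "user", "password": "pass"})

resp = session.get("https://example.com/report")
soup = BeautifulSoup(resp.text, "html.parser")

os.makedirs("assets", exist_ok=True)
for i, img in enumerate(soup.find_all("img")):
    src = img.get("src")
    if not src:
        continue
    url = requests.compat.urljoin(resp.url, src)
    local = os.path.join("assets", "img%d" % i)
    with open(local, "wb") as f:
        f.write(session.get(url).content)  # fetched with the session cookie
    img["src"] = local                     # point the tag at the local copy

with open("page.html", "w", encoding="utf-8") as f:
    f.write(str(soup))

pdfkit.from_file("page.html", "page.pdf",
                 options={"enable-local-file-access": None})
```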