Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
37,894,857 | 2016-06-18T08:20:00.000 | 0 | 0 | 0 | 1 | google-app-engine,docker,google-app-engine-python | 37,896,863 | 1 | false | 1 | 0 | Well, in general, 10 minute deployment isn't that bad. I use AWS Elastic Beanstalk and it's about the same for a full deployment of a production environment. However, this is totally unacceptable for your everyday development.
Since you use Docker, I don't really understand why you wouldn't spin up the same container on your local machine and test it locally before releasing to staging.
If that is not an option for some reason, my second bet would be updating the code directly inside the container. I've used that trick a lot. As Python is a dynamic language, all you need is a fresh copy of your repo, so you can ssh into your container and check out the code. That way, the feedback loop is reduced to the time it takes to commit and check out the code. Additionally, if you set up some hooks on commit, you don't even need to check out the code manually.
All in all, this is just my two cents and it would be nice to hear more opinions on that really important issue. | 1 | 1 | 0 | We're using Google App Engine (GAE) with Managed VMs for a Python compat environment, and deployments take too much time. I haven't done strict calculations, but I'm sure each deployment takes over 10 mins.
What can we do to accelerate this? Is this more a GAE or a Docker issue? Haven't tried deploying Docker in other platforms so I'm not sure standard/acceptable deployment times.
Having to wait so much to test an app in the staging servers damages our productivity quite a bit. Any help is appreciated. :) | Faster Google App Engine Managed VM deploys (Python compat env)? | 0 | 0 | 0 | 36 |
37,895,568 | 2016-06-18T09:45:00.000 | 2 | 0 | 1 | 0 | python,tuples | 67,827,643 | 5 | false | 0 | 0 | In the general case, it's the commas that make tuples, not the parentheses. Things become confusing in the case of empty tuples because a standalone comma is syntactically incorrect. So for the special case of an empty tuple, the "it is commas that make tuples" rule does not apply, and the special case () syntax is used instead. | 1 | 56 | 0 | How can I create a tuple consisting of just an empty tuple, i.e. (())? I have tried tuple(tuple()), tuple(tuple(tuple())), tuple([]) and tuple(tuple([])) which all gave me ().
The reason that I use such a thing is as follows: Assume you have n bags with m items. To represent a list of items in a bag, I use a tuple of length n where each element of that tuple is a representative for a bag. A bag might be empty, which is labeled by (). Now, at some initial point, I have just one bag with empty items! | How to create a tuple of an empty tuple in Python? | 0.07983 | 0 | 0 | 88,693 |
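Following up on the comma rule in the answer above, a short illustrative sketch (the variable names are just for demonstration):

```python
empty = ()        # the empty tuple is the one case where the parentheses alone do the job
nested = ((),)    # the trailing comma builds the outer tuple; its single element is ()

print(len(nested), nested[0] == ())   # 1 True
```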
37,896,090 | 2016-06-18T10:46:00.000 | 1 | 0 | 0 | 0 | python,c++,arrays,opencv,3d | 37,902,814 | 3 | false | 0 | 0 | If I understand correctly, you want to create a voxel representation of 3D models? Something like the visible human displays?
I would use one of the OBJ file loaders recommended above to import the model into an OpenGL program. Rotate and scale to whatever alignment you want along XYZ.
Then draw the object with a fragment shader that discards any pixel with Z < 0.001 or Z >= 0.002 (or whatever resolution works - I'm just trying to explain the method). This gives you the first image slice, which you store or save. Clear and draw again this time discarding Z < 0.002 or Z >= 0.003 … Because it's the same model in the same position, all your slices will be aligned.
However, are you aware that OBJ (and nearly all other 3D formats) are surface descriptions, not solid? They're hollow inside like origami models. So your 3D array representation will be mostly empty.
Hope this helps. | 1 | 5 | 1 | Is there any way by which 3D models can be represented as 3D arrays? Are there any libraries that take .obj or .blend files as input and give an array representation of the same?
I thought that I would slice object and export the slices to an image. I would then use those images in opencv to build arrays for each slice. In the end I would combine all the arrays of all the slices to finally get a 3D array representation of my .obj file. But I gave up halfway through because it is a painfully long process to get the image slices aligned to each other.
Is there any other index based representation I could use to represent 3D models in code?
A 3D array would be very convenient for my purposes. | How to represent a 3D .obj object as a 3D array? | 0.066568 | 0 | 0 | 3,543 |
37,897,064 | 2016-06-18T12:39:00.000 | 0 | 1 | 0 | 0 | python,search,twitter,tweepy | 37,902,045 | 2 | true | 0 | 0 | I've created a workaround that kind of works. The best way to do it is to search for mentions of a user, then filter those mentions by in_reply_to_id . | 1 | 2 | 0 | So I've been doing a lot of work with Tweepy and Twitter data mining, and one of the things I want to do is to be able to get all Tweets that are replies to a particular Tweet. I've seen the Search api, but I'm not sure how to use it nor how to search specifically for Tweets in reply to a specific Tweet. Anyone have any ideas? Thanks all. | Tweepy Get Tweets in reply to a particular tweet | 1.2 | 0 | 1 | 3,634 |
37,899,247 | 2016-06-18T16:33:00.000 | 2 | 1 | 0 | 0 | python,git,heroku,deployment | 44,854,965 | 3 | false | 0 | 0 | To successfully push python code to heroku you should have a requirements.txt and a Procfile. Go to your project folder in terminal/commandline and enter the following commands which will generate the necessary files. Commit them and push should work.
pip freeze > requirements.txt (you might need to install pip if you are using an older Python version)
echo "worker: python yourfile.py" > Procfile (worker could be replaced with web if it's a website) | 2 | 1 | 0 | I am making a Simple Python Bot which can be run like python file.py . I created a Folder in my PC having 3 files file.py list.txt Procfile . In Procfile i wrote worker: python file.py , I choosed worker as it a Command Line application and my plan is to run that Python File forever on the server. Than i did git init , heroku git:remote -a py-bot-xyz where py-bot-xyz is the application which i created in My Heroku Dashboard and than git add ., git commit -am "make it better" & finally git push heroku master .
That's where the error occurs, that prints out
remote: Compressing source files... done.
remote: Building source:
remote:
remote:
remote: ! Push rejected, no Cedar-supported app detected
remote: HINT: This occurs when Heroku cannot detect the buildpack
remote: to use for this application automatically.
remote: See https://devcenter.heroku.com/articles/buildpacks
remote:
remote: Verifying deploy....
remote:
remote: ! Push rejected to py-bot-xyz.
remote:
To https://git.heroku.com/py-bot-xyz.git
! [remote rejected] master -> master (pre-receive hook declined)
error: failed to push some refs to 'https://git.heroku.com/py-bot-xyz.git'
Now, when i go to Heroku's Dashboard Build Failed in Activity. What can i do now? :((( | Heroku Python Remote Rejected error | 0.132549 | 0 | 0 | 4,397 |
37,899,247 | 2016-06-18T16:33:00.000 | 0 | 1 | 0 | 0 | python,git,heroku,deployment | 55,756,831 | 3 | false | 0 | 0 | I was facing same kind of error. As I am new to this area..
I used "requirement.txt" instead of "requirements.txt".
Watch out for the exact spellings. | 2 | 1 | 0 | I am making a Simple Python Bot which can be run like python file.py . I created a Folder in my PC having 3 files file.py list.txt Procfile . In Procfile i wrote worker: python file.py , I choosed worker as it a Command Line application and my plan is to run that Python File forever on the server. Than i did git init , heroku git:remote -a py-bot-xyz where py-bot-xyz is the application which i created in My Heroku Dashboard and than git add ., git commit -am "make it better" & finally git push heroku master .
That's where the error occurs, that prints out
remote: Compressing source files... done.
remote: Building source:
remote:
remote:
remote: ! Push rejected, no Cedar-supported app detected
remote: HINT: This occurs when Heroku cannot detect the buildpack
remote: to use for this application automatically.
remote: See https://devcenter.heroku.com/articles/buildpacks
remote:
remote: Verifying deploy....
remote:
remote: ! Push rejected to py-bot-xyz.
remote:
To https://git.heroku.com/py-bot-xyz.git
! [remote rejected] master -> master (pre-receive hook declined)
error: failed to push some refs to 'https://git.heroku.com/py-bot-xyz.git'
Now, when i go to Heroku's Dashboard Build Failed in Activity. What can i do now? :((( | Heroku Python Remote Rejected error | 0 | 0 | 0 | 4,397 |
37,901,786 | 2016-06-18T21:26:00.000 | 1 | 0 | 1 | 0 | python,django,date,datetime | 38,021,560 | 2 | false | 1 | 0 | If you have only a handful of reoccurring descriptive dates, the easiest thing to do would be to create a dictionary that can translate them to the explicit dates you want whenever they pop up in your data.
If you have arbitrary descriptive dates or a large number of them, however, it seems that, as was discussed in the comments, NLP is the way to go. | 1 | 1 | 0 | I have some data that has descriptive dates (e.g., Monday before Thanksgiving, Last day in February, 4th Saturday in April) as part of describing start and end times. Some of the dates are explicit (e.g., October 31st). I want to store the descriptive and the explicit values so for any year I can then calculate when the exact dates are. I did some searching and came up short.
This feels like a common thing, and someone must have solved it.
I'm also curious if these kinds of descriptive dates have a proper name.
As in the tags, my app uses Python + Django.
Thanks! | How does one store descriptive dates like the "The last day in Feb", "Fourth Saturday in April"? | 0.099668 | 0 | 0 | 75 |
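To make the dictionary idea from the answer above concrete, here is a minimal sketch; the helper name, the mapping entries, and the use of plain lambdas are illustrative assumptions rather than part of the original answer:

```python
import datetime as dt

def nth_weekday(year, month, weekday, n):
    """Date of the n-th given weekday (Mon=0 .. Sun=6) in a month."""
    first = dt.date(year, month, 1)
    offset = (weekday - first.weekday()) % 7
    return first + dt.timedelta(days=offset + 7 * (n - 1))

DESCRIPTIVE_DATES = {
    "fourth Saturday in April": lambda y: nth_weekday(y, 4, 5, 4),
    "last day in February": lambda y: dt.date(y, 3, 1) - dt.timedelta(days=1),
    # Thanksgiving is the fourth Thursday of November; step back three days
    "Monday before Thanksgiving": lambda y: nth_weekday(y, 11, 3, 4) - dt.timedelta(days=3),
}

print(DESCRIPTIVE_DATES["fourth Saturday in April"](2016))   # 2016-04-23
```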
37,903,437 | 2016-06-19T02:32:00.000 | 0 | 0 | 1 | 0 | python,python-2.7,python-3.x | 37,903,737 | 1 | false | 0 | 0 | You can replace the character:
cadena = cadena.replace("|", "") | 1 | 0 | 0 | How can I change my input or make my input ignore some characters like "-" or "|"?
I want to do this because I have a lot of inputs in my projects in different modules and classes. | How to change input | 0 | 0 | 0 | 37 |
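Expanding slightly on the replace() one-liner in the answer above, a small helper (names chosen here purely for illustration) can strip several unwanted characters from any input in one place:

```python
def clean(text, forbidden="-|"):
    # drop every character listed in `forbidden` from the raw input
    for ch in forbidden:
        text = text.replace(ch, "")
    return text

print(clean("12-34|56"))   # 123456
```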
37,910,066 | 2016-06-19T17:44:00.000 | 1 | 0 | 0 | 0 | python,postgresql,heroku,sqlalchemy,heroku-postgres | 46,558,758 | 3 | true | 1 | 0 | Old question, but the answer seems to be that database_exists and create_database have special case code for when the engine URL starts with postgresql, but if the URL starts with just postgres, these functions will fail. However, SQLAlchemy in general works fine with both variants.
So the solution is to make sure the database URL starts with postgresql:// and not postgres://. | 2 | 1 | 0 | I'm running a python 3.5 worker on heroku.
self.engine = create_engine(os.environ.get("DATABASE_URL"))
My code works on local, passes Travis CI, but gets an error on heroku - OperationalError: (psycopg2.OperationalError) FATAL: database "easnjeezqhcycd" does not exist.
easnjeezqhcycd is my user, not the database name. As I'm not using Flask's SQLAlchemy, I haven't found a single person dealing with the same problem.
I tried destroying my addon database and created standalone postgres db on heroku - same error.
What's different about Heroku's URL that SQLAlchemy doesn't accept it? Is there a way to establish a connection using psycopg2 and pass it to SQLAlchemy? | Heroku SQLAlchemy database does not exist | 1.2 | 1 | 0 | 2,039 |
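Following the answer above, a minimal sketch of normalising the Heroku URL before handing it to SQLAlchemy (the environment variable name is the one used in the question):

```python
import os
from sqlalchemy import create_engine

url = os.environ.get("DATABASE_URL", "")
# Heroku hands out URLs starting with "postgres://"; rewrite the scheme so
# SQLAlchemy (and helpers such as database_exists) treat it as PostgreSQL.
if url.startswith("postgres://"):
    url = url.replace("postgres://", "postgresql://", 1)

engine = create_engine(url)
```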
37,910,066 | 2016-06-19T17:44:00.000 | 0 | 0 | 0 | 0 | python,postgresql,heroku,sqlalchemy,heroku-postgres | 62,351,512 | 3 | false | 1 | 0 | so I was getting the same error and after checking several times I found that I was giving a trailing space in my DATABASE_URL. Which was like DATABASE_URL="url<space>".
After removing the space my code runs perfectly fine. | 2 | 1 | 0 | I'm running a python 3.5 worker on heroku.
self.engine = create_engine(os.environ.get("DATABASE_URL"))
My code works on local, passes Travis CI, but gets an error on heroku - OperationalError: (psycopg2.OperationalError) FATAL: database "easnjeezqhcycd" does not exist.
easnjeezqhcycd is my user, not the database name. As I'm not using Flask's SQLAlchemy, I haven't found a single person dealing with the same problem.
I tried destroying my addon database and created standalone postgres db on heroku - same error.
What's different about Heroku's URL that SQLAlchemy doesn't accept it? Is there a way to establish a connection using psycopg2 and pass it to SQLAlchemy? | Heroku SQLAlchemy database does not exist | 0 | 1 | 0 | 2,039 |
37,910,615 | 2016-06-19T18:42:00.000 | 2 | 0 | 0 | 0 | python,django,python-3.x,django-views,django-urls | 37,910,649 | 1 | false | 1 | 0 | Yes, that is perfectly fine.
Django will look for a matching url in the first one, and if it doesn't find it, it will move on to the next one. | 1 | 0 | 0 | Is it acceptable to use two includes for the same base url routing schema?
e.g. - I have allauth installed which uses r'^accounts/', include('allauth.urls')
and I want to extend this further with my own app, which extends the allauth urls even further.
An example of this would be accounts/profile or some other extension of the base accounts/ url.
Is it fine to do the following?
r'^accounts/', include('myapp.urls')
In additon to:
r'^accounts/', include('allauth.urls')
As far as I can tell both will just be included with the base url routing schema and it will just look for the allauth urls first? | double include url schema in django | 0.379949 | 0 | 0 | 144 |
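As a sketch of what the answer above describes, the two includes can simply be listed one after the other in the project urls.py (the url() syntax and app names are the ones mentioned in the question):

```python
from django.conf.urls import include, url

urlpatterns = [
    url(r'^accounts/', include('myapp.urls')),    # checked first, e.g. accounts/profile
    url(r'^accounts/', include('allauth.urls')),  # used when nothing above matched
]
```

Django resolves patterns in order, so putting your own include first lets it extend or override the allauth routes.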
37,913,600 | 2016-06-20T01:39:00.000 | 1 | 0 | 1 | 0 | python,nltk | 37,920,600 | 1 | false | 0 | 0 | Since the computers in question already have python, you can install the NLTK for your own use without admin privileges. The NLTK itself is pretty small, but you can further trim it down to produce a customized installation that only contains the modules you need for this task.
You don't say what resource you were planning to use as a dictionary/thesaurus replacement, but perhaps its format is simple enough to process without the nltk. If so, you can just get the file from nltk_data on your own computer and copy it to other systems. | 1 | 0 | 0 | I was wondering if it is possible to use the dictionary installed with Microsoft office via a python script. I really would like to use the thesaurus to find synonyms for certain words. I know I could use the nltk package, but I will not be able to install it on every system due to a very crabby systems admin. Maybe there is another lexicon installed with windows that could be used instead? | Microsoft word/office dictionary thesaurus via python script | 0.197375 | 0 | 0 | 176 |
37,918,215 | 2016-06-20T08:51:00.000 | 4 | 0 | 1 | 0 | python,windows,numpy,cmd,anaconda | 37,918,313 | 3 | false | 0 | 0 | I think you are referring to the command-line use of python?
If you have admin privileges on your machine you can add Python to your environment variables, making it available in the console anywhere. (Sorry for any differences in spelling, I am not on an English machine.)
Press Shift+Pause ("System")
Click "Advanced System Options"
Click "Environment variables"
In the lower field with "System variables" there is a variable called PATH. Append the complete path to your python.exe without the file to that by adding a ; behind the last path in the variable and then adding your path. Do not add any spaces!
Example: C:\examplepath\;C:\Python27\ | 1 | 9 | 0 | I have just installed Anaconda on my computer because I need to use Numpy.
Well, when I use python I for some reason have to be in the same folder as python.exe and, of course, now that I want to use Anaconda I have to be in the Anaconda3\Scripts folder where python.exe isn't. This is a nightmare, how can I use anaconda with python on a windows computer? Why does it have to be so complicated? | Using python with Anaconda in Windows | 0.26052 | 0 | 0 | 35,921 |
37,920,231 | 2016-06-20T10:30:00.000 | 2 | 0 | 0 | 0 | python,pack | 37,920,362 | 1 | false | 0 | 0 | The difference between these methods really revolves around whether you already have an existing buffer you wish to write formatted data into (struct.pack_into), or whether you simply want to create a new buffer with the formatted data (struct.pack).
You are dealing with small buffers. Unless you have good reason to suspect you need to optimise for buffer copies, you may as well be using struct.pack | 1 | 1 | 0 | I am writing a program where i need to send lots of small chunks of data to a server (mostly integers or strings), so i am using the struct-library.
Right now I am using struct.pack, but I am wondering if I should use struct.pack_into instead, as I read it reduces overhead.
However, I am not interested in "saving" the values - I just want to pack the data and quickly send it off. If I use struct.pack_into, would it keep the values around in any way because it uses a buffer, thus reducing performance?
Which of these 2 methods best suits my needs?
Thanks, | Python - struct.pack vs struck.packinto | 0.379949 | 0 | 0 | 333 |
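A tiny sketch contrasting the two calls discussed in the answer above (the format string and values are arbitrary examples):

```python
import struct

# struct.pack allocates and returns a new bytes object on every call
payload = struct.pack("!ih", 42, 7)

# struct.pack_into writes into a buffer you already own, e.g. a reusable bytearray
buf = bytearray(struct.calcsize("!ih"))
struct.pack_into("!ih", buf, 0, 42, 7)

print(payload == bytes(buf))   # True: same bytes, different allocation strategy
```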
37,922,029 | 2016-06-20T11:59:00.000 | 2 | 0 | 0 | 0 | python,django,web,django-rest-framework | 37,922,348 | 4 | false | 1 | 0 | Just override delete method of model A and check relation before delete. If it isn't empty - move object to another table/DB. | 1 | 6 | 0 | Suppose that I have django models such A ->B ->C ->D in the default database.
C is a foreign key in D,similarly B in C and C in A.
On the deletion of the object A,the default behaviour of Django is that all the objects related to A directly or indirectly will get deleted automatically(On delete cascade).Thus, B,C,D would get automatically deleted.
I want to implement deletion in a way such that on deletion of an object of A it would get moved to another database named 'del'.Also along with it all other related objects of B,C,D will also get moved.
Is there an easy way of implementing this in one go? | Soft delete django database objects | 0.099668 | 0 | 0 | 5,266 |
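A rough sketch of the override suggested in the answer above; the ArchivedA mirror model, the single name field, and the "del" database alias (which would have to exist in settings.DATABASES) are assumptions for illustration:

```python
from django.db import models

class ArchivedA(models.Model):
    # mirror of A that holds soft-deleted rows
    name = models.CharField(max_length=100)

class A(models.Model):
    name = models.CharField(max_length=100)

    def delete(self, *args, **kwargs):
        # copy this row (and, in a fuller version, its related B/C/D objects)
        # into the "del" database before actually removing it
        ArchivedA.objects.using("del").create(name=self.name)
        super(A, self).delete(*args, **kwargs)
```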
37,922,631 | 2016-06-20T12:29:00.000 | 1 | 0 | 0 | 0 | python,django,web,url-redirection | 37,922,848 | 3 | false | 1 | 0 | You could just navigate to the URL via HttpResponseRedirect | 1 | 0 | 0 | I am using Django 1.9 to build a link shortener. I have created a simple HTML page where the user can enter the long URL. I have also coded the methods for shortening this URL. The data is getting stored in the database and I am able to display the shortened URL to the user.
I want to know what I have to do next. What happens when a user visits the shorter URL? Should I use redirects or something else? I am totally clueless about this topic. | Building a link shortener in Django | 0.066568 | 0 | 0 | 819 |
37,923,209 | 2016-06-20T12:57:00.000 | 0 | 0 | 1 | 0 | python-2.7,nltk,stemming | 37,928,414 | 1 | false | 0 | 0 | You're using it correctly; it's the stemmer that's acting weird. It could be caused by too little training data, or the wrong balance, or simply the wrong conclusion by the stemmer's statistical algorithm. We can't expect perfection, but it's annoying when it happens with common words. It's also stemming "everything" to "everyth", as if it's a verb. At least here it's clear what it's doing. But "-e" is not a suffix in English...
The stemmer allows the option ignore_stopwords=True, which will suppress stemming of words in the stopword list (these are common words, usually irregular, that Porter thought fit to exclude from the training set because he got worse results when they are included.) Unfortunately it doesn't help with the particular examples you ask about. | 1 | 0 | 0 | I am importing from nltk.stem.snowball import SnowballStemmer
and I have a string as follows:
text_string="Hi Everyone If you can read this message youre properly using parseOutText Please proceed to the next part of the project"
I run this code on it:
words = " ".join(stemmer.stem(word) for word in text_string.split(" "))
and I get the following, which has a couple of 'e's missing. I can't figure out what is causing it. Any suggestions? Thanks for the feedback.
"hi everyon if you can read this messag your proper use parseouttext pleas proceed to the next part of the project" | trying to stem a string in natural language using python-2.7 | 0 | 0 | 0 | 704 |
37,925,504 | 2016-06-20T14:48:00.000 | 1 | 0 | 0 | 0 | python,heroku,scrapy,digital-ocean,dokku | 37,969,257 | 1 | false | 1 | 0 | I 'fixed' this issue by not using a Digital Ocean server. The website that I am trying to crawl, which is craigslist.org, just did not respond well to a DO server. It takes a long time to respond to a request. Other websites like Google or Amazon work just fine with DO.
My scraper works just fine on craigslist when using a VPS from another provider. | 1 | 0 | 0 | Not sure how to describe this but I am running a Scrapy spider on a Digital Ocean server ($5 server), the Scrapy project is deployed as a Dokku app.
However, it runs very slowly compared to the speed on my local computer and on a Heroku free tier dyno. On Dokku it crawls at a speed of 30 pages per minute while locally and on Heroku the speed is 200+ pages per minutes.
I do not know how to debug, analyze or where to start in order to fix the problem. Any help, clues or tips on how to solve this? | Running Scrapy on Dokku using a Digital Ocean server | 0.197375 | 0 | 0 | 430 |
37,925,969 | 2016-06-20T15:11:00.000 | -2 | 0 | 0 | 0 | python,xlwt | 37,931,428 | 1 | false | 0 | 0 | seems like a caching issue.
Try sheet.flush_row_data() every 100 rows or so ? | 1 | 0 | 0 | I'm quite new to Python and trying to fetch data in HTML and saved to excels using xlwt.
So far the program seems work well (all the output are correctly printed on the python console when running the program) except that when I open the excel file, an error message saying 'We found a problem with some content in FILENAME, Do you want us to try to recover as much as we can? If you trust the source of this workbook, click Yes.' And after I click Yes, I found that a lot of data fields are missing.
It seems that roughly the first 150 lines are fine and the problem begins to rise after that (In total around 15000 lines). And missing data fields concentrate at several columns with relative high data volume.
I'm thinking if it's related to sort of cache allocating mechanism of xlwt?
Thanks a lot for your help here. | Python XLWT: Excel generated by Python xlwt contains missing value | -0.379949 | 1 | 0 | 219 |
37,933,629 | 2016-06-21T00:01:00.000 | 1 | 0 | 0 | 0 | python,pyqt4,qtextedit,qpushbutton | 37,933,711 | 1 | true | 0 | 1 | There is no builtin way to edit a push button in the sense that you have a cursor and can type along.
Probably the easiest solution is to bring up a QInputDialog. If that feels to heavy, you could also place a floating QLineEdit over or next to the QPushButton. Close that on <Enter> and set the typed text to the QPushButton.
If you really want an editable Button, you'll have to subclass QPushButton and implement the desired functionality yourself. To get started with this, you need to reimplement mousePressEvent() for starting your editing mode. Reimplement keyPressEvent() for handling key strokes. If you need to display a cursor, reimplement paintEvent(). I have no particular resource at hand that describes what exactly you have to do, but the terms above should be sufficient to look it up yourself. | 1 | 1 | 0 | Using Python 2.7 and PyQt4.
So I need a way to make a text of the QPushButton editable when click on it, like on QTextEdit. | How to make QPushButton editable text when one click on it? | 1.2 | 0 | 0 | 302 |
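A minimal, untested sketch of the floating-QLineEdit idea from the answer above (PyQt4 class names as in the question; the double-click trigger is an arbitrary choice):

```python
from PyQt4 import QtGui

class EditableButton(QtGui.QPushButton):
    def mouseDoubleClickEvent(self, event):
        # overlay a line edit on top of the button and let the user type
        self._editor = QtGui.QLineEdit(self.text(), self.parentWidget())
        self._editor.setGeometry(self.geometry())
        self._editor.editingFinished.connect(self._finish_edit)
        self._editor.show()
        self._editor.setFocus()

    def _finish_edit(self):
        self.setText(self._editor.text())
        self._editor.deleteLater()
```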
37,937,313 | 2016-06-21T06:47:00.000 | 1 | 0 | 0 | 0 | python-3.x,inverted-index | 37,945,993 | 1 | true | 0 | 0 | Python does allow you to constrcut classes that implement dictionary-like interface and thatc ould maintain any inverted indexes you would wish -
But your question is too broad. The "extradict" Python package (pip install extradict), for example, has a "BijectiveDict" that just exposes values as keys and vice versa, and keeps everything synchronized - but it is a simple symmetric key/value store.
If you want complex, nested documents, and persistence you should use an existing NoSQL database like MongoDB, Codernity, ElasticSearch, ZODB, rather than try to implement one yourself. | 1 | 0 | 0 | how do I update an inverted index efficiently if documents are inserted, deleted or updated ? also should i use index file to store index or should I store index in a database table ? | how to make inverted index? | 1.2 | 0 | 0 | 551 |
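As a complement to the answer above, a minimal in-memory sketch of an inverted index that supports the insert/delete updates the question asks about (persistence deliberately left out):

```python
from collections import defaultdict

index = defaultdict(set)              # term -> set of document ids

def add_document(doc_id, text):
    for term in text.lower().split():
        index[term].add(doc_id)

def remove_document(doc_id, text):
    # on delete, drop the doc id from every term it contributed
    for term in text.lower().split():
        index[term].discard(doc_id)

add_document(1, "inverted index in Python")
add_document(2, "Python dictionaries")
print(sorted(index["python"]))        # [1, 2]
remove_document(1, "inverted index in Python")
print(sorted(index["python"]))        # [2]
```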
37,939,412 | 2016-06-21T08:37:00.000 | 0 | 0 | 0 | 0 | python,numpy,pygame,blender,pykinect | 37,956,343 | 1 | true | 0 | 1 | If you are 'just' moving the locations of vertices (which is what moulding sounds like) then you can 'just' replace that block of the .obj etc file. You should be able to figure out where to cut and insert by looking at some files (I've done this with .obj but not the others so this may be harder than I suggest!)
However you should really fix the normals which you can do by taking cross products of face edges. Also uv coordinates would need fixing if you use those and that's quite a bit harder. | 1 | 0 | 0 | I have created a pygame based environment. In that I am importing a spherical ball blender based obj file. Using pykinect, I am trying to mould the object with inputs from a kinect for Xbox 360 camera.
All is working ok.
However I wish to export the end product 3D moulded object and save it in a .dae, .obj and .stl formats. Currently by recording the end coordinates of the boundary of the 3D object trying to mimic it to export, but this is a very cumbersome process.
Can someone suggest what could be done to be able to save the deformed file in the desired 3D formats? | How to export 3D files from python(pygame) | 1.2 | 0 | 0 | 380 |
37,940,005 | 2016-06-21T09:06:00.000 | 0 | 0 | 0 | 0 | python,matlab | 37,940,134 | 2 | false | 0 | 0 | I'd find the moment with peak amplitude in both signal stream, and sync the signals there. Then I'd go over the signal, sample by sample, normalize the values based on the peak amplitude and compare the trend.
I don't think this question should be here, by the way, try to ask in a more relevant StackExchange forum. | 2 | 0 | 0 | I have 2 raw signals having same sampling rate and they start at different time. I am interested in trying to do a match making(i.e. both to be similar without any deviations) of these signals in python 3.5. Can anyone give me inputs on what is the best method to compare such signals and say there is no change in pattern? | Signal Comparison method | 0 | 0 | 0 | 1,581 |
37,940,005 | 2016-06-21T09:06:00.000 | 0 | 0 | 0 | 0 | python,matlab | 37,941,568 | 2 | false | 0 | 0 | What i can recommend is:
Fast-Fourier-Transform: you get all the different frequencies within your two signals and then you can see if there are any different peaks between the two spectrum of the signals.
Wavelet Transform: kind of same working principle of the FFT but you will be using a 'Wavelet' to analyse your signal instead of a sinusoid in the FFT then you can then compare the energy of your signal's details
cross-correlation
Statistical tests calculate the standard deviation of a signal and then test if the values of your other signal are still within the range of 2 or 3 sigma.
Is it possible to have a sample of these two signals? | 2 | 0 | 0 | I have 2 raw signals having same sampling rate and they start at different time. I am interested in trying to do a match making(i.e. both to be similar without any deviations) of these signals in python 3.5. Can anyone give me inputs on what is the best method to compare such signals and say there is no change in pattern? | Signal Comparison method | 0 | 0 | 0 | 1,581 |
37,942,244 | 2016-06-21T10:42:00.000 | 0 | 0 | 0 | 0 | python,django,video | 37,942,478 | 1 | false | 1 | 0 | Is there a way to load video files via Django but them serve them
using a different server?
You should define what you mean by "different server", but I assume you mean a different project that is not written in Django.
Since the video files land in the file system (if you design it that way), another project running on the same server can access them however it wants; otherwise you would need some file sync between the servers. If you want to distinguish which video file belongs to which object in the DB, I would insert the object name into the file path.
if I didnot fully answer your question, let me know below | 1 | 0 | 0 | I'm developing a Django site which allows users to upload PDF, image and video files. Django is able to serve the pdf and image files comfortably for my purposes but cannot cope with the video downloads. Is there a way to load video files via Django but them serve them using a different server? | How to serve previously uploaded video files in Django | 0 | 0 | 0 | 329 |
37,944,198 | 2016-06-21T12:16:00.000 | 2 | 0 | 1 | 0 | python | 37,944,565 | 2 | false | 0 | 0 | This is more related to Windows API then Python or whatever framework you are using.
Actually you can do something similar by:
Freezing your code. You can choose any but I use cx_freeze.
Package the app using Inno Setup. It provides some "shortcuts" to work with Windows, including context menu actions.
You can use Inno Script Studio, which is an IDE for Inno Setup. It may help you setting up the context menu actions.
Hope this helps. | 1 | 4 | 0 | For example, I have written a code which gets an import folder directory and a destination folder, and rotates all images in the import folder by 45 degrees clockwise, and saves them rotated in the destination folder. It works great, but you must have python in order to use it. I want to have an option when you press right click on a folder, and then you can choose: rotate all images by 45 degrees.
How can I do that? | convert my python code to windows application (right-click menu) | 0.197375 | 0 | 0 | 440 |
37,945,725 | 2016-06-21T13:24:00.000 | -1 | 0 | 0 | 0 | xml,python-2.7,pyspark,spark-dataframe,parquet | 37,989,050 | 2 | false | 0 | 0 | You can map each row to a string with xml separators, then save as text file | 1 | 0 | 0 | I have stored a pyspark sql dataframe in parquet format. Now I want to save it as xml format also. How can I do this? Solution for directly saving the pyspark sql dataframe in xml or converting the parquet to xml anything will work for me. Thanks in advance. | How to save a pyspark sql DataFrame in xml format | -0.099668 | 1 | 1 | 992 |
37,947,178 | 2016-06-21T14:27:00.000 | 0 | 0 | 1 | 0 | python,python-2.7,file | 37,947,472 | 2 | false | 0 | 0 | More details are required i think. how are these files being created? What are the stop conditions for your function? If you are doing a simple file writer you can change the mode to append and that should help. Anyway, a little more details are required
Have a good one | 1 | 0 | 0 | I am using python and I have to write a program to create files of a total of 160 GB. I ran the program overnight and it was able to create files of 100 GB. However, after that it stopped running and gave an error saying "No space left on device".
QUESTION : I wanted to ask if it was possible to start running the program from where it stopped so I don't have to create those 100 GB files again. | continue creating files from where it stopped | 0 | 0 | 0 | 35 |
37,947,558 | 2016-06-21T14:42:00.000 | 6 | 0 | 0 | 0 | python,neural-network,scikits,activation-function | 37,947,823 | 2 | true | 0 | 0 | A neural network is just a (big) mathematical function. You could even use different activation functions for different neurons in the same layer. Different activation functions allow for different non-linearities which might work better for solving a specific function. Using a sigmoid as opposed to a tanh will only make a marginal difference. What is more important is that the activation has a nice derivative. The reason tanh and sigmoid are usually used is that for values close to 0 they act like a linear function while for big absolute values they act more like the sign function ((-1 or 0) or 1 ) and they have a nice derivative. A relatively new introduced one is the ReLU (max(x,0)), which has a very easy derivative (except for at x=0), is non-linear but importantly is fast to compute so nice for deep networks with high training times.
What it comes down to is that for the global performance the choice in this is not very important, the non-linearity and capped range is important. To squeeze out the last percentage points this choice will matter however but is mostly dependent on your specific data. This choice just like the number of hidden layers and the number of neurons inside these layers will have to be found by crossvalidation, although you could adapt your genetic operators to include these. | 1 | 5 | 1 | I am using the sknn package to build a neural network. In order to optimize the parameters of the neural net for the dataset I am using I am using an evolutionary algorithm. Since the package allows me to build a neural net where each layer has a different activation function, I was wondering if that is a practical choice, or whether I should just use one activation function per net? Does having multiple activation functions in a neural net harm, does no damage, or benefit the neural network?
Also what is the maximum amount of neuron per layer I should have, and the maximum amount of layers per net should I have? | Neural Network composed of multiple activation functions | 1.2 | 0 | 0 | 5,281 |
37,948,294 | 2016-06-21T15:14:00.000 | -2 | 0 | 1 | 1 | python-2.7,pycharm | 63,333,069 | 2 | false | 0 | 0 | Difference between os.mkdir(dirname) and os.mkdirs(dirname)
os.mkdir() creates only the leaf directory; if any of the parent directories do not exist, it raises an OSError. os.makedirs() creates the last directory together with all missing parent directories, so makedirs() is the more convenient choice when the parents may be missing. | 2 | 6 | 0 | I use python 2.7 to create a spider in Pycharm to get data from website.
In the first spider I create a spider in the project folder and use os.mkdir('home/img/') to create a folder to save data. There is no error.
In the second spider I create the spider with RedisQueue which is in the project folder and put the Spider.py into /usr/lib/python2.7. when I use os.mkdir('home/img/') it reports the error 'no such file or dir' and I change it to os.makedirs() which works.
May I know why the 1st one doesn't meet error? Thanks in advance | the difference between os.mkdir() and os.makedirs() | -0.197375 | 0 | 0 | 21,297 |
37,948,294 | 2016-06-21T15:14:00.000 | 14 | 0 | 1 | 1 | python-2.7,pycharm | 37,948,589 | 2 | true | 0 | 0 | os.makedirs() : Recursive directory creation function. Like os.mkdir(), but makes all intermediate-level directories needed to contain the leaf directory.
What this means is that you should not try to create nested directories with os.mkdir() but use os.makedirs() instead.
In your case, I am guessing that you want to create a directory under your home directory, in which case you would need something like os.mkdir("/home/img"), which will fail if you do not have enough permissions.
You could try and do something like: os.chdir('/home') and after that os.mkdir('img') so you create home/img in steps! Good luck! | 2 | 6 | 0 | I use python 2.7 to create a spider in Pycharm to get data from website.
In the first spider I create a spider in the project folder and use os.mkdir('home/img/') to create a folder to save data. There is no error.
In the second spider I create the spider with RedisQueue which is in the project folder and put the Spider.py into /usr/lib/python2.7. when I use os.mkdir('home/img/') it reports the error 'no such file or dir' and I change it to os.makedirs() which works.
May I know why the 1st one doesn't meet error? Thanks in advance | the difference between os.mkdir() and os.makedirs() | 1.2 | 0 | 0 | 21,297 |
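Putting the two answers above into one small sketch (the relative path is the one from the question):

```python
import os

path = "home/img"   # relative path, exactly as used in the question

# os.mkdir(path) raises an OSError when the parent "home" does not exist yet,
# whereas os.makedirs(path) creates every missing intermediate directory.
if not os.path.isdir(path):
    os.makedirs(path)
```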
37,948,852 | 2016-06-21T15:39:00.000 | 1 | 0 | 1 | 0 | python-2.7,numpy,spyder | 37,971,709 | 2 | false | 0 | 0 | I solved the problem by executing the spyder version of the python2 environment.
It is located in Anaconda3\envs\python2\Scripts\spyder.exe | 2 | 0 | 1 | I am on Windows 10, 64bits, use Anaconda 4 and I created an environment with python 2.7 (C:/Anaconda3/envs/python2/python.exe)
In this environment, I successfully installed numpy and when I type "python", enter, "import numpy", enter, it works perfectly in the anaconda prompt window.
In spyder however, when I open a python console and type "import numpy", I get "cannot import name multiarray". I have obviously changed the path of the python interpreter used by spyder to match the python.exe of the environment I created (C:/Anaconda3/envs/python2/python.exe). I also updated the PYTHONSTARTUP to C:/Anaconda3/envs/python2/Lib/site-packages/spyderlib/scientific_startup.py
It's supposed to be the exact same python program running but it's two different behavior. How is it possible and how to fix it ?
PS: I already tried the various solutions to this error like uninstalling numpy and reinstalling it. It shouldn't be a problem with numpy since it works just fine in the python console of the anaconda prompt window. | spyder, numpy, anaconda : cannot import name multiarray | 0.099668 | 0 | 0 | 833 |
37,948,852 | 2016-06-21T15:39:00.000 | 1 | 0 | 1 | 0 | python-2.7,numpy,spyder | 42,165,864 | 2 | false | 0 | 0 | I have encountered same issue. I have followed every possible solution, which is stated on stack-overflow. But no luck. The cause of error might be the python console. I have installed a 3.5 Anaconda, and the default console is the python 2.7, which I have installed primarily with pydev. I did this and now it is working absolutely fine. Go to tools>preferences and click on reset to defaults. It might solve the issue. Or another solution is to uninstall the current Anaconda i.e. y.x and installing the correct one according to the default. In my case 2.7 Anaconda instead of 3.5 | 2 | 0 | 1 | I am on Windows 10, 64bits, use Anaconda 4 and I created an environment with python 2.7 (C:/Anaconda3/envs/python2/python.exe)
In this environment, I successfully installed numpy and when I type "python", enter, "import numpy", enter, it works perfectly in the anaconda prompt window.
In spyder however, when I open a python console and type "import numpy", I get "cannot import name multiarray". I have obviously changed the path of the python interpreter used by spyder to match the python.exe of the environment I created (C:/Anaconda3/envs/python2/python.exe). I also updated the PYTHONSTARTUP to C:/Anaconda3/envs/python2/Lib/site-packages/spyderlib/scientific_startup.py
It's supposed to be the exact same python program running but it's two different behavior. How is it possible and how to fix it ?
PS: I already tried the various solutions to this error like uninstalling numpy and reinstalling it. It shouldn't be a problem with numpy since it works just fine in the python console of the anaconda prompt window. | spyder, numpy, anaconda : cannot import name multiarray | 0.099668 | 0 | 0 | 833 |
37,951,303 | 2016-06-21T17:52:00.000 | 7 | 0 | 0 | 1 | python,openssl,cryptography,scrapy | 38,144,434 | 2 | false | 1 | 0 | Copy "openssl" folder from C:\OpenSSL-Win32\include\ to C:\Pyhton27\include\
and copy all libs from C:\OpenSSL-win32\lib to C:\Python27\Libs\ | 1 | 20 | 0 | I'm trying to install Scrapy, but got this error during installing: build\temp.win-amd64-2.7\Release_openssl.c(429) : fatal error C1083: Cannot open include file: 'openssl/opensslv.h': No such file or directory
I've checked that the file "opensslv.h" is in here "C:\OpenSSL-Win64\include\openssl". And I've also included this "C:\OpenSSL-Win64\include" in the Path, system variables.
Stuck on this for hours, can someone please help out? Thanks.
The same issue was found for the "cryptography-1.5.2" package | Fatal error C1083: Cannot open include file: 'openssl/opensslv.h' | 1 | 0 | 0 | 26,978 |
37,951,368 | 2016-06-21T17:55:00.000 | 0 | 1 | 0 | 1 | python,bash,crontab,ubuntu-server | 37,951,458 | 1 | false | 0 | 0 | You could create a cronjob that starts the script every 5 minutes (or whatever often you want it to run) and additionally modify the script such that it creates a .lock file which it removes on exiting, but if it encounters it at the beginning it won't do anything (this way you don't have a long-running script active multiple times). | 1 | 0 | 0 | I have a Python program I run at all times of the day that alerts me when something I am looking for online is posted. I want to give this to my employees but I only want to have it email them during business hours.
I have a Ubuntu server and use a .sh. I have a command in crontab that runs on startup.
How do I make my command run from 9-5? | How do I have a bash script continually run during business hours? | 0 | 0 | 0 | 191 |
37,954,211 | 2016-06-21T20:42:00.000 | 0 | 0 | 1 | 0 | python,linux,python-3.x | 37,954,337 | 1 | false | 0 | 0 | Indentation can be configured by navigating to Utilities > Global Options > Editing > Tab width. You said you are coding in python so I strongly recommend that you leave the indentation as it is as python only accepts indentation with standard tabs (4 spaces long).
Obs: I see no reason why you would use Jedit, you'd better use a decent editor like Atom or Sublime Text. | 1 | 0 | 0 | Ive been using Jedit for programming for a few days and I wonder how I can change the tab indent in Jedit. Or can I change the Indent for whole Linux? My second question: I use Python and I would like to have an Indent in the next line after colons. where are the settings for this? | Tab indent in Linux and Jedit | 0 | 0 | 0 | 137 |
37,954,324 | 2016-06-21T20:49:00.000 | -3 | 0 | 0 | 0 | python,numpy,pickle | 38,003,329 | 3 | false | 0 | 0 | Thanks everyone. I ended up finding a workaround (a machine with more RAM so I could actually load the dataset into memory). | 1 | 10 | 1 | I have a large dataset: 20,000 x 40,000 as a numpy array. I have saved it as a pickle file.
Instead of reading this huge dataset into memory, I'd like to only read a few (say 100) rows of it at a time, for use as a minibatch.
How can I read only a few randomly-chosen (without replacement) lines from a pickle file? | How to load one line at a time from a pickle file? | -0.197375 | 0 | 0 | 14,112 |
37,955,199 | 2016-06-21T21:51:00.000 | 0 | 0 | 0 | 0 | python,django,django-models | 37,961,101 | 2 | false | 1 | 0 | In your case, I think the better is to put your two models in one app. | 1 | 0 | 0 | Recently I've been making a few test projects in Django and while I've found the structure to be better than that of other Web Frameworks, I am a little confused on the concept of different 'apps'.
Here is a test case example:
Suppose I have a simple CRUD application where users post a picture and a title, with a small description, but I want other users to have the ability to create a review of this picture.
Seeing as both the "Post" and "Review" models in this case require CRUD functionality, would I just have two models in the same app, and associate them with one another? Or have two separate apps with different urls.py and views.py files?
I have a hunch I've been doing it wrong and it should be just two models, if this is the case how would I go about writing the urls and views for two models in the same app?
Thanks and any input is appreciated! | When to make a django app, rather than just a model | 0 | 0 | 0 | 40 |
37,955,249 | 2016-06-21T21:55:00.000 | 2 | 0 | 1 | 0 | python | 37,955,310 | 1 | true | 0 | 0 | Use a for-loop if you can. It's simple, and it uses iterators behind the scenes. One of the great things about python's iterator system is that you don't need to think about them most of the time. It is quite rare that you'll need to explicitly call next() on something.
This is kind of general, but so is your question. If you have a particular use case in mind, edit your question to add it and you'll get more detailed responses. | 1 | 0 | 0 | I'm learning Python 3 (my first language since BASIC), and I have a general question:
If I want to iterate over something, how do I determine if the best way is to use a For loop or a generator? They appear to be closely related. | Python 3 For loop vs next() iterator | 1.2 | 0 | 0 | 128 |
37,959,217 | 2016-06-22T05:26:00.000 | 22 | 0 | 1 | 0 | python,pm2 | 39,378,255 | 2 | true | 0 | 0 | This question is a few months old, so maybe you figured this out a while ago, but it was one of the top google hits when I was having the same problem so I thought I'd add what I found.
Seems like it's an issue with how python buffers sys.stdout. In some platforms/instances, when called by say pm2 or nohup, the sys.stdout stream may not get flushed until the process exits. Passing the "-u" argument to the python interpreter stops it from buffering sys.stdout. In the process.json for pm2 I added "interpreter_args": "-u" and I'm getting logs normally now. | 1 | 5 | 0 | I'm using PM2 to run a Python program in the background like so
pm2 start helloworld.py
and it works perfectly fine. However, within helloworld.py I have several print statements that act as logs. For example, when a network request comes in or if a database value is updated. When I run helloworld.py like so:
python3 helloworld.py
all these print statements are visible and I can debug my application. However, when running
pm2 logs helloworld
none of these print statements show up. | PM2 doesn't log Python3 print statements | 1.2 | 0 | 0 | 2,940 |
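For reference, a minimal process.json along the lines of the answer above; apart from interpreter_args, the field values here (app name, interpreter) are assumptions rather than something stated in the thread:

```json
{
  "apps": [{
    "name": "helloworld",
    "script": "helloworld.py",
    "interpreter": "python3",
    "interpreter_args": "-u"
  }]
}
```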
37,959,515 | 2016-06-22T05:47:00.000 | 1 | 0 | 0 | 0 | python,amazon-dynamodb,boto3 | 37,960,297 | 3 | false | 1 | 0 | There is no cheap way to achieve this in Dynamodb. There is no inbuild function to determine the max value of an attribute without retrieving all items and calculate programmatically. | 1 | 11 | 0 | I'm using Dynamodb. I have a simple Employee table with fields like id, name, salary, doj, etc. What is the equivalent query of select max(salary) from employee in dynamodb? | Dynamodb max value | 0.066568 | 0 | 1 | 9,688 |
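The answer above notes the maximum has to be computed client-side after retrieving the items; a boto3 sketch of that full-scan approach (the table and attribute names are taken from the question, while region/credentials setup is assumed to be configured already):

```python
import boto3

table = boto3.resource("dynamodb").Table("Employee")

items = []
resp = table.scan()
items.extend(resp["Items"])
while "LastEvaluatedKey" in resp:              # follow scan pagination
    resp = table.scan(ExclusiveStartKey=resp["LastEvaluatedKey"])
    items.extend(resp["Items"])

print(max(item["salary"] for item in items))   # equivalent of SELECT MAX(salary)
```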
37,960,704 | 2016-06-22T06:58:00.000 | 1 | 0 | 1 | 1 | python,ios,objective-c,xcode | 38,047,376 | 1 | true | 0 | 0 | Replacing
PyUnicode_AsUTF8 or PyUnicode_FromString for _PyString_FromString
PyLong_AsLong for _PyInt_AsLong
PyLong_FromLong for _PyInt_FromLong
solved my problem. | 1 | 1 | 0 | Undefined symbols for architecture i386:
"_PyInt_AsLong", referenced from:
_main in main.o
"_PyInt_FromLong", referenced from:
_main in main.o
"_PyString_FromString", referenced from:
_main in main.o
ld: symbol(s) not found for architecture i386
clang: error:
linker command failed with exit code 1 (use -v to see invocation) | Xcode gives error while using Python.framework in iOS ,undefined symbol for architecture i386 | 1.2 | 0 | 0 | 295 |
37,961,668 | 2016-06-22T07:45:00.000 | 2 | 0 | 0 | 0 | python,odoo,odoo-9 | 37,964,958 | 2 | true | 1 | 0 | If you wanna set this for all website users you need to set them to portal users. Also, you can set under Users->Preferences->Home Action set to Website.
UPDATE
For signup new users you need to create template user account and check portal options for that user. Next, go to Settings->General Settings under Portal Access find Template user for new users created through signup choose your template user. | 1 | 1 | 0 | After I login to odoo from localhost:8069/web/login I get redirected to Odoo backend, from where I need to click Website to come back to Home Page.
How can I prevent this? I need to stay inside the home page after login.
EDIT:
@moskiSRB 's answer solves the problem for simple login.
But after Signup there is auto login which still leads to backend | Odoo, prevent redirecting after web login | 1.2 | 0 | 0 | 2,805 |
37,966,600 | 2016-06-22T11:23:00.000 | 1 | 0 | 0 | 0 | python-2.7,dropbox,dropbox-api | 37,974,715 | 1 | true | 0 | 0 | Unfortunately, the Dropbox API doesn't offer a way to add a folder from a Dropbox shared link directly to an account via the API, without downloading and re-uploading it. We'll consider this a feature request for a way to do so. | 1 | 1 | 0 | User A shares a folder link.
I want to use that shared link to copy that folder to my business dropbox account.
Catch is I don't want a method which downloads the folder to my server and uploads it to my dropbox account. I want a method by which I can pass that shared link as a parameter and make the api call and then dropbox copies the folder to my dropbox account at there end.
Is there a way using dropbox-api to copy directly to my dropbox account.
Thanks | Copy folder using dropbox shared link to a dropbox account without downloading and uploading again | 1.2 | 0 | 1 | 79 |
37,967,838 | 2016-06-22T12:19:00.000 | 2 | 0 | 0 | 1 | python,amazon-web-services,amazon-ec2,aws-code-deploy,aws-codepipeline | 37,980,335 | 1 | false | 1 | 0 | I would loop in the ValidateService hook, checking for the condition you expect, OR just sleep for 60 seconds, assuming that is the normal initialization time.
The ValidateService hook should do just that: make sure the service is fully running before continuing/finalizing the deployment. That depends on your app of course. But consider a loop that pulls a specially designed page EG http://localhost/service-ready. In that URL, test and confirm anything and everything appropriate for your service. Return a -Pending- string if the service is not yet validated. Return a -OK- when everything is 100%
Perhaps loop that 10-20 times with a 10-second sleep, exit when it returns -OK-, and throw an error if the service never validates; a sketch of such a check script follows this row. | 1 | 1 | 0 | I have a heavy app hosted on AWS.
I use CodeDeploy & Code Pipeline (updating from github) to update the servers when a new release is ready (currently running 6 ec2 instances on production environment).
I've setup the codedeploy to operate one-by-one and also defined a 300 seconds connection draining on the load balancer.
Still, my application is heavy (it loads large dictionary pickle files from the disk to the memory), the process of firing up takes about ~60 seconds. In those 60 seconds CodeDeploy marks the process of deployment to an instance as completed, causing it to join back as a healthy instance to the load balancer - this might cause errors to users trying to reach the application.
I thought about using the ValidateService hook, but i'm not sure how to in my case..
Any ideas on how to wait for a full load and readyness of the application before proceeding to the next instance?
This is my current AppSpec.yml
version: 0.0
os: linux
files:
- source: /deployment
destination: /deployment
- source: /webserver/src
destination: /vagrant/webserver/src
permissions:
- object: /deployment
pattern: "**"
owner: root
mode: 644
type:
- directory
- object: /webserver/src
owner: root
mode: 644
except: [/webserver/src/dictionaries]
type:
- directory
hooks:
ApplicationStop:
- location: /deployment/aws_application_stop.sh
BeforeInstall:
- location: /deployment/aws_before_install.sh
AfterInstall:
- location: /deployment/aws_after_install.sh
ApplicationStart:
- location: /deployment/aws_application_start.sh | Using CodeDeploy ValidateService Hook with Python Application | 0.379949 | 0 | 0 | 1,608 |
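A sketch of the kind of check script a ValidateService hook could point at, following the polling idea in the answer above; the /service-ready URL and the 20 x 10-second retry budget come from that answer, everything else is an assumption:

```python
import sys
import time
try:
    from urllib.request import urlopen   # Python 3
except ImportError:
    from urllib2 import urlopen          # Python 2

for attempt in range(20):
    try:
        body = urlopen("http://localhost/service-ready", timeout=5).read()
        if b"OK" in body:
            sys.exit(0)    # healthy: let CodeDeploy mark this instance done
    except Exception:
        pass               # app still warming up (loading its dictionaries)
    time.sleep(10)

sys.exit(1)                # never became ready: fail the deployment
```

Such a script would be referenced from a ValidateService entry in the hooks section of the AppSpec file shown above.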
37,970,376 | 2016-06-22T14:02:00.000 | 0 | 0 | 1 | 0 | python-3.x | 37,973,719 | 1 | false | 0 | 0 | var= [('a=123,b=3456,c=789', {'D': [b'F'], 'G': [b'H'], 'I': [b'J'], 'K': [b'L']})]
out = var[0][1]['I'][0].decode("utf-8")  # dict inside the tuple, key 'I' holds [b'J']; decode the bytes to get "J"
print (out) | 1 | 0 | 0 | ' [('a=123,b=3456,c=789', {'D': [b'F'], 'G': [b'H'], 'I': [b'J'], 'K': [b'L']})]'
I would like to parse this string and get the J value out of it | I am getting the below python output as a string and i would like to have the "J" Value in the string | 0 | 0 | 0 | 62 |
37,970,895 | 2016-06-22T14:22:00.000 | 0 | 0 | 0 | 0 | python,neural-network,prediction | 37,971,491 | 1 | false | 0 | 0 | What Ashafix says was my first though, you should post your training and test data also the data that you use for 'real world'.
Another problem it could be that when you are testing, you are using only previous correct whether data(data you already have), and when you are in practice you are using your predicts and correct whether data. You should be consistent here.
PD. Sorry for my english, I'm still learning. | 1 | 0 | 1 | This is my first question on this site. I'm attempting to practice neural networks by having my program predict whether the temperature will go up or down on a given day relative to the previous day. My training data set consists of the previous ten days and whether they went up or down relative to the day before them. I'm not implying this is an effective way to predict weather, but that makes my problem even more confusing.
When I train the program over 25 days (50 days ago to 25 days ago) then test it on the next 25 days (25 days ago to yesterday) I consistently get 100% accuracy in the test set. I've added an alpha for gradient descent and have around 60 hidden layers, and if I make the alpha something bigger like 0.7 the accuracy will reduce to ~40%, so I think my testing code is correct.
Assuming I have a true 100% accuracy, I had the program predict tomorrow's weather, then use that and 9 days of historical to predict the day after tomorrow, and so on until I've predicted 5 days in the future. I then waited to see what the weather would be and my program was comically bad in its predictions. I ran this for a week to test, and had an accuracy of predicting the next day of about 60% and after that only around 10%.
TL;DR
Sorry for rambling the details, but my question is what would cause a neural network to be 100% accuracy in testing and then fail spectacularly in practice? Thanks for the help, I can post code if needed (and someone explains how to in a comment) | Neural Network converging and accurate in training, but failing in real world | 0 | 0 | 0 | 68 |
37,971,864 | 2016-06-22T15:04:00.000 | 2 | 1 | 0 | 0 | c#,python | 37,971,906 | 1 | false | 0 | 0 | I think the easiest way to do this would be to store the data in a shared resource of some kind. Perhaps your python script could store values in a database and your C# application could refer to the database to retrieve the state of your bulbs, switches, etc. | 1 | 0 | 0 | I'm learning python for home automation a few months now and want to learn C# for building apps.
My python file is turning devices on and off automatically. Now I want to make an app that can read this python file, see if the device is on or off.( lamp=o or lamp=1 ). For this it must read a variable from python script.
Next I want to turn the device on or off on my mobile and with this action also change the variable in the script.
Is this all possible without making a text file for the status or using ironpython?
Read many stackoverflow questions about this, but all of them were using 1 device and most ironpython. If there is any good documentation about this subject I would be happy to receive it since I can't find one.
Thanks | Is it possible to Pass values between C# and python and edit them both ways. | 0.379949 | 0 | 0 | 120 |
37,973,803 | 2016-06-22T16:36:00.000 | 1 | 1 | 1 | 0 | python,c++,boost | 37,979,379 | 1 | false | 0 | 0 | Yes, absolutely, it's a library like any other.
I always use it with CMake, but anything will do. You need to
Add to include paths the location of the boost headers.
Add to include paths the location of python headers (usually installed with Python, location depends on OS)
Link with the appropriate boost.python library (e.g. in my case it's boost_python-vc120-mt-1_58.lib or boost_python-vc120-mt-gd-1_58.lib, again depends on version/os/toolkit) | 1 | 2 | 0 | Boost.python module provides a easy way of blinding c/c++ codes into Python. However, most tutorials assume that bjam is used to compile this module. I was wondering if I do not compile this module can I still use this module? What I mean "do not compile this module" is including all the source files of Boost.python in my current project. I did it for other modules from Boost. For example, the Boost.filesystem module, when I use this module, I just include all the files from this module and compile them with the codes I have written. Thanks. | Can I compile boost.python module without bjam? | 0.197375 | 0 | 0 | 305 |
37,976,237 | 2016-06-22T18:59:00.000 | 0 | 0 | 0 | 0 | python,background,pygame,screen,sprite | 37,977,559 | 2 | false | 0 | 1 | Blitting works both ways, meaning you can blit something onto the display screen, but you can also blit the screen onto another surface. So simply make a new surface the same size of your display surface and blit the screen onto that surface for later use. | 1 | 1 | 0 | When using python and pygame: after loading the screen with the background image and blitting new objects (Text, circles, rectangles, etc.), is there a way to save the modified screen so as to be recalled later in the program? Specifically, I am setting the background and blitting new objects and would like to save the screen image with all of the blits in intact so it can be used later in the program as a new background upon which sprites can be manipulated. Any suggestions welcomed! | saving modified screens in python/pygame for later use | 0 | 0 | 0 | 271 |
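A minimal pygame sketch of the blit-to-a-surface approach described in the answer above; the window size and drawing steps are placeholders:

```python
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))

# ... draw the background, text, circles, rectangles onto `screen` here ...

# Take a snapshot of everything drawn so far
snapshot = pygame.Surface(screen.get_size())
snapshot.blit(screen, (0, 0))

# Later, reuse the snapshot as the new background before moving sprites
screen.blit(snapshot, (0, 0))
pygame.display.flip()
```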
37,977,358 | 2016-06-22T20:09:00.000 | 0 | 0 | 0 | 0 | python,calendar,google-api,google-admin-sdk | 38,876,630 | 1 | true | 0 | 0 | Solved:
The issue was that my client_secrets.json file for oauth 2.0 was set to my personal google account and not the admin account. I cleared the storage.json file where credentials were stored, re ran the program with the admin account logged in, and it worked! Hoped this helps. | 1 | 0 | 0 | I'm working on a script using Python that will access all students' Google calendars using their Google accounts and then add their school schedule into their calendar. I have figured out adding and deleting events and calendars using the API, but my question is how do I add a specific event to a specific calendar under a domain. I am a domain admin. | Google API Domain Admin Access | 1.2 | 0 | 0 | 71 |
37,977,531 | 2016-06-22T20:19:00.000 | 1 | 0 | 0 | 1 | python,jenkins | 38,075,436 | 1 | false | 0 | 0 | Solution:
from os import environ
Type = environ['Type'] | 1 | 0 | 0 | I have created a job in Jenkins, for which user provides the value of a specific parameter, let's say PYTHON_PARM, as an input. On this job I execute a python script (using Python Plugin). The problem is that I want to use as a variable the user input $PYTHON_PARM parameter. This is not considered as an environment variable, so when trying to use os.environ['PYTHON_PARM'], this doesn't work.
Any idea?
Thanks, | Execute python script on Jenkins with variables | 0.197375 | 0 | 0 | 3,572 |
37,980,943 | 2016-06-23T01:41:00.000 | 0 | 0 | 1 | 0 | python,excel,ipython,spyder,copy-paste | 70,369,693 | 1 | false | 0 | 0 | (Spyder maintainer here) For the record, this problem was solved in our 4.1.0 version, released in March 2020. | 1 | 4 | 0 | In IDLE Python if I do print "a\tb" I get an output that looks like: a[TAB]b.
If I do the same in IPython in Spyder, then I get an output that looks like: a[7 spaces]b
I like to output tables of data as tab delimited text to make it easier to copy from the console and paste it to Excel. If the tabs get converted to spaces it becomes more difficult.
Is there any setting within IPython or Spyder which controls how TAB characters are displayed? I am using Spyder+IPython on a Windows 10 desktop. I realized I could just write the data to a file, but in this case it is more convenient to just use the console and the clipboard. | How to prevent tab characters from being converted to spaces in console output when using IPython in Spyder | 0 | 0 | 0 | 793 |
37,980,970 | 2016-06-23T01:45:00.000 | 0 | 0 | 1 | 1 | python,python-2.7,windows-10,exe,pyinstaller | 37,997,430 | 1 | true | 0 | 0 | Solved by adding --onedir which will put everything needed to run the program in one directory in the dist folder. | 1 | 1 | 0 | I compiled my Python GUI with Pyinstaller on Windows 10 but it seems like it cannot find my other script even though I provided the hard-coded absolute path to it (with r'"C:\Program Files...script path..."'). I even tried os.isfile (script path) but it returns False. The python script was compiled with pyinstaller --onefile --windowed --icon=iconimage.ico myscript.py from the command prompt. I use this same command on Ubuntu and the binary works just fine. I read something about Pyinstaller creating a temporary directory which I found, but I don't think it matters where it's running from as long as I give it the full path, so I'm thinking maybe I need more options when compiling? The GUI opens just fine. It's when it needs to call the script that it doesn't do anything. There are no errors when I run it from the command prompt. Please help! | Pyinstaller-compiled exe can't find file with absolute path | 1.2 | 0 | 0 | 2,621 |
37,982,930 | 2016-06-23T05:21:00.000 | 0 | 0 | 1 | 1 | python-2.7,pycharm,remote-debugging,maya | 37,991,894 | 2 | false | 0 | 0 | The break points are for the ide to catch only. Maya's script editor is just a text box with fancy things | 1 | 0 | 0 | I am debugging a Python code for Maya through the remote debugger in PyCharm.
The remote debugger can catch breakpoints as expected if the code is run at the command line, but it fails to do that if the Python code is running inside Maya's Script Editor.
The Python code is running on a Ubuntu machine while the PyCharm remote debugger is running on Windows.
I launch Maya on the Ubuntu machine from the directory that contains the script. The path mapping of PyCharm is simply set to "." for the Windows path that contains the same python script. Can you help me with this problem? Thanks a lot. | PyCharm cannot catch breakpoints if the python code is running in Maya's Script Editor | 0 | 0 | 0 | 434 |
37,984,531 | 2016-06-23T07:08:00.000 | 1 | 0 | 1 | 0 | python,windows,file,properties,version | 37,985,547 | 2 | true | 0 | 0 | Those are textfiles and thus they don't contain any header to include such information.
Specify the version with __version__ attribute and ask Microsoft to write this functionality. | 1 | 1 | 0 | I was wondering if it is possible to set a version to a python script. I would like to be able to see the version of a script by right clicking on the file, selecting properties and then select the version tab.
This tab exists on many other files, but is it possible to lure it out on a .py/.pyw file? | Version of python script | 1.2 | 0 | 0 | 71 |
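A tiny sketch of the __version__ convention mentioned in the answer; the file name and version string are placeholders (this does not populate the Windows properties tab, it only makes the version readable from code):

```python
# my_script.py
__version__ = "1.2.0"

if __name__ == "__main__":
    print("my_script version", __version__)
```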
37,987,462 | 2016-06-23T09:24:00.000 | 0 | 0 | 1 | 0 | python,pycharm | 42,223,723 | 2 | false | 0 | 0 | Please look at the structure in project view. You can get your function and variable names there. | 2 | 0 | 0 | Not sure if this is the right place to ask this question but I hope it is.
I have been using PyCharm for Python development in the last month and a half and there's an issue that irks me a lot since I've moved from Visual Studio (I had to). I am trying to find stuff in the code and PyCharm for some reason just doesn't give me the same replies I get searching the code on GitHub or Visual Studio.
Now I tried the OS X version of PyCharm and the Red Hat compliant version of it. I tried Ctrl + F, double Tab, and probably everything under the menus and just can't get the results I get elsewhere.
Am I missing something, should I configure additional stuff? | PyCharm code search woes | 0 | 0 | 0 | 147 |
37,987,462 | 2016-06-23T09:24:00.000 | 0 | 0 | 1 | 0 | python,pycharm | 37,993,017 | 2 | false | 0 | 0 | I needed to use Ctrl Shift F instead of double Tab | 2 | 0 | 0 | Not sure if this is the right place to ask this question but I hope it is.
I have been using PyCharm for Python development in the last month and a half and there's an issue that irks me a lot since I've moved from Visual Studio (I had to). I am trying to find stuff in the code and PyCharm for some reason just doesn't give me the same replies I get searching the code on GitHub or Visual Studio.
Now I tried the OS X version of PyCharm and the Red Hat compliant version of it. I tried Ctrl + F, double Tab, and probably everything under the menus and just can't get the results I get elsewhere.
Am I missing something, should I configure additional stuff? | PyCharm code search woes | 0 | 0 | 0 | 147 |
37,989,566 | 2016-06-23T10:53:00.000 | 1 | 0 | 0 | 0 | http,python-3.5 | 38,712,590 | 1 | false | 0 | 0 | There can be multiple network blockers in this client/server communication.
One of those, which is highly probable is Security-group or NACLs in this AWS based communication.
If you are running your instance in EC2-Classic then you need to check security-group inbound rules for allowing client on port 80 and if it is running in AWS VPC then check the security-group inbound rules as well as Network ACLs for inbound as well as outbound rule.
In the Security Group, allow:
Type HTTP, Protocol TCP, and the source IP set to your client IP or 0.0.0.0/0 (less secure).
And in the case of NACLs, adjust them as below:
INBOUND Rule 100: HTTP (80), TCP (6), port 80, ALLOW
OUTBOUND Rule 100: Custom TCP Rule, TCP (6), ports 1024-65535, ALLOW
The ephemeral port range can be adjusted here depending upon the client OS and distribution.
Apart from these adjustments you need to check if Firewall on client/server is blocking any such communication or not. | 1 | 0 | 0 | I have setup a python server on server machine which is an aws instance and trying to access it using public_IP:80 from client machine which is in different network.
It is not able to load the data from the server. | Setting http.server in python3 | 0.197375 | 0 | 1 | 900 |
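For reference, a minimal Python 3 server that listens on all interfaces, assuming the security-group/NACL and firewall rules above are in place (binding to port 80 normally requires root):

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Serve the current directory on all interfaces so an external client can reach it
server = HTTPServer(("0.0.0.0", 80), SimpleHTTPRequestHandler)
server.serve_forever()
```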
37,990,545 | 2016-06-23T11:37:00.000 | 13 | 0 | 0 | 1 | python,windows,ui-automation | 38,046,259 | 3 | false | 0 | 0 | Normally, an application exposes a user interface (UI) for users, and an application programming interface (API) for programming.
A human being uses keyboard and mouse to work with the user interface (UI)
An application uses programming to work with the application programming interface (API)
The UI is designed for humans, and the API is designed for computers.
It is sometimes possible to use programming to control the user interface of another program -- so your program acts as if it were using the keyboard and mouse. This technique is often called "UI automation", and programs that do it are sometimes called "robots".
It's a big topic, and it can be quite complex. It's almost always better to use an API instead if you can: it's faster, simpler, more reliable.
If you do need to use UI automation, there are a few different tools that can help.
You are asking about Python, so here are a few UI automation tools that work with Python:
AutoIT is a standalone product, but you can use Python to script it.
PyWinAuto is designed for use from Python.
Sikuli uses computer vision to find parts of the screen. I believe it comes with a recording tool as well.
Just to repeat: UI automation is weird and hard. If you can possibly use an API instead, your life will be much easier. | 1 | 5 | 0 | I have the application installed on my windows PC, I want to launch that application using python and select dropdown options and do some other activities in that application.
I was able to launch the application using the os.system command, but I am not able to proceed further.
I want my program to do things like:
* select from a dropdown menu
* click on a button
How can my application control the user interface of another application? | How to control a Windows application from Python | 1 | 0 | 0 | 30,343 |
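A short sketch of the UI-automation approach, adapted from pywinauto's introductory Notepad example; the window and control names are specific to Notepad and would differ for your application:

```python
from pywinauto.application import Application

# Start the target program (Notepad as a stand-in)
app = Application().start("notepad.exe")

# Drive its UI: open a menu item, dismiss the dialog, type into the edit control
app.UntitledNotepad.menu_select("Help->About Notepad")
app.AboutNotepad.OK.click()
app.UntitledNotepad.Edit.type_keys("pywinauto Works!", with_spaces=True)
```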
37,991,245 | 2016-06-23T12:10:00.000 | 0 | 0 | 1 | 0 | python,mongodb,pymongo | 37,991,450 | 1 | false | 0 | 0 | Indexing will drastically speed up finding subsets of documents within a collection, but will not (to my knowledge) speed up pulling the entire collection.
The reason indexing speeds up finding subsets is that mongo does not have to iterate through each document to see if they match the query- instead mongo can just go to each specific document location by looking it up in the index.
If you are returning the entire collection- then the index has no effect. Mongo fundamentally has to iterate through every document in the collection.
The storage engine you chose will affect the speed, so I suggest you read up on the differences between WiredTiger and mmapV1. I know there are third-party ones but can't think of them off the top of my head. | 1 | 1 | 0 | Data retrieval is too slow when I query for all the data at once in MongoDB using the query db.find({}, {'_id':0}).
I am using PyMongo
How can I retrieve all the documents faster using the Python driver?
I think indexing can make data retrieve faster but how to apply Indexing on whole collection to make db.find({}) query for whole collection runs faster. | How to apply indexing in mongodb to read the whole data at once faster | 0 | 1 | 0 | 117 |
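A small PyMongo sketch of streaming the whole collection instead of materialising it at once; the connection string, database/collection names and the process() handler are assumptions:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder connection string
coll = client["mydb"]["mycollection"]              # placeholder names

# Iterate the cursor; a larger batch size reduces round trips to the server
for doc in coll.find({}, {"_id": 0}).batch_size(1000):
    process(doc)  # hypothetical per-document handler
```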
37,992,129 | 2016-06-23T12:48:00.000 | 0 | 0 | 0 | 0 | java,python-2.7,tensorflow | 37,997,580 | 3 | false | 1 | 0 | I've had the same problem, Java+Python+TensorFlow. I've ended up setting up a simple http server. If that's too slow for you, you can shave off some overhead by employing sockets directly. | 1 | 1 | 1 | So I have a neural network in tensorflow (python2.7) and I need to retrieve its output using Java. I have a simple python function getValue(input) which starts the session and retrieves the value. I am open to any suggestions. I believe Jython wont work cause tensorflow is not in the library. I need the call to be as fast as possible. JNI exists for Java calling C so can I convert with cython and compile then use JNI? Is there a way to pass the information in RAM or some other way I haven't thought of? | Java calling python function with tensorflow graph | 0 | 0 | 0 | 1,916 |
37,992,209 | 2016-06-23T12:52:00.000 | 1 | 1 | 0 | 1 | python,session,cookies,tornado,pyramid | 38,032,565 | 1 | false | 1 | 0 | The two locations are separate origins in HTTP language. By default, they should not share cookies.
Before trying to figure out how to pass cookies around I'd try to set up a front end web server like Nginx that would proxy requests between two different backend servers. Both applications could get their own path, served from www.abcd.com. | 1 | 2 | 0 | In a server, I have a pyramid app running and I plan to have another tornado app running in parallel in the same machine. Let's say the pyramid app is available in www.abcd.com, then the tornado app will be available in www.abcd.com:9000.
I want only the authenticated users in the pyramid webapp to be able to access the tornado webapp.
My guess is somehow using cookie set by the pyramid app in tornado.
Is it possible? What is the best way to do that? | How to use pyramid cookie to authenticate user in tornado web framework? | 0.197375 | 0 | 0 | 83 |
37,993,175 | 2016-06-23T13:33:00.000 | 0 | 0 | 1 | 0 | python,command-prompt,anaconda | 66,311,256 | 3 | false | 0 | 0 | When you use anaconda command prompt it opened at conda directory (path where all the conda commands run)
For example, when I ran pip3 install prettytable from the regular Command Prompt it installed successfully, but the package was not picked up in Jupyter Notebook.
But when I installed it using the Anaconda Prompt, it was picked up instantly. | 1 | 51 | 0 | I installed Anaconda on my computer. After I installed the software, I found there is one program called Anaconda Prompt.
What is the difference between anaconda prompt and command prompt? If I want to update the package, which one I should use or either one. Like (conda update conda)
Thank you | difference between command prompt and anaconda prompt | 0 | 0 | 0 | 69,999 |
37,993,285 | 2016-06-23T13:37:00.000 | 0 | 0 | 0 | 1 | python-2.7,docker,ubuntu-14.04,pyside | 38,057,871 | 1 | false | 0 | 1 | Ok, after several days of trying figure that this is what was needed:
For the tooltip problem:
Adding WA_AlwaysShowToolTips
Change the stylesheet so that the transparency only applies to the QPushButton background and not the entire widget.
For the event problem:
Add the attribute: WA_Hover
Everything worked as should be after that | 1 | 0 | 0 | I wrote a PySide application that should run on python2.7, on window,Linux and Docker container. The application contains Qtooltip and specific eventFilter that catches HoverEnter\HoverLeave.
The application works well on windows 10 and ubuntu 14.04 desktop but when trying to run it inside Ubuntu 14.04 container both features didn't work well:
The tooltip - It would seem that the text is covered by other tooltip text or totally black.
The eventFilter - The application can't get the hover event (didn't appear).
The main difference that I saw was that when running on Ubuntu desktop, some GTk libraries are loaded to the python process (according to the maps files).
I tried to reproduce the problem by installing everything on Ubuntu Server (without GTK), and got the same effect as inside the container.
Even after installing Gtk on the server, still no change.
I think that I might have missed some dependencies, but can't find any documentation on the issue.
Thanks in advance, | Pyside fail to show tooltip and specific events in Linux | 0 | 0 | 0 | 70 |
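A minimal sketch showing where the two widget attributes from the answer would be set in a PySide (Qt4) app; the widget and tooltip text are placeholders:

```python
from PySide import QtCore, QtGui

app = QtGui.QApplication([])
button = QtGui.QPushButton("Hover me")
button.setToolTip("tooltip text")

# The two attributes the answer refers to
button.setAttribute(QtCore.Qt.WA_AlwaysShowToolTips, True)
button.setAttribute(QtCore.Qt.WA_Hover, True)

button.show()
app.exec_()
```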
37,996,435 | 2016-06-23T15:53:00.000 | 0 | 0 | 1 | 0 | python,converter,type-conversion,xlwings | 70,226,530 | 2 | false | 0 | 0 | In my case conclusion was, just adding one row to the last row of raw data.
Write some text in the column you want read as str, save, load, and then delete that last row. | 1 | 1 | 0 | I have an Excel file that I want to convert, but the default type for numbers is float. How can I change it so xlwings explicitly uses strings and not numbers?
This is how I read the value of a field:
xw.Range(sheet, fieldname ).value
The problem is that numbers like 40 get converted to 40.0 if I create a string from that. I strip it with: str(xw.Range(sheetFronius, fieldname ).value).rstrip('0').rstrip('.') but that is not very helpful and leads to errors because sometimes the same field can contain both a number and a string. (Not at the same time, the value is chosen from a list) | How can I read every field as string in xlwings? | 0 | 1 | 0 | 3,124 |
37,996,628 | 2016-06-23T16:02:00.000 | 3 | 0 | 0 | 0 | python,scipy,statistics,distribution,kolmogorov-smirnov | 37,996,674 | 3 | true | 0 | 0 | You do not need to worry about something going wrong with the scipy functions. P values that low just mean that it's really unlikely that your samples have the same parent populations.
That said, if you were not expecting the distributions to be (that) different, now is a good time to make sure you're measuring what you think you're measuring, i.e. you are feeding in the right data to scipy. | 2 | 2 | 1 | I'm using Python's non-parametric tests to check whether two samples are consistent with being drawn from the same underlying parent populations: scipy.stats.ks_2samp (2-sample Kolmogorov-Smirnov), scipy.stats.anderson_ksamp (Anderson-Darling for k samples), and scipy.stats.ranksums (Mann-Whitney-Wilcoxon for 2 samples). My significance threshold to say that two samples are significantly different from each other is p = 0.01.
If these three tests return extremely low p-values (sometimes like 10^-30 or lower), then do I need to worry about something having gone wrong with the scipy functions? Are these ridiculously small p-values reliable, and can I just report p << 0.01 (p much less than my threshold)? | Extremely low p-values from non-parametric tests | 1.2 | 0 | 0 | 1,931 |
37,996,628 | 2016-06-23T16:02:00.000 | 0 | 0 | 0 | 0 | python,scipy,statistics,distribution,kolmogorov-smirnov | 38,006,265 | 3 | false | 0 | 0 | Well, you've bumped into a well-known feature of significance tests, which is that the p-value typically goes to zero as the sample size increases without bound. If the null hypothesis is false (which can often be established a priori), then you can get as small a p-value as you wish, just by increasing the sample size.
My advice is to think about what practical difference it makes that the distributions differ. Try to quantify that in terms of cost, either real (dollars) or abstract. Then devise a measurement for that. | 2 | 2 | 1 | I'm using Python's non-parametric tests to check whether two samples are consistent with being drawn from the same underlying parent populations: scipy.stats.ks_2samp (2-sample Kolmogorov-Smirnov), scipy.stats.anderson_ksamp (Anderson-Darling for k samples), and scipy.stats.ranksums (Mann-Whitney-Wilcoxon for 2 samples). My significance threshold to say that two samples are significantly different from each other is p = 0.01.
If these three tests return extremely low p-values (sometimes like 10^-30 or lower), then do I need to worry about something having gone wrong with the scipy functions? Are these ridiculously small p-values reliable, and can I just report p << 0.01 (p much less than my threshold)? | Extremely low p-values from non-parametric tests | 0 | 0 | 0 | 1,931 |
37,997,715 | 2016-06-23T17:02:00.000 | 1 | 0 | 0 | 1 | python,python-idle | 59,004,916 | 5 | false | 0 | 0 | In Windows 10
1. Type in "Controlled folder Access"
2. Select "Allow an app through Controlled folder access" Select yes to "UAC"
3. Click on "+ Add an allowed app"
4. Select "recently blocked apps"
5. Find the executable for the C:\Python27
6. Click the + to add it.
7. Select Close
Then try running the Python Shell again. This worked for me 100%
Also, add exception through Windows Firewall Python27 select Private and Public. | 4 | 2 | 0 | I have tried uninstalling it and have searched other answers. None of them have worked; IDLE opens, but I can't run anything I write. | IDLE's subprocess didn't make a connection. Either IDLE can't start or personal firewall software is blocking connection | 0.039979 | 0 | 1 | 15,720 |
37,997,715 | 2016-06-23T17:02:00.000 | 0 | 0 | 0 | 1 | python,python-idle | 49,338,940 | 5 | false | 0 | 0 | First uninstall the application.Then reinstall it BUT at the time of reinstallation try -n at the end of location adress. It worked for me, you can copy the below text and paste it at the location while installing it.
“C:\Program Files\Python32\pythonw.exe” lib\idlelib\idle.py -n | 4 | 2 | 0 | I have tried uninstalling it and have searched other answers. None of them have worked; IDLE opens, but I can't run anything I write. | IDLE's subprocess didn't make a connection. Either IDLE can't start or personal firewall software is blocking connection | 0 | 0 | 1 | 15,720 |
37,997,715 | 2016-06-23T17:02:00.000 | 0 | 0 | 0 | 1 | python,python-idle | 46,725,394 | 5 | false | 0 | 0 | IDLE's subprocess didn't make a connection. Either IDLE can't start or a personal firewall software is blocking the connection.
Having had this problem myself I did an uninstall and created a new directory in the C drive and reinstalled in that folder, which worked for me. | 4 | 2 | 0 | I have tried uninstalling it and have searched other answers. None of them have worked; IDLE opens, but I can't run anything I write. | IDLE's subprocess didn't make a connection. Either IDLE can't start or personal firewall software is blocking connection | 0 | 0 | 1 | 15,720 |
37,997,715 | 2016-06-23T17:02:00.000 | 0 | 0 | 0 | 1 | python,python-idle | 44,459,007 | 5 | false | 0 | 0 | If you at the network environment then check on the secure Group (SG), to see if the user is listed under that group.
Otherwise, as others have suggested, right-click on the program and run it as Administrator to enable IDLE to run. | 4 | 2 | 0 | I have tried uninstalling it and have searched other answers. None of them have worked; IDLE opens, but I can't run anything I write. | IDLE's subprocess didn't make a connection. Either IDLE can't start or personal firewall software is blocking connection | 0 | 0 | 1 | 15,720
37,998,013 | 2016-06-23T17:19:00.000 | 3 | 1 | 0 | 0 | python,c++,algorithm,search | 37,998,220 | 2 | true | 0 | 0 | You have to decide at some point just how large you want your crawled list to become. Up to a few tens of millions of items, you can probably just store the URLs in a hash map or dictionary, which gives you O(1) lookup.
In any case, with an average URL length of about 80 characters (that was my experience five years ago when I was running a distributed crawler), you're only going to get about 10 million URLs per gigabyte. So you have to start thinking about either compressing the data or allowing re-crawl after some amount of time. If you're only adding 100,000 URLs per day, then it would take you 100 days to crawl 10 million URLs. That's probably enough time to allow re-crawl.
If those are your limitations, then I would suggest a simple dictionary or hash map that's keyed by URL. The value should contain the last crawl date and any other information that you think is pertinent to keep. Limit that data structure to 10 million URLs. It'll probably eat up close to 2 GB of space, what with dictionary overhead and such.
You will have to prune it periodically. My suggestion would be to have a timer that runs once per day and cleans out any URLs that were crawled more than X days ago. In this case, you'd probably set X to 100. That gives you 100 days of 100,000 URLs per day.
If you start talking about high capacity crawlers that do millions of URLs per day, then you get into much more involved data structures and creative ways to manage the complexity. But from the tone of your question, that's not what you're interested in. | 2 | 0 | 0 | I am building a web crawler which has to crawl hundreds of websites. My crawler keeps a list of urls already crawled. Whenever crawler is going to crawl a new page, it first searches the list of urls already crawled and if it is already listed the crawler skips to the next url and so on. Once the url has been crawled, it is added to the list.
Currently, I am using binary search to search the url list, but the problem is that once the list grows large, searching becomes very slow. So, my question is that what algorithm can I use in order to search a list of urls (size of list grows to about 20k to 100k daily).
Crawler is currently coded in Python. But I am going to port it to C++ or other better languages. | Efficiently searching a large list of URLs | 1.2 | 0 | 1 | 361 |
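A minimal sketch of the dictionary-keyed-by-URL approach from the accepted answer, with the periodic pruning it describes; the 100-day re-crawl window is taken from the answer:

```python
import time

SEEN = {}                      # url -> timestamp of the last crawl
MAX_AGE = 100 * 24 * 3600      # allow re-crawl after ~100 days

def should_crawl(url):
    last = SEEN.get(url)
    return last is None or time.time() - last > MAX_AGE

def mark_crawled(url):
    SEEN[url] = time.time()

def prune():
    # Run this from a daily timer to drop entries older than MAX_AGE
    cutoff = time.time() - MAX_AGE
    for url in [u for u, t in SEEN.items() if t < cutoff]:
        del SEEN[url]
```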
37,998,013 | 2016-06-23T17:19:00.000 | -1 | 1 | 0 | 0 | python,c++,algorithm,search | 37,998,279 | 2 | false | 0 | 0 | I think hashing your values before putting them into your binary searched list- this will get rid of the probable bottleneck of string comparisons, swapping to int equality checks. It also keeps the O(log2(n)) binary search time- you may not get consistent results if you use python's builtin hash() between runs, however- it is implementation-specific. Within a run, it will be consistent. There's always the option to implement your own hash which can be consistent between sessions as well. | 2 | 0 | 0 | I am building a web crawler which has to crawl hundreds of websites. My crawler keeps a list of urls already crawled. Whenever crawler is going to crawl a new page, it first searches the list of urls already crawled and if it is already listed the crawler skips to the next url and so on. Once the url has been crawled, it is added to the list.
Currently, I am using binary search to search the url list, but the problem is that once the list grows large, searching becomes very slow. So, my question is that what algorithm can I use in order to search a list of urls (size of list grows to about 20k to 100k daily).
Crawler is currently coded in Python. But I am going to port it to C++ or other better languages. | Efficiently searching a large list of URLs | -0.099668 | 0 | 1 | 361 |
37,999,400 | 2016-06-23T18:40:00.000 | 0 | 0 | 1 | 1 | python,virtualenv,atom-editor | 38,010,189 | 1 | true | 0 | 0 | It seems that python-autocomplete plugin is all we need in atom python coding. Just set your virtualenv from the console, then type atom . and editor (accompanied with aforementioned plugin) will pick that up. No other configuration is necessary. | 1 | 0 | 0 | I'm trying to switch to Atom Editor for my python projects. I have several of those and each is set up using virtualenv.
How do I set up this editor, so that when I open up one project it will be using python.exe from its path and not some other?
I don't want to add anything to the init script because, as far as I understand, this script contains global settings. Is there a way to configure the desired behavior 'per project'? | How to set path to python (venv) executable in atom editor | 1.2 | 0 | 0 | 1,646 |
38,000,416 | 2016-06-23T19:39:00.000 | 1 | 1 | 1 | 1 | python,python-2.7,ubuntu,pip | 38,004,576 | 1 | true | 0 | 0 | Use dpkg -S <path...> for installed packages, or apt-file search <paths...> for packages that might not be installed. | 1 | 0 | 0 | What is the way to get the name of the package which creates specific dir under /usr/lib/python2.7/dist-packages/ on Ubuntu
For example I am trying to get the package name which installs /usr/lib/python2.7/dist-packages/hdinsight_common/ or /usr/lib/python2.7/dist-packages/hdinsight_common/decrypt.sh
can anyone help me with this ?
Thanks | How to get source packages from dir in /usr/lib/python2.7/dist-packages/ | 1.2 | 0 | 0 | 117 |
38,000,992 | 2016-06-23T20:15:00.000 | 0 | 0 | 0 | 1 | python,python-3.x,http,tornado | 38,003,009 | 1 | true | 0 | 0 | Callbacks are executed as soon as possible after the event for which they're waiting is complete. So, they are called in response order. | 1 | 0 | 0 | I'm using Tornado's AsyncHTTPClient to fetch a URL multiple times. I pass in a different callback with each request.
If I send requests A, B (with associated callbacks Callback_A and Callback_B) to a URL, but the responses come back in the opposite order B, A. Should I expect the callbacks to be called in the order of Callback_A, Callback_B or will they get called in the opposite order?
I'd like to have the callbacks called in the order of responses (so Callback_B, Callback_A). If that's not the default behavior is there a way to do that instead? | Do tornado AsyncHTTPClient fetch callbacks get called in order of request or response? | 1.2 | 0 | 0 | 211 |
38,003,213 | 2016-06-23T22:56:00.000 | 10 | 0 | 1 | 0 | python,ipython,ipython-parallel | 38,015,997 | 1 | true | 0 | 0 | There are a few reasons why you might choose IPython parallel, which may or may not be relevant to you:
There are some things IPython parallel can serialize efficiently (numpy arrays) that multiprocessing doesn't do as well because it pickles everything
IPython parallel can distribute work across many machines, which multiprocessing cannot.
IPython parallel manages persistent interactive namespaces on each engine (a full IPython session), which can be useful for composing work in pieces and debugging.
In general, if you are just trying to parallelize small bits of code on your multi-core computer, IPython parallel doesn't offer you much over multiprocessing, and the burden of starting and connecting to an IPython cluster isn't worth it. But if you might want to distribute it across more machines, IPython parallel will let you do that. And since it works the same way whether you are using one computer or one hundred, you can prototype on your laptop and then run the exact same code on a larger scale without any changes. | 1 | 3 | 0 | Is there any reason to use Ipyparallel for common python script (not ipython notebook)? | Is there any reason to use Ipyparallel for common python script (not ipython notebook) over multiprocessing module? | 1.2 | 0 | 0 | 1,730 |
38,007,240 | 2016-06-24T06:50:00.000 | 1 | 0 | 0 | 0 | mysql,python-3.x,raspberry-pi2 | 48,172,697 | 3 | false | 0 | 0 | Just use $sudo apt-get install python3-mysqldb and it works on pi-3. | 1 | 4 | 0 | I am new at using raspberry pi.
I have a python 3.4 program that connects to a database on hostinger server.
I want to install the MySQL connector on the Raspberry Pi. I searched a lot but was not able to find answers; any help would be appreciated | installing mysql connector for python 3 in raspberry pi | 0.066568 | 1 | 0 | 31,314
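Once python3-mysqldb is installed as the answer suggests, a connection sketch looks roughly like this; the host, credentials and database name are placeholders:

```python
import MySQLdb  # provided by the python3-mysqldb package

conn = MySQLdb.connect(host="your-db-host", user="dbuser", passwd="secret", db="mydb")
cur = conn.cursor()
cur.execute("SELECT VERSION()")
print(cur.fetchone())
conn.close()
```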
38,008,389 | 2016-06-24T07:55:00.000 | 0 | 0 | 0 | 0 | python,button,tkinter | 68,547,819 | 2 | false | 0 | 1 | I found an answer that is a bit easier.
For example, let's say you have 4 horizontal ttk.Buttons and you wish to place a vertical ttk.Button next to them; let's call our new button Config.
Using the grid method for the buttons, the first vertical button is at col=0, row=0
second at col=0, row=1 etc
Create a button with width=4 and text=C\no\nn\nf\ni\ng, and grid it at col=1, row=0, rowspan=4.
There you have it. Simple and easy using standard tkinter. | 1 | 5 | 0 | Is it possible to orient a tk.Button or ttk.Button vertically? Smething like orienting a tk.Scrollbar in a way self.scrlbr = tk.Scrollbar(master, orient = vertical)?
Have tried tk.Button(*args).pack(fill = tk.Y), but it does not provide desired effect - button still gets horizontally oriented.
Found nothing in man pages, but maybe there is some not-staightforward way? | Is it possible to have a vertical-oriented button in tkinter? | 0 | 0 | 0 | 3,538 |
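A runnable sketch of the newline-text trick described in the answer above; the button labels are placeholders:

```python
import tkinter as tk

root = tk.Tk()

# Four ordinary buttons stacked in column 0
for row, name in enumerate(["One", "Two", "Three", "Four"]):
    tk.Button(root, text=name).grid(column=0, row=row, sticky="ew")

# "Vertical" button: one character per line, spanning all four rows
tk.Button(root, text="\n".join("Config"), width=4).grid(column=1, row=0, rowspan=4, sticky="ns")

root.mainloop()
```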
38,009,583 | 2016-06-24T09:09:00.000 | 0 | 0 | 0 | 0 | wxpython,wxwidgets | 38,017,758 | 1 | true | 0 | 1 | No, but you can use wx.combo.ComboCtrl instead which lets you implement the drop-down window yourself, and then you can make it do whatever you want. It's not a perfect emulation of the wx.ComboBox, and fairly complex compared to the wx.ComboBox, so it may not be worth it to you for just adding horizontal scrolling. You can see examples in the wxPython demo. | 1 | 0 | 0 | After setting limited width size,strings which is too long get truncated in the dropdown of wxcombobox.
self.namelist = wx.ComboBox(self, -1, "", size=(270,-1))
Is there any way to make the combobox dropdown scroll horizontally, so that we can see long strings? | Can we set horizontal scrollbar for wxcombobox? | 1.2 | 0 | 0 | 217
38,013,224 | 2016-06-24T12:14:00.000 | 2 | 0 | 1 | 0 | python,numpy | 38,013,321 | 2 | true | 0 | 1 | If you use Ubuntu then you have 2 versions of python executables - python and python3. So I think you need to install dependencies for python3 version by sudo pip3 install numpy or sudo apt-get install python3-numpy if it exists in repos. | 1 | 2 | 0 | I recently formatted my hard drive and got rid of Windows and went to Linux. I had a program that used to work fine before the reformat but isn't working fine now.
I believe it was written for 3.4 and not 2.7 since I used import tkinter and not import Tkinter. In either case the program won't run now that I have made the switch over. In 2.7 it does nothing...it acts like it has run through the code and then stops and gives me back the cursor when it should be popping up a t/Tkinter window displaying a graph. In 3.4 I get the error saying numpy isn't installed.
When I apt-cache policy python-numpy it comes up showing it 1:1.8.2 is installed. When I do the same for scipy it shows 0.13.3 is installed. Seeing from other websites when I check for cython it shows 0.20.1+git90-gee6e38e is installed. When I check for tk it comes up 8.6.0 is installed.
I'm a bit lost. Why I do get the error code saying numpy isn't found when I got to run the program yet it is installed. What do I have to do to get this program back up and running again. | Install python dependencies not working | 1.2 | 0 | 0 | 520 |
38,014,135 | 2016-06-24T13:03:00.000 | 5 | 0 | 1 | 0 | python,math,compiler-errors,operators,exponent | 38,014,187 | 1 | true | 0 | 0 | ^ is already taken as exclusive or in python. So ** was the better alternative. | 1 | 1 | 0 | I am used to writing e^10 in several languages. However, every time in my short time writing Python I end up with this type error:
TypeError: unsupported operand type(s) for ^: 'float' and 'int'
Since in Python we should use **. What made Python choose that operator instead of the ^, which, I think, is more frequently used in programming and is more natural to my mind. | Exponent syntax in Python | 1.2 | 0 | 0 | 1,609 |
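To illustrate the difference the answer describes:

```python
>>> 2 ** 10     # exponentiation
1024
>>> 2 ^ 10      # bitwise XOR, not a power
8
>>> 2.0 ^ 10    # XOR is only defined for integers, hence the error in the question
Traceback (most recent call last):
  ...
TypeError: unsupported operand type(s) for ^: 'float' and 'int'
```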
38,016,242 | 2016-06-24T14:49:00.000 | 1 | 0 | 0 | 0 | python,sql,python-3.x,config,md5 | 38,016,382 | 2 | true | 0 | 0 | MD5, unfortunately, is a hash signature protocol, not an encryption protocol. It is used to generate strings that are used to detect even the very-slightest change to the value from which the MD5 hash was produced. But . . . (by design) . . . you cannot recover the value that was originally used to produce the signature!
If you are working in a corporate, "intra-net" setting, consider using LDAP (Microsoft OpenDirectory) or some other form of authorization/authentication, in lieu of "passwords." In this scenario, the security department authorizes your application to do certain things, and they provide you with an otherwise-meaningless token with which to do it. The database uses the presented token, along with other rules controlled only by the security department, to recognize your script and to grant selected access to it. The token is "useless if stolen."
If you do still need to use passwords, you'll need to find a different way to securely store them. MD5 cannot be used. | 1 | 0 | 0 | I have a python program that accesses SQL databases with the database login currently encoded in base64 in a text file. I'd like to encode the login instead using MD5 and store it in a config file, but after some research, I couldn't find much on the topic. Could someone point me in the right direction on where to start? | Encrypting a SQL Login in a Python program using MD5 | 1.2 | 1 | 0 | 397 |
38,018,045 | 2016-06-24T16:26:00.000 | 0 | 0 | 1 | 1 | python,pygame,ubuntu-14.04 | 38,035,454 | 2 | false | 0 | 0 | First of all I want to thanks Bennet for responding to my question so that I was able to figure out what the problem was. Actually the problem was with aliasing. When I installed cv2 or pygame using apt-get, they were installed for default version but when I installed any package by downloading the installer first (like I installed anaconda), it was installed for python 2.7.11 because 'python' was aliased for this version(that is 2.7.11). So, basically make sure that the default version for which you want to install everything is the one which is aliased as 'python', and everything goes fine. I aliased 'python' for the default version and then installed anaconda via installer and now it has been installed default version. | 1 | 0 | 0 | I have Ubuntu 14.04 LTS. I guess different versions of python are pre-installed in Ubuntu 14.04. Right now when I type 'python' in terminal it opens python 2.7.11, but I guess the default version of Ubuntu 14.04 is 2.7.6. When I type /usr/bin/python it opens the default version. I know this can be done with making aliases. The real problem is, I have installed pygame, cv2 (that is for image processing) using apt-get. These are installed for default version of python i.e python 2.7.6. Also I have installed anaconda with python 2.7.11 using pip, but again 'pip' and anaconda are installed for 2.7.11. I know python 3 is also pre-installed there but I don't use it. Also I have no python version installed in user/local/bin.Now I want to know why this problem is occurring? How can I fix this now? Also how to import all the libraries for one python version(either default or another) and how to use it? How to configure my settings so that I would not have any problem in future? | How to install pygame, cv2, anaconda, pip etc to any one version of python in ubuntu 14.04 | 0 | 0 | 0 | 180 |
38,021,598 | 2016-06-24T20:27:00.000 | 16 | 0 | 0 | 1 | python,bash | 38,021,808 | 1 | true | 0 | 0 | When python is installed, some installers will modify your .bash_profile. They save your previous version in .bash_profile.pysave. | 1 | 11 | 0 | I just noticed that I have a .bash_profile and a .bash_profile.pysave and I was wondering what the .pysave was, if I can delete it and how/why it came into existence. | what exactly is .bash_profile.pysave? | 1.2 | 0 | 0 | 3,075 |
38,023,256 | 2016-06-24T23:02:00.000 | 0 | 0 | 1 | 1 | python,windows,pyffmpeg | 38,023,476 | 1 | true | 0 | 0 | You're basically asking how to quickly set up a development environment to compile a given project. You're generally at the mercy of the project developers and how well they documented the build process.
On linux, you often have a package manager to make the installing and resolving of dependencies easy.
Since Windows doesn't have a package manager, many popular projects with lots of dependencies will include a download link to a Libraries zip file that contains all the dependencies necessary to compile the source.
Instead of running pip each time, it may be faster to just download the source to those python projects and run the setup.py manually, resolving dependencies until it succeeds.
In general, for python libraries that wrap C/C++ libraries, you're not going to be able to build the python library if you can't build the corresponding C/C++ library. So, you may want to download the ffmpeg source and try compiling it first.
Also, for some compiled python libraries, you may be able to find python wheels, which will contain pre-compiled binaries for your system, making the compile step unnecessary. If the python library wraps another C/C++ library, you'll still need to download and install the appropriate version of the library that it wraps (e.g. ffmpeg) | 1 | 0 | 0 | I am trying to work with python on a new project in my windows system. The project uses ffmpeg and pyrabin among others. I find it extremely difficult to move forward with pip installing these packages as they constantly keep on asking for missing dependencies. Following are some errors:
ffvideo\ffvideo.c(254) : fatal error C1083: Cannot open include file: 'libavutil/rational.h': No such file or directory
local\temp\pip-build-kvsijc\pyrabin\src\rabin_polynomial.h(38) : fatal error C1083: Cannot open include file: 'stdint.h': No such file or directory
It is taking me forever to resolve each of them. Please advice on how to quickly resolve such missing dependencies. I tried google and it is full of options for linux systems. Any help would be highly appreciated. | python packages with dependencies in windows system | 1.2 | 0 | 0 | 802 |
38,024,133 | 2016-06-25T01:33:00.000 | 0 | 1 | 0 | 0 | python-2.7,cgi,iis-8 | 38,183,797 | 1 | false | 0 | 0 | I fixed the issue by uninstalling activePython which was installing modules under the users profile in the appdata folder.
This caused an issue where the anonymous user of the website no longer had access to the installed modules.
I uninstalled ActivePython, returned to the normal Windows Python install, and re-installed the modules using pip.
All scripts are working as expected, happy days. | 1 | 0 | 0 | When I try to import passlib.hash in my python script I get a 502 error
502 - Web server received an invalid response while acting as a gateway or proxy server.
There is a problem with the page you are looking for, and it cannot be displayed. When the Web server (while acting as a gateway or proxy) contacted the upstream content server, it received an invalid response from the content server.
The only modules I'm importing are:
import cgi, cgitb
import passlib.hash
passlib.hash works fine when I try in a normal python script or if I try importing in python interactive shell
using python 2.7, iis 8
when I browse on the localhost I get this
HTTP Error 502.2 - Bad Gateway
The specified CGI application misbehaved by not returning a complete set of HTTP headers. The headers it did return are "Traceback (most recent call last): File "C:##path remove##\test.py", line 2, in import passlib.hash ImportError: No module named passlib.hash ". | Importing passlib.hash with CGI | 0 | 0 | 1 | 116 |
38,024,935 | 2016-06-25T04:24:00.000 | 7 | 0 | 1 | 0 | python,exe | 38,026,380 | 3 | true | 0 | 0 | PyInstaller works up to Python 3.5. Once you've installed it (type in your terminal pip install pyinstaller), you can do in your terminal:
pyinstaller --onefile script.py
where script.py is the name of the script you want to compile into an .exe
With the --onefile option it will create only one .exe file. | 1 | 5 | 0 | So far I have used cx_freeze to convert .py file to .exe file, but I get many files. Is there a way to get it all into one executable?
I have seen that PyInstallerGUI is able to that, but it is for Python 2.7. Can it be done with Python 3.4 as well? | Python to EXE file in one file | 1.2 | 0 | 0 | 6,074 |
38,031,729 | 2016-06-25T18:34:00.000 | 3 | 1 | 0 | 0 | python,couchbase,aws-lambda | 54,285,148 | 2 | false | 0 | 0 | Following two things worked for me:
Manually copy /usr/lib64/libcouchbase.so.2 into your project folder
and zip it with your code before uploading to AWS Lambda.
Use Python 2.7 as runtime on the AWS Lambda console to connect to couchbase.
Thanks ! | 1 | 6 | 0 | I'm trying to use AWS Lambda to transfer data from my S3 bucket to Couchbase server, and I'm writing in Python. So I need to import couchbase module in my Python script. Usually if there are external modules used in the script, I need to pip install those modules locally and zip the modules and script together, then upload to Lambda. But this doesn't work this time. The reason is the Python client of couchbase works with the c client of couchbase: libcouchbase. So I'm not clear what I should do. When I simply add in the c client package (with that said, I have 6 package folders in my deployment package, the first 5 are the ones installed when I run "pip install couchbase": couchbase, acouchbase, gcouchbase, txcouchbase, couchbase-2.1.0.dist-info; and the last one is the c client of Couchbase I installed: libcouchbase), lambda doesn't work and said:
"Unable to import module 'lambda_function': libcouchbase.so.2: cannot open shared object file: No such file or directory"
Any idea on how I can get the this work? With a lot of thanks. | How to create AWS Lambda deployment package that uses Couchbase Python client | 0.291313 | 0 | 0 | 348 |
38,032,608 | 2016-06-25T20:17:00.000 | 1 | 1 | 0 | 0 | python,node.js,socket.io,raspberry-pi | 38,032,700 | 1 | true | 0 | 0 | It's unclear exactly which part you need help with. To make a socket.io connection work, you do the following:
Run a socket.io server on one of your two computers. Make sure it is listening on a known port (it can share a port with a web server if desired).
On the other computer, get a socket.io client library and use that to make a socket.io connection to the other computer.
Register message handlers on both computers for whatever custom messages you intend to send each way and write the code to process those incoming messages.
Write the code to send messages to the other computer at the appropriate time.
Socket.io client and server libraries exist for both node.js and python so you can either type of library for either type of system.
The important things to understand are that you must have a socket.io server up and running. The other endpoint then must connect to that server. Once the connection is up and running, you can then send message from either end to the other end.
For example, you could set up a socket.io server on node.js. Then, use a socket.io client library for python to make a socket.io connection to the node.js server. Then, once the connection is up and running, you are free to send messages from either end to the other and, if you have, message handlers listening for those specific messages, they will be received by the other end. | 1 | 0 | 0 | My requirement is to communicate socketio with nodejs server to Raspberry Pi running a local Python app. Please help me. I can find ways of communication with web app on google but is there any way to communicate with Python local app with above mentioned requirements. | Raspberry Pi python app and nodejs socketio communication | 1.2 | 0 | 1 | 885 |
38,036,233 | 2016-06-26T07:36:00.000 | 0 | 1 | 0 | 0 | python,ipc,dllimport | 38,201,278 | 1 | false | 0 | 0 | Answer according to Doug Ross: consider the Asyncio module. | 1 | 1 | 0 | I have a Python function get_messages() that is able to retrieve messages from another application via a dll. These messages arrive at a rate of about 30hz, and I need to fill a buffer with these messages, while the main Python application is running and doing things with theses messages. I believe the filling of the buffer should occur in a separate thread. My question is: what is the best Pythonic way to retrieve these messages ? (running a loop in a separate thread is probably not the best solution). Is there a module that is dedicated to this sort of tasks? | python - communicating with other applications at high rate | 0 | 0 | 0 | 86 |
38,036,339 | 2016-06-26T07:51:00.000 | 0 | 0 | 1 | 1 | python-3.x | 38,039,181 | 1 | false | 0 | 0 | Go to the directory {Your python root dir}\Scripts
Then press Shift + Right CLick > Open CMD here
Then type in pip install {package name} | 1 | 0 | 0 | I have a fresh install of python 3.5 on my windows PC
and here is my directory of my installation:
C:\Users*PCNAME*\AppData\Local\Programs\Python\Python35-32
I have installed it entering python get-pip.py in CMD, it says it is installed successfully but when I enter pip in CMD it says it is not recognized?
Please kindly enlighten me | How do I install PIP in python 3.5? | 0 | 0 | 0 | 1,881 |
38,040,240 | 2016-06-26T15:45:00.000 | 0 | 0 | 0 | 0 | python,django,apache,32bit-64bit,32-bit | 38,040,277 | 1 | false | 1 | 0 | I would assume so. You should definitely go for a 64-bit version of Apache to make use of all the memory available. | 1 | 0 | 0 | simple question - if I run apache 32bit version, on 64bit OS, with a lot of memory (32GB RAM). Does this mean all the memory will go to waste since 32bit apache can't use more then 3GB ram? | Apache web server 32bit on 64bit computer | 0 | 1 | 0 | 203 |
38,042,915 | 2016-06-26T20:32:00.000 | 1 | 0 | 1 | 0 | python,python-3.x,save | 38,043,046 | 1 | true | 0 | 0 | If you really need the output to be robust to random shutdowns and/or crashes, you should write each entry to file as they are generated instead of appending to a list (assuming you don't need them in the program). And if you do it using with open(...) as f: then it will handle closing the file appropriately even with unexpected shutdowns.
Alternatively, if you write the list as a class attribute, and the class instance is in the global namespace, it will still exist after you cancel the program and you can access it manually then. That's not very pretty though, and is problematic if anyone else touches your program, or if you come back to it later.
Or perhaps the best option would be to add an appropriate stopping condition and avoid having to terminate your program. | 1 | 1 | 0 | I am doing some tests: running time functions and appending values to a list.
I am not really sure what to receive. I am thinking about cancelling the shell after a period of time. However, I would like to see what values were appended to the list. Is there any pythonic way to do this, or is the best way to save data to a txt file? | Saving list values when terminating | 1.2 | 0 | 0 | 103 |
38,042,987 | 2016-06-26T20:41:00.000 | 0 | 0 | 0 | 0 | python,image,matplotlib,save | 46,231,053 | 2 | false | 0 | 0 | You should do a for loop over different dpi values in decreasing order and in every loop save the image, check the file size and delete the image if filesize > 15 MB. After filesize < 15 MB break the loop. | 2 | 0 | 1 | Just wondering if there is a tried and tested method for setting the dpi output of your figure in matplotlib to be governed by some maximum file size, e.g., 15MB. | matplotlib: How to assign the dpi of your figure to meet some set maximum file size? | 0 | 0 | 0 | 80 |
38,042,987 | 2016-06-26T20:41:00.000 | 0 | 0 | 0 | 0 | python,image,matplotlib,save | 38,043,037 | 2 | false | 0 | 0 | There can not be such a mechanism, because the file size can only be determined by actually rendering the finished drawing to a file, and that must happen after setting up the figure (where you set the DPI).
How, for example, should anyone know, before rendering your curve, how well it's compressible as PNG? Or how big the PDF you might alternatively generate might be, without knowing how many lines you'll plot?
Also, matplotlib potentially has a lot of different output formats.
Hence, no, this is impossible. | 2 | 0 | 1 | Just wondering if there is a tried and tested method for setting the dpi output of your figure in matplotlib to be governed by some maximum file size, e.g., 15MB. | matplotlib: How to assign the dpi of your figure to meet some set maximum file size? | 0 | 0 | 0 | 80 |
38,043,336 | 2016-06-26T21:25:00.000 | 0 | 0 | 1 | 1 | python,sublimetext3,sublimerepl | 53,070,278 | 2 | false | 0 | 0 | As mentioned above (a long time ago) the key bindings aren't present for Windows. However, one can Mouse Right Click to open a context menu. From here there are menu options for Kill and Restart. You can also open a sub-menu which allows you send those and other signals including SIGINT. | 1 | 3 | 0 | I'm using REPL extension for Sublime text 3 for my python projects. Currently when I want to interrupt a running script I have to close to close the REPL window to stop execution and all computations are so far are lost.
I was wondering if anybody knows how to interrupt an execution and have a short cut or key bindings for that | Key bindings for interrupt execution in Python Sublime REPL | 0 | 0 | 0 | 1,773 |
38,044,103 | 2016-06-26T23:27:00.000 | 2 | 1 | 1 | 0 | python | 38,082,041 | 1 | false | 0 | 0 | I was able to reproduce the error you encounter by creating an array of complex64 from [0, 2+j, -3.14-7.99j], saving it to a file and reading it as Python built-in complex type.
The issue is that the built-in complex type has the size of a C double which, depending on your plateform, may differ from 32-bits (256 bits on my machine).
You must use numpy.fromfile('file_name', dtype=numpy.complex64) to read your file correctly, i.e. make sure the complex numbers are read as two 32-bits floating point numbers. | 1 | 3 | 0 | I have a binary file that contains several complex numbers of type complex64? (i.e. four bytes of type float for the real part and another four bytes for the imaginary part). The real and imaginary parts are multiplexed so that the real part is stored first and followed by the imaginary part. | How to read a binary file of type complex64 values in Python | 0.379949 | 0 | 0 | 1,833 |
38,045,616 | 2016-06-27T03:46:00.000 | 2 | 0 | 0 | 0 | database,python-2.7,pythonanywhere | 38,056,627 | 1 | true | 0 | 0 | You cannot get PythonAnywhere to read the files directly off your machine. At the very least, you need to upload the file to PythonAnywhere first. You can do that from the Files tab. Then the link that Rptk99 provided will show you how to import the file into MySQL. | 1 | 2 | 0 | I'm new to pythonanywhere. I wonder how to load data from local csv files (there are many of them, over 1,000) into a mysql table. Let's say the path for the folder of the csv files is d:/data. How can I write let pythonanywhere visit the local files? Thank you very much! | Pythonanywhere Loading data from local files | 1.2 | 1 | 0 | 1,111 |