Dataset columns (name: dtype, value range or string length):

Q_Id: int64, 2.93k to 49.7M
CreationDate: string, length 23 to 23
Users Score: int64, -10 to 437
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
DISCREPANCY: int64, 0 to 1
Tags: string, length 6 to 90
ERRORS: int64, 0 to 1
A_Id: int64, 2.98k to 72.5M
API_CHANGE: int64, 0 to 1
AnswerCount: int64, 1 to 42
REVIEW: int64, 0 to 1
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: string, length 15 to 5.1k
Available Count: int64, 1 to 17
Q_Score: int64, 0 to 3.67k
Data Science and Machine Learning: int64, 0 to 1
DOCUMENTATION: int64, 0 to 1
Question: string, length 25 to 6.53k
Title: string, length 11 to 148
CONCEPTUAL: int64, 0 to 1
Score: float64, -1 to 1.2
API_USAGE: int64, 1 to 1
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 15 to 3.72M
35,188,305
2016-02-03T21:42:00.000
2
0
0
0
0
python,html,flask,bokeh,flask-socketio
0
35,204,805
0
2
0
true
1
0
You are really asking two questions in one: you have two problems here. First, you need a mechanism to periodically give the client access to updated data for your tables and charts. Second, you need the client to incorporate those updates into the page. For the first problem, you have basically two options. The more traditional one is to send Ajax requests (i.e. requests that run in the background of the page) to the server on a regular interval. The alternative is to enhance your server with WebSocket; then the client can establish a permanent connection, and whenever the server has new data it can push it to the client. Which option to use largely depends on your needs. If the frequency of updates is not too high, I would probably use background HTTP requests and not worry about adding Socket.IO to the mix, which has its own challenges. On the other hand, if you need a live, constantly updating page, then maybe WebSocket is a good idea. Once the client has new data, you have to deal with the second problem. The way you deal with it is specific to the tables and charts that you are using. You basically need to write JavaScript code that passes the new values received from the server into these components so that the page is updated. Unfortunately there is no automatic way to trigger an update. You could obviously throw the current page away and rebuild it from scratch with the new data, but that would not look nice, so you should find out what kind of JavaScript APIs these components expose for receiving updates. I hope this helps!
1
0
0
0
I have developed a Python web application using the Flask microframework. I have some interactive plots generated by Bokeh and some HTML5 tables. My question is: how can I update my table and graph data on the fly? Should I use the threading class, set a timer, and then re-run my code every couple of seconds and feed updated data entries to the table and graphs? I also investigated flask-socketIO, but all I found is for sending and receiving messages; is there a way to use flask-socketIO for this purpose? I also worked a little bit with Bokeh-server; should I go in that direction? Does it mean I need to run two servers, my Flask web server and bokeh-server? I am new to this kind of work. I would appreciate it if you can explain in detail what I need to do.
Streaming live data in HTML5 graphs and tables
1
1.2
1
0
0
1,747
35,195,348
2016-02-04T07:51:00.000
0
0
1
0
0
python,multithreading
0
35,222,210
0
2
0
false
0
0
Thanks for the response. After some thought, I have decided to use the approach of many queues and a router thread (hub-and-spoke). Every 'normal' thread has its private queue to the router, enabling separate send and receive queues or 'channels'. The router's queue is shared by all threads (as a property) and used by 'normal' threads as a send-only channel, i.e. they only post items to this queue, and only the router listens to it, i.e. pulls items. Additionally, each 'normal' thread uses its own queue as a receive-only channel on which it listens and which is shared only with the router. Threads register themselves with the router on the router queue/channel; the router maintains a list of registered threads including their queues, so it can send an item to a specific thread after its registration. This means that peer-to-peer communication is not possible; all communication is sent via the router. There are several reasons I did it this way: 1. There is no logic in the thread for checking whether an item is addressed to 'me', making the code simpler, and no constant pulling, checking and re-putting of items on one shared queue. Threads only listen on their own queue; when a message arrives, the thread can be sure that the message is addressed to it, including the router itself. 2. The router can act as a message bus, do vocabulary translation and has the possibility to address messages to external programs or hosts. 3. Threads don't need to know anything about other threads' capabilities, i.e. they just speak the language of the router. In a peer-to-peer world, all peers must be able to understand each other, and since my threads are of many different classes, I would have to teach each class all the other classes' vocabulary. Hope this helps someone some day when faced with a similar challenge.
1
3
0
0
I have read lots about Python threading and the various means to 'talk' across thread boundaries. My case seems a little different, so I would like to get advice on the best option: instead of having many identical worker threads waiting for items in a shared queue, I have a handful of mostly autonomous, non-daemonic threads with unique identifiers going about their business. These threads do not block and normally do not care about each other. They sleep most of the time and wake up periodically. Occasionally, based on certain conditions, one thread needs to 'tell' another thread to do something specific (an action meaningful to the receiving thread). There are many different combinations of actions and recipients, so using Events for every combination seems unwieldy. The Queue object seems to be the recommended way to achieve this. However, if I have a shared queue and post an item on the queue that has just one recipient thread, then every other thread needs to monitor the queue, pull every item, check whether it is addressed to it, and put it back in the queue if it was addressed to another thread. That seems like a lot of getting and putting items from the queue for nothing. Alternatively, I could employ a 'router' thread: one shared-by-all queue plus one queue for every 'normal' thread, shared with the router thread. Normal threads only ever put items in the shared queue; the router pulls every item, inspects it and puts it on the addressee's queue. Still, a lot of putting and getting items from queues... Are there any other ways to achieve what I need to do? It seems a pub-sub class is the right approach, but there is no such thread-safe module in standard Python, at least to my knowledge. Many thanks for your suggestions.
Recommended way to send messages between threads in python?
0
0
1
0
1
1,923
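The hub-and-spoke design described in the answer above can be sketched with the standard library's queue and threading modules; the names and the message format here (router_q, inboxes, the ("register", ...) and ("send", ...) tuples) are illustrative assumptions, not part of the original post:

```python
import queue
import threading

router_q = queue.Queue()   # shared send-only channel: all threads post here
inboxes = {}               # thread name -> private receive-only queue

def router():
    # The router is the only consumer of router_q; it registers threads
    # and forwards addressed items to the recipient's private inbox.
    while True:
        msg = router_q.get()
        if msg is None:                    # shutdown sentinel
            break
        kind, payload = msg
        if kind == "register":
            name, inbox = payload
            inboxes[name] = inbox
        elif kind == "send":
            target, item = payload
            inboxes[target].put(item)

threading.Thread(target=router, daemon=True).start()

# A 'normal' thread registers its inbox; any other thread can then address it.
b_inbox = queue.Queue()
router_q.put(("register", ("B", b_inbox)))
router_q.put(("send", ("B", "wake up")))
received = b_inbox.get(timeout=1)
print(received)
router_q.put(None)   # stop the router
```

Because a Queue is FIFO with a single consumer, the registration is guaranteed to be processed before the send that follows it.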
35,237,044
2016-02-06T03:34:00.000
9
0
0
0
0
python,random-forest,xgboost,kaggle
0
35,248,119
0
1
0
false
0
0
Extra-trees (ET), a.k.a. extremely randomized trees, is quite similar to random forest (RF). Both methods are bagging methods that aggregate a number of fully grown decision trees. RF will only try to split on, e.g., a third of the features, but will evaluate every possible break point within these features and pick the best. ET, however, will only evaluate a random few break points and pick the best of those. ET can bootstrap samples for each tree or use all samples; RF must use bootstrapping to work well. xgboost is an implementation of gradient boosting and can work with decision trees, typically smaller trees. Each tree is trained to correct the residuals of the previously trained trees. Gradient boosting can be more difficult to train, but can achieve a lower model bias than RF. For noisy data, bagging is likely to be the most promising. For low-noise data and complex data structures, boosting is likely to be the most promising.
1
8
1
0
I am new to all these methods and am trying to get a simple answer, or perhaps someone could direct me to a high-level explanation somewhere on the web. My googling only returned kaggle sample codes. Are ExtraTrees and RandomForest essentially the same? And xgboost uses boosting when it chooses the features for any particular tree, i.e. sampling the features. But then how do the other two algorithms select the features? Thanks!
What is the difference between xgboost, ExtraTreesClassifier, and RandomForestClassifier?
0
1
1
0
0
2,360
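The "each tree corrects the residuals of the previous trees" idea from the answer above can be shown with a toy sketch in plain Python, using the mean of the current residuals as a stand-in for fitting a small tree (an illustration of the boosting loop only, not xgboost's actual algorithm):

```python
from statistics import mean

y = [3.0, 5.0, 7.0]          # toy regression targets
pred = [0.0] * len(y)        # the ensemble starts out predicting nothing

for _ in range(3):           # each round adds one 'weak learner'
    residuals = [t - p for t, p in zip(y, pred)]
    step = mean(residuals)   # stand-in for a small tree fit to the residuals
    pred = [p + step for p in pred]

print(pred)   # with constant learners, the fit settles at the targets' mean
```

Real gradient boosting fits a shallow tree to the residuals at each round instead of a constant, which is what lets it reduce bias beyond the mean.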
35,237,874
2016-02-06T05:53:00.000
0
0
0
0
0
python-2.7,pandas
0
35,237,949
0
3
0
false
0
0
If the operations are done in the pydata stack (numpy/pandas), you're limited to fixed-precision numbers, up to 64-bit. Perhaps store the arbitrary-precision numbers as strings?
2
0
1
0
I am working on a problem where I have to take the 15th power of numbers. When I do it in the Python console I get the correct output; however, when I put these numbers in a pandas data frame and then try to take the 15th power, I get a negative number. For example, 1456 ** 15 = 280169351358921184433812095498240410552501272576L, but when a similar operation is performed in pandas I get negative values. Is there a limit on the size of number which pandas can hold, and how can we change this limit?
Python Pandas largest number
1
0
1
0
0
112
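The 64-bit ceiling described in the answer above can be demonstrated with the standard library alone; the masking below simulates the two's-complement wraparound that a fixed-width int64 column exhibits (an illustration of the overflow, not pandas code):

```python
INT64_MIN, INT64_MAX = -2**63, 2**63 - 1

exact = 1456 ** 15              # plain Python ints are arbitrary precision
wrapped = exact & (2**64 - 1)   # keep only the low 64 bits, as int64 would
if wrapped > INT64_MAX:
    wrapped -= 2**64            # reinterpret the top bit as the sign

print(exact)           # the full 48-digit value from the question
print(wrapped)         # a meaningless wrapped value, possibly negative
print(float(exact))    # the float route: right magnitude, limited precision
```

The float conversion mentioned in the follow-up answer trades exactness for range: the magnitude survives, but only about 15 to 16 significant digits do.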
35,237,874
2016-02-06T05:53:00.000
0
0
0
0
0
python-2.7,pandas
0
35,237,988
0
3
0
false
0
0
I was able to overcome this by changing the data type from int to float; doing this gives 290 ** 15 = 8.629189e+36, which is good enough for my exercise.
2
0
1
0
I am working on a problem where I have to take the 15th power of numbers. When I do it in the Python console I get the correct output; however, when I put these numbers in a pandas data frame and then try to take the 15th power, I get a negative number. For example, 1456 ** 15 = 280169351358921184433812095498240410552501272576L, but when a similar operation is performed in pandas I get negative values. Is there a limit on the size of number which pandas can hold, and how can we change this limit?
Python Pandas largest number
1
0
1
0
0
112
35,266,360
2016-02-08T09:40:00.000
1
0
1
0
0
javascript,firefox,ipython-notebook,jupyter-notebook
0
35,270,287
0
2
0
false
1
0
In the address bar, type "about:config" (with no quotes) and press Enter. Click "I'll be careful, I promise". In the search bar, search for "javascript.enabled" (with no quotes). Right-click the result named "javascript.enabled" and click "Toggle". JavaScript is now enabled. To re-disable JavaScript, repeat these steps.
2
1
0
0
I'm trying to run jupyter notebook from a terminal on Xfce Ubuntu machine. I typed in the command: jupyter notebook --browser=firefox The firefox browser opens, but it is empty and with the following error: "IPython Notebook requires JavaScript. Please enable it to proceed." I searched the web on how to enable JavaScript on Ipython NoteBook but didn't find an answer. I would appreciate a lot any help. Thanks!
ipython notebook requires javascript - on firefox web browser
0
0.099668
1
0
0
3,643
35,266,360
2016-02-08T09:40:00.000
1
0
1
0
0
javascript,firefox,ipython-notebook,jupyter-notebook
0
35,266,458
0
2
0
false
1
0
JavaScript has to be enabled in the Firefox browser; it is currently turned off. To enable JavaScript in Mozilla Firefox: click the Tools drop-down menu and select Options. Check the boxes next to Block pop-up windows, Load images automatically, and Enable JavaScript. Refresh your browser by right-clicking anywhere on the page and selecting Reload, or by using the Reload button in the toolbar.
2
1
0
0
I'm trying to run jupyter notebook from a terminal on Xfce Ubuntu machine. I typed in the command: jupyter notebook --browser=firefox The firefox browser opens, but it is empty and with the following error: "IPython Notebook requires JavaScript. Please enable it to proceed." I searched the web on how to enable JavaScript on Ipython NoteBook but didn't find an answer. I would appreciate a lot any help. Thanks!
ipython notebook requires javascript - on firefox web browser
0
0.099668
1
0
0
3,643
35,276,844
2016-02-08T18:36:00.000
0
0
1
1
1
python,pythonpath,sys.path
0
35,276,882
0
1
0
false
0
0
It always uses PYTHONPATH. What happened is probably that you quit python, but didn't quit your console/command shell. For that shell, the environment that was set when the shell was started still applies, and hence, there's no PYTHONPATH set.
1
2
0
0
I'm having some trouble understanding how Python uses the PYTHONPATH environment variable. According to the documentation, the import search path (sys.path) is "Initialized from the environment variable PYTHONPATH, plus an installation-dependent default." In a Windows command box, I started Python (v.2.7.6) and printed the value of sys.path. I got a list of pathnames, the "installation-dependent default." Then I quit Python, set PYTHONPATH to .;./lib;, restarted Python, and printed os.environ['PYTHONPATH']. I got .;./lib; as expected. Then I printed sys.path. I think it should have been the installation-dependent default with .;./lib; added to the start or the end. Instead it was the installation-dependent default alone, as if PYTHONPATH were empty. What am I missing here?
When/how does Python use PYTHONPATH
0
0
1
0
0
96
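The answer's point, that what matters is the environment of the process that launches Python, can be checked by spawning child interpreters with and without the variable set; the directory name below is a made-up marker (PYTHONPATH entries are added to sys.path even when the directory does not exist):

```python
import os
import subprocess
import sys

marker = "pythonpath_demo_dir"   # distinctive, nonexistent directory name

# Child interpreter launched WITH PYTHONPATH in its environment:
with_pp = subprocess.check_output(
    [sys.executable, "-c", "import sys; print(sys.path)"],
    env=dict(os.environ, PYTHONPATH=marker), text=True)

# Child interpreter launched WITHOUT it:
clean_env = dict(os.environ)
clean_env.pop("PYTHONPATH", None)
without_pp = subprocess.check_output(
    [sys.executable, "-c", "import sys; print(sys.path)"],
    env=clean_env, text=True)

print(marker in with_pp)      # the entry shows up in the child's sys.path
print(marker in without_pp)   # absent when the variable is not inherited
```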
35,294,239
2016-02-09T14:12:00.000
0
0
0
0
0
python,mysql,tkinter
0
35,295,077
0
1
0
false
0
1
In order to periodically refresh the user's messages, make an infinite while loop and set it to update every 5 seconds or so; this way, every 5 seconds you check whether the database has new messages. Alternatively, you can make the loop update whenever the database has been updated, but this is more complex.
1
0
0
0
I made a simple chat program in Python that uses tkinter and a MySQL db. It connects to the db first, gets the messages and shows them to the user. But when another user sends a message to the user, the user cannot see the new messages. So I made a refresh button. But, as everybody knows, people don't want to use a chat program where you always have to press a button to see messages. The question is: how can I make an instant-messaging app without clicking any buttons? It doesn't have to use tkinter for the GUI; it can be run with other GUI libs.
Python3 - Tkinter - Instant messaging
0
0
1
0
0
294
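A minimal sketch of the polling idea from the answer above, with a plain list standing in for the MySQL table (all names here are made up). Note that in a tkinter app a blocking while loop would freeze the GUI, so the same poll function would normally be scheduled with root.after(5000, poll) instead:

```python
fake_db = ["hello", "are you there?"]   # stand-in for the messages table
seen = 0                                # how many messages we've displayed

def poll_new_messages():
    """Return only the messages that arrived since the last poll."""
    global seen
    new = fake_db[seen:]
    seen = len(fake_db)
    return new

first = poll_new_messages()    # both existing messages
fake_db.append("yes!")         # another user writes to the 'db'
second = poll_new_messages()   # only the new one comes back
print(first, second)
```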
35,302,508
2016-02-09T21:27:00.000
0
0
1
0
0
python,python-2.7,debugging,command-line,visual-studio-2015
0
35,303,799
0
5
0
false
0
0
You want to select "Execute Project with Python Interactive" from the debug dropdown menu. The keyboard shortcut for this is Shift+Alt+F5. When you do that, you will have a window open at the bottom of the screen called Python Interactive and you will see your printed statements and any prompts for inputs from your program. This does not allow you to also enter debug mode though. It is either one or the other.
1
17
0
0
I am working with Python Tools for Visual Studio. (Note, not IronPython.) I need to work with arguments passed to the module from the command line. I see how to start the module in Debug by right-clicking in the code window and selecting "Start with Debugging". But this approach never prompts me for command line arguments, and len(sys.argv) always == 1. How do I start my module in debug mode and also pass arguments to it so sys.argv has more than 1 member?
How do I pass command line arguments to Python from VS in Debug mode?
0
0
1
0
0
21,881
35,318,866
2016-02-10T15:13:00.000
0
0
0
1
0
python,json,elasticsearch
0
35,405,127
0
2
0
true
1
0
This is Chrisses answer, copied from gitter.im: You can use the dict field type for "unstructured data", as it takes arbitrary JSON. If the db engine is postgres, it uses jsonfield under the hood, and if the db engine is mongo, it's converted to a bson document as usual. Either way it should index automatically as expected in ES and will be queryable through the Ramses API. The following ES queries are supported on documents/fields: nefertari.readthedocs.org/en/stable/making_requests.html#query-syntax-for-elasticsearch See the docs for field types here; start at the high level (ramses) and it should "just work", but you can see what the code is mapped to at each level below, down to the db, if desired: ramses: ramses.readthedocs.org/en/stable/fields.html nefertari (underlying web framework): nefertari.readthedocs.org/en/stable/models.html#wrapper-api nefertari-sqla (postgres-specific engine): nefertari-sqla.readthedocs.org/en/stable/fields.html nefertari-mongodb (mongo-specific engine): nefertari-mongodb.readthedocs.org/en/stable/fields.html Let us know how that works out; sounds like it could be a useful thing. So far we've just used that field type to hold data like user settings that the frontend wants to persist but for which the API isn't concerned.
1
1
0
0
I would like to give my users the possibility to store unstructured data in JSON-Format, alongside the structured data, via an API generated with Ramses. Since the data is made available via Elasticsearch, I try to achieve that this data is indexed and searchable, too. I can't find any mentioning in the docs or searching. Would this be possible and how would one do it? Cheers /Carsten
Storing unstructured data with ramses to be searched with Ramses-API?
1
1.2
1
0
0
95
35,322,629
2016-02-10T18:00:00.000
1
0
0
0
1
python,apache,flask,amazon-redshift
1
44,923,869
0
1
0
false
1
0
I solved this error by turning DEBUG=False in my config file [and/or in the run.py]. Hope it helps someone.
1
4
0
0
I am using Apache with mod_wsgi on the Windows platform to deploy my Flask application. I am using SQLAlchemy to connect to a Redshift database with a connection pool (size 10). After a few days, I suddenly started getting the following error: (psycopg2.OperationalError) SSL SYSCALL error: Software caused connection abort Can anybody suggest why I am getting this error and how to fix it? If I restart Apache, the error goes away, but after a few days it comes back.
(psycopg2.OperationalError) SSL SYSCALL error: Software caused connection abort
0
0.197375
1
1
0
3,963
35,326,476
2016-02-10T21:39:00.000
2
0
1
0
0
python,django
1
35,326,564
1
2
0
false
1
0
Create your own virtualenv: if all else fails, just recreate the virtualenv from the requirements.txt and go from there. Find out how the old app was being launched: if you insist on finding the old one, IMO the most direct way is to find out how the production Django app is being run. Look for bash scripts that start it, supervisor entries, etc. If you find how it starts, then you can pinpoint the environment it is launched in (e.g. which virtualenv). Find the virtualenv by searching for common files: other than that, you can use the find or locate command to search for files we know to exist in a virtualenv, like lib/pythonX.Y/site-packages, bin/activate or bin/python.
2
2
0
0
Deploying to a live server for an existing Django application. It's a very old site that has not been updated in 3+ years. I was hired on contract to bring it up to date, which included upgrading the Django version to be current. This broke many things on the site that had to be repaired. I did a test deployment and it went fine. Now it is time for deployment to live and I am having some issues... The first thing I was going to do is keep a log of the current Django version on the server, in case of any issues so we can roll back. I tried logging into the Python command prompt and importing Django to find the version number, and it said Django was not found. Looking further, I found the version in a pip requirements.txt file. Then I decided to update the actual Django version on the server. The update went through smoothly. Then I checked the live site, and everything was unchanged (with the old files still in place). Most of the site should have been broken. It was not recognizing any changes in Django. I am assuming the reason for this might be that the last contractor used virtualenv? And that's why it is not recognizing Django, or the Django updates are not doing anything to the live site? That is the only reason I could come up with to explain this issue: since there is a pip requirements.txt file, he likely installed Django with pip, which means Python should recognize the path to Django. So then I was going to try to find the source path for the virtualenv with the command "lsvirtualenv". But when I do that, even that gives me a "command not found" error. My only guess is that this was an older version of virtualenv that does not have this command? If that is not the case, I'm not sure what is going on. Any advice for how I can find the information I need to update the package versions on this server with the tools I have access to?
How to locate a virtualenv install
1
0.197375
1
0
0
5,340
35,326,476
2016-02-10T21:39:00.000
0
0
1
0
0
python,django
1
35,345,062
1
2
0
false
1
0
Why not start by checking what processes are actually running, and with what command line, using ps auxf or something of the sort. Then you know whether it's nginx+uwsgi or django-devserver or what, and you may even see the virtualenv path, if it's being launched very manually. Then look at the config file of the server you find. Alternatively, look around, using netstat -taupen for example, to see which processes are listening on which ports. That makes even more sense if there's a reverse proxy like nginx running, and you can see what it's proxying to. The requirements.txt I'd ignore completely. You'll get the same, but correct, information from the virtualenv once you activate it and run pip freeze. The file's superfluous at best, and misleading at worst. Btw, if this old contractor compiled and installed a custom Python, (s)he might not even have used a virtualenv, while still avoiding the system libraries and PYTHONPATH. Unlikely, but possible.
2
2
0
0
Deploying to a live server for an existing Django application. It's a very old site that has not been updated in 3+ years. I was hired on contract to bring it up to date, which included upgrading the Django version to be current. This broke many things on the site that had to be repaired. I did a test deployment and it went fine. Now it is time for deployment to live and I am having some issues... The first thing I was going to do is keep a log of the current Django version on the server, in case of any issues so we can roll back. I tried logging into the Python command prompt and importing Django to find the version number, and it said Django was not found. Looking further, I found the version in a pip requirements.txt file. Then I decided to update the actual Django version on the server. The update went through smoothly. Then I checked the live site, and everything was unchanged (with the old files still in place). Most of the site should have been broken. It was not recognizing any changes in Django. I am assuming the reason for this might be that the last contractor used virtualenv? And that's why it is not recognizing Django, or the Django updates are not doing anything to the live site? That is the only reason I could come up with to explain this issue: since there is a pip requirements.txt file, he likely installed Django with pip, which means Python should recognize the path to Django. So then I was going to try to find the source path for the virtualenv with the command "lsvirtualenv". But when I do that, even that gives me a "command not found" error. My only guess is that this was an older version of virtualenv that does not have this command? If that is not the case, I'm not sure what is going on. Any advice for how I can find the information I need to update the package versions on this server with the tools I have access to?
How to locate a virtualenv install
1
0
1
0
0
5,340
35,328,278
2016-02-10T23:50:00.000
0
0
0
0
0
python-2.7
0
35,330,126
0
3
0
false
0
0
You can use 1) Beautiful Soup 2) Python Requests 3) Scrapy 4) Mechanize ... and many more. These are the most popular tools, and easy for a beginner to learn. From there, you can branch out to more complex stuff such as user-agent spoofing, HTML load balancing, regex, XPath and CSS selectors. You will need these to scrape more difficult sites that have protection or login fields. Hope that helps. Cheers
2
0
0
0
I am looking for some real help. I want to do web scraping using Python; I need it because I want to import some database. How can we do that in Python, and what libraries do we need?
Python Web scraping- Required Libraries and how to do it
0
0
1
0
1
71
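The libraries listed in the answers above are third-party; for a feel of what scraping involves, here is a standard-library-only sketch that extracts links with html.parser (Beautiful Soup does essentially this, far more robustly):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the href of every <a> tag seen while parsing."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(v for k, v in attrs if k == "href")

# Inline sample page; in practice the HTML would come from urllib or Requests.
page = '<html><body><a href="/db1">one</a> <a href="/db2">two</a></body></html>'
parser = LinkCollector()
parser.feed(page)
print(parser.links)
```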
35,328,278
2016-02-10T23:50:00.000
0
0
0
0
0
python-2.7
0
44,280,408
0
3
0
false
0
0
As others have suggested, I too would use Beautiful Soup and Python Requests, but if you have problems with websites which load some data with JavaScript after the page has loaded, and you only get the incomplete HTML with Requests, try using Selenium and PhantomJS for the scraping.
2
0
0
0
I am looking for some real help. I want to do web scraping using Python; I need it because I want to import some database. How can we do that in Python, and what libraries do we need?
Python Web scraping- Required Libraries and how to do it
0
0
1
0
1
71
35,346,456
2016-02-11T17:46:00.000
0
0
0
0
1
javascript,python,google-chrome-extension
0
35,349,167
0
2
0
false
1
0
The only way to get the output of a Python script inside a content script built with Javascript is to call the file with XMLHttpRequest. As you noted, you will have to use an HTTPS connection if the page is served over HTTPS. A workaround for this is to make a call to your background script, which can then fetch the data in whichever protocol it likes, and return it to your content script.
1
6
0
0
I'm writing a Chrome extension that injects a content script into every page the user goes to. What I want to do is get the output of a Python function for some use in the content script (I can't write it in JavaScript, since it requires raw sockets to connect to my remote SSL server). I've read that one might use CGI and Ajax or the like to get output from the Python code into the JavaScript code, but I ran into 3 problems: I cannot allow hosting the Python code on a local server, since it is security-sensitive data that only the local computer should be able to know. Chrome demands that HTTP and HTTPS not mix: if the user goes to an HTTPS website, I can't host the Python code on an HTTP server. I don't think Chrome even supports CGI in extensions; when I try to access a local file, all it does is print out the text (the Python code itself) instead of what I defined to be its output (I tried to do so using Flask). As I said in 1, I shouldn't even try this anyway, but this is just a side note. So my question is: how do I get the output of my Python functions inside a content script, built with JavaScript?
Combining Python and Javascript in a chrome plugin
0
0
1
0
1
7,214
35,371,372
2016-02-12T20:10:00.000
0
0
1
0
0
python,python-3.x,random,choice
0
35,371,559
0
2
0
false
0
0
Shuffle the list and pop elements from the top. That will only produce each list element once.
1
0
0
0
Is there a way to "pseudo"-randomly select an element from a list that wasn't chosen before? I know about the choice function, which returns a random item from the list, but without taking into account previously chosen items. I could keep track of which elements were already picked and keep randomly choosing until I hit a not-yet-selected item, but this might involve nested loops, etc. I could also, for example, remove the chosen element from the list at each iteration, but this does not seem like a good solution either. My question is: is there an "aware" choice function that selects only items that weren't chosen before? Note that I'm not asking how to implement such a function, but possible solutions are of course well-accepted too.
Pseudo-randomly pick an element from a list only if it was not chosen yet
0
0
1
0
0
953
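The shuffle-and-pop suggestion above takes only a few lines with the random module; each element comes out exactly once, in pseudo-random order, with no bookkeeping of already-chosen items:

```python
import random

items = ["a", "b", "c", "d"]
random.shuffle(items)    # randomize the order once, up front

picked = []
while items:
    picked.append(items.pop())   # pop from the (shuffled) end of the list

print(picked)   # all four elements, each exactly once
```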
35,376,747
2016-02-13T06:11:00.000
2
0
0
0
1
android,python,kivy,buildozer
0
35,383,918
0
1
0
true
0
1
Hello guys, I finally found the problem. The import android actually works. The problem was that I used it wrongly: I was trying to do a makeToast like this: 'android.makeToast'. Evidently that was wrong. I found out there was another way to do it with pyjnius. Thanks so much for your assistance.
1
1
0
0
I'm building a small project for my Android phone using kivy. I am trying to get the Android back key to make a toast saying 'press back again to exit', and then exit when the back key is pressed twice. I checked online and saw a tutorial on how to do this. I had to use import android, but the problem is that it just doesn't work on my phone. Not in the kivy launcher when I tested it. I even compiled to an Android apk using buildozer, but it still doesn't work. Please, I'm still very new to kivy and the Android API. Help me get this right, or if there is another way to do this I'd also appreciate it. Please include an example in your response.
kivy import android doesnt work
0
1.2
1
0
0
614
35,391,120
2016-02-14T11:17:00.000
1
0
0
1
0
python,google-app-engine,flask,google-cloud-sql,alembic
0
35,395,267
0
3
0
false
1
0
You can whitelist the IP of your local machine for the Google Cloud SQL instance; then you can run the migration script on your local machine.
1
8
0
0
I have a Flask app that uses SQLAlchemy (Flask-SQLAlchemy) and Alembic (Flask-Migrate). The app runs on Google App Engine. I want to use Google Cloud SQL. On my machine, I run python manage.py db upgrade to run my migrations against my local database. Since GAE does not allow arbitrary shell commands to be run, how do I run the migrations on it?
Run Alembic migrations on Google App Engine
0
0.066568
1
1
0
1,816
35,436,599
2016-02-16T15:32:00.000
1
0
0
0
0
python,scikit-learn,linear-regression
0
35,438,322
0
3
0
false
0
0
There is a linear classifier, sklearn.linear_model.RidgeClassifier(alpha=0.), that you can use for this. Setting the Ridge penalty to 0. makes it do exactly the linear regression you want, and it sets the threshold to divide between classes.
1
1
1
0
I trained a linear regression model (using sklearn with python3); my train set had 94 features and the class of each sample was 0 or 1. Then I went to check my linear regression model on the test set and it gave me these results: 1. [0.04988957], real value 0; 2. [0.00740425], real value 0; 3. [0.01907946], real value 0; 4. [0.07518938], real value 0; 5. [0.15202335], real value 0; 6. [0.04531345], real value 0; 7. [0.13394644], real value 0; 8. [0.16460608], real value 1; 9. [0.14846777], real value 0; 10. [0.04979875], real value 0. As you can see, at row 8 it gave the highest value, but the thing is that I want to use my_model.predict(testData) and have it give only 0 or 1 as results. How can I possibly do it? Does the model have any threshold or auto cutoff that I can use?
can i make linear regression predict like a classification?
0
0.066568
1
0
0
2,479
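Independent of RidgeClassifier, the asker's outputs can be turned into 0/1 by thresholding them, which is essentially what a linear classifier's decision rule does; the scores below are rounded from the question, and the cut-off of 0.16 is an assumed value that would normally be tuned on validation data:

```python
scores = [0.0499, 0.0074, 0.0191, 0.0752, 0.1520,
          0.0453, 0.1339, 0.1646, 0.1485, 0.0498]
threshold = 0.16   # assumed cut-off; tune on held-out data in practice

labels = [1 if s >= threshold else 0 for s in scores]
print(labels)   # only row 8 (0.1646) crosses the cut-off
```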
35,484,772
2016-02-18T14:54:00.000
0
0
1
0
0
python,c++,windows
0
35,495,227
0
1
0
false
0
0
You could have your Python executable call the C++ executable and have that executable take command-line arguments. So basically, put the service main code and a few basic cases in Python that call into a normal C++ executable. Not extremely efficient, but it works.
1
0
0
0
Can I combine an executable with another executable (Windows Service Program) and run this program as a logical service? By combining, I mean to form a single executable. I want to write a Windows Service, and I've followed some tutorials that show how to do it using C++, i.e. writing the Service Program (in Windows) and using ServiceMain() functions as logical services. However, I prefer not to write the ServiceMain() functions in C++. Instead, I wonder whether I could write these logical services using Python and compile to binary using py2exe. Is this possible? - could I substitute the ServiceMain() functions for py2exe compiled modules? If so, please provide the details on how to do it.
Can I combine an executable with another executable (Windows Service Program) and run this program as a logical service?
1
0
1
0
0
75
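The answer above — Python invoking the C++ executable with command-line arguments — can be sketched with the stdlib subprocess module. The C++ binary name is hypothetical, so here the Python interpreter itself stands in for it so the snippet actually runs:

```python
import subprocess
import sys

# Sketch: call an external executable with an argument and capture its output.
# A real service would invoke something like ["worker.exe", "start-service"];
# sys.executable is used as a runnable stand-in for that hypothetical binary.
result = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.argv[1])", "start-service"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())
```

(`capture_output`/`text` assume Python 3.7+; on older versions use `stdout=subprocess.PIPE` and decode manually.)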
35,485,629
2016-02-18T15:29:00.000
0
0
1
0
1
python,pandas
0
35,486,318
0
2
0
false
0
0
Figured it out: specify the data type on import with dtype={"phone": str, "other_phone": str}.
1
0
1
0
I'm using pandas to input a list of names and phone numbers, clean that list, then export it. When I export the list, all of the phone numbers have '.0' tacked on to the end. I tried two solutions: A: round() B: converting to integer then converting to text (which has worked in the past) For some reason when I tried A, the decimal still comes out when I export to a text file and when I tried B, I got an unexpected negative ten digit number Any ideas about what's happening here and/or how to fix it? Thanks!
Removing decimals on export python
0
0
1
0
0
79
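The accepted fix above — forcing the phone columns to be read as strings — can be sketched like this; the column names come from the answer, and the CSV contents are made up (a blank cell is what normally makes pandas infer float and append ".0"):

```python
import io
import pandas as pd

# Hypothetical input: the empty other_phone cell would make pandas infer the
# column as float, turning 5551234567 into 5551234567.0 on export.
csv = "name,phone,other_phone\nAlice,5551234567,\nBob,5559876543,5550001111\n"

# dtype=str on import keeps the numbers as text, so no ".0" is ever added.
df = pd.read_csv(io.StringIO(csv), dtype={"phone": str, "other_phone": str})
print(df["phone"].tolist())
```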
35,509,019
2016-02-19T15:35:00.000
1
0
1
1
0
python,linux
0
35,509,182
0
1
0
false
0
0
Look at setuptools and distutils. These are the classic tools for Python packaging.
1
3
0
0
I have created a simple software with GUI. It has several source files. I can run the project in my editor. I think it is ready for the 1.0 release. But I don't know how to create a setup/installer for my software. The source is in python. Environment is Linux(Ubuntu). I used an external library which does not come with standard Python library. How can I create the installer, so that I just distribute the source code in the tar file. And the user installs the software on his machine(Linux) by running a setup/installer file? Please note: When the setup is run, it should automatically take care of the dependencies.(Also, I don't want to build an executable for distribution.) Something similar to what happens when I type: sudo apt-get install XXXX
How to create setup/installer for my Python project which has dependencies?
0
0.197375
1
0
0
137
35,524,022
2016-02-20T13:37:00.000
4
0
0
1
1
python,django,shell,subprocess,gunicorn
0
35,524,148
0
1
0
true
1
0
1) The user who runs gunicorn has no permission to run .sh files. 2) Your .sh file has no execute permission. 3) Try using the full path to the file. Also, which error do you get when trying to run it in production?
1
1
0
0
I have a Django 1.9 project deployed using gunicorn, with a view containing the line subprocess.call(["xvfb-run ./stored/all_crawlers.sh "+outputfile+" " + url], shell=True, cwd= path_to_sh_file) which runs fine with ./manage.py runserver but fails in deployment (deployed with gunicorn and WSGI). Any suggestion how to fix it?
Django deployed project not running subprocess shell command
0
1.2
1
0
0
309
35,531,367
2016-02-21T01:51:00.000
1
0
0
0
0
python,pandas
0
35,531,393
0
4
1
false
0
0
Try this method: Create a duplicate data set. Use .mode() to find the most common value. Pop all items with that value from the set. Run .mode() again on the modified data set.
1
0
1
0
So I'm generating a summary report from a data set. I used .describe() to do the heavy work but it doesn't generate everything I need i.e. the second most common thing in the data set. I noticed that if I use .mode() it returns the most common value, is there an easy way to get the second most common?
In pandas, how to get 2nd mode
1
0.049958
1
0
0
2,825
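An alternative to the duplicate-and-pop approach in the answer above is value_counts(), whose index is ordered most-common first, so position 1 is the second mode. A sketch on a made-up Series (note this differs from the answer's method, and ties may come out in either order):

```python
import pandas as pd

s = pd.Series([1, 1, 1, 2, 2, 3])  # hypothetical data: mode is 1, 2nd mode is 2

# value_counts() sorts by frequency descending, so index[0] is the mode
# and index[1] is the second most common value.
counts = s.value_counts()
second_mode = counts.index[1]
print(second_mode)
```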
35,544,800
2016-02-22T02:18:00.000
1
0
1
0
0
python,excel,combinations
0
35,557,793
0
1
0
false
0
0
I was told to use Pandas to get at each of the individual states in the Excel file, then use a dictionary structure to store the per-state values and look the states up against it.
1
0
0
0
I have a file that has a column with names, and another with comma separated US licenses, for example, AZ,CA,CO,DC,HI,IA,ID; but any combination of 50 states is possible. I have another file that has a certain value attached to each state, for example AZ=4, CA=30, DC=23, and so on for all 50. I need to add up the amount that each person is holding via their combination of licenses. Say, someone with just CA, would have 30, while some one with AZ, CA and DC, would end up with 30+4+23=57; and any combination of 50 licenses is possible. I know a bit of Python, but not enough to know how to even get started, what packages to use, what the architecture should be.. Any guidance is appreciated. Thank you.
How do I parse out all US states from comma separated strings in Python from an excel file.
1
0.197375
1
0
0
52
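The dictionary-lookup idea from the answer above can be sketched in plain Python; the per-state values are the ones given in the question (only a few of the 50 states shown, and the reading of the Excel file via pandas is left out):

```python
# Per-state values from the question (AZ=4, CA=30, DC=23); a real run would
# load all 50 from the second file.
state_values = {"AZ": 4, "CA": 30, "DC": 23}

def license_total(licenses, values):
    """Sum the value of each state code in a comma-separated license string.
    Unknown codes count as 0."""
    return sum(values.get(state.strip(), 0) for state in licenses.split(","))

print(license_total("AZ,CA,DC", state_values))  # 4 + 30 + 23 = 57
```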
35,555,798
2016-02-22T14:11:00.000
3
0
1
0
0
python,spyder
1
35,555,927
0
1
0
true
0
0
Simply removing the line which was an issue and starting Spyder did the trick. Spyder rebuilt the spyder.ini file upon running spyder.exe.
1
1
0
0
I'm working with WinPython and Spyder, and somehow spyder wouldn't start. It would briefly flash an error message of which the relevant line is: ConfigParser.ParsingError: File contains parsing errors: D:\progs\WinPython-64bit-2.7.10.3\settings\.spyder\spyder.ini [line 431]: u'_/switch to'. Then delving into that file it seems to be clipped. It abruptly ends on line 431 with _/switch to in the [shortcuts] section of the file. Can anyone link me to a complete spyder.ini file, I can't find it in the spyder github? Or if it's the last line (or one of the last few lines), provide me with the bit I'm missing?
Replace corrupted spyder.ini file (with winpython 64)
0
1.2
1
0
0
1,152
35,561,072
2016-02-22T18:26:00.000
0
0
1
0
0
python,loops,python-3.x,functional-programming
0
35,561,218
0
3
0
false
0
0
In a recursive function there are two main components: The recursive call The base case The recursive call is when you call the function from within itself, and the base case is where the function returns/stops calling itself. For your recursive call, you want nfactorial(n-1), because this is essentially the definition of a factorial (n(n-1)(n-2)...*2). Following this logic, the base case should be when n == 2. Good luck, hope I was able to point you in the right direction.
1
2
0
0
So we just started learning about loops and got this assignment def factorial_cap(num): For positive integer n, the factorial of n (denoted as n!), is the product of all positive integers from 1 to n inclusive. Implement the function that returns the smallest positive n such that n! is greater than or equal to argument num. Examples: factorial_cap(20) → 4 #3!<20 but 4!>20 factorial_cap(24) → 4 #4!=24 Can anyone give me a direction as to where to start? I am quite lost at how to even begin to start this. I fully understand what my program should do, just not how to start it.
Starting loops with python
0
0
1
0
0
124
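Since the question above asks about loops, the same assignment can also be done iteratively: multiply a running factorial until it reaches num. One possible solution sketch (the recursive route from the answer works too):

```python
def factorial_cap(num):
    """Return the smallest positive n such that n! >= num."""
    n, factorial = 1, 1
    while factorial < num:
        n += 1
        factorial *= n
    return n

print(factorial_cap(20))  # 4, since 3! = 6 < 20 but 4! = 24 >= 20
print(factorial_cap(24))  # 4, since 4! = 24 exactly
```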
35,570,376
2016-02-23T06:28:00.000
1
1
1
0
0
python,eclipse,ide
0
35,571,869
0
1
0
true
0
0
The 'Import Existing Projects into Workspace' wizard has a 'Copy projects into workspace' check box on the first page. Unchecking this option will make Eclipse work on the original files.
1
0
0
0
I have used the "import existing project" option to import an existing project into the workspace. However, Eclipse actually makes copies of the original files and creates a new project. So, if I make a change to a file, it only affects the copied file in the workspace; the original file is untouched. My question is: how do I make my modifications affect the original files?
eclipse modify imported project files
0
1.2
1
0
0
98
35,574,857
2016-02-23T10:26:00.000
2
0
0
0
0
python,django,rest,django-rest-framework
0
35,583,466
0
1
0
false
1
0
"Normal" Django views (usually) return HTML pages. Django REST Framework views (usually) return JSON. I am assuming you are looking for something more like a single-page application. In this case you will have a main view serving the bulk of the HTML page. This will come from a "standard" Django view returning HTML (which will likely include a fair bit of JavaScript). Once the page is loaded, the JavaScript code makes requests to the DRF views. So when you interact with the page, JavaScript requests JSON and updates (not reloads) the page based on the contents of the JSON. Does that make sense?
1
0
0
0
I want to use Django REST framework for my new project but I am not sure if I can do it efficiently. I would like to be able to integrate easily classical Django app in my API. However I don't know how I can proceed to make them respect the REST framework philosophy. Will I have to rewrite all the views or is there a more suitable solution?
How to use classic Django app with Django REST framework?
0
0.379949
1
0
0
422
35,607,753
2016-02-24T16:32:00.000
0
0
0
0
1
python,django,django-rest-framework,django-rest-auth
0
35,726,205
0
1
0
true
1
0
To anyone that stumbles onto this question, I couldn't figure out how to make the hybrid approach work. Having Django serve pages that each contained API calls seemed OK, but I never saw any requests made to the API- I believe due to some other security issues. I'm sure it's possible, but I decided to go for the single page app implementation after all to make things simpler.
1
5
0
0
I'm building an app with a Django backend, Angular frontend, and a REST API using Django REST Framework for Angular to consume. When I was still working out backend stuff with a vanilla frontend, I used the provided Django authentication to handle user auth- but now that I'm creating a REST based app, I'm not sure how to approach authentication. Since all user data will be either retrieved or submitted via the API, should API authentication be enough? If so, do I need to remove the existing Django authentication middleware? Right now, when I try to hit API endpoints on an early version of the app, I'm directed to what looks like the normal Django login form. If I enter a valid username and password, it doesn't work- just prompts to login again. Would removing the basic Django authentication prevent this? I want to be prompted to login, however I'm not sure how to handle that with these technologies. The package django-rest-auth seems useful, and the same group makes an Angular module- but the docs don't go much past installation and the provided endpoints. Ultimately, I think the core of this question is: how do I entirely switch authentication away from what's provided by Django to something like django-rest-auth or one of the other 3rd party packages recommended by DRF? edit: I made this comment below, but I realized that I need to figure out how combined auth will work. I'm not building a single page app, so individual basic pages will be served from Django, but each page will hit various API endpoints to retrieve the data it needs. Is there a way to have something like django-rest-auth handle all authentication?
Django, Angular, & DRF: Authentication to Django backend vs. API
1
1.2
1
0
0
698
35,612,338
2016-02-24T20:23:00.000
0
0
1
0
0
python,ipython,pycharm,anaconda
0
52,747,634
0
1
0
false
0
0
Short answer: Go to File > Default Settings > Build, Execution, Deployment > Console and select Use IPython if available. Go to Run > Edit Configurations and select Show command line afterwards. Tip: run selected parts of your code with ALT+SHIFT+E. The details: If you've selected Anaconda as the project interpreter, IPython will most likely be the selected console, even though it neither looks nor behaves like the IPython console you are used to in Spyder. Unlike Spyder, PyCharm has no graphical indicator showing that this is an IPython console. So, to make sure it's an IPython console and make it behave more or less like the IPython console you are used to from Spyder, follow these two steps: 1. Go to File > Default Settings > Build, Execution, Deployment > Console and make sure to select Use IPython if available. 2. Go to Run > Edit Configurations and select Show command line afterwards. Now you can run selected parts of your code with ALT+SHIFT+E, more or less exactly like in Spyder. If this doesn't do the trick, check out these other posts on SO: Interacting with program after execution; Disable ipython console in pycharm.
1
10
0
0
I am using PyCharm IDE with Anaconda distribution. when I run:Tools > Python Console... PyCharm uses ipython console which is part of Anaconda distribution. But it using a default profile. I already tried add option --profile=myProfileName in Environment variables and in Interpreter options in Settings > Build, Execution, Deployment > Console > Python Console But it keeps using default profile. My question is how to set different ipython profile in PyCharm?
How to set ipython profile in PyCharm
0
0
1
0
0
1,263
35,624,808
2016-02-25T10:34:00.000
0
0
1
0
0
python,macos,console,pycharm,jetbrains-ide
0
54,680,604
0
1
0
true
0
0
It's in the same window now, so it's much easier to go to it.
1
3
0
0
Hi I am wondering how I can have the python console pop up automatically after I run a script in Pycharm. Currently it opens in the background and I have to either command-tab to it, or click manually. Maybe there is a way to edit the configuration to allow it to pop up, I haven't found one. Thanks
How to automatically switch focus to python console when running script in Pycharm?
1
1.2
1
0
0
360
35,647,221
2016-02-26T08:49:00.000
4
0
0
0
0
python,django
0
35,650,368
0
2
0
false
1
0
Based on itzmeontv's answer, to override the original templates in the registration application: create a templates folder inside your base app if it doesn't exist, and create a registration folder inside it, so the path looks like <yourapp>/templates/registration. Inside <yourapp>/templates/registration, create HTML files with the same names as in the registration app, for example password_change_form.html, so it looks like <yourapp>/templates/registration/password_change_form.html. Make sure that your base app comes before registration in INSTALLED_APPS.
1
0
0
0
As the question says, I'm using django-registration-redux and I've made templates for the registration emails and pages but can't figure out how to make the template for the password reset email. I'm using django 1.9
How do I make a custom email template for django-registration-redux password reset?
0
0.379949
1
0
0
606
35,647,516
2016-02-26T09:06:00.000
2
0
0
0
0
python,html,flask
0
35,647,647
0
1
0
true
1
0
You can't trigger anything on the server without making a request to a URL. If you don't want the page to reload, you can either redirect back to the original page after your action is finished, or you can use Ajax to make the request without changing the page; but the request itself is always to a URL.
1
0
0
0
In Flask programming, people usually use 'url_for', such as {{url_for = 'some url'}}. This way you have to make a URL (@app.route) and a template (HTML) and map them to each other. But I just want to send an email when I click a submit button in an HTML modal, with no page reload. To do this, I think I have to connect the button to a Python function without a URL or return (response). I wonder how to make this work; please help me, I'm a beginner in Flask programming.
How to connect between HTML button and python(or flask) function?
0
1.2
1
0
0
1,304
35,658,436
2016-02-26T17:48:00.000
9
0
1
1
0
python,python-2.7,homebrew,pyinstaller
0
36,139,384
0
1
0
true
0
0
The pyinstaller docs are poorly worded and you may be misunderstanding their meaning. PyInstaller works with the default Python 2.7 provided with current Mac OS X installations. However, if you plan to use a later version of Python, or if you use any of the major packages such as PyQt, Numpy, Matplotlib, Scipy, and the like, we strongly recommend that you install THESE using either MacPorts or Homebrew. It means to say "install later versions of Python as well as python packages with Homebrew", and not to say "install pyinstaller itself with homebrew". In that respect you are correct, there is no formula for pyinstaller on homebrew. You can install pyinstaller with pip though: pip install pyinstaller or pip3 install pyinstaller. Then confirm the install with pyinstaller --version.
1
1
0
0
I am using python 2.7.0 and pygame 1.9.1, on OS X 10.10.5. The user guide for PyInstaller dictates that Mac users should use Homebrew, and I have it installed. I used it to install both Python and Pygame. But 'brew install PyInstaller' produces no formulae at all when typed into Terminal! So how can I use homebrew to install PyInstaller? This seems like it should be simple, and I'm sorry to bother you, but I have searched high and low with no result.
Using homebrew to install pyinstaller
0
1.2
1
0
0
3,559
35,661,419
2016-02-26T20:46:00.000
1
0
0
0
0
python,opencv,frames
0
35,663,902
0
2
0
false
0
0
Are your objects filled, or just outlines? In either case, the approach I would take is to detect the vertices, either by finding the maximum gradient or just via the bounding box; the vertices will lie on the bounding box. Once you have the vertices, you can say whether the object is a square or a rectangle just by finding the distances between consecutive vertices.
2
0
1
0
I have a video consisting of different objects such as square, rectangle , triangle. I somehow need to detect and show only square objects. So in each frame, if there is a square, it is fine but if there is a triangle or rectangle then it should display it. I am using background subtraction and I am able to detect all the three objects and create a bounding box around them. But I am not able to figure out how to display only square object.
How to detect objects in a video opencv with Python?
0
0.099668
1
0
0
948
35,661,419
2016-02-26T20:46:00.000
3
0
0
0
0
python,opencv,frames
0
35,788,514
0
2
0
true
0
0
You can use the following algorithm: -Perform Background subtraction, as you're doing currently -enclose foreground in contours (using findContours(,,,) then drawContours(,,,) function) -enclose obtained contours in bounding boxes (using boundingRect(,,,) function) -if area of bounding box is approximately equal to that of enclosed contour, then the shape is a square or rectangle, not a triangle. (A large part of the box enclosing a triangle will lie outside the triangle) -if boundingBox height is approximately equal to its width, then it is a square. (access height and width by Rect.height and Rect.width)
2
0
1
0
I have a video consisting of different objects such as square, rectangle , triangle. I somehow need to detect and show only square objects. So in each frame, if there is a square, it is fine but if there is a triangle or rectangle then it should display it. I am using background subtraction and I am able to detect all the three objects and create a bounding box around them. But I am not able to figure out how to display only square object.
How to detect objects in a video opencv with Python?
0
1.2
1
0
0
948
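The accepted answer's heuristic — compare the contour area to its bounding-box area, then compare box width to height — can be sketched without OpenCV on plain numbers. In a real pipeline the area would come from cv2.contourArea and the box from cv2.boundingRect; here they are hypothetical inputs, and the 15% tolerance is an assumption:

```python
def classify_shape(contour_area, box_w, box_h, tol=0.15):
    """Classify using the answer's heuristic: a triangle fills roughly half
    its bounding box, a rectangle/square fills nearly all of it, and a
    square additionally has width approximately equal to height."""
    box_area = box_w * box_h
    if contour_area < (1 - tol) * box_area:
        return "triangle"  # a large part of the box lies outside the shape
    if abs(box_w - box_h) <= tol * max(box_w, box_h):
        return "square"
    return "rectangle"

print(classify_shape(100, 10, 10))   # square: fills the box, w == h
print(classify_shape(200, 20, 10))   # rectangle: fills the box, w != h
print(classify_shape(100, 20, 10))   # triangle: only half the box is filled
```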
35,670,348
2016-02-27T13:24:00.000
0
0
0
0
0
python,pandas,apply,next
0
35,670,515
0
2
0
false
0
0
While it's not the most "fancy" way, I would just use a numeric iterator and access rows i and i+1.
1
1
1
0
I have a df in pandas import pandas as pd import pandas as pd df = pd.DataFrame(['AA', 'BB', 'CC'], columns = ['value']) I want to iterate over rows in df. For each row i want rows value and next rows value. Here is the desired result. 0 1 AA BB 1 2 BB CC I have tried a pairwise() function with itertools. from itertools import tee, izip def pairwise(iterable): "s -> (s0,s1), (s1,s2), (s2, s3), ..." a, b = tee(iterable) next(b, None) return izip(a, b) import pandas as pd df = pd.DataFrame(['AA', 'BB', 'CC'], columns = ['value']) for (i1, row1), (i2, row2) in pairwise(df.iterrows()): print i1, i2, row1["value"], row2["value"] But, its too slow. Any idea how to achieve the output with iterrows ? I would like to try pd.apply for a large dataset.
python pandas ... alternatives to iterrows in pandas to get next rows value (NEW)
0
0
1
0
0
908
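A vectorized alternative to the pairwise iteration above is DataFrame.shift, which pairs each row with the next row's value without iterrows at all — usually much faster on large datasets:

```python
import pandas as pd

df = pd.DataFrame(["AA", "BB", "CC"], columns=["value"])

# shift(-1) aligns each row with the next row's value; the last row has no
# successor (NaN), so dropna() removes it — leaving (AA,BB) and (BB,CC).
pairs = pd.DataFrame({"value": df["value"],
                      "next_value": df["value"].shift(-1)}).dropna()
print(pairs["next_value"].tolist())
```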
35,677,767
2016-02-28T01:47:00.000
0
0
1
0
0
python,matplotlib,plot,figure
0
35,677,835
0
3
0
false
0
0
pyplot is matlab like API for those who are familiar with matlab and want to make quick and dirty plots figure is object-oriented API for those who doesn't care about matlab style plotting So you can use either one but perhaps not both together.
1
39
1
0
I'm not really new to matplotlib and I'm deeply ashamed to admit I have always used it as a tool for getting a solution as quick and easy as possible. So I know how to get basic plots, subplots and stuff and have quite a few code which gets reused from time to time...but I have no "deep(er) knowledge" of matplotlib. Recently I thought I should change this and work myself through some tutorials. However, I am still confused about matplotlibs plt, fig(ure) and ax(arr). What is really the difference? In most cases, for some "quick'n'dirty' plotting I see people using just pyplot as plt and directly plot with plt.plot. Since I am having multiple stuff to plot quite often, I frequently use f, axarr = plt.subplots()...but most times you see only code putting data into the axarr and ignoring the figure f. So, my question is: what is a clean way to work with matplotlib? When to use plt only, what is or what should a figure be used for? Should subplots just containing data? Or is it valid and good practice to everything like styling, clearing a plot, ..., inside of subplots? I hope this is not to wide-ranging. Basically I am asking for some advice for the true purposes of plt <-> fig <-> ax(arr) (and when/how to use them properly). Tutorials would also be welcome. The matplotlib documentation is rather confusing to me. When one searches something really specific, like rescaling a legend, different plot markers and colors and so on the official documentation is really precise but rather general information is not that good in my opinion. Too much different examples, no real explanations of the purposes...looks more or less like a big listing of all possible API methods and arguments.
Understanding matplotlib: plt, figure, ax(arr)?
1
0
1
0
0
8,825
35,685,734
2016-02-28T17:23:00.000
0
0
1
0
0
python,user-interface,python-3.x
0
35,685,792
0
1
0
false
0
1
Since your question mentions .exe executables, I'wll assume you work in the Windows environment. Try using a .pywextension instead of a .py extension for the python program.
1
0
0
0
So recently I made a script, and I also finished gui and managed to merge those two together. Now i wish when I start the exe file that cmd doesn't appear but instead only GUI? Any idea on how to manage this? So far my searching didn't yield any satisfying results. Some more info is: Python 3.5, using pyinstaller to convert to exe, Tkinter Gui, pycharm 5.0.1. Thanks!
How do I stop cmd from appearing when i run exe file (python 3, gui)
0
0
1
0
0
89
35,690,072
2016-02-29T00:19:00.000
0
0
1
1
0
python,zip,archive
0
35,690,298
0
4
0
false
0
0
I got the answer. We can use two commands: archive.getall_members() and archive.getfile_members(). We iterate over each of them and store the file/folder names in two arrays, a1 (file and folder names) and a2 (file names only). If both arrays contain an element, then it is a file; otherwise it is a folder.
1
7
0
0
I have an archive which I do not want to extract but check for each of its contents whether it is a file or a directory. os.path.isdir and os.path.isfile do not work because I am working on archive. The archive can be anyone of tar,bz2,zip or tar.gz(so I cannot use their specific libraries). Plus, the code should work on any platform like linux or windows. Can anybody help me how to do it?
How to check if it is a file or folder for an archive in python?
0
0
1
0
0
20,515
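For the tar-based formats mentioned in the question, the stdlib tarfile module already records whether each member is a file or a directory, so no extraction is needed (zip needs a separate check, e.g. names ending in "/"). A sketch that builds a tiny archive in memory and classifies its members:

```python
import io
import tarfile

# Build a small tar archive in memory with one directory and one file.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    d = tarfile.TarInfo("docs")
    d.type = tarfile.DIRTYPE          # mark this member as a directory
    tar.addfile(d)
    f = tarfile.TarInfo("docs/readme.txt")
    data = b"hello"
    f.size = len(data)                # regular file members need a size
    tar.addfile(f, io.BytesIO(data))

# Reopen and classify each member without extracting anything.
buf.seek(0)
with tarfile.open(fileobj=buf, mode="r") as tar:
    kinds = {m.name: ("dir" if m.isdir() else "file") for m in tar.getmembers()}
print(kinds)
```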
35,726,948
2016-03-01T15:32:00.000
5
0
0
1
0
python,celery,flower
0
38,764,411
0
2
0
false
1
0
You can use the persistent option, e.g.: flower -A ctq.celery --persistent=True
1
5
0
0
I am building a framework for executing tasks on top of Celery framework. I would like to see the list of recently executed tasks (for the recent 2-7 days). Looking on the API I can find app.backend object, but cannot figure out how to make a query to fetch tasks. For example I can use backends like Redis or database. I do not want to explicitly write SQL queries to database. Is there a way to work with task history/results with API? I tried to use Flower, but it can only handle events and cannot get history before its start.
Celery task history
0
0.462117
1
0
0
4,109
35,731,438
2016-03-01T19:13:00.000
1
0
0
0
0
python,django
0
35,732,320
0
2
0
false
1
0
Python/Django is modular. An app should include just those models which together solve one concrete task. If some of the models from point 1 can be useful for other tasks, then it would probably be better to create new apps for those models; i.e., if some models are shared between multiple tasks, it makes sense to move them into a new app. For example, say you have a forum app. The forum has features like polls, registration, PMs, etc. Logically everything seems to belong together. If your site is just a forum, that is fine; but if there is other content, for example blogs with comments, then the "registration model" could be made a separate app shared between parts of the site such as "blogs with comments" and "forum". Regarding admin/frontend: I've seen apps/projects with more than 10 models together. Based on the forum example above, if the admin part does not do any task out of the scope of your app, then I would keep admin and frontend inside one app. Otherwise, if admin relates to a task out of the scope of your main app, admin should be a separate app.
1
2
0
0
I'm build a Django app with a number of models (5-10). There would be an admin side, where an end-user manages the data, and then a user side, where other end-users can only read the data. I understand the point of a Django app is to encourage modularity, but I'm unsure as to "where the line is drawn" so-to-speak. In "best practices terms": Should each model (or very related groups of models) have their own app? Should there be an 'admin' app and then a 'frontend' app? In either case, how do the other apps retrieve and use models/data inside other apps?
Django - What constitutes an app?
0
0.099668
1
0
0
100
35,737,093
2016-03-02T02:03:00.000
0
0
0
0
0
python,function,language-features,pos-tagger,crf
0
38,946,931
0
1
0
false
0
0
I recommend using the CRF tagger; it's very easy.
1
0
0
0
I am using CRF POS Tagger in Python, training English PTB sample corpus and the result is quite good. Now I want to use CRF to train on a large Vietnamese corpus. I need to add some Vietnamese features into this tagger like proper name, date-time, number,... I tried for days but cannot figure out how to do that. I already knew the format of data so it is not problem. I am quite new to Python. So any detailed answer can be helpful. Thanks.
How to add specific features to CRF POS Tagger in Python?
0
0
1
0
0
293
35,741,698
2016-03-02T08:14:00.000
1
0
0
0
0
python,dxf
0
36,529,617
0
2
0
true
0
0
dxfgrabber and ezdxf are just interfaces to the DXF format and do not provide any kind of CAD or calculation functions; the geometric length of a DXF entity is not an attribute available in the DXF format.
1
3
0
0
I am trying to find total length(perimeter),area of a spline from dxf file. Is there any function in dxfgrabber or ezdxf to find total length of an entity from dxf file ?
how to find length of entity from dxf file using dxfgrabber or ezdxf packages
0
1.2
1
0
0
3,245
35,763,357
2016-03-03T04:42:00.000
3
1
0
1
0
python,datetime,unix,timestamp,epoch
0
35,763,677
0
5
0
false
0
0
Well, there are 946684800 seconds between 2000-01-01T00:00:00Z and 1970-01-01T00:00:00Z. So, you can just set a constant for 946684800 and add or subtract from your Unix timestamps. The variation you are seeing in your numbers has to do with the delay in sending and receiving the data, and could also be due to clock synchronization, or lack thereof. Since these are whole seconds, and your numbers are 3 to 4 seconds off, then I would guess that the clocks between your computer and your device are also 3 to 4 seconds out of sync.
1
9
0
0
I am trying to interact with an API that uses a timestamp that starts at a different time than UNIX epoch. It appears to start counting on 2000-01-01, but I'm not sure exactly how to do the conversion or what the name of this datetime format is. When I send a message at 1456979510 I get a response back saying it was received at 510294713. The difference between the two is 946684796 (sometimes 946684797) seconds, which is approximately 30 years. Can anyone let me know the proper way to convert between the two? Or whether I can generate them outright in Python? Thanks Edit An additional detail I should have mentioned is that this is an API to a Zigbee device. I found the following datatype entry in their documentation: 1.3.2.7 Absolute time This is an unsigned 32-bit integer representation for absolute time. Absolute time is measured in seconds from midnight, 1st January 2000. I'm still not sure the easiest way to convert between the two
Conversion from UNIX time to timestamp starting in January 1, 2000
0
0.119427
1
0
0
13,703
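Following the answer above, conversion between the two epochs is just adding or subtracting the 946684800-second offset; with the stdlib datetime module the constant can be derived rather than hard-coded (both epochs taken as midnight UTC, per the Zigbee spec quoted in the question):

```python
from datetime import datetime, timezone

# Seconds between the Unix epoch (1970-01-01) and the Zigbee "absolute time"
# epoch (2000-01-01), both at midnight UTC.
ZIGBEE_EPOCH_OFFSET = int(
    (datetime(2000, 1, 1, tzinfo=timezone.utc)
     - datetime(1970, 1, 1, tzinfo=timezone.utc)).total_seconds()
)

def unix_to_zigbee(unix_ts):
    return unix_ts - ZIGBEE_EPOCH_OFFSET

def zigbee_to_unix(zigbee_ts):
    return zigbee_ts + ZIGBEE_EPOCH_OFFSET

print(ZIGBEE_EPOCH_OFFSET)         # 946684800
print(unix_to_zigbee(1456979510))  # 510294710 -- the device reported 510294713,
                                   # ~3 s later, due to transit delay / clock skew
```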
35,809,944
2016-03-05T04:18:00.000
0
0
0
0
0
python,statistics
0
35,810,321
0
1
0
false
0
0
Check wls_prediction_std from statsmodels.sandbox.regression.predstd.
1
0
0
0
After spending 2 hours of research to no avail, I decided to pose my question here. What is the code to find CI of mean response in python? I know how to do it in R, but I just don't know what I need to do for Python. I assume statsmodel has a function for that. If so, what is it?
How do I find CI of Mean Response using Python?
0
0
1
0
0
40
35,811,941
2016-03-05T08:34:00.000
0
0
0
0
1
python-2.7,anaconda,theano
1
42,725,755
0
1
0
false
0
1
Get rid of Theano and reinstall it. If that doesn't work, reinstall all of Python.
1
0
0
0
New to Theano when I tried to use the package I keep getting the following error: ImportError: ('The following error happened while compiling the node', Dot22(, ), '\n', 'dlopen(/Userdir/.theano/compiledir_Darwin-14.3.0-x86_64-i386-64bit-i386-2.7.11-64/tmpEBdQ_0/eb163660e6e45b373cd7909e14efd44a.so, 2): Library not loaded: libmkl_intel_lp64.dylib\n Referenced from: /Userdir/.theano/compiledir_Darwin-14.3.0-x86_64-i386-64bit-i386-2.7.11-64/tmpEBdQ_0/eb163660e6e45b373cd7909e14efd44a.so\n Reason: image not found', '[Dot22(, )]') Can someone tell me how to fix this issue? Thanks.
Running Theano on Python 2.7
0
0
1
0
0
46
35,820,328
2016-03-05T21:48:00.000
0
0
0
0
1
python,django,apache,lxml,libxslt
0
35,821,295
0
1
0
false
1
0
Fixed by removing libexslt.so files from usr/lib64/.
1
0
0
0
I have a django app that requires Python (3.4) lxml package. I had a fair amount of trouble building the c shared libraries libxslt and libxml2 that lxml depends on in my red hat server environment. However, pip install lxml now completes successfully and I can import and use lxml in the command line interpreter. When I restart apache, importing lxml within my django app causes the error: ImportError: /usr/local/lib/python3.4/site-packages/lxml/etree.cpython-34m.so: undefined symbol: exsltMathXpathCtxtRegister I have checked that my LD_LIBRARY_PATH is set the same in both environments (/usr/lib). I notice that when I reinstall lxml through pip, pip tells me that it is building against libxml2/libxslt found at /usr/lib64. I have removed all libxml2.so and libxslt.so files found at /usr/lib64/ and been confounded to find that pip continues to tell me that it is building against lib64, that the install completes successfully, and that lxml still works correctly at command line but not through apache. pip also says that the detected version of libxslt that it's using in the install is 1.1.23. However, I've used strace to see that when I import using the interpreter, the library that is loaded is /usr/lib/libxslt.so.1.1.28. I don't know of any tool or technique to find out what library is being loaded through apache.. Does anyone have any theories as to what is going on or how to debug the issue? Thanks in advance!
lxml runs in interpreter but not through apache/mod_wsgi
0
0
1
0
0
362
35,851,862
2016-03-07T19:04:00.000
0
0
1
0
0
python
0
35,851,962
0
5
0
false
0
0
You could use a regular expression such as "hours.*minutes", or you could use a simple string search that looks for "hours", notes the location where it is found, then does another search for "minutes" starting at that location.
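Both approaches can be sketched as follows (the function names are made up for illustration):

```python
import re

def in_order_regex(text, first, second):
    # regular-expression version: `first` followed anywhere later by `second`
    # (re.escape guards against regex metacharacters in the substrings)
    return re.search(re.escape(first) + r".*" + re.escape(second), text) is not None

def in_order_find(text, first, second):
    # plain string-search version: find `first`, then look for `second`
    # starting just past it
    i = text.find(first)
    return i != -1 and text.find(second, i + len(first)) != -1
```

With the example strings, in_order_find("what is 5 hours in minutes", "hours", "minutes") returns True, while swapping the two words in the sentence returns False.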
1
2
0
0
I'm wondering how to detect if two substrings match a main string in a specific order. For example if we're looking for "hours" and then "minutes" anywhere at all in a string, and the string is "what is 5 hours in minutes", it would return true. If the string was "what is 5 minutes in hours", it would return false.
If multiple substrings match string in specific order
0
0
1
0
0
63
35,869,666
2016-03-08T14:12:00.000
0
0
1
0
1
python,logging
0
35,869,928
0
1
0
true
0
0
If the logger is not named, it just means it is the root (default) logger. You can get it by calling logging.getLogger(). So to set the log level, do this: logging.getLogger().setLevel(logging.INFO)
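A minimal sketch of this. Note the caveat: raising the root logger's level affects everything that logs through the root logger, not just the one package.

```python
import logging

# logging.info(...) with no named logger goes through the root logger,
# so raising the root level also silences those calls
logging.getLogger().setLevel(logging.WARNING)

logging.info("this message is now suppressed")
logging.warning("this one still gets through")
```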
1
1
0
0
To change the logging level of a dependent package that properly names its logger log = logging.getLogger(__name__) is easy: logging.getLogger("name.of.package").setLevel(logging.WARNING). But if the 3rd party package doesn't name their logger and just logs messages using logging.info("A super loud annoying message!"), how do I change that level? Adding the getLogger(...).setLevel(..) doesn't seem to work because the logger isn't named. Is it possible to change the logging level output of just one package without changing the level for the entire logging module?
How to change python log level for unnamed logger?
1
1.2
1
0
0
133
35,871,850
2016-03-08T15:49:00.000
0
0
1
0
0
python,python-3.x,anaconda,conda
0
35,872,466
0
1
0
false
0
0
If you have already used pip and virtualenv, conda is like both at the same time: it's a package manager and it also creates virtual environments. To answer your question, conda creates a new environment, exports the Python paths for that environment, and installs all packages there. You can always switch between environments, but after a reboot all your virtual environments are deactivated and you are back on your default system Python path (2.7).
1
2
0
0
I have installed anconda with python 3.5, but i am curious to know how conda is managing between system python(2.7.6) and python3.5(installed with anaconda). Particularly If I make a new environment with conda help containing python 3.5 and don't switch to my root env in conda while restarting the system. Does system start with python3 as default or python 2.7.6? I am in need of answer to this as one of my friend installed Anaconda with python3.5 as default to system which broke the system dependencies and It did not start. I am using Ubuntu 14.04.
How conda manages the environment with system python and python installed with this
0
0
1
0
0
642
35,872,623
2016-03-08T16:24:00.000
0
0
1
1
0
python,linux,ubuntu,installation,environment-variables
0
35,872,702
0
1
0
false
0
0
Try installing the 2.7 version: apt-get install python2.7-dev
1
0
0
0
I have the newest version of python (2.7.11) installed on my home director. To compile the YouCompleteMe plugin, I need the python-dev to be installed. However, the global python of my environment is 2.7.11, which means that if I install python-dev via apt-get, it would incompatible with python 2.7.11, because it is used for python 2.6. I re-compiled python 2.7.11 with --enable-shared flag, but failed to know how to add its lib and header files to system's default search path (if there exist such a path environment variable). So, my question is, how to manually install the locally compiled python library to system?
how to manually install the locally compiled python library (shared python library) to system?
0
0
1
0
0
170
35,879,103
2016-03-08T22:15:00.000
1
0
1
0
0
python,collections
0
35,879,173
0
2
0
false
0
0
You could have a data structure that maps interval start or end points to positions. In order to compute the interval you need to look up, either do some appropriate rounding on the time value in question (if the intervals can be considered regular enough for that), or use the bisect module to look up the closest start or end point in the list of all occurring intervals.
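The bisect idea can be sketched like this, assuming the interval start times are kept in a sorted list (the sample data is made up):

```python
import bisect

# positions sampled at 10 Hz: starts[i] is the start time of interval i
starts = [0.0, 0.1, 0.2, 0.3]
positions = ["p0", "p1", "p2", "p3"]

def position_at(t):
    # bisect_right returns the first index whose start exceeds t,
    # so the interval containing t is the one just before it
    i = bisect.bisect_right(starts, t) - 1
    return positions[i]
```

For example, position_at(0.25) falls in the third interval and returns "p2"; each lookup is O(log n) instead of a linear scan over tuples.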
1
0
0
0
I've got a situation where I've got finer time granularity than I do position granularity. Let's say that I'm measuring position at 10 Hz, but am making other measurements at 100 Hz. I'm wondering if anyone is aware of a clever/efficient way of associating a position with a time interval? That is, given a time that falls within that interval the lookup would return an appropriate position. It may just be that a straightforward implementation involving a list of tuples (start_time, end_time, position) and looping won't be disastrous, but I'm curious to know how other people have dealt with this kind of problem.
Efficiently associating a single value with an interval
1
0.099668
1
0
0
17
35,879,106
2016-03-08T22:15:00.000
0
0
1
0
0
python,python-2.7,random
0
35,879,315
0
5
0
false
0
0
You could declare a fixed list of approximately 1000 strings (i.e. ['000', '001', ..., '999'], omitting whatever values you like), then call random.choice() on that list. If you don't want to type 1000 strings, you could programmatically generate the list using something like range() and then .remove() your banned values.
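A sketch of that approach, building the allowed pool once up front rather than calling .remove() repeatedly:

```python
import random

exclude_me = ['312', '534', '434', '999', '123']  # sample banned values
banned = set(exclude_me)

# every three-digit string '000'..'999' except the banned ones
allowed = ['%03d' % n for n in range(1000) if '%03d' % n not in banned]

value = random.choice(allowed)  # uniform over the permitted strings
```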
1
0
0
0
In Python 2.7, how do I most efficiently produce a unique, random string of len=3 - formed only of digits - where values contained in a list (called exclude_me) are not considered while the random string is being calculated? E.g. exclude_me=['312','534','434','999',...........,'123']
Generating a random string of fixed length where certain values are prohibited
0
0
1
0
0
344
35,880,417
2016-03-08T23:56:00.000
0
0
0
0
0
python,video,ffmpeg,video-streaming,http-live-streaming
0
35,886,856
0
2
0
false
1
0
You could use FFmpeg to mux the video stream into H.264 in an mp4 container, and that can then be used directly in an HTML5 video element.
1
6
0
0
I am trying to show live webcam video stream on webpage and I have a working draft. However, I am not satisfied with the performance and looking for a better way to do the job. I have a webcam connected to Raspberry PI and a web server which is a simple python-Flask server. Webcam images are captured by using OpenCV and formatted as JPEG. Later, those JPEGs are sent to one of the server's UDP ports. What I did up to this point is something like a homemade MJPEG(motion-jpeg) streaming. At the server-side I have a simple python script that continuously reads UDP port and put JPEG image in the HTML5 canvas. That is fast enough to create a perception of a live stream. Problems: This compress the video very little. Actually it does not compress the video. It only decreases the size of a frame by formatting as JPEG. FPS is low and also quality of the stream is not that good. It is not a major point for now but UDP is not a secure way to stream video. Server is busy with image picking from UDP. Needs threaded server design. Alternatives: I have used FFMPEG before to convert video formats and also stream pre-recorded video. I guess, it is possible to encode(let say H.264) and stream WebCam live video using ffmpeg or avconv. (Encoding) Is this applicable on Raspberry PI ? VLC is able to play live videos streamed on network. (Stream) Is there any Media Player to embed on HTML/Javascript to handle network stream like the VLC does ? I have read about HLS (HTTP Live Stream) and MPEG-DASH. Does these apply for this case ? If it does,how should I use them ? Is there any other way to show live stream on webpage ? RTSP is a secure protocol. What is the best practice for transport layer protocol in video streaming ?
Live Video Encoding and Streaming on a Webpage
1
0
1
0
1
8,409
35,882,062
2016-03-09T02:58:00.000
0
0
0
0
0
python,scikit-learn
1
35,889,624
0
1
0
false
0
0
In the test phase you should use the same model objects as you used in the training phase. That way you can reuse the model parameters derived during training. Here is an example. First assign your vectorizer and your predictive algorithm (naive Bayes in this case) to names: vectorizer = TfidfVectorizer() classifier = MultinomialNB() Then use these names to vectorize and predict your data: trainingdata_counts = vectorizer.fit_transform(trainingdata.values) classifier.fit(trainingdata_counts, trainingdatalabels) testdata_counts = vectorizer.transform(testdata.values) predictions = classifier.predict(testdata_counts) This way your code can run the training and test phases continuously. If the phases are separated in time, persist the fitted vectorizer and classifier (e.g. with pickle or joblib) after training and load them back in the test run.
1
0
1
0
in text mining/classification when a vectorizer is used to transform a text into numerical features, in the training TfidfVectorizer(...).fit_transform(text) or TfidfVectorizer(...).fit(text) is used. In testing it supposes to utilize former training info and just transform the data following the training fit. In general case the test run(s) is completely separate from train run. But it needs some info regarding the fit obtained during the training stage otherwise the transformation fails with error sklearn.utils.validation.NotFittedError: idf vector is not fitted . It's not just a dictionary, it's something else. What should be saved after the training is done, to make the test stage passing smoothly? In other words train and test are separated in time and space, how to make test working, utilizing training results? Deeper question would be what 'fit' means in scikit-learn context, but it's probably out of scope
Vectorizer where or how fit information is stored?
1
0
1
0
0
172
35,900,622
2016-03-09T19:15:00.000
5
0
0
1
0
python,windows,scheduled-tasks
0
35,901,175
0
2
0
true
0
0
Simply save your script with a .pyw extension. As far as I know, a .pyw file is the same as .py; the only difference is that .pyw is run by pythonw.exe, which is intended for GUI programs, so no console window is opened. If there is more to it than this I wouldn't know; perhaps somebody more informed can edit this post or provide their own answer.
1
3
0
0
Windows 7 Task Scheduler is running my Python script every 15 minutes. Command line is something like c:\Python\python.exe c:\mypath\myscript.py. It all works well, script is called every 15 minues, etc. However, the task scheduler pops up a huge console window titled taskeng.exe every time, blocking the view for a few seconds until the script exits. Is there a way to prevent the pop-up?
Windows Task Scheduler running Python script: how to prevent taskeng.exe pop-up?
0
1.2
1
0
0
3,605
35,914,596
2016-03-10T11:02:00.000
0
0
0
0
1
python,python-2.7,pyqt4
1
35,914,964
0
1
0
false
0
1
As far as I know, ui_mainwindow is a Python file generated by a Qt tool (pyuic4 in the case of PyQt4) that transforms a .ui file from Qt Designer into a Python class. I have no real experience with PyQt, but I know both C++/Qt and Python. In C++/Qt, Qt Creator does the job of transforming the .ui file into a C++ class, but in Python you probably need to do this yourself, e.g. pyuic4 mainwindow.ui -o ui_mainwindow.py.
1
0
0
0
I have python 2.7 under windows x64, I have been trying to make a simple GUI using PyQt4, like this: from PyQt4 import * from ui_mainwindow import Ui_MainWindow class MainWindow(QtGui.QMainWindow, Ui_MainWindow): when I run the program I have this error: " No module named ui_mainWindow" -I have pyqt4 installed - I have tried to replace um_mainwindow with ui_simple and clientGUI but the same error resulted. What am I doing wrong and how to fix this? thank you
python 2.7 under windows: cannot import ui_mainwindow
0
0
1
0
0
831
35,928,155
2016-03-10T21:58:00.000
0
1
0
1
0
android,python,shell,qpython
0
35,935,344
0
2
0
true
0
1
I don't have experience in Android programming, so I can only give a general recommendation: Of course the naive solution would be to explicitly pass the arguments from script to script, but I guess you can't or don't want to modify the scripts in between, otherwise you would not have asked. Another approach, which I sometimes use, is to define an environment variable in the outermost scripts, stuff all my parameters into it, and parse it from Python. Finally, you could write a "configuration file" from the outermost script, and read it from your Python program. If you create this file in Python syntax, you even spare yourself from parsing the code.
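The environment-variable approach from the second option could look roughly like this on the Python side. MYSCRIPT_ARGS is a made-up variable name; the point is that exported variables are inherited by every intermediate shell, so none of the wrapper scripts needs to forward "$@":

```python
import os

# stand-in for the outermost shell doing:  export MYSCRIPT_ARGS="arg1 arg2"
os.environ['MYSCRIPT_ARGS'] = 'arg1 arg2'

# the Python script reads the variable instead of sys.argv
args = os.environ.get('MYSCRIPT_ARGS', '').split()
```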
2
0
0
0
I run python in my Android Terminal and want to run a .py file with: python /sdcard/myScript.py The problem is that python is called in my Android enviroment indirect with a shell in my /system/bin/ path (to get it direct accessable via Terminal emulator). My exact question, how the title tells how to pass parameter through multiple Shell scripts to Python? My direct called file "python" in /System/bin/ contains only a redirection like: sh data/data/com.hipipal.qpyplus/files/bin/qpython-android5.sh and so on to call python binary. Edit: I simply add the $1 parameter after every shell, Python is called through like: sh data/data/com.hipipal.qpyplus/files/bin/qpython-android5.sh $1 so is possible to call python /sdcard/myScript.py arg1 and in myScript.py as usual fetch with sys.argv thanks
Pass parameter through shell to python
1
1.2
1
0
0
373
35,928,155
2016-03-10T21:58:00.000
0
1
0
1
0
android,python,shell,qpython
0
36,178,959
0
2
0
false
0
1
I have a similar problem. Running my script from the Python console with /storage/emulator/0/Download/.last_tmp.py -s && exit I get "Permission denied", no matter whether I call .last_tmp.py or the edited script itself. Is there perhaps any way to pass the params in the editor?
2
0
0
0
I run python in my Android Terminal and want to run a .py file with: python /sdcard/myScript.py The problem is that python is called in my Android enviroment indirect with a shell in my /system/bin/ path (to get it direct accessable via Terminal emulator). My exact question, how the title tells how to pass parameter through multiple Shell scripts to Python? My direct called file "python" in /System/bin/ contains only a redirection like: sh data/data/com.hipipal.qpyplus/files/bin/qpython-android5.sh and so on to call python binary. Edit: I simply add the $1 parameter after every shell, Python is called through like: sh data/data/com.hipipal.qpyplus/files/bin/qpython-android5.sh $1 so is possible to call python /sdcard/myScript.py arg1 and in myScript.py as usual fetch with sys.argv thanks
Pass parameter through shell to python
1
0
1
0
0
373
35,928,317
2016-03-10T22:07:00.000
-2
0
1
0
0
keyboard,python-idle,enthought,shortcut
0
52,823,738
0
1
0
false
0
0
If I were you, I would try to keep using your old shortcuts, but if they still don't work, try using "Control" in place of "Option" in the shortcuts and vice versa.
1
0
0
0
I'm taking an online MIT programming course that suggested I use the enthought programming environment. I installed it and now my idle keyboard shortcuts have all changed. It seems to be directly caused by the installation of enthought, as my other computer (without enthought) still retained the old keyboard shortcuts. Anyone know how to get my old keyboard shortcuts back?
idle keyboard shortcuts have changed after installing enthought
0
-0.379949
1
0
0
21
35,938,891
2016-03-11T11:18:00.000
0
0
1
0
0
python,matplotlib,seaborn
0
35,939,091
0
1
0
true
0
0
Passing True or False into ax.yaxis.grid() and ax.xaxis.grid() will include or omit the horizontal and vertical grid lines; to keep only the horizontal lines, call ax.xaxis.grid(False).
1
0
1
0
I'm using the whitegrid style and it's fine except for the vertical lines in the background. I just want to retain the horizontal lines.
matplotlib & seaborn: how to get rid of lines?
0
1.2
1
0
0
202
35,941,506
2016-03-11T13:28:00.000
0
0
0
1
0
python,redirect,stdout
0
35,941,617
0
2
0
false
0
0
You want to use 'tee', with stdbuf forcing line-buffered output so lines appear on the fly: stdbuf -oL python mycode.py | tee out.txt (python -u mycode.py | tee out.txt also works, by unbuffering Python itself.)
1
2
0
0
I can run my python scripts on the terminal and get the print results on the stdout e.g. python myprog.py or simply redirect it to a file: python myprog.py > out.txt My question is how could I do both solutions at the same time. My linux experience will tell me something like: python myprog.py |& tee out.txt This is not having the behaviour I expected, print on the fly and not all at once when the program ends. So what I wanted (preferred without changing python code) is the same behavior as python myprog.py (print on the fly) but also redirecting output to a file. What is the simplest way to accomplish this?
python - print to stdout and redirect output to file
0
0
1
0
0
838
35,952,511
2016-03-12T01:06:00.000
0
0
1
0
1
python,django,virtualenv
0
35,952,928
0
2
0
false
1
0
It's also good practice to make a requires.txt file for all your dependencies. If for example your project requires Flask and pymongo, create a file with: Flask==<version number you want here> pymongo==<version number you want here> Then you can install all the necessary libraries by doing: pip install -r requires.txt Great if you want to share your project or don't want to remember every library you need in your virtualenv.
1
0
0
0
I'm trying to use virtualenv in my new mainly Python project. The code files are located at ~/Documents/Project, and I installed a virtual environment in there, located at ~/Documents/Project/env. I have all my packages and libraries I wanted in the env/bin folder. The question is, how do I actually run my Python scripts, using this virtual environment? I activate it in Terminal, then open idle as a test, and try "import django" but it doesn't work. Basically, how can I use the libraries install in the virtual environment with my project when I run it, instead of it using the standard directories for installed Python libraries?
How to use virtualenv in Python project?
0
0
1
0
0
2,126
35,956,180
2016-03-12T09:58:00.000
-1
0
1
0
0
python,windows,gcc,mingw,msys2
0
41,492,215
0
3
0
false
0
0
sys.platform gives msys when in msys-Python.
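A small sketch combining this with the compiler string reported by the interpreter. This is a heuristic, not an official flag: on Windows the python.org installer reports an MSC compiler, while MinGW/MSYS2 builds report GCC.

```python
import sys
import platform

# e.g. 'GCC 5.3.0' for a MinGW/MSYS2 build,
# 'MSC v.1900 64 bit (AMD64)' for the python.org installer
compiler = platform.python_compiler()

is_msys = sys.platform == 'msys'
looks_mingw = is_msys or compiler.startswith('GCC')
```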
1
2
0
0
I am using python in the MSYS2 environment. The MSYS2 has its own built MINGW python version. Also I can install the official python from the www.python.org. Here is the problem: If I want to write a python code need to know the python version is MINGW or the official one, how can I do it? Here are some ways I can image. Use the "sys.prefix" object. It tells the installation directory. the MSYS2 usually installed in the directory X:\msys2\.... and the official one install in the X:\Python27\ as default. But users may change the installation directory. So this is not a good way. Use the "sys.version" object can get the version strings show with the compiler name. It shows the MINGW python compiled by GCC, the official one compiled by MSC. But there may have some possibility that there is an other version's python also built by GCC or MSC. Is there any more elegant way can do this?
How to determine the python is mingw or official build?
1
-0.066568
1
0
0
2,041
35,962,581
2016-03-12T19:55:00.000
1
0
0
0
0
python,django,server
0
35,962,887
0
1
0
true
1
0
manage.py runserver is only meant to speed up your development process; it shouldn't be run on your production server. It's similar to PHP's built-in server, php -S host:port. Since you're coming from PHP, you can use Apache with mod_wsgi to serve your Django application; there are a lot of tutorials online on how to configure it properly. You might also want to read up on what WSGI is and why it's important.
1
0
0
0
This might be a very dumb question, so please bear with me (there's also no code included either). Recently, I switched from PHP to Python and fell in love with Django. Locally, everything works well. However, how are these files accessed when on a real server? Is the manage.py runserver supposed to be used in a server environment? Do I need to use mod_python ? Coming from PHP, one would simply use Apache or Nginx but how does the deployment work with Python/Django? This is all very confusing to me, admittedly. Any help is more than welcome.
Python development: Server Handling
0
1.2
1
0
0
42
35,963,580
2016-03-12T21:34:00.000
0
0
1
0
0
python,pandas,datanitro
1
35,964,006
0
1
0
false
0
0
DataNitro is probably using a different copy of Python on your machine. Go to Settings in the DataNitro ribbon, uncheck "use default Python", and select the Canopy python directory manually. Then, restart Excel and see if importing works.
1
1
1
0
when I try to import pandas using the data nitro shell, I get the error that there is no module named pandas. I have pandas through the canopy distribution, but somehow the data nitro shell isn't "finding" it. I suspect this has to do with the directory in which pandas is stored, but I don't know how to "extract" pandas from that directory and put it into the appropriate directory for data nitro. Any ideas would be super appreciated. Thank you!!
Can't find pandas in data nitro
0
0
1
0
0
193
35,964,324
2016-03-12T22:52:00.000
0
0
0
0
0
python,cassandra,resultset
0
35,969,262
0
1
0
false
0
0
There is no magic; you'll need to: create a prepared statement for INSERT ... INTO tableB ...; for each row of the ResultSet from table A, extract the values and bind them to the statement; then execute the bound statement to insert into B. You can use asynchronous queries (execute_async) to accelerate the migration a little, but be careful to throttle the number of in-flight requests.
1
0
0
0
I am using the python cassandra-driver to execute queries on a cassandra database and I am wondering how to re-insert a ResultSet returned from a SELECT query on table A to a table B knowing that A and B have the same columns but a different primary keys. Thanks in advance
Cassandra python driver - how to re-insert a ResultSet
0
0
1
1
0
294
35,965,427
2016-03-13T01:29:00.000
3
0
1
0
0
python,uninstallation
0
35,965,537
0
1
0
false
0
0
Find the uninstall shortcut in the Python folder of the Windows Start menu, or in Python's install folder, and run it. If you cannot find either, I think you can just delete the Python install folder; everything should be OK after you install the x64 Python, because for many programs only the files in the install folder are x86/x64 dependent, while the other files in the user folder are not. P.S. The installation folder may be located somewhere like C:\Python35 or C:\Users\USERNAME\AppData\Local\Programs\Python\Python35
1
3
0
0
I have Windows 7 64 Bit. By mistake I installed Python 3.5 32 bit. I want to uninstall it (for installing 64 Bit version) but dont know how to do it (It does not get uninstalled from Control Panel -> Uninstall a Program). I googled it and found some links but could not understand / was not able to do it. Please help.
Python 3.5 uninstall Windows 7
0
0.53705
1
0
0
9,697
35,968,464
2016-03-13T09:08:00.000
0
0
0
0
1
python,image,crash,save,pygame
1
51,588,818
0
1
0
false
0
1
Are you on Windows or on Mac? If you're on Windows, check whether you wrote the location like this: "\folder\thing.png". That is an error because of the leading "\", which makes the path absolute from the drive root. Remove it and try again.
1
1
0
0
I have written a program in Python which draws parts of the Mandelbrot set using pygame. However, when I leave it running to generate for a long time and then save the file I get this error: pygame.error: SavePNG: could not open for writing I'm not sure why this would happen and saving works fine usually. Perhaps when the computer goes to sleep something stops working? But more importantly does anyone know how to fix this?
pygame.error: SavePNG: could not open for writing?
0
0
1
0
0
1,146
35,968,682
2016-03-13T09:35:00.000
1
1
1
0
0
python,fortran,profiling,f2py
0
35,970,843
0
1
0
true
0
0
In the end I found that the -DF2PY_REPORT_ATEXIT compile option makes f2py report the time spent in its wrappers at exit.
1
0
0
0
I am currently writing a time consuming python program and decided to rewrite part of the program in fortran. However, the performance is still not good. For profiling purpose, I want to know how much time is spent in f2py wrappers and how much time is actual spent in fortran subroutines. Is there a convenient way to achieve this?
How to obtain how much time is spent in f2py wrappers
0
1.2
1
0
0
82
35,999,344
2016-03-14T22:30:00.000
4
0
0
1
0
python-3.x,tkinter
0
35,999,383
0
6
0
false
0
1
Check tkinter.TclVersion or tkinter.TkVersion (in Python 3 the module name is lowercase tkinter), and if neither works, try tkinter.__version__
2
27
0
0
Hi have been scavenging the web for answers on how to do this but there was no direct answer. Does anyone know how I can find the version number of tkinter?
How to determine what version of python3 tkinter is installed on my linux machine?
0
0.132549
1
0
0
40,994
35,999,344
2016-03-14T22:30:00.000
5
0
0
1
0
python-3.x,tkinter
0
63,566,647
0
6
0
false
0
1
Type this command on the Terminal and run it. python -m tkinter A small window will appear with the heading tk and two buttons: Click Me! and QUIT. There will be a text that goes like This is Tcl/Tk version ___. The version number will be displayed in the place of the underscores.
2
27
0
0
Hi have been scavenging the web for answers on how to do this but there was no direct answer. Does anyone know how I can find the version number of tkinter?
How to determine what version of python3 tkinter is installed on my linux machine?
0
0.16514
1
0
0
40,994
36,007,774
2016-03-15T09:51:00.000
1
0
1
0
0
python,macos,module,installation
0
36,007,983
0
1
0
true
0
0
You can do this if your pip version is >= 1.5: $ pip2.7 install package $ pip3.3 install package If your pip version is between 0.8 and 1.5, this will work: $ pip-2.7 install package $ pip-3.3 install package
1
0
0
0
I'm working on mac OS X and I have both Python 2.7 and 3.3 on it. I want to install pykml module and i successfully installed it on python 3.3, but how do I do the same for Python 2.7?
Install module in Python 2.7 and not in Python 3.3
0
1.2
1
0
0
66
36,022,867
2016-03-15T21:48:00.000
0
0
1
0
0
python,amazon-s3,pip
0
62,367,203
1
3
0
false
0
0
What about wrapping the whl file (e.g. yourpkg-1.0-py3-none-any.whl) inside another zip file (e.g. yourpkg.zip) with a deterministic name? Then you can set up a cron script to check whether the S3 object under that deterministic key has changed, and if so, unzip the wheel and install it.
1
21
0
0
we are trying to come up with a solution to have AWS S3 to host and distribute our Python packages. Basically what we want to do is using python3 setup.py bdist_wheel to create a wheel. Upload it to S3. Then any server or any machine can do pip install $http://path/on/s3. (including a virtualenv in AWS lambda) (We've looked into Pypicloud and thought it's an overkill.) Creating package and installing from S3 work fine. There is only one issue here: we will release new code and give them different versions. If we host our code on Pypi, you can upgrade some packages to their newest version by calling pip install package --upgrade. But if you host your packages on S3, how do you let pip know there's a newer version exists? How do you roll back to an older version by simply giving pip the version number? Is there a way to let pip know where to look for different version of wheels on S3?
If we want use S3 to host Python packages, how can we tell pip where to find the newest version?
1
0
1
0
0
13,564
36,029,866
2016-03-16T08:02:00.000
1
0
0
0
0
python-2.7,boto,emr,boto3
0
46,980,003
0
2
0
false
1
0
According to the boto3 documentation, yes it does support spot blocks. BlockDurationMinutes (integer) -- The defined duration for Spot instances (also known as Spot blocks) in minutes. When specified, the Spot instance does not terminate before the defined duration expires, and defined duration pricing for Spot instances applies. Valid values are 60, 120, 180, 240, 300, or 360. The duration period starts as soon as a Spot instance receives its instance ID. At the end of the duration, Amazon EC2 marks the Spot instance for termination and provides a Spot instance termination notice, which gives the instance a two-minute warning before it terminates. Iniside the LaunchSpecifications dictionary, you need to assign a value to BlockDurationMinutes. However, the maximum value is 360 (6 hours) for a spot block.
1
4
0
0
How can I launch an EMR using spot block (AWS) using boto ? I am trying to launch it using boto but I cannot find any parameter --block-duration-minutes in boto, I am unable to find how to do this using boto3.
How can I launch an EMR using SPOT Block using boto?
0
0.099668
1
0
1
454
36,054,602
2016-03-17T07:58:00.000
0
0
0
0
0
python,python-3.x,post,network-programming,python-requests
0
36,056,276
0
1
0
true
0
0
There are two different answers here: one for the case where you want to specify the NIC that sends the request, and one for what you're actually asking: finding the correct NIC. For the second, I can only say that it depends. Are these NICs on the same network/subnet? Are they bonded? If they are on different networks, then knowing the destination IP address you can use the computer's routing table to see which NIC the packet will go through. If they are bonded on the same network, the request will leave from the bond interface, since that is (normally) the one with an IP address. If they each have a different IP on the same subnet, then it depends; can you provide some more information in that case?
1
1
0
0
I have machine with multiple NICs that can be connected to one net, to different nets or every other possible way. Python script that uses requests module to send POST/GET requests to the server is ran on that machine. So, the question is next: how can I know in python script from which interface requests will be sent?
Python 3: how to get nic from which will be sent packets to certain ip?
0
1.2
1
0
1
204
36,059,066
2016-03-17T11:22:00.000
0
0
1
0
0
python,python-3.x
0
36,060,129
0
1
0
false
0
0
Q1. A better solution could be: pop, enqueue, dequeue, push. By doing this you get 4*n operations. For instance, let the stack be 1, 2, 3 (with 1 on top), and let s be the stack and q the queue. q.enqueue(s.pop()), so now s = [2, 3] and q = [1] q.enqueue(s.pop()), so now s = [3] and q = [1, 2] q.enqueue(s.pop()), so now s = [] and q = [1, 2, 3] s.push(q.dequeue()), so now s = [1] and q = [2, 3] s.push(q.dequeue()), so now s = [2, 1] and q = [3] s.push(q.dequeue()), so now s = [3, 2, 1] and q = [] q.enqueue(s.pop()), so now s = [2, 1] and q = [3] q.enqueue(s.pop()), so now s = [1] and q = [3, 2] q.enqueue(s.pop()), so now s = [] and q = [3, 2, 1] s.push(q.dequeue()), so now s = [3] and q = [2, 1] s.push(q.dequeue()), so now s = [2, 3] and q = [1] s.push(q.dequeue()), so now s = [1, 2, 3] and q = [] I don't think this is necessarily the best solution, but it is a better one (4*n vs 9*n calls, both O(n)). Q2. I think the same (possibly non-optimal) algorithm could be used in that case too. The main idea is that when you take an item off the stack it becomes the last item of the queue, because the two data structures consume their input in different orders. Hope this helps you :-)
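The 4*n scheme for Q1 can be sketched in Python, with a list standing in for the abstract stack and a deque for the queue (both choices are just for illustration):

```python
from collections import deque

def stack_size(stack):
    """Count the stack's entries with one helper queue, restoring the stack.

    Two pop-all/refill passes are needed: moving everything through the
    queue once reverses the stack, and the second pass restores the
    original order.
    """
    q = deque()
    n = 0
    while stack:                    # pass 1: stack -> queue (counts items)
        q.append(stack.pop())
        n += 1
    while q:                        # queue -> stack (stack is now reversed)
        stack.append(q.popleft())
    while stack:                    # pass 2: stack -> queue again
        q.append(stack.pop())
    while q:                        # queue -> stack (original order restored)
        stack.append(q.popleft())
    return n
```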
1
0
0
0
Question 1 You have an abstract stack with n entries and an empty abstract queue (to help). Approximately how many calls are needed to determine n? The stack needs to be unchanged afterwards. Question 2 Same question, but you start with an abstrack queue and have an empty abstrack stack. My reasoning Pop from stack -> push onto queue -> get from queue -> put on stack -> pop from stack -> push onto queue -> get from queue -> put on stack. Somewhere we throw in a counter and that makes it 8*n calls (9*n if the counter calls count). I don't see how else I can pop the items from the stack and then get them back in the right order. Is there a better way?
Number of calls to determine size of stack/queue
0
0
1
0
0
234
36,062,523
2016-03-17T13:50:00.000
0
0
1
0
0
python,pandas,python-3.5
1
36,587,035
1
1
0
false
0
0
You should try typing python3 instead of python: you are running Python 3.x, while the plain python command may point at Python 2.7.
1
0
0
0
First of all, I'm new to Python and using version 3.5. When trying to install pandas using pip install pandas it shows the error pip is not recognized as an internal or external command. Using another command to install some packages, py -3.5 -m pip install SomePackage, it shows the error could not find a version that satisfies the requirement SomePackage and also asks me to update pip from version 7.1.2 to 8.1.0. When updating pip with python -m pip install --upgrade pip it shows the error python is not recognized as an internal or external command. Now, how do I install pandas on my Python 3.5?
error for pandas installation and update of pip in python 3.5 windows
0
0
1
0
0
479
36,087,488
2016-03-18T14:53:00.000
0
0
0
1
0
python,linux,cpu
0
36,087,597
0
1
0
false
0
0
Maybe try using time.sleep() and play around with how long to sleep between calculations?
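A common way to get a roughly controlled load is a duty-cycle loop: busy-wait for the target fraction of each short period, then sleep for the rest. A rough single-core sketch (the parameters are illustrative, and the actual utilization will only be approximate):

```python
import time

def cpu_load(target=0.2, period=0.1, duration=5.0):
    """Keep one core at roughly `target` (0..1) utilization for
    `duration` seconds, using a busy/sleep duty cycle of `period`."""
    end = time.time() + duration
    while time.time() < end:
        busy_until = time.time() + target * period
        while time.time() < busy_until:
            pass                              # burn CPU cycles
        time.sleep((1.0 - target) * period)   # idle for the rest

cpu_load(target=0.2, period=0.05, duration=0.3)  # ~20% for 0.3 s
```

You can then watch the process in top/htop to verify the utilization; per-process CPU% there is CPU time divided by wall time.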
1
1
0
0
I'd like to know how to generate a controlled CPU load on Linux using shell or python script. By controlled load I mean creating a process that consumes a specified amount of CPU cycles (e.g., 20% of available CPU cycles). I wrote a python script that does some dummy computation like generating N random integers and sort them using the built-in sort function. I used "time" utility in Linux to compute the User and Kernel time consumed by the process. But I am not sure how to compute the CPU utilization of the specific process from CPU time. Thanks.
How to create controlled CPU load on Linux?
0
0
1
0
0
151
36,095,800
2016-03-18T23:14:00.000
1
0
0
0
0
python,apache,cookies,mod-wsgi,sign
0
36,096,173
0
1
1
false
1
0
The best (read: easiest) way to go about this is with session variables. That said, in lieu of the session-variable functionality you would get with a framework, you can implement your own basic system:
1) Generate a random session id
2) Send a cookie to the browser
3) JSON- or pickle-encode your variables
4a) Save the encoded string to a key-value storage system like Redis or memcached, with the session id as the key, or
4b) save it to a file on the server, preferably in /tmp/
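A minimal sketch of steps 1, 3, and 4a, using an in-memory dict to stand in for Redis/memcached (all names here are illustrative, not part of mod_wsgi; in a real WSGI app you would send the id back in a Set-Cookie header):

```python
import json
import secrets

SESSIONS = {}  # stand-in for Redis/memcached or files under /tmp/

def create_session(data):
    """Generate a random session id and store the JSON-encoded data."""
    sid = secrets.token_hex(16)       # 32-char random id for the cookie
    SESSIONS[sid] = json.dumps(data)  # step 3: encode the variables
    return sid

def load_session(sid):
    """Return the decoded session data, or None for unknown ids."""
    raw = SESSIONS.get(sid)
    return json.loads(raw) if raw is not None else None
```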
1
0
0
0
I'm developing a web app using my own framework that I created with mod_wsgi. I want to avoid dependencies such as Django or Flask and just have a short script; it actually won't be doing much. I have managed to authenticate the user via LDAP from a login page. The problem is that I don't want the user to authenticate every time an action requires authorization, but I don't know how to keep the user logged in. Should I use cookies? If so, what would be the best method to keep identification in cookies? What are my options?
Keep user signed in using mod_wsgi
0
0.197375
1
0
0
127
36,104,410
2016-03-19T16:49:00.000
0
0
1
0
0
python-2.7
0
36,119,138
0
2
0
false
0
0
I found this and it's working!
variable = "mymodule"
module = __import__(variable, globals(), locals(), [], -1)
...
module.myfunction()  # where myfunction is the name of the function inside mymodule.py
(Note that the -1 level argument only works on Python 2; Python 3 removed it.)
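For what it's worth, importlib.import_module (available since Python 2.7) is usually considered the cleaner way to import a module whose name is in a variable; math is used here just as a stand-in for your own module name:

```python
import importlib

module_name = "math"                       # could come from any string variable
module = importlib.import_module(module_name)
print(module.sqrt(16.0))                   # call a function on the imported module
```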
1
0
0
0
Hello dear Python programmers. I have a question about importing modules in another module with Python 2.7. I want to know how to import a .py module whose name is held in a variable. In fact, I would like to import a module based on the needs of my main module, to limit the memory usage of the computer. For example, suppose I have 25 modules: 1.py, 2.py ... 25.py. Suppose my main module P.py needs, at some point, the modules 2.py, 7.py, 15.py and 24.py but not the others. Because I don't know in advance which modules the main module needs, I currently import all modules with the import statement: import 1 2 3 ... 25. Is there a Python function to import only modules 2, 7, 15 and 24 via a variable? (For example: something_like_import(variable), where variable contains the module name to import.) Thank you.
import a module with a variable
0
0
1
0
0
80
36,115,491
2016-03-20T15:11:00.000
0
0
1
0
1
python,windows,psycopg2
1
69,657,488
0
1
0
false
0
0
pip install psycopg worked for me; don't mention the version, i.e. avoid pip install psycopg2.
1
0
0
0
I try to use pip install psycopg2 on windows10 and python3.5, but it show me below error message. how can i fix it ? Command "d:\desktop\learn\python\webcatch\appserver\webcatch\scripts\python.exe -u -c "import setuptools, tokenize;file='C:\Users\16022001\AppData\Local\Temp\pip-build-rsorislh\psycopg2\setup.py';exec(compile(getattr(tokenize, 'open', open)(file).read().replace('\r\n', '\n'), file, 'exec'))" install --record C:\Users\16022001\AppData\Local\Temp\pip-kzsbvzx9-record\install-record.txt --single-version-externally-managed --compile --install-headers d:\desktop\learn\python\webcatch\appserver\webcatch\include\site\python3.5\psycopg2" failed with error code 1 in C:\Users\16022001\AppData\Local\Temp\pip-build-rsorislh\psycopg2\
pip install psycopg2 error
0
0
1
1
0
329
36,118,490
2016-03-20T19:31:00.000
0
0
0
0
0
python,tweepy
0
55,508,308
0
4
1
false
1
0
I am using the code below and it works fine:
if api.show_friendship(source_screen_name='venky6amesh', target_screen_name='vag'):
    print("the user is not a friend of ")
    try:
        api.create_friendship(j)
        print('success')
    except tweepy.TweepError as e:
        print('error')
        print(e.message[0]['code'])
else:
    print("user is a friend of ", j)
1
0
0
1
I am trying to find out whether I follow someone or not. I realized that although it is written in the official Tweepy documentation, I cannot use API.exists_friendship anymore. Therefore, I tried to use API.show_friendship(source_id/source_screen_name, target_id/target_screen_name), and as the documentation says, it returns Friendship objects: (<tweepy.models.Friendship object at 0x105cd0890>, <tweepy.models.Friendship object at 0x105cd0ed0>). When I write screen_names = [user.screen_name for user in connection.show_friendship(target_id=someone_id)] it returns my_username and the username for someone_id. Can anyone tell me how I can use it properly? Or is there any other method that simply returns True/False, because I just want to know whether I follow him/her or not.
Checking friendship in Tweepy
1
0
1
0
1
3,016
36,122,612
2016-03-21T03:19:00.000
2
0
1
0
0
python,ruby,rubygems,pip,virtualenv
0
36,122,740
0
1
0
true
0
0
bundler is generally used to lock dependency versions for a project (e.g. the gem versions). rbenv and rvm (there are several others too) are two common approaches to managing multiple versions of Ruby. A feature these provide (at least rvm does) is gemsets: these are a way to isolate your gem directories (so you may have a default gemset and an edge gemset or something; I don't find these very useful, so I apologize for the bad examples). In general bundler is usually seen as the "good enough" solution to isolating dependencies, and gemsets don't seem to be used all that often anymore.
1
1
0
0
As the title says, Python uses virtualenv to isolate pip libraries for each Python application. Ruby has gems; how does it prevent library version conflicts without a virtual environment?
Python use virtualenv to prevent library version conflicts. How ruby does it?
0
1.2
1
0
0
299
36,141,010
2016-03-21T20:32:00.000
1
0
1
0
0
python,python-2.7,tkinter
0
36,141,286
0
1
0
false
0
1
Each instance of Tk is separate from any other instance of Tk. It gets its own copy of an underlying tcl interpreter. Two instance of Tk in the same process means you have two active tcl interpreters. As a general rule you should only ever have exactly one instance of Tk, but I suppose if you fully understand how they work, it should be possible to have two or more. I think this falls into the category of things you shouldn't do until you understand why you shouldn't do them. And once you understand, you won't want to do it.
1
1
0
0
I have been learning how to make GUIs with Tkinter and a question has occurred to me. As I'm testing the program, I often end up building the code while an instance of it already exists in the background. Are these two independent of each other in terms of performing their functions? I've always read that when I create the instance of the Tk() and then run its mainloop(), that is what takes care of everything. Can I have two or more loops running if each pertains to a different Tk() instance?
Is each instance of Tk() class running independent of each other?
0
0.197375
1
0
0
291
36,144,018
2016-03-22T00:12:00.000
0
0
0
0
0
python,django,forms,permissions,verify
0
36,146,532
0
1
1
false
1
0
You have several ways to do it:
1) UI level: when the search field is focused you can notify users, through an alert or another mechanism, that they are not allowed to search.
2) Server level: assuming your user is logged in or has an account, you can verify the user in the search request and return a response stating that they cannot search without confirming their email.
3) Don't let them use the site after registering unless they confirm their email. You can see searches as data display, and if you don't block that either, you confuse users: why can I see all articles but can't search?
I would go for 3 and let them use the site. They can confirm afterwards, when they try to do something which modifies the DB (i.e. when they try to post something; from a psychological standpoint there is then a block between them and their objective, and they will be more willing to confirm in order to achieve it).
1
0
0
0
So I have lots of forms that aren't attached to models, like a search form. I don't want people to be able to access these without first verifying their account through an email. How is the best way to limit their ability to do this? Is it through custom permissions? If so, how do I go about this? Thank you so much!
Django: Making custom permissions
0
0
1
0
0
41
36,159,706
2016-03-22T16:15:00.000
0
0
0
0
0
django,python-2.7,django-models,db2
0
36,159,818
0
2
0
false
1
0
DB2 uses so-called two-part names, schemaname.objectname. Each object, including tables, can be referenced by its full name. Within a session there is the current schema, which by default is set to the username. It can be changed with the SET SCHEMA myschema statement. For your question there are two options:
1) Reference the tables with their full name: schemaname.tablename
2) Use SET SCHEMA to set the common schema name and reference just the table.
1
2
0
0
In Django database username is used as schema name. In DB2 there is no database level users. OS users will be used for login into the database. In my database I have two different names for database user and database schema. So in django with db2 as backend how can I use different schema name to access the tables? EDIT: Clarifying that I'm trying to access via the ORM and not raw SQLs. The ORM implicitly is using the username as the schema name. How do I avoid that ?
django-db2 use different schema name than database username
0
0
1
1
0
688
36,193,159
2016-03-24T04:11:00.000
13
0
1
0
0
python,docx,python-docx
0
36,194,547
0
2
1
true
0
0
Short answer is no, because the page breaks are inserted by the rendering engine, not determined by the .docx file itself. However, certain clients place a <w:lastRenderedPageBreak> element in the saved XML to indicate where they broke the page last time it was rendered. I don't know which do this (although I expect Word itself does) and how reliable it is, but that's the direction I would recommend if you wanted to work in Python. You could potentially use python-docx to get a reference to the lxml element you want (like w:document/w:body) and then use XPath commands or something to iterate through to a specific page, but just thinking it through a bit it's going to be some detailed development there to get that working. If you work in the native Windows MS Office API you might be able to get something better since it actually runs the Word application. If you're generating the documents in python-docx, those elements won't be placed because it makes no attempt to render the document (nor is it ever likely to). We're also not likely to add support for w:lastRenderedPageBreak anytime soon; I'm not even quite sure what that would look like. If you search on 'lastRenderedPageBreak' and/or 'python-docx page break' you'll see other questions/answers here that may give a little more.
1
10
0
1
I am trying to create a program in python that can find a specific word in a .docx file and return page number that it occurred on. So far, in looking through the python-docx documentation I have been unable to find how do access the page number or even the footer where the number would be located. Is there a way to do this using python-docx or even just python? Or if not, what would be the best way to do this?
Page number python-docx
0
1.2
1
0
0
14,423
36,195,457
2016-03-24T07:56:00.000
0
0
0
0
0
python,scikit-learn,cluster-analysis,k-means
0
50,800,687
0
6
0
false
0
0
You can simply store the labels in an array, convert the array to a data frame, then merge the data you used to create the K-means clustering with the new data frame of clusters, and display the data frame. Now you should see each row with its corresponding cluster. If you want to list all the data for a specific cluster, use something like data.loc[data['cluster_label_name'] == 2], assuming 2 is your cluster of interest.
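Concretely, with scikit-learn's KMeans the per-sample cluster ids live in the labels_ attribute after fitting, and a boolean mask pulls out one cluster's samples; the toy data below is made up for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans

# Two well-separated blobs of three points each.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [10.0, 10.0], [10.1, 10.0], [10.0, 10.1]])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels = kmeans.labels_                    # cluster id for each input row

# All points that share the first point's cluster id:
cluster_of_first = X[labels == labels[0]]
print(cluster_of_first)                    # the three points near the origin
```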
1
40
1
0
I am using the sklearn.cluster KMeans package. Once I finish the clustering if I need to know which values were grouped together how can I do it? Say I had 100 data points and KMeans gave me 5 cluster. Now I want to know which data points are in cluster 5. How can I do that. Is there a function to give the cluster id and it will list out all the data points in that cluster?
How to get the samples in each cluster?
0
0
1
0
0
62,619
36,202,899
2016-03-24T14:41:00.000
1
0
0
0
0
python,django,multithreading,django-rest-framework,django-1.9
0
36,209,892
0
1
0
true
1
0
Django itself doesn't have a queue, but you can easily simulate it. Personally, I would probably use an external service, like RabbitMQ, but it can be done in pure Django if you want. Add a separate ImageQueue model to hold references to incoming images and use transaction management to make sure simultaneous requests don't return the same image. Maybe something like this (this is purely proof-of-concept code, of course):

class ImageQueue(models.Model):
    image = models.OneToOneField(Image)
    added = models.DateTimeField(auto_now_add=True)
    processed = models.DateTimeField(null=True, default=None)
    processed_by = models.ForeignKey(User, null=True, default=None)

    class Meta:
        ordering = ('added',)

...

# in the incoming image API that the drone uses
def post_an_image(request):
    image = Image()
    ... whatever you do to post an image ...
    image.save()
    queue = ImageQueue.objects.create(image=image)
    ... whatever else you need to do ...

# in the API your users will use
from django.db import transaction

@transaction.atomic
def request_images(request):
    user = request.user
    num = request.POST['num']  # number of images requested
    queue_slice = ImageQueue.objects.filter(processed__isnull=True)[:num]
    for q in queue_slice:
        q.processed = datetime.datetime.now()
        q.processed_by = user
        q.save()
    return [q.image for q in queue_slice]
1
1
0
0
I am setting up a system where one user will be posting images to a Django server and N users will each be viewing a subset of the posted images in parallel. I can't seem to find a queuing mechanism in Django to accomplish this task. The closest thing is using latest with filter(), but that will just keep sending the latest image over and over again until a new one comes. The task queue doesn't help since this isn't a periodic task, it only occurs when a user asks for the next picture. I have one Viewset for uploading the images and another for fetching. I thought about using the python thread-safe Queue. The unloader will enqueue the uploaded image pk, and when multiple users request a new image, the sending Viewset will dequeue an image pk and send it to the most recent user requesting an image and then the next one dequeued to the second most recent user and so on... However, I still feel like there are some race conditions possible here. I read that Django is thread-safe, but that the app can become un-thread-safe. In addition, the Queue would need to be global to be shared among the Viewsets, which feels like bad practice. Is there a better and safer way of going about this? Edit Here is more detail on what I'm trying to accomplish and to give it some context. The user posting the pictures is a Smart-phone attached to a Drone. It will be posting pictures from the sky at a constant interval to the Django server. Since there will be a lot of pictures coming in. I would like to be able to have multiple users splitting up the workload of looking at all the pics (i.e. no two user's should see the same picture). So when a user will contact the Django server, saying "send me the next pic you have or send me the next 3 pics you have or etc...". However, multiple users might say this at the same time. So Django needs to keep some sort of ordering to the pictures,that's why I said Queue and figure out how to pass it to users if more than one of them asks at a time. 
So one Viewset is for the smart phone to post the pics and the other is for the users to ask for the pics. I am looking for a thread-safe way to do this. The only idea I have so far is to use Python's thread-safe queue and make it a global queue to the Viewsets. However, I feel like that is bad practice, and I'm not sure if it is thread-safe with Django.
Queuing pictures to be requested in Django
0
1.2
1
0
0
394
36,212,431
2016-03-25T00:50:00.000
0
0
1
0
0
python
0
36,212,457
0
4
0
false
0
0
Why not read each file into a list, where each element of the list holds one line? Once you have both files loaded into your lists, you can work line by line (index by index) through the lists, doing whatever comparisons/operations you require.
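Alternatively, file objects are themselves iterable line by line, so zip (itertools.izip on Python 2) pairs up lines from both files lazily, without loading everything into memory. A self-contained sketch; the two input files are created here just so the example runs end to end:

```python
# Throwaway input files, created so the sketch is self-contained.
with open("a.txt", "w") as f:
    f.write("alpha\nbravo\n")
with open("b.txt", "w") as f:
    f.write("1\n2\n")

# zip stops at the shorter file; each iteration yields one line
# from each file, so you can operate on them as a pair.
with open("a.txt") as f1, open("b.txt") as f2:
    pairs = [(a.strip(), b.strip()) for a, b in zip(f1, f2)]

print(pairs)  # [('alpha', '1'), ('bravo', '2')]
```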
1
0
0
0
I want to use Python to open two files at the same time, read one line from each of them, then do some operations. Then read the next line from each and do some operations, then the next line, and so on. I want to know how I can do this. It seems that a plain for loop cannot do this job.
how to use python to deal with two file at the same time
0
0
1
0
0
48
36,223,345
2016-03-25T15:55:00.000
10
0
0
1
0
python,windows-10,simplehttpserver
0
36,223,473
0
3
0
true
0
0
OK, so a different command is apparently needed. This works: C:\pathToIndexfile\py -m http.server. As pointed out in a comment, the change to "http.server" is not because of Windows, but because I changed from Python 2 to Python 3.
1
5
0
0
I recently bought a Windows 10 machine and now I want to run a server locally for testing a web page I am developing. On Windows 7 it was always very simple to start an HTTP server via Python and the command prompt. For example, the line below would fire up an HTTP server and I could view the website through localhost: C:\pathToIndexfile\python -m SimpleHTTPServer. This does however not seem to work on Windows 10... Does anyone know how to do this on Windows 10?
How to start python simpleHTTPServer on Windows 10
0
1.2
1
0
1
25,052
36,234,690
2016-03-26T11:24:00.000
1
1
0
0
0
python,email
0
36,242,008
0
1
0
true
0
0
Well, just a little bit of testing with telnet will give you the answer to the question 'how do I find the imap server for privateemail.com'. mail.privateemail.com is their IMAP server.
1
0
0
0
I am using NameCheap to host my domain, and I use their privateemail.com to host my email. I'm looking to create a python program to retrieve specific/all emails from my inbox and to read the HTML from them (html instead of .body because there is a button that has a hyperlink which I need an is only accessible via html). I had a couple questions for everyone. Would the best way to do this be via IMAPlib? If it is, how do I find out the imap server for privateemail.com? I could do this via selenium, but it would be heavy and I would prefer a lighter weight and faster solution. Any ideas on other possible technologies to use? Thanks!
Retrieving emails from NameCheap Private Email
0
1.2
1
0
1
411
36,288,578
2016-03-29T15:25:00.000
0
0
0
0
0
python,machine-learning,scikit-learn
0
36,298,305
0
1
0
false
0
0
I don't see why you'd want X to vary for each task: the point of multitask learning is that the same feature space is used to represent instances for multiple tasks which can be mutually informative. I get that you may not have ground truth y for all instances for all tasks, though this is currently assumed in the scikit-learn implementation.
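To make the expected shapes concrete, here is a toy sketch with made-up random data: X stays 2-D and is shared across all tasks, Y carries one column per task, and the fitted coef_ comes back with shape (n_tasks, n_features):

```python
import numpy as np
from sklearn.linear_model import MultiTaskElasticNet

n_samples, n_features, n_tasks = 20, 5, 3
rng = np.random.RandomState(0)
X = rng.randn(n_samples, n_features)   # one shared feature matrix, 2-D
Y = rng.randn(n_samples, n_tasks)      # one target column per task

model = MultiTaskElasticNet(alpha=0.1).fit(X, Y)
print(model.coef_.shape)               # (n_tasks, n_features) -> (3, 5)
```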
1
0
1
0
I have one huge data matrix X, of which subsets of rows correspond to different tasks that are related but also have different idiosyncratic properties. Thus I want to train a Multi-Task model with some regularization and chose sklearn's linear_model MultiTaskElasticNet function. I am confused with the inputs of fitting the model. It says that both the X and the Y matrix are 2-dimensional. The 2nd dimension in Y corresponds to the number of tasks. That makes sense, but in my understanding the X matrix should be 3-dimensional right? In that way I have selected which subsets of my data correspond to different tasks as I know that in advance (obviously). Does someone know how to enter my data correctly for this scikit-learn module? Thank you!
Sklearn multi-task: Input data not 3-dimensional?
0
0
1
0
0
511
36,288,794
2016-03-29T15:35:00.000
2
0
1
0
0
python,mongodb,pymongo
0
36,289,062
0
1
0
false
0
0
With Mongo you have to pass a JSON document (which in Python we can essentially think of as a dict) in order to do anything, really. So say you want to add a list or set of numbers to the hamburgers field in your collection; you would prepare it as follows (note the filter document as the first argument, where doc_id stands for the _id of the document you're updating, and that '$each' must be quoted in Python): db.your_collection.update({'_id': doc_id}, {'$push': {'hamburgers': {'$each': [1, 2, 3, 4]}}}) If it's a set, you can convert it to a list: db.your_collection.update({'_id': doc_id}, {'$push': {'hamburgers': {'$each': list({1, 2, 3, 4})}}})
1
1
0
0
Apparently it is not possible and the best advice I found so far is to use dict but this is not what I want. Is there still a way to store a set / list in MongoDB?
MongoDB & PyMongo: how to store a set / list?
1
0.379949
1
0
0
2,869
36,295,569
2016-03-29T21:38:00.000
1
0
1
0
0
python,class,internal
0
36,295,771
0
2
0
true
0
0
Python code doesn't have any such equivalent for an anonymous namespace or static linkage for functions. There are a few ways you can get what you're looking for:
1) Prefix with _. Names beginning with an underscore are understood to be for internal use within that Python file and are not exported by from ... import *. It's as simple as class _MyClass.
2) Use __all__: if a Python file contains a list of strings named __all__, only the functions and classes named in it are exported by from ... import *; everything else is understood to be local to that Python file.
3) Use local classes/functions. This would be done the same way you've done so with C++ classes.
None of these gets exactly what you want, but privacy and restricting in this way are just not part of the language (much like how there's no private data member equivalent). Pydoc is also well aware of these conventions and will provide informative documentation for the intended-to-be-public functions and classes.
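A small self-contained sketch of the first two conventions; the demo_mod module and its contents are made up for illustration, and the module is written to disk here just so the example runs end to end:

```python
import os
import sys

# A hypothetical module demonstrating both conventions.
source = '''\
__all__ = ["PublicThing"]      # only these names are exported by "import *"

class PublicThing:
    pass

class _Helper:                 # leading underscore: internal by convention
    pass
'''

with open("demo_mod.py", "w") as f:
    f.write(source)
sys.path.insert(0, os.getcwd())

ns = {}
exec("from demo_mod import *", ns)   # simulates a star import of the module
print("PublicThing" in ns)           # True  -- listed in __all__
print("_Helper" in ns)               # False -- hidden from star imports
```

Note that both conventions only affect star imports and signal intent; demo_mod._Helper remains reachable for anyone who asks for it explicitly.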
1
1
0
0
I'm relatively new to Python. When I did C/C++ programming, I used the internal classes quite often. For example, in some_file.cc, we may implement a class in the anonymous namespace to prevent it from being used outside. This is useful as a helper class specific to that file. Then, how we can do a similar thing in Python?
How to make a python class not exposed to the outside?
0
1.2
1
0
0
751