Q_Id (int64) | CreationDate (string) | Users Score (int64) | Other (int64) | Python Basics and Environment (int64) | System Administration and DevOps (int64) | Tags (string) | A_Id (int64) | AnswerCount (int64) | is_accepted (bool) | Web Development (int64) | GUI and Desktop Applications (int64) | Answer (string) | Available Count (int64) | Q_Score (int64) | Data Science and Machine Learning (int64) | Question (string) | Title (string) | Score (float64) | Database and SQL (int64) | Networking and APIs (int64) | ViewCount (int64) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
38,847,743 | 2016-08-09T10:03:00.000 | 1 | 0 | 0 | 0 | python,r,dataframe,google-bigquery | 39,119,230 | 1 | true | 0 | 0 | It looks like bigrquery package does the job with insert_upload_job(). In the package documentation, it says this function
> is only suitable for relatively small datasets
but it doesn't specify any size limits. For me, it's been working for tens of thousands of rows. | 1 | 0 | 1 | I know it's possible to import Google BigQuery tables to R through bigrquery library. But is it possible to export tables/data frames created in R to Google BigQuery as new tables?
Basically, is there an R equivalent of Python's temptable.insert_data(df) or df.to_sql() ?
thanks for your help,
Kasia | Exporting R data.frame/tbl to Google BigQuery table | 1.2 | 1 | 0 | 635 |
38,847,922 | 2016-08-09T10:10:00.000 | 0 | 0 | 1 | 0 | python,pdf-generation,pypdf2 | 42,619,596 | 1 | false | 0 | 0 | In PyPDF2, you can get a page object, which is a dictionary. You can then search for the key '/Annots' and its values in it. That way you can at least know whether a page has highlighted text or not. | 1 | 3 | 0 | I am currently trying to use PyPDF2 to read a PDF file in Python. I want to know whether the text of the PDF file is highlighted or not.
Context:
We usually highlight text in a PDF file with a different color. Is there any way to know which text is highlighted in Python using any library or so?
If there is please direct me to the right source.
I have looked in many places for this problem. What I found is that PyPDF2 can't solve this problem. | Finding text whether it is highlighted or not | 0 | 0 | 0 | 304
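A minimal sketch of the '/Annots' approach from the answer above, using the classic PyPDF2 PdfFileReader API; the file name is a placeholder and highlights are assumed to use the standard '/Highlight' annotation subtype:

```python
# Sketch: flag pages that carry highlight annotations with PyPDF2.
# "document.pdf" is a placeholder path.
from PyPDF2 import PdfFileReader

reader = PdfFileReader(open("document.pdf", "rb"))
for page_number in range(reader.getNumPages()):
    page = reader.getPage(page_number)
    annotations = page.get("/Annots")  # None when the page has no annotations
    if not annotations:
        continue
    for annotation_ref in annotations:
        annotation = annotation_ref.getObject()
        if annotation.get("/Subtype") == "/Highlight":
            print("Page %d contains highlighted text" % (page_number + 1))
            break
```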
38,853,094 | 2016-08-09T14:07:00.000 | 1 | 0 | 1 | 0 | python,linux,ole,visio | 38,855,285 | 2 | true | 0 | 0 | You have really picked a strong enemy :)
Unlike other office apps Visio .vsd binary file format is not exactly Microsoft's "compound document", that's basically just a wrapper. The format was created by Visio Corp back in 199x, and AFAIK was never actually publicly documented.
I would really recommend you NOT to go with binary .VSD if possible. Latest Visio supports standard openxml format (.vsdx) which is just a bunch of zipped xml files basically.
AFAIK the only known third-party library to understand binary .vsd is aspose diagrams, but it's not free. | 1 | 1 | 0 | I am trying to read the contents of a Visio Binary .VSD file which contains information from a graph I have made.
I have tried using the OLE Tools and OLEFile but cannot correctly read the contents. I can view the file with the OLETools. When I dump the contents and view it with the 'xxd' command (in terminal) i can't clearly see the text that I saved within the file. There is a lot of extra \x00, \xff etc. and other characters within the file, which when removed make it worse. I've done the exact same with a .doc file and I have been able to open and clearly read the contents.
Can anyone please point me in the correct direction if I am doing this wrong or rather in the direction of other tools that work fine? | Reading data from a VSD (Windows Visio Binary) File in Python (Linux) with OLE Tools is very unclear, is there any other way to extract the data? | 1.2 | 0 | 0 | 2,852 |
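Since the answer recommends the XML-based .vsdx format instead, here is a small standard-library sketch (the file name is a placeholder) showing that a .vsdx is just a zip archive whose XML parts can be read directly:

```python
# Sketch: open a .vsdx (OOXML Visio) file as a plain zip archive and peek at
# its XML parts. "drawing.vsdx" is a placeholder path.
import zipfile

with zipfile.ZipFile("drawing.vsdx") as archive:
    for name in archive.namelist():
        print(name)                          # lists the parts inside the package
        if name.endswith(".xml") and "pages" in name:
            print(archive.read(name)[:200])  # peek at a page's XML content
```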
38,854,382 | 2016-08-09T15:06:00.000 | 0 | 0 | 0 | 0 | apache,flask,python-requests | 38,869,412 | 1 | false | 1 | 0 | Dirn was completely right, it turned out not to be an Apache issue at all. It was SQL Alchemy all along.
I imagine that SQL Alchemy knows not to do any 'caching' when it requests data on the development server but decides that it's a good idea in production, which makes perfect sense really. It was not using the committed data on every search, which is why restarting the Apache server fixed it because it also reset the connection.
I guess that's what dirn meant by "How are you loading data in your application?" I had assumed that since I turned off Flask's debugging on the development server it would behave just like it would in deployment but it looks like something has slipped through. | 1 | 0 | 0 | I am running a Flask app on an Apache 2.4 server. The app sends requests to an API built by a colleague using the Requests library. The requests are in a specific format and constructed by data stored in a MySQL database. The site is designed to show the feedback from the API on the index, and the user can edit the data stored in the MySQL database (and by extension, the data sent in the request) by another page, the editing page.
So let's say for example a custom field date is set to be "2006", I would access the index page, a request would be sent, the API does its magic and sends back data relevant to 2006. If I then went and changed the date to "2007" then the new field is saved in MySQL and upon navigating back to index the new request is constructed, sent off and data for 2007 should be returned.
Unfortunately that's not happening.
When I change details on my editing page they are definitely stored in the database, but when I navigate back to the index the request sends the previous set of data. I think that Apache is causing the problem for two reasons:
When I reset the server (service apache2 restart) the data sent back is the 'proper' data, even though I haven't touched the database. That is, the index is initially requesting 2006 data, I change it to request 2007 data, it still requests 2006 data, I restart the server, refresh the index and only then does it request 2007 data like it should have been doing since I edited it.
When I run this on my local Flask development server, navigating to the index page after editing an entry immediately returns the right result - it feeds off the same database and is essentially identical to the deployed server except that it's not running on apache.
Is there a way that Apache could be caching requests or something? I can't figure out why the server would keep sending old requests until I restart it.
EDIT:
The requests themselves are large and ungainly and the responses would return data that I'm not comfortable with making available for examples for privacy reasons.
I am almost certain that Apache is the issue because as previously stated, the Flask development server has no issues with returning the correct dataset. I have also written some requests to run through Postman, and these also return the data as requested, so the request structure must be fine. The only difference I can see between the local Flask app and the deployed one is Apache, and given that restarting the Apache server 'updates' the requests until the data is changed again, I think that it's quite clearly doing something untoward. | Apache server seems to be caching requests | 0 | 1 | 0 | 83 |
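The question does not show the database code, so the following is only a sketch of the usual fix for this symptom with plain SQLAlchemy under Flask: give every request a fresh session so committed changes made elsewhere become visible instead of being hidden by a long-lived transaction. The connection URL is a placeholder.

```python
# Sketch: end the SQLAlchemy session at the end of every Flask request so the
# next request reads freshly committed data. The URL is a placeholder.
from flask import Flask
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

app = Flask(__name__)
engine = create_engine("mysql://user:password@localhost/mydb")
Session = scoped_session(sessionmaker(bind=engine))

@app.teardown_appcontext
def cleanup_session(exception=None):
    Session.remove()  # closes the transaction; no stale snapshot on the next request
```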
38,856,271 | 2016-08-09T16:42:00.000 | 1 | 0 | 0 | 1 | python,r,shell,command-line | 38,856,331 | 3 | true | 0 | 0 | You probably already have R, since you can already run your script.
All you have to do is find its binaries (the Rscript.exe file).
Then open the Windows command line ([Win] + [R] > type in: "cmd" > [enter]).
Enter the full path to Rscript.exe, followed by the full path to your script. | 2 | 4 | 0 | I'm currently trying to run an R script from the command line (my end goal is to execute it as the last line of a python script). I'm not sure what a batch file is, or how to make my R script 'executable'. Currently it is saved as a .R file. It works when I run it from R.
How do I execute this from the windows command prompt line? Do i need to download something called Rscript.exe? Do I just save my R script as an .exe file? Please advise on the easiest way to achieve this.
R: version 3.3 python: version 3.x os: windows | Running an R script from command line (to execute from python) | 1.2 | 0 | 0 | 3,887 |
38,856,271 | 2016-08-09T16:42:00.000 | 3 | 0 | 0 | 1 | python,r,shell,command-line | 38,856,393 | 3 | false | 0 | 0 | You already have Rscript, it came with your version of R. If R.exe, Rgui.exe, ... are in your path, then so is Rscript.exe.
Your call from Python could just be Rscript myFile.R. Rscript is much better than R CMD BATCH ... and other very old and outdated usage patterns. | 2 | 4 | 0 | I'm currently trying to run an R script from the command line (my end goal is to execute it as the last line of a python script). I'm not sure what a batch file is, or how to make my R script 'executable'. Currently it is saved as a .R file. It works when I run it from R.
How do I execute this from the windows command prompt line? Do i need to download something called Rscript.exe? Do I just save my R script as an .exe file? Please advise on the easiest way to achieve this.
R: version 3.3 python: version 3.x os: windows | Running an R script from command line (to execute from python) | 0.197375 | 0 | 0 | 3,887 |
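Since the end goal is to call the script from Python, a small sketch using the standard library; both paths are placeholders that depend on where R is installed on this machine:

```python
# Sketch: call Rscript from Python and capture its output.
# Both paths below are placeholder assumptions.
import subprocess

rscript = r"C:\Program Files\R\R-3.3.0\bin\Rscript.exe"
output = subprocess.check_output([rscript, r"C:\path\to\myFile.R"])
print(output)
```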
38,858,195 | 2016-08-09T18:43:00.000 | -2 | 0 | 1 | 0 | python,virtualenv,jupyter | 38,859,614 | 1 | false | 0 | 0 | you can use tox to setup multiple virtualenvs instead ... that would generally be the standard solution | 1 | 0 | 0 | I want to use jupyter with both the versioned kernel of python in a virtualenv. How can I do that? | Is there a way to use python 2.x and 3.x in same virtualenv? | -0.379949 | 0 | 0 | 68 |
38,858,553 | 2016-08-09T19:04:00.000 | -1 | 0 | 0 | 0 | mysql,django,oracle,python-2.7 | 38,858,633 | 1 | false | 1 | 0 | Basically write models that match what you want your destination tables to be and then write something to migrate data between the two. I'd make this a comment if I could but not enough rep. | 1 | 0 | 0 | Right now in Django, I have two databases:
A default MySQL database for my app and
an external Oracle database that, for my purposes, is read-only
There are far more tables in the external database than I need data from, and also I would like to modify the db layout slightly. Is there a way I can selectively choose what data in the external database I would like to sync to my database? The external database is dynamic, and I would like my app to reflect that.
Ex I would like to do something like this:
Say the external database has two tables (out of 100) as follows:
Table47
Eggs
Spam
Sausage
Table48
Name
Age
Color
And I want to keep the data like:
Foo
Eggs
Spam
Type (a foreign key)
Bar
Name
Age
Type (foreign key)
Type
Some fields
Is there a way I could do this in Django? | How do you selectively sync a database in Django? | -0.197375 | 1 | 0 | 218 |
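Following the accepted approach above (define destination models, then copy data across), a minimal sketch of what the destination models could look like; the field types and lengths are assumptions based on the example tables:

```python
# Sketch of destination models matching the desired layout in the question.
# Field types/lengths are assumptions; adjust to the real Oracle columns.
from django.db import models

class Type(models.Model):
    name = models.CharField(max_length=100)

class Foo(models.Model):
    eggs = models.CharField(max_length=100)
    spam = models.CharField(max_length=100)
    type = models.ForeignKey(Type, on_delete=models.CASCADE)

class Bar(models.Model):
    name = models.CharField(max_length=100)
    age = models.IntegerField()
    type = models.ForeignKey(Type, on_delete=models.CASCADE)

# A periodic sync job can then read the external rows with
# SomeExternalModel.objects.using('oracle') (hypothetical model name)
# and save the selected fields into Foo/Bar on the default database.
```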
38,862,088 | 2016-08-09T23:38:00.000 | 3 | 1 | 0 | 0 | python,linux,yocto,bitbake,openembedded | 38,865,576 | 4 | false | 0 | 0 | The OE layer index at layers.openembedded.org lists all known layers and the recipes they contain, so searching that should bring up the meta-python layer that you can add to your build and use recipes from. | 1 | 13 | 0 | I wish to add more python modules to my yocto/openembedded project but I am unsure how to? I wish to add flask and its dependencies. | How do I add more python modules to my yocto/openembedded project? | 0.148885 | 0 | 0 | 20,667 |
38,865,708 | 2016-08-10T06:23:00.000 | 0 | 0 | 1 | 0 | python,raspberry-pi,scikit-learn | 38,866,597 | 2 | false | 0 | 0 | scikit-learn will run on a Raspberry Pi just as well as any other Linux machine.
To install it, make sure you have pip3 (sudo apt-get install python3-pip), and use sudo pip3 install scikit-learn.
All Python scripts utilizing scikit-learn will now run as normal. | 1 | 0 | 0 | I'm new in embedded programming, and would like to understand what I need to do to run python scikit-learn on a capable embedded processor.
See Raspberry Pi as an example. | How can I run python scikit-learn on Raspberry Pi? | 0 | 0 | 0 | 18,641 |
38,866,244 | 2016-08-10T06:54:00.000 | 0 | 1 | 0 | 0 | python,pytest,python-2.6 | 38,869,743 | 2 | false | 0 | 0 | Unfortunately pytest < 3.0 "hides" the ImportError happening when failing to import a plugin. If you remove all plugin arguments but add -rw, you should be able to see what exactly is going wrong in the warning summary. | 1 | 1 | 0 | I am new to pytest and I have python2.6 installed on my setup.
I installed pytest and the testcases get executed properly. I installed couple of plugins like pytest-timeout, putest-xdist etc but these plugins does not load when I run the cases. For timeout, I get following error: py.test: error: unrecognized arguments: --timeout
Same steps followed with python2.7 works.
Any idea how this can be solved or alteast steps to debug to know what exactly is causing the issue. | Does pytest plugins work with python2.6 | 0 | 0 | 0 | 128 |
38,866,649 | 2016-08-10T07:18:00.000 | 1 | 0 | 0 | 1 | python-2.7,http,tornado,broadcast | 38,866,860 | 2 | true | 0 | 0 | Short answer: you might be interested in WebSockets. Tornado seems to have support for this.
Longer answer: I assume you're referring to broadcast from the server to all the clients.
Unfortunately that's not doable conceptually in HTTP/1.1 because of the way it's thought out. The client asks something of the server, and the server responds, independently of all the others.
Furthermore, while there is no request going on between a client and a server, that relationship can be said to not exist at all. So if you were to broadcast, you'd be missing out on clients not currently communicating with the server.
Granted, things are not as simple. Many clients keep a long-lived TCP connection when talking to the server, and pipeline HTTP requests for it on that. Also, a single request is not atomic, and the response is sent in packets. People implemented server-push/long-polling before WebSockets or HTTP/2 with this approach, but there are better ways to go about this now. | 1 | 1 | 0 | I have a tornado HTTP server.
How can I implement broad-cast message with the tornado server?
Is there any function for that or I just have to send normal HTTP message all clients looping.
I think if I send normal HTTP message, the server should wait for the response.
It seems not the concept of broad-cast.
Otherwise, I need another third-part option for broad-cast?
Please give me any suggestion to implement broad-cast message. | How can I send HTTP broadcast message with tornado? | 1.2 | 0 | 0 | 544 |
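A minimal sketch of the WebSocket approach suggested in the answer; the URL path and port are arbitrary choices:

```python
# Sketch: broadcast a message to every connected WebSocket client with Tornado.
import tornado.ioloop
import tornado.web
import tornado.websocket

clients = set()

class BroadcastHandler(tornado.websocket.WebSocketHandler):
    def open(self):
        clients.add(self)

    def on_close(self):
        clients.discard(self)

def broadcast(message):
    for client in clients:
        client.write_message(message)  # push the same payload to every client

app = tornado.web.Application([(r"/ws", BroadcastHandler)])
app.listen(8888)
tornado.ioloop.IOLoop.current().start()
```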
38,866,665 | 2016-08-10T07:19:00.000 | 0 | 0 | 0 | 0 | syntax-error,stdin,qpython | 38,874,758 | 1 | false | 0 | 1 | I am editing my last answer:
I was mistaken when I read the text and the error was my fault.
Be sure and read thoroughly to avoid mistakes. | 1 | 0 | 0 | I am using QPython on my Samsung Android V. I am studying from the book Think Python. The topic is about Adding New Functions such as: def print_lyrics:
print ("I am a lumberjack and I work all day."
Etc.
But when I attempt to run it I get an error saying, File "", line 1 error.
1. I looked on Google but could not find the exact answer for this particular problem. I was about to look for a new app and delete QPython because I've been stuck for two days on this lesson. But I came here first.
2. I tried different methods of doing this program function with no luck.
3. I'm wondering if it just isn't available on the app, or am I failing to understand what I am doing wrong, or missing?
Thanks for your help. | File "", line 1 error In QPython When Running def Function | 0 | 0 | 0 | 474 |
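For reference, the snippet quoted in the question is missing the parentheses after the function name and the closing parenthesis of the print call; a corrected version, which runs the same way in QPython as in any other Python, looks like this:

```python
# Corrected version of the function from the question above.
def print_lyrics():
    print("I am a lumberjack and I work all day.")

print_lyrics()
```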
38,869,507 | 2016-08-10T09:34:00.000 | 1 | 1 | 0 | 1 | php,android,python,linux,exec | 38,870,240 | 2 | false | 0 | 0 | First Check your python PATH using "which python" command and check result is /usr/bin/python.
Check your "TestCode.py": if you have written #!/usr/bin/sh then replace it with #!/usr/bin/bash.
Then run these commands
exec('/usr/bin/python /var/www/html/Source/TestCode.py', $result);
echo $result | 1 | 1 | 0 | I am making an android application in which I am first uploading the image to the server and on the server side, I want to execute a Python script from PHP. But I am not getting any output. When I access the Python script from the command prompt and run python TestCode.py it runs successfully and gives the desired output. I'm running Python script from PHP using the following command:
$result = exec('/usr/bin/python /var/www/html/Source/TestCode.py');
echo $result
However, if I run a simple Python program from PHP it works.
PHP has the permissions to access and execute the file.
Is there something which I am missing here? | Execution of a Python Script from PHP | 0.099668 | 0 | 0 | 173 |
38,870,823 | 2016-08-10T10:28:00.000 | 0 | 0 | 0 | 0 | python,database,matlab,oop,time-series | 38,870,995 | 1 | true | 0 | 0 | For current computers 30 years of day-by-day data amounts to almost nothing if your daily data remains below, say, 10kB. Since your simulation may need efficient retrieval, especially if it combines data from different dates, I'd read all the data into memory in one query and then start processing.
What is considered elegant is changing. Quite some years ago loading everything into memory would have been considered a cardinal sin. Nowadays in-memory databases are common. Since databases in general offer set-level access (especially when queried by SQL, which you probably use) it would be quite inefficient to retrieve data item by item in a loop (although your database may be clever enough to cache things). | 1 | 0 | 0 | I am writing an application that uses historical time series data to perform simulations.
Is it better for application to load the data from the database into local data wrapper classes before executing the main loop (up to 30 years day by day) or connect to the database each day to pull the required data?
Which is more elegant and efficient? | Database to wrapper-classes or direct connectivity to database for time-series simulation application? | 1.2 | 1 | 0 | 15 |
38,873,603 | 2016-08-10T12:35:00.000 | 0 | 0 | 1 | 0 | python,pip | 38,873,722 | 2 | false | 0 | 0 | Yes, pip install will execute remote code with your user access; anything could have happened that you as a user can do on your computer without admin access.
Now, usually the setup.py file of a project does nothing more than just install the project files into your Python site-packages dir (and optionally to bin/ or Scripts/, depending on your OS), but the code could do anything, really.
Always take a look over the setup.py of a project you download; you should find nothing more than a setup() call and some imports and perhaps some platform-detection to alter the settings passed to the setup() function.
If your AV solution only flagged access to lib/site-packages and Scripts, you should be fine. Provided the library you installed doesn't carry any malicious trojan, of course. Again, with the code publicly visible on GitHub, you should be able to give it at least a cursory check, right? | 1 | 0 | 0 | Recently, I have been exploring github and went on an install spree using "pip install -r requirements.txt". Today, I came across one that required my antivirus to give permission. Felt suspicious, but I installed it anyway. pip install git+https://github.com/something
Usually, command "python something.py" would execute program and produce results. This particular program would instead run with its own command, even after I deleted the cloned source files.
Could something malicious have gotten into my computer? It wanted access to python/lib/packages and /scripts and I granted it. Would pip uninstall have gotten rid of it safely? | Can a pip install have detrimental effects to your computer? | 0 | 0 | 0 | 302 |
38,875,222 | 2016-08-10T13:44:00.000 | 0 | 0 | 0 | 0 | python,openerp,odoo-8,dashboard | 38,877,250 | 3 | false | 1 | 0 | I fear without changing some base elements of Odoo there is no other solution than duplicating the views and change the user, because the field user is required. | 2 | 0 | 0 | How can I share my customized dashboard for all users, I found that every customized dashboard created is stored on customized views, then to share a dashboard you should duplicate the customized view corresponding to that dashboard, and change the user field.
Is there a better solution ? | Share Odoo Dashboard to all users | 0 | 0 | 0 | 1,484 |
38,875,222 | 2016-08-10T13:44:00.000 | 0 | 0 | 0 | 0 | python,openerp,odoo-8,dashboard | 69,828,393 | 3 | false | 1 | 0 | There is a record rule that prevents other users to see one's dashboard, disabling that record rule will make visible one dashboard to other | 2 | 0 | 0 | How can I share my customized dashboard for all users, I found that every customized dashboard created is stored on customized views, then to share a dashboard you should duplicate the customized view corresponding to that dashboard, and change the user field.
Is there a better solution ? | Share Odoo Dashboard to all users | 0 | 0 | 0 | 1,484 |
38,875,927 | 2016-08-10T14:15:00.000 | 3 | 0 | 0 | 0 | python,django,postgresql,sqlite,django-models | 38,875,962 | 1 | true | 1 | 0 | hstore is specific to Postgres. It won't work on sqlite.
If you just want to store JSON, and don't need to search within it, then you can use one of the many third-party JSONField implementations. | 1 | 2 | 0 | I am using sqlite (development stage) database for my django project. I would like to store a dictionary field in a model. In this respect, i would like to use django-hstore field in my model.
My question is, can i use django-hstore dictionary field in my model even though i am using sqlite as my database?
As per my understanding django-hstore can be used along with PostgreSQL (Correct me if i am wrong). Any suggestion in the right direction is highly appreciated. Thank you. | Django hstore field in sqlite | 1.2 | 1 | 0 | 912 |
38,875,969 | 2016-08-10T14:16:00.000 | 0 | 0 | 1 | 0 | python-3.x,gtk3,pygobject | 39,381,937 | 1 | false | 0 | 0 | No, although you can implement that logic yourself in your code.
What you may have read is that you can install a separate icon theme using the Icon Naming Spec names, and applications will fall back to the default hicolor icon theme in the case where your icon theme is missing an icon? | 1 | 0 | 0 | If my memory is not deceiving me, I read somewhere that you can use the system icons (Icon Naming) as second option if the "primary" is not available. In other words, we can choose an icon that is in a directory any and if it is not available a system icon (Icon Naming Specification) is used.
This is possible or am I mistaken? | Icon Naming Specification as Second Option | 0 | 0 | 0 | 34 |
38,876,441 | 2016-08-10T14:36:00.000 | 1 | 0 | 1 | 0 | python,json,string,list,pickle | 38,876,740 | 1 | false | 0 | 0 | If the first script just writes the file once and then at some point the second script has to read it, I would use csv (or just plain text and the elements separated by a coma).
If the second script has to periodically read the strings that the first writes, I would use a socket to send them to the second script.
I would like to know which file type can best satisfy this purpose (e.g. pickle, json, plain text, csv,..)? | Best way to save/load list of strings and the file will be manipulated by two Python scripts at the same time | 0.197375 | 0 | 0 | 73 |
38,876,721 | 2016-08-10T14:47:00.000 | 9 | 0 | 1 | 0 | python,flask | 38,876,910 | 2 | false | 1 | 0 | How many requests will my application be able to handle concurrently with this statement?
This depends drastically on your application. Each new request will have a thread launched- it depends on how many threads your machine can handle. I don't see an option to limit the number of threads (like uwsgi offers in a production deployment).
What are the downsides to using this? If i'm not expecting more than a few requests concurrently, can I just continue to use this?
Switching from a single thread to multi-threaded can lead to concurrency bugs... if you use this be careful about how you handle global objects (see the g object in the documentation!) and state. | 1 | 81 | 0 | What exactly does passing threaded = True to app.run() do?
My application processes input from the user, and takes a bit of time to do so. During this time, the application is unable to handle other requests. I have tested my application with threaded=True and it allows me to handle multiple requests concurrently. | Handle Flask requests concurrently with threaded=True | 1 | 0 | 0 | 80,066 |
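A small self-contained sketch of the behaviour being discussed: with threaded=True a slow request no longer blocks other clients; the 5-second sleep simply stands in for the long-running processing mentioned in the question.

```python
# Sketch: demonstrate that threaded=True keeps other endpoints responsive
# while one request is busy.
import time
from flask import Flask

app = Flask(__name__)

@app.route("/slow")
def slow():
    time.sleep(5)          # stand-in for the real processing
    return "done"

@app.route("/fast")
def fast():
    return "still responsive"

if __name__ == "__main__":
    app.run(threaded=True)
```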
38,877,514 | 2016-08-10T15:19:00.000 | 2 | 1 | 0 | 0 | python,vlc,libvlc | 38,877,998 | 1 | true | 0 | 0 | In an MPEG stream, there is no such thing as "songs". It's just an audio stream. Some radio stations do change metadata in between, so you might be able to check whether the stream title changes or something. But that's purely heuristic.
I guess the notification you see is also triggered by the metadata change. | 1 | 0 | 0 | I'm new to using programming vlc, I'm using python specifically python-vlc to play a internet radio station.
I have it playing the station but can't get the current track that is playing.
When I get the audio track info it returns Track 1 all the time.
Anyways, I am looking for a way to get the song change event. It seems that it could be possible. Because vlc title bar shows the current playing song and windows pops up a notification of the new playing song.
I would prefer to get the change event with the song so that I don't have to poll to check to see if the name change.
Any help would be appreciated. | VideoLan song change event for radio stream | 1.2 | 0 | 0 | 398 |
38,879,399 | 2016-08-10T16:56:00.000 | 0 | 0 | 1 | 0 | python,visual-studio | 51,743,400 | 1 | false | 0 | 0 | Visual Studio is displaying the contents of stdout and stderr. When each thread and finally the entire program exits, it'll show that they exited and the code they returned when they exited.
Your program doesn't print anything to stdout or stderr, which is why no output appears before the program exits.
Your problem with Chrome could be caused by running 32-bit Visual Studio or 32-bit Python on a 64-bit machine, causing it not to find the 32-bit version of Chrome in the 32-bit folder (because the 32-bit folder is just ordinary Program Files as far as a 32-bit program is concerned). | 1 | 0 | 0 | I am using visual studio, when I run this code below I am getting this message and the program did not run correctly:
The thread 'MainThread' (0x339c) has exited with code 0 (0x0).
The program '[10996] python.exe' has exited with code 0 (0x0).
from selenium import webdriver
path = chrome_path = r'C:\Program Files (x86)\Google\Chrome\Application/chromedriver'
driver = webdriver.Chrome(path)
driver.get('https://google.com/') | The thread 'MainThread' (0x339c) has exited with code 0 (0x0) | 0 | 0 | 1 | 1,551 |
38,880,555 | 2016-08-10T18:04:00.000 | 2 | 1 | 0 | 1 | python,email,google-app-engine,cron | 38,884,139 | 2 | true | 1 | 0 | You can easily accomplish what you need with Task API. When you create a task, you can set an ETA parameter (when to execute). ETA time can be up to 30 days into the future.
If 30 days is not enough, you can store a "send_email" entity in the Datastore, and set one of the properties to the date/time when this email should be sent. Then you create a cron job that runs once a month (week). This cron job will retrieve all "send_email" entities that need to be send the next month (week), and create tasks for them, setting ETA to the exact date/time when they should be executed. | 1 | 0 | 0 | I want to be able to schedule an e-mail or more of them to be sent on a specific date, preferably using GAE Mail API if possible (so far I haven't found the solution).
Would using Cron be an acceptable workaround and if so, would I even be able to create a Cron task with Python? The dates are various with no specific pattern so I can't use the same task over and over again.
Any suggestions how to solve this problem? All help appreciated | Is there a way to schedule sending an e-mail through Google App Engine Mail API (Python)? | 1.2 | 0 | 0 | 197 |
38,882,845 | 2016-08-10T20:22:00.000 | 0 | 0 | 1 | 1 | python,python-2.7,python-3.x,ubuntu,anaconda | 46,602,056 | 2 | false | 0 | 0 | Use anaconda version Anaconda3-4.2.0-Linux-x86_64.sh from the anaconda installer archive.This comes with python 3.5. This worked for me. | 1 | 1 | 0 | Anaconda for python 3.5 and python 2.7 seems to install just as a drop in folder inside my home folder on Ubuntu. Is there an installed version of Anaconda for Ubuntu 16? I'm not sure how to ask this but do I need python 3.5 that comes by default if I am also using Anaconda 3.5?
It seems like the best solution is docker these days. I mean I understand virtualenv and virtualenvwrapper. However, sometimes I try to indicate in my .bashrc that I want to use python 3.5 and yet I'll use the command mkvirtualenv and it will start installing the python 2.7 versions of python.
Should I choose either Anaconda or the version of python installed with my OS from python.org or is there an easy way to manage many different versions of Python?
Thanks,
Bruce | How to get Python 3.5 and Anaconda 3.5 running on ubuntu 16.04? | 0 | 0 | 0 | 3,184 |
38,885,944 | 2016-08-11T01:30:00.000 | 0 | 0 | 0 | 1 | python,logging,windows-ce,data-analysis | 38,886,144 | 2 | false | 0 | 0 | There is no input data at all to this problem so this answer will be basically pure theory, a little collection of ideas you could consider.
To analyze patterns out of a bunch of logs you could definitely create some graphs displaying relevant data, which could help to narrow down the problem; Python is really very good for these kinds of tasks.
You could also transform/insert the logs into databases; that way you'd be able to query the relevant suspicious events much faster and even compare all your logs at scale.
A simpler approach could be to focus on a single log showing the crash: instead of wasting a lot of effort or resources trying to find some kind of generic pattern, start by reading through one simple log in order to catch suspicious "events" which could produce the crash.
My favourite approach for this type of tricky problem is different from the previous ones: instead of focusing on analyzing or even parsing the logs, I'd just try to reproduce the bug(s) in a deterministic way locally (you don't even need to have the source code). Sometimes it's really difficult to replicate the production environment in your dev environment, but it is definitely time well invested. All the effort you put into this process will help you not only to solve these bugs but also to improve your software much faster. Remember, the more times you're able to iterate, the better.
Another approach could just be coding a little script which would allow you to replay the logs that crashed; not sure if that'll be easy in your environment though. Usually this strategy works quite well with production software using web services, where there will be a lot of request/response pairs to replay.
In any case, without seeing the type of data from your logs I can't be more specific nor giving much more concrete details. | 1 | 0 | 0 | My company has slightly more than 300 vehicle based windows CE 5.0 mobile devices that all share the same software and usage model of Direct Store Delivery during the day then doing a Tcom at the home base every night. There is an unknown event(s) that results in the device freaking out and rebooting itself in the middle of the day. Frequency of this issue is ~10 times per week across the fleet of computers that all reboot daily, 6 days a week. The math is 300*6=1800 boots per week (at least) 10/1800= 0.5%. I realize that number is very low, but it is more than my boss wants to have.
My challenge, is to find a way to scan through several thousand logfille.txt files and try to find some sort of pattern. I KNOW there is a pattern here somewhere. I’ve got a couple ideas of where to start, but I wanted to throw this out to the community and see what suggestions you all might have.
A bit of background on this issue. The application starts a new log file at each boot. In an orderly (control) log file, you see the app startup, do its thing all day, and then start a shutdown process in a somewhat orderly fashion 8-10 hours later. In a problem log file, you see the device startup and then the log ends without any shutdown sequence at all in a time less than 8 hours. It then starts a new log file which shares the same date as the logfile1.old that it made in the rename process. The application that we have was home grown by windows developers that are no longer with the company. Even better, they don’t currently know who has the source at the moment.
I’m aware of the various CE tools that can be used to detect memory leaks (DevHealth, retail messages, etc..) and we are investigating that route as well, however I’m convinced that there is a pattern to be found, that I’m just not smart enough to find. There has to be a way to do this using Perl or Python that I’m just not seeing. Here are two ideas I have.
Idea 1 – Look for trends in word usage.
Create an array of every unique word used in the entire log file and output a count of each word. Once I had a count of the words that were being used, I could run some stats on them and look for the non-normal events. Perhaps the word “purple” is being used 500 times in a 1000 line log file ( there might be some math there?) on a control and only 4 times on a 500 line problem log? Perhaps there is a unique word that is only seen in the problem files. Maybe I could get a reverse “word cloud”?
Idea 2 – categorize lines into entry-type and then look for trends in the sequence of type of entry type?
The logfiles already have a predictable schema that looks like this = Level|date|time|system|source|message
I’m 99% sure there is a visible pattern here that I just can’t find. All of the logs got turned up to “super duper verbose” so there is a boatload of fluff (25 logs p/sec , 40k lines per file) that makes this even more challenging. If there isn’t a unique word, then this has almost got to be true. How do I do this?
Item 3 – Hire a windows CE platform developer
Yes, we are going down that path as well, but I KNOW there is a pattern I’m missing. They will use the tools that I don’t have) or make the tools that we need to figure out what’s up. I suspect that there might be a memory leak, radio event or other event that platform tools I’m sure will show.
Item 4 – Something I’m not even thinking of that you have used.
There have got to be tools out there that do this that aren’t as prestigious as a well-executed python script, and I’m willing to go down that path, I just don’t know what those tools are.
Oh yeah, I can’t post log files to the web, so don’t ask. The users are promising to report trends when they see them, but I’m not exactly hopeful on that front. All I need to find is either a pattern in the logs, or steps to duplicate
So there you have it. What tools or techniques can I use to even start on this? | Data analysis of log files – How to find a pattern? | 0 | 0 | 0 | 2,344 |
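As a concrete starting point for Idea 1 and Idea 2, a short sketch that parses each line against the Level|date|time|system|source|message schema and counts sources and message words, so that a control log and a crash log can be compared; the file names are placeholders.

```python
# Sketch: count log sources and message words so a "control" log can be
# compared against a "crash" log. File names are placeholders.
from collections import Counter

def summarize(path):
    sources = Counter()
    words = Counter()
    with open(path) as handle:
        for line in handle:
            parts = line.rstrip("\n").split("|")
            if len(parts) < 6:
                continue  # skip lines that don't match the expected schema
            level, date, time_of_day, system, source, message = parts[:6]
            sources[source] += 1
            words.update(message.lower().split())
    return sources, words

control_sources, control_words = summarize("control_logfile.txt")
crash_sources, crash_words = summarize("crash_logfile.txt")
print(crash_sources.most_common(10))   # which sources dominate the bad runs?
print(crash_words.most_common(20))     # which message words dominate them?
```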
38,887,061 | 2016-08-11T04:02:00.000 | 0 | 1 | 0 | 0 | python,amazon-web-services,amazon-dynamodb,aws-lambda,alexa-skills-kit | 38,895,935 | 4 | false | 1 | 0 | My guess would be that you missed a step on setup. There's one where you have to set the "event source". IF you don't do that, I think you get that message.
But the debug options are limited. I wrote EchoSim (the original one on GitHub) before the service simulator was written and, although it is a bit out of date, it does a better job of giving diagnostics.
Lacking debug options, the best is to do what you've done. Partition and re-test. Do static replies until you can work out where the problem is. | 3 | 2 | 0 | I'm starting with ASK development. I'm a little confused by some behavior and I would like to know how to debug errors from the "service simulator" console. How can I get more information on the The remote endpoint could not be called, or the response it returned was invalid. errors?
Here's my situation:
I have a skill and three Lambda functions (ARN:A, ARN:B, ARN:C). If I set the skill's endpoint to ARN:A and try to test it from the skill's service simulator, I get an error response: The remote endpoint could not be called, or the response it returned was invalid.
I copy the lambda request, I head to the lambda console for ARN:A, I set the test even, paste the request from the service simulator, I test it and I get a perfectly fine ASK response. Then I head to the lambda console for ARN:B and I make a dummy handler that returns exactly the same response that ARN:A gave me from the console (literally copy and paste). I set my skill's endpoint to ARN:B, test it using the service simulator and I get the anticipated response (therefore, the response is well formatted) albeit static. I head to the lambda console again and copy and paste the code from ARN:A into a new ARN:C. Set the skill's endpoint to ARN:C and it works perfectly fine. Problem with ARN:C is that it doesn't have the proper permissions to persist data into DynamoDB (I'm still getting familiar with the system, not sure wether I can share an IAM role between different lambdas, I believe not).
How can I figure out what's going on with ARN:A? Is that logged somewhere? I can't find any entry in cloudwatch/logs related to this particular lambda or for the skill.
Not sure if relevant, I'm using python for my lambda runtime, the code is (for now) inline on the web editor and I'm using boto3 for persisting to DynamoDB. | Troubleshooting Amazon's Alexa Skill Kit (ASK) Lambda interaction | 0 | 0 | 1 | 3,387 |
38,887,061 | 2016-08-11T04:02:00.000 | 3 | 1 | 0 | 0 | python,amazon-web-services,amazon-dynamodb,aws-lambda,alexa-skills-kit | 38,902,127 | 4 | true | 1 | 0 | tl;dr: The remote endpoint could not be called, or the response it returned was invalid. also means there may have been a timeout waiting for the endpoint.
I was able to narrow it down to a timeout.
Seems like the Alexa service simulator (and the Alexa itself) is less tolerant to long responses than the lambda testing console. During development I had increased the timeout of ARN:1 to 30 seconds (whereas I believe the default is 3 seconds). The DynamoDB table used by ARN:1 has more data and it takes slightly longer to process than ARN:3 which has an almost empty table. As soon as I commented out some of the data loading stuff it was running slightly faster and the Alexa service simulator was working again. I can't find the time budget documented anywhere, I'm guessing 3 seconds? I most likely need to move to another backend, DynamoDB+Python on lambda is too slow for very trivial requests. | 3 | 2 | 0 | I'm starting with ASK development. I'm a little confused by some behavior and I would like to know how to debug errors from the "service simulator" console. How can I get more information on the The remote endpoint could not be called, or the response it returned was invalid. errors?
Here's my situation:
I have a skill and three Lambda functions (ARN:A, ARN:B, ARN:C). If I set the skill's endpoint to ARN:A and try to test it from the skill's service simulator, I get an error response: The remote endpoint could not be called, or the response it returned was invalid.
I copy the lambda request, I head to the lambda console for ARN:A, I set the test even, paste the request from the service simulator, I test it and I get a perfectly fine ASK response. Then I head to the lambda console for ARN:B and I make a dummy handler that returns exactly the same response that ARN:A gave me from the console (literally copy and paste). I set my skill's endpoint to ARN:B, test it using the service simulator and I get the anticipated response (therefore, the response is well formatted) albeit static. I head to the lambda console again and copy and paste the code from ARN:A into a new ARN:C. Set the skill's endpoint to ARN:C and it works perfectly fine. Problem with ARN:C is that it doesn't have the proper permissions to persist data into DynamoDB (I'm still getting familiar with the system, not sure wether I can share an IAM role between different lambdas, I believe not).
How can I figure out what's going on with ARN:A? Is that logged somewhere? I can't find any entry in cloudwatch/logs related to this particular lambda or for the skill.
Not sure if relevant, I'm using python for my lambda runtime, the code is (for now) inline on the web editor and I'm using boto3 for persisting to DynamoDB. | Troubleshooting Amazon's Alexa Skill Kit (ASK) Lambda interaction | 1.2 | 0 | 1 | 3,387 |
38,887,061 | 2016-08-11T04:02:00.000 | 1 | 1 | 0 | 0 | python,amazon-web-services,amazon-dynamodb,aws-lambda,alexa-skills-kit | 39,245,816 | 4 | false | 1 | 0 | I think the problem you having for ARN:1 is you probably didn't set a trigger to alexa skill in your lambda function.
Or it can be the alexa session timeout which is by default set to 8 seconds. | 3 | 2 | 0 | I'm starting with ASK development. I'm a little confused by some behavior and I would like to know how to debug errors from the "service simulator" console. How can I get more information on the The remote endpoint could not be called, or the response it returned was invalid. errors?
Here's my situation:
I have a skill and three Lambda functions (ARN:A, ARN:B, ARN:C). If I set the skill's endpoint to ARN:A and try to test it from the skill's service simulator, I get an error response: The remote endpoint could not be called, or the response it returned was invalid.
I copy the lambda request, I head to the lambda console for ARN:A, I set the test even, paste the request from the service simulator, I test it and I get a perfectly fine ASK response. Then I head to the lambda console for ARN:B and I make a dummy handler that returns exactly the same response that ARN:A gave me from the console (literally copy and paste). I set my skill's endpoint to ARN:B, test it using the service simulator and I get the anticipated response (therefore, the response is well formatted) albeit static. I head to the lambda console again and copy and paste the code from ARN:A into a new ARN:C. Set the skill's endpoint to ARN:C and it works perfectly fine. Problem with ARN:C is that it doesn't have the proper permissions to persist data into DynamoDB (I'm still getting familiar with the system, not sure wether I can share an IAM role between different lambdas, I believe not).
How can I figure out what's going on with ARN:A? Is that logged somewhere? I can't find any entry in cloudwatch/logs related to this particular lambda or for the skill.
Not sure if relevant, I'm using python for my lambda runtime, the code is (for now) inline on the web editor and I'm using boto3 for persisting to DynamoDB. | Troubleshooting Amazon's Alexa Skill Kit (ASK) Lambda interaction | 0.049958 | 0 | 1 | 3,387 |
38,890,319 | 2016-08-11T07:47:00.000 | 0 | 0 | 0 | 0 | python,ibm-cloud-infrastructure | 38,898,571 | 1 | false | 0 | 0 | We don't have any report about this kind of issue, nor any indication that the server is busy on SoftLayer's side. But regarding your issue, it looks like a network problem: it seems that there is something happening with your proxy connection.
First we need to rule out the proxy as the reason for this issue. It would be very useful if you can verify that the issue is reproducible without using a proxy on your side; let me know if you could test it.
If you could check this without a proxy, I recommend submitting a ticket for further investigation of this issue. | 1 | 0 | 0 | It is weird to get an exception at about 7:30am (UTC+8) every day when calling the softlayer-api.
TransportError: TransportError(0): HTTPSConnectionPool(host='api.softlayer.com', port=443): Max retries exceeded with url: /xmlrpc/v3.1/SoftLayer_Product_Package (Caused by ProxyEr
ror('Cannot connect to proxy.', error('Tunnel connection failed: 503 Service Unavailable',)))
And I use a proxy to forward https requests to SoftLayer's server. At first I thought it was caused by the proxy, but when I looked into the log, it showed every request had been forwarded successfully. So maybe it is caused by the server. Does the server do something so demanding at that moment every day that it fails to serve? | TransportError happened when calling softlayer-api | 0 | 0 | 1 | 106
38,891,879 | 2016-08-11T09:04:00.000 | 0 | 0 | 0 | 0 | python,django,nginx,process,uwsgi | 39,093,474 | 2 | true | 1 | 0 | I figured out a workaround (Don't know if it will qualify as an answer).
I wrote the background process as a job in the database and used a cronjob to check whether any job is pending; if there is, the cron starts a background process for that job and exits.
The cron will run every minute so that there is not much delay. This helped in improved performance as it helped me execute heavy tasks like this to run separate from main application. | 1 | 4 | 0 | I am starting a process using python's multiprocessing module. The process is invoked by a post request sent in a django project. When I use development server (python manage.py runserver), the post request takes no time to start the process and finishes immediately.
I deployed the project on production using nginx and uwsgi.
Now when i send the same post request, it takes around 5-7 minutes to complete that request. It only happens with those post requests where I am starting a process. Other post requests work fine.
What could be reason for this delay? And How can I solve this? | python process takes time to start in django project running on nginx and uwsgi | 1.2 | 0 | 0 | 739 |
38,899,740 | 2016-08-11T14:51:00.000 | -1 | 0 | 1 | 0 | python,special-characters | 39,754,124 | 1 | true | 0 | 0 | CMD can not display those characters you must convert them | 1 | 1 | 0 | I have a python program that returns a large amount of info from an API. The info contains special characters like the TM sign and the ★ symbol. When the code is run in the IDLE interface it works exactly as it should returning all information however when the program is run in the CMD-Like interface it crashes because it can not display the symbols. I am using PRINT to output the information. Is there anyway i can make it display the characters in the CMD part? | How to display special characters in the CMD part of python | 1.2 | 0 | 0 | 186 |
38,901,974 | 2016-08-11T16:44:00.000 | 1 | 0 | 1 | 0 | python,bamboo,suse,python-jira | 38,902,162 | 2 | false | 1 | 0 | That is very hard to understand what you problem is. From what I understood you are saying that when you run your module as standalone file, everything works, but when you imoprt it you get an error. Here are some steps towards solving the problem.
Make sure that your script is in a Python package. In order to do that, verify that there is a (usually empty) __init__.py file in the same directory where the package is located.
Make sure that your script does not import something else in the block that gets executed only when you run the file as a script (if __name__ == "__main__").
Make sure that the python path includes your package and visible to the script (you can do this by running print os.environ['PYTHONPATH'].split(os.pathsep) | 2 | 1 | 0 | Hi I'm running a python script that transitions tickets from "pending build" to "in test" in Jira. I've ran it on my local machine (Mac OS X) and it works perfectly but when I try to include it as a build task in my bamboo deployment, I get the error
"from jira import JIRA
ImportError: No module named jira"
I'm calling the python file from a script task like the following "python myFile.py" and then I supply the location to the myFile.py in the working subdirectory field. I don't think that is a problem because the error shows that it is finding my script fine. I've checked multiple times and the jira package is in site-packages and is in the path. I installed using pip and am running python 2.7.8. The OS is SuSE on our server | ImportError: No module named jira | 0.099668 | 0 | 0 | 7,884 |
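A quick way to check points 1 and 3 of the first answer from inside the Bamboo-run script itself, so you can see which interpreter and search path the agent is really using:

```python
# Temporary diagnostic lines for the top of myFile.py: print which Python the
# Bamboo agent runs and where it looks for packages.
import sys

print(sys.executable)   # should be the interpreter that has the jira package installed
print(sys.version)
for entry in sys.path:
    print(entry)
```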
38,901,974 | 2016-08-11T16:44:00.000 | 0 | 0 | 1 | 0 | python,bamboo,suse,python-jira | 48,773,686 | 2 | false | 1 | 0 | Confirm that you don't have another file or directory that shares the same name as the module you are trying to import. | 2 | 1 | 0 | Hi I'm running a python script that transitions tickets from "pending build" to "in test" in Jira. I've ran it on my local machine (Mac OS X) and it works perfectly but when I try to include it as a build task in my bamboo deployment, I get the error
"from jira import JIRA
ImportError: No module named jira"
I'm calling the python file from a script task like the following "python myFile.py" and then I supply the location to the myFile.py in the working subdirectory field. I don't think that is a problem because the error shows that it is finding my script fine. I've checked multiple times and the jira package is in site-packages and is in the path. I installed using pip and am running python 2.7.8. The OS is SuSE on our server | ImportError: No module named jira | 0 | 0 | 0 | 7,884 |
38,903,061 | 2016-08-11T17:53:00.000 | 1 | 0 | 0 | 0 | python,r,lda,topic-modeling,text-analysis | 38,916,691 | 1 | true | 0 | 0 | First, your question kind of assumes that topics identified by LDA correspond to real semantic topics - I'd be very careful about that assumption and take a look at the documents and words assigned to topics you want to interpret that way, as LDA often have random extra words assigned, can merge two or more actual topics into one (especially with few topics overall) and may not be meaningful at all ("junk" topics).
In answer to your questions then: the idea of a "distinct number of topics" isn't clear at all. Most of the work I've seen uses a simple threshold to decide if a document's topic proportion is "significant".
A more principled way is to look at the proportion of words assigned to that topic that appear in the document - if it's "significantly" higher than average, the topic is significant in the document, but again, this involves a somewhat arbitrary threshold. I don't think anything can beat close reading of some examples to make meaningful choices here.
I should note that, depending on how you set the document-topic prior (usually beta), you may not have each document focussed on just a few topics (as seems to be your case), but a much more even mix. In this case "distinct number of topics" starts to be less meaningful.
P.S. Using word lists that are meaningful in your application is not a bad way to identify candidate topics of interest. Especially useful if you have many topics in your model (:
P.P.S.: I hope you have a reasonable number of documents (at least some thousands), as LDA tends to be less meaningful with less, capturing chance word co-occurences rather than meaningful ones.
P.P.P.S.: I'd go for a larger number of topics with parameter optimisation (as provided by the Mallet LDA implementation) - this effectively chooses a reasonable number of topics for your model, with very few words assigned to the "extra" topics. | 1 | 1 | 1 | As far as I know, I need to fix the number of topics for LDA modeling in Python/ R. However, say I set topic=10 while the results show that, for a document, nine topics are all about 'health' and the distinct number of topics for this document is 2 indeed. How can I spot it without examining the key words of each topic and manually count the real distinct topics?
P.S. I googled and learned that there are Vocabulary Word Lists (Word Banks) by Theme, and I could pair each topic with a theme according to the word lists. If several topics fall into the same theme, then I can combine them into one distinct topic. I guess it's an approach worth trying and I'm looking for smarter ideas, thanks. | Find the Number of Distinct Topics After LDA in Python/ R | 1.2 | 0 | 0 | 618 |
38,906,212 | 2016-08-11T21:09:00.000 | 0 | 0 | 1 | 0 | python | 38,906,316 | 2 | false | 0 | 0 | %2 is checking the modulus of the number when divided by 2; in other words, it's checking whether the remainder is 1 or 0, while &1 is a bitwise operation that checks the value of the last bit. The bitwise operator IS slightly faster, but the difference is negligible.
IMO the reason I think %2 is used more is because that makes more sense to the average python programmer that hasn't completely studied bits and operators yet, so to explain %2 vs &1, the %2 is more user friendly. | 1 | 0 | 0 | I have noticed that the most common way to determine whether a number is even in python is x%2. Wouldn't x&1 be faster? What disadvantages does it have? | Why is %2 used rather than &1 to determine parity | 0 | 0 | 0 | 631 |
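The speed claim in the answer above is easy to check with the standard library; the literal below is an arbitrary choice:

```python
# Quick measurement: both parity checks are fast, and the gap is tiny.
import timeit

print(timeit.timeit("x % 2", setup="x = 123456789"))
print(timeit.timeit("x & 1", setup="x = 123456789"))
```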
38,906,844 | 2016-08-11T22:06:00.000 | 0 | 1 | 1 | 1 | python,python-2.7,bamboo | 40,619,044 | 2 | false | 0 | 0 | I run a lot of python tasks from bamboo, so it is possible. Using the Script task is generally painless...
You should be able to use your script task to run the commands directly and have stdout written to the logs. Since this is true, you can run:
'which python' -- Outputs the path of the python that is being run.
'pip list' -- Outputs a list of the modules that are installed with pip.
You should verify that the output from the above commands matches the output when run from the server. I'm guessing they won't match up, and once that is addressed, everything will work fine.
If not, comment back and we can look at a few other things.
For the future, there are a handful of different ways you can package things with python which could assist with this problem (e.g. automatically installing missing modules, etc). | 1 | 2 | 0 | I'm trying to run a python script from bamboo. I created a script task and wrote inline "python myFile.py". Should I be listing the full path for python?
I changed the working directory to the location of myFile.py so that is not a problem. Is there anything else I need to do within the configuration plan to properly run this script? It isn't running but I know it should be running because the script works fine from terminal on my local machine. Thanks | Run a python script from bamboo | 0 | 0 | 0 | 5,082 |
38,921,975 | 2016-08-12T15:51:00.000 | 1 | 0 | 1 | 0 | python,arrays,numpy,dictionary,red-black-tree | 38,924,162 | 2 | false | 0 | 0 | The most basic form of a dictionary is a structure called a HashMap. Implementing a hashmap relies on turning your key into a value that can be quickly looked up. A pathological example would be using ints as keys: The value for key 1 would go in array[1], the value for key 2 would go in array[2], the Hash Function is simply the identity function. You can easily implement that using a numpy array.
If you want to use other types, it's just a case of writing a good hash function to turn those keys into unique indexes into your array. For example, if you know you've got a (int, int) tuple, and the first value will never be more than 100, you can do 100*key[1] + key[0].
The implementation of your hash function is what will make or break your dictionary replacement. | 1 | 3 | 1 | I need to write a huge amount number-number pairs into a NumPy array. Since a lot of these pairs have a second value of 0, I thought of making something akin to a dictionary. The problem is that I've read through the NumPy documentation on structured arrays and it seems like dictionaries built like those on the page can only use strings as keys.
Other than that, I need insertion and searching to have log(N) complexity. I thought of making my own Red-black tree structure using a regular NumPy array as storage, but I'm fairly certain there's an easier way to go about this.
Language is Python 2.7.12. | How can I implement a dictionary with a NumPy array? | 0.099668 | 0 | 0 | 4,918 |
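A tiny sketch of the hashing idea from the answer, assuming (int, int) keys whose first element stays below 100 and whose second stays below an assumed bound of 1000:

```python
# Sketch: a dense NumPy array used as a lookup table for (int, int) keys.
# The bounds (100 and 1000) are assumptions; missing pairs read back as 0.
import numpy as np

table = np.zeros(100 * 1000)

def index(key):
    return 100 * key[1] + key[0]   # the hash function from the answer

def put(key, value):
    table[index(key)] = value

def get(key):
    return table[index(key)]

put((3, 7), 42.0)
print(get((3, 7)))   # 42.0
print(get((4, 7)))   # 0.0 -- never stored, so it falls back to the default
```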
38,925,638 | 2016-08-12T19:59:00.000 | 0 | 0 | 0 | 0 | python,django-upgrade | 38,926,190 | 2 | false | 1 | 0 | You might have Django installed twice. Running "pip uninstall django" twice, and then reinstalling new version again should help. | 2 | 2 | 0 | After upgrading my django from 1.8 to 1.10, when i start a project(django-admin startproject lwc) there is an error:
CommandError: C:\Python34\binesh\lwc\lwc\settings.py already exists, overlaying
a project or app into an existing directory won't replace conflicting files.
It creates a folder for lwc with manage.py, another lwc folder inside it, and a settings.py in the second lwc folder.
What is wrong with it? | command error after start project in django | 0 | 0 | 0 | 961
38,925,638 | 2016-08-12T19:59:00.000 | 1 | 0 | 0 | 0 | python,django-upgrade | 39,836,064 | 2 | false | 1 | 0 | Uninstall django, delete your python/Lib/site-packages/django directory completely, then reinstall.
The installation of the new version, even though it claims to uninstall the old version, leaves old files hanging around, and they are quietly brought into the new version in various ways (e.g., manage.py can bring in syncdb if a sync.py is left over in the django directories). | 2 | 2 | 0 | After upgrading my django from 1.8 to 1.10, when i start a project(django-admin startproject lwc) there is an error:
CommandError: C:\Python34\binesh\lwc\lwc\settings.py already exists, overlaying
a project or app into an existing directory won't replace conflicting files.
It creates a folder for lwc with manage.py, another lwc folder inside it, and a settings.py in the second lwc folder.
What is wrong with it? | command error after start project in django | 0.099668 | 0 | 0 | 961
38,926,579 | 2016-08-12T21:16:00.000 | 7 | 0 | 1 | 0 | python,list | 38,926,607 | 2 | true | 0 | 0 | list(thing) doesn't mean "put thing in a list". It means "put thing's elements in a list". If you want to put a thing in a list, that's [thing]. | 1 | 0 | 0 | I have a case where a user passes a function a single floating-point value. While trying to put that value into a list for easier data handling later, I've discovered that I cannot make a list using list(some_float), but [some_float] does work. Python prints an error stating that "'float' object is not iterable."
My question for you wonderful people is why [] works, but list() does not. My understanding is that they produce identical results even if they are executed differently. I am using Python 3.4.3. | Python -- list(some_float) fails but [some_float] works? | 1.2 | 0 | 0 | 53 |
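A short illustration of the difference, assuming Python 3:

x = 3.14
print([x])          # [3.14]    -- wraps the single float in a new list
print(list("ab"))   # ['a', 'b'] -- iterates over the argument's elements
# list(x) raises: TypeError: 'float' object is not iterable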
38,926,917 | 2016-08-12T21:48:00.000 | 0 | 0 | 1 | 1 | python,python-2.7 | 38,926,942 | 1 | false | 0 | 0 | Basically, the only drawback is that it's potentially slower. The buffering on stdin allows your program to run ahead of the physical I/O which is slow.
However, if you're sending it to less, you're operating at human speeds anyway -- it's not going to make a difference. | 1 | 1 | 0 | I have a script whose output is piped to less, and I would like the script to print it's statements into less as they come, rather than all at once.
I found that if I flush stdout (via sys.stdout.flush()) after each print, the line is displayed in less when flushed (obviously).
My question is: Are there any drawbacks to doing this? My script has hundreds of thousands of lines being printed, would flushing after each line cause problems?
My impression is yes, because you use up extra resources for displaying each time you flush, and you completely circumvent the idea of buffered output. | Implications of flushing stdout after each print | 0 | 0 | 0 | 373
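A minimal sketch of the two ways to push each line out immediately; the loop body is a stand-in for whatever the real script prints.

import sys

for i in range(5):
    print("line %d" % i)
    sys.stdout.flush()            # push the line to the pipe right away

# On Python 3.3+ the same effect is available in one call:
#     print("line", i, flush=True)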
38,927,339 | 2016-08-12T22:38:00.000 | 2 | 0 | 0 | 0 | python-3.x,gtk3,pygobject | 38,931,286 | 1 | true | 0 | 1 | Gio.Icon is just an interface. It is implemented by Gio.ThemedIcon, Gio.FileIcon, Gio.BytesIcon, etc. So you would would use those. | 1 | 1 | 0 | I need to show an icon on Gio.MenuItem with set_icon() method. But set_icon() expects to receive a GIcon object.
How to create a GIcon object? | How to instantiate GIcon | 1.2 | 0 | 0 | 90 |
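A small PyGObject sketch of the two most common GIcon implementations mentioned in the answer; the icon name, file path and action name are placeholders.

from gi.repository import Gio

icon = Gio.ThemedIcon.new("document-open")              # looked up in the icon theme
# icon = Gio.FileIcon.new(Gio.File.new_for_path("/path/to/icon.png"))  # loaded from disk

item = Gio.MenuItem.new("Open", "app.open")             # placeholder label/action
item.set_icon(icon)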
38,930,971 | 2016-08-13T08:49:00.000 | 1 | 0 | 1 | 0 | python,multithreading,queue,locking | 38,932,248 | 1 | true | 0 | 0 | The Queue type has blocking calls to get() and put() by default. So when you make a get() call, it will block the call and wait for an item to be put in the queue.
The put() call will also by default block if the queue is full and wait for a slot to be free before it can put the item.
This default behaviour might be altered by using block=False or passing a positive integer to timeout. If you disable blocking or set a timeout, the call will try to execute normally and if it fails (within the timeout), it will raise certain exceptions.
Disabling blocking will fail instantly, whereas setting a timeout value will fail after that many seconds.
Since the default nature of the calls are blocking, you should not run into any issues. Even if you disable blocking, still there will be exceptions which you can handle and properly control the flow of the program. So there should not be an issue from simultaneously accessing the queue as it is "synchronized". | 1 | 0 | 0 | I currently have a program that starts a thread with a FIFO queue. The queue is constantly putting data into the queue, and there is a method to get the items from the queue. The method acquires a lock and releases it once it grabs the items.
My question is, will I run into any problems in the future putting and getting from the queue simultaneously? Would I need to add a lock when putting data into the queue?
Thanks. | Can you get and put into a Queue at the same time? | 1.2 | 0 | 0 | 937 |
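A minimal producer/consumer sketch showing the blocking behaviour described above; it uses the standard library queue module (named Queue on Python 2).

import queue
import threading

q = queue.Queue()

def producer():
    for i in range(5):
        q.put(i)              # blocks only if a maxsize was set and reached
    q.put(None)               # sentinel telling the consumer to stop

def consumer():
    while True:
        item = q.get()        # blocks until an item is available
        if item is None:
            break
        print("got", item)

threading.Thread(target=producer).start()
consumer()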
38,931,064 | 2016-08-13T09:00:00.000 | 0 | 0 | 0 | 1 | python,sockets,batch-file,portforwarding | 38,932,875 | 2 | false | 0 | 0 | I'm not sure if that's possible, as much as I know, ports aren't actually a thing their just some abstraction convention made by protocols today and supported by your operating system that allows you to have multiple connections per one machine,
now sockets are basically some object provided to you by the operating system that implements some protocol stack and allows you to communicate with other systems, the API provides you some very nice API called the socket API which allows you use it's functionality in order to communicate with other computers, Port forwarding is not an actual thing, it just means that when the operating system of the router when receiving incoming packets that are destined to some port it will drop them if the port is not open, think of your router as some bouncer or doorman, standing in the entrance of a building, the building is your LAN, your apartment is your machine and rooms within your apartment are ports, some package or mail arrives to your doorman under the port X, a port rule means on IP Y and Port X of the router -> forward to IP Z and port A of some computer within the LAN ( provides and implements the NAT/PAT ) so what happens if we'll go back to my analogy is something such as this: doorman receives mail destined to some port, and checks if that port is open, if not it drops the mail if it is it allows it to go to some room within some apartment.. (sounds complex I know apologize) my point is, every router chooses to implement port rules or port blocking a little bit different and there is no standard protocol for doing, socket is some object that allows you program to communicate with others, you could create some server - client with sockets but that means that you'll need to create or program your router, and I'm not sure if that's possible,
what you COULD do is:
every router provides some http client ( web client ) that is used to create and forward ports, maybe if you read about your router you could get access to that client and write some python http script that forwards ports automatically
another point I've forgot is that you need to make sure you're own firewall isn't blocking ports, but there's no need for sockets / python to do so, just manually config it | 2 | 3 | 0 | Purpose:
I'm making a program that will set up a dedicated server (software made by game devs) for a game with minimal effort. One common step in making the server functional is port forwarding by making a port forward rule on a router.
Me and my friends have been port forwarding through conventional means for many years with mixed results. As such I am hoping to build a function that will forward a port on a router when given the internal ip of the router, the internal ip of the current computer,the port and the protocol. I have looked for solutions for similar problems, but I found the solutions difficult to understand since i'm not really familiar with the socket module. I would prefer not to use any programs that are not generally installed on windows since I plan to have this function work on systems other than my own.
Approaches I have explored:
Creating a bat file that issues commands by means of netsh, then running the bat.
Making additions to the settings in a router found under Network -> Network Infrastructure (I do not know how to access these settings programmaticly).
(I'm aware programs such as GameRanger do this)
Using the Socket Module.
If anyone can shed some light how I can accomplish any of the above approaches or give me some insight on how I can approach this problem another way I would greatly appreciate it.
Thank you.
Edit: Purpose | How to make a port forward rule in Python 3 in windows? | 0 | 0 | 1 | 707 |
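If the router happens to have UPnP enabled, the "ask the router to open the port" approach can be scripted without touching its web interface. This is only a sketch: it assumes the third-party miniupnpc bindings are installed and that the router's Internet Gateway Device answers the discovery, neither of which is guaranteed.

import miniupnpc

upnp = miniupnpc.UPnP()
upnp.discoverdelay = 200
upnp.discover()                       # look for UPnP devices on the LAN
upnp.selectigd()                      # pick the Internet Gateway Device

upnp.addportmapping(25565, 'TCP',     # external port and protocol (example values)
                    upnp.lanaddr,     # forward to this machine...
                    25565,            # ...on this internal port
                    'game server', '')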
38,931,064 | 2016-08-13T09:00:00.000 | 0 | 0 | 0 | 1 | python,sockets,batch-file,portforwarding | 38,932,807 | 2 | false | 0 | 0 | You should read first some sort of informations about UPnP (Router Port-Forwarding) and that it's normally disabled.
Dependent of your needs, you could also try a look at ssh reverse tunnels and at ssh at all, as it can solve many problems.
But you will see that working with windows and things like adavanced network things is a bad idea.
At least you should use cygwin.
And when you really interessted in network traffic at all, wireshark should be installed. | 2 | 3 | 0 | Purpose:
I'm making a program that will set up a dedicated server (software made by game devs) for a game with minimal effort. One common step in making the server functional is port forwarding by making a port forward rule on a router.
Me and my friends have been port forwarding through conventional means for many years with mixed results. As such I am hoping to build a function that will forward a port on a router when given the internal ip of the router, the internal ip of the current computer,the port and the protocol. I have looked for solutions for similar problems, but I found the solutions difficult to understand since i'm not really familiar with the socket module. I would prefer not to use any programs that are not generally installed on windows since I plan to have this function work on systems other than my own.
Approaches I have explored:
Creating a bat file that issues commands by means of netsh, then running the bat.
Making additions to the settings in a router found under Network -> Network Infrastructure (I do not know how to access these settings programmaticly).
(I'm aware programs such as GameRanger do this)
Using the Socket Module.
If anyone can shed some light how I can accomplish any of the above approaches or give me some insight on how I can approach this problem another way I would greatly appreciate it.
Thank you.
Edit: Purpose | How to make a port forward rule in Python 3 in windows? | 0 | 0 | 1 | 707 |
38,933,872 | 2016-08-13T14:47:00.000 | 3 | 0 | 1 | 0 | python,algorithm,dynamic-programming | 38,934,300 | 1 | true | 0 | 0 | Lets say cost[i] - best cost to cover first i elements of the roof.
Obviously cost[0] = 0 (we don't need any money to cover 0 tiles).
Lets describe our state as (position, cost).
From state (i,cost[i]) we can get to 4 different potential states:
(i + 1, cost[i] + 3) (when we use tile of length 1 and cost is 3)
(i + 13, cost[i] + 13) (tile length = 13, cost is also 13)
(i + 55, cost[i] + 50) (tile length = 55, cost is 50)
(i + 1, cost[i]) (we ignore current position and don't use any tile here)
Once we change state using one of the above rules we should consider:
position should be <= total Length (55)
if we get to position i with same or bigger cost we don't want to proceed (basically dynamic programming plays role here, if we get to the sub-problem with same or worse result we don't want to proceed).
we can't skip a position (our 4th state transformation) if that position has a hole.
Once we run all this state transformations answer will be at cost[total length (55)] | 1 | 1 | 0 | I'm just learning about dynamic programming, and I've stumbled upon a problem which I am not sure how to formulate in Python:
Given a binary array H of length 55, a 1 indicates a hole in the roof, 0 indicates no hole.
The tiles you can use have length 1, 13 or 55, and the cost to deploy each is 3, 13 and 50, respectively.
For a given array of holes H return the minimum cost such that all the holes are covered.
From what I learned, the first step is to find the base cases, and to reason by induction.
So, here are some base cases I could easily find:
a tile of size 13 is more convenient than 5 tiles of size 1 (cost: 13 vs 15 or more)
a tile of size 55 is more convenient than 4 tiles of size 13 (cost: 50 vs 52 or more)
Initially I thought the first point means that if there are 5 or more holes in 13 contiguous spaces I should always choose the 13-tile. However, I think it depends on the holes that follow.
The second point is even more problematic if you throw 1-tiles into the problem. Consider, e.g., 4 single holes at locations [0, 15, 29, 44]: you're better off with four 1-tiles (1 x 55-tile costs 50, 4 x 13-tiles = 52 or more).
So it looks like I have to evaluate how spaced out the holes are for all the possible combinations of slices in the array.
How can I formulate the above into (even pseudo-) code? | How to find the minimum number of tiles needed to cover holes in a roof | 1.2 | 0 | 0 | 389 |
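Following the transitions in the accepted answer, here is one way the DP could be written in Python; the hole array is a list of 0/1 values, and the "tile must fit inside the roof" rule mirrors the answer's position <= total length constraint.

def min_cost(holes, tiles=((1, 3), (13, 13), (55, 50))):
    n = len(holes)
    INF = float('inf')
    cost = [INF] * (n + 1)
    cost[0] = 0
    for i in range(n):
        if cost[i] == INF:
            continue
        if not holes[i]:                              # 4th transition: leave a hole-free spot uncovered
            cost[i + 1] = min(cost[i + 1], cost[i])
        for length, price in tiles:                   # place a tile starting at position i
            if i + length <= n:
                cost[i + length] = min(cost[i + length], cost[i] + price)
    return cost[n]

holes = [0] * 55
for pos in (0, 15, 29, 44):
    holes[pos] = 1
print(min_cost(holes))    # 12: four 1-tiles beat one 55-tile here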
38,936,584 | 2016-08-13T20:12:00.000 | 1 | 0 | 0 | 0 | python,python-3.x,python-module | 38,937,103 | 1 | false | 0 | 0 | My flow chart looks something like this:
Reading the published documentation (or use help(moduleName) which gives you the same information without an internet connection in a harder to read format). This can be overly verbose if you're only looking for one tidbit of information, in which case I move on to...
Finding tutorials or similar stack overflow posts using specific keywords in your favorite search engine. This is generally the approach you will use 99% of the time.
Just recursively poking around with dir() and __doc__ if you think the answer for what you're looking for is going to be relatively obvious (usually if the module has relatively simple functions such as math that are obvious by the name)
Looking at the source of the module if you really want to see how things works. | 1 | 1 | 0 | I can't seem to find a good explanation of how to use Python modules. Take for example the urllib module. It has commands such as
req = urllib.request.Request().
How would I find out what specific commands, like this one, are in certain Python modules?
For all the examples I've seen of people using specific Python modules, they just know what to type, and how to use them.
Any suggestions will be much appreciated. | How to find good documentation for Python modules | 0.197375 | 0 | 1 | 138 |
38,938,191 | 2016-08-14T00:39:00.000 | 1 | 0 | 0 | 0 | php,python,django | 38,941,344 | 1 | true | 1 | 0 | Django used to do the same back when CGI was the most common way to run dynamic web applications. It would create a new python process on each request, which would load all the files on the fly. But while PHP is optimized for this use-case with a fast startup time, Python, as a general purpose language, isn't, and there were some pretty heavy performance drawbacks. WSGI (and FastCGI before it) solves this performance issue by running the Python code in a persistent background process.
So while WSGI gives a lot of benefits, one of the "drawbacks" is that it only loads code when the process is (re)started, so you have to restart the process for any changes to take effect. In development this is easily solved by using an autoreloader, such as the one in Django's manage.py runserver command.
In production, there are quite a few reasons why you would want to delay the restart until the environment is ready. For example, if you pull in code changes that include a migration to add a database field, the new version of your code wouldn't be able to run before you've ran the migration. In such a case, you don't want the new code to run until you've actually ran all the necessary migrations. | 1 | 2 | 0 | I'm confused about this question. When do Django development, if I have modified the py file or static file, the build-in server will reload. But on PHP app development, if I have modified the files, the Apache Server do not need reload and the modified content will show on browser.
Why? | Why WSGI Server need reload Python file when modified but PHP not need? | 1.2 | 0 | 0 | 291 |
38,938,205 | 2016-08-14T00:42:00.000 | 0 | 0 | 1 | 0 | python,python-3.x,pip | 38,943,188 | 7 | false | 0 | 0 | Since you have specified in the comments you want syntax like pip install [package] to work, here is a solution:
Install setuptools for Python3: apt-get install python3-setuptools
Now pip for Python3 could be installed by: python3 -m easy_install pip
Now you can use pip with the specific version of Python to
install package for Python 3 by: pip-3.2 install [package] | 2 | 56 | 0 | I am using OSX and I have pip installed for both Python3.5 and Python2.7. I know I can run the command pip2 to use Python2 and when I use the command pip3 Python3.x will be used.
The problem is that the default of pip is set to Python2.7 and I want it to be Python3.x.
How can I change that?
edit:
No, I am not running a virtual environment yet. If it was a virtual environment I could just run Python3.x and forget all about Python2.7; unfortunately, since OSX requires Python2.7 for its own use, I can't do that. Hence why I'm asking this.
Thanks for the answer. I however don't want to change what running python does. Instead I would like to change the path that running pip takes. At the moment pip -V shows me pip 8.1.2 from /Library/Python/2.7/site-packages (python 2.7), but I am looking for pip 8.1.2 from /Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages (python 3.5) I am sure there has to be a way to do this. Any ideas? | How to override the pip command to Python3.x instead of Python2.7? | 0 | 0 | 0 | 94,207 |
38,938,205 | 2016-08-14T00:42:00.000 | 10 | 0 | 1 | 0 | python,python-3.x,pip | 38,938,367 | 7 | false | 0 | 0 | Can't you alias pip='pip3' in your ~/.bash_profile?
In Terminal, run nano ~/.bash_profile, then add a line to the end that reads alias pip='pip3'. This is safe; it won't affect system processes, only your terminal. | 2 | 56 | 0 | I am using OSX and I have pip installed for both Python3.5 and Python2.7. I know I can run the command pip2 to use Python2 and when I use the command pip3 Python3.x will be used.
The problem is that the default of pip is set to Python2.7 and I want it to be Python3.x.
How can I change that?
edit:
No, I am not running a virtual environment yet. If it was a virtual environment I could just run Python3.x and forget all about Python2.7; unfortunately, since OSX requires Python2.7 for its own use, I can't do that. Hence why I'm asking this.
Thanks for the answer. I however don't want to change what running python does. Instead I would like to change the path that running pip takes. At the moment pip -V shows me pip 8.1.2 from /Library/Python/2.7/site-packages (python 2.7), but I am looking for pip 8.1.2 from /Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages (python 3.5) I am sure there has to be a way to do this. Any ideas? | How to override the pip command to Python3.x instead of Python2.7? | 1 | 0 | 0 | 94,207 |
38,939,085 | 2016-08-14T04:19:00.000 | 0 | 0 | 0 | 1 | python,command-line,spotify,spotipy | 39,049,945 | 1 | false | 0 | 0 | Copy and paste the entire redirect URI from your browser to the terminal (when prompted) after successful authentication. Your access token will be cached in the directory (look for .cache.<username>) | 1 | 0 | 0 | I am testing my app using the terminal, which is quite handy in a pre-development phase.
So far, I have used spotipy.Spotify(client_credentials_manager=client_credentials_manager) within my Python scripts in order to access data.
SpotifyClientCredentials() requires client_id and client_secret as parameters.
Now I need to access analysis_url data, which requires an access token.
Is there a way to include this access token requirement via my python script ran at command line or do I have to build an app on the browser just to do a simple test?
Many thanks in advance. | Spotify - access token from command line | 0 | 0 | 1 | 422
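For a command-line test this can stay in the script. The sketch below assumes spotipy's util.prompt_for_user_token helper and placeholder credentials; it opens a browser for authorization and asks you to paste the redirect URL back into the terminal, as the accepted answer describes.

import spotipy
import spotipy.util as util

token = util.prompt_for_user_token(
    'your_username', scope='user-library-read',              # placeholder username/scope
    client_id='YOUR_CLIENT_ID', client_secret='YOUR_CLIENT_SECRET',
    redirect_uri='http://localhost:8888/callback')

sp = spotipy.Spotify(auth=token)
analysis = sp.audio_analysis('4TTV7EcfroSLWzXRY6gLv6')       # placeholder track id
print(analysis.keys())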
38,944,172 | 2016-08-14T16:23:00.000 | 0 | 1 | 1 | 0 | python,c++,subprocess,piping | 38,959,725 | 3 | false | 0 | 0 | I'm not familiar with python however, a simple approach is to use some communication protocols such as serial ports or udp which is a network protocol. For real-time applications, UDP protocol is a preferable choice. | 1 | 3 | 0 | I have a sensor value which can be read using the C++ only because it was implemented using it , I want to pass location value (two float variable) of one sensor to Python. I have been reading a lot and I found shared memory , piping and more any idea what is the best way to do it ? | How to pass variable value from C++ to Python? | 0 | 0 | 0 | 3,054 |
38,944,204 | 2016-08-14T16:27:00.000 | 5 | 0 | 1 | 0 | ipython,jupyter,jupyter-notebook | 38,944,682 | 1 | false | 1 | 0 | Reposting as an answer:
When your changes don't seem to be taking effect in an HTML interface, browser caching is often a culprit. The browser saves time by not asking for files again. You can:
Try force-refreshing with Ctrl-F5. It may get some things from the cache anyway, though sometimes mashing it several times is effective.
Use a different browser profile, or private browsing mode, to load the page.
There may be a setting to disable caching under developer options. I think Chrome has this. May only apply while developer tools are open.
If all else fails, load the page using a different browser. If it still doesn't change, it's likely the problem is not (just) browser caching. | 1 | 2 | 0 | By mistake, I updated this file to customize css.
D:\Continuum\Anaconda2\Lib\site-packages\notebook\static\custom\custom.css
To rollback the above change,
1) I put back the original file that I saved before. still the new css shows up in jupyter.
2) I removed all .ipython and .jupyter dir and it didn't work either.
3) I even uninstalled anaconda and still that css shows up.
I'm really stuck here. Does anyone know how to go back to the default css of jupyter ? | jupyter custom.css removal | 0.761594 | 0 | 0 | 770 |
38,944,551 | 2016-08-14T17:06:00.000 | 4 | 0 | 0 | 0 | python,django,apache,postgresql,github | 62,814,973 | 4 | false | 1 | 0 | If you receive this error and are using the Heroku hosting platform its quite possible that you are trying to write to a Hobby level database which has a limited number of rows.
Heroku will allow you to pg:push the database even if you exceed the limits, but it will be read-only so any modifications to content won't be processed and will throw this error. | 1 | 53 | 0 | What are some basic steps for troubleshooting and narrowing down the cause for the "django.db.utils.ProgrammingError: permission denied for relation django_migrations" error from Django?
I'm getting this message after what was initially a stable production server but has since had some changes to several aspects of Django, Postgres, Apache, and a pull from Github. In addition, it has been some time since those changes were made and I don't recall or can't track every change that may be causing the problem.
I get the message when I run python manage.py runserver or any other python manage.py ... command except python manage.py check, which states the system is good. | Steps to Troubleshoot "django.db.utils.ProgrammingError: permission denied for relation django_migrations" | 0.197375 | 1 | 0 | 32,047 |
38,945,299 | 2016-08-14T18:26:00.000 | 0 | 1 | 0 | 1 | python,shell,jboss,nagios,icinga | 39,367,798 | 3 | true | 1 | 0 | I did this by monitored jboss process using ps aux | grep "\-D\[Standalone\]" for standalone mode and ps aux | grep "\-D\[Server" for domain mode. | 1 | 0 | 0 | I want to monitor jboss if its running or not through Icinga.
I don't want to check /etc/init.d/jboss status, as sometimes the service is up but some of the JBoss processes are killed or hung and JBoss doesn't work properly.
I would like to create a script to monitor all of its processes from the ps output. But some servers are running in standalone mode and others in domain mode (master, slave), and the processes are different for each case.
I'm not sure where to start. Has anyone here done the same before? Just looking for ideas on how to do this. | monitoring jboss process with icinga/nagios | 1.2 | 0 | 0 | 1,024
38,945,695 | 2016-08-14T19:11:00.000 | 0 | 0 | 0 | 0 | python,opencv,ellipse,data-fitting | 38,945,813 | 2 | false | 0 | 0 | Empirically, I ran code matching thousands of ellipses, and I never got one return value where the returned width was greater than the returned height. So it seems OpenCV normalizes the ellipse such that height >= width. | 1 | 1 | 1 | An ellipse of width 50, height 100, and angle 0, would be identical to an ellipse of width 100, height 50, and angle 90 - i.e. one is the rotation of the other.
How does cv2.fitEllipse handle this? Does it return ellipses in some normalized form (i.e. angle is picked such that width is always < height), or can it provide any output?
I ask as I'm trying to determine whether two fit ellipses are similar, and am unsure whether I have to account for these things. The documentation doesn't address this at all. | How does cv2.fitEllipse handle width/height with regards to rotation? | 0 | 0 | 0 | 1,827 |
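Given that the normalisation is only observed empirically, a defensive way to compare two fit ellipses is to canonicalise them yourself first; a small helper, written as an illustration rather than a guarantee about OpenCV's behaviour:

def canonical_ellipse(ellipse):
    """Return the ellipse with width <= height and the angle folded into [0, 180)."""
    (cx, cy), (w, h), angle = ellipse
    if w > h:
        w, h = h, w
        angle += 90.0
    return (cx, cy), (w, h), angle % 180.0

# e1 = canonical_ellipse(cv2.fitEllipse(contour1))
# e2 = canonical_ellipse(cv2.fitEllipse(contour2))
# ...then compare centres, axes and angles with whatever tolerance you need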
38,948,021 | 2016-08-15T00:57:00.000 | 0 | 0 | 0 | 1 | python,linux,bash,ubuntu,windows-subsystem-for-linux | 39,723,038 | 3 | false | 0 | 0 | Looks like you are having permissions issues.
To see everything on your home folder try
ls -al
to change permissions check out the chmod command | 2 | 0 | 0 | I'm new to Linux. I recently downloaded Bash on Ubuntu on Windows 10 (after the Anniversary edition update to Windows 10). Since this update is relatively new, there is not much online regarding troubleshooting. There are two things I need help on:
(1) When I go to the home folder, which seems to be "C:\Users\user\AppData\Local\lxss\home\user" and I add a new folder through Windows, this folder does not show up in Linux with the "ls" command. But when I add a directory using "mkdir" in Linux, the "ls" command shows this folder. Why is it behaving like this? Am I limited to creating folders through "mkdir" when working in this folder?
(2) I have a Python script sitting in that same folder that I'm trying to run and again it is not being found by Linux or the Python interpreter started in Bash on Ubuntu on Windows. I have Python 3 installed (Anaconda) and I'm able to type commands directly in the Python interpreter and it's working. However, I would like to run scripts in files.
Please let me know if more information is needed. Thanks. | bash on Ubuntu on windows Linux, folder recognition, and running Python scripts | 0 | 0 | 0 | 970 |
38,948,021 | 2016-08-15T00:57:00.000 | 1 | 0 | 0 | 1 | python,linux,bash,ubuntu,windows-subsystem-for-linux | 51,154,550 | 3 | false | 0 | 0 | The reason why ls is not showing anything is that it shows the Linux directory structure. Try setting it to the Windows directory, in this example the c drive:
cd /mnt/c
Does ls show a folder structure now? | 2 | 0 | 0 | I'm new to Linux. I recently downloaded Bash on Ubuntu on Windows 10 (after the Anniversary edition update to Windows 10). Since this update is relatively new, there is not much online regarding troubleshooting. There are two things I need help on:
(1) When I go to the home folder, which seems to be "C:\Users\user\AppData\Local\lxss\home\user" and I add a new folder through Windows, this folder does not show up in Linux with the "ls" command. But when I add a directory using "mkdir" in Linux, the "ls" command shows this folder. Why is it behaving like this? Am I limited to creating folders through "mkdir" when working in this folder?
(2) I have a Python script sitting in that same folder that I'm trying to run and again it is not being found by Linux or the Python interpreter started in Bash on Ubuntu on Windows. I have Python 3 installed (Anaconda) and I'm able to type commands directly in the Python interpreter and it's working. However, I would like to run scripts in files.
Please let me know if more information is needed. Thanks. | bash on Ubuntu on windows Linux, folder recognition, and running Python scripts | 0.066568 | 0 | 0 | 970 |
38,948,430 | 2016-08-15T02:12:00.000 | 1 | 0 | 1 | 0 | python,bit-shift | 38,948,543 | 1 | false | 0 | 0 | Python does not have registers and you cannot declare the type of anything.
The shift operators operate on unlimited-precision integers. If you shift left, the number will continue to get larger indefinitely (or until out of memory). If you shift right, the least-significant bit is dropped as you would expect. There is no "carry flag", that's the kind of thing you see in assembly language and Python is not assembly. Since the integers have unlimited precision, logical and arithmetic shifts are equivalent, in a sense (if you imagine that the sign bit repeats indefinitely).
Any time you want fixed width operations you will just have to mask the results of the unlimited-precision operations.
As for the "smartest" way to do something, that's not really an appropriate question for Stack Overflow. | 1 | 0 | 0 | Let's say I want to write a 16bit linear feedback shift register LFSR in Python using its native shift operator.
Does the operator itself have a feature to specify the bit to shifted into the new MSB position?
Does the operator have a carry flag or the like to catch the LSB falling out of the register?
How do I set up the register to a 16-bit size? Not sure how to do this in Python, where variables are not explicitly typed.
What's the smartest way to compute the multi-bit XOR function for the feedback. Actual bit extraction or lookup table?
Thanks,
Gert | Using shift operator for LFSR in python | 0.197375 | 0 | 0 | 384 |
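Putting the answer's advice into practice, a 16-bit Fibonacci LFSR only needs an explicit mask to stay at 16 bits; the tap positions below (16, 14, 13, 11) are one common maximal-length choice, not something dictated by Python.

MASK = 0xFFFF                         # keep the register at 16 bits

def lfsr16_step(state):
    # feedback bit = XOR of the tapped bits (taps 16, 14, 13, 11)
    bit = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
    # shift right and feed the bit back into the MSB position
    return ((state >> 1) | (bit << 15)) & MASK

state = 0xACE1                        # any non-zero seed
for _ in range(5):
    state = lfsr16_step(state)
    print(format(state, '016b'))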
38,953,175 | 2016-08-15T10:11:00.000 | 0 | 0 | 0 | 1 | python,dbus,bluez,gatt | 38,997,649 | 2 | false | 0 | 0 | See 'test/example-gatt-client' from bluez package | 1 | 3 | 0 | I would like to connect to a Bluetooth LE device and receive notifications from it in python. I would like to use the Bluez dbus API, but can't find an example I can understand. :-)
With gatttool, I can use the following command:
gatttool -b C4:8D:EE:C8:D2:D8 --char-write-req -a 0x001d -n 0100 --listen
How can I do the same in python, using the dbus API of Bluez? | Connect to a Bluetooth LE device using bluez python dbus interface | 0 | 0 | 0 | 3,227 |
38,954,505 | 2016-08-15T11:48:00.000 | 0 | 0 | 0 | 0 | python,django,caching,static,cdn | 38,956,967 | 2 | true | 1 | 0 | Here's my work around :
On deployment (from a bash script), I get the shasum of my css style.
I put this variable inside the environment.
I have a context processor for the template engine that will read from the environment. | 1 | 1 | 0 | I'm working on a website built with Django.
When I'm doing updates on the static files, the users have to hard refresh the website to get the latest version.
I'm using a CDN server to deliver my static files so using the built-in static storage from Django.
I don't know about the best practices but my idea is to generate a random string when I redeploy the website and have something like style.css?my_random_string.
I don't know how to handle such a global variable through the project (Using Gunicorn in production).
I have a RedisDB running, I can store the random string in it and clear it on redeployment.
I was thinking of making this variable globally available in templates with a context processor.
What are your thoughts on this ? | Cache busting with Django | 1.2 | 0 | 0 | 865 |
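A minimal sketch of the context-processor part of that workaround; the module path, environment variable name and template usage are placeholders to adapt to your project.

# myapp/context_processors.py  (hypothetical module)
import os

def static_version(request):
    # STATIC_VERSION is exported by the deploy script, e.g. the shasum of style.css
    return {'STATIC_VERSION': os.environ.get('STATIC_VERSION', 'dev')}

# settings.py: add 'myapp.context_processors.static_version' to the template
# engine's context_processors list, then in a template:
#   <link rel="stylesheet" href="{% static 'style.css' %}?v={{ STATIC_VERSION }}">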
38,955,954 | 2016-08-15T13:25:00.000 | 0 | 0 | 1 | 0 | python,python-2.7 | 38,956,381 | 4 | false | 0 | 0 | Simple answer: you can't.
Except in the trivial way, which is by calling a function that does this for you, using a loop. If you want this kind of nice syntax you can use libraries as suggested: map, numpy, etc. Or you can write your own function.
If what you are looking for is syntactic convenience, Python does not allow overloading operators for built-in types such as list.
Oh, and you can use recursion, if that's "not a loop" for you. | 1 | 0 | 0 | I have two python lists A and B of equal length each containing only boolean values. Is it possible to get a third list C where C[i] = A[i] and B[i] for 0 <= i < len(A) without using loop?
I tried following
C = A and B
but it just gives the list B
I also tried
C = A or B
which gives the first list
I know it can easily be done using for loop in single line like C = [x and y for x, y in zip(A, B)]. | Elementwise and in python list | 0 | 0 | 0 | 951 |
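For completeness, the "call a function that loops for you" options the answer alludes to look like this; the NumPy variant assumes NumPy is acceptable as a dependency.

from operator import and_
import numpy as np

A = [True, False, True]
B = [True, True, False]

C1 = list(map(and_, A, B))            # map does the looping internally
C2 = list(np.logical_and(A, B))       # vectorised elementwise AND
print(C1, C2)                         # [True, False, False] [True, False, False]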
38,956,660 | 2016-08-15T14:07:00.000 | 24 | 0 | 0 | 0 | python,pandas,pycharm | 52,064,081 | 10 | false | 0 | 0 | I have faced the same problem with PyCharm 2018.2.2. The reason was having a special character in a column's name as mentioned by Yunzhao .
If your having a column name like 'R&D' changing it to 'RnD' will fix the problem. It's really strange JetBrains hasn't solved this problem for over 2 years. | 6 | 57 | 1 | I am using PyCharm 2016.2.1 . When I try to view a Pandas dataframe through the newly added feature 'View as DataFrame' in the debugger, this works as expected for a small (e.g. 4x4) DataFrame.
However when I try to view a DataFrame (generated by custom script) of ~10,000 rows x ~50 columns, I get the message: "Nothing to show".
When I run the same script (that generates the DataFrame) in Spyder, I am able to view it, so I am pretty sure it's not an error in my script.
Does anyone know if there is a maximum size to the DataFrames that can be viewed in PyCharm, and if there is a way to change this?
EDIT:
It seems that the maximum size allowed is 1000 x 15 , as in some cases it gets truncated to this size (when the number of rows is too large, but when there are too many columns pycharm just says 'nothing to show').
Still, I would like to know if there is a way to increase the maximum allowed rows and columns viewable through the DataFrame viewer. | Dataframe not showing in Pycharm | 1 | 0 | 0 | 27,005 |
38,956,660 | 2016-08-15T14:07:00.000 | 9 | 0 | 0 | 0 | python,pandas,pycharm | 51,483,568 | 10 | false | 0 | 0 | I have met the same problems.
I figured it was because of the special characters in column names (in my case)
In my case, I have "%" in the column name, then it doesn't show the data in View as DataFrame function. After I remove it, everything was correctly shown.
Please double check if you also have some special characters in the column names. | 6 | 57 | 1 | I am using PyCharm 2016.2.1 . When I try to view a Pandas dataframe through the newly added feature 'View as DataFrame' in the debugger, this works as expected for a small (e.g. 4x4) DataFrame.
However when I try to view a DataFrame (generated by custom script) of ~10,000 rows x ~50 columns, I get the message: "Nothing to show".
When I run the same script (that generates the DataFrame) in Spyder, I am able to view it, so I am pretty sure it's not an error in my script.
Does anyone know if there is a maximum size to the DataFrames that can be viewed in PyCharm, and if there is a way to change this?
EDIT:
It seems that the maximum size allowed is 1000 x 15 , as in some cases it gets truncated to this size (when the number of rows is too large, but when there are too many columns pycharm just says 'nothing to show').
Still, I would like to know if there is a way to increase the maximum allowed rows and columns viewable through the DataFrame viewer. | Dataframe not showing in Pycharm | 1 | 0 | 0 | 27,005 |
38,956,660 | 2016-08-15T14:07:00.000 | 2 | 0 | 0 | 0 | python,pandas,pycharm | 57,003,355 | 10 | false | 0 | 0 | In my situation, the problem is caused by two same cloumn name in my dataframe.
Check it by:df.columns.shape[0] == len(set(df.columns)) | 6 | 57 | 1 | I am using PyCharm 2016.2.1 . When I try to view a Pandas dataframe through the newly added feature 'View as DataFrame' in the debugger, this works as expected for a small (e.g. 4x4) DataFrame.
However when I try to view a DataFrame (generated by custom script) of ~10,000 rows x ~50 columns, I get the message: "Nothing to show".
When I run the same script (that generates the DataFrame) in Spyder, I am able to view it, so I am pretty sure it's not an error in my script.
Does anyone know if there is a maximum size to the DataFrames that can be viewed in PyCharm, and if there is a way to change this?
EDIT:
It seems that the maximum size allowed is 1000 x 15 , as in some cases it gets truncated to this size (when the number of rows is too large, but when there are too many columns pycharm just says 'nothing to show').
Still, I would like to know if there is a way to increase the maximum allowed rows and columns viewable through the DataFrame viewer. | Dataframe not showing in Pycharm | 0.039979 | 0 | 0 | 27,005 |
38,956,660 | 2016-08-15T14:07:00.000 | 2 | 0 | 0 | 0 | python,pandas,pycharm | 55,593,342 | 10 | false | 0 | 0 | I use PyCharm 2019.1.1 (Community Edition) and I run Python 3.7.
When I first click on "View as DataFrame" there seems to be the same issue, but if I wait a few second the content pops up. For me it is a matter of loading. | 6 | 57 | 1 | I am using PyCharm 2016.2.1 . When I try to view a Pandas dataframe through the newly added feature 'View as DataFrame' in the debugger, this works as expected for a small (e.g. 4x4) DataFrame.
However when I try to view a DataFrame (generated by custom script) of ~10,000 rows x ~50 columns, I get the message: "Nothing to show".
When I run the same script (that generates the DataFrame) in Spyder, I am able to view it, so I am pretty sure it's not an error in my script.
Does anyone know if there is a maximum size to the DataFrames that can be viewed in PyCharm, and if there is a way to change this?
EDIT:
It seems that the maximum size allowed is 1000 x 15 , as in some cases it gets truncated to this size (when the number of rows is too large, but when there are too many columns pycharm just says 'nothing to show').
Still, I would like to know if there is a way to increase the maximum allowed rows and columns viewable through the DataFrame viewer. | Dataframe not showing in Pycharm | 0.039979 | 0 | 0 | 27,005 |
38,956,660 | 2016-08-15T14:07:00.000 | 2 | 0 | 0 | 0 | python,pandas,pycharm | 57,313,249 | 10 | false | 0 | 0 | For the sake of completeness: I face the same problem, due to the fact that some elements in the index of the dataframe contain a question mark '?'. One should avoid that too, if you still want to use the data viewer. Data viewer still worked, if the index strings contain hashes or less-than/greather-than signs though. | 6 | 57 | 1 | I am using PyCharm 2016.2.1 . When I try to view a Pandas dataframe through the newly added feature 'View as DataFrame' in the debugger, this works as expected for a small (e.g. 4x4) DataFrame.
However when I try to view a DataFrame (generated by custom script) of ~10,000 rows x ~50 columns, I get the message: "Nothing to show".
When I run the same script (that generates the DataFrame) in Spyder, I am able to view it, so I am pretty sure it's not an error in my script.
Does anyone know if there is a maximum size to the DataFrames that can be viewed in PyCharm, and if there is a way to change this?
EDIT:
It seems that the maximum size allowed is 1000 x 15 , as in some cases it gets truncated to this size (when the number of rows is too large, but when there are too many columns pycharm just says 'nothing to show').
Still, I would like to know if there is a way to increase the maximum allowed rows and columns viewable through the DataFrame viewer. | Dataframe not showing in Pycharm | 0.039979 | 0 | 0 | 27,005 |
38,956,660 | 2016-08-15T14:07:00.000 | 1 | 0 | 0 | 0 | python,pandas,pycharm | 64,172,890 | 10 | false | 0 | 0 | As of 2020-10-02, using PyCharm 2020.1.4, I found that this issue also occurs if the DataFrame contains a column containing a tuple. | 6 | 57 | 1 | I am using PyCharm 2016.2.1 . When I try to view a Pandas dataframe through the newly added feature 'View as DataFrame' in the debugger, this works as expected for a small (e.g. 4x4) DataFrame.
However when I try to view a DataFrame (generated by custom script) of ~10,000 rows x ~50 columns, I get the message: "Nothing to show".
When I run the same script (that generates the DataFrame) in Spyder, I am able to view it, so I am pretty sure it's not an error in my script.
Does anyone know if there is a maximum size to the DataFrames that can be viewed in PyCharm, and if there is a way to change this?
EDIT:
It seems that the maximum size allowed is 1000 x 15 , as in some cases it gets truncated to this size (when the number of rows is too large, but when there are too many columns pycharm just says 'nothing to show').
Still, I would like to know if there is a way to increase the maximum allowed rows and columns viewable through the DataFrame viewer. | Dataframe not showing in Pycharm | 0.019997 | 0 | 0 | 27,005 |
38,960,221 | 2016-08-15T17:49:00.000 | 1 | 0 | 1 | 0 | python,pandas,tuples | 38,966,174 | 1 | true | 0 | 0 | You could use a dict of dicts instead of a dict of namedtuples. Dicts are mutable, so you'll be able to modify the inner dicts.
Given what you said in the comments about the structures of each DataFrame-1 and -2 being comparable, you could also group all of each into one big DataFrame, by adding a column to each DataFrame containing the value of sample_info_1 repeated across all rows, and likewise for sample_info_2. Then you could concat all the DataFrame-1s into a big one, and likewise for the DataFrame-2s, getting all your data into two DataFrames. (Depending on the structure of those DataFrames, you could even join them into one.) | 1 | 1 | 1 | Is there a data class or type in Python that matches these criteria?
I am trying to build an object that looks something like this:
ExperimentData
ID 1
sample_info_1: character string
sample_info_2: character string
Dataframe_1: pandas data frame
Dataframe_2: pandas data frame
ID 2
(etc.)
Right now, I am using a dict to hold the object ('ExperimentData'), which contains a namedtuple for each ID. Each of the namedtuples has a named field for the corresponding data attached to the sample. This allows me to keep all the IDs indexed, and have all of the fields under each ID indexed as well.
However, I need to update and/or replace the entries under each ID during downstream analysis. Since a tuple is immutable, this does not seem to be possible.
Is there a better implementation of this? | Mutable indexed heterogeneous data structure? | 1.2 | 0 | 0 | 68 |
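A small sketch of both suggestions, using made-up keys and empty DataFrames as stand-ins for the real data:

import pandas as pd

experiments = {
    1: {'sample_info_1': 'foo', 'sample_info_2': 'bar',
        'df1': pd.DataFrame(), 'df2': pd.DataFrame()},
}
experiments[1]['sample_info_1'] = 'updated'        # inner dicts are mutable

# alternative: one combined frame, tagging each row with its experiment id
frames = []
for exp_id, entry in experiments.items():
    df = entry['df1'].copy()
    df['experiment_id'] = exp_id
    df['sample_info_1'] = entry['sample_info_1']
    frames.append(df)
combined = pd.concat(frames) if frames else pd.DataFrame()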
38,961,360 | 2016-08-15T19:05:00.000 | 2 | 0 | 0 | 0 | python,excel,openpyxl | 39,077,066 | 2 | false | 0 | 0 | In openpyxl you'll have to go cell by cell.
You could use Excel's builtin Data Validation or Conditional Formatting, which openpyxl supports, for this. Let Excel do the work and talk to it using xlwings. | 2 | 1 | 0 | I am working on a project that requires me to read a spreadsheet provided by the user and I need to build a system to check that the contents of the spreadsheet are valid. Specifically I want to validate that each column contains a specific datatype.
I know that this could be done by iterating over every cell in the spreadsheet, but I was hoping there is a simpler way to do it. | Use openpyxl to verify the structure of a spreadsheet | 0.197375 | 1 | 0 | 109 |
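If you do end up checking cell by cell, it stays fairly compact; a sketch assuming a header row and a hypothetical list of expected Python types per column:

from openpyxl import load_workbook

expected = [int, str, float]                       # hypothetical per-column types
wb = load_workbook('input.xlsx')                   # placeholder file name
ws = wb.active

bad_cells = []
for row in ws.iter_rows(min_row=2):                # assume row 1 holds the headers
    for cell, expected_type in zip(row, expected):
        if cell.value is not None and not isinstance(cell.value, expected_type):
            bad_cells.append(cell.coordinate)

print(bad_cells)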
38,961,360 | 2016-08-15T19:05:00.000 | 1 | 0 | 0 | 0 | python,excel,openpyxl | 39,088,757 | 2 | true | 0 | 0 | I ended up just manually looking at each cell. I have to read them all into my data structures before I can process anything anyways so it actually made sense to check then. | 2 | 1 | 0 | I am working on a project that requires me to read a spreadsheet provided by the user and I need to build a system to check that the contents of the spreadsheet are valid. Specifically I want to validate that each column contains a specific datatype.
I know that this could be done by iterating over every cell in the spreadsheet, but I was hoping there is a simpler way to do it. | Use openpyxl to verify the structure of a spreadsheet | 1.2 | 1 | 0 | 109 |
38,962,468 | 2016-08-15T20:22:00.000 | 1 | 0 | 0 | 0 | python,volttron | 38,963,891 | 1 | false | 0 | 0 | BACnet has a size limit for the size of a message. The message size has several different valid values based on the BACnet specification. If a device wants to send a message that exceeds the supported size of either device it may segment the message into smaller pieces. Both devices must support segmentation for this to work, otherwise you get the error you are seeing.
The cause of this error is the device being scraped does not support segmentation and the number of points being scraped by the driver at once (by default, all of them) creates a message too big to avoid segmentation either sending or receiving.
The BACnet driver currently supports manual segmentation to overcome this device limitation without reducing the number of points configured in the driver. You can set the max_per_request setting in the driver_config section of a BACnet device configuration. The setting is per device so you must include max_per_request in every device affected. A typical value is 20. If the error persists try lower values.
A planned future enhancement for the BACnet driver is to auto detect this case and automatically set an ideal max_per_request value.
EDIT
I should also mention that the max_per_request argument was added after VOLTTRON 3.0. You need to be running either 3.5RC1 or the develop branch. | 1 | 1 | 0 | When running the BACnet Proxy and MasterDriver agents, I receive the following error message:
master_driver.driver ERROR: Failed to scrape Device Name:
RuntimeError('Device communication aborted: segmentationNotSupported')
Could anyone help me to resolve this error? | VOLTTRON : Device communication aborted: segmentationNotSupported | 0.197375 | 0 | 0 | 78 |
38,962,947 | 2016-08-15T20:57:00.000 | 28 | 0 | 1 | 0 | python,windows,ubuntu,pip | 38,963,343 | 3 | false | 0 | 0 | I highly recommend against doing this - the overwhelmingly supported best practice is to use a requirements.txt file, listing the packages you want to install specifically.
You then install it with pip install -r requirements.txt and it installs all the packages for your project.
This has several benefits:
Repeatability by installing only the required packages
Conciseness
However, if you really do want to install ALL python packages (note that there are thousands), you can do so via the following:
pip search * | grep ")[[:space:]]" | cut -f1 -d" "
I strongly recommend against this as it's likely to do horrible things to your system, as it will attempt to install every python package (and why it's in spoiler tags). | 1 | 8 | 0 | I would like to install all available modules for Python 2.7.12 using a single pip command. Is there a way to do this without having to specify every single package name? | Is there a way to install all python modules at once using pip? | 1 | 0 | 0 | 45,827 |
38,963,102 | 2016-08-15T21:08:00.000 | 3 | 0 | 1 | 0 | python,pandas,grep | 38,963,174 | 1 | true | 0 | 0 | Since grep is a compiled C program grep is certainly faster than interpreting bytecode for file scan AND regex processing (although regex lib is native code)
Running with pypy could close the gap, but in the end the compiled code would win.
Of course, on smaller data, if data could be stored in a dictionary, multiple search operations would be faster than calling grep the same number of time because grep search is O(n) and dictionary search is O(log(N)) | 1 | 1 | 0 | I need to search for a particular regex in a very large file I can't load to memory or create dataframe of. Which one, grep or iterating over a TextFileReader will be faster in that case?
Sadly, I don't have time to learn, configure and run a Hadoop.
Cheers | What will be faster - grep or pandas TextFileReader for a very large file? | 1.2 | 0 | 0 | 214 |
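For the pandas side of the comparison, the usual pattern is to let TextFileReader hand you chunks and filter each one with a vectorised regex match; the file name, column name and pattern below are placeholders.

import pandas as pd

pattern = r'some_regex'                                    # placeholder regex
reader = pd.read_csv('big_file.csv', chunksize=100000)     # TextFileReader

matches = []
for chunk in reader:
    # 'message' is a placeholder column name
    hits = chunk[chunk['message'].str.contains(pattern, na=False)]
    if not hits.empty:
        matches.append(hits)

result = pd.concat(matches) if matches else pd.DataFrame()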
38,964,041 | 2016-08-15T22:35:00.000 | 0 | 0 | 0 | 0 | python,amazon-web-services,amazon-s3,amazon-ec2,keras | 45,422,652 | 2 | false | 0 | 0 | you can do it using Jupyter notebook otherwise use:
Duck for MAC, Putty for windows.
I hope it helps | 1 | 2 | 1 | I'm trying to train a Keras model on AWS GPU.
How would you load images (training data) in S3 for a deep learning model with EC2 instance (GPU)? | How to load images in S3 for a deep learning model with EC2 instance (GPU) | 0 | 0 | 0 | 764 |
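A common way to pull the images onto the EC2 instance before training is boto3; a sketch with a placeholder bucket, prefix and local directory:

import os
import boto3

s3 = boto3.client('s3')
bucket = 'my-training-data'                              # placeholder bucket

paginator = s3.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket=bucket, Prefix='images/'):
    for obj in page.get('Contents', []):
        key = obj['Key']
        local_path = os.path.join('/data', key)
        os.makedirs(os.path.dirname(local_path), exist_ok=True)
        s3.download_file(bucket, key, local_path)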
38,965,457 | 2016-08-16T01:42:00.000 | 0 | 0 | 1 | 0 | python,tcl,version | 38,965,465 | 1 | false | 0 | 1 | What worked eventually:
tcl = Tkinter.tcl()
tcl.eval('package forget Tcl')
tcl.eval('package provide Tcl 8.5')
tcl.eval('package require Tcl')
8.5
Success! | 1 | 0 | 0 | Using python2.7 Tkinter for executing Tcl.
The Tcl code has package require Tcl 8.5, while the tclsh loads Tcl 8.4 by default.
Causes: version conflict for package "Tcl": have 8.4, need 8.5
I have libtcl8.5.so installed at a custom location.
Tried adding it to LD_LIBRARY_PATH, TCL_LIBRARY, TCLLIBPATH. Nothing worked. It's like the tclsh completely ignores the envs. | Python Tkinter: version conflict for package "Tcl": have 8.4, need 8.5 | 0 | 0 | 0 | 761 |
38,966,114 | 2016-08-16T03:16:00.000 | 0 | 0 | 1 | 1 | python,django,intellij-idea,ide,pycharm | 53,610,811 | 2 | false | 0 | 0 | Go to File > Settings > Plugins > Browse repositories > Search and Install Native Terminal
This will install a terminal which will use the Windows Native terminal.
A small black button will appear on the tool bar.
If you did not enable the tool bar, here is the trick: View | toolbar
check this toolbar option and the cmd button will be shown on the bar | 2 | 1 | 0 | First time posting, let me know how I can improve my questions.
I have installed PyCharm Edu 3.0 and Anaconda 3 on an older laptop. I am attempting to access the embedded terminal in the IDE and I am unable to launch it.
I have searched through similar questions here and the JetBrains docs, and the common knowledge seems to be installing the "Terminal" Plugin. My version of PyCharm does not have this plugin, and I am unable to find it in the JetBrains plugin list or community repositories.
If anyone has experienced this before or knows where I am going wrong attempting to launch the terminal I would appreciate the feedback. | Pycharm edu terminal plugin missing | 0 | 0 | 0 | 628 |
38,966,114 | 2016-08-16T03:16:00.000 | -1 | 0 | 1 | 1 | python,django,intellij-idea,ide,pycharm | 43,782,480 | 2 | false | 0 | 0 | Click preferences and choose plugin. Next click install Jetbrains plugin and choose Command line Tool Support. I hope this will help you | 2 | 1 | 0 | First time posting, let me know how I can improve my questions.
I have installed PyCharm Edu 3.0 and Anaconda 3 on an older laptop. I am attempting to access the embedded terminal in the IDE and I am unable to launch it.
I have searched through similar questions here and the JetBrains docs, and the common knowledge seems to be installing the "Terminal" Plugin. My version of PyCharm does not have this plugin, and I am unable to find it in the JetBrains plugin list or community repositories.
If anyone has experienced this before or knows where I am going wrong attempting to launch the terminal I would appreciate the feedback. | Pycharm edu terminal plugin missing | -0.099668 | 0 | 0 | 628 |
38,966,528 | 2016-08-16T04:09:00.000 | 0 | 0 | 1 | 0 | python-3.x,process,multiprocessing,parent-child,atexit | 38,981,627 | 1 | false | 0 | 0 | The functions registered via atexit are inherited by the children processes.
The simplest way to prevent that, is via calling atexit after you have spawned the children processes. | 1 | 0 | 0 | I have a program that spawns multiple child processes, how would I make the program only call atexit.register(function) on the main process and not on the child processes as well?
Thanks | Python3 Running atexit only on the main process | 0 | 0 | 0 | 168 |
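A minimal sketch of the "register after spawning" idea from the answer; the cleanup function and worker are placeholders.

import atexit
import multiprocessing

def cleanup():
    print('main process exiting')

def worker():
    pass                                   # placeholder child work

if __name__ == '__main__':
    procs = [multiprocessing.Process(target=worker) for _ in range(4)]
    for p in procs:
        p.start()
    atexit.register(cleanup)               # registered only after the children exist
    for p in procs:
        p.join()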
38,968,007 | 2016-08-16T06:30:00.000 | 1 | 1 | 0 | 0 | java,python,c++,python-2.7,batch-file | 38,968,156 | 2 | false | 0 | 1 | You can load C++ function and execute it from Python like BasicWolf explained in his answer. For Java, Jython might be a good approach. But then there's a problem - you will need to be dependent on Jython which is not up to date with the latest versions of Python. You will face compatibility issues with different libraries too.
I would recommend compiling your C++ and Java functions to create individual binaries out of them. Then execute these binaries from within Python, passing the arguments as command line parameters. This way you can keep using CPython. You can interoperate with programs written in any language. | 1 | 2 | 0 | I am creating a Windows program that so far has a .bat file calling a .pyw file, and I need functions from Java and C++. How can I do this?(I don't mind creating a new batch or python file, and I already have the header file for the C++ section and a .jar file for my java components. (For Java I use Eclipse Java Mars, and it's Java 8u101)) Thanks!!! | How do you call C++ and\or Java functions from Python 2.7? | 0.099668 | 0 | 0 | 455 |
38,968,367 | 2016-08-16T06:54:00.000 | 5 | 0 | 1 | 0 | python,apache-spark,ibm-cloud,data-science-experience,watson-studio | 38,968,368 | 1 | true | 0 | 0 | !PIP_USER= pip freeze
IBM sets the environment variable PIP_USER to enable the --user option by default. That's because many users forgot to specify that option for pip install. Unfortunately, this also enables the option for pip freeze, where it might not be desired. Therefore, you have to override the default option to get the full list of installed packages.
Alternative ways to ignore default options from environment variables:
!pip freeze --isolated
!env -i pip freeze | 1 | 1 | 0 | Update August 2019: This question is no longer relevant. It refers to a retired Apache Spark as a Service offering. Current Spark backends in Watson Studio use a different technology.
In a Python notebook, I can execute !pip freeze to get a list of installed packages. But the result is an empty list, or shows only a few packages that I installed myself. Until a few weeks ago, the command would return a list of all the packages, including those pre-installed by IBM. How can I get the full list now? | How to list the pre-installed Python packages on IBM's Spark service | 1.2 | 0 | 0 | 467 |
38,969,449 | 2016-08-16T07:57:00.000 | 0 | 0 | 1 | 0 | python,ibm-cloud-infrastructure | 38,977,645 | 1 | false | 0 | 0 | Currently there’s no API method to reset the Virtual Guest’s password. | 1 | 0 | 0 | I have a requirement that resetting the virtual guest's password when the password is forgotten. And I failed to find a suitable method to reset the password. Maybe reloading the os is a way to reset the password, but it is too crude. Is there any api/method to reset the virtual guest's password? | Can not find api to reset virtual guest's password | 0 | 0 | 0 | 35 |
38,972,380 | 2016-08-16T10:22:00.000 | 0 | 0 | 0 | 0 | python,deep-learning,keras | 39,922,584 | 3 | false | 0 | 0 | The best way to achieve this seems to be to create a new generator class expanding the one provided by Keras that parses the data augmenting only the images and yielding all the outputs. | 1 | 22 | 1 | In a Keras model with the Functional API I need to call fit_generator to train on augmented images data using an ImageDataGenerator.
The problem is my model has two outputs: the mask I'm trying to predict and a binary value.
I obviously only want to augment the input and the mask output and not the binary value.
How can I achieve this? | Keras: How to use fit_generator with multiple outputs of different type | 0 | 0 | 0 | 21,904 |
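One way to build such a generator is to run two identically seeded ImageDataGenerator flows (one for the images, one for the masks) with shuffling off, and slice the binary targets alongside them. This is only a sketch under those assumptions, not the single canonical Keras solution, and the output order and names are placeholders.

from keras.preprocessing.image import ImageDataGenerator

def multi_output_generator(images, masks, binaries, batch_size, seed=1):
    aug = dict(rotation_range=10, horizontal_flip=True)          # example augmentations
    image_flow = ImageDataGenerator(**aug).flow(images, batch_size=batch_size,
                                                seed=seed, shuffle=False)
    mask_flow = ImageDataGenerator(**aug).flow(masks, batch_size=batch_size,
                                               seed=seed, shuffle=False)
    i = 0
    while True:
        img_batch = next(image_flow)
        mask_batch = next(mask_flow)          # same seed -> same random transforms
        n = len(img_batch)
        bin_batch = binaries[i:i + n]         # binary targets are passed through untouched
        i = (i + n) % len(binaries)
        yield img_batch, [mask_batch, bin_batch]

# model.fit_generator(multi_output_generator(x, y_mask, y_bin, 32),
#                     steps_per_epoch=len(x) // 32, epochs=10)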
38,972,634 | 2016-08-16T10:34:00.000 | 0 | 0 | 1 | 0 | python-3.x,window,python-idle | 38,996,457 | 1 | false | 0 | 0 | folder location for idle settings (windows) is: users\PCname.idlerc.
Thanks @TerryJanReedy | 1 | 0 | 0 | This is annoying. Even when I change it from the options to a different height and width, every time I open the script window or IDLE it's at max height. I also disabled the height option, and it's still the same.
I uninstalled and reinstalled, but got the same result.
Is there a place where the Python settings are saved aside from the main folder? | python 3 idle window stuck at full height | 0 | 0 | 0 | 86
38,976,431 | 2016-08-16T13:35:00.000 | 0 | 0 | 0 | 0 | python,pandas,numpy,large-data | 38,976,616 | 1 | false | 0 | 0 | Out of curiosity, is there a reason you want to use Pandas for this? Image analysis is typically handled in matrices making NumPy a clear favorite. If I'm not mistaken, both sk-learn and PIL/IMAGE use NumPy arrays to do their analysis and operations.
Another option: avoid the in-memory step! Do you need to access all 1K+ images at the same time? If not, and you're operating on each one individually, you can iterate over the files and perform your operations there. For an even more efficient step, break your files into lists of 200 or so images, then use Python's MultiProcessing capabilities to analyze in parallel.
JIC, do you have PIL or IMAGE installed, or sk-learn? Those packages have some nice image analysis algorithms already packaged in which may save you some time in not having to re-invent the wheel. | 1 | 1 | 1 | Background: I have a sequence of images. In each image, I map a single pixel to a number. Then I want to create a pandas dataframe where each pixel is in its own column and images are rows. The reason I want to do that is so that I can use things like forward fill.
Challenge: I have transformed each image into a one dimensional array of numbers, each of which is about 2 million entries and I have thousands of images. Simply doing pd.DataFrame(array) is very slow (testing it on a smaller number of images). Is there a faster solution for this? Other ideas how to do this efficiently are also welcome, but using non-core different libraries may be a challenge (corporate environment). | Initializing a very large pandas dataframe | 0 | 0 | 0 | 310 |
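A rough sketch of the "avoid the in-memory step" suggestion above; pixel_map and the file pattern are placeholders for however one image is loaded and flattened, and multiprocessing.Pool spreads the files over worker processes in chunks:

    import glob
    from multiprocessing import Pool

    import numpy as np

    def pixel_map(path):
        # Placeholder: load one image and map each pixel to a number,
        # returning a 1-D numpy array (depends on your image format).
        return np.fromfile(path, dtype=np.uint8).astype(np.float64)

    if __name__ == "__main__":
        files = sorted(glob.glob("images/*.raw"))        # hypothetical file pattern
        with Pool(processes=4) as pool:
            rows = pool.map(pixel_map, files, chunksize=200)
        data = np.vstack(rows)                           # images as rows, pixels as columns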
38,978,228 | 2016-08-16T14:57:00.000 | 0 | 0 | 0 | 1 | python,git,docker,docker-compose,devops | 38,978,850 | 1 | false | 0 | 0 | The best solution is B, except that you will not use a volume in production.
Docker-compose will also allow you to easily mount your code as a volume but you only need this for dev.
In production you will COPY your files into the container. | 1 | 0 | 0 | I want to Dockerize a project.
I have my project files in a git repo. The project is written in python, and requires a virtual environment to be activated and a pip installation of requirements after the git clone from the repo. The container is going to be a development container, so I would need a basic set of software. Of course I also need to modify the project files, push and pull to git as I prefer.
Solution A
This solution build everything on runtime, and nothing is kept if the container is restarted. It would require the machine to install all the requirements, clone the project every time the container is started.
Use the Dockerfile for installing python, virtualenv, etc.
Use the Dockerfile for cloning the project from git, installing pip requirements.
Use docker compose for setting up the environment, memory limits, cpu shares, etc.
Solution B
This solution clones the project from git once manually, then the project files are kept in a volume, and you can freely modify them, regardless of container state.
Use the Dockerfile for installing python, virtualenv, etc.
Use docker compose for setting up the environment, memory limits, cpu shares, etc.
Create a volume that is mounted on the container
Clone the project files into the volume, set up everything once.
Solution C
There might be a much better solution that I have not thought of, if there is, be sure to tell. | Should I create a volume for project files when using Docker with git? | 0 | 0 | 0 | 86 |
38,979,170 | 2016-08-16T15:42:00.000 | 0 | 1 | 0 | 0 | javascript,jquery,python,cookies | 38,979,448 | 1 | false | 1 | 0 | If you want to interact with another web service, the solution is to send POST/GET requests and parse the responses.
The question is: what is your goal? | 1 | 0 | 0 | I have a problem that I am trying to conceptualize whether it is possible or not. Nothing too fancy (i.e. remote login or anything etc.)
I have Website A and Website B.
On Website A a user selects a few links from Website B. I would like to then remotely click the links on behalf of the user (as Website B creates a cookie with the clicked information), so that when the user gets redirected to Website B, the cookie (and the links) are pre-selected and the user does not need to click on them one by one.
Can this be done? | How to remote click on links from a 3rd party website | 0 | 0 | 1 | 38 |
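A rough illustration of the GET/POST suggestion above using the requests library; the URL and parameter are made up, and note that cookies captured this way live in the server-side session, not in the visitor's own browser:

    import requests

    session = requests.Session()

    # "Click" a link on Website B by requesting it, then inspect the response
    # and any cookies Website B sets for this session.
    response = session.get("https://website-b.example.com/select?item=42")
    print(response.status_code)
    print(session.cookies.get_dict())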
38,980,544 | 2016-08-16T16:57:00.000 | 0 | 0 | 0 | 0 | python,machine-learning,scikit-learn,random-forest | 38,980,686 | 1 | true | 0 | 0 | They will be treated in the same manner as the minimal value already encountered in the training set. RF is just a bunch of voting decision trees, and (basic) DTs can only form decisions in form of "if feature X is > then T go left, otherwise go right". Consequently, if you fit it to data which, for a given feature, has only values in [0, inf], it will either not use this feature at all or use it in a form given above (as decision of form "if X is > than T", where T has to be from (0, inf) to make any sense for the training data). Consequently if you simply take your new data and change negative values to "0", the result will be identical. | 1 | 0 | 1 | When I built my random forest model using scikit learn in python, I set a condition (where clause in sql query) so that the training data only contain values whose value is greater than 0.
I am curious to know how random forest handles test data whose value is less than 0, which the random forest model has never seen before in the training data. | What does Random Forest do with unseen data? | 1.2 | 0 | 0 | 709 |
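A small experiment (not part of the original answer) that illustrates the claim: because every split threshold lies inside the non-negative training range, predictions for negative test values are identical to predictions for the same values clipped to zero.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.RandomState(0)

    # Training data restricted to values greater than 0.
    X_train = rng.uniform(0, 10, size=(200, 3))
    y_train = (X_train.sum(axis=1) > 15).astype(int)

    model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

    # Test data containing negative values the model has never seen.
    X_test = rng.uniform(-5, 10, size=(20, 3))
    X_clipped = np.clip(X_test, 0, None)

    print((model.predict(X_test) == model.predict(X_clipped)).all())   # expected: True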
38,983,478 | 2016-08-16T20:01:00.000 | 1 | 0 | 1 | 0 | python-3.x,server,simplehttpserver,python-interactive | 38,983,578 | 1 | true | 0 | 0 | You could add a (testing only!) route in your web server, like POST /eval, which takes a string which will be Python code, executes it, and returns the result.
Obviously you need to make sure such functionality isn't exposed to a public network. | 1 | 0 | 0 | I'm not sure if this is the right way to word this, but I have a python web server that accepts connections and updates objects, is it possible to use the interactive shell to inject commands into the same memory space and view/change the objects the server is interacting with?
Currently, once the httpd function starts, the shell takes no input until the process is interrupted; then I can type and check the object states. But while I'm doing this, the server is not running, and has to be restarted.
Is what I'm trying to do ridiculous or possible? It's primarily for ease of testing and development. I've considered pickling and opening those pickles in another shell. | Run Python process in background with interactive shell on the same memory | 1.2 | 0 | 0 | 153 |
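A bare-bones sketch of the testing-only /eval route suggested above, written here with Flask purely for illustration since the original server framework isn't stated; it evaluates a Python expression against the running process's memory and must only ever be bound to localhost:

    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/eval", methods=["POST"])
    def evaluate():
        # DANGER: evaluates arbitrary Python expressions. Local debugging only.
        expression = request.get_data(as_text=True)
        try:
            return repr(eval(expression))
        except Exception as exc:
            return "error: %r" % exc, 400

    # app.run(host="127.0.0.1", port=5001)   # never expose this beyond localhost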
38,984,069 | 2016-08-16T20:39:00.000 | 3 | 0 | 0 | 0 | python,machine-learning,scikit-learn | 38,984,364 | 1 | true | 0 | 0 | You cannot add data to an SVM and achieve the same result as if you had added it to the original training set. You can either retrain with the extended training set, starting from the previous solution (which should be faster), or train on the new data only and completely diverge from the previous solution.
There are only a few models that can do what you would like to achieve here - for example Ridge Regression or Linear Discriminant Analysis (and their kernelized counterparts - Kernel Ridge Regression or Kernel Fisher Discriminant - or "extreme" counterparts - ELM or EEM), which have the property of being able to add new training data "on the fly".
So my question is, if I have loaded a pickled sklearn LinearSVC, is there a way to add in new training data without re-training the whole model? Or do I have to load all of the previous training data, add the new data, and train an entirely new model? | add training data to existing LinearSVC | 1.2 | 0 | 0 | 902 |
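A short sketch of the first option in the answer, periodically refitting on old plus new data; the arrays are synthetic stand-ins for the scraped link features, and since scikit-learn's LinearSVC has no warm-start option the refit simply runs from scratch on the combined set:

    import numpy as np
    from sklearn.svm import LinearSVC

    rng = np.random.RandomState(0)

    # Stand-ins for the original training set and for newly scraped, labelled links.
    X_old, y_old = rng.rand(100, 5), rng.randint(0, 2, 100)
    X_new, y_new = rng.rand(20, 5), rng.randint(0, 2, 20)

    clf = LinearSVC().fit(X_old, y_old)              # original pickled model

    # Periodic full retrain on the extended training set.
    X_all = np.vstack([X_old, X_new])
    y_all = np.concatenate([y_old, y_new])
    clf = LinearSVC().fit(X_all, y_all)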
38,984,387 | 2016-08-16T20:59:00.000 | 1 | 0 | 1 | 0 | python,nltk | 40,272,424 | 3 | false | 0 | 0 | While I am not sure exactly where the problem arises, I had this same error happen to me (it started 'overnight' - the code had been working and I had not re-installed nltk, so I have no idea what caused it to start happening). I still had the problem after upgrading to the latest version of nltk (3.2.1) and re-downloading the nltk data.
shiratori's answer helped me solve my problem, although at least for me it was slightly more complicated. Specifically, my nltk data was stored in C:\Users\USERNAME\AppData\Roaming\nltk_data (i think this is a default location). This is where it had always been stored, and always had worked fine, however suddenly nltk did not seem to be recognizing this location, and hence looked in the next drive. To solve it, I copied and pasted all the data in that folder to C:\nltk_data and now it is running fine again.
Anyway, I'm not sure if this is a Windows-induced problem, or what exactly changed to cause code that was working to stop working, but this solved it. | 1 | 4 | 0 | I am trying to use nltk in Python, but am receiving a pop-up error (Windows) saying that I am missing a drive at the moment I call import nltk
Does anyone know why or how to fix this?
The error is below:
"There is no disk in the drive. Please insert a disk into drive \Device\Harddisk4\DR4." | Drive issue with python NLTK | 0.066568 | 0 | 0 | 895 |
38,985,796 | 2016-08-16T23:08:00.000 | 0 | 0 | 1 | 0 | python,class,instance | 38,985,920 | 1 | false | 0 | 0 | If you are running the files one after another with the interpreter, then the counter will reset once you run the second file, since everything from the first file's run will have exited memory. If you need data to persist between running two different files, your best bet is to write it to an external file in the first file and then read it in the second. | 1 | 0 | 0 | So I have a python class Driver which has a method drive which essentially runs a loop forever. I have it set up so that with every new instance of the class, a counter goes up. So if I make two separate instances of the class in 2 different files and run them one after the other, will the one that runs second register that it's a 2nd instance of the class, and the counter go up to 2? Or does this only work for instance calls in the same file? | How to register class instance count in two separate files? | 0 | 0 | 0 | 40
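A minimal illustration of the external-file suggestion above; counter.txt is a made-up filename shared by both scripts, and each new Driver instance could call this helper to get a number that survives separate interpreter runs:

    import os

    COUNTER_FILE = "counter.txt"    # shared between the two scripts

    def next_instance_number():
        count = 0
        if os.path.exists(COUNTER_FILE):
            with open(COUNTER_FILE) as f:
                count = int(f.read().strip() or 0)
        count += 1
        with open(COUNTER_FILE, "w") as f:
            f.write(str(count))
        return count

    # In Driver.__init__:  self.instance_number = next_instance_number()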
38,987,289 | 2016-08-17T02:41:00.000 | 0 | 0 | 0 | 1 | python,macos,pygame,osx-elcapitan | 42,425,154 | 2 | false | 0 | 0 | Okie dokie, I figured it out. You have to download a version of IDLE that is 2.7.(any). IMPORTANT!!! The version has to be 32-bit, not 64-bit. Just search the IDLE website for IDLE 2.7.12 32-bit and download that. Finally, download Pygame for Python 2.7.
Thanks, everyone that helped out!
P.S. Some IDLE versions didn't work for me. However, 2.7.8 and 2.7.13 did | 2 | 0 | 0 | I started programming a game on a Mac. Then, I brought the same EXACT code to another Mac.
I got many, many different errors with Pygame saying it wasn't installed, EVEN THOUGH IT WAS!
Anyway, I fixed those errors, then I went to go run the module and the window appeared then it crashed and gave me this message:
IDLE's subprocess didn't make connection. Either IDLE can't start a
subprocess or personal firewall software is blocking the connection
I never got this message before. However, it continues to crash. I have killed idle using the Activity Monitor. There weren't any files in the directory. I have deleted all of the Python files that I have created.
Trashed every .pyc file. The Mac I am using is on El Capitan; Python is at 2.7.12. Like I said, the code has not changed AT ALL from the first computer.
However, games that are pre-installed with IDLE work perfectly. I have moved the program to the same folders as the games. I copied the content from my program to another file, still nothing.
All help is appreciated, thank you :) | IDLE's subprocess didn't make connection. Either IDLE can't start a subprocess or personal firewall software is blocking the connection | 0 | 0 | 0 | 5,220 |
38,987,289 | 2016-08-17T02:41:00.000 | 0 | 0 | 0 | 1 | python,macos,pygame,osx-elcapitan | 39,058,369 | 2 | false | 0 | 0 | The most likely reason that your getting this error, is because you're
not the administrator of the computer and you're trying to run a script from your local disk. There are a few things that you could do to solve this.
1. Move the .py file:
Before jumping to the method below, simply try moving your Python file to a different location on your drive. Then try running the script with the Python IDLE. If your script still won't run, or you must have the script on your local drive, see the second method below.
2. Run the script from the command prompt\terminal:
To run the script from your command prompt\terminal, first find the path to your python executable. In my case mine is:
C:\Users\[insert user name here]\AppData\Local\Programs\Python\Python35\python.exe
Copy and paste the entire path this into your command prompt\terminal window. Next, find the path to your python file. For an example, the path to my script is:
C:\test.py
It is important to note that your path to your python executable cannot contain spaces.
Next, copy and paste the path to your Python file into your command prompt\terminal window. When finished, the command you made should look something like this:
C:\Users\[insert user name here]\AppData\Local\Programs\Python\Python35\python.exe C:\test.py
Next, press enter and watch your Python script run. | 2 | 0 | 0 | I started programming a game on a Mac. Then, I brought the same EXACT code to another Mac.
I got many, many different errors with Pygame saying it wasn't installed, EVEN THOUGH IT WAS!
Anyway, I fixed those errors, then I went to go run the module and the window appeared then it crashed and gave me this message:
IDLE's subprocess didn't make connection. Either IDLE can't start a
subprocess or personal firewall software is blocking the connection
I never got this message before. However, it continues to crash. I have killed idle using the Activity Monitor. There weren't any files in the directory. I have deleted all of the Python files that I have created.
Trashed every .pyc file. The Mac I am using is on El Capitan; Python is at 2.7.12. Like I said, the code has not changed AT ALL from the first computer.
However, games that are pre-installed with IDLE work perfectly. I have moved the program to the same folders as the games. I copied the content from my program to another file, still nothing.
All help is appreciated, thank you :) | IDLE's subprocess didn't make connection. Either IDLE can't start a subprocess or personal firewall software is blocking the connection | 0 | 0 | 0 | 5,220 |
38,987,464 | 2016-08-17T03:05:00.000 | 1 | 0 | 0 | 0 | python,algorithm,graph,analytics,d3dimage | 38,987,964 | 2 | false | 0 | 0 | One approach would be to choose a threshold density, convert all voxels below this threshold to 0 and all above it to 1, and then look for the pair of 1-voxels whose shortest path is longest among all pairs of 1-voxels. These two voxels should be near the ends of the longest "rope", regardless of the exact shape that rope takes.
You can define a graph where there is a vertex for each 1-voxel and an edge between each 1-voxel and its 6 (or possibly 14) neighbours. You can then compute the lengths of the shortest paths between some given vertex u and every other vertex in O(|V|) time and space using breadth first search (we don't need Dijkstra or Floyd-Warshall here since every edge has weight 1). Repeating this for each possible start vertex u gives an O(|V|^2)-time algorithm. As you do this, keep track of the furthest pair so far.
If your voxel space has w*h*d cells, there could be w*h*d vertices in the graph (if every single voxel is a 1-voxel), so this could take O(w^2*h^2*d^2) time in the worst case, which is probably quite a lot. Luckily there are many ways to speed this up if you can afford a slightly imprecise answer:
Only compute shortest paths from start vertices that are at the boundary -- i.e. those vertices that have fewer than 6 (or 14) neighbours. (I believe this won't sacrifice an optimal solution.)
Alternatively, first "skeletonise" the graph by repeatedly getting rid of all such boundary vertices whose removal will not disconnect the graph.
A good order for choosing starting vertices is to first choose any vertex, and then always choose a vertex that was found to be at maximum possible distance from the last one (and which has not yet been tried, of course). This should get you a very good approximation to the longest shortest path after just 3 iterations: the furthest vertex from the start vertex will be near one of the two rope ends, and the furthest vertex from that vertex will be near the other end!
Note: If there is no full-voxel gap between distant points on the rope that are near each other due to bending, then the shortest paths will "short-circuit" through these false connections and possibly reduce the accuracy. You might be able to ameliorate this by increasing the threshold. OTOH, if the threshold is too high then the rope can become disconnected. I expect you want to choose the highest threshold that results in only 1 connected component. | 1 | 2 | 1 | I have protein 3D creo-EM scan, such that it contains a chain which bends and twists around itself - and has in 3-dimension space 2 chain endings (like continuous rope). I need to detect (x,y,z) location within given cube space of two or possibly multiplier of 2 endings. Cube space of scan is presented by densities in each voxel (in range 0 till 1) provided by scanning EM microscope, such that "existing matter" gives values closer to 1, and "no matter" gives density values closer to 0. I need a method to detect protein "rope" edges (possible "rope ending" definition is lack of continuation in certain tangled direction. Intuitively, I think there could be at least 2 methods: 1) Certain method in graph theory (I can't specify precisely - if you know one - please name or describe it. 2) Derivatives from analytic algebra - but again I can't specify specific attitude - so please name or explain one. Please specify computation complexity of suggested method. My project is implemented in Python. Please help. Thanks in advance. | How to detect ending location (x,y,z) of certain sequence in 3D domain | 0.099668 | 0 | 0 | 97 |
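A compact sketch of the BFS idea above on a thresholded voxel grid; the threshold value, the 6-connected neighbourhood, and the "start anywhere, jump to the farthest voxel, repeat" double sweep (the heuristic from the last bullet) are all assumptions of this sketch, and it presumes the thresholded rope forms a single connected component:

    from collections import deque

    import numpy as np

    NEIGHBOURS = ((1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1))

    def bfs_farthest(occupied, start):
        # Breadth-first search over 6-connected 1-voxels; every edge has weight 1.
        dist = {start: 0}
        queue = deque([start])
        farthest = start
        while queue:
            x, y, z = queue.popleft()
            for dx, dy, dz in NEIGHBOURS:
                nxt = (x + dx, y + dy, z + dz)
                if nxt in occupied and nxt not in dist:
                    dist[nxt] = dist[(x, y, z)] + 1
                    if dist[nxt] > dist[farthest]:
                        farthest = nxt
                    queue.append(nxt)
        return farthest

    def rope_endpoints(density, threshold=0.5):
        # Threshold the density cube, then do two BFS sweeps.
        occupied = set(map(tuple, np.argwhere(density > threshold)))
        start = next(iter(occupied))
        end_a = bfs_farthest(occupied, start)     # lands near one end of the rope
        end_b = bfs_farthest(occupied, end_a)     # farthest from end_a: the other end
        return end_a, end_b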
38,989,896 | 2016-08-17T06:49:00.000 | 0 | 0 | 1 | 0 | python,windows,scikit-learn | 46,249,521 | 2 | false | 0 | 0 | Old post, but the right answer is:
'sudo pip install -U numpy matplotlib --upgrade' for python2 or 'sudo pip3 install -U numpy matplotlib --upgrade' for python3 | 2 | 1 | 1 | I know how to install external modules using the pip command but for Scikit-learn I need to install NumPy and Matplotlib as well.
How can I install these modules using the pip command? | How to install scikit-learn | 0 | 0 | 0 | 2,435 |
38,989,896 | 2016-08-17T06:49:00.000 | -1 | 0 | 1 | 0 | python,windows,scikit-learn | 38,990,089 | 2 | false | 0 | 0 | Using Python 3.4, I run the following from the command line:
c:\python34\python.exe -m pip install package_name
So you would substitute "numpy" and "matplotlib" for 'package_name' | 2 | 1 | 1 | I know how to install external modules using the pip command but for Scikit-learn I need to install NumPy and Matplotlib as well.
How can I install these modules using the pip command? | How to install scikit-learn | -0.099668 | 0 | 0 | 2,435 |
38,990,070 | 2016-08-17T07:00:00.000 | 0 | 0 | 1 | 0 | python-2.7 | 38,990,213 | 2 | false | 0 | 0 | Files have a writelines method; use that instead of write | 1 | 0 | 0 | I tried it, but it comes out in the form of a list in the file. I want it in a format without the brackets and the commas. | How to convert a list in Python to a text file? | 0 | 0 | 0 | 961
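A tiny example of the writelines suggestion; note that writelines does not add line breaks itself, so a newline is appended to each item here:

    items = ["alpha", "beta", "gamma"]

    with open("out.txt", "w") as f:
        f.writelines(item + "\n" for item in items)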