Column schema (dtype, value range or string length):
- Q_Id: int64, 2.93k–49.7M
- CreationDate: string, length 23
- Users Score: int64, -10–437
- Other: int64, 0–1
- Python Basics and Environment: int64, 0–1
- System Administration and DevOps: int64, 0–1
- DISCREPANCY: int64, 0–1
- Tags: string, length 6–90
- ERRORS: int64, 0–1
- A_Id: int64, 2.98k–72.5M
- API_CHANGE: int64, 0–1
- AnswerCount: int64, 1–42
- REVIEW: int64, 0–1
- is_accepted: bool, 2 classes
- Web Development: int64, 0–1
- GUI and Desktop Applications: int64, 0–1
- Answer: string, length 15–5.1k
- Available Count: int64, 1–17
- Q_Score: int64, 0–3.67k
- Data Science and Machine Learning: int64, 0–1
- DOCUMENTATION: int64, 0–1
- Question: string, length 25–6.53k
- Title: string, length 11–148
- CONCEPTUAL: int64, 0–1
- Score: float64, -1–1.2
- API_USAGE: int64, 1–1
- Database and SQL: int64, 0–1
- Networking and APIs: int64, 0–1
- ViewCount: int64, 15–3.72M

Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | DISCREPANCY | Tags | ERRORS | A_Id | API_CHANGE | AnswerCount | REVIEW | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | DOCUMENTATION | Question | Title | CONCEPTUAL | Score | API_USAGE | Database and SQL | Networking and APIs | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
45,508,137 | 2017-08-04T13:28:00.000 | 0 | 0 | 0 | 1 | 0 | python,automation,fabric,devops | 0 | 45,508,534 | 0 | 1 | 0 | false | 0 | 0 | You can try to modify your command as follows:
cmdrun:"mysql -uroot -p{your_password} -e 'SELECT * FROM dfs_va2.artikel_trigger;' > /Users/admin/Documents/dbdump/$(hostname)_dump.csv" download:"/Users/johnc/Documents/Imports/$(hostname)_dump.csv"
$(hostname) returns the current machine's name, so all your files should be unique (provided, of course, that the machines have unique names).
Also, you don't need to navigate to /usr/local/mysql/bin every time; you can simply use mysql or the absolute path /usr/local/mysql/bin/mysql | 1 | 0 | 0 | 0 | I have to SSH into 120 machines, make a dump of a table in each database, and export it back onto my local machine every day (the database structure is the same for all 120 databases).
There isn't a field in the database from which I can extract a name to identify which machine the dump comes from, and it's vital that it can be identified, as it's for data analysis.
I'm using the Python tool Fabric to automate the process and export the CSV onto my machine.
fab -u PAI -H 10.0.0.35,10.0.0.XX,10.0.0.0.XX,10.0.0.XX -z 1
cmdrun:"cd /usr/local/mysql/bin && ./mysql -u root -p -e 'SELECT *
FROM dfs_va2.artikel_trigger;' >
/Users/admin/Documents/dbdump/dump.csv"
download:"/Users/johnc/Documents/Imports/dump.csv"
Above is what I've got working so far, but clearly they'll all be named "dump.csv". Can anyone give me a good idea of how to approach this? | Best way to automate file names of multiple databases | 1 | 0 | 1 | 1 | 0 | 62 |
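A minimal fabfile sketch of the hostname-based fix, assuming the Fabric 1.x Python API; the host list, credentials and paths are taken from the question or made up:

```python
# fabfile.py - hedged sketch: name each dump after the remote hostname so
# the 120 CSVs stay distinguishable (Fabric 1.x API; paths are assumptions).
from fabric.api import env, run, get

env.user = "PAI"
env.hosts = ["10.0.0.35", "10.0.0.36"]  # extend to all 120 machines

def dump_table():
    host = run("hostname").strip()  # the remote machine's name
    remote_csv = "/tmp/%s_dump.csv" % host
    run("mysql -u root -p -e 'SELECT * FROM dfs_va2.artikel_trigger;' > %s"
        % remote_csv)
    # %(host)s in local_path is interpolated by Fabric per connection
    get(remote_csv, "/Users/johnc/Documents/Imports/%(host)s_dump.csv")
```

Run it with fab dump_table; each machine then produces its own uniquely named file.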
45,513,191 | 2017-08-04T18:17:00.000 | 0 | 1 | 1 | 0 | 0 | python,python-3.x,module,reload | 0 | 47,103,371 | 0 | 2 | 0 | false | 0 | 0 | As an appendix to DYZ's answer: I use module reloading when analysing large data in an interactive Python interpreter.
Loading and preprocessing the data takes some time, and I prefer to keep it in memory (in a module, by the way) rather than restart the interpreter every time I change something in my modules. | 1 | 4 | 0 | 0 | I am going through a book on Python which spends a decent amount of time on module reloading, but does not explain well when it would be useful.
Hence, I am wondering, what are some real-life examples of when such technique becomes handy?
That is, I understand what reloading does. I am trying to understand how one would use it in real world.
Edit:
I am not suggesting that this technique is not useful. Instead, I am trying to learn more about its applications as it seems quite cool.
Also, I think the people who marked this question as a duplicate took no time to actually read the question and see how it differs from the proposed duplicate.
I am not asking HOW TO reload a module in Python. I already know how to do that. I am asking WHY one would want to reload a module in Python in real life. There is a huge difference in the nature of these questions. | Real-life module reloading in Python? | 0 | 0 | 1 | 0 | 0 | 92 |
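A tiny sketch of that workflow, with a hypothetical analysis module you keep editing while the expensive data stays in memory:

```python
import importlib
import analysis  # hypothetical module: edit analysis.py while this runs

data = list(range(10_000_000))  # stands in for slow-to-load data

print(analysis.summarize(data))
# ... edit analysis.py in your editor, then pick up the changes:
importlib.reload(analysis)
print(analysis.summarize(data))  # new code, same in-memory data
```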
45,517,437 | 2017-08-05T01:40:00.000 | 0 | 0 | 0 | 0 | 0 | javascript,php,python,angularjs,web | 0 | 45,517,697 | 0 | 1 | 0 | true | 1 | 0 | Download entire HTML content of the web page
For JS files, look for the <script> tags and download the js file using the src attribute or any inline scripts inside the tag
For CSS, use <link> tag to download the CSS file and also look for any <style> tags for inline CSS styling
For Images, scan for <img> tag and download the image using src attribute
The same approach can be used for audio/video, etc. | 1 | 1 | 0 | 1 | I am not talking about software like Surf Online, HTTrack, or the 'save page' feature of browsers; I need to know how it actually happens in the background. I am interested in making my own program to do that.
Also, is it possible to do this in JavaScript? If yes, which libraries or other helpful APIs should I look into? Please give me any kind of information about the topic; I couldn't find anything relevant to further my research. | How to get all files like: js,css,images from a website to save it for offline use? | 1 | 1.2 | 1 | 0 | 1 | 188 |
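A hedged sketch of the scan-and-download approach described in the answer, using requests and BeautifulSoup (both assumed installed; the URL is a placeholder):

```python
import os
from urllib.parse import urljoin
import requests
from bs4 import BeautifulSoup

page_url = "https://example.com/"
html = requests.get(page_url).text
soup = BeautifulSoup(html, "html.parser")

os.makedirs("offline", exist_ok=True)
with open("offline/index.html", "w", encoding="utf-8") as f:
    f.write(html)

# <script src>, <link href> (CSS) and <img src> point at the page's assets
for tag, attr in (("script", "src"), ("link", "href"), ("img", "src")):
    for node in soup.find_all(tag):
        ref = node.get(attr)
        if not ref:
            continue  # inline <script>/<style> has no URL to fetch
        asset_url = urljoin(page_url, ref)
        name = os.path.basename(asset_url.split("?")[0]) or "asset"
        with open(os.path.join("offline", name), "wb") as f:
            f.write(requests.get(asset_url).content)
```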
45,518,829 | 2017-08-05T06:05:00.000 | 1 | 0 | 0 | 0 | 0 | python-2.7,odoo-10 | 0 | 45,519,483 | 0 | 1 | 0 | false | 1 | 0 | You need to create a scheduler that runs every day. It should find the expiry dates ending in the next 10 days and trigger the mail for those records.
Create a scheduler
Find the expiry dates
Create email template
Trigger Email
Please refer to sale subscription; it has a subscription expiry reminder | 1 | 0 | 0 | 0 | How do I calculate an expiry date in Odoo 10, and how do I notify customers by email/SMS 10 days beforehand?
for example:-
If the expiry date is near, the customer gets a notification through mail or SMS.
Can anyone suggest any solution? | expiry date in odoo 10 | 0 | 0.197375 | 1 | 0 | 0 | 325 |
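A hedged Odoo 10 sketch of the cron method described in the answer; the model, field and mail-template names are all assumptions, and an ir.cron record (interval_type 'days', interval_number 1) would point at cron_send_expiry_reminders:

```python
from datetime import date, timedelta
from odoo import api, fields, models

class Subscription(models.Model):
    _name = 'my.subscription'  # hypothetical model carrying an expiry date

    expiry_date = fields.Date()

    @api.model
    def cron_send_expiry_reminders(self):
        # records whose expiry date falls within the next 10 days
        limit = date.today() + timedelta(days=10)
        expiring = self.search([
            ('expiry_date', '>=', fields.Date.to_string(date.today())),
            ('expiry_date', '<=', fields.Date.to_string(limit)),
        ])
        template = self.env.ref('my_module.expiry_reminder_email')
        for record in expiring:
            template.send_mail(record.id, force_send=True)
```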
45,527,497 | 2017-08-06T00:18:00.000 | 0 | 0 | 0 | 0 | 0 | python-2.7,sqlite,electron,node-gyp | 0 | 45,533,423 | 0 | 1 | 0 | false | 0 | 0 | This has been resolved....
Uninstalled Python 2.7.13, reinstalled it, and added the path to the PATH variable again; now the command 'python' works just fine. | 1 | 0 | 0 | 0 | I am trying to include sqlite3 in an Electron project I am getting my hands dirty with. I have never used Electron, nor Node before; excuse my ignorance. I understand that to do this on Windows, I need Python installed, I need to download sqlite3, and I need to install it.
As per the NPM sqlite3 page, I am trying to install it using npm install --build-from-source
It always fails with
unpack_sqlite_dep
'python' is not recognized as an internal or external command,
operable program or batch file.
I have Python 2.7 installed and the path has been added to environment variable PATH. I can verify that if I type 'python' in cmd, I get the same response. BUT, if I type 'py', it works....
So, my question is: how can I make node-gyp use the 'py' command instead of 'python' when trying to unpack sqlite3?
If this is not possible, how can I make 'python' an acceptable command to use?
I am using Windows 10 if this helps. Also, please let me know if I can do this whole procedure in a different way.
Thanks for any help! | Failing to install sqlite3 plugin for electron project on windows | 1 | 0 | 1 | 1 | 0 | 259 |
45,531,514 | 2017-08-06T11:23:00.000 | 0 | 0 | 0 | 0 | 0 | python,nltk | 1 | 45,533,385 | 0 | 2 | 0 | false | 0 | 0 | This is a well-known problem in NLP, and it is often referred to as Tokenization. I can think of two possible solutions:
try different NLTK tokenizers (e.g. twitter tokenizer), which maybe will be able to cover all of your cases
run Named Entity Recognition (NER) on your sentences. This allows you to recognise entities present in the text. It could work because it can recognise "heart rate" as a single entity, and thus as a single token. | 1 | 0 | 1 | 0 | I am having some trouble with NLTK's FreqDist. Let me give you some context first:
I have built a web crawler that crawls webpages of companies selling wearable products (smartwatches etc.).
I am then doing some linguistic analysis and for that analysis I am also using some NLTK functions - in this case FreqDist.
nltk.FreqDist works fine in general - it does the job and does it well; I don't get any errors etc.
My only problem is that the word "heart rate" comes up often and because I am generating a list of the most frequently used words, I get heart and rate separately to the tune of a few hundred occurrences each.
Now, of course, rate and heart can both occur without being used as "heart rate", but how do I count the occurrences of "heart rate" instead of just the two words separately, and in an accurate way? I don't want to subtract one from the other in my current Counters or anything like that.
Thank you in advance! | NLTK FreqDist counting two words as one | 0 | 0 | 1 | 0 | 0 | 1,205 |
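A hedged sketch of the first suggestion using NLTK's MWETokenizer, which merges a multi-word expression into one token before FreqDist counts it (assumes NLTK's punkt tokenizer data is downloaded):

```python
import nltk
from nltk.tokenize import MWETokenizer

text = "The heart rate monitor reports your heart rate and its battery level."
tokens = nltk.word_tokenize(text.lower())

mwe = MWETokenizer([("heart", "rate")], separator=" ")
merged = mwe.tokenize(tokens)  # ['the', 'heart rate', 'monitor', ...]

freq = nltk.FreqDist(merged)
print(freq["heart rate"])  # counted as one term, not as 'heart' plus 'rate'
```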
45,537,958 | 2017-08-07T00:35:00.000 | 0 | 0 | 0 | 0 | 0 | python,scripting,neural-network | 0 | 45,537,986 | 0 | 1 | 0 | false | 0 | 0 | Wrap it in a Python based web server listening on some agreed-on port. Hit it with HTTP requests when you want to supply a new file or retrieve results. | 1 | 0 | 1 | 0 | To be clear I have no idea what I'm doing here and any help would be useful.
I have a number of saved files, Keras neural network models and dataframes. I want to create a program that loads all of these files so that the data is there and waiting for when needed.
Any data sent to the algorithm will be standardised and fed into the neural networks.
The algorithm may be called hundreds of times in quick succession and so I don't want to have to import the model and standardisation parameters every time as it will slow everything down.
As I understand it the plan is to have this program running in the background on a server and then somehow call it when required.
How would I go about setting up something like this? I'm asking here first because I've never attempted anything like this before and I don't even know where to start. I'm really hoping you can help me find some direction or maybe provide an example of something similar. Even a search term that would help me research would be useful.
Many thanks | Python have program running and ready for when called | 1 | 0 | 1 | 0 | 0 | 28 |
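A minimal sketch of the suggested pattern: load the heavy objects once at startup and serve predictions over HTTP with Flask. The model path and the JSON payload shape are assumptions:

```python
from flask import Flask, request, jsonify
from keras.models import load_model
import numpy as np

app = Flask(__name__)
model = load_model("model.h5")  # loaded once, then kept in memory

@app.route("/predict", methods=["POST"])
def predict():
    # the caller POSTs {"features": [...]} with already-standardised values
    features = np.array(request.json["features"], ndmin=2)
    prediction = model.predict(features)
    return jsonify(result=prediction.tolist())

if __name__ == "__main__":
    app.run(port=5000)  # clients hit http://localhost:5000/predict
```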
45,575,016 | 2017-08-08T17:56:00.000 | 0 | 0 | 1 | 0 | 0 | python,path,seaborn | 0 | 45,575,093 | 0 | 1 | 0 | false | 0 | 0 | You probably want to use
plot_name.savefig(plot_path)
instead of
plot_name.savefig('plot_path') (note: no quotes; the quoted version passes the literal string 'plot_path' rather than your variable). | 1 | 1 | 0 | 0 | I created a path variable for my project using
proj_path = pathlib.Path('C:/users/data/lives/here')
I now want to save a seaborn plot as png so I created a new path variable for the file
plot_path = proj_path.joinpath('plot_name.png')
but when I call plot_name.savefig(plot_path) it returns
TypeError: Object does not appear to be a 8-bit string path or a Python file-like object
What path format is accepted by savefig and how do I convert plot_path? | What format of path should be used for savefig? | 0 | 0 | 1 | 0 | 0 | 670 |
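A tiny end-to-end sketch; on Matplotlib versions that predate pathlib support, wrapping the Path in str() avoids this TypeError:

```python
import pathlib
import matplotlib.pyplot as plt

proj_path = pathlib.Path("C:/users/data/lives/here")
plot_path = proj_path / "plot_name.png"

fig, ax = plt.subplots()
ax.plot([1, 2, 3], [4, 5, 6])
fig.savefig(str(plot_path))  # str() works on any Matplotlib version
```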
45,577,630 | 2017-08-08T20:38:00.000 | 25 | 0 | 0 | 0 | 0 | python,python-3.x,pandas | 0 | 45,577,693 | 0 | 1 | 0 | false | 0 | 0 | print(df2[['col1', 'col2', 'col3']].head(10)) will select the top 10 rows from columns 'col1', 'col2', and 'col3' from the dataframe without modifying the dataframe. | 1 | 11 | 1 | 0 | How do you print (in the terminal) a subset of columns from a pandas dataframe?
I don't want to remove any columns from the dataframe; I just want to see a few columns in the terminal to get an idea of how the data is pulling through.
Right now, I have print(df2.head(10)), which prints the first 10 rows of the dataframe, but how do I choose a few columns to print? Can you choose columns by their index number and/or name? | Print sample set of columns from dataframe in Pandas? | 0 | 1 | 1 | 0 | 0 | 31,744 |
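A tiny demonstration of the answer with a throwaway DataFrame (column names are placeholders):

```python
import pandas as pd

df2 = pd.DataFrame({"col1": range(20), "col2": range(20, 40),
                    "col3": range(40, 60), "col4": range(60, 80)})

print(df2[["col1", "col3"]].head(10))  # select columns by name
print(df2.iloc[:10, [0, 2]])           # or by column position
```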
45,601,984 | 2017-08-09T23:10:00.000 | 0 | 0 | 1 | 0 | 0 | python | 0 | 45,602,412 | 0 | 2 | 0 | false | 0 | 0 | You can set dependencies on another python package (e.g. using install_requires in your setup.py), but if your code relies on a specific non-Python binary you cannot have that installed automatically as part of the pip install process.
You could create a native package for your operating system, which would allow you to set dependencies on other system packages such that when your Python script was installed with apt/yum/dnf/etc., the necessary binary would be installed as well. | 1 | 0 | 0 | 0 | I am trying to make a python package that relies on a command line utility to work. I am wondering if anyone knows how to make pip install that command line utility when pip installs my package. The only documentation I can seem to find is on dependency_links, which looks to be deprecated.
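For the Python-side half of the answer, a minimal setup.py sketch declaring a dependency on another pip-installable package; 'some-cli-tool' is a made-up name standing in for a command line utility that is itself packaged for pip:

```python
from setuptools import setup

setup(
    name="mypackage",
    version="0.1",
    py_modules=["mypackage"],
    install_requires=[
        "some-cli-tool>=1.0",  # hypothetical pip-installable CLI dependency
    ],
)
```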
45,608,490 | 2017-08-10T08:36:00.000 | 0 | 0 | 0 | 1 | 0 | python,celery | 0 | 45,608,632 | 0 | 1 | 0 | false | 1 | 0 | Try a web server like flask that forwards requests to the celery workers. Or try a server that reads from a queue (SQS, AMQP,...) and does the same.
No matter the solution you choose, you end up with 2 services: the celery worker itself and the "server" that calls the celery tasks. They both share the same code but are launched with different command lines.
Alternatively, if the task code is small enough, you could just import the git repository in your code and call it from there | 1 | 1 | 0 | 0 | Imagine that I've written a Celery task and put the code on the server; however, when I want to send the task to the server, I need to reuse the code written before.
So my question is: are there any methods to separate the code between server and client? | how to separate celery code into server and client side? | 1 | 0 | 1 | 0 | 1 | 222
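A hedged client-side sketch: send_task calls a task by its registered name, so the client only needs the broker URL, not the worker's source code (broker URL and task name are placeholders):

```python
from celery import Celery

client = Celery("client", broker="amqp://guest@broker-host//")

# the worker process registered a task named 'tasks.process_order'
result = client.send_task("tasks.process_order", args=[42])
print(result.get(timeout=30))  # requires a result backend to be configured
```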
45,612,349 | 2017-08-10T11:22:00.000 | 1 | 0 | 0 | 1 | 0 | python,django,heroku | 0 | 45,612,723 | 0 | 1 | 0 | false | 1 | 0 | I suggest you create a Django management command for your project, like python manage.py run_this_once_a_day, and use the Heroku Scheduler add-on to trigger it. | 1 | 0 | 0 | 0 | I have deployed a django app on heroku. So far it works fine. Now I have to schedule a task (it's in the form of a python script) once a day. The job would take the data from the heroku database, perform some calculations and post the results back into the database. I have looked at some solutions for this; usually they are using rails on heroku. I am confused whether I should do it using the cron-jobs extension available in django or using the scheduled-jobs option in heroku. Since the application is using heroku I thought of using that only, but I don't get any help on how to add python jobs to it. Kindly help. | running scheduled job in django app deployed on heroku | 0 | 0.197375 | 1 | 0 | 0 | 397
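A skeleton of the suggested management command; the file would live at myapp/management/commands/run_this_once_a_day.py (path and logic are illustrative), and Heroku Scheduler would run python manage.py run_this_once_a_day daily:

```python
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = "Daily job: read data from the database, compute, write results back"

    def handle(self, *args, **options):
        # query the models, do the calculations, save the results here
        self.stdout.write("daily job finished")
```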
45,625,918 | 2017-08-11T02:12:00.000 | 2 | 0 | 0 | 0 | 0 | python,python-3.x,tkinter,width | 0 | 45,626,120 | 0 | 1 | 0 | true | 0 | 1 | The width will never be bigger than the parent frame.
You can call winfo_reqwidth to get the requested width of the widget. I'm not sure if that will give you the answer you are looking for, since I'm not entirely sure what the real problem is that you are trying to solve. | 1 | 2 | 0 | 0 | Say I have a Frame in tkinter with a set width and height (placed using the place method), and I add a child Frame to that parent Frame (using the pack method). In that child Frame, I add an arbitrary amount of widgets, so that the child Frame's width is dynamically set depending on its children.
My question is: how do I get the width of the child Frame if its width is greater than its parent's?
I know there's a winfo_width method to get the width of a widget, but for the child it only returns the parent Frame's width when the child is wider than the parent. In other words, how do I get the actual width of a widget, not just the width of the part of the widget that is displayed? | Get width of child in tkinter | 0 | 1.2 | 1 | 0 | 0 | 174 |
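A hedged sketch of the winfo_reqwidth suggestion: the requested width reflects what the children need, even when the parent clips the child frame:

```python
import tkinter as tk

root = tk.Tk()
parent = tk.Frame(root, width=100, height=50)
parent.place(x=0, y=0)
parent.pack_propagate(False)  # keep the fixed 100px width

child = tk.Frame(parent)
child.pack()
for i in range(10):
    tk.Label(child, text="widget %d" % i).pack(side="left")

root.update()                  # let geometry be computed before querying
print(child.winfo_width())     # clipped to the parent's width
print(child.winfo_reqwidth())  # the full width the child asked for
```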
45,630,562 | 2017-08-11T08:36:00.000 | 1 | 0 | 0 | 0 | 0 | python,mysql,database,database-design,amazon-ec2 | 0 | 45,643,778 | 0 | 2 | 0 | false | 1 | 0 | The problem is that you don't have access to the RDS filesystem, and therefore cannot upload a CSV there (or import one from disk).
Modify your Python scraper to connect to the DB directly and insert the data there. | 1 | 0 | 0 | 1 | I have a Python scraper that I run periodically on my free tier AWS EC2 instance using cron; it outputs a csv file every day containing around 4-5000 rows with 8 columns. I have been ssh-ing into it from my home Ubuntu OS and adding the new data to a SQLite database, which I can then use to extract the data I want.
Now I would like to try the free tier AWS MySQL database so I can have the database in the Cloud and pull data from it from my terminal on my home PC. I have searched around and found no direct tutorial on how this could be done. It would be great if anyone that has done this could give me a conceptual idea of the steps I would need to take. Ideally I would like to automate the updating of the database as soon as my EC2 instance updates with a new csv table. I can do all the de-duping once the table is in the aws MySQL database.
Any advice or links to tutorials on this are most welcome. As I stated, I have searched quite a bit for guides but haven't found anything on this. Perhaps the concept is completely wrong and there is an entirely different way of doing it that I am not seeing? | Exported scraped .csv file from AWS EC2 to AWS MYSQL database | 0 | 0.099668 | 1 | 1 | 0 | 158 |
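A hedged sketch of inserting scraped rows straight into the RDS MySQL instance with PyMySQL (pip install pymysql); the endpoint, credentials and table schema are made up:

```python
import pymysql

conn = pymysql.connect(
    host="mydb.xxxxxxxx.us-east-1.rds.amazonaws.com",
    user="admin", password="secret", db="scraperdb",
)
rows = [("2017-08-11", "value1"), ("2017-08-11", "value2")]  # scraped data
with conn.cursor() as cur:
    cur.executemany(
        "INSERT INTO results (scraped_on, payload) VALUES (%s, %s)", rows)
conn.commit()
conn.close()
```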
45,634,854 | 2017-08-11T12:07:00.000 | 0 | 1 | 0 | 0 | 0 | python,raspberry-pi,pyserial | 0 | 45,635,399 | 0 | 1 | 0 | false | 1 | 0 | You might be able to tell whether the device is physically plugged in by checking the status of one of the RS232 control lines - CTS, DSR, RI, or CD (all of which are exposed as properties in PySerial). Not all USB-serial adapters support any of these.
If the only connection to the device is the TX/RX lines, your choices are extremely limited:
Send a command to the device and see if it responds. Hopefully its protocol includes a do-nothing command for this purpose.
If the device sends data periodically without needing an explicit command, save a timestamp whenever data is received, and return False if it's been significantly longer than the period since the last reception. | 1 | 0 | 0 | 0 | So I am working on a project that has a Raspberry Pi connected to a Serial Device via a USB to Serial Connector. I am trying to use PySerial to track the data being sent over the connected Serial device, however there is a problem.
Currently, I have my project set up so that every 5 seconds it calls a custom port.open() method I have created, which returns True if the port is actually open. This is so that I don't have to have the Serial Device plugged in when I initially go to start the program.
However I'd also like to set it up so that the program can also detect when my serial device is disconnected, and then reconnected. But I am not sure how to accomplish this.
If I attempt to use the PySerial method isOpen() to check if the device is there, it always returns True as long as the USB-to-Serial connector is plugged in, even if I have no serial device hooked up to the connector itself. | PySerial, check if Serial is Connected | 0 | 0 | 1 | 0 | 0 | 1,786 |
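A hedged sketch of the second option above: remember when data last arrived and treat a long silence as a disconnect. The port name, baud rate and message period are assumptions; port.dsr / port.cts would expose the control lines, if your adapter wires them:

```python
import time
import serial  # pyserial

EXPECTED_PERIOD = 2.0  # seconds between device messages (assumption)
port = serial.Serial("/dev/ttyUSB0", 9600, timeout=0.1)
last_seen = time.monotonic()

def device_alive():
    return time.monotonic() - last_seen < 3 * EXPECTED_PERIOD

while True:
    if port.read(64):              # returns b'' after the 0.1 s timeout
        last_seen = time.monotonic()
    if not device_alive():
        print("serial device appears disconnected")
```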
45,676,247 | 2017-08-14T13:57:00.000 | 0 | 0 | 1 | 1 | 0 | python,python-3.x,pyaudio | 1 | 45,676,889 | 0 | 3 | 0 | true | 0 | 0 | Check in the documentation of pyaudio if it is compatible with your python version
Some modules which are not compatible may install without issues, yet still won't work when you try to access them. | 2 | 1 | 0 | 0 | I ran pip install pyaudio in my terminal and got this error:
Command "/home/oliver/anaconda3/bin/python -u -c "import setuptools,
tokenize;file='/tmp/pip-build-ub9alt7s/pyaudio/setup.py';f=getattr(tokenize, 'open', open)(file);code=f.read().replace('\r\n',
'\n');f.close();exec(compile(code, file, 'exec'))" install
--record /tmp/pip-e9_md34a-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-ub9alt7s/pyaudio/
So I ran sudo apt-get install python-pyaudio python3-pyaudio
which seemed to work.
Then in jupyter:
import pyaudio
error:
ModuleNotFoundError: No module named 'pyaudio'
Can anyone help me work out this problem? I am not familiar with Ubuntu and its commands, paths, etc., as I've only been using it a few months.
If you need more information, let me know what, and how. Thanks | ModuleNotFoundError: No module named 'pyaudio' | 0 | 1.2 | 1 | 0 | 0 | 5,269 |
45,676,247 | 2017-08-14T13:57:00.000 | 0 | 0 | 1 | 1 | 0 | python,python-3.x,pyaudio | 1 | 59,344,854 | 0 | 3 | 0 | false | 0 | 0 | If you are using Windows, run these commands in the terminal:
pip install pipwin
pipwin install pyaudio | 2 | 1 | 0 | 0 | I ran pip install pyaudio in my terminal and got this error:
Command "/home/oliver/anaconda3/bin/python -u -c "import setuptools,
tokenize;file='/tmp/pip-build-ub9alt7s/pyaudio/setup.py';f=getattr(tokenize, 'open', open)(file);code=f.read().replace('\r\n',
'\n');f.close();exec(compile(code, file, 'exec'))" install
--record /tmp/pip-e9_md34a-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-ub9alt7s/pyaudio/
So I ran sudo apt-get install python-pyaudio python3-pyaudio
which seemed to work.
Then in jupyter:
import pyaudio
error:
ModuleNotFoundError: No module named 'pyaudio'
Can anyone help me work out this problem? I am not familiar with Ubuntu and its commands, paths, etc., as I've only been using it a few months.
If you need more information, let me know what, and how. Thanks | ModuleNotFoundError: No module named 'pyaudio' | 0 | 0 | 1 | 0 | 0 | 5,269 |
45,688,168 | 2017-08-15T07:22:00.000 | 0 | 0 | 0 | 0 | 0 | python,excel,pandas,openpyxl,xlsxwriter | 0 | 45,689,273 | 0 | 1 | 0 | false | 0 | 0 | I have recently been working with openpyxl. Generally, if one cell has a uniform style (font/color), you can get the style from cell.font: cell.font.b means bold and cell.font.i means italic; cell.font.color contains the color object.
But if the style differs within one cell, this cannot help; cell.value gives only the plain text. | 1 | 0 | 0 | 0 | I'm working a lot with Excel xlsx files which I convert using Python 3 into Pandas dataframes, wrangle the data using Pandas and finally write the modified data into xlsx files again.
The files also contain text data which may be formatted. While most modifications (which I have done) have been pretty straightforward, I experience problems when it comes to partly formatted text within a single cell:
Example of cell content: "Medical device with remote control and a Bluetooth module for communication"
The formatting in the example is bold and italic but may also be a color.
So, I have two questions:
Is there a way of preserving such formatting in xlsx files when importing the file into a Python environment?
Is there a way of creating/modifying such formatting using a specific python library?
So far I have been using Pandas, OpenPyxl, and XlsxWriter but have not succeeded yet. So I shall appreciate your help!
As pointed out below in a comment and the linked question OpenPyxl does not allow for this kind of formatting:
Any other ideas on how to tackle my task? | Modifying and creating xlsx files with Python, specifically formatting single words of a e.g. sentence in a cell | 0 | 0 | 1 | 1 | 0 | 190 |
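A hedged sketch of reading the uniform per-cell formatting that openpyxl does expose (rich text within a single cell is not accessible, as the answer notes); the filename is a placeholder:

```python
from openpyxl import load_workbook

wb = load_workbook("devices.xlsx")
ws = wb.active
cell = ws["A1"]

print(cell.value)       # plain text only
print(cell.font.b)      # True if the whole cell is bold
print(cell.font.i)      # True if the whole cell is italic
print(cell.font.color)  # Color object for the whole cell, or None
```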
45,711,940 | 2017-08-16T11:09:00.000 | 1 | 0 | 1 | 0 | 0 | python-3.x | 0 | 45,712,201 | 0 | 2 | 0 | true | 0 | 0 | Except the weirdness of the question :)
What you did is a correct way, but each time you call a new program your stack gets bigger, and after a while the stack is full and you get a stack overflow (no, you don't get this site :p), just the error this site is named after, which is what you encountered.
If you really want to keep the system busy, I would try to do something heavy inside one program. | 1 | 0 | 0 | 0 | I was wondering how to make program1 run program2 and program2 run program1 and so on. I have already tried using os.system() on each program to run the other, but a really long line of errors comes up saying maximum recursion depth reached
Thanks | Can I make an endless loop of python programs? | 0 | 1.2 | 1 | 0 | 0 | 36 |
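For completeness, a hedged sketch of how the ping-pong can run without the recursion error: each script launches its counterpart detached and then exits, so no call stack accumulates (file names follow the question's setup):

```python
# program1.py - program2.py mirrors this with the names swapped
import subprocess
import sys

print("program1 running")
subprocess.Popen([sys.executable, "program2.py"])  # launch without waiting
sys.exit(0)  # this process ends, so nothing piles up on a stack
```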
45,725,440 | 2017-08-17T01:51:00.000 | 1 | 0 | 0 | 0 | 0 | python,opencv,matrix,camera,3d-reconstruction | 0 | 45,727,419 | 0 | 1 | 0 | true | 0 | 0 | You already have a code for camera calibration and printing a camera matrix in your OpenCV installation. Go to this path if you are on windows -
C:\opencv\sources\samples\python
There you have a file called calibrate. | 1 | 1 | 1 | 0 | I want to achieve a 3D-reconstruction algorithm with SfM,
but how should I set the parameters of the camera matrix?
I have two cameras, and both focal lengths are known.
And how about the rotation matrix and translation matrix from the world view?
I use Python. | How can i obtain Camera Matrix in 3Dreconstruction? | 0 | 1.2 | 1 | 0 | 0 | 574
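A hedged sketch of the standard chessboard recipe behind that sample, using OpenCV's calibrateCamera; the image folder and pattern size are placeholders:

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)  # inner chessboard corners
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib/*.jpg"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print(K)  # the 3x3 camera (intrinsic) matrix
# rvecs/tvecs give each view's rotation and translation from the world frame
```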
45,726,623 | 2017-08-17T04:32:00.000 | 0 | 0 | 1 | 1 | 0 | python | 0 | 45,726,701 | 0 | 2 | 0 | false | 0 | 0 | Brew installs packages into /usr/local/Cellar and then links them to /usr/local/bin (i.e. /usr/local/bin/python3). In my case, I just make sure to have /usr/local/bin in my PATH prior to /usr/bin.
export PATH=/usr/local/bin:$PATH
By using brew, your new packages will be installed to:
/usr/local/Cellar/python
or
/usr/local/Cellar/python3
Package install order shouldn't matter. | 1 | 1 | 0 | 0 | As we all know, Apple ships OSX with Python, but it locks it away.
This forces me, and anyone else that uses Python, to install another version and start the painful process of installing with pip using 100 tricks and cheats.
Now, I would like to understand how to do this right; and sorry, but I can't go the virtualenv route, due to the fact that I run this for a build server running Jenkins, and I have no idea how to set that up correctly.
Could you please clarify for me these?
How do you tell OSX to run the python from brew, instead than system one?
Where is the official python living, and where are the packages installed, when I run pip install with and without the -U and/or the --user option?
In which order should I install a bunch of packages, starting from scratch on a fresh OSX machine, so I can set it up reliably every time?
Mostly I use opencv, scikit-image, numpy, scipy and pillow. These are giving me so many issues, and I can't get a reliable setup where Jenkins is happy to run the Python code using these libraries. | Questions about double install of Python on OSX | 0 | 0 | 1 | 0 | 0 | 37 |
45,731,787 | 2017-08-17T09:47:00.000 | 1 | 0 | 0 | 0 | 1 | python-3.x,tensorflow,mnist | 1 | 45,747,350 | 0 | 1 | 0 | false | 0 | 0 | I have solved this problem.
I changed line 204 and line 210 of mnist_with_summaries.py to the local directories, and I created some folders.
Or, without changing the code, create these folders on the local disk of the running environment, matching the paths in the code:
line 204: create /tmp/tensorflow/mnist/input_data
line 210: create /tmp/tensorflow/mnist/logs/mnist_with_summaries | 1 | 1 | 1 | 0 | When running this example, "python mnist_with_summaries.py", the following error occurred:
Detailed errors:
Traceback (most recent call last):
File "mnist_with_summaries.py", line 214, in
tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
File "D:\ProgramData\Anaconda2\envs\Anaconda3\lib\site-packages\tensorflow\python\platform\app.py"
, line 48, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "mnist_with_summaries.py", line 186, in main
tf.gfile.MakeDirs(FLAGS.log_dir)
File "D:\ProgramData\Anaconda2\envs\Anaconda3\lib\site-packages\tensorflow\python\lib\io\file_io.p
y", line 367, in recursive_create_dir
pywrap_tensorflow.RecursivelyCreateDir(compat.as_bytes(dirname), status)
File "D:\ProgramData\Anaconda2\envs\Anaconda3\lib\contextlib.py", line 89, in exit
next(self.gen)
File "D:\ProgramData\Anaconda2\envs\Anaconda3\lib\site-packages\tensorflow\python\framework\errors
_impl.py", line 466, in raise_exception_on_not_ok_status
pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.NotFoundError: Failed to create a directory: /tmp\tensorflow
Running environment: Windows 7 + Anaconda3 + Python 3.6 + TensorFlow 1.3.0
Why? Any idea on how to resolve this problem? Thank you! | When running " python mnist_with_summaries.py ", it has occurred the error | 0 | 0.197375 | 1 | 0 | 0 | 249
45,737,486 | 2017-08-17T14:11:00.000 | 1 | 0 | 1 | 0 | 0 | python,mongodb,python-3.x,pymongo,pymongo-3.x | 0 | 45,737,589 | 0 | 1 | 0 | true | 0 | 0 | You should be able to just insert into the collection in parallel without needing to do anything special. If you are updating documents then you might find there are issues with locking, and depending on the storage engine which your MongoDB is using there may be collection locking, but this should not affect how you write your python script. | 1 | 1 | 0 | 1 | I would like to know how to insert into same MongoDb collection from different python scripts running at the same time using pymongo
Any help or redirecting guidance would be very appreciated, because I couldn't find any clear documentation about it in pymongo or MongoDB yet
Thanks in advance | Writing in parallel to MongoDb collection from python | 0 | 1.2 | 1 | 1 | 0 | 754
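A hedged sketch: each of your scripts can simply hold its own client and insert; MongoDB serialises the concurrent writes (connection string and names are placeholders):

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
collection = client["mydb"]["events"]

# run this same code from several scripts at once - no extra coordination
collection.insert_one({"source": "script_a", "value": 42})
```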
45,740,126 | 2017-08-17T16:08:00.000 | 1 | 0 | 0 | 0 | 0 | python,distributed-computing,mt4 | 0 | 45,751,233 | 0 | 5 | 0 | false | 0 | 0 | Several options:
exchange with files (write data from mt4 into a file for python, another folder in opposite direction with buy/sell instructions);
0MQ (or something like that) as a better option. | 1 | 2 | 0 | 0 | I'm using a MetaTrader4 Terminal and I'm an experienced Python developer.
Does anyone know how I can connect MT4 and Python? I want to:
- connect to MT4
- read USD/EUR data
- make order (buy/sell)
Does anyone know of some library, a page with instructions, or documentation, or at least have an idea of how to do that?
I googled through the first 30 pages but didn't find anything useful. | How to control MT4 from python? | 0 | 0.039979 | 1 | 0 | 0 | 12,826
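A hedged sketch of the Python side of the 0MQ option (pip install pyzmq); the MT4 side would need a matching ZeroMQ Expert Advisor, and the port and message format here are pure assumptions:

```python
import zmq

ctx = zmq.Context()
sock = ctx.socket(zmq.REQ)
sock.connect("tcp://localhost:5555")  # port the hypothetical MT4 EA listens on

sock.send_string("RATES EURUSD")          # ask MT4 for the current quote
print(sock.recv_string())                 # e.g. "EURUSD 1.1723 1.1725"

sock.send_string("ORDER BUY EURUSD 0.1")  # place a trade request
print(sock.recv_string())
```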
45,747,880 | 2017-08-18T03:08:00.000 | 4 | 0 | 0 | 0 | 0 | python,openerp,qweb | 0 | 45,753,702 | 0 | 1 | 0 | true | 1 | 0 | The best way to do this is to have a popup target=new and have a statusbar on top right which will be clickable/not readonly (so that the user can go back). And depending on the state of your record, show the appropriate fields
You can of course create a popup, and when the user clicks next destroy that popup and create another one but that doesn't seem like a good idea to me. | 1 | 0 | 0 | 0 | I'm trying to create a wizard which has several pages.
I know how to pass to 'target' new or current, to pass the action to a form or tree view, but what I actually need, before that, is to create several steps which will be on different "views" of this wizard, like a form with 'next' and 'back' buttons.
Is there some example code I can look at for that?
I've searched the default addons, with no success. | Add button next page - wizard - Odoo v8 | 0 | 1.2 | 1 | 0 | 0 | 639
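A hedged Odoo 8 sketch of the statusbar approach: one TransientModel with a state field (rendered with widget='statusbar' in the form view); Next just moves the state and re-opens the same record. All names are made up:

```python
from openerp import api, fields, models

class MyWizard(models.TransientModel):
    _name = 'my.wizard'

    state = fields.Selection(
        [('step1', 'Step 1'), ('step2', 'Step 2'), ('done', 'Done')],
        default='step1')

    @api.multi
    def action_next(self):
        self.state = 'step2'
        return self._reopen()

    def _reopen(self):
        # re-open this very record so the user stays in the popup
        return {
            'type': 'ir.actions.act_window',
            'res_model': self._name,
            'res_id': self.id,
            'view_mode': 'form',
            'target': 'new',
        }
```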
45,760,932 | 2017-08-18T16:13:00.000 | 0 | 1 | 1 | 0 | 1 | python,ipython | 0 | 60,616,311 | 0 | 2 | 0 | false | 0 | 0 | %run myprogram works for Python scripts/programs.
To run any arbitrary programs, use ! as a prefix, e.g. !myprogram.
Many common shell commands/programs (cd, ls, less, ...) are also registered as IPython magic commands (run via %cd, %ls, ...), and also have registered aliases, so you can directly run them without any prefix, just as cd, ls, less, ... | 1 | 1 | 0 | 0 | I'm in IPython and want to run a simple python script that I've saved in a file called "test.py".
I'd like to use the %run test.py command to execute it inside IPython, but I don't know to which folder I need to save my test.py.
Also, how can I change that default folder to something else, for example C:\Users\user\foldername ?
I tried with the .ipython folder (original installation folder) but that's not working. | IPython - running a script with %run command - saved to which folder? | 0 | 0 | 1 | 0 | 0 | 475 |
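A short IPython session illustrating both answers: %run resolves relative paths against the current working directory, which the %pwd and %cd magics inspect and change:

```python
# typed at the IPython prompt (these are IPython magics, not plain Python)
%pwd                          # where %run will look for test.py
%cd C:/Users/user/foldername  # change the working directory
%run test.py                  # now runs C:/Users/user/foldername/test.py
```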
45,764,187 | 2017-08-18T19:55:00.000 | 0 | 1 | 0 | 0 | 0 | python,exception-handling,timing,nas | 0 | 45,768,586 | 0 | 1 | 0 | false | 0 | 0 | Taking on MQTT at this stage would be a big change to this nearly-finished project. But your suggestion of decoupling the near-real-time Python from the NAS drive by using a second script is I think the way to go. If the Python disc interface commands wait 10 seconds for an answer, I can't help that. But I can stop it holding up the time-critical Python functions by keeping all time-critical file accesses local in Pi memory, and replicating whole files in both directions between the Pi and the NAS drive whenever they change. In fact I already have the opportunistic replicator code in Python - I just need to move it out of the main time-critical script into a separate script that will replicate the files. And the replicator Python script will do any waiting, rather than the time-critical Python script. The Pi scheduler will decouple the two scripts for me. Thanks for your help - I was beginning to despair! | 1 | 0 | 0 | 0 | For a flood warning system, my Raspberry Pi rings the bells in near-real-time but uses a NAS drive as a postbox to output data files to a PC for slower-time graphing and reporting, and to receive various input data files back. Python on the Pi takes precisely 10 seconds to establish that the NAS drive right next to it is not currently available. I need that to happen in less than a second for each access attempt, otherwise the delays add up and the Pi fails to refresh the hardware watchdog in time. (The Pi performs tasks on a fixed cycle: every second, every 15 seconds (watchdog), every 75 seconds and every 10 minutes.) All disc access attempts are preceded by tests with try-except. But try-except doesn't help, as tests like os.path.exists() or with open() both take 10 seconds before raising the exception, even when the NAS drive is powered down. It's as though there's a 10-second timeout way down in the comms protocol rather than up in the software.
Is there a way of telling try-except not to be so patient? If not, how can I get a more immediate indicator of whether the NAS drive is going to hold up the Pi at the next read/write, so that the Pi can give up and wait till the next cycle? I've done all the file queueing for that, but it's wasted if every check takes 10 seconds. | How can Python test the availability of a NAS drive really quickly? | 0 | 0 | 1 | 0 | 0 | 226 |
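A hedged sketch of a sub-second availability probe: run the slow existence check in a daemon worker thread and stop waiting after 0.5 s, leaving the stuck thread to time out on its own in the background (the mount path is a placeholder):

```python
import os
import threading

def nas_available(path="/mnt/nas/postbox", wait=0.5):
    result = {"ok": False}

    def probe():
        result["ok"] = os.path.exists(path)

    t = threading.Thread(target=probe, daemon=True)
    t.start()
    t.join(wait)         # wait at most 0.5 s instead of 10 s
    return result["ok"]  # False if slow or absent: skip until the next cycle

if not nas_available():
    print("NAS not reachable; queueing files for the next cycle")
```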
45,772,510 | 2017-08-19T14:05:00.000 | 0 | 0 | 0 | 1 | 0 | python | 0 | 45,772,896 | 0 | 3 | 0 | false | 0 | 0 | In my case I would try something using Task Manager data, probably via subprocess.check_output("ps") (for me that looks good), but you can also use the psutil library.
Tell us what you did later :) | 1 | 0 | 0 | 0 | In Python, how do you check that an external program is running? I'd like to track my use of some programs, so I can see the amount of time I've spent with them. For example, if I launch my program, I want to be able to see if Chrome has already been launched, and if so, start a timer which would end when I exit Chrome.
I've seen that the subprocess module can launch external programs, but this is not what I'm looking for.
Thanks in advance. | python: how to check the use of an external program | 1 | 0 | 1 | 0 | 0 | 47 |
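A hedged sketch of the psutil approach (pip install psutil; process_iter's attrs argument needs a reasonably recent psutil): poll for the process and time how long it stays up. The process name is an example:

```python
import time
import psutil

def is_running(name):
    return any(p.info["name"] and name.lower() in p.info["name"].lower()
               for p in psutil.process_iter(["name"]))

start = None
while True:
    if is_running("chrome"):
        if start is None:
            start = time.time()        # the program just appeared
    elif start is not None:
        print("Chrome ran for %.0f seconds" % (time.time() - start))
        start = None
    time.sleep(5)
```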
45,787,213 | 2017-08-20T22:08:00.000 | 0 | 0 | 0 | 0 | 0 | javascript,python,html | 0 | 45,788,234 | 0 | 2 | 0 | false | 1 | 0 | You can use Two Python libraries.
Django
Flask
I recommend Django. Django is easy and fast to build with.
Flask is more complex, but you can build more detailed functionality with it. | 1 | 0 | 0 | 0 | I would like to basically call a python script from HTML; after the script is called and it has finished running, I would like to execute a javascript file (which I know how to do). Now my question is: can I do this with just pure HTML and javascript, or do I need to get a library for python? If I don't need a library, how would I go about doing this? | Runnning python script on HTML page | 0 | 0 | 1 | 0 | 0 | 76
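The browser cannot run Python by itself, so some server is needed. A hedged sketch of the usual pattern (Flask chosen for brevity; the Django equivalent is a view plus a urls.py entry), where the page's JavaScript calls the route and then runs your JS file:

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/run-script")
def run_script():
    result = sum(range(100))  # stand-in for your Python script's work
    return jsonify(done=True, result=result)

# In the HTML page:
#   fetch('/run-script').then(r => r.json()).then(() => runMyJsFile());

if __name__ == "__main__":
    app.run()
```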
45,802,690 | 2017-08-21T17:28:00.000 | 2 | 0 | 1 | 0 | 0 | python | 0 | 45,802,768 | 0 | 3 | 0 | false | 0 | 0 | find . -name "*.ipynb" -exec jupyter nbconvert --to=python {} \; should work on Linux. | 1 | 2 | 0 | 0 | I was wondering if there is any way to go through and make copies of each .ipynb file in a folder and then change that file to .py. I want to keep the .ipynb files but I would like to have a .py file as well. I know how to do it manually but would like a way to do it automatically for each file in a specified directory. | Converting all files in folder to .py files | 0 | 0.132549 | 1 | 0 | 0 | 1,236
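A hedged pure-Python alternative using the nbconvert API, keeping every .ipynb file and writing a .py copy next to it:

```python
import glob
import nbformat
from nbconvert import PythonExporter

exporter = PythonExporter()
for path in glob.glob("**/*.ipynb", recursive=True):
    nb = nbformat.read(path, as_version=4)
    source, _ = exporter.from_notebook_node(nb)
    with open(path.replace(".ipynb", ".py"), "w", encoding="utf-8") as f:
        f.write(source)
```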
45,806,967 | 2017-08-21T23:45:00.000 | 0 | 1 | 0 | 1 | 0 | python,jenkins | 0 | 45,807,814 | 0 | 2 | 0 | false | 0 | 0 | You can use which python to find which Python Jenkins uses.
You can use the absolute path of the desired Python binary to run your pytest | 1 | 0 | 0 | 0 | I am running pytest on a Jenkins machine, although I am not sure which Python it is actually running.
The machine is running OSX; and I did install various libraries (like numpy and others), on top of another Python install via Brew, so I keep things separated.
When I run the commands from the console, I specify python2.6 -m pytest mytest.py, which works; but when I run the same via a shell step in Jenkins, it fails, because it can't find the right libraries (the extra libraries I installed after installing Python via Brew).
Is there a way to know what Jenkins is using, so I can force it to run the correct python binary, which has access to my extra libraries? | how to find out which Python is called when I run pytest via Jenkins | 0 | 0 | 1 | 0 | 0 | 171
45,813,527 | 2017-08-22T09:16:00.000 | 0 | 0 | 1 | 0 | 1 | python,matplotlib,anaconda,windows-subsystem-for-linux | 0 | 45,832,556 | 0 | 1 | 0 | true | 0 | 0 | It looks like when anaconda or matplotlib was installed, it created the matplotlibrc file in C:\Users\user\AppData\Local\lxss\home\puter\anaconda3\lib\python3.6\site-packages\matplotlib\mpl-data using the Windows environment. This has caused the file not to be recognised in WSL.
To fix this, create another matplotlibrc file from bash (or whatever shell you're using) in the directory listed above, and copy the contents of the previously created matplotlibrc file into it. Make sure you don't create this file in the Windows environment, otherwise it won't be recognised. | 1 | 0 | 1 | 0 | I'm currently using anaconda and python 3.6 on windows bash. Every time I want to use matplotlib I have to paste a copy of the matplotlibrc file into my working directory, otherwise my code won't run or plot and I get the warning: /home/computer/anaconda3/lib/python3.6/site-packages/matplotlib/__init__.py:1022: UserWarning: could not find rc file; returning defaults
my matplotlibrc file is located at C:\Users\user\AppData\Local\lxss\home\puter\anaconda3\lib\python3.6\site-packages\matplotlib\mpl-data
I thought to fix this I could edit my .condarc file and set it to look for matplotlibrc in the correct directory. Could anyone tell me how to do this? | how to change matplotlibrc default directory | 0 | 1.2 | 1 | 0 | 0 | 950 |
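A hedged diagnostic for this situation: ask Matplotlib which rc file it is actually using, and point it at a different config directory via the MPLCONFIGDIR environment variable (the path is a placeholder; it must be set before matplotlib is imported):

```python
import os
os.environ["MPLCONFIGDIR"] = "/home/puter/.config/matplotlib"  # before import!

import matplotlib
print(matplotlib.matplotlib_fname())  # the rc file actually in use
print(matplotlib.get_configdir())     # the per-user config directory
```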
45,817,703 | 2017-08-22T12:30:00.000 | 0 | 0 | 1 | 0 | 1 | python-3.x | 0 | 45,877,103 | 0 | 1 | 1 | false | 0 | 1 | This is what I have found:
There is a designer from Qt to build a .ui file, and a tool for translating the .ui file into Python. Then you can edit the logic with any Python tool. You only need PyQt; the current version is PyQt5. | 1 | 0 | 0 | 0 | I want to do some applications with python, but I haven't found any way of getting a toolbox of buttons, check boxes, etc.
Can someone please explain to me how I can do that with:
1. Pycharm.
2. If it is a problem with Pycharm, visual studio community is also okay.
Thanks,
Ayal | Python windows forms application | 0 | 0 | 1 | 0 | 0 | 795 |
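A minimal hedged PyQt5 example (pip install pyqt5): one window with a button and a checkbox, runnable from PyCharm or any other IDE:

```python
import sys
from PyQt5.QtWidgets import (QApplication, QWidget, QPushButton,
                             QCheckBox, QVBoxLayout)

app = QApplication(sys.argv)
window = QWidget()
layout = QVBoxLayout(window)

button = QPushButton("Click me")
button.clicked.connect(lambda: print("clicked"))
layout.addWidget(button)
layout.addWidget(QCheckBox("Enable option"))

window.show()
sys.exit(app.exec_())
```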
45,822,389 | 2017-08-22T16:03:00.000 | 1 | 0 | 1 | 0 | 0 | python,linux,pip | 0 | 71,218,863 | 0 | 2 | 0 | false | 0 | 0 | Do:
pip freeze > requirements.txt
It will store all your requirements in file requirements.txt
pip wheel -r requirements.txt --wheel-dir="packages"
It will pre-package or bundle your dependencies into the directory packages
Now you can turn off your Wi-Fi and install the dependencies whenever you want from the "packages" folder.
Just Run this:
pip install --force-reinstall --ignore-installed --upgrade --no-index --no-deps packages/*
Thanks :) | 1 | 0 | 0 | 0 | I was wondering how to make a .tar.gz file of all pip packages used in a project. The project will not have access to the internet when a user sets up the application. So, I thought it would be easiest to create a .tar.gz file that would contain all the necessary packages, and the user would just extract and install them with a setup.py file (for example) or something along those lines. Thanks | How do you make a .tar.gz file of all your pip packages used in a project? | 0 | 0.099668 | 1 | 0 | 0 | 1,561
45,828,456 | 2017-08-22T23:36:00.000 | 1 | 0 | 1 | 0 | 1 | python,ghostscript | 1 | 45,833,436 | 0 | 2 | 0 | true | 1 | 0 | You are installing on Windows, the Windows binary differs in name from the Linux binaries and indeed differs depending whether you installed the 64 or 32-bit version.
On Linux (and MacOS) the Ghostscript binary is called 'gs'; on Windows it's 'gswin32' or 'gswin64' or 'gswin32c' or 'gswin64c', depending on whether you want the 32- or 64-bit version and the command-line or windowed executable.
My guess is that your script is looking for simply 'gs' and is probably expecting the path to be in the $PATH environment variable; it's not clear to me what it's expecting.
You could probably 'fix' this by making sure the installation path is in the $PATH environment variable and copying the executable to 'gs.exe' in that directory.
Other than that you'll need someone who can tell you what the script is looking for. Quite possibly you could just grep it. | 1 | 1 | 0 | 0 | I'm trying to generate a pdf417 barcode in python using treepoem but pycharm keeps giving me the following error:
Traceback (most recent call last):
File "C:/Users/./Documents/barcodes.py", line 175, in
image = generate_barcode(barcode_type="pdf417",data=barcode, options=dict(eclevel=5, rows=27, columns=12))
File "C:\Users.\AppData\Local\Programs\Python\Python36-32\lib\site-packages\treepoem__init__.py", line 141, in generate_barcode
bbox_lines = _get_bbox(code)
File "C:\Users.\AppData\Local\Programs\Python\Python36-32\lib\site-packages\treepoem__init__.py", line 81, in _get_bbox
ghostscript = _get_ghostscript_binary()
File "C:\Users.\AppData\Local\Programs\Python\Python36-32\lib\site-packages\treepoem__init__.py", line 108, in _get_ghostscript_binary
'Cannot determine path to ghostscript, is it installed?'
treepoem.TreepoemError: Cannot determine path to ghostscript, is it installed?
I've tried to install Ghostscript, using both the .exe I found online and pip install ghostscript (which completed successfully the first time, and now tells me the requirement is satisfied), yet I still keep getting this error. Any ideas on how to fix it? | Treepoem barcode generator unable to find ghostscript | 0 | 1.2 | 1 | 0 | 0 | 2,240
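A hedged check from Python's side: confirm that a Ghostscript binary is actually visible on PATH (treepoem needs the native executable; the pip ghostscript package does not provide it):

```python
import shutil

for name in ("gs", "gswin64c", "gswin32c"):
    print(name, "->", shutil.which(name))
# If all three print None, add Ghostscript's bin folder to PATH, or copy
# the executable there as gs.exe, as the answer suggests.
```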
45,838,549 | 2017-08-23T11:32:00.000 | 0 | 0 | 0 | 0 | 0 | python,sockets | 0 | 51,373,042 | 0 | 1 | 0 | true | 0 | 0 | It was because I was generating a new socket each time, rather than just re-using one socket. | 1 | 0 | 0 | 0 | I'm writing a basic socket program in Python3, which consists of three different programs - sender.py, channel.py, and receiver.py. The sender should send a packet through the channel to the receiver, then receiver sends an acknowledgement packet back.
It works for sending one packet - it goes through the channel to the receiver, and the receiver sends an acknowledgement packet through the channel to the sender, which gets it successfully. But when the sender tries to send a second packet, it attempts to send it but gets no response, so it sends it again. When it does, it gets BrokenPipeError: [Errno 32] Broken pipe. The channel gives no indication that it receives the second packet, and just sits there waiting. What does this mean and how can it be avoided?
I never call close() on any of the sockets. | How do I avoid a BrokenPipeError while using the sockets module in Python? | 0 | 1.2 | 1 | 0 | 1 | 56 |
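A hedged sketch of the fix described in the answer: create the sender's socket once and reuse it for every packet, instead of opening a fresh one per send (the channel address is a placeholder):

```python
import socket

CHANNEL = ("localhost", 5000)  # where channel.py listens (assumption)

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(CHANNEL)          # one connection, kept for the whole session

for i in range(3):
    sock.sendall(b"packet %d" % i)
    ack = sock.recv(1024)      # wait for the acknowledgement
    print("got", ack)

sock.close()
```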
45,848,956 | 2017-08-23T20:42:00.000 | 2 | 0 | 1 | 0 | 0 | python,sql,json,postgresql,pickle | 0 | 45,850,429 | 0 | 2 | 0 | false | 0 | 0 | What you want to do is store a one-to-many relationship between a row in your table and the members of the set.
None of your solutions allow the members of the set to be queried by SQL. You can't do something like select * from mytable where 'first item' in myset. Instead you have to retrieve the text/blob and use another programming language to decode or parse it. That means if you want to do a query on the elements of the set you have to do a full table scan every time.
I would be very reluctant to let you do something like that in one of my databases.
I think you should break your set out into a separate table. By which I mean (since that is clearly not as obvious as I thought): one row per set element, indexed over the primary key of the table you are referring from, or, if you want to enforce no duplicates at the cost of a little extra space, over that primary key plus the set element value.
Since your set elements appear to be of heterogeneous types, I see no harm in storing them as strings, as long as you normalize the numbers somehow. | 1 | 0 | 0 | 0 | I would like to store a "set" in a database (specifically PostgreSQL), but I'm not sure how to do that efficiently.
There are a few options that pop to mind:
store as a list ({'first item', 2, 3.14}) in a text or binary column. This has the downside of requiring parsing when inserting into the database and pulling out. For sets of text strings only, this seems to work pretty well, and the parsing is minimal. For anything more complicated, parsing becomes difficult.
store as a pickle in a binary column. This seems like it should be quick, and it is complete (anything picklable works), but isn't portable across languages.
store as json (either as a binary object or a text stream). Larger problems than just plain text, but better defined parsing.
Are there any other options? Does anyone have any experience with these? | How to store a "set" (the python type) in a database efficiently? | 0 | 0.197375 | 1 | 1 | 0 | 76 |
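A hedged sketch of the separate-table approach with psycopg2 (PostgreSQL 9.5+ for ON CONFLICT); the schema and connection string are placeholders, with the composite primary key standing in for set semantics:

```python
import psycopg2

conn = psycopg2.connect("dbname=mydb user=me")
cur = conn.cursor()
cur.execute("""
    CREATE TABLE IF NOT EXISTS item_set (
        owner_id INTEGER NOT NULL,   -- the row the set belongs to
        element  TEXT NOT NULL,      -- numbers normalised to strings
        PRIMARY KEY (owner_id, element)
    )
""")
for element in {"first item", 2, 3.14}:
    cur.execute(
        "INSERT INTO item_set (owner_id, element) VALUES (%s, %s)"
        " ON CONFLICT DO NOTHING",
        (1, str(element)))
conn.commit()

# the elements are now queryable in SQL:
cur.execute("SELECT owner_id FROM item_set WHERE element = %s", ("first item",))
print(cur.fetchall())
```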
45,859,384 | 2017-08-24T10:37:00.000 | 0 | 0 | 0 | 1 | 0 | python,apache-kafka,kafka-producer-api,pykafka | 0 | 45,862,977 | 0 | 2 | 0 | false | 0 | 0 | Just use the send() method. You do not need to manage it by yourself.
send() is asynchronous. When called it adds the record to a buffer of
pending record sends and immediately returns. This allows the producer
to batch together individual records for efficiency.
Your only task is to configure two properties: batch_size and linger_ms.
The producer maintains buffers of unsent records for each partition.
These buffers are of a size specified by the ‘batch_size’ config.
Making this larger can result in more batching, but requires more
memory (since we will generally have one of these buffers for each
active partition).
The two properties work together as described below:
once we get batch_size worth of records for a partition it will be sent immediately regardless of this setting, however if we have fewer than this many bytes accumulated for this partition we will ‘linger’ for the specified time waiting for more records to show up. | 1 | 1 | 0 | 0 | How do I produce to a kafka topic using message batching or buffering with pykafka? I mean, one producer can produce many messages in one produce call. I know the concept of message batching/buffering, but I don't know how to implement it. I hope someone can help me here | How to produce kafka topic using message batch or buffer with pykafka | 0 | 0 | 1 | 0 | 0 | 4,131
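A hedged sketch using kafka-python, whose send()/batch_size/linger_ms behaviour the answer quotes (pykafka's API differs, though its producer takes similar batching settings). The broker address is a placeholder:

```python
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    batch_size=16384,  # bytes buffered per partition before a send
    linger_ms=50,      # wait up to 50 ms for a batch to fill
)

for i in range(1000):
    producer.send("my-topic", b"message %d" % i)  # returns immediately

producer.flush()  # push out anything still sitting in the buffers
```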
45,865,164 | 2017-08-24T15:10:00.000 | 0 | 0 | 1 | 0 | 0 | python,jupyter-notebook,jupyter | 1 | 45,865,241 | 0 | 2 | 0 | false | 0 | 0 | You can use markdowns. Or you can put in comment your code but it will be not in "jupyter way" | 1 | 3 | 0 | 0 | I am using jupyter notebooks to write some explanations on how to use certain functions. The code i am showing there is not complete, meaning it will give errors when executed. Is there a way to write display-only code in a jupyter notebook? | Jupyter notebook display code only | 0 | 0 | 1 | 0 | 0 | 5,049 |
45,902,890 | 2017-08-27T08:03:00.000 | -1 | 0 | 1 | 0 | 0 | python,debugging,pycharm,pydev,breakpoints | 0 | 45,904,346 | 0 | 2 | 0 | false | 0 | 0 | Take a look at Eric Python IDE and VSC(Visual Studio Code) | 1 | 3 | 0 | 0 | I am looking for the following (IMHO, very important) feature:
Suppose I have two functions fa() and fb(), both of which have a breakpoint.
I am now stopped at the breakpoint in the fa function.
In the interactive debugger console I am calling fb().
I want to stop at fb's breakpoint, but unfortunately fb() runs and ignores the breakpoint.
someone in another SO thread called it "nested breakpoints".
I am a developer who comes from Matlab; in Matlab, no matter how a function is called (from the console, from the debugger), if it has a breakpoint it stops.
I read past threads about this subject and did not find any solution.
I also tried the latest PyCharm Community and the latest PyDev, with no luck.
I also read that visual studio can not make it.
Is this inherent in Python and technically can not be done?
Is there a technique / another IDE that supports it? | any Python IDE supports stopping in breakpoint from the debugger | 0 | -0.099668 | 1 | 0 | 0 | 846 |
45,911,894 | 2017-08-28T04:43:00.000 | 0 | 0 | 1 | 1 | 0 | python,windows,background-process | 0 | 60,206,875 | 0 | 2 | 0 | false | 0 | 0 | You can run the file using pythonw instead of python; that is, run the command pythonw myscript.py instead of python myscript.py | 7 | 0 | 0 | 0 | I'm fairly new to Python and I have a python script that I would like to ultimately convert to a Windows executable (which I already know how to do). Is there a way I can write something in the script that would make it run as a background process in Windows instead of being visible in the foreground? | How to put a Python script in the background without pythonw.exe? | 0 | 0 | 1 | 0 | 0 | 3,312
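For the "without pythonw.exe" part of the question, a hedged Windows-only sketch that hides the script's own console window through the Win32 API, from inside the script:

```python
import ctypes

hwnd = ctypes.windll.kernel32.GetConsoleWindow()
if hwnd:
    ctypes.windll.user32.ShowWindow(hwnd, 0)  # 0 == SW_HIDE

# ... the rest of the script keeps running with no visible window
```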
45,923,490 | 2017-08-28T16:26:00.000 | 0 | 1 | 0 | 0 | 0 | python,twitter,web-scraping,text-mining,scrape | 0 | 45,959,340 | 0 | 1 | 0 | false | 1 | 0 | Hardly so, and even if you manage to somehow do it, you'll most likely get blacklisted. Also, please read the community guidelines when it comes to posting questions. | 1 | 0 | 0 | 0 | I am a student and I am totally new to scraping etc. Today my supervisor gave me the task of getting the list of followers of a user or page (celebrity etc.).
The list should contain information about every user (i.e. user name, screen name, etc.).
After a long search I found that I can't get the age and gender of any user on Twitter.
Secondly, I got help regarding getting the list of my own followers, but I couldn't find help about how I can get the user list of a public account.
Kindly tell me whether this is possible or not, and if it is possible, what the ways are to reach my goal.
Thank you in advance | is it possible to scrape list of followers of a public twitter acount (page) | 1 | 0 | 1 | 0 | 1 | 279
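A hedged sketch of the official-API route (rather than scraping) with Tweepy 3.x; the credentials are placeholders, rate limits apply on large accounts, and, as noted, age and gender are simply not exposed:

```python
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

for follower in tweepy.Cursor(api.followers, screen_name="nasa").items(50):
    print(follower.id, follower.screen_name, follower.name)
```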
45,935,428 | 2017-08-29T09:28:00.000 | 1 | 0 | 0 | 1 | 0 | python,airflow,flask-login,apache-airflow | 0 | 46,274,739 | 0 | 1 | 0 | true | 0 | 0 | You can get it by calling {{ current_user.user.username }} or {{ current_user.user }} in your HTML jinja template. | 1 | 0 | 0 | 0 | Does anyone know how I'll be able to get the current user from airflow? We have our backend enabled to airflow/contrib/auth/backends/ldap_auth.py, and so users log in via that authentication, and I want to know how to get the current user that clicks on something (a custom view we have as a plugin). | Airflow: Get user logged in with ldap | 0 | 1.2 | 1 | 0 | 0 | 1,335
45,942,883 | 2017-08-29T15:22:00.000 | 2 | 1 | 0 | 0 | 0 | python,caffe,layer | 0 | 45,944,107 | 0 | 1 | 0 | true | 0 | 0 | Your python layer has two parameters in the prototxt: layer, where you define the python class name implementing your layer, and module, where you define the .py file name where the layer class is implemented.
When you run caffe (either from the command line or via the python interface) you need to make sure your module is on the PYTHONPATH | 1 | 1 | 0 | 0 | If you are using a custom python layer - and assuming you wrote the class correctly in python - let's say the name of the class is "my_ugly_custom_layer", and you execute caffe from the Linux command line interface,
how do you make sure that caffe knows how to find the file where you wrote the class for your layer? Do you just place the .py file in the same directory as the train.prototxt?
or
if you wrote a custom class in python, do you need to use the python wrapper interface? | Bekeley caffe command line interface | 0 | 1.2 | 1 | 0 | 0 | 263
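A hedged skeleton of such a layer class, saved in a file (say my_layers.py) that must be importable via PYTHONPATH, as the answer says; the prototxt would then use python_param { module: "my_layers" layer: "MyUglyCustomLayer" }:

```python
import caffe

class MyUglyCustomLayer(caffe.Layer):
    def setup(self, bottom, top):
        pass  # parse self.param_str, validate the bottom blobs

    def reshape(self, bottom, top):
        top[0].reshape(*bottom[0].data.shape)

    def forward(self, bottom, top):
        top[0].data[...] = bottom[0].data  # identity pass-through

    def backward(self, top, propagate_down, bottom):
        pass  # no gradient needed for this sketch
```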
45,947,457 | 2017-08-29T20:01:00.000 | 0 | 0 | 0 | 0 | 0 | python,django,python-3.x,django-models | 0 | 45,951,472 | 0 | 2 | 0 | false | 1 | 0 | You can convert the list to json and store it in a charfield. | 1 | 1 | 0 | 0 | excuse my English
I'm a beginner in Django and I want to achieve this in my model and form:
the scenario is like purchasing shoes on amazon:
1. the shoe has a list of sizes a user can select from
2. the user selects one and adds it to the cart
3. the user places an order, which is saved in a model with the respective size included
Now imagine that a shoe seller, when selling, can enter a list of sizes, like size = [1, 2, 3, 4] in Python, which is also saved on a shoe model.
How can I implement this in Django? How do I save a list of sizes entered by a seller to a model, and how can I display this list so the user can select only one of its values?
please help | Django: how can i get a model that store a list(like in python or anyway) of a sizes of a shoes | 0 | 0 | 1 | 0 | 0 | 858 |
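A hedged sketch of the relational alternative to the JSON-in-CharField idea: one Size row per available size, and the order stores the single size the buyer picked (model names are illustrative; a ModelChoiceField over shoe.sizes.all() renders the picker):

```python
from django.db import models

class Shoe(models.Model):
    name = models.CharField(max_length=100)

class Size(models.Model):
    shoe = models.ForeignKey(Shoe, related_name="sizes",
                             on_delete=models.CASCADE)
    value = models.DecimalField(max_digits=4, decimal_places=1)

class Order(models.Model):
    shoe = models.ForeignKey(Shoe, on_delete=models.CASCADE)
    size = models.ForeignKey(Size, on_delete=models.PROTECT)  # the one chosen
```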
45,959,832 | 2017-08-30T11:58:00.000 | 0 | 0 | 0 | 0 | 0 | python,django,vue.js,vue-component,server-side-rendering | 0 | 45,959,886 | 0 | 1 | 0 | false | 1 | 0 | I think the best method really is to split the frontend and backend up into two separate entities, communicating via an API, rather than bleeding the two together, as in the long run that creates headaches. | 1 | 0 | 0 | 0 | I have an existing python/django app with jinja template engine.
I have a template with a filter and a list that gets rendered correctly by the server; the filters also work perfectly without javascript.
(based on url with parameters)
The server also responds with json if the same filter url is requested by ajax. So it's ready for an enhanced version.
Now:
I would like to make it nicer and update/rerender the list asynchronously when I change the filter based on the json response I receive.
Questions:
When I init Vue on top of the page template, it removes/rerenders everything in the app. Everything within the Vue root element becomes white.
Can I declare only parts of my whole template (the one filter and the list) as separate Vue components and combine these instances (there are other parts that are not part of the async update, which Vue should not take over)?
Can I somehow reuse the existing markup of my components in jinja to rerender the components with Vue, or do I have to copy-paste it into javascript (please no!)?
TLDR: I don't want to create the whole model with Vue and then prerender the whole app (the modern SSR way) and have Vue find out the diff.
I just want to use vue on top of an existing working django app. | Vue.JS on top of existing python/django/jinja app for filter and list render | 0 | -0.197375 | 1 | 0 | 0 | 464 |
45,997,541 | 2017-09-01T09:50:00.000 | -5 | 0 | 1 | 0 | 0 | python | 0 | 45,998,552 | 0 | 4 | 0 | false | 0 | 0 | You can simply install python3 with your package manager (on CentOS 7 that is yum rather than apt-get, e.g. from the EPEL repository) and then use -p python3 while creating your virtual environment. Installing python3 will not disturb your system python (2.7). | 1 | 2 | 0 | 0 | I have a CentOS 7 machine which already has Python 2.7.5 installed. Now I want to install Python version 3 side by side without disturbing the original Python version 2. If I install with pip, I fear that it would install version 3 on top of the already existing version.
Can someone please guide me on how to do the same? Also, I have created a virtualenvs directory inside my installation where I want to create the virtualenvs.
At present whenever i create any virtualenvs using the virtualenv command it automatically copies the Python version 2 installable over there.
I want my virtualenvs to contain version 3 and anything outside my virtualenvs should run with version 2.
Is this even possible.
Thanks a lot for any answers. | Install Python version 3 side by side with version 2 in Centos 7 | 1 | -1 | 1 | 0 | 0 | 6,155 |
46,011,785 | 2017-09-02T08:08:00.000 | 32 | 0 | 1 | 0 | 0 | python,jupyter-notebook,ipython-notebook,jupyter,data-science | 0 | 48,120,444 | 0 | 4 | 0 | false | 0 | 0 | <sup>superscript text</sup> also works (e.g. x<sup>2</sup> or x<sub>i</sub> in a Markdown cell), and might be better because LaTeX formatting (e.g. $x^2$, $x_i$) changes the rendering of the whole line etc. | 1 | 46 | 0 | 0 | I want to use numbers to indicate references in footnotes, so I was wondering how I can use superscripts and subscripts inside Jupyter Notebook. | How to do superscripts and subscripts in Jupyter Notebook? | 0 | 1 | 1 | 0 | 0 | 58,725 |
46,016,838 | 2017-09-02T18:14:00.000 | 1 | 0 | 0 | 0 | 1 | python,pandas,dataframe | 0 | 58,381,427 | 0 | 4 | 0 | false | 0 | 0 | You can also do it like this:
df[df == '?'] = np.nan | 2 | 2 | 1 | 0 | I have a dataset and there are missing values which are encoded as ?. My problem is how can I change the missing values, ?, to NaN? So I can drop any row with NaN. Can I just use .replace() ? | How can I convert '?' to NaN | 0 | 0.049958 | 1 | 0 | 0 | 8,652 |
46,016,838 | 2017-09-02T18:14:00.000 | 2 | 0 | 0 | 0 | 1 | python,pandas,dataframe | 0 | 52,961,763 | 0 | 4 | 0 | false | 0 | 0 | You can also read the data initially by passing
df = pd.read_csv('filename',na_values = '?')
It will automatically replace '?' with NaN. | 2 | 2 | 1 | 0 | I have a dataset and there are missing values which are encoded as ?. My problem is: how can I change the missing values, ?, to NaN so I can drop any row with NaN? Can I just use .replace()? | How can I convert '?' to NaN | 0 | 0.099668 | 1 | 0 | 0 | 8,652 |
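For illustration, a minimal sketch combining the two answers (the file name is hypothetical):

    import numpy as np
    import pandas as pd

    # Option 1: replace after loading
    df = pd.read_csv('data.csv')
    df = df.replace('?', np.nan)

    # Option 2: treat '?' as missing while reading
    df = pd.read_csv('data.csv', na_values='?')

    # then drop any row that contains a NaN
    df = df.dropna()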
46,018,812 | 2017-09-02T22:37:00.000 | 0 | 0 | 1 | 0 | 0 | python,google-cloud-platform,jupyter-notebook | 0 | 55,023,413 | 0 | 3 | 0 | false | 0 | 0 | I know it's an old question, but let me give it a shot. When you say Google Cloud instance, do you mean their Compute Engine (virtual machine)? If so, you don't need to keep your laptop running all the time to continue a process in the cloud; there is an alternative. You can run your Python programs from the terminal and use tools like Screen or tmux, which keep your (training) process running even if you disconnect from your GCE instance. You can even turn your own system off. I just finished a 24-hour-long hyperparameter optimization marathon a few days back.
Allow me to also mention here that sometimes "Screen" throws X11 display related errors, so tmux can be used instead.
Usage Instructions:
Install tmux : sudo apt-get install tmux
start new tmux session to run your process : tmux new -s "name_of_sess" --> this opens a new window in terminal. Type your command here like : python my_program.py to start your training.
detach session : ctrl+B -> release -> press 'd' (while inside session)
list of all the tmux session running : tmux ls
attach a session : tmux attach -t "name_of_sess" (the session names come from tmux ls above)
kill a tmux session : tmux kill-session -t "name_of_sess" (or simply type exit inside the session)
Note: the exact syntax is worth double-checking in the tmux man page; the idea is to show how easy it is to use. | 3 | 0 | 0 | 0 | So I want to run a neural network on a Google Cloud instance, but whenever my computer goes to sleep the notebook seems to stop running. Does anyone know how I can keep it running? | Keep Google Cloud Jupyter Notebook Running while computer sleeps | 0 | 0 | 1 | 0 | 0 | 2,193 |
46,018,812 | 2017-09-02T22:37:00.000 | 0 | 0 | 1 | 0 | 0 | python,google-cloud-platform,jupyter-notebook | 0 | 56,662,974 | 0 | 3 | 0 | false | 0 | 0 | In the remote (GCP) SSH client:
Install tmux: sudo apt-get install tmux.
Start a new tmux session: tmux new -s "session".
Run commands as normal. | 3 | 0 | 0 | 0 | So I want to run a neural network on Google Cloud instance, but whenever my computer goes to sleep the notebook seems to stop running. Does anyone know how I can keep it running? | Keep Google Cloud Jupyter Notebook Running while computer sleeps | 0 | 0 | 1 | 0 | 0 | 2,193 |
46,018,812 | 2017-09-02T22:37:00.000 | 2 | 0 | 1 | 0 | 0 | python,google-cloud-platform,jupyter-notebook | 0 | 51,087,148 | 0 | 3 | 0 | false | 0 | 0 | I just discovered this huge oversight in GCP. I don't understand how they could design it this way. This behavior defeats a major point of using the cloud. They want us to pay for this? I use a laptop which sometimes needs to be asleep in a backpack, I can't keep a computer on all day just so a cloud computer can run.
If I can't figure anything else out, we are just going to have to use two cloud computers. Maybe use like a small free cloud computer to keep the big datalab one running. We shouldn't have to resort to this. | 3 | 0 | 0 | 0 | So I want to run a neural network on Google Cloud instance, but whenever my computer goes to sleep the notebook seems to stop running. Does anyone know how I can keep it running? | Keep Google Cloud Jupyter Notebook Running while computer sleeps | 0 | 0.132549 | 1 | 0 | 0 | 2,193 |
46,024,476 | 2017-09-03T14:34:00.000 | 0 | 1 | 0 | 1 | 1 | python,cron | 0 | 46,024,560 | 0 | 2 | 1 | true | 0 | 0 | Well, no. Your argument - */60 * * * * - means "run every 60 minutes". And you can't specify a shorter interval than 1 minute, not in standard Unix cron anyway. | 1 | 1 | 0 | 0 | I tried making a cronjob to run my script every second; I used */60 * * * * as the parameter but it didn't work. Please suggest how I should run my script every second. | I want to make a cronjob such that it runs a python script every second | 0 | 1.2 | 1 | 0 | 0 | 1,441 |
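Since standard cron cannot schedule below one minute, a common workaround is a small wrapper that loops with a one-second sleep; a hedged sketch (the script name is made up):

    import subprocess
    import time

    while True:
        subprocess.run(['python', 'script.py'])  # run the job once
        time.sleep(1)                            # wait one second between runs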
46,028,132 | 2017-09-03T21:51:00.000 | 0 | 0 | 1 | 0 | 1 | python,python-3.x,exe,pyinstaller | 1 | 46,028,466 | 0 | 1 | 0 | true | 0 | 0 | It seems like your PyInstaller is using the wrong version of Python; to make it use the correct one, you probably want to declare explicitly which Python interpreter it should run under.
It's normally something like python -m PyInstaller {args}, but you can pin another interpreter, e.g. python3.5 -m PyInstaller {args} (on Windows the launcher also allows py -3.5-32 -m PyInstaller {args}).
I'd recommend using a virtual environment so you're sure what Python interpreter you're using. | 1 | 0 | 0 | 0 | I have a 64 bit PC, and python 3.6.2 (64 bit), python 3.5.4 (32 bit), and python 3.5.4 (64 bit), all installed and added to my path.
In Pycharm, I created a virtual environment based off of python 3.5.4 (32 bit) and wrote a project in this env. Each version of python I have installed has an associated virtual env, and all of them have pyinstaller installed on them via Pycharm's installer.
However, when I open up a command prompt in the project folder and type
pyinstaller -F project_name.py
it spits out a .exe that only runs on 64 bit machines. Everything is tested and works perfectly well on 64 bit PCs, but I get an error on 32 bit PCs asking me to check whether or not the system is 32 bit or 64 bit.
How can this be possible, and how do I fix it?
EDIT: It seems as though pyinstaller is accessing the python35 folder instead of the python35-32 folder when running. How do I stop this? | Making a 32 bit .exe from PyInstaller using PyCharm | 1 | 1.2 | 1 | 0 | 0 | 2,959 |
46,038,671 | 2017-09-04T13:59:00.000 | 1 | 0 | 0 | 0 | 0 | python,computer-vision,artificial-intelligence,keras,training-data | 0 | 46,039,296 | 0 | 3 | 0 | false | 0 | 0 | First detect the cars present in the image, and obtain their size and alignment. Then go for segmentation and labeling of the parking lot by fixing a suitable size and alignment. | 2 | 0 | 1 | 0 | I have a project that use Deep CNN to classify parking lot. My idea is to classify every space whether there is a car or not. and my question is, how do i prepare my image dataset to train my model ?
I have downloaded the PKLot dataset for training, which includes negative and positive images.
Should I turn all my training images to grayscale? Should I resize all my training images to one fixed size? (But if I resize my training images to one fixed size, I have both landscape and portrait images.) Thanks :) | how to prepare image dataset for training model? | 0 | 0.066568 | 1 | 0 | 0 | 822 |
46,038,671 | 2017-09-04T13:59:00.000 | 1 | 0 | 0 | 0 | 0 | python,computer-vision,artificial-intelligence,keras,training-data | 0 | 46,153,830 | 0 | 3 | 0 | false | 0 | 0 | As you want to use the PKLot dataset for training and then test with real data, the best approach is to make both datasets similar and homologous: they must be normalized, fixed-size, gray-scaled, with parameterized shapes. Then you can use the Scale-Invariant Feature Transform (SIFT) for image feature extraction as a basic method. The exact definition often depends on the problem or the type of application. Since features are used as the starting point and main primitives for subsequent algorithms, the overall algorithm will often only be as good as its feature detector. You can use these types of image features based on your problem:
Corners / interest points
Edges
Blobs / regions of interest points
Ridges
... | 2 | 0 | 1 | 0 | I have a project that use Deep CNN to classify parking lot. My idea is to classify every space whether there is a car or not. and my question is, how do i prepare my image dataset to train my model ?
I have downloaded the PKLot dataset for training, which includes negative and positive images.
Should I turn all my training images to grayscale? Should I resize all my training images to one fixed size? (But if I resize my training images to one fixed size, I have both landscape and portrait images.) Thanks :) | how to prepare image dataset for training model? | 0 | 0.066568 | 1 | 0 | 0 | 822 |
46,056,161 | 2017-09-05T13:26:00.000 | 1 | 0 | 1 | 1 | 0 | python,cmd,installation,python-install | 0 | 63,153,547 | 0 | 3 | 0 | false | 0 | 0 | For Windows
I was unable to find a way to download Python using just CMD, but if you have python.exe on your system then you can use the method below to install it (you can also make a .bat file to automate it):
Download the python.exe file to your computer from the official site.
Open CMD and change your directory to the path where you have python.exe.
Paste the following command into your command prompt, making sure to change the name to your file's version (e.g. python-3.8.5.exe):
python-3.6.0.exe /quiet InstallAllUsers=1 PrependPath=1 Include_test=0
It will also set the PATH variables. | 1 | 7 | 0 | 0 | Is it possible to install Python from cmd on Windows? If so, how to do it? | How to install Python using Windows Command Prompt | 1 | 0.066568 | 1 | 0 | 0 | 66,195 |
46,097,968 | 2017-09-07T13:42:00.000 | 2 | 0 | 0 | 0 | 0 | python,tensorflow,neural-network,deep-learning,image-segmentation | 0 | 46,203,664 | 0 | 2 | 0 | false | 0 | 0 | If I understand correctly, you have a portion of each image with the label void in which you are not interested at all. Since there is no easy way to obtain the real value behind these void spots, why don't you map those points to the background label and try to get results for your model? I would try, in a preprocessing step, to clear the void label from the data and substitute it with the background label.
Another possible strategy, if you don't simply want to map void labels to background, is to run a mask (with a continuous motion from top to bottom, from right to left) to check the neighbouring pixels of a void pixel (let's say an area of 5x5 pixels) and assign to the void pixel the most common label besides void.
Also, you can always keep a better subset of the data by filtering out images where the percentage of void labels is over a threshold. You can keep only images with no void labels, or, more likely, keep images that have only a small fraction (e.g. under 5%) of non-labeled points. In these images you can apply the aforementioned strategies for replacing the void labels.
Some parts of the image are annotated, other parts are not. I would wish that those parts have no influence on the gradient computation whatsoever. Furthermore, I am not interested in the network predicting this “void” label.
Is there a label or a function for this? At the moment I am using tf.nn.sparse_softmax_cross_entropy_with_logits. | TensorFlow: How to handle void labeled data in image segmentation? | 0 | 0.197375 | 1 | 0 | 0 | 3,236 |
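A hedged sketch of the masking approach (TF1-style; logits and labels are assumed to be tensors of shape [batch, h, w, num_classes] and [batch, h, w], and the void label id is made up):

    import tensorflow as tf

    VOID_LABEL = 255                                   # hypothetical void id
    num_classes = logits.get_shape().as_list()[-1]

    labels_flat = tf.reshape(labels, [-1])
    logits_flat = tf.reshape(logits, [-1, num_classes])

    valid = tf.not_equal(labels_flat, VOID_LABEL)      # keep annotated pixels only
    loss = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(
            labels=tf.boolean_mask(labels_flat, valid),
            logits=tf.boolean_mask(logits_flat, valid)))

Void pixels then contribute nothing to the loss, so they have no influence on the gradient computation.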
46,099,439 | 2017-09-07T14:52:00.000 | 3 | 0 | 0 | 0 | 0 | python,api,curl,user-agent | 0 | 69,885,034 | 0 | 2 | 0 | false | 1 | 0 | I know this post is a few years old, but since I stumbled upon it...
tldr; Do not use the user agent to determine the return format unless absolutely necessary. Use the Accept header or (less ideal) use a separate endpoint/URL.
The standard and most future-proof way to set the desired return format for a specific endpoint is to use the Accept header. Accept is explicitly designed to allow the client to state what response format they would like returned. The value will be a standard MIME type.
Web browsers, by default, will send text/html as the value of the Accept header. Most Javascript libraries and frontend frameworks will send application/json, but this can usually be explicitly set to something else (e.g. text/xml) if needed. All mobile app frameworks and HTTP client libraries that I am aware of have the ability to set this header if needed.
There are two big problems with trying to use user agent for simply determining the response format:
The list will be massive. You will need to account for every possible client which needs to be supported today. If this endpoint is used internally, this may not be an immediate problem as you might be able to enforce which user agents you will accept (may cause its own set of problems in the future, e.g. forcing your users to a specific version of Internet Explorer indefinitely) which will help keep this list small. If this endpoint is to be exposed externally, you will almost certainly miss something you badly need to accept.
The list will change. You will need to account for every possible client which needs to be supported tomorrow, next week, next year, and in five years. This becomes a self-induced maintenance headache.
Two notes regarding Accept:
Please read up on how to use the Accept header before attempting to implement against it. Here is an actual example from this website: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9. Given this, I would return back HTML.
The value of the header can be */*, which basically just says "whatever" or "I don't care". At that point, the server is allowed to determine the response format. | 1 | 2 | 0 | 0 | I'm looking for a way to show html to a user if they call from a browser or just give them the API response in JSON if the call is made from an application, terminal with curl or generally any other way.
I know a number of APIs do this and I believe Django's REST framework does this.
I've been able to fool a number of those APIs by passing in my browser's useragent to curl so I know this is done using useragents, but how do I implement this? To cover every single possible or most useragents out there.
There has to be a file/database or a regex, so that I don't have to worry about updating my useragent lists every few months, and worrying that my users on the latest browsers might not be able to access my website. | How to detect if a GET request is from a browser or not | 0 | 0.291313 | 1 | 0 | 1 | 1,500 |
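A minimal sketch of Accept-header content negotiation in Flask (the route and payload are made up):

    from flask import Flask, jsonify, render_template_string, request

    app = Flask(__name__)

    @app.route('/items')
    def items():
        data = {'items': [1, 2, 3]}                    # hypothetical payload
        best = request.accept_mimetypes.best_match(
            ['application/json', 'text/html'])
        if best == 'application/json':
            return jsonify(data)                       # curl, JS clients, apps
        # browsers send text/html by default, so fall back to HTML
        return render_template_string(
            '<ul>{% for i in items %}<li>{{ i }}</li>{% endfor %}</ul>', **data)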
46,101,394 | 2017-09-07T16:39:00.000 | 0 | 1 | 0 | 0 | 0 | python,python-3.x,chat,bots,telegram | 0 | 46,103,183 | 0 | 3 | 0 | false | 0 | 0 | I am not sure I understand your question; can you explain in more detail what you intend to do?
You have a few options, such as creating a group and adding the bot to it.
In a private chat the bot can only talk with a single user at a time. | 2 | 1 | 0 | 0 | I am going to make a Telegram bot in Python 3 which is a random chat bot. As I am new to Telegram bots, I don't know how to join two different people in a chat bot. Is there a guide available for this? | how can i join two users in a telegram chat bot? | 0 | 0 | 1 | 0 | 1 | 2,352 |
46,101,394 | 2017-09-07T16:39:00.000 | 0 | 1 | 0 | 0 | 0 | python,python-3.x,chat,bots,telegram | 0 | 46,113,831 | 0 | 3 | 0 | true | 0 | 0 | You need to make a database with chatID as the primary column and another column, partner, which stores that user's chat partner chatID.
Now, when a user sends a message to your bot, you just need to look that user up in the database and forward the message to their chat partner.
After the chat is done, you should empty the partner fields of both users.
As for the picking part: when a user wants to find a new partner, choose a random row from your database WHERE partnerChatID IS NULL, set it to the first user's ID, and vice versa. | 2 | 1 | 0 | 0 | I am going to make a Telegram bot in Python 3 which is a random chat bot. As I am new to Telegram bots, I don't know how to join two different people in a chat bot. Is there a guide available for this? | how can i join two users in a telegram chat bot? | 0 | 1.2 | 1 | 0 | 1 | 2,352 |
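A toy, in-memory sketch of that pairing logic (a real bot would persist this in a database, and send_message stands in for whatever bot API call you use):

    import random

    partners = {}                     # chat_id -> partner chat_id (None = unpaired)

    def find_partner(chat_id):
        partners.setdefault(chat_id, None)
        waiting = [c for c, p in partners.items() if p is None and c != chat_id]
        if not waiting:
            return None               # nobody is waiting yet
        other = random.choice(waiting)
        partners[chat_id], partners[other] = other, chat_id
        return other

    def relay(chat_id, text, send_message):
        other = partners.get(chat_id)
        if other is not None:
            send_message(other, text)  # forward to the partner

    def leave(chat_id):
        other = partners.get(chat_id)
        if other is not None:
            partners[other] = None     # empty both partner fields
        partners[chat_id] = None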
46,107,591 | 2017-09-08T02:07:00.000 | 0 | 0 | 1 | 0 | 0 | python,nltk,spyder | 0 | 46,107,776 | 0 | 2 | 0 | false | 0 | 0 | I am not sure exactly what you want. If you just need an NLTK corpus, you don't have to put nltk.download() in your code; run nltk.download() once in the shell and download the corpus you need. Note that there is also a function called nltk.download_gui(). You can try it in Spyder, or maybe you should change the graphics backend to Qt5 in your Spyder settings if that's the problem. | 1 | 0 | 0 | 0 | For some reason, when I put nltk.download() in my .py file after import nltk, it doesn't run correctly in Spyder. It does run from the Anaconda prompt, though. Should I include it in my .py file? If so, how do I get Spyder to be OK with that?
Thanks! | Should I put nltk.download() in my .py file? | 0 | 0 | 1 | 0 | 0 | 120 |
46,127,941 | 2017-09-09T06:47:00.000 | 1 | 0 | 1 | 0 | 0 | python | 0 | 46,129,112 | 0 | 3 | 0 | false | 0 | 0 | Up to now, I still haven't gotten the answer I expected. Initially, when I saw the expression open(name[, mode[, buffering]]), I really wanted to know what it means. Obviously it means optional parameters. At that moment, I thought it might be a different way (different from the normal way, like f(a, b, c=None, d='balabala')) to define a function with optional parameters, rather than merely a notation telling us the parameters are optional. The benefit of such a notation would be optional parameters with no default value, so I thought it could be a clearer and simpler way to define optional parameters.
What I really want to know is two things: 1. whether we can define optional parameters this way (no, at present); 2. it would be nice if someone could explain what a module-level function means.
I really appreciate the above answers and comments! THANKS A LOT | 2 | 3 | 0 | 0 | I often find functions defined like open(name[, mode[, buffering]]) and I know it means optional parameters.
The Python documentation says it's a module-level function. When I try to define a function in this style, it always fails.
For example
def f([a[,b]]): print('123')
does not work.
Can someone tell me what module-level means and how I can define a function in this style? | python how to define function with optional parameters by square brackets? | 1 | 0.066568 | 1 | 0 | 0 | 1,720 |
46,127,941 | 2017-09-09T06:47:00.000 | 1 | 0 | 1 | 0 | 0 | python | 0 | 46,131,685 | 0 | 3 | 0 | true | 0 | 0 | "1. if we can define optional parameters using this way(no at present)"
The square bracket notation is not Python syntax; it is Backus-Naur form, a documentation convention only.
A module-level function is a function defined in a module (including __main__) - this is in contrast to a function defined within a class (a method). | 2 | 3 | 0 | 0 | I often find some functions defined like open(name[, mode[, buffering]]) and I know it means optional parameters.
The Python documentation says it's a module-level function. When I try to define a function in this style, it always fails.
For example
def f([a[,b]]): print('123')
does not work.
Can someone tell me what module-level means and how I can define a function in this style? | python how to define function with optional parameters by square brackets? | 1 | 1.2 | 1 | 0 | 0 | 1,720 |
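For contrast, a short sketch of how optional parameters are actually written in Python: a default value, or a sentinel when "not passed" must be distinguishable from None:

    _MISSING = object()                      # unique sentinel object

    def f(a=_MISSING, b=_MISSING):
        if a is _MISSING:
            print('a was omitted')
        if b is _MISSING:
            print('b was omitted')
        print('123')

    f()        # both omitted
    f(1)       # only b omitted
    f(1, 2)    # nothing omitted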
46,152,636 | 2017-09-11T09:44:00.000 | 1 | 0 | 0 | 0 | 0 | python,pymc,markov-chains,mcmc | 0 | 46,322,688 | 0 | 2 | 0 | false | 0 | 0 | Perhaps, assuming each user behaves the same way in a particular time interval, at each interval t we can get the matrix
[ Pr 0->0 , Pr 0->1;
Pr 1->0 , Pr 1->1 ]
where Pr x ->y = (the number of people in interval t+1 who are in state y AND who were in state x in interval t) divided by (the number of people who were in state x in interval t), i.e. the probability based on the sample that someone in the given time interval in state x (0 or 1) will transition to state y (0 or 1) in the next time interval. | 1 | 1 | 1 | 0 | I'm trying to build a MCMC model to simulate a changing beavior over time. I have to simulate one day with a time interval of 10-minutes. I have several observations of one day from N users in 144 intervals. So I have U_k=U_1,...,U_N U users with k ranging from 1 to N and for each user I have X_i=X_1,...X_t samples. Each user has two possible states, 1 and 0. I have understood that I have to build a transition probability matrix for each time step and then run the MCMC model. Is it right? But I did not understood how to build it in pyMC can anybody provided me suggestion? | Monte Carlo Marcov Chain with pymc | 0 | 0.099668 | 1 | 0 | 0 | 245 |
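A small NumPy sketch of that empirical estimate (the states array layout is an assumption: one row per user, one column per 10-minute interval):

    import numpy as np

    def transition_matrix(states, t):
        # states: (n_users, n_intervals) array of 0/1 observations
        P = np.zeros((2, 2))
        for x in (0, 1):
            from_x = states[:, t] == x
            if from_x.sum() > 0:
                for y in (0, 1):
                    P[x, y] = np.mean(states[from_x, t + 1] == y)
        return P    # P[x, y] = empirical Pr(x -> y) from interval t to t+1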
46,166,696 | 2017-09-12T01:48:00.000 | 1 | 0 | 1 | 0 | 0 | python | 0 | 53,291,691 | 0 | 2 | 0 | false | 0 | 0 | If you read the code here in theory there is this:
meaning(term, disable_errors=False)
so you should be able to pass True to avoid printing the error in case the word is not in the dictionary. I tried, but I guess the version I installed via pip does not contain that code...
A) Does anybody know how to check if a word (in english) exists, using PyDictionary?
B) Does anybody know of some more full documentation for PyDictionary? | Using PyDictionary to check if a word exists | 0 | 0.099668 | 1 | 0 | 0 | 1,845 |
46,174,679 | 2017-09-12T10:59:00.000 | 0 | 0 | 1 | 1 | 0 | python,multiprocessing,python-multiprocessing | 0 | 46,175,030 | 0 | 1 | 0 | false | 0 | 0 | The most portable solution I can suggest (although this will still involve further research for you), is to have a long-running process that manages the "background worker" processes. This shouldn't ever be killed off, as it handles the logic for piping messages to each sub process.
Manager.py can then implement logic to create communication to that long-running process (whether that's via pipes, sockets, HTTP or any other method you like). So manager.py effectively just passes on a message to the 'server' process "hey please stop all the child processes" or "please send a message to process 10" etc.
There is a lot of work involved in this, and a lot to research. But the main thing you'll want to look up is how to handle IPC (Inter-Process Communication). This will allow your Manager.py script to interact with an existing/long-running process that can better manage each background worker.
The alternative is to rely fully on your operating system's process management APIs. But I'd suggest from experience that this is a much more error-prone and troublesome solution. | 1 | 0 | 0 | 0 | I see a lot of examples of how to use multiprocessing, but they all talk about spawning workers and controlling them while the main process is alive. My question is how to control background workers in the following way:
start 5 workers from the command line:
manager.py --start 5
after that, I will be able to list and stop workers on demand from the command line:
manager.py --start 1 #will add 1 more worker
manager.py --list
manager.py --stop 2
manager.py --sendmessagetoall "hello"
manager.py --stopall
The important point is that manager.py should exit after every run. What I don't understand is how to get a list of already running workers from a newly created manager.py program and communicate with them.
Edit: Bilkokuya suggested that I have (1) a manager process that manages a list of workers and also listens for incoming commands, and (2) a small command-line tool that sends messages to that manager process. That actually sounds like a good solution. But the question remains the same: how do I communicate with an already running process from a newly created command-line program (process 2)? All the examples I see (of Queue, for example) work only when both processes are running all the time | using python multiprocessing to control independent background workers after the spawning process has been closed | 0 | 0 | 1 | 0 | 0 | 223 |
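A hedged sketch of that two-process design using the standard library's multiprocessing.connection (address, authkey and command format are made up; the two parts would live in separate files):

    # server.py: the long-running process that owns the workers
    from multiprocessing.connection import Listener

    workers = []                                   # bookkeeping for spawned workers
    with Listener(('localhost', 6000), authkey=b'secret') as listener:
        while True:
            with listener.accept() as conn:
                cmd = conn.recv()                  # e.g. ('start', 5) or ('list',)
                if cmd[0] == 'list':
                    conn.send([w.pid for w in workers])
                # ... handle 'start', 'stop', 'sendmessagetoall' here ...

    # manager.py: the short-lived command-line tool
    from multiprocessing.connection import Client

    with Client(('localhost', 6000), authkey=b'secret') as conn:
        conn.send(('list',))
        print(conn.recv())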
46,176,656 | 2017-09-12T12:40:00.000 | 2 | 0 | 0 | 0 | 0 | python,pandas,pandas-loc | 0 | 46,176,863 | 0 | 2 | 1 | false | 0 | 0 | Underneath the covers, both are using the __setitem__ and __getitem__ functions. | 1 | 19 | 1 | 0 | So .loc and .iloc are not your typical functions. They somehow use [ and ] to surround the arguments so that it is comparable to normal array indexing. However, I have never seen this in another library (that I can think of, maybe numpy as something like this that I'm blanking on), and I have no idea how it technically works/is defined in the python code.
Are the brackets in this case just syntactic sugar for a function call? If so, how then would one make an arbitrary function use brackets instead of parentheses? Otherwise, what is special about their use/definition in Pandas? | Why/How does Pandas use square brackets with .loc and .iloc? | 0 | 0.197375 | 1 | 0 | 0 | 3,012 |
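A toy sketch of the mechanism: brackets on any object simply call its __getitem__/__setitem__, which is how an accessor like .loc can be built:

    class Loc:
        def __init__(self, data):
            self._data = data

        def __getitem__(self, key):           # obj[key] calls this
            return self._data[key]

        def __setitem__(self, key, value):    # obj[key] = value calls this
            self._data[key] = value

    class Frame:
        def __init__(self, data):
            self.loc = Loc(data)

    f = Frame({'a': 1, 'b': 2})
    print(f.loc['a'])     # 1, i.e. f.loc.__getitem__('a')
    f.loc['b'] = 99       # f.loc.__setitem__('b', 99)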
46,178,062 | 2017-09-12T13:42:00.000 | 3 | 0 | 0 | 1 | 1 | python,postgresql,google-cloud-platform,google-cloud-storage,google-cloud-sql | 0 | 64,040,093 | 0 | 3 | 0 | false | 1 | 0 | Hostname is the Public IP address. | 1 | 3 | 0 | 0 | I'm trying to connect to a PostgreSQL database on Google Cloud using SQLAlchemy. Making a connection to the database requires specifying a database URL of the form: dialect+driver://username:password@host:port/database
I know what the dialect + driver is (postgresql), I know my username and password, and I know the database name. But I don't know how to find the host and port on the Google Cloud console. I've tried using the instance connection name, but that doesn't seem to work. Anyone know where I can find this info on Google Cloud? | What is the hostname for a Google Cloud PostgreSQL instance? | 0 | 0.197375 | 1 | 1 | 0 | 6,507 |
46,183,843 | 2017-09-12T19:15:00.000 | 2 | 0 | 0 | 0 | 0 | python,web-scraping,beautifulsoup,scrapy | 0 | 46,185,425 | 0 | 2 | 0 | true | 1 | 0 | Scrapy uses a link follower to traverse through a site, until the list of available links is gone. Once a page is visited, it's removed from the list and Scrapy makes sure that link is not visited again.
Assuming all the websites pages have links on other pages, Scrapy would be able to visit every page of a website.
I've used Scrapy to traverse thousands of websites, mainly small businesses, and have had no problems. It's able to walk through the whole site. | 1 | 1 | 0 | 0 | I have used Beautiful Soup with great success when crawling single pages of a site, but I have a new project in which I have to check a large list of sites to see if they contain a mention or a link to my site. Therefore, I need to check the entire site of each site.
With BS I just don't know yet how to tell my scraper that it is done with a site, so I'm hitting recursion limits. Is that something Scrapy handles out of the box? | Does Scrapy 'know' when it has crawled an entire site? | 0 | 1.2 | 1 | 0 | 1 | 344 |
46,184,423 | 2017-09-12T19:57:00.000 | 0 | 0 | 1 | 0 | 0 | python,python-3.x,package,bundle | 0 | 46,187,452 | 0 | 1 | 0 | false | 0 | 0 | Couldn't you just use a virtual environment (virtualenv folder_name) and then activate it, unless you are looking for something else?
Once it's activated, install all of your libraries into it using pip install. | 1 | 0 | 0 | 0 | I have my Python script and my requirements.txt ready.
What I want to do is to get all the packages listed in the "requirements.txt" into a folder. In the bundle, I'd for example have the full packages of "pymysql", "bs4" as well as all their dependencies.
I have absolutely no idea how to do this. Could you help me please? I am stuck and I am really struggling with this.
I am using Python 3.6
I am using "pip download -r requirements.txt" but it's not downloading the dependencies and outputs me only.whl files whereas I'm looking for "proper" folders.. | Bundle all packages required from a Python script into a folder | 0 | 0 | 1 | 0 | 0 | 244 |
46,194,582 | 2017-09-13T10:03:00.000 | 0 | 0 | 0 | 0 | 0 | python,django,django-models,django-admin | 0 | 46,196,573 | 0 | 2 | 0 | false | 1 | 0 | There is no such thing as a clickable link in a database. You can store the link as text and, when rendering it in your HTML, wrap it in an <a> tag. For images there are two options: either store the image path in the database, or change the TextField in your models to a FileField (or ImageField). | 1 | 0 | 0 | 0 | I am using Django to create a blog. In my model I am using a text field for my blog content, but I am unable to insert an image or a clickable link. How do I add clickable links and insert images? | how to add links and images in text field of django admin | 0 | 0 | 1 | 0 | 0 | 814 |
46,202,639 | 2017-09-13T16:31:00.000 | 0 | 0 | 1 | 1 | 0 | python,multiprocessing,buffer | 0 | 46,202,744 | 0 | 2 | 0 | false | 0 | 0 | If you need to use different processes (as opposed to multiple functions in a single process), perhaps a message queue would work well for you: your first process does whatever it does and puts the results in a message queue, to which your second process is listening.
There are obviously a lot of options available but based on your description, this sounds like a reasonable approach. | 1 | 0 | 0 | 0 | I have the following scenario. Data (packets in this case) are received processed by a Python function in real-time as each datum streams by. So each datum is received and translated into a python object. There is a light-weight algorithm done on that object which returns an output (small dictionary). Then the object is discarded and the next one is handled. I have that program running.
Now, for each object the algorithm will produce a small dictionary of output data. This dictionary needs to be processed (also in real time) by a separate, second algorithm. I envision my code running two processes. I need to have the second process "listen" for the outputs of the first.
So how do I write this second algorithm in python so it can listen for and accept the data that is produced by the first? for a concrete example, suppose the first algorithm applies the timestamp, then passes to a buffer, and the second algorithm listens-- it grabs from the buffer and processes it. If there is nothing in the buffer, then as soon as something appears it processes it. | How to send, buffer, and receive Python objects between two python programs? | 0 | 0 | 1 | 0 | 0 | 395 |
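A minimal producer/consumer sketch with multiprocessing.Queue, roughly matching the buffer idea described above:

    import time
    from multiprocessing import Process, Queue

    def producer(q):
        for i in range(3):
            q.put({'timestamp': time.time(), 'value': i})  # the small dict output
        q.put(None)                                        # sentinel: no more data

    def consumer(q):
        while True:
            item = q.get()            # blocks until something is in the buffer
            if item is None:
                break
            print('processing', item)

    if __name__ == '__main__':
        q = Queue()
        p1 = Process(target=producer, args=(q,))
        p2 = Process(target=consumer, args=(q,))
        p1.start(); p2.start()
        p1.join(); p2.join()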
46,214,501 | 2017-09-14T08:47:00.000 | 1 | 0 | 0 | 0 | 0 | python,jira,robotframework,testrail,robotframework-ide | 0 | 46,220,408 | 0 | 1 | 0 | false | 1 | 0 | If you want to use a python library in a robot test, you will need to create your own library that provides keywords that use the library. You can't just import any random python library and expect it to work like a robot keyword library. | 1 | 0 | 0 | 1 | We are using RIDE IDE and are trying to integrate TestRail and JIRA. We have downloaded the TestRail API python file (testrail.py), but we are not able to import it in our project in RIDE.
Can we know how to implement the same.
Is there any steps or tutorial video for integrating TestRail and JIRA in RIDE ?
We are using RIDE 1.5.2.1 running on Python 2.7.12
Thanks | TestRail and JIRA integration with Robot Framework (RIDE IDE) | 0 | 0.197375 | 1 | 0 | 0 | 1,211 |
46,235,088 | 2017-09-15T08:30:00.000 | 0 | 1 | 0 | 0 | 0 | python,selenium,automated-tests,selenium-ide,web-testing | 0 | 46,241,635 | 0 | 3 | 0 | false | 0 | 0 | The best way is to learn how your app is constructed, and to work with the developers to add an id element and/or distinctive names and classes to the items you need to interact with. With that, you aren't dependent on using the fragile xpaths and css paths that the inspector returns and instead can create short, concise expressions that will hold up even when the underlying structure of the page changes. | 1 | 0 | 0 | 0 | Selenium IDE is a very useful tool to create Selenium tests quickly. Sadly, it has been unmaintained for a long time and now not compatible with new Firefox versions.
Here is my work routine to create Selenium test without Selenium IDE:
Open the Inspector
Find and right click on the element
Select Copy CSS Selector
Paste to IDE/Code editor
Type some code
Back to step 2
That is a lot of manual work, switching back and for. How can I write Selenium tests faster? | Since Selenium IDE is unmaintained, how to write Selenium tests quickly? | 0 | 0 | 1 | 0 | 1 | 116 |
46,244,095 | 2017-09-15T16:34:00.000 | 4 | 0 | 0 | 0 | 0 | python | 0 | 48,470,963 | 0 | 1 | 0 | false | 0 | 0 | I'm the creator of pylogit.
I don't have built in utilities for estimating conditional logits with fixed effects. However, you can use pylogit to estimate this model. Simply
Create dummy variables for each decision maker. Be sure to leave out one decision maker for identification.
For each created dummy variable, add the dummy variable's column name to the utility specification.
I have found the pylogit library. However, the documentation I could find, explained how to use the conditional logit model for multinomial models with varying choice attributes. This model does not seem to be the same use case as a simple binary panel model.
So my questions are:
Does pylogit allow to estimate conditional logits for panel data?
If so, is there documentation?
If not, are there other libraries that allow you to estimate this type of model?
Any help would be much appreciated. | conditional logit for panel data in python | 0 | 0.664037 | 1 | 0 | 0 | 2,528 |
46,245,240 | 2017-09-15T17:58:00.000 | 0 | 0 | 0 | 0 | 1 | python,directory,shutil,copytree | 0 | 46,245,531 | 0 | 1 | 0 | false | 0 | 0 | I made a mistake with the file paths I passed into the copytree function. The function works as expected, in the way I mentioned I wanted it to in my question. | 1 | 3 | 0 | 0 | So I have a directory called /src, which has several contents:
a.png, file.text, /sub_dir1, /sub_dir2
I want to copy these contents into destination directory /dst such that the inside of /dst looks like this (assuming /dst is previously empty):
a.png, file.text, /sub_dir1, /sub_dir2
I have tried shutil.copytree(src, dst), but when I open up /dst I see:
/src
which, although contains everything I want, should not be there, as I do not want the /src directory itself copied over, only its inner contents. Does anyone know how to do this? | Using shutil library to copy contents of a directory from src to dst, but not the directory itself | 0 | 0 | 1 | 0 | 0 | 89 |
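For reference, a hedged sketch of copying a directory's contents rather than the directory itself (paths are the ones from the question):

    import os
    import shutil
    import sys

    src, dst = '/src', '/dst'

    if sys.version_info >= (3, 8):
        # copies the contents of src into dst, even if dst already exists
        shutil.copytree(src, dst, dirs_exist_ok=True)
    else:
        os.makedirs(dst, exist_ok=True)
        for name in os.listdir(src):
            s, d = os.path.join(src, name), os.path.join(dst, name)
            if os.path.isdir(s):
                shutil.copytree(s, d)
            else:
                shutil.copy2(s, d)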
46,246,197 | 2017-09-15T19:17:00.000 | 0 | 1 | 0 | 0 | 0 | python,raspberry-pi,pygame | 0 | 46,246,348 | 0 | 1 | 0 | false | 0 | 1 | This ought to work ... but it's purely hypothetical:
Use a parallel circuit to set a pin "high" and "low" - "high" means start the timer; "low" means stop the timer. The next "high" resets and restarts the timer.
You could use two circuits. One for start/stop and one for "reset". You'd probably need some code to not reset while running.
The parallel circuit can be controlled manually (for testing) or automatically (perhaps with a master program?). | 1 | 0 | 0 | 0 | I am trying to set up a gameshow type system where I have 5 stations each having a monitor and a button. The monitor will show a countdown timer and various animations. My plan is to program the timer, animations, and button control through pygame and put a pi at each station each running it's own pygame script, waiting for the start signal (can be keypress or gpio).
I'm having trouble figuring out how to send that signal simultaneously to all stations. Additionally, I need to be able to send a 'self destruct' signal to each station to stop the timer. I can SSH into each station, but I don't know how to send keypress/GPIO signals through the command line to a running pygame script.
I was thinking of putting a rf receiver on each pi, all at the same wavelength and using a common transmitter, but that seems very hacky and not necessarily so simultaneous. | Control Multiple Raspberry Pis Remotely / Gameshow Type System | 0 | 0 | 1 | 0 | 0 | 114 |
46,252,155 | 2017-09-16T09:07:00.000 | 0 | 1 | 0 | 0 | 0 | r,python-2.7,facebook-graph-api | 0 | 46,254,610 | 0 | 1 | 0 | false | 1 | 0 | Without a Page Token of the Page, it is impossible to get the reviews/ratings. You can only get those for Pages you own. There is no paid service either; you can only ask the Page owners to give you access. | 1 | 0 | 0 | 0 | I want to extract reviews from public Facebook pages (like an airline page or a hospital page) to perform sentiment analysis. I have an app ID and app secret which I generated from the Facebook Graph API using my Facebook account, but to extract the reviews I need a page access token, and as I am not the owner/admin of the page I cannot generate that page access token. Does anyone know how to do it, or does it require some paid service?
Kindly help.
Thanks in advance. | How to extract reviews from facebook public page without page access token? | 0 | 0.197375 | 1 | 0 | 1 | 259 |
46,267,910 | 2017-09-17T19:14:00.000 | 0 | 0 | 1 | 0 | 0 | python,text-files | 0 | 46,267,961 | 0 | 4 | 0 | false | 0 | 0 | Because you've asked to focus on how to handle the updates in a text file, I've focused on that part of your question. So, in effect I've focused on answering how would you go about having something that changes in a text file when those changes impact the length and structure of the text file. That question is independent of the thing in the text file being a password.
There are significant concerns related to whether you should store a password, or whether you should store some quantity that can be used to verify a password. All that depends on what you're trying to do, what your security model is, and on what else your program needs to interact with. You've ruled all that out of scope for your question by asking us to focus on the text file update part of the problem.
You might adopt the following pattern to accomplish this task:
At the beginning see if the text file is present. Read it and if so assume you are doing an update rather than a new user
Ask for the username and password. If it is an update prompt with the old values and allow them to be changed
Write out the text file.
Most strategies for updating things stored in text files involve rewriting the text file entirely on every update. | 2 | 0 | 0 | 0 | I am trying to create a program that asks the user for, in this example, let's say a username and password, then store this (I assume in a text file). The area I am struggling with is how to allow the user to update their password stored in the text file? I am writing this in Python. | Storing a username and password, then allowing user update. Python | 0 | 0 | 1 | 0 | 0 | 3,881 |
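A minimal sketch of that rewrite-on-update pattern (the file name and one-user-per-line format are assumptions; a real application should store password hashes, not plaintext):

    def load_users(path='users.txt'):
        users = {}
        try:
            with open(path) as f:
                for line in f:
                    name, pw = line.rstrip('\n').split(',', 1)
                    users[name] = pw
        except FileNotFoundError:
            pass
        return users

    def save_users(users, path='users.txt'):
        with open(path, 'w') as f:          # rewrite the whole file on each update
            for name, pw in users.items():
                f.write('%s,%s\n' % (name, pw))

    users = load_users()
    users['alice'] = 'new-password'         # add or update a user
    save_users(users)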
46,267,910 | 2017-09-17T19:14:00.000 | 0 | 0 | 1 | 0 | 0 | python,text-files | 0 | 46,267,983 | 0 | 4 | 0 | false | 0 | 0 | Is this a single-user application that you have? It would help if you could provide more information on where you're struggling.
You can read the password file (which has usernames and passwords):
- When a user authenticates, match the username and password to the combination in the text file
- When a user wants to change their password, the user provides the old and new password. The username and old-password combination is compared to the one in the text file and, if it matches, the new password is stored | 2 | 0 | 0 | 0 | I am trying to create a program that asks the user for, in this example, let's say a username and password, then store this (I assume in a text file). The area I am struggling with is how to allow the user to update their password stored in the text file? I am writing this in Python. | Storing a username and password, then allowing user update. Python | 0 | -0.099668 | 1 | 0 | 0 | 3,881 |
46,268,174 | 2017-09-17T19:41:00.000 | 0 | 0 | 0 | 0 | 0 | python,python-3.x,webbrowser-control,opera | 0 | 46,268,349 | 0 | 2 | 0 | false | 0 | 0 | You can use Selenium to launch a web browser through Python. | 1 | 0 | 0 | 0 | I need to launch my web browser (Opera) from Python code; how can I do that?
Python version 3.6, OS MS Windows 7 | How can i execute my webbrowser in Python? | 0 | 0 | 1 | 0 | 1 | 737 |
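A sketch using the standard-library webbrowser module instead of Selenium (the Opera path is a typical Windows guess and will vary per machine):

    import webbrowser

    opera_path = r'C:\Program Files\Opera\launcher.exe'   # assumption
    webbrowser.register('opera', None,
                        webbrowser.BackgroundBrowser(opera_path))
    webbrowser.get('opera').open('https://example.com')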
46,289,679 | 2017-09-18T23:43:00.000 | 0 | 0 | 1 | 0 | 0 | python,time,compare,timedelta | 0 | 46,289,757 | 0 | 2 | 0 | false | 0 | 0 | It becomes "difficult" only when endtime is smaller than starttime (IE, next day). But, you could have a test, if this is the case, add 12 hours to all three items, so now you can easily test and verify that 11am is between 9am and 3pm. | 1 | 0 | 0 | 0 | So if I have two timedelta values such as 21:00:00(PM) as start time and 03:00:00 (AM) as end time and I wish to compare 23:00:00 within this range, to see whether it falls between these two, without having the date value, how could I do so?
So although in this example it will see 23:00:00 larger than 21:00:00 it will not see 23:00:00 less than 03:00:00, which is my issue. I need to be able to compare time on a 24 hour circle using timedelta or a conversion away from timedelta is fine.
Note: The start/end time ranges can change, so any could be AM/PM. So I cannot just add a day to the end time for example. | Checking if time-delta value is in range past midnight | 0 | 0 | 1 | 0 | 0 | 972 |
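A short sketch of the usual wrap-around comparison, which works for any AM/PM combination:

    from datetime import timedelta

    def in_range(t, start, end):
        """True if time-of-day t is in [start, end], wrapping past midnight."""
        if start <= end:                  # e.g. 09:00 -> 15:00
            return start <= t <= end
        return t >= start or t <= end     # e.g. 21:00 -> 03:00

    start, end = timedelta(hours=21), timedelta(hours=3)
    print(in_range(timedelta(hours=23), start, end))   # True
    print(in_range(timedelta(hours=5), start, end))    # False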
46,293,283 | 2017-09-19T06:23:00.000 | 0 | 1 | 1 | 0 | 0 | python,python-2.7,file-permissions,file-ownership | 0 | 46,293,476 | 0 | 1 | 0 | false | 0 | 0 | You can use the rsync facility to copy the file to the remote location with the same permissions. A simple os.system('rsync -av SRC <DEST_IP>:~/location/') call can do this. Other methods include using a subprocess. | 1 | 0 | 0 | 0 | I have the following problem.
I need to replace a file with another one. As the new file is transferred over the network, owner and group bits are lost.
So I have the following idea. To save current permissions and file owner bits and than after replacing the file restore them.
Could you please suggest how to do this in Python or maybe you could propose a better way to achieve this. | Python save file permissions & owner/group and restore later | 0 | 0 | 1 | 0 | 0 | 120 |
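A sketch of the save-and-restore idea with os.stat/os.chown/os.chmod (Unix-only; restoring ownership typically requires root):

    import os

    path = '/path/to/file'                 # hypothetical
    st = os.stat(path)                     # remember owner, group and mode

    # ... replace the file contents here ...

    os.chown(path, st.st_uid, st.st_gid)   # restore owner and group
    os.chmod(path, st.st_mode & 0o7777)    # restore permission bits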
46,293,320 | 2017-09-19T06:26:00.000 | 2 | 0 | 1 | 0 | 1 | python-3.x,ubuntu,pycharm | 0 | 47,638,054 | 0 | 1 | 0 | false | 0 | 0 | how do I actually make Pycharm save my interpreter settings and stop asking me about it.
I was having a similar issue when I used PyCharm Community 2017.3 on Ubuntu 16.04 for the first time. The solution was to open the project folder rather than a specific script. | 1 | 0 | 0 | 0 | I use Pycharm for a while now and I'm getting really annoyed that my Pycharm interpreter settings always resets for some reason.
Meaning that whenever I open up a new/old project it will always tell me that:
No Python interpreter configured...
even after I change and apply the settings in
File > Settings > Project: ProjectName > Project Interpreter
or
File > Default Settings > Project Interpreter.
(These changes only apply for as long as Pycharm is open. Once it's closed I need to repeat the whole procedure, which is my problem here.)
Then I noticed that all my projects that I open for some reason end up being opened in the tmp folder.
(e.g. "/tmp/Projectname.py")
Which is also the reason why I cant open recent projects via the menu.
So my question is, how do I actually make Pycharm save my interpreter settings and stop asking me about it.
I know that there seems to be similar questions about it, but either they are not solved or the solution doesn't work. And I hope that this tmp folder thing might be of use to solve this problem. | Pycharm default interpreter and tmp working directory on Ubuntu | 1 | 0.379949 | 1 | 0 | 0 | 583 |
46,306,331 | 2017-09-19T17:18:00.000 | 13 | 0 | 1 | 0 | 0 | python,sublimetext | 0 | 51,280,447 | 0 | 8 | 0 | false | 0 | 0 | Tools->Build System->Python
or Ctrl+B | 1 | 3 | 0 | 0 | So I'm trying to run python code from Sublime Text 3, but I'm not sure how. Even if it was only from the console, that would be fine. Anybody know how??? | How to run python code in Sublime Text 3? | 0 | 1 | 1 | 0 | 0 | 51,441 |
46,307,447 | 2017-09-19T18:28:00.000 | 1 | 0 | 0 | 0 | 0 | django,amazon-s3,boto3,python-django-storages | 0 | 61,942,402 | 0 | 2 | 0 | false | 1 | 0 | The docs now explain this:
If AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are not set, boto3 internally looks up IAM credentials. | 1 | 3 | 0 | 0 | Django-Storages provides an S3 file storage backend for Django. It lists
AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as required settings. If I am using an AWS Instance Profile to provide S3 access instead of a key pair, how do I configure Django-Storages? | Use Django-Storages with IAM Instance Profiles | 0 | 0.099668 | 1 | 1 | 0 | 1,069 |
46,307,874 | 2017-09-19T18:54:00.000 | 0 | 0 | 1 | 0 | 0 | python,tensorflow,pip,anaconda,virtualenv | 0 | 46,308,361 | 0 | 2 | 0 | false | 0 | 0 | Yes, after a short read into the topic, I simply uninstalled tf using sudo pip uninstall tensorflow within my virtualenv and then deactivated the virtualenv. I don't know how to really uninstall the virtualenv as well, but I guess that is already enough and I can proceed with the installation of Anaconda?
I have also installed some additional packages like matplotlib, ipython etc., but I can keep them as well without problems?
Thanks | 2 | 1 | 1 | 0 | I am new to all the tensorflow and Python programming and have installed tensorflow through pip and virtualenv, but now I read that in order to use Spyder for Python it is best to use Anaconda. I know that tf can be installed through conda as well, but how do I go about it now? Do I have to completely remove the existing installations first, and if yes, can someone explain in detail which ones and how I can do it? | Completely removing Tensorflow, pip and virtualenv | 0 | 0 | 1 | 0 | 0 | 633 |
46,307,874 | 2017-09-19T18:54:00.000 | 0 | 0 | 1 | 0 | 0 | python,tensorflow,pip,anaconda,virtualenv | 0 | 46,308,280 | 0 | 2 | 0 | false | 0 | 0 | Just install Anaconda; it will take care of everything. Uninstalling the existing ones is up to you; they won't harm anything. | 2 | 1 | 1 | 0 | I am new to all the tensorflow and Python programming and have installed tensorflow through pip and virtualenv, but now I read that in order to use Spyder for Python it is best to use Anaconda. I know that tf can be installed through conda as well, but how do I go about it now? Do I have to completely remove the existing installations first, and if yes, can someone explain in detail which ones and how I can do it? | Completely removing Tensorflow, pip and virtualenv | 0 | 0 | 1 | 0 | 0 | 633 |
46,314,148 | 2017-09-20T05:47:00.000 | 0 | 1 | 0 | 0 | 0 | python,email,apple-mail | 0 | 46,314,807 | 0 | 1 | 0 | false | 0 | 0 | This is okay now. It seems the content type of the email I sent from Apple Mail was multipart, which includes a "text/plain" part that contains the text inside my email and a "multipart/related" part that contains the image I attached. So I just needed to check whether the email is multipart and, if so, loop over it to print all payloads. | 1 | 0 | 0 | 0 | I am developing a program in Python that can read emails. My program can now read emails from Gmail, even with attachments, but it can't read an email that was sent from Apple Mail.
Unlike with an email sent from Gmail, when I use Message.get_payload() on the Apple Mail message it does not have a value. I'm a newbie in Python and this is my first project, so please bear with me. Thank you in advance.
Note: The email I sent from Apple has an attachment on it.
Update: I can now read the text inside the email. My only problem now is how to get the attachment: when I loop over all the payloads (since the message is multipart), it only prints the text inside, plus this:
"[<email.message.Message object at 0x000001ED832BF6D8>, <email.message.Message object at 0x000001ED832BFDD8>]" | Python: Apple Email Content | 0 | 0 | 1 | 0 | 0 | 471 |
46,317,830 | 2017-09-20T09:09:00.000 | 2 | 0 | 1 | 0 | 0 | c#,python,pyd | 0 | 46,322,006 | 0 | 2 | 0 | false | 0 | 1 | A .pyd file IS a DLL, but with an init function: init*() on Python 2 (PyInit_*() on Python 3), where * is the name of your .pyd file. For example, spam.pyd has a function named initspam(). They are not just similar, but the same! They are usable on PCs without a full Python installation, as long as the Python runtime DLL is shipped alongside.
Do not forget: builtins will NOT be available. Search for them in C, add them as an extern in your Cython code and compile! | 1 | 3 | 0 | 0 | I am developing a program using C#, but I just figured out that what I am going to program is very very difficult in C# yet easy in Python. So what I want to do is make a .PYD file and use it in C# program, but I don't know how. I've been searching about it but I couldn't find anything.
So these are my questions: How do I use .PYD file in C#? I know that .PYD files are similar to .DLL files, so are .PYD files still usable in computers that have no Python installed? | Using .PYD file in C#? | 0 | 0.197375 | 1 | 0 | 0 | 2,953 |
46,322,899 | 2017-09-20T13:05:00.000 | 3 | 1 | 0 | 0 | 0 | python,amazon-web-services,amazon-ec2,aws-lambda,serverless-framework | 0 | 46,323,508 | 0 | 3 | 0 | false | 1 | 0 | You can use CloudWatch for this.
You can create a cloudwatch rule
Service Name - Ec2
Event Type - EC2 Instance change notification
Specific state(s) - shutting-down
Then use an SNS target to deliver the email. | 1 | 1 | 0 | 0 | I am new to the Serverless framework and AWS, and I need to create a Lambda function in Python that will send an email whenever an EC2 instance is shut down, but I really don't know how to do it using Serverless. So please, if anyone could help me do that, or at least give me some tracks to start with. | send email whenever ec2 is shut down using serverless | 0 | 0.197375 | 1 | 0 | 1 | 203 |
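A hypothetical Lambda handler for that rule, publishing to an SNS topic whose email subscribers receive the notification (the topic ARN is made up):

    import boto3

    TOPIC_ARN = 'arn:aws:sns:us-east-1:123456789012:ec2-shutdown'

    def handler(event, context):
        detail = event['detail']              # EC2 state-change event payload
        boto3.client('sns').publish(
            TopicArn=TOPIC_ARN,
            Subject='EC2 state change',
            Message='Instance %s is now %s' % (detail['instance-id'],
                                               detail['state']))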
46,347,262 | 2017-09-21T14:59:00.000 | 0 | 0 | 0 | 0 | 0 | python,kivy | 0 | 63,662,212 | 0 | 3 | 0 | false | 0 | 1 | There's a dirty method though: try requesting google.com or any other reliable website in the background with urllib, requests, or socket; if you don't get any reply, it must mean the system is not connected to the internet. | 1 | 0 | 0 | 0 | I've created an Android app with Python and Kivy that works offline; the app shows landscape photos. How can I open my app only when Wi-Fi is enabled, so the app can upload ads? Have patience with me, I'm new. Thank you. | How can i open my app only when wifi is enabled for upload Ads? | 0 | 0 | 1 | 0 | 0 | 127 |
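A minimal connectivity check along those lines, using only the standard library (host and port here are Google's public DNS, a common probe target):

    import socket

    def is_online(host='8.8.8.8', port=53, timeout=3):
        try:
            socket.setdefaulttimeout(timeout)
            socket.socket(socket.AF_INET, socket.SOCK_STREAM).connect((host, port))
            return True
        except OSError:
            return False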
46,347,289 | 2017-09-21T15:00:00.000 | 0 | 0 | 0 | 0 | 0 | python-2.7,jupyter-notebook,rows,nearest-neighbor,graphlab | 1 | 46,347,870 | 0 | 3 | 0 | false | 0 | 0 | ok, well, it seems like I have to define the number of neighbours with:
tfidf_model.query(Test_AD, k=100).show()
so I can get a list of the first 100 in the canvas.
However, when I query the model like
tfidf_model.query(Test_AD)
I always just get the head - [5 rows x 4 columns]
I am supposed to use "print_rows(num_rows=m, num_columns=n)" to print more rows and columns like:
tfidf_model.query(Test_AD).print_rows(num_rows=50, num_columns=4)
However, when I use it, I don't get any rows anymore, only the summary field:
Starting pairwise querying.
+--------------+---------+-------------+--------------+
| Query points | # Pairs | % Complete. | Elapsed Time |
+--------------+---------+-------------+--------------+
| 0 | 1 | 0.00519481 | 13.033ms |
| Done | | 100 | 106.281ms |
+--------------+---------+-------------+--------------+
That's it. No error message, nothing. Any Ideas, how to get all/ more rows?
I tried to convert into pandas or .show() command etc., didnt help. | print_rows(num_rows=m, num_columns=n) in graphlab / turi not working | 0 | 0 | 1 | 0 | 0 | 1,900 |
46,376,713 | 2017-09-23T06:22:00.000 | 0 | 0 | 0 | 0 | 0 | android,python-3.x,kivy,2d-games,qpython3 | 0 | 61,050,929 | 0 | 2 | 0 | false | 0 | 1 | Look at using a Linux distro (a different kernel) on your phone. Many will basically act as a PC, just lacking power. I'm sure there is one you can run regular Python with. You may also look at Ren'Py for porting it to Android, or even Kivy, but I'm thinking Kivy would be very different from anything you have and would basically require learning a new framework.
Sadly, changing the OS on your phone is the only option I can think of, and probably the best. I cannot imagine any frameworks for Android being as vast as those developed for the PC. You may be able to find some hacked port of something that may help, a tool or something, I don't know. | 1 | 2 | 0 | 0 | So, I'm programming in Python 3.2 on Android (the app I'm using is QPython3) and I wonder if I could make a game/app with a graphical interface for Android, using only a smartphone. Is it possible? If yes, how do I do that? | Pygame/Kivy on Android? | 1 | 0 | 1 | 0 | 0 | 3,395 |
46,377,331 | 2017-09-23T07:49:00.000 | 0 | 0 | 0 | 0 | 0 | python,numpy,matrix | 0 | 72,120,529 | 0 | 3 | 0 | false | 0 | 0 | Can use:
np.linalg.lstsq(x, y)
np.linalg.pinv(X) @ y
LinearRegression().fit(X, y)
Where 1 and 2 are from numpy and 3 is from sklearn.linear_model.
As a side note, you will need to concatenate a column of ones (e.g. created with np.ones_like and stacked with np.hstack) onto X in both 1 and 2 to represent the bias b from the equation y = ax + b.
I thought of using the tools provided with numpy, however they only work with square matrices. I had the approach of filling in the matrix with some linearly independent vectors to "square" it and then solve, but I could not figure out how to choose those vectors so that they will be linearly independent of the basis vectors, plus I think it's not the only approach and I'm missing something that can make this easier.
Is there indeed a simpler approach than the one I mentioned? If not, how do I choose those vectors that would complete A into a square matrix? | solving Ax =b for a non-square matrix A using python | 0 | 0 | 1 | 0 | 0 | 23,050 |
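A small demonstration of option 1 (lstsq handles non-square A directly; the data here is synthetic):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 3))          # tall, non-square
    x_true = np.array([1.0, -2.0, 0.5])
    b = A @ x_true                           # b lies in the column space of A

    x, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
    print(np.allclose(x, x_true))            # True

    print(np.allclose(np.linalg.pinv(A) @ b, x_true))   # option 2, same result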
46,380,101 | 2017-09-23T13:31:00.000 | 0 | 0 | 0 | 0 | 0 | python,oracle,cx-oracle | 0 | 46,380,171 | 0 | 1 | 0 | false | 0 | 0 | This is too long for a comment.
You would need a trigger in the database to correctly implement this functionality. If you try to do it in the application layer, then you will be subject to race conditions in a multi-client environment.
Within Oracle, I would recommend just using an auto-generated column for the primary key. Don't try inserting it yourself. In Oracle 12c, you can define this directly using GENERATED ALWAYS AS. In earlier versions, you need to use a sequence to define the numbers and a trigger to assign them. | 1 | 0 | 0 | 0 | Can I get some advice on how to make a mechanism for inserts that will check whether the value of the PK is already used?
If it is not used in the table, it will insert a row with that number. If it is used, it will increment the value and check whether the next value is used, and so on... | cx_oracle PK autoincrementation | 0 | 0 | 1 | 1 | 0 | 26 |
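A sketch of the identity-column approach via cx_Oracle (connect string, table and column names are made up; requires Oracle 12c or later):

    import cx_Oracle

    conn = cx_Oracle.connect('user/password@host/service')
    cur = conn.cursor()

    cur.execute("""
        CREATE TABLE items (
            id NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
            name VARCHAR2(100)
        )""")

    # insert without supplying the key; Oracle assigns the next value itself
    cur.execute("INSERT INTO items (name) VALUES (:name)", name='widget')
    conn.commit()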
46,383,189 | 2017-09-23T18:53:00.000 | 0 | 0 | 1 | 0 | 0 | python,jupyter-notebook | 0 | 47,618,302 | 0 | 2 | 0 | false | 0 | 0 | Simple, just find where the script jupyter-notebook resides, for example ~/.local/bin if you installed it locally.
Then just edit the first line (the shebang) to #!/usr/bin/python3 and it will be fine. | 1 | 2 | 0 | 0 | I installed Jupyter Notebook along with Anaconda Python as the only Python on my PC (Windows 10). However, I recently installed Python 3.6.2 and I wonder if I can somehow add it to Jupyter so I can use them interchangeably.
I remember having both on my other machine when I installed python first and then after that I installed the whole anaconda package with the jupyter notebook (so I had python 3 and python(conda) option for kernels).
So how can I add to jupyter? | How do I add a python3 kernel to my jupyter notebook? | 0 | 0 | 1 | 0 | 0 | 6,917 |
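For reference, the usual way to register an extra interpreter as a Jupyter kernel is via ipykernel; a hedged sketch (run these with the Python 3.6.2 interpreter; the kernel name is arbitrary):

    python -m pip install ipykernel
    python -m ipykernel install --user --name py362 --display-name "Python 3.6.2"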
46,384,607 | 2017-09-23T21:52:00.000 | 0 | 0 | 0 | 1 | 0 | python,concurrency,gevent,cpython,greenlets | 0 | 46,387,137 | 0 | 1 | 0 | false | 0 | 0 | Its underlying dispatch model is the event loop in libevent, which uses the event base that monitors for the different events and reacts to them accordingly; then, from what I gleaned, it takes the greenlets, does some fuckery with semaphores, and dispatches them onto libevent. | 1 | 0 | 0 | 0 | I am trying to understand the way Gevent/Greenlet chooses the next greenlet to be run. Threads use the OS scheduler. The Go runtime uses 2 hierarchical queues.
By default, Gevent uses libevent for its plumbing. But how does libevent choose the next greenlet to run, if many are ready?
Is it random?
I have already read their docs and looked at the source code. I still do not know.
Updated: text changed to recognize that Gevent uses libevent. The question still applies to libevent. | How does libevent choose the next Gevent greenlet to run? | 0 | 0 | 1 | 0 | 0 | 174 |