Available Count (int64, 1..31) | AnswerCount (int64, 1..35) | GUI and Desktop Applications (int64, 0..1) | Users Score (int64, -17..588) | Q_Score (int64, 0..6.79k) | Python Basics and Environment (int64, 0..1) | Score (float64, -1..1.2) | Networking and APIs (int64, 0..1) | Question (string, length 15..7.24k) | Database and SQL (int64, 0..1) | Tags (string, length 6..76) | CreationDate (string, length 23..23) | System Administration and DevOps (int64, 0..1) | Q_Id (int64, 469..38.2M) | Answer (string, length 15..7k) | Data Science and Machine Learning (int64, 0..1) | ViewCount (int64, 13..1.88M) | is_accepted (bool, 2 classes) | Web Development (int64, 0..1) | Other (int64, 1..1) | Title (string, length 15..142) | A_Id (int64, 518..72.2M)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | 1 | 0 | 0 | 1 | 0 | 1.2 | 0 | I have set up an unused Macbook Pro as a shared server on our internal company intranet. I'm kind of running it as a skunkworks; it can access the internet, but doesn't have permission to access, say, Gmail via SMTP. It's also unlikely to get that access anytime soon.
I'd like to be able to run processes on the machine that send notification emails. Every library I can find seems to require an email server, and I can't access one.
Is there any way to set up my MBP to act as its own email server? Thanks. | 0 | python,email,osx-yosemite | 2015-01-12T19:36:00.000 | 1 | 27,909,442 | Adding this as an answer because there's not enough space in the comments.
It might work, but it's highly unlikely, and if you can send outbound mail, it will most likely be spam-foldered or dropped. The reason most apps use a dedicated mail server or smart host is that there are lots of other things that need to be set up besides the mail server (DNS records, SPF, DKIM, etc.). By default, if you type sendmail [email protected] on your Mac, type your message, and end it with a . on a line by itself, your Mac will try to deliver it using its internal server (Postfix). It will look up the right-hand side, look for MX records, try to connect to port 25 on the lowest-order MX, and do all the things that a mail server does when delivering email. But if your skunkworks project cannot access Gmail on port 465 or 587 due to firewall settings, then there is very little chance that your mail admins will allow it to connect to random servers on port 25 (since this is exactly what direct-to-MX bots/malware do).
Your best bet is to contact your admins and tell them you have an application that needs to send email (low volume, notification type, whatever), and ask them if they have an approved server that you can smart-host via.
Going around network security, even with the best of intentions, is generally a bad idea, since the rules are generally put in place for a reason. | 0 | 291 | true | 0 | 1 | How can I send mail via Python on Yosemite... without an outside server? | 27,913,491
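A minimal sketch of what the "internal server (Postfix)" route looks like from Python, assuming the Mac's local Postfix is enabled and listening on localhost:25. The addresses are placeholders, and delivery to external domains may still be blocked or spam-filtered, as the answer explains:

```python
# Hypothetical sketch: send a notification through the Mac's local Postfix
# instance (localhost:25) instead of an external SMTP relay.
import smtplib
from email.mime.text import MIMEText

msg = MIMEText("Job finished on the shared MBP.")
msg["Subject"] = "Notification"
msg["From"] = "builds@localhost"        # placeholder addresses
msg["To"] = "me@example.com"

server = smtplib.SMTP("localhost", 25)  # talk to the local Postfix daemon
server.sendmail(msg["From"], [msg["To"]], msg.as_string())
server.quit()
```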
1 | 2 | 0 | 0 | 0 | 1 | 1.2 | 0 | I'm trying to make a Python program that converts text to one long binary string. The usual lines of text and sentences are easy enough to convert into binary, but I'm having trouble with the whitespace.
How do I put in a binary byte to represent the Enter key?
Do I just put in the '\' and 'n' strings?
I would ideally want to be able to convert an entire text file into a binary string and be able to convert it back again. Obviously, if I were to do this with a Python script, the tabbing would get messed up and the program would be broken.
Would the C language be better for doing this stuff?
Obviously a C program would still function without its whitespace, whereas Python would not.
In short, I need to know how to represent the 'tab' and 'enter' keys in binary, and how to create a function to translate them into binary. Would bin(ord('\n')) be good? | 0 | python,c++,c,binary | 2015-01-12T23:49:00.000 | 0 | 27,912,757 | The tab is represented in the ASCII chart as 0x09, or "00001001" as a binary string.
The Enter key is different because it could represent CR (carriage return), LF (line feed), or both.
The CR is represented as 0x0D, or "00001101" as a binary string.
The LF is represented as 0x0A, or "00001010" as a binary string.
A common convention is '\t' for tab, '\r' for CR, and '\n' for newline. | 0 | 310 | true | 0 | 1 | text - binary conversion | 27,912,842
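A minimal sketch of a lossless text-to-binary round trip for ASCII text. Whitespace needs no special casing here: '\t' and '\n' are ordinary characters with their own codes (0x09 and 0x0A); non-ASCII text would need an explicit encoding step first:

```python
def text_to_bits(text):
    # format each character's code point as a fixed-width 8-bit field
    return "".join(format(ord(ch), "08b") for ch in text)

def bits_to_text(bits):
    # slice the stream back into 8-bit chunks and decode each one
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

sample = "if x:\n\treturn x\n"
assert bits_to_text(text_to_bits(sample)) == sample  # tabs/newlines survive
```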
1 | 2 | 0 | 5 | 7 | 0 | 0.462117 | 0 | How to use pyenv with another user?
For example, if I have installed pyenv in user test's environment, I can use pyenv when I log in as test.
However, how can I use pyenv when I log in as another user, such as root? | 0 | python | 2015-01-16T06:25:00.000 | 0 | 27,978,383 | Even if you could do this, I'd strongly discourage it. Root can access pretty much everyone's home directory, but adding programs to root's PATH that the root user doesn't technically own is detrimental at best (it might lead to a few root services not working properly) and actively insecure at worst.
There's literally nothing wrong with installing your own copy of pyenv as another user. There's no pain involved, and there's not much sense in doing it any other way. | 0 | 3,900 | false | 0 | 1 | How to use pyenv with another user? | 27,978,545
1 | 1 | 0 | 0 | 2 | 1 | 0 | 0 | If I had 100 imports, would my program perform worse due to Python looking up names during runtime (and looking through all 100 of my imports)?
I am creating a game and it would be far easier to describe each area as a separate module containing a single class. | 0 | performance,python-3.x,import | 2015-01-16T11:06:00.000 | 0 | 27,982,645 | There would be an initial performance hit the first time the modules are loaded. The second, third, fourth, ..., times the modules are loaded, Python gets them from its cache, so no performance hit at all.
Having said that, the performance impact should be minor; still, since it's a game, do all the imports up front. | 0 | 99 | false | 0 | 1 | Python3 import performance | 27,990,196
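A quick demonstration of the caching the answer describes: repeated imports are served from Python's module cache (sys.modules) rather than re-executed, so only the first import pays the loading cost (note timeit's own machinery may already have cached some stdlib modules):

```python
import sys
import timeit

first = timeit.timeit("import json", number=1)      # real load (if not cached yet)
repeat = timeit.timeit("import json", number=1000)  # 1000 cache lookups
print("json" in sys.modules)  # True: the module object is cached
print(first, repeat)          # the 1000 cached imports are typically very cheap
```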
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I have a very strange problem. I have a script written in Python which generates an HTML report and then converts it to PDF using pdfkit. The script works just fine on a Mac and the PDF is generated normally. When I installed the same script on Ubuntu, I got the following abnormal behavior:
The HTML file is fully generated
The PDF file is generated but without the last 2 pages
When I try to convert either through the wkhtmltopdf command or through Python (outside the script), I get the right PDF. Any idea why I'm seeing this behavior? | 0 | python,ubuntu,pdf-generation,pdfkit | 2015-01-16T21:52:00.000 | 0 | 27,993,383 | I found the solution. The problem was that I didn't close (.close()) the file before the initialization of pdfkit | 0 | 487 | false | 1 | 1 | PDFKit with python on Ubuntu | 29,774,518
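A hedged sketch of the fix described in that answer: make sure the HTML report is flushed and closed before handing it to pdfkit, otherwise the tail of the document may not be on disk yet and the last pages go missing. File names and content are placeholders:

```python
import pdfkit

html_report = "<html><body><h1>Report</h1></body></html>"  # stand-in content
with open("report.html", "w") as f:
    f.write(html_report)
# the with-block guarantees the file is flushed and closed here

pdfkit.from_file("report.html", "report.pdf")  # now sees the complete file
```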
3 | 3 | 0 | 0 | 6 | 0 | 0 | 0 | Hi, I am getting the error below when going to the website URL on Ubuntu Server 14.10 running Apache 2 with mod_wsgi and Python on Django.
My Django application uses Python 3.4 but it seems to be defaulting to Python 2.7; I am unable to import Image from PIL or AES from pycrypto.
ImportError at /
Request Method: GET
Request URL:
Django Version: 1.7.3
Exception Type: ImportError
Exception Value: cannot import name _imaging
Exception Location: /usr/local/lib/python3.4/dist-packages/PIL/Image.py in <module>, line 63
Python Executable: /usr/bin/python
Python Version: 2.7.6
Python Path: ['/var/www/blabla', '/usr/local/lib/python3.4/dist-packages', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-x86_64-linux-gnu', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages', '/var/www/blabla', '/usr/local/lib/python3.4/dist-packages'] | 0 | django,apache,python-3.x,mod-wsgi | 2015-01-19T06:51:00.000 | 0 | 28,019,310 | Thanks guys,
I actually fixed the issue myself this morning by re-running mod_wsgi's ./configure pointed at Python 3.4 and then make install.
I think you were right, Adam. | 0 | 5,696 | false | 0 | 1 | running django python 3.4 on mod_wsgi with apache2 | 28,044,520
3 | 3 | 0 | 9 | 6 | 0 | 1.2 | 0 | Hi, I am getting the error below when going to the website URL on Ubuntu Server 14.10 running Apache 2 with mod_wsgi and Python on Django.
My Django application uses Python 3.4 but it seems to be defaulting to Python 2.7; I am unable to import Image from PIL or AES from pycrypto.
ImportError at /
Request Method: GET
Request URL:
Django Version: 1.7.3
Exception Type: ImportError
Exception Value: cannot import name _imaging
Exception Location: /usr/local/lib/python3.4/dist-packages/PIL/Image.py in <module>, line 63
Python Executable: /usr/bin/python
Python Version: 2.7.6
Python Path: ['/var/www/blabla', '/usr/local/lib/python3.4/dist-packages', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-x86_64-linux-gnu', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages', '/var/www/blabla', '/usr/local/lib/python3.4/dist-packages'] | 0 | django,apache,python-3.x,mod-wsgi | 2015-01-19T06:51:00.000 | 0 | 28,019,310 | I believe that mod_wsgi is compiled against a specific version of python, so you need a py3.4 version of mod_wsgi. You may be able to get one from your os's package repository or you can build one without too much drama. From memory you'll need gcc and python-dev packages (python3-dev?) to build.
OK, quick Google: for Ubuntu 14.10, sudo apt-get install libapache2-mod-wsgi-py3 should install a Python 3 version of mod_wsgi (you will probably want to remove the existing Python 2 version).
Adding a shebang line won't do any good, as the Python interpreter is already loaded before the wsgi.py script is read. | 0 | 5,696 | true | 1 | 1 | running django python 3.4 on mod_wsgi with apache2 | 28,038,045
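A hypothetical sanity check related to this answer: a tiny WSGI app that reports which interpreter mod_wsgi actually embeds. Point Apache's WSGIScriptAlias at a file like this; if the page still shows 2.7.x, Apache is loading the Python 2 mod_wsgi build:

```python
import sys

def application(environ, start_response):
    body = ("mod_wsgi is running Python %s\n" % sys.version).encode("utf-8")
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]
```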
3 | 3 | 0 | 0 | 6 | 0 | 0 | 0 | Hi, I am getting the error below when going to the website URL on Ubuntu Server 14.10 running Apache 2 with mod_wsgi and Python on Django.
My Django application uses Python 3.4 but it seems to be defaulting to Python 2.7; I am unable to import Image from PIL or AES from pycrypto.
ImportError at /
Request Method: GET
Request URL:
Django Version: 1.7.3
Exception Type: ImportError
Exception Value: cannot import name _imaging
Exception Location: /usr/local/lib/python3.4/dist-packages/PIL/Image.py in <module>, line 63
Python Executable: /usr/bin/python
Python Version: 2.7.6
Python Path: ['/var/www/blabla', '/usr/local/lib/python3.4/dist-packages', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-x86_64-linux-gnu', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages', '/var/www/blabla', '/usr/local/lib/python3.4/dist-packages'] | 0 | django,apache,python-3.x,mod-wsgi | 2015-01-19T06:51:00.000 | 0 | 28,019,310 | From what I see here, your application is using the Python 2 interpreter with Python 3 compiled modules, which is a no-go.
One simple possible solution that comes to mind is to add or change the first line of manage.py to #!/usr/bin/python3. This tells the system to interpret the script with Python 3.
Next on the guess list would be a misconfiguration in the *.wsgi file or the Apache config, whichever you are using. | 0 | 5,696 | false | 1 | 1 | running django python 3.4 on mod_wsgi with apache2 | 28,032,221
2 | 3 | 0 | 0 | 0 | 1 | 0 | 0 | I tried using timeit and time.clock(), but my computer's speed is inconsistent and I couldn't find out which is faster. If this question already exists, then I apologize, but I couldn't find it. | 0 | python,python-2.7 | 2015-01-19T16:27:00.000 | 0 | 28,029,215 | I believe the second version (not 5 % 2) is faster, since the unary not operation is faster than a compare operation. | 0 | 142 | false | 0 | 1 | What is faster? "5 % 2 == 0" or "not 5 % 2" | 28,029,391
2 | 3 | 0 | 1 | 0 | 1 | 0.066568 | 0 | I tried using timeit and time.clock(), but my computer's speed is inconsistent and I couldn't find out which is faster. If this question already exists, then I apologize, but I couldn't find it. | 0 | python,python-2.7 | 2015-01-19T16:27:00.000 | 0 | 28,029,215 | Testing %2 (i.e. for oddness/evenness) can be performed quickly by examining the least significant bit of a number. (Do that with a logical AND: (n & 1).) You'd be hard pressed to beat that.
But any potential performance gain will probably be attenuated in Python, so profile it, and use it only if, in your judgement, the performance gain outweighs the obfuscating effect of an expression which, although idiomatic in C and C++, may well be unfamiliar to the folk with whom you are collaborating. | 0 | 142 | false | 0 | 1 | What is faster? "5 % 2 == 0" or "not 5 % 2" | 28,029,315
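A quick benchmark of the three idioms discussed in these answers. Results vary by machine and Python version, so treat the numbers as indicative only:

```python
import timeit

print(timeit.timeit("n % 2 == 0", setup="n = 5"))  # explicit comparison
print(timeit.timeit("not n % 2", setup="n = 5"))   # boolean negation
print(timeit.timeit("not n & 1", setup="n = 5"))   # least-significant-bit test
```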
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I have a website in Django. I am only going to visit the website using a Raspberry Pi, and I need to print some data; for printing I am using a thermal printer attached to the Raspberry Pi.
I can print from the Raspberry Pi using
("/dev/ttyAMA0",19200,timeout=5)
to obtain serial communication.
This works fine, but only when the project is hosted locally on the Raspberry Pi. The Django project is hosted on a VPS running Ubuntu, so I have problems printing from the Raspberry Pi,
because on the Ubuntu VPS there is no
"/dev/ttyAMA0"
How can I make a Django project print correctly through the Raspberry Pi's thermal printer? | 0 | python,linux,django,printing,raspberry-pi | 2015-01-21T05:00:00.000 | 0 | 28,059,937 | Install a print server on the Pi and serve the printer. You may need some sort of tunnel so that the VPS can reach the Pi. | 0 | 349 | false | 1 | 1 | django work with a Thermal Printer in Raspberry PI | 28,060,170
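A hedged sketch of the "print server on the Pi" idea: a tiny HTTP endpoint on the Raspberry Pi that writes whatever body it receives to the thermal printer's serial port; the Django app on the VPS would POST print jobs to it (the VPS must be able to reach the Pi, e.g. through a tunnel). The port and pyserial usage are illustrative assumptions:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import serial  # pyserial

printer = serial.Serial("/dev/ttyAMA0", 19200, timeout=5)

class PrintHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        printer.write(self.rfile.read(length))  # raw bytes straight to printer
        self.send_response(204)
        self.end_headers()

HTTPServer(("0.0.0.0", 8080), PrintHandler).serve_forever()
```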
2 | 2 | 0 | 1 | 0 | 0 | 0.099668 | 0 | I want to build a universal database in which I will keep data from multiple countries, so I will need to work with the UNICODE charset.
I need a little help in order to figure out the best way to work with stuff like that and how my queries will be affected (some SQL example queries from PHP/Python for basic stuff like insert/update/select would also be great).
Thank you. | 1 | php,python,sql,unicode | 2015-01-22T19:13:00.000 | 0 | 28,096,856 | Just put an N in front of the string literal, something like INSERT INTO MYTABLE VALUES(N'xxx'), and make sure your column type is nvarchar | 0 | 1,733 | false | 0 | 1 | How to insert UNICODE characters to SQL db? | 28,096,917
2 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | I want to build a universal database in which I will keep data from multiple countries, so I will need to work with the UNICODE charset.
I need a little help in order to figure out the best way to work with stuff like that and how my queries will be affected (some SQL example queries from PHP/Python for basic stuff like insert/update/select would also be great).
Thank you. | 1 | php,python,sql,unicode | 2015-01-22T19:13:00.000 | 0 | 28,096,856 | There is nothing special you need to do;
with PHP you can run query("SET NAMES utf8"); after connecting. | 0 | 1,733 | false | 0 | 1 | How to insert UNICODE characters to SQL db? | 28,096,868
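A small example of the Python side the question asks about, using sqlite3 as a stand-in DB-API driver (the same parameter-passing pattern applies to MySQL/PostgreSQL drivers). Parameterized queries let Unicode strings pass through without manual escaping; the column just needs a Unicode-capable type/charset:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE names (name TEXT)")
conn.execute("INSERT INTO names VALUES (?)", (u"\u0141ukasz Gr\u00fcner \u5317\u4eac",))
print(conn.execute("SELECT name FROM names").fetchone()[0])
```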
1 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | I have made a script in Java and Python that logs in to my internet. I want to run one of it on my Ubuntu 14.10 at boot (before user login). How do I do it? | 0 | java,python,ubuntu | 2015-01-26T14:01:00.000 | 1 | 28,151,557 | If you put it in ~/.bashrc it will run at the same time you log in | 0 | 38 | false | 0 | 1 | How to run a script that logs in to the internet at boot (before user login) on ubuntu | 28,151,615 |
1 | 1 | 0 | 0 | 2 | 0 | 0 | 1 | Which is the more resource-friendly way to collect SNMP traps from a Cisco router via python:
I could use a manager on a PC running a server, where the Cisco SNMP traps are sent to in case one occurs
I could use an agent to send a GET/GETBULK request every x timeframe to check if any new traps have occurred
I am looking for a way to run the script so that it uses the least resources as possible. Not many traps will occur so the communication will be low mostly, but as soon as one does occur, the PC should know immediately. | 0 | python,cisco,pysnmp | 2015-01-26T14:54:00.000 | 0 | 28,152,579 | Approach 1 is better from most perspectives.
It uses a little memory on the PC due to running a trap-collecting daemon, but the footprint should be reasonably small since it only needs to listen for traps and decode them, not do any complex task.
Existing tools to receive traps include the net-snmp suite, which allows you to just configure the daemon (i.e., you don't have to do any programming if you want to save some time).
Approach 2 has a couple of problems:
No matter what polling interval you choose, you run the risk of missing an alarm that was only active on the router for a short time.
Consumes CPU and network resources even if no faults are occurring.
Depending on the MIB of the router, some types of event may not be stored in any table for later retrieval. For Cisco, I would not expect this problem, but you do need to study the MIB and make sure of this. | 0 | 2,151 | false | 0 | 1 | Collecting SNMP traps with pySNMP | 48,458,525 |
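For approach 1, here is a trap-receiver sketch based on the standard pysnmp notification-receiver pattern. The community string "public" and port 162 are assumptions (port 162 usually needs elevated privileges); adapt both to the router's trap configuration:

```python
from pysnmp.entity import engine, config
from pysnmp.carrier.asyncore.dgram import udp
from pysnmp.entity.rfc3413 import ntfrcv

snmpEngine = engine.SnmpEngine()
config.addTransport(snmpEngine, udp.domainName,
                    udp.UdpTransport().openServerMode(("0.0.0.0", 162)))
config.addV1System(snmpEngine, "my-area", "public")  # community is an assumption

def on_trap(snmpEngine, stateReference, contextEngineId, contextName,
            varBinds, cbCtx):
    for name, val in varBinds:  # print each variable binding from the trap
        print("%s = %s" % (name.prettyPrint(), val.prettyPrint()))

ntfrcv.NotificationReceiver(snmpEngine, on_trap)
snmpEngine.transportDispatcher.jobStarted(1)  # keep the dispatcher running
snmpEngine.transportDispatcher.runDispatcher()
```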
1 | 2 | 0 | 0 | 1 | 0 | 1.2 | 0 | I know I can find multiple answers to this question but I have a problem with the result.
I have a Windows PC with a script on it and a Linux PC that has to start the script using ssh.
The problem I am seeing is that for some reason it's using the Linux environment to run the script and not the Windows env. Is this expected and if yes how can I start a remote script (From Linux) and still use the Windows env?
Linux: Python 2.7
Windows: Python 3.4
My example:
I am running:ssh user@host "WINDOWS_PYTHON_PATH Script.py arg1 arg2 arg3" and it fails internally at a copy command
I can't run ssh user@host "Script.py arg1 arg2 arg3" because then it will fail to run the script because of the python version.
The way I run the command in Windows is using the same syntax "Script.py arg1 arg2 arg3" and it works.
It looks like it's using the Linux env to run the script. I would like to run the script on Windows no matter who triggers it. How can I achieve this? | 0 | python,linux,bash,ssh | 2015-01-27T02:26:00.000 | 1 | 28,162,229 | The solution to my problem is to use PATH=/cygdrive/c/WINDOWS/system32:/bin cmd /c in front of the script call, sth like: ssh user@host "PATH=/cygdrive/c/WINDOWS/system32:/bin cmd /c script" .This will run the script using in Windows env.
In my case the problem was that the script was run under cygwin env and I wanted to be run in a Windows env. | 0 | 865 | true | 0 | 1 | How to execute a remote script using ssh | 28,223,316 |
1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | I need to kill all Python scripts except one. Unfortunately, all the scripts have the same process name, "pythonw.exe"; they differ only in PID.
At first, I didn't need to leave any script alive, so I just killed all Python scripts on the system with the taskkill /F /T /IM "python*" command.
But now I have one script that automates all the other scripts. This script is my simple "testing system": it rewrites the target script, then starts, stops and restarts it.
My problem is the multithreading in the target script (10-20 threads).
I don't know how to kill all the Python processes except the automating one.
P.S.
I tried to get the tasklist and the PID of the automating script, and kill every script except that one, but it didn't work (I don't know why).
P.P.S
OS: Windows XP
Python 2.7.8 | 0 | python,windows,multithreading | 2015-01-27T20:59:00.000 | 1 | 28,179,883 | My solution is to place the testing code in an exe file. Now I can kill all Python scripts, as previously. Maybe someone will offer another solution? | 0 | 207 | false | 0 | 1 | Python kill scripts | 28,252,408
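A hedged sketch of the "kill everything except myself" idea on Windows, using only stdlib tools: parse tasklist's CSV output, find every python.exe/pythonw.exe process whose PID differs from ours, and taskkill it. Codec and flags are the standard Windows ones, but treat this as an illustration rather than a tested recipe:

```python
import csv
import os
import subprocess

me = os.getpid()
out = subprocess.check_output(["tasklist", "/FO", "CSV"]).decode("mbcs", "replace")
for row in csv.reader(out.splitlines()):
    if not row or row[0].lower() not in ("python.exe", "pythonw.exe"):
        continue  # skip the header row and unrelated processes
    pid = int(row[1])
    if pid != me:  # never kill the automating script itself
        subprocess.call(["taskkill", "/PID", str(pid), "/F", "/T"])
```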
1 | 2 | 0 | 1 | 1 | 0 | 1.2 | 1 | I have some git commands scripted in Python, and occasionally git fetch can hang for hours if there is no connectivity. Is there a way to time it out and report a failure instead? | 0 | python,git,timeout | 2015-01-27T21:07:00.000 | 0 | 28,180,013 | No, you can't; you need to time out the command using a wrapper. | 0 | 2,127 | true | 0 | 1 | Can you specify a timeout to git fetch? | 29,615,466
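A minimal wrapper sketch (Python 3.3+) for the approach the answer suggests: run git fetch with a hard timeout and report failure instead of hanging. The 300-second limit is an arbitrary example value:

```python
import subprocess

try:
    subprocess.check_call(["git", "fetch"], timeout=300)
except subprocess.TimeoutExpired:
    print("git fetch timed out")
except subprocess.CalledProcessError as err:
    print("git fetch failed with exit code", err.returncode)
```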
2 | 2 | 0 | 0 | 0 | 0 | 0 | 1 | I am trying to extract mails from gmail using python. I noticed that I can get mails from "[Gmail]/All Mail", "[Gmail]/Drafts","[Gmail]/Spam" and so on. However, is there any method to retrieve mails that are labeled with "Primary", "Social", "Promotions" etc.? These tags are under the "categories" label, and I don't know how to access it.
By the way, I am using imaplib in python. Do I need to access the "categories" with some pop library? | 0 | python,email,gmail | 2015-01-28T02:18:00.000 | 0 | 28,183,527 | Unfortunately the categories are not exposed to IMAP. You can work around that by using filters in Gmail to apply normal user labels. (Filter on, e.g., category:social.) | 0 | 596 | false | 0 | 1 | How to extract mail in "categories" label in gmail? | 28,184,142 |
2 | 2 | 0 | 0 | 0 | 0 | 0 | 1 | I am trying to extract mails from gmail using python. I noticed that I can get mails from "[Gmail]/All Mail", "[Gmail]/Drafts","[Gmail]/Spam" and so on. However, is there any method to retrieve mails that are labeled with "Primary", "Social", "Promotions" etc.? These tags are under the "categories" label, and I don't know how to access it.
By the way, I am using imaplib in python. Do I need to access the "categories" with some pop library? | 0 | python,email,gmail | 2015-01-28T02:18:00.000 | 0 | 28,183,527 | Yes, the categories are not available over IMAP. However, rather than using filters, I found the Gmail API a better fit for getting mail by category. | 0 | 596 | false | 0 | 1 | How to extract mail in "categories" label in gmail? | 28,218,594
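A sketch of the filter-based workaround from the first answer: after a Gmail filter copies e.g. category:social mail to a user label (here the hypothetical label "SocialCopy"), that label is visible to IMAP as a folder and can be selected normally with imaplib. Credentials are placeholders:

```python
import imaplib

M = imaplib.IMAP4_SSL("imap.gmail.com")
M.login("user@example.com", "app-password")  # placeholder credentials
M.select('"SocialCopy"')                     # the label the filter applies
typ, data = M.search(None, "ALL")
print(typ, data[0].split()[:10])             # first few message ids
M.logout()
```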
2 | 3 | 0 | 16 | 97 | 1 | 1 | 0 | How do I solve an ImportError: No module named 'cStringIO' under Python 3.x? | 0 | python-3.x,stringio,cstringio | 2015-01-28T19:05:00.000 | 0 | 28,200,366 | I had the same issue because my file was called email.py. I renamed the file and the issue disappeared. | 0 | 138,242 | false | 0 | 1 | python 3.x ImportError: No module named 'cStringIO' | 50,033,540 |
2 | 3 | 0 | 0 | 97 | 1 | 0 | 0 | How do I solve an ImportError: No module named 'cStringIO' under Python 3.x? | 0 | python-3.x,stringio,cstringio | 2015-01-28T19:05:00.000 | 0 | 28,200,366 | I had the issue because my directory was called email. I renamed the directory to emails and the issue was gone. | 0 | 138,242 | false | 0 | 1 | python 3.x ImportError: No module named 'cStringIO' | 68,349,549 |
1 | 2 | 0 | 0 | 0 | 1 | 0 | 0 | I'm running a Python-based application on an AWS instance but now I want to install a Wordpress (PHP) blog as a sub-domain or as a sub-folder as an addition in the application. Is it technically possible to run two different stack applications on a single cloud instance? Currently getting an inscrutable error installing the Wordpress package with the Yum installer. | 0 | php,python,wordpress,amazon-web-services | 2015-01-29T13:46:00.000 | 0 | 28,216,315 | If you can run it on the same physical server, you can run it on an EC2 server the same way. There is no difference as far as that is concerned. | 0 | 31 | false | 1 | 1 | Running different stacks on same cloud instance | 28,217,518 |
1 | 1 | 0 | 6 | 4 | 0 | 1 | 1 | I am sending 20000 messages from a DEALER to a ROUTER using pyzmq.
When I pause 0.0001 seconds between each messages they all arrive but if I send them 10x faster by pausing 0.00001 per message only around half of the messages arrive.
What is causing the problem? | 0 | python,zeromq,pyzmq | 2015-01-31T00:52:00.000 | 0 | 28,246,973 | What is causing the problem?
A default setup of the ZMQ IO-thread, which is responsible for this mode of operation.
I would hesitate to call it a problem; the more so once you invest your time and dive deeper into the excellent ZMQ concept and architecture.
Since early versions of the ZMQ library, there were some important parameters, that help the central masterpiece ( the IO-thread ) keep the grounds both stable and scalable and thus giving you this powerful framework.
Zero SHARING / Zero COPY / (almost) Zero LATENCY are the maxims that do not come at zero-cost.
The ZMQ.Context instance has quite a rich internal parametrisation that can be modified via API methods.
Let me quote from a marvelous and precious source -- Pieter HINTJENS' book, Code Connected, Volume 1.
( It is definitely worth spending time and step through the PDF copy. C-language code snippets do not hurt anyone's pythonic state of mind as the key messages are in the text and stories that Pieter has crafted into his 300+ thrilling pages ).
High-Water Marks
When you can send messages rapidly from process to process, you soon discover that memory is a precious resource, and one that can be trivially filled up. A few seconds of delay somewhere in a process can turn into a backlog that blows up a server unless you understand the problem and take precautions.
...
ØMQ uses the concept of HWM (high-water mark) to define the capacity of its internal pipes. Each connection out of a socket or into a socket has its own pipe, and HWM for sending, and/or receiving, depending on the socket type. Some sockets (PUB, PUSH) only have send buffers. Some (SUB, PULL, REQ, REP) only have receive buffers. Some (DEALER, ROUTER, PAIR) have both send and receive buffers.
In ØMQ v2.x, the HWM was infinite by default. This was easy but also typically fatal for high-volume publishers. In ØMQ v3.x, it’s set to 1,000 by default, which is more sensible. If you’re still using ØMQ v2.x, you should always set a HWM on your sockets, be it 1,000 to match ØMQ v3.x or another figure that takes into account your message sizes and expected subscriber performance.
When your socket reaches its HWM, it will either block or drop data depending on the socket type. PUB and ROUTER sockets will drop data if they reach their HWM, while other socket types will block. Over the inproc transport, the sender and receiver share the same buffers, so the real HWM is the sum of the HWM set by both sides.
Lastly, the HWM-s are not exact; while you may get up to 1,000 messages by default, the real buffer size may be much lower (as little as half), due to the way libzmq implements its queues. | 0 | 1,622 | false | 0 | 1 | ZMQ DEALER ROUTER loses message at high frequency? | 28,255,012 |
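A hedged pyzmq sketch of the HWM knobs described above: raise (or disable) the send/receive high-water marks before connecting, so a fast DEALER does not silently hit the pipe capacity. A value of 0 means "unlimited" in ZMQ 3.x+, at the price of unbounded memory use; the endpoint is illustrative:

```python
import zmq

ctx = zmq.Context()
dealer = ctx.socket(zmq.DEALER)
dealer.setsockopt(zmq.SNDHWM, 0)        # no cap on queued outgoing messages
dealer.setsockopt(zmq.RCVHWM, 0)
dealer.connect("tcp://127.0.0.1:5555")  # set HWMs before connect
```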
1 | 1 | 0 | 0 | 0 | 1 | 1.2 | 0 | I've tried to use hachoir-metadata to work with multimedia files, but I can't find how to parse covers in ID3v2 metadata. I see in the source code that it knows about a lot of cover tags but does not return any in the parser. I've also tried to use libextractor with the python-extractor binding, and didn't find how to fetch covers from multimedia files either. | 0 | python,metadata,id3v2,hachoir-parser | 2015-02-03T06:23:00.000 | 0 | 28,292,632 | The problem with hachoir-metadata is that it searches only the "text" field for every ID3 chunk, but the image is written to the "img_data" field, not the "text" field.
So hachoir-metadata cannot extract images from metadata because of this problem in its source code. | 0 | 141 | true | 0 | 1 | Does hachoir metadata or libextractor extract covers from ID3v2 and all another formats? | 28,557,527
1 | 1 | 0 | 1 | 0 | 0 | 1.2 | 0 | What will be the best way to test following:
We have a large, complex class that we'll call ValueSetter, which accepts a string, extracts some data from it, and assigns this data to several variables like
message_title, message_content, message_number
To do this it uses another class called Rule, where the rules for each particular case are described with regular expressions.
What is needed:
Because in each Rule there are about 5 cases to match, we want to test each of them separately.
So far we only need to assert that a particular Rule returns the correct string in each case. What is the best way to do this? | 0 | python,regex,unit-testing,python-unittest | 2015-02-04T07:49:00.000 | 0 | 28,316,119 | Try to test Rule and ValueSetter each in their own test. Test that the Rule really does what you think in the 5 cases you describe in your question. Then, when you test your ValueSetter, just assume that Rule does what you think and set, for example, message_title, message_content and message_number directly. So you inject the information the way Rule would have done it.
This is what you usually do in a unit test. In order to test that everything works in conjunction, you would usually do a functional test that exercises the application from a higher/user level.
If you cannot construct a ValueSetter without using a Rule, then just create a new class for the test that inherits from ValueSetter and overwrite the __init__ method. This way you will be able to get a 'blank' object and set the member variables directly, as you expect them to be. | 0 | 56 | true | 0 | 1 | Test part of complex structure with unittest and mocks | 28,316,773
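A hedged sketch of the strategy this answer describes. The Rule/ValueSetter bodies and the regex are hypothetical stand-ins for the real classes; the TestableValueSetter subclass overrides __init__ to get the "blank" object whose members are injected directly:

```python
import re
import unittest

class Rule(object):                        # hypothetical stand-in
    pattern = re.compile(r"title:(\w+)")
    def apply(self, text):
        return self.pattern.search(text).group(1)

class ValueSetter(object):                 # hypothetical stand-in
    def __init__(self, text):
        self.message_title = Rule().apply(text)

class TestableValueSetter(ValueSetter):
    def __init__(self):                    # skip the real parsing entirely
        pass

class RuleTest(unittest.TestCase):
    def test_title_case(self):             # one test per regex case
        self.assertEqual(Rule().apply("title:hello"), "hello")

class ValueSetterTest(unittest.TestCase):
    def test_uses_injected_members(self):
        vs = TestableValueSetter()
        vs.message_title = "hello"         # inject what Rule would have set
        self.assertEqual(vs.message_title, "hello")

if __name__ == "__main__":
    unittest.main()
```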
1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | fuzzy.c:1635:5: error: too few arguments to function 'PyCode_New'
I am upgrading from Python 2.7 to 3.2. I am getting an error in the C compilation of the fuzzy library (which apparently isn't Python 3 compatible).
Any suggestions? Is there an alternative to the NYSIIS encoding?
Thanks | 0 | python-3.x,fuzzy-search,fuzzy-comparison | 2015-02-04T12:31:00.000 | 0 | 28,321,474 | Jellyfish supports Python 3.3 and 3.4. | 0 | 299 | false | 0 | 1 | Fuzzy - NYSIIS python 3 | 37,747,215 |
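A tiny example of the replacement this answer suggests: the jellyfish package exposes a nysiis() function that can stand in for fuzzy's NYSIIS encoder on Python 3:

```python
import jellyfish

print(jellyfish.nysiis("Katherine"))  # NYSIIS phonetic code for the name
```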
1 | 3 | 0 | 41 | 102 | 1 | 1 | 0 | I am new to PyCharm. I have a directory that I use for my PYTHONPATH: c:\test\my\scripts\. In this directory I have some modules I import. It works fine in my Python shell.
How do I add this directory path to PyCharm so I can import what is in that directory? | 0 | python,pycharm | 2015-02-04T16:25:00.000 | 0 | 28,326,362 | In PyCharm Community 2019.2/2019.3 (and probably other versions), you can simply:
right-click any folder in your project
select "Mark Directory As"
select "Sources Root"
Modules within that folder will now be available for import. Any number of folders can be so marked. | 0 | 100,884 | false | 0 | 1 | PyCharm and PYTHONPATH | 57,497,936 |
1 | 2 | 0 | 1 | 2 | 0 | 0.099668 | 1 | I am doing a digital signage project using Raspberry-Pi. The R-Pi will be connected to HDMI display and to internet. There will be one XML file and one self-designed HTML webpage in R-Pi.The XML file will be frequently updated from remote terminal.
My idea is to parse the XML file using Python (lxml) and pass the parsed data to my local HTML webpage so that it can display this data in the R-Pi's web browser. The webpage will be reloaded frequently to reflect the changed data.
I was able to parse the XML file using Python (lxml). But what tools should I use to display this parsed content (mostly strings) in a local HTML webpage?
This question might sound trivial, but I am very new to this field and could not find any clear answer anywhere. Also, there are methods that use PHP for parsing XML and then passing it to an HTML page, but as per my other needs I am bound to use Python. | 0 | python,html,xml,parsing,raspberry-pi | 2015-02-04T19:40:00.000 | 0 | 28,329,977 | I think there are 3 steps you need to make it work.
Extracting only the data you want from the given XML file.
Using a simple template engine to insert the extracted data into an HTML file.
Using a web server to serve the file created above.
Step 1) You are already using lxml, which is a good library for doing this, so I don't think you need help there.
Step 2) Now, there are many Python templating engines out there, but for a simple purpose you just need an HTML file that was created in advance with some special markup such as {{0}}, {{1}} or whatever works for you. This would be your template. Take the data from step 1, do a find-and-replace in the template, and save the output to a new HTML file.
Step 3) To make that file accessible from a browser on a different device or PC, you need to serve it using a simple HTTP web server. Python provides the http.server library, or you can use a third-party web server; just make sure it can access the file created in step 2. | 0 | 4,898 | false | 0 | 1 | Parse XML file in python and display it in HTML page | 28,361,971
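A compact, hedged sketch of the three steps: extract strings with lxml, substitute them into a template with {{0}}-style markers, and serve the result with the standard-library web server (Python 3). The file names and the //item XPath are placeholders for the real report structure:

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler
from lxml import etree

values = [el.text for el in etree.parse("data.xml").xpath("//item")]  # step 1

with open("template.html") as f:            # step 2: fill template markers
    page = f.read()
for i, value in enumerate(values):
    page = page.replace("{{%d}}" % i, value)
with open("index.html", "w") as f:
    f.write(page)

# step 3: serve the current directory (so /index.html) over HTTP
HTTPServer(("0.0.0.0", 8000), SimpleHTTPRequestHandler).serve_forever()
```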
1 | 1 | 0 | 5 | 2 | 1 | 0.761594 | 0 | is "import" in python equivalent to "include" in c++?
Can I consider namespaces from c++ the same way I do with python module names? | 0 | python,c++ | 2015-02-05T04:37:00.000 | 0 | 28,336,318 | #include in C and C++ is a textual include. import in Python is very different -- no textual inclusion at all!
Rather, Python's import lets you access names exported by a self-contained, separately implemented module. Some #includes in C or C++ may serve similar roles -- provide access to publicly accessible names from elsewhere -- but they could also be doing so many other very different things, you can't easily tell.
For example, it's normal for a .cc source file to #include the corresponding .h header file to make sure it's implementing precisely what that header file makes available elsewhere -- there's no equivalent of that in Python (or Java or, AFAIK, most other modern languages).
#include could also be about making macros available... and Python very deliberately chooses to have no macros, so, no equivalence!-)
All in all, I think the analogy is likely to be more confusing than helpful. | 0 | 3,107 | false | 0 | 1 | include in C++ vs import in python | 28,336,386 |
1 | 1 | 0 | 1 | 2 | 1 | 1.2 | 0 | I use Python 3.4. I have a program that provide an integration with COM module in Windows, by win32com package. To process messages from this module I use the pythoncom.PumpWaitingMessages() method in the infinite while loop. But python infinite loop makes 100% CPU core load (as shown in Windows Task Manager). The questions:
Is it real "work" or peculiarity of Windows Task Manager?
How one can avoid that. Maybe by using asyncio module or another way?
Is it possible to process messages in another thread or asynchronously with pythoncom? | 0 | python,python-3.x,com,win32com | 2015-02-05T19:29:00.000 | 0 | 28,352,419 | PumpWaitingMessage will process messages and return as soon as there are no more messages to process.
You can call it in a loop, but you should call MsgWaitForMultipleObjects, or MsgWaitForMultipleObjectsEx, before the next loop iteration.
Avoid calling these functions before the initial loop iteration, as they'll block and you won't have a chance to see if some condition is met whether there are messages to process or not. Or alternatively, provide a reasonable timeout to these functions. | 0 | 3,020 | true | 0 | 1 | python win32com "PumpWaitingMessages()" processing | 28,363,049 |
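A hedged pywin32 sketch of the loop this answer describes: block in MsgWaitForMultipleObjects until a Windows message (or a stop event) arrives, then pump, instead of spinning at 100% CPU. The QS_ALLEVENTS wake mask and the event-based exit are assumptions to adapt to the real program:

```python
import pythoncom
import win32event

stop = win32event.CreateEvent(None, 0, 0, None)  # signal this to exit

while True:
    rc = win32event.MsgWaitForMultipleObjects(
        [stop], 0, win32event.INFINITE, win32event.QS_ALLEVENTS)
    if rc == win32event.WAIT_OBJECT_0:  # stop event fired
        break
    pythoncom.PumpWaitingMessages()     # drain COM/window messages, then wait again
```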
1 | 1 | 0 | 1 | 0 | 0 | 1.2 | 1 | How I can use some 'extra' python module which is not located localy but on a remote server? Somthing like using maven dependencies with Java | 0 | python-3.x,python-import,python-module | 2015-02-07T06:30:00.000 | 1 | 28,379,325 | You would be better off to install those modules locally.
You can create packages of those modules (using pip or something similar) and then distribute them to your local box.
As far as I know, there is nothing directly similar. | 0 | 32 | true | 0 | 1 | How to use remote python modules | 29,157,556
2 | 3 | 0 | 2 | 0 | 1 | 0.132549 | 0 | When I print sys.path in my code I get the following as output:
['C:\Netra_Step_2015\Tests\SVTestcases', 'C:\Netra_Step_2015\Tests\SVTestcases\TC-Regression', 'C:\Python27\python27.zip', 'C:\Python27\DLLs', 'C:\Python27\lib', etc.]
Now, when I write "import testCaseBase as TCB", where testCaseBase.py is in this path:
C:\Netra_Step_2015\Tests\SVTestcases\Common\shared
I get an error: "ImportError: No module named testCaseBase"
My code is in C:\Netra_Step_2015\Tests\SVTestcases\TC-Regression\regression.py. My code goes ahead with compilation, but testCaseBase.py, which resides in a parallel directory, fails to import.
What might be the reason? | 0 | python,python-2.7,python-3.x | 2015-02-09T09:27:00.000 | 0 | 28,406,420 | put
C:\Netra_Step_2015\Tests\SVTestcases\Common\shared
in your PYTHONPATH env | 0 | 6,665 | false | 0 | 1 | ImportError: Module not found but sys.path is showing the file resides under the path | 28,406,587 |
2 | 3 | 0 | 0 | 0 | 1 | 0 | 0 | When I print sys.path in my code I get the following as output:
['C:\Netra_Step_2015\Tests\SVTestcases', 'C:\Netra_Step_2015\Tests\SVTestcases\TC-Regression', 'C:\Python27\python27.zip', 'C:\Python27\DLLs', 'C:\Python27\lib', etc.]
Now, when I write "import testCaseBase as TCB", where testCaseBase.py is in this path:
C:\Netra_Step_2015\Tests\SVTestcases\Common\shared
I get an error: "ImportError: No module named testCaseBase"
My code is in C:\Netra_Step_2015\Tests\SVTestcases\TC-Regression\regression.py. My code goes ahead with compilation, but testCaseBase.py, which resides in a parallel directory, fails to import.
What might be the reason? | 0 | python,python-2.7,python-3.x | 2015-02-09T09:27:00.000 | 0 | 28,406,420 | Please don't use ~/ in the path; it does not work here. Use the full path. | 0 | 6,665 | false | 0 | 1 | ImportError: Module not found but sys.path is showing the file resides under the path | 52,959,701
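One direct, minimal fix for the question: make the parallel directory importable at runtime by appending it to sys.path before the import (the cleaner long-term fix is adding it to PYTHONPATH, as the first answer says):

```python
import sys

sys.path.append(r"C:\Netra_Step_2015\Tests\SVTestcases\Common\shared")
import testCaseBase as TCB  # now resolvable
```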
2 | 2 | 0 | 1 | 1 | 1 | 0.099668 | 0 | I'm trying to install Robot Framework, but it keeps giving me an error message during setup that "No Python installation found in the registry." I've tried running the installer as administrator, I've made sure that Python is installed (I've tried both 2.7.2 and 2.7.9), and both C:\Python27 and C:\Python27\Scripts are added to the PATH (with and without slashes on the end, if that matters).
I have no idea why it can't find Python. What do I need to do? | 0 | python,robotframework | 2015-02-09T15:44:00.000 | 0 | 28,413,567 | Try adding the following path to the environment variable as well:
"C:\Python27\Lib\site-packages"
This path contains all the third-party modules installed on your PC; also verify that the Robot Framework library is present in this folder. | 0 | 1,818 | false | 1 | 1 | Robot Framework can't find Python | 41,825,897
2 | 2 | 0 | 0 | 1 | 1 | 0 | 0 | I'm trying to install Robot Framework, but it keeps giving me an error message during setup that "No Python installation found in the registry." I've tried running the installer as administrator, I've made sure that Python is installed (I've tried both 2.7.2 and 2.7.9), and both C:\Python27 and C:\Python27\Scripts are added to the PATH (with and without slashes on the end, if that matters).
I have no idea why it can't find Python. What do I need to do? | 0 | python,robotframework | 2015-02-09T15:44:00.000 | 0 | 28,413,567 | I faced the same issue.
Install a different bit-version of the Robot Framework installer. In my case, I was first trying to install the 64-bit version, but it said "No Python installation found in the registry."
Then I tried the 32-bit version of the Robot Framework installer and it worked.
So there is nothing wrong with your Python version. | 0 | 1,818 | false | 1 | 1 | Robot Framework can't find Python | 28,443,946
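A common workaround sketch for "No Python installation found in the registry" is to write the PythonCore InstallPath key yourself so the installer can find the interpreter. This is an assumption-heavy illustration: it assumes Python 2.7 at C:\Python27 (adjust version and path), uses the standard PythonCore registry location, and should be run with the same Python you want the installer to find (Python 2 spells the module _winreg):

```python
try:
    import winreg              # Python 3
except ImportError:
    import _winreg as winreg   # Python 2

key_path = r"Software\Python\PythonCore\2.7\InstallPath"
key = winreg.CreateKey(winreg.HKEY_CURRENT_USER, key_path)
winreg.SetValue(key, "", winreg.REG_SZ, "C:\\Python27\\")
winreg.CloseKey(key)
```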
1 | 1 | 0 | 1 | 0 | 1 | 1.2 | 0 | I'm in the midst of writing our build/deployment scripts for a small Python application that will be run multiple times per day using a scheduled task. I figure now is as good a time as ever to investigate Python's setup.py feature.
My question is, is there any sort of benefit to investing the time in creating a setup.py file for internal business applications as opposed to say, writing a simple script that will activate my virtualenv, then download my pip packages based on my requirements file? | 0 | python,python-3.x,setuptools,setup.py | 2015-02-10T14:15:00.000 | 0 | 28,433,859 | setup.py is intended for people who are writing their own Python code and need to deploy it. If you either haven't written any Python code, or for some reason do not need to deploy any of the code you have written (you're not doing development in production, are you?), setup.py is not going to be terribly useful. | 0 | 76 | true | 0 | 1 | Is Python's setup.py beneficial for internal applications? | 28,435,407 |
1 | 1 | 0 | 7 | 4 | 0 | 1.2 | 0 | So I'm building a system where I scan a RFID tag with a reader connected to a Raspberry Pi, the RFID tag ID should then be sent to another "central" RPI, where a database is checked for some info, and if it matches the central Pi sends a message to a lamp (also connected to a Pi) which will then turn on. This is just the start of a larger home-automation system.
I read about MQTT making it very easy to make more RPIs communicate and act on events like this. The only thing I am wondering about, but can't find documented on the internet, is whether the central Pi in my case can act like the broker, but also be subscribed to the topic for the RFID tag ID, check the database and then publish to another topic for the light.
Purely based on logical thinking I'd say yes, since the broker is running in the background. Thus I would still be able to run a Python script that subscribes/publishes to, I'm guessing, localhost instead of the central Pi's IP address and port.
Can anyone confirm this? I can't test it myself yet because I have just ordered the equipment, and am doing lots of preparation research. | 0 | python,raspberry-pi,mqtt | 2015-02-10T15:23:00.000 | 0 | 28,435,372 | You can run as many clients as you like on the same machine as the broker (you could even run multiple brokers, as long as they listen on different ports). The only thing you need to do is ensure that each client has a different client ID. | 0 | 2,202 | true | 0 | 1 | MQTT broker and client on the same RPI | 28,435,487
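A minimal paho-mqtt sketch (1.x-style constructor) of the central Pi acting as both broker host and client: the script connects to the broker on localhost, listens for RFID scans, and publishes lamp commands. The topic names and the tag check are illustrative placeholders for the database lookup:

```python
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    tag_id = msg.payload.decode()
    if tag_id == "known-tag":            # stand-in for the DB lookup
        client.publish("home/lamp", "on")

client = mqtt.Client("central-pi")       # unique client id
client.on_message = on_message
client.connect("localhost", 1883)        # broker runs on this same Pi
client.subscribe("home/rfid")
client.loop_forever()
```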
2 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | I am running two different Python scripts on a Ubuntu VPS. One of them is the production version, and the other one is for testing purposes.
From time to time, I need to kill or restart one of them. However, ps does not show which script each Python process is running.
What is the conventional way to do this? | 0 | python,linux,ps | 2015-02-10T17:18:00.000 | 1 | 28,437,813 | The easiest way to keep it simple would be to create a screen session for each: screen -S prod and screen -S test. Then run the Python script inside each and detach the screen (using Ctrl+A, D). When you need to stop one, reattach with screen -r prod, kill or restart the script, then detach again. | 0 | 194 | false | 0 | 1 | How do i stop process running a certain Python script? | 31,284,981
2 | 2 | 0 | 2 | 0 | 0 | 0.197375 | 0 | I am running two different Python scripts on a Ubuntu VPS. One of them is the production version, and the other one is for testing purposes.
From time to time, I need to kill or restart one of them. However, ps does not show which script each Python process is running.
What is the conventional way to do this? | 0 | python,linux,ps | 2015-02-10T17:18:00.000 | 1 | 28,437,813 | ps -AF will give you All processes (not only the ones in your current terminal, or run as your current user), in Full detail, including arguments. | 0 | 194 | false | 0 | 1 | How do i stop process running a certain Python script? | 28,437,903 |
1 | 1 | 0 | 0 | 2 | 0 | 1.2 | 0 | I have a small Python script that has a dependency on the scipy library and a few others. It exposes a few methods: weather_prediction, current_temperature, and so on.
I would like to make this script callable from a Ruby gem, and wrap the Python methods in similar Ruby methods so that consumers of the gem can interface with Ruby, not Python.
Ruby has the ability to have C extensions, but that's not really what I'm after here; I'd rather just have a way to talk to the Python directly. Is that possible? | 0 | python,ruby | 2015-02-10T20:17:00.000 | 0 | 28,440,950 | Follow-up: I eventually decided that this was forcing a square peg into a round hole. We now just run the Python dependency as an HTTP service and call it that way. | 0 | 79 | true | 0 | 1 | Including a Python dependency in a Ruby gem | 29,416,346
1 | 1 | 0 | 1 | 0 | 0 | 1.2 | 0 | I am writing PHP-Python web application for a webApp scanner where web application is managed by PHP and scanning service is managed by python. My question is if I kill a running python process with PHP, does it cause any memory leak or any other trouble (functionality-wise I handled it already) | 0 | php,python,memory,memory-management | 2015-02-11T11:28:00.000 | 0 | 28,453,194 | No. When a process is killed, the operating system releases all operating system resources (memory, sockets, file handles, …) previously acquired by that process. | 0 | 71 | true | 1 | 1 | PHP-Python - killing python process leads memory leak? | 28,453,222 |
1 | 3 | 0 | 6 | 0 | 0 | 1 | 0 | I have a python application running on a beaglebone. How do I (in Python) check if the "/mnt" partition is mounted as read-only or read-write? | 0 | python,linux | 2015-02-11T14:04:00.000 | 1 | 28,456,349 | EDIT: The answer makes the assumption that you plan to write to /mnt.
I would just try to write to it and catch OSError exception to handle the read-only case. | 0 | 2,297 | false | 0 | 1 | Python - Check if linux partition is read-only or read-write? | 28,456,469 |
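Two complementary checks, sketched below: parse /proc/mounts for the mount flags, and (per the answer) just attempt a write and catch the error. Both use only the standard library:

```python
import os

def mount_is_readonly(mountpoint="/mnt"):
    with open("/proc/mounts") as mounts:
        for line in mounts:
            device, mnt, fstype, opts = line.split()[:4]
            if mnt == mountpoint:
                return "ro" in opts.split(",")
    raise ValueError("%s is not a mount point" % mountpoint)

def can_write(mountpoint="/mnt"):
    probe = os.path.join(mountpoint, ".write_probe")
    try:
        with open(probe, "w"):
            pass
        os.remove(probe)
        return True
    except OSError:  # a read-only filesystem raises EROFS here
        return False
```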
2 | 2 | 0 | 0 | 6 | 0 | 0 | 0 | I am currently using Sublime Text 3 for programming in Python, Java, C++ and HTML. So, for each language I have a different set of plugins. I would like to know if there is a way for changing between "profiles", with each profile containing plugins of the respective language. My PC is not all that powerful, so it starts to hang if I have too many active plugins. So when one profile is running, all other plugins should be disabled.
TL;DR : Is there a way to change between "profiles" containing a different set of plugins in Sublime Text? | 0 | java,python,sublimetext,sublimetext3,sublime-text-plugin | 2015-02-12T02:36:00.000 | 0 | 28,468,321 | In Windows 10, if you delete this folder --> C:\Users\USER\AppData\Roaming\Sublime Text 3
then Sublime Text will revert to its original state. You could set up a batch file to keep different versions of this folder, for example:
"./Sublime Text 3_Java" or
"./Sublime Text 3_Python" or
"./Sublime Text 3_C++"
and then, when you want to work on Java, have the batch file rename "./Sublime Text 3_Java" to "./Sublime Text 3" and restart Sublime. If you really want to get fancy, use a symlink to represent "./Sublime Text 3" and have the batch file modify (or delete and recreate) where the symlink points. | 0 | 1,302 | false | 1 | 1 | Sublime Text 3 - Plugin Profiles | 40,161,595
2 | 2 | 0 | 2 | 6 | 0 | 0.197375 | 0 | I am currently using Sublime Text 3 for programming in Python, Java, C++ and HTML. So, for each language I have a different set of plugins. I would like to know if there is a way for changing between "profiles", with each profile containing plugins of the respective language. My PC is not all that powerful, so it starts to hang if I have too many active plugins. So when one profile is running, all other plugins should be disabled.
TL;DR : Is there a way to change between "profiles" containing a different set of plugins in Sublime Text? | 0 | java,python,sublimetext,sublimetext3,sublime-text-plugin | 2015-02-12T02:36:00.000 | 0 | 28,468,321 | The easiest way I can think of doing this on Windows is to have multiple portable installs, each one set up for your programming language and plugin set of your choice. You can then set up multiple icons on your desktop/taskbar/start menu/whatever, each one pointing to a different installation. This way you don't have to mess around with writing batch files to rename things.
Alternatively, you could just spring for a new computer :) | 0 | 1,302 | false | 1 | 1 | Sublime Text 3 - Plugin Profiles | 28,504,749 |
1 | 1 | 0 | 5 | 3 | 0 | 1.2 | 0 | The scheme is the following. There exists a package called foo (an API under heavy development, in first alpha phases) whose rst files are auto generated with sphinx-apidoc.
For the sake of having a better documentation for foo after those files are generated, there is some editing. In, say, foo.bar.rst there are some paragraphs added to the contents generated with sphinx-apidoc
How can I avoid losing all that information when a new call of sphinx-apidoc is made? And of course I want potential changes in the API to be reflected, with the manually added information preserved. | 0 | python,python-sphinx | 2015-02-12T15:34:00.000 | 0 | 28,481,471 | sphinx-apidoc only needs to be re-run when the module structure of your project changes. If adding, removing, and renaming modules is an uncommon occurrence for you, it may be easiest to just place the rst files under version control and update them by hand. Adding or removing a module only requires changing a few lines of rst, so you don't even need to use sphinx-apidoc once you've run it once. | 0 | 1,826 | true | 0 | 1 | Keeping the API updated in Sphinx | 28,481,785
1 | 5 | 0 | 6 | 26 | 1 | 1 | 0 | My laptop has been formatted and new OS was installed, and since then I get this error:
ImportError: No module named git
This refers to a python code which simply imports git.
Location of git before my laptop was formatted: /usr/local/bin/git
Location of git after the laptop was formatted: /usr/bin/git
How / what do I change in my python code to refer to right path ? | 0 | python,git | 2015-02-12T16:54:00.000 | 0 | 28,483,253 | In my case apt install python-git fixed the issue. | 0 | 50,377 | false | 0 | 1 | ImportError: No module named git after reformatting laptop | 58,271,317 |
1 | 1 | 1 | 0 | 4 | 1 | 0 | 0 | I currently use Cython to build a module that is mostly written in C. I would like to be able to debug quickly by simply calling a python file that imports the "new" Cython module and test it. The problem is that I import GSL and therefore pyximport will not work. So I'm left with "python setup.py build; python setup.py install" and then running my test script.
Is this the only way? I was wondering if anyone else uses any shortcuts or scripts to help them debug faster? | 0 | python,cython | 2015-02-12T23:01:00.000 | 0 | 28,489,542 | I usually just throw all the commands I need to build and test into a shell script, and run it when I want to test. It's a lot easier than futzing with crazy Python test runners. | 0 | 87 | false | 0 | 1 | Better Way of Debugging Cython Packages | 29,159,267 |
I'm trying to install Python 2.6 using pyenv, but when doing pyenv install 2.6.9 I get the following:
checking MACHDEP... darwin
checking EXTRAPLATDIR... $(PLATMACDIRS)
checking machine type as reported by uname -m... x86_64
checking for --without-gcc... no
checking for gcc... gcc
checking whether the C compiler works... no
configure: error: in `/var/folders/r9/771hsm9931sd81ppz31384p80000gn/T/python-build.20150213191018.2121/Python-2.6.9':
configure: error: C compiler cannot create executables
I've installed Xcode 4.6.3 and installed Command Line Tools as info.
Cheers,
Ch | 0 | c,xcode4,osx-lion,python-2.6 | 2015-02-13T18:25:00.000 | 1 | 28,506,217 | Actually I found the problem. The problem was with the ld: library not found for -lgcc_ext.10.5
The gcc version given by Xcode 4.6.3 on Mac OS X Lion is 4.6.
I installed the new gcc via homebrew, brew install gcc.
I symlink my gcc to gcc-4.9 by doing ln -s /usr/local/bin/gcc /usr/local/bin/gcc-4.9.
make sure the that in your PATH /usr/local/bin is before /usr/bin ).
To a ls -l 'which gcc' to check that gcc is associated to the 4.9 version. Once this is done, the library is found and python 2.6 can be installed using pyenv. | 0 | 6,376 | true | 0 | 1 | checking whether the C compiler works... no when installing python 2.6 (mac os x lion) | 28,530,379 |
1 | 1 | 0 | 1 | 0 | 0 | 1.2 | 0 | I'm using a Raspberry Pi B+ to create some files that I would like to post on FB and Instagram (my account or any account).
I have a good industrial computing background, but not for the "cloud" stuff.
I've seen the libs for Python to connect to Facebook and to Instagram
(facebook-sdk, python-instagram).
I understand the code of the examples etc...
I'm just missing the context of where I should put this code to be able to interact with these "social media" sites.
Could it work just with a UPLOADER.py ?
Or do I need to set up like a webserver ? Do I need the Json.simple/google and so on ?
I understand if it's a dumb question, but I'm a bit lost...
A few "architectural" directions will do :). I'll get to understand the technical parts by myself...
Thanks in advance!
Cheers,
Mat | 0 | python,facebook,instagram | 2015-02-13T21:01:00.000 | 0 | 28,508,663 | You can set them up on "any" OS. Just make sure you have an internet connection. Also note that those libraries won't do anything unless you write the code. So you need to create a lightweight wrapper that passes credentials and triggers the necessary functions in a certain order. And that's pretty much it.
Could it work just with a UPLOADER.py ?
Not sure what you're referring to.
Or do I need to set up like a webserver ?
No, you don't. It's not a requirement of the libraries.
Do i need the Json.simple/google
Take a look at the file called requirements.txt; it provides the set of libraries you need in addition to the standard/built-in ones. | 0 | 112 | true | 0 | 1 | python lib for facebook instagram | 28,509,000
1 | 2 | 0 | 0 | 4 | 0 | 0 | 0 | I need to create a Python wrapper for a library using SWIG and write unit tests for it. I don't know how to do this. My first take on this problem is to mock the dynamic library with the same interface as the library that I'm writing the wrapper for. This mock library can log every call or return some generated data. These logs and the generated data can then be checked by the unit tests. | 0 | python,swig,python-extensions | 2015-02-16T14:04:00.000 | 0 | 28,543,193 | I'd absolutely recommend doing some basic testing of the wrapped code. Even some basic "can I instantiate my objects" test is super helpful; you can always write more tests as you find problem areas.
Basically what you're testing is the accuracy of the SWIG interface file - which is code that you wrote in your project!
If your objects are at all interesting, it is very possible to confuse SWIG. It's also easy to accidentally skip wrapping something, or for a wrapper to use a different typemap than you hoped. | 0 | 501 | false | 0 | 1 | How should I unit-test python wrapper generated by SWIG | 60,458,804
1 | 3 | 1 | 1 | 0 | 0 | 1.2 | 0 | When dealing with C extensions with Python, what is the best way to move data from a C++ array or vector into a PyObject/PyList so it can be returned to Python?
The method I am using right now seems a bit clunky. I am looping through each element and calling Py_BuildValue to append values to a PyList.
Is there something similar to memcpy? | 0 | python,c++,c | 2015-02-17T10:40:00.000 | 0 | 28,560,045 | You have to construct a PyObject from each element prior to adding it to a sequence.
So you have to either add them one by one, or convert them all first and then pass them to a constructor taking a PyObject[].
I guess the second way is slightly faster, since it doesn't have to adjust the sequence's member variables after each addition. | 0 | 983 | true | 0 | 1 | Best way to copy C++ buffer to Python | 28,560,741
I quite often launch Python scripts from the command line, like python somescript.py --with-arguments
Now I'm wondering why that is not saved in the output of the history command,
and if there is a way to see a history of it | 0 | python,bash,ubuntu,command-line,history | 2015-02-18T12:23:00.000 | 1 | 28,583,633 | Bash usually saves all your commands in the history buffer, except if you specifically mark them to be excluded. There is an environment variable HISTIGNORE which might be configured to ignore python invocations altogether, although this is somewhat unlikely; or you may be marking them for exclusion by typing a space before the command. | 0 | 45 | false | 0 | 1 | Why records about python command-line launches do not saved in history | 28,583,843
1 | 1 | 0 | 2 | 0 | 1 | 1.2 | 0 | When i use python-mode it uses my system (mac python), I have anaconda installed and want Vim to autocomplete etc with that version of python
As it stands now, python-mode will only autocomplete modules from the system Python and not other modules, e.g. pandas, that are installed in the Anaconda distro.
Thanks,
Tobie | 0 | vim,python-mode | 2015-02-18T19:51:00.000 | 1 | 28,592,624 | Not trivial. Python-mode uses the Python interpreter Vim is linked against; you'll have to recompile Vim and link it against Anaconda. | 0 | 513 | true | 0 | 1 | Vim python-mode plugin picks up system python and not anaconda | 28,592,881
1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | I need to score my students c programming homeworks. I want to write an autograder script which automatically score the homeworks. I plan to write this script in python language.
My question is, in the homework, in some parts students get some input from keyboard with scanf, how can i handle this problem if I try to write an autograder? Is there any way to read from text file when the scanf line runs in homework ?
Any idea is appreciated. | 0 | python,c,automation,automated-tests | 2015-02-18T22:10:00.000 | 0 | 28,594,933 | You can use the subprocess module in Python to spawn other programs, retrieve their output and feed them arbitrary input. The relevant pieces will most likely be the Popen.communicate() method and/or the .stdin and .stdout file objects; ensure when you do this that you passed PIPE as the argument to the stdin and stdout keywords on creation. | 0 | 160 | false | 0 | 1 | Autograder script - reading keyboard inputs from textfile | 28,594,984 |
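A minimal sketch of that subprocess approach, assuming the compiled homework binary is at ./homework and the canned keyboard input lives in input.txt (both names are made up):

    import subprocess

    # Feed input.txt to the program's stdin, as if the answers were typed at the keyboard.
    with open("input.txt") as f:
        try:
            result = subprocess.run(
                ["./homework"], stdin=f,
                stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                universal_newlines=True, timeout=5)  # guard against infinite loops
            print("exit code:", result.returncode)
            print("output:", result.stdout)
        except subprocess.TimeoutExpired:
            print("student program hung (probably waiting for more input)")

subprocess.run is Python 3.5+; on older versions the Popen.communicate() route mentioned above works the same way.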
2 | 2 | 0 | 0 | 0 | 0 | 0 | 1 | I am using pythonanywhere.com and trying to run an app that I made for twitter that uses tweepy but it keeps saying connection refused or failed to send request. Is there any way to run a python app online easily that sends requests? | 0 | python,tweepy | 2015-02-19T06:38:00.000 | 0 | 28,600,076 | Pythonanywhere requires a premium account for web access.
You can create a limited account with one web app at your-username.pythonanywhere.com, restricted Internet access from your apps, low CPU/bandwidth. It works and it's a great way to get started!
Get a premium account ($5/month), which gives you:
Run your Python code in the cloud from one web app and the console
A Python IDE in your browser with unlimited Python/bash consoles
One web app with free SSL at your-username.pythonanywhere.com
Enough power to run a typical 50,000 hit/day website
3,000 CPU-seconds per day
512MB disk space
That said, I'd just set it up locally if it's for personal use and go from there. | 0 | 214 | false | 0 | 1 | Trying to use python on a server | 28,600,163
2 | 2 | 0 | 0 | 0 | 0 | 0 | 1 | I am using pythonanywhere.com and trying to run an app that I made for twitter that uses tweepy but it keeps saying connection refused or failed to send request. Is there any way to run a python app online easily that sends requests? | 0 | python,tweepy | 2015-02-19T06:38:00.000 | 0 | 28,600,076 | You will need a server that has a public IP.
You have a few options here:
You can use a platform-as-a-service provider like Heroku or AWS Elastic Beanstalk.
You can get a server online on AWS, install your dependencies and use it instead.
As long as you keep your usage low, you can stay within the free quotas for these services. | 0 | 214 | false | 0 | 1 | Trying to use python on a server | 28,600,193
1 | 1 | 0 | 0 | 0 | 0 | 1.2 | 0 | I'm using python and writing something that connects to a remote object using Pyro4
When running some unit tests (using pyunit) that repeatedly connects to a remote object with pyro, I found I couldn't run more than 9 tests or the tests would get stuck and just hang there.
I've now managed to fix this by using
with Pyro4.Proxy(PYRONAME:name) as pyroObject:
do something with object...
whereas before I was creating the object in the test set up:
def setUp(self):
self.pyroObject = Pyro4.Proxy(PYRONAME:name)
and then using self.pyroObject within the tests
Does anyone know why this has fixed the issue? Thanks | 0 | python,remote-connection | 2015-02-19T12:20:00.000 | 0 | 28,606,259 | When you're not cleaning up the proxy objects, they keep a live connection to the Pyro daemon open. By default the daemon accepts 16 concurrent connections.
If you use the with ... as ... syntax, you're closing the proxy cleanly once you're done with it, which releases a connection in the daemon, making it available for a new proxy.
You can raise that limit of 16 by increasing Pyro's threadpool size via the config. Alternatively you could perhaps use the multiplex server type instead of the default threaded one. | 0 | 499 | true | 0 | 1 | python - number of pyro connections | 28,615,437
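For illustration, the two knobs mentioned in that answer look roughly like this; the names are taken from Pyro4's config module and must be set before the daemon is created:

    import Pyro4

    # Option 1: allow more concurrent proxy connections in the threaded server.
    Pyro4.config.THREADPOOL_SIZE = 64

    # Option 2: use the multiplex server type instead of the default threaded one.
    # Pyro4.config.SERVERTYPE = "multiplex"

    daemon = Pyro4.Daemon()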
1 | 1 | 0 | 1 | 1 | 0 | 0.197375 | 1 | I'm trying to get the hostmask for a user, to allow some authentication in my IRCClient bot. However, it seems to be removed from all responses? I've tried 'whois', but it only gives me the username and the channels the user is in, not the hostmask.
Any hint on how to do this? | 0 | python,twisted,irc,twisted.internet,twisted.words | 2015-02-19T12:46:00.000 | 0 | 28,606,809 | Found it: when I override RPL_WHOISUSER, I can get the information after issuing an IRCClient.whois.
(And yes, I did search for it before posting, but had an epiphany right after I posted my question...) | 0 | 37 | false | 0 | 1 | How to get a user's hostmask with Twisted IRCClient | 28,607,141
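A rough sketch of what that override can look like; Twisted dispatches numeric server replies to irc_-prefixed methods, and the exact params layout should be checked against your server's WHOIS reply:

    from twisted.words.protocols import irc

    class MyBot(irc.IRCClient):
        nickname = "mybot"

        def check_user(self, nick):
            # Triggers a WHOIS; the reply arrives asynchronously below.
            self.whois(nick)

        def irc_RPL_WHOISUSER(self, prefix, params):
            # params is typically [me, nick, user, host, '*', realname]
            nick, user, host = params[1], params[2], params[3]
            print("hostmask: %s!%s@%s" % (nick, user, host))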
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I'm using the Pydev plugin for Eclipse Luna for Java EE.
The Python code runs correctly, but errors are showing up for built-in keywords like print.
Error: Undefined Variable: print
I looked on stackoverflow for other answers, and the suggestions have all been to manually configure an interpreter. I changed my interpreter to point at C:/python34/python.exe, but this has not fixed the problem. I also made sure that I was using grammar version 3.0.
Update: I think it might be a problem with aptana instead of pydev. I uninstalled aptana, and installed pydev without any issues. But when I tried to reinstall aptana, I can only do it by uninstalling pydev. I need a way to try a previous version of aptana or else a way to install aptana and pydev separately | 0 | eclipse,python-3.x,pydev | 2015-02-19T19:39:00.000 | 1 | 28,615,418 | It seems like Eclipse Luna does not provide support for PyDev when it's installed with Aptana. I was able to install Aptana without PyDev and do a separate install of Pydev on its own and this solved the problem. | 0 | 303 | false | 0 | 1 | Syntax errors for keywords in pydev plugin for Eclipse | 28,972,434 |
1 | 2 | 0 | 2 | 1 | 1 | 0.197375 | 0 | I know that log2(x) accuracy fails when x is large enough and is in the form 2^n-1 for most languages, except maybe R and Matlab. Any specific reasons?
Edit 1: x is an integer around 10^15 and up | 0 | c++,python-2.7,math | 2015-02-20T09:05:00.000 | 0 | 28,625,131 | When x is large enough (about 4.5E15 for an IEEE double, I think), 2^n-1 isn't representable. | 0 | 80 | false | 0 | 1 | Can anyone explain the inaccuracy in log2 in C++/Python? | 28,625,338 |
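A quick demonstration of that limit, assuming a typical 64-bit IEEE-754 double, which carries roughly 15-16 significant decimal digits (math.log2 is Python 3; in 2.7 use math.log(x, 2)):

    import math

    x = 2 ** 50 - 1                 # about 1.1e15, the range the question mentions
    print(float(x) == 2.0 ** 50)    # False: 2**50 - 1 is still exactly representable
    print(math.log2(x))             # typically prints 50.0: the true log2 is
                                    # 50 - ~1.3e-15, which rounds to exactly 50.0

    y = 2 ** 54 - 1
    print(float(y) == 2.0 ** 54)    # True: 2**54 - 1 is no longer representable

So even before 2^n-1 stops being representable, its log2 can no longer be distinguished from n in double precision.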
1 | 1 | 0 | 5 | 0 | 1 | 1.2 | 0 | The proper way to convert a unicode string u to a (byte)string in Python is by calling u.encode(someencoding).
Unfortunately, I didn't know that before and I had used str(u) for conversion. In particular, I called str(u) to coerce u to be a string so that I can make it a valid shelve key (which must be a str).
Since I didn't encounter any UnicodeEncodeError, I wonder if this process is reversible/lossless. That is, can I do u = str(converted_unicode) (or u = bytes(converted_unicode) in Python 3) to get the original u? | 0 | python,string,unicode | 2015-02-20T13:43:00.000 | 0 | 28,630,414 | In Python 2, if the conversion with str() was successful, then you can reverse the result. Using str() on a unicode value is the equivalent of using unicode_value.encode('ascii') and the reverse is to simply use str_value.decode('ascii'). Using unicode(str_value) will use the same implicit ASCII codec to decode.
In Python 3, calling str() on a unicode value simply gives you the same object back, since in Python 3 str() is the Unicode type. Using bytes() on a Unicode value without an encoding fails, you always have to use explicit codecs in Python 3 to convert between str and bytes. | 0 | 326 | true | 0 | 1 | Is converting Python unicode by casting to str reversible? | 28,630,446 |
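In Python 2 terms, the reversible round trip looks like this; it is only safe when the text is pure ASCII:

    u = u'shelve key'
    s = str(u)              # implicit ASCII encode; raises UnicodeEncodeError otherwise
    u2 = s.decode('ascii')  # or unicode(s), which applies the same ASCII codec
    assert u == u2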
1 | 1 | 0 | 3 | 1 | 0 | 1.2 | 0 | I have latest IronPython version built and running in Ubuntu 14.04 through Mono.
Building Ironpython and running with Mono seems trivial but I am not convinced I have proper sys.paths or permissions for Ironpython to import modules, especially modules like fcntl.
Running ensurepip runs subprocess, and wants to import "fcntl". There are numerous posts already out there, but mostly regarding windows.
As I understand, fcntl is part of unix python2.7 standard library. To start the main problem seems to be that Ironpython has no idea where this is, but I also suspect that since fcntl seems to be perl or at least not pure python, that there is more to the story.
So my related sys.path questions are:
In Ubuntu, where should I install Ironpython (Ironlanguages folder)
to? Are there any permissions I need to set?
What paths should I add to the sys.path to get Ironpython's standard library found?'
What paths should I add to the sys.path to get Ubuntu's python 2.7 installed modules?
What paths should I add to the sys.path or methods to get fcntl to import properly in Ironpython
Any clues on how to workaround known issues installing pip through ensurepip using mono ipy.exe X:Frames ensurepip
Thanks! | 0 | python-2.7,ubuntu,mono,ironpython,fcntl | 2015-02-21T16:41:00.000 | 1 | 28,648,230 | As far as I can see, the fcntl module of cPython is a builtin module (implemented in C) - those modules need to be explicitly implemented for most alternative Python interpreters like IronPython (in contrast to the modules implemented in plain Python), as they cannot natively load Python C extensions.
Additionally, it seems that there currently is no such fcntl implementation in IronPython.
There is a Fcntl.cs in IronRuby, however, maybe this could be used as a base for implementing one in IronPython. | 0 | 456 | true | 0 | 1 | Ubuntu and Ironpython: What paths to add to sys.path AND how to import fcntl module? | 28,673,847 |
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | Hello everyone, I am a grader for a programming language class and I am trying to use Jenkins continuous integration for running some JUnit tests on code that students push to GitHub.
I was able to get all the committed jobs into Jenkins, but can I run a Python file in order to push a testing class into their code, then build their projects and get an XML report about their program tests? | 0 | java,python,junit,jenkins,continuous-integration | 2015-02-22T05:09:00.000 | 0 | 28,654,775 | Jenkins allows you to run any external command so you can just call your Python script afterwards | 0 | 516 | false | 1 | 1 | How can i add junit test for a java program using python for continuous integration in jenkins | 31,576,925
2 | 2 | 0 | 0 | 0 | 1 | 0 | 0 | I installed python-mode for vim on my Mac OSX system. I decided to try one of the python motion commands. I hit [C which I thought would go to the next class. But the screen also switched, to show ONLY class names in gray highlighting. I've searched the python-mode documentation, and I can't see anything about this happening, and therefore no way to undo it.
Well, I thought, I will just quit and reload, and everything will be fine. But no! When I come back in to the file, it opens as I left it, with just the class names showing, highlighted in gray, and indications of line numbers.
How do I get out of this "mode" or whatever I am stuck in? | 0 | python,vim,python-mode | 2015-02-25T16:49:00.000 | 1 | 28,724,782 | If you are not happy with folding, you might like to disable python-mode folding by keeping let g:pymode_folding = 0 in ~/.vimrc.
What I usually do is enable folding and use the space bar to open a fold. I also set foldclose=all to automatically close folds again once the cursor leaves them | 0 | 132 | false | 0 | 1 | Installed python-mode; now I only see class names in my file | 35,023,398
2 | 2 | 0 | 1 | 0 | 1 | 0.099668 | 0 | I installed python-mode for vim on my Mac OSX system. I decided to try one of the python motion commands. I hit [C which I thought would go to the next class. But the screen also switched, to show ONLY class names in gray highlighting. I've searched the python-mode documentation, and I can't see anything about this happening, and therefore no way to undo it.
Well, I thought, I will just quit and reload, and everything will be fine. But no! When I come back in to the file, it opens as I left it, with just the class names showing, highlighted in gray, and indications of line numbers.
How do I get out of this "mode" or whatever I am stuck in? | 0 | python,vim,python-mode | 2015-02-25T16:49:00.000 | 1 | 28,724,782 | It sounds like you've discovered the "folding" feature of Vim. Press zo to open one fold under the cursor. zO opens all folds under the cursor. zv opens just enough folds to see the cursor line. zR opens all folds. See :help folding for details. | 0 | 132 | false | 0 | 1 | Installed python-mode; now I only see class names in my file | 28,724,868 |
1 | 1 | 0 | 1 | 0 | 1 | 0.197375 | 0 | I am trying to connect my Raspberry Pi to parse.com with ParsePy, which uses the REST API from parse.com. I am writing some Python code to get it to work and I have an error with the classes supplied by ParsePy. In particular it's the datatypes.py module.
It seems that when the code reaches import six, it cannot find it.
The error I get is NameError: name 'six' is not defined.
What can I do so that it finds the right module? | 0 | python,parse-platform,raspberry-pi,six,parsepy | 2015-02-25T18:21:00.000 | 0 | 28,726,712 | You need to install the six module.
There is probably an installable package available with apt-get install python-six; you can also install it using pip or easy_install (e.g., pip install six). | 0 | 311 | false | 0 | 1 | Raspberry Pi error with ParsePy and six.py | 28,727,971 |
1 | 1 | 0 | 1 | 0 | 0 | 0.197375 | 0 | I want to run a Python script on a Raspberry Pi when it is turned on. How can I do this? My script uses the "Webkit" and
"Gtk" modules. I've tried many methods but it still isn't working. The code works perfectly through the Python IDLE | 0 | python,webkit,gtk,raspberry-pi,startup | 2015-02-26T08:57:00.000 | 0 | 28,738,062 | If you run a DE, then use the DE's session manager.
If you want to run only your application in fullscreen mode, use the ~/.xinitrc to launch it. | 0 | 711 | false | 0 | 1 | how to run a python script automatically after startx on raspberrypi | 28,744,914
1 | 1 | 0 | 2 | 2 | 0 | 0.379949 | 0 | How does textblob calculate polarity in sentiment analysis? What logic does it follow and can we change the logic?
Thank you. | 0 | python-2.7,sentiment-analysis,textblob | 2015-02-26T11:44:00.000 | 0 | 28,741,590 | TextBlob uses a classifier to predict the polarity of a sentence. By default it uses the Movie-Review data set. You can train it using your own training model. | 0 | 650 | false | 0 | 1 | Logic behind the polarity score calculated by TEXTBLOB? | 30,297,074
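For illustration, switching TextBlob from its default analyzer to the movie-review-trained Naive Bayes one looks roughly like this (class names per the textblob package; the Naive Bayes analyzer needs the NLTK movie-review corpus downloaded):

    from textblob import TextBlob
    from textblob.sentiments import NaiveBayesAnalyzer

    text = "The service was slow but the food was fantastic."

    # Default analyzer: polarity in [-1.0, 1.0], subjectivity in [0.0, 1.0].
    print(TextBlob(text).sentiment)

    # Naive Bayes analyzer trained on the movie-review corpus.
    print(TextBlob(text, analyzer=NaiveBayesAnalyzer()).sentiment)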
1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | I am developing a trading/position management system in Python. It is therefore going to interact with external software and database events, and also act on timed events.
Is it efficient enough to use an AMQP broker like RabbitMQ to handle everything? Or should I use PyDispatcher for local events and AMQP for external events?
Are both these solutions going to be slower than just say, a normal function call from one python script to another imported python script?
Many thanks! | 0 | python,events,rabbitmq,amqp | 2015-02-26T11:51:00.000 | 0 | 28,741,731 | The right answer to your question depends on your architecture. If you're adopting a microservices-style architecture, it might make sense to use AMQP for all your event communication. If you're adopting more of a monolithic architecture, you may want to use PyDispatcher + AMQP.
Note that AMQP and RabbitMQ shouldn't be used interchangeably. AMQP is a protocol. RabbitMQ implements a specific version of AMQP (0.9 and 0.9.1). ActiveMQ (broker), Apache Qpid (broker), Apache Qpid Proton (client), and Microsoft Azure (cloud service) support AMQP 1.0. | 0 | 285 | false | 0 | 1 | Pydispatcher or AMQP/RabbitMQ for local event driven processing written in python? | 28,864,448 |
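A minimal sketch of the in-process option with PyDispatcher; the signal name and handler are arbitrary:

    from pydispatch import dispatcher

    ORDER_FILLED = "order-filled"

    def on_order_filled(sender, quantity):
        print("filled %d from %s" % (quantity, sender))

    dispatcher.connect(on_order_filled, signal=ORDER_FILLED)
    dispatcher.send(signal=ORDER_FILLED, sender="broker-feed", quantity=100)

This stays a plain in-process function call under the hood, so it is far cheaper than a broker round trip for local events.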
1 | 1 | 0 | 1 | 2 | 1 | 1.2 | 0 | I'm installing some additional packages to anaconda and I can't get them to work. One such package is pydicom which I downloaded, unziped, and moved to /usr/local/anaconda/lib/python2.7/site-package/pydicom. In the pydicom folder the is a subfolder called source which contains both ez_setup.py and setup.py. I ran sudo python setup.py install which didn't spit out any errors and then ran sudo python ez_setup.py install when I still couldn't get the module to open in ipython. Now I can successfully import dicom but ONLY when my current directory is /usr/local/anaconda/lib/python2.7/site-package/pydicom/source. How do I get it so I import it from any directory? I'm running CentOS and I put
export PATH=/usr/local/anaconda/bin:$PATH
export PATH=/usr/local/anaconda/lib/python2.7/:$PATH
in my .bashrc file. | 0 | python,installation,anaconda,packages,pydicom | 2015-02-26T20:06:00.000 | 1 | 28,751,764 | You shouldn't copy the source to site-packages directly. Rather, use python setup.py install in the source directory, or use pip install . from there. Make sure your Python is indeed the one in /usr/local/anaconda, especially if you use sudo (which in general is not necessary and not recommended with Anaconda). | 0 | 1,420 | true | 0 | 1 | How do I get python to recognize a module from any directory? | 28,770,249
1 | 1 | 0 | 2 | 3 | 0 | 0.379949 | 1 | I am planning to have API Automation Framework built on the top pf Python + Request Library
Expected flow:
1) Read Request Specification From input file "csv/xml"
2) Make API Request & get Response & analyse the same
3) Store Test Results
4) Communicate the same
Initial 'smoke test' to be performed with basic cases, then the detailed ones. There will be 'n' number of APIs with respective cases. | 0 | python,api,automation,web-api-testing | 2015-02-27T12:33:00.000 | 0 | 28,765,243 | I have done an API automation framework using Java - TestNG - HTTP Client.
It's a hybrid framework consisting of:
Data-driven model: reads data from JSON/XML files.
Method-driven: I have written POJOs for reading and writing JSON objects and arrays.
Report: I get the report using TestNG's customized report format.
Dependency management: I have used Maven.
This framework I have integrated with Jenkins for Continuous Integration.
SCM: I have used Git for that. | 0 | 1,451 | false | 0 | 1 | API Test Automation Framework Structure | 38,866,398
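For the Python + Requests stack the question proposes, the same data-driven idea can be sketched like this; the cases.csv columns and the base URL are made up for illustration:

    import csv
    import requests

    BASE_URL = "https://api.example.com"  # placeholder

    def run_cases(path):
        results = []
        with open(path) as f:
            # Expected columns: method, endpoint, expected_status
            for case in csv.DictReader(f):
                resp = requests.request(case["method"], BASE_URL + case["endpoint"])
                results.append((case["endpoint"],
                                resp.status_code == int(case["expected_status"])))
        return results

    for endpoint, passed in run_cases("cases.csv"):
        print("%-30s %s" % (endpoint, "PASS" if passed else "FAIL"))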
1 | 2 | 0 | 2 | 0 | 1 | 1.2 | 0 | I have a python program run_tests.py that executes test scripts (also written in python) one by one. Each test script may use threading.
The problem is that when a test script unexpectedly crashes, it may not have a chance to tidy up all open threads (if any), hence the test script cannot actually complete due to the threads that are left hanging open. When this occurs, run_tests.py gets stuck because it is waiting for the test script to finish, but it never does.
Of course, we can do our best to catch all exceptions and ensure that all threads are tidied up within each test script so that this scenario never occurs, and we can also set all threads to daemon threads, etc, but what I am looking for is a "catch-all" mechanism at the run_tests.py level which ensures that we do not get stuck indefinitely due to unfinished threads within a test script. We can implement guidelines for how threading is to be used in each test script, but at the end of the day, we don't have full control over how each test script is written.
In short, what I need to do is to stop a test script in run_tests.py even when there are rogue threads open within the test script. One way is to execute the shell command killall -9 <test_script_name> or something similar, but this seems to be too forceful/abrupt.
Is there a better way?
Thanks for reading. | 0 | python,multithreading | 2015-03-01T22:02:00.000 | 1 | 28,799,663 | To me, this looks like a pristine application for the subprocess module.
I.e. do not run the test-scripts from within the same python interpreter, rather spawn a new process for each test-script. Do you have any particular reason why you would not want to spawn a new process and run them in the same interpreter instead? Having a sub-process isolates the scripts from each other, like imports, and other global variables.
If you use the subprocess.Popen to start the sub-processes, then you have a .terminate() method to kill the process if need be. | 0 | 238 | true | 0 | 1 | Best way to stop a Python script even if there are Threads running in the script | 28,799,878 |
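A rough sketch of that subprocess approach for run_tests.py; the timeout value is arbitrary and Popen.wait(timeout=...) needs Python 3.3+:

    import subprocess

    def run_test(script, timeout=300):
        proc = subprocess.Popen(["python", script])
        try:
            proc.wait(timeout=timeout)
        except subprocess.TimeoutExpired:
            # Rogue threads cannot keep a terminated process alive.
            proc.terminate()
            try:
                proc.wait(timeout=5)
            except subprocess.TimeoutExpired:
                proc.kill()
        return proc.returncode

    print(run_test("some_test.py"))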
1 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | For some routine work I found that combining different scripting languages can be the fastest way to get things done. Like I have some main bash script which calls some awk, python and bash scripts or even some compiled fortran executables.
I can put all the files into a folder that is in the path, but it makes modification a bit slower. If I need a new copy with some modifications I need to add another path to $PATH as well.
Is there a way to merge these files into a single executable?
For example: tar all the files together and explain somehow that the main script is main.sh? This way I could simply vi the file, modify, run, modify, run ... but I could move the file between folders and machines easily. Also dependencies could be handled properly (executing the tar could set PATH itself).
I hope this dream does exist! Thanks for the comments!
Janos | 0 | python,bash,scripting,path,packing | 2015-03-02T15:44:00.000 | 1 | 28,813,775 | Thanks for the answers!
Of course this is not a good approach for providing published code! Sorry if it was confusing. But this is a good approach if you are developing, say, a scientific idea and wish to obtain a proof-of-concept result fast, doing similar tasks several times while quickly swapping out parts of the algorithm. Note that sometimes many codes are already available for parts of the task. These codes sometimes need to be modified a bit (a few lines). I am a big believer in re-implementing everything, but first it is good to know if it is worth doing!
For some compromise: can I call a script externally that is wrapped in some tar or zip and is not compressed?
Thanks again,
J. | 0 | 202 | false | 0 | 1 | Multiple executables as a single file | 28,828,064 |
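On the uncompressed-archive idea: Python can execute a zip file directly if it contains a __main__.py at the top level, which gets close to this, at least for the Python parts (the awk/fortran pieces would still need to be shipped alongside or invoked via subprocess). A sketch with made-up file names:

    import zipfile

    # Pack the scripts into one archive; __main__.py is the entry point Python runs.
    with zipfile.ZipFile("tool.pyz", "w") as z:
        z.write("main.py", "__main__.py")
        z.write("helpers.py")

    # Now `python tool.pyz` runs __main__.py, and the single file can be moved
    # between folders and machines. Python 3.5+ also ships `python -m zipapp`
    # to build such archives.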
1 | 2 | 0 | 1 | 1 | 1 | 1.2 | 0 | I have a slightly unusual situation: I have scripts that I make small changes to frequently, and that take hours to execute.
I save output logs, but more importantly I need to make sure that the code which produced a given log will not be lost.
Committing changes before each run will work, but I'd like to enforce this automatically by preventing my code from running if git is not up to date.
Is there a simple way to do this, or is running shell commands and scraping output my best bet? | 0 | python,git | 2015-03-02T16:34:00.000 | 1 | 28,814,845 | You could consider running your script from a separate checkout to where you do your development. That way you would need to commit, push locally, and pull in the 'deployment' location before you could run the updated script. You could probably automate those steps with a shell script or even a git commit hook. | 0 | 132 | true | 0 | 1 | only run python script if it is git committed | 28,814,943 |
1 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | I'm using mercurial and I think I may have an older revision of a python file running on my server. How can I tell which revision of a particular file is currently running in my web application?
I suspect an older revision because I currently have an error being thrown in sentry that refers to a line of code that no longer exists in my latest code file. I can log directly onto the server and see my latest file, but still my web app runs this old version
I have ruled out browser caching and server caching
Thanks in advance. | 0 | python,django,mercurial | 2015-03-02T19:07:00.000 | 0 | 28,817,558 | You can use hg id -i to see the currently checked out revision on your server, and hg status to check if the file has been modified relative to that revision. | 0 | 119 | false | 1 | 1 | HG - Find current running revision of a code file | 28,817,753 |
1 | 2 | 0 | 1 | 4 | 0 | 0.099668 | 0 | I'm planning to migrate existing image processing logic to AWS Lambda. The Lambda thumbnail generator is better than my previous code, so I want to re-process all the files in an existing bucket using Lambda.
Lambda seems to be event-driven only; this means that my Lambda function will only be called via a PUT event. Since the files are already in the bucket this will not trigger any events.
I've considered creating a new bucket and moving the files from my existing bucket to a new bucket. This would trigger new PUT events, but my bucket has 2MM files, so I refuse to consider this hack a viable option. | 0 | java,python,amazon-s3,thumbnails,aws-lambda | 2015-03-03T02:40:00.000 | 0 | 28,823,172 | You can add a SQS queue as an event source/trigger for the Lambda, make the slight changes in the Lambda to correctly process a SQS event as opposed to an S3 event, and then using a local script loop through a list of all objects in the S3 bucket (with pagination given the 2MM files) and add them as messages into SQS. Then when you're done, just remove the SQS event source and queue.
This doesn't get around writing a script to list and find and then call the lambda function but the script is really short. While this way does require setting up a queue, you won't be able to process the 2MM files with direct calls due to lambda concurrency limits.
Example:
Set up SQS queue and add as event source to Lambda.
The syntax for reading a SQS message and an S3 event should be pretty similar
Paginate through list_objects_v2 on the S3 bucket in a for-loop
Create messages using send_message_batch
Suggestion:
Depending on the throughput of new files landing in your bucket, you may want to switch to S3 -> SQS -> Lambda processing anyways instead of direct S3 -> Lambda calls. For example, if you have large bursts of traffic then you may hit your Lambda concurrency limit, or if an error occurs and you want to keep the message (can be resolved by configuring a DLQ for your lambda). | 0 | 2,230 | false | 1 | 1 | Running aws-lambda function in a existing bucket with files | 56,451,012 |
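A rough sketch of the pagination and batching steps above, using boto3; the bucket name and queue URL are placeholders, and SQS batches max out at 10 entries:

    import json
    import boto3

    s3 = boto3.client("s3")
    sqs = boto3.client("sqs")
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/reprocess"  # placeholder

    paginator = s3.get_paginator("list_objects_v2")
    batch = []
    for page in paginator.paginate(Bucket="my-bucket"):
        for obj in page.get("Contents", []):
            batch.append({"Id": str(len(batch)),
                          "MessageBody": json.dumps({"bucket": "my-bucket",
                                                     "key": obj["Key"]})})
            if len(batch) == 10:  # SQS batch size limit
                sqs.send_message_batch(QueueUrl=QUEUE_URL, Entries=batch)
                batch = []
    if batch:
        sqs.send_message_batch(QueueUrl=QUEUE_URL, Entries=batch)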
1 | 1 | 0 | 2 | 2 | 0 | 1.2 | 0 | I have a django test suite that builds a DB from a 400 line fixture file. It runs unfortunately slow. Several seconds per test.
I was on the train yesterday developing without internet access, with my wifi turned off, and I noticed my tests ran literally 10x faster without internet. And they are definitely running correctly.
Everything is local, it all runs fine without an internet connection. The tests themselves do not hit any APIs or make any other connections, so it seems it must be something else. | 0 | python,django,testing,python-unittest | 2015-03-04T20:22:00.000 | 0 | 28,864,152 | This most likely means you've got some component installed which is trying to make network connections. Possibly something that does monitoring or statistics gathering?
The simplest way to figure out what's going on is to use tcpdump to capture your network traffic and see what's going on. To do that:
Run tcpdump -i any (or tcpdump -i en1 if you're on a mac; the airport is usually en1, but you can double check with ifconfig)
Watch the traffic to get some idea what's normal
Run your test suite
Watch the traffic printed by tcpdump to see if anything obviously jumps out at you | 0 | 523 | true | 1 | 1 | Django Tests run faster with no internet connection | 28,864,271 |
1 | 1 | 0 | 2 | 0 | 1 | 0.379949 | 0 | Given a double, what is the most efficient way to calculate the precision and accuracy of the number? Ideally, I'd like to avoid for loops. | 0 | python,algorithm,math | 2015-03-04T21:20:00.000 | 0 | 28,865,200 | Floating point numbers are stored in binary in python. Accuracy and precision have totally different implications in binary, especially given that computers are constrained to fixed-length representations. You can demonstrate that to yourself simply by noting that the code 1e16+1 == 1e16 returns True.
You linked to Wolfram.com where precision and accuracy are discussed in terms of decimal numbers. It means nothing to talk about the position of the decimal point in a number stored in a computer in binary. To represent and manipulate decimal numbers with their real decimal precision and accuracy, you need to use python's Decimal class.
Note that you must give the Decimal class a string for it to be accurate in decimal terms. If you give it a literal number (e.g. Decimal(1.001)) you are just giving it a float, and the Decimal you create will be the precise decimal representation of python's imprecise floating point representation of 1.001.
Sadly, the Decimal class doesn't have methods to return the number of digits either side of the decimal point. However, assuming your decimal is in decimal form, you can just examine it as a string, using str(x) where x is an instance of Decimal. | 0 | 458 | false | 0 | 1 | Precision and Accuracy | 28,866,275 |
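For what it's worth, Decimal.as_tuple() does expose enough to compute the digit counts without string parsing; a small illustration of both points:

    from decimal import Decimal

    exact = Decimal("1.001")   # built from a string: truly 1.001
    fuzzy = Decimal(1.001)     # built from a float: its exact binary value
    print(exact)               # 1.001
    print(fuzzy)               # 1.000999999999999889865875957184...

    sign, digits, exponent = exact.as_tuple()
    print(len(digits))         # 4 significant digits ("precision")
    print(-exponent)           # 3 digits after the decimal point ("accuracy")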
1 | 2 | 0 | 0 | 1 | 0 | 0 | 0 | From my home pc using putty, I ssh'ed into a remote server, and I ran a python program that takes hours to complete, and as it runs it prints stuff. Now after a while, my internet disconnected, and I had to close and re-open putty and ssh back in. If I type 'top' I can see the python program running in the background with its PID number. Is there a command I can use to basically re-open that process and see it printing its stuff again?
Thanks | 0 | python,linux,ssh,terminal | 2015-03-06T02:13:00.000 | 1 | 28,891,305 | Things to try:
nohup, or
screen | 0 | 1,139 | false | 0 | 1 | How to open process again in linux terminal? | 28,891,431 |
1 | 2 | 0 | 0 | 1 | 1 | 0 | 0 | As I've mentioned before I originally came from a Matlab background and then moved onto PHP before discovering Python. In both Matlab and PHP there were ways to create a scripts that when run all the variables got dumped into your current workspace. That workspace in Matlab being the interpreter when called from there or the function's workspace when called from a function. I would use this capability for constants - for example, plotting tools where you want to define a set of default fonts, line widths, etc.
Now in Python I can import a module, but then all references to constants in that module require either from {module} import * or {module}.{constant}.
This doesn't really limit me; it is just an inconvenience. However, it would be nice to be able to import a constants file and just have the constants available to whatever called them.
I suspect Python does not allow this but I thought I would ask to see if someone had a clever work around. | 0 | php,python,matlab,import | 2015-03-06T17:44:00.000 | 0 | 28,904,565 | I'm not sure about your question, but maybe your IDE can help you to get rid of those imports. For example, in Spyder you can execute a startup script that will run when a new console is launched. Placing your imports there would provide direct access to those files. | 0 | 1,278 | false | 0 | 1 | Python module variables into current workspace | 28,905,871 |
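For completeness, the closest Python equivalent to the Matlab workflow described in the question is a constants module pulled in with a star import; the names here are made up, and __all__ keeps the namespace pollution bounded:

    # plot_defaults.py
    __all__ = ["LINE_WIDTH", "FONT_SIZE", "FONT_FAMILY"]
    LINE_WIDTH = 2.0
    FONT_SIZE = 11
    FONT_FAMILY = "serif"

    # elsewhere:
    #     from plot_defaults import *
    #     plt.plot(x, y, linewidth=LINE_WIDTH)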
3 | 5 | 0 | -1 | 5 | 0 | -0.039979 | 0 | I am new in centos.I am try to do an application on it.For my application I need to install python 2.7.But the default one on server was python 2.6. So tried to upgrade the version .And accidentally I deleted the folder /usr/bin/python.After that I Installed python 2.7 through make install.I created the folder again /usr/bin/python and run command sudo ln -s /usr/bin/python2.7 /usr/bin/python. After this when I tried to run YUM commands I am getting the error
-bash: /usr/bin/yum: /usr/bin/python: bad interpreter: Permission denied
drwxrwxrwx 2 root root 4096 Mar 8 00:19 python
This is the permission shown for the directory /usr/bin/python | 0 | python,linux,centos | 2015-03-08T05:24:00.000 | 1 | 28,923,393 | If you get -bash: /usr/bin/yum: /usr/bin/python: bad interpreter: Permission denied, then:
First remove python with the following command line:
-- sudo rpm -e python
Second, check which package installs it with this command line:
-- sudo rpm -q python
Then install the package:
-- sudo yum install python*
I think this solves the problem | 0 | 45,325 | false | 0 | 1 | -bash: /usr/bin/yum: /usr/bin/python: bad interpreter: Permission denied | 37,664,878
3 | 5 | 0 | 3 | 5 | 0 | 0.119427 | 0 | I am new in centos.I am try to do an application on it.For my application I need to install python 2.7.But the default one on server was python 2.6. So tried to upgrade the version .And accidentally I deleted the folder /usr/bin/python.After that I Installed python 2.7 through make install.I created the folder again /usr/bin/python and run command sudo ln -s /usr/bin/python2.7 /usr/bin/python. After this when I tried to run YUM commands I am getting the error
-bash: /usr/bin/yum: /usr/bin/python: bad interpreter: Permission denied
drwxrwxrwx 2 root root 4096 Mar 8 00:19 python
This is the permission shown for the directory /usr/bin/python | 0 | python,linux,centos | 2015-03-08T05:24:00.000 | 1 | 28,923,393 | yum doesn't work with python2.7.
You should do the following
vim /usr/bin/yum
change
#!/usr/bin/python
to
#!/usr/bin/python2.6
If your python2.6 was deleted, then reinstall it and point the shebang in /usr/bin/yum to your python2.6 binary. | 0 | 45,325 | false | 0 | 1 | -bash: /usr/bin/yum: /usr/bin/python: bad interpreter: Permission denied | 40,200,244
3 | 5 | 0 | 0 | 5 | 0 | 0 | 0 | I am new in centos.I am try to do an application on it.For my application I need to install python 2.7.But the default one on server was python 2.6. So tried to upgrade the version .And accidentally I deleted the folder /usr/bin/python.After that I Installed python 2.7 through make install.I created the folder again /usr/bin/python and run command sudo ln -s /usr/bin/python2.7 /usr/bin/python. After this when I tried to run YUM commands I am getting the error
-bash: /usr/bin/yum: /usr/bin/python: bad interpreter: Permission denied
drwxrwxrwx 2 root root 4096 Mar 8 00:19 python
This is the permission shown for the directory /usr/bin/python | 0 | python,linux,centos | 2015-03-08T05:24:00.000 | 1 | 28,923,393 | The problem is that the shebang at the head of the yum script (e.g. #!/usr/local/bin/python2.6) must point to the Python binary file itself, not a directory. | 0 | 45,325 | false | 0 | 1 | -bash: /usr/bin/yum: /usr/bin/python: bad interpreter: Permission denied | 62,297,081
1 | 1 | 0 | 9 | 2 | 1 | 1.2 | 0 | I'm working with Python on a Raspberry Pi using the Raspbian operating system. I installed evdev-0.4.7 and it works fine for Python 2.7. But when I try it for Python 3.3 I get an error. Apparently it only installed on Python 2.7.
How can I install evdev on Python 3.3 as well? | 0 | python,evdev | 2015-03-09T06:32:00.000 | 0 | 28,936,333 | Try to install it with pip3
sudo pip3 install evdev | 0 | 11,168 | true | 0 | 1 | How can I install evdev on both Python 2.7 and Python 3.3? | 28,936,564 |
2 | 8 | 0 | 1 | 78 | 0 | 0.024995 | 0 | I have done a fair amount of googling about this question and most of the threads I've found are 2+ years old, so I am wondering if anything has changed, or if there is a new method to solve the issue pertaining to this topic.
As you might know when using IntelliJ (I use 14.0.2), it often autosaves files. For me, when making a change in Java or JavaScript files, it's about 2 seconds before the changes are saved. There are options that one would think should have an effect on this, such as Settings > Appearance & Behavior > System Settings > Synchronization > Save files automatically if application is idle for X sec. These settings seem to have no effect for me though, and IntelliJ still autosaves, for example if I need to scroll up to remember how a method I am referencing does something.
This is really frustrating when I have auto-makes, which mess up TomCat, or watches on files via Grunt, Karma, etc when doing JS development. Is there a magic setting that they've put in recently? Has anyone figured out how to turn off auto-save, or actually delay it? | 0 | java,python,intellij-idea,pycharm,ide | 2015-03-09T18:31:00.000 | 0 | 28,949,290 | If there are any file watchers active (Preferences>Tools>File Watchers), make sure to check their Advanced Options. Disable any Auto-save files to trigger the watcher toggles.
This option supersedes the Autosave options from Preferences>Appearance & Behaviour>System Settings. | 0 | 59,393 | false | 1 | 1 | Turning off IntelliJ Auto-save | 66,222,468 |
2 | 8 | 0 | 8 | 78 | 0 | 1 | 0 | I have done a fair amount of googling about this question and most of the threads I've found are 2+ years old, so I am wondering if anything has changed, or if there is a new method to solve the issue pertaining to this topic.
As you might know when using IntelliJ (I use 14.0.2), it often autosaves files. For me, when making a change in Java or JavaScript files, it's about 2 seconds before the changes are saved. There are options that one would think should have an effect on this, such as Settings > Appearance & Behavior > System Settings > Synchronization > Save files automatically if application is idle for X sec. These settings seem to have no effect for me though, and IntelliJ still autosaves, for example if I need to scroll up to remember how a method I am referencing does something.
This is really frustrating when I have auto-makes, which mess up TomCat, or watches on files via Grunt, Karma, etc when doing JS development. Is there a magic setting that they've put in recently? Has anyone figured out how to turn off auto-save, or actually delay it? | 0 | java,python,intellij-idea,pycharm,ide | 2015-03-09T18:31:00.000 | 0 | 28,949,290 | I think the correct answer was given as a comment from ryanlutgen above:
The behaviour of "auto-saving" your file is not due to the auto-save options mentioned.
IJ saves all changes to your build sources in order to automatically build the target.
This can be turned off in:
Preferences -> Build,Execution,Deployment -> Compiler -> Make project automatically.
Note: you now have to initiate the project build manually (e.g. by using an appropriate key-shortcut)
(All other "auto-save" options just fine-tune the built-in auto-save behaviour.) | 0 | 59,393 | false | 1 | 1 | Turning off IntelliJ Auto-save | 37,813,276
1 | 1 | 0 | 1 | 0 | 0 | 0.197375 | 0 | So I created a setup.py script for my python program with distutils and I think it behaves a bit strangely. First off it installs all data_files into /usr/local/my_directory by default, which is a bit weird since this isn't a really common place to store data, is it?
I changed the path to /usr/share/my_directory/. But now I'm not able to write to the database inside that directory, and I can't set the required permissions from within setup.py either, since the actual database file has not been created when I run it.
Is my approach wrong? Should I use another tool for distributing?
Because at least for Linux, writing a simple setup sh script seems easier to me at the moment. | 0 | python,linux,distutils,setup.py | 2015-03-09T20:56:00.000 | 1 | 28,951,588 | The immediate solution is to invoke setup.py with --prefix=/the/path/you/want.
A better approach would be to include the data as package_data. This way it will be installed alongside your Python package and you'll find it much easier to manage (finding paths etc.). | 0 | 36 | false | 0 | 1 | distutils setup script under linux - permission issue | 28,951,788
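A rough sketch of the package_data route; the package and file names are placeholders:

    # setup.py
    from distutils.core import setup

    setup(
        name="myproject",
        version="0.1",
        packages=["myproject"],
        # Ships myproject/data/seed.db alongside the code wherever it is installed.
        package_data={"myproject": ["data/*.db"]},
    )

    # At runtime the file can be located relative to the package:
    #     import os, myproject
    #     path = os.path.join(os.path.dirname(myproject.__file__), "data", "seed.db")

Note that a database the program writes to usually still belongs in a per-user location (e.g. under os.path.expanduser("~")), since the install directory is often root-owned.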
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I'm trying to use django-allauth for Twitter sign-in in my Django app. I notice that when I do the sign-in process, Twitter says that this app will NOT be able to post tweets. I do want to be able to post tweets. How do I add this permission? | 0 | python,django,twitter,django-allauth | 2015-03-09T21:49:00.000 | 0 | 28,952,404 | I see now that I simply needed to define my app as "read & write" in the Twitter admin UI. | 0 | 120 | false | 1 | 1 | Using `django-allauth` to let users sign in with Twitter and tweet through the app | 28,975,387 |
1 | 3 | 0 | 0 | 1 | 0 | 1.2 | 1 | I am writing a little script which picks the best machine out of a few dozen to connect to. It gets a users name and password, and then picks the best machine and gets a hostname. Right now all the script does is print the hostname. What I want is for the script to find a good machine, and open an ssh connection to it with the users provided credentials.
So my question is how do I get the script to open the connection when it exits, so that when the user runs the script, it ends with an open ssh connection.
I am using sshpass. | 0 | python,ssh | 2015-03-10T18:17:00.000 | 1 | 28,971,180 | If you want the python script to exit, I think your best bet would be to continue doing a similar thing to what you're doing; print the credentials in the form of arguments to the ssh command and run python myscript.py | xargs ssh. As tdelaney pointed out, though, subprocess.call(['ssh', args]) will let you run the ssh shell as a child of your python process, causing python to exit when the connection is closed. | 0 | 1,186 | true | 0 | 1 | Open SSH connection on exit Python | 28,971,821 |
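One way to make the script literally end with an open ssh connection is to replace the Python process with ssh once the best host is picked; a sketch with stubbed-out selection logic (sshpass could wrap the command the same way):

    import os

    def pick_best_machine():
        return "host42.example.com"   # placeholder for the real selection logic

    user = "alice"                    # placeholder
    host = pick_best_machine()
    # execvp never returns: the Python process becomes the ssh process,
    # so when the script "exits" the user is left inside the ssh session.
    os.execvp("ssh", ["ssh", "%s@%s" % (user, host)])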
1 | 1 | 0 | 1 | 0 | 0 | 1.2 | 1 | I have a python script that scans for new tweets containing specified #hashtags, then it posts them to my "python bot's" twitter account as new tweets.
I tested it from the python console and let it run for 5 minutes. It managed to pick up 10 tweets matching my criteria. It works flawlessly, but I'm concerned about performance issues and leaving the script running for extended amounts of time.
What are the negative effects of leaving this script running on my personal computer for a whole day or more?
Should I be running this on a digital ocean VPS instead?
Twitter offers the API for bot creation, but do they care how much a bot tweets? I don't see how this is any different from retweeting. | 0 | python,twitter,tweepy | 2015-03-10T22:00:00.000 | 0 | 28,974,896 | Twitter probably has limits on their api and will most likely block your api key if they feel that you are spamming. In fact I would bet there is a maximum number of tweets per day depending on the type of developer account.
For stability and uptime concerns, running on a 'personal' computer is not a good idea. You probably want to do other things on your personal computer that may interrupt your bot's service (like installing programs/updates and restarting). As far as load on the CPU, well, if it's only picking up 10 tweets per 5 min that doesn't seem like any kind of load that you need to worry about. To be sure you could run the top command and check out the CPU and memory usage.
If you have a server somewhere like at digital ocean then I would run it there just to reduce the interruption the program experiences.
I ran a similar program using Twitter's stream API and collected tweets on a personal computer; the interruptions were annoying and I eventually stopped collecting data.... | 0 | 399 | true | 0 | 1 | Running python tweepy listener | 28,977,051
3 | 16 | 0 | 8 | 252 | 1 | 1 | 0 | What is the BEST way to clear out all the __pycache__ folders and .pyc/.pyo files from a python3 project. I have seen multiple users suggest the pyclean script bundled with Debian, but this does not remove the folders. I want a simple way to clean up the project before pushing the files to my DVS. | 0 | python,python-3.x,python-3.4,delete-file | 2015-03-11T15:39:00.000 | 0 | 28,991,015 | From the project directory type the following:
Deleting all .pyc files
find . -path "*/*.pyc" -delete
Deleting all .pyo files:
find . -path "*/*.pyo" -delete
Finally, to delete all '__pycache__', type:
find . -path "*/__pycache__" -type d -exec rm -r {} ';'
If you encounter a permission denied error, add sudo at the beginning of each of the above commands. | 0 | 212,604 | false | 0 | 1 | Python3 project remove __pycache__ folders and .pyc files | 58,837,642
3 | 16 | 0 | 9 | 252 | 1 | 1 | 0 | What is the BEST way to clear out all the __pycache__ folders and .pyc/.pyo files from a python3 project. I have seen multiple users suggest the pyclean script bundled with Debian, but this does not remove the folders. I want a simple way to clean up the project before pushing the files to my DVS. | 0 | python,python-3.x,python-3.4,delete-file | 2015-03-11T15:39:00.000 | 0 | 28,991,015 | Using PyCharm
To remove Python compiled files
In the Project Tool Window, right-click a project or directory, where Python compiled files should be deleted from.
On the context menu, choose Clean Python compiled files.
The .pyc files residing in the selected directory are silently deleted. | 0 | 212,604 | false | 0 | 1 | Python3 project remove __pycache__ folders and .pyc files | 56,165,314 |
3 | 16 | 0 | 0 | 252 | 1 | 0 | 0 | What is the BEST way to clear out all the __pycache__ folders and .pyc/.pyo files from a python3 project. I have seen multiple users suggest the pyclean script bundled with Debian, but this does not remove the folders. I want a simple way to clean up the project before pushing the files to my DVS. | 0 | python,python-3.x,python-3.4,delete-file | 2015-03-11T15:39:00.000 | 0 | 28,991,015 | Why not just use rm -rf __pycache__? Run git add -A afterwards to remove them from your repository and add __pycache__/ to your .gitignore file. | 0 | 212,604 | false | 0 | 1 | Python3 project remove __pycache__ folders and .pyc files | 48,244,930 |
1 | 2 | 0 | 0 | 0 | 1 | 0 | 0 | I would like to run some experiments over night and also find tomorrow morning how long it took for the experiments to finish executing. However, I would like to add as little overhead as possible so that the results from the experiments aren't affected too much by this extra timing.
I read from various resources that time script.py would be a good way to measure that. However I am not sure how time works and how much it can affect my experiments. | 0 | python,performance,time | 2015-03-12T00:28:00.000 | 0 | 28,999,934 | As far as I know, the time Linux command simply asks the kernel to provide information on the process that was run with it. Since the kernel collects CPU and other information for all processes anyway, running time with a script doesn't impact the performance of the script itself (or the impact is negligible).
The above answer is correct, but the measurement may not be reproducible: a script's runtime can be affected by current load, the size of the data it is processing, the network, and so on.
I recommend using the Linux time command since it provides a lot more information about the process. You can then run it under different loads and compare them. | 0 | 49 | false | 0 | 1 | What is the best way to find how long it took for a python script to finish executing without adding any extra over head? | 29,001,172
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I bought a raspberry pi and created a small remote controlled truck, but I want to work on more projects; is there any way to sore the Python file onto a flashdrive and connect it to some kind of cpu so the truck will still work and I can use the pi for other things and continue buying the small "cpu" uploading the pythons code and moving forward on different projects? | 0 | python,raspberry-pi2 | 2015-03-12T02:25:00.000 | 0 | 29,000,873 | You've written in a high-level language for specific hardware.
You could make your own ARM Cortex-A7 based board but it'll be cheaper to just buy another Pi.
If you want to make small inexpensive devices, then you should use a lower-level language with a microcontroller, such as Atmel AVR, found in Arduinos. | 0 | 73 | false | 0 | 1 | How do I take code from a raspberry pi and store it onto a smaller chip so I don't have to use the pi over and over again | 32,802,086 |
1 | 2 | 0 | -4 | 22 | 0 | -1 | 1 | I'm trying to load a CSV file to Amazon S3 with Python. I need to know CSV file's modification time. I'm using ftplib to connect FTP with Python (2.7). | 0 | python,python-2.7,datetime,ftp,ftplib | 2015-03-13T07:13:00.000 | 0 | 29,026,709 | When I want to change the file modification time, I use an FTP client on the console.
Log on to the remote FTP server: ftp ftp.dic.com
Use cd commands to go to the correct directory
Use the SITE command to switch to extended command mode
UTIME somefile.txt 20050101123000 20050101123000 20050101123000 UTC
changes the access time, modification time, and create time of somefile.txt to 2005-01-01 12:30:00
Complete example:
site UTIME somefile.txt 20150331122000 20150331122000 20150331122000 UTC
Sit back and enjoy a pleasant journey in time :) | 0 | 24,718 | false | 0 | 1 | How to get FTP file's modify time using Python ftplib | 29,468,092
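To actually read the modification time from Python, as the question asks, the MDTM command via ftplib is usually enough; the host and credentials are placeholders and the server must support MDTM:

    from ftplib import FTP
    from datetime import datetime

    ftp = FTP("ftp.example.com")
    ftp.login("user", "password")

    # Servers typically answer MDTM with e.g. "213 20150313071500"
    resp = ftp.sendcmd("MDTM data.csv")
    modified = datetime.strptime(resp[4:].strip(), "%Y%m%d%H%M%S")
    print(modified)
    ftp.quit()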
1 | 2 | 0 | 5 | 17 | 0 | 0.462117 | 0 | I want to do the following:
Save numeric data in a CSV-like formatting, with a ".foo" extension;
Associate the ".foo" file extension with some python script, which in turn opens the .foo file, reads its content, and plots something with a plotting library (matplotlib most probably).
The use-case would be: double-click the file, and its respective plot pops up right away.
I wonder how I should write a python script in order to do that.
Besides, the windows "open with" dialog only allows me to choose executables (*.exe). If I choose "fooOpener.py", it doesn't work. | 0 | python,windows,file-extension,file-association | 2015-03-13T20:40:00.000 | 0 | 29,041,571 | press the windows key
type cmd
right click the result and choose "run as administrator"
assoc .foo=foofile
ftype foofile="C:\Users\<user>\AppData\Local\Programs\Python\PYTHON~1\python.exe" "C:\<whatever>\fooOpener.py" "%1" %*
Use pythonw.exe if it's a .pyw file (to prevent a cmd window from spawning).
If you want to use an existing file type, you can find its alias by not assigning anything. For example, assoc .txt returns .txt=txtfile. | 0 | 8,512 | false | 0 | 1 | Associate file extension to python script, so that I can open the file by double click, in windows | 63,965,668 |
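For the fooOpener.py side, a minimal sketch of the double-click handler; Windows passes the clicked file's path as the first argument, and the .foo layout is assumed here to be plain CSV with the first column as x:

    # fooOpener.py
    import sys
    import csv
    import matplotlib.pyplot as plt

    with open(sys.argv[1]) as f:
        rows = [[float(v) for v in row] for row in csv.reader(f) if row]

    # First column is x, remaining columns are y series.
    for col in range(1, len(rows[0])):
        plt.plot([r[0] for r in rows], [r[col] for r in rows])
    plt.title(sys.argv[1])
    plt.show()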
1 | 2 | 0 | 1 | 1 | 1 | 0.099668 | 0 | I am getting a little bit confused with returning arrays and objects created locally in a function call. So I believe -
C - No objects; only arrays and structures can be created on the stack, so they will be deleted when the function returns. So it's not wise to send them as a return value to the calling module.
C++ - Objects & structures reside in the heap, so objects can be returned but nothing else, i.e. arrays will still be destroyed when returning
Java - I can return arrays as well as objects; I guess arrays are moved to the heap here?
Python - Same as Java: objects and arrays created locally can be returned to the calling module as a reference.
Please correct me if I am wrong somewhere. Now why would Java/Python put arrays in the heap? Is being interpreted languages the reason? So will every compiled language refuse to let me return locally created arrays back to the calling module?
Thanks in advance. | 0 | java,python,c++,c,arrays | 2015-03-14T11:49:00.000 | 0 | 29,048,568 | To extend the answer a little:
Python manages lifetimes by reference counting. When an object has no references it'll be destroyed (or finalized) and released.
In Java I think it happens in much the same way | 0 | 87 | false | 0 | 1 | returning arrays and objects created in functions in various languages | 29,048,713
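A two-line illustration of the Python case; the list outlives the call because the caller's reference keeps its count above zero:

    def make_squares(n):
        return [i * i for i in range(n)]   # created locally, lives on via the return

    squares = make_squares(5)              # [0, 1, 4, 9, 16] - still valid here
    print(squares)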
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I am trying to learn how to quickly spin up a digital ocean / ec2 server to temporarily run a python worker script (for parallel performance gains). I can conceptually grasp how to do everything except how / where to store certain auth credentials. These would be things like:
git username / pass to access private repos
AWS auth credentials to access an SQS queue
database credentials
etc.
Where do I store this stuff when I deploy via a fabric script? A link to a good tutorial would be very helpful. | 0 | python,fabric | 2015-03-15T15:04:00.000 | 0 | 29,062,125 | We have a local credential YAML file that contains all these, fab read the credentials from it and use them during the deployment only. | 0 | 75 | false | 1 | 1 | Best way to store auth credentials on fabric deploys? | 29,063,107 |