[Dataset-viewer column summary; fields appear in each row below in this order: Available Count (int64, 1 to 31) | AnswerCount (int64, 1 to 35) | GUI and Desktop Applications (int64, 0 to 1) | Users Score (int64, -17 to 588) | Q_Score (int64, 0 to 6.79k) | Python Basics and Environment (int64, 0 to 1) | Score (float64, -1 to 1.2) | Networking and APIs (int64, 0 to 1) | Question (string, 15 to 7.24k chars) | Database and SQL (int64, 0 to 1) | Tags (string, 6 to 76 chars) | CreationDate (string, 23 chars) | System Administration and DevOps (int64, 0 to 1) | Q_Id (int64, 469 to 38.2M) | Answer (string, 15 to 7k chars) | Data Science and Machine Learning (int64, 0 to 1) | ViewCount (int64, 13 to 1.88M) | is_accepted (bool, 2 classes) | Web Development (int64, 0 to 1) | Other (int64, 1) | Title (string, 15 to 142 chars) | A_Id (int64, 518 to 72.2M)]
1 | 1 | 0 | 1 | 0 | 0 | 0.197375 | 0 | I just installed Panda3D, and I can run the example programs by double clicking them, but I can't run them from IDLE or Sublime.
I get errors like ImportError: No module named direct.showbase.ShowBase
I saw some people bring this up before, and the responses suggested using ppython. I can't figure out how to run that from Sublime, and I really like the autocomplete function there.
How can I either configure the Python 2.7 version that I already have to run Panda3D programs, or run ppython from Sublime? | 0 | python,panda3d | 2015-07-31T14:26:00.000 | 0 | 31,748,596 | This depends on your operating system. Panda3D uses the system's python on OS X and Linux and should "just work".
For Windows Panda3D installs its own copy of Python into Panda3D's install directory (defaults to C:\Panda3D I think), and renames the executable to ppython to prevent name collisions with any other python installs you might have. In your editor you have to change which interpreter it uses to the ppython.exe in the panda3d directory. | 0 | 155 | false | 1 | 1 | What do I need to do to be able to use Panda3D from my Python text editors? | 32,128,178 |
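The interpreter change described above can be wired into Sublime Text with a custom build system. A minimal sketch, saved as Panda3D.sublime-build via Tools > Build System > New Build System (the ppython.exe path is an assumption; point it at wherever your Panda3D install actually put it):

```json
{
    "cmd": ["C:\\Panda3D\\ppython.exe", "-u", "$file"],
    "selector": "source.python"
}
```

With that build system selected, Ctrl+B runs the current file under Panda3D's interpreter instead of the system Python, so autocomplete and running both stay inside Sublime.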
1 | 1 | 0 | 4 | 2 | 1 | 1.2 | 0 | I have come across the following key combinations(...i assume) in vim pymode documentation. <C-c>, <C-C>, <C-X><C-O>, <C-P>/<C-N>. How are they to be interpreted? | 0 | vim,python-mode | 2015-08-01T14:27:00.000 | 0 | 31,762,911 | <C-c> and <C-C> both mean Ctrl+C.
I'm sure you can infer how to type the others.
See :help key-notation. | 0 | 1,336 | true | 0 | 1 | Vim pymode: meaning of key combination | 31,763,008 |
1 | 2 | 0 | 0 | 1 | 0 | 0 | 0 | Usually the workflow I have is as follows:
Perform SQL query on database,
Load it into memory
Transform data based on logic foo()
Insert the transformed data to a table in a database.
How should unit tests be written for this kind of workflow? I'm really new to testing.
Anyway, I'm using Python 3.4. | 1 | python,unit-testing,tdd,integration-testing | 2015-08-02T08:04:00.000 | 0 | 31,769,814 | One way to test this kind of workflow is by using a special database just for testing. The test database mirrors the structure of your production database, but is otherwise completely empty (i.e. no data is in the tables). The routine is then as follows
Connect to the test database (and maybe reload its structure)
For every testcase, do the following:
Load the minimal set of data into the database necessary to test your routine
Run your function to test and grab its output (if any)
Perform some tests to see that your function did what you expected it to do.
Drop all data from the database before the next test case runs
After all your tests are done, disconnect from the database | 0 | 538 | false | 0 | 1 | How should unit test be written for data transformation? | 31,769,998 |
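The routine above can be sketched with Python's built-in unittest and an in-memory SQLite database standing in for the test database. The table layout and the foo() transformation here are placeholder assumptions, not your real schema:

```python
import sqlite3
import unittest

def foo(rows):
    # stand-in for the real transformation logic under test
    return [(rid, name.upper()) for rid, name in rows]

class TransformTest(unittest.TestCase):
    def setUp(self):
        # fresh, empty test database mirroring the production structure
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE src (id INTEGER, name TEXT)")
        self.db.execute("CREATE TABLE dst (id INTEGER, name TEXT)")

    def tearDown(self):
        # "drop all data": the in-memory DB disappears on close
        self.db.close()

    def test_transform_roundtrip(self):
        # load the minimal data set this case needs
        self.db.execute("INSERT INTO src VALUES (1, 'alice')")
        rows = self.db.execute("SELECT id, name FROM src").fetchall()
        # run the function under test and write its output back
        self.db.executemany("INSERT INTO dst VALUES (?, ?)", foo(rows))
        # check the function did what we expected
        result = self.db.execute("SELECT id, name FROM dst").fetchall()
        self.assertEqual(result, [(1, "ALICE")])

if __name__ == "__main__":
    unittest.main(exit=False)
```

Against a real server you would point setUp/tearDown at the dedicated test database instead of :memory:.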
1 | 2 | 0 | 11 | 5 | 0 | 1.2 | 0 | I am using the hypothesis python package for testing.
I am getting the following error:
Flaky: Hypothesis test_visiting produces unreliable results: Falsified on the first call but did not on a subsequent one
As far as I can tell, the test is working correctly.
How do I get around this? | 0 | pytest,python-hypothesis | 2015-08-02T08:16:00.000 | 0 | 31,769,887 | It means more or less what it says: You have a test which failed the first time but succeeded the second time when rerun with the same example. This could be a Hypothesis bug, but it usually isn't. The most common cause of this is that you have a test which depends on some external state - e.g. if you're using a system random number generator rather than a Hypothesis provided one, or if your test creates some files and only fails if the files did not exist at the start of the test. The second most common cause of this is that your failure is a recursion error and the example which triggered it at one level of function calls did not at another.
You haven't really provided enough information to say what's actually happening, so it's hard to provide more specific advice than that. If you're running a recent version of Hypothesis (e.g. 1.9.0 certainly does it) you should have been given quite detailed diagnostics about what is going on - it will tell you what the original exception you got was and it will report if the values passed in seemed to change between calls. | 0 | 3,046 | true | 0 | 1 | What does Flaky: Hypothesis test produces unreliable results mean? | 31,770,016 |
1 | 2 | 1 | 0 | 6 | 0 | 1.2 | 0 | I've been writing a Kivy graphical program on Raspberry Pi, with the KivyPie OS (Linux pre-configured for Kivy development).
For some reason, it's running extremely slow if started with sudo.
Normally, running "python main.py", the program runs at about 30 cycles per second.
However, if I do "sudo python main.py", it runs as slowly as 1 cycle per 5-10 seconds.
I need to use sudo to access Raspberry's GPIO. (unless I try some other way to do it, that I see people discuss).
I'm interested, though, what could be the cause of such a massive performance drop with sudo? And is it possible to work around that?
PS: Running the same program on my PC (Linux) with and without sudo doesn't seem to cause such problem. Only on Raspberry. | 0 | python,raspberry-pi,kivy,sudo | 2015-08-03T11:31:00.000 | 0 | 31,786,122 | Well, I would call this problem solved, even if a few questions remain.
Here are the key points:
The slowdown is caused by Kivy being unable to load the proper video driver under "sudo", and using software rendering instead.
I haven't figured out why the driver isn't loading with sudo or how to fix it. However...
After compiling the program with Pyinstaller, everything works fine. The executable can be started with sudo, GPIO is working, Kivy loads the appropriate driver, everything works fast, as it should.
To sum it up, the reason of the initial problem has been found, no fix for launching the program directly with Python was yet found, but the problem was removed by compiling the program with Pyinstaller. (still, not a convenient way for debugging.) | 0 | 2,524 | true | 0 | 1 | Raspberry Pi Python (Kivy) extremely slow with sudo | 31,861,791 |
1 | 1 | 0 | 0 | 0 | 0 | 1.2 | 0 | I have a python script. This script is essentially my own desktop/UI. However, I would like to replace the default Raspbian (Raspberry Pi linux distro) desktop enviroment with my own version. How would I go about:
Disabling the default desktop and
Launching my python script (fullscreen) at startup?
This is on the Raspberry Pi running a modified version of debian linux.
(Edit: I tried making a startup script in the /etc/init.d directory, and added it to chmod, but I still can't seem to get it to start up. The script contained the normal .sh stuff, but also contained the python command that opened the script in my designated directory.) | 0 | python,linux,arm,raspberry-pi,init.d | 2015-08-03T14:37:00.000 | 1 | 31,790,133 | Ah bah, let's just give a quick answer.
After creating a script in /etc/init.d, you need to add a soft-link to the directory /etc/rc2.d, such as sudo ln -s /etc/init.d/<your script> /etc/rc2.d/S99<your script>. Assuming, of course, that you run runlevel 2. You can check that with the command runlevel.
The S means the script is 'started', the number determines the order in which processes are started.
You will also want to remove the entry from rc2.d that starts the graphical environment. What command that is depends on how your pi is configured. | 0 | 344 | true | 0 | 1 | Starting a python script at boot - Raspbian | 31,791,309 |
1 | 4 | 0 | 7 | 13 | 0 | 1 | 0 | I tried to install the python module via pip, but it was not successful.
Can anyone help me install the smtplib python module on Ubuntu 12.10? | 0 | python,module,smtplib | 2015-08-03T20:27:00.000 | 1 | 31,796,174 | I will tell you a probable reason why you might be getting an error like Error: no module named smtplib.
I had created a program named email.py.
Now, email is a module in python, and because of that it started giving errors for smtplib as well.
So I had to delete the email.pyc file that was created and then rename email.py to mymail.py.
After that, no more smtplib errors.
Make sure your file name does not conflict with a python module. Also check whether any *.pyc files were created inside the folder because of it | 0 | 58,280 | false | 0 | 1 | How to install python smtplib module in ubuntu os | 35,091,800
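A quick way to confirm (or rule out) this kind of shadowing is to check where Python actually loaded the module from; __file__ should point into the standard library, not into your project folder:

```python
import smtplib
import email

# If a local email.py / email.pyc shadowed the stdlib module, these would
# print paths inside your project directory instead of the Python install.
print(smtplib.__file__)
print(email.__file__)
```

If either path points at your own code, rename that file and delete the stale .pyc next to it.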
1 | 1 | 0 | 2 | 1 | 0 | 1.2 | 0 | Overview
After upgrading to 10.11 Yosemite, I discovered that vim (on the terminal) highlights a bunch of errors in my python scripts that are actually not errors.
e.g.
This line:
from django.conf.urls import patterns
gets called out as an [import-error] Unable to import 'django.conf.urls'.
This error is not true because I can open up a python shell from the command line and import the supposedly missing module. I'm also getting a bunch of other errors all the way through my python file too: [bad-continuation] Wrong continued indentation, [invalid-name] Invalid constant name, etc.
All of these errors are not true.
Question
Anyway, how do I turn off these python error checks?
vim Details
vim --version:
VIM - Vi IMproved 7.3 (2010 Aug 15, compiled Nov 5 2014 21:00:28)
Compiled by [email protected]
Normal version without GUI. Features included (+) or not (-):
-arabic +autocmd -balloon_eval -browse +builtin_terms +byte_offset +cindent
-clientserver -clipboard +cmdline_compl +cmdline_hist +cmdline_info +comments
-conceal +cryptv +cscope +cursorbind +cursorshape +dialog_con +diff +digraphs
-dnd -ebcdic -emacs_tags +eval +ex_extra +extra_search -farsi +file_in_path
+find_in_path +float +folding -footer +fork() -gettext -hangul_input +iconv
+insert_expand +jumplist -keymap -langmap +libcall +linebreak +lispindent
+listcmds +localmap -lua +menu +mksession +modify_fname +mouse -mouseshape
-mouse_dec -mouse_gpm -mouse_jsbterm -mouse_netterm -mouse_sysmouse
+mouse_xterm +multi_byte +multi_lang -mzscheme +netbeans_intg -osfiletype
+path_extra -perl +persistent_undo +postscript +printer -profile +python/dyn
-python3 +quickfix +reltime -rightleft +ruby/dyn +scrollbind +signs
+smartindent -sniff +startuptime +statusline -sun_workshop +syntax +tag_binary
+tag_old_static -tag_any_white -tcl +terminfo +termresponse +textobjects +title
-toolbar +user_commands +vertsplit +virtualedit +visual +visualextra +viminfo
+vreplace +wildignore +wildmenu +windows +writebackup -X11 -xfontset -xim -xsmp
-xterm_clipboard -xterm_save
system vimrc file: "$VIM/vimrc"
user vimrc file: "$HOME/.vimrc"
user exrc file: "$HOME/.exrc"
fall-back for $VIM: "/usr/share/vim"
Compilation: gcc -c -I. -D_FORTIFY_SOURCE=0 -Iproto -DHAVE_CONFIG_H -arch i386 -arch x86_64 -g -Os -pipe
Linking: gcc -arch i386 -arch x86_64 -o vim -lncurses | 0 | python,macos,vim,osx-yosemite | 2015-08-04T01:01:00.000 | 1 | 31,799,087 | Vim doesn't check Python syntax out of the box, so a plugin is probably causing this issue.
Not sure why an OS upgrade would make a Vim plugin suddenly start being more zealous about things, of course, but your list of installed plugins (however you manage them) is probably the best place to start narrowing down your problem. | 0 | 421 | true | 0 | 1 | How Do I Turn Off Python Error Checking in vim? (vim terminal 7.3, OS X 10.11 Yosemite) | 31,800,107 |
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I have a python server that forks itself once it receives a request. The python service has several C++ .so objects it can call into, as well as the python process itself.
My question is: from any one of these processes, I would like to be able to see how much CPU all instances of this server are currently using. So let's say I have foo.py; I want to see how much CPU all instances of foo.py are currently using. For example, if foo.py(1) is using 200% CPU, foo.py(2) is using 300%, and foo.py(3) is using 50%, I'd like to arrive at 550%.
The only way I can think of doing this myself is getting the PID of every process and scanning through the /proc filesystem. Is there a more general way available within C/Python/POSIX for such an operation?
Thank you! | 0 | python,c++,c,linux | 2015-08-05T03:02:00.000 | 1 | 31,822,714 | Here is the only way to do that I can think. It is a bit confusing but if you follow the steps it is very simple:
If I want to select total cpu use of Google Chrome process:
$ps -e -o pcpu,comm | grep chrome | awk '{ print $1 }' | paste -sd+ |
bc -l | 0 | 114 | false | 0 | 1 | Query total CPU usage of all instances of a process on Linux OS | 31,830,627 |
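The same aggregation can be done from Python with only the standard library, shelling out to ps once and summing the %CPU column for every matching command. A sketch, assuming the Linux procps ps output format:

```python
import subprocess

def total_cpu(name):
    """Sum %CPU across all processes whose command name contains `name`."""
    out = subprocess.run(
        ["ps", "-e", "-o", "pcpu=,comm="],   # '=' suppresses the header line
        capture_output=True, text=True, check=True,
    ).stdout
    total = 0.0
    for line in out.splitlines():
        parts = line.split(None, 1)          # [pcpu, command]
        if len(parts) == 2 and name in parts[1]:
            total += float(parts[0])
    return total

print(total_cpu("chrome"))
```

This avoids parsing /proc by hand while still giving a single aggregate number per process name.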
1 | 3 | 0 | 3 | 3 | 1 | 0.197375 | 0 | I want to have Python send a mail automatically after certain events occur. In my script I have to enter a password. Is there any way to encrypt my password and use it in this script?
Please give an example as I am not an expert in python. I have seen few answers on this topic but those aren't discussed completely, just some hints are given. | 0 | python,email,encryption,sendmail | 2015-08-05T08:21:00.000 | 0 | 31,827,094 | Encryption basically tries to rely on one (and only one) secret.
That is, one piece of data that is known to those who want to communicate but not to an attacker.
In the past attempts have been made to e.g. (also) keep the encryption algorithm/implementation secret, but if that implementation is widely used (in a popular cryptosystem) those attempts have generally fared poorly.
In general that one secret is the password. So that even if the attacker knows the encryption algorithm, he cannot decrypt the traffic if he doesn't know the password.
As others have shown, encrypting a password and giving a script the means to decrypt it is futile if the attacker can get hold of the script. It's like a safe with the combination of the lock written on the door.
On the other hand as long as you can keep your script secret, the key in it is secret as well.
So if you restrict the permissions of your script such that only the root/administrator user can read or execute it, the only way for an attacker to access it is to have cracked the root/administrator account. In which case you've probably already lost.
The biggest challenges in cases like these are operational.
Here are some examples of things that you should not do;
Make the script readable by every user.
Store the script where it can be read by a publicly accessible web-server.
Upload it to github or any other public hosting service.
Store it in an unencrypted backup.
Update: You should also consider how the script uses the password. If it sends the password over the internet in cleartext, you don't have much security anyway. | 0 | 6,684 | false | 0 | 1 | How to use encrypted password in python email | 31,828,903 |
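In that spirit, rather than encrypting a password the script can decrypt anyway, a common compromise is to keep it out of the script entirely: in an environment variable readable only by the account that runs the job, with an interactive prompt as a fallback. A sketch; the variable name is an assumption:

```python
import os
import sys
import getpass

def get_smtp_password():
    """Fetch the mail password without hard-coding it in the script."""
    # 1. Environment variable set outside the script, e.g. in a startup
    #    file that only root (or the service account) can read.
    password = os.environ.get("SMTP_PASSWORD")
    # 2. Fall back to prompting when the script is run by hand.
    if password is None and sys.stdin.isatty():
        password = getpass.getpass("SMTP password: ")
    return password
```

The script then holds no secret at all; the secret lives in the environment, protected by file and account permissions.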
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I have a periodical celery job that is supposed to run every night at midnight. Of course I can just run the system and leave it overnight to see the result. But I can see that it's not going to be very efficient in terms of solving potential problems and energy.
In such situation, is there a trick to make the testing easier? | 0 | python,testing,celery | 2015-08-05T09:46:00.000 | 1 | 31,828,928 | To facilitate testing you should first run the task from ipython to verify that it does what it should.
Then to verify scheduling you should change the celerybeat schedule to run in the near future, and verify that it does in fact run.
Once you have verified functionality and schedule you can update the celerybeat schedule to midnight, and be at least some way confident that it will run like it should. | 0 | 158 | false | 0 | 1 | testing celery job that runs each night | 31,877,460 |
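The "run in the near future" trick boils down to one line in the beat schedule. A configuration sketch (the task path and times are assumptions, not from the question):

```python
# celeryconfig.py -- beat schedule sketch
from celery.schedules import crontab

CELERYBEAT_SCHEDULE = {
    "nightly-job": {
        "task": "myapp.tasks.nightly_job",      # hypothetical task path
        "schedule": crontab(hour=0, minute=0),  # production: midnight
        # while testing, point this a couple of minutes ahead instead, e.g.
        # "schedule": crontab(hour=14, minute=35),
    },
}
```

Once the near-future run fires as expected, switch the crontab back to midnight.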
2 | 3 | 0 | 6 | 5 | 0 | 1.2 | 0 | I have been looking around for UART baud rates supported by the Beaglebone Black (BB). I can't find it in the BB system reference manual or the datasheet for the sitara processor itself. I am using pyserial and the Adafruit BBIO library to communicate over UART.
Does this support any value within reason or is it more standard (9600, 115200, etc.)?
Thanks for any help.
-UPDATE-
It is related to the baud rates supported by PySerial. This gives a list of potential baud rates, but not specific ones that will or will not work with specific hardware. | 0 | python,pyserial,beagleboneblack,uart,baud-rate | 2015-08-05T21:03:00.000 | 1 | 31,842,785 | The AM335x technical reference manual (TI document spruh73) gives the baud rate limits for the UART sub-system in the UART section (section 19.1.1, page 4208 in version spruh73l):
Baud rate from 300 bps up to 3.6864 Mbps
The UART modules each have a 48MHz clock to generate their timing. They can be configured in one of two modes: UART 16x and UART 13x, in which that clock is divided by 16 and 13, respectively. There is then a configured 16-bit divisor to generate the actual baud rate from that clock. So for 300 bps it would be UART 16x and a divisor of 10000, or 48MHz / 16 / 1000 = 300 bps.
When you tell the omap-serial kernel driver (that's the driver used for UARTs on the BeagleBone), it calculates the mode and divisor that best approximates the rate you want. The actual rate you'll get is limited by the way it's generated - for example if you asked for an arbitrary baud of 2998 bps, I suppose you'd actually get 2997.003 bps, because 48MHz / 16 / 1001 = 2997.003 is closer to 2998 than 48 MHz / 16 / 1000 = 3000.
So the UART modules can certainly generate all the standard baud rates, as well as a large range of arbitrary ones (you'd have to actually do the math to see how close it can get). On Linux based systems, PySerial is just sending along the baud you tell it to the kernel driver through an ioctl call, so it won't limit you either.
Note: I just tested sending data on from the BeagleBone Black at 200 bps and it worked fine, but it doesn't generate 110 bps (the next lower standard baud rate below 300 bps), so the listed limits are really the lowest and highest standard rates it can generate. | 0 | 5,044 | true | 0 | 1 | Maximum Beaglebone Black UART baud? | 33,552,144 |
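The divisor arithmetic in the answer is easy to check. In UART 16x mode the base rate is 48 MHz / 16 = 3 MHz, and the driver picks the 16-bit divisor closest to the requested rate:

```python
BASE = 48_000_000 // 16   # 3 MHz base in UART 16x mode

def divisor_for(baud):
    # closest 16-bit divisor to the requested rate
    return round(BASE / baud)

def actual_baud(baud):
    # rate the hardware really generates with that divisor
    return BASE / divisor_for(baud)

print(divisor_for(300), actual_baud(300))              # 10000 300.0
print(divisor_for(2998), round(actual_baud(2998), 3))  # 1001 2997.003
```

This reproduces both worked examples: 300 bps comes out exact with divisor 10000, and asking for 2998 bps lands on divisor 1001, i.e. 2997.003 bps.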
2 | 3 | 0 | 0 | 5 | 0 | 0 | 0 | I have been looking around for UART baud rates supported by the Beaglebone Black (BB). I can't find it in the BB system reference manual or the datasheet for the sitara processor itself. I am using pyserial and the Adafruit BBIO library to communicate over UART.
Does this support any value within reason or is it more standard (9600, 115200, etc.)?
Thanks for any help.
-UPDATE-
It is related to the baud rates supported by PySerial. This gives a list of potential baud rates, but not specific ones that will or will not work with specific hardware. | 0 | python,pyserial,beagleboneblack,uart,baud-rate | 2015-08-05T21:03:00.000 | 1 | 31,842,785 | The BBB reference manual does not contain any information on Baud Rate for UART but for serial communication I usually prefer using value of BAUDRATE = 115200, which works in most of the cases without any issues. | 0 | 5,044 | false | 0 | 1 | Maximum Beaglebone Black UART baud? | 31,902,876 |
1 | 1 | 0 | 2 | 2 | 0 | 0.379949 | 1 | I'm looking for a solution for setting up domain authorization with aiohttp.
There are several ldap libraries, but all of them block the event loop; besides, I don't have a clear understanding of user authorization with aiohttp.
As I see it, I need session management and to store isLoggedIn=True in a cookie, check that cookie at every route -> redirect to the login handler, and check the key in every template? It seems very insecure; the session could be stolen. | 0 | python,python-asyncio | 2015-08-06T13:51:00.000 | 0 | 31,857,628 | You may call a synchronous LDAP library in a thread pool (loop.run_in_executor()).
aiohttp itself doesn't contain abstractions for sessions and authentication but there are aiohttp_session and aiohttp_security libraries. I'm working on these but current status is alpha. You may try it as beta-tester :) | 0 | 549 | false | 0 | 1 | Proper way to setup ldap auth with aiohttp.web | 31,880,066 |
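A minimal sketch of the thread-pool approach. The blocking bind function here is a stand-in for a real synchronous LDAP call (e.g. python-ldap's simple_bind_s), which is the assumption to swap in:

```python
import asyncio

def blocking_ldap_bind(username, password):
    # stand-in for a synchronous LDAP bind; a real implementation would call
    # something like ldap.initialize(uri).simple_bind_s(dn, password)
    return username == "alice" and password == "secret"

async def authenticate(username, password):
    loop = asyncio.get_running_loop()
    # run the blocking call in the default thread pool so the loop stays free
    return await loop.run_in_executor(None, blocking_ldap_bind,
                                      username, password)

ok = asyncio.run(authenticate("alice", "secret"))
print(ok)
```

Inside an aiohttp handler you would await authenticate(...) the same way; the event loop keeps serving other requests while the LDAP round-trip runs in a worker thread.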
3 | 4 | 0 | 1 | 5 | 1 | 0.049958 | 0 | Sorry that I don't have enough reputation to post images.
The main problem is that it tells me that I need to install a C compiler and reinstall gensim, or training will be slow; and in fact it is really slow.
I have installed mingw32, Visual Studio 2008, and have added the mingw32 environment variable to my path.
Any ideas on how to solve it? | 0 | python,compilation,word2vec | 2015-08-07T06:23:00.000 | 0 | 31,870,995 | Similar to user1151923, after adding MinGW\bin to my path variable and uninstalling\reinstalling gensim through pip, I still received the same warning message. I ran the following code to fix this problem (installed gensim from conda).
pip uninstall gensim
conda install gensim | 0 | 4,228 | false | 0 | 1 | Gensim needs a C compiler? | 34,438,547 |
3 | 4 | 0 | 0 | 5 | 1 | 0 | 0 | Sorry that I don't have enough reputation to post images.
The main problem is that it tells me that I need to install a C compiler and reinstall gensim, or training will be slow; and in fact it is really slow.
I have installed mingw32, Visual Studio 2008, and have added the mingw32 environment variable to my path.
Any ideas on how to solve it? | 0 | python,compilation,word2vec | 2015-08-07T06:23:00.000 | 0 | 31,870,995 | I had the same problem and tried many solutions, but none of them worked except degrading to gensim version 3.7.1. | 0 | 4,228 | false | 0 | 1 | Gensim needs a C compiler? | 56,945,149 |
3 | 4 | 0 | 0 | 5 | 1 | 0 | 0 | Sorry that I don't have enough reputation to post images.
The main problem is that it tells me that I need to install a C compiler and reinstall gensim, or training will be slow; and in fact it is really slow.
I have installed mingw32, Visual Studio 2008, and have added the mingw32 environment variable to my path.
Any ideas on how to solve it? | 0 | python,compilation,word2vec | 2015-08-07T06:23:00.000 | 0 | 31,870,995 | When I installed it from conda-forge then I obtained a version that is already compiled and fast:
conda install -c conda-forge gensim | 0 | 4,228 | false | 0 | 1 | Gensim needs a C compiler? | 61,358,854 |
1 | 1 | 0 | 1 | 0 | 0 | 1.2 | 0 | I am fairly new to unit testing, and at the moment I am having trouble trying to unit test a Google OAuth Picasa authentication. It would involve major changes to the code if I wanted to unit test it (yeah, I develop unit tests after the app works).
I have read that Mock Object is probably the way to go. But if I use Mock, how do I know that the functionality (that is Google oAuth Picasa authentication), is really working?
Or, aside that I develop unit testing after the app finished, did I made other mistakes in understanding Mock? | 0 | python,unit-testing | 2015-08-08T11:17:00.000 | 0 | 31,892,531 | When unit testing, you test a particular unit (function/method...) in isolation, meaning that you don't care if other components that your function uses, work (since there are other unit test cases that cover those).
So to answer your question - it's out of the scope of your unit tests whether an external service like Google oAuth works. You just need to tests that you make a correct call to it, and here's where Mock comes in handy. It remembers the call for you to inspect and make some assertions about it, but it prevents the request for actually going out to the external service / component / library / whatever.
Edit: If you find your code is too complex and difficult to test, that might be an indication that it should be refactored into smaller more manageable pieces. | 0 | 32 | true | 0 | 1 | How can mock object replace all system functionality being tested? | 31,892,566 |
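A small illustration of the idea: the unit test checks only that your code makes the right call; the OAuth client itself is replaced by a Mock. The names here are hypothetical, not from any real Picasa wrapper:

```python
from unittest import mock

def fetch_albums(client):
    # `client` wraps the real Picasa/OAuth HTTP layer in production
    return client.get("/albums")

fake_client = mock.Mock()
fake_client.get.return_value = ["album1", "album2"]

result = fetch_albums(fake_client)

# assert on the *call* your code made, not on Google's service working
fake_client.get.assert_called_once_with("/albums")
print(result)
```

Whether Google's OAuth endpoint actually works is then covered by a separate (and much rarer) integration test, not by the unit tests.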
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | How can you:
Print debugging messages from a Python script?
Access those messages in real-time on a remote device (Arduino Yun) you're SSH'ed into?
So far, I've been making changes, copying them to the Yun, restarting it, and doing all this without the benefit of debugging. (I have to test on the Yun itself.) | 0 | python,python-2.7,arduino,arduino-yun | 2015-08-10T23:11:00.000 | 0 | 31,930,634 | In a Python script, you can give yourself messages like this: print("var x = {0}; y = {1}".format(x, y))
Run python path/to/file.py (while SSH'ed) to access the server w/real-time logs.
If the server is already running, you can do ps | grep python then kill XXXX for the running process, then run the above instruction again. | 0 | 152 | false | 0 | 1 | How to debug Python on Arduino Yun | 31,950,208 |
2 | 10 | 0 | 0 | 7 | 0 | 0 | 1 | Ok so I know nothing about programming with python yet but I have wanted to make a bot to post to instagram for a while so thought it would be a good way to 'hit the ground running'.
I don't have a specific time frame, so no rush.
I don't know any programming languages yet, but have wanted to branch out, since I use a GUI-based web automation tool which I see has quite a lot of overlap with programming languages, such as if statements, variables, loops etc.
I have been feeling that learning a proper language will be a better investment long term.
So, since I know nothing about it but have my goal in mind, can people suggest where I start in terms of what I should study for the task? Then I can laser-focus on what I need to learn and work at it piece by piece.
I want to just upload pictures as one operation and follow/unfollow as another on Instagram. So please illuminate me on how I'd go about that. I was told that python is the best all-rounder to learn since it does everything in a tidy fashion, i.e. less code, and is intuitive. I will want to make other projects in future based on web automation, so I felt this would be a good one to learn from, based on what I was told by a pro programmer.
I understand I may have been vague, but I'm not sure what to ask yet given my ignorance, so please ask away if needed to hone the question/s. | 0 | python,instagram | 2015-08-11T09:56:00.000 | 0 | 31,938,658 | It's a bit heavy, but you can use Selenium and build a bot that drives your browser. You can even make automatic clicks on the window if you don't want to read the page's code.
2 | 10 | 0 | 6 | 7 | 0 | 1 | 1 | Ok so I know nothing about programming with python yet but I have wanted to make a bot to post to instagram for a while so thought it would be a good way to 'hit the ground running'.
I don't have a specific time frame, so no rush.
I don't know any programming languages yet, but have wanted to branch out, since I use a GUI-based web automation tool which I see has quite a lot of overlap with programming languages, such as if statements, variables, loops etc.
I have been feeling that learning a proper language will be a better investment long term.
So, since I know nothing about it but have my goal in mind, can people suggest where I start in terms of what I should study for the task? Then I can laser-focus on what I need to learn and work at it piece by piece.
I want to just upload pictures as one operation and follow/unfollow as another on Instagram. So please illuminate me on how I'd go about that. I was told that python is the best all-rounder to learn since it does everything in a tidy fashion, i.e. less code, and is intuitive. I will want to make other projects in future based on web automation, so I felt this would be a good one to learn from, based on what I was told by a pro programmer.
I understand I may have been vague, but I'm not sure what to ask yet given my ignorance, so please ask away if needed to hone the question/s. | 0 | python,instagram | 2015-08-11T09:56:00.000 | 0 | 31,938,658 | You should note that while you can follow and unfollow users and like and unlike media, you CAN NOT post to Instagram using their API.
1 | 1 | 0 | 2 | 2 | 0 | 0.379949 | 0 | I'm new to the world of AWS, and I just wrote and deployed a small Pyramid application. I ran into some problems getting set up, but after I got it working, everything seemed to be fine. However, now, my deployments don't seem to be making a difference in the environment (I changed the index.pt file that my root url routed to, and it does not register on my-app.elasticbeanstalk.com).
Is there some sort of delay to the deployments that I am unaware of, or is there a problem with how I'm deploying (eb deploy using the awsebcli package) that's causing these updates to my application to not show? | 0 | python,amazon-web-services,amazon-elastic-beanstalk,pyramid | 2015-08-12T02:14:00.000 | 1 | 31,954,968 | Are you committing your changes before deploying?
eb deploy will deploy the HEAD commit.
You can do eb deploy --staged to deploy staged changes. | 0 | 2,185 | false | 1 | 1 | why does elastic beanstalk not update? | 31,955,222 |
1 | 1 | 0 | 3 | 1 | 1 | 1.2 | 0 | I am working on a python module that is a convenience wrapper for a c library. A python Simulation class is simply a ctypes structure with a few additional helper functions. Most parameters in Simulation can just be set using the _fields_ variable. I'm wondering how to document these properly. Should I just add it to the Simulation docstring? Or should I write getter/setter methods just so I can document the variables? | 0 | python,documentation | 2015-08-15T17:57:00.000 | 0 | 32,027,621 | When I do similar things, if it is a small class I will put everything in the same class, but if it is bigger, I typically make a class that only contains the fields, and then a subclass of that with functions. Then you can have a docstring for your fields class and a separate docstring for your simulation functions.
YMMV, but I would never consider adding getters and setters for the sole purpose of making the documentation conform to some real or imaginary ideal. | 0 | 124 | true | 0 | 1 | Documenting ctypes fields | 32,031,704 |
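The docstring-only option looks like this in practice; the fields are documented right next to _fields_ and still set directly as attributes, no getters/setters required. The field names below are hypothetical:

```python
import ctypes

class Simulation(ctypes.Structure):
    """Convenience wrapper around the C simulation struct.

    Fields (settable directly as attributes):
        dt (float): integration timestep in seconds.
        n_steps (int): number of steps to run.
    """
    _fields_ = [
        ("dt", ctypes.c_double),
        ("n_steps", ctypes.c_int),
    ]

sim = Simulation(dt=0.01, n_steps=100)
print(sim.dt, sim.n_steps)
```

Documentation generators that read class docstrings (help(), Sphinx) will pick this up without any wrapper methods.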
1 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | I have a python program. It takes some text from a text file (A) as an input, do some text annotation and stores the annotated text as an output in another file (B). Now, my plan was to make this like a web service.
I could do this using php and calling the python program from php. Specifically, my php code does this-
--Takes text from a HTML textarea.
--Saves the text into file A.
--Runs the python program.
--Load the output from file B and show the annotated text in the HTML textarea.
Now, to do the text annotation, python program needs to load a model from another big file (C). I would say, loading time is 10 sec and annotating takes 2 sec. Each time, I have a new text in the HTML textarea, I need 12 sec to show the output. I want to minimize the overall time.
I was thinking, if I could communicate from PHP with an already-running python program, I could actually save 10 sec, because then python would only need to load the model file C once; it could apply the model to any text that PHP sends it, and send the output back to PHP too.
Is there a way I can achieve this? Can django help here in anyway?
Thank you for reading so much. | 0 | php,python,django,web-services | 2015-08-16T01:12:00.000 | 0 | 32,030,932 | You could use raw sockets in your Python and php programs to make them communicate through TCP locally.
Make your Python program a TCP server with address 'localhost' and a port number, for example 5555. Then, in your php script, also using sockets, create client code that sends the text to be processed as a TCP request to your Python server.
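A sketch of the Python side of that design, using only the standard library: the model would be loaded once at server startup (the slow 10 s step), and each request then pays only the annotation cost. The "[ANNOTATED]" stand-in below replaces the real model call, and the client shown in Python is what the PHP side would do:

```python
import socket
import socketserver
import threading

class AnnotateHandler(socketserver.StreamRequestHandler):
    def handle(self):
        text = self.rfile.readline().decode().strip()
        # the real server would apply the model loaded once at startup
        self.wfile.write(("[ANNOTATED] " + text + "\n").encode())

# port 0 lets the OS pick a free port; a real deployment would fix one (5555)
server = socketserver.TCPServer(("localhost", 0), AnnotateHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# client side -- in production this would be the PHP socket code
with socket.create_connection(("localhost", port)) as sock:
    sock.sendall(b"hello world\n")
    reply = sock.makefile().readline().strip()

server.shutdown()
print(reply)
```

PHP's fsockopen/fwrite/fgets maps one-to-one onto the client half of this exchange.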
3 | 4 | 0 | 1 | 8 | 0 | 0.049958 | 0 | I'm using Node-Red, hosted on a Raspberry Pi for an IoT project.
How do I trigger a Python script that is on the raspi from Node-Red? I want to run a script that updates the text on an Adafruit LCD shield which is sitting on the Pi
Should I be looking to expose the Python script as a web service somehow?
I'm using a Raspberry Pi B+ | 0 | python,raspberry-pi,gpio,iot,node-red | 2015-08-17T19:07:00.000 | 1 | 32,057,882 | I hope you have installed Node-RED along with Python.
If not, install the node with the following command in either PowerShell or CMD:
npm install -g node-red-contrib-python3-function
After starting Node-RED, you can find the pythonshell node in Node-RED's node panel.
Drag and drop it, then double-click it to open the "node properties" panel.
Enter the python.exe path in Name and your Python file in Py File, then click Done.
Connect a msg.payload (debug) node to it and Deploy.
Trigger the PythonShell node's input and your Python program will be executed, with its output displayed. | 0 | 28,176 | false | 0 | 1 | How to trigger Python script on Raspberry Pi from Node-Red | 54,633,976
3 | 4 | 0 | 0 | 8 | 0 | 0 | 0 | I'm using Node-Red, hosted on a Raspberry Pi for an IoT project.
How do I trigger a Python script that is on the raspi from Node-Red? I want to run a script that updates the text on an Adafruit LCD shield which is sitting on the Pi
Should I be looking to expose the Python script as a web service somehow?
I'm using a Raspberry Pi B+ | 0 | python,raspberry-pi,gpio,iot,node-red | 2015-08-17T19:07:00.000 | 1 | 32,057,882 | I had a similar challenge with a Raspberry pi 4.
I solved it by using an execute node. On the command slot, enter the path of the python script as follows.
sudo python3 /home/pi/my_script.py
Change the script path to yours. Use the inject node to run the script and the debug node to view your output.
Ensure you grant superuser permission using sudo and you have python3 installed. | 0 | 28,176 | false | 0 | 1 | How to trigger Python script on Raspberry Pi from Node-Red | 71,484,228 |
3 | 4 | 0 | 9 | 8 | 0 | 1.2 | 0 | I'm using Node-Red, hosted on a Raspberry Pi for an IoT project.
How do I trigger a Python script that is on the raspi from Node-Red? I want to run a script that updates the text on an Adafruit LCD shield which is sitting on the Pi
Should I be looking to expose the Python script as a web service somehow?
I'm using a Raspberry Pi B+ | 0 | python,raspberry-pi,gpio,iot,node-red | 2015-08-17T19:07:00.000 | 1 | 32,057,882 | Node-RED supplies an exec node as part of its core set, which can be used to call external commands; this could call your Python script.
More details of how to use it can be found in the info sidebar when a copy is dragged onto the canvas.
Or you could wrap the script as a web service or just a simple TCP socket, both of which have nodes that can be used to drive them. | 0 | 28,176 | true | 0 | 1 | How to trigger Python script on Raspberry Pi from Node-Red | 32,058,198 |
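A minimal sketch of a script the exec node could invoke: it takes msg.payload as a command-line argument and prints JSON for Node-RED to capture on stdout. The payload field names here are made up for illustration; the exec node simply captures whatever the script prints:

```python
import json
import sys

def make_payload(text):
    # Shape of the stdout message is an illustrative assumption; the
    # exec node passes it back to the flow as the new msg.payload.
    return json.dumps({"input": text, "length": len(text)})

if __name__ == "__main__":
    # The exec node can append msg.payload as an extra argument.
    print(make_payload(sys.argv[1] if len(sys.argv) > 1 else ""))
```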
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I know there's a lot of questions asking about the opposite, but is there a particularly good way to launch a Python script from an executable? The executable itself was originally written in Python and compiled using py2exe so I was thinking of using popen() and passing python myscript.py but not sure if that's the most efficient.
The particular script being launched would be Python 2.7 with the Python ArcGIS interpreter. | 0 | python,python-2.7,executable,popen | 2015-08-18T01:03:00.000 | 1 | 32,062,008 | If you're simply trying to execute a python script externally, then just use popen() on your script, as you said. | 0 | 46 | false | 0 | 1 | Efficient way of launching python script from exe | 32,062,059 |
1 | 2 | 0 | 1 | 0 | 0 | 0.099668 | 0 | I need a system that can check whether many Python scripts have run completely.
The scripts would scrape data and output it to a corresponding xml file.
If the script fails there might be no xml file, or there may be error messages in the log files. PHP files run the Python scripts.
Is there a simple solution using an AWS service that would trigger alarms when a python script is not functioning fully? | 0 | php,python,amazon-web-services,logging,alarm | 2015-08-18T11:06:00.000 | 0 | 32,070,697 | You can schedule a cron job to achieve that. | 0 | 45 | false | 1 | 1 | Is there an AWS solution of an alarm system for failed python scripts? | 32,070,794
1 | 1 | 0 | 5 | 3 | 0 | 0.761594 | 0 | I have a few unit tests written using pytest that have successfully caused segfaults to occur. However if a segfault occurs (on my Mac) during these execution, pytest seems to quit altogether and provide no information on what caused the python interpreter to crash.
Now I could infer from the logs the specific test that was crashing and determine the values used within, but I was wondering if there was any way to be able to somehow log the crash as a regular test failure and keep iterating through my tests without stopping?
If helpful, I can try to conceive an example but I think the question should be pretty self-explanatory. | 0 | python,unit-testing,testing,pytest,python-unittest | 2015-08-18T16:54:00.000 | 0 | 32,078,314 | Using the pytest-xdist plugin it will restart the nodes automatically when they crash, so installing the plugin and running the tests with -n1 should make py.test survive the crashing tests. | 0 | 1,941 | false | 0 | 1 | pytest: If a crash/segfault/etc. occurs during testing, is there a way to make pytest log the crash as a test failure and continue testing? | 32,140,354 |
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I have a drone that has a GoPro and video transmitter attached to it. The transmitter can transmit via wireless or through 3.5mm. How can I pick up this data through 3.5mm/composite video on a Pi? The drone is a DJI Phantom 2 and an iOSD transmitter. I want to use the transmitter video rather than the GoPro directly, since the transmitter also applies a nice HUD to the video with flight specs. | 0 | java,python,raspberry-pi,raspberry-pi2 | 2015-08-18T19:18:00.000 | 0 | 32,080,897 | You can't. The composite video phono socket is an output device. | 0 | 282 | false | 0 | 1 | How to read data from 3.5mm jack on Raspberry Pi 2 with python or java? | 32,080,928
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I wish to search for 500 strings in 2500 .c/.h files and return the lines and files containing the strings, something like what is built into Total Commander's search function. Is there a way I can automate the TC search and retrieve the results?
Or else can this be achieved in Python without TC? | 0 | python,search,automation | 2015-08-19T06:18:00.000 | 0 | 32,087,821 | In Total Commander you can pick menu Commands->Search in separate Process...
Then you can filter c/h files using e.g. this mask in "Search for" field: *.cpp;*.c;*.cxx;*.hpp;*.h;*.hxx (add/remove which one do you need).
Afterwards you can enable "find text" box and "Regex (2)". And enter your words in text as (word1|word2|word3).
This is also possible to do with Python if you prefer it. | 0 | 476 | false | 0 | 1 | Total Commander Automation using Python | 32,307,071 |
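Since the answer notes the same search is possible in Python, here is one hedged sketch using only the standard library, mimicking TC's `*.c;*.h` mask and `(word1|word2|word3)` regex:

```python
import os
import re

def search_sources(root, words, exts=(".c", ".h")):
    # Build the same (word1|word2|word3) alternation TC uses and
    # report (path, line number, line text) for every match.
    pattern = re.compile("(" + "|".join(map(re.escape, words)) + ")")
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(exts):
                continue
            path = os.path.join(dirpath, name)
            with open(path, "r", errors="replace") as f:
                for lineno, line in enumerate(f, 1):
                    if pattern.search(line):
                        hits.append((path, lineno, line.rstrip("\n")))
    return hits
```

With 500 words the alternation is long but a single compiled regex still scans each line once, which is much faster than 500 separate searches.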
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I have an FPGA board and I wrote VHDL code that receives images (in binary) from the serial port and saves them in SDRAM on my board. The FPGA then displays the images on a monitor via a VGA cable. My problem is that filling the SDRAM takes too long (about 10 minutes at 115200 baud).
On my computer I wrote Python code to send the image (in binary) to the FPGA via the serial port. My code reads a binary file saved on my hard disk and sends it to the FPGA.
My question is: if I use a buffer to hold my images instead of a binary file, will I get a better result? If so, can you help me do that, please? If not, can you suggest a solution, please?
thanks in advans, | 0 | python-2.7 | 2015-08-19T15:38:00.000 | 1 | 32,100,003 | Unless you are significantly compressing before download, and decompressing the image after download, the problem is your 115,200 baud transfer rate, not the speed of reading from a file.
At the standard N/8/1 line encoding, each byte requires 10 bits to transfer, so you will be transferring 11,520 bytes per second.
In 10 minutes, you will transfer 11,520 * 60 * 10 = 6,912,000 bytes. At 3 bytes per pixel (for R, G, and B), this is 2,304,000 pixels, which happens to be the number of pixels in a 1920 by 1200 image.
The answer is to (a) increase the baud rate; and/or (b) compress your image (using something simple to decompress on the FPGA like RLE, if it is amenable to that sort of compression). | 0 | 96 | false | 0 | 1 | IS reading from buffer quicker than reading from a file in python | 32,107,551 |
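The arithmetic in this answer is easy to double-check in a couple of lines of Python:

```python
def transfer_seconds(n_bytes, baud=115200, bits_per_byte=10):
    # N/8/1 framing costs 10 bits on the wire per byte:
    # 1 start bit + 8 data bits + 1 stop bit.
    return n_bytes / (baud / bits_per_byte)

# A 1920 x 1200 image at 3 bytes per pixel (R, G, B):
image_bytes = 1920 * 1200 * 3  # 6,912,000 bytes
```

At 115200 baud that image needs 6,912,000 / 11,520 = 600 seconds, i.e. the 10 minutes observed, so the serial link, not file I/O, is the bottleneck.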
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | Eclipse 4.5 (Mars) / Windows 7
I have an Eclipse C/C++ Makefile project that has both Python and C/C++ code. The source code is checked-out from an SVN repository. The build environment is via a MSYS shell using a project specific configuration script to create all Makefiles in the top/sub-directories and 'make', 'make install' to build.
My .project file has both the PyDev and CDT natures configured.
I can switch between the PyDev and C/C++ perspectives and browse code including right-clicking on a symbol and 'open declaration'.
The 'Debug' perspective appears to be specific to the C/C++ perspective.
Do you have experience with configuring an Eclipse project that allows you to debug both Python and C/C++ code? | 0 | python,c++,eclipse,pydev | 2015-08-19T16:16:00.000 | 1 | 32,100,787 | After 'googling' around the internet, here is what appears to be working for my particular situation:
Create a C/C++ project (empty makefile project). This produces the following 3 files in my top-level local SVN check-out directory:
.settings
.cproject
.project
Note: I keep my Eclipse workspace separate from my Eclipse project.
Create a separate Python project that is outside of the local SVN check-out directory.
Note: This Eclipse Python project is in my Eclipse workspace.
This creates the following 2 files:
.pydevproject
.project
Copy the .pydevproject to the directory containing the .settings, .cproject, and .project files.
Copy the Python 'nature' elements from the Python .project file to the CDT .project file.
Restart Eclipse if it had been running while editing the dot (.) files.
Finally, get into the 'C/C++' perspective. In the 'Project Explorer' window, pull down the 'View Menu'.
Select 'Customize View...'.
Select the 'Content' tab.
Uncheck the 'PyDev Navigator Content' option. | 0 | 702 | false | 0 | 1 | Debugging Python when both PyDev and CDT natures in same Eclipse project | 32,167,307 |
1 | 1 | 0 | 2 | 0 | 1 | 1.2 | 0 | I have a python application running in an embedded Linux system. I have realized that the python interpreter is not saving the compiled .pyc files in the filesystem for the imported modules by default.
How can I enable the interpreter to save it? The file system permissions are right. | 0 | python,linux | 2015-08-19T19:41:00.000 | 1 | 32,104,282 | There are a number of places where this enabled-by-default behavior could be turned off.
PYTHONDONTWRITEBYTECODE could be set in the environment
sys.dont_write_bytecode could be set through an out-of-band mechanism (i.e. site-local initialization files, or a patched interpreter build).
File permissions could fail to permit it. This need not be obvious! Anything from filesystem mount flags to SELinux tags could have this result. I'd suggest using strace or a similar tool (as available for your platform) to determine whether any attempts to create these files exist.
On an embedded system, it makes much more sense to make this an explicit step rather than runtime behavior: This ensures that performance is consistent (rather than having some runs take longer than others to execute). Use py_compile or compileall to explicitly run ahead-of-time. | 0 | 377 | true | 0 | 1 | Python is not saving .pyc files in filesystem | 32,104,793 |
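The explicit ahead-of-time step this answer recommends can look like the following minimal sketch, using the stdlib py_compile module:

```python
import py_compile

def compile_ahead_of_time(source_path):
    # Byte-compile one source file up front, instead of paying the
    # cost (and needing write permission) at first import on-device.
    # doraise=True turns compile problems into catchable exceptions.
    return py_compile.compile(source_path, doraise=True)
```

For a whole tree, `python -m compileall <dir>` does the same in bulk as part of the build, so the embedded filesystem can even be mounted read-only at runtime.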
1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | I need to copy huge files from remotemachine1 to remotemachine2-remotemachine10.
What is the best way to do it? Doing a get on remotemachine1 and then a put to all the remaining machines isn't ideal, as the file is huge and I need to be able to send the Fabric command from my laptop. The remote machines are all in the same network. Or should I do a run('rsync /file_on_remotemachine1 RemoteMachine2:/targetpath/')?
Is there a better way to do this in Fabric ? | 0 | python,rsync,fabric | 2015-08-21T05:32:00.000 | 1 | 32,132,987 | Best way would be running the script from remotemachine1 if you can. | 0 | 223 | false | 0 | 1 | How do you use Fabric to copy files between remote machines? | 35,467,047 |
1 | 2 | 0 | 1 | 0 | 1 | 0.099668 | 0 | Details:
I have an xxx.py file on machine B.
I am trying to execute that xxx.py file from machine A using a Python script. | 0 | python,python-2.7,python-3.x | 2015-08-21T11:28:00.000 | 1 | 32,139,162 | Unless you have done something to specifically allow this, such as SSH into machine B first, you cannot do this.
That's a basic safety consideration. If any host A could execute any script on host B, it would be extremely easy to run malicious code on other machines. | 0 | 85 | false | 0 | 1 | How to run the python file in remote machine directory? | 32,139,340 |
1 | 2 | 0 | 1 | 0 | 1 | 1.2 | 0 | I need to use python "mock" library for unit testing.
Is it possible to install the library without connecting my development machine to the Internet?
Thx in advance. | 0 | python,mocking | 2015-08-21T13:42:00.000 | 0 | 32,141,887 | Finally I managed to install "mock" offline.
Step-by-step guide follows (I use Python 2.7):
Download necessary packages provided in .tar.gz archives:
mock, setuptools, pbr, six, funcsigs
Unpack all of the archives
Install modules one by one in the following order: setuptools, pbr, six, funcsigs, mock. To install a module, chdir to the folder it was unpacked to and execute python setup.py install | 0 | 343 | true | 0 | 1 | Python mock offline standalone installation | 32,206,732 |
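Once the offline install finishes, a quick smoke test that the library actually works is useful. This sketch falls back to unittest.mock, which on Python 3.3+ bundles the same API as the standalone mock package; describe() and the fake sensor are illustrative names:

```python
try:
    import mock  # the standalone package installed above (Python 2)
except ImportError:
    from unittest import mock  # bundled with Python 3.3+

def describe(sensor):
    # Any object with a read() method will do; in tests, a Mock can
    # stand in for real hardware or network access.
    return "reading=%s" % sensor.read()
```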
1 | 1 | 0 | 1 | 2 | 0 | 1.2 | 0 | I have a rails application, and when there is an update to one of the rows in my database, I want to run a python script which is on a raspberry pi (example: lights up a LED when a user is created). I'm using PostgreSQL and have looked into NOTIFY/LISTEN channels, but can't quite figure that out. Is there an easy way to do this? The raspberry pi will not be on the same network as the rails application. | 0 | python,ruby-on-rails,postgresql,raspberry-pi | 2015-08-23T19:48:00.000 | 0 | 32,170,818 | There are many "easy" ways, depending on your skills.
Maybe "write triggers that send a NOTIFY on insert/update" is the hint you need? | 0 | 61 | true | 1 | 1 | Trigger python script on raspberry pi from a rails application | 32,172,061
1 | 1 | 1 | 0 | 0 | 0 | 1.2 | 0 | I've got PyQt4 and pyqt4-dev-tools installed on my raspberry pi but I'm getting
ImportError: No module named PyQt4 on my Raspberry Pi
with the following includes when I run python3
from PyQt4 import QtGui
from PyQt4 import QtCore
I've got another Pi on which PyQt4 is found, so I'm not sure what I've done wrong on this one. Can anyone tell me what I can do to get Python to find the PyQt4 modules? | 0 | python,python-3.x,import,pyqt4 | 2015-08-24T02:27:00.000 | 0 | 32,173,695 | Most likely you installed PyQt4 and pyqt4-dev-tools for Python 2.x, but not for Python 3.x.
Check if PyQt4 is in your site-packages directory for Python 3.x. For me this is under /usr/lib/python3.4/site-packages/PyQt4.
If it's not there, you need to grab the correct Python 3 version of the packages. What distro are you using? | 0 | 8,273 | true | 0 | 1 | ImportError: No module named PyQt4 on my Raspberry Pi | 32,173,862 |
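A quick, hedged way to check which interpreter actually has a module is to ask each interpreter directly (run this with both `python` and `python3`); PyQt4 here is just the example name:

```python
import importlib.util

def has_module(name):
    # True if `name` is importable by the interpreter running this
    # code, without actually importing (and initializing) it.
    return importlib.util.find_spec(name) is not None

if __name__ == "__main__":
    print("PyQt4 importable:", has_module("PyQt4"))
```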
1 | 2 | 0 | 1 | 1 | 0 | 0.099668 | 0 | Let say I'm creating an issue in Jira and write the summary and the description. Is it possible to call a python script after these are written that sets the value for another field, depending on the values of the summary and the description?
I know how to create an issue and change fields from a python script using the jira-python module. But I have not found a solution for using a python script while editing/creating the issue manually in Jira. Does anyone have an idea of how I can manage that? | 0 | python,jira | 2015-08-26T15:07:00.000 | 0 | 32,230,294 | Take a look at JIRA webhooks calling a small python based web server? | 0 | 1,311 | false | 1 | 1 | Call python script from Jira while creating an issue | 32,234,002
1 | 2 | 0 | 1 | 1 | 1 | 0.099668 | 0 | I have been writing unit tests for over a year now, and have always used patch.object for pretty much everything (modules, classes, etc).
My coworker says that patch.object should never be used to patch an object in a module (i.e. patch.object(socket, 'socket'), instead you should always use patch('socket.socket').
I much prefer the patch.object method, as it allows me to import modules and is more pythonic in my opinion. Is my coworker right?
Note: I have looked through the patch documentation and can't find any warnings on this subject. Isn't everything an object in python? | 0 | python,mocking,patch | 2015-08-27T17:57:00.000 | 0 | 32,256,361 | There is no such requirement, and yes, everything is an object in Python.
It is nothing more than a style choice; do you import the module yourself or does patch take care of this for you? Because that's the only difference between the two approaches; either patch() imports the module, or you do.
For the code-under-test, I prefer mock.patch() to take care of this, as this ensures that the import takes place as the test runs. This ensures I get a test error state (test failure), rather than problems while loading the test. All other modules are fair game. | 0 | 315 | false | 0 | 1 | python mocking: mock.patch.object gotchas | 32,256,435 |
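The two styles this answer contrasts can be put side by side in a minimal sketch, reusing the socket.socket example from the question (unittest.mock has the same API as the standalone mock package):

```python
import socket
from unittest import mock  # same API as the standalone mock package

def open_conn():
    # Code under test: constructs a socket via the module attribute.
    return socket.socket()

def probe_with_patch_object():
    # Style 1: import the module yourself, patch an attribute on it.
    with mock.patch.object(socket, "socket") as fake:
        open_conn()
        return fake.called

def probe_with_patch_string():
    # Style 2: mock.patch() imports "socket" for you from the string.
    with mock.patch("socket.socket") as fake:
        open_conn()
        return fake.called
```

Both probes report the mock was called, demonstrating the point above: the only difference between the two forms is who performs the import.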
1 | 1 | 0 | 0 | 1 | 1 | 1.2 | 0 | I don't know how many duplicates of this are out there but none of those I looked at solved my problem.
To practice writing and installing custom modules I've written a simple factorial module. I have made a factorial folder in my site-packages folder containing factorial.py and an empty __init__.py file.
But typing import factorial does not work. How can I solve this? I also tried pip install factorial but that didn't work either.
Do I have to save my code intended to use factorial in the same folder inside site-packages or can I save it wherever I want?
Greetings
holofox
EDIT: I solved it. Everything was correct as I did it. Had some problems at importing and using it properly in my code... | 0 | python,macos,module,installation | 2015-08-29T10:32:00.000 | 0 | 32,285,041 | I think there are several different things to take into consideration here.
When you're importing a module, (doing import factorial), Python will look in the defined PATH and try to find the module you're trying to import. In the simplest case, if your module is in the same folder where your script is trying to import it, it will find it. If it's somewhere else, you will have to specify the path.
Now, site-packages is where Python keeps the installed libraries. So for example when you do pip install x, Python will put the module x in your site-packages destination and when you try to import it, it will look for it there. In order to manage site-packages better, try to read about virtualenv.
If you want your module to go there, first you need to create a package that you can install. For that look at distutils or all the different alternatives for packaging that involve some type of building process based on a setup file.
I don't want to go into details in any of these points because all of them have been covered before. Just wanted to give you a general idea of where to look for. | 0 | 512 | true | 0 | 1 | Install a custom Python 2.7.10 module on Mac | 32,285,525 |
1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | I am trying to embed a A.html file inside B.html(report) file as a hyperlink.
Just to be clear, both html files are offline and available only on my local machine. and at last the report html file will be sent over email.
The email recipient will not be having access to my local machine. So if they click on hyperlink in report.html, they will get "404 - File or directory not found".
Is there any way to embed A.html inside report.html, so that the email recipient can open A.html from report.html on their machine? | 0 | python,html,hyperlink | 2015-08-31T05:57:00.000 | 0 | 32,304,781 | You need to do one of the following:
Attach A.html as well as report.html,
Post A.html to a shared location such as Google drive and modify the link to point to it, or
Put the content of A.html into a hidden <div> with a show method. | 0 | 78 | false | 1 | 1 | Embed one html file inside other report html file as hyperlink in python | 32,305,050 |
1 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | Basically, I'm crawling text from a webpage with Python using BeautifulSoup, then saving it as HTML and sending it to my Kindle as a mail attachment. The problem is: Kindle supports Latin1 (ISO-8859-1) encoding, but the text I'm parsing includes characters that are not part of Latin1. So when I try to encode the text as Latin1, Python gives the following error because of the illegal characters:
UnicodeEncodeError: 'latin-1' codec can't encode character u'\u2019'
in position 17: ordinal not in range(256)
When I try to encode it as UTF-8, this time script runs perfectly but Kindle replaces some incompatible characters with gibberish. | 0 | python,encoding,kindle,latin1 | 2015-08-31T17:13:00.000 | 0 | 32,316,480 | Use <meta http-equiv="Content-Type" content="text/html; charset=utf-8"/>
I previously used <meta charset="UTF-8" />, which did not seem to work. | 0 | 171 | false | 1 | 1 | Text Encoding for Kindle with Python | 64,088,459 |
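A hedged alternative, not from the answer above: if the device really must receive Latin-1, Python's error handlers can substitute the unmappable characters with HTML character references instead of raising UnicodeEncodeError:

```python
def to_latin1_html(text):
    # u'\u2019' (right single quote) is not in Latin-1; rather than
    # crash, emit &#8217;-style references, which HTML renderers show
    # correctly even in a Latin-1 encoded document.
    return text.encode("latin-1", errors="xmlcharrefreplace")
```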
1 | 2 | 0 | 1 | 0 | 0 | 0.099668 | 0 | I want to perform a DIFF over encoded content (gzip mainly). Is there any way?
Right now I am decoding the content and performing the diff, which adds a lot of time overhead.
I am using the python zlib library for decoding and libdiff for taking the diff. | 0 | python,encoding,gzip,diff | 2015-09-01T11:23:00.000 | 0 | 32,330,346 | It is pointless to do a diff of a compressed file if there are any differences, since the entire compressed file will be different after the first difference in the uncompressed data. If there are a small set of differences in the uncompressed data, then the only way to find those is to uncompress the data. | 0 | 57 | false | 0 | 1 | How to perform diff over encoded content | 32,343,370
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I have included Location header in My Virtual Host file.
AuthType Basic
AuthName "Restricted"
AuthUserFile /etc/httpd/.htpasswd
require valid-user
Also created user to access the domain. but the user i have created using htpasswd is not allow other user to make any activity in CKAN Instance.
anyone Have an idea..Please let me know | 0 | python,apache,tomcat6,.htpasswd,ckan | 2015-09-02T06:44:00.000 | 0 | 32,346,261 | You are using nginx, arent you?
So you can set up the authentication with nginx by just adding two lines to one file and creating a password file.
In /etc/nginx/sites-available/ckan add following lines:
auth_basic "Restricted";
auth_basic_user_file /filedestination;
Then create a file at your filedestination with the following content:
USERNAME:PASSWORD
The password must be in md5.
Have fun with ckan! | 0 | 75 | false | 1 | 1 | How to use htpasswd For CKAN Instance | 32,395,525 |
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I am installing mediaproxy on my server debian. Please review the error pasted below. I have also tried installing the dependencies but still this error occurs. Need help on this.
root@server:/usr/local/src/mediaproxy-2.5.2# ./setup.py build
running build
running build_py
running build_ext
building 'mediaproxy.interfaces.system._conntrack' extension
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -fPIC -DMODULE_VERSION=2.5.2 -I/usr/include/python2.7 -c mediaproxy/interfaces/system/_conntrack.c -o build/temp.linux-x86_64-2.7/mediaproxy/interfaces/system/_conntrack.o
mediaproxy/interfaces/system/_conntrack.c:12:29: fatal error: libiptc/libiptc.h: No such file or directory
#include <libiptc/libiptc.h>
^
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
Thanks. Faisal | 0 | python,gcc,proxy,compilation,media | 2015-09-02T14:18:00.000 | 1 | 32,355,681 | Could be a dependency issue. Give this a shot:
sudo apt-get install build-essential autoconf libtool pkg-config python-opengl python-imaging python-pyrex python-pyside.qtopengl idle-python2.7 qt4-dev-tools qt4-designer libqtgui4 libqtcore4 libqt4-xml libqt4-test libqt4-script libqt4-network libqt4-dbus python-qt4 python-qt4-gl libgle3 python-dev | 0 | 118 | false | 0 | 1 | Error during mediaproxy installation | 32,355,833 |
1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | I'm working on detecting license plates with openCV, python and a raspberry pi. Most of it is already covered. What I want is to detect the ROI (Region of interest) of the plate, track it on a few frames and add those together to get a more clear and crisp image of the plate.
I want to get a better image of the plate by taking the information from several frames. I detect the plate, and have a collection of plates from several frames, as many as I wish and as many as the car is moving by the camera. How can I take all those and get a better version? | 0 | python,opencv,raspberry-pi | 2015-09-02T21:47:00.000 | 0 | 32,363,678 | You need to ensure that your frame rate is fast enough to get a decent still of the moving car. When filming, each frame will most likely be blurry, and our brain pieces together the number plate on playback. Of course a blurry frame is no good for letter recognition, so is something you'll need to deal with on the hardware side, rather than software side.
Remember the old saying: Garbage in; Garbage out. | 1 | 330 | false | 0 | 1 | openCV track object in video and obtain a better image from multiple frames | 32,363,759 |
1 | 1 | 0 | 0 | 0 | 0 | 1.2 | 0 | I'm trying to be sure I understand some basics about programming for different ARM architectures (e.g. ARMv5 vs ARMv7).
I have a python program that was ported to the newer Raspberry Pi B (Cortex-A7). What would it take to also have it run on an ARMv6 or ARMv5 architecture? The program does simple waveform processing and serial communication with no need for a GPU.
My understanding is that I would have to recompile the program for each of the architectures to account for the different instruction sets. And I would also need to run the same version of Linux (in this case Wheezy), but is there more I have to consider here?
Is there a possibility that if it compiles on ARMv7 it won't on ARMv6 or ARMv5?
Thanks | 0 | python,linux,arm,raspberry-pi | 2015-09-03T13:10:00.000 | 0 | 32,376,618 | The nice thing about python is that you rarely need to worry about the
underlying architecture. Python is interpreted, so the interpreter does
all the hard work of handling 32 bit, 64 bit, little-endian, big-endian,
soft or hard floating point etc.
Also, you don't need to compile your python, as the interpreter will
also compile your source if you provide both the .py and the .pyc or .pyo file
and the latter does not match what is needed. Compiling python is
not the same as compiling C, for example, as python targets a virtual
machine, not real hardware. The resulting .pyc or .pyo files are
however tied to the particular version of python.
Generally, source files are usually provided, and if there is no .pyc or .pyo for them,
then the first time python is run it will create them (if it has
file permissions). A second run will then use the compiled versions,
if the source has not changed. | 0 | 1,564 | true | 0 | 1 | Programming on different ARM architectures | 32,380,390 |
2 | 3 | 0 | 0 | 0 | 0 | 1.2 | 0 | I use eclipse to write python codes using pydev. So far I have been using dropbox to synchronize my workspace.
However, this is far from ideal. I would like to use github (or another SCM platform) to upload my code so I can work with it from different places.
However, I have found many tutorials kind of daunting... Maybe because they are ready for projects shared between many programers
Would anyone please share with me their experience on how to do this? Or any basic tutorial to do this effectively?
Thanks | 0 | python,eclipse,version-control,synchronization,pydev | 2015-09-04T21:21:00.000 | 1 | 32,406,765 | I use mercurial. I picked it because it seemed easier. But it is only easiER.
There is mercurial eclipse plugin.
Save a copy of your workspace and maybe your eclipse folder too before daring it :) | 0 | 86 | true | 1 | 1 | Using SCM to synchronize PyDev eclipse projects between different computer | 32,408,606 |
2 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | I use eclipse to write python codes using pydev. So far I have been using dropbox to synchronize my workspace.
However, this is far from ideal. I would like to use github (or another SCM platform) to upload my code so I can work with it from different places.
However, I have found many tutorials kind of daunting... Maybe because they are ready for projects shared between many programers
Would anyone please share with me their experience on how to do this? Or any basic tutorial to do this effectively?
Thanks | 0 | python,eclipse,version-control,synchronization,pydev | 2015-09-04T21:21:00.000 | 1 | 32,406,765 | I use bitbucket coupled with mercurial. That is, my repository is on bitbucket and I pull and push to it from mercurial within eclipse.
For my backup I have an independent Carbonite process backing up all hard disk files. But I imagine there is a clever free programmatic way to do so, if one knew how to write the appropriate scripts.
Glad the first suggestion was helpful. You are wise to bite the bullet and get this in place now. ;) | 0 | 86 | false | 1 | 1 | Using SCM to synchronize PyDev eclipse projects between different computer | 32,466,408
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | How do I go about getting the process ID of a process blocking a certain COM port on Windows 7 and/or later?
I would like to get the PID programmatically. If possible using Python or C# but the language is not really important, I just want to understand the procedure. | 0 | c#,python,windows,serial-port | 2015-09-06T00:53:00.000 | 1 | 32,419,015 | This question has been asked numerous times on SO and many other forums for the last 10 years or so. The generally accepted answer is to use sysinternals to find the process using the particular file handle. Remember, a serial port is really just a file as far as the win32 api is concerned.
So, two answers for you:
Use sysinternals to find the offending application. I don't think this approach will work via python but you might hack something with .NET.
Use the NtQuerySystemInformation in a getHandles function. Take a look at the structures and figure out which fields are useful for identifying the offending process.
os.system("taskkill blah blah blah") against known serial port using apps. More on this idea at the end.
The 2nd idea sounds fun, however I just don't think the juice is worth the squeeze in this case. A relatively small number of processes actually use serial ports these days and if you are working in a specific problem domain, you are well aware of what the applications are called.
I would just run taskkill (via os.system) against any applications that I know 1) can be safely closed and 2) might actually have a port open. With this approach you'll save the headache of enumerating file handles and get back to focusing on what your application should really be doing. | 0 | 367 | false | 0 | 1 | Get PID of process blocking a COM PORT | 32,430,151 |
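For the taskkill route, the command can be built up front and only executed on Windows; the image names are whatever serial-port applications exist in your problem domain, and `putty.exe` below is just an example:

```python
import subprocess
import sys

def taskkill_cmd(image_name, force=True):
    # /IM selects processes by image (executable) name and /F forces
    # termination -- standard taskkill flags.
    cmd = ["taskkill", "/IM", image_name]
    if force:
        cmd.append("/F")
    return cmd

def kill_known_port_users(image_names):
    # No-op off Windows, so the sketch stays portable and testable.
    if sys.platform != "win32":
        return []
    return [subprocess.call(taskkill_cmd(name)) for name in image_names]
```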
1 | 1 | 0 | 1 | 1 | 0 | 0.197375 | 0 | I implemented my first aiohttp based RESTlike service, which works quite fine as a toy example. Now I want to run it using gunicorn. All examples I found, specify some prepared application in some module, which is then hosted by gunicorn. This requires me to setup the application at import time, which I don't like. I would like to specify some config file (development.ini, production.ini) as I'm used from Pyramid and setup the application based on that ini file.
This is common to more or less all python web frameworks, but I don't get how to do it with aiohttp + gunicorn. What is the smartest way to switch between development and production settings using those tools? | 0 | python-3.x,gunicorn,aiohttp | 2015-09-06T12:23:00.000 | 0 | 32,423,519 | At least for now aiohttp is a library without reading configuration from .ini or .yaml file.
But you can write code for reading config and setting up aiohttp server by hands easy. | 0 | 273 | false | 1 | 1 | Configuring an aiohttp app hosted by gunicorn | 32,440,342 |
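Since aiohttp will not read the file for you, a small hand-rolled loader is enough. A sketch using only the stdlib configparser; the [server] section and its key names are assumptions for illustration, not anything aiohttp defines:

```python
import configparser

def load_settings(path):
    """Read an .ini file and return the settings the app factory needs."""
    parser = configparser.ConfigParser()
    with open(path) as f:
        parser.read_file(f)
    # The [server] section and keys below are illustrative assumptions.
    return {
        "host": parser.get("server", "host", fallback="127.0.0.1"),
        "port": parser.getint("server", "port", fallback=8080),
        "debug": parser.getboolean("server", "debug", fallback=False),
    }
```

The entry point can then pick development.ini or production.ini from a CLI argument or an environment variable and hand the resulting dict to the aiohttp application factory.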
1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | I have something like 1500 mail messages in eml format and I want to parse them and get the e-mail addresses that caused the error, plus the error message (or code).
I would like to try to do it in python.
Does anyone have an idea how to do that, other than parsing line by line and searching for the failure line and error code (or know of software that does this)?
I see nothing about errors in the mail headers, which is sad. | 0 | python,email,parsing | 2015-09-07T12:20:00.000 | 0 | 32,438,646 | So you have 1500 .eml files and want to identify mails from mailer-daemons and which address caused the mailer-daemon message?
Just iterate over the files, then check the From: line to see if it is a mailer-daemon message, and then extract the address that caused the error from the text.
There is no other way than iterating over them line by line. | 0 | 165 | false | 0 | 1 | Parse mailer daemons, failure notices | 32,442,153 |
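A sketch of that iterate-and-check approach with the stdlib email module. The mailer-daemon detection and the address regex are heuristics; real bounce formats vary between mail servers, so treat this as a starting point:

```python
import email
import re

# Heuristic: failed addresses are often quoted in angle brackets.
FAILED_RE = re.compile(r"<([^<>\s]+@[^<>\s]+)>")

def parse_bounce(raw_message):
    """Return (failed_address, diagnostic_line) or None if not a bounce."""
    msg = email.message_from_string(raw_message)
    sender = msg.get("From", "")
    if "mailer-daemon" not in sender.lower():
        return None
    body = msg.get_payload()
    if isinstance(body, list):       # multipart bounce: look at the first part
        body = body[0].get_payload()
    for line in body.splitlines():
        m = FAILED_RE.search(line)
        if m:
            return m.group(1), line.strip()
    return None
```

Iterating over the 1500 files is then just a loop calling parse_bounce on each file's contents and collecting the non-None results.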
1 | 3 | 0 | 4 | 15 | 1 | 0.26052 | 0 | I have a setup.cfg file that specifies default parameters to use for pytest. While this is great for running tests on my whole package, I'd like to be able to ignore the setup.cfg options when running tests on individual modules. Is there a way to easily do this? | 0 | python,pytest | 2015-09-07T18:38:00.000 | 0 | 32,444,402 | You can also create an empty pytest.ini file in the current directory.
The config file discovery function will try to find a file matching this name first before looking for a setup.cfg. | 0 | 4,936 | false | 0 | 1 | Is there an option for pytest to ignore the setup.cfg file? | 49,825,909 |
1 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | I have installed java and am using it as an internal command with variable name: PATH and variable value: C:\Program Files\Java\jdk1.8.0_60\bin . Now I want to add python as an internal command. What variable name do I give so that it works? I tried with Name: PTH and Value: C:\Python34; it's not working. | 0 | java,python-2.7,python-3.4 | 2015-09-10T08:56:00.000 | 1 | 32,497,329 | You can create a new variable name, for example MY_PYTHON=C:\Python34 . Then you need to add that variable into the system variable PATH, such as:
PATH = ...;%MY_PYTHON%
PATH is a Windows system default variable. | 0 | 29 | false | 1 | 1 | Python and Java in internal command | 41,688,142 |
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I am using Ardupilot in the plane and a Raspberry Pi running dronekit-python at the ground end across 3DR radio - with the Pi not controlling anything, just providing feedback to the pilot when they breach certain things like the boundary of a rectangular geofence (with increasing alarms the further they get out). So I am downloading only a few variables as frequently as I can (or as new data is available). Can anyone guide me on how to ask mavproxy not to automatically start downloading the whole tlog from the time it is started as I don't need it (other than for occasional debugging - but I can write my own specific log as needed)?
Edit: On digging further it appears to be invoked from lines 985 and 1031 of the mavproxy.py code (call functions set log directories, and write telemetry logs). Will comment them out and see what happens.
Further Edit: That works, once I worked out which version of Mavproxy was being loaded.
Gibbo | 0 | dronekit-python | 2015-09-10T14:02:00.000 | 0 | 32,504,020 | And from DKPY2 (just released) there is no MAVProxy dependency, so this should no longer be an issue. | 0 | 175 | false | 0 | 1 | Dronekit-Python - Stop Mavproxy Downloading Logs | 33,382,039 |
1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | My flow is such that I already have the access token available in my backend server. So basically I was using the REST APIs until now for getting all user messages. However, I would like to use the Gmail API batch requests to improve performance. I see that it is non-trivial to use python requests to do so. The gmail api client for python, on the other hand, does not seem to have an option where I can use the access token to get the results. Rather, I need to use the authorization code, which is unavailable to me. Can someone help me solve this?
Thanks,
Azeem | 0 | python | 2015-09-11T01:14:00.000 | 0 | 32,514,000 | You need to activate the Gmail API in your project on Google Developer Console to get the API key which will have separate billing cost involved. | 0 | 100 | false | 0 | 1 | Gmail Python API: Build service using access token | 32,514,035 |
1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | I have an AES256 encrypted message as a string. The message consists of the IV (16 bytes HEX numbers so total 32 characters in the string) and 64 bytes HEX payload (128 characters). Therefore it's a single 160 character string consisting of HEX numbers 00, e0, f2 etc. Why is it a string? It's received from another device as a string.
Now I break up the encrypted message to the IV and payload using the code 'iv = encrypted[:16]'. The IV is only zeroes (for testing purposes). If I use iv = bytes.fromhex(iv) I can print the iv as b'\x00\x00\x00... which is what I expect.
But when I do the same for the payload message starting with 9ed57a..., I would expect to get b'\x9e\xd5\x7a... etc, instead I get b'\x9e\xd5z_\xe3.... What do those extra characters (z_) mean and why does the next byte seem to be totally different than what I have in my original string?
The print would not be a problem of course, but when I use AES.decrypt I get garbage, even when I'm sure that I have the same password in both the sending and the receiving end of my setup. If my code is totally wrong, I would very much appreciate some help to correctly implement what I'm trying to do here.
Edit:
I have been trying something else now, I'm trying to turn the string of HEXes into an array of bytes using a loop. It seems to work right until passing it to the decrypting function. I get the message "ValueError: Input strings must be a multiple of 16 in length" which I don't understand since my input string is exactly 64 characters long (when printing len(msg)). The message is all weird characters, but since it's parsed from standard hexadecimal values ranging from 0x00 to 0xff, why doesn't it work? | 0 | python,character-encoding,hex,python-3.4 | 2015-09-11T17:57:00.000 | 0 | 32,529,454 | Python doesn't print nicely printable characters as escape sequences, but rather as their ASCII counterpart. When you look into an ASCII table, you will see that \x7a or 0x7a corresponds to a lowercase z.
Not all bytes can be printed this way. For example all byte values below 0x20 are unprintable control bytes. | 0 | 607 | false | 0 | 1 | Converting string of Hex numbers to hex for pycrypto (Python 3.4) | 32,529,584 |
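The z_ in the output is not extra data, then; it is just how repr() displays bytes that happen to be printable, and the underlying values are exactly what the hex string says:

```python
payload = bytes.fromhex("9ed57a5f")      # a slice of the payload above

print(repr(payload))                     # b'\x9e\xd5z_'
assert payload == b"\x9e\xd5\x7a\x5f"    # same bytes: 0x7a is 'z', 0x5f is '_'
assert payload[2] == 0x7A                # the value is still what was expected
```

So the conversion itself is fine; the decryption garbage most likely has another cause, such as a key, IV length, or mode mismatch between the two ends.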
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I have Ardupilot on plane, using 3DR Radio back to Raspberry Pi on the ground doing some advanced geo and attitude based maths, and providing audio feedback to pilot (rather than looking to screen).
I am using Dronekit-python, which in turn uses Mavproxy and Mavlink. What I am finding is that I am only getting new attitude data to the Pi at about 3hz - and I am not sure where the bottleneck is:
3DR is running at 57.6 kbaud and all happy
I have turned off the automatic push of logs from Ardupilot down to Pi (part of Mavproxy)
The Pi can ask for Attitude data (roll, yaw etc.) through the DroneKit Python API as often as it likes, but only gets new data (ie, a change in value) about every 1/3 second.
I am not deep enough inside the underlying architecture to understand what the bottleneck may be -- can anyone help? Is it likely a round trip message response time from base to plan and back (others seem to get around 8hz from Mavlink from what I have read)? Or latency across the combination of Mavproxy, Mavlink and Drone Kit? Or is there some setting inside Ardupilot or Telemetry that copuld be driving this.
I am aware this isn't necessarily a DroneKit issue, but not really sure where it goes as it spans quite a few components. | 0 | dronekit-python | 2015-09-14T01:39:00.000 | 0 | 32,556,233 | Requesting individual packets should work, but that was never meant to be requested lots of times per second.
In order to get a certain packet many times per second, set up streams. A stream will trigger a certain number of times per second, and will then send whichever packet is associated with it, automatically. The ATTITUDE message is in the group called EXTRA1.
Let's suppose you want to receive 10 ATTITUDE messages per second. The relevant parameter is called SR0_EXTRA1. This defines the number of Attitude packets sent per second. The default is 4. Try increasing that parameter to 10. | 0 | 1,082 | false | 0 | 1 | ArduPilot, Dronekit-Python, Mavproxy and Mavlink - Hunt for the Bottleneck | 32,556,732 |
1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | I know there are alternatives to call the functions without the classmethod decorator. But what is the advantage, apart from calling it with the class argument? | 0 | python | 2015-09-14T05:31:00.000 | 0 | 32,557,853 | Using the classmethod decorator, it acts like an alternate constructor.
Although it'll still need to supply the arguments needed by your __init__, you can do other work in it. For simplicity's sake: it's like another __init__ method for your class.
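A small self-contained example of that alternate-constructor idea:

```python
class Temperature:
    def __init__(self, celsius):
        self.celsius = celsius

    @classmethod
    def from_fahrenheit(cls, fahrenheit):
        # Alternate constructor: converts the input, then delegates to
        # __init__ via cls(), so subclasses get the right type back.
        return cls((fahrenheit - 32) * 5.0 / 9.0)

t = Temperature.from_fahrenheit(212)
print(t.celsius)   # 100.0
```

Because the first argument is the class itself rather than an instance, the method can be called before any object exists, which is exactly what a constructor needs.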
1 | 1 | 0 | -1 | 0 | 0 | -0.197375 | 0 | Trying to find the python code for XGen: Export patches for Batch render options in Maya. I couldn't find anything via Maya's Script Editor activity (also tried enabling echo all commands) but nothing shows up when I hit the button under Xgen window>File>Export Patches for Batch Render.
Thanks!! | 0 | python,batch-processing,maya | 2015-09-14T18:17:00.000 | 0 | 32,571,328 | /your/maya_install_path/plug-ins/xgen/scripts/xgenm/ui/xgDescriptionEditor.py take a look at this file, they have a function called exportPatches line num around 1800+. You can see there they exporting alambic and xgmPatchInfo. Basically you can mimic same function in your script. | 0 | 1,232 | false | 0 | 1 | Python code for XGen: Export Patches for Batch Render | 32,574,618 |
1 | 2 | 0 | 1 | 2 | 0 | 0.099668 | 0 | I have a project being built with Pybuilder. I cloned it onto a new computer, and when I ran pyb, my unit tests complained that there was no module named xmlrunner. So after I did pip install xmlrunner, I get a build error from Pybuilder that:
'unicode' object has no attribute 'write'.
If I remove my unit tests from the unittest search path, the build completes successfully. When I run the unit tests directly, they complete successfully. So I'm thinking that somehow XMLRunner is failing. Pip installed XMLRunner version 1.7.7. Thanks in advance for your help. | 0 | python,unit-testing,unicode,python-unittest,pybuilder | 2015-09-15T15:35:00.000 | 0 | 32,590,141 | I seemed to have got this working by doing the following:
First I got the same error as you:
BUILD FAILED - 'unicode' object has no attribute 'write'
Then I uninstalled xmlrunner & unittest-xml-reporting using pip
Then I used pyb install_dependencies which reinstalls unittest-xml-reporting
Then my unit tests start running again when I use pyb:
There were 1 error(s) and 0 failure(s) in unit tests
This is my current pip list output:
pip (7.1.2)
PyBuilder (0.11.1)
setuptools (18.2)
six (1.9.0)
tblib (1.1.0)
unittest-xml-reporting (1.12.0)
wheel (0.24.0)
If you are using virtualenv, you can also get this error when you have pybuilder installed outside of your virtualenv environment:
For example, your virtualenv does not have pybuilder installed, but you can still run pyb from command line. It is this pybuilder that needs to be removed as well (I am on OSX so it was the default python that came with it) | 0 | 839 | false | 0 | 1 | XMLRunner - "unicode object has no attribute 'write'" when building | 32,715,377 |
1 | 3 | 0 | 2 | 7 | 1 | 0.132549 | 0 | I have a CLI application that requires sympy. The speed of the CLI application matters - it's used a lot in a user feedback loop.
However, simply doing import sympy takes a full second. This gets incredibly annoying in a tight feedback loop. Is there any way to 'preload' or optimize a module when a script is run again without a change to the module? | 0 | python,performance,module | 2015-09-15T19:22:00.000 | 0 | 32,593,997 | Obviously sympy does a lot when being imported. It could be initialization of internal data structures or similar. You could call this a flaw in the design of the sympy library.
Your only choice in this case would be to avoid redoing this initialization.
I assume that you find this behavior annoying because you intend to do it often. I propose to avoid doing it often. A way to achieve this could be to create a server which is started just once, imports sympy upon its startup, and then offers a service (via interprocess communication) which allows you to do whatever you want to do with sympy.
If this could be an option for you, I could elaborate on how to do this. | 0 | 4,855 | false | 0 | 1 | Is there any way to speed up an import? | 32,594,171 |
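A sketch of that idea using only the stdlib: a long-running process pays the import cost once, then answers cheap requests over multiprocessing.connection. The .upper() call is just a stand-in for the real sympy work, and the single-client loop keeps the sketch short:

```python
import threading
from multiprocessing.connection import Client, Listener

def run_server(listener):
    """Serve one client session; the heavy import is paid once, up front."""
    # import sympy   # in the real tool, the one-second import happens here
    with listener.accept() as conn:
        try:
            while True:
                expr = conn.recv()
                # Stand-in for the real sympy computation on `expr`:
                conn.send(expr.upper())
        except EOFError:
            pass  # client disconnected

def ask(address, authkey, expr):
    """One cheap round-trip from the CLI tool to the already-warm server."""
    with Client(address, authkey=authkey) as conn:
        conn.send(expr)
        return conn.recv()
```

In practice the server would run as a background daemon started once, and each CLI invocation would just connect, ask, and exit, skipping the import entirely.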
1 | 2 | 0 | 0 | 0 | 0 | 1.2 | 1 | EDIT:
I want to telnet into my web server on localhost, and request my php file from command line:
I have:
1) cd'd into the directory I want to serve, namely "/www" (hello.php is here)
2) run a server at directory www: python -m SimpleHTTPServer
3) telnet localhost 80
but "connection is refused". what am I doing wrong? | 0 | php,python,telnet | 2015-09-16T04:14:00.000 | 0 | 32,599,677 | You're probably trying to connect to a wrong port. Check with netstat -lntp which port is your http server listening on. The process will be listed as python/pid_number. | 0 | 2,140 | true | 0 | 1 | Telnet connection on localhost Refused | 32,620,211 |
1 | 1 | 0 | 2 | 1 | 1 | 1.2 | 0 | I'm using python 2.7 and tried installing the pyCrypto module using pip (pip install pycrypto) which downloaded and installed the 2.6 version, as it is needed for using twisted.
However, whenever I try to use it, I get an ImportError saying that the module Crypto doesn't exist - but I can import crypto normally.
I already tried uninstalling and installing it again, but still didn't work.
Is there any bug in the package downloaded using pip or is it anything else I'm doing wrong?
Thank you. | 0 | python-2.7,pip,pycrypto | 2015-09-16T18:34:00.000 | 0 | 32,616,148 | If anyone is having this same problem, the reason was that I had mistakenlly installed the package crypto before installing pycrypto. Once I removed both packages and reinstalled pycrypto everything worked.
I believe that it might be related to Windows treating crypto and Crypto folders as the same. | 0 | 1,080 | true | 0 | 1 | pyCrypto importing only "crypto", not "Crypto" (not found) | 32,632,879 |
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | Am new to Python and using FTPLib for some reason.
My aim: I have a server where .txt files are stored by different clients very frequently. With the nlst() function I can get the files present on the FTP server. But it returns all the files. Since the server has a huge number of files, the response time is slow.
Is there any way to get the first twenty elements from the FTP using some function and then next twenty? This way I could improve the response time from FTP server considerably.
Regards | 0 | python,ftp,ftplib | 2015-09-16T19:46:00.000 | 0 | 32,617,391 | No, there's no standard way to retrieve a directory listing in parts in the FTP protocol.
Some FTP servers do support wildcards in the listing commands (NLST and alike). So you could first get all files starting with a, then with b, etc. But you have to test this specifically with your server, as it is non-standard behavior.
1 | 2 | 0 | 0 | 1 | 0 | 0 | 0 | I have a script done which uses the twitter lib to send a twitter DM.
I've tried a number of ways to include codes which render on iOS 8+ as emoji without luck. Google has been unkind.
Examples:
msg += u'\xF0\x9F\x9A\x80' gives me no rocket. I get a d with a line through the top.
msg += u'U+1F684' gives me the code not a train
As I can include emoji when I send a twitter DM to a user, the server clearly handles meta data pertaining to emoji. As emoji is a UTF-8 character set rather than a font, I'm surprised that in the first example I'm getting representation in the font the twitter DM arrives in.
How can I send such characters from python 2? | 0 | python,twitter,emoji | 2015-09-16T20:53:00.000 | 0 | 32,618,466 | OK got it.
msg += u'\U0001f468' gives me an old (white) man
msg += u'\U0001f468\U0001f3ff' gives me an old (afro-caribbean) man. | 0 | 1,192 | false | 0 | 1 | How to send emoji with Twitter Python lib | 32,618,578 |
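The root of the original problem: u'\xF0\x9F\x9A\x80' is four separate code points (the UTF-8 bytes of the rocket), not the rocket itself. A \U escape names the code point directly; shown here in Python 3 syntax, and the same escape works in a Python 2 u'' literal:

```python
rocket = "\U0001F680"            # one code point: U+1F680 ROCKET

# The failed attempt: four code points U+00F0 U+009F U+009A U+0080,
# which are the UTF-8 *bytes* of the rocket, not the rocket itself.
wrong = "\xF0\x9F\x9A\x80"

print(rocket.encode("utf-8"))    # b'\xf0\x9f\x9a\x80'
assert rocket != wrong
assert len(wrong) == 4
```

Sending rocket (UTF-8 encoded where the library expects bytes) is what makes the emoji arrive as one character on the receiving client.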
1 | 2 | 0 | 1 | 2 | 0 | 0.099668 | 0 | I have a PHP script that is supposed to execute a python script as user "apache" but is returning the error:
/transform/anaconda/bin/python: can't open file '/transform/python_code/edit_doc_with_new_demo_info.py': [Errno 13] Permission denied
Permissions for edit_doc_with_new_demo_info.py are ---xrwx--x. 1 apache PosixUsers 4077 Sep 18 12:14 edit_doc_with_new_demo_info.py. The line that is calling this python script is:
shell_exec('/transform/anaconda/bin/python /transform/python_code/edit_doc_with_new_demo_info.py ' . escapeshellarg($json_python_data) .' >> /transform/edit_subject_python.log 2>&1')
If apache is the owner of the python file and the owner has execute permission how can it be unable to open the file? | 0 | php,python,apache,permission-denied | 2015-09-18T19:58:00.000 | 1 | 32,660,088 | You need read permission to run the python script: the interpreter has to open and read the source file, so execute permission alone (---x...) is not enough. | 0 | 4,074 | false | 0 | 1 | Python can't open file | 32,660,141
1 | 3 | 0 | -1 | 1 | 1 | -0.066568 | 0 | I have to analyse some python file. For this I should analyse all modules (i.e. get the source code of these modules if they are written in Python) imported in the given file.
How can I get the paths to the files of the imported python modules?
I tried to use sys.path, but it gives all the paths where the python interpreter may search for modules. | 0 | python,python-3.x | 2015-09-19T17:59:00.000 | 0 | 32,671,518 | Modules are found in /usr/lib/python3/dist-packages on unix-like systems
and in C:\Python34\Lib on windows | 0 | 55 | false | 0 | 1 | Getting paths to code of imported modules | 32,671,674
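Beyond those default locations, the stdlib can report the exact file for any given module; a short sketch (origin paths will of course differ per install):

```python
import importlib.util
import os

# Locate a module's source file without importing it:
spec = importlib.util.find_spec("json")
print(spec.origin)          # e.g. /usr/lib/python3.x/json/__init__.py

# For an already-imported module, __file__ holds the same information.
# (Built-in modules such as sys have no file on disk.)
import json
print(json.__file__)

assert os.path.isfile(json.__file__)
```

For the analysis task in the question, iterating over the imported names and calling find_spec on each gives the file paths to read.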
1 | 1 | 0 | 1 | 1 | 0 | 1.2 | 0 | I have a site, that performs some heavy calculations, using library for symbolic math.
Currently average calculation time is 5 seconds.
I know that I'm asking too broad a question, but nevertheless, what is the optimal configuration for this type of site? What server is best for this?
Currently, I'm using Apache with mod_wsgi, but I don't know how to correctly configure it.
On average the site receives 40 requests per second.
How many processes, threads, MaxClients etc. should I set?
Maybe, it is better to use nginx/uwsgi/gunicorn (I'm using python as programming language)?
Anyway, any info is highly appreciated. | 0 | python,nginx,apache2,server,uwsgi | 2015-09-20T17:10:00.000 | 0 | 32,682,065 | Andrew,
I believe that you can move some pieces of your deployment topology.
My suggestion is to use nginx for delivering HTTP content, and to expose your application using some web framework, e.g. tornadoweb (my preference, considering its async core; it is also better documented compared to twisted, even though twisted is a really great framework).
You can communicate between nginx and tornado by proxy. It is simple to configure.
You can replicate your service instance to distribute your calculation application across the same machine and other hosts. This can be easily configured via nginx upstreams.
If you need more performance, you can break your application into small modules and integrate them using async messaging. You can choose zeromq or rabbitmq, among other solutions.
Then, you can have different topologies, gradually applied during the evolution of your application.
1st Topology:
nginx -> tornadoweb
2nd Topology:
nginx with loadbalance (upstreams) -> tornadoweb replicated on [1..n] instances
3rd Topology:
[2nd topology] -> your app integrated by messaging (zeromq, amqp(rabbitmq), ...)
My favorite is the 3rd, eventually. But for the moment, you should start with the 1st and 2nd.
There are a lot of options. But these three may be sufficient for a simple organization of your app. | 0 | 229 | true | 1 | 1 | Best server configuration for site with heavy calculations | 32,682,202
1 | 2 | 0 | 8 | 3 | 0 | 1 | 0 | I am looking for a way to limit how a python file can be called. Basically, I only want it to be executable when I call it from a bash script; if run directly, either from a terminal or any other way, I do not want it to be able to run. I am not sure if there is a way to do this or not, but I figured I would give it a shot.
UNIX process architecture does not work this way. You cannot control the execution of a script by its parent process.
Instead, we should discuss why you want to do something like this; you are probably approaching it in the wrong way, and there are good options to address the actual underlying problem. | 0 | 54 | false | 0 | 1 | Limit a python file to only be run by a bash script | 32,698,394
1 | 1 | 0 | 2 | 0 | 1 | 0.379949 | 0 | I am in the process of evaluating whether Python is a suitable implementation choice for my program given the security requirements.
The input to my program is a set of encrypted (RSA) text files that describe some I/P that I want to keep secure. The encryption / decryption library and the private key are all accessed via SWIG wrappers to a C++ library. I envision that the Python code will call the library to decrypt the incoming source files.
Once decrypted, I will transform the I/P in some fashion and then write it out encrypted, once again using the SWIG wrapped C++ library for this function.
My program and the I/P will be distributed to customers, but the customers should not be able to examine the I/P. Only tools designated by the I/P author that have the private key should.
Can someone examine the data in its decrypted state as it flows through my program at run-time? Is there a way to protect my data in Python? Is a C++ implementation more secure than a Python one? | 0 | python,c++,security,encryption,reverse-engineering | 2015-09-21T18:17:00.000 | 0 | 32,702,029 | If your application contains the private key inside of it, then your data will never truly be safe from a motivated hacker (as they can step through the program to find it)...
Or they could run your app in a debugger, pause it after the files have been decoded in memory and then pull the data from memory. | 0 | 89 | false | 0 | 1 | Securing Data In A Python Program At Runtime | 32,702,204 |
1 | 3 | 0 | 0 | 0 | 1 | 0 | 0 | In one of my projects I'm using cgi.escape() to escape a set of titles that I get from a resource. These titles could be from Youtube or anywhere else, and may need to be escaped.
The issue I'm having is that if a title is already escaped from Youtube and I pass it into cgi.escape(), I end up getting double-escaped titles, which is messing up later parts of my project.
Is there a library that will escape strings but check if a piece is already escaped, and ignore it? | 0 | python,escaping,cgi | 2015-09-21T23:36:00.000 | 0 | 32,706,147 | If you know your input is already escaped, unescape it first. Then later escape it just before where it needs to be. | 0 | 492 | false | 0 | 1 | Escaping characters in Python, but ignoring already escaped characters | 32,706,237 |
1 | 2 | 0 | 0 | 3 | 1 | 0 | 0 | I have 5 different games written in python that run on a raspberry pi. Each game needs to pass data in and out to a controller using a serial connection. The games get called by some other code (written in nodeJS) that lets the user select any of the games.
I'm thinking I don't want to open and close a serial port every time I start and finish a game. Is there any way to make a serial object instance "global", open it once, and then access it from multiple game modules, all of which can open and close at will?
I see that if I make a module which assigns a Serial object to a variable (using PySerial) I can access that variable from any module that goes on to import this first module, but I can see using the id() function that they are actually different objects - different instances - when they are imported by the various games.
Any ideas about how to do this? | 0 | python,global | 2015-09-22T04:53:00.000 | 0 | 32,708,630 | Delegate the opening and management of the serial port to a separate daemon, and use a UNIX domain socket to transfer the file descriptor for the serial port to the client programs. | 0 | 328 | false | 1 | 1 | Python - global Serial obj instance accessible from multiple modules | 32,709,121 |
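A sketch of the descriptor-passing half of that design, using SCM_RIGHTS ancillary data on a UNIX-domain socket (POSIX-only). A pipe stands in for the serial-port descriptor so the sketch is self-contained:

```python
import array
import socket

def send_fd(sock, fd):
    """Send one file descriptor over a UNIX-domain socket."""
    fds = array.array("i", [fd])
    # One byte of real payload plus the descriptor as ancillary data.
    sock.sendmsg([b"F"], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, fds)])

def recv_fd(sock):
    """Receive one file descriptor sent with send_fd()."""
    fds = array.array("i")
    msg, ancdata, flags, addr = sock.recvmsg(1, socket.CMSG_SPACE(fds.itemsize))
    for level, ctype, data in ancdata:
        if level == socket.SOL_SOCKET and ctype == socket.SCM_RIGHTS:
            fds.frombytes(data[:fds.itemsize])
            return fds[0]
    raise RuntimeError("no file descriptor received")
```

The daemon would open the serial port once, and each game would connect to the daemon's socket and call recv_fd to get its own handle on the same open port.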
3 | 3 | 0 | 0 | 1 | 1 | 1.2 | 0 | I'm writing a password management program that encrypts the passwords and saves the hashes to a document. Should I import before defining the functions, import in the functions they are used, or import after defining the functions but before running the functions. I'm trying to make my code as neat as possible. I'm currently importing passlib.hash, sha256_crypt, os.path, time. Sorry if it's not clear I'm kind of new and trying to teach myself. Any advice helps. | 0 | python,function,python-import,code-organization | 2015-09-24T07:09:00.000 | 0 | 32,755,412 | It's common practice to put all imports at the top, mainly for readability: you shouldn't have to look around the whole code to find an import. Of course you have to import a symbol before you can use it.
Anyway, in Python it's not always wrong to import inside functions or classes; this is because of the way Python actually interprets the import. When you import a module you are actually running its code, which is, in most cases, just defining new symbols, but it could also trigger some side effects; thus it sometimes makes sense to import inside functions so the imported code executes only on the function call. | 0 | 308 | true | 0 | 1 | Python - Does it matter if i import modules before or after defining functions? Newb Ques | 32,755,617
3 | 3 | 0 | 0 | 1 | 1 | 0 | 0 | I'm writing a password management program that encrypts the passwords and saves the hashes to a document. Should I import before defining the functions, import in the functions they are used, or import after defining the functions but before running the functions. I'm trying to make my code as neat as possible. I'm currently importing passlib.hash, sha256_crypt, os.path, time. Sorry if it's not clear I'm kind of new and trying to teach myself. Any advice helps. | 0 | python,function,python-import,code-organization | 2015-09-24T07:09:00.000 | 0 | 32,755,412 | It's a good style to import in the very beginning of the code. So you have an overview and can avoid multiple imports. | 0 | 308 | false | 0 | 1 | Python - Does it matter if i import modules before or after defining functions? Newb Ques | 32,755,537 |
3 | 3 | 0 | 0 | 1 | 1 | 0 | 0 | I'm writing a password management program that encrypts the passwords and saves the hashes to a document. Should I import before defining the functions, import in the functions they are used, or import after defining the functions but before running the functions. I'm trying to make my code as neat as possible. I'm currently importing passlib.hash, sha256_crypt, os.path, time. Sorry if it's not clear I'm kind of new and trying to teach myself. Any advice helps. | 0 | python,function,python-import,code-organization | 2015-09-24T07:09:00.000 | 0 | 32,755,412 | Typically imports come first in any design pattern I have seen. Imports > large scope variables > functions. | 0 | 308 | false | 0 | 1 | Python - Does it matter if i import modules before or after defining functions? Newb Ques | 32,755,514 |
1 | 1 | 0 | 2 | 0 | 0 | 1.2 | 0 | I am currently transforming a perl / bash tool into a salt module and I am wondering how I should sync the non-python parts of this module to my minions.
I want to run salt agent-less and ideally the dependencies would be synced automatically alongside the module itself once it's called via salt-ssh.
But it seems that only python scripts get synced. Any thoughts for a nice and clean solution?
Copying the necessary files from the salt fileserver during module execution seems somehow wrong to me.. | 0 | python,salt-stack | 2015-09-24T13:29:00.000 | 1 | 32,762,675 | Only python extensions are supported, so your best bet is to do the following:
1) Deploy your non-Python components via a file.managed / file.recurse state.
2) Ensure your custom execution module has a __virtual__() function checking for the existence of the non-Python dependencies, and returning False if they are not present. This will keep the module from being loaded and used unless the deps are present.
3) Sync your custom modules using saltutil.sync_modules. This function will also re-invoke the loader to update the available execution modules on the minion, so if you already had your custom module sync'ed and later deployed the non-Python depenencies, saltutil.sync_modules would re-load the custom modules and, provided your __virtual__() function returned either True or the desired module name, your execution module would then be available for use. | 0 | 284 | true | 0 | 1 | How to sync a salt execution module with non-python dependencies | 32,769,491 |
1 | 1 | 0 | 1 | 3 | 1 | 1.2 | 0 | I'm expectedly getting a CypherExecutionException. I would like to catch it but I can't seem to find the import for it.
Where is it?
How do I find it myself next time? | 0 | python,neo4j,py2neo | 2015-09-24T14:12:00.000 | 0 | 32,763,625 | Depending on which version of py2neo you're using, and which Cypher endpoint - legacy or transactional - this may be one of the auto-generated errors built dynamically from the server response. Newer functionality (i.e. the transaction endpoint) no longer does this and instead holds hard-coded definitions for all exceptions for just this reason. This wasn't possible for the legacy endpoint when the full list of possible exceptions was undocumented.
You should however be able to catch py2neo.error.GraphError instead which is the base class from which these dynamic errors inherit. You can then study the attributes of that error for more specific checking. | 0 | 139 | true | 0 | 1 | import for py2neo.error.CypherExecutionException | 32,781,464 |
2 | 2 | 0 | 1 | 0 | 0 | 0.099668 | 0 | I have a potentially big list of image sequences from nuke. The format of the string can be:
/path/to/single_file.ext
/path/to/img_seq.###[.suffix].ext
/path/to/img_seq.%0id[.suffix].ext, i being an integer value, the values between [] being optional.
The question is: given this string, that can represent a sequence or a still image, check if at least one image on disk corresponds to that string in the fastest way possible.
There is already some code that checks if these files exist, but it's quite slow.
First it checks if the folder exists, if not, returns False
Then it checks if the file exists with os.path.isfile, if it does, it returns True.
Then it checks if no % or # is found in the path, and if not os.path.isfile, it returns False.
All this is quite fast.
But then, it uses some internal library which is in performance a bit faster than pyseq to try to find an image sequence, and does a bit more operations depending if start_frame=end_frame or not.
But it still takes a large amount of time to analyze whether something is an image sequence, especially on some sections of the network and for big image sequences.
For example, for a 2500 images sequence, the analysis takes between 1 and 3 seconds.
If I take a very naive approach, and just check if a frame exists by replacing #### by %04d, looping over 10000 and breaking if found, it takes less than .02 seconds to check os.path.isfile(f), especially if the first frame is between 1-3000.
Of course I cannot guarantee what the start frame will be, and that approach is not perfect, but in practice many of the sequences do begin between 1-3000, and I could return True if found and fall back to the sequence approach if nothing is found (it would still be quicker for most cases).
I'm not sure what's the best approach is for this, I already made it multithreaded when searching for many image sequences, so it's faster than before, but I'm sure there is room for improvement. | 0 | python,image,file,sequence,exists | 2015-09-24T23:45:00.000 | 1 | 32,772,672 | You should probably not loop for candidates using os.path.isfile(), but use glob.glob() or os.listdir() and check the returned lists for matching your file patterns, i.e. prefer memory operations over disk accesses. | 0 | 1,146 | false | 0 | 1 | Fastest way to check if an image sequence string actually exists on disk | 32,875,932 |
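The glob-based check suggested in the answer above can be sketched as follows. `seq_exists` is a hypothetical helper (not part of nuke or pyseq): it turns a `####`-style or `%0id`-style frame token into a glob wildcard, so one directory query replaces thousands of `os.path.isfile` calls.

```python
import glob
import os
import re

def seq_exists(pattern):
    """Return True if at least one file on disk matches a nuke-style
    sequence string such as /path/to/img_seq.####.exr or
    /path/to/img_seq.%04d.exr, or if it is a plain existing file."""
    if os.path.isfile(pattern):          # still image: cheap check first
        return True
    # replace one run of '#' or one printf-style %0id token with a glob '*'
    globbed, n = re.subn(r"#+|%0\d+d", "*", pattern, count=1)
    if n == 0:                           # no frame token -> not a sequence
        return False
    # a single listing/match instead of per-frame isfile() probes
    return bool(glob.glob(globbed))
```

This sidesteps the "unknown start frame" problem entirely, since glob matches any frame number that actually exists.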
2 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | I have a potentially big list of image sequences from nuke. The format of the string can be:
/path/to/single_file.ext
/path/to/img_seq.###[.suffix].ext
/path/to/img_seq.%0id[.suffix].ext, i being an integer value, the values between [] being optional.
The question is: given this string, that can represent a sequence or a still image, check if at least one image on disk corresponds to that string in the fastest way possible.
There is already some code that checks if these files exist, but it's quite slow.
First it checks if the folder exists, if not, returns False
Then it checks if the file exists with os.path.isfile, if it does, it returns True.
Then it checks if no % or # is found in the path, and if not os.path.isfile, it returns False.
All this is quite fast.
But then, it uses some internal library which is in performance a bit faster than pyseq to try to find an image sequence, and does a bit more operations depending if start_frame=end_frame or not.
But it still takes a large amount of time to analyze whether something is an image sequence, especially on some sections of the network and for big image sequences.
For example, for a 2500 images sequence, the analysis takes between 1 and 3 seconds.
If I take a very naive approach, and just check if a frame exists by replacing #### by %04d, looping over 10000 and breaking if found, it takes less than .02 seconds to check os.path.isfile(f), especially if the first frame is between 1-3000.
Of course I cannot guarantee what the start frame will be, and that approach is not perfect, but in practice many of the sequences do begin between 1-3000, and I could return True if found and fall back to the sequence approach if nothing is found (it would still be quicker for most cases).
I'm not sure what's the best approach is for this, I already made it multithreaded when searching for many image sequences, so it's faster than before, but I'm sure there is room for improvement. | 0 | python,image,file,sequence,exists | 2015-09-24T23:45:00.000 | 1 | 32,772,672 | If there are potentially so many files that you're worried about wasting memory for a dictionary that holds them all, you could just store a single key for each img_seq.###[.suffix].ext pattern, removing the sequence number as you scan the directory. Then a single lookup will suffice. The values in the dictionary could either be "dummy" booleans because the existence of the key is the only thing you care about, or counters in case you ever want to know how many files you have for a certain sequence. | 0 | 1,146 | false | 0 | 1 | Fastest way to check if an image sequence string actually exists on disk | 32,930,114 |
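The "single key per sequence" idea from the answer above could look like this sketch. The normalization regex and helper names are assumptions — real nuke filenames may need a stricter pattern — but the principle is one `os.listdir` scan, collapsing every frame number to a `####` token, then O(1) set membership tests.

```python
import os
import re

# frame digits sitting just before the extension, e.g. the 0001 in img.0001.exr
_FRAME = re.compile(r"\.(\d+)(?=\.[^.]+$)")

def build_pattern_index(directory):
    """One directory scan; img_seq.0001.exr collapses to img_seq.####.exr."""
    patterns = set()
    for name in os.listdir(directory):
        key = _FRAME.sub(lambda m: "." + "#" * len(m.group(1)), name)
        patterns.add(key)
    return patterns

def seq_in_index(pattern, index):
    """O(1) existence test for a ####-style sequence string."""
    return os.path.basename(pattern) in index
```

Values could just as well be counters if you ever need to know how many frames each sequence has, as the answer suggests.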
1 | 2 | 0 | 1 | 3 | 1 | 0.099668 | 0 | I have a serial port which gives me a lot of different data from different pieces of hardware. I need to send different commands to the serial port to receive different kinds of data from it. So, I need to write and read data simultaneously from the port in different functions. Sometimes, I might need to read and write simultaneously from the serial port in 10 different threads. What is the best way of writing code in this situation for simultaneously reading/writing data from a single port? Threads, sub processes, etc. | 0 | multithreading,python-2.7,serialization,subprocess | 2015-09-25T19:22:00.000 | 0 | 32,789,247 | There can be different approaches based on the type of architecture and driver. One of them can be as below :
As soon as you receive data via receive interrupt, post the data to the main receive buffer queue. There can be a single thread called Rx dispatcher/Rx Manager that always reads the main receive buffer queue and post/dispatch to respective receive queue of reader threads based on msg type/id for further processing.
On the transmission side, respective thread can post the data along with msg type/id to the main transmit buffer queue which shall be sent out by Tx thread via transmit interrupt. | 0 | 5,692 | false | 0 | 1 | Reading and writing from a single serial port simultaneously from multiple threads | 32,807,914 |
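The Rx-dispatcher idea above can be sketched with the standard library alone (shown for Python 3; `queue` was `Queue` in Python 2). Here a fake `read_frame` stands in for the blocking serial read — in real code it would wrap pyserial's `Serial.read` — and the dispatcher routes each frame to a per-reader queue keyed by message id.

```python
import queue
import threading

def rx_dispatcher(read_frame, reader_queues, stop):
    """Owns the receive side; in real code this runs in its own thread,
    with read_frame wrapping the blocking serial-port read."""
    while not stop.is_set():
        frame = read_frame()
        if frame is None:
            continue
        msg_id, payload = frame
        q = reader_queues.get(msg_id)
        if q is not None:
            q.put(payload)          # hand off to the interested reader thread

# --- demo with a fake port instead of real hardware ---
frames = [(1, b"temp=20"), (2, b"rpm=900"), (1, b"temp=21")]
stop = threading.Event()

def fake_read():
    if frames:
        return frames.pop(0)
    stop.set()                      # pretend the port was closed
    return None

reader_queues = {1: queue.Queue(), 2: queue.Queue()}
rx_dispatcher(fake_read, reader_queues, stop)   # run inline for the demo
```

A mirror-image Tx thread draining a single transmit queue keeps all writes serialized through one owner, so ten reader/writer threads never touch the port directly.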
2 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | I am trying to download some files via mechanize. Files smaller than 1GB are downloaded without causing any trouble. However, if a file is bigger than 1GB the script runs out of memory:
The mechanize_response.py script throws out of memory at the following line
self.__cache.write(self.wrapped.read())
__cache is a cStringIO.StringIO; it seems that it cannot handle more than 1GB.
How to download files larger than 1GB?
Thanks | 0 | python-2.7,mechanize-python | 2015-09-27T09:01:00.000 | 1 | 32,806,238 | It sounds like you are trying to download the file into memory but you don't have enough. Try using the retrieve method with a file name to stream the downloaded file to disc. | 0 | 304 | false | 0 | 1 | python mechanize retrieving files larger than 1GB | 32,806,729 |
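The point of the answer — stream to disk instead of buffering the whole response in memory — looks like this as a sketch. It is shown with the standard library's `urllib.request` (Python 3) rather than mechanize itself, but mechanize's `Browser.retrieve(url, filename)` does the same thing internally.

```python
import shutil
import urllib.request

def download_to_file(url, filename, chunk_size=1 << 20):
    """Stream a (possibly multi-GB) response to disk in 1 MiB chunks,
    so memory use stays constant regardless of file size."""
    with urllib.request.urlopen(url) as response, open(filename, "wb") as out:
        shutil.copyfileobj(response, out, chunk_size)
    return filename
```

Nothing larger than one chunk is ever held in memory, which is exactly what the `cStringIO` cache fails to guarantee.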
2 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | I am trying to download some files via mechanize. Files smaller than 1GB are downloaded without causing any trouble. However, if a file is bigger than 1GB the script runs out of memory:
The mechanize_response.py script throws out of memory at the following line
self.__cache.write(self.wrapped.read())
__cache is a cStringIO.StringIO; it seems that it cannot handle more than 1GB.
How to download files larger than 1GB?
Thanks | 0 | python-2.7,mechanize-python | 2015-09-27T09:01:00.000 | 1 | 32,806,238 | I finally figured out a work around.
Other than using browser.retrieve or browser.open I used mechanize.urlopen which returned the urllib2 Handler. This allowed me to download files larger than 1GB.
I am still interested in figuring out how to make retrieve work for files larger than 1GB. | 0 | 304 | false | 0 | 1 | python mechanize retrieving files larger than 1GB | 32,808,075 |
2 | 2 | 0 | 0 | 4 | 0 | 0 | 0 | For a project I am doing I have to connect my Linux PC to a Bluetooth LE device. The application I design will be deployed on an ARM embedded system when it is complete.
Searching for documentation online hints that the preferred programming language for these kind of applications is Python. All the Bluez /test examples are written in Python and there are quite a few sources of information regarding creating BLE applications in Python. Not so much in C.
My superior and I had an argument about whether I should use Python or C. One of his arguments was that there was unacceptable overhead when using Python for setting up Bluetooth LE connections and that Bluetooth LE had to be very timely in order to function properly. My argument was that the overhead would not matter as much, since there were no time constraints regarding Bluetooth LE connections; the application will find devices, connect to a specific one and read a few attributes, which it saves to a file.
My question is; is there any reason to prefer the low-level C approach over using a high-level Python implementation for a basic application that reads GATT services and their characteristics? What would the implications be for an embedded device? | 0 | python,c,bluetooth-lowenergy,dbus,bluez | 2015-09-28T13:57:00.000 | 1 | 32,824,889 | Something more to consider:
With the latest BlueZ (e.g. 5.36+), BLE should work fine and has been very stable for me - and remember to add "experimental" when building it and "-E" as a service parameter to get manufacturerData (and other experimental features)
Using the C API, I think your code must be GPL (not 100% sure tho). The DBus interface allows you to make closed source code (if it's for a company) | 0 | 1,918 | false | 0 | 1 | Dbus & Bluez programming language | 34,717,559 |
2 | 2 | 0 | 3 | 4 | 0 | 1.2 | 0 | For a project I am doing I have to connect my Linux PC to a Bluetooth LE device. The application I design will be deployed on an ARM embedded system when it is complete.
Searching for documentation online hints that the preferred programming language for these kind of applications is Python. All the Bluez /test examples are written in Python and there are quite a few sources of information regarding creating BLE applications in Python. Not so much in C.
My superior and I had an argument about whether I should use Python or C. One of his arguments was that there was unacceptable overhead when using Python for setting up Bluetooth LE connections and that Bluetooth LE had to be very timely in order to function properly. My argument was that the overhead would not matter as much, since there were no time constraints regarding Bluetooth LE connections; the application will find devices, connect to a specific one and read a few attributes, which it saves to a file.
My question is; is there any reason to prefer the low-level C approach over using a high-level Python implementation for a basic application that reads GATT services and their characteristics? What would the implications be for an embedded device? | 0 | python,c,bluetooth-lowenergy,dbus,bluez | 2015-09-28T13:57:00.000 | 1 | 32,824,889 | This is quite an open question as there are so many things to consider when making this decision. So the best "answer" might rather be an attempt to narrow down the discussion:
Based on the question, I'm making the assumption that the system you are targeting has D-Bus and Python available with all needed dependencies.
I'd try to narrow down the discussion by first deciding on what BlueZ API to use. If you are planning on using the D-Bus API rather than the libbluetooth C library API, then there is already some overhead introduced by that and I don't believe Python in itself would be the major factor. That should of course be measured/evaluated to know for sure, but ruling out Python while still using D-Bus might be a premature optimization without much impact in practice.
If the C library API is to be used in order to avoid D-Bus overhead then I think you should go with C for the client throughout.
If the "timely manner" factor is very important I believe you will eventually need to have ways to measure performance anyway. Then perhaps a proof of concept of both design options might be the best way to really decide.
If the timing constraints turn out to be a moot question in practice, other aspects should weigh in more, e.g. ease of development (documentation and examples available), testability, and so on. | 0 | 1,918 | true | 0 | 1 | Dbus & Bluez programming language | 32,861,048 |
1 | 2 | 0 | 1 | 1 | 1 | 0.099668 | 0 | What's the motivation behind having my unit test classes inherit from unittest.TestCase, rather than object? Does it matter if I'm using Nose (or PyTest) instead of unittest? | 0 | python,python-2.7,unit-testing,python-3.x | 2015-09-28T15:02:00.000 | 0 | 32,826,166 | If you don't inherit from unittest.TestCase, the testing framework won't know that you want those classes to be test cases. So when you try to run your tests, nothing will happen! | 0 | 212 | false | 0 | 1 | Why inherit from unittest.TestCase? | 32,826,392
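A minimal illustration of why the base class matters — the loader only collects methods from `unittest.TestCase` subclasses, so the plain class below (both class names are made up for the example) is never run:

```python
import unittest

class PlainClass(object):              # NOT collected: no TestCase base
    def test_never_runs(self):
        raise AssertionError("the runner never calls this")

class RealTest(unittest.TestCase):     # collected because of its base class
    def test_addition(self):
        self.assertEqual(1 + 1, 2)

suite = unittest.TestLoader().loadTestsFromTestCase(RealTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Nose and pytest are more permissive (they can pick up plain classes and functions by naming convention), but code that inherits from `unittest.TestCase` works under all three runners.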
1 | 1 | 0 | 2 | 0 | 1 | 0.379949 | 0 | I'm very new to programming, and have decided to start by learning Python; I've only been studying it for a little over a month, so I am very much still a beginner. Thus far, I really love the language, and am starting to grasp some of it.
But that brings me to my question/concerns; because I am currently only trying to teach myself Python, most of any source that I find to teach myself, be it a book, videos or tutorials, are written for Python 2, and only occasionally for 3.
So, is learning Python 2 going to make it more difficult for me, or slow me down? I really feel that I like the feel of Python 3 much more, but I am mostly learning 2. If I grasp the core concepts of Python in 2, will that translate into 3 easily?
I just want to make sure that I won't regret having put much of my effort into 2, if it is going to make 3 more challenging.
Thank you! | 0 | python-2.7,python-3.x | 2015-09-28T22:25:00.000 | 0 | 32,832,755 | Learn Python 3 first. It's the future of Python and 2.x is in the rear-view mirror for most of the core developers.
Learning Python 2.7.x won't be a waste of your time, as you'll often run into it "in the wild," but it takes years to gain mastery, so you might as well start on the path that will be the most relevant when you become fluent in the language. | 0 | 106 | false | 0 | 1 | Is learning Python 2 along with 3 a waste of effort? | 32,832,829 |
1 | 3 | 0 | 0 | 2 | 1 | 0 | 0 | I am working with huge numbers, such as 150!. Calculating the result is not a problem; for example
f = factorial(150) is
57133839564458545904789328652610540031895535786011264182548375833179829124845398393126574488675311145377107878746854204162666250198684504466355949195922066574942592095735778929325357290444962472405416790722118445437122269675520000000000000000000000000000000000000.
But I also need to store an array with N of those huge numbers, in full precision. A Python list can store it, but it is slow. A numpy array is fast, but cannot handle the full precision, which is required for some operations I perform later; as I have tested, a number in scientific notation (float) does not produce an accurate result.
Edit:
150! is just an example of huge number, it does not mean I am working only with factorials. Also, the full set of numbers (NOT always a result of factorial) change over time, and I need to do the actualization and reevaluation of a function for wich those numbers are a parameter, and yes, full precision is required. | 0 | python,arrays,numpy,int,factorial | 2015-09-30T19:49:00.000 | 0 | 32,874,446 | Store it as tuples of prime factors and their powers. A factorization of a factorial (of, let's say, N) will contain ALL primes less than N. So k'th place in each tuple will be k'th prime. And you'll want to keep a separate list of all the primes you've found. You can easily store factorials as high as a few hundred thousand in this notation. If you really need the digits, you can easily restore them from this (just ignore the power of 5 and subtract the power of 5 from the power of 2 when you multiply the factors to get the factorial... cause 5*2=10). | 0 | 2,771 | false | 0 | 1 | How to store array of really huge numbers in python? | 32,875,136 |
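The prime-factor representation from the answer can be sketched like this. The exponent of each prime p in n! comes from Legendre's formula, e = n//p + n//p**2 + n//p**3 + ..., and multiplying the factors back recovers the exact integer (the helper names are mine):

```python
import math

def primes_up_to(n):
    """Simple sieve of Eratosthenes; every prime factor of n! is <= n."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, is_p in enumerate(sieve) if is_p]

def factorial_factors(n):
    """n! as {prime: exponent}, via Legendre's formula."""
    factors = {}
    for p in primes_up_to(n):
        e, q = 0, p
        while q <= n:
            e += n // q
            q *= p
        factors[p] = e
    return factors

def expand(factors):
    """Multiply the factorization back into the exact integer."""
    out = 1
    for p, e in factors.items():
        out *= p ** e
    return out
```

A whole array of such factorizations is just a list of small dicts, far more compact than the full 263-digit integers, and `expand` restores full precision whenever it is needed.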
1 | 1 | 0 | 2 | 2 | 1 | 1.2 | 0 | I set up a VM with google and want it to run a python script persistently. If I exit out of the SSH session, the script stops. Is there a simple way to keep this thing going after I log out? | 0 | python,google-cloud-platform | 2015-10-01T11:01:00.000 | 1 | 32,885,938 | Since you can open an SSH session you install any number of terminal multiplexers such as tmux, screen or byobu.
If you can't install things on your VM, invoking the script every minute via a cron job could also solve the issue. | 0 | 854 | true | 0 | 1 | Keep a python script running on google VM | 32,886,058 |
1 | 1 | 0 | 1 | 1 | 0 | 0.197375 | 0 | I have been searching the web for hours now, found several instances where someone had the same problem, but I seem to be too much of a newb with linux/ubuntu to follow the instructions properly, as none of the given solutions worked.
Whenever I try to run a panda3d sample file from the python shell, it would give me an error saying:
Traceback (most recent call last):
File "/usr/share/panda3d/samples/asteroids/main.py", line 16, in
from direct.showbase.ShowBase import ShowBase
ImportError: No module named 'direct'
What really bugs me is that when I try to execute the .py file directly (without opening it in the IDLE or pycharm) it works just fine.
I know this has been asked before, but I would like to ask for a working step by step solution to be able to import panda3d from pycharm and the IDLE. I have no clue how to get it working, as none of the answers given to this question worked for me. | 0 | python,python-import,panda3d | 2015-10-04T18:48:00.000 | 0 | 32,937,078 | try to change your PYTHONPATH?
I met a problem like this, and after I modified my PYTHONPATH, it worked. | 0 | 501 | false | 0 | 1 | panda3d python import error | 32,937,162
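A quick way to test the PYTHONPATH theory from inside IDLE or PyCharm is to extend `sys.path` before the failing import. The Panda3D location below is only a guess — point it at whatever directory actually contains the `direct` package on your machine:

```python
import sys

def add_import_path(path):
    """Prepend a directory to the module search path if it isn't there yet."""
    if path not in sys.path:
        sys.path.insert(0, path)

# hypothetical install location; adjust to your system
add_import_path("/usr/share/panda3d")
# after this, `from direct.showbase.ShowBase import ShowBase` should resolve
```

The permanent fix is to add the same directory to the interpreter paths in the IDE's settings (or to the PYTHONPATH environment variable), which is all the answer is suggesting.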
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I wrote the default password of my AMI, i.e. 'ubuntu', but it didn't work. I even tried with my ssh key. I've browsed enough and nothing worked yet. Can anybody please help me out?
[] Executing task 'spawn'
Started...
Creating instance
EC2Connection:ec2.us-west-2.amazonaws.com
Instance state: pending
Instance state: pending
Instance state: pending
Instance state: pending
Instance state: running
Public dns: ec2-52-89-191-143.us-west-2.compute.amazonaws.com
Waiting 60 seconds for server to boot...
[ec2-52-89-191-143.us-west-2.compute.amazonaws.com] run: whoami
[ec2-52-89-191-143.us-west-2.compute.amazonaws.com] Login password for 'ubuntu': | 0 | python,django,amazon-web-services | 2015-10-05T12:25:00.000 | 1 | 32,948,568 | Looks like there's an issue with your "ec2 key pairs". Make sure you have the correct key and that the permission of that key are 400.
To know if the key is working try to manually connect to the instance with
ssh -i ~/.ssh/<your-key> ubuntu@<your-host> | 0 | 219 | false | 1 | 1 | while spawning the fab file it asks me the login password for 'ubuntu' | 33,025,966 |
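ssh silently ignores private keys that are readable by group or others, which is why the answer insists on mode 400. A small stdlib sketch of that check (the helper functions are mine, not part of fabric or boto):

```python
import os
import stat

def key_permissions_ok(path):
    """True if the private key is readable only by its owner (mode 400 or 600)."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return mode in (0o400, 0o600)

def fix_key_permissions(path):
    """Equivalent of `chmod 400 <key>`."""
    os.chmod(path, 0o400)
```

Running such a check before the fabric task starts gives a clearer error than the mysterious password prompt above.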
1 | 5 | 0 | 0 | 3 | 1 | 0 | 0 | I am reading and learning now Python and C at the same time. (Don't ask me why, it is a lot of fun! :-))
I use "Learning Python" by Mark Lutz. Here is what he writes about functions in Python:
Unlike in compiled languages such as C, Python functions do not need
to be fully defined before the program runs. More generally, defs are
not evaluated until they are reached and run, and the code inside defs
is not evaluated until the functions are later called.
I do not quite get it as in my second book K.N.King says that you CAN declare a function first and create a definition later.
English is not my native language so what I am missing here?
I can make only one guess: that it is somehow related to program runtime. In C the compiler runs through the program and finds the function declaration. Even if the function is not defined yet, the compiler goes on and finds the function definition later. Function declaration in C helps to avoid problems with the return type of a function (as it is int by default). On the other hand, in Python a function is not evaluated until it is reached during runtime. And when it is reached, the body of the function is not evaluated until there is a function call. But this guess does not explain the quote above.
What, then, is Mr. Lutz talking about? I am confused a bit... | 0 | python,c | 2015-10-06T13:44:00.000 | 0 | 32,971,613 | In C you can compile without declaring; the compiler assumes the function
is int, and if that's the case your program compiles and runs.
If your function or functions are another type you'll get problems and you'll have to declare. It is often quicker to not declare and generate your declarations from your code to an h file and include where needed. You can leave it to a program to write this part of your program. Just like you can leave the indent, write a big mess and let indent do it for you. | 0 | 455 | false | 0 | 1 | Difference in declaring a function in Python and C | 32,972,064 |
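The Python side of Lutz's point is easy to demonstrate: a `def` is just a statement executed when control reaches it, so calling a function before its `def` has run raises `NameError`, and a function body is not evaluated until the call — there is no separate declaration step as in C.

```python
# calling before the def statement has executed fails at runtime
try:
    greet()                      # the name doesn't exist yet
    reached = True
except NameError:
    reached = False

def greet():                     # executed now: binds the name 'greet'
    return "hello"

# the body itself is only evaluated when the function is called
def broken():
    return undefined_name        # no error until broken() is invoked

after_def = greet()
```

This is why Python modules usually put all `def`s at the top and the calls at the bottom, while C relies on prototypes checked at compile time.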
1 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | where should my python files be stored so that I can run that using gdb. I have custom gdb located at /usr/local/myproject/bin. I start my gdb session by calling ./arm-none-eabi-gdb from the above location.
I don't know how this gdb and python are integrated into each other.
Can anyone help.? | 0 | python,gdb | 2015-10-06T19:36:00.000 | 1 | 32,978,233 | I was able to figure out. What I understood is
GDB embeds the Python interpreter so it can use Python as an extension language.
You can't just import gdb from /usr/bin/python like it's an ordinary Python library because GDB isn't structured as a library.
What you can do is source MY-SCRIPT.py from within gdb (equivalent to running gdb -x MY-SCRIPT.py). | 0 | 886 | false | 0 | 1 | Invoke gdb from python script | 32,979,189 |
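A sketch of what such a MY-SCRIPT.py might look like. The `import gdb` only succeeds inside a gdb session (gdb injects the module; it is not installable), so this sketch guards it, and the `hello-breakpoints` command it defines is entirely made up for the example:

```python
try:
    import gdb               # provided by gdb itself when the script is sourced
    INSIDE_GDB = True
except ImportError:
    INSIDE_GDB = False       # running under a plain Python interpreter

if INSIDE_GDB:
    class HelloBreakpoints(gdb.Command):
        """Toy command: type 'hello-breakpoints' at the (gdb) prompt."""
        def __init__(self):
            super(HelloBreakpoints, self).__init__(
                "hello-breakpoints", gdb.COMMAND_USER)

        def invoke(self, arg, from_tty):
            bps = gdb.breakpoints() or []
            gdb.write("%d breakpoint(s) set\n" % len(bps))

    HelloBreakpoints()       # registering the command is a side effect
```

Load it with `source MY-SCRIPT.py` at the gdb prompt, or `gdb -x MY-SCRIPT.py` from the shell, as the answer describes.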
1 | 1 | 0 | 1 | 0 | 1 | 1.2 | 0 | I have been developing a python script called script.py . I wish to compile it using py2exe. However, I wish to make sure the the final script.exe is optimized with the pypy JIT compiler and hence faster.
P.S. I am new to both py2exe and pypy | 0 | python-3.x,py2exe,pypy | 2015-10-08T00:08:00.000 | 0 | 33,004,491 | py2exe and pypy are incompatible. It's possible to write an equivalent of py2exe for pypy, but some work has to be done. | 0 | 423 | true | 0 | 1 | py2exe with Pypy | 33,037,742 |
2 | 4 | 0 | 2 | 1 | 0 | 0.099668 | 0 | I want to give all members of a Plone (4.3.7) site the possibility to restore an accidentally deleted file.
I only found ecreall.trashcan for this purpose, but I have some problems with the installation. After adding it in buildout.conf and doing a bin/buildout, the output contains errors like...
File "build/bdist.linux-x86_64/egg/ecreall/trashcan/skins/ecreall_trashcan_templates/isTrashcanOpened.py", line 11
return session and session.get('trashcan', False) or False
SyntaxError: 'return' outside function
File "build/bdist.linux-x86_64/egg/ecreall/trashcan/skins/ecreall_trashcan_templates/object_trash.py", line 23
return context.translate(msg)
SyntaxError: 'return' outside function
File "build/bdist.linux-x86_64/egg/ecreall/trashcan/skins/ecreall_trashcan_templates/object_restore.py", line 23
return context.translate(msg)
SyntaxError: 'return' outside function
...
And so, I don't find any new add-on to enable or configure in site setup.
Does anyone know what this could be, or is there another method to do what I want?
Please.... thanks in advance | 0 | python,plone,plone-4.x | 2015-10-08T07:55:00.000 | 0 | 33,009,839 | If you don't find a proper add-on, know that in Plone a trash can only be a matter of workflow.
You can customize your workflow adding a new trash transition that move the content in a state (trashed) where users can't see it (maybe keep the visibility open for Manager and/or Site Administrators).
Probably you must also customize the content_status_modify script, because after trashing a content item you must be redirected to another location (or you'll get an Unauthorized error). | 0 | 166 | false | 1 | 1 | Is there a metod to have a trash can in Plone? | 33,016,003
2 | 4 | 0 | 1 | 1 | 0 | 0.049958 | 0 | I want to give all members of a Plone (4.3.7) site the possibility to restore an accidentally deleted file.
I only found ecreall.trashcan for this purpose, but I have some problems with the installation. After adding it in buildout.conf and doing a bin/buildout, the output contains errors like...
File "build/bdist.linux-x86_64/egg/ecreall/trashcan/skins/ecreall_trashcan_templates/isTrashcanOpened.py", line 11
return session and session.get('trashcan', False) or False
SyntaxError: 'return' outside function
File "build/bdist.linux-x86_64/egg/ecreall/trashcan/skins/ecreall_trashcan_templates/object_trash.py", line 23
return context.translate(msg)
SyntaxError: 'return' outside function
File "build/bdist.linux-x86_64/egg/ecreall/trashcan/skins/ecreall_trashcan_templates/object_restore.py", line 23
return context.translate(msg)
SyntaxError: 'return' outside function
...
And so, I don't find any new add-on to enable or configure in site setup.
Does anyone know what this could be, or is there another method to do what I want?
Please.... thanks in advance | 0 | python,plone,plone-4.x | 2015-10-08T07:55:00.000 | 0 | 33,009,839 | I've found the solution(!!!) working with -Content Rules- in the control panel.
First I created a folder called TRASHCAN; then, in content rules, I added a rule that copies the file/page/image into the trashcan folder when it is removed.
This rule can be disabled in the trashcan folder, so you can permanently delete the objects inside. | 0 | 166 | false | 1 | 1 | Is there a metod to have a trash can in Plone? | 33,026,043
1 | 1 | 0 | 1 | 0 | 0 | 0.197375 | 0 | I'm trying to send a notification from my Django Application everytime the user perform specific actions, i would like to send those notifications from the email of the person who performed these actions.
I don't want them to have to put their password on my application or anything else. I know this is possible because I remember doing this with PHP a long time ago. | 0 | python,django,email,smtplib | 2015-10-08T12:57:00.000 | 0 | 33,016,533 | You connect to the SMTP server, preferably your own, that doesn't require authentication or on which you do have an account, then you create an email that has the user's e-mail in the From field, and you just send it.
Which lib you will use to do it, smtplib, some Django stuff, or anything else, is irrelevant. If you want to, you can even skip the SMTP server, and simulate one. That way you can deposit the composed mail directly into users POP server inbox. But there is rarely a need for such extremes. | 0 | 286 | false | 1 | 1 | How to send an email on python without authenticating | 33,017,245 |
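A sketch of the first suggestion — put the acting user's address in the From header and relay through an SMTP server that doesn't ask for credentials. The host name is a placeholder, and `send_as_user` is only defined here (not called), since it needs a live server that trusts the application:

```python
import smtplib
from email.message import EmailMessage

def build_notification(user_email, to_addr, action):
    """Compose the notification; no password from the user is involved."""
    msg = EmailMessage()
    msg["From"] = user_email          # the acting user's address
    msg["To"] = to_addr
    msg["Subject"] = "Action performed: %s" % action
    msg.set_content("User %s performed: %s" % (user_email, action))
    return msg

def send_as_user(msg, host="localhost", port=25):
    """Relay through an SMTP server with no server.login() call at all."""
    with smtplib.SMTP(host, port) as server:
        server.send_message(msg)
```

Note that receiving servers may flag such mail as spoofed (SPF/DKIM), which is the usual caveat with putting someone else's address in From.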