Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
23,203,874 | 2014-04-21T18:54:00.000 | 0 | 0 | 0 | 0 | python,windows,python-3.x,python-asyncio | 48,090,576 | 1 | false | 0 | 0 | As jfs and the OP observe,
ProactorEventLoop is incompatible with SSL, and
the default loop supports SSL on Windows. By extension, aiohttp should also work with HTTPS on Windows. | 1 | 7 | 0 | When I make an HTTPS request using the aiohttp library with asyncio and Python 3.4 in Windows 7, the request fails with a NotImplementedError in the _make_ssl_transport function in base_events.py, as shown in the traceback.
On Windows, I use the ProactorEventLoop. I think you have to use that one to get asyncio to work. I tried the same request in a Debian 7 VM with a compiled version of Python 3.4, and the same request works. I don't use the ProactorEventLoop in Debian, just the default though.
Any ideas or workarounds? Or should I give up on aiohttp HTTPS on Windows for now? I am not able to use Linux for this project; it needs to be on Windows. | Does the aiohttp Python library in Windows support HTTPS? | 0 | 0 | 1 | 1,453 |
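A minimal sketch of the workaround this row points at (an illustration, not aiohttp's documented API): explicitly construct the selector-based loop, which is the loop family that supported SSL on Python 3.4, instead of the proactor loop.

```python
import asyncio
import ssl

# Hedged sketch: on Windows you would hand this loop to asyncio/aiohttp
# instead of ProactorEventLoop; in Python 3.4, SSL transports worked on
# selector loops but not on the proactor loop.
loop = asyncio.SelectorEventLoop()
try:
    context = ssl.create_default_context()  # pair an SSL context with the loop
    supports_ssl = isinstance(loop, asyncio.SelectorEventLoop)
finally:
    loop.close()

print(supports_ssl)  # True
```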
23,208,253 | 2014-04-22T00:24:00.000 | 1 | 0 | 1 | 0 | list,python-2.7 | 23,208,304 | 3 | false | 0 | 0 | Figured out the answer:
my_list = [[] for i in xrange(5)]
should do the trick. | 2 | 1 | 0 | I have a list my_list created as my_list = [[]]*5
Now I want to add an integer, say 4, to the third list in my_list:
my_list[2].append(4)
When I print out my_list I see that every list in my_list has the integer 4 added to it.
>>> my_list = [[]]*5
>>> my_list[2].append(4)
>>> my_list
[[4], [4], [4], [4], [4]]
is there a way to just have 4 added to the 3rd list?
expected: [[],[],[4],[],[]]
actual: [[4],[4],[4],[4],[4]] | list in python behaviour | 0.066568 | 0 | 0 | 32 |
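The aliasing described in this row can be shown side by side; a quick illustrative sketch:

```python
# [[]] * 5 repeats a reference to ONE inner list five times...
aliased = [[]] * 5
aliased[2].append(4)

# ...while a comprehension builds five distinct inner lists.
independent = [[] for _ in range(5)]
independent[2].append(4)

print(aliased)      # [[4], [4], [4], [4], [4]]
print(independent)  # [[], [], [4], [], []]
```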
23,208,253 | 2014-04-22T00:24:00.000 | 1 | 0 | 1 | 0 | list,python-2.7 | 23,208,336 | 3 | false | 0 | 0 | Well, my understanding of what is going on is that [[]]*5 just creates multiple references to one initialized list. I believe [[] for i in range(5)] is what you are looking for; it initializes distinct lists inside the parent list. Hope this helps! | 2 | 1 | 0 | I have a list my_list created as my_list = [[]]*5
Now I want to add an integer, say 4, to the third list in my_list:
my_list[2].append(4)
When I print out my_list I see that every list in my_list has the integer 4 added to it.
>>> my_list = [[]]*5
>>> my_list[2].append(4)
>>> my_list
[[4], [4], [4], [4], [4]]
is there a way to just have 4 added to the 3rd list?
expected: [[],[],[4],[],[]]
actual: [[4],[4],[4],[4],[4]] | list in python behaviour | 0.066568 | 0 | 0 | 32 |
23,208,297 | 2014-04-22T00:29:00.000 | 3 | 0 | 1 | 0 | python,performance,multiprocessing,cython | 23,225,448 | 1 | true | 0 | 0 | Cython can have translation costs if you go between C and Python types too much, which could contribute. There's also the fact that the speedup in Python will be higher, which hides overhead.
One suggestion is to use nogil functions and see whether threading has a lower overhead. | 1 | 1 | 0 | I'm currently working on a minimax-tree-based AI in Python. To squeeze extra performance out of the AI I've been using Cython to optimize the bottlenecks, and have attempted to multiprocess the tree building.
The issue I have is that the ai is actually slower when multiprocessing with cython. I know there is overhead with multiprocessing, which can sometimes cause it to be slower. However, it's only slower when cython is used. When equivalent python code is used multiprocessing provides a 2-3 times performance increase.
I've run several tests to rule out any obvious problems. For example, I've run tests both with and without alpha-beta pruning enabled (which could under some circumstances perform better without multiprocessing), but it makes no difference. I've already setup the cython objects to be pickleable, and the multiprocessed cython ai builds a proper tree. The multiprocessing implementation I'm using (pass only the root children to a pool.map function) DOES increase performance, but only when pure python code is used.
Is there some quirk to cython that I'm missing? Some additional overhead to using cython code (or c extensions in general) with multiprocessing? Or is this a problem with cython itself?
Edit: Here are some example timings:
Given a depth of 7 and no Alpha-Beta pruning: (all times in seconds)
Cython, No Multiprocessing:
12.457
Cython, Multiprocessing:
15.440
No Cython, No Multiprocessing:
26.010
No Cython, Multiprocessing:
17.609
After much testing I've found the cause of the overhead. @Veedrac is right in that there is extra overhead with c extensions, and the slowness of python masked the overhead without cython. Specifically, the overhead occurred when returning branches from the multiple processors, and adding them to the root node. This explains why the overhead was not constant, and actually scaled up as the depth of the tree increased.
I had actually suspected this, and tested for it before. However, it appears the code I previously used to test for this overhead was bugged. I've now fixed the multiprocessing to only return necessary information, and the overhead has been eliminated. The Cython with multiprocessing now runs very quickly. | Slow multiprocessing with cython | 1.2 | 0 | 0 | 1,778 |
23,209,693 | 2014-04-22T03:11:00.000 | 0 | 0 | 1 | 0 | python,edge-detection | 23,209,821 | 1 | true | 0 | 0 | If you just have a threshold without hysteresis, then when an image is near the threshold you can have low and high transitions (edges) very near each other. What you likely want are real value transitions in order to recognize an edge. The hysteresis value gives a required change before going from a high edge to a low edge and the other way around. | 1 | 0 | 0 | What is the hysteresis threshold, and why is it useful in edge detection?
I am trying to write an edge detection program in python, and it seems to work well without using hysteresis, but many sources include it. I was wondering why it would be useful. | Hysteresis in Edge Detection | 1.2 | 0 | 0 | 1,150 |
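A toy 1-D sketch of the idea in the answer above (the thresholds and signal are invented for illustration, not a real edge detector): weak responses survive only if they connect to a strong one.

```python
def hysteresis_1d(signal, low, high):
    """Keep values >= high, plus values in [low, high) touching a kept value."""
    keep = [v >= high for v in signal]
    weak = [low <= v < high for v in signal]
    changed = True
    while changed:  # grow strong regions through chains of weak responses
        changed = False
        for i in range(len(signal)):
            if weak[i] and not keep[i]:
                if (i > 0 and keep[i - 1]) or (i + 1 < len(signal) and keep[i + 1]):
                    keep[i] = True
                    changed = True
    return keep

edges = hysteresis_1d([0, 3, 9, 4, 0, 4, 0], low=2, high=8)
print(edges)  # the weak 3 and 4 beside the strong 9 survive; the isolated 4 does not
```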
23,210,636 | 2014-04-22T04:52:00.000 | 0 | 0 | 0 | 0 | python,amazon-web-services,boto | 23,243,516 | 1 | false | 0 | 0 | Although it might be possible to use a single connection for multiple services, that's not how boto is written and as the comment above states, I doubt very much that it would improve your performance. I would recommend that you create a single connection per service and keep reusing that connection. Boto caches connections and will also handle any reconnection that might be required if you don't use the connection for a while or encounter some error. | 1 | 0 | 0 | Is it possible to create a single connection object to be used for different aws services ?
Each time a connection is made it's a new API call, so I believe it would save some time if a connection, once created, could be reused. | Single boto connection object for different aws services | 0 | 0 | 1 | 120 |
23,211,180 | 2014-04-22T05:35:00.000 | 4 | 0 | 0 | 0 | python,sockets | 23,211,267 | 1 | false | 0 | 0 | s.send('\x01') (Python2); s.send(b'\x01') (Python3).
Ctrl+A is the control character with numeric value 1. | 1 | 0 | 0 | I am trying to send commands using a Python socket.
I have to send 'ctrl+a' key stroke, first.
Typically, I connect using telnet, type 'ctrl+a', then press Enter.
In the terminal, 'ctrl+a' appears as '^A'.
So I tried to send using python send function like below.
s.send('^A')
But it didn't work.
It appeared as '^A' on the terminal, but it was sent as two literal characters rather than the real control character.
I need to send real 'ctrl+a' message.
How can I do that?
Please advise.
Thank you. | send ctrl+a message on python socket | 0.664037 | 0 | 1 | 1,291 |
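To make the answer concrete: Ctrl+A is byte 0x01, so send that single byte, not the two characters "^A". A small sketch over a local socket pair standing in for the telnet connection:

```python
import socket

left, right = socket.socketpair()  # local stand-in for the real telnet link
left.send(b'\x01')                 # Python 3; in Python 2 use s.send('\x01')
received = right.recv(1)
print(received)  # b'\x01'

left.close()
right.close()
```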
23,214,773 | 2014-04-22T08:50:00.000 | 2 | 0 | 0 | 0 | python,xml,parsing,concurrency,sax | 23,223,638 | 2 | true | 0 | 0 | You can't easily split the SAX parsing into multiple threads, and you don't need to: if you just run the parse without any other processing, it should run in 20 minutes or so. Focus on the processing you do to the data in your ContentHandler. | 1 | 0 | 0 | I have a couple of gigantic XML files (10GB-40GB) that have a very simple structure: just a single root node containing multiple row nodes. I'm trying to parse them using SAX in Python, but the extra processing I have to do for each row means that the 40GB file takes an entire day to complete. To speed things up, I'd like to use all my cores simultaneously. Unfortunately, it seems that the SAX parser can't deal with "malformed" chunks of XML, which is what you get when you seek to an arbitrary line in the file and try parsing from there. Since the SAX parser can accept a stream, I think I need to divide my XML file into eight different streams, each containing [number of rows]/8 rows and padded with fake opening and closing tags. How would I go about doing this? Or — is there a better solution that I might be missing? Thank you! | Concurrent SAX processing of large, simple XML files? | 1.2 | 0 | 1 | 1,062 |
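A minimal sketch of the single-threaded per-row SAX handling the answer recommends; the element names and the tiny in-memory document are made up for illustration:

```python
import xml.sax

class RowHandler(xml.sax.ContentHandler):
    """Count <row> elements; real per-row processing would go in startElement."""
    def __init__(self):
        super().__init__()
        self.rows = 0

    def startElement(self, name, attrs):
        if name == "row":
            self.rows += 1

handler = RowHandler()
xml.sax.parseString(b"<root><row/><row/><row/></root>", handler)
print(handler.rows)  # 3
```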
23,214,996 | 2014-04-22T09:00:00.000 | 0 | 0 | 1 | 0 | java,python | 23,215,438 | 4 | false | 0 | 0 | Both serve the same purpose.
The import keyword is used to bring built-in and user-defined packages into your source file, so that your code can refer to a class in another package directly by its name. | 1 | 2 | 0 | I am a Python programmer, and I began to learn Java recently. I find that Python and Java both use import to pull in code from other files. Is there any difference between the exact meaning of import in the two languages? | What's the difference between the meaning of import statement in Python and Java? | 0 | 0 | 0 | 1,737 |
23,217,264 | 2014-04-22T10:44:00.000 | 1 | 0 | 0 | 0 | python,scikit-learn | 23,232,968 | 1 | true | 0 | 0 | scikit-learn does not currently have an MLP implemented which you can initialize via an RBM, but you can still access the weights which are stored in the components_ attribute and the bias which is stored in the intercept_hidden_ attribute.
If you're interested in using modern MLPs, torch7, pylearn2, and deepnet are all modern libraries and most of them contain pretraining routines like you describe. | 1 | 0 | 1 | I want to build a Deep Believe Network with scikit-learn. As I know one should train many Restricted Boltzmann Machines (RBM) individually. Then one should create a Multilayer Perceptron (MLP) that has the same number of layers as the number of (RBMs), and the weights of the MLP should be initialized with the weights of the RBMs. However I'm unable to find a way to get the weights of the RBMs from scikit-learn's BernoulliRBM. Also it doesn't seem to be a way also to initialize the weights of a MLP in scikit-learn.
Is there a way to do what I described? | Initializing the weights of a MLP with the RBM weights | 1.2 | 0 | 0 | 674 |
23,219,456 | 2014-04-22T12:22:00.000 | 5 | 0 | 0 | 0 | python,django,compatibility | 23,219,502 | 1 | true | 1 | 0 | The best way is to build a virtualenv with Django 1.6, install your app, and run its tests. There will likely be some small breaks (Django has changed since 1.3), but they should be relatively easy to patch up. | 1 | 1 | 0 | I am working on a project where I must use parts of an existing Django application. The application is written with Django 1.3. Is there a way to determine whether it can be used in a project that uses Django 1.6? | How to check if Django 1.3 project is also compatible with Django 1.6 | 1.2 | 0 | 0 | 66 |
23,222,104 | 2014-04-22T14:14:00.000 | 8 | 0 | 0 | 0 | python,django | 23,223,408 | 9 | true | 1 | 0 | I got the solution: try with --allow-unverified.
Syntax: pip install packagename==version --allow-unverified packagename
Some packages contain insecure and unverifiable files, so pip will not download them by default; this can be solved with --allow-unverified, which permits the installation.
Eg: pip install django-ajax-filtered-fields==0.5 --allow-unverified django-ajax-filtered-fields | 1 | 26 | 0 | I get an error when installing some packages even though they actually exist, for example django-ajax-filtered-fields==0.5:
Downloading/unpacking django-ajax-filtered-fields==0.5 (from -r
requirements.example.pip (line 13)) Could not find any downloads
that satisfy the requirement django-ajax-filtered-fields==0.5(from
-r requirements.example.pip (line 13))
No distributions at all found for django-ajax-filtered-fields==0.5 Storing debug log for failure in /home/pd/.pip/pip.log
(peecs)pd@admin:~/proj/django/peecs$ pip install
django-ajax-filtered-fields==0.5 --allow-unverified
django-ajax-filtered-fields==0.5 Downloading/unpacking
django-ajax-filtered-fields==0.5 Could not find any downloads that
satisfy the requirement django-ajax-filtered-fields==0.5 Some
externally hosted files were ignored (use --allow-external
django-ajax-filtered-fields to allow). Cleaning up... No distributions
at all found for django-ajax-filtered-fields==0.5 Storing debug log
for failure in /home/pd/.pip/pip.log | No distributions at all found for some package | 1.2 | 0 | 0 | 83,294 |
23,227,044 | 2014-04-22T18:04:00.000 | 2 | 1 | 0 | 1 | python,linux,go | 23,227,250 | 1 | true | 0 | 0 | Python, being an interpreted language, requires the system to load the interpreter each time a script is run from the command line.
On my particular system, after disk caching, it takes the system 20 ms to execute a script with import string (which is plausible for your use case). If you're processing a lot of information and can't submit it all at once, you should consider setting up a daemon to avoid this kind of overhead.
On the other hand, a daemon is more complex to write and test, so you should probably see if a script suits your needs before optimizing prematurely.
There's no answer to your question that fits every possible case. Ultimately, you always have to test the performance with your data and on your system. | 1 | 0 | 0 | I need to validate phone numbers, and there is a very good Python library that will do this. My stack, however, is Go, and I'm really not looking forward to porting a very large library. Do you think it would be better to use the Python library by running a shell command from within the Go codebase, or by running a daemon that I then have to communicate with somehow? | Run daemon server or shell command? | 1.2 | 0 | 0 | 206 |
23,227,680 | 2014-04-22T18:41:00.000 | 0 | 0 | 0 | 0 | python,selenium,flash | 23,436,481 | 1 | true | 1 | 0 | It turns out, I needed to use selenium to scroll down the page to load all the content. | 1 | 0 | 0 | I am running selenium webdriver (firefox) using python on a headless server. I am using pyvirtualdisplay to start and stop the Xvnc display to grab the image of the sites I am visiting. This is working great except flash content is not loading on the pages (I can tell because I am taking screenshots of the pages and I just see empty space where flash content should be on the screenshots).
When I run the same program on my local unix machine, the flash content loads just fine. I have installed flash on my server, and have libflashplayer.so in /usr/lib/mozilla/plugins. The only difference seems to be that I am using the Xvnc display on the server (unless flash wasn't installed properly? But I believe it was, since I used to get a message asking me to install flash when I viewed a site that had flash content, and since installing flash I don't get that message anymore).
Does anyone have any ideas or experience with this - is there a trick to getting flash to load using a Firefox webdriver on a headless server? Thanks | Flash content not loading using headless selenium on server | 1.2 | 0 | 1 | 574 |
23,230,726 | 2014-04-22T21:40:00.000 | 3 | 0 | 1 | 0 | python | 23,230,771 | 3 | false | 0 | 0 | FORTRAN is another language that uses the ** notation for power. It predates both Python and C by a lot, so perhaps it was an influence on the BDFL. | 1 | 2 | 0 | A few languages I've seen utilise the ^ symbol, and it doesn't seem to be reserved for anything in Python. It sort of confuses me as well since the ^ symbol is (very) well known and Python is supposed to be easy to use, which is not as much the case in using the **.
Is there any logical explanation for this? I mean it's not a huge difference, but just curious for this choice? | Why doesn't Python use ^ to denote squaring a number but uses ** instead? | 0.197375 | 0 | 0 | 437 |
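For the record, the spellings side by side; in Python, ^ is bitwise XOR, not exponentiation:

```python
print(2 ** 10)     # 1024  (exponentiation operator)
print(pow(2, 10))  # 1024  (built-in equivalent)
print(2 ^ 10)      # 8     (XOR: 0b0010 ^ 0b1010 == 0b1000)
```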
23,232,172 | 2014-04-22T23:35:00.000 | 2 | 1 | 0 | 1 | python,ssh,virtual-machine,vagrant,vagrantfile | 29,586,100 | 2 | false | 0 | 0 | You have two options:
You can go the classic route of using the shell provisioner with Vagrant:
config.vm.provision "shell", inline: $script
and in your script run the Python script.
All files are pushed to /tmp; you can possibly use this to run your Python script | 2 | 1 | 0 | This is a dumb question, but please help me.
Q. How do I run a Python script that is saved on my local machine?
After vagrant up and vagrant ssh, I do not see any Python files in the VM. What if I want to run Python scripts that are saved on my Mac? I do not want to copy and paste them manually using vim.
How would you run a Python script in Vagrant ssh? | Run Python script in Vagrant | 0.197375 | 0 | 0 | 1,918 |
23,232,172 | 2014-04-22T23:35:00.000 | 2 | 1 | 0 | 1 | python,ssh,virtual-machine,vagrant,vagrantfile | 23,232,231 | 2 | true | 0 | 0 | On your guest OS there will be a folder under / called /vagrant/; it contains all the files and directories under the directory on your host machine that contains the Vagrantfile.
If you put your scripts in that folder, they will be shared with the VM.
Additionally, if you are using Chef as your provisioner, you can use a script resource to run external scripts during the provisioning step. | 2 | 1 | 0 | This is a dumb question, but please help me.
Q. How do I run a Python script that is saved on my local machine?
After vagrant up and vagrant ssh, I do not see any Python files in the VM. What if I want to run Python scripts that are saved on my Mac? I do not want to copy and paste them manually using vim.
How would you run a Python script in Vagrant ssh? | Run Python script in Vagrant | 1.2 | 0 | 0 | 1,918 |
23,232,933 | 2014-04-23T00:54:00.000 | 0 | 0 | 0 | 1 | python,ide,sublimetext2,sublimetext | 23,233,193 | 1 | false | 0 | 0 | Never mind...
View - Side Bar - Show Side Bar | 1 | 0 | 0 | I have tried Add Folder to a Project but no sidebar shows up. | How do I open a folder in Sublime Text 2 and have the inner directory show up on the side, like in Brackets? | 0 | 0 | 0 | 39 |
23,234,103 | 2014-04-23T03:13:00.000 | 14 | 0 | 0 | 0 | python,pickle | 23,234,151 | 1 | false | 0 | 0 | Save an object containing the game state before the program exits:
pickle.dump(game_state, open('gamestate.pickle', 'wb'))
Load the object when the program is started:
game_state = pickle.load(open('gamestate.pickle', 'rb'))
In your case, game_state may be a list of questions. | 1 | 4 | 1 | I'm making a Animal guessing game and i finish the program but i want to add pickle so it save questions to disk, so they won't go away when
the program exits. Can anyone help? | How to use pickle to save data to disk? | 1 | 0 | 0 | 3,645 |
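A round-trip sketch for this row; the temp-file path is illustrative (a real game would use a fixed filename):

```python
import os
import pickle
import tempfile

questions = ["Does it fly?", "Does it have four legs?"]
path = os.path.join(tempfile.mkdtemp(), "gamestate.pickle")

with open(path, "wb") as f:   # save before the program exits
    pickle.dump(questions, f)

with open(path, "rb") as f:   # load when the program starts
    restored = pickle.load(f)

print(restored == questions)  # True
```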
23,234,969 | 2014-04-23T04:49:00.000 | -1 | 1 | 0 | 0 | python,eclipse,codeskulptor | 23,235,258 | 2 | false | 0 | 0 | It is not possible without getting the source of the library.
First of all, you should contact the developers and ask them to provide you with a copy of the "simplegui" library.
Furthermore, "CodeSkulptor" is a tool which compiles Python and runs it in the browser, which makes me think that simplegui is based on JavaScript. | 2 | 1 | 0 | I'm learning to program in Python in a course via the Coursera website. We are using an environment called "CodeSkulptor" and mainly using a module called "SimpleGUI".
I was wondering if there's any way to get the module sources and attach them to Eclipse, so I can write Python using this module in Eclipse instead of using CodeSkulptor all the time...
Thanks in advance | How to use simplegui module when programming in python in eclipse? | -0.099668 | 0 | 0 | 837 |
23,237,444 | 2014-04-23T07:17:00.000 | 4 | 0 | 1 | 0 | python,mysql,database,class,oop | 23,237,519 | 1 | false | 0 | 0 | Would a Class be better for this?
Probably not.
Classes are useful when you have multiple, stateful instances that have shared methods. Nothing in your problem description matches those criteria.
There's nothing wrong with having a script with a handful of functions to perform simple data transfers (extract, transform, store). | 1 | 3 | 0 | I searched around and couldn't really find any information on this. Basically i have a database "A" and a database "B". What i want to do is create a python script (that will likely run as a cron job) that will collect data from database "A" via sql, perform an action on it, and then input that data into database "B".
I have written it using functions something along the lines of:
Function 1 gets the date the script was last run
Function 2 Gets the data from Database "A" based on function 1
Function 3-5 Perform the needed actions
Function 6 Inserts data into Database "B"
My question is, it was mentioned to me that I should use a Class to do this rather than just functions. The only problem is, I am honestly a bit hazy on Classes and when to use them.
Would a Class be better for this? Or is writing this out as functions that feed into each other better? If I were to use a Class, could you tell me how it would look? | Collecting Data from Database, functions vs classes | 0.664037 | 1 | 0 | 156 |
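The six functions in this question, boiled down to a plain-function pipeline over in-memory stand-ins for databases A and B (all names and data are invented for illustration):

```python
import datetime

def get_last_run(state):                 # function 1
    return state.get("last_run", datetime.date.min)

def extract(db_a, since):                # function 2
    return [row for row in db_a if row["created"] > since]

def transform(rows):                     # functions 3-5, collapsed to one step
    return [{**row, "name": row["name"].upper()} for row in rows]

def load(db_b, rows):                    # function 6
    db_b.extend(rows)

db_a = [{"name": "alpha", "created": datetime.date(2014, 4, 22)}]
db_b = []
state = {"last_run": datetime.date(2014, 4, 1)}

load(db_b, transform(extract(db_a, get_last_run(state))))
print(db_b)  # one transformed row lands in B
```

There is nothing here that a class would make simpler until the pipeline has to carry shared state between runs.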
23,243,596 | 2014-04-23T11:57:00.000 | 1 | 0 | 0 | 0 | python,macos,reportlab | 23,244,082 | 2 | false | 0 | 0 | Here is my solution.
Cause: I keep my Mac up to date, and as a result it seems I now have a newer (different) version of the C compiler (clang) than the one that accepted the "-mno-fused-madd" command-line switch.
Solution: I did not find the above switch in any file in the reportlab source, so it had to be coming from the computer itself. The culprit seemed to be in distutils, because setup.py uses the distutils module.
The problem was in the file /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/_sysconfigdata.py. This file contains definitions as a dictionary named build_time_vars. We are obviously in the right place as we have a build time problem.
First make a copy as a safeguard.
sudo <editor> <file path> to edit the file.
Then editing this file, search for and remove the switch -mno-fused-madd from the file. I found it in line beginning with 'CFLAGS' since this is a compile flag.
Change the line:
... -fwrapv -mno-fused-madd -DENABLE_DTRACE ... to ... -fwrapv -DENABLE_DTRACE ...
Save the file and continue with your build. It will now stay fixed. No need for environment variables or any such thing.
Edit: While you are at it, remove both _sysconfigdata.pyc and _sysconfigdata.pyo files. | 1 | 0 | 0 | When installing ReportLab 3.1.8, I ran into the problem where I kept getting the error and I could not find where this compiler option was being set.
The point in setup was:
building 'reportlab.lib._rl_accel' extension
clang: error: unknown argument: '-mno-fused-madd' [-Wunused-command-line-argument-hard-error-in-future]
clang: note: this will be a hard error (cannot be downgraded to a warning) in the future
error: command 'cc' failed with exit status 1 | clang: error: unknown argument: '-mno-fused-madd' | 0.099668 | 0 | 0 | 786 |
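The manual edit described above amounts to deleting one token from the CFLAGS string; sketched here as a plain string operation on a sample line (the real fix edits _sysconfigdata.py by hand):

```python
# Illustrative CFLAGS value; the real one in _sysconfigdata.py is longer.
cflags = "-fno-strict-aliasing -fwrapv -mno-fused-madd -DENABLE_DTRACE"
fixed = " ".join(flag for flag in cflags.split() if flag != "-mno-fused-madd")
print(fixed)  # -fno-strict-aliasing -fwrapv -DENABLE_DTRACE
```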
23,246,013 | 2014-04-23T13:34:00.000 | 1 | 0 | 0 | 0 | python,numpy,fits,pyfits | 23,254,015 | 1 | false | 0 | 0 | The expression data.field[('zquality' > 2) & ('pgal'==3)] is asking for fields where the string 'zquality' is greater than 2 (always true in Python 2) and where the string 'pgal' is equal to 3 (always false).
Actually, chances are you're getting an exception, because data.field is a method on the Numpy recarray objects that PyFITS returns tables in.
You want something like data[(data['zquality'] > 2) & (data['pgal'] == 3)].
This expression means "give me the rows of the 'zquality' column of data containing values greater than 2. Then give me the rows of the 'pgal' column of data with values equal to three. Now give me the full rows of data selected from the logical 'and' of the two row masks. | 1 | 0 | 1 | I have opened a FITS file in pyfits. The HEADER file reads XTENSION='BINTABLE' with DIMENSION= 52989R x 36C with 36 column tags like, 'ZBEST', 'ZQUALITY', 'M_B', 'UB', 'PGAL' etc.
Now, I have to choose objects from the data with 'ZQUALITY' greater than 2 & 'PGAL' equals to 3. Then I have to make a histogram for the 'ZBEST' of the corresponding objects obeying the above conditions. Also I have to plot 'M_B' vs 'UB' for those objects.
At last I want to slice the 'ZBEST' into three slices (zbest < 0.5), (0.5 < zbest < 1.0), (zbest > 1.0) and want to plot histogram and 'M_B' vs 'UB' diagram of them separately.
I am stuck at choosing the data obeying the two conditions. Can anyone please tell me how can I choose the objects from the data satisfying both the conditions ('ZQUALITY' > 2 & 'PGAL' == 3 )? I am using like: data.field[('zquality' > 2) & ('pgal'==3)] but it's not working. | Condtionally selecting values from a Numpy array returned from PyFITS | 0.197375 | 0 | 0 | 229 |
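The combined row-mask in the answer, mimicked with plain Python dictionaries (no FITS file or numpy assumed; the values are invented) just to show which rows the expression selects:

```python
rows = [
    {"zquality": 3, "pgal": 3, "zbest": 0.4},   # passes both conditions
    {"zquality": 1, "pgal": 3, "zbest": 0.9},   # fails zquality > 2
    {"zquality": 4, "pgal": 2, "zbest": 1.3},   # fails pgal == 3
]

# Same logic as data[(data['zquality'] > 2) & (data['pgal'] == 3)]
selected = [r for r in rows if r["zquality"] > 2 and r["pgal"] == 3]
print([r["zbest"] for r in selected])  # [0.4]
```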
23,248,765 | 2014-04-23T15:24:00.000 | 3 | 0 | 0 | 0 | python,django | 23,249,113 | 1 | true | 1 | 0 | You should not override __init__, because that is called in all cases when a model is being instantiated, including when you load it from the database.
A good way to do what you want is to check the value of self.pk within your save method: if it is None, then this is a new instance being created. | 1 | 2 | 0 | I overrode the save() method of my Foo class so that when I create a Foo instance, some logic occurs. It works well.
Nevertheless, I have other methods in other classes that update Foo instances, and of course, I have to save the changes by calling the save() method. But I want them to update directly, without going through the logic I made for object creation.
Is there an elegant solution to that?
What about overriding the __init__() method instead of save()? (I was told it was bad practice, but I'm not sure I understand why.)
Thank you. | How to override Django model creation without affecting update? | 1.2 | 0 | 0 | 24 |
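The self.pk check from the accepted answer, mimicked with a plain class (no running Django assumed) to show the create-vs-update split:

```python
class FooLike:
    def __init__(self):
        self.pk = None                    # unsaved instances have no primary key
        self.creation_hook_ran = False

    def save(self):
        if self.pk is None:               # first save only: creation logic
            self.creation_hook_ran = True
            self.pk = 1                   # the database would assign this
        # ... the normal save continues for both create and update ...

foo = FooLike()
foo.save()                                # creation: hook fires
ran_on_create = foo.creation_hook_ran

foo.creation_hook_ran = False
foo.save()                                # update: hook skipped
print(ran_on_create, foo.creation_hook_ran)  # True False
```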
23,250,659 | 2014-04-23T16:52:00.000 | 0 | 0 | 1 | 0 | python,pygame | 28,335,704 | 1 | false | 0 | 1 | The pygame library must be in site-packages folder in Lib folder of your python folder.
It's better not to personalize this kind of installation. Why separate a language from its libraries? | 1 | 0 | 0 | I used to have PyGame installed on my PC, but I formatted my PC and now need it installed again. I have followed the same process as last time and have installed Python 3.3 and PyGame 3.3.0 off of GitBucket. I installed Python on my only HDD as Python33 and PyGame in a different folder on my HDD as PythonX, but for some reason when entering import pygame it just doesn't find the module. What am I doing wrong? | Installing PyGame issues | 0 | 0 | 0 | 108 |
23,251,063 | 2014-04-23T17:15:00.000 | 1 | 1 | 0 | 0 | python,visual-studio-2012,wireshark | 23,254,680 | 1 | false | 0 | 0 | If you know the port number used by the application you can filter by that port by putting tcp.port == 1234 in the filter toolbar. | 1 | 1 | 0 | I wrote an application in VS2012 in Python and I want to see the messages that are being sent and received by the application.
When I open Wireshark I see a lot of messages go through.
Is there a way to focus Wireshark on only my application?
Thank you! | WireShark messages | 0.197375 | 0 | 0 | 128 |
23,252,370 | 2014-04-23T18:26:00.000 | 45 | 0 | 1 | 0 | python,class | 23,252,537 | 3 | false | 0 | 0 | No. __dict__ is an attribute used for introspection - it holds the object's attributes. What you want is a brand new method; call it as_dict, for example - that's the convention. The thing to understand here is that dict objects don't necessarily need to be created with the dict constructor. | 1 | 47 | 0 | I have a class where I want to get the object back as a dictionary, so I implemented this in __dict__(). Is this correct?
I figured once I did that, I could then use the dict (custom object), and get back the object as a dictionary, but that does not work.
Should you overload __dict__()? How can you make it so a custom object can be converted to a dictionary using dict()? | Overloading __dict__() on python class | 1 | 0 | 0 | 42,908 |
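The as_dict convention from the answer, sketched on a made-up class:

```python
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def as_dict(self):
        """Explicit conversion method; __dict__ stays untouched for introspection."""
        return {"x": self.x, "y": self.y}

p = Point(1, 2)
print(p.as_dict())  # {'x': 1, 'y': 2}
print(vars(p))      # built-in introspection still works as usual
```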
23,252,796 | 2014-04-23T18:48:00.000 | 1 | 0 | 1 | 0 | python,loops,time,while-loop | 23,253,070 | 3 | false | 0 | 0 | A word of warning: you cannot expect real-time behavior on a non-real-time system. The sleep family of calls guarantees at least a given delay, but may well delay you for more.
Therefore, once you return from sleep, query the current time and make the calculations into the "future" (accounting for the calculation time). | 1 | 3 | 0 | I'm currently reading physics at university, and I'm learning Python as a little hobby.
To practise both at the same time, I figured I'd write a little "physics engine" that calculates the movement of an object based on x, y and z coordinates. I'm only going to return the movement as text (at least for now!) but I want the position updates to be real-time.
To do that I need to update the position of an object, let's say a hundred times a second, and print it back to the screen. So every 10 ms the program prints the current position.
So if the execution of the calculations takes 2 ms, then the loop must wait 8 ms before it prints and recalculates the next position.
What's the best way of constructing a loop like that, and is 100 times a second a fair frequency, or would you go slower, like 25 times/sec? | Control the speed of a loop | 0.066568 | 0 | 0 | 3,375 |
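One common shape for the 10 ms loop asked about here: track an absolute deadline and sleep only for whatever time is left in the tick, so a 2 ms update still yields one tick per 10 ms. A short sketch (the tick count is capped just for the demo):

```python
import time

TICK = 0.01  # 10 ms => 100 updates per second
ticks = 0
deadline = time.monotonic()

while ticks < 5:
    # ... physics update would run here ...
    ticks += 1
    deadline += TICK
    remaining = deadline - time.monotonic()
    if remaining > 0:          # finished early: sleep off the rest of the tick
        time.sleep(remaining)  # note: sleep guarantees *at least* this delay

print(ticks)  # 5
```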
23,254,639 | 2014-04-23T20:33:00.000 | 0 | 0 | 1 | 1 | python,bash,emacs | 24,714,702 | 1 | true | 0 | 0 | The issue was unrelated to ipython. I was starting Emacs from my desktop environment menu (GNOME on CentOS 6), rather than from the terminal. Doing the latter resolved the issue. | 1 | 1 | 0 | I just noticed that my IPython (as called by run-python against my variable python-shell-interpreter) doesn't see all my environment variables, but IPython called from bash in the terminal does. I am exporting MYVAR in both .bash_profile and .bashrc.
When I evaluate os.getenv('MYVAR') in the terminal ipython, it works. But inside of emacs nothing shows up. Why would it be different in Emacs? | Emacs IPython doesn't see the same environment variables as IPython from the terminal | 1.2 | 0 | 0 | 143 |
23,255,293 | 2014-04-23T21:14:00.000 | 1 | 0 | 1 | 0 | python | 23,255,311 | 2 | false | 0 | 0 | Always use the "!=" form.
The two forms are equivalent in Python 2 and earlier versions because of the languages the original Python inherited from (features from BASIC, ABC and C) - but since then (1991), the "!=" form has been preferred and "<>" deprecated, since most imperative languages use this operator (the same one used in C), and to avoid two different ways of performing the same operation.
It is interesting to note that the "<>" form is deprecated and is not valid at all in Python 3, so the "!=" operator is the official one to be used in new code. | 1 | 0 | 0 | I didn't find much info on the web (or maybe one is deprecated?). Is there a use preference/case for one operator over the other? It looks from some docs on the web that they are similar...
Thanks for any suggestion. | Python When to use <> and when != operator | 0.099668 | 0 | 0 | 45 |
23,255,293 | 2014-04-23T21:14:00.000 | 1 | 0 | 1 | 0 | python | 23,255,326 | 2 | false | 0 | 0 | Use <> when you want to pretend you are coding in BASIC. Hey, look, print is a statement!
Use != at other times. It's the only inequality operator supported in Python 3.x (<> has been removed). | 2 | 0 | 0 | I didn't find much info on the web (or may be one is deprecated ?). Is there a use preference/case for one operator over the other ? It looks from some docs on the web that they are similar...
Thanks for any suggestion. | Python When to use <> and when != operator | 0.099668 | 0 | 0 | 45 |
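Both claims are easy to check from a Python 3 interpreter (a quick sketch):

```python
print(1 != 2)      # True
print("a" != "a")  # False

# The old <> spelling no longer even compiles under Python 3:
try:
    compile("1 <> 2", "<demo>", "eval")
    legacy_parses = True
except SyntaxError:
    legacy_parses = False
print(legacy_parses)  # False
```

Under Python 2 the compile() call would succeed, which is exactly why "!=" is the portable spelling.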
23,257,123 | 2014-04-23T23:36:00.000 | 2 | 0 | 0 | 0 | python,django,python-3.x,virtualenv,production | 23,259,806 | 1 | true | 1 | 0 | Here are my thoughts:
Arguments for grouping in a common folder
Cleaner management of multiple venvs on a given machine. Good tools to support checking which are available, adding new ones, purging old ones, etc.
More sensible (and more space-efficient) when sharing one or more venvs across more than one project
Allows the use of some nice features like autocompletion of venv names
Arguments for keeping with the project
Clear relationship between the venv and the project. Eliminates ambiguity and is less error-prone, since there's little chance of running the wrong venv for a project (which is not always immediately evident).
Makes more sense when there is a one-to-one relationship between venvs and projects
May be the preferred approach when working in teams from separate accounts.
More straightforward when deploying across identical hosts; (just rsync the whole project). Nothing stopping you from doing this with a venv in a common folder, but it feels more natural to deploy a single tree.
Easier to sandbox the whole application.
I tend to prefer the former for more experimental / early-stage work, and the latter for projects that are deployed. | 1 | 9 | 0 | When using virtualenv (or virtualenvwrapper), the recommended practice is to group all your virtual environments together ... for example in ~/.virtualenvs
BUT, I've noticed in reading a number of articles on deploying Django applications, that the recommendation seems to be to put your virtual environments somewhere under the root of the individual web application ... for example in /srv/www/example.com/venv.
My questions are:
Why?
Would it matter if I went one way or the other?
And is one way recommended over another? | Where should virtualenvs go in production? | 1.2 | 0 | 0 | 922 |
23,258,176 | 2014-04-24T01:28:00.000 | 0 | 1 | 1 | 0 | java,python,c++,compilation,translation | 23,258,361 | 1 | false | 0 | 0 | All of the translation process is done when you compile a Java program. This is no different than compiling a C++ program or any other compiled language. The biggest difference is that this translation is targeted to the Java Byte Code language rather than assembly or machine language. The Byte Code undergoes its own translation process (including many of the same stages) when the program is run. | 1 | 4 | 0 | So my question today is about the translation process of Java. I understand the general translation process itself but I am not too sure how it applies to Java.
Where does the lexical analysis take place? When is the symbol table created? When does the syntax analysis happen, and how is the syntax tree created?
From what I have already researched and been able to understand, the Java source code is translated into platform-independent byte-code for the JVM, or Java Virtual Machine. Is this when it undergoes lexical analysis?
I also know that after it is translated into byte-code it is translated into machine code, but I don't know how it progresses after that.
Last but not least, is the Translation process of Java and different from C++ or Python? | What is the Translastion Process of Java? | 0 | 0 | 0 | 954 |
23,259,831 | 2014-04-24T04:28:00.000 | 0 | 0 | 1 | 0 | python,string | 23,260,055 | 3 | true | 0 | 0 | The problems are
There should be a comma(,) before the 1st quotation.
print (n+1), " : x1=",x1,"f(x)=",fx
printing f(x) will print the correct value only if the function has a return statement | 1 | 0 | 0 | print (n+1) ": x1= ",x1,"f(x)= ",fx
I want it to print what the x1 is and the value of the function at x1 (fx), but I get an invalid syntax on the end of the first quotation. Can someone explain to me what my problem is. | python print invalid syntax | 1.2 | 0 | 0 | 312 |
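For reference, the corrected line alongside a format-based version that also works under Python 3 (n, x1 and fx are stand-ins for the asker's variables; the values here are hypothetical):

```python
# Hypothetical values standing in for the asker's n, x1 and fx:
n, x1, fx = 0, 1.5, 2.25

# Python 2 statement form, as the answer suggests:
#     print (n + 1), ": x1=", x1, "f(x)=", fx
# An equivalent that is valid in both Python 2.7 and 3:
line = "{}: x1= {} f(x)= {}".format(n + 1, x1, fx)
print(line)  # 1: x1= 1.5 f(x)= 2.25
```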
23,262,767 | 2014-04-24T07:41:00.000 | 0 | 1 | 0 | 0 | python,ldap | 23,263,099 | 1 | false | 0 | 0 | You are right, there is an ongoing communication between your workstation and the Active Directory server, which can use LDAP protocol.
Since I don't know what you tried so far, I suggest that you look into the python module python-ldap. I have used it in the past to connect, query and modify information on Active-Directory servers. | 1 | 0 | 0 | When I logon to my company's computer with the AD username/password, I find that my Outlook will launch successfully. That means the AD authentication has passed.
In my opinion, Outlook retrieves the AD user information, then sends it to an LDAP server for verification.
But I don't know how it retrieves the information, or by some other methods? | How does auto-login Outlook successfully when in AD environment? | 0 | 0 | 1 | 104 |
23,264,037 | 2014-04-24T08:45:00.000 | 0 | 0 | 0 | 0 | python,machine-learning,decision-tree,cart-analysis | 23,264,484 | 1 | true | 0 | 0 | When you train your tree using the training data set, every time you do a split on your data, the left and right node will end up with a certain proportion of instances from class A and class B. The percentage of instances of class A (or class B) can be interpreted as probability.
For example, assume your training data set includes 50 items from class A and 50 items from class B. You build a tree of one level, by splitting the data once. Assume after the split, your left node ends up having 40 instances of class A and 10 instances of class B and the right node has 10 instances of class A and 40 instances of class B. Now the probabilities in the nodes will be 40/(10+40) = 80% for class A in left node, and 10/(10+40) = 20% for class A in left node (and vice versa for class B).
Exactly the same applies for deeper trees: you count the instances of classes and compute the proportion. | 1 | 0 | 1 | I'm implementing a decision tree based on the CART algorithm and I have a question. Now I can classify data, but my task is not only to classify data: I want to have a probability of correct classification in the end nodes.
For example, I have a dataset that contains data of classes A and B. When I put an instance of some class into my tree, I want to see with what probability the instance belongs to class A and to class B.
How can I do that? How can I improve CART to have probability distribution in the end nodes? | Get probability of classification from decision tree | 1.2 | 0 | 0 | 2,896 |
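The proportion arithmetic the answer describes fits in a few lines (a sketch independent of any particular tree implementation):

```python
def leaf_probabilities(counts):
    """Turn per-class instance counts at a leaf into probability estimates."""
    total = sum(counts.values())
    return {cls: n / total for cls, n in counts.items()}

# The answer's example: a left node holding 40 instances of A and 10 of B.
left = leaf_probabilities({"A": 40, "B": 10})
print(left)  # {'A': 0.8, 'B': 0.2}
```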
23,265,183 | 2014-04-24T09:38:00.000 | 1 | 0 | 0 | 1 | google-app-engine,python-2.7,google-cloud-datastore,datamodel | 23,277,351 | 1 | true | 1 | 0 | StructuredPropertys belong to the entity that contains them - so your assumption that updating a single StructuredProperty will invalidate the memcache is correct.
LocalStructuredProperty is the same behavior - the advantage however is that each property on a LocalStructuredProperty is obfuscated into binary storage - the datastore has no idea about the structure of a LocalStructuredProperty. (There is probably a deserialization computational cost attributed to these properties - but that depends a lot on the amount of data they contain, I imagine.)
To contrast, StructuredProperty actually makes its child properties available for Query indexing in most cases - allowing you to perform complicated lookups.
Keep in mind - you should be calling put() for the containing entity, not for each StructuredProperty or LocalStructuredProperty - so you should be seeing a single RPC call for updating that parent entity, regardless of the number of repeated properties that exist.
I would advise using a StructuredProperty that contains ndb.IntegerProperty(repeated=True), rather than making 'parallel lists' of integers and floats - that adds more complexity to your Python model, and is exactly the behavior that ndb.StructuredProperty strives to replace. | 1 | 0 | 0 | I am considering ways of organizing data for my application.
One data model I am considering would entail having entities where each entity could contain up to roughly 100 repeated StructuredProperties. The StructuredProperties would be mostly read and updated only very infrequently. My question is - if I update any of those StructuredProperties, will the entire entity get deleted from Memcache and will the entire entity be reread from the ndb? Or is it just the single StructuredProperty that will get reread? Is this any different with LocalStructuredProperty?
More generally, how are StructuredProperties organized internally? In situations where I could use multiple Float or Int properties - and I am using a StructuredProperty instead just to make my model more readable - is this a bad idea? If I am reading an entity with 100 StructuredProperties will I have to make 100 rpc calls or are the properties retrieved in bulk as part of the original entity? | on google app engine are how are StructuredProperties updated? | 1.2 | 0 | 0 | 153 |
23,268,179 | 2014-04-24T11:55:00.000 | 1 | 0 | 0 | 0 | python,sql,qt,pyqt | 23,281,662 | 2 | false | 0 | 1 | This question is a bit broad, but I'll try answering it anyway. Qt does come with some models that can be connected to a database. Specifically classes like QSqlTableModel. If you connect such a model to your database and set it as the model for a QTableView it should give you most of the behavior you want.
Unfortunately I don't think I can be any more specific than that. Once you have written some code, feel free to ask a new question about a specific issue (remember to include example code!) | 1 | 1 | 0 | I have made a database file using SQL commands in python. i have used quite a lot of foreign keys as well but i am not sure how to display this data onto qt with python? any ideas? i would also like the user to be able to add/edit/delete data | How to display data from a database file onto pyqt so that the user can add/delete/edit the data? | 0.099668 | 1 | 0 | 4,532 |
23,271,792 | 2014-04-24T14:26:00.000 | 0 | 0 | 0 | 0 | python,date,selenium,selenium-webdriver | 23,276,624 | 1 | false | 0 | 0 | If you want to keep the date/time constant for the purpose of your tests, you can just hardcode the date/time and pass it to the methods you want to test. If "hardcoding a date" doesn't feel right, you could hardcode the specific sequence of actions that a browser might go through to pick a particular date that happens to be static.
After all, you are testing whether your method correctly determines whether a date is before or after another date; how the input is provided is not relevant. | 1 | 0 | 0 | I'm writing up some Selenium tests for my site, and I'd like to test the date/timepickers I have, primarily to make sure that the code I put in to prevent users from putting in dates out of order works.
However, I've realized that the tests I have won't work the way I want them to if it's close to midnight, as the times I'm passing in will wrap to the next day, and be earlier than the current time rather than later, or vice versa.
Is it possible to run these tests as if it were a specific date/time? | Selenium tests using static date | 0 | 0 | 1 | 304 |
23,277,623 | 2014-04-24T19:03:00.000 | 0 | 1 | 1 | 0 | python,c,arrays,python-2.7,io | 28,697,765 | 1 | false | 0 | 0 | struct.unpack('1000000I',f.read()) doesn't seem too long to me. – roippi | 1 | 0 | 0 | I have a list of 1,000,000 ints stored as a binary file. How do I load this quickly into a Python list? In C I would just read the file into a char array and cast that array as an int array. Is there a way to do something equivalent to this in Python? I know about Python's struct module, but as far as I can tell, that would require an extremely long format string to convert all the ints at once. | Binary file to python integer list | 0 | 0 | 0 | 152 |
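The struct.unpack one-liner works; the array module is even closer to the C idiom of reading straight into an int array. A runnable sketch (it assumes 4-byte native-endian ints, which matches array's 'i' typecode on mainstream platforms; the scratch file stands in for the real one):

```python
import array
import os
import struct
import tempfile

# Build a scratch file of 1000 little-endian 32-bit ints standing in for the real file.
path = os.path.join(tempfile.mkdtemp(), "ints.bin")
with open(path, "wb") as f:
    f.write(struct.pack("<1000i", *range(1000)))

# Read it back the way C would: straight into an int array.
a = array.array("i")
with open(path, "rb") as f:
    a.fromfile(f, 1000)
nums = a.tolist()
print(nums[:3], nums[-1])  # [0, 1, 2] 999
```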
23,279,546 | 2014-04-24T20:51:00.000 | 1 | 0 | 0 | 0 | python,pandas | 23,279,735 | 2 | false | 0 | 0 | This depends on your operating system.
You're saying you'd like to save the file on the desktop of the user who is running the script right?
On linux (not sure if this is true of every distribution) you could pass in "~/desktop/my_file.xls" as the path where you're saving the file | 1 | 2 | 1 | Is there a way to use pandas to_excel function to write to the desktop, no matter which user is running the script? I've found answers for VBA but nothing for python or pandas. | to_excel on desktop regardless of the user | 0.099668 | 1 | 0 | 781 |
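A sketch of building that path portably with the standard library (the folder name "Desktop" is an assumption; localized or headless systems may differ):

```python
import os

# Resolve the current user's desktop instead of hardcoding a username.
desktop = os.path.join(os.path.expanduser("~"), "Desktop")
target = os.path.join(desktop, "report.xlsx")  # then: df.to_excel(target)
print(target.endswith(os.path.join("Desktop", "report.xlsx")))  # True
```

os.path.expanduser("~") resolves to the home directory of whichever user runs the script, which is what makes this work regardless of the user.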
23,281,357 | 2014-04-24T22:59:00.000 | 1 | 0 | 0 | 0 | python,django,asynchronous,celery,django-celery | 23,290,106 | 2 | false | 1 | 0 | Note that polling means you'll be keeping the request and connection open. On web applications with large amount of hits, this will waste a significant amount of resource. However, on smaller websites the open connections may not be such a big deal. Pick a strategy that's easiest to implement now that will allow you to change it later when you actually have performance issues. | 1 | 3 | 0 | I want the user to be able to click a button to generate a report, show him a generating report animation and then once the report finishes generating, display the word success on the page.
I am thinking of creating a celery task when the generate report button is clicked. What is the best way for me to update the UI once the task is over? Should I constantly be checking via AJAX calls if the task has been completed? Is there a better way or third party notification kind of app in Django that helps with this process?
Thanks!
Edit: I did more research and the only thing I could find is three way data bindings with django-angular and django-websocket-redis. Seems like a little bit of an overkill just for this small feature. I guess without web sockets, the only possible way is going to be constantly polling the backend every x seconds to check if the task has completed. Any more ideas? | What is the best way to update the UI when a celery task completes in Django? | 0.099668 | 0 | 0 | 761 |
23,281,952 | 2014-04-25T00:01:00.000 | -2 | 0 | 0 | 0 | python,pygame | 23,282,068 | 5 | false | 0 | 1 | I do not know python or pygame, but depending on what you are building, it may be easier to just make an image using a program like inkscape for pc and mac, or inkpad for iPad. Both of these let you make a diagonal ellipse, and then export it as a .png and use it in your code. Again, if this is possible really depends on what you are doing with the ellipse. | 1 | 4 | 0 | Does anyone know of an easy way to draw ellipses that are not aligned to the x & y axis. I am very new to pygame so please forgive my ignorance, but I cannot find anything related to it.
If no easy method exists, can someone help me in how I might draw this besides generating many many points on the ellipse and plotting all of them? | drawing a diagonal ellipse with pygame | -0.07983 | 0 | 0 | 7,513 |
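Two common routes, neither specific to this thread: draw the ellipse onto its own transparent Surface and rotate that with pygame.transform.rotate, or generate the rotated outline yourself and pass it to pygame.draw.polygon / draw.lines. The second route is plain math and is sketched below:

```python
import math

def rotated_ellipse_points(cx, cy, a, b, angle_deg, steps=64):
    """Points of an ellipse with semi-axes a, b, rotated by angle_deg about (cx, cy)."""
    rot = math.radians(angle_deg)
    pts = []
    for i in range(steps):
        t = 2 * math.pi * i / steps
        x, y = a * math.cos(t), b * math.sin(t)  # axis-aligned ellipse point
        pts.append((cx + x * math.cos(rot) - y * math.sin(rot),
                    cy + x * math.sin(rot) + y * math.cos(rot)))  # rotate it
    return pts

pts = rotated_ellipse_points(0, 0, 10, 5, 90, steps=4)
print(len(pts))  # 4
```

With steps around 64 the polygon is visually indistinguishable from a true ellipse at typical screen sizes.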
23,284,409 | 2014-04-25T04:49:00.000 | 28 | 0 | 0 | 0 | python,merge,pandas | 38,764,796 | 4 | false | 0 | 0 | Consider the following:
df_one is first DataFrame
df_two is second DataFrame
Present in First DataFrame and Not in Second DataFrame
Solution: by Index
df = df_one[~df_one.index.isin(df_two.index)]
The index can be replaced by any column on which you wish to base the exclusion.
In the above example, I've used the index as the reference between both DataFrames.
Additionally, you can also build a more complex query using a boolean pandas.Series to solve the above. | 1 | 18 | 1 | The operation that I want to do is similar to a merge. For example, with an inner merge we get a data frame that contains rows that are present in the first AND second data frames. With an outer merge we get a data frame with rows that are present EITHER in the first OR in the second data frame.
What I need is a data frame that contains rows that are present in the first data frame AND NOT present in the second one? Is there a fast and elegant way to do it? | How to subtract rows of one pandas data frame from another? | 1 | 0 | 0 | 34,025 |
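A runnable version of the index-based exclusion, on small hypothetical frames:

```python
import pandas as pd

df_one = pd.DataFrame({"k": [1, 2, 3, 4], "v": list("abcd")}).set_index("k")
df_two = pd.DataFrame({"k": [3, 4, 5], "v": list("cde")}).set_index("k")

# Keep only the rows of df_one whose index is absent from df_two.
only_in_one = df_one[~df_one.index.isin(df_two.index)]
print(list(only_in_one.index))  # [1, 2]
```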
23,284,759 | 2014-04-25T05:24:00.000 | 6 | 0 | 0 | 0 | python,pdf,pandas | 23,285,666 | 7 | false | 1 | 0 | this is not possible. PDF is a data format for printing. The table structure is therefor lost. with some luck you can extract the text with pypdf and guess the former table columns. | 2 | 34 | 1 | Is it possible to open PDFs and read it in using python pandas or do I have to use the pandas clipboard for this function? | Opening a pdf and reading in tables with python pandas | 1 | 0 | 0 | 82,529 |
23,284,759 | 2014-04-25T05:24:00.000 | 3 | 0 | 0 | 0 | python,pdf,pandas | 41,133,523 | 7 | false | 1 | 0 | Copy the table data from a PDF and paste into an Excel file (which usually gets pasted as a single rather than multiple columns). Then use FlashFill (available in Excel 2016, not sure about earlier Excel versions) to separate the data into the columns originally viewed in the PDF. The process is fast and easy. Then use Pandas to wrangle the Excel data. | 2 | 34 | 1 | Is it possible to open PDFs and read it in using python pandas or do I have to use the pandas clipboard for this function? | Opening a pdf and reading in tables with python pandas | 0.085505 | 0 | 0 | 82,529 |
23,288,692 | 2014-04-25T09:04:00.000 | 2 | 0 | 0 | 0 | python,c++,dll,binding,module | 23,289,534 | 1 | true | 0 | 1 | When Python calls into C++ code, the code it executes is the machine code generated by the C++ compiler. You will have some cost at the interface level, as you have to marshal Python types into C++ types and vice versa, but the C++ code itself will run at pretty much the same speed as if it were called from C++; any differences will be due to different locality of dynamically allocated memory due to different memory use patterns (which would cause your C++ code to run at different speeds depending on which C++ application called it as well). | 1 | 1 | 0 | I'll keep my question short and simple.
Assume I have a python program which calls C++ code from a DLL compiled in C/C++.
-Will the speed/performance of the executing code be preserved?
Assume I have a python program ... has a binding to a C++ library (for example - GTK or Wx).
-Is the speed going to match that of the library as if it was compiled with a C++ program?
Thank you. | Python bindings; calling C code & performance | 1.2 | 0 | 0 | 1,197 |
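The mechanics can be seen with ctypes against any shared library; this POSIX-specific sketch loads the C math library the same way you would load your own .so/.dll (for your own C++ code the functions must be exported extern "C" so their names aren't mangled). The argtypes/restype declarations are exactly the marshalling boundary the answer mentions:

```python
import ctypes
import ctypes.util

# POSIX-specific: locate and load the C math library.
libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.pow.argtypes = [ctypes.c_double, ctypes.c_double]  # Python floats marshalled here
libm.pow.restype = ctypes.c_double
result = libm.pow(2.0, 10.0)
print(result)  # 1024.0
```

The call itself runs the library's native machine code; only the argument conversion at the boundary costs extra.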
23,288,911 | 2014-04-25T09:14:00.000 | 1 | 0 | 0 | 0 | python,html,django | 23,297,917 | 1 | true | 1 | 0 | Part of your page that contains the paragraph tags is a piece of JavaScript that contains a timer.
Every once in a while it does an Ajax request to get the data with regard to "what's going on now in the system".
If you use the Ajax facilities of jQuery, which is probably the easiest, you can pass a JavaScript callback function that will be called when the request is answered. This callback function receives the data served by Django as the response to the asynchronous request. In the body of this callback you put the code to fill your paragraph.
Django doesn't have to "know" about Ajax, it just serves the required info from a different URL than your original page containing the paragraph tags. That URL is part of your Ajax request from the client.
So it's the client that takes the initiative. Ain't no such thing as server push (fortunately). | 1 | 0 | 0 | I am developing a project in Python using Django. The project is doing a lot of work in the background, so I want to notify users what's going on now in the system. For this I have declared a p tag in HTML and I want to send data to it.
I know I can do this with templates, but I am a little confused, as 5 functions need to pass the status to the p tag, and if I use render_to_response() it refreshes the page every time a status is passed from a function.
Anyone please tell me how to do this in the correct way | Pass Data From Python To Html Tag | 1.2 | 0 | 0 | 117 |
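The round-trip the answer describes, modelled entirely in Python for illustration (in the real app the "endpoint" is a Django view on its own URL returning JSON, and the loop is a JavaScript timer that writes each status into the p tag):

```python
import json

# Stand-in for the Django view that reports background progress as JSON.
_state = {"step": 0}
def status_endpoint():
    _state["step"] += 1
    return json.dumps({"done": _state["step"] >= 3, "status": "step %d" % _state["step"]})

# Stand-in for the browser-side timer: poll until the work reports done.
seen = []
while True:
    reply = json.loads(status_endpoint())
    seen.append(reply["status"])
    if reply["done"]:
        break
print(seen)  # ['step 1', 'step 2', 'step 3']
```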
23,298,546 | 2014-04-25T16:35:00.000 | 1 | 1 | 0 | 0 | python,amazon-web-services,amazon-ec2,pyramid | 24,533,996 | 3 | false | 1 | 0 | I would suggest to run two instances and use Elastic Load Balancer.
Never run anything important on a single EC2 instance; EC2 instances are not durable and can suddenly vanish, taking whatever data you had stored on them.
Everything else should work as described in the Pyramid Cookbook. | 2 | 2 | 0 | I have been given a task to complete: Deploy my pre-existing Pyramid application onto our EC2 Linux server. I would like to do this with a minimal amount of stress and error, especially considering I am totally new to AWS.
What I have done so far:
Setup the EC2 instance which I can SSH into.
Locally develop my Pyramid application
And, we version control the application with GitHub.
We are using: Pyramid (latest), along with Python 2.7.5 and Postgresql (via SQLAlchemy and Alembic.)
What is a basic, high-level list of steps to ensure that my application is deployed appropriately?
Where, if at all, does something like Elastic Beanstalk come into play?
And, considering my project is currently in a Git repo, what steps or considerations must be taken to accommodate this?
I'm not looking for opinions on how to tweak my setup or anything like that. I am looking for a non-debatable, comprehensible set of steps or considerations to deploy my application in the most basic form. This server is for development purposes only, so I am not looking for a full-blown solution.
I have researched this topic for Django projects, and frankly, I am a bit overwhelmed with the amount of different possible options. I am trying to boil this situation down to its minimal components.
I appreciate the time and help. | Deploying Pyramid application on AWS EC2 | 0.066568 | 0 | 0 | 1,592 |
23,298,546 | 2014-04-25T16:35:00.000 | 2 | 1 | 0 | 0 | python,amazon-web-services,amazon-ec2,pyramid | 23,324,088 | 3 | true | 1 | 0 | Deploying to an EC2 server is just like deploying to any other Linux server.
If you want to put it behind a load balancer, you can do so, which is fully documented.
You can also deploy to Elastic Beanstalk. Whereas EC2 is a normal Linux server, Beanstalk is more like deploying to an environment: you just push all your git changes into an S3 repo, and your app then gets built and deployed onto Beanstalk.
Meaning no server setups, no configuration (other than the very basics) and all new changes you push to S3, get built and update each version of your app that may have been launched on beanstalk.
You don't want to host your database server on EC2; use Amazon's RDS database service, which is dead simple and takes about two minutes to set up and configure.
As far as file storage goes, move everything to S3.
EC2 and Beanstalk should not be used for any form of storage. | 2 | 2 | 0 | I have been given a task to complete: Deploy my pre-existing Pyramid application onto our EC2 Linux server. I would like to do this with a minimal amount of stress and error, especially considering I am totally new to AWS.
What I have done so far:
Setup the EC2 instance which I can SSH into.
Locally develop my Pyramid application
And, we version control the application with GitHub.
We are using: Pyramid (latest), along with Python 2.7.5 and Postgresql (via SQLAlchemy and Alembic.)
What is a basic, high-level list of steps to ensure that my application is deployed appropriately?
Where, if at all, does something like Elastic Beanstalk come into play?
And, considering my project is currently in a Git repo, what steps or considerations must be taken to accommodate this?
I'm not looking for opinions on how to tweak my setup or anything like that. I am looking for a non-debatable, comprehensible set of steps or considerations to deploy my application in the most basic form. This server is for development purposes only, so I am not looking for a full-blown solution.
I have researched this topic for Django projects, and frankly, I am a bit overwhelmed with the amount of different possible options. I am trying to boil this situation down to its minimal components.
I appreciate the time and help. | Deploying Pyramid application on AWS EC2 | 1.2 | 0 | 0 | 1,592 |
23,299,034 | 2014-04-25T17:01:00.000 | 0 | 0 | 1 | 0 | python,pyodbc | 44,371,598 | 3 | false | 0 | 0 | I fixed this by installing pyodbc 3.0.10. The latest version of pyodbc didn't work on Windows with Python 3.4
However, pyodbc 3.0.10 did work for me.
Install command on the command prompt: pip install pyodbc==3.0.10 | 1 | 19 | 0 | pyodbc is a very nice thing, but the Windows installers only work with their very specific Python version. With the release of Python 3.4, the only available installers just stop once they don't see 3.3 in the registry (though 3.4 is certainly there).
Copying the .pyd and .egg-info files from a 3.3 installation into the 3.4 site-packages directory doesn't seem to do the trick. When importing pyodbc, an ImportError is thrown: ImportError: DLL load failed: %1 is not a valid Win32 application.
Is there a secret sauce that can be added to make the 3.3 file work correctly? Or do we just need to wait for a 3.4 installer version? | pyodbc and python 3.4 on Windows | 0 | 0 | 0 | 23,213 |
23,299,694 | 2014-04-25T17:44:00.000 | 1 | 0 | 0 | 0 | python,opencv,svm | 23,300,115 | 1 | true | 0 | 0 | As a simple approach, you can train an additional classifier to determine if your feature is a digit or not. Use non-digit images as positive examples and the other classes' positives (i.e. images of digits 0-9) as the negative samples of this classifier. You'll need a huge amount of non-digit images to make it work, and it's also recommended to use strategies such as the selection of hard negatives: negative samples classified as "false positives" after the first training stage, which are used to re-train the classifier.
Hope that it helps! | 1 | 0 | 1 | I have a problem with classification using SVM. Let's say that I have 10 classes, digits from 0 to 9. I can train an SVM to recognize these classes, but sometimes I get an image which is not a digit, and the SVM still tries to categorize it. Is there a way to set a threshold on the SVM output (as I can for neural networks) to reject bad images? May I ask for a code sample (in C++ or Python with OpenCV)?
Thanks in advance. | Classification using SVM from opencv | 1.2 | 0 | 0 | 645 |
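The extra classifier above is the robust route; the question's simpler threshold idea can also work if your SVM API exposes a per-class decision value. A sketch of that rejection rule, independent of OpenCV (the scores shown are made up):

```python
def classify_with_reject(scores, threshold):
    """scores: {label: decision value}. Reject when no class is confident enough."""
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else "reject"

print(classify_with_reject({"0": 0.2, "7": 1.4}, threshold=0.5))  # 7
print(classify_with_reject({"0": 0.1, "7": 0.3}, threshold=0.5))  # reject
```

The threshold itself has to be tuned on held-out data, trading missed digits against accepted junk.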
23,301,532 | 2014-04-25T19:34:00.000 | 0 | 0 | 0 | 0 | python,django,oauth,oauth-2.0,python-social-auth | 23,304,504 | 2 | true | 1 | 0 | I would try to approach this problem by using django.contrib.auth.models.Group and django.contrib.auth.models.Permission. Create one general group with custom permissions to your apps' functionality and add all your normal users to that.
Save accounts created by python-social-auth in the default django.contrib.auth.models.User but create a separate Group without any permissions for them.
If necessary create some scheduled task ( either with cronjob or Celery ) which will go through users and deactivate/delete those who expired. | 1 | 1 | 0 | My site has regular users that use the django default User model, but for one particular functionality, I want people to be able to login using their social accounts (twitter, fb..etc) using python-social-auth without having these logins saved in the database with the user model (no accounts created, no ability to do certain normal user tasks) and with a session timeout.
I looked around for ways to do that but my little research bore no fruit. Any ideas?
Summary:
Separation between normal users and social (so I can limit what social auth'd users can do)
Session timeout for social auth'd users
No addition in the User table for social auth'd users (no footprint).
Optional: Obtain their social username and id for logging purposes.
Thanks | Zero Footprint Python-Social-Auth authentication | 1.2 | 0 | 0 | 195 |
23,303,787 | 2014-04-25T22:01:00.000 | 0 | 0 | 0 | 0 | java,python,file,bits | 23,303,846 | 2 | false | 1 | 0 | Sure, you read the file as byte stream (which you would typically do with a file), and then display the bytes in binary. | 1 | 0 | 0 | I was wondering if it was possible to read a file bit meaning 0's and 1's and them displaying them, in either java or python. I don't know if it possible. | Taking the bit of a file and displaying them | 0 | 0 | 0 | 41 |
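In Python that is a few lines (a sketch; the bytes here are hypothetical stand-ins for a file's contents read with open(path, "rb").read()):

```python
data = b"Hi"  # stand-in for open(path, "rb").read()

# Each byte becomes its 8-character binary representation.
bits = " ".join(format(byte, "08b") for byte in data)
print(bits)  # 01001000 01101001
```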
23,303,972 | 2014-04-25T22:16:00.000 | 0 | 0 | 1 | 0 | python-2.7 | 23,344,744 | 3 | false | 0 | 0 | My answer is {'A': 1, 'C': 3, 'B': 2}, but I want it to be exactly {'A': 1, 'B': 2, 'C': 3}. I used "sorted", but it only printed out "A, B, C", which missed the value of dictionary | 1 | 0 | 0 | here is my list:
projects = ["A", "B", "C"]
hours = [1,2,3]
I want my final answer to be like: {A:1,B:2,C:3}
Is there any suggestion? | How to build a map from two lists using zip? | 0 | 0 | 0 | 44 |
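A sketch of the usual idiom; note that in Python 2.7 a plain dict prints in arbitrary order (hence the asker's {'A': 1, 'C': 3, 'B': 2}), so OrderedDict is the way to also keep the insertion order:

```python
from collections import OrderedDict

projects = ["A", "B", "C"]
hours = [1, 2, 3]

mapping = dict(zip(projects, hours))
print(mapping == {"A": 1, "B": 2, "C": 3})  # True (contents match regardless of print order)

ordered = OrderedDict(zip(projects, hours))
print(list(ordered.items()))  # [('A', 1), ('B', 2), ('C', 3)]
```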
23,305,630 | 2014-04-26T01:44:00.000 | 1 | 0 | 1 | 0 | python,class,operator-overloading | 23,305,746 | 3 | false | 0 | 0 | For an expression lhs + rhs, Python will first try lhs.__add__(rhs), then rhs.__radd__(lhs). | 1 | 1 | 0 | For example if I have __add__ and __radd__ defined in two classes and I sum the two objects which definition of the operation will python use? | In an operation between two objects in python whose overloading of the operation as priority? | 0.066568 | 0 | 0 | 822 |
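A small demonstration of that order (one wrinkle worth knowing: if the right operand's type is a proper subclass of the left's and overrides __radd__, its __radd__ is tried first):

```python
class Meters:
    def __init__(self, n):
        self.n = n
    def __add__(self, other):
        return ("add", self.n + getattr(other, "n", other))
    def __radd__(self, other):
        return ("radd", getattr(other, "n", other) + self.n)

print(Meters(1) + Meters(2))  # ('add', 3): lhs.__add__ wins
print(10 + Meters(2))         # ('radd', 12): int.__add__ returned NotImplemented
```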
23,305,663 | 2014-04-26T01:50:00.000 | 1 | 0 | 0 | 0 | python,pyqt,qt-designer | 25,923,294 | 1 | true | 0 | 1 | I use KLed in my PyQt4 gui. Originally it was developed in the Kubuntu environment, so this wasn't an issue, but we ended up having to move to Unity (Ubuntu 14.04 LTS). In order to still use KLed I found that I needed to apt-get install the python libraries from kde (didn't want to install Kubuntu as it interfered with operation).
The fix was:
sudo apt-get install python-kde4
SO the summary answer is:
Yes, you can use KLed in Unity, as long as you have the libraries installed. | 1 | 0 | 0 | I'm trying to use the python kled module that's featured in QT Designer under ubuntu 13.10 Unity. Is it possible to use with Unity or will i need to use a KDE environment? | PyQt kled in Unity enviornmet? | 1.2 | 0 | 0 | 637 |
23,306,296 | 2014-04-26T03:29:00.000 | 0 | 1 | 0 | 0 | python,paypal | 23,308,018 | 2 | false | 0 | 0 | I'm not aware of any way to see that via the API. That's typically something you'd leave up to the end-user to know when they're signing up. Ask them if they have Pro or not, and based on that, setup your permissions request accordingly. | 1 | 0 | 0 | How do I determine which type of account a person has when using the permissions api? I need to make a different decision if they have a pro account versus a standard business account. Thanks! | PayPal Classic APIs determine account type | 0 | 0 | 1 | 124 |
23,306,361 | 2014-04-26T03:40:00.000 | 1 | 0 | 0 | 0 | python,proxy | 23,442,148 | 1 | false | 0 | 0 | In mitmproxy 0.10, a flow object is passed to the response handler function. You can access both flow.request and flow.response. | 1 | 0 | 0 | I am trying to write my own proxy extensions. Both Burp Suite and mitmproxy allow us to write extensions.
Till now, I have been successful in intercepting the request and response headers and writing them to my own output file.
The problem is, I get frequent requests and responses at arbitrary times, and at the same time the output is being written to the file.
How should I identify which response belongs to which particular request?
If we look at Burp Suite, when we click on a particular URL in the target, we see two different tabs - "Request" and "Response". How is Burp Suite identifying this?
Similar is the case with mitmproxy.
I am new to proxy extensions, so any help would be great.
----EDIT----
If any additional information is required then please let me know. | how to identify http response belongs to which particular request through python? | 0.197375 | 0 | 1 | 175
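The answer above notes that mitmproxy hands you a single flow object holding both request and response, which is exactly how the pairing problem is solved. A conceptual, pure-Python sketch of that idea — tagging each request with an id and attaching the response to the same record (illustrative names only; this is not the mitmproxy or Burp API):

```python
import itertools

class Flow:
    """One request/response pair, like mitmproxy's flow object."""
    _ids = itertools.count(1)

    def __init__(self, request):
        self.id = next(self._ids)
        self.request = request
        self.response = None  # filled in when the reply arrives

class Proxy:
    def __init__(self):
        self._pending = {}  # flow id -> Flow awaiting a response

    def on_request(self, request):
        flow = Flow(request)
        self._pending[flow.id] = flow
        return flow.id  # the proxy carries this id alongside the upstream call

    def on_response(self, flow_id, response):
        flow = self._pending.pop(flow_id)
        flow.response = response
        return flow  # complete pair, ready to be logged together

proxy = Proxy()
fid = proxy.on_request("GET /index.html")
flow = proxy.on_response(fid, "200 OK")
print(flow.request, "->", flow.response)
```

Because the id travels with the in-flight call, responses can arrive in any order and still land on the right request record.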
23,309,116 | 2014-04-26T09:42:00.000 | 1 | 0 | 0 | 1 | python | 34,110,247 | 1 | false | 0 | 0 | You have to program the controllers to configure the switches in the following way:
If s1 gets a packet whose destination IP address = IP(h2), the action set should be outport = port that connects to s2
And vice versa.
If s1 gets a packet destined to h1, push it through the port that connects to h1.
Do similar with s2.
Considering that this solution sketch is pretty straightforward, it is possible that you have not considered programming the controller in the first place. The first thing that would help is going through a small tutorial on a simple (built-in) controller such as POX. The controller code can be overwhelming in the beginning, but it gets quite simple once you see the pattern of the controller code!
I know I'm answering a little late, but I hope it helps other people looking for similar solutions. | 1 | 1 | 0 | Suppose I have created a virtual network in Mininet through a Python script. The network consists of
Two remote controllers (c1, c2),
Two switches (s1, s2): s1 is under the control of c1, s2 is under the control of c2, and both s1 and s2 are connected to each other.
Two hosts (h1, h2): h1 is connected to s1, h2 is connected to s2.
When I issue the ping command as h1 ping h2, it shows "destination host unreachable".
Please let me know why it is not pinging.
c1           c2
|            |
s1 --------- s2
|            |
h1           h2 | how to ping two virtual hosts connected to two different virtual switches created in mininet under two different remote controllers | 0.197375 | 0 | 0 | 548
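The forwarding rules described in the answer can be modeled without any controller framework, as a plain-Python flow-table lookup (port numbers and IP addresses here are assumptions for illustration; this is not POX or Mininet code):

```python
# Ports (assumed numbering): on s1, port 1 -> h1, port 2 -> s2;
#                            on s2, port 1 -> h2, port 2 -> s1.
IP_H1, IP_H2 = "10.0.0.1", "10.0.0.2"

flow_tables = {
    "s1": {IP_H1: 1, IP_H2: 2},  # dst IP -> output port
    "s2": {IP_H2: 1, IP_H1: 2},
}

links = {("s1", 2): "s2", ("s2", 2): "s1"}   # inter-switch wiring
hosts = {("s1", 1): IP_H1, ("s2", 1): IP_H2}  # host-facing ports

def route(switch, dst_ip, hops=0):
    """Follow the flow tables until the packet reaches a host port."""
    assert hops < 10, "routing loop"
    port = flow_tables[switch][dst_ip]
    if (switch, port) in hosts:
        return hosts[(switch, port)]                 # delivered
    return route(links[(switch, port)], dst_ip, hops + 1)

print(route("s1", IP_H2))  # a ping h1 -> h2 enters at s1
print(route("s2", IP_H1))  # and the reply enters at s2
```

If either controller fails to install its half of these rules, the packet dies at that switch — which is exactly the "destination host unreachable" symptom in the question.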
23,310,229 | 2014-04-26T11:31:00.000 | 0 | 0 | 0 | 0 | python,wxpython,wxglade | 28,642,528 | 2 | false | 0 | 1 | WxGlade does not directly support a adding a wx.FileDialog to your GUI. As someone answer you have to create and event linked to a button,menu,or toolbar then in your code enter the programming to open a filedialog and return text from it. What I normally do is use the toolbox button to create a generic dialog box and then name it open or save or something that will remind me that I will need to edit that piece of the wxGlade generated python code into and actual filedialog. That saves me a bit of time of typing ALL the code for a filedialog into the program. | 1 | 0 | 0 | i build my GUI with wxGlade. I think it is very comfortable but i´m looking for a widget/button which open a frame to chose a directory or file..
Can u help me? | wxGlade - Button to chose directory and file | 0 | 0 | 0 | 1,034 |
23,311,233 | 2014-04-26T13:00:00.000 | 1 | 0 | 1 | 1 | python,service | 23,311,785 | 1 | true | 0 | 0 | The communication with daemons is usually done by signals. You can use userdefined signals or SIGSTOP(17) and SIGCONT(19) to pause and continue your daemon. | 1 | 0 | 0 | I am writing a python 'sensor'. The sensor spawns two children, one that reads in data and the other processes and outputs the data in db format. I need to run it in the background with the ability to start, stop pretty much as a service/daemon. I've looked at various options: daemonizing, init scripts etc. The problem is I need more than just start, stop, restart and status. I also want to add a 'pause' option'. I am thinking that an init script would be the best option adding start, stop, restart, status, pause cases but how would I implement this the pause functionality?
Thanks | Python pseudo service | 1.2 | 0 | 0 | 48 |
23,312,182 | 2014-04-26T14:20:00.000 | 0 | 1 | 0 | 0 | php,python,mysql,logging,serial-port | 25,128,746 | 1 | false | 1 | 0 | I don't know if I understand your problem correctly, but it appears you want to show a non-stop “stream” of data with your PHP script. If that's the case, I'm afraid this won't be so easy.
The basic idea of the HTTP protocol is request/response based. Your browser sends a request and receives a (static) response.
You could build some sort of “streaming” server, but streaming (such as done by youtube.com) is also not much more than periodically sending static chunks of a video file, and the player re-assembles them into a video or audio “stream”.
You could, however, look into concepts like “web sockets” and “long polling”. For example, you could create a long-running PHP script that reads a certain file once every two seconds and outputs the value. (Remember to use flush(), or output will be buffered.)
A smart solution could even output a JavaScript snippet every two seconds, which again would update some sort of <div> container displaying charts and what not.
There are for example implementations of progress meters implemented with this type of approach. | 1 | 2 | 0 | I am working on a small project which involves displaying and recording (for later processing) data received through a serial port connection from some sort of measurement device. I am using a Raspberry Pi to read and store the received information: this is done with a small program written in Python which opens the serial device, reads a frame and stores the data in a MySQL database (there is no need to poll or interact with the device, data is sent automatically).
The serial data is formatted into frames about 2.5kbits long, which are sent repeatedly at 1200baud, which means that a new frame is received about every 2 seconds.
Now, even though the useful data is just a portion of the frame, that is way too much information to store for what I need, so what I'm currently doing is "downsampling" the data by reading a frame only once per minute. Currently this is done via a cron task which calls my logging script every minute.
The problem with my setup is that the PHP webpage used to display (and process) the received data (pulled from the MySQL database) cannot show new data more than once per minute.
Thus here comes my question:
How would you make the webpage show the live data (which doesn't need to be saved), while keeping the logging to the MySQL database at once per minute?
I guess the solution would involve some sort of daemon, which stores the data at the specified frequency (once per minute), while keeping the latest received data available for the php webpage (how?). What do you think? Do you have any examples of similar code/applications which I could use as a starting point?
Thanks! | Receiving serial port data: real-time web display + logging (with downsampling) | 0 | 0 | 0 | 1,126 |
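The daemon the question asks about can be reduced to: keep only the latest frame in memory for the live page, and persist one frame per interval. A hedged sketch, with the clock, frame parsing, and the real MySQL table all stubbed out:

```python
import time

class FrameLogger:
    def __init__(self, log_interval=60.0, now=time.time):
        self.latest = None           # always the newest frame (for the live page)
        self.log_interval = log_interval
        self._now = now
        self._last_logged = None
        self.logged = []             # stand-in for the MySQL table

    def on_frame(self, frame):
        self.latest = frame          # the live view sees this immediately
        t = self._now()
        if self._last_logged is None or t - self._last_logged >= self.log_interval:
            self.logged.append(frame)    # persist once per interval
            self._last_logged = t

# Simulated clock: one frame every 2 s, logging once per minute.
clock = [0.0]
logger = FrameLogger(log_interval=60.0, now=lambda: clock[0])
for i in range(91):                  # 182 s worth of frames
    logger.on_frame("frame-%d" % i)
    clock[0] += 2.0

print(logger.latest)                 # frame-90: what the live page would show
print(len(logger.logged))            # 4 frames persisted (t = 0, 60, 120, 180)
```

The PHP page would then read `latest` from wherever the daemon exposes it (a small file, shared memory, a socket), while the once-per-minute rows keep coming from MySQL as before.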
23,314,745 | 2014-04-26T18:10:00.000 | 0 | 1 | 0 | 0 | python,url-rewriting,pyramid | 23,315,196 | 3 | false | 1 | 0 | mod_rewrite is a webserver module that is independent of the framework your application uses. If it is configured on the server, it should operate the same regardless of whether you are using Drupal or Pyramid. Since the module is the same for each framework, the overhead is precisely the same in both cases. | 1 | 3 | 0 | I'm working on converting an existing Drupal site to Pyramid. The Drupal site has urls that are SEO friendly example: "testsite.com/this-is-a-page-about-programming". In Drupal they have a system which maps that alias to a path like "testsite.com/node/33" without redirecting the user to that path. So the user sees "testsite.com/this-is-a-page-about-programming" but Drupal loads node/33 internally. Also if the user lands on "testsite.com/node/33" they would be redirected to "testsite.com/this-is-a-page-about-programming".
How can this be achieved in Pyramid without a major performance hit? | How to mimic the url aliasing functionality of mod_rewrite with Pyramid (Python Framework)? | 0 | 0 | 0 | 801 |
23,314,851 | 2014-04-26T18:20:00.000 | 1 | 0 | 1 | 0 | python,file-recovery | 62,576,345 | 3 | false | 0 | 0 | Maybe the question is not regarding python scripting but file recovery. If that is the case, the strategy you need is different depending on the format of the drive and the operating system you are using.
You can try recovering files without using Python at all; it is by exploiting specific characteristics of the filesystem and the operating system that you may recover deleted files.
23,317,286 | 2014-04-26T22:43:00.000 | 1 | 0 | 0 | 0 | python,flask | 23,318,252 | 1 | true | 1 | 0 | Of course it will be faster to get data from cache that is stored in memory. But you've got to be sure that the amount of data won't get too large, and that you're updating your cache every time you update the database. Depending on your exact goal you may choose python dict, cache (like memcached) or something else, such as tries.
There's also a "middle" way for this. You can store in memory not the whole records from database, but just the correspondence between the search params in request and the ids of the records in database. That way user makes a request, you quickly check the ids of the records needed, and query your database by id, which is pretty fast. | 1 | 2 | 0 | I'm develop a web application using Flask. I have 2 approaches to return pages for user's request.
Load requesting data from database then return.
Load the whole database into python dictionary variable at initialization and return the related page when requested. (the whole database is not too big)
I'm curious which approach will have better performance? | Should I load the whole database at initialization in Flask web application? | 1.2 | 0 | 0 | 486 |
23,317,710 | 2014-04-26T23:41:00.000 | 1 | 0 | 0 | 0 | python,facebook,social-networking,networkx | 23,737,732 | 1 | false | 0 | 0 | Recommend to use igraph which has many community detection algorithms. | 1 | 0 | 0 | I am trying to analyse a social network (basically, friends of a facebook user) with python. My main goal is to detect the social circles of the network. So far i tried to use networkx, but I couldn't understand how it can detect communities. Is there a way, with netowrkx or with another package, to solve this problem? thank you! | Detecting communities on a social network with python | 0.197375 | 0 | 1 | 474 |
23,319,059 | 2014-04-27T03:30:00.000 | 1 | 0 | 0 | 0 | python,tkinter,wxpython,pygame,embed | 23,683,962 | 3 | false | 0 | 1 | According to the tracebacks, the program crashes due to TclErrors. These are caused by attempting to access the same file, socket, or similar resource in two different threads at the same time. In this case, I believe it is a conflict of screen resources within threads. However, this is not, in fact, due to an internal issue that arises with two gui programs that are meant to function autonomously. The errors are a product of a separate thread calling root.update() when it doesn't need to because the main thread has taken over. This is stopped simply by making the thread call root.update() only when the main thread is not doing so. | 1 | 24 | 0 | A friend and I are making a game in pygame. We would like to have a pygame window embedded into a tkinter or WxPython frame, so that we can include text input, buttons, and dropdown menus that are supported by WX or Tkinter. I have scoured the internet for an answer, but all I have found are people asking the same question, none of these have been well answered.
What would be the best way to implement a pygame display embedded into a tkinter or wxPython frame? (Tkinter is preferable)
Any other way in which these features can be included alongside a pygame display would also work. | Embedding a Pygame window into a Tkinter or WxPython frame | 0.066568 | 0 | 0 | 31,504 |
23,319,138 | 2014-04-27T03:43:00.000 | 0 | 1 | 0 | 1 | python,hash,routing | 69,093,482 | 2 | false | 0 | 0 | Typical algorithms split the traffic into semi-even groups of N pkts, where N is the number of ECMP links. So if the pkt sizes differ, or if some "streams" have more pkts than others, the overall traffic rates will not be even. Some algorithms factor for this. Breaking up or moving strean is bad (for many reasons). ECMP can be tiered --at layers1,2,3, and above; or at different physical pts. Typically, the src & dst ip-addr & protocol/port are used to define each stream. Sometimes it is configurable. Publishing the details can create "DoS/"IP"(Intellectual Property) vulnerabilities. Using the same algorithm at different "tiers" with certain numbers of links at each tier can lead to "polarization" (some links getting no traffic). To address this, a configurable or random input can be added to the algorithm. BGP ECMP requires IGP cost to be the same, else routing loops can happen(link/info @ cisco). Multicast adds more issues(link/info @ cisco). There are 3 basic types (link/info @ cisco). This is a deep subject. | 1 | 0 | 0 | I would like to know , how an ECMP and hash mapping are used in load balancing or routing of a tcp packet .Any help with links,examples or papers would be really useful. Sorry for the inconvinience , as I am completely new to this type of scenario.
Thanks for your time and consideration. | Hash Mapping and ECMP | 0 | 0 | 0 | 460 |
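The core of the stream-based ECMP the answer describes is hashing the 5-tuple into one of N buckets, so every packet of a stream takes the same link. A sketch (MD5 stands in here for whatever hash a real router implements in hardware):

```python
import hashlib

def ecmp_link(src_ip, dst_ip, proto, sport, dport, n_links):
    """Map a flow's 5-tuple to one of n_links equal-cost links."""
    key = ("%s|%s|%s|%d|%d" % (src_ip, dst_ip, proto, sport, dport)).encode()
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % n_links

# Every packet of the same TCP stream hashes to the same link,
# so there is no packet reordering within the stream:
a = ecmp_link("10.0.0.1", "10.0.0.2", "tcp", 4321, 80, 4)
b = ecmp_link("10.0.0.1", "10.0.0.2", "tcp", 4321, 80, 4)
print(a == b)   # True

# ...while different streams spread (unevenly, per the answer) over the links:
links = {ecmp_link("10.0.0.1", "10.0.0.2", "tcp", p, 80, 4) for p in range(4000, 4100)}
print(sorted(links))
```

This also illustrates the "polarization" caveat in the answer: if two tiers use the same hash on the same fields, a tier can see only a subset of buckets, which is why real implementations mix in a configurable or random seed.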
23,319,956 | 2014-04-27T06:00:00.000 | 1 | 0 | 1 | 0 | python,file,pygame,directory,exe | 23,372,498 | 2 | true | 0 | 1 | If you're using Python 3.x, you can use cx_Freeze. | 1 | 2 | 0 | I'm new to this so sorry it if doesn't make much sense.
I've created a simple 2D game with Python 3.4 and Pygame, and I want to create an exe file that includes Python 3.4, the Pygame module, and the game files, and launches the Python game file when opened.
Thanks. | How do I turn a Pygame Folder into an application (eg .exe)? | 1.2 | 0 | 0 | 271 |
23,321,825 | 2014-04-27T09:47:00.000 | 1 | 0 | 0 | 1 | python,google-app-engine,app-engine-ndb | 23,324,452 | 2 | false | 1 | 0 | There is no automatic way of doing this.
You need to perform queries for all types that could hold the key and then delete them in code.
If there could be a lot and/or it could take a long time you might want to consider using a task. | 1 | 2 | 0 | I have two classes, Department and Employee. Department has a property declared as
employees = ndb.KeyProperty(kind=Employee, repeated=True)
The problem is, when I delete the entity whose key is held in the employees list, the entity is deleted from the Employee datastore, but the list in the Department datastore remains the same (with the key of the deleted employee still in it).
How do I make sure that when the Employee is deleted, all references to it in the Department datastore are deleted as well? | Mutating ndb repeated property | 0.099668 | 0 | 0 | 440
23,321,915 | 2014-04-27T09:57:00.000 | 5 | 0 | 1 | 0 | python,list-comprehension | 23,322,023 | 4 | false | 0 | 0 | You don't even have to use a comprehension:
a = map(lambda x: '' if x == None else x, a) | 1 | 2 | 0 | a = [None, None, '2014-04-27 17:31:17', None, None]
trying to replace None with ''
I tried many times; this is the closest I got: b = ['' for x in a if x==None], which gives me four '' but leaves out the date.
I thought it would be b = ['' for x in a if x==None else x], but that doesn't work.
What if it is nested, like so:
a = [[None, None, '2014-04-27 17:31:17', None, None],[None, None, '2014-04-27 17:31:17', None, None],[None, None, '2014-04-27 17:31:17', None, None]]
Can you still use list comprehensions? | python replace None with blank in list using list Comprehensions or something else? Also a nested list solution | 0.244919 | 0 | 0 | 9,427 |
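Putting the answers together: the conditional expression goes before the for (there is no if ... else filter form at the end), x is None is preferred over x == None, and the nested case simply nests the comprehension:

```python
a = [None, None, '2014-04-27 17:31:17', None, None]

# Conditional expression before the `for` (not an `if` filter at the end):
b = ['' if x is None else x for x in a]
print(b)   # ['', '', '2014-04-27 17:31:17', '', '']

# Nested list: one comprehension per level.
nested = [a, a, a]
c = [['' if x is None else x for x in row] for row in nested]
print(c[0])
```

The earlier attempt `['' for x in a if x==None]` drops the date because a trailing `if` filters elements out instead of transforming them.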
23,322,025 | 2014-04-27T10:09:00.000 | 1 | 0 | 0 | 0 | python,pandas,dataframe,julia | 23,440,098 | 2 | false | 0 | 0 | I'm a novice at this sort of thing but have definitely been using both as of late. Truth be told, they seem very quite comparable but there is far more documentation, Stack Overflow questions, etc pertaining to Pandas so I would give it a slight edge. Do not let that fact discourage you however because Julia has some amazing functionality that I'm only beginning to understand. With large datasets, say over a couple gigs, both packages are pretty slow but again Pandas seems to have a slight edge (by no means would I consider my benchmarking to be definitive). Without a more nuanced understanding of what you are trying to achieve, it's difficult for me to envision a circumstance where you would even want to call a Pandas function while working with a Julia DataFrame or vice versa. Unless you are doing something pretty cerebral or working with really large datasets, I can't see going too wrong with either. When you say "output the data" what do you mean? Couldn't you write the Pandas data object to a file and then open/manipulate that file in a Julia DataFrame (as you mention)? Again, unless you have a really good machine reading gigs of data into either pandas or a Julia DataFrame is tedious and can be prohibitively slow. | 1 | 16 | 1 | I am currently using python pandas and want to know if there is a way to output the data from pandas into julia Dataframes and vice versa. (I think you can call python from Julia with Pycall but I am not sure if it works with dataframes) Is there a way to call Julia from python and have it take in pandas dataframes? (without saving to another file format like csv)
When would it be advantageous to use Julia DataFrames rather than pandas, other than for extremely large datasets and running things with many loops (like neural networks)? | Julia Dataframes vs Python pandas | 0.099668 | 0 | 0 | 9,570
23,324,690 | 2014-04-27T14:35:00.000 | 1 | 0 | 0 | 0 | python,google-drive-api | 23,331,227 | 1 | false | 0 | 0 | I would do this in two passes. Start by scanning the folder hierarchy, and then recreate the folders on drive. Update your in-memory folder model with the Drive folder ids. Then scan your files, uploading each one with appropriate parent id.
Only make it multithreaded if each thread will have a unique client id. Otherwise you will end up triggering the rate limit bug in Drive. If you have a large number of files, buy the boxed set of Game Of Thrones. | 1 | 1 | 0 | I have a directory of images that I'd like to transfer to Google drive via a python script.
What's a good way to upload (recursively) a directory of images to Google drive while preserving the original directory structure? Would there be any benefit to making this multithreaded? And if so, how would that work? | Best way to upload a directory of images to Google drive in python | 0.197375 | 0 | 1 | 738 |
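The two-pass idea from the answer can be sketched with os.walk, whose top-down order guarantees a folder's Drive id exists before its children are visited. create_folder and upload_file are stand-ins here, not the real Drive API:

```python
import os
import tempfile

def mirror_tree(root, create_folder, upload_file):
    """os.walk is top-down, so a directory's remote id is always
    recorded before its children are uploaded under it."""
    ids = {root: create_folder(os.path.basename(root), parent_id=None)}
    for dirpath, dirnames, filenames in os.walk(root):
        for d in dirnames:
            path = os.path.join(dirpath, d)
            ids[path] = create_folder(d, parent_id=ids[dirpath])
        for f in filenames:
            upload_file(os.path.join(dirpath, f), parent_id=ids[dirpath])
    return ids

# Demo against a throwaway directory instead of real Drive calls:
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "sub"))
open(os.path.join(root, "a.jpg"), "w").close()
open(os.path.join(root, "sub", "b.jpg"), "w").close()

made, uploaded = [], []
mirror_tree(
    root,
    create_folder=lambda name, parent_id: made.append((name, parent_id)) or len(made),
    upload_file=lambda path, parent_id: uploaded.append((os.path.basename(path), parent_id)),
)
print(made)      # root folder first, then "sub" with root's id as parent
print(uploaded)  # each file tagged with its parent folder's id
```

Per the answer's caveat, only parallelize the upload_file calls — and only with distinct client ids — since the folder pass must finish (or at least stay ahead) to provide parent ids.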
23,324,731 | 2014-04-27T14:39:00.000 | 0 | 0 | 1 | 0 | java,python,oop,constructor | 23,324,780 | 2 | false | 0 | 0 | __init__ is for initialisation. __new__ is often used first. | 2 | 2 | 0 | I was currently learning more about constructors in Java, and I found out that just like the __init__ function in Python, constructors are functions that are called as soon as we instantiate an object of a class.
So, are both the concepts one and the same, abstractly? | Correlation between constructors in Java and __init__ function in Python | 0 | 0 | 0 | 1,366 |
23,324,731 | 2014-04-27T14:39:00.000 | 2 | 0 | 1 | 0 | java,python,oop,constructor | 23,324,774 | 2 | true | 0 | 0 | These are very similar things, however with at least one big difference.
constructor is called before/while the object is being constructed
__init__ is called after the object has been constructed, so you have a valid reference to it (called self) | 2 | 2 | 0 | I was currently learning more about constructors in Java, and I found out that just like the __init__ function in Python, constructors are functions that are called as soon as we instantiate an object of a class.
So, are both the concepts one and the same, abstractly? | Correlation between constructors in Java and __init__ function in Python | 1.2 | 0 | 0 | 1,366 |
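The difference between the two answers above can be observed directly: __new__ constructs the instance (closer to a Java constructor's allocation step), and __init__ then receives the already-built object as self:

```python
class Package(object):
    def __new__(cls, *args):
        print("__new__: constructing, no instance exists yet")
        return super(Package, cls).__new__(cls)

    def __init__(self, sender):
        print("__init__: instance already exists as self")
        self.sender = sender

p = Package("Bob")
print(p.sender)   # Bob
```

So abstractly the Java constructor maps to __new__ + __init__ together; in everyday Python code you almost always override only __init__.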
23,326,310 | 2014-04-27T17:02:00.000 | 9 | 0 | 1 | 0 | python,blender,3d-model | 29,205,398 | 2 | true | 0 | 0 | You can start a new Blender process from any application (a C++, Python app or even command line) and tell the new process to run a script file (written in Python). This script will generate your geometry and then can save the new scene to a blend file.
To start a new Blender process and force it to execute a script use:
blender.exe --background --python "c:\path to\script.py" | 1 | 10 | 0 | I know Python is the standard scripting language for use inside Blender, but I didn't find a way to create a .blend file with python.
What I want to do is not to use python inside blender, but rather "use blender (libs?) inside python".
My planned workflow would be the following:
Define some parameters for my model;
Define a "generative recipe" to create appropriate Blender objects that will be saved to file;
Create a python script to store the parameters and procedures. When the script runs, some .blend file is created in the same folder;
Use Blender to visualize the model. If model needs to be changed, make changes to the script, run it again, and open it again. | Is it possible to create Blender file (.blend) programmatically with Python? | 1.2 | 0 | 0 | 10,352 |
23,326,430 | 2014-04-27T17:13:00.000 | 1 | 0 | 0 | 0 | python,r,algorithm | 23,326,609 | 1 | false | 0 | 0 | Sort the points, group them by value, and try all <=2n+1 thresholds that classify differently (<=n+1 gaps between distinct data values including the sentinels +-infinity and <=n distinct data values). The latter step is linear-time if you try thresholds lesser to greater and keep track of how many points are misclassified in each way. | 1 | 0 | 1 | I have a set of {(v_i, c_i), i=1,..., n}, where v_i in R and c_i in {-1, 0, 1} are the discrimination value and label of the i-th training example.
I would like to learn a threshold t so that the training error is minimized when I declare that the i-th example has label -1 if v_i < t, 0 if v_i = t, and 1 if v_i > t.
How can I learn the threshold t from {(v_i, c_i), i=1,..., n}, and what is an efficient algorithm for that?
I am implementing that in Python, although I also hope to know how to implement that in R efficiently.
Thanks!
By the way, why doesn't SO support LaTeX for math expressions? (I changed them to code instead.) | learn a threshold from labels and discrimination values? | 0.197375 | 0 | 0 | 72
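A direct implementation of the answer's scheme: sort, then evaluate the candidate thresholds — the data values themselves (where the label 0 applies) plus the midpoints of the gaps and ±infinity — counting misclassifications for each. This is the O(n^2) brute-force version; the answer's running tally of misclassifications makes the scan linear after sorting:

```python
def learn_threshold(pairs):
    """pairs: [(v_i, c_i)] with c_i in {-1, 0, 1}.
    Predict -1 if v < t, 0 if v == t, 1 if v > t."""
    values = sorted({v for v, _ in pairs})
    candidates = [float("-inf"), float("inf")]
    candidates += values                                   # t equal to a data value
    candidates += [(a + b) / 2.0 for a, b in zip(values, values[1:])]  # gap midpoints

    def errors(t):
        return sum(1 for v, c in pairs
                   if c != (-1 if v < t else (0 if v == t else 1)))

    return min(candidates, key=errors)

data = [(0.1, -1), (0.4, -1), (0.35, -1), (0.6, 1), (0.9, 1)]
t = learn_threshold(data)
print(t)   # a t with 0.4 < t < 0.6 achieves zero training error here
```

With n points there are at most 2n+1 candidates, and each distinct candidate changes the prediction for at most a few points as t sweeps upward — which is exactly the observation that makes the linear-time tally possible.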
23,327,609 | 2014-04-27T18:58:00.000 | 1 | 0 | 0 | 0 | javascript,jquery,python,ajax | 23,328,225 | 1 | true | 1 | 0 | You can use jQuery, which gives you a very simple way to do that:
$.post( "yourpage.html", $('form').serialize() + "&ajax=true", function(response) {
$('#results').html(response);
});
Server side, detect if ajax is true and then return only the query results instead of the whole page. They will be saved in the element of id="results". Replacing the whole page is generally not a good idea. | 1 | 0 | 0 | I have a web page with a form each time a form is submitted same page loads but with different data relevant to the query. On the back-end i am using python for finding data relevant to query.
I want to process all this with Ajax, as the back-end process needs more time, so I need to show the status to the user, i.e. what is going on in the system now.
Also, the data returned is the same HTML file but with different data, so how can I display it on the current page? It should not be appended to the current HTML file; it is standalone.
Can anyone please suggest a solution to this problem? | Refresh same page with ajax with different data | 1.2 | 0 | 0 | 540
23,329,034 | 2014-04-27T21:10:00.000 | 0 | 0 | 1 | 1 | python,macos,python-2.7,twisted | 40,758,241 | 4 | false | 0 | 0 | I too was getting a ImportError: No module named xxxeven though I did a pip install xxx and pip2 install xxx.
pip2.7 install xxx worked for me. This installed it in the python 2.7 directory. | 1 | 10 | 0 | Hello I'm trying to run twisted along with python but python cannot find twisted.
I did run $pip install twisted successfully but it is still not available.
ImportError: No module named twisted.internet.protocol
It seems that most people have $which python at /usr/local/bin/python
but I get /Library/Frameworks/Python.framework/Versions/2.7/bin/python
Could this be the issue? If so, how can I change the PATH env? | Python OSX $ which Python gives /Library/Frameworks/Python.framework/Versions/2.7/bin/python | 0 | 0 | 0 | 35,517
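The suspicion in the question is worth checking directly: pip may be installing into a different interpreter than the one that runs your script. Each interpreter can report where it lives and where it searches for modules; the reliable fix is python -m pip install twisted, which targets exactly the interpreter you invoke (shown here only as a comment):

```python
import sys

# The interpreter actually running this script:
print(sys.executable)   # e.g. /Library/Frameworks/Python.framework/.../bin/python

# The directories this interpreter searches -- twisted must land in one
# of these, or the import fails regardless of what pip reported:
for p in sys.path:
    print(p)

# Fix, run from a shell (installs into THIS interpreter, whatever `pip` points at):
#   /Library/Frameworks/Python.framework/Versions/2.7/bin/python -m pip install twisted
```

If sys.executable doesn't match the pip that did the install, you've found the mismatch.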
23,333,020 | 2014-04-28T05:20:00.000 | 1 | 0 | 1 | 0 | python,dictionary | 23,333,073 | 4 | false | 0 | 0 | if d.keys() has a length of at least 3, and it has a from and to attribute, you're golden.
My knowledge of Python isn't the greatest but I imagine it goes something like if len(d.keys) > 2 and d['from'] and d['to'] | 1 | 5 | 0 | Lets say I have a dictionary that specifies some properties for a package:
d = {'from': 'Bob', 'to': 'Joe', 'item': 'book', 'weight': '3.5lbs'}
To check the validity of a package dictionary, it needs to have a 'from' and 'to' key, and any number of properties, but there must be at least one property. So a dictionary can have either 'item' or 'weight', both, but can't have neither. The property keys could be anything, not limited to 'item' or 'weight'.
How would I check dictionaries to make sure they're valid, as in having the 'to', 'from', and at least one other key?
The only method I can think of is obtaining d.keys(), removing the 'from' and 'to' keys, and checking if it's empty.
Is there a better way to go about doing this? | python dictionary check if any key other than given keys exist | 0.049958 | 0 | 0 | 3,612 |
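The checks discussed above fit in one small function: require 'from' and 'to', plus at least one additional key of any name:

```python
def is_valid_package(d):
    if 'from' not in d or 'to' not in d:
        return False
    # 'from' and 'to' are confirmed present, so len(d) > 2 means
    # there is at least one property key besides them, whatever its name.
    return len(d) > 2

print(is_valid_package({'from': 'Bob', 'to': 'Joe', 'item': 'book'}))          # True
print(is_valid_package({'from': 'Bob', 'to': 'Joe'}))                          # False
print(is_valid_package({'from': 'Bob', 'item': 'book', 'weight': '3.5lbs'}))   # False
```

This avoids building the intermediate key set entirely; the membership tests and len() are all O(1).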
23,333,669 | 2014-04-28T06:07:00.000 | 0 | 1 | 0 | 1 | python,unix,wxpython,robotframework | 25,013,664 | 1 | false | 0 | 0 | I think the problem is that the file contains UTF-8, not ASCII. Robot Framework appears to be expecting ASCII text. ASCII text only contains values in the range 0-127, when the ascii codec sees a byte 0xC3 it throws an error. (If the text was using the Western European Windows 8-bit encoding, 0xC3 would be Ã. If it was using the MacOS encoding, 0xC3 would be ∑. In fact, it is the first of two bytes which define a single character in the range of most of the interesting accented characters.)
Somehow, you need to teach Robot Framework to use the correct encoding. | 1 | 1 | 0 | I am getting an error
unicodedecodeerror 'ascii' codec can't decode byte 0xc3 in position 1
ordinal not in range(128)
while performing the operation mentioned below.
I have a program that reads files from a remote machine (Ubuntu), using the grep and cat commands to fetch values, and stores the value in a variable via a Robot Framework built-in keyword export command from the client.
Following are the versions I am using:
Robot Framework: 2.8.11
Ride: 0.55
Putty: 0.63
Python: 2.7.3
I am doing an SSH session on a Linux machine, and on that machine there is a file in which the data has accented characters, e.g. Õ Ü Ô Ý.
While reading the text from the file containing accented characters using the 'grep' and 'cat' commands, I am facing this issue.
unicodedecodeerror 'ascii' codec can't decode byte 0xc3 in position 1
ordinal not in range(128)
Thank you. | unicodedecodeerror 'ascii' codec error in wxPython | 0 | 0 | 0 | 992 |
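The byte 0xC3 from the error is the first byte of a two-byte UTF-8 sequence, which is exactly why the ascii codec rejects it; decoding with the file's real encoding is the fix the answer points to:

```python
raw = b'\xc3\x95 \xc3\x9c'   # the UTF-8 bytes for the accented text "Õ Ü"

# What an ascii default does -- and why it fails on 0xC3:
try:
    raw.decode('ascii')
except UnicodeDecodeError as e:
    print(e)   # 'ascii' codec can't decode byte 0xc3 in position 0 ...

# Decoding with the file's actual encoding succeeds:
text = raw.decode('utf-8')
print(text)    # Õ Ü
```

So the task is to make sure whatever reads the grep/cat output decodes it as UTF-8 rather than ASCII.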
23,339,765 | 2014-04-28T11:21:00.000 | 0 | 0 | 1 | 0 | python,c,dll | 43,692,580 | 1 | false | 0 | 0 | (Posted on behalf of the OP).
Finally I decided to use Ubuntu, it's simpler to accomplish this. | 1 | 0 | 0 | I'm new to Python, and I'm learning to write C extensions for Python under Windows. Following a tutorial, I successfully compiled my exmaple.dll file using Cygwin.
The DLL file seems okay, as I can import it and its function also works.
Note that this is done using Cygwin's Python. However, I can't use this DLL under my own Python (not the one in Cygwin). I have copied the DLL file to the Python search path, though; an ImportError was raised.
I'm thinking: is it because the versions of the two Pythons are different?
Cygwin comes with Python 2.7.5, while I use 2.7.6. | Compatibility of DLL files for Python C extension | 0 | 0 | 0 | 152 |
23,340,520 | 2014-04-28T11:58:00.000 | 0 | 0 | 1 | 0 | python,binary,bitwise-operators | 23,340,785 | 1 | true | 0 | 0 | You seem to assume that Python internally stores integers as strings of decimal digits. That is completely false. Integers, even arbitrary precision ones (long in 3.x), are stored as bit strings. XORing two Python integers simply performs the native XOR operation on the bits as stored in memory. It loads 2x 64 bits from memory into registers, XORs the bits as efficiently as the CPU allows, and writes the result back to memory. No conversion necessary.
To be completely honest, it does not use some of those 64 bits (this is used for easier overflow detection in multiplication), but it does that by just leaving those bits as zeroes all the time. | 1 | 0 | 0 | As I noticed, on the one hand, there are bitwise operators in Python, like: 8 ^ 10 which results 2, that's fine.
On the other hand, there are ways to convert a base 10 integer to a binary number, e.g. bin(2)
I wonder if I could combine these two. I mean, there are no bitwise operators on strings, so bin(8) ^ bin(10) only throws an error. I guess that when using bitwise operators on integers, Python's first step is always a conversion anyway.
Actually, I was thinking about how I could speed up operations like a ^ b when both a and b are large integers and there are many bitwise operations on them. Conversion takes too much time, so I'd like to convert them once up front and manipulate them with bitwise operators only afterwards.
Maybe converting each number to bool lists could help, but I'm interested if anyone has a better idea.
Thank you very much for any good advice! | Binary operators on binary numbers in Python | 1.2 | 0 | 0 | 1,155
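A short demonstration of the answer's point — integers are already stored as bits, so ^ needs no conversion; bin() and int(s, 2) only matter for display and parsing at the edges:

```python
a, b = 8, 10
print(a ^ b)   # 2 -- XOR happens directly on the stored bits

# bin() is just a *string* rendering for humans:
print(bin(a), bin(b), bin(a ^ b))   # 0b1000 0b1010 0b10

# int(s, 2) parses such a string back -- only needed at input/output boundaries:
print(int('1000', 2) ^ int('1010', 2))   # 2

# Even a huge integer XORs with no decimal<->binary conversion step:
big = (1 << 1000) | 1
print(big ^ big)   # 0
```

So pre-converting to strings or bool lists would only add overhead; keeping plain ints is already the fast path.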
23,346,771 | 2014-04-28T16:50:00.000 | 2 | 0 | 0 | 0 | python,excel,xlrd,fileparsing | 29,564,738 | 2 | false | 0 | 0 | rename or Save as your Excel file as .xls instead of .xlsx
Thank You | 1 | 6 | 0 | This is a very very strange issue. I have quite a large excel file (the contents of which I cannot discuss as it is sensitive data) that is a .xlsx and IS a valid excel file.
When I download it from my email and save it on my desktop and try to open the workbook using xlrd, xlrd throws an AssertionError and does not show me what went wrong.
When I open the file using my file browser, then save it (without making any changes), it works perfectly with xlrd.
Has anyone faced this issue before? I tried passing in various flags to the open_workbook function to no avail and I tried googling for the error. So far I haven't found anything.
The method I used was as follows
f = open('bigexcelfile.xlsx', 'rb')   # 'rb': an .xlsx is a zip archive, so read it in binary mode
fileString = f.read()
wb = open_workbook(file_contents=fileString)   # note the case: fileString, not filestring
Please help! The error is as follows
Traceback (most recent call last):
File "./varify/samples/resources.py", line 354, in post
workbook = xlrd.open_workbook(file_contents=fileString)
File "/home/vagrant/varify-env/lib/python2.7/site-packages/xlrd/__init__.py", line 416, in open_workbook
ragged_rows=ragged_rows,
File "/home/vagrant/varify-env/lib/python2.7/site-packages/xlrd/xlsx.py", line 791, in open_workbook_2007_xml
x12sheet.process_stream(zflo, heading)
File "/home/vagrant/varify-env/lib/python2.7/site-packages/xlrd/xlsx.py", line 528, in own_process_stream
self_do_row(elem)
File "/home/vagrant/varify-env/lib/python2.7/site-packages/xlrd/xlsx.py", line 722, in do_row
assert tvalue is not None
AssertionError | xlrd cannot read xlsx file downloaded from email attachment | 0.197375 | 1 | 0 | 2,289 |
23,350,910 | 2014-04-28T20:49:00.000 | 0 | 0 | 0 | 0 | python,django,git,heroku | 23,351,079 | 1 | false | 1 | 0 | Here is a list of suggestions on how I would approach this issue with Heroku.
You should try heroku restart. This restarts your application and can help pick up new changes.
I would clear my browser cache as often I do not see changes on my web page if the browser has cached them.
I would check that the git repository on Heroku matches my local one in that it has all the newest changes made on my local server. | 1 | 1 | 0 | My issue is that when I view my site using python manage.py runserver or foreman start, I can see my site perfectly.
However, when I git push heroku master, on the surface everything appears fine, as no errors are given. But when I view my site at the link Heroku gives, I do not see my updated site as I do when I view it using python manage.py runserver or foreman start.
I am building my site using pinax-theme-bootstrap, and my virtualenv is in my desktop directory.
Does anyone have a solution as to why this may be the case? | Heroku site looks different at launch than Django local server site | 0 | 0 | 0 | 68
23,352,195 | 2014-04-28T22:22:00.000 | 0 | 0 | 1 | 1 | python-3.x,redhat,cairo,pycairo,rhel6 | 23,352,643 | 1 | false | 0 | 0 | Red Hat 6 is clearly out of date. Of course it can be done by bringing RH6 up to date, downloading and compiling your own 3.x kernel with everything needed to meet the requirements for pycairo 1.10....
BUT it would be easier and nicer to install a more modern Linux distribution, which goes nicely with an old computer. Linux Mint 16 (Petra) provides a distro with relaxed requirements and window managers in i386 mode.
I don't see any meaning in trying to get up-to-date code running on such an old OS version. Any replacement hardware you can get hold of on eBay will do better than that.
cheers,
Christian | 1 | 0 | 0 | I am trying to install pycairo 1.10 for Python 3.3 on redhat 6. There are no packages in the official repo, and when I try building it myself it says glibc is out of date. I have the latest glibc from the official the repo, and am somewhat hesitant to go on updating it through other means. Are there any other packages that can help, or is there some way to get this working with an older version (we have tried back to cairo 1.8). | Installing cairo for python 3.3 on redhat 6 | 0 | 0 | 0 | 263 |
23,354,411 | 2014-04-29T02:25:00.000 | 1 | 0 | 1 | 0 | python,django,amazon-web-services,virtualenv,amazon-elastic-beanstalk | 23,691,652 | 2 | true | 1 | 0 | OK, this is a hack, and an ugly one, but it worked.
Now, the error is happening on the local machine, nothing to do with remote.
I have boto installed locally and I am NOT using virtualenv (for reasons of my own, to test a more barebones approach).
1 note where the error is happening - in .git/AWSDevTools/aws/dev_tools.py
2 run a non-virtualenv python and
import boto
print boto.__file__
/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/boto/__init__.pyc
3 open up that dev_tools.py and add this on top:
import sys
sys.path.append("/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages")
Since you are appending to sys.path, you will only import modules from that addition if git aws.push hasn't found it in its own stuff.
That fixes the problem for now, except that it will re-occur on the next directory where you do an "eb init"
4 Go to where you have unzipped the CLI. In my case:
$cd ~/bin/AWS-ElasticBeanstalk-CLI-2.6.1
now
5 look for the original of dev_tools.py used by eb init
$find ~/bin -name dev_tools.py
~/bin/AWS-ElasticBeanstalk-CLI-2.6.1/AWSDevTools/Linux/scripts/aws/dev_tools.py
edit this file as in #3
if you do another eb init elsewhere you will see that your ugly hack is there as well.
Not great, but it works.
p.s. sorry for the formatting, newbie here, it's late and I wanna go skating. | 1 | 1 | 0 | I'm trying to use AWS's Elastic Beanstalk, but when I run eb start, I get "ImportError: No module named boto Cannot run aws.push for local repository HEAD."
I am in the virtual environment of my Django project.
I ran pip install boto and it was successful.
I did pip freeze > requirements.txt, git add requirements.txt, and git commit -m 'Added boto to requirements.txt', all successful.
Then I got into the python shell and imported boto without any resulting errors.
Finally, I ran eb start on the normal command line again. Same "no module named boto" error.
It seems like the eb start command is not using my virtualenv. What should I do? | AWS's Elastic Beanstalk not using my virtualenv: "No module named boto" | 1.2 | 0 | 0 | 6,301 |
23,354,503 | 2014-04-29T02:37:00.000 | 1 | 0 | 1 | 0 | python | 23,354,517 | 4 | false | 0 | 0 | Open a command prompt, cd to the directory where your .py file is, and type the name of the file there to run it. | 1 | 1 | 0 | In any Python code I write that gets an error, it will show the error but the window will disappear right away and I can't see the error. It makes fixing code really difficult. Can anyone help me? (I have Python 2.7 installed, and I type my programs in Notepad and save them as a .py file) | Having python display errors in window | 0.049958 | 0 | 0 | 4,339
23,356,211 | 2014-04-29T05:30:00.000 | 0 | 0 | 0 | 0 | python,django,django-models,django-south | 23,358,017 | 2 | true | 1 | 0 | OK This is not a valid question. I am embarrassed to admit I made a small tweak on the migration script that caused the problem. Please ignore this question - seems like I don't have a way to delete a question I had asked! | 1 | 0 | 0 | I just generated the migration scripts through ./manage.py schemamigration --auto and ran it. I get the following error. I am stumped as to what it could mean. I have been using SET_NULL for a while now. So this is something new that didn't occur earlier. Any idea what could be wrong?
Traceback (most recent call last):
File "./manage.py", line 16, in
execute_from_command_line(sys.argv)
File "/home/vivekv/.environments/fantain/local/lib/python2.7/site-packages/django/core/management/init.py", line 399, in execute_from_command_line
utility.execute()
File "/home/vivekv/.environments/fantain/local/lib/python2.7/site-packages/django/core/management/init.py", line 392, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/home/vivekv/.environments/fantain/local/lib/python2.7/site-packages/django/core/management/base.py", line 242, in run_from_argv
self.execute(*args, **options.__dict__)
File "/home/vivekv/.environments/fantain/local/lib/python2.7/site-packages/django/core/management/base.py", line 285, in execute
output = self.handle(*args, **options)
File "/home/vivekv/.environments/fantain/local/lib/python2.7/site-packages/south/management/commands/schemamigration.py", line 111, in handle
old_orm = last_migration.orm(),
File "/home/vivekv/.environments/fantain/local/lib/python2.7/site-packages/south/utils/init.py", line 62, in method
value = function(self)
File "/home/vivekv/.environments/fantain/local/lib/python2.7/site-packages/south/migration/base.py", line 432, in orm
return FakeORM(self.migration_class(), self.app_label())
File "/home/vivekv/.environments/fantain/local/lib/python2.7/site-packages/south/orm.py", line 48, in FakeORM
_orm_cache[args] = _FakeORM(*args)
File "/home/vivekv/.environments/fantain/local/lib/python2.7/site-packages/south/orm.py", line 134, in init
self.retry_failed_fields()
File "/home/vivekv/.environments/fantain/local/lib/python2.7/site-packages/south/orm.py", line 377, in retry_failed_fields
fname, modelname, e
ValueError: Cannot successfully create field 'winner' for model 'match': 'module' object has no attribute 'SET_NULL'. | Django South migration is throwing an error 'module' object has no attribute 'SET_NULL' | 1.2 | 0 | 0 | 338 |
23,358,787 | 2014-04-29T07:54:00.000 | 6 | 0 | 0 | 1 | python,django,caching,memcached,celery | 24,082,360 | 2 | false | 1 | 0 | Solved it finally:
Celery has a dynamic scaling feature - it's capable of adding/killing workers according to load
It does this by forking an existing worker
Opened sockets and files are copied to the forked process, so both processes share them, which leads to a race condition where one process reads the response meant for another. Simply put, one process may read the response intended for the other, and vice versa.
from django.core.cache import cache: this object stores a pre-connected memcached socket. Don't use it when your process can be dynamically forked, and don't reuse stored connections, pools, and the like.
OR store them under the current PID, and check the PID each time you access the cache | 1 | 9 | 0 | Here is what we have currently:
we're trying to get cached django model instance, cache key includes name of model and instance id. Django's standard memcached backend is used. This procedure is a part of common procedure used very widely, not only in celery.
sometimes(randomly and/or very rarely) cache.get(key) returns wrong object: either int or different model instance, even same-model-different-id case appeared. We catch this by checking correspondence of model name & id and cache key.
bug appears only in context of three of our celery tasks, never reproduces in python shell or other celery tasks. UPD: appears under long-running CPU-RAM intensive tasks only
cache stores correct value (we checked that manually at the moment the bug just appeared)
calling the same task again with the same arguments might not reproduce the issue, although the probability is much higher, so bug appearances tend to "group" in the same period of time
restarting celery solves the issue for the random period of time (minutes - weeks)
*NEW* this isn't connected with memory overflow. We always have at least 2Gb free RAM when this happens.
*NEW* we have cache_instance = cache.get_cache("cache_entry") in static code. During investigation, I found that at the moment the bug happens cache_instance.get(key) returns wrong value, although get_cache("cache_entry").get(key) on the next line returns correct one. This means either bug disappears too quickly or for some reason cache_instance object got corrupted.
Isn't the cache instance object returned by django's cache thread-safe?
*NEW* we logged very strange case: as another wrong object from cache, we got model instance w/o id set. This means, the instance was never saved to DB therefore couldn't be cached. (I hope)
*NEW* At least one MemoryError was logged these days
I know, all of this sounds like some sort of magic.. And really, any ideas how that's possible or how to debug this would be very appreciated.
PS: My current assumption is that this is connected with multiprocessing: since the cache instance is created in static code before the worker processes fork, all workers end up sharing the same socket (does that sound plausible?) | memcache.get returns wrong object (Celery, Django) | 1 | 0 | 0 | 2,113
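The store-under-current-PID suggestion from the answer can be sketched as a small accessor that rebuilds the connection whenever it finds itself in a freshly forked process; the factory argument stands in for whatever creates the real memcached client:

```python
import os

_per_pid = {}

def get_connection(factory):
    """Return a connection owned by the current process.

    A connection created before a fork is shared by parent and child,
    so responses can interleave; keying on os.getpid() forces each
    worker to build its own connection after the fork.
    """
    pid = os.getpid()
    if pid not in _per_pid:
        _per_pid.clear()  # drop anything inherited from the parent
        _per_pid[pid] = factory()
    return _per_pid[pid]
```

In real code the factory would be something like lambda: memcache.Client(servers), so each worker opens its own socket after the fork.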
23,359,141 | 2014-04-29T08:14:00.000 | 1 | 0 | 1 | 0 | python,scripting,execfile | 23,359,640 | 2 | false | 0 | 0 | Take a look at the documentation for the reload() function and the restrictions mentioned there; depending on your python version it is located in different modules, for 2.x it is predefined. | 1 | 1 | 0 | In my current directory, I have a foo1.py script and a directory named other with a foo2.py script inside.
Now:
I launch the interpreter, and using execfile I can launch both scripts. The thing is, when I edit and save foo1.py, I don't have to restart the interpreter, I just execfile again and it runs with my modifications, but the same doesn't happen with foo2.py. For the edits I made to foo2.py to take effect I have to quit and relaunch the interpreter, since even after saving it execfile('foo2.py') will run the same script as before...
This is annoying, as I wanted to constantly be editing and launching multiple scripts in succession, which often depend on each other...
How can I make it so that the interpreter sees my edits to foo2.py, without having to restart it?
Thanks! | Python interpreter's relationship with scripts | 0.099668 | 0 | 0 | 45 |
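The reload route the answer points at looks roughly like this; in Python 2 reload is a builtin, while in Python 3 it lives in importlib (used below). The throwaway module name foo2 mirrors the question:

```python
import importlib
import os
import sys
import tempfile

# Create a throwaway foo2.py, as if it lived in the 'other' directory.
d = tempfile.mkdtemp()
path = os.path.join(d, "foo2.py")
with open(path, "w") as f:
    f.write("VALUE = 1\n")
sys.path.insert(0, d)

import foo2
before = foo2.VALUE

# Edit the file on disk, exactly as you would in your editor.
with open(path, "w") as f:
    f.write("VALUE = 20\n")

importlib.reload(foo2)  # re-executes the module from the new source
after = foo2.VALUE
```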
23,360,160 | 2014-04-29T09:02:00.000 | 0 | 0 | 0 | 0 | python,django,github,python-social-auth | 63,099,520 | 3 | false | 1 | 0 | I did solve the login redirect URI mismatch by just using http://127.0.0.1:8000/ | 2 | 1 | 0 | I'm using python-social-auth on a project to authenticate the user with Github.
I need to redirect the user depending on the link they use. To do that I'm using the next attribute on the URL, and I didn't declare any redirect URL in my GitHub app nor in my Django settings.
This is the href attribute I'm using for my link : {% url 'social:begin' 'github' %}?next={% url 'apply' j.slug %}
And the first time I click on it, I'm getting redirected to my homepage with this error in the url field : http://127.0.0.1:8000/?error=redirect_uri_mismatch&error_description=The+redirect_uri+MUST+match+the+registered+callback+URL+for+this+application.&error_uri=https%3A%2F%2Fdeveloper.github.com%2Fv3%2Foauth%2F%23redirect-uri-mismatch&state=Ui1EOKTHDhOkNJESI5RTjOCDEIdfFunt
But after the first time, the link works.
I don't know where the problem is; I hope someone can help me. Thanks | python-social-auth and github, I have this error "The redirect_uri MUST match the registered callback URL for this application" | 0 | 0 | 0 | 4,663
23,360,160 | 2014-04-29T09:02:00.000 | 0 | 0 | 0 | 0 | python,django,github,python-social-auth | 57,011,829 | 3 | false | 1 | 0 | That worked.
Setting your domain to 127.0.0.1 in your hosts file should work, something like this:
127.0.0.1 example.com | 2 | 1 | 0 | I'm using python-social-auth on a project to authenticate the user with Github.
I need to redirect the user depending on the link they use. To do that I'm using the next attribute on the URL, and I didn't declare any redirect URL in my GitHub app nor in my Django settings.
This is the href attribute I'm using for my link : {% url 'social:begin' 'github' %}?next={% url 'apply' j.slug %}
And the first time I click on it, I'm getting redirected to my homepage with this error in the url field : http://127.0.0.1:8000/?error=redirect_uri_mismatch&error_description=The+redirect_uri+MUST+match+the+registered+callback+URL+for+this+application.&error_uri=https%3A%2F%2Fdeveloper.github.com%2Fv3%2Foauth%2F%23redirect-uri-mismatch&state=Ui1EOKTHDhOkNJESI5RTjOCDEIdfFunt
But after the first time, the link works.
I don't know where the problem is; I hope someone can help me. Thanks | python-social-auth and github, I have this error "The redirect_uri MUST match the registered callback URL for this application" | 0 | 0 | 0 | 4,663
23,361,057 | 2014-04-29T09:43:00.000 | 49 | 0 | 0 | 0 | python,django,django-signals | 23,363,551 | 11 | false | 1 | 0 | It is better to do this at ModelForm level.
There you get all the data that you need for comparison in the save method:
self.data : the actual data passed to the form.
self.cleaned_data : data cleaned after validation; contains the data eligible to be saved in the model.
self.changed_data : list of fields which have changed. This will be empty if nothing has changed.
If you want to do this at Model level then you can follow the method specified in Odif's answer. | 1 | 81 | 0 | I have a django model, and I need to compare old and new values of field BEFORE saving.
I've tried overriding save(), and the pre_save signal. It was triggered correctly, but I can't find the list of actually changed fields and can't compare old and new values. Is there a way? I need it to optimize pre-save actions.
Thank you! | django - comparing old and new field value before saving | 1 | 0 | 0 | 49,211 |
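At the model level, the usual trick is to re-fetch the old row inside save() or a pre_save handler and diff it against the instance being saved; the diff itself is plain dictionary comparison, sketched here without any Django dependency (the field names are illustrative):

```python
def changed_fields(old, new):
    """Return {field: (old_value, new_value)} for every field that differs."""
    return {
        field: (old.get(field), value)
        for field, value in new.items()
        if old.get(field) != value
    }

old_row = {"name": "Alice", "email": "a@example.com", "active": True}
new_row = {"name": "Alice", "email": "alice@example.com", "active": True}
diff = changed_fields(old_row, new_row)
```

In a Django save() override, the old values would typically come from type(self).objects.get(pk=self.pk), which only works once the instance already has a primary key.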
23,362,560 | 2014-04-29T10:51:00.000 | 1 | 0 | 1 | 0 | python,class,sharing | 23,362,812 | 2 | true | 0 | 1 | There is a range of techniques you can use :
Shared modules - A set of modules with well defined interfaces - functions, classes etc. These modules sit in a folder which every application you use can get to - i.e. the path to the folder is added to the PYTHONPATH environment variable. These interfaces should be engineered so that when you add functionality to them they don't break the old applications.
Design patterns - Design your applications with a good design pattern - MVC (Model-View-Controller) is a useful one for GUI programs. V is your GUI - and exposes only a few methods, which aren't actually dependent on the GUI itself (for instance methods such as display_foo). The Model is your data access functionality - again with well defined interfaces. The Controller interfaces between View and Model. There may be other patterns which apply to your application too. | 1 | 1 | 0 | I'm a trainee novice programmer.
I have / am creating simple programs, generally around screen scraping, data capture (Postgres), various processing methods and now a GUI via wxPython
I find a lot of these programs overlap - ie use same techniques, and get some very long copied and pasted programs!
Over time I improve these techniques and find myself having to backtrack over multiple programs to update them.
How? Can I? Create a more dynamic, systematic process.
One where all programs / procedures / classes are shared?
Has it got a name?
My logical thought is that, like 'procedures' and 'classes', I would have smaller low-level programs and mid-level programs that called upon these - the GUI being the top program! But this would mean passing data to and from them! Can classes and procedures be separate programs?
Many thanks
Cameron | How do I go make my Python code more efficient and dynamic? | 1.2 | 0 | 0 | 152 |
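The MVC separation suggested in the answer can be sketched in a few lines (all names are illustrative); the view exposes only generic display_* methods, so the model and controller never touch GUI details:

```python
class Model(object):
    """Data access layer: owns the data, knows nothing about display."""
    def __init__(self):
        self._items = []

    def add(self, item):
        self._items.append(item)

    def items(self):
        return list(self._items)


class ConsoleView(object):
    """GUI layer: only generic display_* methods are exposed."""
    def __init__(self):
        self.rendered = []

    def display_items(self, items):
        self.rendered.append(items)


class Controller(object):
    """Glue: turns user actions into model updates and view refreshes."""
    def __init__(self, model, view):
        self.model = model
        self.view = view

    def add_item(self, item):
        self.model.add(item)
        self.view.display_items(self.model.items())
```

Swapping ConsoleView for a wxPython frame with the same display_items signature would leave Model and Controller untouched.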
23,363,287 | 2014-04-29T11:23:00.000 | 0 | 0 | 0 | 1 | python,linux,ubuntu,amazon-web-services,ubuntu-14.04 | 43,012,792 | 4 | false | 0 | 0 | In Ubuntu we write:
export PATH=$PATH:/usr/local/lib/python2.7/site-packages/
It worked for me after writing this because the eb folder is present inside the mentioned folder. | 2 | 5 | 0 | I'm trying to install the AWS eb command line interface in Ubuntu 14.04. I just downloaded the .zip file and extracted it into a folder. If I go to the folder where eb is (/home/roberto/app/AWS-ElasticBeanstalk-CLI-2.6.1/eb/linux/python2.7) and run it, I get: eb: command not found
Same if I do it with python3 path. | AWS Elastic Beanstalk (eb) installation in Ubuntu 14.04: command not found | 0 | 0 | 0 | 10,002 |
23,363,287 | 2014-04-29T11:23:00.000 | 1 | 0 | 0 | 1 | python,linux,ubuntu,amazon-web-services,ubuntu-14.04 | 42,512,402 | 4 | false | 0 | 0 | I think all you have to do is upgrade awsebcli by running: pip install --upgrade awsebcli | 2 | 5 | 0 | I'm trying to install the AWS eb command line interface in Ubuntu 14.04. I just downloaded the .zip file and extracted it into a folder. If I go to the folder where eb is (/home/roberto/app/AWS-ElasticBeanstalk-CLI-2.6.1/eb/linux/python2.7) and run it, I get: eb: command not found
Same if I do it with python3 path. | AWS Elastic Beanstalk (eb) installation in Ubuntu 14.04: command not found | 0.049958 | 0 | 0 | 10,002 |
23,366,047 | 2014-04-29T13:27:00.000 | 0 | 0 | 1 | 0 | python,read-write | 23,366,357 | 2 | false | 0 | 0 | You can use the SQLite or pickle module instead, to allow easier data retrieval/manipulation from multiple programs/scripts. | 1 | 2 | 0 | I have a program that imports a .py file that contains lists and dictionaries and uses them in the program. I am making another program whose purpose is to change the lists and dictionaries in this database .py file (either adding or removing parts of the lists/dictionaries). How would I go about doing this? Do I need to read in the .py file line by line, modify the lists, and overwrite the document? Is there a better way?
Any ideas would be much appreciated. If overwriting the file is the best plan, how do you do that? | Modifying a .py file within Python | 0 | 0 | 0 | 3,423 |
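Along the lines of the answer, a JSON file is often the simplest replacement for a data-holding .py module: both programs read and write the same file instead of one program rewriting the other's source. A stdlib-only sketch (file and key names are arbitrary):

```python
import json
import os
import tempfile

def load_data(path):
    """Read the shared lists/dicts; start empty if the file doesn't exist yet."""
    try:
        with open(path) as f:
            return json.load(f)
    except (IOError, OSError):
        return {}

def save_data(path, data):
    """Write the shared lists/dicts for the other program to pick up."""
    with open(path, "w") as f:
        json.dump(data, f, indent=2)

# Round-trip demo in a temporary directory.
store = os.path.join(tempfile.mkdtemp(), "store.json")
save_data(store, {"names": ["a", "b"], "counts": {"a": 1}})
roundtrip = load_data(store)
```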
23,366,161 | 2014-04-29T13:31:00.000 | 0 | 0 | 0 | 0 | python,qt,pyqt,pyqt4 | 23,383,069 | 1 | true | 0 | 0 | As of now, the only way seems to be to assemble the MIME multi-part body oneself, produce a digest of it and pass that byte data to QNetworkAccessManager sending method. | 1 | 1 | 0 | I need to calculate a digest (checksum) from the request body (e.g. raw POST data) that is being sent via QNetworkRequest and include a digest signature in the request header.
How could I do this before sending the request (so the signature can be included in the header)?
This is trivial when I'm using a byte array as the request body, but what if I have a QHttpMultiPart object?
Basically something like QHttpMultiPart.toString(). | How to access request body of QNetworkRequest with QHttpMultiPart? | 1.2 | 0 | 1 | 531 |
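In outline, the manual route is: serialize the multipart body to bytes yourself, hash those bytes, put the hex digest in a header, and send those same bytes as the payload. The sketch below covers the serialize-and-hash part with the standard library only; the boundary string and field names are made up, and the actual Qt send is omitted since it depends on your setup:

```python
import hashlib

def build_multipart(fields, boundary="----assumedBoundary1234"):
    """Serialize simple form fields into a multipart/form-data body.

    Returns (body_bytes, sha256_hex) so the digest can be placed in a
    request header and the exact same bytes sent as the payload.
    """
    parts = []
    for name, value in fields:
        parts.append("--" + boundary)
        parts.append('Content-Disposition: form-data; name="%s"' % name)
        parts.append("")
        parts.append(value)
    parts.append("--" + boundary + "--")
    body = ("\r\n".join(parts) + "\r\n").encode("utf-8")
    return body, hashlib.sha256(body).hexdigest()

body, digest = build_multipart([("user", "alice"), ("note", "hello")])
```

The digest would then go into a raw header on the QNetworkRequest before posting body as a plain byte array in place of the QHttpMultiPart object.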