Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
18,169,099 | 2013-08-11T05:37:00.000 | 30 | 1 | 0 | 0 | python,benchmarking | 18,169,127 | 6 | false | 0 | 0 | time.time() * 1000 will give you millisecond accuracy if possible. | 1 | 36 | 0 | How can I get the number of milliseconds since epoch?
Note that I want the actual milliseconds, not seconds multiplied by 1000. I am comparing times for stuff that takes less than a second and need millisecond accuracy. (I have looked at lots of answers and they all seem to have a *1000)
I am comparing a time that I get in a POST request to the end time on the server. I just need the two times to be in the same format, whatever that is. I figured unix time would work since Javascript has a function to get that | Comparing times with sub-second accuracy | 1 | 0 | 1 | 72,159 |
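A minimal sketch of what that answer relies on: time.time() already carries sub-second precision on most platforms, so multiplying by 1000 yields genuine milliseconds rather than seconds padded with zeros.

```python
import time

def epoch_millis():
    # time.time() returns a float with sub-second resolution on most
    # platforms, so this is real millisecond precision, not seconds
    # multiplied by 1000 with a zeroed-out fraction.
    return int(round(time.time() * 1000))

start = epoch_millis()
# ... the sub-second work being measured ...
elapsed_ms = epoch_millis() - start
```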
18,171,732 | 2013-08-11T12:04:00.000 | 1 | 0 | 1 | 1 | installation,enthought,biopython | 19,250,481 | 1 | false | 0 | 0 | No, I think you do not need to pay for that if you don't wish to. All you need to do is install Biopython independently of Canopy somewhere on your machine, say "/path/to/biopython",
and in Canopy, import sys and sys.path.append('/path/to/biopython') will do the job! | 1 | 1 | 0 | When I try to use the Canopy Package Manager and "subscribe" to Biopython I am asked to pay for it. Can I use the package manager w/o paying? | Installing Biopython from academic Enthought Canopy 32bit Windows 7 | 0.197375 | 0 | 0 | 469
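A sketch of that workaround; the path is a placeholder for wherever the separate Biopython install actually lives.

```python
import sys

# Point Canopy's interpreter at an independently installed Biopython
# (the path below is a placeholder, not a real location).
sys.path.append('/path/to/biopython')

from Bio import SeqIO  # Biopython is now importable inside Canopy
```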
18,174,212 | 2013-08-11T16:37:00.000 | 0 | 0 | 0 | 0 | python,django,django-models,django-forms,django-templates | 23,327,908 | 1 | true | 1 | 0 | I ended up using request.get_host() to provide different interfaces for each subdomain, and checking at login for the correct group of the would-be-logged-in user. | 1 | 0 | 0 | I have different user groups based on functionality: customer support, editors, etc.
I want to use the same user system and database, but I want to have different interfaces (login, functionality, sub domain) for the different groups that I have, separate from the normal user website interface and login.
How would you do it? | Different Django Interfaces for Different user Groups | 1.2 | 0 | 0 | 104 |
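A minimal sketch of that accepted approach, dispatching on request.get_host() and checking group membership at login; the view name and the subdomain-to-group mapping are illustrative assumptions, not from the original.

```python
from django.contrib.auth import authenticate, login

def group_login(request):
    # "support.example.com" -> "support"; the mapping is illustrative.
    subdomain = request.get_host().split('.')[0]
    required_group = {'support': 'customer support',
                      'edit': 'editors'}.get(subdomain)

    user = authenticate(username=request.POST.get('username'),
                        password=request.POST.get('password'))
    if user is not None and required_group is not None \
            and user.groups.filter(name=required_group).exists():
        login(request, user)
        # ... redirect to the interface for this subdomain ...
```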
18,174,961 | 2013-08-11T17:58:00.000 | 1 | 0 | 1 | 0 | python,python-2.7,python-3.x | 18,175,130 | 2 | false | 0 | 0 | You can import os and try os.system('clear') to clear the screen rather than printing multiple \n's. For Windows it would look like os.system('CLS') as the commented (above) link suggests. | 1 | 1 | 0 | Any way we can add configuration to python IDLE and assign it a shortcut, so when we use the shortcut a function call to clear the python IDLE screen? like print '\n' * 50, just to make a shortcut to clear the python IDLE. Suggestions welcome. | Clearing the python IDLE screen in win7 | 0.099668 | 0 | 0 | 1,615 |
18,175,466 | 2013-08-11T18:49:00.000 | 1 | 0 | 0 | 0 | python,node.js,jinja2,language-comparisons | 63,772,013 | 5 | false | 1 | 0 | ejs is the npm module you are looking for.
This is the name written in my package.json file --> "ejs": "^3.1.3"
EJS is a simple templating language that lets you generate HTML markup with plain JavaScript. (Credits: EJS website) | 1 | 22 | 0 | What would be a node.js templating library that is similar to Jinja2 in Python? | Templating library in node.js similar to Jinja2 in Python? | 0.039979 | 0 | 1 | 14,114
18,176,394 | 2013-08-11T20:32:00.000 | 1 | 0 | 1 | 0 | python,numpy | 18,176,452 | 2 | false | 0 | 0 | Yes. Your web site will need to be able to run Python code in some way, but if you can import numpy then you can use it. | 1 | 0 | 0 | If I write code with NumPy, can a webserver which runs Python 2.5 run the code?
Can we use NumPy as a dynamic language in writing websites of a computational nature? | Can we use NumPy in writing a website? | 0.099668 | 0 | 0 | 823
18,178,061 | 2013-08-12T00:25:00.000 | 0 | 0 | 0 | 0 | python | 18,178,111 | 2 | false | 0 | 1 | You should be able to get the image's bounding box (bbox) by calling bbox = canvas.bbox(imageID). Then you can use canvas.find_overlapping(*bbox). | 1 | 0 | 0 | In Tkinter, when I create an image on a canvas and find the coordinates of it, it only returns two coordinates, so the find_overlapping method doesn't work with it (naturally). Is there an alternative? | How to find the overlapping of an image on TkInter Canvas (Python)? | 0 | 0 | 0 | 1,295 |
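A sketch of the answer's two calls, assuming a placeholder GIF file; bbox() turns the image's anchor point into the four-value rectangle that find_overlapping() expects.

```python
import Tkinter as tk  # "tkinter" on Python 3

root = tk.Tk()
canvas = tk.Canvas(root, width=300, height=300)
canvas.pack()

photo = tk.PhotoImage(file='example.gif')        # placeholder file name
image_id = canvas.create_image(150, 150, image=photo)

# coords() on an image yields only its anchor point; bbox() returns the
# full (x1, y1, x2, y2) rectangle that find_overlapping() expects.
x1, y1, x2, y2 = canvas.bbox(image_id)
overlapping = canvas.find_overlapping(x1, y1, x2, y2)
```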
18,179,680 | 2013-08-12T04:47:00.000 | 0 | 0 | 1 | 0 | python,algorithm | 18,180,322 | 6 | false | 0 | 0 | How about,
sort by first column O(n log n)
binary search to find indices that are out of range O(log n)
throw out values out of range
sort by second column O(n log n)
binary search to find indices that are out of range O(log n)
throw out values out of range
you are left with the values in range
This should be O(n log n)
You can sort rows and cols with np.sort and a binary search should only be a few lines of code.
If you have lots of queries, you can save the first sorted copy for subsequent calls, but not the second. Depending on the number of queries, it may turn out to be better to do a linear search than to sort then search. | 1 | 6 | 1 | I have a 200k-line list of number ranges in the form start_position, stop_position.
The list includes all kinds of overlaps in addition to non-overlapping ones.
the list looks like this
[3,5]
[10,30]
[15,25]
[5,15]
[25,35]
...
I need to find the ranges that a given number falls in, and I will repeat this for 100k numbers.
For example if 18 is the given number with the list above then the function should return
[10,30]
[15,25]
I am doing it in an overly complicated way using bisect; can anybody give a clue on how to do it in a faster way?
Thanks | finding a set of ranges that a number falls in | 0 | 0 | 0 | 3,636
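A sketch of the sort-plus-binary-search idea from the answer above, using NumPy and assuming inclusive ranges; np.searchsorted plays the role of the binary search.

```python
import numpy as np

ranges = np.array([[3, 5], [10, 30], [15, 25], [5, 15], [25, 35]])

# Sort once by the start column (O(n log n)); reuse for every query.
by_start = ranges[np.argsort(ranges[:, 0])]

def stabbing_query(x):
    # Ranges with start <= x form a prefix of by_start: binary search.
    hi = np.searchsorted(by_start[:, 0], x, side='right')
    candidates = by_start[:hi]
    # Filter the stop column among the surviving candidates.
    return candidates[candidates[:, 1] >= x]

print(stabbing_query(18))  # [[10 30] [15 25]]
```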
18,187,278 | 2013-08-12T12:41:00.000 | 0 | 0 | 1 | 0 | python,virtualbox | 18,187,453 | 2 | false | 0 | 0 | Can you not install it locally on the remote machine, rather than globally, and then just export that Python path? | 1 | 0 | 0 | I have set up VirtualBox and installed a package (wxPython) in the virtual machine, and I am programming in it to learn wxPython and Python. We connect to the remote machine using PuTTY on Windows and SSH in VirtualBox.
I want to do some experiments/analysis with existing wxPython code, but we do not have permission to install Python packages on the remote machine. If I raise a ticket with the IT team to install the package, it requires a lot of business justification.
As this is a personal interest, I do not have any business reason.
Is it possible to access the wxPython package, which is installed in VirtualBox, from the remote machine? | import python module/package into remote machine | 0 | 0 | 0 | 241
18,187,751 | 2013-08-12T13:05:00.000 | 49 | 0 | 0 | 1 | python,django,celery,django-celery | 18,190,019 | 2 | true | 1 | 0 | I've been using cron for a production website, and have switched to celery on a current project.
I'm far more into celery than cron, here is why:
Celery + Celerybeat has finer granularity than cron. Cron cannot run more than once a minute, while celery can (I have a task run every 90 seconds which checks an email queue to send messages, and another which cleans the online users list).
A cron line has to call a script or a unique command, with absolute path and user info. Celery calls python functions, no need to write more than code.
With celery, to deploy to another machine, you generally just have to pull/copy your code, which is generally in one place. Deploying with cron would need more work (you can automate it but...)
I really find celery better suited than cron for routine cleaning (cache, database) and, in general, for short tasks. Dumping a database is more of a job for cron, however, because you don't want to clutter the event queue with overly long tasks.
Not least, Celery is easily distributed across machines. | 2 | 47 | 0 | Considering Celery is already a part of the stack to run task queues (i.e. it is not being added just for running crons; that seems overkill, IMHO).
How can its "periodic tasks" feature be beneficial as a replacement for crontab?
Specifically looking for following points.
Major pros/cons over crontab
Use cases where celery is better choice than crontab
Django-specific use case: Celery vs crontab to run Django-based periodic tasks, when Celery has been included in the stack as django-celery for queuing Django tasks. | Why would running scheduled tasks with Celery be preferable over crontab? | 1.2 | 0 | 0 | 15,773
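A sketch of such a periodic task (like the 90-second email-queue check the answer mentions), using the old-style decorator from the django-celery era; send_pending_emails is a hypothetical helper, and newer Celery versions configure this via a beat schedule instead.

```python
from datetime import timedelta
from celery.task import periodic_task  # old-style API (Celery 2.x/3.x)

@periodic_task(run_every=timedelta(seconds=90))
def flush_email_queue():
    # A plain Python function, versioned with the rest of the code:
    # no crontab line, and finer than cron's one-minute granularity.
    send_pending_emails()  # hypothetical helper
```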
18,187,751 | 2013-08-12T13:05:00.000 | 4 | 0 | 0 | 1 | python,django,celery,django-celery | 18,451,537 | 2 | false | 1 | 0 | Celery is indicated any time you need to coordinate jobs across multiple machines, ensure jobs run even as machines are added or dropped from a workgroup, have the ability to set expiration times for jobs, define multi-step jobs with graph-style rather than linear dependency flow, or have a single repository of scheduling logic that operates the same across multiple operating systems and versions. | 2 | 47 | 0 | Considering Celery is already a part of the stack to run task queues (i.e. it is not being added just for running crons, that seems an overkill IMHO ).
How can its "periodic tasks" feature be beneficial as a replacement for crontab ?
Specifically looking for following points.
Major pros/cons over crontab
Use cases where celery is better choice than crontab
Django-specific use case: Celery vs crontab to run Django-based periodic tasks, when Celery has been included in the stack as django-celery for queuing Django tasks. | Why would running scheduled tasks with Celery be preferable over crontab? | 0.379949 | 0 | 0 | 15,773
18,193,913 | 2013-08-12T18:20:00.000 | 1 | 0 | 1 | 0 | python,licensing,pyside,gpl,lgpl | 18,194,041 | 2 | true | 0 | 0 | Yes, you can. What does the GPL do to your program? Read the license. :-)
The LGPL does not do much to your program. The LGPL makes static linking more difficult, but I doubt this matters with Python. | 1 | 1 | 0 | I am making a program and want to put it under the GPL, but I am using PySide in it, which is under the LGPL. Can I still put my program under the GPL, or does it have to be under the LGPL? Also, what do both of these licenses do to my program?
Thanks | Can My Program Be Under the GPL if I Use PySide? | 1.2 | 0 | 0 | 237
18,193,967 | 2013-08-12T18:23:00.000 | 0 | 0 | 0 | 1 | python,google-app-engine,user-management | 18,197,132 | 1 | false | 1 | 0 | Yes you can, but you'll have to build the session tracking functionality yourself. | 1 | 0 | 0 | I am writing an application which uses the Google user API, and anyone with a Google account can log in. I want to prevent multiple users from using the same Google account to log in simultaneously. Basically, I would like to allow only one user per account to be using my application. As I am running a subscription service, I need to restrict users from sharing accounts and logging in simultaneously.
Can I accomplish this somehow in App Engine using Users module? If not, can someone please suggest an alternate mechanism?
I am using Python on App Engine. | Detecting multiple sessions from same user on Google App Engine | 0 | 0 | 0 | 84 |
18,198,308 | 2013-08-12T23:29:00.000 | 15 | 0 | 1 | 0 | python,operators,modulo,modulus | 18,200,092 | 2 | true | 0 | 0 | Python uses the classic Algorithm D from Knuth's 'The Art of Computer Programming'. The running time is (generally) proportional to the product of lengths of the two numbers. Space is proportional to the sum of the lengths of the two numbers.
The actual division occurs in Objects/longobject.c, see x_divrem(). For background on the internal representation of a Python long, see Include/longintrepr.h.
% 2 does not use bitwise operations. The standard idiom for checking if a number is even/odd is & 1.
Python 2 and 3 use the same algorithm. | 1 | 9 | 0 | I'm curious in regards to the time and space complexities of the % operator in Python. Also, does Python use a bitwise operation for % 2?
Edit:
I'm asking about Python 2.7's implementation, just in case it differs slightly from that of Python 3 | How does Python implement the modulo operation? | 1.2 | 0 | 0 | 3,820 |
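A small illustration of the idiom mentioned in the answer; both expressions agree, but & 1 tests a single bit while % 2 goes through the general division path.

```python
n = 12345678901234567890  # an arbitrary-precision Python long

is_odd_bitwise = n & 1   # the standard even/odd idiom: test the low bit
is_odd_modulo = n % 2    # same result via the general division machinery

assert is_odd_bitwise == is_odd_modulo
```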
18,203,079 | 2013-08-13T07:28:00.000 | 0 | 1 | 0 | 0 | python,sublimetext2,pylint | 18,387,411 | 2 | false | 0 | 0 | It is possible, but extremely difficult.
You can try to see if these modules are available as .py files in the Sublime Text 2 source code and drop them into a PYTHONPATH that Pylint reads.
If the modules in question are native modules, Pylint can see them only if they are distributed as shared libraries and you point PYTHONPATH/LD_LIBRARY_PATH at those modules. If the modules are embedded inside the Sublime Text 2 binary, you have little hope of making Pylint understand them (unless you provide hinting by hand). In this case the behavior is operating-system specific. | 1 | 1 | 0 | Is there a way to force pylint to see sublime and sublime_plugin modules?
I have tried adding the sublime folder to PYTHONPATH but it hasn't worked out.
These two errors really annoy me:
PyLinter: F0401: Unable to import 'sublime'
PyLinter: F0401: Unable to import 'sublime_plugin'
Thanks. | Pylint sublime plugin development | 0 | 0 | 0 | 335 |
18,203,781 | 2013-08-13T08:06:00.000 | 0 | 0 | 1 | 1 | python | 18,205,127 | 2 | false | 0 | 0 | Install readline-devel from yum and then recompile Python. Command-line editing features require this library. | 1 | 0 | 0 | I am working with RedHat Linux 5.6 (in case that matters).
My team is working with python 2.6.6. I installed it from source (configure, make, make install) from the official Python site. It seems to not work properly:
When I type python in the terminal to enter the Python CLI, for some reason I can't delete what I type (backspace prints character marks to screen)
Modules like psutils are missing (this should be a standard part of Python, no?)
Python 2.4, which was previously installed, works fine.
Any ideas? | Python 2.6.6 doesn't work properly | 0 | 0 | 0 | 194 |
18,208,650 | 2013-08-13T12:05:00.000 | 0 | 0 | 1 | 1 | python,macos,package,dmg | 18,209,869 | 1 | false | 0 | 1 | If you create the .dmg, you can set up a background image that tells users to move your application to the /Applications folder. If your application needs no extra setup, this is preferred; alternatively, ship a (Mac OS X created) .zip file with it.
The package option is better if some additional setup, or scripts checking for Python dependencies, are required. | 1 | 1 | 0 | I have developed my first .app for Mac, written in Python, and would like to share this .app with some friends.
I have converted the Python scripts via py2app. Then I have one .app, which I compress into a .dmg file.
I share this .dmg file with the guys, and for one of them it works fine. (He already has Python installed.)
The other people can't open the .app file; they get error messages. After an intensive search I figured it out: they have no Python installed.
Now my question: how can I include a "one-click Python installation" in my .dmg file (or as a package?!) | I want to share my python app as dmg or package? | 0 | 0 | 0 | 1,590
18,208,683 | 2013-08-13T12:06:00.000 | 12 | 0 | 1 | 0 | python,python-2.7,super | 18,208,716 | 2 | false | 0 | 0 | self.__class__ might not be a subclass, but rather a grandchild-or-younger class, leading to a stack-breaking loop. | 1 | 52 | 0 | I've recently discovered (via StackOverflow) that to call a method in a base class I should call:
super([[derived class]], self).[[base class method]]()
That's fine, it works. However, I find myself often copying and pasting between classes when I make a change and frequently I forget to fix the derived class argument to the super() function.
I'd like to avoid having to remember to change the derived class argument. Can I instead just use self.__class__ as the first argument of the super() function?
It seems to work but are there good reasons why I shouldn't do this? | When calling super() in a derived class, can I pass in self.__class__? | 1 | 0 | 0 | 10,162 |
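A minimal sketch of the failure mode the answer warns about: once a grandchild instance is involved, self.__class__ still names the grandchild inside the middle class, so the same method is re-entered forever.

```python
class Base(object):
    def greet(self):
        print('Base')

class Child(Base):
    def greet(self):
        # Fine while self is exactly a Child. For a Grandchild instance,
        # self.__class__ is Grandchild, so super() resolves back to
        # Child.greet and recurses forever.
        super(self.__class__, self).greet()

class Grandchild(Child):
    pass

Child().greet()       # prints 'Base'
Grandchild().greet()  # RuntimeError: maximum recursion depth exceeded
```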
18,209,264 | 2013-08-13T12:36:00.000 | 0 | 0 | 0 | 1 | python,logging,python-2.4,logrotate | 18,210,985 | 2 | false | 0 | 0 | I would advise that you copy the source of WatchedFileHandler from a later version and adapt it, if needed, so that it works on 2.4. | 1 | 0 | 0 | What is the correct process for having logrotate manage files written to by the python logging module? Usually I would use a WatchedFileHandler but I need to target 2.4, which does not have this class. Is there a function in the logging module which I can attach to a HUP handler, perhaps, to have it reopen the logfile? | Python logrotate options | 0 | 0 | 0 | 498
18,209,264 | 2013-08-13T12:36:00.000 | 0 | 0 | 0 | 1 | python,logging,python-2.4,logrotate | 20,074,842 | 2 | false | 0 | 0 | The logrotate utility needs to be told which files to rotate, and with what options. You might want to override the standard WatchedFileHandler class to make entries required in /etc/logrotate.d as part of your module load sequence before logging begins. | 2 | 0 | 0 | What is the correct process for having logrotate manage files written to by the python logging module? Usually I would use a WatchedFileHandler but I need to target 2.4, which does not have this class. Is there a function in the logging module which i can attach to a HUP handler, perhaps, to have it reopen the logfile? | Python logrotate options | 0 | 0 | 0 | 498 |
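A sketch of the suggested backport, adapted from the later standard-library implementation: re-stat the file on every emit and reopen it when logrotate has swapped the inode out from under the handler.

```python
import os
import logging

class WatchedFileHandler(logging.FileHandler):
    """Backport sketch for Python 2.4, adapted from later stdlib code."""

    def __init__(self, filename, mode='a'):
        logging.FileHandler.__init__(self, filename, mode)
        self._remember_stat()

    def _remember_stat(self):
        try:
            st = os.stat(self.baseFilename)
            self.dev, self.ino = st.st_dev, st.st_ino
        except OSError:
            self.dev, self.ino = -1, -1

    def emit(self, record):
        # If logrotate renamed the file, the inode behind our path
        # changes; close the stale stream and reopen the new file.
        try:
            st = os.stat(self.baseFilename)
            changed = (st.st_dev, st.st_ino) != (self.dev, self.ino)
        except OSError:
            changed = True
        if changed:
            self.stream.close()
            self.stream = open(self.baseFilename, 'a')
            self._remember_stat()
        logging.FileHandler.emit(self, record)
```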
18,212,995 | 2013-08-13T15:19:00.000 | 0 | 0 | 0 | 0 | javascript,python,plot,visualization,data-visualization | 18,213,493 | 1 | true | 1 | 0 | First of all, I would suggest JSON rather than XML as the exchange format; it is much easier to parse JSON on the JavaScript side.
Then, speaking about the architecture of your app, I think it is better to write a server web application in Python that generates JSON content on the fly than to modify and serve static files (at least, that is how such things are usually done).
So, that gives us three components of your system:
A client app (javascript).
A web application (it does not matter what framework or library you prefer: Django, gevent, even Twisted will work fine, as well as some others). What it should do is, firstly, give the state of the points to the client app when it requests them and, secondly, accept updates of the points' state from the next app and store them in a database (or in global variables: that strongly depends on how you run it; a single-process gevent app may use variables, whereas an app running within a multi-process web server should use a database).
An app performing calculations that periodically publishes the points' state by sending it to the web app, probably as JSON body in a POST request. This one most likely should be a separate app due to the typical environment of the web applications: usually it is a problem to perform background processes in a web app, and, anyway, the way this can be done strongly depends on the environment you run your app in.
Of course, this architecture is based on "server publishes data, clients ask for data" model. That model is simple and easy to implement, and the main problem with it is that the animation may not be as smooth as one may want. Also you are not able to notify clients immediately if some changes require urgent update of client's interface. However, smoothness and immediate client notifications are usually hard to implement when a javascript client runs within a browser. | 1 | 0 | 0 | I am working on a project which would animate points on a plain by certain methods. I intend to compute the movements of the points in python on server-side and to do the visualization on client-side by a javascript library (raphaeljs.com).
First I thought of the following: Running the process(python) and saving the states of the points into an xml file, than load that from javascript and visualize. Now I realized that maybe it would run for infinity thus I would need a realtime data exchange between the visualization part and the computing part.
How would you do that? | connect server-side computing with client-side visualization | 1.2 | 0 | 1 | 288 |
18,213,159 | 2013-08-13T15:25:00.000 | 0 | 1 | 1 | 0 | c++,python,multithreading,loops,boost-python | 18,213,302 | 2 | false | 0 | 1 | You need to use the multiprocessing module in python so that you get a separate GIL for each python thread. | 1 | 4 | 0 | How do I run C++ and Boost::Python code in parallel without problems?
Eg in my game I'd want to execute Python code in parallel with C++ code; if the embedded Python interpreter's code executes a blocking loop, like while(True): pass, the C++ code would still be running and processing frames to render with its own loop.
I tried with boost::thread and std::thread but unless I joined these threads with the main thread the program would crash...
Any suggestions or examples? | Embedded Boost::Python and C++ : run in parallel | 0 | 0 | 0 | 1,171 |
18,214,098 | 2013-08-13T16:12:00.000 | 0 | 0 | 0 | 0 | python,django,internationalization,web-deployment | 20,224,198 | 1 | true | 1 | 0 | It works now; I was passing the wrong path to LOCALE_PATHS. | 1 | 0 | 0 | I have a Django project that works fine on my local server, but when I
deploy it to WebFaction, internationalization doesn't work anymore.
How can I resolve this issue? | django internationalization didn't work on webfaction | 1.2 | 0 | 0 | 65 |
18,214,104 | 2013-08-13T16:12:00.000 | 0 | 0 | 0 | 0 | python,django | 18,214,393 | 3 | true | 1 | 0 | This is not related to django. It's how the web works. HTTP is stateless.
When you generate the page, you've finished with that task.
The model instance is destroyed.
When the user submits the form or sends the modifications in any other way, a new connection starts with a new request and a new context.
At this point you need to re-instance the object to modify.
Depends on the application and the model itself.
You can pass the unique_id of the object, if it has one, and get it back in your new context by querying for it. | 1 | 0 | 0 | I'm trying to do something that may appear to be simple, but I can't figure it out. As always, django surprises me with its complexity...
My view generates an instance of a model and "passes it on" in a context to a template. On that template, the user fills a form and submits it. And this is what should happen next: the object that was in the context when the page loaded is modified a bit and submitted in a context once again (to the same template). However, I can't get the instance of the object that was in the context when the page loaded. Is it possible to do? Maybe as a hidden input? Or with some fancy django function? Any other idea is appreciated as well, even workarounds (it's not really a professional project, I'm doing it for fun and for experience).
I'm sorry if this question is stupid, but I'm new to django and my brain still has troubles with understanding everything. Thanks for your help! | Reusing an object from a context after submitting a form | 1.2 | 0 | 0 | 51 |
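A sketch of the usual pattern, assuming a hypothetical model Thing with an integer counter field: render the pk into a hidden input, then re-instance the object from it when the form comes back.

```python
from django.shortcuts import get_object_or_404, render
from myapp.models import Thing  # hypothetical model with a counter field

def edit_thing(request):
    if request.method == 'POST':
        # New request, new context: re-instance the object from the pk
        # the template carried back in a hidden input, e.g.
        #   <input type="hidden" name="obj_id" value="{{ obj.pk }}">
        obj = get_object_or_404(Thing, pk=request.POST['obj_id'])
        obj.counter += 1  # the "modified a bit" step
        obj.save()
    else:
        obj = Thing.objects.create(counter=0)  # first page load
    return render(request, 'page.html', {'obj': obj})
```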
18,215,466 | 2013-08-13T17:26:00.000 | 0 | 1 | 1 | 0 | python,windows,module,fuzzywuzzy | 18,218,012 | 1 | false | 0 | 0 | Try to wrap one of your conflict programs in CMD file. Like python-virtualenv. | 1 | 0 | 0 | I'm betting there's a simple solution to this problem that I don't know, and from googling and stackoverflowing around it seems to have something to do with setting a path.
I have anaconda installed on my computer and it seems to use python 2.7.4. I also have python 2.7.3 installed, which seems to be the version being used when I open up IDLE. When I installed fuzzywuzzy using 'python setup.py install' it's installed in the anaconda folder and using python in powershell, the command 'from fuzzywuzzy import fuzz' works fine, but when doing the same thing in IDLE I get a missing module error.
Is there a way to reconcile the two versions of Python? Can I get them to share packages, or delete one of the versions without ruining everything?
I tried doing this:
'''
Setting the PYTHONPATH / PYTHONHOME variables
Right click the Computer icon in the start menu, go to properties. On the left tab, go to Advanced system settings. In the window that comes up, go to the Advanced tab, then at the bottom click Environment Variables. Click in the list of user variables and start typing Python, and repeat for System variables, just to make certain that you don't have mis-set variables for PYTHONPATH or PYTHONHOME. Next, add new variables (I did in System rather than User, although it may work for User too): PYTHONPATH, set to C:\Python27\Lib. PYTHONHOME, set to C:\Python27.
'''
then reinstalled fuzzywuzzy, and it installed in the C:Python27 folder and works in IDLE, but now Kivy doesn't work!
Do I need to reinstall that too? Or is there a Path sharing fix? | Python missing module v 2.7.3 and Windows 7: Installed fuzzywuzzy, imports in powershell, not in IDLE | 0 | 0 | 0 | 623 |
18,216,495 | 2013-08-13T18:28:00.000 | 1 | 0 | 1 | 0 | python,image,encryption | 18,216,651 | 1 | true | 0 | 0 | When you're padding the file, make sure the pad character is not the same as the final byte in the file. When removing the padding, remove the bytes from the end of the file that have the same value, up to 8 in a row. If the original file's length is a multiple of 8, add 8 pad bytes, different from the final value in the file.
If you pad the file this way, don't use replace(), which will operate on the entire file, but use something like decryptedFileText = decryptedFileText.rstrip(decryptedFileText[-1]). | 1 | 0 | 0 | I'm using the PyCrypto Python library to attempt to encrypt a .jpg image file with a password. However, whenever I decrypt the file and open it, it comes out looking almost like a rainbow, and although you can vaguely see the original image, it looks nothing like it. I was wondering where the quality is being lost? My guess is that when I pad the file (you know, so that the length is a multiple of 8, which is what DES needs), I do so with a '{' character, and when I decrypt the file, I use decryptedFileText.replace('{',''), you know, to get rid of the pad characters, but at the same time, I may be removing other '{' characters crucial to the image quality. I was wondering if anyone knew a way of padding files that could get around this, or if I'm missing the problem entirely. | JPG Images Lose Quality When DES Encrypting | 1.2 | 0 | 0 | 485 |
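A sketch of an unambiguous alternative the answer gestures at: PKCS#7-style padding, where every pad byte stores the pad length, so no sentinel character can collide with image bytes. Note this is a replacement scheme, not what the question's code used.

```python
BLOCK = 8  # DES block size in bytes

def pad(data):
    # Always append 1..BLOCK bytes, each equal to the pad length.
    n = BLOCK - len(data) % BLOCK
    return data + chr(n) * n

def unpad(data):
    # The final byte says exactly how many bytes to strip.
    return data[:-ord(data[-1])]

assert unpad(pad('\xff\xd8\xff{')) == '\xff\xd8\xff{'
```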
18,219,951 | 2013-08-13T22:00:00.000 | 0 | 1 | 0 | 0 | python | 18,220,247 | 4 | false | 0 | 0 | I sometimes compress credentials with zlib and compile them into a .pyo file.
It protects only against "open in editor and press Ctrl+F" and against non-programmers.
Sometimes I use PGP cryptography. | 1 | 0 | 0 | This question is a bit far fetched (i don't even know if the way i'm going about doing this is correct).
I have a script that gathers some information on a computer. The intent is to have that script ftp/sftp/any-transfer etc some data to a remote server. This script is intended to be distributed among many people also.
Is it possible to hide the password/user of remote server in the script (or perhaps even the implementation details?). I was thinking of encoding it in some way. Any suggestions?
Also, in compiled languages like java or C, is it safe to just distribute around a compiled version of the code?
Thanks. | Disguising username & password on distributed python scripts | 0 | 0 | 0 | 103 |
18,219,951 | 2013-08-13T22:00:00.000 | 2 | 1 | 0 | 0 | python | 18,219,982 | 4 | false | 0 | 0 | The answer is no. You can't put the authentication details into the program and make it impossible for users to get those same authentication details. You can try to obfuscate them, but it is not possible to ensure that they cannot be read.
Compiling the code will not even obfuscate them very much.
One approach to the problem would be to implement a REST web interface and supply each distribution of the program with an API key of some sort. Then set up the program to connect to the interface over SSL using its key and put whatever information it needs there. Then you could track which version is connecting from where and limit each distribution of the program to updating a restricted set of resources on the server. Furthermore you could use server heuristics to guess if an api key has leaked and block an account if that occurs.
Another way would be if all of the hosts/users of the program are trusted, then you could set up user accounts on a server node and each script could authenticate with its own username and password or SSH key. Your server node would then have to restrict access based on what each user is allowed to update. Using SSH key based authentication allows you to avoid leaving the passwords around while still allowing authenticated access to your server. | 4 | 0 | 0 | This question is a bit far fetched (i don't even know if the way i'm going about doing this is correct).
I have a script that gathers some information on a computer. The intent is to have that script ftp/sftp/any-transfer etc some data to a remote server. This script is intended to be distributed among many people also.
Is it possible to hide the password/user of remote server in the script (or perhaps even the implementation details?). I was thinking of encoding it in some way. Any suggestions?
Also, in compiled languages like java or C, is it safe to just distribute around a compiled version of the code?
Thanks. | Disguising username & password on distributed python scripts | 0.099668 | 0 | 0 | 103 |
18,219,951 | 2013-08-13T22:00:00.000 | 2 | 1 | 0 | 0 | python | 18,220,112 | 4 | false | 0 | 0 | Just set the name to "username" and password to "password", and then when you give it to your friends, provision an account/credential that's only for them, and tell them to change the script and be done with it. That's the best/easiest way to do this. | 4 | 0 | 0 | This question is a bit far fetched (i don't even know if the way i'm going about doing this is correct).
I have a script that gathers some information on a computer. The intent is to have that script ftp/sftp/any-transfer etc some data to a remote server. This script is intended to be distributed among many people also.
Is it possible to hide the password/user of remote server in the script (or perhaps even the implementation details?). I was thinking of encoding it in some way. Any suggestions?
Also, in compiled languages like java or C, is it safe to just distribute around a compiled version of the code?
Thanks. | Disguising username & password on distributed python scripts | 0.099668 | 0 | 0 | 103 |
18,219,951 | 2013-08-13T22:00:00.000 | 1 | 1 | 0 | 0 | python | 18,220,062 | 4 | false | 0 | 0 | To add to jmh's comments and answer another part of your question: it is possible to decompile the Java from the .class bytecode and get almost exactly what the .java file contains, so that won't help you. C is more difficult to piece back together, but again, it's certainly possible. | 4 | 0 | 0 | This question is a bit far fetched (i don't even know if the way i'm going about doing this is correct).
I have a script that gathers some information on a computer. The intent is to have that script ftp/sftp/any-transfer etc some data to a remote server. This script is intended to be distributed among many people also.
Is it possible to hide the password/user of remote server in the script (or perhaps even the implementation details?). I was thinking of encoding it in some way. Any suggestions?
Also, in compiled languages like java or C, is it safe to just distribute around a compiled version of the code?
Thanks. | Disguising username & password on distributed python scripts | 0.049958 | 0 | 0 | 103 |
18,222,808 | 2013-08-14T03:39:00.000 | 1 | 0 | 0 | 0 | python,browser,gif | 18,229,495 | 1 | true | 0 | 0 | Use this line: webbrowser.open('file://'+os.getcwd()+'/gif_name.gif') and change the default app to view pictures to Chrome. | 1 | 0 | 0 | I have a series of local gif files. I was wondering how I would be able to open this series of local gifs using the webbrowser module. I am, by the way, running on Mac OS X Snow Leopard. Whenever I try to use the webbrowser.open('file:gif_name') snippet, my computer throws the error 0:30: execution error: Bad name for file. some object (-37). Any help would be greatly appreciated! | Opening Series of Gifs in Chrome Using webbrowser | 1.2 | 0 | 1 | 231
18,223,280 | 2013-08-14T04:34:00.000 | 0 | 0 | 1 | 0 | python,python-3.3 | 18,223,323 | 4 | false | 0 | 0 | Assuming you have a default list L and you want to make sure you're working with a list equal to L and not mutating L itself, you could initialize a local variable to list(L). Is that what you mean? | 1 | 2 | 0 | How can I create a new instance of a list or dictionary every time a method is called? The closest thing I can compare what I would like to do is the new keyword in java or C# | Create new instance of list or dictionary without class | 0 | 0 | 0 | 3,944 |
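A sketch of the idiom behind the question: Python has no new keyword; a literal or constructor call makes a fresh object on each execution, and the None-sentinel default keeps calls from sharing one list.

```python
def make_record(items=None):
    # A default of items=[] is built once, at definition time, and then
    # shared by every call; the None sentinel gives each call its own
    # fresh instance instead.
    if items is None:
        items = []  # or list(), dict(), {}: a new object every call
    items.append('entry')
    return items

a = make_record()
b = make_record()
assert a is not b  # two distinct list instances
```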
18,227,789 | 2013-08-14T09:22:00.000 | 2 | 0 | 0 | 1 | python,video-streaming,wifi,gopro | 20,150,294 | 1 | false | 0 | 0 | I've been working on creating a GoPro API recently for Node.js and found the device very glitchy too. It's much more stable after installing the latest GoPro firmware (3.0.0).
As for streaming, I couldn't get around the Wi-Fi latency and went for a record-and-copy approach. | 1 | 2 | 0 | I recently acquired a GoPro Hero 3. It's working fine, but when I attempt to stream live video/audio it glitches every now and then.
Initially i just used vlc to open the m3u8 file, however when that was glitchy i downloaded the android app and attempted to stream over that.
It was a little better on the app.
I used Wireshark, and I think the cause is that it's simply not transferring/buffering fast enough. I tried just getting everything with wget in a loop; it got through 3 loops before it either caught up (possible, but I don't think so ... though I may double-check that) or fell behind and hence timed out/hung.
There is also delay in the image, but i can live with that.
I have tried lowering the resolution/frame rate but im not sure if it is actually doing anything as i can't tell any difference. I think it may be just the settings for recording on the go pro. Either way, it didn't work.
Essentially, I am looking for any possible methods for removing this 'glitchiness'.
My current plan is to attempt writing something in Python to get the files over UDP (no TCP overhead).
I'll just add a few more details/symptoms:
The Go Pro is using the Apple m3u8 streaming format.
At any one time there are 16 .ts files in the folder. (26 KB each)
These get overwritten in a loop (circular buffer)
When i stream on vlc:
Approx 1s delay - streams fine for ~0.5s, stops for a little less than that, then repeats.
What I think is happening is that the file it's trying to transfer gets overwritten, which causes it to time out.
Over the android App:
Less delay and shorter 'timeouts' but still there
I want to write a Python script to try to get a continuous image. The files are small enough that they should fit in a single UDP packet (I think ... 65 KB-ish, right?)
Is there anything i could change in terms of wifi setting on my laptop to improve it too?
Ie some how dedicate it to that?
Thanks,
Stephen | Go Pro Hero 3 - Streaming video over wifi | 0.379949 | 0 | 1 | 9,059 |
18,234,020 | 2013-08-14T14:07:00.000 | 0 | 0 | 0 | 0 | video,python-2.7,pyside | 18,250,434 | 1 | false | 0 | 1 | Qt's Phonon module plays videos using the codecs installed in the operating system. So, depending on the lowest version of Windows you plan to support, you could choose one of the preinstalled codecs, like WMV7 for XP.
Alternatively you could use Phonon and install an efficient free codec like x264. | 1 | 0 | 0 | I need a video player in my PySide application on Windows without any dependencies. Right now I play a video in a QWebView that loads Flash, which works okay, except most of the people using the application are running it on freshly installed copies of Windows which lack Flash, and they aren't willing to install Flash just to play the video in my application.
I could include the flash plugin with my distribution, but that's disallowed by Adobe's licensing.
I must have tried at least two dozen things, but nothing has worked well enough so far. The things that have worked best so far are:
Flash, licensing forbids it from being packed in with my app
PyGame, 1 - only accepted mpeg-1 files, which I've found are massive, 2 - has a tendency to crash
QMovie - seems to only support .mng files, which I've been unable to find a converter for. Additionally, that format is visual only - I need audio, too.
I've been trying to get PyMedia to work, but it's refusing to install (it wants Python 2.7 but I have Python 2.7.3. I've tried installing multiple copies of Python and downgrading before... it's just not worth the headache of trying to get all my code to run with a single version.) | Play videos with Python on Windows without Dependencies? | 0 | 0 | 0 | 240 |
18,237,368 | 2013-08-14T16:35:00.000 | 0 | 0 | 0 | 0 | python,esri,arcpy,arcmap | 18,663,569 | 1 | true | 0 | 0 | Short of writing your code for you: use arcpy.ListFields() to loop over the fields of each feature class. Be careful not to totally denude the target class of identifying info; use an if/then within that loop.
You haven't really specified the common key/relate field but you'll want to keep that around, at least, in order to transfer information to relevant features in target.
P.S. why 2.6? Time to update both ArcGIS & Python! | 1 | 0 | 0 | Is there a way to remove all attributes from a feature class and then add new ones from an existing feature class? I have an application that is directed to a specific path, however the data it represents is updated regularly from a third party source and I have to download the updated feature class. If I were to simply load the new ones into the old I would have duplicates. I'm trying to automate this whole process with Python 2.6. | python- Feature class attribute manipulation | 1.2 | 0 | 0 | 164 |
18,238,558 | 2013-08-14T17:38:00.000 | 2 | 0 | 1 | 0 | python,exit | 18,238,597 | 2 | false | 0 | 0 | You should encapsulate the call in a try/except block catching SystemExit (which sys.exit() raises); that will stop the exit and let script A continue executing. | 1 | 0 | 0 | I am new to the Python language. My problem is that I have two Python scripts: an automation script A and a main script B. Script A internally calls script B. Script B exits whenever an exception is caught, using sys.exit(1). Now, whenever script B exits, it results in script A exiting as well. Is there any way to keep script A running and continue the rest of its execution, even if script B exits?
Thanks in advance. | How to stop a python script from exiting when another python script exits? | 0.197375 | 0 | 0 | 542
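One subtlety worth a sketch: sys.exit() raises SystemExit, which does not inherit from Exception, so it must be named explicitly. This assumes script A imports and calls B's code in-process; script_b is a hypothetical module name.

```python
try:
    import script_b       # hypothetical module name for script B
    script_b.main()       # or however script A invokes B's code
except SystemExit as exc:
    # sys.exit(1) raises SystemExit(1); a plain "except Exception"
    # would miss it and script A would still die.
    print('script B exited with status %s; continuing' % exc.code)

# ... the rest of script A keeps running ...
```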
18,243,045 | 2013-08-14T21:58:00.000 | 0 | 0 | 1 | 0 | python,regex,windows | 18,243,064 | 1 | false | 0 | 0 | Just off the top of my head: do the search on a separate thread; if the time expires or the user cancels, terminate the thread? | 1 | 2 | 0 | Is there an OS-independent or Windows-specific way to stop a regex search after a given amount of time or on user request?
My program provides text editing functionality with regex searching. If a user enters a pathological regex pattern searching may need too much time. It would be good to stop the search at user request or at least after a given timeout.
I found solutions for Linux/Unix using signal.alarm() but this function isn't supported on Windows. | Stop regex search on user request or after timeout | 0 | 0 | 0 | 132 |
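Python threads cannot be killed from outside, but a worker process can, and multiprocessing works on Windows too. A sketch of that variant (a different mechanism than the thread suggested in the answer, and than signal.alarm, which the question rules out for Windows):

```python
import re
import multiprocessing

def _search(pattern, text, queue):
    match = re.search(pattern, text)
    queue.put(match.span() if match else None)

def search_with_timeout(pattern, text, seconds):
    queue = multiprocessing.Queue()
    worker = multiprocessing.Process(target=_search,
                                     args=(pattern, text, queue))
    worker.start()
    worker.join(seconds)
    if worker.is_alive():  # pathological pattern: give up on it
        worker.terminate()
        worker.join()
        return None
    return queue.get()

# On Windows, call search_with_timeout() from under an
# "if __name__ == '__main__':" guard.
```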
18,244,050 | 2013-08-14T23:31:00.000 | 4 | 0 | 0 | 1 | python,command,arguments,twisted | 18,245,245 | 1 | true | 0 | 0 | A tac file is configuration. It doesn't accept configuration.
If you want to pass command line arguments, you do need to write a plugin. | 1 | 4 | 0 | I'm writing a server with Twisted that is based on a *.tac file that starts the services and the application. I'd like to get one additional command-line argument to specify a YAML configuration file. I've tried using usage.Options by building a class that inherits from it, but it chokes because the additional twistd command-line arguments (-y, for example) are not specified in my Options subclass.
How can I get one additional argument and still pass the rest to twistd? Do I have to do this using the plugin system?
Thanks in advance for your help!
Doug | twistd using usage.options in a *.tac file | 1.2 | 0 | 0 | 511 |
18,245,510 | 2013-08-15T02:41:00.000 | 0 | 0 | 0 | 0 | python,mysql,database-connection,mysql-python | 18,245,522 | 1 | true | 0 | 0 | If your table uses the InnoDB engine, you should call connection.commit() after your cursor.execute() calls. | 1 | 0 | 0 | I used MySQLdb to connect to a database on my localhost.
It works, but if I add data to a table in the database when the program is running, it shows that it has been added, but when I check the table from localhost, it hasn't been updated. | musqldb-python doesnt really update the original database | 1.2 | 1 | 0 | 30 |
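A minimal sketch of that fix, with placeholder credentials.

```python
import MySQLdb

conn = MySQLdb.connect(host='localhost', user='me',
                       passwd='secret', db='mydb')  # placeholder credentials
cur = conn.cursor()
cur.execute("INSERT INTO mytable (name) VALUES (%s)", ("example",))
conn.commit()  # without this, the InnoDB change stays invisible to
               # other connections, such as the client used to check
```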
18,246,847 | 2013-08-15T05:31:00.000 | 0 | 0 | 0 | 0 | python,user-interface,wxpython,listctrl | 40,836,750 | 2 | true | 0 | 1 | I ended up using ObjectListView to do the job. Basically, you build an index for your objects in the list, and then you are able to operate on any row you want. It's way more convenient than wx.ListCtrl. | 1 | 0 | 0 | I'm using wx.ListCtrl for live reporting in my app, and there will be continuous status updates, including inserting a new row when a task starts and deleting related rows when the tasks end. Since the list gets sorted every now and then, you cannot simply delete the rows by the rowid you started with. Although you can assign a unique id using SetItemData, and that way you know exactly which row to delete when a task is done, there does NOT seem to be any method related to deleting a row by that unique id, not even a method to get the rowid by unique id, and the only method I found is GetItemData, which will return the unique id for a certain row.
So the only way that came to my mind is to iterate over all rows, checking each unique id against the given id, and if it matches, delete that row. But this sounds way too clumsy, so is there a better way to delete a specific row after sorting? | How to delete a specific row in wxListCtrl after sorting? | 1.2 | 0 | 0 | 911
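For completeness, a sketch of the plain wx.ListCtrl fallback the question describes; there is indeed no delete-by-item-data call, so the scan over GetItemData is the direct route.

```python
def delete_by_unique_id(list_ctrl, unique_id):
    # wx.ListCtrl has no lookup-by-item-data method, so scan the rows;
    # GetItemData returns whatever SetItemData stored for each row.
    for row in range(list_ctrl.GetItemCount()):
        if list_ctrl.GetItemData(row) == unique_id:
            list_ctrl.DeleteItem(row)
            return True
    return False
```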
18,255,730 | 2013-08-15T15:10:00.000 | 2 | 1 | 0 | 1 | python,linux | 18,255,933 | 1 | false | 0 | 0 | bash does not handle signals while waiting for your foreground child process to complete. This is why sending it SIGINT does not do anything. This behaviour has nothing to do with process groups.
There are a couple of options to let your child process receive your SIGINT:
When spawning a new process with shell=True, try prepending exec to the front of your command line, so that bash gets replaced with your child process.
When spawning a new process with shell=True, append & wait %- to the command line. This will cause bash to react to signals while waiting for your child process to complete. But it won't forward the signal to your child process.
Use shell=False and specify full paths to your child executables.
So, is there a way to make Popen create a new process group? I can see that there is a flag called subprocess.CREATE_NEW_PROCESS_GROUP, but it is only for Windows.
I'm actually upgrading some legacy scripts which were running with Python2.6 and it seems for Python2.6 the default behavior is what I want (i.e. a new process group when I do Popen). | Popen new process group on linux | 0.379949 | 0 | 0 | 3,257 |
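A sketch of the first option: with exec, bash replaces itself with the worker, so the PID Popen reports is the real child and the signal lands where intended.

```python
import signal
import subprocess

# "exec" makes bash replace itself with the child, so proc.pid is the
# worker itself rather than an intermediate shell.
proc = subprocess.Popen("exec sleep 100", shell=True)

proc.send_signal(signal.SIGINT)  # delivered to sleep, not eaten by bash
proc.wait()
```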
18,256,915 | 2013-08-15T16:13:00.000 | 0 | 1 | 0 | 0 | php,javascript,python,c,pointers | 18,257,064 | 3 | false | 1 | 0 | In Java:
Instead of having a pointer to a struct that you allocate with malloc, you have a reference to an instance of a class that you instantiate with "new". (In Java, you cannot allocate memory for objects on the heap directly as you can in C/C++)
Primitives have no pointers, BUT there are libraries built into the main library for wrapping int,double, etc. in objects (Integer, Double). | 2 | 0 | 0 | As a undergraduate in CS, I started with C, where pointer is an important data type. Thereafter I touched on Java, JavaScript, PHP, Python. None of them have pointer per se.
So why? Do they have some equivalents in their languages that perform like pointer, or is the functionality of pointer not important anymore?
I roughly have some idea but I'd love to hear from more experienced programmers regarding this. | C pointer equivalents on other languages | 0 | 0 | 0 | 188 |
18,256,915 | 2013-08-15T16:13:00.000 | 5 | 1 | 0 | 0 | php,javascript,python,c,pointers | 18,257,037 | 3 | true | 1 | 0 | So why?
In general, pointers are considered too dangerous, so modern languages try to avoid their direct use.
Do they have some equivalents in their languages that perform like pointer, or is the functionality of pointer not important anymore?
The functionality is VERY important. But to make them less dangerous, the pointer has been abstracted into less virulent types, such as references.
Basically, this boils down to stronger typing, and the lack of pointer arithmetic. | 2 | 0 | 0 | As a undergraduate in CS, I started with C, where pointer is an important data type. Thereafter I touched on Java, JavaScript, PHP, Python. None of them have pointer per se.
So why? Do they have some equivalents in their languages that perform like pointer, or is the functionality of pointer not important anymore?
I roughly have some idea but I'd love to hear from more experienced programmers regarding this. | C pointer equivalents on other languages | 1.2 | 0 | 0 | 188 |
18,259,697 | 2013-08-15T18:58:00.000 | 7 | 0 | 0 | 1 | python,google-app-engine,app-engine-ndb | 18,281,029 | 1 | true | 1 | 0 | Short answer: yes.
I find deserialization in Python to be very slow, especially where repeated properties are involved. Apparently, GAE-Python deserialization creates boatloads of objects. It's known to be inefficient, but also apparently, no one wants to touch it because it's so far down the stack.
It's unfortunate. We run F4 Front Ends most of the time due to this overhead (i.e., faster CPU == faster deserialization). | 1 | 6 | 0 | In profiling my python2.7 App Engine app, I find that it's taking an average of 7ms per record to deserialize records fetched from ndb into python objects. (In pb_to_query_result, pb_to_entity and their descendants—this does not include the RPC time to query the database and receive the raw records.)
Is this expected? My model has six properties, one of which is a LocalStructuredProperty with 15 properties, which also includes a repeated StructuredProperty with four properties, but the average object should have less than 30 properties all told, I think.
Is it expected to be this slow? I want to fetch a couple of thousand records to do some simple aggregate analysis, and while I can tolerate a certain amount of latency, over 10 seconds is a problem. Is there anything I can do to restructure my models or my schema to make this more viable? (Other than the obvious solution of pre-calculating my aggregate analysis on a regular basis and caching the results.)
If it's unusual for it to be this slow, it would be helpful to know that so I can go and look for what I might be doing that impairs it. | App Engine deserializing records in python: is it really this slow? | 1.2 | 0 | 0 | 264 |
18,260,514 | 2013-08-15T19:44:00.000 | 0 | 0 | 1 | 0 | python,django,web | 18,260,621 | 6 | false | 1 | 0 | The answer to this question would be pretty subjective, but let's try.
Minimal requirements
Knowledge about Python (basics, idioms, language characteristics),
Some server solution (if you want to put it live; otherwise local development is possible without web server),
At that point you are already able to code. You can write your code in even the simplest text editor, so no need for an IDE.
Good to have
Good IDE with autocompletion and inspections (I recommend PyCharm, but any decent one would do),
Knowledge about how to install Python modules,
At that point you are more efficient with your coding and see some errors before you execute your code.
Best practices (not necessarily all at once)
Virtualenv,
Vagrant,
Configured web server matching the one that will serve your Python app,
At that point you should have clean and separate environments for every project. They should also resemble the target environment as much as possible.
The list could probably be completed with more items, though. | 1 | 3 | 0 | I guess I am having a hard time understanding what is needed to start web development with Python. I am new to both web development and Python and I am having a hard time figuring out what really is needed for a "Python Development Environment". I have heard that I should use virtualenv for all my developing. Others say a good IDE. Some say a VM with all the tools you need. It all is a bit overwhelming.
So, from a Python developer's standpoint, I ask: what is the way to start? What do I need? What don't I need? Should I just get a good IDE or use a VM? | Python Development Environment | 0 | 0 | 0 | 163
18,262,685 | 2013-08-15T22:05:00.000 | 0 | 0 | 1 | 0 | python,multithreading,kernel,signals,cpython | 18,679,587 | 2 | false | 0 | 0 | A signal could be delivered and handled in the middle of a reference counting operation. In case you wonder why CPython doesn't use atomic CPU instructions for reference counting: they are too slow. Atomic operations use memory barriers to sync CPU caches (L1, L2, shared L3) and CPUs (ccNUMA). As you can imagine, this prevents lots of optimizations. Modern CPUs are insanely fast, so fast that they spend a lot of time doing nothing but waiting for data. Reference increment and decrement are very common operations in CPython. Memory barriers prevent out-of-order execution, which is a very important optimization trick.
The reference counting code is carefully written and takes multi-threading and signals into account. Signal handlers cannot access a partly created or destroyed Python object, just as threads can't. Macros like Py_CLEAR take care of edge cases. I/O functions take care of EINTR, too. 3.3 has an improved subprocess module that uses only async-signal-safe functions between fork() and execvpe().
You don't have to worry. We have some clever people that know their POSIX fu quite well. | 2 | 0 | 0 | Obviously the GIL prevents switching contexts between threads to protect reference counting, but is signal handling completely safe in CPython? | Can signals be caught and handled in python in-between reference counting operations? | 0 | 0 | 0 | 70 |
18,262,685 | 2013-08-15T22:05:00.000 | 1 | 0 | 1 | 0 | python,multithreading,kernel,signals,cpython | 18,679,634 | 2 | false | 0 | 0 | Signals in Python are caught by a very simple signal handler which, in effect, simply schedules the actual signal handler function to be called on the main thread. The C signal handler doesn't touch any Python objects, so it doesn't risk corrupting any state, while the Python signal handler is executed in-between bytecode op evaluations, so it too won't corrupt CPython's internal state. | 2 | 0 | 0 | Obviously the GIL prevents switching contexts between threads to protect reference counting, but is signal handling completely safe in CPython? | Can signals be caught and handled in python in-between reference counting operations? | 0.099668 | 0 | 0 | 70 |
18,262,760 | 2013-08-15T22:12:00.000 | 3 | 0 | 0 | 1 | python,emacs,terminate | 18,262,847 | 1 | true | 0 | 0 | Try using the keyboard interrupt, which comint sends to the interpreter via C-c C-c.
I generally hold down C-c until the prompt returns. | 1 | 0 | 0 | I am running a python interpreter through emacs. I often find myself running python scripts and wishing I could terminate them without killing the entire buffer. That is because I do not want to import libraries all over again...
Is there a way to tell python to stop executing a script and give me a prompt? | Terminating python script through emacs | 1.2 | 0 | 0 | 230 |
18,267,445 | 2013-08-16T06:53:00.000 | 0 | 0 | 1 | 0 | javascript,jquery,python,django,sphinx | 18,271,689 | 1 | false | 1 | 0 | You can't get a list of words as such. Morphology processing to produce a stem is a one-way process.
But Sphinx does include a BuildExcerpts function! This understands morphology settings and will highlight the relevant matching words. | 1 | 0 | 0 | If I type "Home" into the search, the answers I get also contain the word "Homes", etc.
I need to highlight the words on the client as a result of the search. How can I get a list of keywords, based on morphology, for the client? | Django-sphinx How to get a list of keywords | 0 | 0 | 0 | 87
18,269,575 | 2013-08-16T09:01:00.000 | 0 | 0 | 1 | 0 | python,regex | 18,270,125 | 3 | false | 0 | 0 | Use this: /\A([^@\s]+)@((?:[-a-z0-9]+\.)+[a-z]{2,})\Z/i, and make some changes accordingly if you want, but this is a standard one that covers most cases. | 1 | 0 | 0 | I'm trying to write a small regexp checker for mail with the following conditions:
1) domain name between 2 and 128 symbols (numbers, alphabet and .-) = /^[a-z0-9_.-]{2,128}$/
2) minus symbol - not at the beginning or the end of the login or domain name = /^[^-]|[^-]$/
3) account name not less than 64 symbols = /^.{64,}$/
4) two dots together are disallowed = /^([^.]|([^.]).[^.])*$/
5) if double quotes exist in the string, they must come in pairs
6) !,: - these may occur between double quotes
What regexps could I use to enforce these conditions, and how do I bring them together? | Python and mail regular expression | 0 | 0 | 0 | 226
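The suggested pattern translated into Python: re has no /.../i suffix, so the flag moves into re.IGNORECASE, while \A and \Z work unchanged. It loosely covers conditions 1-2; the quoting rules (5-6) would need extra handling.

```python
import re

EMAIL_RE = re.compile(
    r'\A([^@\s]+)@((?:[-a-z0-9]+\.)+[a-z]{2,})\Z', re.IGNORECASE)

def is_valid(address):
    return EMAIL_RE.match(address) is not None

print(is_valid('user.name@example.com'))  # True
print(is_valid('no-domain@@example'))     # False
```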
18,269,786 | 2013-08-16T09:11:00.000 | 0 | 0 | 0 | 0 | python,sockets,tcp,udp,package | 18,271,774 | 2 | true | 0 | 0 | When you send a string, that might be sent in multiple TCP packets. If you send multiple strings, they might all be sent in one TCP packet. You are not exposed to the packets, TCP sockets are just a constant stream of data. Do not expect that every call to recv() is paired with a single call to send(), because that isn't true. You might send "abcd" and "efg", and might read in "a", "bcde", and "fg" from recv().
It is probably best to send data as soon as you get it, so that the networking stack has as much information as possible, as soon as possible. It will decide exactly what to do. You can send as big a string as you like; if necessary, it will be broken up to send over the wire, all automatically.
Since in TCP you don't deal with packets, things like lost packets also aren't your concern. That's all handled automatically -- either the data gets through eventually, or the connection closes.
As for UDP - you probably don't want UDP. :) | 1 | 1 | 0 | I am sending coordinates of points to a visualizational client-side script via TCP over the internet. I wonder which option I should use:
concat the coordinates into a large string and send them together, or
send them one by one
I don't know which one is faster. I have some other questions too:
Which one should I use?
Is there a maximum size of packet of TCP? (python: maximum size of string for client.send(string))
As it is a visualization project should I use UDP instead of TCP?
Could you please tell me a bit about lost packets? When do they occur? How do I deal with them?
Sorry for the many questions, but I really struggle with this issue... | send one big packet of many small over TCP/UDP? | 1.2 | 0 | 1 | 2,040 |
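Since recv() and send() calls don't pair up, the usual remedy is to frame messages yourself. A sketch using a 4-byte length prefix, so the receiver can recover the boundaries between coordinate strings:

```python
import struct

def send_msg(sock, payload):
    # TCP delivers a byte stream, not messages; prefix each payload
    # with its length so boundaries can be reconstructed.
    sock.sendall(struct.pack('!I', len(payload)) + payload)

def recv_exact(sock, n):
    chunks = []
    while n:
        chunk = sock.recv(n)
        if not chunk:
            raise EOFError('connection closed mid-message')
        chunks.append(chunk)
        n -= len(chunk)
    return ''.join(chunks)

def recv_msg(sock):
    (length,) = struct.unpack('!I', recv_exact(sock, 4))
    return recv_exact(sock, length)
```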
18,270,585 | 2013-08-16T09:53:00.000 | 1 | 0 | 0 | 0 | python,mysql,query-performance,sql-tuning,query-tuning | 18,270,751 | 2 | false | 0 | 0 | It is probably a matter of taste but...
... to give you the exact opposite answer to the one by Alma Do Mundo: for (not so) simple calculations made in the SELECT ... clause, I generally push toward using the DB "as a calculator".
Calculations in the SELECT ... clause are performed as the last step while executing the query. Only the relevant data are used at this point. All the heavy lifting has already been done (processing JOINs, WHERE clauses, aggregates, sorting).
At this point, the extra load of performing some arithmetic operations on the data is really small. And that will reduce the network traffic between your application and the DB server.
It is probably a matter of taste though... | 2 | 2 | 0 | I'm trying to understand which of the following is a better option:
Data calculation using Python from the output of a MySQL query.
Perform the calculations in the query itself.
For example, the query returns 20 rows with 10 columns.
In Python, I compute the difference or division of some of the columns.
Is it better to do this in the query or in Python? | Data Calculations MySQL vs Python | 0.099668 | 1 | 0 | 2,805
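For concreteness, the two options being compared look roughly like this with a DB-API cursor; the table and column names are invented for illustration.
# Option 2: arithmetic in the SELECT clause itself.
cursor.execute("SELECT revenue - cost, revenue / NULLIF(cost, 0) "
               "FROM sales WHERE year = %s", (2013,))
results = cursor.fetchall()

# Option 1: fetch raw columns and compute in Python instead.
cursor.execute("SELECT revenue, cost FROM sales WHERE year = %s", (2013,))
results = [(rev - cost, float(rev) / cost if cost else None)
           for rev, cost in cursor.fetchall()]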
18,270,585 | 2013-08-16T09:53:00.000 | 1 | 0 | 0 | 0 | python,mysql,query-performance,sql-tuning,query-tuning | 18,271,329 | 2 | false | 0 | 0 | If you are doing basic arithmetic operations on values within a row, then do it in SQL. This gives you the option of encapsulating the results in a view or stored procedure. In many databases, it also gives the possibility of parallel execution of the statements (although performance is not an issue with so few rows of data).
If you are doing operations between rows in MySQL (such as getting the max of a column), then the balance is more even. Most databases support window functions for these calculations, but MySQL does not. The added complexity of the query gives some weight to doing these calculations on the client side.
In my opinion, the most important consideration is maintainability of the code. By using a database, you are necessarily incorporating business rules in the database itself (what entities are related to which other entities, for instance). A major problem with maintaining code is having business logic spread through various systems. I much prefer an approach where such logic is as condensed as possible, creating very clear APIs between different layers.
For such an approach, "read" access into the database would be through views. The logic that you are talking about would go into the views and be available to any user of the database -- ensuring consistency across different functions using the database. "write" access would be through stored procedures, ensuring that business rules are checked consistently and that operations are logged appropriately. | 2 | 2 | 0 | I'm trying to understand which of the following is a better option:
Data calculation using Python from the output of a MySQL query.
Perform the calculations in the query itself.
For example, the query returns 20 rows with 10 columns.
In Python, I compute the difference or division of some of the columns.
Is it better to do this in the query or in Python? | Data Calculations MySQL vs Python | 0.099668 | 1 | 0 | 2,805
18,270,859 | 2013-08-16T10:09:00.000 | 12 | 0 | 1 | 0 | python,django,virtual-machine,virtualenv,vagrant | 18,271,644 | 3 | true | 1 | 0 | If you run one vagrant VM per project, then there is no direct reason to use virtualenv.
If other contributors do not use vagrant, but do use virtualenv, then you might want to use it and support it to make their lives easier. | 2 | 22 | 0 | I used to set up VirtualBox VMs manually, with virtualenvs inside them, to run Django projects on my local machine. Recently I discovered Vagrant and decided to switch to it, because it seems very easy and useful.
But I cannot figure it out: do I still need to use virtualenv inside a Vagrant VM? Is it encouraged practice or discouraged? | Do I need to use virtualenv with Vagrant? | 1.2 | 0 | 0 | 13,000
18,270,859 | 2013-08-16T10:09:00.000 | 9 | 0 | 1 | 0 | python,django,virtual-machine,virtualenv,vagrant | 28,601,794 | 3 | false | 1 | 0 | Virtualenv and other forms of isolation (Docker, dedicated VM, ...) are not necessarily mutually exclusive. Using virtualenv is still a good idea, even in an isolated environment, to shield the virtual system Python from your project packages. *nix systems use plethora of Python based utilities dependent on specific versions of packages being available in system Python and you don't want to mess with these.
Mind that virtualenv can still only go as far as pure Python packages and doesn't solve the situation with native extensions that will still mix with the system. | 2 | 22 | 0 | I used to set up VirtualBox VMs manually, with virtualenvs inside them, to run Django projects on my local machine. Recently I discovered Vagrant and decided to switch to it, because it seems very easy and useful.
But I cannot figure it out: do I still need to use virtualenv inside a Vagrant VM? Is it encouraged practice or discouraged? | Do I need to use virtualenv with Vagrant? | 1 | 0 | 0 | 13,000
18,276,893 | 2013-08-16T15:25:00.000 | 0 | 0 | 0 | 0 | python,django,legacy,inspectdb | 18,277,208 | 1 | false | 1 | 0 | Figured it out: the 'NAME' entry in my settings.py was incorrect, so it was looking at the wrong database. | 1 | 0 | 0 | Running "inspectdb" with Django returns a models file, but the file is missing some of the tables in my DB. It actually has a table that was added a while ago but later deleted or replaced. Do I need to update my DB or something? It seems like Django is looking at an older "version" of the DB. | django inspectdb not getting all my tables | 0 | 0 | 0 | 847
18,280,454 | 2013-08-16T19:08:00.000 | 0 | 0 | 1 | 0 | python,caching,flask,nlp,gevent | 18,286,160 | 2 | false | 1 | 0 | Can't you unpickle the files when the server is instantiated, and then keep the unpickled data in the global namespace? This way, it'll be available for each request, and as you're not planning to write anything to it, you do not have to fear any race conditions. | 2 | 3 | 0 | I am building a python based web service that provides natural language processing support to our main app API. Since it's so NLP heavy, it requires unpickling a few very large (50-300MB) corpus files from the disk before it can do any kind of analyses.
How can I load these files into memory so that they are available to every request? I experimented with memcached and redis but they seem designed for much smaller objects. I have also been trying to use the Flask g object, but this only persists throughout one request.
Is there any way to do this while using a gevent (or other) server to allow concurrent connections? The corpora are completely read-only so there ought to be a safe way to expose the memory to multiple greenlets/threads/processes.
Thanks so much and sorry if it's a stupid question - I've been working with python for quite a while but I'm relatively new to web programming. | Caching large objects in a python Flask/Gevent web service | 0 | 0 | 0 | 1,511 |
18,280,454 | 2013-08-16T19:08:00.000 | 1 | 0 | 1 | 0 | python,caching,flask,nlp,gevent | 18,284,611 | 2 | true | 1 | 0 | If you are using Gevent you can have your read-only data structures in the global scope of your process and they will be shared by all the greenlets. With Gevent your server will be contained in a single process, so the data can be loaded once and shared among all the worker greenlets.
A good way to encapsulate access to the data is by putting access function(s) or class(es) in a module. You can do the unpickling of the data when the module is imported, or you can trigger this task the first time someone calls a function in the module.
You will need to make sure there is no possibility of introducing a race condition, but if the data is strictly read-only you should be fine. | 2 | 3 | 0 | I am building a python based web service that provides natural language processing support to our main app API. Since it's so NLP heavy, it requires unpickling a few very large (50-300MB) corpus files from the disk before it can do any kind of analyses.
How can I load these files into memory so that they are available to every request? I experimented with memcached and redis but they seem designed for much smaller objects. I have also been trying to use the Flask g object, but this only persists throughout one request.
Is there any way to do this while using a gevent (or other) server to allow concurrent connections? The corpora are completely read-only so there ought to be a safe way to expose the memory to multiple greenlets/threads/processes.
Thanks so much and sorry if it's a stupid question - I've been working with python for quite a while but I'm relatively new to web programming. | Caching large objects in a python Flask/Gevent web service | 1.2 | 0 | 0 | 1,511 |
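A minimal sketch of the module-level pattern both answers describe; the file name, variable, and function are hypothetical.
# corpus_cache.py
import pickle

_corpora = None

def get_corpora():
    # Unpickled once per process; under gevent all greenlets share the
    # object, which is safe here because the data is strictly read-only.
    global _corpora
    if _corpora is None:
        with open('corpora.pkl', 'rb') as f:
            _corpora = pickle.load(f)
    return _corpora
Any request handler can then call corpus_cache.get_corpora(); the first caller pays the load cost and later greenlets reuse the in-memory object.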
18,282,042 | 2013-08-16T20:58:00.000 | 0 | 1 | 0 | 1 | python,virtualenv,raspberry-pi | 18,550,607 | 1 | false | 0 | 0 | I needed to format my drive with a Linux partition (e.g. ext4), not a FAT partition; FAT filesystems don't support the POSIX permissions and symlinks that virtualenv relies on. | 1 | 0 | 0 | In Raspbian I can make virtualenvs in my home directory, but when I try to make a virtualenv in a folder on my thumb drive it says the OS prevents it ("operation not permitted"). Is this a known issue? | raspbian python virtualenv not working on thumb drive | 0 | 0 | 0 | 121
18,282,568 | 2013-08-16T21:44:00.000 | 3 | 0 | 1 | 0 | python,numpy,scipy | 18,321,537 | 1 | true | 0 | 0 | So it seems that the cause of the error was incompatibility between scipy 0.12.0 and the much older numpy 1.6.1.
There are two ways to fix this - either to upgrade numpy (to ~1.7.1) or to downgrade scipy (to ~0.10.1).
If ArcGIS 10.2 specifically requires Numpy 1.6.1, the easiest option is to downgrade scipy. | 1 | 7 | 1 | I just installed ArcGIS v10.2 64bit background processing which installs Python 2.7.3 64bit and NumPy 1.6.1. I installed SciPy 0.12.0 64bit to the same Python installation.
When I opened my Python interpreter I was able to successfully import arcpy, numpy, and scipy. However, when I tried to import scipy.ndimage I got an error that said numpy.core.multiarray failed to import. Everything I have found online related to this error references issues between scipy and numpy and suggest upgrading to numpy 1.6.1. I'm already at numpy 1.6.1.
Any ideas how to deal with this? | SciPy 0.12.0 and Numpy 1.6.1 - numpy.core.multiarray failed to import | 1.2 | 0 | 0 | 6,801 |
18,288,616 | 2013-08-17T12:07:00.000 | 1 | 0 | 1 | 0 | python,mysql | 18,288,628 | 1 | true | 0 | 0 | You just need it if you want to compile the Python MySQL bindings from source. If you already have the binary version of the python library then the answer is no, you don't need it. | 1 | 0 | 0 | I'm trying to use python for manipulating some data in MySQL DB.
DB is on a remote PC. And I will use another PC with Python to connect to the DB.
When I searched for how to install the MySQLdb module for Python, the guides all said MySQL needs to be installed on the local PC.
Is that right? Or do I not need to install MySQL on the local PC? | Do I need MySQL installed on my local PC to use MySQLdb for Python to connect MySQL server remotely? | 1.2 | 1 | 0 | 323
18,290,296 | 2013-08-17T15:19:00.000 | 3 | 0 | 0 | 0 | python,django | 18,290,626 | 1 | true | 1 | 0 | Put your code in the __init__.py file of your app folder; it runs once when Django imports the app at server startup. | 1 | 2 | 0 | I have a Python file in my app folder in my Django project and I want it to run when the server starts. How do I do that? | How to run custom python code when server starts with django framework? | 1.2 | 0 | 0 | 895
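A tiny illustration of that 2013-era pattern; the app name is hypothetical, and in current Django versions the AppConfig.ready() hook would be the recommended place instead.
# myapp/__init__.py
# Runs once, when the server process first imports the app package.
def run_startup_tasks():
    pass  # one-time initialisation goes here

run_startup_tasks()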
18,292,691 | 2013-08-17T19:45:00.000 | 1 | 0 | 1 | 0 | python | 18,292,723 | 6 | false | 0 | 0 | This is to do with how Python evaluates the expression x and y. It returns y if x is truthy, and x if x is falsy.
So, in the case of 2 and 2*3, since 2 evaluates to True, it returns the value 2 * 3, which is 6.
In the case of an and operation between multiple operands, it returns the first falsy value, and if all the values are truthy, it returns the last value.
Similarly, for the or operator, the expression A or B or C returns the first truthy value. And if all the values are falsy, it returns the last value. | 3 | 3 | 0 | My first question is what is the more abstract question for the question: 'what is the operation that returns 6 for the expression (2 and 2*3)? Please feel free to retitle my question appropriately.
My second question is what is it that is going on in python that returns 6 for (2 and 2*3). There seems something elegant going on here, and I'd like to read up on this operation. | Python: what is the operation that returns 6 for the expression (2 and 2*3)? | 0.033321 | 0 | 0 | 112 |
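An interpreter session makes the short-circuit rule from the answers concrete:
>>> 2 and 2 * 3        # 2 is truthy, so the second operand is returned
6
>>> 0 and 2 * 3        # 0 is falsy, so it is returned immediately
0
>>> 0 or 'fallback'    # or returns the first truthy value
'fallback'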
18,292,691 | 2013-08-17T19:45:00.000 | 1 | 0 | 1 | 0 | python | 18,292,719 | 6 | false | 0 | 0 | Applying short-circuit evaluation, for a and b Python returns a if a evaluates to false, and b if a evaluates to true.
Hence, since 2 evaluates to true, 2 and 2*3 returns 2*3, which equals 6. | 3 | 3 | 0 | My first question is what is the more abstract question for the question: 'what is the operation that returns 6 for the expression (2 and 2*3)? Please feel free to retitle my question appropriately.
My second question is what is it that is going on in python that returns 6 for (2 and 2*3). There seems something elegant going on here, and I'd like to read up on this operation. | Python: what is the operation that returns 6 for the expression (2 and 2*3)? | 0.033321 | 0 | 0 | 112 |
18,292,691 | 2013-08-17T19:45:00.000 | 1 | 0 | 1 | 0 | python | 18,292,721 | 6 | false | 0 | 0 | Basically it's the same as 2 and 6.
How does it work? and returns the first operand if it is considered False (False, 0, [], ...) and returns the second otherwise | 3 | 3 | 0 | My first question is what is the more abstract question for the question: 'what is the operation that returns 6 for the expression (2 and 2*3)? Please feel free to retitle my question appropriately.
My second question is what is it that is going on in python that returns 6 for (2 and 2*3). There seems something elegant going on here, and I'd like to read up on this operation. | Python: what is the operation that returns 6 for the expression (2 and 2*3)? | 0.033321 | 0 | 0 | 112 |
18,293,345 | 2013-08-17T21:05:00.000 | 1 | 0 | 0 | 1 | python,subprocess,gevent | 18,293,596 | 1 | false | 1 | 0 | It depends on your application logic. If you just feed the data into the database without any CPU intensive tasks, then most of your application time will be spent on IO and threads would be sufficient. If you are doing some CPU intensive stuff then you should use the multiprocessing module so you can use all your CPU cores, which threads won't allow you to do because of the GIL.
Using subprocess would just add an additional task of implementing the same stuff that's already implemented in the multiprocessing module so I would skip that (why reinvent the wheel). And gevents is just an event loop I don't see how will that be better than using threads. But if I'm wrong please correct me, I never used gevent. | 1 | 0 | 0 | I need to constantly load a number of data feeds. The data feeds can take 20-30 seconds to load. I know what feeds to load by checking a MySQL database every hour.
I could have up to 20 feeds to load at the same time. It's important that non of the feeds block each other as I need to refresh them constantly.
When I no longer need to load the feeds the database that I'm reading gets updated and I thus need to stop loading the feed which I would like to do from my main program so I don't need multiple connections to the db.
I'm aware that I could probably do this using this using threading, subprocess or gevents. I wanted to ask if any of these would be best.
Thanks | Python: running process in the background with ability to kill them | 0.197375 | 0 | 0 | 124 |
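A hedged sketch of the IO-bound (thread) case the answer recommends; fetch_and_store and feeds_from_db are placeholders for the real feed loader and the hourly MySQL check.
import threading
import time

def fetch_and_store(name):
    time.sleep(1)   # stands in for the real 20-30 s, mostly-IO feed load

def feeds_from_db():
    return ['feed_a', 'feed_b']   # stands in for the hourly MySQL check

stop_flags = {}   # lets the main program stop a feed without extra DB connections

def load_feed_forever(name):
    while not stop_flags[name].is_set():
        fetch_and_store(name)

for name in feeds_from_db():
    stop_flags[name] = threading.Event()
    t = threading.Thread(target=load_feed_forever, args=(name,))
    t.daemon = True
    t.start()

# When the database says a feed is no longer needed:
# stop_flags['feed_a'].set()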
18,300,122 | 2013-08-18T14:28:00.000 | 1 | 1 | 1 | 0 | python,python-2.7,code-readability | 18,300,147 | 4 | false | 0 | 0 | Use from math import sqrt. You can protect which functions you export from the module using an __all__ statement. __all__ should be a list of names you want to export from your module. | 3 | 3 | 0 | The title is a little hard to understand, but my question is simple.
I have a program that needs to take the sqrt() of something, but that's the only thing I need from math. It seems a bit wasteful to import the entire module to grab a single function.
I could say from math import sqrt, but then sqrt() would be added to my program's main namespace and I don't want that (especially since I plan to alter the program to be usable as a module; would importing like that cause problems in that situation?). Is there any way to import only that one function while still retaining the math.sqrt() syntax?
I'm using Python 2.7 in this specific case, but if there's a different answer for Python 3 I'd like to hear that too for future reference. | From-Import while retaining access by module | 0.049958 | 0 | 0 | 139 |
18,300,122 | 2013-08-18T14:28:00.000 | 6 | 1 | 1 | 0 | python,python-2.7,code-readability | 18,300,189 | 4 | true | 0 | 0 | Either way you "import" the complete math module in a sense that it's compiled and stored in sys.modules. So you don't have any optimisation benefits if you do from math import sqrt compared to import math. They do exactly the same thing. They import the whole math module, store it sys.modules and then the only difference is that the first one brings the sqrt function into your namespace and the second one brings the math module into your namespace. But the names are just references so you wont benefit memory wise or CPU wise by just importing one thing from the module.
If you want the math.sqrt syntax then just use import math. If you want the sqrt() syntax then use from math import sqrt.
If your concern is protecting the user of your module from polluting his namespace if he does a star import: from your_module import * then define an __all__ variable in your module which is a list of strings representing objects that will be imported if the user of your module does a star import.
I have a program that needs to take the sqrt() of something, but that's the only thing I need from math. It seems a bit wasteful to import the entire module to grab a single function.
I could say from math import sqrt, but then sqrt() would be added to my program's main namespace and I don't want that (especially since I plan to alter the program to be usable as a module; would importing like that cause problems in that situation?). Is there any way to import only that one function while still retaining the math.sqrt() syntax?
I'm using Python 2.7 in this specific case, but if there's a different answer for Python 3 I'd like to hear that too for future reference. | From-Import while retaining access by module | 1.2 | 0 | 0 | 139 |
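A small sketch of the __all__ idea from the accepted answer; the module and function names are invented.
# shapes.py
import math

__all__ = ['circle_area']   # star-importers get circle_area, not math

def circle_area(r):
    return math.pi * r * r  # inside the module you still write math.xxx
A user doing from shapes import * then gets circle_area, but their namespace is not polluted with the math name.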
18,300,122 | 2013-08-18T14:28:00.000 | 0 | 1 | 1 | 0 | python,python-2.7,code-readability | 18,300,146 | 4 | false | 0 | 0 | The short answer is no. Just do from math import sqrt. It won't cause any problems if you use the script as a module, and it doesn't make the code any less readable. | 3 | 3 | 0 | The title is a little hard to understand, but my question is simple.
I have a program that needs to take the sqrt() of something, but that's the only thing I need from math. It seems a bit wasteful to import the entire module to grab a single function.
I could say from math import sqrt, but then sqrt() would be added to my program's main namespace and I don't want that (especially since I plan to alter the program to be usable as a module; would importing like that cause problems in that situation?). Is there any way to import only that one function while still retaining the math.sqrt() syntax?
I'm using Python 2.7 in this specific case, but if there's a different answer for Python 3 I'd like to hear that too for future reference. | From-Import while retaining access by module | 0 | 0 | 0 | 139 |
18,300,841 | 2013-08-18T15:43:00.000 | 3 | 0 | 1 | 0 | delphi,python4delphi | 18,300,977 | 1 | true | 0 | 0 | python4delphi is a loose wrapper around the Python API and as such relies on a functioning Python installation. Typically on Windows this comprises at least the following:
The main Python directory. On your system this is C:\Python27.
The Python DLL which is python27.dll and lives in your system directory.
Registry settings that indicate where your Python directory is installed.
When you rename the Python directory, the registry settings refer to a location that no longer exists. And so the failure you observe is entirely to be expected.
Perhaps you are trying to work out how to deploy your application in a self-contained way without requiring an external dependency on a Python installation. If so, then I suggest you look in to one of the portable Python distributions. You may need to adapt python4delphi a little to find the Python DLL which will be located under your application's directory. But that should be all that's needed. Take care of the licensing issues too if you do distribute Python with your application. | 1 | 1 | 0 | I have Python 2.7 installed in "C:\Python27". Now I run 1st demo of Python4delphi with D7, which somehow uses my Py2.7 install folder. If I rename Python folder, demo can't run (without error message). I didn't change properties of a demo form.
What part/file does py4delphi use from my Python folder? | What part of installed Python does python4delphi use? | 1.2 | 0 | 0 | 822 |
18,301,534 | 2013-08-18T16:58:00.000 | 0 | 0 | 0 | 0 | python-2.7,aptana3,python-import,pythonpath,urlparse | 18,302,731 | 1 | true | 0 | 0 | Update: I finally ended up restarting the project. It turns out that not all of the standard Python tools are selected when you select the virtualenv interpreter. After I selected all of the python tools from the list (just after choosing the interpreter), I was able to get access to the entire standard library.
Do NOT just import the modules into your project. Many of the stdlib modules are interdependent and the import function will only import a module into your main project directory, not a library!
I chose Aptana Studio 3. When I started up Aptana, I pointed the project directory to the virtualenv folder that I had created to house my project. I then pointed the interpreter at the Python executable in App/bin (created from virtualenv)/python2.7. When I started reworking the code to make sure I had everything mapped correctly, I was able to import the API's that I had installed just fine. CherryPy came through with no problems, but I've been having an issue with importing a module that I believe is part of the stdlib--urlparse. At first, I thought it was that my python interpreter was 2.7.1 rather than 2.7.5 (I found the documentation in the 2.7.5 section with no option to review 2.7.1), but my terminal is using 2.7.1 and is able to import the module without any errors (I'm using OSX, Mountain Lion). I am also able to import the module when I activate the virtualenv and run my python interpreter. But when I plug "from urlparse import parse_qsl" into Aptana, I'm getting an error: "Unresolved_import: parse_qsl".
Should I have pointed this at a different interpreter and, if so, will I need to reinstall the API modules I had been working with in the new interpreter? | Aptana Python stdlib issue with virtualenv | 1.2 | 0 | 1 | 183 |
18,307,366 | 2013-08-19T06:09:00.000 | 4 | 0 | 1 | 1 | python,celery | 18,319,581 | 2 | false | 0 | 0 | Can someone tell me whether Celery executes a task in a thread or in a
separate child process?
Neither; the task will be executed in a separate process, possibly on a different machine. It is not a child process of the thread where you call 'delay'. The -c and -P options control how the worker process manages its own concurrency. The worker processes get tasks through a message service which is also completely independent.
How would you compare celery's async with Twisted's reactor model? Is
celery using reactor model after all?
Twisted is an event loop (the reactor). It is asynchronous, but it's not designed for parallel processing.
How would you compare celery's async with Twisted's reactor model? Is celery using reactor model after all?
Thanks, | Is celery's apply_async thread or process? | 0.379949 | 0 | 0 | 3,889 |
18,314,913 | 2013-08-19T13:23:00.000 | -1 | 0 | 0 | 0 | python,random-sample | 18,315,125 | 2 | false | 0 | 0 | The basic procedure is this:
1. Open the input file
This can be accomplished with the basic builtin open function.
2. Open the output file
You'll probably use the same method that you chose in step #1, but you'll need to open the file in write mode.
3. Read the input file to a variable
It's often preferable to read the file one line at a time, and operate on that one line before reading the next, but if memory is not a concern, you can also read the entire thing into a variable all at once.
4. Choose selected lines
There will be any number of ways to do this, depending on how you did step #3, and your requirements. You could use filter, or a list comprehension, or a for loop with an if statement, etc. The best way depends on the particular constraints of your goal.
5. Write the selected lines
Take the selected lines you've chosen in step #4 and write them to the file.
6. Close the files
It's generally good practice to close the files you've opened to prevent resource leaks. | 1 | 0 | 1 | I need to open a csv file, select 1000 random rows and save those rows to a new file. I'm stuck and can't see how to do it. Can anyone help? | Selecting random rows with python and writing to a new file | -0.099668 | 0 | 0 | 11,279 |
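A hedged sketch of steps 1-6 using random.sample; the filenames are placeholders and it assumes the CSV has a header row.
import random

with open('input.csv') as src:
    header = src.readline()
    rows = src.readlines()

chosen = random.sample(rows, 1000)   # ValueError if fewer than 1000 rows

with open('sample.csv', 'w') as dst:
    dst.write(header)
    dst.writelines(chosen)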
18,316,438 | 2013-08-19T14:33:00.000 | 0 | 1 | 1 | 0 | python,outlook,email-attachments,pgp,symantec | 22,923,765 | 1 | false | 0 | 0 | As far as I recall, when Symantec Encryption Desktop creates a PGP file, it is also zipping. This is how I used the Symantec Command Line API tool, as I would select multiple files for encryption and they would end up in a single file (like a zip).
So, you would probably remove any Outlook quirks by just PGPing the txt file, without the zip step in the middle. | 1 | 0 | 0 | I use Symantec Encryption Desktop v.10.3.0 and Microsoft Outlook v. 14.0.6129.5000 (32bit) in my pc.
I use SEC to encrypt a zip file containing a text document and then I attach the encrypted archive (filename.zip.pgp) and send it through Microsoft Exchange Server.
If I do this procedure manually the receiver gets a *.pgp attachment containing a zip, that contains a *.txt file.
If I use Python's smtplib and email modules for sending the e-mail and the gnupg module for the encryption, I have the following problem:
If the receiver saves the .pgp archive in her disk and then uses SEC, the file opens fine.
But if the receiver double-clicks in the attachment inside Outlook the pgp file opens showing a *.txt file (and not a zip file) with the following filename: "filename zip.txt"
This is of course the zip file but with a different extension (txt).
Anyone knows why is this happening? | If I open pgp attachments in Outlook the file extension changes | 0 | 0 | 0 | 591 |
18,317,455 | 2013-08-19T15:23:00.000 | 0 | 1 | 0 | 0 | python,email,smtp | 18,317,530 | 1 | false | 0 | 0 | One possible solution is to create a web backend mantained by you which accepts a POST call and sends the passed message only to authorized addresses.
This way you can also mantain the list of email addresses on your server.
Look at it like an online error alerter. | 1 | 0 | 0 | I'm working on a Python tool for wide distribution (as .exe/.app) that will email reports to the user. Currently (in testing), I'm using smtplib to build the message and send it via GMail, which requires a login() call. However, I'm concerned as to the security of this - I know that Python binaries aren't terribly secure, and I'd rather not have the password stored as plaintext in the executable.
I'm not terribly familiar with email systems, so I don't know if there's something that could securely be used by the .exe. I suppose I could set up a mail server without authentication, but I'm concerned that it'll end up as a spam node. Is there a setup that will allow me to send mail from a distributed Python .exe/.app without opening it to potential attacks? | Securely Send Email from Python Executable | 0 | 0 | 1 | 241 |
18,320,199 | 2013-08-19T18:03:00.000 | 2 | 1 | 0 | 1 | python,c,named-pipes,fifo,mkfifo | 18,320,287 | 1 | false | 0 | 0 | A pipe is a stream.
The number of write() calls on the sender side does not necessarily need to correspond to the number of read()s on the receiver's side.
Try to implement some sort of synchronisation protocol.
If sending plain text you could do so, for example, by adding newlines between tokens and having the receiver read until one is found.
Alternatively you could prefix each data sent, with a fixed length number representing the amount of the data to come. The receiver then can parse this format. | 1 | 3 | 0 | I have two processes one C and one python. The C process spends its time passing data to a named pipe which the python process then reads. Should be pretty simple and it works fine when I'm passing data (currently a time stamp such as "Mon Aug 19 18:30:59 2013") once per second.
Problems occur when I take out the sleep(1); command in the C process. When there's no one second delay the communication quickly gets screwed up. The python process will read more than one message or report that it has read data even though its buffer is empty. At this point the C process usually bombs.
Before I go posting any sample code I'm wondering if I need to implement some sort of synchronisation on both sides. Like maybe telling the C process not to write to the fifo if it's not empty?
The C process opens the named pipe write only and the python process opens as read only.
Both processes are intended to be run as loops. The C process continually reads data as it comes in over a USB port and the python process takes each "message" and parses it before sending it to a SQL Db.
If I'm going to be looking at up to 50 messages per second, will named pipes be able to handle that level of transaction rate? The size of each transaction is relatively small (20 bytes or so) but the frequency makes me wonder if I should be looking at some other form of inter-process communication such as shared memory?
Any advice appreciated. I can post code if necessary but at the moment I'm just wondering if I should be syncing between the two processes somehow.
Thanks! | Named pipe race condition? | 0.379949 | 0 | 0 | 1,407 |
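On the Python side, the newline-delimited protocol suggested in the answer could look like this sketch; the FIFO path and handle() are placeholders, and the C writer would terminate each message with '\n'.
import os

def handle(msg):
    print(msg)   # placeholder: parse the message and insert into the DB here

fd = os.open('/tmp/myfifo', os.O_RDONLY)
buf = b''
while True:
    chunk = os.read(fd, 4096)   # returns as soon as any data is available
    if not chunk:
        break                   # the writer closed its end of the pipe
    buf += chunk
    while b'\n' in buf:
        msg, buf = buf.split(b'\n', 1)
        handle(msg)
os.close(fd)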
18,320,431 | 2013-08-19T18:18:00.000 | 8 | 0 | 1 | 0 | python,keystroke,backspace | 18,320,507 | 5 | true | 0 | 0 | The character for backspace is '\b' but it sounds like you want to affect the GUI.
If your program changes the GUI, then simply delete the last character from the active input field. | 2 | 9 | 0 | I keep finding ways to map the backspace key differently, but that's not what I'm after.
I'm in a program writing a python code, and basically I want to write a line of code that causes the program to think someone just hit the Backspace key in the GUI (as the backspace key deletes something)
How would I code in a backspace keystroke? | Python code to cause a backspace keystroke? | 1.2 | 0 | 0 | 67,991
18,320,431 | 2013-08-19T18:18:00.000 | 0 | 0 | 1 | 0 | python,keystroke,backspace | 67,492,999 | 5 | false | 0 | 0 | Updating since this still pops up in search.
In Python 3, print() can do this if you use the end parameter, so sys.stdout.write() and .flush() aren't needed.
e.g.
print("\b" * len(time_str), end='') | 2 | 9 | 0 | I keep finding ways to map the backspace key differently, but that's not what I'm after.
I'm in a program writing a python code, and basically I want to write a line of code that causes the program to think someone just hit the Backspace key in the GUI (as the backspace key deletes something)
How would I code in a backspace keystroke? | Python code to cause a backspace keystroke? | 0 | 0 | 0 | 67,991
18,322,667 | 2013-08-19T20:36:00.000 | 2 | 0 | 0 | 1 | python,openerp | 18,322,809 | 1 | true | 0 | 0 | This just means that the underlying TCP connection was abruptly dropped. In this case it means that you are trying to write data to a socket that has already been closed on the other side (by the client). It is harmless: while your server was sending an HTTP response, the client (browser) stopped the request (closed the browser, for example). | 1 | 0 | 0 | I am getting this dump occasionally from OpenERP, but it seems harmless. The code serves HTTP; is this dump what happens when a connection is dropped?
Exception happened during processing of request from ('10.100.2.71', 42799)
Traceback (most recent call last):
File "/usr/lib/python2.7/SocketServer.py", line 582, in process_request_thread
self.finish_request(request, client_address)
File "/usr/lib/python2.7/SocketServer.py", line 323, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "/usr/lib/python2.7/SocketServer.py", line 640, in __init__
self.finish()
File "/usr/lib/python2.7/SocketServer.py", line 693, in finish
self.wfile.flush()
File "/usr/lib/python2.7/socket.py", line 303, in flush
self._sock.sendall(view[write_offset:write_offset+buffer_size])
error: [Errno 32] Broken pipe | dump from OpenERP Python is harmless? | 1.2 | 0 | 1 | 354 |
18,324,219 | 2013-08-19T22:36:00.000 | 0 | 0 | 0 | 0 | javascript,c++,python,pointers,hex | 18,325,087 | 2 | false | 0 | 0 | Each offset is a byte-reversed (little-endian) hex pointer to the location, so you need to convert the pointers to integers.
You aren't really clear about what you're asking, but assuming you have a file that has something like:
offset0A
offset0B
offset0C
offset0D
data
You should read all the offsets into an array of integers, each of which is the value of an offset.
Then read the rest of the data into a big block. Remember what the base offset of the data block was.
Then start updating your data. Every time you insert bytes into the data block, you need to update all the pointers that start after the insertion point by the quantity of bytes you insert.
So if you have:
1000 0000
2000 0000
3000 0000
data
Suppose you insert "sometext" at offset 2400. You can then walk your list of offsets. 1000 < 2400, so you don't touch that one, or the one at 2000. But 3000 >= 2400 so you need to change that offset to 3008.
Continue to do this for all your insertions. When you're done, write out the set of offsets first, then the data. | 1 | 0 | 0 | I have a file with information.
Each section of information is located at a certain offset. There are about 100 sections of information. Every section of information starts at the beginning of the word "LPS"
At the beginning of the file there is a list of offset addresses that point to each section of information.
For example:
"801F 0B00" link to the offset "B1F80" where a new section of information starts
"80B0 0B00" refers to the offset "BB080" where a new section of information starts
I'm adding/shortening the information of those lines, so I need to change the offsets at the beginning of the file (the list) to make them coincide with the new locations.
So, what code exactly would help me to do this automatically?
For example, the tool must recognize the offset addresses (the list)
Allow me to add the new information
Then, modify the list of offsets with the new locations.
Thanks | How can I reposition offset addresses quickly? | 0 | 0 | 0 | 89 |
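A hedged Python sketch of that pointer walk, assuming the offsets are 4-byte little-endian integers as the examples suggest; the function names are mine.
import struct

def read_offset(f):
    return struct.unpack('<I', f.read(4))[0]    # '<I' = little-endian uint32

def write_offset(f, value):
    f.write(struct.pack('<I', value))

def shift_offsets(offsets, insert_at, n_inserted):
    # Every pointer at or after the insertion point moves forward.
    return [off + n_inserted if off >= insert_at else off
            for off in offsets]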
18,325,521 | 2013-08-20T01:18:00.000 | 1 | 0 | 0 | 0 | php,javascript,jquery,python,extjs | 18,331,119 | 1 | true | 1 | 0 | I'm working on a Django project for the past six months, where I'm using Django for the backend service, returning only json responses, and the frontend code is completely separate.
jQuery by itself would result in unmaintainable code, even on a smaller scale, so you definitely need a high level frontend framework. I settled with Durandal.js, which includes:
Knockout.js for the ui bindings
Sammy.js for view routing
Require.js to modularize the code
I think it was a good choice at the time, and I feel very productive with that tech stack. If I were to start from scratch again, it would very likely be a similar stack.
As for ExtJS, it's a component/widget-based framework whose philosophy I don't much like; I saw the future, and it wasn't written in ExtJS :)
Although I see AngularJS and EmberJS as the titans that will very likely win the battle of frameworks, at least for now. | 1 | 2 | 0 | I have been given a new project to complete in which I have separate components that talk to each other via service calls.
They are not linked directly.
The technical head wants to build the entire frontend in ExtJS or jQuery and then use JSON to load the data. I mean all forms, login, etc. will be driven by JSON.
Now I have not done anything like that before; I have always generated forms and data from server-side controllers and views, as in PHP or Django (Python).
I want to know whether this approach is good and achievable, because I don't want to change things after spending time on it initially.
If it is the good way then I can start with it | Building the web app with json only data with javascript and ORM | 1.2 | 0 | 0 | 191
18,329,554 | 2013-08-20T07:43:00.000 | 0 | 0 | 1 | 0 | python,eclipse,matplotlib | 18,815,837 | 1 | false | 0 | 0 | have you tried importing matplotlib.pylab, as it is alias for pyplot? And if you still get the error from eclipse syntax check, just try to run it. | 1 | 1 | 0 | When trying to import matplotlib.pyplot in the python console on eclipse, it gives me this error:
object of type 'NoneType' has no len()
But if I import matplotlib.pyplot in the console python itself offers, it can be imported successfully. | Weird error when importing matplotlib with eclipse | 0 | 0 | 0 | 171 |
18,329,995 | 2013-08-20T08:09:00.000 | 0 | 0 | 0 | 0 | python-3.x,sublimetext3 | 18,389,949 | 2 | false | 0 | 1 | There is nothing yet in Sublime Text 3. It would be nice to preview the image and its dimensions right inside of sublime. For now, you will have to open in finder/browser/whatever you use. | 1 | 4 | 0 | Would it be possible to create an internal image viewer plugin for Sublime Text 3? I noticed in their forum people have mentioned it not possible for ST2 due to the fact that the API doesn't allow access to the UI and widgets, but just wondered if it was still the case for ST3? | Sublime Text 3 internal image viewer | 0 | 0 | 0 | 4,022 |
18,330,916 | 2013-08-20T08:57:00.000 | 0 | 0 | 0 | 0 | java,android,python,django,authentication | 18,334,430 | 3 | false | 1 | 0 | You may want to use the Django Sessions middleware, which will set a cookie with a django session_id. On the following requests, the sessions middleware will set an attribute on your request object called user and you can then test if user is authenticated by request.user.is_authenticated() ( or login_required decorator) . Also, you can set the session timeout to whatever you like in the settings.
This middleware is enabled in default django settings. | 1 | 4 | 0 | As of now, I have a Django REST API and everything is hunky dory for the web app, wherein I have implemented User Auth in the backend. The "login_required" condition serves well for the web app, which is cookie based.
I have an Android app now that needs to access the same API. I am able to sign in the user. What I need to know is how to authenticate every user when they make GET/POST request to my views?
My research shows a couple of solutions:
1) Cookie-backed sessions
2) Send username and password with every GET/POST request(might not be secure)
Any ideas? | How to authenticate android user POST request with Django REST API? | 0 | 0 | 0 | 9,898 |
18,334,748 | 2013-08-20T12:13:00.000 | 6 | 0 | 1 | 0 | python,pip,setuptools | 18,334,814 | 2 | true | 0 | 0 | pip is not a replacement for setuptools; quite the contrary, it heavily depends on it and always will depend on it for installing packages from source. What pip does replace is the easy_install tool, which is provided as part of setuptools for historical reasons but shouldn't be used any more. | 1 | 3 | 0 | I'm new to Python and trying to install pip. All the tutorials I saw tell me to install setuptools first, before installing pip.
But I have read that pip is a replacement for setuptools.
So why do I have to install setuptools first, and only then can I install pip? | Installing setuptools and pip | 1.2 | 0 | 0 | 6,702
18,334,877 | 2013-08-20T12:19:00.000 | 1 | 0 | 0 | 1 | python,django,celery | 18,336,614 | 1 | true | 1 | 0 | Are you defining your tasks with ignore_result=True (or did you set CELERY_IGNORE_RESULT to True)? If you did, you should try disabling it. | 1 | 0 | 0 | In Celery's logs there are
Task blabla.bla.bla[arguments] succeeded in 0.757446050644s: None
How can I replace this None with something more meaningful? I tried setting a return value in the tasks but no luck. | Celery - how to substitute 'None' in logs | 1.2 | 0 | 0 | 281
18,342,535 | 2013-08-20T18:33:00.000 | 1 | 1 | 0 | 1 | c++,python,c,opencv,compilation | 18,342,743 | 5 | false | 0 | 1 | How to compile my project so it can be used from python? I've read
that I should create a *.so file but how to do so?
That depends on your compiler. For example with g++ (the object files should be compiled with -fPIC):
g++ -shared -o myLib.so myObject.o
Should it work like a lib, so python calls some specific functions,
chosen in python level?
Yes it is, in my opinion. It seems to be the "obvious" way, since it's great for the modularity and the evolution of the C++ code. | 1 | 2 | 0 | I have C++ code on my Mac that uses non-standard libraries (in my case, OpenCV libs) and I need to compile it so it can be called from other computers (at least from other Macs), run from Python. So I have 3 fundamental questions:
How to compile my project so it can be used from python? I've read
that I should create a *.so file but how to do so?
Should it work like a lib, so python calls some specific functions,
chosen in python level?
Or should it contain a main function that is executed from
command line?
Any ideas on how to do so? PS: I'm using the eclipse IDE to compile my c++ project.
Cheers, | Calling c++ function, from Python script, on a Mac OSX | 0.039979 | 0 | 0 | 3,206 |
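Once the .so exists, one way to call it from Python is ctypes; this sketch assumes the library exports a C-linkage function, extern "C" int add(int, int), which is invented for illustration.
import ctypes

lib = ctypes.CDLL('./myLib.so')
lib.add.argtypes = (ctypes.c_int, ctypes.c_int)
lib.add.restype = ctypes.c_int
print(lib.add(2, 3))   # -> 5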
18,344,338 | 2013-08-20T20:23:00.000 | 3 | 0 | 0 | 0 | python,django,security | 18,354,920 | 1 | true | 1 | 0 | r prefixes to strings are not retained in the string value. 'a\\b' and r'a\b' are exactly the same string, which has a single backslash. u prefixes determine whether the string holds bytes or Unicode characters. In general strings in Django apps should be Unicode strings, but Python will automatically convert bytes to characters where necessary (this can blow up if you use non-ASCII characters).
None of this determines whether a string is ‘safe’.
Using the cleaned_data store on a Form means that the data has been validated for the particular type of field it is associated with. If you have an e-mail field, then the cleaned_data value is sure to look like a valid e-mail address. If you have a plain text field then cleaned_data can be any string. Neither of those provide you any guarantee that a string is ‘safe’; input validation is a good thing to do in general and a useful defense-in-depth but it does not make an application secure against injection.
Since these values are not escaped as far as I can see is it possible that they are not safe?
Input values should never be escaped and are never ‘safe’. It is not the job of the input handling phase to do escaping; it is when you drop the value into a string with a different context that you have to worry about escaping.
So, when you create an HTML response with a string in, you HTML-escape that string. (But better: use a templating language that automatically escapes for you, like Django's autoescape.)
When you create an SQL query with a string in, you SQL-escape that string. (But better: use parameterised queries or an ORM so that you never have to create a query with string variables.)
When you create a JavaScript variable assignment with a string in, you JS-escape that string. (But better: pass the data in a DOM data- attribute and read it from JS instead of using inline code.)
And so on. There are many different forms of escaping and there is no global escaping scheme which can protect you against the range of possible injection attacks. So leave the input as it is, and escape at the output phase, or better use existing framework tools to avoid having to explicitly escape at all. | 1 | 2 | 0 | I was wondering about the safest way to retrieve data from the POST or GET variable in Django. Sometimes I use the variable that is directly passed into the view function by url patterns in urls.py. I am told (not sure) that they are safe to use when I start the pattern a ''r''. But I dont know why this is the case.
For retrieving POST data I know of two options:
Using a form, Django forms have a cleaned data function which should make the data safe to use.....
Using request.POST.get('someval'). Since these values are not escaped, as far as I can see, is it possible that they are not safe? Secondly, does putting a u or r prefix make it safe, and if so, why? | Safely retrieving data from POST or GET in django | 1.2 | 0 | 0 | 2,083
18,350,333 | 2013-08-21T06:32:00.000 | 1 | 0 | 0 | 0 | python,linux,plone | 18,373,792 | 1 | true | 0 | 1 | You need to carefully blank the <div> section for 'View Document in Fullscreen' in this javascript:
collective.documentviewer-2.2.1 py2.6.egg/collective/documentviewer/resources/assets/viewer.js | 1 | 0 | 0 | Where exactly can I override the collective.documentviewer code to disable the icon to view the document full screen in another window? Actually when I only click on the 'customize' button for the documentviewer in zcml it throws up an exception for 'widget', even without addition of a letter of code there and the viewer crashes.I am using version 2.2.1 for collective.documentviewer and Plone 4.1.4 on linux. Please guide. | How do I override the document viewer icon to view in Full Screen in Plone? | 1.2 | 0 | 0 | 119 |
18,352,493 | 2013-08-21T08:30:00.000 | 0 | 0 | 0 | 0 | python,opencv | 21,596,301 | 2 | false | 0 | 0 | For me, it wasn't working when the environment was of ROS Fuerte but it worked when the environment was of ROS Groovy.
As Alexandre had mentioned above, it must be the problem with the opencv2 versions. Fuerte had 2.4.2 while Groovy had 2.4.6 | 2 | 0 | 1 | Using HoughLinesP raises "<'unknown'> is not a numpy array", but my array is really a numpy array.
It works on one of my computer, but not on my robot... | OpenCv2: Using HoughLinesP raises " is not a numpy array" | 0 | 0 | 0 | 158 |
18,352,493 | 2013-08-21T08:30:00.000 | 2 | 0 | 0 | 0 | python,opencv | 18,353,075 | 2 | false | 0 | 0 | Found it:
I don't have the same opencv version on my robot and on my computer !
For the records calling HoughLinesP:
works fine on 2.4.5 and 2.4.6
leads to "<unknown> is not a numpy array" with version $Rev: 4557 $ | 2 | 0 | 1 | Using HoughLinesP raises "<'unknown'> is not a numpy array", but my array is really a numpy array.
It works on one of my computer, but not on my robot... | OpenCv2: Using HoughLinesP raises " is not a numpy array" | 0.197375 | 0 | 0 | 158 |
18,359,184 | 2013-08-21T13:46:00.000 | 3 | 1 | 1 | 1 | python,google-app-engine,twitter,package,twython | 18,362,592 | 4 | true | 0 | 0 | If you put the module files in a directory, for example external_modules/, and then use sys.path.insert(0, 'external_modules') you can include the module as it would be an internal module.
You would have to call sys.path.insert before the first import of the module.
Example: If you placed a "module.pyd" in external_modules/ and want to include it with import module, then place the sys.path.insert before.
sys.path.insert() is an app-wide call, so you have to call it only once. It is best to place it in the main file, before any other imports (except import sys, of course).
I can simply copy all the files of Twython to the appengine root dir, and also import manually all the dependency libraries, but that seems awfully wrong. How do I install a package in a specific folder including all it's dependencies?
Thanks | How to install python package in a specific directory | 1.2 | 0 | 0 | 2,148 |
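A minimal sketch of the answer's suggestion; the directory name is a placeholder, and the insert must run before the first import of the vendored package.
import os
import sys

sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'external_modules'))

import twython   # now resolved from external_modules/ first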
18,359,244 | 2013-08-21T13:49:00.000 | 1 | 0 | 1 | 0 | python,python-2.7 | 18,360,388 | 5 | false | 0 | 0 | Or could you just use string concatenation:
date = 'The date is ' + str(get_date()) | 1 | 2 | 0 | I'm writing a function that generates some part of a string, and is going to be called within another string, so that it completes the sentence.
The restriction, however, is that this complete string must be set in quotation marks. It looks something like:
date = 'The date is get_date()'
where get_date() is a function that returns the date as a string (though it is a little more complicated than that). The problem is that Python won't let me call a function within quotation marks.
Any ideas?
Thanks
EDIT:
I'll be more specific about what I'm trying to do, since I don't think it's that complicated, and you seem like a helpful bunch.
I've got a configuration file (conf.py) that is defining a bunch of variables. One of them that I'd like to manipulate (using a python script) is copyright year:
one_of_the_options = [('example1', 'Copyright 2008-CURRENTYEAR Company Name Ltd.')]
CURRENTYEAR is what I'd like to control via the output of a python script, where my function just returns the year as a string. | How to call a python function within quotation marks | 0.039979 | 0 | 0 | 2,629 |
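A hedged sketch for the conf.py case described above, computing CURRENTYEAR with the standard datetime module and the string-building approach from the answer:
import datetime

current_year = datetime.date.today().year
one_of_the_options = [('example1',
    'Copyright 2008-{0} Company Name Ltd.'.format(current_year))]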
18,360,528 | 2013-08-21T14:42:00.000 | 4 | 0 | 0 | 0 | python,jinja2,salt-stack | 37,210,237 | 7 | false | 1 | 0 | This is a very old post, but it is highly ranked in Google for getting the ipv4 address. As of salt 2015.5.8, the best way to get the primary ipv4 address is {{ grains['ipv4'][0] }}. | 1 | 12 | 0 | Our saltstack is based on hostnames (webN., dbN., etc.). But for various things I need IPs of those servers. For now I had them stored in pillars, but the number of places I need to sync grows.
I tried to use publish + network.ip_addrs, but that kinda sucks, because it needs to do the whole salt-roundtrip just to resolve a hostname. Also it's dependent on the minions responding. Therefore I'm looking for a way to resolve hostname to IP in templates.
I assume that I could write a module for it somehow, but my python skills are very limited. | How to get IP address of hostname inside jinja template | 0.113791 | 0 | 0 | 21,366 |
18,362,568 | 2013-08-21T16:11:00.000 | 0 | 1 | 0 | 0 | python,paramiko | 18,363,186 | 2 | false | 0 | 0 | Wrap your Python program in a shell script that checks whether paramiko is installed and, if it isn't, installs it before running your program. | 1 | 2 | 0 | I'm writing a Python script which will be run on many different servers. A vital part of the script relies on the paramiko module, but it's likely that the servers do not have the paramiko package installed already. Everything needs to be automated, so all the user has to do is run the script and everything will be completed for them. They shouldn't need to manually install anything.
I've seen that people recommend using Active Python / PyPM, but again, that requires an installation.
Is there a way to download and install Paramiko (and any package) from a Python script? | How to download/install paramiko on any sever from Python script? | 0 | 0 | 1 | 5,173 |
18,369,296 | 2013-08-21T23:30:00.000 | 0 | 0 | 1 | 0 | python,python-2.7 | 18,369,413 | 2 | true | 0 | 0 | Your program should:
Check if a file (with a fixed filename) exists (or has any contents).
If it does not exist, then obtain data from the user and save it in the file.
If it does exist, read it. Display it to the user. Ask the user to input "quit" (to exit) or a person's name. If a name, ask the user for the change in balance (positive number = a new loan; negative number = payback).
Before you start, it's better you define a data model for the application, i.e. what data you need to store, and how you will store it. Would you use lists? Dictionaries? Sets? Would you create objects? You also need to consider how to store this information in a file. You can do this by yourself or use the pickle module or similar. If you are just learning the language, I would recommend to do it yourself. | 1 | 0 | 0 | I want to make a python program that lets you track how much certain people owe you. It should ask you your name and the people's names the first time it is run. Afterwards it should say something along the lines of "Welcome back (name)" and be able to retrieve the people's names and how much they owe you, as well as allowing the user to edit names/owed money etc.
However, this wasn't mentioned in any of the tutorials I did. I have no idea how to even start. Is there a library or something for this? I tried using text files but it didn't work. What is the best way to solve this problem? | Python saving data and retrieving it | 1.2 | 0 | 0 | 165 |
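A hedged sketch of the save/load part using pickle, as the answer suggests; the filename and dictionary layout are my own choices.
import os
import pickle

LEDGER_FILE = 'ledger.pkl'

def load_ledger():
    # Returns None on the first run, matching steps 1-2 of the answer.
    if not os.path.exists(LEDGER_FILE):
        return None
    with open(LEDGER_FILE, 'rb') as f:
        return pickle.load(f)   # e.g. {'owner': 'Alice', 'debts': {'Bob': 20}}

def save_ledger(ledger):
    with open(LEDGER_FILE, 'wb') as f:
        pickle.dump(ledger, f)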
18,369,347 | 2013-08-21T23:35:00.000 | 2 | 0 | 1 | 0 | python,eclipse,ide,pydev | 18,384,978 | 1 | true | 1 | 0 | You can define the templates used in PyDev (both for code-completion and for new modules) in window > preferences > pydev > editor > templates.
Anything with the context 'new module' there will be shown to you when you create a new module (and you can have many templates, such as one for unittests, empty modules, class modules, etc).
Note that the templates are only presented when you create a module with Alt+Shift+N > pydev module (or file > new > pydev module), not when you create a regular 'file' (even if it ends with .py) | 1 | 1 | 0 | I need to use a default template for my files, such as a first heading: description of the file, author, shebang line and so on. But PyDev and Eclipse don't do it for me.
When I want to create a new file in my project, how do I get them? | how to define reserved template for python file for eclipse | 1.2 | 0 | 0 | 379
18,375,308 | 2013-08-22T08:31:00.000 | 2 | 1 | 0 | 1 | python,linux,file-io,cron | 18,375,496 | 2 | true | 0 | 0 | Please use absolute paths in your script when running it from crontab. | 1 | 2 | 0 | I have a tiny Python script that needs to read/write to a file. It works when I run it from the command line (since I am root, it will), but when the cron job runs it cannot access the file.
The file is in the same folder as the script and is (or should be) created by the script.
I'm not sure if this is really a programming question... | Python cron job file access | 1.2 | 0 | 0 | 390 |
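A small sketch of the absolute-path fix; building the path from __file__ keeps it correct no matter what working directory cron uses (the data filename is a placeholder).
import os

HERE = os.path.dirname(os.path.abspath(__file__))

with open(os.path.join(HERE, 'data.txt'), 'a') as f:
    f.write('written from cron\n')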
18,377,222 | 2013-08-22T09:57:00.000 | 1 | 0 | 0 | 1 | python,google-app-engine | 18,377,368 | 2 | false | 1 | 0 | Create a new model with the data from the existing one,
or don't create the model until you have all the facts. | 2 | 0 | 0 | I know we can't change the parent of an entity that is stored, but can we change the parent of an entity that is not stored? For example, I am declaring a model as
my_model = MyModel(parent = ParentModel1.key)
but after some checks I may have to change the parent of my_model (I have not run my_model.put()) to ParentModel2. How can I do this? | Google App engine change parent of entity that is not stored | 0.099668 | 0 | 0 | 35
18,377,222 | 2013-08-22T09:57:00.000 | 1 | 0 | 0 | 1 | python,google-app-engine | 18,377,371 | 2 | true | 1 | 0 | You still can't do it. You should probably delay instantiation of the MyModel object until you know its parent. Perhaps you could collect the attributes in a dictionary, then when it comes to instantiation you can do my_instance = MyModel(parent=parent_instance, **kwargs). | 2 | 0 | 0 | I know we can't change the parent of an entity that is stored, but can we change the parent of an entity that is not stored? For example, I am declaring a model as
my_model = MyModel(parent = ParentModel1.key)
but after some checks I may have to change the parent of my_model (I have not run my_model.put()) to ParentModel2. How can I do this? | Google App engine change parent of entity that is not stored | 1.2 | 0 | 0 | 35