Dataset schema (column: dtype, value or length range):

Q_Id: int64, 337 to 49.3M
CreationDate: string, length 23 to 23
Users Score: int64, -42 to 1.15k
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
Tags: string, length 6 to 105
A_Id: int64, 518 to 72.5M
AnswerCount: int64, 1 to 64
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: string, length 6 to 11.6k
Available Count: int64, 1 to 31
Q_Score: int64, 0 to 6.79k
Data Science and Machine Learning: int64, 0 to 1
Question: string, length 15 to 29k
Title: string, length 11 to 150
Score: float64, -1 to 1.2
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 8 to 6.81M
21,052,461
2014-01-10T19:10:00.000
1
0
0
1
python,google-app-engine
22,256,340
2
false
1
0
A recent upgrade of the development SDK started causing this problem for me. After much turmoil, I found that the problem was that the SDK was in a sub-directory of my project code. When I ran the SDK from a different (parent) directory the error went away.
1
4
0
When I try to run any of my App Engine projects with the Python GoogleAppEngineLauncher, I get the error log below. Does anyone have any idea what's going on? I tried removing the SDK and reinstalling it; nothing changed, I still get the same error. Everything was working fine and I don't think I made any changes before this happened. The only thing I can think of is that I installed the bigquery command line tool beforehand, but I don't think that should be the reason. bad runtime process port [''] Traceback (most recent call last): File "/Users/txzhang/Documents/App/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/_python_runtime.py", line 197, in <module> _run_file(__file__, globals()) File "/Users/txzhang/Documents/App/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/_python_runtime.py", line 193, in _run_file execfile(script_path, globals_) File "/Users/txzhang/Documents/App/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/devappserver2/python/runtime.py", line 175, in <module> main() File "/Users/txzhang/Documents/App/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/devappserver2/python/runtime.py", line 153, in main sandbox.enable_sandbox(config) File "/Users/txzhang/Documents/App/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/devappserver2/python/sandbox.py", line 159, in enable_sandbox __import__('%s.threading' % dist27.name) File "/Users/txzhang/Documents/App/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/devappserver2/python/sandbox.py", line 903, in load_module raise ImportError('No module named %s' % fullname) ImportError: No module named google.appengine.dist27.threading
App Engine dev server: bad runtime process port [''] No module named google.appengine.dist27.threading
0.099668
0
0
1,955
21,052,563
2014-01-10T19:15:00.000
0
0
1
1
python,macros,libreoffice
21,052,890
3
false
0
0
Unless LibreOffice was programmed sloppily, it should not interfere. That said, using the bundled software for anything other than what it was bundled for would not be smart.
2
1
0
I have LibreOffice installed on a Windows machine. LibreOffice comes with a bundled python.exe (version 3.3) to allow you to write LibreOffice macros in Python. This works fine. But the bundled Python doesn't come with the IDLE Python IDE as far as I can see. 1) If I download and install Python on my machine, will that interfere with the execution of LibreOffice Python macros (by changing Python environment variables, registry settings, etc.)? or 2) Is there a way to download IDLE or another free Python IDE and have it work with the Python bundled into LibreOffice?
Will adding Python to a machine with LibreOffice interfere with LibreOffice Python macro execution?
0
0
0
543
21,052,563
2014-01-10T19:15:00.000
0
0
1
1
python,macros,libreoffice
28,912,818
3
false
0
0
LibreOffice comes bundled with its own copy of Python (3.3, I think), so the answer to your question is no, it will not. I have found that a simple way of debugging Python macros in LibreOffice is to run LibreOffice from the command line and put print commands in the macros. This at least allows you to trace where you are and what key values are, as the print commands echo onto the terminal screen.
2
1
0
I have LibreOffice installed on a Windows machine. LibreOffice comes with a bundled python.exe (version 3.3) to allow you to write LibreOffice macros in Python. This works fine. But the bundled Python doesn't come with the IDLE Python IDE as far as I can see. 1) If I download and install Python on my machine, will that interfere with the execution of LibreOffice Python macros (by changing Python environment variables, registry settings, etc.)? or 2) Is there a way to download IDLE or another free Python IDE and have it work with the Python bundled into LibreOffice?
Will adding Python to a machine with LibreOffice interfere with LibreOffice Python macro execution?
0
0
0
543
21,053,472
2014-01-10T20:06:00.000
1
0
0
0
python,parameters,oursql
21,053,569
2
false
0
0
It is expecting a sequence of parameters. Use [blah_variable] instead of the bare string (see the sketch after this record).
1
0
0
Forgive my ignorance, as I am new to oursql. I'm simply trying to pass a parameter to a statement: cursor.execute("select blah from blah_table where blah_field = ?", blah_variable). This treated whatever is inside blah_variable as a char array, so if I pass "hello" it will throw a ProgrammingError telling me that 1 parameter was expected but 5 were given. I've tried looking through the docs, but their examples don't use variables. Thanks!
Python oursql treating a string variable as a char array
0.099668
1
0
233
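A minimal sketch of the fix, assuming the oursql driver from the question; the connection arguments are placeholders, and the table/field names are the question's own:

    import oursql  # third-party MySQL driver used in the question

    conn = oursql.connect(host='localhost', user='me', passwd='secret', db='mydb')
    cursor = conn.cursor()

    blah_variable = "hello"

    # Wrong: a bare string is treated as a sequence of 5 characters,
    # i.e. 5 parameters for 1 placeholder -> ProgrammingError.
    # cursor.execute("select blah from blah_table where blah_field = ?", blah_variable)

    # Right: wrap the single value in a one-element sequence.
    cursor.execute("select blah from blah_table where blah_field = ?", [blah_variable])
    print(cursor.fetchall())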
21,054,747
2014-01-10T21:30:00.000
7
0
1
0
python,sublimetext2,sublimetext,sublimerepl
21,054,855
1
true
0
0
First, go to Tools -> SublimeREPL -> Python -> Python to start a new Python REPL. You can then use the commands in Tools -> SublimeREPL -> Eval in REPL and Transfer to REPL to transfer and/or evaluate pieces of your code in the running interpreter. When that code is done running, the REPL stays open, allowing you to enter new commands as you'd expect.
1
4
0
With a Python script currently open in Sublime Text, I'm choosing: Tools > SublimeREPL > Python > RUN Current File. Sublime executes the script in a new interactive REPL [python] window (this window is still inside Sublime). After the Python script's execution is finished, Sublime prints: Repl Closed. I can now start typing Python commands into this interactive window below the "Repl Closed" message, but when I press the Enter key the editor simply advances to a new line, when I expect it to execute the line I just typed. Please advise what key (if any) should be used to run the typed command.
SublimeREPL: Python - RUN Current File
1.2
0
0
6,496
21,054,775
2014-01-10T21:32:00.000
1
0
1
0
python,pygame
21,054,996
5
false
0
1
Example code in books is not the same as production code. Code in books and articles keeps to the point so as not to inundate the reader with noise or tangents to the specific topic; the author takes shortcuts to keep the code as specific to the problem being explained as possible. That said, there are better ways to do what they are doing than using module-level variables, even in books and tutorials.

Huge state machines are bad. Variables that encompass too much scope, no matter what level they are at, tend to create invisible side effects that manifest themselves as the application becomes one huge state machine.

Excessive parameters to functions are a code smell. It should tell you that you probably need to encapsulate your data better: multiple related parameters should be grouped into a single data structure, and just that single data structure passed as a parameter (see the sketch after this record).
4
2
0
I am reading through "Making Games with Python & Pygame" and I have noticed that, because the author uses lots of drawing functions to break his code up, he uses lots of globals such as GRIDSIZE or BACKGROUNDCOLOR. I was always told that globals were generally bad, but without them every drawing function would have ten more repeating parameters, and I have also been told that repetition is bad. I wondered, then, whether the author is correct in using globals for parameters that appear in most (drawing) functions, or whether he should have just used more, repetitive parameters.
Globals vs Parameters for Functions with the Same Parameters
0.039979
0
0
189
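A minimal modern-Python sketch of the "group related parameters into a single data structure" advice; the names here (DrawConfig, draw_grid) are illustrative, not from the book:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class DrawConfig:
        grid_size: int = 20
        background_color: tuple = (255, 255, 255)

    def draw_grid(config: DrawConfig, width: int, height: int):
        # One parameter carries all shared drawing settings instead of a
        # module-level GRIDSIZE / BACKGROUNDCOLOR pair in every function.
        return width // config.grid_size, height // config.grid_size

    print(draw_grid(DrawConfig(grid_size=32), 640, 480))  # (20, 15)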
21,054,775
2014-01-10T21:32:00.000
1
0
1
0
python,pygame
21,056,248
5
false
0
1
Constant globals can be OK, despite whatever general warnings you may have heard. In Python "constant" means "named in capital letters as a hint that nobody should modify them", and "global" means "module-level variable", but the principle still applies. So hopefully GRIDSIZE and BACKGROUNDCOLOR are constant for the duration of the program, but it's possible they are not: I haven't seen the code. There are plenty of examples of module-level constants in the standard Python libraries. For example, errno contains E2BIG, EACCES, etc., and math contains math.pi. However, those examples are even more constant than GRIDSIZE, which presumably could change between different runs of the same program, whereas math.pi will not. So you need to assess the consequences of this particular global and decide whether to use it or not.

"I was always told that globals were generally bad." Bad things are bad for reasons, not "generally bad". In order to decide what to do, you need to understand the reasons. Using globals (or, in Python, module-scope objects) causes certain problems:

- Dependencies on previously-executed code: this is the worst problem, but it applies only to mutable globals. If you use a global that can be modified elsewhere, then in order for someone reading the code to reason about its current value they might need to know about the whole program, because any part of the program might access the global. If you keep your mutables in an object that only parts of your code ever get to see, then you only have to reason about those parts of your code that interact with that object. This might not sound like a big difference if your program is one file that's 100 lines long, but it becomes one eventually. Making your code easy to work with is almost all about making it easy for a reader to reason about what the code does, and "what else can affect it" is a big part of that. And "a reader" is you, next time you want to change something, so helping out readers of your code is pure self-interest.
- Dependencies on concurrently-executing code: mutable globals + threads = needs locks = effort.
- Hidden dependencies: this applies even to constants. You have to decide whether it's OK for this bit of code to have dependencies that are not supplied as parameters ("injected") by the caller of that code. Almost always the answer is "yes". If it is "absolutely not", then you're in a tight spot if you dislike lots of parameters, because you wouldn't even use Python builtins by name, let alone names from modules the code imports. Having decided that hidden dependencies are OK, you have to think about the consequences of GRIDSIZE being one of them, on the way you use and especially test the code. Provided that you don't want to draw on different-sized grids in the same program, you're basically fine, until you want to write tests that cover lots of different values of GRIDSIZE. You'd do that to give yourself confidence now that when you change GRIDSIZE later nothing will break. At that point you'd probably want to be able to provide it as a parameter. For this reason you might find it more useful to have parameters with defaults rather than global values. To prevent long repetitive parameter lists you might pass around one object that combines all of these settings rather than each separately; just beware of creating an object with multiple unrelated purposes, because the others are distracting if you want to reason about just one.
- Namespace pollution: relevant in C, where if you don't use static then two completely unrelated files cannot have different functions of the same name. Not so relevant in Python, where everything is in namespaces.
4
2
0
I am reading through "Making Games with Python & Pygame" and I have noticed that, because the author uses lots of drawing functions to break his code up, he uses lots of globals such as GRIDSIZE or BACKGROUNDCOLOR. I was always told that globals were generally bad, but without them every drawing function would have ten more repeating parameters, and I have also been told that repetition is bad. I wondered, then, whether the author is correct in using globals for parameters that appear in most (drawing) functions, or whether he should have just used more, repetitive parameters.
Globals vs Parameters for Functions with the Same Parameters
0.039979
0
0
189
21,054,775
2014-01-10T21:32:00.000
1
0
1
0
python,pygame
21,054,846
5
false
0
1
In Python you don't have as big a problem with globals, because Python globals are at the module level. (So global is a bit of a misnomer anyway.) It's fine to use module-level globals under many circumstances where true globals would be inappropriate.
4
2
0
I am reading through "Making Games with Python & Pygame" and I have noticed that, because the author uses lots of drawing functions to break his code up, he uses lots of globals such as GRIDSIZE or BACKGROUNDCOLOR. I was always told that globals were generally bad, but without them every drawing function would have ten more repeating parameters, and I have also been told that repetition is bad. I wondered, then, whether the author is correct in using globals for parameters that appear in most (drawing) functions, or whether he should have just used more, repetitive parameters.
Globals vs Parameters for Functions with the Same Parameters
0.039979
0
0
189
21,054,775
2014-01-10T21:32:00.000
0
0
1
0
python,pygame
21,054,871
5
false
0
1
It depends on the situation. Globals are not generally bad, but they should be avoided if not absolutely necessary. For example, constants are perfectly OK as globals, while configuration data should be bundled (encapsulated) as much as possible. If you concentrate all such data into one (or a few) objects, you might either want to make the named functions methods of the objects' class(es), or access those objects from within the functions. In the end, it is a matter of taste: if things are manageable in a convenient way, leave them as they are; if it looks like confusion, change it.
4
2
0
I am reading through "Making Games with Python & Pygame" and I have noticed that, because the author uses lots of drawing functions to break his code up, he uses lots of globals such as GRIDSIZE or BACKGROUNDCOLOR. I was always told that globals were generally bad, but without them every drawing function would have ten more repeating parameters, and I have also been told that repetition is bad. I wondered, then, whether the author is correct in using globals for parameters that appear in most (drawing) functions, or whether he should have just used more, repetitive parameters.
Globals vs Parameters for Functions with the Same Parameters
0
0
0
189
21,056,015
2014-01-10T22:57:00.000
1
0
0
0
macos,python-2.7,matplotlib,enthought,canopy
24,621,287
1
false
0
0
Check your DYLD_LIBRARY_PATH and LD_LIBRARY_PATH, and make sure that you have your library paths in the right order. I changed mine recently due to a MATLAB install and it took ages before I made the connection that it was my LD_LIBRARY_PATH that was stuffed. Programs go searching for libraries in the order specified by those paths. If you have another libpng (as I did) in a library path before the Canopy one, then it will use that; fine if the version is recent, otherwise you get these errors.

First unset them both and then run python and your plot. Hopefully that works. Then go about fixing your DYLD_LIBRARY_PATH and LD_LIBRARY_PATH. I put these at the front of both: /opt/local/lib:/Users/xxxxx/Library/Enthought/Canopy_64bit/User/lib

My error was: ... /Library/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/matplotlib/_png.so Reason: Incompatible library version: _png.so requires version 41.0.0 or later, but libpng12.dylib provides version 40.0.0
1
2
1
I am pretty new to Python, having just taken a course and now trying to apply what I learned to convert matlab code to python. I have to plot some things, so I tried to import matplotlib.pyplot but keep getting Incompatible library version: _png.so requires version 42.0.0 or later, but libpng12.0.dylib provides version 41.0.0 I don't really understand how to either update my libpng12.0.dylib (since I am not really a programmer, just someone who wants to learn python, so please be easy on me if this is a super easy question!), or tell my _png.so to look somewhere else, if that is appropriate. I have done a lot of digging in to this, and I know that there are a number of issues with installing matplotlib on osX, but I haven't seen anything about how to resolve this one. I am running Enthought Canopy, using python 2.7, and I am running OS X 10.8 I really appreciate any help
Issue importing matplotlib.pyplot
0.197375
0
0
295
21,064,467
2014-01-11T16:01:00.000
0
0
0
0
redirect,python-2.7,scrapy,http-post
21,065,555
1
true
1
0
I found my own solution to this problem. Instead of building a list of requests and returning them all at once, I build a chain of them and pass the next one along inside each request's meta data. Inside the callback, I either yield the next request (storing the parsed item on a spider attribute) or return the parsed list of items if there is no next request to execute (see the sketch after this record).
1
0
0
I've written a spider which has one start_url. The parse method of my spider scrapes some data and returns a list of FormRequests. The problem comes with the response of that POST request: it redirects me to another site with some irrelevant GET parameters. The only parameter which seems to matter is a SESSION_ID posted along in the header. Unfortunately, Scrapy's behavior is to execute my requests one after another and queue the redirect responses at the end of the queue. Once all the returned FormRequests are executed, Scrapy starts to execute all the redirects, which all return the same site. How can I circumvent this behavior, so that a FormRequest is executed and the redirect returned in the request's response is executed before any new FormRequest? Maybe there is another way, like somehow forcing the site to issue a new SESSION_ID cookie for each FormRequest. I'm open to any idea that could solve the problem.
Handle Redirects one by one with scrapy
1.2
0
1
302
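A minimal sketch of the chaining approach from this answer, written against modern Scrapy; build_form_requests and parse_item are assumed helpers, and the chain is carried in each request's meta dict:

    import scrapy

    class ChainSpider(scrapy.Spider):
        name = "chain"
        start_urls = ["http://example.com/start"]  # placeholder

        def parse(self, response):
            # Assumed helper: builds FormRequests with callback=self.after_form.
            requests = self.build_form_requests(response)
            self.items = []
            first, rest = requests[0], requests[1:]
            first.meta["pending"] = rest  # stash the rest of the chain
            yield first  # only one request enters the scheduler at a time

        def after_form(self, response):
            self.items.append(self.parse_item(response))  # assumed helper
            pending = response.meta["pending"]
            if pending:
                nxt, rest = pending[0], pending[1:]
                nxt.meta["pending"] = rest
                yield nxt  # this request's redirect resolves before the next one
            else:
                for item in self.items:
                    yield item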
21,064,985
2014-01-11T16:45:00.000
-2
1
0
0
python,visual-studio,raspberry-pi,gpio
34,752,703
1
false
0
1
The nusbio device can give your Windows machine 8 GPIOs, directly available to .NET languages.
1
1
0
At the moment I'm using Visual Studio 2012 Professional with Python Tools to program applications for my Raspberry Pi. For the moment this is a brilliant combination, because the application can also run on a Windows computer and be debugged while in development. Once the application can run on my Pi, I move the files to the Pi and run it there. Today, though, I received a GPIO cable, and this opens new possibilities to use buttons and control light switches; fun stuff. But now the problem: on my Windows machine I can use the GPIO library but not see the results of the application: what happens if I push this button, what happens in the code? I really want to debug this, also when it's used in a bigger application. Moving the files to the Pi and testing them there every time is not an option. Is there an application that can simulate the GPIO interface of the Pi on my Windows machine, so I can test/debug the application while developing?
Program Raspberry PI GPIO on Windows
-0.379949
0
0
3,513
21,067,976
2014-01-11T21:15:00.000
0
1
0
0
python,input,usb,raspberry-pi
21,079,003
1
false
0
0
Do you have control over these devices? Could you change the USB protocol to something more reasonable, like a USB CDC ACM virtual serial port? Do they have to be identical? If not, I would do something simple like have one of the devices send only capital letters and the other send only lower-case, but I guess that doesn't extend well if you need to send a number. With two keyboard emulators, you have to worry about what happens if the messages overlap. For example, if device 1 tries to type "banana" and device 2 tries to type "apple", there is nothing to prevent your Python program from reading something like "applbaneana".
1
0
0
I'm working on a project with Raspberry Pi. I have two identical keyboard emulator devices as inputs. In my program, I need to know which one gave the input. Is there a way to do this in Python? Thank you!
Determine which USB device gives the input in Raspberry Pi
0
0
0
323
21,068,311
2014-01-11T21:48:00.000
0
0
0
1
python,google-app-engine
21,069,451
1
true
1
0
You cannot specify an ancestor for the DatastoreInputReader -- except for a namespace -- so the pipeline will always go through all your Domain entities in a given namespace.
1
0
0
Is there a way to use the standard DatastoreInputReader from AppEngine's mapreduce with entity kind requiring ancestors ? Let's say I have an entity kind Domain with ancestor kind SuperDomain (useful for transactions), where do I specify in mapreduce_pipeline.MapreducePipeline how to use a specific SuperDomain entity as ancestor to all queries?
DatastoreInputReader using entity kind with ancestor
1.2
0
0
57
21,068,454
2014-01-11T22:02:00.000
0
0
1
0
python-2.7,locale
21,068,511
1
false
0
0
Yes - barring (very unlikely) future changes to ASCII or the C locale, that set of characters should remain constant.
1
2
0
According to the docs, string.punctuation contains: "String of ASCII characters which are considered punctuation characters in the C locale." If I print string.punctuation I get !"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ Can I rely on this string always being the same because it contains all ASCII punctuation characters, or is the locale setting somehow important for this? (I am using Python 2.7 on Xubuntu 12.04 with LANG=en_US.UTF-8)
python / will C locale change string.punctuation
0
0
0
1,187
21,069,586
2014-01-12T00:07:00.000
0
0
1
1
macos,python-3.x,upgrade
21,069,629
1
true
0
0
As per Martijn Pieters's comment, I used python3 and now it works as expected.
1
1
0
I've just installed python 3.3.3 on my OS X 10.9.1, however when I run python from the terminal the version that is indicated is 2.7.5. What have I done wrong and how can I make it right?
python version doesn't update on OS X
1.2
0
0
57
21,071,715
2014-01-12T05:40:00.000
-1
0
0
0
python,scipy
60,296,334
3
false
0
0
Use scipy version 1.2.1 to solve this issue. (See the sketch after this record for the import pattern that works regardless of version.)
1
26
1
I would like to use scipy.spatial.distance.cosine in my code. I can import the spatial submodule if I do something like import scipy.spatial or from scipy import spatial, but if I simply import scipy, calling scipy.spatial.distance.cosine(...) results in the following error: AttributeError: 'module' object has no attribute 'spatial'. What is wrong with the second approach?
Why does from scipy import spatial work, while scipy.spatial doesn't work after import scipy?
-0.066568
0
0
26,860
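Version advice aside, the behavior in the question is ordinary Python packaging: importing a package does not automatically import its subpackages unless the package's __init__ does it for you. A minimal illustration (assumes scipy is installed; scipy versions of that era did not eagerly load scipy.spatial):

    import scipy
    # scipy.spatial is a subpackage; after a plain "import scipy" the
    # attribute may not exist yet, hence the AttributeError.

    # Importing the submodule explicitly always works:
    import scipy.spatial
    from scipy.spatial.distance import cosine

    print(cosine([1, 0], [0, 1]))  # 1.0 for orthogonal vectors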
21,071,853
2014-01-12T06:08:00.000
0
0
0
0
python,django,multithreading,background
21,082,317
1
false
1
0
Celery is good if you have tasks that need to run in the background, for example interaction with web workers (sending emails, massive updates in stores, etc.), or parallel tasks where one master worker sends tasks to a Celery server (or servers). In your case, I think the better solution is: create one daemon which talks to your serial port in an infinite loop and saves the data somewhere; then have web workers read this data and present it to the user. If you later need something like long queries with heavy calculation for users, you can add Celery to your stack, and the Celery workers will act like web workers, just reading data and returning results to the web workers.
1
0
0
I have never worked on the web application/service side and am not sure if this is the right approach: I have a data collection system collecting data from a serial port, and I also want to present the data to users via a web service. I'm thinking of creating a Django project to show my data on a website. To collect the data, I also need some background thread running when the website starts, and I'm trying to re-use the models defined in my Django project in the data collecting thread. First, I'd like to know if this is a reasonable design? If yes, is there an easy way to do it? I saw a lot of topics about background tasks using Celery, but those cover very complicated scenarios. Isn't there an easy way for this?
Background thread behind django project
0
0
0
267
21,074,285
2014-01-12T11:52:00.000
0
0
0
0
python-2.7,user-interface,wxpython
21,093,804
1
false
0
1
You'll have to figure out what the width of a toolbar item is. You can then take that number, multiply it by the number of toolbar items, and set the frame's size appropriately. When you add or remove a toolbar item, make updating the frame's size (SetSize()) part of that process. You will probably need to call the frame's Layout() method after adding/removing the widget, and you may also need to call Refresh() (see the sketch after this record).
1
0
0
I have a form (wx.Frame) in which the widest element is the toolbar (created with toolbar = self.CreateToolBar() and toolbar.Realize()). I want the width of the form to be such that every toolbar item is shown. I don't expect the toolbar to change after creation, but I'd prefer it if that case could be handled too. Any suggestions? (I'm running Python 2.7, with non-Phoenix wxPython)
Set width of wxPython Frame based on toolbar width
0
0
0
135
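A minimal classic-wxPython sketch of the sizing idea above; GetToolSize and GetToolsCount are real wx.ToolBar methods, but treating their product as the total toolbar width is an approximation:

    import wx

    class MainFrame(wx.Frame):
        def __init__(self):
            wx.Frame.__init__(self, None, title="Toolbar-sized frame")
            self.toolbar = self.CreateToolBar()
            bmp = wx.ArtProvider.GetBitmap(wx.ART_INFORMATION)
            for i in range(6):
                self.toolbar.AddLabelTool(wx.ID_ANY, "Tool %d" % i, bmp)
            self.toolbar.Realize()
            self.fit_to_toolbar()

        def fit_to_toolbar(self):
            # Approximate total width: per-tool size times number of tools.
            width = self.toolbar.GetToolSize().width * self.toolbar.GetToolsCount()
            self.SetClientSize((width, self.GetClientSize().height))
            self.Layout()   # re-run sizers after the size change
            self.Refresh()  # repaint; sometimes needed after add/remove

    app = wx.App(False)
    MainFrame().Show()
    app.MainLoop()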
21,076,983
2014-01-12T16:11:00.000
1
0
0
0
python,django,facebook,fandjango
21,109,766
1
false
1
0
OK, I get it. The problem was with the MySQL database. The new version added a JSON field, extra_data; MySQL interpreted it as a text field with a NULL value. So the problem was that fandjango wanted empty JSON, not NULL. I updated the extra_data field with '{}' and it worked. Now I have a standard problem: "The mobile version of the app is unavailable because it is misconfigured for mobile access", as it was earlier, before the new version. Now I will try to figure out what this is. :)
1
0
0
After migrating fandjango to version 4.2, I get an error when I access my Facebook application: Exception Value: [u'Enter valid JSON'] Exception Location: /usr/local/lib/python2.7/dist-packages/jsonfield/fields.py in pre_init, line 77. Trace: /usr/local/lib/python2.7/dist-packages/jsonfield/subclassing.py in __set__: obj.__dict__[self.field.name] = self.field.pre_init(value, obj) ... jsonfield.subclassing.Creator object at 0x2a5c750, obj: my User, value: u'' /usr/local/lib/python2.7/dist-packages/jsonfield/fields.py in pre_init: raise ValidationError(_("Enter valid JSON")) ... Local vars: self = jsonfield.fields.JSONField: extra_data, obj = my User, value = u''. I upgraded fandjango using pip install --upgrade fandjango and python manage.py migrate fandjango. There were other problems: "No module named jsonfield", so I installed it using pip; "No module named dateutil.tz", so I installed that as well. It also asked for the property DJANGO_SITE_URL, which was not defined in the settings object, so I put that in the settings file too; however, I didn't find any documentation about this property. So now I am trying to figure out what else is needed.
Django fandjango migration 4.2
0.197375
0
0
547
21,078,720
2014-01-12T18:40:00.000
0
0
0
0
python,listbox,tkinter
21,079,948
2
true
0
1
The listbox fires the virtual event <<ListboxSelect>> whenever the selection changes. If you bind to it, your function will be called on every selection change, even if the selection was changed via the keyboard (see the sketch after this record).
1
0
0
I have a listbox on a GUI in Tkinter. I would like to implement a routine where, if a listbox item is selected, a function is called (based on this selection) to modify the GUI (add another adjacent listbox); then, if that selection changes, the GUI reverts back to its default view. Can this be done? It seems you would need to associate a function with a listbox selection; I'm not sure how to do this or whether it's possible. Does anyone have the secret? It's possible to add "select" buttons at the bottom of my listbox, but I wanted to avoid this extra work for the user and save space on the GUI. Thanks to all in advance! Daniel
Calling a function based on a Listbox current selection "curselection()" in Tkinter
1.2
0
0
1,733
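A minimal Python 3 Tkinter sketch of binding <<ListboxSelect>>; the list contents and callback body are illustrative:

    import tkinter as tk

    def on_select(event):
        widget = event.widget
        selection = widget.curselection()  # tuple of selected indices
        if selection:
            print("selected:", widget.get(selection[0]))
            # ...build/show the adjacent listbox here...
        else:
            print("selection cleared; revert to the default view")

    root = tk.Tk()
    listbox = tk.Listbox(root)
    for item in ("alpha", "beta", "gamma"):
        listbox.insert(tk.END, item)
    # Fires on every selection change, whether by mouse or keyboard.
    listbox.bind("<<ListboxSelect>>", on_select)
    listbox.pack()
    root.mainloop()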
21,082,196
2014-01-13T00:31:00.000
2
1
0
0
java,python
21,082,222
2
false
1
0
"I'm not sure if this is even possible, but is there any way to keep the python script in a state where it wouldn't have to completely re-run from the start every single time?" The correct and most obvious way to do this is to re-implement the Python script (if you can) and turn it into some kind of remote service behind some kind of interface, for example: a web service over JSON, or a web service over RPC (JSON-RPC, XML-RPC). You would then access the service(s) remotely over a network connection from your Java program, serializing the parameters passed to the Python program, and the results back to Java, via something both can speak easily, e.g. JSON (see the sketch after this record).
1
1
0
I fiddled around with calling a Python script from a Java program for a little while and was finally able to get it working. However, when I called it I noticed that there is a certain call in the Python script that creates an object that takes a couple of seconds (which is longer than I'd like). So in essence, every time the script runs it has to re-import a few libraries and create a new object. I'm not sure if this is even possible, but is there any way to keep the Python script in a state where it wouldn't have to completely re-run from the start every single time? Any help would be greatly appreciated. I do not have much experience with integrating programs in different languages. Thank you very much! Any suggestions are welcome.
Integration between a Python script and Java program
0.197375
0
0
69
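A minimal Python 3 sketch of the "turn the script into a long-running service" idea, using the standard library's XML-RPC server (one of the interface options listed above); ExpensiveObject stands in for the slow-to-construct object from the question:

    from xmlrpc.server import SimpleXMLRPCServer

    class ExpensiveObject:
        """Stand-in for the object that takes seconds to build."""
        def __init__(self):
            # Slow imports / initialization happen once, at startup.
            pass

        def compute(self, x):
            return x * 2

    expensive = ExpensiveObject()  # built once, stays warm between calls

    server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
    server.register_function(expensive.compute, "compute")
    server.serve_forever()  # the Java program calls "compute" via XML-RPC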
21,086,215
2014-01-13T07:48:00.000
0
0
0
0
python,parallel-processing,scikit-learn
21,118,718
1
true
0
0
I was unable to definitively find the cause of this problem, but it stopped happening when I increased the amount of memory available. So it seems reasonable to conclude that one of the children processes encountered a MemoryError and just died.
1
1
1
What is the proper way to diagnose what is happening when parallel jobs get stuck in Scikit-Learn? Specifically, I have had several jobs that appear to finish (htop shows no CPU activity), but python stops responding. Pressing Ctrl+c doesn't exit (though it does register a KeyboardInterrupt, it doesn't kill the python process), and the process must be killed from shell. Total memory usage approaches the capacity of the machine, but I get no explicit errors that there was a MemoryError. This has occurred with RandomForestRegressor, and also with cross_validation.cross_val_score, under both 0.14 and master on Ubuntu/Debian. I suspect this is a memory issue, since the jobs seem to complete without a problem on machines with more memory.
Scikit Learn - Diagnosing when parallel jobs get stuck
1.2
0
0
935
21,086,872
2014-01-13T08:37:00.000
1
1
0
0
python,android-testing,appium
21,126,944
2
false
1
0
Appium for Android is based on the UIAutomator framework; Selendroid is based on instrumentation. There are no drawbacks to using Python: Appium works with all languages that have Selenium/WebDriver bindings, which includes Python, Node.js, Objective-C, Java, C#, Ruby, and more.
2
0
0
Somebody said to me 'python does not do automation for android app, as the python stack does not exist in android OS'. Is it true? Is Appium based on Android instrumentation framework? Are there any drawbacks of using Python for writing my test cases? Should I use some other language?
Android automation using APPIUM framework
0.099668
0
0
751
21,086,872
2014-01-13T08:37:00.000
0
1
0
0
python,android-testing,appium
36,628,768
2
false
1
0
I believe Appium does not have any drawbacks if Python is used. I suggest using Java, as a lot of examples and Q&A can be found on the web easily.
2
0
0
Somebody said to me 'python does not do automation for android app, as the python stack does not exist in android OS'. Is it true? Is Appium based on Android instrumentation framework? Are there any drawbacks of using Python for writing my test cases? Should I use some other language?
Android automation using APPIUM framework
0
0
0
751
21,089,507
2014-01-13T10:54:00.000
1
0
0
0
python,django,eclipse,debugging,pydev
21,090,721
1
true
1
0
[Update] The error got resolved after setting the Python environment via:
- right-click on the project in Project Explorer -> PyDev -> Source: PyDev Project Config
- Project Explorer -> Properties -> PyDev Interpreter
- Project Explorer -> Properties -> PyDev PYTHONPATH: add the exact path within the virtualenv where the Python site-packages are installed
After this, one also needs to fill in two fields under PyDev - Django:
- Django manage.py = your manage.py file
- Django settings module = settings.local, or whichever is your settings file
Hope it helps. I am able to run the Django server from Eclipse but still not able to make the code stop at a breakpoint. :(
1
1
0
I am looking for a clearly written set of steps to import an existing django project stored in a GIT repository into Liclipse (Eclipse configured for python) configured using virtualenv and running successfully. I used File->Import to import an existing project from its top level directory /home/comiventor/ProjectXYZ/ containing .git Now when I run ProjectXYZ->Django-> Sync DB (manage.py syncdb) It says "pydev nature is not properly set" I could not derive much help on this error from any other source. :( [Update] I am able to run the django server from eclipse (steps in my answer below) but still not able to make the code stop at breakpoint. :(
Liclipse/Eclipse: setup debugging environment for a django project alongwith its virtualenv
1.2
0
0
3,362
21,090,365
2014-01-13T11:39:00.000
0
0
0
1
python,django,scheduled-tasks
21,097,588
1
false
1
0
Basically you can use Celery's periodic tasks with the expires option, which makes sure that your tasks will not be executed twice (see the sketch after this record). Alternatively, you could run your own script with an infinite loop that runs the calculation; if one calculation runs for more than a minute, you can spawn your tasks using eventlet or gevent. Another option is to create Celery tasks from this script and be sure that your tasks execute every N seconds, as you prefer.
1
0
0
In my Django project, I need to collect data from about 50 remote servers into the local database every minute or every 30 seconds. Though it would work with crontab on the remote servers, I want to do this inside the project. At first I considered django-celery; however, it does well at asynchronous processing, and the collect-data task cannot be delayed, so I think it may not fit. What if I do this using a Python timer, and what do I need to pay attention to? Excuse my ignorance of Python and Django; I'd appreciate other advice or ideas. Many thanks.
How should I schedule my task in django
0
0
0
114
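A minimal sketch of the periodic-task option with modern Celery; the broker URL, module name, and task body are placeholders. The expires option drops a scheduled run that hasn't started within the window, so runs don't pile up or execute twice:

    from celery import Celery

    app = Celery("collector", broker="redis://localhost:6379/0")  # placeholder

    @app.task
    def collect_data():
        # Pull data from the ~50 remote servers into the local database.
        pass

    app.conf.beat_schedule = {
        "collect-every-30s": {
            "task": "collector.collect_data",
            "schedule": 30.0,              # seconds
            "options": {"expires": 25.0},  # skip a run that starts too late
        },
    }
    # Run with:  celery -A collector worker --beat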
21,092,110
2014-01-13T13:07:00.000
6
0
0
0
python,django,django-models
21,092,602
2
true
1
0
In the particular case of a ForeignKey, you can check for the existence of the _FOO_cache attribute. For instance, if your Employee object has a ForeignKey to Company, then if my_employee.company is populated, my_employee._company_cache will exist, so you can do hasattr(my_employee, '_company_cache') (see the sketch after this record).
1
6
0
In Django, is there an easy way to test that a model field on an object has already been queried from the database (e.g. an object coming from a foreign-key relationship)? I would like to make an assertion like this in one of my tests to ensure that accessing a particular attribute on one of my objects won't trigger an additional database query.
How can one assert in Django that a model field has already been populated from the DB?
1.2
0
0
1,129
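A minimal sketch of such a test, assuming hypothetical Employee/Company models where Employee.company is a ForeignKey; _company_cache is the internal attribute from the answer's era of Django (recent versions track this in fields_cache instead):

    from django.test import TestCase
    # from myapp.models import Employee  # hypothetical model

    class NoExtraQueryTest(TestCase):
        def test_company_already_loaded(self):
            employee = Employee.objects.select_related("company").get(pk=1)
            # The FK cache only exists if the related object was loaded,
            # so the attribute access below cannot hit the database.
            self.assertTrue(hasattr(employee, "_company_cache"))
            with self.assertNumQueries(0):
                _ = employee.company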
21,094,299
2014-01-13T14:56:00.000
0
0
0
0
python,django,testing
21,094,376
1
false
1
0
You need to use manage.py instead of django-admin.py, so run ./manage.py test app.
1
3
0
I have an app located at app/ and tests which reside at app/tests/tests.py. How can I run those tests with django-admin.py? I tried django-admin.py test app, django-admin.py test app.tests and django-admin.py test app.tests.tests but with no success. I add that I am also adding the --settings param to the above commands but cut it off for readability.
How to run specific tests with django-admin.py?
0
0
0
768
21,102,635
2014-01-13T22:26:00.000
0
1
0
0
python,usb
21,103,307
1
false
0
0
"What I want to know is if the camera are able to hold the picture until they are transfered to the computer." That depends on the camera model, but since you mention in your post that you are using webcams, the answer is almost certainly no. You could slow down the requests you make to the camera to take a picture, though.

This sequence of events is possible:
- wait
- request
- camera takes picture
- camera returns picture as normal
- wait

This sequence of events is not possible (with webcams, at least):
- wait
- request
- camera takes picture
- wait
- camera returns picture at a significantly later time that you want to have control over
- wait

If you need the functionality shown in the last sequence (a controllable time between capture and readout of the picture), you will need to upgrade to a better camera, such as a machine vision camera. These cameras usually cost considerably more than webcams and are unlikely to interface over USB (though you might find some that do). You might also be able to find some other solution to your problem (for instance, what happens if you request 50 photos from 50 cameras and saturate the USB bus? Do the webcams you have buffer the data well enough to achieve your ultimate goal, or does this affect the quality of the picture?)
1
0
0
I want to build a webcam-based 3D scanner. Since I'm going to use a lot of webcams, I am doing tests first. I have ordered 3 identical cameras that I will drive in Python to take snapshots at the same time. Obviously the bus is going to be saturated when there are 50 of them. What I want to know is whether the cameras are able to hold the picture until it is transferred to the computer. To simulate this behavior, I'd like to slow down the USB bus and take a snapshot with 3 cameras. I'm on Windows 7 Pro; is this possible? Thanks. PS: couldn't I saturate the USB bus by plugging in some external USB hard drives and doing some file transfers?
Can you slow down your USB bus?
0
0
0
89
21,104,236
2014-01-14T00:50:00.000
7
0
1
0
python,debugging,editor,spyder
21,104,895
1
true
0
0
(Spyder dev here) If you are using a Spyder version older than 2.2.5, please update it. In it you will find a Debug menu from which you can set breakpoints and control all the debugging actions we have to offer.
1
6
0
I was recently introduced to Spyder. I decided to use Spyder because of its debugging capabilities. However, I have not been able to effectively use pdb in Spyder. When I started, I had the impression that the debugging tool would be similar to that of MATLAB. Is this true? How can the interpreter point to the breakpoint? I'd appreciate a proper resource on this.
How to use the debugging tool in Spyder for python scripts?
1.2
0
0
9,442
21,105,001
2014-01-14T02:22:00.000
0
0
1
0
python
21,107,220
1
true
0
0
As far as setuptools/distribute are concerned, the Python installer will handle custom locations for you. As long as you don't move that directory, all should be fine.

As for pylauncher, things are not quite so clean. Pylauncher has simple configuration/call parameters (for shebang lines in particular) that can handle version/platform selection quite well (2.7 vs 3.3, and 32-bit vs 64-bit). As for the scenario in question (two different deployments both based on 32-bit Python 2.7), pylauncher will attempt to guess which installation you wanted. If it is picking the wrong installation, there is some debugging information you can review to tune pylauncher's selection: if an environment variable PYLAUNCH_DEBUG is set (to any value), the launcher will print diagnostic information. There does not seem to be a portable way to configure this; it will have to be done per-system (once you have your installations configured, you CAN set an alias that will be recognized on the shebang line).

Virtualenv and friends: I have also found (after struggling with pylauncher-focused solutions) that virtualenv addresses many of the deployment isolation hurdles. At the time of posting, working with virtualenv was not nearly as intuitive on Windows as in a Linux shell environment, but I have discovered support packages like virtualenvwrapper which handle a lot of the ugly batch file interfaces very nicely.

Final notes: originally, I was also handling Python globally with admin accounts. Forcing myself to stay within my user home directory (C:/Users/username), utilizing Python user-site configurations, and making optimum use of IPython have all given me a much better interactive command-line experience.
1
0
0
I am using PythonXY (2.7, 32-bit) and the official Python (2.7, 32-bit). Normally it is recommended to install according to python version, example C:\python27. But since they are both python27, can I arbitrarily change the base name (example C:\pythonxy27)? When using python extras like pylauncher, or when utilizing the setuptools user-site, will they automatically recognize my custom installation sites (they will easily differentiate C:\python27 and C:\python33), or will both installations compete for the python27 namespace. (specifically when installing 3rd party packages to user-site, which normally locates as such \APPDATA\Python\PythonVer)
Installing Multiple Python Distributions, Windows
1.2
0
0
371
21,106,004
2014-01-14T04:17:00.000
1
0
0
1
python,networking,proxy,network-programming,squid
21,121,694
1
true
0
0
A cascading proxy is just a proxy connecting to an upstream proxy. It speaks the same HTTP proxy requests to the upstream proxy as a browser does, e.g. using full URLs (method://host[:port]/path...) in the requests instead of just /path, and using CONNECT for HTTPS tunneling instead of directly connecting with SSL (see the sketch after this record).
1
0
0
Software like CCProxy in windows allows you to setup a cascading proxy. In squid we can do the same by mentioning a cache_peer directive? How does this work at application and TCP/IP layer ? Does it form a socket connection to the upstream proxy server ? Any details or RFCs in relation to this? PS - I want to implement it in Python for some testing purposes.
How does a cascading http proxy server works?
1.2
0
1
1,271
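A minimal Python sketch of what "speaking HTTP proxy to the upstream" looks like on the wire; the upstream host/port are placeholders:

    import socket

    UPSTREAM = ("upstream.proxy.example", 3128)  # placeholder upstream proxy

    # Plain HTTP: send the FULL url in the request line, as any proxy client does.
    s = socket.create_connection(UPSTREAM)
    s.sendall(b"GET http://example.com/ HTTP/1.1\r\n"
              b"Host: example.com\r\nConnection: close\r\n\r\n")
    print(s.recv(4096).decode("latin-1"))
    s.close()

    # HTTPS: ask the upstream to open a tunnel with CONNECT, then speak TLS
    # over the tunneled byte stream.
    s = socket.create_connection(UPSTREAM)
    s.sendall(b"CONNECT example.com:443 HTTP/1.1\r\nHost: example.com:443\r\n\r\n")
    print(s.recv(4096).decode("latin-1"))  # expect "200 Connection established"
    s.close()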
21,107,495
2014-01-14T06:47:00.000
0
0
1
0
python,architecture,concurrency
21,122,023
2
false
0
0
Our current solution involves a RabbitMQ exclusive queue (every consume locks the queue until the task is finished) for each player and celery to consume tasks from the queues. What do you think of this proposed solution? Erik.
2
0
0
To best describe the question, I'll begin with the following scenario: suppose I have a poker game where the player is allowed to use credit in order to purchase some goods. If the player executes two purchase orders at the same time (theoretically), two workers may handle the requests simultaneously and there could be an integrity error; thus the application must make sure that there is only one (or fewer) order executing for a single player at a given time. Just to make sure that the scenario is clear: there could be hundreds of orders executing simultaneously, but for different players. Following the 12-factor guidelines, I should be able to scale out the workers (which actually process the purchase orders). How can I make sure that only one order (or fewer) is executing for a single player at a given time, with an elegant solution? Thanks in advance, Erik.
Making sure only one order (or less) is executing for a single player at a given time
0
0
0
40
21,107,495
2014-01-14T06:47:00.000
1
0
1
0
python,architecture,concurrency
21,107,551
2
false
0
0
Just a short disclaimer: I'm no Python expert nor a database expert, but I feel that my solution below is on the right track. Apply it with your skills and you'll get the result you're looking for. It's one of a few ways of achieving what you'd like, and dare I say quite elegant :). I'm assuming you have different devices, hence how multiple transactions could take place for one particular customer. Based on that assumption, and also the assumption that you have some sort of web service for the system, I have solved it like this:

Add a table to your database that records all customers currently being handled. Then add one more check before a transaction with a customer begins on the device, to make sure that customer is not in that active-transactions table; if they are, reject the new transaction, otherwise continue. This check beforehand ensures that only one transaction can happen for any particular customer at any time, no matter how many devices are being used concurrently.

The above is just an example. It's how I am currently handling it, and it's working well for me. It's basically the idea of quickly checking whether the current customer is in the active-customers table, and if so simply rejecting the customer, since they are already in a session elsewhere. Once the customer is done with that session, you delete the entry from that table.
2
0
0
To best describe the question, I'll begin with the following scenario: suppose I have a poker game where the player is allowed to use credit in order to purchase some goods. If the player executes two purchase orders at the same time (theoretically), two workers may handle the requests simultaneously and there could be an integrity error; thus the application must make sure that there is only one (or fewer) order executing for a single player at a given time. Just to make sure that the scenario is clear: there could be hundreds of orders executing simultaneously, but for different players. Following the 12-factor guidelines, I should be able to scale out the workers (which actually process the purchase orders). How can I make sure that only one order (or fewer) is executing for a single player at a given time, with an elegant solution? Thanks in advance, Erik.
Making sure only one order (or less) is executing for a single player at a given time
0.099668
0
0
40
21,107,967
2014-01-14T07:23:00.000
0
0
0
0
python,deployment,amazon-ec2,flask,flask-sqlalchemy
21,124,613
1
false
1
0
You should be building your python apps in a virtualenv rather than using the system's installation of python. Try creating a virtualenv for your app and installing all of the extensions in there.
1
0
0
I am deploying my Flask app to EC2; however, I get the error below in my error.log file when I visit the link of my app. My extensions are present in the site-packages of my Flask environment, not in the "usr" folder of the server, yet it searches the usr folder to find the hook: File "/usr/local/lib/python2.7/dist-packages/flask/exthook.py", line 87, in load_module. The environment is located in /var/www/sample/flask/lib/python2.7/site-packages. How do I get around this issue?
ImportError: No module named flask.ext.sqlalchemy
0
1
0
2,246
21,110,022
2014-01-14T09:32:00.000
1
0
0
0
python,python-2.7,openerp,openerp-7
21,127,961
1
false
1
0
All attachments to emails are stored in the ir.attachment model. The basic procedure is to create your attachment in whatever binary format you like (png, zip, gzip, etc.), then base64-encode it. All attachments stored in OpenERP are base64-encoded, and the standard attachment functionality will encode and decode as required; if you are doing it by hand, you must encode it yourself. Emails have a many2many relationship with ir.attachment IIRC, so you create a values dictionary for ir.attachment and write it along with the email using the magic numbers (6, 0, [list_of_attachment_ids]) (see the sketch after this record).
1
0
0
I want to attach a zip file in OpenERP. I see that for purchase orders a PDF is auto-attached when the email wizard form comes up, but I have no idea how to create an email wizard with an attached file. I can create the zip file on the backend, but I have no idea how to put it inside the wizard together with the form. Please guide me if someone has already done this. Thanks in advance. Phyo
How to attach zip file in email at OpenERP?
0.197375
0
0
666
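A rough OpenERP 7-style (old osv API) sketch of the procedure described above; the method is assumed to live on some model with access to self.pool, and all names/values are illustrative:

    import base64

    def attach_zip_to_mail(self, cr, uid, mail_id, zip_bytes, context=None):
        # All attachments in OpenERP are stored base64-encoded.
        vals = {
            'name': 'report.zip',
            'datas_fname': 'report.zip',
            'datas': base64.b64encode(zip_bytes),
            'res_model': 'mail.mail',
            'res_id': mail_id,
        }
        att_id = self.pool.get('ir.attachment').create(cr, uid, vals,
                                                       context=context)
        # Magic number (6, 0, ids): replace the many2many with these ids.
        self.pool.get('mail.mail').write(cr, uid, [mail_id],
            {'attachment_ids': [(6, 0, [att_id])]}, context=context)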
21,114,655
2014-01-14T13:23:00.000
2
0
0
0
python,node-webkit
21,965,786
2
true
0
1
Maybe what you are looking for is Pyjs. It is not entirely built to do that, but it's the nearest thing to it I have found.
1
7
0
It looks like developing GUI applications in HTML+JS with python in the background would be really nice. I see node-webkit. I also see that there are python bindings for webkit. It just seems like it's an order of magnitude more difficult to set up than python/tkinter - and I couldn't find win7 support.
Is there a python version of node-webkit
1.2
0
0
1,465
21,117,002
2014-01-14T15:13:00.000
0
0
0
1
python,virtualhost,cherrypy,bottle
21,117,703
4
false
1
0
Perhaps you can simply put nginx in front as a reverse proxy and configure it to send the traffic for the two domains to the right upstream (the CherryPy web server).
1
2
0
I have a website (running on an Amazon EC2 instance) built as a Python Bottle application with CherryPy as its front-end web server. Now I need to add another website with a different domain name that is already registered. To reduce cost, I want to utilize the existing host. Obviously, virtual hosts are the solution. I know Apache mod_wsgi could play the trick, but I don't want to replace CherryPy. I've googled a lot; there are some articles showing how to set up virtual hosts on CherryPy, but they all assume CherryPy as web server + web application, not CherryPy as web server and Bottle as application. How can I use CherryPy as the web server and Bottle as the application while supporting multiple virtual hosts?
How to use CherryPy as web server and Bottle as application to support multiple virtual hosts?
0
0
0
882
21,117,431
2014-01-14T15:35:00.000
17
0
0
0
python,postgresql,events,triggers,listener
21,128,034
3
true
0
0
donmage is quite right: LISTEN and NOTIFY are what you want. You'll still need a polling loop, but it's very lightweight and won't cause detectable server load (see the sketch after this record).

If you want psycopg2 to trigger callbacks at any time in your program, you can do this by spawning a thread and having that thread execute the polling loop. Check whether psycopg2 enforces thread-safe connection access; if it doesn't, you'll need to do your own locking so that your polling loop only runs when the connection is idle and no other queries interrupt a polling cycle. Or you can just use a second connection for your event polling.

Either way, when the background thread that's polling for notify events receives one, it can invoke a Python callback function supplied by your main program, which might modify data structures/variables shared by the rest of the program. Beware: if you do this, it can quickly become a nightmare to maintain. If you take that approach, I strongly suggest using the multithreading/multiprocessing modules; they will make your life massively easier, providing simple ways to exchange data between threads and limiting modifications made by the listening thread to simple and well-controlled locations.

If using threads instead of processes, it is important to understand that in cPython (i.e. "normal Python") you can't have a true callback interrupt, because only one thread may be executing in cPython at once. Read about the "global interpreter lock" (GIL) to understand more about this. Because of this limitation (and the easier, safer nature of shared-nothing-by-default concurrency), I often prefer multiprocessing to multithreading.
1
21
0
What I am using: PostgreSQL and Python, with Python accessing PostgreSQL. What I need: to receive an automatic notification, in Python, if anyone records something in a specific table in the database. I think it would be possible using a routine that polls that table at some interval and checks for changes, but that requires a loop, and I would like something asynchronous. Is it possible?
How to receive automatic notifications about changes in tables?
1.2
1
0
17,343
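A minimal version of the listen-and-poll loop, closely following the pattern in the psycopg2 documentation; the DSN and channel name are placeholders (the NOTIFY itself would come from a trigger on the watched table):

    import select
    import psycopg2
    import psycopg2.extensions

    conn = psycopg2.connect("dbname=test user=postgres")  # placeholder DSN
    conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)

    cur = conn.cursor()
    cur.execute("LISTEN table_changed;")

    while True:
        # Block for up to 5 seconds; near-zero load while idle.
        if select.select([conn], [], [], 5) == ([], [], []):
            continue  # timeout, poll again
        conn.poll()
        while conn.notifies:
            notify = conn.notifies.pop(0)
            print("NOTIFY:", notify.pid, notify.channel, notify.payload)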
21,118,740
2014-01-14T16:33:00.000
0
0
1
0
python,virtualenv
21,119,115
1
false
0
0
It's not completely the same thing, but you could try running pip freeze > requirements.txt against that version of Python, then using the resulting file inside your virtualenv with pip install -r requirements.txt to install copies of the modules there.
1
1
0
Several questions address the way to make a virtualenv that does include global site-packages. I'm looking for something different: how to create a new virtualenv based on a Python executable from another location in my network, and also to include the libraries that are installed in that location in the network. I have a local desktop machine, but there is an IT-maintained version of Python and associated installed libraries, and it is the ubiquitous Python used by developers. I'm using virtualenv to create several local versions of Python that allow me to try out libraries or change settings, but I'd also like to maintain an installation that is nothing but a pure mirror of that IT-maintained system. So the question is how to make a virtualenv that points at that IT-maintained Python, and which does reference the previously installed packages for that Python and not for my local machine's global site-packages, etc.
How to create virtualenv and include non-global associated installed libraries
0
0
0
116
21,122,847
2014-01-14T20:01:00.000
3
1
0
0
python,postgresql,encryption
21,128,178
2
true
0
0
Imagine you have a Social Security Number field in your table. Users must be able to query for a particular SSN when needed. The SSN, obviously, needs to be encrypted. I can encrypt it from the Python side and save it to the database, but then in order for it to be searchable, I would have to use the same salt for every record so that I can incorporate the encrypted value as part of my WHERE clause, and that just leaves us vulnerable. I can encrypt/decrypt on the database side, but in that case I'm sending the SSN in plain text whenever I'm querying, which is also bad.

The usual solution to this kind of issue is to store a partial value, hashed unsalted or with a fixed salt, alongside the randomly salted full value. You index the hashed partial value and search on that. You'll get false-positive matches, but you still benefit significantly from DB-side indexed searching; you fetch all the matches and, application-side, discard the false positives (see the sketch after this record).

Querying encrypted data is all about compromises between security and performance. There's no magic answer that'll let you send a hashed value to the server and have it compare it to a bunch of randomly salted and hashed values for a match. In fact, that's exactly why we salt our hashes: to prevent that from working, because that's also pretty much what an attacker does when trying to brute-force.

So. Compromise. Either live with sending the SSNs as plaintext (over SSL) for comparison to salted and hashed stored values, knowing that it still greatly reduces exposure because the whole lot can't be dumped at once; or index a partial value and search on that.

Do be aware that another problem with sending values unhashed is that they can appear in the server error logs. Even if you don't have log_statement = all, they may still appear if there's an error, like query cancellation or a deadlock break. Sending the values as query parameters reduces the number of places they can appear in the logs, but is far from foolproof. So if you send values in the clear, you've got to treat your logs as security critical. Fun!
1
1
0
We have a database that contains personally-identifying information (PII) that needs to be encrypted. From the Python side, I can use PyCrypto to encrypt data using AES-256 and a variable salt; this results in a Base64 encoded string. From the PostgreSQL side, I can use the PgCrypto functions to encrypt data in the same way, but this results in a bytea value. For the life of me, I can't find a way to convert between these two, or to make a comparison between the two so that I can do a query on the encrypted data. Any suggestions/ideas? Note: yes, I realize that I could do all the encryption/decryption on the database side, but my goal is to ensure that any data transmitted between the application and the database still does not contain any of the PII, as it could, in theory, be vulnerable to interception, or visible via logging.
Encryption using Python and PostgreSQL
1.2
1
0
2,355
21,123,473
2014-01-14T20:35:00.000
0
0
1
0
python,pdb
21,123,991
4
false
0
0
Eric IDE, Wing IDE & Spyder, to mention just a few, all have visual debuggers that are worth a go, as they separate the display of values from the commands.
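Independent of the IDE route, plain pdb itself has a documented escape hatch for the name clash: p prints an expression, and a leading ! forces the rest of the line to run as a Python statement. A tiny session sketch:

(Pdb) p n        # inspect the variable instead of triggering (n)ext
(Pdb) !n = 42    # the ! prefix runs the line as a statement, so assignment works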
1
116
0
My code is, for better or worse, rife with single letter variables (it's physics stuff, so those letters are meaningful), as well as NumPy's, which I'm often interacting with. When using the Python debugger, occasionally I'll want to look at the value of, say, n. However, when I hit n<enter>, that's the PDB command for (n)ext, which has a higher priority. print n works around looking at it, but how can I set it?
How do I manipulate a variable whose name conflicts with PDB commands?
0
0
0
13,010
21,126,295
2014-01-14T23:33:00.000
3
0
1
0
python,string,unicode,compression,pytables
21,128,497
2
true
0
0
PyTables does not natively support unicode - yet. To store unicode, first convert the string to bytes and then store a VLArray of length-1 strings or uint8. To get compression, simply instantiate your array with a Filters instance that has a non-zero complevel. All of the examples I know of that store JSON data like this do so using the HDF5 C API.
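A minimal sketch of that recipe, assuming PyTables 3.x method names and an illustrative file/array name:

import json
import numpy as np
import tables

payload = json.dumps({"key": "value"}).encode("utf-8")  # unicode -> bytes

with tables.open_file("store.h5", mode="w") as h5:
    # A non-zero complevel on the Filters instance turns on compression.
    filters = tables.Filters(complevel=5, complib="zlib")
    vla = h5.create_vlarray(h5.root, "json_blob", tables.UInt8Atom(), filters=filters)
    vla.append(np.frombuffer(payload, dtype=np.uint8))

with tables.open_file("store.h5", mode="r") as h5:
    raw = h5.root.json_blob[0].tobytes()
    data = json.loads(raw.decode("utf-8"))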
1
6
1
I'm using PyTables to store a data array, which works fine; along with it I need to store a moderately large (50K-100K) Unicode string containing JSON data, and I'd like to compress it. How can I do this in PyTables? It's been a long time since I've worked with HDF5, and I can't remember the right way to store character arrays so they can be compressed. (And I can't seem to find a similar example of doing this on the PyTables website.)
How do you create a compressed dataset in pytables that can store a Unicode string?
1.2
0
0
3,616
21,130,642
2014-01-15T06:38:00.000
0
0
0
1
python,svn,tortoisesvn,pysvn
21,130,791
3
false
0
0
Take a look at the documentation for pysvn.Client.callback_* and you will see that the methods you have to provide handle prompting for passwords and errors if they don't match.
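For example, a rough sketch of validating credentials via those callbacks (the repository URL is a placeholder and error handling is simplified):

import pysvn

def is_valid_svn_user(url, username, password):
    client = pysvn.Client()
    # (retcode, user, password, save): the final False means "don't cache".
    client.callback_get_login = lambda realm, u, may_save: (True, username, password, False)
    try:
        client.info2(url, recurse=False)  # any authenticated read will do
        return True
    except pysvn.ClientError:
        return False

print(is_valid_svn_user("https://svn.example.com/repo", "alice", "secret"))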
1
1
0
I need to verify whether a user is a valid SVN user in my application. I want to achieve this using the SVN command-line tools / TortoiseSVN command line or Python on Windows. I looked at PySVN, but it seems to me that there is no way to authenticate the current user with that library. Please suggest some way of doing this. Thank You
SVN User authentication using Command Line or Python
0
0
0
3,064
21,141,026
2014-01-15T15:10:00.000
0
0
0
0
python
21,144,008
2
false
0
0
I would use... a VCS? If it were feasible, I'd hack up a Windows installer that: installs git/subversion/your favorite VCS; does an initial checkout/clone of the repository; and adds a scheduled job to the machine (the Windows equivalent of cron jobs) to run every hour and update the working copies. It could be done in a couple of hours' work and should be simple enough that users just need to run the installer and, at most, choose the location of the cloned repo (which directory to place it in). From there you push your changes to the repo and the clients' computers will check for updates every hour or so.
1
1
0
(This may not be an appropriate question--if there is a better stack site for it, please let me know.) I belong to an organization that distributes sheet music to its users. Right now, we have to individually download each file, and it's a pain. Files are frequently updated, and every time there's a new version we have to download the new one, delete the old one, blah blah blah. I've automated the process myself with Python, so when I run my script I have a nice folder with all the current files. I'm looking for a way to share this with others. I initially thought Dropbox, but that just requires users to go to my Dropbox folder and still do it all manually (I know there's an option to download as a .zip, but many of our members are not very technically proficient). Is there a way to have users sign up and somehow have a folder on their computers download what's in mine? A helpful Google suggestion may be all I need.
Distribute files to users automatically
0
0
0
67
21,143,142
2014-01-15T16:39:00.000
0
0
1
1
python,autocomplete,ide,wing-ide
21,143,719
1
false
0
0
You are probably using Wing 101, which is a very scaled back version that does not have auto-completion. It was designed for teaching beginning programmers and the professors we worked with in creating it felt auto-completion should be left off. Wing IDE Personal and Wing IDE Pro both have auto-completion and much more; Wing 101 really is severely scaled back to make it simple for beginners.
1
1
0
I am using the Python IDE Wing and just cannot seem to find the auto-complete option. Is there even such an option in this program?
How to turn on the Auto-Complete in Python Wing IDE?
0
0
0
2,346
21,150,012
2014-01-15T22:55:00.000
1
0
1
0
python,python-2.7,sqlite
21,150,081
1
false
0
0
Building a dict doesn't really take that much memory, and the single UPDATE is much more efficient since you'll only need to do one operation and can let SQLite handle it. Python is going to clean up the dict anyway, so this is definitely the way to go. But as @JoranBeasley mentioned in the comments, you never know until you try. Hope this helps!
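A sketch of the single combined UPDATE built from such a dict (the table and column names are hypothetical):

import sqlite3

conn = sqlite3.connect("data.db")
row = {"c1": 1, "c2": 2, "c3": 3, "c4": 4, "c5": 5, "id": 42}  # built while looping the JSON

# One statement touching all five columns, instead of five separate UPDATEs.
conn.execute(
    "UPDATE items SET c1=:c1, c2=:c2, c3=:c3, c4=:c4, c5=:c5 WHERE id=:id",
    row,
)
conn.commit()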
1
1
0
I have several million rows in a Sqlite database that need 5 columns updated. Each row/column value is different, so I have to update each row individually. Because of the way I'm looping through JSON from an external API, for each row I have the option of either: 1) doing 5 UPDATE operations, one per value, or 2) building a temporary dict in Python, then unpacking it into a single UPDATE operation that updates all 5 columns at once. Basically I'm trading off Python time (slower language, but in memory) for SQLite time (faster language, but on disk). Which is faster?
For a single Sqlite row, faster to do 5 UPDATEs or build a python dict, then 1 Update?
0.197375
1
0
71
21,153,521
2014-01-16T04:48:00.000
0
0
0
0
python,google-chrome
51,508,462
2
false
0
0
Try a tool like Selenium; you only need to add a driver for your web browser.
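A minimal sketch with Selenium's Python bindings (assumes chromedriver is installed and on the PATH):

from selenium import webdriver

driver = webdriver.Chrome()           # launches Chrome through chromedriver
driver.get("https://example.com")     # open a page
driver.refresh()                      # reload the current page
driver.quit()                         # shut the browser down again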
1
1
0
I've been trying to figure out a way to send commands to a chrome extension via python. My goal is to automate browser functions such as opening a new tab or reloading a page remotely, but on the same computer. What would be the best/simplest way to do this?
How can I send commands to Chrome extension via a local python application?
0
0
1
675
21,160,593
2014-01-16T11:32:00.000
0
0
1
0
python,regex,python-2.7,jpeg
21,161,910
3
false
0
0
Split the whole string on whitespace, then use a loop to iterate over the split items, keeping each one that satisfies startswith("https") and endswith(".jpg").
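A sketch of that split-and-filter approach:

text = 'https://XXXXX.jpg <div>bla bla bla</div> bla bla https://XYZ.jpg'
urls = [tok for tok in text.split()
        if tok.startswith("https") and tok.endswith(".jpg")]
print(urls)  # ['https://XXXXX.jpg', 'https://XYZ.jpg']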
1
0
0
I have a problem when I try to get (https://XXXXX.jpg). I'm using this pattern: (https://.*.jpg). However it doesn't find what I want. It returns, for example, (https://XXXXX.jpg <## Heading ##div> bla bla bla </div> bla bla https://XYZ .jpg). It should start with https and end with jpg. What should I do?
Regex for finding jpg
0
0
1
843
21,166,679
2014-01-16T15:58:00.000
56
0
1
0
python,matplotlib
21,169,703
1
true
0
0
Fundamentally, imshow assumes that all data elements in your array are to be rendered at the same size, whereas pcolormesh/pcolor associates elements of the data array with rectangular elements whose size may vary over the rectangular grid. If your mesh elements are uniform, then imshow with interpolation set to "nearest" will look very similar to the default pcolormesh display (without the optional X and Y args). The obvious differences are that the imshow y-axis will be inverted (w.r.t. pcolormesh) and the aspect ratio is maintained, although those characteristics can be altered to look like the pcolormesh output as well. From a practical point of view, pcolormesh is more convenient if you want to visualize the data array as cells, particularly when the rectangular mesh is non-uniform or when you want to plot the boundaries/edges of the cells. Otherwise, imshow is more convenient if you have a fixed cell size, want to maintain aspect ratio, want control over pixel interpolation, or want to specify RGB values directly.
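A small side-by-side sketch of the two calls on the same uniform array (default options, random data):

import numpy as np
import matplotlib.pyplot as plt

data = np.random.rand(10, 10)
fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.imshow(data, interpolation="nearest")  # fixed-size pixels, y-axis inverted
ax1.set_title("imshow")
ax2.pcolormesh(data)                       # rectangular cells, y-axis upward
ax2.set_title("pcolormesh")
plt.show()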
1
51
1
I often find myself needing to create heatmap-style visualizations in Python with matplotlib. Matplotlib provides several functions which apparently do the same thing. pcolormesh is recommended instead of pcolor but what is the difference (from a practical point of view as a data plotter) between imshow and pcolormesh? What are the pros/cons of using one over the other? In what scenarios would one or the other be a clear winner?
When to use imshow over pcolormesh?
1.2
0
0
24,635
21,168,440
2014-01-16T17:15:00.000
0
1
0
1
python,c++,llvm,llvm-ir
21,189,967
1
true
0
0
After some reading and some conversations I believe the answer is that the ExecutionEngine essentially executes code as if it were native C code. Which means that if you wanted to execute lua/python/javascript code on top of llvm, you would need to actually send the bitcode for that runtime. Then the runtime could parse and execute the script as usual. As far as I know, none of these runtimes have the ability to compile their script directly into llvm bitcode (yet).
1
4
0
I'm making an application and I would like to load and execute llvm bitcode using the ExecutionEngine. I have managed to do this with really simple C code compiled via clang so far. My thought is, if I use llvm for this project then it could be more language agnostic than say, specifically picking lua/python/javascript. But I'm confused about how this might work for managed or scripting languages since they are often times tied to a platform with resources such as a GC. So I'm not sure how it would actually work through the ExecutionEngine. So as an example scenario, suppose a user wanted to write some python code that runs in my application. I then want them to deliver to me bitcode representing that python code, which I will then run in my C++ application using llvm's ExecutionEngine. Is this possible? Can python be simply compiled into bitcode and then run later using the ExecutionEngine? If not, what do I need to know to understand why not?
Can llvm execute code from managed languages?
1.2
0
0
214
21,168,690
2014-01-16T17:26:00.000
0
0
0
0
python,selenium,selenium-grid2
21,194,979
2
false
0
0
You can restart the node instead of the server.
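To avoid orphaned sessions in the first place, the usual pattern is to guarantee quit() in a finally block; a sketch using the Selenium 2.x-era API (the hub URL is a placeholder):

from selenium import webdriver

driver = webdriver.Remote(
    command_executor="http://grid.example.com:4444/wd/hub",
    desired_capabilities={"browserName": "firefox"},
)
try:
    driver.get("https://example.com")
    # ... test steps ...
finally:
    driver.quit()  # always frees the grid slot, even when an exception is thrown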
1
0
0
We're using Selenium's Python bindings at work. Occasionally I forget to put the call to WebDriver.quit() in a finally clause, or the tear down for a test. Something bad happens, an exception is thrown, and the session is abandoned and stuck as "in use" on the grid. How can I quit those sessions and return them to being available for use without restarting the grid server?
how do I quit a web driver session after the code has finished executing?
0
0
1
288
21,171,095
2014-01-16T19:30:00.000
0
0
0
0
python,numpy,scipy,data-fitting
21,172,252
2
false
0
0
Apparently it wants a set of direction vectors, so direc=([1,0,0],[0,0.1,0],[0,0,1]) will do the job. However, it is still unclear how this is arranged and how it functions, so I am not sure what would happen if some of those zeros were changed.
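A sketch of passing that matrix form (the objective is a toy function; the 0.1 row scales the initial step along the second parameter):

import numpy as np
from scipy.optimize import fmin_powell

def objective(p):
    return (p[0] - 1.0) ** 2 + (p[1] - 0.5) ** 2 + (p[2] - 3.0) ** 2

x0 = [0.0, 0.2, 0.0]
direc = np.array([[1.0, 0.0, 0.0],
                  [0.0, 0.1, 0.0],
                  [0.0, 0.0, 1.0]])
xmin = fmin_powell(objective, x0, direc=direc)
print(xmin)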
2
2
1
There is no information on how the direc argument of fmin-powell is supposed to be entered. All the scipy documentation for fmin_powell says is direc : ndarray, optional Initial direction set. I thought that by giving direc=(0.1,0.1,1), I was telling it to start with step sizes of 0.1 for the first two fitting parameters and 1 for the third, which are needed in my case since the 3rd parameter is not sensitive to step sizes of 0.1. However, with this code it starts with 0.1 for all of the fitting parameters. If I try direc=(1,0.1,1), it uses an initial step of 1 for all parameters which destroys the fit, as the second parameter has a range of (0,1) and results in a division by zero if it ever goes negative. How are you supposed to set this argument?
Scipy.optimize.fmin_powell direc argument syntax
0
0
0
385
21,171,095
2014-01-16T19:30:00.000
1
0
0
0
python,numpy,scipy,data-fitting
21,224,406
2
false
0
0
For Powell minimization, the initial set of direction vectors don't need to be aligned with the axes (although normally they are). As the algorithm runs, it updates the direction vectors to be whatever direction is best in order to step downhill quickly. But, imagine a case where the surface defined by your function is almost all flat near the starting point. Except, in one particular direction (not aligned with any axis) there is a narrow gulley that descends downward rapidly to the function minimum. In this case, using the direction of the gulley as one of the initial direction vectors might be helpful. Otherwise, conceivably, it might take a while for the algorithm to find a good direction to start moving in.
2
2
1
There is no information on how the direc argument of fmin-powell is supposed to be entered. All the scipy documentation for fmin_powell says is direc : ndarray, optional Initial direction set. I thought that by giving direc=(0.1,0.1,1), I was telling it to start with step sizes of 0.1 for the first two fitting parameters and 1 for the third, which are needed in my case since the 3rd parameter is not sensitive to step sizes of 0.1. However, with this code it starts with 0.1 for all of the fitting parameters. If I try direc=(1,0.1,1), it uses an initial step of 1 for all parameters which destroys the fit, as the second parameter has a range of (0,1) and results in a division by zero if it ever goes negative. How are you supposed to set this argument?
Scipy.optimize.fmin_powell direc argument syntax
0.099668
0
0
385
21,171,607
2014-01-16T19:58:00.000
0
1
0
0
java,python,c++,c,dll
21,177,883
3
false
0
1
You can also look into Lua; while not as widely used as a lot of other scripting languages, it was meant to be embedded easily into executables. It's relatively small and fast. Just another option. If you want to call other languages from your C/C++, look into SWIG.
1
0
0
A quick question that may seem out of the ordinary (in reverse): instead of calling native code from an interpreted language, is there a way to compile Java or Python code to a .dll/.so and call the code from C/C++? I'm willing to accept even answers such as manually spawning the interpreter or JVM and forcing it to read the .class/.py files (is this a good solution?). Thank you.
Dynamic Link Library for Java/Python to access in C/C++?
0
0
0
253
21,172,756
2014-01-16T20:59:00.000
5
0
0
0
python-2.7,selenium,xpath,selenium-webdriver,css-selectors
21,173,350
1
false
0
0
No idea what you are trying to ask here; I can only take a guess. How about using a CSS selector in Selenium Python if you can't get the id, name, or class of an HTML element? If you are testing a complex web application, you have to learn CSS selectors and/or XPath; yes, the other locating methods are somewhat limited. How about preferring CSS in comparison to XPath? Generally speaking, CSS selectors are favored over XPath because CSS selectors are more elegant and more readable, CSS selectors are faster, XPath engines differ in each browser, and IE does not have a native XPath engine. However, there are situations where XPath is the only way to go, for example: finding an element by its text, or finding an element via its descendants (if there are no better methods), plus a few other rare situations.
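For example, with Selenium's Python bindings (the locators here are made up; API names as of the 2.x series):

from selenium import webdriver

driver = webdriver.Firefox()
driver.get("https://example.com")

# CSS selector: concise for structural/attribute matching
link = driver.find_element_by_css_selector("div.menu > a.active")

# XPath: required for text-based matching, which CSS cannot express
next_link = driver.find_element_by_xpath("//a[text()='Next']")

driver.quit()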
1
0
0
How can I use a CSS selector in Selenium Python if I cannot get the id, name, or class of an HTML element? And should CSS be preferred in comparison to XPath?
CSS selector and XPath in Selenium Python
0.761594
0
1
712
21,173,241
2014-01-16T21:27:00.000
0
0
0
0
jquery,python,django,forms
21,173,464
1
false
1
0
Breaking this into 2 separate models (Student, Industry) would not be a problem; it would actually help you if you need to add more fields to each individual model in the future. Since a person can only belong to one university or one industry, your query is also combined with not much additional overhead. Your initial approach is not wrong either, but you need to think about whether you will need to add additional information to the related models in the future; if, for instance, you need to add courses or sectors, then you start overloading your initial model.
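A rough sketch of that two-model layout (field names taken from the question; Django 1.x syntax, with a one-to-one link playing the foreign-key role):

from django.db import models

class Person(models.Model):
    name = models.CharField(max_length=100)

class Student(models.Model):
    person = models.OneToOneField(Person)
    university = models.CharField(max_length=100)
    graduating_year = models.IntegerField()

class Industry(models.Model):
    person = models.OneToOneField(Person)
    company_name = models.CharField(max_length=100)
    job_title = models.CharField(max_length=100)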
1
0
0
I'm working on a small Django project and for a form, i want to capture the details of the person signing in. There is a radio option which has the values 'Student' or 'Industry'. If Student is chosen, I want two input boxes to be shown, one for 'graduating year' and other for 'university name'. If 'Industry' is chosen I want 2 text boxes, one for 'Company name' other for 'Job title'. Right now, I'm able to get this working using jQuery to hide the un-needed text boxes and attaching a changelistener to the radiobuttons. However is there a django way of doing the same? Right now, my model has: name - common for both cases student_or_industry - ChoiceField job_title company_name univeristy graduating_year And my form is created using the simple ModelForm, which leads to loads of NULLs in the table. Should I be creating a different model for Student and Industry and linking these with a foreign key? If yes, how does this tie in with the forms? Do I create multiple forms? Thanks in Advance
Conditionally Show Form Elements in Django
0
0
0
90
21,173,816
2014-01-16T22:01:00.000
1
0
0
0
python,selenium,selenium-webdriver
21,173,887
1
false
0
0
No; Selenium drives the browser like a regular user would, which means redirects are followed when requested by the web application, either via a 30X HTTP status or when triggered by JavaScript. If you consider it problematic when it happens to users, I suggest you treat it as a legitimate bug in the application.
1
2
0
Does Selenium automatically follow redirects? It seems that the webdriver isn't loading the page I requested. And when it does automatically follow redirects, is there any possibility of preventing this? ceddy
Selenium prevent redirect
0.197375
0
1
3,114
21,177,140
2014-01-17T03:11:00.000
1
0
1
1
python,windows-services
21,183,630
2
true
0
0
A service is nothing but a process/program that runs at a regular interval, checks whether there is work to do, and acts accordingly. If you have the sync script already written, then write another script, service_script, which does the following: check whether the program is required to run at all (a sync is only required if the two sides are not in the same state); decide at what interval it should check, e.g. if your DB is updated every 10 minutes, code your script to sync on that schedule; if there is a job, do it, else go to sleep. If possible, make sure your script is optimised and follows standards and all the basic things. As for a GUI, store the success/failure details in a log file; a small PHP interface or a simple Python HTTP server will help you set up an interface if you want one. I have some experience in writing monitoring scripts & dashboards, but nothing quite similar to your work. Godspeed.
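A bare-bones sketch of that check-then-sleep loop (needs_sync and sync_databases are placeholders for your existing logic):

import time
import logging

logging.basicConfig(filename="sync.log", level=logging.INFO)

def needs_sync():
    return True  # placeholder: compare remote and local DB state here

def sync_databases():
    pass  # placeholder: your existing MySQL sync code

while True:
    if needs_sync():
        sync_databases()
        logging.info("sync completed")
    time.sleep(600)  # the DB updates roughly every 10 minutes, so check on that cadence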
1
1
0
I have a Python file that synchronizes my MySQL database from my own server to the local server. I want to install it as a Windows service that runs every time my local server boots up. Can you help me? I also want to ask: can I make a GUI for that service, like Apache's, that displays beside the taskbar clock? Thank you so much in advance.
running python file in windows services
1.2
0
0
97
21,179,274
2014-01-17T06:29:00.000
-1
1
0
1
python,linux,security
21,179,425
2
false
0
0
If you just want to do this for learning, you can easily build a fake environment with your own faked passwd file. You can use one of the built-in Python encryption methods (e.g. the crypt module) to generate the password hashes. This has the advantage of proper test cases: you know what you are looking for and where you should succeed or fail.
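For instance, the standard-library crypt module (Unix only) can both generate and check entries for a faked passwd-style file:

import crypt

stored_hash = crypt.crypt("letmein", "ab")  # "ab" is the salt

def check(password, stored):
    # Re-hash the guess using the stored value as the salt and compare.
    return crypt.crypt(password, stored) == stored

print(check("letmein", stored_hash))  # True
print(check("wrong", stored_hash))    # False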
1
4
0
Preface: I am fully aware that this could be illegal if not done on a test machine. I am doing this as a learning exercise, to teach myself Python for security and penetration testing. This will ONLY be done on a Linux machine that I own and have full control over. I am learning Python as my first scripting language, hopefully for use down the line in a security position. Upon asking for ideas for scripts to help teach myself, someone suggested that I create one for user enumeration. The idea is simple: cat out the user names from /etc/passwd from an account that does NOT have sudo privileges and try to 'su' into those accounts using the one password that I have. A reverse brute force of sorts: instead of a single user with a list of passwords, I'm using a single password with a list of users. My issue is that no matter how I have approached this, the script hangs or stops at the "Password: " prompt. I have tried multiple methods, from using os.system and echoing the password in, to passing it as a variable, to using the pexpect module. Nothing seems to be working. When I Google it, all of the recommendations point to using sudo, which in this scenario isn't a valid option, as the user I have access to doesn't have sudo privileges. I am beyond desperate on this, just to finish the challenge. I have asked on reddit, in IRC, and all of my programming wizard friends, and beyond echo "password" | sudo -S su, which can't work because the user is not in the sudoers file, I am coming up short. When I try the same thing with just echo "password" | su I get su: must be run from a terminal. This is at both a # and a $ prompt. Is this even possible?
Learning python for security, having trouble with su
-0.099668
0
0
230
21,181,830
2014-01-17T09:09:00.000
3
1
0
0
python,pytest
21,206,053
2
false
0
0
Are you using pytest-2.5.1? pytest-2.5 and in particular issue287 is supposed to have brought support for running all finalizers and re-raising the first failed exception if any.
1
1
0
I use funcargs in my tests: def test_name(fooarg1, fooarg2). All of them have pytest_funcarg__ factories which return request.cached_setup, so all of them have setup/teardown sections. Sometimes I have a problem with fooarg2's teardown, so I raise an exception there. In this case pytest ignores all the other teardowns (fooarg1's teardown, teardown_module, etc.) and just goes to the pytest_sessionfinished section. Is there any option in pytest to not collect exceptions and instead execute all remaining teardown functions?
pytest. execute all teardown modules
0.291313
0
0
297
21,196,399
2014-01-17T21:39:00.000
0
0
1
1
python,scripting,path,ipython
21,197,017
3
false
0
0
sys.path only affects imports, not IPython's %run. The run magic is like calling python script.py - you have to cd into the directory where scripts are, or pass the full path to those scripts.
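So, inside IPython, either of these works (the path is illustrative):

%cd /home/ron/scripts     # change into the scripts directory first
%run myscript.py
# or skip the cd and give the full path directly:
%run /home/ron/scripts/myscript.py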
2
0
0
I'm pretty new to all of this, so please try to bear with me. I've got a directory set up where I dump all the scripts I'm working on, and I'm trying to make it so that I can run the scripts from within that directory directly from IPython. So far, I've added an __init__.py to the aforementioned directory, and have tried appending the path to sys.path; however, even after I successfully append the path, trying to use the run command for any script in the directory results in a not-found error. Another problem I have is that after every kernel reset, sys.path seems to reset to its previous values, without the new path settings I entered. Grateful for any help, ron
running a script from a directory in ipython
0
0
0
1,630
21,196,399
2014-01-17T21:39:00.000
-1
0
1
1
python,scripting,path,ipython
50,347,295
3
false
0
0
In the IPython notebook, type: %run script_name.py
2
0
0
I'm pretty new to all of this, so please try to bear with me. I've got a directory set up where I dump all the scripts I'm working on, and I'm trying to make it so that I can run the scripts from within that directory directly from IPython. So far, I've added an __init__.py to the aforementioned directory, and have tried appending the path to sys.path; however, even after I successfully append the path, trying to use the run command for any script in the directory results in a not-found error. Another problem I have is that after every kernel reset, sys.path seems to reset to its previous values, without the new path settings I entered. Grateful for any help, ron
running a script from a directory in ipython
-0.066568
0
0
1,630
21,197,311
2014-01-17T22:41:00.000
0
0
0
0
csv,python-3.x,io,pandas,pandastream
21,197,452
1
false
0
0
I've done a little bit of this in C#. First you open the file and start reading lines of text. The first line in a .csv should be the header row, so handle that separately. The following lines are your data. Once you have a line of text in a string, split it on commas; that will give you a string array. Then make an int array by converting the strings to integers. This should not be a problem as long as all data in the column are integers; if not, test for non-integer values and convert them to strings that are valid integers, e.g. if array[0] == "no data" then array[0] = "0", or array[0] = null. Then create column 3 by adding the integer values of the first and second columns together.
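Since the question mentions pandas, the same aggregation is a short groupby there; a sketch with hypothetical column names (a and b summed into c3):

import pandas as pd

df = pd.read_csv("data.csv")
df["c3"] = df["a"] + df["b"]                        # sum of two other columns
out = df.groupby(["c1", "c2"], as_index=False)["c3"].sum()
print(out)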
1
0
1
I need to process data from a CSV file in such a way that the output prints three columns, e.g. c1, c2, and c3, where c1 and c2 are grouped as with a GROUP BY clause in MySQL, and c3 is the sum of two other columns. I am new to Python; ideas will really help me.
Aggregation of data from CSV file using Pandas python
0
0
0
128
21,200,565
2014-01-18T05:46:00.000
0
1
0
0
python,linux,curl
21,940,288
4
false
0
0
The requests library is the most supported and advanced way to do this.
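A sketch of a curl-style picture upload with requests (the endpoint URL is a placeholder):

import requests

with open("picture.jpg", "rb") as f:
    # Roughly equivalent to: curl -F "file=@picture.jpg" https://example.com/upload
    response = requests.post("https://example.com/upload", files={"file": f})

print(response.status_code)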
1
0
0
I have a python program that takes pictures and I am wondering how I would write a program that sends those pictures to a particular URL. If it matters, I am running this on a Raspberry Pi. (Please excuse my simplicity, I am very new to all this)
Curl Equivalent in Python
0
0
1
2,487
21,201,618
2014-01-18T08:00:00.000
11
0
0
0
python,pandas
49,123,783
4
false
0
0
Pandas now has the function merge_asof, doing exactly what was described in the accepted answer.
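A sketch (merge_asof requires both frames sorted on the key; direction="backward", the default in recent pandas, picks the closest earlier-or-equal stamp):

import pandas as pd

left = pd.DataFrame({"ts": pd.to_datetime(
    ["2014-01-01 00:00:00.003", "2014-01-01 00:00:00.180"]), "a": [1, 2]})
right = pd.DataFrame({"ts": pd.to_datetime(
    ["2014-01-01 00:00:00.002", "2014-01-01 00:00:00.150"]), "b": [10, 20]})

# For each row of `left`, take the `right` row with the nearest ts <= left.ts.
merged = pd.merge_asof(left.sort_values("ts"), right.sort_values("ts"),
                       on="ts", direction="backward")
print(merged)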
1
22
1
I have two dataframes, both of which contain an irregularly spaced, millisecond resolution timestamp column. My goal here is to match up the rows so that for each matched row, 1) the first time stamp is always smaller or equal to the second timestamp, and 2) the matched timestamps are the closest for all pairs of timestamps satisfying 1). Is there any way to do this with pandas.merge?
pandas.merge: match the nearest time stamp >= the series of timestamps
1
0
0
16,509
21,201,970
2014-01-18T08:44:00.000
0
0
1
0
python,random
21,202,082
5
false
0
0
Yes, for repeated sampling from one population, @MaxLascombe's answer is OK. If you do not want duplicates in the sample, you should kick the chosen item out, then use @MaxLascombe's answer on the rest of the list.
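A sketch of that kick-the-chosen-one-out approach using only random.randint:

import random

names = ["Alice", "Bob", "Carol", "Dave", "Eve", "Frank", "Grace"]
pool = list(names)  # copy, so the original list stays intact
picked = []
for _ in range(5):
    i = random.randint(0, len(pool) - 1)  # a valid index into what's left
    picked.append(pool.pop(i))            # removing it prevents repeats
print(picked)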
1
0
0
How do I use random.randint() to select random names from a list in Python? I want to print 5 names from that list. I know how to use random.randint() for numbers, but I don't know how to select random names from a given list. We are not allowed to use random.choice. Help me please.
randomly choose from list using random.randint in python
0
0
0
2,543
21,203,648
2014-01-18T11:47:00.000
0
1
1
0
python,unit-testing,web-crawler
21,203,787
2
false
0
0
Unit testing verifies that your code does what you expect in a given environment. You should make sure all other variables are as you expect them to be and test your single method. To do that for methods which use third party APIs, you should probably mock them using a mocking library. By mocking you provide data you expect and verify that your method works as expected. You can also try to separate your code so that the part which makes API request and the part that parses/uses it are separate and unit test that second part with a certain API example response you provide.
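For instance, a sketch mocking the network layer so the rest of the crawler can be tested deterministically (the module and function names are hypothetical; on Python 2 the same API lives in the external mock package):

import unittest
from unittest import mock

import mycrawler  # hypothetical module under test

class TestCrawler(unittest.TestCase):
    @mock.patch("mycrawler.fetch_page")  # hypothetical download function
    def test_crawl_extracts_links(self, fake_fetch):
        # The mock pins the environment down: we know exactly what HTML the
        # crawler "downloads", so we know exactly what it must return.
        fake_fetch.return_value = '<a href="http://example.com/a">a</a>'
        self.assertEqual(mycrawler.crawl("http://start"),
                         ["http://example.com/a"])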
2
3
0
Sorry if this is a really dumb question but I've been searching for ages and just can't figure it out. So I have a question about unit testing, not necessarily about Python, but since I'm working with Python at the moment I chose to base my question on it. I get the idea of unit testing, but the only thing I can find on the internet are the very simple unit tests. Like testing if the method sum(a, b) returns the sum of a + b. But how do you apply unit testing when dealing with a more complex program? As an example, I have written a crawler. I don't know what it will return, else I wouldn't need the crawler. So how can I test that the crawler works properly without knowing what the method will return? Thanks in advance!
Python - unit testing
0
0
0
569
21,203,648
2014-01-18T11:47:00.000
4
1
1
0
python,unit-testing,web-crawler
21,203,798
2
true
0
0
The whole crawler would probably be tested functionally (we'll get there). As for unit testing, you have probably written your crawler with several components, like a page parser, url recogniser, fetcher, redirect handler, etc. These are your UNITS. You should unit test each of them, or at least those with at least slightly complicated logic, where you can expect some output for some input. Remember that sometimes you'll test behaviour, not input/output, and this is where mocks and stubs may come in handy. As for functional testing - you'll need to create some test scenarios, like a webpage with links to other webpages that you'll create, and set them up on some server. Then you'll need to perform crawling on the webpages YOU created, and check whether your crawler is behaving as expected (you should know what to expect, because you'll be creating those pages). Also, sometimes it is good to perform integration tests between unit and functional testing. If you have some components working together (for example the fetcher using the redirect handler) it is good to check whether those two work together as expected (for example, you may create a resource on your own server that, when fetched, will return a redirect HTTP code, and check whether it is handled as expected). So, in the end: create unit tests for the components that make up your app, to see if you haven't made a simple mistake; create integration tests for co-working components, to see if you glued everything together just fine; create functional tests, to be sure that your app will work as expected (because some errors may come from the design, not from the implementation).
2
3
0
Sorry if this is a really dumb question but I've been searching for ages and just can't figure it out. So I have a question about unit testing, not necessarily about Python, but since I'm working with Python at the moment I chose to base my question on it. I get the idea of unit testing, but the only thing I can find on the internet are the very simple unit tests. Like testing if the method sum(a, b) returns the sum of a + b. But how do you apply unit testing when dealing with a more complex program? As an example, I have written a crawler. I don't know what it will return, else I wouldn't need the crawler. So how can I test that the crawler works properly without knowing what the method will return? Thanks in advance!
Python - unit testing
1.2
0
0
569
21,205,278
2014-01-18T14:16:00.000
1
0
0
1
python,windows,admin,cx-freeze
21,368,197
1
false
0
0
After much hunting I have found a solution. I tried using: os.popen, os.startfile, subprocess.call, subprocess.Popen, and finally os.system. As os.system is essentially the same as typing on the command line or putting the arguments into a batch file and then executing it, this asks for the executable's default permissions; the only downside to this is that I get a shell window when the UAC window comes up, which remains until the program it opened is closed. The problems with the other solutions are: 1 - passes only the permissions of the calling application, regardless of what the called application requires; 2 - asks for a higher level of permissions but has no mechanism to pass arguments; 3 - same as 1; 4 - same as 1. If anyone can recommend a mechanism to prevent the shell window it would be appreciated. James
1
0
0
I am trying to write a program in python that consists of several parts: a config utility a hardware monitor a background process The idea being that once installed (using cx_freeze) the hardware monitor is constantly running in the background, when a piece of compatible hardware (using d2xx driver for FTDI devices) is connected it checks the registry to see if it has been previously configured, if it has then it starts the background process with the serial number as an argument, however if not it starts the config utility. However the hardware monitor needs to be running from start-up and as it only reads from the registry doesn't need full admin privileges, and the background process only reads so also does not need admin provileges, but the config utility needs to be able to write to the registry and hence needs admin. My question is this: How can I call another program from within python as admin and with arguments? I considered using os.startfile as I have set the frozen program as needing admin, however i then can't pass arguments to it. I also considered using subprocess.Popen but i can't work out how, or even if you can, elevate this to admin level, so while it will open the program and pass it the arguments it can't write to the registry. Any help would be appreciated, for further information my set-up is: Windows 7 64 bit (but also plan to do XP 32 bit) python2.7.6 (again 64 bit but plan to also do 32 bit) PyUSB-1.6 psutil-1.2.1 cx_freeze-4.3.2 Thanks James
Python starting subprocess as admin without calling process being admin
0.197375
0
0
1,811
21,205,508
2014-01-18T14:37:00.000
0
1
0
1
python,while-loop,erlang,request
21,226,686
2
false
0
0
Ports communicate with the Erlang VM via standard input/output. Does your Python program use stdin/stdout for other purposes? If yes, that may be the reason for the problem.
1
2
0
I want to read some data from a port in Python inside a while-true loop. Then I want to grab that data from Python in Erlang on a function call. So technically, in this while-true loop some global variables get set, and on a request from Erlang those variables will be returned. I am using erlport for this communication, but what I found was that I can make calls and casts to the Python code but not run a function in Python (in this case the main) and let it keep running. When I tried to run it with the call function, Erlang blocks and is obviously waiting for a response. How can I do this? Any other alternative approaches are also welcome if you think this is not the correct way to do it.
Run python program from Erlang
0
0
0
1,127
21,205,596
2014-01-18T14:45:00.000
2
0
1
0
python,arrays,fortran,f2py
21,223,365
2
false
0
0
I love the Python+Fortran stack. :) When needing close communication between your Python front-end and Fortran engine, a good option is to use the subprocess module in Python. Instead of saving the arrays to a text file, you'll keep them as arrays. Then you'll execute the Fortran engine as a subprocess within the Python script. You'll pipe the Python arrays into the Fortran engine and then pipe the results out to display. This solution will require changing the file I/O in both the Python and Fortran codes to writing and reading to/from a pipe (on the Python side) and from/to standard input and output (on the Fortran side), but in practice this isn't too much work. Good luck!
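A rough sketch of the Python side of that pipe (the executable name is a placeholder; the Fortran program is assumed to read whitespace-separated numbers from stdin and write its results to stdout):

import subprocess
import numpy as np

arrays = np.arange(12, dtype=np.float64).reshape(3, 4)

proc = subprocess.Popen(["./fortran_engine"],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE)
payload = "\n".join(" ".join(str(v) for v in row) for row in arrays)
out, _ = proc.communicate(payload.encode())
results = np.loadtxt(out.decode().splitlines())  # parse whatever came back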
1
1
1
Background: My program currently assembles arrays in Python. These arrays are connected to a front-end UI and as such have interactive elements (i.e. user specified values in array elements). These arrays are then saved to .txt files (depending on their later use). The user must then leave the Python program and run a separate Fortran script which simulates a system based on the Python output files. While this only takes a couple of minutes at most, I would ideally like to automate the process without having to leave my Python UI. Assemble Arrays (Python) -> Edit Arrays (Python) -> Export to File (Python) -> Import File (Fortran) -> Run Simulation (Fortran) -> Export Results to File (Fortran) -> Import File to UI, Display Graph (Python) Question: Is this possible? What are my options for automating this process? Can I completely remove the repeated export/import of files altogether? Edit: I should also mention that the fortran script uses Lapack, I don't know if that makes a difference.
Passing Arrays from Python to Fortran (and back)
0.197375
0
0
2,760
21,206,568
2014-01-18T16:14:00.000
1
0
0
0
python,html,webserver
21,260,040
1
true
1
0
This has nothing to do with BaseHTTPRequestHandler, as its purpose is to serve HTML; how you generate the HTML is another topic. You should use a templating tool; there are a lot available for Python, and I would suggest Mako or Jinja2. Then, in your code, just generate the real HTML from the template and use it in your handler response.
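A minimal sketch with Jinja2 inside a handler (the template string and field names are illustrative; on Python 2 the server classes live in BaseHTTPServer):

from http.server import BaseHTTPRequestHandler, HTTPServer
from jinja2 import Template

PAGE = Template("<html><body><h1>{{ title }}</h1>{{ body }}</body></html>")

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Shared layout, page-specific pieces filled in per request.
        html = PAGE.render(title="Config", body="page-specific content")
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(html.encode("utf-8"))

HTTPServer(("", 8000), Handler).serve_forever()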
1
0
0
I am building a small program with Python, and I would like to have a GUI for some configuration stuff. Now I have started with a BaseHTTPServer, and I am implementing a BaseHTTPRequestHandler to handle GET and POST requests. But I am wondering what would be best practice for the following problem. I have two separate requests that result in very similar responses. That is, the two pages that I return have a lot of html in common. I could create a template html page that I retrieve when either of these requests is done and fill in the missing pieces according to the specific request. But I feel like there should be a way where I could directly retrieve two separate html pages, for the two requests, but still have one template page so that I don't have to copy this. I would like to know how I could best handle this, e.g. something scalable. Thanks!
Use template html page with BaseHttpRequestHandler
1.2
0
1
205
21,207,628
2014-01-18T17:46:00.000
1
0
0
0
python,django,angularjs,web-applications,backbone.js
21,207,658
2
false
1
0
Django is a full-featured MVC framework where you generate the views on the server side. I would say that is redundant with a single-page web application framework like Angular. If you use that, and you want to stick with Python, then you would probably be better served with a REST API library like Flask. Neither is "better." It depends on which programming model you prefer and the requirements for your application.
2
0
0
I am a little new to the Django framework. I have worked through Django's tutorial and I would like to ask a very simple question: if I want to build an advanced web app with a database using the Django framework (server side), do I really need to also choose a client framework like angular.js or backbone? Can I do the client side without involving a specific framework? I ask this question as a matter of caution and to save time.
Advanced Django app
0.099668
0
0
118
21,207,628
2014-01-18T17:46:00.000
1
0
0
0
python,django,angularjs,web-applications,backbone.js
21,207,803
2
true
1
0
You don't need to choose any other client framework, you can use solely Django - it's a full featured framework which is designed to be flexible enough for all your needs. There's a small learning curve (as with all good frameworks) but it's really not hard, especially if you have a background in Python. My advice would be to just play with it. Follow the tutorial making the voting application and then move onto creating forms, playing with the models and forms, making everything work cohesively and then once you're familiar with things you can begin writing your advanced web application. Also if you get stuck then there's the #django channel on Freenode (IRC) which can be useful.
2
0
0
I am a little new to the Django framework. I have worked through Django's tutorial and I would like to ask a very simple question: if I want to build an advanced web app with a database using the Django framework (server side), do I really need to also choose a client framework like angular.js or backbone? Can I do the client side without involving a specific framework? I ask this question as a matter of caution and to save time.
Advanced Django app
1.2
0
0
118
21,209,496
2014-01-18T20:31:00.000
5
0
0
0
python-3.x,pygame
21,209,675
1
true
0
1
There are 2 methods available for getting the width and height of a surface. The first one is get_size(), it returns a tuple (width,height). To access width for instance, you would do: surface.get_size()[0] and for height surface.get_size()[1]. The second method is to use get_width(), and get_height(), which return the width and the height. I suggest going through the python tutorial, to learn more about basic data structures such as tuples.
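Both in one short sketch (the image file name is arbitrary):

import pygame

image = pygame.image.load("sprite.png")
width, height = image.get_size()  # unpack the (width, height) tuple
# or, equivalently:
width = image.get_width()
height = image.get_height()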
1
1
0
How do you get the width and height of an image imported into pygame? I got the size using Surface.get_size, but I don't know how to get the width and height.
Getting width and height of an image in Pygame
1.2
0
0
7,226
21,210,283
2014-01-18T21:45:00.000
1
0
0
0
python,rdp
21,210,586
2
false
0
0
If you need an interactive window, use the subprocess module to start your rdesktop.exe (or whatever). If you need to run some command automatically, you're probably better off forgetting about RDP and using ssh (with passwordless, passphraseless authentication via RSA or similar), psexec (note that some antivirus programs may dislike psexec, not because it's bad, but because it's infrequently been used by malware for bad purposes) or WinRM (this is what you use in PowerShell; it's like ssh or psexec, except it serializes objects on the sender, transmits, and deserializes back to an object on the recipient). Given a choice among the 3, I'd choose ssh. Cygwin ssh works fine, but there are several other implementations available for Windows. HTH
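For the interactive case, a sketch launching an RDP session from Python via Windows' built-in client (the host name is a placeholder; mstsc's /v: switch names the target):

import subprocess

# mstsc is the stock Windows Remote Desktop client.
subprocess.Popen(["mstsc", "/v:server.example.com"])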
1
0
0
I am writing a script in python, and part of it needs to connect to a remote computer using rdp. Is there a script or an api that I could use to create this function? Also, if there is not, is there a way to package a rdp application along side python and then use a python script to run it? Any help would be much appreciated. Thanks in advance, Nate
RDP script in python?
0.099668
0
1
5,376
21,211,628
2014-01-19T00:18:00.000
1
0
1
0
python,python-3.x
21,211,687
3
false
0
0
Short answer: better not. Longer answer: it is impossible in Python 2.7, as it will give you a SyntaxError, but Python 3 will allow you to do it, though it is a very, very bad practice to do so - not just in Python, but in any language out there.
1
3
0
I'm developing an app in Python3 and need to create a class that represents a diary. Well, I want to name it in my language, and it has an accent. Is it a bad practice? Will I have problems because of this character? class Diário(Base): pass
Is it a bad practice to use accents in Python class names?
0.066568
0
0
347
21,213,827
2014-01-19T06:15:00.000
2
0
1
0
ipython-notebook
21,332,689
3
false
0
0
One hack if you're desperate: open the .ipynb file, which is a text file. Scroll down to the lengthy cell output and delete it. Of course, you need to be careful that the result is still a valid .ipynb file.
2
3
0
I mistakenly printed too much to the output during a single cell's execution, and now the browser tab completely freezes every time that notebook is opened. I tried restarting IPython and it didn't help (I am guessing that each time the notebook is loaded, the whole chunk of text is loaded with it). Is there a way to load a notebook with outputs suspended or cleared?
How to load a notebook without the outputs?
0.132549
0
0
1,137
21,213,827
2014-01-19T06:15:00.000
0
0
1
0
ipython-notebook
52,972,161
3
false
0
0
Your code will be saved in the form of JSON. Open it with a JSON viewer, carefully delete the unwanted output cells, and save it back.
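A sketch of doing that edit programmatically instead of by hand (assumes the newer nbformat-4 layout with a top-level "cells" list; older v3 files nest cells under "worksheets"):

import json

with open("notebook.ipynb") as f:
    nb = json.load(f)

for cell in nb["cells"]:
    if cell["cell_type"] == "code":
        cell["outputs"] = []           # drop the oversized printed output
        cell["execution_count"] = None

with open("notebook.ipynb", "w") as f:
    json.dump(nb, f)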
2
3
0
I mistakenly printed too much to the output during a single cell's execution, and now the browser tab completely freezes every time that notebook is opened. I tried restarting IPython and it didn't help (I am guessing that each time the notebook is loaded, the whole chunk of text is loaded with it). Is there a way to load a notebook with outputs suspended or cleared?
How to load a notebook without the outputs?
0
0
0
1,137
21,216,706
2014-01-19T12:23:00.000
1
1
0
0
java,python,c,libffi
21,247,326
1
false
1
0
In general, things get complicated when you're talking about two managed runtimes (CPython and the JVM, for instance). libffi only really deals with a subset of the issues here. I would look more at remote method invocations as a way to integrate code written in different managed runtime environments.
1
0
0
I'm wondering if it is possible for an app to run in Python and call Java methods (and vice versa) through libffi?
Can libffi be used for Python and Java to communicate?
0.197375
0
0
232
21,220,592
2014-01-19T18:10:00.000
2
0
0
1
python,ajax,google-app-engine
21,220,947
2
false
1
0
AJAX has nothing to do with PHP: it's a fancy name for a technique whose goal is to provide a way for the browser to communicate asynchronously with an HTTP server. It is independent of whatever is powering that server (be it PHP, Python or anything). I fear that you might not be able to understand this yet, so I recommend you to Google about it and experiment a lot before starting your project.
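On the Python side nothing special is required: an AJAX call is just an HTTP request that a webapp2 handler answers, e.g. with JSON (the route is illustrative):

import json
import webapp2

class ApiHandler(webapp2.RequestHandler):
    def get(self):
        # The browser's XMLHttpRequest hits this like any other GET.
        self.response.headers["Content-Type"] = "application/json"
        self.response.write(json.dumps({"status": "ok"}))

app = webapp2.WSGIApplication([("/api/status", ApiHandler)])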
1
1
0
I was planning to develop an ecommerce site using Google App Engine in Python. Now, I want to use Ajax for some added dynamic features. However, I read somewhere that I need to know PHP in order to use AJAX on my website. So, is there no way I can use Ajax in Python in Google App Engine? Also, I would be using the webapp2 framework for my application. Also, if its possible to use Ajax in Google App Engine with Python, can anyone suggest some good tutorials for learning Ajax for the same?
Google App Engine: Using Ajax
0.197375
0
0
281
21,220,842
2014-01-19T18:31:00.000
5
0
0
0
python-2.7,ubuntu,numpy
21,222,072
1
true
0
0
Notes for the future me, when trying to redo the stuff: there are some prerequisites for working with numpy/scipy: g++, gfortran, blas, atlas, lapack. It seems to be better (though time consuming) to compile the numpy/scipy sources; pip install does this. The commands were: sudo apt-get install g++ gfortran liblapack-dev libopenblas-dev python-dev python-pip; sudo pip install nose; sudo pip install numpy; python -c "import numpy; numpy.test()". For the scipy library the following worked: sudo pip install scipy; python -c "import scipy; scipy.test()"
1
0
1
on ubuntu 12.04 x32 I have installed python 2.7.3, numpy 1.6.1 via sudo apt-get install python-numpy. I run the test() from numpy via numpy.test() and I get: FAIL: test_pareto (test_random.TestRandomDist) Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/numpy/random/tests/test_random.py", line 313, in test_pareto np.testing.assert_array_almost_equal(actual, desired, decimal=15) File "/usr/lib/python2.7/dist-packages/numpy/testing/utils.py", line 800, in assert_array_almost_equal header=('Arrays are not almost equal to %d decimals' % decimal)) File "/usr/lib/python2.7/dist-packages/numpy/testing/utils.py", line 636, in assert_array_compare raise AssertionError(msg) AssertionError: Arrays are not almost equal to 15 decimals (mismatch 16.6666666667%) x: array([[ 2.46852460e+03, 1.41286881e+03], [ 5.28287797e+07, 6.57720981e+07], [ 1.40840323e+02, 1.98390255e+05]]) y: array([[ 2.46852460e+03, 1.41286881e+03], [ 5.28287797e+07, 6.57720981e+07], [ 1.40840323e+02, 1.98390255e+05]]) Ran 3169 tests in 17.483s FAILED (KNOWNFAIL=3, SKIP=4, failures=1) What should I do? did I miss a dependency or so? Thanks.
Numpy test() finished with errors
1.2
0
0
2,909
21,221,141
2014-01-19T18:57:00.000
0
0
0
0
javascript,python,asp.net,pyqt,qtwebkit
21,532,850
1
true
1
1
Since nobody answered, I will post my work-around. Basically, I wanted to "transfer" my session from Mechanize (the Python module) to QtWebKit's QWebView (PyQt4 module), because the vast majority of my project was automated headless, but I had encountered a road block where I had no choice but to have the user manually enter data into a possible resulting page (as the form was different each time depending on circumstances). Instead of transferring sessions, I met this requirement by utilizing QWebView's javascript functionality. My method went like this: Load the page in Mechanize, and save the downloaded HTML to a local temporary file. Load this local file in QWebView. The user can now enter the required data into the local copy of this page. Locate the form fields on this page, and pull the data the user entered using javascript. You can do this by getting the main frame object for the page (QWebView->Page()->MainFrame()), and then evaluating javascript code to accomplish the above task (use evaluateJavaScript()). Take the data you have extracted from the form fields, and use it to submit the form with the connection you still have open with mechanize. That's it! A bit of a work-around, but it works none-the-less :\
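The javascript-extraction step looks roughly like this in PyQt4 (the file path and element id are hypothetical):

import sys
from PyQt4.QtCore import QUrl
from PyQt4.QtGui import QApplication
from PyQt4.QtWebKit import QWebView

app = QApplication(sys.argv)
view = QWebView()
view.load(QUrl.fromLocalFile("/tmp/page.html"))  # the HTML saved by mechanize

def grab(ok):
    frame = view.page().mainFrame()
    value = frame.evaluateJavaScript(
        "document.getElementById('account_number').value")
    print(value)  # hand this back to the mechanize session

view.loadFinished.connect(grab)
view.show()
app.exec_()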
1
1
0
The issue: I have written a ton of code (to automate some pretty laborious tasks online), and have used the mechanize library for Python to handle network requests. It is working very well, except now I have encountered a page which I need javascript functionality... mechanize does not handle javascript. Proposed Solution: I am using PyQt to write the GUI for this app, and it comes packaged with QtWebKit, which DOES handle javascript. I want to use QtWebKit to evaluate the javascript on the page that I am stuck on, and the easiest way of doing this would be to transfer my web session from mechanize over to QtWebKit. I DO NOT want to use PhantomJS, Selenium, or QtWebKit for the entirety of my web requests; I 100% want to keep mechanize for this purpose. I'm wondering how I might be able to transfer my logged in session from mechanize to QtWebKit. Would this work? Transfer all cookies from mechanize to QtWebView Transfer the values of all state variables (like _VIEWSTATE, etc.) from mechanize to QWebView (the page is an ASP.net page...) Change the User-Agent header of QWebView to be identical to mechanize... I don't really see how I could make the two "browsers" appear more identical to the server... would this work? Thanks!
Python - Transferring session between two browsers
1.2
0
0
384
21,221,431
2014-01-19T19:20:00.000
5
0
1
0
python
21,221,460
2
true
0
0
Just use python -m pdb mycode.py, which will run your code in the python debugger (pdb module). In the debugger you can execute arbitrary code, watch variables, and jump to different places in the code. Specifically, n will execute the next line and h will show you the debugger help.
1
1
0
Is it possible to run code line by line with Python, including running any module code line by line as it is used? I would like to run some code line by line and watch as each of the lines goes through the processing phase, to see just what code is getting executed when certain actions occur. I'm curious how certain values are getting passed off to the interpreter.
Python: Is line by line execution possible
1.2
0
0
323
21,222,621
2014-01-19T21:02:00.000
2
0
1
0
python
21,222,941
5
false
0
0
You should use the decimal module. Each number knows how many significant digits it has.
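A sketch of the difference (note that plain 10 / 100 is integer division in Python 2):

from decimal import Decimal

x = Decimal("10.00")
print(x / 100)              # Decimal('0.1')    -- no truncation to zero
print(x * Decimal("0.10"))  # Decimal('1.0000') -- significant digits kept

print(10 / 100)             # 0 under Python 2's integer division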
1
2
0
I'd like to pass numbers around between functions, while preserving the decimal places for the numbers. I've discovered that if I pass a float like '10.00' in to a function, then the decimal places don't get used. This messes up operations like calculating percentages. For example, x * (10 / 100) will always return 0. But if I manage to preserve the decimal places, I end up doing x * (10.00 / 100). This returns an accurate result. I'd like to have a technique that enables consistency when I'm working with numbers whose decimal places can hold zeroes.
In python, how do I preserve decimal places in numbers?
0.07983
0
0
10,506
21,222,632
2014-01-19T21:03:00.000
4
0
1
0
python,py2exe,pyinstaller
21,222,812
2
true
0
0
The problem you would have is that if your friend decided to change something in the config, he'd have to ask you to do it, run py2exe again and send the .exe to him again. With an .ini file, he'd simply edit the file.
2
1
0
I'm writing a script for a colleague who runs Windows but my development environment is GNU/Linux. I have a bunch of variables that need to be configurable. So I put them all in a config.py that I've imported it into the main project. Originally I planned to ask him to install Cygwin but then I thought of packaging it into an exe with py2exe or pyinstaller. I've not used either of these before so I don't know how they work. Would I have problems with the config.py file or should I be using an actual module like ConfigParser to store my settings so that it can be separate from the .exe file?
py2exe/pyinstaller: Is it bad practice to put all configurable variables in a .py file?
1.2
0
0
338
21,222,632
2014-01-19T21:03:00.000
1
0
1
0
python,py2exe,pyinstaller
21,222,800
2
false
0
0
I would definitely use a config parser or even just a json or ini file.
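A sketch of the .ini route (section and option names are made up; the module is named ConfigParser on Python 2):

# settings.ini, shipped next to the frozen .exe:
#   [paths]
#   input_dir = C:\data\in
from configparser import ConfigParser

cfg = ConfigParser()
cfg.read("settings.ini")
input_dir = cfg.get("paths", "input_dir")
print(input_dir)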
2
1
0
I'm writing a script for a colleague who runs Windows but my development environment is GNU/Linux. I have a bunch of variables that need to be configurable. So I put them all in a config.py that I've imported it into the main project. Originally I planned to ask him to install Cygwin but then I thought of packaging it into an exe with py2exe or pyinstaller. I've not used either of these before so I don't know how they work. Would I have problems with the config.py file or should I be using an actual module like ConfigParser to store my settings so that it can be separate from the .exe file?
py2exe/pyinstaller: Is it bad practice to put all configurable variables in a .py file?
0.099668
0
0
338
21,223,230
2014-01-19T21:57:00.000
0
0
0
0
python,django,rest,login
21,223,261
1
true
1
0
I don't think so. If this is safe for using on web pages, why should it be a problem for API calls? If you are really worried about someone getting session IDs, use SSL to encrypt your communication. But that should be the same for web resources as well, you should use https if you don't want session cookies to be stolen.
1
0
0
I have a REST API in Django 1.6, but I'm not using any library like django-tastypie or others to do it; I just write my endpoints (urls.py) and return JSON data in my views.py. For authentication I'm using Django's provided basic auth. So in every request made by the front-end I check request.user.id and with that work out whether that user has access to a certain resource; in other words, I'm using the login session data that Django sets when the front-end calls the login endpoint. Am I incurring security issues doing this?
Django as a service login and logout
1.2
0
0
378
21,225,368
2014-01-20T02:31:00.000
2
0
0
1
python,django,deployment,paas
21,233,816
1
false
1
0
If I was doing it (and I did a similar thing with a PHP application I inherited), I'd have a Fabric command that allows me to provision a new instance. This could be broken up into the requisite steps (check out the code, create the database, run syncdb/migrate, create the DNS entry, start the web server). I'd probably do something sane like use the DNS entry as the database name, or at least use a reversible function to derive it. You could then string these together to easily create a new instance. You will also need a way to tell the newly created instance which database and domain name it needs to use. You could have the provisioning script write some data to a file in the checked-out repository that is then used by Django in its initialisation phase.
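A skeleton of such a Fabric command (Fabric 1.x style; every command and path below is a placeholder for the real provisioning steps):

from fabric.api import local, task

@task
def provision(customer):
    local("git clone git@bitbucket.org:me/project.git /srv/%s" % customer)
    local("createdb %s" % customer)                    # database named after the instance
    local("add-dns-entry %s.example.com" % customer)   # placeholder command
    local("supervisorctl start %s" % customer)         # bring up the web process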
1
1
0
I have a Django 1.6 project (stored in a Bitbucket Git repo) that I wish to host on a VPS. The idea is that when someone purchases a copy of the software I have written, I can type in a few simple commands that will take a designated copy of the code from Git, create a new instance of the project with its own subdomain (e.g. <customer_name>.example.com), and create a new Postgres database (on the same server). I should hopefully be able to create and remove these 'instances' easily. What's the best way of doing this? I've looked into writing scripts using some sort of combination of Supervisor/Gnunicorn/Nginx/Fabric etc. Other options could be something more serious like using Docker or Vagrant. I've also looked into various PaaS options too. Thanks in advance. (EDIT: I have looked at the following services/things: Dokku (can't use Heroku due to data constraints), Vagrant (inc Puppet), Docker, Fabfile, Deis, Cherokee, Flynn (under dev))
How do I run a Django 1.6 project with multiple instances running off the same server, using the same db backend?
0.379949
0
0
222
21,226,387
2014-01-20T04:44:00.000
1
0
0
1
python,rules,sniffer
21,226,492
1
true
0
0
In a modern switched network, your system is in general only going to see two kinds of traffic: unicast traffic explicitly directed to your system and broadcast traffic that is visible to all systems. Nothing you can do in your code will make other traffic on the network visible to you. Enabling promiscuous mode on your interfaces in this situation is going to net you very little additional traffic. This is less true in a network with a shared bus, such as Wifi, or back in the old days when we used hubs instead of switches. Netfilter - the Linux firewall you manipulate with the iptables command - really only operates at the layer 3 (ip) level, and isn't going to affect what traffic is visible to your interface.
1
0
0
I am running Ubuntu on my machine and want to write some sniffer scripts. But I am getting packets related to my NIC only even if I run my Interface in promisc mode. Is there any IPTABLE rules that i need to put on so that i can get entrie packets on the network?? Please help. I am using python for everything i am doing , if it helps
IPTABLE rules to get all network packets in promisc mode
1.2
0
0
607
21,228,282
2014-01-20T07:21:00.000
1
0
0
0
python,django,django-socialauth
21,248,297
1
false
1
0
Set SOCIAL_AUTH_FIELDS_STORED_IN_SESSION = ['foo_id'] in your settings; then you will be able to access foo_id in update_user_details via the usual request.session['foo_id'], as sketched below.
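A rough sketch of the two pieces (the update_user_details signature here is illustrative; match it to the version of django-social-auth you are running):

    # settings.py
    SOCIAL_AUTH_FIELDS_STORED_IN_SESSION = ['foo_id']

    # wherever you override update_user_details
    def update_user_details(backend, details, response, user, request=None,
                            *args, **kwargs):
        foo_id = request.session.get('foo_id')  # set by /login/facebook/?foo_id=bar
        if foo_id:
            foo = Foo.objects.get(pk=foo_id)
            foo.user = user  # hypothetical association from the question
            foo.save()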
1
0
0
My Django social-auth Facebook login works fine, using the default URL /login/facebook/. I'm also able to do stuff with the new user by overriding the update_user_details method. But I would like to pass some extra arguments to process in update_user_details. For instance, if I wanted to associate a model Foo with the user after it's been created, I would like to call the URL /login/facebook/?foo_id=bar, so that I can get foo_id back in update_user_details. Any ideas?
Passing arguments to Django social-auth Facebook login
0.197375
0
0
170
21,234,884
2014-01-20T13:02:00.000
2
0
1
0
python-2.7,pygame,frame-rate
21,239,429
2
false
0
1
I'm not entirely sure that I understand the question being asked, but I think you will just have to experiment with different numbers and find out what works for you; I find that around 50-100 is a good range (see the loop sketch below). If what you are trying to do is make game events update only a certain number of times per second while rendering happens as fast as the computer can handle, that is a much more complex process and probably not easily done in pygame.
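For reference, the usual pattern is to call clock.tick(fps) once per frame and use its return value (elapsed milliseconds) to scale movement, so the game behaves the same at any achievable frame rate:

    import pygame

    pygame.init()
    screen = pygame.display.set_mode((640, 480))
    clock = pygame.time.Clock()
    running = True
    while running:
        dt = clock.tick(60)  # cap at 60 fps; dt = ms since last frame
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
        # move things by speed * dt so lag doesn't change game speed
        screen.fill((0, 0, 0))
        pygame.display.flip()
    pygame.quit()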
1
0
0
How can I set a suitable fps number in pygame for any kind of monitor running my game? I know I can set the fps using pygame.time.Clock().tick(fps), but how do I pick a suitable value? Please post example Python code along with the answer.
How to set fps in pygame in order to prevent lag?
0.197375
0
0
603
21,234,887
2014-01-20T13:02:00.000
-1
0
0
0
python,numpy,storage,similarity
21,235,208
3
false
0
0
It is hard to answer your question because I don't know your data volume and type, but here is what I can tell you now. If you are thinking about using files for this, it may have scale-out issues once you scale your Python application across multiple boxes, so you would need shared storage. In that case you have to think about a shared-storage file system like GlusterFS or Hadoop (GlusterFS is easier), but access times will be poor. Another option is Redis: a memory-based key-value store that also supports persistence to disk (which makes its characteristics a little different from memcached's). The final option is a NoSQL database, which can give you scalability and performance. It always depends on your requirements.
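If you go the Redis route, here is a sketch of round-tripping a NumPy matrix (assumes a local Redis server and the redis-py package; the key layout is arbitrary):

    import numpy as np
    import redis

    r = redis.Redis(host='localhost', port=6379)

    def save_matrix(key, matrix):
        # store the raw bytes plus enough metadata to rebuild the array
        r.set(key, matrix.tobytes())
        r.set(key + ':meta',
              '%s|%s' % (matrix.dtype, ','.join(map(str, matrix.shape))))

    def load_matrix(key):
        raw = r.get(key)
        if raw is None:
            return None
        dtype, shape = r.get(key + ':meta').decode().split('|')
        shape = tuple(int(d) for d in shape.split(','))
        return np.frombuffer(raw, dtype=np.dtype(dtype)).reshape(shape)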
1
2
1
I am working on a recommender algorithm for songs. I have a matrix of values whose cosine similarity I compute in Python (NumPy). The problem is that every time I run the program I need to recompute the similarity of every vector to every other vector. I want to store the results of the computation locally so I don't have to compute them every time. The first thing that comes to my mind is storing them in a text file, or in the database itself. Surely there's a better way, though?
Store data locally long term
-0.066568
0
0
846
21,236,425
2014-01-20T14:24:00.000
4
0
1
0
ipython,enthought
21,236,786
1
true
0
0
Right-click on any math expression > Math Settings > Scale All Math.... The setting is persistent on a per-browser basis, stored in a cookie.
1
2
0
I just found that mathematical expressions (LaTeX) displayed in the Enthought IPython Notebook are barely legible. Is there any way to customize this? I use Enthought Canopy 32-bit, academic license, on Windows 7.
LaTeX Font too Small Displayed on Enthought Canopy
1.2
0
0
890
21,236,742
2014-01-20T14:40:00.000
2
1
0
0
python,ip
21,237,260
2
false
0
0
If the IPs are not logged by ask.fm, there is not much you can do about it. And if they are logged, you probably don't need any script to extract them, as they should be presented somewhere along with the questions or separately in some list.
2
0
0
I get many anonymous questions that attack my friendships. Is there a way to get the IP addresses behind these questions with a Python script? I have a little more than basic Python knowledge, so you needn't show me complete code; just 1-5 lines or an explanation will do. I hope you'll help me!
Python Ask.fm IP of Anonymous Questions
0.197375
0
1
1,205
21,236,742
2014-01-20T14:40:00.000
0
1
0
0
python,ip
21,237,392
2
false
0
0
In addition to @Michael's answer: even if you were able to get the IP, you wouldn't be able to do much with it, since most people use dynamic IP addresses. You may want to contact ask.fm for more information, though it's very unlikely they will give it to you.
2
0
0
I get many anonymous questions that attack my friendships. Is there a way to get the IP addresses behind these questions with a Python script? I have a little more than basic Python knowledge, so you needn't show me complete code; just 1-5 lines or an explanation will do. I hope you'll help me!
Python Ask.fm IP of Anonymous Questions
0
0
1
1,205
21,237,645
2014-01-20T15:20:00.000
1
0
0
0
python,oracle,oracle11g,sqlalchemy
21,238,505
1
true
0
0
CLOB or NCLOB would be the best option; avoid splitting data across columns. What happens when you have data larger than two columns can hold? It fails again, and it makes maintenance a nightmare. I've seen people split data into rows in some databases just because the database did not support larger character datatypes (old Sybase versions). However, if your database has a datatype built for this purpose, by all means use it (see the sketch below).
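For reference, in SQLAlchemy this needs no custom type at all: a generic Text column is rendered as CLOB by the Oracle dialect (table and column names here are made up):

    from sqlalchemy import Column, Integer, Text
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Document(Base):
        __tablename__ = 'documents'
        id = Column(Integer, primary_key=True)
        body = Column(Text)  # CLOB on Oracle, no 4000-character limit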
1
0
0
I am using an Oracle database, and in a certain column I need to insert strings which in some cases are longer than 4000 characters (Oracle 11g limits Varchar2 to 4000). We are required to use Oracle 11g, and I know about the 12c extended mode. I would not like to use the CLOB datatype for performance reasons. The solution I have in mind is to split the column and write a custom SQLAlchemy datatype that writes data to the second column when a string is longer than 4000 characters. So, my questions are: Are we going to gain any significant performance boost from that (rather than using CLOB)? How should that SQLAlchemy type be implemented? Currently we are using types.TypeDecorator for custom types, but in this case we need to read/write two fields.
SQLAlchemy type containing strings larger than 4000 on Oracle using Varchar2
1.2
1
0
820
21,237,833
2014-01-20T15:28:00.000
5
0
1
0
python,numpy
21,238,076
2
true
0
0
You could cap the process's memory, but that is OS-specific. Another solution is to check the value of psutil.virtual_memory() and exit your program once it crosses a threshold, as sketched below. Though OS-independent, the second solution is not Pythonic at all; memory management is one of the things we have operating systems for.
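A sketch of the psutil approach (the 3 GB threshold mirrors the question; call the check periodically from your main loop):

    import sys
    import psutil

    LIMIT_BYTES = 3 * 1024 ** 3  # 3 GB, per the question

    def check_memory():
        used = psutil.virtual_memory().used
        if used > LIMIT_BYTES:
            sys.exit("Memory usage %.1f GB over limit, quitting"
                     % (used / 1024.0 ** 3))

On Linux you could instead have the OS enforce the cap with resource.setrlimit(resource.RLIMIT_AS, ...), which is the OS-specific route mentioned above.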
2
8
0
I have a couple of Python/NumPy programs that tend to make the PC freeze or run very slowly when they use too much memory (e.g. 3.8/4 GB); I can't even stop the scripts or move the cursor any more. Therefore, I would like the program to quit automatically when it hits a critical memory limit, e.g. 3 GB. I could not find a solution yet. Is there a Pythonic way to deal with this? I run my scripts on both Windows and Linux machines.
Quit Python program when it hits memory limit
1.2
0
0
2,186