Q_Id (int64, 2.93k to 49.7M) | CreationDate (string, length 23) | Users Score (int64, -10 to 437) | Other (int64, 0 to 1) | Python Basics and Environment (int64, 0 to 1) | System Administration and DevOps (int64, 0 to 1) | DISCREPANCY (int64, 0 to 1) | Tags (string, length 6 to 90) | ERRORS (int64, 0 to 1) | A_Id (int64, 2.98k to 72.5M) | API_CHANGE (int64, 0 to 1) | AnswerCount (int64, 1 to 42) | REVIEW (int64, 0 to 1) | is_accepted (bool, 2 classes) | Web Development (int64, 0 to 1) | GUI and Desktop Applications (int64, 0 to 1) | Answer (string, length 15 to 5.1k) | Available Count (int64, 1 to 17) | Q_Score (int64, 0 to 3.67k) | Data Science and Machine Learning (int64, 0 to 1) | DOCUMENTATION (int64, 0 to 1) | Question (string, length 25 to 6.53k) | Title (string, length 11 to 148) | CONCEPTUAL (int64, 0 to 1) | Score (float64, -1 to 1.2) | API_USAGE (int64, 1 to 1) | Database and SQL (int64, 0 to 1) | Networking and APIs (int64, 0 to 1) | ViewCount (int64, 15 to 3.72M) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
17,364,120 | 2013-06-28T11:53:00.000 | 0 | 0 | 0 | 1 | 0 | python,django,web-scraping,scraper,scraperwiki | 1 | 17,374,282 | 0 | 1 | 0 | false | 1 | 0 | Step # 1
download django-dynamic-scraper-0.3.0-py2.7.tar.gz file
Step # 2
Unzip it and change the name of the folder to:
django-dynamic-scraper-0.3.0-py2.7.egg
Step # 3
paste the folder into C:\Python27\Lib\site-packages | 1 | 0 | 0 | 0 | I am trying to make a project with django-dynamic-scraper. I have tested it on Linux and it runs properly. When I try to run the command syncdb I get this error:
/*****************************************************************************************************************************/
python : WindowsError: [Error 3] The system cannot find the path specified: 'C:\Python27\lib\site-packages\django_dynamic_scraper-0.3.0-py2.7.egg\dynamic_scraper\migrations/.'
At line:1 char:1
+ python manage.py syncdb
+ ~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (WindowsError: [...migrations/.':String) [],
RemoteException
+ FullyQualifiedErrorId : NativeCommandError
/*****************************************************************************************************************************/
The admin server runs properly with the command python manage.py runserver
Kindly guide me on how I can resolve this error | Django Dynamic Scraper project does not run on Windows even though it works on Linux | 0 | 0 | 1 | 0 | 0 | 194
17,366,579 | 2013-06-28T14:05:00.000 | 10 | 0 | 0 | 1 | 0 | python,celery | 0 | 46,843,345 | 0 | 2 | 0 | false | 0 | 0 | If you want to log everything, you can use the following command
-f celery.logs
You can also specify different log levels. For example, if you want to log only warnings and errors, add the log level like the following:
--loglevel=warning -f celery.logs | 1 | 28 | 0 | 0 | Can someone please help and tell me how to get the celery task debug details to a log file? I have a requirement to have the details of celery task logged into a .log file.
Can you please make some suggestions on how this can be done without impacting the performance of the task? | Celery Logs into file | 0 | 1 | 1 | 0 | 0 | 63,725 |
17,376,033 | 2013-06-29T02:14:00.000 | 0 | 0 | 1 | 1 | 0 | python-2.7,locking,shared-file | 0 | 17,376,105 | 0 | 2 | 0 | false | 0 | 0 | Just a thought...
Couldn't you put a 'lock' file in the same directory as the file you're trying to write to? In your distributed processes, check for this lock file. If it exists, sleep for some amount of time and try again. Likewise, when the process that currently has the file open finishes, it deletes the lock file.
So if you have in the simple case 2 processes called A and B:
Process A checks for lock file and if it doesn't exist it creates the lock file and does what it needs to with the file. After it's done it deletes this lock file.
If process A detects the lock file then that means process B has the file, so sleep and try again later... rinse, repeat. | 1 | 4 | 0 | 0 | My project requires being run on several different physical machines, which have a shared file system among them. One problem arising out of this is how to synchronize writes to a common single file. With threads, that can be easily achieved with locks; however, my program consists of processes distributed on different machines, which I have no idea how to synchronize. In theory, any way to check whether a file is currently open, or any lock-like solution, will do, but I just cannot work this out by myself. A Python way would be particularly appreciated. | is there a way to synchronize write to a file among different processes (not threads) | 1 | 0 | 1 | 0 | 0 | 1,617
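The lock-file idea described above can be sketched in a few lines. One note: checking for the file and then creating it in two steps leaves a race window, so this sketch creates the lock file with os.O_CREAT | os.O_EXCL, which fails atomically if the file already exists (be aware that this guarantee is known to be unreliable on some older NFS setups):

```python
import os
import time
import tempfile

def acquire_lock(lock_path, timeout=10.0, poll=0.1):
    """Keep trying to create the lock file until we own it or time out."""
    deadline = time.time() + timeout
    while True:
        try:
            # O_CREAT | O_EXCL fails if the file already exists, so the
            # "check and create" happens in a single atomic step.
            fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            os.close(fd)
            return True
        except OSError:
            if time.time() >= deadline:
                return False
            time.sleep(poll)

def release_lock(lock_path):
    os.remove(lock_path)

# Usage: every process wraps its writes in acquire/release.
lock = os.path.join(tempfile.mkdtemp(), "shared.csv.lock")
print(acquire_lock(lock))               # → True (we got it)
print(acquire_lock(lock, timeout=0.3))  # → False (a second process would wait)
release_lock(lock)
```

Each process would call acquire_lock before writing to the shared file and release_lock afterwards, ideally in a try/finally so a crash still releases the lock.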
17,382,053 | 2013-06-29T16:01:00.000 | 1 | 0 | 0 | 0 | 0 | python,django,sqlite | 0 | 17,382,483 | 0 | 2 | 0 | true | 1 | 0 | I'm not sure you can get at the contents of a :memory: database to treat it as a file; a quick look through the SQLite documentation suggests that its API doesn't expose the :memory: database to you as a binary string, or a memory-mapped file, or any other way you could access it as a series of bytes. The only way to access a :memory: database is through the SQLite API.
What I would do in your shoes is to set up your server to have a directory mounted with ramfs, then create an SQLite3 database as a "file" in that directory. When you're done populating the database, return that "file", then delete it. This will be the simplest solution by far: you'll avoid having to write anything to disk and you'll gain the same speed benefits as using a :memory: database, but your code will be much easier to write. | 1 | 0 | 0 | 0 | In my python/django based web application I want to export some (not all!) data from the app's SQLite database to a new SQLite database file and, in a web request, return that second SQLite file as a downloadable file.
In other words: The user visits some view and, internally, a new SQLite DB file is created, populated with data and then returned.
Now, although I know about the :memory: magic for creating an SQLite DB in memory, I don't know how to return that in-memory database as a downloadable file in the web request. Could you give me some hints on how I could achieve that? I would like to avoid writing stuff to the disk during the request. | Python: Create and return an SQLite DB as a web request result | 0 | 1.2 | 1 | 1 | 0 | 169
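A minimal sketch of the file-based variant suggested above. The table name and columns are invented for the demo; pointing tmp_dir at a ramfs/tmpfs mount keeps the temporary database out of persistent storage, and the returned bytes can be used as the body of a downloadable HTTP response:

```python
import os
import sqlite3
import tempfile

def export_subset(rows, tmp_dir=None):
    """Create a throwaway SQLite file, fill it, and return its raw bytes.

    tmp_dir can point at a ramfs/tmpfs mount so nothing touches the disk;
    in Django the returned bytes would become the body of an HttpResponse
    with a Content-Disposition: attachment header.
    """
    fd, path = tempfile.mkstemp(suffix=".sqlite", dir=tmp_dir)
    os.close(fd)
    try:
        conn = sqlite3.connect(path)
        conn.execute("CREATE TABLE export (id INTEGER, name TEXT)")
        conn.executemany("INSERT INTO export VALUES (?, ?)", rows)
        conn.commit()
        conn.close()
        with open(path, "rb") as f:
            return f.read()
    finally:
        os.remove(path)

payload = export_subset([(1, "alpha"), (2, "beta")])
print(payload[:15])  # → b'SQLite format 3' (the SQLite file header)
```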
17,396,218 | 2013-07-01T00:51:00.000 | 0 | 0 | 0 | 0 | 0 | python,autocomplete,enthought | 0 | 17,397,527 | 0 | 1 | 0 | false | 0 | 0 | Canopy currently doesn't expose the API to customize auto-completion.
But what exactly do you mean by making it behave like Sublime Text? | 1 | 1 | 0 | 0 | I want to be able to customize auto-completion in Canopy, so it behaves like Sublime Text.
Is it possible? And if so, where do I find the APIs? | Canopy: Changing the way auto-completion works | 1 | 0 | 1 | 0 | 0 | 792
17,400,805 | 2013-07-01T09:08:00.000 | 13 | 0 | 1 | 0 | 0 | python,cpython | 0 | 17,401,698 | 0 | 3 | 0 | true | 0 | 0 | Python isn't written in C. Arguably, Python is written in an esoteric English dialect using BNF.
However, all the following statements are true:
Python is a language, consisting of a language specification and a bunch of standard modules
Python source code is compiled to a bytecode representation
this bytecode could in principle be executed directly by a suitably-designed processor but I'm not aware of one actually existing
in the absence of a processor that natively understands the bytecode, some other program must be used to translate the bytecode to something a hardware processor can understand
one real implementation of this runtime facility is CPython
CPython is itself written in C, but ...
C is a language, consisting of a language specification and a bunch of standard libraries
C source code is compiled to some bytecode format (typically something platform-specific)
this platform specific format is typically the native instruction set of some processor (in which case it may be called "object code" or "machine code")
this native bytecode doesn't retain any magical C-ness: it is just instructions. It doesn't make any difference to the processor which language the bytecode was compiled from
so the CPython executable which translates your Python bytecode is a sequence of instructions executing directly on your processor
so you have: Python bytecode being interpreted by machine code being interpreted by the hardware processor
Jython is another implementation of the same Python runtime facility
Jython is written in Java, but ...
Java is a language, consisting of a spec, standard libraries etc. etc.
Java source code is compiled to a different bytecode
Java bytecode is also executable either on suitable hardware, or by some runtime facility
The Java runtime environment which provides this facility may also be written in C
so you have: Python bytecode being interpreted by Java bytecode being interpreted by machine code being interpreted by the hardware processor
You can add more layers indefinitely: consider that your "hardware processor" may really be a software emulation, or that hardware processors may have a front-end that decodes their "native" instruction set into another internal bytecode.
All of these layers are defined by what they do (executing or interpreting instructions according to some specification), not how they implement it.
Oh, and I skipped over the compilation step. The C compiler is typically written in C (and getting any language to the stage where it can compile itself is traditionally significant), but it could just as well be written in Python or Java. Again, the compiler is defined by what it does (transforms some source language to some output such as a bytecode, according to the language spec), rather than how it is implemented. | 1 | 4 | 0 | 0 | From what I know, CPython programs are compiled into intermediate bytecode, which is executed by the virtual machine. Then how does one identify, without knowing beforehand, that CPython is written in C? Isn't there some common DNA for both which can be matched to identify this? | What does it mean when people say CPython is written in C? | 0 | 1.2 | 1 | 0 | 0 | 686
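The "Python source is compiled to bytecode" step from the answer above is easy to observe from Python itself with the standard dis module (exact instruction names vary a little between CPython versions):

```python
import dis

def add(a, b):
    return a + b

# The function object carries the compiled bytecode as plain bytes;
# this is what the CPython virtual machine actually executes.
print(len(add.__code__.co_code) > 0)  # → True

# dis translates those bytes back into readable instruction names.
dis.dis(add)

opnames = [instruction.opname for instruction in dis.Bytecode(add)]
print("RETURN_VALUE" in opnames)  # → True
```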
17,412,982 | 2013-07-01T20:21:00.000 | 0 | 0 | 0 | 1 | 0 | python,ubuntu | 0 | 17,413,212 | 0 | 1 | 0 | true | 0 | 0 | The shebang line #!/usr/bin/python3 should work if sh, bash, etc. is trying to launch your script.
If it is being run from another script as python myscript.py, you'll have to find that script and get it to launch your script using python3 myscript.py | 1 | 0 | 0 | 0 | The standard Python version of Ubuntu 13.04 is Python 2.7.
I know that I can call a python script of version 3.3 by calling python3.3 or python3 in terminal instead of only "python", which starts the version 2.7...
e.g. python3 myscript.py
But now I have a version 3.3 script in the system start routine and can only give the path to the file. The system recognizes it as a Python script (via the shebang #!/usr/bin/python3).
But how do I open it with the correct version? It is opened with the standard Python install, so it won't work or even show up. | Starting a Python 3.3 script at Ubuntu startup | 0 | 1.2 | 1 | 0 | 0 | 1,070
17,414,855 | 2013-07-01T22:44:00.000 | 0 | 0 | 0 | 1 | 0 | python,windows,usb,pyusb | 0 | 17,516,390 | 0 | 1 | 0 | false | 0 | 0 | What about polling? Create a Python app that enumerates a list of attached USB devices every couple of seconds or so.
Keep a list/dictionary of your initially detected devices, and compare to that to determine what was attached/detached since your last polling iteration.
This isn't the best approach, and enumerating all the devices takes a short while, so this may not be the most CPU-efficient method. | 1 | 0 | 0 | 0 | On a Windows OS, how can I get Python to detect if anything is plugged into a specific USB location on the computer? For example "Port_#0002.Hub_#0003".
I've tried pyUSB which worked fine for detecting a specific device, but I couldn't seem to figure out how to just check a specific port/hub location for any kind of device. | How can I get Python to watch a USB port for any device? | 0 | 0 | 1 | 0 | 0 | 1,021 |
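A sketch of the polling approach from the answer. The enumeration function is deliberately left as a parameter: with pyusb you might build the identifier set from usb.core.find(find_all=True) (an assumption; the exact identifiers you get for a port/hub location depend on your backend), while the diff logic itself is independent of that:

```python
import time

def diff_devices(previous, current):
    """Compare two enumeration snapshots: (newly attached, newly detached)."""
    return current - previous, previous - current

def watch(list_devices, poll_seconds=2.0):
    """Poll forever and report attach/detach events.

    list_devices must return a set of device identifiers; it is the
    platform-specific part you would implement with pyusb or by querying
    Windows device APIs.
    """
    known = list_devices()
    while True:
        current = list_devices()
        attached, detached = diff_devices(known, current)
        for dev in attached:
            print("attached:", dev)
        for dev in detached:
            print("detached:", dev)
        known = current
        time.sleep(poll_seconds)
```

To watch one specific location, list_devices can simply filter its results down to the port/hub identifier you care about before returning the set.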
17,423,384 | 2013-07-02T10:37:00.000 | 2 | 0 | 0 | 0 | 0 | python,mysql,multithreading,python-2.7 | 0 | 17,423,440 | 0 | 1 | 0 | true | 0 | 0 | No, it does not. You have to tell the server on the other side that the connection is closed, because it can't tell the difference between "going away" and "I haven't sent my next query yet" without an explicit signal from you.
The connection can time out, of course, but it won't be closed or cleaned up without instructions from you. | 1 | 0 | 0 | 0 | I am using Python 2.7 and MySQL. I am using multi-threading and giving connections to different threads by using PooledDB. I give DB connections to different threads by
pool.dedicated_connection(). Now if a thread takes a connection from the pool and dies for some reason without closing it (i.e. without returning it to the pool), what happens to this connection?
If it lives forever, how do I return it to the pool? | Does a database connection return to the pool if a thread holding it dies? | 0 | 1.2 | 1 | 1 | 0 | 178
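The pool cannot reclaim a connection whose holding thread simply vanished, so the practical fix is to make every thread hand the connection back even when it raises. A hedged sketch assuming the DBUtils-style API from the question, where dedicated_connection() borrows a connection and close() returns it to the pool:

```python
def run_query(pool, sql, params=()):
    """Borrow a connection, use it, and always give it back to the pool."""
    conn = pool.dedicated_connection()
    try:
        cursor = conn.cursor()
        cursor.execute(sql, params)
        return cursor.fetchall()
    finally:
        # With DBUtils-style pools, close() does not destroy the
        # connection; it returns it to the pool -- and the finally
        # block runs even if the thread's work raised an exception.
        conn.close()
```

If every thread funnels its database work through a wrapper like this, a dying thread releases its connection on the way out instead of holding it forever.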
17,446,703 | 2013-07-03T11:26:00.000 | 0 | 0 | 0 | 0 | 0 | python,eclipse,openerp | 0 | 17,447,105 | 0 | 1 | 0 | false | 1 | 0 | You can create a field from a function: you have to create a record in the object 'ir.model.fields'.
If you are creating a simple field like float, char or boolean, then you have to give values for Field Name, Label and Model (the object on which you want to create the field); for a many2one or many2many field you also have to give the Object Relation.
Hope this helps | 1 | 1 | 0 | 0 | Hi, I have created a button in my custom OpenERP module. I wanted to add a function to this button to create a field. I have added the function, but how do I add the functionality for creating fields? Please help | How to create a field through a function in OpenERP? | 0 | 0 | 1 | 0 | 0 | 198
17,451,874 | 2013-07-03T15:18:00.000 | 6 | 0 | 1 | 0 | 0 | ipython,paste,ipython-magic | 0 | 17,475,225 | 0 | 3 | 0 | false | 0 | 0 | You have two options:
To edit it by hand, run %cpaste. Then you can paste it in with standard terminal options (try Ctrl-Shift-V), and edit it. Enter -- on a line to finish.
To get it as text in your code, run %paste foo. It will store the clipboard contents in foo. | 2 | 7 | 0 | 0 | When using the magic %paste in IPython, it executes the pasted code, rather than just pasting. How can I get it to just paste the copied code so that it can be edited? | When using magic %paste in IPython, how can I get it to just paste the copied code, rather than paste and execute, so that it can be edited | 0 | 1 | 1 | 0 | 0 | 3,184
17,451,874 | 2013-07-03T15:18:00.000 | 3 | 0 | 1 | 0 | 0 | ipython,paste,ipython-magic | 0 | 30,046,297 | 0 | 3 | 0 | false | 0 | 0 | There is a solution for this issue in IPython if you are not concerned with indentation:
just run %autoindent to turn automatic indentation off. | 2 | 7 | 0 | 0 | When using the magic %paste in IPython, it executes the pasted code, rather than just pasting. How can I get it to just paste the copied code so that it can be edited? | When using magic %paste in IPython, how can I get it to just paste the copied code, rather than paste and execute, so that it can be edited | 0 | 0.197375 | 1 | 0 | 0 | 3,184
17,456,233 | 2013-07-03T19:09:00.000 | 0 | 1 | 1 | 0 | 0 | python,comparison | 1 | 17,456,347 | 0 | 4 | 0 | false | 0 | 0 | From what it sounds like, what you're asking about is fuzzy search. Instead of checking string equality, you can check whether the two strings being compared have a Levenshtein distance of 1 or less. Levenshtein distance is basically a fancy way of saying how many insertions, deletions or changes it takes to get from word A to word B. This should account for small typos.
Hope this is what you were looking for. | 1 | 0 | 1 | 0 | I am working on a traffic study and I have the following problem:
I have a CSV file that contains time-stamps and license plate numbers of cars for a location, and another CSV file that contains the same thing. I am trying to find matching license plates between the two files and then find the time difference between the two. I know how to match strings, but is there a way I can find matches that are close, to detect user input errors in the license plate number?
Essentially the data looks like the following:
A = [['09:02:56','ASD456'],...]
B = [...,['09:03:45','ASD456'],...]
And I want to find the time difference between the two sightings, but if the data was entered slightly incorrectly and the license plate for B says 'ASF456', it should still catch that | Python Matching License Plates | 1 | 0.049958 | 1 | 0 | 0 | 1,137
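A self-contained sketch of the suggested approach: a plain dynamic-programming Levenshtein distance, plus a matcher that accepts plates within distance 1. The row layout follows the example in the question:

```python
def levenshtein(a, b):
    """Minimum number of single-character edits turning a into b."""
    if len(a) < len(b):
        a, b = b, a
    previous = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        current = [i]
        for j, cb in enumerate(b, 1):
            current.append(min(previous[j] + 1,                # deletion
                               current[j - 1] + 1,             # insertion
                               previous[j - 1] + (ca != cb)))  # substitution
        previous = current
    return previous[-1]

def match_sightings(a_rows, b_rows, max_distance=1):
    """Pair (time, plate) rows from A and B whose plates nearly match."""
    pairs = []
    for time_a, plate_a in a_rows:
        for time_b, plate_b in b_rows:
            if levenshtein(plate_a, plate_b) <= max_distance:
                pairs.append((plate_a, plate_b, time_a, time_b))
    return pairs

A = [("09:02:56", "ASD456")]
B = [("09:03:45", "ASF456")]
print(match_sightings(A, B))  # → [('ASD456', 'ASF456', '09:02:56', '09:03:45')]
```

Once a pair is found, the two timestamps can be parsed and subtracted to get the travel time; raising max_distance tolerates sloppier data entry at the cost of more false matches.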
17,457,460 | 2013-07-03T20:24:00.000 | 0 | 1 | 0 | 0 | 0 | python,c++,svm,libsvm | 0 | 18,509,671 | 0 | 3 | 0 | false | 0 | 0 | easy.py is a script for training and evaluating a classifier. It does a meta-training of the SVM parameters with grid.py. In grid.py there is a parameter "nr_local_worker" which defines the number of threads. You might wish to increase it (check the processor load). | 1 | 3 | 0 | 0 | I'm using Libsvm in a 5x2 cross validation to classify a very large amount of data, that is, I have 47k samples for training and 47k samples for testing in 10 different configurations.
I usually use Libsvm's easy.py script to classify the data, but it's taking so long that I've been waiting for results for more than 3 hours with nothing, and I still have to repeat this procedure 9 more times!
Does anybody know how to use libsvm faster with a very large amount of data? Do the C++ Libsvm functions work faster than the Python functions? | Large training and testing data in libsvm | 0 | 0 | 1 | 0 | 0 | 3,432
17,461,600 | 2013-07-04T03:53:00.000 | 0 | 0 | 0 | 0 | 0 | python,vtk | 0 | 25,908,567 | 0 | 2 | 0 | false | 0 | 1 | Just use vtkTexture and vtkImageData, and add your own image as a texture background to the vtkRenderer, reducing the opacity like a watermark. That's it.
I want to overlay some statistics onto a 3d VTK scene, using 2D vtkTextActors. This works fine, but the text is at times difficult to see, depending on what appears behind it in the 3D scene.
For this reason, I'd like to add a 2d, semi-transparent "box" behind my text actors to provide a darker background.
Which VTK object is appropriate for this? I've tried so far:
vtkLegendBoxActor: Not what I want, but I can use this with no text to display a semi-transparent box on screen. I cannot size it directly and I get warnings about not initialising some of the content.
vtkImageData: Tried manually creating image data and adding it to the scene; I believe it was placed within the 3d scene and not used as an overlay. If that's not the case then I couldn't get it to show at all.
vtkCornerAnnotation: Scales with window size, is fixed to a corner and the background opacity cannot be set AFAIK.
vtkTextActor: Cannot set a background color or opacity
Can anyone tell me how they might achieve what I'm after in VTK? | Semi-transparent 2d VTK text background | 0 | 0 | 1 | 0 | 0 | 1,042 |
17,469,330 | 2013-07-04T11:38:00.000 | 1 | 0 | 0 | 0 | 0 | python,xml,postgresql,openerp | 0 | 17,473,961 | 0 | 1 | 0 | true | 1 | 0 | This is not a generally accepted way of doing customization in OpenERP. Usually, you should make a custom module that implements your customization when installed on the OpenERP server installation.
Are you using Windows or Linux? The concept here is to move all of the server addons files to the upsite server, including a dump of the database which can be restored on the upsite server.
Here's how.
First click the Manage databases at the login screen,
Do a backup database and save the generated dump file.
Install OpenERP on the upsite server (major versions must match).
Copy the server addons folder, and upload to the upsite server's addon directory.
Restart openerp service.
Then restore the dump file from your backup location.
This is basically how you can mirror a "customized" openerp installation across servers.
Hope this helps. | 1 | 0 | 0 | 0 | I installed OpenERP V7 on my local machine. I made modifications in the CSS. I also removed some menus, changed the labels of some windows and changed the position of some menus (one after the other, in the order decided by the customer).
The required work is done and runs well locally. Now I'm looking for a way to move my work to the server while keeping the changes, knowing that I worked directly through the OpenERP interface.
Does someone have an idea? | After working on a local server, how to move OpenERP to a remote server? | 0 | 1.2 | 1 | 0 | 1 | 2,217
17,484,086 | 2013-07-05T08:09:00.000 | 0 | 0 | 0 | 1 | 0 | linux,python-2.7,wifi,wpa | 1 | 17,960,144 | 0 | 1 | 0 | true | 0 | 0 | Well, a not-so-straightforward (yet the only possible) way to go about fulfilling your needs would be initiating a four-way handshake with the AP.
Since you're coding in Python, Scapy would be your best option for crafting EAPOL message packets.
You'll have to know the structure of the EAPOL packets though, and fully implement it in your code.
You'll also have to recode, in Python, the functions for key generation, most (if not all) of which are PRFs (Pseudo-Random Functions); alternatively, you could import ready-compiled DLLs to do the encoding for you.
However, it would be enough to manage only the first 3 messages from the four-way handshake:
If, after several connection attempts, the AP doesn't send the 3rd key message, then the MIC (Message Integrity Check) from the STA didn't match the one generated by the AP, and the password is thus invalid.
Otherwise, it is.
Note: wpa_supplicant follows the same procedure for authentication and connection, however it continues on for obtaining extra information like IP address and what not... That's why I said it's the only possible way. | 1 | 1 | 0 | 0 | I'm trying to validate the user's input of SSID and WPA Passphrase for a WPA connection. My program is a Python program running on an embedded Linux platform. I can validate an Access Point with SSID exists by parsing the output of a iwlist scan subprocess. Validating the Passphrase, however, is less straight forward. So far, the only solution I've come up with is to parse the output of
wpa_supplicant -Dwext -iwlan0 -c/tmp/wpa_supplicant.conf
looking for
"pre-shared key may be incorrect"
or the kernel message
"OnDeAuth Reason code(15)"
(which means WLAN_REASON_4WAY_HANDSHAKE_TIMEOUT according to the wpa_supplicant source).
Interpreting a handshake timeout as an invalid Passphrase seems plain wrong. Besides that, that approach requires waiting for some output from a subprocess and assumes the absence of error messages means the Passphrase is valid.
Googling around this just returns me a lot of questions and advice on how to hack a WPA connection! There's no wpa_cli or iwevent in the yum repository for my target platform and I'm unsure how to go about getting a third-party python package running on my target.
Question: What's the simplest way of validating the WiFi WPA Passphrase? | How to programmatically validate a WPA passphrase on Linux? | 0 | 1.2 | 1 | 0 | 0 | 1,144
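For what it's worth, the output-scanning heuristic from the question can at least be isolated into a small, testable function. The first marker string is taken from the question itself; the second is an assumption, since wpa_supplicant's exact wording varies between versions, and as noted above, treating "no error seen" as a valid passphrase remains only a heuristic:

```python
import subprocess

# Marker strings scanned for in wpa_supplicant's output. The first comes
# straight from the question; the second is an assumption about wording.
ERROR_MARKERS = (
    "pre-shared key may be incorrect",
    "4-Way Handshake failed",
)

def passphrase_rejected(output):
    """Heuristic: did wpa_supplicant complain about the key?"""
    return any(marker in output for marker in ERROR_MARKERS)

def try_connect(conf_path="/tmp/wpa_supplicant.conf", timeout=15):
    """Run wpa_supplicant in the foreground briefly and scan its output.

    Note: communicate(timeout=...) needs Python 3.3+; on the question's
    Python 2.7 you would kill the process from a threading.Timer instead.
    """
    proc = subprocess.Popen(
        ["wpa_supplicant", "-Dwext", "-iwlan0", "-c" + conf_path],
        stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    try:
        out, _ = proc.communicate(timeout=timeout)
    except subprocess.TimeoutExpired:
        proc.kill()
        out, _ = proc.communicate()
    return not passphrase_rejected(out.decode("utf-8", "replace"))
```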
17,486,578 | 2013-07-05T10:23:00.000 | 1 | 0 | 1 | 0 | 0 | python,setup.py,pypi | 0 | 17,486,777 | 0 | 7 | 0 | false | 0 | 0 | Yes, one zip file/egg can provide multiple modules, so you can combine them into one file. I'm however highly skeptical of that being a good idea. You still need to install that zip file, and it may still clash with other already-installed versions, etc.
So the first question to ask is what the aim is. Why do you want just one file? Is it for ease of install, or ease of distribution, or what?
Having just one file will not really make the install easier, there are other, better ways. You can let the install download and install the dependencies automatically, that's easy to do.
And having them in one zip file still means you need to expand that zip file and run setup.py, which isn't very user-friendly.
So having just one file doesn't really solve many problems, so the question is which problem you are trying to solve. | 1 | 44 | 0 | 0 | It would be convenient when distributing applications to combine all of the eggs into a single zip file so that all you need to distribute is a single zip file and an executable (some custom binary that simply starts, loads the zip file's main function and kicks python off or similar).
I've seen some talk of doing this online, but no examples of how to actually do it.
I'm aware that you can (if it is zip safe) convert eggs into zip files.
What I'm not sure about is:
Can you somehow combine all your eggs into a single zip file? If so, how?
How would you load and run code from a specific egg?
How would you ensure that the code in that egg could access all the dependencies (ie. other eggs in the zip file)?
People ask this sort of thing a lot and get answers like: use py2exe. Yes, I get it, that's one solution. It's not the question I'm asking here though... | How can you bundle all your python code into a single zip file? | 0 | 0.028564 | 1 | 0 | 0 | 43,916
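One concrete mechanism that helps with the questions above: since Python 2.6 the interpreter can execute a zip file directly if it contains a __main__.py, and the zip root is placed on sys.path, so bundled pure-Python modules can import each other. A minimal sketch (the module contents are invented for the demo):

```python
import os
import subprocess
import sys
import tempfile
import zipfile

def build_bundle(zip_path, modules):
    """Write pure-Python modules into one zip that `python bundle.zip` can run.

    modules maps archive names (e.g. "helper.py", "__main__.py") to source
    text; because the zip root goes on sys.path, __main__.py can import the
    bundled modules like any others.
    """
    with zipfile.ZipFile(zip_path, "w") as zf:
        for name, source in modules.items():
            zf.writestr(name, source)

workdir = tempfile.mkdtemp()
bundle = os.path.join(workdir, "app.zip")
build_bundle(bundle, {
    "helper.py": "def greet():\n    return 'hello from inside the zip'\n",
    "__main__.py": "import helper\nprint(helper.greet())\n",
})
out = subprocess.check_output([sys.executable, bundle])
print(out.decode().strip())  # → hello from inside the zip
```

This only covers pure-Python code; compiled C extensions inside eggs cannot be imported from within a zip, which is one reason bundling arbitrary eggs into a single file stays hard.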
17,487,208 | 2013-07-05T10:54:00.000 | 1 | 0 | 1 | 0 | 0 | python,queue | 0 | 17,487,423 | 0 | 2 | 0 | false | 0 | 0 | Sure. Create a multiprocessing.Pool, which will by default spawn one process per core. Then use the original process to run an HTTP service or something else that accepts jobs via some protocol. The main process then listens for new requests and submits them to the pool for async processing. | 1 | 1 | 0 | 0 | I need to create a Python server that can accept multiple job requests. From those requests, it processes each job one at a time, but the server can still accept new jobs while processing a task.
Does anyone have any suggestions on how to do this?
Thanks | Python Server, Job Queue, Launch Multiprocessing Job | 0 | 0.099668 | 1 | 0 | 0 | 455 |
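A minimal sketch of that layout: apply_async returns immediately, so the process that accepts jobs is never blocked while earlier jobs run. handle_job is a stand-in for the real work, and a real server would keep the pool open and keep submitting instead of draining it:

```python
import multiprocessing

def handle_job(job):
    """Stand-in for the real per-job work."""
    return job * job

def serve(jobs, workers=None):
    """Submit jobs asynchronously; the caller stays free to accept more.

    With workers=1 the jobs are processed strictly one at a time, as the
    question asks, while submission still never blocks.
    """
    pool = multiprocessing.Pool(processes=workers)
    results = [pool.apply_async(handle_job, (job,)) for job in jobs]
    # A real server would keep the pool open and keep submitting here;
    # we drain it so the demo terminates.
    pool.close()
    pool.join()
    return [r.get() for r in results]

if __name__ == "__main__":
    print(serve([1, 2, 3]))  # → [1, 4, 9]
```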
17,507,004 | 2013-07-06T20:49:00.000 | 3 | 1 | 0 | 0 | 0 | c++,python,game-engine,language-interoperability | 0 | 17,507,117 | 0 | 2 | 0 | true | 0 | 1 | In the specific case of python, you have basically three options (and this generally applies across the board):
Host python in C++: From the perspective of the C++ programme, the python interpreter is a C library. On the python side, you may or may not need to use something like ctypes to expose the C(++) api.
Python uses C++ code as DLLs/SOs - C++ code likely knows nothing of python, python definitely has to use a foreign function interface.
Interprocess communication - basically, two separate processes run, and they talk over a socket. These days you'd likely use some kind of web services architecture to accomplish this. | 1 | 8 | 0 | 1 | I know that many large-scale applications such as video games are created using multiple langages. For example, it's likely the game/physics engines are written in C++ while gameplay tasks, GUI are written in something like Python or Lua.
I understand why this division of roles is done; use lower-level languages for tasks that require extreme optimization, tweaking, efficiency and speed, while using higher-level languages to speed up production time, reduce nasty bugs ect.
Recently, I've decided to undertake a larger personal project and would like to divy-up parts of the project similar to above. At this point in time, I'm really confused about how this interoperability between languages (especially compiled vs interpreted) works.
I'm quite familiar with the details of going from ANSCII code test to loading an executable, when written in something like C/C++. I'm very curious at how something like a video game, built from many different languages, works. This is a large/broad question, but specifically I'm interested in:
How does the code-level logic work? I.e. how can I call Python code from a C++ program? Especially since they don't support the same built-in types?
What does the program image look like? From what I can tell, a video game is running in a single process, so what does the runtime image look like when running a C/C++ program that calls a Python function?
If calling code from an interpreted language from a compiled program, what are the sequence of events that occur? I.e If I'm inside my compiled executable, and for some reason have a call to an interpreted language inside a loop, do I have to wait for the interpreter every iteration?
I'm actually having a hard time finding information on what is happening at the machine level, so any help would be appreciated. Although I'm curious in general about the interoperation of software, I'm specifically interested in C++ and Python interaction.
Thank you very much for any insight, even if it's just pointing me to where I can find more information. | How interoperability works | 1 | 1.2 | 1 | 0 | 0 | 1,114 |
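The second option listed in the answer above (compiled code exposed as a shared library, called from Python through a foreign function interface) can be demonstrated with nothing but the C math library and ctypes. The key point is that argument and return types must be declared explicitly, because the C side knows nothing about Python's built-in types (the library names below are platform assumptions; on Windows a game could load its engine DLL similarly with ctypes.WinDLL):

```python
import ctypes
import ctypes.util

# Locate and load the C math library; a game engine compiled to a
# shared library would be loaded the same way.
libm_name = ctypes.util.find_library("m") or "libm.so.6"  # Linux fallback
libm = ctypes.CDLL(libm_name)

# Declare the C signature double sqrt(double): the two languages share
# no built-in types, so ctypes must be told how to convert at the call.
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

print(libm.sqrt(2.0))  # → 1.4142135623730951
```

Each call crosses the FFI boundary once, converting a Python float to a C double and back; that per-call overhead is why engines keep the hot loops on the C++ side and only expose coarse-grained entry points to the scripting layer.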
17,511,310 | 2013-07-07T10:26:00.000 | 1 | 1 | 1 | 0 | 0 | python,python-3.x,pyobject | 0 | 17,511,486 | 0 | 1 | 0 | false | 0 | 0 | Do you really want a (1) PyObject, as in what Python calls object, or (2) an object of some subtype? That you "need an object with functions attached" seems to indicate you want either methods or attributes. That needs (2) in any case. I'm no expert on the C API, but generally you'd define your own PyTypeObject, then create an instance of that via PyObject_New (refcount and type field are initialized, other fields you might add are not). | 1 | 1 | 0 | 0 | I wonder how I can create a PyObject in C++ and then return it to Python.
Sadly the documentation is not very explicit about it.
There is no PyObject_Create so I wonder whether allocating sizeof(PyObject) via PyObject_Malloc and initializing the struct is sufficient.
For now I only need an object with functions attached. | Create a PyObject with attached functions and return to Python | 0 | 0.197375 | 1 | 0 | 0 | 360 |
17,517,961 | 2013-07-08T01:16:00.000 | 0 | 0 | 0 | 0 | 0 | python,tkinter | 0 | 17,518,474 | 0 | 1 | 0 | false | 0 | 1 | No, you do not have to destroy the screen and redraw it. You can easily insert widgets into the current window when a button is clicked. There's nothing special about being run from a button click -- the code is the same as your initialization code. | 1 | 0 | 0 | 0 | I am aiming to change a part of the Tkinter screen when a button is clicked. Do I have to destroy the screen then redraw it (to create the illusion that only a part is being changed)? Or can I keep the button there and somehow only change one part (like the graphics)? Thanks! | Changing the Tkinter screen when a button is clicked? | 0 | 0 | 1 | 0 | 0 | 881
17,522,492 | 2013-07-08T08:53:00.000 | 5 | 1 | 1 | 0 | 1 | python,oop,scientific-computing | 0 | 17,598,977 | 0 | 2 | 0 | true | 0 | 0 | Your general idea would work, here are some more details that will hopefully help you to proceed:
Create an abstract Data class, with some generic methods like load, save, print etc.
Create concrete subclasses for each specific form of data you are interested in. This might be task-specific (e.g. data for natural language processing) or form-specific (data given as a matrix, where each row corresponds to a different observation)
As you said, create an abstract Analysis class.
Create concrete subclasses for each form of analysis. Each concrete subclass should override a method process which accepts a specific form of Data and returns a new instance of Data with the results (if you think the form of the results would be different of that of the input data, use a different class Result)
Create a Visualization class hierarchy. Each concrete subclass should override a method visualize which accepts a specific instance of Data (or Result if you use a different class) and returns some graph of some form.
I do have a warning: Python is abstract, powerful and high-level enough that you don't generally need to create your own OO design -- it is always possible to do what you want with minimal code using numpy, scipy, and matplotlib, so before you start doing the extra coding, be sure you need it :) | 1 | 8 | 0 | 0 | As a biology undergrad I'm often writing Python software in order to do some data analysis. The general structure is always:
There is some data to load, perform analysis on (statistics, clustering...) and then visualize the results.
Sometimes for a same experiment, the data can come in different formats, you can have different ways to analyses them and different visualization possible which might or not depend of the analysis performed.
I'm struggling to find a generic "pythonic" and object oriented way to make it clear and easily extensible. It should be easy to add new type of action or to do slight variations of existing ones, so I'm quite convinced that I should do that with oop.
I've already done a Data object with methods to load the experimental data. I plan to create inherited class if I have multiple data source in order to override the load function.
After that... I'm not sure. Should I make an Analysis abstract class with child classes for each type of analysis (and use their attributes to store the results), and do the same for Visualization, with a general Experiment object holding the Data instance and the multiple Analysis and Visualization instances? Or should the visualizations be functions that take an Analysis and/or Data object(s) as parameter(s) in order to construct the plots? Is there a more efficient way? Am I missing something? | Object-oriented scientific data processing, how to cleverly fit data, analysis and visualization in objects? | 1 | 1.2 | 1 | 0 | 0 | 2,039
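The class hierarchy described in the answer can be sketched as a minimal runnable skeleton (the class and method names below are illustrative choices, not from the question):

```python
from abc import ABC, abstractmethod

class Data:
    """Holds experimental values; subclasses would override loading
    per data source (CSV, instrument output, ...)."""
    def __init__(self, values):
        self.values = values

class Analysis(ABC):
    @abstractmethod
    def process(self, data):
        """Consume a Data instance and return a new Data with the results."""

class MeanAnalysis(Analysis):
    """One concrete analysis: reduce the values to their mean."""
    def process(self, data):
        return Data([sum(data.values) / len(data.values)])
```

A Visualization hierarchy would follow the same pattern, with a visualize(data) method per concrete plot type.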
17,533,094 | 2013-07-08T18:02:00.000 | 3 | 0 | 1 | 0 | 0 | python,installation,package,enthought,canopy | 0 | 45,376,116 | 0 | 5 | 0 | false | 0 | 0 | Sometimes installing packages can be hard in Enthought Canopy. You can install any Python package (e.g. pip install mrjob) from Canopy's own command prompt:
Go to tools tab on the canopy editor ,
Left click on the canopy command prompt ,
Finally pip install <package name> and hit Enter key | 1 | 4 | 0 | 0 | I'm really new to coding, programming, Python, and just computers in general, so I need some help with Canopy. I've been having pretty consistent troubles installing any packages to Canopy; some stuff is in the internal package manager,but whenever it isn't, it's really confusing. I guess I'll list a specific installation.
I'm trying to install "pywcs" (link provided below) to my Win7 64-bit machine. I have Cygwin if that helps at all. I do not know how to go about this; the stuff I found online is pretty confusing, and Cygwin easy_install (filename) never seems to work. Any step-by-step solutions? | Installing a package to Canopy | 0 | 0.119427 | 1 | 0 | 0 | 14,628 |
17,535,252 | 2013-07-08T20:18:00.000 | 2 | 0 | 0 | 0 | 0 | python,wxpython | 0 | 17,535,320 | 0 | 1 | 0 | true | 0 | 1 | Just keep track of the line that you are on/or has been updated and call EnsureVisible on the text control. (Also ensure you are using Append rather than Set to add new text).
Correction, (now I have access to the help files), I was getting mixed up with MakeCellVisible from Grid controls:
YourTextCtrl.ShowPosition(YourTextCtrl.GetLastPosition()) should do the job nicely.
Even better if you call SetInsertionPointEnd() on your text control before the text is inserted, (by using WriteText), then your problem goes away. | 1 | 0 | 0 | 0 | I have a scrollable wx.textcontrol widget that updates during the course of the program. Whenever the text is updated,the scrollbar resets to the top of the screen. I don't want that to happen, but I can't figure out how to stop it. Does any of you know? | Scroll bar problems | 0 | 1.2 | 1 | 0 | 0 | 70 |
17,536,394 | 2013-07-08T21:36:00.000 | 1 | 0 | 0 | 0 | 0 | python,numpy,machine-learning,scipy,scikit-learn | 0 | 28,424,354 | 0 | 2 | 0 | false | 0 | 0 | One way to overcome the inability of HashingVectorizer to account for IDF is to index your data into Elasticsearch or Lucene and retrieve term vectors from there, from which you can calculate TF-IDF. | 1 | 3 | 1 | 0 | TfidfVectorizer takes a lot of memory: vectorizing 470 MB of 100k documents takes over 6 GB, and if we go to 21 million documents it will not fit in the 60 GB of RAM we have.
So we went with HashingVectorizer, but we still need to know how to distribute the hashing vectorizer. fit and partial_fit do nothing, so how do we work with a huge corpus? | How can i reduce memory usage of Scikit-Learn Vectorizers? | 0 | 0.099668 | 1 | 0 | 0 | 4,523
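What makes HashingVectorizer distributable is that it is stateless (fit learns nothing). The hashing trick behind it can be shown with a small pure-Python sketch — an illustration of the idea, not scikit-learn's implementation; the md5-based hash and 1024 features are arbitrary choices:

```python
import hashlib
from collections import Counter

def hashed_counts(tokens, n_features=2 ** 10):
    # The hashing trick: map each token to a fixed-size index space
    # instead of building a growing vocabulary, so memory use does not
    # depend on corpus size. With no vocabulary to fit, document
    # chunks can be vectorized independently on any machine.
    vec = Counter()
    for tok in tokens:
        idx = int(hashlib.md5(tok.encode()).hexdigest(), 16) % n_features
        vec[idx] += 1
    return vec
```

Because no state is shared between calls, the 21 million documents could be split into chunks and vectorized in parallel; IDF weights, which do need a global view, can be computed in a separate pass over document frequencies.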
17,539,441 | 2013-07-09T03:31:00.000 | 0 | 0 | 0 | 0 | 0 | python-2.7,wxpython | 0 | 17,550,349 | 0 | 2 | 0 | false | 0 | 1 | You would use a wx.grid.Grid in most cases. You could use a wx.ListCtrl or an ObjectListView widget too. The idea is to put the widgets inside of sizers. So you could have a top level BoxSizer in vertical orientation that contains another sizer or sizers. In this case, I would create a sizer to hold the buttons and nest that sizer in the top level sizer. Then add the table (i.e. grid) to the top level sizer and you're done. | 2 | 0 | 0 | 0 | I have a frame with buttons and plotting options and wanted to include a table as well in that frame. Is there any example or suggestion on how to do it?
Thanks. | How to insert a table inside a frame? | 0 | 0 | 1 | 0 | 0 | 88 |
17,539,441 | 2013-07-09T03:31:00.000 | 0 | 0 | 0 | 0 | 0 | python-2.7,wxpython | 0 | 17,547,609 | 0 | 2 | 0 | false | 0 | 1 | The wxPython demo is full of examples of all the different controls. | 2 | 0 | 0 | 0 | I have a frame with buttons and plotting options and wanted to include a table as well in that frame. Is there any example or suggestion on how to do it?
Thanks. | How to insert a table inside a frame? | 0 | 0 | 1 | 0 | 0 | 88 |
17,546,628 | 2013-07-09T11:05:00.000 | 0 | 0 | 0 | 0 | 0 | python-3.x,sqlite,ssl,compilation,non-admin | 0 | 17,979,292 | 0 | 2 | 0 | false | 0 | 0 | I don't use that distro, but Linux Mint (it's based on Ubuntu).
In my case before the compilation of Python 3.3.2 I've installed the necessary -dev libraries:
$ sudo apt-get install libssl-dev
$ sudo apt-get install libbz2-dev
...
Then I've compiled and installed Python and those imports work fine.
Hope you find it useful
León | 1 | 1 | 0 | 0 | I want to build Python 3.3.2 from scratch on my SLE 11 (OpenSUSE).
During the compilation of Python I got the message that the modules _bz2, _sqlite and _ssl have not been compiled.
I looked for solutions with various search engines. It is often said that you have to install the -dev packages with your package management system, but I have no root access.
I downloaded the source packages of the missing libs, but I have no idea how to tell Python to use these libs. Can somebody help me? | How to build python 3.3.2 with _bz2, _sqlite and _ssl from source | 0 | 0 | 1 | 1 | 0 | 444 |
17,549,279 | 2013-07-09T13:20:00.000 | 0 | 0 | 1 | 0 | 0 | python,mobile,unity3d,nltk | 0 | 17,551,204 | 0 | 1 | 0 | false | 0 | 1 | Traditional/classic Python is implemented in C. This makes it easy to integrate into any environment that has support for compiled C.
I'm unfamiliar with Unity, but here are some general guidelines on how you would do it with standard iOS and Android. On iOS, you would just add all of the Python C source files (minus the ones that have a 'main' function) to your project in Xcode. On Android, you would add all of the C source files (again, minus the ones that have a 'main' function) to your project in the JNI part of your project and would use ndk-build to build a native shared library that would be part of your app.
Are you limited to Python? If not, you might want to have a look at Lua as it's much smaller (fewer C source files) and might be quicker to get going. | 1 | 0 | 0 | 0 | Sorry for the sort of general question, but I'm having a hard time figuring out how to start this. I'm trying to incorporate some natural language toolkit code in Python with a mobile app I'm developing in Unity. It's a very small amount of code, but it's critical for the functioning of the app.
Do I need to have the python code running on some kind of server? How would I go about doing this? I'm very new to python and mobile development.
Thanks. | How can I incorporate Python code in a mobile app? | 0 | 0 | 1 | 0 | 0 | 292 |
17,555,111 | 2013-07-09T18:01:00.000 | 1 | 0 | 0 | 0 | 0 | python,qt,pyside | 0 | 17,557,946 | 0 | 1 | 1 | true | 0 | 1 | QLabel
QTextDocument
QTextEdit
Almost all of the options above can be configured to be read-only, or even made unclickable by disabling them.
QTextStream is also a really useful class.
Hope that helps. | 1 | 0 | 0 | 0 | What is the best way to a text based output in a Qt widget? what I mean by this is... like in win RAR or some windows installers where there is a drop down arrow showing more details and it shows live text output of files modified and things of that nature. how would I go about doing that in a Qt app?
I was thinking maybe a non-editable multi-line text box... but I'm not sure, because I don't want it to be editable.
any help on this would be greatly appreciated. | how to make a Qt text output widget? | 1 | 1.2 | 1 | 0 | 0 | 590 |
17,559,967 | 2013-07-09T23:24:00.000 | 5 | 0 | 0 | 0 | 0 | python,flask,pythonanywhere | 0 | 17,579,664 | 0 | 4 | 0 | false | 1 | 0 | PythonAnywhere Dev here. You also have your access log. You can click through this from your web app tab. It shows you the raw data about your visitors. I would personally also use something like Google Analytics. However you don't need to do anything to be able to just see your raw visitor data. It's already there. | 1 | 9 | 0 | 0 | I just deployed my first ever web app and I am curious if there is an easy way to track every time someone visits my website, well I am sure there is but how? | how do I track how many users visit my website | 0 | 0.244919 | 1 | 0 | 0 | 11,232 |
17,560,658 | 2013-07-10T00:43:00.000 | 1 | 0 | 1 | 0 | 0 | python,regex | 0 | 24,908,840 | 0 | 4 | 0 | false | 0 | 0 | For detecting two or more consecutive identical letters, the regex becomes: (\w)\1+
I know for just one consecutive pair like zoo (oo), puzzle (zz), arrange (rr), it can be achieved by '(\w){2}'. But how about
two consecutive pairs: committee (ttee)
three consecutive pairs: bookkeeper (ookkee)
edit:
'(\w){2}' is actually wrong, it finds any two letters instead of a double letter pair.
My intention is to find the words that have letter pairs, not the pairs.
By 'consecutive', I mean there is no other letter between letter pairs. | python: how to find consecutive pairs of letters by regex? | 0 | 0.049958 | 1 | 0 | 0 | 12,433 |
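A hedged sketch building on the doubled-letter idea: applying a repetition count to the pair group, ((\w)\2){n}, requires n back-to-back letter pairs, which matches whole words like "committee" (n=2) and "bookkeeper" (n=3):

```python
import re

def words_with_pairs(text, n):
    # \b\w*? lazily eats the prefix, then ((\w)\2){n} demands n
    # consecutive doubled letters (e.g. "ttee"), then \w* eats the rest
    # of the word so the full word is returned.
    pattern = re.compile(r"\b\w*?((\w)\2){%d}\w*\b" % n)
    return [m.group(0) for m in pattern.finditer(text)]
```

words_with_pairs("zoo puzzle committee bookkeeper", 2) keeps "committee" and "bookkeeper"; with n=3, only "bookkeeper" survives.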
17,569,671 | 2013-07-07T17:45:00.000 | 0 | 0 | 0 | 0 | 0 | system,qpython | 0 | 31,173,913 | 0 | 3 | 0 | false | 0 | 1 | Install the latest from the Google Play store. I'm not getting this error using QPython (classic, not QPython3). | 1 | 2 | 0 | 0 | Install QPython from Google Play store
Open QPython, slide right and click "Console"
Try some code, starting with import androidhelper
u0_a98@android:/ $ python
Python 2.7.2 (default, Jun 3 2013, 20:01:13)
[GCC 4.4.3] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import androidhelper
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/storage/sdcard0/com.hipipal.qpyplus/lib/site-packages/androidhelper.py", line 43, in <module>
import sl4a
ImportError: No module named sl4a
I'm running Cyanogenmod 10.0.0, Android 4.1.2. Any idea how to import androidhelper successfully? | Why does qpython say "No module named sl4a"? | 0 | 0 | 1 | 0 | 0 | 3,999 |
17,578,630 | 2013-07-10T18:49:00.000 | 0 | 0 | 1 | 0 | 0 | python,mysql,database,multithreading | 0 | 17,578,684 | 0 | 1 | 0 | false | 0 | 0 | How should I deal with the database connection? Create it in main,
then pass it to the logging thread, or create it directly in the
logging thread?
I would perhaps configure your logging component with the class that creates the connection and let your logging component request it. This is called dependency injection, and makes life easier in terms of testing e.g. you can mock this out later.
If the logging component created the connections itself, then testing the logging component in a standalone fashion would be difficult. By injecting a component that handles these, you can make a mock that returns dummies upon request, or one that provides connection pooling (and so on).
How you handle database issues robustly depends upon what you want to happen. Firstly, make your database interactions transactional (and consequently atomic). Now: do you want your logger component to bring your system to a halt whilst it retries a write? Do you want it to buffer writes up and retry out-of-band (i.e. on another thread)? Is it mission-critical to write this data, or can you afford to lose it (e.g. abandon a bad write)? I've not provided any specific answers here, since there are so many options depending upon your requirements; the above details a few of them. | 1 | 1 | 0 | 0 | I've got a fairly simple Python program as outlined below:
It has 2 threads plus the main thread. One of the threads collects some data and puts it on a Queue.
The second thread takes stuff off the queue and logs it. Right now it's just printing out the stuff from the queue, but I'm working on adding it to a local MySQL database.
This is a process that needs to run for a long time (at least a few months).
How should I deal with the database connection? Create it in main, then pass it to the logging thread, or create it directly in the logging thread? And how do I handle unexpected situations with the DB connection (interrupted, MySQL server crashes, etc) in a robust manner? | Architechture of multi-threaded program using database | 0 | 0 | 1 | 1 | 0 | 84 |
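A minimal sketch of the injected-connection idea, using the stdlib's sqlite3 as a stand-in for MySQL (the factory, table name and None shutdown sentinel are illustrative choices, not part of the question):

```python
import queue
import sqlite3
import threading

class LogWriter:
    """Drains a queue and writes each record in its own transaction.

    The connection factory is injected, so tests can pass in a
    throwaway database instead of the real server."""

    def __init__(self, record_queue, connection_factory):
        self.records = record_queue
        self.connect = connection_factory

    def run(self):
        conn = self.connect()  # created inside the logging thread itself
        conn.execute("CREATE TABLE IF NOT EXISTS log (msg TEXT)")
        while True:
            msg = self.records.get()
            if msg is None:          # sentinel: shut down cleanly
                break
            with conn:               # atomic commit (or rollback) per write
                conn.execute("INSERT INTO log (msg) VALUES (?)", (msg,))
        conn.close()
```

For robustness against an interrupted connection or crashed server, the same injection point can return a reconnecting wrapper, and the per-write transaction keeps a failed insert from half-committing.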
17,578,668 | 2013-07-10T18:51:00.000 | 1 | 0 | 1 | 0 | 0 | python,python-2.7 | 0 | 17,578,834 | 0 | 3 | 0 | true | 0 | 0 | You can read the file line by line, then turn each line back into a list with eval or, more safely, ast.literal_eval (json.loads would only work if the lines were valid JSON, which single-quoted Python reprs are not). | 1 | 0 | 0 | 0 | I have written a list to a file; how can I get it back as the original list?
list looks like this
['82294', 'ABDUL', 'NAVAS', 'B', 'M', 'MSCS', 'CUKE', '30',
'Kasargod', 'CU', 'Kerala', 'Online', 'PG-QS-12', '15', 'June,',
'2013', '12.00', 'Noon', '-', '02.00', 'PM\n', '29']
['82262', 'ABDUL', 'SHAFWAN', 'T', 'H', 'MSCS', 'CUKE', '30',
'Kasargod', 'CU', 'Kerala', 'Online', 'PG-QS-12', '15', 'June,',
'2013', '12.00', 'Noon', '-', '02.00', 'PM\n', '29']
When I read the file, it is treated as a string, not a list.
For example, for the first list,
var[0][0] should be '82294', not the single character '.
I am a Python noob. | Get array list from file | 0 | 1.2 | 1 | 0 | 0 | 161
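A sketch of the line-by-line approach using ast.literal_eval, which turns a list's repr back into a real list without the security risk of plain eval (this assumes, as in the question, one list repr per line):

```python
import ast

def read_lists(path):
    rows = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line:
                # Turn "['82294', 'ABDUL', ...]" back into a real list.
                rows.append(ast.literal_eval(line))
    return rows
```

After this, rows[0][0] is the string '82294' rather than a single character.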
17,584,635 | 2013-07-11T03:35:00.000 | 0 | 0 | 0 | 1 | 0 | python,cmd,warnings,indefinite | 1 | 17,585,244 | 0 | 1 | 0 | false | 0 | 0 | Try opening and storing information from one file at a time? We don't have enough information to understand what is wrong with your code. We really don't have much more than "I tried to open 185 fits files" and "too many open files". | 1 | 0 | 0 | 0 | I am in deep trouble at the moment. After every letter I type on my Python
command prompt in Linux, I get an error message:
sys:1: GtkWarning: Attempting to store changes into `/u/rnayar/.recently-used.xbel', but failed: Failed to create file '/u/rnayar/.recently-used.xbel.L6ETZW': Too many open files
Hence I can type nothing on python, and the prompt is stuck.
I tried to open 185 fits files, containing some data, and feed in some of that data into an
array. I cannot abandon the command window, because I already have significant amounts of information stored on it.
Does anybody know how I can stop the error message and get it working as usual? | infinite error message python | 0 | 0 | 1 | 0 | 0 | 100 |
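The "one file at a time" suggestion amounts to making sure each handle is closed before the next one is opened, which a with block does automatically. A generic stdlib sketch of the pattern (real FITS files would be opened with whatever library raised the error, e.g. pyfits/astropy — the function names here are illustrative):

```python
def extract_all(paths, extract):
    # Each file is closed as soon as its `with` block exits, so only
    # one handle is open at a time -- 185 files no longer exhaust the
    # per-process open-file limit.
    results = []
    for path in paths:
        with open(path) as f:
            results.append(extract(f))
    return results
```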
17,585,932 | 2013-07-11T05:49:00.000 | 0 | 0 | 0 | 0 | 0 | python,windows,python-2.7,smartcard,rfid | 0 | 25,094,663 | 0 | 1 | 0 | false | 0 | 0 | If you are using a USB or serial connection to connect your card reader to the PC, you can use the DataReceived event of the SerialPort class. | 1 | 2 | 0 | 0 | I am working on a smartcard-based application with a smartcard reader. Whenever I flash the card I should get the card UID, and based on that I need to retrieve the details from a database.
To meet this need, how do I start? Do I need to create a Windows service that always runs in the background, or is there a way to detect an event from the OS, or some scheduler program?
I am able to get the UID and related data, but I need to run the program externally.
Please suggest me on this issue, Thanks in advance. | How to detect the event when smart card scanned | 1 | 0 | 1 | 0 | 0 | 422 |
17,595,066 | 2013-07-11T13:44:00.000 | 0 | 0 | 0 | 1 | 0 | python,django,manage.py | 0 | 17,599,320 | 0 | 2 | 0 | false | 1 | 0 | If your goal is to ensure the load balancer is working correctly, I suppose it's not an absolute requirement to do this in the application code. You can use a network packet analyzer that can listen on a specific interface (say, tcpdump -i <interface>) and look at the output. | 1 | 6 | 0 | 0 | I'm running a temporary Django app on a host that has lots of IP addresses. When using manage.py runserver 0.0.0.0:5000, how can the code see which of the many IP addresses of the machine was the one actually hit by the request, if this is even possible?
Or to put it another way:
My host has IP addresses 10.0.0.1 and 10.0.0.2. When runserver is listening on 0.0.0.0, how can my application know whether the user hit http://10.0.0.1/app/path/etc or http://10.0.0.2/app/path/etc?
I understand that if I was doing it with Apache I could use the Apache environment variables like SERVER_ADDR, but I'm not using Apache.
Any thoughts?
EDIT
More information:
I'm testing a load balancer using a small Django app. This app is listening on a number of different IPs and I need to know which IP address is hit for a request coming through the load balancer, so I can ensure it is balancing properly.
I cannot use request.get_host() or the request.META options, as they return what the user typed to hit the load balancer.
For example: the user hits http://10.10.10.10/foo and that will forward the request to either http://10.0.0.1/foo or http://10.0.0.2/foo - but request.get_host() will return 10.10.10.10, not the actual IPs the server is listening on.
Thanks,
Ben | Django runserver bound to 0.0.0.0, how can I get which IP took the request? | 1 | 0 | 1 | 0 | 0 | 10,672 |
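Django's dev server does not expose this directly, but the accepted socket knows: getsockname() returns the local, server-side address the client actually connected to, independent of the Host header. A stdlib sketch of the idea outside Django (the handler name and plain-text response are illustrative):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class WhichIPHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # self.connection is the accepted socket; getsockname() is the
        # local address the client connected to (not what they typed).
        local_ip = self.connection.getsockname()[0]
        body = local_ip.encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet
```

In a plain WSGI app the underlying socket is not part of the standard environ, which is why a packet analyzer (as suggested) or a thin wrapper around the server is often the pragmatic route.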
17,597,842 | 2013-07-11T15:44:00.000 | 1 | 1 | 1 | 0 | 0 | c++,python,c | 0 | 17,597,883 | 0 | 2 | 0 | false | 0 | 0 | Some files, such as .exe, .jpg, .mp3, contain a header (first few bytes of the file). You can inspect the header and infer the file type from that.
Of course, some files, such as raw text, depending on their encoding, may have no header at all. | 1 | 1 | 0 | 0 | To do a work i want to identify the type of file. But the files are without extension.The files may be txt,jpeg,mp3,pdf etc. Using c or c++ or python how can i check whether it is a jpeg or pdf or mp3 file? | how to identify the type of files having no extension? | 0 | 0.099668 | 1 | 0 | 0 | 250 |
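In Python the header check is only a few lines; the signatures below are well-known magic numbers (note that the ID3 entry only catches MP3s carrying an ID3v2 tag — raw MPEG audio frames start differently):

```python
MAGIC_NUMBERS = {
    b"\xff\xd8\xff": "jpeg",   # JPEG/JFIF
    b"%PDF": "pdf",
    b"ID3": "mp3",             # MP3 with an ID3v2 tag
    b"\x89PNG": "png",
}

def sniff_type(path):
    with open(path, "rb") as f:
        head = f.read(8)       # the signatures above all fit in 8 bytes
    for magic, name in MAGIC_NUMBERS.items():
        if head.startswith(magic):
            return name
    return "unknown"           # could be plain text or anything else
```

The same table-driven check translates directly to C or C++ by fread-ing the first few bytes and comparing with memcmp.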
17,606,646 | 2013-07-12T02:35:00.000 | 2 | 0 | 0 | 0 | 0 | python,web,cherrypy | 0 | 17,606,832 | 0 | 1 | 0 | true | 1 | 0 | Nevermind, folks. Turns out that this isn't so bad to do; it is simply a matter of doing the following:
Write a function that does what I want.
Make the function in to a custom CherryPy Tool, set to the before_handler hook.
Enable that tool globally in my config. | 1 | 1 | 0 | 0 | I am in the midst of writing a web app in CherryPy. I have set it up so that it uses OpenID auth, and can successfully get user's ID/email address.
I would like to have it set so that whenever a page loads, it checks to see if the user is logged in, and if so displays some information about their login.
As I see it, the basic workflow should be like this:
Is there a userid stored in the current session? If so, we're golden.
If not, does the user have cookies with a userid and login token? If so, process them, invalidate the current token and assign a new one, and add the user information to the session. Once again, we're good.
If neither condition holds, display a "Login" link directing to my OpenID form.
Obviously, I could just include code (or a decorator) in every public page that would handle this. But that seems very... irritating.
I could also set up a default index method in each class, which would do this and then use a (page-by-page) helper method to display the rest of the content. But this seems like a nightmare when it comes to the occasional exposed method other than index.
So, my hope is this: is there a way in CherryPy to set some code to be run whenever a request is received? If so, I could use this to have it set up so that the current session always includes all the information I need.
Alternatively, is it safe to create a wrapper around the cherrypy.expose decorator, so that every exposed page also runs this code?
Or, failing either of those: I'm also open to suggestions of a different workflow. I haven't written this kind of system before, and am always open to advice.
Edit: I have included an answer below on how to accomplish what I want. However, if anybody has any workflow change suggestions, I would love the advice! Thanks all. | Checking login status at every page load in CherryPy | 1 | 1.2 | 1 | 0 | 0 | 313 |
17,627,389 | 2013-07-13T05:51:00.000 | 0 | 1 | 0 | 0 | 0 | python,amazon-web-services,boto | 0 | 17,630,560 | 0 | 1 | 0 | true | 1 | 0 | Currently, there is no API for doing this. You have to log into your billing preference page and set it up there. I agree that an API would be a great feature to add. | 1 | 0 | 0 | 0 | Was wondering if anyone knew how if it was possible to enable programmatic billing for Amazon AWS through the API? I have not found anything on this and I even went broader and looked for billing preferences or account settings through API and still had not luck. I assume the API does not have this functionality but I figured I would ask. | Enable programmatic billing for Amazon AWS through API (python) | 0 | 1.2 | 1 | 0 | 1 | 356 |
17,637,243 | 2013-07-14T06:52:00.000 | 0 | 0 | 1 | 0 | 0 | python,python-3.x | 0 | 17,637,322 | 0 | 2 | 0 | false | 0 | 0 | Presumably you are actually using the bit values from the examples so why not just derive a dictionary from dictionary that has a new method getmasked which masks the value before looking it up... | 1 | 0 | 0 | 0 | I have a project I'm creating (in python 3.3) and I'm trying to figure out if there is a efficient (or prettier way) to do the following.
I have a function that extracts binary/hex strings like the following (groups of bits are split for example purposes only)
0000 1111 0002 0001
0000 1111 0003 0001
0000 1111 0002 0002
0000 1110 0002 0001
Now, what I want to do is to be able to pass these into a function and then fire them into a method depending on the values in the second group of bits, and the forth group of bits (that are opcodes)
eg; a hash function that will check to see if (* 1111 * 0001) matches and then return a function related to these bits.
I had the idea of using a dictionary of hash tables, however I'm not fully sure how one would make a key a mask.
While I could make a dictionary with the key 11110001 and the value the function I want to return, and then just concatting and passing in [4:8][12:16] would work, I was wondering if there was a way to make a hash function for a key. (if that makes sense) without going into a class and overriding the hash function and then passing that in.
Perhaps some form of data structure that stores regex keys and performs it on any valid input? - Whilst I could create one I'm wondering if there is some form of in-built function I'm missing (just so I don't reinvent the wheel)
Hopefully this makes sense!
Thanks for the help! | Using a hash function as a key in a dictonary in python | 1 | 0 | 1 | 0 | 0 | 99 |
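One concrete way to get the "key is a mask" effect without overriding __hash__ is to normalize each word down to just the fields you care about before the dictionary lookup. A sketch assuming the space-separated four-group format from the question (the handler functions are made up):

```python
def make_dispatcher(handlers):
    # handlers maps (second_group, fourth_group) -> function; the other
    # two groups are effectively masked out by never entering the key.
    def dispatch(word):
        groups = word.split()
        return handlers[(groups[1], groups[3])](word)
    return dispatch
```

Here dispatch("0000 1111 0002 0001") and dispatch("0000 1111 0003 0001") hit the same handler, because the first and third groups never enter the key.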
17,641,441 | 2013-07-14T16:47:00.000 | 0 | 0 | 0 | 0 | 0 | python-2.7,selenium,selenium-webdriver | 0 | 17,718,816 | 0 | 1 | 0 | true | 1 | 0 | First, what you'll need to do is navigate to the webpage with Selenium.
Then, analyse the web page source HTML which the Javascript will have rendered by you navigating to the page, to get the image URL. You can do this with Selenium or with an HTML parser.
Then you can easily download the image using wget or some other URL grabber.
You might not even need Selenium to accomplish this if when you get the page, the image is already there. If that is the case you can just use the URL grabber to get the page directly.
Let me know if you want more details or if you have any questions. | 1 | 0 | 0 | 0 | I need to take image frames from a webcam server using selenium python module.
The frames webcam is show on the selenium browser ( using http:// .....html webpage) .
The source code of the page show me it's done using some java script.
I try to use:
capture_network_traffic() and captureNetworkTraffic
Neither is working, and I don't think this is a good way to do it.
Also I don't want to use capture_screenshot functions , this take all canvas of the browser.
Any idea ?
Thank you. Regards. | get image frames webcam from server with selenium python script | 1 | 1.2 | 1 | 0 | 1 | 439 |
17,646,259 | 2013-07-15T03:16:00.000 | 0 | 1 | 0 | 0 | 0 | python,post,get,chat | 0 | 17,646,320 | 0 | 1 | 0 | false | 0 | 0 | You need a server in order to be able to receive any GET and POST requests, one of the easier ways to get that is to set up a Django project, ready in minutes and then add custom views to handle the request you want properly. | 1 | 0 | 0 | 0 | Recently, I've been attempting to figure out how I can find out what an unlabeled POST is, and send to it using Python.
The heart of the matter is that I'm attempting to make a chat bot entirely in Python in order to increase my knowledge of the language. For said bot, I'm attempting to use a chat-box that runs entirely on jQuery. The issue with this is that it has no discernible POST or GET requests associated with the chat-box submissions.
How can I figure out what POST and GET requests are being sent when a message is submitted, and somehow use that to my advantage to send custom POST or GET requests from a chat-bot?
Any help is appreciated, thanks. | Python Send to and figure out POST | 0 | 0 | 1 | 0 | 1 | 60 |
17,646,806 | 2013-07-15T04:31:00.000 | 0 | 0 | 0 | 0 | 0 | python,node.js,websocket,chat | 0 | 17,646,861 | 0 | 2 | 0 | false | 0 | 0 | Most chats will use a push notification system. It will keep track of people within a chat, and as it receives a new message to the chat, it will push it to all the people currently in it. This protects the users from seeing each other. | 2 | 1 | 0 | 0 | I've been learning about Python socket, http request/reponse handling these days, I'm still very novice to server programming, I have a question regarding to the fundamental idea behind chatting website.
In chatting website, like Omegle or Facebook's chat, how do 2 guys talk to each other? Do sockets on their own computers directly connect to each other, OR... guy A send a message to the web server, and server send this message to guy B, and vice versa?
Because in the first scenario, both users can retrieve each other's IP, and in the second scenario, since you are connecting to a server, you can not.. right?
Thanks a lot to clear this confusion for me, I'm very new and I really appreciate any help from you guys! | the idea behind chat website | 0 | 0 | 1 | 0 | 1 | 187 |
17,646,806 | 2013-07-15T04:31:00.000 | 0 | 0 | 0 | 0 | 0 | python,node.js,websocket,chat | 0 | 17,649,514 | 0 | 2 | 0 | true | 0 | 0 | Usually they both connect to the server.
There are a few reasons to do it this way. For example, imagine you want your users to see the last 10 messages of a conversation. Who's going to store this info? One client? Both? What happens if they use more than one PC/device? What happens if one of them is offline? Well, you will have to send the messages to the server, this way the server will have the conversation history stored, always available.
Another reason, imagine that one user is offline. If the user is offline you can't do anything to contact him. You can't connect. So you will have to send messages to the server, and the server will notify the user once online.
So you are probably going to need a connection to the server (storing common info, providing offline messages, keeping track of active users...).
There is also another reason, if you want two users to connect directly, you need one of them to start a server listening on a (public IP):port, and let the other connect against that ip:port. Well, this is a problem. If you use the clients->server model you don't have to worry about that, because you can open a port in the server easily, for all, without routers and NAT in between. | 2 | 1 | 0 | 0 | I've been learning about Python socket, http request/reponse handling these days, I'm still very novice to server programming, I have a question regarding to the fundamental idea behind chatting website.
In chatting website, like Omegle or Facebook's chat, how do 2 guys talk to each other? Do sockets on their own computers directly connect to each other, OR... guy A send a message to the web server, and server send this message to guy B, and vice versa?
Because in the first scenario, both users can retrieve each other's IP, and in the second scenario, since you are connecting to a server, you can not.. right?
Thanks a lot to clear this confusion for me, I'm very new and I really appreciate any help from you guys! | the idea behind chat website | 0 | 1.2 | 1 | 0 | 1 | 187 |
17,662,714 | 2013-07-15T19:54:00.000 | 0 | 0 | 0 | 0 | 0 | python,django,django-forms,formset | 0 | 17,663,244 | 0 | 2 | 0 | false | 1 | 0 | I just come up with an idea, that I can create in first form additional hidden fields, which can be synchronized with fields from second form by JavaScript. This will create small redundancy, however seems to be very easy to implement.
Is it a good idea? | 1 | 3 | 0 | 1 | I am creating simple search engine, so I have one input text field on the top of the page and buttom "search" next to it. That is all in one form, and "produce" for instance/q=search%20query.
In sidebar I have panel with another form with filters, lets say from, to. I want to have a possibility of creating link like /q=search%20query&from=20&to=50. I wonder how button from first form should gather information from second form.
I read somewhere that there is something like formsets, however I didn't find information that they can be used to something like that. | Gathering information from few forms in django | 0 | 0 | 1 | 0 | 0 | 79 |
17,664,636 | 2013-07-15T21:56:00.000 | 0 | 0 | 1 | 0 | 0 | python,regex | 0 | 17,664,727 | 0 | 4 | 0 | false | 0 | 0 | If you want just the numbers, use the character class r'[0-9]+' (or equivalently r'\d+'). With re.findall that will give you the separate sequences of digits from the input string. | 1 | 0 | 0 | 0 | A string "forum/123/topic/4567".
How can I edit a regular expression to get '123' and '4567' separately?
I have tried lots of methods on the Internet, but nothing works. | how to get a part from a string with regular expression in python | 0 | 0 | 1 | 0 | 0 | 98 |
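A character class such as r'[0-9]+' (or the shorthand r'\d+') matches each run of digits, and re.findall returns the runs separately:

```python
import re

def path_ids(path):
    # "forum/123/topic/4567" -> ["123", "4567"]
    return re.findall(r"[0-9]+", path)
```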
17,665,330 | 2013-07-15T23:01:00.000 | 5 | 1 | 1 | 0 | 0 | python,packaging | 0 | 17,722,381 | 0 | 1 | 0 | false | 0 | 0 | It is not required, but recommended, to include documentation as well as unit tests in the package.
Regarding documentation:
Old-fashioned or better to say old-school source releases of open source software contain documentation, this is a (de facto?) standard (have a look at GNU software, for example). Documentation is part of the code and should be part of the release, simply because once you download the source release you are independent. Ever been in the situation where you've been on a train somewhere, where you needed to have a quick look into the documentation of module X but didn't have internet access? And then you relievedly realized that the docs are already there, locally.
Another important point in this regard is that the documentation that you bundle together with the code for sure applies to the code version. Code and docs are in sync.
One more thing especially regarding Python: you can write your docs using Sphinx and then build beautiful HTML output based on the documentation source in the process of installing the package. I have seen various Python packages doing exactly this.
Regarding tests:
Imagine the tests are bundled in the source release and are easy to be run by the user (you should document how to do this). Then, if the user observes a problem with your code which is not so easy to track down, he can simply run the unit tests in his environment and see if at least those are passing. If not, you've probably made a wrong assumption when specifying the behavior of your code, which is good to know about. What I want to say is: it can be very good for you as a developer if you make it very simple for the user to execute unit tests. | 1 | 8 | 0 | 1 | So, I've released a small library on pypi, more as an exercise (to "see how it's done") than anything else.
I've uploaded the documentation on readthedocs, and I have a test suite in my git repo.
Since I figure anyone who might be interested in running the test will probably just clone the repo, and the doc is already available online, I decided not to include the doc and test directories in the released package, and I was just wondering if that was the "right" thing to do.
I know answers to this question will be rather subjective, but I felt it was a good place to ask in order to get a sense of what the community considers to be the best practice. | Releasing a python package - should you include doc and tests? | 0 | 0.761594 | 1 | 0 | 0 | 384 |
17,667,871 | 2013-07-16T04:11:00.000 | 2 | 1 | 0 | 1 | 1 | python,linux,operating-system,profiling | 0 | 17,770,525 | 0 | 1 | 0 | false | 0 | 0 | The Dstat source-code includes a few sample programs using Dstat as a library. | 1 | 2 | 0 | 0 | Is it possible, on a linux box, to import dstat and use it as an api to collect OS metrics and then compute stats on them?
I have downloaded the source and tried to collect some metrics, but the program seems to be optimized for command line usage.
Any suggestions as to how to get my desired functionality either using Dstat or any another library? | DSTAT as a Python API ? | 1 | 0.379949 | 1 | 0 | 0 | 468 |
17,676,081 | 2013-07-16T12:05:00.000 | 0 | 0 | 1 | 0 | 0 | python,user-interface,frontend | 0 | 17,677,079 | 0 | 1 | 0 | false | 0 | 0 | A simple, probably naïve way could be to structure your CLI program such that its main function accepts your command line arguments, so that you could import it and call it with the options set in the GUI.
I've never tried it; my guess is that it could work with simple "pure" CLI programs (i.e., you run it, it does its job, and only then prints its output), but it could get unwieldy with an interactive program that needs to prompt the user, or with a lot of output. | 1 | 2 | 0 | 0 | I've been writing command line applications (mainly in Python) for quite a while now, and I've also been doing a bit of GUI programming using (Py)Qt. In the GUI programs I've written, the program's logic and the GUI were often quite integrated. I am now wondering, however, how I could write a GUI front end for the pure command line programs which I've written. Or in other words: how do I write a command line program so that a GUI could be developed completely separately from it?
Although I am most interested in Python implementations I think the answer could be pretty general. | How to write a command line program which can be accessed by a separate frontend? | 0 | 0 | 1 | 0 | 0 | 504 |
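A minimal sketch of the pattern the answer suggests: keep the core logic a plain function of its arguments, with a thin CLI wrapper around it, so a GUI can import and call the same function directly. All names here are illustrative:

```python
import sys

def run(args):
    """Core logic: a pure function of its arguments, returning output as a string."""
    verbose = "-v" in args
    targets = [a for a in args if not a.startswith("-")]
    lines = ["processing %s" % t for t in targets]
    if verbose:
        lines.append("%d target(s) done" % len(targets))
    return "\n".join(lines)

def main(argv=None):
    """CLI wrapper: the only place that touches sys.argv or prints."""
    if argv is None:
        argv = sys.argv[1:]
    print(run(argv))

if __name__ == "__main__":
    main()
```

A GUI front end would then simply call `run([...])` with options collected from its widgets and display the returned string, never going through the command line at all.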
17,678,620 | 2013-07-16T14:00:00.000 | 5 | 1 | 0 | 1 | 0 | python-2.7,posix,popen,eof | 0 | 17,712,430 | 0 | 1 | 0 | true | 0 | 0 | EOF isn't really a signal that you can raise, it's a per-channel exceptional condition. (Pressing Ctrl+D to signal end of interactive input is actually a function of the terminal driver. When you press this key combination at the beginning of a new line, the terminal driver tells the OS kernel that there's no further input available on the input stream.)
Generally, the correct way to signal EOF on a pipe is to close the write channel. Assuming that you created the Popen object with stdin=PIPE, it looks like you should be able to do this. | 1 | 2 | 0 | 0 | I'm trying to get Python to send the EOF signal (Ctrl+D) via Popen(). Unfortunately, I can't find any kind of reference for Popen() signals on *nix-like systems. Does anyone here know how to send an EOF signal like this? Also, is there any reference of acceptable signals to be sent? | Trying to send an EOF signal (Ctrl+D) signal using Python via Popen() | 1 | 1.2 | 1 | 0 | 1 | 3,348 |
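A short sketch of that approach, using `cat` as a stand-in for any program that reads its input to end-of-stream (assumes a POSIX system):

```python
from subprocess import Popen, PIPE

# 'cat' echoes stdin until it sees EOF -- a handy stand-in for any
# program that reads input to end-of-stream
p = Popen(["cat"], stdin=PIPE, stdout=PIPE)
p.stdin.write(b"hello\n")
p.stdin.close()        # closing the write end delivers EOF, like Ctrl+D
out = p.stdout.read()
p.wait()
print(out)             # b'hello\n'
```

Note that `p.communicate(input)` does the same close-after-write dance for you and avoids deadlocks with large outputs.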
17,686,939 | 2013-07-16T21:04:00.000 | 1 | 1 | 0 | 0 | 0 | python,amazon,chef-infra,boto,aws-opsworks | 0 | 17,691,484 | 0 | 2 | 0 | false | 1 | 0 | You can use knife-bootstrap; this can be one way to do it. You can use the AWS SDK to do most of it:
Launch an instance
Add a public IP (if it's not in a VPC)
Wait for instance to come back online
Use knife bootstrap to supply the script, set up chef-client, and update the system
Then use chef cookbook to setup your machine | 1 | 0 | 0 | 0 | I need to create an application that will do the following:
Accept request via messaging system (Done)
Process request and determine what script and what type of instance is required for the job (Done)
Launch an EC2 instance
Upload custom script's (probably from github or may be S3 bucket)
Launch a script with a given argument.
The question is: what is the most efficient way to do steps 3, 4 and 5? Don't get me wrong, right now I'm doing the same thing with a script that does all of this:
launch instance,
use user_data to download necessary dependencies
then SSH into the instance and launch a script
My question is really: is that the only option how to handle this type of work? or may be there is an easy way to do this?
I was looking at OpsWorks, and I'm not sure if it is the right thing for me. I know I can do steps 3 and 4 with it, but how about the rest?:
Launch a script with a given argument
Trigger OpsWorks to launch an instance when a request comes in
By the way I'm using Python, boto to communicate with AWS services. | Do I need to SSH into EC2 instance in order to start custom script with arguments, or there are some service that I don't know | 1 | 0.099668 | 1 | 0 | 1 | 218 |
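As an illustration of steps 3-5, one common alternative to SSHing in is to bake the "download a script and run it with an argument" part into a user_data bootstrap script that the instance executes on first boot. The sketch below only builds such a script; actually launching the instance with it would go through boto (e.g. run_instances with the user_data keyword), which is not shown, and the URL, file names, and argument are all made up:

```python
def build_user_data(script_url, arg):
    """Cloud-init style shell script: download a worker script and run it."""
    return "\n".join([
        "#!/bin/bash",
        "set -e",
        "curl -o /tmp/job.py %s" % script_url,
        "python /tmp/job.py %s" % arg,
    ])

script = build_user_data("https://example-bucket.s3.amazonaws.com/job.py", "--size=large")
print(script)
```

With this shape, no inbound SSH connection is needed at all: the instance pulls its own work down and starts it, and the only per-job variation is the argument you interpolate.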
17,709,153 | 2013-07-17T20:01:00.000 | 2 | 0 | 1 | 0 | 0 | python,user-interface,pyqt | 0 | 17,709,406 | 0 | 1 | 1 | false | 0 | 1 | A reasonably good rule of thumb is that if what you are doing needs more than 20 lines of code it is worth considering using an object oriented design rather than global variables, and if you get to 100 lines you should already be using classes. The purists will probably say never use globals but IMHO for a simple linear script it is probably overkill.
Be warned that you will probably get a lot of answers expressing horror that you are not already.
There are some really good (and some of them free) books that introduce you to object oriented programming in Python; a quick Google search should provide the help you need.
Added Comments to the answer to preserve them:
So at 741 lines, I'll take that as a yes to OOP:) So specifically on the data class. Is it correct to create a new instance of the data class 20x per second as data strings come in, or is it more appropriate to append to some data list of an existing instance of the class? Or is there no clear preference either way? – TimoB
I would append/extend your existing instance. – seth
I think I see the light now. I can instantiate the data class when the "start data" button is pressed, and append to that instance in the subsequent thread that does the serial reading. THANKS! – TimoB | 1 | 1 | 0 | 1 | I have a program completed that does the following:
1)Reads formatted data (a sequence of numbers and associated labels) from serial port in real time.
2)Does minor manipulations to data.
3)plots data in real time in a gui I wrote using pyqt.
4)Updates data stats in the gui.
5)Allows post analysis of the data after collection is stopped.
There are two dialogs (separate classes) that are called from within the main window in order to select certain preferences in plotting and statistics.
My question is the following: Right now my data is read in and declared as several global variables that are appended to as data comes in 20x per second or so - a 2d list of values for the numerical values and 1d lists for the various associated text values. Would it be better to create a class in which to store data and its various attributes, and then to use instances of this data class to make everything else happen - like the plotting of the data and the statistics associated with it?
I have a hunch that the answer is yes, but I need a bit of guidance on how to make this happen if it is the best way forward. For instance, would every single datum be a new instance of the data class? Would I then pass them one by one or as a list of instances to the other classes and to methods? How should the passing most elegantly be done?
If I'm not being specific enough, please let me know what other information would help me get a good answer. | python program structure and use of global variables | 0 | 0.379949 | 1 | 0 | 0 | 184 |
17,709,751 | 2013-07-17T20:37:00.000 | 0 | 0 | 1 | 0 | 0 | python,datetime | 0 | 17,709,832 | 0 | 2 | 0 | false | 0 | 0 | I don't think there is any way to do this. datetime.datetime.min says 1 is the min value for year. | 2 | 2 | 0 | 0 | Exactly what the title says. If I try to it gives me a ValueError for the year value but I'd like to have a datetime with year 0. Is there any way to do this? | how make a datetime object in year 0 with python | 0 | 0 | 1 | 0 | 0 | 10,714 |
17,709,751 | 2013-07-17T20:37:00.000 | 4 | 0 | 1 | 0 | 0 | python,datetime | 0 | 17,709,791 | 0 | 2 | 0 | false | 0 | 0 | from the docs
The datetime module exports the following constants:
datetime.MINYEAR The smallest year number allowed in a date or
datetime object. MINYEAR is 1.
datetime.MAXYEAR The largest year number allowed in a date or datetime
object. MAXYEAR is 9999. | 2 | 2 | 0 | 0 | Exactly what the title says. If I try to it gives me a ValueError for the year value but I'd like to have a datetime with year 0. Is there any way to do this? | how make a datetime object in year 0 with python | 0 | 0.379949 | 1 | 0 | 0 | 10,714 |
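Those constants are easy to check directly; a quick sketch:

```python
import datetime

print(datetime.MINYEAR)        # 1
print(datetime.MAXYEAR)        # 9999
print(datetime.datetime.min)   # 0001-01-01 00:00:00 -- the earliest representable instant

# year 0 is simply outside the supported range
try:
    datetime.datetime(0, 1, 1)
except ValueError as err:
    print("rejected:", err)
```

If you need a "year zero" for astronomical or calendrical arithmetic, the usual workaround is to store the year as a separate integer alongside a valid datetime, or to use a library designed for proleptic calendars.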
17,710,943 | 2013-07-17T21:53:00.000 | 0 | 1 | 1 | 0 | 0 | c++,python,map,concurrency,rpc | 0 | 17,711,039 | 0 | 2 | 0 | false | 0 | 0 | Maybe factors can affect the selection.
One solution is to use FastCGI:
The client sends an HTTP request to an HTTP server that has FastCGI enabled.
The HTTP server dispatches the request to your RPC server via the FastCGI mechanism.
The RPC server processes the request, generates a response, and sends it back to the HTTP server.
The HTTP server sends the response back to your client. | 1 | 0 | 0 | 0 | I would like to have a class written in C++ that acts as a remote procedure call server.
I have a large (over a gigabyte) file that I parse, reading in parameters to instantiate objects which I then store in a std::map. I would like the RPC server to listen for calls from a client, take the parameters passed from the client, look up an appropriate value in the map, do a calculation, and return the calculated value back to the client, and I want it to serve concurrent requests -- so I'd like to have multiple threads listening. BTW, after the map is populated, it does not change. The requests will only read from it.
I'd like to write the client in Python. Could the server just be an HTTP server that listens for POST requests, and the client can use urllib to send them?
I'm new to C++ so I have no idea how to write the server. Can anyone point me to some examples? | Concurrent Remote Procedure Calls to C++ Object | 0 | 0 | 1 | 0 | 0 | 674 |
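To make the shape of the POST-based design concrete, here is a hedged sketch with both sides in Python 3 for brevity (in the real system the server side would be your C++ process). A threading HTTP server handles concurrent requests, and a plain dict stands in for the big std::map; since the map never changes after loading, concurrent read-only lookups are safe:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

LOOKUP = {"alpha": 2.5, "beta": 4.0}   # stands in for the big read-only std::map

class RPCHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        params = json.loads(self.rfile.read(length))
        value = LOOKUP[params["key"]] * params["x"]    # the "calculation"
        body = json.dumps({"result": value}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):                      # keep the demo quiet
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), RPCHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# the client side: a plain urllib POST, as the question proposes
url = "http://127.0.0.1:%d/" % server.server_port
payload = json.dumps({"key": "alpha", "x": 10}).encode()
req = urllib.request.Request(url, data=payload,
                             headers={"Content-Type": "application/json"})
resp = json.loads(urllib.request.urlopen(req).read())
print(resp)   # {'result': 25.0}
server.shutdown()
```

A C++ server exposing the same JSON-over-POST contract would be a drop-in replacement for the Python client shown here.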
17,712,632 | 2013-07-18T00:38:00.000 | 0 | 0 | 1 | 0 | 0 | python-2.7 | 0 | 17,712,665 | 0 | 2 | 0 | false | 0 | 0 | One option is to dump that list into a temp file, and read it from your other python script.
Another option (if one python script calls the other), is to pass the list as an argument (e.g. using sys.argv[1] and *args, etc). | 2 | 0 | 0 | 0 | Is there a way I can import a list from a different Python file? For example if I have a list:
list1 = ['horses', 'sheep', 'cows', 'chickens', 'dog']
Can I import this list into other files? I know to import other functions you do
from FileName import DefName
This is a user defined list and I don't want to have the user input the same list a million times.
Just a few maybes as to how this could be done:
from FileName import ListName or put all the lists into a function and then import the definition name
Thanks for the help | Import a list from a different file | 0 | 0 | 1 | 0 | 0 | 54 |
17,712,632 | 2013-07-18T00:38:00.000 | 0 | 0 | 1 | 0 | 0 | python-2.7 | 0 | 18,006,639 | 0 | 2 | 0 | true | 0 | 0 | I'll just export the lists in a file. Therefore every piece of code can read it. | 2 | 0 | 0 | 0 | Is there a way I can import a list from a different Python file? For example if I have a list:
list1 = ['horses', 'sheep', 'cows', 'chickens', 'dog']
Can I import this list into other files? I know to import other functions you do
from FileName import DefName
This is a user defined list and I don't want to have the user input the same list a million times.
Just a few maybes as to how this could be done:
from FileName import ListName or put all the lists into a function and then import the definition name
Thanks for the help | Import a list from a different file | 0 | 1.2 | 1 | 0 | 0 | 54 |
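Both suggestions can be sketched together: write the list into a module file, put that file's directory on sys.path, then import it. The file and variable names below are illustrative, and the module is created on the fly only so the example is self-contained:

```python
import importlib
import os
import sys
import tempfile

# create animals.py on the fly to stand in for the user's existing file
workdir = tempfile.mkdtemp()
with open(os.path.join(workdir, "animals.py"), "w") as f:
    f.write("list1 = ['horses', 'sheep', 'cows', 'chickens', 'dog']\n")

sys.path.insert(0, workdir)          # make the directory importable
animals = importlib.import_module("animals")
print(animals.list1[0])              # horses

# 'from animals import list1' works the same way once the module is on sys.path
from animals import list1
print(len(list1))                    # 5
```

In the normal case where both files live in the same directory, no sys.path fiddling is needed: `from animals import list1` just works.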
17,718,449 | 2013-07-18T08:33:00.000 | 7 | 0 | 1 | 0 | 0 | python,memory-management | 0 | 17,718,746 | 0 | 5 | 0 | true | 0 | 0 | You could just read out /proc/meminfo. Be aware that the "free memory" is usually quite low, as the OS heavily uses free, unused memory for caching.
Also, it's best if you don't try to outsmart your OS's memory management. That usually just ends in tears (or slower programs). Better just take the RAM you need. If you want to use as much as you can on a machine with a previously unknown amount of memory, I'd probably check how much RAM is installed (MemTotal in /proc/meminfo), leave a certain amount for the OS and as safety margin (say 1 GB) and use the rest. | 1 | 18 | 0 | 0 | I would like my python script to use all the free RAM available but no more (for efficiency reasons). I can control this by reading in only a limited amount of data but I need to know how much RAM is free at run-time to get this right. It will be run on a variety of Linux systems. Is it possible to determine the free RAM at run-time? | Determine free RAM in Python | 1 | 1.2 | 1 | 0 | 0 | 20,527 |
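A hedged sketch of the /proc/meminfo approach: parse the key/value lines and prefer MemAvailable (kernels >= 3.14) over MemFree, since free memory alone undercounts reclaimable caches. The sample string stands in for the real file so the example is self-contained:

```python
def parse_meminfo(text):
    """Return a dict of field name -> kB from /proc/meminfo contents."""
    info = {}
    for line in text.splitlines():
        name, _, rest = line.partition(":")
        parts = rest.split()
        if parts:
            info[name] = int(parts[0])   # values are reported in kB
    return info

def free_kb(info):
    # MemAvailable already accounts for reclaimable caches; on older
    # kernels, approximate it from MemFree plus buffer/cache pages
    if "MemAvailable" in info:
        return info["MemAvailable"]
    return info.get("MemFree", 0) + info.get("Buffers", 0) + info.get("Cached", 0)

sample = ("MemTotal:  8000000 kB\nMemFree:  1200000 kB\n"
          "Buffers:  300000 kB\nCached:  2500000 kB\n")
info = parse_meminfo(sample)
print(free_kb(info))   # 4000000

# on a real Linux box:
# with open("/proc/meminfo") as f:
#     print(free_kb(parse_meminfo(f.read())))
```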
17,730,406 | 2013-07-18T17:38:00.000 | 0 | 0 | 0 | 0 | 0 | python,report,openerp | 0 | 17,739,668 | 0 | 2 | 0 | false | 1 | 0 | I hope you can use webkit reporting. Any other openerp reporting tools have its limitations for creating dynamic columns. | 1 | 1 | 0 | 0 | I need to understand if it is possible to create reports with X number of columns. X will come from the amenities a hotel have for example. So it will change depending on the hotel selected from a wizard before generating the report.
Let's say Hotel XYZ has 5 amenities; I need a report with 5 columns where I will show the payments each guest made for each amenity. Then Hotel YYY will have 10 amenities and I need to do the same but for all 10 amenities.
Will it be possible to code a report (I am currently using the OpenOffice plugin, but anything that works would be fine) flexible enough to do this with OpenERP?
I am not asking how to do it, I just want to understand the possibilities and limitations.
Thanks! | Is it possible to generate reports with dynamic columns in OpenERP? | 1 | 0 | 1 | 0 | 0 | 582 |
17,731,701 | 2013-07-18T18:55:00.000 | 6 | 0 | 1 | 0 | 1 | ipython-notebook | 0 | 17,732,561 | 0 | 1 | 0 | true | 0 | 1 | You have to select and copy code using normal Ctrl-C Ctrl-V. 'Edit/Copy Cell' is a specific action in javascript that does a little more and that browser security policy prevent us to bind with clipboard. | 1 | 6 | 0 | 0 | I'm new to Ipython Notebook. I can cut and paste from other apps into my notebooks, but how do I copy/paste code out of notebook into a different app?
I'm accessing a Linux VNC session via Chicken. I can cut/paste with wild abandon between OSX/Linux using both command X/C/V and/or middle mouse button. I can also copy code into IPython notebook. I'm stopped dead in my tracks trying to get code out of IpyNotebook.
Using Notebook's 'Edit/Copy Cell' doesn't work, neither does 'Ctrl-m c'.
I'm running IPython 0.13.1 | Possible to copy/paste from IPython Notebook to other apps? | 0 | 1.2 | 1 | 0 | 0 | 12,168 |
17,755,621 | 2013-07-19T21:15:00.000 | 0 | 0 | 1 | 1 | 0 | python,macos,memory,permissions | 0 | 17,755,743 | 0 | 2 | 1 | false | 0 | 0 | import os
stats = os.stat('possibly_big_file.txt')
if stats.st_size > TOOBIG:  # TOOBIG = your size limit in bytes, e.g. 10 * 1024 * 1024
print "Oh no....." | 1 | 0 | 0 | 0 | I am writing code with python that might run wild and do unexpected things. These might include trying to save very large arrays to disk and trying to allocate huge amounts of memory for arrays (more than is physically available on the system).
I want to run the code in a constrained environment in Mac OSX 10.7.5 with the following rules:
The program can write files to one specific directory and no others (i.e. it cannot modify files outside this directory but it's ok to read files from outside)
The directory has a maximum "capacity" so the program cannot save gigabytes worth of data
Program can allocate only a finite amount of memory
Does anyone have any ideas on how to set up such a controlled environment?
Thanks. | Can I limit write access of a program to a certain directory in osx? Also set maximum size of the directory and memory allocated | 0 | 0 | 1 | 0 | 0 | 139 |
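For the "maximum capacity" rule specifically, one in-process sketch is to measure the directory before every write and refuse writes that would exceed a cap. This is advisory only — it constrains your own code, not the OS, so it is no substitute for a real sandbox (OS X's sandbox-exec, or resource.setrlimit for the memory rule); all names below are illustrative:

```python
import os
import tempfile

def dir_size_bytes(path):
    """Total size of all regular files under `path` (the directory's current usage)."""
    total = 0
    for root, dirs, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total

def guarded_write(path, data, root, cap_bytes):
    """Write `data` only inside `root`, and only while the quota holds."""
    full = os.path.abspath(path)
    if not full.startswith(os.path.abspath(root) + os.sep):
        raise IOError("write outside the allowed directory: %s" % full)
    if dir_size_bytes(root) + len(data) > cap_bytes:
        raise IOError("directory quota exceeded")
    with open(full, "wb") as f:
        f.write(data)

sandbox = tempfile.mkdtemp()
guarded_write(os.path.join(sandbox, "a.bin"), b"x" * 10, sandbox, cap_bytes=64)
try:
    guarded_write(os.path.join(sandbox, "b.bin"), b"x" * 100, sandbox, cap_bytes=64)
except IOError as err:
    print("blocked:", err)
```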
17,758,039 | 2013-07-20T02:21:00.000 | 0 | 0 | 0 | 0 | 0 | python,module,virtualenv | 0 | 17,758,271 | 0 | 1 | 0 | false | 1 | 0 | Uploading your virtualenv probably won't work. There is a good chance that something in the virtualenv is dependent on the exact file paths and versions that won't match from your machine to the virtual host.
You can upload the virtualenv tool, make a new virtualenv, and then install the version of flask you want inside that virtualenv. | 1 | 0 | 0 | 0 | I have a flask based app, and it's now running with virtualenv on my dev machine. Now I want to deploy it to my virtual host. Sadly, this virtual host is running flask 0.6, and I want flask 0.10. I don't have enough privilege to upgrade it.
Can I just upload my whole virtual environment and use my own version of flask, and how?
My idea is to change the PYTHONPATH; how do I get rid of the old version and add the new one to it?
Any help will be appreciated. | how to deploy apps developed in virtual env | 0 | 0 | 1 | 0 | 0 | 83 |
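A hedged sketch of the PYTHONPATH idea: whatever directory you prepend to sys.path wins the import race, so an uploaded site-packages copy can shadow the host's older flask. A throwaway package named `flaskdemo` stands in for the real library here so the example doesn't touch an actual flask install:

```python
import importlib
import os
import sys
import tempfile

# stand-in for the uploaded virtualenv's site-packages directory
site = tempfile.mkdtemp()
pkg = os.path.join(site, "flaskdemo")
os.makedirs(pkg)
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("__version__ = '0.10'\n")

# prepending (not appending) makes this copy win over any system install;
# in a WSGI setup this is typically the first thing the .wsgi file does,
# or you export PYTHONPATH before starting the app
sys.path.insert(0, site)
flaskdemo = importlib.import_module("flaskdemo")
print(flaskdemo.__version__)   # 0.10
```

The caveat from the answer still applies: the packages must actually work on the host (matching Python version, no compiled extensions built against your local paths), which is why rebuilding the virtualenv on the host is the safer route when possible.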
17,761,974 | 2013-07-20T11:51:00.000 | 0 | 0 | 0 | 0 | 0 | python,django,search,inheritance,django-forms | 0 | 17,762,184 | 0 | 3 | 1 | false | 1 | 0 | It seems like you just need to create another SearchView which takes the query and displays the results. I am not sure if the results have to be displayed differently depending on which page the search has been performed from but it does not seem like.
The form would not have anything to do with the other views. You could just hard code it in the base template. | 1 | 0 | 0 | 0 | We have a large Django application made up of a large number of different views (some of which contain forms). As with most large applications, we use a base layout template that contains the common layout elements across the applications (mainly a header and a footer), which the templates for all of our views extend.
What we are looking to do is create a universal search box in our application, accessible on every page, which allows users to perform searches across the entire application, and want to place the search box inside the header, which involves placing a form inside our base layout template. This means that every view in our application will need to be able to handle the submission of this search form. Once this search form is submitted, we will need to redirect the user to another view containing the search results.
However, we are struggling to come up with a pattern to handle this. Does anyone know of functionality built into Django that will help us to build this? Failing that, can anyone suggest a good strategy for modifying our application so that we can handle this use-case without having to modify a large number of existing views (which we don't have the resources to do at the moment)?
Please note that the focus of this question is intended to be the best way to handle the submission of a form which appears in every view, and not strategies for implementing a universal search algorithm (which we have already figured out).
Ideas Explored So Far
Our first idea was to create a base View class that implements handling the universal search form submission, and have each of our views extend this. However, this is not possible because we already have views that inherit from a number of different Django view classes (TemplateView, ListView, FormView and DeleteView being some examples), and to be able to build our own common view class would mean either writing our own version of these Django view classes to inherit from our own view base class, or re-writing a large number of our views so they don't use the Django view classes.
Our next idea was to implement a mixin that would handle the universal search form submission, in an attempt to add this functionality to all our views in a way that allows us to continue using the different Django view classes. However, this brought to light two new problems: (a) how could we do this without modifying each of our views to become a form view, and (b) how can we do this in a way that allows the form handling logic to play nicely when mixed in to existing FormViews? | Implementing Universal Search in a Django Application | 1 | 0 | 1 | 0 | 0 | 624 |
17,767,579 | 2013-07-20T23:09:00.000 | 1 | 0 | 1 | 1 | 0 | python,ide,development-environment | 0 | 17,768,272 | 0 | 1 | 0 | true | 0 | 0 | Assuming that Python is already on your computer:
Go to /Applications folder
Then open Utilities
Double Click Terminal to open it and get a command line
type 'python' in the command prompt
You're all set! | 1 | 0 | 0 | 0 | I would like to know from you guys how you have set up your Mac terminal for Python programming. I haven't done anything big so far (I have used IDEs until now) with Python in the terminal, but I think that you can do all kinds of fancy things (automatic fill-up functions, colors, ...). Any suggestions?
Thank you guys! | How to setup the Mac terminal for Programming with Python? | 0 | 1.2 | 1 | 0 | 0 | 263
17,779,316 | 2013-07-22T03:04:00.000 | 3 | 0 | 0 | 0 | 0 | python,matplotlib,histogram,gaussian | 0 | 20,057,520 | 0 | 3 | 0 | false | 0 | 0 | Another way of doing this is to find the normalized fit and multiply the normal distribution with (bin_width*total length of data)
this will un-normalize your normal distribution | 1 | 6 | 1 | 0 | I have data which is of the gaussian form when plotted as histogram. I want to plot a gaussian curve on top of the histogram to see how good the data is. I am using pyplot from matplotlib. Also I do NOT want to normalize the histogram. I can do the normed fit, but I am looking for an Un-normalized fit. Does anyone here know how to do it?
Thanks!
Abhinav Kumar | Un-normalized Gaussian curve on histogram | 0 | 0.197375 | 1 | 0 | 0 | 12,313 |
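Sketching that scaling trick in numpy: a unit-area Gaussian pdf multiplied by (bin_width x number of data points) lands in the same count units as an un-normalized histogram. Here mu and sigma are just sample estimates rather than a proper fit:

```python
import numpy as np

rng = np.random.RandomState(0)
data = rng.normal(loc=5.0, scale=2.0, size=10000)

counts, edges = np.histogram(data, bins=40)   # raw, un-normalized counts
bin_width = edges[1] - edges[0]
mu, sigma = data.mean(), data.std()

centers = (edges[:-1] + edges[1:]) / 2.0
pdf = np.exp(-(centers - mu) ** 2 / (2.0 * sigma ** 2)) / (sigma * np.sqrt(2.0 * np.pi))
curve = pdf * bin_width * len(data)           # scale the unit-area pdf up to count units

# with matplotlib this is just:
#   plt.hist(data, bins=40)
#   plt.plot(centers, curve)
print(counts.sum())
```

The scaled curve's total mass matches the histogram's total count (up to the tails falling outside the binned range), which is exactly the "un-normalized fit" the question asks for.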
17,780,235 | 2013-07-22T05:02:00.000 | 0 | 0 | 0 | 0 | 0 | python,eclipse,openerp | 0 | 17,897,177 | 0 | 1 | 0 | false | 1 | 0 | I can't think of any easy way of doing this. When OpenERP connects to a database it sets up a registry containing all models and all the fields and as part of this, loads the fields into the database, performs database refactoring etc. The idea is that it is simple to inherit existing models and add your fields that way but it does require coding.
I have done something similar where:
I predefined some fields on your model (field1, intfield1, charfield1 etc.).
Provide a model/form so the admin can say use intfield1 and give it a label of 'My Value'
Override fields_view_get on your model and change the XML to include your field with the correct label.
But this is tricky to get right. You will want to spend some time learning the elementtree module to do the XML manipulation in the fields_view_get. | 1 | 0 | 0 | 0 | Hi I have created an openerp module using Python (eclipse) . I want to add a feature in my form so that admin will be able to create his own fields whenever and whatever he wants . I needed some guidance of how this will be done . As I am new to openerp , any help will be good to me . Thanks
Hoping for advice | how to make dynamic field creation capability in openerp module? | 0 | 0 | 1 | 0 | 1 | 228
17,783,481 | 2013-07-22T09:01:00.000 | 3 | 0 | 0 | 0 | 0 | python,scipy,curve-fitting | 0 | 17,786,438 | 0 | 1 | 0 | true | 0 | 0 | I'm afraid that the older FORTRAN-77 version of ODRPACK wrapped by scipy.odr does not incorporate constraints. ODRPACK95 is a later extension of the original ODRPACK library that predates the scipy.odr wrappers, and it is unclear that we could legally include it in scipy. There is no explicit licensing information for ODRPACK95, only the general ACM TOMS non-commercial license. | 1 | 3 | 1 | 0 | I'm using the ODRPACK library in Python to fit some 1d data. It works quite well, but I have one question: is there any possibility to make constraints on the fitting parameters? For example if I have a model y = a * x + b and for physical reasons parameter a can by only in range (-1, 1). I've found that such constraints can be done in original Fortran implementation of the ODRPACK95 library, but I can't find how to do that in Python.
Of course, I can implement my functions such that they will return very big values, if the fitting parameters are out of bounds and chi squared will be big too, but I wonder if there is a right way to do that. | Constraints on fitting parameters with Python and ODRPACK | 1 | 1.2 | 1 | 0 | 0 | 780 |
17,824,526 | 2013-07-24T03:07:00.000 | 0 | 0 | 0 | 1 | 0 | python-2.7,websocket,tornado | 0 | 17,835,810 | 0 | 2 | 0 | false | 1 | 0 | WebSocket was designed for low-latency bidirectional browser<->service communication. It's placed on top of TCP/IP and brings along some overhead. It was designed to solve all the problems that you simply do not have when it's about front-end<->back-end communication, because there we're talking about a defined environment which is under your control. Hence, I would recommend going back to the basics and do simple TCP/IP communication between your front-end and back-end. | 1 | 0 | 0 | 0 | Im using python tornado as web server and I have a backend server and a frontend server. I want to create browser-frontend-backend connection. Can anyone help me how to do this? I know how to create websocket connection between frontend and browser but I have no idea how to connect my frontend server to backend server to stream realtime data parsed by my backend server. | Websocket connection between two servers | 0 | 0 | 1 | 0 | 1 | 1,390 |
17,832,954 | 2013-07-24T11:40:00.000 | 1 | 0 | 0 | 0 | 0 | php,python,cgi | 0 | 17,833,204 | 0 | 1 | 0 | true | 1 | 0 | Probably by bind mounts (assuming Linux), so that the file is in it's original location as well as in the web root.
Or by priviledge separation. The web root sends a query to some worker job, that has access to all the needed files. | 1 | 0 | 0 | 0 | I have developed a python desktop application and application itself having setup page to change some configurations in the application and it is saved as a configuration file.
Now I need to provide a web interface to change those configurations remotely, using a web browser. But I need to change the same configuration file.
I can’t access any file outside the web root, so my first question is: how can I edit that file, which is located outside the web root?
More info: for the web application I use the LAMP stack, and the desktop application is Python based.
Someone suggested that I use CGI. Second question: is that possible, and if so, how can I do it? | Edit a file outside the web root | 1 | 1.2 | 1 | 0 | 0 | 126
17,867,473 | 2013-07-25T19:50:00.000 | 1 | 0 | 0 | 1 | 1 | python,eclipse,pydev | 0 | 17,868,662 | 0 | 1 | 0 | false | 0 | 0 | Ctrl+Alt+F9 should terminate all launches, which will do the job for you. | 1 | 3 | 0 | 0 | I am afraid that my question may be duplicate, but I could not find good answers for my question in stackoverflow.
I use PyDev under Eclipse, and I often run my programs by opening a Python Console (Ctrl+Alt+Enter in the editor) for quick-and-easy debugging. The thing is, I do not know how to stop the running program along the way. Ctrl+C, Ctrl+Z, or Ctrl+Break did not work. If I click the [terminate] icon, the whole Console disappears, which I do not want.
Is there any way to stop the running program and go back to the command line?
Thanks | How to stop a running program on PyDev | 0 | 0.197375 | 1 | 0 | 0 | 2,429 |
17,881,838 | 2013-07-26T12:53:00.000 | 0 | 0 | 0 | 0 | 0 | python,video,3d,move | 0 | 17,882,938 | 0 | 1 | 0 | false | 0 | 1 | I'd try to use a decent video client (mplayer, vlc). They can present the video in lots of ways, hopefully your stereoscopic issue can be solved by them.
Then I would let the client present a single window (not fullscreen) which I then would move around using window manager controls.
If you must not have window decorations around the video or if the output shall be a specific window, I think mplayer at least can be told to use an existing window to perform the output in. Maybe that's an approach then. | 1 | 2 | 0 | 0 | I would like to move a (stereoscopic) video on a computer screen automatically. Think of the video as the ball in a Pong game. The problem is that it should be a stereoscopic 3D video. So the video size itself is kind of small. I did this kind of movements with pictures or drawing object, but I don't know how to do it with video material!
Does somebody know how I can do this? I already searched for video tools in python like pygame or pyglet. I have an external player Bino 3d which can open the desired video. But how can I make it move around the screen?
Or is there a tool in other programming languages like c/c++ or Matlab which can help?
By the way, the program will be on a Linux OS.
I'll be grateful for any help or hints!
Anna | move a stereoscopic video on the screen | 0 | 0 | 1 | 0 | 0 | 110 |
17,884,487 | 2013-07-26T14:47:00.000 | 1 | 0 | 1 | 0 | 0 | python,sublimetext2 | 0 | 17,885,345 | 0 | 2 | 0 | true | 0 | 0 | Sorry snippets are primarily meant for reusability; you can do precisely as you say above but cannot insert specific things - you would have to write your own plugin in order to add this functionality as it would require things like a specific way of selecting things to modify and replace which would be specific to it. | 1 | 0 | 0 | 0 | Is there any way to access current scope (class name or function name) inside a snippet? I am trying to write a snippet for super(CurrentClassName, self).get(*args, **kwargs) but seems like I can't really replace CurrentClassName with actual class name. Does anyone know how to do that? | Sublime text 2 snippets | 0 | 1.2 | 1 | 0 | 0 | 351 |
17,888,244 | 2013-07-26T18:15:00.000 | 0 | 1 | 0 | 1 | 0 | python,fabric | 0 | 17,929,568 | 0 | 1 | 0 | false | 0 | 0 | Overrode the "env" variable via parameter in the function. Dumb mistake. | 1 | 0 | 0 | 0 | I'm developing a task where I need to have a few pieces of information specific to the environment.
I setup the ~/.fabricrc file but when I run the task via command line, the data is not in the env variable
I don't really want to add the -c config to simplify the deployment.
in the task, I'm calling
env.cb_account
and I have in ~/.fabricrc
cb_account=foobar
it throws AttributeError
Has anybody else run into this problem?
I found the information when I view env outside of my function/task. So now the question is how do I get that information into my task? I already have 6 parameters so I don't think it would be wise to add more especially when those parameters wouldn't change. | Python Fabric config file (~/.fabricrc) is not used | 0 | 0 | 1 | 0 | 0 | 675 |
17,901,514 | 2013-07-27T19:15:00.000 | 1 | 0 | 1 | 0 | 0 | python,loops,for-loop,syntax,while-loop | 0 | 17,901,607 | 0 | 4 | 0 | false | 0 | 0 | while, print, for etc. are keywords. When Python reads the code, a lexer first strips out irrelevant characters and turns the source into tokens. The parser then takes those tokens as input and builds a program tree, which is then compiled and executed by the interpreter. In other words, those constructs are part of the language grammar itself rather than library code, and as such are not visible from inside the code. | 1 | 2 | 0 | 0 | What I mean is, how is the syntax defined, i.e. how can I make my own constructs like these?
I realise that in a lot of languages, things like this will be built into the compiler / spec, and so they are dealt with by the compiler (at least that's how I understand it to work).
But with Python, everything I've come across so far has been accessible to the programmer, and so you more or less have the freedom to do whatever you want.
How would I go about writing my own version of for or while? Is it even possible?
I don't have any actual application for this, so the answer to any WHY?! questions is just "because why not?" or "curiosity". | How do the for / while / print *things* work in python? | 1 | 0.049958 | 1 | 0 | 0 | 140
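To see this machinery from inside Python itself, the standard library's `ast` and `dis` modules let you inspect the tree and bytecode that `for` and `while` are turned into — a small sketch:

```python
import ast
import dis

# 'for' is grammar, not a function: the parser turns it into a For node
# in the abstract syntax tree, so there is nothing callable to redefine.
tree = ast.parse("for i in range(3): total = i")
print(type(tree.body[0]).__name__)  # -> For

# The compiler then emits bytecode, which the interpreter loop executes.
dis.dis(compile("while x: x -= 1", "<demo>", "exec"))
```

You cannot add new keywords this way, but you can explore exactly how the existing ones are represented.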
17,931,476 | 2013-07-29T18:27:00.000 | 0 | 0 | 0 | 0 | 0 | python-3.x,sas-jmp,jsl | 0 | 34,890,487 | 0 | 3 | 0 | false | 1 | 0 | Make sure jmp.exe is available in your system environment so that if you type "jmp.exe" in the command line, it would launch jmp. Then have your *.jsl ready.
Use a Python process (e.g. the subprocess module) to run the command "jmp.exe *.jsl"; that will open JMP and run the *.jsl script, and then you can import whatever you generate from JMP back into Python. | 1 | 4 | 0 | 0 | I have a Python script running. I want to call a *.jsl script from my running Python script and make use of its output in Python. May I know how I can do that? | How to call a *.jsl script from python script | 0 | 0 | 1 | 0 | 0 | 4,983
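A minimal sketch of that approach with Python's subprocess module; the script name analysis.jsl is a placeholder, and jmp.exe must be on your PATH for the actual call to work:

```python
import subprocess

def build_jmp_command(script_path, jmp_exe="jmp.exe"):
    # Build "jmp.exe myscript.jsl" as an argument list, which avoids
    # shell-quoting problems with spaces in paths.
    return [jmp_exe, script_path]

cmd = build_jmp_command("analysis.jsl")  # placeholder script name
print(cmd)  # -> ['jmp.exe', 'analysis.jsl']

# To actually launch JMP (requires jmp.exe on PATH), you would run:
# subprocess.call(cmd)
```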
17,955,275 | 2013-07-30T19:04:00.000 | 0 | 0 | 1 | 0 | 0 | python,performance,mongodb,pymongo | 0 | 24,357,799 | 0 | 1 | 0 | false | 0 | 0 | pymongo is thread safe, so you can run multiple queries in parallel. (I assume that you can somehow partition your document space.)
Feed the results to a local Queue if processing the result needs to happen in a single thread. | 1 | 1 | 0 | 0 | I'm currently running into an issue in integrating ElasticSearch and MongoDB. Essentially I need to convert a number of Mongo Documents into searchable documents matching my ElasticSearch query. That part is luckily trivial and taken care of. My problem though is that I need this to be fast. Faster than network time, I would really like to be able to index around 100 docs/second, which simply isn't possible with network calls to Mongo.
I was able to speed this up a lot by using ElasticSearch's bulk indexing, but that's only half of the problem. Is there any way to either bundle reads or cache a collection (a manageable part of a collection, as this collection is larger than I would like to keep in memory) to help speed this up? I was unable to really find any documentation about this, so if you can point me towards relevant documentation I consider that a perfectly acceptable answer.
I would prefer a solution that uses Pymongo, but I would be more than happy to use something that directly talks to MongoDB over requests or something similar. Any thoughts on how to alleviate this? | Bundling reads or caching collections with Pymongo | 0 | 0 | 1 | 1 | 0 | 195 |
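One way to sketch the parallel-read pattern from the answer above; here a plain list stands in for the MongoDB collection, but with pymongo each thread would instead run its own find() over one partition of the document space:

```python
import queue
import threading

# Stand-in for a MongoDB collection; real code would query Mongo here.
FAKE_COLLECTION = [{"_id": i, "text": "doc %d" % i} for i in range(100)]

def fetch_partition(part, n_parts, out_queue):
    # pymongo is thread safe, so real find() calls can run in parallel
    # the same way, one slice of the document space per thread.
    for doc in FAKE_COLLECTION[part::n_parts]:
        out_queue.put(doc)

results = queue.Queue()
threads = [threading.Thread(target=fetch_partition, args=(p, 4, results))
           for p in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results.qsize())  # -> 100
```

A single consumer can then drain the queue and hand documents to the bulk indexer in batches.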
17,960,013 | 2013-07-31T01:14:00.000 | 1 | 0 | 0 | 0 | 0 | python,rgb,fits,pyfits | 1 | 17,984,159 | 0 | 1 | 0 | true | 0 | 0 | I don't think there is enough information for me to answer your question completely; for example, I don't know what call you are making to perform the "image" "save", but I can guess:
FITS does not store RGB data like you wish it to. FITS can store multi-band data as individual monochromatic data layers in a multi-extension data "cube". Software, including ds9 and aplpy, can read that FITS data cube and author RGB images in RGB formats (png, jpg...). The error you see comes from PIL, which has no backend to author FITS files (I think, but the validity of that point doesn't matter).
So I think that you should use aplpy.make_rgb_cube to save a 3 HDU FITS cube based on your 3 input FITS files, then import that FITS cube back into aplpy and use aplpy.make_rgb_image to output RGB compatible formats. This way you have the saved FITS cube in near native astronomy formats, and a means to create RGB formats from a variety of tools that can import that cube. | 1 | 0 | 0 | 0 | I am trying to make a three colour FITS image using the $aplpy.make_rgb_image$ function. I use three separate FITS images in RGB to do so and am able to save a colour image in png, jpeg.... formats, but I would prefer to save it as a FITS file.
When I try that I get the following error.
IOError: FITS save handler not installed
I've tried to find a solution in the web for a few days but was unable to get any good results.
Would anyone know how to get such a handler installed, or perhaps any other approach I could use to get this done? | Making a 3 Colour FITS file using aplpy | 0 | 1.2 | 1 | 0 | 0 | 1,054 |
17,960,261 | 2013-07-31T01:42:00.000 | 0 | 0 | 0 | 0 | 1 | python,django,deployment,django-cms,bluehost | 1 | 18,367,549 | 0 | 1 | 0 | true | 1 | 0 | As I wondered, the problem was that I was accessing the site using my temporary link from BlueHost, which the function throwing the error could not handle.
When my clients finally pointed their domain name at the server this problem and a few others (CSS inconsistencies in the Django admin, trouble with .htaccess) disappeared. Everything is up now and working fine. | 1 | 0 | 0 | 0 | I'm deploying my first Django app on a BlueHost shared server. It is a simple site powered by Django-CMS, and portions of it are working, however there are some deal-breaking quirks.
A main recurring one reads TypeError, a float is required. The exception location each time is .../python/lib/python2.7/site-packages/django/core/urlresolvers.py in _reverse_with_prefix, line 391. For example, I run into it when trying to load a page which includes {% cms_toolbar %} in the template, pressing "save and continue editing" when creating a page, or trying to delete a page through the admin interface.
I don't know if this is related, but nothing happens when I select a plugin from the "Available Plugins" drop-down while editing a page and press "Add Plugin".
Has anyone had any experience with this error, or have any ideas how to fix it? | Django-CMS Type Error "a float is required" | 1 | 1.2 | 1 | 0 | 0 | 397 |
17,961,363 | 2013-07-31T04:00:00.000 | 0 | 0 | 1 | 0 | 0 | python,pyqt,qtreewidget | 0 | 17,962,727 | 0 | 1 | 0 | true | 0 | 1 | Every QWidget has a contextMenuPolicy property which defines what to do when a context menu is requested. The simplest way to do what you need is this:
Create QAction objects that call methods you want.
Add these actions to your tree widgets using widget.addAction()
Call widget.setContextMenuPolicy(QtCore.Qt.ActionsContextMenu)
That's it. The context menu for the widget will contain the actions you added. | 1 | 1 | 0 | 0 | I have 2 tree widgets placed in a frame in my main window. How can I have 2 different sets of context menu options for the 2 tree widgets? I need an individual set of right-click options for each tree widget. Thanks in advance. | PyQT treewidgets with different context menu options | 1 | 1.2 | 1 | 0 | 0 | 1,433
17,984,890 | 2013-08-01T03:42:00.000 | 0 | 1 | 0 | 0 | 1 | python,coding-style,project-management,jira,issue-tracking | 1 | 18,004,681 | 0 | 2 | 1 | false | 1 | 0 | Every time you revisit code, make a list of the information you are not finding. Then the next time you create code, make sure that information is present. It can be in comments, Wiki, bugs or even text notes in a separate file. Make the notes useful for other people, so private notebooks aren't a good idea except for personal notes. | 1 | 3 | 0 | 0 | I have been stuck on this problem for a long time and I want to know how it's done in real / big company projects.
Suppose I have a project to build a website. Now I divide the project into subtasks and do them.
But you know that, suppose I have task 1 in hand, like exporting the page to PDF. Now I spend 3 days on that, come across various problems and many Stack Overflow questions, and in the end I solve it.
Now 4 months later someone tells me that there is some error in the code.
By then I have almost completely (60%) forgotten how I did it and why I did it this way. I document the code, but I can't write the whole story of it in the code.
Then I have to spend a lot of time on the code to find what the problem was and why I added this line, etc.
I want to know if there is any way that I can log the steps taken in completing the project,
so that I can see how I ended up with the code, what errors I got, what questions I asked on SO, etc.
How do people do this in real life? Which software should I use?
I know in our project management software, JIRA, we have tasks, but that does not cover what steps I took to solve those tasks.
What is the best way so that when I look back at my 2-year-old project, I know how I solved a particular task? | What is the best way to track / record the current programming project you work on | 1 | 0 | 1 | 0 | 0 | 278
17,998,464 | 2013-08-01T15:35:00.000 | 1 | 0 | 1 | 0 | 0 | python,multiprocessing,master | 1 | 18,034,766 | 0 | 2 | 0 | false | 0 | 0 | Here's one way to implement your workflow:
Have two multiprocessing.Queue objects: tasks_queue and results_queue. The tasks_queue will hold device outputs, and results_queue will hold the results of the assertions.
Have a pool of workers, where each worker pulls device output from tasks_queue, parses it, asserts, and puts the result of the assertion on the results_queue.
Have another process continuously polling the device and putting device output on the tasks_queue.
Have one last process continuously polling results_queue, and ending the overall program when the desired number of results (successful assertions) is reached.
Total number of processes (multiprocessing.Process objects) is 2 + k, where k is the number of workers in the pool. | 1 | 1 | 0 | 0 | I'm fairly familiar with the python multiprocessing module, but I'm unsure of how to implement this setup. My project has this basic flow:
request serial device -> gather response -> parse response -> assert response -> repeat
It is right now a sequential operation that loops over this until it has gathered the desired number of asserted responses. I was hoping to speed this task up by having a 'master process' do the first two operations, and then pass off the parsing and assertion task into a queue of worker processes. However, this is only beneficial if the master process is ALWAYS running. I'm guaranteed to be working on a multi-core machine.
Is there any way to have a process in the multiprocessing module always have focus / make run so I can achieve this? | Python Multiprocessing Management | 0 | 0.099668 | 1 | 0 | 0 | 743 |
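A runnable sketch of the queue-and-workers layout described in the answer; the parsing and assertion steps are placeholders for the real device-output handling, and the "fork" start method assumes a Unix machine:

```python
import multiprocessing as mp

def worker(tasks, results):
    # Pull device output, "parse" and "assert" it (placeholders here),
    # and push the outcome; a None task is the signal to stop.
    while True:
        item = tasks.get()
        if item is None:
            break
        parsed = item.strip().upper()          # stand-in for real parsing
        results.put(parsed.startswith("OK"))   # stand-in for the assertion

def run(responses, n_workers=3):
    ctx = mp.get_context("fork")   # Unix; use a __main__ guard on Windows
    tasks, results = ctx.Queue(), ctx.Queue()
    workers = [ctx.Process(target=worker, args=(tasks, results))
               for _ in range(n_workers)]
    for w in workers:
        w.start()
    for response in responses:     # the master would feed device output here
        tasks.put(response)
    for _ in workers:
        tasks.put(None)            # one stop signal per worker
    outcomes = [results.get() for _ in range(len(responses))]
    for w in workers:
        w.join()
    return outcomes

outcomes = run(["ok 1\n", "ok 2\n", "fail 3\n", "ok 4\n"])
print(sum(outcomes))  # -> 3 successful assertions
```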
18,009,550 | 2013-08-02T05:45:00.000 | 0 | 0 | 1 | 0 | 0 | python,pythonpath | 0 | 18,010,177 | 0 | 3 | 0 | false | 0 | 0 | PYTHONPATH is copied into sys.path at startup; from there, any module can modify sys.path before other modules are imported, so the initial order is not guaranteed to stay fixed.
The specific case I'm wondering about is the view of PYTHONPATH before Python is started and if that differs to how Python actually uses it. | Will the first source in PYTHONPATH always be searched first? | 0 | 0 | 1 | 0 | 0 | 752 |
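A small experiment that shows both points — earlier sys.path entries win, and the order can be changed at runtime:

```python
import os
import sys
import tempfile

# Create two directories that both provide a module called "whichone".
first, second = tempfile.mkdtemp(), tempfile.mkdtemp()
for path, tag in [(first, "FIRST"), (second, "SECOND")]:
    with open(os.path.join(path, "whichone.py"), "w") as f:
        f.write("TAG = %r\n" % tag)

# Earlier entries in sys.path win: the search stops at the first hit.
sys.path.insert(0, second)
sys.path.insert(0, first)
import whichone
print(whichone.TAG)  # -> FIRST

# Nothing stops code from reordering sys.path at runtime, which is why
# PYTHONPATH only determines the *initial* search order.
```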
18,022,429 | 2013-08-02T17:04:00.000 | 0 | 0 | 1 | 0 | 0 | python,build,python-module,python-install | 0 | 18,048,958 | 1 | 1 | 0 | true | 0 | 0 | Reinstall them. It may seem like a no-brainer to reuse modules (in a lot of cases, you can), but in the case of modules that have compiled code - for long term systems administration this can be an utter nightmare.
Consider supporting multiple versions of Python for multiple versions / architectures of Linux. Some modules will reference libraries in /usr/local/lib, but those libraries can be the wrong arch or wrong version.
You're better off making a requirements.txt file and using pip to install them from source. | 1 | 1 | 0 | 0 | I normally use python 2.7.3 traditionally installed in /usr/local/bin, but I needed to rebuild python 2.6.6 (which I did without using virtualenv) in another directory ~/usr/local/ and rebuild numpy, scipy, all libraries I needed different versions from what I had for python 2.7.3 there...
But for all the other packages that I want exactly as they are (meaning the same version) in my default installation, I don't know how to just use them in Python 2.6.6 without having to download tarballs and build and install them using --prefix=/home/myself/usr/local/bin.
Is there a fast or simpler way of "re-using" those packages in my "local" python 2.6.6? | Allowing python use modules from other python installation | 0 | 1.2 | 1 | 0 | 0 | 48 |
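The requirements.txt approach mentioned in the answer is just a plain text file of pinned packages; the version numbers below are only examples, and you would run the install with the pip that belongs to the target interpreter:

```
# requirements.txt -- example pins, adjust to your environment
numpy==1.6.2
scipy==0.10.1

# then, using the pip of the target Python installation:
#   pip install -r requirements.txt
```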
18,045,565 | 2013-08-04T17:26:00.000 | 2 | 0 | 1 | 0 | 0 | python,regex | 0 | 18,072,336 | 0 | 3 | 0 | false | 0 | 0 | I think you don't need regexes for this problem;
you need some recursive graph search function.
for example for re.search(reg, 'ab1') I want to get ('ab','1')
Equivalent result I can get with '^(a|ab|1|2)(a|ab|1|2)$' pattern,
but I don't know how many blocks been matched with (pattern)+
Is this possible, and if yes - how? | Python regexp: get all group's sequence | 0 | 0.132549 | 1 | 0 | 0 | 1,138 |
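Standard re cannot return all repeated captures of a (...)+ group (it only keeps the last one), but you can recover the sequence by tokenizing the string with the same alternatives. A sketch — note it uses greedy longest-first matching rather than the full backtracking search the other answer alludes to, which is enough for this example:

```python
import re

def tokenize(s, alternatives=("ab", "a", "1", "2")):
    # Order alternatives longest-first: Python's re alternation is
    # leftmost-first, so "a|ab" would never let "ab" match.
    pattern = re.compile("|".join(sorted(alternatives, key=len, reverse=True)))
    tokens, pos = [], 0
    while pos < len(s):
        m = pattern.match(s, pos)
        if m is None:
            raise ValueError("no alternative matches at position %d" % pos)
        tokens.append(m.group())
        pos = m.end()
    return tokens

print(tokenize("ab1"))  # -> ['ab', '1']
```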
18,048,357 | 2013-08-04T22:45:00.000 | 1 | 0 | 0 | 1 | 0 | python,tornado | 0 | 19,019,586 | 0 | 1 | 0 | false | 0 | 0 | If I'm understanding your question correctly, all you need to do is call IOLoop.add_callback from the thread that is reading from the queue. This will run your callback in the IOLoop's thread so you can write your message out on the client websocket connections. | 1 | 1 | 0 | 0 | I have a tornado application that will serve data via websocket.
I have a separate blocking thread which is reading input from another application and pushing an object into a Queue and another thread which has a blocking listener to that Queue.
What I would like is for the reader thread to somehow send a message to tornado whenever it sees a new item in the Queue and then tornado can relay that via websocket to listening clients.
The only way I can think to do this is to have a websocket client in the reader thread and push the information to tornado via websocket. However it seems that I should be able to do this without using websocket and somehow have tornado listen for non websocket async events and then call a callback.
But I can't find anything describing how to do this. | How to add custom events to tornado | 0 | 0.197375 | 1 | 0 | 0 | 625 |
18,048,512 | 2013-08-04T23:09:00.000 | 2 | 0 | 1 | 0 | 0 | python,html,regex,parsing,html-parsing | 0 | 18,048,532 | 0 | 2 | 0 | false | 0 | 0 | People shy away from doing regexes to search HTML because it isn't the right tool for the job when parsing tags. But everything should be considered on a case-by-case basis. You aren't searching for tags, you are searching for a well-defined string in a document. It seems to me the simplest solution is just a regex or some sort of XPath expression -- simple parsing requires simple tools. | 1 | 0 | 0 | 0 | Sometimes I am not sure when do I have to use one or another. I usually parse all sort of things with Python, but I would like to focus this question on HTML parsing.
Personally I find DOM manipulation really useful when having to parse more than two regular elements (i.e. title and body of a list of news, for example).
However, I found myself in situations where it is not clear for me to build a regex or try to get the desired value simply manipulating strings. A particular fictional example: I have to get the total number of photos of an album, and the only way to get this is parsing the number of photos using this way:
(1 of 190)
So I have to get the '190' from the whole HTML document. I could write a regex for that, although regex for parsing HTML is not exactly the best, or that is what I always understood. On the other hand, using DOM seems overwhelming for me as it is just a simple element. String manipulation seems to be the best way, but I am not really sure if I should proceed like that in such a similar case.
Can you tell me how would you parse these kind of single elements from a HTML document using Python (or any other language)? | Should I use regex or just DOM/string manipulation? | 0 | 0.197375 | 1 | 0 | 1 | 525 |
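For a single well-defined value like this, a small regex really is enough; a sketch with a made-up HTML fragment:

```python
import re

html_fragment = "<span class='count'>(1 of 190)</span>"  # made-up markup
match = re.search(r"\((\d+) of (\d+)\)", html_fragment)
total_photos = int(match.group(2)) if match else None
print(total_photos)  # -> 190
```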
18,050,770 | 2013-08-05T04:50:00.000 | 2 | 0 | 0 | 0 | 0 | python,eclipse,openerp | 0 | 18,071,459 | 0 | 2 | 0 | false | 1 | 0 | Do you mean you want a dynamic field on the form/tree view or in the model?
If it is in the view then you override fields_view_get, call super and then process the returned XML for the form type you want adding in the field or manipulating the XML. ElementTree is your friend here.
If you are talking about having a dynamic database field, I don't think you can and OpenERP creates a registry for each database when that database is first accessed and this process performs database refactoring at that time. The registry contains the singleton model instances you get with self.pool.get...
To achieve this you will need to create some kind of generic field like field1 and then in fields_view_get change the string attribute to give it a dynamic label.
Actually, a plan C occurs to me. You could create a properties type of table, use a functional field to read the value for the current user, and override fields_view_get to do the form. | 2 | 0 | 0 | 0 | Hi, I am working on an OpenERP module. I want to create a field dynamically: I want to take the name of a field from the user and then create a field with it. How can this be done? Can I do it with fields.function returning a name of char type? Please help | how to set name of a field dynamically in openerp? | 0 | 0.197375 | 1 | 0 | 0 | 939
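The ElementTree manipulation suggested in the first answer might look like this; the arch string, field name and label here are made-up examples rather than real OpenERP output:

```python
import xml.etree.ElementTree as ET

# Hypothetical arch as returned by fields_view_get; "field1" and the
# new label are placeholders for illustration.
arch = "<form string='Demo'><field name='field1'/></form>"
root = ET.fromstring(arch)
for node in root.iter("field"):
    if node.get("name") == "field1":
        node.set("string", "My Dynamic Label")
new_arch = ET.tostring(root, encoding="unicode")
print(new_arch)
```

The modified arch string is then put back into the result dictionary before returning it from fields_view_get.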
18,050,770 | 2013-08-05T04:50:00.000 | 0 | 0 | 0 | 0 | 0 | python,eclipse,openerp | 0 | 18,071,982 | 0 | 2 | 0 | false | 1 | 0 | You can create fields dynamically with the help of the ir.model.fields model (self.pool.get('ir.model.fields')).
Use its create function. | 2 | 0 | 0 | 0 | Hi, I am working on an OpenERP module. I want to create a field dynamically: I want to take the name of a field from the user and then create a field with it. How can this be done? Can I do it with fields.function returning a name of char type? Please help | how to set name of a field dynamically in openerp? | 0 | 0 | 1 | 0 | 0 | 939
18,065,256 | 2013-08-05T18:34:00.000 | 2 | 0 | 0 | 0 | 0 | python,security | 0 | 18,065,965 | 0 | 2 | 0 | false | 0 | 0 | What you are asking about is part of what's commonly called "key management." If you google the term, you'll find lots of interesting reading. You may well discover that there are other parts of key management that your solution needs to address, like revocation and rotation.
In the particular part of key management that you're looking at, you need to figure out how to have two nodes trust each other. This means that you have to identify a separate thing that you trust on which to base the nodes' trust. There are two common approaches:
Trusting a third party. This is the model that we use for most websites we visit. When our computers are created, the trusted third party creates the device to already know about and trust certain entities, like Verisign. When we contact a web site over HTTPS, the browser automatically checks if Verisign (or another trusted third party certificate authority) agrees that this is the website that it claims to be. The magic of Public Key Cryptography and how this works is a whole separate topic, which I recommend you investigate (just google for it :) ).
Separate, secure channel. In this model, we use a separate channel, like an administrator who transfers the secret from one node to the other. The admin can do this in any manner s/he wishes, such as encrypted data carried carried via USB stick over sneakernet, or the data can be transferred across a separate SFTP server that s/he has already bootstrapped and can verify that it's secure (such as with his/her own internal certificate authority). Some variations of this are sharing a PGP key on a business card (if you trust that the person giving you the business card is the person with whom you want to communicate), or calling the key-owner over the phone and verbally confirming that the hash of the data you received is the same as the hash of the data they sent.
There are on-line key exchange protocols - you can look them up, probably even on Wikipedia, using the phrase "key exchange", but you need to be careful that they actually guarantee the things you need to determine - like how does the protocol authenticate the other side of the communication channel. For example, Diffie Hellman guarantees that you have exchanged a key without ever exchanging the actual contents of the key, but you don't know with whom you are communicating - it's an anonymous key exchange model.
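For intuition, the Diffie-Hellman idea mentioned above can be shown with toy numbers — illustration only; never use tiny parameters or hand-rolled key exchange in real systems:

```python
# Toy Diffie-Hellman -- both sides derive the same secret without
# ever sending their private keys over the channel.
p, g = 23, 5                       # public prime modulus and generator

a, b = 6, 15                       # Alice's and Bob's private keys

A = pow(g, a, p)                   # Alice sends A over the open channel
B = pow(g, b, p)                   # Bob sends B over the open channel

alice_secret = pow(B, a, p)        # Alice's shared secret
bob_secret = pow(A, b, p)          # Bob's shared secret

print(alice_secret == bob_secret)  # -> True
```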
You also mention that you're worried about message replay. Modern secure communication protocols like SSH and TLS protect against this. Any good protocol will have received analysis about its security properties, which are often well described on Wikipedia.
Oh, and you should not create your own protocol. There are many tomes about how to write secure protocols, analyses of existing protocols and there security properties (or lack thereof). Unless you're planning to become an expert in the topic (which will take many years and thousands of pages of reading), you should take the path of least resistance and just use a well known, well exercised, well respected protocol that does the work you need. | 1 | 0 | 0 | 0 | I'm building an authentication server in python and was wondering about how I could secure a connection totally between two peers totally? I cannot see how in any way a malicious user wouldn't be able to copy packets and simply analyze them if he understands what comes in which order.
Admitting a client server schema. Client asks for an Account. Even though SRP, packets can be copied and sent later on to allow login.
Then now, if I add public - private key encryption, how do I send the public key to each other without passing them in an un-encrypted channel?
Sorry if my questions remains noobish or looks like I haven't minded about the question but I really have a hard time figuring out how I can build up an authentication process without having several security holes. | How could i totally secure a connection between two nodes? | 0 | 0.197375 | 1 | 0 | 1 | 1,317 |
18,066,856 | 2013-08-05T20:12:00.000 | 0 | 1 | 0 | 0 | 0 | python,api,lotus-notes,email-attachments | 0 | 18,662,488 | 0 | 1 | 0 | false | 0 | 0 | You can do this in LotusScript as an export of data. This could be an agent that walks down a view in Notes, selects a document, and puts the document's attachments into a directory. Then, with those objects in the directory (or directories), you can run any script you like, such as a shell script.
With LotusScript you can pull out metadata or other meaningful text for your directory name. Detach the objects you want from the rich text, then move to the next document. The view that you travel down will affect the type of documents that you are working with.
18,068,855 | 2013-08-05T22:27:00.000 | 1 | 0 | 0 | 0 | 0 | python,architecture,twisted,scalability,distributed | 0 | 18,069,861 | 0 | 1 | 0 | false | 0 | 0 | There are many solutions to implement a shared database. It depends on your technology stack, network architecture, programming language(s), etc. This is too broad of a question to be answered in a few paragraphs. Pick one approach and go with it, but make your code modular enough to replace your approach with another if necessary.
Update: Based on your comment that you are using Twisted, I will ask you a question. If you had a cluster of Twisted servers that are all sharing network state (your "distributed nodes"), how would you request your "complex operations" from those servers and how would you get back the results? If you can answer this in enough detail, you will have determined the requirements of your nodes. And then you can determine how they share and update the network state. At that point, you can ask a much more specific question (like "how do I replicate memcache across my nodes?"). | 1 | 2 | 0 | 0 | I apologize in advance for how long this explanation is, I don't know how to make it more concise because I imagine almost all of this is relevant. Sorry!
I'm designing a game server in Python with Twisted (probably not relevant, as this is an architecture question).
The general idea is that players connect and are placed into a lobby. They can then queue for a match (for instance, deathmatch or team deathmatch), and, when a match is found, they are placed into that game. When the game ends they are placed back into the lobby.
Because I'm aware of how complex distributed systems can be, I tried to simplify it as much as possible. The idea I came up with was to abstract all the information about a player into an object. All game logic is implemented in a GameHandler. When a player connects, they're assigned to a Lobby(GameHandler) instance. When they join a match, they are reassigned to a, say, Deathmatch(GameHandler) instance (which are held in a map of server: gamehandler).
At that point, when the player is added to a match, the server they're actually connected to serves as a reverse proxy (I didn't write the client and it can't be modified, there can't be any connection re-negotiation) and sends the info about the player to the match server. Then, using the instance map, all traffic from that player is routed without being looked at to the game instance they're in, and vice versa with an ID system. I don't think this is a problem because the servers should all be able to forward data on a gigabit LAN.
When a game is over, it just notifies all the Lobby servers (reverse proxies) that forwarded players, and they're returned back to the Lobby instance.
That should mean that I can scale out with resources by adding backend servers, scale out with network links by adding more reverse proxy lobby-only servers, and I can also scale up on any of the individual nodes. There is also no technical limitation that forces backend servers to be backend, every single server could have a Lobby instance, but games could be distributed across the network.
Now, so far so good (In theory, I haven't started implementing the distribution yet because I want to work out the main points beforehand), but that leaves one major question:
How do I control metainformation that all nodes need to know?
For instance:
How many players are online for the server list to display before a client connects?
Is matchmaking implemented (I'm planning on using Microsoft's TrueSkill algorithm, if that matters) in some P2P manner or should I delegate an entire server just for that (or even for metainformation)?
What about a party system where players join a queue together? Which server keeps track of the players in the group?
How do I manage configuration, like banned players? Every forward server would need to know about them.
If the lobby instance the player connected to happens to be full, how do I find another lobby instance that isn't full so I can route their connection there? This goes back to the first point, I need to have a way for nodes to easily query the network state
I could implement a P2P solution, or a primary server to handle this sort of thing. My major concern would be that adding a primary control server would add a single point of failure (which I would like to avoid), but the P2P solution would seem to be an order of magnitude more complex, and potentially slow things down significantly or use a fair amount of resources caching data on all the nodes.
So the question is: Is my design decent, and what do you think the pros and cons of those two solutions to the metainformation problem are? Is there a better third solution? What do you think is best?
Thank you in advance, your help is very much appreciated. | How do I manage metainformation in a horizontally scaled game server? | 0 | 0.197375 | 1 | 0 | 0 | 246 |
18,069,628 | 2013-08-05T23:48:00.000 | 0 | 0 | 1 | 0 | 0 | python,unicode,decode,encode | 0 | 18,069,703 | 0 | 3 | 0 | false | 0 | 0 | Unicode strings have the same methods as standard strings: you can remove '\n' with line.replace('\n', '') (note: not the raw string r'\n', which is a literal backslash followed by 'n') and check whether it is present with '\n' in line. | 1 | 0 | 0 | 0 | How do I remove line breaks, i.e. '\n', from unicode text read from a text file using Python? Also, how do I test whether a value in a list is a line break or not in a unicode string? | how to avoid linebreakers from unicode string read from text file in python | 0 | 0 | 1 | 0 | 0 | 129
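A runnable illustration of the corrected call — it must be '\n' (a newline), not the raw string r'\n' (a backslash and an 'n'):

```python
text = u"first line\nsecond line\n"

# replace('\n', ...) removes real line breaks; r'\n' would look for a
# literal backslash followed by 'n' and match nothing in this text.
cleaned = text.replace("\n", " ").strip()
print(cleaned)           # -> first line second line
print("\n" in cleaned)   # -> False
```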
18,079,351 | 2013-08-06T11:48:00.000 | 0 | 0 | 0 | 0 | 0 | python,user-interface,wxpython,wxwidgets | 0 | 18,079,725 | 0 | 4 | 0 | false | 0 | 1 | You should be able to call the Layout method on the parent of the sizer, this will make it recalculate the shown items. | 3 | 0 | 0 | 0 | I have a wxpython grid sizer that is sizing sublists of bitmap buttons. The master list I would like to create just once because creating these buttons takes a considerable amount of time, and thus I do not want to destroy them. My idea is to somehow remove all of the buttons from the sizer, make a new list of the buttons that I want the sizer to contain, and then use the sizer's AddMany method.
If I can't remove the buttons from the sizer without destroying them, then is there a way to use the sizer's Show method to hide some of the items, but then have the sizer adjust to fill in the gaps? When I hide them, all I can get them to do right now is just to have them disappear and leave a gap. I need the next item to be adjusted to the gap's place.
Also is there a way to sort the grid sizer's item list?
Thanks for any help you can offer. | How to either sort a wxsizer or remove items without destroying them? | 1 | 0 | 1 | 0 | 0 | 698 |
18,079,351 | 2013-08-06T11:48:00.000 | 0 | 0 | 0 | 0 | 0 | python,user-interface,wxpython,wxwidgets | 0 | 18,079,420 | 0 | 4 | 0 | false | 0 | 1 | So I found out that the detach method is what I'm looking for! I would still be interested to know of a way to sort a sizer's item list though, without detaching all of the items and then re attaching a sublist. | 3 | 0 | 0 | 0 | I have a wxpython grid sizer that is sizing sublists of bitmap buttons. The master list I would like to create just once because creating these buttons takes a considerable amount of time, and thus I do not want to destroy them. My idea is to somehow remove all of the buttons from the sizer, make a new list of the buttons that I want the sizer to contain, and then use the sizer's AddMany method.
If I can't remove the buttons from the sizer without destroying them, then is there a way to use the sizer's Show method to hide some of the times, but then have the sizer adjust to fill in the gaps? When I hide them, all I can get them to do right now is just to have them disappear and leave a gap. I need the next item to be adjusted to the gap's place.
Also is there a way to sort the grid sizer's item list?
Thanks for any help you can offer. | How to either sort a wxsizer or remove items without destroying them? | 1 | 0 | 1 | 0 | 0 | 698 |
18,079,351 | 2013-08-06T11:48:00.000 | 0 | 0 | 0 | 0 | 0 | python,user-interface,wxpython,wxwidgets | 0 | 18,086,362 | 0 | 4 | 0 | true | 0 | 1 | You can't sort the sizer items in place. It would be possible to write your own function for doing this, of course, but it would use wxSizer::Detach() and Insert() under the hood anyhow. | 3 | 0 | 0 | 0 | I have a wxPython grid sizer that is sizing sublists of bitmap buttons. I would like to create the master list just once, because creating these buttons takes a considerable amount of time, and thus I do not want to destroy them. My idea is to somehow remove all of the buttons from the sizer, make a new list of the buttons that I want the sizer to contain, and then use the sizer's AddMany method.
If I can't remove the buttons from the sizer without destroying them, then is there a way to use the sizer's Show method to hide some of the items, but have the sizer adjust to fill in the gaps? When I hide them, all I can get them to do right now is disappear and leave a gap. I need the following items to shift into the gap's place.
Also is there a way to sort the grid sizer's item list?
Thanks for any help you can offer. | How to either sort a wxsizer or remove items without destroying them? | 1 | 1.2 | 1 | 0 | 0 | 698 |
18,089,598 | 2013-08-06T20:14:00.000 | 5 | 0 | 1 | 0 | 0 | python,mongodb,pymongo,bson | 0 | 18,089,722 | 0 | 3 | 0 | false | 0 | 0 | Assuming you are not specifically interested in MongoDB, you are probably not looking for BSON. BSON is just a different serialization format compared to JSON, designed for more speed and space efficiency. On the other hand, pickle is a more direct encoding of Python objects.
However, do your speed tests before you adopt pickle, to ensure it is better for your use case. | 1 | 18 | 0 | 1 | I have read somewhere that you can store Python objects (more specifically, dictionaries) as binaries in MongoDB by using BSON. However, right now I cannot find any documentation related to this.
Would anyone know how exactly this can be done? | Is there a way to store python objects directly in mongoDB without serializing them | 0 | 0.321513 | 1 | 1 | 0 | 23,835 |
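To make the BSON-versus-pickle tradeoff from the answer concrete, here is a minimal sketch of the pickle route: serialize the dict to raw bytes, then store those bytes in a document field. With pymongo you would typically wrap the bytes in `bson.Binary` before inserting; that wrapper (and pymongo itself) is an assumption here and is only shown in a comment, so the runnable part is purely the stdlib round trip.

```python
import pickle

doc = {"user": "alice", "scores": [1, 2, 3], "meta": {"active": True}}

# Serialize the Python dict to raw bytes...
blob = pickle.dumps(doc, protocol=pickle.HIGHEST_PROTOCOL)

# ...which could then be stored in a MongoDB field, e.g. (assumed pymongo usage):
#   collection.insert_one({"payload": bson.Binary(blob)})

# Reading it back restores the original object:
restored = pickle.loads(blob)
print(restored == doc)  # True
```

Note the usual caveat with pickle: only unpickle data you trust, since loading a pickle can execute arbitrary code.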
18,090,039 | 2013-08-06T20:40:00.000 | 0 | 0 | 0 | 1 | 0 | python,sockets,webserver,twisted.web | 0 | 18,104,107 | 0 | 2 | 0 | true | 0 | 0 | I have contacted Fatcow.com support. They do not support SSH connections, and they do not support Python 2.7 with the Twisted library, in particular a Python socket application running as a server. So it is a dead end.
Question resolved. | 1 | 0 | 0 | 0 | I have a small application written in Python using TwistedWeb. It is a chat server.
Everything is currently configured for localhost.
All I want is to run this Python script on a server (a shared server provided by Fatcow.com).
The script should run all the time, with clients connecting and disconnecting. Fatcow gives me Python 2.5 without any custom libraries. Is there a way, and a tutorial showing how, to make it work with TwistedWeb?
Thanks in advance. | How to install twistedweb python library on a web server | 0 | 1.2 | 1 | 0 | 0 | 817 |