Q_Id
int64
337
49.3M
CreationDate
stringlengths
23
23
Users Score
int64
-42
1.15k
Other
int64
0
1
Python Basics and Environment
int64
0
1
System Administration and DevOps
int64
0
1
Tags
stringlengths
6
105
A_Id
int64
518
72.5M
AnswerCount
int64
1
64
is_accepted
bool
2 classes
Web Development
int64
0
1
GUI and Desktop Applications
int64
0
1
Answer
stringlengths
6
11.6k
Available Count
int64
1
31
Q_Score
int64
0
6.79k
Data Science and Machine Learning
int64
0
1
Question
stringlengths
15
29k
Title
stringlengths
11
150
Score
float64
-1
1.2
Database and SQL
int64
0
1
Networking and APIs
int64
0
1
ViewCount
int64
8
6.81M
11,161,747
2012-06-22T18:03:00.000
-4
0
1
0
python,mtp
46,545,449
3
false
0
0
Simply connecting a USB cable between the phone and computer should work. It may be necessary to enable MTP transfers in the settings menu on your phone. The menu location is likely to differ between Android versions and phone models; try a Google search for "galaxy nexus enable mtp", and include your Android and phone version in the search. Make sure it is a good-quality USB cable: poor-quality cables will not make a good connection and therefore will not work reliably. A file management dialog comes up immediately on my desktop after hooking up a USB cable between my phone and laptop, showing both the phone's internal storage and the SD card. This allows me to transfer files both ways directly to the phone's SD storage (Linux Mint <-> LG Android ver. 5.1). Note that it is also possible to transfer files using Bluetooth. After establishing a connection, you would need to find the device name; then it would be possible to open the device using standard Python file constructs, i.e. popen(), etc.
2
25
0
How can I read from and write to my Galaxy Nexus phone, using MTP over a USB cable in python? I'm on a windows 7 computer.
How to access an MTP USB device with python
-1
0
1
15,627
11,161,747
2012-06-22T18:03:00.000
5
0
1
0
python,mtp
11,254,179
3
true
0
0
One way to do this would be to install ADB (the Android Debug Bridge, part of the SDK) and launch it as a child process from Python. ADB can be used to, among other things, read from or write to an Android device.
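A minimal sketch of that approach, assuming adb is installed and on the PATH (the function name and device paths here are hypothetical, not part of any adb API):

```python
import shutil
import subprocess

def adb_pull(remote_path, local_path):
    """Copy a file off the device with `adb pull` (adb must be installed)."""
    if shutil.which("adb") is None:
        raise RuntimeError("adb not found on PATH")
    # adb runs as a child process; stdout/stderr are captured for inspection
    return subprocess.run(["adb", "pull", remote_path, local_path],
                          capture_output=True, text=True)
```

`adb push`, `adb shell`, and `adb devices` can be driven the same way.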
2
25
0
How can I read from and write to my Galaxy Nexus phone, using MTP over a USB cable in python? I'm on a windows 7 computer.
How to access an MTP USB device with python
1.2
0
1
15,627
11,161,776
2012-06-22T18:05:00.000
0
0
1
1
python,centos,pip,python-module
42,556,424
2
false
0
0
If executing pip with sudo, you may want sudo's -H flag: -H, --set-home — set the HOME variable to the target user's home dir, e.g. sudo -H pip install virtualenv
2
16
0
When installing a package via sudo pip-python (CentOS 6 package: python-pip-0.8-1.el6.noarch), I sometimes get permission issues with the installed packages being readable only by root. Re-installing again one or two times usually fixes the problem. Has anyone experienced this? Or can anyone suggest any troubleshooting steps to nail down the cause?
pip: inconsistent permissions issues
0
0
0
11,217
11,161,776
2012-06-22T18:05:00.000
13
0
1
1
python,centos,pip,python-module
11,169,137
2
false
0
0
When you run a command using sudo, it preserves the user's umask. pip just installs files; it doesn't change access rights, so you'll end up with files whose access rights conform to the current user's umask, which may be owner-readable only (0077) and therefore readable by root only. That means you can set the umask to something sensible like umask 0022 before running sudo pip install, or use sudo su to open a root shell with default settings and then pip install.
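The effect described above can be demonstrated from Python itself on a Unix system (file names below are throwaways): a file created with mode 0o666 ends up with that mode masked by the process umask.

```python
import os
import stat

old_umask = os.umask(0o077)     # restrictive umask: the problem case
fd = os.open("demo_private.txt", os.O_CREAT | os.O_WRONLY, 0o666)
os.close(fd)

os.umask(0o022)                 # the sensible default suggested above
fd = os.open("demo_shared.txt", os.O_CREAT | os.O_WRONLY, 0o666)
os.close(fd)
os.umask(old_umask)             # restore the original umask

print(oct(stat.S_IMODE(os.stat("demo_private.txt").st_mode)))  # → 0o600
print(oct(stat.S_IMODE(os.stat("demo_shared.txt").st_mode)))   # → 0o644
os.remove("demo_private.txt")
os.remove("demo_shared.txt")
```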
2
16
0
When installing a package via sudo pip-python (CentOS 6 package: python-pip-0.8-1.el6.noarch), I sometimes get permission issues with the installed packages being readable only by root. Re-installing again one or two times usually fixes the problem. Has anyone experienced this? Or can anyone suggest any troubleshooting steps to nail down the cause?
pip: inconsistent permissions issues
1
0
0
11,217
11,161,901
2012-06-22T18:14:00.000
1
0
1
0
python,blender
44,329,972
5
false
0
0
If you are on Windows you can just do python setup.py install as usual using the python interpreter given by blender. So for example, 'c:/Program Files/Blender Foundation/Blender/2.78/python/bin/python.exe' setup.py install. On Linux, I think the native python3 interpreter is used so there is no problem of this kind.
1
9
0
I've been trying to install pyserial for blender, but I can only install it to python32 on my C drive, is there anything i can do to have it install to blender or have blender import from python32
How to install python modules in blender
0.039979
0
0
18,980
11,162,214
2012-06-22T18:39:00.000
0
0
1
1
python,fabric
11,497,518
2
false
0
0
You could always use the new execute() and wrap that in a try/except or just look at the return codes from your run()s.
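The shape of that pattern, with stand-in functions (deploy/cleanup are hypothetical names; in real Fabric code they would call execute() or run()):

```python
calls = []

def deploy():
    calls.append("deploy")        # stands in for execute(my_deploy_task)
    raise RuntimeError("deploy failed")

def cleanup():
    calls.append("cleanup")       # delete temp files/folders; must always run

try:
    try:
        deploy()
    finally:
        cleanup()                 # runs whether deploy() succeeded or not
except RuntimeError:
    pass

print(calls)  # → ['deploy', 'cleanup']
```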
1
5
0
In the eventuality that Fabric exits cleanly or not, I need to execute a bunch of clean-up tasks (mostly delete temp files and folders). How can I achieve this with Fabric?
Fabric equivalent of try finally
0
0
0
1,088
11,164,176
2012-06-22T21:10:00.000
2
1
0
0
python,process,communication,iostream
11,164,681
4
true
0
0
On Unix systems, the usual way to open a subprocess is with fork(), which leaves any open file descriptors (small integers representing open files or sockets) available in both the child and the parent, followed by exec(), which also allows the new executable to use the file descriptors that were open in the old process. This functionality is preserved in the subprocess.Popen() call (adjustable with the close_fds argument). Thus, what you probably want to do is use os.pipe() to create pairs of file descriptors to communicate on, then use Popen() to launch the other process, passing each fd returned by the pipe() calls as an argument so it knows which fds to use.
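A runnable sketch of that scheme, with a small inline Python script standing in for the C++ program (pass_fds requires Python 3.2+):

```python
import os
import subprocess
import sys

r, w = os.pipe()                 # an extra channel alongside stdout/stderr

# The child is told which fd to write to via argv; a real C++ program would
# parse the same argument and write() to that descriptor.
child_code = (
    "import os, sys\n"
    "fd = int(sys.argv[1])\n"
    "os.write(fd, b'hello from child')\n"
    "os.close(fd)\n"
)
proc = subprocess.Popen(
    [sys.executable, "-c", child_code, str(w)],
    pass_fds=(w,),               # keep the write end open in the child
)
os.close(w)                      # the parent only reads from this pipe
data = os.read(r, 1024)
proc.wait()
os.close(r)
print(data)  # → b'hello from child'
```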
2
0
0
A python program opens a new process of the C++ program and is reading the processes stdout. No problem so far. But is it possible to have multiple streams like this for communication? I can get two if I misuse stderr too, but not more. Easy way to hack this would be using temporary files. Is there something more elegant that does not need a detour to the filesystem? PS: *nix specific solutions are welcome too
C++ to python communication. Multiple io streams?
1.2
0
0
270
11,164,176
2012-06-22T21:10:00.000
0
1
0
0
python,process,communication,iostream
11,165,324
4
false
0
0
Assuming a Windows machine, you could try using the clipboard to exchange information between the Python process and the C++ program. Assign some unique process ID followed by your information and write it to the clipboard on the Python side, then parse the string on the C++ side. It's akin to using temporary files, but all done in memory. The drawback is that you cannot use the clipboard for any other application. Hope it helps.
2
0
0
A python program opens a new process of the C++ program and is reading the processes stdout. No problem so far. But is it possible to have multiple streams like this for communication? I can get two if I misuse stderr too, but not more. Easy way to hack this would be using temporary files. Is there something more elegant that does not need a detour to the filesystem? PS: *nix specific solutions are welcome too
C++ to python communication. Multiple io streams?
0
0
0
270
11,164,651
2012-06-22T21:55:00.000
1
0
1
1
linux,python-2.7
11,170,901
1
false
0
0
You are asking for a number that is nearly impossible to calculate and has very little value. Any Linux system that has been running for a while will have hardly any 'free' RAM available. Just cat /proc/meminfo - the MemFree entry is usually on the order of just a few megabytes. So, where did that memory go? The kernel caches all disk access, for starters; that's usually visible in the Cached entry. The disk cache will be pruned when you require more memory, so you could add that number to MemFree. But if an application allocates (malloc() in C) 2 gigabytes on a system with exactly 2 gigabytes of RAM, that will usually just be granted: you get a valid pointer back. However, none of the RAM is actually reserved for your application - that only happens when your application starts touching memory pages - each touched page will be allocated. The maximum size you can ask for is available as CommitLimit. But the application code itself might not be in RAM either - the binary file and libraries are mmap()ed, so again only pages that are touched are loaded into RAM. If you run a tool like top, you get all kinds of memory info per process, including VIRT, RES and SHR. VIRT is 'virtual' - all memory pages the app would need if it claimed every page it has asked for. RES is 'resident' - the amount of memory actually used. SHR is 'shared' - the number of pages shared with other applications, e.g. libraries that are loaded in multiple applications. So, what is the value of knowing how much memory is available? You can start an application that could require significantly more RAM than your system has, and yet it runs... You might even be able to run the application twice or thrice - code pages are shared anyway... Note: the above answer cuts quite a few corners; the real mechanisms are significantly more complex, and I haven't even started bringing swap space into the story. But this will do for you, I hope...
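Reading those entries from Python is straightforward; a sketch, assuming a Linux machine (all values in /proc/meminfo are in kB):

```python
def meminfo():
    """Parse /proc/meminfo into a dict of kB values (Linux only)."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, _, rest = line.partition(":")
            fields = rest.split()
            if fields:
                info[key.strip()] = int(fields[0])
    return info

m = meminfo()
# MemFree plus Cached approximates the reclaimable figure discussed above
print(m["MemFree"] + m.get("Cached", 0), "kB")
```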
1
0
0
So, the title describes almost all the necessary to answer me. Just one more thing: please, just reply about libraries installed with Python by default, as the app which I'm developing is part of the Ubuntu App Showdown. Running Python 2.7, Ubuntu 12.04.
How can I detect my RAM free and total space in Python?
0.197375
0
0
186
11,165,937
2012-06-23T01:02:00.000
1
0
1
0
python,python-2.7
11,165,954
1
true
0
0
Put the directory containing your module (let's call it functions.py) into the PYTHONPATH environment variable. Then you'll be able to use import functions to get your functions. Pip also seems to have support for this: pip install -e src/mycheckout for example, but I don't quite understand the ramifications of that.
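A self-contained sketch of the sys.path/PYTHONPATH route: a throwaway functions.py written to a temp folder stands in for the development folder.

```python
import importlib
import os
import sys
import tempfile

dev_folder = tempfile.mkdtemp()            # stands in for your dev folder
with open(os.path.join(dev_folder, "functions.py"), "w") as f:
    f.write("def double(x):\n    return 2 * x\n")

sys.path.insert(0, dev_folder)             # what PYTHONPATH would do
functions = importlib.import_module("functions")
print(functions.double(21))  # → 42
```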
1
0
0
I'm building a personal module of functions, generic functions for my scientific work. It's not finished so I would like to keep it in it's development folder for the time being without installing it like you install every other modules with pip, etc. Now, I also need to work on other non-related projects but still need the functions. My question is, having those 2 projects in completely independent folders, how do I import one to use in the other? thanks EDIT: Just another quick one. If both were inside their respective folder but with the same root. Would there be a better/easier way to do this?
Importing unfinished modules
1.2
0
0
56
11,166,014
2012-06-23T01:15:00.000
5
0
1
0
python,django,installation,duplicates
11,166,438
3
false
1
0
Check out virtualenv and virtualenvwrapper
2
7
0
In the process of trying to install django, I had a series of failures. I followed many different tutorials online and ended up trying to install it several times. I think I may have installed it twice (which the website said was not a good thing), so how do I tell if I actually have multiple versions installed? I have a Mac running Lion.
How to tell if you have multiple Django's installed
0.321513
0
0
2,635
11,166,014
2012-06-23T01:15:00.000
9
0
1
0
python,django,installation,duplicates
11,166,539
3
true
1
0
Open a terminal and type python, then type import django, then django; it will tell you the path of the django you are importing. Go to that folder [it should look something like this: /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/] and look for more than one instance of django (if there is more than one, they will be right next to each other). Delete the one(s) you don't want.
2
7
0
In the process of trying to install django, I had a series of failures. I followed many different tutorials online and ended up trying to install it several times. I think I may have installed it twice (which the website said was not a good thing), so how do I tell if I actually have multiple versions installed? I have a Mac running Lion.
How to tell if you have multiple Django's installed
1.2
0
0
2,635
11,166,890
2012-06-23T04:37:00.000
1
0
1
0
python,plone
11,255,794
1
false
0
0
Plone can, with appropriate tuning, handle this kind of load, but as Martijn points out above, Plone is not really a document management or versioning solution for large archives of binary files--it is a web content management system first and foremost. I would consider looking at something like Alfresco.
1
0
0
How scalable is plone/zope in terms of importing terabytes of existing data on a file system? I am using Plone 4.1 and wish to import existing files/images etc on the file system in linux debian.
How scalable is plone/zope in terms of importing terabytes of existing data on a file system?
0.197375
0
0
208
11,167,328
2012-06-23T06:08:00.000
0
0
1
0
python
11,167,357
6
false
0
0
You may want to try something like an array of dictionaries or objects. Arrays are ordered. The text for the menu item would be in some field of the array or object.
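One ordered alternative from the standard library (the menu entries below are made up): collections.OrderedDict remembers insertion order, so the dropdown order stays stable as you add items.

```python
from collections import OrderedDict

menu = OrderedDict([
    ("open", "Open file..."),
    ("save", "Save file"),
    ("quit", "Quit"),
])
print(list(menu.keys()))  # → ['open', 'save', 'quit']
```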
1
4
0
Can I expect the keys() to remain in the same order? I plan to use them for a dropdown box and I dont want them to shift if I add or delete items from the dictionary.
Order of list obtained by dictionary's keys()
0
0
0
129
11,167,518
2012-06-23T06:47:00.000
4
0
0
0
python,sqlalchemy,flask,flask-sqlalchemy
11,210,290
3
false
1
0
At the time you execute create_all, models.py has never been imported, so no class is declared. Thus, create_all does not create any table. To solve this problem, import models before running create_all or, even better, don't separate the db object from the model declaration.
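The failure mode can be shown without Flask at all; the registry below is a stand-in for SQLAlchemy's declarative metadata, which only knows about models whose modules have actually been imported:

```python
registry = []                 # stand-in for db.metadata.tables

def create_all():
    """'Create' one table per registered model."""
    return list(registry)

print(create_all())           # → [] : models.py not imported yet, no tables

registry.append("user")       # this is what `import models` effectively does
print(create_all())           # → ['user'] : now the table gets created
```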
1
1
0
Trying to set up Flask and SQLAlchemy on Windows but I've been running into issues. I've been using Flask-SQLAlchemy along with PostgreSQL 9.1.4 (32 bit) and the Psycopg2 package. Here are the relevant bits of code, I created a basic User model just to test that my DB is connecting, and committing. The three bits of code would come from the __init__.py file of my application, the models.py file and my settings.py file. When I try opening up my interactive prompt and try the code in the following link out I get a ProgrammingError exception (details in link). What could be causing this? I followed the documentation and I'm simply confused as to what I'm doing wrong especially considering that I've also used Django with psycopg2 and PostgreSQL on Windows.
Setting up Flask-SQLAlchemy
0.26052
1
0
7,080
11,167,771
2012-06-23T07:40:00.000
1
0
0
0
python,google-app-engine,jinja2
11,167,788
2
true
1
0
What you are thinking about, and moving toward (whether you know it or not) is called a content management system. Most of them store content in a database and provide a user interface to allow editing it, just as you're designing. Perhaps you could use off-the-shelf parts? I don't know exactly which ones are appengine-based, but this is a very common task and I'm sure you'll save time by using others' work.
1
1
0
So I have a python webapp on Google App Engine and am using the jinja2 template engine. I have a lot of text on the site that I want to update regularly, such as news sections and updates about the site. What is the most efficient way to go about doing this? Clearly the simplest short-term solution and what I am currently doing is just to change the HTML but I would like to give others access to this without giving them access to the server side of things. Should I just bite the bullet and write a interface on an admin page that allows users to edit it and then the server takes this and renders it in the News section? Any suggestions or tips would be great!
update text within html
1.2
0
0
119
11,168,747
2012-06-23T10:23:00.000
1
1
0
1
python,django,web-hosting,cpanel
11,169,154
2
true
1
0
You could try to put them on your PYTHONPATH. Usually, your current working directory is in your PYTHONPATH. If that changes, you might need to add a path to it (maybe in each file - you should check - or in one common file which is always included) and put the libraries there. You can do this with import sys; sys.path.append(the_path). I'm not sure all of the libraries will work, but for those which are pure Python, you should be able to copy/paste the source into a directory, and I think they will work.
2
0
0
I am using heliohost's free service to test my django apps. But Heliohost does not provide me shell access. Is there anyway to install python libraries on the host machine?
python libraries in cpanel
1.2
0
0
1,172
11,168,747
2012-06-23T10:23:00.000
0
1
0
1
python,django,web-hosting,cpanel
45,138,203
2
false
1
0
You should contact heliohost's support; this host has very good support staff who can help you or install any package you want.
2
0
0
I am using heliohost's free service to test my django apps. But Heliohost does not provide me shell access. Is there anyway to install python libraries on the host machine?
python libraries in cpanel
0
0
0
1,172
11,169,418
2012-06-23T12:15:00.000
4
0
0
0
c++,arrays,python-3.x,numpy,dynamic-arrays
57,324,664
13
false
0
0
Use LibTorch (PyTorch frontend for C++) and be happy.
1
104
1
Are there any C++ (or C) libs that have NumPy-like arrays with support for slicing, vectorized operations, adding and subtracting contents element-by-element, etc.?
NumPy style arrays for C++?
0.061461
0
0
81,916
11,170,120
2012-06-23T14:07:00.000
1
0
1
0
python,object-oriented-analysis
11,170,301
2
false
0
0
I would say that all values are objects anyway. Suppose that instead of a transaction class instance you had a dictionary {'transaction name':[123,'GBP','12/12/12',1234,'in']}. That dictionary is again an object; the difference is that it isn't your own class. Everything is an object anyway, and the fact that something is an object does not automatically make it bulky, large, slow, or whatever. You would probably still need to consider how many of those transaction objects you want to keep in memory at a given time. It's a matter of clear code design, in my opinion. Say you have a Book class with an action method that accepts transaction objects as an argument. When that action method uses object properties, it is much clearer than referring to the nth element of a list, for instance. The fact that it's a class also leaves you the opportunity to amend or add functionality in the future; for example, you might want to add logging of all transactions, or a withdraw method, at some point.
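For thousands of read-only records like this, a collections.namedtuple gives the class-style attribute access recommended above while staying lightweight (the field values come from the answer's example dictionary):

```python
from collections import namedtuple

Transaction = namedtuple(
    "Transaction", ["account", "currency", "date", "amount", "type"])

t = Transaction(123, "GBP", "12/12/12", 1234, "in")
print(t.amount)      # → 1234 : clearer than indexing t[3] on a plain list
```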
1
7
0
I'm thinking about a situation where I have an object "Transaction", that has quite a few properties to it like account, amount, date, currency, type, etc. I never plan to mutate these data points, and calculation logic will live in other classes. My question is, is it poor Python design to instantiate thousands of objects just to hold data? I find the data far easier to work with embedded in a class rather than trying to cram it into some combination of data structures.
Objects With No Behavior
0.099668
0
0
168
11,170,599
2012-06-23T15:15:00.000
2
0
1
0
python,memory-leaks
11,170,635
1
true
0
0
It's not possible in native Python to obtain this information from an object after it's been created, but you can override the __new__() method on a base class to record it somewhere on the object (getting it from the inspect module).
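A minimal sketch of that idea (the class names here are made up): the base class's __new__() uses the inspect module to record the file and line of the caller on each new instance.

```python
import inspect

class Tracked:
    def __new__(cls, *args, **kwargs):
        obj = super().__new__(cls)
        caller = inspect.stack()[1]          # frame that called the constructor
        obj._created_at = (caller.filename, caller.lineno)
        return obj

class Widget(Tracked):
    pass

w = Widget()
print(w._created_at)     # (file, line) where Widget() was instantiated
```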
1
1
0
Is there a way to find out where a Python object was instantiated the first time? Like the line number or even the full traceback at the creation? For memory profiling I'd like to examine all objects after the run. (I'm aware of memory profiling tools, but they are hard to install or don't do this task).
Find out where Python object was created
1.2
0
0
157
11,170,827
2012-06-23T15:46:00.000
-1
0
1
0
python,version,virtualenv
60,839,180
8
false
0
0
I had this problem and just decided to rename one of the programs from python.exe to python2.7.exe. Now I can specify at the command prompt which program to run easily, without introducing any scripts or changing environment paths. So I have two programs: python2.7 and python (the latter of which is v3.8, aka the default).
3
95
0
How do I, in the main.py module (presumably), tell Python which interpreter to use? What I mean is: if I want a particular script to use version 3 of Python to interpret the entire program, how do I do that? Bonus: How would this affect a virtualenv? Am I right in thinking that if I create a virtualenv for my program and then tell it to use a different version of Python, then I may encounter some conflicts?
How do I tell a Python script to use a particular version
-0.024995
0
0
224,173
11,170,827
2012-06-23T15:46:00.000
-1
0
1
0
python,version,virtualenv
56,130,973
8
false
0
0
While working with different versions of Python on Windows, I use this method to switch between versions; I think it is better than messing with shebangs and virtualenvs. 1) Install the Python versions you desire. 2) Go to Environment Variables > PATH (I assume the paths of the Python versions have already been added there). 3) Suppress the paths of all Python versions you don't want to use (don't delete the paths, just add a suffix like "_sup"). 4) Call python from a terminal (Windows will skip the suppressed paths, find the python.exe at the path you did not suppress, and use that version from then on). 5) Switch between versions by playing with the suffixes.
3
95
0
How do I, in the main.py module (presumably), tell Python which interpreter to use? What I mean is: if I want a particular script to use version 3 of Python to interpret the entire program, how do I do that? Bonus: How would this affect a virtualenv? Am I right in thinking that if I create a virtualenv for my program and then tell it to use a different version of Python, then I may encounter some conflicts?
How do I tell a Python script to use a particular version
-0.024995
0
0
224,173
11,170,827
2012-06-23T15:46:00.000
0
0
1
0
python,version,virtualenv
11,170,838
8
false
0
0
You can't do this within the Python program, because the shell decides which version to use if you use a shebang line. If you aren't using a shebang line and just type python myprogram.py, it uses the default version, unless you decide specifically which Python version to use by typing pythonXXX myprogram.py. Once your Python program is running, you have already decided which Python executable to use to get it running. virtualenv is for segregating Python versions and environments; it specifically exists to eliminate conflicts.
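The shebang lives in the script but is read by the shell, not by Python; inside the program you can only detect (not switch) the running interpreter. A sketch:

```python
#!/usr/bin/env python3
# The line above tells a Unix shell which interpreter to launch; it has no
# effect when you run `python myprogram.py` explicitly.
import sys

if sys.version_info < (3,):
    raise SystemExit("This script requires Python 3")
print(sys.version_info.major)
```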
3
95
0
How do I, in the main.py module (presumably), tell Python which interpreter to use? What I mean is: if I want a particular script to use version 3 of Python to interpret the entire program, how do I do that? Bonus: How would this affect a virtualenv? Am I right in thinking that if I create a virtualenv for my program and then tell it to use a different version of Python, then I may encounter some conflicts?
How do I tell a Python script to use a particular version
0
0
0
224,173
11,170,949
2012-06-23T15:59:00.000
1
0
0
0
python,python-c-extension
11,171,590
4
false
0
0
Physically copy the socket module to socket_monkey and go from there? I don't feel you need any "clever" workaround... but I might well be oversimplifying!
1
39
0
I need to make a copy of a socket module to be able to use it and to have one more socket module monkey-patched and use it differently. Is this possible? I mean to really copy a module, namely to get the same result at runtime as if I've copied socketmodule.c, changed the initsocket() function to initmy_socket(), and installed it as my_socket extension.
How to make a copy of a python module at runtime?
0.049958
0
1
7,543
11,172,215
2012-06-23T19:07:00.000
0
1
0
0
python-c-extension,python-extensions
20,085,116
2
false
0
1
I had a similar problem and resolved it by adding the Modules\ directory (from your Python source) to the C/C++ Additional Include Directories, then #include "socketmodule.h". I don't know if this is the best solution, but it worked for me!
1
1
0
I'd like to invoke PySocketModule_ImportModuleAndAPI function defined in socketmodule.h in my Python C-extension.
Is it possible to include socketmodule.h in Python C extensions?
0
0
1
142
11,174,324
2012-06-24T01:20:00.000
0
0
0
0
python,deployment,fixtures
11,174,357
1
false
0
0
You have to give the user that loads the fixture the privileges to write to the database, regardless of which way you load the data. With Postgres you can grant specific users passwordless login and eliminate the problem of a shared password, or you can store the password in the .pgpass file in the home directory. Personally I find Fabric a very nice tool for deploys; in this specific case I would use it to connect to the remote machine and issue a psql -f 'dump_data.sql' -1 command.
1
1
0
I need to load fixtures into the system when a new VM is up. I have dumped MongoDB and Postgres. But I can't just sit in front of the PC whenever a new machine is up. I want to be able to just "issue" a command or the script automatically does it. But a command like pg_dump to dump PostgreSQL will require a password. The problem is, the script that I uses to deploy these fixtures should be under version control. The file that contains this password (if that's the only way to do automation) will not be committed. If it needs to be committed, the deploy repository is restricted for internal developers only. My question is... what do you consider a good practice in this situation? I am thinking of using Python's Popen to issue these commands. Thanks. I also can put it in the cache server... but not sure if it's the only "better" way...
Security concerns while loading fixtures
0
1
0
70
11,174,713
2012-06-24T03:21:00.000
0
0
1
0
java,python,image-processing,wolfram-mathematica,image-stitching
11,219,818
6
false
0
0
In Mathematica, you can use ImageCorrespondingPoints within the overlap region, and then FindGeometricTransform to compute an affine transformation that takes one image into the other. Note that the size of the images and of the overlap regions influences the accuracy of the transformation. If you are doing something complex (like combining satellite images), you will need an overall geometric model for the result and then map each image to it. In such cases an affine transformation may not be sufficient.
2
6
0
I have a bunch of images in a folder that are effectively just pieces of one image that was broken into overlapping parts. How can I quickly and programmatically recombine these images to create the original image? I would prefer a solution that uses python or mathematica (or is an existing application), but I am open to other ideas as well (I am fairly proficient with Java).
Stitch together images with exactly matching (pixel to pixel) overlaps
0
0
0
3,573
11,174,713
2012-06-24T03:21:00.000
0
0
1
0
java,python,image-processing,wolfram-mathematica,image-stitching
11,174,812
6
false
0
0
What you want is a tool for creating panoramas. There are various tools sold to do this, with various features. Things to think about: matching position vertically and horizontally; varying brightness between images; correcting for camera rotation and angle.
2
6
0
I have a bunch of images in a folder that are effectively just pieces of one image that was broken into overlapping parts. How can I quickly and programmatically recombine these images to create the original image? I would prefer a solution that uses python or mathematica (or is an existing application), but I am open to other ideas as well (I am fairly proficient with Java).
Stitch together images with exactly matching (pixel to pixel) overlaps
0
0
0
3,573
11,175,645
2012-06-24T07:14:00.000
0
1
1
0
python
11,175,666
10
false
0
0
You have only two values, so you know in advance the precise structure of the output: it will be divided into two regions of varying lengths.
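That observation gives a one-pass O(n) solution without sort(), sorted(), or count(): tally the zeros, then emit the two regions.

```python
def sort_binary(bits):
    """Sort a list of 0s and 1s by counting the zeros in a single pass."""
    zeros = 0
    for b in bits:
        if b == 0:
            zeros += 1
    return [0] * zeros + [1] * (len(bits) - zeros)

print(sort_binary([0, 0, 1, 0, 1, 1, 0]))  # → [0, 0, 0, 0, 1, 1, 1]
```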
1
2
0
What is the most efficient way to sort a list, [0,0,1,0,1,1,0] whose elements are only 0 & 1, without using any builtin sort() or sorted() or count() function. O(n) or less than that
Sort a list efficiently which contains only 0 and 1 without using any builtin python sort function?
0
0
0
1,534
11,176,273
2012-06-24T09:15:00.000
2
1
0
0
python,caching,memcached
11,176,318
1
false
1
0
There's no way to do it that's guaranteed to work. The only way I found is the one you'll find on Google, but there's a restriction: only 1 MB will be returned - it may not be all keys - and it will probably be quite slow. If you really, really have to have all those keys, you'd probably have to hack the source code. I would say: no, you can't. Why do you need all those keys? I would consider redesigning your application so that your admin panel does not depend on the internals of a caching server.
1
2
0
I know this question has been asked many times and it's also covered in official memcached FAQ. But my case is - I want to use it just for admin panel purposes. I want to see keys with values in my admin page so it doesn't matter if it's slow and against the best practices. Please advise, if it's possible.
List all memcached keys/values
0.379949
0
0
839
11,177,018
2012-06-24T11:21:00.000
0
0
0
0
python,multithreading,sockets
11,177,059
4
false
0
0
I don't know of a way to prioritize at the Python level. So I'd suggest using 2 processes, not threads, and prioritize at the OS level. On Unix you can use os.nice() to do that. You'd need to use 2 sockets then, and your sharing problem would be solved at the same time.
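A quick sketch of os.nice() on Unix: an increment of 0 just reads the current niceness, and a positive increment lowers the process's priority (unprivileged processes can only go up).

```python
import os

current = os.nice(0)       # read niceness without changing it
print(current)
lowered = os.nice(5)       # make this process 5 steps "nicer"
print(lowered)             # current + 5
```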
3
14
0
I have a design problem: I have two threads, a heartbeat/control thread and a messagehandler thread. Both are sharing the same socket, however the messageHandler thread only sends out messages and never receives. The heartbeat thread sends and receives (receives messages and reacts on heartbeats). The problem is I'm not sure if this is safe. There is no mechanism, I myself, implemented to see if the socket is being used. So is sharing a socket over python automatically thread safe or not? Also if it's not, the reason I put them in a separate thread is because the heartbeat is more important than the message handling. This means that if it gets flooded with messages, it still needs to do a heartbeat. So if I have to implement a bolt, is there away I can prioritize if my heartbeat/control thread needs to send a heartbeat?
Python: Socket and threads?
0
0
1
22,734
11,177,018
2012-06-24T11:21:00.000
0
0
0
0
python,multithreading,sockets
11,177,130
4
false
0
0
If both threads are client threads, it is a good idea to open two client sockets one to the server for heart beat and another for communication.
3
14
0
I have a design problem: I have two threads, a heartbeat/control thread and a messagehandler thread. Both are sharing the same socket, however the messageHandler thread only sends out messages and never receives. The heartbeat thread sends and receives (receives messages and reacts on heartbeats). The problem is I'm not sure if this is safe. There is no mechanism, I myself, implemented to see if the socket is being used. So is sharing a socket over python automatically thread safe or not? Also if it's not, the reason I put them in a separate thread is because the heartbeat is more important than the message handling. This means that if it gets flooded with messages, it still needs to do a heartbeat. So if I have to implement a bolt, is there away I can prioritize if my heartbeat/control thread needs to send a heartbeat?
Python: Socket and threads?
0
0
1
22,734
11,177,018
2012-06-24T11:21:00.000
10
0
0
0
python,multithreading,sockets
11,177,260
4
true
0
0
Unfortunately, a socket shared by multiple threads is not thread safe: think about a buffer that two threads operate on with no lock. The normal way to implement this is with two sockets, just like what FTP does: a cmd socket and a msg socket. If you want to implement this with one socket, you can put the different types of msgs into different queues and have a third thread consume the queues and send them through the single socket. In this way, you can give heartbeat msgs priority over data msgs.
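A sketch of that single-socket design: two queues and one sender thread that always drains heartbeats first (the sent list stands in for writing to the real socket).

```python
import queue
import threading

heartbeats = queue.Queue()
messages = queue.Queue()
sent = []                          # stands in for socket.sendall()

def sender():
    while True:
        try:
            msg = heartbeats.get_nowait()   # heartbeats always win
        except queue.Empty:
            msg = messages.get()            # otherwise block on data msgs
        if msg is None:                     # sentinel: shut the sender down
            break
        sent.append(msg)

messages.put("data-1")
heartbeats.put("HEARTBEAT")
messages.put(None)
t = threading.Thread(target=sender)
t.start()
t.join()
print(sent)  # → ['HEARTBEAT', 'data-1']
```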
3
14
0
I have a design problem: I have two threads, a heartbeat/control thread and a messagehandler thread. Both are sharing the same socket, however the messageHandler thread only sends out messages and never receives. The heartbeat thread sends and receives (receives messages and reacts on heartbeats). The problem is I'm not sure if this is safe. There is no mechanism, I myself, implemented to see if the socket is being used. So is sharing a socket over python automatically thread safe or not? Also if it's not, the reason I put them in a separate thread is because the heartbeat is more important than the message handling. This means that if it gets flooded with messages, it still needs to do a heartbeat. So if I have to implement a bolt, is there away I can prioritize if my heartbeat/control thread needs to send a heartbeat?
Python: Socket and threads?
1.2
0
1
22,734
11,178,136
2012-06-24T14:09:00.000
3
0
1
0
python,random,python-3.x
11,178,180
5
false
0
0
Make a wrapper function that increments the count and then makes the call and returns the result. If you consider that "manual", then yes.
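A sketch of such a wrapper (the `counted` decorator and the shared counter are my own invention, not part of the `random` module):

```python
import random

call_count = 0

def counted(fn):
    # Wrap a function from the random module so each call bumps a counter.
    def wrapper(*args, **kwargs):
        global call_count
        call_count += 1
        return fn(*args, **kwargs)
    return wrapper

# Use these names instead of random.randint / random.choice directly.
randint = counted(random.randint)
choice = counted(random.choice)

value = randint(1, 6)
randint(1, 6)
choice(["heads", "tails"])
```

After the three calls above, `call_count` is 3.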
1
3
0
I have a Python application in which I would like to monitor the number of calls to functions in the standard random module, at runtime; is there any nice way to do this, or do I have to "manually" do it?
Counting number of calls to random in Python?
0.119427
0
0
340
11,178,243
2012-06-24T14:24:00.000
5
1
1
0
java,python,jython
11,178,285
2
false
1
0
The point of using Jython is that you can write Python code and have it run on the JVM. Don't ruin that by making your Python into Java. If -- if -- it turns out that your data structure is too slow, you can drop-in replace it with a Java version. But that's for the optimisation stage of programming, which comes later. I guess I should try to answer your question. I would guess that using native Java structures will be faster (because the JVM can infer more about them than the Python interpreter can), but that might be counterbalanced by the extra processing needed to interface with Jython. Only tests will tell!
2
6
0
I'm trying to understand whether and under what circs one should use Python classes and/or Java ones. If making a specialist dictionary/Map kind of class, should one subclass from Python's dict, or from Java's HashMap or TreeMap, etc.? It is tempting to use the Python ones just because they are simpler and sexier. But one reason that Jython runs relatively slowly (so it appears to me to do) seems to have something to do with the dynamic typing. I'd better say I'm not that clear about all this, and haven't spent nocturnal hours poring over the Python/Jython interpreter code, to my shame. Anyway it just occurs to me that the Java classes might possibly run faster because the code might have to do less work. OTOH maybe it has to do more. Or maybe there's nothing in it. Anyone know?
Jython - is it faster to use Python data structures or Java ones?
0.462117
0
0
418
11,178,243
2012-06-24T14:24:00.000
4
1
1
0
java,python,jython
11,179,152
2
true
1
0
Generally, the decision shouldn't be one of speed - the Python classes will be implemented in terms of Java classes anyway, even if they don't inherit from them. So, the speed should be roughly comparable, and at most you would save a couple of method calls per operation. The bigger question is what you plan on doing with your class. If you're using it with Python APIs, you'll want to use the Python types, or something that behaves like them so that you don't have to do the work of implementing the entire Mapping protocol (only the bits your class changes). If you're using Java APIs, you will certainly need to meet the static type checks - which means you'll need to inherit from Java's classes. If this isn't easy to answer in your situation, start with the Python ones, since you (correctly ;-) find them "simpler and sexier". If your class doesn't pass outside the boundaries of your project, then this should be trivial to change later if the speed really becomes an issue - and at that point, you might also be thinking about questions like "could it help to implement it entirely at the Java level?" which you've hopefully recognised would be premature optimisation to think about now.
2
6
0
I'm trying to understand whether and under what circs one should use Python classes and/or Java ones. If making a specialist dictionary/Map kind of class, should one subclass from Python's dict, or from Java's HashMap or TreeMap, etc.? It is tempting to use the Python ones just because they are simpler and sexier. But one reason that Jython runs relatively slowly (so it appears to me to do) seems to have something to do with the dynamic typing. I'd better say I'm not that clear about all this, and haven't spent nocturnal hours poring over the Python/Jython interpreter code, to my shame. Anyway it just occurs to me that the Java classes might possibly run faster because the code might have to do less work. OTOH maybe it has to do more. Or maybe there's nothing in it. Anyone know?
Jython - is it faster to use Python data structures or Java ones?
1.2
0
0
418
11,181,195
2012-06-24T21:07:00.000
2
0
1
0
python
11,181,226
4
false
0
0
I would create a separate module called random_words, or something like that, hiding the list inside it and encapsulating choice(word_list) inside an interface function. As for loading them from a file: since I would need to type them anyway, and a Python file is just a text file in the end, I would type them right there, probably one per line for easy maintenance.
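The file-based variant can be sketched like this; a temporary file stands in for the shipped word list, and `load_words`/`random_word` are names I made up:

```python
import os
import random
import tempfile

# Write a tiny word file on the fly so the sketch is self-contained;
# in the game this would be a shipped words.txt, one word per line.
path = os.path.join(tempfile.mkdtemp(), "words.txt")
with open(path, "w") as f:
    f.write("cat\ndog\nrat\nhouse\n")

def load_words(filename):
    # Read the whole file once at startup; strip blank lines.
    with open(filename) as f:
        return [line.strip() for line in f if line.strip()]

word_list = load_words(path)

def random_word():
    return random.choice(word_list)

picked = random_word()
```

Reading the list once at startup keeps the per-call cost at a single `random.choice`, which is O(1) on a list.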
2
0
0
I am writing a game in python in which I must periodically pull a random word from a list of words. When I prototyped my game I declared a word_list = ['cat','dog','rat','house'] of ten words at the top of one of my modules. I then use choice(word_list) to get a random word. However, I must must change this temporary hack into something more elegant because I need to increase the size of the word list to 5,000+ words. If I do this in my current module it will look ridiculous. Should I put all of these words in a flat txt file, and then read from that file as I need words? If so, how would I best do that? Put each word an a separate line and then read one random line? I'm not sure what the most efficient way is.
Where should I declare a list of 5,000+ words?
0.099668
0
0
290
11,181,195
2012-06-24T21:07:00.000
3
0
1
0
python
11,181,204
4
false
0
0
Read the words from the file at startup (or at least the line indexes), and use as required.
2
0
0
I am writing a game in python in which I must periodically pull a random word from a list of words. When I prototyped my game I declared a word_list = ['cat','dog','rat','house'] of ten words at the top of one of my modules. I then use choice(word_list) to get a random word. However, I must must change this temporary hack into something more elegant because I need to increase the size of the word list to 5,000+ words. If I do this in my current module it will look ridiculous. Should I put all of these words in a flat txt file, and then read from that file as I need words? If so, how would I best do that? Put each word an a separate line and then read one random line? I'm not sure what the most efficient way is.
Where should I declare a list of 5,000+ words?
0.148885
0
0
290
11,182,558
2012-06-25T01:29:00.000
3
0
1
0
python,iteration,combinations
11,182,603
4
false
0
0
The main issue with this problem is that not all letters can be translated to symbols or numbers. You have to create a dictionary where the key is a lower-case letter and the value is a list of all possible replacements of that letter: {'a':['a','A','@'],...,'s':['s','S','5'],...,} Once your dictionary is built, the rest is just a matter of a simple Cartesian product of the different lists in the right order.
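A sketch of that Cartesian-product approach with `itertools.product`; the substitution table here is a small hypothetical one you would extend yourself:

```python
from itertools import product

# A hypothetical substitution table -- extend it with whatever
# letter-to-symbol replacements matter to you.
subs = {"a": ["a", "A", "@"], "s": ["s", "S", "5"], "o": ["o", "O", "0"]}

def variants(word):
    # One pool of choices per letter; letters without substitutions
    # still get their upper/lower-case forms.
    pools = [subs.get(ch, [ch, ch.upper()]) for ch in word.lower()]
    return ["".join(combo) for combo in product(*pools)]

forms = variants("as")  # 3 choices for 'a' times 3 choices for 's'
```

For a real 8-letter word like "Password" the result set grows multiplicatively, so prefer the generator form of `product` if you only need to iterate.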
1
0
0
I hope it's Monday-itis kicking in at the moment, but something I feel should be quite easy - or at least elegant - is giving me a brain fart. The use case is this: Find all possible combinations of a specific word, where letters can be any case or replaced with letters. For instance: Word: 'Password' Combinations: 'PASSWORD', 'P@ssw0rd', 'p@55w0rD' ... I do not want to write 7 loops to find this out, even though it's a once off script we'll never ever use again.
Find all combinations (upper and lower and symbols) of a word in python
0.148885
0
0
1,263
11,182,825
2012-06-25T02:24:00.000
4
0
0
0
python,html,parsing,text,web
11,182,855
1
false
0
0
I won't write code, but I'll give you the process I'd go through for solving this problem: retrieve the source of the page; strip out all of the parts of the page that we don't care to monitor; calculate an md5 or sha1 hash of the source after replacements are made; compare the hash with the stored hash, see if it's different, and do whatever you need to do if the page has been updated; store the new hash.
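The normalize-then-hash step can be sketched like this; the timestamp regex is just a stand-in for whatever "irrelevant changes" rule you actually need:

```python
import hashlib
import re

def normalize(html):
    # Strip the parts we don't want to monitor -- here, anything that
    # looks like an HH:MM:SS timestamp (a stand-in for the real rules).
    return re.sub(r"\d{2}:\d{2}:\d{2}", "", html)

def fingerprint(html):
    # Hash the normalized source; equal hashes mean "no relevant change".
    return hashlib.sha1(normalize(html).encode("utf-8")).hexdigest()

old = fingerprint("<p>News at 10:31:05</p>")
new_same = fingerprint("<p>News at 11:02:44</p>")         # only the time changed
new_diff = fingerprint("<p>News at 11:02:44, updated!</p>")
```

Only `old` versus `new_diff` registers as an update; the time-only change is filtered out by `normalize`.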
1
1
0
What would be the simplest way to check a web page for changes? I want to scan a web page every so often, and compare it to an older scan. One problem is I also need the scan to ignore certain changes, such as the time of day, etc. I only want to check for relevant updates.
Detecting web page updates with python
0.664037
0
1
2,212
11,186,600
2012-06-25T09:25:00.000
1
0
1
0
python,on-the-fly
11,186,714
2
false
0
0
If the program saves its state or its results from time to time, you could add a logic which skips the steps which have already executed. Otherwise, I don't see a way to change this.
2
10
0
I launched a python program with many nested loops, and the program will take days. I just realized that one of the loops values is wrong and makes a infinite loop. I don't want to restart the program from zero, is there a way to interrupt the current program and modify the loop range so it will work properly and also if it was trapped with the infinite loop to break it? Many thanks for your help.
modify a running python program
0.099668
0
0
1,132
11,186,600
2012-06-25T09:25:00.000
0
0
1
0
python,on-the-fly
15,535,463
2
false
0
0
I guess this is a pretty old question, but I just came across it now. In case you still want to try, you could do the following: run your script under pdb with python -m pdb yourscript.py. After entering pdb, just enter the command 'c' (continue); this will begin your program. When you encounter the infinite loop, press Ctrl+C; this stops the program inside the debugger. Now you can run any Python statements you want. You could also define a new script, import it and run functions from it, or exit. I know it is not a good idea to always run under a debugger, but at least the above would solve what you intended.
2
10
0
I launched a python program with many nested loops, and the program will take days. I just realized that one of the loops values is wrong and makes a infinite loop. I don't want to restart the program from zero, is there a way to interrupt the current program and modify the loop range so it will work properly and also if it was trapped with the infinite loop to break it? Many thanks for your help.
modify a running python program
0
0
0
1,132
11,187,086
2012-06-25T09:57:00.000
3
0
0
0
python,export-to-excel,openerp,export-to-csv
11,187,374
2
false
0
0
Why not use the OpenERP client itself? You can go for xlwt if you really need to write a Python program to generate it.
1
0
1
Which is the best to way to export openerp data to csv/xls file using python so that i can schedule it in openerp( i cant use the client side exporting)? using csv python package using xlwt python package or any other package? And also how can I dynamically provide the path and name to save this newly created csv file
The best way to export openerp data to csv file using python
0.291313
0
0
4,385
11,188,725
2012-06-25T11:43:00.000
0
1
0
0
python,django,http
11,188,777
2
false
0
0
In terms of security, you should store it in the session. If it's in a cookie, the client can modify your URL to whatever he wants.
2
0
0
I need a url for using that for a template. Now there are two ways of storing the url and use that again in python I guess... One is using session to store that URL and get it later whenever we need it... or Second is using cookies to store that URL and get it later.. So which method is more appropriate in terms of security ? Is there any other method in python which is more better for storing the url and use that later, which is more secure..? While using cookies somebody can easily change the information I guess, in sessions also somebody can hijack it and make the changes....
Storing URL into cookies or session?
0
0
1
276
11,188,725
2012-06-25T11:43:00.000
0
1
0
0
python,django,http
11,188,963
2
true
0
0
I don't think "session hijacking" means what you think it means. The only thing someone can do with session hijacking is impersonate a user. The actual session data is stored on the back end (eg in the database), so if you don't give the user access to that particular data then they can't change it, whether they're the actual intended user or someone impersonating that user. So, the upshot of this is, store it in the session. Edit after comment Well, you'd better not allow any information to be sent to your server then, and make your website browse-only. Seriously, I don't see why "session data" is any less secure than anything else. You are being unreasonably paranoid. If you want to store data, you need to get that data from somewhere, either from a calculation on the server side, or from user submissions. If you can't calculate this specific URL on the server side, it needs to come from the user. And then you need to store it on the server against the particular user. I don't see what else you want to do.
2
0
0
I need a url for using that for a template. Now there are two ways of storing the url and use that again in python I guess... One is using session to store that URL and get it later whenever we need it... or Second is using cookies to store that URL and get it later.. So which method is more appropriate in terms of security ? Is there any other method in python which is more better for storing the url and use that later, which is more secure..? While using cookies somebody can easily change the information I guess, in sessions also somebody can hijack it and make the changes....
Storing URL into cookies or session?
1.2
0
1
276
11,190,243
2012-06-25T13:24:00.000
0
0
0
0
python,sockets,ssh-tunnel,graphite
11,214,979
1
false
0
0
It's hard to answer this correctly without a code sample. However, it sounds like you might be trying to reuse a closed socket, which is not possible. If the socket has been closed (or has experienced an error), you must re-create a new connection using a new socket object. For this to work, the remote server must be able to handle multiple client connections in its accept() loop.
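A sketch of the "new socket object per attempt" pattern, demonstrated against a throwaway local listener (the function name `connect_with_retry` is mine):

```python
import socket
import time

def connect_with_retry(host, port, retries=5, delay=0.1):
    # A socket object that has been closed (or has errored) cannot be
    # reused -- create a brand-new one on every attempt.
    for attempt in range(retries):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            s.connect((host, port))
            return s
        except OSError:
            s.close()
            if attempt == retries - 1:
                raise
            time.sleep(delay)

# Demo: a throwaway server socket on an ephemeral local port.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

client = connect_with_retry("127.0.0.1", port)
connected = client is not None
client.close()
srv.close()
```

In the crawler, the `except` branch would be where you log the satellite link dropping and back off before the next attempt.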
1
0
0
I am running a Graphite server to monitor instruments at remote locations. I have a "perpetual" ssh tunnel to the machines from my server (loving autossh) to map their local ports to my server's local port. This works well, data comes through with no hasstles. However we use a flaky satellite connection to the sites, which goes down rather regularly. I am running a "data crawler" on the instrument that is running python and using socket to send packets to the Graphite server. The problem is, if the link goes down temporarily (or the server gets rebooted, for testing mostly), I cannot re-establish the connection to the server. I trap the error, and then run socket.close(), and then re-open, but I just can't re-establish the connection. If I quit the python program and restart it, the connection comes up just fine. Any ideas how I can "refresh" my socket connection?
python - can't restart socket connection from client if server becomes unavailable temporarily
0
0
1
1,295
11,194,380
2012-06-25T17:37:00.000
0
0
1
1
python,command-line,path,pypy
11,194,517
2
false
0
0
To add to your PATH: open your Start menu, right-click "Computer", select "Properties", click "Advanced System Settings", then "Environment Variables", and edit the variable named "PATH" to include the folder you need.
1
0
0
So, I have installed the pypy pre-built interpreter to my home folder in windows; however, it only allows me to execute python scripts through the interpreters interface (similar to IDLE). I would like to extend this functionality to the cmd line in windows by putting something referencing the pypy interpreter to my system's PATH, however, I cannot find any documentation about this.
Running Python Scripts from Command Line with Pypy Interpreter
0
0
0
1,249
11,196,258
2012-06-25T19:44:00.000
4
1
0
0
python,cython,pypy,pytables,psyco
11,196,841
2
true
0
0
There is some support for numpy. Running pypy 1.9, I get the following message on importing numpy: ImportError: The 'numpy' module of PyPy is in-development and not complete. To try it out anyway, you can either import from 'numpypy', or just write 'import numpypy' first in your program and then import from 'numpy' as usual.
1
1
0
And if it doesn't, is there anyway to speed up my python code for accessing pytables on a 64-bit system (so no psyco)?
Does Pypy Support PyTables and Numpy?
1.2
0
0
589
11,196,367
2012-06-25T19:54:00.000
-1
0
1
0
python,multithreading,multiprocessing
11,196,583
3
false
0
0
Well, break the single big file into multiple smaller files and have each of them processed in a separate thread.
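An alternative sketch that avoids physically splitting the file: keep one reader and fan the lines out to a worker pool. Here `io.StringIO` stands in for the big file and a list `append` stands in for the database insert:

```python
import io
from concurrent.futures import ThreadPoolExecutor

results = []

def process(line):
    # Stand-in for "do some operations and insert into the database".
    # list.append is atomic in CPython, so no extra lock is needed here.
    results.append(line.strip().upper())

# io.StringIO stands in for open("big_file.txt") in the real program.
data = io.StringIO("alpha\nbeta\ngamma\n")
with ThreadPoolExecutor(max_workers=3) as pool:
    pool.map(process, data)   # each worker gets different lines
```

Note the completion order is not guaranteed, which is fine when each line is independent.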
1
86
0
I have a single big text file in which I want to process each line ( do some operations ) and store them in a database. Since a single simple program is taking too long, I want it to be done via multiple processes or threads. Each thread/process should read the DIFFERENT data(different lines) from that single file and do some operations on their piece of data(lines) and put them in the database so that in the end, I have whole of the data processed and my database is dumped with the data I need. But I am not able to figure it out that how to approach this.
Processing single file from multiple processes
-0.066568
0
0
61,124
11,198,288
2012-06-25T22:29:00.000
1
0
0
0
python,netlogo,pycuda,agent-based-modeling,mayavi
11,198,804
2
true
0
0
You almost certainly do not want to use CUDA unless you are running into a significant performance problem. In general CUDA is best used for solving floating point linear algebra problems. If you are looking for a framework built around parallel computations, I'd look towards OpenCL, which can take advantage of GPUs if needed. In terms of visualization, I'd strongly suggest targeting a specific data interchange format and then letting some other program do that rendering for you. The only reason I'd use something like VTK is if for some reason you need more control over the visualization process or you are looking for a real-time solution.
1
2
1
sorry if this all seem nooby and unclear, but I'm currently learning Netlogo to model agent-based collective behavior and would love to hear some advice on alternative software choices. My main thing is that I'd very much like to take advantage of PyCuda since, from what I understand, it enables parallel computation. However, does that mean I still have to write the numerical script in some other environment and implement the visuals in yet another one??? If so, my questions are: What numerical package should I use? PyEvolve, DEAP, or something else? It appears that PyEvolve is no longer being developed and DEAP is just a wrapper on the outdated(?) EAP. Graphic-wise, I find mayavi2 and vtk promising. The problem is, none of the numerical package seems to bind to these readily. Is there no better alternative than to save the numerical output to datafile and feed them into, say, mayavi2? Another option is to generate the data via Netlogo and feed them into a graphing package from (2). Is there any disadvantage to doing this? Thank you so much for shedding light on this confusion.
ABM under python with advanced visualization
1.2
0
0
1,399
11,199,767
2012-06-26T01:49:00.000
1
0
1
0
python,ipython
11,199,881
2
true
0
0
Usually when there is an error, it shows an error message and exits. That is why you are seeing the flash. Run the executable from the command line and you can check what that error is.
1
2
0
Just got a fresh 64bit box running windows vista, installed Python 2.7.3 and IPython 0.12.1, but IPython didn't seem to create any program folders. Even if I run the .exe file from C:\Python27\Scripts, the terminal just flashes for a moment. Any thoughts?
IPython won't launch after install
1.2
0
0
940
11,199,797
2012-06-26T01:53:00.000
2
0
1
0
python,django,import,settings
11,202,467
2
true
1
0
No. Use from ... import * or execfile() in settings/__init__.py to load the appropriate files.
1
0
0
I have a file layout like this: settings/ ----__init__.py ----common.py ----configs/ --------constants1.py --------constants2.py ----debug/ --------include1&2.py --------include1.py --------include2.py and when I import settings.debug.include1, I would like the settings file to execute/import common.py then override the settings in common.py with the proper constants file. Problem is, this isn't happening. Is there a way to accomplish my goals in this fashion?
How can I resolve this python and django settings import idiosyncrasy?
1.2
0
0
116
11,199,978
2012-06-26T02:20:00.000
3
0
1
0
python,python-2.7
11,200,000
3
false
0
0
Strings sort naturally. Use list.sort (in-place) or the built-in sorted (copying). Both accept a boolean parameter named reverse which defaults to False; set it to True for reverse order.
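Because 'YYYYMMDD' strings compare the same way the dates do, no key function is needed at all:

```python
dates = ["20120626", "20111231", "20120101"]

# Lexicographic order on YYYYMMDD strings equals chronological order.
oldest_first = sorted(dates)
newest_first = sorted(dates, reverse=True)
```

The same `reverse=True` works with `dates.sort(...)` for an in-place sort.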
1
1
0
Suppose I have a list of dates in the string format, 'YYYYMMDD.' How do I sort the list in regular and reverse order?
What is the most efficient way to sort a list of dates in string format in Python?
0.197375
0
0
241
11,202,713
2012-06-26T07:37:00.000
3
1
0
0
python,pyserial
11,202,829
1
true
0
0
A Serial port has no real concept of "cable connected" or not connected. Depending on the equipment you are using you could try to poll the DSR or CTS lines, and decide there is no device connected when those stay low over a certain time. From wikipedia: DTR and DSR are usually on all the time and, per the RS-232 standard and its successors, are used to signal from each end that the other equipment is actually present and powered-up So if you've got a conforming device, the DSR line could be the thing you need. Edit: As you seem to use an USB2Serial converter, you can try to check whether the device node still exists - you don't need to try to open it. so os.path.exists(devNode) could suffice.
1
1
0
I am working on a multi threaded server application for processing serial/USB ports. The issue is that if a cable gets unplugged, pyserial keeps reporting that the port is open and available. When reading I only receive Empty exceptions (due to read timeout). How do I find out that a port has been disconnected so that I can handle this case? Edit: OS is Ubuntu 12.04 Edit 2: Clarification - I am connecting to serial port devices via Serial to USB connector, thus the device being disconnected is an USB device.
How to find out if serial port is closed?
1.2
0
0
2,182
11,204,002
2012-06-26T09:05:00.000
0
0
0
0
python,django,python-imaging-library,importerror
11,204,486
1
true
1
0
Oh, installing lcms from macports and reinstalling PIL helped.
1
1
0
Today i've tried to test a django project on my macbook, but whenever i try to start it, i get the same error: "Error: No module named _imagingcms". Seems like something is missng from PIL. I've tried to reinstall PIL, but it does not help. What should i do?
No module named _imagingcms on OSX
1.2
0
0
550
11,206,884
2012-06-26T12:00:00.000
12
0
1
0
python,sorting
11,207,560
7
false
0
0
I think the docs are incomplete. I interpret the word "primarily" to mean that there are still reasons to use cmp_to_key, and this is one of them. cmp was removed because it was an "attractive nuisance:" people would gravitate to it, even though key was a better choice. But your case is clearly better as a cmp function, so use cmp_to_key to implement it.
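A sketch of the questioner's mixed-direction sort written as a cmp function; the records here are made-up (date string, tie-breaker name) pairs:

```python
from functools import cmp_to_key

# Hypothetical records: (date_string, tie_breaker_name).
rows = [("20120101", "beta"), ("20120626", "zeta"),
        ("20120626", "alpha"), ("20111231", "gamma")]

def compare(a, b):
    # Dates in reverse (newest first), strings in natural ascending order.
    if a[0] != b[0]:
        return -1 if a[0] > b[0] else 1
    return (a[1] > b[1]) - (a[1] < b[1])

rows.sort(key=cmp_to_key(compare))
```

Flipping the comparison per field is exactly the case where a cmp function stays cleaner than inventing per-type "negation" hacks for a key function.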
1
36
0
The move in recent versions of Python to passing a key function to sort() from the previous cmp function is making it trickier for me to perform complex sorts on certain objects. For example, I want to sort a set of objects from newest to oldest, with a set of string tie-breaker fields. So I want the dates in reverse order but the strings in their natural order. With a comparison function I can just reverse the comparison for the date field compared to the string fields. But with a key function I need to find some way to invert/reverse either the dates or the strings. It's easy (although ugly) to do with numbers - just subtract them from something - but do I have to find a similar hack for dates (subtract them from another date and compare the timedeltas?) and strings (...I have no idea how I'd reverse their order in a locale-independent way). I know of the existence of functools.cmp_to_key() but it is described as being "primarily used as a transition tool for programs being converted to Python 3 where comparison functions are no longer supported". This implies that I should be able to do what I want with the key method - but how?
How to write sort key functions for descending values?
1
0
0
15,471
11,209,054
2012-06-26T14:01:00.000
1
0
1
0
python,utf-8,io
11,210,051
2
false
0
0
Some experimentation with utf-8 encodings (repeated seeking and printing of .read(1) methods in a file with lots of multi-byte characters) revealed that yes, .seek() and .read() do behave differently in utf-8 files... they don't deal with single bytes, but single characters. This consisted of several simple re-writings of code, reading and seeking in different patterns. Thanks to @satuon for your help.
1
4
0
For input text files, I know that .seek and .tell both operate with bytes, usually - that is, .seek seeks a certain number of bytes in relation to a point specified by its given arguments, and .tell returns the number of bytes since the beginning of the file. My question is: does this work the same way when using other encodings like utf-8? I know utf-8, for example, requires several bytes for some characters. It would seem that if those methods still deal with bytes when parsing utf-8 files, then unexpected behavior could result (for instance, the cursor could end up inside of a character's multi-byte encoding, or a multi-byte character could register as several characters). If so, are there other methods to do the same tasks? Especially for when parsing a file requires information about the cursor's position in terms of characters. On the other hand, if you specify the encoding in the open() function ... infile = open(filename, encoding='utf-8') Does the behavior of .seek and .tell change?
How does file reading work in utf-8 encoding?
0.099668
0
0
646
11,211,228
2012-06-26T15:54:00.000
2
0
1
0
java,php,c++,python,ruby
11,211,502
1
true
1
0
Perhaps I am missing something... It appears you want to keep the data in 5-minute buckets, but you can't be sure you have all the data for a bucket until up to 10 seconds after it has rolled over. This means for each instrument you need to keep the current bucket and the previous bucket. When it is 10 seconds past the 5-minute boundary, you can publish/write out the old bucket.
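A sketch of that two-bucket scheme, keyed by bucket start in microseconds-from-epoch to match the question's timestamps (the names `add_tick` and `expired` are mine):

```python
BUCKET = 5 * 60 * 1_000_000        # five minutes in microseconds
GRACE = 10 * 1_000_000             # ten-second late-data allowance

buckets = {}  # bucket start -> {stock_id: [(ts, price, qty), ...]}

def add_tick(stock_id, ts, price, qty):
    # The timestamp alone decides which bucket a tick lands in,
    # so out-of-order arrivals within the grace period are fine.
    start = ts - ts % BUCKET
    buckets.setdefault(start, {}).setdefault(stock_id, []).append(
        (ts, price, qty))
    return start

def expired(now_ts):
    # Buckets whose window ended more than GRACE ago are safe to flush
    # to the data wrapper.
    return [s for s in buckets if s + BUCKET + GRACE <= now_ts]

add_tick(7, 100, 10, 5)            # lands in bucket starting at 0
add_tick(7, BUCKET + 1, 11, 2)     # lands in the next bucket
done = expired(BUCKET + 15 * 1_000_000)
```

Note the flush check uses wall-clock `now_ts` while bucketing uses the tick's own timestamp, matching the clarification in the question.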
1
1
0
Alright so this problem has been breaking my brain all day today. The Problem: I am currently receiving stock tick data at an extremely high rate through multicasts. I have already parsed this data and am receiving it in the following form. -StockID: Int-64 -TimeStamp: Microseconds from Epoch -Price: Int -Quantity: Int Hundreds of these packets of data are parsed every second. I am trying to reduce the computation on my storage end by packaging up this data into dictionaries/hashtables hashed by the stockID (key == stockID)(value == array of [timestamp, price, quantity] elements). I also want each dictionary to represent timestamps within a 5min interval. When the incoming data's timestamps get past the 5min time interval, I want this new data to go into a new dictionary that represents the next time interval. Also, a special key will be hashed at key -1 telling what 5 particular minute interval per day does this dictionary belong to (so if you receive something at 12:32am, it should hash into the dictionary that has value 7 at key -1, since this represents the time interval of 12:30am to 12:35am for that particular day). Once the time passes, the dict that has its time expired can be sent off to the dataWrapper. Now, you might be coming up with some ideas right about now. But here's a big constraint. The timestamps that are coming in Are not necessarily strictly increasing; however, if one waits about 10 seconds after an interval has ended then it can be safe to assume that every data coming in belongs to the current interval. The reason I am doing all this complicated things is to reduce computation on the storage side of my application. With the setup above, my storage side thread can simply iterate over all of the key, value pairs within the dictionary and store them in the same location on the storage system without having to reopen files, reassign groups or change directories. Good Luck! I will greatly appreciate ANY answers btw. 
:) Preferred if you can send me something in python (that's what I'm doing the project in), but I can perfectly understand Java, C++, Ruby or PHP. Summary I am trying to put stock data into dictionaries that represent 5min intervals for each dictionary. The timestamp that comes with the data determines what particular dictionary it should be put in. This could be relatively easy except that timestamps are not strictly increasing as they come in, so dictionaries cannot be sent off to the datawrapper immediately once 5 mins has passed by the timestamps, since it isn't guaranteed to not receive any more data within 10 seconds, after this its okay to send it to the wrapper. I just want any kind of ideas, algorithms, or partial implementations that could help me with the scheduling of this. How can we switch the current use of dictionaries within both timestamps (for the data) and actual time (the 10seconds buffer). Clarification Edit The 5 min window should be data driven (based upon timestamps), however the 10 second timeout appears to be clock time.
Interesting Stock Tick Data Scenario
1.2
0
0
441
11,214,241
2012-06-26T19:01:00.000
2
0
1
0
python,dictionary
11,214,297
2
false
0
0
The only drawback is performance. Larger keys mean longer times to hash. Simply put, the only requirement of keys in a python dict is that they be immutable and hashable. For tuples (which are immutable), this means that you just need to combine the hashes of the sub-objects (which themselves must be immutable and hashable). You can also use a frozenset as a key. You can't use lists or dicts or sets as keys.
2
1
0
Seems like there should be... Right now it just seems like magic that you can hash multidimensionally to the same table, without any negative effects.
Any Drawback to Using Tuples as Dictionary Keys in Python?
0.197375
0
0
1,117
11,214,241
2012-06-26T19:01:00.000
4
0
1
0
python,dictionary
11,214,300
2
true
0
0
From the dictionary's perspective, there's not a single thing multi-dimensional about it. The dictionary has no idea that you are interpreting the keys as describing an n-space. You could, for example, cleverly pack your vector into a string which would seem less magical, be more complicated to get right, and yet be functionally equivalent. Python strings are Yet Another Immutable Sequence as far as the interpreter is concerned. There is no negative effect. Some tasks might be less efficient than an alternate implementation. For example if you are using (x, y, z) coordinates as keys, finding all points at some z will be time consuming relative to a real multi-dimensional store. But sometimes clarity and ease of implementation and reading trump efficient store.
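A small illustration of both points: tuple keys just work, and the "find all points at some z" query degrades to a full scan:

```python
# Tuples hash as a whole; the dict has no idea these are coordinates.
grid = {}
grid[(0, 0, 0)] = "origin"
grid[(1, 2, 3)] = "probe"

value = grid[(1, 2, 3)]            # ordinary O(1) lookup

# The drawback mentioned above: selecting on one "dimension" must
# scan every key, unlike a real multi-dimensional store.
at_z3 = [k for k in grid if k[2] == 3]
```

If such axis queries dominate, an auxiliary index (e.g. a dict from z to key lists) is the usual workaround.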
2
1
0
Seems like there should be... Right now it just seems like magic that you can hash multidimensionally to the same table, without any negative effects.
Any Drawback to Using Tuples as Dictionary Keys in Python?
1.2
0
0
1,117
11,214,620
2012-06-26T19:28:00.000
2
0
1
1
python,macos,osx-lion,virtualenv,homebrew
11,214,702
2
true
0
0
Homebrew is just a package manager for Mac, like pip for Python. Of course you never need a package manager; you can just get all the programs, or libraries in the case of pip and PyPI, yourself. The point of package managers however is to ease this process and give you a simple interface to install the software, and also to remove it, as that is usually not so simple when you compile things yourself etc. That being said, Homebrew will only install things you tell it to install, so by just having Homebrew you don’t randomly get new versions of something. Homebrew is just a nice way to install general OSX stuff you need/want in general.
1
1
0
Being fairly new to programming, I am having trouble understanding exactly what Homebrew does... or rather - why it is needed. I know it contains pip for package management, but so does Virtualenv and I'm planning on installing this in due course. Does Homebrew install another version of python that is not the system version, upon which you would install Virtualenv and manage the different development environments from there? I have a clean install of OSX Lion and I want to keep my projects separated, but am unsure why I need Homebrew. I realise this is basic stuff, but if someone could explain it, I would be grateful.
Do I need to install Homebrew if I am planning to install Virtualenv?
1.2
0
0
680
11,215,535
2012-06-26T20:38:00.000
1
0
0
0
python,database,sqlite,error-handling
11,215,911
1
true
1
0
Your gut feeling is right. There is no way to add robustness to the application without reviewing each database access point separately. You still have a lot of important choices in how the application should react to errors, depending on factors like: Is it attended, or sometimes completely unattended? Is delay OK, or is it important to report database errors promptly? What are the relative frequencies of the three types of failure that you describe? Now that you have a single wrapper, you can use it to do some common configuration and error handling, especially: set reasonable connect timeouts set reasonable busy timeouts enforce command timeouts on the client side retry automatically on errors, especially on SQLITE_BUSY (insert large delays between retries, fail after a few retries) use exceptions to reduce the number of application-level handlers. You may be able to restart the whole application on database errors. However, do that only if you are confident about the state in which you are aborting the application; consistent use of transactions may ensure that the restart method does not leave inconsistent data behind. ask a human for help when you detect a locking error ...but there comes a moment where you need to bite the bullet and let the error out into the application, and see what all the particular callers are likely to do with it.
1
2
0
I have a desktop app that has 65 modules, about half of which read from or write to an SQLite database. I've found that there are 3 ways that the database can throw an SQliteDatabaseError: SQL logic error or missing database (happens unpredictably every now and then) Database is locked (if it's being edited by another program, like SQLite Database Browser) Disk I/O error (also happens unpredictably) Although these errors don't happen often, when they do they lock up my application entirely, and so I can't just let them stand. And so I've started re-writing every single access of the database to be a pointer to a common "database-access function" in its own module. That function then can catch these three errors as exceptions and thereby not crash, and also alert the user accordingly. For example, if it is a "database is locked error", it will announce this and ask the user to close any program that is also using the database and then try again. (If it's the other errors, perhaps it will tell the user to try again later...not sure yet). Updating all the database accesses to do this is mostly a matter of copy/pasting the redirect to the common function--easy work. The problem is: it is not sufficient to just provide this database-access function and its announcements, because at all of the points of database access in the 65 modules there is code that follows the access that assumes the database will successfully return data or complete a write--and when it doesn't, that code has to have a condition for that. But writing those conditionals requires carefully going into each access point and seeing how best to handle it. This is laborious and difficult for the couple of hundred database accesses I'll need to patch in this way. I'm willing to do that, but I thought I'd inquire if there were a more efficient/clever way or at least heuristics that would help in finishing this fix efficiently and well. 
(I should state that there is no particular "architecture" of this application...it's mostly what could be called "ravioli code", where the GUI and database calls and logic are all together in units that "go together". I am not willing to re-write the architecture of the whole project in MVC or something like this at this point, though I'd consider it for future projects.)
Efficient approach to catching database errors
1.2
1
0
237
11,216,401
2012-06-26T21:43:00.000
1
1
0
0
c#,python,visa,gpib
15,514,499
3
false
0
0
There should be a clear command (something like "*CLS?", but don't quote me on that). I always run that when I first connect to a device. Then make sure you have a good timeout duration. I found for my device around 1 second works. Less than 1 second makes it so I miss the read after a write. Most of the time, a timeout is because you just missed it or you are reading after a command without a return. Make sure you are also checking for errors in the error queue in between writes to make sure the write actually went through properly.
1
2
0
I have a GPIB device that I'm communicating with using a National Instruments USB-to-GPIB adapter. The USB-to-GPIB works great. I am wondering what can cause a GPIB device to be unresponsive? If I turn off the device and turn it back on it will respond, but when I run my program it will respond only at first. It then cuts off and I can't even communicate with the GPIB device; it just times out. Did I fill up the buffer? Some specifics from another questioner: I'm controlling a National Instruments GPIB card (not USB) with PyVisa. The instrument on the GPIB bus is a Newport ESP300 motion controller. During a session of several hours (all the while sending commands to and reading from the ESP300) the ESP300 will sometimes stop listening and become unresponsive. All reads time out, and not even *idn? produces a response. Is there something I can do that is likely to clear this state? e.g. drive the IFC line?
What can cause a GPIB to be unresponsive
0.066568
0
0
2,223
11,217,855
2012-06-27T00:39:00.000
1
0
0
0
python,math,geometry,gis
11,217,921
2
false
0
0
You could recursively split the quad in half on the long sides until the resulting area is small enough.
1
1
1
It is pretty easy to split a rectangle/square into smaller regions and enforce a maximum area of each sub-region. You can just divide the region into regions with sides of length sqrt(max_area) and treat the leftovers with some care. With a quadrilateral however I am stumped. Let's assume I don't know the angle of any of the corners. Let's also assume that all four points are on the same plane. Also, I don't need for the small regions to be all the same size. The only requirement I have is that the area of each individual region is less than the max area. Is there a particular data structure I could use to make this easier? Is there an algorithm I'm just not finding? Could I use quadtrees to do this? I'm not incredibly versed in trees but I do know how to implement the structure. I have GIS work in mind when I'm doing this, but I am fairly confident that that will have no impact on the algorithm to split the quad.
Split quadrilateral into sub-regions of a maximum area
0.099668
0
0
1,141
11,218,393
2012-06-27T01:59:00.000
0
0
0
0
python,django,django-file-upload
11,218,464
2
false
1
0
try: models.FileField(upload_to = '...')
1
0
0
Is it possible to upload a file in django using django's model.FileField() to a location that's not relative to /media ?. In my case upload an .html file to myproject/templates.
django upload file to custom location
0
0
0
303
11,219,060
2012-06-27T03:37:00.000
6
0
0
0
python,mysql,ruby,utf-8
11,219,610
2
false
0
0
Once upon a time there was no unicode or UTF-8, and disparate encoding schemes were in use throughout the world. It wasn't until back in 1988 that the initial unicode proposal was issued, with the goal of encoding all the world's characters in a common encoding. The first release in 1991 covered many character representations, however, it wasn't until 2006 that Balinese, Cuneiform, N'Ko, Phags-pa, and Phoenician were added. Until then the Phoenicians, and the others, were unable to represent their language in UTF-8, pissing off many programmers who wondered why everything was not just defaulting to UTF-8.
2
8
0
I'm just curious that there are modern systems out there that default to something other than UTF-8. I've had a person block for an entire day on the multiple locations where a mysql system can have different encodings. Very frustrating. Is there any good reason not to use utf-8 as a default (and storage space seems like not a good reason)? Not trying to be argumentative, just curious. thx
why doesn't EVERYTHING default to UTF-8?
1
1
0
519
11,219,060
2012-06-27T03:37:00.000
-1
0
0
0
python,mysql,ruby,utf-8
11,219,088
2
false
0
0
Some encodings have different byte orders (little and big endian)
2
8
0
I'm just curious that there are modern systems out there that default to something other than UTF-8. I've had a person block for an entire day on the multiple locations where a mysql system can have different encodings. Very frustrating. Is there any good reason not to use utf-8 as a default (and storage space seems like not a good reason)? Not trying to be argumentative, just curious. thx
why doesn't EVERYTHING default to UTF-8?
-0.099668
1
0
519
11,219,319
2012-06-27T04:16:00.000
1
0
0
1
python,apache,cgi,nltk,appdata
11,246,816
3
false
1
0
%APPDATA% is a special variable that expands to the "Application Data" directory of the user who expands the variable (i.e., who runs a script). Apache is not running as you, so it has no business knowing about your APPDATA directory. You should either hard-code the relevant path into your script, or replace it with a path relative to the location of the script, e.g., r'..\data\nltk_data'. If you really need to, you can recover the absolute location of your script by looking at __file__.
1
1
0
I'm using Python with the NLTK toolkit in Apache via CGI. The toolkit needs to know the APPDATA directory, but when executed on the server, os.environ does not list APPDATA. When I execute a simple print os.environ in a console, APPDATA is present, but not when executed via CGI on the web server. What is going on? How can I solve this? I'm new to Python and I'm just learning it.
APPDATA is not returned in Python executed via CGI
0.066568
0
0
496
11,221,544
2012-06-27T07:43:00.000
5
0
0
0
python,django,file-upload
11,221,675
1
true
1
0
how long will this file be retained in memory? Are you talking about the temporary file on the filesystem? In that case, on a Unix platform, usually until you reboot. If you're talking about uploaded files in RAM, it probably stays in there at least until the request/response cycle is done. But that shouldn't really matter to you, you'll have to handle the uploaded file in the response processing code anyways. Otherwise, you won't have any reference to it anymore. will each upload have a unique name regardless if the same file is uploaded twice? Yes.
1
2
0
"if an uploaded file is too large, Django will write the uploaded file to a temporary file stored in your system's temporary directory. On a Unix-like platform this means you can expect Django to generate a file called something like /tmp/tmpzfp6I6.upload. If an upload is large enough, you can watch this file grow in size as Django streams the data onto disk." This is taken from Django's documentation. My question is how long will this file be retained in memory? and will each upload have a unique name regardless if the same file is uploaded twice?
how long will django store temporary files?
1.2
0
0
457
11,223,147
2012-06-27T09:27:00.000
1
0
0
0
python,sqlite
11,224,222
4
false
0
0
If you're not after just parameter substitution, but full construction of the SQL, you have to do that using string operations on your end. The ? replacement always just stands for a value. Internally, the SQL string is compiled to SQLite's own bytecode (you can find out what it generates with EXPLAIN thesql) and ? replacements are done by just storing the value at the correct place in the value stack; varying the query structurally would require different bytecode, so just replacing a value wouldn't be enough. Yes, this does mean you have to be ultra-careful. If you don't want to allow updates, try opening the DB connection in read-only mode.
3
0
0
I'm trying to create a python script that constructs valid sqlite queries. I want to avoid SQL Injection, so I cannot use '%s'. I've found how to execute queries, cursor.execute('sql ?', (param)), but I want to know how to get the parsed sql param. It's not a problem if I have to execute the query first in order to obtain the last query executed.
Python + Sqlite 3. How to construct queries?
0.049958
1
0
1,125
11,223,147
2012-06-27T09:27:00.000
1
0
0
0
python,sqlite
11,224,475
4
true
0
0
If you're trying to transmit changes to the database to another computer, why do they have to be expressed as SQL strings? Why not pickle the query string and the parameters as a tuple, and have the other machine also use SQLite parameterization to query its database?
3
0
0
I'm trying to create a python script that constructs valid sqlite queries. I want to avoid SQL Injection, so I cannot use '%s'. I've found how to execute queries, cursor.execute('sql ?', (param)), but I want to know how to get the parsed sql param. It's not a problem if I have to execute the query first in order to obtain the last query executed.
Python + Sqlite 3. How to construct queries?
1.2
1
0
1,125
11,223,147
2012-06-27T09:27:00.000
0
0
0
0
python,sqlite
11,224,003
4
false
0
0
I want how to get the parsed 'sql param'. It's all open source, so you have full access to the code doing the parsing / sanitization. Why not just read this code, find out how it works, and see if there's some (possibly undocumented) implementation that you can reuse?
3
0
0
I'm trying to create a python script that constructs valid sqlite queries. I want to avoid SQL Injection, so I cannot use '%s'. I've found how to execute queries, cursor.execute('sql ?', (param)), but I want to know how to get the parsed sql param. It's not a problem if I have to execute the query first in order to obtain the last query executed.
Python + Sqlite 3. How to construct queries?
0
1
0
1,125
11,224,299
2012-06-27T10:36:00.000
0
0
1
0
python,unicode
11,244,398
2
false
0
0
I ended up using the pattern ff(fd|\d\w|\w\d) and removed all but a few errors. Some errors such as ff07 and ff50 are not removed, which is strange since they should have been removed by the re pattern, but that small number of errors is within my tolerance.
1
0
0
I have a Chinese document, but in the document there are a lot of error strings left due to an error in decoding; they all look like fffd, ff10 or something. Now I need to remove all occurrences of those error strings, so I need to know the pattern for them, but I can't find useful information. All I SEEM TO know now is that they consist of 4 characters, and they start with 'ff', but the last two are uncertain. For example, the error string may look like: 300dfffd or afffdnormalff0cword. What I want for the two words above is: 300d and anormalword. I cannot delete every four-letter pattern that starts with ff since there are normal words that start with them. Is there a single re pattern that can represent them? Or is there any other way recommended? Thanks. BTW, I'm doing this in Python, so any Pythonic way is highly appreciated! Thanks. UPDATE: I ended up using the pattern ff(fd|\d\w|\w\d) and removed almost all of the errors. Some errors such as ff07 and ff50 are not removed, which is strange since they should have been removed by the re pattern, but that small amount of errors is within my tolerance.
How to detect coding error strings?
0
0
0
81
11,224,517
2012-06-27T10:49:00.000
1
1
0
1
python,centos
11,251,911
1
true
0
0
If python2.7 is available on yum, you should use that: the package management on large distros (Red Hat, Ubuntu, Debian, Fedora) takes care of maintaining parallel Python installs for you which won't conflict with each other. This option should keep your system "/usr/bin/python" file pointing to Python 2.4 and give you another python2.7 binary. Otherwise, if you choose to build it from source, pick another prefix - /opt - (not even /usr/local will be quite safe) for building it. You don't need to know exactly which system parts depend on Python 2.4 - just rest assured it will crash very hard and unpredictably if you try to modify the system Python itself.
1
0
0
I have a CentOS 5.8 server and am planning to install a later version of python (presumably 2.7). I have heard a lot of mention that CentOS relies quite heavily on 2.4 for many admin features etc. I'm trying to determine exactly what these features are (and whether I would actually be using them) so that I can decide whether to update python through yum or build from source. Can anyone give me some more detailed information on what CentOS features have dependencies on Python 2.4.
CentOS 5.8 dependencies on Python 2.4?
1.2
0
0
1,419
11,227,579
2012-06-27T13:40:00.000
2
0
0
1
python,macos,mount
11,228,033
1
true
0
0
Have a look at the diskutil(8) and hdiutil(1) tools.
1
1
0
Is there a way in which I can get some information about the mounts I have in the folder /Volumes in OSX? I want to be able to tell the difference between disk images like dmgs and other types, like hard disks or network mounts. I tried parsing the output of mount -v and looking if read-only is in the line but I doubt that's a particularly accurate way of telling, and also not a good method either. Is there any module or method that will give me this information?
Getting mount type information in python on OSX
1.2
0
0
813
11,228,645
2012-06-27T14:35:00.000
2
0
1
1
python,windows
11,228,733
3
false
0
0
On Linux, it is common to store the configuration file in the user's home directory, for instance ~/.myprogramrc. On Windows Vista and up, users have a home directory as well (/Users/username) and I would recommend storing your settings there in a subfolder (/Users/username/myprogram). Storing the settings in the application folder will generate UAC warnings. On Windows XP, users do not have a home folder. Some programs make the choice of putting configuration in the 'My Documents' folder, which I guess is as good a place as any.
1
10
0
I have a python program that must work on Windows and Linux. There are some configuration options I normally store in a file, in a subdirectory of the program's directory. For Windows, I converted it to an exe and created an installer for it. And now I have the problem of dealing with the config file. What is the best place to save the configuration file? I have read that for Windows os.environ['APPDATA']+'myAppName' is the path that must be used. Is it correct? Is it standard? Will it work in all versions of Windows at least from XP (and at least in English and Spanish)? PS: I am not interested in using ConfigParser. The config file is in my own format and I have working code for reading/writing it.
Where to store the configuration files of python applications on Windows
0.132549
0
0
4,707
11,228,878
2012-06-27T14:45:00.000
1
0
0
0
python,html
11,229,283
2
true
1
0
If you are just trying to execute your application from the web application you would like to create, then you can go for anything from bare cgi scripts (in say ... Perl) through PHP scripts and even Django (a Python-based web framework). It all depends on what you would like to do :) If your intention is to integrate your Python app with the web app, you can try doing it in the Django web framework.
1
0
0
I want to take HTML form data and handle the submitted data (a string, for example) with my python application. The html file with the form will be stored locally and values will be entered from a browser. I then want to take the submitted values to my python application. How do I set up the form action and link it to my application? Please point me in the right direction. BR,
Take HTML form data to python
1.2
0
0
600
11,230,979
2012-06-27T16:31:00.000
1
0
1
0
python,django,pycharm
11,231,045
3
true
1
0
If your site loads, you should import the models into one of your Django views. In a view you can do whatever you like with the models.
1
1
0
So I have a chunk of code that declares some classes, creates data, uses django to actually save them to the database. My question is how do I actually execute it? I am using PyCharm and have the file open. But I have no clue how to actually execute it. I can execute line by line in Django Console, but if it's more than that it can't handle the indentation. The project itself runs fine (127.0.0.1 loads my page). How can I accomplish this? I am sorry if this a completely obvious answer, I've been struggling with this for a bit.
How to run a file that uses django models (large block of code) in Pycharm
1.2
0
0
1,138
11,231,244
2012-06-27T16:48:00.000
1
0
0
0
python,ipv6,urllib,ipv4
11,231,476
1
false
0
0
I had a look into the source code. Unfortunately, urllib.urlopen() seems to use httplib.HTTP(), which doesn't even allow setting a source address. urllib2.urlopen() uses httplib.HTTPConnection() which you could inherit from and create a class which by default sets a source address '0.0.0.0' instead of ''. Then you could somehow inject that new overridden class into the urllib2 stuff by creating a "new" HTTPHandler() (look how it's done in urllib2.py) and a new opener which you build_opener() and/or install_opener(). Sorry for not being very exact, but I never have done such a thing and don't know exactly how that works.
1
2
0
What is the way to do urlopen in python such that even if the underlying machine has ipv6 networking enabled, the request is sent via ipv4 instead of ipv6?
how to do urlopen over ipv4 by default
0.197375
0
1
2,773
11,232,351
2012-06-27T18:03:00.000
4
0
0
0
python,iframe,tornado
11,232,414
1
true
1
0
Javascript has access to the browser context but a templating system will only have access to the request object. If you control the creation of the iframe in question, for instance if that is happening on another part of your site, you might be able to pass get parameters in to the templating system or something... But in general this is something you have to do with javascript. Add javascript directly to your template or (better) include a javascript file. You can expose both the iframed and the non-iframed versions of your page in the template and have javascript select which one to show once it hits the browser.
1
0
0
In Tornado, you can do if statements in the HTML such as {% if true %} do stuff {% end %}. I'd like to check if the page is within an iframe. In Javascript, it would be something like: if (top === self) { not in a frame } else { in a frame } How can I do this in with Tornado?
Test iFrame "top === self" in Python Tornado
1.2
0
0
152
11,232,958
2012-06-27T18:42:00.000
2
0
1
0
python,data-structures,foreign-keys,relationship
11,236,244
1
false
0
0
Use sqlite. No server to install, and you get foreign key constraints for free. If this was a question of a single pair of tables you could hard-code the checking, but as you've discovered, it gets out of hand when you have to re-implement so much of what a DBMS is designed for.
1
0
0
I'm looking for a way to manage a set of Python data structures which would nicely fit into a relational schema, but without the overhead of having a real database or parsing SQL. The amount of data can be assumed to be small enough to fit conveniently into memory (say, no structure contains more than a million elements). Most importantly, I would like to have automatically enforced foreign key constraints. Triggering an assertion failure on foreign key constraint violation would be good enough; it's always a programming error. Here's a real-world example of what I would like to accomplish. I do have the code to do this, but without automatic foreign key constraint checking it's getting error-prone and a mess of asserts. (These are data structures for code to analyze the execution trace of a machine code program, for the curious. The logger program outputs each unique (previous_instruction_addr, current_instruction_addr, stack_pointer_change) tuple once.) instrs: set(int) (addresses of machine instructions seen) next_instrs: dict(int -> set(int)), a dictionary mapping an instruction (which must be in instrs) to a set of instructions (all of which must be in instrs). Relationally, a subset of instrs×instrs. stk_changes: dict((int,int) -> set(int)), a dictionary mapping a pair of instructions (where the pair of instructions must be in next_instrs) to a set of changes in the stack pointer register jumps: set((int,int)), a subset of the next_instrs relation deemed to be a jump (including a function call or a return from a function) from one instruction to another. Actually I currently implement this as two dicts, as I need to be able to make queries both of the form "where does this instruction jump?" and "what instructions jump to this instruction?". calls: set((int,int)), a subset of jumps that are function calls. rets: set((int,int)), a subset of jumps that are returns from function. 
basic_blocks: set(int), a subset of instrs; the first addresses of basic blocks (blocks of code where execution only ever starts in the first instruction, i.e. no jumps to the middle, and only ever ends in the last instructions, i.e. no jumps from the middle) containing_bb: dict(int -> int), the basic block that contains each instruction. A subset of instrs×basic_blocks functions: set(int), basic blocks deemed to be starts of a function. A subset of basic_blocks. function_calls_by_bb: dict(int -> int), functions called by a basic block; a subset of basic_blocks×functions And so on; you get the idea. What I'm essentially looking for is a way to manage all this structure and automatically enforce all the foreign key constraints; for example, I would like basic_blocks.add(something) to fail with an error if something is not a member of instrs. Similarly, I would like basic_blocks.remove(something) to fail if something is still referred to by function_calls_by_bb. Clearly, writing assertions in add() and remove() methods for all of these structures is needlessly verbose and error-prone compared to, for example, foreign key constraints in an SQL database schema. I'm currently playing with sqlalchemy with an in-memory sqlite database, which lets me describe the constraints in a nice way, but I'm ideally looking for something much more lightweight that does not involve database engines. (It may be that a database engine is ultimately the proper way to do what I'm doing, but currently I'm evaluating alternatives.) Alternatively, if you can think of other ways to manage structures such as this, I'd also be interested in hearing about those.
Foreign key constrains for Python data structures without using real databases
0.379949
0
0
192
11,233,140
2012-06-27T18:54:00.000
2
0
0
0
python,excel,openpyxl
11,233,362
1
true
0
0
A workbook doesn't really have a name - normally you'd just consider it to be the basename of the file it's saved as... slight update - yep, even in VB WorkBook.Name just returns "file on disk.xls"
1
3
0
There is a worksheet.title method but not workbook.title method. Looking in the documentation there is no explicit way to find it, I wasn't sure if anyone knew a workaround or trick to get it.
Is there a way to get the name of a workbook in openpyxl
1.2
1
0
10,098
11,233,863
2012-06-27T19:49:00.000
7
0
1
0
python,python-2.7
11,233,915
4
true
0
0
Use an OrderedDict from the collections module if you simply need to access the last item entered. If, however, you need to maintain continuous sorting, you need to use a different data structure entirely, or at least an auxiliary one for the purposes of indexing. Edit: I would add that, if accessing the final element is an operation that you have to do very rarely, it may be sufficient simply to sort the dict's keys and select the maximum. If you have to do this frequently, however, repeatedly sorting would become prohibitively expensive. Depending on how your code works, the simplest approach would probably be to simply maintain a single variable that, at any given point, contains the last key added and/or the maximum value added (i.e., is updated with each subsequent addition to the dict). If you want to maintain a record of additions that extends beyond just the last item, however, and don't require continuous sorting, an OrderedDict is ideal.
1
2
0
I have a default dict of dicts whose primary key is a timestamp in the string form 'YYYYMMDD HH:MM:SS.' The keys are entered sequentially. How do I access the last entered key or the key with the latest timestamp?
What is the best way to access the last entered key in a default dict in Python?
1.2
0
0
179
11,234,197
2012-06-27T20:16:00.000
3
0
0
0
python,networking,interface,simulation
11,315,321
2
true
0
0
The massive number of answers people posted encouraged me to think outside of the box. My approach will be to use Dummynet, a truly amazing and versatile tool. Unfortunately the Dummynet Windows and Linux ports are not well-maintained, which means I'll be running *BSD. But this simplifies things, since a *BSD image can also be run as a VM, which greatly simplifies dealing with virtual interfaces. And if I'm concerned about size, I can use picoBSD or nanoBSD to craft a tiny tailored system for my simulator.
1
6
0
I'm making a simulator for a digital radio using Python. The radio relays over RF one each of an RS-232 port and an Ethernet port, with a pair of radios making seamless pipes. Thus, the simulator will be used in pairs, with pipes between them simulating the RF link, permitting users to connect to each end using physical interfaces, virtual interfaces, or tunnels. For the RF serial port, I'm using PySerial and virtual serial ports to make the simulator as versatile as possible: I can connect the simulator to either a physical serial port, to a network socket, or to another local program. Aside from the tools used to create the virtual serial ports on each different OS, this approach is completely cross-platform. I'd like the simulator to be able to network with a local program via a virtual interface, with a remote program via a shared network interface, and with a remote program via a local physical interface that would be dedicated to the simulator. But so far, I haven't found a straightforward way to do this. I've been looking at SLIP/PPP, TAP/DUN, pcap/SOCK_RAW, and other possibilities, and I see no obvious or general solution. The key difficulty seems to be that this involves an entire Ethernet interface, below the IP level, at the level of the Ethernet protocol itself: If it were only a few ports, the solution would be relatively simple. Or am I missing something blindingly obvious? How do I use Python to create and use an RF Ethernet interface in a way that is as versatile as the RF Serial interface solution?
Network interface simulation in Python?
1.2
0
1
2,780
11,239,467
2012-06-28T06:55:00.000
1
0
1
1
python
11,239,806
1
true
0
0
Much of the software that ships with Ubuntu is written in Python, so you should avoid removing system dependencies like this. You can run sudo apt-get install python, but the rest of the affected programs are probably gone; even if you reinstall them manually, you may hit random bugs and system failures. I think you should just re-install Ubuntu.
1
0
0
I removed Python from my computer, intending to reinstall it. However, after removing it, many services are gone from my Ubuntu 10.04 (e.g. Mozilla, Ubuntu Software Center and many applications from the System tab). How can I get all of them back? Thanks a lot..
Removing python
1.2
0
0
289
11,241,781
2012-06-28T09:33:00.000
20
1
0
0
python,unit-testing,jenkins,junit,xunit
11,463,624
6
false
0
0
I would second using nose. Basic XML reporting is now built in. Just use the --with-xunit command line option and it will produce a nosetests.xml file. For example: nosetests --with-xunit Then add a "Publish JUnit test result report" post build action, and fill in the "Test report XMLs" field with nosetests.xml (assuming that you ran nosetests in $WORKSPACE).
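As a sketch of what nose picks up, any plain unittest module in the workspace works; the module and test names here are invented for illustration:

```python
# test_example.py -- a plain unittest module. Running
# `nosetests --with-xunit` in this directory would collect it and
# write the results to nosetests.xml for Jenkins to publish.
import unittest


class TestArithmetic(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(2 + 2, 4)

    def test_string_upper(self):
        self.assertEqual("jenkins".upper(), "JENKINS")
```

No nose-specific imports are needed in the test module itself; nose discovers standard unittest cases by naming convention.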
2
150
0
How do you get Jenkins to execute python unittest cases? Is it possible to JUnit style XML output from the builtin unittest package?
Python unittests in Jenkins?
1
0
0
104,678
11,241,781
2012-06-28T09:33:00.000
4
1
0
0
python,unit-testing,jenkins,junit,xunit
11,241,965
6
false
0
0
I used nosetests. There are add-ons to output the XML for Jenkins.
2
150
0
How do you get Jenkins to execute python unittest cases? Is it possible to JUnit style XML output from the builtin unittest package?
Python unittests in Jenkins?
0.132549
0
0
104,678
11,242,387
2012-06-28T10:12:00.000
1
0
0
0
python,database,design-patterns,simulation
11,244,121
3
false
1
0
It sounds like you need to record more or less the same kinds of information for each case, so a relational database sounds like a good fit-- why do you think it's "not the proper way"? If your data fits in a collection of CSV files, you're most of the way to a relational database already! Just store in database tables instead, and you have support for foreign keys and queries. If you go on to implement an object-oriented solution, you can initialize your objects from the database.
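As a minimal sketch of the suggestion (table and column names are made up for illustration), the per-device-per-time-unit records map naturally onto a single SQLite table:

```python
import sqlite3

# In-memory database for illustration; pass a file path for real runs.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE samples (
        scenario_id INTEGER,   -- which simulation scenario
        device_id   INTEGER,
        t           INTEGER,   -- simulation time unit
        buffer_size INTEGER,
        signal_q    REAL
    )
""")
rows = [
    (1, 1, 0, 64, 0.91),
    (1, 1, 1, 70, 0.88),
    (1, 2, 0, 32, 0.75),
]
conn.executemany("INSERT INTO samples VALUES (?, ?, ?, ?, ?)", rows)

# The "final results" become queries instead of CSV re-parsing,
# e.g. mean signal quality per device:
means = conn.execute(
    "SELECT device_id, AVG(signal_q) FROM samples "
    "GROUP BY device_id ORDER BY device_id"
).fetchall()
```

The same table then feeds the matplotlib post-processing directly via fetched rows.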
2
7
0
I am working with some network simulator. After making some extensions to it, I need to make a lot of different simulations and tests. I need to record: simulation scenario configurations values of some parameters (e.g. buffer sizes, signal qualities, position) per devices per time unit t final results computed from those recorded values Second data is needed to perform some visualization after simulation was performed (simple animation, showing some statistics over time). I am using Python with matplotlib etc. for post-processing the data and for writing a proper app (now considering pyQt or Django, but this is not the topic of the question). Now I am wondering what would be the best way to store this data? My first guess was to use XML files, but it can be too much overhead from the XML syntax (I mean, files can grow up to very big sizes, especially for the second part of the data type). So I tried to design a database... But this also seems to me to be not the proper way... Maybe a mix of both? I have tried to find some clues in Google, but found nothing special. Have you ever had a need for storing such data? How have you done that? Is there any "design pattern" for that?
Preferred (or recommended) way to store large amounts of simulation configurations, runs values and final results
0.066568
0
0
2,845
11,242,387
2012-06-28T10:12:00.000
1
0
0
0
python,database,design-patterns,simulation
11,244,431
3
false
1
0
If your data structures are well-known and stable AND you need some of the SQL querying / computation features then a light-weight relational DB like SQLite might be the way to go (just make sure it can handle your eventual 3+GB data). Else - ie, each simulation scenario might need a dedicated data structure to store the results -, and you don't need any SQL feature, then you might be better using a more free-form solution (document-oriented database, OO database, filesystem + csv, whatever). Note that you can still use a SQL db in the second case, but you'll have to dynamically create tables for each resultset, and of course dynamically create the relevant SQL queries too.
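For the second case, here is a sketch of dynamically creating a dedicated table per result set (the scenario name and columns are invented; note that table and column names cannot be passed as SQL parameters, so they are assumed trusted here):

```python
import sqlite3


def make_result_table(conn, scenario, columns):
    """Create a dedicated table for one scenario's result set.

    `columns` is a list of (name, sqlite_type) pairs. Identifiers
    cannot be parameterized in SQL, so they are assumed to be trusted,
    programmatically generated names in this sketch.
    """
    cols = ", ".join("%s %s" % (name, typ) for name, typ in columns)
    conn.execute("CREATE TABLE %s (%s)" % (scenario, cols))


conn = sqlite3.connect(":memory:")
make_result_table(conn, "run_42", [("t", "INTEGER"), ("throughput", "REAL")])
conn.execute("INSERT INTO run_42 VALUES (?, ?)", (0, 12.5))
```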
2
7
0
I am working with some network simulator. After making some extensions to it, I need to make a lot of different simulations and tests. I need to record: simulation scenario configurations values of some parameters (e.g. buffer sizes, signal qualities, position) per devices per time unit t final results computed from those recorded values Second data is needed to perform some visualization after simulation was performed (simple animation, showing some statistics over time). I am using Python with matplotlib etc. for post-processing the data and for writing a proper app (now considering pyQt or Django, but this is not the topic of the question). Now I am wondering what would be the best way to store this data? My first guess was to use XML files, but it can be too much overhead from the XML syntax (I mean, files can grow up to very big sizes, especially for the second part of the data type). So I tried to design a database... But this also seems to me to be not the proper way... Maybe a mix of both? I have tried to find some clues in Google, but found nothing special. Have you ever had a need for storing such data? How have you done that? Is there any "design pattern" for that?
Preferred (or recommended) way to store large amounts of simulation configurations, runs values and final results
0.066568
0
0
2,845
11,243,256
2012-06-28T11:04:00.000
0
0
0
0
python,web.py
11,488,215
1
false
1
0
I'm not sure, but I'd switch to WSGI anyway; it's faster and easy to use. Do you get that error when running the built-in webserver?
1
1
0
when uploading a file with web.py, there's a exception " SystemError: error return without exception set" raised. here's traceback ... File "../web/template.py", line 882, in __call__ return BaseTemplate.__call__(self, *a, **kw) File "../web/template.py", line 809, in __call__ return self.t(*a, **kw) File "", line 193, in __template__ File "../web/webapi.py", line 276, in input out = rawinput(_method) File "../web/webapi.py", line 249, in rawinput a = cgi.FieldStorage(fp=fp, environ=e, keep_blank_values=1) File "../python2.7/cgi.py", line 508, in __init__ self.read_multi(environ, keep_blank_values, strict_parsing) File "../python2.7/cgi.py", line 632, in read_multi environ, keep_blank_values, strict_parsing) File "../python2.7/cgi.py", line 510, in __init__ self.read_single() File "../python2.7/cgi.py", line 647, in read_single self.read_lines() File "../python2.7/cgi.py", line 669, in read_lines self.read_lines_to_outerboundary() File "../python2.7/cgi.py", line 697, in read_lines_to_outerboundary line = self.fp.readline(1 """ def POST(self): x = web.input(myfile= {}) return x.myfile.file.read()
what's wrong with web.py? SystemError: error return without exception set
0
0
0
363
11,244,049
2012-06-28T11:53:00.000
1
1
0
0
java,python,jsp
11,244,145
3
false
1
0
It would be neater to expose your Python API as RESTful services that the JSP can access using Ajax to display data in the page. I'm specifically suggesting this because you said 'JSP', not 'Java'.
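A minimal sketch of the Python side using only the standard library — a tiny WSGI app returning JSON that a JSP page could fetch with Ajax. The endpoint name and payload are invented for illustration:

```python
import json


def api_app(environ, start_response):
    """A tiny WSGI app: serve /status as JSON, 404 anything else."""
    if environ.get("PATH_INFO") == "/status":
        body = json.dumps({"ok": True, "items": [1, 2, 3]}).encode("utf-8")
        start_response("200 OK", [("Content-Type", "application/json")])
    else:
        body = b"not found"
        start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [body]

# To actually serve it (the JSP page would then hit
# http://localhost:8000/status with XMLHttpRequest):
#   from wsgiref.simple_server import make_server
#   make_server("localhost", 8000, api_app).serve_forever()
```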
1
1
0
I want to create a UI which invokes my Python script. Can I do it using JSP? If so, can you please explain how? Or can I do it using some other language? I have gone through many posts related to it but could not find much. Please help me out — explanations using examples would be more helpful. Thanks in advance..
Is it possible to invoke a python script from jsp?
0.066568
0
0
8,657
11,245,439
2012-06-28T13:11:00.000
20
0
1
0
shell,ipython,undo
11,245,499
2
true
0
0
Ctrl-_ (underscore) or Ctrl-x Ctrl-u. If you deleted something with Ctrl-w/Ctrl-k and so on, you can just paste it back with Ctrl-y. See readline(1) for additional hotkeys.
1
9
0
Is there a keyboard command for undoing typing in iPython? Note: I am not talking about undoing the result of a command you've executed. Suppose I copied and pasted a few variable names as arguments into a long function call, and then realized they are the wrong arguments. Can I do an equivalent of ctrl-z or something that undoes the paste operation? Ctrl-z kills the iPython session, so not recommended.
How to undo typing (not command output) in iPython shell
1.2
0
0
5,481
11,248,073
2012-06-28T15:36:00.000
0
0
1
0
python,pip,virtualenv,python-packaging
65,819,257
30
false
0
0
I simply wanted to remove packages installed by the project, and not other packages I've installed (things like neovim, mypy and pudb which I use for local dev but are not included in the app requirements). So I did: cat requirements.txt | sed 's/=.*//g' | xargs pip uninstall -y which worked well for me.
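The sed step — stripping version pins so only bare package names reach pip uninstall — can also be sketched in Python, which handles a few more specifier forms (the function name is invented):

```python
def names_only(requirement_lines):
    """Strip version specifiers (==, >=, ~=, etc.) and comments from
    requirements-file lines, returning bare package names -- a rough
    equivalent of sed 's/=.*//g' with slightly broader handling."""
    names = []
    for line in requirement_lines:
        line = line.split("#", 1)[0].strip()  # drop inline comments
        if not line:
            continue
        for sep in ("==", ">=", "<=", "~=", ">", "<", "="):
            if sep in line:
                line = line.split(sep, 1)[0]
                break
        names.append(line.strip())
    return names
```

The resulting list could then be passed to `pip uninstall -y` via subprocess or written to a file for `pip uninstall -r`.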
3
971
0
I'm trying to fix up one of my virtualenvs - I'd like to reset all of the installed libraries back to the ones that match production. Is there a quick and easy way to do this with pip?
What is the easiest way to remove all packages installed by pip?
0
0
0
859,222
11,248,073
2012-06-28T15:36:00.000
0
0
1
0
python,pip,virtualenv,python-packaging
47,974,813
30
false
0
0
In the Windows command shell, the command pip freeze | xargs pip uninstall -y won't work (there is no xargs). So for those of you using Windows, here is an alternative: copy all the names of the installed packages from the output of pip freeze into a .txt file. Then go to the location of your .txt file and run the command pip uninstall -r textfile.txt
3
971
0
I'm trying to fix up one of my virtualenvs - I'd like to reset all of the installed libraries back to the ones that match production. Is there a quick and easy way to do this with pip?
What is the easiest way to remove all packages installed by pip?
0
0
0
859,222
11,248,073
2012-06-28T15:36:00.000
0
0
1
0
python,pip,virtualenv,python-packaging
70,313,854
30
false
0
0
You can also manually select and delete libraries from this folder: C:\Users\User\AppData\Local\Programs\Python\Python310\Lib\site-packages
3
971
0
I'm trying to fix up one of my virtualenvs - I'd like to reset all of the installed libraries back to the ones that match production. Is there a quick and easy way to do this with pip?
What is the easiest way to remove all packages installed by pip?
0
0
0
859,222
11,249,313
2012-06-28T16:47:00.000
1
0
0
1
python,google-app-engine,cookies,openid
11,249,560
1
true
1
0
users.get_current_user() is actually reading the cookies, so you don't need to do anything more to optimize it (you can easily verify this by deleting your cookies and then refreshing the page) — unless you want to store more information and have access to it without hitting the datastore on every request.
1
0
0
I am using OpenID as the login system for a Google App Engine website, and right now I am just passing the user info to every page using user = users.get_current_user(). Would using a cookie to do this be more efficient? (I know it would be easier than putting that in every single webpage.) Is there any special way to do it with Google App Engine? I already have a cookie counting visits, but I imagine it'll be a little different. Update: Could I set self.user = users.get_current_user() as a global variable and then pass in user=self.user on every page to have access to that variable? Thanks!
store openid user in cookie google appengine
1.2
0
0
191
11,252,864
2012-06-28T21:04:00.000
2
1
1
0
python,git,packaging
11,253,601
1
true
0
0
Personally, I only set files I intend to be executed as scripts as executable. Using a least-permissive model is a smart, if not ideal, design choice when it comes to security: if you don't need the permissions, don't use them. I don't see any reason why omitting the shebang is a bad idea, other than that if someone else wants to make the file executable they have two steps instead of one.
1
0
0
I am starting an open source Python library that my company expects will be used by all of our customers. Since I am a sucker for proper presentation and practices, I have a question about file modes as saved by git. However, I want to avoid turning this into a best-practice type of discussion discouraged by StackOverflow, so here the is question in a form seeking a concrete answer: Is there a reason why I shouldn't set Python examples in my library to be executable? I tend to set the executable flag on Python that I need to run and would prefer to do so (simply because it's generally slightly easier to type ./ than python), but I have noticed that most open source libraries differ from that in practice. I don't feel that such security should be manifested that way, but I want to make sure. I would not be setting library files to be executable, just example files or tests that I feel should be executable. As a related question, should library files that are never meant to be executed directly omit the hashbang (#!/usr/bin/env python) on the first line?
Is there a reason to not set Python files' modes as executable in an open source git repository?
1.2
0
0
95
11,258,057
2012-06-29T07:52:00.000
1
0
1
0
python,class,tkinter
11,258,706
1
true
0
1
I figured it out: frame_table.grid_size() returns the number of columns and rows, e.g. (7, 3). Sorry for the dull question!
1
0
0
Just a quick question.. Following opening a text file in a separate function (no classes used), I have a 'table' (i.e. a frame) that has n rows (depending on what is in the text file). As this number could be any number, is it possible to retrieve the number of rows afterwards? I have been given the task on the condition that I'm not to use classes, so I cannot access the variables etc. inside the opening function. Thanks.
Obtaining number of rows Tkinter
1.2
0
0
72
11,258,710
2012-06-29T08:48:00.000
1
0
1
1
python,multithreading,google-app-engine,python-2.7
11,259,344
2
false
0
0
You can't have "some thread safe and some not thread safe". That's impossible. If some code is not thread safe, then none of the code is thread safe. That's just how thread safety works.
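What the answer is getting at: once any shared mutable state is touched without synchronization, the whole request path is unsafe. A generic sketch of the usual fix — guarding shared state with a lock (this is plain Python threading, not anything App Engine-specific):

```python
import threading

counter = 0
counter_lock = threading.Lock()


def bump(n):
    """Increment the shared counter n times, safely."""
    global counter
    for _ in range(n):
        with counter_lock:  # without this, increments can be lost
            counter += 1


threads = [threading.Thread(target=bump, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter is now exactly 40000
```

If even one code path mutated `counter` without taking the lock, the whole program would be unsafe again — which is why thread safety is all-or-nothing.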
1
0
0
How can I detect whether the python27 runtime is running in thread-safe mode? For example, to warn that a module is not compatible, or to apply threading code if required. I want to port some code to python27 as thread safe and some as not thread safe, but I don't understand how it works in Google App Engine.
How to check if Google App Engine python27 runs thread safe mode or not?
0.099668
0
0
268