Dataset columns (ranges are min/max values for numeric columns, min/max lengths for string columns). The records that follow list their fields in this column order.

Column                              Type            Min        Max
Q_Id                                int64           2.93k      49.7M
CreationDate                        stringlengths   23         23
Users Score                         int64           -10        437
Other                               int64           0          1
Python Basics and Environment       int64           0          1
System Administration and DevOps    int64           0          1
DISCREPANCY                         int64           0          1
Tags                                stringlengths   6          90
ERRORS                              int64           0          1
A_Id                                int64           2.98k      72.5M
API_CHANGE                          int64           0          1
AnswerCount                         int64           1          42
REVIEW                              int64           0          1
is_accepted                         bool            2 classes
Web Development                     int64           0          1
GUI and Desktop Applications        int64           0          1
Answer                              stringlengths   15         5.1k
Available Count                     int64           1          17
Q_Score                             int64           0          3.67k
Data Science and Machine Learning   int64           0          1
DOCUMENTATION                       int64           0          1
Question                            stringlengths   25         6.53k
Title                               stringlengths   11         148
CONCEPTUAL                          int64           0          1
Score                               float64         -1         1.2
API_USAGE                           int64           1          1
Database and SQL                    int64           0          1
Networking and APIs                 int64           0          1
ViewCount                           int64           15         3.72M
12,761,517
2012-10-06T16:24:00.000
0
0
0
0
0
python,rsa,hex
0
12,761,645
0
1
0
true
0
0
Python's representation of your result as 0x1L indicates the number's type and value - however, it doesn't truncate the number at all. While it is stored as a long internally, its value is still just 1 in this case. (A short code sketch follows after this record.)
1
0
0
0
I've written an RSA implementation in Python and have now successfully encrypted with it; however, when it prints out the cipher, it comes out as '0x1L' every time. I believe this is a long number, but I do not know how to show the full number. If my code is required, I will post it in the comments section (it is quite long). Thanks
Full Hexadecimal Representation in Python?
1
1.2
1
0
0
399
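A short Python 2 sketch of the point made in the answer above: the trailing L is only part of how a long is displayed, and the full hexadecimal digits are still available.

    # Python 2: hex() of a long appends a trailing 'L', but the value is intact.
    value = 1L
    print(hex(value))              # '0x1L'
    print(format(value, 'x'))      # '1'  (no 0x prefix, no trailing L)
    print('%x' % value)            # '1'
    print(hex(value).rstrip('L'))  # '0x1'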
12,763,608
2012-10-06T20:31:00.000
0
0
0
0
0
python,machine-learning,recommendation-engine,latent-semantic-indexing,topic-modeling
0
14,583,682
0
2
0
false
0
0
"represent a user as the aggregation of all the documents viewed" : that might work indeed, given that you are in linear spaces. You can easily add all the documents vectors in one big vector. If you want to add the ratings, you could simply put a coefficient in the sum. Say you group all documents rated 2 in a vector D2, rated 3 in D3 etc... you then simply define a user vector as U=c2*D2+c3*D3+... You can play with various forms for c2, c3, but the easiest approach would be to simply multiply by the rating, and divide by the max rating for normalisation reasons. If your max rating is 5, you could define for instance c2=2/5, c3=3/5 ...
2
2
1
1
I'm trying to come up with a topic-based recommender system to suggest relevant text documents to users. I trained a latent semantic indexing model, using gensim, on the wikipedia corpus. This lets me easily transform documents into the LSI topic distributions. My idea now is to represent users the same way. However, of course, users have a history of viewed articles, as well as ratings of articles. So my question is: how to represent the users? An idea I had is the following: represent a user as the aggregation of all the documents viewed. But how to take into account the rating? Any ideas? Thanks
User profiling for topic-based recommender system
0
0
1
0
0
602
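A minimal numpy sketch of the weighted aggregation described in the answer above; the topic vectors and ratings are made-up stand-ins for real gensim LSI output.

    import numpy as np

    num_topics = 5
    max_rating = 5.0

    # (rating, LSI topic vector) pairs for the documents a user has viewed
    viewed = [
        (2, np.random.rand(num_topics)),
        (3, np.random.rand(num_topics)),
        (5, np.random.rand(num_topics)),
    ]

    user_vector = np.zeros(num_topics)
    for rating, doc_vector in viewed:
        user_vector += (rating / max_rating) * doc_vector  # c_i = rating / max_rating

    print(user_vector)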
12,763,608
2012-10-06T20:31:00.000
1
0
0
0
0
python,machine-learning,recommendation-engine,latent-semantic-indexing,topic-modeling
0
12,764,041
0
2
0
false
0
0
I don't think that works with LSA. But maybe you could do some sort of k-NN classification, where each user's coordinates are the documents viewed. Each object (= user) sends out radiation (intensity is inversely proportional to the square of the distance). The intensity is calculated from the ratings on the individual documents. Then you can place an object (user) in this hyperdimensional space and see which other users give off the most 'light'. But: can't Apache Lucene do all of that for you?
2
2
1
1
I'm trying to come up with a topic-based recommender system to suggest relevant text documents to users. I trained a latent semantic indexing model, using gensim, on the wikipedia corpus. This lets me easily transform documents into the LSI topic distributions. My idea now is to represent users the same way. However, of course, users have a history of viewed articles, as well as ratings of articles. So my question is: how to represent the users? An idea I had is the following: represent a user as the aggregation of all the documents viewed. But how to take into account the rating? Any ideas? Thanks
User profiling for topic-based recommender system
0
0.099668
1
0
0
602
12,775,844
2012-10-08T05:34:00.000
14
0
0
0
0
python,django,postgresql,sql-order-by
0
12,775,949
0
3
0
true
1
0
order_by can take multiple fields; I think order_by('score', '-create_time') will always return the queryset in the same order. (A small illustration of the tie-breaking idea follows after this record.)
1
8
0
0
I'd like to know how Django's order_by works if the given order_by field's values are same for a set of records. Consider I have a score field in DB and I'm filtering the queryset using order_by('score'). How will records having the same values for score arrange themselves? Every time, they're ordered randomly within the subset of records having equal score and this breaks the pagination at client side. Is there a way to override this and return the records in a consistent order? I'm Using Django 1.4 and PostgreSQL.
Django - How does order_by work?
0
1.2
1
0
0
16,934
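The answer above is about Django's order_by, but the tie-breaking idea is easy to see in plain Python: adding a second, unique sort key (in Django, order_by('score', 'id') or order_by('score', '-create_time')) makes the order of rows with equal scores deterministic.

    rows = [
        {'id': 3, 'score': 10},
        {'id': 1, 'score': 10},
        {'id': 2, 'score': 7},
    ]

    # Sorting by score alone leaves the two score=10 rows in an unspecified order
    # from the database's point of view; a tuple key pins it down.
    ordered = sorted(rows, key=lambda r: (r['score'], r['id']))
    print([r['id'] for r in ordered])   # [2, 1, 3]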
12,789,138
2012-10-08T20:40:00.000
2
0
0
0
0
python,attributes
0
12,789,190
0
2
0
false
0
0
Not all operating systems have the concept of a "hidden" attribute for files, and most of them (even counting all the different versions of Windows, there are still more forms of *nix out there) indicate it by having the first character of the filename be a period (.). On the OSes that do support such an attribute you must use some external API or tool to set it on the file. (A sketch of both approaches follows after this record.)
1
2
0
0
In Python, how do you make a particular file hidden? Or how do you set the file attribute as 'hidden' without using external API's/modules such as WIN32API, etc. Surely there is something in the standard libraries? As the os module does allow to set the "read" and 'write' attributes, it is very strange that there is no mention in the os docs of 'hidden'...
Hidden file attributes
0
0.197375
1
0
0
3,767
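A hedged sketch of both approaches mentioned in the answer above: the Win32 hidden attribute is reachable from the standard library via ctypes (ctypes.windll exists only on Windows), and elsewhere the dot-prefix convention applies.

    import ctypes
    import os

    FILE_ATTRIBUTE_HIDDEN = 0x02   # Win32 constant

    def hide(path):
        """Hide a file: set the hidden attribute on Windows, dot-prefix the name elsewhere."""
        if os.name == 'nt':
            # SetFileAttributesW is the Win32 call; ctypes reaches it without win32api.
            if not ctypes.windll.kernel32.SetFileAttributesW(u'%s' % path, FILE_ATTRIBUTE_HIDDEN):
                raise ctypes.WinError()
        else:
            head, tail = os.path.split(path)
            if not tail.startswith('.'):
                os.rename(path, os.path.join(head, '.' + tail))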
12,797,999
2012-10-09T10:26:00.000
48
0
0
0
0
python,django
0
12,798,019
0
3
0
true
1
0
No. It's not for making websites. Your sample just sounds like you want plain old HTML. Django is for creating web applications. That is, software, normally backed by a database, that includes some kind of interactivity, that operates through a browser. A Framework provides a structure and common methods for making this kind of software.
1
64
0
0
I heard a lot of people talking about Django on various forums. But I am having a very basic question : What is meant by Framework and why Django is used. After listening a lot about Django, I ran few chapters for Django (from Djangobook.com). After running these chapters, I am wondering how Django can be used to create a very simple website. (Website should have few pages like Home, Favorites, About, Contact linked to each other and will be providing static content). Can Django be used for creation of such website? I searched a lot on internet but couldn't find any relevant examples, I only encountered with the examples for creation of blog, forum sites etc. If Django can be used for creation of this website, what should be the approach. Can someone please explain this basic term "Framework" and its significance?
For what purpose Django is used for?
1
1.2
1
0
0
66,634
12,818,397
2012-10-10T11:35:00.000
1
0
0
0
1
python,gtk
0
12,827,853
0
2
0
false
0
1
I have currently ended up doing this: instead of GtkMenuToolButton I have a GtkToolItem with custom content; in that custom content I have a GtkMenuButton; inside that one, I delete the default GtkArrow and replace it with a 1x2 GtkGrid which holds a Label + GtkArrow. As a whole it does what I want =)
1
1
0
0
I am using GtkMenuToolButton and it has a button and a menu. When you click on the arrow the menu is opened. I'd like to make the button open that same menu as well. Simply emitting "show-menu" in the "clicked" callback did not work. Please help how to make this work.
How to make GtkMenuToolButton open the same menu when 'clicked' signal is emitted?
0
0.099668
1
0
0
213
12,821,201
2012-10-10T14:01:00.000
-1
0
1
0
0
python,nlp,nltk
0
12,821,336
0
4
0
false
0
0
I don't think there is a specific method in nltk to help with this. This isn't tough though. If you have a sentence of n words (assuming you're using word level), get all ngrams of length 1 to n, iterate through each of those ngrams and make them keys in an associative array, with the value being the count. It shouldn't be more than 30 lines of code; you could build your own package for this and import it where needed. (A short counting sketch follows after this record.)
1
14
1
0
I've read a paper that uses ngram counts as feature for a classifier, and I was wondering what this exactly means. Example text: "Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam" I can create unigrams, bigrams, trigrams, etc. out of this text, where I have to define on which "level" to create these unigrams. The "level" can be character, syllable, word, ... So creating unigrams out of the sentence above would simply create a list of all words? Creating bigrams would result in word pairs bringing together words that follow each other? So if the paper talks about ngram counts, it simply creates unigrams, bigrams, trigrams, etc. out of the text, and counts how often which ngram occurs? Is there an existing method in python's nltk package? Or do I have to implement a version of my own?
What are ngram counts and how to implement using nltk?
1
-0.049958
1
0
0
21,407
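A small sketch of the counting scheme the answer above outlines, with a Counter keyed by n-gram tuples; nltk.util.ngrams generates the same tuples if you prefer to use the package.

    from collections import Counter

    def ngrams(tokens, n):
        # consecutive n-tuples, e.g. n=2 gives word pairs
        return zip(*(tokens[i:] for i in range(n)))

    text = "Lorem ipsum dolor sit amet consetetur sadipscing elitr sed diam"
    tokens = text.lower().split()

    counts = Counter()
    for n in range(1, 4):          # unigrams, bigrams, trigrams
        counts.update(ngrams(tokens, n))

    print(counts.most_common(5))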
12,835,176
2012-10-11T08:20:00.000
2
0
1
0
0
python
0
12,835,628
0
2
0
false
0
0
A potential use case could be a "factory class" whose __new__ returns instances of different subclasses depending on its arguments. (A minimal sketch follows after this record.)
1
12
0
0
I understand what __new__ does (and how it's different from __init__) so I'm not interested in definitions, I'm interested in when and how to use __new__. The documentation says: In general, you shouldn't need to override __new__ unless you're subclassing an immutable type like str, int, unicode or tuple But I can't think of other cases to use __new__ or how to use it correctly (for example when subclassing an immutable type or why it's needed in this case). So, when, why and how do you need to use __new__? I'm interested in the use cases, not what it does (I know what it does).
What are the use cases for Python's __new__?
0
0.197375
1
0
0
1,383
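A minimal sketch of the "factory class" use of __new__ mentioned above: the hypothetical Shape picks a concrete subclass before the instance exists, something __init__ alone cannot do.

    class Shape(object):
        def __new__(cls, sides, *args, **kwargs):
            # Decide which concrete class to instantiate before __init__ runs.
            if cls is Shape:
                cls = {3: Triangle, 4: Square}.get(sides, cls)
            return super(Shape, cls).__new__(cls)

        def __init__(self, sides):
            self.sides = sides

    class Triangle(Shape):
        pass

    class Square(Shape):
        pass

    print(type(Shape(3)).__name__)   # Triangle
    print(type(Shape(4)).__name__)   # Square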
12,840,696
2012-10-11T13:29:00.000
2
0
0
0
0
python,youtube-api
0
12,842,420
0
1
0
false
0
0
That's not something that's supported as part of the public YouTube Data API.
1
7
0
1
I am uploading videos to YouTube by using YouTube Data API (Python client library). Is it possible to set monetizing for that video from API rather than going to my account on the YouTube website and manually setting monetization for that uploaded video? If yes, then how can I do it from API? I am unable to find it in the API documentation, and Googling doesn't help either.
Enable Monetization on YouTube video using YouTube API
1
0.379949
1
0
1
1,037
12,848,684
2012-10-11T21:19:00.000
4
0
0
0
0
python,oop,structure
0
12,849,650
0
3
0
true
0
1
"Engine is an object that contains data like images and rendering code" - this sounds like a God class: they're common in GUI code, and the problem you're having is a common effect of that. What is the conceptual relationship between the different stuff in Engine? Does it need to be tightly coupled, or could the Engine just coordinate between the State, a Renderer and some other stuff? However that bit breaks down, passing the Engine (or Renderer, or whatever) down through the state machine is the only way to avoid a global (or singleton) with your current architecture.
The usual way to break this dependency is to use something like an MVC (Model/View/Controller) pattern. Specifically, it sounds like the Engine is already roughly a Controller, and the State and Elements have the Model part covered. However, the Engine, State and Elements are also handling the rendering/presentation part: this introduces the coupling.
If you can find a way to publish the State changes to some observer (deliberately leaving that vague, because we don't want the State logic to depend too much on the details of the observer), you can make a Renderer listen to State (model) updates and draw them. Now, the observer can be anything you want, so it's easy to mock, and it's also decoupled from the Engine. (A bare-bones observer sketch follows after this record.)
1
3
0
0
I am having some issues making 'bigger than simple scripts and stuff' applications in Python. I have a class called WinMain, a class called Engine, one called State and another called Element. It's laid out like this: WinMain is the main class of the program, it has a mainLoop function and various other things. Engine is an object that contains data like images and rendering code. State is an object held by Engine that has its update method called each time Engine's step function is called. Elements are objects held by State that are things like gui buttons and pictures to render. Since my rendering functions are in Engine, how will I get my Elements (held by state) to render things? I suppose I could give each Element the instance of Engine, but that seems to be kind of hacky, because I'd have to do stuff like this: picture = Element(engine, (0, 0), 'gui_image') (not good) I'd rather do picture = Element((0, 0), 'gui_image') and still some how let the Element render things using Engine. This seems to be a major structural problem I have with most projects I start, and I can't seem to find a way around it (other than passing a honkload of variables through the arguments of classes and functions). How might I do this?
Python structural issue: Giving an 'engine' class to everything
0
1.2
1
0
0
467
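A bare-bones sketch of the observer idea from the answer above; the names (StateObservable, Renderer) are invented, and a real game would publish richer events.

    class StateObservable(object):
        """Minimal publish/subscribe: State publishes changes, a Renderer listens."""
        def __init__(self):
            self._listeners = []

        def subscribe(self, callback):
            self._listeners.append(callback)

        def publish(self, event, payload):
            for callback in self._listeners:
                callback(event, payload)

    class Renderer(object):
        def on_state_change(self, event, payload):
            print("render %s: %r" % (event, payload))

    state_events = StateObservable()
    renderer = Renderer()
    state_events.subscribe(renderer.on_state_change)

    # Inside State/Element code, with no reference to Engine or Renderer:
    state_events.publish("element_added", {"pos": (0, 0), "image": "gui_image"})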
12,851,527
2012-10-12T02:49:00.000
2
0
1
0
0
python,math,installation,createfile
0
12,851,575
0
1
0
true
0
0
Instead of writing a script that creates another script, a better way to solve the problem would be to write functions. For example, the first function would read the names of variables, etc. and return these data as a dictionary. Next, you would pass this dictionary into the second function, which, depending on the information stored in the dictionary, would read the remaining inputs and return another dictionary with the inputs. Then, you could use the data from the returned dictionary to solve the equation. (A rough sketch follows after this record.)
1
0
0
0
I have looked around for a while now and am still unable to find an answer to a method of doing this. I have made a few scripts to solve equations and now am looking for a way to write a script that will allow input of steps into an equation,number and name of variables etc. My idea is to make a script that will take all of these inputs and then write another script that will ask for the numbers in an equation and solve the equation. Any ideas of how to write the first script so that it can create a second? Also I forgot to mention that I want the newer script to be saved for later use/execution.
Python Script to Make Another Script
0
1.2
1
0
0
113
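A rough Python 2 sketch of the two-function approach suggested above; the prompts and the placeholder solve() are invented for illustration.

    def read_setup():
        # First step: gather equation metadata and return it as a dictionary.
        names = raw_input("Variable names (comma separated): ").split(",")
        return {"variables": [n.strip() for n in names]}

    def read_values(setup):
        # Second step: prompt for a number for each variable from the setup dict.
        return dict((name, float(raw_input(name + " = "))) for name in setup["variables"])

    def solve(values):
        return sum(values.values())   # placeholder for the real equation

    setup = read_setup()
    values = read_values(setup)
    print(solve(values))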
12,862,344
2012-10-12T15:27:00.000
0
0
0
0
0
python,signals,ipc
0
12,863,718
0
1
0
true
0
0
Signals are not designed to be general inter-process communication mechanisms that allow for passing data. They can't do much more than provide a notification. What the target process does in response can be fairly general (generating output to a particular file that the sender then knows to go look at, for example), but passing data directly back to the sender would require a different mechanism like a pipe, shared memory, message queue, etc. Also note that, in general, a process receiving a signal can't really determine who sent the signal, so it wouldn't know where to send a response anyway. (A sketch of the status-file approach follows after this record.)
1
2
0
0
I have a python script that can run for long time in the background, and am trying to find a way of getting a status update from it. Basically we're considering to send it a SIGUSR1 signal, and then have it report back a status update. Catching the signal in Python is not the issue, lots of information about that. But how to get back information to the process initiating the signal? It seems that there is no way to figure out the pid of the initiating process by the receiving process, which could provide a way to send information back. A single reply message is enough here (in the tune of 'busy uploading; at 55% now; will finish at such a time'); a continuing update would be fantastic but not necessary. What I've come up with is to write this data to a temporary file with predetermined name - has the issue of leaving stale files behind, and need some kind of clean-up routine then. But that sounds like a hack. Is there anything better available? The way the running process is signalled doesn't matter, it doesn't have to be kill -SIGUSR1 pid. Any way to communicate with it would do. As long as the communication can be initiated from a new process that's started after the main process runs, possibly running under as different user.
Have python process talk back on SIGUSR1 call
0
1.2
1
0
0
882
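A minimal sketch of the "write status to a well-known file" idea mentioned in the answer above; the path and the status fields are hypothetical.

    import json
    import signal
    import time

    STATUS_FILE = "/tmp/uploader.status"   # agreed-upon location the other process checks

    progress = {"state": "uploading", "percent": 0}

    def report_status(signum, frame):
        # A signal carries no payload and no reliable sender pid, so just dump
        # the current status where the interested process knows to look.
        with open(STATUS_FILE, "w") as fh:
            json.dump(progress, fh)

    signal.signal(signal.SIGUSR1, report_status)

    while True:                      # stand-in for the real long-running work
        progress["percent"] = min(100, progress["percent"] + 1)
        time.sleep(1)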
12,867,465
2012-10-12T21:38:00.000
1
0
1
0
1
python,recursion
0
12,869,565
0
2
0
false
0
1
I find it helps a LOT to express recursions in words. What the algorithm says is basically "what's visible at radius N is what's visible from radius N-1". Like, uh, the edge gets bigger.
1
0
0
0
I am attempting to teach myself python and have hit a rough spot once it has come to recursion. I have done the classic recursive functions (factorial, fibonacci numbers...) but I am going back over old code and trying to convert most of my iterative functions to recursive functions for practice. This is the wall that I have hit: I made a dungeon crawler a while back and I am trying to replace a for loop I used to reveal the squares near my sprite. So when the sprite is placed, he/she sees the tile he/she is on as well as the adjacent and diagonal squares (9 in total including the one the avatar is on).The other tiles making up the room are hidden. This I deemed view radius 1. For view radius 2, I wanted the char to see radius 1 squares plus all the tiles adjacent to those tiles. At the time I could not figure out how to do it with a for loop so I just implemented a simpler scheme. I feel this visibility function could be written recursively but I am having a hard time coming up with a base case and what my recursive step would be. My for loop just took avatar pos and iterated over a range to avatar pos + radius and I did that for the x,y coordinates. As far as translating this over to a recursive function I am really confused. I have done many searches trying to get a lead but only come up with complicated subjects such as: FOV using recursive shadowcasting which is way beyond me. Any help would be greatly appreciated.
Recursive Function To Help Clear Tiles (Field Of View)
0
0.099668
1
0
0
241
12,873,542
2012-10-13T13:26:00.000
3
0
1
1
0
python,python-idle
0
12,873,584
0
3
0
false
0
0
Indeed, the command to run a Python file should be run in the command prompt. Python should be in your PATH variable for this to work from any directory: when the Python folder is added to PATH you can call python everywhere in the command prompt, otherwise only in your Python install folder. The following is from the Python website: Windows has a built-in dialog for changing environment variables (the following guide applies to XP classical view): right-click the icon for your machine (usually located on your Desktop and called "My Computer") and choose Properties there. Then, open the Advanced tab and click the Environment Variables button. In short, your path is: My Computer ‣ Properties ‣ Advanced ‣ Environment Variables. In this dialog, you can add or modify User and System variables. To change System variables, you need non-restricted access to your machine (i.e. Administrator rights). Another way of adding variables to your environment is using the set command in a command prompt: set PYTHONPATH=%PYTHONPATH%;C:\My_python_lib. If you do it via My Computer, then look for the line named Path in Environment Variables and append your Python installation folder to its value.
2
4
0
0
I have installed the Enthought Python distribution on my computer, but I don't have any idea how to use it. I have PyLab and IDLE but I want to run .py files by typing the following command: python fileName.py I don't know where to write this command: IDLE, PyLab or Python.exe or Windows command prompt. When I do this in IDLE it says: SyntaxError: invalid syntax Please help me to figure this out.
How to run a Python project?
0
0.197375
1
0
0
41,336
12,873,542
2012-10-13T13:26:00.000
3
0
1
1
0
python,python-idle
0
12,873,556
0
3
0
true
0
0
Open a command prompt: Press ⊞ Win and R at the same time, then type in cmd and press ↵ Enter Navigate to the folder where you have the ".py" file (use cd .. to go one folder back or cd folderName to enter folderName) Then type in python filename.py
2
4
0
0
I have installed the Enthought Python distribution on my computer, but I don't have any idea how to use it. I have PyLab and IDLE but I want to run .py files by typing the following command: python fileName.py I don't know where to write this command: IDLE, PyLab or Python.exe or Windows command prompt. When I do this in IDLE it says: SyntaxError: invalid syntax Please help me to figure this out.
How to run a Python project?
0
1.2
1
0
0
41,336
12,890,137
2012-10-15T06:06:00.000
1
1
0
0
0
java,python,rmi,rpc,web2py
0
12,890,526
0
4
0
false
1
0
I'd be astonished if you could do it at all. Java RMI requires Java peers.
1
0
0
0
I have a remote method created via Python web2py. How do I test and invoke the method from Java? I was able to test if the method implements @service.xmlrpc but how do i test if the method implements @service.run?
Using Java RMI to invoke Python method
0
0.049958
1
0
1
2,053
12,893,264
2012-10-15T09:57:00.000
1
0
0
0
1
python,firefox,authentication,selenium,form-submit
0
12,893,410
0
1
0
false
0
0
Using driver.get("https://username:[email protected]/") should log you in directly, without the popup being displayed. What about this did not work for you? EDIT: I am not sure this will work, but after driver.get("https://username:[email protected]/") try accepting the alert. For the alert that is @driver.switch_to.alert.accept in Ruby or driver.switchTo().alert().accept(); in Java. (A Python sketch of the same idea follows after this record.)
1
2
0
0
I want to access with Selenium (through) Python, a URL that demands authentication. When visit the URL, manually a new authentication window pops up, on which I need to fill in a username and password. Only after clicking on “OK” this window disappears and I return to the original site. As I want to visit this URL on an interval base to download information and want to automatize this process in python. In my current effort I use Selenium, but none of the examples that I found seem to do what I need. Thinks I tried but do not work are: driver.get("https://username:[email protected]/") selenium.FireEvent("OK", "click") driver.find_element_by_id("UserName") I do not know the actual element id’s What I did manage is to load my Firefox profile that stores the authentication information, but I still need to confirm the authentication by clicking “ok”. Is there any way to prevent this screen to pop up? If not how to access this button on the authentication form, from which I cannot obtain id-information?
How to Submit Https authentication with Selenium in python
0
0.197375
1
0
1
3,449
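A Python sketch of the same idea, assuming the Selenium Python bindings (older releases spell the alert switch driver.switch_to_alert()); the URL is the question's own placeholder.

    from selenium import webdriver

    driver = webdriver.Firefox()
    # Credentials embedded in the URL, as suggested above.
    driver.get("https://username:[email protected]/")

    # If the basic-auth dialog still appears, try accepting it as an alert.
    try:
        driver.switch_to.alert.accept()
    except Exception:
        pass   # no dialog was present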
12,909,334
2012-10-16T07:14:00.000
-1
1
0
1
0
python
1
66,757,126
0
2
0
false
0
0
I used the same script, but my host failed to respond. My host is on a different network. [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
1
2
0
0
I am writing a python script to copy python(say ABC.py) files from one directory to another directory with the same folder name(say ABC) as script name excluding .py. In the local system it works fine and copying the files from one directory to others by creating the same name folder. But actually I want copy these files from my local system (windows XP) to the remote system(Linux) located in other country on which I execute my script. But I am getting the error as "Destination Path not found" means I am not able to connect to remote that's why. I use SSH Secure client. I use an IP Address and Port number to connect to the remote server. Then it asks for user id and password. But I am not able to connect to the remote server by my python script. Can Any one help me out how can I do this??
How to Transfer Files from Client to Server Computer by using python script?
0
-0.099668
1
0
1
9,452
12,936,533
2012-10-17T14:23:00.000
2
0
0
0
0
python,selenium,webdriver,urllib2,tcp-ip
0
12,940,958
0
3
0
false
0
0
In as much as a small amount of plastic explosive solves the problem of forgetting your house keys, I implemented a solution. I created a class that tracks a list of resources and the time they were added to it, blocks when a limit is reached, and removes entries when their timestamp passes beyond a timeout value. I then created an instance of this class, set up with a limit of 32768 resources and a timeout of 240 seconds, and had my test framework add an entry to the list every time webdriver.execute() is called or a few other actions (db queries, REST calls) are performed. It's not particularly elegant and it's quite arbitrary, but at least so far it seems to be keeping my tests from triggering port exhaustion while not noticeably slowing tests that weren't causing port exhaustion before. (A rough sketch of such a tracker follows after this record.)
1
2
0
0
I'm working on a suite of tests using selenium webdriver (written in Python). The page being tested contains a form that changes its displayed fields based upon what value is selected in one of its select boxes. This select box has about 250 options. I have a test (running via nose, though that's probably irrelevant) that iterates through all the options in the select box, verifying that the form has the correct fields displayed for each selected option. The problem is that for each option, it calls through selenium: click for the option to choose find_element and is_displayed for 7 fields find_elements for the items in the select box get_attribute and text for each option in the select box So that comes out to (roughly) 250 * (7 * 2 + 1 + 2 * 250), or 128,750 distinct requests to the webdriver server the test is running on, all within about 10 or 15 minutes. This is leading to client port exhaustion on the machine running the test in some instances. This is all running through a test framework that abstracts away things like how the select box is parsed, when new page objects are created, and a few other things, so optimizations in the test code either mean hacking that all to hell or throwing away the framework for this test and doing everything manually (which, for maintainability of our test code, is a bad idea). Some ideas I've had for solutions are: Trying to somehow pool or reuse the connection to the webdriver server Somehow tweaking the configuration of urllib2 or httplib at runtime so that the connections opened by selenium timeout or are killed more quickly System independent (or at least implementable for all systems with an OS switch or some such) mechanism for actively tracking and closing the ports being opened by selenium As I mentioned above, I can't do much to tweak the way the page is parsed or handled, but I do have control over subclassing/tweaking WebDriver or RemoteConnection any way I please. Does anyone have any suggestions on how to approach any of the above ideas, or any ideas that I haven't come up with?
tcp/ip port exhaustion with selenium webdriver
0
0.132549
1
0
1
2,907
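A rough sketch of the timestamp-tracking limiter described above; the numbers mirror the answer (32768 entries, 240 seconds) but everything else is invented.

    import time
    from collections import deque

    class ResourceThrottle(object):
        # Track timestamps of recent calls; block while more than `limit`
        # of them happened inside the last `timeout` seconds.

        def __init__(self, limit=32768, timeout=240):
            self.limit = limit
            self.timeout = timeout
            self.stamps = deque()

        def _expire(self):
            cutoff = time.time() - self.timeout
            while self.stamps and self.stamps[0] < cutoff:
                self.stamps.popleft()

        def add(self):
            self._expire()
            while len(self.stamps) >= self.limit:
                time.sleep(1)
                self._expire()
            self.stamps.append(time.time())

    throttle = ResourceThrottle()
    # call throttle.add() before each webdriver.execute(), DB query or REST call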
12,949,634
2012-10-18T07:48:00.000
0
0
1
0
1
python,regex,routing,tornado
0
12,950,281
0
2
0
true
1
0
What about this: r'/admin/?' or r'/admin/{0,1}'? Note that I'm only talking about the regex; I don't know whether this works in Tornado's routing. (A runnable sketch follows after this record.)
1
1
0
0
i currently have this in my routing: (r"/admin", AdminController.Index ), (r"/admin/", AdminController.Index ), how do i merge them with just one line and have admin and admin/ go to AdminController.Index? i know this could be achieved via regex, but it doesnt seem to work
Regex routing in Python using Tornado framework
0
1.2
1
0
0
1,761
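The r'/admin/?' pattern should work directly as a Tornado route; a minimal, runnable sketch (the handler name stands in for AdminController.Index):

    import tornado.ioloop
    import tornado.web

    class AdminIndexHandler(tornado.web.RequestHandler):
        def get(self):
            self.write("admin index")

    application = tornado.web.Application([
        (r"/admin/?", AdminIndexHandler),   # matches both /admin and /admin/
    ])

    if __name__ == "__main__":
        application.listen(8888)
        tornado.ioloop.IOLoop.instance().start()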
12,953,542
2012-10-18T11:26:00.000
5
0
0
0
0
python,simplehttpserver,webdev.webserver
0
25,786,569
0
3
0
false
1
0
The right way to do this, to ensure that the query parameters remain as they should, is to make sure you do a request to the filename directly instead of letting SimpleHTTPServer redirect to your index.html For example http://localhost:8000/?param1=1 does a redirect (301) and changes the url to http://localhost:8000/?param=1/ which messes with the query parameter. However http://localhost:8000/index.html?param1=1 (making the index file explicit) loads correctly. So just not letting SimpleHTTPServer do a url redirection solves the problem.
1
13
0
0
I like to use Python's SimpleHTTPServer for local development of all kinds of web applications which require loading resources via Ajax calls etc. When I use query strings in my URLs, the server always redirects to the same URL with a slash appended. For example /folder/?id=1 redirects to /folder/?id=1/ using a HTTP 301 response. I simply start the server using python -m SimpleHTTPServer. Any idea how I could get rid of the redirecting behaviour? This is Python 2.7.2.
Why does SimpleHTTPServer redirect to ?querystring/ when I request ?querystring?
0
0.321513
1
0
1
6,427
12,977,441
2012-10-19T15:22:00.000
5
0
1
0
0
python,python-3.x
0
12,977,721
0
3
0
false
0
0
You can use the -i interpreter option. python -c "import os" -i will import the os module and go to the interpreter read/eval loop. You can also put some statements (imports, definitions, etc) on a file and load it with python -i <file.py>
1
1
0
0
I want to bring up a python window (could be idle or cmd based) with some packages already imported by double clicking a python script. Is this possible? If so, how do I do it?
How can I bring up a python shell with packages already imported with a python script?
1
0.321513
1
0
0
115
13,017,421
2012-10-22T18:24:00.000
2
0
0
0
0
python,django
0
13,017,620
0
1
0
false
1
0
I would imagine that calling sleep() should block the execution of all Django code in most cases. However it might depend on the deployment architecture (e.g. gevent, gunicorn, etc). For instance, if you are using a server which fires a Django thread for each request, then no, it will not block all the code. In most cases, however, using something like Celery would likely be a much better solution because (1) you don't reinvent the wheel and (2) it has been tested.
1
5
0
0
In Django, if the view uses a sleep() function while answering a request, does this block the handling of the whole queue of requests? If so, how to delay an HTTP answer without this blocking behavior? Can we do that out-of-the-box and avoid using a job queue like Celery?
Is sleep() blocking the handling of requests in Django?
0
0.379949
1
0
0
1,668
13,049,515
2012-10-24T12:45:00.000
1
0
0
0
1
python,google-app-engine,webapp2,wtforms
0
13,051,668
0
1
0
true
1
0
I think this will work if the routes are part of the same app. But why not use a single handler with get and post and a method _create, which can be called (self._create instead of a redirect) by both get and post to render the template with the form? It is faster than a browser redirect and you can pass arguments in an easy way. (A small sketch follows after this record.)
1
1
0
0
I am having a problem with webapp2 and wtforms. More specifically I have defined two methods in two different handlers, called: create, which is a GET method listening to a specific route save, which is a POST method listening to another route In the save method I validate my form and if fails, I want to redirect to the create method via the redirect_to method, where I can render the template with the form. Is this possible with any way? I found an example on how this can be done if the same handler with get and post methods, but is this possible in methods of different handlers? Thanks in advance!
Webapp2 + WTForms issue: How to pass values and errors back to user?
1
1.2
1
0
0
348
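A small webapp2 sketch of the single-handler-plus-_create layout suggested above; the routes, the fake validation and the error dict are placeholders.

    import webapp2

    class CreateHandler(webapp2.RequestHandler):
        def get(self):
            self._create()

        def post(self):
            # e.g. form = MyForm(self.request.POST); if not form.validate(): ...
            self._create(errors={"name": "required"})   # hypothetical validation errors

        def _create(self, errors=None):
            # Render the same template from both methods instead of redirecting,
            # so field values and validation errors survive the round trip.
            self.response.write("form page, errors=%r" % (errors,))

    app = webapp2.WSGIApplication([
        ("/items/create", CreateHandler),
        ("/items/save", CreateHandler),
    ])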
13,059,891
2012-10-25T00:26:00.000
0
0
0
0
0
python,scroll,pygame,geometry-surface
0
13,068,085
0
3
0
false
0
1
I don't think so, but I have an idea. I'm guessing your background wraps horizontally and always to the right; then you could attach part of the beginning to the end of the background. For example, if you have a 10,000px background and your viewport is 1,000px, attach the first 1,000px to the end of the background, so you'll have an 11,000px background. Then when the viewport reaches the end of the background, you just move it to the 0px position and continue moving right.
3
2
0
0
I'm coding a game where the viewport follows the player's ship in a finite game world, and I am trying to make it so that the background "wraps" around in all directions (you could think of it as a 2D surface wrapped around a sphere - no matter what direction you travel in, you will end up back where you started). I have no trouble getting the ship and objects to wrap, but the background doesn't show up until the viewport itself passes an edge of the gameworld. Is it possible to make the background surface "wrap" around? I'm sorry if I'm not being very articulate. It seems like a simple problem and tons of games do it, but I haven't had any luck finding an answer. I have some idea about how to do it by tiling the background, but it would be nice if I could just tell the surface to wrap.
Wrapping a pygame surface around a viewport
1
0
1
0
0
1,379
13,059,891
2012-10-25T00:26:00.000
0
0
0
0
0
python,scroll,pygame,geometry-surface
0
13,096,965
0
3
0
false
0
1
I was monkeying around with something similar to what you described that may be of use. I decided to try using a single Map class which contained all of my Tiles, and I wanted only part of it loaded into memory at once, so I broke it up into Sectors (32x32 tiles) and limited it to having only 3x3 Sectors loaded at once. As my map scrolled to an edge, it would unload the Sectors on the other side and load in new ones. My Map class would have a Rect of all loaded Sectors, and my camera would have a Rect of where it was located. Each tick I would use those two Rects to find what part of the Map I should blit, and whether I should load in new Sectors. Once you start to change which Sectors are loaded, you have to shift the ones that remain. Each Sector had the following attributes:
1. Its Coordinate, with (0, 0) being the topleft-most possible Sector in the world.
2. Its Relative Sector Coordinate, with (0, 0) being the topleft-most loaded Sector and (2, 2) the bottom-right-most if 3x3 were loaded.
3. A Rect that held the area of the Sector.
4. A bool to indicate whether the Sector was fully loaded.
Each game tick would check that bool, and if the Sector was not fully loaded, call next() on a generator that would blit X tiles onto the Map surface. Each update would unload, load, or update an existing Sector. When an existing Sector was updated, its relative coordinate would shift; Sectors no longer in range would be unloaded and the new ones required would be created. After being created, each Sector would start a generator that would blit X tiles per update.
3
2
0
0
I'm coding a game where the viewport follows the player's ship in a finite game world, and I am trying to make it so that the background "wraps" around in all directions (you could think of it as a 2D surface wrapped around a sphere - no matter what direction you travel in, you will end up back where you started). I have no trouble getting the ship and objects to wrap, but the background doesn't show up until the viewport itself passes an edge of the gameworld. Is it possible to make the background surface "wrap" around? I'm sorry if I'm not being very articulate. It seems like a simple problem and tons of games do it, but I haven't had any luck finding an answer. I have some idea about how to do it by tiling the background, but it would be nice if I could just tell the surface to wrap.
Wrapping a pygame surface around a viewport
1
0
1
0
0
1,379
13,059,891
2012-10-25T00:26:00.000
0
0
0
0
0
python,scroll,pygame,geometry-surface
0
13,131,429
0
3
0
true
0
1
Thanks everyone for the suggestions. I ended up doing something a little different from the answers provided. Essentially, I made subsurfaces of the main surface and used them as buffers, displaying them as appropriate whenever the viewport included coordinates outside the world. Because the scrolling is omnidirectional, I needed to use 8 buffers, one for each side and all four corners. My solution may not be the most elegant, but it seems to work well, with no noticeable performance drop.
3
2
0
0
I'm coding a game where the viewport follows the player's ship in a finite game world, and I am trying to make it so that the background "wraps" around in all directions (you could think of it as a 2D surface wrapped around a sphere - no matter what direction you travel in, you will end up back where you started). I have no trouble getting the ship and objects to wrap, but the background doesn't show up until the viewport itself passes an edge of the gameworld. Is it possible to make the background surface "wrap" around? I'm sorry if I'm not being very articulate. It seems like a simple problem and tons of games do it, but I haven't had any luck finding an answer. I have some idea about how to do it by tiling the background, but it would be nice if I could just tell the surface to wrap.
Wrapping a pygame surface around a viewport
1
1.2
1
0
0
1,379
13,060,146
2012-10-25T01:00:00.000
3
0
1
0
0
python,windows,installation,windows-installer
0
13,060,316
0
2
0
false
0
0
You can do this with any of the installer applications out there. Each of the dependent installers has a silent install option, so your installer just needs to invoke the installers for each of the dependencies in the right order. I won't recommend any windows installer application in particular because I don't like any of them, but they will all do what you want. The other option you have is to use py2exe which can bundle everything into a single exe file that runs in its own python environment. The plus side to this is you don't have to worry about installing Python in the users environment and have the user potentially uninstall python and then have your app stop working because everything is in a standalone environment. Other ways that I have seen this done is with a custom exe written in whatever compiled Windows Language you prefer that does all this for you, but this takes a lot of work. You could also get the advantage of the py2exe route with a little work on an installer you write with either an installer app or a standalone exe that handles the install, by manually placing the python.exe, dll and related code in the proper directories relative to your application code. You may have to mess with your PYTHONPATH environment setting when your app starts to get everything working, but this way you don't have to worry about installing Python and whether the user already has Python installed or if they uninstall it because then you have the Python version you need bundled with your app. One thing to note is that if you are worried about size the Python installer itself is about 10 MB before any dependencies, but a lot of that is not relevant to an end user using your app, There is no Python Runtime Environment installer like there is a Java runtime Environment installer that just install what you need to run Python, you always get the development tools. Hope this helps a little.
1
1
0
0
I know nothing on this subject, but I need suggestions about the best tools or method for creating a setup program that installs python, some custom python modules, some other python modules such as PIL, and some EXE dependencies, all living on a network repository, on windows machines. In the repository are installers for python (msi file), PIL (exe file), the custom python modules (pyc files), and two windows executables (and exe file and a zip file). Any advice welcome.
how to write installer (installing python, python modules and other dependencies) for windows boxes?
0
0.291313
1
0
0
2,621
13,060,427
2012-10-25T01:40:00.000
1
0
0
0
0
python,sql,sorting,select
0
13,060,535
0
2
0
false
0
0
This is a very general question, but there are multiple things that you can do to possibly make your life easier.
1. CSV: very useful if you are storing data that is ordered in columns and you want easy-to-read text files.
2. SQLite: a database system that does not require a server (it uses a file instead) and is interacted with just like any other database system. For very large scale projects handling massive amounts of data it is not recommended, though.
3. MySQL: a database system that requires a server to interact with, but it can be tuned for very large scale projects as well as small ones.
There are many other systems, so I suggest you search around and find the perfect fit. However, if you want to mess around with SQLite or CSV, both the sqlite3 and csv modules ship in the standard library with Python 2.7 and 3.x, I believe. (A small sqlite3 sketch follows after this record.)
1
0
0
0
I have huge tables of data that I need to manipulate (sort, calculate new quantities, select specific rows according to some conditions and so on...). So far I have been using a spreadsheet software to do the job but this is really time consuming and I am trying to find a more efficient way to do the job. I use python but I could not figure out how to use it for such things. I am wondering if anybody can suggest something to use. SQL?!
sorting and selecting data
0
0.099668
1
1
0
97
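A tiny sqlite3 sketch of the sorting and selecting recommended above; the table name and values are invented.

    import sqlite3

    conn = sqlite3.connect(":memory:")   # use a filename to keep the data on disk
    conn.execute("CREATE TABLE measurements (name TEXT, value REAL)")
    conn.executemany("INSERT INTO measurements VALUES (?, ?)",
                     [("a", 3.2), ("b", 1.5), ("c", 2.8)])

    # Select specific rows and sort them, instead of doing it by hand in a spreadsheet.
    query = "SELECT name, value FROM measurements WHERE value > ? ORDER BY value DESC"
    for row in conn.execute(query, (2.0,)):
        print(row)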
13,090,476
2012-10-26T16:06:00.000
1
0
1
0
0
google-app-engine,memory-management,python-2.7,multi-tenant
0
13,126,025
0
1
1
false
1
0
I agree with Nick: there should be no Python code in the tenant-specific zip. To solve the memory issue I would cache most of the pages in the datastore; to serve them you don't need to have all tenants loaded in your instances. You might also want to look into pre-generating HTML views on save rather than on request.
1
1
0
0
i have a multitenant app with a zipped package for each tenant/client which contains the templates and handlers for the public site for each of them. right now i have under 50 tenants and its fine to keep the imported apps in memory after the first request to that specific clients domain. this approach works well but i have to redeploy the app with the new clients zipped package every time i make changes and/or a new client gets added. now im working to make it possible to upload those packages via web upload and store them into the blobstore. my concerns now are: getting the packages from the blobstore is of course slower than importing a zipped package in the filesystem. but this is not the biggest issue. how do i load/import a module that is not in the filesystem and has no path? if every clients package is around 1mb its not a problem as long as the client base is low but what if it raises to 1k or even more? obviously there i dont have enough memory to store a few GB of data in memory. what is the best way to deal with this? if i use the instance memory to store the previously tenant package in memory how would invalidate the data in memory if there would be a newly uploaded package? i would appreciate some thougts about how to deal this kind of situation.
zipped packages and in memory storage strategies
1
0.197375
1
0
0
57
13,127,381
2012-10-29T18:16:00.000
1
0
0
0
0
python,chipmunk,pymunk
0
13,130,409
0
1
0
true
0
1
There are a couple of unsafe methods to modify a shape. Right now (v3.0) pymunk only supports updates of the Circle shape and the Segment shapes. However, I just committed a method to update the Poly shape as well, available in latest trunk of pymunk. If you dont want to run latest trunk I suggest you instead just replace the shape instead of modifying it. The end result will be the same anyway. (The reason why modification of shapes is discouraged is that its very hard to do a good simulation, the resize happen magically in one instant. For example, how should a collision between of a small object that after a resize would lie inside a large object be resolved?)
1
1
0
0
I am just getting started with pymunk, and I have a problem that I wasn't able to find a solution to in the documentation. I have a character body that changes shape during a specific animation. I know how to attach shapes to a physics body, but how do I change them? Specifically, I need to change the box to a smaller one temporarily. Is that possible?
Changing the shape of a pymunk/Chipmunk physics body
0
1.2
1
0
0
898
13,131,699
2012-10-30T00:41:00.000
1
0
1
1
1
python,windows,path,os.system
1
13,140,093
0
1
0
true
0
0
I think you can add the location of the files in the PATH environment variable. Follow the steps: Go to My Computer->Right click->Properties->Advanced System Settings->Click Environmental Variables. Now click PATH and then click EDIT. In the variable value field, go to the end and append ';' (without quotes) and then add the absolute path of the .exe file which you want to run via your program.
1
1
0
0
I am trying to create a Python program that uses the os.system() function to create a new process (application) based on user input... However, this only works when the user inputs "notepad.exe". It does not work, for instance, when a user inputs "firefox.exe". I know this is a path issue because the error says that the file does not exist. I assume then that Windows has some default path setup for notepad that does allow notepad to run when I ask it to? So this leads to my question: is there any way to programmatically find the path to any application a user inputs, assuming it does in fact exist? I find it hard to believe the only way to open a file is by defining the entire path at some point. Or maybe there's a way that Windows does this for me that I do not know how to access? Any help would be great, thanks!
Python - get file path programmatically?
1
1.2
1
0
0
598
13,134,353
2012-10-30T07:19:00.000
4
0
0
0
1
python,encoding,openerp
0
13,135,341
0
1
0
false
1
0
The comment # -*- coding: utf-8 -*- tells the python parser the encoding of the source file. It affects how the bytecode compiler converts unicode literals in the source code. It has no effect on the runtime environment. You should explicitly define the encoding when converting strings to unicode. If you are getting UnicodeDecodeError, post your problem scenario and I'll try to help.
1
0
0
0
Do you guys know how to change the default encoding of an openerp file? I've tried adding # -*- coding: utf-8 -*- but it doesn't work (is there a setup that ignore this command? just a wild guess). When I try to execute sys.getdefaultencoding() still its in ASCII. Regards
Setting default encoding Openerp/Python
0
0.664037
1
0
0
803
13,140,185
2012-10-30T13:34:00.000
0
0
1
0
0
python,lxml
0
13,355,413
0
2
0
true
0
0
iterparse() is strictly forward-only, I'm afraid. If you want to read a tree in reverse, you'll have to read it forward, while writing it to some intermediate store (be it in memory or on disc) in some form that's easier for you to parse backwards, and then read that. I'm not aware of any stream parsers that allow XML to be parsed back-to-front. Off the top of my head, you could use two files, one containing the data and the other an index of offsets to the records in the data file. That would make reading backwards relatively easy once it's been written.
1
1
0
0
I am parsing a large file (>9GB) and am using iterparse of lxml in Python to parse the file while clearing as I go forward. I was wondering, is there a way to parse backwards while clearing? I could see I how would implement this independently of lxml, but it would be nice to use this package. Thank you in advance!
lxml, parsing in reverse
1
1.2
1
0
1
1,552
13,141,796
2012-10-30T15:01:00.000
0
0
1
0
0
python,app-store,semantics,feedback,google-prediction
0
13,392,517
0
1
0
false
0
0
You can use the Google Prediction API to characterize your comments as important or unimportant. What you'd want to do is manually classify a subset of your comments. Then you upload the manually classified model to Google Cloud Storage and, using the Prediction API, train your model. This step is asynchronous and can take some time. Once the trained model is ready, you can use it to programmatically classify the remaining (and any future) comments. Note that the more comments you classify manually (i.e. the larger your training set), the more accurate your programmatic classifications will be. Also, you can extend this idea as follows: instead of a binary classification (important/unimportant), you could use grades of importance, e.g. on a 1-5 scale. Of course, that entails more manual labor in constructing your model so the best strategy will be a function of your needs and how much time you can spend building the model.
1
1
0
0
I have received tens of thousands of user reviews on the app. I know the meaning of many of the comments are the same. I can not read all these comments. Therefore, I would like to use a python program to analyze all comments, Identify the most frequently the most important feedback information. I would like to ask, how can I do that? I can download an app all comments, also a preliminary understanding of the Google Prediction API.
How to automatic classification of app user reviews?
0
0
1
0
0
235
13,148,512
2012-10-30T22:29:00.000
3
0
0
1
1
python,google-app-engine
0
13,638,216
1
2
0
false
1
0
A similar issue happens with appcfg.py in SDK 1.7.3, where it sometimes skips uploading some files. It looks like this only happens if appcfg.py is run under Python 2.7. The workaround is to simply run appcfg.py under Python 2.5; then the upload works reliably. The code uploaded can still be 2.7-specific - it is only necessary to revert to 2.5 for the step of running the uploader function in appcfg.py.
1
3
0
0
I just updated to SDK 1.7.3 running on Linux. At the same time I switched to the SQLite datastore stub, suggested by the depreciation message. After this, edits to source files are not always detected, and I have to stop and restart the SDK after updating, probably one time in ten. Is anyone else seeing this? Any ideas on how to prevent it? UPDATE: Changes to python source files are not being detected. I haven't made any modifications to yaml files, and I believe that jinja2 template file modifications are being detected properly. UPDATE: I added some logging to the dev appserver and found that the file I'm editing is not being monitored. Continuing to trace what is happening.
Appengine SDK 1.7.3 not detecting updated files
0
0.291313
1
0
0
294
13,155,509
2012-10-31T10:07:00.000
-1
0
1
0
0
windows,pdf,python-sphinx
0
52,086,907
0
3
0
false
0
0
As you have figured out: use Sphinx to generate LaTeX source, and then run it through a LaTex compiler to produce your PDF. Instead of troubling yourself with installing LaTeX (which can be daunting) and getting an editor set up, I suggest that you use one of the on-line LaTeX services. You then only have to create a project in ShareLaTeX or Overleaf, for example (which are in the process of merging), upload the contents of the Sphinx build\latex directory, compile on-line, and download the finished PDF. This works reasonably well, but since the output targets are very different (HTML vs a formal document), you may have to fiddle with the reST to get things the way you like it.
1
16
0
1
I am using Sphinx to create documentation for my Python project in Windows. I need to generate PDF documentation. I found many explanation how to do this in Linux, but no good explanation how to do this in Windows. As far as i understand I need to create Latex format with Sphinx, and than use Texworks to convert Latex to PDF. Can someone provide step by step explanation how can I do this, assuming I created documentation in Latex format and installed Texworks?
How to create PDF documentation with Sphinx in Windows
0
-0.066568
1
0
0
8,815
13,156,730
2012-10-31T11:21:00.000
0
0
0
0
0
python,excel,xlrd,xlwt
0
13,998,563
0
2
0
false
0
0
Working on 2 cells among tens of thousands is quite meager; normally one would iterate over rows x columns. (A sketch of such an iteration with xlrd/xlutils follows after this record.)
1
1
0
0
I am using the modules xlwd, xlwt and xlutil to do some Excel manipulations in Python. I am not able to figure out how to copy the value of cell (X,Y) to cell (A,B) in the same sheet of an Excel file in Python. Could someone let me know how to do that?
Copying value of cell (X,Y) to cell (A,B) in same sheet of an Excel file using Python
0
0
1
1
0
472
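A hedged sketch of the row x column iteration alluded to above, reading with xlrd and writing through xlutils.copy (which wraps xlwt); the filenames and indices are examples.

    import xlrd
    from xlutils.copy import copy

    book = xlrd.open_workbook("input.xls")
    sheet = book.sheet_by_index(0)

    out_book = copy(book)                # writable copy of the same workbook
    out_sheet = out_book.get_sheet(0)

    for rowx in range(sheet.nrows):
        for colx in range(sheet.ncols):
            value = sheet.cell_value(rowx, colx)
            out_sheet.write(rowx, colx, value)   # copy (X, Y) to (A, B) by changing the target indices

    out_book.save("output.xls")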
13,160,217
2012-10-31T14:28:00.000
2
1
0
0
0
python,django,emacs,ide
0
13,160,450
0
3
0
false
1
0
I also switched from Eclipse to Emacs and I must say that after adjusting to more text-focused ways of exploring code, I don't miss this feature at all. In Emacs, you can just open a shell prompt (M-x shell). Then run IPython from within the Emacs shell and you're all set. I typically split my screen in half horizontally and make the bottom window thinner, so that it's like the Eclipse console used to be. I added a feature in my .emacs that lets me "bring to focus" the bottom window and swap it into the top window. So when I am coding, if I come across something where I want to see the source code, I just type C-x c to swap the IPython shell into the top window, and then I type %psource < code thing > and it will display the source. This covers 95%+ of the use cases I ever had for quickly getting the source in Eclipse. I also don't care about the need to type C-x b or C-x C-f to open the code files. In fact, after about 2 or 3 hours of programming, I find that almost every buffer I could possibly need will already be open, and I just type C-x b < start of file name > and then tab-complete it. Since I have become more proficient at typing and not needing to move attention away to the mouse, I think this is now actually faster than the "quick" mouse-over plus F3 tactic in Eclipse. And to boot, having IPython open at the bottom is way better than the non-interactive Eclipse console. And you can use things like M-p and M-n to get the forward-backward behavior of IPython in terms of going back through commands. The one thing I miss is tab completion in IPython. And for this, I think there are some add-ons that will do it but I haven't invested the time yet to install them. Let me know if you want to see any of the elisp code for the options I mentioned above.
1
5
0
0
I'm looking into emacs as an alternative to Eclipse. One of my favorite features in Eclipse is being able to mouse over almost any python object and get a listing of its source, then clicking on it to go directly to its code in another file. I know this must be possible in emacs, I'm just wondering if it's already implemented in a script somewhere and, if so, how to get it up and running on emacs. Looks like my version is Version 24.2. Also, since I'll be doing Django development, it would be great if there's a plugin that understands Django template syntax.
Link to python modules in emacs
1
0.132549
1
0
0
314
13,161,659
2012-10-31T15:41:00.000
4
0
0
1
0
python,windows-7,command-line,copy,robocopy
0
26,222,233
0
5
0
false
0
0
Like halfs13 said, use subprocess, but you might need to format it like so:

    from subprocess import call
    call(["robocopy", "fromdir", "todir", "/S"])

Otherwise it may read the source as everything. (A slightly fuller sketch with a log file follows after this record.)
1
1
0
0
I am trying to move multiple large folders (> 10 Gb , > 100 sub folders, > 2000 files ) between network drives. I have tried using shutil.copytree command in python which works fine except that it fails to copy a small percentage (< 1 % of files ) for different reasons. I believe robocopy is the best option for me as i can create a logfile documenting the transfer process. However as i need to copy > 1000 folders manual work is out of question. So my question is essentially how can i call robocopy (i.e. command line ) from within a python script making sure that logfile is written in an external file. I am working on a Windows 7 environment and Linux/Unix is out of question due to organizational restrictions. If someone has any other suggestions to bulk copy so many folders with a lot of flexibility they are welcome.
How can i call robocopy within a python script to bulk copy multiple folders?
0
0.158649
1
0
0
16,661
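A slightly fuller sketch of the subprocess call above, adding robocopy's /LOG option so the transfer report ends up in a file; the paths are placeholders.

    import subprocess

    src = r"\\server\share\source"
    dst = r"\\server\share\destination"

    # /S copies subfolders, /LOG writes the transfer report to a file.
    result = subprocess.call(["robocopy", src, dst, "/S", "/LOG:transfer.log"])

    # robocopy exit codes below 8 mean success (possibly with skipped or extra files).
    if result >= 8:
        print("robocopy reported errors, see transfer.log")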
13,162,409
2012-10-31T16:21:00.000
1
1
1
0
1
python,nlp,classification,tagging,folksonomy
0
13,162,597
0
4
0
false
0
0
I am not an expert but it seems like you really need to define a notion of "key term", "relevance", etc, and then put a ranking algorithm on top of that. This sounds like doing NLP, and as far as I know there is a python package called NLTK that might be useful in this field. Hope it helps!
1
2
0
0
Not sure how to phrase this question properly, but this is what I intend to achieve using the hypothetical scenario outlined below - A user's email to me has just the SUBJECT and BODY, the subject being the topic of email, and the body being a description of the topic in just one paragraph of max 1000 words. Now I would like to analyse this paragraph (in the BODY) using some computer language (python, maybe), and then come up with a list of most important words from the paragraph with respect to the topic mentioned in the SUBJECT field. For example, if the topic of email is say iPhone, and the body is something like "the iPhone redefines user-interface design with super resolution and graphics. it is fully touch enabled and allows users to swipe the screen" So the result I am looking for is a sort of list with the key terms from the paragraph as related to iPhone. Example - (user-interface, design, resolution, graphics, touch, swipe, screen). So basically I am looking at picking the most relevant words from the paragraph. I am not sure on what I can use or how to use to achieve this result. Searching on google, I read a little about Natural Language Processing and python and classification etc. I just need a general approach on how to go about this - using what technology/language, which area I have to read on etc.. Thanks! EDIT::: I have been reading up in the meantime. To be precise, I am looking at HOW TO do this, using WHAT TOOL: Generate related tags from a body of text using NLP which are based on synonyms, morphological similarity, spelling errors and contextual analysis.
picking the most relevant words from a paragraph
1
0.049958
1
0
0
1,918
13,173,029
2012-11-01T07:52:00.000
0
0
0
0
0
python,c,wxpython,pyqt
0
13,175,997
0
3
0
false
0
1
If you have the option between C and C# (Sharp), then go with C# and use Visual Studio; you can build the GUI by dragging and dropping components easily. If you want to do something in Python, look up wxPython. Java has a built-in GUI toolkit known as Swing. You'll need some tutorials, but unless this program needs to be portable, just go with C# and build it in 10 minutes. Also, you can write your code in C and export it as a Python module which you can load from Python. It's not very complicated to set up some C functions and have a Python GUI which calls them. To achieve this you can use SWIG, Pyrex, or Boost.
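One further route for the "Python GUI calling C" case is ctypes from the standard library. This is only a sketch: the library name libengine.so and the function compute_total are hypothetical stand-ins for the existing C code, which would first be built as a shared library (e.g. gcc -shared -fPIC -o libengine.so engine.c):
import ctypes

lib = ctypes.CDLL("./libengine.so")                    # load the compiled C code
lib.compute_total.argtypes = [ctypes.c_int, ctypes.c_int]
lib.compute_total.restype = ctypes.c_int               # declare the C signature for ctypes

def on_button_clicked():
    # a wxPython or PyQt button handler would simply call into the C library like this
    result = lib.compute_total(2, 3)
    print("C function returned", result)

on_button_clicked()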
1
6
0
0
My friend has an application written in C that comes with a GUI made using GTK under Linux. Now we want to rewrite the GUI in python (wxpython or PyQT). I don't have experience with Python and don't know how to make Python communicate with C. I'd like to know if this is possible and if yes, how should I go about implementing it?
Can C programs have Python GUI?
0
0
1
0
0
4,672
13,176,027
2012-11-01T11:24:00.000
2
0
1
0
0
python,client-server,code-organization,project-organization
0
13,176,078
0
1
1
true
0
0
Yes, your project will be a package. A module is a collection of related code. Most non-trivial projects will be a collection of modules in a package (potentially with sub-packages).
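As a purely illustrative example of what that can look like for a client-server project, here is one possible layout; every name below is hypothetical:
fileshare/                  # repository root
    setup.py                # optional packaging metadata
    fileshare/              # the package itself
        __init__.py         # marks the directory as a package
        server.py
        client.py
        common/             # sub-package for shared code
            __init__.py
            protocol.py
    scripts/
        run_server.py       # thin entry point: from fileshare.server import main; main()
Code inside the package can then use imports like from fileshare.common import protocol.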
1
2
0
1
Yesterday I started an important Python project and since then I've been searching for documentation on how to organize the code to have a "high-quality" project. There is a lot of articles and official documentation about how to organize packages and modules but, as I'm very new to this language, I think that is not my case. The project is a client-server platform to distribute files in a local network (ok, is a lot more than this but its the basic idea). The thing is that is not going to be a module and I think that is not a package. At least not as described in the Python documentation: Packages are a way of structuring Python’s module namespace by using “dotted module names” I searched too in Git to see what popular project do to organize its code but most of them are modules and the rest... I don't even know how to run them. So the question is, what is my code (module, package, ...) and which is the best way to organize it? Do you know any good article about this? Thank you.
Python project organization
0
1.2
1
0
0
415
13,177,087
2012-11-01T12:28:00.000
0
0
0
0
0
python,django,django-views,subprocess
0
13,177,553
0
2
1
false
1
0
The easiest approach, I think, will be to use AJAX to start the simulator. The response for the start request can then be shown on the same page. However, you will still have to think about how to pause, resume and stop a simulator started by an earlier request, i.e. how to manage and manipulate the state of the simulator. Maybe you want to keep that state in the DB.
1
1
0
0
I have a HTML page which has four buttons Start, Stop, Pause and Resume in it. The functionality of the buttons are: Start Button: Starts the backend Simulator. (The Simulator takes around 3 minutes for execution.) Stop Button: Stops the Simulator. Pause Button: Pauses the Simulator. Resume Button: Resumes the Simulator from the paused stage. Each of the button when clicked goes on to calling a separate view function. The problem I'm facing is that when I click the start button, it starts up the Simulator through a function call in the Python view. But as I mentioned that the simulator takes around 3 minutes for completing it's execution. So, for the 3 minutes my UI is totally unresponsive. I cannot press Stop, Pause or Resume button untill the current view of Django is rendered. So what is the best way to approach this problem ? Shall I spawn a non-blocking process for the Simulator and If so how can I get to know after the view has rendered that the new spawned process has completed it's execution.
Getting Responsive Django View
0
0
1
0
0
457
13,203,601
2012-11-02T21:59:00.000
10
0
1
0
0
python,list,big-o
0
13,203,622
0
3
0
false
0
0
For a list of size N, and a slice of size M, the iteration is actually only O(M), not O(N). Since M is often << N, this makes a big difference. In fact, if you think about your explanation, you can see why. You're only iterating from i_1 to i_2, not from 0 to i_1 and then from i_1 to i_2.
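A quick, rough way to see this empirically is to time a small slice and a large slice taken from the same list; only the slice length should matter (absolute numbers depend on the machine):
import timeit

setup = "lst = list(range(1000000))"
print(timeit.timeit("lst[500:600]", setup=setup, number=100))      # tiny slice: M = 100
print(timeit.timeit("lst[0:500000]", setup=setup, number=100))     # huge slice: M = 500000
The second line is dramatically slower even though N, the size of the list, is identical in both cases.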
1
61
1
0
Say I have some Python list, my_list which contains N elements. Single elements may be indexed by using my_list[i_1], where i_1 is the index of the desired element. However, Python lists may also be indexed my_list[i_1:i_2] where a "slice" of the list from i_1 to i_2 is desired. What is the Big-O (worst-case) notation to slice a list of size N? Personally, if I were coding the "slicer" I would iterate from i_1 to i_2, generate a new list and return it, implying O(N), is this how Python does it? Thank you,
Big-O of list slicing
0
1
1
0
0
47,961
13,206,304
2012-11-03T05:31:00.000
0
0
1
0
0
python,image,imagemagick
1
13,206,430
0
1
0
false
0
0
You mentioned that you were having problems installing PIL on a Mac. Have you considered using Macports?
1
0
0
0
I have a group of .png files where most of the image is transparent (alpha channel), but there is image in the middle (non-transparent pixels) that I need to extract. What I need to do is crop the image down to just the non-transparent pixels, but I need to know how many pixels were cropped off the left and bottom so when it comes time to render the cropped images, it's position can be adjusted back to were it was in the larger image. Is there a way to do the cropping and get the x,y offset using ImageMagick? I know how to crop the .png file, but the location of the non-transparent image within the larger image is lost and I need this information. It seems I can do this using PIL and python, but getting PIL installed on a Mac is proving to be a hair pulling experience. I've spent hours trying to get rid of the jpeg_resync_to_restart errors and it seems everyone has a different solution that worked for them, but none of them work for me... so I've given up on PIL. ImageMagick is already installed and working. Is there another set of tools I can call from a bash or python script that will do what I need? This isn't just a one-time operation I need to preform, so I need a script that can be run over and over when the source .png files change. Thanks.
Cropping away transparent pixels but preserving the offset
0
0
1
0
0
247
13,212,673
2012-11-03T19:28:00.000
1
1
0
0
0
python,computer-forensics
0
13,214,640
0
1
0
false
0
0
Your problem can be formulated as "how do I search in a very long file with no line structure." It's no different from what you'd do if you were reading line-oriented text one line at a time: Imagine you're reading a text file block by block, but have a line-oriented regexp to search with; you'd search up to the last complete line in the block you've read, then hold on to the final incomplete line and read another block to extend it with. So, you don't start afresh with each new block read. Think of it as a sliding window; you only advance it to discard the parts you were able to search completely. Do the same here: write your code so that the strings you're matching never hit the edge of the buffer. E.g., if the header you're searching for is 100 bytes long: read a block of text; check if the complete pattern appears in the block; advance your reading window to 100 bytes before the end of the current block, and add a new block's worth of text after it. Now you can search for the header without the risk of missing it. Once you find it, you're extracting text until you see the stop pattern (the footer). It doesn't matter if it's in the same block or five blocks later: your code should know that it's in extracting mode until the stop pattern is seen.
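A rough sketch of that sliding-window idea in Python; the HEADER/FOOTER byte strings, block size and device path are placeholder assumptions, and a real tool would also cap the buffer in case a header is never followed by a footer:
HEADER = b"ABC"
FOOTER = b"XYZ"
BLOCK = 64 * 1024 * 1024                  # read 64 MB at a time
OVERLAP = max(len(HEADER), len(FOOTER))   # keep enough tail to avoid splitting a pattern

def extract(path):
    hits = []
    buf = b""
    with open(path, "rb") as f:           # e.g. r"\\.\PhysicalDrive0" on Windows, run as admin
        while True:
            chunk = f.read(BLOCK)
            if not chunk:
                break
            buf += chunk
            start = buf.find(HEADER)
            while start != -1:
                end = buf.find(FOOTER, start + len(HEADER))
                if end == -1:
                    break                 # footer may arrive in the next block; keep from 'start'
                hits.append(buf[start:end + len(FOOTER)])
                start = buf.find(HEADER, end + len(FOOTER))
            # keep only the tail we might still need for the next iteration
            keep_from = start if start != -1 else max(0, len(buf) - OVERLAP)
            buf = buf[keep_from:]
    return hits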
1
1
0
0
I'm looking to write a script in Python 2.x that will scan physical drive (physical and not logical) for specific strings of text that will range in size (chat artifacts). I have my headers and footers for the strings and so I am just wondering how is best to scan through the drive? My concern is that if I split it into, say 250MB chunks and read this data into RAM before parsing through it for the header and footer, it may be the header is there but the footer is in the next chunk of 250MB. So in essence, I want to scan PhysicalDevice0 for strings starting with "ABC" for example and ending in "XYZ" and copy all content from within. I'm unsure whether to scan the data as ascii or Hex too. As drives get bigger, I'm looking to do this in the quickest manner possible. Any suggestions?
Extracting strings from a physical drive
1
0.197375
1
0
0
197
13,215,104
2012-11-04T01:04:00.000
-2
0
1
0
0
python
0
13,215,122
0
6
0
true
0
0
You can use rstrip with the set of characters you want removed from the end (a bare rstrip() only strips whitespace). Check the Python docs.
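A small sketch of two options for the example in the question; note that both simply trim any trailing digits and dots, so they would also eat digits that legitimately end the non-numeric part:
text = "A8980BQDFZDD1701209.3"

# rstrip removes any of the listed characters from the right-hand end
print(text.rstrip("0123456789."))        # -> A8980BQDFZDD

# equivalent regular expression anchored at the end of the string
import re
print(re.sub(r"[\d.]+$", "", text))      # -> A8980BQDFZDD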
1
2
0
0
I have a variable text whose value is like below,I need strip of trailing digits,is there a python built-in function to do it..if not,please suggest how can this be done in python e.g. -text=A8980BQDFZDD1701209.3 => -A8980BQDFZDD
How to strip trailing digits from a string
0
1.2
1
0
0
10,613
13,215,873
2012-11-04T03:43:00.000
3
0
1
0
0
python,python-3.x,python-3.3
0
13,215,904
0
1
1
true
0
0
You've been seeing things that somewhat conflict based on Python 2 vs 3. In Python 3, isinstance(foo, str) is almost certainly what you want. bytes is for raw binary data, which you probably can't include in an argument string like that. The python 2 str type stored raw binary data, usually a string in some specific encoding like utf8 or latin-1 or something; the unicode type stored a more "abstract" representation of the characters that could then be encoded into whatever specific encoding. basestring was a common ancestor for both of them so you could easily say "any kind of string". In python 3, str is the more "abstract" type, and bytes is for raw binary data (like a string in a specific encoding, or whatever raw binary data you want to handle). You shouldn't use bytes for anything that would otherwise be a string, so there's not a real reason to check if it's either str or bytes. If you absolutely need to, though, you can do something like isinstance(foo, (str, bytes)).
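A small Python 3 sketch of that check applied to the URL-parameter case from the question; the helper names are made up for illustration:
def is_stringy(value):
    # In Python 3, str covers all text; include bytes only if you really must accept raw binary.
    return isinstance(value, (str, bytes))

def encode_param(name, value):
    # Hypothetical helper: scalars become param=val, other iterables become repeated param[]=val pairs.
    if is_stringy(value) or not hasattr(value, "__iter__"):
        return ["%s=%s" % (name, value)]
    return ["%s[]=%s" % (name, item) for item in value]

print(encode_param("param", "val"))            # ['param=val']
print(encode_param("param", ["v", "a", "l"]))  # ['param[]=v', 'param[]=a', 'param[]=l']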
1
0
0
0
I'm trying to determine whether a function argument is a string, or some other iterable. Specifically, this is used in building URL parameters, in an attempt to emulate PHP's &param[]=val syntax for arrays - so duck typing doesn't really help here, I can iterate through a string and produce things like &param[]=v&param[]=a&param[]=l, but this is clearly not what we want. If the parameter value is a string (or a bytes? I still don't know what the point of a bytes actually is), it should produce &param=val, but if the parameter value is (for example) a list, each element should receive its own &param[]=val. I've seen a lot of explanations about how to do this in 2.* involving isinstance(foo, basestring), but basestring doesn't exist in 3.*, and I've also read that isinstance(foo, str) will miss more complex strings (I think unicode?). So, what is the best way to do this without causing some types to be lost to unnecessary errors?
Separate strings from other iterables in python 3
0
1.2
1
0
0
97
13,222,607
2012-11-04T20:33:00.000
0
0
1
0
0
python,user-interface,window
0
13,222,646
0
3
0
false
0
1
You can do this; the library is called tkinter.
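A minimal sketch of a windowed program with a clickable button (the widget names and layout are arbitrary); on Python 2 the module is spelled Tkinter rather than tkinter:
import tkinter as tk

def on_click():
    label.config(text="Button clicked!")

root = tk.Tk()
root.title("My program")
label = tk.Label(root, text="Choose an option:")
label.pack()
tk.Button(root, text="Run task", command=on_click).pack()
root.mainloop()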
2
0
0
0
Hello fellow Python users, I bring forth a question that I have been wondering about for a while. I love Python, and have made many programs with it. Now what I want to do is (or know how to do) make a Python program, but run it in a window with buttons that you click on instead of typing in numbers to select things. I would like to just know whether or not I can do this, and if I can, please tell me where to go to learn how. Ok, it's not for the iPhone, sorry that I wasn't clear on that, and I didn't realize that iPhone was one of the tags.
Can you run python as a windowed program?
1
0
1
0
0
336
13,222,607
2012-11-04T20:33:00.000
0
0
1
0
0
python,user-interface,window
0
13,222,783
0
3
0
false
0
1
There are many GUI libraries to choose from, but I like Tkinter because it's easy and there's nothing to install (it comes with Python). But some people prefer wxPython or others, such as PyQT. You'll have to decide which you like, or just go with Tkinter if you don't want to go through the trouble of installing libraries just to try them out.
2
0
0
0
Hello fellow Python users, I bring forth a question that I have been wondering about for a while. I love Python, and have made many programs with it. Now what I want to do is (or know how to do) make a Python program, but run it in a window with buttons that you click on instead of typing in numbers to select things. I would like to just know whether or not I can do this, and if I can, please tell me where to go to learn how. Ok, it's not for the iPhone, sorry that I wasn't clear on that, and I didn't realize that iPhone was one of the tags.
Can you run python as a windowed program?
1
0
1
0
0
336
13,233,107
2012-11-05T13:27:00.000
5
0
0
0
0
python,flask,neo4j,py2neo
0
13,234,558
0
2
0
true
1
0
None of the REST API clients will be able to explicitly support (proper) transactions since that functionality is not available through the Neo4j REST API interface. There are a few alternatives such as Cypher queries and batched execution which all operate within a single atomic transaction on the server side; however, my general approach for client applications is to try to build code which can gracefully handle partially complete data, removing the need for explicit transaction control. Often, this approach will make heavy use of unique indexing and this is one reason that I have provided a large number of "get_or_create" type methods within py2neo. Cypher itself is incredibly powerful and also provides uniqueness capabilities, in particular through the CREATE UNIQUE clause. Using these, you can make your writes idempotent and you can err on the side of "doing it more than once" safe in the knowledge that you won't end up with duplicate data. Agreed, this approach doesn't give you transactions per se but in most cases it can give you an equivalent end result. It's certainly worth challenging yourself as to where in your application transactions are truly necessary. Hope this helps Nigel
1
2
0
0
I'm currently building a web service using python / flask and would like to build my data layer on top of neo4j, since my core data structure is inherently a graph. I'm a bit confused by the different technologies offered by neo4j for that case. Especially : i originally planned on using the REST Api through py2neo , but the lack of transaction is a bit of a problem. The "embedded database" neo4j doesn't seem to suit my case very well. I guess it's useful when you're working with batch and one-time analytics, and don't need to store the database on a different server from the web server. I've stumbled upon the neo4django project, but i'm not sure this one offers transaction support (since there are no native client to neo4j for python), and if it would be a problem to use it outside django itself. In fact, after having looked at the project's documentation, i feel like it has exactly the same limitations, aka no transaction (but then, how can you build a real-world service when you can corrupt your model upon a single connection timeout ?). I don't even understand what is the use for that project. Could anyone could recommend anything ? I feel completely stuck. Thanks
using neo4J (server) from python with transaction
0
1.2
1
1
0
1,075
13,234,196
2012-11-05T14:32:00.000
1
0
0
0
0
python,oracle,cx-oracle
1
58,120,873
0
6
0
false
0
0
Tip for Ubuntu users: after configuring the environment variables in .bashrc, as explained in other answers, don't forget to reload your terminal window by typing $SHELL.
3
12
0
0
Newbie here trying to use python to do some database analysis. I keep getting the error: "error: cannot locate an Oracle software installation" When installing CX_oracle (via easy_install). The problem is I do not have oracle on my local machine, I'm trying to use python to connect to the main oracle server. I have have setup another program to do this(visualdb) and I had a .jar file I used as the driver but I'm not sure how to use it in this case. Any suggestions?
"error: cannot locate an Oracle software installation" When trying to install cx_Oracle
0
0.033321
1
1
0
25,818
13,234,196
2012-11-05T14:32:00.000
2
0
0
0
0
python,oracle,cx-oracle
1
28,741,244
0
6
0
false
0
0
I got this message when I was trying to install the 32 bit version while having the 64bit Oracle client installed. What worked for me: reinstalled python with 64 bit (had 32 for some reason), installed cx_Oracle (64bit version) with the Windows installer and it worked perfectly.
3
12
0
0
Newbie here trying to use python to do some database analysis. I keep getting the error: "error: cannot locate an Oracle software installation" When installing CX_oracle (via easy_install). The problem is I do not have oracle on my local machine, I'm trying to use python to connect to the main oracle server. I have have setup another program to do this(visualdb) and I had a .jar file I used as the driver but I'm not sure how to use it in this case. Any suggestions?
"error: cannot locate an Oracle software installation" When trying to install cx_Oracle
0
0.066568
1
1
0
25,818
13,234,196
2012-11-05T14:32:00.000
2
0
0
0
0
python,oracle,cx-oracle
1
13,234,377
0
6
0
false
0
0
I installed cx_Oracle, but I also had to install an Oracle client to use it (the cx_Oracle module is just a common and pythonic way to interface with the Oracle client in Python). So you have to set the variable ORACLE_HOME to your Oracle client folder (on Unix: via a shell, for instance; on Windows: create a new variable if it does not exist in the Environment variables of the Configuration Panel). Your folder $ORACLE_HOME/network/admin (%ORACLE_HOME%\network\admin on Windows) is the place where you would place your tnsnames.ora file.
3
12
0
0
Newbie here trying to use python to do some database analysis. I keep getting the error: "error: cannot locate an Oracle software installation" When installing CX_oracle (via easy_install). The problem is I do not have oracle on my local machine, I'm trying to use python to connect to the main oracle server. I have have setup another program to do this(visualdb) and I had a .jar file I used as the driver but I'm not sure how to use it in this case. Any suggestions?
"error: cannot locate an Oracle software installation" When trying to install cx_Oracle
0
0.066568
1
1
0
25,818
13,236,405
2012-11-05T16:38:00.000
1
0
1
0
0
python,py2exe
0
13,237,078
0
2
0
true
0
0
There is a feature in the configuration of py2exe that allows you to bundle all the Python files into a single library.zip file. That would considerably reduce the number of files in the root directory, but some files will still remain regardless. These files are generally DLL files, at least from what I saw with GUI applications. You cannot remove these, because they are required to launch the application. A workaround to this problem is to create a batch file that runs the actual program, which can then live in a child directory. The point is that these files should either be in the same directory as the executable, or in the current working directory, or on a path in the PATH environment variable; at least that's the case for most of them. Another approach might be a batch file which modifies the PATH variable or cds to another directory and runs the file afterwards. I never tried to do it, so it might break some things for you. Anyway, IMO the best approach is to create an installer and add shortcuts, and then you won't have to bother with the user messing with these files.
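For reference, a hedged sketch of how those py2exe options are usually expressed in setup.py; the script name is hypothetical, and exact bundle_files behaviour can vary between Python/py2exe versions:
from distutils.core import setup
import py2exe

setup(
    windows=["chess_gui.py"],          # or console=[...] for a console app
    zipfile="lib/shared.zip",          # put the bundled Python code under a lib/ subfolder
    options={
        "py2exe": {
            "bundle_files": 2,         # 1 bundles almost everything into the exe; 3 keeps files separate
        }
    },
)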
1
1
0
0
Hi! I made a chess engine in Python which I then compiled to .exe using py2exe. The problem is that it doesn't look very neat when I have all the strange files gathered together in the same folder (dist). I'd like to make a new folder inside the dist folder that contains all the helper files, so all my dist folder contains is the folder holding the helper files and the main launch application. However, I can't simply copy the helper files to a new folder, as the computer then doesn't find them and raises an error. How can this be solved? Also, I'm using Inno Setup to make an installation, but I can't figure out how to find a solution there, either. Thank you very much!
How to put files in folders using py2exe.
0
1.2
1
0
0
1,098
13,261,858
2012-11-07T01:07:00.000
0
1
0
0
0
python,twitter,twython
0
16,578,360
0
2
0
false
0
0
Considering the case of similar tweets and retweets, I would recommend making a semantic note of the whole tweet: extract the text part of each tweet and do a dictionary lookup. But the tweet ID is simpler to use, with some loss of accuracy, as noted above.
1
1
0
0
I'm working on a project which requires counting the number of tweets that meet the parameters of a query. I'm working in Python, using Twython as my interface to Twitter. A few questions though, how do you record which tweets have already been accounted for? Would you simply make a note of the last tweet ID and ignore it plus all previous? --What is the easiest implementation of this? As another optimizations question, I want to make sure that the amount of tweets missed by the counter is minimal, is there any way to make sure of this? Thanks so much.
How to count tweets from query without double counting?
1
0
1
0
0
245
13,297,219
2012-11-08T20:26:00.000
4
0
1
0
0
python,ipython
0
13,297,236
0
3
0
false
0
0
To use ipython, just go to the command line, and run the command ipython.
1
5
0
0
I've installed ipython, but I don't know how to use it. Where could I find ipython shell?
Where to use ipython and where is ipthon shell?
0
0.26052
1
0
0
5,087
13,311,732
2012-11-09T16:12:00.000
1
1
0
0
0
python,c,performance
0
13,311,964
0
1
0
false
0
0
You'll want the Python calls to your C function to be as few as possible. If you can call the C function once from Python and get it to do most/all of the work, that would be better.
1
0
0
0
I wrote a python script to do some experiment with the Mandelbrot set. I used a simple function to find Mandelbrot set points. I was wondering how much efficiency I can achieve by calling a simple C function to do this part of my code? Please consider that this function should call many times from Python. What is the effect of run time? And maybe other factors that should I aware?
Efficiency of calling C function from Python
0
0.197375
1
0
0
117
13,313,118
2012-11-09T17:37:00.000
0
0
0
1
0
python,google-app-engine,full-text-search,gae-search
0
13,315,587
0
1
0
false
1
0
This depends on whether or not you have any globally consistent indexes. If you do, then you should migrate all of your data from those indexes to new, per-document-consistent (which is the default) indexes. To do this: Loop through the documents you have stored in the global index and reindex them in the new index. Change references from the global index to the new per-document index. Ensure everything works, then delete the documents from your global index (not necessary to complete the migration, but still a good idea). You then should remove any mention of consistency from your code; the default is per-document consistent, and eventually we will remove the ability to specify a consistency at all. If you don't have any data in a globally consistent index, you're probably getting the warning because you're specifying a consistency. If you stop specifying the consistency it should go away. Note that there is a known issue with the Python API that causes a lot of erroneous deprecation warnings about consistency, so you could be seeing that as well. That issue will be fixed in the next release.
1
0
0
0
I've been using the appengine python experimental searchAPI. It works great. With release 1.7.3 I updated all of the deprecated methods. However, I am now getting this warning: DeprecationWarning: consistency is deprecated. GLOBALLY_CONSIST However, I'm not sure how to address it in my code. Can anyone point me in the right direction?
Appengine Search API - Globally Consistent
0
0
1
0
0
174
13,318,291
2012-11-10T01:19:00.000
0
0
0
0
0
python,scrapyd
1
13,344,717
0
2
0
true
1
0
I found the answer: add mylibs to Python's site-packages by using a setup.py inside the mylibs folder. That way I could import everything inside mylibs in my projects. Actually, mylibs was located well outside the folder where the setup.py of my deployable project is present; setup.py only looks for packages at the same level and inside the folders below where it is located.
1
3
0
0
Scrapyd is service where we can eggify deploy our projects. However I am facing a problem. I have a Project named MyScrapers whose spider classes uses an import statement as follows: from mylibs.common.my_base_spider import MyBaseSpider The path to my_base_spider is /home/myprojectset/mylibs/common/my_base_spider While setting environment variable PYTHONPATH=$HOME/myprojectset/, I am able to run MyScrapers using scrapy command: scrapy crawl MyScrapers. But when I use scrapyd for deploying MyScrapers by following command: scrapy deploy scrapyd2 -p MyScrapers, I get the following error: Server response (200): {"status": "error", "message": "ImportError: No module named mylibs.common.my_base_spider"} Please tell how to make deployed project to use these libs?
Scrapyd: How to specify libs and common folders that deployed projects can use?
0
1.2
1
0
0
973
13,332,268
2012-11-11T14:55:00.000
4
0
0
1
0
python,linux,subprocess,pipe
0
13,359,172
0
9
0
false
0
0
Also, try using the pgrep command instead of ps -A | grep 'process_name'.
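Two hedged sketches: letting pgrep do the matching, and wiring up the ps | grep pipe manually with subprocess. Note that check_output raises CalledProcessError when grep or pgrep matches nothing, since they then exit non-zero:
import subprocess

# Option 1: let pgrep do the matching (no shell pipe needed)
pids = subprocess.check_output(["pgrep", "-f", "process_name"])
print(pids)

# Option 2: reproduce ps -A | grep process_name by connecting two processes yourself
ps = subprocess.Popen(["ps", "-A"], stdout=subprocess.PIPE)
out = subprocess.check_output(["grep", "process_name"], stdin=ps.stdout)
ps.stdout.close()   # allow ps to receive SIGPIPE if grep exits first
print(out)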
1
326
0
0
I want to use subprocess.check_output() with ps -A | grep 'process_name'. I tried various solutions but so far nothing worked. Can someone guide me how to do it?
How to use `subprocess` command with pipes
0
0.088656
1
0
0
305,963
13,346,698
2012-11-12T15:38:00.000
1
0
0
1
0
python,openerp
0
13,358,175
0
2
0
false
1
0
Good question. OpenERP on Windows uses a DLL for Python (python26.dll in /Server/server of the OpenERP folder in Program Files). It looks like all the extra libraries are in the same folder, so you should be able to download the extra libraries into that folder and restart the service. (I usually stop the service and run it manually from the command line; it's easier to see if there are any errors etc. while debugging.) Let us know if you get it working!
1
6
0
0
I installed OpenERP 6.1 on windows using the AllInOne package. I did NOT install Python separately. Apparently OpenERP folders already contain the required python executables. Now when I try to install certain addons, I usually come across requirements to install certain python modules. E.g. to install Jasper_Server, I need to install http2, pypdf and python-dime. As there is no separate Python installation, there is no C:\Python or anything like that. Where and how do I install these python packages so that I am able to install the addon? Thanks
Installing Python modules for OpenERP 6.1 in Windows
0
0.099668
1
0
0
3,249
13,352,796
2012-11-12T22:42:00.000
0
0
0
0
0
python,mongodb,twitter,tweepy
0
22,388,827
0
1
0
false
0
0
Unfortunately, the Twitter API doesn't provide a way to do this. You can try searching through received tweets for the keywords you specified, but it might not match exactly.
1
1
0
0
I'm filtering the twitter streaming API by tracking for several keywords. If for example I only want to query and return from my database tweet information that was filtered by tracking for the keyword = 'BBC' how could this be done? Do the tweet information collected have a key:value relating to that keyword by which it was filtered? I'm using python, tweepy and MongoDB. Would an option be to search for the keyword in the returned json 'text' field? Thus generate a query where it searches for that keyword = 'BBC' in the text field of the returned json data?
Querying twitter streaming api keywords from a database
0
0
1
1
0
374
13,366,293
2012-11-13T18:12:00.000
8
0
1
0
0
python,function,arguments,range
0
13,366,356
0
4
0
true
0
0
Range takes 1, 2, or 3 arguments. This could be implemented with def range(*args), and explicit code to raise an exception when it gets 0 or more than 3 arguments. It couldn't be implemented with default arguments because you can't have a non-default after a default, e.g. def range(start=0, stop, step=1). This is essentially because Python has to figure out what each call means, so if you were to call with two arguments, Python would need some rule to figure out which default argument you were overriding. Instead of having such a rule, it's simply not allowed. If you did want to use default arguments you could do something like: def range(start=0, stop=object(), step=1) and have an explicit check on the type of stop.
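A pure-Python sketch of the *args idea, as a simplified stand-in rather than CPython's actual implementation (which is written in C):
def my_range(*args):
    if len(args) == 1:
        start, stop, step = 0, args[0], 1
    elif len(args) == 2:
        start, stop, step = args[0], args[1], 1
    elif len(args) == 3:
        start, stop, step = args
    else:
        raise TypeError("my_range expected 1 to 3 arguments, got %d" % len(args))
    if step == 0:
        raise ValueError("step must not be zero")
    values = []
    i = start
    while (step > 0 and i < stop) or (step < 0 and i > stop):
        values.append(i)
        i += step
    return values

print(my_range(5))         # [0, 1, 2, 3, 4]
print(my_range(2, 8, 2))   # [2, 4, 6]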
1
18
0
0
How can the range function take either: a single argument, range(stop), or range(start, stop), or range(start, stop, step). Does it use a variadic argument like *arg to gather the arguments, and then use a series of if statements to assign the correct values depending on the number of arguments supplied? In essence, does range() specify that if there is one argument, then it set as the stop argument, or if there are two then they are start, and stop, or if there are three then it sets those as stop, start, and step respectively? I'd like to know how one would do this if one were to write range in pure CPython.
How can the built-in range function take a single argument or three?
0
1.2
1
0
0
5,994
13,373,014
2012-11-14T04:24:00.000
1
0
1
0
1
python
0
13,373,099
0
1
0
true
0
0
Yup, it's not at all secure, but eval is the way to go:
In [1]: a = 10
In [2]: b = 20
In [3]: eval('a + 10*b')
Out[3]: 210
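If the expression comes from a user-editable text file, it may be worth passing eval an explicit namespace so the formula only sees the names you intend. This is just a sketch (the expression and variable names are hypothetical), and emptying __builtins__ is a convenience, not a real security boundary:
import math

# hypothetical expression as it might appear in the editable text file
expression = "(abs(x) - abs(y) * d['t']) * 18"

# only expose the names the formula is allowed to see
allowed = {"__builtins__": {}, "abs": abs, "math": math}
values = {"x": -4.0, "y": 2.5, "d": {"t": 3}}

result = eval(expression, allowed, values)
print(result)   # -63.0 with these sample values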
1
0
0
0
Is there any way to do this? Basically I have a file that I want a user to be able to edit via a GUI I built(all of which I can do easily). Part of this is a calculation in a function. That or being able to edit a .py file from another file would be fine as well, but it is hard to find anything on that because every search returns details about IDLE and such. I also have no problem with just having the calculation only in the text file and being able to read that from the text file and then parse it to add the variables in, but not even sure how to do that easily either, with the calculations varying like so: (abs(x) - abs(y) * dict['t']) * 18 ((abs(y) * dict['t']) - abs(x)) * 20 etc for about 10 different variations
Importing a function from a .txt file
0
1.2
1
0
0
119
13,373,291
2012-11-14T04:59:00.000
-1
0
0
0
1
python,ctypes,complex-numbers
0
13,373,641
0
5
0
false
0
1
If c_complex is a C struct and you have its definition in documentation or a header file, you could use ctypes to compose a compatible type. It is also possible, although less likely, that c_complex is a typedef for a type that ctypes already supports. More information would be needed to provide a better answer...
1
4
0
0
This might be a bit foolish but I'm trying to use ctypes to call a function that receives a complex vector as a paramter. But in ctypes there isn't a class c_complex. Does anybody know how to solve this? edit: I'm refering to python's ctypes, in case there are others....
Complex number in ctypes
0
-0.039979
1
0
0
2,911
13,385,085
2012-11-14T18:34:00.000
2
0
1
0
0
python,algorithm
0
13,386,318
0
3
0
true
0
0
I think the linear probe suggested by @isbadawi is the best way to find the beginning of the subsequence. This is because the subsequence could be very short and could be anywhere within the larger sequence. However, once the beginning of the subsequence is found, we can use a binary search to find the end of it. That will require fewer tests than doing a second linear probe, so it's a better solution for you. As others have pointed out, there is not much practical reason for doing this. This is true for two reasons: your large sequence is quite short (only about 31 elements), and you still need to do at least one linear probe anyway, so the big-O runtime will be still be linear in the length of the large sequence, even though we have reduced part of the algorithm from linear to logarithmic.
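A rough sketch of that combination, where is_active(day) is a stand-in for the cheap test mentioned in the question and last_day is the number of days in the month:
def find_activity_days(is_active, last_day):
    # linear probe for the first active day
    first = None
    for day in range(1, last_day + 1):
        if is_active(day):
            first = day
            break
    if first is None:
        return []
    # binary search in [first, last_day] for the last active day;
    # within that range the predicate is True then False, so it is monotone
    lo, hi = first, last_day
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if is_active(mid):
            lo = mid
        else:
            hi = mid - 1
    return list(range(first, lo + 1))

print(find_activity_days(lambda d: 10 <= d <= 17, 31))   # -> days 10..17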
1
3
0
0
I need to find all the days of the month where a certain activity occurs. The days when the activity occurs will be sequential. The sequence of days can range from one to the entire month, and the sequence will occur exactly one time per month. To test whether or not the activity occurs on any given day is not an expensive calculation, but I thought I would use this problem learn something new. Which algorithm minimizes the number of days I have to test?
In python, how can I efficiently find a consecutive sequence that is a subset of a larger consecutive sequence?
0
1.2
1
0
0
418
13,389,724
2012-11-15T00:22:00.000
0
0
1
1
0
python,compilation,exe,cx-freeze
0
13,967,222
0
3
0
false
0
0
Have you tried Inno Setup? It can create installer files from the output of cx_Freeze. There might be an option somewhere to bundle everything into one file.
1
3
0
0
I'm using cx_Freeze to compile Python programs into executables and it works just fine, but the problem is that it doesn't compile the program into one EXE, it converts them into a .exe file AND a whole bunch of .dll files including python32.dll that are necessary for the program to run. Does anyone know how I can package all of these files into one .exe file? I would rather it be a plain EXE file and not just a file that copies the DLLs into a temporary directory in order to launch the program. EDIT: This is in reference to Python 3
Convert an EXE and its dependencies into one stand-alone EXE
0
0
1
0
0
2,963
13,394,969
2012-11-15T09:51:00.000
0
0
0
0
0
python,machine-learning,cherrypy
0
13,399,425
0
2
0
false
1
0
An NLTK-based system tends to be slow in response time per request, but good throughput can be achieved given enough RAM.
1
4
1
0
Let me explain what I'm trying to achieve. In the past while working on Java platform, I used to write Java codes(say, to push or pull data from MySQL database etc.) then create a war file which essentially bundles all the class files, supporting files etc and put it under a servlet container like Tomcat and this becomes a web service and can be invoked from any platform. In my current scenario, I've majority of work being done in Java, however the Natural Language Processing(NLP)/Machine Learning(ML) part is being done in Python using the NLTK, Scipy, Numpy etc libraries. I'm trying to use the services of this Python engine in existing Java code. Integrating the Python code to Java through something like Jython is not that straight-forward(as Jython does not support calling any python module which has C based extensions, as far as I know), So I thought the next option would be to make it a web service, similar to what I had done with Java web services in the past. Now comes the actual crux of the question, how do I run the ML engine as a web service and call the same from any platform, in my current scenario this happens to be Java. I tried looking in the web, for various options to achieve this and found things like CherryPy, Werkzeug etc but not able to find the right approach or any sample code or anything that shows how to invoke a NLTK-Python script and serve the result through web, and eventually replicating the functionality Java web service provides. In the Python-NLTK code, the ML engine does a data-training on a large corpus(this takes 3-4 minutes) and we don't want the Python code to go through this step every time a method is invoked. If I make it a web service, the data-training will happen only once, when the service starts and then the service is ready to be invoked and use the already trained engine. Now coming back to the problem, I'm pretty new to this web service things in Python and would appreciate any pointers on how to achieve this .Also, any pointers on achieving the goal of calling NLTK based python scripts from Java, without using web services approach and which can deployed on production servers to give good performance would also be helpful and appreciable. Thanks in advance. Just for a note, I'm currently running all my code on a Linux machine with Python 2.6, JDK 1.6 installed on it.
How to expose an NLTK based ML(machine learning) Python Script as a Web Service?
0
0
1
0
0
1,090
13,395,116
2012-11-15T10:01:00.000
5
1
1
0
0
python,python-import
0
13,395,225
0
4
0
false
0
0
You can just import it again in other places where you need it -- it will be cached after the first time, so this is relatively inexpensive. Alternatively, you could modify the current global namespace with something like globals()['name'] = local_imported_module_name. EDIT: For the record, although using the globals() function will certainly work, I think a "cleaner" solution would be to declare the module's name global and then import it, as several other answers have mentioned.
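A tiny sketch of the global-declaration variant; the class and the choice of json are only for illustration:
class Serializer(object):
    def __init__(self):
        # 'global' makes the imported module visible at module level after the
        # first instantiation, so other methods can use it too
        global json
        import json   # deferred import: only runs when the class is instantiated

    def encode(self, obj):
        return json.dumps(obj)   # json is a module-level name once __init__ has run

s = Serializer()
print(s.encode({"a": 1}))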
1
9
0
0
I need to have an import in the __init__() method (because I need to run that import only when I instantiate the class). But I cannot see that import outside __init__(). Is the scope limited to __init__? How do I do this?
Python import in __init__()
0
0.244919
1
0
0
7,587
13,427,477
2012-11-17T03:45:00.000
1
0
0
1
0
python,google-app-engine,http-post,http-get
0
13,427,499
0
1
0
true
1
0
Links inherently generate GET requests. If you want to generate a POST request, you'd need to either: Use a form with method="POST" and submit it, or Use AJAX to load the new page.
1
1
0
0
I'm trying to pass a variable from one page to another using Google App Engine. I know how to pass it using GET by putting it in the URL. But I would like to keep the URL clean, and I might need to pass a larger amount of data, so how can I pass info using POST? To illustrate, I have a page with a series of links, each of which goes to /viewTaskGroup.html, and I want to pass the name of the group I want to view based on which link they click (so I can search and get it back and display it), but I'd rather not use GET if possible. I didn't think any code is required, but if you need any I'm happy to provide whatever is needed.
Pass data in google app engine using POST
1
1.2
1
0
0
297
13,428,313
2012-11-17T06:31:00.000
3
0
1
0
0
python
0
13,428,441
0
2
0
false
0
0
Just examine the closed attribute of the file object.
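For example:
f = open("output.txt", "w")
f.write("some data")
f.close()            # imagine some other function already did this

if not f.closed:     # the file object remembers that it has been closed
    f.flush()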
1
2
0
0
How do I add a check to avoid flushing a file f with f.flush() when some function has already done f.close()? I can't seem to figure out how to do so :/
Python Flushing and Already Closed File
0
0.291313
1
0
0
421
13,432,635
2012-11-17T16:54:00.000
1
0
1
0
0
python,list,dictionary
0
13,432,674
0
3
0
false
0
0
On Linux: a nice method is to use grep to filter out any words containing apostrophes from the words file and save the result to mywords.txt in your home directory: grep "^[^']*$" /usr/share/dict/words > ~/mywords.txt. No need to install, download or write any code! On OS X it's even easier, as /usr/share/dict/words contains no words with apostrophes to begin with.
1
0
0
0
I am looking for a dictionary file containing only words without apostrophes. I can't seem to find one! Does anyone know where I can find one? If not, how could I eliminate those words from the file using Python?
List of Dictionary words without apostrophes
0
0.066568
1
0
0
1,178
13,434,664
2012-11-17T20:52:00.000
0
0
0
0
1
python,web-scraping,casperjs
0
13,437,094
0
5
0
false
1
0
Because you mentioned CasperJS, I can assume that the web site generates some data by using JavaScript. My suggestion would be to check out WebKit. It is a browser "engine" that will let you do whatever you want with a web site. You can use the PyQt4 framework, which is very good and has good documentation.
1
2
0
0
So I am trying to scrape something that is behind a login system. I tried using CasperJS, but am having issues with the form, so maybe that is not the way to go; I checked the source code of the site and the form name is "theform" but I can never login must be doing something wrong. Does any have any tutorials on how to do this correctly using CasperJS, I've looked at the API and google and nothing really works. Or does someone have any recommendations on how to do web scraping easily. I have to be able to check a simple conditional state and click a few buttons, that is all.
Web scraping - web login issue
0
0
1
0
1
684
13,436,032
2012-11-17T23:55:00.000
0
0
0
0
0
python,colors,cluster-computing,k-means
0
13,436,279
0
1
0
true
0
0
You can use vector quantisation. You can make a list of each pixel and its adjacent pixels in the x+1 and y+1 directions, take the difference, and plot it along a diagonal. Then you can calculate a Voronoi diagram, get the mean color, and compute a feature vector. It's a bit more effective than using a simple grid-based mean color.
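For the seeding step the question actually asks about, the standard k-means++ recipe is to pick each new center with probability proportional to the squared distance to the nearest center chosen so far. A rough NumPy sketch (the pixels argument, an N x 3 array of colors, and the helper name are assumptions added here, not part of the answer above):
import numpy as np

def kmeans_pp_init(pixels, k, rng=np.random):
    centers = [pixels[rng.randint(len(pixels))]]          # first center: uniform random
    for _ in range(k - 1):
        # squared distance from every pixel to its nearest already-chosen center
        d2 = np.min([np.sum((pixels - c) ** 2, axis=1) for c in centers], axis=0)
        probs = d2 / d2.sum()
        centers.append(pixels[rng.choice(len(pixels), p=probs)])
    return np.array(centers)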
1
0
1
0
My goal was to get the most frequent color in an image, so I implemented a k-means algorithm. The algorithm works well, but the result is not the one I was expecting. So now I'm trying to make some improvements, the first of which is to implement k-means++, so I get better positions for the initial cluster centers. First I select a random point, but how can I select the others? I mean, how do I define the minimal distance between them? Any help with this? Thanks
K-Means plus plus implementation
0
1.2
1
0
0
1,239
13,440,079
2012-11-18T12:28:00.000
0
0
0
0
0
python,operating-system
0
13,440,101
0
2
0
true
1
0
The access time for an individual file is not affected by the quantity of files in the same directory. Running ls -l on a directory with more files in it will take longer, of course. Same for viewing that directory in the file browser. Of course it might be easier to work with these images if you store them in a subdirectory defined by the user's name. But that just depends on what you are going to do with them. There is no technical reason to do so. Think about it like this. The full path to the image file (/srv/site/images/my_pony.jpg) is the actual address of the file. Your web server process looks there, and returns any data it finds or a 404 if there is nothing. What it doesn't do is list all the files in /srv/site/images and look through that list to see if it contains an item called my_pony.jpg.
1
0
0
0
My website users can upload image files, which then need to be found whenever they are to be displayed on a page (using src = ""). Currently, I put all images into one directory. What if there are many files - is it slow to find the right file? Are they indexed? Should I create subdirectories instead? I use Python/Django. Everything is on webfaction.
Python/Django: how to get files fastest (based on path and name)
0
1.2
1
0
0
57
13,448,366
2012-11-19T05:49:00.000
1
0
0
0
0
python,google-app-engine,app-engine-ndb
0
21,716,718
0
3
0
true
1
0
If you have a small app then your data probably live on the same part of the same disk and you have one instance. You probably won't notice eventual consistency. As your app grows, you notice it more. Usually it takes milliseconds to reach consistency, but I've seen cases where it takes an hour or more. Generally, queries is where you notice it most. One way to reduce the impact is to query by keys only and then use ndb.get_multi() to load the entities. Fetching entities by keys ensures that you get the latest version of that entity. It doesn't guarantee that the keys list is strongly consistent, though. So you might get entities that don't match the query conditions, so loop through the entities and skip the ones that don't match. From what I've noticed, the pain of eventual consistency grows gradually as your app grows. At some point you do need to take it seriously and update the critical areas of your code to handle it.
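A hedged NDB sketch of that keys-only-plus-get_multi pattern; the model and the filter are hypothetical:
from google.appengine.ext import ndb

class Article(ndb.Model):            # hypothetical model
    author = ndb.StringProperty()
    views = ndb.IntegerProperty()

def articles_by(author):
    # the key list is only eventually consistent...
    keys = Article.query(Article.author == author).fetch(100, keys_only=True)
    # ...but fetching by key returns the latest version of each entity
    entities = ndb.get_multi(keys)
    # re-check the filter and drop entities that no longer match (or were deleted)
    return [a for a in entities if a is not None and a.author == author]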
3
8
0
0
I am in the process of migrating an application from Master/Slave to HRD. I would like to hear some comments from who already went through the migration. I tried a simple example to just post a new entity without ancestor and redirecting to a page to list all entities from that model. I tried it several times and it was always consistent. Them I put 500 indexed properties and again, always consistent... I was also worried about some claims of a limit of one 1 put() per entity group per second. I put() 30 entities with same ancestor (same HTTP request but put() one by one) and it was basically no difference from puting 30 entities without ancestor. (I am using NDB, could it be doing some kind of optimization?) I tested this with an empty app without any traffic and I am wondering how much a real traffic would affect the "eventual consistency". I am aware I can test "eventual consistency" on local development. My question is: Do I really need to restructure my app to handle eventual consistency? Or it would be acceptable to leave it the way it is because the eventual consistency is actually consistent in practice for 99%?
In practice, how eventual is the "eventual consistency" in HRD?
0
1.2
1
0
0
560
13,448,366
2012-11-19T05:49:00.000
0
0
0
0
0
python,google-app-engine,app-engine-ndb
0
13,457,830
0
3
0
false
1
0
The replication speed is going to be primarily server-workload-dependent. Typically on an unloaded system the replication delay is going to be milliseconds. But the idea of "eventually consistent" is that you need to write your app so that you don't rely on that; any replication delay needs to be allowable within the constraints of your application.
3
8
0
0
I am in the process of migrating an application from Master/Slave to HRD. I would like to hear some comments from who already went through the migration. I tried a simple example to just post a new entity without ancestor and redirecting to a page to list all entities from that model. I tried it several times and it was always consistent. Them I put 500 indexed properties and again, always consistent... I was also worried about some claims of a limit of one 1 put() per entity group per second. I put() 30 entities with same ancestor (same HTTP request but put() one by one) and it was basically no difference from puting 30 entities without ancestor. (I am using NDB, could it be doing some kind of optimization?) I tested this with an empty app without any traffic and I am wondering how much a real traffic would affect the "eventual consistency". I am aware I can test "eventual consistency" on local development. My question is: Do I really need to restructure my app to handle eventual consistency? Or it would be acceptable to leave it the way it is because the eventual consistency is actually consistent in practice for 99%?
In practice, how eventual is the "eventual consistency" in HRD?
0
0
1
0
0
560
13,448,366
2012-11-19T05:49:00.000
0
0
0
0
0
python,google-app-engine,app-engine-ndb
0
13,457,661
0
3
0
false
1
0
What's the worst case if you get inconsistent results? Does a user see some unimportant info that's out of date? That's probably ok. Will you miscalculate something important, like the price of something? Or the number of items in stock in a store? In that case, you would want to avoid that chance occurence. From observation only, it seems like eventually consistent results show up more as your dataset gets larger, I suspect as your data is split across more tablets. Also, if you're reading your entities back with get() requests by key/id, it'll always be consistent. Make sure you're doing a query to get eventually consistent results.
3
8
0
0
I am in the process of migrating an application from Master/Slave to HRD. I would like to hear some comments from who already went through the migration. I tried a simple example to just post a new entity without ancestor and redirecting to a page to list all entities from that model. I tried it several times and it was always consistent. Them I put 500 indexed properties and again, always consistent... I was also worried about some claims of a limit of one 1 put() per entity group per second. I put() 30 entities with same ancestor (same HTTP request but put() one by one) and it was basically no difference from puting 30 entities without ancestor. (I am using NDB, could it be doing some kind of optimization?) I tested this with an empty app without any traffic and I am wondering how much a real traffic would affect the "eventual consistency". I am aware I can test "eventual consistency" on local development. My question is: Do I really need to restructure my app to handle eventual consistency? Or it would be acceptable to leave it the way it is because the eventual consistency is actually consistent in practice for 99%?
In practice, how eventual is the "eventual consistency" in HRD?
0
0
1
0
0
560
13,450,878
2012-11-19T09:28:00.000
1
0
0
0
0
python,input,pygame
0
13,451,666
0
1
1
true
0
1
You can't do that unless you use the input command in a different thread, but then you have to deal with synchronization (which may or may not be what you want to do). The way I'd implement this is to create a kind of in-game console. When a special key (e.g. '\') is pressed you make the console appear, and when your application is in that state you interpret key presses not as in-game commands but... well, as text. You can print them in the console (using fonts). When a key (e.g. "return") is pressed you make the console disappear and the keys take back their primary functionality. I did this for my pet project and it works like a charm. Plus, since you are developing in Python, you can accept Python instructions and use exec to execute them and edit your game "on the fly".
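A rough pygame sketch of that console toggle; the key bindings, font and rendering details are arbitrary choices for illustration:
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
font = pygame.font.SysFont(None, 24)
console_open, buffer, running = False, "", True

while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == pygame.KEYDOWN:
            if event.key == pygame.K_BACKSLASH and not console_open:
                console_open = True                      # '\' opens the console
            elif console_open:
                if event.key == pygame.K_RETURN:
                    print("command entered:", buffer)    # hand off to your command handler
                    console_open, buffer = False, ""
                elif event.key == pygame.K_BACKSPACE:
                    buffer = buffer[:-1]
                else:
                    buffer += event.unicode              # treat keys as text while the console is open
            else:
                pass                                     # normal in-game key handling goes here
    screen.fill((0, 0, 0))
    if console_open:
        screen.blit(font.render("> " + buffer, True, (255, 255, 255)), (10, 450))
    pygame.display.flip()

pygame.quit()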
1
2
0
0
In Pygame, how can I get graphical input(e.g. clicking exit button) and also get input from the a terminal window simultaneously? To give you context, my game has a GUI but gets its game commands from a "input()" command. How can I look for input from the command line while also handling graphics? I'm not sure if this is possible, but if not, what other options do I have for getting text input from the user? Thanks in advance.
Pygame: Graphical Input + Text Input
1
1.2
1
0
0
906
13,460,288
2012-11-19T18:57:00.000
0
0
0
0
0
python,selenium,webdriver
0
13,460,962
0
2
0
false
0
0
Very hacky, but you could modify the Webdriver Firefox plugin to point to your binary.
1
3
0
0
I'm trying to use Firefox Portable for my tests in Python. With plain webdriver it works, but I was wondering how to do it with remote webdriver. All I could find is how to pass a Firefox profile, but how do I specify to webdriver which binary to use?
How to specify browser binary in selenium remote webdriver?
0
0
1
0
1
620
13,464,456
2012-11-19T23:46:00.000
2
1
0
1
1
python,installation,ubuntu-12.04
0
13,464,518
0
1
0
true
0
0
As an absolute beginner, don't worry right now about where to install libraries. Simple example scripts that you're trying out for learning purposes don't belong in any lib directory such as /usr/lib/python. On Linux you want to do most work in your home directory, so just cd ~ to make sure you're there and create files there with an editor of your choice. You might want to organize your files hierarchically too. For example, create a directory called src/ using the mkdir command in your home directory, and then mkdir src/lpthw, for example, as a place to store all your samples from "Learn Python the Hard Way". Then simply run python <path/to/py/file> to execute the script. Or you can cd ~/src/lpthw and run your scripts by filename only.
1
0
0
0
I am learning Python from Learn Python the Hard Way. On Windows I had no issues going through a lot of the exercises because the setup was easier, but I want to learn Linux as well and Ubuntu seemed the nicest choice. Now I am having trouble with setting up. I can get access to the terminal and then /usr/lib/python2.7, but I don't know whether to save the script in this directory. If I try to make a directory inside this through mkdir I can't, as permission is denied. I also tried to do chmod but didn't know how or whether to do it. Any help regarding how and where to save my script, and how to run it in the terminal as: user@user$ python sampleexercise.py. Using Ubuntu 12.04 LTS, skill = newbie. Thanks in advance.
python library access in ubuntu 12.04
0
1.2
1
0
0
409
13,473,489
2012-11-20T12:44:00.000
1
1
0
0
0
python,web-applications,haskell,clojure,lisp
0
13,476,327
0
4
0
false
1
0
When the server side creates the form, encode a hidden field with the timestamp of the request, so when the user POSTs the form you can see the time difference. How to implement that is up to you; it depends on which server you have available and several other factors.
1
1
0
0
I'd like to make a webapp that asks people multiple choice questions, and times how long they take to answer. I'd like those who want to, to be able to make accounts, and to store the data for how well they've done and how their performance is increasing. I've never written any sort of web app before, although I'm a good programmer and understand how http works. I'm assuming (without evidence) that it's better to use a 'framework' than to hack something together from scratch, and I'd appreciate advice on which framework people think would be most appropriate. I hope that it will prove popular, but would rather get something working than spend time at the start worrying about scaling. Is this sane? And I'd like to be able to develop and test this on my own machine, and then deploy it to a virtual server or some other hosting solution. I'd prefer to use a language like Clojure or Lisp or Haskell, but if the advantages of using, say, Python or Ruby would outweigh the fact that I'd enjoy it more in a more maths-y language, then I like both of those too. I probably draw the line at perl, but if perl or even something like Java or C have compelling advantages then I'm quite happy with them too. They just don't seem appropriate for this sort of thing.
How do I make a web server to make timed multiple choice tests?
0
0.049958
1
0
0
947
13,476,383
2012-11-20T15:23:00.000
0
0
1
0
1
python,python-imaging-library,python-unicode
0
13,476,705
0
2
0
false
0
1
Maybe use Unicode strings? Like u'cadeau check 50 €'... Also, does your font actually have the corresponding glyphs?
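A short PIL sketch of that, with explicit unicode text; the file names are placeholders and the font file must actually contain a euro glyph:
# -*- coding: utf-8 -*-
from PIL import Image, ImageDraw, ImageFont

img = Image.open("background.png")            # hypothetical background image
draw = ImageDraw.Draw(img)
font = ImageFont.truetype("arial.ttf", 24)

title = u"cadeau check 50 \u20ac"             # explicit unicode string with the euro sign
draw.text((10, 10), title, font=font, fill=(0, 0, 0))
img.save("result.png")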
1
2
0
0
I have a title ("cadeau check 50 €") in a form value that I want to write to a background image with arial.ttf. My text is correct but for the euro sign. I have 2 [] in place. I don't know where the problem is coming from. Is this an encoding problem in PIL, or have I a problem with the font?
PIL: how to draw € on an image with draw.text?
0
0
1
0
0
785
13,478,965
2012-11-20T17:44:00.000
2
0
1
0
0
python
0
13,479,006
0
3
0
false
0
0
You need to have an __init__.py file in the backend folder for Python to consider it a package. Then you can do import backend.handlers or from backend.handlers import foo
1
0
0
0
From main.py, I want to import a file from the backend folder WebAppName/main.py WebAppName/backend/handlers.py How do I specify this as an import statement I am aware that importing from the same folder is just import handlers But this is a child directory, so how do I do this?
How do I import from a child directory / subfolder?
0
0.132549
1
0
0
134
13,484,482
2012-11-21T00:23:00.000
0
0
0
1
0
python,applescript
0
53,218,057
0
2
0
false
0
0
My issue was an app with LSBackgroundOnly = YES set attempting to run an AppleScript that displays UI, such as display dialog: Error Domain=NSPOSIXErrorDomain Code=2 "No such file or directory" AppleScript.scpt: execution error: No user interaction allowed. (-1713). Using tell application "Finder" (or similar) works, as shown in the other answer. Or, remove the LSBackgroundOnly key to enable UI AppleScripts without telling a different application. LSUIElement provides a similar mode - no dock icon, no menu bar, etc. - but DOES allow UI AppleScripts to be launched.
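For the Python side, one hedged sketch of the workaround the answer refers to (wrapping the UI command in a tell block and running it through osascript; the dialog text is illustrative):

    import subprocess

    script = '''
    tell application "Finder"
        activate
        display dialog "Pick an option" buttons {"One", "Two"} default button 1
    end tell
    '''

    output = subprocess.check_output(["osascript", "-e", script])
    print(output)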
1
5
0
0
I have an AppleScript which displays a menu list and allows the user to select menu items, etc. It runs fine by itself. Now I try to run it from Python and I get the "No user interaction allowed. (-1713)" error. I looked online and tried the following: adding an on run handler in the same AppleScript, so what I did is just call main() inside it: on run tell application "AppleScript Runner" main() end tell end run. I also tried to run the script from Python: import os def main(): os.system('osascript -e "tell application "ApplesScript Runner" do script /Users/eee/applescript/iTune.scpt end tell"') if name == 'main': main(). Neither way works. Can anyone tell me how to do this correctly?
"No user interaction allowed" When running AppleScript in python
1
0
1
0
0
5,247
13,487,181
2012-11-21T05:51:00.000
0
0
1
0
0
python,sockets,shared-memory
0
13,500,968
0
2
0
false
0
0
First, note that what you're trying to build will require more than just shared memory: it's all well if a.py writes to shared memory, but how will b.py know when the memory is ready and can be read from? All in all, it is often simpler to solve this problem by connecting the multiple processes not via shared memory, but through some other mechanism. (The reason for why mmap usually needs a file name is that it needs a name to connect the several processes. Indeed, if a.py and b.py both call mmap(), how would the system know that these two processes are asking for memory to be shared between them, and not some unrelated z.py? Because they both mmaped the same file. There are also Linux-specific extensions to give a name that doesn't correspond to a file name, but it's more a hack IMHO.) Maybe the most basic alternative mechanism is pipes: they are usually connected with the help of the shell when the programs are started. That's how the following works (on Linux/Unix): python a.py | python b.py. Any output that a.py sends goes to the pipe, whose other end is the input for b.py. You'd write a.py so that it listens to the UDP socket and writes the data to stdout, and b.py so that it reads from stdin to process the data received. If the data needs to go to several processes, you can use e.g. named pipes, which have a nice (but Bash-specific) syntax: python a.py >(python b.py) >(python c.py) will start a.py with two arguments, which are names of pseudo-files that can be opened and written to. Whatever is written to the first pseudo-file goes as input for b.py, and similarly what is written to the second pseudo-file goes as input for c.py.
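A sketch of the pipe layout (run as python a.py | python b.py; port 2222 comes from the question, everything else is illustrative):

    # a.py -- listen on UDP port 2222 and forward each datagram to stdout
    import socket
    import sys

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", 2222))
    while True:
        data, addr = sock.recvfrom(4096)
        sys.stdout.write(data.decode("ascii", "replace") + "\n")
        sys.stdout.flush()

    # b.py -- read one record per line from stdin and process it
    import sys

    for line in sys.stdin:
        record = line.strip()
        print("got %r" % record)        # replace with real processing / DB update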
2
2
0
0
I have processes on several servers that send data to my local port 2222 via UDP every second. I want to read this data and write it to shared memory so that other processes can read the data from shared memory and do things to it. I've been reading about mmap and it seems I have to use a file... which I can't understand why. I have an a.py that reads the data from the socket, but how can I write it to shm? Once it's written, I need to write b.py, c.py, d.py, etc., to read the very same shm and do things to it. Any help or code snippet would be greatly appreciated.
how to write to shared memory in python from stream?
1
0
1
0
0
2,659
13,487,181
2012-11-21T05:51:00.000
0
0
1
0
0
python,sockets,shared-memory
0
13,513,533
0
2
0
false
0
0
mmap doesn't take a file name but rather a file descriptor. It performs the so-called memory mapping, i.e. it associates pages in the virtual memory space of the process with portions of the file-like object, represented by the file descriptor. This is a very powerful operation since it allows you: to access the content of a file simply as an array in memory; to access the memory of special I/O hardware, e.g. the buffers of a sound card or the framebuffer of a graphics adapter (this is possible since file descriptors in Unix are abstractions and they can also refer to device nodes instead of regular files); to share memory between processes by performing shared maps of the same object. The old pre-POSIX way to use shared memory on Unix was to use the System V IPC shared memory. First a shared memory segment had to be created with shmget(2) and then attached to the process with shmat(2). SysV shared memory segments (as well as other IPC objects) have no names but rather numeric IDs, so the special hash function ftok(3) is provided, which converts the combination of a pathname string and a project ID integer into a numeric key ID, but collisions are possible. The modern POSIX way to use shared memory is to open a file-like memory object with shm_open(2), resize it to the desired size with ftruncate(2) and then to mmap(2) it. Memory-mapping in this case acts like the shmat(2) call from the SysV IPC API and truncation is necessary since shm_open(2) creates objects with an initial size of zero. (These are part of the C API; what Python modules provide are more or less thin wrappers around those calls and often have nearly the same signatures.) It is also possible to get shared memory by memory mapping the same regular file in all processes that need to share memory. As a matter of fact, Linux implements the POSIX shared memory operations by creating files on a special tmpfs file system. The tmpfs driver implements very lightweight memory mapping by directly mapping the pages that hold the file content into the address space of the process that executes mmap(2). Since tmpfs behaves as a normal filesystem, you can examine its content using ls, cat and other shell tools. You can even create shared memory objects this way or modify the content of the existing ones. The difference between a file in tmpfs and a regular filesystem file is that the latter is persisted to storage media (hard disk, network storage, flash drive, etc.) and occasionally changes are flushed to this storage media while the former lives entirely in RAM. Solaris also provides a similar RAM-based filesystem, also called tmpfs. In modern operating systems memory mapping is used extensively. Executable files are memory-mapped in order to supply the content of those pages that hold the executable code and the static data. Also shared libraries are memory-mapped. This saves physical memory since these mappings are shared, e.g. the same physical memory that holds the content of an executable file or a shared library is mapped in the virtual memory space of each process.
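A sketch of the file-backed variant in Python, where /tmp/sensor_shm is an arbitrary name chosen only so both processes can find the same mapping:

    import mmap
    import os

    SIZE = 4096
    path = "/tmp/sensor_shm"

    # writer side (e.g. a.py): create/resize the backing file, then map it
    fd = os.open(path, os.O_CREAT | os.O_RDWR)
    os.ftruncate(fd, SIZE)
    shm = mmap.mmap(fd, SIZE)
    shm.seek(0)
    shm.write(b"latest datagram goes here\x00")

    # reader side (e.g. b.py) opens the same path and maps it read-only:
    #   fd = os.open(path, os.O_RDONLY)
    #   shm = mmap.mmap(fd, SIZE, prot=mmap.PROT_READ)
    #   print(shm[:32])

As the answer notes, you still need some signalling (a pipe, a socket, a semaphore) so the readers know when fresh data has been written.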
2
2
0
0
I have processes on several servers that send data to my local port 2222 via UDP every second. I want to read this data and write it to shared memory so that other processes can read the data from shared memory and do things to it. I've been reading about mmap and it seems I have to use a file... which I can't understand why. I have an a.py that reads the data from the socket, but how can I write it to shm? Once it's written, I need to write b.py, c.py, d.py, etc., to read the very same shm and do things to it. Any help or code snippet would be greatly appreciated.
how to write to shared memory in python from stream?
1
0
1
0
0
2,659
13,503,553
2012-11-21T23:10:00.000
0
0
1
0
0
types,input,python-3.x,eval
1
13,521,828
0
3
0
false
0
0
To answer the second part of your question: you can use the isinstance() function in Python to check whether a variable is of a certain type.
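A short sketch of both parts together: re-prompt on empty input and convert/validate the type without eval():

    def ask_int(prompt):
        while True:
            raw = input(prompt)            # raw_input() on Python 2
            if not raw.strip():
                print("You didn't enter anything, please try again.")
                continue
            try:
                return int(raw)
            except ValueError:
                print("That wasn't a whole number, please try again.")

    age = ask_int("How old are you? ")
    print(isinstance(age, int))            # True -- the type check from the answer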
1
1
0
0
I have some functions calling for user input, sometimes a string, an int or whatever. I noticed that if ENTER is pressed with NO INPUT I get an error. So I did some research and I think the eval function may be what I'm looking for, but then again I read about its dangers. So here are my questions: How can I check/force user input? E.g. repeating the prompt, or maybe even warning the user that he didn't enter anything? How do I check for the correct type of input (int, float, string, etc.) against whatever the user types without having my scripts return errors? I appreciate your feedback, cheers.
"Force" User Input and Checking for correct Input Type
0
0
1
0
0
1,011
13,517,991
2012-11-22T18:02:00.000
3
0
1
0
0
python-3.x,urllib3
1
13,520,455
0
1
0
true
0
0
You need to install it for each version of Python you have - if pip installs it for Python 2.7, it won't be accessible from 3.2. There doesn't seem to be a pip-3.2 script, but you can try easy_install3 urllib3.
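A quick check you can run under each interpreter (python and then python3) to see which one actually has the module on its path:

    import sys
    print(sys.executable, sys.version.split()[0])
    try:
        import urllib3
        print("urllib3 found at", urllib3.__file__)
    except ImportError:
        print("urllib3 is not installed for this interpreter")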
1
1
0
0
I have Python 2.7.3 installed alongside Python 3.2.3 on an Ubuntu system. I've installed urllib3 using pip and can import it from the python shell. When I open the python3 shell, I get a can't find module error when trying to import urllib3. help('modules') from within the shell also doesn't list urllib3. Any ideas on how to get python3 to recognize urllib3?
Python3 can't find urllib3
0
1.2
1
0
1
4,054
13,522,975
2012-11-23T04:10:00.000
2
1
1
0
1
python-2.7,pygame
0
13,537,741
0
1
0
true
0
0
I found a solution. The latest version of Pygame is still able to play MPEG-1 files. The problem was that there are different encodings of MPEG-1. The ones I found that work so far are Any Video Converter and the Zamzar.com online converter. The downside to Zamzar is that it outputs a really small version of the original video. video.online-convert.com does not work.
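A hedged sketch for a pygame 1.x build where the movie module is available; intro.mpg is a placeholder for the re-encoded MPEG-1 file:

    import pygame
    import pygame.movie

    pygame.init()
    screen = pygame.display.set_mode((640, 480))

    movie = pygame.movie.Movie("intro.mpg")
    movie.set_display(screen, pygame.Rect(0, 0, 640, 480))
    movie.play()

    while movie.get_busy():                 # wait for the intro to finish
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                movie.stop()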
1
0
0
0
I'm making a game using Python with the PyGame module. I am trying to make an introduction screen for my game using a video that I made, since it was easier to make a video than to code the animation for the intro screen. The PyGame movie module does not work as stated on their site, so I cannot use that. I tried using PyMedia but I have no idea how to even get a video running, since their documentation wasn't that helpful. Do you guys know any sample code that uses PyMedia to play a video? Or any code at all that loads a video using Python? Or if there's any other video module out there that is simple, please let me know. I'm totally stumped.
Playing Video Files in Python
0
1.2
1
0
0
1,309
13,523,789
2012-11-23T05:55:00.000
1
0
0
0
1
python,python-3.x
0
13,533,315
0
1
0
true
0
0
I encountered the same problem, and I solved it by chopping the file up and then sending the parts separately (load the file, send file[0:512], then file[512:1024], and so on). Before sending the file I sent its length to the receiver so that it would know when it's done. I know this probably isn't the best way to do this, but I hope it will help you.
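A sketch of the sender side of that scheme (host, port and file name are placeholders; the receiver first reads the length line, then loops on recv() until it has that many bytes):

    import os
    import socket

    CHUNK = 1024
    path = "big_file.bin"

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(("example.com", 5001))
    s.sendall(str(os.path.getsize(path)).encode() + b"\n")   # length first
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK)
            if not chunk:
                break
            s.sendall(chunk)
    s.close()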
1
0
0
0
Can someone give me a brief idea of how to transfer large files over the internet? I tried with sockets, but it does not work. I am not sure what the size of the receiving buffer should be; I tried with 1024 bytes. I send the data from one end and keep receiving it at the other end. Is there any other way, apart from sockets, that I can use in Python?
Transfer a Big File Python
0
1.2
1
0
1
1,410
13,542,698
2012-11-24T15:38:00.000
0
1
1
0
1
python,apt
0
13,552,981
0
2
0
false
0
0
Using fork is just one possibility, I think. I've already tried redirecting sys.stdout and even sys.stderr: no joy, it won't work.
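A hedged sketch of the fork idea combined with file-descriptor-level redirection (which, unlike swapping sys.stdout, also silences output written on the C side); the package name is only an example and commit() still needs root:

    import os
    import apt

    pid = os.fork()
    if pid == 0:                               # child: silence fds 1 and 2, then commit
        devnull = os.open(os.devnull, os.O_WRONLY)
        os.dup2(devnull, 1)
        os.dup2(devnull, 2)
        cache = apt.Cache()
        cache["htop"].mark_install()           # example package
        cache.commit()
        os._exit(0)
    else:                                      # parent: wait for the child
        os.waitpid(pid, 0)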
1
3
0
0
I use the python-apt library and I would like the commit() function not to produce any output. I've searched the web and saw that the fork function can do the trick, but I don't know how to do that or whether there is another way. I don't use any GUI; I work from the terminal.
How to silence the commit function from the python apt library?
0
0
1
0
0
256
13,553,174
2012-11-25T16:51:00.000
2
1
0
0
0
c++,python,boost,boost-python
0
13,558,179
0
1
0
false
0
1
Boost python is useful for exposing C++ objects to python. Since you're talking about interacting with an already running application from python, and the lifetime of the script is shorter than the lifetime of the game server, I don't think boost python is what you're looking for, but rather some form of interprocess communication. Whilst you could create your IPC mechanism in C++, and then expose it to python using boost python, I doubt this is what you want to do.
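Purely as an illustration of the IPC shape (host, port and the "status" command are made up), the Python side could be as small as:

    import socket

    with socket.create_connection(("127.0.0.1", 4000)) as conn:
        conn.sendall(b"status\n")
        reply = conn.recv(4096)
        print(reply.decode())

The C++ server would listen on that port and dispatch the received commands to its internal functions.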
1
1
0
0
I read a few Boost.Python tutorials and I know how to call a C++ function from Python. But what I want to do is create a C++ application which will be running in the background all the time and a Python script that will be able to call a C++ function from that running instance of the C++ application. The C++ application will be a game server and it has to run all the time. I know I could use sockets/shared memory etc. for this kind of communication, but is it possible to do it with Boost.Python?
Boost.Python - communication with running C++ program
1
0.379949
1
0
0
426
13,555,278
2012-11-25T20:40:00.000
1
1
0
0
1
c++,python,c,plugins,shared-libraries
0
13,555,348
0
1
0
false
0
1
Write your application in Python, then you can have a folder for your plugins. Your application searches for them by checking the directory/traversing the plugin tree. Then import them via "import" or use ctypes for a .so/.dll, or even easier: you can use boost::python for creating a .so/.dll that you can 'import' like a normal python module. Don't use C++ and try to do scripting in Python - that really sucks, you will regret it. ;)
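A sketch of that discovery loop (the folder name and the idea of treating each loaded object as a plugin are assumptions, not a fixed API):

    import ctypes
    import importlib.util
    import os

    def load_plugins(folder):
        plugins = []
        for name in os.listdir(folder):
            path = os.path.join(folder, name)
            if name.endswith(".py"):
                spec = importlib.util.spec_from_file_location(name[:-3], path)
                mod = importlib.util.module_from_spec(spec)
                spec.loader.exec_module(mod)
                plugins.append(mod)
            elif name.endswith((".so", ".dll")):
                plugins.append(ctypes.CDLL(path))
        return plugins

    for plugin in load_plugins("plugins"):
        print("loaded", plugin)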
1
0
0
0
I want to create an application that is capable of loading plugins. The twist is that I want to be able to create plugins in both C/C++ and Python. So I've started thinking about this and have a few questions that I'm hoping people can help me with. My first question is whether I need to use C/C++ for the "core" of the application (the part that actually does the loading of the plugins)? This is my feeling at least, I would think that implementing the core in Python would result in a nasty performance hit, but it would probably simplify loading the plugins dynamically. Second question is how would I go about defining the plugin interface for C/C++ on one hand and Python on the other? The plugin interface would be rather simple, just a single method that accepts a list of image as a parameter and returns a list of images as a return value. I will probably use the OpenCV image type within the plugins which exists for both C/C++ and Python. Finally, I would like the application to dynamically discover plugins. So if you place either a .py file or a shared library file (.so/.dll) in this directory, the application would be able to produce a list of available plugins at runtime. I found something in the Boost library called Boost.Extension (http://boost-extension.redshoelace.com/docs/boost/extension/index.html) but unfortunately it doesn't seem to be a part of the official Boost library and it also seems to be a bit stale now. On top of that, I don't know how well it would work with Python, that is, how easy it would be to create Python plugins that fit into this mechanism. As a side note, I imagine the application split into two "parts". One part is a stripped down core (loading and invoking plugin instances from a "recipe"). The other part is the core plus a GUI that I plan on writing in Python (using PySide). This GUI will enable the user to define the aforementioned "recipes". This GUI part would require the core to be able to provide a list of available plugins. Sorry for the long winded "question". I guess I'm hoping for more of a discussion and of course if anybody knows of something similar that would help me I would very much appreciate a pointer. I would also appreciate any concise and to the point reading material about something similar (such as integrating C/C++ and Python etc).
Application that can load both C/C++ and Python plugins
0
0.197375
1
0
0
648
13,573,359
2012-11-26T21:20:00.000
2
0
0
0
0
python,linux,mysql-python,bluehost
1
13,573,647
0
2
0
true
1
0
I think you upgraded your OS installation, which in turn upgraded libmysqlclient and broke the native extension. What you can do is reinstall libmysqlclient16 again (how to do it depends on your particular OS) and that should fix your issue. Another approach would be to uninstall the MySQLdb module and reinstall it again, forcing Python to compile it against a newer library.
2
2
0
0
I have a shared hosting environment on Bluehost. I am running a custom installation of Python (+ Django) with a few installed modules. All has been working, until yesterday a change was made on the server (I assume) which gave me this Django error: ... File "/****/****/.local/lib/python/django/utils/importlib.py", line 35, in import_module __import__(name) File "/****/****/.local/lib/python/django/db/backends/mysql/base.py", line 14, in raise ImproperlyConfigured("Error loading MySQLdb module: %s" % e) ImproperlyConfigured: Error loading MySQLdb module: libmysqlclient_r.so.16: cannot open shared object file: No such file or directory Of course, Bluehost support is not too helpful. They advised that 1) I use the default Python install, because that has MySQLdb installed already, or that 2) I somehow import the MySQLdb package installed on the default Python from my Python (I don't know if this can even be done). I am concerned that if I use the default install I won't have permission to install my other packages. Does anybody have any ideas how to get back to a working state, with as few infrastructure changes as possible?
Python module issue
0
1.2
1
1
0
2,446
13,573,359
2012-11-26T21:20:00.000
0
0
0
0
0
python,linux,mysql-python,bluehost
1
13,591,200
0
2
0
false
1
0
You were right. Bluehost upgraded MySQL. Here is what I did: 1) remove the "build" directory in the "MySQL-python-1.2.3" directory, 2) remove the egg, 3) build the module again with "python setup.py build", 4) install the module again with "python setup.py install --prefix=$HOME/.local". The moral of the story for me is to remove the old stuff when reinstalling a module.
2
2
0
0
I have a shared hosting environment on Bluehost. I am running a custom installation of Python (+ Django) with a few installed modules. All has been working, until yesterday a change was made on the server (I assume) which gave me this Django error: ... File "/****/****/.local/lib/python/django/utils/importlib.py", line 35, in import_module __import__(name) File "/****/****/.local/lib/python/django/db/backends/mysql/base.py", line 14, in raise ImproperlyConfigured("Error loading MySQLdb module: %s" % e) ImproperlyConfigured: Error loading MySQLdb module: libmysqlclient_r.so.16: cannot open shared object file: No such file or directory Of course, Bluehost support is not too helpful. They advised that 1) I use the default Python install, because that has MySQLdb installed already, or that 2) I somehow import the MySQLdb package installed on the default Python from my Python (I don't know if this can even be done). I am concerned that if I use the default install I won't have permission to install my other packages. Does anybody have any ideas how to get back to a working state, with as few infrastructure changes as possible?
Python module issue
0
0
1
1
0
2,446
13,585,238
2012-11-27T13:25:00.000
1
0
0
1
0
python,multiprocessing
0
13,585,552
0
2
0
false
0
0
Use a pipe. Create two processes using the subprocess module: the first reads from the serial port and writes the set of hex codes to stdout. This is piped to the second process, which reads from stdin and updates the database.
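A sketch of wiring the two scripts together from a parent process (reader.py and writer.py are the hypothetical serial-reader and database-writer scripts):

    import subprocess

    reader = subprocess.Popen(["python", "reader.py"], stdout=subprocess.PIPE)
    writer = subprocess.Popen(["python", "writer.py"], stdin=reader.stdout)
    reader.stdout.close()        # so writer.py sees EOF when reader.py exits
    writer.wait()

Equivalently, from a shell: python reader.py | python writer.py.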
1
2
0
0
Hi, I am using serial port communication to one of my devices using Python code. It sends a set of hex codes, receives a set of data and processes it. This data has to be stored into a database. I have another script that uses the MySQLdb library to push it into the database. If I do that sequentially in one script I lose a lot of sampling rate. I can sample up to 32 data sets per second if I don't connect to a database and insert into the table. If I use multiprocessing and try to run it, my sampling rate drops to 0.75, because the parent process is waiting for the child to join. So how can I handle this situation? Is it possible to run them independently by using a queue to fill data?
Subprocess in Reading Serial Port Read
1
0.099668
1
0
0
2,531