Dataset schema (the records that follow list the field values in this column order, one field per line):

Column | Type | Min | Max
Q_Id | int64 | 337 | 49.3M
CreationDate | string | length 23 | length 23
Users Score | int64 | -42 | 1.15k
Other | int64 | 0 | 1
Python Basics and Environment | int64 | 0 | 1
System Administration and DevOps | int64 | 0 | 1
Tags | string | length 6 | length 105
A_Id | int64 | 518 | 72.5M
AnswerCount | int64 | 1 | 64
is_accepted | bool | 2 classes
Web Development | int64 | 0 | 1
GUI and Desktop Applications | int64 | 0 | 1
Answer | string | length 6 | length 11.6k
Available Count | int64 | 1 | 31
Q_Score | int64 | 0 | 6.79k
Data Science and Machine Learning | int64 | 0 | 1
Question | string | length 15 | length 29k
Title | string | length 11 | length 150
Score | float64 | -1 | 1.2
Database and SQL | int64 | 0 | 1
Networking and APIs | int64 | 0 | 1
ViewCount | int64 | 8 | 6.81M
15,315,984
2013-03-09T21:12:00.000
0
0
0
0
python,node.js
15,316,002
3
false
1
0
I think you're thinking about this problem backwards. Node.js lets you run browser Javascript without a browser. You won't find it useful in your Python programming. You're better off, if you want to stick with Python, using a framework such as Pyjamas to write Javascript with Python or another framework such as Flask or Twisted to integrate the Javascript with Python.
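A minimal sketch of the split this answer describes, assuming Flask is installed; the route, form field, and follow-up page are hypothetical. The browser-side form stays plain HTML/JS, while Python decides what happens on submit:

```python
from flask import Flask, request, redirect

app = Flask(__name__)

@app.route("/submit", methods=["POST"])
def submit():
    # Python handles the form submission: validate, write to SQL, etc.
    name = request.form.get("name", "")
    print("got submission from", name)
    return redirect("/thanks")  # hypothetical follow-up page

if __name__ == "__main__":
    app.run()
```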
1
2
0
I program in Python. I started a few months ago, so I am not the "guru" type of developer. I also know the basics of HTML and CSS. I have seen a few tutorials about node.js and I really like it. I cannot create those forms, bars, buttons etc. with my knowledge of HTML and CSS. Can I use node.js to create what the user sees in the browser, and write in Python what happens when someone pushes the "submit" button? For example redirects, SQL writes and reads, etc. Thank you
using python and node.js
0
0
1
4,502
15,316,304
2013-03-09T21:49:00.000
0
0
0
0
python,selenium,tor
21,777,926
11
false
0
0
I looked into this, and unless I'm mistaken, on face value it's not possible. The reason this cannot be done is because: Tor Browser is based on the Firefox code. Tor Browser has specific patches to the Firefox code to prevent external applications communicating with the Tor Browser (including blocking the Components.Interfaces library). The Selenium Firefox WebDriver communicates with the browser through Javascript libraries that are, as aforementioned, blocked by Tor Browser. This is presumably so no-one outside of the Tor Browser either on your box or over the internet knows about your browsing. Your alternatives are: Use a Tor proxy through Firefox instead of the Tor Browser (see the link in the comments of the question). Rebuild the Firefox source code with the Tor Browser patches excluding those that prevent external communication with Tor Browser. I suggest the former.
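A sketch of the first alternative (plain Firefox through a Tor SOCKS proxy), assuming a Tor client is already listening on 127.0.0.1:9050; the profile-preference API shown is the classic Selenium one and may differ in newer releases:

```python
from selenium import webdriver

profile = webdriver.FirefoxProfile()
profile.set_preference("network.proxy.type", 1)            # manual proxy configuration
profile.set_preference("network.proxy.socks", "127.0.0.1") # local Tor client
profile.set_preference("network.proxy.socks_port", 9050)

driver = webdriver.Firefox(firefox_profile=profile)
driver.get("https://check.torproject.org/")  # page reports whether Tor is in use
```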
1
32
0
Is it possible to make selenium use the TOR browser? Does anyone have any code they could copy-paste?
Open tor browser with selenium
0
0
1
46,677
15,317,707
2013-03-10T00:53:00.000
0
0
0
0
python,game-engine,pypy,rpython
15,564,865
2
false
0
1
RPython is not Python. They are different languages even though it just so happens that you can execute RPython code with a Python interpreter and get similar effects (much more slowly) to running the compiled RPython program. It is extremely unlikely for there to ever be any reasonable Python library (game library or any other kind) that also works in RPython. It would have to be specifically written for RPython, and if it worked in RPython it would be so inflexible when considered as a regular Python library that nobody would use it in that context. If you want to program in a lower level compiled language with game libraries, use C# or Java. RPython is not a good competitor (in large part because it has very few libraries for anything, not even much of a standard library). If you want to program in Python, use Python (possibly use the PyPy implementation of Python, which can be faster than the standard Python interpreter if it supports all the libraries you're using). RPython will not help you write Python code that goes faster. If your goal is not to specifically produce a game, but rather to do some project in RPython because you want to learn RPython specifically, then that's a perfectly reasonable aim. But writing a game in RPython will probably not be the most helpful project for learning to write RPython; you should probably consider writing some sort of interpreter instead, since that's what RPython is designed to do.
1
1
0
Are there any Python game libraries (Pygame, Pyglet, etc.) with support for RPython? Or game libraries specifically made for RPython? Or bindings for a game library for RPython?
Game Library with Support for RPython
0
0
0
255
15,319,018
2013-03-10T04:46:00.000
0
0
0
0
android,python
27,331,662
5
false
1
1
DrPython is a good IDE/editor, and it's simple to use. If you're using Linux, you can install it from the software center. I don't know how to install it on OSX/Windows.
1
28
0
I will be in charge of teaching a small group of middle school students how to program phone applications. After much research I found that Python might be the best way to go. I am a website development senior at my university, but I have not used Python before. I understand both ActionScript and Javascript and I think their logic might be beneficial for learning Python. For the web languages I am familiar with writing I use Sublime2, Dreamweaver, or Flash to code them. So my questions are: Which program do I use to code Python? How do I use the code created in Python to work on Android phones?
How do I program an Android App with Python?
0
0
0
91,831
15,324,095
2013-03-10T15:46:00.000
0
0
0
0
python,django,powershell
27,304,554
2
false
1
0
When the window pops up and then disappears, that means an error is occurring inside the action being called, but no error message is being returned for display on the command line. In my case, I was trying to run manage.py (any command, in fact) and it required MySQL-python to be installed first.
2
0
0
I can use django-admin.py startproject foo and it works fine, however now when I try for instance django-admin.py shell or django-admin.py help, nothing happens. By nothing happens, I mean (for the django-admin.py shell example) the console doesn't open up the shell like manage.py would, but instead pops up and immediately closes a new window, as it does when I double-click a python file. There isn't any error message; the console just doesn't output anything. Hopefully what I am trying to say makes sense. It's kind of hard for me to explain. I used to be able to use the django-admin.py shell command, so I don't know what happened. Anyone know what's going on? Thanks in advance, and if I need to clarify something feel free to ask and I will try.
django-admin.py doesn't work properly
0
0
0
314
15,324,095
2013-03-10T15:46:00.000
1
0
0
0
python,django,powershell
15,324,273
2
false
1
0
Did you upgrade your version of Python/Django? Sometimes there are compatibility issues and you have to uninstall the previous version.
2
0
0
I can use django-admin.py startproject foo and it works fine, however now when I try for instance django-admin.py shell or django-admin.py help, nothing happens. By nothing happens, I mean (for the django-admin.py shell example) the console doesn't open up the shell like manage.py would, but instead pops up and immediately closes a new window, as it does when I double-click a python file. There isn't any error message; the console just doesn't output anything. Hopefully what I am trying to say makes sense. It's kind of hard for me to explain. I used to be able to use the django-admin.py shell command, so I don't know what happened. Anyone know what's going on? Thanks in advance, and if I need to clarify something feel free to ask and I will try.
django-admin.py doesn't work properly
0.099668
0
0
314
15,325,056
2013-03-10T17:11:00.000
1
1
0
0
python,id3,mutagen
15,325,204
2
true
0
0
Beware of premature optimization. Are you really sure that this will be a performance problem? What are your requirements -- how quickly does the script need to run? How fast does it run with the naïve approach? Profile and evaluate before you optimize. I think there's a serious possibility that you're seeing a performance problem where none actually exists. You can't avoid visiting each file once if you want a guaranteed correct answer. As you've seen, optimizations that entirely skip files will basically amount to automated guesswork. Can you keep a record of previous scans you've done, and on a subsequent scan use the last-modified dates of the files to avoid re-scanning files you've already scanned once? This could mean that your first scan might take a little bit of time, but subsequent scans would be faster. If you need to do a lot of complex queries on a music collection quickly, consider importing the metadata of the entire collection into a database (for instance SQLite or MySQL). Importing will take time -- updating to insert new files will take a little bit of time (checking the last-modified dates as above). Once the data is in your database, however, everything should be fairly snappy assuming that the database is set up sensibly.
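A sketch of the last-modified-date idea from this answer, assuming the mutagen package; the music directory, cache file name, and cache format are hypothetical:

```python
import json
import os
from mutagen.easyid3 import EasyID3

CACHE = "scan_cache.json"  # maps path -> [mtime, artist]
cache = json.load(open(CACHE)) if os.path.exists(CACHE) else {}

for root, _, files in os.walk("music"):
    for name in files:
        if not name.lower().endswith(".mp3"):
            continue
        path = os.path.join(root, name)
        mtime = os.path.getmtime(path)
        entry = cache.get(path)
        if entry and entry[0] == mtime:
            artist = entry[1]  # unchanged since the last scan: skip re-reading the tag
        else:
            tags = EasyID3(path)  # raises if the file has no ID3 header
            artist = tags.get("artist", ["?"])[0]
            cache[path] = [mtime, artist]

json.dump(cache, open(CACHE, "w"))
```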
1
0
0
I'm building a small tool that I want to scan over a music collection, read the ID3 info of a track, and store it as long as that particular artist does not have a song that has been accessed more than twice. I'm planning on using Mutagen for reading the tags. However, the music collections of myself and many others are massive, exceeding 20,000 songs. As far as I know, libraries like Mutagen have to open and close every song to get the ID3 info from it. While MP3s aren't terribly performance-heavy, that's a lot of songs. I'm already planning a minor optimization in the form of keeping a count of each artist and not storing any info if their song count exceeds 2, but as far as I can tell I still need to open every song to check the artist ID3 tag. I toyed with the idea of using directories as a hint for the artist name and not reading any more info in that directory once the artist song count exceeds 2, but not everyone has their music set up in neat Artist/Album/Songs directories. Does anyone have any other optimizations in mind that might cut down on the overhead of opening so many MP3s?
Optimizing a Mass ID3 Tag Scan
1.2
0
0
330
15,327,776
2013-03-10T21:20:00.000
0
0
0
0
python,django,security,version-control,django-settings
59,843,984
4
false
1
0
Having something like this in your settings.py: db_user = 'my_db_user' db_password = 'my_db_password' hard-codes valuable information in your code and does pose a security risk. An alternative is to store your valuable information (API keys, database passwords etc.) on your local machine as environment variables. E.g. on Linux you could add: export DB_USER="my_db_user" export DB_PASS="my_db_password" to your .bash_profile. Or there is usually an option with your hosting provider to set environment variables, e.g. with AWS Elastic Beanstalk you can add env variables under your configuration in the console. Then to retrieve your information, import os: import os db_user = os.environ.get('DB_USER') db_password = os.environ.get('DB_PASS')
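A sketch of that pattern inside settings.py, with hypothetical variable and database names; os.environ.get returns None (or a supplied default) when the variable is unset:

```python
import os

DB_USER = os.environ.get("DB_USER")       # None if not set
DB_PASS = os.environ.get("DB_PASS", "")   # or supply a default

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql_psycopg2",
        "NAME": "mydb",  # hypothetical
        "USER": DB_USER,
        "PASSWORD": DB_PASS,
    }
}
```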
1
40
0
I use Python and Django to create web applications, which we store in source control. The way Django is normally set up, the passwords are in plain text within the settings.py file. Storing my password in plain text would open me up to a number of security problems, particularly because this is an open source project and my source code would be version controlled (through git, on Github, for the entire world to see!) The question is, what would be the best practice for securely writing a settings.py file in a Django/Python development environment?
Python/Django - Avoid saving passwords in source code
0
0
0
15,090
15,329,256
2013-03-11T00:01:00.000
0
0
0
0
c++,python,algorithm,primes,prime-factoring
15,329,656
3
false
0
0
As Malvolio was (indirectly) suggesting, I personally wouldn't use prime factorization if you want to find factors in a range. I would start at t = (int)sqrt(n) and then decrement, checking (1) whether t is a factor, and (2) continuing until t or n/t has reached the range (set a flag) and then both have left it. Or, if your range is relatively small, check against those values themselves.
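A sketch of that approach, collecting divisor pairs by trial division up to sqrt(n) and keeping those that fall in the range; nothing here is specific to the questioner's code:

```python
import math

def divisors_in_range(n, lo, hi):
    found = set()
    for t in range(1, math.isqrt(n) + 1):  # math.isqrt needs Python 3.8+
        if n % t == 0:
            for d in (t, n // t):          # each small divisor pairs with n // t
                if lo <= d <= hi:
                    found.add(d)
    return sorted(found)

print(divisors_in_range(100, 23, 49))  # [25]
```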
1
1
1
So I have algorithms (easily searchable on the net) for prime factorization and divisor acquisition but I don't know how to scale it to finding those divisors within a range. For example all divisors of 100 between 23 and 49 (arbitrary). But also something efficient so I can scale this to big numbers in larger ranges. At first I was thinking of using an array that's the size of the range and then use all the primes <= the upper bound to sieve all the elements in that array to return an eventual list of divisors, but for large ranges this would be too memory intensive. Is there a simple way to just directly generate the divisors?
How can I efficiently get all divisors of X within a range if I have X's prime factorization?
0
0
0
1,052
15,329,649
2013-03-11T00:53:00.000
0
0
0
0
python,sockets,python-idle
71,399,314
2
false
0
0
For newbies like myself: with the server script already running in the editor window, do not open the client script from the first opened IDLE shell; instead, open another IDLE shell window and open/run the client script there.
2
7
0
I am working on a super simple socket program and I have code for the client and code for the server. How do I run both of these .py files at the same time to see if they work?
How to run two modules at the same time in IDLE
0
0
1
14,635
15,329,649
2013-03-11T00:53:00.000
15
0
0
0
python,sockets,python-idle
15,329,711
2
true
0
0
You can run multiple instances of IDLE/Python shell at the same time. So open IDLE and run the server code and then open up IDLE again, which will start a separate instance and then run your client code.
2
7
0
I am working on a super simple socket program and I have code for the client and code for the server. How do I run both of these .py files at the same time to see if they work?
How to run two modules at the same time in IDLE
1.2
0
1
14,635
15,330,175
2013-03-11T02:11:00.000
0
0
1
0
python,argparse
48,232,074
5
false
0
0
When I had this need I just installed argparse using pip: pip install argparse. This worked fine for the python-2.6.6-66.el6_8.x86_64 that was installed on my vanilla CentOS 6.8 system. I later found out that this did not work on other systems running python-2.6.6-52.el6.x86_64. As a result I ended up copying argparse related files from one system to the other. i.e. all files and directories beginning with argparse in /usr/lib/python2.6/site-packages Only after doing all this did I discover that I could have simply installed the python argparse rpm python-argparse that comes with both distributions. i.e. yum install python-argparse
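If installation is not possible (as the questioner prefers), one workaround is to ship a copy of argparse.py next to the script and fall back to it; the "vendor" directory name here is hypothetical:

```python
try:
    import argparse
except ImportError:
    # Python 2.6: fall back to a bundled copy of argparse.py
    import os
    import sys
    sys.path.insert(0, os.path.join(os.path.dirname(os.path.abspath(__file__)), "vendor"))
    import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--name", default="world")
print(parser.parse_args().name)
```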
1
40
0
I have some Python 2.7 code written that uses the argparse module. Now I need to run it on a Python 2.6 machine and it won't work, since argparse was added in 2.7. Is there any way I can get argparse in 2.6? I would like to avoid rewriting the code, since I will be transferring this kind of code between the machines often. Upgrading Python is not an option. I should have clarified that the ideal solution would be something that does not require module installation.
How can I get argparse in Python 2.6?
0
0
0
65,496
15,330,521
2013-03-11T03:01:00.000
2
0
1
0
python,numpy,scipy
15,330,625
3
true
0
0
To answer your second question: Tuples in Python can have any length n. Due to syntax, the way you represent a one-element tuple is ('element',) where the trailing comma is mandatory. If you have ('element') then this is simply the expression inside the parentheses. So (3) + 4 == 7, but (3,) + 4 == TypeError. Likewise ('element') == 'element'. To answer your first question: You're more than likely doing something wrong with passing the array around. There is no reason for the NumPy array to misrepresent itself without some type of mutation to the array.
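A short demonstration of the likely cause: passing a single row of the array (or iterating over it) yields the (3,) shape, a one-element tuple:

```python
import numpy as np

a = np.zeros((3, 3))

def show(x):
    print(x.shape)

show(a)        # (3, 3)
show(a[0])     # (3,)  -- a single row, probably what the function received
for row in a:  # iterating also yields rows of shape (3,)
    show(row)
    break
```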
2
1
1
I have a NumPy array that is of size (3, 3). When I print shape of the array within __main__ module I get (3, 3). However I am passing this array to a function and when I print its size in the function I get (3, ). Why does this happen? Also, what does it mean for a tuple to have its last element unspecified? That is, shouldn't (3, ) be an invalid tuple in the first place?
NumPy array size issue
1.2
0
0
1,032
15,330,521
2013-03-11T03:01:00.000
2
0
1
0
python,numpy,scipy
15,330,600
3
false
0
0
A tuple like this: (3, ) means that it's a tuple with a single element (a single dimension, in this case). That's the correct syntax - with a trailing , because if it looked like this: (3) then Python would interpret it as a number surrounded by parenthesis, not a tuple. It'd be useful to see the actual code, but I'm guessing that you're not passing the entire array, only a row (or a column) of it.
2
1
1
I have a NumPy array that is of size (3, 3). When I print shape of the array within __main__ module I get (3, 3). However I am passing this array to a function and when I print its size in the function I get (3, ). Why does this happen? Also, what does it mean for a tuple to have its last element unspecified? That is, shouldn't (3, ) be an invalid tuple in the first place?
NumPy array size issue
0.132549
0
0
1,032
15,330,568
2013-03-11T03:08:00.000
0
0
0
0
c++,python
15,330,640
2
false
0
0
Why not consider using multithreading? Make an array with size equal to the number of blocks, have each thread count the matches for one block, and have the thread write the result to the corresponding entry in the array. Afterwards, sort the array in decreasing order and you get the result.
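The per-block counting can also be done without threads; a sketch using set intersection, with hypothetical data standing in for the 100 search words and 20,000 blocks:

```python
query = {"alpha", "beta", "gamma"}                 # the ~100 search words
blocks = [["alpha", "delta"], ["beta", "gamma"]]   # the ~20000 blocks of ~20 words

scores = sorted(
    ((len(query & set(block)), i) for i, block in enumerate(blocks)),
    reverse=True,
)
print(scores)  # [(2, 1), (1, 0)] -- (match count, block index), best first
```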
1
0
1
I have to search about 100 words in blocks of data (20000 blocks approximately) and each block consists of about 20 words. The blocks should be returned in the decreasing order of the number of matches. The brute force technique is very cumbersome because you have to search for all the 100 words one by one and then combine the number of related searches in a complicated manner. Is there any other algorithm which allows to search multiple words at the same time and store the number of matching words? Thank you
Searching multiple words in many blocks of data
0
0
0
77
15,331,031
2013-03-11T04:04:00.000
1
0
1
0
python,image-processing,python-imaging-library
15,331,517
1
false
1
0
If all you want to do is overlay text, I suggest you simply use imagemagick.
1
3
0
On my site I'm using simple text overlay. Inputs come from textboxes and then javascript makes an AJAX call with the inputs that are then processed in the backend by PIL (Python Imaging Library). Thing is, I'm not happy about the quality of PIL's text overlays - it's not possible to do a nice looking stroke (e.g. white font color + black stroke) and I'm thinking about switching to a different solution than PIL. I want to stay with Python though. What would you recommend for image processing in Python? Which library offers the best quality? Thanks! Best, Tom
Recommendations for backend image processing in Python
0.197375
0
0
1,031
15,331,039
2013-03-11T04:05:00.000
1
1
0
0
python,flash,pyramid
15,347,393
2
false
1
0
I possibly ran into a similar problem with my Pyramid app. I'm using TinyMCE and had placed the files in the static folder. Everything worked on my dev server, but when moved to test and prod, the static .html files related to TinyMCE couldn't be found. My web host had me add a symlink, basically (I think) hard-coding in the server software (nginx in this case) the mapping from the web address of my static HTML to the server path, and that worked. I'll have to check out the mimetypes thing too, though.
1
1
0
I am running this python pyramid server. Strangely, when I moved my server code to a different machine, pserve stopped serving flash videos in my static folder. Whereas it serves other static files, like images, fine ! What could be a reason for this ?
Pyramid server not serving flash files
0.099668
0
0
138
15,331,859
2013-03-11T05:37:00.000
14
0
1
0
c++,python
15,331,880
3
true
0
0
When you're running a python program, the interpreter runs through it from top to bottom executing as it goes. In C++, that doesn't happen. The compiler builds all your functions into little blobs of machine code and then the linker hooks them up. At runtime, the operating system calls your main function, and everything goes on from there. In that context, code outside of functions is meaningless - when would it run?
2
8
0
As a newbie to c++, coming from python, I'm not sure why c++ doesn't allow code outside of a function (in the global namespace?). It seems like this could be useful to do some initialization before main() is called or other functions are even declared. (I'm not trying to argue with the compiler, I'd just like to know the thought process behind implementing it this way.)
Why must c++ code be contained within functions?
1.2
0
0
578
15,331,859
2013-03-11T05:37:00.000
1
0
1
0
c++,python
15,331,913
3
false
0
0
main() is the entry point to the program, so any code you write needs to have an order of execution starting from that point. Static variables are initialized before main() is executed, so you can initialize whatever variables you need before that. But if you want to run code that initializes the state of the program, you should do it at the beginning of the program, or (ab)use static variables and do it in a constructor.
2
8
0
As a newbie to c++, coming from python, I'm not sure why c++ doesn't allow code outside of a function (in the global namespace?). It seems like this could be useful to do some initialization before main() is called or other functions are even declared. (I'm not trying to argue with the compiler, I'd just like to know the thought process behind implementing it this way.)
Why must c++ code be contained within functions?
0.066568
0
0
578
15,332,618
2013-03-11T06:40:00.000
1
0
0
0
mysql,django,python-2.7,xeround
15,388,969
1
false
0
0
Run your Django ORM statement in the Django shell and print the SQL it generates. Look for DELETE statements in that SQL.
1
4
0
I have created a cronjob in Python. The purpose is to insert data into a table from another one based on certain conditions. There are more than 65,000 records to be inserted. I have executed the cronjob and have seen more than 25,000 records inserted. But after that, the records are getting automatically deleted from that table. Even the records that had already been inserted into the table the day before executing the cronjob are getting deleted. "The current database is hosted in the Xeround cloud." Is MySQL doing this, i.e. some kind of rollback or something? Does anybody have any idea about this? Please give me a solution. Thanks in advance.
Records getting deleted from Mysql table automatically
0.197375
1
0
637
15,333,551
2013-03-11T07:49:00.000
0
0
1
0
python
15,333,663
2
false
0
0
There are several possible approaches: Delegate the generation of unique ids to the database engine. Many engines provide such a facility. Have a centralized service of your own that would generate the ids, and have your scripts talk to that service. Have each script generate its own ids; to ensure uniqueness, incorporate into the id a token that's unique to the script (e.g. process id and/or IP or MAC address).
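A sketch of the third approach, combining a timestamp with the process id so that concurrently running scripts don't collide; the bit layout is an arbitrary choice, and two calls within the same microsecond in one process would still collide (add a counter if that matters):

```python
import os
import time

def unique_id():
    # microsecond timestamp in the high bits, pid in the low 16 bits
    return (int(time.time() * 1_000_000) << 16) | (os.getpid() & 0xFFFF)

print(unique_id())
```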
1
0
0
I have a function that rapidly inserts data into a database and I need to also store a unique integer for each function. I was considering using rounded microtime however since there are multiple scripts running at the same time I'm worried about running into conflicts. Can someone recommend a solution?
python - generate a unique integer even if multiple scripts are running simultaneously?
0
0
0
200
15,334,465
2013-03-11T08:54:00.000
0
0
1
0
python,range,modulo,negative-number
55,908,106
5
false
0
0
This is an old question, but for future readers the straightforward solution for single integers is (x + 128)%256 - 128. Replace x with ord(c) if dealing with ASCII character data. The result of the % operator there is congruent (mod 256) to x + 128; and subtracting 128 from that makes the final result congruent to x and shifts the range to [-128, 127]. This will work in a generator expression, and save a step in Python 3 where you'd need to convert a string to a bytes object.
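A quick check of that formula, matching the mapping the questioner asked for:

```python
def to_signed(x):
    return (x + 128) % 256 - 128

assert to_signed(0) == 0
assert to_signed(127) == 127
assert to_signed(128) == -128
assert to_signed(255) == -1

data = "abc\xff"
print([to_signed(ord(c)) for c in data])  # [97, 98, 99, -1]
```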
2
0
0
I would like to convert a list of characters (represented as a single byte, i.e. in the range [0, 255]) to integers in the range [-128, 127]. I've read that Python's modulo operator (%) always returns a number having the same sign as the denominator. What is the right way to do this conversion in Python? EDIT: Characters that map to [128, 255] with ord should be remapped to [-128, -1], with 128 mapped to -128 and 255 mapped to -1. (For the inverse of the conversion I use chr(my_int%256), but my_int can be a negative number.)
How to map characters to integer range [-128, 127] in Python?
0
0
0
1,310
15,334,465
2013-03-11T08:54:00.000
0
0
1
0
python,range,modulo,negative-number
15,334,577
5
false
0
0
I wonder what you mean by "a list of characters"; are they numbers? If so, I think x % 256 - 128 or x % -256 + 128 should work.
2
0
0
I would like to convert a list of characters (represented as a single byte, i.e. in the range [0, 255]) to integers in the range [-128, 127]. I've read that Python's modulo operator (%) always returns a number having the same sign as the denominator. What is the right way to do this conversion in Python? EDIT: Characters that map to [128, 255] with ord should be remapped to [-128, -1], with 128 mapped to -128 and 255 mapped to -1. (For the inverse of the conversion I use chr(my_int%256), but my_int can be a negative number.)
How to map characters to integer range [-128, 127] in Python?
0
0
0
1,310
15,338,324
2013-03-11T12:14:00.000
0
0
1
0
python,latitude-longitude,pyephem
51,375,908
2
false
0
0
There is a difference between the time at which the program calculates the passes over a longitude and the real time. I've checked it against NASA's LIS system (which is on board the ISS), used to find lightning. I discovered that in Europe, on some orbits, the time the program calculates for the pass is 30 seconds ahead of the real time. And in Colombia, on some orbits, it is about 3 minutes ahead (perhaps because 1 degree of longitude in Colombia spans more km than 1 degree of longitude in Europe). But this problem only happens on 2 particular orbits: the one that passes over France and goes down over Sicily, and the one that passes over the USA and goes down over Cuba. How could this be possible? In my opinion, maybe there's some mistake in the ephem.newton algorithm, or with the TLE (normally it reads the one created at 00:00:00 at night when the day changes, and not the current one, even though the ISS publishes 3-4 TLEs per day), or maybe the sat.sublong function calculates a wrong nadir for the satellite. Does anyone have an idea or an explanation for this problem? Why does it happen? PS: I need to check it carefully because I need to know when the ISS crosses an area (to detect the lightning inside the area). And if the time the program calculates on some orbits is ahead of the real time, then the sat.sublong function reports the satellite is outside the area (it calculates that it hasn't arrived at the area yet) while the program shows it's inside the area. So the real time doesn't match the one the program calculates, on some occasions. Thanks a lot for your time!
1
6
0
I am having a hard time figuring out how to calculate when a satellite crosses a specific longitude. It would be nice to be able to provide a time period and a TLE, and to get back all the times at which the satellite crosses a given longitude during the specified time period. Does pyephem support something like this?
Using pyephem to calculate when a satellite crosses a Longitude
0
0
0
1,267
15,341,762
2013-03-11T14:58:00.000
1
0
0
0
python,scientific-computing,pytables
15,347,065
3
true
0
0
It turns out that since all PyTables arrays are simply Numpy arrays underneath, you can do the following: MyPytableFile.root.myPytableArray[:].nbytes
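Note that the [:] slice reads the whole array into memory just to measure it. A sketch of a cheaper variant computed from the node's metadata, with hypothetical file/node names; the .atom attribute is assumed to be available on the Array node:

```python
import numpy as np
import tables  # PyTables

with tables.open_file("data.h5") as f:   # hypothetical file
    arr = f.root.my_array                # hypothetical node
    # elements * bytes per element, without reading any data
    nbytes = int(np.prod(arr.shape)) * arr.atom.itemsize
    print(nbytes)
```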
1
0
0
How can I determine the size (in bytes) of a PyTables Array?
How to determine size (in bytes) of a PyTables array?
1.2
0
0
118
15,342,592
2013-03-11T15:35:00.000
0
0
1
0
python
15,343,646
2
true
0
0
Your answer was a good start, but they probably were looking for more. For example: Is a subclass the appropriate solution in this case? How much of the parent class's functionality will be appropriate for the subclass? Are you subclassing at the correct level of granularity: is your subclass 'final', an endpoint which will represent the end of an inheritance chain, or an intermediate class that will be the parent class for other subclasses to follow? Do you need only one subclass, or perhaps several that reflect different aspects of the parent class's functionality and add their own? What aspects of the new functionality will be implemented in the subclass itself, and how much might be delegated to other classes or subsystems? Sounds simple, but following an appropriate NAMING PROTOCOL for your classes and subclasses is also very important: make sure the name of your subclass correctly describes its functionality and its relationship to the parent class. Some things to research: the GoF book is a tough read, but 'Head First Design Patterns' is more accessible, although sometimes it gets into very elementary details. There may be Python-specific design books out there too, but offhand I'm not familiar with them. Relevant subjects might be: inheritance; object composition/delegation vs. inheritance; abstract classes/interfaces.
1
1
0
I had an interview with one of the companies and they asked me this question: what protocols do you follow while designing subclasses? My answer was: if the subclass shares all the properties and methods of the parent class, and additionally wants to implement its own methods and properties, then implement a subclass. Can someone let me know the list of items that need to be looked into?
Protocol designing subclasses
1.2
0
0
75
15,344,185
2013-03-11T16:55:00.000
1
0
0
0
python,tkinter
15,344,805
1
true
0
1
When you click on an item, you can get the canvas item id of the one that was clicked on. You would then use this information to look up the actual object. You can either iterate over all your objects looking for one with the given id, or you can keep a dictionary which maps ids to objects. Once you have the actual object, you can print the other attributes however you want.
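A Python 3 sketch of the dictionary approach; the Ball class and its attributes are hypothetical stand-ins for the questioner's objects:

```python
import tkinter as tk

class Ball:
    def __init__(self, canvas, x, y):
        self.id = canvas.create_oval(x, y, x + 20, y + 20, fill="red")
        self.name = "ball@(%d,%d)" % (x, y)

root = tk.Tk()
canvas = tk.Canvas(root, width=200, height=200)
canvas.pack()

objects_by_id = {}
for pos in [(20, 20), (80, 60), (140, 100)]:
    ball = Ball(canvas, *pos)
    objects_by_id[ball.id] = ball      # map canvas item id -> object

def on_click(event):
    item = canvas.find_closest(event.x, event.y)[0]  # canvas id under the click
    obj = objects_by_id.get(item)
    if obj is not None:
        print(vars(obj))               # print the object's other attributes

canvas.bind("<Button-1>", on_click)
root.mainloop()
```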
1
0
0
I have a bunch of objects interacting with each other. They all draw ovals that constitute representations of themselves on a Tkinter Canvas, and store the representation's Tkinter id as an attribute. How can I make it so when I click one of the ovals, my program will print the other attributes of the object which that oval represents?
Returning object information after clicking its representation in a Tkinter canvas
1.2
0
0
177
15,345,704
2013-03-11T18:18:00.000
1
0
1
0
python
15,345,780
2
false
0
0
In Windows (for example), you could associate the file type (not one particular file, but all files with a certain extension) with the python script, and then launch the photo viewer (with the filename as argument, so that the application opens this file) from within the python script. This way, both your script and the "main" application will be run when opening the file type. BTW: In your script you could test the filename for certain patterns or substrings, to do different actions with different files of this file type... This solution may not be as smooth as you'd hope, but it probably could do as a "workaround".
1
0
0
How do I bind a Python script to a file so that it runs in the background when the file is opened? Let's say I want to bind a script to a photo; then whenever the photo is opened or closed, the script runs.
Bind and execute python script
0.099668
0
0
616
15,345,864
2013-03-11T18:28:00.000
0
0
0
0
python,database,development-environment
15,346,132
2
false
0
0
Make sure you have a Python program or programs to fill the databases with test data from scratch. That allows each developer to work from a different starting point, but also to test with the same environment.
1
0
0
I'm currently exploring using Python to develop my server-side implementation. I've decided to use SQLAlchemy for database stuff. What I'm not currently too sure about is how it should be set up so that more than one developer can work on the project. For the code it is not a problem, but how do I handle the database modifications? How do the users sync databases, and how should potential data be set up? Should/can each developer use their own sqlite db for development? For production postgresql will be used, but the developers must be able to work offline.
Multi developer environment python and sqlalchemy
0
1
0
195
15,346,908
2013-03-11T19:27:00.000
0
0
0
0
python,networking
17,349,828
2
false
0
0
pytun is not sufficient for this. It serves to connect your Python application to a system network interface. In effect, you become responsible for implementing that system network interface. If you want traffic that is routed over that network interface to traverse an actual network, then it is the job of your Python program to do the actual network operations that move the data from host A to host B. This is probably a lot of work to do well. I suggest you use an existing VPN tool instead.
1
2
0
I need to make a simple p2p vpn app, after a lot of searches I found a tun/tap module for python called PYTUN that is used to make a tunnel. How can I use this module to create a tunnel between 2 remote peers? All the attached doc only show how to make the tunnel interface on your local computer and config it, but it does not mention how to connect it to the remote peer.
how to use the tun/tap python module (pytun) to create a p2p tunnel?
0
0
1
2,900
15,347,136
2013-03-11T19:40:00.000
29
1
1
0
java,python,python-decorators,java-annotations
49,356,738
4
false
1
0
This is a very valid question that anyone dabbling in both these languages simultaneously, can get. I have spent some time on python myself, and have recently been getting myself up to speed with Java and here's my take on this comparison. Java annotations are - just that: annotations. They are markers; containers of additional metadata about the underlying object they are marking/annotating. Their mere presence doesn't change execution flow of the underlying, or doesn't add encapsulation/wrapper of some sort on top of the underlying. So how do they help? They are read and processed by - Annotation Processors. The metadata they contain can be used by custom-written annotation processors to add some auxiliary functionality that makes lives easier; BUT, and again, they NEITHER alter execution flow of an underlying, NOR wrap around them. The stress on "not altering execution flow" will be clear to someone who has used python decorators. Python decorators, while being similar to Java annotations in look and feel, are quite different under the hood. They take the underlying and wrap themselves around it in any which way, as desired by the user, possibly even completely avoiding running the underlying itself as well, if one chooses to do so. They take the underlying, wrap themselves around it, and replace the underlying with the wrapped ones. They are effectively 'proxying' the underlying! Now that is quite similar to how Aspects work in Java! Aspects per se are quite evolved in terms of their mechanism and flexibility. But in essence what they do is - take the 'advised' method (I am talking in spring AOP nomenclature, and not sure if it applies to AspectJ as well), wrap functionality around them, along with the predicates and the likes, and 'proxy' the 'advised' method with the wrapped one. Please note these musings are at a very abstract and conceptual level, to help get the big picture. As you start delving deeper, all these concepts - decorators, annotations, aspects - have quite an involving scope. But at an abstract level, they are very much comparable. TLDR In terms of look and feel, python decorators can be considered similar to Java annotations, but under the hood, they work very very similar to the way Aspects work in Java.
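A minimal illustration of the 'wrap and replace' behaviour this answer describes, not tied to any particular framework:

```python
import functools

def logged(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print("calling", func.__name__)  # extra behaviour around the call
        return func(*args, **kwargs)
    return wrapper

@logged               # 'add' is literally replaced by the wrapped version
def add(a, b):
    return a + b

print(add(2, 3))      # prints "calling add", then 5
```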
1
63
0
Are Python Decorators the same or similar, or fundamentally different to Java annotations or something like Spring AOP, or Aspect J?
Is a Python Decorator the same as Java annotation, or Java with Aspects?
1
0
0
19,358
15,347,414
2013-03-11T19:55:00.000
3
1
1
0
python,csv,unicode,dictionary
15,347,536
1
false
0
0
UnicodeWriter isn't an actual module in any version of Python. The code given in the documentation is an example which you'll have to copy into your own project.
1
2
0
I am trying to write a dictionary containing values with unicode characters to a text file and was thinking of using UnicodeWriter as mentioned in the python csv documentation. But I am unable to import it as the module is not recognized by python. I was wondering whether this is a problem with my version of python? Also if it is not possible to do it this way, is there any way to specify encoding while using the dictWriter class in python.
unable to import UnicodeWriter python 2.7
0.53705
0
0
743
15,348,326
2013-03-11T20:49:00.000
0
0
0
0
php,python,web
15,348,563
1
true
1
0
There is no 'clean' way to do this. You must rely on cURL or file_get_contents() with context options in order to get the data from the URL, and, in order to be notified when the content of the URL changes, you must store snapshots of the page you are watching in a database. Later you compare the new version of the crawled content with the earlier snapshot and, if a change in significant parts of the DOM is detected, that should trigger your mail notifier function.
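A sketch of the snapshot-and-compare loop in Python (the question is tagged both php and python); hashing the whole page body stands in for the 'significant parts of the DOM' comparison, and the URL and interval are hypothetical:

```python
import hashlib
import time
import urllib.request

def snapshot(url):
    return hashlib.sha256(urllib.request.urlopen(url).read()).hexdigest()

url = "https://example.com/listing"  # hypothetical page to watch
last = snapshot(url)
while True:
    time.sleep(300)                  # poll every 5 minutes
    current = snapshot(url)
    if current != last:
        print("page changed")        # send the notification e-mail here
        last = current
```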
1
0
0
I want to write a little program that will give me an update whenever a webpage changes. Like I want to see if there is a new ebay listing under a certain category and then send an email to myself. Is there any clean way to do this? I could set up a program and run it on a server somewhere and have it just poll ebay.com every couple of minutes or seconds indefinitely but I feel like there should be a better way. This method could get dicey too if I wanted to monitor a variety of pages for updates.
Create an listener on a 3rd party's web page
1.2
0
1
68
15,348,710
2013-03-11T21:10:00.000
0
1
0
0
python,http,pdf,web-applications,latex
15,352,337
1
false
1
0
You could send it by email, just like Amazon: send the file to the server, and when it's ready, the server sends it back by email.
1
2
0
I'm building a website using Python which uses LaTeX to generate PDF files. But I want to put most of the website on Google App Engine, and I can't run LaTeX on that. So I want to do the LaTeX part on another server. It seemed like a simple problem at first---I thought the best way to do it would be to POST the LaTeX to the server and have it respond with the PDF. But LaTeX files can take a while to compile sometimes if they're long, so I'm starting to think this isn't the best way to do it. What's the standard way of doing something like this? It must be a pretty common problem.
Web server which generates PDF files
0
0
0
368
15,350,517
2013-03-11T23:21:00.000
5
0
1
0
python,package,standard-library
15,350,580
2
true
0
0
Python packages from outside the standard library are occasionally adopted into the standard library. If there were a naming convention to distinguish them, you'd have to rename the modules, and this would break existing code for no good reason. For example, argparse was not always part of the standard library.
1
1
0
as it is done in Java... This makes the task of asserting if a package/module belongs to the standard library trivial. I'd guess this is just a convention. Are there any plans to change this in the future? Thanks in advance.
Is there a reason why Python standard library packages/modules are not in a package named python?
1.2
0
0
119
15,352,183
2013-03-12T02:21:00.000
2
0
0
0
python,header,report,footer,openerp
15,353,498
1
true
1
0
You can find the header/footer of the RML report in the res_company_view.xml file on the server side. The file path is: server/openerp/addons/base/res/res_company_view.xml And the default value of this header/footer is set from: server/openerp/addons/base/res/res_company.py Regards
1
3
0
I'm currently searching for the RML file generating the header in OpenERP 7. I can't find it... I have found server/openerp/addons/base/report/corporate_defaults.xml, but no... Or maybe there is a cache caching the RML before the report generation? Thanks in advance!
Change openerp 7 header/footer with rml
1.2
0
1
2,744
15,353,057
2013-03-12T03:59:00.000
0
0
0
0
sublimetext2,python-3.3
15,373,407
1
false
0
0
You need to change the Sublime build system for Python. Copy the Python.sublime-build file from the Packages/Python folder to the Packages/User folder. In the file in the User folder, change the cmd option from python to c:/python33/python (where your Python 3.3 executable is located).
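The edited build file might look like this; a sketch only, assuming the interpreter really lives at c:/python33 (adjust the path to your install):

```json
{
    "cmd": ["c:/python33/python", "-u", "$file"],
    "file_regex": "^[ ]*File \"(...*?)\", line ([0-9]*)",
    "selector": "source.python"
}
```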
1
0
0
I am running Windows 7 32-bit and I am using Sublime Text 2. I want to know how I can change the default Python in ST2 to the Python 3.3 I downloaded. Any help would be great, thanks.
python 3.3 on sublime text 2 windows 7
0
0
0
1,111
15,360,961
2013-03-12T12:16:00.000
1
0
1
0
python,string,string-concatenation
61,205,309
10
false
0
0
Do it like this: '-'.join(x for x in (a, b) if x is not None)
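A quick check of that one-liner against the three cases in the question, wrapped in a hypothetical helper:

```python
def join_nonnull(a, b, sep="-"):
    return sep.join(x for x in (a, b) if x is not None)

assert join_nonnull("a", "b") == "a-b"
assert join_nonnull("a", None) == "a"
assert join_nonnull(None, "b") == "b"
```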
1
20
0
Understand "better" as a quicker, elegant and readable. I have two strings (a and b) that could be null or not. And I want concatenate them separated by a hyphen only if both are not null: a - b a (if b is null) b (where a is null)
What better way to concatenate string in python?
0.019997
0
0
8,094
15,366,482
2013-03-12T16:17:00.000
0
0
1
0
python,python-3.x
15,366,529
4
false
0
0
d.popitem() will give you a (key, value) tuple.
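For completeness: popitem() removes the pair from the dict. A non-destructive alternative for a one-item dict:

```python
d = {"only_key": 42}

key, value = next(iter(d.items()))  # leaves d intact
print(key, value)

# or, if consuming the dict is fine:
# key, value = d.popitem()
```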
1
5
0
Let's say I have a dict and I don't know the key/value inside. How do I get this key and value without doing a for loop (there is only one item in the dict)? You might wonder why I am using a dictionary in that case. I have dictionaries all over my API and I don't want the user to be lost; it's only a matter of consistency. Otherwise, I would have used a list and indexes.
How to fetch the key/value pair of a dictionary only containing one item?
0
0
0
5,506
15,367,688
2013-03-12T17:11:00.000
0
0
0
1
python,python-idle
49,071,459
9
false
0
0
Here's a way to reset IDLE's default working directory for MacOS if you launch Idle as an application by double-clicking it. You need a different solution if you launch Idle from a command line in Terminal. This solution is a permanent fix; you don't have to re-change the directory every time you launch IDLE. I wish it were easier. The idea is to edit a resource file inside of the IDLE package in Applications. Start by finding the file. In Finder, go to IDLE in Applications (in the Python folder) as if you wanted to open it. Right click and select "show package contents". Open Contents, then open Resources. In Resources, you'll see a file called idlemain.py. This file executes when you launch idle and sets, among other things, the working directory. We're going to edit that. But before you can edit it, you need to give yourself permission to write to it. To do that, right click on idlemain.py and select Get Info. Scroll to the bottom of the Get Info window and you'll see the Sharing & Permissions section. On the bottom right there's a lock icon. Click the lock and follow the prompts to unlock it. Once it's unlocked, look to the left for the + (under the list of users with permissions). Click it. That will bring up a window with a list of users you can add. Select yourself (probably the name of your computer or your user account) and click Select. You'll see yourself added to the list of names with permissions. Click where it says "Read only" next to your name and change it to "Read & Write". Be careful not to change anything else. When you're done, click the lock again to lock the changes. Now go back to idlemain.py and open it with any text editor (you could use Idle, TextEdit, or anything). Right under the import statement at the top is the code to change the default working directory. Read the comment if you like, then replace the single line of code under the comment with os.chdir('path of your desired working directory'). Mine looks like this: os.chdir('/Users/MyName/Documents/Python') Save your changes (which should work because you gave yourself permission). Next time you start Idle, you should be in your desired working directory. You can check with the following commands: import os os.getcwd()
3
11
0
Is there a configuration file where I can set its default working directory? It currently defaults to my home directory, but I want to set it to another directory when it starts. I know I can do "import os" followed by "os.chdir("")" but that's kind of troublesome. It'd be great if there is a conf file that I can edit and change that setting, but I am unable to find it. In particular, I've looked into my OS (Ubuntu)'s desktop entry '/usr/share/applications/idle-python3.2.desktop', which doesn't contain a conf file, but points to '/usr/lib/python3.2/idlelib/PyShell.py', which points to config-*.def conf files under the same folder, with 'config-main.def' being the most likely candidate. However I am unable to find where the default path is specified or how it can be changed. It seems that the path is hard-coded in PyShell.py, though I could be wrong with my limited knowledge on Python. I will keep looking, but would appreciate it if somebody knows the answer on top of his or her head. Thanks in advance.
Default working directory for Python IDLE?
0
0
0
37,378
15,367,688
2013-03-12T17:11:00.000
-1
0
0
1
python,python-idle
54,316,970
9
false
0
0
This ought to be the number one answer. I have been playing around with this for an hour or more and nothing worked. Paul explains this perfectly; it's just like the PATH statement in Windows. I successfully imported a module by appending my personal "PythonModules" path/dir on my Mac (starting at "/users/etc") and using a simple import xxxx command in Idle.
3
11
0
Is there a configuration file where I can set its default working directory? It currently defaults to my home directory, but I want to set it to another directory when it starts. I know I can do "import os" followed by "os.chdir("")" but that's kind of troublesome. It'd be great if there is a conf file that I can edit and change that setting, but I am unable to find it. In particular, I've looked into my OS (Ubuntu)'s desktop entry '/usr/share/applications/idle-python3.2.desktop', which doesn't contain a conf file, but points to '/usr/lib/python3.2/idlelib/PyShell.py', which points to config-*.def conf files under the same folder, with 'config-main.def' being the most likely candidate. However I am unable to find where the default path is specified or how it can be changed. It seems that the path is hard-coded in PyShell.py, though I could be wrong with my limited knowledge on Python. I will keep looking, but would appreciate it if somebody knows the answer on top of his or her head. Thanks in advance.
Default working directory for Python IDLE?
-0.022219
0
0
37,378
15,367,688
2013-03-12T17:11:00.000
0
0
0
1
python,python-idle
15,367,752
9
false
0
0
It can change depending on where you installed Python. Open up IDLE, import os, then call os.getcwd(); that will tell you exactly which directory IDLE is working in.
3
11
0
Is there a configuration file where I can set its default working directory? It currently defaults to my home directory, but I want to set it to another directory when it starts. I know I can do "import os" followed by "os.chdir("")" but that's kind of troublesome. It'd be great if there is a conf file that I can edit and change that setting, but I am unable to find it. In particular, I've looked into my OS (Ubuntu)'s desktop entry '/usr/share/applications/idle-python3.2.desktop', which doesn't contain a conf file, but points to '/usr/lib/python3.2/idlelib/PyShell.py', which points to config-*.def conf files under the same folder, with 'config-main.def' being the most likely candidate. However I am unable to find where the default path is specified or how it can be changed. It seems that the path is hard-coded in PyShell.py, though I could be wrong with my limited knowledge on Python. I will keep looking, but would appreciate it if somebody knows the answer on top of his or her head. Thanks in advance.
Default working directory for Python IDLE?
0
0
0
37,378
15,368,272
2013-03-12T17:40:00.000
12
0
0
0
python,sockets,air,stdout,stdin
15,369,287
1
false
0
0
On UNIX-like platforms, stdin/stdout can themselves be a socket, so there's no difference. That is, if you launch a process with its stdout redirected, that can be done with socketpair, so making your own UNIX-domain socket to communicate is unnecessary. The Python classes for handling stdin/stdout won't give you access to the full flexibility of the underlying socket, though, so you'll have to set it up yourself, I think, if you want to do a half-close, for example (Python can't offer that cross-platform because sys.stdin on Windows can't be a socket, for example, nor is it always a socket on UNIX). The nice thing about a local TCP connection is that it's cross-platform and gives predictable semantics everywhere. If you want to be able to close your output and still read from the input, for example, it's much simpler to do this sort of thing with sockets, which behave the same everywhere. Even without that, for simplicity's sake, using a TCP socket is always a reasonable way to work around the wacky Windows mess of named pipes, although Python shields you from that reasonably well.
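A small POSIX-only sketch of the socketpair mechanism this answer mentions (it uses os.fork, so it won't run on Windows):

```python
import os
import socket

parent_sock, child_sock = socket.socketpair()

pid = os.fork()
if pid == 0:                       # child process
    parent_sock.close()
    child_sock.sendall(b"hello from the child")
    child_sock.close()
    os._exit(0)

child_sock.close()                 # parent process
print(parent_sock.recv(1024))
os.waitpid(pid, 0)
parent_sock.close()
```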
1
8
0
I have an app that consists of a local "server" and a GUI client. The server is written in Python, while the GUI is meant to be changable, and is written in Flex 4. The client queries the local server for information and displays it accordingly. Both applications are meant to be run on the same computer, and will only communicate locally. At the moment, the client and the Python server communicates via a basic socket, the client writes a request to the socket, and the socket returns some data. However, since I'm writing a desktop app, I thought it might be easier to maintain and to polish a system that uses standard streams instead of a socket. The server would listen for raw_input() continuously, and output according to anything written to stdin, and the client, in this case, would use AIR's NativeProcess class to read and write to stdout and stdin, rather than using sockets. The client and server are separate processes, but are meant to be started at more or less the same time. I don't really have need for complex networking, just local cross-language communication. What are the pros and cons of each approach? What would I gain or lose from using sockets, vs what I would gain or lose from using standard streams to communicate? Which one is more efficient? Which one is easier to maintain?
Sockets vs Standard Streams for local client-server communication
1
0
1
1,968
15,368,366
2013-03-12T17:45:00.000
7
0
1
1
c++,python,c,scripting,lua
15,369,077
1
false
0
0
Reconsider Lua, point by point: (1) Thread friendly: yes. (2) Cheap interpreter-instance creation: yes; Lua does not create any OS threads at all. (3) GC: garbage collection does not start until you've created lots of objects, and you can simply turn it off; to destroy all variables once the script is executed, simply close the state. (4) Fast and low memory footprint: yes. (5) Windows/Linux/MacOS X: yes.
1
0
0
I'm looking for an embedded scripting language. I don't need anything fancy, just basic constructs like conditionals, loops, logic and arithmetic operations etc. I have the following requirements Thread friendly - i.e. without "global interpreter lock" (python is out for this reason) Cheap "interpreter instance" creation - I will have potentially 100s of these. I understand that lua creates a separate gc thread per every Lua_State which means lua is out. No gc or refcounting or any other "on the fly" memory management. It should simply destroy any variables once the script is executed. Again both python and lua are out. And of course it should be fast and have low memory footprint. Should work on windows, GNU/Linux and MacOS X Any help is highly appreciated.
Embedded scripting language with C/C++ API for multithreading environment
1
0
0
338
15,369,985
2013-03-12T19:07:00.000
0
0
0
0
python,numpy,scipy
15,370,151
5
false
0
0
If you don't mind installing additional packages (for both Python and C++), you can use BSON (Binary JSON).
3
6
1
Right now I have a python program building a fairly large 2D numpy array and saving it as a tab delimited text file using numpy.savetxt. The numpy array contains only floats. I then read the file in one row at a time in a separate C++ program. What I would like to do is find a way to accomplish this same task, changing my code as little as possible such that I can decrease the size of the file I am passing between the two programs. I found that I can use numpy.savetxt to save to a compressed .gz file instead of a text file. This lowers the file size from ~2MB to ~100kB. Is there a better way to do this? Could I, perhaps, write the numpy array in binary to the file to save space? If so, how would I do this so that I can still read it into the C++ program? Thank you for the help. I appreciate any guidance I can get. EDIT: There are a lot of zeros (probably 70% of the values in the numpy array are 0.0000) I am not sure of how I can somehow exploit this though and generate a tiny file that my c++ program can read in
python - saving numpy array to a file (smallest size possible)
0
0
0
7,265
15,369,985
2013-03-12T19:07:00.000
1
0
0
0
python,numpy,scipy
15,370,191
5
false
0
0
numpy.ndarray.tofile and numpy.fromfile are useful for direct binary output/input from Python. std::ostream::write and std::istream::read are useful for binary output/input in C++. You should be careful about endianness if the data are transferred from one machine to another.
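A sketch of the round trip on the Python side; tofile writes raw bytes with no header, so the shape is not stored and has to be known (or written separately) when reading back, and the same caveat applies to the C++ reader:

```python
import numpy as np

a = np.random.rand(100, 50)          # float64 by default
a.tofile("data.bin")                 # raw bytes, native endianness, no header

b = np.fromfile("data.bin", dtype=np.float64).reshape(100, 50)
assert np.array_equal(a, b)
```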
3
6
1
Right now I have a python program building a fairly large 2D numpy array and saving it as a tab delimited text file using numpy.savetxt. The numpy array contains only floats. I then read the file in one row at a time in a separate C++ program. What I would like to do is find a way to accomplish this same task, changing my code as little as possible such that I can decrease the size of the file I am passing between the two programs. I found that I can use numpy.savetxt to save to a compressed .gz file instead of a text file. This lowers the file size from ~2MB to ~100kB. Is there a better way to do this? Could I, perhaps, write the numpy array in binary to the file to save space? If so, how would I do this so that I can still read it into the C++ program? Thank you for the help. I appreciate any guidance I can get. EDIT: There are a lot of zeros (probably 70% of the values in the numpy array are 0.0000) I am not sure of how I can somehow exploit this though and generate a tiny file that my c++ program can read in
python - saving numpy array to a file (smallest size possible)
0.039979
0
0
7,265
15,369,985
2013-03-12T19:07:00.000
1
0
0
0
python,numpy,scipy
19,226,920
5
false
0
0
Use an HDF5 file; they are really simple to use through h5py, and you can set a compression flag. Note that HDF5 also has a C++ interface.
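A sketch with h5py; gzip compression should do well on the mostly-zero array described in the question, and the shapes here are hypothetical stand-ins:

```python
import h5py
import numpy as np

a = np.zeros((1000, 500))            # stand-in for the mostly-zero array
a[::10, ::10] = np.random.rand(100, 50)

with h5py.File("data.h5", "w") as f:
    f.create_dataset("arr", data=a, compression="gzip")

with h5py.File("data.h5", "r") as f:
    b = f["arr"][:]
assert np.array_equal(a, b)
```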
3
6
1
Right now I have a python program building a fairly large 2D numpy array and saving it as a tab delimited text file using numpy.savetxt. The numpy array contains only floats. I then read the file in one row at a time in a separate C++ program. What I would like to do is find a way to accomplish this same task, changing my code as little as possible such that I can decrease the size of the file I am passing between the two programs. I found that I can use numpy.savetxt to save to a compressed .gz file instead of a text file. This lowers the file size from ~2MB to ~100kB. Is there a better way to do this? Could I, perhaps, write the numpy array in binary to the file to save space? If so, how would I do this so that I can still read it into the C++ program? Thank you for the help. I appreciate any guidance I can get. EDIT: There are a lot of zeros (probably 70% of the values in the numpy array are 0.0000) I am not sure of how I can somehow exploit this though and generate a tiny file that my c++ program can read in
python - saving numpy array to a file (smallest size possible)
0.039979
0
0
7,265
15,370,703
2013-03-12T19:45:00.000
0
0
1
0
python
15,371,449
3
false
0
0
One of the advantages of Python (compared with languages like C) is that in general you don't have to worry about the details of memory management and can concentrate on the purpose of your program. So my advice would be not to bother unless you have reason to do so (e.g. Python is eating all your RAM).
2
6
0
I have been writing a long script that occasionally builds large dicts and/or lists, and I was wondering if performance could be improved by deleting them with del when I'm finished using them. Or is it normal practice to leave these objects hanging around to be taken care of by garbage collection? What is the best practice here? Thanks.
Periodically delete long dicts / lists in Python good practice?
0
0
0
1,365
15,370,703
2013-03-12T19:45:00.000
3
0
1
0
python
15,370,827
3
false
0
0
del is usually not necessary. In CPython, objects go away when there are no references to them. del deletes only one reference to the object; if there are still other references to the object, it will stay around, and del has really done nothing to improve your memory situation. Conversely, simply reassigning the variable to a different object (say, creating a new empty list at the top of a loop) will also end up with one less reference to the object, implicitly doing the same as del. If there are no other references to the object at this point, it will be freed immediately. Also remember that local variables go away when the function returns, so you don't need to explicitly del any names you've defined in a function. An exception is circular references, where two (or more) objects reference each other but none is actually reachable through a name; Python periodically garbage-collects these, but you can free them up faster if you break the circle when you're done with the objects. (This may require as little as one del!) Circumstances in which this is useful are probably pretty rare, though. In IronPython (which runs on the .NET CLR) or Jython (which runs on the JVM), optimal memory management strategies may differ, since the garbage collector of the underlying VM is used.
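A small illustrative sketch of the circular-reference exception described above (the names are arbitrary):

import gc

a = {}
b = {'other': a}
a['other'] = b      # a and b now reference each other: a cycle

del a, b            # the names are gone, but the cycle keeps both dicts alive
gc.collect()        # the cycle collector reclaims them; breaking the circle first
                    # (e.g. a['other'] = None before the del) frees them sooner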
2
6
0
I have been writing a long script that occasionally builds large dicts and/or lists, and I was wondering if performance could be improved by deleting them with del when I'm finished using them. Or is it normal practice to leave these objects hanging around to be taken care of by garbage collection? What is the best practice here? Thanks.
Periodically delete long dicts / lists in Python good practice?
0.197375
0
0
1,365
15,372,361
2013-03-12T21:20:00.000
3
0
0
0
python,django,content-management-system,django-cms
15,381,624
1
false
1
0
You need to create different page trees per language. Every page has only one template. Use {% trans %} and {% blocktrans %} for translating strings in it, or {% if request.LANGUAGE_CODE == "en" %}. If the templates really differ that much, don't add other languages to pages, but create different page trees with only one language each.
1
0
0
How can I add different base templates for different languages of the same page in django cms? I am trying to set up a page and show it in different languages, and for each language I need to use a different base template. I am completely new to django cms. Please help.
how to add different base templates for different languages of same page in django cms
0.53705
0
0
404
15,372,971
2013-03-12T21:54:00.000
1
0
0
0
python,django,content-management-system,django-cms
15,378,057
1
true
1
0
Use Django internationalization; it is built into Django. First create a "locale" folder in your Django project, then in settings.py include the folder path, e.g. LOCALE_PATHS = ("projectpath/locale",). Add 'django.middleware.locale.LocaleMiddleware' to your middleware in settings.py and set USE_I18N = True in settings.py. After that, include "django.core.context_processors.i18n" in your template context processors. For an HTML file: first load the internationalization template tags, then you can use them on all the static elements of the page. E.g. put {% load i18n %} at the top of your HTML file and use {% trans "put your static text here" %} wherever static text appears on that page. For a template variable in Django you can use {% blocktrans %}this is your translated {{ object }}{% endblocktrans %}. For Django views you have to follow this: from django.utils.translation import ugettext as _ def view(request): output = _("this is translated text") return HttpResponse(output)
1
2
0
What is the best way to make a page available in multiple languages in Django? I have followed the documentation and used LANGUAGES but I can't see the translated page. I am stuck. Should I manage the /en, /de, etc urls by myself? Thanks in advance.
best way to make a page available in multiple languages in django cms
1.2
0
0
735
15,375,832
2013-03-13T02:10:00.000
1
0
0
0
python,client-server,real-time
15,375,850
1
false
0
0
Yes, you can do it with Python. You could do it with PHP, Bash, JavaScript, Ruby, C, C++, C#, Java, Haskell, Go, Assembler, Perl, Pascal, Oberon, Prolog, Lisp, or Caml if you like, too. Most interfaces to sockets fall into one of two categories: blocking interfaces and event-driven interfaces. There is no way to know which is right for your application without knowing what your application does.
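For contrast, a minimal sketch of a blocking-interface TCP echo server in Python; the port is arbitrary and this handles only one client:

import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(('', 9000))
srv.listen(1)
conn, addr = srv.accept()       # blocks until a client connects
while True:
    chunk = conn.recv(4096)     # blocks until data arrives
    if not chunk:
        break
    conn.sendall(chunk)         # echo the data back
conn.close()
srv.close()

An event-driven interface (select, asyncore, Twisted, and the like) inverts this: the program registers interest in sockets and reacts when they become readable.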
1
0
0
I need to create a real-time client-server application (like Dropbox). The client application should listen on one channel of data. Can I do it with Python? What solutions, technologies, and modules exist in Python for this task?
python real-time client-server application
0.197375
0
1
549
15,377,078
2013-03-13T04:27:00.000
1
0
0
1
python,winapi,wine
16,091,620
1
true
0
0
So you're controlling a Windows downloader running in Wine. Is this downloader graphical? Is this icon in a window or what? Assuming yes to both: If your Python application is running natively in *nix (not Wine), the only sure way is to take a screenshot around the cursor and use Optical Character Recognition to recognize the numbers in the image :-). This is because under Wine, not every Windows window is an X11 window. If you are running your application on a Windows version of Python installed in Wine, you're in luck. Tooltips are just windows - you should be able to iterate over all windows in the downloader, and get the contained text.
1
4
0
I'm writing a Python GUI for a downloader on Windows. Currently I can run that application under Wine in some way to download things from the website. I want to write a GUI which calls the downloader so that it's easier for me to use. So one important thing for my GUI is to display the progress. When the downloader is running under Wine, if I move the cursor onto the icon, it will display the progress as a percentage. That's the number I want for my code. So is there any way that I can get that information through some kind of Wine API?
Is it possible to get output from windows application through wine?
1.2
0
0
172
15,377,808
2013-03-13T05:35:00.000
1
0
0
0
python,plone
15,378,810
1
false
1
0
If you are referring to the working-copy-support option: files are excluded from the possibility of being checked out, probably exactly because of the reasons you are raising. Versioning attached files easily bloats the DB, because every version of the file would be kept in the DB, and thus (to answer your question) the object's size would be the sum of all of them. Also, the standard versioning history is not applied to files.
1
0
0
I have a file uploaded to Plone, say of size 1 MB. After checking it out and downloading it, I edit it and then upload and check in. What is the size of the newly uploaded file in the Plone site? Is it the original size plus the added size, or just the size of the new uploaded file?
What is the changed file size in plone during version control?
0.197375
0
0
85
15,380,190
2013-03-13T08:26:00.000
0
0
0
0
python,django,django-manage.py,manage.py
15,380,907
1
false
1
0
I don't know what manage.py tasks is; it must be from some extension you have installed. However, what you are asking doesn't make much sense. An app, at a minimum, has a models.py file, which presumably your templates folder doesn't have. But there is of course no reason at all to treat your templates folder as an app. Put the folder into TEMPLATE_DIRS instead.
1
0
0
I have made my template folder an app in my Python web application and I mentioned it in my INSTALLED_APPS. I also have the init file in the template folder, but when I'm running manage.py tasks, there is an error and manage.py doesn't find this app, even though there is no error when I put the app into the INSTALLED_APPS setting. Any help?
no module found for an installed app
0
0
0
56
15,380,886
2013-03-13T09:04:00.000
3
0
1
0
python,tabs,coding-style,space
15,380,914
5
false
0
0
Get an editor that understands simulated tabs; then it will simulate tabs using 4 spaces. Most editors can do this nowadays. They feel just like tabs but are spaces.
2
3
0
I have downloaded some Python code and would like to edit it. The problem is that I am used to using tabs to make indents, but the file uses four spaces instead, and if a combination of spaces and tabs is used, it visually looks fine but the code generates errors. My question is whether there is a simple way to replace spaces with tabs?
How do I replace four spaces by one tab in a big python codebase?
0.119427
0
0
1,450
15,380,886
2013-03-13T09:04:00.000
2
0
1
0
python,tabs,coding-style,space
15,380,964
5
false
0
0
Use reindent.py. It should be in tools/scripts on your system. In some cases it may not be installed by default and may require the addition of an extra package, e.g. python-examples or python-tools.
2
3
0
I have downloaded some Python code and would like to edit it. The problem is that I am used to using tabs to make indents, but the file uses four spaces instead, and if a combination of spaces and tabs is used, it visually looks fine but the code generates errors. My question is whether there is a simple way to replace spaces with tabs?
How do I replace four spaces by one tab in a big python codebase?
0.07983
0
0
1,450
15,381,092
2013-03-13T09:15:00.000
1
1
0
0
python,amazon-web-services,amazon-sqs,amazon-sns
15,879,297
3
true
0
0
What you laid out will work in theory, but I have moved away from putting messages directly into queues, and instead put those messages into SNS topics and then subscribe the queues to the topics to get them there - this gives you more flexibility to change things down the road without ever touching the code or the servers that are in production. For what you are doing now, the SNS piece is unnecessary, but using it will allow you to change functionality without touching your existing servers down the road. For example: needs change and you want to add a process C that also kicks off every time the 'Start Process' runs on Server B. Right through the AWS SNS console you could direct a second copy of the message to another queue that previously did not exist, and set up a server C that polls from that queue (a fan-out pattern). Also, what I often like to do during initial rollout is add notifications to SNS so I know what's going on, i.e. every time the 'start process' event occurs, I subscribe my cell phone (or email address) to the topic so I get notified - I can monitor in real time what is (or isn't) happening. Once a period of time has gone by after a production deployment, I can go into the AWS console and simply unsubscribe my email/cell from the process - without ever touching any servers or code.
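A hedged sketch of that fan-out wiring using the later boto3 API; the topic and queue names are placeholders, and in practice the queue also needs a policy allowing SNS to deliver to it:

import boto3

sns = boto3.client('sns')
sqs = boto3.client('sqs')

topic_arn = sns.create_topic(Name='start-process')['TopicArn']
queue_url = sqs.create_queue(QueueName='server-b-jobs')['QueueUrl']
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=['QueueArn'])['Attributes']['QueueArn']

# subscribe the queue to the topic; a second queue for a new "process C" later
# needs only another subscribe call, not a code change on the publisher
sns.subscribe(TopicArn=topic_arn, Protocol='sqs', Endpoint=queue_arn)
sns.publish(TopicArn=topic_arn, Message='Start Process')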
3
2
0
I want to co-ordinate telling Server B to start a process from Server A, and then when it's complete, run an import script on Server A. I'm having a hard time working out how I should be using SQS correctly in this scenario. Server A: Main Dedicated Server Server B: Cloud Process Server Server A sends message to SQS via SNS to say "Start Process" Server B constantly polls SQS for "Start Process" message Server B finds "Start Process" message on SQS Server B runs "process.sh" file Server B completes running "process.sh" file Server B removes "Start Process" from SQS Server B sends message to SQS via SNS to say "Start Import" Server A constantly polls SQS for "Start Import" message Server A finds "Start Import" message on SQS Server A runs import.sh Server A completes running "import.sh" Server A removes "Start Import" from SQS Is this how SQS should be used or am I missing the point completely?
How should Amazon SQS be used? Import / Process Scenario
1.2
0
0
1,359
15,381,092
2013-03-13T09:15:00.000
1
1
0
0
python,amazon-web-services,amazon-sqs,amazon-sns
15,397,063
3
false
0
0
Well... SQS does not support message routing. In order to assign messages to server A or B, one of the available solutions is to create SNS topics "server a" and "server b". These topics should put messages onto SQS queues, which your application will poll. It is also possible to implement a web hook: a subscriber to SNS events that analyzes the message and does a callback to your application.
3
2
0
I want to co-ordinate telling Server B to start a process from Server A, and then when it's complete, run an import script on Server A. I'm having a hard time working out how I should be using SQS correctly in this scenario. Server A: Main Dedicated Server Server B: Cloud Process Server Server A sends message to SQS via SNS to say "Start Process" Server B constantly polls SQS for "Start Process" message Server B finds "Start Process" message on SQS Server B runs "process.sh" file Server B completes running "process.sh" file Server B removes "Start Process" from SQS Server B sends message to SQS via SNS to say "Start Import" Server A constantly polls SQS for "Start Import" message Server A finds "Start Import" message on SQS Server A runs import.sh Server A completes running "import.sh" Server A removes "Start Import" from SQS Is this how SQS should be used or am I missing the point completely?
How should Amazon SQS be used? Import / Process Scenario
0.066568
0
0
1,359
15,381,092
2013-03-13T09:15:00.000
3
1
0
0
python,amazon-web-services,amazon-sqs,amazon-sns
15,391,518
3
false
0
0
I'm almost sorry that Amazon offers SQS as a service. It is not a "simple queue", and probably not the best choice in your case. Specifically: it has abysmal performance in low-volume messaging (some messages will take 90 seconds to arrive); message order is not preserved; it is fond of delivering messages more than once; and they charge you for polling. The good news is it scales well. But guess what, you don't have a scale problem, so dealing with the quirky behavior of SQS is just going to cause you pain for no good reason. I highly recommend you check out RabbitMQ; it is going to behave exactly like you want a simple queue to behave.
3
2
0
I want to co-ordinate telling Server B to start a process from Server A, and then when it's complete, run an import script on Server A. I'm having a hard time working out how I should be using SQS correctly in this scenario. Server A: Main Dedicated Server Server B: Cloud Process Server Server A sends message to SQS via SNS to say "Start Process" Server B constantly polls SQS for "Start Process" message Server B finds "Start Process" message on SQS Server B runs "process.sh" file Server B completes running "process.sh" file Server B removes "Start Process" from SQS Server B sends message to SQS via SNS to say "Start Import" Server A constantly polls SQS for "Start Import" message Server A finds "Start Import" message on SQS Server A runs import.sh Server A completes running "import.sh" Server A removes "Start Import" from SQS Is this how SQS should be used or am I missing the point completely?
How should Amazon SQS be used? Import / Process Scenario
0.197375
0
0
1,359
15,381,202
2013-03-13T09:20:00.000
0
0
0
0
php,python,security
15,381,241
5
false
0
0
.htaccess, chmod, or you could use a key defined by yourself... You have several possibilities. Edit: Anyway, if the file only contains a function, nobody can use it from an external HTTP request unless you actually call it in this file: function();
1
0
0
I have two Python files that I want to prevent anyone from executing unless it's from the server itself. The files implement a function that increases an amount of money to a user. What I need is to make this file not public to the web, so that if someone tries to call this file, the file would refuse this request, unless the call is from the server. Does anyone know how I can do this? My first idea was to check for the IP address but a lot of people can spoof their IP. Example Let's say I have this file: function.py, a function in this file will accept a new amount of money and increase the appropriate balance in the database. When someone tries to post data to this file, and this person is outside the server (lets say from 244.23.23.0) the file will be in-accessible. Whereas, calling the function from the server itself will be accepted. So files can access other files on the server, but external users cannot, with the result that no one can execute this file unless it's called from the server. This is really important to me, because it's related to real money. Also, the money will come from PayPal IPN. And actually, if there was a way to prevent access unless it was coming from PayPal, that would be an amazing way to secure the app. OK, as far as what I have tried: Put the database in a cloud SQL using Google [https://developers.google.com/cloud-sql/] Try to check the IP of the incoming request, in the file Thanks for any and all help.
Restrict execution of Python files only to server (prevent access from browser)
0
0
1
538
15,381,414
2013-03-13T09:31:00.000
3
0
0
0
python,websocket
38,860,584
2
false
0
0
@ThiefMaster got it perfect. But if you want to add custom headers, you can do that with the argument header instead of headers. Hope this helps
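A hedged sketch using the websocket-client package; the URL and header value are placeholders:

import websocket

ws = websocket.create_connection(
    'ws://example.com/socket',
    header=['X-Abc-Def: xxxxx'],    # note: header (a list of strings), not headers
)
ws.send('hello')
print(ws.recv())
ws.close()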
1
3
0
How do I send custom headers in the first handshake that occurs in the WebSocket protocol? I want to use a custom header in my initial request: "X-Abc-Def: xxxxx". The WebSocket clients are Python & Android clients.
Sending custom headers in websocket handshake
0.291313
0
1
9,510
15,382,094
2013-03-13T10:02:00.000
0
0
0
1
python,google-app-engine,rest,mobile,oauth
15,399,334
2
true
1
0
I see you say you're not using Endpoints, but not why. It's likely the solution you want, as it's designed precisely for same-party (i.e. you own the backend and the client application) use cases, which is exactly what you've described.
1
2
0
I have a web server at Google App Engine and I want to protect my data. My mobile app will access the server to get this data. The idea is to authenticate my app with OAuth when it requests some data via REST. After the first authentication, the app will always be able to access the content. I don't want users' data, such as a Google Account or Facebook; my mobile app will assume the role of the user to my services. Is it possible? Does someone have another idea for creating this structure? I'm not using Google Endpoints, and my GAE app is developed with Python. Thank you in advance! Regards, Mario Jorge Valle
OAuth to authenticate my app and allow it to access data at Google App Engine
1.2
0
0
556
15,383,666
2013-03-13T11:12:00.000
0
0
1
0
python,tar,vtk
72,487,851
5
false
0
0
MacOS only: the following steps worked for me: brew install vtk, then pip install vtk. This installs [email protected]. You may want to brew install [email protected] instead. Tested on: Python 3.9.13, macOS 12.4 Monterey.
1
18
0
I'm trying to install VTK module for python, I am however unsuccesful in doing so. I have downloaded a VTK tar-file, but I'm unable to extract it. I'm capable of extracting other tar-files, so there must be something specific with this file I suppose. This is my error: gzip: stdin: invalid compressed data--format violated tar: Child returned status 1 tar: Error is not recoverable: exiting now I hope somebody can help me with this.
Installing VTK for Python
0
0
0
62,032
15,388,961
2013-03-13T15:04:00.000
2
0
0
0
python,clang,llvm-clang
18,359,608
1
true
0
1
For RECORD, the function get_declaration() points to the declaration of the type (a union, enum, struct, or typedef); getting the spelling of the node returns its name. (Obviously, take care between differentiating the TYPEDEF_DECL vs. the underlying declaration kind.) For FUNCTIONPROTO, you have to use a combination of get_result() and get_arguments().
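A hedged outline with the clang Python bindings; attribute availability has varied across binding versions, so treat this as a sketch rather than a guaranteed API:

import clang.cindex

index = clang.cindex.Index.create()
tu = index.parse('header.h')          # hypothetical header to inspect

for node in tu.cursor.walk_preorder():
    t = node.type
    if t.kind == clang.cindex.TypeKind.RECORD:
        # the struct/union name comes from the type's declaration cursor
        print('record:', t.get_declaration().spelling)
    elif t.kind == clang.cindex.TypeKind.FUNCTIONPROTO:
        ret = t.get_result().spelling
        args = [a.spelling for a in t.argument_types()]
        print('prototype: %s (*)(%s)' % (ret, ', '.join(args)))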
1
4
0
I am writing a Python script (using the Python clang bindings) that parses C headers and extracts info about functions: name, return types, argument types. I have no problem extracting the function name, but I can't find a way to convert a clang.cindex.Type to a C type string (e.g. clang.cindex.TypeKind.UINT to unsigned int). Currently, as a temporary solution, I have a dictionary clang.cindex.TypeKind -> C type string and code to process pointers and const qualifiers, but I haven't found a way to extract structure names. Is there a generic way to get the C definition of a clang.cindex.Type? If there isn't, how can I get the C type string for clang.cindex.TypeKind.RECORD and clang.cindex.TypeKind.FUNCTIONPROTO types?
Extract type string with Clang bindings
1.2
0
0
1,599
15,391,618
2013-03-13T16:55:00.000
2
0
0
0
javascript,jquery,python,html,intranet
15,391,677
2
false
1
0
Yes. Any page that the user can browse to normally can be loaded in an iframe.
2
0
0
This is not so much a specific question, but more a general one. I'm redoing one of my old projects, and to make it more user friendly I want the following: the project will be running on my home server, with a Flask/Python backend. Users of my website will be coming from my company's intranet. Would it be possible to load an intranet page in an iframe on my website? So in short, is it possible to load an intranet page from an internet page that has no access to said intranet?
Internet app calls intranet page
0.197375
0
1
301
15,391,618
2013-03-13T16:55:00.000
3
0
0
0
javascript,jquery,python,html,intranet
15,391,680
2
true
1
0
Of course you can load it in an iframe, you don't need access to the page from the internet for that - the client needs it. Yet, the intranet application might request not to be viewed in a frame.
2
0
0
This is not so much a specific question, but more a general one. I'm redoing one of my old projects, and to make it more user friendly I want the following: the project will be running on my home server, with a Flask/Python backend. Users of my website will be coming from my company's intranet. Would it be possible to load an intranet page in an iframe on my website? So in short, is it possible to load an intranet page from an internet page that has no access to said intranet?
Internet app calls intranet page
1.2
0
1
301
15,393,202
2013-03-13T18:10:00.000
1
0
1
1
python,python-2.7,py2exe
15,393,682
2
false
0
0
Yep, the path to the file gets passed in as an argument and can be accessed via sys.argv[1].
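A minimal sketch of the receiving side; once compiled with py2exe, "Open with" launches the program with the chosen file's path as the first argument:

import sys

if len(sys.argv) > 1:
    path = sys.argv[1]              # the file the user right-clicked
    with open(path) as f:
        print(f.read())
else:
    print('no file was passed')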
1
0
0
I wonder how the Windows "Open file with..." feature works. Or rather, how it would work if I write a program in Python, compile an executable with py2exe, and then want to be able to open certain files in that program by right-clicking and choosing it in "Open with". Is the file simply passed as an argument, like "CMD>C:/myapp.exe file"?
Windows "open with" python py2exe application
0.099668
0
0
262
15,393,622
2013-03-13T18:32:00.000
1
0
0
1
python,linux
15,393,788
2
false
0
0
Add the program to a directory on your PATH (such as /usr/local/bin); that is where Linux will search for the command.
1
1
0
I am new to Linux. I made a Python script that takes two inputs: input 1 is a directory path (e.g. ~/home/user/apps) and input 2 is a file path, where the file contains a pattern on each line. The output of the script is all the files that match the pattern and are in the input directory or its subdirectories. Now, using this Python script, I want to make a command in Linux like: core_dump@core_dump-VPCCB15FG:~/python$ search directory_path file_path
How to make your own commands
0.099668
0
0
240
15,394,347
2013-03-13T19:10:00.000
0
0
1
0
python-sphinx,restructuredtext,sections,cross-reference
66,114,569
6
false
0
0
I was struggling to make this work and I found out that the actual notation is :ref:`{dir-path}/Installation:Homebrew`, where {dir-path} is the relative path to Installation.rst from where conf.py lives.
1
138
0
How to insert a cross-reference in a reST/Sphinx page to either a sub-header or anchor in another page in the same documentation set?
Adding a cross-reference to a subheading or anchor in another page
0
0
0
83,618
15,397,024
2013-03-13T21:40:00.000
4
1
1
0
python,django,performance,apache,redhat
15,397,078
1
true
0
0
If you compile with the exact same flags that were used to compile the RPM version, you will get a binary that's exactly as fast. And you can get those flags by looking at the RPM's spec file. However, you can sometimes do better than the pre-built version. For example, you can let the compiler optimize for your specific CPU, instead of for "general 386 compatible" (or whatever the RPM was optimized for). Of course if you don't know what you're doing (or are doing it on purpose), it's always possible to build something slower than the pre-built version, too. Meanwhile, 2.7.3 is faster in a few areas than 2.6.6. Most of them usually won't affect you, but if they do, they'll probably be a big win. Finally, for the vast majority of Python code, the speed of the Python interpreter itself isn't relevant to your overall performance or scalability. (And when it is, you probably want to try PyPy, Jython, or IronPython to replace CPython.) This is especially true for a WSGI service. If you're not doing anything slow, Apache will probably be the bottleneck. If you are doing anything slow, it's probably something I/O bound and well outside of Python's control (like reading files). Ultimately, the only way you can know how much gain you get is by trying it both ways and performance testing. But if you just want a rule of thumb, I'd say expect a 0% gain, and be pleasantly surprised if you get lucky.
1
1
0
I would like to know if there are any documented performance differences between a Python interpreter that I can install from an rpm (or using yum) and a Python interpreter compiled from sources (with a priori well set flags for compilations). I am using a Redhat 6.3 machine as Django/Apache/Mod_WSGI production server. I have already properly compiled everything in different setups and in different orders. However, I usually keep the build-dev dependencies on such machine. For some various ego-related (and more or less practical) reasons, I would like to use Python-2.7.3. By default, Redhat comes with Python-2.6.6. I think I could go with it but it would hurt me somehow (I would have to drop and find a replacement for a few libraries and my ego). However, besides my ego and dependencies, I would like to know what would be the impact in terms of performance for a Django server.
Performance differences between python from package and python compiled from source
1.2
0
0
139
15,399,444
2013-03-14T01:10:00.000
1
0
1
1
python,python-idle
61,899,486
3
false
0
0
If what you mean is executing in Python IDLE's interactive shell instead of the command prompt or command line, then I usually use this approach: python -m idlelib.idle -r "C:/dir1/dir2/Your script.py" It works well for me. Tested on Windows 10, Python 3.7.3. Please ensure that you have added your desired Python version to your environment variables.
1
5
0
In a bash shell, I can use 'bash <script>' or 'source <script>' to invoke a script by hand. Can I do a similar thing in the Python IDLE's interactive shell? I know I can go to File >> Open Module, then run it in a separate window, but that's troublesome.
How to run a Python script from IDLE command line?
0.066568
0
0
16,394
15,400,362
2013-03-14T02:55:00.000
0
0
1
0
python,pyscripter
15,400,730
1
false
0
0
Not sure either. Maybe PyScripter just does something weird and dangerous when it tries to find the python.exe. I recommend finding another IDE. I personally use the PyDev plugin for Eclipse. I love it because I can also use other plugins and tooling already in my Eclipse environment (like Mercurial version control). That is what I have the most success with.
1
0
0
My first exposure to Python; maybe normal? Have tried Python 2.7.3 from both Python.org & ActivePython (2.7.2.5). When I run PyScripter 2.5.3, AVG interrupts and objects to python.exe. Have tried to reinstall python in several different folders and sometimes Windows Firewall pops up before AVG. Running python.exe directly does not trigger AVG or Windows Firewall. Not sure what's going on here.
AVG & Windows Firewall block python 2.7 using PyScripter 2.5.3 (32-bit)
0
0
0
486
15,402,355
2013-03-14T06:08:00.000
1
0
0
0
python,config,pyramid
15,411,610
2
false
1
0
Yes, a proper reverse proxy will forward along the appropriate headers to your wsgi server. See the pyramid cookbook for an nginx recipe.
1
2
0
Let's say my app is served at the domain www.example.com. How (where?) should I specify this in the Pyramid configuration file so that functions like request.route_url would automatically pick it and generate the correct URL. (I think [server:main] is not the place for this)
Pyramid: How to specify the base URL for the application
0.099668
0
0
1,246
15,405,082
2013-03-14T09:08:00.000
1
1
0
0
python,audio,pygame,sudo,raspberry-pi
15,428,577
1
true
0
1
The issue was the sudo command changing the directory in which the script was being run - so running python with sudo -s or simply using an absolute path for the sound fixed it.
1
0
0
I have written a very simple script for my raspberry pi that loads an uncompressed WAV and plays it - however when I run the script as root (to be able to use GPIO and ServoBlaster), there is no sound output. I have set the default audio device to a USB sound card, and this works - I have tested this using aplay fx.wav. Running the pygame script without sudo, the sound plays fine. What is going on here?
When running a pygame script as root, no sound is output?
1.2
0
0
1,087
15,406,507
2013-03-14T10:19:00.000
11
0
0
0
python,html,django,templates
15,406,550
2
false
1
0
Just create a 404.html file in your project's root level templates directory.
1
5
0
How do I customize the Page not found (404) error page in Django, and where do I put my HTML for this page?
How to customize Page not found (404) in django?
1
0
0
3,695
15,407,354
2013-03-14T10:59:00.000
2
0
0
1
python,linux,ip,mac-address,arp
15,407,559
3
false
0
0
You can read from /proc/net/arp and parse the content; that will give you pairs of known IP and MAC addresses. The gateway is probably known at all times; if not, you should ping it, and an ARP request will be automatically generated. You can find the default gateway in /proc/net/route.
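A hedged sketch combining the two files; it assumes IPv4, a little-endian host, and that the gateway already has an ARP entry (ping it first if not):

import socket
import struct

def default_gateway():
    with open('/proc/net/route') as f:
        next(f)                              # skip the header line
        for line in f:
            fields = line.split()
            if fields[1] == '00000000':      # destination 0.0.0.0 = default route
                # the gateway column is a hex IP in host byte order
                return socket.inet_ntoa(struct.pack('<L', int(fields[2], 16)))

def mac_for_ip(ip):
    with open('/proc/net/arp') as f:
        next(f)                              # skip the header line
        for line in f:
            fields = line.split()
            if fields[0] == ip:
                return fields[3]             # the HW address column

print(mac_for_ip(default_gateway()))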
2
3
0
Is there any quick way in Python to get the MAC address of the default gateway? I can't make any ARP requests from the Linux machine I'm running my code on, so it has to come directly from the ARP table.
Python: get MAC address of default gateway
0.132549
0
1
5,009
15,407,354
2013-03-14T10:59:00.000
2
0
0
1
python,linux,ip,mac-address,arp
15,407,512
3
false
0
0
Are you using Linux? You could parse the /proc/net/arp file. It contains the HW address of your gateway.
2
3
0
Is there any quick way in Python to get the MAC address of the default gateway? I can't make any ARP requests from the Linux machine I'm running my code on, so it has to come directly from the ARP table.
Python: get MAC address of default gateway
0.132549
0
1
5,009
15,407,510
2013-03-14T11:07:00.000
0
0
0
0
python,pyro
15,440,686
2
false
0
0
Here is my solution to my problem: since I didn't really need to get the specific addresses of clients using Pyro, just to distinguish clients in a specific subnet (in my classroom) from other IPs (students working from home), I just started two Pyro clients on two ports and then filtered traffic using a firewall.
1
1
0
To cut a long story short: is there any way to get the IP of an untrusted client calling a remote proxy object, using Pyro4? So person A is calling my object from IP 123.123.123.123 and person B from IP 111.111.111.111; is there any way to distinguish them in Pyro4, assuming that they cannot be trusted enough to submit their own IP?
Get caller's IP in Pyro4 application
0
0
1
354
15,408,255
2013-03-14T11:41:00.000
0
0
0
0
python,django
15,408,439
3
true
1
0
django_roa is not yet compatible with django 1.5. I'm afraid it only works with django 1.3.
2
5
0
I'm using Django and I'm trying to set up django-roa, but when I try to start my webserver I get this error: cannot import name LOOKUP_SEP. If I remove django_roa from my INSTALLED_APPS it's okay, but I want django-roa working and I don't know how to resolve this problem. And I don't know what kind of detail I can give to help find a solution. Thanks
cannot import name LOOKUP_SEP
1.2
0
0
2,388
15,408,255
2013-03-14T11:41:00.000
0
0
0
0
python,django
18,352,809
3
false
1
0
I downgraded from 1.5.2 to 1.4.0 and my app started working again. Via pip: pip install django==1.4 Hope that helps.
2
5
0
I'm using Django and I'm trying to set up django-roa, but when I try to start my webserver I get this error: cannot import name LOOKUP_SEP. If I remove django_roa from my INSTALLED_APPS it's okay, but I want django-roa working and I don't know how to resolve this problem. And I don't know what kind of detail I can give to help find a solution. Thanks
cannot import name LOOKUP_SEP
0
0
0
2,388
15,411,498
2013-03-14T14:04:00.000
1
0
0
0
python,json,heroku,flask,gunicorn
16,584,784
1
false
1
0
I know I may be considered a little off the wall here, but there is another option. We know that from time to time there is a bug that happens in transit. We know that there is not much we can do right now to stop the problem. If you are only providing the API, then stop reading; however, if you write the client too, keep going. The error is a known case with a known cause. The result of an empty return value means that something went wrong, even though the value is available and was fetched, calculated, whatever... My instinct as a developer would be to treat an empty result as an HTTP error and request that the data be resent. You could then track the resend requests and see how often this happens. I would suggest (although you strike me as the kind of developer to think of this too) that you count the requests and set a sane value for responding "network error" to the user. My instinct would be to retry right away and then to wait a little while before retrying some more. From what you describe, the first retry would probably pick up the data properly. Of course this could mean keeping older requests hanging about in cache for a few minutes, or running the request a second time, depending on what seems most appropriate. This would also route around any number of other point-to-point networking errors and leave the app far more robust, even in the face of connectivity problems. I know our instinct as developers is to fix the known fault, but sometimes it is better to work towards a system that is able to operate despite faults. That said, it never hurts to log errors and problems and try to fix them anyway.
1
78
0
I am running a Flask/Gunicorn Python app on a Heroku Cedar dyno. The app returns JSON responses to its clients (it's an API server, really). Once in a while clients get 0-byte responses. It's not me returning them, however. Here is a snippet of my app's log: Mar 14 13:13:31 d.0b1adf0a-0597-4f5c-8901-dfe7cda9bce0 app[web.1] [2013-03-14 13:13:31 UTC] 10.104.41.136 apisrv - api_get_credits_balance(): session_token=[MASKED] The first line above is me starting to handle the request. Mar 14 13:13:31 d.0b1adf0a-0597-4f5c-8901-dfe7cda9bce0 app[web.1] [2013-03-14 13:13:31 UTC] 10.104.41.136 apisrv 1252148511 api_get_credits_balance(): returning [{'credits_balance': 0}] The second line is me returning a value (to Flask -- it's a Flask "Response" object). Mar 14 13:13:31 d.0b1adf0a-0597-4f5c-8901-dfe7cda9bce0 app[web.1] "10.104.41.136 - - [14/Mar/2013:13:13:31] "POST /get_credits_balance?session_token=MASKED HTTP/1.1" 200 22 "-" "Appcelerator Titanium/3.0.0.GA (iPhone/6.1.2; iPhone OS; en_US;)" The third line is Gunicorn's, where you can see that Gunicorn got the 200 status and a 22-byte HTTP body ("200 22"). However, the client got 0 bytes. Here is the Heroku router log: Mar 14 13:13:30 d.0b1adf0a-0597-4f5c-8901-dfe7cda9bce0 heroku[router] at=info method=POST path=/get_credits_balance?session_token=MASKED host=matchspot-apisrv.herokuapp.com fwd="66.87.116.128" dyno=web.1 queue=0 wait=0ms connect=1ms service=19ms status=200 bytes=0 Why does Gunicorn return 22 bytes, but Heroku sees 0, and indeed passes back 0 bytes to the client? Is this a Heroku bug?
Heroku truncates HTTP responses?
0.197375
0
0
2,763
15,411,967
2013-03-14T14:25:00.000
1
0
1
0
python,ipython,ipython-notebook
60,176,426
14
false
0
0
Assuming you have control of the Jupyter notebook, you could set an environment value in a cell and use it as a flag in your code, then place a unique comment in that cell (or in all cells you want to exclude): # exclude_from_export %set_env is_jupyter=1 Then export the notebook as a Python script to be used in a different context; the export will exclude the commented cell(s) and consequently the code that sets the environment value. Note: replace your_notebook.ipynb with the name of your actual notebook file. jupyter nbconvert --to script --RegexRemovePreprocessor.patterns="['^# exclude_from_export']" your_notebook.ipynb This will generate a file that does not have the Jupyter environment flag set, allowing code that uses it to execute deterministically.
1
116
0
I have some Python code example I'd like to share that should do something different if executed in the terminal Python / IPython or in the IPython notebook. How can I check from my Python code if it's running in the IPython notebook?
How can I check if code is executed in the IPython notebook?
0.014285
0
0
51,939
15,421,424
2013-03-14T22:28:00.000
1
0
1
0
python
15,421,464
5
false
0
0
If you do for line in filehandle: it will iterate over each line. If you keep a flag that is set to true whenever the previous line is blank, you can skip the next non-blank line and then reset the flag.
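A sketch of that flag approach; 'doc.txt' is a placeholder filename, and the blank lines themselves are kept:

kept = []
skip_next = False
with open('doc.txt') as fh:
    for line in fh:
        if not line.strip():        # blank line: arm the flag, keep the blank
            skip_next = True
            kept.append(line)
        elif skip_next:
            skip_next = False       # first text line after a blank run: pop it
        else:
            kept.append(line)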
1
1
0
What I am trying to do is go through a document line by line, find each blank line, keep traversing until I hit the next line of text, and pop that line. So for example, what I want to do is this: Paragraph 1 This is a line. This is another line. Here is a line after a space, which I want to pop! Here is the next line, which I want to keep. Here is another line I want to pop. So it will go through each run of blank lines until it hits the next sentence, pop that sentence only, then continue on. I am thinking I should use re.split('\n'), but I am not sure. I am sorry I have no code to post, but I really don't know where to start; any help would be much appreciated, thank you! This is part of a larger piece of code, which I've worked on for days and days and have figured out up to this point, so I have done the bulk of the work.
Search for every blank line in document, and pop first line after that?
0.039979
0
0
1,252
15,423,157
2013-03-15T01:24:00.000
1
0
1
0
python
15,423,211
3
false
0
0
When a function makes a recursive call, control passes into the called function. When a function returns, control passes out of that function to the one that called it. This is how an interactive debugger describes what is happening: step in to a function, step over each statement, step out of the function. The usual bookkeeping for function calls is the structure called a stack. We're supposed to imagine a stack of plates that rest on a spring. Each invocation (call) of a function is one more plate "pushed" onto the "top" of the stack. Each return from a function "pops" that invocation off the stack.
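A tiny example to trace: each return hands a value back to the call one level down the stack:

def factorial(n):
    if n <= 1:
        return 1                   # base case: the first value handed back
    return n * factorial(n - 1)    # waits for the inner call's return value,
                                   # multiplies, then returns to its own caller

print(factorial(4))                # 4 * (3 * (2 * 1)) = 24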
1
0
0
I don't get how the return statements work in any recursive function (in python). Can someone please give me a few basic examples as to what's going on, when you're returning "stuff" in a recursive function?
How does the return function work in recursion?
0.066568
0
0
115
15,423,863
2013-03-15T02:49:00.000
0
0
0
0
python,django
15,423,962
4
false
1
0
Make sure that the folder is included in the TEMPLATE_DIRS setting (in settings.py).
2
0
0
I have the django app structure like myproject/myapp myproject/myapp/static/site/app/index.html I want to include that template in my file like this: url(r'^public$', TemplateView.as_view(template_name="/static/site/app/index.html")), but it says template not found.
How can i include my template from the static folder in django
0
0
0
519
15,423,863
2013-03-15T02:49:00.000
0
0
0
0
python,django
15,427,726
4
false
1
0
By default Django checks in the app/templates directory for a template match, or in project_root/templates/. You should have the directory listed in the TEMPLATE_DIRS setting, as others have stated.
2
0
0
I have the django app structure like myproject/myapp myproject/myapp/static/site/app/index.html I want to include that template in my file like this: url(r'^public$', TemplateView.as_view(template_name="/static/site/app/index.html")), but it says template not found.
How can i include my template from the static folder in django
0
0
0
519
15,425,492
2013-03-15T05:37:00.000
3
0
0
0
python,pandas
15,425,560
3
false
0
0
There isn't, but if you want to only apply to unique values, just do that yourself. Get mySeries.unique(), then use your function to pre-calculate the mapped alternatives for those unique values and create a dictionary with the resulting mappings. Then use pandas map with the dictionary. This should be about as fast as you can expect.
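A hedged sketch of that pattern, where guess_country is the asker's own function and the column name is a placeholder:

import pandas as pd

countries = df['country']                     # the messy Series (df assumed)
mapping = {name: guess_country(name)          # one slow call per *unique* value
           for name in countries.unique()}
cleaned = countries.map(mapping)              # fast dict lookup across all rows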
1
0
1
I am using a rather large dataset of ~37 million data points that are hierarchically indexed into three categories: country, productcode, year. The country variable (which is the country name) is rather messy data, consisting of items such as 'Austral', which represents 'Australia'. I have built a simple guess_country() that matches letters to words and returns a best guess and confidence interval from a known list of country_names. Given the length of the data and the nature of the hierarchy, it is very inefficient to use .map() on the Series country. [The guess_country function takes ~2ms / request] My question is: Is there a more efficient .map() which takes the Series and performs the map on only unique values? (Given there are a LOT of repeated country names)
Pandas: More Efficient .map() function or method?
0.197375
0
0
2,544
15,430,595
2013-03-15T10:57:00.000
34
0
1
0
python,eclipse,pydev
33,384,417
8
false
0
0
Scrolling didn't work for me after updating to Ubuntu 15.10 (Eclipse 4.5.1, and PyDev 4.4.0). Resolved after setting: Preferences --> PyDev --> Editor --> Overview Ruler Minimap --> Show vertical scrollbar
6
10
0
I updated today to the latest version of PyDev (2.7.2); everything went smoothly. I restarted Eclipse, and then the vertical scroll in the PyDev editor stopped working. The scrollbar is moving, but the text is not scrolling. The horizontal scroll works, though. This happens only with Python files. When I open a text file or some other kind of file (not .py), scrolling works fine. Does anyone have any idea why this is happening and maybe how this can be fixed?
Vertical scroll not working in Eclipse/PyDev
1
0
0
4,120
15,430,595
2013-03-15T10:57:00.000
6
0
1
0
python,eclipse,pydev
15,432,648
8
false
0
0
Same thing happened to me. For now I uninstalled the latest version and reinstalled PyDev 2.7.1.2012100913. This is done on Eclipse Help->Install New Software->What is already installed? Here you must uninstall PyDev. Then you have to go back to Help->Install New Software choose PyDev on the Work with combobox and uncheck the Show only the latest versions of available software option. Now you can install any older version with a working vertical scroll.
6
10
0
I updated today to the latest version of PyDev (2.7.2); everything went smoothly. I restarted Eclipse, and then the vertical scroll in the PyDev editor stopped working. The scrollbar is moving, but the text is not scrolling. The horizontal scroll works, though. This happens only with Python files. When I open a text file or some other kind of file (not .py), scrolling works fine. Does anyone have any idea why this is happening and maybe how this can be fixed?
Vertical scroll not working in Eclipse/PyDev
1
0
0
4,120
15,430,595
2013-03-15T10:57:00.000
0
0
1
0
python,eclipse,pydev
15,432,545
8
false
0
0
What OS are you running? It's not some third party java implementation causing issues? I had that problem with Eclipse behaving strange on Unix a few times.
6
10
0
I updated today to the latest version of PyDev (2.7.2); everything went smoothly. I restarted Eclipse, and then the vertical scroll in the PyDev editor stopped working. The scrollbar is moving, but the text is not scrolling. The horizontal scroll works, though. This happens only with Python files. When I open a text file or some other kind of file (not .py), scrolling works fine. Does anyone have any idea why this is happening and maybe how this can be fixed?
Vertical scroll not working in Eclipse/PyDev
0
0
0
4,120
15,430,595
2013-03-15T10:57:00.000
0
0
1
0
python,eclipse,pydev
15,432,929
8
false
0
0
Same problem on OS X, I added it to the bug report; hope this gets resolved soon.
6
10
0
I updated today to the latest version of PyDev (2.7.2); everything went smoothly. I restarted Eclipse, and then the vertical scroll in the PyDev editor stopped working. The scrollbar is moving, but the text is not scrolling. The horizontal scroll works, though. This happens only with Python files. When I open a text file or some other kind of file (not .py), scrolling works fine. Does anyone have any idea why this is happening and maybe how this can be fixed?
Vertical scroll not working in Eclipse/PyDev
0
0
0
4,120
15,430,595
2013-03-15T10:57:00.000
0
0
1
0
python,eclipse,pydev
15,931,485
8
false
0
0
I had this same problem today. Looks like this is fixed now in PyDev version 2.7.3.2013031601. Try updating your software to this version and see if that works. It worked for me.
6
10
0
I updated today to the latest version of PyDev (2.7.2); everything went smoothly. I restarted Eclipse, and then the vertical scroll in the PyDev editor stopped working. The scrollbar is moving, but the text is not scrolling. The horizontal scroll works, though. This happens only with Python files. When I open a text file or some other kind of file (not .py), scrolling works fine. Does anyone have any idea why this is happening and maybe how this can be fixed?
Vertical scroll not working in Eclipse/PyDev
0
0
0
4,120
15,430,595
2013-03-15T10:57:00.000
2
0
1
0
python,eclipse,pydev
38,352,717
8
false
0
0
Have you tried holding CTRL while you scroll the page? It works for me! My environment is Eclipse Mars R2 + PyDev 5.1.2 + Ubuntu 16.04 + Python 2.7
6
10
0
I updated today to the latest version of PyDev (2.7.2); everything went smoothly. I restarted Eclipse, and then the vertical scroll in the PyDev editor stopped working. The scrollbar is moving, but the text is not scrolling. The horizontal scroll works, though. This happens only with Python files. When I open a text file or some other kind of file (not .py), scrolling works fine. Does anyone have any idea why this is happening and maybe how this can be fixed?
Vertical scroll not working in Eclipse/PyDev
0.049958
0
0
4,120
15,433,845
2013-03-15T13:35:00.000
3
0
0
0
python
15,433,922
1
true
1
0
No, there's no generally applicable way of doing that for Python. There are some heuristics for simple modules, but they're going to fail miserably. In the specific case of NumPy you'd have to first find out which parts of its underlying C and Fortran code are needed and which aren't, which is a pretty difficult problem in its own right. Even if you can solve that, the fact that NumPy also uses __import__ in several places, including in compiled extension modules, makes it nearly impossible to determine which parts of the code are going to be imported.
1
1
0
I'm just deploying a Django application which uses Matplotlib and Numpy as dependencies. It's a small app, and in the end, the dependency code outweighs the app code by a lot. I'm also getting lots of errors in setting up the dependencies in the production environment for methods I'm not directly using in the app. Is there a method for stripping down a dependency so it contains only the things necessary for the app to work?
Strip a dependency of unused functions
1.2
0
0
101
15,438,326
2013-03-15T17:13:00.000
2
0
1
0
python,coding-style,pep8
15,438,381
3
false
0
0
The reason that lines should be 79 characters is so that they can fit into an 80 character wide terminal (apparently they still exist). The reason that 72 characters are used on the docstring is that this means that you have equal padding on both sides (docstrings are indented, after all) and still fit into the 80 character width.
1
19
0
I've recently begun figuring it would be a good idea to follow PEP 8. I made my editor display the 80-column mark and am now attempting to wrap lines to fit in it. My question is this. In PEP 8 it says: Limit all lines to a maximum of 79 characters. Makes sense. [...] For flowing long blocks of text (docstrings or comments), limiting the length to 72 characters is recommended. Why 72? Why isn't 79 fine for docstrings or comments?
Python PEP 8 docstring line length
0.132549
0
0
9,289
15,438,586
2013-03-15T17:27:00.000
2
0
1
0
python
15,438,715
2
true
0
0
The only way the interpreter could safely evaluate the sub-expression ori.rfind(' ') only once is if it knew that (a) the rfind call didn't perform any mutations, and (b) no expression between the first and second use of rfind caused any mutations. If either of these weren't true, then caching the result and reusing it would simply be unsafe. Given the dynamic nature of Python, it's nearly impossible to have these guarantees, hence operations like this can't be cached and reused.
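If staying on one line matters, one hedged workaround is to bind the rfind result with a single-element for clause inside the comprehension (strings is a placeholder name):

pairs = [(s[i + 1:], s[:i])
         for s in strings
         for i in [s.rfind(' ')]]    # rfind evaluated once per string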
1
1
0
I have the following statement in Python, where ori is a string: [ori[ori.rfind(' ') + 1:], ori[:ori.rfind(' ')]] We can see ori.rfind(' ') is called twice; is the interpreter smart enough to evaluate the function only once? We could do the following: s = ori.rfind(' ') return [ori[s+1:], ori[:s]] But this uses two lines. I intend to use this statement in a list comprehension over a list of strings and hope to keep it on one line. In this case it is actually easier for the interpreter to figure out, since strings are immutable. My guess is that perhaps the interpreter can be smart enough to avoid reevaluation. In general, if the object is immutable, could the interpreter be smart enough?
Can Python avoid reevaluation in a statement?
1.2
0
0
138
15,438,947
2013-03-15T17:47:00.000
0
0
0
0
python,django
15,440,329
1
false
1
0
If you want the label, I believe you want to use instance.label.
1
0
0
When I try instance.get_choices_display() in the shell, it shows the label. But when I do the same thing in the view, it prints the value. What can be the reason? I want the label to be returned/printed in the view.
django get_choices_display() displaying label in shell and value in view
0
0
0
192
15,442,919
2013-03-15T22:13:00.000
0
0
0
0
python,amazon-simpledb
15,460,747
2
true
1
0
I opted to go with storing large text documents in Amazon S3 (retrieval seems to be quick), I'll be implementing an EC2 instance for caching the documents with S3 as a failover.
1
0
0
I've been reading up on SimpleDB, and one downfall (for me) is the 1kb max per attribute limit. I do a lot of RSS feed processing and I was hoping to store feed data in SimpleDB (articles), and from what I've read the best way to do this is to shard the article across several attributes. The typical article is < 30kb of plain text. I'm currently storing article data in DynamoDB (gzip compressed) without any issues, but the cost is fairly high. I was hoping to migrate to SimpleDB for cheaper storage with still fast retrievals. I do archive a json copy of all rss articles on S3 as well (many years of mysql headaches make me wary of db's). Does anyone know how to shard a string into < 1kb pieces? I'm assuming an identifier would need to be appended to each chunk for order of reassembly. Any thoughts would be much appreciated!
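A hedged sketch of the shard/reassemble idea from the question; the attribute naming scheme is illustrative:

def shard(text, size=1020):
    # zero-padded suffixes make lexicographic order match reassembly order
    return {'body_%04d' % i: text[pos:pos + size]
            for i, pos in enumerate(range(0, len(text), size))}

def unshard(attributes):
    keys = sorted(k for k in attributes if k.startswith('body_'))
    return ''.join(attributes[k] for k in keys)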
python - simpledb - how to shard/chunk a big string into several <1kb values?
1.2
0
0
255
15,443,732
2013-03-15T23:31:00.000
1
1
0
1
python,optimization,nginx,redis,uwsgi
45,383,617
4
false
1
0
"python in-memory data will NOT be persisted across all requests to my knowledge, unless I'm terribly mistaken." you are mistaken. the whole point of using uwsgi over, say, the CGI mechanism is to persist data across threads and save the overhead of initialization for each call. you must set processes = 1 in your .ini file, or, depending on how uwsgi is configured, it might launch more than 1 worker process on your behalf. log the env and look for 'wsgi.multiprocess': False and 'wsgi.multithread': True, and all uwsgi.core threads for the single worker should show the same data. you can also see how many worker processes, and "core" threads under each, you have by using the built-in stats-server. that's why uwsgi provides lock and unlock functions for manipulating data stores by multiple threads. you can easily test this by adding a /status route in your app that just dumps a json representation of your global data object, and view it every so often after actions that update the store.
2
8
0
I doubt this is even possible, but here is the problem and proposed solution (the feasibility of the proposed solution is the object of this question): I have some "global data" that needs to be available for all requests. I'm persisting this data to Riak and using Redis as a caching layer for access speed (for now...). The data is split into about 30 logical chunks, each about 8 KB. Each request is required to read 4 of these 8KB chunks, resulting in 32KB of data read in from Redis or Riak. This is in ADDITION to any request-specific data which would also need to be read (which is quite a bit). Assuming even 3000 requests per second (this isn't a live server so I don't have real numbers, but 3000ps is a reasonable assumption, could be more), this means 96MBps (3000 × 32KB) of transfer from Redis or Riak in ADDITION to the already not-insignificant other calls being made from the application logic. Also, Python is parsing the JSON of these 8KB objects 3000 times every second. All of this - especially Python having to repeatedly deserialize the data - seems like an utter waste, and a perfectly elegant solution would be to just have the deserialized data cached in an in-memory native object in Python, which I can refresh periodically as and when all this "static" data becomes stale. Once in a few minutes (or hours), instead of 3000 times per second. But I don't know if this is even possible. You'd realistically need an "always running" application for it to cache any data in its memory. And I know this is not the case in the nginx+uwsgi+python combination (versus something like node) - python in-memory data will NOT be persisted across all requests to my knowledge, unless I'm terribly mistaken. Unfortunately this is a system I have "inherited" and therefore can't make too many changes in terms of the base technology, nor am I knowledgeable enough of how the nginx+uwsgi+python combination works in terms of starting up Python processes and persisting Python in-memory data - which means I COULD be terribly mistaken with my assumption above! So, direct advice on whether this solution would work + references to material that could help me understand how the nginx+uwsgi+python combination would work in terms of starting new processes and memory allocation, would help greatly. P.S: Have gone through some of the documentation for nginx, uwsgi etc but haven't fully understood the ramifications per my use-case yet. Hope to make some progress on that going forward now. If the in-memory thing COULD work out, I would chuck Redis, since I'm caching ONLY the static data I mentioned above, in it. This makes an in-process persistent in-memory Python cache even more attractive for me, reducing one moving part in the system and at least FOUR network round-trips per request.
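For what it's worth, a hedged sketch of the module-level cache idea (each uwsgi worker process holds its own copy; load_chunks is a placeholder for the Riak/Redis fetch plus JSON parsing):

import time

_CACHE = {'data': None, 'loaded_at': 0.0}   # module-level: survives requests
TTL = 300                                   # refresh every five minutes

def get_static_data():
    now = time.time()
    if _CACHE['data'] is None or now - _CACHE['loaded_at'] > TTL:
        _CACHE['data'] = load_chunks()      # placeholder: fetch + deserialize once
        _CACHE['loaded_at'] = now
    return _CACHE['data']                   # no per-request I/O or JSON parsing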
Persistent in-memory Python object for nginx/uwsgi server
0.049958
0
0
4,976