Dataset columns (name: dtype, observed range):
Q_Id: int64, 337 to 49.3M
CreationDate: string, lengths 23 to 23
Users Score: int64, -42 to 1.15k
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
Tags: string, lengths 6 to 105
A_Id: int64, 518 to 72.5M
AnswerCount: int64, 1 to 64
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: string, lengths 6 to 11.6k
Available Count: int64, 1 to 31
Q_Score: int64, 0 to 6.79k
Data Science and Machine Learning: int64, 0 to 1
Question: string, lengths 15 to 29k
Title: string, lengths 11 to 150
Score: float64, -1 to 1.2
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 8 to 6.81M
15,608,933
2013-03-25T06:29:00.000
1
1
0
1
python,python-3.x,gevent,python-memcached
20,068,405
4
false
0
0
For memcached you probably know the alternative: Redis with Python 3.
1
10
0
So, I have decided to write my next project with Python 3. Why? Because of Ubuntu's plan to gradually drop all Python 2 support within the next year and only support Python 3 (starting with Ubuntu 13.04). gevent and the memcached modules aren't officially ported to Python 3. What are some alternatives, already officially ported to Python 3, to gevent and to pylibmc or python-memcached?
Python3: Looking for alternatives to gevent and pylibmc/python-memcached
0.049958
0
0
5,262
15,609,211
2013-03-25T06:49:00.000
0
1
1
0
python
15,609,275
1
false
0
0
I am afraid there's no easy way to arbitrarily modify a running Python script. One approach is to test the script on a small amount of data first. This way you'll reduce the likelihood of discovering bugs when running on the actual, large, dataset. Another possibility is to make the script periodically save its state to disk, so that it can be restarted from where it left off, rather than from the beginning.
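The checkpointing idea in the answer above can be sketched as follows; the measurement loop and the state file name are hypothetical stand-ins for the asker's actual script:

```python
import json
import os

STATE_FILE = "progress.json"  # hypothetical checkpoint file

def load_state():
    """Resume from the last checkpoint if one exists."""
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f)
    return {"next_index": 0, "results": []}

def save_state(state):
    """Write the checkpoint atomically so a crash can't corrupt it."""
    tmp = STATE_FILE + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, STATE_FILE)

def run_measurements(samples):
    state = load_state()
    for i in range(state["next_index"], len(samples)):
        state["results"].append(samples[i] * 2)  # stand-in for a real measurement
        state["next_index"] = i + 1
        save_state(state)  # after fixing a bug, a restart resumes here, not from zero
    return state["results"]
```

After a crash or an edit to the not-yet-executed part, rerunning the script picks up from the saved index instead of repeating completed measurements.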
1
1
0
I'm using Python scripts to execute simple but long measurements. I was wondering if (and how) it's possible to edit a running script. An example: let's assume I made an error in the last lines of a running script. These lines have not yet been executed. Now I'd like to fix it without restarting the script. What should I do? Edit: One idea I had was loading each line of the script into a list, then popping the first one, feeding it to an interpreter instance, waiting for it to complete, and popping the next one. This way I could modify the list. I guess I can't be the first one thinking about it. Someone must have implemented something like this before, and I don't want to reinvent the wheel. If one of you knows about such a project, please let me know.
Modifying a running script
0
0
0
109
15,609,918
2013-03-25T07:43:00.000
1
1
0
1
java,c++,python,c,thrift
15,610,243
4
false
0
0
If your C/C++ code already exists, your best bet is to publish it as a service, with an API matching the functionality you already have. You can then write new services in the language of your choice, matching the API you need, and they can call the C/C++ services. If your C/C++ code does not exist yet, and you are set to create the majority of the code in a higher-level language such as Java or C#, consider implementing the performance-critical parts initially in that language as well. Only after profiling shows a particular performance problem, and after you exhaust the most basic optimization techniques within the language, such as avoiding allocations inside the hottest loops, should you consider rewriting the bits that have been proven to consume the most cycles into another language, using glue such as JNI. In other words, do not optimize until you have numbers in hand. There is also no fundamental reason why you couldn't squeeze out (almost) the same performance level from Java as you can from C++, with enough trying. You have a real chance to end up with a simpler architecture than you expect.
1
2
0
This is more of a design question. I was planning on writing some web-services which implement CPU intensive algorithms. The problem that I am trying to solve is - higher level languages such as python, perl or java make it easy to write web services. While lower level languages such as C, C++ make it possible to fine tune the performance of your code. So I was looking at what I could do bridge two languages. Here's the options I came up with: Language specific bindings Use something like perl-xs or python's ctypes/loadlibrary or java's JNI. The up-side is that I can write extensions which can execute in the same process. There is small overhead of converting between the native language types to C and back. Implement a separate daemon Use something like thrift / avro and have a separate daemon that runs the C/C++ code. The upside is, it's loosely coupled from the higher level language. I can quickly replace the high level language. The downside being that the overhead of serializing and local unix domain sockets might be higher than executing the code in the same address space (offered by the previous option.) What do you guys think?
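The ctypes option mentioned in the question can be sketched like this; it assumes a Unix-like system where the C math library can be located at runtime (the library name is resolved with find_library, not hard-coded):

```python
import ctypes
import ctypes.util

# Locate and load the C math library; find_library returns None if it
# cannot be found, in which case CDLL(None) falls back to the symbols
# already linked into the process.
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# ctypes assumes int arguments/returns unless told otherwise,
# so declare the C signature of sqrt explicitly.
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

print(libm.sqrt(2.0))
```

The call executes in the same process with only the cost of converting Python floats to C doubles and back, which is the "small overhead" trade-off the question describes for option one.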
Bridging between different programming languages
0.049958
0
0
1,376
15,614,465
2013-03-25T12:03:00.000
0
0
1
1
windows,ipython
68,152,780
2
false
0
0
I have a Windows 10 machine with Anaconda 2020.11 Python installed, out of the box with no updates, and IPython 7.19.0. The only way I can cd to somewhere on another drive letter is cd d:/. No other permutation works: cd d:, cd d:\, cd 'd:', cd 'd:\', etc. So there's an answer, but it's quite annoying to figure out.
1
3
0
How can I change drive letter while in IPython under windows? For example, !cd W: does not make W: the current path, it just changes the path if you would change to drive W. Changing to a dos shell with !cmd and then changing to W: does not have any effect to the IPython shell.
How to change drive in IPython under windows
0
0
0
3,953
15,616,093
2013-03-25T13:25:00.000
6
1
0
0
python,python-3.x,aptana
26,059,272
4
true
0
0
I had the same problem with Aptana and just solved it. In my case I had configured another interpreter (IronPython) for running another script. When I got back to a previous script I got the same error message as you, "Unable to get project for the run", because it was trying to run it with IronPython instead of Python. I would therefore recommend the following: 1) Check your interpreter configuration: Window -> Preferences -> PyDev -> Interpreter Python. If you have no interpreter there, try auto-config. If that doesn't work, you will have to browse for it yourself by clicking New (it should be somewhere like C:\Python27\python.exe). 2) If you do have an interpreter, it means that Aptana is trying to run your script with another interpreter. In that case, right-click on your script file in Aptana -> Run as -> Python run. That worked for me. Good luck!
4
8
0
"Launching Python has encountered a problem. Unable to get project for the run." It wouldn't let me put the word "problem" in the title. The title is the exact message I get when I try to run/debug a file in Aptana 3. I have always been able to run Python in Eclipse without problems. Does anyone know what causes this error? For testing purposes I just created a new PyDev project with only one file in it.
Launching Python has encounterd a. Unable to get project for the run
1.2
0
0
10,294
15,616,093
2013-03-25T13:25:00.000
0
1
0
0
python,python-3.x,aptana
48,852,418
4
false
0
0
Go to Run -> Run configurations -> Python run, delete "New configuration", and then it should work.
4
8
0
"Launching Python has encountered a problem. Unable to get project for the run." It wouldn't let me put the word "problem" in the title. The title is the exact message I get when I try to run/debug a file in Aptana 3. I have always been able to run Python in Eclipse without problems. Does anyone know what causes this error? For testing purposes I just created a new PyDev project with only one file in it.
Launching Python has encounterd a. Unable to get project for the run
0
0
0
10,294
15,616,093
2013-03-25T13:25:00.000
0
1
0
0
python,python-3.x,aptana
52,572,793
4
false
0
0
It occurs when you create a new configuration to run a program. Go to Run > Run configurations > Python run, select "New configuration", press the delete icon, and run the program again. This worked for me.
4
8
0
"Launching Python has encountered a problem. Unable to get project for the run." It wouldn't let me put the word "problem" in the title. The title is the exact message I get when I try to run/debug a file in Aptana 3. I have always been able to run Python in Eclipse without problems. Does anyone know what causes this error? For testing purposes I just created a new PyDev project with only one file in it.
Launching Python has encounterd a. Unable to get project for the run
0
0
0
10,294
15,616,093
2013-03-25T13:25:00.000
0
1
0
0
python,python-3.x,aptana
55,316,981
4
false
0
0
I had a similar issue, and the following solved my problem: go to Run > Run configurations > Python run and delete all the configurations below Python run. It may not be a great option if you have any custom configuration settings.
4
8
0
"Launching Python has encountered a problem. Unable to get project for the run." It wouldn't let me put the word "problem" in the title. The title is the exact message I get when I try to run/debug a file in Aptana 3. I have always been able to run Python in Eclipse without problems. Does anyone know what causes this error? For testing purposes I just created a new PyDev project with only one file in it.
Launching Python has encounterd a. Unable to get project for the run
0
0
0
10,294
15,619,825
2013-03-25T16:24:00.000
1
0
0
0
python,numpy,matplotlib,scipy
15,623,721
1
true
0
0
SOLVED. My problem was that I was not on the latest version of Matplotlib. I did the following steps to get fullscreen working in Matplotlib on Ubuntu 12.10: (1) uninstalled matplotlib with sudo apt-get remove python-matplotlib; (2) installed the build dependencies for matplotlib with sudo apt-get build-dep python-matplotlib; (3) installed matplotlib 1.2 with sudo pip install matplotlib; (4) set matplotlib to use the GTK backend with matplotlib.rcParams['backend'] = 'GTK'; (5) used the keyboard shortcut 'f' when the plot was onscreen, and it worked!
1
1
1
I am trying desperately to make a fullscreen plot in matplotlib on Ubuntu 12.10. I have tried everything I can find on the web. I need my plot to go completely fullscreen, not just maximized. Has anyone ever gotten this to work? If so, could you please share how? Thanks.
Matplotlib fullscreen not working
1.2
0
0
1,451
15,621,013
2013-03-25T17:24:00.000
1
0
0
0
python,openerp
15,632,547
2
false
1
0
The reason for storing the field is that you delegate sorting to sql, that gives you more performance than any other subsequent sorting, for sure.
1
3
0
On search screens, users can sort the results by clicking on a column header. Unfortunately, this doesn't work for all columns. It works fine for regular fields like name and price that are stored on the table itself. It also works for many-to-one fields by joining to the referenced table and using the default sort order for that table. What doesn't work is most functional fields and related fields. (Related fields are a type of functional field.) When you click on the column, it just ignores you. If you change the field definition to be stored in the database, then you can sort by it, but is that necessary? Is there any way to sort by a functional field without storing its values in the database?
Sorting OpenERP table by functional field
0.099668
0
0
1,523
15,623,229
2013-03-25T19:37:00.000
1
0
1
0
python,indentation
39,127,843
2
false
0
0
4 spaces, or hitting the tab button once. However, try to avoid the tab button. Sometimes it will give you indentation errors even when you indent correctly.
1
14
0
Is there any official rule/proposal on how should the Python code be indented?
What is the recommended size of indentation in Python?
0.099668
0
0
6,379
15,623,866
2013-03-25T20:13:00.000
0
1
0
0
python,geolocation,gps
15,624,093
2
false
0
0
For counting the most frequent locations, a simple approach is to use only the first 3 digits after the latitude/longitude decimal point, or better, round to 3 decimal places. At the equator: 4 digits is about 11 m, 3 digits 111 m, 2 digits 1.1 km, 1 digit 11.1 km, and 0 digits 111.111 km (the distance between two meridians: 40,000,000 / 360). Then you could use a hashtable: multiply by e.g. 1000 to get rid of the 3 decimal places, and store the result as a java.awt.Point in the hashtable. There are better solutions, but this gives a first idea.
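In Python, the rounding-and-counting approach above can be sketched with collections.Counter; the sample coordinates below are made up for illustration:

```python
from collections import Counter

def most_frequent_location(coords, digits=3):
    """Bucket coordinates by rounding (~111 m per bucket at 3 digits)
    and return the most common bucket with its count."""
    buckets = Counter((round(lat, digits), round(lon, digits))
                      for lat, lon in coords)
    return buckets.most_common(1)[0]

trips = [
    (46.00012, 8.95001),  # "home": three slightly different GPS fixes
    (46.00021, 8.94998),
    (46.00009, 8.95013),
    (46.01712, 8.96055),  # "work"
]
print(most_frequent_location(trips))  # ((46.0, 8.95), 3)
```

Note that rounding gives fixed grid cells rather than a true 20 m radius, so two fixes that straddle a cell boundary land in different buckets; proper radius-based clustering would need something like DBSCAN.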
1
0
0
I need to analyze a set of GPS coordinates in Python. I need to find out what the most frequent location is. Given precision issues of the GPS data, the precision of the locations is not very high. It's difficult to explain (and to search for info on Google), therefore an example: I drive from home to work every day for 2 months. I start my GPS logger for each trip and stop it at the end of the trip. Occasionally, I go somewhere else. If I run the script to analyse the coordinates where drives started and stopped, with a location radius precision of let's say 20 m, I'll find out that the most frequent places are my home and my work (each with a radius of 20 m). It does not matter where I parked within this radius. Is there any library in Python that can perform such operations? What do you recommend? Thanks
Python: Find out most frequent locations on a set of gps coordinates
0
0
0
1,235
15,625,662
2013-03-25T22:01:00.000
1
0
1
0
python,django,date,datetime
15,625,871
2
false
1
0
What you're looking for is probably covered by post_date__year=year and post_date__month=month in Django. Nevertheless, all this seems a little bit weird for get() parameters. Do you have any constraint at the database level that forbids putting two posts with the same title in the same month of a given year?
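As a sketch, the lookup would read Post.objects.get(post_title=title, post_date__year=2013, post_date__month=3) (model and field names assumed from the question). The same year/month match, ignoring the day, can be illustrated in plain Python with datetime:

```python
from datetime import datetime

posts = [  # stand-ins for database rows
    {"title": "hello", "post_date": datetime(2013, 3, 25)},
    {"title": "hello", "post_date": datetime(2013, 4, 2)},
]

def get_post(title, year, month):
    """Match on title, year and month while ignoring the day,
    mirroring Django's post_date__year / post_date__month lookups."""
    matches = [p for p in posts
               if p["title"] == title
               and p["post_date"].year == year
               and p["post_date"].month == month]
    if len(matches) != 1:
        raise LookupError("expected exactly one match")
    return matches[0]

print(get_post("hello", 2013, 3)["post_date"].day)  # 25
```

Like Django's get(), this raises if zero or more than one post matches, which is why the answer asks about a uniqueness constraint on (title, year, month).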
2
1
0
I'm working on a blog using Django and I'm trying to use get() to retrieve the post from my database with a certain post_title and post_date. I'm using datetime for the post_date, and although I can use post_date = date(year, month, day) to fetch a post made on that specific day, I don't know how to get it to ignore the day parameter, I can't only pass two arguments to date() and since only integers can be used I don't think there's any kind of wildcard I can use for the day. How would I go about doing this? To clarify, I'm trying to find a post using the year in which it was posted, the month, and it's title, but not the day. Thanks in advance for any help!
Matching Month and Year In Python with datetime
0.099668
0
0
483
15,625,662
2013-03-25T22:01:00.000
1
0
1
0
python,django,date,datetime
15,625,840
2
true
1
0
you could use post_date__year and post_date__month
2
1
0
I'm working on a blog using Django and I'm trying to use get() to retrieve the post from my database with a certain post_title and post_date. I'm using datetime for the post_date, and although I can use post_date = date(year, month, day) to fetch a post made on that specific day, I don't know how to get it to ignore the day parameter, I can't only pass two arguments to date() and since only integers can be used I don't think there's any kind of wildcard I can use for the day. How would I go about doing this? To clarify, I'm trying to find a post using the year in which it was posted, the month, and it's title, but not the day. Thanks in advance for any help!
Matching Month and Year In Python with datetime
1.2
0
0
483
15,626,786
2013-03-25T23:34:00.000
0
0
0
0
python,asp.net,python-2.7,screen-scraping,web-crawler
15,627,630
2
false
0
1
I would look into PyQt or PySide, which are Python wrappers on top of Qt. Qt is a big monster, but it's very well documented, and I'm sure it will help you further in your project once you've grabbed your screen section.
1
0
0
Python noobie. I'm trying to make Python select a portion of my screen. In this case, it is a small window within a Firefox window -- it's Firebug source code. And then, once it has selected the right area, control-A to select all and then control-C to copy. If I could figure this out then I would just do the same thing and paste all of the copies into a .txt file. I don't really know where to begin -- are there libraries for this kind of thing? Is it even possible?
How to: Python script that will 'click' on a portion of my screen, and then do key commands?
0
0
0
556
15,627,698
2013-03-26T01:18:00.000
3
1
0
1
python,thrift
15,634,347
2
true
0
0
Daemonizing processes has nothing to do with Thrift. Thrift only provides the communication layer for different platforms, and you can run the server in one of the several programming languages Thrift supports (that is, the great majority of what you can think of). No matter if you write the server in Java, C++ (I've tried those so far) or Python, none of them will create a daemon. This feature is not supported (e.g. PHP natively supports neither multithreading nor daemonizing). I've just seen supervisord; I didn't play with it much, but it seems to be a good choice to manage processes like Thrift servers.
2
2
0
Right now I'm testing the waters with Apache Thrift, and I'm currently using a TThreadedServer written in Python, but when I run the server, it is not daemonized. Is there any way to make it run as a daemon, or is there another way to run thrift in a production environment?
Running thrift server as daemon
1.2
0
0
1,136
15,627,698
2013-03-26T01:18:00.000
1
1
0
1
python,thrift
15,873,194
2
false
0
0
I think you are looking for this: nohup hbase thrift start &. This is the only way I found to keep Thrift working after I disconnect from my Linux session.
2
2
0
Right now I'm testing the waters with Apache Thrift, and I'm currently using a TThreadedServer written in Python, but when I run the server, it is not daemonized. Is there any way to make it run as a daemon, or is there another way to run thrift in a production environment?
Running thrift server as daemon
0.099668
0
0
1,136
15,628,387
2013-03-26T02:36:00.000
3
0
1
0
python
15,628,458
2
true
0
0
Quick math says you can't do it in memory in Python on a 32-bit system: 10^8 keys will give you only 30 bytes per key if you have 3 GB of address space available. On a 64-bit system, the overhead for a key and value will take up at least: 8 bytes per pointer in the hash bucket, one pointer each for the key and value, and probably 28 bytes for each key and value object (sys.getsizeof(0) gives 28 on my 64-bit system). So our estimate is that it will take at least 7.2 GB of memory. You can do it, but you may get unacceptable performance. I recommend using something simple like Kyoto Cabinet.
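The back-of-the-envelope estimate in the answer can be reproduced like this (the per-object sizes are the answer's assumptions for one 64-bit CPython build, not universal constants):

```python
N = 10 ** 8          # number of keys
PTR = 8              # bytes per pointer on a 64-bit system
OBJ = 28             # assumed bytes per small int object (cf. sys.getsizeof(0))

# two pointers in the hash bucket, plus the key and value objects themselves
bytes_per_entry = 2 * PTR + 2 * OBJ
total_gb = N * bytes_per_entry / 1e9
print(total_gb)  # 7.2
```

And 7.2 GB is a floor: CPython dicts keep extra slack slots to limit collisions, so the real footprint would be higher still.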
1
1
0
I am going to have a program that generates a 100 million unique keys, and each key has a value associated with it (maximum of 4 digits). I then want to be able to access that data as fast as possible, so that I can look up a key and get its value. Ideally at least a million times a second. Assuming normal computing power, Is this even possible? Do I just create it as a dictionary or should I start learning about databases and such? Anything that points me in the right direction would be a huge help.
Is it possible to implement a massive look up table (100 million + keys) in python?
1.2
0
0
828
15,630,054
2013-03-26T05:33:00.000
2
0
0
0
python,xml,one-to-many,openerp,many-to-one
44,493,866
10
false
1
0
In the XML file: Please add options="{'no_create': True}" to your field which will remove the create button
2
10
0
Please advice me How to remove "Create and Edit..." from many2one field.? that item shows below in the many2one fields which I filtered with domain option. OpenERP version 7
How to remove Create and Edit... from many2one field.?
0.039979
0
0
19,810
15,630,054
2013-03-26T05:33:00.000
18
0
0
0
python,xml,one-to-many,openerp,many-to-one
15,630,138
10
true
1
0
I don't have much of an idea; maybe for that you have to make changes in the web addons. But an alternative solution is to turn that many2one field into a selection: add the widget="selection" attribute in your XML, e.g. <field name="Your_many2one_field" widget="selection"/>
2
10
0
Please advice me How to remove "Create and Edit..." from many2one field.? that item shows below in the many2one fields which I filtered with domain option. OpenERP version 7
How to remove Create and Edit... from many2one field.?
1.2
0
0
19,810
15,632,648
2013-03-26T08:42:00.000
-2
0
1
0
python,loops,long-integer
15,632,672
1
true
0
0
Use the xrange() function in place of the range() function.
1
0
0
I know that in Python we do not declare data types, but I have a particular number which is of long type. I had a loop using that value as the final parameter of the range() function, and when I ran it, it showed an error similar to "long is not used for an iteration". Please help me.
Long type is not used in range() function
1.2
0
0
63
15,635,888
2013-03-26T11:27:00.000
2
0
0
1
python,google-app-engine,google-cloud-datastore
15,641,028
2
false
1
0
This is answered, but to explain a little further: the local datastore, by default, writes to the temporary file system on your computer. The temporary file system is emptied any time you restart the computer, hence your datastore is emptied. If you don't restart your computer, your datastore should remain.
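If memory serves, the old Python SDK's dev_appserver accepted a --datastore_path flag to keep the dev datastore outside the temp directory; treat the exact flag and the path below as assumptions to verify against your SDK version:

```shell
# Hypothetical invocation: persist the dev datastore in a file that
# survives reboots instead of the default temp-directory location.
dev_appserver.py --datastore_path=~/myapp-datastore.db myapp/
```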
1
0
0
I'm running App Engine with Python 2.7 on OS X. Once I stop the development server all data in the data store is lost. Same thing happens when I try to deploy my app. What might cause this behaviour and how to fix it?
GAE: Data is lost after dev server restart
0.197375
0
0
322
15,636,796
2013-03-26T12:11:00.000
2
0
0
0
python,matlab,numpy,linear-regression,rolling-computation
15,638,779
2
true
0
0
No, there is NO function that will do a rolling regression, returning all the statistics you wish, doing it efficiently. That does not mean you can't write such a function. To do so would mean multiple calls to a tool like conv or filter. This is how a Savitzky-Golay tool would work, which DOES do most of what you want. Make one call for each regression coefficient. Using updating and downdating tools to reuse/modify the previous regression estimates will not be as efficient as the calls to conv, since you only need to factorize a linear system ONCE when you then do the work with conv. Anyway, there is no need to do an update, as long as the points are uniformly spaced in the series. This is why Savitzky-Golay works.
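A sketch of the convolution-based approach in NumPy (the window length and data are made up): one pass of sliding sums yields the OLS slope and intercept for every window at once, with no per-window refitting.

```python
import numpy as np

def rolling_ols(x, y, w):
    """Slope/intercept of y ~ x over every length-w window via sliding sums."""
    ones = np.ones(w)
    # np.convolve with a kernel of ones in 'valid' mode = sliding window sums
    Sx  = np.convolve(x, ones, "valid")
    Sy  = np.convolve(y, ones, "valid")
    Sxx = np.convolve(x * x, ones, "valid")
    Sxy = np.convolve(x * y, ones, "valid")
    # standard closed-form OLS for each window
    slope = (w * Sxy - Sx * Sy) / (w * Sxx - Sx ** 2)
    intercept = (Sy - slope * Sx) / w
    return slope, intercept

x = np.arange(10.0)
y = 2.0 * x + 1.0            # exact line, so every window recovers slope 2
slope, intercept = rolling_ols(x, y, 4)
```

This is O(n) per statistic, matching the complexity the question hoped for, though the E[XY]-style sums can lose precision on poorly scaled data.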
1
2
1
I have two vectors x and y, and I want to compute a rolling regression for those, e.g a on (x(1:4),y(1:4)), (x(2:5),y(2:5)), ... Is there already a function for that? The best algorithm I have in mind for this is O(n), but applying separate linear regressions on every subarrays would be O(n^2). I'm working with Matlab and Python (numpy).
Efficient way to do a rolling linear regression
1.2
0
0
4,368
15,638,612
2013-03-26T13:44:00.000
6
0
1
0
python,statistics
15,638,712
2
false
0
0
Sounds like a math question. For the mean, you know that you can take the mean of a chunk of data, and then take the mean of the means. If the chunks aren't the same size, you'll have to take a weighted average. For the standard deviation, you'll have to calculate the variance first. I'd suggest doing this alongside the calculation of the mean. For variance, you have Var(X) = Avg(X^2) - Avg(X)^2. So compute the average of your data and the average of your (data^2), aggregate them as above, and then take the difference. The standard deviation is just the square root of the variance. Note that you could do the whole thing with iterators, which is probably the most efficient.
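The chunked E[X^2] - E[X]^2 recipe above can be sketched as a single pass over an iterable of chunks; note this formula can lose precision when the mean is large relative to the spread (Welford's algorithm is the numerically safer variant):

```python
import math

def streaming_mean_std(chunks):
    """One pass over an iterable of chunks, keeping only three running sums."""
    n = s = s2 = 0.0
    for chunk in chunks:
        for x in chunk:
            n += 1
            s += x
            s2 += x * x
    mean = s / n
    var = s2 / n - mean * mean   # Var(X) = Avg(X^2) - Avg(X)^2 (population)
    return mean, math.sqrt(max(var, 0.0))

mean, std = streaming_mean_std([[1, 2, 3], [4, 5]])
```

Because only the three scalars n, s and s2 are kept, arbitrarily large on-disk arrays can be processed chunk by chunk.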
1
4
1
I have a lot of data stored on disk in large arrays. I can't load everything into memory at once. How could one calculate the mean and the standard deviation?
calculating mean and standard deviation of the data which does not fit in memory using python
1
0
0
6,151
15,638,882
2013-03-26T13:56:00.000
0
1
0
1
python,python-2.7,ssh,openssh
15,639,004
2
false
0
0
Run service sshd status (e.g. via Popen()) and read what it tells you.
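A sketch of that suggestion with subprocess; the service name and the reliance on the command's exit code are assumptions (on Ubuntu 12.04 the OpenSSH service is typically called ssh rather than sshd):

```python
import subprocess

def is_service_running(cmd):
    """Return True when the given status command exits with code 0."""
    try:
        result = subprocess.run(cmd, capture_output=True, text=True)
    except FileNotFoundError:
        return False  # the command itself doesn't exist on this system
    return result.returncode == 0

# Hypothetical check; adjust the service name for your distribution.
ssh_up = is_service_running(["service", "ssh", "status"])
```

Parsing result.stdout instead of trusting the exit code is also an option, since some init scripts return 0 even when reporting "stopped".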
1
0
0
I wanted to know if there is a way to find out the status of the ssh server in the system using Python. I just want to know if the server is active or not (just yes/no). It would help even if it is just a linux command so that I can use python's popen from subprocess module and run that command. Thanks PS: I'm using openssh-server on linux (ubuntu 12.04)
SSH Server status in Python
0
0
0
1,422
15,638,937
2013-03-26T13:58:00.000
0
0
0
0
python,xslt,lxml,libxslt
16,332,834
1
false
1
0
lxml's transform methods allow you to profile a transformation, and obtain the results as an XML document which shows how many times a pattern/mode/named-template was used. It should be possible to then perform an XPath across the XSL files to obtain all the comparative patterns/modes/named-templates and compare the two lists to see which templates are most/least used.
1
0
0
Is it possible to log/capture which XSL templates are used and/or not used during an XML transform using lxml? I'm looking to report on and prune unused templates to reduce "technical debt".
Can I capture which templates have been used/not used during an XSL transformation?
0
0
0
36
15,641,474
2013-03-26T15:51:00.000
34
0
0
0
python,django
15,641,548
3
true
1
0
I believe your browser is caching your JS. You could hard-refresh your browser or clear the browser cache. On Chrome it is Control+F5 or Shift+F5; I believe on Firefox it is Control+Shift+R.
2
21
0
I have javascript files in my static folder. Django finds and loads them perfectly fine, so I don't think there is anything wrong with my configuration of the static options. However, sometimes when I make a change to a .js file and save it, the Django template that uses it does NOT reflect those changes -- inspecting the javascript with the browser reveals the javascript BEFORE the last save. Restarting the server does nothing, though restarting my computer has sometimes solved the issue. I do not have any code that explicitly deals with caching. Has anyone ever experienced anything like this?
Django Not Reflecting Updates to Javascript Files?
1.2
0
0
17,601
15,641,474
2013-03-26T15:51:00.000
0
0
0
0
python,django
67,354,933
3
false
1
0
For me, opening Incognito Mode in Chrome let the browser show the recent changes in my .js static files.
2
21
0
I have javascript files in my static folder. Django finds and loads them perfectly fine, so I don't think there is anything wrong with my configuration of the static options. However, sometimes when I make a change to a .js file and save it, the Django template that uses it does NOT reflect those changes -- inspecting the javascript with the browser reveals the javascript BEFORE the last save. Restarting the server does nothing, though restarting my computer has sometimes solved the issue. I do not have any code that explicitly deals with caching. Has anyone ever experienced anything like this?
Django Not Reflecting Updates to Javascript Files?
0
0
0
17,601
15,641,529
2013-03-26T15:53:00.000
1
0
0
0
python,django
15,641,560
1
true
1
0
It is sent in the clear, then the server hashes it. You would need to use https to prevent eavesdropping.
1
1
0
When you login to django, does the password get hashed and then transmitted or is it transmitted in the clear and the server does the hashing? This is within the context of not using https.
Admin.sites.url password transmission
1.2
0
0
43
15,642,665
2013-03-26T16:43:00.000
2
0
1
0
java,python,concurrency
15,642,897
3
false
1
0
Your best bet might be to ditch the use of a file and use sockets. The Java program generates and caches the output until a Python script is listening. The Python script then accepts the data, and handles it. Alternatively, you could use IPC signalling between the two processes, although this seems a lot more messy than sockets, IMHO. Otherwise, a .lock file seems like your best bet.
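The socket alternative can be sketched in Python; socket.socketpair() stands in here for a real TCP connection between the Java producer and the Python consumer:

```python
import socket
import threading

def consumer(conn):
    """Read one record and acknowledge it, as the Python side might."""
    data = conn.recv(1024)
    conn.sendall(b"ack:" + data)
    conn.close()

# In reality the Java side would connect over TCP; a socketpair gives
# us both connected ends in one process for illustration.
producer_end, consumer_end = socket.socketpair()
t = threading.Thread(target=consumer, args=(consumer_end,))
t.start()
producer_end.sendall(b"record-1")
reply = producer_end.recv(1024)
t.join()
print(reply)  # b'ack:record-1'
```

With sockets, handing a record over and acknowledging it is atomic from the producer's point of view, which removes the shared-file locking problem entirely.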
3
1
0
I have code written on Java which writes all data to the file and then I have python script which handles this data. They run completely separately and python script can be run by schedule but it also removing handled records from the file. The question is in implementation for the access to the file when java code from first process will try to write something and python code from second process will try to remove handled record? First thought was to have .lock file physically created when one of the processes updating the file but perhaps there are some other solutions to consider? Thank you.
How to access the file shared between different accessors?
0.132549
0
0
53
15,642,665
2013-03-26T16:43:00.000
0
0
1
0
java,python,concurrency
15,642,755
3
false
1
0
Make sure that both the Java and Python methods close the file when they are done. One possibility is to convert your Python script to Jython. If both processes are running in the JVM then you should be able to use standard Java concurrency techniques to make sure you do not have both threads modifying the file simultaneously.
3
1
0
I have code written on Java which writes all data to the file and then I have python script which handles this data. They run completely separately and python script can be run by schedule but it also removing handled records from the file. The question is in implementation for the access to the file when java code from first process will try to write something and python code from second process will try to remove handled record? First thought was to have .lock file physically created when one of the processes updating the file but perhaps there are some other solutions to consider? Thank you.
How to access the file shared between different accessors?
0
0
0
53
15,642,665
2013-03-26T16:43:00.000
0
0
1
0
java,python,concurrency
15,642,744
3
false
1
0
One mechanism would be to have the producer roll the file to a new name (maybe with an HHMMSS suffix) every so often, and have the consumer only process the file once it has been rolled to the new name. Maybe every 5 minutes? Another mechanism would be to have the consumer roll the file itself and have the producer notice that the file has rolled and re-open the original file name. So the consumer is always consuming from output.consume and the producer is always writing to output, or something like that. Every time a line is written to the file, the producer makes sure that output exists. When the consumer is ready to read the file, it renames output to output.consume or something similar. The producer notices that the file output no longer exists, so it reopens it for output. Once the output file is re-created, the consumer can process the output.consume file.
3
1
0
I have code written on Java which writes all data to the file and then I have python script which handles this data. They run completely separately and python script can be run by schedule but it also removing handled records from the file. The question is in implementation for the access to the file when java code from first process will try to write something and python code from second process will try to remove handled record? First thought was to have .lock file physically created when one of the processes updating the file but perhaps there are some other solutions to consider? Thank you.
How to access the file shared between different accessors?
0
0
0
53
15,645,296
2013-03-26T19:02:00.000
0
0
1
0
python,multithreading,io
15,647,380
1
false
0
0
There is no need to repeatedly check for either I/O completion or lock release. An I/O completion, signaled by a hardware interrupt to a driver, or a lock release, signaled by a software interrupt from another thread, will make threads waiting on those operations ready 'immediately', quite possibly running, and quite possibly preempting another thread when being made running. Essentially, after either a software or hardware interrupt, the OS can decide to interrupt-return to a different thread than the one that was interrupted. The high I/O performance of this mechanism, eliminating any polling or checking, is 99% of the reason for putting up with the pain of preemptive multitaskers.
1
1
0
I'm curious. I've been programming in Python for years. When I run a command that blocks on I/O (whether it's a hard-disk read or a network request), or blocks while waiting on a lock to be released, how is that implemented? How does the thread know when to reacquire the GIL and start running again? I wonder whether this is implemented by constantly checking ("Is the output here now? Is it here now? What about now?") which I imagine is wasteful, or alternatively in a more elegant way.
How is waiting for I/O or waiting for a lock to be released implemented?
0
0
0
259
15,651,666
2013-03-27T03:58:00.000
4
1
1
0
python,file,file-io,filesystems
15,651,767
1
true
0
0
It's not really feasible in general, because the idea of file identity is an illusion (similar to the illusion of physical identity, but this isn't a philosophy forum). You cannot track identity using file contents, because contents change. You cannot track by any other properties attached to the file, because many file editors will save changes by deleting the old file and creating a new one. Version control systems handle this in three ways: (CVS) Don't track move operations. (Subversion) Track move operations manually. (Git) Use a heuristic to label operations as "move" operations based on changes to the contents of a file (e.g., if a new file differs from an existing file by less than 50%, then it's labeled as a copy). Things like inode numbers are not stable and not to be trusted. Here, you can see that editing a file with Vim will change the inode number, which we can examine with stat -f %i: $ touch file.txt $ stat -f %i file.txt 4828200 $ vim file.txt ...make changes to file.txt... $ stat -f %i file.txt 4828218
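The T1/T2 comparison described in the question can still be sketched in a few lines of Python, even if it cannot establish identity in general. The choice of sha256 as the hash and st_mtime as the attribute is an assumption for illustration:

```python
import hashlib
import os

def snapshot(path):
    """Capture the (hash, mtime) pair for a file at some time T."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest(), os.stat(path).st_mtime

def verify(path, t1):
    """Compare a T2 snapshot against the T1 snapshot.

    A changed hash with a changed mtime looks like a legitimate edit;
    a changed hash with an unchanged mtime is a red flag ("corrupt").
    """
    h1, m1 = t1
    h2, m2 = snapshot(path)
    if h2 == h1:
        return "ok"
    return "modified" if m2 != m1 else "corrupt"
```

As the answer notes, this only tracks content and attributes, not identity: a save-by-replace editor defeats any inode-based scheme.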
1
1
0
My idea is to track a specific file on a file-system over time between two points in time, T1 and T2. The emphasis here lies on looking at a file as a unique entity on a file-system. One that can change in data and attributes but still maintain its unique identity. The ultimate goal is to determine whether or not the data of a file has (unwillingly) changed between T1 and T2 by capturing and recording the data-hash and creation/modification attributes of the file at T1 and comparing them with the equivalents at T2. If all attributes are unchanged but the hash doesn't validate we can say that there is a problem. In all other cases we might be willing to say that a changed hash is the result of a modification and an unchanged hash and unchanged modification-attribute the result of no change on the file(data) at all. Now, there are several ways to refer to a file and corresponding drawbacks: The path to the file: However, if the file is moved to a different location this method fails. A data-hash of the file-data: Would allow a file, or rather (a) pointer to the file-data on disk, to be found, even if the pointer has been moved to a different directory, but the data cannot change or this method fails as well. My idea is to retrieve a fileId for that specific file at T1 to track the file at T2, even if it has changed its location so it doesn't need to be looked at as a new file. I am aware of two methods pywin offers. win32file.GetFileInformationByHandle() and win32file.GetFileInformationByHandleEx(), but they obviously are restricted to specific file-systems, break cross-platform-compatibility and sway away from a universal approach to track the file. My question is simple: Are there any other ideas/theories to track a file, ideally accross platforms/FSs? Any brainstormed food for thought is welcome!
Tracking a file over time
1.2
0
0
303
15,654,714
2013-03-27T08:43:00.000
0
0
0
1
python,linux,cherrypy,gnu-screen
25,355,763
3
false
0
0
You can use syslog or even better you can configure it to send all logs to a database!
2
2
0
I'm developing a small piece of software that is able to control (start, stop, restart and so on - with GNU screen) every possible gameserver (which has a command line) and includes a tiny standalone webserver with a complete webinterface (you can access the GNU screen from there, as if you're attached to it) on Linux. Almost everything is working and needs some code cleanup now. It's written in Python; the standalone webserver uses CherryPy as a framework. The problem is that the GNU screen output on the webinterface is done via a logfile, which can cause high I/O when enabled (ok, it depends on what is running). Is there a way to pipe the output directly to the standalone webserver (it has to be fast)? Maybe something with sockets, but I don't know how to handle them yet.
A way to "pipe" gnu screen output to a running python process?
0
0
0
1,489
15,654,714
2013-03-27T08:43:00.000
1
0
0
1
python,linux,cherrypy,gnu-screen
15,661,154
3
false
0
0
Writing to a pipe would work, but it's dangerous since your command (the one writing the pipe) will block when you're not fast enough reading the data from the pipe. A better solution would be to create a local "log server" which publishes stdin on a socket. Now you can pipe the output of your command to the log server, which reads from stdin and sends a copy of the input to anyone connected to its socket. When no one is connected, the output is just ignored. Writing such a "log server" is trivial (about 1h in Python, I'd guess). An additional advantage would be that you could keep part of the log file in memory (say the last 100 lines). When your command crashes, you could still get the last output from your log server. For this to work, you must not terminate the log server when stdin returns EOF. The drawback is that you need to clean up stale log servers yourself. When you use sockets, you can send it a "kill" command from your web app.
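The in-memory core of such a log server might look like this (sockets omitted to keep the sketch short; subscribers are modeled as plain callables, e.g. functions that write to a connected socket):

```python
from collections import deque

class LogServer:
    """Fan each input line out to current subscribers and keep the last
    N lines in memory, so a late subscriber still sees recent output even
    if the producing command has already crashed."""

    def __init__(self, backlog=100):
        self.recent = deque(maxlen=backlog)   # last `backlog` lines only
        self.subscribers = []                 # callables taking one line

    def feed(self, line):
        self.recent.append(line)
        for send in self.subscribers:
            send(line)                        # no subscribers -> line just kept

    def subscribe(self, send):
        for line in self.recent:              # replay the in-memory backlog
            send(line)
        self.subscribers.append(send)
```

Wrapping feed() around sys.stdin and subscribe() around accepted socket connections gives the full design described above.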
2
2
0
I'm developing a small piece of software that is able to control (start, stop, restart and so on - with GNU screen) every possible gameserver (which has a command line) and includes a tiny standalone webserver with a complete webinterface (you can access the GNU screen from there, as if you're attached to it) on Linux. Almost everything is working and needs some code cleanup now. It's written in Python; the standalone webserver uses CherryPy as a framework. The problem is that the GNU screen output on the webinterface is done via a logfile, which can cause high I/O when enabled (ok, it depends on what is running). Is there a way to pipe the output directly to the standalone webserver (it has to be fast)? Maybe something with sockets, but I don't know how to handle them yet.
A way to "pipe" gnu screen output to a running python process?
0.066568
0
0
1,489
15,655,224
2013-03-27T09:14:00.000
16
1
1
0
python,performance,python-import
15,655,265
1
false
0
0
No, the difference is not a question of performance. In both cases, the entire module must be parsed, and any module-level code will be executed. The only difference is in namespaces: in the first, all the names in the imported module will become names in the current module; in the second, only the package name is defined in the current module. That said, there's very rarely a good reason to use from foo import *. Either import the module, or import specific names from it.
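A quick illustration of the namespace point (using math purely as an example module):

```python
# The difference is purely in which names get bound, not in how much work
# is done at import time: the module is parsed and executed once either
# way, then cached in sys.modules, so a second import is nearly free.
import sys
import math                          # binds only the name "math" here
assert "math" in sys.modules
assert math.sqrt(9) == 3.0

from math import sqrt                # module already cached; just binds "sqrt"
assert sqrt(9) == 3.0
assert sys.modules["math"] is math   # same module object either way
```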
1
8
0
Is there any performance difference between from package import * and import package?
Performance between "from package import *" and "import package"
1
0
0
181
15,660,649
2013-03-27T13:56:00.000
2
0
1
0
python
15,660,939
3
true
0
0
If your library reads from a file with .read(), there isn't much point in an abstraction that merges multiple file objects into one. It is quite trivial to read everything and throw it into a StringIO.
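A minimal sketch of that StringIO approach (the function name as_one_file is made up for illustration):

```python
import io

def as_one_file(*paths):
    """Read every file and hand back a single file-like object with a
    working .read() -- the trivial StringIO approach described above."""
    merged = io.StringIO()
    for p in paths:
        with open(p) as f:
            merged.write(f.read())
    merged.seek(0)             # rewind so the library reads from the start
    return merged
```

The returned object supports .read(), .readline(), iteration, etc., so it can be handed to a library expecting an ordinary file object.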
1
3
0
I have two files: a header and the body. I am using a library to read the whole thing. I can use "fileinput.input" to create one FileInput object and hand this to the library that reads the data. The problem is FileInput objects do not have a '.read' attribute, which the library seems to expect. I need a file object with a .read that reads both files as one. Any ideas or existing workarounds? Yes, I know I can build my own little class or cat the files together. Just wondering if there is some magic file-object joiner I've never heard of.
python: open two files as one fileobject
1.2
0
0
243
15,665,581
2013-03-27T17:34:00.000
2
0
1
0
python,ruby,ipython,irb
15,665,619
2
false
0
0
Local variables are local to the scope they are defined in. That's why they are called local variables. If you define a local variable in the script myscript.rb, then it will be defined inside that scope and nowhere else. That's the whole point of local variables. If you want a variable that is available globally, use a global variable. Or maybe an instance variable of the top-level main object.
1
0
0
When I execute a script in IPython, by using run myscript.py, the names from the script are then available in the interactive interpreter for me to experiment with further. In irb this doesn't seem to happen when I run the script using load 'myscript.rb'. How do I keep the variables in scope in interactive ruby?
Keep script variables in scope when loading from irb
0.197375
0
0
647
15,667,578
2013-03-27T19:19:00.000
13
0
0
0
python,django,mezzanine
27,972,009
2
false
1
0
If you are like me, you will find that the FAQ is sorely lacking in its description of how to get Mezzanine working as an app. So here is what I did (after a painful half day of hacking) to get it integrated (somewhat): Download the repo and copy it into your project. Run setup.py for the package. cd to the package and run the mezzanine command to create a new app (mezzanine-project <project name>), let's say you use the name blog as your <project_name>. In either the local_settings.py or settings.py file, set the DATABASES dict to use your project's database. Run the createdb command from the mezzanine manage.py file. Now it's time to start the hack-fest: In your project's settings.py file, add blog to INSTALLED_APPS. Add some configuration variables to settings.py that Mezzanine is expecting: PACKAGE_NAME_FILEBROWSER = "filebrowser_safe" PACKAGE_NAME_GRAPPELLI = "grappelli_safe" GRAPPELLI_INSTALLED = False ADMIN_REMOVAL = [] RATINGS_RANGE = range(1, 5) TESTING = False BLOG_SLUG = '' COMMENTS_UNAPPROVED_VISIBLE = True COMMENTS_REMOVED_VISIBLE = False COMMENTS_DEFAULT_APPROVED = True COMMENTS_NOTIFICATION_EMAILS = ",".join(ALL_EMAILS) COMMENT_FILTER = None Add some middleware that Mezzanine is expecting: ... "mezzanine.core.request.CurrentRequestMiddleware", "mezzanine.core.middleware.RedirectFallbackMiddleware", "mezzanine.core.middleware.TemplateForDeviceMiddleware", "mezzanine.core.middleware.TemplateForHostMiddleware", "mezzanine.core.middleware.AdminLoginInterfaceSelectorMiddleware", "mezzanine.core.middleware.SitePermissionMiddleware", (uncomment the following if using any of the SSL settings:) "mezzanine.core.middleware.SSLRedirectMiddleware", "mezzanine.pages.middleware.PageMiddleware", ... Add some INSTALLED_APPS that Mezzanine is expecting: ... "mezzanine.boot", "mezzanine.conf", "mezzanine.core", "mezzanine.generic", "mezzanine.blog", "mezzanine.forms", "mezzanine.pages", "mezzanine.galleries", "mezzanine.twitter", ...
Add references to the template folders of mezzanine to your TEMPLATE_DIRS tuple: os.path.join(BASE_PARENT, '<path to mezzanine>/mezzanine/mezzanine'), os.path.join(BASE_PARENT, '<path to mezzanine>/mezzanine/mezzanine/blog/templates') Finally, if you're like me, you'll have to override some of the extends paths in the mezzanine templates, the most obvious being in "blog_post_list.html", which just extends base.html; instead you want it to extend the mezzanine-specific base file. So go to that file and replace {% extends "base.html" %} with {% extends "core/templates/base.html" %}.
1
16
0
I already have an existing Django website. I have added a new url route '/blog/' where I would like to have a Mezzanine blog. If it possible to installed Mezzanine as an app in an existing Django site as opposed to a standalone blog application.
How do I install Mezzanine as a Django app?
1
0
0
4,399
15,671,591
2013-03-27T23:37:00.000
0
0
0
1
python,google-app-engine,google-sheets,google-cloud-datastore
15,671,792
1
false
1
0
If you use the Datastore API, you will also need to build out a way to manage users' data in the system. If you use Spreadsheets, that will serve as your way to manage users' data, so managing the data would be taken care of for you. The benefit of using the Datastore API would be a seamless integration of user-data management into your application. Spreadsheet integration would remain separate from your main application.
1
0
0
In my company we want to build an application in Google App Engine which will manage user provisioning to Google Apps. But we do not really know which data source to use. We have two propositions: a spreadsheet which will contain users' data, where we will use the Spreadsheet API to get this data and use it for user provisioning; or the Datastore, which will also contain users' data, and this time we will use the Datastore API. Please note that my company has 3493 users, and we do not know many of the advantages and disadvantages of each solution. Any suggestions, please?
Datastore vs spreadsheet for provisioning Google apps
0
1
0
248
15,678,119
2013-03-28T09:22:00.000
0
0
0
0
python,django,ubuntu
20,323,986
2
false
1
0
Just in case, did you run the command renice -20 -p {pid} instead of renice --20 -p {pid}? In the first case it will be given the lowest priority.
1
0
0
I am using Ubuntu. I have some management commands which, when run, do lots of database manipulation, so they take nearly 15 min. My system monitor shows that my system has 4 CPUs and 6GB RAM. But this process is not utilising all the CPUs. I think it is using only one of the CPUs, and very little RAM at that. I think that if I can make it use all the CPUs and most of the RAM, the process will complete in much less time. I tried renice, setting the priority to -18 (meaning very high), but the speed is still low. Details: it's a Python script with a loop count of nearly 10,000, and nearly ten such loops. In every loop, it saves to a Postgres database.
ubuntu django run managements much faster( i tried renice by setting -18 priority to python process pid)
0
0
0
171
15,679,272
2013-03-28T10:20:00.000
1
0
1
0
python,deployment,pyqt,pyside
15,679,428
1
false
0
1
Yes, Python comes with setup utilities, and there are packages which will put your complete application in a platform-specific binary (exe on Windows, .app on OSX). Some of the packages I would recommend looking at: cx_freeze py2app py2exe
1
2
0
By "reasonable" environment I mean that it should not require the user to manually install any dependencies of the application, but a working Python installation can be required. Additionally I would like the application to work on Windows, OSX, and popular Linux distributions. If I can package a Python interpreter as well, that's better. Size is not really a concern. A good example of what I want to accomplish is the SublimeText editor. Is there an established way of doing this?
How can I create a package of a PyQt/PySide application which can be expected to run in "reasonable" environments?
0.197375
0
0
380
15,680,463
2013-03-28T11:20:00.000
48
0
1
0
ipython,jupyter-notebook,jupyter
47,042,617
31
false
0
0
For Windows 10 Look for the jupyter_notebook_config.py in C:\Users\your_user_name\.jupyter or look it up with cortana. If you don't have it, then go to the cmd line and type: jupyter notebook --generate-config Open the jupyter_notebook_config.py and do a ctrl-f search for: c.NotebookApp.notebook_dir Uncomment it by removing the #. Change it to: c.NotebookApp.notebook_dir = 'C:/your/new/path' Note: You can put a u in front of the first ', change \\\\ to /, or change the ' to ". I don't think it matters. Go to your Jupyter Notebook link and right click it. Select properties. Go to the Shortcut menu and click Target. Look for %USERPROFILE%. Delete it. Save. Restart Jupyter.
7
276
0
When I open a Jupyter notebook (formerly IPython) it defaults to C:\Users\USERNAME. How can I change this so to another location?
Change IPython/Jupyter notebook working directory
1
0
0
616,353
15,680,463
2013-03-28T11:20:00.000
18
0
1
0
ipython,jupyter-notebook,jupyter
40,683,115
31
false
0
0
Before running ipython: Change directory to your preferred directory Run ipython After running ipython: Use %cd /Enter/your/preferred/path/here/ Use %pwd to check your current directory
7
276
0
When I open a Jupyter notebook (formerly IPython) it defaults to C:\Users\USERNAME. How can I change this so to another location?
Change IPython/Jupyter notebook working directory
1
0
0
616,353
15,680,463
2013-03-28T11:20:00.000
23
0
1
0
ipython,jupyter-notebook,jupyter
28,818,885
31
false
0
0
Usually $ ipython notebook will launch the notebooks and kernels in the current working directory of the terminal. But if you want to specify the launch directory, you can use the --notebook-dir option as follows: $ ipython notebook --notebook-dir=/path/to/specific/directory
7
276
0
When I open a Jupyter notebook (formerly IPython) it defaults to C:\Users\USERNAME. How can I change this so to another location?
Change IPython/Jupyter notebook working directory
1
0
0
616,353
15,680,463
2013-03-28T11:20:00.000
0
0
1
0
ipython,jupyter-notebook,jupyter
38,059,637
31
false
0
0
If you are using ipython in Windows, then follow these steps: navigate to ipython notebook in Programs, right click on it and go to Properties. In the Shortcut tab, change the 'Start in' directory to your desired directory. Restart the kernel.
7
276
0
When I open a Jupyter notebook (formerly IPython) it defaults to C:\Users\USERNAME. How can I change this so to another location?
Change IPython/Jupyter notebook working directory
0
0
0
616,353
15,680,463
2013-03-28T11:20:00.000
0
0
1
0
ipython,jupyter-notebook,jupyter
35,961,240
31
false
0
0
In the command line, before typing "jupyter notebook", navigate to the desired folder. In my case all my Python files are in "D:\Python". Then type the command "jupyter notebook" and there you have it: you have changed your working directory.
7
276
0
When I open a Jupyter notebook (formerly IPython) it defaults to C:\Users\USERNAME. How can I change this so to another location?
Change IPython/Jupyter notebook working directory
0
0
0
616,353
15,680,463
2013-03-28T11:20:00.000
0
0
1
0
ipython,jupyter-notebook,jupyter
35,046,270
31
false
0
0
I have a very effective method to save the notebooks in a desired location in Windows. One-off activity: make sure the path of jupyter-notebook.exe is added to the PATH environment variable. Open your desired directory, either from Windows Explorer or by cd from the command prompt. From Windows Explorer, in your desired folder, select the address bar (so that the path label is fully selected) and type jupyter-notebook.exe. Voila!! The notebook opens from the desired folder, and any new notebook will be saved in this location.
7
276
0
When I open a Jupyter notebook (formerly IPython) it defaults to C:\Users\USERNAME. How can I change this so to another location?
Change IPython/Jupyter notebook working directory
0
0
0
616,353
15,680,463
2013-03-28T11:20:00.000
1
0
1
0
ipython,jupyter-notebook,jupyter
39,199,310
31
false
0
0
If you are using ipython in Linux, then follow these steps: !cd /directory_name/ You can try all the commands which work in your Linux terminal. !vi file_name.py Just put the exclamation (!) symbol before your Linux commands.
7
276
0
When I open a Jupyter notebook (formerly IPython) it defaults to C:\Users\USERNAME. How can I change this so to another location?
Change IPython/Jupyter notebook working directory
0.006452
0
0
616,353
15,681,153
2013-03-28T11:57:00.000
14
0
1
0
ipython,ipython-notebook
15,689,558
3
true
0
0
Running %edit? will give you the help for the %edit magic function. You need to set c.TerminalInteractiveShell.editor, which is in your ipython_config.py. I'm not quite sure where this is located in Windows; on OS X and Linux, it is in ~/.ipython. You'll want to set the variable to be the full path of the editor you want. Alternatively, you can create an environment variable EDITOR in Windows itself, and set that equal to the full path of the editor you want. iPython should use that.
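For example, the relevant line in ipython_config.py might look like this (the Notepad++ install path below is an assumption; point it at wherever your editor actually lives):

```python
# In ipython_config.py -- tell %edit which external editor to launch.
# The path is illustrative; adjust it to your installation.
c.TerminalInteractiveShell.editor = r'C:\Program Files\Notepad++\notepad++.exe'
```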
2
15
0
I am using IPython notebook and I want to edit programs in an external editor. How do I get the %edit file_name.py to open an editor such as Notepad++.
External editor for IPython notebook
1.2
0
0
11,889
15,681,153
2013-03-28T11:57:00.000
0
0
1
0
ipython,ipython-notebook
51,153,923
3
false
0
0
Try the 'Pycharm' editor This works for me.
2
15
0
I am using IPython notebook and I want to edit programs in an external editor. How do I get the %edit file_name.py to open an editor such as Notepad++.
External editor for IPython notebook
0
0
0
11,889
15,681,266
2013-03-28T12:01:00.000
0
0
0
0
java,python,web-applications
15,681,587
1
true
1
0
To allow the user to interact with the desktop in real time, you need to run the application in the user's web browser. Interaction with a webserver would just be too slow to do anything meaningful. I do not know of any way to execute Python in a web browser, so I would rule it out. Some of your options for client-side code execution are: JavaScript (the recent addition of Canvas and WebSocket made it suitable for this kind of problem) Java applets (fell out of favor recently due to security problems) ActiveX (IE- and Windows-only, very rarely used in a public context nowadays) Flash (a popular but dying technology)
1
0
0
I did a pretty fair bit of scouring, yet could not find anything useful which answers my questions. Either that or I am asking the wrong questions. I am trying to make a web application which gives a user a graphical view of the server desktop. I have understood that somewhere in here the X engine has to be invoked, and I have also understood that this is not something that PHP can accomplish, primarily because it's a language which processes before sending requests; please correct me if I am wrong in this regard. You may say that what I am trying to accomplish is akin to what TeamViewer does, only on the web. My dilemma is whether I should be using Python or Java for this task; both would be pretty apt, but which one would be better? Please give your suggestions.
Making a web app which allows the user to view the server desktop
1.2
0
0
54
15,686,853
2013-03-28T16:13:00.000
4
0
0
1
google-app-engine,python-2.7,google-cloud-messaging
17,506,596
2
true
1
0
You can check the IP easily by doing a ping from the command line to the domain name, as in "ping appspot.com". With this you will obtain the response from the real IP. Unfortunately this IP will change over time and won't make your GCM service work. In order to make it work you only need to leave the allowed IPs field blank.
1
4
0
I have implemented GCM using my own sever. Now I'm trying to do the same using Python 2.7 in Google App Engine. How can I get the IP address for the server hosting my app? (I need it for API Key). Is IP-LookUp only option? And if I do so will the IP address remain constant?
API Key for GCM from GAE
1.2
0
1
907
15,688,887
2013-03-28T17:56:00.000
3
0
1
1
operating-system,task,cpu,python-idle,rtos
15,688,958
4
false
0
0
There's always code to run, the idle task is the code if there's nothing else. It may execute a special CPU instruction to power down the CPU until a hardware interrupt arrives. On x86 CPUs it's hlt (halt).
2
4
0
It sounds reasonable that the os/rtos would schedule an "Idle task". In that case, wouldn't it be power consuming? (it sounds reasonable that the idle task will execute: while (true) {} )
What happens in the CPU when there is no user code to run?
0.148885
0
0
2,611
15,688,887
2013-03-28T17:56:00.000
5
0
1
1
operating-system,task,cpu,python-idle,rtos
15,689,206
4
false
0
0
Historically it's been a lot of different schemes, especially before reducing power consumption in idle was an issue. Generally there is an "idle" process/task that runs at the lowest priority and hence always gets control when there's nothing else to do. Many older systems would simply have this process run a "do forever" loop with nothing of consequence in the loop body. One OS I heard of would run machine diagnostics in the idle process. A number of early PCs would run a memory refresh routine (since memory needed to be cycled regularly or it would "evaporate"). (A benefit of this scheme is that 100% minus the % CPU used by the idle process gives you the % CPU utilization -- a feature that was appreciated by OS designers.) But the norm on most modern systems is to either run a "halt" or "wait" instruction or have a special flag in the process control block that even more directly tells the processor to simply stop running and go into power-saving mode.
2
4
0
It sounds reasonable that the os/rtos would schedule an "Idle task". In that case, wouldn't it be power consuming? (it sounds reasonable that the idle task will execute: while (true) {} )
What happens in the CPU when there is no user code to run?
0.244919
0
0
2,611
15,688,954
2013-03-28T17:59:00.000
1
1
1
0
python,cpython,python-c-extension
15,692,895
1
true
0
0
You can't unload C extension modules at all. There is just no way to do it, and I know for sure that most of the standard extension modules would leak like crazy if there was.
1
0
0
The CPython headers define a macro to declare a method that is run to initialize your module on import: PyMODINIT_FUNC My initializer creates references to other python objects, what is the best way to ensure that these objects are properly cleaned up / dereferenced when my module is unloaded?
What's the proper way to clean up static python object references in a CPython extension module?
1.2
0
0
353
15,690,201
2013-03-28T19:10:00.000
1
0
0
1
python,installation,twisted
15,705,793
2
false
0
0
Use virtualenv to create your private Python libraries installation.
1
1
0
I have a server that I'd like to use to maintain persistent connections with a set of devices, just so that they can pass simple messages back and forth. It's a trivial task, but selecting a server-side platform has been surprisingly difficult (especially since I have no administrative privileges - it's a dedicated commercial server). My best idea so far is to write a TCP server in Python. The Twisted platform seems suitable for the task and has a lot of good reviews. However, my server has Python 2.7 but not Twisted, and the admins have been reluctant to install it for me. Is there any way that I can just upload a Twisted package to the server and reference it in my libraries without installing it as a framework?
Using Twisted on a server without installation privileges?
0.099668
0
0
93
15,692,874
2013-03-28T21:56:00.000
0
0
1
1
python,scons
16,347,496
3
false
0
0
I made it work by just setting an environment variable on Windows, TEST="OS=win7 CPU=x86_64", and then running the scons script as scons %TEST%
1
1
0
I am using scons to build on Windows. My SConscript file takes certain command-line options to build, like OS=win7 CPU=x86_64 etc. Every time I run scons from the command line I have to type these options. Is there a way I can put them in the SConscript file, or set an environment variable, so that I don't have to type them every time I build? I tried setting SCONSFLAGS but it didn't seem to work. Thanks in advance.
Not Having to Specify Command-Line Options Each Time
0
0
0
59
15,693,565
2013-03-28T22:53:00.000
1
1
0
1
python,testing,jenkins,distributed
15,693,722
2
false
0
0
To debug this: Add set -x towards the top of your shell script. Set a PS4 which prints the line number of each line when it's invoked: PS4='+ $BASH_SOURCE:$FUNCNAME:$LINENO:' Look in particular for any places where your scripts assume environment variables which aren't set when Hudson is running. If your Python scripts redirect stderr (where logs from set -x are directed) and don't pass it through to Hudson (and so don't log it), you can redirect it to a file from within the script: exec 2>>logfile There are a number of tools other than Jenkins for kicking off jobs across a number of machines, by the way; MCollective (which works well if you already use Puppet), knife ssh (which you'll already have if you use Chef -- which, in my not-so-humble opinion, you should!), Rundeck (which has a snazzy web UI, but shouldn't be used by anyone until this security bug is fixed), Fabric (which is a very good choice if you don't have mcollective or knife already), and many more.
1
1
0
I have a ton of scripts I need to execute, each on a separate machine. I'm trying to use Jenkins to do this. I have a Python script that can execute a single test and handles time limits and collection of test results, and a handful of Jenkins jobs that run this Python script with different args. When I run this script from the command line, it works fine. But when I run the script via Jenkins (with the exact same arguments) the test times out. The script handles killing the test, so control is returned all the way back to Jenkins and everything is cleaned up. How can I debug this? The Python script is using subprocess.Popen to launch the test. As a side note, I'm open to suggestions for how to do this better, with or without Jenkins and my Python script. I just need to run a bunch of scripts on different machines and collect their output.
Shell scripts have different behavior when launched by Jenkins
0.099668
0
0
1,510
15,694,341
2013-03-29T00:09:00.000
1
0
0
0
python,forms,parsing,templates,jinja2
15,704,671
1
true
1
0
I think Jinja makes sense for building this, in particular because it contains a full-on lexer and parser. You can leverage those to derive your own versions of this that do what you need.
1
2
0
I'd like to do somewhat the contrary of what a template is usually used for: I want to write templates and programmatically derive a representation of the different tags and placeholders present in the template, to ultimately generate a form. To put it another way: where you usually have the data and populate the template with it, I want to have the template and ask the user for the right data to fill it. Example (with pseudo-syntax): Hello {{ name_of_entity only-in ['World', 'Universe', 'Stackoverflow'] }}! With that I could programmatically derive that I should generate a form with a select tag named 'name_of_entity' having 3 options ('World', 'Universe', 'Stackoverflow'). I looked into Jinja2, and it seems I can reach my goal by extending it (even if it's made to do things the other way around). But I am still unsure how to handle some cases, e.g.: if I want to represent that {{ weekday }} has values only in ['Mo', 'Tu', ...]; if I want to represent in the template that the {{ amount }} variable accepts only integers... Is Jinja a good base to reach these goals? If yes, how would you recommend doing it?
Template to forms
1.2
0
0
83
15,694,721
2013-03-29T00:54:00.000
0
0
1
0
python
15,694,785
5
false
0
0
Sample the time after each input (up to you whether to do it only for successful commands or optionally include invalid ones). Compare this time to the prior sample and divide by some world tick interval. Iterate through the list of activities that happen per tick (for npc in npcs: npc.move_to_adjacent_posn(), e.g.).
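That tick scheme might look like this in Python (the World class and the 120-second tick interval are illustrative assumptions, not part of the answer's spec):

```python
import time

class World:
    """Compare the current time with the last sample and run one update
    per elapsed world tick, so NPCs keep moving however long the player
    dawdles at the raw_input prompt."""

    TICK = 120.0                          # world tick interval in seconds

    def __init__(self, now=None):
        self.last = now if now is not None else time.time()
        self.ticks_run = 0

    def catch_up(self, now=None):
        """Call this after each input; runs all ticks that have elapsed."""
        now = now if now is not None else time.time()
        elapsed = int((now - self.last) // self.TICK)
        for _ in range(elapsed):
            self.ticks_run += 1           # here: move NPCs, apply hunger, heal...
        self.last += elapsed * self.TICK  # keep the remainder for next time
        return elapsed
```

The `now` parameters exist only to make the logic testable with fake clocks; in the game you would call catch_up() with no arguments right after each raw_input returns.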
3
3
0
I am writing a simple text based adventure game in Python. I would like to have certain processes occur periodically regardless of what the user does, approximately every 2 minutes. For instance: Have NPC's move around the rooms, have people get hungry and thirsty, have people heal, and during combat, have the battle proceed. Right now, I'm using 'raw_input' to get commands from the user, but this essentially pauses the code. How can I make the game proceed even if the user just sits there and doesn't type anything?
Coding timed functions into text based game using Python
0
0
0
401
15,694,721
2013-03-29T00:54:00.000
0
0
1
0
python
15,694,761
5
false
0
0
The answer is -- don't write real time for a console! If you want to do this text-based, you may wish to switch to Tkinter. This will allow you to do these things separately -- and also display text during these periodic events, and use a simple .after() call to execute them.
3
3
0
I am writing a simple text based adventure game in Python. I would like to have certain processes occur periodically regardless of what the user does, approximately every 2 minutes. For instance: Have NPC's move around the rooms, have people get hungry and thirsty, have people heal, and during combat, have the battle proceed. Right now, I'm using 'raw_input' to get commands from the user, but this essentially pauses the code. How can I make the game proceed even if the user just sits there and doesn't type anything?
Coding timed functions into text based game using Python
0
0
0
401
15,694,721
2013-03-29T00:54:00.000
0
0
1
0
python
15,702,219
5
false
0
0
I am not sure how you can do this without using a separate thread (and it is easy to use a separate thread). But my point here is: it looks like your text-based game is an event/command-based application, i.e. the client state won't change if there is no further command/event from the user. I'm not sure what you are trying to monitor with a timed function, but if your application is not already event-based, i.e. aggregating its state from the set of events the user performs/sends, then you might want to make it event-based so that you can get rid of the timed function. Hope that helps.
3
3
0
I am writing a simple text based adventure game in Python. I would like to have certain processes occur periodically regardless of what the user does, approximately every 2 minutes. For instance: Have NPC's move around the rooms, have people get hungry and thirsty, have people heal, and during combat, have the battle proceed. Right now, I'm using 'raw_input' to get commands from the user, but this essentially pauses the code. How can I make the game proceed even if the user just sits there and doesn't type anything?
Coding timed functions into text based game using Python
0
0
0
401
15,700,627
2013-03-29T09:32:00.000
3
0
0
0
python,tkinter
15,724,761
1
true
0
1
Both actions will occur. When you click on a radiobutton, first the variable will change its value, and after that the event handler passed as command option is called if present. Also your example would not work, since add_radiobutton doesn't allow the onvalue and offvalue options - only value.
1
3
0
goal Understanding how a radiobutton in a Tkinter menu works code I have a radio button inside the options menu as so: v = BooleanVar() v.set(True) options.add_radiobutton(label="change pop up", command =togglePopUp,variable=v,onvalue=True,offvalue=False) togglePopUp is a function that changes the value of variable v from True to False or vice versa. Main window is already opened and this menu will be added later to the window. This is just the fragment of code that is related to the radiobutton. Question Now my question is when I press the radiobutton (after running the code) will the value of the variable be changed or will the function togglePopUp be called? If the function will be called then what will happen to the status of the radiobutton? will the status of the radiobutton be updated instantly or will there be a delay? research I read about the radiobutton and the Boolean variable from the Tkinter book at effbot.org. But I was not convinced about how it worked. I tried a program but I am not getting the output that I essentially want. So I decided to understand how it works at a deeper level. specs python 2.7 Tkinter 8.5 Linux Mint 14
Radio button in a Tkinter menu
1.2
0
0
4,326
15,703,520
2013-03-29T12:45:00.000
1
0
1
0
python
15,703,555
1
true
0
0
When inserting into an empty list, make both head and tail refer to the new node. Also, make sure that the node's next and previous references are consistent with what the rest of the code is expecting.
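A minimal sketch of the empty-list case described above (the class and method names are illustrative, not from the asker's code): when the list is empty, the new node becomes both head and tail, with both of its links left as None.

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.prev = None
        self.next = None

class DoublyLinkedList:
    def __init__(self):
        self.head = None
        self.tail = None

    def append(self, value):
        node = Node(value)
        if self.head is None:
            # Empty list: the new node is both ends of the list.
            self.head = self.tail = node
        else:
            # Non-empty list: link the new node after the current tail.
            node.prev = self.tail
            self.tail.next = node
            self.tail = node
        return node
```

The same "is the list empty?" check is what keeps the prev/next references consistent for every later insertion.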
1
1
0
I know how to add nodes before and after the head and tail, but I don't know how to add a node to an empty doubly linked list. How would I go about doing this? Thank you.
Adding to empty doubly linked list in python
1.2
0
0
313
15,704,873
2013-03-29T14:05:00.000
0
0
0
1
python,google-app-engine
15,735,772
1
false
1
0
Since you have the app in C:\myap you need to run appcfg.py update C:\myap. It's just the path to your app on your machine, typed in the Windows command line. For example, "C:\Program Files (x86)\Google\google_appengine\appcfg.py" update C:\myap No, appcfg uses SSL while uploading, so it's safe. If you mean calling the application's upload itself - that's not really safe, and I don't know why you would need it. You can add app developers in the App Engine admin console, so they will be able to deploy the application from their own accounts.
1
0
0
I have the myapp.py and app.yaml in my windows C:\myap directory. The docs say to use: appcfg.py update myapp/ to upload the app. I've downloaded/installed Python and the Google python kit. Sorry, for these noobish questions, but: Is the myapp/ listed above refer to c:\myapp on my windows machine? Or is it the name of my app on the google side? How/where do I type the appcfg.py to upload my directory? Are there any security issues associated with using my gmail account and email address? I'd like anybody from Second Life to be able to call this from in-world. There will be about a dozen calls a week. Are they going to have to authenticate with my email/password to use it? Thanks for any help you can provide!
Noob questions about upload & security
0
0
0
83
15,705,511
2013-03-29T14:41:00.000
2
0
0
0
python,orm,sqlalchemy,where-clause
15,707,037
1
true
0
0
You can modify query._whereclause directly, but I'd seek a way to avoid this issue in the first place - wherever the Query is generated should be factored out so that a version without the WHERE clause is made available.
1
2
0
If I've been given a Query object that I didn't construct, is there a way to directly modify its WHERE clause? I'm really hoping to be able remove some AND statements or replace the whole FROM clause of a query instead of starting from scratch. I'm aware of the following methods to modify the SELECT clause: Query.with_entities(), Query.add_entities(), Query.add_columns(), Query.select_from() which I think will also modify the FROM. And I see that I can view the WHERE clause with Query.whereclause, but the docs say that it's read-only. I realize I'm thinking in SQL terms, but I'm more familiar with those concepts than the ORM, at this point. Any help is very appreciated.
SQLAlchemy ORM: modify WHERE clause
1.2
1
0
489
15,710,399
2013-03-29T19:45:00.000
-1
0
0
0
python,http,post
15,710,412
2
false
0
0
You won't get a writable file object via urllib2 / urllib. The return value is a "file-like" object which supports iteration and reading. You can, however, read the contents and create your own file object for writing.
1
0
0
I have a Python script which produces some data. I would like to stream it to an HTTP server using POST. That is, I don't want to accumulate the data in a buffer and then send it -- I just want to send it as it's created. There will be a lot. The apparently obvious way to do this would be to open the HTTP connection in some way that returns a writable file object, write the data to that object, and then close the connection. However, it's not obvious to me that this is supported in any of the libraries I looked at (urllib2, httplib, and requests). How can I accomplish this?
How can I get a writeable file object for an HTTP POST in Python?
-0.099668
0
1
147
15,711,233
2013-03-29T20:51:00.000
2
0
0
0
python,model-view-controller,web-frameworks
15,712,144
1
true
1
0
A view, from django's perspective, is what content is presented on a page, and the template is how it is presented. A django view is not exactly a controller equivalent. The controller in some of those other frameworks determines how the call of a function happens; in django, that is part of the framework itself. Technically, there is nothing preventing you from renaming your views into controllers. The URL routing scheme takes either the function or a string path to the function. As long as you can send the appropriate string to the function (or the function itself), you can call your view whatever you want. However, for the reason stated in the paragraph above, and to meet the expectations of other people who work with django, you should not really have files called controller.py. It's just a matter of getting used to. Hang in there for a bit.
1
1
0
I've developed many applications using the MVC pattern in Zend and Symfony. Now that I'm in Pythonland, I find that many frameworks such as Flask, Django and Pyramid use a file called views.py to contain functions which implement the routes. But, these "views" are really controllers in other MVC frameworks I've used before. Why are they called views in Python web frameworks? And, can I change them to controller.py without tearing a hole in the Python universe?
Why do Python MVC web frameworks use views.py to contain route functions?
1.2
0
0
583
15,711,677
2013-03-29T21:28:00.000
1
0
0
0
python,database,django,sqlite,triggers
15,712,350
2
false
1
0
A better way would be to have the application that modifies the records call yours, or at least make a Celery queue entry, so that you don't have to query the database too often to see if something changed. But if that is not an option, letting Celery query the database to find out if something changed is probably the next best option (surely better than the other possible option of calling a web service from the database as a trigger, which you should really avoid).
1
1
0
I want to develop an application that monitors the database for new records and allows me to execute a method in the context of my Django application when a new record is inserted. I am planning to use an approach where a Celery task checks the database for changes since the last check and triggers the above method. Is there a better way to achieve this? I'm using SQLite as the backend and tried apsw's setupdatehook API, but it doesn't seem to run my module in Django context. NOTE: The updates are made by a different application outside Django.
Trigger Django module on Database update
0.099668
0
0
2,861
15,714,976
2013-03-30T04:29:00.000
0
0
1
1
python,linux,pyqt,portability,python-bindings
15,716,571
1
true
0
1
If you package your application in the Linux distribution's package format, it can contain dependency information. That is the canonical solution to this problem. Otherwise you'd have to include all nested dependencies to make sure that it'll work.
1
0
0
I've managed to make a single working executable file (for Windows) from a PyQt based Python app using PyInstaller, but is it also possible for Linux? On linux machine (LUbuntu), when I run the .py script, I've got errors about missing PyQt bindings and I can't even download them by apt-get because of inability to connect the servers. It would be much more convenient to somehow pack the missing libraries to my program's files in order to make it more portable, but how can I do it?
How to convert a Python PyQt based program to a portable package in Linux?
1.2
0
0
1,029
15,719,667
2013-03-30T14:35:00.000
0
0
0
0
python,wxpython,listctrl
15,721,580
1
false
0
0
I don't think that is possible to do with a standard listctrl. Try poking around at the UltimateListCtrl; being a fully owner-drawn listctrl, it has the ability to change the way it looks far more than a standard listctrl.
1
0
0
I have created a listctrl with some of the data in the listctrl are very long, and instead of showing all of the text it ends with .... For example Att PSSM_r1_0_T is [-10.179077,0.944198]|Att PSSM_r1_0_Y is.... How would i be able to make it so it shows all of the text. Something like Att PSSM_r1_0_T is [-10.179077,0.944198]|Att PSSM_r1_0_Y is [-4.820935,9.914433]|Att PSSM_r1_2_I is [-8.527803,1.953804]|Att PSSM_r1_2_K is [-12.083334,-0.183813]|Att PSSM_r1_2_V is [-14.112536,5.857771]|1 As the text is very long I would prefer if it covered more than one line.
listctrl new line for an data item
0
0
0
87
15,720,120
2013-03-30T15:22:00.000
8
0
1
1
python,multithreading,multiprocessing,pipe,communication
23,668,801
1
true
0
0
I believe everything you've stated is correct. On Linux, os.pipe is just a Python interface for accessing traditional POSIX pipes. On Windows, it's implemented using CreatePipe. When you call it, you get two ordinary file descriptors back. It's unidirectional, and you just write bytes to it on one end that get buffered by the kernel until someone reads from the other side. It's fairly low-level, at least by Python standards. multiprocessing.Pipe objects are a much higher-level interface, implemented using multiprocessing.Connection objects. On Linux, these are actually built on top of POSIX sockets, rather than POSIX pipes. On Windows, they're built using the CreateNamedPipe API. As you noted, multiprocessing.Connection objects can send/receive any picklable object, and will automatically handle the pickling/unpickling process, rather than just dealing with bytes. They're capable of being both bidirectional and unidirectional.
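A small same-process demonstration of the two differences the answer above lists (purely illustrative; in real use the two ends would normally live in different processes):

```python
import os
from multiprocessing import Pipe

# os.pipe(): two raw file descriptors; unidirectional, bytes only.
r, w = os.pipe()
os.write(w, b"hello")
assert os.read(r, 5) == b"hello"
os.close(r)
os.close(w)

# multiprocessing.Pipe(): two Connection objects; bidirectional by
# default (duplex=True), and send()/recv() pickle whole objects.
a, b = Pipe()
a.send({"answer": 42})
assert b.recv() == {"answer": 42}
b.send([1, 2, 3])  # the reverse direction works too
assert a.recv() == [1, 2, 3]
a.close()
b.close()
```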
1
12
0
Recently I'm studying parallel programming tools in Python. And here are two major differences between os.pipe and multiprocessing.Pipe.(despite the occasion they are used) os.pipe is unidirectional, multiprocessing.Pipe is bidirectional; When putting things into pipe/receive things from pipe, os.pipe uses encode/decode, while multiprocessing.Pipe uses pickle/unpickle I want to know if my understanding is correct, and is there other difference? Thank you.
Python os.pipe vs multiprocessing.Pipe
1.2
0
0
1,971
15,721,749
2013-03-30T18:09:00.000
2
0
0
0
python,wxpython,wxwidgets
15,724,389
1
false
0
1
Look at the wx.lib.buttons module for various flavors of generic buttons. Also, in the 2.9 release series the stock button class (wx.Button) can have a bitmap + text label.
1
0
0
How do I create a bitmap button with attached text label in wxpyython. I have not come across any such generic buttons till now. I believe I will have to create one myself. How do I do it? Thanks in advance
Bitmap Button with attached text label in wxPython
0.379949
0
0
1,315
15,724,954
2013-03-31T00:13:00.000
0
0
1
0
python,full-text-search
15,747,164
2
false
0
0
I get the feeling you want to use MapReduce-type processing for the search. It should be very scalable, and Python has MapReduce packages.
1
5
0
I have around 80,000 text files and I want to be able to do an advanced search on them. Let's say I have two lists of keywords and I want to return all the files that include at least one of the keywords in the first list and at least one in the second list. Is there already a library that would do that, I don't want to rewrite it if it exists.
python advanced search library
0
0
0
4,982
15,725,990
2013-03-31T03:19:00.000
2
0
1
0
java,python,c,computer-science,multilingual
15,726,086
5
true
0
0
It's certainly not unusual. Many parts of the Python standard library are written in C, many popular third-party libraries such as numpy have parts written in C, and you can create bindings to your own C library with ctypes. Part of Python's default GUI library Tkinter is written in Tcl/Tk. Java has the Java Native Interface (JNI), which can be used to integrate modules written to target the physical machine instead of the Java virtual machine. Scala can use libraries written for the JVM (including those written in Java, obviously) and it can use JNI as well. Most large software systems are written in multiple languages. Usually two languages are used: a fast, compiled language (usually C or C++) for performance-critical sections, and a scripting language (for example Python, Lisp, Lua) to write the complex but not performance-critical parts. There are two requirements for any languages to be able to interact. One is that they have to be able to share in-memory data in a mutually understood format; the second is that they have to be able to call each other's functions using a common "calling convention". Native interface libraries solve those issues.
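As a tiny illustration of one language calling into another, here is Python calling the system C math library through ctypes. This is just a sketch: the library lookup is platform-dependent, and the "libm.so.6" fallback assumes a glibc-based Linux system.

```python
import ctypes
import ctypes.util

# Locate and load the system C math library (the name differs per platform).
_name = ctypes.util.find_library("m")
libm = ctypes.CDLL(_name if _name else "libm.so.6")

# Declare the C signature so ctypes converts Python floats correctly
# instead of truncating them to integers.
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double
```

After the declarations, libm.sqrt(9.0) calls the C function directly; the two languages share data (a C double) and agree on the calling convention, which is exactly the pair of requirements mentioned above.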
2
2
0
Can the different components of a desktop software be programmed in different languages ? For example, a software called MultiProg consists of components Comp1, Comp2, Comp3 etc. All components except 1,2,3 are in java and 1 is in C, 2 in Python, 3 in Scala etc. Is it possible to do that ? When does the need to do this arise ? Is this commonly seen in the software industry ? How do we make the components communicate when they are written in different languages ?
Making (desktop) software whose components are programmed in different languages?
1.2
0
0
168
15,725,990
2013-03-31T03:19:00.000
2
0
1
0
java,python,c,computer-science,multilingual
15,726,120
5
false
0
0
There are two ways this can be done. The first way is to compile the different pieces (in different languages) into object files and link them. This only works for some languages and not others, and depends on the availability of suitable tools. For example, if one language does garbage collection you can't expect other languages to suddenly support it. The other way is to build the application as separate processes that communicate/cooperate. This avoids the linking problem, but means that you've got separate processes (which can be "less clean") plus serialisation/de-serialisation, etc. Note: there is a third way, which is building an interpreter or something into the application to run scripting stuff. I'm not sure if this counts (it depends on whether you consider the scripts part of the application's code or part of the data the application uses at run-time). Normally, nobody mixes languages without a good reason, because it's a pain in the neck for programmers. Most programmers know lots of languages but are only experts in a few, and the more languages you use, the more chance there is that one or more programmers won't be able to comprehend one or more pieces of the application's source code.
2
2
0
Can the different components of a desktop software be programmed in different languages ? For example, a software called MultiProg consists of components Comp1, Comp2, Comp3 etc. All components except 1,2,3 are in java and 1 is in C, 2 in Python, 3 in Scala etc. Is it possible to do that ? When does the need to do this arise ? Is this commonly seen in the software industry ? How do we make the components communicate when they are written in different languages ?
Making (desktop) software whose components are programmed in different languages?
0.07983
0
0
168
15,726,843
2013-03-31T06:01:00.000
1
1
0
0
python,html,apache,cgi
15,726,928
2
true
1
0
The default Content Type is text, and if you forgot to send the appropriate header in your CGI file, you will end up with what you are seeing.
2
0
0
I have a CGI script that I wrote in Python to use as the home page of the website I am creating. Everything works properly except that when you view the page, instead of seeing the page that it outputs, you see the source code of the page. Why is this? I don't mean that it shows me the source code of the .py file; it shows me all the printed information as if I were looking at a .htm file in Notepad.
Python CGI - Script outputs source of generated page
1.2
0
0
426
15,726,843
2013-03-31T06:01:00.000
2
1
0
0
python,html,apache,cgi
15,726,936
2
false
1
0
Add the following before you print anything: print "Content-type: text/html" (followed by a blank line, which ends the HTTP headers). It's also possible that your script is not getting executed at all. Is your Python script executable? Check whether you have the script under the cgi-bin directory.
2
0
0
I have a CGI script that I wrote in Python to use as the home page of the website I am creating. Everything works properly except that when you view the page, instead of seeing the page that it outputs, you see the source code of the page. Why is this? I don't mean that it shows me the source code of the .py file; it shows me all the printed information as if I were looking at a .htm file in Notepad.
Python CGI - Script outputs source of generated page
0.197375
0
0
426
15,730,471
2013-03-31T14:32:00.000
3
0
1
0
python,ipython,ipython-notebook
15,744,762
1
true
0
0
Not yet, we need to refactor the saving/renaming API. I would suggest "open a copy" as a workaround, where the copy would be the "oldest" notebook.
1
5
0
I want to save a file with a different name, and keep the file with the old name (ie, no renaming) in Ipython Notebook. Is there a standard "save as" feature?
"Save as" in IPython notebook
1.2
0
0
1,190
15,730,630
2013-03-31T14:47:00.000
0
0
1
0
python,macos,ipython,python-3.3,ipython-notebook
15,735,954
2
false
0
0
It's expected that ipython will use Python 2.x. You should use ipython3 to use Python 3.x.
1
0
0
Prior to last week's Mountain Lion 10.8 upgrade I would invoke the IPython web notebook using ipython notebook --pylab=inline, where I was running Python 3. Post-upgrade, everything changed for the worse. After a lot of hacking around the filesystem and changing permissions on /System/Library/Frameworks/Python.Framework from root to myself, I can now run python ipython3 notebook --pylab=inline; however, ipython run without the preceding python command wants to open Python 2.7. Anyone with similar issues, or can anyone give insight as to what is going on here?
Starting IPython with Mountain Lion 10.8
0
0
0
205
15,730,976
2013-03-31T15:25:00.000
0
0
0
0
python,django,encryption,passwords
15,960,639
2
false
1
0
Reconsider your decision to keep your old password hashes - EXCEPT if you already used a very modern and strong scheme for them (like pbkdf2, bcrypt, shaXXX_crypt), and NOT just some (salted or not) sha1 hash. I know it is tempting to just stay compatible and support the old scheme, but these old sha1 hashes (salted or unsalted, it doesn't matter much for brute-forcing) can be broken nowadays at a rate of > 1*10^9 guesses per second. Also, old password minimum-length requirements might need reconsideration for the same reason. The default Django password hashing scheme is a very secure one, by the way; you should really use it.
1
2
0
I am currently developing a tool in Python using Django, in which I have to import an existing user database. Obviously, the passwords for these existing users do not use the same encryption as the default password encryption used by Django. I want to override the password encryption method to keep my passwords unmodified. I can't find how to override an existing method in the documentation; I only found how to add information about a user (I can't find how to remove information - like first name or last name - about a user either, so if someone knows, please tell me). Thank you for your help.
Django Override password encryption
0
0
0
2,355
15,731,252
2013-03-31T15:51:00.000
1
0
1
0
python,list,tuples,cython,pypy
15,759,833
2
false
0
0
If your algorithm is bad in terms of computational complexity, then you cannot be saved; you need to write it better. Consult a good graph theory book or Wikipedia; it's usually relatively easy, although there are some algorithms that are both non-trivial and crazy hard to implement. This sounds like a thing that PyPy can speed up quite significantly, but only by a constant factor, and it does not involve any modifications to your code. Cython does not speed up your code all that much without type declarations, and it seems like this sort of problem cannot really be sped up just by types. The constant part is what's crucial here: if the algorithm's complexity grows like, say, 2^n (which is typical for a naive algorithm), then adding an extra node to the graph doubles your time. This means 10 extra nodes multiply the time by 1024, 20 nodes by 1024*1024, etc. If you're super-lucky, PyPy can speed up your algorithm by 100x, but that factor remains constant as the graph size grows (and you quickly run out of the lifetime of the universe one way or another).
1
1
0
I am working on a theoretical graph theory problem which involves taking combinations of hyperedges in a hypergrapha to analyse the various cases. I have implemented an initial version of the main algorithm in Python, but due to its combinatorial structure (and probably my implementation) the algorithm is quite slow. One way I am considering speeding it up is by using either PyPy or Cython. Looking at the documentation it seems Cython doesn't offer great speedup when it comes to tuples. This might be problematic for the implementation, since I am representing hyperedges as tuples - so the majority of the algorithm is in manipulating tuples (however they are all the same length, around len 6 each). Since both my C and Python skills are quite minimal I would appreciate it if someone can advise what would be the best way to proceed in optimising the code given its reliance on tuples/lists. Is there a documentation of using lists/tuples with Cython (or PyPy)?
using cython or PyPy to optimise tuples/lists (graph theory algorithm implemented in python)
0.099668
0
0
809
15,733,544
2013-03-31T19:24:00.000
0
0
1
0
python,time
35,672,134
2
false
0
0
Using time.clock() would be more accurate.
1
4
0
So I am using time.time() in my python module to track execution time and act as a while loop escape upon timeout. My question is when does time.time() rollover/overflow. Or does it? I don't fully comprehend python datatypes yet, so I am not sure how far it can keep increasing.
time.time() overflow/rollover
0
0
0
1,404
15,736,017
2013-04-01T00:06:00.000
2
0
0
0
python,html,django,apache,ubuntu
15,736,047
3
false
1
0
You cannot "run python scripts in html web pages". Everyone tells you to use something like Django because if you want to make a dynamic web site that executes server-side code in response to user input, you need something like Django or some other server-side web framework. So you have already been pointed in the right direction, but ignored it.
2
1
0
I need to create a website. The website needs to run a Python script once the user enters the data. I have searched the net for over a week now and haven't found any help. Everybody just tells me to download the Django framework, but nobody shows how to run Python scripts in HTML web pages. I don't have any experience in web design, as it is not my field, but I do know a bit of Python scripting and some HTML. Any kind of help that puts me in the right direction would be greatly appreciated.
Run python script for HTML web page
0.132549
0
0
27,138
15,736,017
2013-04-01T00:06:00.000
4
0
0
0
python,html,django,apache,ubuntu
15,736,057
3
false
1
0
If you wish to run Python scripts within a web page in the same way that Javascript runs within a web page, this is not possible, because web browsers don't natively understand Python. If you want to run Python code that generates an HTML page, you can use a framework like Django or Flask, which will require a server that supports this kind of framework (long running processes). You can also use a CGI Python script to do this, which will require your web server to have Python installed and be set up to run CGI scripts. Embedding Python in a HTML in the same way that PHP is embedded in an HTML page is generally not done in Python - it is considered an anti-pattern that leads to security problems and lots of bad practices. Python folks will generally not help you shoot yourself in the foot, unlike other communities, so you won't find much help for what is considered the wrong thing. Some template engines like Mako support using Python within templates to generate the HTML markup, but you will need to use it in conjunction with some other web framework to handle the HTTP request.
2
1
0
I need to create a website. The website needs to run a Python script once the user enters the data. I have searched the net for over a week now and haven't found any help. Everybody just tells me to download the Django framework, but nobody shows how to run Python scripts in HTML web pages. I don't have any experience in web design, as it is not my field, but I do know a bit of Python scripting and some HTML. Any kind of help that puts me in the right direction would be greatly appreciated.
Run python script for HTML web page
0.26052
0
0
27,138
15,736,995
2013-04-01T02:44:00.000
8
0
0
0
python,math,maps,mapping,latitude-longitude
15,737,078
8
false
0
0
One idea for speed is to transform the long/lat coordinates into 3D (x,y,z) coordinates. After preprocessing the points, use the Euclidean distance between them as a quickly computed undershoot of the actual distance.
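A possible sketch of that preprocessing (the Earth-radius constant and function names are my own, not from the answer). The straight-line chord 2*R*sin(theta/2) is always less than or equal to the surface arc R*theta, so it never overestimates, which is exactly what an A* heuristic needs:

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean Earth radius; an assumption of this sketch

def to_xyz(lat_deg, lon_deg):
    """Project a (latitude, longitude) pair onto 3D Cartesian coordinates."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    return (EARTH_RADIUS_KM * math.cos(lat) * math.cos(lon),
            EARTH_RADIUS_KM * math.cos(lat) * math.sin(lon),
            EARTH_RADIUS_KM * math.sin(lat))

def chord_km(p, q):
    """Straight-line (through-the-Earth) distance between two points.

    Always <= the true great-circle distance, so it is an admissible
    (undershooting) heuristic for A* graph search.
    """
    ax, ay, az = to_xyz(*p)
    bx, by, bz = to_xyz(*q)
    return math.sqrt((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2)
```

If to_xyz is run once per node up front, each heuristic evaluation is just a few multiplications, with no trigonometry on the hot path.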
1
63
0
I want to be able to get a estimate of the distance between two (latitude, longitude) points. I want to undershoot, as this will be for A* graph search and I want it to be fast. The points will be at most 800 km apart.
How can I quickly estimate the distance between two (latitude, longitude) points?
1
0
0
99,220
15,737,993
2013-04-01T05:17:00.000
0
1
0
0
python,apache,session,mod-wsgi,pyramid
15,778,904
1
false
1
0
This entirely depends on the authentication policy that you use. The default AuthTktAuthenticationPolicy sets a cookie in the browser which (by default) does not expire. Again though, this depends on how you are tracking authenticated users.
1
1
0
I am making a pyramid webapp running in apache webserver using mod_wsgi. Is there anyway I could make user session never timed out? (The idea is so that once user logged in, the system will never kicked them out unless they logged out themselves). I cant find any information regarding this in apache, mod_wsgi or pyramid documentation. Thanks!
Making Pyramid application without session timeout
0
0
0
284
15,740,464
2013-04-01T09:00:00.000
0
0
0
1
python,cx-oracle
15,745,441
1
false
0
0
The issue for me was that I had installed Python and cx_Oracle as root, but the Oracle client installation was done by the "oracle" user. I got my own Oracle installation and that fixed the issue. Later I ran into PyUnicodeUCS4_DecodeUTF16 issues with Python, and for that I had to build Python with the --enable-unicode=ucs4 option.
1
0
0
I have installed Python 2.7.3 on a 64-bit Linux machine. I have the Oracle 11g client (64-bit) installed as well, and I set ORACLE_HOME, PATH, LD_LIBRARY_PATH and installed cx_Oracle 5.1.2 for Python 2.7 & Oracle 11g. But the ldd command on cx_Oracle is unable to find libclntsh.so.11.1. I tried creating symlinks to libclntsh.so.11.1 under /usr/lib64 and updated oracle.conf under /etc/ld.so.conf.d/. I tried all possible solutions that have been discussed for this issue on the forums, but no luck. Please let me know what I am missing.
cx_oracle unable to find Oracle Client
0
1
0
412
15,741,564
2013-04-01T10:16:00.000
11
0
0
0
python,file-io
15,741,565
6
false
0
0
You can use the following script:

Pre-condition: 1.csv is the file that contains the duplicates; 2.csv is the output file that will be devoid of the duplicates once this script is executed.

Code:

    inFile = open('1.csv', 'r')
    outFile = open('2.csv', 'w')
    listLines = []
    for line in inFile:
        if line in listLines:
            continue
        else:
            outFile.write(line)
            listLines.append(line)
    outFile.close()
    inFile.close()

Algorithm explanation: open the file that has the duplicates in read mode. Then, in a loop that runs until the file is exhausted, check whether the line has already been encountered. If it has, don't write it to the output file; if it hasn't, write it to the output file and add it to the list of lines that have already been encountered.
1
37
0
Goal I have downloaded a CSV file from hotmail, but it has a lot of duplicates in it. These duplicates are complete copies and I don't know why my phone created them. I want to get rid of the duplicates. Approach Write a python script to remove duplicates. Technical specification Windows XP SP 3 Python 2.7 CSV file with 400 contacts
Removing duplicate rows from a csv file using a python script
1
0
0
90,343
15,743,408
2013-04-01T12:22:00.000
0
0
0
0
python,debugging,pdb,pudb
60,792,925
2
false
0
0
In pudb, you can set a breakpoint, then edit the breakpoint to be skipped a given number of times or only trigger on a given condition.
2
0
0
In pdb/ipdb/pudb, is there a trick whereby I can selectively activate set_trace() statements during runtime? I'm debugging somewhat complex code with probabilistic behavior, and I would like to interact with the program without the debugger distracting, and when a situation of interest arises, activate the set_trace/s. (This is combined with logging, but not relevant to the question). I think might be possible to do this with conditionals, but is there a better way?
Activating set_trace() selectively at runtime in pdb or sisters
0
0
0
134
15,743,408
2013-04-01T12:22:00.000
1
0
0
0
python,debugging,pdb,pudb
22,053,321
2
false
0
0
I think there is no such way, as pudb (and the other debuggers) can only set_trace() unconditionally. I'm not sure what you are trying to accomplish by moving the condition into set_trace() itself. If you have some repetitive code there, just wrap it in a function.
2
0
0
In pdb/ipdb/pudb, is there a trick whereby I can selectively activate set_trace() statements during runtime? I'm debugging somewhat complex code with probabilistic behavior, and I would like to interact with the program without the debugger distracting, and when a situation of interest arises, activate the set_trace/s. (This is combined with logging, but not relevant to the question). I think might be possible to do this with conditionals, but is there a better way?
Activating set_trace() selectively at runtime in pdb or sisters
0.099668
0
0
134
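One common pattern that fits the question, sketched below: guard the `set_trace()` call behind a module-level flag, so the breakpoints stay dormant until you flip the flag at runtime (from a signal handler, a config reload, or the debugger itself). The flag name and the condition here are illustrative, not part of pdb's API:

```python
import pdb

DEBUG_TRIGGER = False  # flip to True at runtime when the interesting case arises


def process(item):
    # The debugger only activates when the flag is set AND the
    # situation of interest occurs; otherwise it stays silent.
    if DEBUG_TRIGGER and item < 0:
        pdb.set_trace()
    return item * 2


results = [process(x) for x in [1, 2, 3]]
print(results)  # [2, 4, 6]
```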
15,744,495
2013-04-01T13:34:00.000
0
1
0
1
python,linux,windows,usb,pyserial
15,745,200
2
false
0
0
Normally on the Linux side, if the USB dongle is of the right type, you will see something like /dev/usbserial or a similar device. Check dmesg after plugging in the cable. (On Linux you can run find /dev | grep usb to list all USB-related devices.) Just a side note: I've seen that the Beaglebone has an ethernet port, so why not just use a network socket? It's all easier than reinventing a protocol on USB.
1
2
0
My setup looks like this: A 64-bit box running Windows 7 Professional is connected to a Beaglebone running Angstrom Linux. I'm currently controlling the beaglebone via a putty command line on the windows box. What I'd like to do is run an OpenCV script to pull some vision information, process it on the windows box, and send some lightweight data (e.g a True or False, a triplet, etc.) over the (or another) USB connection to the beaglebone. My OpenCV program is running using Python bindings, so any piping I can do with python would be preferable. I've played around with pyserial to receive data on a windows box via a COM port, so it seems like I could use that on the windows side... at a total loss though on the embedded linux front
How to send data from Windows to embedded linux over USB
0
0
0
956
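The network-socket suggestion in the answer is easy to sketch with the stdlib alone. Below, a plain TCP connection stands in for the Windows-to-Beaglebone link: the Beaglebone side listens, the OpenCV side connects and sends a lightweight result. The loopback address and the payload are placeholders for the real board address and data:

```python
import socket
import threading

def beaglebone_side(server):
    # Accept one connection and read the small payload sent by the vision box.
    conn, _ = server.accept()
    data = conn.recv(64)
    conn.close()
    return data

# Listener (would run on the Beaglebone; port 0 picks a free port here).
server = socket.socket()
server.bind(('127.0.0.1', 0))
server.listen(1)
port = server.getsockname()[1]

result = {}
t = threading.Thread(target=lambda: result.update(data=beaglebone_side(server)))
t.start()

# Sender (would run on the Windows box alongside the OpenCV script).
client = socket.socket()
client.connect(('127.0.0.1', port))
client.sendall(b'True')  # e.g. a True/False vision result
client.close()
t.join()
print(result['data'])  # b'True'
```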
15,745,398
2013-04-01T14:33:00.000
3
0
1
0
python,slider,tkinter
15,745,558
1
true
0
1
You can use the state=DISABLED option in the constructor, which is accepted by almost all Tkinter widgets; you can later re-enable the slider with .config(state=NORMAL) if the condition no longer applies.
1
0
0
I was wondering if there was a way to make a slider static under certain conditions, i.e. the program is under trial license. One solution I've thought up was putting an image instead of a slider, but I'd prefer a static slider. Is there a way I could do this? Thanks!!
Making a python 2 Tkinter slider static under certain conditions
1.2
0
0
51
15,750,551
2013-04-01T19:44:00.000
0
0
1
0
python,jinja2,pycharm
70,014,588
5
false
1
0
In the Community Edition the Python template-language option is not available, so you can simply click on Python Packages next to the Terminal at the bottom. This will also let you add Jinja2.
1
81
0
A bottle project of mine uses Jinja2. PyCharm does not automatically recognize it and shows such lines as errors. Is there a way to make Jinja2 work?
Does PyCharm support Jinja2?
0
0
0
42,666
15,750,660
2013-04-01T19:50:00.000
8
1
1
0
python,text,binary,ascii
15,750,957
1
false
0
0
The difference only matters on Windows: in the latter case ('wb'), .write('\n') writes one byte with a value of 10; in the former case ('w'), it writes two bytes, with the values 13 and 10. You can prove this to yourself by looking at the resulting file sizes and examining the files in a hex editor. On POSIX-related operating systems (UNIX, SunOS, macOS, Linux, etc.), there is no difference between 'w' and 'wb'.
1
7
0
Wondering what the real difference is when writing files from Python. From what I can see, if I use w or wb I am getting the same result with text. I thought that saving as a binary file would show only binary values in a hex editor, but it also shows text and then the ASCII version of that text. Can both be used interchangeably when saving text? (Windows user)
Python file IO 'w' vs 'wb'
1
0
0
19,466
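The byte-level difference described in the answer can be made visible on any OS. In Python 3, the `newline` parameter of `open()` reproduces Windows' default text-mode translation explicitly, so the demo below works on Linux and macOS too:

```python
# Text mode with newline='\r\n' mimics Windows' default behavior:
# each '\n' written is translated to the two bytes b'\r\n'.
with open('text_mode.txt', 'w', newline='\r\n') as f:
    f.write('hello\n')

# Binary mode writes bytes exactly as given: '\n' stays a single byte.
with open('binary_mode.txt', 'wb') as f:
    f.write(b'hello\n')

text_bytes = open('text_mode.txt', 'rb').read()
bin_bytes = open('binary_mode.txt', 'rb').read()
print(text_bytes)  # b'hello\r\n'  (7 bytes)
print(bin_bytes)   # b'hello\n'    (6 bytes)
```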
15,754,610
2013-04-02T01:22:00.000
6
0
0
0
python,amazon-s3,gzip,boto
15,763,863
3
true
0
0
There really isn't a way to do this because S3 doesn't support true streaming input (i.e. chunked transfer encoding). You must know the Content-Length prior to upload and the only way to know that is to have performed the gzip operation first.
1
17
0
I have a large local file. I want to upload a gzipped version of that file into S3 using the boto library. The file is too large to gzip it efficiently on disk prior to uploading, so it should be gzipped in a streamed way during the upload. The boto library knows a function set_contents_from_file() which expects a file-like object it will read from. The gzip library knows the class GzipFile which can get an object via the parameter named fileobj; it will write to this object when compressing. I'd like to combine these two functions, but the one API wants to read by itself, the other API wants to write by itself; neither knows a passive operation (like being written to or being read from). Does anybody have an idea on how to combine these in a working fashion? EDIT: I accepted one answer (see below) because it hinted me on where to go, but if you have the same problem, you might find my own answer (also below) more helpful, because I implemented a solution using multipart uploads in it.
How to gzip while uploading into s3 using boto
1.2
0
1
15,998
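The multipart route the questioner mentions in their edit can be sketched without any boto calls: gzip the stream chunk-by-chunk into an in-memory buffer, and flush the buffer as a "part" whenever it is large enough, so each part's Content-Length is known before upload. The actual upload call is left out, and `part_size` is tiny here for demonstration (real S3 multipart parts must be at least 5 MB):

```python
import gzip
import io

def gzip_parts(chunks, part_size=8):
    # Compress an iterable of byte chunks; yield fixed-ish-size compressed
    # parts whose concatenation is one valid gzip stream.
    buf = io.BytesIO()
    gz = gzip.GzipFile(fileobj=buf, mode='wb')
    for chunk in chunks:
        gz.write(chunk)
        if buf.tell() >= part_size:
            yield buf.getvalue()   # would be uploaded as one multipart part
            buf.seek(0)
            buf.truncate()
    gz.close()                      # writes the gzip trailer into buf
    if buf.getvalue():
        yield buf.getvalue()        # final part

parts = list(gzip_parts([b'hello ', b'world ', b'hello ', b'world ']))
data = b''.join(parts)
print(gzip.decompress(data))  # b'hello world hello world '
```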
15,757,213
2013-04-02T06:03:00.000
0
0
1
0
python-3.x,format,file-format
54,352,007
2
false
0
0
This may not be exactly appropriate for your question, but I think it may help you. I faced a similar problem and ended up creating a zip file and then renaming its extension to my custom file format. It can still be opened with WinRAR.
1
6
0
How do I start creating my own filetype in Python? I have a design in mind, but how do I pack my data into a file with a specific format? For example, I would like my file format to be a mix of an archive (like other formats such as zip, apk, jar, etc.; they are basically all archives) with some room for packed files, plus a section of the file containing settings and serialized data that will not be accessed by an archive-manager application. My requirement is to do all this with the default modules for CPython, without external modules. I know that this can be long to explain and do, but I can't see how to start this in Python 3.x with CPython.
Custom filetype in Python 3
0
0
0
4,824
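The zip-based approach from the answer can be built with nothing but the stdlib, which also satisfies the questioner's no-external-modules requirement. A minimal sketch: one archive member holds serialized settings, the rest are the packed payload files. The member names (`settings.json`, `payload/`) and the `.myfmt` extension are just conventions chosen for this example:

```python
import json
import zipfile

def write_myfmt(path, settings, files):
    # Custom container: a zip archive with a settings member plus payload files.
    with zipfile.ZipFile(path, 'w', zipfile.ZIP_DEFLATED) as z:
        z.writestr('settings.json', json.dumps(settings))
        for name, data in files.items():
            z.writestr('payload/' + name, data)

def read_myfmt(path):
    with zipfile.ZipFile(path) as z:
        settings = json.loads(z.read('settings.json'))
        files = {n[len('payload/'):]: z.read(n)
                 for n in z.namelist() if n.startswith('payload/')}
    return settings, files

write_myfmt('demo.myfmt', {'version': 1}, {'a.txt': b'hello'})
settings, files = read_myfmt('demo.myfmt')
print(settings, files)  # {'version': 1} {'a.txt': b'hello'}
```

Because the container is a plain zip underneath, an archive manager can still open it, while your application treats `settings.json` as private metadata.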
15,760,825
2013-04-02T09:40:00.000
0
0
0
0
internet-explorer,google-chrome,firefox,python-2.7,download
28,467,360
1
false
0
0
Each of the web browsers (e.g. Firefox, Chrome, IE, Safari) has its own plugin model. If you are going to truly integrate your manager into the web browsers, you will need to examine their plugin models and make your download manager work through them.
1
4
0
I wrote a little download manager in python, and now i want to "catch" downloads from Chrome Firefox and Explorer so they will download with it, instead of each built-in download manager of the browser itself. I want to know when each of the browsers are starting a file download, so i can prevent the default behavior and use my own manager. All i need to start a download myself is the file url of course. I know that in the past there were popular download managers such as "Get Right" that did exactly this. I want to do something similar. Any ideas how would i go about doing this?
Catch downloads from Chrome/Firefox/Explorer with python
0
0
1
233