Dataset schema (column name: dtype, value range):
Q_Id: int64, 337 to 49.3M
CreationDate: string, lengths 23 to 23
Users Score: int64, -42 to 1.15k
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
Tags: string, lengths 6 to 105
A_Id: int64, 518 to 72.5M
AnswerCount: int64, 1 to 64
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: string, lengths 6 to 11.6k
Available Count: int64, 1 to 31
Q_Score: int64, 0 to 6.79k
Data Science and Machine Learning: int64, 0 to 1
Question: string, lengths 15 to 29k
Title: string, lengths 11 to 150
Score: float64, -1 to 1.2
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 8 to 6.81M
17,584,635
2013-07-11T03:35:00.000
0
0
0
1
python,cmd,warnings,indefinite
17,585,244
1
false
0
0
Try opening and storing information from one file at a time? We don't have enough information to understand what is wrong with your code. We really don't have much more than "I tried to open 185 fits files" and "too many open files".
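A minimal sketch of the suggestion above: process the files one at a time, so each file is closed before the next one opens. The glob pattern and the 16-byte read are placeholders standing in for the actual FITS handling, not part of the original answer.

```python
# Process data files one at a time instead of holding all of them
# open; the "with" block closes each file before the next one opens.
import glob

def collect_rows(pattern):
    rows = []
    for path in sorted(glob.glob(pattern)):
        with open(path, "rb") as fh:    # closed automatically on exit
            rows.append(fh.read(16))    # read only what you need
    return rows
```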
1
0
0
I am in deep trouble at the moment. After every letter I type at my Python command prompt in Linux, I get an error message: sys:1: GtkWarning: Attempting to store changes into `/u/rnayar/.recently-used.xbel', but failed: Failed to create file '/u/rnayar/.recently-used.xbel.L6ETZW': Too many open files. Hence I can type nothing in Python, and the prompt is stuck. I tried to open 185 FITS files containing some data and feed some of that data into an array. I cannot abandon the command window, because I already have significant amounts of information stored in it. Does anybody know how I can stop the error message and get it working as usual?
infinite error message python
0
0
0
100
17,585,932
2013-07-11T05:49:00.000
0
0
0
0
python,windows,python-2.7,smartcard,rfid
25,094,663
1
false
0
0
If you are using a USB or serial connection to connect your card reader to the PC, you can use the DataReceived event of the SerialPort class.
1
2
0
I am working on a smartcard-based application with a smartcard reader. Whenever I flash the card I should get the card UID, and based on that I need to retrieve the details from the database. How do I start: do I need to create a Windows service that always runs in the background, is there a way to detect an event on the OS, or should I use a scheduler program? I am able to get the UID and related data, but I need the program to run externally. Please advise me on this issue. Thanks in advance.
How to detect the event when a smart card is scanned
0
0
0
422
17,586,573
2013-07-11T06:34:00.000
0
0
1
0
python,csv,python-2.7
17,587,872
3
false
0
0
You can use os.listdir() to get the list of files in a directory.
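os.listdir() only lists a single directory; for the nested layout in the question, its close relative os.walk() traverses the whole tree. A hedged sketch (combine_csvs and the paths are illustrative, not from the original answer) that keeps only the first file's header:

```python
# Walk the directory tree and append every .csv into one output file,
# writing the header line only once (from the first file found).
import os

def combine_csvs(root, out_path):
    wrote_header = False
    with open(out_path, "w") as out:
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in sorted(filenames):
                if not name.endswith(".csv"):
                    continue
                with open(os.path.join(dirpath, name)) as f:
                    header = f.readline()
                    if not wrote_header:
                        out.write(header)
                        wrote_header = True
                    out.write(f.read())
```

Note that out_path should live outside root, or the combined file would be picked up by a second run of the walk.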
1
3
1
I need some help from Python programmers to solve an issue I'm facing in processing data. I have .csv files placed in a directory structure like this: MainDirectory / Sub directory 1 / sub directory 1A / file.csv; Sub directory 2 / sub directory 2A / file.csv; Sub directory 3 / sub directory 3A / file.csv. Instead of going into each directory and accessing the .csv files, I want to run a script that can combine the data of all the subdirectories. Each file has the same type of header, and I need to maintain one big .csv file with one header only, with all the .csv file data appended one after the other. I have a Python script that can combine all the files into a single file, but only when those files are placed in one folder. Can you help provide a script that can handle the above directory structure?
Python - Combining data from different .csv files into one
0
0
0
1,781
17,587,840
2013-07-11T07:49:00.000
2
0
1
0
python,numpy,virtualenv
46,802,017
2
false
0
0
Having manually compiled VTK and PySide2 for Python 3.6, I have also found myself bending the virtualenv rules. Just today, I transferred my virtualenv to another system, and to make things easier, I gave it the exact same path that it had on the previous system. However, I did not have the same path for Python on my new system. Fortunately I was able to change the location the virtualenv was looking for by altering an 'orig-prefix.txt' file located in [VIRTUALENV]/Lib. The base Python path a virtualenv requires is stored in [VIRTUALENV]/Lib/orig-prefix.txt. If I recall correctly, the path of the virtualenv itself is embedded in multiple files. Thus, in a case where I needed to relocate the virtualenv to a different path, I just recreated it and copied over everything except the [VIRTUALENV]/Scripts directory. This is probably not the way virtualenv is meant to be used, but it does provide a workaround. Also, note that I am doing this in a Windows environment.
1
5
0
I am using many python packages like numpy, bottleneck, h5py, ... for my daily work on my computer. Since I am root on this machine it is no problem to install these packages. However I would like to use my "environment" of different packages also on a server machine where I only have a normal user account. So I thought about creating a virtual environment (with virtualenv) on my machine by installing all needed packages in there. Then I just copy the whole folder to the server and can run everything from it? My machine uses Fedora 19 whereas the server uses Ubuntu. Is this a problem? I could not find any information on how to move such a virtual environment to another system. The reason I would like to create the virtual environment on my machine first is that there are a lot of tools missing on the server like python-dev, so I can't compile numpy for instance. I looked into Anaconda and Enthought Python distributions, but they don't include a couple of packages I need. Also, there should be a completely "open" way for this problem? Moving the virtual environment to the server failed, since it is complaining about some missing files when I import the packages. This is not surprising probably...
Port Python virtualenv to another system
0.197375
0
0
5,036
17,588,779
2013-07-11T08:37:00.000
1
0
0
1
python,multithreading,subprocess,twisted
17,590,814
1
true
0
0
There's nothing in Twisted's child-process support that will automatically kill the child process when any particular TCP client disconnects. The behavior you're asking about is basically the default behavior.
1
1
0
I'm creating a Twisted TCP server that needs to make a subprocess command-line call and relay the results to the client while still connected. But the subprocess needs to continue running until it is done, even after the client disconnects. Is it possible to do this? If so, please point me in the right direction. It's all new to me. Thanks in advance!
How does one make a twistedmatrix subprocess continue processing after the client disconnects?
1.2
0
0
70
17,588,792
2013-07-11T08:38:00.000
3
0
1
0
python,backup
17,589,906
3
false
0
0
A bit out of the box, but if this is important and you really need a full solution, you can run python inside a virtual machine and use snapshots to save session state. Whether it is practical or not depends on your use case.
1
3
0
I want to write a function 'backup(filename)' to store all the working data (objects?) under current environment of Python, and 'restore(filename)' to restore the data/object again. Just like R's save.image(file="workspace21.RData") and load(file="workspace21.RData"), which can snapshot the system. How to write the "backup" & "restore" ? Or is there any package existed can do that ?
How to backup and restore python current working environment?
0.197375
0
0
3,361
17,590,889
2013-07-11T10:21:00.000
5
0
1
0
python,compiler-construction
17,590,925
2
false
0
0
When the source code has changed, new .pyc files are automatically created when you run the program again. Therefore I wouldn't worry about compiling, but would focus your attention on the code itself. :)
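For the explicit-recompile case the question asks about, compileall can also force overwriting without deleting anything by hand: on the command line this is `python -m compileall -f .`, and the same option is available from the compileall module, sketched here on a scratch directory rather than a real project tree:

```python
# Force recompilation of every .py under a directory; force=True
# overwrites existing .pyc files even when they look up to date.
import compileall
import os
import tempfile

# Demonstrated on a scratch directory; point it at your own tree.
scratch = tempfile.mkdtemp()
with open(os.path.join(scratch, "mod.py"), "w") as f:
    f.write("x = 1\n")

ok = compileall.compile_dir(scratch, force=True, quiet=1)
```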
1
7
0
I have a Python directory with a number of .py files. I recently compiled them into .pyc files using python -m compileall. I have now changed some of the source files and would like to recompile, writing over the old .pyc files. Is there a quick way to do this from the command line without having to manually delete all existing .pyc files?
Recompile all Python files in directory
0.462117
0
0
17,317
17,593,840
2013-07-11T12:49:00.000
4
1
1
0
python,python-3.x
17,594,117
2
false
0
0
Python 3 is gaining popularity, but changing a code base is always a hassle. Python 3 advantages: the GIL has been improved a lot, so it locks up much less, and built-ins return generator expressions. Python 3 disadvantages: some libraries have yet to be ported to Python 3. I like Python 3, but the fear of finding a cool Python 2-only library is what keeps my boss from daring to change to Python 3. If you were starting from scratch, it might make sense as a long-term investment to code in Python 3, but I think it is too early to switch, as Python 2 has many years of support left and will probably have better library support for the next 3 years as well.
1
5
0
One of my clients is a large media organization that does a lot of Python development for its own in-house business process management. We need to weigh in the pros and cons of switching the entire code base from Python 2.7 to Python 3, and also for doing any new development using Python 3. My question is: How would you sell Python 3? What are some tangible benefits that we could get out of using it? A quick google didn't turn up many concrete benefits, other than the occasional rather vague "it might speed up your code in some cases". Perhaps I'm not looking where I should be, so I would also appreciate pointers to resources where this is discussed.
What are the benefits / advantages of using Python 3?
0.379949
0
0
5,518
17,594,382
2013-07-11T13:14:00.000
1
0
1
1
python,automation,installation
17,596,459
3
false
0
0
"What I meant is to combine all the installers into one single big installer." I am not sure if you mean to make one MSI out of several. If you have built the MSIs, this is possible to work out, but in most situations there were reasons for the separation. For now I assume, as the others do, that you want a setup which combines all the MSI setups into one, e.g. with a packing/self-extracting part, but probably with some logic of its own. This is a very common setup pattern; some call it a "bootstrapper". Unfortunately the maturity of most bootstrapping tools is by far not comparable to the MSI creation tools, so most companies I know write a kind of bootstrapper of their own, with the dialogs and the control logic they want. This can be a very expensive job. If you do not have high requirements, it may sound like a simple job: just starting a number of processes one after the other. But what about a seamless progress bar, what about uninstallation (single or bundled), what about repair and modify, and what if one of them fails or needs a reboot, also concerning repair/uninstall/modify/update? And so on. As mentioned, one of the first issues when bundling several setups into one is deciding how many and which uninstall entries the user shall see, and whether it is OK that your bootstrapper does not create a combining one of its own. If this is not an issue for you, then you have a chance of finding an easy solution. I know at least three tools for bootstrappers; some call them suites or bundles. I can only mention them here: WiX has something called "Burn" (Google for WiX Burn and you will find it; I haven't used it yet, so I can't say much about it). InstallShield Premier, which is not really what most people call a cheap product, allows setup "Suites", which is the same idea. In the Windows SDK there is (or has been?) a kind of template of a setup.exe showing how to start installation of an MSI from a program; I have never really looked into that example, so I can't tell more about it.
2
1
0
I have written a small Python script that I want to share with other users. (I want to keep it as a script rather than an exe so that users can edit the code if they need to.) My script uses several external libraries that don't come with basic Python, but the other users don't have Python or the required libraries installed on their PCs. So, for convenience, I am wondering if there's any way to automate the installation process for installing Python and the external libraries they need. To make things clearer, what I mean is to combine all the installers into one single big installer. For your information, all the installers are Windows x86 MSI installers and there are about 5 or 6 of them. Is this possible? Could there be any drawbacks to doing this? EDIT: All the users are using Windows XP Pro 32-bit and Python 2.7.
Automate multiple installers
0.066568
0
0
409
17,594,382
2013-07-11T13:14:00.000
1
0
1
1
python,automation,installation
17,594,560
3
true
0
0
I would suggest using NSIS. You can bundle all the MSI installers (including python) into one executable, and install them in "silent mode" in whatever order you want. NSIS also has a great script generator you can download. Also, you might be interested in activepython. It comes with pip and automatically adds everything to your path so you can just pip install most of your dependencies from a batch script.
2
1
0
I have written a small Python script that I want to share with other users. (I want to keep it as a script rather than an exe so that users can edit the code if they need to.) My script uses several external libraries that don't come with basic Python, but the other users don't have Python or the required libraries installed on their PCs. So, for convenience, I am wondering if there's any way to automate the installation process for installing Python and the external libraries they need. To make things clearer, what I mean is to combine all the installers into one single big installer. For your information, all the installers are Windows x86 MSI installers and there are about 5 or 6 of them. Is this possible? Could there be any drawbacks to doing this? EDIT: All the users are using Windows XP Pro 32-bit and Python 2.7.
Automate multiple installers
1.2
0
0
409
17,595,066
2013-07-11T13:44:00.000
0
0
0
1
python,django,manage.py
17,599,320
2
false
1
0
If your goal is to ensure the load balancer is working correctly, I suppose it's not an absolute requirement to do this in the application code. You can use a network packet analyzer that can listen on a specific interface (say, tcpdump -i <interface>) and look at the output.
1
6
0
I'm running a temporary Django app on a host that has lots of IP addresses. When using manage.py runserver 0.0.0.0:5000, how can the code see which of the many IP addresses of the machine was the one actually hit by the request, if this is even possible? Or to put it another way: My host has IP addresses 10.0.0.1 and 10.0.0.2. When runserver is listening on 0.0.0.0, how can my application know whether the user hit http://10.0.0.1/app/path/etc or http://10.0.0.2/app/path/etc? I understand that if I was doing it with Apache I could use the Apache environment variables like SERVER_ADDR, but I'm not using Apache. Any thoughts? EDIT More information: I'm testing a load balancer using a small Django app. This app is listening on a number of different IPs and I need to know which IP address is hit for a request coming through the load balancer, so I can ensure it is balancing properly. I cannot use request.get_host() or the request.META options, as they return what the user typed to hit the load balancer. For example: the user hits http://10.10.10.10/foo and that will forward the request to either http://10.0.0.1/foo or http://10.0.0.2/foo - but request.get_host() will return 10.10.10.10, not the actual IPs the server is listening on. Thanks, Ben
Django runserver bound to 0.0.0.0, how can I get which IP took the request?
0
0
0
10,672
17,597,842
2013-07-11T15:44:00.000
1
1
1
0
c++,python,c
17,597,883
2
false
0
0
Some files, such as .exe, .jpg, and .mp3, contain a header (the first few bytes of the file). You can inspect the header and infer the file type from it. Of course, some files, such as raw text, depending on their encoding, may have no header at all.
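A minimal sketch of this header sniffing in Python; the signatures listed are well-known magic numbers (e.g. JPEG starts with FF D8 FF, PDF with "%PDF-"), and anything unmatched, including headerless raw text, simply returns None:

```python
# Identify common file types by their leading "magic" bytes.
SIGNATURES = [
    (b"\xff\xd8\xff", "jpeg"),
    (b"%PDF-", "pdf"),
    (b"\x89PNG\r\n\x1a\n", "png"),
    (b"ID3", "mp3"),   # MP3 with an ID3v2 tag; raw MPEG frames differ
    (b"GIF8", "gif"),
]

def sniff(path):
    with open(path, "rb") as f:
        head = f.read(16)
    for magic, kind in SIGNATURES:
        if head.startswith(magic):
            return kind
    return None   # unknown or headerless (e.g. plain text)
```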
1
1
0
For a piece of work I want to identify the type of a file, but the files have no extension. The files may be txt, jpeg, mp3, pdf, etc. Using C, C++, or Python, how can I check whether a file is a jpeg, pdf, or mp3 file?
how to identify the type of files having no extension?
0.099668
0
0
250
17,599,706
2013-07-11T17:25:00.000
0
0
0
0
python,sockets
17,599,907
2
false
0
0
Write a separate function that handles the connect, and have this function throw an exception if the connect fails. Then in your main code you can use try/except and a while loop to attempt the connection multiple times.
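One way to sketch that loop-plus-exception idea, with connect() standing in for the urllib/FancyURLopener call from the question (the names here are illustrative, not a real API):

```python
# Try each proxy in turn until the connect function succeeds or the
# list is exhausted; assumes at least one proxy is supplied.
import socket

def open_with_retry(connect, proxies):
    last_error = None
    for proxy in proxies:
        try:
            return connect(proxy)
        except socket.timeout as err:
            last_error = err      # remember it, try the next proxy
    raise last_error              # every proxy timed out
```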
1
0
0
What's the smartest way to try again when I get a socket timeout exception? try: opener = urllib.FancyURLopener(proxies); res = opener.open(req); except Exception as details: self.writeLog(details). Let's say for the above code I get a timeout error from the socket; I want to change the proxy and try again. How do I do that (my function cannot be made recursive)? Should I use something like "while the error is a socket timeout, keep doing this", or should I do a while True and change the proxy in the except part? What's the smartest way?
python try if socket times out, try again
0
0
1
153
17,601,124
2013-07-11T18:51:00.000
0
0
0
1
python,multithreading,performance,multiprocessing,web-crawler
17,601,331
1
false
1
0
Look into grequests; it doesn't do actual multi-threading or multiprocessing, but it scales much better than both.
1
0
0
Brief idea: my web crawler has 2 main jobs, a collector and a crawler. The collector collects all of the URL items for each site and stores non-duplicated URLs. The crawler grabs the URLs from storage, extracts the needed data and stores it back. 2 machines: a bot machine (8 cores, physical Linux OS, no VM on this machine) and a storage machine (MySQL with clustering (VM for clustering), 2 databases (url and data); the url database on port 1 and the data database on port 2). Objective: crawl 100 sites and try to reduce the bottleneck situation. First case: the collector requests (urllib) all sites, collects the URL items for each site and inserts non-duplicated URLs into the storage machine on port 1; the crawler gets the URLs from storage port 1, requests the site, extracts the needed data and stores it back on port 2. This causes a connection bottleneck for both the web-site requests and the MySQL connections. Second case: instead of inserting across machines, the collector stores the URLs in my own mini database file system. There is no reading of a huge file (using OS command techniques), just writing (appending) and removing the header of the file. This causes a bottleneck in the web-site request connections and (maybe) in I/O (read, write). Both cases are also CPU-bound from collecting and crawling 100 sites. As I have heard, for I/O-bound work use multithreading, for CPU-bound work use multiprocessing. How about both? Scrapy? Any idea or suggestion?
Python web crawler multithreading and multiprocessing
0
0
1
763
17,603,825
2013-07-11T21:26:00.000
0
0
0
0
python,tkinter,wiki
23,421,260
3
false
0
1
Ideas This may sound like a lot of work, but it needs to be done and hopefully it would solve your problem: Make your own web browser that uses the latest Python version instead of JavaScript. In fact, write it in Python, with Tkinter. Then you'll be able to use the widgets more naturally. Tell me. (I might even help.) Make it so you can embed Python Tkinter widgets as you please. Figure out how to make the browser so it's not a magnet for people who like to write malicious Python code, or else have a big disclaimer. Write a plugin and/or extension that either includes or depends upon the Python interpreter and Tkinter, and make the page request that people install it. Make sure your wiki supports plugins, or that your plugin/extension supports wikis that don't support plugins. As others have said, Python doesn't come with a natural way to make Tkinter widgets in your web browser. You kind of have to make a way, or use someone else's way. If you can put applets in your wiki, I might recommend the Jython method someone else mentioned.
1
0
0
I am looking for a way to embed a Tkinter GUI into a wiki page. I have looked around Google for a few hours and haven't had any success with a method. Is there a way to do this?
Python on a wiki
0
0
0
183
17,604,692
2013-07-11T22:33:00.000
1
0
0
0
python,django
17,604,930
1
true
1
0
In an SQL database it would be easiest to create an additional table holding a reference to (for example) the thread table and the user table. Call it (for example) ThreadVisitors. Whenever a user visits a thread, you create an entry in that table for that user and thread (you could add a unique constraint on the (thread, user) pair). That way, getting all visitors for a given thread is as simple as running a count query for that thread. Some indexes would be helpful here, and if performance is an issue then you should cache the count queries. You will probably need such a table per model.
1
0
0
Where should I store table-specific data in Django models? Here is the scenario: I am making a forum. In each thread, there will be the following two kinds of visitors: total visitors and current visitors. My model is designed in the following manner: a Category model (will contain subcategories); a Sub-Category model (will contain sub-categories, foreign key points to Category); a Thread model (will contain individual threads, foreign key points to Sub-Category); a Post model (will contain individual posts/messages, foreign key points to Thread). Now, I will have visitors at every level: a user visiting different threads/sub-categories/categories. I want to capture the number of visitors. Can anyone suggest where this kind of data fits in a Django model?
table specific data in django models
1.2
0
0
66
17,606,646
2013-07-12T02:35:00.000
2
0
0
0
python,web,cherrypy
17,606,832
1
true
1
0
Never mind, folks. Turns out that this isn't so bad to do; it is simply a matter of doing the following: write a function that does what I want; make the function into a custom CherryPy Tool, set to the before_handler hook; enable that tool globally in my config.
1
1
0
I am in the midst of writing a web app in CherryPy. I have set it up so that it uses OpenID auth, and can successfully get user's ID/email address. I would like to have it set so that whenever a page loads, it checks to see if the user is logged in, and if so displays some information about their login. As I see it, the basic workflow should be like this: Is there a userid stored in the current session? If so, we're golden. If not, does the user have cookies with a userid and login token? If so, process them, invalidate the current token and assign a new one, and add the user information to the session. Once again, we're good. If neither condition holds, display a "Login" link directing to my OpenID form. Obviously, I could just include code (or a decorator) in every public page that would handle this. But that seems very... irritating. I could also set up a default index method in each class, which would do this and then use a (page-by-page) helper method to display the rest of the content. But this seems like a nightmare when it comes to the occasional exposed method other than index. So, my hope is this: is there a way in CherryPy to set some code to be run whenever a request is received? If so, I could use this to have it set up so that the current session always includes all the information I need. Alternatively, is it safe to create a wrapper around the cherrypy.expose decorator, so that every exposed page also runs this code? Or, failing either of those: I'm also open to suggestions of a different workflow. I haven't written this kind of system before, and am always open to advice. Edit: I have included an answer below on how to accomplish what I want. However, if anybody has any workflow change suggestions, I would love the advice! Thanks all.
Checking login status at every page load in CherryPy
1.2
0
0
313
17,612,117
2013-07-12T09:50:00.000
1
1
0
0
python,apache,cherrypy,turbogears
17,742,500
1
true
1
0
From what I can see on their website, Bluehost supports FastCGI. In that case you can deploy your applications using flup: flup.server.fcgi.WSGIServer lets you mount any WSGI application (like TurboGears apps) and serve it over FastCGI.
1
0
0
Has anyone successfully installed TurboGears or CherryPy on Bluehost? There are listings on the web, but none of them are viable, or the links to the scripts are broken. However, Bluehost tech support claims that some folks are running TurboGears successfully on their shared hosting. If anyone has a setup, or knows how to install TurboGears or CherryPy on Bluehost, it would be much appreciated if they could share their know-how. Alternatively, anyone who knows another Pythonic method that can be installed on Bluehost is welcome to share it with me. Many thanks, DK
Turbogears on bluehost
1.2
0
0
72
17,614,818
2013-07-12T12:21:00.000
2
0
1
0
python,performance,try-except
17,614,932
2
false
0
0
Yes, it makes no difference at all. Your possible source of an exception is the foo() function and you call it anyway in both programs. Assigning its output to aaa will not change anything, since the exception will originate when calling foo() not during the assignment (which is located in try block anyway).
2
4
0
Is it always safe to use hello2 instead of hello1 ? def hello1(): try: aaa = foo() return aaa except baz: return None def hello2(): try: return foo() except baz: return None
try except unnecessary step
0.197375
0
0
91
17,614,818
2013-07-12T12:21:00.000
11
0
1
0
python,performance,try-except
17,614,841
2
true
0
0
Yes, it is. Assigning first then returning makes no difference when it comes to catching exceptions. The assignment to aaa is entirely redundant.
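The equivalence is easy to see directly; here is a small self-contained illustration (foo() is a stand-in that always raises, and ValueError plays the role of the question's baz):

```python
# Both forms catch an exception raised inside foo(); the temporary
# assignment changes nothing, because the exception is raised before
# the assignment (and the return) ever happens.
def foo():
    raise ValueError("boom")

def hello1():
    try:
        aaa = foo()
        return aaa
    except ValueError:
        return None

def hello2():
    try:
        return foo()
    except ValueError:
        return None
```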
2
4
0
Is it always safe to use hello2 instead of hello1 ? def hello1(): try: aaa = foo() return aaa except baz: return None def hello2(): try: return foo() except baz: return None
try except unnecessary step
1.2
0
0
91
17,617,535
2013-07-12T14:39:00.000
1
0
1
0
multithreading,python-2.6,asyncsocket
17,619,173
1
true
0
0
These two options, "the current thread waits for an ACK" and "another thread would be able to connect", are not mutually exclusive. Both are true. That's the whole point of threads: one can continue while another is blocked.
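A self-contained sketch of the point: one thread's blocking connect() does not stop another thread from connecting. The loopback server here exists only to make the example runnable, and is not part of the original answer.

```python
# Two threads each call socket.connect(); each proceeds independently
# of whether the other is still blocked in its handshake.
import socket
import threading

def connect_once(port, results, idx):
    s = socket.create_connection(("127.0.0.1", port))
    results[idx] = True            # this thread's connect completed
    s.close()

def demo():
    server = socket.socket()
    server.bind(("127.0.0.1", 0))  # port 0: let the OS pick one
    server.listen(5)
    port = server.getsockname()[1]
    results = [False, False]
    threads = [threading.Thread(target=connect_once,
                                args=(port, results, i))
               for i in range(2)]
    for t in threads:
        t.start()
    for _ in range(2):             # drain both pending connections
        conn, _addr = server.accept()
        conn.close()
    for t in threads:
        t.join()
    server.close()
    return results
```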
1
0
0
If I issue multiple socket.connect() in different threads - would a single socket.connect() make the current thread to wait for an ACK response, or meanwhile another thread would be able to issue a socket.connect()?
socket.connect() blocks other threads?
1.2
0
1
118
17,618,361
2013-07-12T15:20:00.000
1
0
0
1
python,root,permission-denied,dd
17,619,054
2
true
0
0
"I know I have to do some kind of root thing?" Indeed you do! If you are using Linux, sudo is the idiomatic way to escalate your user's privileges. So instead invoke 'sudo dd if=/dev/sdb of=/dev/null' (for example). If your script must be noninteractive, consider adding something like "admin ALL = NOPASSWD: ALL" to your sudoers file, or something similar.
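A sketch of invoking dd under sudo from the Python side; dd_command() is a hypothetical helper, and the device paths are the question's examples, not safe defaults:

```python
# Build the escalated command: prepending sudo means only dd runs
# with root privileges, not the whole Python script. Pass the result
# to subprocess.call() to actually run it.
import subprocess  # used when you execute the command for real

def dd_command(source, dest):
    return ["sudo", "dd", "if=" + source, "of=" + dest]

# e.g. subprocess.call(dd_command("/dev/sdb", "backup.img"))
```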
1
1
0
I have used a command line to run a dd command in Python, however, whenever I try to actually run the command, I get: dd: opening '/dev/sdb': Permission denied I know I have to do some kind of root thing? And I only need a certain section of my code to run the dd command, so I don't need to 'root' the whole thing; but the whole 'root' concept confuses me... Help would be HIGHLY appreciated!!
dd command PERMISSION DENIED Python
1.2
0
0
3,234
17,622,874
2013-07-12T19:44:00.000
12
0
1
0
python
17,622,928
1
true
0
0
Because 34.__class__ is not a valid floating-point number, which is what the . denotes in a numeric literal. Try (34).__class__.
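Illustrating the fix: parentheses (or binding to a name, as the question already found) keep the tokenizer from reading "34." as the start of a float literal.

```python
# 34.__class__ is a SyntaxError: the tokenizer consumes "34." as an
# incomplete float. Parenthesizing the literal resolves the ambiguity.
cls_paren = (34).__class__   # works: the int type

x = 34
cls_via_name = x.__class__   # works via a name, as the question notes
```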
1
6
0
We know that everything is an object in Python and so that includes integers. So doing dir(34) is no surprise, there are attributes available. My confusion stems from the following, why is it that doing 34.__class__ gives a syntax error when I know that 34 does have the attribute __class__. Furthermore, why does binding an integer to a name, say x, and then doing x.__class__ yield my expected answer of type int?
Invalid Syntax confusion on Python Integers
1.2
0
0
664
17,622,992
2013-07-12T19:53:00.000
5
1
1
1
python
17,623,026
3
false
0
0
You want to use virtualenv. It lets you create an application-specific directory for installed packages. You can also use pip to generate a requirements.txt and install from it.
2
3
0
I'm working by myself right now, but am looking at ways to scale my operation. I'd like to find an easy way to version my Python distribution, so that I can recreate it very easily. Is there a tool to do this? Or can I add /usr/local/lib/python2.7/site-packages/ (or whatever) to an svn repo? This doesn't solve the problems with PATHs, but I can always write a script to alter the path. Ideally, the solution would be to build my Python env in a VM, and then hand copies of the VM out. How have other people solved this?
Is there a way to "version" my python distribution?
0.321513
0
0
98
17,622,992
2013-07-12T19:53:00.000
0
1
1
1
python
42,163,489
3
false
0
0
For the same goal, i.e. having the exact same Python distribution as my colleagues, I tried to create a virtual environment on a network drive, so that everybody would be able to use it without making a local copy. The idea was to share the same packages installed in a shared folder. Outcome: Python ran so unbearably slowly that it could not be used, and installing a package was also very sluggish. So it looks like there is no other way than using virtualenv and a requirements file. (Even if, unfortunately, it does not always work smoothly on Windows and requires manual installation of some packages and dependencies, at least at this time of writing.)
2
3
0
I'm working by myself right now, but am looking at ways to scale my operation. I'd like to find an easy way to version my Python distribution, so that I can recreate it very easily. Is there a tool to do this? Or can I add /usr/local/lib/python2.7/site-packages/ (or whatever) to an svn repo? This doesn't solve the problems with PATHs, but I can always write a script to alter the path. Ideally, the solution would be to build my Python env in a VM, and then hand copies of the VM out. How have other people solved this?
Is there a way to "version" my python distribution?
0
0
0
98
17,623,431
2013-07-12T20:22:00.000
2
1
0
0
python,rabbitmq,pika
17,647,896
1
true
0
0
In terms of speed there is probably no answer to your question, since efficient parsing methods are available to extract the metadata from your messages after they leave RabbitMQ. But if you use the metadata to filter your messages, it would be more efficient to do that in RabbitMQ itself, using a headers exchange.
1
1
0
I need to append a meta data to each message when publishing to the queue. The question is which method is more efficient? Add custom fields to every message body Add custom headers to every message Just in case: Publisher is on AWS m1.small Messages rate is less than 500 msgs/s Rabbit library: pika (python)
What is more efficient: add fields to the message or create a custom header? RabbitMQ
1.2
0
1
377
17,627,389
2013-07-13T05:51:00.000
0
1
0
0
python,amazon-web-services,boto
17,630,560
1
true
1
0
Currently, there is no API for doing this. You have to log into your billing preference page and set it up there. I agree that an API would be a great feature to add.
1
0
0
Was wondering if anyone knows whether it is possible to enable programmatic billing for Amazon AWS through the API? I have not found anything on this, so I went broader and looked for billing preferences or account settings through the API, and still had no luck. I assume the API does not have this functionality, but I figured I would ask.
Enable programmatic billing for Amazon AWS through API (python)
1.2
0
1
356
17,627,748
2013-07-13T06:44:00.000
0
0
0
0
pygame,python-2.6
17,653,503
1
false
0
1
Just separate the GIF into individual frames and animate them with pygame.
1
1
0
What is the best library (and the link) for using animated GIF files with pygame on Python 2.6? I've searched through some and they all seem to be either buggy, broken, or removed.
Best Library for utilizing animated GIF in pygame
0
0
0
241
17,628,613
2013-07-13T08:59:00.000
6
0
1
0
python,python-2.7
17,628,652
5
false
0
0
Inf is infinity, it's a "bigger than all the other numbers" number. Try subtracting anything you want from it, it doesn't get any smaller. All numbers are < Inf. -Inf is similar, but smaller than everything. NaN means not-a-number. If you try to do a computation that just doesn't make sense, you get NaN. Inf - Inf is one such computation. Usually NaN is used to just mean that some data is missing.
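A quick sketch of these values in plain Python (no extra libraries needed):

```python
import math

inf = float('inf')
print(inf > 1e308)          # True: bigger than any finite float
print(inf - 1e100 == inf)   # True: subtracting anything finite changes nothing

nan = inf - inf             # a computation that makes no sense -> NaN
print(math.isnan(nan))      # True
print(nan == nan)           # False: NaN compares unequal to everything, even itself
```

That last line is the usual way to spot NaN without `math.isnan`: it is the only float that is not equal to itself.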
1
53
0
Just a question that I'm kind of confused about So I was messing around with float('inf') and kind of wondering what it is used for. Also I noticed that if I add -inf + inf i get nan is that the same as Zero or not. I'm confused about what the uses of these two values are. Also when I do nan - inf I don't get -inf I get nan I'm sure it's all pretty simple but I stumbled upon them and didn't know what they do.
What is inf and nan?
1
0
0
116,952
17,633,633
2013-07-13T19:44:00.000
1
0
1
0
python,distutils,pypi
17,657,183
2
true
0
0
The issue mentioned by Éric Araujo mentions this trick: "A trick can be used to avoid the second sdist to redo all its work: First you run “python setup.py sdist --keep-temp”, then you check the sdist, and to upload you call “python setup.py sdist --dry-run upload”. I’m not in favor of adding that trick to the doc, as for normal usage, running sdist twice is okay."
1
1
0
To upload to PyPI, you run python setup.py register sdist upload. But this requires regenerating the source distribution. As part of my release process, I want to be able to generate the source distribution separately from the uploading. Is there a way to upload from a file, i.e., something like python setup.py upload dists/mypackage.tar.gz?
setup.py upload from a file
1.2
0
0
642
17,634,435
2013-07-13T21:36:00.000
0
0
1
1
python,maven,dependencies,jython
18,564,145
1
false
1
0
ObsPy relies on ctypes which works only for CPython - so I'm afraid you won't get it running under Jython.
1
0
0
I am contributing on an open source Java project, and I am trying to use the Python tool ObsPy via the Jython PythonInterpreter. My problem is that I am having trouble figuring out how to include the ObsPy library in the Jython buildpath. Is it possible to use Maven in order to include the ObsPy library in a manner that the Jython runtime will recognize it? Thanks, and sorry I could not provide any existing code on this issue.
Jython and Python Lib Dependencies
0
0
0
312
17,635,269
2013-07-13T23:57:00.000
0
0
1
1
python
17,635,284
2
false
0
0
In general, Windows defaults to the user directory in the command prompt. Running "python ex1.py" there tries to find ex1.py in the C:\Users\Username directory. Try moving your Python script there, or moving to the Python projects folder using cd. Either way should fix the issue.
1
0
0
I am learning Python from "Learn Python the Hard Way" and searched up quite a bit on it with no solutions as of yet. I configured the path for python to work on the command prompt. But whenever I type in "python ex1.py" it comes up with an error: Errno2 No such file or directory! The code is a simple print code, nothing much there. But I do not know why it's showing this! I have all these exercises in the python directory C:\python27\projects\ex1.py
Python: errno2 No such file or directory
0
0
0
3,121
17,635,271
2013-07-13T23:57:00.000
0
0
0
0
python,weblogic,mbeans,wlst
17,650,190
3
false
1
0
Have you tried using just the get method, like this: var=get('PausedForForwarding'); print var
2
1
0
Need to access a boolean value under Store and Forward Agent ... already inside the SAF_Agent and once i do a ls(), i see a list of operations and attributes. I can perform the operations, but i am unable to get one of the attributes the attribute is PausedForForwarding which is a boolean true or false which currently shows true which mean the SAF Agent is currently paused for forwarding trying to check the status for above using cmo.getPausedForForwarding() and other options as well, but no luck, depending on the status, i want to pause or resume the SAF_Agent !!! Help needed !!!
accessing a MBean which is boolean in wlst
0
0
0
938
17,635,271
2013-07-13T23:57:00.000
1
0
0
0
python,weblogic,mbeans,wlst
37,591,919
3
false
1
0
Try cmo.isPausedForForwarding(). That works for me.
2
1
0
Need to access a boolean value under Store and Forward Agent ... already inside the SAF_Agent and once i do a ls(), i see a list of operations and attributes. I can perform the operations, but i am unable to get one of the attributes the attribute is PausedForForwarding which is a boolean true or false which currently shows true which mean the SAF Agent is currently paused for forwarding trying to check the status for above using cmo.getPausedForForwarding() and other options as well, but no luck, depending on the status, i want to pause or resume the SAF_Agent !!! Help needed !!!
accessing a MBean which is boolean in wlst
0.066568
0
0
938
17,637,175
2013-07-14T06:42:00.000
2
0
1
0
python,nltk,tokenize
17,637,292
2
false
0
0
I am not aware of such tools, but the solution to your problem depends on the language. For the Turkish language you can scan the input text letter by letter and accumulate letters into a word. Once you are sure that the accumulated word forms a valid word from a dictionary, you save it as a separate token, erase the accumulation buffer, and continue the process. You can try this for English, but I assume you may hit situations where the ending of one word is also the beginning of another dictionary word, and this can cause you some problems.
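One way around the prefix-ambiguity problem mentioned above is dynamic programming over all split points instead of a single greedy scan; a rough sketch with a tiny made-up word set:

```python
def segment(text, words):
    """Split text into dictionary words; best[i] holds a segmentation of text[:i]."""
    best = [None] * (len(text) + 1)
    best[0] = []
    for i in range(1, len(text) + 1):
        for j in range(i):
            if best[j] is not None and text[j:i] in words:
                best[i] = best[j] + [text[j:i]]
                break
    return best[len(text)]  # None if no full segmentation exists

words = {"the", "table", "down", "there"}
print(segment("thetabledownthere", words))  # ['the', 'table', 'down', 'there']
```

Unlike a pure left-to-right scan, this backtracks over every split point, so a word ending that doubles as another word's beginning cannot derail the segmentation. A real word list would come from a dictionary file or NLTK's corpora.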
1
7
0
I'm using Python with nltk. I need to process some text in English without any whitespace, but word_tokenize function in nltk couldn't deal with problems like this. So how to tokenize text without any whitespace. Is there any tools in Python?
How to tokenize continuous words with no whitespace delimiters?
0.197375
0
0
2,531
17,637,243
2013-07-14T06:52:00.000
0
0
1
0
python,python-3.x
17,637,322
2
false
0
0
Presumably you are actually using the bit values from the examples, so why not just derive a class from dict that has a new method getmasked which masks the key before looking it up...
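A minimal sketch of that idea — the mask value 0x0F0F (keeping the second and fourth nibble groups) and the opcode keys are assumptions based on the example layout in the question:

```python
class MaskedDict(dict):
    """dict whose getmasked() applies a bit mask to the key before lookup."""
    def __init__(self, mask, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.mask = mask

    def getmasked(self, key):
        return self[key & self.mask]

# hypothetical mask keeping the 2nd and 4th nibble groups from the question
ops = MaskedDict(0x0F0F, {0x0F01: "op_a", 0x0F02: "op_b"})
print(ops.getmasked(0x1F21))  # 'op_a' -- 0x1F21 & 0x0F0F == 0x0F01
```

The keys stay plain integers, so the hashing stays fast; only the lookup path applies the mask.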
1
0
0
I have a project I'm creating (in python 3.3) and I'm trying to figure out if there is a efficient (or prettier way) to do the following. I have a function that extracts binary/hex strings like the following (groups of bits are split for example purposes only) 0000 1111 0002 0001 0000 1111 0003 0001 0000 1111 0002 0002 0000 1110 0002 0001 Now, what I want to do is to be able to pass these into a function and then fire them into a method depending on the values in the second group of bits, and the forth group of bits (that are opcodes) eg; a hash function that will check to see if (* 1111 * 0001) matches and then return a function related to these bits. I had the idea of using a dictionary of hash tables, however I'm not fully sure how one would make a key a mask. While I could make a dictionary with the key 11110001 and the value the function I want to return, and then just concatting and passing in [4:8][12:16] would work, I was wondering if there was a way to make a hash function for a key. (if that makes sense) without going into a class and overriding the hash function and then passing that in. Perhaps some form of data structure that stores regex keys and performs it on any valid input? - Whilst I could create one I'm wondering if there is some form of in-built function I'm missing (just so I don't reinvent the wheel) Hopefully this makes sense! Thanks for the help!
Using a hash function as a key in a dictonary in python
0
0
0
99
17,637,499
2013-07-14T07:46:00.000
0
0
1
0
python
17,637,518
6
false
0
0
You are looking for a way to inspect the class of an instance. isinstance(instance, class) is a good choice: it tells you whether the instance is of that class or of a subclass of it. Alternatively, you can use instance.__class__ to see the exact class of the instance and class.__bases__ to see the superclasses of the class. For built-in types like generator or function, you can use the inspect module. Names in Python do not carry a type, so there is no cast to perform; inspecting the type of the instance itself is the way to go. As for the hints feature, it is a feature of editors or IDEs, not of Python.
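For example (the class names are illustrative):

```python
class Animal:
    pass

class Cat(Animal):
    pass

cat = Cat()
print(isinstance(cat, Animal))  # True: Cat is a subclass of Animal
print(cat.__class__.__name__)   # 'Cat' -- the exact class of the instance
print(Cat.__bases__)            # (<class 'Animal'>,) -- its superclasses
```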
3
0
0
I've been using AS3 before, and liked its grammar, such as the keyword as. For example, if I type (cat as Animal) and press . in an editor, the editor will be smart enough to offer me code hinting for the class Animal no matter what type cat actually is, and the code describes itself well. Python is a beautiful language. How am I suppose to do the above in Python?
What is the python way to declare an existing object as an instance of a class?
0
0
0
230
17,637,499
2013-07-14T07:46:00.000
0
0
1
0
python
17,637,541
6
false
0
0
In Python, an object is created as an instance of a class; you cannot declare an existing object to be an instance of some other class. Since a class has an __init__ method, initialization is an essential part of creating a class instance.
3
0
0
I've been using AS3 before, and liked its grammar, such as the keyword as. For example, if I type (cat as Animal) and press . in an editor, the editor will be smart enough to offer me code hinting for the class Animal no matter what type cat actually is, and the code describes itself well. Python is a beautiful language. How am I suppose to do the above in Python?
What is the python way to declare an existing object as an instance of a class?
0
0
0
230
17,637,499
2013-07-14T07:46:00.000
0
0
1
0
python
17,637,545
6
false
0
0
Python has no type declarations. If cat is an Animal, you can just use it as an Animal, and it'll work. You don't have to cast it to an expression of type Animal, like you would in a statically typed language. Unfortunately, this also means it's hard to provide code completion, since for all your IDE knows, cat could be a Nacho.
3
0
0
I've been using AS3 before, and liked its grammar, such as the keyword as. For example, if I type (cat as Animal) and press . in an editor, the editor will be smart enough to offer me code hinting for the class Animal no matter what type cat actually is, and the code describes itself well. Python is a beautiful language. How am I suppose to do the above in Python?
What is the python way to declare an existing object as an instance of a class?
0
0
0
230
17,639,299
2013-07-14T12:16:00.000
0
0
0
0
python,selenium,selenium-webdriver,httprequest,windmill
17,642,669
1
false
1
0
Selenium uses the browser, but the number of HTTP requests is not one. There will be multiple HTTP requests to the server for the JS, CSS and images (if any) mentioned in the HTML document. If you want to scrape the page with a single HTTP request, you need to use a scraper which only gets what is present in the HTML source. If you are using Python, check out BeautifulSoup.
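The answer names BeautifulSoup; as a dependency-free sketch of the same single-request idea, the standard library's html.parser can pull the links out of one fetched page (the HTML string here stands in for the body of a single urllib request):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href attributes from anchor tags in a page's HTML source."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(value for name, value in attrs if name == "href")

# in real use this string would be the response body of one HTTP request
html = '<html><body><a href="/one">1</a><a href="/two">2</a></body></html>'
p = LinkCollector()
p.feed(html)
print(p.links)  # ['/one', '/two']
```

Note this only sees what is in the raw HTML — links injected later by JavaScript will be missing, which is exactly the trade-off against driving a real browser.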
1
0
0
I want to use windmill or selenium to simulate a browser that visits a website, scrapes the content and after analyzing the content goes on with some action depending of the analysis. As an example. The browser visits a website, where we can find, say 50 links. While the browser is still running, a python script for example can analyze the found links and decides on what link the browser should click. My big question is with how many http Requests this can be done using windmill or selenium. I mean do these two programs can simulate visiting a website in a browser and scrape the content with just one http request, or would they use another internal request to the website for getting the links, while the browser is still running? Thx alot!
Browser Simulation and Scraping with windmill or selenium, how many http requests?
0
0
1
728
17,640,500
2013-07-14T14:54:00.000
3
0
0
0
python,wxpython,wxgrid
17,640,718
1
true
0
1
MakeCellVisible( int row, int col ) — forces the particular cell to be visible, effectively scrolling the grid to the given cell.
1
0
0
I have wx.grid filled with data and I have search option so when user enters some text, I find it and select that row. The problem is that row only gets selected but doesnt become visible. It doesnt scroll down or up to that row. How can I manually scroll to that row?
Go to specifc row in wx.grid
1.2
0
0
646
17,640,687
2013-07-14T15:15:00.000
11
0
0
0
python,ajax,post,flask
46,003,806
4
false
1
0
Use request.get_data() to get the POST data. This works independent of whether the data has content type application/x-www-form-urlencoded or application/octet-stream.
1
17
0
Turns out that Flask sets request.data to an empty string if the content type of the request is application/x-www-form-urlencoded. Since I'm using a JSON body request, I just want to parse the json or force Flask to parse it and return request.json. This is needed because changing the AJAX content type forces an HTTP OPTION request, which complicates the back-end. How do I make Flask return the raw data in the request object?
Flask - How do I read the raw body in a POST request when the content type is "application/x-www-form-urlencoded"
1
0
1
23,453
17,641,441
2013-07-14T16:47:00.000
0
0
0
0
python-2.7,selenium,selenium-webdriver
17,718,816
1
true
1
0
First, what you'll need to do is navigate to the webpage with Selenium. Then, analyse the page's HTML source, which the JavaScript will have rendered as a result of you navigating to the page, to get the image URL. You can do this with Selenium or with an HTML parser. Then you can easily download the image using wget or some other URL grabber. You might not even need Selenium to accomplish this: if the image is already there when you get the page, you can just use the URL grabber to fetch the page directly. Let me know if you want more details or if you have any questions.
1
0
0
I need to take image frames from a webcam server using selenium python module. The frames webcam is show on the selenium browser ( using http:// .....html webpage) . The source code of the page show me it's done using some java script. I try to use: capture_network_traffic() and captureNetworkTraffic Not working , also I don't think is a good way to do that. Also I don't want to use capture_screenshot functions , this take all canvas of the browser. Any idea ? Thank you. Regards.
get image frames webcam from server with selenium python script
1.2
0
1
439
17,644,019
2013-07-14T21:35:00.000
1
0
0
0
python,scrapy
17,644,278
1
false
1
0
The "standard" is to use the environment. Depending on what environment you're running in you may not always be guaranteed a filesystem, however env vars are usually present in most application environments.
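A sketch of the environment-variable approach — the variable names SCRAPER_USER and SCRAPER_PASS are made up, pick whatever fits your project:

```python
import os

def get_credentials():
    """Read login details from the environment instead of from the repo."""
    user = os.environ.get("SCRAPER_USER")
    password = os.environ.get("SCRAPER_PASS")
    if not user or not password:
        raise RuntimeError("export SCRAPER_USER and SCRAPER_PASS before running")
    return user, password
```

In the spider you would call get_credentials() when building the login request, so the open-sourced code never contains the secrets themselves.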
1
1
0
I have a working Scrapy scraper that does a login before proceeding to parse and retrieve data but I want to open source it with exposing my username/password. Where should I factor these out to? I was thinking of either storing them in ENV and then accessing them via os.environ, putting them in a settings.local.py file that is imported into settings.py, or just having a configuration file that is read by settings.py.
Where should I keep the authentication settings when using Scrapy?
0.197375
0
1
96
17,645,801
2013-07-15T02:09:00.000
5
0
0
0
python,django,django-models,django-signals
39,893,085
3
false
1
0
Don't forget about the risk of recursion. If you use a post_save method that calls instance.save(), instead of the .update method, you should disconnect your post_save signal: Signal.disconnect(receiver=None, sender=None, dispatch_uid=None)[source] To disconnect a receiver from a signal, call Signal.disconnect(). The arguments are as described in Signal.connect(). The method returns True if a receiver was disconnected and False if not. The receiver argument indicates the registered receiver to disconnect. It may be None if dispatch_uid is used to identify the receiver. ... and connect it again afterwards. The update() method doesn't send pre_ and post_ signals; keep that in mind.
1
33
0
I see I can override or define pre_save, save, post_save to do what I want when a model instance gets saved. Which one is preferred in which situation and why?
when to use pre_save, save, post_save in django?
0.321513
0
0
24,676
17,646,259
2013-07-15T03:16:00.000
0
1
0
0
python,post,get,chat
17,646,320
1
false
0
0
You need a server in order to be able to receive any GET and POST requests, one of the easier ways to get that is to set up a Django project, ready in minutes and then add custom views to handle the request you want properly.
1
0
0
Recently, I've been attempting to figure out how I can find out what an unlabeled POST is, and send to it using Python. The issue of the matter is I'm attempting to make a chat bot entirely in Python in order to increase my knowledge of the language. For said bot, I'm attempting to use a chat-box that runs entirely on jQuery. The issue with this is it has no knowledgeable POST or GET statements associated with the chat-box submissions. How can I figure out what the POST and GET statements being sent when a message is submitted, and somehow use that to my advantage to send custom POST or GET statements for a chat-bot? Any help is appreciated, thanks.
Python Send to and figure out POST
0
0
1
60
17,646,806
2013-07-15T04:31:00.000
0
0
0
0
python,node.js,websocket,chat
17,646,861
2
false
0
0
Most chats will use a push notification system. It will keep track of people within a chat, and as it receives a new message to the chat, it will push it to all the people currently in it. This protects the users from seeing each other.
2
1
0
I've been learning about Python socket, http request/reponse handling these days, I'm still very novice to server programming, I have a question regarding to the fundamental idea behind chatting website. In chatting website, like Omegle or Facebook's chat, how do 2 guys talk to each other? Do sockets on their own computers directly connect to each other, OR... guy A send a message to the web server, and server send this message to guy B, and vice versa? Because in the first scenario, both users can retrieve each other's IP, and in the second scenario, since you are connecting to a server, you can not.. right? Thanks a lot to clear this confusion for me, I'm very new and I really appreciate any help from you guys!
the idea behind chat website
0
0
1
187
17,646,806
2013-07-15T04:31:00.000
0
0
0
0
python,node.js,websocket,chat
17,649,514
2
true
0
0
Usually they both connect to the server. There are a few reasons to do it this way. For example, imagine you want your users to see the last 10 messages of a conversation. Who's going to store this info? One client? Both? What happens if they use more than one PC/device? What happens if one of them is offline? Well, you will have to send the messages to the server, this way the server will have the conversation history stored, always available. Another reason, imagine that one user is offline. If the user is offline you can't do anything to contact him. You can't connect. So you will have to send messages to the server, and the server will notify the user once online. So you are probably going to need a connection to the server (storing common info, providing offline messages, keeping track of active users...). There is also another reason, if you want two users to connect directly, you need one of them to start a server listening on a (public IP):port, and let the other connect against that ip:port. Well, this is a problem. If you use the clients->server model you don't have to worry about that, because you can open a port in the server easily, for all, without routers and NAT in between.
2
1
0
I've been learning about Python socket, http request/reponse handling these days, I'm still very novice to server programming, I have a question regarding to the fundamental idea behind chatting website. In chatting website, like Omegle or Facebook's chat, how do 2 guys talk to each other? Do sockets on their own computers directly connect to each other, OR... guy A send a message to the web server, and server send this message to guy B, and vice versa? Because in the first scenario, both users can retrieve each other's IP, and in the second scenario, since you are connecting to a server, you can not.. right? Thanks a lot to clear this confusion for me, I'm very new and I really appreciate any help from you guys!
the idea behind chat website
1.2
0
1
187
17,653,546
2013-07-15T11:47:00.000
1
0
0
0
python,django,apache,treeio-django
17,730,860
1
true
1
0
After some time I found a solution for this. I had to define MEDIA_ROOT as an absolute URL in settings.py. When I went to static/media there was no folder called attachments, so I had to make one and give it the correct permissions so that Python can write to that folder. This worked for me.
1
2
0
All the files I upload to the tree.io tickets, projects, etc cannot be downloaded, I am getting a 404 error. I am running tree.io in a cantos 6 box. any ideas to get this working please ?
cant download files uploaded tree.io
1.2
0
1
109
17,653,659
2013-07-15T11:54:00.000
0
0
0
1
google-app-engine,python-2.7
17,655,687
1
false
1
0
Log only if it's been some time since they entered the app. If you really want to do it at the login level you can, but you will need to set up SSO on the domain.
1
0
0
I have a Python application on AppEngine that requires users to log in. Is there any way to write a log entry on logging in? Users could hit the log in screen from any URL and will reload pages throughout their session so adding it to code would add numerous entries when all I want is one at the point of authentication.
Logging User Login in App Engine
0
0
0
35
17,654,363
2013-07-15T12:33:00.000
6
0
1
1
python-2.7,pyinstaller
17,654,364
1
true
0
0
Cyrhon FAQ section says: Under Linux, I get runtime dynamic linker errors, related to libc. What should I do? The executable that PyInstaller builds is not fully static, in that it still depends on the system libc. Under Linux, the ABI of GLIBC is backward compatible, but not forward compatible. So if you link against a newer GLIBC, you can't run the resulting executable on an older system. The supplied binary bootloader should work with older GLIBC. However, the libpython.so and other dynamic libraries still depends on the newer GLIBC. The solution is to compile the Python interpreter with its modules (and also probably bootloader) on the oldest system you have around, so that it gets linked with the oldest version of GLIBC. and How to get recent Python environment working on old Linux distribution? The issue is that Python and its modules has to be compiled against older GLIBC. Another issue is that you probably want to use latest Python features and on old Linux distributions there is only available really old Python version (e.g. on Centos 5 is available Python 2.4).
1
3
0
Generated an executable on Linux 32-bit Ubuntu 11 and tested it on a 32-bit Ubuntu 10 and it failed with a "GLIBC_2.15" not found.
Pyinstaller GLIBC_2.15 not found
1.2
0
0
6,782
17,658,836
2013-07-15T16:12:00.000
0
0
0
0
python,matplotlib
17,659,174
1
true
0
0
Just access the input data that you used to generate the plot. Either this is a mathematical function which you can just evaluate for a given x or this is a two-dimensional data set which you can search for any given x. In the latter case, if x is not contained in the data set, you might want to interpolate or throw an error.
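For the two-dimensional-data-set case, a small interpolation helper (pure Python; the names and sample data are illustrative) does the lookup, raising an error when x falls outside the data:

```python
from bisect import bisect_left

def y_at(points, x):
    """points: (x, y) pairs sorted by x; linear interpolation between neighbours."""
    xs = [px for px, _ in points]
    i = bisect_left(xs, x)
    if i < len(xs) and xs[i] == x:
        return points[i][1]          # exact hit
    if i == 0 or i == len(xs):
        raise ValueError("x is outside the data range")
    (x0, y0), (x1, y1) = points[i - 1], points[i]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

data = [(0, 0), (1, 2), (2, 6)]
print(y_at(data, 1.5))  # 4.0
```

If NumPy is available, numpy.interp covers the same ground in one call.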
1
0
1
The case is, I have a 2D array and can convert it to a plot. How can I read the y value of a point with given x?
How to read out point position of a given plot in matplotlib?(without using mouse)
1.2
0
0
37
17,659,176
2013-07-15T16:32:00.000
0
1
0
0
python,flask
32,994,904
3
false
1
0
Add cffi or cryptography to requirements.txt. That solved the problem in my case.
2
2
0
I am attempting to deploy a flask app on Heroku and it always errors at the same place. GCC fails to install and compile the Bcrypt module so I removed it from my requirements.txt (it is not used in the app). When I view the requrements.txt file, there is no mention of Bcrypt but when I push to heroku, it still tries to install it. I have committed the most recent version of requirements.txt to Git. Any help would be greatly appreciated.
Heroku - Flask - Bcrypt error
0
0
0
773
17,659,176
2013-07-15T16:32:00.000
0
1
0
0
python,flask
20,009,155
3
false
1
0
I was able to get around it, kind of, by successfully installing the following: "Successfully installed z3c.bcrypt python-bcrypt py-bcrypt-w32". Installing one of these (likely the second one) is probably what included the main bcrypt library that I guess needed to be compiled? I'm not 100% sure... I noticed this post is from July; I was able to download those libraries all using pip.
2
2
0
I am attempting to deploy a flask app on Heroku and it always errors at the same place. GCC fails to install and compile the Bcrypt module so I removed it from my requirements.txt (it is not used in the app). When I view the requrements.txt file, there is no mention of Bcrypt but when I push to heroku, it still tries to install it. I have committed the most recent version of requirements.txt to Git. Any help would be greatly appreciated.
Heroku - Flask - Bcrypt error
0
0
0
773
17,659,500
2013-07-15T16:49:00.000
2
0
0
1
python,google-app-engine,app-engine-ndb
17,659,738
1
true
1
0
Provided you have indexed them to be queried by date, you can query the entities by date. The query will return the entities of interest. You can find out the ancestor of a given entity from its key - the ancestor's key is part of the entity's key.
1
1
0
Is it possible to delete entities from a datastore table without knowing their ancestor? I wish to delete all entities older than a specific date, but there are many different ancestors.
Delete entities from a datastore table without knowing ancestor
1.2
0
0
80
17,659,535
2013-07-15T16:51:00.000
2
0
1
0
python,file
17,659,659
3
false
0
0
This is a classical concurrency issue. You need to ensure that you exactly control what is happening. Regarding log files, the easiest solution might be to have a queue collecting log messages from various places (from different threads or even processes) and then have one entity that pops messages from that queue and writes them to the log file. This way, at least single messages stay self-contained. The operating system does not prevent message mix up if you write to the file from different unsynchronized entities. Hence, if you do not explicitly control what should happen in which order, you might end up with corrupted messages in that file, even if things seem to work most of the time.
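The standard library already ships this queue-plus-single-writer pattern as QueueHandler/QueueListener; a sketch (a list-backed handler stands in for the real FileHandler so the effect is visible):

```python
import logging
import logging.handlers
import queue

# one queue, many producers (any thread or class), one consumer doing the writing
log_queue = queue.Queue(-1)

# every class/thread logs through the same logger, which only enqueues records
logger = logging.getLogger("shared")
logger.setLevel(logging.INFO)
logger.addHandler(logging.handlers.QueueHandler(log_queue))

# the single writer; a real setup would pass logging.FileHandler("app.log")
records = []

class ListHandler(logging.Handler):
    def emit(self, record):
        records.append(self.format(record))

listener = logging.handlers.QueueListener(log_queue, ListHandler())
listener.start()
logger.info("hello from thread A")
logger.info("hello from thread B")
listener.stop()  # flushes anything still queued
print(records)   # ['hello from thread A', 'hello from thread B']
```

Messages stay self-contained because only the listener thread ever touches the output handler.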
1
0
0
I have two classes in my Python program and one of them is a thread. Is it a bad idea to have both classes open the same log file and write to it? Is there any good approach to write to the same log file for two classes which are running at the same time?
Bad idea to have two class opening the same file?
0.132549
0
0
93
17,662,330
2013-07-15T19:32:00.000
1
0
1
0
python,plone,zope,pyodbc
17,794,367
1
false
0
0
Add the package to the eggs section in buildout and then re-run buildout. There might be additional server requirements to install pyodbc.
1
1
0
How do you install pyodbc package on a Linux (RedHat server RHEL) onto a Zope/Plone bundled Python path instead of in the global Python path? yum install pyodbc and python setup.py install, all put pyodbc in the sys python path. I read articles about putting pyodbc in python2.4/site-packages/ I tried that, but it didn't work for my Plone external method, which still complains about no module named pyodbc.
pyodbc Installation Issue on Plone Python Path
0.197375
1
0
265
17,662,714
2013-07-15T19:54:00.000
0
0
0
0
python,django,django-forms,formset
17,663,244
2
false
1
0
I just came up with the idea that in the first form I can create additional hidden fields, which can be synchronized with the fields from the second form by JavaScript. This creates a small redundancy, but it seems very easy to implement. Is it a good idea?
1
3
0
I am creating simple search engine, so I have one input text field on the top of the page and buttom "search" next to it. That is all in one form, and "produce" for instance/q=search%20query. In sidebar I have panel with another form with filters, lets say from, to. I want to have a possibility of creating link like /q=search%20query&from=20&to=50. I wonder how button from first form should gather information from second form. I read somewhere that there is something like formsets, however I didn't find information that they can be used to something like that.
Gathering information from few forms in django
0
0
0
79
17,664,636
2013-07-15T21:56:00.000
0
0
1
0
python,regex
17,664,727
4
false
0
0
If you want just the numbers, use r'[0-9]+' (or the equivalent r'\d+'). That will give you the separate sequences of digits from the input string.
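A quick check with re.findall (note the character class needs brackets: [0-9]+):

```python
import re

numbers = re.findall(r'[0-9]+', "forum/123/topic/4567")
print(numbers)  # ['123', '4567']
```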
1
0
0
A string "forum/123/topic/4567". How can I edit a regular expression to get '123' and '4567' separately? I have tried lots of methods on the Internet, but nothing works.
how to get a part from a string with regular expression in python
0
0
0
98
17,665,124
2013-07-15T22:40:00.000
18
0
0
0
python,python-2.7,python-3.x
17,665,156
2
false
0
1
1) Talk over a pipe or socket. 2) Enable such Python 3 features as you can from __future__, or use a library like six to write code which is compatible with both. 3) Don't do this. Finally, are you sure you can't use wxPython in Python 3? There's nothing in the online docs saying you can't.
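A sketch of the pipe-or-socket option: run the other interpreter as a child process and exchange JSON over stdin/stdout. Here sys.executable stands in for the path to your python3 binary, and the doubling computation is just a placeholder for your application code:

```python
import json
import subprocess
import sys

# run the "python 3 side" in a child process and exchange JSON over a pipe
child = subprocess.run(
    [sys.executable, "-c",
     "import json, sys; req = json.load(sys.stdin); print(json.dumps(req['x'] * 2))"],
    input=json.dumps({"x": 21}),
    capture_output=True, text=True, check=True,
)
result = json.loads(child.stdout)
print(result)  # 42
```

JSON keeps the wire format identical on both sides, so the GUI process never needs to import any Python 3 code directly.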
1
13
0
I am designing a GUI using the wxPython toolkit, which means it's being written in python2. However, I want to use python3 for the actual application code. How would I go about calling my python3 code from the GUI?
Call python3 code from python2 code
1
0
0
19,541
17,665,330
2013-07-15T23:01:00.000
5
1
1
0
python,packaging
17,722,381
1
false
0
0
It is not required but recommended to include documentation as well as unit tests into the package. Regarding documentation: Old-fashioned or better to say old-school source releases of open source software contain documentation, this is a (de facto?) standard (have a look at GNU software, for example). Documentation is part of the code and should be part of the release, simply because once you download the source release you are independent. Ever been in the situation where you've been on a train somewhere, where you needed to have a quick look into the documentation of module X but didn't have internet access? And then you relievedly realized that the docs are already there, locally. Another important point in this regard is that the documentation that you bundle together with the code for sure applies to the code version. Code and docs are in sync. One more thing especially regarding Python: you can write your docs using Sphinx and then build beautiful HTML output based on the documentation source in the process of installing the package. I have seen various Python packages doing exactly this. Regarding tests: Imagine the tests are bundled in the source release and are easy to be run by the user (you should document how to do this). Then, if the user observes a problem with your code which is not so easy to track down, he can simply run the unit tests in his environment and see if at least those are passing. If not, you've probably made a wrong assumption when specifying the behavior of your code, which is good to know about. What I want to say is: it can be very good for you as a developer if you make it very simple for the user to execute unit tests.
1
8
0
So, I've released a small library on pypi, more as an exercise (to "see how it's done") than anything else. I've uploaded the documentation on readthedocs, and I have a test suite in my git repo. Since I figure anyone who might be interested in running the test will probably just clone the repo, and the doc is already available online, I decided not to include the doc and test directories in the released package, and I was just wondering if that was the "right" thing to do. I know answers to this question will be rather subjective, but I felt it was a good place to ask in order to get a sense of what the community considers to be the best practice.
Releasing a python package - should you include doc and tests?
0.761594
0
0
384
17,666,339
2013-07-16T00:55:00.000
0
0
1
0
python
17,666,654
1
false
0
0
Since the mentioned Stack Overflow links do not have an accepted answer, I'll answer while giving my opinion. I think the best solution for all Python scripts is #!/usr/bin/env python, as changing which python runs is then as simple as modifying the PATH environment variable. This is much better than updating all scripts (assuming the scripts are already written this way). Option 1, python <myscript.py>, can still be used first to test before modifying PATH.
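A minimal sketch of the idea, assuming a POSIX system where /usr/bin/env can find python3 (the temp-file approach and the printed message are just for illustration):

```python
import os
import stat
import subprocess
import tempfile

# Write a hypothetical script that uses the env shebang, so the
# interpreter is resolved through PATH at run time.
fd, path = tempfile.mkstemp(suffix=".py")
with os.fdopen(fd, "w") as f:
    f.write("#!/usr/bin/env python3\nprint('hello')\n")

# Mark it executable, then run it directly; the kernel honours the shebang.
os.chmod(path, os.stat(path).st_mode | stat.S_IEXEC)
out = subprocess.check_output([path]).decode().strip()
```

Switching which Python runs the script is then a matter of putting a different interpreter first on PATH, with no edits to the script itself.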
1
0
0
I outsmarted myself and created an error as a result of trying to be prepared. On the first line of a script I placed #! /usr/bin/python and later upgraded python and installed a new module. Sure enough, the new python is now /opt/local/bin/python and I got errors. It took a bit of debugging before I found this. Anyway, now that I have I am wondering what is the best way to run a script: Should I: use python <myscript.py> or make it executable, add the environment on the first line, and use it from the command line ./<myscript.py> I like 2. but upon upgrading or changing the default python, the script can break because it specifies a different install. Then am I expected to go through all the scripts and update them? Is there a way to make the current/default python override the one specified on line1 of the script or is there another way to make a script executable without explicitly stating which python it uses (ie, to use the default one)?
default python to use in a script
0
0
0
354
17,667,022
2013-07-16T02:26:00.000
11
0
0
0
python,algorithm
17,667,109
3
true
0
0
collect all the values to create a single ordered sequence, with each element tagged with the array it came from: 0(0), 2(1), 3(2), 4(0), 6(1), ... 12(3), 13(2) then create a window across them, starting with the first (0(0)) and ending it at the first position that makes the window span all the arrays (0(0) -> 7(3)) then roll this window by incrementing the start of the window by one, and increment the end of the window until you again have a window that covers all elements. then roll it again: (2(1), 3(2), 4(0), ... 7(3)), and so forth. at each step keep track of the difference between the largest and the smallest. Eventually you find the one with the smallest window. I have the feeling that in the worst case this is O(n^2) but that's just a guess.
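A sketch of this window idea in Python (the function name and the (difference, low, high) return shape are my own choices, not from the question):

```python
def smallest_range(arrays):
    # Merge all values into one sorted sequence, tagging each value
    # with the index of the array it came from.
    tagged = sorted((v, i) for i, arr in enumerate(arrays) for v in arr)
    need = len(arrays)
    counts = {}      # how many window elements come from each array
    best = None
    lo = 0
    for val, src in tagged:
        counts[src] = counts.get(src, 0) + 1
        # Shrink from the left while the leftmost element is redundant.
        while counts[tagged[lo][1]] > 1:
            counts[tagged[lo][1]] -= 1
            lo += 1
        if len(counts) == need:   # window now covers every array
            span = val - tagged[lo][0]
            if best is None or span < best[0]:
                best = (span, tagged[lo][0], val)
    return best  # (difference, window_low, window_high)
```

On the question's example it finds the window 6..9, which contains 9 from a[0], 6 from a[1], 8 from a[2], and 7 from a[3].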
1
14
1
Have a 2-dimensional array, like - a[0] = [ 0 , 4 , 9 ] a[1] = [ 2 , 6 , 11 ] a[2] = [ 3 , 8 , 13 ] a[3] = [ 7 , 12 ] Need to select one element from each of the sub-array in a way that the resultant set of numbers are closest, that is the difference between the highest number and lowest number in the set is minimum. The answer to the above will be = [ 9 , 6 , 8 , 7 ]. Have made an algorithm, but don't feel its a good one. What would be a efficient algorithm to do this in terms of time and space complexity? EDIT - My Algorithm (in python)- INPUT - Dictionary : table{} OUTPUT - Dictionary : low_table{} # N = len(table) for word_key in table: for init in table[word_key]: temp_table = copy.copy(table) del temp_table[word_key] per_init = copy.copy(init) low_table[init]=[] for ite in range(N-1): min_val = 9999 for i in temp_table: for nums in temp_table[i]: if min_val > abs(init-nums): min_val = abs(init-nums) del_num = i next_num = nums low_table[per_init].append(next_num) init = (init+next_num)/2 del temp_table[del_num] lowest_val = 99 lowest_set = [] for x in low_table: low_table[x].append(x) low_table[x].sort() mini = low_table[x][-1]-low_table[x][0] if mini < lowest_val: lowest_val = mini lowest_set = low_table[x] print lowest_set
Finding N closest numbers
1.2
0
0
667
17,667,871
2013-07-16T04:11:00.000
2
1
0
1
python,linux,operating-system,profiling
17,770,525
1
false
0
0
The Dstat source-code includes a few sample programs using Dstat as a library.
1
2
0
Is it possible, on a linux box, to import dstat and use it as an api to collect OS metrics and then compute stats on them? I have downloaded the source and tried to collect some metrics, but the program seems to be optimized for command line usage. Any suggestions as to how to get my desired functionality either using Dstat or any another library?
DSTAT as a Python API ?
0.379949
0
0
468
17,676,036
2013-07-16T12:02:00.000
8
0
0
0
python,selenium,webdriver
17,676,233
2
false
1
0
If it's a new window or an iframe, you should use driver.switch_to_frame(webelement) or driver.switch_to_window(window_name). This should then allow you to interact with the elements within the popup. After you've finished, you should call driver.switch_to_default_content() to return to the main webpage.
1
31
0
I am working on a web application, in which clicking on some link another popup windows appears. The pop windows is not an alert but its a form with various fields to be entered by user and click "Next". How can I handle/automate this popup windows using selenium. Summary :- Click on the hyperlink (url) - "CLICK HERE" A user registration form appears as a pop up A data is to be filled by user Click Next/Submit button. Another next redirected to another page/form 'User Personal Information Page' Personal information is to be filled by user Click "Next/Submit" Popup disappeared. Now further processing on original/Base page.
Python webdriver to handle pop up browser windows which is not an alert
1
0
1
84,462
17,676,081
2013-07-16T12:05:00.000
0
0
1
0
python,user-interface,frontend
17,677,079
1
false
0
0
A simple, probably naïve way could be to structure your CLI program such that its main function accepts your command line arguments, so that you could import it and call it with the options set in the GUI. I've never tried it; my guess is that it could work with simple "pure" CLI programs (i.e., you run it, it does its job, and only then prints its output), but it could get unwieldy with interactive programs needing to prompt the user or with a lot of output.
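A minimal sketch of that structure (the program name and flags are invented for illustration): the core logic lives in a main() that takes an explicit argument list, so a GUI can import the module and call main() with whatever options the user picked.

```python
import argparse

def main(argv=None):
    # All logic goes through here: the CLI passes nothing (sys.argv
    # is used), a GUI passes an explicit list of option strings.
    parser = argparse.ArgumentParser(prog="greet")
    parser.add_argument("name")
    parser.add_argument("--shout", action="store_true")
    args = parser.parse_args(argv)
    msg = "Hello, %s!" % args.name
    return msg.upper() if args.shout else msg
```

From the command line you would wrap it in print(main()); a GUI front end would instead call main(["world", "--shout"]) and display the returned string itself.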
1
2
0
I've been writing command line applications (mainly in Python) for quite a while now, and I've also been doing a bit of GUI programming using (Py)Qt. In the GUI program's I've written, the programs logic and the GUI were often quite integrated. I am now wondering however, how I could write a GUI front end, for the pure command line programs which I've written. Or in other words; how do I write a command line program so that a GUI could be developed completely separate from it? Although I am most interested in Python implementations I think the answer could be pretty general.
How to write a command line program which can be accessed by a separate frontend?
0
0
0
504
17,678,620
2013-07-16T14:00:00.000
5
1
0
1
python-2.7,posix,popen,eof
17,712,430
1
true
0
0
EOF isn't really a signal that you can raise, it's a per-channel exceptional condition. (Pressing Ctrl+D to signal end of interactive input is actually a function of the terminal driver. When you press this key combination at the beginning of a new line, the terminal driver tells the OS kernel that there's no further input available on the input stream.) Generally, the correct way to signal EOF on a pipe is to close the write channel. Assuming that you created the Popen object with stdin=PIPE, you should be able to do this by closing the process's stdin.
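A minimal sketch, assuming a POSIX system with cat available: closing the pipe's write end is what delivers EOF to the child.

```python
import subprocess

# Start a child process that reads stdin until EOF.
proc = subprocess.Popen(["cat"], stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE)
proc.stdin.write(b"hello\n")
proc.stdin.close()        # this is the "EOF": the child sees end-of-stream
out = proc.stdout.read()
proc.wait()
```

Without the close() call, cat would block forever waiting for more input.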
1
2
0
I'm trying to get Python to send the EOF signal (Ctrl+D) via Popen(). Unfortunately, I can't find any kind of reference for Popen() signals on *nix-like systems. Does anyone here know how to send an EOF signal like this? Also, is there any reference of acceptable signals to be sent?
Trying to send an EOF signal (Ctrl+D) signal using Python via Popen()
1.2
0
1
3,348
17,678,927
2013-07-16T14:13:00.000
1
0
0
0
python,http,tornado,http-status-codes
17,681,053
2
true
0
0
Since HTTP 404 responses can have a response body, I would put the detailed error message in the body itself. You can, for example, send the string Author Not Found in the response body. You could also send the response string in the format that your API already uses, e.g. XML, JSON, etc., so that every response from the server has the same basic shape. Whether using code 404 with a X Not Found message depends on the structure of your API. If it is a RESTful API, where each URL corresponds to a resource, then 404 is a good choice if the resource itself is the thing missing. If a requested data field is missing, but the requested resource exists, I don't think 404 would be a good choice.
1
0
0
I have a client-server interface realized using the requests module as client and Tornado as server. I use this to query a database, where some data items may not be available. For example, the author in a query might not be there, or the book title. Is there a recommended way to let my client know what was missing? Like an HTTP 404: Author missing or something like that?
Can I have more semantic meaning in an http 404 error?
1.2
1
0
237
17,679,887
2013-07-16T14:50:00.000
2
0
0
0
python,networking,ip,dhcp,network-interface
46,349,123
6
false
0
0
You can see the content of the file in /sys/class/net/<interface>/operstate. If the content is not "down" then the interface is up.
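A sketch of that check in Python (Linux-only; the helper name is mine):

```python
def interface_is_up(ifname):
    # Linux-specific: read the operstate file under sysfs; anything
    # other than "down" is treated as up.
    try:
        with open("/sys/class/net/%s/operstate" % ifname) as f:
            return f.read().strip() != "down"
    except OSError:
        return False  # no such interface (or not running on Linux)
```

For example, interface_is_up("eth0") would return True on a typical machine with that interface up.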
2
9
0
In Python, is there a way to detect whether a given network interface is up? In my script, the user specifies a network interface, but I would like to make sure that the interface is up and has been assigned an IP address, before doing anything else. I'm on Linux and I am root.
Python: check whether a network interface is up
0.066568
0
1
30,551
17,679,887
2013-07-16T14:50:00.000
13
0
0
0
python,networking,ip,dhcp,network-interface
46,932,803
6
false
0
0
The interface can be configured with an IP address and not be up, so the accepted answer is wrong. You actually need to check /sys/class/net/<interface>/flags. If you read the content into a variable flags, then flags & 0x1 tells you whether the interface is up or not. Depending on the application, /sys/class/net/<interface>/operstate might be what you really want, but technically the interface could be up and the operstate down, e.g. when no cable is connected. All of this is Linux-specific, of course.
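A sketch of the flags check (Linux-only; the helper name is mine, and bit 0 is IFF_UP, the administrative up/down state):

```python
def interface_is_admin_up(ifname):
    # Read the hexadecimal flags from sysfs; bit 0 (IFF_UP) is the
    # administrative state, independent of carrier/cable.
    try:
        with open("/sys/class/net/%s/flags" % ifname) as f:
            flags = int(f.read().strip(), 16)
    except OSError:
        return False
    return bool(flags & 0x1)
```

Combining this with the operstate check gives both the administrative and the operational view of the interface.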
2
9
0
In Python, is there a way to detect whether a given network interface is up? In my script, the user specifies a network interface, but I would like to make sure that the interface is up and has been assigned an IP address, before doing anything else. I'm on Linux and I am root.
Python: check whether a network interface is up
1
0
1
30,551
17,681,217
2013-07-16T15:49:00.000
0
0
0
1
google-app-engine,python-2.7,google-analytics
17,686,245
1
false
1
0
I've been using two branches where the app.yaml files are different. But it requires that I have to be careful when merging and explicitly NOT merge the app.yaml file. It's still a pain.
1
1
0
I have two versions of my app, prod and dev that I manage from one git repo. The way I've been managing them is to constantly switch and uncomment lines in both my app.yaml and my cron.yaml depending upon which version I want to upload. I was wondering if anyone had better experience managing two different versions within one git repo.
How can I manage separate app.yaml / cron.yaml for my app engine source code?
0
0
0
169
17,681,719
2013-07-16T16:11:00.000
-1
0
1
0
python,emacs,python-mode
17,682,028
2
false
0
0
Make a script that reads the directory and evaluates the files in it. Run that.
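A sketch of such a script (Python 3 style, using exec in place of the Python 2 execfile; the function name is mine):

```python
import glob

def eval_directory(dirname, namespace):
    # Execute every .py file in dirname into one shared namespace,
    # roughly what calling execfile() on each file would do.
    for path in sorted(glob.glob(dirname + "/*.py")):
        with open(path) as f:
            exec(compile(f.read(), path, "exec"), namespace)
```

Running this in the inferior Python process reloads all files at once, so definitions from one file are visible to the next.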
1
3
0
Is there any way to get python-mode to eval all files in a directory (or at least all the files I'm importing from)? When I work on a file that imports from another file in the same directory, I have to kill and then re-create the inferior python process in order to pick up changes made in the dependent files.
Evaling files in directory
-0.099668
0
0
83
17,681,731
2013-07-16T16:12:00.000
2
0
0
0
python,django,breakpoints,pycharm
67,087,404
5
false
1
0
In my case, setting the "No reload" option in the run/debug configuration solved the problem. I'm using python 3.8.
4
9
0
I'm seeing some bizarre behaviour in PyCharm. I have a Django project where breakpoints stopped working in some files but not others. For example, all the breakpoints in my app's views.py work fine, but all the breakpoints in that same app's models.py are just ignored. I've tried the following but no joy: double-check the breakpoints are enabled removing/re-adding the breakpoints closed/re-opened the project quit & re-launch PyCharm delete my configuration and create a new one Some details: PyCharm 2.7.3 Python 2.7.2 (within virtualenv) Django 1.5.1 I'm not using any special settings in my configuration. Any ideas?
PyCharm - some breakpoints not working in a Django project
0.07983
0
0
6,842
17,681,731
2013-07-16T16:12:00.000
13
0
0
0
python,django,breakpoints,pycharm
23,372,427
5
false
1
0
If you have the setting "Gevent compatible debugging" enabled it does not seem to hit breakpoints in a non-Gevent django application. Find it under Preferences -> Python Debugger -> Gevent compatible debugging
4
9
0
I'm seeing some bizarre behaviour in PyCharm. I have a Django project where breakpoints stopped working in some files but not others. For example, all the breakpoints in my app's views.py work fine, but all the breakpoints in that same app's models.py are just ignored. I've tried the following but no joy: double-check the breakpoints are enabled removing/re-adding the breakpoints closed/re-opened the project quit & re-launch PyCharm delete my configuration and create a new one Some details: PyCharm 2.7.3 Python 2.7.2 (within virtualenv) Django 1.5.1 I'm not using any special settings in my configuration. Any ideas?
PyCharm - some breakpoints not working in a Django project
1
0
0
6,842
17,681,731
2013-07-16T16:12:00.000
6
0
0
0
python,django,breakpoints,pycharm
17,688,764
5
true
1
0
While I don't know why or how, the problem was resolved by deleting the ".idea" directory within the Django project directory. This is where the PyCharm project data lives, so by removing this directory you will lose your project specific settings, so just be aware. Hope this helps someone else.
4
9
0
I'm seeing some bizarre behaviour in PyCharm. I have a Django project where breakpoints stopped working in some files but not others. For example, all the breakpoints in my app's views.py work fine, but all the breakpoints in that same app's models.py are just ignored. I've tried the following but no joy: double-check the breakpoints are enabled removing/re-adding the breakpoints closed/re-opened the project quit & re-launch PyCharm delete my configuration and create a new one Some details: PyCharm 2.7.3 Python 2.7.2 (within virtualenv) Django 1.5.1 I'm not using any special settings in my configuration. Any ideas?
PyCharm - some breakpoints not working in a Django project
1.2
0
0
6,842
17,681,731
2013-07-16T16:12:00.000
-1
0
0
0
python,django,breakpoints,pycharm
65,536,326
5
false
1
0
Just like the other folders on the left side of your project, there will be a '.idea' folder at the top. Delete that '.idea' folder, close the project, and open it again. That solved my issues.
4
9
0
I'm seeing some bizarre behaviour in PyCharm. I have a Django project where breakpoints stopped working in some files but not others. For example, all the breakpoints in my app's views.py work fine, but all the breakpoints in that same app's models.py are just ignored. I've tried the following but no joy: double-check the breakpoints are enabled removing/re-adding the breakpoints closed/re-opened the project quit & re-launch PyCharm delete my configuration and create a new one Some details: PyCharm 2.7.3 Python 2.7.2 (within virtualenv) Django 1.5.1 I'm not using any special settings in my configuration. Any ideas?
PyCharm - some breakpoints not working in a Django project
-0.039979
0
0
6,842
17,682,444
2013-07-16T16:48:00.000
2
0
0
0
python,database,performance,postgresql,plpgsql
17,686,435
1
true
0
0
There can be differences - PostgreSQL stored procedures (functions) use in-process execution, so there is no interprocess communication - so if you process more data, stored procedures (in the same language) can be faster than a server-side application. But the speedup depends on the size of the processed data.
1
2
0
My teammate and I wrote a Python script running on the same server as the database. Now we want to know if the performance changes when we write the same code as a stored procedure in our Postgres database. What is the difference, or is it the same? Thanks.
What is the difference between using a python script running on server and a stored procedure?
1.2
1
0
507
17,682,705
2013-07-16T17:01:00.000
0
0
0
1
python,mysql,django,macos
46,163,996
1
false
0
0
Try sudo find / -iname "libmysqlclient.*" (quote the pattern so the shell does not expand it before find sees it).
1
0
0
I've been following advice from Stack Overflow posts and I've been asked by MySQLdb to verify that I have libmysqlclient.16.dylib on my computer. Where do I find this in OS X 10.8?
Where is libmysqlclient.16.dylib
0
0
0
378
17,682,818
2013-07-16T17:06:00.000
0
0
0
1
python,udp,heartbeat
17,682,961
5
false
0
0
Yes, that's the way to go - kind of like sending a heartbeat ping. Since it's UDP and since it's just a header message, you can reduce the frequency to, say, 10 seconds. This should not cause any measurable system performance degradation since it's just two systems we are talking about. I feel UDP might be better here compared to TCP: it's lightweight, does not consume a lot of system resources, and is theoretically faster. The downside would be possible packet loss. You can circumvent that by putting in some logic like: when 10 consecutive packets (spaced 10 seconds apart) are not received, declare the other system unreachable.
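A minimal sketch of the keep-alive exchange on localhost (a real deployment would loop on a timer and count missed packets before declaring the peer unreachable):

```python
import socket

# Listener side: bind a UDP socket; port 0 lets the OS pick a free port.
listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
listener.bind(("127.0.0.1", 0))
port = listener.getsockname()[1]

# Sender side: fire a single keep-alive datagram at the listener.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"keep-alive", ("127.0.0.1", port))

# If nothing arrives within the timeout, the peer is considered down.
listener.settimeout(2.0)
data, addr = listener.recvfrom(1024)
```

In the real application the two halves would of course run on different machines, with the target host's address in place of 127.0.0.1.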
1
9
0
So, I have an application in Python, but I want to know if the computer (which is running the application) is on, from another remote computer. Is there any way to do this? I was thinking to use UDP packets, to send some sort of keep-alive, using a counter. Ex. every 5 mins the client sends an UDP 'keep-alive' packet to the server. Thanks in advance!
Python - Detect if remote computer is on
0
0
1
20,956
17,686,939
2013-07-16T21:04:00.000
1
1
0
0
python,amazon,chef-infra,boto,aws-opsworks
17,691,484
2
false
1
0
You can use knife bootstrap. This can be one way to do it. You can use the AWS SDK to do most of it: launch an instance, add a public IP (if it's not in a VPC), wait for the instance to come back online, then use knife bootstrap to supply the script, set up chef-client, and update the system. Then use a Chef cookbook to set up your machine.
1
0
0
I need to create an application that will do the following: Accept request via messaging system ( Done ) Process request and determine what script and what type of instance is required for the job ( Done ) Launch an EC2 instance Upload custom script's (probably from github or may be S3 bucket) Launch a script with a given argument. The question is what is the most efficient way to do steps 3,4,5? Don't understand me wrong, right now I'm doing the same thing with script that does all of this launch instance, use user_data to download necessary dependencies than SSH into instance and launch a script My question is really: is that the only option how to handle this type of work? or may be there is an easy way to do this? I was looking at OpsWork, and I'm not sure if this is the right thing for me. I know I can do steps 3 and 4 with it, but how about the rest? : Launch a script with a given argument Triger an OpsWork to launch an instance when request is came in By the way I'm using Python, boto to communicate with AWS services.
Do I need to SSH into EC2 instance in order to start custom script with arguments, or there are some service that I don't know
0.099668
0
1
218
17,688,959
2013-07-17T00:02:00.000
1
0
1
0
python,google-app-engine,lxml
62,369,096
6
false
0
0
Install premailer using pip install premailer
3
6
0
I'm trying to import premailer in my project, but it keeps failing at the etree import. I installed the 2.7 binary for lxml. The lxml module imports fine, and it's showing the correct path to the library folder if I log the lxml module, but I can't import etree from it. There's an etree.pyd in the lxml folder but python can't seem to see\read it. I'm on windows7 64bit. Does anyone know what's going wrong here?
ImportError: No module named lxml.etree
0.033321
0
1
17,785
17,688,959
2013-07-17T00:02:00.000
1
0
1
0
python,google-app-engine,lxml
17,689,004
6
false
0
0
Try using etree without importing it directly, referencing it as lxml.etree(); if that is not available, the module is probably missing, so install it.
3
6
0
I'm trying to import premailer in my project, but it keeps failing at the etree import. I installed the 2.7 binary for lxml. The lxml module imports fine, and it's showing the correct path to the library folder if I log the lxml module, but I can't import etree from it. There's an etree.pyd in the lxml folder but python can't seem to see\read it. I'm on windows7 64bit. Does anyone know what's going wrong here?
ImportError: No module named lxml.etree
0.033321
0
1
17,785
17,688,959
2013-07-17T00:02:00.000
0
0
1
0
python,google-app-engine,lxml
17,689,259
6
false
0
0
Try: from lxml import etree or import lxml.etree (the latter worked for me instead of lxml.etree()).
3
6
0
I'm trying to import premailer in my project, but it keeps failing at the etree import. I installed the 2.7 binary for lxml. The lxml module imports fine, and it's showing the correct path to the library folder if I log the lxml module, but I can't import etree from it. There's an etree.pyd in the lxml folder but python can't seem to see\read it. I'm on windows7 64bit. Does anyone know what's going wrong here?
ImportError: No module named lxml.etree
0
0
1
17,785
17,689,822
2013-07-17T01:51:00.000
0
1
0
0
python,ssh,proxy,paramiko,tunnel
17,690,326
1
false
0
0
You could implement a SOCKS proxy in the paramiko client that routes connections across the SSH tunnel via paramiko's open_channel method. Unfortunately, I don't know of any out-of-the-box solution that does this, so you'd have to roll your own. Alternatively, run a SOCKS server on the server, and just forward that single port via paramiko.
1
0
0
I'm trying to log into a server using SSH (with Paramiko), then use that connection like a proxy and route network traffic through it and out to the internet - so, say, I could set it as my proxy in urllib2, Mechanize, Firefox, etc. Is the second part possible, or will I have to have some sort of proxy server running on the server to get this to work?
Python Proxy Through SSH
0
0
1
1,036
17,694,389
2013-07-17T08:04:00.000
0
0
0
1
ipython
17,712,389
1
false
0
0
This is not possible with the notebook at this time. The notebook cannot use a kernel that it did not start. You could possibly write a new KernelManager that starts kernels remotely distributed across your machines and plug that into the notebook server, but you cannot attach an Engine or other existing kernel to the notebook server.
1
0
0
Is it possible to run a notebook server with kernels scheduled as processes on a remote cluster (ssh or pbs), with a common directory on NFS? For example, I have three servers with GPUs and would like to run a notebook on one of them, but I do not want to start more than one notebook server. It would be ideal to have the notebook server on a 4th machine which would in some way schedule kernels automatically or manually. I did some trials with making a cluster with one engine. Using %%px in each cell is almost a solution, but one cannot use introspection, and the notebook code is in fact dependent on the cluster configuration, which is not very good.
Bind engine to the notebook session
0
0
0
77
17,695,027
2013-07-17T08:39:00.000
11
0
0
0
python,pyqt,pyqt5
18,178,711
8
false
0
1
I encountered this issue with PyQt5 5.0.2, Windows 8, Python 3.3.2; slightly different error message: Failed to load platform plugin "windows". Available platforms are: Set the following environment variable and then run the application. $env:QT_QPA_PLATFORM_PLUGIN_PATH="C:\Python33\Lib\site-packages\PyQt5\plugins\platforms"
2
9
0
When I try to run any PyQt5 program from Eclipse, I got this error. Failed to load platform plugin "windows". Available platforms are: windows, minimal I've never encountered this problem with PyQt4 but with the new version. I'm not able to run a program. From other questions here I know it happens with Qt C++ development and the solution is to copy some Qt dll files to executable program directory. Do I need to do the same in Python development (PyQt5) too? Add those files to the directory, where my *.py files reside? Shouldn't this be managed by PyQt5 installation? Thank you
PyQt5 - Failed to load platform plugin "windows". Available platforms are: windows, minimal
1
0
0
22,391
17,695,027
2013-07-17T08:39:00.000
5
0
0
0
python,pyqt,pyqt5
43,377,112
8
false
0
1
I had a similar problem when compiling my code with cx_Freeze. Copying the folder platforms from the Python installation directory into my build folder solved the problem. The "platforms" folder contains qminimal.dll.
2
9
0
When I try to run any PyQt5 program from Eclipse, I got this error. Failed to load platform plugin "windows". Available platforms are: windows, minimal I've never encountered this problem with PyQt4 but with the new version. I'm not able to run a program. From other questions here I know it happens with Qt C++ development and the solution is to copy some Qt dll files to executable program directory. Do I need to do the same in Python development (PyQt5) too? Add those files to the directory, where my *.py files reside? Shouldn't this be managed by PyQt5 installation? Thank you
PyQt5 - Failed to load platform plugin "windows". Available platforms are: windows, minimal
0.124353
0
0
22,391
17,699,332
2013-07-17T12:05:00.000
3
0
1
0
python,python-3.x,set
17,699,531
1
true
0
0
Mathematically speaking, sets do not have an order. When displaying or iterating over a set, Python obviously needs to provide a particular order, but this order is arbitrary and not to be relied on. The order is, however, fixed for a particular set; iterating over the same, unmodified set will produce the same order each time.
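A quick illustration: set equality ignores order entirely, and you sort explicitly when an order is actually needed.

```python
# Display order is arbitrary, but membership is what defines the set.
s = {2**x for x in {0, 1, 2, 3, 4}}
same = s == {8, 1, 2, 4, 16}   # True regardless of display order
ordered = sorted(s)            # impose an order explicitly when it matters
```

So {8, 1, 2, 4, 16} and {1, 2, 4, 8, 16} are the same set; only their printed representation differs.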
1
0
0
I'm just starting out learning python set comprehensions. Why does { 2**x for x in {0,1,2,3,4} } return {8, 1, 2, 4, 16} instead of the ordered {1, 2, 4, 8, 16}?
Python Set Comprehensions
1.2
0
0
257
17,702,165
2013-07-17T14:13:00.000
1
0
0
1
python,google-app-engine,memcached,google-cloud-datastore,app-engine-ndb
17,816,617
2
false
1
0
As a commenter on another answer noted, there are now two memcache offerings: shared and dedicated. Shared is the original service, and is still free. Dedicated is in preview, and presently costs $.12/GB hour. Dedicated memcache allows you to have a certain amount of space set aside. However, it's important to understand that you can still experience partial or complete flushes at any time with dedicated memcache, due to things like machine reboots. Because of this, it's not a suitable replacement for the datastore. However, it is true that you can greatly reduce your datastore usage with judicious use of memcache. Using it as a write-through cache, for example, can greatly reduce your datastore reads (albeit not the writes). Hope this helps.
1
3
0
I have been using the datastore with ndb for a multiplayer app. This appears to be using a lot of reads/writes and will undoubtedly go over quota and cost a substantial amount. I was thinking of changing all the game data to be stored only in memcache. I understand that data stored here can be lost at any time, but as the data will only be needed for, at most, 10 minutes and as it's just a game, that wouldn't be too bad. Am I right to move to solely use memcache, or is there a better method, and is memcache essentially 'free' short term data storage?
Datastore vs Memcache for high request rate game
0.099668
1
0
341
17,705,204
2013-07-17T16:22:00.000
1
1
1
0
python,python-2.7
17,705,340
1
false
0
0
Not sure how this would work out, but the only thing I can think of is creating a virtual environment with the required python version, and then sharing that with people. Not the ideal solution, and I'm sure others can suggest something better.
1
0
0
The problem that I have right now is that I require a specific version of Python in order for my source code to work. To make this source code more accessible to everyone, I don't want people to have to go through the hassle of downloading the right Python version. Instead, is there a way to incorporate the right Python version right into my program, or any way to localize Python?
Is there anyway to make a localized python version?
0.197375
0
0
53
17,706,953
2013-07-17T18:00:00.000
0
1
1
0
python,module
17,706,983
1
false
0
0
Yes, it's bad practice, for the very reason that everything ends up in __main__. If you have two modules which have any variable with the same name, one will overwrite the other.
1
0
0
Is it bad practice to execute a script execfile(XX.py), rather than import XX as a module? The reason I'm interested, is that executing the file puts the functions directly into __main__, and then globals are available without needing to explicitly pass them. But, I'm not sure if this creates trouble... Thanks!
Importing Modules vs. Executing Scripts for Global Variables
0
0
0
37
17,709,153
2013-07-17T20:01:00.000
2
0
1
0
python,user-interface,pyqt
17,709,406
1
false
0
1
A reasonably good rule of thumb is that if what you are doing needs more than 20 lines of code it is worth considering using an object oriented design rather than global variables, and if you get to 100 lines you should already be using classes. The purists will probably say never use globals but IMHO for a simple linear script it is probably overkill. Be warned that you will probably get a lot of answers expressing horror that you are not already. There are some really good, (and some of them free), books that introduce you to object oriented programming in python a quick google should provide the help you need. Added Comments to the answer to preserve them: So at 741 lines, I'll take that as a yes to OOP:) So specifically on the data class. Is it correct to create a new instance of the data class 20x per second as data strings come in, or is it more appropriate to append to some data list of an existing instance of the class? Or is there no clear preference either way? – TimoB I would append/extend your existing instance. – seth I think I see the light now. I can instantiate the data class when the "start data" button is pressed, and append to that instance in the subsequent thread that does the serial reading. THANKS! – TimoB
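A sketch of the single-instance approach suggested in the comments (class and method names are invented): one object is created when acquisition starts, and each incoming sample is appended to it rather than wrapped in a new instance.

```python
class DataLog:
    """Accumulates samples arriving from the serial reader."""

    def __init__(self):
        self.values = []
        self.labels = []

    def append(self, value, label):
        # Called for each incoming datum (~20x per second in practice).
        self.values.append(value)
        self.labels.append(label)

    def mean(self):
        # A simple running statistic for the GUI to display.
        return sum(self.values) / len(self.values) if self.values else 0.0

log = DataLog()                       # instantiated once, at "start data"
for v, lab in [(1.0, "ch1"), (2.0, "ch1"), (3.0, "ch1")]:
    log.append(v, lab)
```

The plotting and statistics code then takes the one DataLog instance as an argument instead of reaching for global lists.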
1
1
0
I have a program completed that does the following: 1)Reads formatted data (a sequence of numbers and associated labels) from serial port in real time. 2)Does minor manipulations to data. 3)plots data in real time in a gui I wrote using pyqt. 4)Updates data stats in the gui. 5)Allows post analysis of the data after collection is stopped. There are two dialogs (separate classes) that are called from within the main window in order to select certain preferences in plotting and statistics. My question is the following: Right now my data is read in and declared as several global variables that are appended to as data comes in 20x per second or so - a 2d list of values for the numerical values and 1d lists for the various associated text values. Would it be better to create a class in which to store data and its various attributes, and then to use instances of this data class to make everything else happen - like the plotting of the data and the statistics associated with it? I have a hunch that the answer is yes, but I need a bit of guidance on how to make this happen if it is the best way forward. For instance, would every single datum be a new instance of the data class? Would I then pass them one by one or as a list of instances to the other classes and to methods? How should the passing most elegantly be done? If I'm not being specific enough, please let me know what other information would help me get a good answer.
python program structure and use of global variables
0.379949
0
0
184
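The advice in the comment thread above (instantiate one data holder when "start" is pressed, then append to it as readings arrive) can be sketched roughly like this. The class and attribute names are illustrative, not from the original thread:

```python
class DataLog:
    """Holds one acquisition run: numeric sample rows plus their labels."""

    def __init__(self):
        self.samples = []   # 2D data: one list of numeric values per reading
        self.labels = []    # 1D data: one label string per reading

    def append(self, values, label):
        # Called ~20x per second as parsed serial data comes in.
        self.samples.append(list(values))
        self.labels.append(label)

    def count(self):
        return len(self.samples)


log = DataLog()               # created once, when "start data" is pressed
log.append([1.0, 2.0], "ok")  # appended to, not re-created, per reading
log.append([3.0, 4.0], "ok")
print(log.count())
```

Plotting and statistics code would then take the single `log` instance as an argument instead of reaching into globals.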
17,709,751
2013-07-17T20:37:00.000
0
0
1
0
python,datetime
17,709,832
2
false
0
0
I don't think there is any way to do this; datetime.datetime.min shows that 1 is the minimum value for the year.
2
2
0
Exactly what the title says. If I try to it gives me a ValueError for the year value but I'd like to have a datetime with year 0. Is there any way to do this?
how make a datetime object in year 0 with python
0
0
0
10,714
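A quick check confirms the floor described in the answer above: the year field bottoms out at 1, and year 0 raises ValueError at construction time.

```python
import datetime

print(datetime.MINYEAR)       # the smallest year a datetime allows
print(datetime.datetime.min)  # the earliest representable datetime

try:
    datetime.datetime(0, 1, 1)
    rejected = False
except ValueError:
    rejected = True  # year 0 is below MINYEAR, so construction fails
print("year 0 rejected:", rejected)
```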
17,709,751
2013-07-17T20:37:00.000
4
0
1
0
python,datetime
17,709,791
2
false
0
0
From the docs: "The datetime module exports the following constants: datetime.MINYEAR: The smallest year number allowed in a date or datetime object. MINYEAR is 1. datetime.MAXYEAR: The largest year number allowed in a date or datetime object. MAXYEAR is 9999."
2
2
0
Exactly what the title says. If I try to it gives me a ValueError for the year value but I'd like to have a datetime with year 0. Is there any way to do this?
how make a datetime object in year 0 with python
0.379949
0
0
10,714
17,710,429
2013-07-17T21:19:00.000
5
0
1
0
python,naming
17,710,495
1
true
0
0
Use within. find("needle", within="haystack")
1
2
0
I have a find function that receives an optional keyword-only in_ parameter to narrow the search space. Unfortunately, I had to add the trailing underscore to distinguish it from Python's in keyword. I don't want to expose such an oddly named parameter; are there any better names I could use? English is not my first language, so I am hoping there is a clearly better option. Thanks
Alternative name for a keyword only parameter "in_"
1.2
0
0
103
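The accepted suggestion looks like this in practice; the function body is a made-up stand-in just to show the keyword-only `within` parameter:

```python
def find(needle, *, within=None):
    # 'within' is keyword-only; None means "search the default space".
    haystack = within if within is not None else "default haystack"
    return haystack.find(needle)

print(find("stack", within="haystack"))
```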
17,710,943
2013-07-17T21:53:00.000
0
1
1
0
c++,python,map,concurrency,rpc
17,711,039
2
false
0
0
Many factors can affect the selection. One solution is to use FastCGI: the client sends an HTTP request to an HTTP server that has FastCGI enabled; the HTTP server dispatches the request to your RPC server via the FastCGI mechanism; the RPC server processes it and generates a response, which it sends back to the HTTP server; the HTTP server then sends the response back to your client.
1
0
0
I would like to have a class written in C++ that acts as a remote procedure call server. I have a large (over a gigabyte) file that I parse, reading in parameters to instantiate objects which I then store in a std::map. I would like the RPC server to listen for calls from a client, take the parameters passed from the client, look up an appropriate value in the map, do a calculation, and return the calculated value back to the client, and I want it to serve concurrent requests -- so I'd like to have multiple threads listening. BTW, after the map is populated, it does not change. The requests will only read from it. I'd like to write the client in Python. Could the server just be an HTTP server that listens for POST requests, and the client can use urllib to send them? I'm new to C++ so I have no idea how to write the server. Can anyone point me to some examples?
Concurrent Remote Procedure Calls to C++ Object
0
0
0
674
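On the Python client side mentioned in the question, urllib can issue the POST. This is a minimal sketch; the URL and parameter name are hypothetical and would depend on the actual server:

```python
import urllib.parse
import urllib.request

def build_request(key, url="http://localhost:8000/lookup"):
    # Encode the lookup parameters as a POST body;
    # urllib.request.urlopen(req) would actually send it.
    data = urllib.parse.urlencode({"key": key}).encode("ascii")
    return urllib.request.Request(url, data=data)

req = build_request("needle")
print(req.get_method())  # a Request carrying a body defaults to POST
```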
17,712,632
2013-07-18T00:38:00.000
0
0
1
0
python-2.7
17,712,665
2
false
0
0
One option is to dump that list into a temp file and read it from your other Python script. Another option (if one Python script calls the other) is to pass the list as an argument (e.g. using sys.argv[1] and *args, etc.).
2
0
0
Is there a way I can import a list from a different Python file? For example if I have a list: list1 = ['horses', 'sheep', 'cows', 'chickens', 'dog'] Can I import this list into other files? I know to import other functions you do from FileName import DefName This is a user defined list and I don't want to have the user input the same list a million times. Just a few maybes as to how this could be done: from FileName import ListName or put all the lists into a function and then import the definition name Thanks for the help
Import a list from a different file
0
0
0
54
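For the record, the `from FileName import ListName` form the asker guessed at does work: a module-level list imports exactly like a function. This sketch creates a throwaway module on disk to stand in for the user's file; the names `animals.py` and `list1` are illustrative:

```python
import pathlib
import sys
import tempfile

# Write a stand-in module so the example is self-contained.
tmpdir = tempfile.mkdtemp()
pathlib.Path(tmpdir, "animals.py").write_text(
    "list1 = ['horses', 'sheep', 'cows', 'chickens', 'dog']\n"
)
sys.path.insert(0, tmpdir)

from animals import list1  # importing a list works just like importing a def
print(list1)
```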
17,712,632
2013-07-18T00:38:00.000
0
0
1
0
python-2.7
18,006,639
2
true
0
0
I'll just export the lists to a file; that way every piece of code can read it.
2
0
0
Is there a way I can import a list from a different Python file? For example if I have a list: list1 = ['horses', 'sheep', 'cows', 'chickens', 'dog'] Can I import this list into other files? I know to import other functions you do from FileName import DefName This is a user defined list and I don't want to have the user input the same list a million times. Just a few maybes as to how this could be done: from FileName import ListName or put all the lists into a function and then import the definition name Thanks for the help
Import a list from a different file
1.2
0
0
54
17,713,873
2013-07-18T03:11:00.000
0
0
1
0
python,list,sorting,time
17,713,985
4
false
0
0
Since your times are parsed as zero-padded %H:%M:%S strings, lexicographic order matches chronological order, so a plain sort() works (a key like str.lower only affects letter case, which does nothing for digits). If the format might ever vary, parse each value with datetime.strptime and use that as the sort key.
1
12
0
I have a python list of time values that I extracted from a web log. I have the list in the format of %H:%M:%S. How would I sort the time values in ascending order?
How to sort a list of time values?
0
0
0
28,339
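Both approaches from the answer above, side by side; the sample times are made up:

```python
from datetime import datetime

times = ["23:01:12", "04:15:59", "04:05:00", "12:30:30"]

# Zero-padded %H:%M:%S strings sort chronologically as plain strings:
as_strings = sorted(times)

# Parsing each value is safer if the format might ever vary:
as_parsed = sorted(times, key=lambda t: datetime.strptime(t, "%H:%M:%S"))

print(as_strings)
print(as_parsed)  # same order either way for this format
```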
17,716,737
2013-07-18T07:04:00.000
3
0
1
0
python,random
17,716,804
1
true
0
0
Create two variables that hold the lowest and highest possible values. Whenever you get a "higher" or "lower" response, update the appropriate bound. Then make the RNG pick a value between the two.
1
1
1
I'm having a little bug in my program where you give the computer a random number to try to guess, and a range to guess between and the amount of guesses it has. After the computer generates a random number, it asks you if it is your number, if not, it asks you if it is higher or lower than it. My problem is, if your number is 50, and it generates 53, you would say "l" or "lower" or something that starts with "l". Then it would generate 12 or something, you would say "higher" or something, and it might give you 72. How could I make it so that it remembers to be lower than 53?
Random number indexing past inputs Python
1.2
0
0
53
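The bounds-tracking idea from the accepted answer can be sketched like this, with the interactive higher/lower prompt replaced by a known secret so the loop is self-contained (the function name and defaults are illustrative):

```python
import random

def guess_number(secret, low=1, high=100):
    # Shrink [low, high] after every wrong guess so earlier
    # higher/lower answers are remembered.
    guesses = 0
    while True:
        guess = random.randint(low, high)
        guesses += 1
        if guess == secret:
            return guesses
        if guess < secret:   # player would say "higher": raise the floor
            low = guess + 1
        else:                # player would say "lower": lower the ceiling
            high = guess - 1

random.seed(0)
print(guess_number(50))  # number of guesses needed this run
```

Because the interval always shrinks and always contains the secret, the loop is guaranteed to terminate.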