Q_Id: int64 (min 337, max 49.3M)
CreationDate: stringlengths (min 23, max 23)
Users Score: int64 (min -42, max 1.15k)
Other: int64 (0 or 1)
Python Basics and Environment: int64 (0 or 1)
System Administration and DevOps: int64 (0 or 1)
Tags: stringlengths (min 6, max 105)
A_Id: int64 (min 518, max 72.5M)
AnswerCount: int64 (min 1, max 64)
is_accepted: bool (2 classes)
Web Development: int64 (0 or 1)
GUI and Desktop Applications: int64 (0 or 1)
Answer: stringlengths (min 6, max 11.6k)
Available Count: int64 (min 1, max 31)
Q_Score: int64 (min 0, max 6.79k)
Data Science and Machine Learning: int64 (0 or 1)
Question: stringlengths (min 15, max 29k)
Title: stringlengths (min 11, max 150)
Score: float64 (min -1, max 1.2)
Database and SQL: int64 (0 or 1)
Networking and APIs: int64 (0 or 1)
ViewCount: int64 (min 8, max 6.81M)
22,464,900
2014-03-17T20:40:00.000
1
0
1
0
python,dictionary
22,464,944
3
false
0
0
As far as I know there's no hard limit, but keep in mind that the longer the key, the longer it takes to hash it when creating or accessing entries.
1
26
0
I was wondering if Python had a limit on the length of a dictionary key. For clarification, I'm not talking about the number of keys, but the length of each individual key. I'm going to be building my dictionaries based on dynamic values (after validation), but I'm not sure if I should be taking length into account in this case.
Do Dictionaries have a key length limit?
0.066568
0
0
27,246
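A small check of the answer's point, runnable as plain Python: there is no hard limit on key length, only a hashing cost that grows with it (the sizes here are arbitrary).

```python
# Dictionary keys can be arbitrarily long strings; Python imposes no limit.
short_key = "k"
long_key = "k" * 1_000_000  # a one-million-character key is perfectly legal

d = {short_key: 1, long_key: 2}

# str objects cache their hash, so even a huge key is hashed only once per
# object; repeated lookups of the same key object stay cheap.
value = d[long_key]
```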
22,473,370
2014-03-18T07:56:00.000
0
0
0
0
python,django,amazon-s3
22,474,074
1
false
1
0
django-storages works quite well and many other Django packages rely on it. It does provide other storage backends besides S3 but, of course, you don't need to use any of the others. It does need to know your AWS access key and secret key, but you don't need to actually put those values in your settings.py; typically, you'll put them in environment variables and read them in settings.py, like: AWS_S3_ACCESS_KEY_ID = os.environ['YOUR_AWS_ACCESS_KEY_ID'] AWS_S3_SECRET_ACCESS_KEY = os.environ['YOUR_AWS_SECRET_ACCESS_KEY']
1
0
0
I am looking for a simple but effective S3-based Django package through which subscribers of the website can use storage services directly without any hassle. I am a beginner with Django, so I'm really looking for something simple to use. Please recommend something that matches this requirement exactly; the resources I've found previously cover all storage services and get too complicated for me to understand or apply. I need something that stores files directly to S3, bypassing the web server layer. I also don't want to save the Access Key/Secret Key in my global settings file, settings.py. Please help.
Need Django Package for S3
0
0
0
67
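A minimal, runnable sketch of the env-var advice in the answer above. The MYAPP_* variable names are hypothetical; in a real deployment you would export them in the shell rather than assign them in code.

```python
import os

# Stand-in for `export MYAPP_AWS_ACCESS_KEY_ID=...` done in the shell:
os.environ["MYAPP_AWS_ACCESS_KEY_ID"] = "dummy-key"
os.environ["MYAPP_AWS_SECRET_ACCESS_KEY"] = "dummy-secret"

# These two lines are what would live in settings.py, so no secret is
# ever committed to the settings file itself:
AWS_S3_ACCESS_KEY_ID = os.environ["MYAPP_AWS_ACCESS_KEY_ID"]
AWS_S3_SECRET_ACCESS_KEY = os.environ["MYAPP_AWS_SECRET_ACCESS_KEY"]
```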
22,474,051
2014-03-18T08:36:00.000
3
0
1
0
python,visual-studio-2010,python-2.7,visual-studio-2012,ptvs
22,486,749
1
true
0
0
When you say you're setting breakpoints at function definitions, do you mean on the line with the "def ..." or on the 1st statement of the function? In Python, functions are executable statements, so if you put the breakpoint on the def line you're going to hit it when the function is being defined rather than when it is executed. As for the console window, it will generally open unless you mark your app as a Windows application in the project properties (this launches pythonw.exe, which doesn't include a console window). If that doesn't help, you might want to post the code you're having trouble with, or a screenshot of the code showing where the breakpoints are set.
1
3
0
I have integrated PTVS into Visual Studio so that I can have IntelliSense support and debugging capability. I set breakpoints at function definitions, but when I debug, control goes directly out of the function. At some points the console window pops up and it never steps to the next line of code. I like PTVS, but this has me stuck. In Options -> Python Tools -> Interpreter Options I have set it to Python 2.7. Can anyone tell me what's wrong with the options and why that console screen is appearing? Thanks in advance.
Console Window Appears while debugging Python code using PTVS in Visual Studio
1.2
0
0
1,650
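The answer's point that a def line is itself an executable statement can be seen without a debugger by tracing execution order:

```python
trace = []

trace.append("before def")

def greet():
    # A breakpoint on the def line above fires while the function is being
    # *defined*; a breakpoint on this line fires only when it is *called*.
    trace.append("inside greet")

trace.append("after def")  # runs before the function body ever executes
greet()                    # only now does "inside greet" happen
```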
22,474,657
2014-03-18T09:07:00.000
3
0
0
0
django,python-2.7,django-models,django-forms
22,474,781
2
true
1
0
No. blank is enforced solely at the application level.
1
0
0
I have deployed my Django website but just now realized that I didn't make one of the fields compulsory. Currently the field has blank=True, null=True. Now if I go ahead and change it to blank=False, will there be any effect on the database and the already existing data in it?
Making a field "blank=False" in production
1.2
0
0
104
22,479,095
2014-03-18T12:15:00.000
6
0
0
0
python,django,rest,django-models,django-views
22,479,708
2
true
1
0
I think where to call web services is a matter of opinion. I would say don't pollute your models, because that implies you need instances of those models to call the web services, which might not make sense. Your other choice there is to make things @classmethods on the models, which is not very clean design, I would argue. Calling from the view is probably more natural if accessing the view itself is what triggers the web service call. Is it? You said that you need to keep things in sync, which points to a possible need for background processing. At that point you can still use views if your background processes issue HTTP requests, but that's often not the best design. If anything, you would probably want your own REST API for this, which necessitates separating the code from your average website view. My opinion is that these calls should be placed in modules and classes specifically encapsulating your remote calls and processing. This makes things flexible (background jobs, signals, etc.) and also easier to unit test. You can trigger this code from the views or elsewhere, but the logic itself should be separate from both the views and the models to decouple things nicely. You should imagine that this logic could exist on its own with no Django around it, then build the pieces that connect it to Django (e.g. syncing the models). In other words, keep things atomic. Yes, same reasons as above, especially flexibility. Is there any reason not to? Yes, simply create the equivalent of an interface. Have each class map to the interface. If the fields are the same and you are lazy, in Python you can just dump the fields you need as dicts to the constructor (using **kwargs) and be done with it, or rename the keys using some convention you can process. I usually build some sort of simple data-mapper class for this and process the Django or REST models in a list comprehension, but there's no need if things match up as I mentioned.
Another related option is to dump things into a common structure in a cache such as Redis or Memcached. It might be wise to update this info atomically if you are concerned with "freshness," but in general you should have a single source of authority that can tell you what is actually fresh. In sync situations, I think it's better to pick one or the other to keep things predictable and clear. One last thing that might influence your design is that, by definition, keeping things in sync is a difficult process. Syncs tend to be very prone to failure, so you should have some durable mechanism such as a task queue or job system for retries. Always assume, when calling a remote REST API, that calls can fail for arbitrary reasons such as network hiccups. Also keep in mind transactions and transactional behavior when syncing. Since these are important, this again points to the fact that if you put all this logic directly in a view, you will probably run into trouble reusing it in the background without abstracting things a bit anyway.
2
10
0
I have to call external REST APIs from Django. The external data source schemas resemble my Django models. I'm supposed to keep the remote data and local ones in sync (maybe not relevant for the question) Questions: What is the most logical place from where to call external web services: from a model method or from a view? Should I put the code that call the remote API in external modules that will be then called by the views? Is it possible to conditionally select the data source? Meaning presenting the data from the REST API or the local models depending on their "freshness"? Thanks EDIT: for the people willing to close this question: I've broken down the question in three simple questions from the beginning and I've received good answers so far, thanks.
Django calling REST API from models or views?
1.2
0
0
4,474
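One way to picture the "encapsulate remote calls in their own module" advice is a small data-access class that views and background jobs can share, with a freshness check deciding between the remote API and the local copy. Everything here (fetch_remote, fetch_local, ArticleSource, the 30-second window) is a hypothetical stand-in, not code from the answer.

```python
import time

def fetch_remote():
    # Stand-in for the external REST API call.
    return {"source": "remote", "value": 42}

def fetch_local():
    # Stand-in for reading the local Django models.
    return {"source": "local", "value": 41}

class ArticleSource:
    """Single place that decides where data comes from."""

    def __init__(self, max_age_seconds=30.0):
        self.max_age = max_age_seconds
        self._cached = None
        self._fetched_at = 0.0

    def get(self):
        # Serve the local copy while it is fresh; otherwise hit the API.
        if self._cached is not None and time.time() - self._fetched_at < self.max_age:
            return self._cached
        try:
            data = fetch_remote()
        except Exception:  # network hiccup: fall back to local data
            data = fetch_local()
        self._cached, self._fetched_at = data, time.time()
        return data

source = ArticleSource()
result = source.get()        # first call goes remote
cached = source.get()        # second call is served from the fresh cache
```

Because the class knows nothing about Django, it can be driven from a view, a management command, or a task queue alike.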
22,479,095
2014-03-18T12:15:00.000
3
0
0
0
python,django,rest,django-models,django-views
22,479,360
2
false
1
0
What is the most logical place from which to call external web services: a model method or a view? Ideally your models should only talk to the database and have no clue about your business logic. Should I put the code that calls the remote API in external modules that will then be called by the views? If you need to access them from multiple modules, then yes, placing them in a module makes sense; that way you can reuse them efficiently. Is it possible to conditionally select the data source, i.e. presenting the data from the REST API or the local models depending on their "freshness"? Of course it's possible: you can implement how you fetch your data on each request. But the more efficient way might be to avoid that logic entirely, sync your local data with the remote data, and show the local data in the views.
2
10
0
I have to call external REST APIs from Django. The external data source schemas resemble my Django models. I'm supposed to keep the remote data and local ones in sync (maybe not relevant for the question) Questions: What is the most logical place from where to call external web services: from a model method or from a view? Should I put the code that call the remote API in external modules that will be then called by the views? Is it possible to conditionally select the data source? Meaning presenting the data from the REST API or the local models depending on their "freshness"? Thanks EDIT: for the people willing to close this question: I've broken down the question in three simple questions from the beginning and I've received good answers so far, thanks.
Django calling REST API from models or views?
0.291313
0
0
4,474
22,483,205
2014-03-18T15:03:00.000
1
0
0
0
python,mysql,django
22,500,375
1
false
1
0
Found the solution - I had to use dumpdata app.model --natural
1
0
0
I have a model with two DateField fields in it, which I dumped to JSON using dumpdata. Now I want to load those fixtures (I am using South) into my MySQL database, which leads to the following error: CommandError: The database backend does not accept 0 as a value for AutoField. Does anybody know this problem and its solution? My database is MySQL (version 5.6.12) and I'm using Django 1.5.1. I used SQLite before and want to change to MySQL.
Python Django: Load Autofield to MySql Table using loaddata
0.197375
0
0
344
22,487,388
2014-03-18T18:00:00.000
0
0
0
0
python,sockets,udp,port,nat
22,487,519
2
false
0
0
The simple answer is that you don't care what the "real" (i.e. pre-NATted) port is. Just reply to the address the query came from and let the NAT handle delivering the result. If you ABSOLUTELY have to know the source UDP port, include the information in your UDP payload -- but I strongly recommend against this.
2
1
0
I'm working on something that sends data from one program over UDP to another program at a known IP and port. The program at the known IP and port receives the message from the originating IP but thanks to the NAT the port is obscured (to something like 30129). The program at the known IP and port wants to send an acknowledgement and/or info to the querying program. It can send it back to the original IP and the obscured port #. But how will the querying program know what port to monitor to get it back on? Or, is there a way (this is Python) to say "send this out over port 3200 to known IP (1.2.3.4) on port 7000? That way the known IP/port can respond to port 30129, but it'll get redirected to 3200, which the querying program knows to monitor. Any help appreciated. And no, TCP is not an option.
How to determine outgoing port in Python through NAT
0
0
1
369
22,487,388
2014-03-18T18:00:00.000
1
0
0
0
python,sockets,udp,port,nat
22,488,054
2
false
0
0
Okay, I figured it out - the trick is to use the same sock object to receive that you used to send. At least in initial experiments, that seems to do the trick. Thanks for your help.
2
1
0
I'm working on something that sends data from one program over UDP to another program at a known IP and port. The program at the known IP and port receives the message from the originating IP but thanks to the NAT the port is obscured (to something like 30129). The program at the known IP and port wants to send an acknowledgement and/or info to the querying program. It can send it back to the original IP and the obscured port #. But how will the querying program know what port to monitor to get it back on? Or, is there a way (this is Python) to say "send this out over port 3200 to known IP (1.2.3.4) on port 7000? That way the known IP/port can respond to port 30129, but it'll get redirected to 3200, which the querying program knows to monitor. Any help appreciated. And no, TCP is not an option.
How to determine outgoing port in Python through NAT
0.099668
0
1
369
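The trick described in the answer above (reuse the same socket object for sending and receiving, so the reply comes back through the mapping created by the send) can be demonstrated on localhost, where the same mechanics apply minus the actual NAT:

```python
import socket

# "Known IP/port" side: bind to an OS-assigned port on loopback.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
server.settimeout(2.0)
server_addr = server.getsockname()

# Querying side: one socket used for BOTH send and receive.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(2.0)
client.sendto(b"query", server_addr)   # OS picks an ephemeral source port

# The server replies to whatever source address it saw (post-NAT in the
# real scenario) -- it never needs to know the "original" port.
data, client_addr = server.recvfrom(1024)
server.sendto(b"ack", client_addr)

# Because the client reuses the same socket, the ack lands here.
reply, _ = client.recvfrom(1024)
```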
22,488,460
2014-03-18T18:51:00.000
0
0
0
0
python,scipy,filtering
22,663,453
2
true
0
0
So finally I adapted one filter to get the zero-frequency component and a separate bandpass filter to get the 600 Hz component. pass_zero has to be True only for the zero-frequency filter; then it works. I'm not yet happy with the phase delay, but I'm working on it.
1) 600 Hz bandpass: taps_bp = bp_fir(ntaps, lowcut, highcut, fs)
    def bp_fir(ntaps, lowcut, highcut, fs, window='hamming'):
        nyq = fs / 2.0
        taps = scipy.signal.firwin(ntaps, [lowcut, highcut], nyq=nyq, window=window, pass_zero=False)
        return taps
2) zero-frequency filter: taps_zerofrequency = zero_fir(ntaps, zerofreq=1, fs=fs)
    def zero_fir(ntaps, zerofreq, fs, window='hamming'):
        nyq = fs / 2.0
        taps = scipy.signal.firwin(ntaps, zerofreq, nyq=nyq, window=window, pass_zero=True)
        return taps
1
0
1
I am implementing a bandpass filter in Python using scipy.signal (using the firwin function). My original signal consists of two frequencies (w_1=600Hz, w_2=800Hz). There might be many more frequencies, which is why I need a bandpass filter. In this case I want to filter the frequency band around 600 Hz, so I took 600 +/- 20 Hz as cutoff frequencies. When I implemented the filter and reproduced the signal in the time domain using lfilter, the frequency is fine and the amplitude is reproduced in the right magnitude as well. But the problem is that the signal is shifted in the y-direction. For example: s(t)=s_1(t)+s_2(t) with s_1(t)=sin(w_1 t)+3 and s_2(t)=sin(w_2 t) returns a filtered signal which varies around 0, not around [2, 4].
Time signal shifted in amplitude, FIR filter with scipy.signal
1.2
0
0
794
22,490,257
2014-03-18T20:24:00.000
2
0
0
0
python,django,amazon-web-services,amazon-ec2
22,513,826
1
true
1
0
sudo python manage.py runserver 0.0.0.0:80 did the trick. (The security group only allowed port 80, not 8000, and binding to a port below 1024 requires root, hence the sudo.)
1
3
0
I am new to AWS; here are the steps I followed to set up a Django web server (but it's not running on the public IP): created an AWS instance; installed Django 1.6.2; created a sample app; added a security group (inbound rules) to the running instance with HTTP - TCP - 80 - 0.0.0.0/0; tried the following ways to run the server: python manage.py runserver 0.0.0.0:8000, python manage.py runserver ec2-XX-XXX-XXX-XX.us-west-2.compute.amazonaws.com:8000, python manage.py runserver. But the server is not accessible from the public DNS given by EC2. NOTE: running a micro instance with Ubuntu 12.04 (LTS) and virtualenv. What is missing in the above steps? Thanks.
AWS EC2 not running web server on default port
1.2
0
0
647
22,490,452
2014-03-18T20:35:00.000
0
0
0
0
python,http
22,567,478
1
false
0
0
Let me make the question clearer: the HTTP location mentioned is a common shared-folder link which a number of people can use by logging in with a username/password. For example, my target location is http:/commonfolder.mydomain.com/somelocation/345333 with username = someuid and password = xxxxx. On the source side I have a my.txt file containing some data. I want to run a shell or Python script which will transfer the file to the target location above, so that many people can start using the latest info I upload regularly. -shijo
1
0
0
I am looking for a script which will upload files from a unix server to an HTTP location. I want to use only these modules - cookielib, urllib2, urllib, mimetypes, mimetools, traceback
How to upload files to an HTTP location using a python script
0
0
1
62
22,493,765
2014-03-19T00:16:00.000
2
0
1
0
java,python
55,081,785
6
false
0
0
There is no (strict) equivalent, since in Java you have to specify the return type of a method in its declaration, which is then checked against the type of the expression in each return statement. So for methods that have a return type, neither a semicolon nor empty braces will work. I personally use: throw new java.lang.UnsupportedOperationException("Not supported yet."); This is searchable, alerts you that the method is not implemented, and is compatible with strict return types.
2
57
0
What is the equivalent to Python's pass in Java? I realize that I could use continue or not complete the body of a statement to achieve that effect, but I like having a pass statement.
Java do nothing
0.066568
0
0
100,196
22,493,765
2014-03-19T00:16:00.000
92
0
1
0
java,python
22,493,796
6
true
0
0
Just use a semi-colon ;, it has the same effect.
2
57
0
What is the equivalent to Python's pass in Java? I realize that I could use continue or not complete the body of a statement to achieve that effect, but I like having a pass statement.
Java do nothing
1.2
0
0
100,196
22,495,767
2014-03-19T03:42:00.000
1
0
0
0
python,django,session,flask,server-side
22,520,376
2
false
1
0
Celery is a great solution, but it can be overkill for many setups. If you just need tasks to run periodically (once an hour, once a day, etc.), consider just using cron. There's a lot less setup and it can get you quite far.
1
0
0
I have searched the forums for my question, but either I'm naming the thing wrongly in my searches or the question is hard, which I really doubt. I am developing a web app with a web interface written in one of the MVC frameworks like Django or Flask. It lets a user log in, identifies the user's session, and allows the user to make some settings. My app also needs to run a Python process (a script which is basically a separate file) on the server, per session and per the settings made by the user. This process is quite long-running - it can take even days - and shouldn't affect the execution and performance of the MVC part of the app. Another issue is that this process should be run per user, so the basic usage model would be: 1. The user enters the site. 2. The user makes some settings, which are mirrored to the database. 3. The user pushes the launch button, which executes some Python script just for this user with the settings he has made. 4. The user is able to monitor some parameters of the running script based on messages that the script itself generates. I understand that my question relates to the architecture of the app itself. I'm quite new to Python and haven't developed such a complex application before, but I'm eager to learn. I understand the bricks from which my app should be built (like Django or Flask and the server-side script itself), but I know very little about how these elements should be glued together into a seamless environment. Please direct me to some articles on this topic, recommend similar threads, or just give a clear high-level explanation of how such separate Python processes can be triggered, run, and monitored on a per-user basis from the controller part of MVC.
Server-side Python code running continuously per session
0.099668
0
0
183
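The launch-and-monitor flow from the question (start a separate Python process per user, then watch messages it produces) can be sketched with the standard library alone. The inline worker script and the JSON status file are hypothetical stand-ins for the long-running job and its progress messages.

```python
import json
import os
import subprocess
import sys
import tempfile

# Stand-in for the user's long-running script: it reports its state by
# writing a small JSON status file whose path it receives as argv[1].
worker = """
import json, sys
with open(sys.argv[1], "w") as f:
    json.dump({"progress": 100, "state": "done"}, f)
"""

status_path = os.path.join(tempfile.mkdtemp(), "status.json")

# The web view would launch this per user and return immediately;
# here we wait so the example is deterministic.
proc = subprocess.Popen([sys.executable, "-c", worker, status_path])
proc.wait()

# The monitoring view would poll this file to show progress to the user.
with open(status_path) as f:
    status = json.load(f)
```

In a real app the monitoring side would poll instead of calling proc.wait(), and a task queue would add retries and durability.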
22,495,894
2014-03-19T03:54:00.000
11
1
0
0
python,django,cloud,wsgi
22,497,827
4
true
1
0
Deploying .pyc files will not always work. If using Apache/mod_wsgi, for example, at least the WSGI script file still needs to be straight Python code, and some web frameworks may require the original source code files to be available. Using .pyc files also does little to obscure any sensitive information that may be in templates used by a web framework. In general, .pyc files are a very weak defence, and tools are available to reverse engineer them and extract information. So technically your application may run, but this would not be regarded as a very secure way of protecting your source code. You are better off using a hosting service you trust; this generally means paying for reputable hosting rather than just the cheapest one you can find.
3
6
0
I want to deploy a Django application to a cloud computing environment, but I am worried about source code security. Can I deploy only the compiled .pyc files there? According to official python doc, pyc files are 'moderately hard to reverse engineer'. What are the pros and cons of taking this approach? Is this a standard practice? I am not using AWS, let me just say that I am in a country where cloud computing can not be trusted at all...
Should I deploy only the .pyc files on server if I worry about code security?
1.2
0
0
6,259
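The mechanics being debated, compiling a module ahead of time and shipping only the bytecode, can be sketched with the stdlib py_compile module; the module name and contents here are illustrative. CPython can import a "sourceless" .pyc placed where the .py would normally be, provided it was compiled by the same interpreter version (the answers' compatibility caveat).

```python
import os
import py_compile
import sys
import tempfile

# Write a throwaway source module.
src_dir = tempfile.mkdtemp()
src = os.path.join(src_dir, "secretlogic.py")
with open(src, "w") as f:
    f.write("def answer():\n    return 42\n")

# Compile it to a sourceless .pyc sitting next to where the .py was.
pyc = py_compile.compile(src, cfile=src + "c")

# Delete the source: only the bytecode gets "deployed".
os.remove(src)

# The .pyc is still importable by the same interpreter version.
sys.path.insert(0, src_dir)
import secretlogic
result = secretlogic.answer()
```

As the answers note, tools can decompile such files back to readable source, so this hides little.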
22,495,894
2014-03-19T03:54:00.000
1
1
0
0
python,django,cloud,wsgi
22,496,250
4
false
1
0
Generally, deploying .pyc files will work. The pros: as you said, it helps a bit with protecting source code. The cons I found: 1) A .pyc only works with the same Python version. E.g., if "a.pyc" was compiled by Python 2.6 and "b.pyc" by 2.7, and b.pyc does "import a", it won't work; similarly, "python2.6 b.pyc" won't work either. So remember to generate all the .pyc files with the same Python version as the one on your cloud server. 2) If you want to SSH to the cloud server for some live debugging, .pyc files can't help you. 3) Deployment requires extra steps.
3
6
0
I want to deploy a Django application to a cloud computing environment, but I am worried about source code security. Can I deploy only the compiled .pyc files there? According to official python doc, pyc files are 'moderately hard to reverse engineer'. What are the pros and cons of taking this approach? Is this a standard practice? I am not using AWS, let me just say that I am in a country where cloud computing can not be trusted at all...
Should I deploy only the .pyc files on server if I worry about code security?
0.049958
0
0
6,259
22,495,894
2014-03-19T03:54:00.000
1
1
0
0
python,django,cloud,wsgi
22,495,975
4
false
1
0
Yes, just deploying the compiled files is fine. Another point to consider is the other aspects of your application: for example, whether current bugs let malicious users know what technology stack you are using, or what kind of error messages are displayed when (if) your application crashes. These seem like some of the other aspects to me; I'm sure there are more.
3
6
0
I want to deploy a Django application to a cloud computing environment, but I am worried about source code security. Can I deploy only the compiled .pyc files there? According to official python doc, pyc files are 'moderately hard to reverse engineer'. What are the pros and cons of taking this approach? Is this a standard practice? I am not using AWS, let me just say that I am in a country where cloud computing can not be trusted at all...
Should I deploy only the .pyc files on server if I worry about code security?
0.049958
0
0
6,259
22,498,877
2014-03-19T07:18:00.000
0
1
1
0
python,migration,dynamic-typing
22,498,955
5
false
0
0
One of the tradeoffs between statically and dynamically typed languages is that the latter require less scaffolding in the form of type declarations, but also provide less help with refactoring tools and compile-time error detection. Some Python IDEs do offer a certain level of type inference and help with refactoring, but even the best of them will not be able to match the tools developed for statically typed languages. Dynamic language programmers typically ensure correctness while refactoring in one or more of the following ways: Use grep to look for function invocation sites, and fix them. (You would have to do that in languages like Java as well if you wanted to handle reflection.) Start the application and see what goes wrong. Write unit tests, if you don't already have them, use a coverage tool to make sure that they cover your whole program, and run the test suite after each change to check that everything still works.
1
4
0
I'm learning Python and ran into a situation where I need to change the behaviour of a function. I was originally a Java programmer, so in the Java world a change to a function makes Eclipse show that a lot of Java source files have errors; that way I know which files need to be modified. But how would one do such a thing in Python, considering there are no static types? I'm using TextMate2 for Python coding. Currently I'm doing it the brute-force way: opening every Python script file, checking where I use that function, and then modifying it. But I'm sure this is not the way to deal with large projects! Edit: as an example, I define a class called Graph in a Python script file. Graph has two instance variables. I created many objects of this class (each with a different name!) in many script files and then decided that I want to change the names of the instance variables. Now I'm going through each file and re-reading my code in order to change the names :(. PLEASE help! Example: file A has objects x, y, z of class C; file B has objects xx, yy, zz of class C. Class C has two instance variables whose names should be changed, Foo to Poo and Foo1 to Poo1. Also consider many files like A and B. What would you do to solve this? Are you seriously going to open each file, search for x, y, z, xx, yy, zz, and then change the names individually?
Tracking changes in python source files?
0
0
0
696
22,501,155
2014-03-19T09:21:00.000
1
1
1
0
php,python,python-2.7,shared-libraries
22,503,228
1
false
0
0
.py files are compiled on the fly to .pyc files; the .pyc is used if it is more recent than the .py file. Some modules are written in C/C++ instead, and those are delivered as .so files.
1
0
0
This is not a language-comparison question like those asked on various forums. I am interested in something more specific: how core libraries/modules are called/executed in Python. I checked the Python module installation directory, /usr/lib/python2.7 (on Ubuntu), and found .py (source code) and .pyc (byte code) files. I assume the Python interpreter/compiler uses the .pyc file when we use an import statement or, more specifically, call a class/function from that module. PHP, meanwhile, uses .so (shared object) files for libraries, as seen in /usr/lib/php5/20090626. Yes, Python also has a directory for .so files, /usr/lib/pyshared/python2.7, but a lot of important libraries are still stored as .pyc files. Wouldn't it be a good idea to use only the .so format for core libraries, like PHP does, for performance benefits?
php compare to python in perspective of core libraries
0.197375
0
0
92
22,501,431
2014-03-19T09:32:00.000
0
0
1
0
python,ipython
28,310,783
2
false
0
1
In Spyder, try View - Panes - Object inspector. Then type the full name of the function.
1
1
0
I'm using IPython qtconsole under windows 7, and when I first write a method name and type the bracket, a popup shows method parameters. What is the way to display that popup explicitly, once it has disappeared? This is pretty common 'show method parameters' shortcut that I'm talking about, but I've failed to find the shortcut to it after an embarrassing amount of google searches.
How do I display function arguments in ipython qtconsole?
0
0
0
1,486
22,502,741
2014-03-19T10:22:00.000
0
0
0
0
python,udp,scapy,icmp
22,513,923
1
true
0
0
Well, at least for the ID and sequence fields, these can be any 16-bit numbered combination and the kernel will accept the packet and forward it to all registered ICMP socket handlers. But if the checksum field is incorrect, the receiving kernel will not pass the header up to the handlers (it will however to link layer sniffers). Also, from what I tested, if you change the type/code flags to incorrect combinations of known numbers, or numbers undefined by the protocol, the receiving kernel does not pass that to handlers (but it is still seen by link layer sniffers). Note I didn't use scapy, just straight python/socket code, and my system is Linux.
1
0
0
I am writing two Python scripts using Scapy, one executed on the server side and the other on the client side. On the client side, the script sends UDP packets to a closed port on the server. The aim of my scripts is to test whether the client will accept invalid ICMP packets received from the server. On the server side, I am going to sniff incoming traffic and respond to every UDP packet with an ICMP port unreachable, and each time I will modify a field in the ICMP packet (to a false value) to test whether the packet is accepted. My question is: when I modify the Raw field (payload), is it normal that the client accepts this ICMP packet? I mean, is no check done on the Raw field? I hope my question is clear. Thank you very much.
Invalid field Raw in an ICMP destination unreachable (port unreachable) packet
1.2
0
1
609
22,506,268
2014-03-19T12:45:00.000
1
0
0
0
python,scipy,weibull
22,522,819
2
false
0
0
If I understand correctly, then this requires estimation with censored data. None of the scipy.stats.distribution will directly estimate this case. You need to combine the likelihood function of the non-censored and the likelihood function of the censored observations. You can use the pdf and the cdf, or better sf, of the scipy.stats.distributions for the two parts. Then, you could just use scipy optimize to minimize the negative log-likelihood, or try the GenericLikelihoodModel in statsmodels if you are also interested in the uncertainty of the parameter estimates.
1
2
1
I'm currently working with some lifetime data corresponding to the installation date and failure date of units. The data is field data, so I have a large number of suspensions (units that haven't failed yet). I would like to do some Weibull analysis on this data using the scipy.stats library (fitting the data to a Weibull curve and obtaining the parameters of the distribution, for instance). I'm quite new to Python and SciPy, and I can't find a way to include the suspended data in any available Weibull distribution (dweibull, exponweibull, weibull_min, weibull_max). Is there an easy way to work with suspensions? I would not like to reinvent the wheel, but I'm having difficulty estimating the parameters of the Weibull distribution from my data. Can anyone help me? Thanks a lot!
Weibull Censored Data
0.099668
0
0
1,430
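The approach the answer describes (pdf terms for observed failures, sf terms for suspensions, then minimize the negative log-likelihood with scipy.optimize) can be sketched as follows. The data is simulated, and the starting values and censoring time are arbitrary choices for the example.

```python
import numpy as np
from scipy import optimize, stats

# Simulate field data: true Weibull lifetimes, censored at a study cutoff.
rng = np.random.default_rng(0)
true_shape, true_scale = 2.0, 100.0
lifetimes = stats.weibull_min.rvs(true_shape, scale=true_scale,
                                  size=500, random_state=rng)
censor_time = 120.0
observed = np.minimum(lifetimes, censor_time)
failed = lifetimes <= censor_time   # False = suspension (still running)

def neg_log_likelihood(params):
    shape, scale = params
    if shape <= 0 or scale <= 0:
        return np.inf
    # Failures contribute log pdf; suspensions contribute log sf
    # (probability of surviving past the censoring time).
    ll = stats.weibull_min.logpdf(observed[failed], shape, scale=scale).sum()
    ll += stats.weibull_min.logsf(observed[~failed], shape, scale=scale).sum()
    return -ll

res = optimize.minimize(neg_log_likelihood, x0=[1.0, 80.0],
                        method="Nelder-Mead")
shape_hat, scale_hat = res.x
```

With the censored likelihood the estimates recover the true parameters far better than naively fitting only the failure times would.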
22,511,472
2014-03-19T15:57:00.000
0
0
0
0
python,windows,wxpython,gdi
23,710,926
1
false
0
1
The OP seems to have found the solution. The solution I found is to call wx.Yield() to release unused resources, but this only works properly on Linux. On Windows, I need to call wx.Frame.DestroyChildren().
1
2
0
First, sorry for my language; English is not my native language :-/ I am having some trouble using wxPython 2.8/Python 2.7 on Windows. My app using wxPython is quite big now and uses a lot of widgets. My problem is: in a for loop, I open a frame assigned to a variable (myVar = myFrame(...)) and give it some values/variables (like myVar.setval('xx', 42)). It then fills some text areas, grids, etc. from some requests to my database (from the child frame). At the end of the processing, my child frame generates a PDF file, prints it, and then closes itself with the original self.Destroy() method of wx.Frame, not overridden. A lot of frames are created, one by one, with the same variable (myVar in the example). But recently I noticed issues when there are more iterations in my loop than usual. With the same configuration (same data from a database dump), it always crashes at the exact same moment (not on particular data), after a certain number of iterations: wxPython hits the maximum amount of GDI resources allowed by Windows (error 1158). In fact, the GDI resources of my destroyed frames are not released! I read that there's a queue of destroyed objects waiting to be really killed when no more events are pending in wxPython, and they are not released because of the constant work (the loop). On Linux there is no such issue; everything works fine (but I need it to work on Windows). I tried the wx.Yield method but nothing changed. I looked at threads, but GUI widgets have to be used in the wx main loop... If one of you guys has an idea, it would be wonderful! [SOLVED] The solution I found is to call wx.Yield() to release unused resources, but this only works properly on Linux. On Windows, I need to call wx.Frame.DestroyChildren(). It seems to be OK for now :-)
wxPython: GDI resources are not released after wxFrame.Destroy()
0
0
0
296
22,511,568
2014-03-19T16:01:00.000
0
0
0
0
python,django,performance,caching
22,516,105
1
false
1
0
I think your idea to store the prepped data in a file is a good one. I might name the file something like this: /tmp/prepped-data-{{session_id}}.json You could then just have a function in each view called get_prepped_data(session_id) that either computes it or reads it from the file. You could also delete old files when that function is called. Another option would be to store the data directly in the user's session so it is cleaned up when their session goes away. The feasibility of this approach depends a bit on how much data needs to be stored.
1
0
0
Let's say I have a page I'd like to render which will present some (expensive to compute) data in a few ways. For example, I want to hit my database and get some large-size pile of data. Then I want to group that data and otherwise manipulate it in Python (for example, using Pandas). Say the result of this manipulation is some Pandas DataFrame that I'll call prepped_data. And say everything up to this point takes 3 seconds. (I.e. it takes a while...) Then I want to summarize that data at a single URL (/summary): I'd like to show a bar graph, a pie chart and also an HTML table. Each of these elements depends on a subset of prepped_data. One way I could handle this is to make 3 separate views hooked up to 3 separate URL's. I could make pie_chart_view which would make a dynamically generated pie chart available at /piechart.svg. I could make bar_graph_view which would make a dynamically generated bar graph available at /bargraph.svg. And I could make summary_view which would finish by rendering a template. That template would make use of context variables generated by summary_view itself to make my HTML table. And it would also include the graphs by linking to their URL's from within the template. In this structure, all 3 view functions would need to independently calculate prepped_data. That seems less-than-ideal. As an alternative. I could turn on some kind of caching. Maybe I could make a view called raw_data_view which would make the data itself available at /raw_data.json. I could set this to cache itself (using whatever Django caching backend) for a short amount of time (30 seconds?). Then each of the other views could hit this URL to get their data and that way I could avoid doing the expensive calculations 3 times. This seems a bit dicey as well, though, because there's some real judgement involved in setting the cache time. 
One other route could involve creating both graphs within summary_view and embedding the graphics directly within the rendered HTML (which is possible with .svg). But I'm not a huge fan of that since you wind up with bulky HTML files and graphics that are hard for users to take with them. More generally, I don't want to commit to doing all my graphics in that format. Is there a generally accepted architecture for handling this sort of thing? Edit: How I'm making the graphs: One comment asked how I'm making the graphs. Broadly speaking, I'm doing it in matplotlib. So once I have a Figure I like generated by the code, I can save it to an svg easily.
How to cache data to be used in multiple ways at a single URL
0
0
0
64
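The file-based caching idea from the answer above can be sketched as follows; the session ID, file naming, and the stubbed-out expensive query are all illustrative, not the actual application code:

```python
import json
import os
import tempfile

def compute_prepped_data():
    # Stand-in for the ~3 s database query plus Pandas grouping.
    return {"totals": [1, 2, 3], "label": "demo"}

def get_prepped_data(session_id, cache_dir=tempfile.gettempdir()):
    """Compute the prepped data once per session, then serve the cached JSON file."""
    path = os.path.join(cache_dir, "prepped-data-%s.json" % session_id)
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    data = compute_prepped_data()
    with open(path, "w") as f:
        json.dump(data, f)
    return data
```

Each of the three views would call something like get_prepped_data(request.session.session_key); deleting the file when the session ends (or on a timer) handles cleanup.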
22,512,321
2014-03-19T16:32:00.000
0
1
0
0
php,python,joomla,cherrypy,joomla3.0
22,783,076
5
false
1
0
Before considering Python: what do you want to customize? (Perhaps some clever JavaScript or a Joomla extension already exists.) Is the Joomla way not a better solution for your problem, given the fact that you're using Joomla? (Change the template, or the view templates of the modules and components in particular.) In other words: do you understand Joomla well enough to know that you need something else? If Python is still the way to go: does your hosting support Python? Should you reconsider your choice of CMS? I like your choice of CherryPy.
1
5
0
I'm building a Joomla 3 web site but I have the need to customize quite a few pages. I know I can use PHP with Joomla, but is it also possible to use Python with it? Specifically, I'm looking to use CherryPy to write some custom pieces of code but I want them to be displayed in native Joomla pages (not just iFrames). Is this possible?
Can you display python web code in Joomla?
0
0
0
5,932
22,516,007
2014-03-19T19:16:00.000
0
0
0
0
python,linux,audio,pexpect,julius-speech
22,633,327
1
false
0
0
A workaround for my problem was the following: using dsnoop for the ALSA audio settings in .asoundrc.
1
0
0
I am using pexpect in python to receive continuous audio data from an audio input for my home automation project. Is there a way to pause the pexpect from using my audio device? Or can I use the audio device in two separate programs/scripts? What I want to do is: Use speech recognition (julius) to listen for keywords. For more complex commands I want to use Google's Speech to Text API because of a higher accuracy. Both things work perfectly fine separately. What my problem is: Once the keyword is found, audio data needs to be recorded and send to the Google API. However, I have only one audio device and this is already used by the speech recognition with julius. I cannot .close and .spawn the speech recognition, because it takes a long time to load. Is there any chance the pexpect can be paused? Or do you guys know any other workaround? Bests, MGG
python pause pexpect.spawn and its used devices
0
0
1
248
22,516,246
2014-03-19T19:29:00.000
1
0
1
1
python-2.7,batch-file,csv,task,scheduler
22,519,416
1
true
0
0
Assuming that you are trying this under the Windows 7 Task Scheduler... you may try the following: In the security options of your task (1st page), ensure that you have selected the SYSTEM account. Tick the high-privileges check box near the bottom of the dialog (I guess you already did that). Check whether the file can be accessed (write into it with Notepad). Try to call the executable from the Python interpreter directly with your script file as an argument (maybe something went wrong with the inheritance of access rights when Windows calls the Python interpreter; assuming that you linked the .py file in the Task Scheduler). Check the execution profile of the Python interpreter and compare it to the ownership of the CSV file (does the CSV file reside in a user-account folder and therefore have access requirements the Python process cannot meet? Example: CSV owned by user X, task run as user Y). You may also try to create a new, empty text file somewhere else (C:) and fill the content in from the CSV. Greetings :)
1
1
0
I have a .py file that reads integers from a separate CSV. I just can't launch it from the Windows Task Scheduler; after 2 days and much frustration I'm posting here. A lot of similar questions have been posted but none are answered adequately for my case. I have no problems launching other Python files or .exes; the problem arises when the Python file needs to read a CSV. I have turned the file into a batch file, and I have also gone through every possible permutation of administration and permission options, but still no cigar. The problem stems solely from the fact that the Python script needs to read from an external CSV. Has anyone got an opinion, or a work-around? Thanks.
Open python file with Task scheduler
1.2
0
0
359
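One pitfall worth ruling out alongside the checklist above (not mentioned there explicitly): Task Scheduler starts programs with a different working directory, so a relative CSV path that works from the console breaks under the scheduler. A sketch of anchoring the path to the script instead, with made-up file names and a scratch CSV standing in for the real one:

```python
import csv
import os
import tempfile

def csv_path(script_file, name="data.csv"):
    # Resolve the CSV next to the script itself, not the scheduler's
    # working directory (often C:\Windows\System32 under Task Scheduler).
    return os.path.join(os.path.dirname(os.path.abspath(script_file)), name)

def read_integers(path):
    with open(path) as f:
        return [int(cell) for row in csv.reader(f) for cell in row if cell]

# Demo with a scratch file standing in for the real CSV:
demo_dir = tempfile.mkdtemp()
demo_csv = os.path.join(demo_dir, "data.csv")
with open(demo_csv, "w") as f:
    f.write("1,2\n3\n")
values = read_integers(demo_csv)
```

In the real script you would call read_integers(csv_path(__file__)).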
22,516,612
2014-03-19T19:48:00.000
1
0
1
0
emacs,elisp,python-mode
24,333,355
1
true
0
0
I had the mode hook in my ~/.emacs.d/el-get-init-files/init-python-mode.el. I put a call to (message "FOO BAR") in the file and noticed it wasn't being loaded on startup. It looks like el-get only loads files from the el-get-init-files directory for packages it has installed. Since python-mode comes with Emacs, and wasn't installed via el-get, my Python init file wasn't being loaded. I moved the mode hook into my .emacs file and it started working right away!
1
0
0
I'm trying to tweak my emacs configuration to treat _ as a word character. I've added (add-hook 'python-mode-hook #'(lambda () (modify-syntax-entry ?_ "w"))) to my .emacs file, but it doesn't seem to work. If I execute (modify-syntax-entry ?_ "w") directly in the mini-buffer, then it starts working. I'm guessing that one of my minor modes may be changing the syntax table back. I'm relatively new to emacs. How do I go about tracking down the source of the problem?
python-mode-hook not working as I'd expect
1.2
0
0
412
22,517,604
2014-03-19T20:39:00.000
1
0
0
0
php,python,ldap,single-sign-on,saml
22,533,335
1
false
1
0
Rather than using the user's credentials to bind to LDAP, get an application account at LDAP that has read permissions for the attributes you need on the users within the directory. Then, when you get the username via SSO, you just query LDAP using your application's ID. Make sure you make your application ID's password super strong - 64 chars with a yearly change should be good. Better yet, do certificate-based authn.
1
1
0
I have this web application with an LDAP backend, to read and modify some LDAP attributes. The web application uses SSO (single sign-on) to authenticate users. How can I bind to LDAP if I only get a username as an attribute from SSO, without asking for the password again (which would make SSO useless)? I use SimpleSAMLphp as the identity provider, and a Python-driven web application for LDAP management.
Bind to LDAP after SSO?
0.197375
0
1
153
22,521,912
2014-03-20T01:54:00.000
1
0
0
0
python,localhost,simplehttpserver,localserver
22,522,021
2
true
1
0
Only one process can listen on a port; you cannot have two SimpleHTTPServer processes listening on the same port. You can, however, leave an old server process up and then disregard the failed startup of the new server process or the error message about automatic port conflict resolution. To debug this, use netstat (lsof on OSX, since BSD netstat is lame) to find the process listening on the port and then "ps -fww" to list data about that process. You can also take a look at /proc/$pid (Linux) to get a process's current working directory. lsof can also help track down the files the process has open on Linux or BSD/OSX if you're unsure which files it's serving. Hope it helps!
2
2
0
I'm in the directory /backbone/ which has a main.js file within scripts. I run python -m SimpleHTTPServer from the backbone directory and display it in the browser and the console reads the error $ is not defined and references a completely different main.js file from something I was working on days ago with a local python server. I am new to this and don't have an idea what's going on. Would love some suggestions if you have time.
Local server giving wrong files. Is it possible I'm running 2 python servers?
1.2
0
0
1,036
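The "only one process can listen on a port" rule from the answer above can be demonstrated from Python itself; the port number is chosen by the OS, so nothing here depends on a real server:

```python
import socket

# Bind one listener, then show that a second bind on the same port fails:
# only one socket (and hence one process) may listen on a given port.
first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("127.0.0.1", 0))          # port 0 = let the OS pick a free port
first.listen(1)
port = first.getsockname()[1]

second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    second.bind(("127.0.0.1", port))  # raises EADDRINUSE
    conflict = False
except OSError:
    conflict = True
finally:
    second.close()
    first.close()
```

This is why a forgotten SimpleHTTPServer instance either blocks the new one or keeps serving its own (stale) directory.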
22,521,912
2014-03-20T01:54:00.000
0
0
0
0
python,localhost,simplehttpserver,localserver
67,453,910
2
false
1
0
I recently had this problem and it was due to the old page being stored in the browser cache. Accessing the port from a different browser worked for me (or you can clear your cache).
2
2
0
I'm in the directory /backbone/ which has a main.js file within scripts. I run python -m SimpleHTTPServer from the backbone directory and display it in the browser and the console reads the error $ is not defined and references a completely different main.js file from something I was working on days ago with a local python server. I am new to this and don't have an idea what's going on. Would love some suggestions if you have time.
Local server giving wrong files. Is it possible I'm running 2 python servers?
0
0
0
1,036
22,522,802
2014-03-20T03:19:00.000
1
0
1
0
python,multithreading,locking
22,524,103
1
true
1
0
Since the updates are so infrequent, you're better off just making a copy of the object, updating the copy, and then updating the global variable to point to the new object. Simple assignments in Python are atomic, so you don't need any locks at all.
1
0
0
I'm developing a tiny web application with Flask/Gunicorn on Heroku. Since I'm just prototyping, I have a single web process (dyno) with a worker thread started by the same process. The web application is just returning a JSON dump of a global object, which is periodically updated by the worker thread monitoring an external web service. The global object is updated every 15 to 60 minutes. My plan was to use an exclusive lock in the worker thread when an update to the global object is needed, and a shared lock in the web threads so multiple requests can be satisfied concurrently. Unfortunately it looks like that Python doesn't have shared locks, only exclusive locks. How can I ensure consistency in the web threads, i.e., how to be sure that the update to the global object is atomic while allowing multiple read-only access to the object?
Shared lock for Python objects
1.2
0
0
1,254
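The copy-and-swap pattern from the answer above can be sketched as follows (names are illustrative): the worker builds a complete replacement object, then rebinds the global in a single assignment, which is atomic in CPython, so readers always see a consistent snapshot:

```python
import threading

state = {"version": 0, "payload": "initial"}  # global read by web threads

def refresh(new_payload):
    # Build a complete replacement first, then swap it in with a single
    # assignment -- readers see either the old or the new dict, never a mix.
    global state
    state = {"version": state["version"] + 1, "payload": new_payload}

def read_state():
    snapshot = state  # grab one reference; even if refresh() runs now,
    return (snapshot["version"], snapshot["payload"])  # it stays consistent

worker = threading.Thread(target=refresh, args=("refreshed",))
worker.start()
worker.join()
```

The key point is that readers never mutate or iterate the shared object in place; they only follow the reference they grabbed.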
22,523,519
2014-03-20T04:28:00.000
1
0
0
0
python,django,postgresql
22,527,486
1
false
1
0
From ./manage.py help syncdb: --database=DATABASE Nominates a database to synchronize. Defaults to the "default" database. You can add another database definition in your DATABASES configuration, and run ./manage.py syncdb --database=name_of_database_definition. You might want to create a small wrapper script for running that command, so that you don't have to type out the --database=... parameter by hand every time. south also supports that option, so you can also use it to specify the database for your migrations.
1
3
0
I'm using Django 1.6 with PostgreSQL and I want to use a two different Postgres users - one for creating the initial tables (syncdb) and performing migrations, and one for general access to the database in my application. Is there a way of doing this?
Different Postgres users for syncdb/migrations and general database access in Django
0.197375
1
0
78
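A sketch of what the extra definition in settings.py might look like; the alias "migrations", the role names, and the credentials are all placeholders, not values from the question:

```python
# settings.py (excerpt): two connections to the same PostgreSQL database --
# a privileged role for syncdb/migrations, a restricted one for the app.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql_psycopg2",
        "NAME": "myapp",
        "USER": "myapp_app",        # limited privileges (SELECT/INSERT/...)
        "PASSWORD": "app-password",
        "HOST": "localhost",
    },
    "migrations": {
        "ENGINE": "django.db.backends.postgresql_psycopg2",
        "NAME": "myapp",
        "USER": "myapp_ddl",        # owns the tables, may CREATE/ALTER
        "PASSWORD": "ddl-password",
        "HOST": "localhost",
    },
}
```

With this in place, schema changes run as ./manage.py syncdb --database=migrations while normal requests use the default alias.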
22,525,601
2014-03-20T06:54:00.000
1
0
0
0
python,cloud,openstack,openstack-nova,openstack-horizon
43,255,570
5
false
0
0
Check that the core services are running by typing "netstat -an | grep LISTENING". On the controller node the output should contain at least the listening ports 8778 (placement-api service), 8774 (compute service), 9292 (Image service), 9696 (network), 5000 (Identity service), 5672 (RabbitMQ server), 11211 (memcached server) and 35357 (Identity service), if you haven't modified the default config. If you installed Ocata by following the official guide line by line, you must start the placement-api service manually. On the compute node you can run the command "virt-host-validate" to check whether your host supports hardware virtualization. If it fails, edit the file /etc/nova/nova.conf and set virt_type=qemu. Ensure your host has enough CPU, memory and disk resources. If all these steps are OK, enable debug log messages by setting debug=true in /etc/nova/nova.conf; you can find more information in the directory /var/log/nova/.
5
5
0
I'm getting the following error on my OpenStack (DevStack) installation every time I try to launch an image other than cirrOS. Walking through the internet leads me to: OpenStack cannot allocate RAM/CPU resources. That's not true, because I have a lot of RAM, disk space and CPU available. I set scheduler_default_filters=AllHostsFilter in nova.conf, without success. This happens with any image in any format other than cirrOS. Update: now it is clear that there is no direct answer to this question. Let's hope the OpenStack guys will provide more specific information in this error message.
Openstack. "No valid host was found" for any image other than cirrOS
0.039979
0
0
6,243
22,525,601
2014-03-20T06:54:00.000
1
0
0
0
python,cloud,openstack,openstack-nova,openstack-horizon
27,073,835
5
false
0
0
For me, I got this same error because I mistakenly added an ubuntu image and set the metadata "hypervisor" tag to be "KVM" and not "QEMU". My host only had QEMU capability, of course. When I went to launch it, it gave that "No Valid Host was found". I'd say make sure the tags on the image aren't preventing the host from thinking "I can't run this". Simply changing the image tag back to QEMU fixed it for me.
5
5
0
I'm getting the following error on my OpenStack (DevStack) installation every time I try to launch an image other than cirrOS. Walking through the internet leads me to: OpenStack cannot allocate RAM/CPU resources. That's not true, because I have a lot of RAM, disk space and CPU available. I set scheduler_default_filters=AllHostsFilter in nova.conf, without success. This happens with any image in any format other than cirrOS. Update: now it is clear that there is no direct answer to this question. Let's hope the OpenStack guys will provide more specific information in this error message.
Openstack. "No valid host was found" for any image other than cirrOS
0.039979
0
0
6,243
22,525,601
2014-03-20T06:54:00.000
0
0
0
0
python,cloud,openstack,openstack-nova,openstack-horizon
22,552,298
5
false
0
0
The error can be due to many reasons. Since you have said that it works with cirrOS, try this: run the command "glance index"; you will get the images you have in your Glance. Now do a "glance show (your-glance-id)". Compare the output between the cirrOS image and the rest.
5
5
0
I'm getting the following error on my OpenStack (DevStack) installation every time I try to launch an image other than cirrOS. Walking through the internet leads me to: OpenStack cannot allocate RAM/CPU resources. That's not true, because I have a lot of RAM, disk space and CPU available. I set scheduler_default_filters=AllHostsFilter in nova.conf, without success. This happens with any image in any format other than cirrOS. Update: now it is clear that there is no direct answer to this question. Let's hope the OpenStack guys will provide more specific information in this error message.
Openstack. "No valid host was found" for any image other than cirrOS
0
0
0
6,243
22,525,601
2014-03-20T06:54:00.000
0
0
0
0
python,cloud,openstack,openstack-nova,openstack-horizon
22,528,201
5
false
0
0
I don't know WHY, but after a while I can launch Ubuntu saucy-server-cloudimg-i386-disk1.img (Ubuntu 13.10 x32) but cannot launch saucy-server-cloudimg-amd64-disk1.img (Ubuntu 13.10 x64), and vice versa; I can launch precise-server-cloudimg-amd64-disk1.img (Ubuntu 13.04 x64) but cannot launch precise-server-cloudimg-i386-disk1.img (Ubuntu 13.04 x32).
5
5
0
I'm getting the following error on my OpenStack (DevStack) installation every time I try to launch an image other than cirrOS. Walking through the internet leads me to: OpenStack cannot allocate RAM/CPU resources. That's not true, because I have a lot of RAM, disk space and CPU available. I set scheduler_default_filters=AllHostsFilter in nova.conf, without success. This happens with any image in any format other than cirrOS. Update: now it is clear that there is no direct answer to this question. Let's hope the OpenStack guys will provide more specific information in this error message.
Openstack. "No valid host was found" for any image other than cirrOS
0
0
0
6,243
22,525,601
2014-03-20T06:54:00.000
4
0
0
0
python,cloud,openstack,openstack-nova,openstack-horizon
22,526,659
5
false
0
0
Make sure the flavor you select is "small" or larger; cirrOS uses "tiny" by default, as do other images if not changed.
5
5
0
I'm getting the following error on my OpenStack (DevStack) installation every time I try to launch an image other than cirrOS. Walking through the internet leads me to: OpenStack cannot allocate RAM/CPU resources. That's not true, because I have a lot of RAM, disk space and CPU available. I set scheduler_default_filters=AllHostsFilter in nova.conf, without success. This happens with any image in any format other than cirrOS. Update: now it is clear that there is no direct answer to this question. Let's hope the OpenStack guys will provide more specific information in this error message.
Openstack. "No valid host was found" for any image other than cirrOS
0.158649
0
0
6,243
22,529,007
2014-03-20T09:47:00.000
2
0
1
0
python
22,529,061
2
false
0
0
item in your_list is an O(n) operation in Python. If all items are distinct, the order doesn't matter, and you need to do multiple lookups, then you could use a set() instead of a list: item in your_set is O(1) on average (O(n) worst case).
1
0
0
Suppose I have a list of tuples of the form list = [(key1,value1), (key2,value2)], then is it fast/good to look up if a tuple exists in the list like this: (key_x,value_x) in list ? Also how does python search the list?! Does it compare pointers or how?! I'm new to Python coming from Java.
Cost of looking up a tuple in a list of tuple?
0.197375
0
0
187
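The list-versus-set distinction from the answer above, with the sample tuples from the question:

```python
pairs = [("key1", "value1"), ("key2", "value2")]

# O(n): Python walks the list and compares each tuple by value (==),
# not by pointer/identity -- an equal-but-distinct tuple still matches.
found_in_list = ("key1", "value1") in pairs

# O(1) on average: hash the tuple once, then probe the hash table.
pair_set = set(pairs)
found_in_set = ("key1", "value1") in pair_set
missing = ("key3", "value3") in pair_set
```

This also answers the "does it compare pointers?" part of the question: membership uses value equality, checking identity first only as a shortcut.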
22,530,287
2014-03-20T10:37:00.000
1
1
1
0
python,regex,email
22,530,391
2
true
0
0
<* means: "the character < zero or more times". You are looking for <.*@.*>
2
0
0
I am trying to get the address of the sender in mbox formated mails in python. When I get the line that contains the sender, it looks like From: Mister X <misterx@domain>. I am able to retrieve the mail address with, for example, re.findall('<[a-zA-Z0-9\.]+@[a-zA-Z0-9\.]+>', str). I think that should be fine since email addresses, as far as I know, cannot contain any other characters. What I do not understand is why the expression <*@*>, which I expected to match any characters in the email address does not work at all. In fact, re.findall('<*@*>', 'From: Mister X <misterx@domain>')returns ['>'].
Matching address in mbox format in python
1.2
0
0
134
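Both patterns from the question and the accepted answer, run against the sample line:

```python
import re

line = "From: Mister X <misterx@domain>"

# '<*@*>' means: zero or more '<', zero or more '@', then a literal '>',
# so the only match in this line is the bare '>'.
wildcard_like = re.findall("<*@*>", line)

# '.' matches any character, so '<.*@.*>' captures the whole address part.
address = re.findall("<.*@.*>", line)
```

In a regex, `*` is a quantifier on the preceding token rather than the shell-style "any characters" wildcard, which is exactly the confusion the question ran into.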
22,530,287
2014-03-20T10:37:00.000
0
1
1
0
python,regex,email
22,535,800
2
false
0
0
Here is my answer to "why does the expression <*@*>, which I expected to match any characters in the email address, not work at all?" Because you are using the re module, which evaluates your expression <*@*> as a regular expression. If you want your expression evaluated as a wildcard, use the fnmatch module. But fnmatch only has functions that check whether a string matches or not, so you can't get the matches themselves by using fnmatch. From your question it seems you want to retrieve the mail address, so you shouldn't use the fnmatch module; just use the re module to get the matches. I think you are just confusing regular expressions with wildcards.
2
0
0
I am trying to get the address of the sender in mbox formated mails in python. When I get the line that contains the sender, it looks like From: Mister X <misterx@domain>. I am able to retrieve the mail address with, for example, re.findall('<[a-zA-Z0-9\.]+@[a-zA-Z0-9\.]+>', str). I think that should be fine since email addresses, as far as I know, cannot contain any other characters. What I do not understand is why the expression <*@*>, which I expected to match any characters in the email address does not work at all. In fact, re.findall('<*@*>', 'From: Mister X <misterx@domain>')returns ['>'].
Matching address in mbox format in python
0
0
0
134
22,531,334
2014-03-20T11:19:00.000
4
0
1
0
python,list
22,531,335
3
false
0
0
Figured out a way, posting here for easy reference. Try this: if any(term in terms for term in ("foo", "bar", "baz")): pass
1
3
0
The question is about a quicker, i.e. more Pythonic, way to test whether any of the elements of one iterable exists inside another iterable. What I am trying to do is something like: if "foo" in terms or "bar" in terms or "baz" in terms: pass. But this way repeats the "in terms" clause and bloats the code, especially when we are dealing with many more elements. So I wondered whether there is a better way to do this in Python.
Find if any element of list exists in another list in Python
0.26052
0
0
1,872
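The one-liner from the answer above, wrapped in a runnable form (the sample terms are made up):

```python
def contains_any(terms, candidates=("foo", "bar", "baz")):
    # Equivalent to: "foo" in terms or "bar" in terms or "baz" in terms,
    # and it short-circuits on the first hit just like the 'or' chain.
    return any(term in terms for term in candidates)

hit = contains_any(["spam", "bar"])
miss = contains_any(["spam", "eggs"])
```

If both sides can be hashed and get large, not set(candidates).isdisjoint(terms) does the same check faster.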
22,532,644
2014-03-20T12:12:00.000
0
0
0
0
jquery,python,django
22,533,787
1
false
1
0
I think AJAX should do the trick for you.
1
0
0
I have a website and I want to have a form on it that multiple people can view. As the form gets updated by any of the individuals looking at it, everyone else can also see the updates without refreshing the web page. Basically it will be a row from a table displayed as a form; each part of the form will be filled in periodically and will all start off empty. I was thinking jQuery might be able to do this, but I do not fully understand everything about jQuery yet. Any thoughts or ideas on the best way to do this? I am currently just learning Django as I go.
Realtime forms in Django
0
0
0
81
22,535,539
2014-03-20T14:09:00.000
1
0
0
0
python,web-scraping,python-2.5
22,536,352
1
true
0
0
If you really want to do it old-school entirely within Python but without urllib, then you'll have to use socket and implement a tiny subset of HTTP 1.0 to fetch the page. Jumping through the hoops to get through a proxy will be really painful though. Use wget or curl and save yourself a few days of debugging.
1
0
0
I am working on an appliance using an old version of python (2.5.2). I'm working on a script which needs to read a webpage, but I can't access the normal libraries - urllib, urllib2 and requests are not available. How did people collect this in the olden days? I could do a wget/curl from the shell, but I'd prefer to stick to python if possible. I also need to be able to go through a proxy which may force me into system calls.
collect web page source with python 2.5.2
1.2
0
1
79
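If wget/curl really isn't acceptable, the socket route hinted at above looks roughly like this (written in modern syntax; on 2.5 the encode/decode calls simply disappear since str is bytes). Only build_request is exercised here; fetch is an untested sketch that needs a live network:

```python
import socket

def build_request(host, path="/"):
    # Minimal HTTP/1.0 GET; with a proxy you would instead connect to the
    # proxy and put the absolute URL ("http://host/path") on the GET line.
    return ("GET %s HTTP/1.0\r\n"
            "Host: %s\r\n"
            "Connection: close\r\n"
            "\r\n" % (path, host))

def fetch(host, path="/", port=80):
    # Untested sketch: connect, send the request, read until close.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.connect((host, port))
        s.sendall(build_request(host, path).encode("ascii"))
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
        return b"".join(chunks)
    finally:
        s.close()
```

HTTP/1.0 with Connection: close keeps the response parsing trivial (read until EOF, split headers from body at the first blank line).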
22,536,589
2014-03-20T14:48:00.000
3
0
1
0
python,numpy,complex-numbers
22,562,740
2
false
0
0
The short answer is that the C99 standard (Annex G) on complex number arithmetic recognizes only a single complex infinity (think: Riemann sphere). (inf, nan) is one representation for it, and (-inf, 6j) is another, equivalent representation.
1
3
1
At some point in my python script, I require to make the calculation: 1*(-inf + 6.28318530718j). I understand why this will return -inf + nan*j since the imaginary component of 1 is obviously 0, but I would like the multiplication to have the return value of -inf + 6.28318530718j as would be expected. I also want whatever solution to be robust to any of these kinds of multiplications. Any ideas? Edit: A Complex multiplication like x*y where x = (a+ib) and y = (c+id) I assume is handled like (x.real*y.real-x.imag*y.imag)+1j*(x.real*y.imag+x.imag*y.real) in python as this is what the multiplication comes down to mathematically. Now if say x=1.0 and y=-inf+1.0j then the result will contain nan's as inf*0 will be undefined. I want a way for python to interpret * so that the return value to this example will be -inf+1.0j. It seems unnecessary to have to define my own multiplication operator (via say a function cmultiply(x,y)) such that I get the desired result.
How to deal with indeterminate form in Python
0.291313
0
0
584
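A sketch of the behavior described in the question and one component-wise workaround; cmultiply is the hypothetical helper name the question itself proposes, not a standard function:

```python
import math

x = complex(-math.inf, 6.28318530718)

# The textbook expansion (a+ib)(c+id) produces a 0*inf term, which is nan.
# An explicit complex 1 reproduces that expansion regardless of how the
# interpreter treats mixed float*complex arithmetic.
one = complex(1.0, 0.0)
naive = one * x

def cmultiply(a, b):
    # Short-circuit purely real factors so 0*inf terms never arise.
    a, b = complex(a), complex(b)
    if a.imag == 0.0:
        return complex(a.real * b.real, a.real * b.imag)
    if b.imag == 0.0:
        return complex(a.real * b.real, a.imag * b.real)
    return a * b

safe = cmultiply(1.0, x)
```

Here naive comes out as (-inf + nan*j) while safe keeps the imaginary part, matching the single-complex-infinity view mentioned in the answer.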
22,542,566
2014-03-20T18:56:00.000
1
1
0
0
php,python
22,542,914
1
false
1
0
Write a Python script that takes a path in sys.argv or the audio data via sys.stdin and writes metadata to sys.stdout. Call it from PHP using exec.
1
4
0
Is it possible to exchange data between a PHP page and a Python application? How can I implement a PHP page that reacts to a Python application? EDIT: My application is divided in 2 parts: the web backend and a Python daemon. Via the web backend I upload MP3s to my server; these MP3s are processed by my Python daemon, which fetches metadata from MusicBrainz. Now I need to show the user the results of the "Python fetch" so they can choose the right metadata. Is this possible?
Exchange data between Python and PHP
0.197375
0
0
419
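The Python half of the stdin/stdout exchange suggested above might look like this; the script name fetch_meta.py, the JSON shape, and the stubbed lookup are all made up for illustration:

```python
import io
import json
import sys

def lookup_metadata(mp3_path):
    # Placeholder for the real MusicBrainz lookup done by the daemon.
    return {"file": mp3_path, "title": "Unknown", "candidates": []}

def emit(mp3_path, out=None):
    # The PHP side would run something like:
    #   exec("python fetch_meta.py " . escapeshellarg($path), $lines);
    #   $meta = json_decode(implode("", $lines), true);
    out = out or sys.stdout
    out.write(json.dumps(lookup_metadata(mp3_path)) + "\n")
```

JSON over stdout keeps the two languages decoupled: PHP only needs json_decode, and the Python side stays testable without a web server.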
22,548,223
2014-03-21T00:57:00.000
1
0
0
0
python,django,facebook,facebook-graph-api,python-social-auth
41,433,786
2
false
1
0
Just some extra for the reply above. To get the token from extra_data you need to import the model with that data (took me a while to find this): from social.apps.django_app.default.models import UserSocialAuth
1
11
0
How can I retrieve Facebook friend's information using Python-Social-auth and Django? I already retrieve a profile information and authenticate the user, but I want to get more information about their friends and invite them to my app. Thanks!
How to retrieve Facebook friend's information with Python-Social-auth and Django
0.099668
0
0
7,106
22,548,412
2014-03-21T01:15:00.000
0
0
0
0
python,macos,xlwt,cinema-4d
22,853,287
2
true
0
0
You have to install the Python packages into the Python library folder of Cinema 4D. If you select the Preferences menu in Cinema 4D there is a button called "Open Preferences Folder..."; this will lead to a folder called CINEMA 4D R_. Inside this folder, library/python/packages/osx/ is where you have to install the xlrd and xlwt packages. If this is successful you should be able to access Excel files.
1
0
0
I'm trying to read Excel files from Cinema 4D using Python. I believe this can be achieved using xlwt. Where should I copy the xlwt package on a Mac? I know that on Windows the location is C:\Users\user\AppData\Roaming\MAXON\CINEMA 4D version\library\python\packages\. How about on a Mac?
Cinema 4D - Import xlwt location
1.2
0
0
227
22,548,621
2014-03-21T01:37:00.000
1
0
1
0
python,cuda,parallel-processing,mpi,openmp
22,548,699
2
true
0
0
As far as I know, pyPar and/or pyMPI are the two most frequently used libraries for computation intensive applications in the scientific field. pyPar tends to be easier to use, while pyMPI is more feature complete - so the first is used more frequently for less complex computations. Iirc, they are just python wrappers for the relevant C libraries, making them the highest performing/most efficient ones to use.
1
1
0
It seems like there are many options for parallelizing Python. I have seen the options below: shared memory: threading, multiprocessing, joblib, cython.parallel; distributed memory: mpi4py, Parallel Python (pp); any CUDA or OpenCL options? Does anyone have experience using these or other parallel libraries? How do they compare to each other? I am particularly interested in the use of Python in computation-intensive applications in the scientific computing field.
Choice of parallelism in python
1.2
0
0
226
22,549,501
2014-03-21T03:18:00.000
0
0
1
0
python
22,549,535
7
false
0
0
It sounds like you are looking for the interactive prompt, and a text editor. If you want the interactive Python prompt: If you are on Windows, look for a directory called C:\Python? and find a bin\python.exe. If you are on Linux or OSX, typing python in the Terminal should get you the interactive prompt. For a text editor to define functions, Notepad or gedit should help you.
3
0
0
I'm not sure if this question is appropriate for here, so if it isn't, I apologize; however, I'm not sure where else to ask. I've been learning Python on Codecademy, but when I downloaded Python myself it is nothing like what I've become used to. I can't for the life of me figure out where to define functions and I don't know where to find the console. I would basically like the same editor/console 2-in-1 combo that Codecademy has. Where can I find this? FWIW I have Python 3.3.5 that I downloaded from www.python.org. Thank you.
where can i find the version of python that is on codecademy
0
0
0
1,952
22,549,501
2014-03-21T03:18:00.000
0
0
1
0
python
22,549,892
7
false
0
0
When I want to write a Python program, I open Windows explorer and go to the directory where I keep my .py files. When I right click a .py file, I am given the choice to "Edit with IDLE." I like IDLE really well. When I "Run" a program a Python Shell window opens. I work between these two windows. (I think I used Notepad to create the first .py file because if you open IDLE you get a Python Shell.) (Another oddity - when I want to create a new program file I "Save As" to a new name.) In spite of these oddities IDLE is my favorite programming environment.
3
0
0
I'm not sure if this question is appropriate for here, so if it isn't, I apologize; however, I'm not sure where else to ask. I've been learning Python on Codecademy, but when I downloaded Python myself it is nothing like what I've become used to. I can't for the life of me figure out where to define functions and I don't know where to find the console. I would basically like the same editor/console 2-in-1 combo that Codecademy has. Where can I find this? FWIW I have Python 3.3.5 that I downloaded from www.python.org. Thank you.
where can i find the version of python that is on codecademy
0
0
0
1,952
22,549,501
2014-03-21T03:18:00.000
0
0
1
0
python
32,239,844
7
false
0
0
I think Komodo Edit is the editor Codecademy used to teach us.
3
0
0
I'm not sure if this question is appropriate for here, so if it isn't, I apologize, however I'm not sure where else to ask. I've been learning Python on Codecademy, however when I downloaded Python myself it is nothing like what I've become used to. I can't for the life of me figure out where to define functions and I don't know where to find the console. I would like basically the same thing that Codecademy has, with the editor/console 2-in-1 combo. Where can I find this? FWIW I have Python 3.3.5 that I downloaded from www.python.org. Thank you.
where can i find the version of python that is on codecademy
0
0
0
1,952
22,552,304
2014-03-21T07:03:00.000
0
0
1
0
python,compiler-construction,interpreter
22,552,400
2
true
0
1
Python saves the precompiled .pyc file only for imported modules, not for the main script you're running. Running a program as the main script or importing it as a module is not exactly the same thing, but very similar, because in a module everything at the top level is executed at import time. Note that for the main program the source code is completely parsed and compiled too (so, for example, if you have a syntax error in the last line, nothing will be executed). The difference is only that the result of compilation is not saved back to disk.
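You can observe this compilation step directly with the stdlib's py_compile module, which triggers the same bytecode compilation that import performs for modules (a minimal sketch; the module name and temp directory are made up for illustration):

```python
import os
import py_compile
import tempfile

# Compile a source file explicitly -- this is the same compilation that
# import performs for modules, but that never gets saved for the main script.
tmpdir = tempfile.mkdtemp()
src = os.path.join(tmpdir, "mymodule.py")
with open(src, "w") as f:
    f.write("VALUE = 42\n")

cached = py_compile.compile(src)  # on Python 3 this returns the .pyc path
print(os.path.exists(cached))     # True
```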
1
1
0
I have seen some differences when I execute a .py file. I have observed two cases: 1) when I run the .py file using python mypython.py I got the result, but no .pyc file was created in my folder. 2) when I run the .py file using python -c "import mypython" I got the same result, but a .pyc file was created in my folder. My question is: why does the first case not create a .pyc file?
Understanding python compile
1.2
0
0
102
22,553,659
2014-03-21T08:26:00.000
2
0
0
1
python,celery,celery-task,flower
22,554,555
1
true
0
0
I found it out. It is a matter of setting the persistent flag in the command running Celery Flower.
1
1
0
I started to use Celery Flower for task monitoring and it is working like a charm. I have one concern though: how can I "reload" info about monitored tasks after a Flower restart? I use Redis as a broker, and I need to have the option to check on tasks even in case of an unexpected restart of the service (or server). Thanks in advance
Celery Flower - how can i load previous catched tasks?
1.2
0
0
1,345
22,557,975
2014-03-21T11:50:00.000
3
0
1
0
python,ipython
22,558,254
2
true
0
0
In IPython, use run -i myscript.py. From the docs: -i runs the file in IPython's namespace instead of an empty one. This is useful if you are experimenting with code written in a text editor which depends on variables defined interactively.
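Outside IPython, a rough plain-Python analogue (a sketch with a made-up script name) is to exec the file's source in the current namespace, so the script can see the variables already in memory:

```python
import os
import tempfile

x = 10  # a variable already "in memory"

# Write a throwaway script that depends on the existing variable `x`
path = os.path.join(tempfile.mkdtemp(), "myscript.py")
with open(path, "w") as f:
    f.write("y = x + 5\n")

# exec runs the script in the current namespace, similar to `run -i`
with open(path) as f:
    exec(f.read())

print(y)  # 15
```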
1
1
0
Is it possible to run a .py file using the existing variables from previous executions? I mean to run the new .py file using the variables in memory, that is, the variables I see when I enter the who command.
IPython : Running a .py file using the existing variables
1.2
0
0
180
22,562,540
2014-03-21T14:31:00.000
6
0
0
0
python,pandas,scikit-learn
22,584,181
2
false
0
0
Pandas DataFrames are very good at acting like Numpy arrays when they need to. If in doubt, you can always use the values attribute to get a Numpy representation (df.values will give you a Numpy array of the values in DataFrame df).
1
11
1
Can we run scikit-learn models on Pandas DataFrames or do we need to convert DataFrames into NumPy arrays?
Pandas dataset into an array for modelling in Scikit-Learn
1
0
0
9,121
22,562,786
2014-03-21T15:22:00.000
1
0
0
0
python,analytics
22,563,437
3
false
0
0
You can use many 'web' analytics platforms inside of desktop or mobile apps. Mixpanel is a popular one that I have looked at, but you can use google analytics in this way as well. You basically just will have method calls in your code that call out to the mixpanel server whenever you want to log an event. It will be easier to use one of these vs inventing your own.
1
1
0
I have published a library that is used in-house. It is not a web based library, but it unifies access to several different datasources and provides access in a unified way. I would like to gather usage statistics of this library - obviously with the proviso that users of the library don't mind these statistics being taken. Now this is not a web framework or anything similar, but just a bunch of classes and functions. Obviously the analytics framework must be able to recover from the gathering back end being unavailable - in fact the usage of the library must preferably not be affected in any way by data being sent. Has anybody written anything like this before? Obviously I could knock one up myself, but when presented with questions like this, I always try to find a version already done (as they've probably done a better job than I could ever do).
Python analytics (non web based)
0.066568
0
0
106
22,563,386
2014-03-21T15:49:00.000
3
0
0
0
android,python,kivy
22,563,987
1
true
0
1
What do you mean by 'window'? In the practical sense, you can easily create different screens with kivy and switch between them, e.g. to have a menu screen, settings screen, game screen etc. Is this the kind of thing you mean? More generally, you can easily achieve any particular windowing behaviour you want within kivy. Android itself works with activities, a particular way of choosing what is displayed, how apps move between screens, and also how apps may call particular subsets of one another (plus a lot more of course). You don't need to know about this to use kivy, it works within a single activity (plus appropriate interaction with the rest of the system), but you should read up on it if you want to understand how android manages programs and how this is different to most desktop environments.
1
0
0
I'm evaluating Kivy for android development. I need to know if is it possible to create an application with multiple windows using Kivy. I dont know for sure how android works with this kind of approach. In c#, windows forms, we have a main window and from that we open/close another forms. How can I accomplish this approach using Kivy for android ?
Kivy Multiple windows
1.2
0
0
1,083
22,564,503
2014-03-21T16:40:00.000
0
0
0
0
python,sql,oracle,pandas,cx-oracle
53,431,165
7
false
0
0
Kind of complicated, but possible; I have seen it done once. 1) You need to create a Java class inside the Oracle database; this class calls a .py file in the directory which contains it. 2) Create a procedure that calls the Java class from item 1. 3) In your SQL query, call the procedure from item 2 whenever you need it.
1
18
0
Is it possible to call Python within an Oracle procedure? I've read plenty of literature about the reverse case (calling Oracle SQL from Python), but not the other way around. What I would like to do is to have Oracle produce a database table, then I would like to call Python and pass this database table to it in a DataFrame so that I could use Python to do something to it and produce results. I might need to call Python several times during the Oracle procedure. Does anyone know if this is possible and how could it be done?
Calling Python from Oracle
0
1
0
12,854
22,567,255
2014-03-21T18:54:00.000
2
0
0
0
python,paypal
22,621,678
1
false
1
0
Your question embodies a contradiction in terms. The purpose of so-called encrypted buttons is for Paypal to check that they exist as registered buttons. If you roll your own buttons, Paypal can't do that. You're looking at the problem the wrong way. If someone chooses to send you money, that's very nice, but unless it's a price you recognize and advertise for one of your own items, you're not obliged to deliver anything. Your IPN handler should check that and fail the transaction if the price, item, etc. don't match your own catalog database. You can either refund the money or even just let Paypal's normal processes do that for you if they have the cheek to raise an 'item not received' case. Or just keep it as a donation.
1
0
0
in my current web application project, I'm developing a small commerce with many products... I need to implement PayPal for payments and I have read a lot of documentation in paypal developer site. The solution of implement Payment standard Buttons (add to cart button in my case) is fantastic, but I need to auto generate the html for each product in my database. If I auto-generate the clear code (without encryption) the problem is that one malicious user can edit the amount of one product (in html rendered page) and the purchase it (with your price :D ).... What I want is to auto generate the encrypted "Add to cart" button with some APIs. What is the elegant solution to my problem?? I'm using python for develop my application.
Paypal, encrypted add to cart button generate dynamically
0.379949
0
0
383
22,570,422
2014-03-21T22:10:00.000
0
0
1
0
python
22,570,461
2
true
0
0
myList = [tup[0] for tup in mySet] They're not "keys" per se.
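For example (with a made-up set of tuples; note that sets are unordered, so the result order is arbitrary):

```python
# A set of (a, b) tuples -- grab the first element of each
mySet = {(1, "x"), (2, "y"), (3, "z")}
myList = [tup[0] for tup in mySet]
print(sorted(myList))  # [1, 2, 3]  (sorted because set order is arbitrary)
```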
1
0
0
I have a set of tuples (a, b, c). How can I return a list of all a inside this set? Is there something like .keys(), as dictionaries have?
Python set/tuple operations iteration
1.2
0
0
54
22,574,996
2014-03-22T07:54:00.000
1
0
0
0
python,qt,terminal,pyqt,pyqt4
22,578,889
2
false
0
1
I tried similar things before, for logging. It's a pain, and it is very slow for a lot of lines. If you expect a lot of lines accumulating in the terminal, then consider writing an item model and attaching it to a view. There are a lot of possibilities in tweaking the appearance of such a view, and it allows showing a small portion out of a big amount of data without becoming painfully slow. It also allows inserting data at any position quickly.
1
0
0
I am willing to make a Terminal-like QTextEdit using PyQt4, but do not know which property to edit, so that when text from a process is dumped, it starts from the bottom and goes up. Any help for a starting point would be really appreciated. Cheers.
Loading text in QTextEdit from down up like in Terminal
0.099668
0
0
541
22,578,624
2014-03-22T14:07:00.000
4
0
0
0
android,python,kivy
22,578,797
1
false
0
1
What are the packages which need to be installed for writing scripts using PyJnius in system? I'm not sure what you mean here. To use pyjnius, all you need is...pyjnius. It is a separate module, not part of kivy itself, though kivy uses it on android. Kivy's mobile build tools automatically package this when you build an android apk. What is Kivy Launcher for android? Will it be helpful for executing my scripts? An app that can dynamically open kivy apps from your user data directory. You can use it to upload and run kivy scripts/apps on your device. It is most useful for quick tests, not as a way to distribute apps. For this, it's very easy to build your own apks, which gives a lot more flexibility in what you do and what you package. How actually Kivy works (in detail, say I want to switch on Bluetooth using scripts; which things in the Kivy architecture get invoked by doing so?) I'm not sure what you're asking here. Kivy is a graphical framework for python, using an optimised opengl interface...you write python gui applications with it. For things like bluetooth on android, you can use pyjnius (or more easily, wrapper projects like plyer providing an abstracted python api, though I don't think plyer has bluetooth quite yet). This generally isn't very hard, I've seen bluetooth done before. Kivy itself is a graphical framework; these other tools are sister projects but separate from the graphics. What I need is to write scripts on a computer and then, after sending those scripts to my phone, execute my scripts from the phone. Get the results on the phone and send those results to the system. You can certainly do this with kivy, by putting the scripts in an app that you run. Network communication also isn't hard - it's separate to kivy itself, but you have access to all the normal python modules you might use.
1
1
0
What are the packages which need to be installed for writing scripts using PyJnius on a system? Is there any way by which I could run these Python scripts on an Android phone? What is Kivy Launcher for Android? Will it be helpful for executing my scripts? How does Kivy actually work (in detail; say I want to switch on Bluetooth using scripts, which things in the Kivy architecture get invoked by doing so?) What I need is to write scripts on a computer and then, after sending those scripts to my phone, execute my scripts from the phone, get the results on the phone and send those results to the system.
Kivy for android
0.664037
0
0
768
22,583,604
2014-03-22T21:06:00.000
1
0
1
0
python
22,583,678
2
false
0
0
You'll need to build a little parser. Iterate through the characters of the string, keeping track of the current nesting level of parentheses. Then you can detect the . you care about by checking that first, the character is a ., and second, there's only one level of parentheses open at that point. Then just place the characters in one buffer or another depending on whether you've reached that . or not.
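A minimal sketch of such a parser (the function name split_pair is made up): track the nesting depth, drop the outermost parentheses, and switch buffers at the dot sitting at depth 1.

```python
def split_pair(s):
    """Split a string of the form "(left.right)" on the dot at nesting depth 1."""
    depth = 0
    left, right = [], []
    target = left
    for ch in s:
        if ch == "(":
            depth += 1
            if depth == 1:
                continue          # drop the outermost "("
        elif ch == ")":
            depth -= 1
            if depth == 0:
                continue          # drop the outermost ")"
        elif ch == "." and depth == 1:
            target = right        # this is the top-level separator
            continue
        target.append(ch)
    return ["".join(left), "".join(right)]

print(split_pair("((1.0).1)"))  # ['(1.0)', '1']
```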
1
0
0
So I want to do something like this: "(1.0)" returns ["1","0"]; similarly "((1.0).1)" returns ["(1.0)", "1"]. How do I do this in Python? Thanks for the help. So basically I want to break the string "(1.0)" into a list ["1","0"] where the dot is the separator. Some examples: ((1.0).(2.0)) -> [(1.0), (2.0)]; (((1.0).(2.0)).1) -> [((1.0).(2.0)), 1]. I hope this is more clear.
How to split with multiple bracket
0.099668
0
0
63
22,585,235
2014-03-22T23:51:00.000
3
0
1
1
python,anaconda
50,594,422
19
false
0
0
In case you have multiple versions of Anaconda: rm -rf ~/anaconda2 [for version 2], rm -rf ~/anaconda3 [for version 3]. Then open the .bashrc file in a text editor (vim .bashrc) and remove the anaconda directory from your PATH: export PATH="/home/{username}/anaconda2/bin:$PATH" [for version 2], export PATH="/home/{username}/anaconda3/bin:$PATH" [for version 3]
4
275
0
I installed Python Anaconda on Mac (OS Mavericks). I wanted to revert to the default version of Python on my Mac. What's the best way to do this? Should I delete the ~/anaconda directory? Any other changes required? Currently when I run which python I get this path: /Users/username/anaconda/bin/python
Python Anaconda - How to Safely Uninstall
0.031568
0
0
666,412
22,585,235
2014-03-22T23:51:00.000
0
0
1
1
python,anaconda
68,270,547
19
false
0
0
On macOS: rm -rf ~/opt/anaconda3
4
275
0
I installed Python Anaconda on Mac (OS Mavericks). I wanted to revert to the default version of Python on my Mac. What's the best way to do this? Should I delete the ~/anaconda directory? Any other changes required? Currently when I run which python I get this path: /Users/username/anaconda/bin/python
Python Anaconda - How to Safely Uninstall
0
0
0
666,412
22,585,235
2014-03-22T23:51:00.000
1
0
1
1
python,anaconda
50,807,405
19
false
0
0
To uninstall Anaconda fully from your system: open a Terminal and run rm -rf ~/miniconda, then rm -rf ~/.condarc ~/.conda ~/.continuum
4
275
0
I installed Python Anaconda on Mac (OS Mavericks). I wanted to revert to the default version of Python on my Mac. What's the best way to do this? Should I delete the ~/anaconda directory? Any other changes required? Currently when I run which python I get this path: /Users/username/anaconda/bin/python
Python Anaconda - How to Safely Uninstall
0.010526
0
0
666,412
22,585,235
2014-03-22T23:51:00.000
152
0
1
1
python,anaconda
22,585,265
19
false
0
0
The anaconda installer adds a line in your ~/.bash_profile script that prepends the anaconda bin directory to your $PATH environment variable. Deleting the anaconda directory should be all you need to do, but it's good housekeeping to remove this line from your setup script too.
4
275
0
I installed Python Anaconda on Mac (OS Mavericks). I wanted to revert to the default version of Python on my Mac. What's the best way to do this? Should I delete the ~/anaconda directory? Any other changes required? Currently when I run which python I get this path: /Users/username/anaconda/bin/python
Python Anaconda - How to Safely Uninstall
1
0
0
666,412
22,590,490
2014-03-23T11:50:00.000
1
0
0
1
python,shell
29,597,942
1
true
0
0
I encountered the same problem. After trying a lot of things I arrived at this solution. The trick is to make a new Proxification Rule in Proxifier. Name it anything you prefer, say PythonIdle. In the applications box add python.exe and pythonw.exe. Set the action to Direct. Hope it solves your problem!
1
1
0
I have recently installed Python 2.7.6 on Windows 7. When I tried to open IDLE, it gave fatal errors. There are two error messages popping up subsequently: "IDLE Subprocess Error" "Socket Error: No connection could be made because the target machine actively refused it" "Subprocess Startup Error" "IDLE's subprocess didn't make connection. Either IDLE can't start a subprocess or personal firewall software is blocking the connection." Then nothing happens. I figured out that this error occurs only when Proxifier is on. No issue with the firewall. I exited Proxifier and the Python shell was working fine. Then I tried to open Proxifier after opening the shell; the shell immediately stopped compiling and running Python code and hung. See if someone can help get the shell to work while Proxifier is on, or any other suggestions.
Error in Python IDLE and shell while Proxifier is on
1.2
0
0
603
22,590,718
2014-03-23T12:11:00.000
0
1
1
0
python,git
22,590,792
1
false
0
0
A pull is a complex command which will do a few different things depending on the configuration. It is not something you should use in a script, as it will try to merge (or rebase if so configured) which means that files with conflict markers may be left on the filesystem, which will make anything that tries to compile/interpret those files fail to do so. If you want to switch to a particular version of files, you should use something like checkout -f <remote>/<branch> after fetching from <remote>. Keep in mind that git cannot know what particular needs you have, so if you're writing a script, it should be able to perform some sanity checks (e.g. make sure there are no extra files lying around)
1
0
0
I'm building a little Python script that is supposed to update itself every time it starts. Currently I'm thinking about putting MD5 hashes on a "website" and downloading the files into a temp folder via the script itself. Then, if the MD5 hashes line up, the temp files will be moved over the old ones. But now I'm wondering if git will just do something like this anyway. What if the internet connection breaks away or power goes down when doing a git pull? Will I still have the "old" version or some intermediate mess? Since my approach works with an atomic rename from the OS I can at least be sure that every file is either old or new, but not messed up. Is that true for git as well?
Does git always produce stable results when doing a pull?
0
0
0
41
22,590,730
2014-03-23T12:12:00.000
3
0
1
0
python,pycharm
25,076,327
1
false
0
0
This may be due to Community Edition which doesn't provide all functionality. Download the professional edition of pycharm. The professional edition is free to use for one month.
1
9
0
I've just downloaded PyCharm community edition for my Mac. It seems to work great but for some reason project type -dropdown is missing in Create project dialog. I'm newbie to PyCharm (and Python overall) so I don't know if there is some obvious reason for this. I was able to create projects however - even in virtualenv. But they are always empty projects.
PyCharm is missing project type drop down
0.53705
0
0
3,046
22,590,811
2014-03-23T12:18:00.000
1
0
0
0
python,c++,opencv,numpy
22,591,329
1
true
0
0
There is no question at all: use cv2. The old cv api, which wraps IplImage and CvMat, is being phased out and will no longer be available in the next release of opencv. The newer cv2 api uses numpy arrays for almost everything, so you can easily combine it with scipy, matplotlib, etc.
1
1
1
I've recently started using OpenCV in Python. I've come across various posts comparing cv and cv2, with an overview saying how cv2 is based on numpy and makes use of an array (cvMat), whereas cv makes use of the old OpenCV bindings that used IplImage* (correct me if I'm wrong). However I would really like to know how the basic types (IplImage* and cvMat) differ, why the latter is faster and better, and how their use in cv and cv2 respectively makes a difference in terms of performance. Thanks.
Can anyone in detail explain how cv and cv2 are different and what makes cv2 better and faster than cv?
1.2
0
0
230
22,591,947
2014-03-23T14:08:00.000
1
0
1
0
c,stack,pypy,vm-implementation,rpython
22,592,504
1
true
0
0
That depends. Do you want to push the string itself, or a pointer to a string? If it's the former, you have a problem, because the string has variable length, unlike a pointer or a number. If it's the latter, you have to consider memory management aside from your stack.
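In plain Python (not RPython, which restricts lists to homogeneous types) a sketch of the pointer approach is trivial, because every stack slot holds a uniform-size object reference, so numbers and strings can coexist on one stack:

```python
# Each slot holds a reference ("pointer") to a boxed value, so the slots
# are uniform-size even though the values themselves are not.
stack = []

def push(value):
    stack.append(value)

def pop():
    return stack.pop()

push(42)
push("hello")
print(pop())  # hello
print(pop())  # 42
```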
1
0
0
I'm making a stack based virtual machine in RPython using the PyPy toolchain to convert the RPython to C. So far I have 4 instructions. EOP - End of Program EOI - End of Instruction PUSH - Push item onto the stack PRINT - Print the top of the stack My question is, how do you push a String to the top of the stack. Is it the same as when you push a number to the top of the stack or do I have to do something else when working with strings?
Stack Machine with Strings
1.2
0
0
393
22,593,328
2014-03-23T16:03:00.000
1
0
0
0
python,user-interface,matplotlib,tkinter,wxpython
22,595,047
1
false
0
0
Tkinter, which is part of python, comes with a canvas widget that can be used for some simple plotting. It can draw lines and curves, and one datapoint every couple of seconds is very easy for it to handle.
1
0
1
Is there a minimalistic python module out there that I can use to plot real time data that comes in every 2-3 seconds? I've tried matplotlib but I'm having a couple errors trying to get it to run so I'm not looking for something as robust and with many features.
Python widget for real time plotting
0.197375
0
0
133
22,597,239
2014-03-23T21:22:00.000
0
0
0
0
python,scikit-learn
48,487,656
2
false
0
0
My recommendation is to not use the cross-validation split that had the best performance. That could potentially give you a biased, overly optimistic estimate. After all, the performance just happened to be good because the fold used for testing happened to line up well with the data used for training. When you generalize to the real world, that probably won't happen. A strategy I got from Andrew Ng is to have train, dev, and test sets. I would first split your dataset into a train set and a test set. Then use cross-validation on your training set, where effectively the training set will be split into training and dev sets. Do cross-validation to validate your model, and store the precision, recall and other metrics to build a ROC curve. Average the values and report those. You can also tune the hyperparameters using your dev set. Next, train the model with the entire training set, then validate the model with your held-out test set.
1
2
1
I'm using scikit-learn to train classifiers. I want also to do cross validation, but after cross-validation I want to train on the entire dataset. I found that cross_validation.cross_val_score() just returns the scores. Edit: I would like to train the classifier that had the best cross-validation score with all of my data.
Making scikit-learn train on all training data after cross-validation
0
0
0
1,088
22,600,861
2014-03-24T04:01:00.000
0
0
0
0
python,mysql,mysql-python
22,600,910
3
false
0
0
You need to quote only character and other non-integer data types. Integer data types need not be quoted.
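Rather than deciding quoting by hand, letting the driver bind parameters handles it for you. The sketch below uses the stdlib sqlite3 driver purely for illustration; MySQLdb works the same way, except its placeholder is %s rather than ?.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (name TEXT, score INTEGER)")

# The driver quotes the string and leaves the integer bare for you --
# and escapes values safely, unlike building SQL with .format().
conn.execute("INSERT INTO t (name, score) VALUES (?, ?)", ("Hello", 1))
row = conn.execute("SELECT name, score FROM t").fetchone()
print(row)  # ('Hello', 1)
```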
1
0
0
insert_query = u""" INSERT INTO did_you_know ( name, to_be_handled, creator, nominator, timestamp) VALUES ('{0}', '{1}', '{2}', '{3}', '{4}') """.format("whatever", "whatever", "whatever", "whatever", "whatever") is my example. Does every single value in a MySQL query have to contain quotes? Would this be acceptable or not? INSERT INTO TABLE VALUES ('Hello', 1, 1, 0, 1, 'Goodbye') Thank you.
Does every single value in a MySQL query have to be quoted?
0
1
0
55
22,602,065
2014-03-24T05:56:00.000
2
0
1
0
python,compatibility,mysql-python,anaconda
24,050,784
1
false
0
0
You don't need to uninstall Anaconda. In your case, try pip install PyMySQL. If which pip returns /Users/vincent/anaconda/bin/pip, this should work.
1
1
0
so I am new to both programming and Python, but I think I have the issue narrowed down enough to ask a meaningful question. I am trying to use MySQLdb on my computer. No problem, right? I just enter: import PyMySQL PyMySQL.install_as_MySQLdb() import MySQLdb At the top of the script. But here is the problem. I installed Anaconda the other day to try to get access to more stats packages. As a result, on the command line, "which python" returns: /Users/vincent/anaconda/bin/python Based on reading other people's questions and answers, I think the problem is caused by being through Anaconda and not usr/bin/python, but I have no idea if this is correct... I would rather not uninstall Anaconda as I know that is not the right solution. So, I would like to ask for a very basic list of steps of how fix this if possible. I am on OSX (10.9) Anaconda is 1.9.1 and I think python is 2.7 Thank you!
ImportError: No module named PyMySQL - issues connecting with Anaconda
0.379949
1
0
7,318
22,602,390
2014-03-24T06:24:00.000
0
0
0
0
python,django,data-binding,architecture
22,629,711
2
false
1
0
Sounds like you need a message queue. You would run a separate broker server which is sent tasks by your web app. This could be on the same machine. On your two local machines you would run queue workers which connect to the broker to receive tasks (so no inbound connection required), then notify the broker in real time when they are complete. Examples are RabbitMQ and Oracle Tuxedo. What you choose will depend on your platform & software.
1
0
0
The scenario is: I have multiple local computers running a Python application. These are on separate networks, waiting for data to be sent to them from a web server. These computers are on networks without a static IP and generally behind a firewall and proxy. On the other hand I have a web server which gets updates from the user through a form and sends the update to the correct local computer. Question: what options do I have to enable this? Currently I am sending csv files over ftp to achieve this, but this is not real time. The application is built in Python, using Django for the web part. Appreciate your help
Sync data with Local Computer Architecture
0
0
1
63
22,604,179
2014-03-24T08:20:00.000
1
0
1
0
python
22,604,199
1
true
0
0
It's probably a custom module. The python standard library does not have an error module. Maybe you missed something in the textbook where an error module was created.
1
0
0
Does Python have an error module? When I wrote 'import error' in Python 2.7.6, it returned "No module named error." But 'import error' is part of a program in a textbook. Should I doubt something in the textbook?
Does python have an "error" module?
1.2
0
0
35
22,604,620
2014-03-24T08:46:00.000
0
0
0
0
python,openerp
22,627,251
1
false
1
0
The user can add fields and models, and can customize the views etc., from the client side. These are in Settings/Technical/Database Structure; here you can find the menus Fields, Models etc. where the user can add fields. And the views can be customized in Settings/Technical/User Interface.
1
0
0
I have been developing modules in OpenERP-7 using Python on Ubuntu-12.04. I want to give my users a feature by which they will have the ability to create whatever fields they want. They will set the name, data type etc. for the field and then, on click, this field will be created. I don't have any idea how this will be implemented. I have set my mind on creating a button that will call a function and create a new field according to the details entered by the user. Is this approach right or not? And will it work? Please guide me so that I can work smartly. Hoping for suggestions
How to code in openerp so that user can create his fields?
0
0
0
103
22,608,905
2014-03-24T12:03:00.000
1
0
0
0
python-2.7,computer-vision,python-module
22,609,701
2
false
0
0
OpenCV is free to use. But SIFT itself as an algorithm is patented, so even if you made your own implementation of SIFT, not based on Lowe's code, you still could not use it in a commercial application. So, unless you have got a license for SIFT, no library that includes it is free. But you can consult with patent lawyers - some countries, like Russia, do not allow patenting algorithms - so you can use SIFT inside such a country.
2
2
1
In python which library is able to extract SIFT visual descriptors? I know opencv has an implementation but it is not free to use and skimage does not include SIFT particularly.
what library is able to extract SIFT features in Python?
0.099668
0
0
2,214
22,608,905
2014-03-24T12:03:00.000
2
0
0
0
python-2.7,computer-vision,python-module
23,098,414
2
true
0
0
I would like to suggest VLFeat, another open source vision library. It also has a python wrapper. The implementation of SIFT in VLFeat is modified from the original algorithm, but I think the performance is good.
2
2
1
In python which library is able to extract SIFT visual descriptors? I know opencv has an implementation but it is not free to use and skimage does not include SIFT particularly.
what library is able to extract SIFT features in Python?
1.2
0
0
2,214
22,609,272
2014-03-24T12:22:00.000
2
0
1
0
python,python-3.x,dictionary,namespaces,base
22,609,390
3
false
0
0
__name__ is a string containing the name of the class. __bases__ is a tuple of classes from which the current class derives. __dict__ is a dictionary of all methods and fields defined in the class. The use case for type(name, bases, dict) is when you want to dynamically generate classes at runtime. Just imagine that you want to create an ORM for a database and you want all the classes representing database tables to be generated automatically. You have an idea how these classes should behave, but you don't know their names or their fields until you inspect the database schema. Then you can use this function to generate these classes.
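For example, the three-argument form below builds a class at runtime that is equivalent to a normal class statement (the class and method names are made up for illustration):

```python
def describe(self):
    # A method destined for the dynamically built class
    return "%s has %d dims" % (type(self).__name__, self.dims)

# Equivalent to:
#   class Point(object):
#       dims = 2
#       describe = describe
Point = type("Point", (object,), {"dims": 2, "describe": describe})

p = Point()
print(Point.__name__)   # Point
print(Point.__bases__)  # (<class 'object'>,)
print(p.describe())     # Point has 2 dims
```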
1
11
0
Docs: With three arguments, return a new type object. This is essentially a dynamic form of the class statement. The name string is the class name and becomes the __name__ attribute; the bases tuple itemizes the base classes and becomes the __bases__ attribute; and the dict dictionary is the namespace containing definitions for class body and becomes the __dict__ attribute. While learning Python I have come across this use of type as "a dynamic form of the class statement", this seems very interesting and useful and I wish to understand it better. Can someone clearly explain the role of __name__, __bases__, and __dict__ in a class and also give example where type(name, bases, dict) comes into its own right.
Python - type(name,bases,dict)
0.132549
0
0
4,642
22,609,742
2014-03-24T12:43:00.000
1
0
0
1
python,ubuntu-12.04,thumbnails,thumbor
23,401,532
1
false
0
0
You are missing a closing bracket in the filters option in your thumbor.conf. Did you miss it when posting here, or is it actually missing in the thumbor.conf file?
1
0
0
I have followed the wiki and set up everything necessary, but all the images are broken right now. I used the aptitude package manager to install. Here are my configuration files: /etc/default/thumbor # set this to 0 to disable thumbor, remove or set anything else to enable it # you can temporarily override this with # sudo service thumbor start force=1 enabled=1 # Location of the configuration file conffile=/etc/thumbor.conf # Location of the keyfile which contains the signing secret used in URLs #keyfile=/etc/thumbor.key # IP address to bind to. Defaults to all IP addresses # ip=127.0.0.1 # TCP port to bind to. Defaults to port 8888. # multiple instances of thumbor can be started by putting several ports coma separeted # Ex: # port=8888,8889,8890 # or port=8888 #Default /etc/thumbor.conf #!/usr/bin/python # -*- coding: utf-8 -*- # thumbor imaging service # https://github.com/globocom/thumbor/wiki # Licensed under the MIT license: # http://www.opensource.org/licenses/mit-license # Copyright (c) 2011 globo.com [email protected] # the domains that can have their images resized # use an empty list for allow all sources #ALLOWED_SOURCES = ['mydomain.com'] ALLOWED_SOURCES = ['admin.mj.dev', 'mj.dev', 'api.mj.dev', 's3.amazonaws.com'] # the max width of the resized image # use 0 for no max width # if the original image is larger than MAX_WIDTH x MAX_HEIGHT, # it is proportionally resized to MAX_WIDTH x MAX_HEIGHT # MAX_WIDTH = 800 # the max height of the resized image # use 0 for no max height # if the original image is larger than MAX_WIDTH x MAX_HEIGHT, # it is proportionally resized to MAX_WIDTH x MAX_HEIGHT # MAX_HEIGHT = 600 # the quality of the generated image # this option can vary widely between # imaging engines and works only on jpeg images QUALITY = 85 # enable this options to specify client-side cache in seconds MAX_AGE = 24 * 60 * 60 # client-side caching time for temporary images (using queued detectors or after detection errors) MAX_AGE_TEMP_IMAGE = 0 # 
the way images are to be loaded LOADER = 'thumbor.loaders.http_loader' # maximum size of the source image in Kbytes. # use 0 for no limit. # this is a very important measure to disencourage very # large source images. # THIS ONLY WORKS WITH http_loader. MAX_SOURCE_SIZE = 0 # if you set UPLOAD_ENABLED to True, # a route /upload will be enabled for your thumbor process # You can then do a put to this URL to store the photo # using the specified Storage UPLOAD_ENABLED = False UPLOAD_PHOTO_STORAGE = 'thumbor.storages.file_storage' UPLOAD_PUT_ALLOWED = False UPLOAD_DELETE_ALLOWED = False # how to store the loaded images so we don't have to load # them again with the loader #STORAGE = 'thumbor.storages.redis_storage' #STORAGE = 'thumbor.storages.no_storage' STORAGE = 'thumbor.storages.file_storage' #STORAGE = 'thumbor.storages.mixed_storage' # root path of the file storage FILE_STORAGE_ROOT_PATH = '/var/lib/thumbor/storage' # If you want to cache results, use this options to specify how to cache it # Set Expiration seconds to ZERO if you want them not to expire. 
#RESULT_STORAGE = 'thumbor.result_storages.file_storage' #RESULT_STORAGE_EXPIRATION_SECONDS = 60 * 60 * 24 # one day #RESULT_STORAGE_FILE_STORAGE_ROOT_PATH = '/tmp/thumbor/result_storage' RESULT_STORAGE_STORES_UNSAFE=False # stores the crypto key in each image in the storage # this is VERY useful to allow changing the security key STORES_CRYPTO_KEY_FOR_EACH_IMAGE = True #REDIS_STORAGE_SERVER_HOST = 'localhost' #REDIS_STORAGE_SERVER_PORT = 6379 #REDIS_STORAGE_SERVER_DB = 0 #REDIS_STORAGE_SERVER_PASSWORD = None # imaging engine to use to process images #ENGINE = 'thumbor.engines.graphicsmagick' #ENGINE = 'thumbor.engines.pil' ENGINE = 'thumbor.engines.opencv' # detectors to use to find Focal Points in the image # more about detectors can be found in thumbor's docs # at https://github.com/globocom/thumbor/wiki DETECTORS = [ 'thumbor.detectors.face_detector', 'thumbor.detectors.feature_detector', ] # Redis parameters for queued detectors # REDIS_QUEUE_SERVER_HOST = 'localhost' # REDIS_QUEUE_SERVER_PORT = 6379 # REDIS_QUEUE_SERVER_DB = 0 # REDIS_QUEUE_SERVER_PASSWORD = None # if you use face detection this is the file that # OpenCV will use to find faces. The default should be # fine, so change this at your own peril. # if you set a relative path it will be relative to # the thumbor/detectors/face_detector folder #FACE_DETECTOR_CASCADE_FILE = 'haarcascade_frontalface_alt.xml' # this is the security key used to encrypt/decrypt urls. # make sure this is unique and not well-known # This can be any string of up to 16 characters SECURITY_KEY = "thumbor@musejam@)!$" # if you enable this, the unencryted URL will be available # to users. # IT IS VERY ADVISED TO SET THIS TO False TO STOP OVERLOADING # OF THE SERVER FROM MALICIOUS USERS ALLOW_UNSAFE_URL = False # Mixed storage classes. Change them to the fullname of the # storage you desire for each operation. 
#MIXED_STORAGE_FILE_STORAGE = 'thumbor.storages.file_storage' #MIXED_STORAGE_CRYPTO_STORAGE = 'thumbor.storages.no_storage' #MIXED_STORAGE_DETECTOR_STORAGE = 'thumbor.storages.no_storage' FILTERS = [ 'thumbor.filters.brightness', 'thumbor.filters.contrast', 'thumbor.filters.rgb', 'thumbor.filters.round_corner', 'thumbor.filters.quality', 'thumbor.filters.noise', 'thumbor.filters.watermark', 'thumbor.filters.equalize', 'thumbor.filters.fill', 'thumbor.filters.sharpen', 'thumbor.filters.strip_icc', 'thumbor.filters.frame', # can only be applied if there are already points for the image being served # this means that either you are using the local face detector or the image # has already went through remote detection # 'thumbor.filters.redeye', URLs for images that I try to load look like this: http://localhost:8888/Q9boJke8j2p2Qtv53Hbz_g1nMZo=/250x250/smart/http://s3.amazonaws.com/our-company/0ea7eeb2979215f35112d2e5753a1ee5.jpg I have also setup a key in /etc/thumbor.key, please let me know if that's necessary to post here.
Thumbor installation not working
0.197375
0
1
3,567
22,609,931
2014-03-24T12:50:00.000
0
0
0
1
python
22,610,165
1
true
0
0
Look up phrases like "port scan" and "port mapping" for some ideas. You'll need to make use of whatever services are already running on the target machine, which could include Avahi/Bonjour/mDNS, HTTP, SSH, SMTP, etc. You'd try to figure out what ports might be open, try to connect there (or send UDP datagrams), and see what you find. You could develop your own heuristics if you have enough devices or a small enough set you need to support, or you could probably find some existing code to do some of this (maybe not in Python directly though).
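To make the port-scan idea concrete, here is a minimal TCP connect-scan sketch using only Python's standard library. The example host address and the port-to-service hints in the comments are placeholders/heuristics assumed for illustration — they are not authoritative fingerprinting rules:

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Try a TCP connect to each port; return the ones that accept."""
    open_ports = []
    for port in ports:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(timeout)
        try:
            if sock.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
        finally:
            sock.close()
    return open_ports

# Ports whose presence *hints* at a device type (heuristic, not proof):
# 22 (SSH), 5353 (mDNS/Bonjour), 5555 (adb on some Android devices),
# 62078 (iOS lockdownd).  "192.168.1.23" is a placeholder address.
print(scan_ports("192.168.1.23", [22, 80, 5353, 5555, 62078]))
```

Combining which ports answer with any banner text they return (e.g. an SSH version string) is how real tools such as nmap build their OS-detection heuristics.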
1
0
0
I would like to query a device (e.g. an Android phone on WiFi) to find out what Operating System it is running, without having to run anything on the device or install an additional agent? Is this possible?
Query network device to determine operating system, without installing a remote agent?
1.2
0
0
83
22,610,616
2014-03-24T13:19:00.000
3
0
0
1
python,macos,firefox,firefox-addon,firefox-addon-sdk
22,612,244
1
true
1
0
It's looking for the Firefox binary, not your application's binaries. You have to install Firefox because cfx run opens a browser with your add-on installed, so you can use it and test it live. If Firefox is already installed, then it is in a non-standard path, so you must tell the cfx command where to find it, this way: cfx run -b /usr/bin/firefox or cfx run -b /usr/bin/firefox-trunk These examples are only valid in some Linux distros like Ubuntu; on Mac OS X you will have to find the Firefox binary yourself (typically somewhere under /Applications/Firefox.app/Contents/MacOS/).
1
1
0
I installed the the latest Add-On SDK by Mozilla (version 1.15). Installation was successful and when I execute cfx I get a list of all possible commands. I made a new separate empty folder, cd'd into it and ran cfx init. This was also successful and all necessary folders and files got created. Now when I try to run the extension or test it, I get the following error: I can't find the application binary in any of its default locations on your system. Please specify one using the -b/--binary option. I have tried looking up the docs to see what kind of file I should be looking for but was unsuccessful in solving the issue. I tried to create an empty bin folder within the add-on folder and i have tried initiating the template in different parents and sub-folders. I still get the same message. I'm running on a Mac, OSX Mavericks 10.9.1 What's going on here exactly?
When I try to run 'cfx run' or 'cfx test' using the Mozilla Add-On SDK, my application binaries are not found
1.2
0
0
730
22,612,189
2014-03-24T14:25:00.000
1
0
1
0
python,debugging,ipython,pycharm
35,292,220
3
false
0
0
On Ubuntu I had to change the line kernel.yama.ptrace_scope = 1 in /etc/sysctl.d/10-ptrace.conf to kernel.yama.ptrace_scope = 0; otherwise PyCharm was not able to attach to the IPython process.
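For reference, the same change can be made from a shell (standard sysctl usage; run as root — these commands modify system configuration, so this is a sketch, not something to paste blindly):

```shell
# Temporarily relax the Yama ptrace restriction (reverts on reboot):
sudo sysctl -w kernel.yama.ptrace_scope=0

# Or make it permanent in the file the answer mentions:
#   /etc/sysctl.d/10-ptrace.conf  ->  kernel.yama.ptrace_scope = 0
# then reload that file:
sudo sysctl -p /etc/sysctl.d/10-ptrace.conf
```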
1
12
0
Is it possible to hit graphical breakpoints when running codes in PyCharm's IPython console? i.e.: You have a script foo() in foo.py You place a graphical breakpoint inside foo() from the editor (the red dot next to line number) You import foo into a PyCharm's IPython console and execute foo() (Note: not running from a debug configuration!)
Debugging inside PyCharm IPython
0.066568
0
0
4,151
22,614,608
2014-03-24T16:06:00.000
1
0
1
1
python
22,614,609
1
true
0
0
I found the solution to be this: python -c "import zipfile; zipfile.ZipFile('archive.zip').extractall('.')" The contents of archive.zip will be placed into the directory this command is executed from.
1
0
0
I have a zip file, but no unzipping tool installed. I do have python installed, but I can't easily create a python script file on this machine. How do I unzip the contents of the zip file using the python executable's -c argument?
Unzip all files in zip archive with inline command
1.2
0
0
174
22,619,437
2014-03-24T20:00:00.000
0
0
0
0
python,django,django-south
22,619,479
1
true
1
0
Well... the answer is to use apps. That's what they're for. They were designed the way they are exactly because standard modules don't provide the level of integration needed. If you start hacking away on your library to make it work on its own, you'll end up with a mess of code and glue about the same size as a Django app, but with a considerably worse smell.
1
0
0
I'm currently working on a django project which tends to get pretty complex over time. Therefore I'm planning to encapsulate, in a separate space, basic core models and utilities that are going to be reused throughout the application. Since these models are mostly base models needed by other apps, imho there's no need to create a django app; I'd rather place them in a standard python package (so the package acts just like a simple library). Since I'm using south for migrations, I'm running into problems when not creating an app and instead using my 'library', because south only considers apps for migrations. What is the django way to avoid this 'problem' and still be able to create migrations for my core models?
Django: Core library and South migrations
1.2
0
0
78
22,619,506
2014-03-24T20:04:00.000
2
0
0
0
python,opencv,image-processing,numpy,scikit-image
22,619,589
2
false
0
0
Here is a list of ideas I can think of:

- take np.sum(); if it is lower than a threshold, consider the image almost black
- calculate np.mean() and np.std() of the image; an almost-black image has both a low mean and a low variance
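A sketch of both ideas (the threshold values are arbitrary picks for illustration, assuming a binary 0/1 image):

```python
import numpy as np

def looks_black(img, frac=0.05):
    """Idea 1: almost black if the fraction of white (1) pixels is tiny."""
    return img.sum() / float(img.size) < frac

def looks_flat_and_dark(img, mean_thr=0.05, std_thr=0.1):
    """Idea 2: low mean *and* low spread -> almost uniformly dark."""
    return img.mean() < mean_thr and img.std() < std_thr

almost_black = np.zeros((100, 100))
almost_black[0, :3] = 1                 # 3 white pixels out of 10000
print(looks_black(almost_black))        # True
```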
2
0
1
How can I check, with the numpy or scikit-image modules, whether a binary image is almost all black or all white? I thought about the numpy.all or numpy.any functions, but I do not know how to use them — neither for a totally black image nor for an almost black one.
How can i check in numpy if a binary image is almost all black?
0.197375
0
0
1,554
22,619,506
2014-03-24T20:04:00.000
2
0
0
0
python,opencv,image-processing,numpy,scikit-image
22,619,838
2
true
0
0
Assuming that all the pixels really are ones or zeros, something like this might work (not at all tested):

def is_sorta_black(arr, threshold=0.8):
    tot = np.float(np.sum(arr))
    if tot/arr.size > (1-threshold):
        print "is not black"
        return False
    else:
        print "is kinda black"
        return True
2
0
1
How can I check, with the numpy or scikit-image modules, whether a binary image is almost all black or all white? I thought about the numpy.all or numpy.any functions, but I do not know how to use them — neither for a totally black image nor for an almost black one.
How can i check in numpy if a binary image is almost all black?
1.2
0
0
1,554
22,621,259
2014-03-24T21:46:00.000
3
0
0
0
python,ios,xcode,python-requests,kivy
22,622,162
1
true
0
1
I don't know how kivy-ios manages different modules, but in the absence of anything else you can simply copy the requests module into your app dir so it's included along with everything else.
1
3
0
I finally have some idea of how to build a Kivy app in Xcode with the help of kivy-ios, but Xcode and the Mac environment are new to me. My issue is: how do I compile other Python modules required for my application? There is 'build-all.sh' in 'kivy-ios/tools' that builds the standard things, but how do I add some other module? In particular, I need the Requests module. Maybe there's some template script to include custom Python modules? Thanks in advance
Compile custom module for Kivy-ios
1.2
0
0
462
22,623,375
2014-03-25T00:39:00.000
0
0
1
0
python
61,510,856
5
false
0
0
In simple words, int() in Python is a type-conversion function, like float() and str(), and can be used to convert a float number to an int (it simply drops the decimal part).
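A quick illustration with the asker's value:

```python
a = 100.0
print(int(a))     # 100 -- int() drops the decimal part (truncates toward zero)
print(int(-3.7))  # -3, not -4: truncation, not flooring
```

Note that the asker's str(a).rstrip('.0') returned "1" because rstrip strips any of the *characters* '.' and '0' from the right, not the substring ".0".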
1
7
0
How can I remove all decimal places form a float? a = 100.0 I want that to become 100 I tried str(a).rstrip('.0'), but it just returned 1
python - remove all decimals from a float
0
0
0
40,286
22,624,070
2014-03-25T01:55:00.000
0
0
0
0
python,wxpython,wxwidgets
22,635,854
1
true
0
1
Selected items in wxListCtrl, wxListBox and so on always use the system's selection background colour; it can't be changed.
1
0
0
I'm trying to set a custom background for selected items in wxPython 2.8. I cannot figure out how to do so. I've tried SetItemBackground with no luck.
Setting selection background in ListCtrl
1.2
0
0
209
22,624,987
2014-03-25T03:35:00.000
0
0
0
0
android,python,android-ndk,kivy
22,644,004
2
false
0
1
Android SDKs are released more often than NDKs. It has happened more than once that, if you use a too-recent SDK, the NDK will not have the header (.h) files for it. That said, I'm not sure this is related to your issue at all.
1
2
0
Should/does NDK9 work with Android API 19 (even though it was released with API 18)? Full story: I was building an Android app using kivy, python-for-android and buildozer. Compiling with NDK9 (i.e. 9d) and API 19 results in an error: E/AndroidRuntime( 1773): java.lang.UnsatisfiedLinkError: Cannot load library: soinfo_relocate(linker.cpp:975): cannot locate symbol "wait4" referenced by "libpython2.7.so"... Compiling with NDK9 (i.e. 9d) and API 18 works. :)
android ndk and sdk compatibility issue (runtime linker error)
0
0
0
681
22,626,203
2014-03-25T05:36:00.000
1
0
0
0
python,flask
22,659,358
1
true
1
0
The Werkzeug Request object heavily relies on properties and anything that touches request data is lazily cached; e.g. only when you actually access the .form attribute would any parsing take place, with the result cached. In other words, don't touch .files, .form, .get_data(), etc. and nothing will be sucked into memory either.
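The "nothing is parsed until you touch it" behaviour can be sketched with a toy stand-in. This FakeRequest class is hypothetical — it is not Werkzeug's actual implementation — and uses functools.cached_property (Python 3.8+) purely to illustrate the lazy-parse-then-cache pattern:

```python
from functools import cached_property

class FakeRequest:
    """Toy stand-in for Werkzeug's Request: parsing is deferred until .form is touched."""
    def __init__(self, raw):
        self.raw = raw
        self.parse_count = 0

    @cached_property
    def form(self):
        self.parse_count += 1   # the expensive parse happens here, exactly once
        return dict(pair.split("=") for pair in self.raw.split("&"))

req = FakeRequest("a=1&b=2")
print(req.parse_count)   # 0 -- nothing parsed yet
print(req.form["a"])     # 1 -- first access triggers the parse
print(req.parse_count)   # 1
req.form                 # cached: no re-parse
print(req.parse_count)   # 1
```

So as long as the endpoint reads only Request.stream and never touches .form and friends, the body is never pulled into memory.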
1
0
0
I have an endpoint in my Flask application that accepts large data as the content. I would like to ensure that Flask never attempts to process this body, regardless of its content-type, and always lets me read it with the Request.stream interface. This applies only to a couple of endpoints, not my entire application. How can I configure this?
Flask accept request data as a stream without processing?
1.2
0
0
92
22,633,008
2014-03-25T11:19:00.000
1
0
1
0
python,permutation,combinatorics
22,666,834
2
true
0
0
I believe that a block cipher, like AES, provides exactly this functionality.
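A block cipher permutes fixed-size blocks (AES: 128 bits), so to get a permutation on an arbitrary range(n) you combine it with "cycle walking": apply the cipher repeatedly until the result lands back inside the range. The sketch below substitutes a small hash-based Feistel network for AES so it runs on the standard library alone — illustrative only, not cryptographic-strength:

```python
import hashlib

def _round_value(r, v, key):
    # Pseudorandom round function keyed by (round number, half-block value, key).
    data = ("%d:%d:%d" % (r, v, key)).encode()
    return int.from_bytes(hashlib.sha256(data).digest()[:8], "big")

def _feistel(x, key, bits, rounds=4):
    """Keyed bijection on [0, 2**bits); bits must be even."""
    half = bits // 2
    mask = (1 << half) - 1
    L, R = x >> half, x & mask
    for r in range(rounds):
        L, R = R, L ^ (_round_value(r, R, key) & mask)
    return (L << half) | R

def _feistel_inv(y, key, bits, rounds=4):
    half = bits // 2
    mask = (1 << half) - 1
    L, R = y >> half, y & mask
    for r in reversed(range(rounds)):
        L, R = R ^ (_round_value(r, L, key) & mask), L
    return (L << half) | R

class RangePermutation:
    """Bijection on range(n): O(1) memory, O(1) expected time per lookup."""
    def __init__(self, n, key):
        self.n, self.key = n, key
        bits = max((n - 1).bit_length(), 2)
        self.bits = bits + (bits % 2)        # the Feistel needs an even width

    def __getitem__(self, i):                # the i-th item of the permutation
        y = _feistel(i, self.key, self.bits)
        while y >= self.n:                   # cycle-walk back into range
            y = _feistel(y, self.key, self.bits)
        return y

    def index(self, y):                      # where a value sits in the permutation
        x = _feistel_inv(y, self.key, self.bits)
        while x >= self.n:
            x = _feistel_inv(x, self.key, self.bits)
        return x

p = RangePermutation(10**9, key=12345)
print(p[0], p.index(p[0]))   # the scrambled image of 0, then 0 recovered back
```

Swapping the hash-based round function for AES (e.g. via a crypto library) gives the cryptographically strong version of the same construction.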
1
4
0
I'm not sure whether this is possible even theoretically; but if it is, I'd like to know how to do it in Python. I want to generate a big, random permutation cheaply. For example, say that I want a permutation on range(10**9). I want the permutation to be uniform (i.e. I want the numbers to be all over the place without any seeming structure.) I want to have a function available to get the nth item in the permutation for each n, and I want to have another function available to get the index of every number in the permutation. And the big, final condition is this: I want to do all of that without having to store all the items of the permutation. (Which can be a huge amount of space.) I want each item to be accessible, but I don't want to store all the items, like range in Python 3. Is this possible?
Python: Generating a big uniform permutation cheaply
1.2
0
0
151
22,634,264
2014-03-25T12:13:00.000
0
0
0
1
python
22,634,599
2
false
0
0
You can run any Python statement from cmd using python.exe -c "code". For example, python.exe -c "print(10*10)" will print 100 to your console. Does this help? Your question is a little unclear, sorry.
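Since the asker's edit says they really want to know whether the child ended via sys.exit, an exception, or normally, here is a sketch using subprocess.Popen (available on Python 2.6, unlike check_output). The exit status distinguishes the cases: sys.exit(n) gives code n, an uncaught exception gives code 1 (with the traceback on stderr), and a normal exit gives 0:

```python
import subprocess
import sys

def run_and_capture(cmd_args):
    """Run a command; return (exit_code, stdout, stderr)."""
    proc = subprocess.Popen(cmd_args,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    out, err = proc.communicate()   # waits for the process to finish
    return proc.returncode, out, err

code, out, err = run_and_capture([sys.executable, "-c", "print(10*10)"])
print(code, out.decode().strip())   # 0 100
```

For the asker's case the argument list would be something like ["myprogram.exe", "-a", "arg1", "-b", "arg2"].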
1
0
0
I would like to launch a python program.exe, to test it, from another Python script — i.e. launch it on the cmd (myprogram.exe -a arg1 -b arg2) and eventually get the error message it prints, or any console output. Does anyone have an idea how to do this? Thanks EDIT Actually I launch it with os.popen(command), but what I want to know is whether it ended with sys.exit, with an exception, or normally. PS: I'm running Python 2.6 (can't use subprocess.check_output). Thanks
launch a python program with it's arguments on cmd and get the output messages python
0
0
0
47
22,636,894
2014-03-25T14:01:00.000
0
0
0
0
python,router
22,789,614
1
true
0
0
It's possible by using the route add command on Linux: add a host or network route for the upload destination via the gateway that sits on the fast-upload interface, and leave the default route on the other interface.
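A sketch of what that looks like — every address, netmask and device name below is a placeholder for the asker's actual network, and the commands need root:

```shell
# Route traffic for the upload destination (e.g. an S3 address range) out via
# the fast-upload router on wlan0; everything else keeps the default via eth0.
# Classic syntax the answer refers to:
sudo route add -net 52.216.0.0 netmask 255.255.0.0 gw 192.168.2.1 dev wlan0
# Modern iproute2 equivalent:
sudo ip route add 52.216.0.0/16 via 192.168.2.1 dev wlan0
# Verify the resulting routing table:
ip route show
```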
1
1
0
Is it possible to switch between interfaces in a Python program? I will have eth0 and wlan0 connections to two different routers. I'm using boto to upload images to an AWS server, and I need the uploads to go through the router with the fast upload speed, while other downloads should use the other interface, which is connected to a router with a fast download speed. If this is possible, how can I do it?
Use different interfaces (eth0 and wlan0) for sending and receiving in Python program
1.2
0
1
409