Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
20,105,156 | 2013-11-20T19:29:00.000 | 1 | 0 | 0 | 1 | python,google-app-engine,google-cloud-storage,client-library | 20,105,749 | 1 | true | 1 | 0 | have you updated gcs client to 1.8.8 version from the downloads list, or to SVN head? Thanks. | 1 | 1 | 0 | I haven't changed any code, but when I try to upload an image file using the GCS Client library on app engine's dev server, I am now getting this fatal error:
Expect status [201] from Google Storage. But got status 400.
This was working until I made the update from Google to 1.8.8 as of 11/19/13.
Anybody else seeing this? It doesn't give any other indication as to why the 400 error occurs. | Google Cloud Storage Client Library 400 error on devserver as of update 1.8.8 | 1.2 | 0 | 0 | 164
20,106,390 | 2013-11-20T20:31:00.000 | 2 | 1 | 1 | 0 | python,eclipse,pydev | 20,107,118 | 2 | true | 0 | 0 | Two things:
Automated tests, as you've suggested.
Static analysis: Pylint, PyChecker and/or pyflakes. Of these,
pylint is the most stringent. | 1 | 2 | 0 | Suppose I had a large project in Java, and a class, UsedEverywhere, which was used everywhere. If I changed the return type of the returnsSomething method of that class, my IDE would tell me about everything that broke because of that change. Now suppose I have a large project in Python with the same class and I make the same change. There is no way for me to know what impact I have had unless I also have a huge suite of regression unit tests (which I use regularly). This is due to the dynamic, duck typing system. The same goes for a situation where I remove a method from a class that is used everywhere. What is the best way to protect large projects from breaking without our knowing it? These breakages would never be detected until some kind of regression testing is done. | Python: large projects and making changes to files used everywhere | 1.2 | 0 | 0 | 94 |
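As an illustration of the first point above, here is a minimal sketch of the kind of regression test that would catch a changed return type. The module path `myproject.used_everywhere` and the method name `returns_something` are hypothetical stand-ins for the question's UsedEverywhere example.

```python
import unittest

from myproject.used_everywhere import UsedEverywhere  # hypothetical import path


class UsedEverywhereContractTest(unittest.TestCase):
    """Pin the return type down so an interface change fails the suite loudly."""

    def test_returns_something_is_an_int(self):
        result = UsedEverywhere().returns_something()
        self.assertIsInstance(result, int)


if __name__ == "__main__":
    unittest.main()
```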
20,106,877 | 2013-11-20T20:58:00.000 | 0 | 0 | 1 | 0 | python,maya | 20,107,379 | 2 | false | 0 | 0 | A better solution, rather than sleep, is a while loop. Set up a while loop to check a shared value (or even a thread-safe structure like a Queue). The parent processes that your waiting on can do their work (or children, it's not important who spawns what) and when they finish their work, they send a true/false/0/1/whatever to the Queue/variable letting the other processes know that they may continue. | 1 | 0 | 0 | I've run into situations as of late when writing scripts for both Maya and Houdini where I need to wait for aspects of the GUI to update before I can call the rest of my Python code. I was thinking calling time.sleep in both situations would have fixed my problem, but it seems that time.sleep just holds up the parent application as well. This means my script evaluates the exact same regardless of whether or not the sleep is in there, it just pauses part way through.
I have a thought to run my script in a separate thread in Python to see if that will free up the application to still run during the sleep, but I haven't had time to test this yet.
Thought I would ask in the meantime if anybody knows of some other solution to this scenario. | time.sleep that allows parent application to still evaluate? | 0 | 0 | 0 | 2,052 |
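A rough sketch of the idea in the answer above: do the slow work in a background thread and poll a queue from the host application, instead of blocking the main thread with time.sleep. This is plain Python; the body of slow_job and the way poll() gets scheduled from Maya or Houdini (e.g. a timer or idle callback) are placeholders.

```python
import threading
import time
try:
    import queue  # Python 3
except ImportError:
    import Queue as queue  # Python 2

results = queue.Queue()

def slow_job():
    """Stand-in for the long-running work; replace with the real task."""
    time.sleep(5)
    results.put("done")

worker = threading.Thread(target=slow_job)
worker.daemon = True
worker.start()

def poll():
    """Call periodically from the host application's timer/idle callback."""
    try:
        value = results.get_nowait()
    except queue.Empty:
        return None  # not finished yet, check again later
    return value
```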
20,111,254 | 2013-11-21T02:21:00.000 | 0 | 0 | 0 | 1 | python,web,twisted | 20,114,092 | 1 | false | 1 | 0 | I have found several good references for launching daemon processes with python. See daemoncmd from pypi.
Im still coming up a little short on the monitoring/alert solutions (in python). | 1 | 2 | 0 | I would like to deploy several WSGI web applications with Twisted on a debian server, and need some direction for a solid production setup. These applications will be running 24/7.
I need to run several configurations, each binding to different ports/interfaces/privileges.
I want to do as much of this in python as possible.
I do not want to package my applications with a program like 'tap2deb'.
What is the best way to implement each application as a system service? Do I need some /etc/init.d shell scripts, or can I manage this with python? (I don't want anything quite as heavy as Daemontools)
If I use twistd to manage most of the configuration/process management, what kind of wrappers/supervisors do I need to put in place?
I would like centralized management, but restricting control to the parent user account is not a problem.
The main problem I want to avoid, is having to SSH into my server once a day to restart a blocking/crashed application | twisted web for production server service | 0 | 0 | 0 | 805 |
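For reference, a minimal sketch of one way to run a WSGI app under twistd using a .tac file, which gives daemonization, PID files, and per-service UID/GID from the twistd command line; the wsgi import and port are placeholders, and monitoring/restart-on-crash still needs something on top (the answer mentions daemoncmd as one option).

```python
# app.tac - run with: twistd -y app.tac
from twisted.application import service, internet
from twisted.internet import reactor
from twisted.web import server, wsgi

from myproject.wsgi import application as wsgi_app  # placeholder import

# Wrap the WSGI callable so Twisted can serve it from its thread pool.
resource = wsgi.WSGIResource(reactor, reactor.getThreadPool(), wsgi_app)
site = server.Site(resource)

application = service.Application("my-wsgi-service")
internet.TCPServer(8080, site, interface="127.0.0.1").setServiceParent(application)
```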
20,111,456 | 2013-11-21T02:42:00.000 | 1 | 0 | 0 | 0 | python,django,authentication | 21,763,143 | 2 | false | 1 | 0 | This sounds quite reasonable. There are several ways to achieve this: use a third party library like django-social-auth which handles using third party applications to authenticate users via the Django user model. The other way to do this is to write your own custom backend that uses OAuth2 protocol to authenticate users via a third party application (e.g. Twitter) and saves/authorizes them as a Django user for your application. This might sound difficult but it's quite easy. I wrote an example Django application to demonstrate this functionality as well as provide a tutorial for custom backend authentication. This app/tutorial uses Django 1.5: djangoauth.thecloutenproject.com/ | 1 | 1 | 0 | I am just learning python and django and I put up a pretty decent website to manage a database and also a search page. The new requirement that I am a bit confused now is that the authentication should be done through an external provider (unknown yet, but probably LDAP or Kerberos Tickets).
My idea was to authenticate the users through this service and if successful add the user to my django created database with syncdb (where I have permissions and groups) and then bypass this user as authenticated to enable them to perform actions in the site.
Does that sound reasonable? Is there an 'accepted' approach to this kind of authentication? I am not sure if I will have to write my own authentication view.
Thanks. | Django authentication via external provider | 0.099668 | 0 | 0 | 743 |
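A skeleton of the custom-backend route described in the answer above; check_against_external_provider is a placeholder for whatever LDAP/Kerberos call ends up being used, and the backend still has to be listed in AUTHENTICATION_BACKENDS in settings.py.

```python
from django.contrib.auth.models import User


def check_against_external_provider(username, password):
    """Placeholder: call LDAP/Kerberos here and return True on success."""
    raise NotImplementedError


class ExternalProviderBackend(object):
    """Authenticate remotely, mirror the user locally on first login."""

    def authenticate(self, username=None, password=None):
        if not check_against_external_provider(username, password):
            return None
        user, _created = User.objects.get_or_create(username=username)
        return user

    def get_user(self, user_id):
        try:
            return User.objects.get(pk=user_id)
        except User.DoesNotExist:
            return None
```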
20,115,202 | 2013-11-21T07:48:00.000 | 1 | 0 | 0 | 0 | python,django,apache,solr,django-haystack | 20,115,328 | 1 | false | 1 | 0 | Django and Apache Solr have no relation with each other.
You can use django like you would use normally, and use pysolr or sunburnt python module to fetch/write data to SOLR.
I would recommend pysolr for its simplicity. | 1 | 1 | 0 | I am new to the Django framework, so please help me work with Solr and Django.
I have tried some tutorials available on different sites, but they all work with older versions of Django and Apache Solr that are not compatible with the new versions. | How Apache Solr works with Django 1.5? | 0.197375 | 0 | 0 | 103
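A small sketch of the pysolr route recommended above; the Solr URL, core name, and document fields are made-up examples.

```python
import pysolr

# URL is an example; point it at your own Solr core/collection.
solr = pysolr.Solr("http://localhost:8983/solr/mycore", timeout=10)

# Index a couple of documents, e.g. built from Django model instances.
solr.add([
    {"id": "article-1", "title": "Hello Solr", "body": "Indexed from Django"},
    {"id": "article-2", "title": "Second post", "body": "More text"},
], commit=True)

# Query from a Django view and pass the hits to a template.
results = solr.search("title:hello")
for hit in results:
    print(hit["id"], hit.get("title"))
```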
20,119,271 | 2013-11-21T11:07:00.000 | 0 | 1 | 1 | 0 | python,class,serialization,hierarchy | 20,126,494 | 2 | false | 0 | 0 | Classes are normal python objects, so, in theory, should be picklable, if you provide __reduce__ (or implement other pickle protocol methods) for them. Try to define __reduce__ on their metaclass. | 1 | 0 | 0 | I have to serialise a dynamically created class hierarchy. And a bunch of objects - instances of the latter classes.
Python pickle is not of much help; its wiki says "Classes ... cannot be pickled". Or there may be some trick that I cannot figure out.
Performance requirement:
Deserialization should be pretty fast, because the serialised stuff serves as a cache and should save me the work of creating the same class hierarchy.
Details:
classes are created dynamically using type and sometimes meta-classes. | Python : serialise class hierarchy | 0 | 0 | 0 | 140 |
20,121,794 | 2013-11-21T13:03:00.000 | 0 | 0 | 1 | 0 | python,image,path,cmd,pygame | 20,121,823 | 2 | false | 0 | 0 | Try looking at Relative Paths.
Additionally, without seeing your image loading code it is really quite hard to help you. | 1 | 0 | 0 | I have following problem. When i am trying to open Python game files which i downloaded from internet (on the cmd), then i cant run them, because my computer doesn't find pictures which are in code. Even though the main code and pictures are in the same folder.
If I insert the full paths of the pictures' locations, then everything is fine, but that takes too much time with long code.
So, my question is: how can I make my computer open the picture files without me inserting full paths?
Thanks if anyone could help. | The paths of picture locations in python | 0 | 0 | 0 | 99 |
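A sketch of the relative-path fix the answer points at: resolve image paths against the script's own folder so the game runs no matter which directory it is started from. The images/player.png name is just an example.

```python
import os
import pygame

# Directory containing this script, regardless of where it was launched from.
BASE_DIR = os.path.dirname(os.path.abspath(__file__))

def load_image(*parts):
    """Build a path relative to the game folder and load the image."""
    return pygame.image.load(os.path.join(BASE_DIR, *parts))

player = load_image("images", "player.png")  # example file name
```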
20,122,907 | 2013-11-21T13:57:00.000 | 1 | 0 | 1 | 0 | python,file | 20,123,095 | 1 | false | 0 | 0 | Create a file which contains all the names of the files. Update this file, say, once a day.
In Python, read this single file into a list, pick 10 random items. | 1 | 1 | 0 | How could i get 10 random files, or not, from a directory without listing or iterating over all files. The directory has millions of files and we can't change that. Thanks ! | PYTHON Get files from a directory without listing all files | 0.197375 | 0 | 0 | 99 |
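A sketch of the index-file idea from the answer; file_index.txt is a hypothetical name for the listing that gets regenerated once a day.

```python
import random

# One filename per line, regenerated e.g. nightly by a cron job.
with open("file_index.txt") as index:
    names = [line.strip() for line in index if line.strip()]

sample = random.sample(names, min(10, len(names)))
print(sample)
```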
20,128,461 | 2013-11-21T17:57:00.000 | 0 | 0 | 1 | 0 | python,pip | 20,129,765 | 1 | false | 0 | 0 | Ugly workaround:
create virtualenv, install demanded package with pip, use pip list -o in both real environment and virtual one, compare output... | 1 | 0 | 0 | Is there a way to know if specific version is outdated? I know I can use pip list -o, but this goes over all packages, and I would like to get only for specific version.
thanks.
Eran | how to find if a specific version is outdated | 0 | 0 | 0 | 44
20,132,603 | 2013-11-21T21:45:00.000 | 0 | 0 | 0 | 0 | python,django | 20,132,972 | 1 | false | 1 | 0 | You only have to set the is_staff flag on your user to True. Superusers bypass user perrmissions (all is permited). | 1 | 0 | 0 | Is there a way to register extra staff Admins such as a log in or sign up? or is the only way to add them in the Superuser Admin? | Django Admin: Register extra staff Admin | 0 | 0 | 0 | 114 |
20,134,413 | 2013-11-22T00:00:00.000 | -1 | 0 | 1 | 0 | python,python-2.7,asyncore | 22,015,245 | 1 | true | 0 | 0 | As stated in the comments above the answer is threading | 1 | 0 | 0 | I am trying to create an chat program using python in mac os x python 2.7.5.
I have successfully done so using the asyncore and asynchat modules, however. I create a server that will open a telnet port on, say, 5006, which is fine.
Here is the problem: that previously mentioned executable creates a window in the terminal. Now, when I want to actually start chatting, I have to open another terminal window and type $ telnet 127.0.0.1 5006 to open a connection to myself. Others have to do the same thing from their respective computers.
BUT I only want to open one window that will run my server code and chat with others.
I just want to make clear: there is no problem here with chatting and connections. I am asking how to reduce my two-window server/chatter to a server and chatter in one.
I don't need anybody to write my code; I am looking for a push in the right direction if someone doesn't have a direct answer. Maybe a module of some sort or something similar. I'm lost, so... | Windows in asyncore, asynchat | 1.2 | 0 | 0 | 221
20,135,093 | 2013-11-22T01:09:00.000 | 13 | 1 | 1 | 0 | python,sum,time-complexity | 20,135,190 | 3 | false | 0 | 0 | It's got to be O(n) for a large list of integers. | 3 | 14 | 0 | What is the time complexity of the sum() function? | What is the time complexity of sum() in Python? | 1 | 0 | 0 | 25,615 |
20,135,093 | 2013-11-22T01:09:00.000 | 14 | 1 | 1 | 0 | python,sum,time-complexity | 20,135,284 | 3 | true | 0 | 0 | It will make Theta(n) next calls on the iterator, and Theta(n) additions, where n is the number of items you're summing.
That's as specific as you can be for the time complexity of an algorithm that calls unknown code. If the time taken for each addition depends on n (as for example it would when summing lists, like sum(list(range(i)) for i in range(n))), then that's going to affect the overall time complexity. | 3 | 14 | 0 | What is the time complexity of the sum() function? | What is the time complexity of sum() in Python? | 1.2 | 0 | 0 | 25,615 |
20,135,093 | 2013-11-22T01:09:00.000 | 5 | 1 | 1 | 0 | python,sum,time-complexity | 20,135,182 | 3 | false | 0 | 0 | It depends on your data structure. For a flat list, you cannot do better than O(n) because you have to look at each item in the list to add them up.
When in doubt, try it out: import profile is your friend. | 3 | 14 | 0 | What is the time complexity of the sum() function? | What is the time complexity of sum() in Python? | 0.321513 | 0 | 0 | 25,615 |
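Following the "try it out" suggestion, a quick timeit check makes the linear behaviour visible: the measured time should grow roughly tenfold each time the list grows tenfold (exact numbers depend on the machine).

```python
import timeit

for n in (10**4, 10**5, 10**6):
    data = list(range(n))
    # Time 100 repetitions of summing the same list.
    t = timeit.timeit(lambda: sum(data), number=100)
    print(n, t)
```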
20,139,979 | 2013-11-22T08:09:00.000 | 0 | 0 | 0 | 1 | python,linux,django,windows,apache | 20,140,905 | 1 | false | 1 | 0 | I can think of some ways to do this:
Use web services with real REST protocol and cross-site scripting protection
Use WINE (As OneOfOnes suggested in his comment)
But this is very risky for real production and might not work at all (or just when the load will become heavier)
Write some code in the windows machine and call this code using something like Zero-MQ (ZMQ) or similar product
Depending on the way you are using this library, one solution may fit better than the others.
For most cases, I would suggest to go with ZMQ
This way you can use much more complex models of communication (subscription-subscribers, send-response, and more)
Also, using ZMQ would let you scale very easily if the need arises (you will be able to add a few Windows machines to process the requests)
Edit:
To support file transfer between machines, you have few options as well.
Use ZMQ. File can be just a stream of data.
No problem to support such a stream with ZMQ
Use file server with some Enq. procedure
Enq. can be done via ZMQ msg to inform the other side that the file is ready
You can use folder share instead of a file server, but sharing files on the windows machine will not be a scale-able solution
Windows program can send the file via FTP or SSH to the Linux server.
Once again, signaling (file ready, file name,...) can be done with ZMQ | 1 | 0 | 0 | Background knowledge: Website in Django run under apache.
Briefly speaking, I need to call an .exe program in a windows machine from a Linux machine.
The reason for this is our website runs on Linux, but one module relies on Windows DLL. So we plan to put it on a separate windows server, and use some methods to call the exe program, and get the result back.
My idea is: set up a web service on that Windows machine, post some data to it, let it deal with the exe program, and return some data as the response. Note that the request data and the response data will both contain files.
I wonder if there is any neater way for this?
EDIT: Thanks for @fhs, I found I didn't make my main problem clearly enough. Yes, the webservice could work. But the main disadvantages for this is: basically, I need to post several files to windows; windows receive files, save them, call the program using these files as parameters, and then package the result files into a zip and return it. In linux, receive the file, unpack it to local file system. It's kind of troublesome.
So, is there any way to let both machines access the other one's files as easily as in local file system? | Calling exe in Windows from Linux | 0 | 0 | 0 | 252 |
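A bare-bones sketch of the ZMQ request/reply pattern suggested in the answer, treating the files as byte payloads so nothing has to be shared over the filesystem; the host, port, and run_exe_on helper are placeholders.

```python
import zmq

# --- Windows side: wrap the .exe behind a REP socket ---
def serve(port=5555):
    ctx = zmq.Context()
    sock = ctx.socket(zmq.REP)
    sock.bind("tcp://*:%d" % port)
    while True:
        payload = sock.recv()          # input file bytes sent from Linux
        result = run_exe_on(payload)   # placeholder: save, call the exe, read result
        sock.send(result)              # result file bytes back

# --- Linux side (e.g. called from the Django view) ---
def call_windows(data, host="192.168.0.10", port=5555):
    ctx = zmq.Context()
    sock = ctx.socket(zmq.REQ)
    sock.connect("tcp://%s:%d" % (host, port))
    sock.send(data)
    return sock.recv()
```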
20,145,650 | 2013-11-22T13:05:00.000 | 0 | 0 | 0 | 0 | python,pyscripter | 22,344,568 | 1 | false | 0 | 1 | Disable the system sounds in the taskbar if working in windows. This only removes the beep however. | 1 | 2 | 0 | I am using PyScripter version 2.5.3.0 X86.
When I run a Python script which has an error (ValueError), an error window pops up with an annoying beep sound.
Is it possible to disable this error window with the sound?
Thanks | Pyscripter: Disable error windows popup | 0 | 0 | 0 | 297 |
20,150,671 | 2013-11-22T17:15:00.000 | 1 | 1 | 1 | 0 | c++,python,r,debugging,gdb | 20,153,521 | 1 | false | 0 | 0 | Because Python is an interpreted language, you can have this friendly "debugging experience". C++ is a compiled language so when the executable is running, the run-time knows nothing about the source code. That is why we have to use a GDB or something that can help us to associate the binary and the source code.
So I think you have to get familiar with GDB or just pick a nice IDE.
Eclipse is quite good! You can do anything with it because there are so many plugins for it. | 1 | 0 | 0 | Python and R offer a friendly way for one to understand the source code written in these languages, and users can stop at a given point and inspect the objects (as objects in these languages can be printed in a user-friendly way while debugging).
For C++, I don't know if there is similar way. I currently don't use IDE. I know the C++ source code can be compiled with the -g option to allow the use of gdb. But this is still much more difficult than what is in python and R. Does anybody know what might be the best to step through C++ source code and inspect objects when necessary (for code understanding purpose)? Thanks. | Stepping through C++ programs like in python and R | 0.197375 | 0 | 0 | 111 |
20,156,694 | 2013-11-23T00:00:00.000 | 3 | 0 | 0 | 1 | google-app-engine,python-2.7,app-engine-ndb | 20,156,917 | 1 | true | 1 | 0 | A few things are wrong here.
Firstly, Subcategory.get_by_id(subcategoryId) probably won't work, as your example key has an ancestor defined. You need to include the ancestor(s) in get_by_id.
Given that you are using mySubcategorykey.get() and you don't retrieve an entity, it means the key is incorrect. A get by key won't experience eventual consistency, so the key is just wrong, or you didn't put() the original entity.
I suggest you examine the key after you put() the entity and see if it actually matches what you are using.
Also there are problems with your example key, ndb.Key(Category, 'Foods', Subcategory, subcategoryId) Category and Subcategory need to be strings, or variables with a string value of "Category" and "SubCategory" - which would be a bit odd to write this way.
Also, you don't create query objects from keys; query is a method of ndb.Model, or you instantiate a query object from ndb.Query.
So you are mixing up some terminolgy and/or concepts. | 1 | 0 | 0 | I am passing one of my projects to the google app engine, just for the sake of learning. However I have some problems with the ndb datastore. My root entity would be Categories and these have Subcategories as child entities. So let's say I have Category Foods which has Subcategory Main Dishes. So the key for this Entity would be ndb.Key(Category, 'Foods', Subcategory, subcategoryId). When I am creating a query object from this key I can fetch the correct subcategory, but from the documentation I would like to do other two methods as well which are not working, I don't know for what reason.
mySubcategorykey.get() => it returns None using the aforementioned key.
Subcategory.get_by_id(subcategoryId) => Also returns None.
Also when I am generating a safeUrl from the key, I cannot return the object with ndb.Key(urlSafe=myUrlSafeString).get(), however printing the ndb.Key(urlSafe) gives me the correct key, as it states in the DataStore Viewer.
Can anyone please tell me what I am doing wrong? Thank you. | mykey.get() method doesn't work in google app engine project | 1.2 | 0 | 0 | 99
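To make the answer concrete, here is a sketch of the two corrected calls: use string kinds in the key and pass the ancestor to get_by_id. The model definitions and the id value 42 are illustrative.

```python
from google.appengine.ext import ndb

class Category(ndb.Model):
    pass

class Subcategory(ndb.Model):
    name = ndb.StringProperty()

parent = ndb.Key('Category', 'Foods')
sub = Subcategory(parent=parent, id=42, name='Main Dishes')
sub.put()

# Both of these now return the entity instead of None:
key = ndb.Key('Category', 'Foods', 'Subcategory', 42)  # kinds as strings
print(key.get())
print(Subcategory.get_by_id(42, parent=parent))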
20,159,107 | 2013-11-23T05:53:00.000 | 1 | 0 | 1 | 0 | python,python-2.7 | 20,159,136 | 2 | false | 0 | 0 | Because "asdf" without its first four characters still does contain "". A harder check comes into play when the index exceeds the length of the string, but having an index equal to the string is equivalent to "".find(). | 1 | 1 | 0 | find('asdf','') finds an empty string in 'asdf' hence it returns 0.
Similarly, find('asdf','',3) starts to search for the string at index position 3 and hence returns 3.
Since the last index is 3, find('asdf','',4) should return -1 but it returns 4 and starts to return -1 only if the starting index is more than or equal to (last_index)+2. Why is this so? | find() function in python2.7.5 | 0.099668 | 0 | 0 | 1,304 |
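For what it's worth, the behaviour in question is easy to reproduce in the interpreter: the empty string is "found" at any start position up to and including len(s), and only past that does find return -1.

```python
k = 'asdf'
print(k.find('', 3))   # 3
print(k.find('', 4))   # 4  (len(k) == 4; the empty slice k[4:4] exists)
print(k.find('', 5))   # -1 (start is past the end of the string)
```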
20,159,232 | 2013-11-23T06:08:00.000 | 4 | 0 | 0 | 1 | python,git,google-app-engine | 20,196,135 | 1 | true | 1 | 0 | Lack of support for App Engine modules is a known issue for the push-to-deploy feature, and is something we're actively working on addressing at this time. | 1 | 3 | 0 | Can I use AppEngines "Push to Deploy" (deploying by pushing a GIT repository) to update a multiple module Python application?
Where do I get the repo url for the non default modules? | AppEngine Push to Deploy and Modules | 1.2 | 0 | 0 | 347 |
20,160,026 | 2013-11-23T07:57:00.000 | 1 | 0 | 0 | 0 | python,http,connection,httplib | 20,160,481 | 1 | true | 0 | 0 | The thing to remember about an HTTP connection is it's still, at a lower level, a socket connection over TCP. They can be prone to issues with leaving a connection open, even if you're constantly streaming data from the source.
While some pretty serious efforts have been made in this area (socket.io, websockets, HTTP long polling, etc), your best and simplest bet is to just make new requests every couple of seconds.
However, there are specific use cases for using things like websockets, so perhaps you can explain what you're doing a little better, then maybe we can say for sure. | 1 | 1 | 0 | I want to send data over to my server via HTTP GET requests, every 1-2 seconds.
Should I create a new connection every time, or should i keep the connection open and keep sending requests over the same connection? If I employ the latter method would httplib keep the connection alive, what happens if the connection is broken?
I am not very familiar with http and networking protocols.
EDIT: I am working on gps tracking system for my university project, I need to regularly upload the coordinates to a database via php script. | Python httplib [Multiple requests] - How long can i keep the connection open? | 1.2 | 0 | 1 | 551 |
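A sketch of reusing one httplib connection between the periodic GETs (Python 2's httplib; on Python 3 the module is http.client), reconnecting if the server has dropped it. The host, path, and query parameters are placeholders.

```python
import httplib
import time
import urllib

HOST = "tracker.example.com"  # placeholder

conn = httplib.HTTPConnection(HOST)

def send_point(lat, lon):
    """Send one coordinate pair, retrying once on a stale/broken connection."""
    global conn
    path = "/update.php?" + urllib.urlencode({"lat": lat, "lon": lon})
    try:
        conn.request("GET", path)
        return conn.getresponse().read()
    except (httplib.HTTPException, IOError):
        conn = httplib.HTTPConnection(HOST)  # open a fresh connection
        conn.request("GET", path)
        return conn.getresponse().read()

while True:
    send_point(12.34, 56.78)  # example coordinates
    time.sleep(2)
```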
20,162,566 | 2013-11-23T12:57:00.000 | 1 | 0 | 0 | 0 | python,django | 20,162,808 | 1 | true | 1 | 0 | Django seems to do one DB table per model
Actually you'd better think about it the other way round : a Django "Model" is the representation of one DB table.
But I think that means all of the registrations for every race (via a foreign key) will be in it...one table for all of them. Maybe it's not that many, but it seems messy and unclean to have all registrations for all of the races in one table.
Why ???
Races on average have about 500 people, some can be a couple thousand.
SQL databases are designed to work on much bigger datasets - think millions of rows and more. With 2000 registrations per race and a few races per year, you shouldn't have to worry too much. Now, if you have people registering for more than one race, you'd be better off with three tables:
race(race_id, name, date, ....)
person(person_id, firstname, lastname, ...)
registration(race_id, person_id, ....)
with a unique constraint on (registration.race_id, registration.person_id).
From your question I assume you have very little knowledge - if any - of relational databases, so I strongly suggest you take some time learning about proper relational database modeling. | 1 | 0 | 0 | I am slowly learning Django trying to write an app to replace the complicated PHP mess we have now. In the end what we need is pretty simple, but I am trying to wrap my head around what the best way (or "django way") is to do things. Here's what we do:
We manage races (what I'll call events) and do online registrations and payment for them. That basically means the user goes to a URL, fills out some personal info in a form and then they are in the database and sent to PayPal for payment.
So, with the current system some software creates a "race" form with the form fields I want and then behind the scenes a DB table is created with the name of the race. That table holds all of the registrations (obviously has a column for each form field plus some extras for behind-the-scenes processing). Currently I just recreate the form each year of the race, but I'm thinking with Django I am going to be cleaner and save the duplicate work and use the same "form" or "model" in Django land and just have some way to archive that data to another DB.
Anyway, here's where I need some advice: Django seems to do one DB table per model? So what I was thinking was I'd have a "Race" model with some general info about the race including some questions that will decide what form fields show up in my other model (for example, if the race has requires payment, we'll have some extra fields and logic for PayPal). All of the races will be in one DB table, but that's probably fine here...there's not going to be that many.
However, I then am thinking there'd be a "registrations" model that actually has all of those form fields in it and will be behind the actual registration form that people fill out. But I think that means all of the registrations for every race (via a foreign key) will be in it...one table for all of them. Maybe it's not that many, but it seems messy and unclean to have all registrations for all of the races in one table. Races on average have about 500 people, some can be a couple thousand. Do I just live with it or is there a better way to design this?
I'm also trying to think of flexibility...where sometimes in the past I've had to manually load some data into the table for a specific form or mass-change registrations for a particular race. Yes, I could be more specific with my queries, but it felt safer knowing I could only affect the registrations for one race because they were separate tables. Do I just create database views as some type of alternative?
In the end, I just want to make sure that this is a good design that will not slow down or be too inefficient, for example, when I will need to do sorting occasionally and also exports for each race (so that would end up needing to be queries for a specific race ID and getting everyone who registered just for that race).
Sorry for being so longwinded, I am doing the django tutorial now and I just want to be sure as I am thinking in my head how this will go, that I am going to have a decent design to grow with. Thanks so much! | Multiple DB tables with django models possible? | 1.2 | 0 | 0 | 196 |
20,164,834 | 2013-11-23T16:33:00.000 | 0 | 0 | 0 | 0 | python,tkinter,browser | 20,166,229 | 2 | true | 0 | 1 | There is no tkinter widget for displaying the contents of a web page. | 1 | 0 | 0 | I'm trying to create an application with tkinter, and I want the ability to connect it to a website and display what's there. It's sort of a web browser without the URL bar. Is it possible? If so, is there any documentation I could use? Do you have any source code I could learn from? | Connection to Webpage and Tkinter | 1.2 | 0 | 0 | 817 |
20,166,638 | 2013-11-23T19:12:00.000 | 0 | 0 | 0 | 0 | python,django,django-testing | 20,166,747 | 1 | true | 1 | 0 | You need to make sure your application appears in your settings.py INSTALLED_APPS before django.contrib.messages does. | 1 | 0 | 0 | In my django project, I'm having trouble testing my homemade app named "messages". If I write a single test in messages/tests.py and try to run it with python manage.py test messages, it will perform 74 tests from, I guess, the django.contrib.messages library.
How can I run my local app tests instead of the library's without renaming it? All other tests for my apps with other names run fine. | App can't be tested because of conflict with django.contrib.messages | 1.2 | 0 | 0 | 38
20,166,720 | 2013-11-23T19:19:00.000 | 0 | 0 | 0 | 1 | python,google-app-engine,pip,soundcloud | 20,177,193 | 2 | false | 1 | 0 | Ok, I think I figured out. What I needed to do what copy the soundcloud folder, along with the fudge, simplejson and requests folders into the root folder of my webapp. Thank you VooDooNOFX -- although you didn't directly answer the precise question, you sparked the thinking that helped me figure it out. | 1 | 0 | 0 | I want to use the soundcloud python library in a web app I am developing in Google App Engine. However, I can't find any file called "soundcloud.py" in the soundcloud library files I downloaded. When using pip install it works fine on my local computer.
What files exactly do I need to move - or what exact steps do I need to take - in order to be able to "import soundcloud" within Google App Engine.
I already tried moving all the *.py files into my main app directory, but still got this error:
import soundcloud
ImportError: No module named soundcloud | Using Soundcloud Python library in Google App Engine - what files do I need to move? | 0 | 0 | 0 | 241 |
20,167,618 | 2013-11-23T20:40:00.000 | 0 | 0 | 1 | 0 | python,python-2.7 | 42,685,562 | 2 | false | 0 | 0 | Kyle does a great job of answering. I will only add that you can reference the last position of a string with the index -1. so in your example k[-1] produces 'f' just as k[3] produces 'f'. | 2 | 2 | 0 | Is it true that the last position of a string in python comes after the last character of that string?
If it is true does it mean if k='asdf' then at index position 4 there is a ''? If so, why doesn't k[4] return '' instead of out of range error.
It has been suggested to me to try k[4:4] to see this behavior but I think the slice returns a '' because it hasn't been given anything to contain and not specifically because of presence of a '' at the end of every string. If I do k[300:782], I still get '' but `find('asdf','',300)' returns -1 so this should confirm my beliefs. | What is the last position of a string in python? | 0 | 0 | 0 | 1,304 |
20,167,618 | 2013-11-23T20:40:00.000 | 4 | 0 | 1 | 0 | python,python-2.7 | 20,167,840 | 2 | true | 0 | 0 | That is not true. The last position in k='asdf' is k[3] with is 'f'.
You are also correct that when trying to examine a slice that doesn't contain anything (k[4:4] or k[300:2345] or k[6:5]) python will give you an empty result.
'' is an empty string; it is not returning a quotation mark.
@BrenBarn is absolutely right about find | 2 | 2 | 0 | Is it true that the last position of a string in python comes after the last character of that string?
If it is true does it mean if k='asdf' then at index position 4 there is a ''? If so, why doesn't k[4] return '' instead of out of range error.
It has been suggested to me to try k[4:4] to see this behavior but I think the slice returns a '' because it hasn't been given anything to contain and not specifically because of presence of a '' at the end of every string. If I do k[300:782], I still get '' but `find('asdf','',300)' returns -1 so this should confirm my beliefs. | What is the last position of a string in python? | 1.2 | 0 | 0 | 1,304 |
20,168,198 | 2013-11-23T21:37:00.000 | 5 | 0 | 1 | 1 | python,django,python-2.7,subprocess,celery | 20,231,324 | 2 | false | 1 | 0 | If you don't want something as complex as Celery, then you can use subprocess + nohup to start long running tasks off, dump the PID to a file (check the subprocess documentation for how to do that) and then check if the PID contained in the file is still running (using ps). And if you wanted, you could write a very small 'wrapper' script which would run the task you tell it to, and if it crashes, write a 'crashed.txt' file.
One thing to note is that you should probably run commands including the close_fds=True value to the call. (so check_call(['/usr/bin/nohup', '/tasks/do_long_job.sh'], close_fds=True) ). Why? By default, all subprocesses are given access to the parent's open file descriptors, INCLUDING ports. This means that if you need to restart your web server process, while the long process is running, that the running process will keep the port open, and you won't be able to load the server up again. You can guess how I found this out. :-) | 1 | 4 | 0 | I know there are many questions similar to this one, but as far as my research has taken me, none of them answers my specific question. I hope you will take your time to help me out, as I have been struggling with this for days without finding a proper answer.
I am trying to find the best way to implement a subprocess into a Django application. To be more specific:
The process will be run from one view (asynchronously) and handled from another.
The process can run up to several hours.
Multiple instances of the same process/program should be able to run at the same time.
Other than knowing when the process is completed (or if it crashed, so it can be re-run), no communication with it is needed.
Does anyone know which way would be the best to implement this? Would any of the Python modules (such as subprocess, threads, multiprocessing, spawn) be able to achieve this or would I have to implement an external task queue such as Celery? | Best way to implement long running subprocess in Django? | 0.462117 | 0 | 0 | 1,377 |
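A rough sketch of the nohup-plus-PID-file approach from the answer above: one function launches the task detached with close_fds=True and records its PID, the other is polled from a second view to see whether it is still alive. Paths and the log file location are placeholders and error handling is minimal.

```python
import os
import subprocess

def start_long_task(cmd, pid_file):
    """Launch the task detached from the request and remember its PID."""
    proc = subprocess.Popen(
        ["/usr/bin/nohup"] + cmd,
        stdout=open("/tmp/long_task.log", "a"),
        stderr=subprocess.STDOUT,
        close_fds=True,   # don't let the child hold the web server's sockets
    )
    with open(pid_file, "w") as f:
        f.write(str(proc.pid))
    return proc.pid

def still_running(pid_file):
    """Poll from the other view: is the recorded PID still alive?"""
    try:
        pid = int(open(pid_file).read().strip())
        os.kill(pid, 0)   # signal 0 only checks the process exists
        return True
    except (IOError, ValueError, OSError):
        return False
```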
20,172,765 | 2013-11-24T08:55:00.000 | 0 | 0 | 0 | 0 | python,network-programming,client-server | 20,173,028 | 2 | false | 0 | 0 | You may use Tornado. It is asynchronic multithreading web-server framework. | 1 | 2 | 0 | I am developing a server using python but the server can communicate with only one client at a time . Even if the server establish connection with more than one clients, it can't make conversation with all clients at the same time.
One client has to wait until the started conversation ends, which may last for several minutes. This problem creates a tremendous delay for the client which hasn't started a conversation.
So, how can I let my Python server communicate with more than one client at the same time?
Thank you in advance | how can I make a server communicate with more than 1 client at the same time? | 0 | 0 | 1 | 180 |
20,173,519 | 2013-11-24T10:31:00.000 | 0 | 0 | 0 | 0 | python,optimization,pygame,bitmask | 20,182,011 | 1 | true | 0 | 1 | Okay, I worked out a fix for it that actually wasn't any of these things... Basically, I realised that I was only colliding one pixel with the stage at a time, so I used the Mask.get_at() function. Kinda annoyed this didn't occur to me before. Although, I've heard that using this can be quite slow, so if somebody would be willing to offer a faster alternative of get_at() that'd be nice. | 1 | 0 | 0 | Pygame offers a pretty neat bitmask colliding function for sprites, but it's really slow when you are comparing large images. I've got my level image, which worked fine when it was 400x240, but when I changed the resolution to (a lot) bigger, suddenly the game was unplayable, as it was so slow.
I was curious if there was a way to somehow crop the bitmask of a sprite, so it does not have to do as many calculations. Or, an alternative would be to split the whole stage sprite into different 'panels', and have the collision check for the closest one (or four or two, if he is on the edge of the panels). But I have no idea how to split an image into several sprites. Also, if you have any other suggestions, they would be appreciated.
I have seen the many places on the internet saying not to bother with bitmask level collision, because it is far too slow, and that I should use tile based collision instead. Although, I think bitmask would make it a lot more flexible, and it would give the opportunity for level destruction (like in the worms games), so I would prefer it if it was bitmask.
I think I've explained it enough not to need to post my code, but please tell me if you really need it.
Many thanks! | Pygame level bitmask checking optimisation? | 1.2 | 0 | 0 | 95 |
20,179,255 | 2013-11-24T18:35:00.000 | 1 | 0 | 0 | 0 | python,algorithm,neural-network,pybrain | 20,200,810 | 1 | false | 0 | 0 | Your idea with additional layer is good, although the problem is, that your weights in this layer have to be fixed. So in practise, you have to compute the partial derivatives of your R^2->R mapping, which can be used as the error to propagate through your network during training. Unfortunately, this may lead to the well known "vanishing gradient problem" which stopped the development of NN for many years.
In short - you can either manually compute the partial derivatives, and given expected output in R, simply feed the computed "backpropagated" errors to the network looking for R^2->R^2 mapping or as you said - create additional layer, and train it normally, but you will have to make the upper weights constant (which will require some changes in the implementation). | 1 | 0 | 1 | I am faced with this problem:
I have to build an FFNN that has to approximate an unknown function f:R^2 -> R^2. The data in my possession to check the net is a one-dimensional R vector. I know the function g:R^2->R that will map the output of the net into the space of my data. So I would use the neural network as a filter against bias in the data. But I am faced with two problems:
Firstly, how can I train my network in this way?
Secondly, I am thinking about adding an extra hidden layer that maps R^2->R and lets the net train itself to find the correct maps and then remove the extra layer. Would this algorithm be correct? Namely, would the output be the same that I was looking for? | Train a feed forward neural network indirectly | 0.197375 | 0 | 0 | 203 |
20,179,267 | 2013-11-24T18:36:00.000 | 4 | 0 | 0 | 0 | python,scikit-learn | 20,180,148 | 1 | true | 0 | 0 | There is a major conceptual diffrence between those, based on different tasks being addressed:
Regression: continuous (real-valued) target variable.
Classification: discrete target variable (classes).
For a general classification method, the term "probability of an observation being class X" may not be defined, as some classification methods, kNN for example, do not deal with probabilities.
However, for Random Forest (and some other classification methods), classification is reduced to regression of the class probability distribution. The predicted class is then taken as the argmax of the computed "probabilities". In your case, you feed the same input, you get the same result. And yes, it is ok to treat values returned by RandomForestRegressor as probabilities. | 1 | 4 | 1 | I have a data set comprising a vector of features, and a target - either 1.0 or 0.0 (representing two classes). If I fit a RandomForestRegressor and call its predict function, is it equivalent to using RandomForestClassifier.predict_proba()?
In other words if the target is 1.0 or 0.0 does RandomForestRegressor output probabilities?
I think so, and the results I a m getting suggest so, but I would like to get a second opinion...
Thanks
Weasel | using RandomForestClassifier.predict_proba vs RandomForestRegressor.predict | 1.2 | 0 | 0 | 4,377 |
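A small experiment along the lines of the answer, fitting both estimators on the same 0/1 target and comparing RandomForestClassifier.predict_proba with RandomForestRegressor.predict; the two columns typically agree very closely (they coincide exactly only if the underlying trees happen to split identically).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

X, y = make_classification(n_samples=500, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
reg = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y.astype(float))

p_clf = clf.predict_proba(X)[:, 1]   # probability of class 1
p_reg = reg.predict(X)               # mean of 0/1 leaf values across trees

print(np.corrcoef(p_clf, p_reg)[0, 1])   # typically very close to 1
print(np.abs(p_clf - p_reg).mean())      # small, usually non-zero, difference
```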
20,180,272 | 2013-11-24T20:02:00.000 | 4 | 0 | 1 | 0 | python,pygame,file-extension | 20,180,327 | 1 | true | 0 | 0 | There is no real "usual extension", since there are so many different formats. Take a look at Minecraft for instance - it doesn't even have one save file per world, but a save directory.
Inside of this directory, it uses different file extensions (.dat, .mca, .json). Another file extension would be .sav or .mgz as used by Age of Empires II.
So, just use whatever you feel like. | 1 | 2 | 0 | I'm currently making a game in Python in which you can save your maps/worlds. I was wondering what extension is generally used on a file which saves the world's terrain, stats, players, etc. | What file extension is usually on a game's save file? | 1.2 | 0 | 0 | 4,028
20,180,594 | 2013-11-24T20:30:00.000 | 1 | 0 | 0 | 0 | python,pygame,sprite,collision | 20,197,302 | 5 | false | 0 | 1 | Like Furas has said, no, there is not way to get side collisions in Pygame past the point system he set up. And even that one wont give you what you want, because you can never be sure which direction the collision happened when dealing with rows, columns or corners of Rectangles.
This is why most tutorials recommend saving your sprite's initial direction, then moving in the opposite direction in case of a collision. | 1 | 6 | 0 | Is there a way in pygame to look for a collision between a particular side of a sprite and a particular side of another sprite? For example, if the top of sprite A collides with the bottom of sprite B, return True.
I am certain there is a way to do this, but I can't find any particular method in the documentation.
Thanks! | Pygame: Collision by Sides of Sprite | 0.039979 | 0 | 0 | 9,637 |
20,180,594 | 2013-11-24T20:30:00.000 | 0 | 0 | 0 | 0 | python,pygame,sprite,collision | 58,675,517 | 5 | false | 0 | 1 | For objectA give the object this method:
def is_collided_with(self, sprite):
    return self.rect.colliderect(sprite.rect)
This return statement returns either True or False
then in the main loop for collisions just do:
if objectA.is_collided_with(ObjectB):
Collision happened! | 2 | 6 | 0 | Is there a way in pygame to look for a collision between the a particular side of a sprite and a particular side of another sprite in pygame? For example, if the top of sprite A collides with the bottom of Sprite B, return True.
I am certain there is a way to do this, but I can't find any particular method in the documentation.
Thanks! | Pygame: Collision by Sides of Sprite | 0 | 0 | 0 | 9,637 |
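Building on the colliderect idea in the answer above, one common workaround for side detection is to infer the side from the smallest overlap between the two rects once a collision is found. This helper is illustrative, not a pygame API.

```python
def collision_side(a_rect, b_rect):
    """Return 'top', 'bottom', 'left' or 'right' of a_rect hit by b_rect,
    or None if the rects do not overlap."""
    if not a_rect.colliderect(b_rect):
        return None
    overlaps = {
        "top": b_rect.bottom - a_rect.top,
        "bottom": a_rect.bottom - b_rect.top,
        "left": b_rect.right - a_rect.left,
        "right": a_rect.right - b_rect.left,
    }
    # The side with the smallest positive overlap is the most likely contact side.
    return min(overlaps, key=overlaps.get)
```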
20,182,350 | 2013-11-24T23:13:00.000 | 1 | 0 | 0 | 0 | python,tkinter,raspberry-pi,desktop,shortcut | 62,213,416 | 4 | false | 0 | 1 | Besides creating executable file other option is create simple .bat file:
Open notepad
Enter "C:\ProgramData\Anaconda3\python.exe" "C:\Users\Your ID\script.py"
First part is path to python.exe, second to your python script
Save the file as a .bat file, e.g. "open_program.bat"
Now simply double click on saved .bat file icon should open your script. | 2 | 3 | 0 | I have written a python script with a Tkinter GUI. I would like to create a desktop icon that will execute this script so that the end-user (not myself) will be able to double-click the icon and have the GUI load, rather than 'run' the script from the terminal or python shell and then have to F5 from there.
Is there a way to do this? I have googled many arrangements of my question but most answers seem to be normal python scripts, not ones which are Tkinter based.
I am using a Raspberry Pi with Wheezy and Python 2.7
Thanks in advance. | Create a desktop icon for a Tkinter script | 0.049958 | 0 | 0 | 7,292 |
20,182,350 | 2013-11-24T23:13:00.000 | -1 | 0 | 0 | 0 | python,tkinter,raspberry-pi,desktop,shortcut | 69,898,319 | 4 | false | 0 | 1 | You can save the script as a .pyw file so the user can click on the file and the GUi would open | 2 | 3 | 0 | I have written a python script with a Tkinter GUI. I would like to create a desktop icon that will execute this script so that the end-user (not myself) will be able to double-click the icon and have the GUI load, rather than 'run' the script from the terminal or python shell and then have to F5 from there.
Is there a way to do this? I have googled many arrangements of my question but most answers seem to be normal python scripts, not ones which are Tkinter based.
I am using a Raspberry Pi with Wheezy and Python 2.7
Thanks in advance. | Create a desktop icon for a Tkinter script | -0.049958 | 0 | 0 | 7,292 |
20,184,149 | 2013-11-25T03:17:00.000 | 2 | 0 | 1 | 0 | python,multithreading,ctypes | 20,184,241 | 1 | true | 0 | 0 | It's unclear what you mean by "a problem". The library must acquire the GIL before calling back into Python, and because of the GIL only one thread at a time can execute Python-level code. But there's nothing that requires the library to wait for the callback to return - it can continue doing as much as it likes in its own threads. Whether that's semantically correct depends on knowing exact details of what the library is doing. | 1 | 0 | 0 | If I call a multithreaded shared-library and give it a set of Python callbacks, it's correct to assume that the GIL will still be a problem while the Python is executing, correct?
Dustin | Python, ctypes, and parallelization | 1.2 | 0 | 0 | 137 |
20,184,994 | 2013-11-25T04:53:00.000 | 5 | 0 | 0 | 0 | ipython,ipython-notebook | 30,234,937 | 3 | false | 0 | 0 | %%HTML
<style>
div.prompt {display:none}
</style>
This will hide both In and Out prompts
Note that this is only in your browser, the notebook itself isn't modified of course, and nbconvert will work just the same as before.
In case you want this in the nbconverted code as well, just put the <style>div.prompt {display:none}</style> in a Raw NBConvert cell. | 1 | 8 | 1 | I am using nbconvert to produce something as close as possible to a polished journal article.
I have successfully hidden input code using a custom nbconvert template. The doc is now looking very nice.
But I don't know how to suppress the bright red 'out[x]' statement in the top left corner of the output cells. Anyone know of any settings or hacks that are able to remove this also ?
Thanks,
John | ipython notebook nbconvert - how to remove red 'out[N]' text in top left hand corner of cell output? | 0.321513 | 0 | 0 | 5,150 |
20,186,344 | 2013-11-25T06:52:00.000 | 4 | 0 | 1 | 0 | python,import,workflow,ipython | 41,203,363 | 10 | false | 0 | 0 | The above mentioned comments are very useful but they are a bit difficult to implement. Below steps you can try, I also tried it and it worked:
Download that file from your notebook in PY file format (You can find that option in File tab).
Now copy that downloaded file into the working directory of Jupyter Notebook
You are now ready to use it. Just import .PY File into the ipynb file | 2 | 141 | 0 | Interactive Python (ipython) is simply amazing, especially as you are piecing things together on the fly... and does it in such a way that it is easy to go back.
However, what seems to be interesting is the use-case of having multiple ipython notebooks (ipynb files). It apparently seems like a notebook is NOT supposed to have a relationship with other notebooks, which makes sense, except that I would love to import other ipynb files.
The only workaround I see is converting my *.ipynb files into *.py files, which then can be imported into my notebook. Having one file hold everything in a project is a bit weird, especially if I want to really push for code-reuse (isn't that a core tenet of python?).
Am I missing something? Is this not a supported use case of ipython notebooks? Is there another solution I can be using for this import of an ipynb file into another notebook? I'd love to continue to use ipynb, but it's really messing up my workflow right now :( | Importing an ipynb file from another ipynb file? | 0.07983 | 0 | 0 | 190,732 |
20,186,344 | 2013-11-25T06:52:00.000 | 5 | 0 | 1 | 0 | python,import,workflow,ipython | 54,937,082 | 10 | false | 0 | 0 | You can use import nbimporter then import notebookName | 2 | 141 | 0 | Interactive Python (ipython) is simply amazing, especially as you are piecing things together on the fly... and does it in such a way that it is easy to go back.
However, what seems to be interesting is the use-case of having multiple ipython notebooks (ipynb files). It apparently seems like a notebook is NOT supposed to have a relationship with other notebooks, which makes sense, except that I would love to import other ipynb files.
The only workaround I see is converting my *.ipynb files into *.py files, which then can be imported into my notebook. Having one file hold everything in a project is a bit weird, especially if I want to really push for code-reuse (isn't that a core tenet of python?).
Am I missing something? Is this not a supported use case of ipython notebooks? Is there another solution I can be using for this import of an ipynb file into another notebook? I'd love to continue to use ipynb, but it's really messing up my workflow right now :( | Importing an ipynb file from another ipynb file? | 0.099668 | 0 | 0 | 190,732 |
20,186,457 | 2013-11-25T07:01:00.000 | 0 | 0 | 1 | 0 | python | 20,633,102 | 2 | false | 0 | 0 | Using type as the object in validictory, and mentioning the key (as the field name) for the properties dictionary, the field which failed validation can be identified | 1 | 1 | 0 | How to identify in python , where the validation failed using python's "validictory" package,?
"Failed to validate field '_data' list schema"- Is there a way to find out which field validation failed? | Python Validictory- Field Validation | 0 | 0 | 0 | 488 |
20,188,563 | 2013-11-25T09:29:00.000 | 1 | 0 | 1 | 0 | python,python-2.7,ipython | 20,188,735 | 1 | true | 0 | 0 | Yes, ipython can do this. Native python also has this feature. In unix you can simply write python for interactive shell session. Ipython a higher level instrument - it can store input from your previous session and smth else. If you like GUI, you can install ipython-notebook for interactive programming in you browser. | 1 | 1 | 0 | With pry in Ruby, you are able to start an interactive shell session (also called a read-evaluate-print loop) when an uncaught exception occurs, greatly speeding up debugging.
I've found that for most things, the equivalent to pry in Python is ipython. Is there a way to do enable the aforementioned feature with ipython? Alternatively, is there another way to accomplish this within Python? | Start IPython session on uncaught error | 1.2 | 0 | 0 | 98 |
20,188,586 | 2013-11-25T09:30:00.000 | 0 | 0 | 0 | 0 | python,django,apache | 20,192,605 | 3 | false | 1 | 0 | This error comes because of filename or file contents cotains garbage collection or in another language (except english)..
So you can add unicode() for this, or check the NLTK library for handling this situation. | 1 | 0 | 0 | I deployed a project on Webfaction with Django. All went fine until recently, when all of a sudden I started to get this error: UnicodeEncodeError: 'ascii' codec can't encode characters in position 64-68: ordinal not in range(128)
The url is with Russian characters. But the matter is, when I restart Apache, there is no error any more. So it is kind of difficult for me to pin the error. | UnicodeEncodeError with django: inconsistent behavior | 0 | 0 | 0 | 372 |
20,193,144 | 2013-11-25T12:29:00.000 | 0 | 0 | 0 | 0 | php,python,mysql,python-2.7 | 20,193,357 | 3 | false | 0 | 0 | As i understood you are able to connect only with "server itself (localhost)" so to connect from any ip do this:
mysql> CREATE USER 'myname'@'%.mydomain.com' IDENTIFIED BY 'mypass';
I agree with @Daniel no PHP script needed... | 2 | 0 | 0 | I have a MYSQL database with users table, and I want to make a python application which allows me to login to that database with the IP, pass, username and everything hidden. The thing is, the only IP which is allowed to connect to that mysql database, is the server itself (localhost).
How do I make a connection to that database from a user's computer, and also be able to retrieve data from it securely? Can I build some PHP script on the server that is able to take parameters and retrieve data to that user? | Secure MySQL Connection in Python | 0 | 1 | 0 | 143 |
20,193,144 | 2013-11-25T12:29:00.000 | 1 | 0 | 0 | 0 | php,python,mysql,python-2.7 | 20,193,562 | 3 | false | 0 | 0 | You should not make a connection from the user's computer. By default, most database configurations are done to allow only requests from the same server (localhost) to access the database.
What you will need is this:
A server side script such as Python, PHP, Perl, Ruby, etc to access the database. The script will be on the server, and as such, it will access the database locally
Send a web request from the user's computer using Python, Perl, or any programming language to the server side script as described above.
So, the application on the user's computer sends a request to the script on the server. The script connects to the database locally, accesses the data, and sends it back to the application. The application can then use the data as needed.
That is basically, what you are trying to achieve.
Hope the explanation is clear and it helps. | 2 | 0 | 0 | I have a MYSQL database with users table, and I want to make a python application which allows me to login to that database with the IP, pass, username and everything hidden. The thing is, the only IP which is allowed to connect to that mysql database, is the server itself (localhost).
How do I make a connection to that database from a user's computer, and also be able to retrieve data from it securely? Can I build some PHP script on the server that is able to take parameters and retrieve data to that user? | Secure MySQL Connection in Python | 0.066568 | 1 | 0 | 143 |
20,195,265 | 2013-11-25T14:13:00.000 | 0 | 0 | 0 | 0 | python,django | 20,195,470 | 2 | false | 1 | 0 | If those different sites use the same controller (file.py) then the packages must have different name. | 1 | 1 | 0 | I have a problem, where I have two packages that I use quite often, in different sites, with the same name. Namely, they are django-simple-captcha and django-recaptcha. They are both just called captcha.
In addition, these are shared between several people who may work on them, so I can't just have them as different names without it messing up between people.
Is there some way to solve this? | Multiple external packages with same name | 0 | 0 | 0 | 78 |
20,196,049 | 2013-11-25T14:51:00.000 | 0 | 0 | 0 | 1 | python,scheduled-tasks,scp | 70,087,563 | 8 | false | 0 | 0 | I had this issue before. I was able to run the task manually in Windows Task Scheduler, but not automatically. I remembered that there was a change in the time made by another user, maybe this change made the task scheduler to error out. I am not sure. Therefore, I created another task with a different name, for the same script, and the script worked automatically. Try to create a test task running the same script. Hopefully that works! | 5 | 17 | 0 | Not sure if anyone has run into this, but I'll take suggestions for troubleshooting and/or alternative methods.
I have a Windows 2008 server on which I am running several scheduled tasks. One of those tasks is a python script that uses pscp to log into a linux box, checks for new files and if there is anything new, copies them down to a local directory on the C: drive. I've put some logging into the script at key points as well and I'm using logging.basicConfig(level=DEBUG).
I built the command using a variable, command = 'pscp -pw xxxx name@ip:/ c:\local_dir' and then I use subprocess.call(command) to execute the command.
Now here's the weird part. If I run the script manually from the command line, it works fine. New files are downloaded and processed. However, if the Task Scheduler runs the script, no new files are downloaded. The script is running under the same user, but yet yields different results.
According to the log files created by the script and on the linux box, the script successfully logs into the linux box. However, no files are downloaded despite there being new files. Again, when I run it via the command line, files are downloaded.
Any ideas? suggestions, alternative methods?
Thanks. | Problems running python script by windows task scheduler that does pscp | 0 | 0 | 0 | 39,296 |
20,196,049 | 2013-11-25T14:51:00.000 | 0 | 0 | 0 | 1 | python,scheduled-tasks,scp | 70,039,404 | 8 | false | 0 | 0 | Just leaving this for posterity: A similar issue I faced was resolved by using the UNC (\10.x.xx.xx\Folder\xxx)path everywhere in my .bat and .py scripts instead of the letter assigned to the drive (\K:\Folder\xxx). | 5 | 17 | 0 | Not sure if anyone has run into this, but I'll take suggestions for troubleshooting and/or alternative methods.
I have a Windows 2008 server on which I am running several scheduled tasks. One of those tasks is a python script that uses pscp to log into a linux box, checks for new files and if there is anything new, copies them down to a local directory on the C: drive. I've put some logging into the script at key points as well and I'm using logging.basicConfig(level=DEBUG).
I built the command using a variable, command = 'pscp -pw xxxx name@ip:/ c:\local_dir' and then I use subprocess.call(command) to execute the command.
Now here's the weird part. If I run the script manually from the command line, it works fine. New files are downloaded and processed. However, if the Task Scheduler runs the script, no new files are downloaded. The script is running under the same user, but yet yields different results.
According to the log files created by the script and on the linux box, the script successfully logs into the linux box. However, no files are downloaded despite there being new files. Again, when I run it via the command line, files are downloaded.
Any ideas? suggestions, alternative methods?
Thanks. | Problems running python script by windows task scheduler that does pscp | 0 | 0 | 0 | 39,296 |
20,196,049 | 2013-11-25T14:51:00.000 | 0 | 0 | 0 | 1 | python,scheduled-tasks,scp | 52,639,496 | 8 | false | 0 | 0 | Create a batch file, add your Python script to it, and then schedule that batch file; it will work.
Example: suppose your Python script is c:\abhishek\script\merun.py
First change to that directory with the cd command, so your batch file would look like:
cd c:\abhishek\script
python merun.py
It works for me. | 5 | 17 | 0 | Not sure if anyone has run into this, but I'll take suggestions for troubleshooting and/or alternative methods.
I have a Windows 2008 server on which I am running several scheduled tasks. One of those tasks is a python script that uses pscp to log into a linux box, checks for new files and if there is anything new, copies them down to a local directory on the C: drive. I've put some logging into the script at key points as well and I'm using logging.basicConfig(level=DEBUG).
I built the command using a variable, command = 'pscp -pw xxxx name@ip:/ c:\local_dir' and then I use subprocess.call(command) to execute the command.
Now here's the weird part. If I run the script manually from the command line, it works fine. New files are downloaded and processed. However, if the Task Scheduler runs the script, no new files are downloaded. The script is running under the same user, but yet yields different results.
According to the log files created by the script and on the linux box, the script successfully logs into the linux box. However, no files are downloaded despite there being new files. Again, when I run it via the command line, files are downloaded.
Any ideas? suggestions, alternative methods?
Thanks. | Problems running python script by windows task scheduler that does pscp | 0 | 0 | 0 | 39,296 |
20,196,049 | 2013-11-25T14:51:00.000 | 19 | 0 | 0 | 1 | python,scheduled-tasks,scp | 21,116,923 | 8 | true | 0 | 0 | I had the same issue when trying to open an MS Access database on a Linux VM. Running the script at the Windows 7 command prompt worked but running it in Task Scheduler didn't. With Task Scheduler it would find the database and verify it existed but wouldn't return the tables within it.
The solution was to have Task Scheduler run cmd as the Program/Script with the arguments /c python C:\path\to\script.py (under Add arguments (optional)).
I can't tell you why this works but it solved my problem. | 5 | 17 | 0 | Not sure if anyone has run into this, but I'll take suggestions for troubleshooting and/or alternative methods.
I have a Windows 2008 server on which I am running several scheduled tasks. One of those tasks is a python script that uses pscp to log into a linux box, checks for new files and if there is anything new, copies them down to a local directory on the C: drive. I've put some logging into the script at key points as well and I'm using logging.basicConfig(level=DEBUG).
I built the command using a variable, command = 'pscp -pw xxxx name@ip:/ c:\local_dir' and then I use subprocess.call(command) to execute the command.
Now here's the weird part. If I run the script manually from the command line, it works fine. New files are downloaded and processed. However, if the Task Scheduler runs the script, no new files are downloaded. The script is running under the same user, but yet yields different results.
According to the log files created by the script and on the linux box, the script successfully logs into the linux box. However, no files are downloaded despite there being new files. Again, when I run it via the command line, files are downloaded.
Any ideas? suggestions, alternative methods?
Thanks. | Problems running python script by windows task scheduler that does pscp | 1.2 | 0 | 0 | 39,296 |
20,196,049 | 2013-11-25T14:51:00.000 | 2 | 0 | 0 | 1 | python,scheduled-tasks,scp | 21,117,293 | 8 | false | 0 | 0 | Brad's answer is right. Subprocess needs the shell context to work and the task manager can launch python without that. Another way to do it is to make a batch file that is launched by the task scheduler that calls python c:\path\to\script.py etc. The only difference to this is that if you run into a script that has a call to os.getcwd() you will always get the root where the script is but you get something else when you make the call to cmd from task scheduler. | 5 | 17 | 0 | Not sure if anyone has run into this, but I'll take suggestions for troubleshooting and/or alternative methods.
I have a Windows 2008 server on which I am running several scheduled tasks. One of those tasks is a python script that uses pscp to log into a linux box, checks for new files and if there is anything new, copies them down to a local directory on the C: drive. I've put some logging into the script at key points as well and I'm using logging.basicConfig(level=DEBUG).
I built the command using a variable, command = 'pscp -pw xxxx name@ip:/ c:\local_dir' and then I use subprocess.call(command) to execute the command.
Now here's the weird part. If I run the script manually from the command line, it works fine. New files are downloaded and processed. However, if the Task Scheduler runs the script, no new files are downloaded. The script is running under the same user, but yet yields different results.
According to the log files created by the script and on the linux box, the script successfully logs into the linux box. However, no files are downloaded despite there being new files. Again, when I run it via the command line, files are downloaded.
Any ideas? suggestions, alternative methods?
Thanks. | Problems running python script by windows task scheduler that does pscp | 0.049958 | 0 | 0 | 39,296 |
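A minimal Python sketch of making the script independent of the scheduler's shell context, as the answers above suggest: use absolute paths (the pscp location, password, and remote path below are hypothetical) and log the exit code and working directory so scheduled runs can be compared against manual ones.
import logging
import os
import subprocess

logging.basicConfig(filename=r"C:\local_dir\pscp_run.log", level=logging.DEBUG)

PSCP = r"C:\Program Files\PuTTY\pscp.exe"   # hypothetical install location
DEST = r"C:\local_dir"                      # local target directory

# Passing a list avoids shell quoting issues; cwd pins the working directory
# so os.getcwd() is the same whether run by hand or by Task Scheduler.
cmd = [PSCP, "-pw", "xxxx", "name@ip:/newfiles/*", DEST]
rc = subprocess.call(cmd, cwd=DEST)
logging.debug("pscp exit code %s, cwd %s", rc, os.getcwd())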
20,197,259 | 2013-11-25T15:49:00.000 | 5 | 1 | 0 | 1 | python,uwsgi | 20,199,153 | 1 | true | 1 | 0 | upstart is only a process manager, while the uWSGI master has access to many memory areas of the workers (well, it is the opposite, in fact), so it can truly monitor worker behaviour; in addition, it allows graceful reloading, exports statistics, and does dozens of other things. Running without it is not a good idea from various points of view. | 1 | 11 | 0 | What are the benefits of running uWSGI in master mode if I'm only running a single app? Does master mode offer process management benefits that make it more reliable than, say, running via Upstart? | What is uWSGI master mode? | 1.2 | 0 | 0 | 4,022
20,198,589 | 2013-11-25T16:49:00.000 | 2 | 0 | 1 | 0 | python,ptvs | 20,203,996 | 1 | true | 0 | 0 | When using the regular (non-debug) Python Interactive window, you can actually attach VS to the python.exe process that it is running by using Debug -> Attach to Process. Once that is done, if the interactive window does something to e.g. hit a breakpoint, the debugger will hit on that breakpoint.
The tricky part is loading the code from a file in such a way that breakpoints are resolved. In particular, $load REPL command will not work because it just reads the file and evals it in the REPL line by line, without preserving the original file context. What you need is to load your script using Python facilities - e.g. import, or open+exec.
There are also some gotchas there - e.g. the REPL window will become unresponsive whenever you are paused on a breakpoint. | 1 | 1 | 0 | When I'm developing in Python I often want to debug a specific method, in which case it makes sense to call the method from the interactive console or debug interactive console. However, when a method is called from the interactive windows in PTVS, it doesn't stop at the break points in said method.
If it's possible, please tell me how to do it. If not, I would like to request this feature, and also to know if there is any quicker way to debug a specific method than calling it from the main script.
I'm using PTVS 2.0 RC in Visual Studio 2013 Ultimate | Is it possible to debug a method called from the interactive window in PTVS? | 1.2 | 0 | 0 | 1,144 |
20,198,826 | 2013-11-25T17:00:00.000 | 0 | 0 | 0 | 0 | android,python,eclipse | 20,198,995 | 2 | false | 1 | 0 | You don't seem to have a clue about how the android app and the web app are supposed to work together...
You can (theoretically) use just any language and techno you find appropriate for the web app since the communication between the android app and the web app will be http requests / responses.
Also, you can use whatever code editor you want to write Python code, as long as it (the code editor) supports Python. | 1 | 0 | 0 | I have a small android application made in eclipse.
Now i need to build a web server with python so they could work together.
Should I make a new Python project and then somehow link it to my application?
Or
Should i use jython and rebuild the app in a jython project?
I have used Visual Studio before, and Eclipse is not familiar ground for me, so I would really appreciate clear answers. | Making android app work with a python web server | 0 | 0 | 0 | 682
20,200,295 | 2013-11-25T18:18:00.000 | 5 | 0 | 1 | 0 | c++,python | 20,200,392 | 5 | false | 0 | 0 | looking at 110 (6 decimal)
The Most Significant bit is 100 (4 decimal) // -- Note that this is always a power of 2
Create a mask: one less than the MSB is 011 (3 decimal)
Mask off the highest bit using bitwise-and: 110 & 011 = 10 (2 decimal)
Calculating the MSB (Most Significant Bit) has been handled here and elsewhere quite often | 2 | 5 | 0 | Is there an efficient way to remove the first bit of a number in C++ / Python, assuming you don't know how large the number is or its datatype?
I know in Python I can do it by getting the bin(n), truncating the string by 1, and then recasting it to an int, but I am curious if there is a more "mathematical" way to do this.
e.g. say the number is 6, which is 110 in binary. Chop the first bit and it becomes 10, or 2. | Removing first bit | 0.197375 | 0 | 0 | 9,490 |
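A small Python sketch of the mask-off-the-MSB idea above (assumes n > 0):
def clear_top_bit(n):
    # The most significant bit of n sits at position n.bit_length() - 1,
    # so shifting 1 up to it gives the MSB itself (always a power of 2).
    msb = 1 << (n.bit_length() - 1)
    return n & (msb - 1)          # bitwise-and with a mask of all lower bits

print(clear_top_bit(6))   # 0b110 -> 0b10, prints 2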
20,200,295 | 2013-11-25T18:18:00.000 | 2 | 0 | 1 | 0 | c++,python | 20,200,463 | 5 | false | 0 | 0 | Well, you could create a loop in which you would double some variable (say x) in each iteration and then check whether this variable is greater than your number. If it is, divide it by two and subtract from your number. For example, if your number is 11:
-first iteration: x=1<11, so continue
-second iteration: x =2<11, so continue
-third iteration: x=4<11, so continue
-fourth iteration: x=8<11, so continue
-fifth iteration: x=16>11, so divide x by two: x=8. Then subtract 8 from your number and get answer:
11-8=3. | 2 | 5 | 0 | Is there an efficient way to remove the first bit of a number in C++ / Python, assuming you don't know how large the number is or its datatype?
I know in Python I can do it by getting the bin(n), truncating the string by 1, and then recasting it to an int, but I am curious if there is a more "mathematical" way to do this.
e.g. say the number is 6, which is 110 in binary. Chop the first bit and it becomes 10, or 2. | Removing first bit | 0.07983 | 0 | 0 | 9,490 |
20,200,556 | 2013-11-25T18:33:00.000 | 0 | 0 | 1 | 0 | python,django,virtualbox,vagrant | 62,098,699 | 3 | false | 1 | 0 | One solution I found was to mount my shared folder with cifs into the VM. That seems to work flawlessly. I did not found a solution for the vboxsf. | 1 | 0 | 0 | I have a django-based application that I'm running from a virtualbox-shared folder. When starting using 'runserver' I get an error indicating that a module could not be found. After copying the same exact code to a directory on the local filesystem, it starts and runs as expected.
Anyone seen anything like this when working with virtualbox and python?
It appears that module resolution is working differently when python is run from the mounted shared folder vs. the local folder, but I can't find a smoking gun that indicates whether or not it's caused by how that folder is mounted or python.
Thanks! | Running python from a mounted filesystem | 0 | 0 | 0 | 417 |
20,202,179 | 2013-11-25T20:04:00.000 | 2 | 0 | 0 | 0 | python,r | 20,208,600 | 1 | false | 0 | 0 | Counts of pairs are just products of counts of singletons.
This takes 5 seconds on my year old MacBook Pro using R:
Generate a matrix of 200000 rows and 180 columns whose elements are digits:
mat <- matrix(sample(0:9,180*200000,repl=T),nc=180)
Now table digits in each row:
tab <- sapply( 0:9, function(x) rowSums( mat==x ))
Now find the pair counts in each row:
cp <- combn( 0:9, 2, function(x) tab[,1+x[1] ] * tab[,1+x[2] ])
Sum the rows:
colSums(cp)
Verify the result for the first row:
tab2 <- table( matrix(mat[1,], nr=180, nc=180), matrix(mat[1,], nr=180, nc=180, byrow=TRUE))
all( tab2[ lower.tri(tab2)] == cp[1,] ) | 1 | 1 | 1 | I have a huge database of 180 columns and 200,000 rows. To illustrate in a better way, I have a matrix of 180 x 200000. Each entry of the matrix is a single-digit number. I need to find their co-occurrence count.
For example I have a data of 5 columns having values 1,2,3,4,5. I need to find the number of times (1,2),(1,3),(1,4),(1,5),(2,3),(2,4),(2,5),(3,4),(3,5),(4,5) have occurred in the database. Can you please suggest me an approach to this problem?
I have an exposure to R and python. So any suggestion using those will really help.
Can this also be done using AWS map reducer? Any help or pointers on those lines would also be helpful. | Can co-occurance of word be calculated using R/ python/ Map reducer? | 0.379949 | 0 | 0 | 188 |
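A rough Python equivalent of the same idea — per-row singleton counts multiplied pairwise, then summed over all rows — with toy data standing in for the real 200,000 x 180 table:
from collections import Counter
from itertools import combinations
import random

rows = [[random.randint(0, 9) for _ in range(180)] for _ in range(1000)]  # toy data

pair_counts = Counter()
for row in rows:
    counts = Counter(row)                              # singleton counts in this row
    for a, b in combinations(sorted(counts), 2):
        pair_counts[(a, b)] += counts[a] * counts[b]   # co-occurrences = product

print(pair_counts[(1, 2)])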
20,205,358 | 2013-11-25T23:08:00.000 | 2 | 1 | 0 | 0 | c++,python,html,opencv | 35,518,047 | 4 | false | 1 | 0 | Under lab conditions you send full images
You seem to be under lab conditions, so there is a simplistic, yet usable solution, just stream PNG's in Base64 using Websockets. On the client side (web browser) you just receive the base64 images and directly load them into the src of an <img>. It works for lab scenarios very well, albeit slow. | 1 | 22 | 0 | I am making a robot that will have a webcam on it to provide some simple object detection. For right now, I would like to simply stream the video to a webpage hosted on the robot and be able to view it from another device. I have written a simple test script in Python ( I will eventually move to C++, my language of choice) which can get a stream from my webcam, and then do whatever I need with it from there. The problem then, is that I can't write the video to a file while the app is running, it only writes the file after I quit the script. I already have a webserver running, and I can write the basic code in HTML to host a video from a file as well, and all of that works.
To summarize:
Is openCV2 in Python and/or C++ capable of livestreaming video using only openCV?
If not, what library would you recommend that I try to take a CV capture object or Mat object and write it to a stream that I can then put on a webpage?
In HTML, is the tag a good idea to stream video with?
Thank you very much for the advice, I can use all the pointers* I can get!
If you need something clarified/code posted/explanations further than what I have given, please ask and I will do so! | How do I stream an openCV video to an HTML webpage? | 0.099668 | 0 | 0 | 32,863 |
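A minimal sketch of the base64-over-websocket idea from the answer above; it only shows grabbing one frame with OpenCV and turning it into a string that could be pushed to the browser (the websocket server itself is whatever library you already use):
import base64
import cv2

cap = cv2.VideoCapture(0)                  # first webcam
ok, frame = cap.read()                     # grab a single frame
if ok:
    ok, buf = cv2.imencode(".png", frame)  # in-memory PNG encoding
    b64 = base64.b64encode(buf.tobytes())
    # Send b64 over the websocket; on the client drop it into
    # <img src="data:image/png;base64,...">.
cap.release()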
20,207,039 | 2013-11-26T01:41:00.000 | 1 | 0 | 1 | 0 | python,python-sphinx,python-import,pycuda | 20,207,085 | 2 | false | 0 | 0 | The usual method I've seen is to have a module-level function like foo.init() that sets up the GPU/display/whatever that you need at runtime but don't want automatically initialized on import.
You might also consider exposing initialization options here: what if I have 2 CUDA-capable GPUs, but only want to use one of them? | 1 | 1 | 0 | I'm writing a Python package that does GPU computing using the PyCUDA library. PyCUDA needs to initialize a GPU device (usually by importing pycuda.autoinit) before any of its submodules can be imported.
In my own modules I import whatever submodules and functions I need from PyCUDA, which means that my own modules are not importable without first initializing PyCUDA. That's fine mostly, because my package does nothing useful without a GPU present. However, now I want to write documentation and Sphinx Autodoc needs to import my package to read the docstrings. It works fine if I put import pycuda.autoinit into docs/conf.py, but I would like for the documentation to be buildable on machines that don't have an NVIDIA GPU such as my own laptop or readthedocs.org.
What's the most elegant way to defer the import of my dependencies such that I can import my own submodules on machines that don't have all the dependencies installed? | How to make my package importable without initializing the GPU | 0.099668 | 0 | 0 | 262
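One hedged sketch of the foo.init() idea (module layout and names are made up): keep every PyCUDA import inside functions, so merely importing the package — e.g. for Sphinx autodoc — touches no GPU.
# mypackage/gpu.py  (hypothetical layout)
_initialized = False

def init():
    """Initialize the GPU on first use instead of at import time."""
    global _initialized
    if not _initialized:
        import pycuda.autoinit          # deferred: nothing GPU-related happens on package import
        _initialized = True

def do_work(data):
    init()                              # cheap no-op after the first call
    import pycuda.gpuarray as gpuarray  # also deferred for the same reason
    return gpuarray.to_gpu(data).get()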
20,207,499 | 2013-11-26T02:36:00.000 | 0 | 0 | 0 | 1 | python,debian,openerp,google-compute-engine | 23,682,731 | 2 | false | 1 | 0 | you could limit access to your OpenERP instance by specifying --allowed_ip_sources="x.x.x.x" the IP or the CIDR range from where you expect the application to be accessed.
Additionally limit access of 8060 port only to your OpenERP instance, by tagging the instance as say ERP and apply --target_tags="ERP" to limit traffic from your source IP range to hit only the specific ERP instance. | 1 | 2 | 0 | I already install OpenERP and PostgreSQL in google compute engine.
Using Debian 7. When I check with ifconfig as the root user, I get just 2 IP addresses:
127.0.0.1 and my internal IP address. My external/public IP can't be detected by Debian 7.
I use an ephemeral IP address for my external IP.
I already tried running the OpenERP service using 127.0.0.1:8069 and my internal IP 10.240.226.xxx.
I can't access it from my external IP 8.34.xxx.xx:8069.
Please give me advice to fix this problem. Also, where can I contact or find Google "Help & Support" or submit a "ticket support", besides using stackoverflow and google group? | Google Compute Engine OpenERP | 0 | 0 | 0 | 513
20,207,882 | 2013-11-26T03:19:00.000 | 3 | 1 | 0 | 0 | python,django | 20,207,935 | 2 | true | 1 | 0 | can you execute Python programs from .pyc files where there is no .py file?
Yes. Simply place the .pyc file wherever the .py file would normally be used (except into your text editor, of course).
If yes, is this also possible in Django? How?
No difference. The interpreter handles the files the same way regardless of what framework is being used.
Is it a good way to make web apps "almost closed source"?
Not really. Decompiling compiled Python bytecode is trivial. | 1 | 1 | 0 | So I've been entering the world of Python just recently, and directed it to quick, personal programs made for myself and professional use for web application design with Django. I've taken a few introductory Python tutorials and most Django tutorials, and have been reading Python documentation whenever possible.
I've recently seen that .pyc files are just bytecode (compiled) .py scripts, and don't make the Python language any faster, only lighter and non-human-readable. However, in a medium-low traffic site (let's say, 95% of web sites), this is negligible in difference from PHP, for example (and I find Python to be thousands of times more productive).
The question is: can you execute Python programs from .pyc files where there is no .py file? If yes, is this also possible in Django? How? Is it a good way to make web apps "almost closed source"? | Run compiled Python | 1.2 | 0 | 0 | 115 |
20,209,874 | 2013-11-26T06:19:00.000 | 1 | 0 | 1 | 1 | python,installation,gmp,mpfr,bigfloat | 20,229,129 | 3 | false | 0 | 0 | There are two versions of gmpy - version 1 (aka gmpy) and version 2 (aka gmpy2). gmpy2 includes MPFR. If you install gmpy2 then you probably don't need bigfloat since the functionality of MPFR can be directly accessed from gmpy2.
Disclaimer: I maintain gmpy and gmpy2. | 1 | 1 | 0 | I am trying to install bigfloat in Python 3.2 on a Windows 7 machine. The documentation says that I first need to install GMP and MPFR. I have downloaded both of these to my desktop (as well as the bigfloat package). However as they are C packages I am not sure how to install them in python (I have tried to find a clear explanation for the last several hours and failed). Can any one either tell me what I need to do or point me to a tutorial? Thanks a lot, any help is greatly appreciated. | Installing bigfloat, GMP and MPFR in windows for python | 0.066568 | 0 | 0 | 4,016 |
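A tiny sketch of using MPFR through gmpy2 directly, as the answer suggests (precision is in bits):
import gmpy2

gmpy2.get_context().precision = 200     # bits of working precision
x = gmpy2.mpfr("1.1") ** 100
print(x)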
20,213,112 | 2013-11-26T09:31:00.000 | 0 | 0 | 0 | 0 | python,wxpython,mdi | 20,219,489 | 1 | false | 0 | 1 | You either use one of the MDI implementations that wxPython already has built-in or you reinvent the wheel and create your own widgets. | 1 | 0 | 0 | I want to build an application in wxPython which have multiple windows like MDI. I don't want use MDIParentFrame which is old. I've seen wx.aui too but I don't like it.
What I want is a main frame that has a menu and toolbar plus several child windows. Child windows must minimize when the main frame is minimized and get focus when the main frame is clicked.
How do I do this?
Thanks all | wxPython and MDI like app | 0 | 0 | 0 | 181 |
20,213,284 | 2013-11-26T09:38:00.000 | 1 | 0 | 0 | 1 | python-3.3,emacs24 | 20,214,424 | 1 | true | 0 | 0 | Sounds like a bug. Try a workaround: load python-mode first, then open the shell interactively. This will provide some setup, which might cure it.
With shipped python.el M-x run-python RET
With python-mode.el M-x python[VERSION] RET
VERSION is optional, it provides non-default shells without re-customizing the variable holding the command-name, i.e. py-shell-name | 1 | 2 | 0 | I am trying to start python 3.3.3 within a shell buffer in emacs (GNU emacs 24.2). OS is Win7. If I start python from the regular command line, the program works well. If I open a shell buffer in emacs (M-x shell) and type "python" into the command line (the program is in the path), it prints "python" on a new line and stops there.
Any ideas what I am doing wrong? | python 3.3.3 hangs if opened in shell within emacs | 1.2 | 0 | 0 | 232 |
20,220,200 | 2013-11-26T14:45:00.000 | 1 | 0 | 1 | 1 | python,virtualenv | 20,238,425 | 1 | true | 0 | 0 | Many problems are lessened or solved with virtualenvwrapper-win. It is a well-written framework with simple entry points. I spent a lot of time fighting with Windows, trying to get a functional Python work environment. This is one of those programs I really wish I had known about a long time ago.
Does not handle multiple python installations extraordinarily (or switching between them), but the project owner also developed another supporting product, pywin, meant to augment that particular shortcoming.
The whole point is, that it makes Windows command-line development quite a bit smoother, even if its not all the automation I dream about. | 1 | 1 | 0 | I am working on Windows (sadface) with Python and virtualenv.
I would like to have setup and teardown scripts that go along with the virtualenv activation/deactivation. But I am not sure if these hooks have already been designated, and if so, where?
I guess I could hack the activate.bat, but then what if I use activate.py instead (does activate.py call activate.bat, or must I hack both files)? I can almost get away with environment variable PYTHONSTARTUP, but this needs to be redefined in each virtualenv. So unless virtualenv allows arbitrary assignment of env-vars, I am back to an activation/deactivation hook to set PYTHONSTARTUP (which really defeats the purpose, but now you see my catch-22).
EDIT: I plan to use my virtualenv to host interactive development sessions. I will be calling 'venv/bin/activate.bat' manually from the terminal. I do not want loose Batch/Powershell scripts laying around that I have to remember to call once when I activate, and once again when I deactive. I want to hook the execution in such a way, so that after I add my custom scripting hooks, 6 months later I don't have to remember how it works. I just execute activate.bat, and I am off to the races. | virtualenv on Windows, activate/deactivate events/hooks | 1.2 | 0 | 0 | 1,635 |
20,221,953 | 2013-11-26T16:00:00.000 | 0 | 0 | 1 | 0 | python,conceptual | 20,222,231 | 1 | false | 0 | 0 | You cannot know what a program does without "executing" it (could be in some context where you mock things you don't want to be modified but it look like shooting at a moving target).
If you do a handmade parsing there will always be some issues you miss.
You should break the code in two functions :
a first one where the code can only add_component(s), without any side effects, but with the possibility to run real code to check the environment etc. to know which components to add;
a second one that can have side effects and rely on the added components.
Using an XML (or any static format) is similar except :
you are certain there are no side effects (don't need to rely on the programmer respecting the documentation)
much less flexibility but be sure you need it. | 1 | 0 | 0 | I am implementing a workflow management system, where the workflow developer overloads a little process function and inherits from a Workflow class. The class offers a method named add_component in order to add a component to the workflow (a component is the execution of a software or can be more complex).
My Workflow class in order to display status needs to know what components have been added to the workflow. To do so I tried 2 things:
execute the process function 2 times: the first time allows gathering all the components required, the second one is the real execution. The problem is, if the workflow developer does something other than adding components (adds an element to a database, creates a file), this will be done twice!
parse the Python code of the function to extract only the add_component lines; this works, but if some components are in an if / else statement and the component should not be executed, the component still appears in the monitoring!
I'm wondering if there is other solution (I thought about making my workflow being an XML or something to parse easier but this is less flexible). | Understand programmatically a python code without executing it | 0 | 0 | 0 | 107 |
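A rough sketch of the two-function split suggested above (all names are invented): a side-effect-free declare step that only calls add_component, and an execute step that does the real work, so monitoring can call declare alone.
class Workflow(object):
    def __init__(self):
        self._components = []

    def add_component(self, component):
        self._components.append(component)

    def declare(self):
        raise NotImplementedError   # only add_component() calls belong here

    def execute(self):
        raise NotImplementedError   # real work: databases, files, subprocesses...

    def run(self):
        self.declare()                                # safe to call just for monitoring
        print("components to run:", self._components)
        self.execute()

class MyWorkflow(Workflow):
    def declare(self):
        self.add_component("align")
        self.add_component("report")

    def execute(self):
        for name in self._components:
            print("running", name)

MyWorkflow().run()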
20,222,657 | 2013-11-26T16:32:00.000 | 4 | 0 | 0 | 0 | python,optimization,scipy | 20,255,287 | 2 | false | 0 | 0 | I believe cobyla is the only technique that supports this in scipy.optimize.minimize. You can essentially control how big its steps are with the rhobeg parameter. (It's not really the step size since it's a sequential linear method, but it has the same effect). | 1 | 6 | 1 | Is there a way to make the scipy optimization modules use a smaller step size?
I am optimizing a problem with a large set of variables (approximately 40) that I believe are near the optimal value, however when I run the scipy minimization modules (so far I have tried L-BFGS and CG) they do not converge because the initial step size is too large. | Method to set scipy optimization minimization step size | 0.379949 | 0 | 0 | 5,636 |
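A short sketch of the rhobeg suggestion, with a toy objective standing in for the real one; rhobeg is COBYLA's initial trust-region size, so a small value keeps the first steps small when the start point is already near the optimum.
import numpy as np
from scipy.optimize import minimize

def f(x):
    return np.sum((x - 1.0) ** 2)        # toy objective

x0 = np.full(40, 0.9)                    # ~40 variables, already close to the optimum
res = minimize(f, x0, method="COBYLA",
               options={"rhobeg": 0.05, "maxiter": 20000})
print(res.fun)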
20,227,834 | 2013-11-26T20:59:00.000 | 0 | 0 | 0 | 0 | python,algorithm,neo4j,recommendation-engine | 20,237,347 | 1 | true | 1 | 0 | I think you won't find any out-of-the box solution for you problem, as it is quite specific. What you could do with Neo4j is to store all your data that you use for building recommendations (users, friendships links, users' restaurants recommendations and reviews etc) and then build you recommendation engine on this data. From my experience it is quite straightforward to do once you get all your data in Neo4j (either with Cypher or Gremlin). | 1 | 0 | 0 | I have developed a search engine for restaurants. I have a social network wherein users can add friends and form groups and recommend restaurants to each other. Each restaurant may serve multiple cuisines. All of this in Python.
So based on what restaurants a user has recommended, we can zero in on the kind of cuisines a user might like more. At the same time, we will know which price tier the user is more likely to explore(high-end, fast food, cafe, lounges etc)
His friends will recommend some places which will carry more weightage. There are similar non-friend users who have the recommended some the restaurants the user has recommended and some more which the user hasn't.
The end problem is to recommend restaurants to the user based on:
1) What he has recommended(Other restaurants with similar cuisines) - 50% weightage
2) What his friends have recommended(filtering restaurants which serve the cuisines the user likes the most) - 25% weightage
3) Public recommendation by 'similar' non-friend users - 25% weightage.
I am spending a lot of time reading up on Neo4j, and I think Neo4j looks promising. Apart from that I tried pysuggest, but it didn't suit the above problem. I also tried reco4j but it is a Java based solution, whereas I'm looking for a Python based solution. There is also no activity on the Reco4j community, and it is still under development.
Although I've researched quite a lot, I might be missing out on something.
I'd like to know how would you go about implementing the above solution? Could you give any use cases for the same? | Recommendation engine using collaborative filtering in Python | 1.2 | 0 | 0 | 1,190 |
20,229,048 | 2013-11-26T22:11:00.000 | 1 | 0 | 0 | 0 | python,django,facebook,authentication | 20,229,262 | 1 | false | 1 | 0 | There is an app called django-allauth. If you read their official documentation, it is pretty easy to follow. As per their instructions, you install the core app, and any other authentication you need (like facebook, oauth which google uses). Then, you have to go to facebook, get developers key, and add it to your django admin.
Basically, when somebody tries to login using facebook, the signin process sends the keys to facebook, and check if the user exists. If it does, then the authentication app creates user on the backend, just like a normal signin process. You can get javascript from facebook to make a login window. | 1 | 0 | 0 | I've read about a lot of different apps for django for integrating social authentication to django projects. But, I'm having some trouble understanding how this integration works
Does it extend the default User model? Where do I find that kind of information in the applications?
I basically need a user system that has groups (for permission purposes). The user would be able to register using a common registration process or Facebook. Will I be able to achieve that with any kind of application?
Thanks in advance. | user system with social authentication in django | 0.197375 | 0 | 0 | 74 |
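A minimal settings fragment for the django-allauth route described above; exact app and setting names may differ between allauth releases, so treat this as a sketch rather than the definitive configuration.
# settings.py (fragment)
INSTALLED_APPS += [
    "django.contrib.sites",
    "allauth",
    "allauth.account",
    "allauth.socialaccount",
    "allauth.socialaccount.providers.facebook",
]
AUTHENTICATION_BACKENDS = [
    "django.contrib.auth.backends.ModelBackend",             # normal username/password login
    "allauth.account.auth_backends.AuthenticationBackend",    # social logins
]
SITE_ID = 1   # the Facebook app key/secret are then added via the admin, as noted above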
20,232,889 | 2013-11-27T02:38:00.000 | 2 | 0 | 0 | 0 | python,opencv,opencl | 20,238,109 | 1 | true | 0 | 0 | unfortunately, - no way.
opencv uses special Mat types for this, ocl::Mat or cuda::Mat ,
and those are not exposed to the wrappers (so, same problem for java and matlab) | 1 | 1 | 1 | From what I can tell, there's no way to access OpenCV's OpenCL (OCL) module from the python cv2 bindings. Does anyone know of a straightforward way to do this? | Is it possible to access OpenCV OCL (OpenCL) methods from python (cv2)? | 1.2 | 0 | 0 | 1,033 |
20,233,650 | 2013-11-27T04:01:00.000 | 0 | 0 | 1 | 1 | python,bash,pbs | 20,234,550 | 2 | false | 0 | 0 | I like to write data to sys.stderr sometimes for this sort of thing. It obviates the need to flush so much. But if you're generating output for piping sometimes, you remain better off with sys.stdout. | 1 | 1 | 0 | I write a python script in which there are several print statements. The printed information can help me to monitor the progress of the script. But when I qsub the bash script, which contains python my_script &> output, onto computing nodes, the output file contains nothing even when the script is running and printing something. The output file will contain the output only when the script is done. So how can I get the output in real time through the output file while the script is running? | Output file contains nothing before script finishing | 0 | 0 | 0 | 641
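A tiny sketch of the complementary fix when the output really must go to the redirected stdout file: flush after each progress line (or start the interpreter with python -u so stdout is unbuffered).
import sys
import time

for i in range(5):
    print("progress %d" % i)
    sys.stdout.flush()        # push the line through the redirection immediately
    time.sleep(1)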
20,235,524 | 2013-11-27T06:34:00.000 | 2 | 1 | 0 | 1 | python,linux,python-3.x | 20,235,689 | 1 | false | 0 | 0 | The easiest way is to not allow the user delete the script. Put the script in one of the non-core bin directories, e.g. into /usr/local/bin as root and regular user will not be able to remove it. | 1 | 0 | 0 | I want to send an email alert using the linux os when user delete python setup file. i made a screenshot program using python, unfortunately if user uninstall python setup file.I want to send an email to the admin. if you know the processing steps kindly share with me or please give any suggestions. | How to send email alert to admin after delete a python setup file using linux? | 0.379949 | 0 | 0 | 100 |
20,242,434 | 2013-11-27T12:24:00.000 | 1 | 1 | 0 | 0 | python | 20,242,742 | 4 | false | 0 | 0 | Use a web application framework like CherryPy, Django, Webapp2 or one of the many others. For a production setup, you will need to configure the web server to make them work.
Or write CGI programs with Python. | 1 | 7 | 0 | In my development of Android and Java applications I have been using PHP scripts to interact with an online MySQL database, but now I want to migrate to Python.
How does one run Python scripts on a web server? In my experience with PHP, I have been saving my files under /var/www folder in a Linux environment. Then I just call the file later with a URL of the path. Where do I save my Python scripts? | How to run Python scripts on a web server (e.g localhost) | 0.049958 | 0 | 0 | 47,776 |
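For the plain CGI route, a minimal sketch (the cgi-bin path is hypothetical and depends on the Apache configuration): drop an executable script into the CGI directory and request it by URL, much like a PHP file.
#!/usr/bin/env python
# /usr/lib/cgi-bin/hello.py  -- must be executable and CGI must be enabled in the server
print("Content-Type: text/html")
print("")
print("<html><body>Hello from Python</body></html>")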
20,243,803 | 2013-11-27T13:28:00.000 | 0 | 0 | 0 | 0 | python,django,sorting,django-tables2 | 21,063,484 | 1 | true | 1 | 0 | As a matter of fact, this can't be done. If the TemplateColumn name is based on a field name in the model, sorting works fine. As far as I am concerned, if you want to present other data, it is impossible. | 1 | 2 | 0 | Is it somehow possible to add a custom sort method to values generated using TemplateColumn? Because by default it tries to find the column name in the model and returns FieldError: Cannot resolve keyword u'coulmn_name' into field. Choices are: [all fields in model]. | Django-tables2 Sorting TemplateColumn | 1.2 | 0 | 0 | 448
20,247,408 | 2013-11-27T16:09:00.000 | 1 | 0 | 0 | 0 | c++,qt,boost,ipython | 20,943,115 | 1 | false | 0 | 1 | I have been looking to try and do something similar. I'm trying to at the very least use the ipython kernel in a c++ widget. So far the only stuff I have found online is importing modules into python. I have found nothing on the ipython github site either yet. But I'm still digging. | 1 | 2 | 0 | Is there some example of people embedding ipython-qt console inside their C++/Qt application?
I've only seen examples of people embedding that in PyQt applications.
What I would like to have in the end is something like the console example available for PythonQt, where, from a console with autocompletion, one can modify the internal state of C++ variables.
Maybe is it possible thanks to an additional layer of binding of C++ methods via Boost.Python? | Embedding ipython qt-console as qwidget inside qt/c++ application | 0.197375 | 0 | 0 | 951 |
20,248,521 | 2013-11-27T16:57:00.000 | 1 | 0 | 0 | 1 | python,hadoop | 20,252,470 | 2 | true | 0 | 0 | Number of mappers is actually governed by the InputFormat you are using. Having said that, based on the type of data you are processing, InputFormat may vary. Normally, for the data stored as files in HDFS FileInputFormat, or a subclass, is used which works on the principle of MR split = HDFS block. However, this is not always true. Say you are processing a flat binary file. In such a case there is no delimiter(\n or something else) to represent the split boundary. What would you do in such a case? So, the above principle doesn't always work.
Consider another scenario wherein you are processing data stored in a DB, and not in HDFS. What will happen in such a case as there is no concept of 64MB block size when we talk about DBs?
The framework tries its best to carry out the computation in a manner as efficient as possible, which might involve creation of lesser/more number of mappers as specified/expected by you. So, in order to see how exactly mappers are getting created you need to look into the InputFormat you are using in your job. getSplits() method to be precise.
If I want to use only 1 map task, do I have to set the input splits size to 1GB??
You can override the isSplitable(FileSystem, Path) method of your InputFormat to ensure that the input files are not split-up and are processed as a whole by a single mapper.
Let's say I successfully specify that I want to use only 2 map tasks, does it use 2 cores? And each core has 1 map task??
It depends on availability. Mappers can run on multiple cores simultaneously. And a single core can run multiple mappers sequentially. | 1 | 3 | 0 | What I'm trying to do
I'm new to hadoop and I'm trying to perform MapReduce several times with a different number of mappers and reducers, and compare the execution time. The file size is about 1GB, and I'm not specifying the split size so it should be 64MB. I'm using a machine with 4 cores.
What I've done
The mapper and reducer are written in python. So, I'm using hadoop streaming. I specified the number of map tasks and reduce tasks by using '-D mapred.map.tasks=1 -D mapred.reduce.tasks=1'
Problem
Because I specified to use 1 map task and 1 reduce task, I expected to see just one attempt, but I actually have 38 map attempts and 1 reduce task. I read tutorials and SO questions similar to this problem, and some said that the default number of map tasks is 2, but I'm getting 38 map tasks. I also read that mapred.map.tasks only suggests the number, and that the actual number of map tasks is determined by the number of input splits. However, 1GB divided by 64MB is about 17, so I still don't understand why 38 map tasks were created.
1) If I want to use only 1 map task, do I have to set the input splits size to 1GB??
2) Let's say I successfully specify that I want to use only 2 map tasks, does it use 2 cores? And each core has 1 map task?? | Number of map tasks and split size | 1.2 | 0 | 0 | 1,433 |
20,251,562 | 2013-11-27T19:39:00.000 | 1 | 0 | 1 | 0 | python,django,macos | 33,155,616 | 4 | false | 1 | 0 | To set up Django with python3:
sudo apt-get install python3-pip
sudo pip3 install virtualenv
virtualenv -p /usr/bin/python3.X venv
source venv/bin/activate
pip install django
Congrats, you have set up Django on python3 and now have many packages to work with.
Note: X stands for the version of python | 2 | 14 | 0 | I had python 2.6.1 because it was old I decide to install python 3.3.2 but, when I type "python" in my mac it prints it is version 2.6.1 and when I type python3 it shows that this is 3.3.2. I installed django 1.6 but when I check, understand that it is installed for old version of python (python 2.6.1). I want to instal it for my python 3.3.2 what should I do? any way to uninstall python 2.6.1 and when I enter python in terminal it's version be 3.3.2? I have mac os 10.6.8 | How to install django for python 3.3 | 0.049958 | 0 | 0 | 16,364 |
20,251,562 | 2013-11-27T19:39:00.000 | 0 | 0 | 1 | 0 | python,django,macos | 46,796,659 | 4 | false | 1 | 0 | To install django for python3, you need to use pip3 instead of pip.
python defaults to python2.
pip defaults to pip for python2.
So, when you install pip using whatever package manager you have, you are essentially installing pip for python2.
To remove Python2: $sudo apt remove python
To install pip for python3:
$sudo apt install python3-pip
To install pip for python2:
$sudo apt install python-pip
Note: I am using apt package manager. You should use the package manager for your OS. | 2 | 14 | 0 | I had python 2.6.1 because it was old I decide to install python 3.3.2 but, when I type "python" in my mac it prints it is version 2.6.1 and when I type python3 it shows that this is 3.3.2. I installed django 1.6 but when I check, understand that it is installed for old version of python (python 2.6.1). I want to instal it for my python 3.3.2 what should I do? any way to uninstall python 2.6.1 and when I enter python in terminal it's version be 3.3.2? I have mac os 10.6.8 | How to install django for python 3.3 | 0 | 0 | 0 | 16,364 |
20,252,484 | 2013-11-27T20:27:00.000 | 2 | 0 | 0 | 0 | python,r,scikit-learn,classification,random-forest | 20,557,736 | 4 | true | 0 | 0 | After reading over the documentation, I think that the answer is definitely no. Kudos to anyone who adds the functionality though. As mentioned above the R package randomForest contains this functionality. | 4 | 6 | 1 | Perhaps this is too long-winded. Simple question about sklearn's random forest:
For a true/false classification problem, is there a way in sklearn's random forest to specify the sample size used to train each tree, along with the ratio of true to false observations?
More details are below:
In the R implementation of random forest, called randomForest, there's an option sampsize(). This allows you to balance the sample used to train each tree based on the outcome.
For example, if you're trying to predict whether an outcome is true or false and 90% of the outcomes in the training set are false, you can set sampsize(500, 500). This means that each tree will be trained on a random sample (with replacement) from the training set with 500 true and 500 false observations. In these situations, I've found models perform much better predicting true outcomes when using a 50% cut-off, yielding much higher kappas.
It doesn't seem like there is an option for this in the sklearn implementation.
Is there any way to mimic this functionality in sklearn?
Would simply optimizing the cut-off based on the Kappa statistic achieve a similar result or is something lost in this approach? | Can sklearn Random Forest classifier adjust sample size by tree, to handle class imbalance? | 1.2 | 0 | 0 | 2,604 |
20,252,484 | 2013-11-27T20:27:00.000 | 0 | 0 | 0 | 0 | python,r,scikit-learn,classification,random-forest | 28,440,842 | 4 | false | 0 | 0 | As far as I am aware, the scikit-learn forests employ bootstrapping, i.e. the sample sets each tree is trained with are always of the same size and drawn from the original training set by random sampling with replacement.
Assuming you have a large enough set of training samples, why not balance the training set yourself to hold 50/50 positive/negative samples? You will achieve the desired effect. scikit-learn provides functionality for this. | 4 | 6 | 1 | Perhaps this is too long-winded. Simple question about sklearn's random forest:
For a true/false classification problem, is there a way in sklearn's random forest to specify the sample size used to train each tree, along with the ratio of true to false observations?
More details are below:
In the R implementation of random forest, called randomForest, there's an option sampsize(). This allows you to balance the sample used to train each tree based on the outcome.
For example, if you're trying to predict whether an outcome is true or false and 90% of the outcomes in the training set are false, you can set sampsize(500, 500). This means that each tree will be trained on a random sample (with replacement) from the training set with 500 true and 500 false observations. In these situations, I've found models perform much better predicting true outcomes when using a 50% cut-off, yielding much higher kappas.
It doesn't seem like there is an option for this in the sklearn implementation.
Is there any way to mimic this functionality in sklearn?
Would simply optimizing the cut-off based on the Kappa statistic achieve a similar result or is something lost in this approach? | Can sklearn Random Forest classifier adjust sample size by tree, to handle class imbalance? | 0 | 0 | 0 | 2,604 |
20,252,484 | 2013-11-27T20:27:00.000 | 3 | 0 | 0 | 0 | python,r,scikit-learn,classification,random-forest | 28,648,499 | 4 | false | 0 | 0 | In version 0.16-dev, you can now use class_weight="auto" to have something close to what you want to do. This will still use all samples, but it will reweight them so that classes become balanced. | 4 | 6 | 1 | Perhaps this is too long-winded. Simple question about sklearn's random forest:
For a true/false classification problem, is there a way in sklearn's random forest to specify the sample size used to train each tree, along with the ratio of true to false observations?
More details are below:
In the R implementation of random forest, called randomForest, there's an option sampsize(). This allows you to balance the sample used to train each tree based on the outcome.
For example, if you're trying to predict whether an outcome is true or false and 90% of the outcomes in the training set are false, you can set sampsize(500, 500). This means that each tree will be trained on a random sample (with replacement) from the training set with 500 true and 500 false observations. In these situations, I've found models perform much better predicting true outcomes when using a 50% cut-off, yielding much higher kappas.
It doesn't seem like there is an option for this in the sklearn implementation.
Is there any way to mimic this functionality in sklearn?
Would simply optimizing the cut-off based on the Kappa statistic achieve a similar result or is something lost in this approach? | Can sklearn Random Forest classifier adjust sample size by tree, to handle class imbalance? | 0.148885 | 0 | 0 | 2,604 |
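A small sketch of the class_weight route on toy 90/10 data; the answer above refers to the 0.16-dev spelling "auto", which later scikit-learn releases renamed to "balanced".
import numpy as np
from sklearn.ensemble import RandomForestClassifier

X = np.random.rand(1000, 5)
y = (np.random.rand(1000) < 0.1).astype(int)   # ~10% positives

clf = RandomForestClassifier(n_estimators=100, class_weight="balanced")
clf.fit(X, y)
print(clf.predict_proba(X[:3]))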
20,252,484 | 2013-11-27T20:27:00.000 | 0 | 0 | 0 | 0 | python,r,scikit-learn,classification,random-forest | 30,005,356 | 4 | false | 0 | 0 | Workaround in R only, for classification one can simply use all cores of the machine with 100% CPU utilization.
This matches the time and speed of Sklearn RandomForest classifier.
Also for regression there is a package RandomforestParallel on GitHub, which is much faster than Python Sklearn Regressor.
Classification: I have tested and works well. | 4 | 6 | 1 | Perhaps this is too long-winded. Simple question about sklearn's random forest:
For a true/false classification problem, is there a way in sklearn's random forest to specify the sample size used to train each tree, along with the ratio of true to false observations?
More details are below:
In the R implementation of random forest, called randomForest, there's an option sampsize(). This allows you to balance the sample used to train each tree based on the outcome.
For example, if you're trying to predict whether an outcome is true or false and 90% of the outcomes in the training set are false, you can set sampsize(500, 500). This means that each tree will be trained on a random sample (with replacement) from the training set with 500 true and 500 false observations. In these situations, I've found models perform much better predicting true outcomes when using a 50% cut-off, yielding much higher kappas.
It doesn't seem like there is an option for this in the sklearn implementation.
Is there any way to mimic this functionality in sklearn?
Would simply optimizing the cut-off based on the Kappa statistic achieve a similar result or is something lost in this approach? | Can sklearn Random Forest classifier adjust sample size by tree, to handle class imbalance? | 0 | 0 | 0 | 2,604 |
20,252,595 | 2013-11-27T20:35:00.000 | 1 | 0 | 0 | 1 | python,django,apache,deployment,amazon-ec2 | 20,256,055 | 1 | true | 1 | 0 | You use Terminal to SSH into your AWS EC2 environment. All commands past there are 100% platform based (ubuntu, amazon linux, red hat, etc).
You wouldn't use any Mac OS commands besides creating the SSH connection. There's a tutorial on how to do that through the EC2 console. | 1 | 0 | 0 | I am trying to deploy Django and Apache to an Amazon EC2 server. Currently, I have already got the AWS account and launched the instance on the server. But the problem is that I cannot find a tutorial about how to deploy Django and Apache to Amazon EC2 WITH Mac OS; all I can find are Linux deployment tutorials. Where can I find a deployment tutorial for Mac OS? | Mac OS deploy Django + Apache on Amazon EC2 | 1.2 | 0 | 0 | 304
20,252,845 | 2013-11-27T20:50:00.000 | 0 | 0 | 0 | 0 | python,algorithm,geometry | 20,253,113 | 4 | false | 0 | 0 | I don't know what algorithms you've tried, but typically this problem is solved by storing the list of points in the same order (say, clockwise) so that every time run your angle-calculation on a triple of points in order, you always get the same side of the shape (say, the interior angle). | 2 | 2 | 0 | I have been working on a 2-dimensional object creator program in python 3.2.5 that handles manipulations of arbitrary shapes and calculates collision detection between them. The program allows you to input a shape's coordinates, and from there it will do whatever else you want it to do (draw the shape to the screen, expand the border, manipulate individual coordinates, make it symmetrical, etc.).
But I've run into a problem when trying to calculate the interior angles of an arbitrary polygon. While the algorithm's I've used to calculate the angles technically output the correct angle, I have no way of telling whether or not the program spits out an interior angle or an exterior angle (since the arbitrary shape the user inputs could have concave vertices).
On paper this would seem like a piece of cake, since you can visualize the shape and you can interpret which angle is interior and exterior automatically. But since the program only stores the values of the coordinates and doesn't actually visually create the object to extrapolate data, this problem becomes a little bit trickier to solve.
So my question is:
What method should I use to calculate the angle between two lines and how should I go about using it to determine the difference between an interior and an exterior angle?
For example, if I have a shape that has coordinates ((30,50),(35,47),(40,50),(37,43),(35,35),(33,43)) (which ends up looking sort of like an upside-down spire with a concave base), I can easily calculate the angles between the lines, but which angle I am calculating is a mystery. | Best algorithm for detecting interior and exterior angles of an arbitrary shape in python | 0 | 0 | 0 | 5,803 |
20,252,845 | 2013-11-27T20:50:00.000 | 4 | 0 | 0 | 0 | python,algorithm,geometry | 20,254,677 | 4 | false | 0 | 0 | The gold standard for finding the signed angle between two vectors is atan2(cross(a,b)), dot(a,b)). High accuracy and robustness at all angles. (In 2D, cross is the perpendicular dot product, ax*by-ay*bx. In three dimensions, use the length of the cross product; its direction is your axis of rotation.)
Some things NOT to do:
Anything involving acos. Arccosine is a code smell. It suffers from limited range, won't give you signs, needs manual argument clamping, and has poor precision at its extrema. If you find yourself using it, try something else.
Anything involving line slopes. Poor accuracy, and of course is undefined for vertical vectors.
Manually choosing an angular range based on extra tests. This is likely to lead to discontinuities near the axes. | 2 | 2 | 0 | I have been working on a 2-dimensional object creator program in python 3.2.5 that handles manipulations of arbitrary shapes and calculates collision detection between them. The program allows you to input a shape's coordinates, and from there it will do whatever else you want it to do (draw the shape to the screen, expand the border, manipulate individual coordinates, make it symmetrical, etc.).
But I've run into a problem when trying to calculate the interior angles of an arbitrary polygon. While the algorithm's I've used to calculate the angles technically output the correct angle, I have no way of telling whether or not the program spits out an interior angle or an exterior angle (since the arbitrary shape the user inputs could have concave vertices).
On paper this would seem like a piece of cake, since you can visualize the shape and you can interpret which angle is interior and exterior automatically. But since the program only stores the values of the coordinates and doesn't actually visually create the object to extrapolate data, this problem becomes a little bit trickier to solve.
So my question is:
What method should I use to calculate the angle between two lines and how should I go about using it to determine the difference between an interior and an exterior angle?
For example, if I have a shape that has coordinates ((30,50),(35,47),(40,50),(37,43),(35,35),(33,43)) (which ends up looking sort of like an upside-down spire with a concave base), I can easily calculate the angles between the lines, but which angle I am calculating is a mystery. | Best algorithm for detecting interior and exterior angles of an arbitrary shape in python | 0.197375 | 0 | 0 | 5,803 |
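A hedged Python sketch of the atan2(cross, dot) recipe above for a simple (non-self-intersecting) polygon; it normalizes the vertex order first, so reflex (concave) vertices come out greater than 180 degrees.
import math

def _signed_area(pts):
    # Positive for counter-clockwise vertex order, negative for clockwise.
    return sum(pts[i - 1][0] * p[1] - p[0] * pts[i - 1][1]
               for i, p in enumerate(pts)) / 2.0

def interior_angles(pts):
    """Interior angle in degrees at each vertex of a simple polygon."""
    sign = 1.0 if _signed_area(pts) > 0 else -1.0   # fix up clockwise input
    out = []
    n = len(pts)
    for i in range(n):
        bx, by = pts[i]
        ax, ay = pts[i - 1]
        cx, cy = pts[(i + 1) % n]
        u = (cx - bx, cy - by)       # edge towards the next vertex
        w = (ax - bx, ay - by)       # edge back towards the previous vertex
        cross = u[0] * w[1] - u[1] * w[0]
        dot = u[0] * w[0] + u[1] * w[1]
        out.append(math.degrees(math.atan2(sign * cross, dot)) % 360.0)
    return out

print(interior_angles([(30, 50), (35, 47), (40, 50), (37, 43), (35, 35), (33, 43)]))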
20,253,410 | 2013-11-27T21:28:00.000 | 1 | 0 | 0 | 0 | python,django | 20,254,928 | 2 | false | 1 | 0 | I am a bit confused. Are you trying to let users create account and sign in? Then use django-registration which is easy and works out of the box. | 1 | 1 | 0 | I am trying to write custom get_profile() function which should create user profile for users who are registered thru admin or any other way where post_save was not called.
How can I start this? | Writing user.get_profile() custom function which should create auth profile if there is not any | 0.099668 | 0 | 0 | 84 |
20,255,421 | 2013-11-27T23:57:00.000 | 2 | 0 | 1 | 0 | python,c,perl,shell,duplicates | 20,257,399 | 1 | false | 0 | 0 | If you can't just fire up an instance on amazon with enough memory to hold everything in RAM, this is the strategy I would use:
Step 1 - go through and generate a checksum/hashvalue for each line. I'd probably use SIPHASH. Output these to a file.
Step 2 - sort the file of siphash values, and throw away any that only have one entry. Output the result as a set of hashvalues & number of matches.
Step 3 - read through the file. regenerate the hashvalue for each line. If its a line that has a match, hold onto it in memory. If there's another already in memory with same hashvalue, compare to see if the lines themselves match. Output "match" if true. If you've already seen all N lines that have the same hashvalue and they didn't match, go ahead and dispose of the record.
This strategy depends on the number of duplicates being only a small fraction of the number of total lines. If that's not the case, then I would use some other strategy, like a divide and conquer. | 1 | 0 | 0 | I have a rather big text file , average 30GB. I want to remove duplicate lines from this file. What is a good efficient algorithm to do this. for small files, I usually use dictionaries, eg Python dictionaries to store unique keys. But this time the file is rather big. any language suggestion is fine. ( i am thinking of using C? or is it rather not language dependent but the algorithm that is more important? ). thanks | Removing duplicates from BIG text file | 0.379949 | 0 | 0 | 836 |
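A sketch of that strategy in Python; MD5 stands in for SipHash since it ships with hashlib, and the digest counts are kept in a dict here for brevity where the answer sorts an on-disk file. Memory is dominated by one 16-byte digest per distinct line rather than the lines themselves.
import hashlib
from collections import defaultdict

IN_PATH = "big.txt"        # hypothetical input
OUT_PATH = "dedup.txt"

# Pass 1: count how many lines share each fixed-size hash value.
seen = defaultdict(int)
with open(IN_PATH, "rb") as f:
    for line in f:
        seen[hashlib.md5(line).digest()] += 1

# Pass 2: lines whose hash is unique are certainly unique; for the rest,
# keep the actual text in memory so real duplicates can be dropped.
kept = set()
with open(IN_PATH, "rb") as f, open(OUT_PATH, "wb") as out:
    for line in f:
        h = hashlib.md5(line).digest()
        if seen[h] == 1:
            out.write(line)
        elif line not in kept:
            kept.add(line)
            out.write(line)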
20,257,359 | 2013-11-28T03:50:00.000 | 4 | 0 | 1 | 0 | python,regex,python-2.x | 20,257,472 | 1 | false | 0 | 0 | Use a two-pass approach. The first pass uses the first regex to find the "interesting bits" and outputs those offsets into a separate file. You didn't say if you can tell where the "end" of each interesting segment is, but you'd include that too if available. The second pass uses the offsets to load sections of the file as independent strings and then applies whatever secondary regex you like on each smaller string. | 1 | 4 | 0 | Using Python 2.6.6.
I was hoping that the re module provided some method of searching that mimicked the way str.find() works, allowing you to specify a start index, but apparently not...
search() lets me find the first match...
findall() will return all (non-overlapping!) matches of a single pattern
finditer() is like findall(), but via an iterator (more efficient)
Here is the situation... I'm data mining in huge blocks of data. For parts of the parsing, regex works great. But once I find certain matches, I need to switch to a different pattern, or even use more specialized parsing to find where to start searching next. If re.search allowed me to specify a starting index, it would be perfect. But in absence of that, I'm looking at:
Using finditer(), but skipping forward until I reach an index that is past where I want to resume using re. Potential problems:
If the embedded binary data happens to contain a match that overlaps a legitimate match just after the binary chunk...
Since I'm not searching for a single pattern, I'd have to juggle multiple iterators, which also has the possibility of a false match hiding the real one.
Slicing, i.e., creating a copy of the remainder of the data each time I want to search again.
This would be robust, but would force a lot of "needless" copying on data that could be many megabytes.
I'd prefer to keep it so that all match locations were indexes into the single original string object, since I may hang onto them for a while and want to compare them. Finding subsequent matches within separate sliced-off copies is a bookkeeping hassle.
Just occurred to me that I may be able to use a "rotating buffer" sort of approach, but haven't thought it through completely. That might introduce a lot of complexity to the code.
Am I missing any obvious alternatives? Not sure if there would be a way to wrap a huge string with a class that would serve slices... Or a slicing sort of iterator or "string cursor" idiom? | Performing incremental regex searches in huge strings (Python) | 0.664037 | 0 | 0 | 897 |
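A small sketch of the two-pass idea from the answer above. The patterns and data are invented placeholders; the relevant detail is that compiled pattern objects accept a pos argument, so the second pass can resume at a recorded offset without slicing or copying the big string.

```python
import re

primary = re.compile(r'RECORD:')          # placeholder "interesting bit" marker
secondary = re.compile(r'VALUE=(\d+)')    # placeholder refinement pattern

def two_pass_scan(data):
    # Pass 1: remember where every primary match ends.
    offsets = [m.end() for m in primary.finditer(data)]
    # Pass 2: resume the more specialised search at each recorded offset.
    hits = []
    for pos in offsets:
        m = secondary.search(data, pos)   # pos keeps indexes relative to `data`
        if m:
            hits.append((m.start(), m.group(1)))
    return hits

print(two_pass_scan("junk RECORD: header VALUE=42 junk RECORD: header VALUE=7"))
```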
20,260,547 | 2013-11-28T07:54:00.000 | 1 | 0 | 1 | 0 | python,python-2.7 | 20,260,724 | 4 | false | 0 | 0 | If you mean, for example, starting a timer and starting a loop right after it, I think you can use ; like this: start_timer; start_loop on one line. | 1 | 0 | 0 | How would I run two lines of code at the exact same time in Python 2.7? I think it's called parallel processing or something like that, but I can't be too sure. | Running two lines of code in python at the same time? | 0.049958 | 0 | 0 | 10,101
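The answer above puts both calls on one line with a semicolon, which still runs them one after the other. If the goal is to have them genuinely run at the same time, the standard threading module is the usual tool in Python 2.7; a minimal sketch, where start_timer and start_loop are placeholders for whatever the two lines do:

```python
import threading

def start_timer():
    pass  # ... whatever the first line of code does ...

def start_loop():
    pass  # ... whatever the second line of code does ...

t1 = threading.Thread(target=start_timer)
t2 = threading.Thread(target=start_loop)
t1.start(); t2.start()   # the semicolon keeps both statements on one line
t1.join(); t2.join()
```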
20,260,726 | 2013-11-28T08:06:00.000 | 0 | 1 | 0 | 1 | python,eve | 20,278,344 | 2 | false | 0 | 0 | I had the exact same problem.
You are running something like this:
>python yourPeve.py
You need to run:
>python yourPeve.py &
The & symbol will put the process in the background, so when you close the terminal, the process won't be killed.
Can someone please help me to look into the root cause?
Thanks,
Chunan | Python eve not responding after running for a couple of days | 0 | 0 | 0 | 433 |
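Since the question also complains that Eve "doesn't seem to have very nice log file", one thing that sometimes helps is configuring standard Python logging before creating the app, and then running it in the background as the answer suggests. This is only a sketch; the file names and port are assumptions, not from the question.

```python
# yourPeve.py (sketch)
import logging
from eve import Eve

logging.basicConfig(filename='eve.log', level=logging.INFO)

app = Eve()   # picks up settings.py from the same directory by default

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
```

Launched with `python yourPeve.py &` (or better, under nohup or a process supervisor), the process then has somewhere to write messages when something goes wrong.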
20,262,330 | 2013-11-28T09:33:00.000 | 0 | 0 | 0 | 0 | python,xml,module,openerp | 23,950,717 | 1 | false | 1 | 0 | i had the same error just a while ago
you need to give a proper name to your model without upper case
my example : Remise is the designation and x_remise is the module name | 1 | 0 | 0 | when i create a module through interface it shows me error when click on save button error in create m2o screen
ERROR ValidateError
Error occurred while validating the field(s) res_model,src_model: Invalid model name in the action definition. | create module in openerp through interface | 0 | 0 | 0 | 226 |
20,263,021 | 2013-11-28T10:04:00.000 | 9 | 1 | 0 | 0 | python,excel,win32com,xlrd,openpyxl | 20,263,978 | 1 | true | 0 | 0 | Open and write directly and efficiently excel files, for instance.
win32com uses COM communication, which, while very useful for certain purposes, needs to perform complicated API calls that can be very slow (so to speak, you are using code that controls Windows, which in turn controls Excel).
openpyxl and the others just open an Excel file, parse it, and let you work with it.
Try to populate an excel file with 2000 rows, 100 cells each, with win32com and then with any other direct parser. While a parser needs seconds, win32com will need minutes.
Besides, openpyxl (I haven't tried the others) does not need that excel is installed in the system. It does not even need that the OS is windows.
Totally different concepts. win32com is a piece of art that opens a door to automate almost anything, while the other option is just a file parser. In other words, to iron your shirt, you use a $20 iron, not a 100 ton metal sheet attached to a Lamborghini Diablo. | 1 | 3 | 0 | win32com is a general library to access COM objects from Python.
One of the major hallmarks of this library is its ability to manipulate Excel documents.
However, there are lots of specialized modules whose only purpose is to manipulate Excel documents, like openpyxl, xlrd, xlwt, python-tablefu.
Are these libraries any better for this specific task? If yes, in what respect? | What do third party libraries like openpyxl or xlrd/xlwt have, what win32com doesn't have? | 1.2 | 1 | 0 | 3,031 |
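To make the answer's comparison concrete, here is roughly what the 2000-row / 100-cell test would look like with openpyxl; the file name is arbitrary and exact API details may vary a little between openpyxl versions.

```python
from openpyxl import Workbook

wb = Workbook()
ws = wb.active
for r in range(2000):
    ws.append([r * 100 + c for c in range(100)])   # 2000 rows x 100 cells
wb.save('bulk_test.xlsx')   # finishes in seconds, and needs no Excel install
```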
20,266,882 | 2013-11-28T12:58:00.000 | 0 | 0 | 0 | 1 | google-app-engine,python-2.7 | 20,269,288 | 1 | false | 1 | 0 | Its not sql. You dont clone or delete 'tables', no such thing in the datastore.
To do the migration you would use task queues to run through a query. You probably need to stop your frontend while doing so. Task queues have a longer limit than the 60sec you mention and each taskqueue will create another one until you finish pro essing all items in your query.
Yiu also complain that its harder than other enviroments but it isnt so. The problem maybe is that you chose to use the datastore instead of cloud sql which you could also have used. Each has its pros and cons. | 1 | 0 | 0 | I'm trying to change the Property several Fields on my GAE AppEngine to a custom Type (Encrypted Content).
Most of them are currently String or Text Properties. Since we have multiple millions of Entries in our DB, migration is not an easy task. I'm looking for a best practise, here is what I think will work best but this might be very challenging to execution time limits plus I'm a little bit frightened about the costs for this task.
clone table to tmp_table
delete table
create table with new attributes
insert values from tmp_table into table
What sounds like a short hiking trip on most environments feels a little bit more complex on GAE ;)
My Questions to you:
- Are there any known best practices you are aware of / did you already tackle this challenge, and how?
- Any idea how to trigger the process (I would estimate it takes several minutes so the 60 second limit | Change Property of db.Model to custom Type | 0 | 0 | 0 | 49
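A hedged sketch of the task-queue pattern the answer describes, using the old db API's cursors together with the deferred library. MyModel, the secret property, and encrypt() are placeholders invented for illustration; each task processes one batch and re-queues itself until the query is exhausted, so no single task has to fit in the request deadline.

```python
from google.appengine.ext import db, deferred

BATCH_SIZE = 100

def migrate(cursor=None):
    query = MyModel.all()                        # MyModel is a placeholder kind
    if cursor:
        query.with_cursor(cursor)
    batch = query.fetch(BATCH_SIZE)
    if not batch:
        return                                   # migration finished
    for entity in batch:
        entity.secret = encrypt(entity.secret)   # hypothetical conversion step
    db.put(batch)
    deferred.defer(migrate, query.cursor())      # chain the next batch

# kick it off once, e.g. from an admin-only handler:
# deferred.defer(migrate)
```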
20,268,901 | 2013-11-28T14:42:00.000 | 1 | 1 | 1 | 0 | python,dictionary,hash | 20,269,010 | 2 | false | 0 | 0 | By using hastables you achieve O(1) retrieval data, while comparing against each independent vale for equality will take O(n) (in a sequential search) or O(log(n)) in a binary search.
Also note that O(1) is amortized time, because if there are several values that hash to the same key, then a sequential search is needed among these values. | 1 | 0 | 0 | I've recently been looking into Python's dictionaries (I believe they're called associate arrays in other languages) and was confused by a couple of the restrictions on its keys.
First, dict keys must be immutable. When I looked up the logic behind it the answer was that dictionaries work like hash tables to look up the values for keys, and thus immutable keys (if they're hashable at all) may change their hash value, causing problems when retrieving the value.
I understood why that was the case just fine, but I was still somewhat confused by what the point of using a hash table would be. If you simply didn't hash the keys and tested for true equality (assuming indentically constructed objects compare equal), you could replicate most of the functionality of the dictionary without that limitation just by using two lists.
So, I suppose that's my real question - what's the rationale behind using hashes to look up values instead of equality?
If I had to guess, it's likely simply because comparing integers is incredibly fast and optimized, whereas comparing instances of other classes may not be. | Why use hashes instead of test true equality? | 0.099668 | 0 | 0 | 90 |
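A tiny illustration of the answer's point: with a plain list you have to test equality key by key, while the dict's hash sends the lookup straight to the right slot.

```python
pairs = [("alpha", 1), ("beta", 2), ("gamma", 3)]

def list_lookup(key):
    for k, v in pairs:        # O(n): equality test against each stored key
        if k == key:
            return v
    raise KeyError(key)

table = dict(pairs)           # hash table built once

print(list_lookup("gamma"))   # walks the whole list
print(table["gamma"])         # O(1) on average: the hash picks the slot directly
```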
20,271,881 | 2013-11-28T17:32:00.000 | 0 | 1 | 0 | 1 | python,mercurial,mercurial-hook | 20,836,549 | 1 | true | 0 | 0 | I don't know exactly what happened, but it seems like it was not using the script because an exception somehow prevented it from compiling to a .pyc, and Mercurial somehow fetched the old version of that .pyc file. Not too sure, but that's my best guess (as somehow no one else seems to have an idea and the Mercurial guys made it really clear they only answer stuff on their mailing list instead of SO.. how.. nice). | 1 | 0 | 0 | Ok, this is really weird. I have an old Mercurial 2.02 with Python 2.6 on an old Ubuntu-something (I think 10.04).
We are a Windows shop and push regularly, so I wanted kind of a review service. It absolutely worked on Windows: pretxnchangegroup referencing the Python file on the drive worked.
But I made the mistake of creating the Mercurial hook on a new Mercurial 2.7, then recognized the internal API had changed, so I went back and fixed it, or tried to. I'm using Windows, but need to deploy the hook to Linux, so I use WinSCP to copy the .py file to my home directory, and then sudo cp it to the Python 2.6 distro folder where the other hook files lie.
I invoke the hook via the module pattern on the linux box:
pretxnchangegroup.pushtest = python:mycompanyname.testcommit.exportpatches
In the folder "mycompanyname" is the file testcommit.py and the function is named exportpatches. It works locally without a problem.
The strange thing: it worked once, but it's kind of unstable: sometimes it just says that the function "mycompanyname.testcommit.exportpatches" is not defined, and sometimes it just uses an old version of the hook (I see that because it gives an old exception message instead of the newer one). And I don't know how to get exception messages in Python, so I'm lost there..
Second strange thing: these hook files also have a .pyc version, probably compiled, but my hook doesn't get such treatment. Is that autocompilation?
If I try the directory approach to point to the file, I get a straight 500 internal error on push.
I'm really lost and desperate by now, because the stuff has to work pretty soon, and I'm banging my head against the wall right now.. | Mercurial hook: isn't recompiled after change? | 1.2 | 0 | 0 | 44 |
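On the "I don't know how to get exception messages in python" part: wrapping the hook body in try/except and printing traceback.format_exc() through Mercurial's ui object usually surfaces the real error instead of a bare "function not defined" or 500. A hedged sketch of what mycompanyname/testcommit.py could look like; only the structure is meant literally, the body is a placeholder.

```python
import traceback

def exportpatches(ui, repo, **kwargs):
    try:
        # ... the real review/export logic goes here ...
        return False                      # falsy result = let the push through
    except Exception:
        ui.warn(traceback.format_exc())   # show the full traceback to the pusher
        return True                       # truthy result = abort the changegroup
```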
20,275,176 | 2013-11-28T21:46:00.000 | 0 | 0 | 0 | 0 | mysql,python-3.x,python-module | 65,242,155 | 3 | false | 0 | 0 | pip3 install mysql-connector-python worked for me | 2 | 4 | 0 | trying to import python-mysql.connector on Python 3.2.3 and receiving an odd stack. I suspect bad configuration on my ubuntu 12.04 install.
vfi@ubuntu:/usr/share/pyshared$ python3
Python 3.2.3 (default, Sep 25 2013, 18:22:43)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import mysql.connector
Traceback (most recent call last):
File "", line 1, in
ImportError: No module named mysql.connector
Error in sys.excepthook:
Traceback (most recent call last):
File "/usr/share/pyshared/apport_python_hook.py", line 66, in apport_excepthook
from apport.fileutils import likely_packaged, get_recent_crashes
File "apport/__init__.py", line 1, in
from apport.report import Report
File "apport/report.py", line 20, in
import apport.fileutils
File "apport/fileutils.py", line 22, in
from apport.packaging_impl import impl as packaging
File "apport/packaging_impl.py", line 20, in
import apt
File "apt/__init__.py", line 24, in
from apt.package import Package
File "apt/package.py", line 1051
return file_list.read().decode("utf-8").split(u"\n")
^
SyntaxError: invalid syntax
Original exception was:
Traceback (most recent call last):
File "", line 1, in
ImportError: No module named mysql.connector
Here is the state of the related modules on my PC:
vfi@ubuntu:/usr/share/pyshared$ sudo aptitude search python3-apt
i python3-apt - Python 3 interface to libapt-pkg
p python3-apt:i386 - Python 3 interface to libapt-pkg
p python3-apt-dbg - Python 3 interface to libapt-pkg (debug extension)
p python3-apt-dbg:i386 - Python 3 interface to libapt-pkg (debug extension)
v python3-apt-dbg:any -
v python3-apt-dbg:any:i386 -
v python3-apt:any -
v python3-apt:any:i386 -
vfi@ubuntu:/usr/share/pyshared$ sudo aptitude search python-apt
i python-apt - Python interface to libapt-pkg
p python-apt:i386 - Python interface to libapt-pkg
i python-apt-common - Python interface to libapt-pkg (locales)
p python-apt-dbg - Python interface to libapt-pkg (debug extension)
p python-apt-dbg:i386 - Python interface to libapt-pkg (debug extension)
v python-apt-dbg:any -
v python-apt-dbg:any:i386 -
p python-apt-dev - Python interface to libapt-pkg (development files)
p python-apt-doc - Python interface to libapt-pkg (API documentation)
v python-apt-p2p -
v python-apt-p2p-khashmir -
v python-apt:any -
v python-apt:any:i386 -
i python-aptdaemon - Python module for the server and client of aptdaemon
p python-aptdaemon-gtk - Transitional dummy package
i python-aptdaemon.gtk3widgets - Python GTK+ 3 widgets to run an aptdaemon client
p python-aptdaemon.gtkwidgets - Python GTK+ 2 widgets to run an aptdaemon client
i python-aptdaemon.pkcompat - PackageKit compatibilty for AptDaemon
p python-aptdaemon.test - Test environment for aptdaemon clients
vfi@ubuntu:/usr/share/pyshared$ sudo aptitude search python-mysql.connector
pi python-mysql.connector - pure Python implementation of MySQL Client/Server protocol
Hope you can help!
Thanks | ImportError: No module named mysql.connector using Python3? | 0 | 1 | 0 | 20,264 |
20,275,176 | 2013-11-28T21:46:00.000 | 5 | 0 | 0 | 0 | mysql,python-3.x,python-module | 20,275,797 | 3 | true | 0 | 0 | Finally figured out what my problem was.
python-mysql.connector was not a py3 package, and neither apt-get nor aptitude was proposing such a version.
I managed to install it with pip3, which was not so simple on Ubuntu 12.04 because pip3 is only bundled with Ubuntu starting at 12.10, and the package does not have the same name under pip...
vfi@ubuntu:$sudo apt-get install python3-setuptools
vfi@ubuntu:$sudo easy_install3 pip
vfi@ubuntu:$ pip --version
pip 1.4.1 from /usr/local/lib/python3.2/dist-packages/pip-1.4.1-py3.2.egg (python 3.2)
vfi@ubuntu:$sudo pip install mysql-connector-python | 2 | 4 | 0 | trying to import python-mysql.connector on Python 3.2.3 and receiving an odd stack. I suspect bad configuration on my ubuntu 12.04 install.
vfi@ubuntu:/usr/share/pyshared$ python3
Python 3.2.3 (default, Sep 25 2013, 18:22:43)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import mysql.connector
Traceback (most recent call last):
File "", line 1, in
ImportError: No module named mysql.connector
Error in sys.excepthook:
Traceback (most recent call last):
File "/usr/share/pyshared/apport_python_hook.py", line 66, in apport_excepthook
from apport.fileutils import likely_packaged, get_recent_crashes
File "apport/__init__.py", line 1, in
from apport.report import Report
File "apport/report.py", line 20, in
import apport.fileutils
File "apport/fileutils.py", line 22, in
from apport.packaging_impl import impl as packaging
File "apport/packaging_impl.py", line 20, in
import apt
File "apt/__init__.py", line 24, in
from apt.package import Package
File "apt/package.py", line 1051
return file_list.read().decode("utf-8").split(u"\n")
^
SyntaxError: invalid syntax
Original exception was:
Traceback (most recent call last):
File "", line 1, in
ImportError: No module named mysql.connector
Here is the state of the related modules on my PC:
vfi@ubuntu:/usr/share/pyshared$ sudo aptitude search python3-apt
i python3-apt - Python 3 interface to libapt-pkg
p python3-apt:i386 - Python 3 interface to libapt-pkg
p python3-apt-dbg - Python 3 interface to libapt-pkg (debug extension)
p python3-apt-dbg:i386 - Python 3 interface to libapt-pkg (debug extension)
v python3-apt-dbg:any -
v python3-apt-dbg:any:i386 -
v python3-apt:any -
v python3-apt:any:i386 -
vfi@ubuntu:/usr/share/pyshared$ sudo aptitude search python-apt
i python-apt - Python interface to libapt-pkg
p python-apt:i386 - Python interface to libapt-pkg
i python-apt-common - Python interface to libapt-pkg (locales)
p python-apt-dbg - Python interface to libapt-pkg (debug extension)
p python-apt-dbg:i386 - Python interface to libapt-pkg (debug extension)
v python-apt-dbg:any -
v python-apt-dbg:any:i386 -
p python-apt-dev - Python interface to libapt-pkg (development files)
p python-apt-doc - Python interface to libapt-pkg (API documentation)
v python-apt-p2p -
v python-apt-p2p-khashmir -
v python-apt:any -
v python-apt:any:i386 -
i python-aptdaemon - Python module for the server and client of aptdaemon
p python-aptdaemon-gtk - Transitional dummy package
i python-aptdaemon.gtk3widgets - Python GTK+ 3 widgets to run an aptdaemon client
p python-aptdaemon.gtkwidgets - Python GTK+ 2 widgets to run an aptdaemon client
i python-aptdaemon.pkcompat - PackageKit compatibilty for AptDaemon
p python-aptdaemon.test - Test environment for aptdaemon clients
vfi@ubuntu:/usr/share/pyshared$ sudo aptitude search python-mysql.connector
pi python-mysql.connector - pure Python implementation of MySQL Client/Server protocol
Hope you can help!
Thanks | ImportError: No module named mysql.connector using Python3? | 1.2 | 1 | 0 | 20,264 |
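Once the install succeeds, a quick smoke test along these lines confirms the module is importable and can talk to a server; the credentials and database name are placeholders.

```python
import mysql.connector

cnx = mysql.connector.connect(user='me', password='secret',
                              host='127.0.0.1', database='test')
cur = cnx.cursor()
cur.execute('SELECT VERSION()')
print(cur.fetchone())
cur.close()
cnx.close()
```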