Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
10,580,835 |
2012-05-14T09:41:00.000
| 1 | 0 | 0 | 1 |
python,mysql,gevent
| 12,335,813 | 2 | true | 1 | 0 |
Postgres may be better suited due to its asynchronous capabilities
| 2 | 4 | 0 |
I have found that ultramysql meets my requirements, but it has no documentation and no Windows binary package.
I have a program heavy on internet downloads and MySQL inserts, so I use gevent to handle the multiple download tasks. After I have downloaded and parsed the web pages, I need to insert the data into MySQL.
Does monkey.patch_all() make MySQL operations async?
Can anyone show me the correct way to go?
|
How to use mysql in gevent based programs in python?
| 1.2 | 1 | 0 | 1,197 |
10,580,835 |
2012-05-14T09:41:00.000
| 1 | 0 | 0 | 1 |
python,mysql,gevent
| 13,006,283 | 2 | false | 1 | 0 |
I think one solution is to use pymysql. Since pymysql uses pure-Python sockets, it should work with gevent after monkey patching.
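A minimal sketch of that combination, assuming a local MySQL database named crawl with a pages table (both hypothetical):

```python
from gevent import monkey
monkey.patch_all()  # patch the socket module before anything imports it

import gevent
import pymysql

def insert_page(url, body):
    # pymysql is pure Python, so its socket I/O yields to the gevent hub.
    conn = pymysql.connect(host='localhost', user='me', password='secret', db='crawl')
    try:
        with conn.cursor() as cur:
            cur.execute("INSERT INTO pages (url, body) VALUES (%s, %s)", (url, body))
        conn.commit()
    finally:
        conn.close()

jobs = [gevent.spawn(insert_page, url, body)
        for url, body in [('http://example.com', '<html>...</html>')]]
gevent.joinall(jobs)
```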
| 2 | 4 | 0 |
I have found that ultramysql meets my requirements, but it has no documentation and no Windows binary package.
I have a program heavy on internet downloads and MySQL inserts, so I use gevent to handle the multiple download tasks. After I have downloaded and parsed the web pages, I need to insert the data into MySQL.
Does monkey.patch_all() make MySQL operations async?
Can anyone show me the correct way to go?
|
How to use mysql in gevent based programs in python?
| 0.099668 | 1 | 0 | 1,197 |
10,581,737 |
2012-05-14T10:37:00.000
| 1 | 0 | 1 | 0 |
java,python,architecture
| 10,581,791 | 3 | true | 1 | 0 |
I am not sure if I understood the question correctly, but I suppose that you want to build a web app using multiple languages. My first guess is a service-oriented architecture: you build services in multiple languages and they communicate through JSON or XML.
| 3 | 1 | 0 |
I've searched for this in different places but I haven't found an exhaustive answer. Suppose we have separate modules written in different languages, each of which implements a certain part of the logic. We do not have permission to recompile them into one language (for example, using JPython).
I'm a real novice at this part of programming, so it is hard to find the words to describe this problem. It seems I am trying to find something like Maven, but for multiple languages; in addition, the modules may be precompiled.
Is this theoretically possible?
|
Building an infrastructure for developing web-applications using multiple programming languages(Python Java C#)
| 1.2 | 0 | 0 | 274 |
10,581,737 |
2012-05-14T10:37:00.000
| 1 | 0 | 1 | 0 |
java,python,architecture
| 10,582,487 | 3 | false | 1 | 0 |
I think rocco337's service-oriented suggestion is really a good idea.
But there's one slight downside to that particular approach: traffic between your services, because of all those HTTP requests. I heard Amazon suffered from it, but they managed it, I guess because they are a giant.
The alternative below also has its own downside; just think of it as a quick and dirty option.
A web application that I built recently was based on Python, PHP and a bunch of C modules. The way I mingled them was using simple command-line calls and shell scripts, and Python works really great as a glue language.
So basically what you have to do is:
A. Asynchronous approach (when your module needs more than a few seconds to finish its job):
open up a thread
run the command-line application (written in Java, C#, whatever)
show whatever view you want while waiting for the result
when you get the result from the command line, let the user reload, or use ajax to refresh your view with the result
B. Synchronous approach (the job is fairly simple):
run the command-line application
wait until you get the result
show the user a view with the result from step 2
Good luck with your project!
| 3 | 1 | 0 |
I've searched for this in different places but I haven't found an exhaustive answer. Suppose we have separate modules written in different languages, each of which implements a certain part of the logic. We do not have permission to recompile them into one language (for example, using JPython).
I'm a real novice at this part of programming, so it is hard to find the words to describe this problem. It seems I am trying to find something like Maven, but for multiple languages; in addition, the modules may be precompiled.
Is this theoretically possible?
|
Building an infrastructure for developing web-applications using multiple programming languages(Python Java C#)
| 0.066568 | 0 | 0 | 274 |
10,581,737 |
2012-05-14T10:37:00.000
| 1 | 0 | 1 | 0 |
java,python,architecture
| 10,581,810 | 3 | false | 1 | 0 |
It is certainly possible. For example I have developed web applications using a mix of:
Java (for back end interfaces, APIs and performance-sensitive code)
Clojure (for the main web app on the server side)
JavaScript (for client side code in the browser)
Overall, this has worked pretty well, in that you can use each of the languages for its specific strengths. Though be warned: it does require you to be pretty multi-skilled in all the languages to be effective, and it does require a lot more configuration of your environment to get everything working smoothly.
Tricks I have found useful:
Be very clear about why you are using each language. For example, JavaScript might be limited strictly to client side code in the browser.
It helps enormously if languages run in the same platform. For example, Clojure and Java both run in the JVM which makes interoperability much easier as you don't need any extra code for interfacing between the two.
Use an IDE that supports all of the languages you intend to use (in my case Eclipse)
Use a good build system with multi-language support. You want to be able to do a "one click build" that includes all the languages. IMHO Maven is the best tool for this (especially if you use something like the Eclipse plugin for integration with the IDE)
| 3 | 1 | 0 |
I've searched for this in different places but I haven't found an exhaustive answer. Suppose we have separate modules written in different languages, each of which implements a certain part of the logic. We do not have permission to recompile them into one language (for example, using JPython).
I'm a real novice at this part of programming, so it is hard to find the words to describe this problem. It seems I am trying to find something like Maven, but for multiple languages; in addition, the modules may be precompiled.
Is this theoretically possible?
|
Building an infrastructure for developing web-applications using multiple programming languages(Python Java C#)
| 0.066568 | 0 | 0 | 274 |
10,583,195 |
2012-05-14T12:19:00.000
| 1 | 0 | 1 | 0 |
python,oop
| 10,583,784 | 3 | false | 0 | 0 |
I have a .csv file
You're in luck; CSV support is built right in, via the csv module.
Do you suggest creating a class dictionary for accessing every instance?
I don't know what you think you mean by "class dictionary". There are classes, and there are dictionaries.
But I still need to provide a name to every single instance, right? How can I do that dynamically? Probably the best thing would be to use the unique ID, read from the file, as the instance name, but I think that numbers can't be used as instance names, can they?
Numbers can't be instance names, but they certainly can be dictionary keys.
You don't want to create "instance names" dynamically anyway (assuming you're thinking of having each in a separate variable or something gross like that). You want a dictionary. So just let the IDs be keys.
I miss pointers! :(
I really, honestly, can't imagine how you expect pointers to help here, and I have many years of experience with C++.
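A minimal sketch of that dictionary approach, assuming the CSV has columns named id, name and age (all hypothetical):

```python
import csv

class Person(object):
    def __init__(self, uid, name, age):
        self.uid = uid
        self.name = name
        self.age = age

people = {}  # unique ID -> Person instance, instead of per-instance "names"
with open('agents.csv') as f:
    for row in csv.DictReader(f):
        uid = int(row['id'])
        people[uid] = Person(uid, row['name'], int(row['age']))

print(people[42].name)  # look up any agent by its unique ID
```

Since networkx accepts any hashable object as a node, the Person instances stored this way can later be added to a graph directly.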
| 1 | 0 | 1 |
I apologise if this question has already been asked.
I'm really new to Python programming, and what I need to do is this:
I have a .csv file in which each line represent a person and each column represents a variable.
This .csv file comes from an agent-based C++ simulation I have done.
Now, I need to read each line of this file and for each line generate a new instance of the class Person(), passing as arguments every variable line by line.
My problem is this: what is the most pythonic way of generating these agents while keeping their unique ID (which is one of the attributes I want to read from the file)? Do you suggest creating a class dictionary for accessing every instance? But I still need to provide a name to every single instance, right? How can I do that dynamically? Probably the best thing would be to use the unique ID, read from the file, as the instance name, but I think that numbers can't be used as instance names, can they? I miss pointers! :(
I am sure there is a pythonic solution I cannot see, as I still have to rewire my mind a bit to think in pythonic ways...
Thank you very much, any help would be greatly appreciated!
And please remember that this is my first project in python, so go easy on me! ;)
EDIT:
Thank you very much for your answers, but I still haven't got an answer on the main point: how to create an instance of my class Person() for every line in my csv file. I would like to do that automatically! Is it possible?
Why do I need this? Because I need to create networks of these people with networkx and I would like to have "agents" linked in a network structure, not just dictionary items.
|
How can I dynamically generate class instances with single attributes read from flat file in Python?
| 0.066568 | 0 | 0 | 725 |
10,585,184 |
2012-05-14T14:20:00.000
| 1 | 0 | 0 | 0 |
python,canvas,tkinter
| 10,585,691 | 1 | false | 0 | 1 |
The canvas has many methods for finding objects. You could, for example, call find_closest to find the element closest to the coordinate you are wanting to check. Then, for the element it finds, you can use the coords method to find out if all of the coordinates of the element are identical.
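A small sketch of that find_closest/coords combination, assuming items are checked by their first coordinate pair:

```python
import tkinter as tk

def item_at(canvas, x, y):
    # find_closest returns the nearest item's id (if the canvas isn't empty);
    # comparing its first coordinates tells us whether it sits exactly at (x, y).
    nearest = canvas.find_closest(x, y)
    if nearest and canvas.coords(nearest[0])[:2] == [float(x), float(y)]:
        return nearest[0]
    return None

root = tk.Tk()
canvas = tk.Canvas(root, width=200, height=200)
canvas.pack()
canvas.create_rectangle(50, 50, 80, 80)
print(item_at(canvas, 50, 50))  # the rectangle's id
print(item_at(canvas, 10, 10))  # None
```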
| 1 | 0 | 0 |
Is there a way to track element properties in a Tkinter canvas?
Specifically, I want to know whether I have already created an element at a certain set of coordinates.
I believe I can do this by tracking sets of elements in a dictionary, but I was hoping for something more elegant.
|
Tracking list on elements on canvas (tkinter)
| 0.197375 | 0 | 0 | 153 |
10,586,778 |
2012-05-14T15:53:00.000
| 2 | 0 | 1 | 0 |
python,parsing
| 10,586,876 | 3 | false | 0 | 0 |
Is this a user-defined string, or one you're defining?
If it's a string you're creating, you could use eval (eval("20 < 30")), but if the string comes from the user, you might want to sanitize it first...
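If the string is user-supplied, a hedged middle ground is to parse it with ast and evaluate only simple literal comparisons yourself; a minimal sketch:

```python
import ast
import operator

# Comparison AST node types mapped to their operator functions.
_OPS = {ast.Lt: operator.lt, ast.Gt: operator.gt,
        ast.LtE: operator.le, ast.GtE: operator.ge,
        ast.Eq: operator.eq, ast.NotEq: operator.ne}

def eval_comparison(expr):
    """Safely evaluate a single comparison such as '20 < 30'."""
    node = ast.parse(expr, mode='eval').body
    if not isinstance(node, ast.Compare) or len(node.ops) != 1:
        raise ValueError('only single comparisons are allowed')
    left = ast.literal_eval(node.left)             # literals only
    right = ast.literal_eval(node.comparators[0])  # literals only
    return _OPS[type(node.ops[0])](left, right)    # KeyError for odd operators

print(eval_comparison('20 < 30'))  # True
```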
| 1 | 5 | 0 |
I have a boolean expression in a string, e.g. 20 < 30. Is there a simple way to parse and evaluate this string so that it returns True (in this case)?
ast.literal_eval("20 < 30") does not work.
|
test a boolean expression in a Python string
| 0.132549 | 0 | 0 | 1,321 |
10,588,289 |
2012-05-14T17:33:00.000
| 0 | 0 | 0 | 0 |
python,django
| 10,588,902 | 1 | false | 1 | 0 |
You will want to use Django sessions in your login view. Depending on how you set up the login view, you might want to query the Session objects, filter them, and compare datetime.now() with each session's expire_date.
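A rough sketch of that idea, called from a login view after authentication; the helper name is made up, and _auth_user_id is Django's internal session key (its stored type varies across versions):

```python
from django.contrib.sessions.models import Session
from django.utils import timezone

def kill_other_sessions(user, current_session_key):
    # Walk the live sessions; any session belonging to this user other than
    # the one just created gets deleted, enforcing a single login.
    for session in Session.objects.filter(expire_date__gte=timezone.now()):
        data = session.get_decoded()
        if str(data.get('_auth_user_id')) == str(user.pk) and \
                session.session_key != current_session_key:
            session.delete()
```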
| 1 | 2 | 0 |
Django allows a user to be logged in from multiple computers, in different sessions. Is there a way to stop a user from logging in from multiple machines at the same time? That is, if there is a live session with the user logged in on one browser or computer, he must not be allowed to log in on another computer.
This would be a useful hack for security purposes. Please advise.
|
Django:limiting user to login once at a time
| 0 | 0 | 0 | 1,188 |
10,588,730 |
2012-05-14T18:10:00.000
| 1 | 0 | 1 | 0 |
python,bottle
| 10,588,776 | 1 | true | 0 | 0 |
I would suspect formatting characters inserted by the textarea (e.g. newlines and carriage returns) to be the issue. Have you checked for this?
| 1 | 0 | 0 |
I am trying to make a web form where you can provide input either as a file or by pasting it into a textarea. But when the same data arrives at bottle, it is different: the data from the textarea is longer than the data from the file input. Why could this happen?
|
Why does the same file differ between textarea and file input?
| 1.2 | 0 | 1 | 111 |
10,589,590 |
2012-05-14T19:09:00.000
| 8 | 0 | 1 | 1 |
python,macos,version,homebrew
| 10,589,656 | 2 | true | 0 | 0 |
Lion uses Python 2.7 by default; 2.5 and 2.6 are also available.
/Library/Frameworks/Python.framework does not exist on a stock install of Lion. My guess is that you've ended up with this by installing some application.
The default Python install is primarily installed in /System/Library/Frameworks/Python.framework, although some components are located elsewhere.
Yes - you can brew install python@2 to get a Python 2.7 separate from the system version, or brew install python to get Python 3.7. Both will install to /usr/local, like any other Homebrew recipe.
| 1 | 12 | 0 |
I'm working on Mac OS 10.7 (Lion) and I have some questions:
What is the pre-installed version of Python on Lion?
I've been working on this computer for some time now, and I've installed lots of software in order to do college work, many times without knowing what I was really doing. The thing is: I now have, in /Library/Frameworks/Python.framework/Versions/, a folder called "7.0". I'm pretty sure there is no Python version 7. Is this folder native, or from a third-party program installation? Can I delete it? (It's using 1 GB on disk.)
Where is the original Python that comes with Mac OS located?
I've chosen Homebrew as my package manager; is there an easy way to manage Python versions with it?
|
Python Versions on Mac
| 1.2 | 0 | 0 | 35,177 |
10,590,363 |
2012-05-14T20:12:00.000
| 0 | 0 | 1 | 0 |
python-2.7
| 10,591,696 | 1 | false | 0 | 0 |
First of all, base64 is not "encryption". It's just an encoding format, and saving anything in base64 will not protect it from being read.
I think the best solution in your case is to use some kind of system-level "keychain". Otherwise, just ask the user for the password every time it is needed, though of course that may become annoying.
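One cross-platform way to reach a system-level keychain from Python is the third-party keyring package; a small sketch with made-up service and account names:

```python
import keyring  # pip install keyring; uses the OS keychain under the hood

# Store the credential once (e.g. when the user runs the GUI mode).
keyring.set_password('myapp', 'user@example.com', 's3cret')

# On later runs, read it back without touching any plain-text file.
password = keyring.get_password('myapp', 'user@example.com')
```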
| 1 | 0 | 0 |
Alright, so I am having issues with getting input from the user that will be used by the program until it is run again with certain options on the command line.
So say a user runs the program from cmd with the argument GUI; this will open a Tk window that asks for their email, the user presses submit, and the text from the entry box is saved to a variable. The program can now use it for that run, but on the next run, say with no parameters, nothing will be assigned to that variable, since it was cleared from memory.
I would find it ideal if I could just have it save the variable somehow after the run, since the user will use the program like this until they get a new email, at which point they would just run it with the GUI option again to assign a new one. Right now I am using a .txt file for that, but I find it a bit insecure even after encoding the email/pass with base64, since it can easily be decoded. How would I do this in a safer and more portable way, given that a user can easily forget not to delete the file, or forget to move the .txt file to the right directory?
|
How to save a variable after runtime in Python 2.7
| 0 | 0 | 0 | 198 |
10,590,497 |
2012-05-14T20:24:00.000
| 1 | 0 | 0 | 0 |
python,django,django-models
| 10,591,191 | 2 | false | 1 | 0 |
Make the REST calls using the built-in urllib (a bit clunky but functional) and wrap the interface in a class, with a method for each remote call. Your class can then translate to and from native Python types. That is what I'd do, anyway!
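A bare-bones sketch of such a wrapper, using Python 3's urllib.request (the base URL and endpoint are made up):

```python
import json
import urllib.request

class RemoteAPI(object):
    """Thin wrapper around a hypothetical third-party REST service."""

    def __init__(self, base_url):
        self.base_url = base_url.rstrip('/')

    def get_widget(self, widget_id):
        # One method per remote call; JSON comes back as native Python types.
        url = '%s/widgets/%d' % (self.base_url, widget_id)
        with urllib.request.urlopen(url) as resp:
            return json.loads(resp.read().decode('utf-8'))

api = RemoteAPI('http://api.example.com/v1')
widget = api.get_widget(7)  # a plain dict, ready for your views
```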
| 1 | 11 | 0 |
I'm building a Django application that needs to interact with a 3rd party RESTful API, making various GETs, PUTs, etc to that resource. What I'm looking for is a good way to represent that API within Django.
The most obvious, but perhaps less elegant solution seems to be creating a model that has various methods mapping to webservice queries. On the other hand, it seems that using something like a custom DB backend would provide more flexibility and be better integrated into Django's ORM.
Caveat: This is the first real project I've done with Django, so it's possible I'm missing something obvious here.
|
Consuming a RESTful API with Django
| 0.099668 | 0 | 0 | 6,261 |
10,592,913 |
2012-05-15T01:01:00.000
| 1 | 0 | 1 | 1 |
python,exe
| 11,889,192 | 9 | false | 0 | 0 |
For this you have two choices:
A downgrade to Python 2.6. This is generally undesirable because it is backtracking and may break a small portion of your scripts.
Your second option is to use some other form of exe converter. I recommend PyInstaller, as it seems to have the best results.
| 1 | 44 | 0 |
I am looking for a way to convert a Python Program to a .exe file WITHOUT using py2exe. py2exe says it requires Python 2.6, which is outdated. Is there a way this is possible so I can distribute my Python program without the end-user having to install Python?
|
How do I convert a Python program to a runnable .exe Windows program?
| 0.022219 | 0 | 0 | 185,806 |
10,593,027 |
2012-05-15T01:19:00.000
| 10 | 0 | 0 | 0 |
python,tkinter
| 10,593,117 | 2 | true | 0 | 1 |
Short version is, you can't. At least, not without doing extra work. The text widget doesn't directly support a variable option.
If you want to do all the work yourself it's possible to set up a trace on a variable so that it keeps the text widget up to date, and you can add bindings to the text widget to keep the variable up to date, but there's nothing built directly into Tkinter to do that automatically.
The main reason this isn't directly supported is that the text widget can have more than just ascii text -- it can have different fonts and colors, embedded widgets and images, and tags.
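A minimal sketch of the variable-to-widget half of that work; trace_add needs Python 3.6+ (older versions spell it var.trace("w", ...)):

```python
import tkinter as tk

root = tk.Tk()
var = tk.StringVar()
text = tk.Text(root)
text.pack()

def sync_text(*args):
    # Mirror the variable's value into the text widget.
    text.delete("1.0", "end")
    text.insert("1.0", var.get())

var.trace_add("write", sync_text)
var.set("hello")  # the widget now shows "hello"
root.mainloop()
```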
| 1 | 5 | 0 |
Basically, I want the body of a Text widget to change when a StringVar does.
|
How can I connect a StringVar to a Text widget in Python/Tkinter?
| 1.2 | 0 | 0 | 5,546 |
10,593,791 |
2012-05-15T03:29:00.000
| 0 | 0 | 1 | 0 |
python,msdn
| 10,593,968 | 3 | false | 0 | 0 |
You can use pydoc. For example, if you want to find all the methods of the file class:
In a Linux terminal, type pydoc file
As in Vim, use the j/k keys to navigate, or the / key to search for the method you want
| 2 | 0 | 0 |
I am new to Python, coming from the C# world. Is there any MSDN-like help available for Python programmers, where I can search for classes, their methods, properties, etc.?
Thanks,
Taimoor.
|
MSDN like help for python programmers
| 0 | 0 | 0 | 272 |
10,593,791 |
2012-05-15T03:29:00.000
| 0 | 0 | 1 | 0 |
python,msdn
| 10,594,034 | 3 | false | 0 | 0 |
From the official Python documentation:
You can also use pydoc to start an HTTP server on the local machine that will serve documentation to visiting Web browsers. pydoc -p 1234 will start a HTTP server on port 1234, allowing you to browse the documentation at http://localhost:1234/ in your preferred Web browser. pydoc -g will start the server and additionally bring up a small Tkinter-based graphical interface to help you search for documentation pages.
Now you can use pydoc via the browser, like MSDN web pages.
| 2 | 0 | 0 |
I am new to Python, coming from the C# world. Is there any MSDN-like help available for Python programmers, where I can search for classes, their methods, properties, etc.?
Thanks,
Taimoor.
|
MSDN like help for python programmers
| 0 | 0 | 0 | 272 |
10,594,517 |
2012-05-15T05:18:00.000
| 2 | 0 | 1 | 1 |
python
| 10,595,836 | 1 | false | 0 | 0 |
Using PyInstaller, for example, you could add the executable as a resource file. PyInstaller extracts the resources to a temp folder at startup, and from there you can access your resource files as you want.
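A common sketch for locating such a bundled file at runtime; sys._MEIPASS is the temp folder PyInstaller's one-file mode extracts into, and the binary name here is hypothetical:

```python
import os
import subprocess
import sys

def resource_path(relative):
    # In a PyInstaller one-file build, data files live in sys._MEIPASS;
    # during development they sit next to the script.
    base = getattr(sys, '_MEIPASS', os.path.abspath('.'))
    return os.path.join(base, relative)

subprocess.call([resource_path('mybinary'), '--some-flag'])
```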
| 1 | 2 | 0 |
Simple question:
I have a Python program which launches an executable.
The launcher uses the optparse module to set up the run and launch the binary through the shell.
Is it possible to bundle the launcher and the binary into a single package?
The platform of interest is Linux.
|
Bundling Python program with a native binary
| 0.379949 | 0 | 0 | 126 |
10,595,058 |
2012-05-15T06:13:00.000
| 1 | 0 | 1 | 0 |
python,django,dictionary,memcached,redis
| 10,597,896 | 4 | false | 1 | 0 |
5 MB isn't that large. You could keep it in memory in-process, and I recommend that you do, until it becomes clear from profiling and testing that that approach isn't meeting your needs. Always do the simplest thing possible.
Socket communication doesn't of itself introduce much overhead. You could probably pare it back a little by using a Unix domain socket. In any case, if you're not keeping your data in-process, you're going to have to talk over some kind of pipe.
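A minimal in-process sketch that also covers the daily update without a server restart, by re-reading the dump when its mtime changes (the path is hypothetical):

```python
import json
import os

DUMP_PATH = '/path/to/dump.json'  # hypothetical location of the daily dump
_cache = {'data': None, 'mtime': None}

def lookup(key):
    # Reload only when the file on disk has been replaced by the daily job.
    mtime = os.path.getmtime(DUMP_PATH)
    if _cache['mtime'] != mtime:
        with open(DUMP_PATH) as f:
            _cache['data'] = json.load(f)
        _cache['mtime'] = mtime
    return _cache['data'].get(key)
```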
| 3 | 10 | 0 |
I have a big key-value pair dump that I need to look up for my Django/Python webapp.
So, I have the following options:
Store it as a JSON dump and load it as a Python dict.
Store it in a dump.py and import the dict from it.
Use some system targeted at this problem [are these really meant for this use case?]:
Memcached
Redis
Any other option?
Which of the above is the right way to go?
How would you compare memcached and redis?
Update:
My dictionary is about 5 MB in size and will grow over time.
Using Redis/memcached adds the overhead of hitting a socket every time, so dump.py would be better: it takes time to load into memory, but after that it only does memory lookups.
My dictionary needs to be updated every day; considering that, dump.py will be a problem, since we have to restart the Django server to reload it, whereas I guess the change would be reflected on the fly in redis and memcached.
One uses a system like redis only when you have a large amount of data and have to look it up very frequently; in that case the socket gives overhead, so how do we achieve the advantage?
Please share your experiences on this!
|
Maintain a large dictionary in memory for Django-Python?
| 0.049958 | 0 | 0 | 2,666 |
10,595,058 |
2012-05-15T06:13:00.000
| 1 | 0 | 1 | 0 |
python,django,dictionary,memcached,redis
| 10,595,177 | 4 | false | 1 | 0 |
In the past, for a similar problem, I have used the idea of a dump.py. I would think that all of the other data structures would require a layer to convert objects of one kind into Python objects. However, I would still think that this depends on the data size and the amount of data you are handling. Memcached and redis should have better indexing and lookup when it comes to really large data sets and things like regex-based lookups. So my recommendation would be:
JSON: if you are serving the data over HTTP to some other service
Python file: if the data structure is not too large and you don't need any special kind of lookups
memcached and redis: if the data becomes really large
| 3 | 10 | 0 |
I have a big key-value pair dump that I need to look up for my Django/Python webapp.
So, I have the following options:
Store it as a JSON dump and load it as a Python dict.
Store it in a dump.py and import the dict from it.
Use some system targeted at this problem [are these really meant for this use case?]:
Memcached
Redis
Any other option?
Which of the above is the right way to go?
How would you compare memcached and redis?
Update:
My dictionary is about 5 MB in size and will grow over time.
Using Redis/memcached adds the overhead of hitting a socket every time, so dump.py would be better: it takes time to load into memory, but after that it only does memory lookups.
My dictionary needs to be updated every day; considering that, dump.py will be a problem, since we have to restart the Django server to reload it, whereas I guess the change would be reflected on the fly in redis and memcached.
One uses a system like redis only when you have a large amount of data and have to look it up very frequently; in that case the socket gives overhead, so how do we achieve the advantage?
Please share your experiences on this!
|
Maintain a large dictionary in memory for Django-Python?
| 0.049958 | 0 | 0 | 2,666 |
10,595,058 |
2012-05-15T06:13:00.000
| 2 | 0 | 1 | 0 |
python,django,dictionary,memcached,redis
| 10,595,172 | 4 | false | 1 | 0 |
Memcached, though a great product, is trumped by Redis in my book. Redis offers lots of things that memcached doesn't, like persistence.
It also offers more complex data structures, like hashes. What is your particular data dump? How big is it, and how large / what type are the values?
| 3 | 10 | 0 |
I have a big key-value pair dump that I need to look up for my Django/Python webapp.
So, I have the following options:
Store it as a JSON dump and load it as a Python dict.
Store it in a dump.py and import the dict from it.
Use some system targeted at this problem [are these really meant for this use case?]:
Memcached
Redis
Any other option?
Which of the above is the right way to go?
How would you compare memcached and redis?
Update:
My dictionary is about 5 MB in size and will grow over time.
Using Redis/memcached adds the overhead of hitting a socket every time, so dump.py would be better: it takes time to load into memory, but after that it only does memory lookups.
My dictionary needs to be updated every day; considering that, dump.py will be a problem, since we have to restart the Django server to reload it, whereas I guess the change would be reflected on the fly in redis and memcached.
One uses a system like redis only when you have a large amount of data and have to look it up very frequently; in that case the socket gives overhead, so how do we achieve the advantage?
Please share your experiences on this!
|
Maintain a large dictionary in memory for Django-Python?
| 0.099668 | 0 | 0 | 2,666 |
10,597,019 |
2012-05-15T08:38:00.000
| 3 | 0 | 0 | 0 |
python,google-app-engine
| 10,603,141 | 1 | true | 1 | 0 |
On App Engine, I think the best way to do this is to store the total pages inside the Shelf. Add an IntegerProperty field to the shelf, I'll call it totalPages. Every time you add or remove a book to the shelf, update totalPages appropriately. Note that this will need to be done in a transaction.
Then it's easy to search the Shelf objects by totalPages.
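A sketch of that counter with the old db API; the model and field names follow the question, and the transaction keeps concurrent updates consistent:

```python
from google.appengine.ext import db

class Shelf(db.Model):
    name = db.StringProperty()
    totalPages = db.IntegerProperty(default=0)

class Book(db.Model):
    shelf = db.ReferenceProperty(Shelf)
    pages = db.IntegerProperty()

def add_pages(shelf_key, pages):
    # Run inside a transaction so concurrent adds can't lose an update.
    def txn():
        shelf = db.get(shelf_key)
        shelf.totalPages += pages
        shelf.put()
    db.run_in_transaction(txn)

top_shelves = Shelf.all().order('-totalPages').fetch(25)
```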
| 1 | 1 | 0 |
I have two models, Shelf and Book, in my models.py. The Book model has a ReferenceProperty(Shelf) field; it also has an IntegerProperty field that stores the number of pages in the book. What I am trying to achieve is a list of the top 25 shelf names ordered by the highest number of pages (which should be the sum of the pages of all the books on that shelf), in descending order.
I am a beginner with Python programming. Please advise me.
|
python google app engine programming
| 1.2 | 0 | 0 | 116 |
10,597,284 |
2012-05-15T08:55:00.000
| 6 | 0 | 1 | 1 |
python,windows,makefile,cygwin,installation
| 10,607,864 | 6 | false | 0 | 0 |
@spacediver is right on. Run Cygwin's setup.exe again, and when you get to the packages screen, make sure you select make and python (and any other libs/apps you may need, perhaps gcc or g++).
| 5 | 20 | 0 |
I have installed the Cygwin terminal on Windows, but I also need to install python and make in Cygwin. All of these programs are needed to run the PETSc library.
Does anyone know how to install these components in Cygwin?
|
install python and make in cygwin
| 1 | 0 | 0 | 37,080 |
10,597,284 |
2012-05-15T08:55:00.000
| 12 | 0 | 1 | 1 |
python,windows,makefile,cygwin,installation
| 10,597,334 | 6 | false | 0 | 0 |
Look into Cygwin's native package manager, in the devel category. You should find make and python there.
| 5 | 20 | 0 |
I have installed the Cygwin terminal on Windows, but I also need to install python and make in Cygwin. All of these programs are needed to run the PETSc library.
Does anyone know how to install these components in Cygwin?
|
install python and make in cygwin
| 1 | 0 | 0 | 37,080 |
10,597,284 |
2012-05-15T08:55:00.000
| 7 | 0 | 1 | 1 |
python,windows,makefile,cygwin,installation
| 19,168,003 | 6 | false | 0 | 0 |
After running into this problem myself, I was overlooking all of the relevant answers saying to check setup.exe again. That was the solution for me; there are a few specific things to check:
Check /bin for "make.exe". If it's not there, you have not installed it correctly.
Run setup.exe. Don't be afraid, as new package installs are appended to your installation and do not overwrite it.
In setup.exe, make sure you run the install from the Internet and NOT from your local folder. This was where I was running into problems. Search for "make" and make sure you select Install for it; do not leave it as "Default".
| 5 | 20 | 0 |
I have installed the Cygwin terminal on Windows, but I also need to install python and make in Cygwin. All of these programs are needed to run the PETSc library.
Does anyone know how to install these components in Cygwin?
|
install python and make in cygwin
| 1 | 0 | 0 | 37,080 |
10,597,284 |
2012-05-15T08:55:00.000
| 0 | 0 | 1 | 1 |
python,windows,makefile,cygwin,installation
| 58,692,435 | 6 | false | 0 | 0 |
In my case, it happened because Python was not installed correctly: python.exe was referenced in the shell, but the file could not be found.
Please check that Cygwin's python is properly installed.
| 5 | 20 | 0 |
I have installed the Cygwin terminal on Windows, but I also need to install python and make in Cygwin. All of these programs are needed to run the PETSc library.
Does anyone know how to install these components in Cygwin?
|
install python and make in cygwin
| 0 | 0 | 0 | 37,080 |
10,597,284 |
2012-05-15T08:55:00.000
| 5 | 0 | 1 | 1 |
python,windows,makefile,cygwin,installation
| 43,129,128 | 6 | false | 0 | 0 |
Here is a command line version to install python in cygwin
wget rawgit.com/transcode-open/apt-cyg/master/apt-cyg
install apt-cyg /bin
apt-cyg install python
| 5 | 20 | 0 |
I have installed the Cygwin terminal on Windows, but I also need to install python and make in Cygwin. All of these programs are needed to run the PETSc library.
Does anyone know how to install these components in Cygwin?
|
install python and make in cygwin
| 0.16514 | 0 | 0 | 37,080 |
10,598,691 |
2012-05-15T10:22:00.000
| 3 | 0 | 0 | 0 |
python,django,scrapy
| 10,611,818 | 4 | false | 1 | 0 |
If I understand the problem correctly, you can get the url from response.url and then write it to item['url'].
In the spider: item['url'] = response.url
And in the pipeline: url = item['url']
Or put response.url into meta, as warvariuc wrote.
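In spider code, that assignment looks like this sketch (the item class and its url field are assumed to be defined in your items module):

```python
def parse_item(self, response):
    item = WebLinkItem()        # hypothetical Item/DjangoItem subclass
    item['url'] = response.url  # the page this callback was fired for
    # ... fill in the remaining fields from the response ...
    return item
```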
| 2 | 7 | 0 |
I'm using Scrapy, in particular Scrapy's CrawlSpider class to scrape web links which contain certain keywords. I have a pretty long start_urls list which gets its entries from a SQLite database which is connected to a Django project. I want to save the scraped web links in this database.
I have two Django models, one for the start urls such as http://example.com and one for the scraped web links such as http://example.com/website1, http://example.com/website2 etc. All scraped web links are subsites of one of the start urls in the start_urls list.
The web links model has a many-to-one relation to the start url model, i.e. the web links model has a Foreignkey to the start urls model. In order to save my scraped web links properly to the database, I need to tell the CrawlSpider's parse_item() method which start url the scraped web link belongs to. How can I do that? Scrapy's DjangoItem class does not help in this respect as I still have to define the used start url explicitly.
In other words, how can I pass the currently used start url to the parse_item() method, so that I can save it together with the appropriate scraped web links to the database? Any ideas? Thanks in advance!
|
How to access a specific start_url in a Scrapy CrawlSpider?
| 0.148885 | 0 | 0 | 5,316 |
10,598,691 |
2012-05-15T10:22:00.000
| 1 | 0 | 0 | 0 |
python,django,scrapy
| 43,817,221 | 4 | false | 1 | 0 |
Looks like warvariuc's answer requires a slight modification as of Scrapy 1.3.3: you need to override _parse_response instead of parse. Overriding make_requests_from_url is no longer necessary.
| 2 | 7 | 0 |
I'm using Scrapy, in particular Scrapy's CrawlSpider class to scrape web links which contain certain keywords. I have a pretty long start_urls list which gets its entries from a SQLite database which is connected to a Django project. I want to save the scraped web links in this database.
I have two Django models, one for the start urls such as http://example.com and one for the scraped web links such as http://example.com/website1, http://example.com/website2 etc. All scraped web links are subsites of one of the start urls in the start_urls list.
The web links model has a many-to-one relation to the start url model, i.e. the web links model has a Foreignkey to the start urls model. In order to save my scraped web links properly to the database, I need to tell the CrawlSpider's parse_item() method which start url the scraped web link belongs to. How can I do that? Scrapy's DjangoItem class does not help in this respect as I still have to define the used start url explicitly.
In other words, how can I pass the currently used start url to the parse_item() method, so that I can save it together with the appropriate scraped web links to the database? Any ideas? Thanks in advance!
|
How to access a specific start_url in a Scrapy CrawlSpider?
| 0.049958 | 0 | 0 | 5,316 |
10,601,010 |
2012-05-15T12:48:00.000
| 2 | 0 | 1 | 0 |
python,debugging,intellisense,pyramid
| 10,601,595 | 1 | false | 1 | 0 |
To my mind there are two viable options out there.
I actually use both.
Eclipse + Aptana Studio + PyDev, or Aptana Studio
Pros
Free
Decent auto-completion (IntelliSense-like system)
More plug-ins (since it's based on Eclipse)
Supports Django templates
Cons
Relatively poor HTML editor
No mako or jinja2 support (as far as I know)
PyCharm
Pros
Better auto-completion
Supports mako, jinja2 and Django templates
Good HTML editor
Cons
Not free
Both support debugging without too many problems.
| 1 | 1 | 0 |
There is a wonderful video on YouTube explaining how to debug Django applications with Python Tools for Visual Studio.
I wonder if the same thing is possible with Pyramid applications? Moreover, I would love to use VS's IntelliSense (hinting system) while writing for the Pyramid framework.
Or maybe there are other ways to achieve the same debug+IntelliSense effect. I'd be glad to hear any suggestions.
|
debugging and intellisensing pyramid application
| 0.379949 | 0 | 0 | 616 |
10,601,947 |
2012-05-15T13:37:00.000
| 0 | 0 | 0 | 0 |
python,concurrency,sqlalchemy
| 10,602,194 | 1 | true | 0 | 0 |
You said it yourself: you are using SQLAlchemy's event interface, not the RDBMS's, and SQLAlchemy does not communicate with the other instances connected to that DB.
SQLAlchemy's event system calls a function in your own process. It's up to you to make this function send a signal to the rest of them via the network (or however they are connected). As far as SQLAlchemy is concerned, it doesn't know about the other instances connected to your database.
So you might want to start another server on the machine where the database is running, and make all the other instances listen to it and act accordingly.
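For illustration, a sketch of the local half; the listener fires only in the process that made the change, and notify_other_instances is a made-up stand-in for your own network code:

```python
from sqlalchemy import event

@event.listens_for(SomeModel, 'after_update')  # SomeModel: your mapped class
def broadcast_change(mapper, connection, target):
    # SQLAlchemy calls this only in the process that did the update;
    # forwarding the news to the other instances is up to your own code.
    notify_other_instances(target.id)  # hypothetical network notification
```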
Hope it helps.
| 1 | 2 | 0 |
I'm currently developing an application which connects to a database using SQLAlchemy. The idea is to have several instances of the application running on different computers using the same database. I want changes in the database to be visible in all instances of the application once they are committed. I'm currently using SQLAlchemy's event interface; however, it's not working when I have several concurrent instances of the application: I change something in one of the instances, but no signals are emitted in the other instances.
|
Concurrency in sqlalchemy
| 1.2 | 1 | 0 | 1,036 |
10,603,596 |
2012-05-15T15:04:00.000
| 2 | 1 | 0 | 0 |
python,python-3.x,expect,fabric,pexpect
| 23,901,161 | 2 | false | 0 | 0 |
Happily, pexpect now supports python 3 (as of 2013 if not earlier).
It appears that @ThomasK has been able to add his pexpect-u Python 3 functionality (with some API changes) back into the main project. (Thanks Thomas!)
| 1 | 10 | 0 |
I would like to use an expect-like module in python3. As far as I know, neither pexpect nor fabric work with python3. Is there any similar package I can use? (If no, does anyone know if py3 support is on any project's roadmap?)
A perfectly overlapping feature set isn't necessary. I don't think my use case is necessary here, but I'm basically reimplementing a Linux expect script that does a telnet with some config-supplied commands, but extending functionality.
|
Is there an implementation of 'expect' or an expect-like library that works in python3?
| 0.197375 | 0 | 0 | 1,916 |
10,603,622 |
2012-05-15T15:05:00.000
| 2 | 0 | 1 | 0 |
python,multithreading,celery
| 10,603,968 | 3 | false | 0 | 0 |
NO! Please, for the love of your God or intelligent designer, don't do that! Don't continually create/spawn threads and try to micro-manage them. Use a thread pool: create some threads at startup and pass them a producer-consumer queue to wait on for class instances representing those HTTP tasks.
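A bare sketch of that pool (queue is the Python 3 name of the module, Queue on Python 2; update_series stands in for your HTTP-plus-update work):

```python
import queue
import threading

task_queue = queue.Queue()

def worker():
    while True:
        series_id = task_queue.get()  # blocks until a task arrives
        try:
            update_series(series_id)  # hypothetical HTTP fetch + update
        finally:
            task_queue.task_done()

for _ in range(10):  # fixed pool, created once at startup
    threading.Thread(target=worker, daemon=True).start()

# The producer loop just enqueues work; no threads are spawned per task.
task_queue.put('series-42')
task_queue.join()
```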
| 2 | 0 | 0 |
I am working on a realtime data grabber. I have a while True loop, and inside it I spawn threads that do relatively small tasks (I am querying a 3rd-party API over HTTP, and to achieve fast speeds I am querying in parallel).
Every thread takes care of updating a specific data series. This might take 2, 3 or even 5 seconds. However, my while True loop might spawn threads faster than the previous threads finish, so I need the spawned threads to wait for any earlier thread working on the same series.
In general, it's unpredictable how long a thread takes to finish, because the threads query an HTTP server...
I was thinking of creating a named semaphore for every series, so that a thread spawned for a specific series waits if it finds a previous thread working on the same series.
The only issue that I can see is a possible backlog of threads.
What is the best solution here? Should I look into things like Celery? I am currently using the threading module.
Thanks!
|
Signaling between threads in Python
| 0.132549 | 0 | 0 | 332 |
10,603,622 |
2012-05-15T15:05:00.000
| 2 | 0 | 1 | 0 |
python,multithreading,celery
| 10,603,972 | 3 | false | 0 | 0 |
You should use Queue.Queue. Create a queue for each series, and a thread that listens on that queue. Each time you need to read a series, put a request in the queue. The thread waits for items in the queue, and for each one it receives, it reads the data.
| 2 | 0 | 0 |
I am working on a realtime data grabber. I have a while True loop, and inside it I spawn threads that do relatively small tasks (I am querying a 3rd-party API over HTTP, and to achieve fast speeds I am querying in parallel).
Every thread takes care of updating a specific data series. This might take 2, 3 or even 5 seconds. However, my while True loop might spawn threads faster than the previous threads finish, so I need the spawned threads to wait for any earlier thread working on the same series.
In general, it's unpredictable how long a thread takes to finish, because the threads query an HTTP server...
I was thinking of creating a named semaphore for every series, so that a thread spawned for a specific series waits if it finds a previous thread working on the same series.
The only issue that I can see is a possible backlog of threads.
What is the best solution here? Should I look into things like Celery? I am currently using the threading module.
Thanks!
|
Signaling between threads in Python
| 0.132549 | 0 | 0 | 332 |
10,604,523 |
2012-05-15T15:58:00.000
| 3 | 0 | 0 | 1 |
python,asynchronous,twisted,twisted.web
| 10,624,853 | 2 | false | 1 | 0 |
As others have said, a Deferred on its own is just a promise of a value, and a list of things to do when the value arrives (or when there is a failure getting the value).
How they work is like this: some function sees that the value it wants to return is not yet ready. So it prepares a Deferred, and then arranges somehow for that Deferred to be called back ("fired") with the value once it's ready. That second part is what may be causing your confusion; Deferreds on their own don't control when or how they are fired. It's the responsibility of whatever created the Deferred.
In the context of a whole Twisted app, nearly everything is event-based, and events are managed by the reactor. Say your code used twisted.web.client.getPage(), so it now has a Deferred that will be fired with the result of the http fetch. What that means is that getPage() started up a tcp conversation with the http server, and essentially installed handlers in the reactor saying "if you see any traffic on this tcp connection, call a method on this Protocol object". And once the Protocol object sees that it has received the whole page you asked for, it fires your Deferred, whereupon your own code is invoked via that Deferred's callback chain.
So everything is callbacks and hooks, all the way down. This is why you should never have blocking code in a Twisted app unless it is on a separate thread: it would stop everything else from being handled too.
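A tiny self-contained sketch of that firing mechanism, with a timer standing in for a network Protocol:

```python
from twisted.internet import defer, reactor

def fetch_later():
    d = defer.Deferred()
    # Whoever creates the Deferred arranges for it to fire; here the
    # reactor fires it after 1s, just as a Protocol would on a full response.
    reactor.callLater(1, d.callback, 'page contents')
    return d

d = fetch_later()
d.addCallback(lambda result: print('got:', result))
d.addBoth(lambda _: reactor.stop())
reactor.run()
```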
Does that help?
| 1 | 2 | 0 |
Does it spawn a new thread underneath? A classical web server spawns a thread to serve each HTTP request; if with Twisted web I have to spawn a Deferred() each time I want to query MySQL, where's the gain? It looks like it wouldn't make sense if it spawned a thread, so how is it implemented?
|
How is twisted's Deferred implemented?
| 0.291313 | 0 | 0 | 673 |
10,607,350 |
2012-05-15T19:16:00.000
| 0 | 0 | 0 | 0 |
python,pickle
| 10,608,972 | 2 | false | 0 | 0 |
Metaprogramming is strong in Python; Python classes are extremely malleable. You can alter them after declaration any way you want, though it's best done in a metaclass or decorator. More than that, instances are malleable, independently of their classes.
A 'reference to a place' is often simply a string. E.g. a reference to an object's field is its name. Assume you have multiple node references inside your node object. You could have something like {persistent_id: (object, field_name), ...} as your table of unresolved references, which is easy to look up. Similarly, in lists of nodes, 'references to places' are indices.
BTW, could you use a key-value database for graph storage? You'd be able to pull nodes by ID without waiting.
| 2 | 1 | 1 |
I want to perform serialisation of an object graph in a modular way. That is, I don't want to serialise the whole graph. The reason is that this graph is big. I can keep timestamped versions of some parts of the graph, and I can do lazy access to postpone loading of the parts I don't need right now.
I thought I could manage this with metaprogramming in Python, but it seems that metaprogramming is not strong enough in Python.
Here's what I do for now. My graph is composed of several different objects. Some of them are instances of a special class. This class describes the root object to be pickled. This is where the modularity comes in. Each time I pickle something, it starts from one of those instances, and I never pickle two of them at the same time. Whenever there is a reference to another instance accessible from the root object, I replace this reference by a persistent_id, thus ensuring that I won't have two of them in the same pickling stream. The problem comes when unpickling the stream: I can find a persistent_id of an instance which is not loaded yet. When this is the case, I have to wait for the target instance to be loaded before allowing access to it. And I don't see any way to do that:
1/ I tried to build an accessor whose get methods return the target of the reference. Unfortunately, accessors must be placed in the class declaration; I can't assign them to the unpickled object.
2/ I could store somewhere the places where references have to be resolved. I don't think this is possible in Python: one can't keep a reference to a place (a field, or a variable); it is only possible to keep a reference to a value.
My problem may not be clear; I'm still looking for a clear formulation. I tried other things, like using explicit references which would be instances of some "Reference" class. It isn't very convenient, though.
Do you have any idea how to implement modular serialisation with pickle? Would I have to change the internal behaviour of Unpickler to be able to remember the places where I need to load the rest of the object graph? Is there another library more suitable for achieving similar results?
|
Modular serialization with pickle (Python)
| 0 | 0 | 0 | 289 |
10,607,350 |
2012-05-15T19:16:00.000
| 0 | 0 | 0 | 0 |
python,pickle
| 10,608,783 | 2 | false | 0 | 0 |
Here's how I think I would go about this.
Have a module-level dictionary mapping persistent_id to SpecialClass objects. Every time you initialise or unpickle a SpecialClass instance, make sure that it is added to the dictionary.
Override SpecialClass's __getattr__ and __setattr__ methods, so that specialobj.foo = anotherspecialobj merely stores a persistent_id in a dictionary on specialobj (let's call it specialobj.specialrefs). When you retrieve specialobj.foo, it finds the name in specialrefs, then finds the reference in the module-level dictionary.
Have a module-level check_graph function which goes through the known SpecialClass instances and checks that all of their specialrefs are available.
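A rough sketch of those three pieces together; the specialrefs name follows the answer, and everything else is illustrative:

```python
_registry = {}  # module level: persistent_id -> SpecialClass instance

class SpecialClass(object):
    def __init__(self, persistent_id):
        object.__setattr__(self, 'persistent_id', persistent_id)
        object.__setattr__(self, 'specialrefs', {})
        _registry[persistent_id] = self

    def __setattr__(self, name, value):
        if isinstance(value, SpecialClass):
            # Store the id, not the object, so pickling stays modular.
            self.specialrefs[name] = value.persistent_id
        else:
            object.__setattr__(self, name, value)

    def __getattr__(self, name):
        # Called only when normal lookup fails; resolve lazily by id.
        refs = object.__getattribute__(self, 'specialrefs')
        if name in refs:
            return _registry[refs[name]]
        raise AttributeError(name)

def check_graph():
    # Verify that every stored reference can actually be resolved.
    return all(ref in _registry
               for obj in _registry.values()
               for ref in obj.specialrefs.values())
```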
| 2 | 1 | 1 |
I want to perform serialisation of an object graph in a modular way. That is, I don't want to serialise the whole graph. The reason is that this graph is big. I can keep timestamped versions of some parts of the graph, and I can do lazy access to postpone loading of the parts I don't need right now.
I thought I could manage this with metaprogramming in Python, but it seems that metaprogramming is not strong enough in Python.
Here's what I do for now. My graph is composed of several different objects. Some of them are instances of a special class. This class describes the root object to be pickled. This is where the modularity comes in. Each time I pickle something, it starts from one of those instances, and I never pickle two of them at the same time. Whenever there is a reference to another instance accessible from the root object, I replace this reference by a persistent_id, thus ensuring that I won't have two of them in the same pickling stream. The problem comes when unpickling the stream: I can find a persistent_id of an instance which is not loaded yet. When this is the case, I have to wait for the target instance to be loaded before allowing access to it. And I don't see any way to do that:
1/ I tried to build an accessor whose get methods return the target of the reference. Unfortunately, accessors must be placed in the class declaration; I can't assign them to the unpickled object.
2/ I could store somewhere the places where references have to be resolved. I don't think this is possible in Python: one can't keep a reference to a place (a field, or a variable); it is only possible to keep a reference to a value.
My problem may not be clear; I'm still looking for a clear formulation. I tried other things, like using explicit references which would be instances of some "Reference" class. It isn't very convenient, though.
Do you have any idea how to implement modular serialisation with pickle? Would I have to change the internal behaviour of Unpickler to be able to remember the places where I need to load the rest of the object graph? Is there another library more suitable for achieving similar results?
|
Modular serialization with pickle (Python)
| 0 | 0 | 0 | 289 |
10,615,196 |
2012-05-16T09:00:00.000
| 9 | 0 | 1 | 0 |
python,list,range
| 10,615,351 | 3 | false | 0 | 0 |
len([x for x in l if x > 34 and x < 566])
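A generator variant of the same idea avoids building the intermediate list and uses Python's chained comparison:

```python
l = [9, 20, 413, 425]
print(sum(1 for x in l if 34 < x < 566))  # 2
```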
| 1 | 7 | 0 |
I have a list of elements (integers), and what I need to do is quickly check how many elements from this list fall within a specified range. An example is below.
The range is from 34 to 566.
l = [9,20,413,425]
The result is 2.
I can of course use a simple for loop for this purpose, comparing each element with the min and max values (34 < x < 566) and incrementing a counter when the condition is true; however, I think there might be a much easier way to do this, possibly a nice one-liner.
|
Check how many elements from a list fall within a specified range (Python)
| 1 | 0 | 0 | 2,870 |
10,615,980 |
2012-05-16T09:45:00.000
| 1 | 0 | 0 | 0 |
couchdb,replication,couchdb-python
| 10,616,421 | 1 | true | 1 | 0 |
If you want to make sure they're exactly the same, write a map job that emits the document path as the key, and the document's hash (generated any way you like) as the value. Do not include the _rev field in the hash generation.
You cannot reduce to a single hash because order is not guaranteed, but you can feed the resulting JSON document to a good diff program.
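A sketch of such a map function, written for couchdb-python's Python view server (with the default JavaScript view server the same logic would be written in JS); hashing a key-sorted JSON rendering makes property order irrelevant:

```python
def map_doc(doc):
    import hashlib
    import json
    body = dict(doc)
    body.pop('_rev', None)  # exclude _rev so equal content hashes equally
    stable = json.dumps(body, sort_keys=True)
    yield doc['_id'], hashlib.md5(stable.encode('utf-8')).hexdigest()
```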
| 1 | 1 | 0 |
I have a couchdb instance with database a and database b. They should contain identical sets of documents, except that the _rev property will be different, which, AIUI, means I can't use replication.
How do I verify that the two databases really do contain the same documents which are all otherwise 'equal'?
I've tried using the python-based couchdb-dump tool with a lot of sed magic to get rid of the _rev and MD5 and ETag headers, but then it still seems that property order in the JSON structure is slightly random, which means I still can't compare the output easily with something like diff.
Is there a better approach here? Have other people wanted to solve a similar problem?
|
Compare two couchdb databases
| 1.2 | 1 | 0 | 685 |
10,616,104 |
2012-05-16T09:53:00.000
| 0 | 0 | 0 | 1 |
python,design-patterns,thrift
| 10,649,734 | 1 | false | 0 | 0 |
Sounds like you need some mechanism to correlate requests with the different plugins available. Ideally, there should be a different URL path per set of operations published by each plugin.
I would consider implementing a map/dictionary of URL paths to plugins. Then, for each request received, do a lookup in the map, get the associated plugin, and send it the request accordingly. If there is no entry in the map, a redirect/proxy could be sent. For example, if URL = http://yourThriftServer/path/operation, the operation, or the path and operation, would map to a plugin.
An extra step would be to implement a sort of meta request, whereby a client could query which URL paths/operations are available on the server.
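The correlation map itself can be as small as this sketch; the PluginClient stand-in echoes instead of forwarding over Thrift:

```python
class PluginClient(object):
    """Stand-in for a Thrift client owned by one plugin."""
    def __init__(self, name):
        self.name = name

    def call(self, operation, *args):
        # In the real system this would forward over Thrift; here it echoes.
        return '%s handled %s%r' % (self.name, operation, args)

# operation name -> plugin client that serves it
PLUGIN_MAP = {
    'storage.get': PluginClient('storage'),
    'stats.query': PluginClient('stats'),
}

def dispatch(operation, *args):
    plugin = PLUGIN_MAP.get(operation)
    if plugin is None:
        raise KeyError('no plugin registered for %r' % operation)
    return plugin.call(operation, *args)

print(dispatch('storage.get', 'key1'))
```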
| 1 | 1 | 0 |
I am having a little trouble with a server/client design and wonder if anyone has any advice.
I have a Thrift server that abstracts a data store. The idea is that there will be a number of clients that are essentially out-of-process plugins that use the interface provided by the server to receive and manipulate the underlying data store, and also provide their own data.
There will be a number of other clients which simply access the data provided by the server and its "plugins".
The problem case is when one of these "plugins" wishes to provide its own data and an interface to that data. The server should have no knowledge of the plugin's data or interface.
I would ideally like all clients to access functionality through the main Thrift server, so it acts as a facade for the plugins. If a client requested some data provided by a plugin, the main server could delegate to the plugin to provide that data. I guess this would mean having each plugin be both a Thrift client and a server. I have written the server in Python, so I could probably handle Thrift calls that are not yet defined, but would it be possible to forward these calls to another Thrift server, i.e. act as a proxy?
An alternative is maybe to have the plugins be clients only and push data to the server. But the format of these messages would be unknown to the server and would have to be generic enough to accommodate different types of data. I am not sure how I would provide a useful interface to this data for other clients. As far as I can see, only the plugin knows how to store and manipulate the data it owns, so this idea probably would not work.
Thanks for any advice. Any suggestions welcomed.
|
Thrift server facade for clients
| 0 | 0 | 0 | 314 |
10,621,021 |
2012-05-16T14:46:00.000
| 0 | 0 | 1 | 0 |
python
| 10,621,225 | 4 | false | 0 | 0 |
"new" means a shallow copy of the portion of the list you sliced.
It depends on what you are trying to do. For your particular implementation, you might not care about the original data, but I'm sure you could come up with scenarios where you want to work with a subset of data without modifying the originals (though do keep in mind it is only a shallow copy, so there are many instances where you will be modifying the data in the original and slice when working on a slice). Also, it's not faster; in fact, it's actually slower, since the system needs to allocate memory and construct new objects. The gain is not speed, but functionality.
| 2 | 3 | 0 |
I am a newbie to Python; everywhere I read about list methods I see one thing:
The slice method returns a "new" list
What is meant here by a "new" list, and why is it faster than changing the original list?
Does it really matter if Python manipulates the original list? I mean, I can't use it anyway.
|
Python Lists(Slice method)
| 0 | 0 | 0 | 744 |
10,621,021 |
2012-05-16T14:46:00.000
| 0 | 0 | 1 | 0 |
python
| 10,621,232 | 4 | false | 0 | 0 |
When a function/method creates a new list, your script consumes double the amount of memory and incurs a little (or not so little) overhead while creating a duplicate of the old list.
If the list is really big, the performance of your script can drop very fast. That's why changing lists in place is preferred when you have large amounts of data.
| 2 | 3 | 0 |
I am a newbie to Python; everywhere I read about list methods I see one thing:
The slice method returns a "new" list
What is meant here by a "new" list, and why is it faster than changing the original list?
Does it really matter if Python manipulates the original list? I mean, I can't use it anyway.
|
Python Lists(Slice method)
| 0 | 0 | 0 | 744 |
10,622,581 |
2012-05-16T16:18:00.000
| 0 | 0 | 0 | 0 |
html,django,python-3.x
| 10,629,574 | 2 | false | 1 | 0 |
I can share my experience with you, as I have recently learned Django.
Instead of following any book, you should try to use the Django documentation, and also don't be afraid to look at the source code; it will help you understand how things work behind the scenes.
| 1 | 1 | 0 |
Hey, I recently heard about Django, and will hopefully be moving on to learn an HTML-type platform. I am currently learning Python 3 and wanted to know whether Django, especially in its recent editions, is the "best" (sorry about the arbitrariness of that).
Plus, I was hoping to hear of any good books/tutorials for Django, or for any other framework that you believe is more versatile, easier, etc. Most books don't seem to be up to date on Django, as there have apparently been big changes from 1.0 to 1.1 and another leap in 1.3, from what I've read.
Thanks a lot!
|
Django HTML quality and tutorials
| 0 | 0 | 0 | 282 |
10,626,766 |
2012-05-16T21:10:00.000
| 3 | 0 | 0 | 0 |
python,algorithm
| 10,627,590 | 2 | false | 0 | 0 |
Minimax is a way of exploring the space of potential moves in a two player game with alternating turns. You are trying to win, and your opponent is trying to prevent you from winning.
A key intuition is that if it's currently your turn, a two-move sequence that guarantees you a win isn't useful, because your opponent will not cooperate with you. You try to make moves that maximize your chances of winning and your opponent makes moves that minimize your chances of winning.
For that reason, it's not very useful to explore branches from moves that you make that are bad for you, or moves your opponent makes that are good for you.
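A compact, runnable sketch for tic-tac-toe; here board is a 9-element list holding 'X', 'O' or None per square (a representation chosen just for illustration):

```python
# The eight winning lines on a 3x3 board indexed 0..8.
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
        (0, 3, 6), (1, 4, 7), (2, 5, 8),
        (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def moves(board):
    return [i for i, v in enumerate(board) if v is None]

def minimax(board, player):
    # 'X' maximises the score, 'O' minimises it; the returned move is
    # the square the current player should actually take.
    w = winner(board)
    if w == 'X':
        return 1, None
    if w == 'O':
        return -1, None
    if not moves(board):
        return 0, None  # draw

    best = None
    for move in moves(board):
        board[move] = player
        score, _ = minimax(board, 'O' if player == 'X' else 'X')
        board[move] = None  # undo the trial move
        if best is None or \
           (player == 'X' and score > best[0]) or \
           (player == 'O' and score < best[0]):
            best = (score, move)
    return best

print(minimax([None] * 9, 'X'))  # (0, 0): perfect play leads to a draw
```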
| 1 | 10 | 0 |
I'm quite new to algorithms and I was trying to understand minimax. I read a lot of articles, but I still can't get how to implement it in a tic-tac-toe game in Python.
Can you try to explain it to me as simply as possible, maybe with some pseudo-code or some Python code?
I just need to understand how it works. I read a lot of stuff about it and I understood the basics, but I still can't get how it can return a move.
If you can, please don't link me tutorials and samples like (http://en.literateprograms.org/Tic_Tac_Toe_(Python)); I know that they are good, but I simply need an idiot-proof explanation.
Thank you for your time :)
|
Minimax explanation "for dummies"
| 0.291313 | 0 | 0 | 5,575 |
10,627,055 |
2012-05-16T21:36:00.000
| 0 | 0 | 0 | 0 |
java,python,nanotime
| 10,627,094 | 3 | false | 1 | 0 |
Divide the output of System.nanoTime() by 10^9. This is because it is in nanoseconds, while the output of time.time() is in seconds.
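A minimal sketch of the conversion in Python (the Java value is just the one from the question):
import time
py_seconds = time.time()              # seconds since the epoch, as a float
as_nanos = int(py_seconds * 10**9)    # same instant on the nanosecond scale
java_nanos = 1337203874231141000      # a System.nanoTime() style value
as_seconds = java_nanos / 1e9         # back to seconds, as described above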
| 1 | 5 | 0 |
Java's System.nanoTime() seems to give a long: 1337203874231141000L
while Python's time.time() will give something like 1337203880.462787.
How can I convert time.time()'s value to something that matches up with System.nanoTime()?
|
convert python time.time() to java.nanoTime()
| 0 | 0 | 0 | 3,163 |
10,628,262 |
2012-05-16T23:52:00.000
| -4 | 0 | 1 | 0 |
python,jupyter-notebook,ipython,jupyter
| 58,837,887 | 14 | false | 0 | 0 |
You can find your current working directory by running the pwd command (without quotes) in a Jupyter notebook cell.
| 5 | 305 | 0 |
I am starting to depend heavily on the IPython notebook app to develop and document algorithms. It is awesome; but there is something that seems like it should be possible, but I can't figure out how to do it:
I would like to insert a local image into my (local) IPython notebook markdown to aid in documenting an algorithm. I know enough to add something like <img src="image.png"> to the markdown, but that is about as far as my knowledge goes. I assume I could put the image in the directory represented by 127.0.0.1:8888 (or some subdirectory) to be able to access it, but I can't figure out where that directory is. (I'm working on a mac.) So, is it possible to do what I'm trying to do without too much trouble?
|
Inserting image into IPython notebook markdown
| -1 | 0 | 0 | 455,440 |
10,628,262 |
2012-05-16T23:52:00.000
| 0 | 0 | 1 | 0 |
python,jupyter-notebook,ipython,jupyter
| 67,960,394 | 14 | false | 0 | 0 |
I never could get "insert image" into a markdown cell to work. However, dragging and dropping the png file saved in the same directory as my notebook inserted a line of Markdown image syntax into the cell.
Pressing Shift+Enter then renders the cell, and the image is displayed in the notebook.
FWIW
| 5 | 305 | 0 |
I am starting to depend heavily on the IPython notebook app to develop and document algorithms. It is awesome; but there is something that seems like it should be possible, but I can't figure out how to do it:
I would like to insert a local image into my (local) IPython notebook markdown to aid in documenting an algorithm. I know enough to add something like <img src="image.png"> to the markdown, but that is about as far as my knowledge goes. I assume I could put the image in the directory represented by 127.0.0.1:8888 (or some subdirectory) to be able to access it, but I can't figure out where that directory is. (I'm working on a mac.) So, is it possible to do what I'm trying to do without too much trouble?
|
Inserting image into IPython notebook markdown
| 0 | 0 | 0 | 455,440 |
10,628,262 |
2012-05-16T23:52:00.000
| 3 | 0 | 1 | 0 |
python,jupyter-notebook,ipython,jupyter
| 19,664,281 | 14 | false | 0 | 0 |
minrk's answer is right.
However, I found that the images appeared broken in Print View (on my Windows machine running the Anaconda distribution of IPython version 0.13.2 in a Chrome browser)
The workaround for this was to use <img src="../files/image.png"> instead.
This made the image appear correctly in both Print View and the normal iPython editing view.
UPDATE: as of my upgrade to iPython v1.1.0 there is no more need for this workaround since the print view no longer exists. In fact, you must avoid this workaround since it prevents the nbconvert tool from finding the files.
| 5 | 305 | 0 |
I am starting to depend heavily on the IPython notebook app to develop and document algorithms. It is awesome; but there is something that seems like it should be possible, but I can't figure out how to do it:
I would like to insert a local image into my (local) IPython notebook markdown to aid in documenting an algorithm. I know enough to add something like <img src="image.png"> to the markdown, but that is about as far as my knowledge goes. I assume I could put the image in the directory represented by 127.0.0.1:8888 (or some subdirectory) to be able to access it, but I can't figure out where that directory is. (I'm working on a mac.) So, is it possible to do what I'm trying to do without too much trouble?
|
Inserting image into IPython notebook markdown
| 0.042831 | 0 | 0 | 455,440 |
10,628,262 |
2012-05-16T23:52:00.000
| 10 | 0 | 1 | 0 |
python,jupyter-notebook,ipython,jupyter
| 48,560,308 | 14 | false | 0 | 0 |
The latest version of Jupyter Notebook accepts copy/paste of images natively.
| 5 | 305 | 0 |
I am starting to depend heavily on the IPython notebook app to develop and document algorithms. It is awesome; but there is something that seems like it should be possible, but I can't figure out how to do it:
I would like to insert a local image into my (local) IPython notebook markdown to aid in documenting an algorithm. I know enough to add something like <img src="image.png"> to the markdown, but that is about as far as my knowledge goes. I assume I could put the image in the directory represented by 127.0.0.1:8888 (or some subdirectory) to be able to access it, but I can't figure out where that directory is. (I'm working on a mac.) So, is it possible to do what I'm trying to do without too much trouble?
|
Inserting image into IPython notebook markdown
| 1 | 0 | 0 | 455,440 |
10,628,262 |
2012-05-16T23:52:00.000
| 62 | 0 | 1 | 0 |
python,jupyter-notebook,ipython,jupyter
| 55,623,116 | 14 | false | 0 | 0 |
Getting an image into Jupyter NB is a much simpler operation than most people have alluded to here.
Simply create an empty Markdown cell.
Then drag-and-drop the image file into the empty Markdown cell.
The Markdown code that will insert the image then appears.
For example, a string shown highlighted in gray below will appear in the Jupyter cell:

Then execute the Markdown cell by hitting Shift-Enter. The Jupyter server will then insert the image, and the image will then appear.
I am running Jupyter Notebook server 5.7.4 with Python 3.7.0 on Windows 7.
This is so simple !!
UPDATE AS OF March 18, 2021:
This simple "Drag-and-Drop-from-Windows-File-System" method still works fine in JupyterLab. JupyterLab inserts the proper HTML code to embed the image directly and permanently into the notebook so the image is stored in the .ipynb file. I am running Jupyter Lab v2.2.7 on Windows 10 Python 3.7.9 still works in JupyterLab. I am running Jupyter Lab v2.2.7 using Python 3.7.9 on Windows 10.
This stopped working in Jupyter Classic Notebook v6.1.5 sometime last year. I filed a bug report with the Jupyter Classic Notebook developers.
It works again in the latest version of Jupyter Classic Notebook. I just tried it in v6.4 on 7/15/2021. Thank you Jupyter NB Classic Developers !!
| 5 | 305 | 0 |
I am starting to depend heavily on the IPython notebook app to develop and document algorithms. It is awesome; but there is something that seems like it should be possible, but I can't figure out how to do it:
I would like to insert a local image into my (local) IPython notebook markdown to aid in documenting an algorithm. I know enough to add something like <img src="image.png"> to the markdown, but that is about as far as my knowledge goes. I assume I could put the image in the directory represented by 127.0.0.1:8888 (or some subdirectory) to be able to access it, but I can't figure out where that directory is. (I'm working on a mac.) So, is it possible to do what I'm trying to do without too much trouble?
|
Inserting image into IPython notebook markdown
| 1 | 0 | 0 | 455,440 |
10,632,427 |
2012-05-17T08:47:00.000
| 1 | 0 | 1 | 0 |
python,list,matrix
| 10,632,557 | 4 | true | 0 | 0 |
First and foremost, such a matrix would have 10G elements. Considering that for any useful operation you would then need 30G elements, each taking 4-8 bytes, you cannot assume to do this at all on a 32-bit computer using any sort of in-memory technique. To solve this, I would use a) a genuine 64-bit machine, b) memory-mapped binary files for storage, and c) ditch Python.
Update
And as I calculated below: with 2 input matrices and 1 output matrix of 100000 x 100000 32-bit float/integer elements, that is 120 GB (not quite GiB, though) of data. Assume that on a home computer you could achieve a constant 100 MB/s I/O bandwidth. Since every single element of a matrix needs to be accessed for any operation, including addition and subtraction, the absolute lower limit would be 120 GB / (100 MB/s) = 1200 seconds, or 20 minutes, for a single matrix operation, even written in C, using the operating system as efficiently as possible, with memory-mapped I/O and so forth. For million-by-million matrices, each operation touches 100 times as much data and takes 100 times as long, roughly 1.5 days. And as the hard disk is saturated during that time, the computer might just be completely unusable.
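If you do stay in Python, a block-wise numpy.memmap sketch (the file names and block size are placeholders) keeps only a slice in RAM at a time; the disk-bandwidth arithmetic above still applies, though:
import numpy as np
n = 100000
a = np.memmap("a.dat", dtype=np.float32, mode="r", shape=(n, n))
b = np.memmap("b.dat", dtype=np.float32, mode="r", shape=(n, n))
out = np.memmap("out.dat", dtype=np.float32, mode="w+", shape=(n, n))
block = 1024                          # rows per pass; tune to your RAM
for i in range(0, n, block):
    out[i:i + block] = a[i:i + block] + b[i:i + block]
out.flush()                           # push everything to disk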
| 2 | 1 | 0 |
Circumstances
I have a procedure which constructs a matrix from a given list of values,
and the list keeps growing, to 100 thousand or a million values, which in turn results in a million x million matrix.
In the procedure, I do some add/sub/div/multiply operations on the matrix, based on a row, a column or just a single element.
Issues
The matrix is so big that I don't think doing the whole manipulation in memory would work.
Questions
Therefore, my question is:
how should I manipulate this huge matrix and the huge value list?
For example: where to store it, how to read it, etc., so that I can carry out my operations on the matrix without the computer getting stuck.
|
How do I operate on a huge matrix (100000x100000) stored as nested list?
| 1.2 | 0 | 0 | 1,461 |
10,632,427 |
2012-05-17T08:47:00.000
| 0 | 0 | 1 | 0 |
python,list,matrix
| 10,643,647 | 4 | false | 0 | 0 |
Your data structure is not possible with dense arrays; it is too large. If the matrix is, for instance, a binary matrix, you could look at representations for storing it, such as hashing larger blocks of zeros together into the same bucket.
| 2 | 1 | 0 |
Circumstances
I have a procedure which constructs a matrix from a given list of values,
and the list keeps growing, to 100 thousand or a million values, which in turn results in a million x million matrix.
In the procedure, I do some add/sub/div/multiply operations on the matrix, based on a row, a column or just a single element.
Issues
The matrix is so big that I don't think doing the whole manipulation in memory would work.
Questions
Therefore, my question is:
how should I manipulate this huge matrix and the huge value list?
For example: where to store it, how to read it, etc., so that I can carry out my operations on the matrix without the computer getting stuck.
|
How do I operate on a huge matrix (100000x100000) stored as nested list?
| 0 | 0 | 0 | 1,461 |
10,632,796 |
2012-05-17T09:13:00.000
| 3 | 0 | 1 | 0 |
java,php,python,sphinx
| 10,636,231 | 2 | false | 0 | 0 |
No, Sphinx can't read JSON directly. Converting to XML seems like the easiest way.
Note that you don't have to convert to a file: Sphinx can read the output of a script. So the script could just read the JSON file and output XML directly. No intermediate file is actually required.
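As a rough sketch of such a script (the JSON layout, a list of objects with id/title/body keys, is an assumption on my part), printing Sphinx xmlpipe2 markup straight to stdout:
import json
from xml.sax.saxutils import escape
docs = json.load(open("data.json"))
print('<?xml version="1.0" encoding="utf-8"?>')
print('<sphinx:docset>')
print('<sphinx:schema><sphinx:field name="title"/><sphinx:field name="body"/></sphinx:schema>')
for d in docs:
    print('<sphinx:document id="%d">' % d["id"])
    print('<title>%s</title><body>%s</body>' % (escape(d["title"]), escape(d["body"])))
    print('</sphinx:document>')
print('</sphinx:docset>')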
| 2 | 0 | 0 |
I have a well-formatted JSON file.
For Sphinx search, I first convert it into a Sphinx XML-formatted file.
Then, using xmlpipe on the newly generated XML file, I do the Sphinx search.
Is there any direct way to search the JSON without converting it into a specific XML file?
|
How to search using Sphinx from JSON formatted File?
| 0.291313 | 0 | 0 | 488 |
10,632,796 |
2012-05-17T09:13:00.000
| 0 | 0 | 1 | 0 |
java,php,python,sphinx
| 10,633,653 | 2 | false | 0 | 0 |
Sphinx has only two types of data source: the SQL data source and the xmlpipe data source. At the moment you can't directly search JSON files.
One solution I can think of is to store the JSON data in a database and then use the SQL data source. Just be creative when storing the JSON data and when indexing it.
| 2 | 0 | 0 |
I have a well-formatted JSON file.
For Sphinx search, I first convert it into a Sphinx XML-formatted file.
Then, using xmlpipe on the newly generated XML file, I do the Sphinx search.
Is there any direct way to search the JSON without converting it into a specific XML file?
|
How to search using Sphinx from JSON formatted File?
| 0 | 0 | 0 | 488 |
10,635,212 |
2012-05-17T11:50:00.000
| -1 | 0 | 0 | 0 |
python,django,authentication,hash,digest
| 10,635,489 | 2 | false | 1 | 0 |
AES is not a hashing algorithm. It's an encryption algorithm.
You can use hashing algorithms like SHA1 or MD5.
| 1 | 1 | 0 |
As far as my knowledge says in digest authentication, a client does an irreversible computation, using the password and a random value supplied by the server as input values. The result is transmitted to the server who does the same computation and authenticates the client if he arrives at the same value. Since the computation is irreversible, an eavesdropper can't obtain the password.
Keeping an eye on the above definition, I used CryptoJS.HmacSHA256("password", "key") in JavaScript to send the information to the Django server; now the problem is:
I need to check that on the server using the same logic, but Django has already hashed the password in its own format, for example using pbkdf2_sha256.
Should I use some reversible algorithm like AES? I don't think it is possible to replicate Django's hashing algorithm and write the same for the client side, is it?
|
Digest authentication in django
| -0.099668 | 0 | 0 | 1,753 |
10,636,024 |
2012-05-17T12:48:00.000
| 1 | 0 | 0 | 0 |
python,user-interface,pandas,dataframe
| 54,660,955 | 20 | false | 0 | 0 |
I've also been searching for a very simple GUI. I was surprised that no one mentioned gtabview.
It is easy to install (just pip3 install gtabview), and it loads data blazingly fast.
I recommend using gtabview if you are not using spyder or Pycharm.
| 1 | 70 | 1 |
I'm using the Pandas package and it creates a DataFrame object, which is basically a labeled matrix. Often I have columns that have long string fields, or dataframes with many columns, so the simple print command doesn't work well. I've written some text output functions, but they aren't great.
What I'd really love is a simple GUI that lets me interact with a dataframe / matrix / table. Just like you would find in a SQL tool. Basically a window that has a read-only spreadsheet like view into the data. I can expand columns, page up and down through long tables, etc.
I would suspect something like this exists, but I must be Googling with the wrong terms. It would be great if it is pandas specific, but I would guess I could use any matrix-accepting tool. (BTW - I'm on Windows.)
Any pointers?
Or, conversely, if someone knows this space well and knows this probably doesn't exist, any suggestions on if there is a simple GUI framework / widget I could use to roll my own? (But since my needs are limited, I'm reluctant to have to learn a big GUI framework and do a bunch of coding for this one piece.)
|
Python / Pandas - GUI for viewing a DataFrame or Matrix
| 0.01 | 0 | 0 | 100,991 |
10,636,409 |
2012-05-17T13:12:00.000
| 1 | 1 | 0 | 0 |
python,web-services,apache2,mod-wsgi,psycopg2
| 10,645,670 | 1 | false | 1 | 0 |
If you are using mod_wsgi in embedded mode, especially with the prefork MPM for Apache, then it is likely that Apache is killing off the idle processes. Try using mod_wsgi daemon mode, which keeps the processes persistent, and see if it makes a difference.
| 1 | 0 | 0 |
I have made a Python Ladon web service and I run it on Ubuntu with Apache2 and mod_wsgi (I use Python 2.6).
The web service connects to a PostgreSQL database with the psycopg2 Python module.
My problem is that the psycopg2 connection is closed (or destroyed) automatically after a short time (about 1 or 2 minutes).
On the other hand, if I run the server with the
ladon2.6ctl testserve
command (http://ladonize.org/index.php/Python_Configuration),
then the server works and the connection is not closed automatically.
I can't understand why the connection is closed with Apache + mod_wsgi, and in this case the web server is also very slow.
Can anyone help me?
|
Python psycopg2 + mod_wsgi: connection is very slow and automatically close
| 0.197375 | 1 | 0 | 450 |
10,637,637 |
2012-05-17T14:27:00.000
| 3 | 0 | 0 | 1 |
python,google-app-engine,clone
| 10,637,991 | 1 | false | 1 | 0 |
The problem is that your 'clone' application does not have access to Khans Academy's AppEngine datastore so there is no content to display. Even if you do use all of the code for their application, you are still going to have to generate all of your own content.
Even if you are planning to 'clone' their content too, you are going to have to do a lot of (probably manual) work to get it into your application's datastore.
| 1 | 2 | 0 |
I'm trying to create a KhanAcademy (KA) clone on Google App Engine (GAE). I downloaded the offline version of KA (http://code.google.com/p/khanacademy/downloads/list) for Mac, and set it up with GoogleAppEngineLauncher (https://developers.google.com/appengine/). Because KA was produced on Python 2.5, I have the setup running through the Python 2.5 included in the KA offline version download, and I added these extra flags to the app (to essentially duplicate the functionality of the included Run file):
--datastore_path=/Users/Tadas/KhanAcademy/code/datastore --use_sqlite
As is, GAELauncher is able to get that up and running perfectly fine on localhost. However, to get it up on my Google appspot domain, I need to change the application name in app.yaml. When I change "application: khan-academy" in app.yaml to a new name and try to run the local version via GAELauncher (or the included Run file), the site comes up but all the content (exercises, etc.) has disappeared (essentially, the site loses most of its functionality). If I try to "Deploy" the app in this state, I receive a 500 Server Error when I try to visit the appspot website. Any ideas as to what could be going wrong?
Thanks.
|
Creating a KhanAcademy clone via Google App Engine - issues with application name in app.yaml
| 0.53705 | 0 | 0 | 1,290 |
10,638,071 |
2012-05-17T14:49:00.000
| 1 | 0 | 0 | 0 |
.net,python,sql-server,web-services,suds
| 10,653,866 | 1 | false | 0 | 0 |
Suds is a library to connect via SOAP, so you may already have blown "efficiently transmitted" out of the window, as this is a particularly verbose format over the wire. Your maximum data size is relatively small, and so should almost certainly be transmitted back in a single message so the SOAP overhead is incurred only once. So you should create a web service that returns a list or array of results, and call it once. This should be straightforwardly serialised to a single XML body that Suds then gives you access to.
| 1 | 0 | 0 |
I wish to consume a .NET web service containing the results of a SQL Server query using a Python client. I have used the Python Suds library to interface to the same web service, but not with a result set. How should I structure the data so it is efficiently transmitted and consumed by a Python client? There should be a maximum of 40 rows of data, containing 60 bytes of data per row in 5 columns.
|
SQL Query result via .net webservice to a non .net- Python client
| 0.197375 | 1 | 0 | 175 |
10,640,720 |
2012-05-17T17:41:00.000
| 11 | 0 | 0 | 0 |
python,django,localhost,port
| 10,641,540 | 2 | true | 1 | 0 |
You need to know your server's IP or domain address. If you used example.com to access you server SSH, then launching Django with
./manage.py runserver 0.0.0.0:8002
and accessing it with http://example.com:8002 should work. But if you only know the IP address, then launch Django with that IP instead of 0.0.0.0, and access it with http://YOUR-IP:8002
| 1 | 6 | 0 |
I just set up Django on a Dreamhost server. I ran through everything but can't seem to get the welcome page. I got to the point where it says "Development server is running at 127.0.0.1:8002" (I tried 8000 but got the "that port is already in use" error). When I try to access that address in Chrome, I get Error 102 (net::ERR_CONNECTION_REFUSED): The server refused the connection.
Any idea why this is happening? I am stuck in a loop, I have no clue what is going on. Help is sincerely appreciated.
|
Localhost Server Refusing Connection
| 1.2 | 0 | 0 | 31,050 |
10,642,331 |
2012-05-17T19:32:00.000
| 0 | 0 | 0 | 0 |
python-3.x,pygame,pixels
| 10,856,777 | 1 | true | 0 | 1 |
So Ian Mallett was correct:
Efficient how? .set_at will be more space efficient, but obviously
you're setting individual pixels every time you want to change.
Loading an entirely new image at startup will be faster, but you'll
use more space. I'd guess you're wanting the second, but . . .
I wanted the second one: faster, even at the cost of more space.
So it's better to load the entire 'blue' image as well as the 'red' one, then switch between them as necessary.
Thanks Ian!
| 1 | 0 | 0 |
I'm making a little game for fun, and there is an event where you can take control of the enemies (muhahaha I can take over their little programmed minds)
I want to convert part of the image from red to blue.
Should I use surface.set_at() to change the colour values,
or should I just load a new image into the surface: for example, load "red_man.png" originally, then when he changes sides load "blue_man.png"?
Which is more efficient to do?
Thanks in advance :)
|
Python Pygame Changing Surfaces efficiently
| 1.2 | 0 | 0 | 106 |
10,643,506 |
2012-05-17T21:05:00.000
| 1 | 0 | 1 | 0 |
python,django
| 10,643,577 | 1 | true | 1 | 0 |
It needs to go in $PYTHONPATH instead. Create that variable if it's not already defined.
| 1 | 0 | 0 |
I installed Django and want to specify its path on my Mac, so I put the path into .profile and also checked $PATH to ensure it is set. However, when I go into the Python environment and type import django, it cannot be found. I have no idea why. Any suggestions?
|
Django path in Mac OS X
| 1.2 | 0 | 0 | 906 |
10,643,982 |
2012-05-17T21:46:00.000
| 2 | 0 | 1 | 0 |
python
| 10,644,350 | 3 | false | 0 | 0 |
The "Result too large" doesn't refer to the number of characters in the decimal representation of the number, it means that the number that resulted from your exponential function is large enough to overflow whatever type python uses internally to store floating point values.
You need to either use a different type to handle your floating-point calculations, or rework your code so that e**(-x) doesn't overflow or underflow.
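For example, a small sketch that clamps the overflow instead of raising, and truncates the printed result to 7 decimal digits:
import math
def exp_neg(x):
    try:
        return math.exp(-x)           # overflows a C double past ~709.78
    except OverflowError:
        return float("inf")
print(round(exp_neg(3.0), 7))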
| 1 | 1 | 0 |
Is there a way in python to truncate the decimal part at 5 or 7 digits?
If not, how can I keep a float like an e**(-x) value from getting too big?
Thanks
|
python e**(-x) OverflowError: (34, 'Result too large')
| 0.132549 | 0 | 0 | 6,061 |
10,644,183 |
2012-05-17T22:08:00.000
| 4 | 0 | 1 | 0 |
python,registry,preferences,persistent
| 10,644,199 | 8 | false | 0 | 0 |
Unless I"m missing something, regex has nothing to do with the windows registry. Just store a config file somewhere, like the user's home directory or whatever. At least that's what I'd do on linux. Store a file somewhere in ~/.config/
| 3 | 8 | 0 |
I'm working on a program in Python for Windows, and would like to save variables and user preferences so that I can recall them even after the program has been terminated and restarted.
Is there an ideal way to do this on Windows machines? Would _winreg and the Windows registry be suited for this task? Or do I need to create some sort of database of my own?
|
How to store variables/preferences in Python for later use
| 0.099668 | 0 | 0 | 5,574 |
10,644,183 |
2012-05-17T22:08:00.000
| 1 | 0 | 1 | 0 |
python,registry,preferences,persistent
| 10,644,409 | 8 | false | 0 | 0 |
While the pickle module is an obvious choice, the files it writes will be non-human-readable. For configuration files, it would be good if your configuration file was a simple text format that your users can read or even edit. So, I recommend you use JSON and the json module.
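A minimal sketch of that approach (the file location and name are just examples):
import json, os
PREFS_PATH = os.path.join(os.path.expanduser("~"), "myapp_prefs.json")
def save_prefs(prefs):
    with open(PREFS_PATH, "w") as f:
        json.dump(prefs, f, indent=2)      # human-readable on disk
def load_prefs():
    try:
        with open(PREFS_PATH) as f:
            return json.load(f)
    except (IOError, ValueError):
        return {}                          # fall back to defaults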
| 3 | 8 | 0 |
I'm working on a program in Python for Windows, and would like to save variables and user preferences so that I can recall them even after the program has been terminated and restarted.
Is there an ideal way to do this on Windows machines? Would _winreg and the Windows registry be suited for this task? Or do I need to create some sort of database of my own?
|
How to store variables/preferences in Python for later use
| 0.024995 | 0 | 0 | 5,574 |
10,644,183 |
2012-05-17T22:08:00.000
| 1 | 0 | 1 | 0 |
python,registry,preferences,persistent
| 10,644,447 | 8 | false | 0 | 0 |
There are several ways, divided into two main directions:
Access the Windows registry with the _winreg module.
Using the registry it is nontrivial to transfer the settings to another machine or to inspect them or back them up; you'd have to use regedit for those.
The other way is to store the preferences in a file in the user's 'My Documents' folder. This makes it easier to move the settings to another machine or user, and doesn't mix your program's settings with those of a host of other applications. So things like backing them up and restoring them are easier; you just copy one file. If you choose a text format, it is also easier to inspect and debug the settings.
Put your settings in a list, tuple or dictionary and save them with the cPickle module. This is probably one of the easiest methods. On the downside, while pickled data uses an ASCII format by default, it is not human-readable.
Use the ConfigParser module to save and load config files in a similar structure as Windows' .ini files. These files are human-readable.
Use the json module to store settings in json format. These files are human-readable.
Save the preferences as Python expressions to a text file, load it with execfile(). N.B. This could be used to execute arbitrary code, so there are safety considerations. But on the upside, it is very easy and readable.
| 3 | 8 | 0 |
I'm working on a program in Python for Windows, and would like to save variables and user preferences so that I can recall them even after the program has been terminated and restarted.
Is there an ideal way to do this on Windows machines? Would _winreg and the Windows registry be suited for this task? Or do I need to create some sort of database of my own?
|
How to store variables/preferences in Python for later use
| 0.024995 | 0 | 0 | 5,574 |
10,645,793 |
2012-05-18T01:57:00.000
| 1 | 0 | 0 | 0 |
python,flask-sqlalchemy
| 15,194,364 | 1 | false | 1 | 0 |
Your app's SELECT probably runs within its own transaction/session, so changes submitted by another session (e.g. a MySQL Workbench connection) are not yet visible to your SELECT. You can easily verify this by enabling the MySQL general log or by setting echo=True in your create_engine(...) call, so SQLAlchemy logs the statements it actually issues. Chances are you're starting your SQLAlchemy session in SET AUTOCOMMIT = 0 mode, which requires an explicit commit or rollback (when you restart/reload, Flask-SQLAlchemy does it for you automatically). Try either starting your session in autocommit=True mode or adding an explicit commit/rollback before calling your SELECT.
| 1 | 1 | 0 |
I'm seeing some unexpected behaviour with Flask-SQLAlchemy, and I don't understand what's going on:
If I make a change to a record using e.g. MySQL Workbench or Sequel Pro, the running app (whether running under WSGI on Apache, or from the command line) isn't picking up the change. If I reload the app by touching the WSGI file, or by reloading it (command line), I can see the changed record. I've verified this by running an all() query in the interactive shell, and it's the same: no change until I quit the shell and start again. I get the feeling I'm missing something incredibly obvious here; it's a single table, no joins etc. Running MySQL 5.5.19 and SQLAlchemy 0.7.7 on Python 2.7.3.
|
Flask SQLAlchemy not picking up changed records
| 0.197375 | 1 | 0 | 326 |
10,647,482 |
2012-05-18T06:01:00.000
| 1 | 0 | 0 | 0 |
python,django,postgresql,google-maps,postgis
| 10,648,479 | 2 | false | 1 | 0 |
"Using Python and Django" only, you're not going to do this. Obviously you're going to need Javascript.
So you may as well dump Google Maps and use an open-source web mapping framework. OpenLayers has a well-defined Javascript API which will let you do exactly what you want. Examples in the OpenLayers docs show how.
You'll thank me later, specifically when Google comes asking for a fee for their map tiles and you can't switch your Google Maps widget to OpenStreetMap or some other tile provider. This Actually Happens.
| 1 | 0 | 0 |
Background:
I'm trying to use a Google Map as an interface to mark out multiple polygons, that can be stored in a Postgres Database.
The Database will then be queried with a geocoded Longitude Latitude Point to determine which of the Drawn Polygons encompass the point.
Using Python and Django.
Question
How do I configure the Google Map to allow a user to click around and specify multiple polygon areas?
|
Mark Out Multiple Delivery Zones on Google Map and Store in Database
| 0.099668 | 1 | 0 | 667 |
10,648,729 |
2012-05-18T07:48:00.000
| 1 | 0 | 1 | 0 |
python,mongodb,gridfs
| 10,648,760 | 2 | false | 0 | 0 |
You could compute an MD5 hash of each new file and compare it against the hashes of existing files before saving.
| 2 | 5 | 0 |
Is there a way to avoid duplicate files in Mongo GridFS?
Or do I have to do that via application code (I am using pymongo)?
|
Mongo: avoid duplicate files in gridfs
| 0.099668 | 1 | 0 | 2,727 |
10,648,729 |
2012-05-18T07:48:00.000
| 5 | 0 | 1 | 0 |
python,mongodb,gridfs
| 10,650,262 | 2 | true | 0 | 0 |
The MD5 sum is already part of Mongo's gridfs meta-data, so you could simply set a unique index on that column and the server will refuse to store the file. No need to compare on the client side.
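With pymongo this would be roughly the following (the database name is a placeholder):
from pymongo import MongoClient
db = MongoClient().mydb
# GridFS keeps file metadata, including md5, in the fs.files collection
db.fs.files.create_index("md5", unique=True)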
| 2 | 5 | 0 |
Is there a way to avoid duplicate files in Mongo GridFS?
Or do I have to do that via application code (I am using pymongo)?
|
Mongo: avoid duplicate files in gridfs
| 1.2 | 1 | 0 | 2,727 |
10,650,676 |
2012-05-18T10:10:00.000
| 1 | 0 | 0 | 0 |
python,web-scraping
| 10,663,615 | 2 | false | 1 | 0 |
Splinter (http://splinter.cobrateam.info - uses Selenium) makes browsing iframe elements easy, at least as long as the iframe tag has an id attribute.
| 1 | 1 | 0 |
I am working on web scraping with Python.
The web page is populated using an iframe, and the content is filled in by Ajax (jQuery).
I have tried using the src of the iframe (with lxml, etc.), but it's of no use.
How can I extract the content of the iframe using Python modules?
Thanks
|
python web extraction of iframe (ajax ) content
| 0.099668 | 0 | 1 | 1,272 |
10,651,801 |
2012-05-18T11:27:00.000
| 2 | 0 | 1 | 0 |
python,multithreading,logging
| 10,653,105 | 1 | false | 0 | 0 |
In the Python logging module, loggers are managed by a logging.Manager instance. Usually there is only one logging manager, available as logging.Logger.manager. Loggers are identified by their name. Each time you call logging.getLogger('name'), the call is actually forwarded to logging.Logger.manager.getLogger, which holds a dict of loggers and returns the same logger for each 'name' every time.
So if you don't use a different name when getting the logger from a thread, you're actually using the same logger instance each time and don't have to worry about a memory leak.
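A small demonstration sketch of that behaviour:
import logging, threading
def worker():
    log = logging.getLogger("app.worker")     # same object in every thread
    log.info("logger id: %s", id(log))
logging.basicConfig(level=logging.INFO)
threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# all three lines report the same id: no new logger is created per thread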
| 1 | 1 | 0 |
I have tried logging in Python. It looks like once a logging instance is created by a thread, it won't be deleted. However, my program produces more than 100 threads per minute, and each creates its own logger, which may result in a kind of memory leak (the logging.Logger instances will not be collected by the garbage collector).
Can anyone help me with this? Is there a way to use loggers in multi-threaded applications?
|
Each thread create its own logger instance, logging their own event
| 0.379949 | 0 | 0 | 712 |
10,652,097 |
2012-05-18T11:48:00.000
| 4 | 1 | 0 | 1 |
python,testing,mocking,integration-testing,celery
| 10,653,559 | 3 | false | 1 | 0 |
Without the use of a special mock library, I propose preparing the code to run in a mock-up mode (toggled, for example, by a global variable). In mock-up mode, instead of calling the normal time function (like time.time() or whatever), you call a mock-up time function which returns whatever you need in your special case.
I would vote against changing the system time. That does not seem like a unit test but rather a functional test, as it cannot be done in parallel with anything else on that machine.
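A minimal sketch of that global-variable approach (names are illustrative):
import time
FAKE_NOW = None                  # set to a timestamp to enter mock-up mode
def now():
    # all application code calls this instead of time.time() directly
    return FAKE_NOW if FAKE_NOW is not None else time.time()
# in a test: pretend X + 1 days have passed, then run the periodic tasks
FAKE_NOW = time.time() + (30 + 1) * 24 * 3600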
| 1 | 20 | 0 |
I've built a paywalled CMS + invoicing system for a client and I need to get more stringent with my testing.
I keep all my data in a Django ORM and have a bunch of Celery tasks that run at different intervals that makes sure that new invoices and invoice reminders get sent and cuts of access when users don't pay their invoices.
For example I'd like to be a able to run a test that:
Creates a new user and generates an invoice for X days of access to the site
Simulates the passing of X + 1 days, and runs all the tasks I've got set up in Celery.
Checks that a new invoice for an other X days has been issued to the user.
The KISS approach I've come up with so far is to do all the testing on a separate machine and actually manipulate the date/time at the OS-level. So the testing script would:
Set the system date to day 1
Create a new user and generate the first invoice for X days of access
Advance then system date 1 day. Run all my celery tasks. Repeat until X + 1 days have "passed"
Check that a new invoice has been issued
It's a bit clunky but I think it might work. Any other ideas on how to get it done?
|
Simulating the passing of time in unittesting
| 0.26052 | 0 | 0 | 3,901 |
10,655,217 |
2012-05-18T15:13:00.000
| 10 | 0 | 1 | 0 |
python,matplotlib,ipython
| 10,660,806 | 6 | false | 0 | 0 |
At present, the closest you can come is to redraw it at a larger size using the figsize function. It expects dimensions in inches, which caught me out the first time I tried to use it.
There are some plans for a rich backend that would allow plots to be manipulated live, using HTML5, but I think it will be a few more months before that's ready.
If you're using the notebook on your local computer, for now the easiest option might be not to use inline mode, so the plots pop up as separate windows.
| 2 | 90 | 0 |
Is it possible to zoom into a plot if inline is activated? Especially for 3D plots, rotating and zooming are necessary features.
|
ipython notebook --pylab inline: zooming of a plot
| 1 | 0 | 0 | 84,875 |
10,655,217 |
2012-05-18T15:13:00.000
| 107 | 0 | 1 | 0 |
python,matplotlib,ipython
| 41,125,787 | 6 | false | 0 | 0 |
You can now use %matplotlib notebook instead of %matplotlib inline and you'll be able to interact with your plots.
| 2 | 90 | 0 |
Is it possible to zoom into a plot if inline is activated? Especially for 3D plots, rotating and zooming are necessary features.
|
ipython notebook --pylab inline: zooming of a plot
| 1 | 0 | 0 | 84,875 |
10,656,426 |
2012-05-18T16:32:00.000
| 0 | 0 | 0 | 0 |
python,sqlalchemy
| 10,778,146 | 2 | false | 0 | 0 |
What kind of environment are you looking to work with on top of SQLAlchemy?
Most likely, if you are using a popular web framework like django, Flask or Pylons, you can find many examples and tutorials specific to that framework that include SQLAlchemy.
This will boost your knowledge both with SQLAlchemy and whatever else it is you are working with.
Chances are, you won't find any good project examples in 'just' SQLAlchemy, as it is essentially a tool.
| 1 | 18 | 0 |
Are there any good example projects which use SQLAlchemy (with Python classes) that I can look into? Ideally ones with at least some basic database operations (CRUD).
I believe that looking at someone else's code is a good way to learn any programming language.
Thanks!
|
SQLAlchemy Example Projects
| 0 | 1 | 0 | 12,180 |
10,657,383 |
2012-05-18T17:46:00.000
| 2 | 0 | 1 | 0 |
python,image-processing
| 55,408,630 | 5 | false | 0 | 0 |
Use numpy.hstack() or numpy.vstack() based on whether you want the images next to each other or on top of each other. You can transform your images into numpy arrays if they are some weird format that numpy doesn't accept. Make sure that you set dtype=np.uint8 if you interpret images as arrays using the np.asarray() method.
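For example, a sketch with two placeholder file names (both images must share the same width and channel count for vstack to work):
import numpy as np
from PIL import Image
top = np.asarray(Image.open("top.png"), dtype=np.uint8)
bottom = np.asarray(Image.open("bottom.png"), dtype=np.uint8)
whole = np.vstack((top, bottom))          # one photo above the other
Image.fromarray(whole).save("stitched.png")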
| 1 | 26 | 0 |
So for this project I'm working on, I have 2 photos. These two photos need to be stitched together, one on the top and one on the bottom, and then you will be able to see the whole picture. Any ideas on what module I should use to do this?
|
Stitching Photos together
| 0.07983 | 0 | 0 | 36,810 |
10,659,211 |
2012-05-18T20:21:00.000
| 0 | 0 | 1 | 0 |
python,oop,inheritance,pass-by-reference,mutable
| 10,659,352 | 4 | false | 0 | 0 |
I think the problem you are running into here is that classes are themselves objects, and class members are members of the type object of the class rather than of its instances. From that perspective, it is easy to see why a subclass does not actually get its own copy of the parent's class-level member. A method I've used to solve this sort of problem in the past is to make the .instances class variable in the parent a dictionary, and then, in the parent's __init__ method, catalog the instances of each child in a list stored in the .instances dictionary under a key that is the type of self. Remember, even from within the parent's scope, calling type() on self gives you the type the object was created as, not the type of the local scope.
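A minimal sketch of that dictionary approach (the class names are illustrative):
class Base(object):
    _registry = {}                     # one dict shared by the whole hierarchy
    def __init__(self):
        # type(self) is the concrete subclass, so each class gets its
        # own list with no extra code in the children
        self._registry.setdefault(type(self), []).append(self)
    @classmethod
    def instances(cls):
        return cls._registry.get(cls, [])
class Child(Base):
    pass
Child(); Child()
print(len(Child.instances()))   # 2
print(len(Base.instances()))    # 0, since only Child instances were made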
| 1 | 0 | 0 |
I have a base class which has a blank list as a class attribute. Many child classes inherit from this base class. The intent is to use this list class attribute in conjunction with a class method in order to allow each child class to keep track of a certain set of its own instances. Each class is supposed to have a separate list which contains only its own instances.
When I made a naive implementation of this inheritance scheme, during debugging I noticed that every single child class was in fact sharing the exact same list as a class attribute, and I was once again having fun with the fact that Python passes lists by reference. On closer inspection, it seems like every class attribute of the parent, methods and values alike, is simply passed by reference to every child, and this doesn't change unless a particular attribute is explicitly overridden in the definition of the child.
What is the best way to reinitialize the class variables of a child upon its creation? Ideally, I'd like a way to ensure that every child starts with a cls.instances attribute that is a unique blank list which involves NO extra code in any of the children. Otherwise, what am I bothering with inheritance for in the first place? (don't answer that)
|
How can I ensure that every child class gets a fresh copy of its parent's class attributes (in Python)?
| 0 | 0 | 0 | 320 |
10,660,246 |
2012-05-18T21:59:00.000
| 0 | 0 | 1 | 0 |
python,restructuredtext,doctest
| 32,209,186 | 2 | false | 0 | 0 |
Adding doctests to your documentation makes sense to ensure that code in your documentation is actually working as expected. So, you're testing your documentation. For general code-testing, using doctests can't be recommended at all.
| 1 | 0 | 0 |
Another way to ask this:
If I wrote doctests in reST, can I use it for Sphinx or other automatic documentation efforts?
Background: I don't know how to use Sphinx and don't have much experience with reST either, so I am wondering whether I can use reST-written doctests anywhere useful other than with Sphinx?
|
Why would I write doctests in restructured text?
| 0 | 0 | 0 | 880 |
10,660,411 |
2012-05-18T22:16:00.000
| 5 | 0 | 0 | 0 |
python,python-db-api
| 10,660,537 | 3 | false | 0 | 0 |
The connection object is your connection to the database; close it when you're done talking to the database altogether. A cursor object is an iterator over a result set from a query; close one when you're done with that result set.
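A small sqlite3 sketch of that lifecycle:
import sqlite3
conn = sqlite3.connect("example.db")   # one connection to the database
cur = conn.cursor()                    # one cursor per query / result set
cur.execute("SELECT 1")
print(cur.fetchone())
cur.close()                            # done with this result set
conn.close()                           # done talking to the database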
| 1 | 41 | 0 |
I am confused about why Python needs a cursor object. I know JDBC, where the database connection is quite intuitive, but in Python I am confused by the cursor object. I am also unsure about the difference between the cursor.close() and connection.close() functions in terms of resource release.
|
difference between cursor and connection objects
| 0.321513 | 1 | 0 | 17,423 |
10,661,041 |
2012-05-18T23:41:00.000
| 1 | 0 | 0 | 0 |
python,macos,tkinter,glade
| 21,465,072 | 1 | false | 0 | 1 |
No, there is no way to get glade or other not-designed-for-tkinter GUI tools to work with tkinter. At least, not without a tremendous amount of effort.
| 1 | 2 | 0 |
Is there a way to get Glade, or any similar software, to work with Tkinter? I'm unable to install WxPython or any other libraries, so I need something compatible with what's installed on OSX.6 by default.
|
Glade with Tkinter?
| 0.197375 | 0 | 0 | 1,825 |
10,661,381 |
2012-05-19T00:38:00.000
| 4 | 0 | 0 | 0 |
python,svg,slice,animated
| 10,661,419 | 1 | true | 0 | 0 |
The support for animated svg in svgwrite seems to only work in the form of algorithmically moving objects in the drawing.
Well, yes. That's how SVG animation works; it takes the current objects in the image and applies transformations to them. If you want a "movie" then you will need to make a video from the images.
| 1 | 0 | 1 |
I have been using the svgwrite library to generate a sequence of svg images. I would like to turn this sequence of images into an animated svg. The support for animated svg in svgwrite seems to only work in the form of algorithmically moving objects in the drawing. Is it possible to use the time slices I have to generate an animated svg or am I stuck rasterizing them and creating a video from the images. Thanks!
|
Generating animated SVG with python
| 1.2 | 0 | 0 | 1,810 |
10,664,244 |
2012-05-19T10:12:00.000
| 0 | 0 | 0 | 0 |
python,django
| 69,156,705 | 19 | false | 1 | 0 |
You're probably going to use the wsgi.py file for production (this file is created automatically when you create the django project). That file points to a settings file. So make a separate production settings file and reference it in your wsgi.py file.
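A sketch of such a production wsgi.py ("mysite.settings.production" is a placeholder for your own module path):
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings.production")
application = get_wsgi_application()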
| 2 | 184 | 0 |
I have been developing a basic app. Now at the deployment stage it has become clear I have need for both a local settings and production settings.
It would be great to know the following:
How best to deal with development and production settings.
How to keep apps such as django-debug-toolbar only in a development environment.
Any other tips and best practices for development and deployment settings.
|
Django: How to manage development and production settings?
| 0 | 0 | 0 | 106,539 |
10,664,244 |
2012-05-19T10:12:00.000
| 0 | 0 | 0 | 0 |
python,django
| 71,316,114 | 19 | false | 1 | 0 |
What we do here is have a .env file for each environment. This file contains a lot of variables like ENV=development.
The settings.py file is basically a bunch of os.environ.get() calls, like ENV = os.environ.get('ENV').
So when you need to access that, you can do ENV = settings.ENV.
You would have to have a .env file for each of production, testing and development.
| 2 | 184 | 0 |
I have been developing a basic app. Now at the deployment stage it has become clear I have need for both a local settings and production settings.
It would be great to know the following:
How best to deal with development and production settings.
How to keep apps such as django-debug-toolbar only in a development environment.
Any other tips and best practices for development and deployment settings.
|
Django: How to manage development and production settings?
| 0 | 0 | 0 | 106,539 |
10,665,475 |
2012-05-19T13:11:00.000
| 0 | 0 | 0 | 0 |
python,django,static
| 10,668,094 | 3 | false | 1 | 0 |
I put them in a 'business' app. I feel there is no need to overthink it really.
| 1 | 1 | 0 |
It's said that you should keep each piece of functionality as an app and keep it as pluggable as possible.
So,
How do you organise pages like :
Homepage
About Us
Contact Us
etc
These are not exactly functionality, so how do Django devs manage these?
|
Django: How do you organise static pages in apps?
| 0 | 0 | 0 | 1,228 |
10,665,768 |
2012-05-19T13:55:00.000
| 1 | 1 | 0 | 1 |
python,eclipse-plugin,eclipse-pde
| 10,856,306 | 1 | true | 1 | 0 |
You can already create an External Launch config from Run>External Tools>External Tools Configurations. You are basically calling the program from eclipse. Any output should then show up in the eclipse Console view. External launch configs can also be turned into External Builders and attached to projects.
If you are looking to run your Python script within your JVM, then you need an implementation of Python in Java ... is that what you are looking for?
| 1 | 2 | 0 |
I want to generate an Eclipse plugin that just runs an existing Python script with parameters.
While this sounds very simple, I don't think it's easy to implement. I can generate an Eclipse plugin; my issue is not how to use PDE. But:
can I call the existing Python script from Java, from an Eclipse plugin?
it needs to run from the embedded console with some parameters
Is this reasonably easy to do? And I don't plan to reimplement it in any way. Calling it from command-line works very well. My question is: can Eclipse perform this, too?
Best,
Marius
|
Eclipse plugin that just runs a python script
| 1.2 | 0 | 0 | 1,884 |
10,665,804 |
2012-05-19T14:00:00.000
| 3 | 0 | 1 | 0 |
python,set
| 10,665,891 | 2 | true | 0 | 0 |
Basically, no, because, as I pointed out in my comment, it is perfectly possible for two unequal objects to share the same hash key.
A hash key does not point to either nothing or a single object; it points to a bucket which contains zero or more objects. The set implementation then needs to do equality comparisons against each of these to work out whether the object is in the set.
So you always need at least enough information to make an equality comparison. If you've got very large objects whose equality can be decided on a subset of their data, say 2 or 3 fields, you could consider creating a new object with just these fields and storing this in the set instead of the whole object.
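A small sketch of that idea (the class and field names are only examples):
class Record(object):
    def __init__(self, key_a, key_b, payload):
        self.key_a, self.key_b, self.payload = key_a, key_b, payload
seen = set()
r = Record(1, "x", "a very large payload...")
seen.add((r.key_a, r.key_b))     # small hashable surrogate for the object
print((1, "x") in seen)          # True: membership without storing the payload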
| 1 | 3 | 0 |
I want a data type that will allow me to efficiently keep track of objects that have been "added" to it, allowing me to test for membership. I don't need any other features.
As far as I can tell, Python does not have such a datatype. The closest to what I want is the Set, but the set will always store values (which I do not need).
Currently the best I can come up with is taking the hash() of each object and storing it in a set, but at a lower level a hash of the hash is being computed, and the hash string is being stored as a value.
Is there a way to use just the low-level lookup functionality of Sets without actually pointing to anything?
|
Is there a Set-like object that doesn't store values?
| 1.2 | 0 | 0 | 176 |
10,666,877 |
2012-05-19T16:15:00.000
| 0 | 1 | 0 | 1 |
python,sockets,tornado
| 10,667,711 | 2 | false | 0 | 0 |
Depending on the scale, the simple thing is to just use HTTP and the AsyncHTTPClient in Tornado. For the request<->response case, our application handles 300 connections/second with such an approach.
For the first, fire-and-forget case, you could also use the async HTTP client and just have the server close out the connection and continue working...
| 1 | 1 | 0 |
I use Tornado as the web server. I write some daemons in Python, which run on the server hardware. Sometimes the web server needs to send some data to a daemon and receive some computed results. There are two modes of working:
1. Asynchronous mode: the server sends some data to the daemons and doesn't need the results soon. Can I use a message queue to do this cleanly?
2. Synchronous mode: the server sends data to the daemons and will wait until it gets the results. Should I use sockets?
So what's the best way of communicating between Tornado and a Python-based daemon?
|
What's the best way of communication between tornado and Python based daemon?
| 0 | 0 | 0 | 732 |
10,668,116 |
2012-05-19T19:14:00.000
| 0 | 0 | 0 | 0 |
python,tkinter,pygame
| 72,318,955 | 3 | false | 0 | 1 |
I actually recommend Tkinter for simple Python game development, though.
| 2 | 12 | 0 |
I've been searching the web and reading about GUI development in Python and have read that Tkinter is one of the best GUI toolkits for Python. But I also know that Pygame is a library that can be used for GUI. As i'm fairly new to programming, could somebody please explain the differences between Pygame and Tkinter, and which should be used in what occasion. Which would be a better choice for a chess engine GUI.
Thank you very much in advance.
|
pygame vs tkinter.
| 0 | 0 | 0 | 26,041 |
10,668,116 |
2012-05-19T19:14:00.000
| 1 | 0 | 0 | 0 |
python,tkinter,pygame
| 64,797,294 | 3 | false | 0 | 1 |
Pygame is normally used to create games, though I would recommend using the arcade library instead. Pygame does not help you create GUIs; instead you should use tkinter, which is specifically designed for GUIs. For your project I would recommend using tkinter. Tkinter helps you create labels, buttons and the like, while pygame helps you draw graphics.
| 2 | 12 | 0 |
I've been searching the web and reading about GUI development in Python and have read that Tkinter is one of the best GUI toolkits for Python. But I also know that Pygame is a library that can be used for GUI. As i'm fairly new to programming, could somebody please explain the differences between Pygame and Tkinter, and which should be used in what occasion. Which would be a better choice for a chess engine GUI.
Thank you very much in advance.
|
pygame vs tkinter.
| 0.066568 | 0 | 0 | 26,041 |
10,668,992 |
2012-05-19T21:30:00.000
| 5 | 0 | 1 | 0 |
php,javascript,python,nlp
| 10,669,005 | 2 | true | 0 | 0 |
What you are describing is the task of "paraphrase" (or bidirectional "textual entailment"). This is an extremely hard problem and an open research area. I doubt that there is a system available that would do well enough on this task for real-world, general use.
If you have a very narrow set of transformations in mind (such as the "would you like" <-> "do you want" alternation), you could try and construct a set of transformation rules that convert one sentence to the other. These rules could act directly on the sentence or on a parse tree produced from a statistical parser.
| 1 | 4 | 0 |
I would like to find an NLP library in Python, PHP or even JavaScript for determining whether a sentence in a string is equivalent to a differently structured sentence?
For example, the library would need to be able to determine whether these two sentences are equivalent:
"Would you like the order for here or to go?"
"Do you want the order for here or to go?"
Is there such a thing? Or would it actually be easier for me to build something like this myself for the specific application I need it for?
|
NLP library for determining if a sentence is equivalent to another sentence?
| 1.2 | 0 | 0 | 623 |
10,669,270 |
2012-05-19T22:24:00.000
| 2 | 0 | 0 | 0 |
python,numpy,memory-management,matrix-multiplication,large-data
| 12,110,615 | 2 | false | 0 | 0 |
You might try to use np.memmap and compute the 10x10 output matrix one element at a time:
just load the first row of the first matrix and the first column of the second, and then compute np.sum(row1 * col1).
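A rough sketch of that element-at-a-time scheme (the file names are placeholders, and the column reads from B will be slow because they are strided):
import numpy as np
N = 25000000
A = np.memmap("A.dat", dtype=np.float32, mode="r", shape=(10, N))
B = np.memmap("B.dat", dtype=np.float32, mode="r", shape=(N, 10))
C = np.zeros((10, 10), dtype=np.float64)
for i in range(10):
    row = np.asarray(A[i])                    # one ~100 MB row in memory
    for j in range(10):
        C[i, j] = np.dot(row, np.asarray(B[:, j]))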
| 1 | 3 | 1 |
I'm trying to perform an ordinary matrix multiplication between two huge matrices (10 x 25,000,000).
My memory runs out when I do so. How could I use numpy's memmap to handle this?
Is this even a good idea? I'm not so worried about the speed of the operation; I just want the result, even if it means waiting some time. Thank you in advance!
8 GB RAM, i7-2617M 1.5 GHz, Windows 7 64-bit. I'm using the 64-bit version of everything: Python (2.7), numpy, scipy.
Edit1:
Maybe h5py is a better option?
|
Python numpy memmap matrix multiplication
| 0.197375 | 0 | 0 | 1,031 |
10,669,497 |
2012-05-19T23:05:00.000
| 0 | 1 | 1 | 1 |
python,buildout
| 10,678,298 | 2 | false | 0 | 0 |
A) The mr.developer recipe mentioned on your recipe's page is probably a better choice.
B) You want your eggs in bin/python? Include them in the 'eggs' option of the zc.recipe.egg part of your buildout where you generate bin/python.
| 1 | 0 | 0 |
I'm using isotoma.buildout.autodevelop to develop eggs which I'm currently developing within my buildout.
I would like to include these developed eggs (which are located on the filesystem next to my buildout.cfg) as namespaces in my buildout's custom interpreter.
Can anyone provide an example of this or link to some resource ?
|
Including a locally developed python package in a buildout interpreter
| 0 | 0 | 0 | 143 |
10,669,677 |
2012-05-19T23:46:00.000
| 0 | 0 | 0 | 0 |
python,django,include
| 10,670,224 | 3 | false | 1 | 0 |
This isn't rocket science.
Create a constants file, say constants.txt. Put name/value pairs in that file in an easily-parseable format. For example name:value. Write a small program in your language of choice (Python would be great for this). This program reads in constants.txt, and then writes out an appropriate file for each of the languages you will be working with (like constants.py, constants.h, etc.)
For example, if constants.txt contained an entry of 'MODEL1_FIELD1_MAX_LENGTH: 20', then constants.h could contain an entry of the form '#define MODEL1_FIELD1_MAX_LENGTH 20', but constants.py would contain an entry of the form 'MODEL1_FIELD1_MAX_LENGTH=20'. You get the picture.
Now make that little program run automatically as part of your project's build process any time constants.txt is changed.
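A minimal sketch of such a generator (the parsing assumes the name:value format described above):
with open("constants.txt") as src:
    pairs = [line.strip().split(":", 1) for line in src if ":" in line]
with open("constants.py", "w") as py, open("constants.h", "w") as h:
    for name, value in pairs:
        name, value = name.strip(), value.strip()
        py.write("{0} = {1}\n".format(name, value))
        h.write("#define {0} {1}\n".format(name, value))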
There you go--your constants kept in one file, yet always synchronized and available for any language you use.
| 1 | 0 | 0 |
First, let me preface this by saying I am brand new to both Python and Django. I would like to be using a language I already know, like and prefer, alas the frameworks simply don't exist for them. Bottom line, I'm no "pythonista."
At any rate, I'm on the first couple of pages of a Django tutorial, and am at the point of creating the data model. Right away I see that the example hardcodes things like the max length of character fields right there in the model. This is something I simply won't do, as this information will not only change often and be required in many places, but it will also be used when I code up backend applications in another programming language.
The critical issue is, I won't be using python for backend stuff. I will be using another language. Programs in that language will need access to things like the max length of character fields.
In any of the other languages I use, this is a simple matter. I simply stick something like a max length in a file called MAXLENGTH, and include that file wherever I need it. If max length ever needs to change (and it will), I change it in one place. It is then changed in all other places, no matter what other languages are used.
I need this capability in Python/Django, or something which will achieve similar effect with minimal hassle. I did find an import statement, but it doesn't seem to do exactly what I want (it seems to import Python code, but I can't use a Python-only solution here).
Note that I'm not likely to entertain exotic, complicated solutions involving lots of complicated declarations of classes and what not. It's a simple problem, I need a simple solution.
Also, I would accept a solution in either Python, or Django (if Django has some special capability in this regard).
Much thanks.
|
Include needed for Python/Django
| 0 | 0 | 0 | 119 |
10,670,022 |
2012-05-20T01:08:00.000
| 4 | 0 | 1 | 0 |
python,django,heroku,dependencies
| 10,670,227 | 2 | true | 1 | 0 |
The short answer is that the site packages live in /app/.heroku/venv/lib/python2.7/site-packages. If you want to take a look around, you can open a remote shell using heroku run --app [app_name] bash. However, you probably don't want to just edit the packages in place, since there's no guarantee that Heroku won't wipe that clean and start fresh using your requirements.txt for another instance. Instead, if you need to customize a package, a good strategy is to create your own fork of the project's code and then specify your customized fork in requirements.txt.
For example, I use django-crowdsourcing for one of my sites, but needed to customize the code. So I created my own fork on google code and pointed to this custom fork using the following entry in requirements.txt:
-e hg+https://[email protected]/r/evangrim-django-crowdsourcing/@b824d8f377b5bc2706d9755650e3f35061f3e309#egg=django_crowdsourcing-dev
This tells pip to checkout a copy of my fork and use it to install the package into heroku's virtualenv for my app. Doing things this way is much more robust since it works within the pip installation framework that heroku expects you to use.
| 1 | 5 | 0 |
I've launched my app on Heroku but need to change a file in one of the dependent libs that I installed via the requirements.txt file.
On my local machine, this would just be in my virtual environment in lib > python2.7 > site-packages etc.
Where are these dependencies stored within Heroku's file structure? When I go into the python folder in lib, site-packages doesn't seem to contain my libraries.
|
Where are dependent libraries stored in Heroku?
| 1.2 | 0 | 0 | 1,969 |
10,670,325 |
2012-05-20T02:26:00.000
| 1 | 0 | 0 | 1 |
python,google-app-engine,python-module
| 10,671,099 | 1 | true | 1 | 0 |
There is no official library for what you're trying to do, and the Google Terms of Service prohibit using automated tools to 'scrape' search results.
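If you need programmatic results, the sanctioned route is Google's Custom Search JSON API rather than scraping. A rough sketch of calling it from App Engine follows (the API key and search-engine ID are placeholders you'd obtain from the Google APIs console; note the API pages results roughly 10 at a time, so it only partly covers a "large number of links" requirement):

    import json
    import urllib
    from google.appengine.api import urlfetch

    def search(query, api_key, cx):
        params = urllib.urlencode({'key': api_key, 'cx': cx, 'q': query})
        result = urlfetch.fetch('https://www.googleapis.com/customsearch/v1?' + params)
        data = json.loads(result.content)
        # Each result item carries 'title', 'link', and 'snippet'.
        return [item['link'] for item in data.get('items', [])]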
| 1 | 0 | 0 |
I am trying to build an application on GAE using Python. What it needs to do is take the query received from the user, pass it to Google search, and return the results to the user in a formatted way. I found lots of related questions asked here, but couldn't get a clear answer regarding my requirements. My needs are:
It needs to process a large number of links; many of the Google APIs described give only the top four links.
Which module is best for my requirement? Should I go for something like mechanize or urllib? I don't know whether they work on GAE. I also found a Google API, but it gives only a few results.
|
Python module for google search in GAE
| 1.2 | 0 | 0 | 143 |
10,670,576 |
2012-05-20T03:27:00.000
| 1 | 1 | 0 | 1 |
python,eclipse,installation,pydev,interpreter
| 20,319,027 | 5 | false | 0 | 0 |
I've been wrestling with this problem all evening and just now solved it. My problem was with a workspace saved in Google Drive, where Drive had created a lot of files with a (1) before the first period in the .metadata folder, presumably as a conflict-resolution thing.
Using File Commander (the search in Windows 7 ignored the parentheses?!), I searched for all the files containing (1) and deleted them. (It should be said I made a copy of the folder first and opened it as a workspace to experiment on, as I've never figured out how to import a project once the workspace is lost.)
In my case, it worked like a charm. Now I'm going to be very nervous about having Eclipse open on both coding machines at once. We'll see how it goes from here.
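If you hit the same thing, a quick Python sketch can at least locate the conflict copies before you delete anything by hand (the '(1)' marker is what Drive inserted in my case, and the workspace path is a placeholder):

    import os

    workspace = r'C:\path\to\workspace'  # placeholder -- point at your workspace
    for root, dirs, files in os.walk(workspace):
        for name in files:
            if '(1)' in name:  # Drive's conflict-copy marker
                print(os.path.join(root, name))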
| 4 | 3 | 0 |
I'm attempting to get Eclipse running as something more powerful than a colored text editor so that I can do some Maya scripting. There's literally nothing fancy about this setup; it just doesn't keep my interpreter once the prefs window is closed.
I can open and view .py docs fine, but PyDev will not keep the interpreter I give it. As soon as I save the prefs with vanilla python.exe chosen as the interpreter, Eclipse loses it. Opening the prefs again shows a blank interpreter page.
Auto Config used to work before I started mucking with settings. I had the same disappearing problem even though Auto Config could find everything.
c:\Python27 is set in my PYTHONPATH for user and system variables.
I've tried 32- and 64-bit Python (running Win7 64). I was using Aptana with PyDev and it seemed not to complain for a while, but then the interpreter went AWOL and I tried Eclipse to fix it. I can't start an actual project due to the missing interpreter, and the large "help" box that pops up when I'm typing is slowing me down considerably.
Eclipse 3.7.2
Python 2.7.2
Pydev 2.5
Thanks for your help, I'm pretty green at this.
|
Eclipse + PyDev won't keep interpreter settings within the same session.
| 0.039979 | 0 | 0 | 2,628 |
10,670,576 |
2012-05-20T03:27:00.000
| 0 | 1 | 0 | 1 |
python,eclipse,installation,pydev,interpreter
| 24,027,445 | 5 | false | 0 | 0 |
I had the same problem in Fedora: disappearing interpreter settings. The issue was that Eclipse couldn't write to the folder even after granting read+write access.
Solution: go to a terminal and type: sudo eclipse
Enter the admin password to run as admin. Solved.
| 4 | 3 | 0 |
I'm attempting to get Eclipse running as something more powerful than a colored text editor so that I can do some Maya scripting. There's literally nothing fancy about this setup; it just doesn't keep my interpreter once the prefs window is closed.
I can open and view .py docs fine, but PyDev will not keep the interpreter I give it. As soon as I save the prefs with vanilla python.exe chosen as the interpreter, Eclipse loses it. Opening the prefs again shows a blank interpreter page.
Auto Config used to work before I started mucking with settings. I had the same disappearing problem even though Auto Config could find everything.
c:\Python27 is set in my PYTHONPATH for user and system variables.
I've tried 32- and 64-bit Python (running Win7 64). I was using Aptana with PyDev and it seemed not to complain for a while, but then the interpreter went AWOL and I tried Eclipse to fix it. I can't start an actual project due to the missing interpreter, and the large "help" box that pops up when I'm typing is slowing me down considerably.
Eclipse 3.7.2
Python 2.7.2
Pydev 2.5
Thanks for your help, I'm pretty green at this.
|
Eclipse + PyDev won't keep interpreter settings within the same session.
| 0 | 0 | 0 | 2,628 |
10,670,576 |
2012-05-20T03:27:00.000
| 1 | 1 | 0 | 1 |
python,eclipse,installation,pydev,interpreter
| 27,167,729 | 5 | false | 0 | 0 |
I had the same issue. This is how I solved it:
Go to your workspace folder.
Edit the file ".pydevproject".
Change the path in the "PYTHON_PROJECT_INTERPRETER" PyDev property.
Save and you're good to go.
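For reference, the interpreter entry in .pydevproject is an XML property that looks roughly like this (the exact structure varies between PyDev versions, and the path value shown is a placeholder, so treat this as a sketch):

    <pydev_property name="org.python.pydev.PYTHON_PROJECT_INTERPRETER">C:\Python27\python.exe</pydev_property>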
| 4 | 3 | 0 |
I'm attempting to get Eclipse running as something more powerful than a colored text editor so that I can do some Maya scripting. There's literally nothing fancy about this setup; it just doesn't keep my interpreter once the prefs window is closed.
I can open and view .py docs fine, but PyDev will not keep the interpreter I give it. As soon as I save the prefs with vanilla python.exe chosen as the interpreter, Eclipse loses it. Opening the prefs again shows a blank interpreter page.
Auto Config used to work before I started mucking with settings. I had the same disappearing problem even though Auto Config could find everything.
c:\Python27 is set in my PYTHONPATH for user and system variables.
I've tried 32- and 64-bit Python (running Win7 64). I was using Aptana with PyDev and it seemed not to complain for a while, but then the interpreter went AWOL and I tried Eclipse to fix it. I can't start an actual project due to the missing interpreter, and the large "help" box that pops up when I'm typing is slowing me down considerably.
Eclipse 3.7.2
Python 2.7.2
Pydev 2.5
Thanks for your help, I'm pretty green at this.
|
Eclipse + PyDev won't keep interpreter settings within the same session.
| 0.039979 | 0 | 0 | 2,628 |
10,670,576 |
2012-05-20T03:27:00.000
| 1 | 1 | 0 | 1 |
python,eclipse,installation,pydev,interpreter
| 56,161,921 | 5 | false | 0 | 0 |
I encountered this problem, and the issue was that .project and .pydevproject were read-only, so Eclipse couldn't save the configuration.
Solution: make .project and .pydevproject writable.
| 4 | 3 | 0 |
I'm attempting to get Eclipse running as something more powerful than a colored text editor so that I can do some Maya scripting. There's literally nothing fancy about this setup; it just doesn't keep my interpreter once the prefs window is closed.
I can open and view .py docs fine, but PyDev will not keep the interpreter I give it. As soon as I save the prefs with vanilla python.exe chosen as the interpreter, Eclipse loses it. Opening the prefs again shows a blank interpreter page.
Auto Config used to work before I started mucking with settings. I had the same disappearing problem even though Auto Config could find everything.
c:\Python27 is set in my PYTHONPATH for user and system variables.
I've tried 32- and 64-bit Python (running Win7 64). I was using Aptana with PyDev and it seemed not to complain for a while, but then the interpreter went AWOL and I tried Eclipse to fix it. I can't start an actual project due to the missing interpreter, and the large "help" box that pops up when I'm typing is slowing me down considerably.
Eclipse 3.7.2
Python 2.7.2
Pydev 2.5
Thanks for your help, I'm pretty green at this.
|
Eclipse + PyDev won't keep interpreter settings within the same session.
| 0.039979 | 0 | 0 | 2,628 |