Dataset columns (the pipe-separated fields that follow each question and answer below appear in this order):

| Column | Type | Min | Max |
|---|---|---|---|
| Question | string (length) | 25 | 7.47k |
| Q_Score | int64 | 0 | 1.24k |
| Users Score | int64 | -10 | 494 |
| Score | float64 | -1 | 1.2 |
| Data Science and Machine Learning | int64 | 0 | 1 |
| is_accepted | bool (2 classes) | | |
| A_Id | int64 | 39.3k | 72.5M |
| Web Development | int64 | 0 | 1 |
| ViewCount | int64 | 15 | 1.37M |
| Available Count | int64 | 1 | 9 |
| System Administration and DevOps | int64 | 0 | 1 |
| Networking and APIs | int64 | 0 | 1 |
| Q_Id | int64 | 39.1k | 48M |
| Answer | string (length) | 16 | 5.07k |
| Database and SQL | int64 | 1 | 1 |
| GUI and Desktop Applications | int64 | 0 | 1 |
| Python Basics and Environment | int64 | 0 | 1 |
| Title | string (length) | 15 | 148 |
| AnswerCount | int64 | 1 | 32 |
| Tags | string (length) | 6 | 90 |
| Other | int64 | 0 | 1 |
| CreationDate | string (length) | 23 | 23 |
I would like to hear your opinion about the effective implementation of one-to-many relationship with Python NDB. (e.g. Person(one)-to-Tasks(many))
In my understanding, there are three ways to implement it.
Use 'parent' argument
Use 'repeated' Structured property
Use 'repeated' Key property
I usually choose an approach based on the logic below, but does it make sense to you?
If you have better logic, please teach me.
Use 'parent' argument
A transactional operation is required between these entities
A bidirectional reference is required between these entities
You strongly intend a 'Parent-Child' relationship
Use 'repeated' Structured property
The 'many' entities don't need to be used individually (they are always used together with the 'one' entity)
The 'many' entities are only referred to by the 'one' entity
The number of repeated items is less than 100
Use 'repeated' Key property
The 'many' entities need to be used individually
The 'many' entities can be referred to by other entities
The number of repeated items is more than 100
Option 2 increases the size of the entity, but it saves datastore operations. (We need to use a projection query to reduce the CPU time spent on deserialization, though.) Therefore, I use this approach as much as I can.
I really appreciate your opinion. | 11 | 6 | 1 | 0 | false | 14,749,034 | 0 | 1,389 | 2 | 1 | 0 | 14,739,044 | One thing that most GAE users will come to realize (sooner or later) is that the datastore does not encourage design according to the formal normalization principles that would be considered a good idea in relational databases. Instead it often seems to encourage design that is unintuitive and anathema to established norms. Although relational database design principles have their place, they just don't work here.
I think the basis for the datastore design instead falls into two questions:
How am I going to read this data and how do I read it with the minimum number of read operations?
Is storing it that way going to lead to an explosion in the number of write and indexing operations?
If you answer these two questions with as much foresight and actual tests as you can, I think you're doing pretty well. You could formalize other rules and specific cases, but these questions will work most of the time. | 1 | 0 | 0 | Effective implementation of one-to-many relationship with Python NDB | 2 | python,google-app-engine,app-engine-ndb | 0 | 2013-02-06T21:22:00.000 |
I would like to hear your opinion about the effective implementation of one-to-many relationship with Python NDB. (e.g. Person(one)-to-Tasks(many))
In my understanding, there are three ways to implement it.
Use 'parent' argument
Use 'repeated' Structured property
Use 'repeated' Key property
I usually choose an approach based on the logic below, but does it make sense to you?
If you have better logic, please teach me.
Use 'parent' argument
A transactional operation is required between these entities
A bidirectional reference is required between these entities
You strongly intend a 'Parent-Child' relationship
Use 'repeated' Structured property
The 'many' entities don't need to be used individually (they are always used together with the 'one' entity)
The 'many' entities are only referred to by the 'one' entity
The number of repeated items is less than 100
Use 'repeated' Key property
The 'many' entities need to be used individually
The 'many' entities can be referred to by other entities
The number of repeated items is more than 100
Option 2 increases the size of the entity, but it saves datastore operations. (We need to use a projection query to reduce the CPU time spent on deserialization, though.) Therefore, I use this approach as much as I can.
I really appreciate your opinion. | 11 | 7 | 1 | 0 | false | 14,740,062 | 0 | 1,389 | 2 | 1 | 0 | 14,739,044 | A key thing you are missing: How are you reading the data?
If you are displaying all the tasks for a given person on a request, 2 makes sense: you can query the person and show all his tasks.
However, if you need to query say a list of all tasks say due at a certain time, querying for repeated structured properties is terrible. You will want individual entities for your Tasks.
There's a fourth option, which is to use a KeyProperty in your Task that points to your Person. When you need a list of Tasks for a person you can issue a query.
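A rough sketch of that fourth option, using invented Person/Task models rather than the asker's actual schema:

```python
import datetime

from google.appengine.ext import ndb


class Person(ndb.Model):
    name = ndb.StringProperty()


class Task(ndb.Model):
    title = ndb.StringProperty()
    due = ndb.DateTimeProperty()
    person = ndb.KeyProperty(kind=Person)   # each Task points back at its Person


# All tasks for one person: a single query, no repeated property on Person.
person_key = ndb.Key(Person, 12345)         # hypothetical id
tasks = Task.query(Task.person == person_key).fetch()

# Tasks due before a deadline regardless of owner, the case where a repeated
# StructuredProperty on Person becomes painful:
due_soon = Task.query(Task.due < datetime.datetime(2013, 3, 1)).fetch(10)
```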
If you need to search for individual Tasks, then you probably want to go with #4. You can use it in combination with #3 as well.
Also, the number of repeated properties has nothing to do with 100. It has everything to do with the size of your Person and Task entities, and how much will fit into 1MB. This is potentially dangerous, because if your Task entity can potentially be large, you might run out of space in your Person entity faster than you expect. | 1 | 0 | 0 | Effective implementation of one-to-many relationship with Python NDB | 2 | python,google-app-engine,app-engine-ndb | 0 | 2013-02-06T21:22:00.000 |
Pretty simple question but haven't been able to find a good answer.
In Excel, I am generating files that need to be automatically read. They are read by an ID number, but the format I get is setting it as text. When using xlrd, I get this format:
5.5112E+12
When I need it in this format:
5511195414392
What is the best way to achieve this? I would like to avoid using xlwt but if it is necessary I could use help on getting started in that process too | 2 | 1 | 0.099668 | 0 | false | 14,854,783 | 0 | 1,158 | 1 | 0 | 0 | 14,751,806 | I used the CSV module to figure this out, as it read the cells correctly. | 1 | 0 | 0 | Reading scientific numbers in xlrd | 2 | python,xlrd | 0 | 2013-02-07T13:05:00.000 |
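For what it's worth on the xlrd question above, the 5.5112E+12 form is usually just a display/formatting artifact of the float that xlrd returns; the full value is still there. A minimal sketch, with an invented file name and cell position:

```python
import xlrd

book = xlrd.open_workbook('ids.xls')    # hypothetical file
sheet = book.sheet_by_index(0)
value = sheet.cell_value(0, 0)          # xlrd returns the number as a float, e.g. 5511195414392.0
print('{:.0f}'.format(value))           # -> 5511195414392 (no scientific notation)
id_number = int(value)                  # or keep it as an int for downstream use
```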
I have a really big database which I want to write to an xlsx/xls file. I already tried to use xlwt, but it allows writing only 65536 rows (some of my tables have more than 72k rows). I also found openpyxl, but it works too slowly and uses a huge amount of memory for big spreadsheets. Are there any other possibilities to write Excel files?
edit:
Following kennym's advice I used the Optimised Reader and Writer. It is less memory-consuming now, but still time-consuming. Exporting takes more than an hour now (for really big tables, up to 10^6 rows). Are there any other possibilities? Maybe it is possible to export a whole table from the HDF5 database file to Excel, instead of doing it row after row, like it is now in my code? | 8 | 1 | 0.066568 | 0 | false | 31,982,266 | 0 | 5,375 | 1 | 0 | 0 | 14,754,090 | XlsxWriter worked for me. I tried openpyxl but it raised an error on a 22k*400 (rows*columns) sheet. | 1 | 0 | 0 | How to write big set of data to xls file? | 3 | python,excel,hdf5 | 0 | 2013-02-07T14:56:00.000 |
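To expand on the XlsxWriter suggestion above with a hedged sketch: its constant_memory mode streams rows out instead of holding the whole sheet in RAM, which addresses both the 65,536-row limit of .xls and the memory problem. The row source below is a made-up stand-in for the HDF5 reader:

```python
import xlsxwriter


def fetch_rows():
    # stand-in for reading from the real HDF5/database source
    for i in range(1000000):
        yield [i, 'name-%d' % i, i * 0.5]


workbook = xlsxwriter.Workbook('export.xlsx', {'constant_memory': True})
worksheet = workbook.add_worksheet('data')

for row_idx, row in enumerate(fetch_rows()):
    worksheet.write_row(row_idx, 0, row)   # rows must be written in increasing order in this mode

workbook.close()
```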
How would one go about connecting to a different database based on which module is being used? Our scenario is as follows:
We have a standalone application with its own database on a certain server and OpenERP running on different server. We want to create a module in OpenERP which can utilise entities on the standalone application server rather than creating its own entities in its own database, is this possible? How can we change the connection parameters that the ORM uses to connect to its own database to point to a different database?
Of course, one way is to use the base_synchro module to synchronise the required entities between both databases, but considering the large amount of data, we don't want duplication. Another way is to use XML-RPC to get data into OpenERP, but that still requires entities to be present in the OpenERP database.
How can we solve this problem without data duplication? How can a module in OpenERP be created based on a different database? | 1 | 1 | 0.197375 | 0 | false | 14,796,657 | 1 | 1,877 | 1 | 0 | 0 | 14,756,365 | One way to connect to an external application is to create a connector module. There are already several connector modules that you can take a look at:
the thunderbird and outlook plugins
the joomla and magento modules
the 'event moodle' module
For example, the joomla connector uses a joomla plugin to handle the communication between OpenERP and joomla. The communication protocol used is XML-RPC but you can choose any protocol you want. You can even choose to connect directly to the external database using the psycopg2 modules (if the external database is using Postgresql) but this is not recommended. But perhaps you don't have the choice if this external application has no connection API.
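As a hedged illustration of that XML-RPC route (not code taken from any of those modules), this is roughly what the external application's side of the link can look like against OpenERP's standard /xmlrpc endpoints; the host, database name and credentials are invented:

```python
import xmlrpclib   # xmlrpc.client on Python 3

url = 'http://openerp-server:8069/xmlrpc'      # hypothetical host
db, user, password = 'mydb', 'admin', 'secret'

common = xmlrpclib.ServerProxy(url + '/common')
uid = common.login(db, user, password)

models = xmlrpclib.ServerProxy(url + '/object')
partner_ids = models.execute(db, uid, password, 'res.partner', 'search', [])
partners = models.execute(db, uid, password, 'res.partner', 'read', partner_ids, ['name'])
```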
You need to know what are the available ways to connect to this external application and choose one of these. Once you have chosen the right protocol, you can create your OpenERP module.
You can map entities stored on the external application using osv.TransientModel objects (formerly known as osv memory). The tables related to these objects will still be created in the OpenERP database but the data is volatile (deleted after some time). | 1 | 0 | 0 | How to connect to a different database in OpenERP? | 1 | python,xml-rpc,openerp | 0 | 2013-02-07T16:45:00.000 |
I have pip installed psycopg2, but when I try to runserver or syncdb in my Django project, it raises an error saying there is "no module named _psycopg".
EDIT: the "syncdb" command now raises:
django.core.exceptions.ImproperlyConfigured: ImportError django.contrib.admin: No module named _psycopg
Thanks for your help | 2 | 1 | 1.2 | 0 | true | 15,337,328 | 1 | 1,586 | 1 | 0 | 0 | 14,758,024 | This was solved by performing a clean reinstall of Django. There were apparently some dependencies missing that the recursive pip install did not seem to be able to solve. | 1 | 0 | 0 | Psycopg missing module in Django | 2 | python,django,pip,psycopg2,psycopg | 0 | 2013-02-07T18:10:00.000 |
TLDR; Are there drawbacks to putting two different types of documents into the same collection to save a round-trip to the database?
So I have documents with children, and a list of keys in the parent referencing the children, and almost whenever we want a parent, we also want the children to come along. The naive way to do this is to fetch the parent, and then get the children using the list of child keys with $IN (in SQL, we would use a join). However, this means making 2 round trips for a fairly frequent operation. We have a few options to improve this, especially since we can retrieve the child keys at the same time as the parent keys:
Put the children in the parent document
While this would play to mongo's strength, we also want to keep this data normalized
Pipeline database requests in threads
Which may or may not improve performance once we factor in the connection pool. It also means dealing with threading in a python app, which isn't terrible, but isn't great.
Keep the parent/child documents in the same collection (not embedded)
This way we can do one query for all the keys at once; this does mean some conceptual overhead in the wrapper for accessing the database, and forcing all indexes to be sparse, but otherwise seems straightforward.
We could profile all these options, but it does feel like someone out there should already have experience with this despite not finding anything online. So, is there something I am missing in my analysis? | 0 | 1 | 1.2 | 0 | true | 14,780,990 | 0 | 339 | 1 | 0 | 0 | 14,780,381 | I'll address the three points separately. You should know that it absolutely depends on the situation on what works best. There is no "theoretically correct" answer as it depends on your data store/access patterns.
It is always a fairly complex decision on how you store your data. I think the main rule should be "How do I query my data?", and not "We want to have all data normalised". Data normalisation is something you do for a relational database, not for MongoDB. If you almost always query the children with the parent, and you don't have an unbound list of children, then that is how you should store them. Just be aware that a document in MongoDB is limited to 16MB (which is a lot more than you think).
Avoid threading. You will just be better off running two queries in sequence, from two different collections. Less complex is a good thing!
This works, but it is a fairly ugly way. But then again, ugly isn't always a bad thing if it makes things go a lot faster. I don't quite know how distinct your parent and child documents are of course, so it's difficult to say whether this is a good solution. A sparse index, which I assume you will do on a specific field depending on whether it is a parent or child, is a good idea. But perhaps you can get away with one index as well. I'd be happy to update your answer after you've shown your suggested schemas.
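For what it's worth, a small pymongo sketch of that single-collection idea (option 3), with an invented field layout, just to show the one-round-trip read and the sparse index:

```python
from pymongo import MongoClient

coll = MongoClient()['mydb']['things']   # hypothetical database/collection names

# Parents and children live in one collection and share a group id.
coll.insert({'_id': 'p1', 'kind': 'parent', 'group': 'p1', 'name': 'parent doc'})
coll.insert({'_id': 'c1', 'kind': 'child', 'group': 'p1', 'parent_id': 'p1', 'payload': 1})
coll.insert({'_id': 'c2', 'kind': 'child', 'group': 'p1', 'parent_id': 'p1', 'payload': 2})

# Sparse index: only the child documents carry parent_id, so only they get indexed.
coll.ensure_index('parent_id', sparse=True)

# One round trip fetches the parent and all of its children.
docs = list(coll.find({'group': 'p1'}))
```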
I would recommend you do some benchmarking, but forget about option 2. | 1 | 0 | 1 | Put different "schemas" into same MongoDB collection | 2 | python,performance,mongodb | 0 | 2013-02-08T19:58:00.000 |
My python project involves an externally provided database: A text file of approximately 100K lines.
This file will be updated daily.
Should I load it into an SQL database, and deal with the diff daily? Or is there an effective way to "query" this text file?
ADDITIONAL INFO:
Each "entry", or line, contains three fields - any one of which can be used as an index.
The update is in the form of the entire database - I would have to manually generate a diff
The queries are just looking up records and displaying the text.
Querying the database will be a fundamental task of the application. | 2 | 0 | 0 | 0 | false | 14,797,390 | 0 | 336 | 2 | 0 | 0 | 14,795,810 | What I've done before is create SQLite databases from txt files which were created from database extracts, one SQLite db for each day.
One can query across SQLite db to check the values etc and create additional tables of data.
I added an additional column of data that was the SHA1 of the text line so that I could easily identify lines that were different.
It worked in my situation and hopefully may form the barest sniff of an acorn of an idea for you. | 1 | 0 | 0 | Large text database: Convert to SQL or use as is | 2 | python,sql,database,text | 0 | 2013-02-10T07:53:00.000 |
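A minimal sketch of that approach using only the standard library (file and table names invented): load each line into SQLite together with its SHA1, so two daily extracts are easy to diff:

```python
import hashlib
import sqlite3

conn = sqlite3.connect('extract_today.db')     # hypothetically, one database per daily extract
conn.execute('CREATE TABLE IF NOT EXISTS lines (sha1 TEXT PRIMARY KEY, line TEXT)')

with open('extract.txt') as f:                 # the ~100K-line text file
    for line in f:
        digest = hashlib.sha1(line).hexdigest()   # Python 2; use line.encode('utf-8') on Python 3
        conn.execute('INSERT OR IGNORE INTO lines (sha1, line) VALUES (?, ?)', (digest, line))

conn.commit()
# Comparing this table with yesterday's database by sha1 gives the daily diff.
```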
My python project involves an externally provided database: A text file of approximately 100K lines.
This file will be updated daily.
Should I load it into an SQL database, and deal with the diff daily? Or is there an effective way to "query" this text file?
ADDITIONAL INFO:
Each "entry", or line, contains three fields - any one of which can be used as an index.
The update is in the form of the entire database - I would have to manually generate a diff
The queries are just looking up records and displaying the text.
Querying the database will be a fundamental task of the application. | 2 | 1 | 0.099668 | 0 | false | 14,795,870 | 0 | 336 | 2 | 0 | 0 | 14,795,810 | How often will the data be queried? On the one extreme, if once per day, you might use a sequential search more efficiently than maintaining a database or index.
For more queries and a daily update, you could build and maintain your own index for more efficient queries. Most likely, it would be worth a negligible (if any) sacrifice in speed to use an SQL database (or other database, depending on your needs) in return for simpler and more maintainable code. | 1 | 0 | 0 | Large text database: Convert to SQL or use as is | 2 | python,sql,database,text | 0 | 2013-02-10T07:53:00.000 |
I dropped my database that I had previously created for django using :
dropdb <database>
but when I go to the psql prompt and say \d, I still see the relations there :
How do I remove everything from postgres so that I can do everything from scratch? | 0 | 1 | 0.099668 | 0 | false | 14,880,796 | 1 | 88 | 2 | 0 | 0 | 14,869,718 | Most likely somewhere along the line, you created your objects in the template1 database (or in older versions the postgres database) and every time you create a new db it has all those objects in it. You can either drop the template1 / postgres database and recreate it or connect to it and drop all those objects by hand. | 1 | 0 | 0 | postgres : relation there even after dropping the database | 2 | python,django,postgresql | 0 | 2013-02-14T07:23:00.000 |
I dropped my database that I had previously created for django using :
dropdb <database>
but when I go to the psql prompt and say \d, I still see the relations there :
How do I remove everything from postgres so that I can do everything from scratch ? | 0 | 0 | 0 | 0 | false | 14,870,374 | 1 | 88 | 2 | 0 | 0 | 14,869,718 | Chances are that you never created the tables in the correct schema in the first place. Either that or your dropdb failed to complete.
Try to drop the database again and see what it says. If that appears to work then go in to postgres and type \l, putting the output here. | 1 | 0 | 0 | postgres : relation there even after dropping the database | 2 | python,django,postgresql | 0 | 2013-02-14T07:23:00.000 |
We have a Python application with over twenty modules, most of which are shared by several web and console applications.
I've never had a clear understanding of the best practice for establishing and managing database connection in multi module Python apps. Consider this example:
I have a module defining an object class for Users. It has many defs for creating/deleting/updating users in the database. The users.py module is imported into a) a console based utility, 2) a web.py based web application and 3) a constantly running daemon process.
Each of these three application have different life cycles. The daemon can open a connection and keep it open. The console utility connects, does work, then dies. Of course the http requests are atomic, however the web server is a daemon.
I am currently opening, using then closing a connection inside each function in the Users class. This seems the most inefficient, but it works in all examples. An alternative used as a test is to declare and open a global connection for the entire module. Another option would be to create the connection at the top application layer and pass references when instantiating classes, but this seems the worst idea to me.
I know every application architecture is different. I'm just wondering if there's a best practice, and what it would be? | 17 | 4 | 0.379949 | 0 | false | 14,883,719 | 1 | 6,022 | 2 | 0 | 0 | 14,883,346 | MySQL connections are relatively fast, so this might not be a problem (i.e. you should measure). Most other databases take much more resources to create a connection.
Creating a new connection when you need one is always the safest, and is a good first choice. Some db libraries, e.g. SqlAlchemy, have connection pools built in that transparently will re-use connections for you correctly.
If you decide you want to keep a connection alive so that you can re-use it, there are a few points to be aware of:
Connections that are only used for reading are easier to re-use than connections that that you've used to modify database data.
When you start a transaction on a connection, be careful that nothing else can use that connection for something else while you're using it.
Connections that sit around for a long time get stale and can be closed from underneath you, so if you're re-using a connection you'll need to check if it is still "alive", e.g. by sending "select 1" and verifying that you get a result.
I would personally recommend against implementing your own connection pooling algorithm. It's really hard to debug when things go wrong. Instead choose a db library that does it for you. | 1 | 0 | 0 | How should I establish and manage database connections in a multi-module Python app? | 2 | python,mysql | 0 | 2013-02-14T20:20:00.000 |
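As an example of letting the library do it, here is a hedged SQLAlchemy sketch (credentials and table are invented; raw SQL strings were accepted directly by conn.execute in the SQLAlchemy versions of that era):

```python
from sqlalchemy import create_engine

# pool_size caps the connections kept open; pool_recycle closes connections
# older than N seconds so MySQL's wait_timeout never bites mid-request.
engine = create_engine(
    'mysql://user:password@localhost/appdb',   # hypothetical DSN
    pool_size=5,
    pool_recycle=3600,
)


def get_user_count():
    conn = engine.connect()    # checked out of the pool, not a brand-new connection
    try:
        return conn.execute('SELECT COUNT(*) FROM users').scalar()
    finally:
        conn.close()           # returned to the pool rather than actually closed
```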
We have a Python application with over twenty modules, most of which are shared by several web and console applications.
I've never had a clear understanding of the best practice for establishing and managing database connection in multi module Python apps. Consider this example:
I have a module defining an object class for Users. It has many defs for creating/deleting/updating users in the database. The users.py module is imported into a) a console based utility, 2) a web.py based web application and 3) a constantly running daemon process.
Each of these three application have different life cycles. The daemon can open a connection and keep it open. The console utility connects, does work, then dies. Of course the http requests are atomic, however the web server is a daemon.
I am currently opening, using then closing a connection inside each function in the Users class. This seems the most inefficient, but it works in all examples. An alternative used as a test is to declare and open a global connection for the entire module. Another option would be to create the connection at the top application layer and pass references when instantiating classes, but this seems the worst idea to me.
I know every application architecture is different. I'm just wondering if there's a best practice, and what it would be? | 17 | 16 | 1.2 | 0 | true | 14,883,590 | 1 | 6,022 | 2 | 0 | 0 | 14,883,346 | The best method is to open a connection when you need to do some operations (like getting and/or updating data); manipulate the data; write it back to the database in one query (very important for performance), and then close the connection. Opening a connection is a fairly light process.
Some pitfalls for performance include
opening the database when you won't definitely interact with it
using selectors that take more data than you need (e.g., getting data about all users and filtering it in Python, instead of asking MySQL to filter out the useless data)
writing values that haven't changed (e.g. updating all values of a user profile, when just their email has changed)
having each field update the server individually (e.g., open the db, update the user email, close the db, open the db, update the user password, close the db, open th... you get the idea)
The bottom line is that it doesn't matter how many times you open the database, it's how many queries you run. If you can get your code to join related queries, you've won the battle. | 1 | 0 | 0 | How should I establish and manage database connections in a multi-module Python app? | 2 | python,mysql | 0 | 2013-02-14T20:20:00.000 |
I have a MySQL database with around 10,000 articles in it, but that number will probably go up with time. I want to be able to search through these articles and pull out the most relevent results based on some keywords. I know there are a number of projects that I can plug into that can essentially do this for me. However, the application for this is very simple, and it would be nice to have direct control and working knowledge of how the whole thing operates. Therefore, I would like to look into building a very simple search engine from scratch in Python.
I'm not even sure where to start, really. I could just dump everything from the MySQL DB into a list and try to sort that list based on relevance, however that seems like it would be slow, and get slower as the amount of database items increase. I could use some basic MySQL search to get the top 100 most relevant results from what MySQL thinks, then sort those 100. But that is a two step process which may be less efficient, and I might risk missing an article if it is just out of range.
What are the best approaches I can take to this? | 0 | 3 | 0.291313 | 0 | false | 14,889,522 | 1 | 643 | 1 | 0 | 0 | 14,889,206 | The best bet for you to do "Search Engine" for the 10,000 Articles is to read "Programming Collective Intelligence" by Toby Segaran. Wonderful read and to save your time go to Chapter 4 of August 2007 issue. | 1 | 0 | 0 | Search engine from scratch | 2 | python,mysql,search,search-engine | 0 | 2013-02-15T06:10:00.000 |
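For reference on the question above, the "basic MySQL search" idea usually means a FULLTEXT index queried with MATCH ... AGAINST; a hedged sketch follows (table and column names are invented, and note that MySQL of that era only supported FULLTEXT on MyISAM tables, with InnoDB support arriving in 5.6):

```python
import MySQLdb

conn = MySQLdb.connect(host='localhost', user='user', passwd='secret', db='articles_db')
cur = conn.cursor()

# Assumes an index created with: ALTER TABLE articles ADD FULLTEXT(title, body);
query = """
    SELECT id, title,
           MATCH(title, body) AGAINST (%s) AS relevance
    FROM articles
    WHERE MATCH(title, body) AGAINST (%s)
    ORDER BY relevance DESC
    LIMIT 100
"""
keywords = 'python search engine'
cur.execute(query, (keywords, keywords))
top_hits = cur.fetchall()   # re-rank these 100 in Python if you want your own scoring
```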
I have a situation where my script parses approximately 20,000 entries and saves them to the db. I have used a transaction, which takes around 35 seconds to save and also consumes a lot of memory, since until the commit the queries are kept in memory.
I have found another way: write a CSV file and then load it into Postgres using "copy_from", which is very fast. Can anyone suggest whether I should open the file once at the start and close it when loading into Postgres, or open and close the file each time a single entry is ready to be written?
what will be the best approach to save memory utilization? | 0 | 1 | 1.2 | 0 | true | 14,890,240 | 0 | 90 | 1 | 0 | 0 | 14,890,211 | Reduce the size of your transactions? | 1 | 0 | 0 | File writing in python | 1 | python,file,postgresql,csv | 0 | 2013-02-15T07:45:00.000 |
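On the copy_from question above, a hedged sketch that skips the on-disk CSV entirely by streaming from an in-memory buffer; the table, columns and parser here are invented stand-ins:

```python
import io

import psycopg2


def parse_entries():
    # stand-in for the real parser that yields ~20,000 entries
    for i in range(20000):
        yield {'name': 'item-%d' % i, 'value': i}


conn = psycopg2.connect('dbname=mydb user=me')   # hypothetical DSN
cur = conn.cursor()

buf = io.BytesIO()                               # io.StringIO on Python 3
for entry in parse_entries():
    buf.write('%s\t%d\n' % (entry['name'], entry['value']))
buf.seek(0)

cur.copy_from(buf, 'my_table', columns=('name', 'value'))   # one COPY, one commit
conn.commit()
```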
If we have a JSON-format data file which stores all of our database content (table names, rows, and columns), how can we use a DB-API object to insert/update/delete data from the JSON file into a database such as SQLite or MySQL? Please share if you have a better idea of how to handle it. People have said it is good to save database data in JSON format, which makes it much more convenient to work with the database in Python.
Thanks so much! Please give advice! | 0 | 1 | 0.197375 | 0 | false | 14,951,638 | 0 | 454 | 1 | 0 | 0 | 14,942,462 | There's no magic way, you'll have to write a Python program to load your JSON data in a database. SQLAlchemy is a good tool to make it easier. | 1 | 0 | 1 | how will Python DB-API read json format data into an existing database? | 1 | python,database,json,sqlalchemy,python-db-api | 0 | 2013-02-18T17:57:00.000 |
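To make the "write a Python program" suggestion above concrete, here is a small standard-library-only sketch; the JSON layout it assumes (a dict mapping table name to a list of row dicts, with the tables already created) is an assumption, not a known format:

```python
import json
import sqlite3

with open('data.json') as f:       # hypothetical file
    payload = json.load(f)         # assumed shape: {"users": [{"id": 1, "name": "a"}, ...]}

conn = sqlite3.connect('app.db')   # tables are assumed to exist already

for table, rows in payload.items():
    if not rows:
        continue
    columns = sorted(rows[0].keys())
    placeholders = ', '.join('?' for _ in columns)
    sql = 'INSERT INTO %s (%s) VALUES (%s)' % (table, ', '.join(columns), placeholders)
    conn.executemany(sql, [tuple(row[c] for c in columns) for row in rows])

conn.commit()
```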
Example scenario:
MySQL running a single server -> HOSTNAME
Two MySQL databases on that server -> USERS , GAMES .
Task -> Fetch 10 newest games from GAMES.my_games_table , and fetch users playing those games from USERS.my_users_table ( assume no joins )
In Django as well as Python MySQLdb, why is having one cursor for each database preferable?
What is the disadvantage of an extended cursor which is a single one per MySQL server and can switch databases (e.g. by querying "use USERS;"), and then work on the corresponding database?
MySQL connections are cheap, but isn't a single connection better than many, if there is a linear flow and no complex transactions which might need two cursors? | 6 | 10 | 1.2 | 0 | true | 15,328,753 | 1 | 1,351 | 3 | 0 | 0 | 15,254,538 | A shorter answer would be, "MySQL doesn't support that type of cursor", so neither does Python-MySQL, so the reason one connection command is preferred is because that's the way MySQL works. Which is sort of a tautology.
However, the longer answer is:
A 'cursor', by your definition, would be some type of object accessing tables and indexes within an RDMS, capable of maintaining its state.
A 'connection', by your definition, would accept commands, and either allocate or reuse a cursor to perform the action of the command, returning its results to the connection.
By your definition, a 'connection' would/could manage multiple cursors.
You believe this would be the preferred/performant way to access a database as 'connections' are expensive, and 'cursors' are cheap.
However:
A cursor in MySQL (and other RDMSs) is not the user-accessible mechanism for performing operations. MySQL (and others) perform operations as a "set", or rather, they compile your SQL command into an internal list of commands, and do numerous, complex bits depending on the nature of your SQL command and your table structure.
A cursor is a specific mechanism, utilized within stored procedures (and there only), giving the developer a way to work with data in a procedural way.
A 'connection' in MySQL is what you think of as a 'cursor', sort of. MySQL does not expose its internals for you as an iterator, or pointer, that is merely moving over tables. It exposes its internals as a 'connection' which accepts SQL and other commands, translates those commands into an internal action, performs that action, and returns its result to you.
This is the difference between a 'set' and a 'procedural' execution style (which is really about the granularity of control you, the user, is given access to, or at least, the granularity inherent in how the RDMS abstracts away its internals when it exposes them via an API). | 1 | 0 | 0 | Why django and python MySQLdb have one cursor per database? | 3 | python,mysql,django,mysql-python | 0 | 2013-02-20T17:27:00.000 |
Example scenario:
MySQL running a single server -> HOSTNAME
Two MySQL databases on that server -> USERS , GAMES .
Task -> Fetch 10 newest games from GAMES.my_games_table , and fetch users playing those games from USERS.my_users_table ( assume no joins )
In Django as well as Python MySQLdb, why is having one cursor for each database preferable?
What is the disadvantage of an extended cursor which is a single one per MySQL server and can switch databases (e.g. by querying "use USERS;"), and then work on the corresponding database?
MySQL connections are cheap, but isn't a single connection better than many, if there is a linear flow and no complex transactions which might need two cursors? | 6 | 2 | 0.132549 | 0 | false | 15,302,237 | 1 | 1,351 | 3 | 0 | 0 | 15,254,538 | As you say, MySQL connections are cheap, so for your case, I'm not sure there is a technical advantage either way, outside of code organization and flow. It might be easier to manage two cursors than to keep track of which database a single cursor is currently talking to by painstakingly tracking SQL 'USE' statements. Mileage with other databases may vary -- remember that Django strives to be database-agnostic.
Also, consider the case where two different databases, even on the same server, require different access credentials. In such a case, two connections will be necessary, so that each connection can successfully authenticate. | 1 | 0 | 0 | Why django and python MySQLdb have one cursor per database? | 3 | python,mysql,django,mysql-python | 0 | 2013-02-20T17:27:00.000 |
Example scenario:
MySQL running a single server -> HOSTNAME
Two MySQL databases on that server -> USERS , GAMES .
Task -> Fetch 10 newest games from GAMES.my_games_table , and fetch users playing those games from USERS.my_users_table ( assume no joins )
In Django as well as Python MySQLdb, why is having one cursor for each database preferable?
What is the disadvantage of an extended cursor which is a single one per MySQL server and can switch databases (e.g. by querying "use USERS;"), and then work on the corresponding database?
MySQL connections are cheap, but isn't a single connection better than many, if there is a linear flow and no complex transactions which might need two cursors? | 6 | 0 | 0 | 0 | false | 15,421,235 | 1 | 1,351 | 3 | 0 | 0 | 15,254,538 | One cursor per database is not necessarily preferable, it's just the default behavior.
The rationale is that different databases are more often than not on different servers, use different engines, and/or need different initialization options. (Otherwise, why should you be using different "databases" in the first place?)
In your case, if your two databases are just namespaces of tables (what should be called "schemas" in SQL jargon) but reside on the same MySQL instance, then by all means use a single connection. (How to configure Django to do so is actually an altogether different question.)
You are also right that a single connection is better than two, if you only have a single thread and don't actually need two database workers at the same time. | 1 | 0 | 0 | Why django and python MySQLdb have one cursor per database? | 3 | python,mysql,django,mysql-python | 0 | 2013-02-20T17:27:00.000 |
I'm in the process of building a Django powered site that is backed by a MySQL server. This MySQL server is going to be accessed from additional sources, other than the website, to read and write table data; such as a program that users run locally which connects to the database.
Currently the program running locally is using the MySQL/C Connector library to connect directly to the sql server and execute queries. In a final release to the public this seems insecure, since I would be exposing the connection string to the database in the code or in a configuration file.
One alternative I'm considering is having all queries be sent to the Django website (authenticated with a user's login and password) and then the site will sanitize and execute the queries on the user's behalf and return the results to them.
This has a number of downsides that I can think of. The webserver will be under a much larger load by processing all the SQL queries and this could potentially exceed the limit of my host. Additionally, I would have to figure out some way of serializing and transmitting the sql results in Python and then unserializing them in C/C++ on the client side. This would be a decent amount of custom code to write and maintain.
Any other downsides to this approach people can think of?
Does this sound reasonable and if it does, anything that could ease working on it; such as Python or C libraries to help develop the proxy interface?
If it sounds like a bad idea, any suggestions for alternative solutions i.e. a Python library that specializes in this type of proxy sql server logic, a method of encrypting sql connection strings so I can securely use my current solution, etc...?
Lastly, is this a valid concern? The database currently doesn't hold any terribly sensitive information about users (most sensitive would be their email and their site password which they may have reused from another source) but it could in the future which is my cause for concern if it's not secure. | 3 | 1 | 1.2 | 0 | true | 14,992,070 | 1 | 377 | 1 | 0 | 0 | 14,991,783 | This is a completely valid concern and a very common problem. You have described creating a RESTful API. I guess it could be considered a proxy to a database but is not usually referred to as a proxy.
Django is a great tool to use to accomplish this. Django even has a couple of packages that will assist in speedy development; Django REST Framework, Tastypie, and django-piston are the most popular. Of course you could just use plain old Django.
Your Django project would be the only thing that interfaces with the database and clients can send authenticated requests to Django; so clients will never connect directly to your database. This will give you fine grained permission control on a per client, per resource basis.
The webserver will be under a much larger load by processing all the SQL queries and this could potentially exceed the limit of my host
I believe scaling a webservice is going to be a lot easier than scaling direct connections from your clients to your database. There are many tried and true methods for scaling apps that have hundreds of requests per second to their databases. Because you have Django between you and the webserver you can implement caching for frequently requested resources.
Additionally, I would have to figure out some way of serializing and transmitting the SQL results in Python and then unserializing them in C/C++ on the client side
This should be a moot issue. There are lots of extremely popular data interchange formats. I have never used C/C++, but a quick search showed a couple of C/C++ JSON serializers. Python has JSON built in for free, so there shouldn't be any custom code to maintain regarding this if you use a premade C/C++ JSON library.
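A tiny hedged sketch of that serialization point: a Django view of that era can return rows as JSON with nothing but the standard library, and the C/C++ client parses it with whatever JSON library it prefers (the app and model names are invented):

```python
import json

from django.http import HttpResponse

from myapp.models import Game   # hypothetical app and model


def game_list(request):
    # values() yields plain dicts, which json.dumps can handle once
    # non-serializable types (datetimes) are converted to strings.
    rows = list(Game.objects.values('id', 'name', 'created')[:10])
    for row in rows:
        row['created'] = row['created'].isoformat()
    return HttpResponse(json.dumps(rows), content_type='application/json')
```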
Any other downsides to this approach people can think of?
I don't think there are any downsides. It is a tried and true method. It has been proven for a decade and the most popular sites in the world expose themselves through RESTful APIs.
Does this sound reasonable and if it does, anything that could ease working on it; such as Python or C libraries to help develop the proxy interface?
It sounds very reasonable, the Django apps I mentioned at the beginning of the answer should provide some boiler plate to allow you to get started on your API quicker. | 1 | 0 | 0 | Django as a mysql proxy server? | 1 | c++,python,mysql,c,django | 0 | 2013-02-20T23:21:00.000 |
I have a postgre database with a timestamp column and I have a REST service in Python that executes a query in the database and returns data to a JavaScript front-end to plot a graph using flot.
Now the problem I have is that flot can automatically handle the date using JavaScript's TIMESTAMP, but I don't know how to convert the Postgre timestamps to JavaScript TIMESTAMP (YES a timestamp, not a date stop editing if you don't know the answer) in Python. I don't know if this is the best approach (maybe the conversion can be done in JavaScript?). Is there a way to do this? | 8 | 3 | 0.291313 | 0 | false | 15,032,100 | 1 | 4,296 | 1 | 0 | 0 | 15,031,856 | You can't send a Python or Javascript "datetime" object over JSON. JSON only accepts more basic data types like Strings, Ints, and Floats.
The way I usually do it is send it as text, using Python's datetime.isoformat() then parse it on the Javascript side. | 1 | 0 | 0 | Converting postgresql timestamp to JavaScript timestamp in Python | 2 | javascript,python,postgresql,flot | 0 | 2013-02-22T19:33:00.000 |
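If flot specifically wants its numeric form, the usual conversion is milliseconds since the epoch; a short sketch assuming the values come out of psycopg2 as Python datetime objects in UTC:

```python
import calendar
import datetime


def to_js_timestamp(dt):
    """Convert a UTC datetime to the millisecond epoch value flot expects."""
    return calendar.timegm(dt.utctimetuple()) * 1000


dt = datetime.datetime(2013, 2, 22, 19, 33, 0)
print(to_js_timestamp(dt))   # 1361561580000, usable directly as a flot x value
```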
The first element of arrays (in most programming languages) has an id (index) of 0. The first element (row) of MySQL tables has an (auto incremented) id of 1. The latter seems to be the exception. | 5 | 4 | 0.26052 | 0 | false | 15,056,205 | 0 | 2,295 | 2 | 0 | 0 | 15,055,175 | The better question to ask is "why are arrays zero-indexed?" The reason has to do with pointer arithmetic. The index of an array is an offset relative to the pointer address. In C++, given array char x[5], the expressions x[1] and *(x + 1) are equivalent, given that sizeof(char) == 1.
So auto increment fields starting at 1 make sense. There is no real correlation between arrays and these fields. | 1 | 0 | 0 | Why does MySQL count from 1 and not 0? | 3 | php,python,mysql,ruby | 0 | 2013-02-24T18:40:00.000 |
The first element of arrays (in most programming languages) has an id (index) of 0. The first element (row) of MySQL tables has an (auto incremented) id of 1. The latter seems to be the exception. | 5 | 0 | 0 | 0 | false | 15,055,977 | 0 | 2,295 | 2 | 0 | 0 | 15,055,175 | The main reason I suppose is that a row in a database isnt an array and the autoincrement value isnt an index in the sense that an array index is. The primary key id can be any value and to a great extent it is simply essential it is unique and is not guaranteed to be anything else (for example you can delete a row and it won't renumber).
This is a little like comparing apples and oranges!
Array start at 0 because that's the first number. Autoinc fields start at whatever number you want them too, and in that case we would all rather it was 1. | 1 | 0 | 0 | Why does MySQL count from 1 and not 0? | 3 | php,python,mysql,ruby | 0 | 2013-02-24T18:40:00.000 |
The DBF files are updated every few hours. We need to import new records into MySQL and skip duplicates. I don't have any experience with DBF files but as far as I can tell a handful of the one's we're working with don't have unique IDs.
I plan to use Python if there are no ready-made utilities that do this. | 0 | -1 | -0.099668 | 0 | false | 16,302,184 | 0 | 2,974 | 1 | 0 | 0 | 15,059,749 | When you say you are using dBase, I presume you have access to the (.) dot prompt.
At the dot prompt, convert the .dbf file into a delimited text file.
Reconvert the delimited text file into a MySQL data file with the necessary command in MySQL. I do not know the actual command for it. All DBMSs will have commands to do that work.
For eliminating the duplicates you will have to do it at the time of populating the
data to the .dbf file through a programme written in dBase. | 1 | 0 | 0 | What's the best way to routinely import DBase (dbf) files into MySQL tables? | 2 | python,mysql,dbf,dbase | 0 | 2013-02-25T03:45:00.000 |
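Assuming the .dbf data can be exported to delimited text as described in the answer above (or read with a DBF library), a hedged sketch of the MySQL side: a UNIQUE key plus INSERT IGNORE is one simple way to skip duplicates on every scheduled run (file, table and column names are invented):

```python
import csv

import MySQLdb

# Assumes the target table has a UNIQUE key on record_id, e.g.
#   CREATE TABLE records (record_id VARCHAR(32), name VARCHAR(255),
#                         amount DECIMAL(10,2), UNIQUE KEY uq_record (record_id));
conn = MySQLdb.connect(host='localhost', user='user', passwd='secret', db='imports')
cur = conn.cursor()

with open('export.txt', 'rb') as f:            # delimited file produced from the .dbf
    for record_id, name, amount in csv.reader(f, delimiter='|'):
        cur.execute(
            'INSERT IGNORE INTO records (record_id, name, amount) VALUES (%s, %s, %s)',
            (record_id, name, amount),
        )

conn.commit()
```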
I'm doing a small web application which might need to eventually scale somewhat, and am curious about Google App Engine. However, I am experiencing a problem with the development server (dev_appserver.py):
Seemingly at random, requests will take 20-30 seconds to complete, even if there is no hard computation or data usage. One request might be really quick, even after changing a script or static file, but the next might be very slow. It seems to occur more systematically if the box has been left for a while without activity, but not always.
CPU and disk access is low during the period. There is not a lot of data in my application either.
Does anyone know what could cause such random slowdowns? I've Google'd and searched here, but need some pointers.. /: I've also tried --clear_datastore and --use_sqlite, but the latter gives an error: DatabaseError('file is encrypted or is not a database',). Looking for the file, it does not seem to exist.
I am on Windows 8, python 2.7 and the most recent version of the App Engine SDK. | 2 | 2 | 0.197375 | 0 | false | 15,098,634 | 1 | 237 | 2 | 1 | 0 | 15,098,051 | Don't worry about it. It (IIRC) keeps the whole DB (datastore) in memory using a "emulation" of the real thing. There are lots of other issues that you won't see when deployed.
I'd suggest that your hard drive is spinning down and the delay you see is it taking a few seconds to wake back up.
If this becomes a problem, develop using the deployed version. It's not so different. | 1 | 0 | 0 | Google App Engine development server random (?) slowdowns | 2 | python,google-app-engine | 0 | 2013-02-26T19:54:00.000 |
I'm doing a small web application which might need to eventually scale somewhat, and am curious about Google App Engine. However, I am experiencing a problem with the development server (dev_appserver.py):
Seemingly at random, requests will take 20-30 seconds to complete, even if there is no hard computation or data usage. One request might be really quick, even after changing a script or static file, but the next might be very slow. It seems to occur more systematically if the box has been left for a while without activity, but not always.
CPU and disk access is low during the period. There is not a lot of data in my application either.
Does anyone know what could cause such random slowdowns? I've Google'd and searched here, but need some pointers.. /: I've also tried --clear_datastore and --use_sqlite, but the latter gives an error: DatabaseError('file is encrypted or is not a database',). Looking for the file, it does not seem to exist.
I am on Windows 8, python 2.7 and the most recent version of the App Engine SDK. | 2 | 0 | 0 | 0 | false | 15,106,246 | 1 | 237 | 2 | 1 | 0 | 15,098,051 | Does this happen in all web browsers? I had issues like this when viewing a local app engine dev site in several browsers at the same time for cross-browser testing. IE would then struggle, with requests taking about as long as you describe.
If this is the issue, I found the problems didn't occur with IETester.
Sorry if it's not related, but I thought this was worth mentioning just in case. | 1 | 0 | 0 | Google App Engine development server random (?) slowdowns | 2 | python,google-app-engine | 0 | 2013-02-26T19:54:00.000 |
How do I save an open Excel file using Python? I currently read the Excel workbook using xlrd, but I need to save the Excel file so any changes the user inputs are read.
I have done this using a VBA script from within excel which saves the workbook every x seconds, but this is not ideal. | 0 | 0 | 0 | 0 | false | 15,114,556 | 0 | 758 | 1 | 0 | 0 | 15,114,329 | It looks like XLRD is used for reading the data, not interfacing with excel. So no, unless you use a different library using python is not the best way to do this, what is wrong with the VBA script? | 1 | 0 | 0 | Save open excel file using python | 2 | python,excel,xlrd | 0 | 2013-02-27T14:17:00.000 |
Django needs MySQL-python package to manipulate MySQL, but MySQL-python doesn't support Python 3.3. I have tried MySQL-for-Python-3, but it doesn't work.
Please help! Thanks a lot! | 1 | 0 | 0 | 0 | false | 15,203,056 | 1 | 713 | 1 | 0 | 0 | 15,202,503 | As others have noted, Python 3 support in Django 1.5 is "experimental" and, as such, not everything should be expected to work.
That being said, if you absolutely need to get this working, you may be able to run the 2to3 tool on a source version of MySQL-python to translate it to Python 3 (and build against Python 3 headers if required). | 1 | 0 | 0 | How can I use MySQL with Python 3.3 and Django 1.5? | 3 | python,mysql,django | 0 | 2013-03-04T13:18:00.000 |
So, in order to avoid the "no one best answer" problem, I'm going to ask, not for the best way, but the standard or most common way to handle sessions when using the Tornado framework. That is, if we're not using 3rd party authentication (OAuth, etc.), but rather we want to have our own Users table with secure cookies in the browser but most of the session info stored on the server, what is the most common way of doing this? I have seen some people using Redis, some people using their normal database (MySQL or Postgres or whatever), some people using memcached.
The application I'm working on won't have millions of users at a time, or probably even thousands. It will need to eventually get some moderately complex authorization scheme, though. What I'm looking for is to make sure we don't do something "weird" that goes down a different path than the general Tornado community, since authentication and authorization, while it is something we need, isn't something that is at the core of our product and so isn't where we should be differentiating ourselves. So, we're looking for what most people (who use Tornado) are doing in this respect, hence I think it's a question with (in theory) an objectively true answer.
The ideal answer would point to example code, of course. | 14 | 14 | 1.2 | 0 | true | 15,265,556 | 1 | 14,722 | 3 | 1 | 0 | 15,254,538 | Tornado designed to be stateless and don't have session support out of the box.
Use secure cookies to store sensitive information like user_id.
Use standard cookies to store not critical information.
For storing large objects - use standard scheme - MySQL + memcache. | 1 | 0 | 0 | standard way to handle user session in tornado | 4 | python,tornado | 0 | 2013-03-06T17:55:00.000 |
So, in order to avoid the "no one best answer" problem, I'm going to ask, not for the best way, but the standard or most common way to handle sessions when using the Tornado framework. That is, if we're not using 3rd party authentication (OAuth, etc.), but rather we want to have our own Users table with secure cookies in the browser but most of the session info stored on the server, what is the most common way of doing this? I have seen some people using Redis, some people using their normal database (MySQL or Postgres or whatever), some people using memcached.
The application I'm working on won't have millions of users at a time, or probably even thousands. It will need to eventually get some moderately complex authorization scheme, though. What I'm looking for is to make sure we don't do something "weird" that goes down a different path than the general Tornado community, since authentication and authorization, while it is something we need, isn't something that is at the core of our product and so isn't where we should be differentiating ourselves. So, we're looking for what most people (who use Tornado) are doing in this respect, hence I think it's a question with (in theory) an objectively true answer.
The ideal answer would point to example code, of course. | 14 | 17 | 1 | 0 | false | 16,320,593 | 1 | 14,722 | 3 | 1 | 0 | 15,254,538 | Here's how it seems other micro frameworks handle sessions (CherryPy, Flask for example):
Create a table holding session_id and whatever other fields you'll want to track on a per session basis. Some frameworks will allow you to just store this info in a file on a per user basis, or will just store things directly in memory. If your application is small enough, you may consider those options as well, but a database should be simpler to implement on your own.
When a request is received (RequestHandler initialize() function I think?) and there is no session_id cookie, set a secure session-id using a random generator. I don't have much experience with Tornado, but it looks like setting a secure cookie should be useful for this. Store that session_id and associated info in your session table. Note that EVERY user will have a session, even those not logged in. When a user logs in, you'll want to attach their status as logged in (and their username/user_id, etc) to their session.
In your RequestHandler initialize function, if there is a session_id cookie, read in whatever session info you need from the DB and perhaps create your own Session object to populate and store as a member variable of that request handler.
Keep in mind sessions should expire after a certain amount of inactivity, so you'll want to check for that as well. If you want a "remember me" type log in situation, you'll have to use a secure cookie to signal that (read up on this at OWASP to make sure it's as secure as possible, thought again it looks like Tornado's secure_cookie might help with that), and upon receiving a timed out session you can re-authenticate a new user by creating a new session and transferring whatever associated info into it from the old one. | 1 | 0 | 0 | standard way to handle user session in tornado | 4 | python,tornado | 0 | 2013-03-06T17:55:00.000 |
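A condensed, hedged sketch of those steps using Tornado's secure cookies; the storage here is a plain dict standing in for a real session table, and all names are invented:

```python
import uuid

import tornado.web

SESSIONS = {}   # stand-in for a real session table


class BaseHandler(tornado.web.RequestHandler):
    def prepare(self):
        # runs before each request; hand every visitor a session
        sid = self.get_secure_cookie('session_id')
        if sid is not None and not isinstance(sid, str):
            sid = sid.decode('utf-8')          # bytes on Python 3
        if not sid or sid not in SESSIONS:
            sid = uuid.uuid4().hex
            SESSIONS[sid] = {'logged_in': False}
            self.set_secure_cookie('session_id', sid)
        self.session = SESSIONS[sid]


class MainHandler(BaseHandler):
    def get(self):
        self.write('logged in' if self.session['logged_in'] else 'anonymous')


application = tornado.web.Application(
    [(r'/', MainHandler)],
    cookie_secret='replace-with-a-long-random-value',
)
```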
So, in order to avoid the "no one best answer" problem, I'm going to ask, not for the best way, but the standard or most common way to handle sessions when using the Tornado framework. That is, if we're not using 3rd party authentication (OAuth, etc.), but rather we want to have our own Users table with secure cookies in the browser but most of the session info stored on the server, what is the most common way of doing this? I have seen some people using Redis, some people using their normal database (MySQL or Postgres or whatever), some people using memcached.
The application I'm working on won't have millions of users at a time, or probably even thousands. It will need to eventually get some moderately complex authorization scheme, though. What I'm looking for is to make sure we don't do something "weird" that goes down a different path than the general Tornado community, since authentication and authorization, while it is something we need, isn't something that is at the core of our product and so isn't where we should be differentiating ourselves. So, we're looking for what most people (who use Tornado) are doing in this respect, hence I think it's a question with (in theory) an objectively true answer.
The ideal answer would point to example code, of course. | 14 | 4 | 0.197375 | 0 | false | 16,346,968 | 1 | 14,722 | 3 | 1 | 0 | 15,254,538 | The key issue with sessions is not where to store them, is to how to expire them intelligently. Regardless of where sessions are stored, as long as the number of stored sessions is reasonable (i.e. only active sessions plus some surplus are stored), all this data is going to fit in RAM and be served fast. If there is a lot of old junk you may expect unpredictable delays (the need to hit the disk to load the session). | 1 | 0 | 0 | standard way to handle user session in tornado | 4 | python,tornado | 0 | 2013-03-06T17:55:00.000 |
I have a problem connecting Python with Kettle. In Kettle, I only find the JS script module.
Does Kettle support Python directly? I mean, can I call a Python script in Kettle without using JS or anything else?
By the way, I want to move data from Oracle to Mongo regularly. I choose to use python to implement the transformation. So without external files, does it have some easy methods to keep the synchronization between a relational db and a no-rdb?
Thanks a lot. | 3 | 2 | 1.2 | 0 | true | 15,274,794 | 0 | 6,043 | 1 | 0 | 0 | 15,263,196 | It doesnt support it directly from what I've seen.
However there is a MongoDB input step, and a lot of work has been done on it recently (and is still ongoing).
So given there is a mongodb input step, if you're using an ETL tool already then why would you want to make it execute a python script to do the job?? | 1 | 0 | 0 | how to call python script in kettle | 1 | python,kettle | 0 | 2013-03-07T04:39:00.000 |
I'm working on a Django application that needs to interact with a mongoDB instance ( preferably through django's ORM) The meat of the application still uses a relational database - but I just need to interact with mongo for a single specific model.
Which mongo driver/subdriver for Python will suit my needs best? | 0 | 0 | 0 | 0 | false | 15,498,874 | 1 | 258 | 1 | 0 | 0 | 15,314,025 | You could use django-nonrel which is a fork of Django and will let you use the same ORM.
If you don't want a forked Django you could use MongoEngine, which has a similar syntax; otherwise just use raw pymongo. | 1 | 0 | 0 | Use MongoDB with Django but also use relational database | 1 | python,django,mongodb | 0 | 2013-03-09T18:00:00.000 |
I have created a cronjob in Python. The purpose is to insert data into a table from another one based on certain conditions. There are more than 65,000 records to be inserted.
I have executed the cronjob and have seen more than 25,000 records inserted. But after that the records get automatically deleted from that table. Even the records that had already been inserted into the table that day, before executing the cronjob, are getting deleted.
"The current database is hosted in Xeround cloud."
Is MySQL doing this, i.e. some kind of rollback or something?
Does anybody have any idea about this? Please give me a solution.
Thanks in advance.. | 4 | 1 | 0.197375 | 0 | false | 15,388,969 | 0 | 637 | 1 | 0 | 0 | 15,332,618 | Run your django orm statement in the django shell and print the traceback. Look for delete statements in the django traceback sql. | 1 | 0 | 0 | Records getting deleted from Mysql table automatically | 1 | mysql,django,python-2.7,xeround | 0 | 2013-03-11T06:40:00.000 |
I'm currently exploring using python to develop my server-side implementation. I've decided to use SQLAlchemy for database stuff.
What I'm not currently too sure about is how it should be set up so that more than one developer can work on the project. For the code it is not a problem but how do I handle the database modifications? How do the users sync databases and how should potential data be set up? Should/can each developer use their own sqlite db for development?
For production postgresql will be used but the developers must be able to work offline. | 0 | 0 | 0 | 0 | false | 15,346,132 | 0 | 195 | 1 | 0 | 0 | 15,345,864 | Make sure you have a python programs or programs to fill databases with test data from scratch. It allows each developer to work from different starting points, but also test with the same environment. | 1 | 0 | 0 | Multi developer environment python and sqlalchemy | 2 | python,database,development-environment | 0 | 2013-03-11T18:28:00.000 |
So I know how to download Excel files from Google Drive in .csv format. However, since .csv files do not support multiple sheets, I have developed a system in a for loop to add the '&grid=tab_number' to the file download url so that I can download each sheet as its own .csv file. The problem I have run into is finding out how many sheets are in the excel workbook on the Google Drive so I know how many times to set the for loop for. | 0 | 0 | 1.2 | 0 | true | 15,505,507 | 0 | 95 | 1 | 0 | 1 | 15,456,709 | Ended up just downloading with xlrd and using that. Thanks for the link Rob. | 1 | 0 | 0 | Complicated Excel Issue with Google API and Python | 1 | python,excel,google-drive-api | 0 | 2013-03-17T01:39:00.000 |
I have a huge CSV file which contains millions of records and I want to load it into a Netezza DB using a Python script. I have tried a simple insert query but it is very, very slow.
Can you point me to some example Python script, or give me some idea of how I can do the same?
Thank you | 2 | 0 | 0 | 1 | false | 15,643,468 | 0 | 4,583 | 2 | 0 | 0 | 15,592,980 | You need to get the nzcli installed on the machine that you want to run nzload from - your sysadmin should be able to put it on your unix/linux application server. There's a detailed process to setting it all up, caching the passwords, etc - the sysadmin should be able to do that to.
Once it is set up, you can create NZ control files to point to your data files and execute a load. The Netezza Data Loading guide has detailed instructions on how to do all of this (it can be obtained through IBM).
You can do it through Aginity as well if you have the CREATE EXTERNAL TABLE privilege - you can do an INSERT INTO ... FROM EXTERNAL ... REMOTESOURCE ODBC to load the file from an ODBC connection. | 1 | 0 | 0 | How to use NZ Loader (Netezza Loader) through Python Script? | 3 | python,netezza | 0 | 2013-03-23T22:45:00.000
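Roughly what the external-table route can look like from a Python script over ODBC; the DSN, table name and the exact USING options are assumptions and should be checked against the Netezza data loading guide:

```python
# Rough sketch only: loading a CSV through an external table over ODBC.
# The DSN, table name and USING options are assumptions; verify them against
# the Netezza data loading documentation before relying on this.
import pyodbc

conn = pyodbc.connect('DSN=NZSQL;UID=admin;PWD=secret')  # hypothetical DSN
cur = conn.cursor()
cur.execute("""
    INSERT INTO target_table
    SELECT * FROM EXTERNAL '/path/to/huge_file.csv'
    USING (DELIMITER ',' REMOTESOURCE 'ODBC')
""")
conn.commit()
conn.close()
```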
I have a huge CSV file which contains millions of records, and I want to load it into a Netezza DB using a Python script. I have tried a simple insert query, but it is very, very slow.
Can you point me to an example Python script, or give me some idea of how I can do this?
Thank you | 2 | 1 | 0.066568 | 1 | false | 17,522,337 | 0 | 4,583 | 1 | 0 | 0 | 15,592,980 | You can use nz_load4 to load the data. This is a support utility found in /nz/support/contrib/bin.
The syntax is the same as nzload. By default nz_load4 will load the data using 4 threads, and you can go up to 32 threads by using the -tread option.
For more details, use nz_load4 -h.
This will create log files based on the number of threads. | 1 | 0 | 0 | How to use NZ Loader (Netezza Loader) through Python Script? | 3 | python,netezza | 0 | 2013-03-23T22:45:00.000
For my app, I need to determine the nearest points to some other point and I am looking for a simple but relatively fast (in terms of performance) solution. I was thinking about using PostGIS and GeoDjango but I think my app is not really that "geographic" (I still don't really know what that means though). The geographic part (around 5 percent of the whole) is that I need to keep coordinates of objects (people and places) and then there is this task to find the nearest points. To put it simply, PostGIS and GeoDjango seems to be an overkill here.
I was also thinking of django-haystack with SOLR or Elasticsearch because I am going to need very strong text search capabilities, and these engines also have these "geographic" features. But I am not sure about that either, as I am afraid of core db <-> search engine db synchronisation and of the hardware requirements for these engines. At the moment I am more inclined to use PostgreSQL trigrams and some custom way to solve that "find near points" problem. Is there any good one? | 0 | 0 | 0 | 0 | false | 15,593,621 | 1 | 689 | 1 | 0 | 0 | 15,593,572 | You're probably right, PostGIS/GeoDjango is probably overkill, but making your own Django app would not be too much trouble for your simple task. Django offers a lot in terms of templating, etc. and with the built in admin makes it pretty easy to enter single records. And GeoDjango is part of contrib, so you can always use it later if your project needs it. | 1 | 0 | 0 | Django + postgreSQL: find near points | 3 | python,django,postgresql,postgis,geodjango | 0 | 2013-03-24T00:09:00.000
In my company we want to build an application in Google App Engine which will manage user provisioning to Google Apps. But we do not really know what data source to use.
We made two propositions:
a spreadsheet which will contain the users' data; we will use the Spreadsheet API to get this data and use it for user provisioning
the Datastore, which will also contain the users' data; this time we will use the Datastore API.
Please note that my company has 3493 users, and we do not know many of the advantages and disadvantages of each solution.
Any suggestions please? | 0 | 0 | 0 | 0 | false | 15,671,792 | 1 | 248 | 1 | 1 | 0 | 15,671,591 | If you use the Datastore API, you will also need to build out a way to manage users data in the system.
If you use Spreadsheets, that will serve as your way to manage users data, so in that way managing the data would be taken care of for you.
The benefits to use the Datastore API would be if you'd like to have a seamless integration of managing the user data into your application. Spreadsheet integration would remain separate from your main application. | 1 | 0 | 0 | Datastore vs spreadsheet for provisioning Google apps | 1 | python,google-app-engine,google-sheets,google-cloud-datastore | 0 | 2013-03-27T23:37:00.000 |
If I've been given a Query object that I didn't construct, is there a way to directly modify its WHERE clause? I'm really hoping to be able remove some AND statements or replace the whole FROM clause of a query instead of starting from scratch.
I'm aware of the following methods to modify the SELECT clause:
Query.with_entities(), Query.add_entities(), Query.add_columns(), Query.select_from()
which I think will also modify the FROM. And I see that I can view the WHERE clause with Query.whereclause, but the docs say that it's read-only.
I realize I'm thinking in SQL terms, but I'm more familiar with those concepts than with the ORM at this point. Any help is very appreciated. | 2 | 2 | 1.2 | 0 | true | 15,707,037 | 0 | 489 | 1 | 0 | 0 | 15,705,511 | You can modify query._whereclause directly, but I'd seek to find a way to not have this issue in the first place - wherever it is that the Query is generated should be factored out so that the non-whereclause version is made available. | 1 | 0 | 0 | SQLAlchemy ORM: modify WHERE clause | 1 | python,orm,sqlalchemy,where-clause | 0 | 2013-03-29T14:41:00.000
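A sketch of the factoring the answer recommends: keep a function that returns the query without a WHERE clause and add filters only where they are needed (the User model and session are hypothetical):

```python
# Sketch of the suggested factoring; the User model and session are hypothetical.
def base_user_query(session):
    # No WHERE clause here: callers decide what to filter on.
    return session.query(User)

# One caller adds its own criteria...
active = base_user_query(session).filter(User.active == True)

# ...another caller starts from the same base and filters differently,
# so nobody has to reach into query._whereclause after the fact.
admins = base_user_query(session).filter(User.role == 'admin')
```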
I have installed Python 2.7.3 on a Linux 64-bit machine. I have the Oracle 11g client (64-bit) installed as well. And I set ORACLE_HOME, PATH, LD_LIBRARY_PATH, and installed cx_Oracle 5.1.2 for Python 2.7 & Oracle 11g. But the ldd command on cx_Oracle is unable to find libclntsh.so.11.1.
I tried creating symlinks to libclntsh.so.11.1 under /usr/lib64, updated oracle.conf file under /etc/ld.so.conf.d/. Tried all possible solutions that have been discussed on this issue on the forums, but no luck.
Please let me know what I am missing. | 0 | 0 | 0 | 0 | false | 15,745,441 | 0 | 412 | 1 | 1 | 0 | 15,740,464 | The issue for me was that I installed Python and cx_Oracle as root, but the Oracle client installation was done by the "oracle" user. I got my own Oracle installation and that fixed the issue.
Later I ran into PyUnicodeUCS4_DecodeUTF16 issues with Python, and for that I had to install Python with the --enable-unicode=ucs4 option | 1 | 0 | 0 | cx_oracle unable to find Oracle Client | 1 | python,cx-oracle | 0 | 2013-04-01T09:00:00.000
I try to connect to a remote oracle server by cx_Oracle:
db = cx_Oracle.connect('username', 'password', dsn_tns)
but it says databaseError: ORA-12541 tns no listener | 6 | 1 | 0.066568 | 0 | false | 46,728,202 | 0 | 17,494 | 1 | 0 | 0 | 15,772,351 | In my case it was due to the fact that my server port was wrong:
./install_database_new.sh localhost:1511 XE full
I changed the port to "1521" and I could connect. | 1 | 0 | 0 | ocx_Oracle ORA-12541 tns no listener | 3 | python,cx-oracle | 0 | 2013-04-02T19:10:00.000 |
I'm using Sqlalchemy in a multitenant Flask application and need to create tables on the fly when a new tenant is added. I've been using Table.create to create individual tables within a new Postgres schema (along with search_path modifications) and this works quite well.
The limitation I've found is that the Table.create method blocks if there is anything pending in the current transaction. I have to commit the transaction right before the .create call or it will block. It doesn't appear to be blocked in Sqlalchemy because you can't Ctrl-C it. You have to kill the process. So, I'm assuming it's something further down in Postgres.
I've read in other answers that CREATE TABLE is transactional and can be rolled back, so I'm presuming this should be working. I've tried starting a new transaction with the current engine and using that for the table create (vs. the current Flask one) but that hasn't helped either.
Does anybody know how to get this to work without an early commit (and risking partial dangling data)?
This is Python 2.7, Postgres 9.1 and Sqlalchemy 0.8.0b2. | 0 | 3 | 1.2 | 0 | true | 15,775,816 | 0 | 1,872 | 1 | 0 | 0 | 15,774,899 | (Copy from comment)
Assuming sess is the session, you can do sess.execute(CreateTable(tenantX_tableY)) instead.
EDIT: CreateTable is only one of the things being done when calling table.create(). Use table.create(sess.connection()) instead. | 1 | 0 | 0 | How do you create a table with Sqlalchemy within a transaction in Postgres? | 1 | python,postgresql,sqlalchemy,ddl,flask-sqlalchemy | 0 | 2013-04-02T21:38:00.000 |
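Both variants from the answer, spelled out as a short sketch; 'tenant_table' is a placeholder Table object that has already been defined on some MetaData:

```python
# Sketch of the two variants above; 'tenant_table' and 'session' are placeholders.
from sqlalchemy.schema import CreateTable

# Variant 1: emit the CREATE TABLE DDL through the session's own transaction.
session.execute(CreateTable(tenant_table))

# Variant 2: let the Table do the work, but bind it to the session's connection
# so the DDL participates in the current transaction instead of blocking on it.
tenant_table.create(session.connection())

session.commit()  # or rollback(); on PostgreSQL the CREATE TABLE rolls back with it
```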
I'm writing a Python app that connects to Perforce on a daily basis. The app gets the contents of an Excel file on Perforce, parses it, and copies some data to a database. The file is rather big, so I would like to keep track of which revision of the file the app last read in the database; this way I can check whether the revision number is higher and avoid reading the file if it has not changed.
I could make do with getting the revision number, or the changelist number when the file was last checked in / changed. Or if you have any other suggestion on how to accomplish my goal of avoiding doing an unnecessary read of the file.
I'm using python 2.7 and the perforce-python API | 0 | 2 | 1.2 | 0 | true | 15,806,216 | 0 | 1,797 | 1 | 0 | 0 | 15,795,038 | Several options come to mind.
The simplest approach would be to always let your program use the same client and let it sync the file. You could let your program call p4 sync and see if you get a new version or not. Let it continue if you get a new version. This approach has the advantage that you don't need to remember any states/version from the previous run of your program.
If you don't like using a fixed client you could let your program always check the current head revision of the file in question:
p4 fstat //depot/path/yourfile |grep headRev | sed 's/.*headRev \(.*\)/\1/'
You could store that version for the next run of your program in some temp file and compare versions each time.
If you run your program at fixed times (e.g. via cron) you could check the last modification time (either with p4 filelog or with p4 fstat) and if the time is between the time of the last run and the current time then you need to process the file. This option is a bit intricate since you need to parse those different time formats. | 1 | 0 | 0 | How to get head revision number of a file, or the changelist number when it was checked in / changed | 1 | python,python-2.7,perforce | 0 | 2013-04-03T18:23:00.000 |
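Since the question mentions the perforce-python API, here is a hedged sketch of reading headRev with P4Python rather than shelling out to p4; the server, user and depot path are assumptions:

```python
# Hedged sketch using P4Python (the 'P4' module); server/user/path values are made up.
from P4 import P4

p4 = P4()
p4.port = 'perforce:1666'   # assumption
p4.user = 'builduser'       # assumption
p4.connect()
try:
    # fstat returns a list of dicts in tagged mode; 'headRev' is the latest revision.
    info = p4.run('fstat', '//depot/path/yourfile.xlsx')[0]
    head_rev = int(info['headRev'])
finally:
    p4.disconnect()

# Compare head_rev with the value stored in your database from the previous run
# and skip the expensive Excel parse when it has not changed.
```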
I am searching for a persistent data storage solution that can handle heterogenous data stored on disk. PyTables seems like an obvious choice, but the only information I can find on how to append new columns is a tutorial example. The tutorial has the user create a new table with added column, copy the old table into the new table, and finally delete the old table. This seems like a huge pain. Is this how it has to be done?
If so, what are better alternatives for storing mixed data on disk that can accommodate new columns with relative ease? I have looked at sqlite3 as well and the column options seem rather limited there, too. | 5 | 5 | 1.2 | 0 | true | 19,470,951 | 0 | 2,350 | 1 | 0 | 0 | 15,797,163 | Yes, you must create a new table and copy the original data. This is because Tables are a dense format. This gives it a huge performance benefits but one of the costs is that adding new columns is somewhat expensive. | 1 | 0 | 0 | Is the only way to add a column in PyTables to create a new table and copy? | 2 | python,pytables | 0 | 2013-04-03T20:13:00.000 |
I'm writing a web application in Python (on Apache server on a Linux system) that needs to connect to a Postgres database. It therefore needs a valid password for the database server. It seems rather unsatisfactory to hard code the password in my Python files.
I did wonder about using a .pgpass file, but it would need to belong to the www-data user, right? By default, there is no /home/www-data directory, which is where I would have expected to store the .pgpass file. Can I just create such a directory and store the .pgpass file there? And if not, then what is the "correct" way to enable my Python scripts to connect to the database? | 3 | 1 | 0.099668 | 0 | false | 15,897,981 | 0 | 1,239 | 1 | 0 | 0 | 15,895,788 | No matter what approach you use, other apps running as www-data will be able to read your password and log in as you to the database. Using peer auth won't help you out, it'll still trust all apps running under www-data.
If you want your application to be able to isolate its data from other databases you'll need to run it as a separate user ID. The main approaches with this are:
Use the apache suexec module to run scripts as a separate user;
Use fast-cgi (fcgi) or scgi to run the cgi as a different user; or
Have the app run its own minimal HTTP server and have Apache reverse proxy for it
Of these, by far the best option is usually to use scgi/fcgi. It lets you easily run your app as a different unix user but avoids the complexity and overhead of reverse proxying. | 1 | 0 | 0 | "Correct" way to store postgres password in python website | 2 | python,apache,postgresql,mod-wsgi | 0 | 2013-04-09T07:23:00.000 |
I am having trouble finding this answer anywhere on the internet. I want to be able to monitor a row in a MySQL table for changes and when this occurs, run a Python function. This Python function I want to run has nothing to do with MySQL; it just enables a pin on a Raspberry Pi. I have tried looking into SQLAlchemy; however, I can't tell if it is a trigger or a data mapping. Is something like this even possible?
Thanks in advance! | 1 | 4 | 0.379949 | 0 | false | 15,904,750 | 0 | 6,558 | 2 | 0 | 0 | 15,903,357 | What about a cron job instead of creating a loop? I think it's a bit nicer. | 1 | 0 | 0 | How to execute Python function when value in SQL table changes? | 2 | python,sql,sqlalchemy,raspberry-pi | 0 | 2013-04-09T13:31:00.000
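A sketch of what the cron-driven check could look like: compare the current value of the watched row with the value seen on the previous run and only then toggle the pin. The table, column, state file and enable_pin() helper are all assumptions:

```python
# Hypothetical polling script meant to be run from cron; table/column names,
# the state file and the enable_pin() function are assumptions.
import MySQLdb

def enable_pin():
    pass  # whatever RPi.GPIO call you use goes here

db = MySQLdb.connect(host='localhost', user='pi', passwd='secret', db='sensors')
cur = db.cursor()
cur.execute("SELECT value FROM watched_table WHERE id = 1")
current = cur.fetchone()[0]

try:
    last = open('/tmp/last_value.txt').read()
except IOError:
    last = None

if str(current) != last:
    enable_pin()
    open('/tmp/last_value.txt', 'w').write(str(current))
```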
I have a couple of Python scripts which I plan to put up on a server and run repeatedly, once a day. These scripts do some calculation and finally upload the data to a central database. Of course, to connect to the database a password and username are required. Is it safe to put this username and password in my Python script? If not, is there a better way to do it? | 0 | 0 | 0 | 0 | false | 15,907,470 | 0 | 284 | 1 | 0 | 0 | 15,905,113 | Create a DB user with limited access rights, for example, limited to only the table where it uploads data. Hardcode that user in your script or pass it as command line arguments. There is little else you can do for an automated script because it has to use some username and password to connect to the DB somehow.
You could encrypt the credentials and decrypt them in your script, but once a sufficiently determined attacker gets access to your user account and script extracting the username and password from a plain text script should not be too hard. You could use a compiled script to hide the credentials from the prying eyes, but again, it depends on how valuable access to your database is. | 1 | 0 | 0 | Connecting to a database using python and running it as a cron job | 1 | python | 0 | 2013-04-09T14:47:00.000 |
What's a reasonable default for pool_size in a ZODB.DB call in a multi-threaded web application?
Leaving the actual default value 7 gives me some connection WARNINGs even when I'm the only one navigating through db-interacting handlers. Is it possible to set a number that's too high? What factors play into deciding what exactly to set it to? | 2 | 4 | 1.2 | 0 | true | 15,919,692 | 0 | 485 | 1 | 0 | 0 | 15,914,198 | The pool size is only a 'guideline'; the warning is logged when you exceed that size; if you were to use double the number of connections, a CRITICAL log message would be registered instead. These are there to indicate you may be using too many connections in your application.
The pool will try to reduce the number of retained connections to the pool size as you close connections.
You need to set it to the maximum number of threads in your application. For Tornado, which I believe uses asynchronous events instead of threading almost exclusively, that might be harder to determine; if there is a maximum number of concurrent connections configurable in Tornado, then the pool size needs to be set to that number.
I am not sure how the ZODB will perform when your application scales to hundreds or thousands of concurrent connections, though. I've so far only used it with at most 100 or so concurrent connections spread across several processes and even machines (using ZEO or RelStorage to serve the ZODB across those processes).
I'd say that if most of these connections only read, you should be fine; it's writing on the same object concurrently that is ZODB's weak point as far as scalability is concerned. | 1 | 0 | 0 | Reasonable settings for ZODB pool_size | 1 | python,connection-pooling,zodb | 0 | 2013-04-09T23:12:00.000 |
I have to implement nosetests for Python code using a MongoDB store. Is there any Python library which permits me to initialize a mock in-memory MongoDB server?
I am using continuous integration, so I want my tests to be independent of any running MongoDB server.
Is there a way to mock a MongoDB server in memory to test the code without connecting to a real Mongo server?
Thanks in advance! | 17 | 4 | 0.197375 | 0 | false | 15,915,744 | 0 | 14,649 | 1 | 0 | 0 | 15,915,031 | I don’t know about Python, but I had a similar concern with C#. I decided to just run a real instance of Mongo on my workstation pointed at an empty directory. It’s not great because the code isn’t isolated but it’s fast and easy.
Only the data access layer actually calls Mongo during the test. The rest can rely on the mocks of the data access layer. I didn’t feel like faking Mongo was worth the effort when really I want to verify the interaction with Mongo is correct anyway. | 1 | 0 | 1 | Use mock MongoDB server for unit test | 4 | python,mongodb,python-2.7,pymongo | 0 | 2013-04-10T00:42:00.000 |
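The same approach translated to Python as a hedged sketch: start a throwaway mongod on an empty temp directory for the test run, then tear it down. The port and the assumption that mongod is on PATH are mine:

```python
# Hedged sketch: spin up a disposable mongod for the test run.
# Assumes 'mongod' is on PATH and port 27018 is free.
import shutil, subprocess, tempfile, time
import pymongo

dbpath = tempfile.mkdtemp()
proc = subprocess.Popen(['mongod', '--dbpath', dbpath, '--port', '27018'])
time.sleep(2)  # crude wait for startup; poll the port in real code

try:
    client = pymongo.MongoClient('localhost', 27018)
    client.testdb.items.insert({'name': 'fixture'})
    assert client.testdb.items.find_one({'name': 'fixture'}) is not None
finally:
    proc.terminate()
    proc.wait()
    shutil.rmtree(dbpath)
```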
We need to store a text field (say 2000 characters) and its unique hash (say SHA1) in a MySQL table.
To test whether a text already exists in the MySQL table, we generate the SHA1 of the text and check whether it exists in the unique field hash.
Now lets assume there are two texts:
"This is the text which will be stored in the database, and its hash will be generated"
"This is the text,which will be stored in the database and its hash will be generated."
Notice the minor differences.
Let's say 1 has already been added to the database; the check for 2 will not work, as their SHA1 hashes will be drastically different.
One obvious solution is to use Levenshtein distance or difflib to iterate over all already-added text fields to find near matches in the MySQL table.
But that is not performance oriented.
Is there a good hashing algorithm which has a correlation with the text content? I.e., two hashes generated for very similar texts will themselves be very similar.
That way it would be easier to detect possible duplicates before adding them in the MySQL table. | 1 | 1 | 1.2 | 0 | true | 15,919,118 | 0 | 496 | 1 | 0 | 0 | 15,919,063 | I highly doubt anything you're looking for exists, so I propose a simpler solution:
Come up with a simple algorithm for normalizing your text, e.g.:
Normalize whitespace
Remove punctuation
Then, calculate the hash of that and store it in a separate column (normalizedHash) or store an ID to a table of normalized hashes. Then you can compare the two different entries by their normalized content. | 1 | 0 | 0 | Good hashing algorithm with proximity to original text input , less avalanche effect? | 2 | python,mysql,string-matching | 0 | 2013-04-10T07:00:00.000 |
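A small sketch of the normalize-then-hash idea; the exact normalization rules are up to you:

```python
# Sketch of the suggested approach: hash a normalized form of the text and
# store it next to the exact SHA1, so near-duplicates collide on purpose.
import hashlib
import re

def normalized_hash(text):
    text = text.lower()
    text = re.sub(r'[^\w\s]', ' ', text)   # turn punctuation into whitespace
    text = ' '.join(text.split())          # collapse whitespace
    return hashlib.sha1(text.encode('utf-8')).hexdigest()

a = "This is the text which will be stored in the database, and its hash will be generated"
b = "This is the text,which will be stored in the database and its hash will be generated."
print(normalized_hash(a) == normalized_hash(b))  # True
```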
I wrote a little script that copies files from a bucket on one S3 account to a bucket in another S3 account.
In this script I use the bucket.copy_key() function to copy a key from one bucket to the other.
I tested it and it works fine, but the question is: do I get charged for copying files from S3 to S3 in the same region?
What I'm worried about is that maybe I missed something in the boto source code, and I hope it doesn't store the file on my machine and then send it to the other S3 account.
Also (sorry if it's too many questions in one topic), if I upload and run this script from an EC2 instance, will I get charged for bandwidth? | 1 | 3 | 1.2 | 0 | true | 15,957,021 | 1 | 322 | 1 | 0 | 1 | 15,956,099 | If you are using the copy_key method in boto then you are doing server-side copying. There is a very small per-request charge for COPY operations, just as there is for all S3 operations, but if you are copying between two buckets in the same region, there are no network transfer charges. This is true whether you run the copy operations on your local machine or on an EC2 instance. | 1 | 0 | 0 | Will I get charge for transfering files between S3 accounts using boto's bucket.copy_key() function? | 1 | python,amazon-web-services,amazon-s3,boto,data-transfer | 0 | 2013-04-11T18:24:00.000
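A short sketch of the server-side copy described above using boto's copy_key; bucket and key names are made up:

```python
# Sketch of a server-side copy with boto; bucket/key names are made up.
import boto

conn = boto.connect_s3()  # credentials of the destination account
src_bucket_name = 'source-bucket'
dst_bucket = conn.get_bucket('destination-bucket')

# copy_key(new_key_name, src_bucket_name, src_key_name) runs entirely on S3:
# the object bytes never pass through the machine running this script.
dst_bucket.copy_key('path/to/file.txt', src_bucket_name, 'path/to/file.txt')
```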
Things to note in advance:
I am using wampserver 2.2
I've forwarded port 80
I added a rule to my firewall to accept traffic through port 3306
I have added "Allow from all" in directory of "A file i forget"
My friend can access my phpmyadmin server through his browser
I am quite the novice, so bear with me.
I am trying to get my friend to be able to alter my databases on my phpMyAdmin server through
Python. I am able to do so on the host machine using "127.0.0.1" as the HOST. My question is: does he have to use my external IP as the HOST, or my external IP/phpmyadmin/ as the HOST? And if using the external IP is correct... what could the problem be? | 0 | 0 | 0 | 0 | false | 16,370,493 | 0 | 120 | 1 | 0 | 0 | 15,958,249 | If your phpMyAdmin runs on the same machine as the MySQL server, 127.0.0.1 is enough (and safer if your MySQL server binds to 127.0.0.1 rather than 0.0.0.0), provided you use TCP (rather than a Unix socket). | 1 | 0 | 0 | What do I use for HOST to connect to a remote server with mysqldb python? | 1 | python,sql,phpmyadmin,mysql-python,host | 0 | 2013-04-11T20:27:00.000
So there has been a lot of hating on singletons in Python. I generally see that having a singleton is usually no good, but what about stuff that has side effects, like using/querying a database? Why would I make a new instance for every simple query, when I could reuse an existing connection that is already set up? What would be a Pythonic approach/alternative to this?
Thank you! | 7 | 1 | 0.099668 | 0 | false | 15,960,691 | 0 | 6,703 | 2 | 0 | 0 | 15,958,678 | If you're using an object oriented approach, then abamet's suggestion of attaching the database connection parameters as class attributes makes sense to me. The class can then establish a single database connection which all methods of the class refer to as self.db_connection, for example.
If you're not using an object oriented approach, a separate database connection module can provide a functional-style equivalent. Devote a module to establishing a database connection, and simply import that module everywhere you want to use it. Your code can then refer to the connection as db.connection, for example. Since modules are effectively singletons, and the module code is only run on the first import, you will be re-using the same database connection each time. | 1 | 0 | 1 | DB-Connections Class as a Singleton in Python | 2 | python,database,singleton | 0 | 2013-04-11T20:53:00.000 |
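A sketch of the module-as-singleton idea from the paragraph above; MySQLdb and the connection details are just example placeholders:

```python
# db.py -- sketch of the module-as-singleton approach; the library and
# connection details are placeholders. The module body runs once on first
# import, so every importer shares the same connection object.
import MySQLdb

connection = MySQLdb.connect(host='localhost', user='app', passwd='secret', db='appdb')

# elsewhere in the code base:
#   import db
#   cur = db.connection.cursor()
#   cur.execute("SELECT 1")
```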
So there has been a lot of hating on singletons in Python. I generally see that having a singleton is usually no good, but what about stuff that has side effects, like using/querying a database? Why would I make a new instance for every simple query, when I could reuse an existing connection that is already set up? What would be a Pythonic approach/alternative to this?
Thank you! | 7 | 7 | 1.2 | 0 | true | 15,958,721 | 0 | 6,703 | 2 | 0 | 0 | 15,958,678 | Normally, you have some kind of object representing the thing that uses a database (e.g., an instance of MyWebServer), and you make the database connection a member of that object.
If you instead have all your logic inside some kind of function, make the connection local to that function. (This isn't too common in many other languages, but in Python, there are often good ways to wrap up multi-stage stateful work in a single generator function.)
If you have all the database stuff spread out all over the place, then just use a global variable instead of a singleton. Yes, globals are bad, but singletons are just as bad, and more complicated. There are a few cases where they're useful, but very rare. (That's not necessarily true for other languages, but it is for Python.) And the way to get rid of the global is to rethink you design. There's a good chance you're effectively using a module as a (singleton) object, and if you think it through, you can probably come up with a good class or function to wrap it up in.
Obviously just moving all of your globals into class attributes and @classmethods is just giving you globals under a different namespace. But moving them into instance attributes and methods is a different story. That gives you an object you can pass around—and, if necessary, an object you can have 2 of (or maybe even 0 under some circumstances), attach a lock to, serialize, etc.
In many types of applications, you're still going to end up with a single instance of something—every Qt GUI app has exactly one MyQApplication, nearly every web server has exactly one MyWebServer, etc. No matter what you call it, that's effectively a singleton or global. And if you want to, you can just move everything into attributes of that god object.
But just because you can do so doesn't mean you should. You've still got function parameters, local variables, globals in each module, other (non-megalithic) classes with their own instance attributes, etc., and you should use whatever is appropriate for each value.
For example, say your MyWebServer creates a new ClientConnection instance for each new client that connects to you. You could make the connections write MyWebServer.instance.db.execute whenever they want to execute a SQL query… but you could also just pass self.db to the ClientConnection constructor, and each connection then just does self.db.execute. So, which one is better? Well, if you do it the latter way, it makes your code a lot easier to extend and refactor. If you want to load-balance across 4 databases, you only need to change code in one place (where the MyWebServer initializes each ClientConnection) instead of 100 (every time the ClientConnection accesses the database). If you want to convert your monolithic web app into a WSGI container, you don't have to change any of the ClientConnection code except maybe the constructor. And so on. | 1 | 0 | 1 | DB-Connections Class as a Singleton in Python | 2 | python,database,singleton | 0 | 2013-04-11T20:53:00.000 |
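The ClientConnection example above, spelled out as a small sketch; the class and attribute names follow the answer's illustration and are not from a real code base:

```python
# Sketch of the dependency-passing style described above; names are illustrative.
class ClientConnection(object):
    def __init__(self, db):
        self.db = db            # injected, not fetched from a global or singleton

    def handle_request(self, sql, params=()):
        return self.db.execute(sql, params)

class MyWebServer(object):
    def __init__(self, db):
        self.db = db

    def accept(self):
        # Swapping in a different db (load balancer, test double, ...) only
        # requires changing this one spot.
        return ClientConnection(self.db)
```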
I have an error no such table: mytable, even though it is defined in models/tables.py. I use SQLite. Interestingly enough, if I go to admin panel -> my app -> database administration, then I see a link mytable; however, when I click on it I get no such table: mytable.
I don't know how to debug such an error.
Any ideas? | 2 | 3 | 1.2 | 0 | true | 16,026,857 | 1 | 1,115 | 1 | 0 | 0 | 16,026,776 | web2py keeps the structure it thinks the table has in a separate file. If someone has manually dropped the table, web2py will still think it exists, but of course you get an error when you try to actually use the table
Look for the *.mytable.table file in the databases directory | 1 | 0 | 0 | web2py. no such table error | 1 | python,web2py | 0 | 2013-04-16T00:21:00.000 |
I understand how to save a redis database using bgsave. However, once my database server restarts, how do I tell if a saved database is present and how do I load it into my application. I can tolerate a few minutes of lost data, so I don't need to worry about an AOF, but I cannot tolerate the loss of, say, an hour's worth of data. So doing a bgsave once an hour would work for me. I just don't see how to reload the data back into the database.
If it makes a difference, I am working in Python. | 2 | 1 | 0.197375 | 0 | false | 16,069,631 | 0 | 1,375 | 1 | 0 | 0 | 16,068,644 | You can stop redis and replace dump.rdb in /var/lib/redis (or whatever file is in the dbfilename variable in your redis.conf). Then start redis again. | 1 | 0 | 0 | How to load a redis database after | 1 | python,redis,persistence,reload | 0 | 2013-04-17T19:33:00.000 |
I have a python script that retrieves the newest 5 records from a mysql database and sends email notification to a user containing this information.
I would like the user to receive only new records and not old ones.
I can retrieve data from mysql without problems...
I've tried to store it in text files and compare the files but, of course, the text files containing freshly retrieved data will always have 5 records more than the old one.
So I have a logic problem here that, being a newbie, I can't tackle easily.
Using lists is also an idea but I am stuck in the same kind of problem.
The infamous 5 records can stay the same for one week and then we can have a new record or maybe 3 new records a day.
It's quite unpredictable but more or less that should be the behaviour.
Thank you so much for your time and patience. | 0 | 2 | 1.2 | 0 | true | 16,079,138 | 0 | 85 | 1 | 0 | 0 | 16,078,856 | Are you assigning a unique incrementing ID to each record? If you are, you can create a separate table that holds just the ID of the last record fetched, that way you can only retrieve records with IDs greater than this ID. Each time you fetch, you could update this table with the new latest ID.
Let me know if I misunderstood your issue, but saving the last fetched ID in the database could be a solution. | 1 | 0 | 1 | How to check if data has already been previously used | 1 | python | 0 | 2013-04-18T09:11:00.000 |
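A sketch of the last-fetched-ID bookkeeping described above; the table and column names, and the send_email() helper, are assumptions:

```python
# Sketch of the suggested bookkeeping; table/column names and send_email() are assumptions.
import MySQLdb

db = MySQLdb.connect(host='localhost', user='u', passwd='p', db='mydb')
cur = db.cursor()

cur.execute("SELECT last_id FROM fetch_state WHERE name = 'notifier'")
last_id = cur.fetchone()[0]

cur.execute(
    "SELECT id, payload FROM records WHERE id > %s ORDER BY id DESC LIMIT 5",
    (last_id,))
new_rows = cur.fetchall()

if new_rows:
    send_email(new_rows)  # hypothetical helper that builds the notification
    cur.execute("UPDATE fetch_state SET last_id = %s WHERE name = 'notifier'",
                (new_rows[0][0],))
    db.commit()
```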
I need to scrape about 40 random web pages at the same time. These pages vary on each request.
I have used RPCs in Python to fetch the URLs and scraped the data using BeautifulSoup. It takes about 25 seconds to scrape all the data and display it on the screen.
To increase the speed, I stored the data in the App Engine datastore so that each piece of data is scraped only once and can be accessed from there quickly.
But the problem is: as the size of the data in the datastore increases, it is taking too long to fetch the data from the datastore (longer than the scraping).
Should I use memcache, or shift to MySQL? Is MySQL faster than the GAE datastore?
Or is there any other better way to fetch the data as quickly as possible? | 0 | 0 | 0 | 0 | false | 16,131,039 | 1 | 402 | 1 | 1 | 0 | 16,098,570 | Based on what I know about your app it would make sense to use memcache. It will be faster, and will automatically take care of things like expiring stale cache entries. | 1 | 0 | 0 | What is the fastest way to get scraped data from so many web pages? | 1 | python,mysql,google-app-engine,google-cloud-datastore,web-scraping | 0 | 2013-04-19T06:29:00.000 |
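A sketch of the memcache-in-front-of-the-scrape pattern on App Engine; the key scheme and the scrape_page() helper are assumptions:

```python
# Sketch of caching scraped pages in App Engine memcache; the key scheme and
# scrape_page() helper are assumptions.
from google.appengine.api import memcache

def get_page_data(url):
    key = 'scraped:' + url
    data = memcache.get(key)
    if data is None:
        data = scrape_page(url)              # hypothetical BeautifulSoup routine
        memcache.add(key, data, time=3600)   # expire stale entries after an hour
    return data
```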
I had my Python installation removed from Windows a while ago, and recently downloaded Python 2.7.4 from the main site, but when I type "python" in the Windows PowerShell (x86) prompt from C:, I get the message "'python' is not recognized as an internal or external command, operable program or batch file.", and I'd like to find out how to fix this.
I get the same message when I'm in the actual python27 folder (and the python.exe is indeed there). However, when I type in .\python, it runs as expected, and my computer can run other .exe's just fine. I'm using Windows 7 Home Premium Service Pack 1 on a Sony VAIO laptop. I'm not very familiar with the inner workings of my computer, so I'm not sure where to look from here.
My current path looks like this, with the python folder at the very end:
%SystemRoot%\system32\WindowsPowerShell\v1.0\;C:\Program Files\Common Files\Microsoft Shared\Windows Live;C:\Program Files (x86)\Common Files\Microsoft Shared\Windows Live;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program Files\WIDCOMM\Bluetooth Software\;C:\Program Files\WIDCOMM\Bluetooth Software\syswow64;C:\Program Files (x86)\Common Files\Roxio Shared\10.0\DLLShared\;C:\Program Files (x86)\Common Files\Roxio Shared\DLLShared\;C:\Program Files (x86)\Common Files\Adobe\AGL;C:\Program Files (x86)\Windows Live\Shared;C:\Program Files\Java\jdk1.6.0_23\bin;c:\Program Files (x86)\Microsoft SQL Server\100\Tools\Binn\;c:\Program Files\Microsoft SQL Server\100\Tools\Binn\;c:\Program Files\Microsoft SQL Server\100\DTS\Binn\;C:\Program Files (x86)\MySQL\MySQL Workbench CE 5.2.42;C:\Program Files\MySQL\MySQL Server 5.5\bin;C:\Program Files (x86)\apache-ant-1.8.4\bin;C:\Program Files\TortoiseSVN\bin;C:\Windows\system32\WindowsPowerShell\v1.0\;C:\Program Files\Common Files\Microsoft Shared\Windows Live;C:\Program Files (x86)\Common Files\Microsoft Shared\Windows Live;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program Files\WIDCOMM\Bluetooth Software\;C:\Program Files\WIDCOMM\Bluetooth Software\syswow64;C:\Program Files (x86)\Common Files\Roxio Shared\10.0\DLLShared\;C:\Program Files (x86)\Common Files\Roxio Shared\DLLShared\;C:\Program Files (x86)\Common Files\Adobe\AGL;C:\Program Files (x86)\Windows Live\Shared;C:\Program Files\Java\jdk1.6.0_23\bin;c:\Program Files (x86)\Microsoft SQL Server\100\Tools\Binn\;c:\Program Files\Microsoft SQL Server\100\Tools\Binn\;c:\Program Files\Microsoft SQL Server\100\DTS\Binn\;C:\Program Files (x86)\MySQL\MySQL Workbench CE 5.2.42;C:\Program Files\MySQL\MySQL Server 5.5\bin;C:\Program Files (x86)\apache-ant-1.8.4\bin;C:\Program Files\TortoiseSVN\bin;C:\Program Files\Java\jdk1.6.0_23\bin;C:\Python27 | 1 | 1 | 0.197375 | 0 | false | 16,108,206 | 0 | 1,533 | 1 | 1 | 0 | 16,107,658 | Making the comments an answer for future reference:
Have a ; at the end of the PATH and logout and log back in. | 1 | 0 | 0 | Can't open python.exe in Windows Powershell | 1 | python,windows,powershell,path,exe | 0 | 2013-04-19T15:02:00.000 |
My little website has a table of comments and a table of votes. Each user of the website gets to vote once on each comment.
When displaying comments to the user, I will select from the comments table and outerjoin a vote if one exists for the current user.
Is there a way to make a query where the vote will be attached to the comment through comment.my_vote ?
The way I'm doing it now, the query is returning a list for each result - [comment, vote] - and I'm passing that directly to my template. I'd prefer if the vote could be a child object of the comment. | 0 | 0 | 1.2 | 0 | true | 17,140,662 | 0 | 149 | 1 | 0 | 0 | 16,114,939 | In the end I decided that working with the tuple returned by the query wasn't a problem. | 1 | 0 | 0 | SqlAlchemy: Join onto another object | 2 | python,sqlalchemy | 0 | 2013-04-19T23:35:00.000 |
Are there any SQL injection equivalents, or other vulnerabilities I should be aware of when using NoSQL?
I'm using Google App Engine DB in Python2.7, and noticed there is not much documentation from Google about security of Datastore.
Any help would be appreciated! | 2 | 7 | 1.2 | 0 | true | 16,140,194 | 1 | 973 | 1 | 1 | 0 | 16,134,927 | Standard SQL injection techniques rely on the fact that SQL has various statements to either query or modify data. The datastore has no such feature. The GQL (the query language for the datastore) can only be used to query, not modify. Inserts, updates, and deletes are done using a separate method that does not use a text expression. Thus, the datastore is not vulnerable to such injection techniques. In the worst case, an attacker could only change the query to select data you did not intend, but never change it. | 1 | 0 | 0 | NDB/DB NoSQL Injection Google Datastore | 1 | python,security,google-app-engine,nosql,google-cloud-datastore | 0 | 2013-04-21T18:51:00.000 |
I am about as fresh and noob as you can be with Twisted. I chose a database proxy as my final project. The idea is to have MySQL as the database. A Twisted proxy runs in between the client and the database. The proxy exposes methods like UPDATE, SELECT and INSERT over XML-RPC to the client, and the methods themselves in the proxy hit the database and grab the data. I was also thinking of some caching mechanism on the proxy. So, any heads up on the project? How does caching work in Twisted? | 1 | 0 | 1.2 | 0 | true | 16,171,818 | 0 | 166 | 1 | 0 | 0 | 16,155,776 | As you use XML-RPC, you will have to write a simple Twisted web application that handles XML-RPC calls. There are many possibilities for caching: expiring, storing on disk, invalidating, etc. You may start from a simple dict for storing queries and find its limitations. | 1 | 0 | 0 | Database Proxy using Twisted | 1 | python,twisted | 0 | 2013-04-22T20:07:00.000
I am a rails developer that is learning python and I am doing a project using the pyramid framework. I am used to having some sort of way of rolling back the database changes If I change the models in some sort of way. Is there some sort of database rollback that works similar to the initialize_project_db command? | 0 | 2 | 1.2 | 0 | true | 16,159,421 | 1 | 139 | 1 | 0 | 0 | 16,157,144 | initialize_db is not a migration script. It is for bootstrapping your model and that's that. If you want to tie in migrations with upgrade/rollback support, look at alembic for SQL schema migrations. | 1 | 0 | 0 | Is there some sort of way to roll back the initialize_project_db script in pyramid? | 1 | python,database,pyramid | 0 | 2013-04-22T21:36:00.000 |
I have an old SQLite 2 database that I would like to read using Python 3 (on Windows). Unfortunately, it seems that Python's sqlite3 library does not support SQLite 2 databases. Is there any other convenient way to read this type of database in Python 3? Should I perhaps compile an older version of pysqlite? Will such a version be compatible with Python 3? | 0 | 0 | 1.2 | 0 | true | 23,542,492 | 0 | 445 | 1 | 0 | 0 | 16,193,630 | As the pysqlite author I am pretty sure nobody has ported pysqlite 1.x to Python 3 yet. The only solution that makes sense effort-wise is the one theomega suggested.
If all you need is access the data from Python for importing them elsewhere, but doing the sqlite2 dump/sqlite3 restore dance is not possible, there is an option, but it is not convenient: Use the builtin ctypes module to access the necessary functions from the SQLite 2 DLL. You would then implement a minimal version of pysqlite yourself that only wraps what you really need. | 1 | 0 | 0 | Read an SQLite 2 database using Python 3 | 1 | sqlite,python-3.x | 0 | 2013-04-24T13:43:00.000 |
What's the best way to automatically query several dozen MySQL databases with a script on a nightly basis? The script usually returns no results, so I'd ideally have it email or notify me if any are ever returned.
I've looked into PHP, Ruby and Python for this, but I'm a little stumped as to how best to handle it. | 0 | 1 | 0.066568 | 0 | false | 16,203,901 | 0 | 307 | 1 | 0 | 0 | 16,203,859 | I believe the only one who can answer this question is you. All 3 examples you gave can do what you need, with cron to automate the job. But the best scripting language to use is the one you are most comfortable with. | 1 | 0 | 0 | What's the best way to automate running MySQL scripts on several databases on a daily basis? | 3 | php,python,mysql,sql,ruby | 1 | 2013-04-24T23:19:00.000
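A sketch in Python of the cron-driven check: run the query against each configured database and send an email only when rows come back. The hosts, query and addresses are placeholders:

```python
# Hypothetical nightly check meant to be run from cron; hosts, query and
# email addresses are placeholders.
import smtplib
from email.mime.text import MIMEText
import MySQLdb

DATABASES = [
    {'host': 'db1.example.com', 'user': 'report', 'passwd': 'x', 'db': 'app'},
    {'host': 'db2.example.com', 'user': 'report', 'passwd': 'x', 'db': 'app'},
]
QUERY = "SELECT id, message FROM errors WHERE created_at > NOW() - INTERVAL 1 DAY"

hits = []
for cfg in DATABASES:
    conn = MySQLdb.connect(**cfg)
    cur = conn.cursor()
    cur.execute(QUERY)
    rows = cur.fetchall()
    if rows:
        hits.append((cfg['host'], rows))
    conn.close()

if hits:
    body = '\n'.join('%s: %r' % (host, rows) for host, rows in hits)
    msg = MIMEText(body)
    msg['Subject'] = 'Nightly DB check found results'
    msg['From'] = 'cron@example.com'
    msg['To'] = 'me@example.com'
    s = smtplib.SMTP('localhost')
    s.sendmail(msg['From'], [msg['To']], msg.as_string())
    s.quit()
```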
I am quite new to heroku and I reached a bump in my dev...
I am trying to write a server/client kind of application... on the server side I will have a DB (I installed PostgreSQL for Python) and I was hoping I could reach the server, for now, via a Python client (for test purposes) and send data/queries and perform basic tasks on the DB.
I am using Python with Heroku. I managed to install the DB and it seems to be working (i.e. I can query, insert, delete, etc...).
Now all I want is to write a server (in Python) that would be my app and would listen on a port, receive messages and then perform whatever tasks it is asked to do... I thought about using sockets for this and have managed to write a basic server/client locally... however, when I deploy the app on Heroku I cannot connect to the server and my code is basically worthless.
Can somebody please advise on the basic framework for this sort of requirement... surely I am not the first guy to want to write a client/server app... if you could point to a tutorial/doc I would be much obliged.
Thx | 0 | 3 | 0.53705 | 0 | false | 16,245,012 | 1 | 569 | 1 | 0 | 0 | 16,244,924 | Heroku is for developing Web (HTTP, HTTPS) applications. You can't deploy code that uses socket to Heroku.
If you want to run your app on Heroku, the easier way is to use a web framework (Flask, CherryPy, Django...). They usually also come with useful libraries and abstractions for you to talk to your database. | 1 | 0 | 0 | how to write a client/server app in heroku | 1 | python,heroku | 0 | 2013-04-26T20:44:00.000 |
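A minimal Flask sketch of the web-app shape Heroku expects instead of a raw socket server; the route, the database comment and the Procfile line are assumptions:

```python
# Minimal sketch of a web endpoint instead of a raw socket server; the route
# and database step are assumptions. On Heroku this could be started from a
# Procfile line such as:  web: python app.py
import os
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/items', methods=['POST'])
def add_item():
    payload = request.json
    # insert payload into PostgreSQL here (e.g. with psycopg2)
    return jsonify(status='ok')

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=int(os.environ.get('PORT', 5000)))
```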
I have a SQLAlchemy Session object and would like to know whether it is dirty or not. The exact question I would like to (metaphorically) ask the Session is: "If at this point I issue a commit() or a rollback(), is the effect on the database the same or not?".
The rationale is this: I want to ask the user whether or not he wants to confirm the changes. But if there are no changes, I would like not to ask anything. Of course I could monitor all the operations that I perform on the Session myself and decide whether there were modifications or not, but because of the structure of my program this would require some quite involved changes. If SQLAlchemy already offered this capability, I'd be glad to take advantage of it.
Thanks everybody. | 18 | 0 | 0 | 0 | false | 16,257,019 | 0 | 12,854 | 1 | 0 | 0 | 16,256,777 | Sessions have a private _is_clean() member which seems to return true if there is nothing to flush to the database. However, the fact that it is private may mean it's not suitable for external use. I'd stop short of personally recommending this, since any mistake here could obviously result in data loss for your users. | 1 | 0 | 0 | How to check whether SQLAlchemy session is dirty or not | 4 | python,sqlalchemy | 0 | 2013-04-27T20:54:00.000 |
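Besides the private _is_clean(), the session's public .new, .dirty and .deleted collections can be inspected; a sketch of that alternative check (not what the answer above recommends verbatim):

```python
# Sketch built on the session's public attributes instead of the private
# _is_clean(); note .dirty can contain objects whose changes turn out to be
# no-ops, so this errs on the side of asking the user.
def session_has_pending_changes(session):
    return bool(session.new or session.dirty or session.deleted)

if session_has_pending_changes(session):
    if ask_user_to_confirm():   # hypothetical UI hook
        session.commit()
    else:
        session.rollback()
```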
I'm building a web app in GAE that needs to make use of some simple relationships between the datastore entities. Additionally, I want to do what I can from the outset to make import and exportability easier, and to reduce development time to migrate the application to another platform.
I can see two possible ways of handling relationships between entities in the datastore:
Including the key (or ID) of the related entity as a field in the entity
OR
Creating a unique identifier as an application-defined field of an entity to allow other entities to refer to it
The latter is less integrated with GAE, and requires some kind of mechanism to ensure the unique identifier is in fact unique (which in turn will rely on ancestor queries).
However, the latter may make data portability easier. For example, if entities are created on a local machine they can be uploaded (provided the unique identifier is unique) without problem. By contrast, relying on the GAE defined ID will not work as the ID will not be consistent from the development to the deployed environment.
There may be data exportability considerations too that mean an application-defined unique identifier is preferable.
What is the best way of doing this? | 0 | 1 | 1.2 | 0 | true | 16,268,751 | 1 | 48 | 1 | 1 | 0 | 16,266,979 | GAE's datastore just doesn't export well to SQL. There's often situations where data needs to be modeled very differently on GAE to support certain queries, ie many-to-many relationships. Denormalizing is also the right way to support some queries on GAE's datastore. Ancestor relationships are something that don't exist in the SQL world.
In order to import/export data, you'll need to write scripts specific to your data models.
If you're planning for compatibility with SQL, use CloudSQL instead of the datastore.
In terms of moving data between dev/production, you've already identified the ways to do it. There's no real "easy" way. | 1 | 0 | 0 | GAE: planning for exportability and relational databases | 1 | google-app-engine,python-2.7,google-cloud-datastore | 0 | 2013-04-28T19:40:00.000 |
I am using wx.Grid to build a spreadsheet-like input interface. I want to lock the size of the cells so the user cannot change them. I have successfully disabled the drag-sizing of the grid with grid.EnableDragGridSize(False), but the user can still resize the cells by using the borders between column and row labels. I am probably missing something in the wxGrid documentation. | 1 | 0 | 1.2 | 0 | true | 16,279,016 | 0 | 426 | 1 | 0 | 0 | 16,278,613 | I found the solution. To completely lock the user's ability to resize cells, you need to use the .EnableDragGridSize(False), .DisableDragColSize() and .DisableDragRowSize() methods. | 1 | 1 | 0 | wx.Grid cell size lock | 1 | python,wxpython | 0 | 2013-04-29T12:26:00.000
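The three calls from the answer shown in context; everything else is just wxPython boilerplate:

```python
# Sketch showing the three calls from the answer in a minimal frame.
import wx
import wx.grid

app = wx.App(False)
frame = wx.Frame(None, title="Locked grid")
grid = wx.grid.Grid(frame)
grid.CreateGrid(10, 5)

grid.EnableDragGridSize(False)   # no resizing via the grid corner
grid.DisableDragColSize()        # no resizing via column label borders
grid.DisableDragRowSize()        # no resizing via row label borders

frame.Show()
app.MainLoop()
```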
I am able to easily call a Python script from PHP using system(), although there are several options. They all work, except that they all fail at the same point. Through trial and error I have narrowed it down to failing on
import MySQLdb
I am not too familiar with PHP, but I am using it in a pinch. I understand that there could be reasons why such a restriction would be in place, but this will be on a local server, used in house, and the information in the MySQL db is backed up and not too critical. Meaning such a restriction can reasonably be ignored.
But how do I allow PHP to call a Python script that imports MySQLdb? I am on a Linux machine (CentOS), if that is relevant. | 0 | 1 | 1.2 | 0 | true | 16,282,538 | 0 | 322 | 1 | 0 | 0 | 16,281,823 | The Apache user (www-data in your case) has a somewhat restricted environment. Check where the Python MySQLdb package is installed and edit the Apache user's env (cf. the Apache manual and your distribution's documentation about this) so it has a usable Python environment with the right PYTHONPATH etc. | 1 | 0 | 0 | call python script from php that connects to MySQL | 1 | php,python,mysql,linux | 1 | 2013-04-29T14:55:00.000
Let's say I have some free-form entries for names, where some are in the format "Last Name, First Name" and others are in the format "First Name Last Name" (e.g. "Bob MacDonald" and "MacDonald, Bob" are both present).
From what I understand, Lucene indexing does not allow for wildcards in the beginning of the sentence, so what would be some ways in which I could find both. This is for neo4j and py2neo, so solutions in either lucene pattern matching, or in python regex matching are welcome. | 2 | 1 | 0.099668 | 0 | false | 16,290,406 | 1 | 194 | 1 | 0 | 0 | 16,290,237 | Can you just use OR? "Hilary Clinton" OR "Clinton, Hilary"? | 1 | 0 | 1 | Lucene or Python: Select both "Hilary Clinton" and "Clinton, Hilary" name entries | 2 | python,regex,neo4j,lucene | 0 | 2013-04-30T00:09:00.000 |
There are two possible cases where I am finding MySQL and RDBMS too slow. I need a recommendation for a better alternative in terms of NOSQL.
1) I have an application that's saving tons of emails for later analysis. Email content is saved in a simple table with a couple of relations to another two tables. Columns are sender, recepient, content, headers, timestamp, etc.
Now that the records are close to a million, it's taking longer to search through them.
Which would be the best free/open-source NoSQL replacement to store the mails in, so that searching through them would be faster?
2) Another use case is fundamentally an asset management library consisting of files. The system is very similar to the mail one. Here we have files of all types of extensions. When the files are created or changed, we store metadata about the files in a table. Again, data sizes have grown big over time, so searching them is not easy.
Ideas welcome. Someone suggested Mongo. Is there anything better and faster? | 0 | 1 | 1.2 | 0 | true | 16,306,049 | 0 | 56 | 1 | 0 | 0 | 16,304,959 | If search is your primary use case, I'd look into a search solution like ElasticSearch or Solr. Even if some databases support some sort of full text indexing, they're not optimized for this problem. | 1 | 0 | 0 | Possible NoSQL cases | 1 | python,nosql | 0 | 2013-04-30T16:42:00.000 |
I have been running a Python octo.py script to do word/author counting on a series of files. The script works well -- I tried it on a limited set of data and am getting the correct results.
But when I run it on the complete data set it takes forever. I am running on a Windows XP laptop with a dual-core 2.33 GHz CPU and 2 GB RAM.
I opened up my CPU usage and it shows the processors running at 0%-3% of maximum.
What can I do to force Octo.py to utilize more CPU?
Thanks. | 0 | 0 | 1.2 | 0 | true | 16,378,262 | 0 | 196 | 1 | 0 | 0 | 16,376,374 | As your application isn't very CPU intensive, the slow disk turns out to be the bottleneck. Old 5200 RPM laptop hard drives are very slow, which, in addition to fragmentation and low RAM (which impacts disk caching), makes reading very slow. This in turn slows down processing and yields low CPU usage. You can try defragmenting, compressing the input files (as they become smaller in disk size, processing speed will increase) or other means of improving IO. | 1 | 0 | 0 | Octo.py only using between 0% and 3% of my CPUs | 1 | python-2.7,multiprocessing,cpu-usage | 0 | 2013-05-04T16:20:00.000
I'm trying to write a function to do a bulk save to MongoDB using pymongo; is there a way of doing it? I've already tried using insert, and it works for new records but fails on duplicates. I need the same functionality that you get using save, but with a collection of documents (it replaces an already-added document with the same _id instead of failing).
Thanks in advance! | 3 | 1 | 0.197375 | 0 | false | 16,380,066 | 0 | 311 | 1 | 0 | 0 | 16,379,254 | You can use a bulk insert with the option w=0 (formerly safe=False), but then you should do a check to see whether all documents were actually inserted, if this is important for you. | 1 | 0 | 1 | Is there a pymongo (or another Python library) bulk-save? | 1 | python,mongodb,pymongo | 0 | 2013-05-04T21:42:00.000
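If replace-on-duplicate semantics matter more than raw speed, a per-document upsert loop is an alternative to the w=0 bulk insert; a sketch in pymongo 2.x style, with a made-up helper name:

```python
# Alternative sketch: per-document upserts give save()-like semantics for a
# list of documents (pymongo 2.x style API; bulk_save is a made-up helper).
def bulk_save(collection, docs):
    for doc in docs:
        if '_id' in doc:
            collection.update({'_id': doc['_id']}, doc, upsert=True)
        else:
            collection.insert(doc)
```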
I am executing an update in MySQLdb which changes the values of part of a key and a field. When I execute the query from Python, it triggers something in the database that causes it to add extra rows. When I execute the same exact query from MySQL Workbench, it performs the update correctly without adding extra rows. What is the difference between calling it from MySQL Workbench and calling it from Python?
Thanks | 0 | 0 | 0 | 0 | false | 16,943,780 | 0 | 78 | 1 | 0 | 0 | 16,420,461 | There was a trigger activating that I did not know about. Thanks for the help | 1 | 0 | 0 | MySQLdb for python behaves differently for queries than the mysql workbench browser | 1 | python,mysql | 0 | 2013-05-07T13:35:00.000 |
If I have text that is saved in a Postgresql database is there any way to execute that text as Python code and potentially have it update the same database? | 0 | 0 | 0 | 0 | false | 16,470,721 | 0 | 162 | 1 | 0 | 0 | 16,470,079 | let me see if I understand what you are trying to accomplish:
store ad-hoc user code in a varchar field on a database
read and execute said code
allow said code to affect the database in question, say drop table ...
Assuming that I've got it, you could write something that
reads the table holding the code (use pyodbc or something)
runs an eval on what was pulled from the db - this will let you execute ANY code, including self updating code
are you sure this is what you want to do? | 1 | 0 | 0 | Execute text in Postgresql database as Python code | 2 | python,postgresql | 0 | 2013-05-09T19:57:00.000 |
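A sketch of the read-and-exec flow outlined above, using psycopg2; the table and column names are assumptions, and the obvious caveat applies that exec runs arbitrary code with this script's database rights:

```python
# Sketch of the flow the answer outlines, using psycopg2; table and column
# names are assumptions. exec() runs arbitrary code with this script's
# database rights, so only do this with trusted code.
import psycopg2

conn = psycopg2.connect("dbname=mydb user=me")
cur = conn.cursor()
cur.execute("SELECT body FROM stored_scripts WHERE name = %s", ('nightly',))
code = cur.fetchone()[0]

# Give the stored code a connection so it can update the same database.
exec(code, {'conn': conn})
conn.commit()
```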
I can connect to my local mysql database from python, and I can create, select from, and insert individual rows.
My question is: can I directly instruct mysqldb to take an entire dataframe and insert it into an existing table, or do I need to iterate over the rows?
In either case, what would the python script look like for a very simple table with ID and two data columns, and a matching dataframe? | 60 | -1 | -0.022219 | 0 | false | 56,185,092 | 0 | 168,445 | 1 | 0 | 0 | 16,476,413 | df.to_sql(name='owner', con=db_connection, schema='aws', if_exists='replace', index=True, index_label='id') | 1 | 0 | 0 | How to insert pandas dataframe via mysqldb into database? | 9 | python,mysql,pandas,mysql-python | 0 | 2013-05-10T06:29:00.000
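A fuller sketch of writing a whole DataFrame with to_sql; the connection string and table name are placeholders. Newer pandas wants a SQLAlchemy engine here, while older versions accepted a raw MySQLdb connection plus flavor='mysql':

```python
# Sketch of writing a whole DataFrame with to_sql; connection string and
# table name are placeholders.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine('mysql+mysqldb://user:password@localhost/mydb')

df = pd.DataFrame({'id': [1, 2], 'colA': ['x', 'y'], 'colB': [0.1, 0.2]})
df.to_sql('my_table', engine, if_exists='append', index=False)
```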
I want to do something like
select * from table where name like '%name%'
Is there any way to do this in HBase? And if there is a way, how do I do it?
ps. I use HappyBase to communicate with Hbase | 0 | 1 | 0.197375 | 0 | false | 16,608,107 | 0 | 364 | 1 | 0 | 0 | 16,606,906 | HBase provides a scanner interface that allows you to enumerate over a range of keys in an HTable. HappyBase has support for scans and this is documented pretty well in their API.
So this would solve your question if you were asking for a "like 'name%'" type of query which searches for anything that begins with the prefix 'name'. I am assuming name is the row key in your table, otherwise you would need a secondary index which relates the name field to the row key value of the table or go with the sub-awesome approach of scanning the entire table and doing the matching in Python yourself, depending on your usecase...
Edit: HappyBase also supports passing a 'filter' string assuming you are using a recent HBase version. You could use the SubStringComparator or RegexStringComparator to fit your needs. | 1 | 0 | 0 | Hbase wildcard support | 1 | python,hbase,thrift | 0 | 2013-05-17T10:33:00.000 |
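A sketch of both scan styles mentioned in the answer; the host and table name are made up, and the filter-string syntax should be checked against your HBase version:

```python
# Sketch of the two scan styles mentioned above; host/table names are made up
# and the filter-string syntax should be checked against your HBase version.
import happybase

connection = happybase.Connection('hbase-host')
table = connection.table('mytable')

# Prefix scan: the equivalent of LIKE 'name%' on the row key.
for key, data in table.scan(row_prefix='name'):
    print(key)

# Substring match on the row key via a server-side filter (LIKE '%name%').
for key, data in table.scan(filter="RowFilter(=, 'substring:name')"):
    print(key)
```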
I have to read and write data to files with the .xlsx extension using Python. And I have to use cell formatting features like merging cells, bold, font size, color, etc. So which Python module is good to use? | 0 | 1 | 0.099668 | 0 | false | 24,190,976 | 0 | 367 | 1 | 0 | 0 | 16,651,124 | openpyxl is the only library I know of that can read and write xlsx files. Its downside is that when you edit an existing file it doesn't save the original formatting or charts. A problem I'm dealing with right now. If anyone knows a workaround, please let me know. | 1 | 0 | 1 | Which module has more option to read and write xlsx extension files using Python? | 2 | python | 0 | 2013-05-20T13:55:00.000
In MongoDB if we provide a coordinate and a distance, using $near operator will find us the documents nearby within the provided distance, and sorted by distance to the given point.
Does Redis provide similar functions? | 2 | 1 | 1.2 | 0 | true | 16,886,089 | 0 | 798 | 1 | 0 | 0 | 16,761,134 | Noelkd was right. There is no inbuilt function in Redis.
I found that the simplest solution is to use geohash to store the hashed lat/lng as keys.
Geohash encodes nearby locations with similar prefixes, e.g.
if the hash of a certain location is ebc8ycq, then nearby locations can be queried with the wildcard ebc8yc* in Redis. | 1 | 0 | 0 | How to find geographically near documents in Redis, like $near in MongoDB? | 2 | python,mongodb,redis,geospatial | 0 | 2013-05-26T16:13:00.000
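A sketch of the geohash-prefix idea using the python-geohash package and redis-py; both library choices, the key scheme and the use of KEYS (only acceptable for small data sets) are assumptions:

```python
# Sketch of the geohash-prefix idea; the python-geohash package, the key
# scheme and the KEYS scan are assumptions, not the answer's exact code.
import geohash   # python-geohash
import redis

r = redis.StrictRedis()

def add_place(name, lat, lon):
    h = geohash.encode(lat, lon, precision=7)   # e.g. 'ebc8ycq'
    r.set('place:' + h + ':' + name, '%f,%f' % (lat, lon))

def nearby(lat, lon, precision=6):
    prefix = geohash.encode(lat, lon, precision=precision)
    return r.keys('place:' + prefix + '*')
```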
I am running my Django site on appengine. In the datastore, there is an entity kind / table X which is only updated once every 24 hours.
X has around 15K entries and each entry is of form ("unique string of length <20", integer).
In some context, a user request involves fetching an average of 200 entries from X, which is quite costly if done individually.
What is an efficient way I can adopt in this situation?
Here are some ways I thought about, but have some doubts in them due to inexperience
Using the Batch query supported by db.get() where a list of keys may be passed as argument and the get() will try to fetch them all in one walk. This will reduce the time quite significantly, but still there will be noticeable overhead and cost. Also, I am using Django models and have no idea about how to relate these two.
Manually copying the whole database into memory (like storing it in a map) after each update job, which occurs every 24 hours. This will work really well and also save me lots of datastore reads, but I have other doubts. Will it remain persistent across instances? What other factors do I need to be aware of which might interfere? This, or something like this, seems perfect for my situation.
The above are just what I could come up with on first thought. There must be ways I am unaware of or missing.
Thanks. | 1 | 1 | 1.2 | 0 | true | 16,775,062 | 1 | 53 | 1 | 1 | 0 | 16,773,961 | Your total amount of data is very small and looks like a dict. Why not save it (this object) as a single entry in the database or the blobstore, and then you can cache this entry. | 1 | 0 | 0 | A way to optimize reading from a datastore which updates once a day | 1 | python,django,google-app-engine | 0 | 2013-05-27T13:08:00.000
I need to represent instances of Python "Long integer" in MySQL. I wonder what the most appropriate SQL data type I should use.
The Python documentation (v2.7) says (for numbers.Integral):
Long integers
These represent numbers in an unlimited range, subject to available (virtual) memory only. For the purpose of shift and mask operations, a binary representation is assumed, and negative numbers are represented in a variant of 2’s complement which gives the illusion of an infinite string of sign bits extending to the left.
My read of the MySQL documentation suggests that BIGINT is limited to 64 bits. The DECIMAL type seems to be limited to 65 digits. I can, of course, use BLOB.
The application needs to support very large amounts of data, but I don't know yet how big these long integers might get, nor how many of them I'm likely to see.
I'd like to preserve the spirit of the Python long integer definition, which suggests BLOB. I'd also like to avoid re-inventing the wheel, and so I am appealing to the stackoverflow hive-mind.
Suggestions? | 2 | 3 | 0.148885 | 0 | false | 16,867,914 | 0 | 1,333 | 1 | 0 | 0 | 16,867,823 | Yes, if you really need unlimited precision then you'll have to use a blob, because even strings are limited.
But really, I can almost guarantee that you'll be fine with a NUMERIC/DECIMAL data type. 65 digits means that you can represent numbers in the range (-10^65, 10^65). How large is this? To give you some idea: the number of atoms in the whole universe is estimated to be about 10^80. If you only need positive numbers, you can further increase the range by a factor of 2 by subtracting 10^65 - 1 beforehand. | 1 | 0 | 0 | What are the options for storing Python long integers in MySQL? | 4 | python,mysql,mysql-python | 0 | 2013-06-01T00:26:00.000
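A small hedged sketch of the DECIMAL(65,0) route with MySQLdb (table name and credentials are hypothetical); Python longs go in and come back out via their string form:

    import MySQLdb

    conn = MySQLdb.connect(db='mydb', user='me', passwd='secret')  # hypothetical credentials
    cur = conn.cursor()

    # DECIMAL(65,0) covers anything up to 65 digits; fall back to a BLOB/TEXT column
    # only if you genuinely need more than that.
    cur.execute("CREATE TABLE IF NOT EXISTS bignums "
                "(id INT AUTO_INCREMENT PRIMARY KEY, n DECIMAL(65,0))")

    big = 2 ** 200  # a Python long well beyond BIGINT range, still under 65 digits
    cur.execute("INSERT INTO bignums (n) VALUES (%s)", (str(big),))
    conn.commit()

    cur.execute("SELECT n FROM bignums WHERE id = %s", (cur.lastrowid,))
    value = int(cur.fetchone()[0])   # back to a Python long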
Can someone advise on which database is better for storing textual information such as part-of-speech sequences, dependencies, and sentences, used in an NLP project written in Python? Currently this information is stored in files, and they need to be parsed every time in order to extract the mentioned blocks, which are used as an input for the next processing stage.
Options considered: MongoDB, Cassandra and MySQL. Are NoSQL databases better for this type of application?
Thanks. | 0 | 6 | 1.2 | 0 | true | 16,873,052 | 0 | 2,075 | 1 | 0 | 0 | 16,872,221 | This really depends on what exactly you are storing and which operations you will perform on this data.
SQL vs. NoSQL is a very fundamental decision, and no one can give you good advice here. If your data fits the relational model well, then SQL (PostgreSQL or MySQL) is your choice. If your data is more like documents, use MongoDB.
That said, just recently I made a search engine. We had to store indexed pages (raw text), the same text but tokenized and some additional metadata. MongoDB performed really well. | 1 | 0 | 0 | Database for NLP project | 1 | python,mysql,mongodb,nlp,bigdata | 0 | 2013-06-01T11:31:00.000 |
So I have my Django app running and I just added South. I performed some migrations which worked fine locally, but I am seeing some database errors on my Heroku version. I'd like to view the current schema for my database both locally and on Heroku so I can compare and see exactly what is different. Is there an easy way to do this from the command line, or a better way to debug this? | 2 | 3 | 1.2 | 0 | true | 16,942,831 | 1 | 3,348 | 1 | 0 | 0 | 16,942,317 | From the command line you should be able to do heroku pg:psql to connect directly via psql to your database; from in there, \dt will show you your tables and \d <tablename> will show you a table's schema. | 1 | 0 | 0 | How to View My Postgres DB Schema from Command Line | 3 | python,django,postgresql,heroku,django-south | 0 | 2013-06-05T14:15:00.000
I am executing an update query using MySQLdb and Python 2.7. Is it possible to know which rows were affected by retrieving all their ids? | 1 | 2 | 1.2 | 0 | true | 16,961,869 | 0 | 67 | 1 | 0 | 0 | 16,961,438 | You can get the number of affected rows by using cursor.rowcount. The information about which rows were affected is not available, since the MySQL API does not support this. | 1 | 0 | 0 | Python, mySQLdb: Is it possible to retrieve updated keys, after update? | 1 | python,mysql-python | 0 | 2013-06-06T11:53:00.000
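If knowing the exact ids matters, a common workaround (sketched here with hypothetical table and column names) is to select the matching ids first, inside the same transaction, and then update exactly those rows:

    import MySQLdb

    conn = MySQLdb.connect(db='mydb', user='me', passwd='secret')  # hypothetical credentials
    cur = conn.cursor()

    # Lock and remember the ids that match, then update exactly those rows.
    cur.execute("SELECT id FROM items WHERE qty < %s FOR UPDATE", (10,))
    ids = [row[0] for row in cur.fetchall()]

    if ids:
        placeholders = ','.join(['%s'] * len(ids))
        cur.execute("UPDATE items SET flagged = 1 WHERE id IN (%s)" % placeholders, ids)
    conn.commit()

    # cur.rowcount now tells you how many rows changed, and ids tells you which ones.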
MySQL is installed at /usr/local/mysql
In site.cfg the path for mysql_config is /usr/local/mysql/bin/mysql_config
but when I try to build in the terminal I'm getting this error:
hammads-imac-2:MySQL-python-1.2.4b4 syedhammad$ sudo python setup.py build
running build
running build_py
copying MySQLdb/release.py -> build/lib.macosx-10.8-intel-2.7/MySQLdb
running build_ext
building '_mysql' extension
clang -fno-strict-aliasing -fno-common -dynamic -g -Os -pipe -fno-common -fno-strict-aliasing -fwrapv -mno-fused-madd -DENABLE_DTRACE -DMACOSX -DNDEBUG -Wall -Wstrict-prototypes -Wshorten-64-to-32 -DNDEBUG -g -Os -Wall -Wstrict-prototypes -DENABLE_DTRACE -pipe -Dversion_info=(1,2,4,'beta',4) -D_version_=1.2.4b4 -I/usr/local/mysql/include -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c _mysql.c -o build/temp.macosx-10.8-intel-2.7/_mysql.o -Wno-null-conversion -Os -g -fno-strict-aliasing -arch x86_64
unable to execute clang: No such file or directory
error: command 'clang' failed with exit status 1
Help Please | 1 | 2 | 1.2 | 0 | true | 16,985,650 | 0 | 141 | 1 | 1 | 0 | 16,985,604 | You probably need Xcode's Command Line Tools.
Download the latest version of Xcode, then go to "Preferences", select the "Download" tab, then install Command Line Tools. | 1 | 0 | 0 | Configuring MySQL with python on OS X lion | 1 | python,mysql,macos | 0 | 2013-06-07T13:41:00.000
We are currently developing an application that makes heavy use of PostgreSQL. For the most part we access the database using SQLAlchemy, and this works very well. For testing, the relevant objects can either be mocked or used without database access. But there are some parts of the system that run non-standard queries. These subsystems have to create temporary tables, insert a huge number of rows, and then merge the data back into the main table.
Currently there are some SQL statements in these subsystems, but this makes the relevant classes tightly coupled with the database, which in turn makes things harder to unit-test.
Basically my question is, is there any design pattern for solving this problem? The only thing that I could come up with is to put these SQL statements into a separate class and just pass an instance to the other class. This way I can mock the query-class for unit-tests, but it still feels a bit clumsy. Is there a better way to do this? | 3 | 0 | 0 | 0 | false | 17,017,714 | 0 | 161 | 1 | 0 | 0 | 16,999,676 | So after playing around with it some more I now have a solution that is halfway decent. I split the class in question up into three separate classes:
A class that provides access to the required data;
A context manager that supports the temporary table stuff;
And the old class with all the logic (sans the database stuff);
When I instantiate my logic class I supply it with instances of the aforementioned classes. It works OK, the abstraction is slightly leaky (especially the context manager), but I can at least unit test the logic properly now. | 1 | 0 | 0 | Design Pattern for complicated queries | 1 | python,sql,design-patterns | 0 | 2013-06-08T12:48:00.000
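A bare-bones sketch of that split (all names are made up and the SQL bodies are elided placeholders): the query class owns the SQL, the logic class receives it as a constructor argument, so a unit test can pass in a mock instead.

    class TempTableQueries(object):
        """Thin wrapper around the raw SQL; the only class that talks to the database."""
        def __init__(self, connection):
            self.connection = connection

        def create_temp_table(self):
            self.connection.execute("CREATE TEMPORARY TABLE staging (...)")   # placeholder SQL

        def bulk_insert(self, rows):
            self.connection.execute("INSERT INTO staging VALUES (...)", rows)  # placeholder SQL

        def merge_into_main(self):
            self.connection.execute("INSERT INTO main SELECT ... FROM staging")  # placeholder SQL


    class ImportJob(object):
        """Pure logic; takes the query object as a dependency, so tests can inject a mock."""
        def __init__(self, queries):
            self.queries = queries

        def run(self, rows):
            self.queries.create_temp_table()
            self.queries.bulk_insert(self.prepare(rows))
            self.queries.merge_into_main()

        def prepare(self, rows):
            # the interesting, unit-testable transformation logic lives here
            return rows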
I've been writing a Python web app (in Flask) for a while now, and I don't believe I fully grasp how database access should work across multiple request/response cycles. Prior to Python my web programming experience was in PHP (several years worth) and I'm afraid that my PHP experience is misleading some of my Python work.
In PHP, each new request creates a brand new DB connection, because nothing is shared across requests. The more requests you have, the more connections you need to support. However, in a Python web app, where there is shared state across requests, DB connections can persist.
So I need to manage those connections, and ensure that I close them. Also, I need to have some kind of connection pool, because if I have just one connection shared across all requests, then requests could block waiting on DB access, if I don't have enough connections available.
Is this a correct understanding? Or have I identified the differences well? In a Python web app, do I need to have a DB connection pool that shares its connections across many requests? And the number of connections in the pool will depend on my application's request load?
I'm using Psycopg2. | 4 | 4 | 1.2 | 0 | true | 17,012,369 | 1 | 351 | 1 | 0 | 0 | 17,012,349 | Have you looked into SQLAlchemy at all? It takes care of a lot of the dirty details: it maintains a pool of connections, and reuses/closes them as necessary. | 1 | 0 | 0 | Database access strategy for a Python web app | 2 | python,psycopg2 | 0 | 2013-06-09T17:36:00.000
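If you stay on raw psycopg2, it ships a pool module of its own; a minimal sketch (DSN and pool size are hypothetical):

    from psycopg2 import pool

    # Size the pool for your expected concurrency; the DSN is made up.
    db_pool = pool.ThreadedConnectionPool(
        1, 10, 'dbname=mydb user=me password=secret host=localhost')

    def run_query(sql, params=()):
        conn = db_pool.getconn()
        try:
            cur = conn.cursor()
            cur.execute(sql, params)
            rows = cur.fetchall()
            conn.commit()
            return rows
        finally:
            db_pool.putconn(conn)   # return the connection to the pool instead of closing it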
Need to get one row from a table, and delete the same row.
It does not matter which row it is. The function should be generic, so the column names are unknown, and there are no identifiers. (Rows as a whole can be assumed to be unique.)
The resulting function would be like a pop() function for a stack, except that the order of elements does not matter.
Possible solutions:
Delete into a temporary table.
(Can this be done in pysqlite?)
Get * with 1 as limit, and then Delete * with 1 as limit.
(Is this safe if there is just one user?)
Get one row, then delete with a WHERE clause that compares the whole row.
(Can this be done in pysqlite?)
Suggestions? | 0 | 1 | 1.2 | 0 | true | 17,382,716 | 0 | 64 | 1 | 0 | 0 | 17,127,306 | Well, every table in SQLite has a rowid. Select one and delete it? | 1 | 0 | 0 | row_pop() function in pysqlite? | 1 | python,database,pysqlite | 0 | 2013-06-15T19:47:00.000
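A minimal sketch of that rowid-based pop (it assumes an ordinary rowid table, i.e. not created WITHOUT ROWID, and a trusted table name, since the name is interpolated into the SQL):

    import sqlite3

    def pop_row(conn, table):
        """Remove and return an arbitrary row from `table`."""
        cur = conn.execute('SELECT rowid, * FROM %s LIMIT 1' % table)
        row = cur.fetchone()
        if row is None:
            return None
        conn.execute('DELETE FROM %s WHERE rowid = ?' % table, (row[0],))
        conn.commit()
        return row[1:]   # the row without its rowid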
I have a user registration form made in Django.
I want to know the city from which the user is registering.
Is there any way to get the IP address of the user and then somehow get the city for that IP, using some API or something? | 0 | 0 | 0 | 0 | false | 17,159,679 | 1 | 163 | 1 | 0 | 0 | 17,159,576 | Not in any reliable way, or at least not in Django. The problem is that user IPs are usually dynamic, hence the address changes every couple of days. Also, some ISPs will soon start to use a single IP for big blocks of users (I forgot what this is called) since they are running out of IPv4 addresses... In other words, all users from that ISP within a whole state or even country will have a single IP address.
So using the IP is not reliable. You could probably figure out the country or region of the user with reasonable accuracy; however, my recommendation is not to use the IP for anything except logging and permission purposes (e.g. blocking a spam IP).
If you want user locations, you can however use the HTML5 location API, which has a much better shot at getting an accurate location since it can utilize other methods, such as using the GPS sensor in a phone. | 1 | 0 | 0 | Is there any simple way to store the user location while registering in database | 4 | python,django,ip | 0 | 2013-06-18T02:12:00.000
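For the IP part of the question, a hedged sketch: REMOTE_ADDR / X-Forwarded-For gives you the client address, and Django's GeoIP wrapper (which requires the MaxMind C library and datasets to be installed; newer Django versions use a geoip2 module instead) can turn it into a rough city guess.

    def client_ip(request):
        # X-Forwarded-For is set by proxies/load balancers; fall back to REMOTE_ADDR
        forwarded = request.META.get('HTTP_X_FORWARDED_FOR')
        if forwarded:
            return forwarded.split(',')[0].strip()
        return request.META.get('REMOTE_ADDR')

    def guess_city(request):
        # Treat the result as a rough guess only, for the reasons given in the answer
        from django.contrib.gis.geoip import GeoIP
        g = GeoIP()
        info = g.city(client_ip(request))   # e.g. {'city': ..., 'country_name': ...} or None
        return info and info.get('city')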
I am trying to put the items scraped by my spider into a MySQL DB via a MySQL pipeline. Everything is working, but I see some odd behaviour: the database is not filled in the same order as the items appear on the website itself. The order seems random, probably because of the dictionary-like list of scraped items, I guess.
My questions are:
how can I get the same order as the items on the website itself?
how can I reverse the order from question 1?
So items on website:
A
B
C
D
E
adding order in my sql:
E
D
C
B
A | 2 | 0 | 1.2 | 0 | true | 17,213,740 | 1 | 201 | 1 | 0 | 0 | 17,213,515 | Items in a database have no special order if you don't impose one. So you should add a timestamp to your table in the database, keep it up-to-date (MySQL has a special flag to mark a field as auto-now) and use ORDER BY in your queries. | 1 | 0 | 0 | Scrapy reversed item ordening for preparing in db | 3 | python,scrapy | 0 | 2013-06-20T12:20:00.000
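A sketch of that first suggestion with MySQLdb (table layout and credentials are hypothetical): an auto-filled timestamp column records arrival order, which you can then sort on in either direction.

    import MySQLdb

    conn = MySQLdb.connect(db='scrapydb', user='me', passwd='secret')  # hypothetical
    cur = conn.cursor()
    cur.execute("""
        CREATE TABLE IF NOT EXISTS items (
            id INT AUTO_INCREMENT PRIMARY KEY,
            name VARCHAR(255),
            added_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
        )""")

    # Read back sorted by arrival time; add DESC to reverse it.
    cur.execute("SELECT name FROM items ORDER BY added_at, id")

Note that this only records the order in which rows arrived, not their position on the page, so the priority-field approach in the next answer is the more direct fix for matching the website's order.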
I am trying to put the items scraped by my spider into a MySQL DB via a MySQL pipeline. Everything is working, but I see some odd behaviour: the database is not filled in the same order as the items appear on the website itself. The order seems random, probably because of the dictionary-like list of scraped items, I guess.
My questions are:
how can I get the same order as the items on the website itself?
how can I reverse the order from question 1?
So items on website:
A
B
C
D
E
adding order in my sql:
E
D
C
B
A | 2 | 1 | 0.066568 | 0 | false | 17,221,923 | 1 | 201 | 2 | 0 | 0 | 17,213,515 | It's hard to say without the actual code, but in theory...
Scrapy is completely async, so you cannot know the order in which items will be parsed and processed through the pipeline.
But you can control the behavior by "marking" each item with a priority key. Add a priority field to your Item class; in the parse_item method of your spider, set the priority based on the item's position on the web page; then in your pipeline you can either write this priority field to the database (so that you can sort on it later), or gather all items in a class-wide list and, in the close_spider method, sort the list and bulk insert it into the database.
Hope that helps. | 1 | 0 | 0 | Scrapy reversed item ordening for preparing in db | 3 | python,scrapy | 0 | 2013-06-20T12:20:00.000 |
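A hedged sketch of that second approach (the field names and the way items are extracted are made up); the pipeline buffers items and writes them out sorted by their position on the page:

    from scrapy.item import Item, Field

    class MyItem(Item):
        name = Field()
        priority = Field()   # position of the element on the page

    # In the spider, something along these lines:
    #     for index, node in enumerate(nodes_on_page):
    #         yield MyItem(name=extract_name(node), priority=index)

    class OrderedMySQLPipeline(object):
        def open_spider(self, spider):
            self.items = []

        def process_item(self, item, spider):
            self.items.append(item)
            return item

        def close_spider(self, spider):
            # sort by page position; pass reverse=True for the reversed order
            for item in sorted(self.items, key=lambda i: i['priority']):
                self.insert_into_mysql(item)

        def insert_into_mysql(self, item):
            pass  # the existing INSERT logic would go here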
Let's take SQLAlchemy as an example.
Why should I use the Flask SQLAlchemy extension instead of the normal SQLAlchemy module?
What is the difference between those two?
Isn't it perfectly possible to just use the normal module in your Flask app? | 1 | 4 | 1.2 | 0 | true | 17,223,377 | 1 | 99 | 1 | 0 | 0 | 17,222,824 | The extensions exist to extend the functionality of Flask and reduce the amount of code you need to write for common usage patterns, like integrating your application with SQLAlchemy in the case of flask-sqlalchemy, or login handling with flask-login. Basically, they are just clean, reusable ways to do common things with a web application.
But I see your point with flask-sqlalchemy: it's not really that much of a code saver to use it, but it does give you the scoped session automatically, which you need in a web environment with SQLAlchemy.
Other extensions like flask-login really do save you a lot of boilerplate code. | 1 | 0 | 0 | Why do Flask Extensions exist? | 1 | python,sqlalchemy,flask,flask-sqlalchemy | 0 | 2013-06-20T20:06:00.000 |
I would like to save data to sqlite3 databases which will be fetched from the remote system by FTP. Each database would be given a name that is an encoding of the time and date with a resolution of 1 hour (i.e. a new database every hour).
From the Python 3 sqlite3 library, would any problems be encountered if two threads try to create the database at the same time? Or are there protections against this? | 1 | 0 | 0 | 0 | false | 17,275,138 | 0 | 426 | 1 | 0 | 0 | 17,274,626 | This will work just fine.
When two threads are trying to create the same file, one will fail to do so, but it will continue to try to lock the file. | 1 | 0 | 0 | Can sqlite3 databases be created in a thread-safe way? | 1 | python,python-3.x,sqlite | 0 | 2013-06-24T11:41:00.000 |
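A small sketch of the hourly-file idea (the table schema is hypothetical); CREATE TABLE IF NOT EXISTS makes it harmless when two threads race to set up the same hour's file, and the timeout lets one writer wait out the other's lock:

    import os
    import sqlite3
    import time

    def hourly_connection(directory='.'):
        # e.g. data-2013062411.sqlite -- one file per hour
        name = time.strftime('data-%Y%m%d%H.sqlite')
        conn = sqlite3.connect(os.path.join(directory, name), timeout=30)
        conn.execute('CREATE TABLE IF NOT EXISTS samples (ts REAL, value TEXT)')
        conn.commit()
        return conn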
I have a Flask application which uses three types of databases: MySQL, Mongo and Redis. Now, if it had been simply MySQL I could have used SQLAlchemy or something along those lines for database modelling. But in the current scenario, where I am using many different types of database in a single application, I think I will have to create custom models.
Can you please suggest the best practices for doing that, or any tutorial covering the same? | 3 | 0 | 0 | 0 | false | 17,289,054 | 1 | 79 | 1 | 0 | 0 | 17,276,970 | It's not an efficient model, but this would work:
You can write three different APIs (RESTful pattern is a good idea). Each will be an independent Flask application, listening on a different port (likely over localhost, not the public IP interface).
A fourth Flask application is your main application that external clients can access. The view functions in the main application will issue API calls to the other three APIs to obtain data as they see fit.
You could optimize and merge one of the three database APIs into the main application, leaving only two (likely the two less used) to be implemented as APIs. | 1 | 0 | 0 | How to create models if I am using various types of database simultaneously? | 2 | python,database,flask,flask-sqlalchemy | 0 | 2013-06-24T13:41:00.000 |
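A rough sketch of what a view in the main application might look like under that layout (the ports, routes and payloads are all hypothetical):

    import requests
    from flask import Flask, jsonify

    app = Flask(__name__)

    # Hypothetical internal endpoints -- one small Flask API per datastore
    MYSQL_API = 'http://127.0.0.1:5001'
    MONGO_API = 'http://127.0.0.1:5002'
    REDIS_API = 'http://127.0.0.1:5003'

    @app.route('/user/<int:user_id>')
    def user_profile(user_id):
        profile = requests.get('%s/users/%d' % (MYSQL_API, user_id)).json()
        activity = requests.get('%s/activity/%d' % (MONGO_API, user_id)).json()
        session = requests.get('%s/session/%d' % (REDIS_API, user_id)).json()
        return jsonify(profile=profile, activity=activity, session=session)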
I'm looking for the best approach for inserting a row into a spreadsheet using openpyxl.
Effectively, I have a spreadsheet (Excel 2007) which has a header row, followed by (at most) a few thousand rows of data. I'm looking to insert the row as the first row of actual data, so after the header. My understanding is that the append function is suitable for adding content to the end of the file.
Reading the documentation for both openpyxl and xlrd (and xlwt), I can't find any clear cut ways of doing this, beyond looping through the content manually and inserting into a new sheet (after inserting the required row).
Given my so far limited experience with Python, I'm trying to understand if this is indeed the best option to take (the most Pythonic!), and if so could someone provide an explicit example. Specifically, can I read and write rows with openpyxl, or do I have to access cells? Additionally, can I (over)write the same file(name)? | 19 | -1 | -0.016665 | 0 | false | 17,305,443 | 0 | 90,928 | 1 | 0 | 0 | 17,299,364 | Unfortunately there isn't really a better way to do it than to read in the file and use a library like xlwt to write out a new Excel file (with your new row inserted at the top). Excel doesn't work like a database that you can read and append to. You unfortunately just have to read in the information, manipulate it in memory, and write it out to what is essentially a new file. | 1 | 0 | 0 | Insert row into Excel spreadsheet using openpyxl in Python | 12 | python,excel,xlrd,xlwt,openpyxl | 0 | 2013-06-25T14:00:00.000
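A hedged sketch of that read-and-rewrite approach using only openpyxl (recent versions; cell values survive, but formatting and formulas generally do not, as the answer notes; file and row contents are made up):

    from openpyxl import Workbook, load_workbook

    src = load_workbook('data.xlsx')
    ws_in = src.active

    out = Workbook()
    ws_out = out.active

    rows = [[cell.value for cell in row] for row in ws_in.rows]
    header, body = rows[0], rows[1:]

    ws_out.append(header)
    ws_out.append(['new', 'row', 'values'])   # the inserted first data row
    for row in body:
        ws_out.append(row)

    out.save('data.xlsx')   # overwriting the same filename is fine

Newer openpyxl releases also provide ws.insert_rows(), which shifts rows down in place and may be simpler if it is available to you.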
So I have a password-protected XLS file which I've forgotten the password for... I'm aware it's a date within a certain range, so I'm trying to write a brute forcer to try various dates of the year. However, I can't find how to use Python/Java to enter the password for the file. It's protected such that I can't open the xls file unless I have the password, and it has some very important information in it (so important I kept the password in a safe place that I now can't find, lol).
I'm using Fedora. Are there any possible suggestions? Thank you. | 0 | 0 | 0 | 0 | false | 17,344,366 | 0 | 268 | 1 | 0 | 0 | 17,344,335 | If you search, there are a number of applications that you can download that will unlock the workbook. | 1 | 0 | 0 | How to enter password in XLS files with python? | 1 | java,python,excel,passwords,xls | 0 | 2013-06-27T13:20:00.000
In my python/django based web application I want to export some (not all!) data from the app's SQLite database to a new SQLite database file and, in a web request, return that second SQLite file as a downloadable file.
In other words: The user visits some view and, internally, a new SQLite DB file is created, populated with data and then returned.
Now, although I know about the :memory: magic for creating an SQLite DB in memory, I don't know how to return that in-memory database as a downloadable file in the web request. Could you give me some hints on how I could reach that? I would like to avoid writing stuff to the disc during the request. | 0 | 1 | 1.2 | 0 | true | 17,382,483 | 1 | 169 | 1 | 0 | 0 | 17,382,053 | I'm not sure you can get at the contents of a :memory: database to treat it as a file; a quick look through the SQLite documentation suggests that its API doesn't expose the :memory: database to you as a binary string, or a memory-mapped file, or any other way you could access it as a series of bytes. The only way to access a :memory: database is through the SQLite API.
What I would do in your shoes is to set up your server to have a directory mounted with ramfs, then create an SQLite3 database as a "file" in that directory. When you're done populating the database, return that "file", then delete it. This will be the simplest solution by far: you'll avoid having to write anything to disk and you'll gain the same speed benefits as using a :memory: database, but your code will be much easier to write. | 1 | 0 | 0 | Python: Create and return an SQLite DB as a web request result | 2 | python,django,sqlite | 0 | 2013-06-29T16:01:00.000 |
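A hedged Django sketch of that approach: build the file under /dev/shm (RAM-backed on most Linux systems) when available, read it back, delete it, and return the bytes as an attachment. The export table layout and the helper producing the rows are placeholders.

    import os
    import sqlite3
    import tempfile
    from django.http import HttpResponse

    def get_rows_to_export():
        # placeholder; in the real app this would read the subset from the main database
        return [(1, 'alpha'), (2, 'beta')]

    def export_db(request):
        tmpdir = '/dev/shm' if os.path.isdir('/dev/shm') else None
        fd, path = tempfile.mkstemp(suffix='.sqlite', dir=tmpdir)
        os.close(fd)
        try:
            conn = sqlite3.connect(path)
            conn.execute('CREATE TABLE export (id INTEGER, name TEXT)')
            conn.executemany('INSERT INTO export VALUES (?, ?)', get_rows_to_export())
            conn.commit()
            conn.close()
            with open(path, 'rb') as f:
                data = f.read()
        finally:
            os.remove(path)
        response = HttpResponse(data, content_type='application/x-sqlite3')
        response['Content-Disposition'] = 'attachment; filename="export.sqlite"'
        return response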