Question | Q_Score | Users Score | Score | Data Science and Machine Learning | is_accepted | A_Id | Web Development | ViewCount | Available Count | System Administration and DevOps | Networking and APIs | Q_Id | Answer | Database and SQL | GUI and Desktop Applications | Python Basics and Environment | Title | AnswerCount | Tags | Other | CreationDate
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Suppose there is a database table with one column, and it's a PK. To make things more specific, this is a Django project and the database is MySQL.
If I needed an additional column with all unique values, should I create a new UniqueField with unique integers, or just write a hash-like function to convert the existing PKs for each existing row (model instance) into a new unique value? The current PK is a varchar/string.
Creating a new column consumes more memory, but I think writing a new function and converting fields frequently has disadvantages as well. Any ideas? | 0 | 1 | 1.2 | 0 | true | 17,393,525 | 1 | 50 | 1 | 0 | 0 | 17,393,291 | Having a string-valued PK should not be a problem in any modern database system. A PK is automatically indexed, so when you perform a look-up with a condition like table1.pk = 'long-string-key', it won't be a string comparison but an index look-up. So it's OK to have a string-valued PK, regardless of the length of the key values.
In any case, if you need an additional column with all unique values, then I think you should just add a new column. | 1 | 0 | 0 | Database design, adding an extra column versus converting existing column with a function | 1 | python,mysql,django | 0 | 2013-06-30T18:03:00.000 |
I am working on developing a Django application with Cassandra as the back-end database. While Django provides an ORM for SQL databases, I wonder if there is anything similar for Cassandra.
What would be the best approach to load the schema into the Cassandra server and perform CRUD operations?
P.S. I am a complete beginner to Cassandra. | 1 | 3 | 1.2 | 0 | true | 17,403,637 | 1 | 410 | 1 | 0 | 0 | 17,403,346 | There's an external backend for Cassandra, but it has some issues with the authentication middleware, which doesn't handle users correctly in the admin. If you use a non-relational database, you lose a lot of goodies that Django has. You could try using Postgres' NoSQL extension for the parts of your data that you want to store in a NoSQL way, and regular Postgres tables for the rest.
I have a state column in my table which has the following possible values: discharged, in process and None.
Can I fetch all the records in the following order: in process, discharged followed by None? | 1 | 2 | 0.379949 | 0 | false | 17,408,674 | 0 | 1,384 | 1 | 0 | 0 | 17,408,276 | If you've declared that column as an enum type (as you should for cases such as these where the values are drawn from a small, fixed set of strings), then using ORDER BY on that column will order results according to the order in which the values of the enum were declared. So the datatype for that column should be ENUM('in process', 'discharged', 'None'); that will cause ORDER BY to sort in the order you desire. Specifically, each value in an enum is assigned a numerical index and that index is used when comparing enum values for sorting purposes. (The exact way in which you should declare an enum will vary according to which type of backend you're using.) | 1 | 0 | 0 | Sqlalchemy order_by custom ordering? | 1 | python,sql,sqlalchemy | 0 | 2013-07-01T15:33:00.000 |
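A minimal sketch of the enum approach described in the answer, assuming a MySQL backend and a hypothetical Patient model (with MySQL, ORDER BY on an ENUM column sorts by declaration order):

```python
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.dialects.mysql import ENUM
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Patient(Base):
    __tablename__ = "patients"
    id = Column(Integer, primary_key=True)
    # Values declared in the order we want ORDER BY to use.
    state = Column(ENUM("in process", "discharged", "None"))

engine = create_engine("mysql://user:password@localhost/clinic")
Session = sessionmaker(bind=engine)
session = Session()

# MySQL compares ENUM values by their declaration index, so this returns
# "in process" rows first, then "discharged", then "None".
records = session.query(Patient).order_by(Patient.state).all()
```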
I'm trying to work with oursql in python 3.2, and it's really not going so well.
Facts:
I downloaded oursql binary and ran the installer.
I have MySQL 5.1 installed.
I separately downloaded the libmysql dll and placed it in the System32 directory.
I downloaded cython for version 3.1 because there wasn't one for 2.7 or 3.2.
I have python versions 2.7, 3.1, and 3.2 installed.
I rebooted.
I now still get the ImportError: DLL load failed: The specified module could not be found. error when running import oursql from the Python 3.1 shell.
Any ideas? | 0 | 0 | 0 | 0 | false | 17,420,506 | 0 | 196 | 1 | 0 | 0 | 17,420,396 | OK, I moved libmysql.dll to the same directory as python.exe, instead of in the DLL's folder, and it seems like it works. | 1 | 0 | 0 | Error on installing oursql for Python 3.1 | 1 | python,mysql,python-3.x,oursql | 0 | 2013-07-02T08:00:00.000 |
I am using Python 2.7 and MySQL. I am using multi-threading and handing connections out to different threads from a PooledDB pool. I give DB connections to different threads by calling
pool.dedicated_connection(). Now, if a thread takes a connection from the pool and dies for some reason without closing it (i.e. without returning it to the pool), what happens to this connection?
If it lives forever, how do I return it to the pool? | 0 | 2 | 1.2 | 0 | true | 17,423,440 | 0 | 178 | 1 | 0 | 0 | 17,423,384 | No, it does not. You have to tell the server on the other side that the connection is closed, because it can't tell the difference between "going away" and "I haven't sent my next query yet" without an explicit signal from you.
The connection can time out, of course, but it won't be closed or cleaned up without instructions from you. | 1 | 0 | 0 | Does database connection return to pool if a thread holding it dies? | 1 | python,mysql,multithreading,python-2.7 | 0 | 2013-07-02T10:37:00.000 |
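One way to guarantee a connection goes back to the pool even if the worker thread blows up is to treat the hand-back as cleanup in a finally block. A rough sketch assuming DBUtils' PooledDB (connection parameters are placeholders); "closing" a pooled connection returns it to the pool rather than tearing it down:

```python
import threading
import MySQLdb
from DBUtils.PooledDB import PooledDB

pool = PooledDB(creator=MySQLdb, maxconnections=10,
                host="localhost", user="user", passwd="secret", db="mydb")

def worker():
    conn = pool.dedicated_connection()
    try:
        cur = conn.cursor()
        cur.execute("SELECT 1")
        print(cur.fetchone())
    finally:
        # The hand-back runs even if the thread's work raises, so a dying
        # thread no longer strands the connection outside the pool.
        conn.close()

threading.Thread(target=worker).start()
```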
I have an existing MySQL database that I set up in phpMyAdmin; it has FKs that reference columns that are not primary keys. Now I am trying to move the database to Django and am having trouble, because when I try to set up the foreign keys in Django it automatically references the primary key of the table that I am attempting to reference, so the data doesn't match (column A and column B do not contain the same info). Is there a way to tell Django which column to reference? | 0 | 0 | 0 | 0 | false | 17,491,830 | 1 | 62 | 1 | 0 | 0 | 17,491,720 | You can use the to_field attribute of a ForeignKey.
Django should detect this automatically if you use ./manage.py inspectdb, though. | 1 | 0 | 0 | Moving database from PMA to Django | 1 | python,mysql,django,phpmyadmin | 0 | 2013-07-05T14:54:00.000 |
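A hedged illustration of to_field with made-up model names; note that the referenced column has to be declared unique for Django to accept it:

```python
from django.db import models

class Customer(models.Model):
    # Legacy, non-PK column that the existing FK data actually points at.
    account_code = models.CharField(max_length=20, unique=True)

class Order(models.Model):
    customer = models.ForeignKey(Customer, to_field="account_code",
                                 on_delete=models.CASCADE)
```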
What is the easiest way to export the results of a SQL Server query to a CSV file? I have read that the pymssql module is the preferred way, and I'm guessing I'll need csv as well. | 0 | 0 | 0 | 0 | false | 17,495,797 | 0 | 1,521 | 1 | 0 | 0 | 17,495,581 | Do you need to do this programmatically or is this a one-off export?
If the latter, the easiest way by far is to use the SSMS export wizard. In SSMS, select the database, right-click and select Tasks->Export Data. | 1 | 0 | 0 | Export SQL Server Query Results to CSV using pymssql | 1 | python,sql-server,csv,pymssql | 0 | 2013-07-05T19:20:00.000 |
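If it does need to be programmatic, a minimal sketch with pymssql plus the csv module (server details, the query, and the output path are all placeholders):

```python
import csv
import pymssql

conn = pymssql.connect(server="myserver", user="user",
                       password="secret", database="mydb")
cursor = conn.cursor()
cursor.execute("SELECT id, name, created_at FROM dbo.my_table")

with open("results.csv", "wb") as f:          # use "w", newline="" on Python 3
    writer = csv.writer(f)
    writer.writerow([col[0] for col in cursor.description])   # header row
    for row in cursor.fetchall():
        writer.writerow(row)

conn.close()
```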
I'm using openpyxl to edit an Excel file that contains some formulas in certain cells. When I populate the cells from a text file, I'm expecting the formulas to work and give me my desired output. But what I observe is that the formulas get removed and the cells are left blank. | 0 | 1 | 0.099668 | 0 | false | 24,183,661 | 0 | 1,400 | 1 | 0 | 0 | 17,522,521 | I had the same problem when saving the file with openpyxl: formulas removed.
But I noticed that some intermediate formulas were still there.
After some tests, it appears that, in my case, all formulas that display a blank result (nothing) are cleaned out when the save occurs, unlike formulas with a visible output in the cell, which are preserved.
ex :
=IF((SUM(P3:P5))=0;"";(SUM(Q3:Q5))/(SUM(P3:P5))) => can be removed when saving because of the blank result
ex :
=IF((SUM(P3:P5))=0;"?";(SUM(Q3:Q5))/(SUM(P3:P5))) => preserved when saving
for my example I'm using openpyxl-2.0.3 on Windows. Open and save function calls are :
self._book = load_workbook("myfile.xlsx", data_only=False)
self._book.save("myfile.xlsx") | 1 | 0 | 0 | Openpyxl: Formulas getting removed when saving file | 2 | python-2.7,openpyxl | 0 | 2013-07-08T08:55:00.000 |
I have two tables with a common field. I want to find all the
items (user_ids) which are present in the first table but not in the second.
Table1(user_id,...)
Table2(userid,...)
user_id and userid in the first and second tables are the same. | 1 | 1 | 1.2 | 0 | true | 17,542,024 | 0 | 225 | 1 | 0 | 0 | 17,541,225 | session.query(Table1.user_id).outerjoin(Table2).filter(Table2.userid == None) | 1 | 0 | 0 | find missing value between to tables in sqlalchemy | 2 | python,sqlalchemy | 0 | 2013-07-09T06:15:00.000 |
I want to build Python 3.3.2 from scratch on my SLE 11 (OpenSUSE).
During the compilation of Python I got the message that the modules _bz2, _sqlite and _ssl have not been compiled.
I looked for solutions with various search engines. It is often said that you have to install the -dev packages with your package management system, but I have no root access.
I downloaded the source packages of the missing libs, but I have no idea how to tell Python to use these libs. Can somebody help me? | 1 | 0 | 0 | 0 | false | 17,979,292 | 0 | 444 | 1 | 0 | 0 | 17,546,628 | I don't use that distro, but Linux Mint (it's based on Ubuntu).
In my case before the compilation of Python 3.3.2 I've installed the necessary -dev libraries:
$ sudo apt-get install libssl-dev
$ sudo apt-get install libbz2-dev
...
Then I've compiled and installed Python and those imports work fine.
Hope you find it useful
León | 1 | 0 | 0 | How to build python 3.3.2 with _bz2, _sqlite and _ssl from source | 2 | python-3.x,sqlite,ssl,compilation,non-admin | 0 | 2013-07-09T11:05:00.000 |
I've got a fairly simple Python program as outlined below:
It has 2 threads plus the main thread. One of the threads collects some data and puts it on a Queue.
The second thread takes stuff off the queue and logs it. Right now it's just printing out the stuff from the queue, but I'm working on adding it to a local MySQL database.
This is a process that needs to run for a long time (at least a few months).
How should I deal with the database connection? Create it in main, then pass it to the logging thread, or create it directly in the logging thread? And how do I handle unexpected situations with the DB connection (interrupted, MySQL server crashes, etc) in a robust manner? | 1 | 0 | 0 | 0 | false | 17,578,684 | 0 | 84 | 1 | 0 | 0 | 17,578,630 | How should I deal with the database connection? Create it in main,
then pass it to the logging thread, or create it directly in the
logging thread?
I would perhaps configure your logging component with the class that creates the connection and let your logging component request it. This is called dependency injection, and makes life easier in terms of testing e.g. you can mock this out later.
If the logging component created the connections itself, then testing the logging component in a standalone fashion would be difficult. By injecting a component that handles these, you can make a mock that returns dummies upon request, or one that provides connection pooling (and so on).
How you handle database issues robustly depends upon what you want to happen. Firstly, make your database interactions transactional (and consequently atomic). Now, do you want your logger component to bring your system to a halt whilst it retries a write? Do you want it to buffer writes up and retry out-of-band (i.e. on another thread)? Is it mission critical to write this, or can you afford to lose data (e.g. abandon a bad write)? I've not provided any specific answers here, since there are so many options depending upon your requirements. The above details a few possible options. | 1 | 0 | 1 | Architechture of multi-threaded program using database | 1 | python,mysql,database,multithreading | 0 | 2013-07-10T18:49:00.000 |
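A rough sketch of that injection idea, with a hypothetical connection factory handed to the logging thread so a test can substitute a stub (the table and reconnect policy are just illustrative):

```python
import Queue
import threading
import MySQLdb

def make_connection():
    # Owned by main; tests can inject a fake factory instead.
    return MySQLdb.connect(host="localhost", user="user",
                           passwd="secret", db="metrics")

def logger_thread(queue, connection_factory):
    conn = connection_factory()
    while True:
        item = queue.get()
        if item is None:                  # sentinel to shut down cleanly
            break
        try:
            cur = conn.cursor()
            cur.execute("INSERT INTO readings (value) VALUES (%s)", (item,))
            conn.commit()
        except MySQLdb.OperationalError:
            conn = connection_factory()   # naive retry: reconnect and move on

q = Queue.Queue()
threading.Thread(target=logger_thread, args=(q, make_connection)).start()
```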
How do you install pyodbc package on a Linux (RedHat server RHEL) onto a Zope/Plone bundled Python path instead of in the global Python path?
yum install pyodbc and python setup.py install both put pyodbc in the system Python path.
I read articles about putting pyodbc in python2.4/site-packages/
I tried that, but it didn't work for my Plone external method, which still complains about no module named pyodbc. | 1 | 1 | 0.197375 | 0 | false | 17,794,367 | 0 | 265 | 1 | 0 | 0 | 17,662,330 | Add the package to the eggs section in buildout and then re-run buildout.
There might be additional server requirements to install pyodbc. | 1 | 0 | 1 | pyodbc Installation Issue on Plone Python Path | 1 | python,plone,zope,pyodbc | 0 | 2013-07-15T19:32:00.000 |
I have a client-server interface realized using the requests module as the client and Tornado as the server. I use this to query a database, where some data items may not be available. For example, the author in a query might not be there, or the book title.
Is there a recommended way to let my client know what was missing? Like an HTTP 404: Author missing, or something like that? | 0 | 1 | 1.2 | 0 | true | 17,681,053 | 0 | 237 | 1 | 0 | 0 | 17,678,927 | Since HTTP 404 responses can have a response body, I would put the detailed error message in the body itself. You can, for example, send the string Author Not Found in the response body. You could also send the response string in the format that your API already uses, e.g. XML, JSON, etc., so that every response from the server has the same basic shape.
Whether using code 404 with a X Not Found message depends on the structure of your API. If it is a RESTful API, where each URL corresponds to a resource, then 404 is a good choice if the resource itself is the thing missing. If a requested data field is missing, but the requested resource exists, I don't think 404 would be a good choice. | 1 | 0 | 0 | Can I have more semantic meaning in an http 404 error? | 2 | python,http,tornado,http-status-codes | 0 | 2013-07-16T14:13:00.000 |
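For instance, a Tornado handler can pair the 404 status with a descriptive JSON body (a sketch; lookup_author is a hypothetical database call):

```python
import tornado.web

class AuthorHandler(tornado.web.RequestHandler):
    def get(self, author_id):
        author = lookup_author(author_id)   # hypothetical DB lookup
        if author is None:
            # The status code says "not found"; the body says what was not found.
            self.set_status(404)
            self.write({"error": "Author Not Found", "author_id": author_id})
            return
        self.write(author)
```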
my teammate and i wrote a Python script running on the same server where the database is. Now we want to know if the performance changes when we write the same code as a stored procedure in our postgres database. What is the difference or its the same??
Thanks. | 2 | 2 | 1.2 | 0 | true | 17,686,435 | 0 | 507 | 1 | 0 | 0 | 17,682,444 | There can be differences - PostgreSQL stored procedures (functions) uses inprocess execution, so there are no any interprocess communication - so if you process more data, then stored procedures (in same language) can be faster than server side application. But speedup depends on size of processed data. | 1 | 0 | 0 | What is the difference between using a python script running on server and a stored procedure? | 1 | python,database,performance,postgresql,plpgsql | 0 | 2013-07-16T16:48:00.000 |
I have been using the datastore with ndb for a multiplayer app. This appears to be using a lot of reads/writes and will undoubtedly go over quota and cost a substantial amount.
I was thinking of changing all the game data to be stored only in memcache. I understand that data stored here can be lost at any time, but as the data will only be needed for, at most, 10 minutes and as it's just a game, that wouldn't be too bad.
Am I right to move to solely use memcache, or is there a better method, and is memcache essentially 'free' short term data storage? | 3 | 1 | 0.099668 | 0 | false | 17,816,617 | 1 | 341 | 1 | 1 | 0 | 17,702,165 | As a commenter on another answer noted, there are now two memcache offerings: shared and dedicated. Shared is the original service, and is still free. Dedicated is in preview, and presently costs $.12/GB hour.
Dedicated memcache allows you to have a certain amount of space set aside. However, it's important to understand that you can still experience partial or complete flushes at any time with dedicated memcache, due to things like machine reboots. Because of this, it's not a suitable replacement for the datastore.
However, it is true that you can greatly reduce your datastore usage with judicious use of memcache. Using it as a write-through cache, for example, can greatly reduce your datastore reads (albeit not the writes).
Hope this helps. | 1 | 0 | 0 | Datastore vs Memcache for high request rate game | 2 | python,google-app-engine,memcached,google-cloud-datastore,app-engine-ndb | 0 | 2013-07-17T14:13:00.000 |
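A small write-through sketch with the App Engine memcache and ndb APIs (the key scheme and 10-minute expiry are just illustrative):

```python
from google.appengine.api import memcache
from google.appengine.ext import ndb

class GameState(ndb.Model):
    data = ndb.JsonProperty()

def load_game(game_id):
    key = "game:%s" % game_id
    state = memcache.get(key)
    if state is None:                              # cache miss or flush
        entity = GameState.get_by_id(game_id)      # fall back to the datastore
        state = entity.data if entity else None
        if state is not None:
            memcache.set(key, state, time=600)     # keep it for ~10 minutes
    return state

def save_game(game_id, state):
    GameState(id=game_id, data=state).put()        # durable write first
    memcache.set("game:%s" % game_id, state, time=600)
```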
I have some very complex XSD schemas to work with. By complex I mean that each of these XSDs would correspond to about 20 classes/tables in a database, with each table having approximately 40 fields. And I have 18 different XSDs like that to program.
What I'm trying to achieve is: Get a XML file defined by the XSD and save all the data in a PostgreSQL database using SQLAlchemy. Basically I need a CRUD application that will persist a XML file in the database following the model of the XSD schema, and also be able to retrieve an object from the database and create a XML file.
I want to avoid having to manually create the python classes, the sqlalchemy table definitions, the CRUD code. This would be a monumental job, subject to a lot of small mistakes, given the complexity of the XSD files.
I can generate python classes from XSD in many ways like GenerateDS, PyXB, etc... I need to save those objects in the database. I'm open to any suggestions, even if the suggestion is conceptually different that what I'm describing.
Thank you very much | 2 | 1 | 0.099668 | 0 | false | 34,734,878 | 0 | 2,179 | 1 | 0 | 0 | 17,750,340 | Not sure if there is a way directly, but you could indirectly go from XSD to a SQL Server DB, and then import the DB from SQLAlchemy | 1 | 0 | 0 | Generate Python Class and SQLAlchemy code from XSD to store XML on Postgres | 2 | python,xml,postgresql,xsd,sqlalchemy | 0 | 2013-07-19T15:50:00.000 |
I am working with Python, fetching huge amounts of data from an MS SQL Server database and processing it to make graphs.
The real issue is that I wanted to know whether it would be a good idea to repeatedly perform queries that filter the data (using pyodbc for the SQL queries), with clauses like WHERE and SELECT DISTINCT etc. in the queries,
OR
To fetch the data and use the list comprehensions, map and filter functionalities of python to filter the data in my code itself.
If I choose the former, around 1k queries would be performed, significantly reducing the load on my Python code; if I choose the latter, I would query once and then run a bunch of functions over all the records I have fetched, more or less the same number of times (1k).
The thing is, Python is not purely functional (if it were, I wouldn't be asking and would have finished and tested my work hundreds of times by now).
Which one would you people recommend?
For reference I am using Python 2.7. It would be highly appreciated if you could provide sources of information too. Also, Space is not an issue for fetching the whole data.
Thanks | 1 | 0 | 0 | 0 | false | 17,757,423 | 0 | 470 | 1 | 0 | 0 | 17,757,031 | If you have bandwidth to burn, and prefer Python to SQL, go ahead and do one big query and filter in Python.
Otherwise, you're probably better off with multiple queries.
Sorry, no references here. ^_^ | 1 | 0 | 0 | SQL query or Programmatic Filter for Big Data? | 1 | python,sql-server-2008,map,bigdata | 0 | 2013-07-19T23:33:00.000 |
I have a load of data in CSV format. I need to be able to index this data based on a single text field (the primary key), so I'm thinking of entering it into a database. I'm familiar with sqlite from previous projects, so I've decided to use that engine.
After some experimentation, I realized that that storing a hundred million records in one table won't work well: the indexing step slows to a crawl pretty quickly. I could come up with two solutions to this problem:
partition the data into several tables
partition the data into several databases
I went with the second solution (it yields several large files instead of one huge file). My partition method is to look at the first two characters of the primary key: each partition has approximately 2 million records, and there are approximately 50 partitions.
I'm doing this in Python with the sqlite3 module. I keep 50 open database connections and open cursors for the entire duration of the process. For each row, I look at the first two characters of the primary key, fetch the right cursor via dictionary lookup, and perform a single insert statement (via calling execute on the cursor).
Unfortunately, the insert speed still decreases to an unbearable level after a while (approx. 10 million total processed records). What can I do to get around this? Is there a better way to do what I'm doing? | 1 | 5 | 0.462117 | 0 | false | 17,826,461 | 0 | 1,881 | 1 | 0 | 0 | 17,826,391 | Wrap all insert commands into a single transaction.
Use prepared statements.
Create the index only after inserting all the data (i.e., don't declare a primary key). | 1 | 0 | 0 | What's the best way to insert over a hundred million rows into a SQLite database? | 2 | python,database,sqlite | 0 | 2013-07-24T06:07:00.000 |
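Putting those three suggestions together in a rough sqlite3 sketch (the file names and two-column schema are placeholders; executemany reuses one prepared statement under the hood):

```python
import csv
import sqlite3

conn = sqlite3.connect("records.db")
conn.execute("CREATE TABLE IF NOT EXISTS records (key TEXT, payload TEXT)")

with open("data.csv", "rb") as f:          # use "r", newline="" on Python 3
    reader = csv.reader(f)
    with conn:                             # one big transaction for the batch
        conn.executemany("INSERT INTO records VALUES (?, ?)", reader)

# Build the index only after all the rows are in place.
conn.execute("CREATE INDEX IF NOT EXISTS idx_records_key ON records (key)")
conn.commit()
conn.close()
```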
Psycopg is the most popular PostgreSQL adapter for the Python programming language.
The name Psycopg does not make sense to me.
I understand the last pg means Postgres, but what about Psyco? | 21 | 11 | 1 | 0 | false | 17,869,993 | 0 | 2,412 | 1 | 0 | 0 | 17,869,761 | I've always thought of it as psycho-Postgres. | 1 | 0 | 0 | Where does the name `Psycopg` come from? | 1 | python,postgresql | 0 | 2013-07-25T22:19:00.000 |
I am trying to copy the entire /contentstore/ folder on a bucket to a timestamped version. Basically /contentstore/ would be copied to /contentstore/20130729/.
My entire script uses s3s3mirror first to clone my production S3 bucket to a backup. I then want to rename the backup to a timestamped copy so that I can keep multiple versions of the same data.
I have a working version of this using s3cmd, but it seems to take an abnormally long time. The s3s3mirror part between the two buckets is done within minutes, possibly because it is a refresh of an existing folder. But even in the case of a clean s3s3mirror (no existing contentstore on the backup) it takes around 20 minutes.
On the other hand, copying the contentstore to a timestamped copy on the backup bucket takes over an hour and 10 minutes.
Am I doing something incorrectly? Should the copy of data on the same bucket take longer than a full clone between two different buckets?
Any ideas would be appreciated.
P.S: The command I am running is s3cmd --recursive cp backupBucket/contentStore/ backupBucket/20130729/ | 1 | 0 | 0 | 0 | false | 20,389,005 | 1 | 995 | 1 | 0 | 0 | 17,931,579 | Since your source path contains your destination path, you may actually be copying things more than once -- first into the destination path, and then again when that destination path matches your source prefix. This would also explain why copying to a different bucket is faster than within the same bucket.
If you're using s3s3mirror, use the -v option and you'll see exactly what's getting copied. Does it show the same key being copied multiple times? | 1 | 0 | 0 | Copying files in the same Amazon S3 bucket | 1 | python,amazon-web-services,amazon-s3,boto,s3cmd | 0 | 2013-07-29T18:32:00.000 |
I have an HTML file on the network which updates almost every minute with new rows in a table. At any point, the file contains close to 15,000 rows. I want to create a MySQL table with all the data in that table, plus some more columns that I compute from the available data.
The HTML table contains, say, rows from the last 3 days. I want to store all of them in my MySQL table and update the table every hour or so (can this be done via a cron job?).
For connecting to the DB I'm using MySQLdb, which works fine. However, I'm not sure what the best practices are. I can scrape the data using bs4 and connect to the table using MySQLdb, but how should I update the table? What logic should I use so that scraping the page consumes the fewest resources?
I am not fetching any results, just scraping and writing.
Any pointers, please? | 1 | 0 | 0 | 0 | false | 17,940,205 | 1 | 509 | 1 | 0 | 0 | 17,939,824 | My suggestion: instead of updating values row by row, try a bulk insert into a temporary table and then move the data into the actual table based on some timing key. If you have a key column, that will make it easy to read only the recently added rows. | 1 | 0 | 0 | Update a MySQL table from an HTML table with thousands of rows | 2 | python,mysql,beautifulsoup,mysql-python | 0 | 2013-07-30T06:33:00.000 |
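A rough sketch of that staging-table approach with BeautifulSoup and MySQLdb; the URL, the three-column layout, and the unique key on readings are all assumptions:

```python
import urllib2
import MySQLdb
from bs4 import BeautifulSoup

html = urllib2.urlopen("http://intranet/report.html").read()
soup = BeautifulSoup(html)
rows = [[td.get_text(strip=True) for td in tr.find_all("td")]
        for tr in soup.find("table").find_all("tr")[1:]]      # skip header row

conn = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="reports")
cur = conn.cursor()
cur.execute("CREATE TEMPORARY TABLE staging LIKE readings")
cur.executemany("INSERT INTO staging VALUES (%s, %s, %s)", rows)
# Relies on a unique key on readings so re-scraped rows are silently skipped.
cur.execute("INSERT IGNORE INTO readings SELECT * FROM staging")
conn.commit()
conn.close()
```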
Title question says it all. I was trying to figure out how I could go about integrating the database created by sqlite3 and communicate with it through Python from my website.
If any further information is required about the development environment, please let me know. | 5 | 1 | 0.066568 | 0 | false | 18,099,967 | 1 | 1,713 | 1 | 0 | 0 | 17,953,552 | It looks like your needs has changed and you are going into direction where static web site is not sufficient any more.
Firstly, I would pick appropriate Python framework for your needs. if static website was sufficient until recently Django can be perfect for you.
Next I would suggest describing your DB schema for ORM used in chosen framework. I see no point in querying your DB using SQL until you would have a specific reason.
And finally, I would start using static content of your website as templates, replacing places where dynamic data is required. Django internal template language can be easily used that way. If not, Jinja2 also could be good.
My advise is base on many assumptions, as your question is quite open and undefined.
Anyway, I think it would be the best way to start transition period from static to dynamic. | 1 | 0 | 0 | I have a static website built using HTML, CSS and Javascript. How do I integrate this with a SQLite3 database accessed with the Python API? | 3 | python,sqlite,static-site | 0 | 2013-07-30T17:29:00.000 |
I'm currently running into an issue in integrating ElasticSearch and MongoDB. Essentially I need to convert a number of Mongo Documents into searchable documents matching my ElasticSearch query. That part is luckily trivial and taken care of. My problem though is that I need this to be fast. Faster than network time, I would really like to be able to index around 100 docs/second, which simply isn't possible with network calls to Mongo.
I was able to speed this up a lot by using ElasticSearch's bulk indexing, but that's only half of the problem. Is there any way to either bundle reads or cache a collection (a manageable part of a collection, as this collection is larger than I would like to keep in memory) to help speed this up? I was unable to really find any documentation about this, so if you can point me towards relevant documentation I consider that a perfectly acceptable answer.
I would prefer a solution that uses Pymongo, but I would be more than happy to use something that directly talks to MongoDB over requests or something similar. Any thoughts on how to alleviate this? | 1 | 0 | 0 | 0 | false | 24,357,799 | 0 | 195 | 1 | 0 | 0 | 17,955,275 | pymongo is thread safe, so you can run multiple queries in parallel. (I assume that you can somehow partition your document space.)
Feed the results to a local Queue if processing the result needs to happen in a single thread. | 1 | 0 | 1 | Bundling reads or caching collections with Pymongo | 1 | python,performance,mongodb,pymongo | 0 | 2013-07-30T19:04:00.000 |
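A minimal sketch of that pattern: several reader threads each pull one slice of the collection and push documents onto a shared Queue for a single consumer (the collection name and shard field are assumptions):

```python
import Queue
import threading
from pymongo import MongoClient

client = MongoClient("mongodb://dbhost:27017")   # MongoClient is thread-safe
coll = client.mydb.documents
results = Queue.Queue()

def reader(shard):
    # Each thread reads one partition of the document space.
    for doc in coll.find({"shard": shard}):
        results.put(doc)

threads = [threading.Thread(target=reader, args=(s,)) for s in range(4)]
for t in threads:
    t.start()
# A single consumer thread (or the main thread) drains results.get() here.
```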
I'm working on an app that uses the Python sqlite3 module. My database makes use of the implicit ROWID column provided by SQLite. I expected the ROWIDs to be reordered after I delete some rows and vacuum the database, because the official SQLite documentation says:
The VACUUM command may change the ROWIDs of entries in any tables that
do not have an explicit INTEGER PRIMARY KEY.
My pysqlite version is 2.6.0 and the SQLite version is 3.5.9. Can anybody tell me why it is not working? Is there anything I should take care of when using VACUUM?
P.S. I have a standalone SQLite installed whose version is 3.3.6. I tested the VACUUM statement in it, and the ROWIDs got updated. So could the culprit be the version? Or could it be a bug in pysqlite?
Thanks in advance for any ideas or suggestions! | 1 | 0 | 1.2 | 0 | true | 17,988,741 | 0 | 134 | 1 | 0 | 0 | 17,987,732 | This behaviour is version dependent.
If you want a guaranteed reordering, you have to copy all records into a new table yourself.
(This works with both implicit and explicit ROWIDs.) | 1 | 0 | 0 | Why are not ROWIDs updated after VACUUM when using python sqlite3 module? | 1 | python,sqlite,pysqlite | 0 | 2013-08-01T07:30:00.000 |
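A hedged sketch of that copy-into-a-new-table approach with the sqlite3 module (table and column names are placeholders; the rebuilt table gets freshly assigned ROWIDs):

```python
import sqlite3

conn = sqlite3.connect("app.db")
with conn:   # commits the rebuild as one unit
    conn.execute("CREATE TABLE items_new AS "
                 "SELECT col1, col2 FROM items ORDER BY rowid")
    conn.execute("DROP TABLE items")
    conn.execute("ALTER TABLE items_new RENAME TO items")
conn.execute("VACUUM")   # must run outside a transaction
conn.close()
```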
I am stuck with this issue: I had some migration problems, I tried many times, and along the way I deleted migrations, tried again, and even deleted one table in the DB. There is no data in the DB, so I have nothing to fear. But now if I try syncdb it is not creating the table I deleted manually.
Honestly, I get really stuck every time with this kind of migration issue.
What should I do to create the tables again? | 0 | 0 | 0 | 0 | false | 17,996,086 | 0 | 824 | 2 | 0 | 0 | 17,995,963 | are you using south?
If you are, there is a migration history table that South keeps.
Make sure to delete the row mentioning the migration you want to run again.
I am stuck with this issue: I had some migration problems, I tried many times, and along the way I deleted migrations, tried again, and even deleted one table in the DB. There is no data in the DB, so I have nothing to fear. But now if I try syncdb it is not creating the table I deleted manually.
Honestly, I get really stuck every time with this kind of migration issue.
What should I do to create the tables again? | 0 | 0 | 0 | 0 | false | 29,407,625 | 0 | 824 | 2 | 0 | 0 | 17,995,963 | Try renaming the migration file and running python manage.py syncdb. | 1 | 0 | 0 | syncdb is not creating tables again? | 4 | python,django,django-south | 0 | 2013-08-01T13:51:00.000 |
I'm designing a G+ application for a big international brand. The entities I need to create are pretty much in the form of a graph, hence a lot of many-to-many relations (arcs) connecting nodes that can be traversed in both directions. I'm reading all the readable docs online, but I haven't found anything so far specific to ndb design best practices and guidelines. Unfortunately I am under NDA and cannot reveal details of the app, but it can match almost one-to-one the context of scientific conferences with proceedings, authors, papers and topics.
below the list of entities envisioned so far (with context shifted to match the topics mentioned):
organization (e.g. acm)
conference (e.g. acm multimedia)
conference issue (e.g. acm multimedia 13)
conference track (e.g. nosql, machine learning, computer vision, etc.)
author (e.g. myself)
paper (e.g. "designing graph like db for ndb")
as you can see, I can visit and traverse the graph through any direction (or facet, from a frontend point of view):
author with co-authors
author to conference tracks
conference tracks to papers
...
and so on, you fill the list.
I want to make it straight and solid because it will launch with a lot of PR and will need to scale consistently over time, both in content and number of users. I would like to code it from scratch, hence designing my own models and a RESTful API to read/write this data, avoiding non-rel Django and keeping the presentation layer to a minimal template mechanism. I need to check with the company where I work, but we might be able to release part of the code with a decent open source license (ideally, a RESTful service for ndb models).
if anyone could point me towards the right direction, that would be awesome.
thanks!
thomas
[edit: corrected typo related to many-to-many relations] | 1 | 1 | 0.099668 | 0 | false | 18,035,092 | 1 | 478 | 1 | 1 | 0 | 18,017,150 | There are two ways to implement one-to-many relationships in App Engine.
Inside entity A, store a list of keys to entities B1, B2, B3. In the old db API, you'd use a ListProperty of db.Key. In ndb you'd use a KeyProperty with repeated = True.
Inside entity B1, B2, B3, store a KeyProperty to entity A.
If you use 1:
When you have Entity A, you can fetch B1, B2, B3 by id. This can be potentially more consistent than the results of a query.
It could be slightly less expensive since you save 1 read operation over a query (assuming you don't count the cost of fetching entity A). Writing B instances is slightly cheaper since it's one less index to update.
You're limited in the number of B instances you can store by the maximum entity size and number of indexed properties on A. This makes sense for things like conference tracks since there's generally a limited number of tracks that doesn't go into the thousands.
If you need to sort the order of B1, B2, B3 arbitrarily, it's easier to store them in order in a list than to sort them using some sorted indexed property.
If you use 2:
You only need entity A's Key in order to query for B1, B2, B3. You don't actually need to fetch entity A to get the list.
You can have pretty much unlimited # of B entities. | 1 | 0 | 0 | best practice for graph-like entities on appengine ndb | 2 | python,google-app-engine,app-engine-ndb,graph-databases | 0 | 2013-08-02T12:42:00.000 |
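The two layouts in ndb terms, using conference-style names from the question as stand-ins (a sketch, not the poster's actual models):

```python
from google.appengine.ext import ndb

# Option 1: the "A" side keeps a repeated list of keys to its "B" entities.
class ConferenceTrack(ndb.Model):
    name = ndb.StringProperty()
    paper_keys = ndb.KeyProperty(kind="Paper", repeated=True)

# Option 2: each "B" entity points back at its "A" entity instead.
class Paper(ndb.Model):
    title = ndb.StringProperty()
    track_key = ndb.KeyProperty(kind="ConferenceTrack")

# Option 1 fetch: get by key, consistent, no query needed.
# papers = ndb.get_multi(track.paper_keys)
# Option 2 fetch: query on the back-reference, needs only the track's key.
# papers = Paper.query(Paper.track_key == track.key).fetch()
```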
I have read somewhere that you can store Python objects (more specifically dictionaries) as binaries in MongoDB by using BSON. However, right now I cannot find any documentation related to this.
Would anyone know how exactly this can be done? | 18 | 5 | 0.321513 | 0 | false | 18,089,722 | 0 | 23,835 | 1 | 0 | 0 | 18,089,598 | Assuming you are not specifically interested in mongoDB, you are probably not looking for BSON. BSON is just a different serialization format compared to JSON, designed for more speed and space efficiency. On the other hand, pickle does more of a direct encoding of python objects.
However, do your speed tests before you adopt pickle to ensure it is better for your use case. | 1 | 0 | 1 | Is there a way to store python objects directly in mongoDB without serializing them | 3 | python,mongodb,pymongo,bson | 0 | 2013-08-06T20:14:00.000 |
I have several S3 buckets containing a total of 40 TB of data across 761 million objects. I undertook a project to copy these objects to EBS storage. To my knowledge, all buckets were created in us-east-1. I know for certain that all of the EC2 instances used for the export to EBS were within us-east-1.
The problem is that the AWS bill for last month included a pretty hefty charge for inter-regional data transfer. I'd like to know how this is possible?
The transfer used a pretty simple Python script with Boto to connect to S3 and download the contents of each object. I suspect that the fact that the bucket names were composed of uppercase letters might have been a contributing factor (I had to specify OrdinaryCallingFormat()), but I don't know this for sure. | 0 | 0 | 0 | 0 | false | 18,366,790 | 1 | 878 | 1 | 0 | 1 | 18,113,426 | The problem ended up being an internal billing error at AWS and was not related to either S3 or Boto. | 1 | 0 | 0 | Boto randomly connecting to different regions for S3 transfers | 2 | python,amazon-web-services,amazon-s3,boto | 0 | 2013-08-07T20:42:00.000 |
I am trying to run the following db2 command through the python pyodbc module
IBM DB2 Command : "DB2 export to C:\file.ixf of ixf select * from emp_hc"
I am successfully connected to the DSN using the pyodbc module in Python, and it works fine for SELECT statements,
but when I try to execute the following command from Python IDLE 3.3.2:
cursor.execute(" export to ? of ixf select * from emp_hc",r"C:\file.ixf")
pyodbc.ProgrammingError: ('42601', '[42601] [IBM][CLI Driver][DB2/LINUXX8664] SQL0104N An unexpected token "db2 export to ? of" was found following "BEGIN-OF-STATEMENT". Expected tokens may include: "". SQLSTATE=42601\r\n (-104) (SQLExecDirectW)')
or
cursor.execute(" export to C:\file.ixf of ixf select * from emp_hc")
Traceback (most recent call last):
File "", line 1, in
cursor.execute("export to C:\myfile.ixf of ixf select * from emp_hc")
pyodbc.ProgrammingError: ('42601', '[42601] [IBM][CLI Driver][DB2/LINUXX8664] SQL0007N The character "\" following "export to C:" is not valid. SQLSTATE=42601\r\n (-7) (SQLExecDirectW)')
Am I doing something wrong? Any help will be greatly appreciated. | 0 | 1 | 0.099668 | 0 | false | 18,135,069 | 0 | 1,372 | 1 | 0 | 0 | 18,134,390 | db2 export is a command run in the shell, not through SQL via ODBC.
It's possible to write database query results to a file with python and pyodbc, but db2 export will almost certainly be faster and effortlessly handle file formatting if you need it for import. | 1 | 0 | 0 | sql import export command error using pyodbc module python | 2 | python,sql,db2,pyodbc | 0 | 2013-08-08T19:24:00.000 |
When connecting to a MySQL database in Django, I get the error shown in the title.
I'm sure the MySQL server is running.
/var/run/mysqld/mysqld.sock doesn't exist.
When I run $ find / -name *.sock -type s, I only get /tmp/mysql.sock and some other irrelevant output.
I added socket = /tmp/mysql.sock to /etc/my.cnf. I then restarted MySQL, exited the Django shell, and connected to the MySQL database. I still got the same error.
I searched a lot, but I still don't know what to do.
Any help is greatly appreciated. Thanks in advance.
Well, I just tried some ways. And it works.
I did as follows.
Add socket = /tmp/mysql.sock. Restart the MySQL server.
ln -s /tmp/mysql.sock /var/lib/mysqld/mysqld.sock
I ran into another problem today: I can't log in to MySQL.
I'm a newbie to MySQL, so I guess the MySQL server and client use the same socket to communicate.
I added socket = /var/mysqld/mysqld.sock to the [mysqld] and [client] blocks in my.cnf and it works. | 26 | 0 | 0 | 0 | false | 66,405,102 | 1 | 101,165 | 3 | 0 | 0 | 18,150,858 | I faced this problem when connecting MySQL with Django while using Docker.
Try 'PORT':'0.0.0.0'.
Do not use 'PORT': 'db'. This will not work if you tried to run your app outside Docker. | 1 | 0 | 0 | OperationalError: (2002, "Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)") | 5 | python,mysql,django,mysql.sock | 0 | 2013-08-09T15:55:00.000 |
When connecting to a MySQL database in Django, I get the error shown in the title.
I'm sure the MySQL server is running.
/var/run/mysqld/mysqld.sock doesn't exist.
When I run $ find / -name *.sock -type s, I only get /tmp/mysql.sock and some other irrelevant output.
I added socket = /tmp/mysql.sock to /etc/my.cnf. I then restarted MySQL, exited the Django shell, and connected to the MySQL database. I still got the same error.
I searched a lot, but I still don't know what to do.
Any help is greatly appreciated. Thanks in advance.
Well, I just tried some ways. And it works.
I did as follows.
Add socket = /tmp/mysql.sock. Restart the MySQL server.
ln -s /tmp/mysql.sock /var/lib/mysqld/mysqld.sock
I ran into another problem today: I can't log in to MySQL.
I'm a newbie to MySQL, so I guess the MySQL server and client use the same socket to communicate.
I added socket = /var/mysqld/mysqld.sock to the [mysqld] and [client] blocks in my.cnf and it works. | 26 | 0 | 0 | 0 | false | 56,762,083 | 1 | 101,165 | 3 | 0 | 0 | 18,150,858 | In Flask, you can use something like this:
app = Flask(__name__)
app.config["MYSQL_HOST"] = "127.0.0.1"
app.config["MYSQL_USER"] = "root"... | 1 | 0 | 0 | OperationalError: (2002, "Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)") | 5 | python,mysql,django,mysql.sock | 0 | 2013-08-09T15:55:00.000 |
When connecting to a MySQL database in Django, I get the error shown in the title.
I'm sure the MySQL server is running.
/var/run/mysqld/mysqld.sock doesn't exist.
When I run $ find / -name *.sock -type s, I only get /tmp/mysql.sock and some other irrelevant output.
I added socket = /tmp/mysql.sock to /etc/my.cnf. I then restarted MySQL, exited the Django shell, and connected to the MySQL database. I still got the same error.
I searched a lot, but I still don't know what to do.
Any help is greatly appreciated. Thanks in advance.
Well, I just tried some ways. And it works.
I did as follows.
Add socket = /tmp/mysql.sock. Restart the MySQL server.
ln -s /tmp/mysql.sock /var/lib/mysqld/mysqld.sock
I ran into another problem today: I can't log in to MySQL.
I'm a newbie to MySQL, so I guess the MySQL server and client use the same socket to communicate.
I added socket = /var/mysqld/mysqld.sock to the [mysqld] and [client] blocks in my.cnf and it works. | 26 | 0 | 0 | 0 | false | 72,389,079 | 1 | 101,165 | 3 | 0 | 0 | 18,150,858 | You need to change your HOST from 'localhost' to '127.0.0.1' and check your Django app :)
I used MySQLdb to connect to a database on my localhost.
It works, but if I add data to a table in the database while the program is running, it shows that the data has been added; yet when I check the table on localhost, it hasn't been updated. | 0 | 0 | 1.2 | 0 | true | 18,245,522 | 0 | 30 | 1 | 0 | 0 | 18,245,510 | If your table uses the InnoDB engine, you should call connection.commit() after your cursor.execute() calls.
I'm trying to understand which of the following is a better option:
Data calculation using Python from the output of a MySQL query.
Perform the calculations in the query itself.
For example, the query returns 20 rows with 10 columns.
In Python, I compute the difference or division of some of the columns.
Is it a better thing to do this in the query or in Python ? | 2 | 1 | 0.099668 | 0 | false | 18,270,751 | 0 | 2,805 | 2 | 0 | 0 | 18,270,585 | It is probably a matter of taste but...
... to give you the exact opposite answer to the one by Alma Do Mundo: for (not so) simple calculations made in the SELECT ... clause, I generally push toward using the DB "as a calculator".
Calculations (in the SELECT ... clause) are performed as the last step while executing the query. Only the relevant data are used at this point. All the "big job" has already been done (processing JOIN, where clauses, aggregates, sort).
At this point, the extra load of performing some arithmetic operations on the data is really small. And that will reduce the network traffic between your application and the DB server.
It is probably a matter of taste though... | 1 | 0 | 0 | Data Calculations MySQL vs Python | 2 | python,mysql,query-performance,sql-tuning,query-tuning | 0 | 2013-08-16T09:53:00.000 |
I'm trying to understand which of the following is a better option:
Data calculation using Python from the output of a MySQL query.
Perform the calculations in the query itself.
For example, the query returns 20 rows with 10 columns.
In Python, I compute the difference or division of some of the columns.
Is it a better thing to do this in the query or in Python ? | 2 | 1 | 0.099668 | 0 | false | 18,271,329 | 0 | 2,805 | 2 | 0 | 0 | 18,270,585 | If you are doing basic arithmetic operation on calculations in a row, then do it in SQL. This gives you the option of encapsulating the results in a view or stored procedure. In many databases, it also gives the possibility of parallel execution of the statements (although performance is not an issue with so few rows of data).
If you are doing operations between rows in MySQL (such as getting the max for the column), then the balance is more even. Most databases support simple functions to these calculations, but MySQL does not. The added complexity to the query gives some weight to doing these calculations on the client-side.
In my opinion, the most important consideration is maintainability of the code. By using a database, you are necessary incorporating business rules in the database itself (what entities are related to which other entities, for instance). A major problem with maintaining code is having business logic spread through various systems. I much prefer to have an approach where such logic is as condensed as possible, creating very clear APIs between different layers.
For such an approach, "read" access into the database would be through views. The logic that you are talking about would go into the views and be available to any user of the database -- ensuring consistency across different functions using the database. "write" access would be through stored procedures, ensuring that business rules are checked consistently and that operations are logged appropriately. | 1 | 0 | 0 | Data Calculations MySQL vs Python | 2 | python,mysql,query-performance,sql-tuning,query-tuning | 0 | 2013-08-16T09:53:00.000 |
I'm trying to use Python to manipulate some data in a MySQL DB.
The DB is on a remote PC, and I will use another PC with Python to connect to it.
When I searched for how to install the MySQLdb module for Python, everything I found said that MySQL needs to be installed on the local PC.
Is it right? Or I don't need to install MySQL on the local PC? | 0 | 1 | 1.2 | 0 | true | 18,288,628 | 0 | 323 | 1 | 0 | 0 | 18,288,616 | You just need it if you want to compile the Python MySQL bindings from source. If you already have the binary version of the python library then the answer is no, you don't need it. | 1 | 0 | 1 | Do I need MySQL installed on my local PC to use MySQLdb for Python to connect MySQL server remotely? | 1 | python,mysql | 0 | 2013-08-17T12:07:00.000 |
I'm using the python packages xlrd and xlwt to read and write from excel spreadsheets using python. I can't figure out how to write the code to solve my problem though.
So my data consists of a column of state abbreviations and a column of numbers, 1 through 7. There are about 200-300 entries per state, and I want to figure out how many ones, twos, threes, and so on exist for each state. I'm struggling with what method I'd use to figure this out.
Normally I would post the code I already have, but I don't even know where to begin. | 0 | 0 | 0 | 0 | false | 18,413,675 | 0 | 247 | 1 | 0 | 0 | 18,413,606 | Prepare a dictionary to store the results.
Get the number of lines with data using xlrd, then iterate over each of them.
For each state code, if it's not in the dict yet, create its entry as a dict too.
Then check whether the number read from the second column already exists as a key under that state in your results dict.
4.1 If it does not, add the number found in the second column as a key to the state's dict, with a value of one.
4.2 If it does, just increment the value for that key (+1).
Once it has finished looping, your result dict will have the count for each individual entry on each individual state. | 1 | 0 | 0 | Python Programming approach - data manipulation in excel | 2 | python,excel,xlrd,xlwt | 1 | 2013-08-24T00:09:00.000 |
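A small xlrd sketch of those steps (sheet index and column positions are assumptions):

```python
import xlrd
from collections import defaultdict

book = xlrd.open_workbook("states.xls")
sheet = book.sheet_by_index(0)

# counts["NY"][3] -> how many 3s were recorded for New York
counts = defaultdict(lambda: defaultdict(int))

for row in range(1, sheet.nrows):            # skip the header row
    state = sheet.cell_value(row, 0)         # column A: state abbreviation
    value = int(sheet.cell_value(row, 1))    # column B: number 1 through 7
    counts[state][value] += 1

for state, by_value in sorted(counts.items()):
    print("%s %s" % (state, dict(by_value)))
```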
I've been working with Python MySQLdb. With InnoDB tables autocommit is turned off by default, and that was what I needed. But since I'm now working with MyISAM tables, the docs for MySQL say
MyISAM tables effectively always operate in autocommit = 1 mode
Since I'm running up to a few hundred queries a second, does committing with every single query slow down the performance of my script? I used to commit once every 1000 queries before, but now I can't do that with MyISAM. If it slows things down, what can I try? | 1 | 0 | 0 | 0 | false | 18,463,239 | 0 | 440 | 1 | 0 | 0 | 18,462,528 | MyISAM has no transactions, so you can't turn off "autocommit" when using MyISAM.
Your runtime change may also be caused by the fact that you moved from InnoDB to MyISAM.
The best approach for DB runtime issues in general is benchmarking, benchmarking and benchmarking. | 1 | 0 | 0 | does autocommit slow down performance in python? | 1 | python,mysql,commit | 0 | 2013-08-27T10:06:00.000 |
I have an application which receives data over a TCP connection and writes it to a postgres database. I then use a django web front end to provide a gui to this data. Since django provides useful database access methods my TCP receiver also uses the django models to write to the database.
My issue is that I need to use a forked TCP server. Forking results in both child and parent processes sharing handles. I've read that Django does not support forking and indeed the shared database connections are causing problems e.g. these exceptions:
DatabaseError: SSL error: decryption failed or bad record mac
InterfaceError: connection already closed
What is the best solution to make the forked TCP server work?
Can I ensure the forked process uses its own database connection?
Should I be looking at other modules for writing to the postgres database? | 2 | 0 | 0 | 0 | false | 18,496,589 | 1 | 941 | 2 | 1 | 0 | 18,492,467 | The libpq driver, which is what the psycopg2 driver usually used by django is built on, does not support forking an active connection. I'm not sure if there might be another driver does not, but I would assume not - the protocol does not support multiplexing multiple sessions on the same connection.
The proper solution to your problem is to make sure each forked processes uses its own database connection. The easiest way is usually to wait to open the connection until after the fork. | 1 | 0 | 0 | Forking Django DB connections | 2 | python,django,postgresql | 0 | 2013-08-28T15:42:00.000 |
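One common way to follow that advice in a forking server is to drop the inherited handle in the child immediately after the fork, so Django lazily opens a fresh connection on first use (a sketch; serve() stands in for the per-client work):

```python
import os
from django.db import connection

def handle_client(sock):
    pid = os.fork()
    if pid == 0:
        # Child: discard the connection object inherited from the parent.
        # Django will transparently open a new one on the next ORM query.
        connection.close()
        serve(sock)        # hypothetical per-client work that uses the models
        os._exit(0)
    sock.close()           # parent keeps listening; the child owns the socket now
```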
I have an application which receives data over a TCP connection and writes it to a postgres database. I then use a django web front end to provide a gui to this data. Since django provides useful database access methods my TCP receiver also uses the django models to write to the database.
My issue is that I need to use a forked TCP server. Forking results in both child and parent processes sharing handles. I've read that Django does not support forking and indeed the shared database connections are causing problems e.g. these exceptions:
DatabaseError: SSL error: decryption failed or bad record mac
InterfaceError: connection already closed
What is the best solution to make the forked TCP server work?
Can I ensure the forked process uses its own database connection?
Should I be looking at other modules for writing to the postgres database? | 2 | 1 | 0.099668 | 0 | false | 18,531,322 | 1 | 941 | 2 | 1 | 0 | 18,492,467 | So one solution I found is to create a new thread to spawn from. Django opens a new connection per thread so spawning from a new thread ensures you pass a new connection to the new process.
In retrospect I wish I'd used psycopg2 directly from the beginning rather than Django. Django is great for the web front end but not so great for a standalone app where all I'm using it for is the model layer. Using psycopg2 would have given be greater control over when to close and open connections. Not just because of the forking issue but also I found Django doesn't keep persistent postgres connections - something we should have better control of in 1.6 when released and should for my specific app give a huge performance gain. Also, in this type of application I found Django intentionally leaks - something that can be fixed with DEBUG set to False. Then again, I've written the app now :) | 1 | 0 | 0 | Forking Django DB connections | 2 | python,django,postgresql | 0 | 2013-08-28T15:42:00.000 |
The company I work for is starting development of a Django business application that will use MySQL as the database engine. I'm looking for a way to keep from having database credentials stored in a plain-text config file.
I'm coming from a Windows/IIS background where a vhost can impersonate an existing Windows/AD user, and then use those credentials to authenticate with MS SQL Server.
As an example: If the Django application is running with apache2+mod_python on an Ubuntu server, would it be sane to add a "www-data" user to MySQL and then let MySQL verify the credentials using its PAM module?
Hopefully some of that makes sense. Thanks in advance! | 3 | 1 | 0.197375 | 0 | false | 18,496,083 | 1 | 691 | 1 | 0 | 0 | 18,495,773 | MySQL controls access to tables from its own list of users, so it's better to create MySQL users with permissions. You might want to create roles instead of users so you don't have as many to manage: an Admin, a read/write role, a read-only role, etc.
A Django application always runs as the web server user. You could change that to "impersonate" an Ubuntu user, but what if that user is deleted? Leave it as "www-data" and manage the database role that way. | 1 | 0 | 0 | Can a Django application authenticate with MySQL using its linux user? | 1 | python,mysql,django | 0 | 2013-08-28T18:39:00.000 |
I have an Excel file whose extension is .xls but whose actual content type is tab-separated text.
When I try to open the file in MS Excel, it tells me that the extension is fake, so I have to confirm that I trust the file before I can read it.
But my real problem is that when I try to read my file with the xlrd library, it gives me this message:
xlrd.biffh.XLRDError: Unsupported format, or corrupt file: Expected BOF record;
To resolve this problem, I go to Save As in MS Excel and change the type manually to .xls.
But my boss insists that I have to do this in code. I have 3 choices: a shell script under Linux, a .bat file under Windows, or Python.
So, how can I change the type of the excel file from Tab space separated Text to xls file by Shell script (command line), .bat or Python? | 0 | 1 | 0.099668 | 0 | false | 18,574,653 | 0 | 399 | 1 | 0 | 0 | 18,570,143 | mv file.{xls,csv}
It's a CSV file; stop treating it as an Excel file and things will work a lot better. :) There are nice CSV manipulation tools available in most languages. Do you really need the Excel library? | 1 | 0 | 0 | How to change automatically the type of the excel file from Tab space separated Text to xls file? | 2 | python,linux,excel,shell,xlrd | 0 | 2013-09-02T09:46:00.000 |
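If a genuine .xls really is required, one Python route is to read the tab-separated file with csv and rewrite it with xlwt (a sketch; file names are placeholders):

```python
import csv
import xlwt

book = xlwt.Workbook()
sheet = book.add_sheet("Sheet1")

with open("file.xls") as f:                       # really tab-separated text
    reader = csv.reader(f, delimiter="\t")
    for r, row in enumerate(reader):
        for c, value in enumerate(row):
            sheet.write(r, c, value)

book.save("file_real.xls")                        # a genuine BIFF .xls file
```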
I need to do the following
Delete many entities from a database; those entities also have a file associated with them saved in the file system, and those files are also accessed by the web server (images!).
The problem: file deletion might fail. I have all the files in a folder for the main entity (it's actually a 1-N relation, with each of the N being a file owner). If I try to delete a file while the web server is accessing it, I will get an exception and the process will stop halfway: some images deleted and some not, leaving the system inconsistent.
Is there a way to do something similar to a transaction but in the file system (either delete all files or don't delete any)? Or perhaps another approach? (The worst plan is to save the files in the database, but that is bad.) | 0 | 2 | 1.2 | 0 | true | 18,581,616 | 0 | 708 | 1 | 0 | 0 | 18,581,117 | There is no way to transactionally delete multiple files on normal filesystems (you might be able to find esoteric filesystems where it is possible, but even if so I doubt that helps you. Apparently your current filesystem doesn't even let you delete a file that's being read, so presumably you're stuck with what you have!).
Perhaps you could save in the database not the file contents, but a list of which filenames in the filesystem "really exist". Refer to that list for anything that requires consistency. If file deletion fails, you can mark the file as "not really existing" and requiring future attempts at deletion, then retry whenever seems sensible (maybe an occasional maintenance job, maybe a helper process retrying each failure with exponential backoff to a limit).
For this to work either (a) your webserver must refer to the database before serving the file, or else (b) it must be OK for there to be a indefinite period after the file fails to delete, during which it may nevertheless be served. And of course there is also the "natural race condition" that a file that begins to be served before the deletion attempt, will complete its download even after the transaction is complete.
[Edit: Ah, it just occurred to me that "i have all the files in a folder for the main entity" might actually be really helpful. In your transaction, rename the directory. That atomically "removes" all the files, from their old names at least, and it will fail (on filesystems that forbid that sort of thing) if any of the files is in use. If the rename succeeds, and nobody else knows the new name, then they won't be accessing the files and you should be able to delete them all without trouble. I think. Of course this doesn't work if you encounter another reason for failing to delete the file, because then you might be able to rename the folder but unable to delete the file.] | 1 | 0 | 0 | Delete files atomically/transactionally in python | 1 | python,django,transactions | 0 | 2013-09-02T21:41:00.000 |
I have a Mac running OS X 10.6.8, which comes pre-installed with SQLite3 v3.6. I installed v3.8 using homebrew. But when I type "sqlite3" in my terminal it continues to run the old pre-installed version. Any help? Trying to learn SQL as I'm building my first web app.
Not sure if PATH variable has anything to do with it, but running echo $PATH results in the following: /usr/local/bin:/Library/Frameworks/Python.framework/Versions/2.7/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin
And the NEW version of SQLite3 is in the following directory: /usr/local/Cellar/sqlite
I should add that I also downloaded the binary executable to my desktop, and that works if I click from my desktop, but doesn't work from the terminal.
Any help would be greatly appreciated? | 1 | 0 | 0 | 0 | false | 18,629,528 | 0 | 1,449 | 1 | 1 | 0 | 18,626,114 | To figure out exactly which sqlite3 binaries your system can find, type which -a sqlite3. This will list the binaries in the order they are found according to your PATH variable, which is also the order the system would use when figuring out which one to run if you have multiple versions.
Homebrew should normally link binaries into your /usr/local/bin, but because sqlite3 is already provided by Mac OS, it is only installed into /usr/local/Cellar/sqlite3 and not linked into /usr/local/bin. As the Cellar path is not in your PATH variable, the system doesn't know that the binaries exist to run.
Long story short, you can just run the Homebrew binary directly with /usr/local/Cellar/sqlite/3.8.0/bin/sqlite3. | 1 | 0 | 0 | Running upgraded version of SQLite (3.8) on Mac when Terminal still defaults to old version 3.6 | 2 | python,linux,macos,sqlite | 0 | 2013-09-05T00:54:00.000 |
I want to use BDB as a time-series data store, and planning to use the microseconds since epoch as the key values. I am using BTREE as the data store type.
However, when I try to store integer keys, bsddb3 gives an error saying TypeError: Integer keys only allowed for Recno and Queue DB's.
What is the best workaround? I can store them as strings, but that probably will make it unnecessarily slower.
Given BDB itself can handle any kind of data, why is there a restriction? can I sorta hack the bsddb3 implementation? has anyone used anyother methods? | 0 | -1 | 1.2 | 0 | true | 18,793,657 | 0 | 689 | 1 | 0 | 0 | 18,664,940 | Well, there's no workaround. But you can use two approaches
Store the integers as strings using str or repr. If the ints are big, you can even use string formatting.
Use the cPickle/pickle module to store and retrieve data. This is a good way if you have data types other than basic types. For basic ints and floats this actually is slower and takes more space than just storing strings | 1 | 0 | 0 | Use integer keys in Berkeley DB with python (using bsddb3) | 2 | python,berkeley-db,bsddb | 0 | 2013-09-06T19:11:00.000 |
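One detail worth adding to the string approach above: a BTREE database sorts keys as byte strings, so a fixed-width encoding keeps microsecond timestamps in chronological order. A small sketch (the database file name is an assumption):
import struct
import bsddb3

db = bsddb3.btopen('timeseries.db', 'c')

def key_for(microseconds_since_epoch):
    # Big-endian unsigned 64-bit packing: byte order equals numeric order,
    # so cursor scans come back in time order.
    return struct.pack('>Q', microseconds_since_epoch)

db[key_for(1378500000000000)] = 'sample value'
db.close()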
I have some code that I am working on that scrapes some data from a website, and then extracts certain key information from that website and stores it in an object. I create a couple hundred of these objects each day, each from unique url's. This is working quite well, however, I'm inexperienced in what options are available to me in Python for persistence and what would be best suited for my needs.
Currently I am using pickle. To do so, I am keeping all of these webpage objects and appending them in a list as new ones are created, then saving that list to a pickle (then reloading it whenever the list is to be updated). However, as i'm in the order of some GB of data, i'm finding pickle to be somewhat slow. It's not unworkable, but I'm wondering if there is a more well suited alternative. I don't really want to break apart the structure of my objects and store it in a sql type database, as its important for me to keep the methods and the data as a single object.
Shelve is one option I've been looking into, as my impression is then that I wouldn't have to unpickle and pickle all the old entries (just the most recent day that needs to be updated), but am unsure if this is how shelve works, and how fast it is.
So to avoid rambling on, my question is: what is the preferred persistence method for storing a large number of objects (all of the same type), to keep read/write speed up as the collection grows? | 0 | 0 | 0 | 0 | false | 18,674,706 | 1 | 95 | 1 | 0 | 0 | 18,674,630 | Martijn's suggestion could be one of the alternatives.
You may consider storing the pickled objects directly in an SQLite database, which you can still manage from the Python standard library.
Use a StringIO object to convert between the database column and python object.
You didn't mention the size of each object you are pickling now. I guess it should stay well within sqlite's limit. | 1 | 0 | 0 | Persistence of a large number of objects | 1 | python,persistence | 0 | 2013-09-07T15:02:00.000 |
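A rough sketch of the pickle-into-SQLite idea from the answer above (table and file names are invented); sqlite3.Binary makes the pickled bytes go into a BLOB column:
import sqlite3
import cPickle as pickle

conn = sqlite3.connect('pages.db')
conn.execute('CREATE TABLE IF NOT EXISTS pages (url TEXT PRIMARY KEY, obj BLOB)')

def save(url, page_object):
    blob = sqlite3.Binary(pickle.dumps(page_object, pickle.HIGHEST_PROTOCOL))
    conn.execute('INSERT OR REPLACE INTO pages (url, obj) VALUES (?, ?)', (url, blob))
    conn.commit()

def load(url):
    row = conn.execute('SELECT obj FROM pages WHERE url = ?', (url,)).fetchone()
    return pickle.loads(str(row[0])) if row else None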
For a music project I want to find what which groups of artists users listens to. I have extracted three columns from the database: the ID of the artist, the ID of the user, and the percentage of all the users stream that is connected to that artist.
E.g. Half of the plays from user 15, is of the artist 12.
12 | 15 | 0.5
What I hope to find is a methodology to group clusters of groups together, so e.g. find out that users who tends to listen to artist 12 also listens to 65, 74, and 34.
I wonder what kind of methodologies that can be used for this grouping, and if there are any good sources for this approach (Python or Ruby would be great). | 0 | 0 | 1.2 | 0 | true | 18,712,558 | 0 | 261 | 1 | 0 | 0 | 18,705,223 | Sounds like a classic matrix factorization task to me.
With a weighted matrix, instead of a binary one. So some fast algorithms may not be applicable, because they support binary matrices only.
Don't ask for source on Stackoverflow: asking for off-site resources (tools, libraries, ...) is off-topic. | 1 | 0 | 0 | Data Mining: grouping based on two text values (IDs) and one numeric (ratio) | 2 | python,ruby,data-mining,data-analysis | 0 | 2013-09-09T19:09:00.000 |
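Purely as an illustration of the matrix factorization idea above (not an endorsement of a particular library), a hedged sketch using scikit-learn's NMF on a users x artists matrix built from the (artist_id, user_id, ratio) rows; the toy data and the number of components are made up:
import numpy as np
from sklearn.decomposition import NMF

# Toy rows in the question's format: (artist_id, user_id, ratio)
rows = [(12, 15, 0.5), (65, 15, 0.3), (12, 7, 0.4), (74, 7, 0.6)]
artists = sorted({r[0] for r in rows})
users = sorted({r[1] for r in rows})
a_idx = {a: i for i, a in enumerate(artists)}
u_idx = {u: i for i, u in enumerate(users)}

ratings = np.zeros((len(users), len(artists)))
for artist, user, ratio in rows:
    ratings[u_idx[user], a_idx[artist]] = ratio

model = NMF(n_components=2)                  # number of latent "taste" groups is a guess
user_factors = model.fit_transform(ratings)  # users x components
artist_factors = model.components_           # components x artists

# Artists with large weights in the same component tend to be listened to together.
print([artists[i] for i in np.argsort(artist_factors[0])[::-1]])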
I'm a beginner with OpenERP 7. I just want to know the details of how to generate a report in OpenERP 7 in xls format.
The formats supported by OpenERP report types are: pdf, odt, raw, sxw, etc.
Is there any direct feature that is available in OpenERP 7 regarding printing the report in EXCEL format(XLS) | 1 | 0 | 0 | 0 | false | 18,716,823 | 1 | 2,902 | 1 | 0 | 0 | 18,716,623 | In Python, libraries are available to export data to PDF and Excel.
For Excel you can use:
1) xlwt
2) ElementTree
For PDF generation:
1) pyPdf
2) ReportLab
are available | 1 | 0 | 0 | How to print report in EXCEL format (XLS) | 3 | python,openerp | 0 | 2013-09-10T10:34:00.000 |
The context for this question is:
A Google App Engine backend for a two-person multiplayer turn-based card game
The game revolves around different combinations of cards giving rise to different scores in the game
Obviously, one would store the state of a game in the GAE datastore, but I'm not sure on the approach for the design of the game logic itself. It seems I might have two choices:
Store entries in the datastore with a key that is a sorted list of the valid combinations of cards that can be played. These will then map to the score values. When a player tries to play a combination of cards, the server-side Python will sort the combination appropriately and look up the key. If it succeeds, it can do the necessary updates for the score; if it fails, then the combination wasn't valid.
Store the valid combinations as a python dictionary written into the server-side code and perform the same lookups as above to test the validity/get the score but without a trip to the datastore.
From a cost point of view (datastore lookups aren't free), option 2 seems like it would be better. But then there is the performance of the instance itself - will the startup time, processing time, memory usage start to tip me into greater expense?
There's also the code maintenance issue of constructing that Python dictionary, but I can bash together some scripts to help me write the code for that on the infrequent occasions that the logic changes. I think there will be on the order of 1000 card combinations (that can produce a score) of between 2 and 6 cards, if that helps anyone who wants to quantify the problem.
I'm starting out with this design, and the summary of the above is whether it is sensible to store the static logic of this kind of game in the datastore, or simply keep it as part of the CPU bound logic? What are the pros and cons of both approaches? | 0 | 1 | 0.099668 | 0 | false | 18,807,184 | 1 | 209 | 1 | 0 | 0 | 18,807,022 | If the logic is fixed, keep it in your code. Maybe you can procedurally generate the dicts on startup. If there is a dynamic component to the logic (something you want to update frequently), a data store might be a better bet, but it sounds like that's not applicable here. Unless the number of combinations runs over the millions, and you'd want to trade speed in favour of a lower memory footprint, stick with putting it in the application itself. | 1 | 0 | 1 | Where to hold static information for game logic? | 2 | python,google-app-engine,google-cloud-datastore | 0 | 2013-09-14T22:29:00.000 |
I have a postgres DB in which most of the tables have a column 'valid_time' indicating when the data in that row is intended to represent and an 'analysis_time' column, indicating when the estimate was made (this might be the same or a later time than the valid time in the case of a measurement or an earlier time in the case of a forecast). Typically there are multiple analysis times for each valid time, corresponding to different measurements (if you wait a bit, more data is available for a given time, so the analysis is better but the measurment is less prompt) and forecasts with different lead times.
I am using SQLalchemy to access this DB in Python.
What I would like to do is be able to pull out all rows with the most recent N unique datetimes of a specified column. For instance I might want the 3 most recent unique valid times, but this will typically be more than 3 rows, because there will be multiple analysis times for each of those 3 valid times.
I am new to relational databases. In a sense there are two parts to this question; how can this be achieved in bare SQL and then how to translate that to the SQLalchemy ORM? | 0 | 1 | 0.099668 | 0 | false | 18,818,835 | 0 | 144 | 1 | 0 | 0 | 18,818,634 | I'm not sure about the SQLalchemy part, but as far as the SQL queries I would do it in two steps:
Get the times. For example, something like.
SELECT DISTINCT valid_time FROM MyTable ORDER BY valid_time DESC LIMIT 3;
Get the rows with those times, using the previous step as a subquery:
SELECT * FROM MyTable WHERE valid_time IN (SELECT DISTINCT valid_time FROM MyTable ORDER BY valid_time DESC LIMIT 3); | 1 | 0 | 0 | Selecting the rows with the N most recent unique values of a datetime | 2 | python,sql,postgresql,sqlalchemy | 0 | 2013-09-15T23:45:00.000 |
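For the SQLAlchemy half of the question, the same two-step idea can be written with a subquery. A hedged sketch, assuming a mapped class called Observation and a placeholder connection string:
from sqlalchemy import Column, DateTime, Integer, create_engine, desc
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Observation(Base):                 # assumed model mirroring the description
    __tablename__ = 'observation'
    id = Column(Integer, primary_key=True)
    valid_time = Column(DateTime)
    analysis_time = Column(DateTime)

session = sessionmaker(bind=create_engine('postgresql://user:pass@localhost/mydb'))()

latest_times = (session.query(Observation.valid_time)
                       .distinct()
                       .order_by(desc(Observation.valid_time))
                       .limit(3))

rows = (session.query(Observation)
               .filter(Observation.valid_time.in_(latest_times))
               .all())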
I'm looking into the software architecture for using a NoSQL database (MongoDB). I would ideally want to use a database independent ORM/ODM for this, but I can't find any similar library to SQLAlchemy for NoSQL. Do you know any?
I do find a lot of wrappers, but nothing that seems to be database independent. If there's none, is it because all the NoSQL databases out there have different use cases that a common ORM/ODM wouldn't make sense like it does in the SQL case ? | 3 | 0 | 0 | 0 | false | 18,980,345 | 0 | 943 | 1 | 0 | 0 | 18,827,379 | Not sure about python, but in Java you can use frameworks like PlayORM for this purpose which supports Csasandra, HBase and MongoDb. | 1 | 0 | 0 | NoSQL database independent ORM/ODM for Python | 1 | python,mongodb,nosql | 0 | 2013-09-16T11:54:00.000 |
I have multiple xlsx files which each contain two worksheets (data, graph). I have created the graph in the graph worksheet using xlsxwriter and written data in the data worksheet. Now I need to combine all the graph worksheets into a single xlsx file. So my question is:
openpyxl: with the openpyxl module, we can load another workbook and modify its values. Is there any way to append a worksheet from another file? For example:
I have two xlsx data.xlsx(graph worksheet) and data_1.xlsx(graph worksheet)
So Final xlsx (graph worksheet and graph_1 worksheet)
xlsxwriter : As of my understanding, we can not modify existing xlsx File. Do we any update into this module. | 2 | 0 | 0 | 0 | false | 18,917,174 | 0 | 2,875 | 1 | 0 | 0 | 18,913,370 | In answer to the last part of the question:
xlsxwriter : As of my understanding, we can not modify existing xlsx File. Do we any update into this module.
That is correct. XlsxWriter only writes new files. It cannot be used to modify existing files. Rewriting files is not a planned feature. | 1 | 0 | 0 | Combine multiple xlsx File in single Xlsx File | 2 | python,openpyxl,xlsxwriter | 0 | 2013-09-20T09:33:00.000 |
I have a 17gb xml file. I want to store it in MySQL. I tried it using xmlparser in php but it says maximum execution time of 30 seconds exceeded and inserts only a few rows. I even tried in python using element tree but it is taking lot of memory gives memory error in a laptop of 2 GB ram. Please suggest some efficient way of doing this. | 4 | 0 | 0 | 0 | false | 18,945,969 | 0 | 215 | 1 | 0 | 0 | 18,945,802 | I'd say, turn off execution time limit in PHP (e.g. use a CLI script) and be patient. If you say it starts to insert something into database from a 17 GB file, it's actually doing a good job already. No reason to hasten it for such one-time job. (Increase memory limit too, just in case. Default 128 Mb is not that much.) | 1 | 0 | 0 | extremely large xml file to mysql | 2 | php,mysql,python-2.7,xml-parsing | 0 | 2013-09-22T16:01:00.000 |
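Since the question also mentions running out of memory with ElementTree in Python, it may be worth noting that iterparse can stream a file of that size without loading it, inserting rows as it goes. A rough sketch with an assumed element name and table layout:
import MySQLdb
import xml.etree.cElementTree as ET

conn = MySQLdb.connect(host='localhost', user='user', passwd='pass', db='mydb')
cur = conn.cursor()

for event, elem in ET.iterparse('huge.xml', events=('end',)):
    if elem.tag == 'record':                      # assumed element name
        cur.execute('INSERT INTO records (name, value) VALUES (%s, %s)',
                    (elem.findtext('name'), elem.findtext('value')))
        elem.clear()                              # frees the element so memory stays flat
conn.commit()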
In my program, ten processes write to MongoDB using update(key, doc, upsert=True).
The "key" is a MongoDB index, but it is not unique.
query = {'hotelid':hotelid,"arrivedate":arrivedate,"leavedate":leavedate}
where = "data.%s" % sourceid
data_value_where = {where:value}
self.collection.update(query,{'$set':data_value_where},True)
The "query" is the non-unique index.
I found that sometimes the update does not update existing data, but creates a new document instead.
I write a log for update method return, the return is " {u'ok': 1.0, u'err': None, u'upserted': ObjectId('5245378b4b184fbbbea3f790'), u'singleShard': u'rs1/192.168.0.21:10000,192.168.1.191:10000,192.168.1.192:10000,192.168.1.41:10000,192.168.1.113:10000', u'connectionId': 1894107, u'n': 1, u'updatedExisting': False, u'lastOp': 5928205554643107852L}"
I modify the update method to update(query, {'$set':data_value_where},upsert=True, safe=True), but three is no change for this question. | 0 | 0 | 0 | 0 | false | 18,998,582 | 0 | 820 | 2 | 0 | 0 | 18,995,966 | You would not end up with duplicate documents due to the operator you are using. You are actually using an atomic operator to update.
Atomic (not to be confused with SQL atomic operations of all or nothing here) operations are done in sequence so each process will never pick up a stale document or be allowed to write two ids to the same array since the document each $set operation picks up will have the result of the last $set.
The fact that you did get duplicate documents most likely means you have an error in your code. | 1 | 0 | 0 | mongodb update(use upsert=true) not update exists data, insert a new data? | 2 | python,mongodb,pymongo | 0 | 2013-09-25T04:00:00.000 |
In my program, ten processes write to MongoDB using update(key, doc, upsert=True).
The "key" is a MongoDB index, but it is not unique.
query = {'hotelid':hotelid,"arrivedate":arrivedate,"leavedate":leavedate}
where = "data.%s" % sourceid
data_value_where = {where:value}
self.collection.update(query,{'$set':data_value_where},True)
The "query" is the non-unique index.
I found that sometimes the update does not update existing data, but creates a new document instead.
I write a log for update method return, the return is " {u'ok': 1.0, u'err': None, u'upserted': ObjectId('5245378b4b184fbbbea3f790'), u'singleShard': u'rs1/192.168.0.21:10000,192.168.1.191:10000,192.168.1.192:10000,192.168.1.41:10000,192.168.1.113:10000', u'connectionId': 1894107, u'n': 1, u'updatedExisting': False, u'lastOp': 5928205554643107852L}"
I modify the update method to update(query, {'$set':data_value_where},upsert=True, safe=True), but three is no change for this question. | 0 | 0 | 0 | 0 | false | 18,996,136 | 0 | 820 | 2 | 0 | 0 | 18,995,966 | You can call it "threadsafe", as the update itself is not done in Python, it's in the mongodb, which is built to cater many requests at once.
So in summary: You can safely do that. | 1 | 0 | 0 | mongodb update(use upsert=true) not update exists data, insert a new data? | 2 | python,mongodb,pymongo | 0 | 2013-09-25T04:00:00.000 |
I am considering serializing a big set of database records to cache in Redis, using Python and Cassandra. I can either serialize each record and persist a string in Redis, or create a dictionary for each record and persist them in Redis as a list of dictionaries.
Which way is faster? pickle each record? or create a dictionary for each record?
And second : Is there any method to fetch from database as list of dic's? (instead of a list of model obj's) | 4 | 3 | 1.2 | 0 | true | 19,033,019 | 0 | 2,722 | 1 | 0 | 0 | 19,025,952 | Instead of serializing your dictionaries into strings and storing them in a Redis LIST (which is what it sounds like you are proposing), you can store each dict as a Redis HASH. This should work well if your dicts are relatively simple key/value pairs. After creating each HASH you could add the key for the HASH to a LIST, which would provide you with an index of keys for the hashes. The benefits of this approach could be avoiding or lessening the amount of serialization needed, and may make it easier to use the data set in other applications and from other languages.
There are of course many other approaches you can take and that will depend on lots of factors related to what kind of data you are dealing with and how you plan to use it.
If you do go with serialization you might want to at least consider a more language agnostic serialization format, like JSON, BSON, YAML, or one of the many others. | 1 | 0 | 0 | Python - Redis : Best practice serializing objects for storage in Redis | 1 | python,redis,cassandra,cql,cqlengine | 0 | 2013-09-26T10:39:00.000 |
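A small sketch of the HASH-plus-index idea from the answer above, using redis-py; the key names and record shape are assumptions:
import redis

r = redis.StrictRedis(host='localhost', port=6379, db=0)

def cache_record(record_id, record_dict):
    key = 'record:%s' % record_id
    r.hmset(key, record_dict)        # one HASH per record, one field per dict key
    r.rpush('record:index', key)     # keep an index of the cached keys

def load_record(key):
    return r.hgetall(key)            # comes back as a dict of strings

cache_record(1, {'name': 'foo', 'value': 42})
records = [load_record(k) for k in r.lrange('record:index', 0, -1)]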
datetime is stored in postgres DB with UTC. I could see that the date is 2013-09-28 00:15:52.62504+05:30 in postgres table.
But when I fetch the value via django model, I get the same datetime field as datetime.datetime(2013, 9, 27, 18, 45, 52, 625040, tzinfo=).
USE_TZ is True and TIME_ZONE is 'Asia/Kolkata' in settings.py file. I think saving to DB works fine as DB contains datetime with correct UTC of +5:30.
What am I doing wrong here?
Please help.
Thanks
Kumar | 2 | 3 | 1.2 | 0 | true | 19,076,075 | 1 | 1,585 | 1 | 0 | 0 | 19,058,491 | The issue has been solved. The problem was that I was using another naive datetime field for calculation of difference in time, whereas the DB field was an aware field. I then converted the naive to timezone aware date, which solved the issue.
Just in case some one needs to know. | 1 | 0 | 0 | Postgres datetime field fetched without timezone in django | 1 | python,django,postgresql,timezone | 0 | 2013-09-27T19:19:00.000 |
I'm a complete beginner to Flask and I'm starting to play around with making web apps.
I'm having a hard time figuring out how to enforce unique user names. I'm thinking about how to do this in SQL, maybe with something like user_name text unique on conflict fail, but then how do I catch the error back in Python?
Alternatively, is there a way to manage this that's built in to Flask? | 1 | 0 | 0 | 0 | false | 19,087,185 | 0 | 1,118 | 1 | 0 | 0 | 19,086,885 | You can use SQLAlchemy (via the Flask-SQLAlchemy extension); it's a plug-in. | 1 | 0 | 0 | How do I enforce unique user names in Flask? | 2 | python,sql,web-applications,flask | 0 | 2013-09-30T05:17:00.000 |
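A minimal sketch of how that usually looks with Flask-SQLAlchemy: declare the column unique and catch the IntegrityError the database raises on a duplicate. The model and field names are invented:
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy.exc import IntegrityError

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///app.db'
db = SQLAlchemy(app)

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    user_name = db.Column(db.String(80), unique=True, nullable=False)

def register(name):
    db.session.add(User(user_name=name))
    try:
        db.session.commit()          # the unique constraint is enforced here
        return True
    except IntegrityError:
        db.session.rollback()        # duplicate user name
        return False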
I have a program which calculates a set of plain interlinked objects (the objects consist of properties which basically are either String, int or link to another object).
I would like to have the objects stored in a relational database for easy SQL querying (from another program).
Moreover, the objects (classes) tend to change and evolve. I would like to have a generic solution not requiring any changes in the 'persistence layer' whenever the classes evolve.
Do you see any way to do that? | 1 | 1 | 0.049958 | 0 | false | 19,142,716 | 0 | 60 | 1 | 0 | 0 | 19,142,497 | What about storing the objects in JSON?
You could write a function that serializes your object before storing it into the database.
If you have a specific identifier for your objects, I would suggest to use it as index so that you can easily retrieve it. | 1 | 0 | 1 | Store Python objects in a database for easy quering | 4 | python,database,orm | 0 | 2013-10-02T16:56:00.000 |
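A bare-bones sketch of the JSON idea above: one generic table whose rows are (id, type, JSON body), so the Python classes can evolve without schema changes. Table and column names are made up:
import json
import sqlite3

conn = sqlite3.connect('objects.db')
conn.execute('CREATE TABLE IF NOT EXISTS objects (id TEXT PRIMARY KEY, kind TEXT, body TEXT)')

def save(obj_id, kind, properties):
    conn.execute('INSERT OR REPLACE INTO objects VALUES (?, ?, ?)',
                 (obj_id, kind, json.dumps(properties)))
    conn.commit()

def load(obj_id):
    row = conn.execute('SELECT body FROM objects WHERE id = ?', (obj_id,)).fetchone()
    return json.loads(row[0]) if row else None

save('42', 'Widget', {'name': 'spanner', 'size': 3, 'linked_to': '43'})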
I am dealing with a doubt about sqlalchemy and objects refreshing!
I am in the situation where I have 2 sessions, and the same object has been queried in both sessions! For a particular reason I cannot close one of the sessions.
I have modified the object and committed the changes in session A, but in session B the attributes are still the initial ones, without the modifications!
Shall I implement a notification system to communicate changes or there is a built-in way to do this in sqlalchemy? | 37 | 9 | 1 | 0 | false | 54,821,257 | 0 | 54,352 | 1 | 0 | 0 | 19,143,345 | I just had this issue and the existing solutions didn't work for me for some reason. What did work was to call session.commit(). After calling that, the object had the updated values from the database. | 1 | 0 | 0 | About refreshing objects in sqlalchemy session | 6 | python,mysql,session,notifications,sqlalchemy | 0 | 2013-10-02T17:43:00.000 |
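A small self-contained sketch of the options mentioned in the answers (refresh, expire, or simply ending the reading session's transaction); the model and the SQLite file are stand-ins for the real setup:
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Widget(Base):
    __tablename__ = 'widgets'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

engine = create_engine('sqlite:///demo.db')
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

session_a, session_b = Session(), Session()
session_a.add(Widget(id=1, name='old'))
session_a.commit()

w_b = session_b.query(Widget).get(1)    # session B caches its own copy

session_a.query(Widget).get(1).name = 'new'
session_a.commit()                      # the change made in session A

session_b.commit()                      # ends B's transaction and expires its cache
print(w_b.name)                         # attribute access reloads -> 'new'
# session_b.refresh(w_b) or session_b.expire_all() are finer-grained alternatives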
I'm currently using SQLAlchemy with two distinct session objects. In one object, I am inserting rows into a mysql database. In the other session I am querying that database for the max row id. However, the second session is not querying the latest from the database. If I query the database manually, I see the correct, higher max row id.
How can I force the second session to query the live database? | 2 | 0 | 0 | 0 | false | 49,755,122 | 0 | 1,599 | 1 | 0 | 0 | 19,159,142 | Had a similar problem, for some reason i had to commit both sessions. Even the one that is only reading.
This might be a problem with my code though, cannot use same session as it the code will run on different machines. Also documentation of SQLalchemy says that each session should be used by one thread only, although 1 reading and 1 writing should not be a problem. | 1 | 0 | 0 | How to force SQLAlchemy to update rows | 2 | python,mysql,database,session,sqlalchemy | 0 | 2013-10-03T12:22:00.000 |
I'm working with a somewhat large set (~30000 records) of data that my Django app needs to retrieve on a regular basis. This data doesn't really change often (maybe once a month or so), and the changes that are made are done in a batch, so the DB solution I'm trying to arrive at is pretty much read-only.
The total size of this dataset is about 20mb, and my first thought is that I can load it into memory (possibly as a singleton on an object) and access it very fast that way, though I'm wondering if there are other, more efficient ways of decreasing the fetch time by avoiding disk I/O. Would memcached be the best solution here? Or would loading it into an in-memory SQLite DB be better? Or loading it on app startup simply as an in-memory variable? | 2 | 0 | 0 | 0 | false | 19,311,615 | 1 | 1,409 | 1 | 0 | 0 | 19,310,083 | Does the disk IO really become the bottleneck of your application's performance and affect your user experience? If not, I don't think this kind of optimization is necessary.
Operating systems and RDBMSs (e.g. MySQL, PostgreSQL) are really smart nowadays. The data on disk will be cached in memory by the RDBMS and OS automatically. | 1 | 0 | 0 | Load static Django database into memory | 2 | python,django,sqlite,orm,memcached | 0 | 2013-10-11T04:06:00.000 |
I have a workbook that has some sheets in it. One of the sheets has charts in it. I need to use xlrd or openpyxl to edit another sheet, but, whenever I save the workbook, the charts are gone.
Any workaround to this? Is there another python package that preserves charts and formatting? | 4 | 2 | 0.379949 | 0 | false | 20,910,668 | 0 | 477 | 1 | 0 | 0 | 19,323,049 | This is currently not possible with either but I hope to have it in openpyxl 2.x. Patches / pull requests always welcome! ;-) | 1 | 0 | 0 | How can I edit Excel Workbooks using XLRD or openpyxl while preserving charts? | 1 | python,xlrd,xlwt,openpyxl,xlutils | 0 | 2013-10-11T16:33:00.000 |
I have a simple Python/Django application in which I am inserting records into the database through a scanning event, and I am able to show the data on a simple page. I keep reloading the page every second to show the latest inserted database records, but I want to improve this so that the page updates the records whenever a new entry arrives in the database, instead of reloading every second.
Is there any way to do this?
Database: I am using mysql
Python: Python 2.7
Framework: Django | 1 | 2 | 0.132549 | 0 | false | 19,333,028 | 0 | 824 | 1 | 0 | 0 | 19,332,760 | You need to implement polling/long polling or server push. | 1 | 0 | 0 | Updating client page only when new entry comes in database in Django | 3 | python,mysql,django | 0 | 2013-10-12T09:38:00.000 |
I understand that ForeignKey constrains a column to be an id value contained in another table so that entries in two different tables can be easily linked, but I do not understand the behavior of relationships(). As far as I can tell, the primary effect of declaring a relationship between Parent and Child classes is that parentobject.child will now reference the entries linked to the parentobject in the children table. What other effects does declaring a relationship have? How does declaring a relationship change the behavior of the SQL database or how SQLAlchemy interacts with the database? | 1 | 5 | 1.2 | 0 | true | 19,369,883 | 0 | 251 | 1 | 0 | 0 | 19,366,605 | It doesn't do anything at the database level, it's purely for convenience. Defining a relationship lets SQLAlchemy know how to automatically query for the related object, rather than you having to manually use the foreign key. SQLAlchemy will also do other high level management such as allowing assignment of objects and cascading changes. | 1 | 0 | 0 | SQLAlchemy Relationships | 1 | python,sql,sqlalchemy,relationship | 0 | 2013-10-14T18:21:00.000 |
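For illustration, a small pair of models showing what relationship() adds on top of the ForeignKey: the convenience attribute, object assignment and cascades live purely in Python, while the schema only gets the FK column. The names here are invented:
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship, sessionmaker

Base = declarative_base()

class Parent(Base):
    __tablename__ = 'parent'
    id = Column(Integer, primary_key=True)
    children = relationship('Child', backref='parent')     # ORM-level only

class Child(Base):
    __tablename__ = 'child'
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey('parent.id'))   # the real DB constraint

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

p = Parent(children=[Child(), Child()])   # assignment handled by the relationship
session.add(p)
session.commit()
print(len(p.children), p.children[0].parent is p)   # automatic related-object access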
I'm just curious if there's a way to make the no default value warning I get from Storm to go away. I have an insert trigger in MySQL that handles these fields and everything is functioning as expected so I just want to remove this unnecessary information. I tried setting the default value to None but that causes an error because the fields do not allow nulls. So how do I make the warning go away? | 1 | 0 | 0 | 0 | false | 20,010,872 | 1 | 770 | 1 | 0 | 0 | 19,373,289 | Is it not possible for you to remove the 'IsNull' constraint from your MySQL database? I'm not aware of any where it is not possible to do this. Otherwise you could set a default string which represents a null value. | 1 | 0 | 0 | How can I avoid "Warning: Field 'xxx' doesn't have a default value" in Storm? | 1 | python,mysql,apache-storm | 0 | 2013-10-15T04:26:00.000 |
I have a few large hourly upload tables with RECORD fieldtypes. I want to pull select records out of those tables and put them in daily per-customer tables. The trouble I'm running into is that using QUERY to do this seems to flatten the data out.
Is there some way to preserve the nested RECORDs, or do I need to rethink my approach?
If it helps, I'm using the Python API. | 1 | 0 | 1.2 | 0 | true | 19,459,294 | 0 | 234 | 1 | 0 | 0 | 19,458,338 | Unfortunately, there isn't a way to do this right now, since, as you realized, all results are flattened. | 1 | 0 | 0 | Bigquery: how to preserve nested data in derived tables? | 2 | python,google-bigquery | 0 | 2013-10-18T20:17:00.000 |
I'm looking for a simple way to extract text from excel/word/ppt files. The objective is to index contents in whoosh for search with haystack.
There are some packages like xlrd and pandas that work for excel, but they go way beyond what I need, and I'm not really sure that they will actually just print the cell's unformatted text content straight from the box.
Does anybody know of an easy way around this? My guess is MS Office files must be XML-based.
Thanks!
A. | 1 | 2 | 1.2 | 0 | true | 19,500,864 | 0 | 631 | 1 | 0 | 0 | 19,500,625 | I've done this "by hand" before--as it turns out, .(doc|ppt|xls)x files are just zip files which contain .xml files with all of your content. So you can use zipfile and your favorite xml parser to read the contents if you can find no better tool to do it. | 1 | 0 | 1 | Extract text from ms office files with python | 1 | python,django-haystack,whoosh | 1 | 2013-10-21T17:07:00.000 |
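A rough sketch of the zipfile approach for a .docx (the other Office formats have analogous member files); only the text nodes of word/document.xml are pulled out, which is usually enough for indexing:
import zipfile
import xml.etree.ElementTree as ET

W = '{http://schemas.openxmlformats.org/wordprocessingml/2006/main}'

def docx_text(path):
    with zipfile.ZipFile(path) as z:
        tree = ET.fromstring(z.read('word/document.xml'))
    # every <w:t> element holds a run of visible text
    return ' '.join(node.text for node in tree.iter(W + 't') if node.text)

print(docx_text('report.docx'))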
Rackspace has added the feature to select certain cloud servers (as hosts) while creating a user in a cloud database instance. This allows the specified user to be accessed, only from those cloud servers.
So I would like to know whether there is an API available in pyrax(python SDK for Rackspace APIs) to accomplish this or not.
If possible, then how to pass multiple cloud server IPs using the API.
Thanks, | 1 | 0 | 1.2 | 0 | true | 19,761,340 | 1 | 70 | 1 | 0 | 0 | 19,585,830 | I released version 1.6.1 of pyrax a few days ago that adds support for the 'host' parameter for users, as well as for Cloud Database backups. | 1 | 0 | 0 | Host Parameter While Creating a User in Rackspace Cloud Database Instance | 2 | python,mysql,database,cloud,rackspace-cloud | 0 | 2013-10-25T09:17:00.000 |
We are building a data warehouse in PostgreSQL. We want to connect to different data sources. Most data will come from MS Access. We are not Python experts (yet :-)).
We found several database connectors. We want to use (as much as possible) standard SQL for our queries.
We looked at pyodbc and psycopg2.
Given that we use MS Access and PostgreSQL and want to have the same query syntax and return data types; which drivers should we use? | 0 | 1 | 0.197375 | 0 | false | 20,310,119 | 0 | 126 | 1 | 0 | 0 | 19,605,580 | Your query syntax differences will depend on PostgreSQL extensions vs MS Access-specific quirks. The psycopg2 and pyodbc drivers will both provide a query interface using whatever SQL dialect (with quirks) the underlying db connections provide. | 1 | 0 | 0 | python postgresql ms access driver advice | 1 | python,postgresql,ms-access,psycopg2,pyodbc | 0 | 2013-10-26T10:25:00.000 |
The goal is to find values in an Excel spreadsheet which match values in a separate list, then highlight the row with a fill color (red) where matches are found. In other words:
Excel file A: source list (approximately 200 items)
Excel file B: has one column containing the list we are checking; must apply fill color (red) to entire row where matches are found
Wondering what the best approach might be. I'm currently using AppleScript to highlight and sort data in a large volume of spreadsheets; a looped find checks each cell in a range for a single string of text and colors all matching rows. While this task is similar, the source list contains hundreds of items so it feels silly (and very slow) to include all this data in the actual script. Any suggestions would be greatly appreciated. | 2 | 0 | 0 | 0 | false | 20,011,728 | 0 | 961 | 1 | 0 | 0 | 19,612,872 | I don't know what format your original list is in, but this sounds like a job for conditional formatting, if you can get the list into Excel. You can do conditional formatting based on a formula, and you can use a VLOOKUP() formula to do it. | 1 | 0 | 0 | Find text in Excel file matching text in separate file, then apply fill color to row | 1 | python,regex,excel,macos,applescript | 0 | 2013-10-26T23:07:00.000 |
I'd like my Python script to read some data out of a postgresql dump file. The Python will be running on a system without postgresql, and needs to process the data in a dump file.
It looks fairly straightforward to parse the CREATE TABLE calls to find the column names, then the INSERT INTO rows to build the contents. But I'm sure there would be quite a few gotchas in doing this reliably. Does anyone know of a module which will do this? | 3 | 1 | 1.2 | 0 | true | 19,703,149 | 0 | 4,660 | 1 | 0 | 0 | 19,638,019 | Thanks for all the comments, even if they are mostly "don't do this!" ;)
Given:
The dump is always produced in the same format from a 3rd-party system
I need to be able to automate reading it on another 3rd-party system without postgres
I've gone for writing my own basic parser, which is doing a good enough job for what I require. | 1 | 0 | 0 | How to read postgresql dump file in Python | 2 | python,postgresql | 0 | 2013-10-28T14:53:00.000 |
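For anyone attempting the same thing, a sketch of the kind of minimal parser that works for a plain-text pg_dump, where each table's data sits in a COPY ... FROM stdin; block with one tab-separated row per line, terminated by a line containing only \.. Escapes and \N nulls are left as an exercise:
def read_copy_blocks(dump_path):
    """Yield (table_name, row_values) pairs from a plain-text pg_dump file."""
    table = None
    with open(dump_path) as f:
        for line in f:
            line = line.rstrip('\n')
            if table is None:
                if line.startswith('COPY ') and line.endswith('FROM stdin;'):
                    table = line.split()[1]      # e.g. public.myapp_stories
            elif line == '\\.':                  # end of the COPY block
                table = None
            else:
                yield table, line.split('\t')

for table, row in read_copy_blocks('backup.sql'):
    print(table, row)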
I'm using Django + Postgres. When I do a SQL query using psql,
e.g. \d+ myapp_stories
correctly shows the columns in the table
But when I do SELECT * FROM myapp_stories, it returns nothing. But querying the same database & table from my python code returns data just fine. So there is data in the table. Any thoughts? I'm using venv, not sure if that affects anything. Thanks in advance! | 1 | 1 | 0.099668 | 0 | false | 19,665,116 | 1 | 82 | 2 | 0 | 0 | 19,664,732 | I guess you forgot to enter semicolon:
SELECT * FROM myapp_stories; | 1 | 0 | 0 | SELECT using psql returns no rows even though data is there | 2 | python,django,postgresql | 0 | 2013-10-29T17:05:00.000 |
I'm using Django + Postgres. When I do a SQL query using psql,
e.g. \d+ myapp_stories
correctly shows the columns in the table
But when I do SELECT * FROM myapp_stories, it returns nothing. But querying the same database & table from my python code returns data just fine. So there is data in the table. Any thoughts? I'm using venv, not sure if that affects anything. Thanks in advance! | 1 | 1 | 0.099668 | 0 | false | 19,666,882 | 1 | 82 | 2 | 0 | 0 | 19,664,732 | Prefix the table in your query with the schema, as the search_path might be causing your query (or psql) to look in a schema other than what you are expecting. | 1 | 0 | 0 | SELECT using psql returns no rows even though data is there | 2 | python,django,postgresql | 0 | 2013-10-29T17:05:00.000 |
I have a PyQt application which uses SQLite files to store data, and I would like to allow multiple users to read and write to the same database. It uses QSqlDatabase and QSqlTableModels with item views for reading and editing.
As is, multiple users can launch the application and read/write to different tables. The issue is this:
Say user1's application reads table A, then user2 writes to index 0,0 in table A. Since user1's application has already read and cached that cell, it doesn't see user2's change right away. The Qt item views will update when the dataChanged signal is emitted, but in this case the data is being changed by another application instance. Is there some way to be notified of file changes made by another application instance? What's the best way to handle this?
I'm assuming this is really best solved by using an SQL server host connection rather than SQLite for the database, but in the realm of SQLite what would be my closest workaround option?
Thanks | 0 | 0 | 0 | 0 | false | 19,764,106 | 0 | 74 | 1 | 0 | 0 | 19,759,594 | SQLite has no mechanism by which another user can be notified.
You have to implement some communication mechanism outside of SQLite. | 1 | 1 | 0 | Signaling Cell Changes across multiple QSqlDatabase to the same SQliteFile | 1 | python,sql,qt,sqlite | 0 | 2013-11-03T23:40:00.000 |
I'm trying to get Django running on OS X Mavericks and I've encountered a bunch of errors along the way, the latest being that when I run python manage.py runserver to see if everything works, I get this error, which I believe means that it cannot find libssl:
ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/psycopg2/_psycopg.so, 2): Library not loaded: @loader_path/../lib/libssl.1.0.0.dylib Referenced from: /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/psycopg2/_psycopg.so Reason: image not found
I have already upgraded Python to 2.7.6 with the patch that handles some of the quirks of Mavericks.
Any ideas?
Full traceback:
Unhandled exception in thread started by >
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/management/commands/runserver.py", line 93, in inner_run
self.validate(display_num_errors=True)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/management/base.py", line 280, in validate
num_errors = get_validation_errors(s, app)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/management/validation.py", line 28, in get_validation_errors
from django.db import models, connection
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/init.py", line 40, in
backend = load_backend(connection.settings_dict['ENGINE'])
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/init.py", line 34, in getattr
return getattr(connections[DEFAULT_DB_ALIAS], item)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/utils.py", line 93, in getitem
backend = load_backend(db['ENGINE'])
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/utils.py", line 27, in load_backend
return import_module('.base', backend_name)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/utils/importlib.py", line 35, in import_module
__import__(name)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/backends/postgresql_psycopg2/base.py", line 14, in
from django.db.backends.postgresql_psycopg2.creation import DatabaseCreation
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/backends/postgresql_psycopg2/creation.py", line 1, in
import psycopg2.extensions
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/psycopg2/init.py", line 50, in
from psycopg2._psycopg import BINARY, NUMBER, STRING, DATETIME, ROWID
ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/psycopg2/_psycopg.so, 2): Library not loaded: @loader_path/../lib/libssl.1.0.0.dylib
Referenced from: /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/psycopg2/_psycopg.so
Reason: image not found | 2 | 2 | 0.07983 | 0 | false | 19,772,866 | 1 | 9,380 | 1 | 1 | 0 | 19,767,569 | It seems that it's libssl.1.0.0.dylib that is missing. Mavericks comme with libssl 0.9.8. You need to install libssl via homebrew.
If @loader_path points to /usr/lib/, you also need to symlink libssl from /usr/local/Cellar/openssl/lib/ into /usr/lib. | 1 | 0 | 0 | Django can't find libssl on OS X Mavericks | 5 | python,django,macos,postgresql | 0 | 2013-11-04T12:15:00.000 |
I have two different python programs. One of the program uses the python BeautifulSoup module, the other uses the MySQLdb module. When I run the python files individually, I have no problem and the program run fine and give me the desired output. However I need to combine the two programs so to achieve my ultimate goal. However the Beautiful soup module only runs if I open it in python 2.7.3 and the MySQLdb runs only on the python 2.7.4 (64bit) version. I installed both the modules exactly the way it was mentioned in the docs. Any help will be much appreciated. | 0 | 0 | 0 | 0 | false | 19,801,757 | 1 | 50 | 1 | 0 | 0 | 19,799,605 | If you have 2 versions of python installed on your system, then you've somehow installed one library in each of them.
You either need to install both libraries in both versions of python (which 2 seperate versions of pip can do), or need to setup your PYTHONPATH environment variable to allow loading of modules from additional paths (like the site-packages folder of the python 2.7.4 (64bit) installation from the python 2.7.3 executable). | 1 | 0 | 0 | Modules not Working across different python versions | 1 | python,python-2.7,beautifulsoup,mysql-python | 0 | 2013-11-05T21:42:00.000 |
Situation:
I have a requirement to use connection pooling while connecting to Oracle database in python. Multiple python applications would use the helper connection libraries I develop.
My Thought Process:
Here I can think of two ways of connection pooling:
1) Let connection pool be maintained and managed by database itself (as provided by Oracle's DRCP) and calling modules just ask connections from the connection broker described by Oracle DRCP.
2) Have a server process that manages the connection pool and all caller modules ask for connections from this pool (like dbcp?)
What suggestions do I need:
option 1) looks very straight forward since pool does not need to be stored by application.
But I wanted to know what advantages do I get other than simplicity using option 1)?
I am trying to avoid option 2) since it would require a dedicated server process always running (considering shelving is not possible for connection objects).
Is there any other way? | 4 | 0 | 0 | 0 | false | 19,848,278 | 0 | 622 | 1 | 0 | 0 | 19,848,191 | Let the database handle the pool. . . it's smarter than you'll be, and you'll leverage every bug fix/performance improvement Oracle's installed base comes up with. | 1 | 0 | 0 | Application vs Database Resident Connection Pool | 1 | python,oracle,connection-pooling | 0 | 2013-11-07T22:40:00.000 |
I got one table in which modifications are made :-account_bank_statement, what other tables are needed for the point of sale and if i make a sale in which tables modifications are made.I want to make a sale but not through the pos provided. | 0 | 0 | 0 | 0 | false | 19,897,059 | 1 | 49 | 1 | 0 | 0 | 19,892,934 | All the sales done through post is registered in post.order. If you are creating orders from an external source other than pos, you can create the order in this table and call the confirm bottom action. Rest changes in all other table will be done automatically.. | 1 | 0 | 0 | In which tables changes are made in openERP when an items is sold at Point of sale | 1 | python,openerp | 0 | 2013-11-10T17:46:00.000 |
I know there exists a plugin for nginx to load the config through perl. I was wondering, does anyone have any experience doing this without using a plugin? Possibly a fuse-backed Python script that queries a DB?
I would really like to not use the perl plugin, as it doesn't seem that stable. | 0 | 1 | 1.2 | 0 | true | 20,018,813 | 0 | 722 | 1 | 0 | 0 | 19,957,613 | I haven't seen any working solution to solve your task, a quick google search doesn't give any useful information either (it doesn't look like HttpPerlModule could help with DB stored configuration).
It sounds like it's a good task to develop and contribute to Nginx project ! | 1 | 0 | 0 | Running Nginx with a database-backed config file | 1 | python,sql,configuration,nginx,fuse | 1 | 2013-11-13T15:23:00.000 |
I'm trying to share an in-memory database between processes. I'm using Python's sqlite3. The idea is to create a file in /run/shm and use it as a database. Questions are:
Is that safe? In particular: do read/write locks (fcntl) work the same in shm?
Is that a good idea in the first place? I'd like to keep things simple and not have to create a separate database process. | 1 | 0 | 1.2 | 0 | true | 20,004,051 | 0 | 230 | 1 | 0 | 0 | 19,976,664 | I've tested fcntl (in Python) with shm files and it seems that locking works correctly. Indeed, from process point of view it is a file and OS handles everything correctly.
I'm going to keep this architecture since it is simple enough and I don't see any (major) drawbacks. | 1 | 0 | 0 | sqlite3 database in shared memory | 1 | python,sqlite,shared-memory | 0 | 2013-11-14T11:38:00.000 |
I'm working on a web app in Python (Flask) that, essentially, shows the user information from a PostgreSQL database (via Flask-SQLAlchemy) in a random order, with each set of information being shown on one page. Hitting a Next button will direct the user to the next set of data by replacing all data on the page with new data, and so on.
My conundrum comes with making the presentation truly random - not showing the user the same information twice by remembering what they've seen and not showing them those already seen sets of data again.
The site has no user system, and the "already seen" sets of data should be forgotten when they close the tab/window or navigate away.
I should also add that I'm a total newbie to SQL in general.
What is the best way to do this? | 1 | 1 | 0.099668 | 0 | false | 20,081,554 | 1 | 512 | 1 | 0 | 0 | 20,072,309 | The easiest way is to do the random number generation in javascript at the client end...
Tell the client what the highest number row is, then the client page keeps track of which ids it has requested (just a simple js array). Then when the "request next random page" button is clicked, it generates a new random number less than the highest valid row id, and providing that the number isn't in its list of previously viewed items, it will send a request for that item.
This way, you (on the server) only have to have 2 database accessing views:
main page (which gives the js, and the highest valid row id)
display an item (by id)
You don't have any complex session tracking, and the user's browser is only having to keep track of a simple list of numbers, which even if they personally view several thousand different items is still only going to be a meg or two of memory.
For performance reasons, you can even pre-fetch the next item as soon as the current item loads, so that it displays instantly and loads the next one in the background while they're looking at it. (jQuery .load() is your friend :-) )
If you expect a large number of items to be removed from the database (so that the highest number is not helpful), then you can instead generate a list of random ids, send that, and then request them one at a time. Pre-generate the random list, as it were.
Hope this helps! :-) | 1 | 0 | 0 | Best way to show a user random data from an SQL database? | 2 | python,sql,flask,flask-sqlalchemy | 0 | 2013-11-19T13:03:00.000 |
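The two server-side views described above might look roughly like this in Flask; the Item model, template name and database URI are all stand-ins:
from flask import Flask, jsonify, render_template
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///data.db'
db = SQLAlchemy(app)

class Item(db.Model):                       # stand-in for the real data model
    id = db.Column(db.Integer, primary_key=True)
    title = db.Column(db.String(200))
    body = db.Column(db.Text)

@app.route('/')
def main_page():
    # hands the client-side script the highest valid id to pick randomly from
    max_id = db.session.query(db.func.max(Item.id)).scalar() or 0
    return render_template('viewer.html', max_id=max_id)

@app.route('/item/<int:item_id>')
def show_item(item_id):
    item = Item.query.get_or_404(item_id)
    return jsonify(title=item.title, body=item.body)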
I am wondering if anyone knows a way to generate a connection to a SQLite database in python from a StringIO object.
I have a compressed SQLite3 database file and I would like to decompress it using the gzip library and then connect to it without first making a temp file.
I've looked into the slqite3 library source, but it looks like filename gets passed all the way through into the C code. Are there any other SQLite3 connection libraries that you could use a file ID for? Or is there some why I can trick the builtin sqlite3 library into thinking that my StringIO (or some other object type) is an actual file? | 10 | 4 | 1.2 | 0 | true | 20,084,315 | 0 | 1,371 | 1 | 0 | 0 | 20,084,135 | The Python sqlite3 module cannot open a database from a file number, and even so, using StringIO will not give you a file number (since it does not open a file, it just emulates the Python file object).
You can use the :memory: special file name to avoid writing a file to disk, then later write it to disk once you are done with it. This will also make sure the file is optimized for size, and you can opt not to write e.g. indexes if size is really a big issue. | 1 | 0 | 0 | SQLite3 connection from StringIO (Python) | 1 | python,sqlite,stringio | 0 | 2013-11-19T23:10:00.000 |
I have a MYSQL database with users table, and I want to make a python application which allows me to login to that database with the IP, pass, username and everything hidden. The thing is, the only IP which is allowed to connect to that mysql database, is the server itself (localhost).
How do I make a connection to that database from a user's computer, and also be able to retrieve data from it securely? Can I build some PHP script on the server that is able to take parameters and retrieve data to that user? | 0 | 0 | 0 | 0 | false | 20,193,357 | 0 | 143 | 2 | 0 | 0 | 20,193,144 | As i understood you are able to connect only with "server itself (localhost)" so to connect from any ip do this:
mysql> CREATE USER 'myname'@'%.mydomain.com' IDENTIFIED BY 'mypass';
I agree with @Daniel no PHP script needed... | 1 | 0 | 0 | Secure MySQL Connection in Python | 3 | php,python,mysql,python-2.7 | 0 | 2013-11-25T12:29:00.000 |
I have a MYSQL database with users table, and I want to make a python application which allows me to login to that database with the IP, pass, username and everything hidden. The thing is, the only IP which is allowed to connect to that mysql database, is the server itself (localhost).
How do I make a connection to that database from a user's computer, and also be able to retrieve data from it securely? Can I build some PHP script on the server that is able to take parameters and retrieve data to that user? | 0 | 1 | 0.066568 | 0 | false | 20,193,562 | 0 | 143 | 2 | 0 | 0 | 20,193,144 | You should not make a connection from the user's computer. By default, most database configurations are done to allow only requests from the same server (localhost) to access the database.
What you will need is this:
A server side script such as Python, PHP, Perl, Ruby, etc to access the database. The script will be on the server, and as such, it will access the database locally
Send a web request from the user's computer using Python, Perl, or any programming language to the server side script as described above.
So, the application on the user's computer sends a request to the script on the server. The script connects to the database locally, accesses the data, and sends it back to the application. The application can then use the data as needed.
That is basically, what you are trying to achieve.
Hope the explanation is clear and it helps. | 1 | 0 | 0 | Secure MySQL Connection in Python | 3 | php,python,mysql,python-2.7 | 0 | 2013-11-25T12:29:00.000 |
win32com is a general library to access COM objects from Python.
One of the major hallmarks of this library is its ability to manipulate Excel documents.
However, there are lots of specialized modules whose only purpose is to manipulate Excel documents, like openpyxl, xlrd, xlwt, and python-tablefu.
Are these libraries any better for this specific task? If yes, in what respect? | 3 | 9 | 1.2 | 0 | true | 20,263,978 | 0 | 3,031 | 1 | 0 | 0 | 20,263,021 | Open and write directly and efficiently excel files, for instance.
win32com uses COM communication, which, while very useful for certain purposes, needs to perform complicated API calls that can be very slow (so to speak, you are using code that controls Windows, which controls Excel).
openpyxl and the others just open an Excel file, parse it, and let you work with it.
Try to populate an excel file with 2000 rows, 100 cells each, with win32com and then with any other direct parser. While a parser needs seconds, win32com will need minutes.
Besides, openpyxl (I haven't tried the others) does not require Excel to be installed on the system. It does not even require the OS to be Windows.
Totally different concepts. win32com is a piece of art that opens a door to automate almost anything, while the other option is just a file parser. In other words, to iron your shirt, you use a $20 iron, not a 100 ton metal sheet attached to a Lamborghini Diablo. | 1 | 0 | 0 | What do third party libraries like openpyxl or xlrd/xlwt have, what win32com doesn't have? | 1 | python,excel,win32com,xlrd,openpyxl | 1 | 2013-11-28T10:04:00.000 |
trying to import python-mysql.connector on Python 3.2.3 and receiving an odd stack. I suspect bad configuration on my ubuntu 12.04 install.
vfi@ubuntu:/usr/share/pyshared$ python3
Python 3.2.3 (default, Sep 25 2013, 18:22:43)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import mysql.connector
Traceback (most recent call last):
File "", line 1, in
ImportError: No module named mysql.connector
Error in sys.excepthook:
Traceback (most recent call last):
File "/usr/share/pyshared/apport_python_hook.py", line 66, in apport_excepthook
from apport.fileutils import likely_packaged, get_recent_crashes
File "apport/__init__.py", line 1, in
from apport.report import Report
File "apport/report.py", line 20, in
import apport.fileutils
File "apport/fileutils.py", line 22, in
from apport.packaging_impl import impl as packaging
File "apport/packaging_impl.py", line 20, in
import apt
File "apt/__init__.py", line 24, in
from apt.package import Package
File "apt/package.py", line 1051
return file_list.read().decode("utf-8").split(u"\n")
^
SyntaxError: invalid syntax
Original exception was:
Traceback (most recent call last):
File "", line 1, in
ImportError: No module named mysql.connector
here is the related modules state on my pc:
vfi@ubuntu:/usr/share/pyshared$ sudo aptitude search python3-apt
i python3-apt - Python 3 interface to libapt-pkg
p python3-apt:i386 - Python 3 interface to libapt-pkg
p python3-apt-dbg - Python 3 interface to libapt-pkg (debug extension)
p python3-apt-dbg:i386 - Python 3 interface to libapt-pkg (debug extension)
v python3-apt-dbg:any -
v python3-apt-dbg:any:i386 -
v python3-apt:any -
v python3-apt:any:i386 -
vfi@ubuntu:/usr/share/pyshared$ sudo aptitude search python-apt
i python-apt - Python interface to libapt-pkg
p python-apt:i386 - Python interface to libapt-pkg
i python-apt-common - Python interface to libapt-pkg (locales)
p python-apt-dbg - Python interface to libapt-pkg (debug extension)
p python-apt-dbg:i386 - Python interface to libapt-pkg (debug extension)
v python-apt-dbg:any -
v python-apt-dbg:any:i386 -
p python-apt-dev - Python interface to libapt-pkg (development files)
p python-apt-doc - Python interface to libapt-pkg (API documentation)
v python-apt-p2p -
v python-apt-p2p-khashmir -
v python-apt:any -
v python-apt:any:i386 -
i python-aptdaemon - Python module for the server and client of aptdaemon
p python-aptdaemon-gtk - Transitional dummy package
i python-aptdaemon.gtk3widgets - Python GTK+ 3 widgets to run an aptdaemon client
p python-aptdaemon.gtkwidgets - Python GTK+ 2 widgets to run an aptdaemon client
i python-aptdaemon.pkcompat - PackageKit compatibilty for AptDaemon
p python-aptdaemon.test - Test environment for aptdaemon clients
vfi@ubuntu:/usr/share/pyshared$ sudo aptitude search python-mysql.connector
pi python-mysql.connector - pure Python implementation of MySQL Client/Server protocol
Hope you can help!
Thanks | 4 | 0 | 0 | 0 | false | 65,242,155 | 0 | 20,264 | 2 | 0 | 0 | 20,275,176 | pip3 install mysql-connector-python worked for me | 1 | 0 | 0 | ImportError: No module named mysql.connector using Python3? | 3 | mysql,python-3.x,python-module | 0 | 2013-11-28T21:46:00.000 |
trying to import python-mysql.connector on Python 3.2.3 and receiving an odd stack. I suspect bad configuration on my ubuntu 12.04 install.
vfi@ubuntu:/usr/share/pyshared$ python3
Python 3.2.3 (default, Sep 25 2013, 18:22:43)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import mysql.connector
Traceback (most recent call last):
File "", line 1, in
ImportError: No module named mysql.connector
Error in sys.excepthook:
Traceback (most recent call last):
File "/usr/share/pyshared/apport_python_hook.py", line 66, in apport_excepthook
from apport.fileutils import likely_packaged, get_recent_crashes
File "apport/__init__.py", line 1, in
from apport.report import Report
File "apport/report.py", line 20, in
import apport.fileutils
File "apport/fileutils.py", line 22, in
from apport.packaging_impl import impl as packaging
File "apport/packaging_impl.py", line 20, in
import apt
File "apt/__init__.py", line 24, in
from apt.package import Package
File "apt/package.py", line 1051
return file_list.read().decode("utf-8").split(u"\n")
^
SyntaxError: invalid syntax
Original exception was:
Traceback (most recent call last):
File "", line 1, in
ImportError: No module named mysql.connector
here is the related modules state on my pc:
vfi@ubuntu:/usr/share/pyshared$ sudo aptitude search python3-apt
i python3-apt - Python 3 interface to libapt-pkg
p python3-apt:i386 - Python 3 interface to libapt-pkg
p python3-apt-dbg - Python 3 interface to libapt-pkg (debug extension)
p python3-apt-dbg:i386 - Python 3 interface to libapt-pkg (debug extension)
v python3-apt-dbg:any -
v python3-apt-dbg:any:i386 -
v python3-apt:any -
v python3-apt:any:i386 -
vfi@ubuntu:/usr/share/pyshared$ sudo aptitude search python-apt
i python-apt - Python interface to libapt-pkg
p python-apt:i386 - Python interface to libapt-pkg
i python-apt-common - Python interface to libapt-pkg (locales)
p python-apt-dbg - Python interface to libapt-pkg (debug extension)
p python-apt-dbg:i386 - Python interface to libapt-pkg (debug extension)
v python-apt-dbg:any -
v python-apt-dbg:any:i386 -
p python-apt-dev - Python interface to libapt-pkg (development files)
p python-apt-doc - Python interface to libapt-pkg (API documentation)
v python-apt-p2p -
v python-apt-p2p-khashmir -
v python-apt:any -
v python-apt:any:i386 -
i python-aptdaemon - Python module for the server and client of aptdaemon
p python-aptdaemon-gtk - Transitional dummy package
i python-aptdaemon.gtk3widgets - Python GTK+ 3 widgets to run an aptdaemon client
p python-aptdaemon.gtkwidgets - Python GTK+ 2 widgets to run an aptdaemon client
i python-aptdaemon.pkcompat - PackageKit compatibilty for AptDaemon
p python-aptdaemon.test - Test environment for aptdaemon clients
vfi@ubuntu:/usr/share/pyshared$ sudo aptitude search python-mysql.connector
pi python-mysql.connector - pure Python implementation of MySQL Client/Server protocol
Hope you can help!
Thanks | 4 | 5 | 1.2 | 0 | true | 20,275,797 | 0 | 20,264 | 2 | 0 | 0 | 20,275,176 | Finally figured out what was my problem.
python-mysql.connector was not a Python 3 package, and neither apt-get nor aptitude was offering a Python 3 version.
I managed to install it with pip3, which was not so simple on Ubuntu 12.04 because pip3 is only bundled with Ubuntu starting at 12.10, and the package does not have the same name under pip...
vfi@ubuntu:$sudo apt-get install python3-setuptools
vfi@ubuntu:$sudo easy_install3 pip
vfi@ubuntu:$ pip --version
pip 1.4.1 from /usr/local/lib/python3.2/dist-packages/pip-1.4.1-py3.2.egg (python 3.2)
vfi@ubuntu:$sudo pip install mysql-connector-python | 1 | 0 | 0 | ImportError: No module named mysql.connector using Python3? | 3 | mysql,python-3.x,python-module | 0 | 2013-11-28T21:46:00.000 |
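For anyone following along, a minimal sanity check after the pip3 install might look like the following; the connection parameters are placeholders, not values from the original post:

import mysql.connector

conn = mysql.connector.connect(host="localhost", user="testuser",
                               password="secret", database="testdb")  # hypothetical credentials
cur = conn.cursor()
cur.execute("SELECT VERSION()")
print(cur.fetchone())
conn.close()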
I'm getting different pieces of information for a particular thing, and I'm storing that information in a dictionary,
e.g. {property1: val, property2: val, property3: val}.
Now I have several dictionaries of this type (as I get many things; each dictionary is for one thing).
Now I want to save the information in a DB, so there would be as many columns as there are key:value pairs in a dictionary.
So what is the best or simplest way to do that?
Please provide all the steps to do that (I mean the syntax to log in to the DB, push data into a row, execute an SQL query, etc. ... I hope there won't be more than 4 or 5 steps).
PS: All dictionaries have the same keys, and each key always has the same value-type . and also number of columns are predefined. | 0 | 0 | 0 | 0 | false | 20,305,193 | 1 | 861 | 2 | 0 | 0 | 20,304,863 | You're doing it wrong!
Make an object that represents a row in the database, use __getitem__ to pretend it's a dictionary.
Put your database logic in that.
Don't go all noSQL unless your tables are not related. Just by being tables they are ideal for SQL! | 1 | 0 | 1 | How to save Information in Database using BeautifulSoup | 4 | python,database,python-2.7,beautifulsoup,mysql-python | 0 | 2013-11-30T19:46:00.000 |
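A minimal sketch of the row-object idea from this answer, assuming the scraped dictionaries all share the keys property1..property3; the class and table names are invented for illustration:

class ScrapedRow:  # hypothetical name
    columns = ("property1", "property2", "property3")

    def __init__(self, **values):
        self._values = dict(values)

    def __getitem__(self, key):
        # Pretend to be a dictionary, as the answer suggests.
        return self._values[key]

    def insert_statement(self, table="things"):
        cols = ", ".join(self.columns)
        marks = ", ".join(["%s"] * len(self.columns))
        params = tuple(self._values[c] for c in self.columns)
        return "INSERT INTO %s (%s) VALUES (%s)" % (table, cols, marks), params

The returned statement and parameters can then be handed to any DB-API cursor's execute().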
I'm getting different pieces of information for a particular thing, and I'm storing that information in a dictionary,
e.g. {property1: val, property2: val, property3: val}.
Now I have several dictionaries of this type (as I get many things; each dictionary is for one thing).
Now I want to save the information in a DB, so there would be as many columns as there are key:value pairs in a dictionary.
So what is the best or simplest way to do that?
Please provide all the steps to do that (I mean the syntax to log in to the DB, push data into a row, execute an SQL query, etc. ... I hope there won't be more than 4 or 5 steps).
PS: All dictionaries have the same keys, and each key always has the same value-type . and also number of columns are predefined. | 0 | 0 | 0 | 0 | false | 20,305,076 | 1 | 861 | 2 | 0 | 0 | 20,304,863 | If your dictionaries all have the same keys, and each key always has the same value-type, it would be pretty straight-forward to map this to a relational database like MySQL.
Alternatively, you could convert your dictionaries to objects and use an ORM like SQLAlchemy to do the back-end work. | 1 | 0 | 1 | How to save Information in Database using BeautifulSoup | 4 | python,database,python-2.7,beautifulsoup,mysql-python | 0 | 2013-11-30T19:46:00.000 |
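A sketch of the SQLAlchemy route, using the classic declarative style of that era; the model, table, and connection URL are assumptions, not part of the original answer:

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Thing(Base):  # hypothetical model; columns mirror the dictionary keys
    __tablename__ = "things"
    id = Column(Integer, primary_key=True)
    property1 = Column(String(255))
    property2 = Column(String(255))
    property3 = Column(String(255))

engine = create_engine("sqlite:///things.db")  # placeholder URL; a MySQL URL works the same way
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)
session = Session()
session.add(Thing(property1="a", property2="b", property3="c"))
session.commit()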
I am hosting a web app at pythonanywhere.com and experiencing a strange problem. Every half-hour or so I am getting the OperationalError: (2006, 'MySQL server has gone away'). However, if I resave my wsgi.py file, the error disappears. And then appears again some half-an-hour later...
During the loading of the main page, my app checks a BOOL field in a 1x1 table (basically whether sign-ups should be open or closed). The only other MySQL actions are inserts into another small table, but none of these appear to be associated with the problem.
Any ideas for how I can fix this? I can provide more information as is necessary. Thanks in advance for your help.
EDIT
Problem turned out to be a matter of knowing when certain portions of code run. I assumed that every time a page loaded a new connection was opened. This was not the case; however, I have fixed it now. | 2 | 4 | 1.2 | 0 | true | 20,309,286 | 1 | 2,432 | 1 | 0 | 0 | 20,308,097 | It normally because your mysql network connect be disconnected, may by your network gateway/router, so you have two options. One is always build a mysql connect before every query (not using connect pool etc). Second is try and catch this error, then get connect and query db again. | 1 | 0 | 0 | Periodic OperationalError: (2006, 'MySQL server has gone away') | 1 | python,mysql,mysql-python,pythonanywhere | 0 | 2013-12-01T02:33:00.000 |
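A sketch of the "fresh connection per query" option the answer mentions, using the MySQLdb driver implied by the question's tags; credentials are placeholders:

import MySQLdb

def run_query(sql, params=()):
    # Opening a new connection per request avoids holding a connection long
    # enough for the server to drop it with error 2006.
    conn = MySQLdb.connect(host="localhost", user="appuser",
                           passwd="secret", db="appdb")  # hypothetical credentials
    try:
        cur = conn.cursor()
        cur.execute(sql, params)
        return cur.fetchall()
    finally:
        conn.close()

The catch-and-retry alternative would wrap the call in a try/except on MySQLdb.OperationalError and reconnect once before re-running the query.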
I have some programming background, but I'm in the process of both learning Python and making a web app, and I'm a long-time lurker but first-time poster on Stack Overflow, so please bear with me.
I know that SQLite (or another database, seems like PostgreSQL is popular) is the way to store data between sessions. But what's the most efficient way to store large amounts of data during a session?
I'm building a script to identify the strongest groups of employees to work on various projects in a company. I have received one SQLite database per department containing employee data including skill sets, achievements, performance, and pay.
My script currently runs one SQL query on each database in response to an initial query by the user, pulling all the potentially-relevant employees and their data. It stores all of that data in a list of Python dicts so the end-user can mix-and-match relevant people.
I see two other options: I could still run the comprehensive initial queries but instead of storing it in Python dicts, dump it all into SQLite temporary tables; my guess is that this would save some space and computing because I wouldn't have to store all the joins with each record. Or I could just load employee name and column/row references, which would save a lot of joins on the first pass, then pull the data on the fly from the original databases as the user requests additional data, storing little if any data in Python data structures.
What's going to be the most efficient? Or, at least, what is the most common/proper way of handling large amounts of data during a session?
Thanks in advance! | 2 | 1 | 1.2 | 0 | true | 20,320,905 | 0 | 2,649 | 1 | 0 | 0 | 20,320,642 | Aren't you over-optimizing? You don't need the best solution, you need a solution which is good enough.
Implement the simplest one, using dicts; it has a fair chance to be adequate. If you test it and then find it inadequate, try SQLite or Mongo (both have downsides) and see if it suits you better. But I suspect that buying more RAM instead would be the most cost-effective solution in your case.
(Not-a-real-answer disclaimer applies.) | 1 | 0 | 0 | What's faster: temporary SQL tables or Python dicts for session data? | 1 | python,python-2.7,sqlite,sqlalchemy | 0 | 2013-12-02T04:05:00.000 |
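A minimal sketch of the dict-based option the answer favours; the database file, table, and column names are invented:

import sqlite3

conn = sqlite3.connect("department_a.sqlite")  # one of the per-department databases
conn.row_factory = sqlite3.Row
rows = conn.execute("SELECT id, name, skills, pay FROM employees")
employees = [dict(r) for r in rows]  # list of plain dicts, ready to mix and match
conn.close()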
I have two databases (in fact, two database dumps ... db1.sql and db2.sql).
Both databases have only one table each.
In each table there are a few columns (not equal in number or type), but 1 or 2 columns have the same type and the same values.
I just want to go through both databases and find a row from each table such that the two rows share one common value.
Then, from these two rows (one from each table), I would extract some information and write it into a file.
I want an efficient method to do that.
PS: If you understood my question, please edit the title.
EDIT: I want to compare these two tables (databases) by a column that holds a contact number as the primary key.
But the problem is that one table stores it as an integer (big integer) and the other stores it as a string. Now how could I inner-join them?
Basically I don't want to create another database; I simply want to store two columns from each table into a file, so I guess I don't need an inner join. Do I?
e.g.
in table-1 = 9876543210
in table-2 = "9876543210" | 0 | 0 | 0 | 0 | false | 20,348,851 | 0 | 1,032 | 2 | 0 | 0 | 20,348,584 | Not sure if I understand what it is you want to do. You want to match a value from a column from one table to a value from a column from another table?
If you had the data in two tables in one database, you could do an inner join.
Depending on how big the file is, you could use a manual comparison tool like WinMerge. | 1 | 0 | 0 | Compare two databases and find common value in a row | 2 | python,mysql,sql,database,mysql-python | 0 | 2013-12-03T10:28:00.000 |
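If both dumps end up loaded into one MySQL server, the EDIT's integer-vs-string mismatch can be bridged with a CAST in the join condition. This is a sketch only; the table and column names are assumptions:

import MySQLdb

conn = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="merged")  # placeholder credentials
cur = conn.cursor()
cur.execute("""
    SELECT t1.contact, t1.col_a, t2.col_b
    FROM table1 AS t1
    JOIN table2 AS t2 ON CAST(t1.contact AS CHAR) = t2.contact
""")
with open("common_rows.txt", "w") as out:
    for contact, col_a, col_b in cur.fetchall():
        out.write("%s\t%s\t%s\n" % (contact, col_a, col_b))
conn.close()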
I have two databases (in fact, two database dumps ... db1.sql and db2.sql).
Both databases have only one table each.
In each table there are a few columns (not equal in number or type), but 1 or 2 columns have the same type and the same values.
I just want to go through both databases and find a row from each table such that the two rows share one common value.
Then, from these two rows (one from each table), I would extract some information and write it into a file.
I want an efficient method to do that.
PS: If you understood my question, please edit the title.
EDIT: I want to compare these two tables (databases) by a column that holds a contact number as the primary key.
But the problem is that one table stores it as an integer (big integer) and the other stores it as a string. Now how could I inner-join them?
Basically I don't want to create another database; I simply want to store two columns from each table into a file, so I guess I don't need an inner join. Do I?
e.g.
in table-1 = 9876543210
in table-2 = "9876543210" | 0 | 0 | 0 | 0 | false | 20,348,719 | 0 | 1,032 | 2 | 0 | 0 | 20,348,584 | You can use Join with alias name. | 1 | 0 | 0 | Compare two databases and find common value in a row | 2 | python,mysql,sql,database,mysql-python | 0 | 2013-12-03T10:28:00.000 |
So I was making a simple chat app with Python. I want to store user-specific data in a database, but I'm unfamiliar with efficiency considerations. I want to store usernames, public RSA keys, missed messages, missed group messages, URLs to profile pics, etc.
There's a couple of things in there that would have to be grabbed pretty often, like missed messages and profile pics and a couple of hashes. So here's the question: what database style would be fastest while staying memory efficient? I want it to be able to handle around 10k users (like that's ever gonna happen).
Here are some options I thought of:
everything in one file (might be bad on memory, and takes time to load in, which matters, as I would need to reload it after every change)
separate files per user (slower, but memory efficient)
separate files per data value
a directory for each user, with separate files for each value.
thanks,and try to keep it objective so this isnt' instantly closed! | 0 | 0 | 1.2 | 0 | true | 20,382,525 | 0 | 45 | 1 | 0 | 0 | 20,380,661 | The only answer possible at this point is 'try it and see'.
I would start with MySQL (mostly because it's the 'lowest common denominator', freely available everywhere); it should do everything you need up to several thousand users, and if you get that far you should have a far better idea of what you need and where the bottlenecks are. | 1 | 0 | 0 | efficient database file trees | 1 | python,database,performance,chat | 1 | 2013-12-04T16:27:00.000 |
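A rough MySQL schema for the data listed in the question, to make the "start with MySQL" advice concrete; all names and sizes are guesses:

import MySQLdb

conn = MySQLdb.connect(host="localhost", user="chat", passwd="secret", db="chatapp")  # placeholder credentials
cur = conn.cursor()
cur.execute("""CREATE TABLE IF NOT EXISTS users (
    id INT AUTO_INCREMENT PRIMARY KEY,
    username VARCHAR(64) UNIQUE NOT NULL,
    rsa_public_key TEXT,
    avatar_url VARCHAR(255))""")
cur.execute("""CREATE TABLE IF NOT EXISTS missed_messages (
    id INT AUTO_INCREMENT PRIMARY KEY,
    user_id INT NOT NULL,
    body TEXT,
    FOREIGN KEY (user_id) REFERENCES users(id))""")
conn.commit()
conn.close()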
Is there a way to know how many rows were commited on the last commit on a SQLAlchemy Session? For instance, if I had just inserted 2 rows, I wish to know that there were 2 rows inserted, etc. | 0 | 1 | 1.2 | 0 | true | 20,389,560 | 0 | 193 | 1 | 0 | 0 | 20,389,368 | You can look at session.new, .dirty, and .deleted to see what objects will be committed, but that doesn't necessarily represent the number of rows, since those objects may set extra rows in a many-to-many association, polymorphic table, etc. | 1 | 0 | 0 | SQLAlchemy, how many rows were commited on last commit | 1 | python,sqlalchemy | 0 | 2013-12-05T00:59:00.000 |
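A small helper illustrating the answer's point; it counts pending ORM objects, not rows, so it is only an approximation:

def count_pending(session):
    # session.new, .dirty and .deleted hold objects awaiting flush;
    # one object may translate into more than one row.
    return len(session.new) + len(session.dirty) + len(session.deleted)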
I have a giant (100 GB) CSV file with several columns and a smaller (4 GB) CSV, also with several columns. The first column in both datasets has the same category. I want to create a third CSV with the records of the big file that happen to have a matching first column in the small CSV. In database terms it would be a simple join on the first column.
I am trying to find the best approach in terms of efficiency. As the smaller dataset fits in memory, I was thinking of loading it into a set-like structure, then reading the big file line by line, checking each line against the in-memory set, and writing it to the output file on a positive match.
Just to frame the question in SO terms, is there an optimal way to achieve this?
EDIT: This is a one time operation.
Note: the language is not relevant, open to suggestions on column, row oriented databases, python, etc... | 0 | 0 | 0 | 1 | false | 20,390,085 | 0 | 154 | 1 | 0 | 0 | 20,389,982 | If you are only doing this once, your approach should be sufficient. The only improvement I would make is to read the big file in chunks instead of line by line. That way you don't have to hit the file system as much. You'd want to make the chunks as big as possible while still fitting in memory.
If you will need to do this more than once, consider pushing the data into some database. You could insert all the data from the big file and then "update" that data using the second, smaller file to get a complete database with one large table with all the data. If you use a NoSQL database like Cassandra this should be fairly efficient since Cassandra is pretty good and handling writes efficiently. | 1 | 0 | 0 | Intersecting 2 big datasets | 2 | c#,python,database,bigdata | 0 | 2013-12-05T02:00:00.000 |
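A sketch of the in-memory approach described in the question, which this answer says is sufficient for a one-off run; file names and the key column position are assumptions:

import csv

with open("small.csv") as f:
    keys = set(row[0] for row in csv.reader(f))  # 4 GB of keys should fit comfortably

with open("big.csv") as big, open("matched.csv", "w") as out:
    writer = csv.writer(out)
    for row in csv.reader(big):
        if row and row[0] in keys:
            writer.writerow(row)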
Just wondering how to store files in the Google App Engine datastore.
There are lots of examples on the internet, but they use the Blobstore.
I have tried using db.BlobProperty, but when I put() the data
it shows up as a <Blob>, I think; it appears as if there is no data,
similar to None for a string.
Are there any examples of using the Datastore to store files,
or can anyone point me in the right direction?
I am new to programming, so nothing too complex, but I have a good
grasp of Python; I'm just not an expert yet.
Thanks for any help | 0 | 0 | 0 | 0 | false | 20,424,484 | 1 | 71 | 1 | 1 | 0 | 20,421,965 | Datastore has a limit on the size of objects stored there, thats why all examples and documentation say to use the blobstore or cloud storage. Do that. | 1 | 0 | 0 | How do I store files in googleappengine datastore | 1 | python,google-app-engine,blob,google-cloud-datastore | 0 | 2013-12-06T10:47:00.000 |
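For very small files, the legacy db API can hold the bytes directly in an entity; anything near or above the ~1 MB entity limit should go to the Blobstore or Cloud Storage, as the answer says. Model and function names here are invented:

from google.appengine.ext import db

class SmallFile(db.Model):  # hypothetical model
    name = db.StringProperty()
    data = db.BlobProperty()  # only for payloads well under the ~1 MB entity limit

def save_file(name, raw_bytes):
    SmallFile(name=name, data=db.Blob(raw_bytes)).put()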
I'm using a Python script to run hourly scrapes of a website that publishes the most popular hashtags for a social media platform. They're to be stored in a database (MySQL), with each row being a hashtag and then a column for each hour that it appears in the top 20, where the number of uses within that past hour is listed.
So, the amount of rows as well as columns will constantly increase, as new hashtags appear and ones that have previously appeared resurface into the top 20.
Is there a best way to go about this? | 0 | 2 | 1.2 | 0 | true | 20,452,854 | 0 | 48 | 1 | 0 | 0 | 20,452,796 | Your design is poorly suited for a relational database such as MySQL. The best way to go about it is to either redesign your storage layout to a form that a relational database works well with (eg. make each row a (hashtag, hour) pair), or use something other than a relational database to store it. | 1 | 0 | 0 | Best way to handle a database with lots of dynamically added columns? | 1 | python,mysql | 0 | 2013-12-08T11:17:00.000 |
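One way to redesign the layout as the answer suggests, with one row per (hashtag, hour) observation; shown with sqlite3 for brevity, though the poster's MySQL table would have the same shape:

import sqlite3

conn = sqlite3.connect("hashtags.db")
conn.execute("""CREATE TABLE IF NOT EXISTS hashtag_counts (
    hashtag TEXT NOT NULL,
    observed_hour TEXT NOT NULL,
    uses INTEGER NOT NULL,
    PRIMARY KEY (hashtag, observed_hour))""")
conn.execute("INSERT OR REPLACE INTO hashtag_counts VALUES (?, ?, ?)",
             ("#example", "2013-12-08 11:00", 1234))
conn.commit()
conn.close()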