Columns (name, type, value range):
Question: string, lengths 25 to 7.47k
Q_Score: int64, 0 to 1.24k
Users Score: int64, -10 to 494
Score: float64, -1 to 1.2
Data Science and Machine Learning: int64, 0 to 1
is_accepted: bool, 2 classes
A_Id: int64, 39.3k to 72.5M
Web Development: int64, 0 to 1
ViewCount: int64, 15 to 1.37M
Available Count: int64, 1 to 9
System Administration and DevOps: int64, 0 to 1
Networking and APIs: int64, 0 to 1
Q_Id: int64, 39.1k to 48M
Answer: string, lengths 16 to 5.07k
Database and SQL: int64, 1 to 1
GUI and Desktop Applications: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
Title: string, lengths 15 to 148
AnswerCount: int64, 1 to 32
Tags: string, lengths 6 to 90
Other: int64, 0 to 1
CreationDate: string, lengths 23 to 23
Problem: I have a list of ~5000 locations with latitude and longitude coordinates called A, and a separate subset of this list called B. I want to find all locations from A that are within n miles of any of the locations in B.
Structure: All of this data is stored in a MySQL database, and requested via a Python script.
Approach: My current approach is to iterate through all locations in B, and request locations within n miles of each location, adding them to the list if they don't exist yet. This works, but in the worst case it takes a significant amount of time and is quite inefficient. I feel like there has to be a better way, but I am at a loss as to how to do it.
Ideas: Load all locations into a list in Python and calculate distances there. This would reduce the number of MySQL queries and likely speed up the operation. It would still be slow, though.
0
1
1.2
0
true
20,455,724
0
71
1
0
0
20,455,129
Load B into a Python list, and for each point calculate maxlat, minlat, maxlong, minlong such that everything outside that box is definitely outside your radius (straightforward if your radius is in nautical miles and lat/long in degrees). You can then issue an SQL query for points meeting the criteria minlat < lat < maxlat and minlong < long < maxlong. The resulting points can then be checked for exact distance and added to the in-range list if they really are in range. I would suggest doing this in multiple processes.
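A minimal sketch of that bounding-box prefilter, assuming a MySQLdb-style cursor and a locations table with lat/lon columns (table and column names are illustrative):

    import math

    def candidates_near(cursor, lat, lon, radius_miles):
        # 1 degree of latitude is roughly 69 miles; widen the longitude
        # window by cos(latitude) so the box stays roughly square.
        dlat = radius_miles / 69.0
        dlon = radius_miles / (69.0 * max(math.cos(math.radians(lat)), 1e-6))
        cursor.execute(
            "SELECT id, lat, lon FROM locations "
            "WHERE lat BETWEEN %s AND %s AND lon BETWEEN %s AND %s",
            (lat - dlat, lat + dlat, lon - dlon, lon + dlon))
        return cursor.fetchall()

    def haversine_miles(lat1, lon1, lat2, lon2):
        # Exact great-circle distance for the final, in-memory filter.
        lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
        a = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 3959.0 * 2 * math.asin(math.sqrt(a))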
1
0
0
Finding Locations within n Miles of Existing Locations
1
python,mysql,latitude-longitude
0
2013-12-08T15:29:00.000
I can convert a date read from Excel to a proper date using the xldate_as_tuple function. Is there any function which can do the reverse, i.e. convert a proper date to the float which Excel stores as a date?
2
0
0
0
false
21,302,801
0
1,405
1
0
0
20,464,887
When driving Excel through COM, dates are represented as pywintypes.Time objects. So in order to, e.g., assign the current timestamp to a cell you do: workbook.Worksheets(1).Cells(1,1).Value = pywintypes.Time(datetime.datetime.now())
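If you are writing the file with xlrd/xlwt rather than COM, xlrd also ships xlrd.xldate.xldate_from_datetime_tuple for this purpose, or you can compute the Excel serial number yourself. A sketch, assuming the 1900 date system (the usual Windows default):

    import datetime

    def to_excel_serial(dt):
        # Excel's 1900 date system counts days from an epoch of 1899-12-30
        # (the offset absorbs Excel's historical leap-year quirk).
        epoch = datetime.datetime(1899, 12, 30)
        delta = dt - epoch
        return delta.days + delta.seconds / 86400.0

    print(to_excel_serial(datetime.datetime(2013, 12, 9, 6, 57)))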
1
0
1
How to convert current date to float which is stored in excel as date?
3
python,excel,xlrd
0
2013-12-09T06:57:00.000
How do I do file uploading in TurboGears 2.3.1? I am using CrudRestController and tgext.datahelpers, and it is storing the file in the sqlite3 database but in an unknown format. I want to keep a copy of the uploaded file on the hard drive. My question is how to ensure that when a user uploads a file, it is stored both in the database and on the hard drive. (Thank you for any suggestions)
1
0
1.2
0
true
20,525,832
0
226
1
0
0
20,492,587
tgext.datahelpers uploads files to disk inside the public/attachments directory (this can be changed with tg.config['attachments_path']). So your file is already stored on disk; only the file metadata, like the URL, filename, thumbnail_url and so on, is stored in the database in JSON format.
1
0
0
file upload turbogears 2.3.1
1
python-2.7,turbogears2
0
2013-12-10T10:57:00.000
I tend to start projects that are far beyond what I am capable of doing; bad habit or a good way to force myself to learn, I don't know. Anyway, this project uses a PostgreSQL database, Python and SQLAlchemy. I am slowly learning everything from SQL to SQLAlchemy and Python. I have started to figure out models and the declarative approach, but I am wondering: what is the easiest way to populate the database with data that needs to be there from the beginning, such as an admin user for my project? How is this usually done? Edit: Perhaps this question was worded in a bad way. What I wanted to know was the possible ways to insert initial data into my database. I tried using SQLAlchemy and checking whether every item existed or not and, if not, inserting it. This seemed tedious and can't be the way to go if there is a lot of initial data. I am a beginner at this, and what better way to learn is there than to ask the people who do this regularly how they do it? Perhaps not a good fit for a question on Stack Overflow, sorry.
1
0
1.2
0
true
20,589,295
0
995
1
0
0
20,587,888
You could use a schema change management tool like liquibase. Normally this is used to keep your database schema in source control, and apply patches to update your schema. You can also use liquibase to load data from CSV files. So you could add a startup.csv file in liquibase that would be run the first time you run liquibase against your database. You can also have it run any time, and will merge data in the CSV with the database.
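If you would rather stay in SQLAlchemy, the "check then insert" pattern the question describes is usually wrapped in a small get-or-create helper so the seeding stays readable. A sketch, with model and field names invented for illustration:

    def get_or_create(session, model, defaults=None, **filters):
        # Look the row up first; create it only if it is missing.
        instance = session.query(model).filter_by(**filters).first()
        if instance is None:
            params = dict(filters)
            params.update(defaults or {})
            instance = model(**params)
            session.add(instance)
        return instance

    # seeding the initial admin user (hypothetical User model):
    # admin = get_or_create(session, User, username='admin',
    #                       defaults={'is_admin': True})
    # session.commit()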
1
0
0
Sqlalchemy, python, easiest way to populate database with data
1
python,sql,postgresql,sqlalchemy
0
2013-12-14T20:26:00.000
I am building the back-end for my web app; it would act as an API for the front-end and it will be written in Python (Flask, to be precise). After taking some decisions regarding design and implementation, I got to the database part, and I started thinking whether NoSQL data storage may be more appropriate for my project than traditional SQL databases. Following is a basic description of the functionality which should be handled by the database, then a list of pros and cons I could come up with regarding which type of storage I should opt for, and finally some words about why I have considered RethinkDB over other NoSQL data storages.
Basic functionality of the API: The API consists of only a few models: Artist, Song, Suggestion, User and UserArtists. I would like to be able to add a User with some associated data and link some Artists to it. I would like to add Songs to Artists on request, and also generate a Suggestion for a User, which will contain an Artist and a Song. Maybe one of the most important parts is that Artists will be periodically linked to Users (and also Artists can be removed from the system -- hence from Users too -- if they don't satisfy some criteria). Songs will also be dynamically added to Artists. All this means that Users don't have a fixed set of Artists, nor do Artists have a fixed set of Songs -- they will be continuously updating.
Pros for NoSQL: flexible schema, since not every Artist will have a FacebookID nor every Song a SoundcloudID; while building a JSON API, I believe I would benefit from the fact that records are stored as JSON; I believe the number of Songs, and especially Suggestions, will rise quite a bit, hence NoSQL will do a better job here. For SQL: its fixed schema may come in handy with relations between models; Flask has support for SQLAlchemy, which is very helpful in defining models.
Cons for NoSQL: relations are harder to implement, and updating models transaction-like involves a bit of code; Flask doesn't have any wrapper or module to ease things, hence I will need to implement some kind of wrapper to help me make the code more readable while doing database operations; I don't have any certainty about how I should store my records, especially UserArtists. For SQL: operations are bulky, I have to define schemas, check whether columns have defaults, assign defaults, validate data, begin/commit transactions -- I believe it's too much of a hassle for something simple like an API.
Why RethinkDB? I've considered RethinkDB for a possible implementation of NoSQL for my API because of the following: it looks simpler and more lightweight than other solutions; it has native Python support, which is a big plus; it implements table joins and other things which could come in handy in my API, which has some relations between models; it is rather new, and I see a lot of involvement and love from the community, as well as the will to continuously add new things that improve database interaction.
All these being considered, I would be glad to hear any advice on whether NoSQL or SQL is more appropriate for my needs, as well as any other pros/cons of the two, and of course some corrections on things I haven't stated properly.
11
14
1.2
0
true
20,600,546
1
2,835
1
0
0
20,597,590
I'm working at RethinkDB, but that's my unbiased answer as a web developer (at least as unbiased as I can be). Flexible schemas are nice from a developer point of view (and in your case). Like you said, with something like PostgreSQL you would have to format all the data you pull from third parties (SoundCloud, Facebook etc.). And while it's not something really hard to do, it's not something enjoyable. Being able to join tables is, for me, the natural way of doing things (like for user/userArtist/artist). While you could have a structure where a user would contain artists, it is going to be unpleasant to use when you need to retrieve artists and, for each of them, a list of users. The first point is something common in NoSQL databases, while JOIN operations are more of a SQL-database thing. You can see RethinkDB as something providing the best of both worlds. I believe that developing with RethinkDB is easy, fast and enjoyable, and that's what I am looking for as a web developer. There is however one thing that you may need and that RethinkDB does not deliver, which is transactions. If you need atomic updates on multiple tables (or documents -- like if you have to transfer money between users), you are definitely better off with something like PostgreSQL. If you just need updates on multiple tables, RethinkDB can handle that. And like you said, while RethinkDB is new, the community is amazing, and we -- at RethinkDB -- care a lot about our users. If you have more questions, I would be happy to answer them : )
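For the user/userArtist/artist case the answer mentions, a server-side join with the Python driver of that era might look like this sketch (table and field names are assumptions):

    import rethinkdb as r

    conn = r.connect('localhost', 28015, db='musicapp')
    some_user_id = 'user-123'   # hypothetical user id

    # artists that a given user follows, joined through the user_artists table
    cursor = (r.table('user_artists')
               .filter({'user_id': some_user_id})
               .eq_join('artist_id', r.table('artists'))
               .zip()
               .run(conn))

    for row in cursor:
        print(row['name'])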
1
0
0
How suitable is opting for RethinkDB instead of traditional SQL for a JSON API?
1
python,sql,database,nosql,rethinkdb
0
2013-12-15T17:37:00.000
I am working with an Oracle database with millions of rows and 100+ columns. I am attempting to store this data in an HDF5 file using pytables, with certain columns indexed. I will be reading subsets of these data into a pandas DataFrame and performing computations. I have attempted the following: download the table, using a utility, into a CSV file, read the CSV file chunk by chunk using pandas and append to an HDF5 table using pandas.HDFStore. I created a dtype definition and provided the maximum string sizes. However, now when I am trying to download data directly from the Oracle DB and post it to the HDF5 file via pandas.HDFStore, I run into some problems. pandas.io.sql.read_frame does not support chunked reading. I don't have enough RAM to be able to download the entire data to memory first. If I try to use cursor.fetchmany() with a fixed number of records, the read operation takes ages as the DB table is not indexed and I have to read records falling under a date range. I am using DataFrame(cursor.fetchmany(), columns = ['a','b','c'], dtype=my_dtype); however, the created DataFrame always infers the dtype rather than enforcing the dtype I have provided (unlike read_csv, which adheres to the dtype I provide). Hence, when I append this DataFrame to an already existing HDFDatastore, there is a type mismatch, e.g. a float64 may be interpreted as int64 in one chunk. I'd appreciate it if you guys could offer your thoughts and point me in the right direction.
12
0
0
1
false
29,225,626
0
5,171
1
0
0
20,618,523
Okay, so I don't have much experience with Oracle databases, but here are some thoughts: Your access time for any particular record from Oracle is slow because of the lack of indexing and the fact that you want the data in timestamp order. Firstly, can't you enable indexing for the database? If you can't manipulate the database, you can presumably request a found set that only includes the ordered unique ids for each row. You could potentially store this data as a single array of unique ids, and you should be able to fit it into memory. If you allow 4k for every unique key (a conservative estimate that includes overhead etc.), and you don't keep the timestamps, so it's just an array of integers, it might use up about 1.1 GB of RAM for 3 million records. That's not a whole heap, and presumably you only want a small window of active data, or perhaps you are processing row by row? Make a generator function to do all of this. That way, once you complete the iteration it should free up the memory without your having to del anything, and it also makes your code easier to follow and avoids bloating the actual important logic of your calculation loop. If you can't store it all in memory, or for some other reason this doesn't work, then the best thing you can do is work out how much you can store in memory. You can potentially split the job into multiple requests, and use multithreading to send a request once the last one has finished, while you process the data into your new file. It shouldn't use up memory until you ask for the data to be returned. Try to work out whether the delay is in the request being fulfilled or in the data being downloaded. From the sounds of it, you might be abstracting the database and letting pandas make the requests. It might be worth looking at how it's limiting the results. You should be able to make the request for all the data, but only load the results one row at a time from the database server.
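A sketch of the chunked-read idea from the question, with the dtype enforced on each chunk before appending (connection setup, column names and the dtype mapping are placeholders):

    import pandas as pd

    my_dtype = {'a': 'float64', 'b': 'int64', 'c': 'object'}   # hypothetical

    def stream_to_hdf(cursor, store_path, chunksize=50000):
        store = pd.HDFStore(store_path)
        try:
            while True:
                rows = cursor.fetchmany(chunksize)
                if not rows:
                    break
                chunk = pd.DataFrame(rows, columns=['a', 'b', 'c'])
                # Force the dtypes column by column so every chunk matches the table
                for col, dt in my_dtype.items():
                    chunk[col] = chunk[col].astype(dt)
                store.append('mytable', chunk, data_columns=True)
        finally:
            store.close()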
1
0
0
Reading a large table with millions of rows from Oracle and writing to HDF5
2
python,pandas,hdf5,pytables
0
2013-12-16T18:50:00.000
I don't even know if this is possible, but if it is, can someone give me the broad strokes of how I can use a Python script to populate a Google spreadsheet? I want to scrape data from a web site and dump it into a Google spreadsheet. I can imagine what the Python looks like (scrapy, etc.), but does the language support writing to Google Drive? Can I kick off the script from within the spreadsheet itself, or would it have to run outside of it? The ideal scenario would be to open a Google spreadsheet, click on a button, the Python script executes and the data is filled in in said spreadsheet.
0
0
0
0
false
50,629,830
1
3,149
1
0
0
20,693,168
Yes, it is possible, and this is how I am personally doing it: search for "doGet" and "doPost(e)" (the Google Apps Script web-app handlers).
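The doGet/doPost route keeps the logic in Apps Script; if you want to push values straight from the Python side instead, a library such as gspread (recent versions) is a common choice. A minimal sketch, assuming a service-account credentials file and a sheet shared with that account (file name and sheet title are assumptions):

    import gspread

    gc = gspread.service_account(filename='credentials.json')  # hypothetical path
    sh = gc.open('Scraped data')        # spreadsheet title is an assumption
    ws = sh.sheet1

    # append one scraped row per call
    ws.append_row(['2013-12-19', 'example.com', 42])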
1
0
0
Is this possible - Python script to fill a Google spreadsheet?
3
python,google-sheets
0
2013-12-19T22:45:00.000
I have deployed a simple Django application on AWS. The database I use is MySQL. Most parts of this application run well, but there happens to be a problem when I submit a form and store data from the form in a model. The error page presents: Data truncated for column 'temp' at row 1. temp is a ChoiceField like this: temp = forms.ChoiceField(label="temperature", choices=TEMP); in the model file, temp is a CharField like this: temp = models.CharField(max_length=2, choices=TEMP). The error happens at .save(). How can I fix this problem? Any advice and help is appreciated. BTW, from what I have found, the truncation problem happens because of the data type to be stored in the database, but I still cannot figure out how to modify my code.
0
1
1.2
0
true
20,712,349
1
2,106
1
0
0
20,712,174
Your column is only 2 chars wide, but you are trying to store the strings 'HIGH', 'MEDIUM', 'LOW' from your TEMP choices (the first value of each tuple is saved in the database). Increase max_length or choose different values for choices, e.g. TEMP = ( ('H', 'High'), ('M', 'Medium'), ('L', 'Low'), ). It worked fine in SQLite because SQLite simply ignores the max_length attribute (and other things).
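A sketch of the second option (short stored values, human-readable labels); the model and form names are illustrative:

    from django import forms
    from django.db import models

    TEMP = (
        ('H', 'High'),
        ('M', 'Medium'),
        ('L', 'Low'),
    )

    class Reading(models.Model):
        # max_length=2 is now large enough for every stored choice value
        temp = models.CharField(max_length=2, choices=TEMP)

    class ReadingForm(forms.Form):
        temp = forms.ChoiceField(label="temperature", choices=TEMP)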
1
0
0
Data truncated for column 'temp' at row 1
2
python,mysql,database,django,amazon-ec2
0
2013-12-20T21:24:00.000
I've started enhancing an application which was developed in Python. A Zope server has been used to deploy the application. In many modules a DB connection is established and used for DB transactions, without any connection pooling mechanism. Considering the volume of users, it is a bad design to have DB connections established for every request. Now, in order to have connection pooling, what should I do? My application uses Python 2.4, Zope 2.11.4 and MySQL 5.5. Does Zope provide any way to achieve it, e.g. configuring the DB in an external file and, inside the Python code, referring to the connection, which Zope takes care of serving from a connection pool? Or do I need to write the Python code in such a way that it is independent of the server (Zope or other), using the MySQL module for Python?
1
0
0
0
false
21,954,872
0
309
1
0
0
20,798,818
I guess you've advanced with your problem, but that is not a reason not to comment. 1) Long-term answer: seriously consider building a path to migrating to the ZODB instead of MySQL. The ZODB is integrated with Zope and is way more efficient than MySQL for storing Zope data. You can't do it all at once, but maybe you can identify part of the data that can be migrated to the ZODB first, and then do it by "clusters of data". 2) Short-term answer: I don't know which library you're using to connect to MySQL (there aren't many of them); let's say it's python-mysqldb, and the function to connect to the database is Connect. You can write your own MySQLdb module and put it before the system MySQLdb on sys.path (manipulating the sys.path of your Zope application if necessary), so your module is called instead of the system MySQLdb one. In your module, you write a Connect function that encapsulates your pooling logic and proxies everything else to the original (system) MySQLdb module. Hope I've been clear for you or anyone else having the same problem.
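A sketch of that proxy-module idea, assuming the MySQLdb driver and a simple queue-backed pool (pool size and module name are illustrative; any other names your code needs can be re-exported from the real module):

    # mysql_pooled.py -- place its directory ahead of the real driver on sys.path
    import Queue            # Python 2; on Python 3 this would be "queue"
    import MySQLdb as _real

    _POOL_SIZE = 10
    _pool = Queue.Queue()

    def connect(*args, **kwargs):
        # Reuse an idle connection when one is available, otherwise open a new one.
        try:
            return _pool.get_nowait()
        except Queue.Empty:
            return _real.connect(*args, **kwargs)

    Connect = connect       # MySQLdb exposes Connect as an alias of connect

    def release(conn):
        # Call this instead of conn.close() to hand the connection back.
        if _pool.qsize() < _POOL_SIZE:
            _pool.put(conn)
        else:
            conn.close()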
1
0
0
How to configure DB connection pooling in Python Zope server
1
python,mysql,connection-pooling,mysql-python,zope
0
2013-12-27T10:15:00.000
I am trying to import sqlalchemy.databases.sqlite.DateTimeMixIn. I get ImportError: No module named sqlite. SQLAlchemy 0.8.4 is installed. If I do import sqlite I get the same error.
0
1
0.099668
0
false
20,835,718
0
79
1
0
0
20,834,740
Sounds like the Python binary you are using wasn't compiled with the sqlite module. If you are compiling from source, make sure you have the SQLite headers available.
1
0
0
Importing SQLAlchemy DateTimeMixin raises ImportErrror
2
python,sqlite,sqlalchemy
0
2013-12-30T06:52:00.000
Is there any way to check whether a row in a table has been modified in Cassandra? I don't want to compare the date before and after updating the row in the table. After an UPDATE operation I need to verify, from a Python script, whether the query executed properly. I am using the Cassandra driver for Python.
0
0
1.2
0
true
20,928,821
0
167
1
0
0
20,855,659
If you want to verify that an update happened as planned, execute a SELECT against the updated row.
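A sketch of that read-back check with the DataStax Python driver (keyspace, table and column names are made up):

    from cassandra.cluster import Cluster

    cluster = Cluster(['127.0.0.1'])
    session = cluster.connect('mykeyspace')

    session.execute(
        "UPDATE users SET email = %s WHERE user_id = %s",
        ('new@example.com', 42))

    # Verify by selecting the row back and comparing the value we just wrote.
    row = session.execute(
        "SELECT email FROM users WHERE user_id = %s", (42,)).one()
    assert row is not None and row.email == 'new@example.com'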
1
0
0
Cassandra row update check in a table
1
python,cassandra
0
2013-12-31T10:15:00.000
I want to publish an Android application that I have developed, but I have a minor concern. The application will ship with a database file (an sqlite3 file). If updates arise in the future and these updates only target the application's functionality and not the database structure, I wish to allow users to keep their saved entries in their sqlite3 files. So what is the best practice for sending updates? Compile the APK files with the new updated code only and without the database files? Or is there any other suggestion? PS: I am not working with Java and Eclipse, but with Python for Android and the Kivy platform, which is an amazing new way of developing Android applications.
4
0
0
0
false
20,856,571
1
487
2
0
0
20,856,465
If you're using local SQLite then you have to embed the database file within the app; failing to do so means there is no database. As for updates, give the database a version number: the app can upgrade the database when the version changes, but it will not upgrade it when the version number is the same as in the previous app update.
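One way to implement that version check from Python/Kivy is SQLite's built-in user_version pragma; a sketch (the schema statements are placeholders):

    import sqlite3

    SCHEMA_VERSION = 2

    def open_db(path):
        conn = sqlite3.connect(path)
        (current,) = conn.execute("PRAGMA user_version").fetchone()
        if current < SCHEMA_VERSION:
            # apply only the migrations that are missing; user data is preserved
            if current < 1:
                conn.execute("CREATE TABLE IF NOT EXISTS notes "
                             "(id INTEGER PRIMARY KEY, body TEXT)")
            if current < 2:
                conn.execute("ALTER TABLE notes ADD COLUMN created_at TEXT")
            conn.execute("PRAGMA user_version = %d" % SCHEMA_VERSION)
            conn.commit()
        return conn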
1
1
0
Update apk file on Google Play
2
android,python-2.7,sqlite,apk,kivy
0
2013-12-31T11:14:00.000
I want to publish an Android application that I have developed, but I have a minor concern. The application will ship with a database file (an sqlite3 file). If updates arise in the future and these updates only target the application's functionality and not the database structure, I wish to allow users to keep their saved entries in their sqlite3 files. So what is the best practice for sending updates? Compile the APK files with the new updated code only and without the database files? Or is there any other suggestion? PS: I am not working with Java and Eclipse, but with Python for Android and the Kivy platform, which is an amazing new way of developing Android applications.
4
0
0
0
false
46,767,741
1
487
2
0
0
20,856,465
I had the same issue when I started my app, but since Kivy has no solution for this I created a directory outside my app directory on Android with a simple os.mkdir('../##') and put all the files there. Hope this helps!
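A sketch of that idea: keep the shipped database as a template inside the app and copy it to an external directory only when it isn't there yet, so an APK update doesn't overwrite user data (paths are illustrative):

    import os
    import shutil

    def ensure_user_db(bundled_db='data/app.db', external_dir='../appdata'):
        # Create the external directory once, outside the app directory.
        if not os.path.isdir(external_dir):
            os.mkdir(external_dir)
        target = os.path.join(external_dir, 'app.db')
        # Only copy the bundled template if the user doesn't have a database yet.
        if not os.path.exists(target):
            shutil.copyfile(bundled_db, target)
        return target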
1
1
0
Update apk file on Google Play
2
android,python-2.7,sqlite,apk,kivy
0
2013-12-31T11:14:00.000
I have a Django app that has several database backends, all connected to different instances of PostgreSQL databases. One of them is not guaranteed to always be online; it can even be offline when the application starts up. Can I somehow configure Django to use lazy connections? I would like to: try querying; return "sorry, try again later" if the database is offline; or return the results if the database is online. Is this possible?
1
2
0.379949
0
false
21,235,393
1
380
1
0
0
20,878,709
The original confusion was that Django supposedly tries to connect to its databases on startup. This is actually not true. Django does not connect to a database until some app tries to access it. Since my web application uses the auth and sites apps, it looks like it tries to connect on startup. But it's not tied to startup, it's tied to the fact that those apps access the database "early". If one defines a second database backend (non-default), then Django will not try connecting to it unless the application tries to query it. So the solution was very trivial: originally I had one database that hosted both the auth/site data and also the "real" data that I've exposed to users. I wanted to make the "real" database connection volatile, so I defined a separate psql backend for it and switched the default backend to sqlite. Now, when trying to access the "real" database through a query, I can easily wrap it with try/except and hand "Sorry, try again later" over to the user.
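A sketch of that wrapped query, catching the connection failure at request time (the model and view names are hypothetical):

    from django.db.utils import OperationalError
    from django.http import HttpResponse

    from myapp.models import Measurement   # hypothetical model bound to the volatile DB

    def report_view(request):
        try:
            rows = list(Measurement.objects.using('real').all()[:100])
        except OperationalError:
            # the "real" backend is offline right now
            return HttpResponse("Sorry, try again later", status=503)
        return HttpResponse("\n".join(str(r) for r in rows))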
1
0
0
Lazy psql connection with Django
1
python,django,django-models
0
2014-01-02T08:01:00.000
As the title suggests, I am using the s3cmd tool to upload/download files on Amazon. However I have to use Windows Server and bring in some sort of progress reporting. The problem is that on Windows, s3cmd gives me the following error: ERROR: Option --progress is not yet supported on MS Windows platform. Assuming --no-progress. Now, I need this --progress option. Are there any workarounds for that? Or maybe some other tool? Thanks.
1
2
1.2
0
true
21,165,278
1
701
1
0
1
21,017,853
OK, I have found a decent workaround to that: Just navigate to C:\Python27\Scripts\s3cmd and comment out lines 1837-1845. This way we can essentially skip a windows check and print progress on the cmd. However, since it works normally, I have no clue why the authors put it there in the first place. Cheers.
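If patching s3cmd feels too fragile, another route on Windows is to do the upload from Python with boto and hook its progress callback. A sketch, assuming boto 2.x, credentials available in the environment, and hypothetical bucket/key names:

    import sys
    import boto

    def progress(sent, total):
        # boto calls this periodically with bytes transferred so far
        pct = 100.0 * sent / total if total else 100.0
        sys.stdout.write("\r%.1f%%" % pct)
        sys.stdout.flush()

    conn = boto.connect_s3()                       # reads AWS keys from env/config
    bucket = conn.get_bucket('my-bucket')          # hypothetical bucket name
    key = bucket.new_key('backups/archive.zip')    # hypothetical key name
    key.set_contents_from_filename('archive.zip', cb=progress, num_cb=100)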
1
0
0
s3cmd tool on Windows server with progress support
2
python,windows,progress-bar,progress,s3cmd
0
2014-01-09T10:38:00.000
I am trying to install psycopg2 on Mac OS X Mavericks but it doesn't see any pg_config file. Postgres was installed via Postgres.app . I found pg_config in /Applications/Postgres.app/Contents/MacOS/bin/ and put it to setup.cfg but still can't install psycopg2. What might be wrong?
2
0
0
0
false
21,414,139
0
916
1
1
0
21,033,198
I had the same problem when I tried to install psycopg2 via Pycharm and using Postgres93.app. The installer (when running in Pycharm) insisted it could not find the pg_config file despite the fact that pg_config is on my path and I could run pg_config and psql successfully in Terminal. For me the solution was to install a clean version of python with homebrew. Navigate to the homebrew installation of Python and run pip in the terminal (rather than with Pycharm). It seems pip running in Pycharm did not see the postgres installation on my PATH, but running pip directly in a terminal resolved the problem.
1
0
0
Can't install psycopg2 on Maverick
1
python,macos,postgresql,psycopg2
0
2014-01-09T23:15:00.000
Recently I have been working on web2py with PostgreSQL. I made a few changes to my table, adding new fields with fake_migration_all = true; it did update my .table file, but the two newly added fields were not actually altered in the PostgreSQL database table. I also tried fake_migration_all = false and deleted my .table file, but it still didn't help to alter my table or add the fields to the data table. Is there a better solution available, so that I don't have to drop my data table, the fields still get altered/added in my table, and my data isn't lost?
0
0
0
0
false
21,050,586
1
535
1
0
0
21,046,136
fake_migrate_all doesn't do any actual migration (hence the "fake") -- it just makes sure the metadata in the .table files matches the current set of table definitions (and therefore the actual database, assuming the table definitions in fact match the database). If you want to do an actual migration of the database, then you need to make sure you do not have migrate_enabled=False in the call to DAL(), nor migrate=False in the relevant db.define_table() calls. Unless you explicitly set those to false, migrations are enabled by default. Always a good idea to back up your database before doing a migration.
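A sketch of what the answer describes, with migrations left enabled (the connection string and table definition are placeholders; DAL and Field come from gluon and are available automatically in web2py models):

    # in models/db.py
    db = DAL('postgres://user:pass@localhost/mydb',
             migrate_enabled=True)         # the default; shown for clarity

    db.define_table('mytable',
                    Field('name'),
                    Field('new_field_1', 'integer'),
                    Field('new_field_2', 'datetime'),
                    migrate=True)          # also the default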
1
0
0
Web2py postgreSQL database
1
python,web2py
0
2014-01-10T13:55:00.000
Forgive my ignorance, as I am new to oursql. I'm simply trying to pass a parameter to a statement: cursor.execute("select blah from blah_table where blah_field = ?", blah_variable). This treats whatever is inside blah_variable as a char array, so if I pass "hello" it will throw a ProgrammingError telling me that 1 parameter was expected but 5 were given. I've tried looking through the docs, but their examples don't use variables. Thanks!
0
1
0.099668
0
false
21,053,569
0
233
1
0
0
21,053,472
It is expecting a sequence of parameters. Use: [blah_variable]
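In other words, a string is itself a sequence, so each character gets counted as one parameter. Wrapping the value in a one-element list or tuple, continuing the question's own snippet:

    # each of these passes exactly one parameter
    cursor.execute("select blah from blah_table where blah_field = ?", [blah_variable])
    cursor.execute("select blah from blah_table where blah_field = ?", (blah_variable,))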
1
0
0
Python oursql treating a string variable as a char array
2
python,parameters,oursql
0
2014-01-10T20:06:00.000
I am deploying my Flask app to EC2; however, I get the error in my error.log file once I visit my app's URL. My extensions are present in the site-packages of my Flask environment and not in the "usr" folder of the server, yet it searches the usr folder to find the hook: File "/usr/local/lib/python2.7/dist-packages/flask/exthook.py", line 87, in load_module. The extension is actually located in /var/www/sample/flask/lib/python2.7/site-packages. How do I get over this issue?
0
0
0
0
false
21,124,613
1
2,246
1
0
0
21,107,967
You should be building your python apps in a virtualenv rather than using the system's installation of python. Try creating a virtualenv for your app and installing all of the extensions in there.
1
0
0
ImportError: No module named flask.ext.sqlalchemy
1
python,deployment,amazon-ec2,flask,flask-sqlalchemy
0
2014-01-14T07:23:00.000
What I am using: PostgreSQL and Python. I am using Python to access PostgreSQL. What I need: to receive an automatic notification, in Python, if anyone writes something to a specific table in the database. I think it is possible using a routine that polls that table at some interval and checks for changes, but that requires a loop and I would like something asynchronous. Is it possible?
21
17
1.2
0
true
21,128,034
0
17,343
1
0
0
21,117,431
donmage is quite right - LISTEN and NOTIFY are what you want. You'll still need a polling loop, but it's very lightweight, and won't cause detectable server load. If you want psycopg2 to trigger callbacks at any time in your program, you can do this by spawning a thread and having that thread execute the polling loop. Check to see whether psycopg2 enforces thread-safe connection access; if it doesn't, you'll need to do your own locking so that your polling loop only runs when the connection is idle, and no other queries interrupt a polling cycle. Or you can just use a second connection for your event polling. Either way, when the background thread that's polling for notify events receives one, it can invoke a Python callback function supplied by your main program, which might modify data structures / variables shared by the rest of the program. Beware, if you do this, that it can quickly become a nightmare to maintain. If you take that approach, I strongly suggest using the multithreading / multiprocessing modules. They will make your life massively easier, providing simple ways to exchange data between threads, and limiting modifications made by the listening thread to simple and well-controlled locations. If using threads instead of processes, it is important to understand that in cPython (i.e. "normal Python") you can't have a true callback interrupt, because only one thread may be executing in cPython at once. Read about the "global interpreter lock" (GIL) to understand more about this. Because of this limitation (and the easier, safer nature of shared-nothing by default concurrency) I often prefer multiprocessing to multithreading.
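A sketch of the lightweight polling loop the answer describes, using psycopg2's notification support (the DSN and channel name are placeholders; the NOTIFY would typically be fired by a trigger on the table):

    import select
    import psycopg2
    import psycopg2.extensions

    conn = psycopg2.connect("dbname=mydb user=me")
    conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)

    cur = conn.cursor()
    cur.execute("LISTEN table_changed;")    # channel name is an assumption

    while True:
        # wait up to 5 seconds for the connection socket to become readable
        if select.select([conn], [], [], 5) == ([], [], []):
            continue                        # timeout: nothing happened
        conn.poll()
        while conn.notifies:
            notify = conn.notifies.pop(0)
            print("Got NOTIFY:", notify.pid, notify.channel, notify.payload)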
1
0
0
How to receive automatic notifications about changes in tables?
3
python,postgresql,events,triggers,listener
0
2014-01-14T15:35:00.000
We have a database that contains personally-identifying information (PII) that needs to be encrypted. From the Python side, I can use PyCrypto to encrypt data using AES-256 and a variable salt; this results in a Base64 encoded string. From the PostgreSQL side, I can use the PgCrypto functions to encrypt data in the same way, but this results in a bytea value. For the life of me, I can't find a way to convert between these two, or to make a comparison between the two so that I can do a query on the encrypted data. Any suggestions/ideas? Note: yes, I realize that I could do all the encryption/decryption on the database side, but my goal is to ensure that any data transmitted between the application and the database still does not contain any of the PII, as it could, in theory, be vulnerable to interception, or visible via logging.
1
3
1.2
0
true
21,128,178
0
2,355
1
0
0
21,122,847
Imagine you have a Social Security Number field in your table. Users must be able to query for a particular SSN when needed. The SSN, obviously, needs to be encrypted. I can encrypt it from the Python side and save it to the database, but then in order for it to be searchable, I would have to use the same salt for every record so that I can incorporate the encrypted value as part of my WHERE clause, and that just leaves us vulnerable. I can encrypt/decrypt on the database side, but in that case, I'm sending the SSN in plain-text whenever I'm querying, which is also bad. The usual solution to this kind of issue is to store a partial value, hashed unsalted or with a fixed salt, alongside the randomly salted full value. You index the hashed partial value and search on that. You'll get false-positive matches, but still significantly benefit from DB-side indexed searching. You can fetch all the matches and, application-side, discard the false positives. Querying encrypted data is all about compromises between security and performance. There's no magic answer that'll let you send a hashed value to the server and have it compare it to a bunch of randomly salted and hashed values for a match. In fact, that's exactly why we salt our hashes - to prevent that from working, because that's also pretty much what an attacker does when trying to brute-force. So. Compromise. Either live with sending the SSNs as plaintext (over SSL) for comparison to salted & hashed stored values, knowing that it still greatly reduces exposure because the whole lot can't be dumped at once. Or index a partial value and search on that. Do be aware that another problem with sending values unhashed is that they can appear in the server error logs. Even if you don't have log_statement = all, they may still appear if there's an error, like query cancellation or a deadlock break. Sending the values as query parameters reduces the number of places they can appear in the logs, but is far from foolproof. So if you send values in the clear you've got to treat your logs as security critical. Fun!
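A sketch of the partial-value index the answer suggests: store the randomly salted full ciphertext for security, plus a deterministic hash of only the last four digits for searching (the column layout and fixed pepper value are illustrative):

    import hashlib

    SEARCH_PEPPER = b'application-wide-constant'   # hypothetical fixed value

    def ssn_search_token(ssn):
        # Deterministic digest of only the last 4 digits: indexable and
        # searchable, but reveals far less than the full value would.
        return hashlib.sha256(SEARCH_PEPPER + ssn[-4:].encode()).hexdigest()

    # at insert time: store (encrypted_full_ssn, ssn_search_token(ssn))
    # at query time: SELECT ... WHERE ssn_token = %s, then decrypt the candidate
    # rows application-side and discard the false positives.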
1
0
0
Encryption using Python and PostgreSQL
2
python,postgresql,encryption
1
2014-01-14T20:01:00.000
I have several million rows in a Sqlite database that need 5 columns updated. Each row/column value is different, so I have to update each row individually. Because of the way I'm looping through JSON from an external API, for each row I have the option of either: 1) doing 5 UPDATE operations, one per value, or 2) building a temporary dict in Python, then unpacking it into a single UPDATE operation that updates all 5 columns at once. Basically I'm trading off Python time (slower language, but in memory) for SQLite time (faster language, but on disk). Which is faster?
1
1
0.197375
0
false
21,150,081
0
71
1
0
0
21,150,012
Building a dict doesn't really take that much memory. It's much more efficient, since you'll only need to do one operation and can let SQLite handle it. Python is going to clean up the dict anyway, so this is definitely the way to go. But as @JoranBeasley mentioned in the comments, you never know until you try. Hope this helps!
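A sketch of the single-UPDATE option with sqlite3 named parameters (the table and column names are made up):

    import sqlite3

    conn = sqlite3.connect('data.db')

    def update_row(row):
        # row is the dict built from the JSON for one record,
        # e.g. {'id': 7, 'a': 1.0, 'b': 2, 'c': 'x', 'd': 0, 'e': 'y'}
        conn.execute(
            "UPDATE items SET a = :a, b = :b, c = :c, d = :d, e = :e "
            "WHERE id = :id",
            row)

    # commit once per batch, not per row, for a further speedup:
    # conn.commit()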
1
0
1
For a single Sqlite row, faster to do 5 UPDATEs or build a python dict, then 1 Update?
1
python,python-2.7,sqlite
0
2014-01-15T22:55:00.000
I am using an Oracle database, and in a certain column I need to insert strings which in some cases are larger than 4000 characters (Oracle 11g limits VARCHAR2 size to 4000). We are required to use Oracle 11g, and I know about the 12c extended mode. I would not like to use the CLOB datatype for performance considerations. The solution that I have in mind is to split the column and write a custom SQLAlchemy datatype that writes the data to the second column in case of a string larger than 4000. So, my questions are: Are we going to gain any significant performance boost from that (rather than using CLOB)? How should that SQLAlchemy type be implemented? Currently we are using types.TypeDecorator for custom types, but in this case we need to read/write two fields.
0
1
1.2
0
true
21,238,505
0
820
1
0
0
21,237,645
CLOB or NCLOB would be the best options. Avoid splitting data into columns: what would happen when you have data larger than two columns? It would fail again. It also makes maintenance a nightmare. I've seen people split data into rows in some databases just because the database would not support larger character datatypes (old Sybase versions). However, if your database has a datatype built for this purpose, by all means use it.
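If you follow that advice, the SQLAlchemy side is just a Text column, which the Oracle dialect renders as CLOB; a sketch with an illustrative model:

    from sqlalchemy import Column, Integer, Text
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Document(Base):
        __tablename__ = 'documents'
        id = Column(Integer, primary_key=True)
        # Text compiles to CLOB on Oracle, so values over 4000 characters are fine
        body = Column(Text)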
1
0
0
SQLAlchemy type containing strings larger than 4000 on Oracle using Varchar2
1
python,oracle,oracle11g,sqlalchemy
0
2014-01-20T15:20:00.000
I have a serious problem and I don't know how to solve it. I have a Win 7 64-bit laptop with MS Office 2007 installed (32-bit). I installed Anaconda 64-bit, BUT I am trying to connect to an MS Access MDB file with the ACE drivers, and I get an error that there is no driver installed. Because of MS Office 2007, I was forced to install the 32-bit ACE drivers. Any help? The same code runs perfectly under Win XP with exactly the same things installed: Anaconda, ACE drivers and MS Office 2007. Could it be a problem of mixing 32 bits and 64 bits?
1
1
1.2
0
true
21,333,377
0
217
1
0
0
21,296,441
I finally got it! Yes, the problem was mixing 32 and 64 bits. I solved the problem by installing the 64-bit Microsoft ACE drivers from a command prompt, writing: AccessDatabaseEngine_x64.exe /passive. And everything works!
1
0
1
Python on Win 7 64bits error MS Access
2
python,ms-access,ms-office,anaconda
0
2014-01-22T23:39:00.000
I have a big problem here with Python, openpyxl and Excel files. My objective is to write some calculated data to a preconfigured template in Excel. I load this template and write the data to it. There are two problems: I'm talking about writing Excel workbooks with more than 2 million cells, divided into several sheets. I do this successfully, but the waiting time is unthinkable. I don't know another way to solve this problem; maybe openpyxl is not the solution. I have tried to write in xlsb, but I think openpyxl does not support that format. I have also tried the optimized writer and reader, but the problem comes when I save, due to the large amount of data. However, the output file size is 10 MB at most. I'm very stuck with this. Do you know if there is another way to do this? Thanks in advance.
5
4
1.2
0
true
21,352,070
0
5,110
1
0
0
21,328,884
The file size isn't really the issue when it comes to memory use; it's the number of cells in memory. Your use case really will push openpyxl to its limits at the moment: it is currently designed to support either optimised reading or optimised writing, but not both at the same time. One thing you might try would be to read in openpyxl with use_iterators=True; this will give you a generator that you can feed to xlsxwriter, which should be able to write a new file for you. xlsxwriter is currently significantly faster than openpyxl when creating files. The solution isn't perfect, but it might work for you.
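A rough sketch of that read-iterate-rewrite pipeline (use_iterators=True matches the openpyxl of that era; current releases call the flag read_only=True, and the exact cell attribute names vary slightly between openpyxl versions):

    import openpyxl
    import xlsxwriter

    src = openpyxl.load_workbook('template_filled.xlsx', use_iterators=True)
    out = xlsxwriter.Workbook('output.xlsx', {'constant_memory': True})

    for sheet in src.worksheets:
        target = out.add_worksheet(sheet.title)
        for r, row in enumerate(sheet.iter_rows()):
            # write one row at a time; constant_memory keeps RAM usage flat
            target.write_row(r, 0, [cell.value for cell in row])

    out.close()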
1
0
0
openpyxl: writing large excel files with python
1
python,excel,openpyxl
0
2014-01-24T09:25:00.000
Is it possible to use custom _id fields with Django and MongoEngine? The problem is, if I try to save a string to the _id field it throws an Invalid ObjectId error. What I want to do is use my own IDs. This was never a problem without Django, because I caught the DuplicateKeyError on creation if a given id already existed (which was even necessary to tell the program that this ID is already taken). Now it seems as if Django/MongoEngine won't even let me create a custom _id field :-/ Is there any way to work around this without creating a second field for the ID and letting the _id field create itself? Greetings, Codehai
0
6
1.2
0
true
21,498,341
1
2,413
1
0
0
21,370,889
You can set the parameter primary_key=True on a Field. This will make the target Field your _id Field.
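A sketch of what that looks like in a MongoEngine document (the class and field names are examples):

    from mongoengine import Document, StringField

    class Item(Document):
        # becomes the _id field in MongoDB, so you control the value yourself
        item_id = StringField(primary_key=True)
        name = StringField()

    # Item(item_id='my-custom-id', name='first').save(force_insert=True)
    # a second save with the same item_id and force_insert=True raises
    # NotUniqueError, mirroring the DuplicateKeyError check described above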
1
0
0
custom _id fields Django MongoDB MongoEngine
1
python,django,mongodb,mongoengine
0
2014-01-26T23:51:00.000
I have Python 2.7 32-bit running on a Windows 8.1 64-bit machine. I have Access 2013 and a .accdb file that I'm trying to access from Python and pyodbc. I can create a 64-bit DSN in the 64-bit ODBC manager. However, when I try to connect to it from Python, I get the error: Error: (u'IM002', u'[IM002] [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified') Presumably, Python is only looking for 32-bit DSNs and doesn't find the 64-bit one that I've created. When I try to create a 32-bit DSN within the 32-bit ODBC manager, there is no driver for an accdb file (just .mdb). I think I need a 32-bit ODBC driver for Access 2013 files (.accdb), but haven't been able to find one. Is it possible to do what I'm trying to do -- have 32-bit Python access an Access 2013 .accdb file?
4
2
0.132549
0
false
21,393,854
0
10,567
1
0
0
21,393,558
Trial and error showed that installing the "Access Database Engine" 2007 seemed to create 32-bit ODBC source for Access accdb files.
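Once a matching 32-bit ACE driver is installed, you can also skip the DSN entirely and use a driver-based connection string; a sketch with a placeholder path and table name:

    import pyodbc

    conn = pyodbc.connect(
        r'DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};'
        r'DBQ=C:\data\mydatabase.accdb;')            # hypothetical file path
    cursor = conn.cursor()
    cursor.execute("SELECT TOP 5 * FROM SomeTable")   # SomeTable is illustrative
    for row in cursor.fetchall():
        print(row)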
1
0
0
32 bit pyodbc reading 64 bit access (accdb)
3
python,ms-access,odbc
0
2014-01-27T23:03:00.000
I would like to integrate a Python application and a PHP application for data access. I have a Python app that stores data in its own database; now I want to access the data from the Python application's database in the PHP application. Which methods are used for PHP-Python integration? Thanks
0
0
0
0
false
21,410,252
0
561
1
0
0
21,399,625
The easiest way to accomplish this is to build a private API for your PHP app to access your Python app. For example, if using Django, make a page that takes several parameters and returns JSON-encoded information. Load that into your PHP page, use json_decode, and you're all set.
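The answer mentions Django; the same private-API idea takes only a few lines of Flask, as a sketch (the endpoint, database file and query are made up):

    from flask import Flask, jsonify
    import sqlite3

    app = Flask(__name__)

    @app.route('/api/items')
    def items():
        conn = sqlite3.connect('pythonapp.db')   # hypothetical database file
        rows = conn.execute("SELECT id, name FROM items").fetchall()
        conn.close()
        return jsonify(items=[{'id': r[0], 'name': r[1]} for r in rows])

    # The PHP side fetches this URL and calls json_decode() on the body.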
1
0
0
Integration of PHP-Python applications
1
php,python,web-services,integration
1
2014-01-28T07:43:00.000
I'm using Flask-Babel for translating string. In some templates I'm reading the strings from the database(postgresql). How can I translate the strings from the database using Flask-Babel?
9
2
0.197375
0
false
22,099,629
1
1,789
1
0
0
21,497,489
It's not possible to use Babel for database translations, as database content is dynamic and Babel translations are static (they don't change). If you read the strings from the database, you must save the translations in the database as well. You can create a translation table, something like (locale, source, destination), and get the translated values with a query.
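A sketch of that lookup table with Flask-SQLAlchemy (the model and helper names are invented):

    from flask_sqlalchemy import SQLAlchemy

    db = SQLAlchemy()

    class Translation(db.Model):
        id = db.Column(db.Integer, primary_key=True)
        locale = db.Column(db.String(8), index=True)
        source = db.Column(db.Text, index=True)
        destination = db.Column(db.Text)

    def translate(text, locale):
        # fall back to the original string when no translation is stored
        row = Translation.query.filter_by(source=text, locale=locale).first()
        return row.destination if row else text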
1
0
0
translating strings from database flask-babel
2
python,flask,python-babel,flask-babel
0
2014-02-01T11:31:00.000
I'm having trouble in establishing an ideal setup where I can distinguish between production and test environment for my django app. I'm using a postgresql database that stores a relative file path to a s3 bucket after I upload an image. Am I supposed to make a production copy of all the files in the s3 bucket and connect my current development code to this static directory to do testing? I certainly don't want to connect to production ... What's best practice in this situation? Also I may be doing things wrong here by having the file path in a postgresql database. Would it be more ideal to have some foreign key to a mongodb table which then holds the file path for the file path in aws s3? Another best practice question is how should the file path should be organized? Should I just organize the file path like the following: ~somebucket/{userName}/{date}/{fileNameName} OR ~somebucket/{userName}/{fileName} OR ~somebucket/{fileName} OR ~somebucket/{date}/{userName}/{fileNameName} OR ~somebucket/{fileName} = u1234d20140101funnypic.png ?? This is really confusing for me on how to build an ideal way to store static files for development and production. Any better recommendations would be greatly appreciated. Thanks for your time :)
0
1
1.2
0
true
21,518,701
1
41
1
0
0
21,518,268
It's good to have different settings for production and dev. You can just create a settings folder and have settings modules, maybe prod.py and dev.py. This lets you use different apps in each -- e.g. you don't actually need the debug toolbar in prod. And regarding the files, I feel you don't have to worry about the path structure as such; you can always refer to the ETag to get the file (the md5 hash of the object).
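A sketch of that settings layout (module names and values are illustrative; the active module is chosen with DJANGO_SETTINGS_MODULE):

    # settings/dev.py -- development overrides; prod.py mirrors this with its own values
    import os
    from .base import *            # base.py holds everything shared

    DEBUG = True
    INSTALLED_APPS = INSTALLED_APPS + ['debug_toolbar']

    # local file storage while developing; prod.py would instead point
    # DEFAULT_FILE_STORAGE at an S3 backend such as django-storages
    MEDIA_ROOT = os.path.join(BASE_DIR, 'media')

    # run with: DJANGO_SETTINGS_MODULE=myproject.settings.dev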
1
0
0
How should I set up my dev enviornment for a django app so that I can pull on static s3 files?
1
python,django,mongodb,postgresql,amazon-s3
0
2014-02-03T00:53:00.000
I have a Model class which is part of my self-crafted ORM. It has all kinds of methods like save(), create() and so on. Now, the thing is that all these methods require a connection object to act properly, and I have no clue as to the best approach for feeding a Model object with a connection object. What I thought of so far: provide a connection object in the Model's __init__(); this will work, by setting an instance variable and using it throughout the methods, but it will kind of break the API; users shouldn't always have to feed a connection object when they create a Model object; create the connection object separately, store it somewhere (where?) and in the Model's __init__() get the connection from where it has been stored and put it in an instance variable (this is what I thought to be the best approach, but I have no idea of the best spot to store that connection object); create a connection pool which will be fed with the connection object, then in the Model's __init__() fetch the connection from the connection pool (how do I know which connection to fetch from the pool?). If there are any other approaches, please do tell. Also, I would like to know the proper way to do this.
0
1
1.2
0
true
21,651,170
1
76
1
0
0
21,650,889
Here's how I would do it: use a connection pool with a queue interface. You don't have to choose a connection object, you just pick the next one in line. This can be done whenever you need a transaction, and the connection is put back afterwards. Unless you have some very specific needs, I would use a Singleton class for the database connection -- no need to pass parameters in the constructor every time. For testing, you just put a mocked database connection in the Singleton class. Edit: about the connection pool questions (I could be wrong here, but it would be my first try): Keep all connections open. Pop one when you need it, put it back when you don't need it anymore, just like a regular queue. This queue could be exposed from the Singleton. You start with a fixed, default number of connections (like 20). You could override the pop method, so that when the queue is empty you block (wait for another connection to be freed if the program is multi-threaded) or create a new connection on the fly. Destroying connections is more subtle. You need to keep track of how many connections the program is using, and how likely it is that you have too many connections. Take care, because destroying a connection that will be needed later slows the program down. In the end, it's a heuristic problem that changes the performance characteristics.
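A sketch of that queue-backed singleton pool (the connection factory and pool size are placeholders):

    import queue

    class ConnectionPool(object):
        _instance = None

        def __new__(cls):
            # classic singleton: every caller gets the same pool instance
            if cls._instance is None:
                cls._instance = super(ConnectionPool, cls).__new__(cls)
                cls._instance._pool = queue.Queue()
                for _ in range(20):                           # default pool size
                    cls._instance._pool.put(make_connection())  # hypothetical factory
            return cls._instance

        def acquire(self, timeout=None):
            # blocks when the pool is empty until a connection is released
            return self._pool.get(timeout=timeout)

        def release(self, conn):
            self._pool.put(conn)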
1
0
0
Getting connection object in generic model class
1
python,database-connection
0
2014-02-08T19:39:00.000
I'm creating a Google App Engine application (python) and I'm learning about the general framework. I've been looking at the tutorial and documentation for the NDB datastore, and I'm having some difficulty wrapping my head around the concepts. I have a large background with SQL databases and I've never worked with any other type of data storage system, so I'm thinking that's where I'm running into trouble. My current understanding is this: The NDB datastore is a collection of entities (analogous to DB records) that have properties (analogous to DB fields/columns). Entities are created using a Model (analogous to a DB schema). Every entity has a key that is generated for it when it is stored. This is where I run into trouble because these keys do not seem to have an analogy to anything in SQL DB concepts. They seem similar to primary keys for tables, but those are more tightly bound to records, and in fact are fields themselves. These NDB keys are not properties of entities, but are considered separate objects from entities. If an entity is stored in the datastore, you can retrieve that entity using its key. One of my big questions is where do you get the keys for this? Some of the documentation I saw showed examples in which keys were simply created. I don't understand this. It seemed that when entities are stored, the put() method returns a key that can be used later. So how can you just create keys and define ids if the original keys are generated by the datastore? Another thing that I seem to be struggling with is the concept of ancestry with keys. You can define parent keys of whatever kind you want. Is there a predefined schema for this? For example, if I had a model subclass called 'Person', and I created a key of kind 'Person', can I use that key as a parent of any other type? Like if I wanted a 'Shoe' key to be a child of a 'Person' key, could I also then declare a 'Car' key to be a child of that same 'Person' key? Or will I be unable to after adding the 'Shoe' key? I'd really just like a simple explanation of the NDB datastore and its API for someone coming from a primarily SQL background.
17
13
1.2
0
true
21,658,988
1
7,423
1
1
0
21,655,862
I think you've overcomplicating things in your mind. When you create an entity, you can either give it a named key that you've chosen yourself, or leave that out and let the datastore choose a numeric ID. Either way, when you call put, the datastore will return the key, which is stored in the form [<entity_kind>, <id_or_name>] (actually this also includes the application ID and any namespace, but I'll leave that out for clarity). You can make entities members of an entity group by giving them an ancestor. That ancestor doesn't actually have to refer to an existing entity, although it usually does. All that happens with an ancestor is that the entity's key includes the key of the ancestor: so it now looks like [<parent_entity_kind>, <parent_id_or_name>, <entity_kind>, <id_or_name>]. You can now only get the entity by including its parent key. So, in your example, the Shoe entity could be a child of the Person, whether or not that Person has previously been created: it's the child that knows about the ancestor, not the other way round. (Note that that ancestry path can be extended arbitrarily: the child entity can itself be an ancestor, and so on. In this case, the group is determined by the entity at the top of the tree.) Saving entities as part of a group has advantages in terms of consistency, in that a query inside an entity group is always guaranteed to be fully consistent, whereas outside the query is only eventually consistent. However, there are also disadvantages, in that the write rate of an entity group is limited to 1 per second for the whole group.
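A sketch of those key shapes in ndb (kind names and id values are examples):

    from google.appengine.ext import ndb

    class Person(ndb.Model):
        name = ndb.StringProperty()

    class Shoe(ndb.Model):
        size = ndb.IntegerProperty()

    person_key = ndb.Key('Person', 'alice')   # named key; the entity need not exist yet
    shoe = Shoe(parent=person_key, size=42)
    shoe_key = shoe.put()                     # -> Key('Person', 'alice', 'Shoe', <auto id>)

    # the same Person key can parent other kinds too, e.g. Car(parent=person_key)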
1
0
0
Simple explanation of Google App Engine NDB Datastore
2
python,google-app-engine,app-engine-ndb
0
2014-02-09T05:53:00.000
I am just beginning to learn Django and working through the tutorial, so sorry if this is very obvious. I already have a set of Python scripts whose ultimate result is an sqlite3 db that gets constantly updated; is Django the right tool for turning this sqlite db into something like a pretty HTML table for a website? I can see that Django uses an sqlite db for managing groups/users and data from its apps (like the polls app in the tutorial), but I'm not yet sure where my external sqlite db, driven by my other scripts, fits into the grand scheme of things. Would I have to modify my external Python scripts to write out to a table in the Django db (db.sqlite3 in the Django project dir, in the tutorial at least), then make a Django model based on my database structure and fields? Basically, I think my question boils down to: 1) Do I need to create a Django model based on my db, then access the one and only Django "project db", and have my external script write into it? 2) Or can Django somehow utilise a separate db driven by another script? 3) Finally, is Django the right tool for such a task, before I invest weeks of reading?
0
1
1.2
0
true
21,768,188
1
1,298
1
0
0
21,767,229
If you care about taking control over every single aspect of how you want to render your data in HTML and serve it to others, Then for sure Django is a great tool to solve your problem. Django's ORM models make it easier for you to read and write to your database, and they're database-agnostic. Which means that you can reuse the same code with a different database (like MySQL) in the future. So, to wrap it up. If you're planning to do more development in the future, then use Django. If you only care about creating these HTML pages once and for all, then don't. PS: With Django, you can easily integrate these scripts into your Django project as management commands, run them with cronjobs and integrate everything you develop together with a unified data access layer.
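For completeness, Django can also point at your existing SQLite file as a second database and wrap its table in an unmanaged model, so your scripts keep writing where they do now; a sketch (names and paths are assumptions):

    # settings.py
    DATABASES = {
        'default': {'ENGINE': 'django.db.backends.sqlite3', 'NAME': 'db.sqlite3'},
        'external': {'ENGINE': 'django.db.backends.sqlite3',
                     'NAME': '/path/to/script_output.db'},   # hypothetical path
    }

    # models.py
    from django.db import models

    class Result(models.Model):
        value = models.FloatField()

        class Meta:
            managed = False          # Django never creates or migrates this table
            db_table = 'results'     # existing table name in the script's db

    # queries then go through the second connection:
    # Result.objects.using('external').all()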
1
0
0
Django and external sqlite db driven by python script
2
python,django,sqlite
0
2014-02-13T22:48:00.000
I am trying to serve up some user-uploaded files with Flask, and have an odd problem, or at least one that I couldn't turn up any solutions for by searching. I need the files to retain their original filenames after being uploaded, so they will have the same name when the user downloads them. Originally I did not want to deal with databases at all, and solved the problem of filename conflicts by storing each file in a randomly named folder, and just pointing to that location for the download. However, stuff came up later that required me to use a database to store some info about the files, but I still kept my old method of handling filename conflicts. I have a model for my files now and storing the name would be as simple as just adding another field, so that shouldn't be a big problem. I decided, pretty foolishly after I had written the implementation, on using Amazon S3 to store the files. Apparently S3 does not deal with folders in the way a traditional filesystem does, and I do not want to deal with the surely convoluted task of figuring out how to create folders programmatically on S3; in retrospect, this was a stupid way of dealing with the problem in the first place, when stuff like SQLAlchemy exists that makes databases easy as pie. Anyway, I need a way to store multiple files with the same name on S3, without using folders. I thought of just renaming the files with a random UUID after they are uploaded, and then when they are downloaded (the user visits a page and presses a download button, so I need not have the filename in the URL), telling the browser to save the file as its original name retrieved from the database. Is there a way to implement this in Python with Flask? When it is deployed I am planning on having the web server handle the serving of files; will it be possible to do something like this with the server? Or is there a smarter solution?
0
0
1.2
0
true
21,817,783
1
143
1
0
0
21,807,032
I'm stupid. Right in the Flask API docs it says you can include the parameter attachment_filename in send_from_directory if it differs from the filename in the filesystem.
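A sketch of that download view (the Flask of that era uses attachment_filename; newer releases renamed it to download_name; the model and storage directory are placeholders):

    from flask import Flask, send_from_directory

    app = Flask(__name__)
    UPLOAD_DIR = '/srv/uploads'            # hypothetical storage directory

    @app.route('/download/<int:file_id>')
    def download(file_id):
        record = FileRecord.query.get_or_404(file_id)   # hypothetical model
        return send_from_directory(
            UPLOAD_DIR,
            record.stored_name,                          # the random UUID on disk
            as_attachment=True,
            attachment_filename=record.original_name)    # name shown to the user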
1
0
0
Is there a way to tell a browser to download a file as a different name than as it exists on disk?
1
python,amazon-s3,flask
0
2014-02-16T03:48:00.000
Maybe I've got this wrong: is there a way to automatically create the target table for a tabledata.insertAll command? If yes, please point me in the right direction. If not, what is the best approach to create the tables needed? Check for existing tables on startup and create the ones that do not exist by loading from GCS? Or can they be created directly from code without a load job? I have a number of event classes (Python Cloud Endpoints) defined, and the perfect solution would be using those definitions to create matching BQ tables.
3
4
1.2
0
true
21,868,123
0
973
1
0
0
21,830,868
There is no way to create a table automatically during streaming, since BigQuery doesn't know the schema. JSON data that you post doesn't have type information -- if there is a field "123" we don't know if that will always be a string or whether it should actually be an integer. Additionally, if you post data that is missing an optional field, the schema that got created would be narrower than the one you wanted. The best way to create the table is with a tables.insert() call (no need to run a load job to load data from GCS). You can provide exactly the schema you want, and once the table has been created you can stream data to it. In some cases, customers pre-create a month worth of tables, so they only have to worry about it every 30 days. In other cases, you might want to check on startup to see if the table exists, and if not, create it.
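A sketch of that tables.insert() call with the google-api-python-client of the time (project, dataset and schema are placeholders; credential setup is omitted):

    def create_events_table(bigquery_service, project_id, dataset_id):
        body = {
            'tableReference': {
                'projectId': project_id,
                'datasetId': dataset_id,
                'tableId': 'events',
            },
            'schema': {
                'fields': [
                    {'name': 'event_name', 'type': 'STRING'},
                    {'name': 'user_id', 'type': 'STRING'},
                    {'name': 'created_at', 'type': 'TIMESTAMP'},
                ],
            },
        }
        return bigquery_service.tables().insert(
            projectId=project_id, datasetId=dataset_id, body=body).execute()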
1
0
0
Auto-create BQ tables for streaming inserts
1
python,google-bigquery
0
2014-02-17T13:51:00.000
I tried to use pymsql with sqlalchemy using this code: from sqlalchemy import create_engine engine = create_engine("mysql+pymsql://root:@localhost/pydb") conn = engine.connect() and this exception is raised; here is the full stack trace: Traceback (most recent call last): File "D:\Parser\dal\__init__.py", line 3, in engine = create_engine("mysql+pymsql://root:@localhost/pydb") File "C:\Python33\lib\site-packages\sqlalchemy-0.9.2-py3.3.egg\sqlalchemy\engine\__init__.py", line 344, in create_engine File "C:\Python33\lib\site-packages\sqlalchemy-0.9.2-py3.3.egg\sqlalchemy\engine\strategies.py", line 48, in create File "C:\Python33\lib\site-packages\sqlalchemy-0.9.2-py3.3.egg\sqlalchemy\engine\url.py", line 163, in make_url File "C:\Python33\lib\site-packages\sqlalchemy-0.9.2-py3.3.egg\sqlalchemy\engine\url.py", line 183, in _parse_rfc1738_args File "C:\Python33\lib\re.py", line 214, in compile return _compile(pattern, flags) File "C:\Python33\lib\re.py", line 281, in _compile p = sre_compile.compile(pattern, flags) File "C:\Python33\lib\sre_compile.py", line 498, in compile code = _code(p, flags) File "C:\Python33\lib\sre_compile.py", line 483, in _code _compile(code, p.data, flags) File "C:\Python33\lib\sre_compile.py", line 75, in _compile elif _simple(av) and op is not REPEAT: File "C:\Python33\lib\sre_compile.py", line 362, in _simple raise error("nothing to repeat") sre_constants.error: nothing to repeat
0
0
0
0
false
21,866,204
0
1,686
1
0
0
21,853,660
Drop the : from your connection string after your username, and note that the dialect name is also misspelled; it should be pymysql, not pymsql. The URL should instead be mysql+pymysql://root@localhost/pydb
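For reference, a corrected version of the snippet from the question (database name kept as in the post):

```python
from sqlalchemy import create_engine

# dialect is "pymysql", and a password-less user needs no ":" after the username
engine = create_engine("mysql+pymysql://root@localhost/pydb")
conn = engine.connect()
```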
1
0
0
Error when trying to use pymysql with sqlalchemy sre_constants.error: nothing to repeat
2
python,sqlalchemy,pymysql
0
2014-02-18T12:20:00.000
I have created an excel sheet using XLWT plugin using Python. Now, I need to re-open the excel sheet and append new sheets / columns to the existing excel sheet. Is it possible by Python to do this?
1
2
0.197375
0
false
22,414,279
0
8,628
1
0
0
21,856,559
You read in the file using xlrd, and then 'copy' it to an xlwt Workbook using xlutils.copy.copy(). Note that you'll need to install both xlrd and xlutils libraries. Note also that not everything gets copied over. Things like images and print settings are not copied, for example, and have to be reset.
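A rough sketch of that workflow; "report.xls" and the sheet/cell positions are placeholders:

```python
import xlrd
from xlutils.copy import copy as copy_workbook

# formatting_info=True keeps as much of the original formatting as xlrd can read
rb = xlrd.open_workbook("report.xls", formatting_info=True)
wb = copy_workbook(rb)                  # an xlwt Workbook mirroring the original

ws = wb.add_sheet("NewSheet")           # append a new sheet...
ws.write(0, 0, "added later")
wb.get_sheet(0).write(0, 5, "new col")  # ...or write new columns on an existing one

wb.save("report.xls")                   # overwrites the file with the merged content
```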
1
0
0
How to append to an existing excel sheet with XLWT in Python
2
python,xlwt
0
2014-02-18T14:20:00.000
I'm sometimes using a TextField to store data with a structure that may change often (or very complex data) into model instances, instead of modelling everything with the relational paradigm. I could mostly achieve the same kind of things using more models, foreignkeys and such, but it sometimes feels more straightforward to store JSON directly. I still didn't delve into postgres JSON type (can be good for read-queries notably, if I understand well). And for the moment I perform some json.dumps and json.loads each time I want to access this kind of data. I would like to know what are (theoretically) the performance and caching drawbacks of doing so (using JSON type and not), compared to using models for everything. Having more knowledge about that could help me to later perform some clever comparison and profiling to enhance the overall performance.
1
3
0.53705
0
false
21,909,779
1
1,302
1
0
0
21,908,068
Storing data as json (whether in text-typed fields, or PostgreSQL's native jsontype) is a form of denormalization. Like most denormalization, it can be an appropriate choice when working with very difficult to model data, or where there are serious performance challenges with storing data fully normalized into entities. PostgreSQL reduces the impact of some of the problems caused by data denormalization by supporting some operations on json values in the database - you can iterate over json arrays or key/value pairs, join on the results of json field extraction, etc. Most of the useful stuff was added in 9.3; in 9.2, json support is just a validating data type. In 9.4, much more powerful json features will be added, including some support for indexing in json values. There's no simple one-size-fits all answer to your question, and you haven't really characterized your data or your workload. Like most database challenges "it depends" on what you're doing with the data. In general, I would tend to say it's best to relationally model the data if it is structured and uniform. If it's unstructured and non-uniform, storage with something like json may be more appropriate.
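A minimal sketch of the json.dumps/json.loads wrapping the question describes, on a plain TextField; the Event model and field names are hypothetical:

```python
import json
from django.db import models

class Event(models.Model):              # hypothetical model
    payload_json = models.TextField(default="{}")

    @property
    def payload(self):
        return json.loads(self.payload_json)

    @payload.setter
    def payload(self, value):
        self.payload_json = json.dumps(value)

# usage
e = Event()
e.payload = {"kind": "signup", "tags": ["a", "b"]}
e.save()
```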
1
0
1
Django & postgres - drawbacks of storing data as json in model fields
1
python,json,django,postgresql
0
2014-02-20T12:38:00.000
I have a django app which provides a rest api using Django-rest-framework. The API is used by clients as expected, but I also have another process(on the same node) that uses Django ORM to read the app's database, which is sqlite3. Is it better architecture for the process to use the rest api to interact(only reads) with the app's database? Or is there a better, perhaps more efficient way than making a ton of HTTP requests from the same node? The problem with the ORM approach(besides the hacky nature) is that occasionally reads fail and must be retried. Also, I want to write to the app's db which would probably causes more sqlite concurrency issues.
0
0
0
0
false
21,914,906
1
138
1
0
0
21,912,993
It depends on what your application is doing. If your REST application reads a piece of data from SQLITE using the Django ORM and then the other app does a write you can run into some interesting race situations. To prevent that it might make sense to have both these applications as django-app in a single Django project.
1
0
0
SOA versus Django ORM with multiple processes
1
python,django,sqlite,rest,orm
0
2014-02-20T15:57:00.000
I am trying to design an app that uses Google AppEngine to store/process/query data that is then served up to mobile devices via Cloud Endpoints API in as real time as possible. It is straight forward enough solution, however I am struggling to get the right balance between, performance, cost and latency on AppEngine. Scenario (analogy) is a user checks-in (many times per day from different locations, cities, countries), and we would like to allow the user to query all the data via their device and provide as up to date information as possible. Such as: The number of check-ins over the last: 24 hours 1 week 1 month All time Where is the most checked in place/city/country over the same time periods Where is the least checked in place over the same time periods Other similar querying reports We can use Memcache to store the most recent checkins, pushing to the Datastore every 5 minutes, but this may not scale very well and is not robust! Use a Cron job to run the Task Queue/Map Reduce to get the aggregates, averages for each location every 30 mins and update the Datastore. The challenge is to use as little read/writes over the datastore because the last "24 hours" data is changing every 5 mins, and hence so is the last weeks data, last months data and so on. The data has to be dynamic to some degree, so it is not fixed points in time, they are always changing - here in lies the issue! It is not a problem to set this up, but to set it up in an efficient manner, balancing performance/latency for the user and cost/quotas for us is not so easy! The simple solution would be to use SQL, and run date range queries but this will not scale very well. We could eventually use BigTable & BigQuery for the "All time" time period querying, but in order to give the users as real-time as possible data via the API for the other time periods is proving quite the challenge! Any suggestions of AppEngine architecture/approaches would be seriously welcomed. Many thanks.
1
0
0
0
false
21,962,823
1
173
1
1
0
21,941,030
First, writes to the datastore take milliseconds. By the time your user hits the refresh button (or whatever you offer), the data will be as "real-time" as it gets. Typically, developers become concerned with real-time when there is a synchronization/congestion issue, i.e. each user can update something (e.g. bid on an item), and all users have to get the same data (the highest bid) in real time. In your case, what's the harm if a user gets the number of check-ins which is 1 second old? Second, data in Memcache can be lost at any moment. In your proposed solution (update the datastore every 5 minutes), you risk losing all data for the 5 min period. I would rather use Memcache in the opposite direction: read data from datastore, put it in Memcache with 60 seconds (or more) expiration, serve all users from Memcache, then refresh it. This will minimize your reads. I would do it, of course, unless your users absolutely must know how many checkins happened in the last 60 seconds. The real question for you is how to model your data to optimize writes. If you don't want to lose data, you will have to record every checkin in datastore. You can save by making sure you don't have unnecessary indexed fields, separate out frequently updated fields from the rest, etc.
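A hedged sketch of that read-through pattern with the App Engine memcache API; CheckIn and day_ago are placeholders for the asker's own model and helper:

```python
from google.appengine.api import memcache

CACHE_KEY = "checkins_last_24h"

def checkin_count_last_24h():
    count = memcache.get(CACHE_KEY)
    if count is None:
        # one datastore read, then everyone is served from memcache for 60 seconds
        count = CheckIn.query(CheckIn.timestamp > day_ago()).count()
        memcache.set(CACHE_KEY, count, time=60)
    return count
```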
1
0
0
AppEngine real time querying - cost, performance, latency balancing act and quotas
2
python,google-app-engine,mapreduce,task-queue
0
2014-02-21T17:23:00.000
Going through Django tutorial 1 using Python 2.7 and can't seem to resolve this error: OperationalError: no such table: polls_poll This happens the moment I enter Poll.objects.all() into the shell. Things I've already tried based on research through the net: 1) Ensured that 'polls' is listed under INSTALLED_APPS in settings.py Note: I've seen lots of suggestions inserting 'mysite.polls' instead of 'polls' into INSTALLED_APPS but this gives the following error: ImportError: cannot import name 'polls' from 'mysite' 2) Run python manage.py syncdb . This creates my db.sqlite3 file successfully and seemingly without issue in my mysite folder. 3) Finally, when I run python manage.py shell, the shell runs smoothly, however I do get some weird Runtime Warning when it starts and wonder if the polls_poll error is connected: \django\db\backends\sqlite3\base.py:63: RuntimeWarning: SQLite received a naive datetime (2014-02-03 17:32:24.392000) while time zone support is active. Any help would be appreciated.
5
11
1.2
0
true
23,184,956
1
13,012
1
0
0
21,976,383
I hit the same problem today and fixed it. I think you missed some commands from tutorial 1. Just run the following: python manage.py makemigrations polls, python manage.py sql polls, python manage.py syncdb. That fixes it and creates the polls table, and you can see the table has been created. You should also read up on the manage.py makemigrations command.
1
0
0
Django Error: OperationalError: no such table: polls_poll
4
python,django,shell,sqlite
0
2014-02-23T23:49:00.000
I have a csv file with about 280 columns, which are possibly changing from time to time. Is there a way to import a csv file to sqlite3 and have it 'guess' the column types? I am using a python script to import this.
2
0
0
0
false
22,005,726
0
1,025
1
0
0
22,004,809
Use the CSV header row as the column names of the sqlite3 table. Then, as you read each row, check the type of every value before inserting it into the DB; since the csv module always gives you strings, the practical check is to try converting each value to int or float and fall back to text.
1
0
0
csv import sqlite3 without specifying column types
2
python,csv,sqlite
0
2014-02-25T04:44:00.000
On OS X 10.9 and 10.9.1, the cx_Oracle works OK. But after I updated my system to OS X 10.9.2 yesterday, it cannot work. When connecting to Oracle database, DatabaseError is raised. And the error message is: ORA-21561: OID generation failed Can anyone help me?
2
0
0
0
false
39,339,545
0
1,020
2
0
0
22,060,338
I haven't seen this on OS X but the general Linux solution is to add your hostname to /etc/hosts for the IP 127.0.0.1.
1
0
0
cx_Oracle can't connect to Oracle database after updating OS X to 10.9.2
2
python,macos,oracle
0
2014-02-27T06:04:00.000
On OS X 10.9 and 10.9.1, the cx_Oracle works OK. But after I updated my system to OS X 10.9.2 yesterday, it cannot work. When connecting to Oracle database, DatabaseError is raised. And the error message is: ORA-21561: OID generation failed Can anyone help me?
2
0
0
0
false
41,649,509
0
1,020
2
0
0
22,060,338
This can be fixed with a simple edit to your hosts file. Find the name of your local machine by running hostname in your terminal: $ hostname. Then edit your local hosts file: $ vi /etc/hosts. Assuming hostname gives local_machine_name, append it to your localhost entry so the line reads 127.0.0.1 localhost local_machine_name. Press Esc and type :wq! to save. Cheers!
1
0
0
cx_Oracle can't connect to Oracle database after updating OS X to 10.9.2
2
python,macos,oracle
0
2014-02-27T06:04:00.000
I'm using Amazon Linux AMI release 2013.09. I've install virtualenv and after activation then I run pip install mysql-connector-python, but when I run my app I get an error: ImportError: No module named mysql.connector. Has anyone else had trouble doing this? I can install it outside of virtualenv and my script runs without issues. Thanks in advance for any help!
35
3
0.042831
0
false
44,177,264
0
73,109
1
0
0
22,100,757
Also something that can go wrong: don't name your own module mysql. If you do, import mysql.connector will fail because the import gives the module in your project precedence over site-packages, and yours likely doesn't have a connector.py file.
1
0
1
Can not get mysql-connector-python to install in virtualenv
14
python,mysql
0
2014-02-28T16:37:00.000
I know that Redis have 16 databases by default, but what if i need to add another database, how can i do that using redis-py?
1
0
0
0
false
22,111,910
0
460
1
0
0
22,110,562
You cannot. The number of databases is not a dynamic parameter in Redis. You can change it by updating the Redis configuration file (databases parameter) and restarting the server. From a client (Python or other), you can retrieve this value using the "CONFIG GET databases" command, but a "CONFIG SET databases xxx" command will be rejected.
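For example, from redis-py the value can be read (but not changed) roughly like this:

```python
import redis

r = redis.StrictRedis(host="localhost", port=6379, db=0)
print(r.config_get("databases"))    # e.g. {'databases': '16'}
# r.config_set("databases", 32)     # the server rejects this: not runtime-tunable
```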
1
0
0
Insert a new database in redis using redis.StrictRedis()
1
python,database,redis,redis-py
0
2014-03-01T05:29:00.000
I'd rather just use raw MySQL, but I'm wondering if I'm missing something besides security concerns. Does SQLAlchemy or another ORM handle scaling any better than just using pymysql or MySQLdb?
0
1
0.099668
0
false
22,128,680
1
941
2
0
0
22,128,419
SQL Alchemy is generally not faster (esp. as it uses those driver to connect). However, SQL Alchemy will help you structure your data in a sensible way and help keep the data consistent. Will also make it easier for you to migrate to a different db if needed.
1
0
0
Should I use an ORM like SQLAlchemy for a lightweight Flask web service?
2
python,sqlalchemy,flask,flask-sqlalchemy
0
2014-03-02T13:49:00.000
I'd rather just use raw MySQL, but I'm wondering if I'm missing something besides security concerns. Does SQLAlchemy or another ORM handle scaling any better than just using pymysql or MySQLdb?
0
1
1.2
0
true
22,134,840
1
941
2
0
0
22,128,419
Your question is too open to anyone guarantee SQLAlchemy is not a good fit, but SQLAlchemy probably will never be your problem to handle scalability. You'll have to handle almost the same problems with or without SQLAlchemy. Of course SQLAlchemy has some performance impact, it is a layer above the database driver, but it also will help you a lot. That said, if you want to use SQLAlchemy to help with your security (SQL escaping), you can use the SQLAlchemy just to execute your raw SQL queries, but I recommend it to fix specific bottlenecks, never to avoid the ORM.
1
0
0
Should I use an ORM like SQLAlchemy for a lightweight Flask web service?
2
python,sqlalchemy,flask,flask-sqlalchemy
0
2014-03-02T13:49:00.000
I would like for a user, without having to have an Amazon account, to be able to upload mutli-gigabyte files to an S3 bucket of mine. How can I go about this? I want to enable a user to do this by giving them a key or perhaps through an upload form rather than making a bucket world-writeable obviously. I'd prefer to use Python on my serverside, but the idea is that a user would need nothing more than their web browser or perhaps opening up their terminal and using built-in executables. Any thoughts?
0
0
0
0
false
22,162,436
1
141
1
0
0
22,160,820
This answer is relevant to .Net as language. We had such requirement, where we had created an executable. The executable internally called a web method, which validated the app authenticated to upload files to AWS S3 or NOT. You can do this using a web browser too, but I would not suggest this, if you are targeting big files.
1
0
0
user upload to my S3 bucket
2
python,file-upload,amazon-web-services,amazon-s3
0
2014-03-04T00:59:00.000
I've a bit of code that involves sending an e-mail out to my housemates when it's time to top up the gas meter. This is done by pressing a button and picking whoever's next from the database and sending an email. This is open to a lot of abuse as you can just press the button 40 times and send 40 emails. My plan was to add the time the e-mail was sent to my postgres database, and any time the button is pressed afterwards, it checks to see whether the last press was more than a day ago. Is this the most efficient way to do this? (I realise an obvious answer would be to password protect the site so no outside users can access it and mess with the gas rota, but unfortunately one of my housemates is the type of gas-hole who'd do that)
0
0
0
0
false
22,181,923
1
76
2
0
0
22,178,513
I asked about a soft button earlier. If your program is password/access protected you could just store it all in a pickle/config file somewhere; I am unsure what the value of the SQL database is here. Use last_push = time.time() and check the difference against the current push: if the difference in seconds is less than x, do not proceed; if it is bigger than x, reset last_push and proceed. Or am I missing something?
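A small sketch of that check; send_gas_email is a stand-in for the existing mail code, and in the asker's setup the timestamp would really be loaded from and persisted to the Postgres table:

```python
import time

MIN_INTERVAL = 24 * 60 * 60          # one day, in seconds
last_push = None                     # would be loaded from the database on startup

def handle_button_press():
    global last_push
    now = time.time()
    if last_push is not None and now - last_push < MIN_INTERVAL:
        return "Too soon - e-mail already sent today"
    last_push = now                  # also persist this (e.g. UPDATE ... SET pressed_at = now)
    send_gas_email()                 # stand-in for the existing e-mail code
    return "E-mail sent"
```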
1
0
0
Check time since last request
2
python
1
2014-03-04T17:13:00.000
I've a bit of code that involves sending an e-mail out to my housemates when it's time to top up the gas meter. This is done by pressing a button and picking whoever's next from the database and sending an email. This is open to a lot of abuse as you can just press the button 40 times and send 40 emails. My plan was to add the time the e-mail was sent to my postgres database, and any time the button is pressed afterwards, it checks to see whether the last press was more than a day ago. Is this the most efficient way to do this? (I realise an obvious answer would be to password protect the site so no outside users can access it and mess with the gas rota, but unfortunately one of my housemates is the type of gas-hole who'd do that)
0
0
0
0
false
22,179,026
1
76
2
0
0
22,178,513
If this is the easiest solution for you to implement, go right ahead. Worst case scenario, it's too slow to be practical and you'll need to find a better way. Any other scenario, it's good enough and you can forget about it. Honestly, it'll almost certainly be efficient enough to serve your purposes. The number of users at any one time will very rarely exceed one. An SQL query to determine if the timestamp is over a day before the current time will be quick, enough so that even the most determined gas-hole(!) wouldn't be able to cause any damage by spam-clicking the button. I would be very surprised if you ran into any problems.
1
0
0
Check time since last request
2
python
1
2014-03-04T17:13:00.000
I sometimes run python scripts that access the same database concurrently. This often causes database lock errors. I would like the script to then retry ASAP as the database is never locked for long. Is there a better way to do this than with a try except inside a while loop and does that method have any problems?
0
0
0
0
false
22,222,873
0
619
1
0
0
22,191,236
If you are looking for concurrency, SQLite is not the answer. The engine doesn't perform well when concurrency is needed, especially when writing from different threads, even if the tables are not the same. If your scripts are accessing different tables, and they have no relationships at the DB level (i.e. declared FKs), you can separate them into different databases and your concurrency issue will be solved. If they are linked, but you can link them at the app level (in the script), you can separate them as well. The best practice in those cases is implementing a lock mechanism with events, but honestly I have no idea how to implement that in Python.
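If the asker does stay on SQLite, the retry loop described in the question might look roughly like this sketch; note that sqlite3's own timeout argument already waits on a locked database before giving up:

```python
import sqlite3
import time

def execute_with_retry(db_path, sql, params=(), retries=10, delay=0.1):
    for attempt in range(retries):
        try:
            # the timeout argument makes sqlite3 wait up to 5s on a locked database
            with sqlite3.connect(db_path, timeout=5) as conn:
                return conn.execute(sql, params).fetchall()
        except sqlite3.OperationalError as exc:
            if "locked" not in str(exc) or attempt == retries - 1:
                raise
            time.sleep(delay)
```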
1
0
1
Sqlite3 and Python: Handling a locked database
2
python,sqlite
0
2014-03-05T07:23:00.000
Inside an web application ( Pyramid ) I create certain objects on POST which need some work done on them ( mainly fetching something from the web ). These objects are persisted to a PostgreSQL database with the help of SQLAlchemy. Since these tasks can take a while it is not done inside the request handler but rather offloaded to a daemon process on a different host. When the object is created I take it's ID ( which is a client side generated UUID ) and send it via ZeroMQ to the daemon process. The daemon receives the ID, and fetches the object from the database, does it's work and writes the result to the database. Problem: The daemon can receive the ID before it's creating transaction is committed. Since we are using pyramid_tm, all database transactions are committed when the request handler returns without an error and I would rather like to leave it this way. On my dev system everything runs on the same box, so ZeroMQ is lightning fast. On the production system this is most likely not an issue since web application and daemon run on different hosts but I don't want to count on this. This problem only recently manifested itself since we previously used MongoDB with a write_convern of 2. Having only two database servers the write on the entity always blocked the web-request until the entity was persisted ( which is obviously is not the greatest idea ). Has anyone run into a similar problem? How did you solve it? I see multiple possible solutions, but most of them don't satisfy me: Flushing the transaction manually before triggering the ZMQ message. However, I currently use SQLAlchemy after_created event to trigger it and this is really nice since it decouples this process completely and thus eliminating the risk of "forgetting" to tell the daemon to work. Also think that I still would need a READ UNCOMMITTED isolation level on the daemon side, is this correct? Adding a timestamp to the ZMQ message, causing the worker thread that received the message, to wait before processing the object. This obviously limits the throughput. Dish ZMQ completely and simply poll the database. Noooo!
1
0
0
0
false
22,247,025
1
2,062
1
0
0
22,245,407
This comes close to your second solution: create a buffer, drop the ids from your ZeroMQ messages into it, and let your worker poll this id pool regularly. If it fails to retrieve an object for an id from the database, let the id sit in the pool until the next poll; otherwise remove the id from the pool. You have to deal somehow with the asynchronous behaviour of your system. When the ids consistently arrive before the object is persisted in the database, it doesn't matter whether pooling the ids (and re-polling the same id) reduces throughput, because the bottleneck is earlier. An upside is that you could run multiple frontends in front of this.
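A rough sketch of that id pool on the daemon side; the endpoint, Entity model, session and process() are placeholders, not from the original post:

```python
import time
import zmq

context = zmq.Context()
receiver = context.socket(zmq.PULL)
receiver.connect("tcp://web-host:5557")   # placeholder endpoint

pending_ids = set()

while True:
    # drain any newly arrived ids without blocking
    try:
        while True:
            pending_ids.add(receiver.recv_string(flags=zmq.NOBLOCK))
    except zmq.Again:
        pass

    for obj_id in list(pending_ids):
        obj = session.query(Entity).get(obj_id)   # placeholder SQLAlchemy lookup
        if obj is None:
            continue             # not committed yet - try again on the next pass
        process(obj)             # the real work
        pending_ids.discard(obj_id)

    time.sleep(0.5)
```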
1
0
0
ZeroMQ is too fast for database transaction
2
python,postgresql,sqlalchemy,zeromq
1
2014-03-07T08:48:00.000
I am using mongodb 2.4.6 and python 2.7 .I have frequent executing queries.Is it possible to save the frequent qaueries results in cache.? Thanks in advance!
0
1
1.2
0
true
22,251,094
0
1,377
1
0
0
22,250,987
Yes but you will need to make one, how about memcached or redis? However as a pre-cautionary note, MongoDB does have its recently used data cached to RAM by the OS already so unless you are doing some really resource intensive aggregation query or you are using the results outside of your working set window you might not actually find that it increases performance all that much.
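A hedged sketch of such a cache, putting redis-py in front of pymongo; the database, collection and key prefix are placeholders:

```python
import json
import redis
from pymongo import MongoClient

orders = MongoClient()["mydb"]["orders"]     # placeholder collection
cache = redis.StrictRedis()

def cached_find(query, ttl=60):
    key = "orders:" + json.dumps(query, sort_keys=True)
    hit = cache.get(key)
    if hit is not None:
        return json.loads(hit)
    result = list(orders.find(query, {"_id": 0}))   # drop _id to keep it JSON-serializable
    cache.setex(key, ttl, json.dumps(result))
    return result
```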
1
0
1
How to cache Mongodb Queries?
1
mongodb,python-2.7,caching
0
2014-03-07T13:09:00.000
Introduction: I am working on a GPS listener, a service built on Twisted Python. The app receives at least 100 connections from GPS devices and is working without issues; each GPS sends data every 5 seconds, containing positions. (Next week there will be at least 200 GPS devices connected.) Database: I am using a single PostgreSQL connection, shared between all connected GPS devices for saving and storing information; PostgreSQL is using pgbouncer as a pooler. Server: I am using a small PC as the server, and I need to find a way to have a high-availability application without losing data. Problem: Owing to the high traffic on my app, I am having issues with in-memory data: after 30 minutes some data starts to appear as not saved, even though queries are being executed on Postgres (I have checked that in last activity). Fake solution: I have made a script that restarts my app, Postgres and pgbouncer, but this is the wrong solution, because each time I restart my app the GPS devices get disconnected and must reconnect. Possible solution: I am thinking of a high-availability solution based on a data layer, where each time the database has to be restarted, or something else happens, a txt file stores the data from the GPS devices. To get there, I am thinking of not using a single shared connection, but a simple connection each time a piece of data must be saved, then testing the database like a pooler: if the database connection is broken, the txt file stores the data until the database is OK again, and another process reads the txt file and sends the info to the database. Question: Since I am thinking of an app-level data pooler and a single connection each time data must be saved, to try not to lose data, I want to know: is it OK to make a single connection each time data is saved for this kind of app, knowing that connections will be made more than 100 times every 5 seconds? As I said, my question is quite simple: which is the right way of working with DB connections in a high-traffic app, single connections per query or a shared single connection for the whole app? The reason for asking this single question is to learn the right way of working with DB connections considering memory resources. I am not looking to solve PostgreSQL issues or performance, just to know the right way of working with this kind of application, and that is why I have given as much detail as possible about it. Note: One more thing, I have seen one vote to close this question as unclear, when the question is titled with the word "question" and was marked in italic; I have now marked it in gray so people notice the word "question". Thanks a lot
0
1
1.2
0
true
22,409,012
1
270
1
0
0
22,256,760
Databases do not just lose data willy-nilly. Not losing data is pretty much number one in their job description. If it seems to be losing data, you must be misusing transactions in your application. Figure out what you are doing wrong and fix it. Making and breaking a connection between your app and pgbouncer for each transaction is not good for performance, but is not terrible either; and if that is what helps you fix your transaction boundaries then do that.
1
0
0
Right way to manage a high traffic connection application
1
python,database,postgresql,gps,twisted
0
2014-03-07T17:28:00.000
I use driving_distance function in pgRouting to work with my river network. There are 12 vertices in my river network, and I want to get the distance between all of these 12 vertices, starting from vertex_id No.1. The result is fine, but I want to get other results using other vertices as starting point. I know it would not cost much time to change the SQL code everytime, but thereafter I would have more than 500 vertices in this river network, so I need to do this more efficiently. How to use python to get what I want? How can I write a python script to do this? Or there are existing python script that I want? I am a novice with programming language, please give me any detailed advice, thank you.
0
1
1.2
0
true
22,392,600
0
684
1
0
0
22,279,499
psycopg2 is an excellent Python module that allows your scripts to connect to your Postgres database and run SQL, whether as inputs or as fetch queries. You can have Python walk through the possible combinations of vertices and have it build the individual SQL queries as strings. It can then run through them and print your output into a text file.
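A hedged sketch of that loop: YOUR_DRIVING_DISTANCE_SQL stands for the asker's existing driving_distance query, parameterised on the start vertex, and the connection string, vertex table and output filename are placeholders:

```python
import csv
import psycopg2

conn = psycopg2.connect("dbname=rivers user=postgres")   # placeholder credentials
cur = conn.cursor()

cur.execute("SELECT id FROM vertices_tmp ORDER BY id")    # placeholder vertex table
vertex_ids = [row[0] for row in cur.fetchall()]

with open("distances.csv", "w") as out:
    writer = csv.writer(out)
    for source in vertex_ids:
        # substitute the existing driving_distance() query here,
        # with %s where the starting vertex id goes
        cur.execute(YOUR_DRIVING_DISTANCE_SQL, (source,))
        for row in cur.fetchall():
            writer.writerow((source,) + row)

cur.close()
conn.close()
```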
1
0
0
How to use python to loop through all possible results in postgresql?
1
python,postgresql,pgrouting
0
2014-03-09T07:23:00.000
Unfortunately I have a REHL3 and Python 2.3 and no chance of upgrading. Does anyone have any examples of how to interact with the DB, openning sqlplus, logging in and then I only want a simple SELECT query bring the data to a CSV and then I can figure out the rest. Any ideas please?
0
0
0
0
false
23,064,270
0
319
1
0
0
22,300,744
I used a bash script to produce the csv file and then manipulated the data with Python. That was the only solution I could think of with Python 2.3.
1
0
0
Simple query to Oracle SQL using Python 2.3
1
sql,oracle,shell,oracle10g,python-2.x
0
2014-03-10T12:56:00.000
I'm trying to determine the best practices for storing and displaying user input in MongoDB. Obviously, in SQL databases, all user input needs to be encoded to prevent injection attacks. However, my understanding is that with MongoDB we need to be more worried about XSS attacks, so does user input need to be encoded on the server before being stored in mongo? Or, is it enough to simply encode the string immediately before it is displayed on the client side using a template library like handlebars? Here's the flow I'm talking about: On the client side, user updates their name to "<script>alert('hi');</script>". Does this need to be escaped to "&lt;script&gt;alert(&#x27;hi&#x27;);&lt;&#x2F;script&gt;" before sending it to the server? The updated string is passed to the server in a JSON document via an ajax request. The server stores the string in mongodb under "user.name". Does the server need to escape the string in the same way just to be safe? Would it have to first un-escape the string before fully escaping so as to not double up on the '&'? Later, user info is requested by client, and the name string is sent in JSON ajax response. Immediately before display, user name is encoded using something like _.escape(name). Would this flow display the correct information and be safe from XSS attacks? What about about unicode characters like Chinese characters? This also could change how text search would need to be done, as the search term may need to be encoded before starting the search if all user text is encoded. Thanks a lot!
7
4
1.2
0
true
22,315,740
1
2,232
1
0
0
22,312,452
Does this need to be escaped to "&lt;script&gt;alert(&#x27;hi&#x27;);&lt;&#x2F;script&gt;" before sending it to the server? No, it has to be escaped like that just before it ends up in an HTML page - step (5) above. The right type of escaping has to be applied when text is injected into a new surrounding context. That means you HTML-encode data at the moment you include it in an HTML page. Ideally you are using a modern templating system that will do that escaping for you automatically. (Similarly if you include data in a JavaScript string literal in a <script> block, you have to JS-encode it; if you include data in a stylesheet rule you have to CSS-encode it, and so on. If we were using SQL queries with data injected into their strings then we would need to do SQL-escaping, but luckily Mongo queries are typically done with JavaScript objects rather than a string language, so there is no escaping to worry about.) The database is not an HTML context so HTML-encoding input data on the way to the database is not the right thing to do. (There are also other sources of XSS than injections, most commonly unsafe URL schemes.)
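For illustration, escaping at output time (step 5) can be as simple as this sketch; markupsafe is the library Jinja2 uses for autoescaping, and load_user_name_from_mongo is a placeholder for the asker's own query:

```python
from markupsafe import escape   # the same escaping Jinja2's autoescape applies

raw_name = load_user_name_from_mongo()        # stored exactly as the user typed it
safe_html = u"<p>Hello, {0}!</p>".format(escape(raw_name))
```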
1
0
1
Encoding user input to be stored in MongoDB
2
javascript,python,ajax,mongodb,unicode
0
2014-03-10T22:12:00.000
building bsddb3-6.0.1, Python 3.3.2, BerkeleyDB 5.3, Windows7. First linker asked for libdb53s.lib, but there's no such file, so I deleted 's' symbol (in setup3.py) and now linker can find libdb53.lib, but... _bsddb.obj : error LNK2019: unresolved external symbol db_create referenced in f unction newDBObject _bsddb.obj : error LNK2019: unresolved external symbol db_strerror referenced in function makeDBError _bsddb.obj : error LNK2019: unresolved external symbol db_env_create referenced in function newDBEnvObject _bsddb.obj : error LNK2019: unresolved external symbol db_version referenced in function _promote_transaction_dbs_and_sequences _bsddb.obj : error LNK2019: unresolved external symbol db_full_version reference d in function _promote_transaction_dbs_and_sequences _bsddb.obj : error LNK2019: unresolved external symbol db_sequence_create refere nced in function newDBSequenceObject build\lib.win-amd64-3.3\bsddb3_pybsddb.pyd : fatal error LNK1120: 6 unresolved externals error: command '"C:\Program Files (x86)\Microsoft Visual Studio 11.0\VC\BIN\amd6 4\link.exe"' failed with exit status 1120 Copied BDB folders to bsddb3-6.0.1\db bsddb3-6.0.1\db\lib contains libdb53.lib bsddb3-6.0.1\db\bin contains libdb53.dll Are there any ready to use bsddb3 binaries for Python3.3.2 ?
2
0
0
0
false
22,391,487
0
183
1
0
0
22,373,983
Deleting the 's' symbol isn't appropriate - the s designates the static libdb53 library. Assuming you are building libdb53 from source as well, in the build_windows directory there is a Berkeley_DB.sln that includes Static_Debug and Static_Release configurations that will build these. However, your troubles may not end there. I'm using the static libraries and still getting similar unresolved external errors.
1
0
0
bsddb3-6.0.1 Windows7 bulid error: _bsddb.obj : error LNK2019: unresolved external symbol db_create referenced in function newDBObject
1
python,windows,berkeley-db,bsddb
0
2014-03-13T09:18:00.000
I keep seeing what appears to be a partly-commited transaction using innodb tables: all my tables use innodb as a backend mysql version: 5.5.31-0ubuntu0.13.04.1-log python web application based on uwsgi, each http request is wrapped in a separate transaction that is either commited or rolled back depending on whether an exception is generated during the request each request-serving process uses a single mysql connection that is not shared across processes a couple of other processes connect to the DB to perform background tasks that are all wrapped in transactions transactions are all created and tracked through a sqlalchemy middleware which is configured to not change the default mysql isolation level which is REPEATABLE READ Despite all this (I triple checked each item a couple of times), my DB appears to contain half-commited transactions: 1. 2 tables A and B with A that contains a foreign key to B (there are no constraints defined in the DB) 2. A contains a valid row that points to a non-existent row in B. 3. B contains rows with id + 1 and id - 1. 4. both rows in both tables are inserted within a single transaction To summarize, I can't see what I could have possibly done wrong. I can't imagine I am hitting a bug in the mysql storage backend so, I am looking for help on how I could debug this further and what assumption I made above is the most likely to be wrong.
0
0
0
0
false
22,668,180
0
73
1
0
0
22,389,675
It took me a while but it appears that the transaction was automatically rolled back by error 1213 (deadlock) and 1205 (lock wait timeout exceeded). It did not help that these errors were caught in some internal middleware that tried to execute again the statement that failed instead of forwarding the error up to the transaction layer where the entire transaction could be retried or abandoned as a whole. The result was that the code would keep on executing normally under the assumption that the ongoing transaction was still ongoing while it had been rolled back by the mysql server.
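A sketch of retrying the whole transaction rather than a single statement, assuming raw MySQLdb; the error codes are the two mentioned above, and work() stands for the function that issues all the statements of the transaction:

```python
import MySQLdb

RETRYABLE = (1205, 1213)   # lock wait timeout, deadlock

def run_transaction(conn, work, attempts=3):
    for attempt in range(attempts):
        try:
            work(conn)            # issue every statement of the transaction
            conn.commit()
            return
        except MySQLdb.OperationalError as exc:
            conn.rollback()       # MySQL already rolled back; keep the client in sync
            if exc.args[0] not in RETRYABLE or attempt == attempts - 1:
                raise
```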
1
0
0
mysql transaction commited only partly
1
python,mysql,sqlalchemy
0
2014-03-13T20:03:00.000
I am using MySQLdb in python 3 to insert multiple rows using executemany. Now I need to get the primary keys after the multi insert.. Is there any way to get them?
3
3
1.2
0
true
22,399,907
0
591
2
0
0
22,399,825
You can use connection.insert_id()
1
0
0
Get primary keys inserted after executemany
2
python,mysql
0
2014-03-14T08:36:00.000
I am using MySQLdb in python 3 to insert multiple rows using executemany. Now I need to get the primary keys after the multi insert.. Is there any way to get them?
3
1
0.099668
0
false
22,399,968
0
591
2
0
0
22,399,825
When you use executemany you insert multiple rows at a time. You will get the first inserted row's primary key using connection.insert_id(). To get all inserted ids, you have to run another query.
1
0
0
Get primary keys inserted after executemany
2
python,mysql
0
2014-03-14T08:36:00.000
I have list of nested Json objects which I want to save into cassandra (1.2.15). However the constraint I have is that I do not know the column family's column data types before hand i.e each Json object has got a different structure with fields of different datatypes. So I am planning to use dynamic composite type for creating a column family. So I would like to know if there is an API or suggest some ideas on how to save such Json object list into cassandra. Thanks
1
2
0.379949
1
false
22,519,952
0
1,459
1
0
0
22,437,058
If you don't need to be able to query individual items from the json structure, just store the whole serialized string into one column. If you do need to be able to query individual items, I suggest using one of the collection types: list, set, or map. As far as typing goes, I would leave the value as text or blob and rely on json to handle the typing. In other words, json encode the values before inserting and then json decode the values when reading.
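A minimal sketch of the "one serialized column" option, assuming a CQL table with a text body column and the DataStax Python driver (3.x); the keyspace, table and ids are placeholders:

```python
import json
from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect("mykeyspace")   # placeholder keyspace

doc = {"user": "alice", "tags": ["a", "b"], "meta": {"score": 3}}
session.execute(
    "INSERT INTO documents (id, body) VALUES (%s, %s)",   # body is a text column
    ("doc-1", json.dumps(doc)),
)

row = session.execute("SELECT body FROM documents WHERE id = %s", ("doc-1",)).one()
restored = json.loads(row.body)   # json handles the typing on the way back out
```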
1
0
0
Import nested Json into cassandra
1
java,python,json,cassandra,cassandra-cli
0
2014-03-16T12:51:00.000
I want to convert a csv file to a db (database) file using python. How should I do it ?
0
0
1.2
0
true
22,443,606
0
933
1
0
0
22,443,297
I don't think this can be done in full generality without out-of-band information or just treating everything as strings/text. That is, the information contained in the CSV file won't, in general, be sufficient to create a semantically “satisfying” solution. It might be good enough to infer what the types probably are for some cases, but it'll be far from bulletproof. I would use Python's csv and sqlite3 modules, and try to: convert the cells in the first CSV line into names for the SQL columns (strip “oddball” characters) infer the types of the columns by going through the cells in the second CSV file line (first line of data), attempting to convert each one first to an int, if that fails, try a float, and if that fails too, fall back to strings this would give you a list of names and a list of corresponding probably types from which you can roll a CREATE TABLE statement and execute it try to INSERT the first and subsequent data lines from the CSV file There are many things to criticize in such an approach (e.g. no keys or indexes, fails if first line contains a field that is a string in general but just so happens to contain a value that's Python-convertible to an int or float in the first data line), but it'll probably work passably for the majority of CSV files.
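A hedged sketch of that approach, with all the caveats listed above (Python 3, no keys or indexes, types guessed from the first data row only):

```python
import csv
import sqlite3

def guess_type(value):
    for cast, sql_type in ((int, "INTEGER"), (float, "REAL")):
        try:
            cast(value)
            return sql_type
        except ValueError:
            pass
    return "TEXT"

def csv_to_db(csv_path, db_path, table="data"):
    with open(csv_path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        first_row = next(reader)
        # infer each column's type from the first data line
        cols = ['"%s" %s' % (name.strip(), guess_type(cell))
                for name, cell in zip(header, first_row)]
        conn = sqlite3.connect(db_path)
        conn.execute('CREATE TABLE "%s" (%s)' % (table, ", ".join(cols)))
        insert = 'INSERT INTO "%s" VALUES (%s)' % (table, ", ".join("?" * len(header)))
        conn.execute(insert, first_row)
        conn.executemany(insert, reader)
        conn.commit()
        conn.close()
```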
1
0
0
How do I convert a .csv file to .db file using python?
2
python,linux,database,sqlite
0
2014-03-16T21:31:00.000
I made a script that opens a .xls file, writes a few new values in it, then saves the file. Later, the script opens it again, and wants to find the answers in some cells which contain formulas. If I call that cell with openpyxl, I get the formula (ie: "=A1*B1"). And if I activate data_only, I get nothing. Is there a way to let Python calculate the .xls file? (or should I try PyXll?)
13
1
0.033321
0
false
22,486,256
0
24,869
1
0
0
22,451,973
No, and in openpyxl there never will be. I think there is a Python library that purports to implement an engine for such formulae, which you could use.
1
0
0
Calculating Excel sheets without opening them (openpyxl or xlwt)
6
python,excel,xlwt,openpyxl,pyxll
0
2014-03-17T10:31:00.000
I'm using Django 1.6 with PostgreSQL and I want to use a two different Postgres users - one for creating the initial tables (syncdb) and performing migrations, and one for general access to the database in my application. Is there a way of doing this?
3
1
0.197375
0
false
22,527,486
1
78
1
0
0
22,523,519
From ./manage.py help syncdb: --database=DATABASE Nominates a database to synchronize. Defaults to the "default" database. You can add another database definition in your DATABASES configuration, and run ./manage.py syncdb --database=name_of_database_definition. You might want to create a small wrapper script for running that command, so that you don't have to type out the --database=... parameter by hand every time. south also supports that option, so you can also use it to specify the database for your migrations.
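A hedged sketch of what that could look like; the database name, users and passwords are placeholders:

```python
# settings.py - two connections to the same PostgreSQL database,
# differing only in the credentials used
DATABASES = {
    "default": {                      # low-privilege user for normal requests
        "ENGINE": "django.db.backends.postgresql_psycopg2",
        "NAME": "mydb",
        "USER": "app_user",
        "PASSWORD": "...",
    },
    "migrations": {                   # owner/DDL user for syncdb and migrations
        "ENGINE": "django.db.backends.postgresql_psycopg2",
        "NAME": "mydb",
        "USER": "ddl_user",
        "PASSWORD": "...",
    },
}

# then: ./manage.py syncdb --database=migrations
# (or ./manage.py migrate --database=migrations when using south)
```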
1
0
0
Different Postgres users for syncdb/migrations and general database access in Django
1
python,django,postgresql
0
2014-03-20T04:28:00.000
Is it possible to call Python within an Oracle procedure? I've read plenty of literature about the reverse case (calling Oracle SQL from Python), but not the other way around. What I would like to do is to have Oracle produce a database table, then I would like to call Python and pass this database table to it in a DataFrame so that I could use Python to do something to it and produce results. I might need to call Python several times during the Oracle procedure. Does anyone know if this is possible and how could it be done?
18
0
0
0
false
53,431,165
0
12,854
1
0
0
22,564,503
Kind of complicated, but possible. I have seen it once. 1. You need to create a Java class inside the Oracle database. This class calls a .py file in the directory that contains it. 2. Create a procedure that calls the Java class of item 1. 3. In your SQL query, call the procedure of item 2 whenever you need it.
1
0
0
Calling Python from Oracle
7
python,sql,oracle,pandas,cx-oracle
0
2014-03-21T16:40:00.000
insert_query = u""" INSERT INTO did_you_know ( name, to_be_handled, creator, nominator, timestamp) VALUES ('{0}', '{1}', '{2}', '{3}', '{4}') """.format("whatever", "whatever", "whatever", "whatever", "whatever") is my example. Does every single value in a MySQL query have to contain quotes? Would this be acceptable or not? INSERT INTO TABLE VALUES ('Hello', 1, 1, 0, 1, 'Goodbye') Thank you.
0
0
0
0
false
22,600,910
0
55
1
0
0
22,600,861
You need to quote only character and other non-integer data types. Integer data types need not be quoted.
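As an aside, parameter binding sidesteps the quoting question entirely: MySQLdb quotes (or doesn't quote) each value for you, and it also protects against injection. A sketch reworking the question's query this way (the values are placeholders):

```python
insert_query = u"""
    INSERT INTO did_you_know (name, to_be_handled, creator, nominator, timestamp)
    VALUES (%s, %s, %s, %s, %s)
"""
# the driver handles quoting for strings and leaves integers bare
cursor.execute(insert_query, ("whatever", 1, "whatever", 0, "whatever"))
```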
1
0
0
Does every single value in a MySQL query have to be quoted?
3
python,mysql,mysql-python
0
2014-03-24T04:01:00.000
so I am new to both programming and Python, but I think I have the issue narrowed down enough to ask a meaningful question. I am trying to use MySQLdb on my computer. No problem, right? I just enter: import PyMySQL PyMySQL.install_as_MySQLdb() import MySQLdb At the top of the script. But here is the problem. I installed Anaconda the other day to try to get access to more stats packages. As a result, on the command line, "which python" returns: /Users/vincent/anaconda/bin/python Based on reading other people's questions and answers, I think the problem is caused by being through Anaconda and not usr/bin/python, but I have no idea if this is correct... I would rather not uninstall Anaconda as I know that is not the right solution. So, I would like to ask for a very basic list of steps of how fix this if possible. I am on OSX (10.9) Anaconda is 1.9.1 and I think python is 2.7 Thank you!
1
2
0.379949
0
false
24,050,784
0
7,318
1
0
0
22,602,065
You don't need to uninstall Anaconda. In your case, try pip install PyMySQL. If which pip returns /Users/vincent/anaconda/bin/pip, this should work.
1
0
1
ImportError: No module named PyMySQL - issues connecting with Anaconda
1
python,compatibility,mysql-python,anaconda
0
2014-03-24T05:56:00.000
I am working on a Python/Django application. The core logic rests in a Python application, and the web UI is taken care of by Django. I am planning on using ZMQ for communication between the core and UI apps. I am using a time-series database, which uses PostgreSQL in the background to store string data, and another time-series tool to store time-series data. So I already have a PostgreSQL requirement as part of the time-series db. But I need another db to store data (other than time-series), and I started work using PostgreSQL. sqlite3 was a suggestion from one of my team members. I have not worked on either, and I understand there are pros and cons to each one of them, I would like to understand the primary differences between the two databases in question here, and the usage scenarios.
1
1
0.099668
0
false
22,664,876
1
759
1
0
0
22,662,456
Use PostgreSQL. Our team worked with sqlite3 for a long time; however, when you import data into the db, it often gives the message 'database is locked!'. The advantages of sqlite3: it is small and, as you put it, no server setup is needed; also, max_length in models.py is not strictly enforced, so if you set max_length=10 and put 100 chars in the field, sqlite3 never complains about it and never truncates it. But PostgreSQL is faster than sqlite3, and if you use sqlite3 and some day want to migrate to PostgreSQL, that matters, because sqlite3 never truncates strings but PostgreSQL complains about them!
1
0
0
Database engine choice for Django/ Python application
2
python,database,django,sqlite,postgresql
0
2014-03-26T13:26:00.000
I'm making a Django app with the Two Scoops of Django template. Getting this Heroku error, are my Postgres production settings off? OperationalError at / could not connect to server: Connection refused Is the server running on host "localhost" (127.0.0.1) and accepting TCP/IP connections on port 5432? Exception Location: /app/.heroku/python/lib/python2.7/site-packages/psycopg2/__init__.py foreman start works fine Procfile: web: python www_dev/manage.py runserver 0.0.0.0:$PORT --noreload local.py settings: DATABASES = { 'default': { 'ENGINE': 'django.db.backends.postgresql_psycopg2', 'NAME': 'www', 'USER': 'amyrlam', 'PASSWORD': '*', 'HOST': 'localhost', 'PORT': '5432', } } production.py settings: commented out local settings from above, added standard Heroku Django stuff: import dj_database_url DATABASES['default'] = dj_database_url.config() SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https') ALLOWED_HOSTS = ['*'] import os BASE_DIR = os.path.dirname(os.path.abspath(file)) STATIC_ROOT = 'staticfiles' STATIC_URL = '/static/' STATICFILES_DIRS = ( os.path.join(BASE_DIR, 'static'), ) UPDATE: production settings, tried changing: import dj_database_url DATABASES['default'] = dj_database_url.config(default=os.environ["DATABASE_URL"]) (named my Heroku color URL to DATABASE_URL, same link in heroku config)
3
5
1.2
0
true
22,693,845
1
1,923
1
0
0
22,674,128
Have you set your DJANGO_SETTINGS_MODULE environment variable? I believe what is happening is this: by default Django is using your local.py settings, which is why it's trying to connect on localhost. To make Django detect and use your production.py settings, you need to do the following: heroku config:set DJANGO_SETTINGS_MODULE=settings.production This will make Django load your production.py settings when you're on Heroku :)
1
0
0
Can't get Django/Postgres app settings working on Heroku
1
python,django,postgresql,heroku
0
2014-03-26T22:12:00.000
What is the code sintax to import an excel file to an MS access database IN PYTHON ? I have already tried making it a text file but with no sucesss
0
0
0
0
false
22,695,473
0
993
1
0
0
22,695,359
In Access select "External Data" then under "Import & Link" select Excel. You should be able to just use the wizard to choose the Excel file and import the data to a new table.
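If it has to happen from Python rather than through the wizard, one hedged sketch uses openpyxl plus pyodbc with the Access ODBC driver; the file paths, table and column names are placeholders, and the Access ODBC driver must be installed on the machine:

```python
import openpyxl
import pyodbc

wb = openpyxl.load_workbook("data.xlsx")            # placeholder workbook
ws = wb.active

conn = pyodbc.connect(
    r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=C:\data\mydb.accdb"
)
cur = conn.cursor()

rows = ws.iter_rows(min_row=2, values_only=True)     # skip the header row
cur.executemany("INSERT INTO MyTable (ColA, ColB, ColC) VALUES (?, ?, ?)", list(rows))
conn.commit()
conn.close()
```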
1
0
0
Import Excel spread sheet into Access
2
excel,python-2.7,ms-access-2010
0
2014-03-27T17:47:00.000
I'm using a flask on an ec2 instance as server, and on that same machine I'm using that flask talking to a MongoDB. For the ec2 instance I only leaves port 80 and 22 open, without leaving the mongo port (27017) because all the clients are supposed to talk to the flask server via http calls. Only in the flask I have code to insert or query the server. What I'm wondering is Is it secure enough? I'm using a key file to ssh to that ec2 machine, but I do need to be 99% sure that nobody else could query/insert into the mongodb If not, what shall I do? Thanks!
1
2
0.197375
0
false
22,716,431
1
182
2
0
0
22,715,888
EC2 security policies by default block all incoming ports except ones you have sanctioned, as such the firewall will actually stop someone from getting directly to your MongoDB instance; as such yes it is secure enough. Since the instances are physically isolated there is no chance of the problems you would get on shared hosting of someone being able to route through the back of their instances to yours (though some things are still shared like IO read head).
1
0
0
Security concerning MongoDB on ec2?
2
python,mongodb,security,amazon-web-services,amazon-ec2
0
2014-03-28T14:39:00.000
I'm using a flask on an ec2 instance as server, and on that same machine I'm using that flask talking to a MongoDB. For the ec2 instance I only leaves port 80 and 22 open, without leaving the mongo port (27017) because all the clients are supposed to talk to the flask server via http calls. Only in the flask I have code to insert or query the server. What I'm wondering is Is it secure enough? I'm using a key file to ssh to that ec2 machine, but I do need to be 99% sure that nobody else could query/insert into the mongodb If not, what shall I do? Thanks!
1
2
1.2
0
true
22,716,299
1
182
2
0
0
22,715,888
Should be secure enough. If I understand correctly, you don't have port 27017 open to the world, i.e. you have blocked it (or should block it) through your AWS security group and perhaps your local firewall on the EC2 instance; then the only access to that port will be from calls originating on the same server. Nothing is 100% secure, but I don't see any holes in what you have done.
1
0
0
Security concerning MongoDB on ec2?
2
python,mongodb,security,amazon-web-services,amazon-ec2
0
2014-03-28T14:39:00.000
I am building a web crawler in Python using MongoDB to store a queue with all URLs to crawl. I will have several independent workers that will crawl URLs. Whenever a worker completes crawling a URL, it will make a request in the MongoDB collection "queue" to get a new URL to crawl. My issue is that since there will be multiple crawlers, how can I ensure that two crawlers won't query the database at the same time and get the same URL to crawl? Thanks a lot for your help
0
0
1.2
0
true
22,738,408
0
510
1
0
1
22,737,982
Since reads in MongoDB are concurrent, I completely understand what you're saying. Yes, it is possible for two workers to pick the same row, amend it and then re-save it, overwriting each other (not to mention wasted resources on crawling). I believe you must accept that one way or another you will lose some performance; that is an unfortunate part of ensuring consistency. You could use findAndModify to pick exclusively: since findAndModify is isolated, it can ensure that you only pick a URL that has not been picked before. The problem is that findAndModify, due to being isolated, will slow down the rate of your crawling. Another way could be an optimistic lock, whereby you write a lock to the picked database rows very quickly after picking them; this means there is some wastage when it comes to crawling duplicate URLs, but it also means you get the maximum performance and concurrency out of your workers. Which one you go for requires you to test and discover which best suits you.
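A sketch of that exclusive pick using pymongo's modern (3.x) spelling of findAndModify; the collection name and status field are placeholders:

```python
from pymongo import MongoClient, ReturnDocument

queue = MongoClient()["crawler"]["queue"]     # placeholder collection

def claim_next_url(worker_id):
    # atomically flips one queued document to "in_progress", so no two
    # workers can receive the same URL
    return queue.find_one_and_update(
        {"status": "queued"},
        {"$set": {"status": "in_progress", "worker": worker_id}},
        return_document=ReturnDocument.AFTER,
    )
```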
1
0
0
Multiple workers getting information from a single MongoDB queue
1
python,mongodb,queue,mongodb-query,worker
0
2014-03-29T22:55:00.000
Hi Everyone Ive hit a road block in sql. Its the dreaded storing images in sql database. Apparently the solution to this is to store image in a file system. Does anyone know any book or video tutorial that teaches this I cant seem to find any in the web. Im using My Sql and Python to learn how to work with images. I cant find any examples in the web.
0
1
1.2
0
true
22,750,529
0
67
1
0
0
22,750,497
Store the image as a file, and store the path of the file in the database. The fact that the file is an image is irrelevant. If you want a more specific answer, you will need to ask a more specific question. Also, please edit your title so that it corresponds to the question.
1
0
0
Sql Filesystem programming Python
1
python,sql,database,image
0
2014-03-30T22:06:00.000
I'm using xlwt to create tables in excel. In excel there is a feature format as table which makes the table have an automatic filters for each column. Is there a way to do it using python?
9
8
1.2
0
true
22,837,578
0
10,141
1
0
0
22,831,520
OK, after searching the web, I realized that with xlwt it's not possible to do it, but with XlsxWriter it's possible and very easy and convenient.
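A minimal sketch with XlsxWriter; the filename and data are placeholders. Tables created this way get the header filter dropdowns automatically, much like Excel's "Format as Table":

```python
import xlsxwriter

wb = xlsxwriter.Workbook("report.xlsx")
ws = wb.add_worksheet()

data = [["apples", 10], ["pears", 4], ["plums", 7]]
ws.add_table("A1:B4", {
    "data": data,
    "columns": [{"header": "Fruit"}, {"header": "Count"}],
    # autofilter is on by default for tables
})
wb.close()
```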
1
0
0
how to do excel's 'format as table' in python
3
python,excel,python-2.7,xlwt
0
2014-04-03T08:10:00.000
I'm just about getting started on deploying my first live Django website, and I'm wondering how to set the Ubuntu server file permissions in the optimal way for security, whilst still granting the permissions required. Firstly a question of directories: I'm currently storing the site in ~/www/mysite.com/{Django apps}, but have often seen people using /var/www/... or /srv/www; is there any reason picking one of these directories is better than the other? or any reason why keeping the site in my home dir is a bad idea? Secondly, the permissions of the dir and files themselves. I'm serving using apache with mod_wsgi, and have the file WSGIScriptAlias / ~/www/mysite.com/mainapp/wsgi.py file. Apache runs as www-data user. For optimal security who should own the wsgi.py file, and what permissions should I grant it and its containing dir? Similarly, for the www, www/mysite.com, and www/mysite.com/someapp directories? What are the minimal permissions that are needed for the dirs and files? Currently I am using 755 and 644 for dir and files respecitvely, which works well enough which allows the site to function, but I wonder if it is optimal/too liberal. My Ubuntu user is the owner of most files, and www-data owns the sqlite dbs.
4
1
0.197375
0
false
24,634,526
1
1,003
1
0
0
22,872,888
In regards to serving the application from your home directory, this is primarily preference based. However, deployment decisions may be made depending on the situation. For example, if you have multiple users making use of this server to host their website, then you would likely have the files served from their home directories. From a system administrator's perspective that is deploying the applications; you may want them all accessible from /var/www... so they are easier to locate. The permissions you set for serving the files seem fine, however they may need to run as different users... depending on the number of people using this machine. For example, lets say you have one other application running on the server and that both applications run as www-data. If the www-data user has read permissions of Django's config file, then the other user could deploy a script that can read your database credentials.
1
0
0
Security optimal file permissions django+apache+mod_wsgi
1
python,django,apache,security,permissions
0
2014-04-04T20:58:00.000
I'm writing a little chatserver and client. There I got the idea to let users connect (nice :D) and when they want to protect their account by a password, they send /password <PASS> and the server will store the account information in a sqlite database file, so only users, who know the passphrase, are able to use the name. But there's the problem: I totally forgot, that sqlite3 in python is not thread-safe :( And now its not working. Thanks to git I can undo all changes with the storage. Does anyone have an idea how to store this stuff, so that they are persistent when stopping/starting the server? Thanks.
0
0
1.2
0
true
24,468,686
0
85
1
0
0
22,886,350
OK, I'm using a simple JSON text file with automatic saving every minute
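A rough sketch of that approach, assuming a dict guarded by a lock and an atomic file swap so a crash never leaves a truncated store; the filename is a placeholder:

```python
import json
import os
import threading

accounts = {}                     # name -> password hash, shared in memory
lock = threading.Lock()
STORE = "accounts.json"

def save_periodically(interval=60):
    with lock:
        data = json.dumps(accounts)
    with open(STORE + ".tmp", "w") as f:
        f.write(data)
    os.replace(STORE + ".tmp", STORE)    # atomic swap on the same filesystem
    threading.Timer(interval, save_periodically, args=(interval,)).start()

def load():
    global accounts
    if os.path.exists(STORE):
        with open(STORE) as f:
            accounts = json.load(f)
```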
1
0
0
python simple data storage
1
python,multithreading,sqlite
0
2014-04-05T20:26:00.000
I wrote a migration script which works fine on sqlite, but if i try to apply the same to postgres it is stuck forever. With a simple ps i can see the postres stuck on "create table waiting". There are any best practice?
20
2
0.099668
0
false
22,896,742
0
10,633
2
0
0
22,896,496
Your database is most likely locked by another query. Especially if you do stuff with the pgAdmin GUI, this can happen a lot, I found (truncating tables is especially tricky; sometimes pgAdmin crashes and the db gets stuck). What you want to do is restart the complete PostgreSQL service and try again. Make sure that you: minimize the usage of the pgAdmin GUI, and close your psycopg2 cursors/connections when you don't need them.
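Before restarting everything, it can help to look at which session is holding the lock; a hedged sketch with psycopg2 (the connection string is a placeholder):

```python
import psycopg2

conn = psycopg2.connect("dbname=mydb user=postgres")   # placeholder credentials
cur = conn.cursor()
cur.execute("SELECT pid, state, query FROM pg_stat_activity WHERE state <> 'idle'")
for pid, state, query in cur.fetchall():
    print(pid, state, query[:80])
# an old "idle in transaction" session is usually the one holding the lock;
# it can be ended with SELECT pg_terminate_backend(<pid>)
```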
1
0
0
Alembic migration stuck with postgresql?
4
python,postgresql,alembic
0
2014-04-06T16:08:00.000
I wrote a migration script which works fine on sqlite, but if i try to apply the same to postgres it is stuck forever. With a simple ps i can see the postres stuck on "create table waiting". There are any best practice?
20
3
0.148885
0
false
35,921,043
0
10,633
2
0
0
22,896,496
You can always just restart postgresql.
1
0
0
Alembic migration stuck with postgresql?
4
python,postgresql,alembic
0
2014-04-06T16:08:00.000
I am facing a problem while installing cx_Oracle module. I have installed Oracle Sql developer using which I can connect to any Oracle Server. I have also installed cx_oracle module. Now when I try to import the module I am reciving below mentioned error. import cx_Oracle Traceback (most recent call last): File "", line 1, in import cx_Oracle ImportError: DLL load failed: The specified module could not be found. After googling I can find that they want me to install Oracle client, but since I already have Oracle Sql developer which can act as a oracle client, I am unable to find the difference between two. Can someone please help me out.
0
1
0.099668
0
false
25,801,626
0
112
1
0
0
22,943,247
You will need C-language based Oracle "client" libraries installed on your local machine. (SQL Developer uses Java libraries). To connect to a remote database you can install the Oracle Instant Client.
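Once the Instant Client libraries are on the machine and findable by the OS, a connection attempt looks roughly like this - the host, port, SID and credentials below are placeholders:

    import cx_Oracle

    dsn = cx_Oracle.makedsn("dbhost.example.com", 1521, "ORCL")   # host, port, SID
    conn = cx_Oracle.connect("scott", "tiger", dsn)
    print(conn.version)        # quick sanity check that the client libraries load
    conn.close()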
1
0
0
Error while importing cx_Oracle on windows
2
python-2.7,cx-oracle
0
2014-04-08T16:43:00.000
I am new to Python. Could someone help me figure out how to execute the following commands using cx_Oracle in Python? Spool C:\drop_tables.sql SELECT 'DROP TABLE ' || table_name || ' CASCADE CONSTRAINTS;' FROM user_tables; Spool off @C:\drop_tables.sql I know I can use cursor.execute() for the 2nd command, but for the other non-SQL commands, especially 1 & 3, I have no clue. I'd appreciate it if someone could help. Thanks, Aravi
1
1
1.2
0
true
22,962,131
0
912
1
0
0
22,945,637
So I achieved what I needed in the following way: cur.execute("SELECT table_name FROM user_tables"); result = cur.fetchall(); for row in result: cur.execute('DROP TABLE ' + row[0] + ' CASCADE CONSTRAINTS'). Thanks very much, Luke, for your idea.
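For reference, a cleaned-up version of that loop. The connect string is a placeholder; note that DDL statements cannot take bind variables, which is why the table name (taken from the data dictionary, not user input) is concatenated in:

    import cx_Oracle

    conn = cx_Oracle.connect("scott", "tiger", "dbhost.example.com:1521/ORCL")
    cur = conn.cursor()
    cur.execute("SELECT table_name FROM user_tables")
    for (table_name,) in cur.fetchall():
        cur.execute('DROP TABLE ' + table_name + ' CASCADE CONSTRAINTS')
    cur.close()
    conn.close()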
1
0
0
How to execute non sql commands in python using cx_Oracle
1
python,cx-oracle
0
2014-04-08T18:43:00.000
I'm trying to store user data for a website I'm making in Python. Which is more efficient: -Storing all the user data in one huge table -Storing all the user data in several tables, one per user, in one database. -Storing each user's data in an XML or JSON file, one file per user. Each file has a unique name based on the user id. Also, which is safer? I'm biased towards storing user data in JSON files because that is something I already know how to do. Any advice? I'd post some code I already have, but this is more theoretical than code-based.
0
1
0.197375
0
false
22,951,848
1
242
1
0
0
22,951,806
I don't think efficiency should be part of your calculus. I don't like any of your proposed designs. One table? That's not normalized. I don't know what data you're talking about, but you should know about normalization. Multiple copies? That's not scalable. Every time you add a user you add a table? Sounds like the perfect way to ensure that your user population will be small. Is all the data JSON? Document-based? Maybe you should consider a NoSQL document-based solution like MongoDB.
1
0
1
Storing user data in one big database or in a different file for each user - which is more efficient?
1
python,json,database,performance,security
0
2014-04-09T02:39:00.000
So I have a sqlite3 db, which I access from Python (2.7), where I would like to store more than the 2,000 columns allowed by default. I understand that there is a setting or command, SQLITE_MAX_COLUMN, which I can alter so that my database can store up to ~32,000 columns. My question is: how do I, in practice, set the maximum number of columns to, for example, 30,000 - what is the specific code that I should run? Hope my question is clearly stated. Thanks
1
2
1.2
0
true
22,964,138
0
1,019
1
0
0
22,964,033
That's a compile-time parameter for SQLite itself. As in, you'll need to recompile the SQLite library in order to change it. Nothing you can do in Python will be able to overcome this.
1
0
0
How to actually change the maximum number of columns in SQLITE
1
python-2.7,sqlite,max
0
2014-04-09T13:27:00.000
I have a script running continuously (using a for loop and time.sleep). It performs queries on models after loading Django. Debug is set to False in the Django settings. However, I have noticed that the process eats more and more memory. Before my time.sleep(5), I have added a call to django.db.reset_queries(). The very small leak (a few K at a time) has come to an almost full stop, and the issue appears to be addressed. However, I still can't explain why this solves the issue, since when I look at what reset_queries does, it seems to clear a list of queries kept on each connection in connections.all(). When I try to output the length of these lists, it turns out to be 0. So the reset_queries() method seems to clear lists that are already empty. Is there any reason this would still work nevertheless? I understand reset_queries() is run when using mod_wsgi regardless of whether DEBUG is True or not. Thanks,
3
4
1.2
0
true
23,063,519
1
801
1
0
0
22,976,981
After running a debugger: indeed, reset_queries() is required for a non-web Python script that uses Django to make queries. For every query made in the while loop, I did find its string representation appended to one of the queries lists in connections.all(), even when DEBUG was set to False.
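For reference, a minimal sketch of such a loop - the app, model and the work done are hypothetical placeholders, and Django settings are assumed to be configured before this runs:

    import time
    from django.db import reset_queries
    from myapp.models import Thing          # hypothetical app/model

    def do_work():
        # any ORM activity appends SQL strings to each connection's query log
        list(Thing.objects.filter(active=True))

    while True:
        do_work()
        reset_queries()                      # clear the log so memory use stays flat
        time.sleep(5)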
1
0
0
Is django.db.reset_queries required for a (nonweb) script that uses Django when DEBUG is False?
1
python,django,memory-leaks,daemon
0
2014-04-10T01:25:00.000
I need to store anonymous form data (string, checkbox, FileUpload, ...) for a conference registration site, but ATContentTypes seems a little bit heavyweight to me. Is there a lightweight alternative for saving the inputs? SQL and PloneFormGen are not an option. I need to list, view and edit the data inputs in the backend... Plone 3.3.6, Python 2.4. Thanks
2
0
0
0
false
22,986,589
1
153
1
0
0
22,985,483
One approach is to create a browser view that accepts and retrieves JSON data, and then do all of the form handling in custom HTML. The JSON could be stored in an annotation against the site root, or you could create a simple content type with a single field for holding the JSON and create one per record. You'll need to produce your own list and item view templates, which would be easier with the item-per-JSON-record approach, but that's not a large task. If you don't want to store it in the ZODB, then pick whatever file store you want - like shelve - and dump it there instead.
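A rough sketch of the annotation idea - the annotation key, the view class and the use of simplejson (the stdlib json module only appeared in Python 2.6) are all assumptions for illustration, not something prescribed by Plone:

    from persistent.list import PersistentList
    from zope.annotation.interfaces import IAnnotations
    from Products.Five.browser import BrowserView
    import simplejson

    KEY = 'my.conference.registrations'            # hypothetical annotation key

    class Registrations(BrowserView):
        def _store(self):
            annotations = IAnnotations(self.context)   # context = the site root
            store = annotations.get(KEY)
            if store is None:
                store = annotations[KEY] = PersistentList()
            return store

        def add(self, **data):
            self._store().append(simplejson.dumps(data))

        def entries(self):
            return [simplejson.loads(item) for item in self._store()]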
1
0
0
Plone store form inputs in a lightweight way
3
python,forms,plone
0
2014-04-10T10:33:00.000
Folks, I'm very new to coding and Python. This is my second Stack question ever. Apologies if I'm missing the obvious, but I've researched this and am still stuck. I've been trying to install and use mod_wsgi on CentOS 6.5 and am getting an error when trying to add a VirtualHost to Apache. The mod_wsgi install seemed to go fine and my Apache status says: Server Version: Apache/2.2.26 (Unix) mod_ssl/2.2.26 OpenSSL/1.0.1e-fips DAV/2 mod_wsgi/3.4 Python/2.6.6 mod_bwlimited/1.4 So, it looks to me like mod_wsgi is installed and running. I have also added this line to my pre-main include file for httpd.conf: LoadModule wsgi_module modules/mod_wsgi.so (I have looked and mod_wsgi is in apache/modules.) And I have restarted Apache several times. The error comes when I try to add a VirtualHost to any of the include files for httpd.conf. I always get an error message that says: Invalid command 'WSGIScriptAlias', perhaps misspelled or defined by a module not included in the server configuration If I try to use a VirtualHost with a WSGIDaemonProcess reference, I get a similar error message about WSGIDaemonProcess. From reading on Stack and other places, it sounds like I don't have mod_wsgi installed, or I don't have the Apache config file loading it, or that I haven't restarted Apache since doing those things. But I really think I have taken all of those steps. What am I missing here? Thanks! Marc :-)
0
1
0.197375
0
false
23,104,951
1
1,164
1
0
0
22,992,857
I think I figured it out. I needed to load the module and define the VirtualHost in the same include file. I was trying to load the module in the first include file and define the VirtualHost in the second. Putting them both in one file kept the error from happening.
1
0
0
mod_wsgi Error on CentOS 6.5
1
python,linux,apache,mod-wsgi
0
2014-04-10T15:48:00.000
How do you set the header/footer height in openpyxl? I can't find a setting for it. In a spreadsheet there are settings for height and autofit height, but I don't see a means of setting either of those in openpyxl.
0
0
0
0
false
23,135,653
0
563
1
0
0
23,068,670
Look at the HeaderFooter class in the worksheet section of the code.
1
0
0
how to set the header/footer height in openpyxl
1
python,openpyxl
0
2014-04-14T19:32:00.000
I'm trying to stop duplicates of a database entity called "Post" in my program. In order to do this I want to append a number like "1" or "2" next to it. In other words: helloworld helloworld1 helloworld2 In order to do this I need to query the database for postid's starting with helloworld. I read that GQL doesn't support the LIKE operation, so what can I do instead? Thanks!
0
0
0
0
false
23,071,550
0
117
1
0
0
23,070,947
Could you make the number a separate field? Then you won't have to search by prefix.
1
0
0
How to look for a substring GQL?
2
python,google-app-engine,webapp2,gql
0
2014-04-14T21:39:00.000
I need the ID of an object which I've just added using session.add(). I need the auto-increment ID of this object before committing the session. If I access instance.id, I get None. Is there a way to get the ID of an object without committing?
9
13
1.2
0
true
23,080,688
0
6,464
1
0
0
23,076,483
Simple answer: call session.flush().
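A self-contained illustration - the model and the in-memory SQLite database are just for the demo:

    from sqlalchemy import create_engine, Column, Integer, String
    from sqlalchemy.orm import sessionmaker
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Item(Base):                        # throwaway model for the demo
        __tablename__ = 'items'
        id = Column(Integer, primary_key=True)
        name = Column(String(50))

    engine = create_engine('sqlite://')      # in-memory database
    Base.metadata.create_all(engine)
    session = sessionmaker(bind=engine)()

    obj = Item(name='example')
    session.add(obj)
    print(obj.id)        # None -- nothing has hit the database yet
    session.flush()      # emits the INSERT inside the still-open transaction
    print(obj.id)        # populated; session.rollback() would still discard the row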
1
0
0
SQLAlchemy get added object's id without committing session
1
python,sqlalchemy
0
2014-04-15T06:51:00.000
Using postgresql 9.1.9 in a pylons 1.0 project with mod_wsgi. Getting an "out of memory" error. The query returns about 1.4 million rows and it crashes on query.all(). The column used for filtering is indexed. In postgresql.conf, shared_buffers=24MB, max_connections=100. Can you please suggest a workaround?
0
1
0.197375
0
false
23,084,850
0
292
1
0
0
23,077,972
Query is of about 1.4 million lines and it crashes on query.all(). When you say it crashes: do you mean the Python client executable, or the PostgreSQL server? I strongly suspect the crash is in Python. I'd say you're reading all results into memory at once, and they just don't fit. What you will need to do is progressively read the query results, process them, and discard them from memory. In psycopg2 you do this by iterating over the cursor object, or by using cursor.fetchone(). Pylons should offer similar methods.
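A sketch of the progressive-read approach with psycopg2 - the connection details, table and filter are placeholders. Giving the cursor a name makes psycopg2 use a server-side cursor, so rows arrive in batches instead of all 1.4 million at once:

    import psycopg2

    conn = psycopg2.connect(dbname="mydb", user="me", host="localhost")
    cur = conn.cursor(name="big_query")    # named => server-side cursor
    cur.itersize = 10000                   # rows fetched per round trip

    cur.execute("SELECT * FROM readings WHERE sensor_id = %s", (42,))
    for row in cur:
        # process one row at a time and let it be garbage-collected
        pass

    cur.close()
    conn.close()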
1
0
0
MemoryError for postgresql 9.1.9
1
python,postgresql,mod-wsgi
0
2014-04-15T08:10:00.000
I want to connect to a MySQL database with Python, then query a number corresponding to an image, and load this image in Qt. From what I found online, it is suggested not to use the MySQL database to store the image, but instead to store a file location on the server. If this is the case, can I load the image (do I have to download it?) into Qt using MySQL, or do I have to open another connection with FTP, download the image to a folder, and then load it with Qt? If there are any resources on this type of workflow I would appreciate it.
0
1
0.197375
0
false
23,191,497
0
421
1
0
0
23,190,976
You don't need to download the file using FTP (or the like) to load it into Qt. Assuming the database stores the correct file path to the image, you can just use the same functionality once you get the file path, i.e. you anyway only need the file path to load the image into Qt. There is nothing special you would do by downloading the image itself. If the database is on a remote server, a possible approach is to use the JDBC API to access the database, get the image as a binary file and then serialize it, which can be transferred over the network.
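A rough sketch of that workflow with MySQLdb and PyQt4 - the table, column and connection details are invented, and it assumes the stored path is readable from the machine running the client:

    import MySQLdb
    from PyQt4.QtGui import QApplication, QLabel, QPixmap

    app = QApplication([])

    db = MySQLdb.connect(host="localhost", user="me", passwd="secret", db="photos")
    cur = db.cursor()
    cur.execute("SELECT file_path FROM images WHERE id = %s", (42,))
    (path,) = cur.fetchone()
    cur.close()
    db.close()

    label = QLabel()
    label.setPixmap(QPixmap(path))   # Qt loads the image straight from that path
    label.show()
    app.exec_()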
1
1
0
mySQL connection and loading images to Qt
1
python,mysql,pyqt
0
2014-04-21T04:50:00.000
I searched around and couldn't really find any information on this. Basically i have a database "A" and a database "B". What i want to do is create a python script (that will likely run as a cron job) that will collect data from database "A" via sql, perform an action on it, and then input that data into database "B". I have written it using functions something along the lines of: Function 1 gets the date the script was last run Function 2 Gets the data from Database "A" based on function 1 Function 3-5 Perform the needed actions Function 6 Inserts data into Database "B" My question is, it was mentioned to me that i should use a Class to do this rather than just functions. The only problem is, I am honestly a bit hazy on Classes and when to use them. Would a Class be better for this? Or is writing this out as functions that feed into each other better? If i would use a Class, could you tell me how it would look?
3
4
0.664037
0
false
23,237,519
0
156
1
0
0
23,237,444
Would a Class be better for this? Probably not. Classes are useful when you have multiple, stateful instances that have shared methods. Nothing in your problem description matches those criteria. There's nothing wrong with having a script with a handful of functions to perform simple data transfers (extract, transform, store).
1
0
1
Collecting Data from Database, functions vs classes
1
python,mysql,database,class,oop
0
2014-04-23T07:17:00.000
I have made a database file using SQL commands in Python. I have used quite a lot of foreign keys as well, but I am not sure how to display this data in Qt with Python. Any ideas? I would also like the user to be able to add/edit/delete data.
1
1
0.099668
0
false
23,281,662
0
4,532
1
0
0
23,268,179
This question is a bit broad, but I'll try answering it anyway. Qt does come with some models that can be connected to a database. Specifically classes like QSqlTableModel. If you connect such a model to your database and set it as the model for a QTableView it should give you most of the behavior you want. Unfortunately I don't think I can be any more specific than that. Once you have written some code, feel free to ask a new question about a specific issue (remember to include example code!)
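A minimal sketch of the QSqlTableModel/QTableView pairing (PyQt4 names; the database file and table are hypothetical):

    from PyQt4.QtGui import QApplication, QTableView
    from PyQt4.QtSql import QSqlDatabase, QSqlTableModel

    app = QApplication([])

    db = QSqlDatabase.addDatabase("QSQLITE")
    db.setDatabaseName("mydata.db")                         # the SQLite file you created
    db.open()

    model = QSqlTableModel()
    model.setTable("customers")                             # hypothetical table name
    model.setEditStrategy(QSqlTableModel.OnFieldChange)     # edits go straight back to the db
    model.select()

    view = QTableView()
    view.setModel(model)      # in-place editing works out of the box;
    view.show()               # model.insertRow()/removeRow() handle add and delete
    app.exec_()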
1
1
0
How to display data from a database file onto pyqt so that the user can add/delete/edit the data?
2
python,sql,qt,pyqt
0
2014-04-24T11:55:00.000
Is there a way to use the pandas to_excel function to write to the desktop, no matter which user is running the script? I've found answers for VBA but nothing for Python or pandas.
2
1
0.099668
1
false
23,279,735
0
781
1
0
0
23,279,546
This depends on your operating system. You're saying you'd like to save the file on the desktop of the user who is running the script, right? On Linux (not sure if this is true of every distribution) you could pass in "~/Desktop/my_file.xls" as the path where you're saving the file.
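A small sketch using os.path.expanduser so the path resolves for whoever runs the script; the data frame is a placeholder, and the folder name "Desktop" is the usual capitalisation on Windows and most Linux desktops:

    import os
    import pandas as pd

    df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})            # placeholder data

    desktop = os.path.join(os.path.expanduser("~"), "Desktop")
    df.to_excel(os.path.join(desktop, "my_file.xlsx"), index=False)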
1
0
0
to_excel on desktop regardless of the user
2
python,pandas
0
2014-04-24T20:51:00.000
This is a very, very strange issue. I have quite a large Excel file (the contents of which I cannot discuss as it is sensitive data) that is a .xlsx and IS a valid Excel file. When I download it from my email and save it on my desktop and try to open the workbook using xlrd, xlrd throws an AssertionError and does not show me what went wrong. When I open the file using my file browser, then save it (without making any changes), it works perfectly with xlrd. Has anyone faced this issue before? I tried passing various flags to the open_workbook function to no avail and I tried googling for the error. So far I haven't found anything. The method I used was as follows: file = open('bigexcelfile.xlsx') fileString = file.read() wb = open_workbook(file_contents=fileString) Please help! The error is as follows Traceback (most recent call last): File "./varify/samples/resources.py", line 354, in post workbook = xlrd.open_workbook(file_contents=fileString) File "/home/vagrant/varify-env/lib/python2.7/site-packages/xlrd/__init__.py", line 416, in open_workbook ragged_rows=ragged_rows, File "/home/vagrant/varify-env/lib/python2.7/site-packages/xlrd/xlsx.py", line 791, in open_workbook_2007_xml x12sheet.process_stream(zflo, heading) File "/home/vagrant/varify-env/lib/python2.7/site-packages/xlrd/xlsx.py", line 528, in own_process_stream self_do_row(elem) File "/home/vagrant/varify-env/lib/python2.7/site-packages/xlrd/xlsx.py", line 722, in do_row assert tvalue is not None AssertionError
6
2
0.197375
0
false
29,564,738
0
2,289
1
0
0
23,346,771
Rename or save your Excel file as .xls instead of .xlsx. Thank you.
1
0
0
xlrd cannot read xlsx file downloaded from email attachment
2
python,excel,xlrd,fileparsing
0
2014-04-28T16:50:00.000
I have installed Python version 3.4.0 and I would like to do a project with a MySQL database. I downloaded and tried installing MySQLdb, but it wasn't successful for this version of Python. Any suggestions on how I could fix this problem and install it properly?
53
0
0
0
false
49,185,013
0
143,430
1
0
0
23,376,103
For Fedora and Python 3, use: dnf install mysql-connector-python3
1
0
0
Python 3.4.0 with MySQL database
11
python,mysql,python-3.x,mysql-python,python-3.4
0
2014-04-29T22:01:00.000
I'm running a Python script that makes modifications in a specific database. I want to run a second script once there is a modification in my database (local server). Is there any way to do that? Any help would be very appreciated. Thanks!
8
3
0.197375
0
false
23,382,595
0
20,648
1
0
0
23,382,499
You can use stored procedures in your database; a lot of RDBMS engines support one or more programming languages for writing them. AFAIK PostgreSQL also supports signalling an external process. Google something like 'stored procedures in Python for PostgreSQL' or 'postgresql trigger call external program'.
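If the database were PostgreSQL, LISTEN/NOTIFY is one concrete form of the "signal an external process" idea - the table, channel and script names below are all invented, and the trigger setup would normally be done once, not on every run:

    import select
    import subprocess
    import psycopg2
    import psycopg2.extensions

    conn = psycopg2.connect(dbname="mydb", user="me")
    conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
    cur = conn.cursor()

    # a trigger fires a notification whenever the watched table changes
    cur.execute("""
        CREATE OR REPLACE FUNCTION notify_change() RETURNS trigger AS $$
        BEGIN
            NOTIFY table_changed;
            RETURN NULL;
        END;
        $$ LANGUAGE plpgsql;
        CREATE TRIGGER watch_changes AFTER INSERT OR UPDATE OR DELETE
            ON my_table FOR EACH STATEMENT EXECUTE PROCEDURE notify_change();
    """)

    cur.execute("LISTEN table_changed;")
    while True:
        if select.select([conn], [], [], 60) != ([], [], []):
            conn.poll()
            while conn.notifies:
                conn.notifies.pop()
                subprocess.call(["python", "second_script.py"])   # hypothetical script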
1
0
0
Run python script on Database event
3
python,mysql,database,events
0
2014-04-30T07:49:00.000
I'm trying to use the data collected by a form I made in a sqlite query. In this form I've made a spin button which accepts any numeric input (i.e. either 2,34 or 2.34) and sends it in the form of 2,34, which Python sees as a str. I've already tried to float() the value but it doesn't work. It seems to be a locale problem, but somehow locale.setlocale(locale.LC_ALL, '') is unsupported (says WinXP). All this happens even though I haven't set anything to Greek (language, locale, etc.), but somehow Windows does its magic. Can someone help? PS: Of course my script starts with # -*- coding: utf-8 -*- so as to have anything in Greek (even comments) in the code.
0
1
1.2
0
true
23,389,055
0
48
1
0
0
23,388,647
AFAIK, WinXP supports setlocale just fine. If you want to do locale-aware conversions, try using locale.atof('2,34') instead of float('2,34').
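A small illustration - the exact behaviour depends on the system's regional settings, so this is only indicative:

    import locale

    locale.setlocale(locale.LC_NUMERIC, "")   # adopt the system's numeric conventions
    value = locale.atof("2,34")               # -> 2.34 under a comma-decimal locale
    print(value)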
1
1
0
pygtk spinbutton "greek" floating point
1
python,sqlite,pygtk
0
2014-04-30T12:53:00.000