Dataset columns (name: dtype, min to max):
Question: stringlengths, 25 to 7.47k
Q_Score: int64, 0 to 1.24k
Users Score: int64, -10 to 494
Score: float64, -1 to 1.2
Data Science and Machine Learning: int64, 0 to 1
is_accepted: bool, 2 classes
A_Id: int64, 39.3k to 72.5M
Web Development: int64, 0 to 1
ViewCount: int64, 15 to 1.37M
Available Count: int64, 1 to 9
System Administration and DevOps: int64, 0 to 1
Networking and APIs: int64, 0 to 1
Q_Id: int64, 39.1k to 48M
Answer: stringlengths, 16 to 5.07k
Database and SQL: int64, 1 to 1
GUI and Desktop Applications: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
Title: stringlengths, 15 to 148
AnswerCount: int64, 1 to 32
Tags: stringlengths, 6 to 90
Other: int64, 0 to 1
CreationDate: stringlengths, 23 to 23
I am trying to reinstall one of my apps on my project site. These are the steps that I have followed to do so: removing the name of the installed app from settings.py; manually deleting the app folder from the project folder; manually removing the data tables from PostgreSQL; copying the app folder back into the project folder, making sure that all files except __init__.py are removed; run python manage.py sqlmigrate app_name 0001; run python manage.py makemigrations app_name; run python manage.py migrate app_name; run python manage.py makemigrations; run python manage.py migrate. However, after all these steps the message I am getting is that there are "no changes detected" and the data tables have not been recreated in the PostgreSQL database. Am I missing some additional steps?
1
0
0
0
false
33,704,110
1
205
1
0
0
33,703,866
I think I might have managed to solve the problem. The command python manage.py sqlmigrate app_name 0001 produces the SQL statements required for the table creation. Thus, I copied and pasted the output into the PostgreSQL console and got the tables created. It seems to work for now, but I am not sure if there will be repercussions later.
1
0
0
Reinstalling Django App - Data tables not re-created
1
python,django
0
2015-11-14T00:37:00.000
I would like to push sensor data from the raspberry pi to localhost phpmyadmin. I understand that I can install the mysql and phpmyadmin on the raspberry pi itself. But what I want is to access my local machine's database in phpmyadmin from the raspberry pi. Would it be possible?
0
1
0.099668
0
false
33,705,227
0
1,407
2
0
0
33,704,183
Well, from what I understand, you'd like to save the sensor data arriving in your Raspberry Pi to a database and access it from another machine. What I suggest is, install a mysql db instance and phpmyadmin in your Raspberry Pi and you can access phpmyadmin from another machine in the network by using the RPi's ip address. Hope this is what you wanted to do.
1
0
0
Push sensor data from raspberry pi to local host phpmyadmin database
2
mysql,python-2.7,phpmyadmin,raspberry-pi2
1
2015-11-14T01:29:00.000
I would like to push sensor data from the raspberry pi to localhost phpmyadmin. I understand that I can install the mysql and phpmyadmin on the raspberry pi itself. But what I want is to access my local machine's database in phpmyadmin from the raspberry pi. Would it be possible?
0
0
0
0
false
33,716,584
0
1,407
2
0
0
33,704,183
Sure, as long as they're on the same network and you have granted proper permission, all you have to do is use the proper hostname or IP address of the MySQL server (what you call the local machine). In whatever utility or custom script you have that writes data, use the networked IP address instead of 127.0.0.1 or localhost for the database host. Depending on how you've installed MySQL, you may not have a user that listens for non-local connections, incoming MySQL connections may be blocked at the firewall, or your MySQL server may not listen for incoming network connections. You've asked about using phpMyAdmin from the Pi, accessing your other computer, which doesn't seem to make much sense to me (I'd think you'd want to run phpMyAdmin on your desktop computer, not a Pi), but if you've got a GUI and compatible web browser running on the Pi then you'd just have phpMyAdmin and the webserver run on the same desktop computer that has MySQL and access that hostname and folder from the Pi (such as http://192.0.2.15/phpmyadmin). If you're planning to make the MySQL server itself public-facing, you should really re-think that decision unless you know why that's a bad idea and how to properly secure it (but that may not be a concern; for instance I have one at home that is available on the local network, but my router blocks any incoming connections from external sources).
1
0
0
Push sensor data from raspberry pi to local host phpmyadmin database
2
mysql,python-2.7,phpmyadmin,raspberry-pi2
1
2015-11-14T01:29:00.000
Can I store PDF files in the database, as objects or blobs, with Flask-Admin? I cannot find any reference in the documentation. Thanks. Cheers
2
-2
-0.197375
0
false
33,724,438
1
3,809
1
0
0
33,722,132
Flask-Admin doesn't store anything. It's just a window into the underlying storage. So yes, you can have blob fields in a Flask-Admin app -- as long as the engine of your database supports blob types. In case further explanation is needed, Flask-Admin is not a database. It is an interface to a database. In a Flask-Admin app, you connect to a pre-existing database. This might be an SQLite database, PostgreSQL, MySQL, MongoDB or any of a variety of databases.
1
0
0
Storing a PDF file in DB with Flask-admin
2
python,mongodb,object,flask-sqlalchemy,flask-admin
0
2015-11-15T16:40:00.000
Is it possible to use the same formatting variable for formatting multiple Excel workbooks using XlsxWriter? If yes, how? Currently I am able to use a formatting variable for a single Excel workbook, as I am initializing it using the workbook.add_format method, but this variable is bound to that workbook only.
1
1
0.197375
0
false
33,751,625
0
56
1
0
0
33,749,918
Is it possible to use the same formatting variable for formatting multiple Excel workbooks using XlsxWriter? No. A format object is created by, and thus tied to, a workbook object. However, there are other ways of doing what you need to do, such as storing the properties for the format in a dict and using that to initialize several format objects in the same way.
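A minimal sketch of the dict-based approach described above, using hypothetical file names; the same property dict is handed to each workbook's add_format(), so every workbook gets its own equivalent Format object.

```python
import xlsxwriter

# Shared format properties kept in a plain dict (not tied to any workbook).
bold_red = {'bold': True, 'font_color': 'red', 'num_format': '0.00'}

for filename in ('report_a.xlsx', 'report_b.xlsx'):  # hypothetical file names
    workbook = xlsxwriter.Workbook(filename)
    worksheet = workbook.add_worksheet()
    # Each workbook creates its own Format object from the same properties.
    fmt = workbook.add_format(bold_red)
    worksheet.write(0, 0, 3.14159, fmt)
    workbook.close()
```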
1
0
1
Use same formatting variable for multiple Excel workbooks
1
python,format,xlsxwriter
0
2015-11-17T05:39:00.000
I have a database in MS Access. I am trying to query one table to Python using pypyodbc. I get the following error message: ValueError: could not convert string to float: E+6 The numbers in the table are fairly big, with up to ten significant figures. The error message tells me that MS Access is formatting them in scientific notation and Python is reading them as strings. The fields in the table are formatted as singles with two decimal places. When I see the numbers in the table in the database they are not formatted using scientific notation, but the error message seems to indicate that they are. Furthermore, if I change the numbers in the table (at least for a test row) to small numbers (integers from 1 to 5) the query runs, which supports my theory that the problem is the scientific formatting of big numbers. Any ideas on how to: write into the database table in a way that the numbers are not formatted in scientific notation, or make pypyodbc retrieve numbers as such and ignore any scientific notation?
3
0
0
0
false
33,894,451
0
1,522
1
0
0
33,769,143
As I was putting together test files for you to try to reproduce, I noticed that two of the fields in the table were set to Single type rather than Double. Changed them to Double and that solved the problem. Sorry for the bother and thanks for the help.
1
0
0
Issue querying from Access database: "could not convert string to float: E+6"
3
python,ms-access,pypyodbc
0
2015-11-17T23:35:00.000
I'm using SQLAlchemy with SQL Server as the database engine. I have queries that take a long time (approximately 10 seconds). When I send concurrent requests to the database, the response time grows even more (exactly: time = execution time * request count). I increased the connection pool but nothing changed.
2
1
1.2
0
true
33,991,887
0
403
1
0
0
33,775,269
If the issue is only with threads and not concurrent processes, then the DBAPI in use would be suspect. I don't see which driver you are using, but perhaps it is not releasing the GIL while it waits for a server response. Produce a test case that isolates it to just that driver running in two threads, and then report it as a bug on their system.
1
0
0
SQL alchemy is slow in concurrent connection
1
python,sql-server,sqlalchemy
0
2015-11-18T08:41:00.000
I have a Python program that connects to an MSSQL database using an ODBC connection. The Python library I'm using is pypyodbc. Here is my setup: Windows 8.1 x64 SQL Server 2014 x64 Python 2.7.9150 PyPyODBC 1.3.3 ODBC Driver: SQL Server Native Client 11.0 The problem I'm having is that when I query a table with a varchar(max) column, the content is being truncated. I'm new to pypyodbc and I've been searching around like crazy and can't find anything on how to prevent this from happening in pypyodbc or even pyodbc. At least not with the search terms I've been using and I don't know what other phrases to try. I even tried adding SET TEXTSIZE 2147483647; to my SQL query, but the data is still being truncated. How do I prevent this from happening? Or can you point me in the right direction, please? UPDATE: So, I tried performing a cast in my SQL query. When I do CAST(my_column as VARCHAR(MAX)) it truncates at the same position. However, if I do CAST(my_column as VARCHAR(8000)) it gives me a larger set of the text, but it's still truncating some of the contents. If I try to do anything larger than 8000 I get an error saying that 8000 is the largest I can use. Anyone know what might be going on here? It seem strange that using MAX won't work.
8
8
1
0
false
53,201,310
0
5,189
1
0
0
33,878,291
The "{SQL Server}" doesn't work in my case. "{SQL Server}" works perfectly well if the database is on my local machine. However, if I tried to connect to the remote server, always the error message below would return: pypyodbc.DatabaseError: ('08001', '[08001] [Microsoft][ODBC SQL Server Driver][DBNETLIB]SSL Security error') For those who are still struggling in the VARCHAR(MAX) truncation, a brilliant workaround my colleague came out with is to CAST the VARCHAR(MAX) to TEXT type. Let's say we have a column called note and its data type is VARCHAR(MAX), instead of using SELECT note FROM notebook, writing in SELECT CAST(note AS TEXT) FROM notebook. Hope it helps!
1
0
0
How to get entire VARCHAR(MAX) column with Python pypyodbc
3
python,sql-server,pyodbc,pypyodbc
0
2015-11-23T18:45:00.000
I have 2 different Python processes (running from 2 separate terminals) running at the same time, both accessing and updating MySQL. It crashes when they are using the same table at the same time. Any suggestions on how to fix it?
0
0
0
0
false
34,680,890
0
41
1
0
0
33,909,039
Are you using myisam or innodb? I suggest using innodb since it has a better table/record locking flexibility for multiple simultaneous updates.
1
0
1
python accessing and updating mysql from simultaneously running processes
1
python,mysql,python-2.7,mysql-python
0
2015-11-25T05:26:00.000
I want to connect my project on App Engine with Google Cloud SQL, but I get an error saying I exceeded the maximum of 12 connections in Python. I have a Cloud SQL D8 instance with 1000 simultaneous connections. How can I change this connection limit? I'm using Django and Python. Thanks.
0
2
0.379949
0
false
33,978,178
1
223
1
1
0
33,977,130
Each single app engine instance can have no more than 12 concurrent connections to Cloud SQL -- but then, by default, an instance cannot service more than 8 concurrent requests, unless you have deliberately pushed that up by setting the max_concurrent_requests in the automatic_scaling stanza to a higher value. If you've done that, then presumably you're also using a hefty instance_class in that module (perhaps the default module), considering also that Django is not the lightest-weight or fastest of web frameworks; an F4 class, I imagine. Even so, pushing max concurrent requests above 12 may result in latency spikes, especially if serving each and every request also requires other slow, heavy-weight operations such as MySQL ones. So, consider instead using many more instances, each of a lower (cheaper) class, serving no more than 12 requests each (again, assuming that every request you serve will require its own private connection to Cloud SQL -- pooling those up might also be worth considering). For example, an F2 instance costs, per hour, half as much as an F4 one -- it's also about half the power, but, if serving half as many user requests, that should be OK. I presume, here, that all you're using those connections for is to serve user requests (if not, you could dispatch other, "batch-like" uses to separate modules, perhaps ones with manual or basic scheduling -- but, that's another architectural issue).
1
0
0
How to change the limit of 12 connections from App Engine to Cloud SQL
1
python,google-app-engine,google-cloud-sql
0
2015-11-28T22:26:00.000
I'm self-teaching programming through the plethora of online resources to build a startup idea I've had for awhile now. Currently, I'm using the SaaS platform at sharetribe.com for my business but I'm trying to build my own platform as share tribe does not cater to the many options I'd like to have available to my users. I'm try to setup the database at this time and I'm currently working on the architecture. I plan to use MySQL for my database. The website will feature an online inventory management system where users can track all their items, update availability, pricing, delivery, payments, analytical tools, etc. This is so the user can easily monitor their current items, create new listings, etc. so it creates more of a "business" feel for the users. Here is a simple explanation of the work flow. Users will create their profile having access to rent or rent out their items. Once their account is created they can search listing based on the category, subcategory, location, price, etc. When rental is placed, the user will request the rental at specified time, once approved, the rental process will begin. My question is how should I set up the infrastructure/architecture for the database? I have this as my general workings but I know I'm missing a lot of queries and criteria to suit the application. User queries: -user_ID -name -email -username -encrypted_password -location -social_media -age -photo Product queries: -item_ID -user_ID -category_ID -subcategory_ID -price -description -availability -delivery_option As you can see, I'm new to this but as many of the resources I've used for my research, all have said the best way to learn is to do. I'm probably taking on a bigger project that I should for my beginning stages but there will be plenty of mistakes made that will assist my learning. Any and all recommendations and assistance are appreciated. For general knowledge, I intend to utilize Rails as my server language. If you recommend Python/Django over Ruby/Rails, could you please explain why this would be more beneficial to me? Thanks.
0
0
0
0
false
34,133,045
1
657
1
0
0
34,129,887
Read a good book on software development methodologies before you get into this. Then read a simple online tutorial on MySQL. After that it will be a lot easier to do this.
1
0
0
How do I build the database for my P2P rental marketplace?
2
python,mysql,ruby-on-rails,ruby,database
0
2015-12-07T09:07:00.000
We are developing a b2b application with django. For each client, we launch a new virtual server machine and a database. So each client has a separate installation of our application. (We do so because by the nature of our application, one client may require high use of resources at certain times, and we do not want one client's state to affect the others) Each of these installations are binded to a central repository. If we update the application code, when we push to the master branch, all installations detect this, pull the latest version of the code and restart the application. If we update the database schema on the other hand, currently, we need to run migrations manually by connecting to each db instance one by one (settings.py file reads the database settings from an external file which is not in the repo, we add this file manually upon installation). Can we automate this process? i.e. given a list of databases, is it possible to run migrations on these databases with a single command?
1
1
0.099668
0
false
34,878,492
1
510
2
0
0
34,197,011
First, I'd look very hard for a way to launch a script on the client side that does what masnun suggests. Second, if that does not work, then I'd try the following: configure on your local machine all client databases in the settings variable DATABASES; make sure you can connect to all the client databases (this may need some fiddling); then run the "manage.py migrate" process with the extra flag --database=mydatabase (where "mydatabase" is the handle provided in the configuration) for EACH client database. I have not tried this, but I don't see why it wouldn't work ...
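A minimal sketch of that approach, assuming the client databases have already been added to DATABASES in settings.py; it simply runs the migrate command once per configured alias. The script name and invocation are illustrative only.

```python
# run_all_migrations.py -- hypothetical helper; run with
#   python manage.py shell < run_all_migrations.py
# or adapt it into a custom management command.
from django.conf import settings
from django.core.management import call_command

for alias in settings.DATABASES:
    print('Migrating database: %s' % alias)
    # Equivalent to: python manage.py migrate --database=<alias> --noinput
    call_command('migrate', database=alias, interactive=False)
```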
1
0
0
Running django migrations on multiple databases simultaneously
2
python,django,django-migrations
0
2015-12-10T08:32:00.000
We are developing a b2b application with django. For each client, we launch a new virtual server machine and a database. So each client has a separate installation of our application. (We do so because by the nature of our application, one client may require high use of resources at certain times, and we do not want one client's state to affect the others) Each of these installations are binded to a central repository. If we update the application code, when we push to the master branch, all installations detect this, pull the latest version of the code and restart the application. If we update the database schema on the other hand, currently, we need to run migrations manually by connecting to each db instance one by one (settings.py file reads the database settings from an external file which is not in the repo, we add this file manually upon installation). Can we automate this process? i.e. given a list of databases, is it possible to run migrations on these databases with a single command?
1
3
0.291313
0
false
34,197,250
1
510
2
0
0
34,197,011
If we update the application code, when we push to the master branch, all installations detect this, pull the latest version of the code and restart the application. I assume that you have some sort of automation to pull the codes and restart the web server. You can just add the migration to this automation process. Each of the server's settings.py would read the database details from the external file and run the migration for you. So the flow should be something like: Pull the codes Migrate Collect Static Restart the web server
1
0
0
Running django migrations on multiple databases simultaneously
2
python,django,django-migrations
0
2015-12-10T08:32:00.000
Python, Twisted and SO newbie. I am writing a program that organises seating across multiple rooms. I have only included related columns from the tables below. Basic MySQL tables: Table (id); Seat (id, table_id, name); Card (seat_id). The Seat and Table tables are pre-populated with the 'name' columns initially NULL. Stage One: I want to update a seat's name by finding the first available seat given a group of table ids. Stage Two: I want to be able to get the updated row id from Stage One (because I don't already know this) to add to the Card table. Names can be assigned to more than one seat, so I can't just find a seat that matches a name. I can do Stage One but have no idea how to do Stage Two, because lastrowid only works for inserts, not updates. Any help would be appreciated. Using twisted.enterprise.adbapi if that helps. Cheers
3
0
0
0
false
35,131,551
0
311
1
1
0
34,213,706
I think the best way to accomplish this is to first make a select for the id (or ids) of the row/rows you want to update, then update the row with a WHERE condition matching the id of the item to update. That way you are certain that you only updated the specific item. An UPDATE statement can update multiple rows that matches your criteria. That is why you cannot request the last updated id by using a built in function.
1
0
0
Python Twistd MySQL - Get Updated Row id (not inserting)
1
python,mysql,twisted
0
2015-12-10T23:32:00.000
I have a python code which needs to retrieve and store data to/from a database on a LAMP server. The LAMP server and the device running the python code are never on the same internet network. The devices running the python code can be either a Linux, Windows or a MAC system. Any idea how could I implement this?
0
-1
-0.197375
0
false
34,242,082
0
155
1
0
0
34,242,017
"Are never on the same internet network" -- let me clarify the question: the problem is that they are never on the same network. First you need to fix the network issue, by adding routing between the two sides that you want to communicate; this has no relation to Python or the LAMP stack. Let me assume your DB is MySQL. If you can make that DB accessible from outside servers, you can just talk to that DB directly from another server. As another solution, I recommend you use an API that covers all requests on top of the DB; then you can talk with that API to handle the data.
1
0
0
How to fetch or store data into a database on a LAMP server from devices over the internet?
1
python,mysql,database,lamp,mysql-python
0
2015-12-12T16:11:00.000
As the title says, I am trying to run Flask alongside a PHP app. Both of them are running under Apache 2.4 on a Windows platform. For Flask I'm using wsgi_module. The Flask app is actually an API. The PHP app controls user login and therefore users' access to the API. Keep in mind that I cannot drop the use of the PHP app because it controls much more than the login functionality [invoicing, access logs etc]. The flow is: user logs in via the PHP app; PHP stores user data in a database [user id and a flag indicating if the user is logged in]; user makes a request to the Flask API; Flask checks if the user data are in the database: if not, it redirects to the PHP login page, otherwise it lets the user use the Flask API. I know that between steps 2 and 3, PHP has to share a session variable/cookie [user id] with Flask in order for Flask to check if the user is logged in. Whatever I try fails; I cannot pass PHP session variables to Flask. I know that I can't pass PHP variables to Flask, but I'm not sure about that. Has anyone tried something similar? What kind of user login strategy should I implement for the above setup?
2
1
1.2
0
true
34,272,457
1
1,770
1
0
0
34,266,083
I'm not sure this is the answer you are looking for, but I would not try to have the Flask API access session data from PHP. Sessions and API do not go well together, a well designed API does not need sessions, it is instead 100% stateless. What I'm going to propose assumes both PHP and Flask have access to the user database. When the user logs in to the PHP app, generate an API token for the user. This can be a random sequence of characters, a uuid, whatever you want, as long as it is unique. Write the token to the user database, along with an expiration date if you like. The login process should pass that token back to the client (use https://, of course). When the client needs to make an API call, it has to send that token in every request. For example, you can include it in the Authorization header, or you can use a custom header as well. The Flask API gets the token and searches the user database for it. If it does not find the token, it returns 401. If the token is found, it now knows who the user is, without having to share sessions with PHP. For the API endpoints you will be looking up the user from the token for every request. Hope this helps!
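A minimal sketch of the token check on the Flask side, assuming the shared user table is reachable via Flask-SQLAlchemy (an assumption, not something the answer prescribes); the model, connection string and route names are hypothetical, and the token is read from the Authorization header as described above.

```python
from flask import Flask, request, abort, jsonify, g
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql://user:pw@localhost/appdb'  # assumption
db = SQLAlchemy(app)

class User(db.Model):  # hypothetical model mirroring the shared user table
    id = db.Column(db.Integer, primary_key=True)
    api_token = db.Column(db.String(64), unique=True, index=True)

@app.before_request
def require_token():
    # Token issued by the PHP login flow and sent on every API call.
    token = request.headers.get('Authorization')
    user = User.query.filter_by(api_token=token).first() if token else None
    if user is None:
        abort(401)  # the client can redirect to the PHP login page on 401
    g.current_user = user

@app.route('/api/ping')
def ping():
    return jsonify(user_id=g.current_user.id)
```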
1
0
0
Run Flask alongside PHP [sharing session]
1
php,python,session,flask
1
2015-12-14T11:37:00.000
I need to create random entries in a SQL database with a given schema, with the help of the Python programming language. Is there a simple way to do that, or do I have to write my own generators?
0
1
0.066568
0
false
63,923,673
0
1,307
1
0
0
34,301,518
You can also use Faker: just pip install faker, then go through the documentation and check it out.
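A minimal sketch using Faker together with the standard sqlite3 module, under a hypothetical two-column schema; swap in your own schema and database driver as needed.

```python
import sqlite3
from faker import Faker

fake = Faker()
conn = sqlite3.connect('random_test_data.db')  # hypothetical database file
conn.execute('CREATE TABLE IF NOT EXISTS users (name TEXT, email TEXT)')

# Generate 100 random rows matching the schema.
rows = [(fake.name(), fake.email()) for _ in range(100)]
conn.executemany('INSERT INTO users (name, email) VALUES (?, ?)', rows)
conn.commit()
conn.close()
```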
1
0
0
How to create random entries in database with python
3
python,sql,generator
0
2015-12-15T23:38:00.000
I have created groups to give access rights and everything seems fine, but I want to customize access rights for the Issue module. When a user of a particular group logs in, I want that user to be able to create/edit only their own issues and not see other users' issues. Please help me out! Thanks
0
2
0.197375
0
false
34,328,053
1
6,109
1
0
0
34,327,655
Providing access rule is one part of the solution. If you look at "Access Control List" in "Settings > Technical > Security > Access Controls Lists", you can see that the group Hr Employee has only read access to the model hr.employee. So first you have to provide write access also to model hr.employee for group Employee. After you have allowed write access to the group Employee for model hr.employee, Create a new record rule from Settings > Technical > Security > Record Rules named User_edit_own_employee_rule (As you wish). Provide domain for this group User_edit_own_employee_rule as [('user_id', '=', user.id)]. And this domain should apply for Read and Write. ie; by check "Apply for Read" and "Apply for Write" Boolean field. Create another record rule named User_edit_own_employee_rule_1 Provide domain for this group User_edit_own_employee_rule as [('user_id', '!=', user.id)]. And this domain should apply for Read only. ie; check "Apply for Read". Now by creating two record rule for the group Employee, we can provide access to read and write his/her own record but only to read other employee records. Detail: Provide write access in access control list to model hr.employee for group Employee. Create two record rule: User_edit_own_employee_rule : Name : User_edit_own_employee_rule Object : Employee Apply for Read : Checked Apply for Write : Checked Rule Definition : [('user_id', '=', user.id)] Groups : Human Resources / Employee User_edit_own_employee_rule_1 : Name : User_edit_own_employee_rule_1 Object : Employee Apply for Read : Checked Apply for Write : Un Checked Rule Definition : [('user_id', '!=', user.id)] Groups : Human Resources / Employee I hope this will help you.
1
0
0
How to make user can only access their own records in odoo?
2
python,xml,openerp
0
2015-12-17T06:03:00.000
MySQLdb, as I understand, doesn't support Python 3. I've heard about PyMySQL as a replacement for this module. But how does it work in a production environment? Is there a big difference in speed between these two? I'm asking because I will be managing a very active webapp that needs to create entries in the database very often.
0
3
1.2
0
true
34,341,868
0
79
1
0
0
34,341,489
PyMySQL is a pure-python database connector for MySQL, and can be used as a drop-in replacement using the install_as_MySQLdb() function. As a pure-python implementation, it will have some more overhead than a connector that uses C code, but it is compatible with other versions of Python, such as Jython and PyPy. At the time of writing, Django recommends to use the mysqlclient package on Python 3. This fork of MySQLdb is partially written in C for performance, and is compatible with Python 3.3+. You can install it using pip install mysqlclient. As a fork, it uses the same module name, so you only have to install it and Django will use it in its MySQL database engine.
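A minimal sketch of the drop-in approach mentioned above: calling install_as_MySQLdb() before Django loads its database backend makes the standard MySQL engine use PyMySQL instead of MySQLdb. The credentials shown are placeholders.

```python
# Place this before Django loads its database backend,
# e.g. at the top of manage.py or in <project>/__init__.py.
import pymysql
pymysql.install_as_MySQLdb()

# settings.py then keeps the standard engine:
# DATABASES = {
#     'default': {
#         'ENGINE': 'django.db.backends.mysql',
#         'NAME': 'mydb',        # hypothetical credentials
#         'USER': 'myuser',
#         'PASSWORD': 'secret',
#         'HOST': '127.0.0.1',
#     }
# }
```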
1
0
0
MySQL module for Python 3
1
python,django,python-3.x
0
2015-12-17T18:13:00.000
I can't connect to a DB2 remote server using Python. Here is what I've done: Created a virtualenv with Python 2.7.10 (On Mac OS X 10.11.1) installed ibm-db using sudo pip install ibm_db Ran the following code: import ibm_db ibm_db.connect("my_connection_string", "", "") I then get the following error: Exception: [IBM][CLI Driver] SQL1042C An unexpected system error occurred. SQLSTATE=58004 SQLCODE=-1042 I've googled around for hours and trying out different solutions. Unfortunately, I haven't been able to find a proper guide for setting the environment up on Mac OS X + Python + DB2.
1
1
0.066568
0
false
34,651,608
0
2,550
1
1
0
34,436,084
We are able to install the driver successfully and connection to db is established without any problem. The steps are: 1) Upgraded to OS X El Capitan 2) Install pip - sudo pip install 3) Install ibm_db - sudo pip install ibm_db 4) During installation, below error was hit Referenced from: /Users/roramana/Library/Python/2.7/lib/python/site-packages/ibm_db.so Reason: unsafe use of relative rpath libdb2.dylib in /Users/roramana/Library/Python/2.7/lib/python/site-packages/ibm_db.so with restricted binary After disabling the System Integrity Protection, installation went fine. From the error sql1042c, it seems like you are hitting some environment setup issue. You could try setting DYLD_LIBRARY_PATH to the path where you have extracted the odbc and cli driver . If the problem still persist, please collect db2 traces and share with us: db2trc on -f trc.dmp run your repro db2trc off db2trc flw trc.dmp trc.flw db2trc fmt trc.dmp trc.fmt Share the trc.flw and trc.fmt files.
1
0
0
Can't connect to DB2 Driver through Python: SQL1042C
3
python,db2,dashdb
0
2015-12-23T12:48:00.000
I am working on scaling out a webapp and providing some database redundancy for protection against failures and to keep the servers up when updates are needed. The app is still in development, so I have chosen a simple multi-master redundancy with two separate database servers to try and achieve this. Each server will have the Django code and host its own database, and the databases should be as closely mirrored as possible (updated within a few seconds). I am trying to figure out how to set up the multi-master (master-master) replication between databases with Django and MySQL. There is a lot of documentation about setting it up with MySQL only (using various configurations), but I cannot find any for making this work from the Django side of things. From what I understand, I need to approach this by adding two database entries in the Django settings (one for each master) and then write a database router that will specify which database to read from and which to write from. In this scenario, both databases should accept both reads and writes, and writes/updates should be mirrored over to the other database. The logic in the router could simply use a round-robin technique to decide which database to use. From there on, further configuration to set up the actual replication should be done through MySQL configuration. Does this approach sound correct, and does anyone have any experience with getting this to work?
0
0
0
0
false
34,841,926
1
1,628
1
0
0
34,468,030
Your idea of the router is great! I would add that you need to automatically detect whether a database is slow or down. You can detect that by the response time and by connection/read/write errors. If this happens, you exclude this database from your round-robin list for a while, trying to connect back to it every now and then to detect if the database is alive again. In other words, the round-robin list grows and shrinks dynamically depending on the health status of your database machines. Another important point is that, luckily, you don't need to maintain this round-robin list common to all the web servers. Each web server can store its own copy of the round-robin list and its own state of inclusion and exclusion of databases in this list. This is simply because a database server can be reachable from one web server and unreachable from another due to local network problems.
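A minimal sketch of the dynamic round-robin idea in plain Python, with hypothetical alias names; real code would also plug this into Django's database-router hooks and mark databases down from actual connection errors.

```python
import itertools
import time

class RoundRobinPool(object):
    """Cycle over healthy database aliases, temporarily excluding failed ones."""

    def __init__(self, aliases, retry_after=30):
        self.aliases = list(aliases)
        self.retry_after = retry_after      # seconds before retrying a failed DB
        self.excluded = {}                  # alias -> time it was excluded
        self._cycle = itertools.cycle(self.aliases)

    def mark_down(self, alias):
        self.excluded[alias] = time.time()

    def next_alias(self):
        for _ in range(len(self.aliases)):
            alias = next(self._cycle)
            down_since = self.excluded.get(alias)
            if down_since is None or time.time() - down_since > self.retry_after:
                self.excluded.pop(alias, None)   # give it another chance
                return alias
        raise RuntimeError('no database currently available')

pool = RoundRobinPool(['master_a', 'master_b'])  # hypothetical DATABASES aliases
```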
1
0
0
Multi-master database replication with Django webapp and MySQL
1
python,mysql,django,multi-master-replication
0
2015-12-26T02:34:00.000
I have a Django app with a postgres backend hosted on Heroku. I'm now migrating it to Azure. On Azure, the Django application code and postgres backend have been divided over two separate VMs. Everything's set up, I'm now at the stage where I'm transferring data from my live Heroku website to Azure. I downloaded a pg_dump to my local machine, transferred it to the correct Azure VM, ran syncdb and migrate, and then ran pg_restore --verbose --clean --no-acl --no-owner -U myuser -d mydb latest.dump. The data got restored (11 errors were ignored, pertaining to 2 tables that get restored, but which my code now doesn't use). When I try to access my website, I get the kind of error that usually comes in my website if I haven't run syncdb and migrate: Exception Type: DatabaseError Exception Value: relation "user_sessions_session" does not exist LINE 1: ...last_activity", "user_sessions_session"."ip" FROM "user_sess... ^ Exception Location: /home/myuser/.virtualenvs/myenv/local/lib/python2.7/site-packages/django/db/backends/postgresql_psycopg2/base.py in execute, line 54 Can someone who has experienced this before tell me what I need to do here? It's acting as if the database doesn't exist and I had never run syncdb. When I use psql, I can actually see the tables and the data in them. What's going on? Please advise.
0
1
1.2
0
true
34,480,125
1
119
1
0
0
34,472,609
Try those same steps WITHOUT running syncdb and migrate at all. So overall, your steps will be: heroku pg:backups capture; curl -o latest.dump heroku pg:backups public-url; scp -P latest.dump [email protected]:/home/myuser; drop database mydb; create database mydb; pg_restore --verbose --clean --no-acl --no-owner -U myuser -d mydb latest.dump
1
0
0
Unable to correctly restore postgres data: I get the same error I usually get if I haven't run syncdb and migrate
1
python,django,database,postgresql,database-migration
0
2015-12-26T15:18:00.000
Edited to clarify my meaning: I am trying to find a method using a Django action to take data from one database table and then process it into a different form before inserting it into a second table. I am writing a kind of vocabulary dictionary which extracts data about students' vocabulary from their classroom texts. To do this I need to be able to take the individual words from the table field containing the content and then insert the words into separate rows in another table. I have already written the code to extract the individual words from the record in the first database table, I just need a method for putting it into a second database table as part of the same Django action. I have been searching for an answer for this, but it seems Django actions are designed to handle the data for only one database table at a time. I am considering writing my own MySQL connection to inject the words into the second table in the database. I thought I would write here first though to see if anyone knows if I am missing a built-in way to do this in Django.
0
1
0.099668
0
false
34,477,438
1
884
1
0
0
34,477,062
I am pretty sure there is no built-in way for something this specific. Finding single words in a text alone is a quite complex task if you take into consideration misspelled words, hyphen-connected words, quotes, all sorts of punctuation and Unicode letters. Your best bet would be using a regex for each text and saving the matches to a second model manually.
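A minimal sketch of that idea as a Django admin action, with hypothetical Text and VocabularyWord models; the regex is deliberately naive and would need refining for real texts.

```python
import re
from django.contrib import admin
from myapp.models import Text, VocabularyWord  # hypothetical models

def extract_vocabulary(modeladmin, request, queryset):
    """Admin action: split each selected text into words and store them in a second table."""
    for text in queryset:
        words = re.findall(r"[A-Za-z']+", text.content)  # naive tokenizer
        VocabularyWord.objects.bulk_create(
            [VocabularyWord(source_text=text, word=w.lower()) for w in set(words)]
        )

extract_vocabulary.short_description = "Extract words into the vocabulary table"

class TextAdmin(admin.ModelAdmin):
    actions = [extract_vocabulary]

admin.site.register(Text, TextAdmin)
```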
1
0
0
Django way to modify a database table using the contents of another table
2
python,mysql,django
0
2015-12-27T02:18:00.000
I am using MongoVue and the Python library PyMongo to insert some documents. I used MongoVue to see the db that was created; it was not listed. However, I made a find() request in the shell and got all the inserted documents. Once I manually create the DB, all the inserted documents appear. Every other db inside localhost is not affected. What is the reason for this behaviour?
0
0
1.2
0
true
37,524,187
0
117
1
0
0
34,488,751
So, I found out the fix for this behavior. Refreshing in MongoVue didn't work, so I had to close MongoVue and open it again to see the newly created collections.
1
0
0
Database is not appearing in MongoVue
1
python-2.7,mongovue,pymongo-2.x
0
2015-12-28T06:35:00.000
I am using pymongo driver to work with Mongodb using Python. Every time when I run a query in python shell, it returns me some output which is very difficult to understand. I have used the .pretty() option with mongo shell, which gives the output in a structured way. I want to know whether there is any method like pretty() in pymongo, which can return output in a structured way ?
6
-1
-0.039979
0
false
34,493,742
0
4,368
1
0
0
34,493,535
It probably depends on your IDE, not the pymongo itself. the pymongo is responsible for manipulating data and communicating with the mongodb. I am using Visual Studio with PTVS and I have such options provided from the Visual Studio. The PyCharm is also a good option for IDE that will allow you to watch your code variables and the JSON in a formatted structure.
1
0
1
Pretty printing of output in pymongo
5
mongodb,python-3.x,pymongo
0
2015-12-28T12:26:00.000
I set a key that I have now realized is wrong. It is set at migration 0005. The last migration I did was 0004. I'm now up to 0008. I want to rebuild the migrations with the current models.py against the current database schema. Migration 0005 is no longer relevant and has been deleted from models.py. Migration 0005 also causes an IntegrityError, so it cannot be applied without deleting data that shouldn't be deleted. How do I get past migration 0005 so I can migrate?
0
0
0
0
false
61,643,148
1
512
1
0
0
34,502,379
Simply delete the 0005-0008 migration files from the migrations/ folder. Regarding database tables, you won't need to delete anything from there if the migrations weren't applied. You can check the entries in the django_migrations table yourself to be sure.
1
0
0
Delete migrations that haven't been migrated yet
1
python,django,django-migrations,django-1.9
0
2015-12-28T23:37:00.000
I have data in an excel spreadsheet (*.xlsx) that consists of 1,213 rows of sensitive information (so, I'm sorry I can't share the data) and 35 columns. Every entry is a string (I don't know if that is screwing it up or not). The first row is the column names and I've never had a problem importing it with the column names embedded before (it's just easier to click that they're embedded so I don't have to name every column by hand). I put the path to the data in the quick start wizard and hit the next button and it doesn't do anything. I hit it again and it turns the mouse into the loader as if it's loading. I've waited for it for 15 minutes before, but every time I click on QlikView the program just crashes. I have a deadline I have to meet here and I can't afford to not finish this project. It's extremely important that I get it working. Just as a NB, I used Python to merge two Excel spreadsheets together so I don't know if that may be what's causing the problem either. I can open the file perfectly fine in Excel though.
0
0
0
0
false
34,534,891
0
626
1
0
0
34,532,708
I have had QlikView crash when importing an Excel spreadsheet that was exported with the SQuirreL SQL client (from a Firebird database). Opening the spreadsheet in Excel, and saving it again solved the problem. I know that this is no longer relevant to your problem, but hopefully it can help someone with a similarly appearing issue.
1
1
0
Why does QlikView keep crashing when I try to load my data?
3
python,excel,qlikview
0
2015-12-30T15:52:00.000
My application is very database intensive, so I'm trying to reduce the load on the database. I am using PostgreSQL as the RDBMS and Python as the programming language. To reduce the load I am already using a caching mechanism in the application; the cache types I use are a server cache and a browser cache. Currently I'm tuning the PostgreSQL query cache to get it in line with the characteristics of the queries being run on the server. Questions: Is it possible to fine tune the query cache on a per database level? Is it possible to fine tune the query cache on a per table basis? Please provide a tutorial to learn about the query cache in PostgreSQL.
11
2
0.197375
0
false
58,349,778
0
3,440
1
0
0
34,553,778
Tuning PostgreSQL is far more than just tuning caches. In fact, the primary high level things are "shared buffers" (think of this as the main data and index cache), and the work_mem. The shared buffers help with reading and writing. You want to give it a decent size, but it's for the entire cluster.. and you can't really tune it on a per table or especially query basis. Importantly, it's not really storing query results.. it's storing tables, indexes and other data. In an ACID compliant database, it's not very efficient or useful to cache query results. The "work_mem" is used to sort query results in memory and not have to resort to writing to disk.. depending on your query, this area could be as important as the buffer cache, and easier to tune. Before running a query that needs to do a larger sort, you can issue the set command like "SET work_mem = '256MB';" As others have suggested you can figure out WHY a query is running slowly using "explain". I'd personally suggest learning the "access path" postgresql is taking to get to your data. That's far more involved and honestly a better use of resources than simply thinking of "caching results". You can honestly improve things a lot with data design as well and using features such as partitioning, functional indexes, and other techniques. One other thing is that you can get better performance by writing better queries.. things like "with" clauses can prevent postgres' optimizer from optimizing queries fully. The optimizer itself also has parameters that can be adjusted-- so that the DB will spend more (or less) time optimizing a query prior to executing it.. which can make a difference. You can also use certain techniques to write queries to help the optimizer. One such technique is to use bind variables (colon variables)--- this will result in the optimizer getting the same query over and over with different data passed in. This way, the structure doesn't have to be evaluated over and over.. query plans can be cached in this way. Without seeing some of your queries, your table and index designs, and an explain plan, it's hard to make specific recommendation. In general, you need to find queries that aren't as performant as you feel they should be and figure out where the contention is. Likely it's disk access, however,the cause is ultimately the most important part.. is it having to go to disk to hold the sort? Is it internally choosing a bad path to get to the data, such that it's reading data that could easily be eliminated earlier in the query process... I've been an oracle certified DBA for over 20 years, and PostgreSQL is definitely different, however, many of the same techniques are used when it comes to diagnosing a query's performance issues. Although you can't really provide hints, you can still rewrite queries or tune certain parameters to get better performace.. in general, I've found postgresql to be easier to tune in the long run. If you can provide some specifics, perhaps a query and explain info, I'd be happy to give you specific recommendations. Sadly, though, "cache tuning" is likely to provide you the speed you're wanting all on its own.
1
0
0
Enable the query cache in postgreSQL to improve performance
2
python,sql,database,postgresql,caching
0
2016-01-01T05:26:00.000
I am trying to use OpenPyXL to create invoices. I have a worksheet with an area to be printed and some notes outside of that range. I have most everything working but I am unable to find anything in the API for one function. Is there a way to set the print area on a worksheet? I am able to find lots of print settings, but not the print area. Thanks!
2
0
0
0
false
34,579,357
0
3,471
1
0
0
34,578,910
This isn't currently directly possible. You could do it manually by creating a defined name using the reserved xlnm prefix (see Worksheet.add_print_title for an example).
1
0
0
OpenPyXL - How to set print area for a worksheet
3
python,openpyxl
0
2016-01-03T16:37:00.000
I am writing a python code for beam sizing. I have an Excel workbook from AISC that has all the data of the shapes and other various information on the cross-sections. I would like to be able to reference data in particular cells in this Excel workbook in my python code. For example if the width of rectangle is 2in and stored in cell A1 and the height is 10in and stored in cell B1 I want to write a code in python that somehow pulls in cell A1 and B1 and multiply them. I do not need to export back into excel I just want to make python do all the work and use excel purely as reference material. Thank you in advance for all your advice and input!!
1
0
0
0
false
34,603,357
0
2,722
1
0
0
34,603,090
Thank you for your inputs. I have found the solution I was looking for by using NumPy: data = np.loadtxt('C:\Users[User_Name]\Desktop[fname].csv', delimiter=','). Using that, it took the data and created an array with the data that I needed. Now I am able to use the data like any other matrix or array.
1
0
0
Reference Excel in Python
4
python,excel
0
2016-01-05T02:03:00.000
I've had a fairly good look on the web for an answer to this question, but I've tended to find that people assume more knowledge of databases than I currently have. I'm sorry if this is a rookie question - I've always been aware of databases and their advantages, but never actually had to work with them. I have a requirement for a series of Python applications to read and write to a single SQL database. My original plan was to create the database using SQLServer (at least while experimenting) and then access it via Python. However, when I came to look at relevant Python packages (sqlite3, sqlalchemy, etc), they appear to create and maintain the database completely through Python. This is absolutely fine, but will those Python-created-databases be fully compatible with non-Python tools and processes? We will need to read data from C# as well. As a secondary question, I like the look of sqlalchemy, but has it gone mainstream?
0
1
1.2
0
true
34,609,510
0
93
1
0
0
34,609,259
Although I would consider creating and maintaining databases in Python bad practice ( at least for MySQL and SQL Server), these databases will be fully compatible with non-Python tools and processes as they are created with the same SQL code. Regarding SQLAlchemy, this is used by several major companies and I have never experienced any problems with it, other than slow performance for large inserts.
1
0
0
Python linkage to SQL databases
1
python,mysql,sql-server,database
0
2016-01-05T10:20:00.000
In pyodbc, cursor.rowcount works perfectly when using cursor.execute(). However, it always returns -1 when using cursor.executemany(). How does one get the correct row count for cursor.executemany()? This applies to multiple inserts, updates, and deletes.
5
1
0.099668
0
false
47,834,345
0
7,653
1
0
0
34,613,875
You can't; only the last query's row count is returned from executemany, at least that's what it says in the pyodbc code docs. -1 usually indicates problems with the query, though. If you absolutely need the rowcount, you need to either call cursor.execute in a loop or write a patch for the pyodbc library.
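A minimal sketch of the execute-in-a-loop workaround, with a hypothetical connection string and table; it trades some speed for an accurate total row count.

```python
import pyodbc

conn = pyodbc.connect('DSN=mydsn;UID=user;PWD=secret')  # hypothetical connection string
cursor = conn.cursor()

params = [(10, 1), (20, 2), (30, 3)]
total_rows = 0
for p in params:
    # executemany() would report -1; execute() reports the count per statement.
    cursor.execute("UPDATE items SET amount = ? WHERE id = ?", p)
    total_rows += cursor.rowcount

conn.commit()
print("rows affected:", total_rows)
```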
1
0
0
How to get correct row count when using pyodbc cursor.executemany()
2
python,sql,pyodbc
0
2016-01-05T14:20:00.000
I am writing a form submit in my application written in python/Django.Form has an attachment(upto 3MB) uploaded. On submit it has to save the attachment in aws s3, save the other data in database and also send emails. This form submit is taking too much time and the UI is hanging. Is there any other way to do this in python/django?
1
0
0
0
false
34,673,727
1
427
1
0
0
34,673,515
The usual solution to tasks that are too long to be handled synchronously and can be handled asynchronously is to delegate them to some async queue like celery. In your case, saving the form's data to db should be quite fast so I would not bother with this part, but moving the uploaded file to s3 and sending mails are good candidates.
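A minimal sketch of delegating the slow parts to Celery, with hypothetical task, bucket and address names; the view saves the form's database record synchronously and hands the S3 upload and the e-mail off to workers. boto3 is an assumption for the S3 part, not something the answer specifies.

```python
# tasks.py -- hypothetical Celery tasks
import boto3
from celery import shared_task
from django.core.mail import send_mail

@shared_task
def upload_attachment_to_s3(local_path, key):
    s3 = boto3.client('s3')
    s3.upload_file(local_path, 'my-attachments-bucket', key)  # hypothetical bucket

@shared_task
def send_confirmation_email(to_address):
    send_mail('Form received', 'Thanks, we got your submission.',
              'noreply@example.com', [to_address])

# In the form view, after saving the model instance synchronously:
# upload_attachment_to_s3.delay('/tmp/upload.pdf', 'submissions/upload.pdf')
# send_confirmation_email.delay(form.cleaned_data['email'])
```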
1
0
0
Python - On form submit send email and save record in database taking huge time
2
python,django,forms,performance,amazon-s3
0
2016-01-08T09:28:00.000
Let's say that I need to maintain an index on a table where multiple documents can relate to the same item_id (not the primary key, of course). Can a secondary compound index, based on the result of a function that for any item_id returns the most recent document matching a condition, update itself whenever a newer document gets inserted? This table already holds 1.2 million documents after just 25 days, so it's a big-data case here, as it will keep growing and must always keep the old records to build whatever pivots are needed over the years.
0
0
0
0
false
34,750,764
0
65
1
0
0
34,750,575
I'm not 100% sure I understand the question, but if you have a secondary index and insert a new document or change an old document, the document will be in the correct place in the index once the write completes. So if you had a secondary index on a timestamp, you could write r.table('items').orderBy(index: r.desc('timestamp')).limit(n) to get the most recent n documents (and you could also subscribe to changes on that).
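A minimal sketch of that pattern with the Python driver (assuming the 2015-era import style and a timestamp field on the documents; table and connection details are illustrative only).

```python
import rethinkdb as r

conn = r.connect(host='localhost', port=28015, db='mydb')  # hypothetical connection

# One-time setup: a secondary index on the timestamp field.
# r.table('items').index_create('timestamp').run(conn)
# r.table('items').index_wait('timestamp').run(conn)

# Most recent 10 documents; the index stays correct as new documents are inserted.
latest = (r.table('items')
           .order_by(index=r.desc('timestamp'))
           .limit(10)
           .run(conn))

for doc in latest:
    print(doc['id'], doc['timestamp'])
```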
1
0
1
RethinkDb do function based secondary indexes update themselves dynamically?
1
indexing,rethinkdb,rethinkdb-python
0
2016-01-12T17:53:00.000
I'm trying to connect to a SQL Server named instance from python 3.4 on a remote server, and get an error. File "C:\Scripts\Backups Integrity Report\Backup Integrity Reports.py", line 269, in conn = pymssql.connect(host=r'hwcvcs01\HDPS', user='My-office\romano', password='PASS', database='CommServ') File "pymssql.pyx", line 636, in pymssql.connect (pymssql.c:10178) pymssql.OperationalError: (20002, b'DB-Lib error message 20002, severity 9:\nAdaptive Server connection failed\n') Other SQLs are connected without a problem. Also I manage to connect to the SQL using the Management Studio, from the same remote server. Tried different ports, tried to connect to the host itself rather than the instance, and also tried pypyodbc. What might be the problem?
3
1
0.099668
0
false
66,684,103
0
4,865
1
0
0
34,774,326
According to the pymssql documentation on the pymssql Connection class, for a named instance containing database theDatabase and looking like this: myhost\myinstance, you could connect as follows: pymssql.connect(host=r'myhost\myinstance', database='theDatabase', user='user', password='pw') The r-string is a so-called raw string that does not treat the backslash as an escape.
1
0
0
Python pymssql - Connecting to Named Instance
2
python,instance,pymssql,named
0
2016-01-13T18:28:00.000
I'm new to Django. It took me a whole afternoon to configure the MySQL engine. I am very confused about the database engine and the database driver. Is the engine also the driver? All the tutorials say that the ENGINE should be 'django.db.backends.mysql', but how does the ENGINE decide which driver is used to connect to MySQL? Everywhere it says 'django.db.backends.mysql'; sadly I can't install MySQLdb or mysqlclient, but PyMySQL and the official MySQL connector 2.1.3 have been installed. How can I set the driver to PyMySQL or the MySQL connector? Many thanks! OS: OS X El Capitan Python: 3.5 Django: 1.9 This question is not yet solved: Is the ENGINE also the DRIVER?
20
1
0.099668
0
false
53,195,032
1
18,120
1
0
0
34,777,755
The short answer is no they are not the same. The engine, in a Django context, is in reference to RDBMS technology. The driver is the library developed to facilitate communication to that actual technology when up and running. Letting Django know what engine to use tells it how to translate the ORM functions from a backend perspective. The developer doesn't see a change in ORM code but Django will know how to convert those actions to a language the technology understands. The driver then takes those actions (e.g. selects, updates, deletes) and sends them over to a running instance to facilitate the action.
1
0
0
How to config Django using pymysql as driver?
2
python,mysql,django,pymysql
0
2016-01-13T21:50:00.000
I have a Flask-admin application and I have a class with a "Department" and a "Subdepartment" fields. In the create form, I want that when a Department is selected, the Subdepartment select automatically loads all the corresponding subdepartments. In the database, I have a "department" table and a "sub_department" table that was a foreign key "department_id". Any clues on how I could achieve that? Thanks in advance.
0
0
0
0
false
34,786,896
1
44
1
0
0
34,786,665
You have to follow these steps. JavaScript: bind an on-change event to your Department select; when the select changes, you get the selected value and send it to the server through an AJAX request. Flask: implement a method that reads the value and loads the associated subdepartments, then send a JSON response to the view with your subdepartments. JavaScript: in your AJAX request, implement a success function; by default its first parameter is the data received from the server, so loop over it and append the options to the desired select.
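A minimal sketch of the Flask side of that flow, with a hypothetical Flask-SQLAlchemy model and connection string; the JavaScript on-change handler would call this endpoint and rebuild the second select from the returned JSON.

```python
from flask import Flask, jsonify, request
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///departments.db'  # assumption
db = SQLAlchemy(app)

class SubDepartment(db.Model):  # hypothetical model for the sub_department table
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(120))
    department_id = db.Column(db.Integer, index=True)  # FK to department.id in the real schema

@app.route('/subdepartments')
def subdepartments():
    dept_id = request.args.get('department_id', type=int)
    subs = SubDepartment.query.filter_by(department_id=dept_id).all()
    return jsonify(subdepartments=[{'id': s.id, 'name': s.name} for s in subs])
```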
1
0
0
Load a select list when selecting another select
1
python,flask,flask-admin
0
2016-01-14T10:05:00.000
Coming off an NLTK NER problem, I have PERSONS and ORGANIZATIONS, which I need to store in a sqlite3 db. The received wisdom is that I need to create separate TABLEs to hold these sets. How can I create a TABLE when len(PERSONS) could vary for each id? It can even be zero. The normal use of insert into table_name values (?), (t[0]) fails.
0
0
0
0
false
34,827,171
0
129
1
0
0
34,824,495
Thanks to CL.'s comment, I figured out that the best way is to think in terms of rows in a two-column table, where the first column is id INT and the second column contains person_names. This way, there will be no issue with varying lengths of the PERSONS list. Of course, to link the main table with the persons table, the id field has to REFERENCE (as a foreign key) the story_id of the main table.
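A minimal sketch of that two-table layout with the standard sqlite3 module; table and column names are illustrative only.

```python
import sqlite3

conn = sqlite3.connect('ner.db')  # hypothetical database file
conn.executescript("""
CREATE TABLE IF NOT EXISTS stories (story_id INTEGER PRIMARY KEY, headline TEXT);
CREATE TABLE IF NOT EXISTS persons (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    story_id INTEGER REFERENCES stories(story_id),
    person_name TEXT
);
""")

story_id = 1
persons = ['Ada Lovelace', 'Alan Turing']  # may be any length, including empty
conn.executemany('INSERT INTO persons (story_id, person_name) VALUES (?, ?)',
                 [(story_id, name) for name in persons])
conn.commit()
conn.close()
```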
1
0
0
insert data in sqlite3 when array could be of different lengths
1
python-2.7,database-design,sqlite
0
2016-01-16T07:09:00.000
I have been working on a localhost copy of my Django website for a little while now, but finally decided it was time to upload it to PythonAnywhere. The site works perfectly on my localhost, but I am getting strange errors when I do the initial migrations for the new site. For example, I get this: mysql.connector.errors.DatabaseError: 1264: Out of range value for column 'applied' at row 1 'applied' is not a field in my model, so this error has to be generated by Django making tables for its own use. I have just checked in the MySQL manager for my localhost and the field 'applied' appears to be from the table django_migrations. Why is Django mishandling setting up tables for its own use? I have dropped and remade the database a number of times, but the errors persist. If anyone has any idea what would cause this I would appreciate your advice very much. My website front end is still showing the Hello World page and the Admin link comes up with a page does not exist error. At this stage I am going to assume this is related to the database errors. EDIT: Additional information about why I cannot access the front-end of the site: It turns out when I am importing a pre-built site into PythonAnywhere, I have to edit my wsgi.py file to point to the application. The trouble now is that I don't know exactly what to put there. When I follow the standard instructions in the PythonAnywhere help files nothing seems to change. There website is also seems to be very short on detailed error messages to help sort it out. Is there perhaps a way to turn off their standard hello world placeholder pages and see server error messages instead?
1
1
1.2
0
true
34,837,989
1
463
1
0
0
34,836,049
As it says in my comment above, it turns out that the problem with the database resulted from running an upgrade of Django from 1.8 to 1.9. I had forgotten about this. After rolling my website back to Django 1.8, the database migrations ran correctly. The reason why I could not access the website turned out to be because I had to edit the wsgi.py file, but I was editing the wrong version. The nginx localhost web server I was using keeps it in a different folder location than PythonAnywhere's implementation. I uploaded the file from my localhost copy and edited it according to the instructions on PythonAnywhere's help system without realizing it was not being read by PythonAnywhere's server. What I really needed to do was edit the correct file by accessing it through the web tab on their control panel. Once I edited this file, the website front end began to work as expected.
1
0
0
Strange error during initial database migration of a Django site
2
python,django,python-3.x,django-forms,pythonanywhere
0
2016-01-17T07:12:00.000
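For reference, a minimal sketch of the kind of Django WSGI file PythonAnywhere's web tab points at; the project path and settings module names here are assumptions, not the poster's actual values.

import os
import sys

# Assumed project location on the PythonAnywhere file system.
path = '/home/myuser/myproject'
if path not in sys.path:
    sys.path.insert(0, path)

os.environ['DJANGO_SETTINGS_MODULE'] = 'myproject.settings'

from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()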
As part of a big system, I'm trying to implement a service that (among other tasks) will serve large files (up to 300MB) to other servers (running in Amazon). This files service needs to have more than one machine up and running at each time, and there are also multiple clients. Service is written in Python, using Tornado web server. First approach was using MySQL, but I figured I'm going to have hell saving such big BLOBs, because of memory consumption. Tried to look at Amazon's EFS, but it's not available in our region. I heard about SoftNAS, and am currently looking into it. Any other good alternatives I should be checking?
0
0
0
0
false
34,899,601
1
191
1
1
0
34,895,738
You can also use MongoDB, which provides APIs for several languages, and you can also store the files in an S3 bucket with the use of multipart upload (see the sketch below).
1
0
0
Serving large files in AWS
1
python,mysql,amazon-web-services,nas
0
2016-01-20T09:09:00.000
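A rough sketch of the S3 multipart-upload suggestion using boto3; the bucket name and file paths are placeholders. boto3's upload_file switches to a parallel multipart upload automatically once the file exceeds the configured threshold.

import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Files larger than the threshold are uploaded in multipart chunks.
config = TransferConfig(multipart_threshold=8 * 1024 * 1024,
                        multipart_chunksize=8 * 1024 * 1024)
s3.upload_file("/tmp/big_file.bin", "my-bucket", "uploads/big_file.bin",
               Config=config)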
I'm using wget to download an Excel file with the xlsx extension. The thing is that when I want to deal with the file using openpyxl, I get the above mentioned error. But when I download the file manually using Firefox, I don't have any problems. So I checked the difference between the two downloaded files. I found that the manually downloaded one's size is much bigger (269.2 kB) compared to the wget one (7.3 kB), though both files show the same content when opened in Excel 2013. I don't add any options to wget; I just use it like wget <downloadLink>. What's wrong with wget and Excel files?
0
0
0
0
false
34,896,780
0
1,390
1
0
0
34,896,043
If the files are different in size then wget isn't getting the right file. Many websites now rely on JavaScript to handle links, which wget can't emulate. I suspect that if you look at the file with less you'll see some HTML source as opposed to the start of a zip file (a quick check from Python follows below).
1
0
0
wget causes "BadZipfile: File is not a zip file" for openpyxl
1
python-2.7,debian,wget
0
2016-01-20T09:23:00.000
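A quick way to confirm this diagnosis from Python before handing the file to openpyxl; the file name is a placeholder.

import zipfile

path = "downloaded.xlsx"

# A real .xlsx is a ZIP archive; an HTML error page is not.
if zipfile.is_zipfile(path):
    print("Looks like a genuine xlsx/zip file")
else:
    with open(path, "rb") as f:
        print("Not a zip file; first bytes:", f.read(64))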
I have a Django website running on an Amazon EC2 instance. I want to add an EBS. In order to do that, I need to change the location of my PGDATA directory if I understand well. The new PGDATA path should be something like /vol/mydir/blabla. I absolutely need to keep the data safe (some kind of dump could be useful). Do you have any clues on how I can do that ? I can't seem to find anything relevant on the internet. Thanks
1
0
0
0
false
34,922,249
1
62
1
0
0
34,905,744
OK, thanks for your answers. I used find . -name "postgresql.conf" to locate the configuration file, which was in the /etc/postgresql/9.3/main folder. There is also pg_lsclusters if you want to show the data directory. Then I edited that file to point to the new path, restarted Postgres and imported my old DB.
1
0
0
Django PostgreSQL : migrating database to a different directory
1
python,django,postgresql,amazon-ec2
0
2016-01-20T16:46:00.000
I have a series of python objects, each associated with a different user, e.g., obj1.userID = 1, obj2.userID = 2, etc. Each object also has a transaction history expressed as a python dict, i.e., obj2.transaction_record = {"itemID": 1, "amount": 1, "date": "2011-01-04"} etc. I need these objects to persist, and transaction records may grow over time. Therefore, I'm thinking of using an ORM like sqlalchemy to make this happen. What kind of database schema would I need to specify to store these objects in a database? I have two alternatives, but neither seems like the correct thing to do: Have a different table for each user: CREATE TABLE user_id ( itemID INT PRIMARY KEY, amount INT, date CHARACTER(10) ); Store the transaction history dict as a BLOB of json: CREATE TABLE foo ( userID INT PRIMARY KEY, trasaction_history BLOB); Is there a cleaner way to implement this?
0
-1
-0.066568
0
false
34,979,208
0
358
1
0
0
34,978,896
If you haven't yet decided what kind of database you'll use, I advise you to pick MongoDB as the database server and the mongoengine module for persisting data; it's what you need. mongoengine has a DictField in which you can store a Python dict directly, and it's very easy to learn (see the sketch below).
1
0
0
What kind of database schema would I use to store users' transaction histories?
3
python,sqlite,sqlalchemy
0
2016-01-24T17:17:00.000
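A minimal mongoengine sketch of the DictField idea from the answer above; the class and field names are illustrative assumptions, and a local MongoDB instance is assumed.

from mongoengine import Document, IntField, DictField, connect

connect("transactions_db")          # assumes a local MongoDB instance

class UserRecord(Document):
    user_id = IntField(required=True, unique=True)
    transaction_history = DictField()   # stores a plain Python dict as-is

# Usage: store the dict directly, no extra schema needed.
rec = UserRecord(user_id=1,
                 transaction_history={"itemID": 1, "amount": 1,
                                      "date": "2011-01-04"})
rec.save()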
I am trying to call a Postgres database procedure using psycopg2 in my Python class: lCursor.callproc('dbpackage.proc',[In_parameter1,In_parameter2,out_parameter]). The In_parameter value is 5008001#60°V4#FR.tif, but I am getting the error below. DataError: invalid byte sequence for encoding "UTF8": 0xb0 I have tried most of the solutions given on the net, but no luck.
0
0
1.2
0
true
34,993,660
0
2,227
1
0
0
34,993,615
Your encoding and the database connection encoding don't match. The database connection is in UTF8 and you're probably trying to send with Latin-1 encoding. When opening the connection, send SET client_encoding TO 'Latin1'; after that PostgreSQL will assume all strings to be in Latin-1 encoding regardless of the database encoding. Alternatively you can use conn.set_client_encoding('Latin1') (see the sketch below).
1
0
0
DataError: invalid byte sequence for encoding "UTF8": 0xb0 while calling the database procedure
1
python,postgresql,utf-8
0
2016-01-25T13:17:00.000
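A short psycopg2 sketch of the client-encoding fix described in the answer above; the connection parameters and the procedure arguments are placeholders mirroring the question.

import psycopg2

conn = psycopg2.connect(host="localhost", dbname="mydb",
                        user="me", password="secret")

# Declare that this client sends Latin-1 encoded strings; PostgreSQL converts
# them to the database's UTF8 encoding on the way in, so 0xb0 (the degree sign)
# is accepted.
conn.set_client_encoding("LATIN1")

cur = conn.cursor()
cur.callproc("dbpackage.proc", ["5008001#60°V4#FR.tif", "param2"])
conn.commit()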
When I am defining a model and using unique_together in the Meta, I can define more than one tuple. Are these going to be ORed or ANDed? That is lets say I have a model where class MyModel(models.Model): druggie = ForeignKey('druggie', null=True) drunk = ForeignKey('drunk', null=True) quarts = IntegerField(null=True) ounces = IntegerField(null=True) class Meta: unique_together = (('drunk', 'quarts'), ('druggie', 'ounces')) either both druggie and ounces are unique or both drunk and quarts are unique, but not both.
23
24
1.2
0
true
35,024,190
1
5,860
1
0
0
35,024,007
Each tuple results in a discrete UNIQUE clause being added to the CREATE TABLE query. As such, each tuple is independent and an insert will fail if any data integrity constraint is violated.
1
0
0
Multiple tuples in unique_together
1
python,django,django-models
0
2016-01-26T21:10:00.000
I have a python code which queries psql and returns a batch of results using cursor.fetchall(). It throws an exception and fails the process if a casting fails, due to bad data in the DB. I get this exception: File "/usr/local/lib/python2.7/site-packages/psycopg2cffi/_impl/cursor.py", line 377, in fetchall return [self._build_row() for _ in xrange(size)] File "/usr/local/lib/python2.7/site-packages/psycopg2cffi/_impl/cursor.py", line 891, in _build_row self._casts[i], val, length, self) File "/usr/local/lib/python2.7/site-packages/psycopg2cffi/_impl/typecasts.py", line 71, in typecast return caster.cast(value, cursor, length) File "/usr/local/lib/python2.7/site-packages/psycopg2cffi/_impl/typecasts.py", line 39, in cast return self.caster(value, length, cursor) File "/usr/local/lib/python2.7/site-packages/psycopg2cffi/_impl/typecasts.py", line 311, in parse_date raise DataError("bad datetime: '%s'" % bytes_to_ascii(value)) DataError: bad datetime: '32014-03-03' Is there a way to tell the caster to ignore this error and parse this as a string instead of failing the entire batch?
4
0
0
0
false
35,034,708
0
280
1
0
0
35,033,997
Change your SQL query to cast the date column and fetch it as a string, e.g. select date_column_name::text from table_name (or use to_char(date_column_name, 'YYYY-MM-DD')); a short sketch follows below.
1
0
0
psql cast parse error during cursor.fetchall()
2
python,python-2.7,psycopg2,psql
0
2016-01-27T09:55:00.000
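A sketch of the query-side cast suggested above, which makes the driver receive the column as text instead of trying (and failing) to parse it as a date; table and column names are placeholders, and plain psycopg2 is shown although the question uses psycopg2cffi.

import psycopg2

conn = psycopg2.connect(dbname="mydb", user="me")
cur = conn.cursor()

# Casting to text bypasses the date typecaster, so a value like '32014-03-03'
# comes back as an ordinary string instead of raising DataError.
cur.execute("SELECT id, created_date::text FROM my_table")
for row in cur.fetchall():
    print(row)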
I was asked to port a Access database to MySQL and provide a simple web frontend for the users. The DB consists of 8-10 tables and stores data about clients consulting (client, consultant,topic, hours, ...). I need to provide a webinterface for our consultants to use, where they insert all this information during a session into a predefined mask/form. My initial thought was to port the Access-DB to MySQL, which I have done and then use the web2py framework to build a user interface with login, inserting data, browse/scroll through the cases and pulling reports. web2py with usermanagment and a few samples views & controllers and MySQL-DB is running. I added the DB to the DAL in web2py, but now I noticed, that with web2py it is mandatory to define every table again in web2py for it being able to communicate with the SQL-Server. While struggeling to succesfully run the extract_mysql_models.py script to export the structure of the already existing SQL DB for use in web2py concerns about web2py are accumulating. This double/redundant way of talking to my DB strikes me as odd and web2py does not support python3. Is web2py the correct way to fulfill my task or is there better way? Thank you very much for listening/helping out.
0
0
0
0
false
35,039,883
1
670
1
0
0
35,038,543
This double/redundant way of talking to my DB strikes me as odd and web2py does not support python3. Any abstraction you want to use to communicate with your database (whether it be the web2py DAL, the Django ORM, SQLAlchemy, etc.) will have to have some knowledge of the database schema in order to construct queries. Even if you programmatically generated all the SQL statements yourself without use of an ORM/DAL, your code would still have to have some knowledge of the database structure (i.e., somewhere you have to specify names of tables and fields, etc.). For existing databases, we aim to automate this process via introspection of the database schema, which is the purpose of the extract_mysql_models.py script. If that script isn't working, you should report an issue on Github and/or open a thread on the web2py Google Group. Also, note that when creating a new database, web2py helps you avoid redundant specification of the schema by handling migrations (including table creation) for you -- so you specify the schema only in web2py, and the DAL will automatically create the tables in the database (of course, this is optional).
1
0
0
Using web2py for a user frontend crud
1
python,mysql,frontend,crud,web2py
0
2016-01-27T13:21:00.000
Our current Python pipeline scrapes from the web and stores those data into the MongoDB. After that we load the data into an analysis algorithm. This works well on a local computer since mongod locates the database, but I want to upload the database on sharing platform like Google Drive so that other users can use the data without having to run the scraper again. I know that MongoDB stores data at /data/db as default, so could I upload the entire /data/db onto the Google Drive? Another option seems to be exporting MongoDB into JSON or CSV, but our current implementation for the analysis algorithm already loads directly from MongoDB.
1
0
0
0
false
35,120,084
0
7,187
1
0
0
35,119,959
You can create a little REST API for your database with unique keys, and everyone on your team will be able to use it. If you only need to export once, just export it to JSON and there is no problem.
1
0
0
How to share database created by MongoDB?
3
python,mongodb,pymongo,database
0
2016-01-31T21:52:00.000
My Python script uses an ADODB.Recordset object. I use an ADODB.Command object with a collection of ADODB.Parameter objects to update a record in the set. After that, I check the state of the recordset, and it was 1, which is adStateOpen. But when I call MyRecordset.Close(), I get an exception complaining that the operation is invalid in the set's current state. What state could an open recordset be in that would make it invalid to close it, and what can I do to fix it? Code is scattered between a couple of files. I'll work on getting an illustration together.
0
0
1.2
0
true
35,134,057
0
305
1
0
0
35,133,678
Yes, that was the problem. Once I change the value of one of a recordset's ADODB.Field objects, I have to either update the recordset using ADODB.Recordset.Update() or call CancelUpdate(). The reason I'm going through all this rigarmarole of the ADODB.Command object is that ADODB.Recordset.Update() fails at random (or so it seems to me) times, complaining that "query-based update failed because row to update cannot be found". I've never been able to predict when that will happen or find a reliable way to keep it from happening. My only choice when that happens is to replace the ADODB.Recordset.Update() call with the construction of a complete update query and executing it using an ADODB.Connection or ADODB.Command object.
1
0
0
Why can't a close an open ADODB.Recordset?
1
python,adodb
0
2016-02-01T14:59:00.000
I created my own python module and packaged it with distutils. Now I installed it on a new system (python setup.py install) and I'm trying to call it from a plpython3u function, but I get an error saying the module does not exist. It was working on a previous Ubuntu instalation, and I'm not sure what I did wrong when setting up my new system. I'm trying this on a Ubuntu 15.10 pc with postgresql 9.5, everything freshly installed. I'm also trying this setup in a docker image built with the same componentes (ubuntu 15.10 and pg 9.5). I get the same error in both setups. Could you please hint me about why this is failing? I wrote down my installation instructions for both systems (native and docker), so I can provide them if that helps. Thanks
2
1
1.2
0
true
35,205,633
0
884
1
0
0
35,204,352
Sorry guys, I think I found the problem. I'm using plpython3 in my stored procedure, but I installed my custom module using Python 2. I just did sudo python3 setup.py install and now it's working on the native Ubuntu. I'll now try modifying my docker image and see if it works there too. Thanks
1
0
0
Can't import own python module in Postgresql plpython function
1
python,postgresql,ubuntu,docker
0
2016-02-04T14:58:00.000
In a platform using Flask, SQLAlchemy, and Alembic, we constantly need to create new separate instances with their own set of resources, including a database. When creating a new instance, SQLAlchemy's create_all gives us a database with all the updates up to the point when the instance is created, but this means that this new instance does not have the migrations history that older instances have. It doesn't have an Alembic revisions table pointing to the latest migration. So when the time comes to update both older instances (with migrations histories) and a newer instance without migrations history we have to either give the newer instance a custom set of revisions (ignoring older migrations than the database itself) or create a fake migrations history for it and use a global set of migrations. For the couple of times that this has happened, we have done the latter. Is making a root migration that sets up the entire database as it was before the first migration and then running all migrations instead of create_all a better option for bootstrapping the database of new instances? I'm concerned for the scalability of this as migrations increase in number. Is there perhaps another option altogether?
4
1
1.2
0
true
35,275,008
1
591
1
0
0
35,260,536
If you know the state of the database you can just stamp the revision you were at when you created the instance: set up the instance, run create_all, run alembic heads (to determine the latest version available in the scripts dir), then alembic stamp <that revision>. Here is the doc from the command line: stamp: 'stamp' the revision table with the given revision; don't run any migrations. (A Python sketch of the same steps follows below.)
1
0
0
SQLAlchemy, Alembic and new instances
1
python,sqlalchemy,flask-sqlalchemy,alembic
0
2016-02-07T23:32:00.000
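A rough Python sketch of those steps using Alembic's command API; the alembic.ini path and the db handle import are assumptions, and db.create_all() may need an application context depending on the Flask-SQLAlchemy version.

from alembic import command
from alembic.config import Config

from myapp import db   # assumed Flask-SQLAlchemy handle (hypothetical import)

# Create all tables for a brand new instance...
db.create_all()

# ...then mark the database as being at the latest migration without running them.
alembic_cfg = Config("alembic.ini")
command.stamp(alembic_cfg, "head")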
If one is using Django, what happens with changes made directly to the database (in my case postgres) through either pgadmin or psql? How are such changes handled by migrations? Do they take precedence over what the ORM thinks the state of affairs is, or does Django override them and impose it's own sense of change history? Finally, how are any of these issues effected, or avoided, by git, if at all? Thanks.
7
3
0.291313
0
false
35,273,897
1
670
1
0
0
35,273,294
The migrations system does not look at your current schema at all. It builds up its picture from the graph of previous migrations and the current state of models.py. That means that if you make changes to the schema from outside this system, it will be out of sync; if you then make the equivalent change in models.py and create migrations, when you run them you will probably get an error. For that reason, you should avoid doing this. If it's done already, you could apply the conflicting migration in fake mode, which simply marks it as done without actually running the code against the database. But it's simpler to do everything via migrations in the first place. git has no impact on this at all, other than to reiterate that migrations are code, and should be added to your git repo.
1
0
0
Edit database outside Django ORM
2
python,django,git,postgresql
0
2016-02-08T15:31:00.000
I am using Apache with mod_wsgi on a Windows platform to deploy my Flask application. I am using SQLAlchemy to connect to a Redshift database with a connection pool (size 10). After a few days I suddenly started getting the following error. (psycopg2.OperationalError) SSL SYSCALL error: Software caused connection abort Can anybody suggest why I am getting this error and how to fix it? If I restart Apache then the error goes away, but after a few days it comes back.
4
1
0.197375
0
false
44,923,869
1
3,963
1
0
0
35,322,629
I solved this error by turning DEBUG=False in my config file [and/or in the run.py]. Hope it helps someone.
1
0
0
(psycopg2.OperationalError) SSL SYSCALL error: Software caused connection abort
1
python,apache,flask,amazon-redshift
0
2016-02-10T18:00:00.000
I'm developing a Django 1.8 application locally and having reached a certain point a few days ago, I uploaded the app to a staging server, ran migrations, imported the sql dump, etc. and all was fine. I've since resumed local development which included the creation of a new model, and changing some columns on an existing model. I ran the migrations locally with success, but after rsync-ing my files to the staging server, I get a 'relation already exists' error when running manage.py migrate. And when I visit the admin page for the new model, I get a 'column does not exist' error. It seems as though the migrations for this model were partially successful but I cannot migrate the entirety of the model schema. I've tried commenting out parts of the migration files, but was not successful. Would it be possible to create the missing columns via psql? Or is there some way of determining what is missing and then manually write a migration to create the missing database structure? I'm using Django 1.8.6, Python 3.4.3, and PostgreSQL 9.3.6. Any advice on this would be great. Thanks.
0
0
1.2
0
true
35,343,687
1
772
1
0
0
35,336,992
Try running migrate --fake-initial since you're getting the "relation already exists" error. Failing that, I would manually back up each one of my migration folders, remove them from the server, then re-generate migration files for each app and run them all again from scratch (i.e., the initial makemigrations).
1
0
0
Migrations error in Django after moving to new server
1
python,django,postgresql,django-migrations
0
2016-02-11T10:39:00.000
I have created a model using QSqlTableModel, then created a tablview using QTableView and set the model on it. I want to update the model and view automatically whenever the database is updated by another program. How can I do that?
0
1
0.197375
0
false
35,392,361
0
255
1
0
0
35,383,018
There's no signal emitted for that currently. You could use a timer to query the last update timestamp and refresh the model data at designated intervals (see the sketch below).
1
1
0
Automatically updating QSqlTableModel and QTableView
1
python,pyqt,auto-update,qtableview,qsqltablemodel
0
2016-02-13T17:31:00.000
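A rough sketch of the polling approach from the answer above, using a QTimer to re-select the model at a fixed interval; PyQt4-style imports, an SQLite driver, and the interval/table/file names are all assumptions (adjust for PyQt5 or another driver as needed).

from PyQt4.QtGui import QApplication
from PyQt4.QtCore import QTimer
from PyQt4.QtSql import QSqlDatabase, QSqlTableModel

app = QApplication([])

db = QSqlDatabase.addDatabase("QSQLITE")
db.setDatabaseName("sensors.db")
db.open()

model = QSqlTableModel()
model.setTable("measurements")
model.select()

# Poll: re-read the table every 5 seconds; attached QTableViews refresh automatically.
timer = QTimer()
timer.timeout.connect(model.select)
timer.start(5000)

app.exec_()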
I have a spreadsheet which references/caches values from an external spreadsheet. When viewing the cell in Excel that I want to read using OpenPyxl, I see the contents as a string: Users. When I select the cell in Excel, I see the actual content in the Formula Bar is ='C:\spreadsheets\[_comments.xlsm]Rules-Source'!C5. I do not have the source spreadsheet stored on my machine. So, it appears Excel is caching the value from a separate spreadsheet as I am able to view the value Users when viewing the local spreadsheet in Excel. When I read the cell from the local spreadsheet using OpenPyxl, I get ='[1]Rules-Source'!C5. It is my understanding that OpenPyxl will not evaluate formulas. However, the string Users has to be cached somewhere in the XLSM document, right? Is there any way I can get OpenPyxl to read the cached source rather than returning the cell formula?
1
0
1.2
0
true
35,392,192
0
310
1
0
0
35,385,486
Yes, Excel does cache the values from the other sheet but openpyxl does not preserve this because there is no way of checking it.
1
0
0
OpenPyxl - difficulty getting cell value when cell is referencing other source
1
python,excel,python-3.x,openpyxl
0
2016-02-13T21:22:00.000
I'm using openpyxl to read an Excel spreadsheet with a lot of formulas. For some cells, if I access the cell's value as e.g. sheet['M30'].value I get the formula as intended, like '=IFERROR(VLOOKUP(A29, other_wksheet, 9, FALSE)*E29, "")'. But strangely, if I try to access another cell's value, e.g. sheet['M31'].value, all I get is =, even though in Excel that cell has essentially the same formula as M30: '=IFERROR(VLOOKUP(A30, other_wksheet, 9, FALSE)*E29, "")'. This is happening in a bunch of other sheets with a bunch of other formulas and I can't seem to find any rhyme or reason for it. I've looked through the docs and I'm not loading data_only=True so I'm not sure what's going wrong.
1
0
1.2
0
true
35,392,251
0
637
1
0
0
35,385,519
This sounds very much like you are looking at cells using "shared formulae". When this is the case the same formula is used by several cells. The formula itself is only stored with one of those cells and all others are marked as formulae but just contain a reference. Until version 2.3 of openpyxl all such cells would return "=" as their value. However, version 2.3 now performs the necessary transformation of the formula for dependent cells. ie. a shared formula of say "=A1+1" for A1 will be translated to "=B1+1" for B1. Please upgrade to 2.3 if you are not already using it. If this is not the case then please submit a bug report with the sample file.
1
0
0
openpyxl showing '=' instead of formula
1
python,excel,xlrd,openpyxl
0
2016-02-13T21:26:00.000
I have a Flask app that uses SQLAlchemy (Flask-SQLAlchemy) and Alembic (Flask-Migrate). The app runs on Google App Engine. I want to use Google Cloud SQL. On my machine, I run python manage.py db upgrade to run my migrations against my local database. Since GAE does not allow arbitrary shell commands to be run, how do I run the migrations on it?
8
1
0.066568
0
false
35,395,267
1
1,816
1
1
0
35,391,120
You can whitelist the ip of your local machine for the Google Cloud SQL instance, then you run the script on your local machine.
1
0
0
Run Alembic migrations on Google App Engine
3
python,google-app-engine,flask,google-cloud-sql,alembic
0
2016-02-14T11:17:00.000
I'm working in a Python program which has to access data that is currently stored in plain text files. Each file represents a cluster of data points that will be accessed together. I don't need to support different queries, the only thing I need is to retrieve and copy to memory cluster of data as fast as possible. I'm wondering if maybe a document oriented database could work better than my current text file approach. In particular, I would like to know if the seek time and transfer speed are the same in document-oriented DBs that in files. Should I switch to a document-oriented database or stay with the plain file?
2
0
0
0
false
35,421,941
0
367
1
0
0
35,421,803
A DODB sounds like a much more reliable and professional solution. Besides, you can add stored procedures with the future in mind, and most databases offer text search capabilities. Backups are also easier: instead of using an incremental tar command, you can use the native DB backup tools. I'm a fan of CouchDB, and you can add RESTful calls to it in a "transparent" way with JSON as the default response.
1
0
0
Document-oriented databases vs plain text files
2
python,database,filesystems,document-oriented-db
0
2016-02-16T00:52:00.000
When does Mongoengine rebuild (update) the information about indexes? I mean, if I added or changed some field (added the unique or sparse option to a field) or added some meta info in the model declaration. So the question is: when does mongoengine update it? How does it track changes?
3
1
1.2
0
true
35,648,604
1
349
1
0
0
35,437,458
Mongoengine does not rebuild indexes automatically. Mongoengine tracks changes in the models (by the way, this doesn't work if you add sparse to your field when the field doesn't have the unique option) and then fires ensureIndex in MongoDB. But when it fires, make sure you delete the old index version manually in MongoDB (Mongoengine doesn't). The problem is: if you add sparse to a field without the unique option, this change is not mapped to the MongoDB index. You need to combine unique=True, sparse=True. If you change indexes in the models, you need to manually delete the old indexes in MongoDB (see the sketch below).
1
0
0
When Mongoengine rebuild indexes?
1
python,mongodb,mongoengine
0
2016-02-16T16:11:00.000
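A sketch of declaring a sparse unique index in mongoengine as described above; the document and field names are made up, and a local MongoDB instance is assumed.

from mongoengine import Document, StringField, connect

connect("mydb")

class Member(Document):
    email = StringField()

    meta = {
        "indexes": [
            # sparse behaves as expected here only together with unique
            {"fields": ["email"], "unique": True, "sparse": True},
        ]
    }

# ensure_indexes() fires the index creation on MongoDB; old index versions
# still have to be dropped manually on the MongoDB side.
Member.ensure_indexes()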
I have about million records in a list that I would like to write to a Netezza table. I have been using executemany() command with pyodbc, which seems to be very slow (I can load much faster if I save the records to Excel and load to Netezza from the excel file). Are there any faster alternatives to loading a list with executemany() command? PS1: The list is generated by a proprietary DAG in our company, so writing to the list is very fast. PS2: I have also tried looping executemany() into chunks, with each chunk containing a list with 100 records. It takes approximately 60 seconds to load, which seems very slow.
1
0
0
0
false
35,599,759
0
806
1
0
0
35,466,165
Netezza is good for bulk loads, where executemany() inserts a number of rows in one go. The best way to load millions of rows is the "nzload" utility, which can be scheduled from a VBScript or Excel macro on Windows, or a shell script on Linux.
1
0
0
Loading data to Netezza as a list is very slow
2
python,list,pyodbc,netezza,executemany
0
2016-02-17T19:40:00.000
I have about a million rows of data with lat and lon attached, and more to come. Even now, reading the data from the SQLite file (I read it with pandas, then create a point for each row) takes a lot of time. Now, I need to do a spatial join over those points to get a zip code for each one, and I really want to optimise this process. So I wonder: is there any relatively easy way to parallelize those computations?
2
1
0.066568
1
false
35,583,196
0
939
2
0
0
35,581,528
I am assuming you have already implemented GeoPandas and are still finding difficulties? You can improve this by further hashing your coords data, similar to how Google hashes their search data. Some databases already provide support for these types of operations (e.g. MongoDB). Imagine if you took the first (left) digit of your coords and put each set of corresponding data into a separate SQLite file. Each digit can be a hash pointing to the correct file to look in. Now your lookup time has improved by a factor of 20 (range(-9,10)), assuming your hash lookup takes minimal time in comparison.
1
0
0
Fastest approach for geopandas (reading and spatialJoin)
3
python,multithreading,pandas,geopandas
0
2016-02-23T15:28:00.000
I have about a million rows of data with lat and lon attached, and more to come. Even now, reading the data from the SQLite file (I read it with pandas, then create a point for each row) takes a lot of time. Now, I need to do a spatial join over those points to get a zip code for each one, and I really want to optimise this process. So I wonder: is there any relatively easy way to parallelize those computations?
2
1
1.2
1
true
35,786,998
0
939
2
0
0
35,581,528
As it turned out, the most convenient solution in my case is to use the pandas.read_sql function with a specific chunksize parameter. In this case, it returns a generator of data chunks, which can be effectively fed to mp.Pool().map() along with the job; in this (my) case the job consists of 1) reading geoboundaries, 2) spatial-joining the chunk, 3) writing the chunk to the database (see the sketch below).
1
0
0
Fastest approach for geopandas (reading and spatialJoin)
3
python,multithreading,pandas,geopandas
0
2016-02-23T15:28:00.000
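Roughly what that looks like in code; the connection, table name, chunk size, and the job body are placeholders for the poster's actual pipeline (which reads geoboundaries, spatial-joins, and writes out each chunk).

import multiprocessing as mp
import pandas as pd
import sqlite3

def job(chunk):
    # Placeholder for: read geoboundaries, spatial-join the chunk, write it out.
    return len(chunk)

if __name__ == "__main__":
    conn = sqlite3.connect("points.db")
    # chunksize makes read_sql yield DataFrames lazily instead of one huge frame.
    chunks = pd.read_sql("SELECT * FROM points", conn, chunksize=100000)

    with mp.Pool(4) as pool:
        results = pool.map(job, chunks)
    print(sum(results), "rows processed")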
I need to migrate data from MySQL to Postgres. It's easy to write a script that connects to MySQL and to Postgres, runs a select on the MySQL side and inserts on the Postgres side, but it is veeeeery slow (I have + 1M rows). It's much faster to write the data to a flat file and then import it. The MySQL command line can download tables pretty fast and output them as tab-separated values, but that means executing a program external to my script (either by executing it as a shell command and saving the output to a file or by reading directly from the stdout). I am trying to download the data using Python instead of the MySQL client. Does anyone know what steps and calls does the MySQL command line perform to query a large dataset and output it to stdout? I thought it could be just that the client is in C and should be much faster than Python, but the Python binding for MySQL is itself in C so... any ideas?
0
0
0
0
false
35,598,628
0
181
1
0
0
35,592,092
I believe that the problem is that you are inserting each row in a separate transaction (which is the default behavior when you run SQL queries without explicitly starting a transaction). In that case, the database must write (flush) changes to disk on every INSERT. It can be 100x slower than inserting data in a single transaction. Try to run BEGIN before importing data and COMMIT after (see the sketch below).
1
0
0
Why is MySQL command line so fast vs. Python?
1
python,mysql,postgresql
0
2016-02-24T02:22:00.000
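A sketch of the single-transaction idea on the Postgres side with psycopg2; the table and connection details are placeholders. psycopg2 opens a transaction implicitly, so the key point is to commit once at the end rather than once per row.

import psycopg2

conn = psycopg2.connect(dbname="target", user="me")
cur = conn.cursor()

rows = [(1, "a"), (2, "b"), (3, "c")]   # imagine ~1M rows streamed from MySQL

# All inserts run inside one transaction: a single flush at COMMIT time
# instead of one per row.
cur.executemany("INSERT INTO items (id, name) VALUES (%s, %s)", rows)
conn.commit()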
I first had an updating problem using the Google Drive API. I followed the Quickstart example and, after making some changes to it, the file on Google Drive is updated successfully. But now here comes a new problem after updating, and I am not sure if it is because my change to the Quickstart is not proper, or something else. The problem is that after updating an Excel file on Google Drive with an Excel file on my local machine, the Excel file on my local machine is not editable if I don't close the IDLE terminal; but if I close the IDLE window, I can do everything with the Excel file and save the changes. For example, without closing the IDLE window, if I make some changes to the Excel file and try to save it, then the system says something like sharing violation, and saves the file as a temporary file 62635600....; if I try to delete the Excel file, then the system says the file is being used by pythonw.exe. After closing the IDLE window, the Excel file goes back to normal, same as a normal Excel file. Does anybody have any idea?
1
1
0.066568
0
false
35,604,757
0
597
1
0
0
35,604,605
You can install Google Drive on your local machine and copy the file into the Google Drive directory at the correct position. Then Google Drive (the client software) will update the file.
1
0
0
After updating file on google drive through google api, the file on local machine is not editable without closing IDLE window
3
python,google-api-python-client
0
2016-02-24T14:17:00.000
I have a flask app that recently had to start using mssql generated guid's as primary keys (previously it was just integers). The guid's are latin-1 encoding. Also, I am not using sqlalchemy. Now, when I'm trying to display the queried mssql guid's in a flask jinja2 template, I get the following error: UnicodeDecodeError: 'ascii' codec can't decode byte 0xc1 in position 0: ordinal not in range(128). I've tried: unsetting the LANG on the linux host Forcing utf-8 in FreeTDS config (this was already done) escaping in the jinja template using python3, no luck switching from pypyodbc to pyodbc3, but the problem presists Nothing seems to work. If I import sys and set the decoding to utf-8, the error changes replacing ascii with utf-8, but the jinja template will not render the guid's. Any thoughts? Thanks for reading. Also to note, my dev environment is on windows 7 and this issue does not crop up there. It's only on the linux server.
0
0
1.2
0
true
35,608,084
1
167
1
0
0
35,604,937
Well, this feels like a hack, but since the only time I'm ever using these guid's is when i'm reading them from the database, I just did: CAST(REC_GUID_ID as VARCHAR(36)) as REC_GUID_ID And now they are in a format that everything seems to read just fine.
1
0
0
Unicode issue using flask and mssql guids with FreeTDS
1
python,sql-server,flask,jinja2
0
2016-02-24T14:31:00.000
I've built a Django app that uses sqlite (the default database), but I can't find anywhere that allows deployment with sqlite. Heroku only works with postgresql, and I've spent two days trying to switch databases and can't figure it out, so I want to just deploy with sqlite. (This is just a small application.) A few questions: Is there anywhere I can deploy with sqlite? If so, where/how?
0
-2
-0.132549
0
false
35,615,302
1
3,302
1
0
0
35,615,273
Sure, you can deploy with sqlite... it's not really recommended, but it should work OK if you have low network traffic. You set your database engine to sqlite in settings.py (see the sketch below); just make sure you have write access to the path that you specify for your database.
1
0
0
Is it possible to deploy Django with Sqlite?
3
python,django,postgresql,sqlite,heroku
0
2016-02-24T23:23:00.000
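The relevant settings.py fragment looks roughly like this (BASE_DIR as in a default Django project; the database file name is arbitrary).

import os

BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.sqlite3",
        # The web server user must be able to write to this file and its directory.
        "NAME": os.path.join(BASE_DIR, "db.sqlite3"),
    }
}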
I have a Raspberry Pi collecting data from sensors attached to it. I would like to have this data - collected every minute - accessible from an online DB (Amazon RDS | MySQL). Currently, a python script running on the Pi pushes this data to an Amazon RDS instance every 50 seconds (~per minute). However, I have no records when internet is down. I will appreciate any suggestions on how to fix this. Here are my thoughts so far: store data on a local MySQL DB, run a separate script that checks for differences between the online and local DB and updates the online one where needed. This will run every minute and write only one record to the online DB every minute if all is well. Utilize some sort of feature within MySQL itself - a replication job?
1
1
0.197375
0
false
38,479,349
0
476
1
0
0
35,617,670
I went with my first thought: store the sensor data on a local DB (SQLite3 for its small footprint). Records are created every half minute. a separate script - run regularly via cron - compares the last timestamp entry in the cloud DB with the local one and updates the cloud DB. Even though the comparison would ideally mean a doubling of DB transactions (a read + a write), if the last timestamp recorded on the online DB is stored locally for reference the remote read becomes unnecessary, thus being more efficient.
1
0
0
Syncing locally collected regular data to online DB over unreliable internet connection
1
python,mysql,database,synchronization,raspberry-pi
1
2016-02-25T03:28:00.000
I am currently developing an export plugin for MySQL Workbench 6.3. It is my first one. Is there any developer tool that I can use to help me (debug console, watches, variables state, etc.)
1
2
1.2
0
true
35,668,094
0
156
1
0
0
35,649,215
There is the GRT scripting shell, which you can reach via menu -> Scripting -> Scripting Shell. This shell is mostly useful for Python plugins, but it also shows some useful information from the GRT (classes, the current tree with all settings, open editors, models, etc.)
1
0
0
MySQL Workbench developer tools
1
python,debugging,plugins,mysql-workbench
0
2016-02-26T10:26:00.000
I am using a postgres database with sql-alchemy and flask. I have a couple of jobs which I have to run through the entire database to updates entries. When I do this on my local machine I get a very different behavior compared to the server. E.g. there seems to be an upper limit on how many entries I can get from the database? On my local machine I just query all elements, while on the server I have to query 2000 entries step by step. If I have too many entries the server gives me the message 'Killed'. I would like to know 1. Who is killing my jobs (sqlalchemy, postgres)? 2. Since this does seem to behave differently on my local machine there must be a way to control this. Where would that be? thanks carl
1
3
0.291313
0
false
35,707,179
1
2,227
1
0
0
35,705,211
Just the message "killed" appearing in the terminal window usually means the kernel was running out of memory and killed the process as an emergency measure. Most libraries which connect to PostgreSQL will read the entire result set into memory, by default. But some libraries have a way to tell it to process the results row by row, so they aren't all read into memory at once. I don't know if flask has this option or not. Perhaps your local machine has more available RAM than the server does (or fewer demands on the RAM it does have), or perhaps your local machine is configured to read from the database row by row rather than all at once.
1
0
0
postgres database: When does a job get killed
2
python,database,postgresql,sqlalchemy,flask-sqlalchemy
0
2016-02-29T17:01:00.000
I am using MongoDB in Python. The problem I'm facing is during the generation of a key. The code through which I'm generating a key is: post_id = posts.insert_one({msg["To"]: a}). Now here, the "To" consists of an email address (which contains a dot (.) symbol). I researched a few documents online and got to know that the "To" of a mail cannot be used as a key, because MongoDB uses "." for nested document access and reserves "$". So now how can I proceed?
0
1
1.2
0
true
35,765,568
0
77
1
0
0
35,716,642
I have done something like this: 'To': 'test@gmail(dot)com', i.e., replacing the dots with a placeholder before storing the string (a sketch follows below).
1
0
1
How to set a generated key in mongodb using python?
2
python,mongodb
0
2016-03-01T07:04:00.000
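One way to apply that idea in code is to escape the offending characters before using the address as a key; the database/collection names and the "(dot)" placeholder (mirroring the answer) are assumptions, and any reversible substitution would work.

from pymongo import MongoClient

posts = MongoClient().mydb.posts   # assumed database/collection names

def encode_key(key):
    # MongoDB (of that era) does not allow '.' or '$' in field names.
    return key.replace(".", "(dot)").replace("$", "(dollar)")

to_address = "someone@example.com"
post_id = posts.insert_one({encode_key(to_address): "message body"}).inserted_id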
I'm trying to write some documentation on how to restore a CKAN instance in my organization. I have backuped and restored successfully CKAN database and resources folder but i don't know what i have to do with datastore db. Which is the best practice? Use pg_dump to dump the database or initialize it from the resources folder (if there is a way)? Thanks. Alex
5
4
1.2
0
true
35,729,219
1
1,289
1
0
0
35,726,924
Backup CKAN's databases (the main one and Datastore one if you use it) with pg_dump. If you use Filestore then you need to take a backup copy of the files in the directory specified by ckan.storage_path (default is /var/lib/ckan/default) Restore the database backups (after doing createdb) using psql -f. Then run paster db upgrade just in case it was from an older ckan version. Then paster --plugin=ckan search-index rebuild. In an emergency use rebuild_fast instead of rebuild, but I think it might create some duplicates entries, so to be certain you could then do rebuild -r to do it again carefully but slowly. initialize [the datastore database] from the resources folder (if there is a way) I don't think the CKAN Data Pusher has a command-line interface to push all the resources. It would be a good plan for you to write one and submit a PR for everyone's benefit.
1
0
0
Ckan backup and restore
1
python,ckan
0
2016-03-01T15:31:00.000
Hello everybody this is my first post, I made a website with Django 1.8.9 and Python 3.4.4 on Windows 7. As I was using SQLite3 everything was fine. I needed to change the database to MySQL. I installed MySQL 5.6 and mysqlclient. I changed the database settings and made the migration ->worked. But when I try to register a new account or logging into the admin (made createsuperuser before) I get this Error: (1146, "Table 'community_db.app_cache' doesn't exist") I restarted the server and restarted command prompt. What also confuses me is the next row: C:\Python34\lib\site-packages\MySQLdb\connections.py in query, line 280 I was reading that there isn't any MySQLdb for Python 3 Would be nice if there is any help. I already spent such a long time for this website and I tried to solve this problem like allllll the other ones before, but for this one I can't find any help via google/stackover. I don't know what to do
0
1
0.099668
0
false
35,777,867
1
2,782
2
0
0
35,732,758
So here is the answer for all the Django (or coding in general) noobs like me: python manage.py createcachetable. I totally forgot about that, and this caused all the trouble with "app_cache doesn't exist", at least in this case... I changed my database to PostgreSQL, but I am sure it also helps with MySQL...
1
0
0
Django - MySQL : 1146 Table doesn't exist
2
mysql,django,python-3.x,django-database
0
2016-03-01T20:25:00.000
Hello everybody this is my first post, I made a website with Django 1.8.9 and Python 3.4.4 on Windows 7. As I was using SQLite3 everything was fine. I needed to change the database to MySQL. I installed MySQL 5.6 and mysqlclient. I changed the database settings and made the migration ->worked. But when I try to register a new account or logging into the admin (made createsuperuser before) I get this Error: (1146, "Table 'community_db.app_cache' doesn't exist") I restarted the server and restarted command prompt. What also confuses me is the next row: C:\Python34\lib\site-packages\MySQLdb\connections.py in query, line 280 I was reading that there isn't any MySQLdb for Python 3 Would be nice if there is any help. I already spent such a long time for this website and I tried to solve this problem like allllll the other ones before, but for this one I can't find any help via google/stackover. I don't know what to do
0
0
0
0
false
35,733,218
1
2,782
2
0
0
35,732,758
I would assume this was an issue with permissions, as in the web page connects with a user that doesn't have the proper permissions to create content. If your tables are InnoDB, you'll get the "table doesn't exist" message. You need the ib* files in the root of the MySQL datadir (e.g. ibdata1, ib_logfile0, ib_logfile1). If you don't have these files, you might need to fix permissions by logging directly into your DB.
1
0
0
Django - MySQL : 1146 Table doesn't exist
2
mysql,django,python-3.x,django-database
0
2016-03-01T20:25:00.000
What's the best way to switch to a database management software from LibreOffice Calc? I would like to move everything from a master spreadsheet to a database with certain conditions. Is it possible to write a script in Python that would do all of this for me? The data I have is well structured I have about 300 columns of assets and under every asset there is 0 - ~50 filenames. The asset names are uniform as well as the filenames. Thank you all!
0
1
0.099668
0
false
66,788,273
0
1,516
2
0
0
35,784,155
You can ofcourse use python for this task but it might be an overkill. The CSV export / import sequence is likely much faster, less error prone and needs less ongoing maintainance (e.g if you change the spreadsheet columns). The sequence is roughly as follows: select the sheet that you want to import into a DB select Files / Save as.. and then text/csv select a column separator that will not interfere with your data (e.g. |) The import sequence into a database depends on your choice of db but today many IDE's and database GUI environments will automatically import / introspect your CSV file and create the table / insert the data for you. Things to be double check: You may have to indicate that the first row is a header The assigned datatype may need fine tuning if the automated guesses are not optimal
1
0
0
How to import data from LibreOffice Calc to a SQL database?
2
python,sql,database,libreoffice,libreoffice-calc
0
2016-03-03T22:15:00.000
What's the best way to switch to a database management software from LibreOffice Calc? I would like to move everything from a master spreadsheet to a database with certain conditions. Is it possible to write a script in Python that would do all of this for me? The data I have is well structured I have about 300 columns of assets and under every asset there is 0 - ~50 filenames. The asset names are uniform as well as the filenames. Thank you all!
0
0
1.2
0
true
35,784,265
0
1,516
2
0
0
35,784,155
You can create a Python script that will read this spreadsheet row by row and then run insert statements against a database. In fact, it would be even better if you save the spreadsheet as CSV, for example, if you only need the data there.
1
0
0
How to import data from LibreOffice Calc to a SQL database?
2
python,sql,database,libreoffice,libreoffice-calc
0
2016-03-03T22:15:00.000
I am trying to create my personal web page. So in that I needed to put in the recommendations panel , which contains recommendations by ex employees/friends etc. So I was planning to create a model in django with following attributes:- author_name author_designation author_image author_comments I have following questions in my mind related to image part:- Is it good practice to store images in the backend database?(database is for structured information from what i understand) How to store images so that scaling the content and managing it becomes really easy?
3
1
0.066568
0
false
35,844,490
1
1,276
1
0
0
35,844,303
The best way to do this is to store the images on your server in some specific, general folder for these images. After that, you store a string in your DB with the path to the image that you want to load. This will be a more efficient way to do it.
1
0
0
Is it a good practice to save images in the backend database in mysql/django?
3
python,mysql,django
0
2016-03-07T12:54:00.000
I have installed Open edX (Dogwood) on an EC2 Ubuntu 12.04 AMI and, honestly, nothing works. I can sign up in Studio, and create a course, but the process does not complete. I get a nice page telling me that the server has an error. However, the course will show up on the LMS page. But I cannot edit the course in Studio. If I sign out of Studio, I cannot log back in without an error. However, upon refreshing the page, I am logged in. I can enable the search function and install the search app, but it doesn't show any courses and returns an error. Can someone point me to an AMI that works with, or includes, Open edX? The Open edX documentation is worthless. Or, failing that, explain to me what I am missing when installing Open edX using the automated installation scripts from the documentation.
1
1
0.197375
0
false
36,759,310
1
226
1
0
0
35,948,834
This one works ami-7de8981d (us-east). Login with ssh as the 'ubuntu' user. Studio is on port 18010 and the LMS is on port 80.
1
0
0
Open edX Dogwood problems
1
python,django,amazon-web-services,edx,openedx
0
2016-03-11T19:57:00.000
I am using the Python cassandra-driver to execute queries on a Cassandra database, and I am wondering how to re-insert a ResultSet returned from a SELECT query on table A into a table B, knowing that A and B have the same columns but different primary keys. Thanks in advance
0
0
0
0
false
35,969,262
0
294
1
0
0
35,964,324
There is no magic; you'll need to: create a prepared statement for INSERT INTO tableB ...; for each row of the ResultSet from table A, extract the values and create a bound statement for table B; then execute the bound statement to insert into B (see the sketch below). You can use asynchronous queries to accelerate the migration a little bit, but be careful to throttle the async requests.
1
0
0
Cassandra python driver - how to re-insert a ResultSet
1
python,cassandra,resultset
0
2016-03-12T22:52:00.000
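A sketch of the prepare/bind/execute loop described above; the contact point, keyspace, table and column names are placeholders.

from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect("mykeyspace")

insert_b = session.prepare(
    "INSERT INTO table_b (new_key, col1, col2) VALUES (?, ?, ?)")

rows = session.execute("SELECT new_key, col1, col2 FROM table_a")
for row in rows:
    # Bind the values coming from A's ResultSet and write them into B.
    session.execute(insert_b, (row.new_key, row.col1, row.col2))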
I am thinking that if I don't use an auto-increment id as the primary key in MySQL but implement another method, could I replace the auto id with bson.objectid.ObjectId in MySQL? According to the ObjectId description, it's composed of: a 4-byte value representing the seconds since the Unix epoch, a 3-byte machine identifier, a 2-byte process id, and a 3-byte counter starting with a random value. It seems it can provide unique, non-duplicate keys. Is it a good idea?
0
3
1.2
0
true
35,983,791
0
373
1
0
0
35,983,632
You certainly could do this. One issue though is that since this can't be set by the database itself, you'll need to write some Python code to ensure it is set on save. Since you're not using MongoDB, though, I wonder why you want to use a BSON id. Instead you might want to consider using UUID, which can indeed be set automatically by the db.
1
0
0
May I use bson.objectid.ObjectId as (primary key) id in sql?
1
python,mysql,django,flask,primary-key
0
2016-03-14T09:23:00.000
I made a program using the sqlite3 and pyqt modules. The program can be used by different people simultaneously. Actually, I searched but I don't really know or understand the concept of a server. How can I connect this program to a server? Or is it enough for the computers that have connections to the server to run the program simultaneously?
0
2
1.2
0
true
35,986,703
0
59
1
0
0
35,986,526
Do you want to connect to an SQLite database server? SQLite is serverless; it stores your data in a file. You should use MariaDB for a DB server. Or you can store your SQLite database file on a network shared drive, in the cloud, or...
1
1
0
How to connect my app with the database server
1
python,sqlite,server
0
2016-03-14T11:38:00.000
I am not getting the Database tool window under View -> Tool Windows in the PyCharm Community edition, so that I can connect to a MySQL server database. Also, please suggest whether there are other ways by which I can connect to a MySQL server database using the PyCharm Community edition.
0
1
0.197375
0
false
38,564,886
0
493
1
0
0
35,991,312
Database support is available only in the paid JetBrains IDEs.
1
0
0
Unable to connect to MYSQL server database using pycharm community (2.7.11)
1
python,pycharm
0
2016-03-14T15:15:00.000
If cursor.execute('select * from users') returns a 4-row set, and I then call cursor.fetchone(), is there a way to re-position the cursor to the beginning of the returned results so that a subsequent cursor.fetchall() gives me all 4 rows? Or do I need to run cursor.execute again, and then cursor.fetchall()? This seems awkward. I checked the Python docs and couldn't find anything relevant. What am I missing?
2
2
1.2
0
true
36,030,685
0
1,900
1
0
0
36,022,384
SQLite computes each result row on demand, so it is neither possible to go back to an earlier row, nor to determine how many following rows there will be. The only way to go back is to re-execute the query. Alternatively, call fetchall() first, and then use the returned list instead of the cursor.
1
0
0
Python & SQLite: fetchone() and fetchall() and cursor control
1
python,sqlite
0
2016-03-15T21:19:00.000
I am creating an appEngine application in python that will need to perform efficient geospatial queries on datastore data. An example use case would be, I need to find the first 20 posts within a 10 mile radius of the current user. Having done some research into my options, I have found that currently what seems like the 2 best approaches for achieving this type of functionality would be: Indexing geoHashed geopoint data using Python's GeoModel library Creating/deleting documents of structured data using Google's newer SearchAPI It seems from a high level perspective that indexing geohashes and performing queries on them directly would be less costly and much faster than having to create and delete a document for every geospatial query, however i've also read that geohashing can be very inaccurate along the equator or along 'faultlines' created by the hashing algorithm. I've seen very few posts contrasting the best methods in detail, and I think stack is a good place to have this conversation, so my questions are as follows: Has anyone implemented similar features and had positive experiences with either methods? Which method would be the cheaper alternative? Which would be the faster alternative? Is there another important method I'm leaving out? Thanks in advance.
1
1
1.2
0
true
36,110,881
1
326
1
0
0
36,092,591
Geohashing does not have to be inaccurate at all. It's all in the implementation details. What I mean is you can check the neighbouring geocells as well to handle border-cases, and make sure that includes neighbours on the other side of the equator. If your use case is finding other entities within a radius as you suggest, I would definitely recommend using the Search API. They have a distance function tailored for that use. Search API queries are more expensive than Datastore queries yes, but if you weigh in the computation time to do these calculations in your instance and probably iterating through all entities for each geohash to make sure the distance is actually less than the desired radius, then I would say Search API is the winner. And don't forget about the implementation time.
1
0
0
Geohashing vs SearchAPI for geospatial querying using datastore
2
python,google-app-engine,google-cloud-datastore,google-search-api,geohashing
0
2016-03-18T19:15:00.000
I'm completely new to managing data using databases so I hope my question is not too stupid but I did not find anything related using the title keywords... I want to setup a SQL database to store computation results; these are performed using a python library. My idea was to use a python ORM like SQLAlchemy or peewee to store the results to a database. However, the computations are done by several people on many different machines, including some that are not directly connected to internet: it is therefore impossible to simply use one common database. What would be useful to me would be a way of saving the data in the ORM's format to be able to read it again directly once I transfer the data to a machine where the main database can be accessed. To summarize, I want to do: On the 1st machine: Python data -> ORM object -> ORM.fileformat After transfer on a connected machine: ORM.fileformat -> ORM object -> SQL database Would anyone know if existing ORMs offer that kind of feature?
2
0
0
0
false
36,104,630
0
963
1
0
0
36,104,521
Is there a reason why some of the machines cannot be connected to the internet? If you really can't, what I would do is set up a database and the Python app on each machine where data is collected/generated. Have each machine use the app to store into its own local database, and then later you can create a dump of each database from each machine and import those results into one database. Not the ideal solution, but it will work.
1
0
0
Python ORM - save or read sql data from/to files
2
python,mysql,database,orm
0
2016-03-19T16:59:00.000
I try to use pip install psycopg2 on windows10 and python3.5, but it show me below error message. how can i fix it ? Command "d:\desktop\learn\python\webcatch\appserver\webcatch\scripts\python.exe -u -c "import setuptools, tokenize;file='C:\Users\16022001\AppData\Local\Temp\pip-build-rsorislh\psycopg2\setup.py';exec(compile(getattr(tokenize, 'open', open)(file).read().replace('\r\n', '\n'), file, 'exec'))" install --record C:\Users\16022001\AppData\Local\Temp\pip-kzsbvzx9-record\install-record.txt --single-version-externally-managed --compile --install-headers d:\desktop\learn\python\webcatch\appserver\webcatch\include\site\python3.5\psycopg2" failed with error code 1 in C:\Users\16022001\AppData\Local\Temp\pip-build-rsorislh\psycopg2\
0
0
0
0
false
69,657,488
0
329
1
0
0
36,115,491
pip install psycopg worked for me; don't mention the version, i.e. don't use (pip install psycopg2).
1
0
1
pip install psycopg2 error
1
python,windows,psycopg2
0
2016-03-20T15:11:00.000
In Django, the database username is used as the schema name. In DB2 there are no database-level users; OS users are used to log into the database. In my database I have two different names for the database user and the database schema. So in Django with DB2 as the backend, how can I use a different schema name to access the tables? EDIT: Clarifying that I'm trying to access via the ORM and not raw SQL. The ORM is implicitly using the username as the schema name. How do I avoid that?
2
0
0
0
false
36,159,818
1
688
1
0
0
36,159,706
DB2 uses so-called two part names, schemaname.objectname. Each object, including tables, can be referenced by the entire name. Within a session there is the current schema which by default is set to the username. It can be changed by the SET SCHEMA myschema statement. For your question there are two options: 1) Reference the tables with their full name: schemaname.tablename 2) Use set schema to set the common schemaname and reference just the table.
1
0
0
django-db2 use different schema name than database username
2
django,python-2.7,django-models,db2
0
2016-03-22T16:15:00.000
I've tried to deploy (including migrations) to a production environment, but my Django migrations (like adding columns) very often stall and don't progress any more. I'm working with PostgreSQL 9.3, and I found some reasons for this problem: if PostgreSQL has an active transaction, the ALTER TABLE query doesn't run. So until now, restarting the PostgreSQL service before migrating was a solution, but I think this is a bad idea. Is there any good way to make deployment progress nicely?
1
1
0.197375
0
false
36,213,045
1
1,701
1
0
0
36,212,891
Open connections will likely stop schema updates. If you can't wait for existing connections to finish, or if your environment is such that long-running connections are used, you may need to halt all connections while you run the update(s). The downtime, if it's likely to be significant to you, could be mitigated if you have a read-only slave that could stay online. If not, ensuring your site fails over to some sort of error/explanation page/redirect would at least avoid raw failure code responses to requests that come in if downtime for migrations is acceptable.
1
0
0
django migration doesn't progress and makes database lock
1
python,django,postgresql
0
2016-03-25T01:53:00.000
I'm currently using Google Cloud SQL second-generation instances to host my database. I need to make a schema change to a table but I'm not sure of the best way to do this. Ideally, before I deploy using gcloud preview app deploy, my migrations will run so the new version of the code uses the latest schema. Also, if I need to roll back to an old version of my app, the migrations should run for that point in time. Is there a way to integrate SQL schema migrations with my App Engine deploys? My app is an App Engine managed VM running Python/Flask.
1
-2
1.2
0
true
36,407,336
1
180
1
1
0
36,231,114
SQL schema migration is a well-known branch of SQL database administration and is not specific to Cloud SQL, which differs from other SQL systems mainly in how it is deployed and networked. Other than that, you should look up schema-migration documentation and articles online to learn how to approach your specific situation. As it stands, however, this question is too broad for Stack Overflow. Best of luck!
1
0
0
How to perform sql schema migrations in app engine managed vm?
1
python,google-app-engine,google-cloud-sql,gcloud
0
2016-03-26T02:54:00.000
I want to use psycopg2 (PostgreSQL) with virtualenv. I am using Ubuntu, and the system Python already has psycopg2 and it works fine, but if I try to use it after activating a virtualenv it shows ImportError: No module named psycopg2. Do I need to manually create a symbolic link to dist-packages?
0
2
1.2
0
true
36,247,000
0
20
1
0
0
36,246,954
Virtualenvs are isolated from the system packages by default, so you need to install all packages into each virtualenv (for example, run pip install psycopg2 with the virtualenv activated), or pass --system-site-packages when creating it.
1
0
1
PostgreSQL not working with virtual envirement
1
python,postgresql,python-2.7,virtualenv,psycopg2
0
2016-03-27T11:49:00.000
I am using a simple SQLAlchemy paginate statement like this: items = models.Table.query.paginate(page, 100, False) with page = 1. When running this command twice I get different outputs. If I run it with fewer elements (e.g. 10), it gives me the same output when run multiple times. I thought that for a paginate command to work it has to return the same set each time it is called? cheers carl
0
0
0
0
false
36,334,148
1
193
1
0
0
36,319,702
OK, I don't know the full answer to this question, but ordering the query (order_by) solved my problem... I am still interested to know why paginate does not apply an order by itself, because it basically means that without an order statement, paginate cannot be used to iterate through all elements. cheers carl
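A minimal sketch of the fix, assuming Flask-SQLAlchemy and that models.Table has an integer primary key column named id (a hypothetical name); without an ORDER BY, the database is free to return rows in any order, so the OFFSET/LIMIT pagination behind paginate() can repeat or skip rows between calls.

```python
# models is the module from the question; Table.id is assumed to exist.
items = (models.Table.query
         .order_by(models.Table.id)
         .paginate(page, 100, False))
```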
1
0
0
flask sqlalchemy paginate() function does not get the same elements when run twice
1
python,pagination,flask-sqlalchemy
0
2016-03-30T21:04:00.000
I need to develop a natural-language querying tool for a structured database. I have tried two approaches: 1) using Python NLTK (the Natural Language Toolkit for Python); 2) using JavaScript and JSON (as the data source). In the first case I did some NLP steps to process the natural query: removing stop words, stemming, and finally mapping keywords using a feature grammar. This methodology works for simple scenarios. Then I moved to the second approach: finding the data in JSON, getting the corresponding column name and table name, then building an SQL query. For this one I also implemented stop-word removal and stemming in JavaScript. Both of these techniques have limitations. I want to implement a semantic search approach. Can anyone suggest a better approach for this?
1
0
0
0
false
36,531,992
0
1,077
1
0
0
36,330,033
As I commented, I think you should add some code, since not everyone has read the book. Anyway, my conclusion is that yes, as you said, this approach has a lot of limitations, and the only way to achieve more complex queries is to write very extensive and complete grammar productions, which is pretty hard work.
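A minimal sketch of the feature-grammar approach from the NLTK book, assuming the sample grammar sql0.fcfg that ships with the nltk book_grammars data package; a real tool would need far more productions, which is exactly the hard work mentioned above.

```python
import nltk
from nltk import load_parser

nltk.download('book_grammars')  # one-time download of the sample grammars
cp = load_parser('grammars/book_grammars/sql0.fcfg')

query = 'What cities are located in China'
trees = list(cp.parse(query.split()))
sem = trees[0].label()['SEM']          # SQL fragments attached by the grammar
sql = ' '.join(s for s in sem if s)
print(sql)  # SELECT City FROM city_table WHERE Country="china"
```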
1
0
0
Natural Language Processing Database Querying
2
javascript,python,json,nlp,nltk
0
2016-03-31T09:56:00.000
I have been trying to find examples in the ZODB documentation of doing a join of two or more tables. I know that this is an object database, but I am trying to create objects that represent tables. And I see that ZODB makes use of SQLAlchemy. So I was wondering if I can treat things in ZODB in a relational-like sense. I hope someone can let me know if my train of thought in using ZODB is OK, or if one has to think in a very different way.
0
2
1.2
0
true
36,479,782
0
156
1
0
0
36,457,076
ZODB does not use SQLAlchemy, and there is no relational model. There are no tables to join, period. The ZODB stores an object tree; there is no schema. It's just Python objects inside more Python objects. Any references to ZODB and SQLAlchemy are for applications built on top of the ZODB, where transactions for external relational databases accessed through SQLAlchemy are tied in with the ZODB transaction manager to ensure that transactions cover both the ZODB and data in other databases. This essentially means that when you commit the ZODB transaction, SQLAlchemy is told about this too.
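A minimal sketch of that object tree, assuming a local FileStorage file (data.fs) and the persistent/BTrees packages installed alongside ZODB; there is no join because related objects simply reference each other directly.

```python
import persistent
import transaction
from BTrees.OOBTree import OOBTree
from ZODB import DB, FileStorage

class Author(persistent.Persistent):
    def __init__(self, name):
        self.name = name

class Book(persistent.Persistent):
    def __init__(self, title, author):
        self.title = title
        self.author = author  # a direct object reference, not a foreign key

db = DB(FileStorage.FileStorage('data.fs'))
conn = db.open()
root = conn.root()

root['books'] = OOBTree()
root['books']['zodb-intro'] = Book('Learning ZODB', Author('Alice'))
transaction.commit()

# The relational "join" is just attribute traversal:
print(root['books']['zodb-intro'].author.name)

conn.close()
db.close()
```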
1
0
0
ZODB database: table joins
1
python,sqlalchemy,zodb
0
2016-04-06T16:32:00.000
There is a use case in which we would like to add columns from the data of a web service to our original SQL data table. If anybody has done this, please comment.
0
2
0.197375
0
false
36,533,707
0
959
1
0
0
36,526,219
Shadowfax is correct that you should review the How to Ask guide. That said, Spotfire offers this feature in two ways: 1) use IronPython scripting attached to an action control to retrieve the data; this is a very rigid solution that offers no caching, and the data must be retrieved and placed in memory each time the document is opened (I'll leave you to the search feature here on SO; I've posted a sample document somewhere). 2) The ideal solution is to use a separate product called Spotfire Advanced Data Services. This data federation layer can mash up data and perform advanced, custom caching based on your needs. The data is then exposed as an information link in Spotfire Server. You'll need to talk to your TIBCO sales rep about this.
1
0
0
How to use a web service as a datasource in Spotfire
2
sql-server,web-services,ironpython,spotfire
0
2016-04-10T05:33:00.000
I have a DynamoDB table called 'Table'. There are a few columns in the table, including one called 'updated'. I want to set the 'updated' field to '0' for every item, without having to provide a key, so I can avoid fetching and searching through the table. I tried batch write, but it seems that update_item requires Key inputs. How could I efficiently update the entire column so that every value is 0? I am using a Python script. Thanks a lot.
2
1
1.2
0
true
36,564,082
1
643
2
0
0
36,562,764
At this point you cannot do this; you have to pass a key (either the partition key, or the partition key and sort key) to update an item. Currently, the only way is to scan the table with a filter to find the items whose 'updated' value needs changing, collect their keys, and then update each item by key. Hopefully AWS will come up with something better in the future.
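A minimal sketch of that scan-then-update loop with boto3, assuming the table's partition key is named id (a hypothetical name) and that items already at 0 can be skipped; the #u placeholder avoids any clash with DynamoDB reserved words.

```python
import boto3
from boto3.dynamodb.conditions import Attr

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('Table')

scan_kwargs = {'FilterExpression': Attr('updated').ne(0)}
while True:
    response = table.scan(**scan_kwargs)
    for item in response['Items']:
        table.update_item(
            Key={'id': item['id']},                      # hypothetical key name
            UpdateExpression='SET #u = :zero',
            ExpressionAttributeNames={'#u': 'updated'},
            ExpressionAttributeValues={':zero': 0},
        )
    if 'LastEvaluatedKey' not in response:
        break
    scan_kwargs['ExclusiveStartKey'] = response['LastEvaluatedKey']
```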
1
0
0
DynamoDB update entire column efficiently
2
database,python-2.7,amazon-dynamodb,insert-update
0
2016-04-12T02:49:00.000
I have a dynamo table called 'Table'. There are a few columns in the table, including one called 'updated'. I want to set all the 'updated' field to '0' without having to providing a key to avoid fetch and search in the table. I tried batch write, but seems like update_item required Key inputs. How could I update the entire column to have every value as 0 efficiently please? I am using a python script. Thanks a lot.
2
0
0
0
false
36,564,129
1
643
2
0
0
36,562,764
If you can get the partition keys, then for each partition key you can update the corresponding item.
1
0
0
DynamoDB update entire column efficiently
2
database,python-2.7,amazon-dynamodb,insert-update
0
2016-04-12T02:49:00.000
I am using Python with boto3 to upload files into an S3 bucket. Boto3 supports upload_file() to create an S3 object, but this API takes a file name as an input parameter. Can we give an actual data buffer as a parameter to the upload_file() function instead of a file name? I know that we can use the put_object() function if we want to give a data buffer as a parameter to create an S3 object, but I want to use upload_file with a data buffer parameter. Is there any way around this? Thanks in advance
1
1
1.2
0
true
36,582,398
0
638
1
0
1
36,568,713
There is currently no way to use a file-like object with upload_file. put_object and upload_part do support these, though you don't get the advantage of automatic multipart uploads.
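A minimal sketch of the put_object route, assuming boto3 and placeholder bucket/key names; Body accepts bytes or a file-like object, which is exactly what upload_file will not take.

```python
import io
import boto3

s3 = boto3.client('s3')
buffer = io.BytesIO(b'data already held in memory')  # any file-like object works

s3.put_object(
    Bucket='my-example-bucket',   # hypothetical bucket
    Key='data/from-buffer.bin',   # hypothetical key
    Body=buffer,
)
```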
1
0
0
Boto3 : Can we use actual data buffer as parameter instaed of file name to upload file in s3?
1
python-2.7,boto,boto3
0
2016-04-12T09:13:00.000
I have just cloned a Django app from GitHub to a local directory. I know for a fact that the app works because I've run it on others' computers. When I run the server, I can see the site and register for an account. This works fine (I get a confirmation email). But then my login information causes an error because the database appears not to have been configured properly on my machine. I get the following errors: /Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/django/db/backends/utils.py in execute return self.cursor.execute(sql, params) ... ▶ Local vars /Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/django/db/backends/sqlite3/base.py in execute return Database.Cursor.execute(self, query, params) ... ▶ Local vars The above exception (no such table: django_session) was the direct cause of the following exception: (it then lists a bunch of problems with local vars). I tried making migrations for every part of the app but this didn't appear to fix anything.
1
0
0
0
false
36,613,541
1
315
1
0
0
36,609,201
The django_session table gets created when you run your first migrations. You said that you made your migrations, but did you run them (with python manage.py migrate)? Also, do you have django.contrib.sessions in INSTALLED_APPS in your settings file? That is the app that owns the session table.
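A minimal sketch of the settings to check, assuming a default Django project layout; after confirming this, running python manage.py migrate should create django_session along with the other built-in tables.

```python
# settings.py (only the relevant part; your own apps are hypothetical)
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',   # owns the django_session table
    'django.contrib.messages',
    'django.contrib.staticfiles',
    # 'yourapp',
]
```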
1
0
0
Problems with database after cloning Django app from Github
1
python,django,git,github
0
2016-04-13T20:42:00.000
I need help from someone who has got Apache, Python, and cx_Oracle (the library for accessing an Oracle database from Python) working together. Even after setting all the required variables I am still getting the error "libclntsh.so.11.1: cannot open shared object file: No such file or directory" when running the Python script through Apache. The same script works perfectly fine when run from the CLI. My working environment is RHEL 6.4. Any help in this matter would be appreciated from those who have got this working in their environment. Thanks in advance.
0
0
1.2
0
true
36,711,130
0
2,034
1
0
0
36,655,812
I was able to solve this with the help of Apache's mod_env module, by passing the environment variables to Apache natively. What I did was: first, define the required environment variables in the file /etc/sysconfig/httpd, e.g. LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/folder_with_library/ followed by export LD_LIBRARY_PATH. Then pass this variable through in the httpd.conf file with PassEnv LD_LIBRARY_PATH. Hope this helps.
1
0
0
libclntsh.so.11.1: cannot open shared object file python error while running CGIusing cx_Oracle
2
python,apache,cx-oracle
0
2016-04-15T19:56:00.000
Which is more efficient? Is there a downside to using open() -> write() -> close() compared to using logger.info()? PS: We are accumulating query logs for a university, so there's a chance this becomes big data soon (the daily volume of query logs ranges from 3 GB to 9 GB, and the system will run 24/7 for its whole lifetime). It would be appreciated if you could explain and differentiate in detail the time efficiency and error-proneness of each approach.
3
0
0
0
false
36,819,582
0
1,554
2
0
0
36,819,540
It is always better to use a built-in facility unless you are facing issues with it. So use the built-in logging module: it is proven, tested, and very flexible, giving you handlers, formatting, log levels, and rotation that you cannot easily achieve with open() -> f.write() -> close().
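A minimal sketch of the logging setup, assuming the standard library's RotatingFileHandler so multi-gigabyte days do not produce one unbounded file; the file name, size limit, and backup count are placeholders to tune.

```python
import logging
from logging.handlers import RotatingFileHandler

logger = logging.getLogger('query_log')
logger.setLevel(logging.INFO)

handler = RotatingFileHandler(
    'queries.log',
    maxBytes=500 * 1024 * 1024,  # start a new file at ~500 MB
    backupCount=50,              # keep the 50 most recent files
)
handler.setFormatter(logging.Formatter('%(asctime)s %(message)s'))
logger.addHandler(handler)

logger.info('query received: %s', 'SELECT * FROM courses WHERE term = :term')
```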
1
0
0
Python logging vs. write to file
2
python,logging,file-writing,bigdata
0
2016-04-24T05:15:00.000