Dataset schema (each record below is a single pipe-separated row in this column order):
Question (string) | Q_Score (int64) | Users Score (int64) | Score (float64) | Data Science and Machine Learning (int64) | is_accepted (bool) | A_Id (int64) | Web Development (int64) | ViewCount (int64) | Available Count (int64) | System Administration and DevOps (int64) | Networking and APIs (int64) | Q_Id (int64) | Answer (string) | Database and SQL (int64) | GUI and Desktop Applications (int64) | Python Basics and Environment (int64) | Title (string) | AnswerCount (int64) | Tags (string) | Other (int64) | CreationDate (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
I'm using fixtures with SQLAlchemy to create some integration tests.
I'd like to put SQLAlchemy into a "never commit" mode to prevent changes ever being written to the database, so that my tests are completely isolated from each other. Is there a way to do this?
My initial thoughts are that perhaps I could replace Session.commit with a mock object; however I'm not sure if there are other things that might have the same effect that I also need to mock if I'm going to go down that route. | 3 | 3 | 1.2 | 0 | true | 23,702,417 | 0 | 838 | 1 | 0 | 0 | 23,394,785 | The scoped session manager will by default return the same session object for each connection. Accordingly, one can replace .commit with .flush, and have that change persist across invocations to the session manager.
That will prevent commits.
To then rollback all changes, one should use session.transaction.rollback(). | 1 | 0 | 0 | sqlalchemy - force it to NEVER commit? | 1 | python,testing,sqlalchemy | 0 | 2014-04-30T17:48:00.000 |
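A minimal sketch of the commit-to-flush swap described above, using a plain sessionmaker rather than a scoped session (an assumption; adapt it to however your app builds its sessions):

    import sqlalchemy
    from sqlalchemy.orm import sessionmaker

    # Assumed test setup -- adjust the engine URL and session factory to your project.
    engine = sqlalchemy.create_engine("sqlite:///:memory:")
    Session = sessionmaker(bind=engine)

    def make_never_commit_session():
        """Return a session whose commit() only flushes, so nothing is made permanent."""
        session = Session()
        session.commit = session.flush   # redirect commit to flush on this instance
        return session

    # In a test:
    session = make_never_commit_session()
    # ... exercise fixtures / code under test; commit() calls only flush ...
    session.rollback()                    # throw away everything at the end
    session.close()
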
I am writing a small web application using Flask and I have to use DynamoDB as backend for some hard requirements.
I went through the tutorial on the Flask website without establishing an SQLite connection. All data were pulled directly from DynamoDB and it seemed to work.
Since I am new to web development in general and Flask framework, do you see any problems with this approach? | 0 | 2 | 0.379949 | 0 | false | 23,512,512 | 1 | 637 | 1 | 0 | 0 | 23,510,212 | No. SQLite is just one option for backend storage. SQLite is mentioned in the tutorial only for its simplicity in getting something working fast and simply on a typical local developers environment. (No db to or service to install/configure etc.) | 1 | 0 | 0 | Use Flask with Amazon DynamoDB without SQLite | 1 | python,flask,amazon-dynamodb | 0 | 2014-05-07T06:21:00.000 |
I have a mysql database with some huge tables, i have a task that I must run three queries one after another and the last one exports to the outfile.csv.
i.e.
Query 1. Select values from some tables with certain parameters, then write them into a new table. Approx. 4.5 hours.
Query 2. After the first one is done, use the new table joined with another to get the results into a new table, then write to outfile.csv. Approx. 2 hours.
How do I manage to automatically call these queries one after another, even though one can take 4 hours to finish?
I am open to any solution, Scripts, or database functions. I am running on ubuntu server so, no graphical solutions.
Thanks for your help. | 0 | 2 | 1.2 | 0 | true | 23,529,243 | 0 | 121 | 1 | 0 | 0 | 23,529,212 | you can just separate the queries with a semi-column and run them as a batch. | 1 | 0 | 0 | How to automatically run chain multiple mysql queries | 1 | scripting,mysql-python | 0 | 2014-05-07T21:56:00.000 |
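Besides batching the statements in the MySQL client, another way is to run them back to back from a Python script, which simply blocks until each statement returns, however long it takes; schedule the script with cron. All table, column, and file names below are placeholders, not taken from the question:

    import MySQLdb

    conn = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="mydb")
    cur = conn.cursor()

    # Query 1: build the intermediate table (may run for hours; the script just waits).
    cur.execute("CREATE TABLE step1_results AS "
                "SELECT id, amount FROM big_table WHERE amount > 100")

    # Query 2: join and export to CSV on the MySQL server host.
    cur.execute("""
        SELECT s.id, s.amount, o.label
        INTO OUTFILE '/tmp/outfile.csv'
        FIELDS TERMINATED BY ',' ENCLOSED BY '"'
        LINES TERMINATED BY '\\n'
        FROM step1_results s JOIN other_table o ON o.id = s.id
    """)

    conn.commit()
    conn.close()
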
I have a server with a database, the server will listen for http requests, and using JSONs for
data transferring.
Currently, what my server code (Python) mainly does is read the JSON file, convert it to SQL and make some modifications to the database. The function of the server, as I see it, is only to act as a converter between JSON and SQL. Is this the right procedure that people usually follow?
Then I come up with another idea, I can define a class for user information and every user's information in the database is an instance of that class. When I get the JSON file, first I put it into the instance, do some operation and then put it into the database. In my understanding, it adds a language layer between the http request and the database.
The question is, what do people usually do? | 0 | 0 | 0 | 0 | false | 23,593,850 | 0 | 46 | 2 | 0 | 0 | 23,593,618 | People usually make use of a Web framework, instead of implementing the basic machinery themselves as you are doing.
That is: Python is a great language that easily allows one to translate "JSON into SQL" with a small amount of code - and it is great for learning. If you are doing this for educational purposes, it is a nice project to continue fiddling with, and maybe you can pick up some nice ideas in this answer and in others.
But for "real world" usage, the apparently simple plan runs into real-world issues. Tens or even hundreds of them. How to properly separate HTML/CSS templates from content from core logic, how to deal with many, many aspects of security, etc...
Then the frameworks come into play: a web-framework project is a software project that has had, over the years, hundreds of hours of work from several contributors who thought about, and addressed, all of the issues a real web application can and will face.
So, it is OK if one wants to do everything from scratch if he believes he can come up with a framework that has distinguished capabilities not available in any existing project. And it is OK to make small projects for learning purposes. It is not OK to try to come up with something from scratch for real server production without having a deep knowledge of all the issues involved, and knowing well at least 3 or 4 frameworks.
So, now, you've got the basic understanding of a way to get to a framework - it istime to learn some of the frameworks tehmselves. Try, for example, Bottle and Flask (microframeworks), and Django (a fully featured framework for web application development), maybe Tornado (an http server, but with enough of a web framework in it to be usable, and to be very instructive)- just reading the documentation on "how to get started" with these projects, to get to a "hello world" page will lead you to lots of concepts you probably had not thought about yet. | 1 | 0 | 1 | How do people usually operate information on server database? | 2 | python,sql,rdbms | 0 | 2014-05-11T14:11:00.000 |
I have a server with a database, the server will listen for http requests, and using JSONs for
data transferring.
Currently, what my server code (Python) mainly does is read the JSON file, convert it to SQL and make some modifications to the database. The function of the server, as I see it, is only to act as a converter between JSON and SQL. Is this the right procedure that people usually follow?
Then I come up with another idea, I can define a class for user information and every user's information in the database is an instance of that class. When I get the JSON file, first I put it into the instance, do some operation and then put it into the database. In my understanding, it adds a language layer between the http request and the database.
The question is, what do people usually do? | 0 | 0 | 0 | 0 | false | 23,593,774 | 0 | 46 | 2 | 0 | 0 | 23,593,618 | The answer is: people do usually that, what they need to do. The layer between database and client normally provides a higher level api, to make the request independent from the actual database. But how this higher level looks like depends on the application you have. | 1 | 0 | 1 | How do people usually operate information on server database? | 2 | python,sql,rdbms | 0 | 2014-05-11T14:11:00.000 |
Is there an option in psycopg2 (in the connect() method) similar to psql -w (never issue a password prompt) and -W (force psql to prompt for a password before connecting to a database)? | 2 | 5 | 1.2 | 0 | true | 23,606,436 | 0 | 1,041 | 1 | 0 | 0 | 23,606,102 | psycopg2 will never prompt for a password - that's a feature of psql, not of the underlying libpq that both psql and psycopg2 use. There's no equvialent of -w / -W because there's no password prompt feature to turn on/off.
If you want to prompt for a password you must do it yourself in your code: trap the exception thrown when authentication fails because a password is required, prompt the user for a password, and reconnect using the password. That's what psql does anyway, if you take a look at the sources. | 1 | 0 | 0 | Is there a way to ask the user for a password or not with psycopg2? | 1 | python,postgresql,psycopg2 | 0 | 2014-05-12T10:02:00.000 |
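A sketch of the prompt-and-retry flow the answer describes (connection parameters are placeholders):

    import getpass
    import psycopg2

    def connect(dbname, user, host):
        try:
            # First try without a password (like psql -w would).
            return psycopg2.connect(dbname=dbname, user=user, host=host)
        except psycopg2.OperationalError:
            # Authentication failed / password required: prompt and retry (like psql -W).
            password = getpass.getpass("Password for %s: " % user)
            return psycopg2.connect(dbname=dbname, user=user, host=host, password=password)

    conn = connect("mydb", "me", "localhost")
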
I have a question about MySQL and the Python MySQLdb library:
Suppose I'd like to insert a bulk of records into the DB. When I'm inserting, there may be duplicates among the records in the bulk, or a record in the bulk may be a duplicate of a record already in the table. In case of duplicates, I'd like to ignore the duplication and just insert the rest of the bulk. In case of a duplication within the bulk, I'd like only one of the records (any of them) to be inserted, and the rest of the bulk to also be inserted.
How can I write it in MySQL syntax? And is it built-in in MySQLdb library in Python? | 1 | 0 | 0 | 0 | false | 23,636,177 | 0 | 289 | 1 | 0 | 0 | 23,636,068 | Step 1 - Make your initial insert into a staging table.
Step 2 - deal with duplicate records and all other issues.
Step 3 - write to your real tables from the staging table. | 1 | 0 | 0 | Bulk insert into MySQL on duplicate | 1 | python,mysql,sql,mysql-python | 0 | 2014-05-13T15:56:00.000 |
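If you would rather skip the staging table, MySQL's INSERT IGNORE (or ON DUPLICATE KEY UPDATE) gives the "ignore duplicates" behaviour directly; a sketch with MySQLdb, where the table and column names are placeholders and a UNIQUE/PRIMARY KEY defines what counts as a duplicate:

    import MySQLdb

    rows = [("a", 1), ("a", 1), ("b", 2)]   # duplicates within the bulk are fine

    conn = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="mydb")
    cur = conn.cursor()

    # The first occurrence is inserted; later duplicates (in the bulk or the table) are skipped.
    cur.executemany(
        "INSERT IGNORE INTO my_table (name, value) VALUES (%s, %s)",
        rows,
    )
    conn.commit()
    conn.close()
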
I have been going through the Google App Engine documentation (Python) now and found two different types of storage.
NDB Datastore
DB Datastore
Both quota limits (free) seem to be the same, and so does their database design. However, NDB automatically caches data in Memcache!
I am actually wondering when to use which storage? What are the general practices regarding this?
Can I completely rely on NDB and ignore DB? How should it be done?
I have been using Django for a while and read that in Django-nonrel the JOIN operations can be somehow done in NDB! and rest of the storage is used in DB! Why is that? Both storages are schemaless and pretty well use same design.. How is that someone can tweak JOIN in NDB and not in DB? | 2 | 5 | 1.2 | 0 | true | 23,646,875 | 1 | 568 | 1 | 1 | 0 | 23,645,572 | In simple words these are two versions of datastore . db being the older version and ndb the newer one. The difference is in the models, in the datastore these are the same thing. NDB provides advantages like handling caching (memcache) itself. and ndb is faster than db. so you should definitely go with ndb. to use ndb datastore just use ndb.Model while defining your models | 1 | 0 | 0 | App Engine: Difference between NDB and Datastore | 1 | python,django,google-app-engine | 0 | 2014-05-14T04:19:00.000 |
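For reference, a minimal NDB model definition (the kind and property names here are invented for illustration):

    from google.appengine.ext import ndb

    class Person(ndb.Model):
        name = ndb.StringProperty()
        age = ndb.IntegerProperty()

    # Writes go through ndb, which also handles the memcache layer for you.
    key = Person(name="Alice", age=30).put()
    person = key.get()   # served from cache when possible
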
I just wanted to know if it is possible to insert the contents of a local tabulated html file into an Excel worksheet using xlsxwriter? Manually it works fine by just dragging and dropping the file into Excel and the formatting is clear, but I can't find any information on inserting file contents into Excel using xlsxwriter. I can only find information related to inserting an image into Excel.
Many thanks for reading this,
MikG | 0 | 1 | 1.2 | 0 | true | 23,674,407 | 0 | 448 | 1 | 0 | 0 | 23,673,433 | No, such functionality is not what xlsxwriter offers.
This package is able to write Excel files, but the HTML import you describe uses MS Excel GUI functionality, and since MS Excel is not a requirement of xlsxwriter, do not expect it to be present.
On the other hand, you could play with Python to do the conversion of HTML to spreadsheet data yourself, it is definitely not drag and drop solution, but for repetitive tasks this can be much more productive at the end. | 1 | 0 | 0 | Using Python's xlsxwriter to drop a tabulated html file into worksheet - is this possible? | 1 | python,xlsxwriter | 0 | 2014-05-15T08:50:00.000 |
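As a rough sketch of that do-it-yourself route: once you have extracted the table rows from the HTML by whatever means (the rows below are hard-coded stand-ins), writing them out with xlsxwriter is straightforward:

    import xlsxwriter

    # Stand-in for rows parsed out of the tabulated HTML file.
    rows = [
        ["Name", "Score"],
        ["Alice", 10],
        ["Bob", 7],
    ]

    workbook = xlsxwriter.Workbook("output.xlsx")
    worksheet = workbook.add_worksheet()
    for row_num, row in enumerate(rows):
        worksheet.write_row(row_num, 0, row)   # one table row per worksheet row
    workbook.close()
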
I have an app which will work in multiple timezones. I need the app to be able to say "Do this at 10 PM, 12 April, 2015, in the America/New York timezone and repeat every 30 days." (And similar).
If I get a datetime from the user in PST, should I be storing the datetime in DB after converting to UTC?
Pros: Easier to manage, every datetime in DB is in UTC.
Cons: Can not take DST in account. (I think). | 0 | 1 | 0.099668 | 0 | false | 23,676,984 | 0 | 158 | 1 | 0 | 0 | 23,676,718 | Yes, you should store everything in your db in UTC.
I don't know why you say you won't be able to cope with DST. On the contrary, any good timezone library - such as pytz - is quite capable of translating UTC to the correct time in any timezone, taking DST into account. | 1 | 0 | 0 | Convert datetime to UTC before storing in DB? | 2 | python,django,datetime,timezone | 0 | 2014-05-15T11:19:00.000 |
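A small example of converting a user-supplied local time to UTC before saving, and back again for display, assuming pytz is available:

    from datetime import datetime
    import pytz

    ny = pytz.timezone("America/New_York")

    # A naive datetime entered by the user in their own timezone.
    local_dt = ny.localize(datetime(2015, 4, 12, 22, 0))   # DST-aware
    utc_dt = local_dt.astimezone(pytz.utc)                 # store this value

    # Later, convert back for display in the user's timezone.
    display_dt = utc_dt.astimezone(ny)
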
My web app asks users 3 questions and simple writes that to a file, a1,a2,a3. I also have real time visualization of the average of the data (reads real time from file).
Must I use a database to ensure that no/minimal information is lost? Is it possible to produce a queue of reads/writes? (Since the files are small, I am not too worried about the execution time of each call.) Does Python/Flask already take care of this?
I am quite experienced in python itself, but not in this area(with flask). | 0 | 0 | 0 | 0 | false | 23,703,319 | 1 | 31 | 1 | 0 | 0 | 23,703,135 | I see a few solutions:
read /dev/urandom a few times, calculate sha-256 of the number and use it as a file name; collision is extremely improbable
use Redis and command like LPUSH, using it from Python is very easy; then RPOP from right end of the linked list, there's your queue | 1 | 0 | 0 | Is it possible to make writing to files/reading from files safe for a questionnaire type website? | 1 | python,flask | 0 | 2014-05-16T19:24:00.000 |
Hi I am trying to write python functional tests for our application. It involves several external components and we are mocking them all out.. We have got a better framework for mocking a service, but not for mocking a database yet.
sqlite is very lite and thought of using them but its a serverless, is there a way I can write some python wrapper to make it a server or I should look at other options like HSQL DB? | 0 | -1 | -0.197375 | 0 | false | 23,744,831 | 0 | 535 | 1 | 0 | 0 | 23,744,128 | I don't understand your problem. Why do you care that it's serverless?
My standard technique for this is:
use SQLAlchemy
in tests, configure it with sqlite:/// or sqlite:///:memory: | 1 | 0 | 0 | how to do database mocking or make sqlite run on localhost? | 1 | python,sqlite,unit-testing | 0 | 2014-05-19T17:52:00.000 |
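Concretely, that test configuration is a one-liner with SQLAlchemy (a sketch; the model metadata object is an assumption):

    from sqlalchemy import create_engine
    from sqlalchemy.orm import sessionmaker

    # In-memory database: nothing touches disk and every test run starts clean.
    engine = create_engine("sqlite:///:memory:")
    Session = sessionmaker(bind=engine)

    # If you use declarative models, create the schema for the test run:
    # Base.metadata.create_all(engine)
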
I am working on my Python project using PySide as my Ui language. My projet is a game which require an internet connection to update the users'score and store in the database.
My problem is how to store my database on the internet. I mean that all users should be able to access this information when they are connected to the internet (when they are playing my game), and the information/database must be updated all the time.
I am not sure which database is the most appropriate, how to store this information/database in the internet, how to access this information.
I am using Python and PySide.
For the database, I currently use PySide.QtSql .
Thank you for answer(s) or suggestion(s). | 0 | 0 | 0 | 0 | false | 23,754,331 | 0 | 193 | 1 | 0 | 0 | 23,754,108 | I'm not familiar with PySide .. but the idea is
you need to build a function that when internet connection is available it should synchronize your local database with online database and in the server-side you need to build a script that can handle requests ( POST / GET ) to receive the scores and send it to database and I suggest MySQL ..
Hope that helps | 1 | 1 | 0 | Using Database with Pyside and Socket | 1 | python,database,sockets,pyside,qtsql | 0 | 2014-05-20T07:59:00.000 |
I need to develop an A/B testing method for my users. Basically I need to split my users into a number of groups - for example 40% and 60%.
I have around 1,000,00 users and I need to know what would be my best approach. Random numbers are not an option because the users will get different results each time. My second option is to alter my database so each user will have a predefined number (randomly generated). The negative side is that if I get 50 for example, I will always have that number unless I create a new user. I don't mind but I'm not sure that altering the database is a good idea for that purpose.
Are there any other solutions so I can avoid that? | 2 | 3 | 0.148885 | 0 | false | 23,846,676 | 0 | 1,690 | 2 | 0 | 0 | 23,846,617 | Run a simple algorithm against the primary key. For instance, if you have an integer for user id, separate by even and odd numbers.
Use a mod function if you need more than 2 groups. | 1 | 0 | 0 | Algorithm for A/B testing | 4 | python,mysql,python-2.7 | 0 | 2014-05-24T15:16:00.000 |
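A sketch of deterministic bucketing on the user id, so a user always lands in the same group without storing anything new; hashing the id spreads users more evenly than a plain even/odd split, and the 40/60 proportion from the question is just a parameter:

    import hashlib

    def ab_group(user_id, percent_a=40):
        """Return 'A' or 'B' deterministically for a given user id."""
        digest = hashlib.md5(str(user_id).encode("utf-8")).hexdigest()
        bucket = int(digest, 16) % 100          # stable value in 0..99
        return "A" if bucket < percent_a else "B"

    print(ab_group(12345))   # same result on every call
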
I need to develop an A/B testing method for my users. Basically I need to split my users into a number of groups - for example 40% and 60%.
I have around 1,000,00 users and I need to know what would be my best approach. Random numbers are not an option because the users will get different results each time. My second option is to alter my database so each user will have a predefined number (randomly generated). The negative side is that if I get 50 for example, I will always have that number unless I create a new user. I don't mind but I'm not sure that altering the database is a good idea for that purpose.
Are there any other solutions so I can avoid that? | 2 | 0 | 0 | 0 | false | 23,846,772 | 0 | 1,690 | 2 | 0 | 0 | 23,846,617 | I would add an auxiliary table with just userId and A/B. You do not change existent table and it is easy to change the percentage per class if you ever need to. It is very little invasive. | 1 | 0 | 0 | Algorithm for A/B testing | 4 | python,mysql,python-2.7 | 0 | 2014-05-24T15:16:00.000 |
I'm installing MySQL 5.6 Community Edition using MySQL installer and everything was installed properly except for "Connector/Python 2.7 1.1.6".
Upon mousing over, I get the error message "The product requires Python 2.7 but it was not detected on this machine. Python 2.7 requires manual installation and must be installed prior to installing this product"
The problem is, I have Python 2.7 installed in C: already and I can't seem to direct this detection towards where I have Python 2.7.
(I am using Windows 8) | 0 | 0 | 0 | 0 | false | 23,866,951 | 0 | 50 | 1 | 0 | 0 | 23,866,874 | Check if there is any install path dependencies on your installer to figure out if your python is in the right place.
But I recommend that you install the connector you want manually.
P.S. Are you sure you need the connector? Or you just saw the error and assumed you need it? | 1 | 0 | 0 | Connector installation error (MySQL installer) | 1 | mysql,django,python-2.7 | 0 | 2014-05-26T09:24:00.000 |
I was wondering in which cases do I need to restart the database server in Django on production. Whether it is Postgres/MySQL, I was just thinking do we need to restart the database server at all. If we need to restart it, when and why do we need to restart the server?
Any explanation would be really helpful, Cheers! | 0 | -1 | -0.066568 | 0 | false | 23,920,627 | 1 | 237 | 3 | 0 | 0 | 23,920,481 | Usually when the settings that are controlling the application are changed then the server has to be restarted. | 1 | 0 | 0 | When do I need to restart database server in Django? | 3 | python,mysql,django,postgresql | 0 | 2014-05-28T19:41:00.000 |
I was wondering in which cases do I need to restart the database server in Django on production. Whether it is Postgres/MySQL, I was just thinking do we need to restart the database server at all. If we need to restart it, when and why do we need to restart the server?
Any explanation would be really helpful, Cheers! | 0 | 2 | 0.132549 | 0 | false | 23,920,963 | 1 | 237 | 3 | 0 | 0 | 23,920,481 | You will not NEED to restart your database in production due to anything you've done in Django. You may need to restart it to change your database security or configuration settings, but that has nothing to do with Django and in a lot of cases doesn't even need a restart. | 1 | 0 | 0 | When do I need to restart database server in Django? | 3 | python,mysql,django,postgresql | 0 | 2014-05-28T19:41:00.000 |
I was wondering in which cases do I need to restart the database server in Django on production. Whether it is Postgres/MySQL, I was just thinking do we need to restart the database server at all. If we need to restart it, when and why do we need to restart the server?
Any explanation would be really helpful, Cheers! | 0 | 1 | 1.2 | 0 | true | 23,920,777 | 1 | 237 | 3 | 0 | 0 | 23,920,481 | You shouldn't really ever need to restart the database server.
You probably do need to restart - or at least reload - the web server whenever any of the code changes. But the db is a separate process, and shouldn't need to be restarted. | 1 | 0 | 0 | When do I need to restart database server in Django? | 3 | python,mysql,django,postgresql | 0 | 2014-05-28T19:41:00.000 |
I am using pyhs2 as hive client. The sql statement with ‘where’ clause was not recognized. Got
'pyhs2.error.Pyhs2Exception: 'Error while processing statement:
FAILED: Execution Error, return code 1 from
org.apache.hadoop.hive.ql.exec.mr.MapRedTask'
But it runs ok in hive shell. | 1 | 3 | 0.53705 | 0 | false | 23,984,349 | 0 | 945 | 1 | 0 | 0 | 23,971,667 | Fixed! It was due to permission on remote server. Changed user in connect statement from 'root' to 'hdfs' solved the problem. | 1 | 0 | 0 | python hive client pyhs2 does not recognize 'where' clause in sql statement | 1 | python,sql,client,hive | 0 | 2014-05-31T15:24:00.000 |
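For reference, a pyhs2 connection using the 'hdfs' user, roughly as the fix describes; the host, port, and auth settings below are assumptions based on pyhs2's documented usage, not values from the question:

    import pyhs2

    with pyhs2.connect(host="my-hive-host",
                       port=10000,
                       authMechanism="PLAIN",
                       user="hdfs",
                       password="",
                       database="default") as conn:
        with conn.cursor() as cur:
            cur.execute("SELECT * FROM my_table WHERE some_col > 10")
            for row in cur.fetch():
                print(row)
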
So in my spare time, I've been developing a piece of network monitoring software that essentially can be installed on a bunch of clients, and the clients report data back to the server(RAM/CPU/Storage/Network usage, and the like). For the administrative console as well as reporting, I've decided to use Django, which has been a learning experience in itself.
The Clients report to the Server asynchronously, with whatever data they happen to have(As of right now, it's just received and dumped, not stored in a DB). I need to access this data in Django. I have already created the models to match my needs. However, I don't know how to go about getting the actual data into the django DB safely.
What is the way to go about doing this? I thought of a few options, but they all had some drawbacks:
Give the Django app a reference to the Server, and just start a thread that continuously checks for new data and writes it to the DB.
Have the Server access the Django DB directly, and write it's data there.
The problem with 1 is that I'm even more tightly coupling the server with the Django app, but the upside is that I can use the ORM to write the data nicely.
The problem with 2 is that I can't use the ORM to write data, and I'm not sure if it could cause blocking on the DB in some cases.
Is there some obvious good option I'm missing? I'm sorry if this question is vague. I'm very new to Django, and I don't want to write myself into a corner. | 2 | 1 | 0.099668 | 0 | false | 23,987,194 | 1 | 849 | 1 | 0 | 0 | 23,987,050 | I chose option 1 when I set up my environment, which does much of the same stuff.
I have a JSON interface that's used to pass data back to the server. Since I'm on a well-protected VLAN, this works great. The biggest benefit, like you say, is the Django ORM. A simple address call with proper data is all that's needed. I also think this is the simplest method.
The "blocking on the DB" issue should be non-existent. I suppose that it would depend on the DB backend, but really, that's one of the benefits of a DB. For example, a single-threaded file-based sqlite instance may not work.
I keep things in Django as much as I can. This could also help with DB security/integrity, since it's only ever accessed in one place. If your client accesses the DB directly, you'll need to ship username/password with the Client.
My recommendation is to go with 1. It will make your life easier, with fewer lines of code. Besides, as long as you code Client properly, it should be easy to modify DB access later on down the road. | 1 | 0 | 0 | How to access Django DB and ORM outside of Django | 2 | python,django,orm | 0 | 2014-06-02T03:41:00.000 |
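A bare-bones sketch of option 1's client side: the monitoring client just POSTs its stats as JSON to a Django view, which validates and saves them through the ORM. The URL and field names are made up for illustration, and the requests library is assumed:

    import json
    import requests

    stats = {"host": "client-42", "cpu": 12.5, "ram": 48.0, "disk": 71.2}

    resp = requests.post(
        "https://example.com/api/stats/",          # Django view that saves via the ORM
        data=json.dumps(stats),
        headers={"Content-Type": "application/json"},
    )
    resp.raise_for_status()
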
I'm curious if there's a way to insert a new row (that would push all the existing rows down) in an existing openpyxl worksheet? (I'm looking to insert at the first row if it helps)
I looked through all the docs and didn't see anything mentioned. | 1 | 0 | 0 | 0 | false | 24,012,565 | 0 | 5,098 | 1 | 0 | 0 | 24,006,376 | This is currently not directly possible in openpyxl because it would require the reassigning of all cells below the new row.
You can do it yourself by iterating through the relevant rows (starting at the end) and writing a new row with the values of the previous row. Then you create a row of cells where you wan them. | 1 | 0 | 0 | How to insert a row in openpyxl | 1 | python,openpyxl | 0 | 2014-06-03T02:53:00.000 |
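A rough sketch of that manual shift for freeing up the first row (cell values only; styles are ignored, and depending on your openpyxl version some names differ, e.g. get_highest_row() in older releases). Newer openpyxl releases also provide ws.insert_rows(), which makes this unnecessary:

    from openpyxl import load_workbook

    wb = load_workbook("data.xlsx")
    ws = wb.active

    # Walk from the bottom up, copying each row's values one row down.
    for row in range(ws.max_row, 0, -1):
        for col in range(1, ws.max_column + 1):
            ws.cell(row=row + 1, column=col).value = ws.cell(row=row, column=col).value

    # Row 1 is now free to overwrite with the new data.
    for col in range(1, ws.max_column + 1):
        ws.cell(row=1, column=col).value = None

    wb.save("data.xlsx")
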
I have a "local" Oracle database in my work network. I also have a website at a hosting service.
Can I connect to the 'local' Oracle database from the hosting service? Or does the Oracle database need to be at the same server as my website?
At my work computer I can connect to the Oracle database with a host name, username, password, and port number. | 0 | 1 | 0.197375 | 0 | false | 24,016,200 | 0 | 490 | 1 | 0 | 0 | 24,015,758 | It can depend on how your hosting is setup, but if it is allowed you will need the following.
Static IP, or Dynamic DNS setup so your home server can be found regularly.
Port forwarding on your router to allow traffic to reach the server.
The willingness to expose your home systems to the dangers of the internet
Strictly speaking a static IP/Dynamic DNS setup is not required, but if you don't use that kind of setup, you will have to change the website configuration every time your home system changes IPs, the frequency of which depends on your ISP. It's also worth noting that many ISP's consider running servers on your home network a violation of the terms of service for residential customers, but in practice as long as you aren't generating too much traffic, it's not usually an issue.
With Port forwarding on your router, you can specify traffic incoming on a particular port be redirected to a specific internal address:port on your network, (e.g. myhomesystem.com:12345 could be redirected to 192.168.1.5:1521)
Once those are in place, you can use the static IP, or the Dynamic DNS entry as the hostname to connect to. | 1 | 0 | 0 | Connect to local Oracle database from online website | 1 | python,database,django,oracle,database-connection | 0 | 2014-06-03T12:54:00.000 |
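For completeness, once the DNS entry and port forwarding are in place, the website code connects just as it would to a local database; a hedged cx_Oracle sketch with placeholder hostname, SID, and credentials:

    import cx_Oracle

    # Hostname is your static IP / dynamic-DNS name; port is whatever you forwarded.
    dsn = cx_Oracle.makedsn("myhome.dyndns.example", 1521, "ORCL")
    conn = cx_Oracle.connect("scott", "tiger", dsn)

    cur = conn.cursor()
    cur.execute("SELECT sysdate FROM dual")
    print(cur.fetchone())
    conn.close()
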
I am using Flask to make a small webapp to manage a group project, in this website I need to manage attendances, and also meetings reports. I don't have the time to get into SQLAlchemy, so I need to know what might be the bad things about using CSV as a database. | 3 | -1 | -0.049958 | 0 | false | 64,239,216 | 1 | 3,024 | 2 | 0 | 0 | 24,072,231 | I am absolutely baffled by how many people discourage using CSV as an database storage back-end format.
Concurrency: There is NO reason why CSV can not be used with concurrency. Just like how a database thread can write to one area of a binary file at the same time that another thread writes to another area of the same binary file. Databases can do EXACTLY the same thing with CSV files. Just as a journal is used to maintain the atomic nature of individual transactions, the same exact thing can be done with CSV.
Speed: Why on earth would a database read and write a WHOLE file at a time, when the database can do what it does for ALL other database storage formats, look up the starting byte of a record in an index file and SEEK to it in constant time and overwrite the data and comment out anything left over and record the free space for latter use in a separate index file, just like a database could zero out the bytes of any unneeded areas of a binary "row" and record the free space in a separate index file... I just do not understand this hostility to non-binary formats, when everything that can be done with one format can be done with the other... everything, except perhaps raw binary data compression, depending on the particular CSV syntax in use (special binary comments... etc.).
Emergency access: The added benefit of CSV is that when the database dies, which inevitably happens, you are left with a CSV file that can still be accessed quickly in the case of an emergency... which is the primary reason I do not EVER use binary storage for essential data that should be quickly accessible even when the database breaks due to incompetent programming.
Yes, the CSV file would have to be re-indexed every time you made changes to it in a spread sheet program, but that is no different than having to re-index a binary database after the index/table gets corrupted/deleted/out-of-sync/etc./etc.. | 1 | 0 | 0 | Somthing wrong with using CSV as database for a webapp? | 4 | python,csv,web-applications,flask | 0 | 2014-06-06T00:11:00.000 |
I am using Flask to make a small webapp to manage a group project, in this website I need to manage attendances, and also meetings reports. I don't have the time to get into SQLAlchemy, so I need to know what might be the bad things about using CSV as a database. | 3 | 1 | 0.049958 | 0 | false | 47,320,760 | 1 | 3,024 | 2 | 0 | 0 | 24,072,231 | I think there's nothing wrong with that as long as you abstract away from it. I.e. make sure you have a clean separation between what you write and how you implement i . That will bloat your code a bit, but it will make sure you can swap your CSV storage in a matter of days.
I.e. pretend that you can persist your data as if you're keeping it in memory. Don't write "openCSVFile" in your Flask app. Use initPersistence(). Don't write "csvFile.appendRecord()". Use "persister.saveNewReport()". When and if you actually find CSV to be a bottleneck, you can just write a new persister plugin.
There are added benefits like you don't have to use a mock library in tests to make them faster. You just provide another persister. | 1 | 0 | 0 | Somthing wrong with using CSV as database for a webapp? | 4 | python,csv,web-applications,flask | 0 | 2014-06-06T00:11:00.000 |
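A tiny sketch of that abstraction; the names follow the answer's own placeholders, and the report fields are invented for illustration:

    import csv

    class CsvPersister(object):
        """CSV-backed implementation; swap for a database-backed one later."""
        def __init__(self, path):
            self.path = path

        def save_new_report(self, report):
            with open(self.path, "a") as f:
                csv.writer(f).writerow([report["user"], report["a1"], report["a2"], report["a3"]])

    def init_persistence():
        return CsvPersister("reports.csv")

    # Application code only ever talks to the persister interface:
    persister = init_persistence()
    persister.save_new_report({"user": "bob", "a1": "x", "a2": "y", "a3": "z"})
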
I have read a few posts on how to enable remote login to mysql. My question is: is this a safe way to access data remotely?
I have a my sql db located at home (on Ubuntu 14.04) that I use for research purposes. I would like to run python scripts from my Macbook at work. I was able to remote login from my old windows OS using workbench connection (DNS ip). However the OS change has got me thinking what is the best/most secure way to accomplish this task? | 0 | 0 | 1.2 | 0 | true | 24,112,483 | 0 | 49 | 1 | 0 | 0 | 24,112,422 | Use ssh to login to your home computer, setup authorized keys for it and disable password login. setup iptables on your linux machine if you don't have a firewall on your router, and disable traffic on all ports except 80 and 22 (ssh and internet). That should get you started. | 1 | 0 | 0 | Best way to access data from mysql db on other non-local machines | 1 | python,mysql,ruby-on-rails | 0 | 2014-06-09T01:02:00.000 |
I'm trying to use the new excel integration module xlwings
It works like a charm under Anaconda 2.0 for python 2.7
but I'm getting this error under Anaconda 2.0 for python 3.4
the xlwings file does contain class Workbook so I don't understand why it can't import it
when I simply use the xlwings file in my project for 3.4 it works just fine
File "C:\Users\xxxxx\AppData\Local\Continuum\Anaconda3\lib\site-packages\xlwings__init__.py", line 1, in
from xlwings import Workbook, Range, Chart, __version__
ImportError: cannot import name 'Workbook' | 4 | 3 | 1.2 | 0 | true | 24,122,113 | 0 | 2,145 | 1 | 0 | 0 | 24,121,692 | In "C:\Users\xxxxx\AppData\Local\Continuum\Anaconda3\lib\site-packages\xlwings__init__.py"
Try changing from xlwings import Workbook, Range, Chart, __version__
to from xlwings.xlwings import Workbook, Range, Chart, __version__ | 1 | 0 | 1 | importing xlwings module into python 3.4 | 1 | python,excel,xlwings | 0 | 2014-06-09T13:49:00.000 |
The question is pretty much in the header, but here are the specifics. For my senior design project we are going to be writing software to control some hardware and display diagnostics info on a web front. To accomplish this, I'm planning to use a combination of Python and nodejs. Theoretically, a python service script will communicate with the hardware via bacnet IP and log diagnostic info in an SQL database. The nodejs JavaScript server code will respond to webfront requests by querying the database and returning the relevant info. I'm pretty new to SQL, so my primary question is.. Is this possible? My second and more abstract questions would be... is this wise? And, is there any obvious advantage to using the same language on both ends, whether that language is Python, Java, or something else? | 1 | 0 | 0 | 0 | false | 24,157,502 | 1 | 1,076 | 2 | 0 | 0 | 24,156,992 | Yes its possible.
Two applications with different languages using one database is almost exactly the same as one application using several connections to it, so you are probably already doing it. All the possible problems are exactly the same. The database won't even know whether the connections are made from one application or the other. | 1 | 0 | 0 | Can two programs, written in different languages, connect to the same SQL database? | 3 | sql,node.js,python-2.7 | 0 | 2014-06-11T07:25:00.000 |
The question is pretty much in the header, but here are the specifics. For my senior design project we are going to be writing software to control some hardware and display diagnostics info on a web front. To accomplish this, I'm planning to use a combination of Python and nodejs. Theoretically, a python service script will communicate with the hardware via bacnet IP and log diagnostic info in an SQL database. The nodejs JavaScript server code will respond to webfront requests by querying the database and returning the relevant info. I'm pretty new to SQL, so my primary question is.. Is this possible? My second and more abstract questions would be... is this wise? And, is there any obvious advantage to using the same language on both ends, whether that language is Python, Java, or something else? | 1 | 1 | 1.2 | 0 | true | 24,158,115 | 1 | 1,076 | 2 | 0 | 0 | 24,156,992 | tl;dr
You can use any programming language that provides a client for the database server of your choice.
To the database server, as long as the client is communicating as per the server's requirements (that is, it is using the server's library, protocol, etc.), then there is no difference to what programming language or system is being used.
The database drivers provide a common abstract layer, providing a guarantee that the database server and the client are speaking the same language.
The programming language's interface to the database driver takes care of the language specifics - for example, providing syntax that conforms to the language; and on the opposite side it the driver will ensure that all commands are sent in the protocol that the server expects.
Since drivers are such a core requirement, there are usually multiple drivers available for each database; and because good database access is a core requirement for programmers, each language strives to have a "standard" API for all databases. For example, Java has JDBC,
Python has the DB-API, and .NET has ODBC (and ADO I believe, but I am not a .NET expert).
These are what the database drivers will conform to, so that it doesn't matter which database server you are using, you have one standard way to connect, one standard way to execute queries and one standard way to fetch results - in effect, making your life as a programmer easier.
In most cases, there is a reference driver (and API/library) provided by the database vendor. It is usually in C, and it is also what the "native" client to the database uses. For example the mysql client for the MySQL database server using the MySQL C drivers to connect, and it is the same driver that is used by the Python MySQLdb driver; which conforms to the Python DB-API. | 1 | 0 | 0 | Can two programs, written in different languages, connect to the same SQL database? | 3 | sql,node.js,python-2.7 | 0 | 2014-06-11T07:25:00.000 |
I have a python script running on my server which accessed a database, executes a fetch query and runs a learning algorithm to classify and updates certain values and means depending on the query.
I want to know: if for some reason my server shuts down in between, then my Python script would shut down and my query would be lost.
How do i get to know where to continue from once I re-run the script and i want to carry on the updated means from the previous queries that have happened. | 0 | 2 | 0.379949 | 0 | false | 24,202,505 | 0 | 22 | 1 | 0 | 0 | 24,202,386 | First of all: the question is not really related to Python at all. It's a general problem.
And the answer is simple: keep track of what your script does (in a file or directly in db). If it crashes continue from the last step. | 1 | 0 | 0 | Python re establishment after Server shuts down | 1 | python | 0 | 2014-06-13T09:45:00.000 |
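A minimal checkpointing sketch along those lines; the file name and the record iteration are placeholders standing in for your real fetch loop:

    import os

    CHECKPOINT = "last_processed_id.txt"

    def load_checkpoint():
        if os.path.exists(CHECKPOINT):
            with open(CHECKPOINT) as f:
                return int(f.read().strip())
        return 0

    def save_checkpoint(record_id):
        with open(CHECKPOINT, "w") as f:
            f.write(str(record_id))

    last_id = load_checkpoint()
    for record_id in range(last_id + 1, 1000):   # stand-in for "fetch rows with id > last_id"
        # ... classify the record and update the running means here ...
        save_checkpoint(record_id)               # persisted after every step
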
How can I use Google Cloud Datastore stats object (in Python ) to get the number of entities of one kind (i.e. Person) in my database satisfying a given constraint (i.e. age>20)? | 0 | 1 | 0.099668 | 0 | false | 24,227,785 | 1 | 68 | 1 | 1 | 0 | 24,227,510 | You can't, that's not what it's for at all. It's only for very broad-grained statistics about the number of each types in the datastore. It'll give you a rough estimate of how many Person objects there are in total, that's all. | 1 | 0 | 0 | How to use Google Cloud Datastore Statistics | 2 | python,google-app-engine,google-cloud-datastore | 0 | 2014-06-15T07:34:00.000 |
I have a script in Python that retrieves data from a remote server into a MySQL database located on my computer (the one that runs the Python script).
This script is executed daily to retrieve fresh data into the MySQL database. I am using Workbench 6.0 for Windows 64.
I want to add a web GUI to the system that will present some of the data on a web page. I would like to write it in PHP and for my PHP program to use the same MySQL database that my Python script uses. I would like to make it a web server later on so users can log in to the sever and use this web GUI.
Can the PHP and the Python scripts use the same DB?
In the past I have worked with WAMP sever for PHP only. If I install WAMP on the sever, will it be able to use the same DB or can it cause a collision? | 1 | 0 | 0 | 0 | false | 24,228,304 | 0 | 1,052 | 1 | 0 | 0 | 24,227,779 | I'd comment, but I don't have enough rep. yet. You should be able to start the Apache and PHP services separately from the WAMP tray icon. If you can't, try this:
You should be able to use the WAMP menu on the tray icon to open WAMP's my.ini file. In there, just change the port number from 3306 to something else (might as well use 3307) and save the file, then restart WAMP. In the future, completely ignore WAMP's MySQL instance and just use the one that Python is connecting to.
If that still doesn't work, you might have Skype running. By default, Skype listens on Port 80 for some bizarre reason. Open Skype's settings, go to the "Connection" section and untick "Use port 80 as an alternative for incoming connections." Then try restarting WAMP again. | 1 | 0 | 0 | Python and PHP use the same mySQL database | 1 | php,python,mysql,wamp | 0 | 2014-06-15T08:17:00.000 |
I'm storing strings on the order of 150M. It's well-below the maximum size of strings in Redis, but I'm seeing a lot of different, conflicted opinions on the approach I should take, and no clear path.
On the one hand, I've seen that I should use a hash with small data chunks, and on the other hand, I've been told that leads to gapping, and that storing the whole string is most efficient.
On the one hand, I've seen that I could pass in the one massive string, or do a bunch of string-append operations to build it up. The latter seems like it might be more efficient than the former.
I'm reading the data from elsewhere, so I'd rather not fill a local, physical file just so that I can pass a whole string. Obviously, it'd be better all around if I can chunk the input data, and feed it into Redis via appends. However, if this isn't efficient with Redis, it might take forever to feed all of the data, one chunk at a time. I'd try it, but I lack the experience, and it might be inefficient for a number of different reasons.
That being said, there's a lot of talk of "small" strings and "large" strings, but it's not clear what Redis considers an optimally "small" string. 512K, 1M, 8M?
Does anyone have any definitive remarks?
I'd love it if I could just provide a file-like object or generator to redis-py, but that's more language-specific than I meant this question to be, and most likely impossible for the protocol, anyway: it'd just require internal chunking of the data, anyway, when it's probably better to just impose this on the developer. | 4 | 2 | 0.379949 | 0 | false | 24,321,923 | 0 | 2,349 | 1 | 0 | 0 | 24,320,040 | One option would be:
Storing data as long list of chunks
store data in a List - this allows storing the content as a sequence of chunks as well as destroying the whole list in one step
store the data using the pipeline context manager to ensure you are the only one who writes at that moment.
be aware that Redis always processes a single request at a time and all others are blocked for that moment. With large files, which take time to write, you can not only slow other clients down, but you are also likely to exceed the max execution time (see config for this value).
Store data in randomly named list with known pointer
Alternative approach, also using list, would be to invent random list name, write content chunk by chunk into it, and when you are done, update value in known key in Redis pointing to this randomly named list. Do not forget to remove old one, this can be done from your code, but you might use expiration if it seems usable in your use case. | 1 | 0 | 1 | Best way to store large string in Redis... Getting mixed signals | 1 | python,redis | 0 | 2014-06-20T04:36:00.000 |
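A sketch of that second variant with redis-py (all key names are invented): write the chunks into a temporary list inside a pipeline, then atomically point a well-known key at it and drop the old version:

    import uuid
    import redis

    r = redis.StrictRedis()

    def store_big_string(pointer_key, chunks):
        temp_key = "blob:" + uuid.uuid4().hex
        pipe = r.pipeline()
        for chunk in chunks:                   # feed the data chunk by chunk
            pipe.rpush(temp_key, chunk)
        pipe.execute()

        old = r.getset(pointer_key, temp_key)  # swap the pointer to the new list
        if old:
            r.delete(old)                      # discard the previous version

    def read_big_string(pointer_key):
        return b"".join(r.lrange(r.get(pointer_key), 0, -1))
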
I'm looking to insert the current system timestamp into a field on a database. I don't want to use the server side now() function and need to use the python client's system timestamp. What MySQL datatype can store this value, and how should I insert it? Is time.time() sufficient? | 0 | 1 | 0.099668 | 0 | false | 24,368,972 | 0 | 3,202 | 1 | 0 | 0 | 24,367,155 | time.time() is a float, if a resolution of one second is enough you can just truncate it and store it as an INTEGER. | 1 | 0 | 0 | Inserting a unix timestamp into MySQL from Python | 2 | python,mysql,python-3.x,timestamp | 0 | 2014-06-23T13:26:00.000 |
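A minimal sketch of the client-side timestamp insert, storing the truncated value in an integer column; the table and column names are placeholders:

    import time
    import MySQLdb

    conn = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="mydb")
    cur = conn.cursor()

    ts = int(time.time())   # client system time, truncated to whole seconds
    cur.execute("INSERT INTO events (name, created_at) VALUES (%s, %s)", ("ping", ts))

    conn.commit()
    conn.close()
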
I'm used to having a remote server I can use via ssh but I am looking at using Amazon Web Services for a new project to give me better performance and resilience at reduced costs but I'm struggling to understand how to use it.
This is what I want to do:
First-time:
Create a Postgres db
Connect to Amazon Server
Download code to server
Once a month:
Download large data file to server
Run code (written in python) to load database with data
Run code (written in Java) to create a Lucene Search Index file from data in the database
Continuously:
Run Java code in a servlet container; this will use the Lucene search index file but DOES NOT require access to the database.
Note: Technically I could do the database population locally; the trouble is the resultant Lucene index file is about 5GB and I don't have a good enough Internet connection
to upload a file of that size to Amazon.
All that I have managed to do so far is create a Postgres database but I don't understand how to connect to it or get a ssh/telnet connection to my server (I requested a Direct Connect but this seems to be a different service).
Update so far
FYI:
So I created a Postgres database using RDS
I created a Ubuntu linux installation using EC2
I connected to the linux installation using ssh
I installed required software (using apt-get)
I downloaded data file to my linux installation
I think, according to the installation, I should be able to connect to my Postgres db from my EC2 instance and even from my local machine; however, in both cases it just times out.
Update 2
Probably security related, but I cannot for the life of me understand what I'm meant to do with security groups and why they don't make the EC2 instance able to talk to my database by default.
I've checked that both RDS and EC2 have the same VPC id, and both are in the same availability zone. Postgres is using port 5432 (not 3306) but I haven't been able to access it yet. So taking my working EC2 instance as the starting point, should I create a new security group before creating a database, and if so what values do I need to put into it so I can access the db with psql from within my EC2 ssh session - that's all that is holding me up for now and all I need to do.
Update 3
At last I have access to my database. My database had three security groups (I think the other two were created when I created a new EC2 instance); I removed two of them, and in the remaining one, on the inbound tab, I set the rule to
All Traffic
Ports 0-65535
Protocol All
IPAddress 0.0.0.0/0
(The outbound tab already had the same rule) and it worked !
I realize this is not the most secure setup but at least it's progress.
I assume that to only allow access from my EC2 instance I can change the IP address of the inbound rule, but I don't know how to calculate the CIDR for the IP address?
My new problem is that having successfully downloaded my datafile to my EC2 instance, I am unable to unzip it because I don't have enough disk space. I assume I have to use S3. I've created a bucket, but how do I make it visible as disk space from my EC2 instance so I can
Move my datafile to it
Unzip the datafile into it
Run my code against the unzipped datafile to load the database
(Note the datafile is an Xml format and has to be processed with custom code to get it into the database it cannot just be loaded directly into the database using some generic tool)
Update 4
S3 is the wrong solution for me; instead I can use EBS, which is basically disk storage accessible not as a service but by clicking Volumes in the EC2 Console. Ensure you create the volume in the same Availability Zone as the instance; there may be more than one in each location. For example, my EC2 instance was created in eu-west-1a, but the first time I created a volume it was in eu-west-1b and therefore could not be used.
Then attach volume to instance
But I cannot see the volume from the linux commandline, seems there is something else required.
Update 5
Okay, have to format the disk and mount it in linux for it to work
I now have my code for uploading the data to the database working, but it is running incredibly slowly, much slower than on my cheap local server at home. I'm guessing that because the data is being loaded one record at a time, the bottleneck is not the micro database but my micro instance; it looks like I need to redo this with a more expensive instance.
Update 6
Updated to a large Computative instance still very slow. Im now thinking the issue is the network latency between server and database perhaps need to install a postgres server directly onto my instance to cut that part out. | 0 | 0 | 1.2 | 0 | true | 24,373,021 | 1 | 186 | 1 | 0 | 0 | 24,367,485 | First-time:
Create a Postgres db - Depending on size(small or large), might want RDS or Redshift
Connect to Amazon Server - EC2
Download code to server - upload your programs to an S3 bucket
Once a month:
Download large data file to server - Move data to S3, if using redshift data can be loaded directly from s3 to redshift
Run code (written in python) to load database with data
Run code (written in Java) to create a Lucene Search Index file from data in the database - might want to look into EMR with this
Continuously:
Run Java code in servlet container, this will use the Lucene search index file but DOES NOT require access to the database - If you have a Java WAR file, you can host this using Elastic Beanstalk
In order to connect to your database, you must make sure the security group allows for this connection, and for an ec2 you must make sure port 22 is open to your IP to conncet to it. It sounds like the security group for RDS isn't opening up port 3306. | 1 | 0 | 0 | How do i get started with Amazon Web Services for this scenario? | 1 | java,python,amazon-web-services,amazon-ec2 | 0 | 2014-06-23T13:40:00.000 |
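Once the RDS security group lets the EC2 instance in on the Postgres port (5432 in this case), connecting from the instance is an ordinary psycopg2 call with the RDS endpoint as host; the endpoint and credentials below are placeholders:

    import psycopg2

    conn = psycopg2.connect(
        host="mydb.abc123xyz.eu-west-1.rds.amazonaws.com",  # RDS endpoint
        port=5432,
        dbname="research",
        user="master",
        password="secret",
    )
    cur = conn.cursor()
    cur.execute("SELECT version()")
    print(cur.fetchone())
    conn.close()

On the slow-load updates above: when rows are inserted one at a time, batching many rows per transaction (or using cursor.copy_from for bulk loads) usually helps far more than a bigger instance, since per-row network latency is the cost being paid.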
I can't connect with mysql and I can't do "python manage.py syncdb" on it
how to connect with mysql in django and django-cms without any error? | 1 | 3 | 0.291313 | 0 | false | 24,380,525 | 1 | 9,227 | 1 | 0 | 0 | 24,380,269 | This is an error message you get if MySQLdb isn't installed on your computer.
The easiest way to install it would be by entering pip install MySQL-python into your command line. | 1 | 0 | 0 | Getting “Error loading MySQLdb module: No module named MySQLdb” in django-cms | 2 | python,mysql,django,django-cms | 0 | 2014-06-24T06:59:00.000 |
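After installing the driver, the settings.py entry looks roughly like this (all names and credentials are placeholders):

    # settings.py
    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.mysql",
            "NAME": "mysite",
            "USER": "dbuser",
            "PASSWORD": "secret",
            "HOST": "localhost",
            "PORT": "3306",
        }
    }
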
There are Python libraries that allow to communicate with a database. Of course, to use these libraries there should be an installed and running database server on the computer (python cannot communicate with something that does not exist).
My question is whether the above written is applicable to the sqlite3 library. Can one say that this library does not need any database to be installed (and running) on the computer? Can one say that sqlite3 needs only a file system? | 2 | 0 | 0 | 0 | false | 24,410,155 | 0 | 878 | 1 | 0 | 0 | 24,410,124 | No, sqlite package is part of Python standard library and as soon as you have Python installed, you may use sqlite functionality.
As Martijn Pieters noted, the actual shared library is not technically part of Python (my answer was a bit oversimplified) but comes as a shared library, which has to be installed too.
Practically speaking, if you manage to install Python, you have SQLite available for your Python code.
As OP asked for for a need to install sqlite separately, I will not speculate on how to install Python, which is not able to work with it. | 1 | 0 | 0 | Does python sqlite3 library need sqlite to be installed? | 2 | python,database,sqlite | 0 | 2014-06-25T13:30:00.000 |
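A quick demonstration that nothing beyond the Python installation and a writable path is needed (the file and table names are arbitrary):

    import sqlite3

    # The "server" is just this file (or ":memory:" for no file at all).
    conn = sqlite3.connect("example.db")
    conn.execute("CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)")
    conn.execute("INSERT INTO notes (body) VALUES (?)", ("hello",))
    conn.commit()
    print(conn.execute("SELECT * FROM notes").fetchall())
    conn.close()
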
I am using pandas to organize and manipulate data I am getting from the twitter API. The 'id' key returns a very long integer (int64) that pandas has no problem handling (i.e. 481496718320496643).
However, when I send to SQL:
df.to_sql('Tweets', conn, flavor='sqlite', if_exists='append', index=False)
I now have tweet id: 481496718320496640 or something close to that number.
I converted the tweet id to str but Pandas SQLite Driver / SQLite still messes with the number. The data type in the SQLite database is [tweet_id] INTEGER. What is going on and how do I prevent this from happening? | 0 | 0 | 0 | 0 | false | 24,419,432 | 0 | 160 | 1 | 0 | 0 | 24,416,140 | I have found the issue -- I am using SQLite Manager (Firefox Plugin) as a SQLite client. For whatever reason, SQLite Manager displays the tweet IDs incorrectly even though they are properly stored (i.e. when I query, I get the desired values). Very strange I must say. I downloaded a different SQLite client to view the data and it displays properly. | 1 | 0 | 0 | Long integer values in pandas dataframe change when sent to SQLite database using to_sql | 1 | python,sqlite,pandas | 0 | 2014-06-25T18:39:00.000 |
This might sound like a bit of an odd question - but is it possible to load data from a (in this case MySQL) table to be used in Django without the need for a model to be present?
I realise this isn't really the Django way, but given my current scenario, I don't really know how better to solve the problem.
I'm working on a site, which for one aspect makes use of a table of data which has been bought from a third party. The columns of interest are liklely to remain stable, however the structure of the table could change with subsequent updates to the data set. The table is also massive (in terms of columns) - so I'm not keen on typing out each field in the model one-by-one. I'd also like to leave the table intact - so coming up with a model which represents the set of columns I am interested in is not really an ideal solution.
Ideally, I want to have this table in a database somewhere (possibly separate to the main site database) and access its contents directly using SQL. | 2 | 1 | 0.066568 | 0 | false | 44,363,554 | 1 | 1,496 | 1 | 0 | 0 | 24,423,645 | There is one feature called inspectdb in Django. for legacy databases like MySQL , it creates models automatically by inspecting your db tables. it stored in our app files as models.py. so we don't need to type all column manually.But read the documentation carefully before creating the models because it may affect the DB data ...i hope this will be useful for you. | 1 | 0 | 0 | Loading data from a (MySQL) database into Django without models | 3 | python,mysql,django,webproject | 0 | 2014-06-26T06:19:00.000 |
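The inspectdb route mentioned in the answer is a one-liner, and the generated models can be marked managed = False so Django never migrates or alters the bought-in table; a sketch with an illustrative model name and columns:

    # Generate model stubs from the existing table, then prune to the columns you need:
    #   python manage.py inspectdb > myapp/models.py

    from django.db import models

    class ThirdPartyPlace(models.Model):          # illustrative name, not the real table
        name = models.CharField(max_length=255)
        latitude = models.FloatField()
        longitude = models.FloatField()

        class Meta:
            managed = False                       # Django will never create/alter this table
            db_table = "third_party_places"
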
I am developing a web app based on the Google App Engine.
It has some hundreds of places (name, latitude, longitude) stored in the Data Store.
My aim is to show them on google map.
Since they are many I have registered a javascript function to the idle event of the map and, when executed, it posts the map boundaries (minLat,maxLat,minLng,maxLng) to a request handler which should retrieve from the data store only the places in the specified boundaries.
The problem is that it doesn't allow me to execute inequality filters on more than one property in the same query (i.e. Place.lat > minLat together with Place.lng > minLng).
How should I do that? (trying also to minimize the number of required queries) | 0 | 0 | 0 | 0 | false | 24,501,164 | 1 | 144 | 1 | 1 | 0 | 24,497,219 | You didn't say how frequently the data points are updated, but assuming 1) they're updated infrequently and 2) there are only hundreds of points, then consider just querying them all once, and storing them sorted in memcache. Then your handler function would just fetch from memcache and filter in memory.
This wouldn't scale indefinitely but it would likely be cheaper than querying the Datastore every time, due to the way App Engine pricing works. | 1 | 0 | 0 | Google App Engine NDB Query on Many Locations | 2 | javascript,python,google-maps,google-app-engine | 0 | 2014-06-30T19:09:00.000 |
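A sketch of that approach on App Engine; the property names follow the question, while the cache key and expiry are invented:

    from google.appengine.api import memcache
    from google.appengine.ext import ndb

    class Place(ndb.Model):
        name = ndb.StringProperty()
        lat = ndb.FloatProperty()
        lng = ndb.FloatProperty()

    def places_in_bounds(min_lat, max_lat, min_lng, max_lng):
        places = memcache.get("all_places")
        if places is None:
            # One Datastore query for everything, then cache the small result set.
            places = [(p.name, p.lat, p.lng) for p in Place.query().fetch()]
            memcache.set("all_places", places, time=300)
        return [p for p in places
                if min_lat <= p[1] <= max_lat and min_lng <= p[2] <= max_lng]
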
I'm used to creating connections using MySQLdb directly so I'm not sure this is at all possible using sqlalchemy, but is there a way to get the mysql connection thread id from a mysql Session a la MySQLdb.connection.thread_id()? I've been digging through and can't seem to find a way to access it. I'm not creating a connection from the Engine directly, and would like to avoid doing so.
For reference this is a single-thread application, I just need to get the mysql thread id for other purposes. | 3 | 3 | 1.2 | 0 | true | 24,614,258 | 0 | 707 | 1 | 0 | 0 | 24,500,665 | session.connection().connection.thread_id() | 1 | 0 | 0 | Getting a mysql thread id from sqlalchemy Session | 1 | python,mysql,sqlalchemy,mysql-python | 0 | 2014-07-01T00:02:00.000 |
I made a program that receives user input and stores it on a MySQL database. I want to implement this program on several computers so users can upload information to the same database simoultaneously. The database is very simple, it has just seven columns and the user will only enter four of them.
There would be around two-three hundred computers uploading information (not always at the same time but it can happen). How reliable is this? Is that even possible?
It's my first script ever so I appreciate if you could point me in the right direction. Thanks in advance. | 0 | 0 | 0 | 0 | false | 24,502,420 | 0 | 1,497 | 2 | 0 | 0 | 24,502,362 | Yes, it is possible to have up to that many number of mySQL connectins. It depends on a few variables. The maximum number of connections MySQL can support depends on the quality of the thread library on a given platform, the amount of RAM available, how much RAM is used for each connection, the workload from each connection, and the desired response time.
The number of connections permitted is controlled by the max_connections system variable. The default value is 151 to improve performance when MySQL is used with the Apache Web server.
The important part is to handle the connections properly and close them appropriately. You do not want redundant connections hanging around, as that can cause slow-down issues in the long run. Make sure when coding that you properly close connections. | 1 | 0 | 0 | simultaneous connections to a mysql database | 2 | mysql,python-2.7,mysql-python | 0 | 2014-07-01T04:16:00.000
I made a program that receives user input and stores it in a MySQL database. I want to implement this program on several computers so users can upload information to the same database simultaneously. The database is very simple: it has just seven columns, and the user will only enter four of them.
There would be around two to three hundred computers uploading information (not always at the same time, but it can happen). How reliable is this? Is that even possible?
It's my first script ever, so I would appreciate it if you could point me in the right direction. Thanks in advance. | 0 | 0 | 1.2 | 0 | true | 24,502,484 | 0 | 1,497 | 2 | 0 | 0 | 24,502,362 | Having simultaneous connections from the same script depends on how you're processing the requests. The typical choices are forking a new Python process per request (usually handled by a webserver), or handling all the requests with a single process.
If you're forking processes (new process each request):
A single MySQL connection should be perfectly fine (since the total number of active connections will be equal to the number of requests you're handling).
You typically shouldn't worry about multiple connections since a single MySQL connection (and the server), can handle loads much higher than that (completely dependent upon the hardware of course). In which case, as @GeorgeDaniel said, it's more important that you focus on controlling how many active processes you have and making sure they don't strain your computer.
If you're running a single process:
Yet again, a single MySQL connection should be fast enough for all of those requests. If you want, you can look into grouping the inserts together, as well as multiple connections.
MySQL is fast and should be able to easily handle 200+ simultaneous connections that are writing/reading, regardless of how many active connections you have open. And yet again, the performance you get from MySQL is completely dependent upon your hardware. | 1 | 0 | 0 | simultaneous connections to a mysql database | 2 | mysql,python-2.7,mysql-python | 0 | 2014-07-01T04:16:00.000 |
I'm attempting to store information from a decompiled file in Dynamo.
I have all of the files stored in s3 however I would like to change some of that.
I have an object id with properties such as a date, etc which I know how to create a table of in dynamo. My issue is that each object also contains images, text files, and the original file. I would like to have the key for s3 for the original file in the properties of the file:
Ex: FileX, date, originalfileLoc, etc, images pointer, text pointer.
I looked online but I'm confused how to do the nesting. Does anyone know of any good examples? Is there another way? I assume I create an images and a text table. Each with the id and all of the file's s3 keys. Any example code of how to create the link itself?
I'm using Python boto, btw, to do this. | 0 | 0 | 0 | 0 | false | 24,642,053 | 0 | 477 | 2 | 0 | 0 | 24,520,176 | From what you described, I think you just need to create one table with a hash key. The hash key should be the object id, and you will have columns such as "date", "image pointer", "text pointer", etc.
DynamoDB is schema-less, so you don't need to create the columns explicitly. When you call GetItem the server will return you a dictionary with the column name as key and its value.
Being schema-less also means you can create new columns dynamically. Assume you already have a row in the table with only the "date" column and now you want to add the "image pointer" column: you just need to call UpdateItem and give it the hash key and the image-pointer key-value pair. | 1 | 0 | 0 | How to correctly nest tables in DynamoDb | 2 | python,database,amazon-s3,amazon-dynamodb,boto | 0 | 2014-07-01T22:25:00.000
I'm attempting to store information from a decompiled file in Dynamo.
I have all of the files stored in s3 however I would like to change some of that.
I have an object id with properties such as a date, etc which I know how to create a table of in dynamo. My issue is that each object also contains images, text files, and the original file. I would like to have the key for s3 for the original file in the properties of the file:
Ex: FileX, date, originalfileLoc, etc, images pointer, text pointer.
I looked online but I'm confused how to do the nesting. Does anyone know of any good examples? Is there another way? I assume I create an images and a text table. Each with the id and all of the file's s3 keys. Any example code of how to create the link itself?
I'm using Python boto, btw, to do this. | 0 | 0 | 1.2 | 0 | true | 24,536,208 | 0 | 477 | 2 | 0 | 0 | 24,520,176 | If you stay within DynamoDB's limit of 64 KB per item:
You can have one item (row) per file.
DynamoDB has a String type (for the file name, date, etc.) and also a StringSet (SS) for lists of values (for the text files and images).
From what you write I assume you will only save pointers (keys) to the binary data in S3.
You could also save binary data and binary sets in DynamoDB but I believe you will reach the limit AND have an expensive solution in terms of throughput. | 1 | 0 | 0 | How to correctly nest tables in DynamoDb | 2 | python,database,amazon-s3,amazon-dynamodb,boto | 0 | 2014-07-01T22:25:00.000 |
I have made an online website which acts as the front end for a database into which my customers can save sales information. Each customer logs onto the online website with their own credentials and only sees their own sales records. The database comes in the form of SQL Server 2008.
Some of these customers have a third-party Windows tool on their PCs which itself acts as a front end for a database with specific sales records. This tool is used by them for printing receipts. The tool comes with a Python interface which can be used to update the database if the tool itself is not used. I've installed the tool on my PC and successfully added records to this tool's database by running a simple Python script.
At the moment, customers are adding by hand sales information from my website to the tool on the PC. I would like to provide them with an automatic way of doing this. This syncing should only occur when the customer requests such a sync and indeed they have all requested that it should work so. This is to ensure that they get an opportunity to validate the information.
How might I solve this problem? Should I develop a PC application which they install on their local computer or can I do this via the browser? Either solution will need to execute Python code in order to update the database on their PC and then there are of course security issues. | 3 | 0 | 0 | 0 | false | 24,659,493 | 0 | 995 | 1 | 0 | 0 | 24,611,812 | So in summary you have a website/sql server application. Then some of your users have a separate local database with a python front end. And you need to bridge the two applications.
You can expose your SQL Server database with a REST API (using whatever tech you choose). Then create a Python app that calls that API (either via a button or on an automatic schedule) and then executes the needed Python code for the reporting.
Another approach would be to add the needed reporting functionality to your web app: look at what the 3rd party database and reporting tool provides and build that functionality into your website, so there's no longer a need to use the 3rd party application at all.
Those are your two paths to go down based on the info provided. | 1 | 0 | 0 | Syncing PC data with online data | 2 | python,sql-server,browser,sync | 0 | 2014-07-07T13:32:00.000
Goal: Take/attach pictures in a PhoneGap application and send a public URL for each picture to a Google Cloud SQL database.
Question 1: Is there a way to create a Google Cloud Storage object from a base64 encoded image (in Python), then upload that object to a bucket and return a public link?
I'm looking to use PhoneGap to send images to a Python Google App Engine application, then have that application send the images to a Google Cloud Storage bucket I have set up, then return a public link back to the PhoneGap app. These images can either be taken directly from the app, or attached from existing photo's on the user's device.
I use PhoneGap's FileTransfer plugin to upload the images to GAE, which are sent as base64 encoded images (this isn't something I can control).
Based on what I've found in Google Docs, I can upload the images to Blobstore; however, it requires <input type='file'> elements in a form. I don't have 'file' input elements; I just take the image URI returned from PhoneGap's camera object and display a thumbnail of the picture that was taken (or attached).
Question 2: Is it possible to have an <input type='file'> element and control its value? As in, is it possible to set its value based on whether the user chooses a file or takes a picture?
Thanks in advance! | 1 | 0 | 0 | 0 | false | 24,657,475 | 1 | 620 | 1 | 1 | 0 | 24,655,877 | Yes, that is a fine use for GAE and GCS. You do not need an <input type=file>, per se. You can just set up POST parameters in your call to your GAE url. Make sure you send a hidden key as well, and work from SSL-secured urls, to prevent spammers from posting to your app. | 1 | 0 | 0 | Using PhoneGap + Google App Engine to Upload and Save Images | 2 | python,google-app-engine,cordova,google-cloud-storage | 0 | 2014-07-09T14:05:00.000 |
So I have a string in Python that contains like 500 SQL INSERT queries, separated by ;. This is purely for performance reasons, otherwise I would execute individual queries and I wouldn't have this problem.
When I run my SQL query, Python throws: IntegrityError: (1062, "Duplicate entry 'http://domain.com' for key 'PRIMARY'")
Let's say the error is thrown on the first query of the 500. How can I make sure the other 499 queries are still executed on my database?
If I used a try/except, sure, the exception would be caught, but the rest of my statement wouldn't be executed. Or would it, since Python sends it all to MySQL in one big combined string?
Any ideas? | 0 | 0 | 1.2 | 0 | true | 24,666,569 | 0 | 828 | 1 | 0 | 0 | 24,664,413 | For anyone that cares, the ON DUPLICATE KEY UPDATE SQL command was what I ended up using. | 1 | 0 | 0 | Python Ignore MySQL IntegrityError when trying to add duplicate entry with a Primary key | 1 | python,mysql,sql | 0 | 2014-07-09T21:55:00.000 |
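A minimal sketch of the ON DUPLICATE KEY UPDATE approach the answer settled on, using MySQLdb; the table and column names are placeholders, not from the original question:

import MySQLdb

conn = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="mydb")
cur = conn.cursor()

sql = ("INSERT INTO links (url, hits) VALUES (%s, %s) "
       "ON DUPLICATE KEY UPDATE hits = hits + VALUES(hits)")
rows = [("http://domain.com", 1), ("http://example.com", 1)]

# executemany sends every row; duplicate primary keys update instead of raising IntegrityError.
cur.executemany(sql, rows)
conn.commit()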
I apologize if this has been asked already, or if this is answered somewhere else.
Anyways, I'm working on a project that, in short, stores image metadata and then allows the user to search said metadata (which resembles a long list of key-value pairs). This wouldn't be too big of an issue if the metadata was standardized. However, the problem is that for any given image in the database, there is any number of key/values in its metadata. Also there is no standard list of what keys there are.
Basically, I need to find a way to store a dictionary for each model, but with arbitrary key/value pairs. And I need to be able to query them. And the organization I'm working for is planning on uploading thousands of images to this program, so it has to query reasonably fast.
I have one model in my database, an image model, with a filefield.
So, I'm in between two options, and I could really use some help from people with more experience on choosing the best one (or any other solutions that would work better)
Using a traditional relational database like MySql, and creating a separate model with a foreignkey to the image model, a key field, and a value field. Then, when I need to query the data, I'll ask for every instance of this separate table that relates to an image, and then query those rows for the key/value combination I need.
Using something like MongoDB, with django-toolbox and its DictField to store the metadata. Then, when I need to query, I'll access the dict and search it for the key/value combination I need.
While I feel like 1 would be much better in terms of query time, each image may have up to 40 key/values of metadata, and that makes me worry about that separate "dictionary" table growing far too large if there's thousands of images.
Any advice would be much appreciated. Thanks! | 0 | 0 | 0 | 0 | false | 24,690,665 | 1 | 992 | 1 | 0 | 0 | 24,688,388 | In a Django project you've got 4 alternatives for this kind of problem, in no particular order:
using PostgreSQL, you can use the hstore field type, which is basically a key/value store inside a single column. It's not very helpful in terms of querying it, but it does its job of saving your data.
using Django-NoRel with mongodb you get the ListField field type that does the same thing and can be queried just like anything in mongo. (option 2)
using Django-eav to create an entity attribute value store with your data. Elegant solution but painfully slow queries. (option 1)
storing your data as a JSON string in a long enough TextField and creating your own functions for serializing and deserializing the data, without worrying about being able to query it.
In my own experience, if you by any chance need to query over the data, your option two is by far the best choice. EAV in Django, without composite keys, is painful. | 1 | 0 | 0 | Django: storing/querying a dictionary-like data set? | 2 | python,mysql,django,mongodb,database | 0 | 2014-07-11T00:29:00.000 |
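A minimal sketch of option 1 from the question (a separate key/value model with a foreign key to the image); the model and field names are illustrative:

from django.db import models

class Image(models.Model):
    file = models.FileField(upload_to="images/")

class ImageMeta(models.Model):
    image = models.ForeignKey(Image, related_name="meta")
    key = models.CharField(max_length=100, db_index=True)
    value = models.TextField()

# Querying: all images whose metadata contains a given key/value pair.
matches = Image.objects.filter(meta__key="camera", meta__value="Nikon D90")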
I am faced with a problem:
There is a big old database on Microsoft SQL Server (with triggers, functions, etc.). I am writing a C# app on top of this db. Most of the work is "experiments" like this:
Write a piece of functionality and see if it works in the old Delphi app (i.e. data inserted from C# loads correctly in Delphi).
So I need a tool that can determine which fields of each table are used or not (used in my queries). I am thinking of writing a Python script with an SQL syntax analyser, or just using regular expressions.
What solution would you recommend? | 0 | 0 | 1.2 | 0 | true | 24,690,183 | 0 | 70 | 1 | 0 | 0 | 24,690,101 | You can run a trace in SQL Profiler to see the queries being executed on the server. | 1 | 0 | 0 | Analyse sql queries text | 1 | c#,python,sql,tsql | 0 | 2014-07-11T04:26:00.000 |
So, locally I've changed my models a few times and used South to get everything working. I have a postgres database to power my live site, and one model keeps triggering a column mainsite_message.spam does not exist error. But when I run heroku run python manage.py migrate mainsite from the terminal, I get Nothing to migrate. All my migrations have been pushed. | 1 | 0 | 0 | 0 | false | 24,698,874 | 1 | 1,178 | 2 | 0 | 0 | 24,697,420 | I presume that you have created a migration to add mainsite_message.spam to the schema. Have you made sure that this migration is in your git repository?
If you type git status you should see untracked files. If the migration is untracked you need to git add path_to_migration and then push it to Heroku before you can run it there. | 1 | 0 | 0 | Add a column to heroku postgres database | 3 | python,django,postgresql,heroku | 0 | 2014-07-11T12:11:00.000 |
So, locally I've changed my models a few times and used South to get everything working. I have a postgres database to power my live site, and one model keeps triggering a column mainsite_message.spam does not exist error. But when I run heroku run python manage.py migrate mainsite from the terminal, I get Nothing to migrate. All my migrations have been pushed. | 1 | 0 | 0 | 0 | false | 24,697,852 | 1 | 1,178 | 2 | 0 | 0 | 24,697,420 | Did you run schemamigration before? If yes, go to your database and take a look at your table "south_migrationhistory" there you can see what happened.
If you already did the steps above, you should try to open your migration file and take a look as well; there you can check whether the column creation is specified or not. | 1 | 0 | 0 | Add a column to heroku postgres database | 3 | python,django,postgresql,heroku | 0 | 2014-07-11T12:11:00.000
I'm not sure how to best phrase this question:
I would like to UPDATE, ADD, or DELETE information in an SQLite3 Table, but I don't want this data to be written to disk yet. I would still like to be able to
SELECT the data, and get the updated information, but then I want to choose to either rollback or commit.
Is this possible? Will the SELECT get the data before the UPDATE or after? Or must I rollback or commit before the next statement? | 1 | 0 | 0 | 0 | false | 24,707,793 | 0 | 91 | 3 | 0 | 0 | 24,707,471 | If you explicitly need to commit multiple times throughout the code, and you are worried about the performance times of transactions, you could always build the database in memory db=sqlite3.connect(':memory:') and then dump its contents to disk when all the time-critical aspects of the program have been completed, i.e. at the end of the script. | 1 | 0 | 0 | Can I Stage data to memory SELECT, then choose to rollback, or commit in sqlite3? python 2.7 | 3 | python,sqlite | 0 | 2014-07-11T22:20:00.000
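A minimal sketch of staging changes and reading them back before deciding to commit or roll back (the in-memory variant from the answer works the same way); the table is a placeholder:

import sqlite3

conn = sqlite3.connect("example.db")
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, name TEXT)")

cur.execute("INSERT INTO items (name) VALUES (?)", ("staged row",))
# Within the same connection, the uncommitted change is visible to SELECT.
print(cur.execute("SELECT * FROM items").fetchall())

conn.rollback()   # discard the staged change...
# conn.commit()   # ...or keep it instead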
I don't have access to a PHP server nor a database like MySQL on the machine I'll be working on. Would it be feasible to use Python instead of PHP and a flat file database instead of MySQL? I'm not too concerned about performance or scalability; it's not like I'm going to create the next Facebook. I just want to load data from the server, show it on a webpage, and possibly handle some input forms. Also, is there any major flaw in my reasoning? Or is there any other way to circumvent the lack of PHP and a database on the server? | 0 | 1 | 0.049958 | 0 | false | 24,727,209 | 0 | 1,786 | 2 | 0 | 0 | 24,727,096 | Python comes bundled with the sqlite3 module, which gives access to SQLite databases. The only downside is that, in practice, only one thread can hold a write lock on it at any given moment. | 1 | 0 | 0 | Using Python and flat file database for server-side | 4 | python,web | 0 | 2014-07-13T21:21:00.000
I don't have access to a PHP server nor a database like MySQL on the machine I'll be working on. Would it be feasible to use Python instead of PHP and a flat file database instead of MySQL? I'm not too concerned about performance or scalability; it's not like I'm going to create the next Facebook. I just want to load data from the server, show it on a webpage, and possibly handle some input forms. Also, is there any major flaw in my reasoning? Or is there any other way to circumvent the lack of PHP and a database on the server? | 0 | 1 | 0.049958 | 0 | false | 24,727,365 | 0 | 1,786 | 2 | 0 | 0 | 24,727,096 | There are many ways to serve Python applications, but you should probably look at something that does this using the WSGI standard. Many frameworks will let you do this, e.g. Pyramid, Pylons, Django, .....
If you haven't picked one then it would be worth looking at your long term requirements and also what you already know.
In terms of DB, there are many choices. SQLite has been mentioned, but there are many other DBs that don't require a server process. If you're only storing a small amount of data then flat files may work for you, but for anything bigger or more relational, look at SQLite. | 1 | 0 | 0 | Using Python and flat file database for server-side | 4 | python,web | 0 | 2014-07-13T21:21:00.000
I need to be able to query documents that have a date field between some range, but sometimes in my dataset the year doesn't matter (this is represented with a boolean flag in the mongo document).
So, for example, I might have a document for Christmas (12/25-- year doesn't matter) and another document for 2014 World Cup Final Match (8/13/2014). If the user searches for dates between 8/1/2014 and 12/31/2014, both of those documents should match, but another document for 2010 World Cup Final Match would not.
All approaches I've gotten to work so far have used a complicated nesting of $and and $or statements, which ends up being too slow for production, even with indexes set appropriately. Is there a simple or ideal way to handle this kind of conditional date searching in mongo? | 0 | 0 | 0 | 0 | false | 24,729,803 | 0 | 61 | 1 | 0 | 0 | 24,728,191 | In my view, you have to store the specific values you'll search on, and index them.
For example, alongside the date, you may store "year", "month", and "day", index on "month" and "day", and run your queries on those.
You may want to store them as "y", "m", and "d" to gain some bytes (That's sad, I know). | 1 | 0 | 1 | Mongo query on custom date system | 1 | python,mongodb | 0 | 2014-07-14T00:41:00.000 |
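A minimal sketch of the month/day fields suggested in the answer, using pymongo; database, collection and field names are illustrative:

from pymongo import ASCENDING, MongoClient

events = MongoClient().mydb.events
events.create_index([("m", ASCENDING), ("d", ASCENDING)])

# Christmas: the year doesn't matter, so only month/day are stored.
events.insert_one({"name": "Christmas", "m": 12, "d": 25, "year_matters": False})
events.insert_one({"name": "2014 World Cup Final", "m": 8, "d": 13, "year": 2014, "year_matters": True})

# All events whose month falls between Aug and Dec; a year check for the
# year_matters documents can be layered on top of this.
matches = list(events.find({"m": {"$gte": 8, "$lte": 12}}))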
I'm using matplotlib and MySQLdb to create some graphs from a MySQL database. For example, the number of unique visitors in a given time period, grouped by periods of say, 1 hours. So, there'll be a bunch of (Time, visits in 1-hour period near that time) points.
I have a table as (ip, visit_time) where each ip can occur multiple times.
My question is: should I run a single query and then process the results (remove duplicates, do the counting, etc.), or should I run multiple SQL queries (for example, for a 1-day period, there would be 24 queries for finding out the number of visits in each hour)? Which will be faster and more efficient? | 1 | 2 | 1.2 | 0 | true | 24,776,157 | 0 | 86 | 1 | 0 | 0 | 24,776,000 | Generally, database queries should be faster than Python for two reasons:
Databases are optimised to work with data, and they will optimise a high-level abstraction language like SQL in order to get the best performance, while Python might be fast but isn't guaranteed to be.
Running SQL analyses the data at the source, so you don't need to transfer it first.
That being said, there might be some extremely complex queries which could be faster in Python, but this doesn't seem to be the case for yours. Also, the more you squash the data with SQL, the smaller and simpler the algorithm in Python will be.
Lastly, I don't know your queries, but it should be possible to run them for all 24 hours at once, including the de-duplication and counting. | 1 | 0 | 0 | Python and MySQL - which to use more? | 1 | python,mysql,sql | 0 | 2014-07-16T08:36:00.000
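A minimal sketch of the single-query option (unique visitors per hour in one GROUP BY), assuming the (ip, visit_time) table from the question is called visits:

import MySQLdb

conn = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="analytics")
cur = conn.cursor()

# One query returns 24 (hour, unique-visitor) rows instead of 24 separate queries.
cur.execute("""
    SELECT DATE_FORMAT(visit_time, '%%Y-%%m-%%d %%H:00:00') AS hour_bucket,
           COUNT(DISTINCT ip) AS unique_visitors
    FROM visits
    WHERE visit_time >= %s AND visit_time < %s
    GROUP BY hour_bucket
    ORDER BY hour_bucket
""", ("2014-07-15 00:00:00", "2014-07-16 00:00:00"))

points = cur.fetchall()   # feed these straight into matplotlib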
I have a folder with a large number of Excel workbooks. Is there a way to convert every file in this folder into a CSV file using Python's xlrd, xlutils, and xlsxWriter?
I would like the newly converted CSV files to have the extension '_convert.csv'.
OTHERWISE...
Is there a way to merge all the Excel workbooks in the folder to create one large file?
I've been searching for ways to do both, but nothing has worked... | 1 | 0 | 0 | 1 | false | 24,785,891 | 0 | 2,934 | 1 | 0 | 0 | 24,785,824 | Look at OpenOffice's Python library; I suspect OpenOffice would support MS document files.
Python has no native support for Excel files. | 1 | 0 | 0 | Converting a folder of Excel files into CSV files/Merge Excel Workbooks | 5 | python,csv,xlrd,xlsxwriter | 0 | 2014-07-16T16:20:00.000
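A minimal sketch of the folder conversion asked about in the question, using xlrd plus the csv module rather than the library mentioned in the answer; paths are placeholders and .xlsx support depends on the xlrd version:

import csv
import glob
import os
import xlrd

for path in glob.glob(os.path.join("workbooks", "*.xls*")):
    book = xlrd.open_workbook(path)
    sheet = book.sheet_by_index(0)          # first sheet only, for simplicity
    out_path = os.path.splitext(path)[0] + "_convert.csv"
    with open(out_path, "wb") as f:         # Python 2 csv: binary mode; on Python 3 use "w", newline=""
        writer = csv.writer(f)
        for i in range(sheet.nrows):
            writer.writerow(sheet.row_values(i))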
When compiling documentation using Sphinx, I got the error AttributeError: 'str' object has no attribute 'worksheets'. How do I fix this? | 0 | 0 | 1.2 | 0 | true | 24,790,240 | 0 | 466 | 1 | 0 | 0 | 24,790,239 | You're getting the error because you don't have the most recent iPython installed. You probably installed it with sudo apt-get install ipython, but you should upgrade using sudo pip install ipython --upgrade and then make sure that the previous installation was removed by running sudo apt-get remove ipython. | 1 | 0 | 1 | Compiling Sphinx with iPython doc error "AttributeError: 'str' object has no attribute 'worksheets'" | 1 | ipython,python-sphinx | 0 | 2014-07-16T20:38:00.000
I currently have a Raspberry Pi running Iperf non stop and collecting results.
After collecting results it uploads the bandwidth tests to MySQL.
Is there a way to automatically refresh the table to which the data is added? | 0 | 0 | 0 | 0 | false | 24,795,785 | 0 | 726 | 1 | 0 | 0 | 24,791,510 | Is your goal to use MySQL Workbench to build a live view of your data? If so, I don't think you're using the right tools.
You may just use ElasticSearch to store your data and Kibana to display it; this way you'll get graphs and charts of your stored data for free, plus auto-refresh (based on an interval, not on events).
You may also take a look at Grafana, an even more specialized tool for storing / representing graphs of values.
But if you really want to store your data in MySQL, you may not want to use MySQL Workbench as a user interface; it's a developer tool for building your database. You may, however, build a graphical interface from scratch and send it an event when you update your tables so it refreshes itself, but that's a lot of work that Kibana/Grafana does for you. | 1 | 0 | 0 | MySQL WorkBench How to automatically re run query? | 1 | python,mysql | 1 | 2014-07-16T21:57:00.000
I'm doing data analytics on medium-sized data (2 GB, 20 million records) and on the current machine it hardly fits into memory. Windows 7 slows down considerably when reaching 3 GB of usage on this 4 GB machine. Most of my current analyses need to iterate over all records and consider properties of groups of records determined by some GroupID.
How can I approach this task? My current method is to load it into SQLite and iterate by row. I build the groups in-memory, but this too grows quite large.
I had the following ideas, but maybe you can suggest better approaches:
sort SQLite table by GroupID so that groups come in together
store data somehow column-wise so that I don't have to read all columns
serialize data to parse it faster with Python?
These ideas seem hard to combine for me :( What should I do?
(PS: Hardware upgrades are hard to get. Admin rights are cumbersome, too) | 2 | 1 | 0.197375 | 0 | false | 24,866,662 | 0 | 246 | 1 | 0 | 0 | 24,866,113 | It's hard to say anything without knowing more about the data & aggregation you are trying to do, but definitely don't go for "serialize data to parse it faster with Python" -- most probably that's not where the problem is. And probably not "store data somehow column-wise so that I don't have to read all columns" either.
sort SQLite table by GroupID so that groups come in together <- this sounds like a good approach. But lot of aggregations (like count, average, sum etc.) don't require this. In this type of aggregation, you can simply hold a map of (key, aggregation), and iterate through the rows and iteratively apply them to the aggregation (and throw the row away).
Are you currently gathering all rows that belong to a group in-memory and then doing the aggregation? If so, you might just need to change the code so that you do the aggregation as you read the rows.
EDIT: In response to the comment:
If that's the case, then I'd go for sorting. SQL might be overkill though if all you do is sort. Maybe you can just write the sorted file to disk? Once you do that you could look into parallelizing. Essentially you'll have one process reading the sorted file (which you don't want to parallelize as long as you don't do distributed processing), which packages one group's worth of data and sends it to a pool of processes (the number of processes should be fixed to some number which you tune, to avoid memory shortage) which does the rest of the processing. | 1 | 0 | 1 | Iterate over large data fast with Python? | 1 | python,database | 0 | 2014-07-21T13:18:00.000
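A minimal sketch of the sorted-iteration idea endorsed in the answer: read rows ordered by GroupID and aggregate each group as a stream; table and column names are placeholders:

import sqlite3
from itertools import groupby
from operator import itemgetter

conn = sqlite3.connect("data.db")
cursor = conn.execute("SELECT group_id, value FROM records ORDER BY group_id")

# groupby works because rows arrive sorted by group_id; each group is consumed
# as a stream, so memory stays bounded by one group at a time.
for group_id, rows in groupby(cursor, key=itemgetter(0)):
    total = count = 0
    for _, value in rows:
        total += value
        count += 1
    print(group_id, total / float(count))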
I am working on a python app that uses python 2.4, postgres 8.2 and old versions of pygresql, xlrd, etc. Because of this it is quite a pain to use, and has to be used in a windows xp VM. There are other problems such as the version of xlrd doesn't support .xlsx files, but the new version of xlrd doesn't work with python 2.4, understandably.
I recently made a branch called 'upgrade' where I started to try to get it working with up to date versions of the libraries (python 2.7), for instance unicode handling has changed a bit so required some changes here and there.
Most of the work I'm doing on the app should work in both environments, but it's nicer to work on the upgrade branch because it doesn't need to run in a vm.
So my question is how can I make commits using the upgrade branch but then apply them to the master branch so they will still apply to the old version of the software the client is using? I realise I can cherry pick commits off the upgrade branch onto master but it seems a bit wrong, having commits in both branches. I'm thinking maybe I should be rebasing the upgrade branch so it is always branching off head after the most recent commits, but then that would mean committing to the master branch which means working in a VM.
Hope this makes some kind of sense, I'll try and do some diagrams if not. | 0 | 0 | 1.2 | 0 | true | 24,913,304 | 0 | 77 | 1 | 0 | 0 | 24,908,188 | IMHO you should probably commit in your master branch, then rebase your upgrade branch, it will make more sense in your repository history.
If those commits work in both environments, you should use a different branch based on the master one, so you can work on the newer version of Python, then merge it into master, then rebase your upgrade branch. | 1 | 0 | 0 | Managing a different python version as a branch in git | 1 | python,git,version-control,branching-and-merging | 0 | 2014-07-23T10:36:00.000
I want to order a large SQLite table and write the result into another table. This is because I need to iterate in some order and ordering takes a very long time on that big table.
Can I rely on a (Python) iterator giving me the rows in the same order as I INSERTed them? Is there a way to guarantee that? (I heard comments that due to caching the order might break.) | 0 | 2 | 1.2 | 0 | true | 24,909,921 | 0 | 46 | 1 | 0 | 0 | 24,909,851 | I think you are approaching this wrong. If it is taking too long to extract data in a certain order from a table in any SQL database, that is a sign that you need to add an index. | 1 | 0 | 0 | Are SQLite rows ordered persistently? | 1 | python,sql,sqlite | 0 | 2014-07-23T11:55:00.000
So I'm fairly new to Django development and I started using the cx_Oracle and MySQLdb libraries to connect to Oracle and MySQL databases. The idea is to build an interface that will connect to multiple databases and support CRUD ops. The user logs in with the db credentials for the respective databases. I tried not using the Django ORM (I know you may ask then what is the point), but then it is still all a learning endeavor for me. Without the Django ORM (or any ORM for that matter), I was having trouble persisting db connections across multiple requests (I tried using sessions). I need some direction as to what is the best way to design this. | 0 | 0 | 1.2 | 0 | true | 24,917,828 | 1 | 110 | 1 | 0 | 0 | 24,912,020 | Django uses connection pooling (i.e. a few requests share the same DB connection). Of course, you can write a middleware to close and reinitialize the connection on every request, but I can't guarantee you will not create race conditions, and, as you said, there is no point in doing so.
If you want to make automatic multi-database CRUD, you'd better use some other framework (maybe Flask or Bottle), because Django is optimized in many respects for content sites with a pre-set data schema.
Also, it's not quite a simple application, and maybe it's not a good way to learn a new technology at all. Try starting with something simpler, maybe. | 1 | 0 | 0 | How to use the Django ORM for creating an interface like MySQL admin that connects to multiple databases | 1 | django,mysql-python,django-orm,cx-oracle | 0 | 2014-07-23T13:38:00.000
I want to store an HTML string in a SQL Server database using the pyodbc driver. I have used nvarchar(max) as the data type for storing it in the database, but it is throwing the following error:
Error:
('HY000', '[HY000] [Microsoft][ODBC SQL Server Driver]Warning: Partial insert/update. The insert/update of a text or image column(s) did not succeed. (0) (SQLPutData); [42000] [Microsoft][ODBC SQL Server Driver][SQL Server]The text, ntext, or image pointer value conflicts with the column name specified. (7125)') | 2 | 2 | 0.379949 | 0 | false | 31,753,938 | 0 | 1,047 | 1 | 0 | 0 | 24,930,835 | The link that Anthony Kong supplied includes something that may resolve the issue; it did for me in a very similar situation.
switch to DRIVER={SQL Server Native Client 10.0} instead of DRIVER={SQL Server} in the connection string
This would be for Sql Server 2008 (you didn't specify the Edition); for Sql Server 2012 it would be Native Client 11.0. | 1 | 0 | 0 | Pyodbc Store html unicode string in Sql Server | 1 | python | 0 | 2014-07-24T10:06:00.000 |
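A minimal sketch of the suggested driver switch in a pyodbc connection string; server, database, credentials and the table are placeholders:

import pyodbc

conn = pyodbc.connect(
    "DRIVER={SQL Server Native Client 10.0};"   # instead of DRIVER={SQL Server}
    "SERVER=myserver;DATABASE=mydb;UID=user;PWD=secret"
)
cur = conn.cursor()
cur.execute("INSERT INTO pages (body) VALUES (?)", (u"<html>...</html>",))
conn.commit()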
I have a python webpage which pulls information from a MSSQL database with pyodbc.
This works; however, since some of the queries that get run are quite heavy, the webpage can take 20-30 seconds to load.
I want to fix this. What would be the best way to run all queries once every 15-30 minutes, store that data on the server or locally, and pull that data into the webpage instead of rerunning the queries on page load?
I would like to have a relatively fast way for the webpage to access the data so that loading the webpage would only take 1-2 seconds max.
Redis is really fast but isn't really suited to this, as it is too simple (key-value pairs only).
The most advanced thing I really need is a table with a few rows and columns (always fewer than 10).
Is there a relatively fast way to store such data locally? | 0 | 0 | 1.2 | 0 | true | 24,934,239 | 0 | 764 | 1 | 0 | 0 | 24,933,185 | I have run into this when creating large reports. Nobody will wait for a 30 second query, even if it's going back over 15 years of sales data.
You have a few options:
Create a SQL Job in the SQL Server Agent to run a stored procedure that runs the query and saves to a table. (This is what I do)
Use a scheduled task to run the query and save it to another table. I think python could drive it on a windows box, but never used it myself. I would do it in .NET.
Sometimes creating a view is enough of a performance boost, but depends on your data and database setup. In addition check if there are any indexes or other common performance gains you can make.
I really think #1 is elegant, simple, and keeps all the work in the database. | 1 | 0 | 0 | async information from MSSQL database | 1 | python,sql,sql-server,pyodbc | 0 | 2014-07-24T12:03:00.000 |
I am developing a system that will need to connect to a remote MySQL database on the fly to do a specific task. To accomplish this, I am thinking of using the MySQLdb module in Python. Since the remote database is not part of the system itself, I would prefer not to add it to the system's core database settings (DATABASES in settings.py). Is there a much better way to accomplish this aside from using the MySQLdb module? Is there a built-in Django module that I can use? | 0 | 1 | 0.099668 | 0 | false | 24,997,975 | 1 | 56 | 2 | 0 | 0 | 24,944,869 | For working inside a virtualenv you need to install:
pip install MySQL-python==1.2.5 | 1 | 0 | 0 | Django Database Module | 2 | database,django,mysql-python,django-database | 0 | 2014-07-24T22:10:00.000 |
I am developing a system that will need to connect to a remote MySQL database on the fly to do a specific task. To accomplish this, I am thinking of using the MySQLdb module in Python. Since the remote database is not part of the system itself, I would prefer not to add it to the system's core database settings (DATABASES in settings.py). Is there a much better way to accomplish this aside from using the MySQLdb module? Is there a built-in Django module that I can use? | 0 | 0 | 1.2 | 0 | true | 24,997,774 | 1 | 56 | 2 | 0 | 0 | 24,944,869 | MySQLdb is the best way to do this. | 1 | 0 | 0 | Django Database Module | 2 | database,django,mysql-python,django-database | 0 | 2014-07-24T22:10:00.000
I made a little PostgreSQL trigger with Plpython. This trigger plays a bit with the file system, creating and deleting some files of mine. The created files are owned by the "postgres" unix user, but I would like them to be owned by another user, let's say foobar. The triggers are installed with user "foobar" and executed with user "foobar" too.
Is there a way to execute the SQL trigger with the unix user 'foobar' with PostgreSQL or Plpython?
Should I use SET ROLE foobar ?
Playing with SECURITY INVOKER and SECURITY DEFINER does not seem to be good enough. | 0 | 3 | 1.2 | 0 | true | 24,958,698 | 0 | 1,158 | 1 | 0 | 0 | 24,951,431 | You're confusing operating system users and PostgreSQL users.
SECURITY DEFINER lets you run a function as the defining PostgreSQL user. But no matter which PostgreSQL user is running it, the operating system user that the back-end server runs as is always the same - usually the operating system user postgres.
By design, the PostgreSQL server cannot run operating system commands or system calls as other operating system users. That would be a nasty security hole.
However, if you want to permit that, you can. You could:
Grant the postgres user sudo rights to run some or all commands as other users; or
Write a program to run with setuid rights to do what you want and grant the postgres user the right to execute it.
In either case, the only way to run these programs is by launching them from an untrusted procedural language like plpython or plperl, or from a C extension.'
It isn't clear why you want to set the file ownership like this in the first place, but I suspect it's probably not a great idea. What if the PostgreSQL client and server aren't even on the same computer? What if there's no operating system user for that PostgreSQL user, or the usernames are different? etc. | 1 | 0 | 0 | PostgreSQL trigger with a given role | 2 | postgresql,roles,plpython | 0 | 2014-07-25T08:35:00.000 |
I am using CherryPy along with SQLAlchemy and MySQL as the backend. I would like to know the ways of dealing with Unicode strings in a CherryPy web application. One brute-force way would be to convert all strings coming in as parameters into Unicode (and then encode them as UTF-8) before storing them in the database. But I was wondering if there is any standard way of handling Unicode characters in a web application. I tried CherryPy's tools.encode but it doesn't seem to work for me (maybe I haven't understood it properly yet). Or maybe there are standard Python libraries to handle Unicode which I could just import and use. What ways should I look for? | 0 | 0 | 0 | 0 | false | 25,016,312 | 0 | 153 | 1 | 0 | 0 | 24,997,946 | SQLAlchemy provides Unicode or UnicodeText for your purposes.
Also don't forget about u'text' | 1 | 0 | 0 | how to handle UNICODE characters in cherrypy-sqlalchemy-mysql application? | 1 | mysql,python-2.7,unicode,sqlalchemy,cherrypy | 0 | 2014-07-28T14:52:00.000 |
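A minimal sketch of the Unicode column types mentioned in the answer; the model is illustrative:

# -*- coding: utf-8 -*-
from sqlalchemy import Column, Integer, Unicode, UnicodeText
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Comment(Base):
    __tablename__ = "comments"
    id = Column(Integer, primary_key=True)
    title = Column(Unicode(200))      # short unicode strings
    body = Column(UnicodeText)        # arbitrary-length unicode text

# Pass Python unicode objects in, e.g. Comment(title=u"héllo", body=u"...")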
I'm using Python to write a report which is put into an Excel spreadsheet.
There are four columns, namely:
Product Name | Previous Value | Current Value | Difference
When I am done putting in all the values, I then want to sort them based on Current Value. Is there a way I can do this in xlwt? I've only seen examples of sorting a single column. | 1 | -1 | -0.197375 | 0 | false | 25,032,965 | 0 | 1,410 | 1 | 0 | 0 | 25,024,437 | You will get the data from queries, right? Then you will write it to an Excel file with xlwt. Just before writing, you can sort it. If you can show us your code, then maybe I can optimize it. Otherwise, you have to follow wnnmaw's advice and do it in a more complicated way.
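A minimal sketch of the sort-before-writing approach from the answer, using xlwt; the row data is made up for illustration:

import xlwt

rows = [
    ("Widget A", 10.0, 12.5),
    ("Widget B", 8.0, 7.25),
    ("Widget C", 5.0, 9.0),
]

# Sort in memory on "Current Value" (third element) before anything is written.
rows.sort(key=lambda r: r[2], reverse=True)

book = xlwt.Workbook()
sheet = book.add_sheet("Report")
for col, header in enumerate(("Product Name", "Previous Value", "Current Value", "Difference")):
    sheet.write(0, col, header)
for i, (name, prev, cur) in enumerate(rows, start=1):
    sheet.write(i, 0, name)
    sheet.write(i, 1, prev)
    sheet.write(i, 2, cur)
    sheet.write(i, 3, cur - prev)
book.save("report.xls")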
I am using Robot Framework with Database Library to test database queries on localhost. I am running it by XAMPP.
This is my test case:
*** Settings ***
Library    DatabaseLibrary
*** Variables ***
@{DB}    robotframework    root    \    localhost    3306
*** Test Cases ***
Select from database
    [Tags]    This
    Connect To Database    MySQLdb    @{DB}[0]    @{DB}[1]    @{DB}[2]    @{DB}[3]    @{DB}[4]
    @{results}=    Query    Select * From tbName
    Log Many    @{results}
I have installed MySQLdb for Python 2.7; however, when I run it using pybot, it keeps returning this error:
Select from database | FAIL |
NoSectionError: No section: 'default'
Please help me to solve this problem. Thanks. | 1 | 2 | 0.379949 | 0 | false | 32,266,513 | 1 | 4,534 | 1 | 0 | 0 | 25,072,996 | You should check the content of dbConfigFile. You don't specify one so the default one is ./resources/db.cfg.
The error says that when Python tries to parse that file it cannot find a section named default. The documentation says:
note: specifying dbapiModuleName, dbName dbUsername or dbPassword directly will override the properties of the same key in dbConfigFile
so even if you specify all the properties, it still reads the config file. | 1 | 0 | 0 | Error: No section: 'default' in Robot Framework using DatabaseLibrary | 1 | python-2.7,robotframework | 0 | 2014-08-01T04:44:00.000
I'm initiating celery tasks via after_insert events.
Some of the celery tasks end up updating the db and therefore need the id of the newly inserted row. This is quite error-prone because it appears that if the celery task starts running immediately sometimes sqlalchemy will not have finished committing to the db and celery won't find the row.
What are my other options?
I guess I could gather these celery tasks up somehow and only send them on "after_commit", but it feels unnecessarily complicated. | 0 | 0 | 1.2 | 0 | true | 25,086,833 | 0 | 209 | 1 | 1 | 0 | 25,078,815 | It wasn't so complicated: subclass Session, providing a list that tasks get appended to via after_insert, then run through the list in after_commit. | 1 | 0 | 0 | SQLAlchemy after_insert triggering celery tasks | 1 | python,sqlalchemy,celery | 0 | 2014-08-01T11:01:00.000
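A minimal sketch of the pattern described in the answer, expressed with SQLAlchemy events rather than a literal Session subclass; the model and the Celery task (handle_message) are placeholders:

from sqlalchemy import Column, Integer, String, event
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import Session

Base = declarative_base()

class Message(Base):
    __tablename__ = "messages"
    id = Column(Integer, primary_key=True)
    body = Column(String(200))

@event.listens_for(Message, "after_insert")
def remember_new_row(mapper, connection, target):
    # Don't call Celery here; just remember the id until the transaction commits.
    Session.object_session(target).info.setdefault("pending", []).append(target.id)

@event.listens_for(Session, "after_commit")
def dispatch_tasks(session):
    for row_id in session.info.pop("pending", []):
        handle_message.delay(row_id)   # handle_message is the Celery task (placeholder)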
I'm currently in the process of trying to redesign the general workflow of my lab, and am coming up against a conceptual roadblock that is largely due to my general lack of knowledge in this subject.
Our data currently is organized in a typical file system structure along the lines of:
Date\Cell #\Sweep #
where for a specific date there are generally multiple Cell folders, and within those Cell folders there are multiple Sweep files (these are relatively simple .csv files; the recording parameters are saved separately in .xml files).
Our workflow typically involves opening multiple sweep files within a Cell folder, averaging them, and then averaging those with data points from other Cell folders, often across multiple days.
This is relatively straightforward to do with the Pandas and Numpy, although there is a certain “manual” feel to it when remotely accessing folders saved to the lab server. We also, on occasion, run into issues because we often have to pull in data from many of these files at once. While this isn’t usually an issue, the files can range between a few MBs to 1000s of MBs in size. In the latter case we have to take steps to not load the entire file into memory (or not load multiple files at once at the very least) to avoid memory issues.
As part of this redesign I have been reading about Pytables for data organization and for accessing data sets that may be too large to store within memory. So I guess my 2 main questions are
If the out-of-memory issues aren’t significant (i.e. that utility wouldn’t be utilized often), are there any significant advantages to using something like Pytables for data organization over simply maintaining a file system on a server (or locally)?
Is there any reason NOT to go the Pytables database route? We are redesigning our data collection as well as our storage, and one option is to collect the data directly into Pandas dataframes and save the files in the HDF5 file type. I’m currently weighing the cost/benefit of doing this over the current system of the data being stored into csv files and then loaded into Pandas for analysis later on.
My thinking is that by creating a database vs. the filesystem we currently have we may 1. be able to reduce (somewhat anyway) file size on disk through the compression that hdf5 offers and 2. accessing data may overall become easier because of the ability to query based on different parameters. But my concern for 2 is that ultimately, since we're usually just opening an entire file, we won't be utilizing that functionality all that much - we'd basically be performing the same steps that we would need to perform to open a file (or a series of files) within a file system. Which makes me wonder whether the upfront effort that this would require is worth it in terms of our overall workflow. | 0 | 0 | 0 | 1 | false | 25,122,607 | 0 | 332 | 1 | 0 | 0 | 25,110,089 | First of all, I am a big fan of Pytables, because it helped me manage huge data files (20GB or more per file), which I think is where Pytables plays out its strong points (fast access, built-in querying etc.). If the system is also used for archiving, the compression capabilities of HDF5 will reduce space requirements and reduce network load for transfer. I do not think that 'reproducing' your file system inside an HDF5 file has advantages (happy to be told I'm wrong on this). I would suggest a hybrid approach: keep the normal filesystem structure and put the experimental data in hdf5 containers with all the meta-data. This way you keep the flexibility of your normal filesystem (access rights, copying, etc.) and can still harness the power of pytables if you have bigger files where memory is an issue. Pulling the data from HDF5 into normal pandas or numpy is very cheap, so your 'normal' work flow shouldn't suffer. | 1 | 0 | 0 | Benefits of Pytables / databases over file system for data organization? | 1 | python,csv,organization,pytables | 0 | 2014-08-03T23:40:00.000
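A minimal sketch of the hybrid approach suggested in the answer: keep the Date/Cell folder layout but store each cell's sweeps in one HDF5 container via pandas; paths, keys and the attrs usage are illustrative:

import pandas as pd

# Writing: one HDF5 file per cell, one key per sweep, metadata attached to the storer.
store = pd.HDFStore("2014-08-01/cell01/sweeps.h5", complevel=9, complib="blosc")
sweep = pd.read_csv("2014-08-01/cell01/sweep_003.csv")
store.put("sweep_003", sweep, format="table")
store.get_storer("sweep_003").attrs.params = {"gain": 20, "rate_hz": 10000}
store.close()

# Reading: pull a single sweep back without touching the others.
with pd.HDFStore("2014-08-01/cell01/sweeps.h5") as store:
    df = store["sweep_003"]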
My RDS is in a VPC, so it has a private IP address. I can connect to my RDS database instance from my local computer with pgAdmin using SSH tunneling via an EC2 Elastic IP.
Now I want to connect to the database instance in my code in python. How can I do that? | 2 | 0 | 0 | 0 | false | 25,115,502 | 0 | 667 | 1 | 0 | 1 | 25,112,648 | Point your python code to the same address and port you're using for the tunnelling.
If you're not sure check the pgAdmin destination in the configuration and just copy it. | 1 | 0 | 0 | AWS - Connect to RDS via EC2 tunnel | 1 | postgresql,python-2.7,amazon-web-services,psycopg2,amazon-vpc | 0 | 2014-08-04T06:08:00.000 |
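A minimal sketch of pointing psycopg2 at the tunnel endpoint, as the answers suggest; the local port and credentials are placeholders that should match whatever the SSH tunnel and pgAdmin use:

import psycopg2

# Assumes an SSH tunnel is already forwarding localhost:5433 to the RDS endpoint, e.g.:
#   ssh -i key.pem -N -L 5433:my-rds-host.rds.amazonaws.com:5432 ec2-user@my-ec2-elastic-ip
conn = psycopg2.connect(
    host="localhost",
    port=5433,
    dbname="mydb",
    user="dbuser",
    password="secret",
)
cur = conn.cursor()
cur.execute("SELECT version()")
print(cur.fetchone())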
I'm using Apache with mod_wsgi and Flask to run my application. I use from yourapplication import app as application to start my application. That works fine so far. The problem is that with every request a new instance of my application is created. That leads to the unfortunate situation that my Flask application creates a new database connection but only closes it after about 15 min. Since my server allows only 16 open DB connections, the server starts to block requests very soon. BTW: this is not happening when I run Flask without Apache/mod_wsgi, since it opens only one connection and serves all requests as I want.
What I want: I want to run only one Flask instance which then serves all requests. | 1 | 1 | 1.2 | 0 | true | 25,143,417 | 1 | 1,457 | 1 | 0 | 0 | 25,143,105 | The WSGIApplicationGroup directive may be what you're looking for as long as you have the wsgi app running in daemon mode (otherwise I believe apache's default behavior is to use prefork which spins up a process to handle each individual request):
The WSGIApplicationGroup directive can be used to specify which application group a WSGI application or set of WSGI applications belongs to. All WSGI applications within the same application group will execute within the context of the same Python sub interpreter of the process handling the request.
You have to provide an argument to the directive that specifies a name for the application group. There's a few expanding variables: %{GLOBAL}, %{SERVER}, %{RESOURCE} and %{ENV:variable}; or you can specify your own explicit name. %{GLOBAL} is special in that it expands to the empty string, which has the following behavior:
The application group name will be set to the empty string.
Any WSGI applications in the global application group will always be executed within the context of the first interpreter created by Python when it is initialised. Forcing a WSGI application to run within the first interpreter can be necessary when a third party C extension module for Python has used the simplified threading API for manipulation of the Python GIL and thus will not run correctly within any additional sub interpreters created by Python.
I would recommend specifying something other than %{GLOBAL}.
For every process you have mod_wsgi spawn, everything will be executed in the same environment. Then you can simply control the number of database connections based on the number of processes you want mod_wsgi to spawn. | 1 | 0 | 0 | start only one flask instance using apache + wsgi | 1 | python,apache,flask,wsgi | 0 | 2014-08-05T15:49:00.000 |
My installed version of the python(2.7) module pandas (0.14.0) will not import. The message I receive is this:
UserWarning: Installed openpyxl is not supported at this time. Use >=1.6.1 and <2.0.0.
Here's the problem - I already have openpyxl version 1.8.6 installed so I can't figure out what the problem might be! Does anybody know where the issue may lie? Do I need a different combination of versions? | 0 | 0 | 0 | 1 | false | 25,178,533 | 0 | 114 | 1 | 0 | 0 | 25,168,058 | The best thing would be to remove the version of openpyxl you installed and let Pandas take care of it. | 1 | 0 | 0 | Python pandas module openpxyl version issue | 1 | python,pandas,openpyxl,versions | 0 | 2014-08-06T18:55:00.000
I've looked through all the docs I could find, and read the source code...and it doesn't seem you can actually create a MySQL database (or any other kind, that I could find) using peewee. If so, that means for any database I may need to connect to, I would need to create it manually using mysql or some other tool.
Is that accurate, or am I missing something? | 11 | 3 | 0.197375 | 0 | false | 25,365,070 | 0 | 4,151 | 2 | 0 | 0 | 25,194,297 | Peewee cannot create databases with MySql or with other systems that require database and user setup, but will create the database with sqlite when the first table is created. | 1 | 0 | 0 | Can peewee create a new MySQL database | 3 | python-3.x,peewee | 0 | 2014-08-08T00:23:00.000 |
I've looked through all the docs I could find, and read the source code...and it doesn't seem you can actually create a MySQL database (or any other kind, that I could find) using peewee. If so, that means for any database I may need to connect to, I would need to create it manually using mysql or some other tool.
Is that accurate, or am I missing something? | 11 | 14 | 1.2 | 0 | true | 25,195,428 | 0 | 4,151 | 2 | 0 | 0 | 25,194,297 | Peewee can create tables but not databases. That's standard for ORMs, as creating databases is very vendor-specific and generally considered a very administrative task. PostgreSQL requires you to connect to a specific database, Oracle muddles the distinction between users and databases, SQLite considers each file to be a database...it's very environment specific. | 1 | 0 | 0 | Can peewee create a new MySQL database | 3 | python-3.x,peewee | 0 | 2014-08-08T00:23:00.000 |
I have developed a website where the pages are simply html tables. I have also developed a server by expanding on python's SimpleHTTPServer. Now I am developing my database.
Most of the table contents on each page are static and don't need to be touched. However, there is one column per table (i.e. page) that needs to be editable and stored. The values are simply text that the user can enter. The user enters the text via html textareas that are appended to the tables via javascript.
The database is to store key/value pairs where the value is the user entered text (for now at least).
Current situation
Because the original format of my webpages was xlsx files I opted to use an excel workbook as my database that basically just mirrors the displayed web html tables (pages).
I hook up to the excel workbook through win32com. Every time the table (page) loads, javascript iterates through the html textareas and sends an individual request to the server to load in its respective text from the database.
Currently this approach works but is terribly slow. I have tried to optimize everything as much as I can and I believe the speed limitation is a direct consequence of win32com.
Thus, I see four possible ways to go:
Replace my current win32com functionality with xlrd
Try to load all the html textareas for a table (page) at once through one server call to the database using win32com
Switch to something like sql (probably use mysql since it's simple and robust enough for my needs)
Use xlrd but make a single call to the server for each table (page) as in (2)
My schedule to build this functionality is around two days.
Does anyone have any thoughts on the tradeoffs in time-spent-coding versus speed of these approaches? If anyone has any better/more streamlined methods in mind please share! | 0 | 1 | 1.2 | 0 | true | 25,203,796 | 1 | 273 | 1 | 0 | 0 | 25,195,723 | Probably not the answer you were looking for, but your post is very broad, and I've used win32com and Excel a fair bit and don't see those as good tools for your goal. An easier strategy is this:
for the server, use Flask: it is a Python HTTP server that makes it crazy easy to respond to HTTP requests via Python code and HTML templates. You'll have a fully capable server running in 5 minutes; then you will need a bit of time to create code to get data from your DB and render it from templates (which are really easy to use).
for the database, use SQLite (there is far more overhead integrating with MySQL); because you only have 2 days,
you could also use a simple CSV file, since the API (Python has a CSV file read/write module) is much simpler, less ramp up time. One CSV per user, easy to manage. You don't worry about insertion of rows for a user, you just append; and you don't implement remove of rows for a user, you just mark as inactive (a column for active/inactive in your CSV). In processing GET request from client, as you read from the CSV, you can count how many certain rows are inactive, and do a re-write of the CSV, so once in a while the request will be a little slower to respond to client.
even simpler yet, you could use an in-memory data structure of your choice if you don't need persistence across restarts of the server. If this is for a demo, this should be an acceptable limitation.
for the client side, use jQuery on top of javascript -- maybe you are doing that already. Makes it super easy to manipulate the DOM and use effects like slide-in/out etc. Get yourself the book "Learning jQuery", you'll be able to make good use of jQuery in just a couple hours.
If you only have two days it might be a little tight, but you will probably need more than 2 days to get around the issues you are facing with your current strategy, and issues you will face imminently. | 1 | 0 | 0 | Database in Excel using win32com or xlrd Or Database in mysql | 1 | python,mysql,excel,win32com,xlrd | 0 | 2014-08-08T03:38:00.000 |
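A minimal sketch of the Flask + SQLite strategy recommended in the answer; the routes, the notes table (assumed to have a UNIQUE(page, cell_key) constraint) and field names are illustrative:

import sqlite3
from flask import Flask, jsonify, request

app = Flask(__name__)
DB = "notes.db"

@app.route("/notes/<page>", methods=["GET"])
def get_notes(page):
    conn = sqlite3.connect(DB)
    rows = conn.execute("SELECT cell_key, text FROM notes WHERE page = ?", (page,)).fetchall()
    conn.close()
    return jsonify(dict(rows))          # one request returns every textarea value for the page

@app.route("/notes/<page>", methods=["POST"])
def save_note(page):
    conn = sqlite3.connect(DB)
    # Relies on the assumed UNIQUE(page, cell_key) constraint to overwrite edits.
    conn.execute("INSERT OR REPLACE INTO notes (page, cell_key, text) VALUES (?, ?, ?)",
                 (page, request.form["cell_key"], request.form["text"]))
    conn.commit()
    conn.close()
    return "", 204

if __name__ == "__main__":
    app.run(debug=True)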
This is not a question about code; I need to extract some BLOB data from an Oracle database using a Python script. My question is: what are the steps in dealing with BLOB data, and how do I read it as images, videos and text? Since I have no access to the database itself, is it possible to know the type of BLOBs stored, i.e. whether they are pictures, videos or texts? Do I need encoding or decoding in order to transfer these BLOBs into .jpg, .avi or .txt files? These are very basic questions, but I am new to programming so I need some help to find a starting point :) | 0 | 0 | 0 | 0 | false | 25,205,260 | 0 | 1,342 | 1 | 0 | 0 | 25,205,157 | If you have a pure BLOB in the database, as opposed to, say, an ORDImage that happens to be stored in a BLOB under the covers, the BLOB itself has no idea what sort of binary data it contains. Normally, when the table was designed, a column would be added that would store the data type and/or the file name. | 1 | 0 | 0 | Reading BLOB data from Oracle database using python | 1 | oracle,python-2.7,blob | 0 | 2014-08-08T13:54:00.000
I am still a noob in web app development, so sorry if this question seems obvious to you guys.
Currently I am developing a web application for my university using Python and Django. One feature of my web app is to retrieve a large data set from a table in the database (PostgreSQL) and display it in tabular form on a web page. Each column of the table needs sorting and filtering. The data set goes up to roughly 2 million rows.
So I wonder whether something like jpxGrid could help me achieve this goal, or whether it would be too slow to handle/sort/display/render such a large data set on a web page. I plan to retrieve all the data in the table once (initiating only one database query) and pass it into jpxGrid; however, my colleague suggests that each sort and filter should initiate a separate query to the database to achieve better performance (the database ORDER BY is very fast). I initially tried another open-source jQuery library that handles the form and enables sorting, filtering and paging (a non-professional, outdated one), which starts to lag after 5k data rows and becomes impossible to use after 20k rows.
My question is whether something like jpxGrid is a good solution to my problem, or whether I should build my own system that lets the database handle the sorting and filtering (I would probably need to add paging too). Thank you very much for helping. | 1 | 3 | 1.2 | 0 | true | 25,208,098 | 1 | 1,235 | 1 | 0 | 0 | 25,207,697 | Are you allowed to use paging in your output? If so, I'd start by setting a page size of 100 (for example) and then use LIMIT 100 in the various SQL queries. Essentially, each time the user clicks next or prev on the web page, a new query would be executed based on the current filtering or sorting options, with the LIMIT applied. The SQL should be pretty easy to figure out. | 1 | 0 | 0 | Handle and display large data set in web browser | 1 | python,database,django,postgresql,web | 0 | 2014-08-08T16:09:00.000
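A hedged illustration of the paging approach in the answer, using Django's raw cursor so PostgreSQL does the sorting and returns a single page; the table and column names are assumptions:

from django.db import connection

ALLOWED_SORT_COLUMNS = ("id", "name", "created_at")  # never interpolate raw user input

def fetch_page(sort_col, page, page_size=100):
    if sort_col not in ALLOWED_SORT_COLUMNS:
        sort_col = "id"
    cursor = connection.cursor()
    cursor.execute(
        "SELECT id, name, created_at FROM myapp_record "
        "ORDER BY " + sort_col + " LIMIT %s OFFSET %s",
        [page_size, page * page_size],
    )
    return cursor.fetchall()   # one page of rows for the grid to render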
I am scoping out a project with large, mostly-uncompressible time series data, and wondering if Django + Postgres with raw SQL is the right call.
I have time series data that is ~2K objects/hour, every hour. This is about 2 million rows per year I store, and I would like to 1) be able to slice off data for analysis through a connection, 2) be able to do elementary overview work on the web, served by Django. I think the best idea is to use Django for the objects themselves, but drop to raw SQL to deal with the large time series data associated. I see this as a hybrid approach; that might be a red flag, but using the full ORM for a long series of data samples feels like overkill. Is there a better way? | 21 | 0 | 0 | 0 | false | 25,887,408 | 1 | 10,343 | 1 | 0 | 0 | 25,212,009 | You might also consider using the PostGIS postgres extension which includes support for raster data types (basically large grids of numbers) and has many features to make use of them.
However, do not use the ORM in this case; you will want to run SQL directly on the server. The ORM will add a huge amount of overhead for large numerical datasets. It's also not well suited to handling large matrices within Python itself; for that you need numpy. | 1 | 0 | 0 | Django + Postgres + Large Time Series | 4 | python,django,postgresql,heroku,bigdata | 0 | 2014-08-08T20:48:00.000
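As a sketch only (the table layout is invented), this is the kind of "raw SQL straight into numpy" code the answer is pointing at:

import numpy as np
from django.db import connection

def load_series(sensor_id, start, end):
    cursor = connection.cursor()
    cursor.execute(
        "SELECT extract(epoch FROM ts), value FROM samples "
        "WHERE sensor_id = %s AND ts BETWEEN %s AND %s ORDER BY ts",
        [sensor_id, start, end],
    )
    # shape (n, 2): unix timestamp, value -- ready for numpy/scipy analysis
    return np.array(cursor.fetchall(), dtype=float)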
I'm working on a project that allows users to enter SQL queries with parameters; those queries will be executed over a period of time the users decide (say every 2 hours for 6 months), and they then get the results back at their email address.
They'll get it in the form of an HTML-email message, so what the system basically does is run the queries, and generate HTML that is then sent to the user.
I also want to save those results, so that a user can go on our website and look at previous results.
My question is - what data do I save?
Do I save the SQL query with those parameters (i.e. the date parameters, so they can see the results relevant to that specific date)? This means that when the user clicks on this specific result, I need to execute the query again.
Save the HTML that was generated back then, and simply display it when the user wishes to see this result?
I'd appreciate it if somebody would explain the pros and cons of each solution, and which one is considered the best & the most efficient.
The archive will probably be 1-2 months old, and I can't really predict the amount of rows each query will return.
Thanks! | 0 | 1 | 0.066568 | 0 | false | 25,222,611 | 1 | 35 | 3 | 0 | 0 | 25,222,515 | If I were to create this type of application, then:
I would have some common queries - get by current date, current time, date ranges, time ranges, and others based on my application - for the user to select easily.
Some autocompletions for common keywords.
If the data changes frequently there is no point saving the HTML; generating a new one each time is the better option. | 1 | 0 | 0 | Creating an archive - Save results or request them every time? | 3 | python,html,database | 0 | 2014-08-09T20:01:00.000
I'm working on a project that allows users to enter SQL queries with parameters; those queries will be executed over a period of time the users decide (say every 2 hours for 6 months), and they then get the results back at their email address.
They'll get it in the form of an HTML-email message, so what the system basically does is run the queries, and generate HTML that is then sent to the user.
I also want to save those results, so that a user can go on our website and look at previous results.
My question is - what data do I save?
Do I save the SQL query with those parameters (i.e. the date parameters, so they can see the results relevant to that specific date)? This means that when the user clicks on this specific result, I need to execute the query again.
Save the HTML that was generated back then, and simply display it when the user wishes to see this result?
I'd appreciate it if somebody would explain the pros and cons of each solution, and which one is considered the best & the most efficient.
The archive will probably be 1-2 months old, and I can't really predict the amount of rows each query will return.
Thanks! | 0 | 1 | 1.2 | 0 | true | 25,222,656 | 1 | 35 | 3 | 0 | 0 | 25,222,515 | Specifically regarding retrieving the results from queries that have been run previously, I would suggest saving the results so they can be viewed later, rather than running the queries again and again. The main benefits of this approach are:
You avoid the unnecessary computational work of re-running the same queries;
You guarantee that the result set will be the same as the original report. For example if you save just the SQL then the records queried may have changed since the query was last run or records may have been added / deleted.
The disadvantage of this approach is that it will probably use more disk space, but this is unlikely to be an issue unless you have queries returning millions of rows (in which case html is probably not such a good idea anyway). | 1 | 0 | 0 | Creating an archive - Save results or request them every time? | 3 | python,html,database | 0 | 2014-08-09T20:01:00.000 |
I'm working on a project that allows users to enter SQL queries with parameters; those queries will be executed over a period of time the users decide (say every 2 hours for 6 months), and they then get the results back at their email address.
They'll get it in the form of an HTML-email message, so what the system basically does is run the queries, and generate HTML that is then sent to the user.
I also want to save those results, so that a user can go on our website and look at previous results.
My question is - what data do I save?
Do I save the SQL query with those parameters (i.e. the date parameters, so they can see the results relevant to that specific date)? This means that when the user clicks on this specific result, I need to execute the query again.
Save the HTML that was generated back then, and simply display it when the user wishes to see this result?
I'd appreciate it if somebody would explain the pros and cons of each solution, and which one is considered the best & the most efficient.
The archive will probably be 1-2 months old, and I can't really predict the amount of rows each query will return.
Thanks! | 0 | 1 | 0.066568 | 0 | false | 25,222,678 | 1 | 35 | 3 | 0 | 0 | 25,222,515 | The crucial difference is that if the data changes, a new query will return a different result than what was saved some time ago, so you have to decide whether the user should get the up-to-date data or a snapshot of what the data used to be.
If the relevant data does not change, it's a matter of how expensive the queries are, how many users will run them and how often; you may then decide to save the results instead of re-running the queries, to improve performance. | 1 | 0 | 0 | Creating an archive - Save results or request them every time? | 3 | python,html,database | 0 | 2014-08-09T20:01:00.000
I've done some research, and I don't fully understand what I found.
My aim is to use a UDP listener I wrote in Python to store data that it receives from an MT4000 telemetry device. This data is received and read in hex, and I want to put that data into a table, storing it as a base64 string. Just as you would store an integer with a 'columnname' INT definition, I would like to know how to declare a column for base64 - i.e. 'data' TEXT(base64), or something similar.
Could I do this by simply using the TEXT datatype and encoding the data in the Python program?
I may be approaching this in the wrong way, as I may have misinterpreted what I read online.
I would be very grateful for any help. Thanks,
Ed | 2 | 2 | 1.2 | 0 | true | 25,239,591 | 0 | 5,730 | 2 | 0 | 0 | 25,239,361 | You can just save the base64 string in a TEXT column type. After retrieval just decode this string with base64.decodestring(data) ! | 1 | 0 | 0 | How to store base64 information in a MySQL table? | 2 | python,mysql | 0 | 2014-08-11T08:57:00.000 |
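A small sketch of the round trip (the table name and the use of MySQLdb are my assumptions, not part of the answer):

import base64
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="user", passwd="pw", db="telemetry")
cur = conn.cursor()

raw = b"\x01\x02\xff"                      # hex payload from the UDP listener
encoded = base64.b64encode(raw)            # plain ASCII, safe to store in a TEXT column
cur.execute("INSERT INTO packets (data) VALUES (%s)", (encoded,))
conn.commit()

cur.execute("SELECT data FROM packets ORDER BY id DESC LIMIT 1")
original = base64.b64decode(cur.fetchone()[0])   # back to the original bytes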
I've done some research, and I don't fully understand what I found.
My aim is to, using a udp listener I wrote in python, store data that it receives from an MT4000 telemetry device. This data is received and read in hex, and I want that data to be put into a table, and store it as a string in base64. In terms of storing something as an integer by using the 'columnname' INT format, I would like to know how to store the information as base64? ie. 'data' TEXT(base64)? or something similar.
Could I do this by simply using the TEXT datatype, and encode the data in the python program?
I may be approaching this in the wrong way, as I may have misinterpreted what I read online.
I would be very grateful for any help. Thanks,
Ed | 2 | 0 | 0 | 0 | false | 62,777,767 | 0 | 5,730 | 2 | 0 | 0 | 25,239,361 | You can store a base64 string in a TEXT column, but in my experience I recommend the LONGTEXT type to avoid truncation errors with large base64 strings. | 1 | 0 | 0 | How to store base64 information in a MySQL table? | 2 | python,mysql | 0 | 2014-08-11T08:57:00.000
Using Boto, you can create an S3 bucket and configure a lifecycle for it; say expire keys after 5 days. I would like to not have a default lifecycle for my bucket, but instead set a lifecycle depending on the path within the bucket. For instance, having path /a/ keys expire in 5 days, and path /b/ keys to never expire.
Is there a way to do this using Boto? Or is expiration tied to buckets and there is no alternative?
Thank you | 0 | 0 | 1.2 | 0 | true | 25,245,827 | 1 | 41 | 1 | 0 | 1 | 25,245,710 | After some research in the boto docs, it looks like using the prefix parameter in the lifecycle add_rule method allows you to do this. | 1 | 0 | 0 | Setting a lifecycle for a path within a bucket | 1 | python,amazon-web-services,amazon-s3,boto | 0 | 2014-08-11T14:28:00.000 |
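For illustration, a sketch of the prefix-based rules with boto 2's lifecycle API (bucket name and rule id are placeholders); only a/ gets an expiration, so keys under b/ are left alone:

import boto
from boto.s3.lifecycle import Lifecycle, Expiration

conn = boto.connect_s3()
bucket = conn.get_bucket("my-bucket")

lifecycle = Lifecycle()
lifecycle.add_rule(id="expire-a", prefix="a/", status="Enabled",
                   expiration=Expiration(days=5))   # keys under a/ expire after 5 days
bucket.configure_lifecycle(lifecycle)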
When mounting and writing files in Google Cloud Storage using gcsfs, gcsfs creates the folders and files but does not write the file contents. Most of the time it shows an input/output error. It occurs even when we copy files from a local directory to the mounted gcsfs directory.
gcsfs version 0.15 | 1 | 0 | 0 | 0 | false | 57,971,532 | 0 | 554 | 1 | 1 | 0 | 25,265,110 | Although this is quite an old topic, I will try to provide an answer, especially for people who might stumble on this in the course of their own work. I have experience using more recent versions of gcsfs and it works quite well. You can find the latest documentation at https://gcsfs.readthedocs.io/en/latest. To make it work you need to have the environment variable:
GOOGLE_APPLICATION_CREDENTIALS=SERVICE_ACCOUNT_KEY.json. | 1 | 0 | 0 | gcsfs is not writing files in the google bucket | 1 | python-3.x,google-cloud-platform,google-cloud-storage | 0 | 2014-08-12T13:07:00.000 |
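A minimal sketch of writing through a recent gcsfs with a service-account key; the project and bucket names are placeholders:

import gcsfs

fs = gcsfs.GCSFileSystem(project="my-project",
                         token="SERVICE_ACCOUNT_KEY.json")
with fs.open("my-bucket/output/test.txt", "w") as f:
    f.write("hello from gcsfs")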
I am using Python to parse an Excel file and am accessing the application COM using excel = Dispatch('Excel.Application'). Right after a restart, the code will find the application object just fine and I will be able to access the active workbook.
The problem comes when I have had two instances of Excel open and I close the first. From then on every call to excel = Dispatch('Excel.Application') provides an application object that is different from the open instance of Excel. If I try excel.Visible=1 it opens a new Excel instance rather than showing the already open instance of excel. How do I get the COM object of the already open instance of Excel rather than creating a new instance? | 3 | 2 | 1.2 | 0 | true | 25,308,893 | 0 | 1,075 | 1 | 0 | 0 | 25,298,281 | When an application registers itself, only the first instance gets registered, until it dies and then the very next instance to register gets registered.
There's no registration queue, so when your first instance dies, the second remains unregistered; any call to Excel.Application will then launch a third instance, and callers will keep getting that one until it dies too.
In summary, the instances launched in between registered instances never get registered.
If you need to reuse an instance, you must keep a pointer to it.
That said, if you get an instance of an open Excel file, you might obtain a link to an unregistered Excel instance. For instance, if Excel 1 (registered) has workbook 1 open, and Excel 2 (unregistered) has workbook 2 open, if you ask for workbook 2, you'll get Excel 2's instance (e.g. through Workbook.Application). | 1 | 1 | 0 | win32com dispatch Won't Find Already Open Application Instance | 1 | python,excel,com,win32com | 0 | 2014-08-14T00:26:00.000 |
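A hedged sketch of the two ways of reaching a running instance that follow from this answer; the workbook path is a placeholder:

import win32com.client

try:
    # binds to whichever Excel instance is currently registered
    excel = win32com.client.GetActiveObject("Excel.Application")
except Exception:
    excel = win32com.client.Dispatch("Excel.Application")  # falls back to a new instance
excel.Visible = 1

# going through a specific open workbook can reach even an unregistered instance
wb = win32com.client.GetObject(r"C:\path\to\open_workbook.xlsx")
excel_for_wb = wb.Application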
From a pyramid middleware application I'm calling a stored procedure with pymssql. The procedure responds nicely upon the first request I pass through the middleware from the frontend (angularJS). Upon subsequent requests however, I do not get any response at all, not even a timeout.
If I then restart the pyramid application, the same behavior described above happens again.
I'm observing this behavior with a couple of procedures that were implemented just yesterday. Some other procedures implemented months ago are working just fine, regardless of how often I call them.
I'm not writing the procedures myself; they are provided to me.
From what I'm describing here, can anybody tell where the bug is most probably hiding? | 0 | 1 | 1.2 | 0 | true | 25,646,833 | 1 | 122 | 1 | 0 | 0 | 25,367,508 | The solution was rather trivial. Within one object instance, I was calling two different stored procedures without closing the connection after the first call. That apparently left a pending request in the MSSQL database, locking it for further requests. | 1 | 0 | 0 | pyramid middleware call to mssql stored procedure - no response | 1 | python-2.7,stored-procedures,pyramid,pymssql | 0 | 2014-08-18T16:05:00.000
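In the spirit of that fix, a sketch that opens and closes a connection around every stored-procedure call (server, database and procedure names are placeholders, and the procedure is assumed to return rows):

import pymssql

def call_proc(proc_name, args=()):
    conn = pymssql.connect(server="dbhost", user="user",
                           password="pw", database="mydb")
    try:
        cur = conn.cursor()
        cur.callproc(proc_name, args)
        rows = cur.fetchall()
        conn.commit()
        return rows
    finally:
        conn.close()   # without this, the next procedure call can hang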
In general I want to know the possible benefits of Graphite. For now I have a web app that receives data directly from a JavaScript Ajax call and plots the data using Highcharts.
It first runs 20 different queries, one for each graph, against my SQL database using Python.
It then sends each result to the Highcharts library using a GET Ajax call.
Highcharts then adds the new points to each graph in real time.
There is no need to save data because I only need real-time plotting within a certain time range. Data outside the time range is simply flushed away.
But when I see the 20 Ajax calls in one page I feel like I am doing this in an inefficient way although it gets the job done.
So I looked at Graphite, but it is hard for me to decide which approach is better. Since I will pull all the data from the existing SQL table, I don't need another storage layer. Everybody says Graphite performs fast, but I would still need to instantiate 20 different Graphite graphs. Please give me some guidance.
What would you do if you had to visualize 20 different real-time graphs concurrently on one page, each of which receives its own query data? | 0 | 0 | 1.2 | 0 | true | 25,381,895 | 0 | 199 | 1 | 0 | 0 | 25,374,338 | It may be better to make one Ajax call that gets all the data, and then prepare a parser that returns the data for each chart. | 1 | 0 | 0 | Graphite or multiple query with AJAX call? | 1 | javascript,python,ajax,highcharts,graphite | 0 | 2014-08-19T01:18:00.000
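The question doesn't name the Python web framework, so purely as an illustration of this answer, here is a hypothetical Flask endpoint that returns all series in one response (run_query is an assumed helper and the SQL strings are placeholders):

from flask import Flask, jsonify

app = Flask(__name__)
QUERIES = {"graph1": "SELECT ...", "graph2": "SELECT ..."}  # 20 entries in reality

@app.route("/all-series")
def all_series():
    # one Ajax call fetches everything; the client-side parser splits it per chart
    return jsonify({name: run_query(sql) for name, sql in QUERIES.items()})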
I am using python's CSV module to output a series of parsed text documents with meta data. I am using the csv.writer module without specifying a special delimiter, so I am assuming it is delimited using commas. There are many commas in the text as well as in the meta data, so I was expecting there to be way more columns in the document rows, when compared to the header row.
What surprises me is that when I load the outputted file in Excel, everything looks exactly right. How does Excel know how to delimit this correctly??? How is it able to figure out which commas are text commas and which ones are delimiters?
Related question: Do people usually use CSV for saving text documents? Is this a standard practice? It seems inferior to JSON or creating a SQLite database in every sense, from long-term sustainability to ease of interpreting without errors. | 1 | 1 | 0.099668 | 0 | false | 25,380,579 | 0 | 518 | 1 | 0 | 0 | 25,380,448 | Inspect the real content of the CSV file you have created and you will see that text values are enclosed in quotes. This allows the parser to distinguish between a delimiter and the same character inside a text value.
Check the csv module documentation; it explains these details too. | 1 | 0 | 1 | Python CSV module - how does it avoid delimiter issues? | 2 | python,excel,csv,export-to-csv | 0 | 2014-08-19T09:52:00.000
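A quick demonstration of that quoting behaviour (Python 3):

import csv, io

buf = io.StringIO()
writer = csv.writer(buf)              # default dialect: comma delimiter, minimal quoting
writer.writerow(["id", "text"])
writer.writerow([1, "Hello, world"])  # the embedded comma forces quoting of that field
print(buf.getvalue())
# id,text
# 1,"Hello, world"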
We have a query which sometimes returns 0 records when called. When you call getQueryResults on the jobId, it returns a valid pageToken with 0 rows. This is a bit unexpected, since technically there is no data. What's worse is that if you keep supplying the pageToken for subsequent data pulls, it keeps giving zero rows with valid tokens at each page.
If the query does return data initially with a pageToken and you keep using the pageToken for subsequent data pulls, it returns pageToken as None after the last page, giving a termination condition.
The behavior here seems inconsistent. Is this a bug?
Here is a sample job response I see:
{u'kind': u'bigquery#getQueryResultsResponse', u'jobReference': {u'projectId': u'xxx', u'jobId': u'job_aUAK1qlMkOhqPYxwj6p_HbIVhqY'}, u'cacheHit': True, u'jobComplete': True, u'totalRows': u'0', u'pageToken': u'CIDBB777777QOGQFBAABBAAE', u'etag': u'"vUqnlBof5LNyOIdb3TAcUeUweLc/6JrAdpn-kvulQHoSb7ImNUZ-NFM"', u'schema': {......}}
I am using python and running queries on GAE using the BQ api | 1 | 0 | 1.2 | 0 | true | 25,393,093 | 1 | 471 | 1 | 1 | 0 | 25,388,124 | This is a known issue that has lingered for far far too long. It is fixed in this week's release, which should go live this afternoon or tomorrow. | 1 | 0 | 0 | BigQuery Api getQueryResults returning pageToken for 0 records | 1 | python,google-app-engine,google-bigquery | 0 | 2014-08-19T16:07:00.000 |
Let's assume I am developing a service that provides a user with articles. Users can favourite articles and I am using Solr to store these articles for search purposes.
However, when the user adds an article to their favourites list, I would like to be able to figure out which articles the user has added to favourites so that I can highlight the favourite button.
I am thinking of two approaches:
Fetch articles from Solr and then loop through each article to fetch the "favourite-status" of this article for this specific user from MySQL.
Whenever a user favourites an article, add this user's ID to a multi-valued column in Solr and check whether the ID of the current user is in this column or not.
I don't know the capacity of the multivalued column... and I also don't think the second approach would be a "good practice" (saving user-related data in index).
What other options do I have, if any? Is approach 2 a correct approach? | 0 | 0 | 1.2 | 0 | true | 25,414,143 | 1 | 102 | 1 | 0 | 0 | 25,413,343 | I'd go with a modified version of the first one - it'll keep user specific data that's not going to be used for search out of the index (although if you foresee a case where you want to search for favourite'd articles, it would probably be an interesting field to have in the index) for now. For just display purposes like in this case, I'd take all the id's returned from Solr, fetch them in one SQL statement from the database and then set the UI values depending on that. It's a fast and easy solution.
If you foresee that "search only in my fav'd articles" as a use case, I would try to get that information into the index as well (or other filter applications against whether a specific user has added the field as a favourite). I'd try to avoid indexing anything more than the user id that fav'd the article in that case.
Both solutions would however work, although the latter would require more code - and the required response from Solr could grow large if a large number of users fav's an article, so I'd try to avoid having to return a set of userid's if that's the case (many fav's for a single article). | 1 | 0 | 0 | Solr & User data | 1 | python,mysql,solr,django-haystack | 0 | 2014-08-20T19:55:00.000 |
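A hedged sketch of the first (recommended) flow; pysolr, the table layout and the surrounding cursor/user_id variables are all assumptions for illustration:

import pysolr

solr = pysolr.Solr("http://localhost:8983/solr/articles")
results = solr.search("title:django", rows=20)
article_ids = [doc["id"] for doc in results]

# one SQL statement for the favourite flags of the current user
placeholders = ",".join(["%s"] * len(article_ids))
cursor.execute(
    "SELECT article_id FROM favourites WHERE user_id = %s "
    "AND article_id IN (" + placeholders + ")",
    [user_id] + article_ids,
)
favourited = {row[0] for row in cursor.fetchall()}   # drives the highlighted button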
Logging SQL queries is useful for debugging, but in some cases it's useless to log the whole query, especially for big inserts. In those cases, displaying only the first N characters would be enough.
Is there a simple way to truncate SQL queries when they are logged? | 2 | 2 | 1.2 | 0 | true | 25,447,136 | 1 | 142 | 1 | 0 | 0 | 25,446,832 | It's quite simple actually:
In settings.py, let's say your logger is based on a handler whose formatter is named 'simple'.
'formatters': {
...
'simple': {
'format': '%(asctime)s %(message).150s'
},
...
},
The message will now be truncated to the first 150 characters. Playing with handlers will allow you to specify this parameter for each logger. Thanks, Python! | 1 | 0 | 0 | Truncate logging of sql queries in Django | 1 | python,django | 0 | 2014-08-22T12:15:00.000
I ran a simple select query (with no LIMIT applied) using the BigQuery Python API. I also supplied a destination table, as the result was too large. When run, the job returned an "unexpected LIMIT clause" error. I used IGNORE CASE at the end of the query; there is a possibility that this is causing the problem.
Has anybody run into a similar problem?
For reference, my job_id is job_QrkB7t9WFEHqcH5qfsPZZsM476E | 1 | 1 | 1.2 | 0 | true | 25,489,066 | 0 | 144 | 1 | 0 | 0 | 25,483,349 | This issue is an artifact of how bigquery does "allow large results" queries interacting poorly with the "ignore case" clause. We're tracking an internal bug on the issue, and hopefully will have a fix soon. The workaround is either to remove the "allow large results" flag or the "ignore case" clause. | 1 | 0 | 0 | BigQuery: "unexpected LIMIT clause at:" error when using list query job | 1 | python,google-bigquery | 0 | 2014-08-25T09:57:00.000 |
Background.
My OS is Win7 64bit.
My Python is 2.7 64bit from python-2.7.8.amd64.msi
My cx_Oracle is 5.0 64bit from cx_Oracle-5.0.4-10g-unicode.win-amd64-py2.7.msi
My Oracle client is 10.1 (I don't know whether it is 32- or 64-bit, but SQL*Plus is 10.1.0.2.0)
Database is
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for 64-bit Windows: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
ORACLE_HOME variable added, as suggested in haki's reply:
C:\Oracle\product\10.1.0\Client_1\
That did not work; the problem still persists.
ORACLE_HOME: trying the Oracle Instant Client from instantclient-basic-win64-10.2.0.5.zip
C:\instantclient_10_2\
C:\Users\PavilionG4>sqlplus Lee/123@chstchmp
Error 6 initializing SQL*Plus
Message file sp1.msb not found
SP2-0750: You may need to set ORACLE_HOME to your Oracle software directory
SQL*Plus is not letting me use this Oracle home.
ORACLE_HOME: coming back to
C:\Oracle\product\10.1.0\Client_1\
PATH variable
C:\Program Files (x86)\Seagate Software\NOTES\;C:\Program Files (x86)\Seagate Software\NOTES\DATA\;C:\Program Files (x86)\Java\jdk1.7.0_05\bin;C:\Oracle\product\10.1.0\Client_1\bin;C:\Oracle\product\10.1.0\Client_1\jre\1.4.2\bin\client;C:\Oracle\product\10.1.0\Client_1\jre\1.4.2\bin;C:\app\PavilionG4\product\11.2.0\dbhome_1\bin;C:\app\PavilionG4\product\11.2.0\client_2\bin;c:\Program Files (x86)\AMD APP\bin\x86_64;c:\Program Files (x86)\AMD APP\bin\x86;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;c:\Program Files (x86)\ATI Technologies\ATI.ACE\Core-Static;C:\Users\PavilionG4\AppData\Local\Smartbar\Application\;C:\PROGRA~2\IBM\SQLLIB\BIN;C:\PROGRA~2\IBM\SQLLIB\FUNCTION;C:\Program Files\gedit\bin;C:\Kivy-1.7.2-w32;C:\Program Files (x86)\ZBar\binj;C:\Program Files (x86)\Java\jdk1.7.0_05\bin;C:\Program Files\MATLAB\R2013a\runtime\win64;C:\Program Files\MATLAB\R2013a\bin;C:\Python27
TNS is :
C:\Oracle\product\10.1.0\Client_1\NETWORK\ADMIN\tnsnames.ora
REPORT1 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = 172.28.128.110)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = REPORT1)
)
)
f1.py shows me an error:
import cx_Oracle
ip = '172.25.25.42'
port = 1521
SID = 'REPORT1'
dns_tns = cx_Oracle.makedsn(ip,port,SID)
connection = cx_Oracle.connect(u"Lee",u"123",dns_tns)
cursor = connection.cursor()
connection.close()
Error
Traceback (most recent call last):
File "f1.py", line 6, in
connection = cx_Oracle.connect(u"Lee",u"123",dns_tns)
cx_Oracle.InterfaceError: Unable to acquire Oracle environment handle
Questions
1. How do I acquire the Oracle environment handle?
I have searched various websites; unfortunately, none of them address my problem at all.
2. How can I let Python use another Oracle client without impacting the existing one? | 4 | 2 | 0.379949 | 0 | false | 27,795,948 | 0 | 6,803 | 1 | 1 | 0 | 25,542,787 | If Python finds more than one OCI.DLL file in the PATH (even if they are identical) it will throw this error. (Your PATH looks like it may expose more than one.) You can manipulate the path inside your script to constrain where Python will look for the supporting Oracle files, which may be your only option if you have to run several Oracle versions/clients locally. | 1 | 0 | 0 | Python + cx_Oracle : Unable to acquire Oracle environment handle | 1 | python,oracle | 0 | 2014-08-28T07:10:00.000
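One hedged way to act on this advice is to pin the environment to a single Oracle client before cx_Oracle is imported; the client directory below is just the one mentioned in the question:

import os

oracle_home = r"C:\Oracle\product\10.1.0\Client_1"
os.environ["ORACLE_HOME"] = oracle_home
# put exactly one Oracle bin directory (one OCI.DLL) at the front of PATH
os.environ["PATH"] = oracle_home + r"\bin" + os.pathsep + os.environ.get("PATH", "")

import cx_Oracle   # imported only after the environment is pinned down
dsn = cx_Oracle.makedsn("172.25.25.42", 1521, "REPORT1")
conn = cx_Oracle.connect(u"Lee", u"123", dsn)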
I need to know what the steps are to generate an Excel sheet in OpenERP.
Or, to put it this way: I want to generate an Excel sheet for data that I have retrieved from different tables through queries, with a function that I call from a button on a wizard. When I click the button, an Excel sheet should be generated.
I have installed OpenOffice; the problem is I don't know how to create that sheet and put data in it. Will you please tell me the steps? | 0 | 1 | 1.2 | 0 | true | 25,993,349 | 1 | 596 | 1 | 0 | 0 | 25,552,075 | You can do it easily with the Python library XlsxWriter. Just download it and add it to the OpenERP server, then look at the XlsxWriter documentation; there are also other Python libraries for generating XLSX reports. | 1 | 0 | 0 | What are the steps to create or generate an Excel sheet in OpenERP? | 1 | python,openerp | 0 | 2014-08-28T15:00:00.000
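Basic XlsxWriter usage for reference; the file name and cell contents are only an example, and wiring the generated file into an OpenERP wizard download is a separate step:

import xlsxwriter

workbook = xlsxwriter.Workbook("report.xlsx")
worksheet = workbook.add_worksheet("Data")
worksheet.write(0, 0, "Partner")     # header row
worksheet.write(0, 1, "Amount")
worksheet.write(1, 0, "Agrolait")
worksheet.write(1, 1, 1200.50)
workbook.close()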