Dataset schema: each column is listed as name (abbreviation used below), type, observed range. Each record that follows is shown as Question, a bracketed metadata line, Answer, a bracketed topic-flag line, Title, and a bracketed answer-metadata line.

Question: string, length 25 to 7.47k
Q_Score: int64, 0 to 1.24k
Users Score: int64, -10 to 494
Score: float64, -1 to 1.2
Data Science and Machine Learning (DS&ML): int64 flag, 0 to 1
is_accepted: bool, 2 classes
A_Id: int64, 39.3k to 72.5M
Web Development (Web Dev): int64 flag, 0 to 1
ViewCount: int64, 15 to 1.37M
Available Count: int64, 1 to 9
System Administration and DevOps (DevOps): int64 flag, 0 to 1
Networking and APIs (Net&APIs): int64 flag, 0 to 1
Q_Id: int64, 39.1k to 48M
Answer: string, length 16 to 5.07k
Database and SQL (DB&SQL): int64 flag, always 1 in this subset
GUI and Desktop Applications (GUI): int64 flag, 0 to 1
Python Basics and Environment (Py Basics): int64 flag, 0 to 1
Title: string, length 15 to 148
AnswerCount: int64, 1 to 32
Tags: string, length 6 to 90
Other: int64 flag, 0 to 1
CreationDate: string, length 23 (ISO timestamp)
Is it possible to write nice-formatted excel files with dataframe.to_excel-xlsxwriter combo? I am aware that it is possible to format cells when writing with pure xlsxwriter. But dataframe.to_excel takes so much less space. I would like to adjust cell width and add some colors to column names. What other alternatives would you suggest?
[Q_Score: 1 | Users Score: 2 | Score: 0.379949 | DS&ML: 1 | is_accepted: false | A_Id: 28,862,593 | Web Dev: 0 | ViewCount: 100 | Available Count: 1 | DevOps: 0 | Net&APIs: 0 | Q_Id: 28,839,976]
Answer:
I found xlwings. It's intuitive and does all the things I want to do. Also, it does well with all pandas data types.
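The answer settles on xlwings, but the combination the question asks about also works: let to_excel write the data, then drop down to the underlying XlsxWriter objects for widths and header colors. A minimal sketch (sheet name, color and widths are arbitrary, and a reasonably recent pandas is assumed):

```python
import pandas as pd

df = pd.DataFrame({"name": ["a", "b"], "value": [1, 2]})

# Let pandas write the bulk of the data, then use the underlying
# xlsxwriter workbook/worksheet objects for formatting.
with pd.ExcelWriter("report.xlsx", engine="xlsxwriter") as writer:
    df.to_excel(writer, sheet_name="Sheet1", index=False)
    workbook = writer.book
    worksheet = writer.sheets["Sheet1"]

    # Bold, colored header cells (overwrites the default header row).
    header_fmt = workbook.add_format({"bold": True, "bg_color": "#D7E4BC"})
    for col_num, col_name in enumerate(df.columns):
        worksheet.write(0, col_num, col_name, header_fmt)

    # Widen all data columns.
    worksheet.set_column(0, len(df.columns) - 1, 20)
```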
[DB&SQL: 1 | GUI: 0 | Py Basics: 0]
Title: How to do formmating with combination of pandas dataframe.to_excel and xlsxwriter?
[AnswerCount: 1 | Tags: python,pandas,xlsxwriter | Other: 0 | CreationDate: 2015-03-03T19:10:00.000]
Is it possible to add or remove entries to an excel file or a text file while it is still open (viewing live update of values from python output) instead of seeing the output in terminal?
[Q_Score: 2 | Users Score: 3 | Score: 0.53705 | DS&ML: 0 | is_accepted: false | A_Id: 28,879,986 | Web Dev: 0 | ViewCount: 2,289 | Available Count: 1 | DevOps: 0 | Net&APIs: 0 | Q_Id: 28,879,391]
Answer:
It depends on the application you are using to view the file. You will have to check the features available in the tools you are using. For instance, in Excel, this is impossible. When you open an Excel document, it actually creates an invisible copy. You are not editing the original. It is only when the file is saved that the original is updated. So, if you have a file my_excel_file.xlsx, when you open it, another file is created named ~$my_excel_file.xlsx. So, editing the original file will not update the file being viewed in the Excel application. For text files, on the other hand, there are some applications that will reload changes from disk. Sublime Text is an example of this. If you have a file open in Sublime Text, then make a change to the file with another program, Sublime Text will automatically reload the new version when the application regains focus.
[DB&SQL: 1 | GUI: 0 | Py Basics: 1]
Title: Editing an open document with python
[AnswerCount: 1 | Tags: python,excel,xlsx | Other: 0 | CreationDate: 2015-03-05T13:50:00.000]
I've written some code that iterates through a flat file. After a certain section is completed reading, I take the data and put it into a spreadsheet. Then, I go back and continue reading the flat file for the next section and write to a new worksheet...and so on and so forth. When looping through the python code, I create a new worksheet for each section read above. During this looping, I create the new worksheet as such: worksheet = workbook.add_worksheet(thename) The problem is that the second time through the loop, python crashes when re-assigning the worksheet object above to a new worksheet. Is there a way to "close the worksheet object", then re-assign it? FYI: If I can't use the same object name, "worksheet" in this case, the code is going to become tremendously long and messy in order to handle "worksheet1", "worksheet2", "worksheet3", etc... (as you might imagine) Thank you in advance!
[Q_Score: 1 | Users Score: 0 | Score: 0 | DS&ML: 0 | is_accepted: false | A_Id: 28,909,616 | Web Dev: 0 | ViewCount: 1,050 | Available Count: 1 | DevOps: 0 | Net&APIs: 0 | Q_Id: 28,909,360]
Answer:
Instead of assigning the variable worksheet to workbook.add_worksheet(thename), have a list called worksheets. When you normally do worksheet = workbook.add_worksheet(thename), do worksheets.append(workbook.add_worksheet(thename)). Then access your latest worksheet with worksheets[-1].
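A short sketch of the list approach described in the answer (the file name, section names and cell contents are made up):

```python
import xlsxwriter

workbook = xlsxwriter.Workbook("sections.xlsx")
worksheets = []

for thename in ["section_1", "section_2", "section_3"]:
    # Keep every worksheet object alive in the list instead of
    # rebinding a single 'worksheet' variable on each loop pass.
    worksheets.append(workbook.add_worksheet(thename))
    worksheets[-1].write(0, 0, "data for " + thename)

workbook.close()
```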
[DB&SQL: 1 | GUI: 0 | Py Basics: 1]
Title: python xlsxwriter worksheet object reuse
[AnswerCount: 2 | Tags: python,xlsxwriter | Other: 0 | CreationDate: 2015-03-06T23:26:00.000]
after I put the photologue on the server, I have no issue with uploading photos. the issue is when I am creating a Gallery from the admin site, I can choose only one photo to be attached to the Gallery. even if I selected many photos, one of them will be linked to the Gallery only. The only way to add photos to a gallery is by adding them manually to photologue_gallery_photos table in the database :( anyone kows how to solve it?
[Q_Score: 0 | Users Score: 0 | Score: 0 | DS&ML: 0 | is_accepted: false | A_Id: 31,394,483 | Web Dev: 1 | ViewCount: 184 | Available Count: 2 | DevOps: 0 | Net&APIs: 0 | Q_Id: 28,927,247]
Answer:
I had exactly the same problem. I suspected some problem with django-sortedm2m package. To associate photo to gallery, it was using SortedManyToMany() from sortedm2m package. For some reason, the admin widget associated with this package did not function well. (I tried Firefox, Chrome and safari browser). I actually did not care for the order of photos getting uploaded to Gallery, so I simply replaced that function call with Django's ManyToManyField(). Also, I noticed that SortedManyToMany('Photo') was called with constant string Photo. Instead it should be called with SortedManyToMany(Photo) to identify Photo class. Although it did not resolve my problem entirely. So I used default ManyToMany field and it is showing all the photos from Gallery.
[DB&SQL: 1 | GUI: 0 | Py Basics: 0]
Title: Gallery in Photologue can have only one Photo
[AnswerCount: 3 | Tags: python,django,python-3.4,django-1.7,photologue | Other: 0 | CreationDate: 2015-03-08T13:57:00.000]
after I put the photologue on the server, I have no issue with uploading photos. the issue is when I am creating a Gallery from the admin site, I can choose only one photo to be attached to the Gallery. even if I selected many photos, one of them will be linked to the Gallery only. The only way to add photos to a gallery is by adding them manually to photologue_gallery_photos table in the database :( anyone kows how to solve it?
[Q_Score: 0 | Users Score: 0 | Score: 0 | DS&ML: 0 | is_accepted: false | A_Id: 32,932,624 | Web Dev: 1 | ViewCount: 184 | Available Count: 2 | DevOps: 0 | Net&APIs: 0 | Q_Id: 28,927,247]
Answer:
I guess your problem is solved by now, but just in case.. I had the same problem. Looking around in the logs, I found it was caused by me not having consolidated the static files from sortedm2m with the rest of my static files (hence the widget was not working properly).
[DB&SQL: 1 | GUI: 0 | Py Basics: 0]
Title: Gallery in Photologue can have only one Photo
[AnswerCount: 3 | Tags: python,django,python-3.4,django-1.7,photologue | Other: 0 | CreationDate: 2015-03-08T13:57:00.000]
I have the following TimeStamp value: Wed Jun 25 09:18:15 +0000 2014. I am writing a MapReduce program in Python that reads JSON objects from an Amazon S3 location and export it to a local CSV file. The CSV file will then export data to a MySQL and HBase database. I have about 200 million records (1 TB), so I need to optimize every processing step. What data type should I use to store the TimeStamp value in Python, CSV, MySQL and HBase database? I need to store all aspects of the TimeStamp value. My schema has 4 columns in the CSV file, MySQL and HBase database tables. Thanks!
[Q_Score: 1 | Users Score: 2 | Score: 0.379949 | DS&ML: 0 | is_accepted: false | A_Id: 28,963,527 | Web Dev: 0 | ViewCount: 444 | Available Count: 1 | DevOps: 0 | Net&APIs: 0 | Q_Id: 28,961,577]
Answer:
Use long to represent time (milli seconds), so you don't bother about the date formatting/string encoding. It's space efficient and much easier to perform range queries.
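For the timestamp format shown in the question, one way to get epoch milliseconds (this assumes Python 3, where strptime supports %z; on Python 2.7 you would need something like email.utils.parsedate_tz instead):

```python
from datetime import datetime

raw = "Wed Jun 25 09:18:15 +0000 2014"
dt = datetime.strptime(raw, "%a %b %d %H:%M:%S %z %Y")
millis = int(dt.timestamp() * 1000)  # 1403687895000
print(millis)
```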
[DB&SQL: 1 | GUI: 0 | Py Basics: 0]
Title: Best way to store TimeStamp
[AnswerCount: 1 | Tags: python,mysql,csv,hbase | Other: 0 | CreationDate: 2015-03-10T10:41:00.000]
I have a python script that queries some data from several web APIs and after some processing writes it to MySQL. This process must be repeated every 10 seconds. The data needs to be available to Google Compute instances that read MySQL and perform CPU-intensive work. For this workflow I thought about using GCloud SQL and running GAppEngine to query the data. NOTE: The python script does not run on GAE directly (imports pandas, scipy) but should run on a properly setup App Engine Managed VM. Finally the question: is it possible and would it be reasonable to schedule a cron job on a GApp Managed VM to run a command invoking my data collection script every 10 seconds? Any alternatives to this approach?
[Q_Score: 0 | Users Score: 1 | Score: 1.2 | DS&ML: 0 | is_accepted: true | A_Id: 29,050,842 | Web Dev: 1 | ViewCount: 1,243 | Available Count: 1 | DevOps: 1 | Net&APIs: 0 | Q_Id: 29,044,322]
Answer:
The finest resolution of a cron job is 1 minute, so you cannot run a cron job once every 10 seconds. In your place, I'd run a Python script that starts a new thread every 10 seconds to do your MySQL work, accompanied by a cronjob that runs every minute. If the cronjob finds that the Python script is not running, it would restart it. (i.e., the crontab line would look like * * * * * /command/to/restart/Python/script). Worse-case scenario you'd miss 5 runnings of your MySQL worker threads (a 50 seconds' duration).
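A rough sketch of the every-10-seconds Python loop the answer suggests; collect_and_store is a placeholder for the API-querying and MySQL-writing work:

```python
import threading
import time

def collect_and_store():
    # Placeholder: query the web APIs, process, write to MySQL.
    pass

while True:
    # Run the work in a thread so a slow cycle does not delay the schedule.
    threading.Thread(target=collect_and_store).start()
    time.sleep(10)
```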
[DB&SQL: 1 | GUI: 0 | Py Basics: 0]
Title: Cron job on google cloud managed virtual machine
[AnswerCount: 2 | Tags: python,google-app-engine,cron,virtual-machine,google-compute-engine | Other: 0 | CreationDate: 2015-03-14T01:06:00.000]
I'm new to python and mysql-python module. Is there any way to reuse db connection so that we may not connect() and close() every time a request comes. More generally, how can I keep 'status' on server-side? Can somebody give me a tutorial to follow or guide me somehow, lots of thanks!
[Q_Score: 0 | Users Score: 0 | Score: 1.2 | DS&ML: 0 | is_accepted: true | A_Id: 29,064,930 | Web Dev: 0 | ViewCount: 101 | Available Count: 1 | DevOps: 0 | Net&APIs: 0 | Q_Id: 29,064,875]
Answer:
Really not possible with CGI, the original Common Gateway Interface dictates that the program be run from scratch for each request. You'd want to use WSGI instead (a Python standard), which allows your application be long-lived. WSGI in turn is easiest if you use a Web Framework such as Pyramid, Flask or Django; their integrations with databases like MySQL support connection pooling out of box.
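As one possible illustration of the pooling idea (not taken from the answer), a module-level SQLAlchemy engine in a small Flask app keeps a pool of MySQL connections alive across requests; the connection URL, table and route are placeholders:

```python
from flask import Flask, jsonify
from sqlalchemy import create_engine, text

app = Flask(__name__)

# Created once at import time; the engine maintains the connection pool
# for the lifetime of the WSGI process.
engine = create_engine(
    "mysql+pymysql://user:password@localhost/mydb",
    pool_size=5,
    pool_recycle=3600,
)

@app.route("/users/count")
def user_count():
    # Each request borrows a connection from the pool and returns it.
    with engine.connect() as conn:
        count = conn.execute(text("SELECT COUNT(*) FROM users")).scalar()
    return jsonify(count=count)
```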
[DB&SQL: 1 | GUI: 0 | Py Basics: 0]
Title: How to Reuse Database Connection under Python CGI?
[AnswerCount: 1 | Tags: python,cgi | Other: 0 | CreationDate: 2015-03-15T19:02:00.000]
I am new to this so a silly question I am trying to make a demo website using Django for that I need a database.. Have downloaded and installed MySQL Workbench for the same. But I don't know how to setup this. Thank you in advance :) I tried googling stuff but didn't find any exact solution for the same. Please help
[Q_Score: 0 | Users Score: 1 | Score: 1.2 | DS&ML: 0 | is_accepted: true | A_Id: 29,493,720 | Web Dev: 1 | ViewCount: 1,235 | Available Count: 1 | DevOps: 0 | Net&APIs: 0 | Q_Id: 29,102,422]
Answer:
I am a mac user. I have luckily overcome the issue with connecting Django to mysql workbench. I assume that you have already installed Django package created your project directory e.g. mysite. Initially after installation of MySQL workbench i have created a database : create database djo; Go to mysite/settings.py and edit following piece of block. NOTE: Keep Engine name "django.db.backends.mysql" while using MySQL server. and STOP the other Django MySQL service which might be running. DATABASES = { 'default': { 'ENGINE': 'django.db.backends.mysql', # Add 'postgresql_psycopg2', 'mysql', 'sqlite3' or 'oracle'. 'NAME': 'djo', # Or path to database file if using sqlite3. # The following settings are not used with sqlite3: 'USER': 'root', 'PASSWORD': '****', # Replace **** with your set password. 'HOST': '127.0.0.1', # Empty for localhost through domain sockets or '127.0.0.1' for localhost through TCP. 'PORT': '3306', # Set to empty string for default. } } now run manage.py to sync your database : $ python mysite/manage.py syncdb bash-3.2$ python manage.py syncdb Creating tables ... Creating table auth_permission Creating table auth_group_permissions Creating table auth_group Creating table auth_user_groups Creating table auth_user_user_permissions Creating table auth_user Creating table django_content_type Creating table django_session Creating table django_site You just installed Django's auth system, which means you don't have any superusers defined. Would you like to create one now? (yes/no): yes Username (leave blank to use 'ambershe'): root Email address: [email protected] /Users/ambershe/Library/Containers/com.bitnami.django/Data/app/python/lib/python2.7/getpass.py:83: GetPassWarning: Can not control echo on the terminal. passwd = fallback_getpass(prompt, stream) Warning: Password input may be echoed. Password: **** Warning: Password input may be echoed. Password (again): **** Superuser created successfully. Installing custom SQL ... Installing indexes ... Installed 0 object(s) from 0 fixture(s)
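For readability, here is the settings block embedded in the answer above, laid out as it would appear in settings.py (the database name, user and password are the answer's own examples):

```python
# mysite/settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'djo',            # database created in MySQL Workbench
        'USER': 'root',
        'PASSWORD': '****',       # replace with your own password
        'HOST': '127.0.0.1',      # localhost over TCP
        'PORT': '3306',
    }
}
```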
[DB&SQL: 1 | GUI: 0 | Py Basics: 0]
Title: Connect MySQL Workbench with Django in Eclipse in a mac
[AnswerCount: 1 | Tags: python,mysql,django,eclipse,pydev | Other: 0 | CreationDate: 2015-03-17T14:56:00.000]
I have some very peculiar behavior happening when running a data importer using multiprocessor in python. I believe that this is a database issue, but I am not sure how to track it down. Below is a description of the process I am doing: 1) Multiprocessor file that runs XX number of processors doing parts two and three 2) Queue processor that iterates through an sqs queue pulling a company id. This id is used to pull a json string stored in mysql. This json string is loaded as a json object and sent to a parsing file that normalizes the data so that it can be imported into mysql as normalized data. 3) Company parser/importer reads through json object and creates inserts into a mysql database, normalizing the data. These are batch inserted into RDS in batches of XXX size to mitigate IOPS issues. This code is run from a c4.Large instance and works. When it is started, it works fast (~30,000 inserts per min) without maxing out IOPS, CPU, or other resources on either the RDS or ec2 instance. Then, after a certain amount of time (5-30min), the RDS server's CPU drops to ~20% and has a weird heartbeat type of rhythm. I have tried launching additional ec2 instances to speed up this process and the import speed remains unchanged and slow (~2000 inserts per min), so I believe the bottleneck is with the RDS instance. I tried changing the RDS instance's size from medium to large with no change. I also tried changing the RDS instance's IOPS to provisioned SSD with 10k. This also did not fix the problem As far as I can tell, there is some sort of throttling or limitation by the RDS server. But, I don't know where else to look. There are no red flags about what is being limited. Can you please provide other potential reasons for why this type of behavior would be happening? I don't know what else to test. Current setup is 500gb t2.medium RDS instance with ~200 Write IOPS, CPU at ~20%, Read IOPS < 20, Queue < 1, stable 12 db connections(this is not connecting and then disconnecting), and plenty of free memory.
[Q_Score: 0 | Users Score: 0 | Score: 0 | DS&ML: 0 | is_accepted: false | A_Id: 29,169,114 | Web Dev: 0 | ViewCount: 260 | Available Count: 1 | DevOps: 0 | Net&APIs: 0 | Q_Id: 29,129,589]
Answer:
I solved this by upping my instance type to a m3.Large instance without limited CPU credits. Everything works well now.
[DB&SQL: 1 | GUI: 0 | Py Basics: 1]
Title: Importing data to mysql RDS with python multiprocessor - RDS
[AnswerCount: 1 | Tags: python,mysql,linux,amazon-ec2,rds | Other: 0 | CreationDate: 2015-03-18T18:10:00.000]
I'm not sure what exactly the wording for the problem is so if I haven't been able to find any resource telling me how to do this, that's most likely why. The basic problem is that I have a webcrawler, coded in Python, that has a 'Recipe' object that stores certain data about a specific recipe such as 'Name', 'Instructions', 'Ingredients', etc. with 'Instructions' and 'Ingredients' being a string array. Now, the problem I have comes when I want to store this data in a database for access from other sources. A basic example of the database looks as follows: (Recipes) r_id, name, .... (Ingredients) i_id, name, .... (RecipeIngredients) r_id, i_id. Now, specifically my problem is, how do I make sure I'm not duplicating ingredients and how do I insert the data so that the ingredient is linked to the id of the current Recipe object? I know my explanation is bad but I'm struggling to put it into words. Any help is appreciated, thanks.
[Q_Score: 0 | Users Score: 0 | Score: 0 | DS&ML: 0 | is_accepted: false | A_Id: 29,170,744 | Web Dev: 0 | ViewCount: 45 | Available Count: 1 | DevOps: 0 | Net&APIs: 0 | Q_Id: 29,170,268]
Answer:
For the first question (how do I make sure I'm not duplicating ingredients?), if I understand well, is basically put your primary key as (i_id, name) in the table ingredients. This way you guarantee that is impossible insert an ingredient with the same key (i_id, name). Now for the second question (how do I insert the data so that the ingredient is linked to the id of the current Recipe object?). I really don't understand this question very well. What I think you want is link the recipes with ingredients. This can be made with the table RecipeIngredients. When you want to do that, you simple insert a new row in that table with the id of the recipe and the id of the ingredient. If isn't this what you want sorry, but I really don't understand.
[DB&SQL: 1 | GUI: 0 | Py Basics: 0]
Title: Inserting data into SQL database that needs to be linked
[AnswerCount: 2 | Tags: python,sql | Other: 0 | CreationDate: 2015-03-20T15:33:00.000]
I have a MySQLdb installation for Python 2.7.6. I have created a MySQLdb cursor once and would like to reuse the cursor for every incoming request. If 100 users are simultaneously active and doing a db query, does the cursor serve each request one by one and block others? If that is the case, is there way to avoid that? Will having a connection pool will do the job in a threadsafe manner or should I look at Gevent/monkey patching? Your responses are welcome.
[Q_Score: 0 | Users Score: 1 | Score: 0.099668 | DS&ML: 0 | is_accepted: false | A_Id: 29,199,028 | Web Dev: 0 | ViewCount: 633 | Available Count: 1 | DevOps: 0 | Net&APIs: 0 | Q_Id: 29,196,096]
Answer:
For this purpose you can use Persistence Connection or Connection Pool. Persistence Connection - very very very bad idea. Don't use use it! Just don't! Especially when you are talking about web programming. Connection Pool - Better then Persistence Connection, but with no deep understanding of how it works, you will end with the same problems of Persistence Connection. Don't do optimization unless you really have performance problems. In web, its common to open/close connection per page request. It works really fast. You better think about optimizing sql queries, indexes, caches.
[DB&SQL: 1 | GUI: 0 | Py Basics: 0]
Title: Is a MySQLdb cursor for Python blocking in nature by default?
[AnswerCount: 2 | Tags: python,mysql,mysql-python | Other: 0 | CreationDate: 2015-03-22T15:21:00.000]
MS SQL Server supports passing a table as a stored-procedure parameter. Is there any way to utilize this from Python, using PyODBC or pymssql?
[Q_Score: 1 | Users Score: 0 | Score: 1.2 | DS&ML: 0 | is_accepted: true | A_Id: 29,679,132 | Web Dev: 0 | ViewCount: 830 | Available Count: 1 | DevOps: 0 | Net&APIs: 0 | Q_Id: 29,371,570]
Answer:
Use IronPython. It allows direct access to the .net framework, and therefore you can build a DataTable object and pass it over.
[DB&SQL: 1 | GUI: 0 | Py Basics: 0]
Title: Can you pass table input parameter to SQL Server from Python
[AnswerCount: 2 | Tags: python,sql-server,pyodbc | Other: 0 | CreationDate: 2015-03-31T14:49:00.000]
I am using db = MySQLdb.connect(host="machine01", user=local_settings.DB_USERNAME, passwd=local_settings.DB_PASSWORD, db=local_settings.DB_NAME) to connect to a DB, but I am doing this from machine02 and I thought this would still work, but it does not. I get _mysql_exceptions.OperationalError: (1045, "Access denied for user 'test_user'@'machine02' (using password: YES)") as a result. However, if I simply ssh over to machine01 and perform the same query, it works just fine. Isn't the point of host to be able to specify where the MySQL db is and be able to query it from any other host instead of having to jump on there to make the query?
[Q_Score: 0 | Users Score: 0 | Score: 0 | DS&ML: 0 | is_accepted: false | A_Id: 29,372,847 | Web Dev: 0 | ViewCount: 41 | Available Count: 2 | DevOps: 0 | Net&APIs: 0 | Q_Id: 29,372,365]
Answer:
The error tells you that 'test_user' at machine 'machine02' is not allowed. Probably user 'test_user' is on 'mysql.user' table registered with 'localhost' as connection's host. Check it using a query like this: select host, user from mysql.user; Best regards, Oscar.
[DB&SQL: 1 | GUI: 0 | Py Basics: 0]
Title: Query MySQL db from Python returns "Access Denied"
[AnswerCount: 2 | Tags: python,mysql | Other: 0 | CreationDate: 2015-03-31T15:25:00.000]
I am using db = MySQLdb.connect(host="machine01", user=local_settings.DB_USERNAME, passwd=local_settings.DB_PASSWORD, db=local_settings.DB_NAME) to connect to a DB, but I am doing this from machine02 and I thought this would still work, but it does not. I get _mysql_exceptions.OperationalError: (1045, "Access denied for user 'test_user'@'machine02' (using password: YES)") as a result. However, if I simply ssh over to machine01 and perform the same query, it works just fine. Isn't the point of host to be able to specify where the MySQL db is and be able to query it from any other host instead of having to jump on there to make the query?
[Q_Score: 0 | Users Score: 0 | Score: 1.2 | DS&ML: 0 | is_accepted: true | A_Id: 29,372,390 | Web Dev: 0 | ViewCount: 41 | Available Count: 2 | DevOps: 0 | Net&APIs: 0 | Q_Id: 29,372,365]
Answer:
Make sure your firewall isn't blocking port 3306.
[DB&SQL: 1 | GUI: 0 | Py Basics: 0]
Title: Query MySQL db from Python returns "Access Denied"
[AnswerCount: 2 | Tags: python,mysql | Other: 0 | CreationDate: 2015-03-31T15:25:00.000]
I am passing the output from a sql query to again insert the data to ms sql db. If my data is null python / pyodbc is returning None instead of NULL. What is the best way to convert None to NULL when I am calling another query using the same data. Or a basic string transformation is the only way out ? Thanks Shakti
[Q_Score: 5 | Users Score: -1 | Score: -0.099668 | DS&ML: 0 | is_accepted: false | A_Id: 29,431,913 | Web Dev: 0 | ViewCount: 16,822 | Available Count: 1 | DevOps: 0 | Net&APIs: 0 | Q_Id: 29,431,557]
Answer:
You could wrap or override the query function in such a way that None values are replaced with "NULL".
[DB&SQL: 1 | GUI: 0 | Py Basics: 1]
Title: How convert None to NULL with Python 2.7 and pyodbc
[AnswerCount: 2 | Tags: python,sql-server,pyodbc | Other: 0 | CreationDate: 2015-04-03T11:43:00.000]
I have a script to format a bunch of data and then push it into excel, where I can easily scrub the broken data, and do a bit more analysis. As part of this I'm pushing quite a lot of data to excel, and want excel to do some of the legwork, so I'm putting a certain number of formulae into the sheet. Most of these ("=AVERAGE(...)" "=A1+3" etc) work absolutely fine, but when I add the standard deviation ("=STDEV.P(...)" I get a name error when I open in excel 2013. If I click in the cell within excel and hit (i.e. don't change anything within the cell), the cell re-calculates without the name error, so I'm a bit confused. Is there anything extra that needs to be done to get this to work? Has anyone else had any experience of this? Thanks, Will --
[Q_Score: 3 | Users Score: 0 | Score: 0 | DS&ML: 0 | is_accepted: false | A_Id: 29,487,114 | Web Dev: 0 | ViewCount: 813 | Available Count: 1 | DevOps: 0 | Net&APIs: 0 | Q_Id: 29,486,671]
Answer:
I suspect that there might be a subtle difference in what you think you need to write as the formula and what is actually required. openpyxl itself does nothing with the formula, not even check it. You can investigate this by comparing two files (one from openpyxl, one from Excel) with ostensibly the same formula. The difference might be simple – using "." for decimals and "," as a separator between values even if English isn't the language – or it could be that an additional feature is required: Microsoft has continued to extend the specification over the years. Once you have some pointers please submit a bug report on the openpyxl issue tracker.
[DB&SQL: 1 | GUI: 0 | Py Basics: 0]
Title: openpyxl and stdev.p name error
[AnswerCount: 2 | Tags: python,openpyxl | Other: 0 | CreationDate: 2015-04-07T08:01:00.000]
I think InfluxDB is a really cool time series DB. I am planning to use it as an intermediate data aggregator (collecting time based metrics from many sensors). The data needs to be processed in "moving window" manner - when X samples received, Python based processing algorithm should be triggered. What is the best wait to trigger the algorithm upon enough data aggregated? (I assume that polling with select queries is not the best option). Is there any events I can wait on? Thanks! Meir
[Q_Score: 0 | Users Score: 1 | Score: 0.197375 | DS&ML: 0 | is_accepted: false | A_Id: 29,533,201 | Web Dev: 0 | ViewCount: 528 | Available Count: 1 | DevOps: 0 | Net&APIs: 0 | Q_Id: 29,528,394]
Answer:
Not using Python, but in my case i use continuous queries in InfluxDb to consolidate automatically data in one place/serie. Then i request every X seconds on the newly created serie using a time window to select my data. They are then draw using a standard framework (highcharts.js) Maybe in your case you could wait for a predefined data volume before trigerring the push to the processing function.
[DB&SQL: 1 | GUI: 0 | Py Basics: 0]
Title: How to use InfluxDB as an intermediate data storage
[AnswerCount: 1 | Tags: python,time-series,influxdb | Other: 0 | CreationDate: 2015-04-09T01:51:00.000]
I'm writing a web application in python and postgreSQL. Users are to access a lot of information during a session. All such information (almost) are indexed in the database. My question is, should I litter the code with specific queries, or is it better practice to query larger chunks of information, cashing it, and letting python process the chunk for finer pieces? For example: A user is to ask for entries in a payment log. Either one writes a query asking for the specific entries requested, or one collect the payment history of the user and then use python to select the specific entries. Of course cashing is preferred when working with heavy queries, but since nearly all my data is indexed, direct database access is fast and the cashing approach would not yield much if any extra speed. But are there other factors that may still render the cashing approach preferable?
[Q_Score: 0 | Users Score: 1 | Score: 0.197375 | DS&ML: 0 | is_accepted: false | A_Id: 29,538,970 | Web Dev: 0 | ViewCount: 30 | Available Count: 1 | DevOps: 0 | Net&APIs: 0 | Q_Id: 29,538,870]
Answer:
Database designers spend a lot of time on caching and optimization. Unless you hit a specific problem, it's probably better to let the database do the database stuff, and your code do the rest instead of having your code try to take over some of the database functionality.
[DB&SQL: 1 | GUI: 0 | Py Basics: 0]
Title: General queries vs detailed queries to database
[AnswerCount: 1 | Tags: python,postgresql | Other: 0 | CreationDate: 2015-04-09T12:44:00.000]
I am using Python to stream large amounts of Twitter data into a MySQL database. I anticipate my job running over a period of several weeks. I have code that interacts with the twitter API and gives me an iterator that yields lists, each list corresponding to a database row. What I need is a means of maintaining a persistent database connection for several weeks. Right now I find myself having to restart my script repeatedly when my connection is lost, sometimes as a result of MySQL being restarted. Does it make the most sense to use the mysqldb library, catch exceptions and reconnect when necessary? Or is there an already made solution as part of sqlalchemy or another package? Any ideas appreciated!
[Q_Score: 0 | Users Score: 0 | Score: 1.2 | DS&ML: 0 | is_accepted: true | A_Id: 29,552,956 | Web Dev: 0 | ViewCount: 48 | Available Count: 1 | DevOps: 0 | Net&APIs: 0 | Q_Id: 29,552,868]
Answer:
I think the right answer is to try and handle the connection errors; it sounds like you'd only be pulling in a much a larger library just for this feature, while trying and catching is probably how it's done, whatever level of the stack it's at. If necessary, you could multithread these things since they're probably IO-bound (i.e. suitable for Python GIL threading as opposed to multiprocessing) and decouple the production and the consumption with a queue, too, which would maybe take some of the load off of the database connection.
[DB&SQL: 1 | GUI: 0 | Py Basics: 0]
Title: Persistant MySQL connection in Python for social media harvesting
[AnswerCount: 1 | Tags: python,mysql | Other: 0 | CreationDate: 2015-04-10T03:32:00.000]
I'm trying to create unit tests for a function that uses database queries in its implementation. My understanding of unit testing is that you shouldn't be using outside resources such as databases for unit testing, and you should just create mock objects essentially hard coding the results of the queries. However, in this case, the queries are implementation specific, and if the implementation would change, so would the queries. My understanding is also that unit testing is very useful because it essentially allows you to change the implementation of your code whenever you want while being sure it still works. In this case, would it be better to create a database for testing purposes, or to make the testing tailored to this specific implementation and change the test code if we ever change the implementation?
[Q_Score: 1 | Users Score: 1 | Score: 0.099668 | DS&ML: 0 | is_accepted: false | A_Id: 29,576,807 | Web Dev: 0 | ViewCount: 105 | Available Count: 2 | DevOps: 0 | Net&APIs: 0 | Q_Id: 29,565,712]
Answer:
As it seems I got the wrong end of the stick, I had a similarish problem and like you an ORM was not an option. The way I addressed it was with simple collections of Data Transfer objects. So the new code I wrote, had no direct access to the db. It did everything with simple lists of objects. All the business logic and ui could be tested without the db. Then I had an other module that did nothing but read and write to the db, to and from my collections of objects. It was a poor mans ORM basically, a lot of donkey work. Testing was run the db creation script, then some test helper code to populate the db with data I needed for each test. Boring but effective, and you can with a bit of care, refactor it in to the code base without too much risk.
[DB&SQL: 1 | GUI: 0 | Py Basics: 0]
Title: Unit testing on implementation-specific database usage
[AnswerCount: 2 | Tags: python,database,unit-testing | Other: 1 | CreationDate: 2015-04-10T15:51:00.000]
I'm trying to create unit tests for a function that uses database queries in its implementation. My understanding of unit testing is that you shouldn't be using outside resources such as databases for unit testing, and you should just create mock objects essentially hard coding the results of the queries. However, in this case, the queries are implementation specific, and if the implementation would change, so would the queries. My understanding is also that unit testing is very useful because it essentially allows you to change the implementation of your code whenever you want while being sure it still works. In this case, would it be better to create a database for testing purposes, or to make the testing tailored to this specific implementation and change the test code if we ever change the implementation?
[Q_Score: 1 | Users Score: 2 | Score: 1.2 | DS&ML: 0 | is_accepted: true | A_Id: 29,566,319 | Web Dev: 0 | ViewCount: 105 | Available Count: 2 | DevOps: 0 | Net&APIs: 0 | Q_Id: 29,565,712]
Answer:
Well, to start with, I think this is very much something that depends on the application context, the QA/dev's skill set & preferences. So, what I think is right may not be right for others. Having said that... In my case, I have a system where an extremely complex ERP database, which I dont control, is very much in the driver's seat and my code is a viewer/observer, rather than a driver of that database. I don't, and can't really, use an ORM layer much, all my added value is in queries that deeply understand the underlying database data model. Note also that I am mostly a viewer of that db, in fact my code has read-only access to the primary db. It does have write access to its own tagging database which uses the Django ORM and testing there is different in nature because of my reliance on the ORM. For me, it had better be tested with the database. Mock objects? Please, mocking would have guzzled time if there is a lot of legitimate reasons to view/modify database contents with complex queries. Changing queries. In my case, changing and tweaking those queries, which are the core of my application logic, is very often needed. So I need to make fully sure that they perform as intended against real data. Multi-platform concerns. I started coding on postgresql, tweaked my connectivity libraries to support Oracle as well. Ran the unit tests and fixed anything that popped up as an error. Would a database abstraction have identified things like the LIMIT clause handling in Oracle? Versioning. Again, I am not the master of the database. So, as versions change, I need to hook up my code to it. The unit testing is invaluable, but that's because it hits the raw db. Test robustness. One lesson I learned along the way is to uncouple the test from the test db. Say you want to test a function that flags active customers that have not ordered anything in a year. My initial test approach involved manual lookups in the test database, find CUST701 to be a match to the condition. Then call my function and test if CUST701 is the result set of customers needing review. Wrong approach. What you want to do is to write, in your test, a query that finds active customers that have not ordered anything in a year. No hardcoded CUST701s at all, but your test query query can be as hardcoded as you want - in fact, it should look as little as your application queries as possible - you don't want your test sql to replicate what could potentially be a bug in your production code. Once you have dynamically identified a target customer meeting the criteria, then call your code under test and see if the results are as expected. Make sure your coverage tools identify when you've been missing test scenarios and plug those holes in the test db. BDD. To a large extent, I am starting to approach testing from a BDD perspective, rather than a low-level TDD. So, I will be calling the url that handles the inactive customer lists, not testing individual functions. If the overall result is OK and I have enough coverage, I am OK, without wondering about the detailed low-level to and fro. So factor this as well in qualifying my answer. Coders have always had test databases. To me, it seems logical to leverage them for BDD/unit-testing, rather than pretending they don't exist. But I am at heart a SQL coder that knows Python very well, not a Python expert who happens to dabble in SQL.
[DB&SQL: 1 | GUI: 0 | Py Basics: 0]
Title: Unit testing on implementation-specific database usage
[AnswerCount: 2 | Tags: python,database,unit-testing | Other: 1 | CreationDate: 2015-04-10T15:51:00.000]
I have a command-line tool that I'm creating and I'm looking for a safe place to put my sqlite database so it doesn't get overwritten or deleted by the user by accident in mac,windows,or linux and be accessible by my application.
[Q_Score: 0 | Users Score: 1 | Score: 0.197375 | DS&ML: 0 | is_accepted: false | A_Id: 29,589,262 | Web Dev: 0 | ViewCount: 64 | Available Count: 1 | DevOps: 0 | Net&APIs: 0 | Q_Id: 29,587,822]
Answer:
Your tool runs with the permissions of the user. Any file created by it can also be delete by the same user. You can ask the administrator to protect your files, but on most Mac/Windows/Linux PCs, the user is the administrator. There is no place that is safe from the user that controls your tool's execution environment. For that matter, no software is safe against users with access to the hardware: “If you don’t open that exit hatch this moment I shall zap straight off to your major data banks and reprogram you with a very large axe, got that?” ― Douglas Adams, The Hitchhiker's Guide to the Galaxy
[DB&SQL: 1 | GUI: 0 | Py Basics: 0]
Title: Where to put sqlite database in python command-line project
[AnswerCount: 1 | Tags: python,linux,windows,macos,sqlite | Other: 0 | CreationDate: 2015-04-12T09:15:00.000]
I have a PostgreSQL db. Pandas has a 'to_sql' function to write the records of a dataframe into a database. But I haven't found any documentation on how to update an existing database row using pandas when im finished with the dataframe. Currently I am able to read a database table into a dataframe using pandas read_sql_table. I then work with the data as necessary. However I haven't been able to figure out how to write that dataframe back into the database to update the original rows. I dont want to have to overwrite the whole table. I just need to update the rows that were originally selected.
[Q_Score: 25 | Users Score: 0 | Score: 0 | DS&ML: 1 | is_accepted: false | A_Id: 68,004,057 | Web Dev: 0 | ViewCount: 11,070 | Available Count: 1 | DevOps: 0 | Net&APIs: 0 | Q_Id: 29,607,222]
Answer:
For sql alchemy case of read table as df, change df, then update table values based on df, I found the df.to_sql to work with name=<table_name> index=False if_exists='replace' This should replace the old values in the table with the ones you changed in the df
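A minimal sketch of what the answer describes; the connection string, table and column names are placeholders. Note that if_exists='replace' drops and recreates the whole table rather than updating individual rows in place:

```python
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://user:password@localhost/mydb")

# Read the table, modify it in pandas, then write it back.
df = pd.read_sql_table("my_table", engine)
df.loc[df["status"] == "old", "status"] = "new"

df.to_sql("my_table", engine, index=False, if_exists="replace")
```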
[DB&SQL: 1 | GUI: 0 | Py Basics: 0]
Title: Update existing row in database from pandas df
[AnswerCount: 2 | Tags: python,postgresql,pandas | Other: 0 | CreationDate: 2015-04-13T14:01:00.000]
In SQLAlchemy, is there a way to store arbitrary metadata in an column object? For example, I want to store a flag on each column that says whether or not that column should be serialized, and then access this information via inspect( Table ).attrs.
[Q_Score: 0 | Users Score: 2 | Score: 1.2 | DS&ML: 0 | is_accepted: true | A_Id: 29,611,824 | Web Dev: 0 | ViewCount: 95 | Available Count: 1 | DevOps: 0 | Net&APIs: 0 | Q_Id: 29,611,273]
Answer:
You can pass extra data at the info param in Column initializer Column(...., info={'data': 'data'})
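A small sketch of the info parameter and of reading it back from the table metadata (the model and the 'serialize' flag are invented; declarative_base is imported from sqlalchemy.orm as in SQLAlchemy 1.4+):

```python
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Item(Base):
    __tablename__ = "items"
    id = Column(Integer, primary_key=True)
    name = Column(String, info={"serialize": True})
    secret = Column(String, info={"serialize": False})

# The info dict is available on each Column object in the table metadata.
for column in Item.__table__.columns:
    print(column.name, column.info.get("serialize"))
```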
[DB&SQL: 1 | GUI: 0 | Py Basics: 0]
Title: Storing arbitrary metadata in SQLAlchemy column
[AnswerCount: 1 | Tags: python,sqlalchemy | Other: 0 | CreationDate: 2015-04-13T17:21:00.000]
I apologize in advance for my lack of knowledge concerning character encoding. My question is: are there any inherent advantages/disadvantages to using the 'Unicode' type, rather than the 'String' type, when storing data in PostgreSQL using SQLAlchemy (or vice-versa)? If so, would you mind elaborating?
[Q_Score: 14 | Users Score: 5 | Score: 0.761594 | DS&ML: 0 | is_accepted: false | A_Id: 35,273,011 | Web Dev: 0 | ViewCount: 3,572 | Available Count: 1 | DevOps: 0 | Net&APIs: 0 | Q_Id: 29,617,210]
Answer:
In 99.99% of the cases go for Unicode and if possible use Python 3 as it would make your life easier.
[DB&SQL: 1 | GUI: 0 | Py Basics: 0]
Title: 'Unicode' vs. 'String' with SQLAlchemy and PostgreSQL
[AnswerCount: 1 | Tags: python,postgresql,unicode,sqlalchemy,python-2.x | Other: 0 | CreationDate: 2015-04-14T00:27:00.000]
I am using the openpyxl module for my Python scripts to create and edit .xlsx files directly from the script. Now I want to save a not known amount of number on after the other. How can I increase the cell number? So if the last input was made in A4, how can I say that the next should be in A5?
[Q_Score: 0 | Users Score: 0 | Score: 0 | DS&ML: 0 | is_accepted: false | A_Id: 29,658,155 | Web Dev: 0 | ViewCount: 90 | Available Count: 1 | DevOps: 0 | Net&APIs: 0 | Q_Id: 29,642,697]
Answer:
You can use the .offset() method of a cell to get a cell a particular number of rows or columns away.
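A brief openpyxl sketch of both options: offset() from the last cell written, or simply keeping a row counter (the file name and values are made up):

```python
from openpyxl import Workbook

wb = Workbook()
ws = wb.active

# Option 1: offset() from the last cell written.
last_cell = ws["A4"]
next_cell = last_cell.offset(row=1)   # this is A5
next_cell.value = 42

# Option 2: keep an explicit row counter.
row = 6
for value in [1, 2, 3]:
    ws.cell(row=row, column=1, value=value)
    row += 1

wb.save("numbers.xlsx")
```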
[DB&SQL: 1 | GUI: 0 | Py Basics: 0]
Title: openpyxl for Python dynamic lines
[AnswerCount: 1 | Tags: python,export-to-excel | Other: 0 | CreationDate: 2015-04-15T06:08:00.000]
I am working on a Python/MySQL cloud app with a fairly complex architecture. Operating this system (currently) generates temporary files (plain text, YAML) and log files and I had intended to store them on the filesystem. However, our prospective cloud operator only provides a temporary, non-persistent filesystem to apps. This means that the initial approach with storing the temporary and log files won't work. There must be a standard approach to solving this problem which I am not aware of. I don't want to use object storage like S3 because it would extend the current stack and add complexity. But I have the possibility to install an additional, dedicated app (if there is anything made for this purpose) on a different server with the same provider. The only limitation is that it would have to be in PHP, Python, MySQL. The generic question: What is the standard approach to storing files when no persistent filesystem is available? And for my specific case: Is there any solution using Python and/or MySQL which is simple and quick to implement? Is this a usecase for Redis?
[Q_Score: 11 | Users Score: -1 | Score: -0.066568 | DS&ML: 0 | is_accepted: false | A_Id: 29,656,524 | Web Dev: 0 | ViewCount: 976 | Available Count: 1 | DevOps: 1 | Net&APIs: 0 | Q_Id: 29,656,422]
Answer:
Store your logs in MySQL. Just make a table like this: x***time*****source*****action ---------------------------- ****unixtime*somemodule*error/event Your temporary storage should be enough for temporary files :)
[DB&SQL: 1 | GUI: 0 | Py Basics: 0]
Title: How/where to store temp files and logs for a cloud app?
[AnswerCount: 3 | Tags: python,mysql,redis,cloud,storage | Other: 0 | CreationDate: 2015-04-15T17:04:00.000]
I am using py2neo and I would like to extract the information from query returns so that I can do stuff with it in python. For example, I have a DB containing three "Person" nodes: for num in graph.cypher.execute("MATCH (p:Person) RETURN count(*)"): print num outputs: >> count(*) 3 Sorry for shitty formatting, it looks essentially the same as a mysql output. However, I would like to use the number 3 for computations, but it has type py2neo.cypher.core.Record. How can I convert this to a python int so that I can use it? In a more general sense, how should I go about processing cypher queries so that the data I get back can be used in Python?
[Q_Score: 2 | Users Score: 0 | Score: 0 | DS&ML: 0 | is_accepted: false | A_Id: 29,683,003 | Web Dev: 0 | ViewCount: 2,448 | Available Count: 1 | DevOps: 0 | Net&APIs: 0 | Q_Id: 29,682,897]
Answer:
can you int(), float() str() on the __str__() method that looks to be outputting the value you want in your example?
[DB&SQL: 1 | GUI: 0 | Py Basics: 1]
Title: How to convert neo4j return types to python types
[AnswerCount: 2 | Tags: python,neo4j,type-conversion,py2neo | Other: 0 | CreationDate: 2015-04-16T18:16:00.000]
I need to use some aggregate data in my django application that changes frequently and if I do the calculations on the fly some performance issues may happen. Because of that I need to save the aggregate results in a table and, when data changes, update them. Because I use django some options may be exist and some maybe not. For example I can use django signals and a table that, when post_save signal is emitted, updates the results. Another option is materialized views in postgresql or indexed views in MSSQL Server, that I do not know how to use in django or if django supports them or not. What is the best way to do this in django for improving performance and accuracy of results.
[Q_Score: 13 | Users Score: 14 | Score: 1 | DS&ML: 0 | is_accepted: false | A_Id: 55,397,481 | Web Dev: 1 | ViewCount: 6,549 | Available Count: 1 | DevOps: 0 | Net&APIs: 0 | Q_Id: 29,716,972]
Answer:
You can use Materialized view with postgres. It's very simple. You have to create a view with query like CREATE MATERIALIZED VIEW my_view as select * from my_table; Create a model with two option managed=false and db_name=my_view in the model Meta like this MyModel(models.Model): class Meta: managed = False db_table='my_view' Simply use powers of ORM and treat MyModel as a regular model. e.g. MyModel.objects.count()
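Laying out the answer's two pieces: the materialized view is created once in PostgreSQL, and an unmanaged Django model is mapped onto it (the example field is invented and belongs in an app's models.py; remember that a materialized view needs REFRESH MATERIALIZED VIEW to pick up new data):

```python
# Run once against Postgres (e.g. via a migration's RunSQL):
#   CREATE MATERIALIZED VIEW my_view AS SELECT * FROM my_table;
#   -- later: REFRESH MATERIALIZED VIEW my_view;

from django.db import models

class MyModel(models.Model):
    # Mirror whichever columns of my_table you need (invented example).
    name = models.TextField()

    class Meta:
        managed = False        # Django never creates or migrates this relation
        db_table = "my_view"

# Then query it like any other model, e.g. MyModel.objects.count()
```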
[DB&SQL: 1 | GUI: 0 | Py Basics: 0]
Title: using materialized views or alternatives in django
[AnswerCount: 2 | Tags: python,sql-server,django,database,postgresql | Other: 0 | CreationDate: 2015-04-18T11:54:00.000]
The same attributes stored in __dict__ are needed to restore the object, right?
[Q_Score: 0 | Users Score: 1 | Score: 0.197375 | DS&ML: 0 | is_accepted: false | A_Id: 29,786,610 | Web Dev: 0 | ViewCount: 100 | Available Count: 1 | DevOps: 0 | Net&APIs: 0 | Q_Id: 29,786,322]
Answer:
I think a SQLAlchemy RowProxy uses _row, a tuple, to store the value. It doesn't have a __dict__, so no storage overhead of a _dict__ per row. Its _parent object has fields which store the column names to index pos in tuple lookup. Pretty common thing to do if you are trying to cut on down sql fetching result sizes - the column list is always the same for each row of the same select, so you rely on a common parent to keep track of which index of the tuple holds which column rather than having your own per-row __dict__. Additional advantage is that, at the db lib connect level, sql cursors return (always?) their values in tuples, so you have little processing overhead. But a straight sql fetch is just that, a cursor descr & a bunch of disconnected rows with tuples in them - SQLALchemy bridges that and allows to use column names. Now, as to how the unpickling process goes, you'd have to look at the actual implementation.
[DB&SQL: 1 | GUI: 0 | Py Basics: 0]
Title: Why is a pickled SQLAlchemy model object smaller than its pickled `__dict__`?
[AnswerCount: 1 | Tags: python,sqlalchemy | Other: 0 | CreationDate: 2015-04-22T01:50:00.000]
I am using simple_salesforce package in python to extract data from SalesForce. I have a table that has around 2.8 million records and I am using query_more to extract all data. SOQL extracts 1000 rows at a time. How can I increase the batchsize in python to extract maximum number of rows at a time. [I hope maximum number of rows is 2000 at a time]? Thanks
[Q_Score: 0 | Users Score: 0 | Score: 0 | DS&ML: 0 | is_accepted: false | A_Id: 34,097,734 | Web Dev: 0 | ViewCount: 122 | Available Count: 1 | DevOps: 0 | Net&APIs: 0 | Q_Id: 29,799,993]
Answer:
If you truly wish to extract everything, you can use the query_all function. query_all calls the helper function get_all_results which recursively calls query_more until query_more returns "done". The returned result is the full dictionary of all your results. The plus, you get all of your data in a single dictionary. The rub, you get all 2.8 million records at once. They may take a while to pull back and, depending on the size of the record, that may be a significant amount of ram.
[DB&SQL: 1 | GUI: 0 | Py Basics: 0]
Title: Increasing Batch size in SOQL
[AnswerCount: 1 | Tags: python,salesforce,soql | Other: 0 | CreationDate: 2015-04-22T14:01:00.000]
I'm currently running into soft memory errors on my Google App Engine app because of high memory usage. A number of large objects are driving memory usage sky high. I thought perhaps if I set and recalled them from memcache maybe that might reduce overall memory usage. Reading through the docs this doesn't seem to be the case, and that the benefit of memcache is to reduce HRD queries. Does memcache impact overall memory positively or negatively? Edit: I know I can upgrade the instance class to F2 but I'm trying to see if I can remain on the least expensive while reducing memory.
[Q_Score: 0 | Users Score: 3 | Score: 1.2 | DS&ML: 0 | is_accepted: true | A_Id: 29,806,800 | Web Dev: 1 | ViewCount: 130 | Available Count: 1 | DevOps: 1 | Net&APIs: 0 | Q_Id: 29,806,384]
Answer:
Moving objects to and from Memcache will have no impact on your memory unless you destroy these objects in your Java code or empty collections. A bigger problem is that memcache entities are limited to 1MB, and memcache is not guaranteed. The first of these limitations means that you cannot push very large objects into Memcache. The second limitations means that you cannot easily replace, for example, a HashMap with memcache - it's impossible to tell if getValue() returns null because an object is not present or because it was bumped out of memcache. So you will have to make an extra call each time to a datastore to see if an object is really not present.
[DB&SQL: 1 | GUI: 0 | Py Basics: 0]
Title: Will using memcache reduce my instance memory?
[AnswerCount: 1 | Tags: python-2.7,google-app-engine,memcached | Other: 0 | CreationDate: 2015-04-22T18:48:00.000]
I'm using cx_Oracle module in python. Do we need to close opened cursors explicitly? What will happen when we miss to close the cursor after fetching data and closing only the connection object (con.close()) without issuing cursor.close()? Will there be any chance of memory leak in this situation?
[Q_Score: 4 | Users Score: 1 | Score: 0.099668 | DS&ML: 0 | is_accepted: false | A_Id: 30,171,565 | Web Dev: 0 | ViewCount: 4,239 | Available Count: 1 | DevOps: 0 | Net&APIs: 0 | Q_Id: 29,843,170]
Answer:
If you use multiple cursor. cursor.close() will help you to release the resources you don't need anymore. If you just use one cursor with one connection. I think connection.close() is fine.
[DB&SQL: 1 | GUI: 0 | Py Basics: 0]
Title: cx_Oracle module cursor close in python
[AnswerCount: 2 | Tags: python,cx-oracle | Other: 0 | CreationDate: 2015-04-24T09:05:00.000]
I'm having this weird problem when using Model.objects.get(op1=1,op2=2) it raises the does not exist error although it exists. Did that ever happen with anyone? I even checked in my logs to make sure that the log happened when the id already existed in the database. [2015-04-24 20:18:21,106] ERROR: Couldn't find the model entry: Traceback (most recent call last): DoesNotExist: NpBilling matching query does not exist. and in the database, the last modified date for this row specifically is 20:18:19. How could that possible ever happen?! The weird thing is that sometimes it works and sometimes it throws this error. I tried to use get_or_create but I end up with 2 entries in the database. one of them is what was already created. Thanks in advance for your help. I would appreciate fast responses and suggestions.
[Q_Score: 0 | Users Score: 0 | Score: 1.2 | DS&ML: 0 | is_accepted: true | A_Id: 29,882,375 | Web Dev: 1 | ViewCount: 57 | Available Count: 1 | DevOps: 0 | Net&APIs: 0 | Q_Id: 29,854,433]
Answer:
I solved it by using transaction.commit() before my second query.
[DB&SQL: 1 | GUI: 0 | Py Basics: 0]
Title: Model.objects.get returns nothing
[AnswerCount: 1 | Tags: python,django,object,get,models | Other: 0 | CreationDate: 2015-04-24T18:00:00.000]
I have UNIQUE constraint on two columns of a table in SQLite. If I insert a record with a duplicate on these two columns into the table, I will get an exception (sqlite3.IntegrityError). Is it possible to retrieve the primary key ID of this record upon such a violation, without doing an additional SELECT?
[Q_Score: 1 | Users Score: 1 | Score: 1.2 | DS&ML: 0 | is_accepted: true | A_Id: 29,874,681 | Web Dev: 0 | ViewCount: 1,878 | Available Count: 1 | DevOps: 0 | Net&APIs: 0 | Q_Id: 29,871,461]
Answer:
If the primary key is part of the UNIQUE constraint that led to the violation, you already have its value. Otherwise, the two columns in the UNIQUE constraint are an alternate key for the table, i.e., they can uniquely identify the conflicting row. If you need the actual primary key, you need to do an additional SELECT. (The primary key of the existing row is not part of the exception because it was never looked at during the INSERT attempt.)
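A sketch of the extra SELECT the answer says is unavoidable when you need the actual primary key (the table and column names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pairs (id INTEGER PRIMARY KEY, a TEXT, b TEXT, UNIQUE(a, b))")
conn.execute("INSERT INTO pairs (a, b) VALUES (?, ?)", ("x", "y"))

try:
    conn.execute("INSERT INTO pairs (a, b) VALUES (?, ?)", ("x", "y"))
except sqlite3.IntegrityError:
    # The UNIQUE columns identify the conflicting row, so look up its id.
    existing_id = conn.execute(
        "SELECT id FROM pairs WHERE a = ? AND b = ?", ("x", "y")
    ).fetchone()[0]
    print("duplicate of row", existing_id)
```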
[DB&SQL: 1 | GUI: 0 | Py Basics: 0]
Title: Return existing primary key ID upon constraint failure in sqlite3
[AnswerCount: 2 | Tags: python,python-3.x,sqlite,unique-constraint | Other: 0 | CreationDate: 2015-04-25T22:31:00.000]
I have two repository written in flask and django. These projects sharing the database model which is written in SQLAlchemy in flask and written in Django ORM. When I write migration script in flask as alembic, How can django project migrates with that script? I also think about Django with SQLAlchemy. But I can't find out Django projects using SQLAlchemy. Is that bad idea? Thanks.
[Q_Score: 6 | Users Score: 2 | Score: 0.197375 | DS&ML: 0 | is_accepted: false | A_Id: 29,890,773 | Web Dev: 1 | ViewCount: 3,757 | Available Count: 1 | DevOps: 0 | Net&APIs: 0 | Q_Id: 29,890,684]
Answer:
Firstly, don't do this; you're in for a world of pain. Use an API to pass data between apps. But if you are resigned to doing it, there isn't actually any problem with migrations. Write all of them in one app only, either Django or Alembic and run them there. Since they're sharing a database table, that's all there is to it.
[DB&SQL: 1 | GUI: 0 | Py Basics: 0]
Title: How to manage django and flask application sharing one database model?
[AnswerCount: 2 | Tags: python,django,flask,sqlalchemy | Other: 0 | CreationDate: 2015-04-27T08:25:00.000]
I am using the psycopg2 library with Python3 on a linux server to create some temporary tables on Redshift and querying these tables to get results and write to files on the server. Since my queries are long and takes about 15 minutes to create all these temp tables that I ultimate pull data from, how do I ensure that my connection persists and I don't lose the temp tables that I later query? Right now I just do a cursor() before the execute(), is there a default timeout for these? I have noticed that whenever I do a Select a,b from #results_table or select * from #results_table the query just freezes/hangs, but select top 35 from #results_table returns the results (select top 40 fails!). There are about a 100 rows in #results_table, and I am not able to get them all. I did a ps aux and the process just stays in the S+ state. If I manually run the query on Redshift it finishes in seconds. Any ideas?
[Q_Score: 0 | Users Score: 1 | Score: 1.2 | DS&ML: 0 | is_accepted: true | A_Id: 29,915,754 | Web Dev: 0 | ViewCount: 82 | Available Count: 1 | DevOps: 0 | Net&APIs: 0 | Q_Id: 29,893,476]
Answer:
Re-declaring a cursor doesn't create a new connection when using psycopg2.
[DB&SQL: 1 | GUI: 0 | Py Basics: 0]
Title: Does redeclaring a cursor create new connection while using psycopg2?
[AnswerCount: 1 | Tags: linux,postgresql,python-3.x,psycopg2,amazon-redshift | Other: 0 | CreationDate: 2015-04-27T10:42:00.000]
I am using Python's peewee ORM with MYSQL. I want to list the active connections for the PooledDatabase. Is there any way to list..?
[Q_Score: 1 | Users Score: 2 | Score: 1.2 | DS&ML: 0 | is_accepted: true | A_Id: 29,968,980 | Web Dev: 0 | ViewCount: 283 | Available Count: 1 | DevOps: 0 | Net&APIs: 0 | Q_Id: 29,962,386]
Answer:
What do you mean "active"? Active as in being "checked out" by a thread, or active as in "has a connection to the database"? For the first, you would just do pooled_db._in_use. For the second, it's a little trickier -- basically it will be the combination of pooled_db._in_use (a dict) and pooled_db._connections (a heap).
[DB&SQL: 1 | GUI: 0 | Py Basics: 0]
Title: Counting Active connections in peewee ORM
[AnswerCount: 1 | Tags: python-2.7,peewee | Other: 0 | CreationDate: 2015-04-30T08:10:00.000]
I'm stumped on this one, please help me oh wise stack exchangers... I have a function that uses xlrd to read in an .xls file which is a file that my company puts out every few months. The file is always in the same format, just with updated data. I haven't had issues reading in the .xls files in the past but the newest release .xls file is not being read in and is producing this error: *** formula/tFunc unknown FuncID:186 Things I've tried: I compared the new .xls file with the old to see if I could spot any differences. None that I could find. I deleted all of the macros that were contained in the file (older versions also had macros) Updated xlrd to version 0.9.3 but get the same error These files are originally .xlsm files. I open them and save them as .xls files so that xlrd can read them in. This worked just fine on previous releases of the file. After upgrading to xlrd 0.9.3 which supposedly supports .xlsx, I tried saving the .xlsm file as.xlsx and tried to read it in but got an error with a blank error message Useful Info: Python 2.7 xlrd 0.9.3 Windows 7 (not sure if this matters but...) My guess is that there is some sort of formula in the new file that xlrd doesn't know how to read. Does anybody know what FuncID: 186 is? Edit: Still no clue on where to go with this. Anybody out there run into this? I tried searching up FuncID 186 to see if it's an excel function but to no avail...
[Q_Score: 4 | Users Score: 0 | Score: 0 | DS&ML: 0 | is_accepted: false | A_Id: 30,945,220 | Web Dev: 0 | ViewCount: 1,917 | Available Count: 1 | DevOps: 0 | Net&APIs: 0 | Q_Id: 29,971,186]
Answer:
I had the same problem, and I think you have to check the Excel cells the formula refers to so that they are not being read as empty; that is how I solved it.
[DB&SQL: 1 | GUI: 0 | Py Basics: 0]
Title: Python XLRD Error : formula/tFunc unknown FuncID:186
[AnswerCount: 3 | Tags: python,windows,excel,python-2.7,xlrd | Other: 1 | CreationDate: 2015-04-30T15:01:00.000]
I had a duplicate sqlite database. I tried deleting the duplicate but instead deleted both. Is there a way I can generate a new database? The data was not especially important.
[Q_Score: 7 | Users Score: 3 | Score: 0.291313 | DS&ML: 0 | is_accepted: false | A_Id: 66,293,699 | Web Dev: 1 | ViewCount: 5,560 | Available Count: 1 | DevOps: 0 | Net&APIs: 0 | Q_Id: 29,991,871]
Answer:
When you have no database in your project, a simple python manage.py migrate will create a new db.sqlite3 file.
[DB&SQL: 1 | GUI: 0 | Py Basics: 0]
Title: Generating new SQLite database django
[AnswerCount: 2 | Tags: python,django,web | Other: 0 | CreationDate: 2015-05-01T17:30:00.000]
I have a table that stores tasks submitted by users, with timestamps. I would like to write a query that returns certain rows based on when they were submitted (was it this day/week/month..). To check if it was submitted on this week, I wanted to use date.isocalendar()[1] function. The problem is, that my timestamps are datetimes, so I would need to transform those to dates. Using func: filter(func.date(Task.timestamp) == datetime.date(datetime.utcnow())) works properly. But I need the date object's isocalendar() method, so I try filter(func.date(Task.timestamp).isocalendar()[1]==datetime.date(datetime.utcnow()).isocalendar()[1]) and it's no good, I get AttributeError: Neither 'Function' object nor 'Comparator' object has an attribute 'isocalendar' If I make a simple query and try datetime.date(task.timestamp).isocalendar()[1] it works properly. How do I get it to work in the query's filter?
[Q_Score: 0 | Users Score: 0 | Score: 0 | DS&ML: 0 | is_accepted: false | A_Id: 30,030,232 | Web Dev: 0 | ViewCount: 1,710 | Available Count: 1 | DevOps: 0 | Net&APIs: 0 | Q_Id: 30,029,827]
Answer:
Can you try sqlalchemy.extract(func.date('year', Task.timestamp)) == ... ?
[DB&SQL: 1 | GUI: 0 | Py Basics: 0]
Title: SQLAlchemy func issue with date and .isocalendar()
[AnswerCount: 3 | Tags: python,sqlite,date,datetime,sqlalchemy | Other: 0 | CreationDate: 2015-05-04T12:10:00.000]
Os: Mac 10.9 Python ver: 2.7.9 database: postgresql 9.3 I am putting the following command to install psycopg2 in my virtualenv: ARCHFLAGS=-Wno-error=unused-command-line-argument-hard-error-in-future pip install psycopg2 I am getting the following error: Traceback (most recent call last): File "/Users/dialynsoto/python_ex/crmeasy/venv/bin/pip", line 7, in from pip import main File "/Users/dialynsoto/python_ex/crmeasy/venv/lib/python2.7/site-packages/pip/init.py", line 13, in from pip.utils import get_installed_distributions, get_prog File "/Users/dialynsoto/python_ex/crmeasy/venv/lib/python2.7/site-packages/pip/utils/init.py", line 18, in from pip.locations import ( File "/Users/dialynsoto/python_ex/crmeasy/venv/lib/python2.7/site-packages/pip/locations.py", line 9, in import tempfile File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/tempfile.py", line 35, in from random import Random as _Random File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/random.py", line 49, in import hashlib as _hashlib File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 138, in _hashlib.openssl_md_meth_names) AttributeError: 'module' object has no attribute 'openssl_md_meth_names' Any clues ?
[Q_Score: 0 | Users Score: 0 | Score: 0 | DS&ML: 0 | is_accepted: false | A_Id: 30,148,423 | Web Dev: 0 | ViewCount: 213 | Available Count: 1 | DevOps: 0 | Net&APIs: 0 | Q_Id: 30,148,133]
Answer:
Try to find hashlib module within your system. It is likely that you have two modules and the one that is being imported is the wrong one (remove the wrong one if it is the case) or you should simply upgrade your python version.
[DB&SQL: 1 | GUI: 0 | Py Basics: 0]
Title: psycopg2 error installation in virtualenv
[AnswerCount: 1 | Tags: python | Other: 0 | CreationDate: 2015-05-10T05:41:00.000]
cursor.execute(sql_statement) conn.close() return cursor the above are the closing lines of my program. I've 3 html pages (users, workflows, home), returning curosor is triggering data for workflows and home page, but not for users page Where as, if i do return cursor.fetchall(), then it's working for all 3 pages. The reason why i want to return cursor is, the client might want to iterate or do other processing on the cursor. I'm not sure what am doing different with Users page.
[Q_Score: 0 | Users Score: 0 | Score: 0 | DS&ML: 0 | is_accepted: false | A_Id: 30,181,819 | Web Dev: 1 | ViewCount: 24 | Available Count: 1 | DevOps: 0 | Net&APIs: 0 | Q_Id: 30,181,471]
Answer:
If you close the connection, you cannot iterate a cursor anymore. There is no connection to the database.
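A minimal illustration of the point: materialize the rows with fetchall() while the connection is still open, then close it (psycopg2 and the DSN/query here are assumptions based on the question's PostgreSQL tags):

```python
import psycopg2

def get_users():
    conn = psycopg2.connect("dbname=mydb user=me")  # placeholder DSN
    try:
        cur = conn.cursor()
        cur.execute("SELECT id, name FROM users")
        rows = cur.fetchall()   # pull the results before closing
    finally:
        conn.close()
    return rows                 # safe to iterate after the connection is gone
```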
[DB&SQL: 1 | GUI: 0 | Py Basics: 0]
Title: Returning cursor isn't retrieving data from DB
[AnswerCount: 1 | Tags: python-2.7,postgresql-9.3 | Other: 0 | CreationDate: 2015-05-12T03:52:00.000]
I have a scenario in which I am writing formula for calculating the sum of values of different cells in xlsx. After calculating the sum I write it into different cell. I am doing this in python and for xlsx writing I am using xlsxwriter. For writing values I am using inmemory option for xlsxwritter and then I am reading it using xlrd by passing in memory string buffer to its constructor but when I am accessing the cell in which sum is written I am getting 0. I understand why this is happening. But is there any way of calculating the sum value in memory. So that when I read it I get the calculated sum using formula.
[Q_Score: 1 | Users Score: 0 | Score: 0 | DS&ML: 0 | is_accepted: false | A_Id: 30,227,506 | Web Dev: 0 | ViewCount: 505 | Available Count: 1 | DevOps: 0 | Net&APIs: 0 | Q_Id: 30,222,389]
Answer:
But is there any way of calculating the sum value in memory. Not with XlsxWriter since it doesn't have a calculation engine like Excel. However, if you only need to do a sum then you could do that in Python.
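One workaround not mentioned in the answer, assuming a current XlsxWriter: write_formula() accepts an optional cached result as its last argument, so you can compute the sum in Python and store it next to the formula; readers such as xlrd that never evaluate formulas then see the number instead of 0:

```python
import xlsxwriter

values = [1, 2, 3, 4]

workbook = xlsxwriter.Workbook("sums.xlsx")
worksheet = workbook.add_worksheet()

for row, v in enumerate(values):
    worksheet.write(row, 0, v)

# Write the formula and also store its precomputed result (last argument),
# so tools that do not calculate formulas still see the sum.
worksheet.write_formula(len(values), 0, "=SUM(A1:A4)", None, sum(values))

workbook.close()
```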
[DB&SQL: 1 | GUI: 0 | Py Basics: 0]
Title: How to trigger calculation of formula in xlsx while writing value cell
[AnswerCount: 1 | Tags: python,xlsx,xlsxwriter | Other: 0 | CreationDate: 2015-05-13T18:15:00.000]
I am using sqlalchemy to query memory logs off a MySql database. I am using: session.query(Memory).filter(Memmory.timestamp.between(from_date, to_date)) but the results after using the time window are still too many. Now I want to query for results withing the time window, but filtered down by asking for entries logged every X minutes/hours and skipping the ones between, but cannot find a simple way to do it. To further elaborate, lets say the 'l's are all the results from a query in a given timewindow: lllllllllllllllllllllllllllllllllllllllllllllllllllllllllllll To dilute them, I am looking for a query that will return only the 'l's every X minutes/hours so that I am not overwhelmed: l......l......l.....l......l......l......l.....l.....l.....l. I could get everything and then write a function that does this, but that beats the purpose of avoiding choking with results in the first place. Sidenote: Worse comes to worse, I can ask for a row after skipping a predifined number of rows, using mod on the row id column. But it would be great to avoid that since there is a timestamp (DateTime type of sqlalchemy). Edit: There could be some value using group by on timestamp and then somehow selecting a row from every group, but still not sure how to do this in a useful manner with sqlalchemy.
[Q_Score: 0 | Users Score: 0 | Score: 0 | DS&ML: 0 | is_accepted: false | A_Id: 30,238,066 | Web Dev: 0 | ViewCount: 587 | Available Count: 1 | DevOps: 0 | Net&APIs: 0 | Q_Id: 30,234,706]
Answer:
If I understand correctly your from_date and to_date are just dates. If you set them to python datetime objects with the date/times you want your results between, it should work.
[DB&SQL: 1 | GUI: 0 | Py Basics: 0]
Title: How to query rows with a minute/hour step interval in SqlAlchemy?
[AnswerCount: 1 | Tags: python,mysql,flask,sqlalchemy | Other: 0 | CreationDate: 2015-05-14T10:14:00.000]
I want to create a table(postgres) that stores data about what items were viewed by what user. authenticated users are no problem but how can I tell one anonymous user from another anonymous user? This is needed for analysis purposes. maybe store their IP address as unique ID? How can I do this?
[Q_Score: 8 | Users Score: 7 | Score: 1.2 | DS&ML: 0 | is_accepted: true | A_Id: 30,298,038 | Web Dev: 1 | ViewCount: 4,523 | Available Count: 1 | DevOps: 0 | Net&APIs: 0 | Q_Id: 30,297,785]
Answer:
I think you should use cookies. When a user that is not authenticated makes a request, look for a cookie named whatever ("nonuserid" in this case). If the cookie is not present it means it's a new user so you should set the cookie with a random id. If it's present you can use the id in it to identificate the anonymous user.
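A rough Django sketch of the cookie idea (the cookie name and view are made up; middleware would be the more systematic place to do this):

```python
import uuid
from django.http import HttpResponse

def track_item_view(request):
    response = HttpResponse("ok")
    anon_id = request.COOKIES.get("nonuserid")
    if anon_id is None:
        # First request from this browser: hand out a random id.
        anon_id = uuid.uuid4().hex
        response.set_cookie("nonuserid", anon_id, max_age=365 * 24 * 3600)
    # ... record (anon_id, viewed_item) in the tracking table here ...
    return response
```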
[DB&SQL: 1 | GUI: 0 | Py Basics: 0]
Title: how to give some unique id to each anonymous user in django
[AnswerCount: 2 | Tags: python,django,authentication | Other: 0 | CreationDate: 2015-05-18T07:52:00.000]
I have a 3+ million record XLS file which I need to dump into an Oracle 12C DB (direct dump) using Python 2.7. I am using the cx_Oracle Python package to establish connectivity to Oracle, but reading and dumping the XLS (using the openpyxl package) is extremely slow and performance degrades for thousands/millions of records. From a scripting standpoint I used two ways: I've tried bulk load, by reading all the values into an array and then dumping it using cursor prepare (with bind variables) and cursor fetchmany; this doesn't work well with huge data. I've also tried iterative loading of the data as it is being fetched; even this way has performance issues. What options and techniques/packages can I deploy as a best practice to load this volume of data from XLS to Oracle DB? Is it advisable to load this volume of data via scripting or should I necessarily use an ETL tool? As of now I only have the option of Python scripting, so please do answer the former.
4
2
1.2
0
true
30,324,469
0
4,256
3
0
0
30,324,370
If it is possible to export your Excel file as a CSV, then all you need is to use sqlldr to load the file into the DB.
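A rough sketch of producing that CSV from Python, assuming the workbook is actually an xlsx file (openpyxl does not read legacy .xls), has a single relevant sheet, and the file names are placeholders; read_only mode streams rows so the 3+ million records are never all in memory (written for Python 3 — adjust the open mode for Python 2.7). The resulting CSV would then be loaded with sqlldr as the answer suggests.

    import csv
    from openpyxl import load_workbook

    wb = load_workbook("data.xlsx", read_only=True)   # streams rows instead of loading everything
    ws = wb.active

    with open("data.csv", "w", newline="") as out:
        writer = csv.writer(out)
        for row in ws.iter_rows():
            writer.writerow([cell.value for cell in row])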
1
0
0
loading huge XLS data into Oracle using python
5
python,oracle,cx-oracle
0
2015-05-19T11:29:00.000
I have a 3+ million record XLS file which I need to dump into an Oracle 12C DB (direct dump) using Python 2.7. I am using the cx_Oracle Python package to establish connectivity to Oracle, but reading and dumping the XLS (using the openpyxl package) is extremely slow and performance degrades for thousands/millions of records. From a scripting standpoint I used two ways: I've tried bulk load, by reading all the values into an array and then dumping it using cursor prepare (with bind variables) and cursor fetchmany; this doesn't work well with huge data. I've also tried iterative loading of the data as it is being fetched; even this way has performance issues. What options and techniques/packages can I deploy as a best practice to load this volume of data from XLS to Oracle DB? Is it advisable to load this volume of data via scripting or should I necessarily use an ETL tool? As of now I only have the option of Python scripting, so please do answer the former.
4
0
0
0
false
30,328,198
0
4,256
3
0
0
30,324,370
Excel also comes with ODBC support so you could pump straight from Excel to Oracle assuming you have the drivers. That said, anything that involves transforming a large amount of data in memory (from whatever Excel is using internally) and then passing it to the DB is likely to be less performant than a specialised bulk operation which can be optimised to use less memory. Going through Python just adds another layer to the task (Excel to Python to Oracle), though it might be possible to set this up to use streams.
1
0
0
loading huge XLS data into Oracle using python
5
python,oracle,cx-oracle
0
2015-05-19T11:29:00.000
I have a 3+ million record XLS file which I need to dump into an Oracle 12C DB (direct dump) using Python 2.7. I am using the cx_Oracle Python package to establish connectivity to Oracle, but reading and dumping the XLS (using the openpyxl package) is extremely slow and performance degrades for thousands/millions of records. From a scripting standpoint I used two ways: I've tried bulk load, by reading all the values into an array and then dumping it using cursor prepare (with bind variables) and cursor fetchmany; this doesn't work well with huge data. I've also tried iterative loading of the data as it is being fetched; even this way has performance issues. What options and techniques/packages can I deploy as a best practice to load this volume of data from XLS to Oracle DB? Is it advisable to load this volume of data via scripting or should I necessarily use an ETL tool? As of now I only have the option of Python scripting, so please do answer the former.
4
0
0
0
false
63,836,641
0
4,256
3
0
0
30,324,370
Automate the export of XLSX to CSV as mentioned in a previous answer. But, instead of then calling a sqlldr script, create an external table that uses your sqlldr code. It will load your table from the CSV each time the table is selected from.
1
0
0
loading huge XLS data into Oracle using python
5
python,oracle,cx-oracle
0
2015-05-19T11:29:00.000
Sorry for the rookie question. I have a sqlite file and I need to get table column names. How can I get them?
1
2
1.2
0
true
30,329,701
0
696
1
0
0
30,329,528
Use the PRAGMA table_info(spamtable) command. The column names will be at index 1 of the returned tuples.
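A minimal sketch of that from Python, assuming the file path and table name are placeholders:

    import sqlite3

    conn = sqlite3.connect("mydb.sqlite")
    cur = conn.execute("PRAGMA table_info(spamtable)")
    # Each returned row is (cid, name, type, notnull, dflt_value, pk); the name is index 1.
    column_names = [row[1] for row in cur.fetchall()]
    print(column_names)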
1
0
0
How to get list of column names of a sqlite db file
2
python,database,sqlite
0
2015-05-19T15:12:00.000
I am using GoogleScraper for some automated searches in Python. GoogleScraper keeps the search results for search queries in its database named google_scraper.db. E.g. if I have searched site:*.us engineering books and, due to an internet issue while GoogleScraper was producing the JSON file, the result is missed and the JSON file is not what it should be, then when I search that command again using GoogleScraper it gives the same result even though the internet is now working fine. I mean to say that GoogleScraper maintains its database for a query it has already searched and does not search it again: when I search a command whose result is stored in the database, it does not give a new result but gives the results previously stored in the database.
0
0
1.2
0
true
30,433,834
1
192
1
0
1
30,347,571
I solved the issue of GoogleScraper keeping searches in its database: we first have to run the following command, GoogleScraper --clean. This command cleans all the cache and we can search again and get new results. Regards!
1
0
0
GoogleScraper keeps searches in database
1
python,bash,web-scraping
0
2015-05-20T10:54:00.000
How do I do syncdb in Django 1.4.2? I.e., having data in the database, how do I load the models again when the data schema is updated? Thanks in advance.
1
3
1.2
0
true
30,392,918
1
2,775
1
0
0
30,387,974
Thanks Amyth for the hints. BTW the commands are a bit different; I will post a 10x-tested result here, using South. 1. Set up the model: python manage.py schemamigration models --initial. 2. Dump data if you have to: python manage.py dumpdata -e contenttypes -e auth.Permission --natural > data.json. 3. Syncdb: python manage.py syncdb, then python manage.py migrate models. 4. Load the data back into the db: python manage.py loaddata data.json. Afterwards, you may use python manage.py schemamigration models --auto and python manage.py migrate models after every change you make in the models schema. A few notes: 1. Unloading the database and reloading it is essential, because if you don't, the first migration will tell you those models already exist. 2. The -e contenttypes -e auth.Permission --natural parameters in dumpdata are essential, otherwise an exception will be thrown when doing loaddata.
1
0
0
How to do django syncdb in version 1.4.2?
2
python,django,django-models,django-syncdb
0
2015-05-22T03:38:00.000
I could create tables using the command alembic revision -m 'table_name', then defining the versions and migrating using alembic upgrade head. Also, I could create tables in a database by defining a class in models.py (SQLAlchemy). What is the difference between the two? I'm very confused. Have I messed up the concept? Also, when I migrate the database using Alembic, why doesn't it create a new class in my models.py? I know the tables have been created because I checked them using a SQLite browser. I have done all the configurations already. The target for Alembic's database and SQLALCHEMY_DATABASE_URI in config.py are the same .db file.
19
52
1.2
0
true
30,425,438
1
10,088
1
0
0
30,425,214
Yes, you are thinking about it in the wrong way. Let's say you don't use Alembic or any other migration framework. In that case you create a new database for your application with the following steps: (1) write your model classes; (2) create and configure a brand new database; (3) run db.create_all(), which looks at your models and creates the corresponding tables in your database. So now consider the case of an upgrade. For example, let's say you release version 1.0 of your application and now start working on version 2.0, which requires some changes to your database. How can you achieve that? The limitation here is that db.create_all() does not modify tables; it can only create them from scratch. So it goes like this: (4) make the necessary changes to your model classes; (5) now you have two options to transfer those changes to the database: (5.1) destroy the database so that you can run db.create_all() again to get the updated tables, maybe backing up and restoring the data so that you don't lose it (unfortunately SQLAlchemy does not help with the data, you'll have to use database tools for that), or (5.2) apply the changes manually, directly to the database; this is error prone, and it would be tedious if the change set is large. Now consider that you have development and production databases; that means the work needs to be done twice. Also think about how tedious it would be when you have several releases of your application, each with a different database schema, and you need to investigate a bug in one of the older releases, for which you need to recreate the database as it was in that release. See what the problem is when you don't have a migration framework? Using Alembic, you have a little bit of extra work when you start, but it pays off because it simplifies your workflow for your upgrades. The creation phase goes like this: write your model classes; create and configure a brand new database; generate an initial Alembic migration, either manually or automatically (if you go with automatic migrations, Alembic looks at your models and generates the code that applies those to the database); run the upgrade command, which runs the migration script, effectively creating the tables in your database. Then when you reach the point of doing an upgrade, you do the following: make the necessary changes to your model classes; generate another Alembic migration (if you let Alembic generate this for you, it compares your model classes against the current schema in your database and generates the code necessary to make the database match the models); run the upgrade command. This applies the changes to the database, without the need to destroy any tables or back up data. You can run this upgrade on all your databases (production, development, etc.). Important things to consider when using Alembic: the migration scripts become part of your source code, so they need to be committed to source control along with your own files. If you use the automatic migration generation, you always have to review the generated migrations; Alembic is not always able to determine the exact changes, so it is possible that the generated script needs some manual fine tuning. Migration scripts have upgrade and downgrade functions. That means that they not only simplify upgrades, but also downgrades. If you need to sync the database to an old release, the downgrade command does it for you without any additional work on your part!
1
0
0
What is the difference between creating db tables using alembic and defining models in SQLAlchemy?
2
python,flask,sqlalchemy,alembic
0
2015-05-24T15:30:00.000
My problem is rather simple : I have an Excel Sheet that does calculations and creates a graph based on the values of two cells in the sheet. I also have two lists of inputs in text files. I would like to loop through those text files, add the values to the excel sheet, refresh the sheet, and print the resulting graph to a pdf file or an excel file named something like 'input1 - input2.xlsx'. My programming knowledge is limited, I am decent with Python and have looked into python libraries that work with excel such as openpyxl, however most of those don't seem to work for me for various reasons. Openpyxl deletes the graphs when opening an excel file; XlsxWriter can only write files, not read from them; and xlwings won't work for me. Should I use python, which I'm familiar with, or would VBA work for this kind of problem? Have any of you ever done something of the sort? Thanks in advance
1
1
0.099668
0
false
30,436,742
0
226
1
0
0
30,436,329
I think you should consider win32com for Excel operations in Python instead of openpyxl/XlsxWriter. You can read/write Excel, create charts and format Excel files using win32com without any limitation. For creating charts you can also consider matplotlib; after creating a chart there you can save it to a PDF file as well.
1
0
0
Automatic input from text file in excel
2
python,excel
0
2015-05-25T10:34:00.000
I have a large A.csv file (~5 Gb) with several columns. One of the columns is Model. There is another large B.csv file (~15 Gb) with Vendor, Name and Model columns. Two questions: 1) How can I create a result file that combines all columns from A.csv and the corresponding Vendor and Name from B.csv (join on Model)? The trick is: how to do it when my RAM is only 4 Gb, and I'm using Python. 2) How can I create a sample (say, 1 Gb) result file that combines a random subsample from A.csv (all columns) joined with Vendor and Name from B.csv? The trick is, again, the 4 Gb of RAM. I know how to do it in pandas, but 4 Gb is a limiting factor I can't overcome.
0
0
0
1
false
30,441,330
0
862
1
0
0
30,441,107
As @Marc B said, reading one row at a time is the solution. For the join I would do the following (pseudocode; I don't know Python): 1. "Select distinct Model from A" on the first file A.csv: read all rows, find the Model field and collect the distinct values in a list/array/map. 2. "Select distinct Model from B" on the second file B.csv: same operation as 1, but using another list/array/map. 3. Find matching models: compare the two lists/arrays/maps, keeping only the matching models (they will be part of the join). 4. Do the join: for each row of file A with a matching model, read all the rows of file B with the same model and write a file C with the join result; do this for all models. Note: it's not particularly optimized. For point 2 of the question, just choose a subset of matching models and/or read a part of the rows of file A and/or B with matching models.
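A rough Python sketch of that row-at-a-time idea, assuming both files have header rows, the column names from the question, and that the set of distinct models plus the (Vendor, Name) pairs for matching models fit in 4 Gb (file names are placeholders):

    import csv

    # Pass 1: collect the distinct models that occur in A.csv
    with open("A.csv") as fa:
        models = {row["Model"] for row in csv.DictReader(fa)}

    # Pass 2: for those models only, remember Vendor/Name from B.csv
    lookup = {}
    with open("B.csv") as fb:
        for row in csv.DictReader(fb):
            if row["Model"] in models:
                lookup[row["Model"]] = (row["Vendor"], row["Name"])

    # Pass 3: stream A.csv again and write the joined result row by row
    with open("A.csv") as fa, open("result.csv", "w", newline="") as out:
        reader = csv.DictReader(fa)
        writer = csv.writer(out)
        writer.writerow(reader.fieldnames + ["Vendor", "Name"])
        for row in reader:
            vendor, name = lookup.get(row["Model"], ("", ""))
            writer.writerow([row[f] for f in reader.fieldnames] + [vendor, name])

For the 1 Gb sample, the same third pass could keep a row only with some probability (e.g. random.random() < 0.2).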
1
0
0
Concatenate large files in sql-like way with limited RAM
3
python,file,memory,merge
0
2015-05-25T14:57:00.000
I'm trying to install MySQL server on a Windows 7 machine that has Python 3.4.3 installed. However, when trying to install the Python connectors for 3.4, the installer fails to recognize the Python installation, saying Python 3.4 is not installed. Has anyone solved this issue before? It's driving me nuts...
26
13
1
0
false
39,704,698
0
35,704
7
0
0
30,467,495
Just to add to the murkiness, I had the same error with current version of MySql install when attempting with python 3.5 installed (which is the latest python download). Long story short, I uninstalled python 3.5, installed python 3.4.4 (which interestingly didn't update PATH so I updated it manually) and reran installer and it found the python installation. So my conclusion is the MySql installer is tied to certain versions of the add-on products which in this case meant specifically python 3.4
1
0
0
mysql installer fails to recognize python 3.4
12
mysql,python-3.x,installation
0
2015-05-26T19:42:00.000
I'm trying to install MySQL server on a Windows 7 machine that has Python 3.4.3 installed. However, when trying to install the Python connectors for 3.4, the installer fails to recognize the Python installation, saying Python 3.4 is not installed. Has anyone solved this issue before? It's driving me nuts...
26
10
1
0
false
35,611,377
0
35,704
7
0
0
30,467,495
Just in case anyone else has this issue in the future: look at which bit version you have for Python 3.4. When I installed the 64-bit version of Python 3.4, this issue went away.
1
0
0
mysql installer fails to recognize python 3.4
12
mysql,python-3.x,installation
0
2015-05-26T19:42:00.000
I'm trying to install MySQL server on a Windows 7 machine that has Python 3.4.3 installed. However, when trying to install the Python connectors for 3.4, the installer fails to recognize the Python installation, saying Python 3.4 is not installed. Has anyone solved this issue before? It's driving me nuts...
26
8
1
0
false
54,292,906
0
35,704
7
0
0
30,467,495
I ran into a similar issue with Python 3.7.2. In my case, the problem was that I tried to install the 64 bit MySQL connector, but had the 32 bit version of Python installed on my machine. I got a similar error message: Python v3.7 not found. We only support Python installed using the Microsoft Windows Installer (MSI) [...] The problem just went away by installing the 32 bit MySQL connector instead.
1
0
0
mysql installer fails to recognize python 3.4
12
mysql,python-3.x,installation
0
2015-05-26T19:42:00.000
I'm trying to install MySQL server on a Windows 7 machine that has Python 3.4.3 installed. However, when trying to install the Python connectors for 3.4, the installer fails to recognize the Python installation, saying Python 3.4 is not installed. Has anyone solved this issue before? It's driving me nuts...
26
2
0.033321
0
false
30,468,759
0
35,704
7
0
0
30,467,495
From my experience, if you have both Python 2.7 and Python 3.4 installed when installing the MySQL connector for 3.4, you will run into this issue. I'm not sure why, but for some reason if you have 2.7 installed, the 3.4 MySQL connector recognizes that version first, just assumes that you only have 2.7 installed, and does not recognize that 3.4 is installed. The only way I have found around this is to uninstall Python 2.7 and then install the 3.4 MySQL connector. You can always install 2.7 again after the fact.
1
0
0
mysql installer fails to recognize python 3.4
12
mysql,python-3.x,installation
0
2015-05-26T19:42:00.000
I'm trying to install MySQL server on a Windows 7 machine that has Python 3.4.3 installed. However, when trying to install the Python connectors for 3.4, the installer fails to recognize the Python installation, saying Python 3.4 is not installed. Has anyone solved this issue before? It's driving me nuts...
26
0
0
0
false
53,574,430
0
35,704
7
0
0
30,467,495
I had this problem until I discovered I had installed Python with a different architecture (32-bit). MySQL required 64-bit.
1
0
0
mysql installer fails to recognize python 3.4
12
mysql,python-3.x,installation
0
2015-05-26T19:42:00.000
I'm trying to install MySQL server on a Windows 7 machine that has Python 3.4.3 installed. However, when trying to install the Python connectors for 3.4, the installer fails to recognize the Python installation, saying Python 3.4 is not installed. Has anyone solved this issue before? It's driving me nuts...
26
1
0.016665
0
false
50,449,146
0
35,704
7
0
0
30,467,495
I was looking for a similar answer. The correct answer is that there is a bug in the mysql-connector MSI. When Python installs, it creates a registry entry under HKLM Software\Python\PythonCore\3.6-32\InstallPath; however, the MSI for the connector looks for the installation path in the registry at Software\Python\PythonCore\3.6\InstallPath as part of the RegLocator/registrypath variable. Use Orca to edit the MSI and change the RegLocator so that -32 is in the path. It will then install without error or changes to the system.
1
0
0
mysql installer fails to recognize python 3.4
12
mysql,python-3.x,installation
0
2015-05-26T19:42:00.000
I'm trying to install MySQL server on a Windows 7 machine that has Python 3.4.3 installed. However, when trying to install the Python connectors for 3.4, the installer fails to recognize the Python installation, saying Python 3.4 is not installed. Has anyone solved this issue before? It's driving me nuts...
26
0
0
0
false
61,068,804
0
35,704
7
0
0
30,467,495
Here is a much simpler workaround: pip install mysql-connector-python. It is the same package that MySQL is having trouble installing, so just use pip to install it. Next, go back to the installation type and select "Manual" instead of "Developer". They are identical, but "Manual" allows you to remove packages. Just remove the "Connector/Python" package from the list to be installed. Carry on with the install and you're done.
1
0
0
mysql installer fails to recognize python 3.4
12
mysql,python-3.x,installation
0
2015-05-26T19:42:00.000
I'm not sure if this has been answered before, I didn't get anything on a quick search. My table is built in a random order, but thereafter it is modified very rarely. I do frequent selects from the table and in each select I need to order the query by the same column. Now is there a way to sort a table permanently by a column so that it does not need to be done again for each select?
2
3
1.2
0
true
30,504,339
0
344
1
0
0
30,503,358
You can add an index on the column you want to order by. The index keeps that column's values in sorted order, so queries that ORDER BY it can read the data pre-sorted instead of sorting on every select.
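A minimal sketch of declaring such an index in SQLAlchemy's ORM; the table and column names here are made up, not from the question:

    from sqlalchemy import Column, Integer, String, create_engine
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Item(Base):
        __tablename__ = "items"
        id = Column(Integer, primary_key=True)
        # index=True creates a b-tree index, so ORDER BY name can use it
        name = Column(String, index=True)

    engine = create_engine("sqlite://")
    Base.metadata.create_all(engine)

Queries would still spell out session.query(Item).order_by(Item.name), but the database can satisfy the ordering from the index rather than re-sorting each time.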
1
0
0
SQLAlchemy: how can I order a table by a column permanently?
2
python,sqlalchemy
0
2015-05-28T10:04:00.000
I have two models, and when I do request.POST.get('room_id') or ('id') I'm getting the error "Room matching query does not exist". How do I solve this problem? class Room(models.Model): status = models.BooleanField('Status',default=True) name = models.CharField('Name', max_length=100, unique=True) class Book(models.Model): date = models.DateTimeField('Created',auto_now_add=True) from_date = models.DateField('Check-in') to_date = models.DateField('Check-out') room = models.ForeignKey(Room, related_name='booking') In the room detail view I need to get the id from the request and the booked date ranges (from_date, to_date): def room_detail(request,pk): room = get_object_or_404(Room,pk=pk) if request.method == 'POST': form = BookForm(request.POST,room=room) if form.is_valid(): s = form.save(commit=True) s.save() return redirect(request.path) else: form = BookForm() #roomid = Room.objects.values('id') type = request.POST.get('id') # or get('room_id') rooms = Room.objects.get(id=type) start_dates = rooms.booking.values_list('from_date',flat=True) end_dates = rooms.booking.values_list('to_date',flat=True) dates = [start + timedelta(days=i) for start, end in zip(start_dates,end_dates) for i in range((end-start).days+1)] c = {} c['form'] = form return render_to_response('rooms_detail.html',c) Please help me, thanks in advance.
1
3
0.197375
0
false
30,517,111
1
1,484
1
0
0
30,517,002
You are looking into request.POST even if request.method is not equal to 'POST'. This will not work because, when the request is not an HTTP POST, the POST member of your request is empty.
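A hedged sketch of one way to restructure the view accordingly: since the URL already supplies pk, nothing needs to be read from request.POST for the lookup at all (Room and the view name come from the question's own code, which is assumed to be importable here).

    from datetime import timedelta
    from django.shortcuts import get_object_or_404

    def room_detail(request, pk):
        # The room is already identified by the URL, so use it directly
        # instead of Room.objects.get(id=request.POST.get('id')).
        room = get_object_or_404(Room, pk=pk)
        start_dates = room.booking.values_list('from_date', flat=True)
        end_dates = room.booking.values_list('to_date', flat=True)
        dates = [start + timedelta(days=i)
                 for start, end in zip(start_dates, end_dates)
                 for i in range((end - start).days + 1)]
        ...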
1
0
0
django models request get id error Room matching query does not exist
3
django,python-3.x,django-queryset,models
0
2015-05-28T20:59:00.000
First, the server setup: nginx frontend to the world gunicorn running a Flask app with gevent workers Postgres database, connection pooled in the app, running from Amazon RDS, connected with psycopg2 patched to work with gevent The problem I'm encountering is inexplicably slow queries that are sometimes running on the order of 100ms or so (ideal), but which often spike to 10s or more. While time is a parameter in the query, the difference between the fast and slow query happens much more frequently than a change in the result set. This doesn't seem to be tied to any meaningful spike in CPU usage, memory usage, read/write I/O, request frequency, etc. It seems to be arbitrary. I've tried: Optimizing the query - definitely valid, but it runs quite well locally, as well as any time I've tried it directly on the server through psql. Running on a larger/better RDS instance - I'm currently working on an m3.medium instance with PIOPS and not coming close to that read rate, so I don't think that's the issue. Tweaking the number of gunicorn workers - I thought this could be an issue, if the psycopg2 driver is having to context switch excessively, but this had no effect. More - I've been working for a decent amount of time at this, so these were just a couple of the things I've tried. Does anyone have ideas about how to debug this problem?
0
0
0
0
false
30,519,353
1
1,181
1
0
0
30,519,299
You could try this from within psql to get more details on query timing: EXPLAIN sql_statement. Also turn on more database logging. MySQL has slow query analysis (the slow query log); PostgreSQL's equivalent is the log_min_duration_statement setting.
1
0
0
Inconsistently slow queries in production (RDS)
2
python,postgresql,amazon-rds,gevent
0
2015-05-29T00:34:00.000
I can do http://127.0.0.1:5000/people?where={"lastname":"like(\"Smi%\")"} to get people.lastname LIKE "Smi%" How do I concat two conditions, like where city=XX and pop<1000 ?
1
2
1.2
0
true
30,680,531
0
113
1
0
0
30,672,259
It's quite simple; you just do: http://127.0.0.1:5000/people?where={"city":"XX", "pop":"<1000"}
1
0
0
Eve SQLAlchemy query catenation
1
python,sqlalchemy,eve
0
2015-06-05T17:15:00.000
I'm working on a project where I have to store about 17 million 128-dimensional integer arrays, e.g. [1, 2, 1, 0, ..., 2, 6, 4], and I'm trying to figure out the best way to do it. The perfect solution would be one that makes it fast to both store and retrieve the arrays, since I need to access ALL of them to make calculations. With such a vast amount of data, I obviously can't store them all in memory in order to make calculations, so accessing batches of arrays should be as fast as possible. I'm working in Python. What do you recommend? Using a DB (SQL vs NoSQL?), storing it in a text file, or using Python's pickle?
2
0
0
1
false
30,684,782
0
1,207
1
0
0
30,682,311
It seems not so big with numpy arrays, if your integers are 8 bits: a = numpy.ones((int(17e6), 128), numpy.uint8) is created in less than a second on my computer. But ones((int(17e6), 128), numpy.uint16) is difficult, and ones((int(17e6), 128), numpy.uint64) crashed.
1
0
1
Fastest way to store and retrieve arrays
2
python,sql,arrays,database,nosql
0
2015-06-06T11:32:00.000
My use case is simple: I have performed some kind of operation on an image and the resulting feature vector is a numpy object of shape rows x 1000 (what I mean is that the row number can be variable but the column number is always 1000). I want to store this numpy array in MySQL. No operation is to be performed on this array. The query will be simple: given an image name, return the whole feature vector. So is there any way the array can be stored (something like a magic container which encapsulates the array, is put into the table, and on retrieval pops the array back out)? I want to do this in Python. If possible, support this with a short code snippet showing how to put the data in the MySQL database.
9
8
1.2
1
true
30,713,767
0
7,366
1
0
0
30,713,062
You could use ndarray.dumps() to pickle it to a string then write it to a BLOB field? Recover it using numpy.loads()
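A minimal sketch of that idea using the standard pickle module (equivalent to ndarray.dumps()/numpy.loads(), which are deprecated in newer NumPy releases). The table features(image_name, vector LONGBLOB), connection parameters and file name are all made-up assumptions, not from the question.

    import pickle
    import numpy as np
    import MySQLdb

    features = np.random.rand(5, 1000)          # stand-in for the real feature vector

    conn = MySQLdb.connect(host="localhost", user="user", passwd="pw", db="imgdb")
    cur = conn.cursor()

    blob = pickle.dumps(features, protocol=2)   # equivalent of features.dumps()
    cur.execute("INSERT INTO features (image_name, vector) VALUES (%s, %s)",
                ("cat.jpg", blob))
    conn.commit()

    cur.execute("SELECT vector FROM features WHERE image_name = %s", ("cat.jpg",))
    restored = pickle.loads(cur.fetchone()[0])  # equivalent of numpy.loads()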
1
0
0
store numpy array in mysql
2
python,mysql,arrays,numpy
0
2015-06-08T15:20:00.000
I have tried importing a CSV file using bulk insert but it failed. Is there another way, in a query, to import a CSV file without using bulk insert? So far this is my query, but it uses bulk insert: bulk insert [dbo].[TEMP] from 'C:\Inetpub\vhosts\topimerah.org\httpdocs\SNASPV0374280960.txt' with (firstrow=2,fieldterminator = '~', rowterminator = ' ');
0
0
1.2
1
true
30,724,975
0
777
1
0
0
30,724,143
My answer is to work with bulk insert. 1. Make sure you have bulkadmin permission on the server. 2. Use a SQL authentication login for the bulk-insert operation (for me, Windows authentication logins mostly haven't worked).
1
0
0
how to import file csv without using bulk insert query?
1
python,mysql,sql,sql-server,csv
0
2015-06-09T06:05:00.000
I have a couple thousand lines of data in excel. In one column, however, only every fifth line is filled. What I'm trying to do is fill in the four empty lines below each filled line with the data from the line above. I have a beginner's grasp of python, so if someone could steer me in the right direction, it would be a great help. Thanks a lot.
0
2
0.132549
0
false
30,811,295
0
1,002
1
0
0
30,810,963
Based on your description, this seems easy enough to do in Excel: Assume row 1 contains column headers, and data begin in row 2. If column A contains your values (starting in A2), in cell B2 use the formula =IF(ISBLANK(A2), B1, A2) and fill down. This formula will return the value of A2 if it is not blank, and will return the previous value in column B if the current value in column A is blank. Note that this requires that the first cell in each group contains the value that you want to fill down. A post-script for general reference: Excel has a hard time with blank cells resulting from formulas, so the formula ="" (or the result of something like =IFERROR(..., "")) is not blank, but does have a length of 0. Changing ISBLANK(A2) to LEN(A2)<1 accounts for these situations.
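If you would rather do this from Python, as the question hints at, a rough pandas equivalent might look like the following; the file name and the column name "Model" are placeholders, not from the question:

    import pandas as pd

    df = pd.read_excel("data.xlsx")
    # Forward-fill: each empty cell takes the value from the closest filled cell above it
    df["Model"] = df["Model"].ffill()
    df.to_excel("data_filled.xlsx", index=False)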
1
0
0
Filling in missing data in excel
3
python,excel
0
2015-06-12T19:39:00.000
What is the convention/best practices for naming database tables in Django... using the default database naming scheme (appname_classname) or creating your own table name (using your own naming conventions) with the meta class?
0
3
0.53705
0
false
30,872,816
1
1,656
1
0
0
30,872,599
The default convention is better and cleaner to use: it avoids any table naming conflict (as it's a combination of app name and model name), and it creates a well organized database (tables are grouped by app names). So unless you have a special case that needs a special naming convention, use the default.
1
0
0
Django database table naming convention
1
python,django,database,naming-conventions
0
2015-06-16T15:57:00.000
If I ftp into a database and use pandas.read_sql to read in a huge file, what data type would the variable set equal to this be? And, if applicable, what kind of format would it be in? What object type is a pandas data frame?
0
1
1.2
0
true
30,881,760
0
160
2
0
0
30,881,489
Variable = ? The variable set would be equal to a pandas.core.frame.DataFrame object. Format? The pandas.core.frame.DataFrame format is a collection of numpy ndarrays, dicts, series, arrays or list-like structures that make up a 2 dimensional (typically) tabular data structure. Pandas Object Type? A pandas.core.frame.DataFrame object is an organized collection of list like structures containing multiple data types.
1
0
0
Data type using Pandas
2
python,pandas
0
2015-06-17T02:46:00.000
If I ftp into a database and use pandas.read_sql to read in a huge file, what data type would the variable set equal to this be? And, if applicable, what kind of format would it be in? What object type is a pandas data frame?
0
0
0
0
false
30,881,700
0
160
2
0
0
30,881,489
The function pandas.read_sql returns a DataFrame. The type of a DataFrame in pandas is pandas.core.frame.DataFrame.
1
0
0
Data type using Pandas
2
python,pandas
0
2015-06-17T02:46:00.000
I'm using Python 3.4. I have a binary column in my PostgreSQL database with some files and I need to retrieve it from the database and read it... the problem is that for this to work, I first have to (1) open a new file in the filesystem with 'wb', (2) write the contents of the binary column and then (3) read() the filesystem file with 'rb'. I would like to skip this whole process... I just want to get the file from the database, into a variable, and use it AS IF IT WAS OPENED from the filesystem... How can I do that? I already tried BytesIO and it does not work... Thank you
0
0
0
0
false
30,922,498
0
176
1
0
0
30,920,656
Answering my own question: bytes(file)
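A small sketch of how that fits together, assuming psycopg2 and a hypothetical table files(name, data bytea): bytes() converts the buffer/memoryview psycopg2 returns for bytea, and wrapping it in io.BytesIO then gives an object that behaves like a file opened with 'rb'.

    import io
    import psycopg2

    conn = psycopg2.connect("dbname=mydb user=me")
    cur = conn.cursor()
    cur.execute("SELECT data FROM files WHERE name = %s", ("report.pdf",))
    raw = cur.fetchone()[0]

    file_like = io.BytesIO(bytes(raw))   # usable anywhere a 'rb'-opened file is expected
    print(file_like.read(4))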
1
0
0
Reading a file from database binary column (postgresql) in memory without having to save and open the file in the filesystem
1
python,database,python-3.x,io
0
2015-06-18T16:16:00.000
I have a txt file with about 100 million records (numbers). I am reading this file in Python and inserting it into a MySQL database using simple insert statements from Python. But it's taking very long and it looks like the script won't ever finish. What would be the optimal way to carry out this process? The script is using less than 1% of memory and 10 to 15% of CPU. Any suggestions for handling such a large amount of data and inserting it efficiently into the database would be greatly appreciated. Thanks.
4
1
0.066568
0
false
53,005,996
0
13,099
1
0
0
30,928,713
Having tried to do this recently, I found a fast method, but this may be because I'm using an AWS Windows server to run python from that has a fast connection to the database. However, instead of 1 million rows in one file, it was multiple files that added up to 1 million rows. It's faster than other direct DB methods I tested anyway. With this approach, I was able to read files sequentially and then run the MySQL Infile command. I then used threading with this process too. Timing the process it took 20 seconds to import 1 million rows into MySQL. Disclaimer: I'm new to Python, so I was trying to see how far I could push this process, but it caused my DEV AWS-RDS DB to become unresponsive (I had to restart it), so taking an approach that doesn't overwhelm the process is probably best!
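A rough sketch of driving that kind of infile load from Python; the table, column and file names are made up, and the server/connection must be configured to allow LOCAL INFILE:

    import MySQLdb

    conn = MySQLdb.connect(host="localhost", user="user", passwd="pw",
                           db="mydb", local_infile=1)
    cur = conn.cursor()
    cur.execute("""
        LOAD DATA LOCAL INFILE 'numbers.txt'
        INTO TABLE numbers
        LINES TERMINATED BY '\\n'
        (value)
    """)
    conn.commit()

To parallelise as described, each thread would run the same statement against a different chunk file.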
1
0
0
Inserting millions of records into MySQL database using Python
3
python,mysql,insert,sql-insert,large-data
0
2015-06-19T01:55:00.000
I am writing a web tool using Python and Pyramid. It access a MySQL database using MySQLdb and does queries based on user input. I created a user account for the tool and granted it read access on the tables it uses. It works fine when I open the page in a single tab, but if I try loading it in second tab the page won't load until the first search is finished. Is there a way to get around this or am I just trying to use MySQL incorrectly?
0
1
1.2
0
true
30,969,950
0
69
1
0
0
30,948,885
What @AlexIvanov is trying to say is that when you're starting your Pyramid app in console it is served using Pyramid's built-in development server. This server is single-threaded and serves requests one after another, so if you have a long request which takes, say, 15 seconds - you won't be able to use your app in another tab until that long request finishes. This sequential nature of the built-in webserver is actually an awesome feature which greatly simplifies debugging. In production, your Pyramid app is normally served by a "real" webserver, such as Apache or Nginx. Such webservers normally spawn multiple "workers", or use multiple threads which allow them to serve multiple concurrent requests. So I suspect there's nothing wrong with your setup (provided you didn't do anything particularly strange with Pyramid's initial scaffold and it's still using SQLAlchemy's session configured with ZopeTransactionExtension etc.). A "single shared MySQL account" in no way prevents multiple connected clients from running queries concurrently in MySQL - the thing is, with the development server you only have one single-threaded client.
1
0
0
Accessing MySQL from multiple views of a web site
1
python,mysql,pyramid,mysql-python
0
2015-06-19T23:56:00.000
I have a web application (based on Django 1.5) wherein a user uploads a spreadsheet file. I've been using xlrd for manipulating xls files and looked into openpyxl which claims to support xlsx/xlsm files. So is there a common way to read/write both xls and xlsx files? Another option could be to convert the uploaded file to xls and use xlrd. For this I looked into gnumeric and ssconvert, this would be favorable since all my existing code in written using xlrd and I will not have to change the existing codebase. So should I change the library I use or go with the conversion solution? Thanks in advance.
0
1
0.197375
0
false
30,974,768
0
1,651
1
0
0
30,974,575
xlrd can read both xlsx and xls files, so it's probably simplest to use that. Support for xlsx isn't as extensive as openpyxl but should be sufficient. There's a risk of losing information in converting xlsx to xls because xlsx files can be much larger.
1
0
0
How do I read/write both xlsx and xls files in Python?
1
python,xlrd,openpyxl
0
2015-06-22T07:48:00.000
I'm using openpyxl to write to an existing file and everything works fine. However after the data is saved on the file, graphs disappear. I understand Openpyxl currently only supports chart creation within a worksheet only. Charts in existing workbooks will be lost. Are there any alternate libraries in Python to achieve this. I just want to feed a few values, so all the graphs and calculation happen in excel. Thank you.
1
0
1.2
0
true
31,022,634
0
2,336
1
0
0
31,020,766
This is currently (version 2.2) not possible.
1
0
0
Graphs lost while overwriting to existing excel file in Python
2
python,excel,openpyxl
0
2015-06-24T07:49:00.000
Is this a known limitation that will be addressed at some point, or is this just something that I need to accept? If this is not possible with xlwings, I wonder if any of the other alternatives out there supports connecting to other instances. I'm specifically talking about the scenario where you are calling python from within Excel, so the hope is that the getCaller() function will be able to figure out which instance of the Excel is actually calling it.
1
1
0.197375
0
false
31,110,844
0
1,096
1
0
0
31,106,542
Ok, based on your comments I think I can answer your question: Actually, yes, xlwings can handle various instances. But workbooks from untrusted locations (like downloaded from the internet or sometimes on shared network drives) don't play nicely. So in your case you could try to add the network location to File > Options > Trust Center > Trust Center Settings... > Trusted Locations or, under Trusted Documents, tick the checkbox Allow documents on a network to be trusted. If you don't have the previlegies to change these options, then I guess you're left with the options of running the tools locally or indeed, open them in the 1st instance...
1
0
0
Does xlwings only work with the first instance of Excel?
1
python,xlwings
0
2015-06-29T01:26:00.000
I have limited experience with Jira and Jira query language. This is regarding JIRA Query language. I have a set of 124 rows (issues) in Jira that are under a certain 'Label' say 'myLabel'. I need to extract columns col1, col2 and col5 for all of the above 124 rows where the Label field is 'myLabel'. Once I have the above result I need to export it to an excel sheet. Is there a JIRA query that I can fire to do this ? Or Is there some other way that this can be done, like maybe exporting all of the 124 rows with all the n columns to a SQL table and then doing an SQL query on top of it to retrieve the results that is needed ? Also there is something python-jira. Can that be of some help ?
1
1
0.099668
0
false
31,676,903
0
1,804
1
0
0
31,123,357
One way of doing what you want is with Excel directly. 0. Create the filter in JIRA. 1. Create a VBA-for-Excel script which will open the exported-to-Excel filter from JIRA; in order to do that you have to copy the link from JIRA -> Export -> Excel (current fields). 2. Most probably you will have to log in to JIRA first from Excel VBA, so I recommend logging in with REST requests from Excel. 3. Copy all the information from the exported JIRA query to your other worksheet. 4. After copying you can then easily process the list of tickets as you wish with VBA. I know this is not the most straightforward way of doing this, but I hope it helps you a little to go further in your searches.
1
0
0
JQL to retrive specific columns for a cetain label
2
jira,jira-plugin,jql,python-jira
0
2015-06-29T18:54:00.000
I am interested in using a cursor to duplicate a database from one mongod to another. I want to limit the amount of insert requests sent so instead of inserting each document in the cursor individually I want to do an insert_many of each cursor batch. Is there a way to do this in pymongo/python? I have tried converting the cursor to a list and then calling insert_many and this works, but if the collection is over the amount of ram that I have then it won't work. Any ideas on how to grab a batch from a cursor and convert it to a list would be appreciated Thanks!
1
0
1.2
0
true
31,413,796
0
1,052
1
0
0
31,123,896
So far this has been my "slice/batch" solution and it has been much more effective than individually iterating each document from the cursor: keep note of the _id field of the last document you have grabbed; open a cursor with the query "greater than the _id of the last doc" and with a limit of whatever your batch_size is; now you should have a cursor with your desired number of documents in a batch; make this cursor into a Python list by doing list(cursor); call insert_many on this list; update the last grabbed _id, and delete the list to free up RAM. You can adjust your batch size to accommodate your RAM limitations. This is a pretty good solution because it reduces the bottleneck of cursor iteration, and it also doesn't take too much RAM as you are constantly deleting the batches as you go.
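A rough sketch of that loop; the connection strings, database and collection names, and batch size are placeholders, and it assumes the source collection can be paginated by _id:

    from pymongo import MongoClient, ASCENDING

    src = MongoClient("mongodb://source:27017")["mydb"]["logs"]
    dst = MongoClient("mongodb://target:27017")["mydb"]["logs"]

    batch_size = 10000
    last_id = None

    while True:
        query = {} if last_id is None else {"_id": {"$gt": last_id}}
        batch = list(src.find(query).sort("_id", ASCENDING).limit(batch_size))
        if not batch:
            break
        dst.insert_many(batch)
        last_id = batch[-1]["_id"]   # remember where to resume from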
1
0
0
How to insertmany from a cursor using Pymongo?
2
python,mongodb,cursor,pymongo
0
2015-06-29T19:24:00.000
I'm hoping to be pointed in the right direction as far as what tools to use while in the process of developing an application that runs on two servers per client. [Main Server][Client db Server] Each client has their own server which has a django application managing their respective data, in addition to serving as a simple front end. The main application server has a more feature-rich front end, using the same models/db schemas. It should have full read/write access to the client's database server. The final desired effect would be a typical SaaS type deal: client1.djangoapp.com => Connects to mysql database @ client1_IP client2.djangoapp.com => Connects to mysql database @ client2_IP... Thanks in advance!
0
1
1.2
0
true
31,227,735
1
843
1
0
0
31,226,223
You could use different settings files, let's say settings_client_1.py and settings_client_2.py, import common settings from a common settings.py file to keep it DRY. Then add respective database settings. Do the same with wsgi files, create one for each settings. Say, wsgi_c1.py and wsgi_c2.py Then, in your web server direct the requests for client1.djangoapp.com to wsgi_c1.py and client2.djangoapp.com to wsgi_c2.py
1
0
0
Effectively communicating between two Django applications on two servers (Multitenancy)
1
python,django,web-deployment,multi-tenant,saas
0
2015-07-05T00:30:00.000
I've been trying to write a script to copy formatting from one workbook to another and, as anyone dealing with openpyxl knows, it's a big script. I've gotten it to work pretty well, but one thing I can't seem to figure out is how to read from the original if columns are hidden. Can anyone tell me where to look in a workbook, worksheet, column or cell object to see where hidden columns are?
5
3
1.2
0
true
31,262,488
0
5,323
1
0
0
31,257,353
Worksheets have row_dimensions and column_dimensions objects which contain information about particular rows or columns, such as whether they are hidden or not. Column dimensions can also be grouped so you'll need to take that into consideration when looking.
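A minimal sketch of reading those flags (the file name is a placeholder; note the caveat above about grouping, where one dimension entry can cover a span of columns, so the listing may not be one entry per letter):

    from openpyxl import load_workbook

    wb = load_workbook("source.xlsx")
    ws = wb.active

    hidden_cols = [letter for letter, dim in ws.column_dimensions.items() if dim.hidden]
    hidden_rows = [idx for idx, dim in ws.row_dimensions.items() if dim.hidden]
    print(hidden_cols, hidden_rows)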
1
0
0
Finding hidden cells using openpyxl
2
python,excel,openpyxl
0
2015-07-06T23:26:00.000
Let's suppose we have a single host where there is a Web Server and a Database Server. An external application sends an http request to the web server to access to the database. The data access logic is made for example by Python API. The web server takes the request and the Python application calls the method to connect to the database, e.g. MySQLdb.connect(...). Which process establishes the connection with the database server and communicates with it? Is it the web server process?
0
1
1.2
0
true
31,274,653
0
26
1
0
0
31,274,509
Yes; as the Python application lives inside the web server process, that process will establish the connection with the database server.
1
0
0
Which process establishes the connection with the database server?
1
python,webserver,database-connection,data-access-layer,database-server
0
2015-07-07T16:37:00.000
I'm making an application that will fetch data from a/n (external) postgreSQL database with multiple tables. Any idea how I can use inspectdb only on a SINGLE table? (I only need that table) Also, the data in the database would by changing continuously. How do I manage that? Do I have to continuously run inspectdb? But what will happen to junk values then?
0
0
1.2
0
true
31,309,910
1
78
1
0
0
31,295,352
I think you have misunderstood what inspectdb does. It creates a model for an existing database table. It doesn't copy or replicate that table; it simply allows Django to talk to that table, exactly as it talks to any other table. There's no copying or auto-fetching of data; the data stays where it is, and Django reads it as normal.
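As for limiting it to a single table: if I recall correctly, Django around this version lets you pass the table name to inspectdb, so something like the following (table and app names are placeholders) would generate a model for just that table:

    python manage.py inspectdb my_table > myapp/models.py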
1
0
0
Django 1.8 and Python 2.7 using PostgreSQL DB help in fetching
1
python,django,postgresql,python-2.7,django-1.8
0
2015-07-08T14:15:00.000
I'd like people's views on current design I'm considering for a tornado app. Although I'm using mongoDB to store permanent information I currently have the session information as a python data structure that I've simply added within the Application object at initialisation. I will need to perform some iteration and manipulation of the sessions while the server is running. I keep debating whether to move these to another mongoDB or just keep it as a python structure. Is there anything wrong with keeping session information this way?
0
2
1.2
0
true
31,311,950
0
120
1
0
0
31,311,620
If you store session data in Python your application will: lose it if you stop the Python process; likely consume more memory, as Python isn't very efficient in memory management (and you will have to store all the sessions in memory, not just the ones you need right now). If these are not problems for you, you can go with Python structures. But usually these are serious concerns and most projects use some external storage for sessions.
1
0
0
Tornado Application design
1
python,mongodb,tornado,tornado-motor
0
2015-07-09T08:07:00.000
I want to enter data into a Microsoft Excel Spreadsheet, and for that data to interact and write itself to other documents and webforms. With success, I am pulling data from an Excel spreadsheet using xlwings. Right now, I’m stuck working with .docx files. The goal here is to write the Excel data into specific parts of a Microsoft Word .docx file template and create a new file. My specific question is: Can you modify just a text string(s) in a word/document.xml file and still maintain the integrity and functionality of its .docx encasement? It seems that there are numerous things that can change in the XML code when making even the slightest change to a Word document. I've been working with python-docx and lxml, but I'm not sure if what I seek to do is possible via this route. Any suggestions or experiences to share would be greatly appreciated. I feel I've read every article that is easily discoverable through a google search at least 5 times. Let me know if anything needs clarification. Some things to note: I started getting into coding about 2 months ago. I’ve been doing it intensively for that time and I feel I’m picking up the essential concepts, but there are severe gaps in my knowledge. Here are my tools: Yosemite 10.10, Microsoft Office 2011 for Mac
2
1
0.197375
0
false
31,349,163
0
581
1
0
0
31,346,625
You probably need to be more specific, but the short answer is, in principle, yes. At a certain level, all python-docx does is modify strings in the XML. A couple things though: The XML you create needs to remain well-formed and valid according to the schema. So if you change the text enclosed in a <w:t> element, for example, that works fine. Conversely, if you inject a bunch of random XML at an arbitrary point in one of the .xml parts, that will corrupt the file. The XML "files", known as parts that make up a .docx file are contained in a Zip archive known as a package. You must unpackage and repackage that set of parts properly in order to have a valid .docx file afterward. python-docx takes care of all those details for you, but if you're going directly at the .docx file you'll need to take care of that yourself.
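For the template-filling part specifically, a minimal python-docx sketch; it assumes the template contains literal placeholders such as {{NAME}} (made up for this example) and that each placeholder sits inside a single run — placeholders split across runs need more careful handling:

    from docx import Document

    values = {"{{NAME}}": "Acme Corp", "{{TOTAL}}": "1,234.56"}   # assumed placeholders

    doc = Document("template.docx")
    for paragraph in doc.paragraphs:
        for run in paragraph.runs:
            for placeholder, value in values.items():
                if placeholder in run.text:
                    run.text = run.text.replace(placeholder, value)
    doc.save("output.docx")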
1
0
0
Can you modify only a text string in an XML file and still maintain integrity and functionality of .docx encasement?
1
python,xml,lxml,docx,python-docx
0
2015-07-10T17:13:00.000
I'm trying to install PyODBC on Heroku, but I get fatal error: sql.h: No such file or directory in the logs when pip runs. How do I fix this error?
83
0
0
0
false
61,108,863
1
62,898
4
0
0
31,353,137
RedHat/CentOS: dnf install -y unixODBC-devel along with unixODBC installation
1
0
0
sql.h not found when installing PyODBC on Heroku
7
python,heroku,pyodbc
0
2015-07-11T03:31:00.000
I'm trying to install PyODBC on Heroku, but I get fatal error: sql.h: No such file or directory in the logs when pip runs. How do I fix this error?
83
1
0.028564
0
false
59,790,771
1
62,898
4
0
0
31,353,137
I recently saw this error on Heroku. To fix this problem I took the following steps: add an Aptfile to the root folder with the following: unixodbc unixodbc-dev python-pyodbc libsqliteodbc; commit that; run heroku buildpacks:clear; run heroku buildpacks:add --index 1 heroku-community/apt; push to Heroku. For me the problem was that I had previously installed the buildpack for Python, which was not needed. By running heroku buildpacks:clear I removed all unneeded buildpacks, then added back the one I needed. So if you do follow these steps, be sure to make note of the buildpacks you need. To view the buildpacks you have, run heroku buildpacks before following these steps.
1
0
0
sql.h not found when installing PyODBC on Heroku
7
python,heroku,pyodbc
0
2015-07-11T03:31:00.000
I'm trying to install PyODBC on Heroku, but I get fatal error: sql.h: No such file or directory in the logs when pip runs. How do I fix this error?
83
1
0.028564
0
false
47,557,567
1
62,898
4
0
0
31,353,137
The other answers are more or less correct; you're missing the unixodbc-dev[el] package for your operating system; that's what pip needs in order to build pyodbc from source. However, a much easier option is to install pyodbc via the system package manager. On Debian/Ubuntu, for example, that would be apt-get install python-pyodbc. Since pyodbc has a lot of compiled components and interfaces heavily with the UnixODBC OS-level packages, it is probably a better fit for a system package rather than a Python/pip-installed one. You can still list it as a dependency in your requirements.txt files if you're making code for distribution, but it'll usually be easier to install it via the system PM.
1
0
0
sql.h not found when installing PyODBC on Heroku
7
python,heroku,pyodbc
0
2015-07-11T03:31:00.000
I'm trying to install PyODBC on Heroku, but I get fatal error: sql.h: No such file or directory in the logs when pip runs. How do I fix this error?
83
8
1
0
false
31,358,757
1
62,898
4
0
0
31,353,137
You need the unixODBC devel package. I don't know what distro you are using but you can google it and build from source.
1
0
0
sql.h not found when installing PyODBC on Heroku
7
python,heroku,pyodbc
0
2015-07-11T03:31:00.000
We have ticket software to manage our work; every ticket is assigned to a tech in one field (the normal stuff), but now we want to assign the same ticket to several technicians, e.g. ticket 5432: tech_id(2,4,7), where 2, 4, 7 are tech IDs. Of course we could do that using a separate table with the tech IDs and the ticket ID, but we would have to convert the data.
0
0
0
0
false
31,371,403
0
136
1
0
0
31,369,558
The "right" way to do this is to have a separate table of ticket assignments. Converting the data for something like this is fairly simple on the database end. create table assign as select tech_id from ... followed by creating any necessary foreign key constraints. Rewriting your interface code can be trickier, but you're going to have to do that anyway to allow for more than one tech. You could use an array type, but sometimes database interfaces don't understand postgres array types. There isn't anything inherent in arrays that prevents duplicates or imposes ordering, but you could do that with an appropriate trigger.
1
0
0
Is any variable in PostgreSQL to store a list
1
python,postgresql
0
2015-07-12T15:40:00.000
I am using Spark 1.3.1 (PySpark) and I have generated a table using a SQL query. I now have an object that is a DataFrame. I want to export this DataFrame object (I have called it "table") to a csv file so I can manipulate it and plot the columns. How do I export the DataFrame "table" to a csv file? Thanks!
106
0
0
1
false
69,462,087
0
340,481
1
0
0
31,385,363
Try display(df) and use the download option in the results. Please note: only 1 million rows can be downloaded with this option, but it's really quick.
1
0
0
How to export a table dataframe in PySpark to csv?
9
python,apache-spark,dataframe,apache-spark-sql,export-to-csv
0
2015-07-13T13:56:00.000
I have a fairly large redshift table with around 200 million records. I would like to update the values in one of the columns using a user-defined python function. If I run the function in an EC2 instance, it results in millions of updates to the table, and it is very slow. Is there a better process for me to speed up these updates?
0
0
0
0
false
31,411,856
0
56
1
0
0
31,388,220
Unlike row-based systems, which are ideal for transaction processing, column-based systems (Redshift) are ideal for data warehousing and analytics, where queries often involve aggregates performed over large data sets. Since only the columns involved in the queries are processed and columnar data is stored sequentially on the storage media, column-based systems require far fewer I/Os, greatly improving query performance. In your example, instead of doing multiple separate UPDATE commands you can perform a single UPDATE ... SET ... FROM ... WHERE ....
1
0
0
How to increase performance of large number of updates to a redshift table with python functions
1
python,amazon-web-services
0
2015-07-13T16:05:00.000
Everything I found about this via searching was either wrong or incomplete in some way. So, how do I: delete everything in my postgresql database delete all my alembic revisions make it so that my database is 100% like new
7
3
0.197375
0
false
31,392,595
0
7,991
1
0
0
31,392,285
This works for me: 1) In the same way you called create_all (e.g. on your declarative Base's metadata or your Flask-SQLAlchemy db object), call drop_all to drop everything. 2) Delete the migration files generated by Alembic. 3) Run create_all and the initial migration generation again.
1
0
0
Clear postgresql and alembic and start over from scratch
3
python,postgresql,sqlalchemy,alembic
0
2015-07-13T19:56:00.000
I have a 2D array, M, of size 390x420 with float values in it that I would like to save as a table in a SQLite DB with Python. The row count of the table should be 390 and the column count 420. executemany from sqlite3 is not optimal because then I would have to write ~420 "?" placeholders, as far as I've understood. Thank you!
0
0
0
0
false
48,332,928
0
1,249
1
0
0
31,426,367
As CL recommended, not to use 420 columns. I would recommend an algorithmic approach to save much processing power. Here is an example, since the size is always 390x420, have a table with 10 columns, and 16380 rows. Referencing any point on this matrix can be done with a simple algorithm, and would be much more efficient. Remember, in sql, it is always better to have more rows than columns because of how the data is managed.
1
0
0
save a matrix (or a 2 dimensional array) in a sqlite db with Python
1
python,sqlite,matrix
0
2015-07-15T09:20:00.000
Each module in odoo have a table in the database. I'd like to know if I can create two tables in the odoo database for one module.
0
0
1.2
0
true
31,456,810
1
592
1
0
0
31,456,406
Yes you can: for every class new_class(... with a unique _name = "new.class", a table is created in the database. If you want more than one table, you need to create more than one class in your .py file. For reference, look at the account module: in account_invoice.py you have class account_invoice(models.Model): _name = "account.invoice" and class account_invoice_line(models.Model): _name = "account.invoice.line", and for each class there is a table in the database. I hope this helps you!
1
0
0
Is there any way to create two tables in the database for one odoo module?
1
python-2.7,openerp,odoo
0
2015-07-16T14:00:00.000
I am running a migration script in Postgres, and at the top of one of the files I have from sqlalchemy import *. In the file I create tables with entries such as Column('tmp1', DOUBLE_PRECISION(precision=53)). However, when I run the script I get the error: name 'DOUBLE_PRECISION' is not defined. Why is this?
0
0
0
0
false
34,866,070
0
622
1
0
0
31,459,477
First off, I'd advise against doing 'from sqlalchemy import *'; that can bring unknown things into your namespace that are quite difficult to debug. Second, the top-level sqlalchemy module simply doesn't export a 'DOUBLE_PRECISION' column type (it lives in the PostgreSQL dialect, sqlalchemy.dialects.postgresql). So the reason it says it's not defined is that the top-level sqlalchemy namespace does not define any such name. Perhaps you are looking for 'Float'?
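If the PostgreSQL double precision type is specifically what the migration needs, importing it from the dialect module should resolve the NameError; a minimal check:

    from sqlalchemy import Column
    from sqlalchemy.dialects.postgresql import DOUBLE_PRECISION

    col = Column('tmp1', DOUBLE_PRECISION(precision=53))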
1
0
0
name 'DOUBLE_PRECISION' is not defined - PostgreSQL - SQLAlchemy
1
python,postgresql,sqlalchemy
0
2015-07-16T16:15:00.000
I am using SQLAlchemy and am trying to update a boolean column value. I have the following command: sess.query(Testing).filter(Testing.id == id).update({Testing.state: True}) I do not seem to get any errors, however, when I go to the database, nothing changes. Have I implemented something incorrectly with the command?
0
0
0
0
false
31,517,891
0
277
1
0
0
31,517,753
I simply left out sess.commit() as the next line of code.
1
0
0
SQLAlchemy Update Command
1
python,sql
0
2015-07-20T13:25:00.000