Dataset columns, in the order the fields appear in each record below:
Question: stringlengths, 25 to 7.47k
Q_Score: int64, 0 to 1.24k
Users Score: int64, -10 to 494
Score: float64, -1 to 1.2
Data Science and Machine Learning: int64, 0 to 1
is_accepted: bool, 2 classes
A_Id: int64, 39.3k to 72.5M
Web Development: int64, 0 to 1
ViewCount: int64, 15 to 1.37M
Available Count: int64, 1 to 9
System Administration and DevOps: int64, 0 to 1
Networking and APIs: int64, 0 to 1
Q_Id: int64, 39.1k to 48M
Answer: stringlengths, 16 to 5.07k
Database and SQL: int64, 1 to 1
GUI and Desktop Applications: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
Title: stringlengths, 15 to 148
AnswerCount: int64, 1 to 32
Tags: stringlengths, 6 to 90
Other: int64, 0 to 1
CreationDate: stringlengths, 23 to 23
I was wondering if one of you could advise me on how to tackle a problem I am having. I developed a Python script that writes data to a MySQL database on every iteration of an endless while loop. If the script is accidentally closed or stopped halfway through an iteration, I want it to wait until all the data has been loaded into the database and the MySQL connection has been closed (to prevent incomplete queries). Is there a way to tell the program to wait until the current loop iteration is done before it closes? I hope this all makes sense; feel free to ask questions. Thank you for your time in advance.
0
2
0.132549
0
false
25,562,066
0
184
1
0
0
25,561,971
There are some things you can do to prevent a program from being closed unexpectedly (signal handlers, etc.), but they only work in some cases and not others. There is always the chance of a system shutdown, power failure or SIGKILL that will terminate your program whether you like it or not. The canonical solution to this sort of problem is to use database transactions. If you do your work in a transaction, then the database will simply roll back any changes if your script is interrupted, so you will not have any incomplete queries. The worst that can happen is that you need to repeat the query from the beginning next time. A short sketch follows below.
1
0
0
Closing python MySQL script
3
python,mysql,while-loop
0
2014-08-29T05:02:00.000
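The answer above recommends wrapping each iteration's writes in a database transaction so an interruption rolls back cleanly instead of leaving partial data. Here is a minimal sketch of that idea, assuming the MySQLdb driver and a made-up pair of tables (`readings`, `log`):

```python
import MySQLdb  # assumption: the MySQLdb (mysql-python) driver

conn = MySQLdb.connect(host="localhost", user="user",
                       passwd="secret", db="mydb")
try:
    while True:
        cur = conn.cursor()
        try:
            # every statement for one iteration runs inside one transaction
            cur.execute("UPDATE readings SET value = %s WHERE id = %s", (42, 1))
            cur.execute("INSERT INTO log (msg) VALUES (%s)", ("updated",))
            conn.commit()      # the whole batch becomes durable at once
        except Exception:
            conn.rollback()    # interruption or error: no half-written data
            raise
        finally:
            cur.close()
finally:
    conn.close()
```

MySQLdb leaves autocommit off by default, so nothing becomes visible to other connections until commit() runs.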
After importing the module, I get the error when I run the script, even though I have the module installed already. I am new to Python, so I suspect I might have forgotten to install something else? Python is version 2.7.
0
1
1.2
0
true
25,797,155
0
691
1
0
0
25,623,002
Maybe you have more Python versions than just 2.7 and you installed the module under another version.
1
0
1
error "No module named MySQLdb"
1
python-2.7,mysql-python
0
2014-09-02T12:05:00.000
I need help switching my database engine from sqlite to mysql. manage.py datadump is returning the same error that pops up when I try to do anything else with manage.py : ImproperlyConfigured: Error loading MySQL module, No module named MySQLdb. This django project is a team project. I pulled new changes from bitbucket and our backend has a new configuration. This new configuration needs mysql (and not sqlite) to work. Our lead dev is sleeping right now. I need help so I can get started working again. Edit: How will I get the data in the sqlite database file into the new MySQL Database?
0
-1
-0.099668
0
false
25,630,191
1
4,742
1
0
0
25,629,092
Try the following steps: 1. Change DATABASES in settings.py to the MySQL engine. 2. Run $ ./manage.py syncdb. A sketch of the settings change follows below.
1
0
0
How can I switch my Django project's database engine from Sqlite to MySQL?
2
python,mysql,django,sqlite,mysql-python
0
2014-09-02T17:31:00.000
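The answer boils down to pointing DATABASES at MySQL and re-syncing. A sketch of what that settings.py change might look like (the database name and credentials are placeholders):

```python
# settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'mydatabase',      # hypothetical database name
        'USER': 'myuser',
        'PASSWORD': 'mypassword',
        'HOST': 'localhost',
        'PORT': '3306',
    }
}
```

To carry the existing data over, one common route is to run ./manage.py dumpdata > data.json against the old SQLite settings, switch DATABASES, run syncdb, and then ./manage.py loaddata data.json.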
I have written an extensive python package that utilizes excel and pywin32. I am now in the progress of moving this package to a linux environment on a Vagrant machine. I know there are "emulator-esque" software packages (e.g. WINE) that can run Windows applications and look-a-likes for some Windows applications (e.g. Excel to OpenOffice). However, I am not seeing the right path to take in order to get my pywin32/Excel dependent code written for Windows running in a Linux environment on a Vagrant machine. Ideally, I would not have to alter my code at all and just do the appropriate installs on my Vagrant machine. Thanks
0
2
1.2
0
true
25,629,595
0
850
1
1
0
25,629,462
The short answer is, you can't. WINE does not expose a bottled Windows environment's COM registry out to linux—and, even if it did, pywin32 doesn't build on anything but Windows. So, here are some options, roughly ordered from the least amount of change to your code and setup to the most: Run both your Python script and Excel under real Windows, inside a real emulator. Run both your Python script and Excel under WINE. Write or find a library that does expose a bottled Windows environment's COM registry out to Linux. Write or find a cross-platform DCOM library that presents a win32com-like API, then change your code to use that to connect to the bottled Excel remotely. Rewrite your code to script Excel indirectly by, e.g., sshing into a Windows box and running minimal WSH scripts. Rewrite your code to script LibreOffice or whatever you prefer instead of Excel. Rewrite your code to process Excel files (or CSV or some other interchange format) directly instead of scripting Excel.
1
0
0
Porting Python on Windows using pywin32/excel to Linux on Vagrant Machine
1
python,linux,excel,vagrant,pywin32
0
2014-09-02T17:56:00.000
In Python, I'm using SQLite's executemany() function with INSERT INTO to insert stuff into a table. If I pass executemany() a list of things to add, can I rely on SQLite inserting those things from the list in order? The reason is because I'm using INTEGER PRIMARY KEY to autoincrement primary keys. For various reasons, I need to know the new auto-incremented primary key of a row around the time I add it to the table (before or after, but around that time), so it would be very convenient to simply be able to assume that the primary key will go up one for every consecutive element of the list I'm passing executemany(). I already have the highest existing primary key before I start adding stuff, so I can increment a variable to keep track of the primary key I expect executemany() to give each inserted row. Is this a sound idea, or does it presume too much? (I guess the alternative is to use execute() one-by-one with sqlite3_last_insert_rowid(), but that's slower than using executemany() for many thousands of entries.)
3
3
0.53705
0
false
25,712,762
0
444
1
0
0
25,712,611
Python's sqlite3 module executes the statement with the list values in the correct order. Note: if the code already knows the to-be-generated ID value, then you should insert this value explicitly so that you get an error if this expectation turns out to be wrong.
1
0
0
Python SQLite executemany() always in order?
1
python,sql,sqlite
0
2014-09-07T16:52:00.000
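Following the note in the answer above, you can make the assumption explicit by inserting the expected primary key yourself; SQLite will then raise an IntegrityError if the expectation turns out to be wrong. A minimal sketch with an invented `items` table:

```python
import sqlite3

conn = sqlite3.connect("example.db")
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, name TEXT)")

# highest existing primary key (0 if the table is empty)
start = cur.execute("SELECT COALESCE(MAX(id), 0) FROM items").fetchone()[0]

names = ["alpha", "beta", "gamma"]
rows = [(start + i, name) for i, name in enumerate(names, start=1)]

# ids are supplied explicitly, so a wrong guess fails loudly instead of silently
cur.executemany("INSERT INTO items (id, name) VALUES (?, ?)", rows)
conn.commit()
conn.close()
```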
After scanning the very large daily event logs using regular expression, I have to load them into a SQL Server database. I am not allowed to create a temporary CSV file and then use the command line BCP to load them into the SQL Server database. Using Python, is it possible to use BCP streaming to load data into SQL Server database? The reason I want to use BCP is to improve the speed of the insert into SQL Server database. Thanks
1
0
0
0
false
25,743,680
0
1,625
1
0
0
25,740,355
The BCP API is only available using the ODBC call-level interface and the managed SqlClient .NET API using the SqlBulkCopy class. I'm not aware of a Python extension that provides BCP API access. You can insert many rows in a single transaction to improve performance. This can be accomplished by batching individual insert statements or by passing multiple rows at once using an XML parameter (which also reduces round-trips).
1
0
0
Loading Large data into SQL Server [BCP] using Python
1
python,sql-server
0
2014-09-09T08:52:00.000
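As a sketch of the "many rows per transaction" suggestion above, here is one way it might look with pyodbc; the connection string, table, and columns are assumptions, not part of the original question:

```python
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=mydb;UID=user;PWD=secret"
)
conn.autocommit = False
cur = conn.cursor()

# rows parsed out of the event logs by the regex stage
rows = [("2014-09-09 08:52:00", "event text")] * 10000

BATCH = 1000
for i in range(0, len(rows), BATCH):
    cur.executemany(
        "INSERT INTO event_log (event_time, message) VALUES (?, ?)",
        rows[i:i + BATCH],
    )
    conn.commit()   # one commit per batch keeps each transaction a sane size

cur.close()
conn.close()
```

Recent pyodbc versions also let you set cur.fast_executemany = True, which sends parameter arrays to the driver in bulk and gets considerably closer to BCP-like speeds.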
I've got a sqlite3 database and I want to write to it from multiple threads. I've got multiple ideas but I'm not sure which I should implement: (1) create multiple connections and detect and wait if the DB is locked; (2) use one connection and try to make use of serialized connections (which don't seem to be implemented in Python); (3) have a background process with a single connection, which collects the queries from all threads and then executes them on their behalf; (4) forget about SQLite and use something like PostgreSQL. What are the advantages of these different approaches, and which is most likely to be fruitful? Are there any other possibilities?
4
0
0
0
false
25,748,935
0
1,466
1
0
0
25,747,192
I used method 1 before. It is the easiest to code. Since that project was a small website, each query took only several milliseconds, so all the user requests could be processed promptly. I also used method 3 before. When queries take longer, it is better to queue them, since frequent "detect and wait" makes no sense there; it requires a classic consumer-producer model and more time to code (a sketch follows below). But if the queries are really heavy and frequent, I suggest looking at other databases like MS SQL/MySQL.
1
0
0
Writing to SQLite from multiple threads in Python
2
python,multithreading,postgresql,sqlite
0
2014-09-09T14:28:00.000
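Option 3 from the question, which the answer frames as a consumer-producer model, can be sketched roughly like this: one writer thread owns the SQLite connection and everything else just enqueues statements (the table and statements are invented):

```python
import sqlite3
import threading
try:
    import queue            # Python 3
except ImportError:
    import Queue as queue   # Python 2

write_q = queue.Queue()
STOP = object()             # sentinel telling the writer to shut down

def writer(db_path):
    # the only thread that ever touches the connection
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS events (name TEXT)")
    while True:
        item = write_q.get()
        if item is STOP:
            break
        sql, params = item
        conn.execute(sql, params)
        conn.commit()
    conn.close()

t = threading.Thread(target=writer, args=("app.db",))
t.start()

# any producer thread just enqueues its statement
write_q.put(("INSERT INTO events (name) VALUES (?)", ("started",)))

write_q.put(STOP)
t.join()
```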
I'm writing a script that puts a large number of XML files into MongoDB, so when I execute the script multiple times the same object is added many times to the same collection. I looked for a way to stop this behavior by checking the existence of the object before adding it, but I can't find a way. Help!
0
0
0
0
false
25,807,361
0
491
1
0
1
25,807,271
You can index on one or more fields (not _id) of the document/XML structure. Then use the find operator to check whether a document containing that indexed_field:value is present in the collection. If it returns nothing, you can insert the new document into your collection. This will ensure only new docs are inserted when you re-run the script; a short sketch follows below.
1
0
1
add if no duplicate in collection mongodb python
2
python,mongodb,pymongo,upsert
0
2014-09-12T11:28:00.000
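One concrete way to apply the answer above with PyMongo is a unique index plus an upsert, so re-running the script never inserts the same document twice. The database, collection, and field names below are invented for illustration:

```python
from pymongo import MongoClient, ASCENDING

client = MongoClient()
coll = client.mydb.xml_docs

# unique index on whatever field identifies each XML file
coll.create_index([("source_file", ASCENDING)], unique=True)

doc = {"source_file": "report_001.xml", "payload": "<root>...</root>"}

# upsert: insert only if no document with this source_file exists yet
coll.update_one(
    {"source_file": doc["source_file"]},
    {"$setOnInsert": doc},
    upsert=True,
)
```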
I want to store the job in MongoDB using Python, and it should be scheduled for a specific time. I did some googling and found that APScheduler will do this. I downloaded the code and tried to run it. It schedules the job correctly and runs it, but it stores the job in the apscheduler database of MongoDB; I want to store the job in my own database. Can someone please tell me how to store the job in my own database instead of the default one?
0
1
0.197375
0
false
25,898,609
0
708
1
1
0
25,884,242
Simply give the MongoDB jobstore a different "database" argument (a sketch follows below). It seems like the API documentation for this job store was not included in what is available on ReadTheDocs, but you can inspect the source and see how it works.
1
0
0
APScheduler store job in custom database of mongodb
1
mongodb,python-2.7,apscheduler
0
2014-09-17T07:01:00.000
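A sketch of what the answer describes, in APScheduler 3.x style; the database and collection names here are just examples:

```python
from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.jobstores.mongodb import MongoDBJobStore

jobstores = {
    # keep jobs in your own database instead of the default "apscheduler"
    "default": MongoDBJobStore(database="my_database", collection="my_jobs"),
}

scheduler = BackgroundScheduler(jobstores=jobstores)

def ping():
    print("job ran")

scheduler.add_job(ping, "interval", minutes=5)
scheduler.start()   # keep the process alive elsewhere so the jobs can fire
```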
I have built a couple of basic workflows using XML tools on top of XLSX workbooks that are mapped to an XML schema. You would enter data into the spreadsheet, export the XML, and I had some scripts that would then work with the data. Now I'm trying to eliminate that step and build a more integrated and portable tool that others could use easily by moving from XSLT/XQuery to Python. I would still like to use Excel for the data entry, but have the Python script read the XLSX file directly. I found a bunch of easy-to-use libraries to read from Excel, but they need to explicitly state what cells the data is in, like range('A1:C2') etc. The useful thing about using the XML maps was that users could resize or even move tables to fit different rows and rename sheets. Is there a library that would let me select tables as units? Another approach I tried was to just uncompress the XLSX and parse the XML directly. The problem with that is that our data is quite complex (taking up to 30-50 sheets) and parsing it in the uncompressed XLSX structure is really daunting. I did find my XML schema within the uncompressed XLSX, so is there any way to reformat the data into this schema outside of Excel? (Basically what Excel does when I save a workbook as an .xml file.)
0
0
1.2
0
true
25,908,953
0
2,150
1
0
0
25,893,266
The Excel format is pretty complicated, with dependencies between components; you can't, for example, be sure that the order of the files in the worksheets folder has any bearing on what the file looks like in Excel. I don't really understand exactly what you're trying to do, but the existing libraries present an interface for client code that hides the XML layer. If you don't want that, you'll have to root around for the parts that you find useful. In openpyxl you want to look at the stuff in openpyxl/reader, specifically worksheet.py. However, you might have better luck using lxml, as it (using libxml2 in the background) will allow you to load a single XML file into Python and manipulate it directly using objectify. We don't do this in openpyxl because XML trees consume a lot of memory (and many people have very large worksheets), but the library for working with PowerPoint shows just how easy this can be.
1
0
0
XLSX to XML with schema map
1
python,xml,excel,xlsx,openpyxl
0
2014-09-17T14:25:00.000
I am close to finishing an ORM for RethinkDB in Python and I got stuck at writing tests. Particularly at those involving save(), get() and delete() operations. What's the recommended way to test whether my ORM does what it is supposed to do when saving or deleting or getting a document? Right now, for each test in my suite I create a database, populate it with all tables needed by the test models (this takes a lot of time, almost 5 seconds/test!), run the operation on my model (e.g.: save()) and then manually run a query against the database (using RethinkDB's Python driver) to see whether everything has been updated in the database. Now, I feel this isn't just right; maybe there is another way to write these tests or maybe I can design the tests without even running that many queries against the database. Any idea on how can I improve this or a suggestion on how this has to be really done?
3
3
1.2
0
true
25,922,468
1
484
1
0
0
25,916,839
You can create all your databases/tables just once for all your tests. You can also use the raw data directory: start RethinkDB, create all your databases/tables, and commit that data directory. Before each test, copy the data directory and start RethinkDB on the copy; when your test is done, delete the copied data directory.
1
0
0
Testing an ORM for RethinkDB
1
python,unit-testing,testing,orm,rethinkdb
1
2014-09-18T15:32:00.000
I am using mongoimport in a Python script to import multiple CSV files into my Mongo DB. Some values contain backslash-escaped commas. How can I correctly import these files into Mongo? I can't find any specific solutions to this.
0
0
0
0
false
25,936,756
0
313
1
0
0
25,936,385
I'm not familiar with mongoimport, but I do know that if you use csv.reader, the backslashes are taken care of during reading. Maybe you could consider using a package specifically designed to read the csv, and then pass that along to mongoimport.
1
0
1
Mongoimport: Escaping commas in CSV
1
python,mongoimport
0
2014-09-19T14:34:00.000
I know you can read Excel files with pandas, but I have had trouble reading in files where the column headings in the worksheets are not in an easily readable format like plain text. In other words, if the column headings had special characters then the file would fail to import. Whereas if you import data like that into Microsoft Access or other databases, you get the option to import anyway, or to remove the special characters. My only solution to this has been to write an Excel macro to strip out characters not usually liked by databases, and then import the file using Python. But there must be a way of handling this situation purely in Python (which is a lot faster). My question is: how does Python handle importing .xls and .xlsx files when the column headings have special characters which won't import?
1
0
0
0
false
49,387,955
0
1,140
1
0
0
25,987,179
Add a "u" before your string. For example, if you're looking for a column named 'lissé' in a dataframe "df" then you should put df[u'lissé']
1
0
0
Python: read an Excel file using Pandas when the file has special characters in column headers
1
python,excel,pandas,xls,xlsx
0
2014-09-23T04:56:00.000
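A short illustration of the answer, assuming Python 2.7 (on Python 3 every string is already unicode) and a made-up file name:

```python
# -*- coding: utf-8 -*-
import pandas as pd

df = pd.read_excel("data.xlsx")       # headers with accents are read as unicode
print(df.columns.tolist())            # inspect what pandas actually imported

col = df[u"lissé"]                    # the u-prefix matters on Python 2.7
print(col.head())
```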
Django 1.7, Python 3.4. In my models I have several TextFields defined. When I go to load a JSON fixture (which was generated from an SQLite3 dump), it fails on the second object, which has 515 characters for one of its fields. The error printed is psycopg2.DataError: value too long for type character varying(500) I created a new database (not just a table drop, a whole new db), modified my settings.py file, ran manage.py syncdb on the new database, created a user, and tried to load the data again, getting the same error. Upon opening pgAdmin3, all columns, both CharField and TextField defined are listed as type character var. So it seems TextField is being ignored and CharFields are being created instead. The PostgreSQL documentation explicitly lists both text and character types, and defines text as being unlimited in length. Any idea why?
2
1
1.2
0
true
26,030,265
1
1,186
1
0
0
26,028,200
I'm not sure what the exact cause was, but it seems to be related to Django's migration tool storing migrations, even on a new database. What I did to get this behavior: create the Django project and apps using CharField; run syncdb and the project's dev server; kill the dev server and modify the fields to be TextField; create a new Postgres database and modify settings.py; run syncdb and attempt to load fixtures; see the error in question and examine the db instance. What fixed the problem: create a new database and modify settings.py; delete all migrations in the apps/migrations folders; after running syncdb, also run makemigrations and migrate. The last step generated a migration, even though there were none stored in the migrations folder and there had been no changes to models or data since syncdb was run on the new database, which I found odd. Somewhere in the last two steps this was fixed. Future people stumbling upon this: sorry, I'm not going to keep creating Django projects to test the behavior further, but perhaps with this information you can fix your own database problems.
1
0
0
Why is Django creating my TextField as a varchar in the PostgreSQL database?
1
python,sql,django,postgresql,psycopg2
0
2014-09-24T23:35:00.000
I have a Python script running as a daemon. At startup, it spawns 5 processes, each of which connects to a Postgres database. Now, in order to reduce the number of DB connections (which will eventually become really large), I am trying to find a way of sharing a single connection across multiple processes. And for this purpose I am looking at the multiprocessing.sharedctypes.Value API. However, I am not sure how I can pass a psycopg2.connection object using this API across processes. Can anyone tell me how it might be done? I'm also open to other ideas in order to solve this issue. The reason why I did not consider passing the connection as part of the constructor to the 5 processes is mutual exclusion handling. I am not sure how I can prevent more than one process from accessing the connection if I follow this approach. Can someone tell me if this is the right thing to do?
4
12
1.2
0
true
26,072,257
0
5,128
1
0
0
26,070,040
You can't sanely share a DB connection across processes like that. You can sort-of share a connection between threads, but only if you make sure the connection is only used by one thread at a time. That won't work between processes because there's client-side state for the connection stored in the client's address space. If you need large numbers of concurrent workers, but they're not using the DB all the time, you should have a group of database worker processes that handle all database access and exchange data with your other worker processes. Each database worker process has a DB connection. The other processes only talk to the database via your database workers. Python's multiprocessing queues, fifos, etc offer appropriate messaging features for that.
1
0
0
Share connection to postgres db across processes in Python
1
python,postgresql,psycopg2,python-multiprocessing
0
2014-09-27T00:16:00.000
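A minimal sketch of the pattern the answer recommends: one dedicated database worker process owns the psycopg2 connection, and the other processes send it work over a queue. The DSN and SQL are placeholders:

```python
import multiprocessing as mp
import psycopg2

def db_worker(task_q):
    # this process is the only one that ever opens the connection
    conn = psycopg2.connect("dbname=mydb user=me password=secret")
    cur = conn.cursor()
    while True:
        task = task_q.get()
        if task is None:           # sentinel: shut down
            break
        sql, params = task
        cur.execute(sql, params)
        conn.commit()
    cur.close()
    conn.close()

if __name__ == "__main__":
    task_q = mp.Queue()
    worker = mp.Process(target=db_worker, args=(task_q,))
    worker.start()

    # any other process enqueues work instead of touching the DB directly
    task_q.put(("INSERT INTO events (name) VALUES (%s)", ("started",)))

    task_q.put(None)
    worker.join()
```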
I have CSV files that I want to make database tables from in mysql. I've searched all over and can't find anything on how to use the header as the column names for the table. I suppose this must be possible. In other words, when creating a new table in MySQL do you really have to define all the columns, their names, their types etc in advance. It would be great if MySQL could do something like Office Access where it converts to the corresponding type depending on how the value looks. I know this is maybe a too broadly defined question, but any pointers in this matter would be helpful. I am learning Python too, so if it can be done through a python script that would be great too. Thank you very much.
0
0
1.2
0
true
26,108,522
0
3,067
1
0
0
26,108,160
The csv module can easily give you the column names from the first line, and then the values from the other ones. The hard part will be to guess the correct column types. When you load a csv file into an Excel worksheet, you only have a few types: numeric, string, date. In a database like MySQL, you can define the size of string columns, and you can give the table a primary key and possibly other indexes; you will not be able to guess that part automatically from a csv file. In the simplest approach, you can treat all columns as varchar(255). It is really uncommon to have fields in a csv file that do not fit in 255 characters. If you want something cleverer, you will have to scan the file twice: the first time to find the maximum size of each column, and at the end you could take the smallest power of 2 greater than that. The next step would be to check whether any column contains only integers or floating-point values. It begins to be harder to do that automatically, because the representation of floating-point values may differ depending on the locale. For example, 12.51 in an English locale would be 12,51 in a French locale, but Python can give you the locale. The hardest thing would be possible date or datetime fields, because there are many possible formats, either purely numeric (dd/mm/yyyy or mm/dd/yy) or using plain text (Monday, 29th of September). My advice would be to define a default mode, for example all strings, or just integers and strings, and use configuration parameters or even a configuration file to finely tune conversion per column. For the reading part, the csv module will give you all you need; a rough sketch follows below.
1
0
0
create a database by loading csv files, using the header as column names (and add a column that has the filename as its name)
2
python,mysql,sql,csv
0
2014-09-29T20:19:00.000
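A rough sketch of the "treat everything as varchar(255)" default suggested above, with the csv module reading the header. The file name, table name, and credentials are placeholders, and column names are trusted as-is, so don't run this on untrusted input:

```python
import csv
import MySQLdb

with open("data.csv", "rb") as f:     # Python 2 style; use newline="" on Python 3
    reader = csv.reader(f)
    header = next(reader)             # first line becomes the column names
    rows = list(reader)

# simplest default: every column becomes VARCHAR(255)
cols = ", ".join("`%s` VARCHAR(255)" % name.strip() for name in header)
create = "CREATE TABLE IF NOT EXISTS imported (%s)" % cols
insert = "INSERT INTO imported VALUES (%s)" % ", ".join(["%s"] * len(header))

conn = MySQLdb.connect(host="localhost", user="me", passwd="secret", db="mydb")
cur = conn.cursor()
cur.execute(create)
cur.executemany(insert, rows)
conn.commit()
conn.close()
```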
In my project there are two models, ORGANISATION and CUSTOMER. What I am doing is: while adding a new customer to an organisation, I save the organisation_id in the CUSTOMER table. But now I am worrying about the performance of my project when the database becomes huge. So now I am planning to create a new database for every newly created organisation, and save all the information of the organisation in that organisation's database. But I don't know how to create a new database for every newly created organisation, and I'd like to know which method performs better. Please correct the question if it is unclear.
0
1
0.197375
0
false
26,158,170
1
181
1
0
0
26,157,625
It doesn't make sense to create a new database for each organization. Even if the number of customers or organizations grows to the hundreds or thousands, keeping data in a single database is your best option. Edit: Your original concern was that an increase in the number of organizations would impact performance. Well then, imagine if you had to create an entire database for each organization. You would have the same tables, indexes, views, etc replicated n-times. Each database would need system resources and disk space. Not to mention the code needed to make Django aware of the split. You would have to get data from n databases instead of just one table in one database. Also keep in mind that robust databases like PostgreSQL or MySQL are very capable of managing thousands or millions of records. They will do a better job out of the box than any code you may come up with. Just trust them with the data and if you notice a performance decline then you can find plenty of tips online to optimize them.
1
0
0
django multiple database for multiple organisation in a single project
1
mysql,django,python-2.7
0
2014-10-02T09:07:00.000
How can I create a Django sqlite3 dump file (*.sql) using the terminal? There is a fabric fabfile.py with certain dump scripts, but when I try to use the fab command the following message shows up: The program 'fab' is currently not installed. To run fab please ask your administrator to install the package 'fabric'. But there are fabric files in /python2.7/site-packages/fabric/. I'm not good at Django or Python at all. The guy who was responsible for our Django project just left without any explanations. In general I need to know how to create a Django sqlite3 dump file (*.sql) via the terminal. Help? :)
0
0
0
0
false
26,179,396
1
1,114
1
0
0
26,178,633
You could also use fixtures and generate fixtures for your app; it depends on what you're planning to do with them. You would just run loaddata after that.
1
0
0
Django sqlite3 database dump
2
python,django,database,sqlite
0
2014-10-03T12:07:00.000
I am using Django 1.7 and I want to use MongoDB, so for that I tried to install django-nonrel. Please let me know whether django-nonrel is compatible with Django 1.7.
1
0
0
0
false
26,293,980
1
250
1
0
0
26,293,481
Django-nonrel isn't "compatible" with anything. It is actually a fork of Django, currently based on the 1.5 release.
1
0
1
Is django-nonrel compatible with Django 1.7?
1
django,python-2.7,django-nonrel
0
2014-10-10T06:49:00.000
I have been given a few TSV files containing data, around 800MB total across a couple of files. Each of them has columns that link up with columns in another file. I have so far imported all of my data into Python and stored it in an array. I now need to find a way to build a database out of this data without using any SQL, NoSQL, etc. In the end I will be performing SQL-like queries on it (without SQL) and performing OLAP operations on the data. I can also NOT use any external libraries. After doing some research, I have come across using dictionaries as a way to do this project, but I am not sure how to go about linking the tables together with dictionaries. Would it be a list of dictionaries?
1
1
1.2
0
true
26,329,692
0
101
1
0
0
26,329,613
Yes, you could fake a lot of DB operations with a nested dict structure. Top level is your "tables", each table has entries (use a "primary key" on these) and each entry is a dict of key:value pairs where keys are "column names" and values are, well, values. You could even write a little sql-like query language on this if you wanted, but you'd want to start by writing some code to manage this. You don't want to be building this DB freehand, it'll be important to define the operations as code. For example, insert should deal with enforcing value restrictions and imposing defaults and setting auto-incrementing keys and so forth (if you really want to be "performing sql like queries" against it)
1
0
1
Database-like operations without any database use
1
python,mysql,sql,database,nosql
0
2014-10-12T20:25:00.000
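A tiny sketch of the nested-dict structure the answer above describes, with an insert helper that assigns auto-incrementing keys; the table and column names are invented:

```python
db = {}  # top level: "table name" -> {primary key -> row dict}

def create_table(name):
    db[name] = {}

def insert(table, **values):
    rows = db[table]
    pk = max(rows) + 1 if rows else 1   # auto-incrementing primary key
    rows[pk] = dict(values)
    return pk

def select(table, **where):
    # very small SQL-like filter: exact match on every given column
    return [row for row in db[table].values()
            if all(row.get(k) == v for k, v in where.items())]

create_table("jobs")
insert("jobs", place="London", title="Data Analyst")
insert("jobs", place="Paris", title="Engineer")
print(select("jobs", place="London"))
```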
I am using Python to establish a connection to Greenplum and run code automatically. For that I am using these drivers: psycopg2, psycopg2.extensions and psycopg2.extras. I also have to establish a connection to Teradata, run some code, and transfer tables from Teradata to Greenplum. Can someone please suggest some drivers or a method to do this? I heard that arrays or Alteryx can be used in Python to do so, but I couldn't find anything.
0
1
0.197375
0
false
26,425,426
0
694
1
0
0
26,418,454
I'm guessing the data volumes are at least moderate in size: tens of millions of rows or greater. Use FastExport or Teradata Parallel Transporter to export the Teradata data to a flat file or named pipe, then ingest it using Greenplum's preferred method for bulk loading data from a flat file or named pipe. Other options may include invoking the Teradata FastExport API via JDBC using Python, but then you still have to figure out how to efficiently ingest the data into Greenplum.
1
0
0
How to transfer data from Teradata to Greenplum using Python?
1
python,teradata,greenplum
0
2014-10-17T05:31:00.000
Is it possible? If so, then how? Currently I'm inserting strings >16MB into GridFS one by one, but it's very slow when dealing not with one string but with thousands. I tried to check the documentation, but didn't find a single line about bulk inserts into GridFS storage, only into an ordinary collection. I'm using PyMongo for communication with MongoDB.
1
1
1.2
0
true
26,662,382
0
1,705
1
0
0
26,429,023
I read and researched all the answers, but unfortunately they didn't fulfill my requirements. The data that I needed to use for specifying the _id of the JSON documents in GridFS was actually stored inside the JSON itself. It sounds like a terrible idea, redundancy included, but unfortunately it is a requirement. What I did was write an insert thread for parallel insertion into GridFS and insert all the data with several threads (2 GridFS threads were enough to get proper performance).
1
0
1
Bulk insert to GridFS in MongoDB
3
python,mongodb,pymongo,bulkinsert,gridfs
0
2014-10-17T16:03:00.000
I'm new to Web Dev and I came across a problem. I was wondering if there's a Javascript Framework that will allow me to register and authenticate users to a database like when using PHP and MySql. Also, when the user is granted access to the site, such user will be required to upload files that will be written to the local filesystem of that server. Can this be done with Javascript or some sort of Javascript Framework, or is it better just for me to learn PHP and do it in a normal LAMP stack? Or perhaps Ruby on Rails? I have been searching online but the majority of results are leaning towards PHP & MySql. Thanks a lot!
0
0
0
0
false
26,518,519
1
959
1
0
0
26,518,355
Welcome to the world of development. In general, JavaScript is used to enhance the user's experience of navigating the site (e.g. visual effects). As you're starting out, I advise you to start by studying the server-side part of the login. For security purposes, it is always the server that confirms whether the user is logged in or not. Some developers prefer PHP, other developers love Ruby on Rails. Maybe your best friend prefers Python... It is your choice; they are both easy.
1
0
0
User Registration and Authentication to a Database using Javascript
3
javascript,python,ruby-on-rails,angularjs,node.js
0
2014-10-22T22:41:00.000
Since I could not find an answer to my question either here or in other forums, I decided to ask the community: does anybody know if, and how, it is possible to set up automatic documentation generation for code generated with Dymola? The background is that I want/need to store additional information within my model files to explain the concepts of my modelling, and to store and read the documentation directly from the model code, which I would later like to display conveniently not only from within Dymola, but also as HTML and LaTeX documentation. I know that there are several tools for automatic documentation generation, e.g. Doxygen and Python Sphinx, but I could not figure out whether they can be used with Dymola code. Plus, I am pretty new to this topic, so I do not really know how to find out if they will work. Thank you very much for your help! Greetings, mindm49907
1
3
1.2
0
true
26,543,595
0
274
1
0
0
26,529,779
If you mean the Modelica model code, how does the HTML export in Dymola work for you? What's missing? If you mean the C code generated by Dymola, the source code generation option enables more comments in the code.
1
0
0
Automatic documentation generation for Dymola code
1
doxygen,python-sphinx,documentation-generation,dymola
0
2014-10-23T13:56:00.000
I'm using celery with django and am storing the task results in the DB. I'm considering having a single set of workers reading messages from a single message broker. Now I can have multiple clients submitting celery tasks and each client will have tasks and their results created/stored in a different DB. Even though the workers are common, they know which DB to operate upon for each task. Can I have duplicate task ids generated because they were submitted by different clients pointing to different DBs? Thanks,
0
1
1.2
0
true
26,644,301
1
412
1
1
0
26,637,631
Eventually you will have duplicates. Many people ignore this issue because it is "low probability", and then are surprised when it hits them. And then a story leaks about how someone was logged into another user's Facebook account. If you require the IDs to always be unique, then you will have to prefix each ID with something that will never repeat, like the current date and time with microseconds. And if that is not good enough, because there is still an even tinier chance of a collision, you can create a small application that generates those prefixes and adds a counter (incremented after each request, and reset every couple of seconds) to the date and microseconds. It will have to work in single-threaded mode, but this guarantees unique prefixes that won't collide.
1
0
0
Common celery workers for different clients having different DBs
1
python,django,celery,django-celery
0
2014-10-29T18:06:00.000
Currently, I'm using Google's 2-step method to back up the datastore and then import it into BigQuery. I also reviewed the code using pipelines. Both methods are inefficient and have a high cost, since all data is imported every time. I only need to add the records added since the last import. What is the right way of doing this? Is there a working example of how to do it in Python?
2
2
0.197375
0
false
26,722,516
1
541
1
1
0
26,722,127
There is no full working example (as far as I know), but I believe that the following process could help you : 1- You'd need to add a "last time changed" to your entities, and update it. 2- Every hour you can run a MapReduce job, where your mapper can have a filter to check for last time updated and only pick up those entities that were updated in the last hour 3- Manually add what needs to be added to your backup. As I said, this is pretty high level, but the actual answer will require a bunch of code. I don't think it is suited to Stack Overflow's format honestly.
1
0
0
Import Data Efficiently from Datastore to BigQuery every Hour - Python
2
python,google-app-engine,google-bigquery,google-cloud-datastore
0
2014-11-03T20:00:00.000
Is there a way, using Whoosh, to return the documents that have a field matching exactly the terms in a query? For example, say I have a schema that has an autograph field with three possible values: Autograph, Partial autograph, and No Autograph. If I do a standard query autograph:autograph, I get all the records, because the term autograph is in all records. I have tried doing something like Term('autograph', 'autograph') and applying that to the filter keyword argument of the search function, but I end up getting the same results. Am I doing something wrong?
2
0
0
0
false
26,740,422
0
257
1
0
0
26,723,964
I have come up with a solution and it works. First off, I redefined my schema so that autograph is an ID field in Whoosh. Then I added a filter to the search call using a Regex query (a sketch of the general idea follows below). This works, but I am not going to accept it as the answer in the hope that there is a more elegant solution for filtering results.
1
0
0
Whoosh: matching terms exactly
1
python,whoosh
0
2014-11-03T21:55:00.000
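A sketch of that approach: declaring `autograph` as a non-analysed ID field, so that an exact filter matches the whole stored value rather than individual words. The schema and documents below are invented for illustration:

```python
import tempfile

from whoosh.fields import Schema, TEXT, ID
from whoosh.index import create_in
from whoosh.query import Term, Every

schema = Schema(title=TEXT(stored=True), autograph=ID(stored=True))
ix = create_in(tempfile.mkdtemp(), schema)

writer = ix.writer()
writer.add_document(title=u"Card A", autograph=u"Autograph")
writer.add_document(title=u"Card B", autograph=u"Partial autograph")
writer.add_document(title=u"Card C", autograph=u"No Autograph")
writer.commit()

with ix.searcher() as searcher:
    # ID fields are stored untokenised, so Term matches the exact value only
    results = searcher.search(Every(), filter=Term("autograph", u"Autograph"))
    for hit in results:
        print(hit["title"], hit["autograph"])
```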
I have a bottle+mongo application running on OpenShift. When I git-clone the application to my local computer, neither the database nor the environment variables get downloaded to my computer, just the Python files. Do I have to mimic the Mongo part on my local computer to develop locally? Or am I missing something here?
0
0
0
0
false
26,922,793
0
47
1
1
0
26,921,629
Yes. You have to run your own MongoDB server locally, or port-forward and use the OpenShift MongoDB.
1
0
0
openshift python mongodb local
1
python,mongodb,openshift,bottle
0
2014-11-14T01:43:00.000
I wrote a database program using SQLAlchemy. So far, I've been using FreeFileSync to sync the database file over the network for two computers when necessary. I want to learn how to set things up so that the file stays in one place and allows multiple user access but I don't know where to begin. Is it possible to open and read/write to a SQLAlchemy database on another computer over a network? I couldn't find information on this (or maybe I just don't understand the terminology)? Are there any courses or topics I should look into that I will be able to apply with Python and SQLAlchemy? Or would making a web-based program be the best solution? I'm good at algorithms and scientific programming but I'm still a novice at network and web programming. I appreciate any tips on where to start.
0
0
1.2
0
true
27,054,069
0
321
1
0
0
26,965,270
I can't delete this question outright, so I will answer it with what I did. Part of the problem was that I was trying to find a solution for moving a sqlite3 database to a server, but it turns out that sqlite3 is only intended for simpler local situations. So I decided to migrate to MySQL. The major steps were: I took an old computer that had XP and installed Lubuntu 14.04 as the default OS; installed MySQL on Lubuntu with sudo apt-get install mysql-server; edited the bind-address in /etc/mysql/my.cnf to match the computer's network address; and on the client computer with Python 2.7 installed PyMySQL with easy_install PyMySQL. Works great (a sketch of the client-side connection follows below). Now I'm in the process of making sure the program watches for changes to the database and updates the GUI accordingly.
1
0
0
SQLAlchemy, how to transition from single-user to multi-user
1
python-2.7,sqlalchemy,multi-user
0
2014-11-17T03:55:00.000
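For the client side of that setup, the SQLAlchemy connection string with PyMySQL looks roughly like this; the host, credentials, and database name are placeholders:

```python
from sqlalchemy import create_engine, text

# mysql+pymysql://<user>:<password>@<server-address>/<database>
engine = create_engine(
    "mysql+pymysql://me:secret@192.168.1.50/mydb",
    pool_recycle=3600,   # avoid stale connections in long-lived GUI sessions
)

with engine.connect() as conn:
    print(conn.execute(text("SELECT VERSION()")).scalar())
```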
I am getting this error when I am querying my rest app built with tornado, gevent, postgres and patched using psycogreen. I am constantly getting this error even when i am making requests at a concurrency of 10. If any one has a solution or info about what I might be doing wrong please share. Error messages: ProgrammingError: (ProgrammingError) execute cannot be used while an asynchronous query is underway ProgrammingError: close cannot be used while an asynchronous query is underway Stack Trace: File "/ENV/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2320, in all return list(self) File "/ENV/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2438, in __iter__ return self._execute_and_instances(context) File "/ENV/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2453, in _execute_and_instances result = conn.execute(querycontext.statement, self._params) File "/ENV/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 729, in execute return meth(self, multiparams, params) File "/ENV/local/lib/python2.7/site-packages/sqlalchemy/sql/elements.py", line 322, in _execute_on_connection return connection._execute_clauseelement(self, multiparams, params) File "/ENV/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 826, in _execute_clauseelement compiled_sql, distilled_params File "/ENV/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 958, in _execute_context context) File "/ENV/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1159, in _handle_dbapi_exception exc_info File "/ENV/local/lib/python2.7/site-packages/sqlalchemy/util/compat.py", line 199, in raise_from_cause reraise(type(exception), exception, tb=exc_tb) File "/ENV/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 951, in _execute_context context) File "/ENV/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 436, in do_execute cursor.execute(statement, parameters) ProgrammingError: (ProgrammingError) execute cannot be used while an asynchronous query is underway
2
2
0.379949
0
false
27,316,073
0
1,803
1
0
0
27,025,622
You are probably using the same connection with two different cursors concurrently.
1
0
0
ProgrammingError: close cannot be used while an asynchronous query is underway
1
python-2.7,sqlalchemy,tornado,psycopg2,gevent
0
2014-11-19T19:45:00.000
Sorry, I deleted my code because I realized I wasn't supposed to put it up.
0
1
0.197375
0
false
27,093,407
0
76
1
0
0
27,093,359
You are getting this error because the __init__() function in your class requires 3 arguments - new_dict, coloumn_name, and coloumn_value - and you did not supply them.
1
0
0
How do you get a column name and row from table?
1
python
0
2014-11-23T19:35:00.000
I'm looking for open-ended advice on the best approach to re-write a simple document control app I developed, which is really just a custom file log generator that looks for and logs files that have a certain naming format and file location. E.g., we name all our Change Orders with the format "CO#3 brief description.docx". When they're issued, they get moved to an "issued" folder under another folder that has the project name. So, by logging the file and querying it's path, we can tell which project it's associated with and whether it's been issued. I wrote it with Python 3.3. Works well, but the code's tough to support because I'm building the reports while walking the file structure, which can get pretty messy. I'm thinking it would be better to build a DB of most/all of the files first and then query the DB with SQL to build the reports. Sorry for the open-ended question, but I'm hoping not to reinvent the wheel. Anyone have any advice as to going down this road? E.g., existing apps I should look at or bundles that might help? I have lots of C/C++ coding experience but am still new to Python and MySQL. Any advice would be greatly appreciated.
0
1
0.099668
0
false
30,106,383
0
92
1
0
0
27,096,588
Firstly, if it works well, as you suggest, then why fix it? Secondly, before making any changes to your code I would ask myself the following questions: What are the improvements/new requirements I want to implement that I can't easily do with the current structure? Do I have a test suite for the current solution, so that I can regression-test any refactoring? When re-implementing something it is easy to overlook specific behaviors which are not very well documented but that you/users rely on. Do those improvements warrant an SQL database? For instance: Do you often need to run reports out of an SQL database without walking the directory structure? Is there a problem with walking the directories? Do you have network or performance issues? Are you facing an increase in usage? When implementing an SQL solution, you will need a new task to update the SQL data. If I understand correctly, the reports are currently generated on the fly and are therefore always up to date. That won't be the case with SQL reports, so you need to make sure they are up to date too. How frequently will you update the SQL database? a) In real time? That will necessitate a background service, which could be an operational hassle. b) On demand? Then what would be the difference from the current solution? c) At scheduled times? Then your data may be out of date between the updates. I don't have any packages or technical approaches to recommend to you; I just thought I'd give you this general software management advice. In any case, I also have extensive C++, Python and SQL experience, and I would just stick to Python on this one. On the SQL side, why stick to traditional SQL engines? Why not MongoDB for instance, which would be well suited to storing structured data such as file information?
1
0
0
Need advice on writing a document control software with Python and MySQL
2
python,mysql,file
0
2014-11-24T01:24:00.000
I am trying to build a simple login/register system with Python sockets and Tkinter. It might sound like a stupid question, but I really couldn't find anything by searching Google. I am wondering if using sqlite3 for storing the username and password (on a server) is a good idea. If no, please explain why I shouldn't use sqlite3 and what the alternative for this need is.
1
2
1.2
0
true
27,134,845
0
1,093
1
0
0
27,134,539
You'll need to store the names and (secured) passwords on the server. SQLite is a perfectly good solution for this (a small sketch follows below), but there are many, many other ways to do it. If your application does not otherwise use a database for storage, there's no need to add database support just for this simple task. Assuming that you don't have a very large and ever-growing list of users, it could be as easy as pickling a Python dictionary.
1
0
0
Should I use sqlite3 for storing username and password with python?
1
python,database,python-2.7,sqlite
0
2014-11-25T18:57:00.000
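If you do go the SQLite route, store salted password hashes rather than the passwords themselves. A small standard-library sketch (the table layout and PBKDF2 parameters are illustrative choices, not a security review):

```python
import hashlib
import hmac
import os
import sqlite3

conn = sqlite3.connect("users.db")
conn.execute("CREATE TABLE IF NOT EXISTS users "
             "(username TEXT PRIMARY KEY, salt BLOB, pw_hash BLOB)")

def hash_password(password, salt):
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 100000)

def register(username, password):
    salt = os.urandom(16)
    conn.execute("INSERT INTO users VALUES (?, ?, ?)",
                 (username, salt, hash_password(password, salt)))
    conn.commit()

def check_login(username, password):
    row = conn.execute("SELECT salt, pw_hash FROM users WHERE username = ?",
                       (username,)).fetchone()
    return row is not None and hmac.compare_digest(
        hash_password(password, row[0]), row[1])

register("alice", "s3cret")
print(check_login("alice", "s3cret"))   # True
print(check_login("alice", "wrong"))    # False
```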
I am using Ubuntu 14.04 and trying to run snoopy_auth which is a part of the snoopy-ng application I downloaded and installed from their GitHub. When running, I get an error that is documented on snoopy-ng's GitHub page, which says that it works using version 0.7.8. How can I downgrade sqlalchemy to 0.7.8? The error looks like: snoopy_auth -l [+] Available drone accounts: Traceback (most recent call last): File "/usr/bin/snoopy_auth", line 103, in drones = auth_.manage_drone_account("foo", "list") File "/usr/bin/snoopy_auth", line 29, in manage_drone_account self.db.create(self.drone_tbl_def ) File "", line 2, in create File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/deprecations.py", line 106, in warned return fn(*args, **kwargs) File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/interfaces.py", line 859, in create raise NotImplementedError() NotImplementedError
1
2
1.2
0
true
27,371,294
1
1,813
1
0
0
27,161,760
To get past this error I just ran the command: sudo easy_install "SQLAlchemy==0.7.8". Virtual environments do seem like the preferred method though, so hopefully I don't run into any additional problems from downgrading system-wide.
1
0
0
How can I remove version 0.9.7 of sqlalchemy and install 0.7.8 instead?
2
python,linux,python-2.7,ubuntu,sqlalchemy
0
2014-11-27T01:37:00.000
I have a working Django 1.6 project using sqlite, deployed to Digital Ocean on Ubuntu. I use Git to update my project on the server side (git clone, and git pull thereafter). My question is: every time I update my database locally (e.g. add some new tables), how can I synchronise it with the server's database? Using git pull results in conflicts that cannot be resolved. I can do it using git fetch --all and git reset --hard, but that doesn't seem to be the correct way. Any help is greatly appreciated! Thank you in advance.
0
2
1.2
0
true
27,163,084
1
860
1
0
0
27,162,982
Follow these steps to push from local and pull to the server: make changes to models.py; add the change to git with git add models.py; commit with git commit -m "your message"; then git push to push your local changes to the repo. Now go to the server and run git status to see whether any local changes have been made to models.py there; you can see those local changes with git diff models.py. If those changes are already in your repo, discard them with git checkout models.py. Now run git pull to take your latest changes from the repo. P.S.: use the same commands for all changes made to any file in the clone. South migrations for syncing the database: initially, 1. python manage.py schemamigration --initial, 2. python manage.py migrate --fake. After any change to the database, do: 1. python manage.py schemamigration --auto, 2. python manage.py migrate. Do not check in the migration folder created in the app, as it will conflict between your local and production clones. Note: all the history for South migrations is stored in the south_migrations table in the database.
1
0
0
How to synchronise local Django sqlite database with the server one?
1
python,django,database,git
0
2014-11-27T04:17:00.000
I have uploaded some data into an Elastic server with the fields "job id, job place, job req, job desc". My index is my_index and the doctype is job_list. I need to write a query to find a particular term, say "Data Analyst", and it should give me back the matching results with a specified field like "job place". That is, documents matching the term Data Analyst, from which I need only the "job place" information. Any help? I tried curl, but it is not working. If the answer is in Python, even better.
0
1
0.099668
1
false
27,177,167
0
5,426
1
0
0
27,166,357
The above search example looks correct. Try lowercasing "Data Analyst" as "data analyst". If that doesn't help, post your mappings, the query you are firing, and the response you are getting. A sketch of one way to write the query in Python follows below.
1
0
0
Elastic Search query filtering
2
python,search,curl,elasticsearch
0
2014-11-27T08:43:00.000
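One way to express that in Python with the official elasticsearch client; the field names job_desc and job_place are assumptions about the actual mapping, and older client versions may also want a doc_type argument:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch()

body = {
    "query": {"match": {"job_desc": "data analyst"}},
    "_source": ["job_place"],     # return only the field we care about
}

res = es.search(index="my_index", body=body)
for hit in res["hits"]["hits"]:
    print(hit["_source"].get("job_place"))
```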
I'm working on a multithreaded application that uses the SQLAlchemy ORM. It already uses scoped_session with the thread as its scope, but we are having some issues when we pass an ORM object from a worker thread back to the main thread. Since the objects are attached to the worker thread's session, when the worker thread is shut down, we start getting DetachedInstanceErrors on those objects. Is there a way I can generically tell the ORM objects to detach/reattach themselves to the correct session as needed? We spawn a new thread whenever we have a slow operation that we don't want locking up our UI, so putting the reattach code in everywhere we spawn a new thread would be a mess. I think we also need to be able to clone the ORM object when we spawn the thread, so that we can have one in the main thread and one in the worker thread. I see a "merge" but no "split". Is this possible?
2
5
1.2
0
true
27,194,059
1
1,254
1
0
0
27,193,849
Session.merge() is enough and should do what you're after, but even then it gets fiddly with threads. You might want to rethink this. Pass the primary key(s) to the worker instead of the objects, and then handle object loading and the actual work in the worker itself. No messing around with threading and open/closed sessions that will eventually lead to headaches. Once the workers can deal with the objects separately, you could even move the workers to a separate process (similar to what Celery does).
1
0
0
SQLAlchemy ORM: safely passing objects between threads without manually reattaching?
1
python,multithreading,sqlalchemy
0
2014-11-28T17:57:00.000
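A self-contained sketch of the pattern recommended above (SQLAlchemy 1.4+ style): the main thread hands the worker primary keys only, and the worker opens its own thread-local session to re-load and process the rows. The Order model and the "slow" processing step are stand-ins:

```python
import threading

from sqlalchemy import Boolean, Column, Integer, create_engine
from sqlalchemy.orm import declarative_base, scoped_session, sessionmaker

Base = declarative_base()

class Order(Base):                       # stand-in model for illustration
    __tablename__ = "orders"
    id = Column(Integer, primary_key=True)
    processed = Column(Boolean, default=False)

engine = create_engine("sqlite:///demo.db",
                       connect_args={"check_same_thread": False})
Base.metadata.create_all(engine)
Session = scoped_session(sessionmaker(bind=engine))

def slow_work(order_ids):
    session = Session()                  # the worker thread's own session
    try:
        for oid in order_ids:
            order = session.get(Order, oid)   # re-load by primary key
            order.processed = True            # the slow work goes here
        session.commit()
    finally:
        Session.remove()                 # discard the thread-local session

# main/UI thread: pass primary keys, never the ORM instances themselves
main = Session()
main.add_all([Order(), Order()])
main.commit()
ids = [o.id for o in main.query(Order).all()]

t = threading.Thread(target=slow_work, args=(ids,))
t.start()
t.join()
```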
Right now I'm using print(), calling the variables I want that are stored in a tuple and then formatting them using: print(format(x,"<10s")+ format(y,"<40s")...) but this gives me output that isn't aligned in a column form. How do I make it so that each row's element is aligned? So, my code is for storing student details. First, it takes a string and returns a tuple, with constituent parts like: (name,surname,student ID, year). It reads these details from a long text file on student details, and then it parses them through a tuplelayout function (the bit which will format the tuple) and is meant to tabulate the results. So, the argument for the tuplelayout function is a tuple, of the form: surname | name | reg number | course | year
1
0
0
0
false
27,209,158
0
5,713
1
0
0
27,196,501
My shell's font settings had been changed, so the alignment was off. Back on the "Courier" font, everything is working fine. Sorry.
1
0
1
Format strings to make 'table' in Python 3
2
python,formatting,tabular
0
2014-11-28T22:07:00.000
I'm trying to delete cells from an Excel spreadsheet using openpyxl. It seems like a pretty basic command, but I've looked around and can't find out how to do it. I can set their values to None, but they still exist as empty cells. worksheet.garbage_collect() throws an error saying that it's deprecated. I'm using the most recent version of openpyxl. Is there any way of just deleting an empty cell (as one would do in Excel), or do I have to manually shift all the cells up? Thanks.
1
4
1.2
0
true
27,280,801
0
2,658
1
0
0
27,259,478
In openpyxl, cells are stored individually in a dictionary. This makes aggregate actions like deleting or adding columns or rows difficult, as code has to process lots of individual cells. However, even moving to a tabular or matrix implementation is tricky, as the coordinates are stored on each cell, meaning that you have to process all cells to the right of and below an inserted or deleted cell. This is why we have not yet added any convenience methods for this: they could be really, really slow and we don't want the responsibility for that. We are hoping to move towards a matrix implementation in a future version, but there's still the problem of cell coordinates to deal with.
1
0
0
Delete cells in Excel using Python 2.7 and openpyxl
1
python,excel,openpyxl
0
2014-12-02T21:38:00.000
Is it just about creating models that use the best-fitting datastore API? For part of the data I need relations, joins and sum(). For the rest this is not necessary, and a NoSQL approach is more appropriate.
1
0
0
0
false
28,197,823
1
62
1
1
0
27,278,297
MySQL commands cannot be run on NoSQL. You will need to do some conversions during manipulation of the data from both DBs.
1
0
0
can I combine NDB and mysqldb in one app on google cloud platform
1
google-app-engine,google-cloud-storage,google-cloud-datastore,mysql-python,app-engine-ndb
0
2014-12-03T17:47:00.000
I am dealing with some performance issues whilst working with a very large dataset. The data is a pairwise distance matrix of ~60k entries. The resulting vectors have been generated in the following format: mol_a,mol_b,score,year_a,year_b 1,1,1,year,year 1,2,x,year,year 1,3,y,year,year ... 1,60000,z,year,year 2,1,x,year,year 2,2,1,year,year ... where mol_a and mol_b are unique molecules (INTs), score is their Jaccard/Tanimoto similarity score (FLOAT/REAL), and year_a and year_b are dates (INT(4)) associated with mol_a and mol_b respectively. Since this is a distance matrix, the values are reflected across the diagonal, i.e. 0 1 2 3 / 1 1 x y / 2 x 1 z / 3 y z 1. The resulting file has ~3.6 billion rows and becomes a ~100GB sqlite3 db. It takes about 10 hours to build, using all the PRAGMA tweaks I have read about and doing executemany in 5-million-entry batches. I would love to throw away half of it while building the database, but I can't think of a good way of doing this without ending up building a (prohibitively) giant list in memory... It's constructed via 2 nested for loops: for i in MolList: for j in MolList: a = calculate_score(i, j); write_buffer.append(a). Though the creation is slow, it is not the prohibitive part; the actual analysis I want to do with it is. I will be grouping things by year_a and year_b, so I started creating an index on year_a, year_b and score to have a 'covering index', but the build is 13 hours in and occasionally uses massive amounts of hard drive space on my C: drive (which is a small SSD, as opposed to the RAID where the database is). The whole thing is running on a 12-core workstation with 16GB of RAM, Windows 7 on a 240GB SSD, and data storage on a 1TB RAID 1 array (built-in motherboard controller). I have also been attempting to build a MySQL database on a small Ubuntu server that I have (Intel Core Duo 2GHz, 4GB RAM, 128GB SSD), but the Python inserts across the network are taking forever, and I'm not convinced I'll see any real improvements. From what I've read, SQLite seems like what I really wanted; essentially this would all be handled in Python memory if I had ~150GB of RAM at my disposal, but I need a file-based storage solution, which seems like exactly what SQLite was designed for. However, watching SQLite consume a pittance of memory and CPU while disk IO chugs away at around 5MB/s makes me think that disk is just a bottleneck. What are my options for streamlining this process on a single node (i.e. no Hadoop clusters at my disposal)? I'm a DB newbie, so please keep suggestions within the realm of possibility. Having never worked with datasets in the billions, I don't quite know what I'm in for, but I would greatly appreciate help/advice from this sage community.
2
0
0
0
false
27,323,942
0
314
1
0
0
27,322,027
You have several options. The simplest is to simply save the output in chunks instead (say save one file for all the 1st molecule 'distance' scores, a second file for the second molecule distances, etc., with 60,000 files in all). That would allow you to also process your work in batches, and then aggregate to get the combined result. If that's not possible due to operations you need to do, you can perhaps sign up for a free Amazon Web Services Trial and upload the data to a Redshift server instance there, it has very good compression (>10x is common), is frighteningly fast for data analysis, and if you know SQL, you should be fine.
1
0
0
database solution for very large table
2
python,mysql,sql,sqlite
0
2014-12-05T18:04:00.000
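A minimal sketch of the chunked-insert idea from the answer above, combined with the halving the asker wanted: because score(a, b) == score(b, a), the inner loop can start at the current position instead of at zero, so only the upper triangle (plus the diagonal) is generated and stored. MolList and calculate_score are the names from the question itself; the assumption that calculate_score returns the full 5-field tuple is mine.

    import sqlite3

    conn = sqlite3.connect("distances.db")
    conn.execute("""CREATE TABLE IF NOT EXISTS score
                    (mol_a INTEGER, mol_b INTEGER, score REAL,
                     year_a INTEGER, year_b INTEGER)""")

    BATCH = 5000000
    batch = []
    for i, mol_a in enumerate(MolList):
        for mol_b in MolList[i:]:          # upper triangle only: halves the row count
            batch.append(calculate_score(mol_a, mol_b))
            if len(batch) >= BATCH:
                conn.executemany("INSERT INTO score VALUES (?, ?, ?, ?, ?)", batch)
                conn.commit()
                batch = []
    if batch:
        conn.executemany("INSERT INTO score VALUES (?, ?, ?, ?, ?)", batch)
        conn.commit()

Queries that need the mirrored half can either ask for (a, b) OR (b, a), or normalise lookups so the smaller id always comes first.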
I recently downloaded xlsxwriter version 0.6.4 and installed it on my computer. It was correctly added to my C:\Python27\Lib\site-packages\xlsxwriter folder, however when I try to import it I get the error ImportError: No module named xlsxwriter. The traceback is File "F:\Working\ArcGIS\ArcGIS .py\Scripts\Append_Geodatabase.py". However, if I try to import numpy (I can't remember what numpy is, but it is located in the same site-packages folder C:\Python27\Lib\site-packages\numpy) it has no problem. Any idea of what could be causing this issue? Thanks for the help.
38
1
0.022219
0
false
42,188,912
0
200,367
4
0
0
27,385,097
I am not sure what caused this, but it all went well once I changed the path name from Lib to lib, and I was finally able to make it work.
1
0
0
ImportError: No module named xlsxwriter
9
python-2.7,xlsxwriter
0
2014-12-09T17:29:00.000
I recently downloaded xlsxwriter version 0.6.4 and installed it on my computer. It was correctly added to my C:\Python27\Lib\site-packages\xlsxwriter folder, however when I try to import it I get the error ImportError: No module named xlsxwriter. The traceback is File "F:\Working\ArcGIS\ArcGIS .py\Scripts\Append_Geodatabase.py". However, if I try to import numpy (I can't remember what numpy is, but it is located in the same site-packages folder C:\Python27\Lib\site-packages\numpy) it has no problem. Any idea of what could be causing this issue? Thanks for the help.
38
0
0
0
false
67,318,348
0
200,367
4
0
0
27,385,097
I found the same error when using xlsxwriter in my test.py application. First, check whether you have the xlsxwriter module installed: sudo pip install xlsxwriter. Then check the Python version you are using; the following worked for me: python2 test.py
1
0
0
ImportError: No module named xlsxwriter
9
python-2.7,xlsxwriter
0
2014-12-09T17:29:00.000
I recently downloaded xlsxwriter version 0.6.4 and installed it on my computer. It was correctly added to my C:\Python27\Lib\site-packages\xlsxwriter folder, however when I try to import it I get the error ImportError: No module named xlsxwriter. The traceback is File "F:\Working\ArcGIS\ArcGIS .py\Scripts\Append_Geodatabase.py". However, if I try to import numpy (I can't remember what numpy is, but it is located in the same site-packages folder C:\Python27\Lib\site-packages\numpy) it has no problem. Any idea of what could be causing this issue? Thanks for the help.
38
1
0.022219
0
false
72,355,605
0
200,367
4
0
0
27,385,097
In VSCode: instead of activating your environment with a script, use "Python: Select Interpreter" from VSCode (press Ctrl+Shift+P) and then select your environment from the list (marked with "recommended").
1
0
0
ImportError: No module named xlsxwriter
9
python-2.7,xlsxwriter
0
2014-12-09T17:29:00.000
I recently downloaded xlsxwriter version 0.6.4 and installed it on my computer. It was correctly added to my C:\Python27\Lib\site-packages\xlsxwriter folder, however when I try to import it I get the error ImportError: No module named xlsxwriter. The traceback is File "F:\Working\ArcGIS\ArcGIS .py\Scripts\Append_Geodatabase.py". However, if I try to import numpy (I can't remember what numpy is, but it is located in the same site-packages folder C:\Python27\Lib\site-packages\numpy) it has no problem. Any idea of what could be causing this issue? Thanks for the help.
38
5
0.110656
0
false
50,458,074
0
200,367
4
0
0
27,385,097
I managed to resolve this issue as follows... Be careful and make sure you understand the IDE you're using, because I didn't. I was trying to import xlsxwriter using PyCharm and it kept returning this error. Assuming you have already attempted the pip installation (sudo pip install xlsxwriter) via your cmd prompt, try using another IDE, e.g. Geany, and import xlsxwriter. I tried this and Geany imported the library fine. I then opened PyCharm and navigated to 'File > Settings > Project > Project Interpreter'. xlsxwriter was listed, though intriguingly I couldn't import it! I double-clicked xlsxwriter and hit 'Install Package'... And that's it! It worked! Hope this helps...
1
0
0
ImportError: No module named xlsxwriter
9
python-2.7,xlsxwriter
0
2014-12-09T17:29:00.000
my job id is job_7mb6iw3BHoMRC09US9Vqq-Qd06s, while uploading data by this job on Bigquery The data was not getting uploaded on bigquery. And I am not getting any error for this.
0
1
0.197375
0
false
27,449,004
0
95
1
0
0
27,416,642
That job failed with reason "invalid" and a message starting with "Too many errors encountered." In order to detect job failure, when you get a successful response from jobs.get, first ensure that the job is in a DONE state, then look for the presence of errors in status.errorResult.reason and status.errorResult.message. Additionally, the status.errorStream will contain a list of the individual failures encountered. In this case, it looks like the job was trying to load data that didn't match the schema of the table. You can find the file/line/field offsets in the "location" field of each error in the error stream. Here are a couple of causes for "not finding the errors in the job" that we've seen: Forgetting to check for DONE before looking for job errors. Waiting for job completion with a timeout, but forgetting to treat a timeout as failure. Letting temporary errors from jobs.get (5xx HTTP response codes) terminate your wait loop early, and then not knowing the state of the job since the jobs.get itself failed. I hope this helps narrow down the problem for you. (A polling sketch follows this record.)
1
0
0
Bigquery data not getting uploaded
1
google-app-engine,python-2.7,google-bigquery
0
2014-12-11T06:26:00.000
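A hedged polling sketch of the checks described in the answer above, using the BigQuery v2 REST resource via google-api-python-client; in the JSON response the list of individual failures appears under status.errors. The project and job ids are whatever your load job used, and the bigquery object is assumed to be an already-authorised service built with googleapiclient.discovery.build('bigquery', 'v2', ...).

    import time

    def wait_for_job(bigquery, project_id, job_id):
        while True:
            job = bigquery.jobs().get(projectId=project_id, jobId=job_id).execute()
            status = job['status']
            if status['state'] != 'DONE':
                time.sleep(5)              # not finished yet: keep waiting, don't give up
                continue
            if 'errorResult' in status:    # DONE, but the job failed
                for err in status.get('errors', []):
                    print(err.get('location'), err.get('message'))
                raise RuntimeError(status['errorResult']['message'])
            return job                     # DONE and successful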
My goal was to duplicate my Google App Engine application. I created new application, and upload all needed code from source application(python). Then I uploaded previously created backup files from the Cloud Storage of the source application (first I downloaded those files to PC and than uploaded files to GCS bucket of the target app) After that I tried to restore data from those files, by using "Import Backup Information" button. Backup information file is founded and I can add it to the list of available backups. But when I try to do restore I receive error: "There was a problem kicking off the jobs. The error was: Backup not readable" Also I tried to upload those files back to original application and I was able to restore from them, by using the same procedure, so the files are not corrupted. I know there are another methods of copying data between applications, but I wanted to use this method. If for example, my Google account is being hacked and I can not access my original application data, but I have all backup data on my hard drive. Then I can simply create new app and copy all data to the new app... Has anyone before encountered with the similar problem, and maybe found some solution? Thanks!
8
4
0.379949
0
false
34,706,288
1
961
2
1
0
27,514,985
Yes!! What you are trying to do is not possible. The reason is that there are absolute references in the backup files to the original backup location (bucket). So moving the files to another GCS location will not work. Instead you have to leave the backup files in the original GCS bucket and give your new project read access to that folder. That is done in the "Edit bucket permissions" option. eg. add: Project - owners-12345678 - Reader Now you are able to import from that bucket in your new project in "Import Bucket Information".
1
0
0
Backup in one and restore in another Google App Engine application by using Cloud Storage?
2
python,google-app-engine
0
2014-12-16T22:17:00.000
My goal was to duplicate my Google App Engine application. I created new application, and upload all needed code from source application(python). Then I uploaded previously created backup files from the Cloud Storage of the source application (first I downloaded those files to PC and than uploaded files to GCS bucket of the target app) After that I tried to restore data from those files, by using "Import Backup Information" button. Backup information file is founded and I can add it to the list of available backups. But when I try to do restore I receive error: "There was a problem kicking off the jobs. The error was: Backup not readable" Also I tried to upload those files back to original application and I was able to restore from them, by using the same procedure, so the files are not corrupted. I know there are another methods of copying data between applications, but I wanted to use this method. If for example, my Google account is being hacked and I can not access my original application data, but I have all backup data on my hard drive. Then I can simply create new app and copy all data to the new app... Has anyone before encountered with the similar problem, and maybe found some solution? Thanks!
8
1
0.099668
0
false
29,852,870
1
961
2
1
0
27,514,985
Given the message, my guess is that the target application has no read access to the bucket where the backup is stored. Add the application to the permitted users of that bucket before creating the backup, so that the backup objects will inherit the permission.
1
0
0
Backup in one and restore in another Google App Engine application by using Cloud Storage?
2
python,google-app-engine
0
2014-12-16T22:17:00.000
I get an error when trying to run ogr2ogr thru subprocess but I am able to run it using just the windows command prompt. The script will be part of a series of processes that start with batch importing gpx files unto a postgres db. Can somebody please tell me what's wrong? Thanks! :::::::::::::::::::::::::::: Running THIS script gives me an ERROR: 'ogr2ogr' is not recognized as an internal or external command, operable program or batch file. import subprocess import sys print sys.executable track= "20131007.gpx" subprocess.call(["ogr2ogr", "-f", "PostgreSQL", "PG:dbname=TTBASEMain host=localhost port=5432 user=postgres password=minda", track], shell=True) ::::::::::::::::::::::::::::: THIS CODE does its job well. ogr2ogr -f PostgreSQL PG:"dbname='TTBASEMain' host='localhost' port='5432' user='postgres' password='minda'" "20131007.gpx" ::::::::::::::::::::::::::::: THIS is what I have in my environment path: C:\Users\User>path PATH=C:\Program Files (x86)\Intel\iCLS Client\;C:\Program Files\Intel\iCLS Client\;C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common;C:\windows\system32;C:\windows;C:\windows\System32\Wbem;C:\windows\System32\WindowsPowerShell\v1.0\;C:\Program Files (x86)\Intel\OpenCL SDK\3.0\bin\x86;C:\Program Files (x86)\Intel\OpenCL SDK\3.0\bin\x64;C:\Program Files\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files\Intel\Intel(R) Management Engine Components\IPT;C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files (x86)\Intel\Intel(R) Management Engine C omponents\IPT;C:\Program Files\Lenovo\Bluetooth Software\;C:\Program Files\Lenovo\Bluetooth Software\syswow64;C:\lastools\bin;C:\Python27;C:\Python27\Scripts;C:\Python27\DLLs;C:\Python27\Lib\site-packages;C:\Users\User\AppData\Roaming.local\bin;C:\Program Files (x86)\Windows Kits\8.1\Windows Performance Toolkit\;C:\Program Files\Microsoft SQL Server\110\Tools\Binn\;C:\Program Files\GDAL
0
0
0
0
false
27,570,551
0
1,962
1
1
0
27,567,450
REINSTALLING the Python bindings resolved my issue. I don't see GDAL on the paths below, but it's working now. Is it supposed to be there? Since it's not, I might have another round of GDAL head scratching in the future. (A sketch of calling ogr2ogr by its full path follows this record.) ::::::::::::::::::::::::::::::::::::::: THIS is what I currently have when I type sys.path in Python: Microsoft Windows [Version 6.2.9200] (c) 2012 Microsoft Corporation. All rights reserved. C:\Users\User>python Python 2.7.8 (default, Jun 30 2014, 16:08:48) [MSC v.1500 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. import sys sys.path ['', 'C:\windows\SYSTEM32\python27.zip', 'C:\Python27\DLLs', 'C:\Python27\lib', 'C:\Python27\lib\plat-win', 'C:\Python27\lib\lib-tk', 'C:\Python27', 'C:\Python27\lib\site-packages', 'C:\Python27\lib\site-packages\wx-3.0-msw']
1
0
0
ERROR: 'ogr2ogr' is not recognized as an internal or external command, operable program or batch file when running ogr2ogr in python script
1
python-2.7,subprocess
0
2014-12-19T13:48:00.000
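Independently of the reinstall described above, a common way to make the subprocess call immune to PATH problems is to point it at the executable's full location and drop shell=True. The GDAL install path below is a guess; adjust it to wherever ogr2ogr.exe actually lives on your machine.

    import subprocess

    OGR2OGR = r"C:\Program Files\GDAL\ogr2ogr.exe"   # hypothetical install path
    track = "20131007.gpx"
    pg = ("PG:dbname=TTBASEMain host=localhost port=5432 "
          "user=postgres password=minda")

    # A list of arguments with shell=False runs the named binary directly,
    # so no PATH lookup or cmd.exe quoting is involved.
    subprocess.check_call([OGR2OGR, "-f", "PostgreSQL", pg, track])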
I'm using the MySQLdb module within my Django application, which is linked to Apache via WSGI. However, I'm getting permission issues (shown below). This is down to SELinux, and if I set it to permissive everything is OK. ImproperlyConfigured: Error loading MySQLdb module: /opt/django/virtenv/django15/lib/python2.7/site-packages/_mysql.so: failed to map segment from shared object: Permission denied What is the best way to update SELinux to allow this without having to turn off the whole thing? The error is shown below: ImproperlyConfigured: Error loading MySQLdb module: /opt/django/virtenv/django1/lib/python2.7/site-packages/_mysql.so: failed to map segment from shared object: Permission denied
8
0
0
0
false
27,734,160
1
1,090
1
0
0
27,584,508
A couple of permission issues that I notice: Make sure your credentials for MySQLdb have access to the database. If you are using an IP and port to connect to the database, try using localhost. Make sure the user (chmod permissions) has access to the folder where MySQL stores its files; when storing media and similar things it needs permission to the actual folder. Lastly, I would try restarting the Apache server (not the entire machine). (The SELinux side specifically is addressed in the sketch after this record.)
1
0
0
Python MySQLdb with SELinux
4
python,django,mysql-python,selinux
0
2014-12-20T21:29:00.000
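The answer above covers ordinary file permissions; for the SELinux side that the question actually asks about, a standard targeted workflow (not taken from that answer, and the exact context type can differ between policies) is to relabel the virtualenv's shared objects or to build a small policy module from the logged denials:

    # See what SELinux actually blocked for the module in question
    sudo grep _mysql.so /var/log/audit/audit.log | audit2why

    # Option 1: give the virtualenv a type httpd is allowed to map and execute
    sudo semanage fcontext -a -t httpd_sys_script_exec_t "/opt/django/virtenv(/.*)?"
    sudo restorecon -Rv /opt/django/virtenv

    # Option 2: generate a tailored policy module from the denials and load it
    sudo grep _mysql.so /var/log/audit/audit.log | audit2allow -M mysql_so
    sudo semodule -i mysql_so.pp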
I am trying to get writing privileges to my sqlite3.db file in my django project hosted on bluehost, but I cannot get any other chmod command to work besides the dangerous/risky chmod 777. When I chmod 777 the db file and the directory, everything works perfectly. However, in order to be more prudent, I’ve tried chmodding 775 the directory of the sqlite file and chmod 664 the actual db file itself. No luck. I still get OperationalError: Attempt to Write to a Read Only Database whenever I access a feature that requires writing to the db. I appreciate any assistance.
2
1
1.2
0
true
27,594,818
1
310
1
0
0
27,594,703
The user accessing the database (www-data?) needs to have write privileges to the folder the data resides in as well as to the file itself. I would probably change the group ownership (chgrp) of the folder to www-data and add the setgid bit to the folder as well (chmod g+s dbfolder). The latter makes sure that any new files created belong to the group owner (see the sketch after this record). If you're on Bluehost you should also have access to MySQL, which is a much better choice for a web-facing DB.
1
0
0
Django on CentOS/Bluehost: Attempt to Write a Readonly Database, which Chmod besides 777 to use?
1
python,django,sqlite
0
2014-12-21T23:04:00.000
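A shell sketch of the permissions described above. The group name and paths are placeholders (Bluehost's web server may run under a different account than www-data); the setgid bit on the directory keeps SQLite's journal files, which are created alongside the database, in the same group.

    # group ownership for the directory and the db file
    chgrp www-data /path/to/dbdir /path/to/dbdir/project.db

    # directory: rwx for owner and group, rx for others, setgid so new files inherit the group
    chmod 2775 /path/to/dbdir

    # database file: rw for owner and group, r for others
    chmod 664 /path/to/dbdir/project.db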
I use Python 2.7.3 and Windows 7. I want to decorate an Excel chart using Python; it's not necessary to make the chart from start to finish. First step (EXCEL STEP): I store data in the Excel sheet and make a rough line chart (by selecting the data range and using the hotkey ALT+N+N+Enter). Next step (PYTHON STEP): I want to modify the chart made in the first step, specifically the border line colour and width, chart size, label fonts, font sizes and so on. How can I select or activate an existing Excel chart from Python? (Not create the chart from Python.)
0
0
1.2
0
true
27,599,618
0
191
1
0
0
27,596,890
It seems that the Python modules can only create Excel files, not activate existing charts. Try xlrd and xlwt. Good luck.
1
0
0
Selecting or activating existing Excel chart
1
python,excel,charts
0
2014-12-22T05:11:00.000
I'm developing an intranet web app based on Pyramid with SQLAlchemy. It may (and eventually will) happen that two users edit the same record. How can I handle the requirement to notify the user who started editing later that the record is already being edited by the first user?
1
0
0
0
false
27,616,278
1
55
1
0
0
27,616,098
You need a table with the current editor, record_id and a timeout. The first editor asks, via a POST request, to edit a record; you put a new row in this table with a reasonable timeout, say 5 minutes, and the first editor gets an "ok" in return. For the second editor you find a match for the record_id in the table, look at the timeout, and if it has not timed out, (s)he gets an "error" in return to the POST request. In a second POST request, an editor sends their changes; you check in the table whether (s)he is the editor and respond "changed" or "rejected" accordingly. (A small sketch of the lock check follows this record.)
1
0
0
Notifying user's browser of change without websockets
2
python,web,pyramid
0
2014-12-23T07:44:00.000
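A rough sketch of the lock check described above, assuming a hypothetical edit_locks table with columns (record_id, editor, locked_at) and a SQLAlchemy connection, since the app uses SQLAlchemy; a real version would run inside a transaction (or use a unique constraint on record_id) so two editors can't race past the check.

    from datetime import datetime, timedelta
    from sqlalchemy import text

    LOCK_TIMEOUT = timedelta(minutes=5)

    def try_acquire_edit_lock(conn, record_id, user_id):
        """Return True if user_id may edit record_id, False if another live lock exists."""
        row = conn.execute(
            text("SELECT editor, locked_at FROM edit_locks WHERE record_id = :rid"),
            {"rid": record_id}).fetchone()
        now = datetime.utcnow()
        if row is not None and row.editor != user_id and now - row.locked_at < LOCK_TIMEOUT:
            return False                          # someone else holds a lock that hasn't expired
        conn.execute(text("DELETE FROM edit_locks WHERE record_id = :rid"),
                     {"rid": record_id})
        conn.execute(text("INSERT INTO edit_locks (record_id, editor, locked_at) "
                          "VALUES (:rid, :uid, :now)"),
                     {"rid": record_id, "uid": user_id, "now": now})
        return True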
I am generating load-test data in a Python script for Cassandra. Is it better to insert directly into Cassandra from the script, or to write a CSV file and then load that into Cassandra? This is for a couple of million rows.
0
0
0
1
false
27,688,141
0
364
1
0
0
27,678,990
For a few million rows, I'd say just use CSV (assuming the rows aren't huge) and see if it works; if not, inserts it is :) For more heavy-duty loads, you might want to create sstables and use the sstable loader. (A cqlsh COPY sketch follows this record.)
1
0
0
Python/Cassandra: insert vs. CSV import
1
python,cassandra,load-testing
0
2014-12-28T17:47:00.000
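If the CSV route from the answer above is chosen, the load itself can be a one-liner in cqlsh; the keyspace, table and column names below are placeholders for whatever the script writes out.

    -- cqlsh: bulk-load the file the Python script produced
    COPY loadtest.events (id, name, score) FROM 'rows.csv' WITH HEADER = true;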
I have a typical Django project with one primary database where I keep all the data I need. Suppose there is another DB somewhere with some additional information. That DB isn't directly related to my Django project, so let's assume I do not even have control over it. The problem is that I do not know whether I need to create and maintain a model for this external DB so I could use Django's ORM, or whether the best solution is to use raw SQL to fetch data from the external DB and then use this info to filter data from the primary DB using the ORM, or directly in views. The solution with creating a model seems quite OK, but the fact that the DB isn't part of my project means I am not aware of possible schema changes, so it looks like bad practice. So in the end, if I have some external resources like DBs that are not related to but needed for my project, should I: Try to create Django models for them Use raw SQL to get info from the external DB and then use it for filtering data from the primary DB with the ORM, as well as using the data directly in views if needed Use raw SQL for both the primary and the external DB where they intersect in the app's logic
1
0
0
0
false
27,744,297
1
437
1
0
0
27,742,457
I would create minimal Django models for the external database, i.e. only for the tables that interact with your code. Several outcomes of this: If parts of the database you're not interested in change, it won't have an impact on your app. If the external models you're using change, you probably want to be aware of that as quickly as possible (your app is likely to break in that case too). All the relational database queries in your code are handled by the same ORM. (A sketch of an unmanaged model follows this record.)
1
0
0
Django models with external DBs
3
python,django
0
2015-01-02T12:45:00.000
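A minimal sketch of such a model, assuming an 'external' entry has been added to settings.DATABASES; the model, table and column names are placeholders. managed = False keeps Django's migrations away from a schema the project does not own, which fits the "no control over it" situation in the question.

    from django.db import models

    class ExternalThing(models.Model):
        code = models.CharField(max_length=32)
        label = models.CharField(max_length=255)

        class Meta:
            managed = False          # migrate/syncdb will never touch this table
            db_table = 'things'      # the table's real name in the external schema

    # Query it explicitly against the second connection:
    # ExternalThing.objects.using('external').filter(code='X')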
I have a Python client program (which will be available to a limited number of users) that fetches data from a remote MySQL DB using the pymysql module. The problem is that the login data for the DB is visible to everyone who takes a look at the code, so anyone could manipulate or delete data in the DB. Even if I stored the login data in an encrypted file, someone could still edit the code and insert their own MySQL queries (and again manipulate or delete data). So how can I access the DB from my program and still SELECT, DELETE or UPDATE data in it, but make sure that no one can execute their own (evil) SQL code (except the statements that are triggered by using the GUI)?
1
0
0
0
false
27,743,210
0
934
1
0
0
27,743,031
This happens to be one of the reasons desktop client-server architecture gave way to web architecture. Once a desktop user has access to a DBMS, they don't have to use just the SQL in your application; they can do whatever their privileges allow. In those bad old days, client-server apps could only change rows in the DBMS via stored procedures. They didn't have direct privileges to INSERT, UPDATE, or DELETE rows. The users of those apps had accounts that were GRANTed a limited set of privileges; they could SELECT rows and run procedures, and that was it. They certainly did not have any create/drop table privilege. (This is why a typical DBMS has such granular privilege control.) You should restrict the privileges of the account or accounts employed by the users of your desktop app. (The same is, of course, true for web app access accounts.) Ideally, each user should have her own account, granting access only to the particular database your application needs. Then, if you don't trust your users to avoid trashing your data, you can write, test, and deploy stored procedures to do every insert, update, or delete needed by your app (a sketch of the grants follows this record). This is a notoriously slow and bureaucratic way to get IT done; you may want to make good backups and trust your users, or switch to a web app. If you do trust them tolerably well, then restrict them to the particular database employed by your app.
1
0
0
Secure MySQL login data in a Python client program
2
python,mysql,pymysql
0
2015-01-02T13:31:00.000
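A sketch of the grant scheme described above; the database, table, procedure and account names are made up. The client account can read and call procedures, and every write is funnelled through a procedure it has EXECUTE on.

    -- limited account handed to the desktop clients
    CREATE USER 'app_client'@'%' IDENTIFIED BY 'client_password';
    GRANT SELECT ON appdb.* TO 'app_client'@'%';

    -- writes happen only through vetted procedures
    DELIMITER //
    CREATE PROCEDURE appdb.add_entry(IN p_name VARCHAR(100))
    BEGIN
        INSERT INTO appdb.entries (name) VALUES (p_name);
    END //
    DELIMITER ;

    GRANT EXECUTE ON PROCEDURE appdb.add_entry TO 'app_client'@'%';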
I am connecting to MySQL database using torndb in Python. Is there a way to switch between databases after connection is established? Like the select_db method?
0
0
0
0
false
27,760,934
0
106
2
0
0
27,760,817
This will ultimately be decided by whether the database is running on the same host and in the same instance of MySQL. If it is running in the same instance, you should be able to prefix your table names with the database name. For example: "select splat from foo.bar where splat is not null", where foo is the database name and bar is the table name. Hope this helps!
1
0
0
Torndb - Switch from one database to another
2
python,tornado
0
2015-01-03T23:45:00.000
I am connecting to MySQL database using torndb in Python. Is there a way to switch between databases after connection is established? Like the select_db method?
0
2
0.197375
0
false
27,765,801
0
106
2
0
0
27,760,817
Switch db: conn.execute('use anotherDBName');
1
0
0
Torndb - Switch from one database to another
2
python,tornado
0
2015-01-03T23:45:00.000
I'm running Django with Postgres database. On top of application-level security checks, I'm considering adding database-level restrictions. E.g. the application code should only be able to INSERT into log tables, and not UPDATE or DELETE from them. I would manually create database user with appropriate grants for this. I would also need a more powerful user for running database migrations. My question is, do people practice things like this? Any advice, best practices on using restricted database users with Django? Edit: To clarify, there's no technical problem, I'm just interested to hear other people's experiences and takeaways. One Django-specific thing is, I'll need at least two DB users: for normal operation and for running migrations. Where do I store credentials for the more privileged user? Maybe make manage.py migrate prompt for password? As for the reasoning, suppose my app has a SQL injection vulnerability. With privileged user, the attacker can do things like drop all tables. With a more limited user there's slightly less damage potential and afterwards there's some evidence in insert-only log tables.
2
0
0
0
false
27,964,212
1
1,088
2
0
0
27,819,930
Yes, this is practiced sometimes, but not commonly. The best way to do it is to grant specific privileges to the database user, not in Django. Making such restrictions means that we do not have to trust the application, because it might change files or data in the DB in ways we do not expect. So, to sum up: create one user able to create/modify data, and use another, restricted one for normal operation (a sketch of the grants follows this record). It's also quite common in companies to create one user to insert data and another one for employees/scripts to access it.
1
0
0
Restricted database user for Django
3
python,django,database,postgresql,security
0
2015-01-07T12:50:00.000
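A PostgreSQL sketch of the two-account split; role, table and database names are placeholders. The runtime role gets ordinary DML everywhere but loses UPDATE/DELETE on the insert-only log table, while a separate role is kept for migrations (in a real setup the migration role would own the schema objects).

    -- runtime role used by the application servers
    CREATE ROLE app_runtime LOGIN PASSWORD 'runtime_password';
    GRANT CONNECT ON DATABASE myapp TO app_runtime;
    GRANT USAGE ON SCHEMA public TO app_runtime;
    GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO app_runtime;
    GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA public TO app_runtime;
    REVOKE UPDATE, DELETE ON app_auditlog FROM app_runtime;   -- insert-only log table

    -- privileged role used only by manage.py migrate
    CREATE ROLE app_migrate LOGIN PASSWORD 'migrate_password';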
I'm running Django with Postgres database. On top of application-level security checks, I'm considering adding database-level restrictions. E.g. the application code should only be able to INSERT into log tables, and not UPDATE or DELETE from them. I would manually create database user with appropriate grants for this. I would also need a more powerful user for running database migrations. My question is, do people practice things like this? Any advice, best practices on using restricted database users with Django? Edit: To clarify, there's no technical problem, I'm just interested to hear other people's experiences and takeaways. One Django-specific thing is, I'll need at least two DB users: for normal operation and for running migrations. Where do I store credentials for the more privileged user? Maybe make manage.py migrate prompt for password? As for the reasoning, suppose my app has a SQL injection vulnerability. With privileged user, the attacker can do things like drop all tables. With a more limited user there's slightly less damage potential and afterwards there's some evidence in insert-only log tables.
2
1
0.066568
0
false
27,972,123
1
1,088
2
0
0
27,819,930
For storing the credentials of the privileged user for management commands: when running manage.py you can use the --settings flag, pointing it at another settings file that holds the other database credentials (a sketch follows this record). Example migrate command using the new settings file: python manage.py migrate --settings=myapp.privileged_settings
1
0
0
Restricted database user for Django
3
python,django,database,postgresql,security
0
2015-01-07T12:50:00.000
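A possible shape for that extra settings module, keeping the privileged password out of the repository by reading it from the environment; the module and role names are illustrative.

    # myapp/privileged_settings.py
    import os
    from myapp.settings import *                    # start from the normal settings

    DATABASES['default']['USER'] = 'app_migrate'    # the privileged role
    DATABASES['default']['PASSWORD'] = os.environ['MIGRATE_DB_PASSWORD']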
I've gone through many threads related to installing mysql-python in a virtualenv, including those specific to users of Percona. None have solved my problem thus far. With Percona, it is normal to get a long error on pip install MySQL-python in the virtualenv that ultimately says EnvironmentError: mysql_config not found. One method to remedy this is yum install mysql-devel, which I've done. I can actually get mysql-python to install properly outside of the virtualenv via yum. I'm getting the error in the virtualenv only - it uses Python 2.7.9, wheareas 2.6.6 is what comes with Centos. Also, with MySQL-python installed via yum it will import to the OS's python interpreter, but will not import into the virtualenv's python interpreter. To clarify, I only installed mysql-python via yum to see whether or not it would work that way. I would prefer it be by pip, in the environment only. What am I missing here? As far as I'm aware it should work - considering it will work outside of virtualenv.
1
1
1.2
0
true
27,829,817
0
110
1
0
0
27,828,737
Found the solution! I think it was improper of my to install mysql-devel in the first place, so I went ahead and uninstalled it. Instead, I used a packaged supplied by Percona - Percona-Server-devel-55 yum install Percona-Server-devel-55 and the problem is solved!
1
0
0
Unable to get these to cooperate: mysql-python + virtualenv + percona + centos6
1
python-2.7,virtualenv,mysql-python,centos6,percona
0
2015-01-07T21:11:00.000
I have a python script set up that captures game data from users while the game is being played. The end goal of this is to get all that data, from every user, into a postgresql database on my web server where it can all be collated and displayed via django The way I see it, I have 2 options to accomplish this: While the python script is running, I can directly open a connection to the db and upload to it in real time During the game session, instead of uploading to the db directly, I can save out a csv file to their computer and have a separate app that will find these log files and upload them to the db at a later point I like (1) because it means these log files cannot be tampered with by the user as it is going straight to the db - therefore we can prevent forgery and ensure valid data. I like (2) because the initial python script is something that every user would have on their computer, which means they can open it at will (it must be this way for it to work with the game). In other words, if I went with (1) users would be exposed to the user/pass details for connecting to the db which is not secure. With (2) the app can just be an exe where you cant see the source code and cant see the db login details My questions: So in one case I'd be exposing login details, in the other I'd be risking end users tampering with csv files before uploading. Is there a method that could combine the pros of the 2 methods without having to deal with the cons? At the very least, if I had to choose either of these 2 methods, whats the best way to get around its downfall? So is it possible to prevent exposing db credentials in a publicly available python script? And if I have to save out csv files, is there a way to prevent tampering or checking if it has been tampered with?
0
1
0.099668
0
false
27,885,848
1
70
1
0
0
27,885,733
Have the script make a POST request to your Django web server, either with a login/password or a unique per-user token; the web server validates the credentials and inserts the data into the DB (a client-side sketch follows this record).
1
0
0
Clients uploading to database
2
python,postgresql,security,csv
0
2015-01-11T09:46:00.000
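A client-side sketch of that POST, using a per-user token instead of shipping database credentials with the script; the endpoint URL, token scheme and payload fields are all placeholders for whatever the Django view actually expects.

    import requests

    API_URL = "https://example.com/api/sessions/"        # hypothetical endpoint
    API_TOKEN = "token-issued-to-this-player"            # hypothetical credential

    payload = {"player": "some_id",
               "score": 1234,
               "events": [{"t": 0.0, "action": "spawn"}]}

    resp = requests.post(API_URL, json=payload,
                         headers={"Authorization": "Token " + API_TOKEN},
                         timeout=10)
    resp.raise_for_status()        # forged or invalid submissions get rejected server-side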
I need to back up the current DB while logged into Odoo. I should be able to do it using a button: when I click the button, it should work the same way as Odoo's default backup in "Manage Databases", but from within the application while logged in. Is there any way to achieve this? I do know that this is possible from outside Odoo using bash, but that's not what I want.
13
1
0.022219
0
false
28,070,202
1
4,840
1
0
0
27,935,745
You can use a private browser session to access the Database menu from the login screen and perform the backup from there (you need to know the master password to access that, defined in the server configuration file).
1
0
0
Backup Odoo db from within odoo
9
python,openerp,odoo
0
2015-01-14T04:11:00.000
Python 2.7 and 3.4 coexist on my Mac (OS X). After installing the official MySQL connector (downloaded from dev.mysql.com), import mysql.connector only succeeds in Python 2.7. Is there any way for the connector to work with both Python versions?
0
0
0
0
false
47,486,286
0
569
1
0
0
28,039,131
Relative to Python 3.6: the official MySQL connector only worked in Python 2.7 after I installed it on OS X. As an alternative I used the easy_install-3.6 tool that ships with Python 3.6. Go to the directory /Library/Frameworks/Python.framework/Versions/3.6/bin and run: easy_install-3.6 mysql-connector-python
1
0
0
mysql-python connector work with python2.7 and 3.4 at the same time
2
python,mysql,mysql-python,mysql-connector,python-3.4
0
2015-01-20T06:36:00.000
I want to build a universal database in which I will keep data from multiple countries, so I will need to work with the Unicode charset. I need a little help figuring out the best way to handle this and how my queries will be affected (some example SQL queries from PHP/Python for basic operations like insert/update/select would also be great). Thank you.
0
1
0.099668
0
false
28,096,917
0
1,733
2
0
0
28,096,856
Just put an N in front of the string literal, something like INSERT INTO MYTABLE VALUES(N'xxx'), and make sure your column type is nvarchar.
1
0
0
How to insert UNICODE characters to SQL db?
2
php,python,sql,unicode
1
2015-01-22T19:13:00.000
I want to build a universal database in which I will keep data from multiple countries, so I will need to work with the Unicode charset. I need a little help figuring out the best way to handle this and how my queries will be affected (some example SQL queries from PHP/Python for basic operations like insert/update/select would also be great). Thank you.
0
0
0
0
false
28,096,868
0
1,733
2
0
0
28,096,856
There is nothing special you need to do; with PHP you can run query("SET NAMES utf8"); after connecting.
1
0
0
How to insert UNICODE characters to SQL db?
2
php,python,sql,unicode
1
2015-01-22T19:13:00.000
I installed PyCharm 4 on my Mac (Yosemite), then installed SQLAlchemy through easy_install in the console; I also already have the official Python 2.7.9 IDLE. I tried to import the SQLAlchemy module in the official IDLE and it works, but in the PyCharm 4 IDE it doesn't. How can I fix this error? Traceback (most recent call last): File "/Users/artyom/PycharmProjects/untitled/hella.py", line 1, in import sqlalchemy ImportError: No module named sqlalchemy
0
0
1.2
0
true
28,123,162
0
664
1
0
0
28,121,229
Go into Settings -> Project Settings -> Project Interpreter. Then press configure interpreter, and navigate to the "Paths" tab. Press the + button in the Paths area. You can put the path to the module you'd like it to recognize.
1
0
0
SQLAlchemy with Pycharm 4
1
macos,sqlalchemy,pycharm,osx-yosemite,python-2.x
0
2015-01-24T01:10:00.000
Many spreadsheets have formulas and formatting that Python tools for reading and writing Excel files cannot faithfully reproduce. That means that any file I want to create programmatically must be something I basically create from scratch, and then other Excel files (with the aforementioned sophistication) have to refer to that file (which creates a variety of other dependency issues). My understanding of Excel file 'tabs' is that they're actually just a collection of XML files. Well, is it possible to use pandas (or one of the underlying read/write engines such as xlsxwriter or openpyxl to modify just one of the tabs, leaving other tabs (with more wicked stuff in there) intact? EDIT: I'll try to further articulate the problem with an example. Excel Sheet test.xlsx has four tabs (aka worksheets): Sheet1, Sheet2, Sheet3, Sheet4 I read Sheet3 into a DataFrame (let's call it df) using pandas.read_excel() Sheet1 and Sheet2 contain formulas, graphs, and various formatting that neither openpyxl nor xlrd can successfully parse, and Sheet4 contains other data. I don't want to touch those tabs at all. Sheet2 actually has some references to cells on Sheet3 I make some edits to df and now want to write it back to sheet3, leaving the other sheets untouched (and the references to it from other worksheets in the workbook intact) Can I do that and, if so, how?
26
6
1
0
false
28,254,411
0
32,281
1
0
0
28,142,420
I'm 90% confident the answer to "can pandas do this" is no. Posting a negative is tough, because there always might be something clever that I've missed, but here's a case: Possible interface engines are xlrd/xlwt/xlutils, openpyxl, and xlsxwriter. None will work for your purposes, as xlrd/wt don't support all formulae, xlsxwriter can't modify existing xlsx files, and openpyxl loses images and charts. Since I often need to do this, I've taken to only writing simple output to a separate file and then calling the win32api directly to copy the data between the workbooks while preserving all of my colleague's shiny figures. It's annoying, because it means I have to do it under Windows instead of *nix, but it works. If you're working under Windows, you could do something similar. (I wonder if it makes sense to add a native insert option using this approach to help people in this situation, or if we should simply post a recipe.) P.S.: This very problem has annoyed me enough from time to time that I've thought of learning enough of the modern Excel format to add support for this to one of the libraries. P.P.S.: But since ignoring things you're not handling and returning them unmodified seems easy enough, the fact that no one seems to support it makes me think there are some headaches, and where Redmond's involved I'm willing to believe it. @john-machin would know the details, if he's about..
1
0
0
Can Pandas read and modify a single Excel file worksheet (tab) without modifying the rest of the file?
6
python,excel,pandas
0
2015-01-25T22:38:00.000
Recently I've been receiving this error regarding what appears to be an insufficiency in connection slots along with many of these Heroku errors: H18 - Request Interrupted H19 - Backend connection timeout H13 - Connection closed without response H12 - Request timeout Error django.db.utils.OperationalError in / FATAL: remaining connection slots are reserved for non-replication superuser connections Current Application setup: Django 1.7.4 Postgres Heroku (2x 2 dynos, Standard-2) 5ms response time, 13rpm Throughput Are there general good practices for where one should or should not perform querysets in a Django application, or when to close a database connection? I've never experienced this error before. I have increased my dynos on heroku and allocated significantly more RAM and I am still experiencing the same issue. I've found similar questions on Stack Overflow but I haven't been able to figure out what might be causing the issue exactly. I have querysets in Model methods, views, decorator views, context processors. My first inclination would be that there is an inefficient queryset being performed somewhere causing connections to remain open that eventually crashes the application with enough people accessing the website. Any help is appreciated. Thanks.
2
4
1.2
0
true
28,395,905
1
3,434
1
0
0
28,238,144
I realized that I was using the Django development server in my Procfile: I had accidentally commented out the gunicorn line and committed that to Heroku. Once I switched back to gunicorn on the same Heroku plan the issue was resolved (a Procfile sketch follows this record). Using a production-level application server really makes a big difference. Also, don't code at crazy hours of the day when you're prone to errors.
1
0
0
Django/Postgres: FATAL: remaining connection slots are reserved for non-replication superuser connections
1
python,django,postgresql,heroku,django-queryset
0
2015-01-30T14:35:00.000
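For reference, a typical Procfile line for this kind of setup; the WSGI module name is a placeholder, and gunicorn itself must also be listed in requirements.txt.

    web: gunicorn myproject.wsgi --workers 3 --log-file -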
I've written a python/webdriver script that scrapes a table online, dumps it into a list and then exports it to a CSV. It does this daily. When I open the CSV in Excel, it is unformatted, and there are fifteen (comma-delimited) columns of data in each row of column A. Of course, I then run 'Text to Columns' and get everything in order. It looks and works great. But tomorrow, when I run the script and open the CSV, I've got to reformat it. Here is my question: "How can I open this CSV file with the data already spread across the columns in Excel?"
1
0
0
1
false
28,238,935
0
24
1
0
0
28,238,830
Try importing it as a CSV file in Excel, instead of opening it directly.
1
0
0
Retain Excel Settings When Adding New CSV
1
python,excel,csv
0
2015-01-30T15:11:00.000
Most of the Flask tutorials and examples I see use an ORM such as SQLAlchemy to handle interfacing with the user database. If you have a general working knowledge of SQL, is this extra level of abstraction, heavy with features, necessary? I am tempted to write a lightweight interface/ORM of my own so I better understand exactly what's going on and have full control over the queries, inserts, etc. But are there pitfalls to this approach that I am not considering that may crop up as the project gets more complex, making me wish I used a heavier ORM like SQLAlchemy?
2
0
0
0
false
28,280,443
1
970
1
0
0
28,271,711
No, an ORM is not required, just incredibly convenient. SQLAlchemy will manage connections, pooling, sessions/transactions, and a wide variety of other things for you. It abstracts away the differences between database engines. It tracks relationships between tables in convenient collections. It generally makes working with complex data much easier. If you're concerned about performance, SQLAlchemy has two layers, the orm and the core. Dropping down to the core sacrifices some convenience for better performance. It won't be as fast as using the database driver directly, but it will be fast enough for most use cases. But no, you don't have to use it.
1
0
0
Handling user database in flask web app without ORM like SQLAlchemy
1
python,sql,orm,flask,sqlalchemy
0
2015-02-02T05:39:00.000
I save my data in a RethinkDB database. As long as I don't restart the server, all is well. But when I restart, it gives me an error saying the database doesn't exist, although the folder and data do exist in the rethinkdb_data folder. What is the problem?
7
10
1.2
0
true
28,330,153
0
897
1
0
0
28,329,352
You're almost certainly not losing data, you're just starting RethinkDB without pointing it to the data. Try the following: Start RethinkDB from the directory that contains the rethinkdb_data directory. Alternatively, pass the -d flag to RethinkDB to point it to the directory that contains rethinkdb_data. For example, rethinkdb -d /path/to/data/directory/rethinkdb_data
1
0
0
RethinkDB losing data after restarting server
1
python,ubuntu-14.04,rethinkdb,rethinkdb-python
0
2015-02-04T19:04:00.000
I'm trying to pip install pymssql in my Centos 6.6, but kept on experiencing this error: _mssql.c:314:22: error: sqlfront.h: No such file or directory cpp_helpers.h:34:19: error: sybdb.h: No such file or directory I already installed freetds, freetds-devel, and cython. Any ideas? Thanks in advance!
1
2
1.2
0
true
28,349,658
0
2,708
1
0
0
28,343,666
Looking at the full traceback, we see that include_dirs includes /usr/local/include but the header files are in /usr/include, which I imagine has to do with the fact that Python 2.7 is not the system Python. You can change the setup.py script to include /usr/include, or copy the files into /usr/local/include.
1
0
0
Installing pymssql in Centos 6.6 64-bit
1
python,python-2.7,pip,pymssql
0
2015-02-05T12:12:00.000
I'm building a web crawler and I want to save links in a database with information such as type, size, etc. I don't know when (how often) I should commit the database. In other words: is it a problem if I commit the database every 0.1 seconds?
0
0
1.2
0
true
28,400,155
0
67
1
0
0
28,400,064
In terms of logical correctness, you should commit every time a set of one or more queries that are supposed to execute atomically (i.e. all of them, or else none of them, execute) is finished. There is no connection between this logical correctness and any given amount of time between commits. In your vaguely sketched use case, I guess I'd be committing every time I'm done with a whole web page; what I want to avoid is committing a web page that's "partially done" but not completely so. Whether that means 100 ms, or 50, or 200, why should that duration matter? (A short sketch follows this record.)
1
0
0
Python sqlite3 correct use of commit
1
python,sqlite
0
2015-02-08T22:23:00.000
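A small sketch of "one transaction per page" with the sqlite3 module, where the connection's context manager commits on success and rolls back on error; the links table and its columns are made up for the crawler described in the question.

    import sqlite3

    conn = sqlite3.connect("crawler.db")

    def store_page(page_url, links):
        # Either every link row for this page lands, or none of them do.
        with conn:
            conn.executemany(
                "INSERT INTO links (page, url, size, type) VALUES (?, ?, ?, ?)",
                [(page_url, url, size, mime) for (url, size, mime) in links])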
I launched a Spark + Cassandra cluster with DataStax DSE in the AWS cloud, and my dataset is stored in S3. I don't know how to transfer the data from S3 into my Cassandra cluster. Please help me.
0
1
1.2
1
true
28,419,293
0
1,657
1
0
0
28,417,806
The details depend on your file format and C* data model but it might look something like this: Read the file from s3 into an RDD val rdd = sc.textFile("s3n://mybucket/path/filename.txt.gz") Manipulate the rdd Write the rdd to a cassandra table: rdd.saveToCassandra("test", "kv", SomeColumns("key", "value"))
1
0
0
How import dataset from S3 to cassandra?
2
python,cassandra,datastax-enterprise
0
2015-02-09T19:34:00.000
I've been looking for ways to do this and haven't found a good solution to this. I'm trying to copy a sheet in an .xlsx file that has macros to another workbook. I know I could do this if the sheet contained data in each cell but that's not the case. The sheet contains checkboxes and SOME text. Is there a way to do this in python (or any other language for that matter?) I just need it done programmatically as it will be part of a larger script.
0
0
0
0
false
28,946,313
0
194
1
0
0
28,423,604
Try the win32com package. It offers a VBA-like interface for Python; you can find it on SourceForge. I've done some projects with this package, and we can discuss your problem further if it helps.
1
0
0
Copy sheet with Macros to workbook in Python
1
python,xlsx,xlsxwriter
0
2015-02-10T03:32:00.000
Is it possible to connect a Flask app to a database using MySQLdb-python and vertica_python? It seems that the recommended Flask library for accessing databases is Flask-SQLAlchemy. I have an app that connects to MySQL and Vertica databases, and have written a GUI wrapper for it in Flask using flask-wtforms, but am just getting an error when I try to test a Vertica or MySQL connection through the flask app. Is there a reason that I cannot use the prior libraries that I was using within my app?
0
0
0
0
false
28,502,445
1
224
1
0
0
28,489,779
Yes, it is possible. I was having difficulties debugging because of the opacity of the error, but ran it with app.run(debug=True), and managed to troubleshoot my problem.
1
0
0
Can I connect a flask app to a database using MySQLdb-python vertica_python?
1
python,flask,flask-sqlalchemy
0
2015-02-12T23:21:00.000
Currently I am installing psycopg2 for work within eclipse with python. I am finding a lot of problems: The first problem sudo pip3.4 install psycopg2 is not working and it is showing the following message Error: pg_config executable not found. FIXED WITH:export PATH=/Library/PostgreSQL/9.4/bin/:"$PATH” When I import psycopg2 in my project i obtein: ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/psycopg2/_psycopg.so Library libssl.1.0.0.dylib Library libcrypto.1.0.0.dylib FIXED WITH: sudo ln -s /Library/PostgreSQL/9.4/lib/libssl.1.0.0.dylib /usr/lib sudo ln -s /Library/PostgreSQL/9.4/lib/libcrypto.1.0.0.dylib /usr/lib Now I am obtaining: ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/psycopg2/_psycopg.so, 2): Symbol not found: _lo_lseek64 Referenced from: /Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/psycopg2/_psycopg.so Expected in: /usr/lib/libpq.5.dylib in /Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/psycopg2/_psycopg.so Can you help me?
48
13
1
0
false
60,101,069
0
17,998
2
1
0
28,515,972
I was able to fix this on my Mac (running Catalina, 10.15.3) by using psycopg2-binary rather than psycopg2. pip3 uninstall psycopg2 pip3 install psycopg2-binary
1
0
0
Problems using psycopg2 on Mac OS (Yosemite)
8
python,eclipse,macos,postgresql,psycopg2
0
2015-02-14T13:12:00.000
Currently I am installing psycopg2 for work within eclipse with python. I am finding a lot of problems: The first problem sudo pip3.4 install psycopg2 is not working and it is showing the following message Error: pg_config executable not found. FIXED WITH:export PATH=/Library/PostgreSQL/9.4/bin/:"$PATH” When I import psycopg2 in my project i obtein: ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/psycopg2/_psycopg.so Library libssl.1.0.0.dylib Library libcrypto.1.0.0.dylib FIXED WITH: sudo ln -s /Library/PostgreSQL/9.4/lib/libssl.1.0.0.dylib /usr/lib sudo ln -s /Library/PostgreSQL/9.4/lib/libcrypto.1.0.0.dylib /usr/lib Now I am obtaining: ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/psycopg2/_psycopg.so, 2): Symbol not found: _lo_lseek64 Referenced from: /Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/psycopg2/_psycopg.so Expected in: /usr/lib/libpq.5.dylib in /Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/psycopg2/_psycopg.so Can you help me?
48
4
0.099668
0
false
28,949,608
0
17,998
2
1
0
28,515,972
I am using Yosemite, Postgres.app and Django. This got psycopg2 to load properly for me, but the one difference was that my libpq.5.dylib file is in /Applications/Postgres.app/Contents/Versions/9.4/lib, thus my second line was sudo ln -s /Applications/Postgres.app/Contents/Versions/9.4/lib/libpq.5.dylib /usr/lib
1
0
0
Problems using psycopg2 on Mac OS (Yosemite)
8
python,eclipse,macos,postgresql,psycopg2
0
2015-02-14T13:12:00.000
I have a python process serving as a WSGI-apache server. I have many copies of this process running on each of several machines. About 200 megabytes of my process is read-only python data. I would like to place these data in a memory-mapped segment so that the processes could share a single copy of those data. Best would be to be able to attach to those data so they could be actual python 2.7 data objects rather than parsing them out of something like pickle or DBM or SQLite. Does anyone have sample code or pointers to a project that has done this to share?
11
1
0.049958
0
false
30,273,392
0
1,046
1
0
0
28,570,438
One possibility is to create a C- or C++-extension that provides a Pythonic interface to your shared data. You could memory map 200MB of raw data, and then have the C- or C++-extension provide it to the WSGI-service. That is, you could have regular (unshared) python objects implemented in C, which fetch data from some kind of binary format in shared memory. I know this isn't exactly what you wanted, but this way the data would at least appear pythonic to the WSGI-app. However, if your data consists of many many very small objects, then it becomes important that even the "entrypoints" are located in the shared memory (otherwise they will waste too much memory). That is, you'd have to make sure that the PyObject* pointers that make up the interface to your data, actually themselves point to the shared memory. I.e, the python objects themselves would have to be in shared memory. As far as I can read the official docs, this isn't really supported. However, you could always try "handcrafting" python objects in shared memory, and see if it works. I'm guessing it would work, until the Python interpreter tries to free the memory. But in your case, it won't, since it's long-lived and read-only.
1
0
1
How to store easily python usable read-only data structures in shared memory
4
python,shared-memory,wsgi,uwsgi
0
2015-02-17T20:23:00.000
I have a file that is several G in size and contains a JSON hash on each line. The document itself is not a valid JSON document, however I have no control over the generation of this data so I cannot change it. The JSON needs to be read, lookups need to be performed on certain "fields" in the JSON and then the result of these lookups needs to be inserted into a MySQL database. At the moment, it is taking hours to process this file and I think that it is because I am inserting and commiting on each row instead of using executemany, however I'm struggling to work out how best to approach this because I need to do the lookups as part of the process and then insert into multiple tables. The process is effectively as follows: 1) Iterate over the file, reading each line as we go 2) For each line, work out if it needs to be inserted into the database 3) If the line does need to be inserted into the database, look up foreign keys for various JSON fields and replace them with the FK id 4) Insert the "new" line into the database. The issue comes at (3) as there are some cases where the FK id is created by an insert of a subset of the data. In short, I need to do a mass insert of a nested data structure with various parts of the nested data needing to be inserted into different tables whilst maintaining referential integrity. Thanks for all and any help, Matt
1
1
1.2
0
true
34,722,784
0
76
1
0
0
28,579,257
1) Filter out the lines you can ignore. 2) Work out your table dependency graph and partition rows into multiple files by table. 3) Insert all rows for tables without dependencies; optionally, cache these so you don't have to ask the DB for lookups of values you just inserted. N) Use that cache, plus any DB lookups required, to insert rows that depend on rows inserted in step N-1. Do all this as multiple processes so you can verify each stage. Use bulk inserts and consider disabling FK verification. (A rough sketch follows this record.)
1
0
1
Mass insert of data with intermediate lookups using Python and MySQL
1
python,mysql,json,database
0
2015-02-18T08:42:00.000
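A rough sketch of the filtering, FK-lookup caching and batched inserts described above, using MySQLdb; the table and field names, connection details and the wanted() filter predicate are placeholders for the real schema and rules.

    import json
    import MySQLdb

    conn = MySQLdb.connect(host="localhost", user="loader", passwd="pw", db="target")
    cur = conn.cursor()

    fk_cache = {}                      # lookup value -> id, so the DB is asked only once

    def fk_id(value):
        if value not in fk_cache:
            cur.execute("INSERT IGNORE INTO lookups (name) VALUES (%s)", (value,))
            cur.execute("SELECT id FROM lookups WHERE name = %s", (value,))
            fk_cache[value] = cur.fetchone()[0]
        return fk_cache[value]

    batch = []
    with open("dump.jsonl") as fh:
        for line in fh:
            rec = json.loads(line)
            if not wanted(rec):        # skip lines that don't need inserting
                continue
            batch.append((rec["key"], fk_id(rec["field"]), rec["value"]))
            if len(batch) >= 10000:
                cur.executemany(
                    "INSERT INTO main_table (k, field_id, v) VALUES (%s, %s, %s)", batch)
                conn.commit()
                batch = []
    if batch:
        cur.executemany("INSERT INTO main_table (k, field_id, v) VALUES (%s, %s, %s)", batch)
        conn.commit()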
Query results from some Postgres data types are converted to native types by psycopg2. Neither pgdb (PostgreSQL) and cx_Oracle seem to do this. …so my attempt to switch pgdb out for psycopg2cffi is proving difficult, as there is a fair bit of code expecting strings, and I need to continue to support cx_Oracle. The psycopg2 docs explain how to register additional types for conversion, but I'd actually like to remove that conversion if possible and get the strings as provided by Postgres. Is that doable?
2
3
1.2
0
true
28,602,221
0
274
1
0
0
28,597,575
You can re-register a plain string type caster for every single PostgreSQL type (or at least for every type you expect a string for in your code): when you register a type caster for an already-registered OID, the new definition takes precedence. Just have a look at the source code of psycopg (both C and Python) to find the correct OIDs (a sketch follows this record). You can also compile your own version of psycopg with type casting disabled; I don't have the source code here right now, but it is probably just a couple of lines' change.
1
0
0
Can one disable conversion to native types when using psycopg2?
1
postgresql,psycopg2,python-db-api
0
2015-02-19T02:11:00.000
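A sketch of the re-registration trick: an identity caster registered for the OIDs whose values should stay as strings. The OIDs below are the usual ones for date, timestamp, timestamptz and numeric, but verify them against pg_type on your server; the same extensions API is available in psycopg2cffi.

    import psycopg2
    import psycopg2.extensions as ext

    def passthrough(value, cur):
        return value                     # hand back the raw string from the wire

    STRINGY = ext.new_type((1082, 1114, 1184, 1700), "STRINGY", passthrough)
    ext.register_type(STRINGY)           # overrides the default casters for those OIDs

    conn = psycopg2.connect("dbname=test")
    cur = conn.cursor()
    cur.execute("SELECT now()::timestamp, 1.5::numeric")
    print(cur.fetchall())                # values come back as plain strings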
I need to process a lot of .xls files which come out of this Microscopy image analysis software called Aperio (after analysis with Aperio, it allows you to export the data as "read-only" xls format. The save-as only works in Excel on a Mac, on windows machine, the save and save as buttons are greyed out since the files are protected). Unfortunately, the header of these files are not standard OLE2 format. Therefore, they cannot be picked up with Java API POI unless they are manually loaded in Microsoft Excel and save as .xls one by one. Since there are so many of them in the directory, it would be pretty painful to do the save-as by hand. Is there a way to write a Java program to automatically save these files as standard xls files? If it is impossible for Java, what other language can handle this situation, Python? Edit: I loaded one of the files in hex reader and here it is: 09 04 06 00 07 00 10 00 00 00 5C 00 04 00 05 4D 44 41 80 00 08 00 00 00 00 00 00 00 00 00 92 00 19 00 06 00 00 00 00 00 F0 F0 F0 00 00 00 00 00 FF FF FF 00 00 00 00 00 FF FF FF 0C 00 02 00 01 00 0D 00 02 00 64 00 0E 00 02 00 01 00 0F 00 02 00 01 00 11 00 02 00 00 00 22 00 02 00 00 00 2A 00 02 00 00 00 2B 00 02 00 00 00 25 02 04 00 00 00 FF 00 1F 00 02 00 22 00 1E 04 0A 00 00 00 07 47 65 6E 65 72 61 6C 1E 04 04 00 00 00 01 30 1E 04 07 00 00 00 04 30 2E 30 30 1E 04 08 00 00 00 05 23 2C 23 23 30 1E 04 0B 00 00 00 08 23 2C 23 23 30 2E 30 30 1E 04 18 00 00 00 15 23 2C 23 23 30 5F F0 5F 2E 3B 5C 2D 23 2C 23 23 30 5F F0 5F 2E 1E 04 1D 00 00 00 1A 23 2C 23 23 30 5F F0 5F 2E 3B 5B 52 65 64 5D 5C 2D 23 2C 23 23 30 5F F0 5F 2E 1E 04 1E 00 00 00 1B 23 2C 23 23 30 2E 30 30 5F F0 5F 2E 3B 5C 2D 23 2C 23 23 30 2E 30 30 5F F0 5F 2E 1E 04 23 00 00 00 20 23 2C 23 23 30 2E 30 30 5F F0 5F 2E 3B 5B 52 65 64 5D 5C 2D 23 2C 23 23 30 2E 30 30 5F F0 5F 2E 1E 04 18 00 00 00 15 23 2C 23 23 30 22 F0 2E 22 3B 5C 2D 23 2C 23 23 30 22 F0 2E 22 1E 04 1D 00 00 00 1A 23 2C 23 23 30 22 F0 2E 22 3B 5B 52 65 64 5D 5C 2D 23 2C 23 23 30 22 F0 2E 22 1E 04 1E 00 00 00 1B 23 2C 23 23 30 2E 30 30 22 F0 2E 22 3B 5C 2D 23 2C 23 23 30 2E 30 30 22 F0 2E 22 1E 04 23 00 00 00 20 23 2C 23 23 30 2E 30 30 22 F0 2E 22 3B 5B 52 65 64 5D 5C 2D 23 2C 23 23 30 2E 30 30 22 F0 2E 22 1E 04 05 00 00 00 02 30 25 1E 04 08 00 00 00 05 30 2E 30 30 25 1E 04 0B 00 00 00 08 30 2E 30 30 45 2B 30 30 1E 04 0A 00 00 00 07 23 22 20 22 3F 2F 3F 1E 04 09 00 00 00 06 23 22 20 22 3F 3F 1E 04 0D 00 00 00 0A 64 64 2F 6D 6D 2F 79 79 79 79 1E 04 0C 00 00 00 09 64 64 2F 6D 6D 6D 2F 79 79 1E 04 09 00 00 00 06 64 64 2F 6D 6D 6D 1E 04 09 00 00 00 06 6D 6D 6D 2F 79 79 1E 04 0E 00 00 00 0B 68 3A 6D 6D 5C 20 41 4D 2F 50 4D 1E 04 11 00 00 00 0E 68 3A 6D 6D 3A 73 73 5C 20 41 4D 2F 50 4D 1E 04 07 00 00 00 04 68 3A 6D 6D 1E 04 0A 00 00 00 07 68 3A 6D 6D 3A 73 73 1E 04 13 00 00 00 10 64 64 2F 6D 6D 2F 79 79 79 79 5C 20 68 3A 6D 6D 1E 04 0B 00 00 00 08 23 23 30 2E 30 45 2B 30 1E 04 08 00 00 00 05 6D 6D 3A 73 73 1E 04 04 00 00 00 01 40 1E 04 36 00 00 00 33 5F 2D 2A 20 23 2C 23 23 30 22 F0 2E 22 5F 2D 3B 5C 2D 2A 20 23 2C 23 23 30 22 F0 2E 22 5F 2D 3B 5F 2D 2A 20 22 2D 22 22 F0 2E 22 5F 2D 3B 5F 2D 40 5F 2D 1E 04 36 00 00 00 33 5F 2D 2A 20 23 2C 23 23 30 5F F0 5F 2E 5F 2D 3B 5C 2D 2A 20 23 2C 23 23 30 5F F0 5F 2E 5F 2D 3B 5F 2D 2A 20 22 2D 22 5F F0 5F 2E 5F 2D 3B 5F 2D 40 5F 2D 1E 04 3E 00 00 00 3B 5F 2D 2A 20 23 2C 23 23 30 2E 30 30 22 F0 2E 22 5F 2D 3B 5C 2D 2A 20 23 2C 23 23 30 2E 30 30 22 F0 2E 22 5F 2D 3B 5F 2D 2A 20 22 2D 22 3F 3F 22 F0 2E 22 5F 2D 3B 5F 2D 40 5F 2D 1E 04 3E 00 00 00 3B 5F 2D 2A 20 23 2C 
23 23 30 2E 30 30 5F F0 5F 2E 5F 2D 3B 5C 2D 2A 20 23 2C 23 23 30 2E 30 30 5F F0 5F 2E 5F 2D 3B 5F 2D 2A 20 22 2D 22 3F 3F 5F F0 5F 2E 5F 2D 3B 5F 2D 40 5F 2D 31 00 14 00 A0 00 00 00 08 00 0D 4D 53 20 53 61 6E 73 20 53 65 72 69 66 31 00 14 00 A0 00 00 00 0E 00 0D 4D 53 20 53 61 6E 73 20 53 65 72 69 66 31 00
0
1
0.066568
0
false
28,634,898
0
415
1
0
0
28,632,987
Use JODConverter. You have an Excel 4.0 file, which is too old for Apache POI. (A batch-conversion sketch follows below.)
1
0
0
How to program to save a bunch of ".xls" files in Excel
3
java,python,excel,apache-poi,poi-hssf
0
2015-02-20T15:51:00.000
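The answer above points at JODConverter, which drives an OpenOffice/LibreOffice instance to re-save the workbooks. If Java is not a hard requirement, the same batch conversion can be scripted from Python by calling LibreOffice in headless mode; the sketch below assumes the soffice binary is on PATH and that the exported files live in an aperio/ folder, both of which are placeholders rather than anything stated in the question.

# Sketch: let LibreOffice re-save each Aperio export as a regular .xls that
# standard tooling (e.g. Apache POI or xlrd) can read afterwards.
import glob
import subprocess

for path in glob.glob("aperio/*.xls"):          # assumed input folder
    subprocess.check_call([
        "soffice", "--headless",
        "--convert-to", "xls",                  # re-save in the standard format
        "--outdir", "converted",                # assumed output folder
        path,
    ])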
I could use some help. My python 3.4 Django 1.7.4 site worked fine using sqlite. Now I've moved it to Heroku which uses Postgres. And when I try to create a user / password i get this error: column "is_superuser" is of type integer but expression is of type boolean LINE 1: ...15-02-08 19:23:26.965870+00:00', "is_superuser" = false, "us... ^ HINT: You will need to rewrite or cast the expression. The last function call in the stack trace is: /app/.heroku/python/lib/python3.4/site-packages/django/db/backends/utils.py in execute return self.cursor.execute(sql, params) ... ▶ Local vars I don't have access to the base django code, just the code on my app. So any help getting this to work would be really helpful.
0
0
0
0
false
28,636,553
1
2,600
2
0
0
28,636,141
It seems to me that you are using raw SQL queries instead of Django ORM calls, and this causes portability issues when you switch database engines. I'd strongly suggest using the ORM if it's possible in your case (see the sketch below). If not, then you need to detect the database engine on your own and construct queries depending on the current engine; in that case you could try using 0 instead of false, which I guess should work on both SQLite and Postgres.
1
0
0
column "is_superuser" is of type integer but expression is of type boolean DJANGO Error
3
django,python-3.x,heroku,heroku-postgres
0
2015-02-20T18:50:00.000
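As a rough illustration of the ORM suggestion above, here is a minimal sketch that lets Django render the boolean columns for whichever backend is configured; the username and password values are placeholders.

# Sketch: create the user through the ORM so the backend decides how
# True/False is written, instead of hand-coding "= false" in raw SQL.
from django.contrib.auth.models import User

user = User.objects.create_user(username="alice", password="s3cret")  # placeholder values
user.is_superuser = True   # plain model field; no SQL boolean literal needed
user.is_staff = True
user.save()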
I could use some help. My python 3.4 Django 1.7.4 site worked fine using sqlite. Now I've moved it to Heroku which uses Postgres. And when I try to create a user / password i get this error: column "is_superuser" is of type integer but expression is of type boolean LINE 1: ...15-02-08 19:23:26.965870+00:00', "is_superuser" = false, "us... ^ HINT: You will need to rewrite or cast the expression. The last function call in the stack trace is: /app/.heroku/python/lib/python3.4/site-packages/django/db/backends/utils.py in execute return self.cursor.execute(sql, params) ... ▶ Local vars I don't have access to the base django code, just the code on my app. So any help getting this to work would be really helpful.
0
-1
-0.066568
0
false
28,638,965
1
2,600
2
0
0
28,636,141
The problem is caused by a field changing data types (e.g. from a char field to a date-time) across the migration files. A database like PostgreSQL might not know how to convert the column type. So make sure the field has the same type in all migrations (see the sketch below).
1
0
0
column "is_superuser" is of type integer but expression is of type boolean DJANGO Error
3
django,python-3.x,heroku,heroku-postgres
0
2015-02-20T18:50:00.000
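If a column has already drifted to the wrong type, one way to bring it back in line with the answer above is an explicit AlterField migration; this is only a sketch, and the app, model and field names are invented for illustration.

# Sketch: pin is_superuser to a BooleanField so every backend gets a real
# boolean column (names below are hypothetical).
from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ("accounts", "0001_initial"),   # hypothetical earlier migration
    ]

    operations = [
        migrations.AlterField(
            model_name="profile",
            name="is_superuser",
            field=models.BooleanField(default=False),
        ),
    ]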
I have a large dataset that I do not have direct access to and am trying to convert the data headers into column headings using Python and then returning it back to Excel. I have created the function to do this and it works but I have hit a snag. What I want the Excel VBA to do is loop down the range and if the cell's value matches the criteria call the Python function and return the resulting list items in the columns moving across from the original cell. For example: A1 holds the string to format, the functions returns B1, C1, D1, and so on. I can only get this to work if I hard code B1, C1, D1, etc. Is there a way to do this via the get_address() range method? I think I can then use the offset() method but am not sure.
0
0
0
0
false
28,684,225
0
5,808
1
0
0
28,663,658
Thanks for your help. I've got this to work now and I'm super excited about the future possibilities for Python, xlwings and Excel. My problem was simple once I got the looping through the range sorted (which, incidentally, was handily imported as one element per row rather than one per cell). I had declared my list outside of the function, so it was not reset each time the condition was met, and it was very frustrating watching my cells fill with the same values time after time. Simple once you know how :) (A small offset-based sketch follows below.)
1
0
0
xlwings output to iterative cell range
2
python,excel,python-3.x,xlwings,vba
0
2015-02-22T21:44:00.000
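For readers looking for the offset-based loop discussed above, here is a minimal xlwings sketch; the workbook name, the matching criterion and the parse_header() helper are all assumptions standing in for the poster's own code.

# Sketch: walk down column A and, where the cell matches, write the parsed
# fields into the cells to its right via Range.offset.
import xlwings as xw


def parse_header(text):
    # placeholder for the poster's own formatting function
    return str(text).split(";")


wb = xw.Book("report.xlsx")                 # hypothetical workbook
sht = wb.sheets[0]

for row in range(1, 101):                   # adjust the row span as needed
    cell = sht.range((row, 1))              # column A
    value = cell.value
    if value and str(value).startswith("Header:"):   # assumed criterion
        fields = parse_header(value)
        # A 1-D list written next to the cell fills across B, C, D, ...
        cell.offset(0, 1).value = fields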
I want to create the tables of one database called "database1.sqlite", so I run the command: python manage.py syncdb but when I execute the command I receive the following error: Unknown command: 'syncdb' Type 'manage.py help' for usage. But when I run manage.py help I don`t see any command suspicious to substitute python manage.py syncdb Version of Python I use: 3.4.2 Version of Django I use:1.9 I would be very grateful if somebody could help me to solve this issue. Regards and thanks in advance
32
8
1
0
false
35,020,640
1
73,141
6
0
0
28,685,931
Django 1.9 has removed "syncdb"; run "python manage.py migrate" instead. If you are trying to create a superuser, run "python manage.py createsuperuser".
1
0
0
"Unknown command syncdb" running "python manage.py syncdb"
10
django,sqlite,python-3.x,django-1.9
0
2015-02-24T00:01:00.000
I want to create the tables of one database called "database1.sqlite", so I run the command: python manage.py syncdb but when I execute the command I receive the following error: Unknown command: 'syncdb' Type 'manage.py help' for usage. But when I run manage.py help I don`t see any command suspicious to substitute python manage.py syncdb Version of Python I use: 3.4.2 Version of Django I use:1.9 I would be very grateful if somebody could help me to solve this issue. Regards and thanks in advance
32
0
0
0
false
34,814,438
1
73,141
6
0
0
28,685,931
You can run the command from the project folder as "python.exe manage.py migrate", from a command line or in a batch file. You could also downgrade Django to an older version (before 1.9) if you really need syncdb. For people trying to run Syncdb from Visual Studio 2015: the syncdb option was removed in Django 1.9 (deprecated since 1.7), but the context menu of VS2015 has not yet been updated to reflect this. Also, in case you didn't get asked to create a superuser, you should manually run this command to create one: python.exe manage.py createsuperuser
1
0
0
"Unknown command syncdb" running "python manage.py syncdb"
10
django,sqlite,python-3.x,django-1.9
0
2015-02-24T00:01:00.000
I want to create the tables of one database called "database1.sqlite", so I run the command: python manage.py syncdb but when I execute the command I receive the following error: Unknown command: 'syncdb' Type 'manage.py help' for usage. But when I run manage.py help I don`t see any command suspicious to substitute python manage.py syncdb Version of Python I use: 3.4.2 Version of Django I use:1.9 I would be very grateful if somebody could help me to solve this issue. Regards and thanks in advance
32
0
0
0
false
36,004,441
1
73,141
6
0
0
28,685,931
Run the command python manage.py makemigrations, and then python manage.py migrate to sync.
1
0
0
"Unknown command syncdb" running "python manage.py syncdb"
10
django,sqlite,python-3.x,django-1.9
0
2015-02-24T00:01:00.000
I want to create the tables of one database called "database1.sqlite", so I run the command: python manage.py syncdb but when I execute the command I receive the following error: Unknown command: 'syncdb' Type 'manage.py help' for usage. But when I run manage.py help I don`t see any command suspicious to substitute python manage.py syncdb Version of Python I use: 3.4.2 Version of Django I use:1.9 I would be very grateful if somebody could help me to solve this issue. Regards and thanks in advance
32
1
0.019997
0
false
42,688,208
1
73,141
6
0
0
28,685,931
Django has removed the python manage.py syncdb command; now you can simply use python manage.py makemigrations followed by python manage.py migrate. The database will sync automatically.
1
0
0
"Unknown command syncdb" running "python manage.py syncdb"
10
django,sqlite,python-3.x,django-1.9
0
2015-02-24T00:01:00.000
I want to create the tables of one database called "database1.sqlite", so I run the command: python manage.py syncdb but when I execute the command I receive the following error: Unknown command: 'syncdb' Type 'manage.py help' for usage. But when I run manage.py help I don`t see any command suspicious to substitute python manage.py syncdb Version of Python I use: 3.4.2 Version of Django I use:1.9 I would be very grateful if somebody could help me to solve this issue. Regards and thanks in advance
32
2
0.039979
0
false
42,795,652
1
73,141
6
0
0
28,685,931
From Django 1.9 onwards the syncdb command is removed. Use the migrate command instead, e.g. python manage.py migrate. Then you can run your server with the python manage.py runserver command.
1
0
0
"Unknown command syncdb" running "python manage.py syncdb"
10
django,sqlite,python-3.x,django-1.9
0
2015-02-24T00:01:00.000
I want to create the tables of one database called "database1.sqlite", so I run the command: python manage.py syncdb but when I execute the command I receive the following error: Unknown command: 'syncdb' Type 'manage.py help' for usage. But when I run manage.py help I don`t see any command suspicious to substitute python manage.py syncdb Version of Python I use: 3.4.2 Version of Django I use:1.9 I would be very grateful if somebody could help me to solve this issue. Regards and thanks in advance
32
0
0
0
false
43,525,717
1
73,141
6
0
0
28,685,931
Alternate way: uninstall the Django module from the environment, edit requirements.txt and specify Django<1.9, run the Install from Requirements option in the environment, and try syncdb again. This worked for me.
1
0
0
"Unknown command syncdb" running "python manage.py syncdb"
10
django,sqlite,python-3.x,django-1.9
0
2015-02-24T00:01:00.000
I have a CherryPy Webapp that I originally wrote using file based sessions. From time to time I store potentially large objects in the session, such as the results of running a report - I offer the option to download report results in a variety of formats, and I don't want to re-run the query when the user selects a download due to the potential of getting different data. While using file based sessions, this worked fine. Now I am looking at the potential of bringing a second server online, and as such I need to be able to share session data between the servers, for which it would appear that using the memchached session storage type is the most appropriate. I briefly looked at using a PostgreSQL storage type, but this option was VERY poorly documented, and from what I could find, may well be broken. So I implemented the memcached option. Now, however, I am running into a problem where, when I try to save certain objects to the session, I get an "AssertionError: Session data for id xxx not set". I'm assuming that this is due to the object size exceeding some arbitrary limit set in the CherryPy session backend or memcached, but I don't really know since the exception doesn't tell me WHY it wasn't set. I have increased the object size limit in memcached to the maximum of 128MB to see if that helped, but it didn't - and that's probably not a safe option anyway. So what's my solution here? Is there some way I can use the memcached session storage to store arbitrarily large objects? Do I need to "roll my own" DB based or the like solution for these objects? Is the problem potentially NOT size based? Or is there another option I am missing?
2
1
0.066568
0
false
28,705,996
0
1,009
2
0
0
28,705,661
Sounds like you want to store a reference (a key) in the session and pull the object itself back from Memcached when you need it, rather than relying on the session state to handle the loading/saving (see the sketch below).
1
0
0
CherryPy Sessions and large objects?
3
python,cherrypy
0
2015-02-24T20:30:00.000
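A rough sketch of the reference idea from the answer above: keep only a small key in the CherryPy session and park the heavy report payload under that key. The python-memcached client, the key naming and the one-hour lifetime are assumptions; note that memcached's per-item size limit still applies, so a very large report may be better parked on shared disk or in a database using the same key-in-session pattern.

# Sketch: the session stays tiny (just a key); the bulky report lives in
# memcached under that key.
import pickle
import uuid

import cherrypy
import memcache

mc = memcache.Client(["127.0.0.1:11211"])               # assumed memcached address


def store_report(report_rows):
    key = "report:" + uuid.uuid4().hex
    mc.set(key, pickle.dumps(report_rows), time=3600)   # assumed 1 h lifetime
    cherrypy.session["report_key"] = key


def load_report():
    key = cherrypy.session.get("report_key")
    blob = mc.get(key) if key else None
    return pickle.loads(blob) if blob is not None else None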
I have a CherryPy Webapp that I originally wrote using file based sessions. From time to time I store potentially large objects in the session, such as the results of running a report - I offer the option to download report results in a variety of formats, and I don't want to re-run the query when the user selects a download due to the potential of getting different data. While using file based sessions, this worked fine. Now I am looking at the potential of bringing a second server online, and as such I need to be able to share session data between the servers, for which it would appear that using the memchached session storage type is the most appropriate. I briefly looked at using a PostgreSQL storage type, but this option was VERY poorly documented, and from what I could find, may well be broken. So I implemented the memcached option. Now, however, I am running into a problem where, when I try to save certain objects to the session, I get an "AssertionError: Session data for id xxx not set". I'm assuming that this is due to the object size exceeding some arbitrary limit set in the CherryPy session backend or memcached, but I don't really know since the exception doesn't tell me WHY it wasn't set. I have increased the object size limit in memcached to the maximum of 128MB to see if that helped, but it didn't - and that's probably not a safe option anyway. So what's my solution here? Is there some way I can use the memcached session storage to store arbitrarily large objects? Do I need to "roll my own" DB based or the like solution for these objects? Is the problem potentially NOT size based? Or is there another option I am missing?
2
1
0.066568
0
false
28,717,896
0
1,009
2
0
0
28,705,661
From what you have explained I can conclude that conceptually it isn't a good idea to mix user sessions and a cache. What sessions are mostly designed for is holding the state of user identity. Thus they have security measures, locking to avoid concurrent changes, and other aspects. Also, session storage is usually volatile. So if you mean to use sessions as a cache you should understand how sessions really work and what the consequences are. What I suggest you do is establish normal caching of the domain model that produces the report data, and keep the session for identity. CherryPy details: the default CherryPy session implementation locks the session data. In the OLAP case your user likely won't be able to perform concurrent requests (open another tab, for instance) until the report is completed. There is, however, an option of manual locking management (a configuration sketch follows below). PostgreSQL session storage is broken and may be removed in upcoming releases. Memcached session storage doesn't implement distributed locking, so make sure you use a consistent rule to balance your users across your servers.
1
0
0
CherryPy Sessions and large objects?
3
python,cherrypy
0
2015-02-24T20:30:00.000
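For the manual locking option mentioned above, this is roughly what the explicit-locking configuration looks like in CherryPy; the handler body and the key value are placeholders, and the exact config keys should be checked against the CherryPy version in use.

# Sketch: explicit session locking, so a long-running report request does not
# hold the session lock for its whole duration.
import cherrypy

config = {
    "/": {
        "tools.sessions.on": True,
        "tools.sessions.storage_type": "memcached",
        "tools.sessions.locking": "explicit",
    }
}


class Reports(object):
    @cherrypy.expose
    def run(self):
        cherrypy.session.acquire_lock()
        try:
            cherrypy.session["report_key"] = "report:123"   # placeholder value
        finally:
            cherrypy.session.release_lock()
        return "report queued"


# cherrypy.quickstart(Reports(), "/", config)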
I'm making queries from a MS SQL server using Python code (Pymssql library) however I was wondering if there was any way to make the connection secure and encrypt the data being sent from the server to python? Thanks
4
0
0
0
false
38,181,077
0
6,365
1
0
0
28,724,427
If you want to connect to SQL Server over a secured connection using pymssql, you need to use the "secure" form of the host name, e.g. unsecured connection host: xxx.database.windows.net:1433; secured connection host: xxx.database.secure.windows.net:1443 (see the sketch below).
1
0
0
Can Pymssql have a secure connection (SSL) to MS SQL Server?
4
python,sql-server,pymssql
0
2015-02-25T16:32:00.000
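Put together as a connection call, the answer above looks roughly like the sketch below; the host, port, credentials and database name are placeholders, and whether the secure endpoint is available depends on the Azure SQL setup.

# Sketch: connecting through the "secure" endpoint form of the host name.
import pymssql

conn = pymssql.connect(
    server="xxx.database.secure.windows.net",   # placeholder host
    port=1443,                                  # port quoted in the answer
    user="myuser@xxx",                          # placeholder credentials
    password="mypassword",
    database="mydb",
)
cur = conn.cursor()
cur.execute("SELECT 1")
print(cur.fetchone())
conn.close()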
Python application, standard web app. If a particular request gets executed twice by error the second request will try to insert a row with an already existing primary key. What is the most sensible way to deal with it. a) Execute a query to check if the primary key already exists and do the checking and error handling in the python app b) Let the SQL engine reject the insertion with a constraint failure and use exception handling to handle it back in the app From a speed perspective it might seem that a failed request will take the same amount of time as a successful one, making b faster because its only one request and not two. However, when you take things in account like read-only db slaves and table write-locks and stuff like that things get fuzzy for my experience on scaling standard SQL databases.
1
1
0.099668
0
false
28,787,981
1
61
2
0
0
28,787,814
You need to do and handle the latter in any case, so I do not see much value in querying for duplicates, except to show the user information beforehand - e.g. reporting "This username has been taken already, please choose another" while the user is still filling in the form.
1
0
0
Let the SQL engine do the constraint check or execute a query to check the constraint beforehand
2
python,mysql,sql
0
2015-02-28T22:40:00.000
Python application, standard web app. If a particular request gets executed twice by error the second request will try to insert a row with an already existing primary key. What is the most sensible way to deal with it. a) Execute a query to check if the primary key already exists and do the checking and error handling in the python app b) Let the SQL engine reject the insertion with a constraint failure and use exception handling to handle it back in the app From a speed perspective it might seem that a failed request will take the same amount of time as a successful one, making b faster because its only one request and not two. However, when you take things in account like read-only db slaves and table write-locks and stuff like that things get fuzzy for my experience on scaling standard SQL databases.
1
2
1.2
0
true
28,788,000
1
61
2
0
0
28,787,814
The best option is (b), from almost any perspective. As mentioned in a comment, there is a multi-threading issue. That means that option (a) doesn't even protect data integrity. And that is a primary reason why you want data integrity checks inside the database, not outside it. There are other reasons. Consider performance. Passing data into and out of the database takes effort. There are multiple levels of protocol and data preparation, not to mention round trip, sequential communication from the database server. One call has one such unit of overhead. Two calls have two such units. It is true that under some circumstances, a failed query can have a long clean-up period. However, constraint checking for unique values is a single lookup in an index, which is both fast and has minimal overhead for cleaning up. The extra overhead for handling the error should be tiny in comparison to the overhead for running the queries from the application -- both are small, one is much smaller. If you had a query load where the inserts were really rare with respect to the comparison, then you might consider doing the check in the application. It is probably a tiny bit faster to check to see if something exists using a SELECT rather than using INSERT. However, unless your query load is many such checks for each actual insert, I would go with checking in the database and move on to other issues. (A minimal sketch of option (b) follows below.)
1
0
0
Let the SQL engine do the constraint check or execute a query to check the constraint beforehand
2
python,mysql,sql
0
2015-02-28T22:40:00.000
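A minimal sketch of option (b) with MySQLdb is shown below; the table, columns and connection details are made up, and the only assumption about the schema is that request_id carries the PRIMARY KEY (or UNIQUE) constraint.

# Sketch: attempt the INSERT and let the key constraint reject the duplicate;
# the duplicate case is handled via the driver's IntegrityError.
import MySQLdb


def record_request(request_id, payload):
    conn = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="appdb")
    try:
        cur = conn.cursor()
        cur.execute(
            "INSERT INTO requests (request_id, payload) VALUES (%s, %s)",
            (request_id, payload),
        )
        conn.commit()
    except MySQLdb.IntegrityError:
        conn.rollback()        # second submission of the same request: no-op
    finally:
        conn.close()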
Are null bytes allowed in unicode strings? I don't ask about utf8, I mean the high level object representation of a unicode string. Background We store unicode strings containing null bytes via Python in PostgreSQL. The strings cut at the null byte if we read it again.
5
-2
-0.132549
0
false
28,813,836
0
8,634
2
0
0
28,813,409
Since a string is basically just data and a pointer, you can save a null in it. However, because a null conventionally marks the end of a C-style string (the "null terminator"), there is no way to read beyond it without knowing the size ahead of reading. Therefore, it seems that you ought to store your data in binary and read it as a buffer (see the sketch below). Good luck!
1
0
0
Are null bytes allowed in unicode strings in PostgreSQL via Python?
3
python,postgresql,unicode
0
2015-03-02T15:27:00.000
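A small sketch of the binary route suggested above, using psycopg2 and a bytea column so the embedded NUL survives the round trip; the table name and DSN are placeholders.

# Sketch: store NUL-containing data as bytea rather than text.
import psycopg2

conn = psycopg2.connect("dbname=test")                     # placeholder DSN
cur = conn.cursor()
payload = b"abc\x00def"
cur.execute("INSERT INTO blobs (data) VALUES (%s)", (psycopg2.Binary(payload),))
conn.commit()

cur.execute("SELECT data FROM blobs")
stored = cur.fetchone()[0]
print(bytes(stored))   # b'abc\x00def' -- the NUL byte comes back intact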
Are null bytes allowed in unicode strings? I don't ask about utf8, I mean the high level object representation of a unicode string. Background We store unicode strings containing null bytes via Python in PostgreSQL. The strings cut at the null byte if we read it again.
5
1
0.066568
0
false
28,814,135
0
8,634
2
0
0
28,813,409
Python itself is perfectly capable of having both byte strings and Unicode strings with null characters having a value of zero. However, if you call out to a library implemented in C, that library may use the C convention of stopping at the first null character (a quick check illustrating this follows below).
1
0
0
Are null bytes allowed in unicode strings in PostgreSQL via Python?
3
python,postgresql,unicode
0
2015-03-02T15:27:00.000
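A quick check of the point above, showing that the NUL is preserved at the Python level and the truncation happens further down in the driver or database layer:

# Python strings do not treat \x00 as a terminator.
s = u"abc\x00def"
print(len(s))            # 7 -- the string is not cut at the NUL
print(s.split(u"\x00"))  # [u'abc', u'def']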
For a project I have to use DynamoDB(aws) and python(with boto). I have items with a date and I need to display the count grouped by date or by month. Something like by date of the month [1/2: 5, 2/2: 10, 3/2: 7, 4/2: 30, 5/2: 25, ...] or by month of the year [January: 5, February: 10, March: 7, ...]
0
2
1.2
0
true
28,890,074
1
3,106
1
0
0
28,818,394
You can create 2 GSIs: one with date as the hash key, one with month as the hash key. Those GSIs will point you to the rows of that day / that month. Then you can just query the GSI, get all the rows of that day/month, and do the aggregation on your own (see the sketch below). Does that work for you? Thanks! Erben
1
0
0
Using group by in DynamoDB
1
python,amazon-web-services,amazon-dynamodb,boto
0
2015-03-02T19:56:00.000
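The query-then-aggregate step from the answer above could look roughly like the sketch below. It is written with boto3 for brevity even though the question mentions the older boto, the table, index and attribute names are assumptions, and pagination of the query results is ignored.

# Sketch: pull one month's rows from a month-keyed GSI and count them per day.
from collections import Counter

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("events")        # assumed table name
resp = table.query(
    IndexName="month-index",                              # assumed GSI name
    KeyConditionExpression=Key("month").eq("2015-02"),
)

per_day = Counter(item["date"] for item in resp["Items"])
print(per_day)   # e.g. Counter({'2015-02-04': 30, '2015-02-05': 25, ...})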