Dataset columns (name: dtype, observed range):

Title: stringlengths 11 to 150
A_Id: int64, 518 to 72.5M
Users Score: int64, -42 to 283
Q_Score: int64, 0 to 1.39k
ViewCount: int64, 17 to 1.71M
Database and SQL: int64, 0 to 1
Tags: stringlengths 6 to 105
Answer: stringlengths 14 to 4.78k
GUI and Desktop Applications: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
Networking and APIs: int64, 0 to 1
Other: int64, 0 to 1
CreationDate: stringlengths 23 to 23
AnswerCount: int64, 1 to 55
Score: float64, -1 to 1.2
is_accepted: bool, 2 classes
Q_Id: int64, 469 to 42.4M
Python Basics and Environment: int64, 0 to 1
Data Science and Machine Learning: int64, 0 to 1
Web Development: int64, 1 to 1
Available Count: int64, 1 to 15
Question: stringlengths 17 to 21k
Setup of Flask App Directory and Permissions?
25,212,067
4
4
2,504
0
python,security,ubuntu,flask
I understand you're using Apache or Nginx as your web server. If that's correct, I would place both your app's code and the app.wsgi file in your home directory. Placing a file in /var/www can allow it to be seen by the outside world in some cases (that is, unless you specifically configure your web server to ignore it or deny access to it). Placing it in /home/user doesn't expose it to the outside world unless you explicitly configure that. As for permissions, you would need to give the web server user (usually www-data in Apache, unless flask_user is also your web server user) read permission on the WSGI file, and probably execute permission as well. I'm not sure about the permissions needed on the other Python files, but that's easy to test: start off by denying your web server user all permissions on the file. If that doesn't work, grant read permission, and so on until the site works. That would be the minimum needed permission.
0
0
0
0
2014-08-07T15:23:00.000
1
0.664037
false
25,186,308
0
0
1
1
I have built a simple Flask app on an Ubuntu server and have placed the code in the following directories: main app code: /home/user/flaskapp; WSGI config: www/flaskapp/app.wsgi. My questions are: Is the placement of the app's code in my home directory okay in production? What should my folder permissions be to run a safe/secure site? My Ubuntu user's name is 'flask_user'; should I give it any special permissions or groups?
Python Flask and Google OAuth2 best practices
25,207,309
0
0
396
0
python,authentication,oauth,flask
One option would be to require the user to register with your site after using OAuth2, but that's silly - you use OAuth2 precisely to avoid that in the first place. I'd just not save a password for this user. Why would you need one anyway? He's authenticating via OAuth2, as you said, and you need to ping the OAuth2 provider to verify the user.
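A minimal sketch of that approach with a hypothetical Flask-SQLAlchemy user model - the nullable password column and the helper method are illustrative assumptions, not part of the original answer:

from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    email = db.Column(db.String(255), unique=True, nullable=False)
    # stays NULL for users who only ever authenticate through Google OAuth2
    password_hash = db.Column(db.String(128), nullable=True)

    def uses_oauth(self):
        return self.password_hash is None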
0
0
0
0
2014-08-07T20:23:00.000
1
0
false
25,191,537
0
0
1
1
I have a User model that has name, email, and password. If a user signs up normally (through a form), the password stored in my db is hashed using passlib. Then a token is generated and returned to the user's client. The token is a serialization of the player's id. To verify a player, he simply logs in with the token in the password field of a basic auth, and I simply deserialize the token and look up the player via the deserialized id. Also, a player may verify himself with his password. The problem with using Google's OAuth2 is that a player does NOT set his password; he only sends the server a Google-verified token, which the server sends to Google to obtain the user's email, name, etc. How do I generate a password for the user? What am I supposed to do here? My hackish workaround right now for Google OAuth2 user registration is simply: get the user's info from Google, generate a bogus password (which is hashed), and then generate the auth token for the user. Then, replace the bogus password with the auth token (which is hashed) and insert that as the user's password. The auth token is then returned to the user's client. In other words, the auth token becomes the password. Right now my auth tokens don't expire either. Obviously this is a huge hack. What's the correct way to do this? Should I just ping Google every time a user needs to verify himself?
How do I make Solr return an empty result?
25,193,866
0
0
660
0
python,solr,lucene,solr-query-syntax
You shouldn't query Solr when there is no term being looked for (and I seriously doubt Google looks over its searchable indexes when a search term is empty). This logic should be built into whatever mechanism you use to parse the user-supplied query terms before constructing the Solr query. Let's say the user's input is represented as a simple string where each word is treated as a unique query term. You would want to split the string on spaces into an array of strings, map over the array and remove strings prefixed by "-", and then construct the query terms from what remains in the array. If filtering the array yields nothing, return an empty result instead of querying Solr at all.
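A minimal sketch of that parsing step, assuming the user input is a plain whitespace-separated string (function and variable names are illustrative):

def build_query_terms(user_input):
    # drop exclusion terms prefixed with "-"; keep the rest
    return [t for t in user_input.split() if not t.startswith('-')]

terms = build_query_terms('-obama')
if not terms:
    results = []  # nothing left to search for: return empty, skip Solr entirely
else:
    pass  # here you would build and send the Solr query from `terms`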
0
0
1
0
2014-08-07T22:18:00.000
2
0
false
25,193,192
0
0
1
1
I want my search tool to have a similar behaviour to Google Search when all of the elements entered by the user are excluded (eg.: user input is -obama). In those cases, Google returns an empty result. In my current code, my program just makes an empty Solr query, which causes error in Solr. I know that you can enter *:* to get all the results, but what should I fill in my Solr query so that Solr will return an empty search result? EDIT: Just to make it clearer, the thing is that when I have something like -obama, I want Solr to return an empty search result. If you google -obama, that's what you get, but if you put -obama on Solr, it seems that the result is everything (all the documents), except for the ones that have "obama"
Database in Excel using win32com or xlrd Or Database in mysql
25,203,796
1
0
273
1
python,mysql,excel,win32com,xlrd
Probably not the answer you were looking for, but your post is very broad, and I've used win32com and Excel a fair bit and don't see those as good tools for your goal. An easier strategy is this:

For the server, use Flask: it is a Python HTTP server that makes it crazy easy to respond to HTTP requests via Python code and HTML templates. You'll have a fully capable server running in 5 minutes, and then you will need a bit of time to create code to get data from your DB and render from templates (which are really easy to use).

For the database, use SQLite (there is far more overhead integrating with MySQL); because you only have 2 days, you could also use a simple CSV file, since the API (Python has a CSV read/write module) is much simpler and means less ramp-up time. One CSV per user, easy to manage. You don't worry about insertion of rows for a user, you just append; and you don't implement removal of rows for a user, you just mark them as inactive (a column for active/inactive in your CSV). In processing a GET request from the client, as you read the CSV you can count how many rows are inactive and do a rewrite of the CSV, so once in a while a request will be a little slower to respond. Even simpler, you could use an in-memory data structure of your choice if you don't need persistence across restarts of the server; if this is for a demo, that should be an acceptable limitation.

For the client side, use jQuery on top of JavaScript - maybe you are doing that already. It makes it super easy to manipulate the DOM and use effects like slide-in/out etc. Get yourself the book "Learning jQuery"; you'll be able to make good use of jQuery in just a couple of hours.

If you only have two days it might be a little tight, but you will probably need more than 2 days to get around the issues you are facing with your current strategy, and issues you will face imminently.
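A rough sketch of the Flask-plus-CSV idea outlined above (the file layout, column names, and the active flag are assumptions for illustration):

import csv
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/rows/<user>')
def rows(user):
    # one CSV per user; rows are only appended, and "removed" rows are
    # just marked inactive, so filter them out when serving
    with open('%s.csv' % user) as f:
        data = [row for row in csv.DictReader(f) if row['active'] == '1']
    return jsonify(rows=data)

if __name__ == '__main__':
    app.run()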
0
0
0
0
2014-08-08T03:38:00.000
1
1.2
true
25,195,723
0
0
1
1
I have developed a website where the pages are simply html tables. I have also developed a server by expanding on python's SimpleHTTPServer. Now I am developing my database. Most of the table contents on each page are static and doesn't need to be touched. However, there is one column per table (i.e. page) that needs to be editable and stored. The values are simply text that the user can enter. The user enters the text via html textareas that are appended to the tables via javascript. The database is to store key/value pairs where the value is the user entered text (for now at least). Current situation Because the original format of my webpages was xlsx files I opted to use an excel workbook as my database that basically just mirrors the displayed web html tables (pages). I hook up to the excel workbook through win32com. Every time the table (page) loads, javascript iterates through the html textareas and sends an individual request to the server to load in its respective text from the database. Currently this approach works but is terribly slow. I have tried to optimize everything as much as I can and I believe the speed limitation is a direct consequence of win32com. Thus, I see four possible ways to go: Replace my current win32com functionality with xlrd Try to load all the html textareas for a table (page) at once through one server call to the database using win32com Switch to something like sql (probably use mysql since it's simple and robust enough for my needs) Use xlrd but make a single call to the server for each table (page) as in (2) My schedule to build this functionality is around two days. Does anyone have any thoughts on the tradeoffs in time-spent-coding versus speed of these approaches? If anyone has any better/more streamlined methods in mind please share!
Is Scrapy able to crawl any type of websites?
25,197,916
1
0
218
0
python,scrapy
Broad Crawls: Scrapy defaults are optimized for crawling specific sites. These sites are often handled by a single Scrapy spider, although this is not necessary or required (for example, there are generic spiders that handle any given site thrown at them). In addition to this "focused crawl", there is another common type of crawling which covers a large (potentially unlimited) number of domains, and is only limited by time or another arbitrary constraint, rather than stopping when the domain has been crawled to completion or when there are no more requests to perform. These are called "broad crawls", and they are the typical crawls employed by search engines. These are some common properties often found in broad crawls: they crawl many domains (often unbounded) instead of a specific set of sites; they don't necessarily crawl domains to completion, because it would be impractical (or impossible) to do so, and instead limit the crawl by time or number of pages crawled; they are simpler in logic (as opposed to very complex spiders with many extraction rules) because data is often post-processed in a separate stage; they crawl many domains concurrently, which allows them to achieve faster crawl speeds by not being limited by any particular site constraint (each site is crawled slowly to respect politeness, but many sites are crawled in parallel). As said above, Scrapy default settings are optimized for focused crawls, not broad crawls. However, due to its asynchronous architecture, Scrapy is very well suited for performing fast broad crawls.
0
0
0
0
2014-08-08T07:09:00.000
1
1.2
true
25,197,798
0
0
1
1
Is the Scrapy framework efficient at crawling any website? I ask this question because I found in their tutorial that they usually build regular expressions that depend on the architecture (the structure of the links) of the website to crawl it. Does this mean Scrapy cannot be generic and crawl any website regardless of how its URLs are structured? In my case I have to deal with a very large number of websites: it is impossible to write regular expressions for each one of them.
Handle and display large data set in web browser
25,208,098
3
1
1,235
1
python,database,django,postgresql,web
Are you allowed to use paging in your output? If so, then I'd start by setting a page size of 100 (for example) and then use LIMIT 100 in my various SQL queries. Essentially, each time the user clicks next or prev on the web page, a new query would be executed based on the current filtering or sorting options with the LIMIT. The SQL should be pretty easy to figure out.
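For illustration, the same LIMIT/OFFSET pattern expressed through the Django ORM mentioned in the question (page size and column name are placeholders):

PAGE_SIZE = 100

def get_page(queryset, page_number):
    # slicing a queryset compiles to LIMIT/OFFSET in the generated SQL,
    # so only one page of rows ever leaves the database
    start = (page_number - 1) * PAGE_SIZE
    return queryset.order_by('some_column')[start:start + PAGE_SIZE]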
0
0
0
0
2014-08-08T16:09:00.000
1
1.2
true
25,207,697
0
0
1
1
I am still a noob in web app development, and sorry if this question seems obvious to you guys. Currently I am developing a web application for my university using Python and Django. One feature of my web app is to retrieve a large data set from a table in the database (PostgreSQL) and display it in tabular form on a web page. Each column of the table needs to have sorting and filtering features. The data set goes up to roughly 2 million rows. So I wonder if something like jpxGrid could help me achieve such a goal, or whether it would be too slow to handle/sort/display/render such a large data set on a web page. I plan to retrieve all the data in the table at once (only initiate one database query call) and pass it into jpxGrid; however, my colleague suggests that each sort and filter should initiate a separate query call to the database to achieve better performance (the database's ORDER BY is very fast). I tried to use another open source jQuery library that handles the form and enables sorting, filtering and paging (a non-professional, outdated one) at the beginning, which starts to lag after 5k data rows and becomes impossible to use after 20k rows. My question is whether something like jpxGrid is a good solution to my problem, or whether I should build my own system that lets the database handle the sorting and filtering (probably also adding a paging feature). Thank you very much for helping.
Django + Postgres + Large Time Series
25,887,408
0
21
10,343
1
python,django,postgresql,heroku,bigdata
You might also consider using the PostGIS postgres extension which includes support for raster data types (basically large grids of numbers) and has many features to make use of them. However, do not use the ORM in this case, you will want to do SQL directly on the server. The ORM will add a huge amount of overhead for large numerical datasets. It's also not very adapted to handling large matrices within python itself, for that you need numpy.
0
0
0
0
2014-08-08T20:48:00.000
4
0
false
25,212,009
0
0
1
1
I am scoping out a project with large, mostly-uncompressible time series data, and wondering if Django + Postgres with raw SQL is the right call. I have time series data that is ~2K objects/hour, every hour. This is about 2 million rows per year I store, and I would like to 1) be able to slice off data for analysis through a connection, 2) be able to do elementary overview work on the web, served by Django. I think the best idea is to use Django for the objects themselves, but drop to raw SQL to deal with the large time series data associated. I see this as a hybrid approach; that might be a red flag, but using the full ORM for a long series of data samples feels like overkill. Is there a better way?
Vagrant and Google App Engine are not syncing files
26,824,688
4
2
298
0
python,google-app-engine,vagrant
Finally found the answer! In the latest version of Google App Engine, there is a new parameter you can pass to dev_appserver.py: running dev_appserver.py --use_mtime_file_watcher=True works! Each change takes 1-2 seconds to be detected, but it still works!
0
1
0
0
2014-08-09T09:43:00.000
1
1.2
true
25,217,223
0
0
1
1
I am currently using Vagrant to spin up a VM to run GAE's dev_appserver in the Virtual Machine. The sync folder works and I can see all the files. But, after I run the dev appserver, changes to python files by the host machine are not dynamically updated. To see updates to my python files, I have to relaunch dev appserver in my Virtual Machine. Also, I have grunt tasks that watch html/css files. These also do not sync properly when updated by editors outside the Virtual Machine. I suspect that it's something to do with the way Vagrant syncs files changed on the host machine. Has anyone found a solution to this problem?
OpenSuse Python-Django Install Issue
46,187,663
0
0
1,428
0
python,django
zypper install python-pip
pip install virtualenv
virtualenv name-env
source name-env/bin/activate
(name-env) pip install django==version
(name-env) pip install django
0
0
0
0
2014-08-09T15:08:00.000
3
0
false
25,219,911
0
0
1
1
I keep receiving the following error message when trying to install Python-Django on my OpenSuse Linux VM: The installation has failed. For more information, see the log file at /var/log/YaST2/y2log. Failure stage was: Adding Repositories Not sure how to add additional Repositories when I am using the opensuse download center. Does anyone know how to resolve this error? Thank you.
Serving static files in Flask with Content-Length in HTTP response header
25,221,517
0
2
1,698
0
python,flask
If you use send_file() with a filename (not a file object), Flask will automatically set the Content-Length header for you. send_from_directory() uses send_file() with a filename, so you are set there. Do make sure you are using Flask 0.10 or newer; the header code was added in that version.
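For reference, a minimal sketch of the pattern described above (the directory path is a placeholder):

from flask import Flask, send_from_directory

app = Flask(__name__)

@app.route('/downloads/<path:filename>')
def download(filename):
    # send_from_directory() passes a real filename to send_file(),
    # so Flask can stat the file and set Content-Length itself
    return send_from_directory('/srv/files', filename, as_attachment=True)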
0
0
0
0
2014-08-09T17:58:00.000
1
1.2
true
25,221,497
0
0
1
1
I am writing a small Python Flask application which allows people to download files from my server. These files will be served by Python, not a web server like Nginx or Apache. I tried to use send_from_directory() and send_file(), and I can download my file, but no file size was shown because the Content-Length field was missing from the header. How can I add the header when I use send_from_directory() or send_file()? Or is there any other better way to do this? Thank you.
Creating an archive - Save results or request them every time?
25,222,611
1
0
35
1
python,html,database
If I were creating this type of application, I would provide some common queries - get by current date, current time, date ranges, time ranges, and others based on the application - for the user to select easily, plus autocompletion for common keywords. If the data gets changed frequently, there is no use saving the HTML; generating a new one is the better option.
0
0
0
0
2014-08-09T20:01:00.000
3
0.066568
false
25,222,515
0
0
1
3
I'm working on a project that allows users to enter SQL queries with parameters, that SQL query will be executed over a period of time they decide (say every 2 hours for 6 months) and then get the results back to their email address. They'll get it in the form of an HTML-email message, so what the system basically does is run the queries, and generate HTML that is then sent to the user. I also want to save those results, so that a user can go on our website and look at previous results. My question is - what data do I save? Do I save the SQL query with those parameters (i.e the date parameters, so he can see the results relevant to that specific date). This means that when the user clicks on this specific result, I need to execute the query again. Save the HTML that was generated back then, and simply display it when the user wishes to see this result? I'd appreciate it if somebody would explain the pros and cons of each solution, and which one is considered the best & the most efficient. The archive will probably be 1-2 months old, and I can't really predict the amount of rows each query will return. Thanks!
Creating an archive - Save results or request them every time?
25,222,656
1
0
35
1
python,html,database
Specifically regarding retrieving the results from queries that have been run previously I would suggest saving the results to be able to view later rather than running the queries again and again. The main benefits of this approach are: You save unnecessary computational work re-running the same queries; You guarantee that the result set will be the same as the original report. For example if you save just the SQL then the records queried may have changed since the query was last run or records may have been added / deleted. The disadvantage of this approach is that it will probably use more disk space, but this is unlikely to be an issue unless you have queries returning millions of rows (in which case html is probably not such a good idea anyway).
0
0
0
0
2014-08-09T20:01:00.000
3
1.2
true
25,222,515
0
0
1
3
I'm working on a project that allows users to enter SQL queries with parameters, that SQL query will be executed over a period of time they decide (say every 2 hours for 6 months) and then get the results back to their email address. They'll get it in the form of an HTML-email message, so what the system basically does is run the queries, and generate HTML that is then sent to the user. I also want to save those results, so that a user can go on our website and look at previous results. My question is - what data do I save? Do I save the SQL query with those parameters (i.e the date parameters, so he can see the results relevant to that specific date). This means that when the user clicks on this specific result, I need to execute the query again. Save the HTML that was generated back then, and simply display it when the user wishes to see this result? I'd appreciate it if somebody would explain the pros and cons of each solution, and which one is considered the best & the most efficient. The archive will probably be 1-2 months old, and I can't really predict the amount of rows each query will return. Thanks!
Creating an archive - Save results or request them every time?
25,222,678
1
0
35
1
python,html,database
The crucial difference is that if data changes, new query will return different result than what was saved some time ago, so you have to decide if the user should get the up to date data or a snapshot of what the data used to be. If relevant data does not change, it's a matter of whether the queries will be expensive, how many users will run them and how often, then you may decide to save them instead of re-running queries, to improve performance.
0
0
0
0
2014-08-09T20:01:00.000
3
0.066568
false
25,222,515
0
0
1
3
I'm working on a project that allows users to enter SQL queries with parameters, that SQL query will be executed over a period of time they decide (say every 2 hours for 6 months) and then get the results back to their email address. They'll get it in the form of an HTML-email message, so what the system basically does is run the queries, and generate HTML that is then sent to the user. I also want to save those results, so that a user can go on our website and look at previous results. My question is - what data do I save? Do I save the SQL query with those parameters (i.e the date parameters, so he can see the results relevant to that specific date). This means that when the user clicks on this specific result, I need to execute the query again. Save the HTML that was generated back then, and simply display it when the user wishes to see this result? I'd appreciate it if somebody would explain the pros and cons of each solution, and which one is considered the best & the most efficient. The archive will probably be 1-2 months old, and I can't really predict the amount of rows each query will return. Thanks!
Python - How to update an entity in appengine?
25,226,221
4
1
40
0
python,google-app-engine
If you call .put() on an entity that you've previously retrieved from the datastore, it will update the existing entity. (Make sure you're not specifying a new key for the entity.)
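A minimal sketch of that update pattern with ndb (kind, property, and id are illustrative):

from google.appengine.ext import ndb

class Publisher(ndb.Model):
    name = ndb.StringProperty()

pub = ndb.Key(Publisher, 12345).get()  # fetch the existing entity
pub.name = 'New name'
pub.put()  # same key as before, so the datastore overwrites rather than duplicates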
0
1
0
0
2014-08-10T06:27:00.000
1
0.664037
false
25,226,153
0
0
1
1
In appengine documentation, it says that the put() method replaces the previous entity. But when I do so it always adds a new entity to the datastore. How do I update an entity?
Playback of at least 3 music files at once in Python
25,243,720
1
0
57
0
python,audio
I finally got an answer/workaround. I am using Python's multiprocessing module to run multiple pygame instances, so I can play more than one music file at a time with full control over play mode and playback position.
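A sketch of that workaround, assuming pygame's mixer music API; each process owns one independent playback "deck" (file names and volumes are illustrative):

import multiprocessing

def play(path, volume, start_seconds):
    # import inside the worker so each process gets its own pygame instance
    import pygame
    pygame.mixer.init()
    pygame.mixer.music.load(path)
    pygame.mixer.music.set_volume(volume)
    pygame.mixer.music.play(start=start_seconds)
    while pygame.mixer.music.get_busy():
        pygame.time.wait(100)

if __name__ == '__main__':
    decks = [multiprocessing.Process(target=play, args=(p, 0.8, 0.0))
             for p in ('background.mp3', 'bell.mp3', 'voice.mp3')]
    for d in decks:
        d.start()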
0
1
0
0
2014-08-10T21:57:00.000
1
1.2
true
25,233,396
0
0
1
1
I am searching for a way/framework to play at least 3 music files at one in a python application. It should run at least under ubuntu and mac as well as on the raspberry pi. I need per channel/music file/"deck" that is played: Control of the Volume of the Playback Control of the start position of the playback Play and Pause/Resume functionality Should support mp3 (Not a must but would be great!) Great would be built in repeat functionality really great would be a fade to the next iteration when the song is over. If I can also play at least two video files with audio over the same framework, this would be great, but is not a must. Has anyone an Idea? I already tried pygame but their player can play just one music file at once or has no control over the playback position. I need that for a theatre where a background sound should be played (and started simultaneosly with the light) and when it is over, a next file fades over. while that is happening there are some effects (e.g. a bell) at a third audio layer.
django-oauth2-provider with custom user model?
47,976,325
5
3
2,847
0
python,django,oauth,django-rest-framework,django-oauth
As the previous answer suggested, you should extend AbstractUser from django.contrib.auth.models. The problem with the access token that the OP is referring to occurs when changing the AUTH_USER_MODEL setting AFTER django-oauth2-provider was migrated. When django-oauth2-provider is migrated, it creates a foreign key constraint between the User model and its own tables. The solution is very easy:

Create your new User model and change the AUTH_USER_MODEL setting.
Go to the django_migration table in your database.
Delete all rows of django-oauth2-provider.
Run python manage.py makemigrations.
Run python manage.py migrate.

Now the django-oauth2-provider tables are connected to the RIGHT User model.
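The first two steps of that recipe, sketched out (the app label and class name are placeholders):

# myapp/models.py
from django.contrib.auth.models import AbstractUser

class User(AbstractUser):
    # add any site-specific fields here
    pass

# settings.py
AUTH_USER_MODEL = 'myapp.User'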
0
0
0
0
2014-08-11T08:03:00.000
2
0.462117
false
25,238,425
0
0
1
1
I am really stuck in my project right now. I am trying to implement Oauth2 for my app. I found out about django-oauth2-provider a lot and tried it. The only problem is, it uses the User model at django.contrib.auth. The main users of our site are saved in a custom model called User which does not inherit from or extend the model at django.contrib.auth. Is there any way to use my custom User model for creating clients and token? If django-oauth2-provider can't be used for this purpose, can anyone recommend me some oauth2 library with the option to implement oauth2 with my own model. Sincerely, Sushant Karki
Setting a lifecycle for a path within a bucket
25,245,827
0
0
41
1
python,amazon-web-services,amazon-s3,boto
After some research in the boto docs, it looks like using the prefix parameter in the lifecycle add_rule method allows you to do this.
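A sketch of that per-prefix rule with classic boto (the bucket name is a placeholder):

import boto
from boto.s3.lifecycle import Lifecycle, Expiration

conn = boto.connect_s3()
bucket = conn.get_bucket('my-bucket')

lifecycle = Lifecycle()
# keys under a/ expire after 5 days; no rule for b/, so those keys never expire
lifecycle.add_rule('expire-a', prefix='a/', status='Enabled',
                   expiration=Expiration(days=5))
bucket.configure_lifecycle(lifecycle)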
0
0
1
0
2014-08-11T14:28:00.000
1
1.2
true
25,245,710
0
0
1
1
Using Boto, you can create an S3 bucket and configure a lifecycle for it; say expire keys after 5 days. I would like to not have a default lifecycle for my bucket, but instead set a lifecycle depending on the path within the bucket. For instance, having path /a/ keys expire in 5 days, and path /b/ keys to never expire. Is there a way to do this using Boto? Or is expiration tied to buckets and there is no alternative? Thank you
Web Crawler gets slower with time
25,302,804
0
3
204
0
python-2.7,facebook-graph-api,selenium-webdriver,phantomjs,facebook-sdk-3.0
We faced the same issue. We resolved it by closing the browser automatically after a particular time interval, clearing the temporary cache, opening a new browser instance, and continuing the process.
0
0
1
0
2014-08-12T02:49:00.000
1
0
false
25,255,414
0
0
1
1
I am doing a data extraction project where I am required to build a web scraping program written in Python, using Selenium and the PhantomJS headless webkit as the browser, for scraping public information like friend lists on Facebook. The program starts fairly fast, but after a day of running it gets slower and slower, and I cannot figure out why. Can anyone give me an idea why it is getting slower? I am running on a local machine with pretty good specs: 4 GB RAM and a quad-core processor. Does FB provide any API to find friends of friends?
Flask global variables and sessions
25,274,042
9
2
2,915
0
python,flask,global-variables
Generally speaking, global variables are shared between requests. Some WSGI servers can use a new separate process for each request, but that is not an efficient way to scale your requests. Most will use threading or several child processes to spread the load, but even in the case of separate child processes, each subprocess will have to handle multiple requests during its lifetime. In other words: no, Flask will not protect your global variables from being shared between different users.
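The per-user alternative is Flask's session object, which is scoped to a single visitor via a signed cookie; a minimal sketch:

from flask import Flask, session

app = Flask(__name__)
app.secret_key = 'change-me'  # placeholder; sessions are signed with this key

@app.route('/')
def index():
    # unlike a module-level global, this value is private to each visitor
    session['views'] = session.get('views', 0) + 1
    return 'you have viewed this page %d times' % session['views']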
0
0
0
0
2014-08-12T21:03:00.000
1
1.2
true
25,273,989
0
0
1
1
If I have global variables in Flask and multiple users access the site at once, can one person's session overwrite the global variables of another person's session, or does Flask make a unique instance of my site and program code each time it's requested from a user's browser?
Predict if sites return the same content
25,275,720
1
1
52
0
python,url,web-crawler,urllib2
You can store the hash of the content of pages previously seen and check if the page has already been seen before continuing.
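A minimal sketch of that de-duplication step, keeping a set of content hashes (names are illustrative):

import hashlib

seen = set()

def is_new_page(content):
    # identical pages hash to the same digest, whatever their URLs were
    digest = hashlib.sha1(content).hexdigest()
    if digest in seen:
        return False
    seen.add(digest)
    return True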
0
0
1
0
2014-08-12T23:19:00.000
2
0.099668
false
25,275,654
0
0
1
1
I am writing a web crawler, but I have a problem with the function which recursively calls links. Let's suppose I have a page: http://en.wikipedia.org/wiki/Stirling_numbers_of_the_second_kind. I am looking for all links, and then open each link recursively, downloading again all links, etc. The problem is that some links, although they have different URLs, lead to the same page, for example: http://en.wikipedia.org/wiki/Stirling_numbers_of_the_second_kind#mw-navigation gives the same page as the previous link. And so I have an infinite loop. Is there any possibility to check whether two links lead to the same page without comparing the entire content of the pages?
Google App Engine 413 error (Request Entity Too Large)
25,311,367
4
3
5,558
0
python,google-app-engine,http-status-code-413
Looks like it was because I was making a GET request. Changing it to POST fixed it.
0
1
0
0
2014-08-14T03:40:00.000
1
1.2
true
25,299,681
0
0
1
1
I've implemented an app engine server in Python for processing html documents sent to it. It's all well and good when I run it locally, but when running off the App engine, I get the following error: "413. That’s an error. Your client issued a request that was too large. That’s all we know." The request is only 155KB, and I thought the app engine request limit was 10MB. I've verified that I haven't exceeded any of the daily quotas, so anyone know what might be going on? Thanks in advance! -Saswat
How to prevent user changing URL to see other submission data Django
25,312,789
3
5
3,945
0
python,django,model-view-controller
Just check that the object retrieved by the primary key belongs to the requesting user. In the view this would be if some_object.user == request.user: ... This requires that the model representing the object has a reference to the User model.
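A sketch of that check in a view (the model and field names are placeholders):

from django.http import HttpResponseForbidden
from django.shortcuts import get_object_or_404, render

def detail(request, pk):
    obj = get_object_or_404(SomeModel, pk=pk)  # SomeModel has a `user` field
    if obj.user != request.user:
        return HttpResponseForbidden()  # not theirs: refuse, whatever pk says
    return render(request, 'detail.html', {'object': obj})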
0
0
0
0
2014-08-14T16:04:00.000
6
0.099668
false
25,312,626
0
0
1
1
I'm new to the web development world, to Django, and to applications that require securing the URL from users that change the foo/bar/pk to access other user data. Is there a way to prevent this? Or is there a built-in way to prevent this from happening in Django? E.g.: foo/bar/22 can be changed to foo/bar/14 and exposes past users data. I have read the answers to several questions about this topic and I have had little luck in an answer that can clearly and coherently explain this and the approach to prevent this. I don't know a ton about this so I don't know how to word this question to investigate it properly. Please explain this to me like I'm 5.
AJAX and Javascript to display the same for everyone?
25,319,136
1
1
29
0
javascript,python,ajax,websocket,flask
You have to store the current state on your server, and when a page is requested, build the page from your server so that it shows the current state. When anything changes the current state (I don't know what actions can change the state, as you haven't stated how that works), you must update the state on the server so it stays current. If you want other open clients to update anytime anyone changes the state, then each open page will have to either maintain some sort of open connection to the server, like a websocket (so it can be notified of updates to the state and update its visuals), or poll the server from each open page to find out if anything has been updated.
0
0
0
0
2014-08-14T23:15:00.000
1
1.2
true
25,319,035
0
0
1
1
I have a web app with several AJAX calls, and it draws realtime graphs from them. The problem is that every time we connect to the page, it starts over, drawing and making calls from scratch. I want everybody to share the same state of the page, not each person reloading and getting different values. How do I limit the calls and share the same state for everyone?
Issue with CSRF token cookies in Django 1.6
25,334,477
1
4
596
0
python,django,cookies,csrf,django-1.6
I think we finally figured it out. The separate "CSRF_COOKIE_DOMAIN" for each environment (".beta.site.com", ".demo.site.com", etc.) stopped the cross-environment issues. We also ended up setting "CSRF_COOKIE_NAME" to "csrf_token" instead of the default "csrftoken" so that users with old csrftoken cookies weren't negatively affected.
0
0
0
0
2014-08-15T13:24:00.000
1
1.2
true
25,327,192
0
0
1
1
We've been experiencing issues with duplicate CSRF token cookies in Django in our most recent release. We just upgraded from Django 1.4 to 1.6 and we never had any issues back in 1.4. Basically, everything starts fine for each user, but at some point they end up having more than one CSRF token cookie and the browser gets confused and doesn't know which one to use. It typically chooses wrong and causes CSRF failure issues. Our site uses multiple sub-domains, so there's typically a cookie for .site.com, .sub.site.com, site.com, and other variants. We tried setting "CSRF_COOKIE_DOMAIN" to .site.com, and that seemed to make the issue happen less frequently, but it still happened occasionally when sub-domains were being used and users were logging out and logging back in as other users. We also discovered that the favicon shortcut wasn't being defined in our base template, causing an extra request to go through the middleware, but that was fixed. We then confirmed that only the real request was going through the middleware and not any of the static or media files. We still can't reproduce the issue on command, and typically whenever it does happen then clearing cookies works as a temporary fix, but it still keeps happening periodically. Does anyone know why this might be happening? Is there something that we're missing in the docs? Thanks. EDIT: One thing I forgot to mention is that we have multiple server environments (site.com, demo.site.com, and beta.site.com). After a little more digging, it looked like users who were testing on beta and then used production had cross-environment cookie collisions. Just now we tried setting the csrf cookie domains for each environment to ".beta.site.com" and ".demo.site.com" instead of just ".site.com" and that seemed to help, especially when you clear your cookies between working in each environment. However, there's still potential for collisions between .site.com cookies on production colliding in beta and demo, but that's less of an issue at least. So is there anything more we can do about this? Also, is there anything we can do once we push this to production when users have old "site.com" cookies that run into collisions with the new specified ".site.com" cookies? EDIT 2: I posted the solution, but it won't let me accept it for a few days.
Django website with gunicorn: error using Chrome when logging in a user
25,341,679
0
0
211
0
python,django,google-chrome,gunicorn
The session information, i.e. which user is logged in, is saved in a cookie, which is sent from browser to server with each request. The cookie is set by the server with your login request. For some reason, Chrome does not send or save the correct cookie. If you have a current version of each browser, they should behave similarly. Older browser versions may not be as strict as newer versions with respect to cookie security:

Same origin: are all pages located at the same sub-domain, or is the login page at some other domain?
Path: do you set the cookie for a specific path, but use URLs with other paths?
HTTP-only: do you try to set or get a cookie with JavaScript which is set http-only?
Secure-only: do you use https for the login page but http for other pages?

Look at the developer tools in Chrome (Resources -> Cookies) to see which cookies are set and whether they change with each login. Delete all cookies, and try again.
0
0
0
0
2014-08-16T14:53:00.000
1
0
false
25,341,332
0
0
1
1
I have a strange error with my website, which was created with Django. For the server I use gunicorn and nginx. It works well at first when I use Firefox to test my website: I create an account, log the user in, and once I submit the data the user gets logged in. One day I switched to Chrome to test my website. I go to the login page, fill in the username and password, and click the submit button; the user gets logged in, but when I refresh the page, the strange thing is that the website asks me to log in again, meaning the user is not logged in at that time. This happens only in Chrome; I tested in IE and Firefox, and all works well. To describe the error again: when I use Chrome and log in to an account, the page shows the account is logged in; however, when I refresh the page or click to another page, the website shows the user is not in a logged-in status. This error occurs only in Chrome. And if I stop gunicorn and start the website using the Django command manage.py runserver, the error does not appear even when I use Chrome. I do not know what exactly causes the problem. Can anyone help me?
Python RabbitMQ - consumer only seeing every second message
25,345,174
10
3
1,694
0
python,rabbitmq,amqp,pika
Your code is fine logically, and runs without issue on my machine. The behavior you're seeing suggests that you may have accidentally started two consumers, with each one grabbing a message off the queue, round-robin style. Try either killing the extra consumer (if you can find it), or rebooting.
0
1
0
1
2014-08-16T21:38:00.000
1
1.2
true
25,344,239
0
0
1
1
I'm testing out a producer consumer example of RabbitMQ using Pika 0.98. My producer runs on my local PC, and the consumer runs on an EC2 instance at Amazon. My producer sits in a loop and sends up some system properties every second. The problem is that I am only seeing the consumer read every 2nd message; it's as though every 2nd message is not being read. For example, my producer prints out this (timestamp, cpu pct used, RAM used):

2014-08-16 14:36:17.576000 -0700,16.0,8050806784
2014-08-16 14:36:18.578000 -0700,15.5,8064458752
2014-08-16 14:36:19.579000 -0700,15.0,8075313152
2014-08-16 14:36:20.580000 -0700,12.1,8074121216
2014-08-16 14:36:21.581000 -0700,16.0,8077778944
2014-08-16 14:36:22.582000 -0700,14.2,8075038720

but my consumer is printing out this:

Received '2014-08-16 14:36:17.576000 -0700,16.0,8050806784'
Received '2014-08-16 14:36:19.579000 -0700,15.0,8075313152'
Received '2014-08-16 14:36:21.581000 -0700,16.0,8077778944'

The code for the producer is:

import pika
import psutil
import time
import datetime
from dateutil.tz import tzlocal
import logging

logging.getLogger('pika').setLevel(logging.DEBUG)

connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='54.191.161.213'))
channel = connection.channel()
channel.queue_declare(queue='ems.data')

while True:
    now = datetime.datetime.now(tzlocal())
    timestamp = now.strftime('%Y-%m-%d %H:%M:%S.%f %z')
    msg = "%s,%.1f,%d" % (timestamp, psutil.cpu_percent(), psutil.virtual_memory().used)
    channel.basic_publish(exchange='', routing_key='ems.data', body=msg)
    print msg
    time.sleep(1)

connection.close()

And the code for the consumer is:

connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='0.0.0.0'))
channel = connection.channel()
channel.queue_declare(queue='hello')
print ' [*] Waiting for messages. To exit press CTRL+C'

def callback(ch, method, properties, body):
    print " [x] Received %r" % (body,)

channel.basic_consume(callback, queue='hello', no_ack=True)
channel.start_consuming()
How can I display full (non-truncated) dataframe information in HTML when converting from Pandas dataframe to HTML?
63,317,500
0
374
438,196
0
python,html,pandas
For those who like to reduce typing (i.e., everyone!): pd.set_option('max_colwidth', None) does the same thing
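For context, 'max_colwidth' is shorthand that pandas resolves to the full option name 'display.max_colwidth'; a short usage sketch (recent pandas accepts None to mean "no limit", and the sample text is illustrative):

import pandas as pd

pd.set_option('display.max_colwidth', None)  # never truncate cell text
df = pd.DataFrame({'TEXT': ['a very long review that would otherwise be cut off']})
html = df.to_html()  # cells now contain the full text, with no trailing '...'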
0
0
0
0
2014-08-17T17:52:00.000
9
0
false
25,351,968
0
1
1
1
I converted a Pandas dataframe to an HTML output using the DataFrame.to_html function. When I save this to a separate HTML file, the file shows truncated output. For example, in my TEXT column, df.head(1) will show The film was an excellent effort... instead of The film was an excellent effort in deconstructing the complex social sentiments that prevailed during this period. This rendition is fine in the case of a screen-friendly format of a massive Pandas dataframe, but I need an HTML file that will show complete tabular data contained in the dataframe, that is, something that will show the latter text element rather than the former text snippet. How would I be able to show the complete, non-truncated text data for each element in my TEXT column in the HTML version of the information? I would imagine that the HTML table would have to display long cells to show the complete data, but as far as I understand, only column-width parameters can be passed into the DataFrame.to_html function.
Python - Issues with selenium button click using XPath
25,355,323
1
0
531
0
python,selenium,selenium-webdriver
I'd use driver.find_element_by_name("submit.button2-click.x").click(), or driver.find_element_by_css_selector("selector").click() with an appropriate selector.
0
0
1
0
2014-08-18T01:36:00.000
2
0.099668
false
25,355,287
0
0
1
1
I'm using the following code to click a button on a page but the XPath keeps changing so the code keeps breaking: mydriver.find_element_by_xpath("html/body/div[2]/div[3]/div[1]/div/div[2]/div[2]/div[4]/div/form[2]/span/span/input").click() Is there a better way I should be doing this? Here is the code for the button I am trying to click: <input class="a-button-input" type="submit" title="Button 2" name="submit.button2-click.x" value="Button 2 Click"/>
Neo4Django create node not working in manage.py shell
25,363,972
0
0
113
0
python,django,neo4j,neo4django
This sounds like a network setup problem. Can you check what URL the library is trying to connect to and that that one really goes to your local Neo4j Server?
0
0
0
0
2014-08-18T11:42:00.000
1
0
false
25,362,508
0
0
1
1
I have neo4j-2.1.3 installed and the server running on my Linux system. I created the model "publisher" in my app. Then in the manage.py shell, whenever I save a node with

from BooksGraph.models import Publisher
p = Publisher.objects.create(name='Sunny', address='b-1/196')

a long error pops up with:

Traceback (most recent call last):
  File "<console>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/neo4django/db/models/manager.py", line 42, in create
    return self.get_query_set().create(**kwargs)
  File "/usr/local/lib/python2.7/dist-packages/neo4django/db/models/query.py", line 1052, in create
    return super(NodeQuerySet, self).create(**kwargs)
  File "/usr/local/lib/python2.7/dist-packages/django/db/models/query.py", line 377, in create
    obj.save(force_insert=True, using=self.db)
  File "/usr/local/lib/python2.7/dist-packages/neo4django/db/models/base.py", line 325, in save
    return super(NodeModel, self).save(using=using, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/django/db/models/base.py", line 463, in save
    self.save_base(using=using, force_insert=force_insert, force_update=force_update)
  File "/usr/local/lib/python2.7/dist-packages/neo4django/db/models/base.py", line 341, in save_base
    self._save_neo4j_node(using)
  File "<string>", line 2, in _save_neo4j_node
  File "/usr/local/lib/python2.7/dist-packages/neo4django/db/models/base.py", line 111, in trans_method
    len(connections[args[0].using]._transactions) < 1:
  File "/usr/local/lib/python2.7/dist-packages/neo4django/utils.py", line 313, in __getitem__
    **db['OPTIONS'])
  File "/usr/local/lib/python2.7/dist-packages/neo4django/neo4jclient.py", line 29, in __init__
    super(EnhancedGraphDatabase, self).__init__(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/neo4jrestclient/client.py", line 74, in __init__
    response = Request(**self._auth).get(self.url)
  File "/usr/local/lib/python2.7/dist-packages/neo4jrestclient/request.py", line 63, in get
    return self._request('GET', url, headers=headers)
  File "/usr/local/lib/python2.7/dist-packages/neo4django/db/__init__.py", line 60, in _request
    headers)
  File "/usr/local/lib/python2.7/dist-packages/neo4jrestclient/request.py", line 198, in _request
    auth=auth, verify=verify)
  File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 468, in get
    return self.request('GET', url, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 456, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 559, in send
    r = adapter.send(request, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/requests/adapters.py", line 378, in send
    raise ProxyError(e)
ProxyError: ('Cannot connect to proxy.', error(113, 'No route to host'))
pyramid middleware call to mssql stored procedure - no response
25,646,833
1
0
122
1
python-2.7,stored-procedures,pyramid,pymssql
The solution was rather trivial. Within one object instance, I was calling two different stored procedures without closing the connection after the first call. That caused a pending request or so in the MSSQL-DB, locking it for further requests.
0
0
0
0
2014-08-18T16:05:00.000
1
1.2
true
25,367,508
0
0
1
1
From a pyramid middleware application I'm calling a stored procedure with pymssql. The procedure responds nicely upon the first request I pass through the middleware from the frontend (angularJS). Upon subsequent requests however, I do not get any response at all, not even a timeout. If I then restart the pyramid application, the same above described happens again. I'm observing this behavior with a couple of procedures that were implemented just yesterday. Some other procedures implemented months ago are working just fine, regardless of how often I call them. I'm not writing the procedures myself, they are provided for. From what I'm describing here, can anybody tell where the bug should be hiding most probably?
How to use webbrowser.open() with request in python
25,370,027
1
0
2,534
0
python,django,post,browser
Solution #1) skip all this and see Rjzheng's link below -- it's much simpler. Solution #2) since webbrowser.open() doesn't take POST args: 1) write a JavaScript page which accepts args via GET, then does an Ajax POST; 2) have webbrowser.open() open the URL from step #1. Not glamorous, but it'll work :) Be careful with security; you don't want to expose someone's password in the GET URL!
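A sketch of the GET-parameter hand-off that step 1 relies on (Python 2 era, hence urllib.urlencode; the URL and field name are illustrative, and again, never put a password in a GET URL):

import urllib
import webbrowser

params = urllib.urlencode({'username': 'alice'})
webbrowser.open('http://example.com/landing?' + params)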
0
0
1
0
2014-08-18T16:48:00.000
1
1.2
true
25,368,199
0
0
1
1
I made a local GUI which requires users to enter their usernames and passwords. Once they click submit, I want a pop-out window that directs them to a website with their personal information through POST, which requires a request. I know there is webbrowser.open() to open a website, but it doesn't take any requests; how would I be able to do what I want? I am using Django 1.6 and Python 2.7.
Django Process Lifetime
25,370,876
4
3
266
0
python,django,lifecycle
This is not a function of Django at all, but of whatever system is being used to serve Django. Usually that'll be wsgi via something like mod_wsgi or a standalone server like gunicorn, but it might be something completely different like FastCGI or even plain CGI. The point is that all these different systems have their own models that determines process lifetime. In anything other than basic CGI, any individual process will certainly serve several requests before being recycled, but there is absolutely no general guarantee of how many - the process might last several days or weeks, or just a few minutes. One thing to note though is that you will almost always have several processes running concurrently, and you absolutely cannot count on any particular request being served by the same one as the previous one. That means if you have any user-specific data you want to persist between requests, you need to store it somewhere like the session.
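Illustrating that last point in Django terms: per-user data belongs in the session, which is stored server-side and keyed by a cookie, so it survives no matter which process serves the request. A minimal sketch:

from django.http import HttpResponse

def my_view(request):
    count = request.session.get('visit_count', 0) + 1
    request.session['visit_count'] = count  # persisted by the session backend
    return HttpResponse('visit %d' % count)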
0
0
0
0
2014-08-18T19:02:00.000
1
1.2
true
25,370,287
1
0
1
1
When using Django, how long does the Python process used to service requests stay alive? Obviously, a given Python process services an entire request, but is it guaranteed to survive across across requests? The reason I ask is that I perform some expensive computations at when I import certain modules and would like to know how often the modules will be imported.
LiveServerTestCase server sees different database to tests
25,793,512
0
1
190
0
python,mysql,django,testing
The celery workers are still feeding off the dev database even if the test server brings up other databases, because they were told to in the settings file. One fix would be to make a separate settings_test.py file that specifies the test database name, and bring up celery workers from the setUp method using subprocess.check_output, consuming from a special queue for testing. Then these celery workers would feed from the test database rather than the dev database.
0
0
0
0
2014-08-19T03:57:00.000
1
0
false
25,375,469
0
0
1
1
I have some code (a celery task) which makes a call via urllib to a Django view. The code for the task and the view are both part of the same Django project. I'm testing the task, and need it to be able to contact the view and get data back from it during the test, so I'm using a LiveServerTestCase. In theory I set up the database in the setUp function of my test case (I add a list of product instances) and then call the task, it does some stuff, and then calls the Django view through urllib (hitting the dev server set up by the LiveServerTestCase), getting a JSON list of product instances back. In practice, though, it looks like the products I add in setUp aren't visible to the view when it's called. It looks like the test case code is using one database (test_<my_database_name>) and the view running on the dev server is accessing another (the urllib call successfully contacts the view but can't find the product I've asked for). Any ideas why this may be the case? Might be relevant - we're testing on a MySQL db instead of the sqlite. Heading off two questions (but interested in comments if you think we're doing this wrong): I know It seems weird that the task accesses the view using urllib. We do this because the task usually calls one of a series of third party APIs to get info about a product, and if it cannot access these, it accesses our own Django database of products. The code that makes the urllib call is generic code that is agnostic of which case we're dealing with. These are integration tests so we'd prefer actually make the urllib call rather than mock it out
Aptana Studio 3 newproject error with Django
25,376,287
0
0
116
0
python,django,windows-7-x64,aptana3
After some searching, finally figured out that the default program to run the django-admin.py was aptana studio 3, even though the program had supposedly been uninstalled completely from my system. I changed the default program to be the python console launcher and now it works fine. There goes 2 hours down the drain..
0
0
0
0
2014-08-19T04:58:00.000
1
0
false
25,375,903
0
0
1
1
I am having an issue with starting a new project from the command prompt. After I have created a virtual env and activated the enviroment, when I enter in .\Scripts\django-admin.py startproject new_project, a popup window shows up which says "AptanaStudio3 executable launcher was unable to locate its companion shared library" I have tried uninstalling Aptana studio, but even when it is uninstalled, the error still occurs. Not sure what I need to do fix this. I have not unistalled/reinstalled python, i'm not even sure if that has anything to do with it. Many thanks in advance
Using etcd to manage Django settings
26,611,821
0
6
1,116
0
python,django,configuration,distributed,etcd
I haven't used CoreOS or Docker but have read a lot, and I think it's very sexy stuff. I guess the solution depends on how you set up your app. If you have the same sort of "touch-reload" support you see in many appservers (uWSGI, for example), you can set key_file in /etc/etcd/etcd.conf and make your appserver watch that. This feels a ton heavier than it should be, though. I'm quite sure someone with experience with the platform can come up with something much better.
0
0
0
0
2014-08-19T14:13:00.000
2
0
false
25,385,706
0
0
1
1
Let's say that I have a Django app, and I've offloaded environment variable storage to etcd. When I deploy a new server, the app can read from etcd, write the vars into (for example) a Python file that can be conditionally loaded on the app boot. This much is acceptable. When the configuration changes, however, I have no way of knowing. Afaik, etcd doesn't broadcast changes. Do I need to set up a daemon that polls and then reloads my app on value changes? Should I query etcd whenever I need to use one of these parameters? How do people handle this?
Whats the difference between a OneToOne, ManyToMany, and a ForeignKey Field in Django?
69,433,734
0
93
31,671
0
python,django,many-to-many,foreign-key-relationship,one-to-one
From my point of view, the difference between one-to-one and one-to-many is: One-To-One means one person can have only one passport. One-To-Many means one person can have many addresses (permanent address, office address, secondary address, and so on). If you query the parent model, you can follow the relation to all of its child objects.
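A compact sketch of all three field types side by side (the models are illustrative, Django 1.x style):

from django.db import models

class Passport(models.Model):
    number = models.CharField(max_length=20)

class Person(models.Model):
    name = models.CharField(max_length=50)
    # one-to-one: each person has exactly one passport, and vice versa
    passport = models.OneToOneField(Passport)

class Address(models.Model):
    # foreign key, i.e. one-to-many: one person, many addresses
    person = models.ForeignKey(Person, related_name='addresses')

class Group(models.Model):
    # many-to-many: a person joins many groups, a group has many people
    members = models.ManyToManyField(Person)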
0
0
0
0
2014-08-19T14:32:00.000
2
0
false
25,386,119
0
0
1
1
I'm having a little difficulty getting my head around relationships in Django models. Could someone explain what the difference is between a OneToOne, ManyToMany and ForeignKey?
BigQuery Api getQueryResults returning pageToken for 0 records
25,393,093
0
1
471
1
python,google-app-engine,google-bigquery
This is a known issue that has lingered for far far too long. It is fixed in this week's release, which should go live this afternoon or tomorrow.
0
1
0
0
2014-08-19T16:07:00.000
1
1.2
true
25,388,124
0
0
1
1
We have a query which sometimes returns 0 records when called. When you call getQueryResults on the jobId, it returns a valid pageToken with 0 rows. This is a bit unexpected, since technically there is no data. What's worse, if you keep supplying the pageToken for subsequent data pulls, it keeps giving zero rows with valid tokens at each page. If the query does return data initially with a pageToken and you keep using the pageToken for subsequent data pulls, it returns pageToken as None after the last page, giving a termination condition. The behavior here seems inconsistent. Is this a bug? Here is a sample job response: {u'kind': u'bigquery#getQueryResultsResponse', u'jobReference': {u'projectId': u'xxx', u'jobId': u'job_aUAK1qlMkOhqPYxwj6p_HbIVhqY'}, u'cacheHit': True, u'jobComplete': True, u'totalRows': u'0', u'pageToken': u'CIDBB777777QOGQFBAABBAAE', u'etag': u'"vUqnlBof5LNyOIdb3TAcUeUweLc/6JrAdpn-kvulQHoSb7ImNUZ-NFM"', u'schema': {......}} I am using Python and running queries on GAE using the BQ API.
Dynamically add to Django Model
25,393,815
3
0
106
0
python,django,django-south
It sounds like you want your program to add and delete fields from the model? That sounds like a bad idea. That would imply that your database schema will change dynamically under program control, which would be very unusual indeed. Think harder about what data you need to represent, and come up with a database schema that works for all of your data. Or, change to a non-SQL database, which means avoiding South altogether.
0
0
0
0
2014-08-19T22:05:00.000
1
0.53705
false
25,393,753
0
0
1
1
I need to dynamically (not by manually editing models.py) alter/add/remove fields from a Django model. Is this possible? Once the model is altered, will it persist? I then want to use South to run the database migration from the altered model.
In which design layer can I put jinja2?
25,407,418
1
0
54
0
python,google-app-engine,jinja2
These are very artificial distinctions, and it's a mistake to assume that all apps have each of these layers, or that any particular function will fit only into one of them. Jinja2 is a template language. It's firmly in the presentation layer. There isn't really any such thing as the data access layer. If you really need to put something here, one possibility would be whichever library you are using to access the data: ndb or the older db.
0
1
0
0
2014-08-20T14:20:00.000
1
1.2
true
25,407,197
0
0
1
1
I'm new in Python/GAE and jinja2, and I want to present a schema of this architecture with displaying that in Layered, like this: Presentation Layer: HTML+CSS+JQUERY Business Layer: webapp2 DAO Layer: (I don't know what I put here when it's Python, I find some exemples for java thay put here "JDO orJDO or low level API") Data Layer: appengine DataStore My questions: Regarding jinja2, where can I put it? What can I put in DAO layer for Python/GAE Thanks
Google+ Sign-In - Page accessible for logged in users only
25,419,890
1
0
73
0
javascript,python,google-app-engine,google-plus,google-signin
You cannot perform reliable access control using only client-side javascript. This is because since the javascript is executed on the user's browser, the user will be able to bypass any access control rule you've set there. You must perform your access control on server-side, in your case in Python code. Generally, people also perform some kind of access control check on the client side, not to prevent access, but for example to hide/disable buttons that the user cannot use.
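As a rough illustration of the server-side check, a webapp2 sketch (how the current user is resolved, e.g. by validating the Google+ token, is application-specific and left as a placeholder):

    import webapp2

    def login_required(handler_method):
        # Enforce the rule on the server, where the client cannot bypass it.
        def check_login(self, *args, **kwargs):
            if not self.get_current_user():
                self.redirect('/login')
                return
            return handler_method(self, *args, **kwargs)
        return check_login

    class AddStuffHandler(webapp2.RequestHandler):
        def get_current_user(self):
            # Placeholder: validate the session / Google+ token here.
            return None

        @login_required
        def get(self):
            self.response.write('Only logged-in users can add stuff.')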
0
0
1
0
2014-08-20T18:39:00.000
2
1.2
true
25,412,094
0
0
1
1
I decided to use social media benefits on my page and currently I'm implementing Google+ Sign-In. One of the pages on my website should be accessible for logged in users only (adding stuff to the page). I am logging user to website via JavaScript. I'm aware that javascript is executed on client-side but I am curious is it possible to restrict access to the certain page using only javascript.
Solr & User data
25,414,143
0
0
102
1
python,mysql,solr,django-haystack
I'd go with a modified version of the first one - it'll keep user specific data that's not going to be used for search out of the index (although if you foresee a case where you want to search for favourite'd articles, it would probably be an interesting field to have in the index) for now. For just display purposes like in this case, I'd take all the id's returned from Solr, fetch them in one SQL statement from the database and then set the UI values depending on that. It's a fast and easy solution. If you foresee that "search only in my fav'd articles" as a use case, I would try to get that information into the index as well (or other filter applications against whether a specific user has added the field as a favourite). I'd try to avoid indexing anything more than the user id that fav'd the article in that case. Both solutions would however work, although the latter would require more code - and the required response from Solr could grow large if a large number of users fav's an article, so I'd try to avoid having to return a set of userid's if that's the case (many fav's for a single article).
0
0
0
0
2014-08-20T19:55:00.000
1
1.2
true
25,413,343
0
0
1
1
Let's assume I am developing a service that provides a user with articles. Users can favourite articles and I am using Solr to store these articles for search purposes. However, when the user adds an article to their favourites list, I would like to be able to figure out out which articles the user has added to favourites so that I can highlight the favourite button. I am thinking of two approaches: Fetch articles from Solr and then loop through each article to fetch the "favourite-status" of this article for this specific user from MySQL. Whenever a user favourites an article, add this user's ID to a multi-valued column in Solr and check whether the ID of the current user is in this column or not. I don't know the capacity of the multivalued column... and I also don't think the second approach would be a "good practice" (saving user-related data in index). What other options do I have, if any? Is approach 2 a correct approach?
Prevent greenthread switch in eventlet
25,425,696
1
1
349
0
python,django,eventlet,green-threads
There is no such context manager, though you are welcome to contribute one. You have monkey patched everything, but you do not want to monkey patch socket in memcache client. Your options: monkey patch everything but socket, then patcher.import_patched particular modules. This is going to be very hard with Django/Tastypie. modify your memcache client to use eventlet.patcher.original('socket')
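A minimal sketch of the second option (how the unpatched module is handed to the memcache client depends on the client library, so that part is only indicated in a comment):

    import eventlet
    eventlet.monkey_patch()  # the rest of the app stays green

    # Fetch the original, blocking socket module; calls made through it
    # will block for the ~1ms memcache round trip instead of yielding.
    blocking_socket = eventlet.patcher.original('socket')

    # Hand `blocking_socket` to your memcache client in place of the
    # patched module (library-specific; shown here only as an idea):
    # memcache_client.socket_module = blocking_socket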
0
1
0
0
2014-08-20T21:04:00.000
1
1.2
true
25,414,394
0
0
1
1
I have a Django/Tastypie app where I've monkey patched everything with eventlet. I analysed performance during load tests while using both sync and eventlet worker clasees for gunicorn. I tested against sync workers to eliminate the effects of waiting for other greenthreads to switch back, and I found that the memcached calls in my throttling code only take about 1ms on their own. Rather than switch to another greenthread while waiting for this 1ms response, I'd rather just block at this one point. Is there some way to tell eventlet to not switch to another greenthread? Maybe a context manager or something?
Customized Execution status in Robot Framework
25,830,228
-1
1
3,561
0
python,robotframework
Actually, you can set tags to run whatever keyword you like (for sanity testing, regression testing...). Just go to your test script configuration and set the tags. Whenever you want to run, go to the Run tab and select the check-box "Only run tests with these tags" / "Skip tests with these tags", then click the Start button :) Robot Framework will select any keyword that matches and run it. Sorry, I don't have enough reputation to post images :(
0
0
0
1
2014-08-21T09:29:00.000
5
-0.039979
false
25,422,847
0
0
1
1
In Robot Framework, the execution status for each test case can be either PASS or FAIL. But I have a specific requirement to mark a few tests as NOT EXECUTED when they fail due to dependencies. I'm not sure how to achieve this. I need expert advice to move ahead.
Can't disable example app on Web2Py
25,433,301
0
0
196
0
python,web2py
An easy way is to change the last element in the path name to something that isn't a valid Python identifier. Web2py internally represents views, models, apps and other constructs using Python objects, and if you give something a name that isn't a valid identifier, web2py will pass over it. For example, change beautify to beautify.IGNORE and see what happens. I can't recall which objects have this effect immediately and which require the web2py server process to restart. I think (not sure) app name changes require a restart while views, controllers etc. do not.
0
0
0
0
2014-08-21T18:07:00.000
2
0
false
25,433,128
0
0
1
1
I've set up an instance of Web2Py on a hosted server and in the administrative interface, I've disabled the example app but it's still accessible. For example, (see what I did there?) if I type the address myserver.com/examples/template_examples/beautify then Web2Py happily dumps all sorts of nasty bits about my server onto the page for God and everybody to look at. How do I make a Web2Py installed application inactive without deleting it?
State considerations when converting a python desktop application into a web app?
25,447,817
1
0
84
0
python,django,apache,web-applications,desktop-application
This question is a bit vague: some specifics, or even some code, would have helped. There are two separate differences between running this as a desktop app and running it on the web. Lack of state is one issue, but it seems like the much more significant difference is the per-user configuration. You need some way of storing that configuration for each user. Where you put that depends on how persistent you want it to be: the session is great for things that need to be persisted for an actual session, ie the time the user is actively using the app at one go, but don't necessarily need to be persisted beyond that or if the user logs in from a new machine. Otherwise, storing it explicitly in the database attached to the user record is a good way to go. In terms of "what happens between requests", the answer as Bruno points out is "nothing". Each request is really a blank state: the only way to keep state is with a cookie, which is abstracted by the session framework. What you definitely don't want to do is try to keep any kind of global or module-level per-user state in your app, as that can't work: any state really is global, so applies to all users, and therefore is clearly not what you want here.
0
0
0
0
2014-08-22T10:35:00.000
1
0.197375
false
25,445,041
0
0
1
1
I'm confused as to what considerations should be taken into account when converting a desktop application into a web app. We have a desktop app written in python using the wxPython library for the GUI and its a very traditional application which sets up the main window and calls the app.Mainloop() method to sustain the GUI and respond to events. The application itself is a configuration utility that simply accepts file(s) and allows the user to configure it. Naturally, the program keeps track of all the changes made and responds to events in light of those changes. I intend to serve this utility as part of a larger application using the Django framework hosted on an Apache server and expect many users to use it simultaneously. Once I remove the app.MainLoop() directive, as expected, running the app simply goes through the code once and exits. This is obviously not what I need, the application needs to remember the state. So far, I've started to identify and decouple all GUI code from the core of the application so I can effectively reuse the core and decided to write the UI in JavaScript using frameworks such as jQuery to handle GUI events and such. Two obvious options of storing the state would be sessions and databases but I'm somewhat stuck while forming a big picture of how this will all work. What will happen between requests in terms of Django views? I would really appreciate it if someone could shed some light on the overall workflow. Thank you.
Truncate logging of sql queries in Django
25,447,136
2
2
142
1
python,django
It's quite simple actually: in settings.py, let's say your logger is based on a handler whose formatter is named 'simple'. 'formatters': { ... 'simple': { 'format': '%(asctime)s %(message).150s' }, ... }, The message will now be truncated to the first 150 characters. Playing with handlers will allow you to specify this parameter for each logger. Thanks Python!
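Put together as a full Django LOGGING dict, a minimal sketch (handler and logger names are illustrative):

    # settings.py
    LOGGING = {
        'version': 1,
        'formatters': {
            # %(message).150s keeps only the first 150 characters.
            'simple': {'format': '%(asctime)s %(message).150s'},
        },
        'handlers': {
            'console': {
                'class': 'logging.StreamHandler',
                'formatter': 'simple',
            },
        },
        'loggers': {
            # The logger Django uses for SQL queries (emitted when DEBUG=True).
            'django.db.backends': {
                'handlers': ['console'],
                'level': 'DEBUG',
            },
        },
    }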
0
0
0
0
2014-08-22T12:15:00.000
1
1.2
true
25,446,832
0
0
1
1
Logging SQL queries is useful for debugging, but in some cases it's useless to log the whole query, especially for big inserts. In this case, displaying only the first N characters would be enough. Is there a simple way to truncate SQL queries when they are logged?
ipython %bookmark error: quotes don't fix this
32,031,738
0
0
92
0
ipython,ipython-magic
Using "double-quotes" fixed it for me. Have a go with: %bookmark md "C:/Users/user1/my documents"
0
0
0
0
2014-08-25T18:15:00.000
1
0
false
25,491,977
0
0
1
1
This is probably a stupid question, but I can't find an answer. %bookmark dl 'C:/Users/user1/Downloads' works, but %bookmark md 'C:/Users/user1/my documents' doesn't work, throwing error: "UsageError: %bookmark: too many arguments" How to fix this?
Display a contantly updated text file in a web user interface using Python flask framework
25,496,797
0
0
268
0
python,shell,flask
One way I can think of doing this is to refresh the page. So, you could set the page to refresh itself every X seconds. You would hope that the file you are reading is not large, though, or it will impact performance. It is better to keep the output in memory.
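For the Ajax-polling variant mentioned in the question, a minimal Flask sketch (the log path is an assumption):

    from flask import Flask, jsonify

    app = Flask(__name__)
    LOG_PATH = '/tmp/myscript.log'  # assumed location of the redirected output

    @app.route('/log')
    def log_tail():
        # Return the last 50 lines; the client polls this endpoint every
        # X seconds and updates a text area with the result.
        with open(LOG_PATH) as f:
            lines = f.readlines()
        return jsonify(lines=lines[-50:])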
0
1
0
0
2014-08-25T22:40:00.000
1
0
false
25,495,410
0
0
1
1
In my project workflow, I am invoking a sh script from a Python script file. I am planning to introduce a web user interface for this, hence I opted for the Flask framework. I am yet to figure out how to display the terminal output of the shell script invoked by my Python script in a component like a text area or label. This file is a log file which is constantly updated until the script run is completed. The solution I thought of was to redirect terminal output to a text file, read the text file every X seconds and display the content. I can also do it via Ajax from my web application. Is there any other prescribed way to achieve this? Thanks
Javascript execute python file
25,527,346
0
0
556
0
javascript,python,ajax,raspberry-pi,sudo
Solved it by spawning my python script from nodejs and communicating in realtime to my webclient using socket.io.
0
0
0
1
2014-08-26T17:25:00.000
2
1.2
true
25,511,708
0
0
1
1
I'm building a house monitor using a Raspberry Pi and midori in kiosk mode. I wrote a python script which reads out some GPIO pins and prints a value. Based on this value I'd like to specify a javascript event. So when for example some sensor senses it's raining I want the browser to immediately update its GUI. What's the best way to do this? I tried executing the python script in PHP and accessing it over AJAX, but this is too slow.
Scraping websites - Online or offline data processing is better
25,518,599
0
0
259
0
python,excel
Do it at the same time. It will probably only take a handful of lines of code. There's no reason to do the work of walking over the whole file twice.
0
0
0
0
2014-08-27T03:14:00.000
1
1.2
true
25,518,381
0
0
1
1
I am scraping websites for a research project using Python BeautifulSoup. I have scraped a few thousand records and put them in Excel. In essence, I want to extract a substring of text (e.g. "python" from a post title "Introduction to python for dummies"). The post title is scraped and stored in a cell in Excel. I want to extract "python" and put it in another cell. I need some advice on whether it is better to do the extraction while scraping OR do it offline in Excel. Since this is a research project, there is no need for real-time speed; I am looking at saving my effort. Another related question is whether Python can be used to do the extraction in offline mode - i.e. open Excel, do the extraction, close Excel. Any help or advice is really appreciated.
Scrapy - use multiple IP Addresses for a host
25,521,907
1
0
953
0
python,scrapy,web-crawler
You can just set your DNS names manually in your hosts file. On windows this can be found at C:\Windows\System32\Drivers\etc\hosts and on Linux in /etc/hosts
0
0
1
0
2014-08-27T07:59:00.000
1
0.197375
false
25,521,837
0
0
1
1
I wasn't able to find anything in the docs/SO relating to my question. Basically I'm crawling a website with 8 or so subdomains. They are all using Akamai/CDN. My question is: if I can find the IPs of a few different Akamai data centres, can I somehow explicitly say that this subdomain should use this IP for the hostname etc., basically overriding the automatic DNS resolving? This would allow greater efficiency, and I would imagine I'd be less likely to be throttled, as I'd be distributing the crawling. Thanks
google app engine datastore
25,538,591
1
0
28
0
database,google-app-engine,python-2.7
Assuming that you are talking about an entity that you have on your local machine but not on App Engine once you deploy the app: your local datastore is for testing purposes only and nothing from it will be deployed to GAE. You will need to re-create all datastore data once your app is deployed if it wasn't there already.
0
1
0
0
2014-08-27T18:15:00.000
1
0.197375
false
25,534,295
0
0
1
1
I have a field which is blobstore.BlobReferenceProperty(). In the local datastore viewer its value appears, but when I deploy the application, in the Google datastore the value is '{}', and when I click on an entity it appears that it has an unknown property. Can anyone help me?
Generating user content in Jenkins
25,563,114
0
0
879
0
python,jenkins,jython
I have found the way to do this: install Scriptler plugin write Groovy script that implements some additional functionality needed by Jenkins users write webpage that uses Javascript + jQuery to use form elements' values for GET/POST to Groovy script, update the webpage dynamically (say by replacing html body or adding to it), put it in userContent grant selected Jenkins users Run script permission in the Jenkins' security matrix config
0
0
0
0
2014-08-28T07:26:00.000
1
0
false
25,543,058
0
0
1
1
For non-technical reasons I need to keep generating user content in Jenkins. Theoretically I could do smth like: have parameterized build provide webpage in user content folder that does GET/POST to parameterized build display webpage with results (I don't even know if it's possible) UPDATE: That is, I want to run some dynamic webpage in Jenkins (yes I know it does not look very good). Specifically, Jenkins users after logging in need some additional functionality like generating paths and hashes from job workspaces and have them displayed and running such logic as a separate Jenkins job is not very attractive (user content folder is simply the most appropriate place for such stuff I think). Typically, I'd provide such features using say simple Django webpage, but that's not an option for various reasons.
Access Django app from other computers
70,518,273
0
23
29,704
0
python,django,webserver,localhost
Very simple: first you need to add the IP to ALLOWED_HOSTS, e.g. ALLOWED_HOSTS = ['*']. Then execute python manage.py runserver 0.0.0.0:8000. Now you can access the local project from a different system on the same network.
0
0
0
0
2014-08-28T13:33:00.000
7
0
false
25,550,116
0
0
1
3
I am developing a web application on my local computer in Django. Now I want my webapp to be accessible to other computers on my network. We have a common network drive "F:/". Should I place my files on this drive or can I just write something like "python manage.py runserver test_my_app:8000" in the command prompt to let other computers in the network access the web server by writing "test_my_app:8000" in the browser address field? Do I have to open any ports and how can I do this?
Access Django app from other computers
57,634,195
11
23
29,704
0
python,django,webserver,localhost
Just add your own IP Address to ALLOWED_HOSTS ALLOWED_HOSTS = ['192.168.1.50', '127.0.0.1', 'localhost'] and run your server python manage.py runserver 192.168.1.50:8000 and access your own server to other computer in your network
0
0
0
0
2014-08-28T13:33:00.000
7
1
false
25,550,116
0
0
1
3
I am developing a web application on my local computer in Django. Now I want my webapp to be accessible to other computers on my network. We have a common network drive "F:/". Should I place my files on this drive or can I just write something like "python manage.py runserver test_my_app:8000" in the command prompt to let other computers in the network access the web server by writing "test_my_app:8000" in the browser address field? Do I have to open any ports and how can I do this?
Access Django app from other computers
43,633,252
6
23
29,704
0
python,django,webserver,localhost
Run the application with IP address then access it in other machines. python manage.py runserver 192.168.56.22:1234 Both machines should be in same network, then only this will work.
0
0
0
0
2014-08-28T13:33:00.000
7
1
false
25,550,116
0
0
1
3
I am developing a web application on my local computer in Django. Now I want my webapp to be accessible to other computers on my network. We have a common network drive "F:/". Should I place my files on this drive or can I just write something like "python manage.py runserver test_my_app:8000" in the command prompt to let other computers in the network access the web server by writing "test_my_app:8000" in the browser address field? Do I have to open any ports and how can I do this?
What are the steps to create or generate an Excel sheet in OpenERP?
25,993,349
1
0
596
1
python,openerp
You can do it easily with the Python library called XlsxWriter. Just download it and add it to the OpenERP server, then look at the XlsxWriter documentation; there are also other Python libraries for generating Xlsx reports.
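A minimal XlsxWriter sketch (the file name and rows are placeholders; wiring it into the wizard's button method is OpenERP-specific and omitted):

    import xlsxwriter

    # Write the queried rows (assumed to be a list of tuples) to a sheet.
    workbook = xlsxwriter.Workbook('report.xlsx')
    worksheet = workbook.add_worksheet('Data')
    rows = [('Name', 'Amount'), ('Foo', 10), ('Bar', 20)]  # placeholder data
    for r, row in enumerate(rows):
        for c, value in enumerate(row):
            worksheet.write(r, c, value)
    workbook.close()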
0
0
0
0
2014-08-28T15:00:00.000
1
1.2
true
25,552,075
0
0
1
1
I need to know: what are the steps to generate an Excel sheet in OpenERP? To put it another way, I want to generate an Excel sheet for data that I have retrieved from different tables through queries, with a function that I call from a button on a wizard. When I click on the button, an Excel sheet should be generated. I have installed OpenOffice; the problem is I don't know how to create that sheet and put data on it. Will you please tell me the steps?
Pyinstaller scrapy error:
52,980,333
0
1
2,324
0
python,windows-7,scrapy,pyinstaller,scrapy-spider
You need to create a scrapy folder under the same directory as runspider.exe (the exe file generated by pyinstaller). Then copy the "VERSION" and "mime.types" files (default path: %USERPROFILE%\AppData\Local\Programs\Python\Python37\Lib\site-packages\scrapy) into the scrapy folder you just created. (If you only copy "VERSION", you will be prompted to find the "mime.types" file.)
0
0
0
0
2014-08-28T20:43:00.000
2
0
false
25,557,693
0
0
1
1
After installing all dependencies for scrapy on windows 32bit, I tried to build an executable from my scrapy spider. The spider script "runspider.py" works ok when running as "python runspider.py". Building the executable with "pyinstaller --onefile runspider.py": C:\Users\username\Documents\scrapyexe>pyinstaller --onefile runspider.py 19 INFO: wrote C:\Users\username\Documents\scrapyexe\runspider.spec 49 INFO: Testing for ability to set icons, version resources... 59 INFO: ... resource update available 59 INFO: UPX is not available. 89 INFO: Processing hook hook-os 279 INFO: Processing hook hook-time 279 INFO: Processing hook hook-cPickle 380 INFO: Processing hook hook-_sre 561 INFO: Processing hook hook-cStringIO 700 INFO: Processing hook hook-encodings 720 INFO: Processing hook hook-codecs 1351 INFO: Extending PYTHONPATH with C:\Users\username\Documents\scrapyexe 1351 INFO: checking Analysis 1351 INFO: building Analysis because out00-Analysis.toc non existent 1351 INFO: running Analysis out00-Analysis.toc 1351 INFO: Adding Microsoft.VC90.CRT to dependent assemblies of final executable 1421 INFO: Searching for assembly x86_Microsoft.VC90.CRT_1fc8b3b9a1e18e3b_9.0.21022.8_none ... 1421 INFO: Found manifest C:\Windows\WinSxS\Manifests\x86_microsoft.vc90.crt_1fc8b3b9a1e18e3b_9.0.21022.8_none_bcb86ed6ac711f91.manifest 1421 INFO: Searching for file msvcr90.dll 1421 INFO: Found file C:\Windows\WinSxS\x86_microsoft.vc90.crt_1fc8b3b9a1e18e3b_9.0.21022.8_none_bcb86ed6ac711f91\msvcr90.dll 1421 INFO: Searching for file msvcp90.dll 1421 INFO: Found file C:\Windows\WinSxS\x86_microsoft.vc90.crt_1fc8b3b9a1e18e3b_9.0.21022.8_none_bcb86ed6ac711f91\msvcp90.dll 1421 INFO: Searching for file msvcm90.dll 1421 INFO: Found file C:\Windows\WinSxS\x86_microsoft.vc90.crt_1fc8b3b9a1e18e3b_9.0.21022.8_none_bcb86ed6ac711f91\msvcm90.dll 1592 INFO: Analyzing C:\python27\lib\site-packages\PyInstaller\loader\_pyi_bootstrap.py 1621 INFO: Processing hook hook-os 1661 INFO: Processing hook hook-site 1681 INFO: Processing hook hook-encodings 1872 INFO: Processing hook hook-time 1872 INFO: Processing hook hook-cPickle 1983 INFO: Processing hook hook-_sre 2173 INFO: Processing hook hook-cStringIO 2332 INFO: Processing hook hook-codecs 2963 INFO: Processing hook hook-pydoc 3154 INFO: Processing hook hook-email 3255 INFO: Processing hook hook-httplib 3305 INFO: Processing hook hook-email.message 3444 INFO: Analyzing C:\python27\lib\site-packages\PyInstaller\loader\pyi_importers.py 3535 INFO: Analyzing C:\python27\lib\site-packages\PyInstaller\loader\pyi_archive.py 3615 INFO: Analyzing C:\python27\lib\site-packages\PyInstaller\loader\pyi_carchive.py 3684 INFO: Analyzing C:\python27\lib\site-packages\PyInstaller\loader\pyi_os_path.py 3694 INFO: Analyzing runspider.py 3755 WARNING: No django root directory could be found!
3755 INFO: Processing hook hook-django 3785 INFO: Processing hook hook-lxml.etree 4135 INFO: Processing hook hook-xml 4196 INFO: Processing hook hook-xml.dom 4246 INFO: Processing hook hook-xml.sax 4296 INFO: Processing hook hook-pyexpat 4305 INFO: Processing hook hook-xml.dom.domreg 4736 INFO: Processing hook hook-pywintypes 5046 INFO: Processing hook hook-distutils 7750 INFO: Hidden import 'codecs' has been found otherwise 7750 INFO: Hidden import 'encodings' has been found otherwise 7750 INFO: Looking for run-time hooks 7750 INFO: Analyzing rthook C:\python27\lib\site-packages\PyInstaller\loader\rthooks\pyi_rth_twisted.py 8111 INFO: Analyzing rthook C:\python27\lib\site-packages\PyInstaller\loader\rthooks\pyi_rth_django.py 8121 INFO: Processing hook hook-django.core 8131 INFO: Processing hook hook-django.core.management 8401 INFO: Processing hook hook-django.core.mail 8862 INFO: Processing hook hook-django.db 9112 INFO: Processing hook hook-django.db.backends 9153 INFO: Processing hook hook-django.db.backends.mysql 9163 INFO: Processing hook hook-django.db.backends.mysql.base 9163 INFO: Processing hook hook-django.db.backends.oracle 9183 INFO: Processing hook hook-django.db.backends.oracle.base 9253 INFO: Processing hook hook-django.core.cache 9874 INFO: Processing hook hook-sqlite3 10023 INFO: Processing hook hook-django.contrib 10023 INFO: Processing hook hook-django.contrib.sessions 11887 INFO: Using Python library C:\Windows\system32\python27.dll 12226 INFO: Warnings written to C:\Users\username\Documents\scrapyexe\build\runspider\warnrunspider.txt 12256 INFO: checking PYZ 12256 INFO: rebuilding out00-PYZ.toc because out00-PYZ.pyz is missing 12256 INFO: building PYZ (ZlibArchive) out00-PYZ.toc 16983 INFO: checking PKG 16993 INFO: rebuilding out00-PKG.toc because out00-PKG.pkg is missing 16993 INFO: building PKG (CArchive) out00-PKG.pkg 19237 INFO: checking EXE 19237 INFO: rebuilding out00-EXE.toc because runspider.exe missing 19237 INFO: building EXE from out00-EXE.toc 19237 INFO: Appending archive to EXE C:\Users\username\Documents\scrapyexe\dist\runspider.exe running built exe "runspider.exe": C:\Users\username\Documents\scrapyexe\dist>runspider.exe Traceback (most recent call last): File "<string>", line 2, in <module> File "C:\python27\Lib\site-packages\PyInstaller\loader\pyi_importers.py", line 270, in load_module exec(bytecode, module.__dict__) File "C:\Users\username\Documents\scrapyexe\build\runspider\out00-PYZ.pyz\scrapy", line 10, in <module> File "C:\Users\username\Documents\scrapyexe\build\runspider\out00-PYZ.pyz\pkgutil", line 591, in get_data File "C:\python27\Lib\site-packages\PyInstaller\loader\pyi_importers.py", line 342, in get_data fp = open(path, 'rb') IOError: [Errno 2] No such file or directory: 'C:\Users\username\AppData\Local\Temp\_MEI15522\scrapy\VERSION' I'm extremely grateful for any kind of help. I need to know how to build a standalone exe from a scrapy spider for windows. Thank you very much for any help.
BeautifulSoup crawling cookies
25,565,899
-1
0
4,870
0
python,drupal,cookies,web-crawler
I don't think you need BeautifulSoup for this. You could do this with urllib2 for connection and cookielib for operations on cookies.
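A minimal Python 2 sketch of that approach:

    import cookielib
    import urllib2

    # Collect every cookie set while fetching a page (Python 2 modules,
    # as suggested above).
    jar = cookielib.CookieJar()
    opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))
    opener.open('http://example.com/')  # placeholder URL
    for cookie in jar:
        print cookie.name, cookie.domain, cookie.expires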
0
0
1
0
2014-08-29T09:48:00.000
3
-0.066568
false
25,565,774
0
0
1
1
I've been tasked with creating a cookie audit tool that crawls the entire website and gathers data on all cookies on the page and categorizes them according to whether they follow user data or not. I'm new to Python but I think this will be a great project for me, would beautifulsoup be a suitable tool for the job? We have tons of sites and are currently migrating to Drupal so it would have to be able to scan Polopoly CMS and Drupal.
How do you use Python with C and Java simultaneously?
25,577,983
1
0
152
0
python,c,language-interoperability
For C, you can use the ctypes module or SWIG. For Java, Jython is a good choice.
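For the C side, a minimal ctypes sketch (libm.so.6 is a Linux assumption; the library name differs per platform):

    from ctypes import CDLL, c_double

    libm = CDLL('libm.so.6')  # on Linux; use your platform's math library
    libm.cos.restype = c_double
    libm.cos.argtypes = [c_double]
    print libm.cos(0.0)  # -> 1.0

The same pattern works for your own compiled shared library: declare argtypes/restype, then call the function as a normal Python callable.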
1
0
0
1
2014-08-29T23:24:00.000
2
0.099668
false
25,577,470
0
0
1
1
Edit: The question is: -How do you launch C code from python? (say, in a function) -How do you load Java code into python? (perhaps in a class?) -Can you simply work with these two in a python program or are there special considerations? -Will it be worth it, or will integrating cause too much lag? Being familiar with all three languages (C, Java and Python) and knowing that Python supports C libraries (and apparently can integrate with Java also), I was wondering if Python could integrate a program using both languages? What I would like is fast, flexible C functions while taking advantage of Java's extensive front-end libraries, coordinating the two in Python's clean, readable syntax. Is this possible? Edit: To be more specific, I would like to write and execute python code that integrates my own fast C functions. Then, call Java libraries like swing to create the user interface and handle networking, probably taking advantage of XML as well to aid in file manipulation.
Need help downloading bucket from Google Cloud
26,923,909
0
1
775
0
python,download,cloud,bucket,gsutil
The first thing you need to know is that the gsutil tool works only with Python version 2.7 or lower on Windows. Once you have the correct Python version, please follow these steps if you are a Windows user: open a command prompt and switch to your gsutil directory using: -- cd\ -- cd gsutil Once you are in the gsutil directory, execute the following command: python gsutil config -b This will open a link in a browser requesting access to your Google account. Please make sure you are logged into Google from the account you want to use to access cloud storage, and grant access. Once done, this will give you a KEY (authorization code). Copy that key and paste it back into your command prompt. Hit enter, and now it will ask you for a PROJECT-ID. Navigate to your cloud console and provide the PROJECT-ID. If successful, this will create a .boto file in c:\users. Now you are ready to access your private buckets from the cloud console. For this, use the following command: C:\python27>python c:\gsutil\gsutil cp -r gs://your_bucket_id/path_to_file path_to_save_files
0
1
0
0
2014-09-01T14:46:00.000
2
0
false
25,608,336
0
0
1
1
My computer crashed and I need to download everything I stored on the Google Cloud. I am not a computer tech and I can't seem to find a way to download whole buckets from Google Cloud. I have tried to follow the instructions given in the Google help docs. I have downloaded and installed Python and I downloaded gsutil and followed the instructions to put it in my c:\ drive (I can see it there). When I go to the command prompt and type cd \gsutil the next prompt says "c:\gsutil>" but I'm not sure what to do with that. When I type "gsutil config" it says "file 'c:\gsutil\gsutil.py", line 2 SyntaxError: encoding problem utf8". When I type "python gsutil" (which the instructions said would give me a list of commands) it says "'python' is not recognized as an internal or external command, operable program or batch file" even though I did the full installation process for Python. Someone suggested a more user-friendly program called Cloudberry Explorer which I downloaded and installed, but the list of sources I can set up does not include Google Cloud. Can anyone help?
Maintaining jobs history in apscheduler
25,633,337
1
2
472
0
python,scheduler,apscheduler
If you want such extra functionality, add the appropriate event listeners to the scheduler to detect the adding and any modifications to a job. In the event listener, get the job from the scheduler and store it wherever you want. They are serializable btw.
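A sketch of that listener approach for APScheduler 3.x (`scheduler` and the `history` MongoDB collection are assumed to exist in your app):

    from apscheduler.events import EVENT_JOB_EXECUTED, EVENT_JOB_ERROR

    def record_run(event):
        # Persist whatever history you need; `history` is a placeholder
        # for your own MongoDB collection.
        history.insert({
            'job_id': event.job_id,
            'run_time': event.scheduled_run_time,
            'failed': event.exception is not None,
        })

    scheduler.add_listener(record_run, EVENT_JOB_EXECUTED | EVENT_JOB_ERROR)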
0
1
0
0
2014-09-02T10:18:00.000
1
0.197375
false
25,621,035
0
0
1
1
I am using apscheduler to schedule my scrapy spiders. I need to maintain history of all the jobs executed. I am using mongodb jobstore. By default, apscheduler maintains only the details of the currently running job. How can I make it to store all instances of a particular job?
How can I switch my Django project's database engine from Sqlite to MySQL?
25,630,191
-1
0
4,742
1
python,mysql,django,sqlite,mysql-python
Try the following steps: 1. Change DATABASES in settings.py to the MySQL engine 2. Run $ ./manage.py syncdb
0
0
0
0
2014-09-02T17:31:00.000
2
-0.099668
false
25,629,092
0
0
1
1
I need help switching my database engine from sqlite to mysql. manage.py datadump is returning the same error that pops up when I try to do anything else with manage.py : ImproperlyConfigured: Error loading MySQL module, No module named MySQLdb. This django project is a team project. I pulled new changes from bitbucket and our backend has a new configuration. This new configuration needs mysql (and not sqlite) to work. Our lead dev is sleeping right now. I need help so I can get started working again. Edit: How will I get the data in the sqlite database file into the new MySQL Database?
Same Kind Entities Groups
25,632,434
0
0
52
0
python,google-app-engine,data-structures,google-cloud-datastore,app-engine-ndb
Why not just put a boolean in your "BlogPost" entity: 0 if it's past, 1 if it's future? That will let you query them separately easily.
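A sketch with ndb (property names are illustrative):

    from google.appengine.ext import ndb

    class BlogPost(ndb.Model):
        title = ndb.StringProperty()
        publish_at = ndb.DateTimeProperty()
        is_future = ndb.BooleanProperty(default=True)

    # Query each "group" separately:
    future_posts = BlogPost.query(BlogPost.is_future == True).fetch()
    past_posts = BlogPost.query(BlogPost.is_future == False).fetch()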
0
0
0
0
2014-09-02T21:01:00.000
2
0
false
25,632,301
0
0
1
1
Let's take an example on which I run a blog that automatically updates its posts. I would like to keep an entity of class(=model) BlogPost in two different "groups", one called "FutureBlogPosts" and one called "PastBlogPosts". This is a reasonable division that will allow me to work with my blog posts efficiently (query them separately etc.). Basically the problem is the "kind" of my model will always be "BlogPost". So how can I separate it into two different groups? Here are the options I found so far: Duplicating the same model class code twice (once FutureBlogPost class and once PastBlogPost class (so their kinds will be different)) -- seems quite ridiculous. Putting them under different anchestors (FutureBlogPost, "SomeConstantValue", BlogPost, #id) but this method also has its implications (1 write per second?) and also the whole ancestor-child relationship doesn't seem fit here. (and why do I have to use "SomeConstantValue" if I choose that option?) Using different namespaces -- seems too radical for such a simple separation What is the right way to do it?
Stop Automatic Lead Email
25,640,818
0
0
278
0
python,email,openerp,openerp-7
You can check the automated actions in OpenERP in the Settings/Technical/Scheduler/Scheduled Actions menu. Look for the actions that read incoming e-mails and deactivate them.
0
0
0
0
2014-09-03T07:25:00.000
3
0
false
25,638,581
0
0
1
2
We are using OpenERP 7 to manage leads within our organisation. Leads are created by incoming emails. When assigning to a different sales person, the sales person gets an email with the original email and the from address is the original person that emailed it. This is a problem because it looks like the customer emailed them directly and encourages the sales person to manage the lead from their email, rather than sending responses from the OpenERP system. How can I stop this email from being sent? I want to make my own template and use an automatic action to send a notification. There is no automatic action sending this email. I believe it is somewhere in the python code.
Stop Automatic Lead Email
25,680,204
0
0
278
0
python,email,openerp,openerp-7
I found a hack solution which I hope someone can improve. Basically the email comes in and adds an entry to the table mail_message. The type is set as "email" and this seems to be the issue. If I change it to "notification", the original email does not get sent to the newly assigned salesperson which is the behaviour that I want. Create a server action on the incoming email server that executes the following python code: cr.execute("UPDATE mail_message SET type = 'notification' WHERE model = 'crm.lead' AND res_id = %s AND type = 'email' ", (object.id, ))
0
0
0
0
2014-09-03T07:25:00.000
3
0
false
25,638,581
0
0
1
2
We are using OpenERP 7 to manage leads within our organisation. Leads are created by incoming emails. When assigning to a different sales person, the sales person gets an email with the original email and the from address is the original person that emailed it. This is a problem because it looks like the customer emailed them directly and encourages the sales person to manage the lead from their email, rather than sending responses from the OpenERP system. How can I stop this email from being sent? I want to make my own template and use an automatic action to send a notification. There is no automatic action sending this email. I believe it is somewhere in the python code.
Taking screenshot of particular div element including the area inside scroll region using selenium and python
25,644,516
0
0
1,055
0
python,selenium,screenshot,python-imaging-library
You can scroll with the driver.execute_script method and then take a screenshot. I scroll some modal windows this way with jQuery: driver.execute_script("$('.ui-window-wrap:visible').scrollTop(document.body.scrollHeight);")
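Combined with the PIL cropping from the question, a rough sketch (the element locator is an assumption; for content taller than the viewport you would still screenshot in steps and stitch the pieces together):

    from selenium import webdriver
    from PIL import Image

    driver = webdriver.Firefox()
    driver.get('http://example.com/')  # placeholder URL
    element = driver.find_element_by_class_name('container')  # assumed class

    # Scroll the element's region into view, then screenshot and crop.
    driver.execute_script('arguments[0].scrollIntoView();', element)
    driver.save_screenshot('page.png')

    left, top = element.location['x'], element.location['y']
    width, height = element.size['width'], element.size['height']
    Image.open('page.png').crop(
        (left, top, left + width, top + height)).save('element.png')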
0
0
1
0
2014-09-03T12:12:00.000
1
0
false
25,643,996
0
0
1
1
I need to take a screenshot of a particular given DOM element, including the area inside the scroll region. I tried taking a screenshot of the entire web page using selenium and cropping the image using the Python Imaging Library with the dimensions given by selenium, but I couldn't figure out a way to capture the area under the scroll region. For example, I have a class element container in my page and its height is dynamic based on the content. I need to take a screenshot of it entirely, but the resulting image skips the region inside the scrollbar, and the cropped image ends up with just the scroll bar in it. Is there any way to do this? A solution using selenium is preferable; if it cannot be done with selenium, an alternate solution will also do.
How to scrape tag information from questions on Stack Exchange
25,644,601
0
0
191
0
python,tags,extract
Visit the site to find the URL that shows the information you want, then look at the page source to see how it has been formatted.
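As a sketch, assuming requests + BeautifulSoup, and that tags are rendered with the post-tag CSS class (check the actual page source, since the markup can change):

    import requests
    from bs4 import BeautifulSoup

    # Example question-list page; adjust to the Stack Exchange site you want.
    url = 'http://math.stackexchange.com/questions'
    soup = BeautifulSoup(requests.get(url).text)
    for summary in soup.select('.question-summary'):
        tags = [a.text for a in summary.select('a.post-tag')]
        print tags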
0
0
1
0
2014-09-03T12:32:00.000
3
0
false
25,644,432
0
0
1
1
My problem is that I want to create a data base of all of the questions, answers, and most importantly, the tags, from a certain (somewhat small) Stack Exchange. The relationships among tags (e.g. tags more often used together have a strong relation) could reveal a lot about the structure of the community and popularity or interest in certain sub fields. So, what is the easiest way to go through a list of questions (that are positively ranked) and extract the tag information using Python?
Django 1.7 - migrations from South
25,648,138
3
0
192
0
python,django,django-south
From a blog post I can't find anymore, the best way is to create two distinct directories: one, new_migrations, which will hold the Django 1.7 migration files, and another, old_migrations, which will handle (if you need to) the downgrade part. In order to do it, move your migrations folder to old_migrations, then recreate your whole schema with the built-in migrations :) In case of a downgrade, just move back your old directory and use South as before.
0
0
0
0
2014-09-03T13:11:00.000
1
1.2
true
25,645,275
0
0
1
1
I have a project based on Django 1.6 with South. I wonder, is it possible to upgrade my project to Django 1.7 with the new built-in database migration system and keep the possibility of downgrading the database to previous states?
How to move a model between two Django apps (Django 1.7)
43,198,881
0
147
38,041
0
python,mysql,database,django,schema-migration
change the names of the old models to 'model_name_old' makemigrations make new models named 'model_name_new' with identical relationships on the related models (e.g. the user model now has user.blog_old and user.blog_new) makemigrations write a custom migration that migrates all the data to the new model tables test the hell out of these migrations by comparing backups with new db copies before and after running the migrations when all is satisfactory, delete the old models makemigrations change the new models to the correct name: 'model_name_new' -> 'model_name' test the whole slew of migrations on a staging server take your production site down for a few minutes in order to run all migrations without users interfering Do this individually for each model that needs to be moved. I wouldn't suggest doing what the other answer says by changing to integers and back to foreign keys. There is a chance that new foreign keys will be different and rows may have different IDs after the migrations, and I didn't want to run any risk of mismatching ids when switching back to foreign keys.
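For the custom data-migration step, a RunPython sketch (app, model and field names follow the answer's naming convention and are otherwise assumptions):

    from django.db import migrations

    def copy_rows(apps, schema_editor):
        # Names are illustrative; adjust to your app and fields.
        Old = apps.get_model('myapp', 'BlogOld')
        New = apps.get_model('myapp', 'BlogNew')
        for old in Old.objects.all():
            New.objects.create(id=old.id, title=old.title,
                               user_id=old.user_id)

    class Migration(migrations.Migration):
        dependencies = [('myapp', '0002_add_new_models')]  # placeholder
        operations = [migrations.RunPython(copy_rows)]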
0
0
0
0
2014-09-03T15:36:00.000
11
0
false
25,648,393
0
0
1
2
So about a year ago I started a project, and like all new developers I didn't really focus too much on the structure. However, now that I am further along with Django, it has started to appear that my project layout, mainly my models, is horrible in structure. I have models mainly held in a single app, and really most of these models should be in their own individual apps. I did try to resolve this and move them with South, however I found it tricky and really difficult due to foreign keys etc. However, given Django 1.7 and its built-in support for migrations, is there a better way to do this now?
How to move a model between two Django apps (Django 1.7)
33,096,296
0
147
38,041
0
python,mysql,database,django,schema-migration
Lets say you are moving model TheModel from app_a to app_b. An alternate solution is to alter the existing migrations by hand. The idea is that each time you see an operation altering TheModel in app_a's migrations, you copy that operation to the end of app_b's initial migration. And each time you see a reference 'app_a.TheModel' in app_a's migrations, you change it to 'app_b.TheModel'. I just did this for an existing project, where I wanted to extract a certain model to an reusable app. The procedure went smoothly. I guess things would be much harder if there were references from app_b to app_a. Also, I had a manually defined Meta.db_table for my model which might have helped. Notably you will end up with altered migration history. This doesn't matter, even if you have a database with the original migrations applied. If both the original and the rewritten migrations end up with the same database schema, then such rewrite should be OK.
0
0
0
0
2014-09-03T15:36:00.000
11
0
false
25,648,393
0
0
1
2
So about a year ago I started a project, and like all new developers I didn't really focus too much on the structure. However, now that I am further along with Django, it has started to appear that my project layout, mainly my models, is horrible in structure. I have models mainly held in a single app, and really most of these models should be in their own individual apps. I did try to resolve this and move them with South, however I found it tricky and really difficult due to foreign keys etc. However, given Django 1.7 and its built-in support for migrations, is there a better way to do this now?
How can you map multiple tables to one model?
25,655,829
1
0
703
0
python,flask,sqlalchemy,flask-sqlalchemy
If you have two tables with the same columns, your database schema could probably be done better. I think you should really have a table called CarMake, with entries for Toyota, Honda etc, and another table called Car which has a foreign key to CarMake (e.g. via a field called car_make or similar). That way, you could represent this in Flask with two models - one for Car and one for CarMake.
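A sketch of that schema with Flask-SQLAlchemy:

    from flask_sqlalchemy import SQLAlchemy

    db = SQLAlchemy()

    class CarMake(db.Model):
        id = db.Column(db.Integer, primary_key=True)
        name = db.Column(db.String(50))  # 'Toyota', 'Honda', ...

    class Car(db.Model):
        id = db.Column(db.Integer, primary_key=True)
        model_name = db.Column(db.String(50))
        car_make_id = db.Column(db.Integer, db.ForeignKey('car_make.id'))
        make = db.relationship('CarMake', backref='cars')

With this, CarMake.query.filter_by(name='Toyota').first().cars replaces what used to be the whole Toyota table.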
0
0
0
0
2014-09-04T00:50:00.000
2
0.099668
false
25,655,755
0
0
1
1
If I have two tables with same columns, ex. a table called Toyota and a table called Honda, how can I map these two tables with one model (maybe called Car) in flask?
Can't stop web server in Google App Engine Launcher
33,986,803
0
12
981
0
python,google-app-engine,python-2.7
I face this issue too. It has to do with the application you are running. If you are sure it runs perfectly fine, then it may be overburdening the server in some way. I strongly recommend logging the relevant aspects of your code so it displays any issue in the log console. Hope this helps
0
1
0
0
2014-09-04T14:48:00.000
4
0
false
25,668,522
0
0
1
3
I am running the development web server in Google App Engine Launcher without any trouble. But I can't successfully stop it. When I press the Stop button, nothing happens. Nothing is added to the logs after pressing Stop. And after that I can't close the launcher. The only way to close the launcher is the Task Manager. However, when I use dev_appserver.py myapp via cmd it is successfully stopped by Ctrl+C. By the way, I am behind a proxy.
Can't stop web server in Google App Engine Launcher
28,325,931
0
12
981
0
python,google-app-engine,python-2.7
I think your server is crashed because maybe you overloaded it or maybe there's an internal error that can be solved by re-installing the web-server.
0
1
0
0
2014-09-04T14:48:00.000
4
0
false
25,668,522
0
0
1
3
I am running the development web server in Google App Engine Launcher without any trouble. But I can't successfully stop it. When I press the Stop button, nothing happens. Nothing is added to the logs after pressing Stop. And after that I can't close the launcher. The only way to close the launcher is the Task Manager. However, when I use dev_appserver.py myapp via cmd it is successfully stopped by Ctrl+C. By the way, I am behind a proxy.
Can't stop web server in Google App Engine Launcher
27,682,317
0
12
981
0
python,google-app-engine,python-2.7
This is just a suggestion, but I think if you overloaded the server by repeatedly pinging the IP, you could crash the webserver.
0
1
0
0
2014-09-04T14:48:00.000
4
0
false
25,668,522
0
0
1
3
I am running the development web server in Google App Engine Launcher without any trouble. But I can't successfully stop it. When I press the Stop button, nothing happens. Nothing is added to the logs after pressing Stop. And after that I can't close the launcher. The only way to close the launcher is the Task Manager. However, when I use dev_appserver.py myapp via cmd it is successfully stopped by Ctrl+C. By the way, I am behind a proxy.
I just downloaded Aptana Studio 3 in Windows 8.1 Pro with Java SDK
26,286,915
0
0
576
0
python
Try: C:\Users\\appdata\roaming\appcelerator\ - that is where I found it. I had the same problem. I also just put aptana into the search input field and let the system do its thing.
0
1
0
0
2014-09-05T10:26:00.000
1
0
false
25,683,758
0
0
1
1
But I can't find where it is installed. It isn't even listed in the start menu, it can't be found in Program Files (64 bit) or in Program Files (x86). I repaired the installation, but there is still no way to find it.
Getting Around Webdriver's Lack of Interactions API in Safari
26,345,400
1
1
95
0
python,safari,selenium-webdriver
Darth, Mac osascript has libraries for Python. Be sure to 'import os' to gain access to the Mac osascript functionality. Here is the command that I am using: cmd = """ osascript -e 'tell application "System Events" to keystroke return' """ os.system(cmd) This does a brute force return. If you're trying to interact with system resources such as a Finder dialog, or something like that, make sure you give it time to appear and go away once you interact with it. You can find out what windows are active (as well as setting Safari or other browsers 'active', if it hasn't come back to front) using Webdriver / Python. Another thing that I have to do is to use a return call after clicking on buttons within Safari. Clicks are a little busted, so I will click on something to select it (Webdriver gets that far), then do an osascript 'return' to commit the click. I hope this helps. Best wishes, -Vek If this answer appears on ANY other site than stackoverflow.com, it is without my authorization and should be reported
0
0
1
0
2014-09-05T22:24:00.000
1
0.197375
false
25,694,785
0
0
1
1
I need to use the ENTER key in Safari. It turns out Webdriver does not have the Interactions API in the Safari driver. I saw some code from a question about this with a Java solution using Robot, and was wondering if there is a purely Python way to do a similar thing.
How to access system display memory / frame buffer in a Java program?
25,699,358
0
1
886
0
java,python,memory,vnc
directly access system display memory on Linux You can't. Linux is a memory protected virtual address space operating system. Ohh, the kernel gives you access to the graphics memory through some node in /dev but that's not how you normally implement this kind of thing. Also in Linux you're normally running a display server like X11 (or in the future something based on the Wayland protocol) and there might be no system graphics memory at all. I have researched a bit and found that one way to achieve this is to capture the screen at a high frame rate (screen shot), convert it into RAW format, compress it and store it in an ArrayList. That's exactly how its done. Use the display system's method to capture the screen. It's the only reliable way to do this. Note that if conversion or compression is your bottleneck, you'd have that with fetching it from graphics memory as well.
0
1
0
0
2014-09-06T10:28:00.000
1
1.2
true
25,699,308
0
0
1
1
I am trying to create my own VNC client and would like to know how to directly access system display memory on Linux? So that I can send it over a Socket or store it in a file locally. I have researched a bit and found that one way to achieve this is to capture the screen at a high frame rate (screenshot), convert it into RAW format, compress it and store it in an ArrayList. But, I find this method a bit too resource heavy. So, was searching for alternatives. Please, let me also know if there are other ways for the same (using Java or Python only)?
Running C in A Browser
51,140,813
13
6
8,593
0
javascript,python,c,google-app-engine,browser
Old question but for those that land in here in 2018 it would be worth looking at Web Assembly.
0
1
0
0
2014-09-07T17:58:00.000
3
1
false
25,713,194
0
0
1
1
I've spent days of research over the seemingly simple question: is it possible to run C code in a browser at all? Basically, I have a site set up in Appengine that needs to run some C code supplied by (a group of trusted) users and run it, and return the output of the code back to the user. I have two options from here: I either need to completely run the code in the browser, or find some way to have Python run this C code without any system calls. I've seen mixed responses to my question. I've seen solutions like Emscripten, but that doesn't work because I need the LLVM code to be produced in the browser (I cannot run compilers in AppEngine.) I've tried various techniques, including scraping from the output page on codepad.org, but the output I will produce is so high that I cannot use services like codepad.org because they trim the output (my output will be ~20,000 lines of about 60 characters each, which is trimmed by codepad due to a timeout). My last resort is to make my own server that can serve my requests from my Appengine site, but that seems a little extreme. The code supplied by my users will be very simple C. There are no I/O or system operations called by their code. Unfortunately, I probably cannot simply use a find/replace operation in their code to translate it to Javascript, because they may use structures like multidimensional arrays or maybe even classes. I'm fine with limiting my users to one cross-platform browser, e.g. Chrome or Firefox. Can anyone help me find a solution to this question? I've been baffled for days.
Configuring Django 1.7 and Python 3 on mac osx 10.9.x
25,715,948
3
2
344
0
python,django,macos,python-3.x
You need to install Django for Python 3: pip3 install django
0
0
0
0
2014-09-08T00:10:00.000
1
1.2
true
25,715,940
0
0
1
1
I have installed the latest versions of both Django and Python. The default "python" command is set to 2.7; if I want to use Python 3, I have to type "python3". Having to type "python3" with a django command causes problems. For example, if I type "python3 manage.py migrate", I get an error. The error is: Traceback (most recent call last): File "manage.py", line 8, in <module> from django.core.management import execute_from_command_line ImportError: No module named 'django' Django does not seem to recognize my Python 3. How do I get around this? Your help is greatly appreciated.
autologon to a django web app
25,724,888
0
0
77
0
python,django,web
I think that for this purpose you need OpenWrt or similar firmware for your router. As a second solution, you can make one of your computers an internet gateway, so the router gets internet from this gateway, and on the gateway there should be an app/config/etc. which redirects the user to your app when the user first opens any page.
0
0
0
0
2014-09-08T10:35:00.000
1
0
false
25,722,308
0
0
1
1
I would like to write a little django web app to be run in my local WLAN, to allow my customers to browse through the offers that I have made available. The WLAN is not password protected and is isolated from the web. Ideally, I would like that when a user connects to my WLAN with a smartphone or tablet, he or she is redirected directly to the offer web server, without entering any address or URL. Is there any combination of port forwarding/triggering on the WLAN router and the web server that can accomplish this task?
Changing theme in Pyscripter?
30,013,579
0
1
2,831
0
python,pyscripter
Click on the windows logo, write %appdata%, then open "Roaming"
1
0
0
0
2014-09-08T13:43:00.000
2
0
false
25,725,702
0
0
1
1
I have installed it using the executable file downloaded from its webpage. I tried finding %AppData%\skins\ as suggested by blogs but I just couldn't find it. Has anybody been stuck here?
How the user of my Django app can create "customized" tasks?
25,728,619
0
0
32
0
python,django,task,customization,celery
Since you know how to create and execute tasks, it's very easy to allow customers to create them. You can create a simple form to get the required information from the user. You can also have a profile page or some page where you can show user preferences. This form helps to get data (like how frequently they need to receive emails) from the user. Once the form is submitted, you can trigger a task asynchronously which does the processing and sends emails to the customer.
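A minimal Celery sketch of that flow (the task body and the form handling are application-specific placeholders):

    from celery import shared_task

    @shared_task
    def send_product_email(product_id):
        # Placeholder: build the weather file for this product and
        # mail it to its owner.
        pass

    # In the view that handles the submitted product form:
    # send_product_email.delay(product.id)

For the per-user frequency, you could store it on the product model and either reschedule with apply_async(countdown=...) at the end of each run, or use a beat scheduler that reads periodic tasks from the database (e.g. django-celery).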
0
0
0
0
2014-09-08T16:03:00.000
1
0
false
25,728,378
0
0
1
1
I am new to Celery and I can't figure out at all how to do what I need. I have seen how to create tasks by myself and change Django's settings.py file in order to schedule them. But I want to allow users to create "customized" tasks. I have a Django application that is supposed to give the user the opportunity to create the products they need (files that gather weather information) and to send them emails at a frequency they choose. I have a database that contains the parameters of every product, like geographical time, other parameters and frequency. I also have a script, sendproduct.py, that sends the product to the user. Now, how can I create, for each product the user creates, a task that executes my script with the given parameters at the given frequency?
Store css data in django db
25,739,277
1
0
278
0
python,django
There is absolutely no problem with storing CSS in DB. Just create a TextField in your model and put it there. Then in your view's template output it in a <style type="text/css"> tag and that's all.
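A sketch (model and field names are illustrative; validate or sanitize user-supplied CSS before rendering it unescaped):

    from django.db import models

    class UserStyle(models.Model):
        user = models.ForeignKey('auth.User')
        css = models.TextField()

    # In the template of the view that should use it:
    #   <style type="text/css">{{ style.css }}</style>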
0
0
0
0
2014-09-09T04:56:00.000
1
1.2
true
25,736,956
0
0
1
1
Hi, I have a scenario where a user inputs CSS data in a text box. I need to read it and apply it to a django view. I also need to store the CSS data for future modifications by the user. So my question is: 1. should I store the CSS data in the database, or 2. in a static CSS file, storing the path to the file in the DB? Thanks.
passing user information from php to django
25,740,202
2
0
274
0
php,python,django
You must use one of these possibilities: Your friend gives you direct access (even only read access) to his database and you represent everything as Django models or use raw SQL. The problem with that approach is that you have a very high-coupling between the two systems. If he changes his table or scheme structure for some reason you will also have to be notified and change stuff on your end. Which is a real headache. Your friend provides an API end-point from his system that you can access. This protocol can be simple GET requests to retrieve information that return JSON or any other format that suites you both. That's the simplest and best approach for the long run. You can "fetch" content directly from his site, that returns raw HTML for every request, and then you can scrape the response you receive. That's also a headache in case he changes his site structure, and you'll need to be aware of that.
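For the second option, a sketch of the Django side consuming such an endpoint (the URL and response fields are purely hypothetical):

    import requests

    def get_student(student_id):
        # The PHP forum would expose something like this endpoint; the
        # URL and the response fields here are assumptions only.
        resp = requests.get(
            'http://php-app.example.com/api/students/%d' % student_id)
        resp.raise_for_status()
        return resp.json()  # e.g. {'id': 1, 'name': '...', 'email': '...'}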
0
0
0
0
2014-09-09T08:23:00.000
3
1.2
true
25,739,840
0
0
1
1
I am building a web app in Django and I want to integrate it with the PHP web app that my friend has built. The PHP web app is like a forum where students can ask questions of the teachers; for this they have to log in. I am making an app in Django that displays a list of colleges, and every college has information about teachers, like the workshop/class timings of the teachers. In my Django app, colleges can make their account and provide information about the teachers' workshops/classes. Now what I want is that the students registered on the PHP web app can book the workshops/classes provided by colleges in the Django app, and colleges can see which students, and how many students, have booked a workshop/class. How can I get the students' information from the PHP web app into Django so that colleges can see which students have booked a workshop? Students cannot book a workshop until they are logged in to the PHP web app. Please give me any idea about how I can make this possible.
GAE multi-module, multi-language application on localhost
26,067,953
2
4
503
0
java,python,google-app-engine,module,google-cloud-datastore
You might be able to do something similar by using "appscale" (an open source project that could be able to help you, if you set up VirtualBox and load the image on it). Look at community.appscale.com. Another way (mind you, this is tricky) would be to: 1. deploy your Python as a standalone project on localhost:9000; 2. deploy your Java as a standalone project on localhost:8000; 3. change your Python and Java code so that when they are in dev, they hit the right localhost (Java hits localhost:9000 and Python hits localhost:8000); 4. try, like @tx802 suggested, to specify a path to local_db. I am not sure either method works, but I figure they are both worth trying at the very least.
0
1
0
0
2014-09-09T13:51:00.000
1
0.379949
false
25,746,419
0
0
1
1
I have a multi-module GAE application that is structured like this: a Python 2.7 module, which is a regular web application. This Python app uses the Datastore API. Regular, boring web app. A Java module (another web application) that hooks on the Datastore calls (calls made by the Python web app) and displays aggregated data about the recorded Datastore calls. I have been able to deploy this application on the GAE cloud, and everything works fine. However, problems arise when I want to run my application on localhost. The Python module must be started using the Python SDK. The Java module must be started using the Java SDK. However, the two SDKs do not seem to share the same datastore (I believe the two SDKs write/read separate files on disk). It seems to me that the two SDKs also differ in the advancement of the Development Console implementation. The Python SDK sports a cleaner, more recent-looking Development Console (akin to the new console.developers.google.com console) than the Java SDK, which has the old-looking version of the Development Console (akin to the old appspot.com console). So my question is: is there a way to boot 2+ modules (in different languages: Python, Java) that share the same Datastore files? That'd be nice, since it would allow the Java module to hook on the Python Datastore calls, which does not seem to be possible at the moment.
Django Extend admin index
25,763,182
7
4
4,302
0
python,django,django-admin
Worked it out - I set admin.site.index_template = "my_index.html" in admin.py, and then the my_index template can inherit from admin/index.html without a name clash.
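In code the whole change is just this; my_index.html is the template name chosen above:

```python
# admin.py
from django.contrib import admin

# Point the admin index at a template with a different name, so it can
# safely {% extends "admin/index.html" %} without shadowing itself.
admin.site.index_template = "my_index.html"

# templates/my_index.html then needs only:
#   {% extends "admin/index.html" %}
#   {% block content %} ...app list without the "Change" links... {% endblock %}
```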
0
0
0
0
2014-09-09T15:23:00.000
5
1.2
true
25,748,396
0
0
1
1
I wish to make some modifications to the Django admin interface (specifically, remove the "change" link, while leaving the Model name as a link to the page for changes to the instances). I can achieve this by copying and pasting index.html from the admin application, and making the modifications to the template, but I would prefer to only override the offending section by extending the template - however I am unsure how to achieve this as the templates have the same name. I am also open to alternative methods of achieving this effect. (django 1.7, python 3.4.1)
Get list of connected users with Django
25,765,046
3
2
3,712
0
python,django,django-authentication
You have to consider what exactly it means for a user to be "online". Since any user can close the browser window at any time without the server knowing about that action, you'd end up having lots of false "online" users. You have two basic options: Keep track of the user's last activity time. Every time the user loads a page you'd update the value of the timer. To get a list of all online users you'd select the ones whose last activity is within the last X minutes. This is what is done by some web forums. Open a websocket, a long-polling connection, or some heartbeat to the server. This is what Facebook chat does. You'd need more than just Django, since keeping a connection open requires a different kind of server-side resource.
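A minimal sketch of the first option using the cache (the names and the 5-minute window are assumptions; this uses the newer middleware style, older Django versions use a process_request method instead):

```python
# middleware.py -- sketch of option 1
from django.core.cache import cache
from django.utils import timezone

ONLINE_WINDOW_SECONDS = 5 * 60  # how long a hit counts as "online"

class LastSeenMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        # Refresh the user's "last seen" mark on every page load.
        if request.user.is_authenticated:
            cache.set("seen_%s" % request.user.pk,
                      timezone.now(), ONLINE_WINDOW_SECONDS)
        return self.get_response(request)

def is_online(user):
    """True if the user hit any page within the window."""
    return cache.get("seen_%s" % user.pk) is not None
```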
0
0
0
0
2014-09-10T11:59:00.000
2
0.291313
false
25,764,889
0
0
1
1
I'm looking for a way to keep track of users that are online/offline, so if I present all users in a list I could have an icon or some kind of flag to show this. Is this built into Django's default auth system? My first thought was to simply have a field in my profiles called last_logout and update it with the date/time each time the user logs out. With this info and the built-in last_login, I should be able to make some kind of function to determine if the user is logged in/online, right? Or should I just have a boolean field called "online" that I change when the user logs in and out?
Django session id security tips?
72,094,059
0
3
357
0
python,django,cookies
Sadly, there is no sure way to prevent this that I know of, but you can send the owner of the account a notification email and set up some type of 2FA.
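On top of that, a few stock Django settings narrow the window during which a stolen session id is useful; a sketch of common hardening, not a complete fix:

```python
# settings.py -- none of this stops an attacker who already copied a
# valid cookie; it only limits how far and how long the cookie travels.
SESSION_COOKIE_SECURE = True            # send the cookie over HTTPS only
SESSION_COOKIE_HTTPONLY = True          # keep JavaScript away from it
SESSION_COOKIE_AGE = 60 * 60            # expire sessions after an hour
SESSION_EXPIRE_AT_BROWSER_CLOSE = True  # and when the browser closes
```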
0
0
0
0
2014-09-10T16:21:00.000
2
0
false
25,770,457
0
0
1
1
I'm currently developing a site with Python + Django, and while building the login I started using request.session[""] variables for the session the user is currently in. I recently realized that when I do that, it generates a cookie with the "sessionid" in it with a value every time the user logs in, something like "c1cab412bc71e4xxxx1743344b3edbcc", and if I take that string and paste it into the cookie on another computer, on another network and everything, I can access the session without logging in. So what I'm asking here is: can anyone give me any tips on how I can add some security to my system, or am I doing something wrong setting session variables? Any suggestions please?
Appengine: Query only a subset of the data?
25,779,584
1
0
37
0
python,google-app-engine,google-cloud-datastore
You should add to each Datastore entity an indexed property to query on. For example, you could create a "hash" property that will contain the date (in ms since epoch) modulo 15 minutes (in ms). Then you just have to query with a filter saying hash=0, or rather a random value between 0 and 15 minutes (in ms).
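A minimal ndb sketch of that bucketing idea; the Point model, its fields, and start_ms/end_ms are assumed names:

```python
# models.py -- sketch only
from google.appengine.ext import ndb

FIFTEEN_MIN_MS = 15 * 60 * 1000

class Point(ndb.Model):
    timestamp_ms = ndb.IntegerProperty()  # ms since epoch
    value = ndb.FloatProperty()
    # position inside the 15-minute bucket, indexed at write time
    offset_ms = ndb.ComputedProperty(
        lambda self: self.timestamp_ms % FIFTEEN_MIN_MS)

# Roughly one point per bucket: pick one offset value and filter on it.
# Offsets must land exactly on stored values, so round timestamps when
# writing (or query a narrow offset range instead of a single value).
query = Point.query(Point.offset_ms == 0,
                    Point.timestamp_ms >= start_ms,   # user's start date
                    Point.timestamp_ms <= end_ms)     # user's end date
```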
0
1
0
0
2014-09-11T03:30:00.000
1
0.197375
false
25,778,586
0
0
1
1
My users can supply a start and end date and my server will return a list of points between those two dates. However, there are too many points within each hour and I am interested in picking only one random point per every 15 minutes. Is there an easy way to do this in App Engine?
Starting Script on Server when QR-Code gets scanned
25,781,952
1
0
602
0
python,flask,raspberry-pi,qr-code
First of all, QR codes aren't magic. All they contain is a string of text. That text could say "Hello", or be a phone number, email address, or URL. It is up to the QR scanner to decide what to do with the text it encounters. For example, you could build a QR scanner which tells your Pi to delete data when it scans the text "123abc". Or, you could have a URL like http://192.168.0.34/delete?data=abc123 where the IP address is the internal network address of your Pi.
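For example, a tiny Flask endpoint on the Pi could react to that URL; the route and the "abc123" token are illustrative only, and a real deployment needs proper authentication:

```python
# app.py -- minimal sketch
from flask import Flask, abort, request

app = Flask(__name__)
QR_TOKEN = "abc123"  # the text your QR code encodes

@app.route("/delete")
def delete_data():
    if request.args.get("data") != QR_TOKEN:
        abort(403)
    # ... delete the specific rows from the database here ...
    return "deleted"

if __name__ == "__main__":
    # Reachable from phones on the LAN; ports below 1024 need root.
    app.run(host="0.0.0.0", port=8080)
```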
0
0
0
1
2014-09-11T06:22:00.000
1
0.197375
false
25,780,445
0
0
1
1
Is it possible to have a web server, for example my Raspberry Pi, start a script when a specific QR code gets scanned by my mobile device connected to my home network? E.g.: I want the Pi to delete specific data from a database if the QR code gets scanned by a mobile device inside my network.
Python framework choice
25,783,845
7
1
252
0
android,python,ios,django
Django's strength is in its ORM, huge documentation, and the thousands of reusable applications. The problem with those reusable apps is that the majority are written following Django's MVC design, and as you need a web service, not a website or web application, most of those apps will be almost useless for you. On the other hand, there is Django REST Framework, extending Django itself, which is pretty good, and its declarative API feels as if it were part of Django itself. For simple cases, just a couple of lines of code can produce a complete CRUD API following REST conventions, generating beautiful URLs, with out-of-the-box support for multiple authentication mechanisms, etc., but it could be overkill to pick Django just because of that, especially if you do not wish to use its ORM. Flask, on the other hand, is pretty lightweight, and it's not an MVC-only framework, so in combination with Flask-RESTful I think it would be an ideal tool for writing REST services. So the conclusion would be that Django provides the best out-of-the-box experience, but Flask's simplicity and size are too compelling to ignore.
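To make the comparison concrete, here is roughly what a dictionary-lookup endpoint looks like in Flask-RESTful; the data and URL layout are placeholders:

```python
# api.py -- minimal Flask-RESTful sketch; WORDS is placeholder data
from flask import Flask
from flask_restful import Api, Resource

app = Flask(__name__)
api = Api(app)

WORDS = {"python": "a programming language (also a large snake)"}

class Definition(Resource):
    def get(self, word):
        return {"word": word,
                "definition": WORDS.get(word, "not found")}

api.add_resource(Definition, "/api/words/<string:word>")

if __name__ == "__main__":
    app.run(debug=True)
```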
0
0
0
0
2014-09-11T09:12:00.000
2
1.2
true
25,783,481
0
0
1
2
I know this is a bit off topic, but I really needed some help regarding this. I am new to Python. I'm trying to build my next project (a dictionary web app which will also have iOS and Android apps) for myself in Python. I've done some research and listed some promising frameworks: Django, Pylons (Pyramid + repoze.bfg), Tornado, CherryPy, Pyjamas, Flask, web.py, etc. But while Django is great, it was originally built for newspaper-like sites. I'm stuck with the choice for a dictionary-like web application which will have to provide a RESTful web service API for mobile request handling. So can anyone please help by pointing out which framework is the best choice for this type of web app? I think I should go with Django. Or should I go with native Python coding? Any suggestions will be great.
Python framework choice
25,784,170
2
1
252
0
android,python,ios,django
Go with Django, ignore its entire templating system (used to generate web pages), and use django-tastypie for the REST service. It is easy to learn and set-up is instant.
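A sketch of how little Tastypie needs for a read-only resource; the Word model and app name are assumed stand-ins:

```python
# api.py -- minimal django-tastypie sketch; Word is an assumed model
from tastypie.resources import ModelResource
from dictionary.models import Word  # assumed app/model

class WordResource(ModelResource):
    class Meta:
        queryset = Word.objects.all()
        resource_name = "word"
        allowed_methods = ["get"]

# urls.py then wires it up roughly like this:
#   from tastypie.api import Api
#   v1_api = Api(api_name="v1")
#   v1_api.register(WordResource())
#   urlpatterns += [url(r"^api/", include(v1_api.urls))]
```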
0
0
0
0
2014-09-11T09:12:00.000
2
0.197375
false
25,783,481
0
0
1
2
I know this is a bit off topic, but I really needed some help regarding this. I am new to Python. I'm trying to build my next project (a dictionary web app which will also have iOS and Android apps) for myself in Python. I've done some research and listed some promising frameworks: Django, Pylons (Pyramid + repoze.bfg), Tornado, CherryPy, Pyjamas, Flask, web.py, etc. But while Django is great, it was originally built for newspaper-like sites. I'm stuck with the choice for a dictionary-like web application which will have to provide a RESTful web service API for mobile request handling. So can anyone please help by pointing out which framework is the best choice for this type of web app? I think I should go with Django. Or should I go with native Python coding? Any suggestions will be great.
What big data solution can I use to process a huge number of input files?
25,824,846
1
0
148
0
python,amazon-ec2,bigdata,amazon-sqs
The problem with Hadoop is that when you get a very large number of files and do not combine them with CombineFileInputFormat, it makes the job less efficient. Spark doesn't seem to have a problem with this, though; I've had jobs run without problems with tens of thousands of input files, outputting tens of thousands of files. I've not tried to really push the limits; not sure if there even is one!
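A sketch of how per-file verdicts can look in Spark; classify() and the bucket paths are stand-ins for the rule system described in the question:

```python
# verdicts.py -- PySpark sketch; classify() and the S3 paths are stand-ins
from pyspark import SparkContext

def classify(path, body):
    """Placeholder for the rule engine: match rules against name + content."""
    return "flagged" if "bad-pattern" in body else "clean"

sc = SparkContext(appName="file-verdicts")
# wholeTextFiles yields one (path, content) pair per file, so every input
# file keeps its identity and gets its own verdict -- no combining needed.
files = sc.wholeTextFiles("s3n://my-bucket/incoming/*")
verdicts = files.map(lambda kv: (kv[0], classify(kv[0], kv[1])))
verdicts.saveAsTextFile("s3n://my-bucket/verdicts/")
```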
0
0
1
0
2014-09-12T23:14:00.000
1
1.2
true
25,818,198
0
1
1
1
I am currently searching for the best solution + environment for a problem I have. I'm simplifying the problem a bit, but basically: 1. I have a huge number of small files uploaded to Amazon S3. 2. I have a rule system that matches any input across all file content (including file names) and then outputs a verdict classifying each file. NOTE: I cannot combine the input files because I need an output for each input file. I've reached the conclusion that Amazon EMR with MapReduce is not a good solution for this. I'm looking for a big data solution that is good at processing a large number of input files and performing a rule-matching operation on the files, outputting a verdict per file. I will probably have to use EC2. EDIT: clarified point 2 above.
How to remove exported templates from Visual Studio Express 2013
29,585,803
0
0
232
0
python,templates,visual-studio-2013,ptvs
@GeoCoder, Pavel's link is mostly what you need. If after deleting all those files you still see it, then you need to delete {program folder}\Common7\IDE\ItemTemplatesCache\cache.bin also.
0
0
0
0
2014-09-13T20:29:00.000
1
0
false
25,827,393
1
0
1
1
I have Visual Studio 2013 Express running on Windows 8.1, and I installed the Python Tools for Visual Studio template. I have developed Python applications a few times, as well as C# stuff. For Python applications, I decided to export a general game template. Since it does not look good, I want to remove it before I attempt to export a better template. I tried searching everywhere, but to no avail.
Sphinx PDF output is bad. How do I chase down the cause?
37,615,276
0
0
1,143
0
pdf,python-sphinx,rst2pdf
We had a similar problem: bad PDF output on a project with a lot of chapters and images. We solved it by disabling page breaks: in conf.py, set the pdf_break_level value to 0.
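That is a one-line change in the Sphinx project's conf.py:

```python
# conf.py -- rst2pdf option read by Sphinx's pdf builder;
# 0 disables the hard page break normally inserted between sections
pdf_break_level = 0
```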
0
0
0
0
2014-09-14T22:39:00.000
1
0
false
25,838,717
0
0
1
1
My Sphinx input is six rst files and a bunch of PNGs and JPGs. Sphinx generates the correct HTML, but when I make pdf I get an output file that comes up blank in Adobe Reader (and comes up at over 5000%!) and does not display at all in Windows Explorer. The problem goes away if I remove various input files or if I edit out what looks like entirely innocuous sections of the input, but I cannot get a handle on the specific cause. Any ideas on how to track this one down? Running Sphinx build with the -v option shows no errors. I'm using the latest Sphinx (1.2.3) and the latest rst2pdf (0.93), with the default style. On Win7. (added) This may help others with the same problem: I tried concatenating the rst files, then running rst2pdf on the concatenated file. That worked, though it gave me a bunch of warnings for bad section hierarchy and could not handle the Sphinx :ref: stuff. Could the bad section hierarchy thing (i.e. ==, --, ~~ in one file, ==, ~~, -- in another) be connected to the hopeless PDFs? Removing the conflict does not solve the problem, but that doesn't mean it's not a clue! I could explore more if I could capture the output that Sphinx sends to rst2pdf.
Django virtual environment
25,851,210
4
1
149
0
django,python-2.7,virtualenv
It doesn't matter where the directory is - the only important thing is that you activate the virtual environment every time you want to work on the project. I personally prefer to have the project directory inside the virtual env directory, but that is not required. One caveat: don't put the virtual env inside your project directory. That may cause problems with test discovery and with git.
0
0
0
0
2014-09-15T15:05:00.000
1
1.2
true
25,851,142
1
0
1
1
What is the proper way of adding an already existing Django project to a newly created virtual environment? Do I just move the project to the virtual environment root directory?
Quickfix failing to read repeating group
25,858,986
5
1
1,635
0
python,quickfix,fix-protocol
(edit -- I have turned off the data dictionary in the config file -- could it have anything to do with that?) Yep, that's exactly the problem. Without the DD, your engine doesn't know when a repeating group ends or begins. As far as it's concerned, there's no such thing as repeating groups. You need a DD, and you need to make sure it matches your counterparty's message and field set. If they've added custom fields or messages, you need to make sure your DD reflects that.
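For the Python bindings, that means loading a session config that actually enables a dictionary; a sketch, where the XML path stands in for whatever spec matches CTS's custom tags:

```python
# Sketch only: wire the Python bindings to a config that enables the DD.
# "application" is your quickfix.Application subclass instance, and the
# FIX42-CTS.xml path is an assumed placeholder.
import quickfix as fix

settings = fix.SessionSettings("client.cfg")
# client.cfg must contain, per session:
#   UseDataDictionary=Y
#   DataDictionary=spec/FIX42-CTS.xml
store_factory = fix.FileStoreFactory(settings)
log_factory = fix.FileLogFactory(settings)
initiator = fix.SocketInitiator(application, store_factory, settings,
                                log_factory)
initiator.start()
```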
0
0
0
0
2014-09-15T22:52:00.000
2
1.2
true
25,858,091
0
0
1
2
I am using quickfix in Windows with python bindings. I have been able to make market data requests in the past. I recently changed to a different API provider (Cunningham, aka CTS) and am encountering a lot of issues. At least one of them, however, seems to be internal to quickfix. It is baffling me. When I send a market data request, I get back a response. It is a typical 35=W message, a market snapshot. Quickfix is rejecting this message because tag 269 appears more than once! Of course, tag 269 is MDEntryType, it is supposed to occur more than once. Notice also that tag 268, NoMDEntries, is defined and says there are 21 entries in the group. I think this is internal to quickfix because quickfix is generating an error message and sending it back to CTS. Also, this error aborts the message before it can get passed to the fromApp function. (I know because my parsers which apply themselves to the message whenever fromApp is called are not even getting this message). Any ideas? The message is below. (edit -- I have turned off the data dictionary in the config file -- could it have anything to do with that?)
<20140915-22:39:11.953, FIX.4.2:XXXXX->CTS, incoming> (8=FIX.4.2 ☺ 9=836 ☺ 35=W ☺ 34=4 ☺ 49=CTS ☺ 56=XXXXX ☺ 52=20140915-22:39:11.963 ☺ 48=XDLCME_F ZN (Z14) ☺ 387=2559 ☺ 965=2 ☺ 268=21 ☺ 269=0 ☺ 270=124156250 ☺ 271=646 ☺ 1023=1 ☺ 269=0 ☺ 270=124140625 ☺ 271=918 ☺ 1023=2 ☺ 269=0 ☺ 270=124125000 ☺ 271=1121 ☺ 1023=3 ☺ 269=0 ☺ 270=124109375 ☺ 271=998 ☺ 1023=4 ☺ 269=0 ☺ 270=124093750 ☺ 271=923 ☺ 1023=5 ☺ 269=0 ☺ 270=124078125 ☺ 271=1689 ☺ 1023=6 ☺ 269=0 ☺ 270=124062500 ☺ 271=2011 ☺ 1023=7 ☺ 269=0 ☺ 270=124046875 ☺ 271=1782 ☺ 1023=8 ☺ 269=0 ☺ 270=124031250 ☺ 271=2124 ☺ 1023=9 ☺ 269=0 ☺ 270=124015625 ☺ 271=1875 ☺ 1023=10 ☺ 269=1 ☺ 270=124171875 ☺ 271=422 ☺ 1023=1 ☺ 269=1 ☺ 270=124187500 ☺ 271=577 ☺ 1023=2 ☺ 269=1 ☺ 270=124203125 ☺ 271=842 ☺ 1023=3 ☺ 269=1 ☺ 270=124218750 ☺ 271=908 ☺ 1023=4 ☺ 269=1 ☺ 270=124234375 ☺ 271=1482 ☺ 1023=5 ☺ 269=1 ☺ 270=124250000 ☺ 271=1850 ☺ 1023=6 ☺ 269=1 ☺ 270=124265625 ☺ 271=1729 ☺ 1023=7 ☺ 269=1 ☺ 270=124281250 ☺ 271=2615 ☺ 1023=8 ☺ 269=1 ☺ 270=124296875 ☺ 271=1809 ☺ 1023=9 ☺ 269=1 ☺ 270=124312500 ☺ 271=2241 ☺ 1023=10 ☺ 269=4 ☺ 270=124156250 ☺ 271=1 ☺ 10=140 ☺ )
<20140915-22:39:12.004, FIX.4.2:XXXX->CTS, event> (Message 4 Rejected: Tag appears more than once:269)
<20140915-22:39:12.010, FIX.4.2:XXXX->CTS, outgoing> (8=FIX.4.2 ☺ 9=102 ☺ 35=3 ☺ 34=4 ☺ 49=XXXX ☺ 52=20140915-22:39:12.009 ☺ 56=CTS ☺ 45=4 ☺ 58=Tag appears more than once ☺ 371=269 ☺ 372=W ☺ 10=012 ☺ )
Quickfix failing to read repeating group
49,272,369
1
1
1,635
0
python,quickfix,fix-protocol
I realize this thread is years old, but I had this exact problem and finally resolved it, so I am putting it here to help anyone else who stumbles across this. The issue was that in my config I was using the 'DataDictionary=..' parameter. Changing this to 'AppDataDictionary=...' solved my problem. Steve
0
0
0
0
2014-09-15T22:52:00.000
2
0.099668
false
25,858,091
0
0
1
2
I am using quickfix in Windows with python bindings. I have been able to make market data requests in the past. I recently changed to a different API provider (Cunningham, aka CTS) and am encountering a lot of issues. At least one of them, however, seems to be internal to quickfix. It is baffling me. When I send a market data request, I get back a response. It is a typical 35=W message, a market snapshot. Quickfix is rejecting this message because tag 269 appears more than once! Of course, tag 269 is MDEntryType, it is supposed to occur more than once. Notice also that tag 268, NoMDEntries, is defined and says there are 21 entries in the group. I think this is internal to quickfix because quickfix is generating an error message and sending it back to CTS. Also, this error aborts the message before it can get passed to the fromApp function. (I know because my parsers which apply themselves to the message whenever fromApp is called are not even getting this message). Any ideas? The message is below. (edit -- I have turned off the data dictionary in the config file -- could it have anything to do with that?)
<20140915-22:39:11.953, FIX.4.2:XXXXX->CTS, incoming> (8=FIX.4.2 ☺ 9=836 ☺ 35=W ☺ 34=4 ☺ 49=CTS ☺ 56=XXXXX ☺ 52=20140915-22:39:11.963 ☺ 48=XDLCME_F ZN (Z14) ☺ 387=2559 ☺ 965=2 ☺ 268=21 ☺ 269=0 ☺ 270=124156250 ☺ 271=646 ☺ 1023=1 ☺ 269=0 ☺ 270=124140625 ☺ 271=918 ☺ 1023=2 ☺ 269=0 ☺ 270=124125000 ☺ 271=1121 ☺ 1023=3 ☺ 269=0 ☺ 270=124109375 ☺ 271=998 ☺ 1023=4 ☺ 269=0 ☺ 270=124093750 ☺ 271=923 ☺ 1023=5 ☺ 269=0 ☺ 270=124078125 ☺ 271=1689 ☺ 1023=6 ☺ 269=0 ☺ 270=124062500 ☺ 271=2011 ☺ 1023=7 ☺ 269=0 ☺ 270=124046875 ☺ 271=1782 ☺ 1023=8 ☺ 269=0 ☺ 270=124031250 ☺ 271=2124 ☺ 1023=9 ☺ 269=0 ☺ 270=124015625 ☺ 271=1875 ☺ 1023=10 ☺ 269=1 ☺ 270=124171875 ☺ 271=422 ☺ 1023=1 ☺ 269=1 ☺ 270=124187500 ☺ 271=577 ☺ 1023=2 ☺ 269=1 ☺ 270=124203125 ☺ 271=842 ☺ 1023=3 ☺ 269=1 ☺ 270=124218750 ☺ 271=908 ☺ 1023=4 ☺ 269=1 ☺ 270=124234375 ☺ 271=1482 ☺ 1023=5 ☺ 269=1 ☺ 270=124250000 ☺ 271=1850 ☺ 1023=6 ☺ 269=1 ☺ 270=124265625 ☺ 271=1729 ☺ 1023=7 ☺ 269=1 ☺ 270=124281250 ☺ 271=2615 ☺ 1023=8 ☺ 269=1 ☺ 270=124296875 ☺ 271=1809 ☺ 1023=9 ☺ 269=1 ☺ 270=124312500 ☺ 271=2241 ☺ 1023=10 ☺ 269=4 ☺ 270=124156250 ☺ 271=1 ☺ 10=140 ☺ )
<20140915-22:39:12.004, FIX.4.2:XXXX->CTS, event> (Message 4 Rejected: Tag appears more than once:269)
<20140915-22:39:12.010, FIX.4.2:XXXX->CTS, outgoing> (8=FIX.4.2 ☺ 9=102 ☺ 35=3 ☺ 34=4 ☺ 49=XXXX ☺ 52=20140915-22:39:12.009 ☺ 56=CTS ☺ 45=4 ☺ 58=Tag appears more than once ☺ 371=269 ☺ 372=W ☺ 10=012 ☺ )
Can Apache be used as a front end for Django and Tornado at the same time?
25,861,972
1
0
534
0
python,django,apache,tornado,wsgi
You would be better off using nginx as a front-end proxy on port 80 and having it proxy to both Apache/mod_wsgi and Tornado as backends on their own ports. Apache/mod_wsgi will actually benefit from this as well if everything is set up properly, as nginx will isolate Apache from slow HTTP clients, allowing Apache to perform better with fewer resources.
0
1
0
0
2014-09-16T02:16:00.000
2
0.099668
false
25,859,704
0
0
1
1
I have Apache set up as a front end for Django and it's working fine. I also need to handle web sockets, so I have Tornado running on port 8888. Is it possible to have Apache be a front end for Tornado so I don't have to specify the 8888 port? My current /etc/apache2/sites-enabled/000-default.conf file is:

<VirtualHost *:80>
    WSGIDaemonProcess myappiot python-path=/home/ubuntu/myappiot/sw/www/myappiot:/usr/local/lib/python2.7/site-packages
    WSGIProcessGroup myappiot
    WSGIScriptAlias / /home/ubuntu/myappiot/sw/www/myappiot/myappiot/wsgi.py

    # The ServerName directive sets the request scheme, hostname and port that
    # the server uses to identify itself. This is used when creating
    # redirection URLs. In the context of virtual hosts, the ServerName
    # specifies what hostname must appear in the request's Host: header to
    # match this virtual host. For the default virtual host (this file) this
    # value is not decisive as it is used as a last resort host regardless.
    # However, you must set it for any further virtual host explicitly.
    #ServerName www.example.com

    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/html

    # Available loglevels: trace8, ..., trace1, debug, info, notice, warn,
    # error, crit, alert, emerg.
    # It is also possible to configure the loglevel for particular
    # modules, e.g.
    #LogLevel info ssl:warn

    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined

    # For most configuration files from conf-available/, which are
    # enabled or disabled at a global level, it is possible to
    # include a line for only one particular virtual host. For example the
    # following line enables the CGI configuration for this host only
    # after it has been globally disabled with "a2disconf".
    #Include conf-available/serve-cgi-bin.conf
</VirtualHost>