Title
stringlengths
11
150
A_Id
int64
518
72.5M
Users Score
int64
-42
283
Q_Score
int64
0
1.39k
ViewCount
int64
17
1.71M
Database and SQL
int64
0
1
Tags
stringlengths
6
105
Answer
stringlengths
14
4.78k
GUI and Desktop Applications
int64
0
1
System Administration and DevOps
int64
0
1
Networking and APIs
int64
0
1
Other
int64
0
1
CreationDate
stringlengths
23
23
AnswerCount
int64
1
55
Score
float64
-1
1.2
is_accepted
bool
2 classes
Q_Id
int64
469
42.4M
Python Basics and Environment
int64
0
1
Data Science and Machine Learning
int64
0
1
Web Development
int64
1
1
Available Count
int64
1
15
Question
stringlengths
17
21k
Django: global variables in multi-connections
32,396,689
2
1
295
0
python,ajax,django,web,global
The answer is clear, surely: you should not be using global variables. If you need to store state for a user, do it in the session or the database.
0
0
0
0
2015-09-04T10:52:00.000
1
1.2
true
32,396,300
0
0
1
1
I developed a Django web application and tested it locally. Good news, it works! Now I am trying to move forward: I want to make it reachable via the public Internet. Bad news, it won't work! The client side interacts with the server using Ajax, executes some Python scripts, and gets results to display in the web page. The problem is that my application/server can't handle multiple connections! To clarify: when more than one client is served (2, for example), each one asks the server to run a Python script, and because there are a lot of global variables in the script, the two clients modify them simultaneously and everything breaks! Can multi-threading be a solution? How? PS: Clearly, I am a newbie to the web :-). Thanks
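The advice in the answer above (keep per-user state out of module globals) can be sketched in plain Python; the dict below is a hypothetical stand-in for Django's session store, which does this for you via request.session:

```python
# Module-level globals are shared by every request the process serves,
# so two simultaneous clients clobber each other's values. The fix is
# per-user state; Django's request.session provides this. The dict
# below is an illustrative stand-in for the session store.
sessions = {}

def handle_request(session_id, value):
    # Each client's state lives under its own session key, so
    # concurrent clients no longer overwrite one another.
    sessions.setdefault(session_id, {})["value"] = value
    return sessions[session_id]["value"]

handle_request("alice", 1)
handle_request("bob", 2)
```

In a real view you would read and write `request.session` (or a database row) instead of a shared dict.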
Mezzanine (Django) menu tree generation from lower branch level
32,489,152
1
1
136
0
python,django,mezzanine
Hello. If you are far ahead in your development, I am sorry for you. If not, run as far as you can from Mezzanine; the documentation for this CMS is scarce. Luckily, you can solve this by using "page.branch_level" instead of just "branch_level". The former will give you the depth of the current branch, while the latter gives you the depth of the page relative to the whole page tree. Hope this helps.
0
0
0
0
2015-09-04T17:30:00.000
1
1.2
true
32,403,617
0
0
1
1
I have the following menu structure: Personal PersonalOption1 Sub-Option1 Sub-Option2 PersonalOption2 Enterprise EnterpriseOption1 EnterpriseOption2 From the Page on Sub-Option1, I'm trying to generate a page_menu to only show: PersonalOption1 PersonalOption2 But based on the branch_level value, I'm getting: PersonalOption1 PersonalOption2 Enterprise EnterpriseOption1 EnterpriseOption2 This is the tree I'm getting using branch_level to identify each node: Personal (branch_level: 0) PersonalOption1 (branch_level: 1) Sub-Option1 (branch_level: 2) Sub-Option2 (branch_level: 2) PersonalOption2 (branch_level: 1) Enterprise (branch_level: 1) EnterpriseOption1 (branch_level: 1) EnterpriseOption2 (branch_level: 1) Enterprise should have branch_level 0.
Automation in django: Celery
32,410,234
0
1
131
0
django,python-2.7,automation,celery,django-celery
You need, at minimum, a user model, comment model, article model, and most likely a site model to store your RSS URLs and metadata about each site. You will then need to create a function to parse the RSS from your URLs and populate your article table. You will need to call this function on a periodic basis, either via cron or something like Celery. The case of user-submitted articles is similar, although rather than a site model, you would need something like a category or channel model. The rest is all forms and views. The syndication framework does not parse RSS; it generates RSS from an existing model, so that's useless in your case unless you intend to publish an RSS feed of your articles linking to the comments pages (Reddit does this).
0
0
0
0
2015-09-04T18:49:00.000
1
1.2
true
32,404,764
0
0
1
1
Django noob, please bear with me. How do I parse the RSS/Atom feed of an external site (any news site) and create a comments section for each post? Or, as on Reddit, where users submit links: here the links are to be pulled from one or more websites, each with a comments section added. It's easy to do with the syndication framework if the site is in the same DB, but I couldn't find an exact solution or process to make it work for external sites. I have created the user model and comments model; I got stuck at automating the process of adding links. Using Django==1.8, Python==2.7. Thanks a lot. EDIT: How do I do it in Celery?
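The "parse the feed and populate your article table" step from the answer above can be sketched with the standard library; a real job would fetch the URL and run under cron or Celery, and the inline feed here is made-up sample data:

```python
import xml.etree.ElementTree as ET

# Inline sample feed standing in for a fetched external RSS document.
RSS = """<rss><channel>
<item><title>First post</title><link>http://example.com/1</link></item>
<item><title>Second post</title><link>http://example.com/2</link></item>
</channel></rss>"""

def parse_feed(xml_text):
    root = ET.fromstring(xml_text)
    # One dict per <item>: this is what you'd save into the article table.
    return [{"title": i.findtext("title"), "link": i.findtext("link")}
            for i in root.iter("item")]

articles = parse_feed(RSS)
```

A production version would use a dedicated parser such as feedparser and deduplicate on the link before inserting.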
Using SCM to synchronize PyDev eclipse projects between different computer
32,408,606
0
0
86
0
python,eclipse,version-control,synchronization,pydev
I use Mercurial. I picked it because it seemed easier, but it is only easiER. There is a Mercurial Eclipse plugin. Save a copy of your workspace, and maybe your Eclipse folder too, before daring it :)
0
1
0
1
2015-09-04T21:21:00.000
3
1.2
true
32,406,765
0
0
1
2
I use Eclipse to write Python code using PyDev. So far I have been using Dropbox to synchronize my workspace; however, this is far from ideal. I would like to use GitHub (or another SCM platform) to upload my code so I can work with it from different places. However, I have found many of the tutorials kind of daunting, maybe because they are aimed at projects shared between many programmers. Would anyone please share their experience on how to do this, or any basic tutorial to do this effectively? Thanks
Using SCM to synchronize PyDev eclipse projects between different computer
32,466,408
0
0
86
0
python,eclipse,version-control,synchronization,pydev
I use Bitbucket coupled with Mercurial. That is, my repository is on Bitbucket and I pull from and push to it with Mercurial within Eclipse. For my backup I have an independent Carbonite process backing up all hard disk files over the net, but I imagine there is a clever, free, programmatic way to do so if one knew how to write the appropriate scripts. Glad the first suggestion was helpful; you are wise to bite the bullet and get this in place now. ;)
0
1
0
1
2015-09-04T21:21:00.000
3
0
false
32,406,765
0
0
1
2
I use Eclipse to write Python code using PyDev. So far I have been using Dropbox to synchronize my workspace; however, this is far from ideal. I would like to use GitHub (or another SCM platform) to upload my code so I can work with it from different places. However, I have found many of the tutorials kind of daunting, maybe because they are aimed at projects shared between many programmers. Would anyone please share their experience on how to do this, or any basic tutorial to do this effectively? Thanks
NDB query by time part of DateTimeProperty
32,419,523
2
1
519
0
python,google-app-engine,datetime,google-cloud-datastore,app-engine-ndb
The only way would be to store the time of day as a separate property. An int will be fine; you can store it as seconds. You could do this explicitly (i.e. set the time property at the same time you set the datetime), or use a ComputedProperty to set the value automatically.
0
0
0
0
2015-09-05T19:32:00.000
2
1.2
true
32,416,955
1
0
1
1
I need to query items that were added at some time of day, ignoring which day. I only save the DateTime of when the item was added. Comparing a datetime.time to a DateTimeProperty gives an error, and DateTimeProperty does not have a time() method.
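The separate int property the answer above suggests is just seconds since midnight, derived from the stored datetime; in NDB a ComputedProperty could fill it in automatically, shown here as a plain-function sketch:

```python
from datetime import datetime

# Seconds since midnight: the queryable time-of-day value a
# ComputedProperty would compute from the stored DateTimeProperty.
def seconds_since_midnight(dt):
    return dt.hour * 3600 + dt.minute * 60 + dt.second

created = datetime(2015, 9, 5, 19, 32, 15)
tod = seconds_since_midnight(created)  # 19h 32m 15s into the day
```

Queries for "items added around 19:32, any day" then become simple integer range filters on this property.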
Configuring an aiohttp app hosted by gunicorn
32,440,342
1
1
273
0
python-3.x,gunicorn,aiohttp
At least for now, aiohttp is a library without built-in support for reading configuration from an .ini or .yaml file, but you can easily write your own code to read a config file and set up the aiohttp server.
0
0
0
1
2015-09-06T12:23:00.000
1
0.197375
false
32,423,519
0
0
1
1
I implemented my first aiohttp-based REST-like service, which works quite well as a toy example. Now I want to run it using gunicorn. All the examples I found specify some prepared application in some module, which is then hosted by gunicorn. This requires me to set up the application at import time, which I don't like. I would like to specify a config file (development.ini, production.ini), as I'm used to from Pyramid, and set up the application based on that ini file. This is common to more or less all Python web frameworks, but I don't get how to do it with aiohttp + gunicorn. What is the smartest way to switch between development and production settings using these tools?
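The hand-rolled config loading the answer above suggests can be sketched with the standard library's configparser; the section and key names below are made up for the example, and the resulting dict would be passed to your aiohttp app factory before handing the app to gunicorn:

```python
import configparser

# Stand-in for the contents of development.ini / production.ini.
INI = """
[server]
host = 127.0.0.1
port = 8080
"""

def load_config(text):
    # Parse the ini text and return a plain settings dict.
    cp = configparser.ConfigParser()
    cp.read_string(text)
    return {"host": cp["server"]["host"],
            "port": cp.getint("server", "port")}

cfg = load_config(INI)
```

Switching environments is then just a matter of which ini file you read, e.g. driven by an environment variable.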
Updating Scrapy Spider does not reflect changes
32,700,014
-1
1
610
0
python,python-2.7,scrapy,scrapy-spider
As @alecxe recommended in the comments, removing the .pyc files and then re-running scrapy crawl crawler-name recompiles the Python code and creates new, updated .pyc files.
0
0
0
0
2015-09-06T15:39:00.000
1
1.2
true
32,425,245
0
0
1
1
I'm using Scrapy 1.0.3 with Python 2.7.6. I've placed print statements in a file under the /spiders directory for debugging purposes. However, I've more recently added new print statements, but Scrapy isn't printing them to the console. Finding this suspicious, I removed the previous print statements to see if Scrapy would update the output accordingly; however, the output from the previously working code remains the same. I suspect that Scrapy caches the working code; I found .Python to be a suspect file, which I removed, but the issue remains. Some google-fu didn't help either, and I was wondering if anyone could enlighten me as to whether the issue lies with Python or Scrapy?
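The fix in the answer above, scripted: delete stale .pyc files so the next `scrapy crawl` recompiles everything from the edited sources. The demo below runs against a throwaway directory rather than a real project:

```python
import pathlib
import tempfile

def remove_stale_bytecode(root):
    # Recursively delete *.pyc under root and report what was removed.
    removed = []
    for pyc in sorted(pathlib.Path(root).rglob("*.pyc")):
        pyc.unlink()
        removed.append(pyc.name)
    return removed

# Demo: a fake project directory containing one stale bytecode file.
with tempfile.TemporaryDirectory() as d:
    (pathlib.Path(d) / "spider.pyc").write_bytes(b"")
    removed = remove_stale_bytecode(d)
```

The shell equivalent is `find . -name '*.pyc' -delete` run from the project root before crawling again.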
OSQA Reaching to Page Files (Bitnami)
32,460,787
1
0
36
0
python,bitnami,osqa
When you install a Bitnami stack, the files of the application, OSQA in this case, are in installdir/apps/osqa/htdocs, just replacing installdir with the directory where you installed the stack. For instance, on Windows it is installed by default at C:\Bitnami\osqa\apps\osqa\htdocs. In the installdir\osqa directory you will find, under htdocs, the application files (.css, .py, ...) and, under conf, the Apache configuration files, so if you want to add more subdirectories or change any directives, you should look there. If you want to edit any feature of the application, go to the htdocs directory and edit the Python files to make your changes.
0
0
0
0
2015-09-07T15:33:00.000
1
1.2
true
32,442,067
0
0
1
1
I've been trying to reach the OSQA pages to modify them. I've installed it on my PC with Bitnami and I cannot find the pages' files. I couldn't find anything in the wiki and readme files. Is there a way to edit the pages? Not just the CSS; I'm also going to add more stuff to it. Thank you very much.
Rails and Django migrations on a shared database
32,450,497
1
0
254
1
python,ruby-on-rails,django,postgresql,ruby-on-rails-4
I think you need to maintain migrations in one system (in this case, Rails), because it will be difficult to keep migrations consistent between two different apps. What will you do if you don't have access to the other app? But you can keep something like db/schema.rb for Django tracked in git.
0
0
0
0
2015-09-08T06:11:00.000
1
0.197375
false
32,450,413
0
0
1
1
Is it bad practice to have Django perform migrations on a predominantly Rails web app? We have a RoR app and have moved a few of the requirements out to Python. One of the devs here has suggested creating some of the latest database migrations using Django, and my gut says this is a bad idea. I haven't found any solid statements one way or the other after scouring the web and am hoping someone can provide some facts as to why this is crazy (or why I should keep calm). Database: Postgres; hosting: Heroku; skill level: junior.
Django app with long running calculations
32,456,170
2
1
359
0
python,django,heroku,celery
I think Celery is a good approach. I'm not sure whether you need Redis/RabbitMQ as a broker or could just use MySQL; it depends on your tasks. Celery workers can be run on different servers, so Celery supports distributed queues. Another approach is to implement a queue engine in Python, with the database as a broker and cron for job execution, but that could be a dirty path with a lot of pain and bugs. So I think Celery is the nicer way to do it.
0
0
0
0
2015-09-08T10:50:00.000
2
1.2
true
32,455,821
0
0
1
1
I'm creating a Django web app which features potentially very long-running calculations of up to an hour. The calculations are simulation models built in Python. The web app sends inputs to the simulation model and after some time receives the answer. Also, the user should be able to close his browser after starting the simulation, and if he logs in the next day the results should be there. From my research it seems I can use Celery together with Redis/RabbitMQ as a broker to run the calculation in the background. Ideally I would want to display progress updates using Ajax, so that the page updates without a user refresh when the calculation is complete. I want to host the app on Heroku, so the calculation will also be running on the Heroku server. How hard will it be if I want to move the calculation engine to another server? It might be useful to have the calculation engine on a different server. So my question is: is the approach above a good one, or what other options can I look at?
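A stand-in for the Celery setup the answer above recommends, sketched with the standard library: submit the long simulation to a background pool keyed by a job id, and let the Ajax view poll for the result later. The names and the toy simulate() are illustrative only; Celery adds persistence and distribution that a thread pool does not:

```python
import time
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=2)
jobs = {}  # job id -> Future; Celery would keep this in its backend

def simulate(params):
    time.sleep(0.1)          # stands in for the hour-long model run
    return sum(params)

def start_job(job_id, params):
    # The "submit task" step; the web request returns immediately.
    jobs[job_id] = executor.submit(simulate, params)

def poll_job(job_id):
    # The step an Ajax progress endpoint would perform.
    fut = jobs[job_id]
    return fut.result() if fut.done() else None

start_job("run-1", [1, 2, 3])
```

With Celery the results backend survives process restarts, which is what lets the user close the browser and come back the next day.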
InterfaceError:(sqlte3.InterfaceError)Error binding parameter 0
32,471,731
0
2
454
1
python,xpath,sqlite,scrapy
The problem you're experiencing is that SQLite3 wants a datatype of "String", and you're passing in a list containing a unicode string. Change item['title'] = sel.xpath('//*[@class="link_title"]/a/text()').extract() to item['title'] = sel.xpath('//*[@class="link_title"]/a/text()').extract()[0]. You'll be left with a string to insert, and your SQLite3 errors should go away. A warning, though: if you ever want to deal with more than one title, this will limit you to the first. You can use whatever method you like to coerce the list into a string, though.
0
0
0
0
2015-09-08T14:13:00.000
1
1.2
true
32,460,120
0
0
1
1
Recently, I used Python and Scrapy to crawl article information, such as 'title', from a blog. Without using a database, the results are fine / as expected. However, when I use SQLAlchemy, I receive the following error: InterfaceError:(sqlite3.InterfaceError)Error binding parameter 0 -probably unsupported type.[SQL:u'INSERT INTO myblog(title) VALUES (?)'] [PARAMETERS:([u'\r\n Accelerated c++\u5b66\u4e60 chapter3 -----\u4f7f\u7528\u6279\u636e \r\n '],)] My xpath expression is: item['title'] = sel.xpath('//*[@class="link_title"]/a/text()').extract() which gives me the following value for item['title']: [u'\r\n Accelerated c++ \u5b66 \u4e60 chapter3 -----\u4f7f\u7528\u6279\u636e \r\n '] It's unicode, so why doesn't sqlite3 support it? This blog's title information contains some Chinese. I am tired of fighting SQLAlchemy; I've referred to its documentation but found nothing, and I'm out of ideas.
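The error and the fix from the answer above can be reproduced directly with the standard library's sqlite3 module: binding a list raises InterfaceError, while binding the list's single string works (the title text is shortened here for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE myblog (title TEXT)")

title = ["Accelerated c++ chapter3"]   # what .extract() returns: a list

got_error = False
try:
    # Binding a list as parameter 0: "probably unsupported type".
    conn.execute("INSERT INTO myblog (title) VALUES (?)", (title,))
except sqlite3.InterfaceError:
    got_error = True

# The fix: take the first element, a plain (unicode) string.
conn.execute("INSERT INTO myblog (title) VALUES (?)", (title[0],))
stored = conn.execute("SELECT title FROM myblog").fetchone()[0]
```

So the driver supports unicode strings fine; it is the wrapping list it cannot bind.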
Django testcase without database migrations and syncdb
32,499,038
1
1
642
0
python,django,django-testing,django-migrations
When you use Django's TestCase, it has an explicit requirement that the database be set up, which means all migrations must be applied. If you want to test things without the migrations happening, you cannot use TestCase. Use a testing toolkit that doesn't depend on Django, like pytest, and write your own test code; you can always import Django models and settings explicitly. Your tests would first run the explicit tests where the database is not created, after which the other tests containing TestCase can be run. I'm not sure whether such a setup is possible with manage.py, but you can certainly create your own script (maybe using Fabric or plain Python) to run tests in your chosen order.
0
0
0
0
2015-09-10T07:10:00.000
1
0.197375
false
32,495,377
0
0
1
1
I am trying to create test cases for my migration functions (called with migrations.RunPython). My idea was to create a test case that doesn't run migrations before starting, nor syncdb to create the database in one step. After this, I'm planning to run the first step, run the associated tests, run the second step, then its associated tests, etc. Is this possible somehow, or if not, is it possible to test migration functions in any way?
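One way to act on the answer above is to test a RunPython function directly, outside Django's TestCase, by replacing its (apps, schema_editor) arguments with stand-ins. Every name below is hypothetical; a real test would use Django's historical model registry:

```python
# Fake stand-ins for the objects Django passes to a RunPython function.
class FakeItem:
    rows = [{"name": "old"}]   # pretend table contents

class FakeApps:
    def get_model(self, app_label, model_name):
        return FakeItem

def forwards(apps, schema_editor):
    # A typical data migration: normalise a field on every row.
    Item = apps.get_model("myapp", "Item")
    for row in Item.rows:
        row["name"] = row["name"].upper()

# Exercise the migration function with the fakes; no database needed.
forwards(FakeApps(), None)
```

This checks the migration's logic in isolation; it does not verify schema state, which still needs a real database-backed test.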
Python and Java in internal command
41,688,142
0
0
29
0
java,python-2.7,python-3.4
You can create a new variable, for example MY_PYTHON=C:\Python34. Then you need to add the variable into the system variable PATH, such as PATH=...;%MY_PYTHON%. PATH is a default Windows system variable.
0
1
0
1
2015-09-10T08:56:00.000
2
0
false
32,497,329
0
0
1
1
I have installed Java and use it as an internal command with variable name PATH and variable value C:\Program Files\Java\jdk1.8.0_60\bin. Now I want to add Python as an internal command. What variable name do I give so that it works? I tried with name PTH and value C:\Python34; it's not working.
Can Django template variables be model instances?
32,508,275
2
1
826
0
python,django,django-templates
The issue isn't that the variables are static copies. It's just that the template language itself doesn't allow you to call methods which take arguments. It's still the same object under the hood you're accessing, you just have no way to express certain programmatic concepts (assignment, passing arguments, etc.) in the language. To answer your update: Yes the template layer could update models if the model had a method which modified the object and that method didn't take any arguments. But just because you can do a thing doesn't mean you should do a thing. Don't assume that because the developers of Django haven't absolutely prevented something means it's totally acceptable, but if that's what you really want to do, there's nothing to stop you.
0
0
0
0
2015-09-10T17:22:00.000
3
1.2
true
32,508,107
0
0
1
1
I was asked this on an interview presumably to see if I understood the separation between the Template layer and the Model layer. My understanding is that template variables are essentially: A static copy of the instance (such that all the properties can be accessed) An instance that has all of the method with arguments "hidden" (such that they can't be called, but methods without arguments can be called ) Therefore, if you had a model with only methods with no arguments and passed an instance into a template could you say that it was a static copy of the instance? Is this even a correct way to think about template variables? UPDATE: Is the template (view) layer able to update models (e.g. from a custom context processor)? If no, then how is this prevented by the Django framework if it's not making copies of the model instance? If yes, then wouldn't this be a major deviation from typical web framework MVC design where data only flows in one direction from Model to View?
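The rule the answer above states (the template language calls an attribute only when it is callable with no arguments) can be illustrated with a rough, hypothetical simplification of Django's variable-resolution logic; resolve() below is a sketch, not the real implementation:

```python
def resolve(obj, name):
    # Deliberately simplified stand-in for Django template variable
    # resolution: look the attribute up, call it only if it is callable
    # with no arguments.
    value = getattr(obj, name, "")
    if callable(value):
        try:
            return value()      # no-arg methods get called
        except TypeError:
            return ""           # methods needing arguments are unusable
    return value

class Article:
    title = "Hello"

    def upper_title(self):      # callable from a template
        return self.title.upper()

    def rename(self, new_title):  # takes an argument: not callable
        self.title = new_title

a = Article()
```

Note it is still the same live object underneath; the language simply gives you no syntax for passing arguments or assigning.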
Is it smart to use django as backend only?
32,510,043
2
0
903
0
python,angularjs,django,project,project-structure
If you're truly dealing with a scaling issue, you want to decouple every single component. That way you can pour resources into the part of your system under the heaviest load. That would involve things like spinning up multiple front-end web/cache servers, compute nodes, etc. That said, very few companies need to handle that kind of scale, and by the time you do, you'll have a team of developers to do all that for you. (As someone once said, "Scalability is a problem every developer wishes they had.") Until then, have a front-end site and an API. If you write the API well, you'll be able to plug in desktop/mobile clients very easily at a later date. You may also consider making the API public (at least partially) in the future to allow other developers to interact with your product.
0
0
0
0
2015-09-10T18:52:00.000
2
0.197375
false
32,509,598
0
0
1
1
I'm currently creating a new application which will be a kind of startup. Users can register and use many tools inside. I hope there will be at least thousands of hits per day. I'm sure it will use Python & Django, because that's the technology I work with. I'm not sure about project structure and communication in a project like this. I thought I'd use Django with Tastypie as a backend to serve endpoints, and another app based on Node.js (using Gulp, for example) to host the frontend only (the frontend will use AngularJS & UI-Router; it will be a SPA). Is it a better option to separate the backend and frontend applications, or should I keep all frontend files (JS, CSS, HTML) inside Django as static files? Which solution is better for a potentially huge web application? Maybe both are bad ideas? Thanks a lot for the help!
Hosting API docs generated with mkdocs at a URL within a Django project
32,511,724
4
2
1,356
0
python,django,mkdocs
Django is just a framework; you need to host your static files yourself and serve them with something like Nginx or Apache.
0
0
0
0
2015-09-10T21:05:00.000
3
0.26052
false
32,511,575
0
0
1
1
I tend to write my API documentation in Markdown and generate a static site with MkDocs. However, the site I'd like to host the documentation on is a Django site. So my question is, and I can't seem to find an answer Googling around, is how would I go about hosting the MkDocs static generated site files at a location like /api/v1/docs and have the static site viewable at that URL? UPDATE: I should point out I do serve all static files under Apache and do NOT use runserver or Debug mode to serve static files, that's just crazy. The site is completely built and working along with a REST API. My question was simply how do I get Django (or should I say Apache) to serve the static site (for my API docs) generated by MkDocs under a certain URL. I understand how to serve media and static files for the site, but not necessarily how to serve what MkDocs generates. It generates a completely static 'site' directory with HTML and assets files in it.
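Since the asker serves everything through Apache, one hedged sketch of what that could look like, with purely illustrative paths, is an Alias pointing at MkDocs' generated 'site' directory:

```apache
# Hypothetical Apache snippet: serve MkDocs' generated 'site' output
# directly at the docs URL, bypassing Django entirely. Paths are
# illustrative.
Alias /api/v1/docs /var/www/myproject/docs/site
<Directory /var/www/myproject/docs/site>
    Require all granted
</Directory>
```

Django never sees requests under /api/v1/docs; Apache answers them from the static files.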
How does Django's extends work?
32,513,994
3
1
716
0
python,django,django-templates
You cannot extend from multiple Django templates; inheritance is a single line. If you want /templates/index.html to be your base index template, and /templates/hello/index.html to be your index template for the hello part of your application, then you should have /templates/hello/index.html start with {% extends 'index.html' %}. The thing to understand about Django templates is that the base template, the one that is 'extended', is THE template; everything in that template will be displayed, whether it is within a block tag or outside one. When you 'extend' a template, any blocks declared which match blocks in the template being extended will override the content of those blocks. Most web sites/applications have a more or less consistent layout from page to page, so a typical template setup is to have a master template that contains blocks for all the various parts of the page, with divs and CSS to arrange the layout the way you want. Put as much of the common HTML, the stuff that does not change often from one page to the next, in that base template, and make sure the base template contains blocks for anything you need to fill in when you extend it. These blocks can contain default HTML, which will be shown if the extending template does not override that block, or they can be empty. Then, for each new template variation that you need, extend the master template and override only those blocks that need to be filled in or overridden. Don't think of the extend as bringing the code of your base template into the template that is extending it; Django templates do not work like that. Think of the base template as THE template, which has all the basic building blocks of your page, and then the extension MODIFIES the blocks of the template that it extends.
If you have a different situation where the pieces of your page need to be defined in different templates and you wish to piece them together, then what you are looking for is the {% include 'templatename' %} tag.
0
0
0
0
2015-09-11T00:52:00.000
1
1.2
true
32,513,828
0
0
1
1
I have my index.html file in the /templates directory, and I have another index.html located in /templates/hello. I've created a file named template.html in /templates/hello, and it should extend index.html. Can I make template.html extend both index.html files (from both directories) using the {% extends 'index.html' %} tag in it? Thanks.
Google App Engine Server to Server OAuth Python
32,524,341
0
0
163
0
python,google-app-engine,gmail,google-oauth,service-accounts
A service account isn't you; it's its own user. Even if you could access Gmail with a service account, which I doubt, you would only be accessing the service account's Gmail account (which I don't think it has), not your own. To my knowledge, the only way to access the Gmail API is with OAuth2. Service accounts can be used to access some of the Google APIs, for example Google Drive. The service account has its own Google Drive account; files will be uploaded to its Drive account. I can give it permission to upload to my Google Drive account by adding it as a user on a folder in Google Drive. You can't give another user permission to read your Gmail account, so again, the only way to access the Gmail API will be to use OAuth2.
0
1
1
0
2015-09-11T13:04:00.000
2
0
false
32,524,226
0
0
1
1
I can't find a solution for server-to-server authentication using the Google SDK + Python + Mac OS X + Gmail API. I would like to test Gmail API integration in my local environment before publishing my application to GAE, but so far I have had no results using the samples I found in the Gmail API or OAuth API documentation. In all tests I received the same error, "403 - Insufficient Permission", when my application was using a GCP service account; but if I convert the application to use a user account, everything is fine.
ImportError: No module named spiders
40,071,747
1
1
4,316
0
python,scrapy
Don't delete __init__.py from any place in your project directory. Just because it's empty doesn't mean you don't need it. Create a new empty file called __init__.py in your spiders directory, and you should be good to go.
0
0
1
0
2015-09-11T14:00:00.000
6
0.033321
false
32,525,307
1
0
1
3
from scrapy.spiders import CrawlSpider, Rule is giving an error. I am using Ubuntu. I have Scrapy 0.24.5 and Python 2.7.6. I tried it with the Scrapy tutorial project. I am working in PyCharm.
ImportError: No module named spiders
32,531,370
0
1
4,316
0
python,scrapy
Make sure Scrapy is installed. Try running scrapy with your terminal in your Python directory, or try updating Scrapy.
0
0
1
0
2015-09-11T14:00:00.000
6
0
false
32,525,307
1
0
1
3
from scrapy.spiders import CrawlSpider, Rule is giving an error. I am using Ubuntu. I have Scrapy 0.24.5 and Python 2.7.6. I tried it with the Scrapy tutorial project. I am working in PyCharm.
ImportError: No module named spiders
46,567,274
0
1
4,316
0
python,scrapy
Most likely the tutorial you are following and your version are mismatched. Simply replace scrapy.Spider with scrapy.spiders.Spider; the Spider class was moved into the spiders module.
0
0
1
0
2015-09-11T14:00:00.000
6
0
false
32,525,307
1
0
1
3
from scrapy.spiders import CrawlSpider, Rule is giving an error. I am using Ubuntu. I have Scrapy 0.24.5 and Python 2.7.6. I tried it with the Scrapy tutorial project. I am working in PyCharm.
django imagekit - list view and details view image
32,639,949
0
1
95
0
python,django,django-class-based-views,django-imagekit
Based on the answer provided by kicker86, I plan to retain a single image.
1
0
0
0
2015-09-11T20:03:00.000
1
1.2
true
32,531,308
0
0
1
1
I have just started using django-imagekit. I have a list view page where the images have dimensions of 270 x 203 (approx. 30 KB), and the same images have a size of 570 x 427 (approx. 90 KB) on the details view page. I wanted to know: should I create 2 different images for each image, with different sizes and dimensions? If the answer to the 1st query is yes, how do I do it with django-imagekit? PS: I am planning to use django-imagekit at the form level.
How to Pickle Django Model
32,554,191
2
1
1,634
0
python,django,pickle
While it's not really the "answer" to my question, the best solution I found was to implement a dehydrate() method on the model, allowing me to alter the model's __dict__ and store that instead. On recovery from the cache, it's as simple as using the ** syntax and you'll have your original model back.
0
0
0
0
2015-09-11T22:53:00.000
2
1.2
true
32,533,228
0
0
1
1
I'm currently trying to pickle certain Django models. I created a __getstate__ and __setstate__ method for the model, but it looks like pickle.dumps() is using the default __reduce__ instead. Is there a way to force use of __getstate__ and __setstate__ ? If not, what is the best way to overwrite __reduce__ ? I am currently using Django 1.6 and Python 2.7.6, if that helps. In essence, I am using get and set state to remove two fields before pickling in order to save space.
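The dehydrate() approach described in the answer above can be sketched in plain Python: pickle a trimmed copy of __dict__ instead of the instance, and rebuild with ** on the way out. The Profile class and its field names are made up for the example:

```python
import pickle

class Profile:
    def __init__(self, name, avatar=None, cache=None):
        self.name = name
        self.avatar = avatar    # heavy fields we don't want to cache
        self.cache = cache

    def dehydrate(self):
        # Copy __dict__ and drop the bulky fields before pickling.
        state = dict(self.__dict__)
        state.pop("avatar", None)
        state.pop("cache", None)
        return state

p = Profile("alice", avatar=b"big-bytes", cache={"x": 1})
blob = pickle.dumps(p.dehydrate())          # store this in the cache
restored = Profile(**pickle.loads(blob))    # ** syntax rebuilds the model
```

The trade-off is that dropped fields come back as their defaults, so they must be re-derivable or genuinely optional.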
What is the Django HOST_DOMAIN setting for?
32,543,440
2
0
44
0
python,django,django-settings
That is not a Django setting. It's perfectly good practice to define your own project-specific settings inside settings.py, and that is presumably what the original developer did here.
0
0
0
0
2015-09-12T20:32:00.000
2
0.197375
false
32,543,419
0
0
1
1
Despite googling I can't find any documentation for the Django HOST_DOMAIN setting in the settings.py. I am going through a settings.py file I have been given and this is the only part of the file I am not 100% clear on.
What is the right way in Django to generate random URLs for user uploaded items?
49,653,008
0
1
508
0
python,django,url
What I usually do is create a directory with the upload date as its name, e.g. 04042018/, and then rename the uploaded file with a UUID4, e.g. 550e8400-e29b-41d4-a716-446655440000.jpg. So the full path for the uploaded file will be something like site_media/media/04042018/550e8400-e29b-41d4-a716-446655440000.jpg. Personally I think this is better than something like site_media/media/username/file.jpg, because with usernames it would be easy to figure out which images belong to whom.
0
0
0
0
2015-09-16T07:11:00.000
1
1.2
true
32,601,954
0
0
1
1
For example, uploading a gif to gfycat generates URLs in the form of Adjective-Adjective-Noun such as ForkedTestyRabbit. Obviously going to this URL allows you to view the model instance that was uploaded. So I'm thinking the post upload generates a unique random URL, e.g. /uploads/PurpleSmellyGiraffe. The model will have a column that is the custom URL part, i.e. PurpleSmellyGiraffe, and then in the urls.py we have anything /uploads/* will select the corresponding model with that URL. However, I'm not sure this is the best practice. Could I get some feedback/suggestions on how to implement this?
xhtml2pdf is not displaying GBP ("£") signs while creating PDF
33,307,465
0
0
319
0
python,django,pdf-generation,xhtml2pdf
When you call pisa.CreatePDF(), make sure to include the encoding. Here's what we use; obviously it's a bit out of context, but it was mostly copied from the example documentation: pisaStatus = pisa.CreatePDF(html.encode('UTF-8'), encoding="UTF-8", dest=f, link_callback=link_callback)
0
0
0
0
2015-09-16T14:20:00.000
1
0
false
32,611,297
0
0
1
1
While creating a PDF using xhtml2pdf, we are unable to print the "£" sign, though it works for the dollar and euro signs. We have tried setting the font face and changing the font family, but we are still unable to print GBP.
PyCharm throws "AttributeError: 'module' object has no attribute" when running tests for no reason
32,615,089
1
5
7,129
0
python,django,testing,django-rest-framework
Given that all the other paths were already covered (import order, virtualenv created, project's interpreter using the virtualenv), the only thing that occurred to me was to run the following command within the virtualenv: pip install -r requirements.txt. And it worked! In the end, someone had updated the requirements, which weren't being met by my current virtualenv, screwing up the paths/imports within PyCharm.
0
0
0
0
2015-09-16T17:31:00.000
2
0.099668
false
32,615,088
1
0
1
2
So, I have a Django REST Framework project, and one day it simply ceased being able to run the tests within PyCharm. From the command line I can run them both using paver and using manage.py directly. There was a time when that would happen when we didn't import the class's superclass at the top of the file, but that's not the case here. We have a virtualenv set up locally and run the server from a Vagrant box. I made sure the virtual environment is loaded and that the project's interpreter is using the aforementioned virtualenv. No clue what's the matter.
PyCharm throws "AttributeError: 'module' object has no attribute" when running tests for no reason
37,061,343
2
5
7,129
0
python,django,testing,django-rest-framework
I had the same problem, but my solution was different. When I tried to run a test from PyCharm, the target path looked like this: tests.apps.an_app.models.a_model.ATestCase But since ATestCase was a class inside a_model.py, the target path should actually be: tests.apps.an_app.models.a_model:ATestCase Changing the target in the test configuration worked.
0
0
0
0
2015-09-16T17:31:00.000
2
0.197375
false
32,615,088
1
0
1
2
So, I have a Django REST Framework project and one day it simply ceased being able to run the tests within PyCharm. From the command line I can run them using either paver or manage.py directly. There was a time when this would happen if we didn't import the class' superclass at the top of the file, but that's not the case here. We have a virtualenv set up locally and run the server from a Vagrant box. I made sure the virtual environment is loaded and that the project's interpreter is using the aforementioned virtual env. No clue what's the matter.
web2py database configuration
32,618,356
2
0
939
1
python,database,web2py
Files in the /models folder are executed in alphabetical order, so just put the DAL definition at the top of the first model file that needs to use it (it will then be available globally in all subsequent model files as well as all controllers and views).
0
0
0
0
2015-09-16T19:00:00.000
2
1.2
true
32,616,625
0
0
1
1
I found this line to help configure PostgreSQL in web2py, but I can't seem to find a good place to put it: db = DAL("postgres://myuser:mypassword@localhost:5432/mydb") Do I really have to write it in every db.py?
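A sketch of what the accepted answer describes in practice. The file names are my own choice, and the fragment only runs inside web2py's execution environment, where DAL and Field are injected automatically.

```python
# models/0_db.py -- the leading "0_" keeps this file first in the
# alphabetical execution order of the /models folder, so `db` exists
# before any other model file runs.  No imports are needed: web2py
# injects DAL and Field into the model execution environment.
db = DAL("postgres://myuser:mypassword@localhost:5432/mydb")

# models/1_tables.py (and every later model file, plus all controllers
# and views) can then use `db` directly without redefining it, e.g.:
#     db.define_table("person", Field("name"))
```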
Django 1.8 migrate: django_content_type does not exist
32,637,043
1
6
10,126
1
django,python-2.7,django-1.7,django-migrations,django-1.8
Well, I found the issue. I have auditlog installed as one of my apps. I removed it and migrate works fine.
0
0
0
0
2015-09-17T00:45:00.000
6
1.2
true
32,620,930
0
0
1
4
I just upgraded my django from 1.7.1 to 1.8.4. I tried to run python manage.py migrate but I got this error: django.db.utils.ProgrammingError: relation "django_content_type" does not exist I dropped my database, created a new one, and ran the command again. But I get the same error. Am I missing something? Do I need to do something for upgrading my django? EDIT: I downgraded back to 1.7.1 and it works. Is there a way to fix it for 1.8.4?
Django 1.8 migrate: django_content_type does not exist
67,501,508
0
6
10,126
1
django,python-2.7,django-1.7,django-migrations,django-1.8
I dropped the database and rebuilt it, then ran py manage.py makemigrations and py manage.py migrate in the PyCharm terminal, which fixed this problem. I think the reason is that django_content_type is Django's own table; if it is missing, migrate cannot run, so the database has to be dropped and rebuilt.
0
0
0
0
2015-09-17T00:45:00.000
6
0
false
32,620,930
0
0
1
4
I just upgraded my django from 1.7.1 to 1.8.4. I tried to run python manage.py migrate but I got this error: django.db.utils.ProgrammingError: relation "django_content_type" does not exist I dropped my database, created a new one, and ran the command again. But I get the same error. Am I missing something? Do I need to do something for upgrading my django? EDIT: I downgraded back to 1.7.1 and it works. Is there a way to fix it for 1.8.4?
Django 1.8 migrate: django_content_type does not exist
32,623,157
6
6
10,126
1
django,python-2.7,django-1.7,django-migrations,django-1.8
Delete the migrations folder from each of your apps and delete the database, then migrate your database. If this does not work, delete the django_migrations table from the database and add the "name" column to the django_content_type table: ALTER TABLE django_content_type ADD COLUMN name character varying(50) NOT NULL DEFAULT 'anyName'; and then run $ python manage.py migrate --fake-initial
0
0
0
0
2015-09-17T00:45:00.000
6
1
false
32,620,930
0
0
1
4
I just upgraded my django from 1.7.1 to 1.8.4. I tried to run python manage.py migrate but I got this error: django.db.utils.ProgrammingError: relation "django_content_type" does not exist I dropped my database, created a new one, and ran the command again. But I get the same error. Am I missing something? Do I need to do something for upgrading my django? EDIT: I downgraded back to 1.7.1 and it works. Is there a way to fix it for 1.8.4?
Django 1.8 migrate: django_content_type does not exist
37,074,120
2
6
10,126
1
django,python-2.7,django-1.7,django-migrations,django-1.8
Here's what I found/did. I am using django 1.8.13 and python 2.7. The problem did not occur for Sqlite. It did occur for PostgreSQL. I have an app the uses a GenericForeignKey (which relies on Contenttypes). I have another app that has a model that is linked to the first app via the GenericForeignKey. If I run makemigrations for both these apps, then migrate works.
0
0
0
0
2015-09-17T00:45:00.000
6
0.066568
false
32,620,930
0
0
1
4
I just upgraded my django from 1.7.1 to 1.8.4. I tried to run python manage.py migrate but I got this error: django.db.utils.ProgrammingError: relation "django_content_type" does not exist I dropped my database, created a new one, and ran the command again. But I get the same error. Am I missing something? Do I need to do something for upgrading my django? EDIT: I downgraded back to 1.7.1 and it works. Is there a way to fix it for 1.8.4?
How to create a record from one to one relationship reverse direction in Django?
32,631,283
2
2
325
0
python,django,models
No, there is no way of doing that, because there is no related manager between class A and class B like there is with ManyToMany or ForeignKey. You must create B directly and assign A to the proper field on B.
0
0
0
0
2015-09-17T11:35:00.000
1
1.2
true
32,629,369
0
0
1
1
Django offers related objects for one-to-many or many-to-many relationships. Using such an object, you can create a record from the reverse direction (for example with XXXX_set.create(.....) or XXXX_set.get_or_create(.....)). I want to use this kind of function with a OneToOne relationship. Is there any way to create a one-to-one related record from the reverse direction? For example, if class A(models.Model) and class B(models.Model) are tied by a one-to-one relationship and I create A and save() it, then I want B to be created through A as well.
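A minimal sketch of the forward-direction creation the answer recommends. The model and field names here are invented, and the fragment needs a real Django project (settings, app registry, database) to actually run.

```python
# models.py fragment -- requires a configured Django project to run.
from django.db import models

class A(models.Model):
    name = models.CharField(max_length=50)

class B(models.Model):
    # The OneToOneField lives on B, so B is always created "forward",
    # with its A assigned explicitly at creation time.
    a = models.OneToOneField(A, on_delete=models.CASCADE)

# Elsewhere (a view, the shell, etc.):
#   a = A.objects.create(name="example")
#   b = B.objects.create(a=a)   # create B directly, assigning A
#   a.b                         # the reverse accessor then resolves to b
```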
Use migrations like in Django in Rails models
32,639,897
5
4
433
0
python,ruby-on-rails,ruby,django,ruby-on-rails-3
The db/schema.rb file keeps track of the current state, and you can delete your migrations at any point and use the rake db:schema:load task to load the db/schema.rb into your DB.
0
0
0
0
2015-09-17T20:48:00.000
1
1.2
true
32,639,706
0
0
1
1
I have developed in both RoR- and Django-based projects, and I don't like the way RoR deals with migrations. For example, if I make huge changes to my models over 2 years, in Django I can delete all migrations and create a new, single file based on the actual state of my models. In RoR I will have something like 50 files, where some of them may be absolutely redundant (correct me if I'm wrong). I would like to have an RoR app that would create migrations based on models, like in Django (so I assume models would need some information about fields). Is there any gem/framework for RoR that would add such a feature?
Add models to specific user (Django)
32,660,678
0
2
1,112
0
python,mysql,django
You could create a model UserItems for each user with a ForeignKey pointing to the user and an item ID pointing to items. The UserItems model should store the unique item IDs of the items that belong to a user. This should scale better if items can be attached to multiple users or if items can exist that aren't attached to any user yet.
0
0
0
0
2015-09-18T20:27:00.000
2
0
false
32,660,496
0
0
1
1
Good evening, I am working on a little website for fun and want users to be able to add items to their accounts. What I am struggling with is coming up with a proper solution for how to implement this. I thought about adding the User object itself to the item's model via ForeignKey, but wouldn't it be necessary to filter through all entries in the end to find the elements attached to user x? While this would work, it seems quite inefficient, especially once the database has grown to some point. What would be a better solution?
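A sketch of the ForeignKey approach discussed above; the field names and related_name are assumptions, and the fragment needs a real Django project to run. The efficiency worry in the question is largely answered by the index Django creates on the foreign key column.

```python
# models.py fragment -- requires a configured Django project to run.
from django.conf import settings
from django.db import models

class Item(models.Model):
    name = models.CharField(max_length=100)
    # The ForeignKey puts an indexed owner_id column on the item table,
    # so filtering items by user is an index lookup, not a scan through
    # all entries.
    owner = models.ForeignKey(settings.AUTH_USER_MODEL,
                              on_delete=models.CASCADE,
                              related_name="items")

# In a view, request.user.items.all() (via related_name) then returns
# only that user's items.
```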
Python - Rebuild Javascript generated code using Requests Module
32,680,624
2
0
84
0
javascript,python,browser,beautifulsoup,python-requests
In most cases, it is enough to analyze the "network" tab of the developer tools and see the requests that are fired when you hit that button you mentioned. Once you understand those requests, you will be able to implement your scraper to run similar requests and grab the relevant data.
0
0
1
0
2015-09-20T14:35:00.000
1
0.379949
false
32,680,534
0
0
1
1
I'm facing a new problem. I'm writing a scraper for a website; usually for this kind of task I use Selenium, but in this case I cannot use anything that simulates a web browser. Researching on StackOverflow, I read that the best solution is to understand what the JavaScript did and rebuild the request over HTTP. Yeah, I understand that well in theory, but I don't know how to start, as I don't know the technologies involved well. In my specific case, some HTML is added to the page when a button is clicked. With the developer tools I set a breakpoint on the 'click' event, but from there, I'm literally lost. Can anyone link some resources and examples I can study?
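As a rough illustration of rebuilding such a request with only the standard library: the URL, payload shape, and header values below are placeholders for whatever the network tab actually shows for the site in question.

```python
import json
import urllib.request

def build_xhr_request(url, payload, referer):
    """Recreate the request the page's JavaScript fires on the button
    click.  The headers mirror what a browser's network tab typically
    shows for an AJAX POST; adjust them to match the real request."""
    data = json.dumps(payload).encode("utf-8")
    req = urllib.request.Request(url, data=data, method="POST")
    req.add_header("Content-Type", "application/json")
    req.add_header("X-Requested-With", "XMLHttpRequest")  # many sites check this
    req.add_header("Referer", referer)
    return req

# urllib.request.urlopen(req) would then return the same HTML fragment
# the JavaScript inserts into the page, ready for BeautifulSoup.
```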
How to find Post_ID which related to Facebook ADS API
32,905,674
0
0
502
0
python,facebook,api,ads
You can get the post that is used for a creative by fetching the object_story_id field on an Ad Creative. In the hierarchy of Facebook ads products, this is located here: Ad Account -> Ad Campaign -> Ad Set -> Ad -> Ad Creative. There is no way to look up in reverse which ads objects are using a particular post for their creative.
0
0
0
0
2015-09-20T17:04:00.000
1
1.2
true
32,682,020
0
0
1
1
I use the Python Facebook Ads API. I have an AD_SET_ID. How can I find the boosted post ID (which looks like 204807582871020_615180368500404) that is related to the AD_SET_ID?
Best server configuration for site with heavy calculations
32,682,202
1
1
229
0
python,nginx,apache2,server,uwsgi
Andrew, I believe that you can move some pieces of your deployment topology. My suggestion is to use nginx for delivering HTTP content, and to expose your application using some web framework, e.g. tornadoweb (my preference, considering its async core, and the best documented compared to twisted, even though twisted is a really great framework). You can communicate between nginx and tornado by proxying; it is simple to configure. You can replicate your service instance to distribute your calculation application inside the same machine and across other hosts. This can be easily configured with nginx upstreams. If you need more performance, you can break your application into small modules and integrate them using async messaging. You can choose zeromq or rabbitmq, among other solutions. Then you can have different topologies, gradually applied during the evolution of your application. 1st topology: nginx -> tornadoweb 2nd topology: nginx with load balancing (upstreams) -> tornadoweb replicated on [1..n] instances 3rd topology: [2nd topology] -> your app integrated by messaging (zeromq, amqp (rabbitmq), ...) My favorite is the 3rd, but you should start, for the moment, with the 1st and 2nd. There are a lot of options, but these three may be sufficient for a simple organization of your app.
0
0
0
1
2015-09-20T17:10:00.000
1
1.2
true
32,682,065
0
0
1
1
I have a site that performs some heavy calculations, using a library for symbolic math. Currently the average calculation time is 5 seconds. I know that I am asking too broad a question, but nevertheless: what is an optimized configuration for this type of site? What server is best for this? Currently I'm using Apache with mod_wsgi, but I don't know how to configure it correctly. On average the site receives 40 requests per second. How many processes, threads, MaxClients etc. should I set? Maybe it is better to use nginx/uwsgi/gunicorn (I'm using Python as the programming language)? Anyway, any info is highly appreciated.
django migrations - workflow with multiple dev branches
38,489,436
9
62
11,656
0
python,django,git,migration
I don't have a good solution to this, but I feel the pain. A post-checkout hook will be too late. If you are on branch A and you check out branch B, and B has fewer migrations than A, the rollback information is only in A and needs to be run before checkout. I hit this problem when jumping between several commits trying to locate the origin of a bug. Our database (even in development trim) is huge, so dropping and recreating isn't practical. I'm imagining a wrapper for git-checkout that: Notes the newest migration for each of your INSTALLED_APPS Looks in the requested branch and notes the newest migrations there For each app where the migrations in #1 are farther ahead than in #2, migrate back to the highest migration in #2 Check out the new branch For each app where migrations in #2 were ahead of #1, migrate forward A simple matter of programming!
0
0
0
0
2015-09-20T17:32:00.000
3
1
false
32,682,293
0
0
1
1
I'm curious how other django developers manage multiple code branches (in git for instance) with migrations. My problem is as follows: - we have multiple feature branches in git, some of them with django migrations (some of them altering fields, or removing them altogether) - when I switch branches (with git checkout some_other_branch) the database does not reflect always the new code, so I run into "random" errors, where a db table column does not exist anymore, etc... Right now, I simply drop the db and recreate it, but it means I have to recreate a bunch of dummy data to restart work. I can use fixtures, but it requires keeping track of what data goes where, it's a bit of a hassle. Is there a good/clean way of dealing with this use-case? I'm thinking a post-checkout git hook script could run the necessary migrations, but I don't even know if migration rollbacks are at all possible.
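The five steps imagined in the answer could start from a pure bookkeeping helper like this sketch, where app labels and migration numbers stand in for actually scanning each branch's migrations/ directories:

```python
def migration_plan(current, target):
    """Work out, per app, what to roll back before `git checkout` and what
    to migrate forward after it.  `current` and `target` map app labels to
    the newest migration number found on each branch."""
    rollbacks, forwards = {}, {}
    for app, cur in current.items():
        tgt = target.get(app, 0)
        if cur > tgt:
            # Roll back while the rollback code still exists -- i.e. before
            # checkout -- down to the highest migration the target branch has.
            rollbacks[app] = tgt
    for app, tgt in target.items():
        if tgt > current.get(app, 0):
            forwards[app] = tgt  # migrate forward after checkout
    return rollbacks, forwards
```

The wrapper would then call `manage.py migrate <app> <number>` for each entry in `rollbacks`, run the checkout, and migrate each entry in `forwards`. Migration rollbacks are possible as long as every operation involved is reversible.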
Python/Django run development server
44,451,289
2
1
844
0
django,python-2.7,django-1.8,manage.py
It seems pretty clear that Django is unable to find your database at the specified location. Possible reasons: You created the Django project using "sudo" or with a different Linux user than your current user, so your current user might not have permission to access that database. You can check the file permissions by typing ls -la in your project's root directory. You have configured the wrong path for the database file in your settings.py
0
0
0
0
2015-09-21T19:50:00.000
2
1.2
true
32,703,401
0
0
1
1
I'm trying to create a project using Django 1.8.4 and Python 2.7.10, but I can't execute the command python manage.py runserver. I can create the project and apps, but can't run the server. Please somebody help me, I'm new with Python/Django and I couldn't advance more. The cmd show the next error when the command is executed. C:\Users\Efren\SkyDrive\UniCosta\VIII\Ingeniería de Software II\Django\PrimerProyecto>python manage.py runserver Performing system checks... System check identified no issues (0 silenced). Unhandled exception in thread started by Traceback (most recent call last): File "C:\Python27\lib\site-packages\django\utils\autoreload.py", line 225, in wrapper fn(*args, **kwargs) File "C:\Python27\lib\site-packages\django\core\management\commands\runserver.py", line 112, in inner_run self.check_migrations() File "C:\Python27\lib\site-packages\django\core\management\commands\runserver.py", line 164, in check_migrations executor = MigrationExecutor(connections[DEFAULT_DB_ALIAS]) File "C:\Python27\lib\site-packages\django\db\migrations\executor.py", line 19, in init self.loader = MigrationLoader(self.connection) File "C:\Python27\lib\site-packages\django\db\migrations\loader.py", line 47, in init self.build_graph() File "C:\Python27\lib\site-packages\django\db\migrations\loader.py", line 182, in build_graph self.applied_migrations = recorder.applied_migrations() File "C:\Python27\lib\site-packages\django\db\migrations\recorder.py", line 59, in applied_migrations self.ensure_schema() File "C:\Python27\lib\site-packages\django\db\migrations\recorder.py", line 49, in ensure_schema if self.Migration._meta.db_table in self.connection.introspection.table_names(self.connection.cursor()): File "C:\Python27\lib\site-packages\django\db\backends\base\base.py", line 162, in cursor cursor = self.make_debug_cursor(self._cursor()) File "C:\Python27\lib\site-packages\django\db\backends\base\base.py", line 135, in _cursor self.ensure_connection() File 
"C:\Python27\lib\site-packages\django\db\backends\base\base.py", line 130, in ensure_connection self.connect() File "C:\Python27\lib\site-packages\django\db\utils.py", line 97, in exit six.reraise(dj_exc_type, dj_exc_value, traceback) File "C:\Python27\lib\site-packages\django\db\backends\base\base.py", line 130, in ensure_connection self.connect() File "C:\Python27\lib\site-packages\django\db\backends\base\base.py", line 119, in connect self.connection = self.get_new_connection(conn_params) File "C:\Python27\lib\site-packages\django\db\backends\sqlite3\base.py", line 204, in get_new_connection conn = Database.connect(**conn_params) django.db.utils.OperationalError: unable to open database file
Python - global Serial obj instance accessible from multiple modules
32,709,121
0
3
328
0
python,global
Delegate the opening and management of the serial port to a separate daemon, and use a UNIX domain socket to transfer the file descriptor for the serial port to the client programs.
0
0
0
1
2015-09-22T04:53:00.000
2
0
false
32,708,630
1
0
1
1
I have 5 different games written in python that run on a raspberry pi. Each game needs to pass data in and out to a controller using a serial connection. The games get called by some other code (written in nodeJS) that lets the user select any of the games. I'm thinking I don't want to open and close a serial port every time I start and finish a game. Is there anyway to make a serial object instance "global", open it once, and then access it from multiple game modules, all of which can open and close at will? I see that if I make a module which assigns a Serial object to a variable (using PySerial) I can access that variable from any module that goes on to import this first module, but I can see using the id() function that they are actually different objects - different instances - when they are imported by the various games. Any ideas about how to do this?
Python CGI With Data Return
32,710,858
1
0
490
0
javascript,python,html,cgi
Well, if you don't know what CGI is and find that what you ask for is "far more complicated than it should be", you first have to learn the HTTP protocol, obviously, and that's way too broad for a SO answer. Basically what you want requires: an html document some javascript code, either linked from the document or embedded into it a web server to serve the document (and the javascript) a web server (it can of course be the same as the previous one) that knows how to invoke your python script and return the result as an HTTP response (you'll probably want a json content type) - this can be done with plain CGI or with a wsgi or fcgi connector; in your case CGI might well be enough. Then in the browser your javascript code will have to issue a GET request (ajax) every x seconds to the web server hosting the Python script and update the DOM accordingly. This is all ordinary web programming, and as I said, a basic understanding of the HTTP protocol is the starting point.
0
0
0
0
2015-09-22T07:00:00.000
2
0.099668
false
32,710,348
0
0
1
1
I'm trying to accomplish the following: Pull information from a module through Python. [Accomplished] Constantly pull information from Python for use in HTML. Avoid writing the entire HTML/CSS/JS document in print statements. I've seen the term CGI thrown around, but don't really understand how it works. Simply put, I want to run the Python script which returns an integer value. I then would like to take that integer value into JavaScript so that it may update the designated tag. It should be able to execute the Python script every two seconds, receive the output, then apply it to the page. I do not want to write out the entire HTML document in Python by doing one line at a time, as I've seen some people doing on sites I've found. It seems like doing something like this is far more complicated than it should be. The page should run, call the script for its output, then give me the output to use.
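A minimal sketch of the CGI side described above (the helper name is invented). The page's JavaScript would then poll this script every two seconds, e.g. with setInterval and an XMLHttpRequest/fetch call, and write the returned integer into the designated tag.

```python
import json

def cgi_response(value):
    """What a minimal CGI script prints: an HTTP header block followed by
    a JSON payload the page's JavaScript can parse -- no HTML is generated
    in Python at all."""
    body = json.dumps({"value": value})
    return "Content-Type: application/json\r\n\r\n" + body

if __name__ == "__main__":
    # When run as a CGI script, the web server captures stdout as the
    # HTTP response.  42 stands in for the integer your module computes.
    print(cgi_response(42))
```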
Django not providing CSRF token for AngularJS frontend
32,714,567
1
1
49
0
javascript,python,angularjs,django,cookies
You won't be able to get cookies on another domain, because all cookies are set per domain; this is for security reasons. If you want to access the session and cookies on another domain, you must copy them. You can do this by sending a request with a special token (for validation) and creating a view in Django that fetches data from some storage based on that token and populates the user's cookies, so that on the next request they will be available.
0
0
0
0
2015-09-22T10:25:00.000
1
1.2
true
32,714,470
0
0
1
1
I'm trying to send a cross domain PUT request from AngularJS frontend to Django backend. It's all fine when I'm running on the same domain (frontend at localhost:8000 and backend at localhost:8001), I'm getting my csrftoken from $cookies and can send a successful request. The problem begins when I switch the backend to an external QA server. I get empty $cookies, no sessionid nor csrftoken cookies at all. I ran out of ideas and that's why I'm asking for help here, thanks in advance.
No Module Named MySqlDb in Python Virtual Enviroment
32,723,517
1
0
660
1
python,mysql,django
This approach worked ! I was able to install the mysqlclient inside the virtual environment through the following command:- python -m pip install mysqlclient Thanks Much..!!!!!
0
0
0
0
2015-09-22T11:02:00.000
3
0.066568
false
32,715,175
0
0
1
1
I had posted about this error some time back but need some more clarification on this. I'm currently building out a Django Web Application using Visual Studio 2013 on a Windows 10 machine (Running Python3.4). While starting out I was constantly dealing with the MySQL connectivity issue, for which I did a mysqlclient pip-install. I had created two projects that use MySQL as a backend and after installing the mysqlclient I was able to connect to the database through the current project I was working on. When I opened the second project and tried to connect to the database, I got the same 'No Module called MySqlDB' error. Now, the difference between both projects was that the first one was NOT created within a Virtual Environment whereas the second was. So I have come to deduce that projects created within the Python virtual environment are not able to connect to the database. Can someone here please help me in getting this problem solved. I need to know how the mysqlclient module can be loaded onto a virtual environment so that a project can use it. Thanks
Lifetime of objects in a flask app
32,732,460
1
1
958
0
python-3.x,flask
Short answer: It depends on when and where the class is initialised. Objects have little to do with a user logging in and logging out. Object lifetimes depend on when and where they are initialised. Objects initialised outside a function or class are effectively singletons and last as long as the application instance exists. Objects initialised inside a class last as long as the original object lasts. Objects initialised inside a function exist until the function completes execution. Now, classes that handle database requests are better kept as singletons. This avoids the necessity of creating new database connections every time a query has to be executed. So the easiest way to create a singleton is to declare it as a variable in a module, outside any function or class.
0
0
0
0
2015-09-22T19:42:00.000
1
1.2
true
32,725,493
0
0
1
1
I have a web app that retrieves data from a database and displays it on a UI. I have a class called table that handles the database requests based on the URL variables. My question is: Does flask recycle objects when a new URL is requested? Or does it keep the objects in memory until the user logs out? Should I have one table object and just update the query every time the URL changes? Or should I just create a new object?
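A small self-contained illustration of the module-level singleton the answer recommends; `Table` here is a trivial stand-in for the question's database-access class.

```python
# table_store.py -- illustrative stand-in for the question's Table class.
class Table:
    """Pretend database-access object; constructing one is 'expensive'."""
    instances_created = 0

    def __init__(self):
        Table.instances_created += 1

    def query(self, url_params):
        return {"params": url_params}

# A module-level object is created once per worker process and survives
# across requests -- this is the singleton the answer describes.
table = Table()

def view(url_params):
    """Stand-in for a Flask view: it reuses the shared object and only
    the query changes per request."""
    return table.query(url_params)
```

Two successive "requests" reuse the same `Table` instance; only the query parameters differ, which answers the "one object vs. a new object per URL" question.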
Modelica Parameter studies with python
32,745,918
2
3
738
0
python,modelica
It might be a structural parameter; these are evaluated as well. It should work if you explicitly set Evaluate=False for the parameter that you want to study. Is it not visible in the variable browser, or is it just greyed out and constant? If it is not visible at all, you should check whether it is protected.
0
0
0
0
2015-09-23T12:19:00.000
2
0.197375
false
32,739,428
0
0
1
1
I want to run parameter studies in different Modelica building libraries (Buildings, IDEAS) with Python, for example changing the infiltration rate. I tried simulateModel and simulateExtendedModel(..., "zone.n50", [value]). My questions: Why is it not possible to translate the model and then change the parameter? Warning: Setting zone.n50 has no effect in model. After translation you can only set literal start-values and non-evaluated parameters. It is also not possible to run simulateExtendedModel. When I go to the command line in Dymola and ask for zone.n50, I get the actual value (that I have defined in Python), but in the result file (and the plotted variable) it is always the standard n50 value. So my question: How can I change values before running (and translating?) the simulation? The value for the parameter is also not visible in the variable browser. Kind regards
Python Django DRF API one time session/token/pass authentication without a username/password
32,759,648
1
2
1,261
0
python,django,authentication,django-rest-framework,one-time-password
You can create a view that generates a time-based OTP and then use it in a custom auth module to authenticate against a single user. You can also use JWT with an expiry time to authenticate against a single user.
0
0
0
0
2015-09-23T15:28:00.000
1
1.2
true
32,743,668
0
0
1
1
I have a Django and django rest framework project where I want a mobile to be able to request a token and then use that token for x minutes before they're disconnected. I do not want to create a user for each mobile device, I just want a one time password. I tried using the auth system built into drf, however it required a user. So I was thinking about just using the onetimepass package to generate a one time token.
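For illustration, a time-based one-time password can be generated with the standard library alone; this sketch follows RFC 6238 with HMAC-SHA1. Wiring it into a DRF custom authentication class (or using the onetimepass package instead) is left out.

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, digits=6, step=30):
    """Time-based OTP per RFC 6238 (HMAC-SHA1 flavour).

    The same code computed on server and device matches within one
    `step`-second window, so the token is effectively self-expiring.
    """
    if for_time is None:
        for_time = time.time()
    counter = int(for_time) // step            # moving factor
    msg = struct.pack(">Q", counter)           # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                 # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

A view could hand out the current code and a custom authentication backend could accept it for the next few time steps, with no per-device user accounts involved.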
Is it necessary to use virtualenv to use Flask framework?
32,756,729
12
6
2,441
0
python,flask,virtualenv
No, there is no requirement to use a virtualenv. No project ever would require you to use one; it is just a method of insulating a collection of Python libraries from other projects. I personally do strongly recommend you use a virtualenv, because it makes it much, much easier to swap out versions of libraries and not affect other Python projects. Without a virtualenv, you just continue with installing the dependencies, and they'll end up in your system libraries collection. Installing the Flask project with pip will pull in a few other packages, such as Werkzeug, Jinja2 and itsdangerous. These would all be installed 'globally'.
0
0
0
0
2015-09-24T08:25:00.000
1
1.2
true
32,756,711
1
0
1
1
I just started exploring Flask. Earlier I tried to explore Django but found it a bit complicated. However, installing Flask requires us to install virtualenv first which, as far as I recall, is not required in the case of Django. In case it is not required, how do I go ahead without virtualenv?
Can Django send multi-part responses for a single request?
32,759,144
2
2
1,409
0
python,django,api,rest,response
As far as I know, no, you can't send a multipart HTTP response, at least not yet. A multipart body is only really workable in HTTP requests. Why? Because no browser I know of completely supports multipart responses. Firefox 3.5: renders only the last part; the others are ignored. IE 8: shows all the content as if it were text/plain, including the boundaries. Chrome 3: saves all the content in a single file; nothing is rendered. Safari 4: saves all the content in a single file; nothing is rendered. Opera 10.10: something weird - it starts rendering the first part as text/plain, then clears everything, and the loading progress bar hangs at 31%. (Data credits: Diego Jancic)
0
0
0
0
2015-09-24T10:30:00.000
1
1.2
true
32,759,082
0
0
1
1
I apologise if this is a daft question. I'm currently writing against a Django API (which I also maintain) and wish under certain circumstances to be able to generate multiple partial responses in the case where a single request yields a large number of objects, rather than sending the entire JSON structure as a single response. Is there a technique to do this? It needs to follow a standard such that client systems using different request libraries would be able to make use of the functionality. The issue is that the client system, at the point of asking, does not know the number of objects that will be present in the response. If this is not possible, then I will have to chain requests on the client end - for example, getting the first 20 objects & if the response suggests there will be more, requesting the next 20 etc. This approach is an OK work-around, but any subsequent requests rely on the previous response. I'd rather ask once and have some kind of multi-part response.
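Since chaining paged requests is the workable approach the question falls back on, the client side can be wrapped in a generator like this sketch. The response shape `{"results": ..., "more": ...}` is an assumption about the API, and `fetch_page` stands in for the real HTTP call.

```python
def fetch_all(fetch_page, page_size=20):
    """Chain partial requests client-side: ask for `page_size` objects at
    a time until the server reports there are no more.

    `fetch_page(offset, limit)` is a placeholder for the real HTTP call
    and must return a dict shaped like {"results": [...], "more": bool}.
    """
    offset = 0
    while True:
        page = fetch_page(offset, page_size)
        yield from page["results"]
        if not page["more"]:
            break
        offset += page_size  # each request depends on the previous answer
```

Each subsequent request still relies on the previous response, as the question notes, but the caller sees a single flat iterator regardless of how many objects the query yields.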
Django - Server error 500 with URL tag
32,765,111
1
1
419
0
python,django,url,tags,server-error
I solved the issue by adding # -*- coding: utf-8 -*- at the beginning of all my Python (*.py) files, because the host runs an old Python version (2.6). Thanks to all :)
0
0
0
0
2015-09-24T11:55:00.000
1
0.197375
false
32,760,681
0
0
1
1
I have an issue when I put my Django project online. If I have a {% url ... %} tag in my main html file (base.html), I see "Server Error 500". If I remove all the lines with {% url ... %}, my Django website works fine and the "Server Error 500" disappears! I have this issue only with the URL tag. For information, I have no issue when I work locally (127.0.0.1) on my computer. My hosting (Alwaysdata.com) uses Python 2.6 and Django 1.6.4. Could you help me please? :)
How to generate temporary downloads in Flask?
32,772,507
0
2
684
0
python,apache,flask
You could use S3 to host the file with a temporary URL. Use Flask to upload the file to S3 (using boto3), but use a dynamically-generated temporary key. Example URL: http://yourbucket.s3.amazon.com/static/c258d53d-bfa4-453a-8af1-f069d278732c/sound.mp3 Then, when you tell the user where to download the file, give them that URL. You can then delete the S3 file at the end of the time period using a cron job. This way, Amazon S3 is serving the file directly, resulting in a complete bypass of your Flask server.
0
0
0
0
2015-09-24T23:06:00.000
1
0
false
32,772,343
0
0
1
1
I have a Flask app that lets users download MP3 files. How can I make it so the URL for the download is only valid for a certain time period? For example, instead of letting anyone simply go to example.com/static/sound.mp3 and access the file, I want to validate each request to prevent an unnecessary amount of bandwidth. I am using an Apache server although I may consider switching to another if it's easier to implement this. Also, I don't want to use Flask to serve the file because this would cause a performance overhead by forcing Flask to directly serve the file to the user. Rather, it should use Apache to serve the file.
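An alternative sketch without S3: sign an expiry timestamp into the URL with the standard library, validate it in a lightweight Flask route, and still let Apache (via an internal redirect such as X-Sendfile) actually serve the bytes. `SECRET` and the URL layout are placeholders.

```python
import hashlib
import hmac
import time

SECRET = b"change-me"  # server-side secret; placeholder value

def sign_url(path, expires_at):
    """Produce a token tying the file path to an expiry timestamp.
    A real route would embed both in the download URL, e.g.
    /download/sound.mp3?expires=1700000000&token=...
    """
    msg = f"{path}:{expires_at}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def check_url(path, expires_at, token, now=None):
    """Reject expired or tampered links before handing the request off to
    Apache (or to S3, as the answer suggests) to serve the file itself."""
    if (now if now is not None else time.time()) > expires_at:
        return False
    return hmac.compare_digest(sign_url(path, expires_at), token)
```

Because the check is a single HMAC computation, the Flask process does almost no work per download, which keeps the performance overhead concern in the question at bay.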
Can anyone offer a full example of django-haystack with solr 5.x?
32,996,183
2
2
520
0
python,django,solr
It seems django-haystack does not support Solr 5 well. Solr 5's solrconfig.xml file for a core uses ManagedIndexSchemaFactory as the default schemaFactory. If you change it to ClassicIndexSchemaFactory, you will run into trouble with your schema.xml, which is generated by python manage.py build_solr_schema: lots of field types are not supported. Probably going back to Solr 4 would be a better choice.
0
0
0
0
2015-09-25T04:25:00.000
1
1.2
true
32,774,796
0
0
1
1
Trying to find an example of how to build an application using solr 5.x with django-haystack, but most examples online are using solr 4.x or solr 3.x. Can anyone give some instructions on how to work with solr 5.x using django-haystack, or just offer some example project? Thanks!
How to remove Group and Permission models from Django
32,786,679
2
1
3,099
0
python,django,django-admin
If you don't want to use Django's Group and Permission, maybe you don't want to use django.contrib.auth at all? If that is the case, simply remove django.contrib.auth from INSTALLED_APPS. However, I want to point out that I can't really think of a use case where this would make sense. You have to have a really good reason for writing your own Group and Permission.
0
0
0
0
2015-09-25T16:15:00.000
2
1.2
true
32,786,482
0
0
1
1
I want to create my own Group and Permission models. However, Django has its built-in models for this, which are giving me naming conflicts with its own Group model. I know you can deregister a model to remove it from the admin, but this does not help me. What is best here? Is it possible to use Django's own Group model and add to it (i.e. add more fields), or can I remove the built-in Group model altogether? My user already extends AbstractBaseUser.
When should HStoreField be used instead of JSONField?
32,792,698
20
21
4,193
0
python,django,postgresql,hstore,jsonb
If you need indexing, use jsonb if you're on 9.4 or newer, otherwise hstore. There's really no reason to prefer hstore over jsonb if both are available. If you don't need indexing and fast processing and you're just storing and retrieving validated data, use plain json. Unlike the other two options this preserves duplicate keys, formatting, key ordering, etc.
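The duplicate-key difference is easy to see from Python's own json module. This is just an illustration of the semantics, no Postgres involved: default parsing collapses duplicates the way jsonb does on insert, while a hook over the raw pairs shows what a plain json column (which keeps the original text verbatim) can still recover:

```python
import json

raw = '{"a": 1, "a": 2, "z": 0}'

# Default parsing collapses duplicate keys, keeping the last value --
# this mirrors what Postgres jsonb does when it normalises on insert.
assert json.loads(raw) == {"a": 2, "z": 0}

# With object_pairs_hook every key/value pair survives, which is only
# possible when the original text was kept verbatim -- the plain-json case.
pairs = json.loads(raw, object_pairs_hook=lambda p: p)
assert pairs == [("a", 1), ("a", 2), ("z", 0)]
```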
0
0
0
0
2015-09-25T23:03:00.000
1
1.2
true
32,791,851
0
0
1
1
Django 1.8 provides HStoreField and Django 1.9 will provide JSONField (which uses jsonb) for PostgreSQL. My understanding is that hstore is faster than json, but does not allow nesting and only allows strings. When should one be used over the other? Should one be preferred over the other? Is hstore still the clear winner in performance compared to jsonb?
Detect page is fully loaded using requests library
32,796,957
0
0
522
0
python,python-requests,loaded
The first requests GET will return you the entire page, but requests is not a browser: it does not parse the content. When you load a page with a browser, it usually makes 10-50 additional requests for the various resources, runs the JavaScript, and so on.
0
0
1
0
2015-09-26T11:35:00.000
1
0
false
32,796,751
0
0
1
1
I want to know whether the response from requests.get(url) only arrives once the page is fully loaded. I did tests with around 200 refreshes of my page, and randomly, once or twice, the page does not load the footer.
Format python code in Spyder IDE
33,624,414
2
3
5,911
0
python,ide,format,spyder
I tried this: Source -> Fix Indentation, Remove Trailing Spaces. This is not the most efficient way, but it seems there are no key combinations for these actions. Even in the preferences there is no way to add a new shortcut for this.
0
0
0
0
2015-09-27T20:57:00.000
1
0.379949
false
32,812,765
1
0
1
1
Can anyone please advise on the key combination to format Python code in the Anaconda/Spyder IDE? In the Eclipse IDE, when coding in Java, I usually use Command-F; however, in the Spyder IDE that key combination makes the search window pop up.
Send element from BeautifulSoup to Selenium
32,815,252
0
3
386
0
python,selenium,beautifulsoup
These are completely different tools that, in general, cannot be considered alternatives, though they somewhat overlap on the "locating elements" front. The located elements are very different, though: one is a Tag instance in BeautifulSoup, and the other is a webdriver WebElement instance that can actually be interacted with; it is "live". Both tools support CSS selectors. The support is quite different, but if you don't go in depth with things like multiple attribute checks (a[class*=test1][class^=test] is not going to work in BeautifulSoup, for instance), nth-child, nth-of-type, going sideways with +, etc., you can assume things are going to work on both ends. Please add examples of the elements you want to correlate and we can work through them.
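If you only need to go one way (bs4 element to Selenium), one workable bridge is to rebuild a CSS selector from the Tag's name and attributes and hand it to Selenium's CSS lookup. A rough sketch; the helper is mine, it takes plain name/attrs values (exactly what tag.name and tag.attrs give you) and makes no attempt to disambiguate non-unique class combinations:

```python
def css_selector(name, attrs=None):
    """Build a CSS selector from an element's tag name and attributes,
    e.g. the values taken off a BeautifulSoup Tag (tag.name, tag.attrs)."""
    attrs = attrs or {}
    if "id" in attrs:
        # An id is unique in the document, so tag#id is specific enough.
        return f"{name}#{attrs['id']}"
    sel = name
    for cls in attrs.get("class", []):
        sel += f".{cls}"
    for key, value in attrs.items():
        if key in ("id", "class"):
            continue
        sel += f'[{key}="{value}"]'
    return sel
```

On the Selenium side you would then do something like driver.find_element_by_css_selector(css_selector(tag.name, tag.attrs)).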
0
0
1
0
2015-09-27T22:49:00.000
1
0
false
32,813,646
0
0
1
1
I am using Selenium to navigate a webpage. To analyze the elements and data, I use BeautifulSoup because of the excellent options they give, including searching with regex. So now I have an element located in BeautifulSoup. I want to select it in Selenium. I figured I could somehow pass a XPath or CSS selector from the BeautifulSoup element to the Selenium element. Is there a direct way of going from a BeautifulSoup element to Selenium element?
how to keep django models in database for users customized view?
32,830,790
0
1
106
0
python,django,django-models
It is not clear what you mean by "my user". Is this just the admin user who sets this configuration globally for the site, or does every user of the site have his/her own preferences? In the latter case, make a new model called Preferences which has a one-to-one relation to the user model. Then, in your view, create three separate queries ordered according to the preference values.
0
0
0
0
2015-09-28T06:36:00.000
2
0
false
32,816,966
0
0
1
1
Suppose I have three models, and my view shows all the items of these models. I want to give my users the privilege to set which model's objects are shown in the view first, second and third. What is the best way to implement this?
How to load balance celery tasks across several servers?
58,830,493
3
6
4,182
0
python,rabbitmq,celery
The best option is to use celery.send_task from the producing server, then deploy the workers onto n instances. The workers can then be run as @ealeon mentioned, using celery -A proj worker -l info -Ofair. This way, load will be distributed across all servers without the codebase having to be present on the consuming servers.
0
1
0
0
2015-09-28T20:18:00.000
2
0.291313
false
32,831,111
0
0
1
1
I'm running celery on multiple servers, each with a concurrency of 2 or more, and I want to load-balance celery tasks so that the server with the lowest CPU usage can process my celery tasks. For example, let's say I have 2 servers (A and B), each with a concurrency of 2. If I have 2 tasks in the queue, I want A to process one task and B to process the other. But currently it's possible that the first process on A will execute one task and the second process on A will execute the second task, while B sits idle. Is there a simple way, by means of celery extensions or config, to route tasks to the server with the lowest CPU usage?
javascript accepting multiple date format
32,847,574
0
0
163
0
javascript,python,date,web,format
You can use JavaScript to give the user feedback that a correct format is being used. But if you are sending any data to your server, be sure to verify the data on the server as well. To verify the date format you can use regular expressions and check whether any of the allowed formats matches, iterating through all allowed possibilities until one is found.
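The question's stack is JavaScript, but since the validation has to happen server-side anyway, here is the iterate-through-formats idea sketched in Python. The list of accepted formats is an assumption; extend it to whatever your users are allowed to type:

```python
from datetime import datetime

# Hypothetical list of accepted input formats, most specific first.
ACCEPTED = ["%Y-%m-%d", "%m-%d-%Y", "%m/%d/%Y", "%B %d %Y"]

def parse_date(text):
    """Try each allowed format until one matches; normalise to YYYY-MM-DD."""
    for fmt in ACCEPTED:
        try:
            return datetime.strptime(text.strip(), fmt).strftime("%Y-%m-%d")
        except ValueError:
            continue  # this format didn't match, try the next one
    raise ValueError(f"unrecognised date: {text!r}")
```

Whatever the user typed, the rest of the application only ever sees the one canonical format.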
0
0
1
0
2015-09-29T15:04:00.000
2
0
false
32,847,487
1
0
1
1
I should preface this post by saying that I am a very elementary developer with a generic IS degree. Without going into too much detail, I was given a moderately large web application from an interning software engineer to support an enhance if need be. It was written primarily in Python, JavaScript and HTML5 and utilizes a Google Map API to visually represent the location and uses of given inputs. This leads me to my question. There is a date picker modal that the application/user utilizes. They pick a START and END date, in the default format YYYY-MM-DD (if the user does not use that exact format (i.e. 2015-09-29) the date picker will not work), and the application then goes to the DB and picks the given inputs between those dates and represents them on the map. I have been told that, for usability, I have to make the program accept multiple date formats (i.e. September 29 2015, 09-29-2015, 9-29-2015, 9/29/2015). How would I go about doing this?
Maya right click context sensitive menu
32,870,983
0
0
955
0
python,contextmenu,menuitem,maya,right-click
Okay, I think I have found a solution. It's the best I have found, but it is not very handy: create a copy of buildobjectMenuItemsNow.mel and dagMenuProc.mel in your script folder, so Maya will read those instead of the native ones. Once you have done that, you can modify dagMenuProc.mel without breaking anything, as you're working on a copy. The proc that needs modifications is the last one, dagMenuProc (line 2240), and you can start modifying inside the loop (line 2266). What I have done is call a Python file that adds some more items to the menu; you can obviously also remove items by deleting a few MEL lines. I hope this can help someone. And if there is another way of doing this, I would love to hear about it!
0
0
0
0
2015-09-29T16:35:00.000
1
1.2
true
32,849,325
0
0
1
1
I am trying to edit the right-click context-sensitive menu in Maya. I have found how to add a menuItem, but I would like to have this item at the top of the list, not at the bottom... I think in this case I need to deleteAllItems from the menu, add mine, and then re-add the default Maya ones, but I don't know how to re-add them. Where can I find them? Most of the topics say "modify Maya's source code", but that's not even an option. Any suggestions? Thx!
Redirect to a specific search result
32,870,284
0
0
66
0
python,django,web-scraping,url-redirection
Are you storing the results in a database or some other persistent storage mechanism (maybe even in a KV store)? Once you hold the results somewhere on your website, you can redirect from your results page via the Book Now button to a view with the result's identifying value (say, some hash) and have that view redirect to the website offering the service.
0
0
1
0
2015-09-30T15:34:00.000
1
1.2
true
32,870,164
0
0
1
1
I have a scraper to pull search results from a number of travel websites. Now that I have the search results nicely displayed with "Book Now" buttons, I want those "Book Now" buttons to redirect to the specific search result so the user can book that specific travel search result. These search results are dynamic so the redirect may change. What's the easiest way to accomplish this? I'm building this search engine in Python/Django and have Django CMS.
Run Django 1.9 on Python 3.5 instead of 2.7
32,871,849
3
4
13,030
0
python,django
Virtualenv is your friend. My life got so much easier when I started using it. You can create a virtualenv to use a particular version of Python, then set up your requirements.txt file to install all the packages you need using pip.
0
0
0
0
2015-09-30T16:29:00.000
5
0.119427
false
32,871,265
0
0
1
2
I have Python 2.7 and 3.5 running on OS X 10.10, and also Django 1.9a, which supports both Python versions. The problem is I want to run Django on Python 3.5 instead of 2.7. In some threads I found suggestions to run it by including the Python version, e.g. python3.5 manage.py runserver, but I get this error: File "manage.py", line 8, in <module> from django.core.management import execute_from_command_line ImportError: No module named 'django' FYI, I have no problem running Python 3.5 on the same machine. How can I solve this? Thank you very much!
Run Django 1.9 on Python 3.5 instead of 2.7
32,871,487
1
4
13,030
0
python,django
You first have to install Django for 3.5, which is a separate install from Django for 2.7. If you're using pip, make sure to use pip3. Otherwise, make sure to run setup.py using python3.5.
0
0
0
0
2015-09-30T16:29:00.000
5
0.039979
false
32,871,265
0
0
1
2
I have Python 2.7 and 3.5 running on OS X 10.10, and also Django 1.9a, which supports both Python versions. The problem is I want to run Django on Python 3.5 instead of 2.7. In some threads I found suggestions to run it by including the Python version, e.g. python3.5 manage.py runserver, but I get this error: File "manage.py", line 8, in <module> from django.core.management import execute_from_command_line ImportError: No module named 'django' FYI, I have no problem running Python 3.5 on the same machine. How can I solve this? Thank you very much!
Google app engine, full text search for empty (None) field
57,089,611
0
1
428
0
python,google-app-engine,full-text-search
Have you tried with: NOT logo_url: Null
0
1
0
0
2015-10-01T08:28:00.000
2
0
false
32,882,856
0
0
1
1
I'd like to use Google App Engine full-text search to search for items in an index that have their logo set to None. I tried "NOT logo_url:''". Is there any way to write such a query, or do I have to add another property such as has_logo?
Flask: 'session' vs. 'g'?
32,910,056
92
66
16,777
0
python,session,flask
No, g is not an object to hang session data on: g data is not persisted between requests. session gives you a place to store data per specific browser. As a user of your Flask app returns for more requests with a specific browser, the session data is carried over across those requests. g, on the other hand, is data shared between different parts of your code base within one request cycle. g can be set up during before_request hooks, is still available during the teardown_request phase, and once the request is done and sent out to the client, g is cleared.
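A toy model of the two lifetimes may help. This is plain Python imitating Flask's behavior, not Flask itself; the dispatcher and view names are made up:

```python
# Toy model: `g` lives for exactly one request, `session` persists
# per browser across requests.

sessions = {}  # browser_id -> dict; survives between requests

def handle_request(browser_id, view):
    g = {}  # fresh object every request, like flask.g
    session = sessions.setdefault(browser_id, {})
    result = view(g, session)
    # `g` goes out of scope here and is discarded;
    # `session` stays behind in `sessions` for the next request.
    return result

def counting_view(g, session):
    g["db"] = "opened"  # request-scoped resource, gone after this request
    session["hits"] = session.get("hits", 0) + 1  # persists per browser
    return session["hits"]
```

Two requests from the same browser see a growing hit count (session persisted), while each request got its own fresh g.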
0
0
0
0
2015-10-02T14:47:00.000
1
1.2
true
32,909,851
1
0
1
1
I'm trying to understand the differences in functionality and purpose between g and session. Both are objects to 'hang' session data on, am I right? If so, what exactly are the differences and which one should I use in what cases?
Django: loaddata in migrations errors
32,912,200
2
8
5,546
0
python,django,database-migration
When you run python manage.py migrate, it tries to load your testmodel.json from the fixtures folder, but your model (after the update) no longer matches the data in testmodel.json. You could try this: rename your fixtures directory to _fixtures, run python manage.py migrate, and then, optionally, rename _fixtures back to fixtures and load your data as before with the migrate command, or load the data with python manage.py loaddata app/_fixtures/testmodel.json
0
0
0
0
2015-10-02T16:54:00.000
3
0.132549
false
32,912,112
0
0
1
1
Something really annoying is happening to me since using Django migrations (not south) and using loaddata for fixtures inside of them. Here is a simple way to reproduce my problem: create a new model Testmodel with 1 field field1 (CharField or whatever) create an associated migration (let's say 0001) with makemigrations run the migration and add some data in the new table dump the data in a fixture testmodel.json create a migration with call_command('loaddata', 'testmodel.json'): migration 0002 add some a new field to the model: field2 create an associated migration (0003) Now, commit that, and put your db in the state just before the changes: ./manage.py migrate myapp zero. So you are in the same state as your teammate that didn't get your changes yet. If you try to run ./manage.py migrate again you will get a ProgrammingError at migration 0002 saying that "column field2 does not exist". It seems it's because loaddata is looking into your model (which is already having field2), and not just applying the fixture to the db. This can happen in multiple cases when working in a team, and also making the test runner fail. Did I get something wrong? Is it a bug? What should be done is those cases? -- I am using django 1.7
Should I dockerize different servers running on a single machine?
33,002,100
0
0
27
0
python,docker,webserver
Yes, Docker prefers the "one process per container" approach. I would not see this as overkill; quite the contrary: in your case it might rather soon be beneficial to have the instances of different users better isolated. There are fewer security risks, and it is easier to maintain: say you need a new version of everything for the new version of your app, but would like to keep some of the users on an old version due to a blocker.
0
1
0
0
2015-10-02T18:00:00.000
1
1.2
true
32,913,119
0
0
1
1
I want to make a simple service where each user will have his own (simple and light) webserver. I want to use an AWS instance to do this. I understand that I can do that by starting Python's SimpleHTTPServer (proof of concept) multiple times on different ports, and that the number of servers I can have depends on the resources. My question is: is it better practice, or overkill, to Dockerize each user with his server?
Creating a webpage crawler that finds and maches user input
32,915,448
1
0
41
0
java,php,python,html,mysql
The programming language does not really matter for the way to solve the problem; you can implement it in the language you are comfortable with. There are two basic ways to solve the problem: (1) use a crawler which creates an index of the words found on the different pages, then use that index to look up the searched word; or (2) when the user has entered the search expression, start crawling the pages and check whether the search expression is found. Of course both solutions have different (dis)advantages. For example: in (1) you need to do an initial crawl (and update it later when the pages change) and store the crawl result in some sort of database, but you will get instant search results. In (2) you don't need a database/datastore, but you will have to wait until all pages are searched before showing the final result list.
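A minimal sketch of option (1), index first, then look up, in Python; the function names and the pages-as-a-dict input are my own simplifications (a real crawler would fetch and strip the HTML first):

```python
def build_index(pages):
    """Map each word to the set of pages containing it (an inverted index)."""
    index = {}
    for url, text in pages.items():
        for word in set(text.lower().split()):
            index.setdefault(word, set()).add(url)
    return index

def search(index, word):
    """Instant lookup: no crawling happens at query time."""
    return sorted(index.get(word.lower(), ()))
```

The index is built once (and rebuilt when pages change); each user query is then a single dictionary lookup.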
0
0
1
0
2015-10-02T18:46:00.000
1
1.2
true
32,913,859
0
0
1
1
I made a website with many pages; on each page is a sample essay. The homepage is a page with a search field. I'm attempting to design a system where a user can type in a word and, when they click 'search', multiple paragraphs containing the searched word are loaded onto the page from the pages with sample essays. I'm 14 and have been programming for about 2 years. Can anyone please explain the programming languages/technologies I'll need to accomplish this task and provide suggestions as to how I can achieve it? All I have so far are the web pages with articles and a custom search page I've made with PHP. Any suggestions?
GAE/P: Storing list of keys to guarantee getting up to date data
32,941,257
1
0
60
1
python,google-app-engine,google-cloud-datastore,eventual-consistency
If, like you say in the comments, your lists change rarely and you can't use ancestors (I assume because of the write frequency in the rest of your system), your proposed solution would work fine. You can do as many get_multi calls, and as frequently, as you wish; the datastore can handle it. Since you mentioned you can keep that key list updated as needed, that would be a good way to do it. You can stream-read a big file (say from Cloud Storage, with one row per line) and use datastore async reads to finish very quickly, or use Google Cloud Dataflow to do the reading and processing/consolidating. Dataflow can also be used to instantly generate that key-list file in Cloud Storage.
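The batching step from the question is plain list slicing; the ndb calls in the comment are assumptions about your setup and are not executed here:

```python
def batches(items, size=100):
    """Yield successive fixed-size slices of a key (or email) list."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

# Against the datastore this would look roughly like (ndb assumed):
#   for chunk in batches(keys, 100):
#       entities = ndb.get_multi(chunk)  # gets by key: always current data
#       process(entities)
```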
0
1
0
0
2015-10-02T20:35:00.000
2
1.2
true
32,915,462
0
0
1
1
In my Google App Engine App, I have a large number of entities representing people. At certain times, I want to process these entities, and it is really important that I have the most up to date data. There are far too many to put them in the same entity group or do a cross-group transaction. As a solution, I am considering storing a list of keys in Google Cloud Storage. I actually use the person's email address as the key name so I can store a list of email addresses in a text file. When I want to process all of the entities, I can do the following: Read the file from Google Cloud Storage Iterate over the file in batches (say 100) Use ndb.get_multi() to get the entities (this will always give the most recent data) Process the entities Repeat with next batch until done Are there any problems with this process or is there a better way to do it?
Where does PyCharm keep the error log for Python/Django projects?
32,924,415
3
1
213
0
python,django,pycharm
This isn't PyCharm specific, but it might help. If DEBUG=True, Django will include the traceback in the response. If DEBUG=False, then by default Django will email a report to the users in the ADMINS settings.
0
0
0
0
2015-10-03T15:40:00.000
1
0.53705
false
32,924,392
0
0
1
1
I am running a Django project locally using PyCharm and it is returning a 500 error on an API call. I think this signifies an internal server error so I am assuming the reason for and nature of this error will be in a log somewhere. But I can't find where it is. Is such an error log kept? If so where?
Django's philosophy on updates?
32,938,503
4
1
55
0
python,django
Possibly not a question for this site, but I will answer anyway. Django does not have a strict policy on whether you should update or whether you may touch core files; it is totally up to you. But, as always, touching core files is not a good idea. Django's core files usually live outside of your project, so there is no reason to change them. Versioning of Django is very simple: all major releases (1.6, 1.7, 1.8, 1.9, 2.0 etc.) bring some new features; all minor releases (1.8.2, 1.8.5 etc.) contain only security and bug fixes, so it is totally safe, and recommended, to always update to the newest minor release. Some major releases are marked as LTS; those releases receive security and bug fixes for longer than the others. And that's all; the rest is totally up to you.
0
0
0
0
2015-10-04T21:07:00.000
1
1.2
true
32,938,416
0
0
1
1
With WordPress and other CMS out there, there is a philosophy that you should always keep it up to date, no matter what. And never change the core files. How does Django as a framework stand on this topic?
Network error(Request timeout) in Heroku Django app
32,949,624
0
0
324
0
python,django,heroku
The problem turned out to be in ALLOWED_HOSTS variable in settings file. I set it to ['appname.herokuapp.com'] and it is working fine now.
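For reference, the relevant fragment of the settings file looks roughly like this (the app name is a placeholder):

```python
# settings.py -- hostnames Django will agree to serve.
# With DEBUG = False, a request whose Host header is not listed here
# is rejected, which on Heroku surfaced as the timeout described above.
DEBUG = False
ALLOWED_HOSTS = ['appname.herokuapp.com']
```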
0
0
0
0
2015-10-05T11:52:00.000
1
1.2
true
32,947,993
0
0
1
1
I just started a new project and pushed it to Heroku. I set up everything: Procfile, dyno and environment variables. Everything is working fine on localhost, but I get a network error in the browser, and the logs show me "Request timeout" and "Worker timeout" errors on Heroku. I read that this happens when some request takes a lot of time. However, I don't have any heavy request right now; the page just shows This is the landing page.. The only thing I have on my landing page is one CSS file, which comes from AWS. What could be the reason for this error? UPDATE: just found out that it works in production only if DEBUG is set to True. I don't know why.
while spawning the fab file it asks me the login password for 'ubuntu'
33,025,966
0
0
219
0
python,django,amazon-web-services
Looks like there's an issue with your EC2 key pair. Make sure you have the correct key and that its permissions are 400. To check that the key works, try to connect to the instance manually with ssh -i ~/.ssh/<your-key> ubuntu@<your-host>
0
1
0
1
2015-10-05T12:25:00.000
1
0
false
32,948,568
0
0
1
1
I wrote the default password of my ami i.e. 'ubuntu' but it didn't work. I even tried with my ssh key. I've browsed enough and nothing worked yet.Can anybody please help me out? [] Executing task 'spawn' Started... Creating instance EC2Connection:ec2.us-west-2.amazonaws.com Instance state: pending Instance state: pending Instance state: pending Instance state: pending Instance state: running Public dns: ec2-52-89-191-143.us-west-2.compute.amazonaws.com Waiting 60 seconds for server to boot... [ec2-52-89-191-143.us-west-2.compute.amazonaws.com] run: whoami [ec2-52-89-191-143.us-west-2.compute.amazonaws.com] Login password for 'ubuntu':
Is it possibile to use PhantomJS like a mobile driver in Selenium?
32,953,131
3
1
370
0
python,selenium,mobile,selenium-webdriver,phantomjs
There is no such thing as a "phantom mobile driver". You can change the user agent string and the viewport/window size in order to suggest to the website to deliver the same markup that a mobile client would receive.
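A sketch of what that looks like with PhantomJS via Selenium. The capability key below is the one GhostDriver/PhantomJS reads; the iPhone user-agent string and the window size are just example values, and the commented driver lines need a PhantomJS binary, so they are not run here:

```python
# Suggest a mobile page to the site by overriding the user agent
# and shrinking the viewport.
MOBILE_UA = ("Mozilla/5.0 (iPhone; CPU iPhone OS 8_0 like Mac OS X) "
             "AppleWebKit/600.1.4 (KHTML, like Gecko) Mobile/12A365")

def phantom_mobile_caps(base=None):
    """Return a desired-capabilities dict with a mobile user agent set."""
    caps = dict(base or {})
    caps["phantomjs.page.settings.userAgent"] = MOBILE_UA
    return caps

# With Selenium (requires the phantomjs executable on PATH):
#   driver = webdriver.PhantomJS(desired_capabilities=phantom_mobile_caps())
#   driver.set_window_size(375, 667)  # typical phone viewport
```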
0
0
1
0
2015-10-05T15:54:00.000
1
1.2
true
32,952,744
0
0
1
1
I'm using Selenium with PhantomJS in order to scrape a dynamic website with infinite scroll. It's working, but my teacher suggested using a mobile phantom driver in order to get the mobile version of the website. With the mobile version I expect to see fewer ads and less JavaScript, and to retrieve the information faster. Is there any "phantom mobile driver"?
Keeping state in a Django view to improve performance for pagination
32,962,651
1
1
541
0
python,django,postgresql,datatables
There are multiple ways of improving your code. First, fetch only the data required for the supplied page number in one Django ORM hit; second, cache your ORM output and reuse the result if the same query is passed again. The first goes like this: in your code you have Pizza.objects.all() and then paginated = filtered[start: start + length]. Make sure that slice is applied to the unevaluated queryset, e.g. filtered = Pizza.objects.all()[(page_number-1) * 30 : (page_number-1) * 30 + 30], so the ORM only fetches the rows belonging to the requested page (as SQL LIMIT/OFFSET) instead of fetching everything and slicing afterwards, which is very expensive. The second way: fetch the data according to the query and put it in a caching solution like memcached or Redis; the next time you need data from the database, first check whether the result for that query is present in the cache and, if so, simply use it. In-memory caching solutions are far faster than hitting the database, because of the large input/output transfer between memory and the hard drive, and hard drives are traditionally slow.
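The page-to-slice arithmetic is worth pinning down, since Django turns the slice into SQL LIMIT/OFFSET. A tiny helper (the name is mine):

```python
def page_bounds(page_number, per_page=30):
    """Translate a 1-based page number into the (start, stop) slice that
    Django compiles to SQL LIMIT/OFFSET, e.g. Pizza.objects.all()[start:stop]."""
    start = (page_number - 1) * per_page
    return start, start + per_page
```

Usage: start, stop = page_bounds(page_number) followed by Pizza.objects.all()[start:stop].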
0
0
0
0
2015-10-05T23:29:00.000
2
0.099668
false
32,959,469
0
0
1
1
I'm designing a data-tables-driven Django app and have an API view that data-tables calls with AJAX (I'm using data-tables in its server-side processing mode). It implements searching, pagination, and ordering. My database recently got large (about 500,000 entries) and performance has greatly suffered, both for searches and for simply moving to the next page. I suspect that the way I wrote the view is grossly inefficient. Here's what I do in the view (suppose the objects in my database are pizzas): filtered = Pizza.objects.filter(...) to get the set of pizzas that match the search criteria. (Or Pizza.objects.all() if there is no search criteria). paginated = filtered[start: start + length] to get only the current page of pizzas. (At max, only 100 of them). Start and length are passed in from the data-tables client-side code, according to what page the user is on. pizzas = paginated.order_by(...) to apply the ordering to the current page. Then I convert pizzas into JSON and return them from the view. It seems that, while search might justifiably be a slow operation on 500,000 entries, simply moving to the next page shouldn't require us to redo the whole search. So what I was thinking of doing was caching some stuff in the view (it's a class-based view). I would keep track of what the last search string was, along with the set of results it produced. Then, if a request comes through and the search string isn't different (which is what happens if the user is clicking through a few pages of results) I don't have to hit the database again to get the filtered results -- I can just use the cached version. It's a read-only application, so getting out of sync would not be an issue. I could even keep a dictionary of a whole bunch of search strings and the pizzas they should produce. What I'd like to know is: is this a reasonable solution to the problem? Or is there something I'm overlooking? Also, am I re-inventing the wheel here? Not that this wouldn't be easy to implement, but is there a built-in option on QuerySet or something to do this?
How to conciliate REST and JSONschema?
33,002,536
1
1
237
0
python,rest,extjs,pyramid,jsonschema
If I understand you correctly, you want to use JSON Schema for input validation, but you are struggling to figure out how to validate query parameters with JSON Schema in a RESTful way. Unfortunately, there isn't a definitive answer. JSON Schema just wasn't designed for that. Here are the options I have considered in my own work with REST and JSON Schema: (1) convert the query parameters to JSON, then validate against the schema; (2) stuff your JSON into a query param and validate the value of that param (i.e. /foo/1?params={"page": 2, "perPage": 10}); (3) use POST instead of GET and stick your fingers in your ears when people tell you you are doing REST wrong. What do they know anyway. I prefer option 1 because it is idiomatic HTTP. Option 2 is probably the easiest to work with on the back-end, but it's dirty. Option 3 is mostly a joke, but in all seriousness, there is nothing in REST or HTTP that says a POST can only be used for creation. In fact, it is the most flexible and versatile of the HTTP methods. Think of it like a factory that does something. That something could generate and store a new resource, or just return it. If you are finding that you need to send a large number of query parameters, it's probably not really a simple GET. My rule of thumb is that if the result is inherently not cacheable, it's possible that a POST is more appropriate (or at least not inappropriate).
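For option 1, the query-to-JSON step can be done with the standard library alone. The casts mapping is an assumption about which fields should be coerced; the actual schema validation (e.g. with the jsonschema package) is deliberately left out of this sketch:

```python
from urllib.parse import parse_qsl

def params_to_document(query_string, casts):
    """Turn a query string like 'page=2&perPage=10' into a typed dict that
    can then be checked against a JSON Schema. `casts` maps each field
    to the type it should be coerced to; everything else stays a string."""
    doc = {}
    for key, value in parse_qsl(query_string):
        doc[key] = casts.get(key, str)(value)
    return doc
```

The resulting dict is a normal JSON object, so the same schema can validate both GET query parameters and POST bodies.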
0
0
0
0
2015-10-07T14:48:00.000
3
1.2
true
32,995,454
1
0
1
1
I'm starting a new project that consists of an Extjs 6 application with a Pyramid/Python backend. Due to this architecture, the backend will only provide an RPC and won't serve any page directly. My implementation of such a thing is usually based on REST and will fit this CRUD application nicely. Regarding data validation, I would like to move from Colander/Peppercorn, which I always found awkward, to the simpler and more streamlined jsonschema. The idea here would be to move all the parameters of the various requests, minus the id contained in the URL when that is the case, into a JSON body that could be easily handled by jsonschema. The main problem here is that GET requests shouldn't have a body, and I definitely want to put parameters in there (filters, pagination, etc). There's probably some approach combining REST (or REST-like) and JSON Schema, but I'm not able to find anything on the web. Edit: someone mentioned the question about a body in a GET HTTP request. While putting a body in a GET HTTP request is somehow possible, it violates part of the HTTP 1.1 specification, and therefore this is NOT the solution to this problem.
Is there a metod to have a trash can in Plone?
33,016,003
2
1
166
0
python,plone,plone-4.x
If you don't find a proper add-on, know that in Plone a trash can is only a matter of workflow. You can customize your workflow by adding a new trash transition that moves the content into a state (trashed) where users can't see it (maybe keep the visibility open for Managers and/or Site Administrators). You probably must also customize the content_status_modify script, because after trashing a content item you must be redirected to another location (or you'll get an Unauthorized error).
0
0
0
1
2015-10-08T07:55:00.000
4
0.099668
false
33,009,839
0
0
1
2
I want to give all members of a Plone (4.3.7) site the possibility to restore an accidentally deleted file. I only found ecreall.trashcan for this purpose, but I have some problems with the installation. After adding it to buildout.cfg and running bin/buildout, the output contains errors like... File "build/bdist.linux-x86_64/egg/ecreall/trashcan/skins/ecreall_trashcan_templates/isTrashcanOpened.py", line 11 return session and session.get('trashcan', False) or False SyntaxError: 'return' outside function File "build/bdist.linux-x86_64/egg/ecreall/trashcan/skins/ecreall_trashcan_templates/object_trash.py", line 23 return context.translate(msg) SyntaxError: 'return' outside function File "build/bdist.linux-x86_64/egg/ecreall/trashcan/skins/ecreall_trashcan_templates/object_restore.py", line 23 return context.translate(msg) SyntaxError: 'return' outside function ... And so I don't find any new add-on to enable or configure in the site setup. Does someone know what this could be, or is there another method to do what I want? Thanks in advance!
Is there a method to have a trash can in Plone?
33,026,043
1
1
166
0
python,plone,plone-4.x
I've found the solution(!!!) working with -Content Rules- in the control panel. First I've created a folder called TRASHCAN , after in content rule I've added a rule that copy the file/page/image in folder trashcan if it will be removed. This rule can be disable in trashcan folder, so you could delete definitely the objects inside.
0
0
0
1
2015-10-08T07:55:00.000
4
0.049958
false
33,009,839
0
0
1
2
I want to give all members of a Plone (4.3.7) site the possibility to restore an accidentally deleted file. I only found ecreall.trashcan for this purpose, but I have some problems with the installation. After adding it to buildout.cfg and running bin/buildout, the output contains errors like... File "build/bdist.linux-x86_64/egg/ecreall/trashcan/skins/ecreall_trashcan_templates/isTrashcanOpened.py", line 11 return session and session.get('trashcan', False) or False SyntaxError: 'return' outside function File "build/bdist.linux-x86_64/egg/ecreall/trashcan/skins/ecreall_trashcan_templates/object_trash.py", line 23 return context.translate(msg) SyntaxError: 'return' outside function File "build/bdist.linux-x86_64/egg/ecreall/trashcan/skins/ecreall_trashcan_templates/object_restore.py", line 23 return context.translate(msg) SyntaxError: 'return' outside function ... And so, I don't find any new add-on to enable or configure in the site setup. Does anyone know what the cause could be, or is there another method to do what I want? Thanks in advance
How to send an email on python without authenticating
33,017,245
1
0
286
0
python,django,email,smtplib
You connect to an SMTP server, preferably your own, that doesn't require authentication or on which you do have an account; then you create an email that has the user's e-mail in the From field, and you just send it. Which lib you use to do it, smtplib, some Django helper, or anything else, is irrelevant. If you want to, you can even skip the SMTP server and simulate one. That way you can deposit the composed mail directly into the user's POP inbox. But there is rarely a need for such extremes.
0
0
0
1
2015-10-08T12:57:00.000
1
0.197375
false
33,016,533
0
0
1
1
I'm trying to send a notification from my Django application every time the user performs specific actions, and I would like to send those notifications from the email of the person who performed these actions. I don't want them to have to put their password in my application or anything else. I know this is possible because I remember doing this with PHP a long time ago.
Sudden performance drop going from 1024 to 1025 bytes
33,051,195
1
2
68
0
python,django,django-rest-framework
This turned out to be related to libcurl's default "Expect: 100-continue" header.
0
1
0
0
2015-10-09T02:34:00.000
1
0.197375
false
33,028,985
0
0
1
1
I am running a dev server using runserver. It exposes a json POST route. Consistently I'm able to reproduce the following performance artifact - if request payload is <= 1024 bytes it runs in 30ms, but if it is even 1025 bytes it takes more than 1000ms. I've profiled and the profile points to rest_framework/parsers.py JSONParser.parse() -> django/http/request HTTPRequest.read() -> django/core/handlers/wsgi.py LimitedStream.read() -> python2.7/socket.py _fileobject.read() Not sure if there is some buffer issue. I'm using Python 2.7 on Mac os x 10.10.
Passing binary data from java to python
38,824,000
0
0
250
0
java,python,subprocess,jython,processbuilder
If there is no limit to the length of the string argument for launching the Python script, you could simply encode the binary data from the image into a string and pass that. The only problem you might encounter with this approach would be null characters and negative numbers.
1
0
0
0
2015-10-09T03:13:00.000
1
0
false
33,029,293
0
0
1
1
I have a working program written in Java (a 3d game) and some scripts in Python written with theano to process images. I am trying to capture the frames of the game as it is running and run these scripts on the frames. My current implementation grabs the binary data from each pixel in the frame, saves the frame as a png image and calls the python script (using ProcessBuilder) which opens the image and does its thing. Writing an image to file and then opening it in python is pretty inefficient, so I would like to be able to pass the binary data from Java to Python directly. If I am not mistaken, processBuilder only takes arguments as strings, so does anyone know how I can pass this binary data directly to my python script? Any ideas? Thanks
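The null-character and negative-value concerns in the answer above are exactly what Base64 sidesteps. A sketch of the receiving (Python) side; the Java side would produce the same string with java.util.Base64.getEncoder().encodeToString(...) and pass it over an argument or stdin.

```python
# Base64 round-trip: raw frame bytes (including NULs and high values) become
# a plain ASCII string that survives argv/stdin, then decode back to bytes.
import base64

frame_bytes = bytes([0, 255, 128, 7, 0])                 # raw pixel data
as_text = base64.b64encode(frame_bytes).decode("ascii")  # safe to pass around
restored = base64.b64decode(as_text)

print(as_text, restored == frame_bytes)
```

For large frames, piping the Base64 text over the subprocess's stdin avoids any OS limit on argument length.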
Adding custom attributes to django schema
33,340,454
0
1
79
0
python,django,saml-2.0,okta
Once again I have the privilege of answering my own question. So here is the solution. Django has a user profile mechanism, which is turned on by giving the model's location in settings.py, i.e. AUTH_PROFILE_MODULE = "appTitle.UserProfile". UserProfile needs to be specified in models.py, defining the structure of the user profile you need for your app. Now, on running syncdb, Django creates the database table for your user profile, and onto that same user profile pysaml2 adds the value (custom attribute) that comes in the SAML assertion. More explanation on this can be found in the Django documentation too. If anyone still faces any issue, please let me know.
0
0
0
0
2015-10-09T14:29:00.000
1
0
false
33,040,841
0
0
1
1
I am trying to authenticate my django application written in python with okta IDP. I have almost configured everything at SP side and IDP side too. Now I need to pass a custom variable from IDP which assert SP that user is a publisher,editor or admin and further save this to the django format database (in auth_user_groups table). Anyone have tried doing this, or anyone has idea about this? I am able to get the custom variable values by attributes mappings from IDP. But this allows me to save the custom attributes only on the user table. please let me know if i have not made myself clear here about my question.
Sending a message between HTML and python on the raspberry pi
33,235,245
0
1
1,183
0
javascript,python,html,raspberry-pi,raspberry-pi2
Maybe you can try to create a Node.js script that creates a websocket. You can connect to the websocket with Python, and so you are able to send data from your website to Node.js and from Node.js to Python in real time. Have a nice day
0
0
0
1
2015-10-11T00:05:00.000
3
0
false
33,060,256
0
0
1
1
I want to do the following. I want to have a button on an HTML page such that once it gets pressed a message is sent to some Python script I'm running. For example, once the button is pressed some boolean will turn true; we will call the boolean bool_1. Then that boolean is sent to my Python code, or written to a text file. Then in my Python code I want to do something depending on that value. Is there a way to do this? I've been looking at many things but they haven't worked. I know that in JavaScript you can't write text files because of security issues. My Python code is constantly running, computing live values from sensors.
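An alternative to the Node.js/websocket route from the answer, using only Python's standard library: a tiny HTTP endpoint the page's button can POST to, with the sensor loop pulling events off a Queue. This is a stand-in sketch, not the answer's own method.

```python
# Stdlib-only sketch: the browser button does a POST; the long-running Python
# loop reads the value from a Queue instead of a shared text file.
from http.server import BaseHTTPRequestHandler, HTTPServer
from queue import Queue
import threading
import urllib.request

events = Queue()

class ButtonHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        events.put(self.rfile.read(length).decode())  # e.g. "bool_1=true"
        self.send_response(204)
        self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), ButtonHandler)  # 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Simulate the button press the page's JavaScript would make:
port = server.server_address[1]
urllib.request.urlopen(f"http://127.0.0.1:{port}/button", data=b"bool_1=true")

clicked = events.get(timeout=2)   # the sensor loop would react to this value
print(clicked)
server.shutdown()
```

On the page itself, the button's click handler would just `fetch("/button", {method: "POST", body: "bool_1=true"})`.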
How to define critical section in django
33,072,629
0
2
341
0
python,django
It depends on what your queries are. A single query to the database will never face a race condition, thanks to the ACID guarantees imposed by the database. But if there is a condition where you first read data from the database and, after some operation at the application level, write the updated data back, then a race condition may occur; for that you have to implement locks or a mutex in Python.
0
0
0
0
2015-10-12T02:54:00.000
2
0
false
33,072,577
0
0
1
1
Is there any simple way to define a critical section? While one user is updating some database tables, I'd like to prevent other users from updating the same tables.
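The read-modify-write race the answer warns about, with the critical section guarded by a plain lock. This is the application-level idea only; inside Django you would more typically reach for select_for_update() or F() expressions so the database does the locking.

```python
# Four threads each do 10,000 read-modify-write cycles on a shared value.
# Holding the lock across the whole read+write makes the result deterministic.
import threading

counter = {"value": 0}
lock = threading.Lock()

def add_one_many_times(n):
    for _ in range(n):
        with lock:                         # critical section starts here
            current = counter["value"]     # read
            counter["value"] = current + 1 # modify + write

threads = [threading.Thread(target=add_one_many_times, args=(10_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter["value"])   # always 40000 with the lock in place
```

Without the `with lock:` line, two threads can read the same `current` value and one increment is lost.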
encrypting data on server and decrypting it on client
33,080,475
-3
0
1,327
0
javascript,python,ios,swift,encryption
It's a very wide question, and there is a variety of ways to do it. First of all, you need to choose a method of encryption and the purpose for which you encrypt the data. There are 3 main methods: 1. Symmetric Encryption Encrypter and decrypter have access to the same key. It's quite easy, but it has one big drawback - the key needs to be shared, so if you put it on the client it can be stolen and abused. As a solution, you would need to use another method to send the key and encrypt it on the client. 2. Asymmetric Encryption With asymmetric encryption, the situation is different. To encrypt data you use a public/private key pair. The public key is usually used to encrypt data, but decryption is possible only with the private key. So you can hand out the public key to your clients, and they will be able to send some encrypted traffic back to you. But you still need to be sure that you are talking to the right public key so as not to undermine your encryption. It is usually used in TLS (SSL), SSH, signing updates, etc. 3. Hashing (it's not really encryption) It's the simplest. With hashing, you produce some string that can't be reverted, with the rule that the same data will always produce the same hash. So you could just pick the most suitable method and try to find an appropriate package in the language you use.
0
0
0
1
2015-10-12T11:32:00.000
3
-0.197375
false
33,080,063
0
0
1
1
I have a very simple iOS and Android applications that download a txt file from a web server and preset it to the user. I'm looking for a way that only the application will be able to read the file so no one else can download and use it. I would like to take the file that is on my computer, encrypt it somehow, upload the result to the server and when the client will download the file it will know how to read it. What is the simplest way to to this kind of thing? Thanks a lot!
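Method 3 from the answer (hashing) in a few lines: the same input always yields the same digest, a different input yields a different one, and there is no way to run it backwards - which is why it is "not really encryption".

```python
# SHA-256 from the stdlib: deterministic, one-way, fixed-size output.
import hashlib

digest_a = hashlib.sha256(b"the file contents").hexdigest()
digest_b = hashlib.sha256(b"the file contents").hexdigest()   # same input
digest_c = hashlib.sha256(b"the file contents!").hexdigest()  # changed input

print(digest_a == digest_b, digest_a == digest_c)   # True False
```

For the question's actual goal (only the app can read the downloaded file), hashing alone is not enough; it only lets the app verify integrity, not hide content.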
Control Libreoffice Impress from Python
33,251,685
0
1
3,178
0
python,libreoffice
Finally, I found a way to solve this using Python, in an elegant and easy way. Instead of libraries or APIs, I'm using a socket to connect to Impress and control it. At the end of the post you can read the full text that explains how to control Impress this way. It is easy, and amazing. You send a message using Python to Impress (which is listening on some port); it receives the message and does things based on your request. You must enable this "remote control" feature in the app. I solved my problem using this. Thanks for your replies! LibreOffice Impress Remote Protocol Specification Communication is over a UTF-8 encoded character stream. (Using RTL_TEXTENCODING_UTF8 in the LibreOffice portion.) TCP More TCP-specific details on setup and initial handshake to be written, but the actual message protocol is the same as for Bluetooth. Message Format A message consists of one or more lines. The first line is the message description, further lines can add any necessary data. An empty line concludes the message. I.e. "MESSAGE\n\n" or "MESSAGE\nDATA\nDATA2...\n\n" You must keep reading a message until an empty line (i.e. double new-line) is reached to allow for future protocol extension. Initialisation Once connected the server sends "LO_SERVER_SERVER_PAIRED". (I.e. "LO_SERVER_SERVER_PAIRED\n\n" is sent over the stream.) Subsequently the server will send either slideshow_started if a slideshow is running, or slideshow_finished if no slideshow is running. (See below for details.) The current server implementation then proceeds to send all slide notes and previews to the client. (This should be changed to prevent memory issues, and a preview request mechanism implemented.) Commands (Client to Server) The client should not assume that the state of the server has changed when a command has been sent. All changes will be signalled back to the client. (This is to allow for cases such as multiple clients requesting different changes, etc.)
Any lines in [square brackets] are optional, and should be omitted if not needed. transition_next transition_previous goto_slide slide_number presentation_start presentation_stop presentation_resume // Resumes after a presentation_blank_screen. presentation_blank_screen [Colour String] // Colour the screen will show (default: black). Not // implemented, and format hasn't yet been defined. As of gsoc2013, these commands are extended to the existing protocol, since server-end are tolerant with unknown commands, these extensions doesn't break backward compatibility pointer_started // create a red dot on screen at initial position (x,y) initial_x // This should be called when user first touch the screen initial_y // note that x, y are in percentage (from 0.0 to 1.0) with respect to the slideshow size pointer_dismissed // This dismiss the pointer red dot on screen, should be called when user stop touching screen pointer_coordination // This update pointer's position to current (x,y) current_x // note that x, y are in percentage (from 0.0 to 1.0) with respect to the slideshow size current_y // unless screenupdater's performance is significantly improved, we should consider limit the update frequency on the // remote-end Status/Data (Server to Client) slideshow_finished // (Also transmitted if no slideshow running when started.) slideshow_started // (Also transmitted if a slideshow is running on startup.) numberOfSlides currentSlideNumber slide_notes slideNumber [Notes] // The notes are an html document, and may also include \n newlines, // i.e. the client should keep reading until a blank line is reached. slide_updated // Slide on server has changed currentSlideNumber slide_preview // Supplies a preview image for a slide. slideNumber image // A Base 64 Encoded png image. 
As of gsoc2013, these commands are extended to the existing protocol, since remote-end also ignore all unknown commands (which is the case of gsoc2012 android implementation), backward compatibility is kept. slideshow_info // once paired, the server-end will send back the title of the current presentation Title
0
1
0
0
2015-10-13T00:46:00.000
3
1.2
true
33,092,424
0
0
1
1
I'm writing an application oriented to speakers and conferences. I'm writing it with Python, focused on Linux. I would like to know if it's possible to control LibreOffice Impress with Python, under Linux, in some way. I want to start an instance of LibreOffice Impress with some .odp file loaded, from my Python app. Then, I would like to be able to receive from the odp some info like: previous, current and next slide. Or somehow generate the images of the slides on the go. Finally, I want to control LibreOffice in real time. That is: move through the slides using the direction keys, right and left. The idea is to use Python alone, but I don't mind using external libraries or frameworks. Thanks a lot.
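The wire format quoted in the accepted answer (command line, optional data lines, blank line terminator) is easy to generate. A small helper, with the actual socket send left commented out since it needs a running Impress with the remote-control feature enabled; the port number below is an assumption.

```python
# Build messages in the Impress remote protocol's format:
# "MESSAGE\n\n" or "MESSAGE\nDATA\n...\n\n" (UTF-8, blank-line terminated).
def build_message(command, *data):
    return "\n".join((command,) + data) + "\n\n"

# Sending would be plain TCP to the port Impress listens on (port is an
# assumption here; enable "remote control" in Impress first):
# import socket
# sock = socket.create_connection(("localhost", 1599))
# sock.sendall(build_message("transition_next").encode("utf-8"))

print(repr(build_message("goto_slide", "3")))
```

When reading replies, remember the spec's rule: keep consuming lines until you hit the empty line, since future protocol versions may append data lines.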
python: APScheduler in WSGI app
33,232,473
3
5
1,030
0
python,mod-wsgi,wsgi,apscheduler
You're right -- the scheduler won't start until the first request comes in. Therefore running a scheduler in a WSGI worker is not a good idea. A better idea would be to run the scheduler in a separate process and connect to the scheduler when necessary via some RPC mechanism like RPyC or Execnet.
0
1
0
0
2015-10-15T10:01:00.000
1
0.53705
false
33,145,523
0
0
1
1
I would like to run APScheduler as part of a WSGI webapp (served via Apache's mod_wsgi with 3 workers). I am new to the WSGI world, so I would appreciate it if you could resolve my doubts: If APScheduler is part of the webapp, does it only come alive after the first request (the first after an Apache start/restart) is handled by at least one worker? Starting/restarting Apache won't start it - at least one request is needed. What about concurrent requests - would every worker run the same set of APScheduler's tasks, or will there be only one set shared between all workers? Would a once-running process (webapp run via a worker) stay alive (so APScheduler's tasks will execute), or could it terminate after some idle time (with the consequence that APScheduler's tasks won't execute)? Thank you!
How to send command to receipt printers from Django app?
57,876,597
0
1
1,911
0
python,django,printing,receipt
For a web app to use a device on the client, it has to go through the browser. I may be wrong, but I seriously doubt this is a built-in feature for receipt printers. I see three options: 1) Find/make a normal printer driver that works with your receipt printer, put it on the client box, and just use the normal print js commands. 2) Find/make a browser plugin that talks to the printer and exposes an API. 3) Find/make a simple web app that talks to a server-connected receipt printer (probably via native code execution or script call), and install it on each POS, with CORS to allow remote origin; then just post to that on 127.0.0.1:whatever from the webapp client script. Side note: I seriously discourage connecting a POS to anything resembling a network any more than absolutely necessary. Every outbound network request or trusted network peer is a potential attack vector. In short, I would never use django or any other web app for physical POS software.
0
0
0
0
2015-10-15T10:35:00.000
1
0
false
33,146,245
0
0
1
1
I have created a simple POS application. I now have to send a command to the receipt printer to print the receipt. I don't have any code related to this problem as I don't know where to start even. My questions are: 1) Is Windows a good choice for working with receipt printers as every shop I went to use a desktop application on Windows for POS? 2) Is it possible to control the receipt printer and cash register/drawer from a web app? 3) Is there a good reading material for developing POS systems by myself?
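A hedged sketch related to option 3 in the answer: many networked receipt printers accept raw ESC/POS bytes, commonly on TCP port 9100. Whether yours does depends on the model, and the printer address below is an assumption; the two control sequences used (ESC @ = initialize, GS V 0 = full cut) are standard ESC/POS.

```python
# Render a receipt as raw ESC/POS bytes. The socket delivery is commented
# out since it needs a real printer; the address is a placeholder.
ESC, GS = b"\x1b", b"\x1d"

def render_receipt(lines):
    data = ESC + b"@"                            # ESC @: initialize printer
    for line in lines:
        data += line.encode("ascii", "replace") + b"\n"
    data += b"\n\n" + GS + b"V\x00"              # GS V 0: full paper cut
    return data

payload = render_receipt(["MY SHOP", "1x Coffee   2.50", "TOTAL       2.50"])

# Delivery would be a plain socket write:
# import socket
# with socket.create_connection(("192.168.1.50", 9100)) as s:
#     s.sendall(payload)

print(payload.startswith(ESC + b"@"), payload.endswith(GS + b"V\x00"))
```

A Django view could build this payload server-side, but as the answer notes, reaching a printer attached to the *client* machine requires going through the browser or a locally installed helper.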
Can I have multiple django projects in the same virtualenv folder
56,910,915
1
2
1,645
0
python,django
It really depends on the situation. Suppose your project A needs pip version 17.01 to run while your project B needs pip version 18.01; in that case it is not possible to use one virtualenv to run both projects. The downside of having multiple virtual environments is that they consume more disk space and resources on your PC.
0
0
0
0
2015-10-15T15:43:00.000
1
1.2
true
33,152,869
1
0
1
1
Another newbie to django here. I was wondering if it is recommended/not-recommended to run two different projects in the same virtualenv folder that have the same django version. To be more clear, is it necessary to create separate virtualenv everytime I want to start a new project when i know that i am using same django version for all projects. I am using python django on OSX.
How to use Django 1.8.5 ORM without creating a django project?
67,953,078
0
26
15,755
0
python,django,orm
I had to add import django, and then this worked.
0
0
0
0
2015-10-16T12:01:00.000
5
0
false
33,170,016
0
0
1
1
I've used Django ORM for one of my web-app and I'm very much comfortable with it. Now I've a new requirement which needs database but nothing else that Django offers. I don't want to invest more time in learning one more ORM like sqlalchemy. I think I can still do from django.db import models and create models, but then without manage.py how would I do migration and syncing?
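A sketch of the standalone ORM bootstrap the accepted answer's "import django" fix belongs to (Django >= 1.7 API). The settings values and app name are illustrative assumptions, and the try/except exists only so the sketch degrades gracefully where Django is not installed.

```python
# Standalone Django ORM bootstrap without a project: configure settings in
# code, then django.setup(). Values below are placeholders.
SETTINGS = {
    "DATABASES": {"default": {"ENGINE": "django.db.backends.sqlite3",
                              "NAME": "standalone.db"}},
    "INSTALLED_APPS": [],   # add the (hypothetical) app that defines models
}

try:
    import django                      # the missing import from the answer
    from django.conf import settings

    settings.configure(**SETTINGS)
    django.setup()                     # builds the app registry
    # Migrations without manage.py go through call_command:
    # from django.core.management import call_command
    # call_command("migrate")
    status = "django configured"
except ImportError:
    status = "django not installed"

print(status)
```

With this in place, model classes imported after `django.setup()` behave as they would inside a normal project.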
Multi-threading on Google app engine
33,184,298
0
0
921
0
multithreading,python-2.7,sockets,google-app-engine,apple-push-notifications
Found the issue. I was calling start_background_thread with the argument set to function(). When I fixed it to call it as function, it worked as expected.
0
1
0
0
2015-10-17T07:18:00.000
1
0
false
33,183,963
0
0
1
1
Does Google App engine support multithreading? I am seeing conflicting reports on the web. Basically, I am working on a module that communicates with Apple Push Notification server (APNS). It uses a worker thread that constantly looks for new messages that need to be pushed to client devices in a pull queue. After it gets a message and sends it to APNS, I want it to start another thread that checks if APNS sends any error response in return. Basically, look for incoming bytes on same socket (it can only be error response per APNS spec). If nothing comes in 60s since the last message send time in the main thread, the error response thread terminates. While this "error response handler" thread is waiting for bytes from APNS (using a select on the socket), I want the main thread to continue looking at the pull queue and send any packets that come immediately. I tried starting a background thread to do the function of the error response handler. But this does not work as expected since the threads are executed serially. I.e control returns to the main thread only after the error response thread has finished its 60s wait. So, if there are messages that arrive during this time they sit in the queue. I tried threadsafe: false in .yaml My questions are: Is there any way to have true multithreading in GAE. I.e the main thread and the error response thread execute in parallel in the above example instead of forcing one to wait while other executes? If no, is there any solution to the problem I am trying to solve? I tried kick-starting a new instance to handle the error response (basically use a push queue), but I don't know how to pass the socket object the error response thread will have to listen to as a parameter to this push queue task (since dill is not supported to serialize socket objects)
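The bug the answer describes, reproduced with stdlib threading since GAE's background_thread API isn't available outside App Engine (it takes a callable the same way): writing `target=function()` *calls* the function immediately and hands the thread its return value, instead of handing it the function to run.

```python
# Passing the callable itself (worker) vs its result (worker()).
# target=worker() would run worker in the CURRENT thread and then start a
# Thread whose target is None - the serial behavior seen in the question.
import threading

results = []

def worker():
    results.append("ran in background")

t = threading.Thread(target=worker)   # correct: no parentheses
t.start()
t.join()
print(results)
```

The same trap applies to any API that accepts a callback: pass the name, not the call.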
Command line programs in a web (python)
33,184,244
1
1
179
0
python-2.7,web-applications,command-line,pycharm
Based on what you've described, there are many ways to approach this: Create a terminal emulator on your webpage. If you want a nicer UI, you can set up any web framework, keep your command line programs in the backend, and expose frontend interfaces that let users input parameters and see the results. If you're just trying to surface the functionality of your programs, you can wrap them as services which applications can call and use (including your web terminal/app)
0
1
0
0
2015-10-17T07:40:00.000
1
1.2
true
33,184,127
0
0
1
1
I am new to programming and web apps, so I am not even sure if this question is obvious or not. Sometimes I find command line programs more time efficient and easier to use (even for the users). So is there a way to publish my command line programs, command line interface and all, to the web as a web app using CGI or WSGI? For example, if I make a program to calculate all the formulas in math, can I publish it to the web in its command line form? I am using Python in PyCharm. I have tried using CGI but couldn't do much, because CGI has more to do with forms being sent to a server for information comparison and storage. -Thanks
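A minimal sketch of option 2 from the answer: the same function that backs a command-line tool, re-exposed through a tiny WSGI app. Names and the query parameter are made up for illustration.

```python
# Wrap existing "command line" logic in a WSGI callable; the function stays
# usable from the terminal, the app layer only parses input and formats output.
from urllib.parse import parse_qs
from wsgiref.util import setup_testing_defaults

def solve(expression):
    """The existing CLI logic (toy eval - never do this with untrusted input)."""
    return str(eval(expression, {"__builtins__": {}}))

def app(environ, start_response):
    params = parse_qs(environ.get("QUERY_STRING", ""))
    expr = params.get("expr", ["0"])[0]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [solve(expr).encode("utf-8")]

# Exercise the app in-process, the way wsgiref's test helpers do:
environ = {}
setup_testing_defaults(environ)
environ["QUERY_STRING"] = "expr=6*7"
body = b"".join(app(environ, lambda status, headers: None))
print(body.decode())
```

Serving it for real is one more line with `wsgiref.simple_server.make_server("", 8000, app).serve_forever()`.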
Making an alias for an attribute field, to be used in a django queryset
33,187,887
0
1
452
0
python,django
Even if you could do this, it wouldn't help solve your ultimate problem. You can't use order_by on concatenated querysets from different models; that can't possibly work, since it is a request for the database to do an ORDER BY on the query.
0
0
0
0
2015-10-17T14:11:00.000
2
0
false
33,187,572
0
0
1
2
In Django, how does one give an attribute field name an alias that can be used to manipulate a queryset? Background: I have a queryset where the underlying model has an auto-generating time field called "submitted_on". I want to use an alias for this time field (i.e. "date"). Why? Because I will concatenate this queryset with another one (with the same underlying model), and then order_by('-date'). Needless to say, this latter qset already has a 'date' attribute (attached via annotate()). How do I make a 'date' alias for the former queryset? Currently, I'm doing something I feel is an inefficient hack: qset1 = qset1.annotate(date=Max('submitted_on')) I'm using Django 1.5 and Python 2.7.
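The limitation stated in the answer above (the database cannot ORDER BY across two concatenated querysets) is commonly worked around by sorting in Python instead, with itertools.chain plus sorted on a shared key. Plain dicts stand in for model instances here.

```python
# Merge two "querysets" and order by a common 'date' key in Python,
# since a cross-queryset SQL ORDER BY is not possible.
from itertools import chain

qs1 = [{"id": 1, "date": "2015-10-17"}, {"id": 2, "date": "2015-10-15"}]
qs2 = [{"id": 3, "date": "2015-10-16"}]

merged = sorted(chain(qs1, qs2),
                key=lambda row: row["date"],
                reverse=True)           # the '-date' ordering

print([row["id"] for row in merged])
```

With real querysets the key would be `lambda obj: obj.date` (the annotated alias), at the cost of materializing both querysets in memory.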
Making an alias for an attribute field, to be used in a django queryset
33,192,349
0
1
452
0
python,django
It seems qset1 = qset1.annotate(date=Max('submitted_on')) is the closest I have right now. This, or using exclude(). I'll update if I get a better solution. Of course other experts from SO are welcome to chime in with their own answers.
0
0
0
0
2015-10-17T14:11:00.000
2
1.2
true
33,187,572
0
0
1
2
In Django, how does one give an attribute field name an alias that can be used to manipulate a queryset? Background: I have a queryset where the underlying model has an auto-generating time field called "submitted_on". I want to use an alias for this time field (i.e. "date"). Why? Because I will concatenate this queryset with another one (with the same underlying model), and then order_by('-date'). Needless to say, this latter qset already has a 'date' attribute (attached via annotate()). How do I make a 'date' alias for the former queryset? Currently, I'm doing something I feel is an inefficient hack: qset1 = qset1.annotate(date=Max('submitted_on')) I'm using Django 1.5 and Python 2.7.
Multiple user logins on a development server using Django
33,193,714
-1
1
103
0
django,python-3.3
The easiest way is to use multiple browsers, and if necessary, multiple dev servers on alternate ports. 8000, 8001, etc.
0
0
0
0
2015-10-18T01:52:00.000
1
-0.197375
false
33,193,501
0
0
1
1
I was wondering whether it is possible to have more than one user log in on a development server using Django 1.8 I am creating an app, where these "active" users are able to view one another details (or fields) respective to the relative models I designed. Currently, I am only able to log in as a single user and wondered whether it is possible to somehow allow my app to have multiple logins. Thanks
Create New Model Instance with API Call in Django
33,209,925
1
2
305
0
python,django
Do I put the code in a specific view? A Django view is a callable that must accept an HTTP request and return an HTTP response, so unless you need to be able to call your code through HTTP there's no point in using a view at all. And even if you want a view exposing this code, it doesn't mean the code doing the API call etc. has to live in the view. Remember that a "django app" is basically a Python package, so besides the Django-specific stuff (views, models etc.) you can put any module you want in it and have your views, custom commands etc. call on these modules. So just write a module for your API client etc. with a function doing the fetch / create model instance / whatever job, and then call this function from wherever it makes sense (view, custom command called by a cron job, celery task, whatever).
0
0
0
0
2015-10-19T07:26:00.000
3
0.066568
false
33,208,805
0
0
1
1
I'm unsure how to approach this problem in general in my Django app: I need to make a call to an API every n days. I can make this call and fetch the data required via Python, but where exactly should I put the code? Do I put the code in a specific view and then map the view to a URL and have that URL called whenever I want to create new model instances based on the API call? Or am I approaching this the wrong way?
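The answer's advice made concrete as a sketch: keep the fetch-and-save logic in a plain module function, so a view, a cron-run management command, or a celery task can all call it. All names are made up, and the demo uses stand-ins for the real API opener and ORM save.

```python
# A plain module function holding the "call the API every n days" logic.
# `opener` stands in for e.g. urlopen(...).read, `save` for Model.objects.create.
import json

def sync_records(opener, save):
    """Fetch rows from the API and hand each one to `save`; return the count."""
    rows = json.loads(opener())
    created = 0
    for row in rows:
        save(**row)
        created += 1
    return created

# Demo with fakes in place of the network call and the ORM:
stored = []
n = sync_records(lambda: b'[{"name": "a"}, {"name": "b"}]',
                 lambda **kw: stored.append(kw))
print(n, stored)
```

The "every n days" part then belongs to whatever invokes the function - a crontab line running a management command, or a celery beat schedule - not to the function itself.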
Sent RSA Key Through Autobahn
33,211,565
0
0
120
0
python,encryption,autobahn
You haven't provided information on how you are encrypting your data. But you should never send private keys over the network. Never. Doing that is as secure as locking your house but leaving the key in the keyhole. If you've done it even once, throw the key away and generate a new one. The strength of RSA comes from the fact that anyone holding the public key can encrypt data, but only the one holding the private key can decrypt it. Think of it as an old video store's return box: any customer can put videos in through the hole, but only the store staff can take them out. What you want to do is: Generate the keys on the server. The client calls the server and grabs the public key. The client encrypts data using the retrieved public key. The client sends the encrypted data to the server. The server decrypts it using the private key.
0
0
1
0
2015-10-19T08:38:00.000
1
0
false
33,210,021
0
0
1
1
I'm new to Python and websocket programming. I want to send data that was encrypted with an RSA key (in Python) through a websocket to a server in the cloud (using Node.js). To decrypt that data I need that key, right? How can I send the RSA key to the server and use that key to decrypt? Thank you
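The answer's point, illustrated with textbook-RSA numbers small enough to read (p = 61, q = 53): the public exponent encrypts, and only the private exponent - which never leaves the server - can undo it. This is a toy for illustration only; real code must use a vetted crypto library, as keys this small are trivially breakable.

```python
# Textbook RSA with tiny primes, to show why the private key never travels.
p, q = 61, 53
n = p * q          # 3233: the modulus, part of the public key
e = 17             # public exponent: anyone may encrypt with (n, e)
d = 2753           # private exponent: stays on the server, always

message = 65                         # a number < n standing in for the data
ciphertext = pow(message, e, n)      # client side: encrypt with public key
recovered = pow(ciphertext, d, n)    # server side: decrypt with private key

print(ciphertext != message, recovered == message)   # True True
```

Mapping this to the question: the server generates (n, e, d), publishes (n, e) over the websocket, the client encrypts with it, and decryption happens server-side only.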