Title (string, length 11 to 150) | A_Id (int64, 518 to 72.5M) | Users Score (int64, -42 to 283) | Q_Score (int64, 0 to 1.39k) | ViewCount (int64, 17 to 1.71M) | Database and SQL (int64, 0 to 1) | Tags (string, length 6 to 105) | Answer (string, length 14 to 4.78k) | GUI and Desktop Applications (int64, 0 to 1) | System Administration and DevOps (int64, 0 to 1) | Networking and APIs (int64, 0 to 1) | Other (int64, 0 to 1) | CreationDate (string, length 23) | AnswerCount (int64, 1 to 55) | Score (float64, -1 to 1.2) | is_accepted (bool, 2 classes) | Q_Id (int64, 469 to 42.4M) | Python Basics and Environment (int64, 0 to 1) | Data Science and Machine Learning (int64, 0 to 1) | Web Development (int64, 1 to 1) | Available Count (int64, 1 to 15) | Question (string, length 17 to 21k)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Passing an image from Lambda to API Gateway
| 36,757,815 | 0 | 0 | 835 | 0 |
python,aws-lambda,aws-api-gateway
|
You could return it base64-encoded...
| 0 | 0 | 1 | 0 |
2016-04-19T11:56:00.000
| 2 | 0 | false | 36,717,654 | 0 | 0 | 1 | 2 |
I have a Lambda function that resizes an image and stores it back into S3. However, I want to pass this image to my API to be returned to the client.
Is there a way to return a PNG image to API Gateway, and if so, how can this be done?
|
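The base64 route suggested in the answer above can be sketched roughly as follows; this is a hypothetical Lambda handler (the bucket name and event key are made up) that reads the resized image from S3 and returns it base64-encoded, so that API Gateway can pass it back to the client as text.

```python
import base64
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Hypothetical bucket/key; point these at wherever the resized image was stored.
    obj = s3.get_object(Bucket="my-resized-images", Key=event["key"])
    encoded = base64.b64encode(obj["Body"].read()).decode("ascii")
    # API Gateway of that era only passed text, so the client must decode this
    # string back into PNG bytes itself.
    return {"image_base64": encoded, "content_type": "image/png"}
```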
Passing an image from Lambda to API Gateway
| 36,727,013 | 0 | 0 | 835 | 0 |
python,aws-lambda,aws-api-gateway
|
API Gateway does not currently support passing through binary data either as part of a request nor as part of a response. This feature request is on our backlog and is prioritized fairly high.
| 0 | 0 | 1 | 0 |
2016-04-19T11:56:00.000
| 2 | 0 | false | 36,717,654 | 0 | 0 | 1 | 2 |
I have a Lambda function that resizes an image and stores it back into S3. However, I want to pass this image to my API to be returned to the client.
Is there a way to return a PNG image to API Gateway, and if so, how can this be done?
|
PyCharm doesn't autocomplete Django model queries anymore in 2016.1.2
| 36,729,774 | 0 | 11 | 3,482 | 0 |
python,django,pycharm
|
I've just tried it on 2016.1.2 and the auto-complete works for me for statements which handle models. I have not changed my code editing settings on PyCharm for several versions now.
Baffling. Have you perhaps tried a restart of PyCharm?
| 0 | 0 | 0 | 0 |
2016-04-19T15:27:00.000
| 2 | 0 | false | 36,722,859 | 0 | 0 | 1 | 2 |
The 2016.1.2 version of PyCharm doesn't seem to autocomplete queries on Django models anymore. For example, on Foo.objects.filter(some-field-lookup) the filter method doesn't get autocompleted (nor does any other method), and the field-lookup parameters don't get autocompleted either, both of which worked in PyCharm version 5.
Is anybody else having this issue? Is this expected behavior? Is there some setting which needs to be turned on?
Restarting or invalidating the cache and restarting didn't have any effect on this
|
PyCharm doesn't autocomplete Django model queries anymore in 2016.1.2
| 42,135,532 | 26 | 11 | 3,482 | 0 |
python,django,pycharm
|
For me, the problem turned out to be that PyCharm wasn't aware that the site was using Django, since I didn't use PyCharm's creation tool to start the Django project. (I assume most people don't after the first few projects they try, which is why the autocompletion seems to work and then break.)
Go under Settings/Languages & Frameworks/Django, and make sure that Django Support is turned on, and that the settings.py and manage.py files are correctly specified. This fixed the problem for me.
| 0 | 0 | 0 | 0 |
2016-04-19T15:27:00.000
| 2 | 1 | false | 36,722,859 | 0 | 0 | 1 | 2 |
The 2016.1.2 version of PyCharm doesn't seem to autocomplete queries on Django models anymore. For example, on Foo.objects.filter(some-field-lookup) the filter method doesn't get autocompleted (nor does any other method), and the field-lookup parameters don't get autocompleted either, both of which worked in PyCharm version 5.
Is anybody else having this issue? Is this expected behavior? Is there some setting which needs to be turned on?
Restarting or invalidating the cache and restarting didn't have any effect on this
|
Bluemix Flask API Call Timeout
| 36,745,441 | 1 | 0 | 330 | 0 |
python,flask,ibm-cloud
|
All Bluemix traffic goes through the IBM WebSphere® DataPower® SOA Appliances, which provide reverse proxy, SSL termination, and load balancing functions. For security reasons DataPower closes inactive connections after 2 minutes.
This is not configurable (as it affects all Bluemix users), so the only solution for your scenario is to change your program to make sure the connection is not idle for more than 2 minutes.
| 0 | 0 | 0 | 0 |
2016-04-20T00:07:00.000
| 1 | 0.197375 | false | 36,731,567 | 0 | 0 | 1 | 1 |
I have an API written with Python Flask running on Bluemix. Whenever I send it a request and the API takes more than 120 seconds to respond, it times out. It does not return the expected response; instead it returns the following error: 500 Error: Failed to establish a backside connection.
I need it to be able to process longer requests as well. Is there any way to extend the timeout value or is there a workaround for this issue?
|
Pandas: datareader unable to get historical stock data
| 36,783,492 | 0 | 1 | 850 | 0 |
python-2.7,pandas,datareader,google-finance,pandas-datareader
|
That URL is a 404 - pandas isn't at fault, maybe just check the URL? Perhaps they're on different exchanges with different google finance support.
| 0 | 0 | 1 | 0 |
2016-04-20T15:52:00.000
| 1 | 0 | false | 36,749,105 | 0 | 1 | 1 | 1 |
I found that some stock exchanges are not supported by datareader. For example, Singapore. Any workaround?
query = web.DataReader(("SGX:BLA"), 'google', start, now) returns this error:
IOError: after 3 tries, Google did not return a 200 for url 'http://www.google.com/finance/historical?q=SGX%3ABLA&startdate=Jan+01%2C+2015&enddate=Apr+20%2C+2016&output=csv'
It works for IDX indonesia
query = web.DataReader(("IDX:CASS"), 'google', start, now)
|
What is the all() function in RelatedManager?
| 36,775,635 | 0 | 0 | 371 | 0 |
python,django
|
The RelatedManager is a Manager and not a QuerySet, but it implements the database-abstraction API and because of that it has all the QuerySet methods such as get(), exclude(), filter() and all().
The difference in calling all() in a RelatedManager is that it actually performs a query in the database.
The all() method returns a QuerySet.
| 0 | 0 | 0 | 0 |
2016-04-21T15:59:00.000
| 2 | 0 | false | 36,774,887 | 0 | 0 | 1 | 1 |
In Django, when you access a ManyToManyField(), it returns a RelatedManager.
If you want to get the actual objects, you have to call all(); however, I don't see any documentation describing this behaviour. Is RelatedManager a kind of QuerySet? Otherwise, why does it have an all() method?
And after calling all(), is it going to return a QuerySet?
|
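A minimal sketch of what this question and answer describe, using hypothetical models: accessing the ManyToManyField gives a RelatedManager, and calling all() on it runs a query and returns a QuerySet.

```python
from django.db import models

class Tag(models.Model):
    name = models.CharField(max_length=50)

class Article(models.Model):
    title = models.CharField(max_length=100)
    tags = models.ManyToManyField(Tag)

# article.tags is a RelatedManager, not a QuerySet.
# article.tags.all() hits the database and returns a QuerySet of Tag objects.
article = Article.objects.first()
tag_names = [t.name for t in article.tags.all()]
```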
Custom button in django admin panel
| 36,795,738 | 0 | 0 | 496 | 0 |
python,django
|
You can override the /admin/base_site.html template in your project by including a template with the same relative path in your project's templates directory.
| 0 | 0 | 0 | 0 |
2016-04-22T10:14:00.000
| 1 | 1.2 | true | 36,790,997 | 0 | 0 | 1 | 1 |
Is it possible to add a custom button in the admin panel? To be more specific: I have a Django app with some custom admin views, but without models. Right now I can reach this app by typing the URL. Maybe there is a more appropriate way?
The difficulty is that I don't want to interact with the project, just with the reusable app.
|
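As a sketch of the override suggested above (assuming a standard settings layout): make sure a project-level templates directory is on the template search path, then place your customised copy at templates/admin/base_site.html. The settings fragment below is an assumed, typical Django 1.9-style configuration, not the asker's actual project.

```python
# settings.py (sketch) -- the project "templates" dir is searched,
# so templates/admin/base_site.html can override the admin's own copy.
import os

BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

TEMPLATES = [
    {
        "BACKEND": "django.template.backends.django.DjangoTemplates",
        "DIRS": [os.path.join(BASE_DIR, "templates")],
        "APP_DIRS": True,
        "OPTIONS": {"context_processors": []},
    },
]
```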
Implementation of read-only singleton in Python Django
| 36,791,481 | 1 | 2 | 433 | 0 |
python,django,django-models,singleton
|
Use of Django caching will be best here. You will need to use a third-party caching server, e.g. Redis. There is Memcached too, but as you said your data is 20MB, you will need Redis, as Memcached only allows 1MB at most per key.
Also, using the cache is very easy: you just need to sudo apt-get install redis, add the CACHES setting in the Django settings, and you will be good to go.
Redis (or Memcached) is an in-memory cache server and holds all the cached data in memory, so getting it from Redis will be as fast as it can be.
| 0 | 0 | 0 | 0 |
2016-04-22T10:18:00.000
| 5 | 0.039979 | false | 36,791,091 | 0 | 0 | 1 | 1 |
Situation
When the Django website starts up, it needs to load some data from a table in the database for computation. The data is read-only and large (e.g. 20MB).
The computation will be invoked every time a certain page is opened. A module will use the data for the computation. Therefore, I don't want the module to SELECT and load the data every time the page is opened.
Question
I guess singleton may be one of the solutions. How to implement the singleton in Django? Or is there any better solutions?
|
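A rough sketch of the caching approach from the answer above, assuming the django-redis backend is installed; the model and cache key are hypothetical.

```python
# settings.py (sketch)
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/1",
    }
}

# elsewhere in the app
from django.core.cache import cache
from myapp.models import BigTable  # hypothetical model holding the ~20MB dataset

def get_dataset():
    # Serve from Redis if present; otherwise run the SELECT once and cache for an hour.
    return cache.get_or_set("big_dataset", lambda: list(BigTable.objects.all()), 3600)
```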
Rewrite Python project to Java - worth it?
| 36,806,181 | 1 | 1 | 648 | 0 |
java,python,performance,optimization
|
The crucial question is this one: "Java's static typing including seems to make it less prone to errors on a larger scale". The crucial word here is "seems." Sure, Java will help you catch this one particular type of error. But how important is that, and what do you have to pay for it? The overhead imposed by Java's type system means that you have to write more lines of code, which means reduced productivity. I've used both and I have no doubt that I'm more productive in Python. I have found that type-related bugs in Python are generally easy to find and fix. Keep in mind that in a professional environment you're not going to ship code without testing it pretty carefully. The bottom line for a programming environment is productivity - usable functionality per unit of effort, not the number of bugs you found and fixed during development.
My advice: if you have a working project written in Python, don't rewrite it unless you're certain there's a benefit.
| 0 | 0 | 0 | 1 |
2016-04-23T00:32:00.000
| 3 | 0.066568 | false | 36,805,233 | 0 | 0 | 1 | 3 |
First of all, I love Python, and I currently use it for most stuff. However, as a PhD student, I mostly implement prototypes for testing and evaluating ideas. This also includes that I'm usually the only one coding, and that -- while I certainly try to write half-way efficient code -- performance is not a primary issue. And for quick prototyping, Python is for me just neat.
Now I am considering taking some of my stuff more "serious", i.e., bringing it into a production environment, making it more maintainable, and maybe more efficient. So I wonder whether it's worth it to rewrite my code in, say, Java (with which I'm also reasonably familiar). I know that Python is not slow, but things like Java's static typing seem to make it less prone to errors on a larger scale, particularly when different people work on the same project.
|
Rewrite Python project to Java - worth it?
| 36,805,273 | 0 | 1 | 648 | 0 |
java,python,performance,optimization
|
Java is inherently object oriented. Python, by contrast, is procedural.
As far as the ability of the language to handle large projects, you can make do with either.
As far as producing more usable products, I would recommend JavaScript as opposed to Java because of its viability in the browser. By embedding your JS in a publicly hosted website you allow people with no coding knowledge to run your project seamlessly in the browser.
Furthermore, all the GUI design features of HTML are available at your disposal.
That said, any language has its ups and downs, and anything I've said here is simply my perception.
| 0 | 0 | 0 | 1 |
2016-04-23T00:32:00.000
| 3 | 0 | false | 36,805,233 | 0 | 0 | 1 | 3 |
First of all, I love Python, and I currently use it for most stuff. However, as a PhD student, I mostly implement prototypes for testing and evaluating ideas. This also includes that I'm usually the only one coding, and that -- while I certainly try to write half-way efficient code -- performance is not a primary issue. And for quick prototyping, Python is for me just neat.
Now I am considering taking some of my stuff more "serious", i.e., bringing it into a production environment, making it more maintainable, and maybe more efficient. So I wonder whether it's worth it to rewrite my code in, say, Java (with which I'm also reasonably familiar). I know that Python is not slow, but things like Java's static typing seem to make it less prone to errors on a larger scale, particularly when different people work on the same project.
|
Rewrite Python project to Java - worth it?
| 36,805,510 | 2 | 1 | 648 | 0 |
java,python,performance,optimization
|
It's only worth it if it solves a real problem. Note that the problem could be:
I want to learn something better
I need it to go faster to reduce power requirements in my colo.
I need to hire more people and the talent pool for [insert language here]
is too small.
Insert innumerable real problems here.
Python and Java are both suitable for production. Write it in whatever makes it easiest to solve the problems you and/or your team are facing, and if you want to preempt some problems, make sure you've done your homework. Plenty of projects have died because they chose C/C++ believing performance was going to be a major factor, without thinking about the extra effort involved in using these languages well.
You mentioned maintainability. You're likely to require more code to rewrite it in Java, and there's a direct correlation between bugs and LOC. It's up for debate which one is easier to maintain. I'm sure both camps believe theirs is.
Of the two which one do you enjoy coding with the most?
| 0 | 0 | 0 | 1 |
2016-04-23T00:32:00.000
| 3 | 1.2 | true | 36,805,233 | 0 | 0 | 1 | 3 |
First of all, I love Python, and I currently use it for most stuff. However, as a PhD student, I mostly implement prototypes for testing and evaluating ideas. This also includes that I'm usually the only one coding, and that -- while I certainly try to write half-way efficient code -- performance is not a primary issue. And for quick prototyping, Python is for me just neat.
Now I am considering taking some of my stuff more "serious", i.e., bringing it into a production environment, making it more maintainable, and maybe more efficient. So I wonder whether it's worth it to rewrite my code in, say, Java (with which I'm also reasonably familiar). I know that Python is not slow, but things like Java's static typing seem to make it less prone to errors on a larger scale, particularly when different people work on the same project.
|
PyCharm 2016 and line wraps on copy/pasting
| 44,912,869 | -1 | 1 | 236 | 0 |
python,pycharm
|
(On a mac)
To get my editor to stop auto wrapping long lines of code I did this.
PyCharm --> Preferences --> Editor --> Code Style --> Default Options --> Right margin (columns): 999
The editor wouldn't let me set a value larger than 999.
It's not perfect but it reduced the annoyance factor quite a bit for me.
Hope it helps.
| 0 | 0 | 0 | 0 |
2016-04-23T16:26:00.000
| 1 | -0.197375 | false | 36,813,366 | 0 | 0 | 1 | 1 |
Can someone tell me if there's some way to disable the autoformatting when copy/pasting?
Every time I paste a line that's longer than the PEP-8 max line length, PyCharm automatically inserts line wraps. That's really annoying.
I'm using the Professional version.
Many thanks
rene
|
Code generation for multiple platforms
| 36,819,049 | 1 | 1 | 46 | 0 |
python,code-generation,jinja2,template-engine,software-design
|
It doesn't matter which technique you use, you'll face three potential problems:
Using "the same (orchestration-driver) data" across all N targets.
There will be a preferred way for each target to represent that data.
You can choose a lowest common denominator (e.g., text or XML) at the price of making the target engines clumsier to write
Finding equivalent effect in each of the N targets. Imagine you need "eval" (I hope not) in each target; even if they appear to have similar implementations, some detail will be wrong and you'll have to work around that
The performance of one or more of the targets is poor.
If you code your own implementation, you can more easily overcome 2) and 3).
If you generate code, you have more flexibility to change how a particular target runs. If you use simple text-based "templates" to generate target language code, you won't be able to generate very efficient code; you can't optimize what you generate. If you use a more sophisticated code generator, you might be able to generate/optimize the result.
It's hard to tell how much trouble you are going to have, partly because you haven't told us what this engine will do or what the target languages are. It will also be hard to tell even with that data; until you have a running system you can't be sure there isn't a rude surprise.
People use sophisticated code generation techniques when they are facing the unknown because that maximizes flexibility and therefore makes it easier to overcome complications.
People use simpler code generation when they don't have the energy to learn how to use a sophisticated generator. If they are lucky, no problems arise and they win. If this experiment isn't a lot of work, then you should try it and hope for the best.
| 0 | 0 | 0 | 0 |
2016-04-24T03:09:00.000
| 1 | 0.197375 | false | 36,818,862 | 0 | 0 | 1 | 1 |
I'm designing an orchestration engine which can automate tasks within multiple environments: JavaScript web UIs, Python webservers, and C runtimes. One possible approach is to write the orchestration core in each language. That seems brittle, as each new engine feature will need to be added to each supported language (and bugs will have to be resolved multiple times, all while dealing with different idioms in each language). Another approach would be to write the core once in the lowest common denominator language (possibly C) and then wrap it in the other languages. But I think deployment of the compiled libraries to browsers would be a nightmare, if not impossible. So another option I'm considering is templates and code generation. The engine could then be written once (probably in Python), and the workflows compiled to each target using Jinja templates.
Does this last approach sound feasible? If I go that route, what pitfalls should I be aware of? Should I suck it up and write the engine three times?
|
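For the template-based route discussed above, a minimal Jinja2 sketch; the workflow fields and the run_task helper are invented purely for illustration.

```python
from jinja2 import Template

# Toy template that emits a Python function for one workflow step.
PY_TEMPLATE = Template(
    "def {{ name }}(ctx):\n"
    "    # generated step: {{ description }}\n"
    "    ctx['{{ output_key }}'] = run_task('{{ task_id }}', ctx)\n"
)

step = {"name": "verify_address", "description": "call external verifier",
        "output_key": "address_ok", "task_id": "verify-1"}
print(PY_TEMPLATE.render(**step))
```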
How to drop table and recreate in amazon RDS with Elasticbeanstalk?
| 36,820,728 | 4 | 1 | 5,860 | 1 |
python,django,amazon-web-services,amazon-elastic-beanstalk,amazon-rds
|
The easiest way to accomplish this is to SSH to one of your EC2 instances that has access to the RDS DB, and then connect to the DB from there. Make sure that your Python scripts can read your app configuration to access the configured DB, or add arguments for the DB hostname. To drop and create your DB, you just add the necessary arguments to connect to the DB. For example:
$ createdb -h <RDS endpoint> -U <user> -W ebdb
You can also create a RDS snapshot when the DB is empty, and use the RDS instance actions Restore to Point in Time or Migrate Latest Snapshot.
| 0 | 0 | 0 | 0 |
2016-04-24T06:51:00.000
| 1 | 1.2 | true | 36,820,171 | 0 | 0 | 1 | 1 |
My database on Amazon currently has only a little data in it (I am making a web app but it is still in development) and I am looking to delete it, make changes to the schema, and put it back up again. The past few times I have done this, I have completely recreated my elasticbeanstalk app, but there seems like there is a better way. On my local machine, I will take the following steps:
"dropdb databasename" and then "createdb databasename"
python manage.py makemigrations
python manage.py migrate
Is there something like this that I can do on amazon to delete my database and put it back online again without deleting the entire application? When I tried just deleting the RDS instance a while ago and making a new one, I was having problems with elasticbeanstalk.
|
Google Oauth2 Contacts API returns Invalid token: Stateless token expired after an hour
| 36,996,677 | 0 | 0 | 252 | 0 |
python-3.x,oauth-2.0,google-contacts-api,django-allauth
|
See, when you are logging into the website, you are probably using cookies. So basically you might be using the same session, and the API is not actually called.
When you log in in incognito mode or in a different browser, that cookie cannot be used, so this time the API is called. For this reason, the token gets changed.
For example, if after a few users have signed up with Google you change the scope of the app, what happens is: if a user has enabled cookies and they have not expired, when he visits your site it simply logs him in. It does not ask for the permissions you added recently to the scope. But when he logs out and logs in again, then it asks for the additional permission and the token also gets changed.
What you should do is go through the code of django-allauth and work out how it uses the token. You must also know that to get a refresh token, you must have offline access enabled in your configuration.
| 0 | 0 | 0 | 0 |
2016-04-24T09:02:00.000
| 1 | 1.2 | true | 36,821,140 | 0 | 0 | 1 | 1 |
What's wrong with my setup?
I am using django-allauth for social signup and recently I added contacts to its scope. Things are working fine. It now asks for permission to manage contacts and I am able to get contact details of users through the API.
But once I make a request to get the contacts of a user (I am not saving any refresh token or access token at that time), after an hour when I make the request again with the same token, it shows this error: "Invalid token: Stateless token expired".
However, I can still log in to the website and the token does not change. But when I log out and log in again, the token changes and I can again get the contacts using that token for one hour.
What's the issue? What am I missing?
|
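If the fix is to request offline access so a refresh token is issued, the django-allauth Google provider is typically configured along these lines; treat the exact keys and scopes as assumptions to verify against the allauth docs for your version.

```python
# settings.py (sketch) -- ask Google for offline access so a refresh token is returned.
SOCIALACCOUNT_PROVIDERS = {
    "google": {
        "SCOPE": ["profile", "email",
                  "https://www.googleapis.com/auth/contacts.readonly"],
        "AUTH_PARAMS": {"access_type": "offline"},
    }
}
```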
A list model field for Models.Model
| 36,826,937 | 0 | 1 | 46 | 0 |
python,django
|
You'll define another model that has foreign key to the main model.
| 0 | 0 | 0 | 0 |
2016-04-24T18:02:00.000
| 2 | 0 | false | 36,826,909 | 1 | 0 | 1 | 1 |
I'm looking for a field that I can define in my model which is essentially a list, because it'll be used to store multiple string values. Obviously CharField cannot be used.
|
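A minimal sketch of the answer above with hypothetical model names: the "list" becomes a second model holding one string per row, with a ForeignKey back to the main model.

```python
from django.db import models

class Item(models.Model):
    name = models.CharField(max_length=100)

class ItemValue(models.Model):
    # Each row is one string in the "list" belonging to an Item.
    item = models.ForeignKey(Item, related_name="values", on_delete=models.CASCADE)
    text = models.CharField(max_length=255)

# usage: item.values.all() gives the strings stored for that item
```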
How to modify a pyramid template on the fly before rendering
| 36,873,321 | 0 | 0 | 45 | 0 |
python-2.7,web-applications,pyramid
|
If you're using jinja, try this:
<div class="html-content">{{scraped_html|safe}}</div>
| 0 | 0 | 0 | 0 |
2016-04-25T09:04:00.000
| 2 | 0 | false | 36,836,101 | 0 | 0 | 1 | 1 |
Please, I am working on a site where I would scrape another website's HTML table source code and append it to my template before rendering my page.
I have written the script which stores the HTML code in a variable, but I don't know how to append it.
Kindly suggest.
|
How to create an equivalent of a background thread for an auto-scaling instance
| 36,874,638 | 0 | 1 | 153 | 0 |
google-app-engine,google-app-engine-python
|
You can use a cron job that will start a task. In this task, you can call all your instances to clean up expired objects.
| 0 | 1 | 0 | 0 |
2016-04-26T19:40:00.000
| 2 | 0 | false | 36,874,278 | 0 | 0 | 1 | 1 |
I've been reading up a bit on background threads, and it seems they are only allowed on backend instances. I have created an LRU instance cache that I want to call periodic cleanup jobs on to remove all expired objects. This will be used in both frontend and backend instances.
I thought about using deferred or taskqueue, but those do not have the option to route a request back to the same instance. Any ideas?
|
Some confusions regarding celery in python
| 36,891,584 | 0 | 1 | 385 | 0 |
python,django,celery
|
First, just to explain briefly how it works. You have a Celery client running in your code. You call tasks.add(1,2) and a new Celery task is created. That task is transferred by the broker to the queue. Yes, the queue is persisted in RabbitMQ or SQS. The Celery daemon is always running and listening for new tasks. When there is a new task in the queue, it starts a new Celery worker to perform the work.
To answer your questions:
The Celery daemon is always running and it starts the Celery workers.
Yes, RabbitMQ or SQS is doing the work of the queue.
With the Celery monitor you can see how many tasks are running, how many are completed, what the size of the queue is, etc.
| 0 | 1 | 0 | 0 |
2016-04-26T23:28:00.000
| 2 | 0 | false | 36,877,581 | 0 | 0 | 1 | 1 |
I have divided celery into following parts
Celery
Celery worker
Celery daemon
Broker: RabbitMQ or SQS
Queue
Result backend
Celery monitor (Flower)
My Understanding
When I hit a Celery task in Django, e.g. tasks.add(1,2), Celery adds that task to the queue. I am confused whether that's 4 or 5 in the above list.
When the task goes to the queue, the worker gets that task and deletes it from the queue.
The result of that task is saved in Result Backend
My Confusions
What's the difference between the Celery daemon and a Celery worker?
Is RabbitMQ doing the work of the queue? Does it mean tasks get saved in RabbitMQ or SQS?
What does Flower do? Does it monitor workers, tasks, queues, or results?
|
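To make the moving parts above concrete, a minimal Celery sketch; the broker and backend URLs are placeholders. The client-side .delay() call only enqueues a message on the broker; a worker started with `celery -A tasks worker` picks it up, and the return value lands in the result backend.

```python
# tasks.py (sketch)
from celery import Celery

# RabbitMQ as broker, Redis as result backend (both URLs are hypothetical).
app = Celery("tasks",
             broker="amqp://guest@localhost//",
             backend="redis://localhost:6379/0")

@app.task
def add(x, y):
    return x + y

# Client side:
# result = add.delay(1, 2)   # enqueue; a running worker executes it
# result.get(timeout=10)     # -> 3, fetched from the result backend
```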
Using DataTables with Python
| 36,917,130 | 0 | 0 | 1,506 | 0 |
python-3.x,datatable
|
It's easier than I thought; there is no need for PHP and MariaDB.
When using nginx, you need uwsgi and uwsgi-plugin-cgi to let nginx know that the Python script is a script and not data. Point to the Python script in the Ajax parameter of the DataTables JS code, make it executable, print the array with a JSON function in the Python script, and include the CGI/JSON header strings. The array should look like the one in the example on the DataTables website (Ajax source).
It's all running in memory now.
| 0 | 0 | 0 | 0 |
2016-04-27T19:03:00.000
| 1 | 1.2 | true | 36,898,723 | 0 | 0 | 1 | 1 |
I'm a beginner at website programming and want to understand some basics.
I've created a Python 3 script which fetches some data from a website and makes some calculations. Result is then about 20 rows with 7 columns.
What is the easiest way to make them available on my website? When refreshing my website, the Python script should fetch the data from the 3rd party website and this data should then be displayed in a simple table with sorting option.
I've discovered the jQuery plugin DataTables with Ajax JSON source. I would create a PHP script which executes the Python script which writes data to a DB like MariaDB. PHP then creates a JSON for Ajax.
Is this the right way or are there easier ways? Maybe using a framework etc.?
Thanks!
|
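A minimal sketch of the CGI-style approach described in the answer: a Python script that prints a JSON content-type header plus a "data" array in the shape DataTables' Ajax source commonly expects. The row values are made up.

```python
#!/usr/bin/env python3
# datatable.py (sketch) -- executed as CGI (e.g. behind uwsgi-plugin-cgi).
import json

rows = [
    ["AAA", 1.23, 4.56, 7.89, 0.12, 3.45, 6.78],  # 7 hypothetical columns
    ["BBB", 2.34, 5.67, 8.90, 1.23, 4.56, 7.89],
]

print("Content-Type: application/json")
print()  # blank line ends the CGI headers
print(json.dumps({"data": rows}))
```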
No "Run Python Console" in PyCharm menu
| 36,912,291 | 0 | 0 | 1,002 | 0 |
python,console,pycharm
|
Click the icon in the lower left corner of PyCharm; then you will get a button for the console.
| 0 | 0 | 0 | 0 |
2016-04-28T10:40:00.000
| 1 | 0 | false | 36,912,233 | 1 | 0 | 1 | 1 |
I am new to PyCharm and I am stuck on something really stupid: I cannot get PyCharm to display a Python interpreter console window.
The help tells me to click "Tools -> Run Python Console" in the main menu, which is simple and logical enough, except there is no Run Python Console command in my Tools submenu. There is a "Tools -> Python Console..." command (yes with the dots, plus an icon), but it does nothing. Ditto for the "Python Console" box (with the same icon) in the right end of the bottom bar.
I have searched a lot for a solution, but nobody seems to have discussed this or a similar problem.
My installation is:
PyCharm Community Edition 2016.1.2,
Build #PC-145.844, built on April 8, 2016,
JRE: 1.8.0_60-b27 x86,
JVM: Java HotSpot(TM) Server VM by Oracle Corporation
Thanks for any hints.
|
Django-Compressor throws UncompressableFileError on bower installed asset
| 38,980,458 | 0 | 0 | 99 | 0 |
python,django,unit-testing,django-compressor
|
I think if you also set COMPRESS_PRECOMPILERS = () in your test-specific settings, that should fix your problem.
| 0 | 0 | 0 | 1 |
2016-04-28T21:04:00.000
| 1 | 0 | false | 36,925,440 | 0 | 0 | 1 | 1 |
When I run my unit tests I am getting UncompressableFileError for files installed through Bower. This happens because I don't run bower install in my unit tests and I don't want to have to run bower install for my unit tests.
Is there a way to disable django-compressor, or to mock the files so that this error doesn't happen?
I have COMPRESS_ENABLED set to False but no luck there, it still looks for the file.
|
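A small sketch of the test-settings override the answer suggests; the setting names are django-compressor's, while the module layout is an assumption.

```python
# test_settings.py (sketch) -- imported only when running the test suite
from myproject.settings import *  # noqa: F401,F403 (hypothetical base settings module)

COMPRESS_ENABLED = False     # don't compress during tests
COMPRESS_PRECOMPILERS = ()   # per the answer above, stop compressor resolving assets
```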
python django No module named 'psycopg2' error
| 36,930,928 | 0 | 1 | 1,046 | 0 |
python,django
|
Did you install the dev packages (assuming that you have already installed psycopg2)?
If not, and if you are on Ubuntu, do this: sudo apt-get install libpq-dev python-dev
| 0 | 0 | 0 | 0 |
2016-04-29T01:54:00.000
| 2 | 0 | false | 36,928,288 | 0 | 0 | 1 | 1 |
After completing the installation of Python Django, when running the command
"python manage.py runserver"
I get an error like:
raise ImproperlyConfigured("Error loading psycopg2 module: %s" % e) django.core.exceptions.ImproperlyConfigured: Error loading psycopg2 module: No module named 'psycopg2'
|
How to implement EAV in Django
| 40,278,064 | -1 | 6 | 2,167 | 0 |
python,django,entity-attribute-value
|
Let me know whether we are on the same page. I think you need to formulate the EAV database schema first. For that, identify the entities, attributes, and the associated values. In the example you mention, the entity may be the device and its attribute may be a setting. To take another example, in car sales the entity is the sales receipt, the attribute is the product purchased by the customer (a car), and the values are the price, car model, car colour, etc.
Make master tables and tables that store mappings, if any.
Implementing this schema in models.py will create your models; then insert values into those models through the shell or an insert script.
| 0 | 0 | 0 | 0 |
2016-04-29T15:08:00.000
| 2 | -0.099668 | false | 36,941,823 | 0 | 0 | 1 | 1 |
I need to implement a fairly standard entity-attribute-value hierarchy. There are devices of multiple types, each type has a bunch of settings it can have, each individual device has a set of particular values for each setting. It seems that both django-eav and eav-django packages are no longer maintained, so I guess I need to roll my own. But how do I architect this? So far, I am thinking something like this (skipping a lot of detail)
class DeviceType(Model):
    name = CharField()

class Device(Model):
    name = CharField()
    type = ForeignKey(DeviceType)

class Setting(Model):
    name = CharField()
    type = CharField(choices=(('Number', 'int'), ('String', 'str'), ('Boolean', 'bool')))
    device_type = ForeignKey(DeviceType)

class Value(Model):
    device = ForeignKey(Device)
    setting = ForeignKey(Setting)
    value = CharField()

    def __setattr__(self, name, value):
        if name == 'value':
            ... do validation based on the setting type ...

    def __getattr__(self, name):
        if name == 'value':
            ... convert string to whatever is the correct value for the type ...
Am I missing something? Is there a better way of doing this? Will this work?
|
python scrapy can not start project
| 36,999,348 | 1 | 2 | 702 | 0 |
python,windows,scrapy,anaconda
|
Try the command scrapy.bat startproject tutorial; it should solve the problem.
And you don't need to edit the environment path.
| 0 | 0 | 0 | 0 |
2016-04-29T22:11:00.000
| 1 | 1.2 | true | 36,948,236 | 1 | 0 | 1 | 1 |
I used Anaconda to install Scrapy on a Windows 10 system. But I cannot start Scrapy with scrapy startproject tutorial; I got the feedback "bash: scrapy: command not found".
After searching the internet, I found a suggestion from a similar topic to add the environment variable C:\Users\conny\Anaconda2\Lib\site-packages\scrapy behind the variable PATH, but it still doesn't work.
Do you have any idea, what is the problem?
|
Web service hosted on EC2 host is not reachable from browser
| 36,962,685 | 2 | 0 | 543 | 0 |
web-services,amazon-web-services,amazon-ec2,flask-sqlalchemy,python-webbrowser
|
It seems that the web service isn't up and running, or it is not listening on the right port, or it is listening only on the 127.0.0.1 address. Check it with the 'sudo netstat -tnlp' command. You should see the process name and what IP and port it is listening on.
| 0 | 0 | 1 | 0 |
2016-05-01T00:22:00.000
| 1 | 0.379949 | false | 36,961,672 | 0 | 0 | 1 | 1 |
I hosted a Python/Flask web service on my Amazon (AWS) EC2 instance and modified the security group rules such that all inbound traffic is allowed.
I can log in via SSH, and ping (with the public IP) is working fine, but I couldn't open the service URL from the web browser. Could anyone please suggest how I can debug this issue?
Thanks,
|
Remove Headers from Flask Response
| 36,969,127 | 2 | 1 | 6,083 | 0 |
python,pythonanywhere,alexa-skills-kit
|
You can't, but I seriously doubt that anyone would write code that would fall down when there were extra header fields in a request. Perhaps you're misinterpreting the error.
| 0 | 0 | 0 | 0 |
2016-05-01T11:19:00.000
| 2 | 1.2 | true | 36,966,042 | 0 | 0 | 1 | 1 |
I am trying to develop a web service back-end for an Alexa skill, and this requires me to have very specific headers in the HTTP response.
Looking at the details of my response (using hurl.it), I have a whole bunch of HTTP headers that Amazon doesn't want. How can I remove the 'X-Clacks-Overhead', 'Server', etc. headers?
I am using Flask and Python 3.
|
Scaling a sequential program into chain of queues
| 36,972,378 | 0 | 1 | 121 | 0 |
python,amazon-sqs
|
It looks like you can do the following:
Assigner
Reads from the assigner queue and assigns the proper IDs
Packs the data into bulks and uploads them to S3
Sends the S3 path to the Dumper queue
The Dumper reads the bulks and dumps them to the DB in bulk
| 0 | 0 | 0 | 0 |
2016-05-01T21:40:00.000
| 2 | 0 | false | 36,972,296 | 0 | 0 | 1 | 2 |
I am trying to scale an export system that works in the following steps:
Fetch a large number of records from a MySQL database. Each record is a person with an address and a product they want.
Make an external API call to verify address information for each of them.
Make an internal API call to get store and price information about the product on each record.
Assign identifiers to each record in a specific format, which is different for each export.
Dump all the data into a file, zip it and email it.
As of now all of this happens in one monolithic python script which is starting to show its age. As the number of records being exported at a time has grown by about 10x, the script takes a lot of memory and whole export process is slow because all the steps are blocking and sequential.
In order to speed up the process and make it scalable I want to distribute the work into a chain of SQS queues. This is quite straightforward for the first 4 steps:
Selector queue - takes a request, decides which records will be exported. Creates a msg for each of them in the verifier queue with export_id and record_id.
Verifier queue - takes the id of the record, makes the API call to verify its address. Creates a msg in the price queue with export_id and record_id.
Price queue - takes the id of a record, makes the API call to get prices and attaches it to the record. Creates a msg in the assigner queue with export_id and record_id.
Assigner queue - takes the id of a record, assigns it the sequential export ID. Creates a msg in the dumper queue with export_id and record_id.
Dumper queue - ???
This is all fine and dandy till now. Work is parallelized and we can add more workers to whichever step needs them the most.
I'm stumped by how to add the last step in the process?
Till now all the queues have been (suitably) dumb. They get a msg, perform an action and pass it on. In the current script, by the time we reach the last step, the program can be certain that all previous steps are complete for all the records and it is time to dump the information. How should I replicate this in the distributed case?
Here are the options I could think of:
The dumper queue just saves its incoming msgs in a DB table till it gets a msg flagged "FINAL" and then it dumps all msgs of that export_id. This makes the final msg a single point of failure. If multiple exports are being processed at the same time, order of msgs is not guaranteed so deciding which msg is final is prone to failure.
Pass an expected_total and count in each step and the dumper queue waits till it gets enough msgs. This would cause the dumper queue to get blocked and other exports will have to wait till all msgs of a previously started export are received. Will also have to deal with possibly infinite wait time in some way if msgs get lost.
None of the above options seem good enough. What other options do I have?
At a high level, consistency is more important than availability in this problem. So the exported files can arrive late, but they should be correct.
Msg Delay Reasons
As asked in the comments:
Internal/External API response times may vary. Hard to quantify.
If multiple exports are being processed at the same time, msgs from one export may get lagged behind or be received in a mixed sequence in queues down the line.
|
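A rough boto3 sketch of the Assigner-to-Dumper handoff described in the answer above; the bucket, key scheme, queue URL, and message shape are all made up.

```python
import json
import boto3

s3 = boto3.client("s3")
sqs = boto3.client("sqs")

def assigner_flush(export_id, records, dumper_queue_url):
    """Pack a bulk of assigned records into S3 and tell the Dumper where it is."""
    key = "exports/{}/bulk-0001.json".format(export_id)  # hypothetical key scheme
    s3.put_object(Bucket="export-bulks", Key=key, Body=json.dumps(records))
    sqs.send_message(
        QueueUrl=dumper_queue_url,
        MessageBody=json.dumps({"export_id": export_id, "s3_key": key}),
    )
```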
Scaling a sequential program into chain of queues
| 61,071,601 | 0 | 1 | 121 | 0 |
python,amazon-sqs
|
You should probably use a cache instead of a queue.
| 0 | 0 | 0 | 0 |
2016-05-01T21:40:00.000
| 2 | 0 | false | 36,972,296 | 0 | 0 | 1 | 2 |
I am trying to scale an export system that works in the following steps:
Fetch a large number of records from a MySQL database. Each record is a person with an address and a product they want.
Make an external API call to verify address information for each of them.
Make an internal API call to get store and price information about the product on each record.
Assign identifiers to each record in a specific format, which is different for each export.
Dump all the data into a file, zip it and email it.
As of now all of this happens in one monolithic python script which is starting to show its age. As the number of records being exported at a time has grown by about 10x, the script takes a lot of memory and whole export process is slow because all the steps are blocking and sequential.
In order to speed up the process and make it scalable I want to distribute the work into a chain of SQS queues. This is quite straightforward for the first 4 steps:
Selector queue - takes a request, decides which records will be exported. Creates a msg for each of them in the verifier queue with export_id and record_id.
Verifier queue - takes the id of the record, makes the API call to verify its address. Creates a msg in the price queue with export_id and record_id.
Price queue - takes the id of a record, makes the API call to get prices and attaches it to the record. Creates a msg in the assigner queue with export_id and record_id.
Assigner queue - takes the id of a record, assigns it the sequential export ID. Creates a msg in the dumper queue with export_id and record_id.
Dumper queue - ???
This is all fine and dandy till now. Work is parallelized and we can add more workers to whichever step needs them the most.
I'm stumped by how to add the last step in the process?
Till now all the queues have been (suitably) dumb. They get a msg, perform an action and pass it on. In the current script, by the time we reach the last step, the program can be certain that all previous steps are complete for all the records and it is time to dump the information. How should I replicate this in the distributed case?
Here are the options I could think of:
The dumper queue just saves its incoming msgs in a DB table till it gets a msg flagged "FINAL" and then it dumps all msgs of that export_id. This makes the final msg a single point of failure. If multiple exports are being processed at the same time, order of msgs is not guaranteed so deciding which msg is final is prone to failure.
Pass an expected_total and count in each step and the dumper queue waits till it gets enough msgs. This would cause the dumper queue to get blocked and other exports will have to wait till all msgs of a previously started export are received. Will also have to deal with possibly infinite wait time in some way if msgs get lost.
None of the above options seem good enough. What other options do I have?
At a high level, consistency is more important than availability in this problem. So the exported files can arrive late, but they should be correct.
Msg Delay Reasons
As asked in the comments:
Internal/External API response times may vary. Hard to quantify.
If multiple exports are being processed at the same time, msgs from one export may get lagged behind or be received in a mixed sequence in queues down the line.
|
cloud9 installation doesnt let me edit /python/ops/seq2seq.py
| 36,977,498 | 1 | 2 | 139 | 0 |
python,tensorflow,cloud9-ide
|
OK, found it. After installing on c9 there is the ~/workspace/tensorflow path with all the files (incl. the ops files) in it, but there is also the /usr/local/lib/python2.7/dist-packages/tensorflow path.
When running from the ~/workspace/tensorflow path, the ops files are still loaded from the /usr...-path. So when editing my python/ops/seq2seq.py in the /usr..-path, all is fine and I get access to my third return value.
| 0 | 0 | 0 | 0 |
2016-05-02T07:17:00.000
| 2 | 0.099668 | false | 36,976,966 | 0 | 0 | 1 | 1 |
In a local installation I added a return value of model_with_buckets() in /python/ops/seq2seq.py. Works like magic (locally). Then I upload both my model-files (/models/rnn/translate/seq2seq_model.py) as well as my new /python/ops/seq2seq.py to cloud 9.
But then when I run it the system complains it's requesting 3 return values but only getting 2 (even though the new seq2seq.py should return 3). Does c9 cache those ops-files somewhere?
Thx
|
access local file with jquery in client side of a Django project
| 36,983,801 | 0 | 0 | 269 | 0 |
javascript,jquery,python,django,file
|
One viable option that would work and not set out security alarms all over the place, would be to use file form field on your page and ask an end user to give that file to you.
Then, you can use HTML5 File API and do whatever you need in javascript or send it to your server.
| 0 | 0 | 0 | 0 |
2016-05-02T13:15:00.000
| 1 | 0 | false | 36,983,393 | 0 | 0 | 1 | 1 |
I need to access a local file on the client side of a Django project and read an xml file from the client's local disk. Like C:\\test.xml
I am doing this in a single HTML and script file, using Chrome --allow-file-access to get permission for this access, and it works. But when I move this code into my Django project and use this jQuery script in my HTML templates, it does not work and shows a cross origin request ... error.
Please help me. Why is this happening and what is the solution?
Thanks.
|
SSL Error: Bad handshake
| 38,236,543 | 6 | 3 | 1,752 | 0 |
python,django,python-requests
|
I did a bunch of things, but I believe pip uninstall pyopenssl did the trick.
| 0 | 0 | 1 | 0 |
2016-05-03T16:38:00.000
| 1 | 1.2 | true | 37,009,692 | 0 | 0 | 1 | 1 |
I keep getting SSLError: ('bad handshake SysCallError(0, None)) any time I try to make a request with Python requests in my Django app.
What could possibly be the issue?
|
How to automate user clicking through third-party website to retrieve data?
| 37,010,752 | 1 | 1 | 218 | 0 |
java,python,automation,web,automated-tests
|
Generally, you can inspect the web traffic to figure out what kind of request is being sent, e.g. with the TamperData plugin for Firefox or the Firebug Net panel.
Figure out what the browser is sending (e.g., a POST request to the server), which will include all the form data of buttons and dropdowns, and then replicate that in your own code using Apache HttpClient, jsoup, or another HTTP client library.
| 0 | 0 | 1 | 0 |
2016-05-03T17:20:00.000
| 1 | 0.197375 | false | 37,010,482 | 0 | 0 | 1 | 1 |
I am trying to automate a process in which a user goes on a specific website, clicks a few buttons, selects the same values on the drop down lists and finally gets a link on which he/she can then download csv files of the data.
The third-party vendor does not have an API. How can I automate such a step?
The data I am looking for is processed by the third party and not available on the screen at any given point.
|
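A sketch of the replay approach described above, using Python's requests library; the URL and form fields are invented, and the real values come from inspecting the browser's traffic.

```python
import requests

session = requests.Session()

# Hypothetical form fields captured from the browser's POST (e.g. via the Net panel).
payload = {"report": "monthly", "region": "EU", "format": "csv"}

resp = session.post("https://vendor.example.com/reports/run", data=payload)
resp.raise_for_status()

# The response body (or a download link found in it) is the CSV.
with open("report.csv", "wb") as fh:
    fh.write(resp.content)
```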
Django simulate user connected through command line
| 37,021,382 | 2 | 2 | 226 | 0 |
python,django,administration
|
Before you start anything, set up an environment where you are not working with the live data or production environment.
Now that you've done that you have a few options.
Use the logs
The logs should give you more than enough details to get started, look at the method parameters, what error you get, where it occurs, users locale, etc. etc.
Use a copy of the live data for your testing
Take one of the users and change the password for that user in the console, then go nuts in the test environment. Beware of any data protection laws your server may be bound by when doing this
Talk to your users
Just be honest, tell your user you're looking into an issue and see if they are able to help at all
| 0 | 0 | 0 | 0 |
2016-05-04T07:21:00.000
| 3 | 0.132549 | false | 37,020,995 | 0 | 0 | 1 | 1 |
I have a lot of clients who can connect successfully with login + password and have done a lot of things without any problems. But I have 5 clients who managed to do strange things, and now they have some problems when they go to some URLs.
Of course I don't have their passwords (and I don't want them). So I need a way to log in as if I were them, five times, to see what's happening with their accounts. I may have to do this again many times in the future. I didn't find anything on Google which could allow me, via the command line or whatever, to log in as a specific user easily.
Is there something around like this?
|
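The "copy of the live data" option above can be done from a Django shell; a minimal sketch with a hypothetical username, meant only for the test copy, never production.

```python
# python manage.py shell  -- run against the *copied* database only
from django.contrib.auth import get_user_model

User = get_user_model()
user = User.objects.get(username="problem_user")  # hypothetical username
user.set_password("temporary-debug-password")
user.save()
# Now log in through the site as that user and reproduce the reported problem.
```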
gcloud preview app deploy uploads all source code files every time in a Python project, taking a long time
| 37,061,546 | 1 | 2 | 1,415 | 0 |
python,git,google-app-engine,gcloud-python,google-cloud-python
|
Yes, this is the expected behaviour, each deployment is standalone, no assumption is made about anything being "already deployed", all app's artifacts are uploaded at every deployment.
Update: Kekito's comment suggests different tools may actually behave differently. My answer applies to the linux version of the Python SDK, regardless of deploying a new version or re-deploying the same version.
| 0 | 1 | 0 | 0 |
2016-05-05T06:02:00.000
| 1 | 0.197375 | false | 37,043,493 | 0 | 0 | 1 | 1 |
After I recently updated the gcloud components with gcloud components update to version 108.0.0, I noticed the gcloud preview app deploy app.yaml command has started taking too long every time (about 15 minutes) for my project. Before this it only used to take about a minute to complete.
I figured out that using gcloud preview app deploy --verbosity info app.yaml displays the progress of the deployment process, and I noticed every file in the source code is being uploaded every time I deploy, including the files in the lib directory, which has a number of packages installed, about 2000 files in it, so this is where the delay is coming from. Since I am new to App Engine, I don't know if this is normal.
The project exists inside a folder of a git repo, and I noticed that after every deploy, 2 files in the default directory, source-context.json and source-contexts.json, are created and have information about the git repo inside. I feel that can somehow be relevant.
I went through a number of relevant questions here but couldn't figure out the issue. It would be great if this can be resolved, if it's an issue at all, because it's a big inconvenience having to wait 15 minutes to deploy every time.
I only started using Google App Engine a month ago, so please don't mind if the question is incorrect. Please let me know if additional info is needed to resolve this. Thanks
UPDATE: I am using the gcloud SDK on Ubuntu 14.04 LTS.
|
Django model changes cannot be migrated after a Git pull
| 56,626,356 | 0 | 0 | 881 | 1 |
python,django,git
|
After you pull, do not delete the migrations file or folder. Simply do python manage.py migrate. If even after this there is no change in the database schema, then open the migration file which came through the git pull and remove the migration code for the model whose table is not being created in the database. Then run makemigrations and migrate. I had this same problem; this worked for me.
| 0 | 0 | 0 | 0 |
2016-05-05T07:12:00.000
| 2 | 0 | false | 37,044,634 | 0 | 0 | 1 | 1 |
I am working on a Django project with another developer. I had initially created a model which I had migrated and was synced correctly with a MySQL database.
The other developer had later pulled the code I had written so far from the repository and added some additional fields to my model.
When I pulled through his changes to my local machine the model had his changes, and additionly a second migration file had been pulled.
So I then executed the migration commands:
python manage.py makemigrations myapp, then python manage.py migrate in order to update my database schema. The response was that no changes had been made.
I tried removing the migration folder in my app and running the commands again. A new migrations folder had been generated and again my database schema had not been updated.
Is there something I am missing here? I thought that any changes to model can simply be migrated to alter the database schema.
Any help would be greatly appreciated. (Using Django version 1.9).
|
Setting environment variables in virtualenv (Python, Windows)
| 37,236,689 | 0 | 4 | 2,106 | 1 |
python,windows,flask,pycharm,virtualenv
|
The problem was that PyCharm does not activate the virtual environment when pressing the run button. It only uses the virtualenv python.exe.
| 0 | 0 | 0 | 0 |
2016-05-05T09:10:00.000
| 2 | 0 | false | 37,046,677 | 0 | 0 | 1 | 1 |
As the title suggests, I'm trying to use an environment variable in a config file for a Flask project (on Windows 10).
I'm using a virtualenv, and so far I have tried to add set "DATABASE_URL=sqlite:///models.db" to /Scripts/activate.bat in the virtualenv folder.
But it does not seem to work. Any suggestions?
|
Django block user login from incognito browser window if user is already logged in from regular browser window
| 37,048,760 | 0 | 0 | 652 | 0 |
python,django
|
You can filter on the IP address of the user in real time and check if there is some active session related to that IP address.
If there is, you can block the user from logging in from incognito or any other browser.
An active session means an open session here.
| 0 | 0 | 0 | 0 |
2016-05-05T10:06:00.000
| 1 | 0 | false | 37,047,754 | 0 | 0 | 1 | 1 |
Let's say, I have a user who has logged in using Chrome browser normal window(not incognito), now he opens an incognito window tries to login using same credentials, I want to detect that the particular user is logged in already and disallow a second login.
I have seen a few questions like this, where the solution is to clear up all older sessions. But is that the only solution? Can't I leave all those sessions untouched and still guarantee that there is only one active session?
|
Enable mod_python and mod_wsgi module
| 37,048,554 | 2 | 0 | 481 | 0 |
wsgi,mod-python,mod
|
For recent versions of mod_wsgi, no, you cannot load them at the same time; mod_wsgi will prevent it, because mod_python's thread code doesn't use the Python C API for threads properly and causes various problems.
Short answer is that you shouldn't be using mod_python any more. Use a proper Python web framework with a more modern template system instead.
If for some reason you really don't want to do that, go back and use mod_wsgi 3.5.
| 0 | 0 | 0 | 1 |
2016-05-05T10:10:00.000
| 1 | 0.379949 | false | 37,047,860 | 0 | 0 | 1 | 1 |
Can I use mod_python.so and mod_wsgi.so at the same time on Apache Web Server defining different directories for each of them. At the moment I can not enable them both in my apache config file at the same time using LoadModule.
mod_wsgi for Django and mod_python for .py and .psp scripts.
|
Is it possible to put a function in timed loop using django-background-task
| 37,066,028 | 3 | 0 | 212 | 0 |
django,python-3.x,background-task
|
No, it's not possible in any case, as it will effectively create cyclic import problems in Django: in the tasks you will have to import that function, and in the file for that function you will have to import the tasks.
So no matter what strategy you take, you are going to land in the same problem.
| 0 | 0 | 0 | 0 |
2016-05-06T06:37:00.000
| 2 | 1.2 | true | 37,065,874 | 1 | 0 | 1 | 1 |
Say I want to execute a function every 5 minutes without using a cron job.
What I think of doing is to create a Django background task which actually calls that function, and at the end of that function I again create that task with schedule = say 60*5.
This effectively puts the function in a time-based loop.
I tried a few iterations, but I am getting import errors. But is it possible to do or not?
|
How to dynamically visualize dataset on web?
| 37,077,704 | 0 | 0 | 52 | 0 |
javascript,python,dynamic,data-visualization
|
JS libraries like d3.js or Highcharts can be helpful to solve your problem. You can easily send the data from the server to the front end, where these libraries can gracefully plot the data.
| 0 | 0 | 0 | 0 |
2016-05-06T16:01:00.000
| 1 | 0 | false | 37,076,808 | 0 | 0 | 1 | 1 |
I am developing a website where I have around 800 data sets. I want to visualize my data using bar charts and pie charts, but I don't want to hard code this for every data set. What technology can I use to dynamically read the data from a json/csv/xml and render the graph? (btw I'm going to use a Python based backend (either Django or Flask))
|
How do I run python script within swift app?
| 37,082,146 | 1 | 0 | 1,182 | 0 |
python,ios,swift
|
Short answer: You don't.
There is no Python interpreter running on iOS, and Apple will likely neither provide nor allow one, since they don't allow you to deliver and run new code in an iOS app once it's installed. The code is supposed to be fixed at install time, and Python is an interpreted language.
| 0 | 0 | 0 | 1 |
2016-05-06T22:00:00.000
| 1 | 0.197375 | false | 37,082,038 | 0 | 0 | 1 | 1 |
I am making an app which will log in to a website and scrape it for the information I need. I currently have all of the login and web scraping completely written in Python. What I am trying to figure out is how to run that Python code in Xcode in my Swift project. I want to avoid setting up a server capable of executing CGI scripts. Essentially the user will input their credentials, I will pass them to the Python file, and the script will run.
|
How to use pg_restore on Windows Command Line?
| 37,104,332 | 5 | 4 | 10,221 | 1 |
python,django,database,postgresql,heroku
|
Since you're on windows, you probably just don't have pg_restore on your path.
You can find pg_restore in the bin of your postgresql installation e.g. c:\program files\PostgreSQL\9.5\bin.
You can navigate to the correct location or simply add the location to your path so you won't need to navigate always.
| 0 | 1 | 0 | 0 |
2016-05-08T19:55:00.000
| 1 | 0.761594 | false | 37,104,193 | 0 | 0 | 1 | 1 |
I have downloaded a PG database backup from my Heroku App, it's in my repository folder as latest.dump
I have installed postgres locally, but I can't use pg_restore on the windows command line, I need to run this command:
pg_restore --verbose --clean --no-acl --no-owner -j 2 -h localhost -d DBNAME latest.dump
But the command is not found!
|
Unit tests with an unmanaged external read-only database
| 37,130,919 | 2 | 1 | 299 | 1 |
python,django,unit-testing
|
After a day of staring at my screen, I found a solution:
I removed managed=False from the models and generated migrations. To prevent actual migrations against the production database, I used my database router to prevent the migrations (return False in allow_migrate for the appropriate app and database).
In my settings I detect whether unit tests are being run, and then just don't define the database router or the external database. With the migrations present, the unit tests work.
| 0 | 0 | 0 | 0 |
2016-05-09T11:50:00.000
| 1 | 0.379949 | false | 37,115,070 | 0 | 0 | 1 | 1 |
I'm working on a project which involves a huge external dataset (~490Gb) loaded in an external database (MS SQL through django-pyodbc-azure). I've generated the Django models marked managed=False in their meta. In my application this works fine, but I can't seem to figure out how to run my unit tests. I can think of two approaches: mocking the data in a test database, and giving the unit tests (and CI) read-only access to the production dataset. Both options are acceptable, but I can't figure out either of them:
Option 1: Mocked data
Because my models are marked managed=False, there are no migrations, and as a result, the test runner fails to create the database.
Option 2: Live data
django-pyodbc-azure will attempt to create a test database, which fails because it has a read-only connection. Also I suspect that even if it were allowed to do so, the resulting database would be missing the required tables.
Q How can I run my unittests? Installing additional packages, or reconfiguring the database is acceptable. My setup uses django 1.9 with postgresql for the main DB.
|
Python script to detect USBs in java GUI
| 37,133,845 | 0 | 0 | 125 | 0 |
java,python,javafx,automation,libusb
|
No, in most (Windows) scenarios this will not work. The problem is that libusb on Windows uses a special backend (libusb0.sys, libusbK.sys or winusb.sys). You have to install one of those backends (libusb-win32 is libusb0.sys) on every machine you want your software to run on. Under Linux this should work fine out of the box.
Essentially you have to ship the files you generate with inf_wizard.exe with your software and install the inf (needs elevated privileges) before you can use the device with your software.
| 0 | 1 | 0 | 0 |
2016-05-09T22:28:00.000
| 1 | 0 | false | 37,126,446 | 0 | 0 | 1 | 1 |
I'm making a java GUI application (javafx) that calls a python script (python2.7) which detects connected devices. The reason for this is so I can automate my connections with multiple devices.
In my python script, I use pyusb. However to detect a device, I have to use inf_wizard.exe from libusb-win32 to communicate with the device. This is fine for my own development and debugging, but what happens if I wish to deploy this app and have other users use this?
Would this app, on another computer, be able to detect a device?
Thanks
Please let me know if there is a better way to doing this.
|
Parsing indexed key pairs from a QueryDict in Django or Python
| 38,005,054 | 1 | 1 | 201 | 0 |
jquery,python,django,parsing
|
I think you're going about it the wrong way. To make it easier, you should set the traditional argument to true in $.param().
This is the difference between traditional being false and true:
var obj = { a: [ 1, 2, 3 ] };
$.param(obj);       // a%5B%5D=1&a%5B%5D=2&a%5B%5D=3 ==> a[]=1&a[]=2&a[]=3
$.param(obj, true); // a=1&a=2&a=3
With traditional set to true, you can use this code in your Django view:
request.POST.getlist('a') # ['1', '2', '3']
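A minimal sketch of pairing the two lists in the view, assuming the form is adjusted to post its arrays under plain names like 'field' and 'sub' (traditional serialization as above):
fields = request.POST.getlist('field')   # e.g. ['001', '655', '856']
subs = request.POST.getlist('sub')       # e.g. ['', 'a', 'u']
field_subs = zip(fields, subs)           # [('001', ''), ('655', 'a'), ('856', 'u')]
for field, sub in field_subs:
    ...                                  # perform the follow-up operation here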
| 0 | 0 | 0 | 0 |
2016-05-10T20:31:00.000
| 1 | 0.197375 | false | 37,148,404 | 0 | 0 | 1 | 1 |
From a jQuery form, I get the following QueryDict, when I submit a form:
<QueryDict: {'marc[0].sub': [''], 'csrfmiddlewaretoken': ['K6Fd4AbFP2bLmAWaD4hAGoFbzyKjHErN'], 'field': [''], 'marc[2].field': ['856'], 'marc[0].field': ['001'], 'sub': [''], 'marc[1].sub': ['a'], 'marc[2].sub': ['u'], 'marc[1].field': ['655']}>
I can get at the data that I want if I use the very specific call in my view. For example:
print(QueryDict.getlist(request.POST, 'marc[2].sub'))
...shows the desired 'u' on the console, but I'm not sure how to loop through indexed key pairs in this odd format, where the keys have no relation, except the interloping index number. Eventually, I need a for each type statement, where I'd loop through the following:
marc[0].field: 001 and marc[0].sub: ''
marc[1].field: 655 and marc[1].sub: 'a'
marc[2].field: 856 and marc[2].sub: 'u'
...or, better, would be to loop through something more like this:
field_subs = ('001', ''), ('655', 'a'), ('856', 'u')
...to perform another operation.
e.g.
for field_sub in field_subs:
If I need to submit more code, am heading at this the wrong way, or making it more difficult than it is, I'd appreciate any direction. I'm using Django 1.9
Thanks
|
Flask: Save files downloaded with URL retrieve into the static/img/ folder
| 37,174,450 | 0 | 0 | 678 | 0 |
python,flask,urllib
|
Remove the / at the beginning of your path to make it relative instead of absolute.
| 0 | 0 | 0 | 0 |
2016-05-11T21:17:00.000
| 1 | 0 | false | 37,173,450 | 0 | 0 | 1 | 1 |
Whenever I try to use urllib.urlretrieve(href, '/static/img/'+filename), I get the error "No such file or directory". However, I do have that directory in there.
If I remove the "/static/img/" the images download fine into the root folder. I need the images to go into the static/img folder to follow Flask convention.
How do I download images using urlretrieve into a directory that I set in Flask?
|
How to stop flask app.run()?
| 37,179,018 | 2 | 2 | 13,550 | 0 |
python,flask
|
CTRL+C is the right way to quit the app; I do not think you can still visit the URL after CTRL+C. In my environment it works well.
What is the terminal output after CTRL+C? Maybe you can add some details.
You can try to visit the URL with curl to test whether a browser cache or anything else browser-related is causing this problem.
| 0 | 0 | 0 | 0 |
2016-05-12T06:10:00.000
| 3 | 0.132549 | false | 37,178,582 | 0 | 0 | 1 | 2 |
I made a flask app following flask's tutorial. After python flaskApp.py, how can I stop the app? I pressed ctrl + c in the terminal but I can still access the app through the browser. I'm wondering how to stop the app? Thanks.
I even rebooted the VPS. After the VPS is restarted, the app is still running!
|
How to stop flask app.run()?
| 43,197,195 | 1 | 2 | 13,550 | 0 |
python,flask
|
Have you tried pkill python?
WARNING: do not do so before consulting your system admin if you are sharing a server with others.
| 0 | 0 | 0 | 0 |
2016-05-12T06:10:00.000
| 3 | 0.066568 | false | 37,178,582 | 0 | 0 | 1 | 2 |
I made a flask app following flask's tutorial. After python flaskApp.py, how can I stop the app? I pressed ctrl + c in the terminal but I can still access the app through the browser. I'm wondering how to stop the app? Thanks.
I even rebooted the VPS. After the VPS is restarted, the app is still running!
|
Peer authentication failed for user "odoo"
| 37,199,710 | 5 | 4 | 23,159 | 1 |
python,postgresql,openerp,odoo-9
|
This helped me.
sudo nano /etc/postgresql/9.3/main/pg_hba.conf
then add
local all odoo trust
then restart postgres
sudo service postgresql restart
| 0 | 0 | 0 | 0 |
2016-05-12T16:57:00.000
| 3 | 0.321513 | false | 37,193,143 | 0 | 0 | 1 | 1 |
I'm on Odoo 9 and I have an issue when launching the Odoo server with $odoo.py -r odoo -w password: localhost:8069 doesn't load and I get an error in the terminal, "Peer authentication failed for user "odoo"".
I already created a user "odoo" on postgres.
When launching $odoo.py I can load the Odoo page in the browser but I can't create a database (as the default user).
It was working and I had already created a database, but when I logged out I couldn't connect to my database account anymore.
Any ideas ?
|
Getting Django to run a main.py file
| 37,198,563 | 0 | 0 | 1,358 | 0 |
python,django
|
This question is a little broad for a specific answer, but in general, one can:
Have the button access an API which will, on the server, run your main.py file in another thread (see the sketch after this list).
Once the application is finished, move the generated files to a deterministic location that serves static files on your web server.
Provide the user a URL to the newly created file's location.
Have a cron job run to clear out old files in the static directory.
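A minimal sketch of the first two steps, assuming main.py can be run as-is and that the paths and file names below are my own placeholders:
import subprocess
import sys
from django.http import JsonResponse

def go(request):
    # launch main.py without blocking the request; it writes plot.png and report.pdf
    subprocess.Popen([sys.executable, 'main.py'], cwd='/path/to/project')
    return JsonResponse({
        'png': '/static/results/plot.png',   # template shows this image once it exists
        'pdf': '/static/results/report.pdf', # link the user can download
    })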
| 0 | 0 | 0 | 0 |
2016-05-12T22:26:00.000
| 1 | 0 | false | 37,198,467 | 0 | 0 | 1 | 1 |
I've written a scientific program in python which outputs a .png and a .pdf
I would like to execute this main.py file from a web interface, with a nice big button saying GO and then display the .png and download a .pdf
I'm using a Django framework to serve the page saying GO. How do i get it to:
run my main.py file?
return the .png file to html template?
download the file which is generated by the main.py script?
Thank you internet
|
How to create missing DB tables in django?
| 37,232,610 | 1 | 1 | 2,300 | 1 |
python,django,django-migrations,django-database,django-1.9
|
When you exported the data and re-imported it into the other database, that export would have included the django_migrations table. This is basically a log of all the migrations successfully executed by Django.
Since you have left out only the log table according to you, that should really be the only table that's missing from your schema. Find the entries in django_migrations that correspond to this table and delete them. Then run ./manage.py migrate again and the table will be created.
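If it helps, a minimal sketch of removing those rows from the Django shell ('myapp' and the migration name are placeholders for the entries that created the missing table):
from django.db.migrations.recorder import MigrationRecorder

MigrationRecorder.Migration.objects.filter(
    app='myapp', name='0005_add_log_table'
).delete()
# then run ./manage.py migrate again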
| 0 | 0 | 0 | 0 |
2016-05-14T19:35:00.000
| 1 | 0.197375 | false | 37,231,032 | 0 | 0 | 1 | 1 |
I am trying to get a project from one machine to another. This project contains a massive log db table. It is too massive. So I exported and imported all db tables except this one via phpmyadmin.
Now when I run the migrate command I expect Django to create everything missing, but it does not.
How to make django check for and create missing db tables?
What am I missing, why is it not doing this? I feel like the old syncdb did the job. But the new --run-syncdb does not.
Thank you for your help.
|
python tools visual studio - step into not working
| 49,196,161 | 1 | 2 | 2,148 | 0 |
python,visual-studio,ptvs
|
A workaround rather than a full answer.
I encountered this problem while importing my own module which ran code (which was erroring) on import.
By setting the imported module as the startup script, I was able to step through its startup code and debug.
My best guess is that Visual Studio 2015 decided the imported module was a Python standard library module, but it really isn't viable to turn on the 'debug standard library' option, as many standard library modules generate errors on import themselves.
| 0 | 0 | 0 | 0 |
2016-05-15T15:55:00.000
| 2 | 0.099668 | false | 37,240,431 | 1 | 0 | 1 | 1 |
I am trying to debug a scrapy project , built in Python 2.7.1
in visual studio 2013.
I am able to reach breakpoints, but when I do step into/ step over
the debugger seems to continue the exceution as if I did resume (F5).
I am working with standard python launcher.
Any idea how to make the step into/over functionality work?
|
how to hide "py4j.java_gateway:Received command c on object id p0"?
| 37,252,533 | 36 | 22 | 6,581 | 0 |
python,pyspark,py4j
|
Using the logging module, run:
logging.getLogger("py4j").setLevel(logging.ERROR)
| 0 | 0 | 0 | 0 |
2016-05-16T11:11:00.000
| 3 | 1 | false | 37,252,527 | 0 | 0 | 1 | 1 |
Once logging is started at INFO level I keep getting a bunch of py4j.java_gateway:Received command c on object id p0 messages in my logs. How can I hide them?
|
django-admin command not working in Mac OS
| 37,266,854 | 0 | 4 | 13,509 | 0 |
python,django
|
You need to add the directory containing django-admin to your PATH variable and then restart the terminal.
| 0 | 1 | 0 | 0 |
2016-05-16T15:51:00.000
| 6 | 0 | false | 37,258,045 | 0 | 0 | 1 | 3 |
I started Django on Mac OS and after installing Django using pip, I tried to initiate a new project using the command django-admin startproject mysite. I get the error -bash: django-admin: command not found. I made a quick search on Google and haven't found any solution that works.
How do I start a new project in Django using django-admin?
|
django-admin command not working in Mac OS
| 48,470,351 | 0 | 4 | 13,509 | 0 |
python,django
|
I know I'm jumping in a little late, but my installations seem to all reside away from /usr/local/bin/... . What worked for me was adding an export path in bash_profile for my django installation.
This also made me realize that it was installed globally. From what I've heard, it's better to install django locally within venv as you work on different projects. That way each virtual environment can contain its own versions and dependencies for django (and whatever else you're using). Big thanks to @Arefe.
| 0 | 1 | 0 | 0 |
2016-05-16T15:51:00.000
| 6 | 0 | false | 37,258,045 | 0 | 0 | 1 | 3 |
I started Django on Mac OS and after installing Django using pip, I tried to initiate a new project using the command django-admin startproject mysite. I get the error -bash: django-admin: command not found. I made a quick search on Google and haven't found any solution that works.
How do I start a new project in Django using django-admin?
|
django-admin command not working in Mac OS
| 37,266,721 | 6 | 4 | 13,509 | 0 |
python,django
|
I solved the issue after reading a webpage about it.
In the Python shell, write the following:
>>> import django
>>> django.__file__
(just django also works)
It will show the installation location of Django.
Then create a symlink at /usr/local/bin/django-admin.py:
sudo ln -s <the complete path of django-admin.py> /usr/local/bin/django-admin.py
On Mac OS, the call then needs to be django-admin.py startproject mysite rather than django-admin startproject mysite
| 0 | 1 | 0 | 0 |
2016-05-16T15:51:00.000
| 6 | 1 | false | 37,258,045 | 0 | 0 | 1 | 3 |
I started Django on Mac OS and after installing Django using pip, I tried to initiate a new project using the command django-admin startproject mysite. I get the error -bash: django-admin: command not found. I made a quick search on Google and haven't found any solution that works.
How do I start a new project in Django using django-admin?
|
App engine vs Compute engine : django project
| 37,267,336 | 4 | 2 | 314 | 0 |
python,django,google-app-engine,google-compute-engine
|
DigitalOcean is IaaS (infrastructure as a service). I guess the corresponding offer from Google is Google Compute Engine, GCE.
Google App Engine is more like Heroku, a PaaS offer (platform as a service).
In practice, what is the difference between PaaS and IaaS?
IaaS: if you have a competent system administrator on your team, he will probably choose IaaS - this kind of service gives him more control at the cost of more decisions and setup - but that is his job.
PaaS: if you are willing to pay more (like double) to avoid most of the management work and don't mind a more opinionated platform, then a PaaS may be the right product for you. You are a programmer and just want to deploy your code (and you are happy to pay extra to avoid dealing with operations).
Probably you can find a more elegant comparison if you google for it.
| 0 | 1 | 0 | 0 |
2016-05-17T03:13:00.000
| 1 | 1.2 | true | 37,266,479 | 0 | 0 | 1 | 1 |
I am trying to transfer my current server from DO to GCP but not sure what to use between App engine and Compute engine.
Currently using:
django 1.8
postgres (connected using psycopg2)
python 2.7
Thanks in advance!
|
Streaming a python game through a jsp
| 37,271,937 | 0 | 0 | 111 | 0 |
java,python,jsp,web,pygame
|
The way I see it you have two options:
If network connection is intermittent (not so frequent), you can use javascript (AJAX specifically) to make HTTP calls to the server whenever you need to access your database.
If you are expecting frequent (continuous) requests (i.e. multiplayer games), you would need to keep a connection alive using either
Persistent HTTP request
TCP Socket
Websocket : You probably want to use this if you want cross-browser support.
Let me know if you have any other questions regarding the options above.
| 0 | 0 | 0 | 0 |
2016-05-17T06:55:00.000
| 1 | 0 | false | 37,269,117 | 0 | 0 | 1 | 1 |
So I'm creating this game using pygame, and I have to put it up on a website, but it has to run server-side since the game is going to use a local database. The client will be able to enter a web page, click a button, and the game will run. I can't make the game run completely client-side because then the game won't be able to connect to my local database.
Any ideas?
|
Does Google App Engine keep code snapshots of past deployments?
| 37,287,718 | 1 | 0 | 26 | 0 |
python,google-app-engine
|
You can only see which version of your app was deployed and when - unless you deleted the older version.
| 0 | 1 | 0 | 0 |
2016-05-17T23:09:00.000
| 2 | 1.2 | true | 37,287,631 | 0 | 0 | 1 | 1 |
I've deployed code changes to a GAE app under development and broken the app. My IDE isn't tracking history to the point in time where things still worked, and I didn't commit my code to a repo as often as I updated the app, so I can't be sure what the state of the deployed code was at the point in time when it was working, though I do know a date when it was working. Is there a way to either:
Rollback the app to a specific date?
See what code was deployed at a specific deployment or date?
I see that deployments are logged - I'm hoping that GAE keeps a copy of code for each deployment allowing me to at least see the code or diffs.
|
Deploying app to Google App Engine (python)
| 44,471,398 | 0 | 0 | 52 | 0 |
google-app-engine-python
|
Create the project in your Google App Engine account by specifying the application identifier and title (say you have given your application identifier as helloworld).
Go to the Google App Engine Launcher and make sure the application name in your app.yaml matches the identifier you created in your Google App Engine account, then deploy it.
| 0 | 1 | 0 | 0 |
2016-05-18T19:44:00.000
| 2 | 0 | false | 37,308,794 | 0 | 0 | 1 | 1 |
I'm new to this and trying to deploy a first app to App Engine. However, when I try to, I get this message:
"This application does not exist (app_id=u'udacity')."
I fear it might have to do with the app.yaml file, so I'll just leave here what I have there:
application: udacity
version: 1
runtime: python27
api_version: 1
threadsafe: yes
handlers:
- url: /favicon.ico
  static_files: favicon.ico
  upload: favicon.ico
- url: /.*
  script: main.app
libraries:
- name: webapp2
  version: "2.5.2"
Thanks in advance.
|
Django sending xml requests at the same time
| 37,311,205 | 3 | 1 | 112 | 0 |
python,xml,django
|
You might want to consider using PycURL or Twisted. These should have the asynchronous capabilities you're looking for.
| 0 | 0 | 1 | 0 |
2016-05-18T22:33:00.000
| 1 | 1.2 | true | 37,311,172 | 0 | 0 | 1 | 1 |
Using the form I create several strings that look like XML data. One part of these strings I need to send to several servers using urllib, and another part to a SOAP server, for which I use the suds library. When I receive the responses, I need to compare all of this data and show it to the user. There are nine of these servers and the number can grow. When I make these requests successively, it takes a lot of time. Given this, my question is: is there some Python library that can make different requests at the same time? Thank you for your answer.
|
How do I add authentication/security to access HBase using happybase?
| 40,366,307 | 0 | 0 | 410 | 0 |
python,python-3.x,hbase,thrift,happybase
|
Thrift servers are generally only run in a trusted network. That said, Thrift can run over SSL, but support in happybase is limited because no one has stepped up to properly design and implement an API for it. Feel free to contribute.
| 0 | 0 | 0 | 1 |
2016-05-19T01:22:00.000
| 1 | 0 | false | 37,312,465 | 0 | 0 | 1 | 1 |
I'm using happybase to access HBase. However, the only parameter I need is the host name. How does Thrift work without authentication? How do I add security to my code?
|
Django with uwsgi(multiple progress and multiple threading): How to run a function when server exits?
| 38,935,407 | 0 | 0 | 641 | 0 |
python,django,uwsgi,grpc
|
As of grpcio 0.15.0 clients no longer need to be closed.
| 0 | 0 | 0 | 0 |
2016-05-19T08:59:00.000
| 1 | 0 | false | 37,318,490 | 0 | 0 | 1 | 1 |
I am writing a Django project with uwsgi (multiple processes and multiple threads) using grpc.
Right now, when the server exits, the grpc Python client cannot be closed because it uses threading and overrides the exit function. So I have to write a cleanup function to close the grpc client when the uwsgi server is reloaded or exits, and I wish to organise it so that this function is called automatically when the server quits.
Any help would be greatly appreciated.
|
Which Role to choose for an User
| 37,330,762 | 0 | 0 | 29 | 0 |
python,authorization,access-control,rbac
|
From a UX standpoint, Option A is better. If Abe has to perform a variety of tasks, then he may have to log out and log in multiple times under different roles, increasing his cognitive load and potentially delaying his tasks.
From a development standpoint, Option B may be slightly easier, if only because with Option A you must now track the user's permissions in a slightly more complex way on the backend, but I'd consider this largely negligible.
I personally would go with Option A for these reasons.
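A minimal sketch of Option A's server-side check; the role/permission structures here are my own assumptions, not any specific framework's API:
def user_may(user, operation):
    # look at every role the user holds and allow the action if any role grants it
    return any(operation in role.permissions for role in user.roles)

if user_may(abe, 'check_oil_change'):
    perform_oil_change_check()   # hypothetical handler for the mechanic-only operation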
| 0 | 0 | 0 | 0 |
2016-05-19T17:51:00.000
| 1 | 0 | false | 37,330,662 | 0 | 0 | 1 | 1 |
In an RBAC an User can have multiple roles. Say, an User named Abe is part of couple of Roles, Driver and Mechanic. Abe wants to check whether the truck needs an OilChange (an operation only Mechanics can do)
Option A) When Abe logs in to the application (mobile app or webpage), the backend authenticates Abe and determines that he has two roles. The application iteratively checks each of the Roles and see which Role has the permission to do that operation.
Option B) When Abe logs in to the application (mobile app or webpage), he choose the Role along with it. And backend authenticates Abe and uses the Role sent by Abe to check whether he has the permission to do that operation.
Or is there a better way to chose a Role for an action performed by the User?
Thanks.
|
Dynamically update static webpage with python script
| 37,479,489 | 0 | 0 | 1,371 | 0 |
javascript,jquery,python,raspberry-pi,rfid
|
To update page data without delay you need to use websockets.
There is no need to use heavy frameworks.
Once the page is loaded for the first time, you open a websocket with JS and listen to it.
Every time you read a tag, you post all the necessary data to this open socket and it instantly appears on the client side.
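A minimal server-side sketch using the asyncio websockets package (an assumption - any websocket server would do); read_tag() and lookup_attendance() stand in for your existing RFID and postgres logic:
import asyncio
import json
import websockets

async def handler(websocket, path):
    loop = asyncio.get_event_loop()
    while True:
        tag = await loop.run_in_executor(None, read_tag)   # blocking RFID read, kept off the event loop
        data = lookup_attendance(tag)                      # your postgres query
        await websocket.send(json.dumps(data))             # the page's JS fades this in and out

start_server = websockets.serve(handler, "0.0.0.0", 8765)
asyncio.get_event_loop().run_until_complete(start_server)
asyncio.get_event_loop().run_forever()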
| 0 | 0 | 0 | 0 |
2016-05-20T07:43:00.000
| 3 | 0 | false | 37,340,848 | 0 | 0 | 1 | 1 |
So I'm using a Raspberry Pi 2 with a rfid scanner and wrote a script in python that logs people in and out of our attendance system, connects to our postgresql database and returns some data like how much overtime they have and whether their action was a login or logout.
This data is meant to be displayed on a very basic webpage (that is not even on a server or anything) that just serves as a graphical interface to display said data.
My problem is that I cannot figure out how to dynamically display that data that my python script returns on the webpage without having to refresh it. I'd like it to simply fade in the information, keep it there for a few seconds and then have it fade out again (at which point the system becomes available again to have someone else login or logout).
Currently I'm using BeautifulSoup4 to edit the Html File and Chrome with the extension "LivePage" to then automatically update the page which is obviously a horrible solution.
I'm hoping someone here can point me in the right direction as to how I can accomplish this in a comprehensible and reasonably elegant way.
TL;DR: I want to display the results of my python script on my web page without having to refresh it.
|
Can I tell where my django app is mounted in the url hierarchy?
| 37,352,987 | 0 | 1 | 33 | 0 |
python,django,url-routing
|
After doing more research and talking with coworkers, I realized that reverse does exactly what I need.
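A minimal sketch, assuming the destination view's URL pattern is named 'destination_endpoint' (the name is an assumption):
from django.core.urlresolvers import reverse   # django.urls.reverse in newer Django versions
from django.shortcuts import redirect

def endpoint_one(request):
    # reverse() builds the full path, including the /some/other/namespace/MY_APP/ prefix,
    # so the handler never needs to know where the app is mounted
    return redirect(reverse('destination_endpoint'))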
| 0 | 1 | 0 | 0 |
2016-05-20T16:15:00.000
| 1 | 0 | false | 37,351,300 | 0 | 0 | 1 | 1 |
I need to redirect my clients to another endpoint in my django app. I know I can use relative urls with request.build_absolute_uri() to do this, but I am searching for a generic solution that doesn't require the redirecting handler to know its own place in the URL hierarchy.
As an example, I have handlers at the following two URLs:
https://example.com/some/other/namespace/MY_APP/endpoint_one
https://example.com/some/other/namespace/MY_APP/foo/bar/endpoint_two
Both handlers need to redirect to this URL:
https://example.com/some/other/namespace/MY_APP/baz/destination_endpoint
I would like for endpoint_one and endpoint_two to both be able to use the exact same logic to redirect to destination_endpoint.
My app has no knowledge of the /some/other/namespaces/ part of the URL, and that part of the URL can change depending on the deployment (or might not be there at all in a development environment).
I know I could use different relative urls from each endpoint, and redirect to the destination URL. However, that required that the handlers for endpoint_one and endpoint_two know their relative position in the URL hierarchy, which is something I am trying to avoid.
|
Allowing a Django-admin user to add an to a dropdown
| 37,629,111 | 0 | 1 | 728 | 0 |
python,django,django-forms,django-admin
|
There is only one requirement - the select field must get a list of values or a list of tuples (value, description); saving those options is the second thing.
How you do this is up to you - you need to find a solution that fits your needs and skills.
The simplest is to have a secondary table, referenced from the primary table as a foreign key - you can create an admin for this model very quickly and it will work straightforwardly (problems will start when you get a lot of choices...); a sketch of this option follows below.
Another option is to use any Django module that provides "dynamic settings" - some of them have options to store lists, some of them provide only simple objects, but even then you can store comma-separated lists in a text field.
Another option is to store the data in files (even a .py file that can be imported).
Each of those options will be fine; you have the tools, now - choose wisely!
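A minimal sketch of the foreign-key option, with model and field names of my own choosing:
from django.db import models

class EnquiryType(models.Model):
    text = models.CharField(max_length=100)

    def __str__(self):
        return self.text

class Enquiry(models.Model):
    enquiry_type = models.ForeignKey(EnquiryType)  # a ModelForm renders this as a dropdown
    # ... the rest of your fields ...

Registering EnquiryType in the admin then lets staff add new dropdown options without code changes.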
| 0 | 0 | 0 | 0 |
2016-05-22T01:35:00.000
| 1 | 0 | false | 37,369,691 | 0 | 0 | 1 | 1 |
I have a contact form that is posting values to a model called Enquiry. The Enquiry model has an attribute named enquiry_type which is a dropdown select box on the form. Current options are 'general enquiry' & 'request a call back'.
My question is what is the best practice to allow an admin to add to this dropdown? I was thinking of creating another model named EnquiryType with an attribute .text, & iterate through this in the form? This would allow an admin to create new objects with a .text value of their choice. E.g they could add 'Request an email quote' to the dropdown.
Is this correct thinking or is there a simpler/more accepted protocol? Sorry Im still new to Django!
|
Django-Channels - /admin/ portal not displaying new models created
| 37,388,078 | 1 | 1 | 205 | 0 |
python,django,django-models,django-channels
|
As mentioned by knbk, restarting the worker processes made it reflect the changes on my Admin portal. That was the only thing I hadn't tried.
| 0 | 0 | 0 | 0 |
2016-05-22T12:11:00.000
| 1 | 0.197375 | false | 37,374,206 | 0 | 0 | 1 | 1 |
I have implemented django-channels. Earlier I was using Apache to serve the Django application, but now Channels uses Daphne (a server) to serve my application. After adding two new models to the models.py file, I migrated the changes to the database. I also registered the models in the admin.py file.
Even so, the models are not showing up in the Django-admin panel.
I tried the following:
Stopped Daphne process.
Started Apache server. The Admin panel started showing the new models.
Stopped Apache server. Started Daphne on port80. This time Admin panel did not show the new models.
I am wondering what might be the case. As far as I can guess, whenever the application is served by Apache, updated files are used. Whereas, whenever the application is served by Django-Channels (Daphne), the old configurations (without the new models) are used.
Would like all the help to solve this issue. How can I make Django-Channels(Daphne) reflect the changes, the new models in my Django Admin console.
|
Which base class to inherit if it's a common class in odoo9
| 37,405,131 | 1 | 1 | 57 | 0 |
python,openerp
|
If you are going to create a normal Odoo module then you must inherit models.Model.
If you are going to create an Odoo module which will handle POST or GET requests from a web service then you must use a controller (http.Controller).
If you are going to create an Odoo module for another module, and this module is a wizard, then you must use a transient model, etc.
Also, if you need to, you can make a plain class and use it in your module, but with the detail in your question I can't tell you more.
| 0 | 0 | 0 | 0 |
2016-05-24T01:25:00.000
| 1 | 1.2 | true | 37,402,998 | 0 | 0 | 1 | 1 |
I'm using odoo9 for my project.
In my project, there is a common Excel-handling class that fills in the Excel template and outputs the result, and here is my question: which base class should it inherit? models.Model or http.Controller, or nothing?
|
Submit form on internet website
| 37,421,286 | 0 | 0 | 55 | 0 |
python
|
You can use selenium with PhantomJS to do this without the browser opening. You have to use the Keys portion of selenium to send data to the form to be submitted. It is also worth noting that this method will not work if there are captchas on the form.
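A minimal sketch, assuming PhantomJS is installed and using made-up form field names and URL:
from selenium import webdriver

driver = webdriver.PhantomJS()                     # headless, so no browser window opens
driver.get("https://example.com/sell/new")
driver.find_element_by_name("title").send_keys("My item")
driver.find_element_by_name("price").send_keys("10.00")
driver.find_element_by_name("submit").click()      # submits the form
driver.quit()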
| 0 | 0 | 1 | 0 |
2016-05-24T18:01:00.000
| 3 | 0 | false | 37,420,756 | 0 | 0 | 1 | 1 |
I want to build a Python script to submit forms on internet websites, such as a form to automatically publish an item on a site like eBay.
Is it possible to do this with BeautifulSoup, or is that only for parsing websites?
Is it possible to do it with selenium, but quickly, without really opening the browser?
Are there any other ways to do it?
|
Celery PeriodicTask per user
| 37,637,827 | 1 | 2 | 341 | 0 |
python,django,celery
|
The periodic task scheduler in celery is not designed to handle thousands of scheduled tasks, so from a performance perspective a much better solution is to have one task running at the smallest interval (e.g. if you allow users to schedule daily, weekly or monthly - running the task daily is enough); a sketch follows below.
Such an approach is also more stable - every time the schedule changes, all of the schedule records are reloaded.
Plus it is more secure, because you do not expose or use any internal mechanisms for task execution.
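A minimal sketch of the single-dispatcher idea; UserTaskConfig, is_due_now() and run_user_task are placeholders for your own models/tasks, and the dispatcher itself is the one entry you register in CELERYBEAT_SCHEDULE (or app.conf.beat_schedule):
from celery import shared_task

@shared_task
def dispatch_user_tasks():
    # one periodic task for everyone: decide per user whether their task is due right now
    for config in UserTaskConfig.objects.select_related('user'):
        if config.is_due_now():                  # the user's own daily/weekly/monthly setting
            run_user_task.delay(config.user_id)  # queue the real per-user async task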
| 0 | 1 | 0 | 0 |
2016-05-25T13:34:00.000
| 1 | 0.197375 | false | 37,438,867 | 0 | 0 | 1 | 1 |
I'm working on a project whose main feature will be periodically running one type of async task for each user. Every user will be able to configure the task (running daily, weekly etc. at a specified time). Also, the task will use some data stored by the user. Now I'm wondering which approach would be better: allow users to create their own PeriodicTask (using some restricted endpoint, of course), or create a single PeriodicTask (for example running every 5 minutes) which will iterate over all users and determine whether the task should be queued or not for the current user? I think I will use AMQP as the broker.
|
PyDev - how to include new apps in the project
| 37,478,375 | 0 | 0 | 220 | 0 |
python,django,eclipse
|
I usually use the context menu (right click on project in project explorer) and choose django/start new app.
| 0 | 0 | 0 | 0 |
2016-05-25T19:43:00.000
| 1 | 0 | false | 37,446,282 | 0 | 0 | 1 | 1 |
I have created a new app inside a django project using the command line: python manage.py startapp name and I'm using PyDev as my IDE for the first time.
My problem is that I can't add this app to the project so I can begin to code. I have been reading some questions and answers but so far I couldn't find an answer. Can you help me, please?
I'm using eclipse mars and I installed PyDev using the market place.
|
Fixing request entity too large error under django/apache/mod_wsgi for file uploads
| 37,495,881 | 2 | 1 | 1,167 | 0 |
python,django,apache,mod-wsgi,django-filer
|
That would indicate that you have the LimitRequestBody directive set to a very small value in the Apache configuration. This directive wouldn't normally be set at all when using standard Apache. Even if you happen to be using mod_wsgi-express, the default there is 10MB. So it must have been overridden to a smaller value.
| 0 | 0 | 0 | 0 |
2016-05-26T15:16:00.000
| 1 | 0.379949 | false | 37,464,886 | 0 | 0 | 1 | 1 |
After much gnashing of teeth I found out my uploads were working, it's just that I was getting a 413 Request Entity Too Large error I wasn't seeing. The Django app is running under Apache mod_wsgi, and I am no Apache guru, so I am not sure what I need to set to handle larger files. I tried to google it, but it was unclear to me whether it was a timeout issue or a file size restriction issue. It's also not clear whether it is a setting in my settings.py file. I currently cannot upload anything over about 1MB. (That doesn't even sound right for all the defaults I've read.) Has anyone done this before who can give some insight?
|
Django + Pythonanywhere: How to disable Debug Mode
| 37,480,695 | 4 | 2 | 1,288 | 0 |
django,pythonanywhere
|
I figured it out, thanks for the hint Mr. Raja Simon.
In my PythonAnywhere dashboard, on the Web tab, I set something like this:
URL: /media/
Directory: /home//media_cdn
(media_cdn is where my images are located.)
| 0 | 0 | 0 | 0 |
2016-05-27T09:38:00.000
| 1 | 1.2 | true | 37,480,048 | 0 | 0 | 1 | 1 |
I am using Django and PythonAnywhere, and I want to set DEBUG to False. When I set it to False and set ALLOWED_HOSTS = ['*'], the site works fine, but the problem is that the media (the images) is not displayed. Has anyone encountered this and knows how to resolve it?
|
Route testing with Tornado
| 37,504,714 | 3 | 1 | 129 | 0 |
python,python-3.x,tornado,pytest
|
No, it is not currently possible to test this in Tornado via any public interface (as of Tornado version 4.3).
It's straightforward to avoid spinning up a server, although it requires a nontrivial amount of code: the interface between HTTPServer and Application is well-defined and documented. The trickier part is the other side: there is no supported way to determine which handler will be invoked before that handler is invoked.
I generally recommend testing routing via end-to-end tests for this reason. You could also store your URL route list before passing it into Tornado, and do your tests against that - the internal logic of "take the first regex match" is pretty easy to replicate.
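A minimal sketch of that last approach; the handlers are hypothetical and the route list is the same one you would pass to tornado.web.Application:
import re

routes = [
    (r"/users/(\d+)", UserHandler),
    (r"/users/(\d+)/orders", OrderHandler),
]

def resolve(path):
    # mimic Tornado's "first regex match wins" rule against the stored list
    for pattern, handler in routes:
        if re.match(pattern + "$", path):
            return handler
    return None

assert resolve("/users/42") is UserHandler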
| 0 | 1 | 0 | 1 |
2016-05-28T23:09:00.000
| 1 | 1.2 | true | 37,504,566 | 0 | 0 | 1 | 1 |
I'm new to Tornado, and working on a project that involves some rather complex routing. In most of the other frameworks I've used I've been able to isolate routing for testing, without spinning up a server or doing anything terribly complex. I'd prefer to use pytest as my testing framework, but I'm not sure it matters.
Is there a way to, say, create my project's instance of tornado.web.Application, and pass it arbitrary paths and assert which RequestHandler will be invoked based on that path?
|
Django py.test run on real database?
| 37,513,426 | 0 | 1 | 1,804 | 1 |
python,django,pytest
|
Yeah, you can override the settings in setUp:
set the real database for the tests and load your database fixtures... but I don't think that's good practice, since you want to run your tests without modifying your "real" app environment.
You should try pytest-django.
With this lib you can reuse, create, and drop your test databases.
| 0 | 0 | 0 | 0 |
2016-05-29T07:46:00.000
| 3 | 0 | false | 37,507,458 | 0 | 0 | 1 | 1 |
py.test sets up a test database.
I'd like to use the real database set in settings.py file.
(Since I'm on test machine with test data already)
Would it be possible?
|
How can I set QGroupBox's title with HTML expressions? (Python)
| 37,535,825 | 1 | 1 | 372 | 0 |
python,html,qt,groupbox
|
QGroupBox's title property does not support HTML. The only customization you can do through the title string (besides the text itself) is the addition of an ampersand (&) for keyboard accelerators.
In short, unlike QLabel, you can't use HTML with QGroupBox.
| 1 | 0 | 0 | 0 |
2016-05-30T13:38:00.000
| 3 | 1.2 | true | 37,527,124 | 0 | 0 | 1 | 1 |
I want to set my QGroupBox's title with HTML expressions in python program,
e.g. :
ABC. (subscript)
Does anybody have an idea how to do this?
|
Flask get viewport size
| 49,279,779 | 0 | 4 | 4,664 | 0 |
python,html,flask
|
Rushy already provided a correct answer, but when you want to use flask and bokeh (I am in the same position right now) responsive design does not help in all cases (I am using a bokeh-gridplot and don't want that if accessed from a mobile device).
I will append "/mobile" to the links from my home page to the plot page on mobile devices (by checking the screen width with css) and then make the plot layout depend on whether "/mobile" was appended to the link.
Obviously this has some drawbacks (if someone on desktop sends a link to someone on mobile or the other way around), but as my page is login-protected, they will be redirected to the login page in most of the cases anyways (if they are not already logged-in) and need to go to the plot-page manually.
Just in case this helps someone.
| 0 | 0 | 0 | 0 |
2016-05-30T14:07:00.000
| 3 | 0 | false | 37,527,706 | 0 | 0 | 1 | 1 |
A short question: is there a way to get my browser's viewport size from Flask?
I'm designing a page for many devices, for example: normal PC, iphone etc...
Thank you, FFodWindow
|
Odoo module A depends of module B which depends of module A
| 37,555,901 | 0 | 0 | 806 | 0 |
python-2.7,dependencies,openerp
|
Maybe this helps you:
The auto_install flag means that the module will be automatically installed as soon as all its dependencies are satisfied. It can be set in the __openerp__.py file. Its default value is False.
Add this flag to both modules.
So you can create a dummy module C. Add it as a dependency of A and B. All the rest of dependencies of A and B should be in C. Then, if you install C, both modules are installed at the same time.
| 0 | 0 | 0 | 0 |
2016-05-31T16:37:00.000
| 1 | 0 | false | 37,551,217 | 1 | 0 | 1 | 1 |
I'd like to know if there is any way to make a module A depend on module B if module B depends on module A - like having them installed at the same time? When I try, neither is selectable in the module list.
If not, is it possible to merge both modules easily?
I have to add something which implies this inter-dependency. Will I have to rewrite every line of both modules into a single one? I would be surprised, since using different modules makes development easier and more structured, and Odoo seems to be aware of it, which is why I'm asking this question even though I found nothing about it.
|
Thumbor On Heroku causes heroku time out when fetching URL returns 404
| 37,598,908 | 0 | 0 | 178 | 0 |
python,heroku,amazon-s3,timeout,thumbor
|
This was an issue in the tc_aws library we are using. It wasn't executing callback functions when a 404 is returned from S3. We were at version 2.0.10. After upgrading the library, the problem is fixed.
| 0 | 0 | 0 | 0 |
2016-05-31T20:03:00.000
| 1 | 1.2 | true | 37,554,692 | 0 | 0 | 1 | 1 |
We are hosting our thumbor image resizing service on Heroku, with image store in aws s3. The Thumbor service access s3 via the aws plugin.
Recently we observe a behavior on our thumbor service which I don't understand.
Use case: our client application was sending a resize request to resize an image that does not exist in aws S3.
expected behavior:
the thumbor service fetch image in s3.
s3 returns 404 not found
thumbor return 404 to heroku, the router returns 404 to client app.
what we observe:
under certain situations (I cannot reproduce this consistently),
S3 returns 404 but thumbor does not let the Heroku router know. As a result Heroku waits for 30s and returns the request as a 503 timeout.
The following are the logs:
2016-05-31T19:38:15.094468+00:00 app[web.1]: 2016-05-31 19:38:15 thumbor:WARNING ERROR retrieving image from S3 [bucket]/3C6A3A84-F249-458B-9EFA-BF9BC863874B: {'ResponseMetadata': {'HTTPStatusCode': 404, 'RequestId': '3D61A8CBB187D846', 'HostId': 'C1qYC9Au42J0Salt1SVlCkcvcrKcQv4dltwOCdwGNF1TUFScWpkHb1qC++ZBJ0JzVqQlXW0xONU='}, 'Error': {'Key': '[bucket]/3C6A3A84-F249-458B-9EFA-BF9BC863874B', 'Code': 'NoSuchKey', 'Message': 'The specified key does not exist.'}}
2016-05-31T19:38:14.777549+00:00 heroku[router]: at=error code=H12 desc="Request timeout" method=GET path="/bALh_vgGXd7e_J7kZ0GhyE_lhZ0=/150x150/[bucket]/3C6A3A84-F249-458B-9EFA-BF9BC863874B" host=heroku.app.com request_id=67d87ea3-8010-4fbe-8c29-b2b7298f1cbc fwd="54.162.233.176,54.240.144.43" dyno=web.1 connect=5ms service=30000ms status=503 bytes=0
I am wondering if anyone can help to understand why thumbor hangs?
Many thanks in advance!
|
Sharing URL generation from uuid4?
| 37,651,587 | 1 | 0 | 491 | 0 |
python,django,url,sharing
|
As Seluck suggested I decided to go with base64 encoding and decoding:
In the model my "link" property is now built from the standard url + base64.urlsafe_b64encode(str(media_id))
The url pattern I use to match the base64 pattern:
base64_pattern = r'(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$'
And finally in the view we decode the id to load the proper data:
media_id = base64.urlsafe_b64decode(str(media_id))
media = Media.objects.get(pk=media_id)
| 0 | 0 | 1 | 0 |
2016-06-01T12:36:00.000
| 2 | 1.2 | true | 37,568,811 | 0 | 0 | 1 | 1 |
I'm having a bit of a problem figuring out how to generate user friendly links to products for sharing.
I'm currently using /product/{uuid4_of_said_product}
Which is working quite fine - but it's a bit user unfriendly - it's kind of long and ugly.
And I do not wish to use an id as it would allow users to "guess" products. Not that that is too much of an issue - but I would like to avoid it.
Do you have any hints on how to generate unique, user friendly, short sharing urls based on the unique item id or uuid?
|
clone url of a pull request
| 37,578,664 | 0 | 0 | 107 | 0 |
python,github,github3.py
|
github3.pull_request(owner, repo, number).as_dict()['head']['repo']['clone_url']
| 0 | 0 | 1 | 0 |
2016-06-01T20:32:00.000
| 1 | 1.2 | true | 37,578,259 | 0 | 0 | 1 | 1 |
I'm trying to get the clone URL of a pull request. For example in Ruby using the Octokit library, I can fetch it from the head and base like so, where pr is a PullRequest object: pr.head.repo.clone_url or pr.base.repo.clone_url.
How can I achieve the same thing using github3.py?
|
How to make a field immutable after creation in MongoDB?
| 38,043,469 | 3 | 2 | 3,632 | 1 |
python,json,node.js,mongodb,immutability
|
IMHO there is no known method to prevent updates inside Mongo itself.
Even though you can control app behavior, someone will still be able to make this update outside the app. Mongo doesn't have triggers - which in the SQL world can act as data guards and prevent field changes.
As you're not using an ODM, all you can have is a CQRS-style pattern which will allow you to control app behavior and prevent such updates.
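A minimal application-level guard (a sketch, not a server-side guarantee): whitelist the $set payload instead of trying to blacklist every syntax variant, which also catches dot notation such as "_creator_id.x":
PROTECTED = {"_id", "_creator_id"}

def safe_update(collection, oid, payload):
    updates = {k: v for k, v in payload.items()
               if k.split(".", 1)[0] not in PROTECTED}
    if updates:
        collection.update_one({"_id": oid}, {"$set": updates})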
| 0 | 0 | 0 | 0 |
2016-06-01T23:03:00.000
| 2 | 0.291313 | false | 37,580,165 | 0 | 0 | 1 | 1 |
Use case: I'm writing a backend using MongoDB (and Flask). At the moment this is not using any ORM like Mongoose/Mongothon. I'd like to store the _id of the user which created each document in the document. I'd like it to be impossible to modify that field after creation. The backend currently allows arbitrary updates using (essentially) collection.update_one({"_id": oid}, {"$set": request.json})
I could filter out the _creator_id field from request.json (something like del request.json["_creator_id"]) but I'm concerned that doesn't cover all possible ways in which the syntax could be modified to cause the field to be updated (hmm, dot notation?). Ideally I'd like a way to make fields write-once in MongoDB itself, but failing that, some bulletproof way to prevent updates of a field in code.
|
Migrate Stock_quants using stock move openerp7 odoo 8
| 37,652,714 | 0 | 0 | 156 | 0 |
python,openerp,database-migration,odoo-8,talend
|
Here is my code
from openerp.osv import osv, fields
from openerp import SUPERUSER_ID
from openerp import netsvc


class generation(osv.osv):
    _name = "generation.models"
    _columns = {
        "visible": fields.boolean("Visible")
    }
    _defaults = {
        'visible': False
    }

    def get_stockmovedone(self, cr, uid, ids, Context=None):
        # ids of all done stock moves, oldest first
        cr.execute("SELECT id from stock_move where state='done' order by date ASC")
        liste_move_done = []
        res = cr.fetchall()
        for i in range(len(res)):
            liste_move_done.append(res[i][0])
        return liste_move_done

    def get_id_wkf_workitem(self, cr, uid, ids, Context=None):
        # workflow work items attached to the instances of those done stock moves
        cr.execute("select distinct(wkf_workitem.id) from wkf_workitem "
                   "where inst_id in (select distinct(wkf_instance.id) from stock_move "
                   "inner join wkf_instance "
                   "on stock_move.id=wkf_instance.res_id "
                   "where stock_move.state='done' "
                   "and wkf_instance.res_type='stock.move')")
        res = cr.fetchall()
        liste_wkf_work = []
        for i in range(len(res)):
            liste_wkf_work.append(res[i][0])
        return liste_wkf_work

    def get_id_wkf_inst(self, cr, uid, ids, Context=None):
        # workflow instances of the done stock moves
        cr.execute("select distinct(wkf_instance.id) from stock_move "
                   "inner join wkf_instance on stock_move.id=wkf_instance.res_id "
                   "where stock_move.state='done' and wkf_instance.res_type='stock.move'")
        res = cr.fetchall()
        liste_wkf_inst = []
        for i in range(len(res)):
            liste_wkf_inst.append(res[i][0])
        return liste_wkf_inst

    def update_leisyah(self, cr, uid, ids, Context=None):
        liste1 = self.get_id_wkf_inst(cr, uid, ids, Context)
        liste2 = self.get_id_wkf_workitem(cr, uid, ids, Context)
        liste_stockmove = self.get_stockmovedone(cr, uid, ids, Context)
        # reset the moves and their workflow records so they can be processed again
        for t in liste_stockmove:
            cr.execute("UPDATE stock_move SET state ='assigned' where id={}".format(t))
        for i in liste1:
            cr.execute("UPDATE wkf_instance SET state='active' where id={}".format(i))
        for j in liste2:
            cr.execute("UPDATE wkf_workitem SET act_id=62 where id={}".format(j))
        # replay action_done so the quants get (re)created by the v8 stock logic
        for r in liste_stockmove:
            netsvc.LocalService("workflow").trg_validate(
                SUPERUSER_ID, 'stock.move', r, 'action_done', cr)
        gener_obj = self.pool.get('generation.models')
        gener_obj.write(cr, uid, [ids[0]], {'visible': True}, context=Context)
| 0 | 0 | 0 | 0 |
2016-06-02T08:11:00.000
| 1 | 0 | false | 37,586,188 | 0 | 0 | 1 | 1 |
How to fill stock.quant and stock_quants_move_rel from stock_move when migrating from OpenERP 7 to Odoo 8?
I'm trying to reprocess stock moves with state 'done' to fill the stock quants, but I have a lot of data in stock_move.
|
Django get app name from file
| 37,592,383 | 4 | 0 | 463 | 0 |
python,django
|
Instead of "guessing" the table name using the default convention you should use Model.objects.model._meta.db_table to get the real name.
A model can override the default table name convention and this will break your code reusability...
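A minimal sketch, assuming a model called MyModel inside the reusable app:
from django.db import connection
from .models import MyModel

table = MyModel._meta.db_table                          # real table name, honouring any db_table override
with connection.cursor() as cursor:
    cursor.execute("SELECT COUNT(*) FROM %s" % table)   # table names cannot be bound as query parameters
    count = cursor.fetchone()[0]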
| 0 | 0 | 0 | 0 |
2016-06-02T12:46:00.000
| 1 | 0.664037 | false | 37,592,297 | 0 | 0 | 1 | 1 |
I need to execute some custom raw SQL in Django (1.9). Since tables in Django are prefixed with the app name I need to retrieve the app name. I want to use the same code in different apps later on, so I would like to get the app name in a soft coded way, just given the file the code resides in. What would be the best way to do this?
|
Odoo overriden function call order
| 37,606,530 | 2 | 3 | 1,781 | 0 |
python,openerp
|
Yes, it's a very interesting point, and no one can predict which method from which module will be called first, because Odoo manages a hierarchical structure for dependencies.
The calling pattern only comes into the picture when the method is called on the object (manually from code). If the write method is called from the UI (meaning the Sales Order is edited from the UI), then every write method written for that model is called, no matter which module it is in, and the sequence is LAST WRITTEN, CALLED FIRST (but only when the method is called from the UI).
So in your case Custom Module 1 and Custom Module 2 will be on the same level, and both have the same parent, Sale Order.
Sales Order => Custom Module 1 (write method override)
Sales Order => Custom Module 2 (write method override)
When the write method is called manually from code, it gives priority to the local module first and then calls the super method.
In that case, suppose the write method is called from Module 1; it may be that the write method of Module 2 is ignored, because Module 1 and Module 2 are on the same level (super will call the write method of the parent class). We have faced this issue many times in development: when methods are overridden in multiple modules that sit on the same level, the method of the other module is not necessarily called.
So when you need each module's method to be called, the modules must be in a hierarchy, not at the same level.
That is the main reason why the method will sometimes not be called for parallel modules.
Two things come into the picture here:
1) depends: the parent module (which decides the module hierarchy)
2) _inherit: where the methods and behaviours of the object are defined
Module 1 and Module 2 are not in each other's depends, so by hierarchy it is not guaranteed that the method will be called from both modules, no matter whether they are overriding the same method of the same model.
| 0 | 0 | 0 | 0 |
2016-06-02T14:40:00.000
| 1 | 1.2 | true | 37,594,983 | 0 | 0 | 1 | 1 |
It is my understanding that Odoo does not extend its models the way Python extends its classes (_inherit = 'model'), and that seems pretty reasonable. My question though is this:
If I have customModule1 that extends sale.order and overrides the write method adding some functionality, and I later install customModule2 which in turn extends the sale.order model and overrides the write method adding some functionality as well, I understand that all the versions of the write method will be called - but in what order?
Is write of customModule1 going to be called first when the client writes on the sale.order model? Or write of customModule2 ?
|
Is Google Cloud Datastore a Column Oriented NoSQL database?
| 37,609,672 | 3 | 1 | 429 | 1 |
python,google-app-engine,google-cloud-datastore
|
Strictly speaking, Google Cloud Datastore is a distributed multi-dimensional sorted map. As you mentioned, it is built on Google BigTable; however, BigTable is only the foundation.
From high level point of view Datastore actually consists of three layers.
BigTable
This is a necessary base for Datastore. Maps row key, column key and timestamp (three-dimensional mapping) to an array of bytes. Data is stored in lexicographic order by row key.
High scalability and availability
Strong consistency for single row
Eventual consistency for multi-row level
Megastore
This layer adds transactions on top of the BigTable.
Datastore
A layer above Megastore. Enables to run queries as index scans on BigTable. Here index is not used for performance improvement but is required for queries to return results.
Furthermore, it optionally adds strong consistency for multi-row level via ancestor queries. Such queries force the respective indexes to update before executing actual scan.
| 0 | 1 | 0 | 0 |
2016-06-02T21:41:00.000
| 1 | 0.53705 | false | 37,602,604 | 0 | 0 | 1 | 1 |
From my understanding BigTable is a Column Oriented NoSQL database. Although Google Cloud Datastore is built on top of Google’s BigTable infrastructure I have yet to see documentation that expressively says that Datastore itself is a Column Oriented database. The fact that names reserved by the Python API are enforced in the API, but not in the Datastore itself makes me question the extent Datastore mirrors the internal workings of BigTable. For example, validation features in the ndb.Model class are enforced in the application code but not the datastore. An entity saved using the ndb.Model class can be retrieved someplace else in the app that doesn't use the Model class, modified, properties added, and then saved to datastore without raising an error until loaded into a new instance of the Model class. With that said, is it safe to say Google Cloud Datastore is a Column Oriented NoSQL database? If not, then what is it?
|
Django-Nginx Patch request :405 Method \"METHOD_OTHER\" not allowed
| 37,630,789 | 0 | 2 | 642 | 0 |
python,django,nginx,django-rest-framework
|
OK, I tried to access the same code from a different network and it worked.
Probably it was a firewall issue with that particular wifi network.
| 0 | 0 | 0 | 0 |
2016-06-03T10:17:00.000
| 1 | 0 | false | 37,611,742 | 0 | 0 | 1 | 1 |
I am using django rest framework.
A PATCH on the API endpoint (users/user_id) works on the local Django server on my machine, but on the nginx development server it shows:
{"detail":"Method \"METHOD_OTHER\" not allowed."}
Do we need to change some settings in nginx?
|
python-social-auth and facebook login: what is the whitelist redirect url to include in fb configuration?
| 37,662,391 | 1 | 0 | 654 | 0 |
python-social-auth,django-socialauth
|
Include a URL to your website that is the absolute URL version of this relative URL:
/complete/facebook/
How to find this out?
Use the Chrome browser dev tools, enable "Preserve log", and try to log in to your app.
This question / answer is for django-social-auth but likely applies to python-social-auth too.
| 0 | 0 | 1 | 0 |
2016-06-06T16:24:00.000
| 1 | 1.2 | true | 37,662,390 | 0 | 0 | 1 | 1 |
I was getting this facebook login error:
URL Blocked
This redirect failed because the redirect URI is not
whitelisted in the app’s Client OAuth Settings. Make sure Client and
Web OAuth Login are on and add all your app domains as Valid OAuth
Redirect URIs.
Facebook login requires whitelisting of the call-back url.
What is the callback URL for django-social-auth or python-social-auth?
|
cron couldn't run Scrapy
| 37,671,502 | 0 | 0 | 108 | 0 |
python,cron,scrapy,crontab
|
Problem solved. Rather than running the crawl as root, use crontab -u user -e to create a crontab for user, and run as user.
| 0 | 0 | 0 | 1 |
2016-06-07T05:06:00.000
| 1 | 1.2 | true | 37,670,895 | 0 | 0 | 1 | 1 |
The code in crontab 0 * * * * cd /home/scrapy/foo/ && scrapy crawl foo >> /var/log/foo.log
It failed to run the crawl, as there was no log in my log file.
I tested using 0 * * * * cd /home/scrapy/foo/ && pwd >> /var/log/foo.log, it echoed '/home/scrapy/foo' in log.
I also tried PATH=/usr/local/bin and PATH=/usr/bin, but no success.
I'm able to run it manually by typing cd /home/scrapy/foo/ && scrapy crawl foo in command line.
Any thoughts? Thanks.
|
Migration for model inherited from Django
| 37,811,571 | 0 | 1 | 322 | 0 |
python,django,django-models,django-migrations
|
Based on my investigation and on comments provided, it looks like there is no solution for the moment.
| 0 | 0 | 0 | 0 |
2016-06-08T12:06:00.000
| 1 | 1.2 | true | 37,702,032 | 0 | 0 | 1 | 1 |
We're using a CustomFlatPage model derived from Django's FlatPage model in our application. It works fine, but FlatPage changed in Django 1.9, which triggers a migration for our CustomFlatPage. We'd like to have clean migrations, that is, a state where makemigrations doesn't create any migrations in either 1.8 or 1.9.
Is it possible to write a migration which would be compatible with Django 1.8 and 1.9 without any change to the CustomFlatPage model itself?
|
Custom User Model extending AbstractUser, authenticate return None
| 37,771,830 | 0 | 0 | 411 | 0 |
python,django,django-authentication,django-users,django-custom-user
|
SOLVED!
The problem was not extending AbstractUser; the problem was how I saved the user:
with CustomUser(username='myusername', password='mypassword') the password is saved as plain text, and the authenticate() function doesn't work with that.
It's necessary to save the user using UserCreationForm (or a form extending it); the most important part is the save() method, because it uses set_password(), which stores a hashed password so the authenticate() method works.
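A minimal sketch of creating the user correctly (assuming the model is called CustomUser, as in the question):
from django.contrib.auth import authenticate

user = CustomUser(username='myusername')
user.set_password('mypassword')   # stores a hashed password instead of plain text
user.save()

authenticate(username='myusername', password='mypassword')  # now returns the user instead of None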
| 0 | 0 | 0 | 0 |
2016-06-08T14:49:00.000
| 1 | 1.2 | true | 37,705,914 | 0 | 0 | 1 | 1 |
I want to create my custom user by extending AbstractUser, but when I try to authenticate my custom user, authenticate returns None.
When I create a CustomUser() it is stored in the database, but the password is not encrypted. Can I use the authenticate function of the default backend, or must I create a custom backend for my custom user?
i added:
AUTH_USER_MODEL = 'mysite.customuser'
I think that by extending AbstractUser my class doesn't have the same methods, or something is wrong.
|
Django object in view?
| 37,712,058 | 0 | 0 | 37 | 0 |
python,ajax,django
|
Unless you set up some kind of socket (for instance using Django Channels) you'll have to resend the entire request data on each request. This doesn't seem to be much of a problem for your use case, though.
| 0 | 0 | 0 | 0 |
2016-06-08T20:01:00.000
| 1 | 0 | false | 37,711,850 | 0 | 0 | 1 | 1 |
I'm using Django templates with HTML/JS to show the results of simulations in python/Anaconda. The simulation depends on the setting of different parameters. After the initial data is loaded (from files) and visualized by the first call of the page, the parameters can be chosen in textfields/dropdowns in the template. An AJAX-request sends the parameter to the view and retrieves an array with the results.
Do I need to send all the initial data with the requests everytime, or is it possible to store it in, for example, an attribute of an object in the view? Are examples avaiable?
|
pyAudio and PJSIP in a Virtual Machine
| 37,734,848 | 0 | 1 | 689 | 0 |
python,audio,amazon-ec2,pjsip,pyaudio
|
Alright, this isn't the most reliable solution but it does seem to work.
To start with, you must verify you have pulseaudio installed and working.
Use whatever package installer you need:
apt-get/yum/zypper pulseaudio pulseaudio-devel alsa-lib alsa-devel alsa-plugins-pulseaudio
pulseaudio --start
pacmd load-module module-null-sink sink_name=MySink
pacmd update-sink-proplist MySink device.description=MySink
This will allow you to pass audio around in your vm so that it can be sent out using pjsip.
If you dont have your own loopback written in python you can use:
pacmd load-module module-loopback sink=MySink
to pass audio back out. If you do have a loopback written you cannot use both.
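As a quick check from Python (a minimal sketch, assuming pyaudio is installed in the VM), you can list the devices pyaudio sees and confirm the pulse/null sink shows up before opening a stream:
import pyaudio

pa = pyaudio.PyAudio()
for i in range(pa.get_device_count()):
    info = pa.get_device_info_by_index(i)
    # print the device index, name and output channel count
    print(i, info.get("name"), info.get("maxOutputChannels"))
pa.terminate()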
| 0 | 1 | 0 | 0 |
2016-06-08T20:26:00.000
| 1 | 1.2 | true | 37,712,275 | 0 | 0 | 1 | 1 |
I am writing a SIP client in Python. I am able to make my script run on my computer just fine. It plays a wav file, grabs the audio, and then sends the audio out using a SIP session. I am having a very hard time getting this to run in an AWS EC2 VM. The VM is running SUSE 12.
There seem to be a lot of questions related to audio loopbacks and piping audio around, but I haven't found any that cover all of the ways I am having issues.
I have tried figuring out how to set one up using pacmd but haven't had any luck. I have Dummy Output and Monitor of Dummy Output as defaults, but that didn't work.
When I try to open the stream I still get a "no default output device" error.
What I am trying to find is a way to have a virtual sound card (I guess) that I can use for the channels on the SIP call and stream the wav file into.
Any advice or direction would be very helpful.
Thanks in advance
|
Django processes concurrency
| 37,775,758 | 0 | 0 | 35 | 0 |
django,python-3.x,concurrency
|
I ended up enforcing uniqueness at the database (model) level and catching the resulting IntegrityError in the code.
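A minimal sketch of that approach, with hypothetical model and field names (the real ones are not in the original post):
from django.db import IntegrityError, models

class FolderEntry(models.Model):
    # the unique constraint enforces uniqueness at the database level
    path = models.CharField(max_length=255, unique=True)

def add_entry(path):
    try:
        FolderEntry.objects.create(path=path)
    except IntegrityError:
        # another process already created the same entry; ignore it
        pass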
| 0 | 0 | 0 | 0 |
2016-06-09T14:59:00.000
| 1 | 1.2 | true | 37,729,594 | 0 | 0 | 1 | 1 |
I am running a Django app with 2 processes (Apache + mod_wsgi).
When a certain view is called, the content of a folder is read and the process adds entries to my database based on which files are new/updated in the folder.
When 2 such views execute at the same time, both see the new file and both want to create a new entry. I cannot manage to have only one of them write the new entry.
I tried to use select_for_update (inside transaction.atomic()) and get_or_create, but without any success (maybe I used them wrongly?).
What is the proper way of locking to avoid writing an entry with the same content twice with get_or_create?
|
Scrapy installed, but won't run from the command line
| 63,207,972 | 0 | 6 | 24,069 | 0 |
python,scrapy
|
Make sure you activate your virtual environment, that is, run
"Scripts\activate.bat"
| 0 | 0 | 0 | 0 |
2016-06-10T21:12:00.000
| 9 | 0 | false | 37,757,233 | 0 | 0 | 1 | 5 |
I'm trying to run a scraping program I wrote in Python using Scrapy on an Ubuntu machine. Scrapy is installed: I can import it in Python with no problem, and when I try pip install scrapy I get
Requirement already satisfied (use --upgrade to upgrade): scrapy in /system/linux/lib/python2.7/dist-packages
When I try to run scrapy from the command line, with scrapy crawl ... for example, I get
The program 'scrapy' is currently not installed.
What's going on here? Are the symbolic links messed up? Any thoughts on how to fix it?
|
Scrapy installed, but won't run from the command line
| 40,753,182 | 7 | 6 | 24,069 | 0 |
python,scrapy
|
I tried sudo pip install scrapy, but was promptly advised by Ubuntu 16.04 that it was already installed.
I had to first run sudo pip uninstall scrapy, then sudo pip install scrapy for it to install successfully.
After that you should be able to run scrapy.
| 0 | 0 | 0 | 0 |
2016-06-10T21:12:00.000
| 9 | 1 | false | 37,757,233 | 0 | 0 | 1 | 5 |
I'm trying to run a scraping program I wrote in Python using Scrapy on an Ubuntu machine. Scrapy is installed: I can import it in Python with no problem, and when I try pip install scrapy I get
Requirement already satisfied (use --upgrade to upgrade): scrapy in /system/linux/lib/python2.7/dist-packages
When I try to run scrapy from the command line, with scrapy crawl ... for example, I get
The program 'scrapy' is currently not installed.
What's going on here? Are the symbolic links messed up? Any thoughts on how to fix it?
|
Scrapy installed, but won't run from the command line
| 60,684,096 | 3 | 6 | 24,069 | 0 |
python,scrapy
|
I faced the same problem and solved it using the following method. I think scrapy was not usable by the current user.
Uninstall scrapy:
sudo pip uninstall scrapy
Install scrapy again using -H:
sudo -H pip install scrapy
It should then work properly.
| 0 | 0 | 0 | 0 |
2016-06-10T21:12:00.000
| 9 | 0.066568 | false | 37,757,233 | 0 | 0 | 1 | 5 |
I'm trying to run a scraping program I wrote in Python using Scrapy on an Ubuntu machine. Scrapy is installed: I can import it in Python with no problem, and when I try pip install scrapy I get
Requirement already satisfied (use --upgrade to upgrade): scrapy in /system/linux/lib/python2.7/dist-packages
When I try to run scrapy from the command line, with scrapy crawl ... for example, I get
The program 'scrapy' is currently not installed.
What's going on here? Are the symbolic links messed up? Any thoughts on how to fix it?
|