Title (string, 11-150) | A_Id (int64, 518-72.5M) | Users Score (int64, -42-283) | Q_Score (int64, 0-1.39k) | ViewCount (int64, 17-1.71M) | Database and SQL (int64, 0-1) | Tags (string, 6-105) | Answer (string, 14-4.78k) | GUI and Desktop Applications (int64, 0-1) | System Administration and DevOps (int64, 0-1) | Networking and APIs (int64, 0-1) | Other (int64, 0-1) | CreationDate (string, 23-23) | AnswerCount (int64, 1-55) | Score (float64, -1-1.2) | is_accepted (bool, 2 classes) | Q_Id (int64, 469-42.4M) | Python Basics and Environment (int64, 0-1) | Data Science and Machine Learning (int64, 0-1) | Web Development (int64, 1-1) | Available Count (int64, 1-15) | Question (string, 17-21k) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
CreateView and ListView from different apps on the same page.
| 35,943,837 | 0 | 0 | 188 | 0 |
python,django,django-templates,django-views
|
First of all, forget about apps here; they have nothing to do with anything. An app is just a collection of models and views; it has no relationship to what can be shown on a page.
Your issue is that a single view is exclusively responsible for rendering a page. Django calls a view in response to a request at a particular URL, and whatever is returned from there becomes the content of the page. There is no way to call multiple views from a URL.
Instead, you need to think about constructing your code in such a way that the view renders content from multiple places. There are various ways of doing this, such as including templates, using template tags or context processors, composing class-based views, etc.
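The composition idea can be sketched framework-free: one view function assembles its context from pieces owned by different "apps". All names below are hypothetical; in real Django you would do the same thing by overriding get_context_data on the DetailView (or by using an inclusion template tag).

```python
# Framework-free sketch: one view, content contributed by two "apps".
def recipe_detail_context(pk):
    return {"recipe": "recipe #%d" % pk}          # owned by the recipe app

def comment_form_context():
    return {"comment_form": "<form>...</form>"}   # owned by the comment app

def recipe_detail_view(pk):
    context = {}
    context.update(recipe_detail_context(pk))
    context.update(comment_form_context())        # single view, both sources
    return context

print(sorted(recipe_detail_view(3).keys()))  # ['comment_form', 'recipe']
```

The template then renders both context entries; the URL still maps to exactly one view.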
| 0 | 0 | 0 | 0 |
2016-03-11T15:04:00.000
| 1 | 1.2 | true | 35,943,500 | 0 | 0 | 1 | 1 |
I have two different apps, recipe and comment.
I have a DetailView in the recipe app which points to url(r'^(?P<pk>\d+)/$', RecipeDetailView.as_view(), name='recipe-detail') which is also in the recipe app url file.
I also have a CreateView in my views.py file in my comment app. How can I put this CreateView, which is in my comment app, into the same URL shown above? Do I do this in the template, or in the recipe views.py or urls.py file?
I have had no problems making views within one app; I am getting tripped up trying to show views across apps.
|
Open edX Dogwood problems
| 36,759,310 | 1 | 1 | 226 | 1 |
python,django,amazon-web-services,edx,openedx
|
This one works: ami-7de8981d (us-east). Log in with ssh as the 'ubuntu' user. Studio is on port 18010 and the LMS is on port 80.
| 0 | 0 | 0 | 0 |
2016-03-11T19:57:00.000
| 1 | 0.197375 | false | 35,948,834 | 0 | 0 | 1 | 1 |
I have installed Open edX (Dogwood) on an EC2 ubuntu 12.04 AMI and, honestly, nothing works.
I can sign up in studio, and create a course, but the process does not complete. I get a nice page telling me that the server has an error. However, the course will show up on the LMS page. But, I cannot edit the course in Studio.
If I sign out of Studio, I cannot log back in without an error. However, upon refreshing the page, I am logged in.
I can enable the search function and install the search app, but it doesn't show any courses and returns an error.
Can someone point me to an AMI that works with, or includes, Open edX? The Open edX documentation is worthless. Or, failing that, explain to me what I am missing when installing Open edX using the automated installation scripts from the documentation.
|
How to use virtualenv in Python project?
| 35,952,928 | 0 | 0 | 2,126 | 0 |
python,django,virtualenv
|
It's also good practice to make a requirements file (here called requires.txt, though requirements.txt is the usual convention) listing all your dependencies. If, for example, your project requires Flask and pymongo, create a file with:
Flask==<version number you want here>
pymongo==<version number you want here>
Then you can install all the necessary libraries by doing:
pip install -r requires.txt
Great if you want to share your project or don't want to remember every library you need in your virtualenv.
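To the original question of how to run scripts against the environment: activating the virtualenv puts its python first on PATH, so a plain python invocation uses the venv's interpreter and libraries. A quick sketch (the /tmp path is arbitrary):

```shell
# Create a throwaway virtualenv and confirm the active interpreter lives in it.
python3 -m venv /tmp/demo_env
. /tmp/demo_env/bin/activate
python -c "import sys; print(sys.prefix)"   # prints a path inside /tmp/demo_env
deactivate
```

Note that IDLE launched from the desktop won't see the venv; start it from the activated shell (or run your scripts directly) so the right interpreter is used.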
| 0 | 0 | 0 | 0 |
2016-03-12T01:06:00.000
| 2 | 0 | false | 35,952,511 | 1 | 0 | 1 | 1 |
I'm trying to use virtualenv in my new mainly Python project. The code files are located at ~/Documents/Project, and I installed a virtual environment in there, located at ~/Documents/Project/env. I have all my packages and libraries I wanted in the env/bin folder.
The question is, how do I actually run my Python scripts, using this virtual environment? I activate it in Terminal, then open idle as a test, and try
"import django"
but it doesn't work. Basically, how can I use the libraries installed in the virtual environment with my project when I run it, instead of the standard directories for installed Python libraries?
|
Python development: Server Handling
| 35,962,887 | 1 | 0 | 42 | 0 |
python,django,server
|
manage.py runserver is only meant to speed up your development process; it shouldn't be run on your production server. It's similar to PHP's newly introduced built-in server, php -S host:port.
Since you're coming from PHP you can use apache with mod_wsgi in order to serve your django application, there are a lot of tutorials online on how to configure it properly. You might want to read what wsgi is and why it's important.
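For context, mod_wsgi (or any WSGI server) just needs a WSGI callable; the wsgi.py that Django generates exposes one named application. A minimal standalone sketch of what that callable looks like:

```python
def application(environ, start_response):
    """Minimal WSGI app: the server calls this once per request."""
    body = b"Hello from WSGI"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]  # iterable of bytes
```

Apache's mod_wsgi is pointed at the module containing this callable via the WSGIScriptAlias directive; Django's real application object wraps all of your URLconf and middleware behind the same interface.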
| 0 | 0 | 0 | 0 |
2016-03-12T19:55:00.000
| 1 | 1.2 | true | 35,962,581 | 0 | 0 | 1 | 1 |
This might be a very dumb question, so please bear with me (there's also no code included either).
Recently, I switched from PHP to Python and fell in love with Django. Locally, everything works well.
However, how are these files accessed when on a real server?
Is the manage.py runserver supposed to be used in a server environment?
Do I need to use mod_python ?
Coming from PHP, one would simply use Apache or Nginx but how does the deployment work with Python/Django?
This is all very confusing to me, admittedly. Any help is more than welcome.
|
Is there a way to determine how long has an Amazon AWS EC2 Instance been running for?
| 36,037,353 | 4 | 4 | 3,416 | 0 |
python,amazon-web-services,amazon-ec2,cron,aws-cli
|
The EC2 service stores a LaunchTime value for each instance, which you can find with a DescribeInstances call. However, if you stop the instance and then restart it, this value is updated with the new launch time, so it's not a reliable way to determine how long the instance has been running since its original launch.
The only way I can think of to determine the original launch time would be to use CloudTrail (assuming you have it enabled for your account). You could search CloudTrail for the original launch event and this would have an EventTime associated with it.
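Once you have LaunchTime (boto3's describe_instances returns it as a timezone-aware datetime), the uptime arithmetic is simple. The function name below is my own, and the timestamps are fixed for illustration:

```python
from datetime import datetime, timezone

def hours_running(launch_time, now=None):
    """Hours elapsed since the instance's LaunchTime (aware datetimes)."""
    now = now or datetime.now(timezone.utc)
    return (now - launch_time).total_seconds() / 3600

# Worked example with fixed timestamps:
launch = datetime(2016, 3, 14, 18, 0, tzinfo=timezone.utc)
now = datetime(2016, 3, 15, 18, 30, tzinfo=timezone.utc)
print(hours_running(launch, now))  # 24.5
```

A cron-driven script would loop over the reservations from describe_instances, apply this check, and call terminate_instances on anything over the threshold.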
| 0 | 1 | 0 | 1 |
2016-03-15T18:21:00.000
| 2 | 0.379949 | false | 36,019,161 | 0 | 0 | 1 | 1 |
I am looking for a way to programmatically kill long running AWS EC2 Instances.
I did some googling around, but I can't seem to find a way to determine how long an instance has been running, so that I can write a script to delete the instances that have been running longer than a certain time period...
Anybody dealt with this before?
|
How can I launch an EMR using SPOT Block using boto?
| 46,980,003 | 1 | 4 | 454 | 0 |
python-2.7,boto,emr,boto3
|
According to the boto3 documentation, yes it does support spot blocks.
BlockDurationMinutes (integer) --
The defined duration for Spot instances (also known as Spot blocks) in minutes. When specified, the Spot instance does not terminate before the defined duration expires, and defined duration pricing for Spot instances applies. Valid values are 60, 120, 180, 240, 300, or 360. The duration period starts as soon as a Spot instance receives its instance ID. At the end of the duration, Amazon EC2 marks the Spot instance for termination and provides a Spot instance termination notice, which gives the instance a two-minute warning before it terminates.
Inside the LaunchSpecifications dictionary, you need to assign a value to BlockDurationMinutes. Note that the maximum value is 360 (6 hours) for a Spot block.
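A sketch of the instance-fleet portion you would pass as the Instances argument to boto3's EMR run_job_flow; the instance type and capacities are placeholders:

```python
# Placeholder fleet config; BlockDurationMinutes lives under LaunchSpecifications.
instance_fleets = [{
    "InstanceFleetType": "CORE",
    "TargetSpotCapacity": 2,
    "InstanceTypeConfigs": [{"InstanceType": "m4.large"}],
    "LaunchSpecifications": {
        "SpotSpecification": {
            "TimeoutDurationMinutes": 20,
            "TimeoutAction": "TERMINATE_CLUSTER",
            "BlockDurationMinutes": 360,  # 6 hours, the maximum for a Spot block
        }
    },
}]
```

You would then call emr_client.run_job_flow(..., Instances={"InstanceFleets": instance_fleets, ...}) with the rest of your cluster configuration.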
| 0 | 0 | 1 | 0 |
2016-03-16T08:02:00.000
| 2 | 0.099668 | false | 36,029,866 | 0 | 0 | 1 | 1 |
How can I launch an EMR cluster using a spot block (AWS) with boto? I cannot find any parameter like --block-duration-minutes in boto, and I am unable to find how to do this using boto3 either.
|
Receive data on server side with django-websocket-redis?
| 36,880,964 | 0 | 0 | 312 | 0 |
python,ios,django,sockets,websocket
|
You can achieve that by making periodic Ajax calls from the client to the server. From the documentation:
A client wishing to trigger events on the server side, shall use
XMLHttpRequests (Ajax), as they are much more suitable, rather than
messages sent via Websockets. The main purpose for Websockets is to
communicate asynchronously from the server to the client.
Unfortunately, I was unable to find a way to achieve this using just websocket messages.
| 0 | 0 | 1 | 0 |
2016-03-16T13:37:00.000
| 1 | 0 | false | 36,037,286 | 0 | 0 | 1 | 1 |
I'm working with the django-websocket-redis lib, which allows establishing websockets over uWSGI in a separate Django loop.
From the documentation I understand how to send data from the server through websockets, but not how to receive it.
Basically, I have a client and I want it to periodically send status updates to the server. What do I need to do to handle messages from the client on the server side, and what URL should the client use?
|
CFFI UserWarning: 'point_conversion_form_t' has no values explicitly defined;
| 54,451,026 | 1 | 6 | 2,949 | 0 |
python,scrapy
|
Downgrading to cffi==1.2.1 ended up being the solution for me.
| 0 | 0 | 0 | 0 |
2016-03-17T00:47:00.000
| 1 | 1.2 | true | 36,049,690 | 0 | 0 | 1 | 1 |
I'm getting the following warning when running a scrapy crawler:
C:\Users\dan\Anaconda2\envs\scrapy\lib\site-packages\cffi\model.py:526: UserWarning: 'point_conversion_form_t' has no values explicitly defined; next version will refuse to guess which integer type it is meant to be (unsigned/signed, int/long)
% self._get_c_name())
I hadn't been getting this in my previous Anaconda Python install on Windows 10, but I had to reset my environment and now I am.
It's not preventing the crawler from running, but it's kind of annoying. Can anyone tell me what might be causing this?
|
Django-cms with non-django-based project
| 36,055,522 | 0 | 0 | 52 | 0 |
java,python,angularjs,django,django-cms
|
No, it's not possible.
django CMS is a standard Django application, that requires a standard kind of Django environment. It can't do anything except as part of a Django project.
There's nothing to stop you configuring your web server so that some requests (by URL) go to the Django project while the others are handled by the Java backend, but this isn't integration; it's simply a form of wholly independent co-existence.
| 0 | 0 | 0 | 0 |
2016-03-17T00:54:00.000
| 1 | 0 | false | 36,049,768 | 0 | 0 | 1 | 1 |
I have a project with a Java backend and an Angular-based frontend, and I'd like to utilize django CMS. Is this possible with a non-Django project? I've been looking over the documentation, but I can't find an explicit 'yes' or 'no'. I can't wrap my head around how I'd integrate what seem to me two very different projects.
|
Geohashing vs SearchAPI for geospatial querying using datastore
| 36,110,881 | 1 | 1 | 326 | 1 |
python,google-app-engine,google-cloud-datastore,google-search-api,geohashing
|
Geohashing does not have to be inaccurate at all. It's all in the implementation details. What I mean is you can check the neighbouring geocells as well to handle border-cases, and make sure that includes neighbours on the other side of the equator.
If your use case is finding other entities within a radius as you suggest, I would definitely recommend using the Search API. They have a distance function tailored for that use.
Search API queries are more expensive than Datastore queries, yes. But if you weigh in the computation time needed to do these calculations in your instance, probably iterating through all entities for each geohash to make sure the distance is actually less than the desired radius, then I would say the Search API is the winner. And don't forget about the implementation time.
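To make the neighbour-checking point concrete, here is a minimal implementation of the standard geohash encoding (no App Engine dependencies). Nearby points share a hash prefix, except when they straddle a cell boundary, which is exactly why neighbouring cells must also be queried:

```python
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"  # geohash alphabet (no a, i, l, o)

def geohash_encode(lat, lon, precision=12):
    """Standard geohash: interleave longitude/latitude bisection bits."""
    lat_range, lon_range = [-90.0, 90.0], [-180.0, 180.0]
    bits, even = [], True  # even-indexed bits refine longitude
    while len(bits) < precision * 5:
        rng, val = (lon_range, lon) if even else (lat_range, lat)
        mid = (rng[0] + rng[1]) / 2
        if val >= mid:
            bits.append(1)
            rng[0] = mid
        else:
            bits.append(0)
            rng[1] = mid
        even = not even
    return "".join(
        BASE32[int("".join(map(str, bits[i:i + 5])), 2)]
        for i in range(0, len(bits), 5)
    )

# Truncating the hash widens the cell; a prefix query finds co-located points.
print(geohash_encode(57.64911, 10.40744, 11))  # u4pruydqqvj
```

A radius query then becomes a prefix query on this cell plus its eight neighbours, with an exact distance filter applied to the candidates.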
| 0 | 0 | 0 | 0 |
2016-03-18T19:15:00.000
| 2 | 1.2 | true | 36,092,591 | 0 | 0 | 1 | 1 |
I am creating an App Engine application in Python that will need to perform efficient geospatial queries on datastore data. An example use case: I need to find the first 20 posts within a 10-mile radius of the current user. Having done some research into my options, I have found that currently the 2 best approaches for achieving this type of functionality would be:
Indexing geoHashed geopoint data using Python's GeoModel library
Creating/deleting documents of structured data using Google's newer SearchAPI
From a high-level perspective, it seems that indexing geohashes and querying them directly would be less costly and much faster than having to create and delete a document for every geospatial query. However, I've also read that geohashing can be very inaccurate along the equator or along 'faultlines' created by the hashing algorithm. I've seen very few posts contrasting these methods in detail, and I think Stack Overflow is a good place to have this conversation, so my questions are as follows:
Has anyone implemented similar features and had positive experiences with either method?
Which method would be the cheaper alternative?
Which would be the faster alternative?
Is there another important method I'm leaving out?
Thanks in advance.
|
How can I use all objects in a table as a label - input in a model in Django's Admin?
| 36,094,120 | 0 | 1 | 73 | 0 |
python,django,admin,models
|
If you set editable=False in the field's options, the field is excluded from admin forms entirely; to display it read-only instead, add it to your ModelAdmin's readonly_fields. For example:
language = models.ForeignKey(Language, editable=False)
| 0 | 0 | 0 | 0 |
2016-03-18T19:22:00.000
| 2 | 0 | false | 36,092,706 | 0 | 0 | 1 | 1 |
I want to know if there's an easy way to have 2 models, for example Language and Word, and another model Translation that has the string of the translation plus a foreign key to each of the other two models.
Imagine I have 2 languages, English and Spanish. Is there a way to always show every language as a label, with the string of the translation as a textbox?
|
Keep user signed in using mod_wsgi
| 36,096,173 | 1 | 0 | 127 | 0 |
python,apache,cookies,mod-wsgi,sign
|
The best (read: easiest) way to go about this is with session variables. That said, in lieu of session variable functionality you would get with a framework, you can implement your own basic system.
1) Generate a random session id
2) Send the id to the browser in a cookie
3) JSON- or pickle-encode your session variables
4a) Save the encoded string to a key-value storage system like Redis or memcached, with the session id as the key, or
4b) save it to a file on the server, preferably in /tmp/
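Steps 1 and 2 can be sketched with the standard library alone: generate an id and sign it, so a tampered cookie is rejected. The secret below is a placeholder you would keep server-side:

```python
import hashlib, hmac, secrets

SECRET = b"change-me-server-side"  # placeholder secret

def new_session_id():
    return secrets.token_hex(16)

def sign(session_id):
    """Cookie value: the id plus an HMAC so clients can't forge ids."""
    mac = hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()
    return session_id + "." + mac

def verify(cookie_value):
    """Return the session id if the signature checks out, else None."""
    try:
        session_id, mac = cookie_value.rsplit(".", 1)
    except ValueError:
        return None
    expected = hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()
    return session_id if hmac.compare_digest(mac, expected) else None

sid = new_session_id()
print(verify(sign(sid)) == sid)  # True
```

The verified id is then the key into your Redis/memcached/file store where the LDAP-authenticated username lives; the cookie itself should be sent with HttpOnly and Secure flags.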
| 0 | 0 | 0 | 0 |
2016-03-18T23:14:00.000
| 1 | 0.197375 | false | 36,095,800 | 0 | 0 | 1 | 1 |
I'm developing a web app using my own framework that I created using mod_wsgi.
I want to avoid using dependencies such as Django or Flask, just to have a short script, It actually won't be doing much.
I have managed to authenticate the user against LDAP from a login page. The problem is that I don't want the user to authenticate every time an action requires authorization, but I don't know how to keep the user logged in.
Should I use the cookies? If so, what would be the best method to keep identification in cookies? What are my options?
|
Flask-Babel won't translate text on AWS within a docker container, but does locally
| 36,165,474 | 0 | 0 | 163 | 0 |
python,amazon-web-services,flask,docker,docker-compose
|
I found the problem.
Locally I am running it on a Vagrant virtual machine on a Windows computer. Because Windows has a case-insensitive file system, when the Python gettext() function looked for en_US, I was passing it en_us, which it still found on Windows. On AWS it did not, because there it was running on Linux, which is case sensitive.
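A tiny sketch of the fix: normalize locale identifiers (language part lowercase, territory part uppercase) before handing them to Flask-Babel/gettext. The function name is my own:

```python
def normalize_locale(code):
    """Normalize 'en_us' or 'pt-br' to 'en_US' / 'pt_BR'."""
    parts = code.replace("-", "_").split("_")
    if len(parts) == 2:
        return parts[0].lower() + "_" + parts[1].upper()
    return code.lower()

print(normalize_locale("en_us"))  # en_US
print(normalize_locale("pt-br"))  # pt_BR
```

With this, the directory lookup under translations/ matches on case-sensitive file systems too.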
| 0 | 1 | 0 | 0 |
2016-03-19T01:11:00.000
| 1 | 1.2 | true | 36,096,703 | 0 | 0 | 1 | 1 |
I have a Flask app that uses Flask-Babel to translate text. I have created a Docker container for it all to run in, and I have verified multiple times that both are built and run exactly the same way.
When I put the app in my local Docker container (using a Vagrant Linux machine), the translations work fine. When I put it on AWS, the translations do not work, and it simply shows the msgid text - things like "website_title" etc. instead of the correct localized text.
This is really weird to me, because everything is running EXACTLY the same way inside Docker containers, so there shouldn't be anything different about them.
If needed I can post some code snippets with sensitive parts edited out, but I was hoping for someone to point me in a general direction on why this might be happening or how to even debug it. As far as I can tell, no errors are being logged anywhere.
|
Persist file upload html forms in web2py
| 36,105,590 | 3 | 1 | 119 | 0 |
php,python,html,web2py
|
Persisting file upload fields would require knowing the paths to the files on the user's local machine, and browsers do not allow this, as it would be a security vulnerability. There are a few alternative approaches that you could take, but web2py does not include built-in functionality to implement them.
One option would be to do an initial client-side validation (or possibly validation via Ajax if you need any server-side database lookups) before the form is submitted. You would still want to do server-side validation for security purposes, but this would at least prevent the user from submitting data that will ultimately fail validation.
Another option would be to have the user do an initial submission of all data except the files, and then upload the files only after the other data have been successfully submitted.
Finally, on the server side, when validation fails, you could store the uploaded files in a temporary location. The returned form could then show the filenames of the successfully uploaded files while also including file upload widgets in case the user wants to change any of the uploaded files. Upon successful form submission, you could then copy the temporarily stored files to the proper location. In this case, you would need some way to associate the particular form submission with the temporary files, and you might also want to run a periodic task to clean up orphaned temporary files.
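The last approach can be sketched framework-free: stash each successfully uploaded file under a per-submission token, then list what survived a failed validation so the form can redisplay the filenames. All names here are hypothetical:

```python
import os, secrets, tempfile

UPLOAD_TMP = tempfile.mkdtemp()  # stand-in for a configured temp area

def stash_upload(form_token, filename, data):
    """Keep a successfully uploaded file across a failed validation."""
    folder = os.path.join(UPLOAD_TMP, form_token)
    os.makedirs(folder, exist_ok=True)
    with open(os.path.join(folder, filename), "wb") as f:
        f.write(data)

def recover_uploads(form_token):
    """Filenames that survived the failed submission, for redisplay."""
    folder = os.path.join(UPLOAD_TMP, form_token)
    return sorted(os.listdir(folder)) if os.path.isdir(folder) else []

token = secrets.token_hex(8)  # ties the submission to its stashed files
stash_upload(token, "photo.jpg", b"...bytes...")
print(recover_uploads(token))  # ['photo.jpg']
```

On a successful submission you would move the stashed files to their final upload location; a periodic job would clean up folders whose tokens never completed.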
| 0 | 0 | 0 | 0 |
2016-03-19T09:41:00.000
| 1 | 1.2 | true | 36,100,141 | 0 | 0 | 1 | 1 |
I have developed a web portal using web2py. The portal has an input form to be filled in by the user. There are 5 steps in the form, and at the last step there is a bunch of file upload fields.
If a user encounters a form validation error after filling in the form, they have to upload the files from the upload fields again, because file upload fields get reset after a validation error. This is acceptable if the user has to fill in just one form, but it becomes difficult when they have to fill in hundreds of similar forms to input data.
I want to implement a feature that persists the file upload fields even after a form validation error. Is there a way to achieve this using HTML or PHP, or is there something built into web2py?
Please let me know if anyone has done something like this before.
|
Is it safe to upload my Django project to GitHub?
| 36,107,082 | 10 | 5 | 3,430 | 0 |
python,django,git,security,github
|
In general, and as long as your settings.py does not include sensitive information, uploading your Django project to GitHub will not compromise your super user account. Your user information is stored in your database, which should not be included in your Git repository.
The most likely situation where this might be a problem is if you are using SQLite, a file-based database. If you are, make sure that your database file is not (and has never been) checked into your repository.
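A sketch of the precaution for SQLite users, assuming Django's default database filename of db.sqlite3 (run from the project root):

```shell
# Keep the database file and compiled bytecode out of git.
echo "db.sqlite3" >> .gitignore
echo "*.pyc" >> .gitignore
cat .gitignore
```

If the database file was ever committed, adding it to .gitignore now is not enough: the password hashes remain in the repository history, so you would also need to scrub the history (and ideally rotate the password).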
| 0 | 0 | 0 | 0 |
2016-03-19T20:47:00.000
| 1 | 1.2 | true | 36,106,945 | 0 | 0 | 1 | 1 |
Will uploading my Django project to GitHub make my superuser (created with python manage.py createsuperuser) vulnerable?
I used a sensitive password with the superuser I created and I do not want it to be compromised by uploading the source code to GitHub.
The website itself does not contain any sensitive code.
|
Checking friendship in Tweepy
| 55,508,308 | 0 | 0 | 3,016 | 0 |
python,tweepy
|
The code below works for me. show_friendship returns a pair of Friendship objects (source, target), and source.following tells you whether the source account follows the target:
source, target = api.show_friendship(
    source_screen_name='venky6amesh', target_screen_name='vag')
if not source.following:
    print("the user is not a friend of", target.screen_name)
    try:
        api.create_friendship(target.screen_name)
        print('success')
    except tweepy.TweepError as e:
        print('error', e.message[0]['code'])
else:
    print("user is a friend of", target.screen_name)
| 0 | 0 | 1 | 0 |
2016-03-20T19:31:00.000
| 4 | 0 | false | 36,118,490 | 0 | 0 | 1 | 1 |
I am trying to find out whether I follow someone. I realized that, although it is written in the official Tweepy documentation, I cannot use API.exists_friendship anymore. Therefore, I tried API.show_friendship(source_id/source_screen_name, target_id/target_screen_name), and as the documentation says, it returns Friendship objects: (<tweepy.models.Friendship object at 0x105cd0890>, <tweepy.models.Friendship object at 0x105cd0ed0>)
When I write screen_names = [user.screen_name for user in connection.show_friendship(target_id=someone_id)] it returns my_username and username for someone_id.
Can anyone tell me how to use it properly? Or is there another method that simply returns True/False? I just want to know whether I follow them or not.
|
Expected Chromecast Audio Delay?
| 41,686,041 | 1 | 6 | 657 | 0 |
python,audio,raspberry-pi,chromecast
|
I've been testing notifications with pychromecast and got a delay of about 7 seconds.
Since you can't play a local file, only a file hosted on a webserver, I guess the Chromecast fetches the file itself.
Routing is via Google's servers, which is what Google does with all its products.
| 0 | 0 | 1 | 1 |
2016-03-21T03:51:00.000
| 1 | 0.197375 | false | 36,122,859 | 0 | 0 | 1 | 1 |
My 10 year old and I are implementing a project which calls for audio to be played by a Chromecast Audio after a physical button is pressed.
She is using python and pychromecast to connect up to a chromecast audio.
The audio files are 50k mp3 files and hosted over wifi on the same raspberry pi running the button tools. They are hosted using nginx.
Delay from firing the play_media function in pychromecast to audio coming out of the chromecast is at times in excess of 3 seconds, and never less than 1.5 seconds. This seems, anecdotally, to be much slower than casting from spotify or pandora. And, it's definitely too slow to make pushing the button 'fun'.
File access times can matter on the Pi, but reading an entire file with something like md5sum takes less than 0.02 seconds, so we are not dealing with filesystem lag.
The average download time for the mp3 files from the Pi is 80-100 ms over wifi, so this is not the source of the latency either.
Can anyone tell me
What the expected delay is for the chromecast audio to play a short file
If pychromecast is particularly inefficient here, and if so, any suggestions for go, python or lisp-based libraries that could be used.
Any other tips for minimizing latency? We have already downconverted from wav files thinking raw http speed could be an issue.
Thanks in advance!
|
Spark running on EC2 vs EMR
| 36,143,671 | 3 | 1 | 3,688 | 0 |
python,amazon-web-services,amazon-ec2,apache-spark,amazon-emr
|
EMR provides easy-to-use Hadoop/Spark as a service. You just select the components you want installed (Spark, Hadoop), their versions, how many machines you want to use and a couple of other options, and it installs everything for you. Since you are students, I assume you don't have experience with automation tools like Ansible, Puppet or Chef, and you have probably never had to maintain your own Hadoop cluster. If that is the case, I would definitely suggest EMR. At the same time, as an experienced Hadoop/Spark user, I can tell you that it has its own limitations. When I used it 6 months ago, I wanted the latest version of EMR (4.0, if I remember correctly) because it supported the latest version of Spark, and I had a few headaches customising it to install Java 8 instead of the provided Java 7. I believe those were the early days of Java 8 support and they should have fixed that by now. But this is what you miss with "all included" solutions: flexibility, especially if you are an expert user.
| 0 | 0 | 0 | 0 |
2016-03-21T21:04:00.000
| 2 | 1.2 | true | 36,141,570 | 0 | 1 | 1 | 1 |
We are students working on a graduation project related to data science. We are developing a recommender engine using Spark with Python (PySpark), with an Android application as the interface for users, and we have faced a lot of roadblocks. One of them was how to keep the Spark script up and running in the cloud for fast processing and real-time results.
All we knew about EMR is that it's newer than EC2 and already has Hadoop installed on it.
We are still having a hard time deciding which to use and what the differences are between them when dealing with Spark.
|
How does django-rest-framework decide what the default `allowed_methods` should be for a `ModelViewSet`?
| 36,162,477 | 3 | 3 | 1,242 | 0 |
python,django,django-rest-framework
|
The short answer:
In my case, I was accidentally sending my PATCH to the list URL, rather than the put/patch URL.
The longer answer:
I found that the problem isn't that one project has different defaults for allowed_methods; it's that the action_map and allowed_methods properties of the ViewSet change depending on which of the ViewSet's URLs you hit, since the action_map is set by the router (see SimpleRouter.routes).
So if you try to hit "//[base_url]/your-model/" with PATCH or PUT, as I was doing, it will say that only ['GET', 'POST', 'HEAD', 'OPTIONS'] are allowed, and patch() will NOT be linked to partial_update(), even though it uses the same ViewSet class and partial_update() is present in that class.
If you want to send a PATCH, you have to send it to "//[base_url]/your-model/[some_id]/".
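The behaviour falls directly out of the per-URL method-to-action maps that SimpleRouter builds. Simplified from DRF's SimpleRouter.routes:

```python
# Each generated URL gets its own HTTP-method -> ViewSet-action map.
list_route = {"get": "list", "post": "create"}
detail_route = {
    "get": "retrieve",
    "put": "update",
    "patch": "partial_update",
    "delete": "destroy",
}

# PATCH is only mapped on the detail URL, hence the 405 on the list URL.
print("patch" in list_route, "patch" in detail_route)  # False True
```

So both projects had identical defaults; only the URL being hit differed.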
| 0 | 0 | 0 | 0 |
2016-03-21T21:59:00.000
| 2 | 0.291313 | false | 36,142,418 | 0 | 0 | 1 | 1 |
The company I work for has 2 projects that use django and DRF 3.
both projects have a ViewSet that extends ModelViewSet
both ViewSets do not explicitly define the allowed_methods property and are just using whatever DRF figures should be the default
both ViewSets do not override or define any handler methods (create(), update(), partial_update(), patch(), etc.)
However, in one project the allowed_methods property defaults to [u'GET', u'PUT', u'PATCH', u'DELETE', u'HEAD', u'OPTIONS']. For the other allowed_methods defaults to [u'GET', u'POST', u'HEAD', u'OPTIONS']. Consequently, I get a 405 response with
Method "PATCH" not allowed.
when I attempt to send a PATCH request.
What causes project 2 to be more restricted?
|
Django: Making custom permissions
| 36,146,532 | 0 | 0 | 41 | 0 |
python,django,forms,permissions,verify
|
You have several ways to do it:
UI level: when the search field is focused, show an alert or other notice telling the user they are not allowed to search yet.
Server level: assuming the user is logged in or has an account, verify the user in the search request and return a response stating that they cannot search without confirming their email.
Don't let them use the site at all after registering until they confirm their email. Note that if you treat search as data display but don't block other display, you confuse users: why can I see all articles but not search them?
I would go for 3, but still let them browse the site; require confirmation when they try to do something which modifies the DB (when they try to post something, there is then a psychological block between them and their objective, and they will be more willing to confirm in order to achieve it).
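The server-level check in option 2 can be sketched framework-free as a decorator; a real Django version would inspect request.user instead, and every name below is made up for illustration:

```python
from functools import wraps

class User:
    """Stand-in for a real user object (made-up for this sketch)."""
    def __init__(self, email_confirmed):
        self.email_confirmed = email_confirmed

def email_confirmed_required(view):
    """Reject calls from users who haven't confirmed their email."""
    @wraps(view)
    def wrapper(user, *args, **kwargs):
        if not user.email_confirmed:
            return "403: confirm your email first"
        return view(user, *args, **kwargs)
    return wrapper

@email_confirmed_required
def search(user, query):
    return "results for " + query

print(search(User(False), "django"))  # 403: confirm your email first
print(search(User(True), "django"))   # results for django
```

In Django you would apply the same decorator to every non-model form view you want to gate, or roll it into a mixin for class-based views.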
| 0 | 0 | 0 | 0 |
2016-03-22T00:12:00.000
| 1 | 0 | false | 36,144,018 | 0 | 0 | 1 | 1 |
So I have lots of forms that aren't attached to models, like a search form. I don't want people to be able to access these without first verifying their account through an email. What is the best way to limit their ability to do this? Is it through custom permissions? If so, how do I go about it? Thank you so much!
|
Django - makemigrations - No changes detected
| 60,474,730 | 0 | 227 | 202,787 | 0 |
python,django,django-migrations
|
A possible reason could be deletion of the existing db file and migrations folder.
You can use python manage.py makemigrations <app_name>; this should work. I once faced a similar problem.
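If the migrations package is indeed missing, recreating it by hand also works ("myapp" below is a placeholder app name):

```shell
mkdir -p myapp/migrations            # recreate the package directory
touch myapp/migrations/__init__.py   # makemigrations needs this file to exist
ls myapp/migrations/
```

After that, python manage.py makemigrations myapp should detect the app's models again.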
| 0 | 0 | 0 | 0 |
2016-03-22T11:55:00.000
| 34 | 0 | false | 36,153,748 | 0 | 0 | 1 | 14 |
I was trying to create migrations within an existing app using the makemigrations command but it outputs "No changes detected".
Usually I create new apps using the startapp command but did not use it for this app when I created it.
After debugging, I found that it is not creating migration because the migrations package/folder is missing from an app.
Would it be better if it creates the folder if it is not there or am I missing something?
|
Django - makemigrations - No changes detected
| 60,492,787 | 1 | 227 | 202,787 | 0 |
python,django,django-migrations
|
One more edge case and solution:
I added a boolean field, and at the same time added an @property referencing it with the same name (doh). Commenting out the property let the migration see and add the new field; I then renamed the property and all was good.
| 0 | 0 | 0 | 0 |
2016-03-22T11:55:00.000
| 34 | 0.005882 | false | 36,153,748 | 0 | 0 | 1 | 14 |
I was trying to create migrations within an existing app using the makemigrations command but it outputs "No changes detected".
Usually I create new apps using the startapp command but did not use it for this app when I created it.
After debugging, I found that it is not creating migration because the migrations package/folder is missing from an app.
Would it be better if it creates the folder if it is not there or am I missing something?
|
Django - makemigrations - No changes detected
| 60,585,192 | 0 | 227 | 202,787 | 0 |
python,django,django-migrations
|
If you have managed = False in your model's Meta, migrations are skipped for that model. Remove it (or set it to True) and run makemigrations again; it will then detect the new changes.
| 0 | 0 | 0 | 0 |
2016-03-22T11:55:00.000
| 34 | 0 | false | 36,153,748 | 0 | 0 | 1 | 14 |
I was trying to create migrations within an existing app using the makemigrations command but it outputs "No changes detected".
Usually I create new apps using the startapp command but did not use it for this app when I created it.
After debugging, I found that it is not creating migration because the migrations package/folder is missing from an app.
Would it be better if it creates the folder if it is not there or am I missing something?
|
Django - makemigrations - No changes detected
| 62,388,011 | -1 | 227 | 202,787 | 0 |
python,django,django-migrations
|
The best thing you can do is delete the existing database. In my case I was using a phpMyAdmin (MySQL) database, so I manually deleted the database there.
After deleting, I created the database in phpMyAdmin again, without adding any tables, and ran the following commands:
python manage.py makemigrations
python manage.py migrate
After these commands you can see that Django has automatically created the other necessary tables in the database (roughly 10 of them). Then:
python manage.py makemigrations <app_name>
python manage.py migrate
After the above commands, all the models (tables) you have created are imported into the database.
Hope this helps.
| 0 | 0 | 0 | 0 |
2016-03-22T11:55:00.000
| 34 | -0.005882 | false | 36,153,748 | 0 | 0 | 1 | 14 |
I was trying to create migrations within an existing app using the makemigrations command but it outputs "No changes detected".
Usually I create new apps using the startapp command but did not use it for this app when I created it.
After debugging, I found that it is not creating migration because the migrations package/folder is missing from an app.
Would it be better if it creates the folder if it is not there or am I missing something?
|
Django - makemigrations - No changes detected
| 62,203,936 | 1 | 227 | 202,787 | 0 |
python,django,django-migrations
|
Try registering your model in admin.py; here's an example:
admin.site.register(YourModelHere)
You can do the following things:-
1. admin.site.register(YourModelHere) # In admin.py
2. Reload the page and try again
3. Hit CTRL-S and save
4. There might be an error; especially check models.py and admin.py
5. Or, at the end of it all just restart the server
| 0 | 0 | 0 | 0 |
2016-03-22T11:55:00.000
| 34 | 0.005882 | false | 36,153,748 | 0 | 0 | 1 | 14 |
I was trying to create migrations within an existing app using the makemigrations command but it outputs "No changes detected".
Usually I create new apps using the startapp command but did not use it for this app when I created it.
After debugging, I found that it is not creating migration because the migrations package/folder is missing from an app.
Would it be better if it creates the folder if it is not there or am I missing something?
|
Django - makemigrations - No changes detected
| 36,972,913 | 9 | 227 | 202,787 | 0 |
python,django,django-migrations
|
There are times when ./manage.py makemigrations is superior to ./manage.py makemigrations <myapp> because it can handle certain conflicts between apps.
Those occasions occur silently and it takes several hours of swearing to understand the real meaning of the dreaded No changes detected message.
Therefore, it is a far better choice to make use of the following command:
./manage.py makemigrations <myapp1> <myapp2> ... <myappN>
| 0 | 0 | 0 | 0 |
2016-03-22T11:55:00.000
| 34 | 1 | false | 36,153,748 | 0 | 0 | 1 | 14 |
I was trying to create migrations within an existing app using the makemigrations command but it outputs "No changes detected".
Usually I create new apps using the startapp command but did not use it for this app when I created it.
After debugging, I found that it is not creating migration because the migrations package/folder is missing from an app.
Would it be better if it creates the folder if it is not there or am I missing something?
|
Django - makemigrations - No changes detected
| 61,434,419 | -1 | 227 | 202,787 | 0 |
python,django,django-migrations
|
Well, I'm sure that you haven't defined the models yet, so what does it have to migrate?
The solution is to define all the fields (CharField, TextField, ...) on your models, then run the migrations again, and it will work.
| 0 | 0 | 0 | 0 |
2016-03-22T11:55:00.000
| 34 | -0.005882 | false | 36,153,748 | 0 | 0 | 1 | 14 |
I was trying to create migrations within an existing app using the makemigrations command but it outputs "No changes detected".
Usually I create new apps using the startapp command but did not use it for this app when I created it.
After debugging, I found that it is not creating migration because the migrations package/folder is missing from an app.
Would it be better if it creates the folder if it is not there or am I missing something?
|
Django - makemigrations - No changes detected
| 58,637,080 | 6 | 227 | 202,787 | 0 |
python,django,django-migrations
|
Another possible reason is if you had some models defined in another file (not in a package) and haven't referenced that anywhere else.
For me, simply adding from .graph_model import * to admin.py (where graph_model.py was the new file) fixed the problem.
| 0 | 0 | 0 | 0 |
2016-03-22T11:55:00.000
| 34 | 1 | false | 36,153,748 | 0 | 0 | 1 | 14 |
I was trying to create migrations within an existing app using the makemigrations command but it outputs "No changes detected".
Usually I create new apps using the startapp command but did not use it for this app when I created it.
After debugging, I found that it is not creating migration because the migrations package/folder is missing from an app.
Would it be better if it creates the folder if it is not there or am I missing something?
|
Django - makemigrations - No changes detected
| 57,418,764 | 11 | 227 | 202,787 | 0 |
python,django,django-migrations
|
Make sure your app is listed in INSTALLED_APPS in settings.py
Make sure you model class extends models.Model
| 0 | 0 | 0 | 0 |
2016-03-22T11:55:00.000
| 34 | 1 | false | 36,153,748 | 0 | 0 | 1 | 14 |
I was trying to create migrations within an existing app using the makemigrations command but it outputs "No changes detected".
Usually I create new apps using the startapp command but did not use it for this app when I created it.
After debugging, I found that it is not creating migration because the migrations package/folder is missing from an app.
Would it be better if it creates the folder if it is not there or am I missing something?
|
Django - makemigrations - No changes detected
| 47,740,087 | 80 | 227 | 202,787 | 0 |
python,django,django-migrations
|
My problem (and so solution) was yet different from those described above.
I wasn't using a models.py file, but created a models directory and created the my_model.py file there, where I put my model. Django couldn't find my model, so it reported that there were no migrations to apply.
My solution was: in the my_app/models/__init__.py file I added this line:
from .my_model import MyModel
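To see why that re-export matters, here's a small self-contained sketch (file and class names are illustrative) that simulates a models package: without the import in __init__.py, nothing scanning the package, as Django's migration autodetector effectively does, would see the class.

```python
import os
import sys
import tempfile

# Build a throwaway "models" package on disk, mimicking the layout above.
pkg_root = tempfile.mkdtemp()
models_dir = os.path.join(pkg_root, "models")
os.makedirs(models_dir)

with open(os.path.join(models_dir, "my_model.py"), "w") as f:
    f.write("class MyModel:\n    pass\n")

# The crucial line: re-export the model so it is visible at package level.
with open(os.path.join(models_dir, "__init__.py"), "w") as f:
    f.write("from .my_model import MyModel\n")

sys.path.insert(0, pkg_root)
import models  # noqa: E402

print(models.MyModel.__name__)  # MyModel
```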
| 0 | 0 | 0 | 0 |
2016-03-22T11:55:00.000
| 34 | 1 | false | 36,153,748 | 0 | 0 | 1 | 14 |
I was trying to create migrations within an existing app using the makemigrations command but it outputs "No changes detected".
Usually I create new apps using the startapp command but did not use it for this app when I created it.
After debugging, I found that it is not creating migration because the migrations package/folder is missing from an app.
Would it be better if it creates the folder if it is not there or am I missing something?
|
Django - makemigrations - No changes detected
| 64,500,819 | 0 | 227 | 202,787 | 0 |
python,django,django-migrations
|
Another possibility is you squashed some migrations and applied the resulting one, but forgot to remove the replaces attribute from it.
| 0 | 0 | 0 | 0 |
2016-03-22T11:55:00.000
| 34 | 0 | false | 36,153,748 | 0 | 0 | 1 | 14 |
I was trying to create migrations within an existing app using the makemigrations command but it outputs "No changes detected".
Usually I create new apps using the startapp command but did not use it for this app when I created it.
After debugging, I found that it is not creating migration because the migrations package/folder is missing from an app.
Would it be better if it creates the folder if it is not there or am I missing something?
|
Django - makemigrations - No changes detected
| 52,953,891 | 2 | 227 | 202,787 | 0 |
python,django,django-migrations
|
I solved that problem by doing this:
Erase the "db.sqlite3" file. The issue here is that your current data base will be erased, so you will have to remake it again.
Inside the migrations folder of your edited app, erase the last updated file. Remember that the first created file is: "0001_initial.py". For example: I made a new class and register it by the "makemigrations" and "migrate" procedure, now a new file called "0002_auto_etc.py" was created; erase it.
Go to the "pycache" folder (inside the migrations folder) and erase the file "0002_auto_etc.pyc".
Finally, go to the console and use "python manage.py makemigrations" and "python manage.py migrate".
| 0 | 0 | 0 | 0 |
2016-03-22T11:55:00.000
| 34 | 0.011764 | false | 36,153,748 | 0 | 0 | 1 | 14 |
I was trying to create migrations within an existing app using the makemigrations command but it outputs "No changes detected".
Usually I create new apps using the startapp command but did not use it for this app when I created it.
After debugging, I found that it is not creating migration because the migrations package/folder is missing from an app.
Would it be better if it creates the folder if it is not there or am I missing something?
|
Django - makemigrations - No changes detected
| 55,265,330 | 0 | 227 | 202,787 | 0 |
python,django,django-migrations
|
You should add polls.apps.PollsConfig to INSTALLED_APPS in settings.py
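For reference, a minimal sketch of what that looks like (the app name "polls" comes from the Django tutorial; substitute your own app's AppConfig path):

```python
# settings.py (fragment)
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'polls.apps.PollsConfig',  # without this, makemigrations skips the app
]

print('polls.apps.PollsConfig' in INSTALLED_APPS)  # True
```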
| 0 | 0 | 0 | 0 |
2016-03-22T11:55:00.000
| 34 | 0 | false | 36,153,748 | 0 | 0 | 1 | 14 |
I was trying to create migrations within an existing app using the makemigrations command but it outputs "No changes detected".
Usually I create new apps using the startapp command but did not use it for this app when I created it.
After debugging, I found that it is not creating migration because the migrations package/folder is missing from an app.
Would it be better if it creates the folder if it is not there or am I missing something?
|
Django - makemigrations - No changes detected
| 57,435,023 | -1 | 227 | 202,787 | 0 |
python,django,django-migrations
|
First of all, make sure your app is registered in INSTALLED_APPS in settings.py.
Then the above answer works perfectly fine.
| 0 | 0 | 0 | 0 |
2016-03-22T11:55:00.000
| 34 | -0.005882 | false | 36,153,748 | 0 | 0 | 1 | 14 |
I was trying to create migrations within an existing app using the makemigrations command but it outputs "No changes detected".
Usually I create new apps using the startapp command but did not use it for this app when I created it.
After debugging, I found that it is not creating migration because the migrations package/folder is missing from an app.
Would it be better if it creates the folder if it is not there or am I missing something?
|
django-db2 use different schema name than database username
| 36,159,818 | 0 | 2 | 688 | 1 |
django,python-2.7,django-models,db2
|
DB2 uses so-called two part names, schemaname.objectname. Each object, including tables, can be referenced by the entire name. Within a session there is the current schema which by default is set to the username. It can be changed by the SET SCHEMA myschema statement.
For your question there are two options:
1) Reference the tables with their full name: schemaname.tablename
2) Use set schema to set the common schemaname and reference just the table.
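On the Django side, one common workaround (my suggestion, not part of the answer above; the schema and table names are placeholders) is to schema-qualify db_table in the model's Meta so the ORM emits the two-part name:

```python
from django.db import models

class MyModel(models.Model):
    name = models.CharField(max_length=100)

    class Meta:
        # Quoted two-part name. How the DB2 backend handles the quoting
        # may vary, so verify the SQL Django actually generates.
        db_table = '"MYSCHEMA"."MYTABLE"'
```

Treat this as a declarative configuration sketch rather than a guaranteed fix; it depends on the quoting behavior of the django-db2 backend.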
| 0 | 0 | 0 | 0 |
2016-03-22T16:15:00.000
| 2 | 0 | false | 36,159,706 | 0 | 0 | 1 | 1 |
In Django, the database username is used as the schema name.
In DB2 there are no database-level users; OS users are used to log in to the database.
In my database I have different names for the database user and the database schema.
So in django with db2 as backend how can I use different schema name to access the tables?
EDIT:
Clarifying that I'm trying to access the tables via the ORM, not raw SQL. The ORM is implicitly using the username as the schema name. How do I avoid that?
|
How to set application path in music21
| 36,364,217 | 2 | 2 | 1,301 | 0 |
python,linux,anaconda,midi,music21
|
First of all, are you sure you have a midi player?
Timidity is a good option. Check if you have it installed, and if you don't, just use sudo apt-get install timidity
Once installed, the path you need should be '/usr/bin/timidity'
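A hedged sketch of setting that path in music21 (this writes to your user configuration; the exact keys can vary between music21 versions, so treat it as a starting point):

```python
from music21 import environment

us = environment.UserSettings()
# If no settings file exists yet, you may first need: us.create()
us['midiPath'] = '/usr/bin/timidity'  # the player installed via apt-get above
```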
| 0 | 1 | 0 | 0 |
2016-03-22T22:31:00.000
| 1 | 0.379949 | false | 36,166,485 | 0 | 0 | 1 | 1 |
I'm using Ubuntu 14.04 64bit.
I don't know what to set as the path to the application.
I have installed music21 in anaconda3, but I got output as follows:
music21.converter.subConverters.SubConverterException: Cannot find a valid application path for format midi. Specify this in your Environment by calling environment.set(None, 'pathToApplication')
What application should I choose? I've seen a lot of pages but no one tells me what to set.
|
For E2E Testing: which is better selenium or protractor for following web stack (Angular, Python and MongoDB)?
| 36,178,367 | 1 | 0 | 548 | 0 |
python,angularjs,selenium,testing,protractor
|
Protractor is based off using Selenium webdrivers. If you have an Angular app for your entire front-end, I would go with Protractor. If you are going to have a mixed front-end environment, you may want to go with Selenium only.
| 0 | 0 | 1 | 1 |
2016-03-23T12:26:00.000
| 2 | 0.099668 | false | 36,178,187 | 0 | 0 | 1 | 1 |
I recently came to know about protractor framework which provides end to end testing for angular applications.
I would like to know which test framework suits the following web stack better: Selenium or Protractor?
Angular, Python and MongoDB.
I am going to use the Mozilla Firefox browser only.
Can anyone please provide your valuable suggestions?
|
Delete specific cache in Flask-Cache or Flask-Caching
| 50,083,102 | 11 | 7 | 9,332 | 0 |
python,caching,flask,flask-cache,flask-caching
|
For cache.cached(), use cache.delete() to delete specific cache, passing the cache key (which defaults to view/<request.path>).
For cache.memoize(), use cache.delete_memoized() to delete specific cache, passing the cache key (which defaults to the function name, with or without args).
Use cache.clear() to delete all the cache data.
| 0 | 0 | 0 | 0 |
2016-03-23T13:49:00.000
| 2 | 1 | false | 36,180,066 | 0 | 0 | 1 | 2 |
I am using Flask cache in my API in python.
Currently I am using the decorator @app.cache.memoize(cache_memoize_value) and I flush it by calling app.cache.delete_memoized(view)
The problem is that with memoize it will be cached for n views and not for a specific amount of time. If I want to specify a timeout for the cache I need to use the decorator @app.cache.cached(timeout=300) and clear it with app.cache.clear(). However, this clear method will clear everything and not only a specific view.
How can I only clear a specific view while using the cached decorator?
|
Delete specific cache in Flask-Cache or Flask-Caching
| 36,309,686 | 3 | 7 | 9,332 | 0 |
python,caching,flask,flask-cache,flask-caching
|
It's in fact pretty easy and I should have tried this before.
Like with the cached decorator, you can specify a timeout in the memoize decorator.
But instead of doing this:
@app.cache.memoize(cache_memoize_value)
You need to do this
@app.cache.memoize(timeout=cache_memoize_value)
| 0 | 0 | 0 | 0 |
2016-03-23T13:49:00.000
| 2 | 1.2 | true | 36,180,066 | 0 | 0 | 1 | 2 |
I am using Flask cache in my API in python.
Currently I am using the decorator @app.cache.memoize(cache_memoize_value) and I flush it by calling app.cache.delete_memoized(view)
The problem is that with memoize it will be cached for n views and not for a specific amount of time. If I want to specify a timeout for the cache I need to use the decorator @app.cache.cached(timeout=300) and clear it with app.cache.clear(). However, this clear method will clear everything and not only a specific view.
How can I only clear a specific view while using the cached decorator?
|
Cuckoo Error: TemplateDoesNotExist at /
| 36,680,726 | 0 | 0 | 587 | 0 |
django,python-2.7,sandbox,malware-detection
|
Do you have TEMPLATE_DIRS and TEMPLATE_LOADERS in your settings.py file? I faced the same issue too. Once you add them, it will work.
| 0 | 0 | 0 | 1 |
2016-03-23T17:49:00.000
| 2 | 0 | false | 36,185,274 | 0 | 0 | 1 | 1 |
I installed the latest Cuckoo version on my physical machine (Ubuntu 15.10) and configured Cuckoo following the official guide.
I have a problem with the web GUI:
TemplateDoesNotExist at /
It tries to find the dashboard template in
usr/lib/python2.7/dist-packages/django/contrib/auth/templates/dashboard/index.html (with a File does not exist error)
instead of in ~/cuckoo/web/templates/dashboard/
I tried searching for a solution in the official Cuckoo support channels, but they seem to be deserted.
|
Using Fiddler to intercept requests from Windows program
| 36,207,761 | 0 | 0 | 767 | 0 |
http,python-requests,fiddler
|
Fiddler isn't correctly intercepting anything that this program is sending/receiving
That means the program is either firing requests to localhost (very unlikely), or ignoring the proxy settings for the current user (most likely). The latter also means this application won't function on a machine where a proxy connection is required in order to make HTTP calls to the outside.
The alternative would be to use a packet inspector like Wireshark, or to let the application be fixed to respect proxy settings, or to capture all HTTP requests originating from that machine on another level, for example the next router in your network.
| 0 | 0 | 1 | 0 |
2016-03-24T04:13:00.000
| 1 | 0 | false | 36,193,175 | 0 | 0 | 1 | 1 |
I am trying to intercept HTTP requests sent via an application I have installed on my Windows 7 machine. I'm not sure what platform the application is built on, I just know that Fiddler isn't correctly intercepting anything that this program is sending/receiving. Requests through Chrome are intercepted fine.
Can Fiddler be set up as a proxy for ALL applications, and if so, how would I go about doing this? I have no control over the application code, it's just something I installed. It is a live bidding auction program which seems to mainly display HTML pages inside the application window.
|
Bridging two realms/subscribing one client to multiple realms
| 70,193,006 | 0 | 2 | 163 | 0 |
python,autobahn,wamp-protocol,crossbar
|
Yes, the simplest thing that could possibly work is to open two WAMP clients - one for each realm.
| 0 | 0 | 0 | 0 |
2016-03-24T13:58:00.000
| 1 | 0 | false | 36,201,960 | 0 | 0 | 1 | 1 |
I'm looking into designing a WAMP/Crossbar application with two or more realms; one realm would be for backend messaging, while a second would essentially expose a public API to frontend clients. Now, at some point messages need to cross between realms, which would require one client to join two realms and act as a bridge.
Is that feasible at all without a lot of bending over backwards? Or is the design approach flawed from the beginning, and I should rather use specific topic URIs to separate front and backends?
|
Queuing pictures to be requested in Django
| 36,209,892 | 1 | 1 | 394 | 0 |
python,django,multithreading,django-rest-framework,django-1.9
|
Django itself doesn't have a queue, but you can easily simulate it. Personally, I would probably use an external service, like rabbitMQ, but it can be done in pure Django if you want. Add a separate ImageQueue model to hold references to incoming images and use transaction management to make sure simultaneous requests don't return the same image. Maybe something like this (this is purely proof of concept code, of course).
class ImageQueue(models.Model):
image = models.OneToOneField(Image)
added = models.DateTimeField(auto_now_add=True)
processed = models.DateTimeField(null=True, default=None)
processed_by = models.ForeignKey(User, null=True, default=None)
class Meta:
ordering = ('added',)
...
# in the incoming image API that drone uses
def post_an_image(request):
image = Image()
... whatever you do to post an image ...
image.save()
queue = ImageQueue.objects.create(image=image)
... whatever else you need to do ...
# in the API your users will use
from django.db import transaction
@transaction.atomic
def request_images(request):
user = request.user
num = int(request.POST['num'])  # number of images requested
queue_slice = ImageQueue.objects.select_for_update().filter(processed__isnull=True)[:num]  # lock rows so concurrent requests don't get the same images
for q in queue_slice:
q.processed = datetime.datetime.now()
q.processed_by = user
q.save()
return [q.image for q in queue_slice]
| 0 | 0 | 0 | 0 |
2016-03-24T14:41:00.000
| 1 | 1.2 | true | 36,202,899 | 0 | 0 | 1 | 1 |
I am setting up a system where one user will be posting images to a Django server and N users will each be viewing a subset of the posted images in parallel. I can't seem to find a queuing mechanism in Django to accomplish this task. The closest thing is using latest with filter(), but that will just keep sending the latest image over and over again until a new one comes. The task queue doesn't help since this isn't a periodic task, it only occurs when a user asks for the next picture. I have one Viewset for uploading the images and another for fetching. I thought about using the Python thread-safe Queue. The uploader will enqueue the uploaded image pk, and when multiple users request a new image, the sending Viewset will dequeue an image pk and send it to the most recent user requesting an image and then the next one dequeued to the second most recent user and so on...
However, I still feel like there are some race conditions possible here. I read that Django is thread-safe, but that the app can become un-thread-safe. In addition, the Queue would need to be global to be shared among the Viewsets, which feels like bad practice. Is there a better and safer way of going about this?
Edit
Here is more detail on what I'm trying to accomplish and to give it some context. The user posting the pictures is a Smart-phone attached to a Drone. It will be posting pictures from the sky at a constant interval to the Django server. Since there will be a lot of pictures coming in. I would like to be able to have multiple users splitting up the workload of looking at all the pics (i.e. no two user's should see the same picture). So when a user will contact the Django server, saying "send me the next pic you have or send me the next 3 pics you have or etc...". However, multiple users might say this at the same time. So Django needs to keep some sort of ordering to the pictures,that's why I said Queue and figure out how to pass it to users if more than one of them asks at a time. So one Viewset is for the smart phone to post the pics and the other is for the users to ask for the pics. I am looking for a thread-safe way to do this. The only idea I have so far is to use Python's thread-safe queue and make it a global queue to the Viewsets. However, I feel like that is bad practice, and I'm not sure if it is thread-safe with Django.
|
H2OFrame converts dict to all zeros
| 36,227,222 | 2 | 0 | 276 | 0 |
python,django,pandas,scikit-learn,h2o
|
It seems the Pandas DataFrame to H2OFrame conversion works fine outside Django, but fails inside Django. The problem might be with Django's pre_save not allowing the writing/reading of the temporary .csv file that H2O creates when ingesting a python object. A possible workaround is to explicitly write the Pandas DataFrame to a .csv file with model_data_frame.to_csv(<path>, index=False) and then import the file into H2O with h2o.import_file(<path>).
| 0 | 0 | 0 | 0 |
2016-03-25T01:44:00.000
| 1 | 0.379949 | false | 36,212,815 | 0 | 1 | 1 | 1 |
I am taking input values from a django model admin screen and on pre_save calling h2o to do predictions for other values and save them.
Currently I convert my input from pandas (trying to work with sklearn preprocessing easily here) by using:
modelH2OFrame = h2o.H2OFrame(python_obj = model_data_frame.to_dict('list'))
It parses and loads. Hell it even creates a frame with values when I do it step by step.
BUT. When I run this inside of the Django pre_save, the H2OFrame comes back completely empty.
Ideas for why this may be happening? Sometimes I get errors connecting to the h2o cluster or timeouts--maybe that is a related issue? I load the H2O models in the pre_save call and do the predictions, allocate them to model fields, and then shut down the h2o cluster (in one function).
|
django migration doesn't progress and makes database lock
| 36,213,045 | 1 | 1 | 1,701 | 1 |
python,django,postgresql
|
Open connections will likely stop schema updates. If you can't wait for existing connections to finish, or if your environment is such that long-running connections are used, you may need to halt all connections while you run the update(s).
The downtime, if it's likely to be significant to you, could be mitigated if you have a read-only slave that could stay online. If not, ensuring your site fails over to some sort of error/explanation page/redirect would at least avoid raw failure code responses to requests that come in if downtime for migrations is acceptable.
| 0 | 0 | 0 | 0 |
2016-03-25T01:53:00.000
| 1 | 0.197375 | false | 36,212,891 | 0 | 0 | 1 | 1 |
I've tried to deploy (including migrations) to the production environment, but my Django migrations (like adding columns) very often stall and don't progress.
I'm working with PostgreSQL 9.3, and I found a reason for this problem: if PostgreSQL has an active transaction holding a lock on the table, an ALTER TABLE query does not run. So until now, restarting the PostgreSQL service before migrating has been my workaround, but I think this is a bad idea.
Is there a good way to make deployments proceed smoothly?
|
How to resume training from *.meta in tensorflow?
| 38,489,976 | 0 | 0 | 429 | 0 |
python,tensorflow
|
Not sure if this will work for you, but at least for DNNClassifier you can specify the model_dir parameter when creating it; that will construct the model from the files, and then you can continue training.
For a DNNClassifier you specify model_dir when first creating the object, and training will store checkpoints and other files in this directory. You can then come back later and create another DNNClassifier specifying the same model_dir, and that will restore your pre-trained model.
| 0 | 0 | 0 | 0 |
2016-03-25T14:13:00.000
| 1 | 0 | false | 36,221,588 | 0 | 1 | 1 | 1 |
In the latest version of tensorflow, when I save the model I find two files are produced: model_xxx and model_xxx.meta.
Does model_xxx.meta specify the network? Can I resume training using model_xxx and model_xxx.meta without specify the network in the code? What about training queue structure, are they stored in model_xxx.meta?
|
Testing python project with Tox and Teamcity
| 36,237,069 | 3 | 4 | 864 | 0 |
python,teamcity,pytest,tox
|
TeamCity counts the tests based on their names. My guess is since your tests in the tox matrix have the same name, they are counted as one test. This should be visible on the test page of your build, where you can see invocation counts of each test.
For TeamCity to report the number of tests correctly, test names must differ between configurations. Perhaps you could include the configuration details in the reported test name.
| 0 | 0 | 0 | 1 |
2016-03-25T19:19:00.000
| 1 | 1.2 | true | 36,226,500 | 0 | 0 | 1 | 1 |
I have project with very simple configuration matrix, described in tox: py{27,35}-django{18,19}
I'm using TeamCity as the CI server and run tests with py.test with teamcity-messages installed. I've tried running each configuration (like tox -e py27-django18) in a separate step, but TeamCity didn't summarize the tests and didn't accumulate coverage for files; it only counts coverage for the last run, and Tests passed: ... shows tests from only one build.
How testing with multiple Python configurations can be integrated into Teamcity?
Update: found out that coverage counts correctly; I just forgot to add the --cov-append option to py.test.
|
python requests memory usage on heroku
| 36,230,874 | 0 | 0 | 854 | 0 |
python,heroku,python-requests,cpython
|
CPython will release memory, but it's a bit murky.
CPython allocates chunks of memory at a time; let's call them fields.
When you instantiate an object, CPython will use blocks of memory from an existing field if possible; possible in that there are enough contiguous blocks for said object.
If there are not enough contiguous blocks, it'll allocate a new field.
Here's where it gets murky.
A field is only freed when it contains zero objects, and while there's garbage collection in CPython, there's no "trash compactor". So if you have a couple of objects spread across a few fields, and each field is only 70% full, CPython won't move those objects together and free some fields.
It seems pretty reasonable that the large data chunk you're pulling from the HTTP call is getting allocated to "new" fields, but then something goes sideways, the object's reference count goes to zero, then garbage collection runs and returns those fields to the OS.
| 0 | 0 | 0 | 0 |
2016-03-26T01:25:00.000
| 2 | 0 | false | 36,230,585 | 0 | 0 | 1 | 1 |
Some observations on Heroku that don't completely mesh with my mental model.
My understanding is that CPython will never release memory once it has been allocated by the OS. So we should never observe a decrease in resident memory of CPython processes. And this is in fact my observation from occasionally profiling my Django application on Heroku; sometimes the resident memory will increase, but it will never decrease.
However, sometimes Heroku will alert me that my worker dyno is using >100% of its memory quota. This generally happens when a long-running response-data-heavy HTTPS request that I make to an external service (using the requests library) fails due to a server-side timeout. In this case, memory usage will spike way past 100%, then gradually drop back to less than 100% of quota, when the alarm ceases.
My question is, how is this memory released back to the OS? AFAIK it can't be CPython releasing it. My guess is that the incoming bytes from the long-running TCP connection are being buffered by the OS, which has the power to de-allocate. It's murky to me when exactly "ownership" of TCP bytes is transferred to my Django app. I'm certainly not explicitly reading lines from the input stream, I delegate all of that to requests.
|
How to perform sql schema migrations in app engine managed vm?
| 36,407,336 | -2 | 1 | 180 | 1 |
python,google-app-engine,google-cloud-sql,gcloud
|
SQL schema migration is a well-known branch of SQL DB administration and is not specific to Cloud SQL, which differs from other SQL systems mainly in how it is deployed and networked. Other than this, you should look up schema migration documentation and articles online to learn how to approach your specific situation. This question is too broad for Stack Overflow as it is, however. Best of luck!
| 0 | 1 | 0 | 0 |
2016-03-26T02:54:00.000
| 1 | 1.2 | true | 36,231,114 | 0 | 0 | 1 | 1 |
I'm currently using Google Cloud SQL 2nd generation instances to host my database. I need to make a schema change to a table, but I'm not sure of the best way to do this.
Ideally, before I deploy using gcloud preview app deploy, my migrations will run so the new version of the code is using the latest schema. Also, if I need to roll back to an old version of my app, the migrations should run for that point in time. Is there a way to integrate SQL schema migrations with my App Engine deploys?
My app is app engine managed VM python/flask.
|
How do I use "tel", "number", or other input types in WTForms?
| 36,382,378 | 1 | 6 | 5,419 | 0 |
python,flask,wtforms,flask-wtforms
|
Ok. I found it.
IntegerField(widget=widgets.Input(input_type="tel"))
| 1 | 0 | 0 | 0 |
2016-03-26T21:06:00.000
| 2 | 0.099668 | false | 36,240,900 | 0 | 0 | 1 | 1 |
I want to use a phone number field in my form. What I need is when this field is tapped on Android phone, not general keyboard, but digital only appears.
I learned that this can be achieved by using <input type="tel" or <input type="number".
How do I use the tel or number input types in WTForms?
|
Why do I have to restart or reload the webserver when I make changes in django?
| 36,245,682 | 4 | 0 | 164 | 0 |
python,django,apache
|
In most deployment scenarios there is a Python interpreter running in the web server or next to it, and it has your code loaded into memory. If the code is changed, the loaded parts are not reloaded automatically (but some updated parts may be loaded if they were not loaded previously, hence errors) and there is no clean way to fully reload all code without destroying all objects, so restarting the interpreter is the only way.
You can use the Django development server with the autorestart option, but that's still uses restarting.
| 0 | 0 | 0 | 0 |
2016-03-27T08:54:00.000
| 1 | 1.2 | true | 36,245,566 | 0 | 0 | 1 | 1 |
If I don't reload the webserver (Apache) after making changes to source files in my Django application, the browser displays erratic content, sometimes errors.
Why is that? (Just out of interest)
And more importantly: can I switch it off during development?
|
Node.js long term tasks
| 36,248,252 | 0 | 0 | 99 | 0 |
javascript,python,ajax,node.js,sockets
|
So you have three systems, and an asynchronous request. I solved a problem like this recently using PHP and the box.com API. PHP doesn't allow keeping a connection open indefinitely so I had a similar problem.
To solve the problem, I would use a recursive request. It's not 'real-time' but that is unlikely to matter.
How this works:
The client browser sends the "Get my download thing" request to the Node.js server. The Node.js server returns a unique request id to the client browser.
The client browser starts a 10 second poll, using the unique request id to see if anything has changed. Currently, the answer is no.
The Node.js server receives this and sends a "Go get his download thing" request to the Python server. (The client browser is still polling every 10 seconds, the answer is still no)
The python server actually goes and gets his download thing, sticks it in a place, creates a URL and returns that to the Node.js server. (The client browser is still polling every 10 seconds, the answer is still no)
The Node.js server receives a message back from the Python server with the URL to the thing. It stores the URL against the request id it started with. At this point, its state changes to "Yes, I have your download thing, and here it is! - URL).
The client browser receives the lovely data packet with its URL, stops polling now, and skips happily away into the sunset. (or similar more appropriate digital response).
Hope this helps to give you a rough idea of how you might solve this problem without depending on push technology. Consider tweaking your poll interval (I suggested 10 seconds to start) depending on how long the download takes. You could even get tricky, wait 30 seconds, and then poll every 2 seconds. Fine tune it to your problem.
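The flow above can be sketched as a tiny state machine. This Python sketch keeps the job table in an in-memory dict (in the real setup that state would live in the Node.js server), and the function names are illustrative, not part of any API:

```python
import uuid

# In-memory store mapping request id -> result URL (None while pending).
# In the real setup described above, this state lives in the Node.js server.
jobs = {}

def start_job():
    """Client asks for the download; server returns a unique request id."""
    job_id = str(uuid.uuid4())
    jobs[job_id] = None  # pending
    return job_id

def complete_job(job_id, url):
    """The Python worker reports back with the finished URL."""
    jobs[job_id] = url

def poll(job_id):
    """Client polls every N seconds; None means 'not ready yet'."""
    return jobs.get(job_id)
```

The client keeps calling `poll()` on a timer until it gets a non-None value, then stops polling.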
| 0 | 0 | 1 | 0 |
2016-03-27T11:46:00.000
| 1 | 1.2 | true | 36,246,925 | 0 | 0 | 1 | 1 |
I have a node.js server which is communicating from a net socket to python socket. When the user sends an asynchronous ajax request with the data, the node server passes it to the python and gets data back to the server and from there to the client.
The problem occurs when the user sends the AJAX request: he has to wait for the response, and if the Python process takes too long the AJAX request times out.
I tried to create a socket server in node.js and a client that connects to the socket server in python with the data to process. The node server responds to the client with a loading screen. When the data is processed the python socket client connects to the node.js socket server and passes the processed data. However the client can not request the processed data because he doesn't know when it's done.
|
Error with the google foobar challenges
| 36,270,465 | 2 | 5 | 9,217 | 0 |
python,python-2.7,google-chrome
|
Re-indenting the file seemed to help, but that might have just been coincidental.
| 0 | 0 | 1 | 1 |
2016-03-28T15:44:00.000
| 2 | 0.197375 | false | 36,265,728 | 0 | 0 | 1 | 1 |
Has anyone had trouble verifying/submitting code to the google foobar challenges? I have been stuck unable to progress in the challenges, not because they are difficult but because I literally cannot send anything.
After I type "verify solution.py" it responds "Verifying solution..." then after a delay: "There was a problem evaluating your code."
I had the same problem with challenge 1. I waited an hour then tried verifying again and it worked. Challenge 2 I had no problems. But now with challenge 3 I am back to the same cryptic error.
To ensure it wasn't my code, I ran the challenge with no code other than "return 3" which should be the correct response to test 1. So I would have expected to see a "pass" for test 1 and then "fail" for all the rest of the tests. However it still said "There was a problem evaluating your code."
I tried deleting cookies and running in a different browser. Neither changed anything. I waited overnight, still nothing. I am slowly running out of time to complete the challenge. Is there anything I can do?
Edit: I've gotten negative votes already. Where else would I put a question about the google foobar python challenges? Also, I'd prefer not to include the actual challenge or my code since it's supposedly secret, but if necessary I will do so.
|
Serving locally a webapp
| 36,275,505 | 0 | 0 | 61 | 0 |
javascript,python,server
|
I'm not too sure what you are trying to achieve from reading the question, but from what I understand you'd like to be able to launch your application and serve it on the web when done.
You could use Heroku (www.heroku.com). The same way you are hosting the application locally, you simply make a Procfile, put in it the command that you'd normally run locally, and push to Heroku.
| 0 | 0 | 1 | 0 |
2016-03-29T04:16:00.000
| 2 | 0 | false | 36,275,331 | 0 | 0 | 1 | 1 |
I'm developing an HTML web page which will visualize data. The intention is that this web page works only on one computer, so I don't want it to be online, just offline. The page uses only JS, CSS and HTML. It is very simple and is not using any database; the data is loaded through a D3.js XMLHttpRequest. Up to now it is working with a local Python server for development, through python -m SimpleHTTPServer. Eventually I will want to launch it more easily. Is it possible to pack the whole thing into a launchable app? Do you recommend some tools to do it or some things to read? What about the server part? Is it possible to launch a "SimpleHTTPServer" kind of thing without the console? Or maybe just one command which launches the server plus the web page?
Thanks in advance.
|
can i use django to access folders on my pc using same os.path for windows?
| 36,440,307 | 1 | 0 | 55 | 0 |
python,django,web-applications
|
I have found out that Django works fine with os.path, with no problems. Actually, if you are programming in Python then Django is a great choice for server work.
| 0 | 0 | 0 | 0 |
2016-03-29T23:51:00.000
| 1 | 1.2 | true | 36,297,179 | 0 | 0 | 1 | 1 |
I have a Python algorithm that accesses a huge database on my laptop. I want to create a web server to work with it. Can I use Django with the folder paths I have used, and how do I communicate with it? I want to get an image from the web application, have it sent to my laptop, run the algorithm on it, and then send the result back to the web server. Would that still be possible without changing my algorithm's paths? I use os.path to access my database folder; would I still be able to do that with Django, or shall I learn something else? I wanted to try Django as it runs in Python and I can learn it easily.
|
Twitter Bot is restarting after Heroku dyno recharges
| 36,302,969 | 0 | 0 | 103 | 0 |
python,heroku
|
When the dyno restarts, it's a new one. The filesystem on Heroku is ephemeral and is not persisted across dynos; so your file is lost.
You need to store it somewhere more permanent - either somewhere like S3, or one of the database add-ons. Redis might be suitable for this.
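In code, the fix is to keep the "last tweeted line" counter in an external store rather than a file on the dyno. This sketch uses a plain dict as a stand-in for Redis or S3, just to show the shape of the pattern; a real deployment would replace `store` with a persistent client:

```python
# Stand-in for an external store (Redis, S3, or a database add-on).
# On Heroku, anything written to the local filesystem is lost when
# the dyno restarts, so progress must live outside the dyno.
store = {}

def next_line_to_tweet(lines):
    """Return the next untweeted line, persisting progress externally."""
    index = store.get("last_index", 0)
    if index >= len(lines):
        return None  # everything has been tweeted
    store["last_index"] = index + 1
    return lines[index]
```

After a restart, the counter is read back from the store instead of starting at zero.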
| 0 | 0 | 0 | 1 |
2016-03-30T07:47:00.000
| 1 | 1.2 | true | 36,302,677 | 1 | 0 | 1 | 1 |
I have a twitter bot which is reading a text file and tweeting. Now, a free Heroku dyno sleeps after every 18 hours for 6 hours, after which it restarts with the same command. So, the text file is read again and the tweets are repeated.
To avoid this, every time a line was read out of the list of lines from the file, I was removing the line from the list (after tweeting) and putting the remaining list into a new file, which is then renamed to the original file.
I thought this might work, but when the dyno restarted, it started from the beginning. Am I missing something here? It would be great if someone could help me with this.
|
The use of model field "verbose name"
| 54,033,108 | 10 | 8 | 10,288 | 0 |
python,django,field,verbose
|
Verbose field names are optional. They are used if you want to make your model attribute more readable; there is no need to define a verbose name if your field attribute is easily understandable. If not defined, Django automatically creates it from the field's attribute name.
Ex: student_name = models.CharField(max_length=30)
In this example, it is understood that we are going to store a student's name, so there is no need to define a verbose name explicitly.
Ex: name = models.CharField(max_length=30)
In this example, one may be confused about what is stored: the name of a student or of a teacher. So we can define a verbose name:
name = models.CharField("student name", max_length=30)
| 0 | 0 | 0 | 0 |
2016-03-30T16:58:00.000
| 1 | 1 | false | 36,315,168 | 0 | 0 | 1 | 1 |
If I have a web app that uses only one language and it is not English, is it correct to use the model field's verbose_name attribute for the field description (that will be printed in forms)? I don't use the translation modules.
|
How to right code in Django and AngularJS?
| 36,317,441 | 1 | 2 | 86 | 0 |
python,angularjs,django
|
It really depends on what you want to do, there are many ways, but yeah, ideally your routes and templates are handled in AngularJS, Angular requests information from Django or POSTs information, and they communicate using JSON.
You need to put your Angular templates and files in the static folder, you can use grunt or django-pipeline to better manage all the files you have.
| 0 | 0 | 0 | 0 |
2016-03-30T18:36:00.000
| 1 | 1.2 | true | 36,317,027 | 0 | 0 | 1 | 1 |
Now I am working with Django on the server side and jQuery on the client. Django view functions return templates with JS code.
How should it look using AngularJS? Should I return JSON from Django and render the response using JS?
Many thanks!
|
flask sqlalchemy paginate() function does not get the same elements when run twice
| 36,334,148 | 0 | 0 | 193 | 1 |
python,pagination,flask-sqlalchemy
|
OK, I don't know the full answer to this question, but ordering the query (with order_by) solved my problem. I am still interested to know why paginate does not apply an order by itself, because it basically means that without an order statement, paginate cannot be used to iterate through all elements.
cheers
carl
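The underlying reason is that LIMIT/OFFSET without ORDER BY lets the database return rows in whatever order it likes, so pages can overlap or skip rows between calls. A small stdlib demonstration of the ordered version (plain sqlite3 rather than Flask-SQLAlchemy's paginate(), so the names differ from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO items (name) VALUES (?)",
                 [("item%d" % i,) for i in range(10)])

def page(page_number, per_page):
    # ORDER BY makes the row order deterministic, so pages never
    # overlap or skip rows between repeated calls.
    offset = (page_number - 1) * per_page
    return conn.execute(
        "SELECT id FROM items ORDER BY id LIMIT ? OFFSET ?",
        (per_page, offset)).fetchall()
```

With the ORDER BY in place, calling the same page twice always returns the same rows.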
| 0 | 0 | 0 | 0 |
2016-03-30T21:04:00.000
| 1 | 0 | false | 36,319,702 | 0 | 0 | 1 | 1 |
I am using a simple sqlalchemy paginate statement like this
items = models.Table.query.paginate(page, 100, False)
with page = 1. When running this command twice I get different outputs. If I run it with fewer elements (e.g. 10) it gives me the same outputs when run multiple times. I thought for a paginate command to work it has to result in the same set each time it is called?
cheers
carl
|
What is the difference between click() and tap() method in appium while writing test cases
| 51,953,768 | 1 | 1 | 2,478 | 0 |
appium,python-appium
|
The tap() method belongs to the AppiumDriver class, while the click() method belongs to the WebDriver class.
I think it is better to use the driver.tap() method as it is more closely bound to the mobile scenario, and we can use it on both emulators and real devices; the result is the same.
| 0 | 0 | 0 | 0 |
2016-03-31T10:00:00.000
| 2 | 0.099668 | false | 36,330,139 | 0 | 0 | 1 | 2 |
Suppose I have to tap on a menu navigation button. I have tried click() and tap(); both are working fine, but which one is preferred (for both emulator and physical device)?
|
What is the difference between click() and tap() method in appium while writing test cases
| 36,330,191 | 0 | 1 | 2,478 | 0 |
appium,python-appium
|
I suppose that click is for computers and tap is for touch screen/mobile devices (phone, tablet).
| 0 | 0 | 0 | 0 |
2016-03-31T10:00:00.000
| 2 | 0 | false | 36,330,139 | 0 | 0 | 1 | 2 |
Suppose I have to tap on a menu navigation button. I have tried click() and tap(); both are working fine, but which one is preferred (for both emulator and physical device)?
|
py.test timeout/keepalive/heartbeat?
| 38,299,299 | 0 | 0 | 158 | 0 |
python,python-3.x,pytest,codeship
|
Not sure if I understood your question correctly, but if your concern was py.test gobbling your output, then run pytest using the -s option.
| 0 | 0 | 0 | 1 |
2016-04-01T17:13:00.000
| 1 | 0 | false | 36,362,122 | 0 | 0 | 1 | 1 |
I'm attempting to add a test to my unit tests that is significantly more complicated and takes longer to perform. The idea would be to run this longer test infrequently. However, the test itself takes longer than the 10 minute timeout that codeship currently has, and since it doesn't fail/pass within 10 minutes my codeship will show as failing.
Is there any way to get py.test to print out a heartbeat or something every x minutes to keep codeship happy? Obviously any of my output and logging gets gobbled up by py.test itself, so that isn't helpful.
Thanks!
|
How do I properly set up a django project in OSx?
| 36,373,089 | 0 | 0 | 42 | 0 |
python,django,macos,path
|
You shouldn't do anything with paths at all, other than setting up a virtualenv. If you have pip, you can install virtualenv with sudo pip install virtualenv (and note that this is the last thing you should install with sudo pip; everything else after that should be inside an activated virtualenv).
| 0 | 0 | 0 | 0 |
2016-04-02T12:05:00.000
| 2 | 0 | false | 36,373,009 | 0 | 0 | 1 | 1 |
I have a background in programming with python but I really would like to start playing around with django but I have had difficulty setting up a project.
I know that I have django installed but the command django-admin is not recognized. I believe this has something to do with the way my path is set up. I still feel clumsy about setting up correct paths, so a clear explanation of this would be greatly appreciated.
I also understand that it is advisable to set everything up within a virtual environment. I believe that I have pip installed which should enable virtualenv to be recognized, unfortunately the virtualenv command is also not recognized. I feel like I'm missing something very basic. Any help on setting up these basics would be very greatly appreciated.
|
Estimated Cost field is missing in Appengine's new Developer Console
| 36,388,402 | 1 | 0 | 18 | 0 |
google-app-engine,google-app-engine-python
|
App Engine > Dashboard
This view shows how much you are charged so far during the current billing day, and how many hours you still have until the reset of the day. This is equivalent to what the old console was showing, except there is no "total" line under all charges.
App Engine > Quotas
This view shows how much of each daily quota have been used.
App Engine > Quotas > View Usage History
This view gives you a summary of costs for each of the past 90 days. Clicking on a day gives you a detailed break-down of all charges for that day.
| 0 | 1 | 0 | 0 |
2016-04-03T14:16:00.000
| 1 | 0.197375 | false | 36,386,528 | 0 | 0 | 1 | 1 |
The old (non-Ajax) Google App Engine Developer Console dashboard showed the estimated cost for the last 'n' hours. This was useful to quickly tell how the App Engine is doing vis-a-vis the daily budget.
This field seems to be missing in the new Appengine Developer Console. I have tried to search various tabs on the Console and looked for documentation, but without success.
Looking for any pointers as to how do I get to this information in the new Console and any help/pointers are highly appreciated !
|
Several (two) Flask objects in same application
| 36,445,638 | 1 | 3 | 1,414 | 0 |
python,flask
|
OK, the solution is:
app.run(threaded=True, ...)
Now it is possible to process several requests at the same time, for example one for video streaming, another for video parameter tuning, and so on.
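app.run(threaded=True) lets a single Flask app handle the stream and the command endpoints concurrently. If two separate servers on two ports are still wanted, the underlying pattern is to run each server in its own thread. Here is a minimal sketch using the stdlib's http.server instead of Flask (ports are chosen by the OS in this sketch; a real setup would use 8081 and 8082):

```python
import threading
from http.server import HTTPServer, BaseHTTPRequestHandler

def make_handler(body):
    """Build a trivial handler that always answers with `body`."""
    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.end_headers()
            self.wfile.write(body)
        def log_message(self, *args):  # keep output quiet
            pass
    return Handler

# Port 0 asks the OS for any free port; use 8081/8082 in a real deployment.
stream_srv = HTTPServer(("127.0.0.1", 0), make_handler(b"stream"))
command_srv = HTTPServer(("127.0.0.1", 0), make_handler(b"command"))

# Each server blocks in serve_forever(), so each gets its own thread.
for srv in (stream_srv, command_srv):
    threading.Thread(target=srv.serve_forever, daemon=True).start()
```

The same threading idea applies if each server is a Flask app: run each app.run(...) in its own thread instead of calling it twice in the main thread.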
| 0 | 0 | 0 | 0 |
2016-04-05T13:11:00.000
| 2 | 1.2 | true | 36,427,384 | 0 | 0 | 1 | 1 |
I need two HTTP servers in the same Python application (on two different ports, 8081 and 8082):
One for a video stream coming from a webcam and sent to the web browser;
A second one for commands (quality, filter, etc.)
I haven't managed to define two Flask objects, because 'app.run' is blocking.
Is it possible, or do I need to use Flask plus a BaseHTTPServer?
Best regards.
|
Create object in Django once when the server starts and use it throughout
| 68,335,159 | 0 | 3 | 953 | 0 |
python,django,object,static,django-views
|
Try to make a fixture, and before running the server use manage.py loaddata.
| 0 | 0 | 0 | 0 |
2016-04-05T18:31:00.000
| 1 | 0 | false | 36,434,276 | 0 | 0 | 1 | 1 |
I am working on Django project. I need to create an object only once when the server initially starts. I want to use the methods associated with this particular object everytime a user accesses a particular page, that is I want the attributes and methods of this object to be accessible in views without having to instantiate the object again and again.
How exactly do I do it?
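Besides fixtures, a common pattern here is a lazily created module-level singleton: Django imports a module only once per process, so every view sees the same object. A framework-free sketch (ExpensiveObject is a placeholder for whatever you need to build once, not a real Django class):

```python
from functools import lru_cache

class ExpensiveObject:
    """Placeholder for something costly to construct (a model, an index, ...)."""
    def __init__(self):
        self.ready = True

@lru_cache(maxsize=None)
def get_shared_object():
    # Built on the first call, cached for every later call in this process.
    return ExpensiveObject()
```

Any view can then call get_shared_object() and always receive the same instance without re-instantiating it.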
|
Python Selenium: How to get fresh data from the page get refreshed periodically?
| 36,446,094 | 0 | 1 | 1,158 | 0 |
python,ajax,selenium
|
You can implement a so-called smart wait:
Identify the web element on the page that updates most frequently and is useful to you.
Get data from it using JavaScript (note that execute_script needs a return statement to hand the value back), e.g.:
driver.execute_script('return document.getElementById("demo").innerHTML')
Wait for a certain time, get it again and compare with the previous result. If it changed, refresh the page, fetch data, etc.
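The wait-compare-repeat step can be factored into a small helper. This is a framework-free sketch; in real code, `fetch` would be a lambda wrapping driver.execute_script(...):

```python
import time

def wait_for_change(fetch, previous, interval=1.0, timeout=30.0):
    """Poll fetch() until its value differs from `previous` or we time out.

    Returns the new value, or None if nothing changed within `timeout`.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        current = fetch()
        if current != previous:
            return current
        time.sleep(interval)
    return None
```

With Selenium, the call would look like wait_for_change(lambda: driver.execute_script('return ...'), last_value).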
| 0 | 0 | 1 | 0 |
2016-04-06T08:11:00.000
| 4 | 0 | false | 36,445,162 | 0 | 0 | 1 | 2 |
I have already written a script that opens Firefox with a URL, scrape data and close. The page belongs to a gaming site where page refresh contents via Ajax.
Now one way is to fetch those AJAX requests and get data or refresh page after certain period of time within open browser.
For the latter case, how should I do it? Should I call the method after a certain time period, or what?
|
Python Selenium: How to get fresh data from the page get refreshed periodically?
| 36,449,601 | 0 | 1 | 1,158 | 0 |
python,ajax,selenium
|
Make sure to call findElement() again after waiting, because otherwise you might not get a fresh instance. Or use a page factory, which will get a fresh copy of the WebElement for you every time the instance is accessed.
| 0 | 0 | 1 | 0 |
2016-04-06T08:11:00.000
| 4 | 0 | false | 36,445,162 | 0 | 0 | 1 | 2 |
I have already written a script that opens Firefox with a URL, scrape data and close. The page belongs to a gaming site where page refresh contents via Ajax.
Now one way is to fetch those AJAX requests and get data or refresh page after certain period of time within open browser.
For the latter case, how should I do it? Should I call the method after a certain time period, or what?
|
Does Python have a zip class that is compatible with Java's java.util.zip Inflater and Deflater classes?
| 36,452,775 | 2 | 0 | 594 | 0 |
java,python
|
If you meant whether there is something in Python to handle this, there is. For ZIP archives there is the zipfile module, and for the raw zlib/DEFLATE streams that java.util.zip's Deflater and Inflater produce by default there is the zlib module. Python comes with all batteries included.
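Concretely: Java's Deflater and Inflater use the zlib-wrapped DEFLATE format (RFC 1950) by default, and Python's stdlib zlib module speaks exactly that format, so byte arrays can be exchanged directly between the two sides:

```python
import zlib

# zlib.compress produces the same zlib-wrapped DEFLATE stream that
# java.util.zip.Deflater emits by default, so bytes compressed here
# can be inflated by Java's Inflater, and vice versa.
original = b"data produced by Python, consumed by Java (or vice versa)"
compressed = zlib.compress(original)
restored = zlib.decompress(compressed)
```

If the Java side constructs its Deflater/Inflater with nowrap=true (raw DEFLATE, no zlib header), the Python side would need the corresponding wbits=-15 option instead.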
| 0 | 0 | 0 | 1 |
2016-04-06T13:24:00.000
| 2 | 1.2 | true | 36,452,520 | 0 | 0 | 1 | 1 |
I am working with byte arrays in Java 1.7. I am using java.util.zip's Inflater and Deflater classes to compress the data. I have to interface with data generated by Python code.
Does Python have the capability to compress data that can be uncompressed by Java's Inflater class, and the capability to decompress data that has been compressed by Java's Deflater class?
|
Verify appium current activity response
| 36,461,041 | 0 | 0 | 438 | 0 |
android-activity,automation,appium,python-appium
|
There are two ways to do it that I can think of.
UI perspective:
Capture a screenshot of the webview with the 200 response. Let's call it expectedScreen.png.
Capture a screenshot of the response under test (be it 200, 400, etc.). Let's call this finalScreen.png.
Compare both images to verify/assert.
API perspective: Since the activity supposed to be displayed rarely changes (transitions between the different activities of your application are fixed by design), verifying the current activity is a less important check during the test. You can instead verify these using API calls and then (if you get a proper response) look for the presence of elements on the screen accordingly.
| 0 | 0 | 1 | 0 |
2016-04-06T19:19:00.000
| 1 | 0 | false | 36,460,455 | 0 | 0 | 1 | 1 |
I verify whether the launched activity is in the browser or in the app by comparing with the current activity.
activity = driverAppium.current_activity
And then I verify if activity matches with browser activity name e.g. org.chromium.browser...
But can I verify the http response on the webpage e.g. 200 or 404?
With the above, the test always passes even though the webpage didn't load or got a null response.
Can I verify both the current activity and the response?
|
how to use --patterns for tests in django
| 36,485,158 | 0 | 0 | 528 | 0 |
python,django,testing,automated-tests
|
OK, so the key is quite simple: the file name is not supposed to start with test.
I named it blub_test.py and then called it with
./manage.py test --pattern="blub_test.py"
| 0 | 0 | 0 | 0 |
2016-04-06T21:41:00.000
| 1 | 1.2 | true | 36,462,875 | 0 | 0 | 1 | 1 |
I have a test file with tests in it which will not be called with the regular
manage.py test
command, only when I specifically tell django to do so.
So my file lives in the same folder as tests.py and its name is test_blub.py
I tried it with
manage.py test --pattern="test_*.py"
Any idea?
|
what should I use for android app development : Java or Python?
| 36,472,877 | 0 | 0 | 1,319 | 0 |
java,android,python-3.x
|
Java, without a doubt.
The native language for Android development is Java, so plan on going with Java.
| 0 | 0 | 0 | 0 |
2016-04-07T09:59:00.000
| 5 | 0 | false | 36,472,751 | 0 | 0 | 1 | 2 |
I am a beginner in Android development,
but I am confused between the two technologies, Python and Java.
|
what should I use for android app development : Java or Python?
| 36,472,923 | 0 | 0 | 1,319 | 0 |
java,android,python-3.x
|
Java is the main language used in Android apps. You can create apps using Python as well, but I recommend you use Java as it is more orthodox, and you can find tutorials for it too.
| 0 | 0 | 0 | 0 |
2016-04-07T09:59:00.000
| 5 | 1.2 | true | 36,472,751 | 0 | 0 | 1 | 2 |
I am a beginner in Android development,
but I am confused between the two technologies, Python and Java.
|
OneNote's PagesUrl Not Including All Pages in a Section
| 36,504,337 | 0 | 2 | 61 | 0 |
python,api,get,onenote,onenote-api
|
Yesterday (2016/04/08) there was an incident with the OneNote API which prevented us from updating the list of pages. This was resolved roughly at 11 PM PST and the API should be returning all pages now.
| 0 | 0 | 1 | 0 |
2016-04-08T01:13:00.000
| 1 | 0 | false | 36,489,891 | 1 | 0 | 1 | 1 |
I want to get a list of all the pages in a given section for a given notebook. I have the id for the section I want and use it in a GET call to obtain a dictionary of the section's information. One of the keys in the dictionary is "pagesUrl". A GET call on this returns a list of dictionaries where there's one dictionary for each page in this section.
Up until yesterday, this worked perfectly. However, as of today, pagesUrl only returns pages created within the last minute or so. Anything older isn't seen. Does anyone know why this is happening?
|
How to get OAuth token without user consent?
| 36,541,353 | 0 | 0 | 106 | 0 |
python,oauth,oauth2client
|
Update the grant_type to implicit
| 0 | 0 | 1 | 0 |
2016-04-09T19:59:00.000
| 1 | 0 | false | 36,521,976 | 0 | 0 | 1 | 1 |
The standard (for me) OAuth flow is:
generate url flow.step1_get_authorize_url() and ask user to allow app access
get the code
get the credentials with flow.step2_exchange(auth_code)
But I came across another service, where I just need to initiate a POST request to token_uri with client_id and client_secret passed as form values (application/x-www-form-urlencoded); grant_type is client_credentials and scope is also passed as a form field value.
Does oauth2client library supports it?
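For reference, the raw token request described here is simple enough to build without any OAuth library. This sketch only constructs the request (it does not send it); the token_uri, client id, secret, and scope are placeholders, not real values:

```python
from urllib.parse import urlencode
from urllib.request import Request

token_uri = "https://example.com/oauth/token"  # placeholder

# Form-encoded body, exactly as the service described in the question expects.
body = urlencode({
    "grant_type": "client_credentials",
    "client_id": "my-client-id",          # placeholder
    "client_secret": "my-client-secret",  # placeholder
    "scope": "read",
}).encode("ascii")

request = Request(token_uri, data=body, headers={
    "Content-Type": "application/x-www-form-urlencoded",
})
# urllib.request.urlopen(request) would return the JSON token response.
```

Passing data to Request makes it a POST automatically, which matches the flow described above.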
|
How to run Appium script in multiple Android device/emulators?
| 59,561,249 | 0 | 1 | 447 | 0 |
android,python,python-appium
|
You need to create multiple UDIDs and modify them accordingly in the code. In order to launch on multiple devices you need to create multiple instances of the Appium server, each listening on a different port (e.g. 4000, 4001, etc.).
| 0 | 0 | 0 | 1 |
2016-04-11T05:06:00.000
| 1 | 0 | false | 36,540,220 | 0 | 0 | 1 | 1 |
I am trying Appium using the Python language. I have written a simple login script in Python; it executes perfectly on one Android device/emulator using Appium. But I have no idea how to run it on multiple devices/emulators. I read some forums but did not get any solutions (I am very new to automation and Appium).
Please help me with detailed steps or procedure.
Thank you.
|
Unicode not rendering in Django-Rest-Framework browsable API in Chrome
| 36,549,592 | 2 | 1 | 256 | 0 |
python,django,unicode,django-rest-framework,content-type
|
You likely miss the system language settings available within Django. Depending on your stack (Apache and supervisor both strip default system settings) you will need to define them explicitly.
The reason is that unicode is Python's internal representation. You need to encode unicode into an output format; it could be utf-8, or any ISO code.
Note that this is different from the header # -*- coding: utf-8 -*-, whose goal is to decode the file into unicode using the utf-8 charset. It doesn't mean that any output within that file will be converted using utf-8.
| 0 | 0 | 0 | 0 |
2016-04-11T12:54:00.000
| 1 | 0.379949 | false | 36,549,300 | 0 | 0 | 1 | 1 |
I am trying to display a unicode value u'\u20b9' from my SQLite database, using the browsable API of django-rest-framework 3.1.3
I don't get the expected value ₹ for currency_symbol, it returns the following, depending on the browser:
Chrome 49.0.2623.110 (64-bit):
Browsable API: "" (Blank String)
JSON: "₹"
Safari 9.1 (10601.5.17.4):
Browsable API: ₹
JSON: "₹"
CURL:
JSON: ₹
How do I get it to consistently display ₹?
|
DynamoDB update entire column efficiently
| 36,564,082 | 1 | 2 | 643 | 1 |
database,python-2.7,amazon-dynamodb,insert-update
|
At this point you cannot do this; we have to pass a key (the partition key, or the partition key and sort key) to update an item.
Currently, the only way to do this is to scan the table with filters to get all the items whose "updated" column needs changing, and collect their respective keys.
Pass those keys and update the values.
Hopefully, in the future AWS will come up with something better.
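The scan-then-update loop sketched above looks like this. The snippet simulates the table with an in-memory list of dicts (no real boto3 calls and no AWS credentials), so only the structure of the approach is shown; "pk" stands in for the partition key:

```python
# Each dict simulates a DynamoDB item; "pk" simulates the partition key.
table = [
    {"pk": 1, "updated": 1},
    {"pk": 2, "updated": 0},
    {"pk": 3, "updated": 1},
]

def scan_keys_needing_reset():
    """Stand-in for a filtered Scan: collect keys of items not yet 0."""
    return [item["pk"] for item in table if item["updated"] != 0]

def update_item(pk, updated):
    """Stand-in for UpdateItem, which always requires the key."""
    for item in table:
        if item["pk"] == pk:
            item["updated"] = updated

# Reset every "updated" attribute to 0, one key at a time.
for key in scan_keys_needing_reset():
    update_item(key, 0)
```

With boto3 the same shape applies: a paginated scan with a FilterExpression to gather keys, then one update_item call per key.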
| 0 | 0 | 0 | 0 |
2016-04-12T02:49:00.000
| 2 | 1.2 | true | 36,562,764 | 0 | 0 | 1 | 2 |
I have a DynamoDB table called 'Table'. There are a few columns in the table, including one called 'updated'. I want to set the entire 'updated' field to '0' without having to provide a key, to avoid a fetch and search in the table.
I tried batch write, but seems like update_item required Key inputs. How could I update the entire column to have every value as 0 efficiently please?
I am using a python script.
Thanks a lot.
|
DynamoDB update entire column efficiently
| 36,564,129 | 0 | 2 | 643 | 1 |
database,python-2.7,amazon-dynamodb,insert-update
|
If you can get the partition keys, then for each partition key you can update the item.
| 0 | 0 | 0 | 0 |
2016-04-12T02:49:00.000
| 2 | 0 | false | 36,562,764 | 0 | 0 | 1 | 2 |
I have a DynamoDB table called 'Table'. There are a few columns in the table, including one called 'updated'. I want to set the entire 'updated' field to '0' without having to provide a key, to avoid a fetch and search in the table.
I tried batch write, but seems like update_item required Key inputs. How could I update the entire column to have every value as 0 efficiently please?
I am using a python script.
Thanks a lot.
|
UnicodeDecodeError: 'utf8' codec can't decode byte
| 36,579,883 | 1 | 0 | 2,359 | 0 |
mysql,json,django,python-2.7,utf-8
|
The # -*- encoding: utf-8 -*- header only changes the encoding of the source file, meaning you can define variables/comments using non-ASCII chars.
You can try to use
json.dumps(..., ensure_ascii=False, encoding="ISO-8859-1")
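A quick illustration of what ensure_ascii changes (Python 3 shown; the encoding keyword argument only exists on Python 2's json.dumps):

```python
import json

data = {"symbol": "\u00fa\u00f1"}  # the characters "úñ"

# Default: every non-ASCII character becomes a \uXXXX escape.
escaped = json.dumps(data)

# ensure_ascii=False: the characters are kept as-is in the output string.
raw = json.dumps(data, ensure_ascii=False)
```

Either form is valid JSON; browsers decode the \uXXXX escapes back to the same characters.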
| 0 | 0 | 0 | 0 |
2016-04-12T16:22:00.000
| 1 | 0.197375 | false | 36,578,931 | 0 | 0 | 1 | 1 |
I'm using Django and ajax to print data to an HTML table with jQuery and JSON.
It was working until new data came and had "ú@ñ" type of characters and I got:
UnicodeDecodeError: 'utf8' codec can't decode byte 0xf9 in position 4: invalid start byte
I've read and tried many different possible reasons and it's still not working.
I've tried:
saving my file in UTF-8 in Sublime Text and with a file -bi myfile I still get text/x-python; charset=us-ascii
using # -*- encoding: utf-8 -*- at the beginning of my views.py
changing MySQL charset to CHARSET=utf8mb4 from CHARSET=latin1
json.dumps(list(rows), default=datetime_handler), content_type="application/json", encoding='utf-8')
I'd rather avoid using .decode() for every string in my data but if there's no other solution, it's what I'll have to do.
|
Performance of D3 treemap with large amounts of data
| 36,601,249 | 0 | 0 | 170 | 0 |
javascript,python,ajax,d3.js,flask
|
If simply loading the json is too heavy for the browser, then doing a complete rendering server-side would not help, as the rendered object would one way or another include the same amount of data.
But I guess you cannot show that much data at once. Since you are going for a zoomable visualizer, you should probably only load the data that is visible at the current scale, within the current window (just like any map application does: you can't just load the whole world-map at street level at once, but zooming can still go smoothly). Quadtrees are normally quite useful for this task.
| 0 | 0 | 0 | 0 |
2016-04-13T13:34:00.000
| 1 | 0 | false | 36,600,093 | 0 | 0 | 1 | 1 |
So my issue is that I'm passing a large JSON file (I'm not sure of the exact size, but it's very very big) into a D3 zoomable treemap.
I'm doing this by way of AJAX call to a Python backend. The performance of my browser just degrades completely when I load the file in, it takes 5-10 mins for it to even appear.
I'm just wondering are there any options that will help with performance? Rendering it server side perhaps?
This is the first ever time I've run into a performance issue like this so I'm really not sure where to go. Any help would be appreciated.
|
Error: Opening Robot Framework log failed
| 71,966,776 | 0 | 25 | 41,528 | 0 |
javascript,python,robotframework
|
For me, editing JAVA_ARGS in /etc/default/jenkins didn't work. To make the changes permanent on Ubuntu 18.04 LTS when running Jenkins as a service, I did the following:
Run service jenkins status and from the second line take the path to the actual service configuration file; mine was: /lib/systemd/system/jenkins.service
Run sudo vim /lib/systemd/system/jenkins.service find property Environment= under comment Arguments for the Jenkins JVM
Paste: -Dhudson.model.DirectoryBrowserSupport.CSP=\"sandbox allow-scripts; default-src 'none'; img-src 'self' data: ; style-src 'self' 'unsafe-inline' data: ; script-src 'self' 'unsafe-inline' 'unsafe-eval' ;\" behind -Djava.awt.headless=true
Run sudo service jenkins stop; you should see the following warning: Warning: The unit file, source configuration file or drop-ins of jenkins.service changed on disk. Run 'systemctl daemon-reload' to reload units.
Run sudo systemctl daemon-reload
Run sudo service jenkins start
You should be now able to browse robot framework results after restart.
| 0 | 0 | 0 | 0 |
2016-04-13T19:05:00.000
| 9 | 0 | false | 36,607,394 | 0 | 0 | 1 | 3 |
If I open any .html file that is generated by Robot Framework and try to convert it to any other format (for example, docx format) using either any Python code or the built-in command-line tools that are available, I get the error below:
Opening Robot Framework log failed
• Verify that you have JavaScript enabled in your browser.
• Make sure you are using a modern enough browser. Firefox 3.5, IE 8, or equivalent is required, newer browsers are recommended.
• Check are there messages in your browser's JavaScript error log. Please report the problem if you suspect you have encountered a bug.
I am getting this error even though I have already enabled JavaScript in my browser. I am using Mozilla Firefox version 45.0.2 on Mac.
Can anyone please help me to solve this issue?
|
Error: Opening Robot Framework log failed
| 58,062,311 | 0 | 25 | 41,528 | 0 |
javascript,python,robotframework
|
The accepted answer works for me but is not persistent. To make it persistent, modify the file /etc/default/jenkins and, after the JAVA_ARGS line, add the following line:
JAVA_ARGS="$JAVA_ARGS -Dhudson.model.DirectoryBrowserSupport.CSP=\"sandbox allow-scripts; default-src 'none'; img-src 'self' data: ; style-src 'self' 'unsafe-inline' data: ; script-src 'self' 'unsafe-inline' 'unsafe-eval' ;\""
The change will apply and persist after a reboot.
| 0 | 0 | 0 | 0 |
2016-04-13T19:05:00.000
| 9 | 0 | false | 36,607,394 | 0 | 0 | 1 | 3 |
If I open any .html file generated by Robot Framework and try to convert it to any other format (for example, docx format) using either Python code or the available built-in command-line tools, I get the error below:
Opening Robot Framework log failed
• Verify that you have JavaScript enabled in your browser.
• Make sure you are using a modern enough browser. Firefox 3.5, IE 8, or equivalent is required, newer browsers are recommended.
• Check are there messages in your browser's JavaScript error log. Please report the problem if you suspect you have encountered a bug.
I am getting this error even though I have already enabled JavaScript in my browser. I am using Mozilla Firefox version 45.0.2 on Mac.
Can anyone please help me to solve this issue?
|
Error: Opening Robot Framework log failed
| 53,811,785 | 2 | 25 | 41,528 | 0 |
javascript,python,robotframework
|
The easiest thing to do (if there are no security concerns) is also a permanent fix:
open the jenkins.xml file and
add the following
<arguments>-Xrs -Xmx256m -Dhudson.lifecycle=hudson.lifecycle.WindowsServiceLifecycle -Dhudson.model.DirectoryBrowserSupport.CSP="" -jar "%BASE%\jenkins.war" --httpPort=8080 --webroot="%BASE%\war"</arguments>
restart the Jenkins server
rerun your Jenkins jobs to see the result files.
If we use the script console instead, the changes will be lost every time you restart the Jenkins server.
| 0 | 0 | 0 | 0 |
2016-04-13T19:05:00.000
| 9 | 0.044415 | false | 36,607,394 | 0 | 0 | 1 | 3 |
If I open any .html file generated by Robot Framework and try to convert it to any other format (for example, docx format) using either Python code or the available built-in command-line tools, I get the error below:
Opening Robot Framework log failed
• Verify that you have JavaScript enabled in your browser.
• Make sure you are using a modern enough browser. Firefox 3.5, IE 8, or equivalent is required, newer browsers are recommended.
• Check are there messages in your browser's JavaScript error log. Please report the problem if you suspect you have encountered a bug.
I am getting this error even though I have already enabled JavaScript in my browser. I am using Mozilla Firefox version 45.0.2 on Mac.
Can anyone please help me to solve this issue?
|
A different virtualenv for each Django app
| 36,609,193 | 5 | 2 | 89 | 0 |
python,django,virtualenv
|
The entire project is loaded into the same Python process. You can't have two Python environments active at the same time in the same process. So the answer is no - you can't have concurrent virtual environments for apps in the same project.
| 0 | 0 | 0 | 0 |
2016-04-13T20:39:00.000
| 1 | 1.2 | true | 36,609,150 | 0 | 0 | 1 | 1 |
In Django, a project can contain many apps. Can each app have its own virtualenv? Or do all the apps in a Django project have to use the project's virtualenv?
|
Problems with database after cloning Django app from Github
| 36,613,541 | 0 | 1 | 315 | 1 |
python,django,git,github
|
The django_session table should get initialized when you run your first migrations. You said that you made your migrations, but did you run them (with python manage.py migrate)? Also, do you have django.contrib.sessions in the INSTALLED_APPS in your settings file? That is the app that owns the session table.
| 0 | 0 | 0 | 0 |
2016-04-13T20:42:00.000
| 1 | 0 | false | 36,609,201 | 0 | 0 | 1 | 1 |
I have just cloned a Django app from Github to a local directory. I know for a fact that the app works because I've run it on others' computers.
When I run the server, I can see the site and register for an account. This works fine (I get a confirmation email). But then my login information causes an error because the DB appears not to have been configured properly on my machine. I get the following errors:
/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/django/db/backends/utils.py in execute
return self.cursor.execute(sql, params) ...
▶ Local vars
/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/django/db/backends/sqlite3/base.py in execute
return Database.Cursor.execute(self, query, params) ...
▶ Local vars
The above exception (no such table: django_session) was the direct cause of the following exception:
(It then lists a bunch of problems with local vars).
I tried making migrations with every part of the app but this didn't appear to fix anything.
|
request.data in DRF vs request.body in Django
| 60,892,980 | 8 | 30 | 34,630 | 0 |
python,django,django-rest-framework
|
In rest_framework.request.Request:
request.body is bytes and is always available, so there is no limit on its usage
request.data is a "property" method that can raise an exception,
but it gives you parsed data, which is more convenient
However, the world is not perfect, and here is a case where request.body wins.
Consider this example:
If the client sends:
content-type: text/plain
and your REST endpoint doesn't accept text/plain,
your server will return 415 Unsupported Media Type
if you access request.data.
But what if you know that json.loads(request.body) is valid JSON?
Then you want to use that, and only request.body allows it.
FYI: the described example is a message of an AWS SNS notification sent by AWS to an HTTP endpoint. AWS SNS acts as the client here and, of course, this case is a bug in their SNS.
Another example of the benefit of request.body is the case where you have your own custom parsing and use your own MIME format.
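The text/plain fallback described above can be sketched without DRF at all. Here parse_body is a hypothetical helper (not part of any framework) that decodes the raw body bytes as JSON regardless of the declared content type:

```python
import json

def parse_body(body: bytes):
    # Hypothetical helper: parse the raw request body as JSON even when
    # the client declared a content type the framework would reject.
    try:
        return json.loads(body.decode("utf-8"))
    except (UnicodeDecodeError, ValueError):
        return None

print(parse_body(b'{"Type": "Notification"}'))  # → {'Type': 'Notification'}
```

A DRF view could apply the same idea by calling json.loads(request.body) instead of touching request.data.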
| 0 | 0 | 1 | 0 |
2016-04-14T07:22:00.000
| 2 | 1 | false | 36,616,309 | 0 | 0 | 1 | 1 |
Django REST framework introduces a Request object that extends the regular HttpRequest, this new object type has request.data to access JSON data for 'POST', 'PUT' and 'PATCH' requests.
However, I can get the same data by accessing request.body parameter which was part of original Django HttpRequest type object.
One difference I see is that request.data can be accessed only once. This restriction doesn't apply to request.body.
My question is: what are the differences between the two? Which is preferred, and why does DRF provide an alternative way of doing the same thing, when there should be one — and preferably only one — obvious way to do it?
UPDATE: Limiting the use case to one where the body is always JSON, never XML, an image, or conventional form data. What are the pros/cons of each?
|
Selenium: How to work with already opened web application in Chrome
| 36,622,332 | 0 | 0 | 127 | 0 |
python,selenium
|
Selenium doesn't start Chrome in incognito mode; it just creates a new, fresh profile in the temp folder. You could force Selenium to use the default profile, or you could launch Chrome with the debug port open and let Selenium connect to it. There is also a third way, which is to preinstall the webdriver extension in Chrome. These are the only ways I've encountered to automate Chrome with Selenium.
| 0 | 0 | 1 | 0 |
2016-04-14T08:51:00.000
| 1 | 0 | false | 36,618,103 | 0 | 0 | 1 | 1 |
I'm looking for a solution that could help me automate an already-open application in the Chrome web browser using Selenium and the Python webdriver. The issue is that the application is highly secured, and if it is opened in incognito mode, as Selenium tries to do, it sends a special code to my phone. This defeats the whole purpose. Can someone provide a hacky way, another workaround, or an open-source tool to automate the application?
|
How to implement async page refresh in django?
| 36,641,177 | 1 | 0 | 914 | 0 |
python,django,asynchronous
|
An async page refresh can only be done on the front end with JavaScript; Django will only render the template or return the HTTP response.
P.S.: You can do a full page refresh via backend code (Django) or any other backend framework, but not an asynchronous partial update.
| 0 | 0 | 0 | 0 |
2016-04-15T07:40:00.000
| 2 | 0.099668 | false | 36,641,083 | 0 | 0 | 1 | 1 |
I have a page that displays the last 10 requests to the server. Requests are models that are saved by the middleware.
I need to update the page with new requests, without refreshing.
I know I can use AJAX and ping the server periodically, but surely there should be a better approach.
|
Django app deployment on shared hosting
| 52,244,829 | 3 | 5 | 6,154 | 0 |
python,django
|
I know it has been a while since I asked the question. I finally fixed this by changing hosts. I went for DigitalOcean (created a new droplet), which supports WSGI. I deployed the app using Gunicorn (application server) and Nginx (proxy server).
It is not a good idea to deploy a Django app on shared hosting, as you will be limited, especially in installing the required packages.
| 0 | 0 | 0 | 0 |
2016-04-15T10:46:00.000
| 3 | 1.2 | true | 36,645,076 | 0 | 0 | 1 | 2 |
I am trying to deploy a Django app on HostGator shared hosting. I followed the HostGator Django installation wiki and deployed my app. The issue is that I am getting a 500 internal error page when entering the site URL in the browser. I contacted the support team but could not get enough info on troubleshooting the error Premature end of script headers: fcgi. This was the error found in the server error log.
I installed Django 1.9.5 on the server, and according to the Django documentation it does not support FastCGI.
So my question: could the 500 error be caused by the fact that I am running Django 1.9.5 on the server and it does not support FastCGI? If so, do I need to install a lower version of Django to work with the FastCGI supported by HostGator shared hosting?
At first I thought the error was caused by my .htaccess file, but it has no issue, from what I heard from the support team.
Any leads on how I can get the app up and running will be appreciated. This is my first time deploying a Django app. Thank you in advance.
|
Django app deployment on shared hosting
| 36,646,426 | 0 | 5 | 6,154 | 0 |
python,django
|
As you say, Django 1.9 does not support FastCGI.
You could try using Django 1.8, which is a long-term support release and still supports FastCGI.
Or you could switch to a different host that supports deploying Django 1.9 with wsgi.
| 0 | 0 | 0 | 0 |
2016-04-15T10:46:00.000
| 3 | 0 | false | 36,645,076 | 0 | 0 | 1 | 2 |
I am trying to deploy a Django app on HostGator shared hosting. I followed the HostGator Django installation wiki and deployed my app. The issue is that I am getting a 500 internal error page when entering the site URL in the browser. I contacted the support team but could not get enough info on troubleshooting the error Premature end of script headers: fcgi. This was the error found in the server error log.
I installed Django 1.9.5 on the server, and according to the Django documentation it does not support FastCGI.
So my question: could the 500 error be caused by the fact that I am running Django 1.9.5 on the server and it does not support FastCGI? If so, do I need to install a lower version of Django to work with the FastCGI supported by HostGator shared hosting?
At first I thought the error was caused by my .htaccess file, but it has no issue, from what I heard from the support team.
Any leads on how I can get the app up and running will be appreciated. This is my first time deploying a Django app. Thank you in advance.
|
TastyPie throttling - by user or by IP?
| 36,657,503 | 2 | 3 | 203 | 0 |
python,django,tastypie,throttling
|
The throttle key is based on the authentication.get_identifier function.
The default implementation of this function returns a combination of IP address and hostname.
Edit
Other implementations (i.e. BasicAuthentication, ApiKeyAuthentication) return the username of the currently logged-in user, or the string nouser.
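As an illustration only (this is not Tastypie's actual code), a get_identifier-style key might be derived like this; FakeUser and FakeRequest are invented stand-ins for Django's user and request objects:

```python
class FakeUser:
    """Stand-in for Django's user object (invented for illustration)."""
    def __init__(self, username=None):
        self.username = username
        self.is_authenticated = username is not None

class FakeRequest:
    """Stand-in for Django's HttpRequest (invented for illustration)."""
    def __init__(self, ip, user=None):
        self.META = {"REMOTE_ADDR": ip}
        self.user = user or FakeUser()

def get_identifier(request):
    # Authenticated requests are keyed by username; anonymous ones
    # fall back to the client IP plus a 'nouser' marker.
    if getattr(request.user, "is_authenticated", False):
        return request.user.username
    return "%s_nouser" % request.META.get("REMOTE_ADDR", "")

print(get_identifier(FakeRequest("10.0.0.1")))                     # → 10.0.0.1_nouser
print(get_identifier(FakeRequest("10.0.0.1", FakeUser("alice"))))  # → alice
```

So whether throttling is per-user or per-IP depends on which authentication class is in use.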
| 0 | 0 | 0 | 1 |
2016-04-15T21:23:00.000
| 2 | 1.2 | true | 36,657,049 | 0 | 0 | 1 | 2 |
I can't seem to find any information on what TastyPie throttles based on. Is it by the IP of the request, or by the actual Django user object?
|
TastyPie throttling - by user or by IP?
| 36,659,688 | 2 | 3 | 203 | 0 |
python,django,tastypie,throttling
|
Tomasz is mostly right, but some of the authentication classes have a get_identifier method that returns the username of the currently logged in user, otherwise 'nouser'. I plan on standardizing this soon.
| 0 | 0 | 0 | 1 |
2016-04-15T21:23:00.000
| 2 | 0.197375 | false | 36,657,049 | 0 | 0 | 1 | 2 |
I can't seem to find any information on what TastyPie throttles based on. Is it by the IP of the request, or by the actual Django user object?
|
Python web service and DMZ
| 36,664,840 | 0 | 0 | 301 | 0 |
python,web-services,security
|
If you want the service itself to run on dedicated hardware within the network and have the webserver hosted in the DMZ, you'll have to use a proxy. You can use Nginx (for instance) to forward the port on which you're exposing the webserver to the port opened by Flask on the dedicated machine. You'll have to configure your firewall to forward that port so that it's accessible to the machine in the DMZ.
I'd provide the configuration for your firewall and Nginx, but it really depends on the parameters of your network and the service you want to run.
| 0 | 0 | 0 | 0 |
2016-04-16T07:07:00.000
| 1 | 0 | false | 36,661,233 | 0 | 0 | 1 | 1 |
I have written a python library and a web service using flask to expose functions from that library. The library should run on computer A (to do its processing). In our IT setup, web servers will run on a DMZ (computer B). This being the case, if the flask web service directly imports the library and runs a function, it would be running on the DMZ, rather than the intended computer? How do I design the program such that the library executes on the intended hardware, but the web service is hosted by the webserver on the DMZ?
|
Streaming values in a python script to a wep app
| 36,669,596 | 0 | 0 | 63 | 0 |
python,azure,web-applications,azure-webjobs
|
You would need to provide some more information about what kind of interface your web app exposes. Does it only handle normal HTTP/1 requests, or does it have a WebSocket or HTTP/2 type interface? If it only handles HTTP/1 requests, then you just need to make multiple requests or try long polling. Otherwise, you need to connect with a WebSocket and stream the data over that connection.
| 0 | 1 | 0 | 0 |
2016-04-16T20:42:00.000
| 2 | 0 | false | 36,669,500 | 0 | 0 | 1 | 2 |
I have a python script that runs continuously as a WebJob (using Microsoft Azure), it generates some values (heart beat rate) continuously, and I want to display those values in my Web App.
I don't know how to proceed to link the WebJob to the web app.
Any ideas ?
|
Streaming values in a python script to a wep app
| 36,671,291 | 1 | 0 | 63 | 0 |
python,azure,web-applications,azure-webjobs
|
You have two main options:
You can have the WebJobs write the values to a database or to Azure Storage (e.g. a queue), and have the Web App read them from there.
Or if the WebJob and App are in the same Web App, you can use the file system. e.g. have the WebJob write things into %home%\data\SomeFolderYouChoose, and have the Web App read from the same place.
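A minimal sketch of the file-system option. The folder name, file name, and JSON layout are all assumptions for illustration; a real WebJob would use a folder under %home%\data rather than the temp directory used here so the example is runnable anywhere:

```python
import json
import os
import tempfile
import time

# Stand-in for %home%\data\HeartRates on the Web App's shared file system.
DATA_DIR = os.path.join(tempfile.gettempdir(), "webjob-data")
os.makedirs(DATA_DIR, exist_ok=True)
LATEST = os.path.join(DATA_DIR, "latest.json")

def write_reading(bpm):
    # WebJob side: overwrite the latest heart-rate reading atomically,
    # so the Web App never observes a half-written file.
    tmp = LATEST + ".tmp"
    with open(tmp, "w") as f:
        json.dump({"bpm": bpm, "ts": time.time()}, f)
    os.replace(tmp, LATEST)

def read_reading():
    # Web app side: return the most recent reading, or None if none yet.
    try:
        with open(LATEST) as f:
            return json.load(f)
    except FileNotFoundError:
        return None

write_reading(72)
print(read_reading()["bpm"])  # → 72
```

The Web App view would simply call read_reading() and render the value (or return it as JSON for the page to poll).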
| 0 | 1 | 0 | 0 |
2016-04-16T20:42:00.000
| 2 | 1.2 | true | 36,669,500 | 0 | 0 | 1 | 2 |
I have a python script that runs continuously as a WebJob (using Microsoft Azure), it generates some values (heart beat rate) continuously, and I want to display those values in my Web App.
I don't know how to proceed to link the WebJob to the web app.
Any ideas ?
|
Certificate Error while Deploying python code in Google App Engine
| 36,673,729 | 1 | 0 | 97 | 0 |
python,google-app-engine
|
Upgrading Python to 2.7.8 or a later version fixed the issue.
EDIT:
Also check whether you are using Google App Engine SDK 1.8.1 or a later version. As of SDK 1.8.1, cacerts.txt has been renamed to urlfetch_cacerts.txt. You can try removing the cacerts.txt file to fix the problem.
| 0 | 1 | 0 | 0 |
2016-04-17T07:07:00.000
| 1 | 1.2 | true | 36,673,670 | 0 | 0 | 1 | 1 |
I tried deploying Python code using Google App Engine.
But I got the error below:
certificate verify failed
I had included the proxy certificate in urlfetch_cacerts.py and enabled 'validate_certificate' in urlfetch_stub.py by setting _API_CALL_VALIDATE_CERTIFICATE_DEFAULT = True, but I still get the error.
Can you suggest any solution?
Thanks in advance.
|
Atlassian Bamboo command tasks not running correctly
| 36,808,047 | 0 | 0 | 837 | 0 |
python-3.x,selenium,centos6,bamboo
|
I solved the problem by changing the task type from a command task to a script task. My understanding is that not all tasks are run in the sequence in which they were defined in the job. If this is not the case, then it might be a bug in Bamboo.
| 0 | 0 | 1 | 0 |
2016-04-18T06:15:00.000
| 1 | 0 | false | 36,686,661 | 0 | 0 | 1 | 1 |
I have setup an Atlassian Bamboo deploy plan. One of its steps is to run a command to run automated UI tests written in Selenium for Python. This runs on a headless Centos 6 server.
I had to install the X-server to simulate the existence of a display
I made the following commands run in the system boot so that the X-server is always started when the machine starts
Xvfb :1 -screen 1600x900x16
export DISPLAY=:1
The command task in the deployment plan simply invokes the following
/usr/local/bin/python3.5 .py
The funny thing is that when I run that directly from the command line it works perfectly and the UI unit tests pass. They start Firefox and start interacting with the site.
On the other hand, when this is done via the deployment command I keep getting the error "The browser appears to have exited"
17-Apr-2016 14:18:23 selenium.common.exceptions.WebDriverException: Message: The browser appears to have exited before we could connect. If you specified a log_file in the FirefoxBinary constructor, check it for details" As if it still does not sense that there is a display.
I even added a task in the deployment job to run X-server again but it came back with error that the server is already running.
This is done on Bamboo version 5.10.3 build 51020.
So, any ideas why it would fail within the deployment job?
Thanks,
|
Burlap java server to work with python client
| 36,695,144 | 0 | 0 | 205 | 0 |
java,python,server,client,hessian
|
Burlap and Hessian are 2 different (but related) RPC protocols, with Burlap being XML based and Hessian being binary.
They're both also pretty ancient, so if you have an opportunity to use something else, I'd highly recommend it. If not, then you're going to have to find a Burlap lib for Python.
Since it seems that a Burlap lib for Python simply doesn't exist (at least anymore), your best choice is probably to make a small Java proxy that communicates with the Python side over a more recent protocol and with the Java server in Burlap.
| 0 | 0 | 0 | 1 |
2016-04-18T13:10:00.000
| 1 | 1.2 | true | 36,694,973 | 0 | 0 | 1 | 1 |
I'm trying to connect a Burlap Java server with a Python client, but I can't find any details whatsoever regarding how to use Burlap with Python, or whether it is even implemented for Python. Any ideas? Can I build Burlap Python clients? Any resources? Would a Hessian Python client work with a Java Burlap server?
|
Call a function when Flask session expires
| 36,829,120 | 0 | 5 | 3,947 | 0 |
python,python-2.7,session,flask
|
Yeah, it's kind of possible: run a loop that checks until session['key'] == None, and call the function when that condition becomes true. I hope this helps!
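Flask doesn't fire a callback when a session expires, so one workaround is to track expiry yourself and sweep periodically. This is a framework-free sketch: SESSION_TTL, sessions, touch, and cleanup are all invented names for illustration, and the actual file deletion is stubbed out:

```python
import time

SESSION_TTL = 5.0  # seconds of inactivity before a session counts as expired (assumption)
sessions = {}      # session_id -> {"last_seen": timestamp, "files": [paths]}

def touch(session_id, files=()):
    # Call this on every request: record activity and the user's files.
    sessions[session_id] = {"last_seen": time.time(), "files": list(files)}

def cleanup(now=None):
    # Run periodically (e.g. from a background thread): drop idle sessions,
    # delete their files, and return the ids that were removed.
    now = time.time() if now is None else now
    expired = [sid for sid, s in sessions.items() if now - s["last_seen"] > SESSION_TTL]
    for sid in expired:
        for path in sessions[sid]["files"]:
            # A real app would remove the file here, e.g. os.remove(path);
            # skipped so the sketch stays side-effect free.
            pass
        del sessions[sid]
    return expired

touch("alice", ["/tmp/alice.dat"])
print(cleanup(now=time.time() + 10))  # → ['alice']
```

In a Flask app, touch would be wired into a before_request handler and cleanup scheduled with a timer or background thread.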
| 0 | 0 | 0 | 0 |
2016-04-19T02:15:00.000
| 2 | 0 | false | 36,707,367 | 0 | 0 | 1 | 1 |
In my Flask application, I am saving files that correspond to a user, and want to delete these files when the user's "session" expires. Is it possible to detect the session expiration and immediately call a function?
|