Q_Id
int64 337
49.3M
| CreationDate
stringlengths 23
23
| Users Score
int64 -42
1.15k
| Other
int64 0
1
| Python Basics and Environment
int64 0
1
| System Administration and DevOps
int64 0
1
| Tags
stringlengths 6
105
| A_Id
int64 518
72.5M
| AnswerCount
int64 1
64
| is_accepted
bool 2
classes | Web Development
int64 0
1
| GUI and Desktop Applications
int64 0
1
| Answer
stringlengths 6
11.6k
| Available Count
int64 1
31
| Q_Score
int64 0
6.79k
| Data Science and Machine Learning
int64 0
1
| Question
stringlengths 15
29k
| Title
stringlengths 11
150
| Score
float64 -1
1.2
| Database and SQL
int64 0
1
| Networking and APIs
int64 0
1
| ViewCount
int64 8
6.81M
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
35,264,655 | 2016-02-08T07:49:00.000 | 0 | 0 | 0 | 0 | python,web-scraping,beautifulsoup | 35,265,134 | 1 | false | 0 | 0 | The first line of an HTTP (>1.0) response is the status line.
This error says that the status line you received is broken. That can happen if the server's response starts with a non-RFC status line (unlikely), or if the response is HTTP/0.9, which has no status line or headers at all. A server may answer with HTTP/0.9 if your request (for example, its header lines) was itself broken.
Check the response with Wireshark (or a similar tool) and give us a bit more info about your error. | 1 | 0 | 0 | I am getting this error while web scraping
http protocol error', 0, 'got a bad status line
What does it mean? And
How can I avoid this? | How can I avoid Http protocol error 0 while WebScraping | 0 | 0 | 1 | 181 |
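To see the failure mode described in the answer above, here is a small self-contained sketch (using Python 3's http.client, the successor of the httplib module the asker was likely using) in which a toy local server replies with a malformed status line:

```python
import http.client
import socketserver
import threading

class BrokenStatusHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Reply with garbage instead of e.g. b"HTTP/1.1 200 OK\r\n..."
        self.wfile.write(b"this is not a status line\r\n\r\n")

server = socketserver.TCPServer(("127.0.0.1", 0), BrokenStatusHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
conn.request("GET", "/")
try:
    conn.getresponse()
    caught = None
except http.client.BadStatusLine as exc:
    caught = exc          # "got a bad status line" comes from here

print(type(caught).__name__)  # BadStatusLine
server.shutdown()
```

Catching this exception (and retrying or logging the raw response) is usually more practical than trying to prevent a misbehaving server from sending it.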
35,266,360 | 2016-02-08T09:40:00.000 | 1 | 0 | 1 | 0 | javascript,firefox,ipython-notebook,jupyter-notebook | 35,270,287 | 2 | false | 1 | 0 | In the address bar, type "about:config" (with no quotes), and press Enter.
Click "I'll be careful, I promise".
In the search bar, search for "javascript.enabled" (with no quotes).
Right click the result named "javascript.enabled" and click "Toggle". JavaScript is now enabled.
To re-disable JavaScript, repeat these steps. | 2 | 1 | 0 | I'm trying to run jupyter notebook from a terminal on an Xfce Ubuntu machine.
I typed in the command:
jupyter notebook --browser=firefox
The firefox browser opens, but it is empty and with the following error:
"IPython Notebook requires JavaScript. Please enable it to proceed."
I searched the web on how to enable JavaScript on Ipython NoteBook but didn't find an answer. I would appreciate a lot any help. Thanks! | ipython notebook requires javascript - on firefox web browser | 0.099668 | 0 | 0 | 3,643 |
35,266,360 | 2016-02-08T09:40:00.000 | 1 | 0 | 1 | 0 | javascript,firefox,ipython-notebook,jupyter-notebook | 35,266,458 | 2 | false | 1 | 0 | JavaScript has to be enabled in the Firefox browser; it is currently turned off. To turn it on, do this:
To enable JavaScript for Mozilla Firefox: Click the Tools drop-down menu and select Options. Check the boxes next to Block pop-up windows, Load images automatically, and Enable JavaScript. Refresh your browser by right-clicking anywhere on the page and selecting Reload, or by using the Reload button in the toolbar. | 2 | 1 | 0 | I'm trying to run jupyter notebook from a terminal on Xfce Ubuntu machine.
I typed in the command:
jupyter notebook --browser=firefox
The firefox browser opens, but it is empty and with the following error:
"IPython Notebook requires JavaScript. Please enable it to proceed."
I searched the web on how to enable JavaScript on Ipython NoteBook but didn't find an answer. I would appreciate a lot any help. Thanks! | ipython notebook requires javascript - on firefox web browser | 0.099668 | 0 | 0 | 3,643 |
35,267,280 | 2016-02-08T10:23:00.000 | 1 | 1 | 0 | 0 | python,redis,redis-py | 35,913,988 | 2 | true | 0 | 0 | I guess your password contains a "$". If it does, remove it and it will work — the shell expands "$" sequences inside double quotes before redis-cli ever sees the password. | 1 | 0 | 0 | I am trying to insert millions of rows into Redis.
I went through the Redis mass-insertion tutorials and tried
cat data.txt | python redis_proto.py | redis-cli -p 6321 -a "myPassword" --pipe
Here redis_proto.py is the Python script which reads data.txt and converts it to the Redis protocol.
I got an error like the one below:
All data transferred. Waiting for the last reply...
NOAUTH Authentication required.
NOAUTH Authentication required.
any help or suggestions would be appreciated ? | No Auth "Authentication required" Error in redis Mass Insertion? | 1.2 | 0 | 1 | 3,200 |
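A quick way to see why a "$" in a double-quoted password breaks the pipeline: the shell performs substitution before redis-cli receives the argument (the pa$$wd value below is purely illustrative):

```shell
# Inside double quotes the shell substitutes $-sequences before redis-cli
# ever sees the argument; here $$ becomes the shell's process id:
expanded=$(printf '%s' "pa$$wd")
literal=$(printf '%s' 'pa$$wd')
echo "$expanded"    # e.g. pa12345wd - the password is silently mangled
echo "$literal"     # pa$$wd, passed through untouched
# So either drop the $ from the password, or single-quote it, e.g.
# (placeholder values): redis-cli -p 6321 -a 'myPa$$word' --pipe
```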
35,272,350 | 2016-02-08T14:44:00.000 | 4 | 0 | 1 | 0 | python | 35,272,485 | 4 | false | 0 | 0 | Return the “identity” of an object. This is an integer (or long integer) which is guaranteed to be unique and constant for this object during its lifetime. Two objects with non-overlapping lifetimes may have the same id() value.
There's nothing that says that it cannot be zero (zero is an integer). If you rely on it not being zero then you're relying on a current implementation detail which is not smart.
What you instead should do is to use for example None to indicate that it isn't an id of an object. | 1 | 0 | 0 | I’m wondering if there is anything about python object IDs that will prevent them from ever equaling zero? I’m asking because I’m using zero as a stand-in for a special case in my code. | Python: will id() always be nonzero? | 0.197375 | 0 | 0 | 106 |
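A minimal sketch of the suggested approach — compare against None or a dedicated sentinel with `is`, rather than relying on id() never being zero (the names here are illustrative):

```python
# Rather than treating id(obj) == 0 as "no object" (an implementation detail
# you cannot rely on), use None or a dedicated sentinel and compare with `is`.
sentinel = object()   # unique marker; None works too if None is never valid data

def describe(obj):
    if obj is sentinel:
        return "no object"
    return "object with id {}".format(id(obj))

print(describe(sentinel))   # no object
print(describe(42))         # object with id <some integer>
```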
35,273,294 | 2016-02-08T15:31:00.000 | 3 | 0 | 0 | 0 | python,django,git,postgresql | 35,273,897 | 2 | false | 1 | 0 | The migrations system does not look at your current schema at all. It builds up its picture from the graph of previous migrations and the current state of models.py. That means that if you make changes to the schema from outside this system, it will be out of sync; if you then make the equivalent change in models.py and create migrations, when you run them you will probably get an error.
For that reason, you should avoid doing this. If it's done already, you could apply the conflicting migration in fake mode, which simply marks it as done without actually running the code against the database. But it's simpler to do everything via migrations in the first place.
git has no impact on this at all, other than to reiterate that migrations are code, and should be added to your git repo. | 1 | 7 | 0 | If one is using Django, what happens with changes made directly to the database (in my case postgres) through either pgadmin or psql?
How are such changes handled by migrations? Do they take precedence over what the ORM thinks the state of affairs is, or does Django override them and impose it's own sense of change history?
Finally, how are any of these issues effected, or avoided, by git, if at all?
Thanks. | Edit database outside Django ORM | 0.291313 | 1 | 0 | 670 |
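Concretely, the "fake mode" mentioned in the answer above is the --fake flag on Django's migrate command; the app and migration names below are placeholders for your own project:

```shell
python manage.py makemigrations myapp        # create the migration matching models.py
python manage.py migrate myapp 0002 --fake   # record it as applied without running SQL
python manage.py migrate                     # later migrations then run normally
```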
35,276,163 | 2016-02-08T17:57:00.000 | 5 | 0 | 1 | 1 | python,command-line,sys | 35,276,219 | 1 | true | 0 | 0 | Are you using a shell? $ is a special character in the shell that is interpreted as a shell variable. Since the variable does not exist, it is textually substituted with an empty string.
Try using single quotes around your parameter, like > python myapp.py '$unny-Day'. | 1 | 1 | 0 | I'm trying to read command line arguments in python in the form:
python myprogram.py string string string
I have tried using sys.argv[1-3] to get each string, but when I have a string such as $unny-Day, it does not process the entire string. How can I process strings like these entirely? | Python: Command line arguments not read? | 1.2 | 0 | 0 | 425 |
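The shell substitution the answer describes can be seen directly (run in a POSIX shell):

```shell
# Unquoted or double-quoted, the shell substitutes the (unset) variable $unny
# before Python ever sees the argument:
unset unny
double=$(echo "$unny-Day")
single=$(echo '$unny-Day')
echo "$double"    # -Day
echo "$single"    # $unny-Day
# So invoke the script as:  python myapp.py '$unny-Day'
```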
35,276,844 | 2016-02-08T18:36:00.000 | 0 | 0 | 1 | 1 | python,pythonpath,sys.path | 35,276,882 | 1 | false | 0 | 0 | It always uses PYTHONPATH. What happened is probably that you quit python, but didn't quit your console/command shell. For that shell, the environment that was set when the shell was started still applies, and hence, there's no PYTHONPATH set. | 1 | 2 | 0 | I'm having some trouble understanding how Python uses the PYTHONPATH environment variable. According to the documentation, the import search path (sys.path) is "Initialized from the environment variable PYTHONPATH, plus an installation-dependent default."
In a Windows command box, I started Python (v.2.7.6) and printed the value of sys.path. I got a list of pathnames, the "installation-dependent default."
Then I quit Python, set PYTHONPATH to .;./lib;, restarted Python, and printed os.environ['PYTHONPATH']. I got .;./lib; as expected. Then I printed sys.path. I think it should have been the installation-dependent default with .;./lib; added to the start or the end. Instead it was the installation-dependent default alone, as if PYTHONPATH were empty. What am I missing here? | When/how does Python use PYTHONPATH | 0 | 0 | 0 | 96 |
35,277,128 | 2016-02-08T18:56:00.000 | 6 | 0 | 0 | 1 | python,sys | 35,277,184 | 3 | false | 0 | 0 | os.system (which is just a thin wrapper around the POSIX system call) runs the command in a shell launched as a child of the current process. Running a cd in that shell only changes the current directory of that process, not the parent. | 3 | 5 | 0 | I tried doing a "pwd" or cwd, after the cd, it does not seem to work when we use os.system("cd"). Is there something going on with the way the child processes are created. This is all under Linux. | Why does not os.system("cd mydir") work and we have to use os.chdir("mydir") instead in python? | 1 | 0 | 0 | 7,672 |
35,277,128 | 2016-02-08T18:56:00.000 | 12 | 0 | 0 | 1 | python,sys | 35,277,168 | 3 | false | 0 | 0 | os.system('cd foo') runs /bin/sh -c "cd foo"
This does work: It launches a new shell, changes that shell's current working directory into foo, and then allows that shell to exit when it reaches the end of the script it was called with.
However, if you want to change the directory of your current process, as opposed to the copy of /bin/sh that system() creates, you need that call to be run within that same process; hence, os.chdir(). | 3 | 5 | 0 | I tried doing a "pwd" or cwd, after the cd, it does not seem to work when we use os.system("cd"). Is there something going on with the way the child processes are created. This is all under Linux. | Why does not os.system("cd mydir") work and we have to use os.chdir("mydir") instead in python? | 1 | 0 | 0 | 7,672 |
35,277,128 | 2016-02-08T18:56:00.000 | 9 | 0 | 0 | 1 | python,sys | 35,277,171 | 3 | true | 0 | 0 | The system call creates a new process. If you do system("cd ..."), you are creating a new process that then changes its own current working directory and terminates. It would be quite surprising if a child process changing its current working directory magically changed its parent's current working directory. A system where that happened would be very hard to use. | 3 | 5 | 0 | I tried doing a "pwd" or cwd, after the cd, it does not seem to work when we use os.system("cd"). Is there something going on with the way the child processes are created. This is all under Linux. | Why does not os.system("cd mydir") work and we have to use os.chdir("mydir") instead in python? | 1.2 | 0 | 0 | 7,672 |
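The difference the answers describe can be demonstrated in a few lines (tempfile.gettempdir() is used only to have a directory that exists everywhere):

```python
import os
import tempfile

start = os.getcwd()
target = tempfile.gettempdir()   # just a directory that exists everywhere

# os.system runs the command in a *child* shell; the cd changes that child's
# working directory, which is discarded when the child exits.
os.system('cd "{}"'.format(target))
unchanged = os.getcwd() == start
print(unchanged)  # True: the parent process never moved

# os.chdir changes the working directory of the current process itself.
os.chdir(target)
moved = os.path.realpath(os.getcwd()) == os.path.realpath(target)
print(moved)      # True
os.chdir(start)   # restore
```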
35,277,753 | 2016-02-08T19:31:00.000 | 0 | 0 | 1 | 0 | python,file | 35,277,794 | 3 | false | 0 | 0 | Take the dictionary, put all the words from it in some sort of hash set.
Take the sentence, split it into words, hash each word. Check if the hash occurs in the hash set. | 1 | 1 | 0 | I am trying to create a program that checks if a string entered by the user contains only words from a text file, which will contain all words from the English dictionary. This will remove any slang language. If you have any other way of doing this, please let me know as I am relatively new to python.
Thanks in advance. | How to check if sentence contains words only from a file | 0 | 0 | 0 | 675 |
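A minimal sketch of the set-based approach from the answer; the dictionary here is a tiny in-memory stand-in for the word file (the real filename is up to you):

```python
import string

# Tiny in-memory stand-in for the dictionary file; in practice you would
# build the set once from your word list (filename is an assumption):
#   with open("english_words.txt") as f:
#       dictionary = {line.strip().lower() for line in f}
dictionary = {"the", "cat", "sat", "on", "mat"}

def uses_only_known_words(sentence, dictionary):
    # Lowercase, strip punctuation, then test every word for set membership.
    cleaned = sentence.lower().translate(str.maketrans("", "", string.punctuation))
    return all(word in dictionary for word in cleaned.split())

print(uses_only_known_words("The cat sat on the mat.", dictionary))       # True
print(uses_only_known_words("The cat sat on the yolo mat.", dictionary))  # False
```

Set membership is O(1) on average, so this stays fast even with a full English word list.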
35,278,050 | 2016-02-08T19:47:00.000 | 0 | 1 | 0 | 1 | igraph,pythonanywhere | 35,338,580 | 1 | false | 0 | 0 | python-igraph installed perfectly fine in my account. My guess is that you're facing a different issue than a missing library. Perhaps a network error or something like that. | 1 | 0 | 0 | I'm trying to run a web app (built with flask-wtforms and using iGraph) on PythonAnywhere. As igraph isn't part of the already included modules, I try to install it using the bash console, as such:
pip install --user python-igraph
However, what I get is:
Could not download and compile the C core of igraph.
It usually means (according to other people having the same issue on Stackoverflow) that I need to first install:
sudo apt-get install -y libigraph0-dev
Except, apt-get isn't available on Pythonanywhere, as far as I know.
Is there any workaround to install the iGraph module for Python 2.7 on Pythonanywhere? | iGraph install error with Python Anywhere | 0 | 0 | 0 | 118 |
35,278,437 | 2016-02-08T20:12:00.000 | 2 | 0 | 1 | 0 | python,jupyter,jupyter-notebook | 35,287,484 | 1 | false | 1 | 0 | Jupyter uses no cloud services, and should make no external requests when you are running it locally. The best way to think of a local install of Jupyter notebook is a desktop application that happens to use your web browser for its UI. It talks to the local filesystem, and relays that data to your browser over HTTP on localhost. | 1 | 1 | 0 | This is a pretty simple question: if you download Jupyter as part of Anaconda, how is your data being secured? When I run Jupyter it goes straight to an HTML page, but that page displays my local folders on the servers I am connected to.
If I make a notebook, will that notebook be stored on a cloud server? Where does it go, and how can I keep all of my files ("notebooks") local? | Is jupyter replicating my data to a cloud server | 0.379949 | 0 | 0 | 392 |
35,279,661 | 2016-02-08T21:27:00.000 | 0 | 0 | 0 | 0 | python,python-2.7,web-scraping,scrapy | 35,303,023 | 1 | false | 1 | 0 | The code was printing some statements that delayed the response, which made me think it was crawling. | 1 | 0 | 0 | I created a scrapy project from the command line and added two spiders, myspider1 and myspider2. Whenever I run "scrapy crawl myspider1" or "scrapy crawl myspider2" it starts "myspider1".
When I run "scrapy list" it also starts myspider1.
I am running this under a virtualenv in Python but I can't understand why it does this. It seems that whenever I run a command with scrapy it always executes the first spider in my spiders folder.
Any idea on why this is happening? | "scrapy list" command in my project directory starts a spider | 0 | 0 | 0 | 383 |
35,282,336 | 2016-02-09T01:13:00.000 | 0 | 1 | 0 | 1 | python,macos | 35,303,065 | 1 | false | 0 | 0 | This seems to be fixed in TODAY's beta release: 15E39d | 1 | 1 | 0 | I can no longer run python on my mac. Upgraded to mac OS X 10.11.4 Beta and now if I run python it gets killed.
$python
Killed: 9
the system log shows:
taskgated[396]: killed pid 954 because its code signature is invalid (error -67030) | Why is mac OS X killing python? | 0 | 0 | 0 | 493 |
35,282,363 | 2016-02-09T01:18:00.000 | 0 | 0 | 0 | 0 | python,user-interface,python-3.x,download,tkinter | 35,282,454 | 2 | false | 0 | 0 | There are a number of GUI toolkits you could use, including:
Kivy (modern touch-enabled)
Tkinter (bundled with Python)
These have file chooser widgets, which you could use that would provide standard-looking interfaces to your file system.
How do you want to run this program? | 1 | 0 | 0 | I would like to create a GUI that pops up asking where to download a file using python. I would like it to be similar to the interface that Google Chrome uses when downloading a file as that looks pretty standard. Is there a default module or add on that I can use to create thus GUI? or will I have to create myself? any help would be appreciated. | python 3 create a GUI to set directory in a program | 0 | 0 | 1 | 60 |
35,282,469 | 2016-02-09T01:30:00.000 | 0 | 0 | 1 | 0 | python,file | 35,285,288 | 4 | false | 0 | 0 | Use str.replace('bread', 'breakfast'), where 'bread' is replaced by 'breakfast'. | 1 | 3 | 0 | Say a method returns me a long list of lines which I am writing to file. Now, on the fly, is there any way I can change the word "Bread" to "Breakfast", assuming the word "Bread" actually exists in several places of the file that is being generated?
Thanks.
I have assigned sys.stdout to a file object, so that all my console print output goes to the file. So an on-the-fly hack would be great. | How to automatically change a particular word while writing to a file in python? | 0 | 0 | 0 | 173 |
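Since sys.stdout has been reassigned to a file object, one possible on-the-fly hack is to wrap that file object so every write() rewrites the word first. This is a sketch using an in-memory buffer in place of the real file:

```python
import io

class ReplacingWriter:
    """File-like wrapper that rewrites one word in everything written to it."""
    def __init__(self, stream, old, new):
        self.stream = stream
        self.old = old
        self.new = new

    def write(self, text):
        self.stream.write(text.replace(self.old, self.new))

    def flush(self):
        self.stream.flush()

buffer = io.StringIO()   # stands in for the real log file object here
out = ReplacingWriter(buffer, "Bread", "Breakfast")
out.write("Bread is served.\n")
print(buffer.getvalue())  # Breakfast is served.
```

In the asker's setup this would look like sys.stdout = ReplacingWriter(logfile, "Bread", "Breakfast"), where logfile is the already-opened file object.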
35,282,924 | 2016-02-09T02:29:00.000 | 8 | 0 | 0 | 1 | python,google-app-engine | 35,283,754 | 1 | true | 1 | 0 | When your handler ends, the response goes to the client -- if you've never written anything to the response, then it will be an empty response (should come with an HTTP 204 status, but browsers are notoriously resigned to broken servers like the one you're apparently planning to create:-).
Nothing about this will cause "the instance GAE creates to handle that request will stay alive so to speak indefinitely". After at most 60 seconds (for auto-scaled modules, which are the default choice), things will time out and a 500 HTTP status will go to the browser. | 1 | 3 | 0 | I'm developing with GAE Python. If I have a URL that routes to a handler, is it necessary to actually call self.response.out.write or self.render(if I'm using a template)?
I'm thinking if I don't specify a response.out call, then the instance GAE creates to handle that request will stay alive so to speak indefinitely? | Google App Engine - Is it necessary to call self.response in handler? | 1.2 | 0 | 0 | 311 |
35,283,654 | 2016-02-09T04:02:00.000 | 0 | 0 | 0 | 0 | python,numpy,pandas,machine-learning,scikit-learn | 47,066,621 | 2 | false | 0 | 0 | You can add more features based on the raw data, using methods like RFM analysis (RFM = recency, frequency, monetary).
For example:
How often did the user log in?
When did the user last log in? | 1 | 0 | 0 | So I have data in the form [UID obj1 obj2..] x timestamp and I want to cluster this data in Python using KMeans from sklearn. Where should I start?
EDIT:
So basically I'm trying to cluster users based on clickstream data, and classify them based on usage patterns. | How to cluster a time series using KMeans in python | 0 | 0 | 0 | 2,413 |
35,283,799 | 2016-02-09T04:18:00.000 | 0 | 0 | 1 | 1 | multithreading,python-3.x,python-multithreading | 35,283,833 | 1 | true | 0 | 0 | There's no point. Disk write operations aren't blocking anyway, so there's no point in creating many threads just to perform them. | 1 | 0 | 0 | I am making a crawler to download some images extensively and I want to speed up by using thread (I'm new to multithreading). I am not sure about the inner mechanism behind disk writing operation. Can I write different files to disk simultaneously using thread? (does the writing get scheduled automatically?) Or should I make a lock for disk access for each thread to take turns and write? | Should I spawn one thread per disk writing operation? | 1.2 | 0 | 0 | 37 |
35,285,419 | 2016-02-09T06:36:00.000 | 1 | 0 | 0 | 0 | python-2.7,python-3.x,image-processing,computer-vision | 35,300,489 | 2 | false | 0 | 0 | So I had a look at that skimage library.
In case you are looking for your blob's contour (I assume you just confused that with perimeter) to calculate things like centroid distance functions you might give skimage.measure.find_contours(array, level) a try. It looks like what you need.
Perimeter is the length of that contour which of course is a scalar. | 1 | 0 | 0 | How to calculate the shape signature of an object from a binary image in python? I am not getting an function to calculate the perimeters of the binary object. | Shape signature of an object in python? | 0.099668 | 0 | 0 | 426 |
35,287,620 | 2016-02-09T08:55:00.000 | 0 | 0 | 0 | 0 | python,django | 35,295,319 | 3 | false | 1 | 0 | You could group them into a few separate models, linked by OneToOneFields to the main model. That would "namespace" your data, and namespaces are "one honking great idea". | 1 | 3 | 0 | I have a model with hundreds of properties. The properties can be of different types (integer, strings, uploaded files, ...). I would like to implement this complex model step by step, starting with the most important properties. I can think of two options:
Define the properties as regular model fields
Define a separate model to hold each property separately, and link it to the main model with a ForeignKey
I have not found any suggestions on how to handle models with lots of properties with django. What are the advantages / drawbacks of both approaches? | Django model with hundreds of fields | 0 | 0 | 0 | 390 |
35,294,239 | 2016-02-09T14:12:00.000 | 0 | 0 | 0 | 0 | python,mysql,tkinter | 35,295,077 | 1 | false | 0 | 1 | in order to periodically refresh the user messages, just make an infinite while loop and set it to update every 5 seconds or so. this way every 5 seconds you check to see if database has new messages. alternatively you can make the while loop update if the database has been updated at any point but this is more complex. | 1 | 0 | 0 | I made a simple chat program in python that uses tkinter and mysql db. It connects to db first, gets the messages and shows them to user. But when another user send a message to the user, the user can not see the new messages. So, I made a refresh button. But, everybody knows, people don't want to use a chat program that you always should press a button to see messages. The question is, how can I make a instant message app without clicking any buttons?
It isn't required to use tkinter for the GUI; it can be run with other GUI libraries. | Python3 - Tkinter - Instant messaging | 0 | 0 | 0 | 294 |
35,295,491 | 2016-02-09T15:11:00.000 | 3 | 0 | 0 | 0 | python,pandas | 35,295,661 | 1 | false | 0 | 0 | del dataframe will unpollute your namespace and free your memory, while dataframe = None will only free your memory. Hope that helps! | 1 | 0 | 1 | How do I drop a pandas dataframes after I store them in a database. I can only find a way to drop columns or rows from a dataframe but how can I drop a complete data frame to free my computer memory? | How to drop a pandas dataframe after storing in database | 0.53705 | 0 | 0 | 86 |
35,296,683 | 2016-02-09T16:07:00.000 | 2 | 0 | 1 | 0 | python,multithreading | 35,296,959 | 1 | true | 0 | 0 | Queue.join will wait until the queue is empty (more precisely, until Queue.task_done has been called for each item after processing). Thread.join will block until all the threads terminate. The behavior of one or the other might be similar if all the threads take items from the queue, perform a task, and return when there's nothing left. However, you can still have threads which don't use a queue at all, in which case Queue.join would be useless. | 1 | 2 | 0 | In Python, what is the difference between using a thread.join vs a queue.join? I feel that they could both do the same job in some scenarios, especially if there is a one-to-one correspondence between threads spawned and items picked from the queue for a job. Is it something like: if you are going to use threading on a queue, it is best to depend on queue.join, and if you are just doing something in parallel where there is no queue data structure used, but something like a list, you could use thread.join? Of course in the scenario of thread.join you need to mention all the threads spawned.
Also, just as an aside, is a queue something you would normally use for consuming input? I think in the scenario of chaining inputs for another job it makes sense to use it as an output as well, but in general a queue is for processing input? Can someone clarify? | What is the main difference of thread.join vs queue.join? | 1.2 | 0 | 0 | 910 |
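A small sketch contrasting the two: queue.join() waits for task_done() to be called once per put item, while thread.join() waits for the thread itself to exit (the poison-pill None is a common convention, not a requirement):

```python
import queue
import threading

q = queue.Queue()
results = []

def worker():
    while True:
        item = q.get()
        if item is None:      # poison pill: ask the worker to exit
            q.task_done()
            break
        results.append(item * 2)
        q.task_done()         # mark this unit of work as finished

t = threading.Thread(target=worker)
t.start()

for n in range(5):
    q.put(n)

q.join()       # returns once task_done() has been called for every put()
q.put(None)    # now tell the worker to exit...
t.join()       # ...and wait for the thread itself to terminate

print(results)  # [0, 2, 4, 6, 8]
```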
35,299,374 | 2016-02-09T18:21:00.000 | 0 | 0 | 0 | 0 | python,django,deployment,django-south | 35,299,971 | 1 | false | 1 | 0 | It would be much safer to use a backup of the live data to test it locally
Changes should never be tested in live production.
You should try to at least have a test server that mimics the live one to push to first
is there anything else I should be aware of?
It depends on which versions of the apps you're changing from and to. Django, for example, occasionally introduces changes that aren't backwards compatible. | 1 | 1 | 0 | Django is not my main framework, I was hired on contract to update an already existing (live) Django application. Now it is time for deployment of my updates to the live server, so I thought I would ask to make sure my process will not cause any problems.
The Django version that was being used was extremely old (2012) and I updated it to one of the recent versions a few months back. This update is the one that worries me most with deployment.
Back in 2012 apparently Django still used South for migrations, now it has been integrated. Since there is some existing data on the live server, I'm concerned that the migration will not play nicely. Any pitfalls I should be aware of regarding this?
As my plan, I'm going to perform a full backup of the existing server/database, then upgrade Django & the dependencies in INSTALLED_APPS to the current version used on my development server, then copy over the files, and perform migration.
Should this process be sufficient? Or is there anything else I should be aware of? | Deploying Django update to live server | 0 | 0 | 0 | 671 |
35,300,052 | 2016-02-09T18:57:00.000 | 1 | 0 | 0 | 0 | python,scrapy | 35,317,246 | 2 | true | 1 | 0 | I think using a spider middleware and overwriting the start_requests() would be a good start.
In your middleware, you should loop over all urls in start_urls, and could use conditional statements to deal with different types of urls.
For your special URLs which do not require a request, you can
directly call your pipeline's process_item(), do not forget to import your pipeline and create a scrapy.item from your url for this
as you mentioned, pass the url as meta in a Request, and have a separate parse function which would only return the url
For all remaining URLs, you can launch a "normal" Request as you probably already have defined | 1 | 5 | 0 | I am writing a scrapy spider that takes as input many urls and classifies them into categories (returned as items). These URLs are fed to the spider via my crawler's start_requests() method.
Some URLs can be classified without downloading them, so I would like to yield directly an Item for them in start_requests(), which is forbidden by scrapy. How can I circumvent this?
I have thought about catching these requests in a custom middleware that would turn them into spurious Response objects, that I could then convert into Item objects in the request callback, but any cleaner solution would be welcome. | Returning Items in scrapy's start_requests() | 1.2 | 0 | 1 | 1,541 |
35,301,369 | 2016-02-09T20:17:00.000 | -1 | 0 | 0 | 0 | pip,python-3.5 | 43,424,008 | 1 | false | 0 | 0 | I get the exact same problem and am searching for solutions here.
You can try yum install pip. | 1 | 0 | 0 | I've been trying to figure this out all day and I've reached many dead-ends, so I thought I'd reach out to the fine people here @ stackoverflow.
Here's what I'm working against. I've had Python 3.5.1 installed on a Linux (Linux [xxx] 2.6.9-42.0.2.ELsmp #1 SMP Thu Aug 17 17:57:31 EDT 2006 x86_64 x86_64 x86_64 GNU/Linux) server I don't, or didn't at the time, have root access to. For whatever reason PIP was not included in the installation of Python (even though every posting I've found about installing PIP for Python >3.4 insists it's installed by default).
I've tried installing PIP by using get-pip.py, but attempts to run get-pip.py give a long run of errors (I can provide the errors, if it makes a difference).
I've tried installing PIP by using ensurepip, but I'm blocked by the following error:
python -m ensurepip
Ignoring ensurepip failure: pip 7.1.2 requires SSL/TLS
even though I have OpenSSL installed,
openssl version
OpenSSL 0.9.7a Feb 19 2003
Unfortunately, I am stuck here. I don't know why PIP wasn't included in the Python 3.5.1 build, but I need to correct this. Any advise would be appreciated.
Dan | python 3.5.1 install pip | -0.197375 | 0 | 1 | 790 |
35,302,508 | 2016-02-09T21:27:00.000 | 0 | 0 | 1 | 0 | python,python-2.7,debugging,command-line,visual-studio-2015 | 35,303,799 | 5 | false | 0 | 0 | You want to select "Execute Project with Python Interactive" from the debug dropdown menu. The keyboard shortcut for this is Shift+Alt+F5. When you do that, you will have a window open at the bottom of the screen called Python Interactive and you will see your printed statements and any prompts for inputs from your program.
This does not allow you to also enter debug mode though. It is either one or the other. | 1 | 17 | 0 | I am working with Python Tools for Visual Studio. (Note, not IronPython.)
I need to work with arguments passed to the module from the command line. I see how to start the module in Debug by right-clicking in the code window and selecting "Start with Debugging". But this approach never prompts me for command line arguments, and len(sys.argv) always == 1.
How do I start my module in debug mode and also pass arguments to it so sys.argv has more than 1 member? | How do I pass command line arguments to Python from VS in Debug mode? | 0 | 0 | 0 | 21,881 |
35,304,131 | 2016-02-09T23:19:00.000 | 2 | 1 | 1 | 0 | python,unit-testing | 50,380,006 | 2 | false | 0 | 0 | I tried Łukasz’s answer and it works, but I don’t like OK (SKIP=<number>) messages. For my own desires and aims for having a test suite I don’t want me or someone to start trusting any particular number of skipped tests, or not trusting and digging into the test suite and asking why something was skipped, and always?, and on purpose? For me that’s a non-starter.
I happen to use nosetests exclusively, and by convention test classes starting with _ are not run, so naming my base class _TestBaseClass is sufficient.
I tried this in Pycharm with Unittests and py.test and both of those tried to run my base class and its tests resulting in errors because there’s no instance data in the abstract base class. Maybe someone with specific knowledge of either of those runners could make a suite, or something, that bypasses the base class. | 1 | 4 | 0 | I need to use unittest in python to write some tests. I am testing the behavior of 2 classes, A and B, that have a lot of overlap in behavior because they are both subclasses of C, which is abstract. I would really like to be able to write 3 testing classes: ATestCase, BTestCase, and AbstractTestCase, where AbstractTestCase defines the common setup logic for ATestCase and BTestCase, but does not itself run any tests. ATestCase and BTestCase would be subclasses of AbstractTestCase and would define behavior/input data specific to A and B.
Is there a way to create an abstract class via python unittest that can take care of setup functionality by inheriting from TestCase, but not actually run any tests? | python unittest inheritance - abstract test class | 0.197375 | 0 | 0 | 1,639 |
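One common pattern (a sketch, not the only option): put the shared tests in a plain mixin that does not inherit from TestCase, so the loader never collects it on its own; the concrete subclasses mix it together with unittest.TestCase:

```python
import unittest

class CommonTests:                       # plain mixin: never collected on its own
    def test_length(self):
        self.assertEqual(len(self.instance), self.expected_len)

class ListTests(CommonTests, unittest.TestCase):
    def setUp(self):
        self.instance = [1, 2, 3]
        self.expected_len = 3

class DictTests(CommonTests, unittest.TestCase):
    def setUp(self):
        self.instance = {"a": 1}
        self.expected_len = 1

loader = unittest.defaultTestLoader
suite = unittest.TestSuite([
    loader.loadTestsFromTestCase(ListTests),
    loader.loadTestsFromTestCase(DictTests),
])
result = unittest.TestResult()
suite.run(result)
print(result.testsRun, result.wasSuccessful())  # 2 True
```

Because CommonTests is not a TestCase subclass, neither unittest discovery nor most runners will try to instantiate it directly, avoiding the "no instance data in the base class" errors.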
35,304,616 | 2016-02-10T00:07:00.000 | -1 | 0 | 1 | 0 | python,pycharm | 60,867,945 | 2 | false | 0 | 0 | you could just create a new debug configuration (run > edit configurations) and point it to a script in your project (e.g. called debug.py that you gitignore). Then when you hit debug it will run that script and drop you into a console.
Personally, I prefer to just launch ipython in the embedded terminal than using the debug console. On linux, you can create a bash alias in your .bashrc such as alias debug_myproject=PYTHONSTARTUP=$HOME/myproject/debug.py ipython. Then calling debug_myproject will run that script and drop you into an ipython console. | 1 | 10 | 0 | In PyCharm, it is possible to set a script that runs upon opening a new console (through Settings -> 'Build, Execution, Deployment' -> Console -> Python Console -> Starting script).
Is there a way to similarly apply a startup script to the debugger console? I find myself importing the same packages over and over again, each time I run the code. | Setting startup script in PyCharm debugger console | -0.099668 | 0 | 0 | 695 |
35,305,360 | 2016-02-10T01:29:00.000 | 1 | 0 | 1 | 0 | python,types,dynamic-typing,static-typing | 35,305,911 | 1 | false | 0 | 0 | This seems fundamentally un-pythonic. There's no typing of function parameters in python, so there's no way to restrict the argument types to a function.
Type hinting is useful for documentation or code linters, but python doesn't use that information to enforce anything at runtime.
If you really want to ensure the validity of an interface (even beyond just argument types), the way to do that would be with functional unittests.
Unit testing and Test-Driven Development are so prevalent in the Python community that type-hinting doesn't really add much when it comes to testing and finding bugs. And while it's a debatable point, there are many who believe that any benefit from type-hinting is immediately destroyed by making Python code harder to read. There are some promising possibilities with type-hinting of being able to compile Python out to C or Java, but those don't exist yet. | 1 | 3 | 0 | I want to create a class that requires a specific method, with specifically typed arguments and return values.
I can inherit from an abstract class that requires the method to be implemented - but I do not have the ability to force specific argument values and return values ala a static language like Java (I could throw an error at runtime if I wanted to). What is the best way of approaching this with Python? Have looked into type hinting a bit but I don't think it solves this problem. | How to enforce method interface with Python? | 0.197375 | 0 | 0 | 1,454 |
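To make the trade-off in the answer concrete: the standard-library abc module can at least force subclasses to implement the method (instantiating an incomplete subclass raises TypeError), while argument and return types remain unenforced and are left to tests. All class and method names below are hypothetical:

```python
import abc

class Serializer(abc.ABC):
    @abc.abstractmethod
    def dumps(self, obj: dict) -> str:
        """Subclasses must implement this; the annotations are not enforced."""

class SimpleSerializer(Serializer):
    def dumps(self, obj: dict) -> str:
        return str(obj)
```

Nothing stops dumps() from returning the wrong type at runtime — catching that is exactly what the functional unit tests mentioned above are for.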
35,305,671 | 2016-02-10T02:06:00.000 | 0 | 0 | 0 | 0 | django,postgresql,python-3.x,django-templates,django-template-filters | 35,306,452 | 1 | false | 1 | 0 | Turns out this had nothing to do with Django itself, which is not surprising.
The data migration which happened between the last and current version broke the newlines in the raw data. Therefore the linebreaksbr was working, but didn't find any linebreaks. | 1 | 0 | 0 | Currently I have a working Django 1.9 application using Python 3.5 in development. The database is Postgres 9.4.2.0.
I have a TEXT type field in the database which contains raw input gathered from users, which is then rendered back for other users to read.
The raw text contains newlines and whatnot which look like:
chat.freenode.net\r\n#randomchannel
The HTML template itself attempts to replace the line breaks with break tags and escape anything else
{{ post.body|linebreaksbr|escape }}
But it doesn't seem to matter what filters I add to the post.body, it always renders the raw \r\n and never replaces the values with <br> tags.
I am not getting any errors in the development server and the rendering of the template works fine, it just seems the filters are not working.
I'm pulling my hair out trying to work out why these filters are not working. Does anyone have any ideas? | Django template filters not working at all | 0 | 0 | 0 | 313
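The accepted answer traces this to a migration that left literal \r\n character sequences in the stored text instead of real newlines, so linebreaksbr has nothing to convert. A small sketch (plain Python, outside Django) of repairing such data before rendering:

```python
def restore_newlines(text: str) -> str:
    """Turn literal backslash-escape sequences back into real newlines."""
    return text.replace("\\r\\n", "\n").replace("\\n", "\n")

# two literal escape sequences, no real newline -- as in the question's data
broken = "chat.freenode.net\\r\\n#randomchannel"
fixed = restore_newlines(broken)
```

After the repair, Django's linebreaksbr filter has actual newlines to turn into break tags.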
35,307,232 | 2016-02-10T05:06:00.000 | 1 | 0 | 1 | 0 | python,regex | 35,307,445 | 2 | true | 0 | 0 | @maxymoo's answer is correct for the example you posted, but will not work if some words in your corpus contain slashes (e.g., "and/or"), or hyphens.
To capture hyphenated words, replace (\w+) in his answer with (\w+-\w+|\w+).
Slashes are more difficult. You need to gather a full list of tags and write a look-ahead. | 1 | 0 | 0 | I'm doing some experiment with NLP in Python. I know about NLTK, but right now I'm not using it. I have a tagged corpus and I want to capture the words only, not their tags through regular expression.
For example,
\n\n\tthe/at fulton/np-tl county/nn-tl grand/jj-tl jury/nn-tl said/vbd is a portion of the tagged corpus and I want to extract the words. I'm new to using the re module. Please suggest some pattern that would be helpful for my work. | Regular expression to capture only words in a tagged corpus | 1.2 | 0 | 0 | 99
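Applying the suggested pattern to the sample line from the question — listing the hyphenated alternative first means a word like "hi-tech" is captured whole, while the tag after each slash is skipped because it is never followed by a slash:

```python
import re

line = "\n\n\tthe/at fulton/np-tl county/nn-tl grand/jj-tl jury/nn-tl said/vbd"

# capture the word immediately before each slash; hyphenated words first
words = re.findall(r"(\w+-\w+|\w+)/", line)
```

For the sample line this yields the six words and drops all of the tags.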
35,307,829 | 2016-02-10T05:52:00.000 | 0 | 1 | 0 | 1 | python,linux,bash,ssh,parallel-processing | 35,316,821 | 1 | false | 0 | 0 | For creating lots of parallel SSH connections there is already a tool called pssh. You should use that instead.
But if we're really talking about 100 machines or more, you should really use a dedicated cluster management and automation tool such as Salt, Ansible, Puppet or Chef. | 1 | 0 | 0 | I am writing a script in Python that establishes more than 100 parallel SSH connections, starts a script on each machine (the output is 10-50 lines), then waits for the results of the bash script and processes it. However it is run on a web server and I don't know, whether it would be better to first store the output in a file on the remote machine (that way, I suppose, I can start more remote scripts at once) and later establish another SSH connection (1 command / 1 connection) and read from those files? Now I am just reading the output but the CPU usage is really high, and I suppose the problem is, that a lot of data comes to the server at once. | Script on a web server that establishes a lot of parallel SSH connections, which approach is better? | 0 | 0 | 1 | 51 |
35,308,103 | 2016-02-10T06:12:00.000 | 0 | 1 | 0 | 0 | dronekit-python,dronekit-android | 54,972,589 | 1 | false | 0 | 1 | I don't really know what you are asking for but:
if the distance between the centers of two circles < the sum of their radii then they have collided. | 1 | 1 | 0 | We're building off of the Tower app, which was built with dronekit-android, and flying the 3dr solo with it. We're thinking about adding some sort of collision detection with it.
Is it feasible to run some python script on the drone, basically reading some IR or ultrasound sensor via the accessory bay, and basically yell at the Android tablet when it detects something? That way, the tablet will tell the drone to fly backwards or something.
Otherwise, would we use the dronekit-python libs to do that? How would we use a tablet / computer to have Tower-like functionality with that?
Thanks a bunch. | Implementing collision detection python script on dronekit | 0 | 0 | 0 | 208 |
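The circle test from the answer above, written out; math.hypot computes the distance between centers (the function name and signature are just illustrative):

```python
import math

def circles_collide(center_a, radius_a, center_b, radius_b):
    """Two circles overlap when the center distance is less than the sum of radii."""
    (xa, ya), (xb, yb) = center_a, center_b
    return math.hypot(xb - xa, yb - ya) < radius_a + radius_b
```

Note that touching circles (distance exactly equal to the radius sum) do not count as colliding under the strict comparison.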
35,309,463 | 2016-02-10T07:44:00.000 | 0 | 1 | 0 | 1 | php,python,apache | 35,309,692 | 2 | true | 0 | 0 | I see no security issue with giving www-data the sudo right for a single restart command without any wildcards.
If you want to avoid using sudo at all, you can create a temporary file with php, and poll for this file from a shell script executed by root regularly.
But this may be more error prone, and leads to the same result. | 2 | 0 | 0 | I have a requirement to stop/start some service using sudo service stop/start from a Python script. The script will be called by server-side PHP code on a page running on an Apache web server.
One way I know is to give www-data sudoer permission to run the specific python script.
Is there another way without giving www-data specific permissions? For example, will CGI or mod_python work in this case? If yes, what is the best way to implement Python script execution on a LAMP server?
Thanks in advance. | How to run python script which require sudoer | 1.2 | 0 | 0 | 501 |
35,309,463 | 2016-02-10T07:44:00.000 | 0 | 1 | 0 | 1 | php,python,apache | 35,310,045 | 2 | false | 0 | 0 | You can run a python thread that listens to a stop/start request, and then this thread will stop/start the service. The thread should run as sudo, but it listens to tcp. The web server can send requests w/o any special permissions (SocketServer is a very simple out-of-the-box python tcp server).
You may want to add some security, e.g. hashing the request to this server with a secret, so only allowed services will be able to request the start/stop of the service, and apply iptables rules (requests from localhost where the web server is). | 2 | 0 | 0 | I have a requirement to stop/start some service using sudo service stop/start from a Python script. The script will be called by server-side PHP code on a page running on an Apache web server.
One way I know is to give www-data sudoer permission to run the specific python script.
Is there another way without giving www-data specific permissions? For example, will CGI or mod_python work in this case? If yes, what is the best way to implement Python script execution on a LAMP server?
Thanks in advance. | How to run python script which require sudoer | 0 | 0 | 0 | 501 |
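For the shared-secret hashing the second answer mentions, the standard-library hmac module is the usual tool: the PHP side signs the command with the secret, and the privileged Python listener verifies the signature before acting. A sketch of just the signing and verification part (the secret and command strings are made up):

```python
import hmac
import hashlib

SECRET = b"change-me"  # hypothetical shared secret; keep it out of the web root

def sign(command: bytes) -> str:
    return hmac.new(SECRET, command, hashlib.sha256).hexdigest()

def verify(command: bytes, signature: str) -> bool:
    # compare_digest avoids leaking information through timing
    return hmac.compare_digest(sign(command), signature)
```

PHP's hash_hmac('sha256', ...) produces the same hex digest, so both sides can agree on the signature format.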
35,310,552 | 2016-02-10T08:53:00.000 | 0 | 0 | 0 | 0 | python,django,database-schema | 35,466,901 | 6 | false | 1 | 0 | I know it sounds like an awful hack, but maybe you can build an interface that creates text files?
One file would be models.py, with model definitions, and excluding this model from migrations with managed = False
Another file is the SQL with DROP and CREATE table if the customer wants a new table, or just ALTER table.
Another script can run the SQL script, copy the models.py file to the correct directory, and reload django. | 2 | 8 | 0 | A customer wants to add custom fields to a django model we provide.
He wants to do this on his own, without programming.
These things should be addable:
boolean (yes/no) fields. Optional "unset"
single choice fields
multiple choice fields
single line text fields
textarea fields
date
Example:
The customer wants to add a field he calls "was successful". And the field > should have these choices: yes/no/unset. Defaulting to unset.
Things would be easy if I could do it by creating or extending a model. But in this case no source code changes are allowed :-(
How to solve this?
Update
Querying for instances with given values needs to be supported. Example: Show all instances where "was successful" is True. | Adding custom fields to a django model (without changes in source code) | 0 | 0 | 0 | 1,657 |
35,310,552 | 2016-02-10T08:53:00.000 | 2 | 0 | 0 | 0 | python,django,database-schema | 35,414,612 | 6 | false | 1 | 0 | Well, when I had such problem, I used to create a custom field model, with a name field and a type field, usually a choice field with choices for the possible field types. You can also add a is_active field to filter the active and inactive CustomFields.
Then, when I create the form, I search these objects to know which fields I must have in that form.
To store the data, I'd have another model, called CustomFieldAnswer, or somethink like this. This model should have a ForeignKey to the main model that should have this data, and the custom field.
Doing so, you can have any kind of field for your model dynamically and without your client needing to code anything.
You could use metaprogramming to create actual fields in a form based on the query in the CustomFields. Or, you could just put the fields in the template and change the type of the input for each CustomField.
Hope that helps! | 2 | 8 | 0 | A customer wants to add custom fields to a django model we provide.
He wants to do this on his own, without programming.
These things should be addable:
boolean (yes/no) fields. Optional "unset"
single choice fields
multiple choice fields
single line text fields
textarea fields
date
Example:
The customer wants to add a field he calls "was successful". And the field > should have these choices: yes/no/unset. Defaulting to unset.
Things would be easy if I could do it by creating or extending a model. But in this case no source code changes are allowed :-(
How to solve this?
Update
Querying for instances with given values needs to be supported. Example: Show all instances where "was successful" is True. | Adding custom fields to a django model (without changes in source code) | 0.066568 | 0 | 0 | 1,657 |
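Framework details aside, the CustomField / CustomFieldAnswer layout sketched in the answers is the classic entity-attribute-value pattern. A plain-Python sketch of the idea, including the querying requirement from the update (names are illustrative; in Django these would be two models linked by ForeignKeys):

```python
FIELD_TYPES = {"bool", "single_choice", "multi_choice", "text", "textarea", "date"}

class CustomField:
    def __init__(self, name, field_type, choices=()):
        assert field_type in FIELD_TYPES
        self.name, self.field_type, self.choices = name, field_type, choices

class CustomFieldAnswer:
    """One stored value: which field, which model instance, what value."""
    def __init__(self, field, instance_id, value):
        self.field, self.instance_id, self.value = field, instance_id, value

def instances_where(answers, field_name, value):
    """All instance ids whose answer for field_name equals value."""
    return [a.instance_id for a in answers
            if a.field.name == field_name and a.value == value]
```

In SQL this query becomes a join from the answer table to the field table filtered on name and value.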
35,313,998 | 2016-02-10T11:30:00.000 | 0 | 1 | 0 | 1 | python,stm32,discovery | 38,325,794 | 2 | false | 0 | 1 | Which type of Invensense chip are you using?
I think you need to check if you use the right COM Port in windows.
Check if you could get data from your MPUxxxx Board through I2C
Check log_stm32.c to see whether the function fputc(out[i]); works correctly. | 1 | 0 | 0 | I am trying to run the Invensense motion_driver_6.12. I compiled the code with IAR and the STM32 works ok - all the tests I've done with the board are ok: UART, I2C, etc. But when I run the Python client demo program "eMPL-client-py" the program shows only one empty black window and nothing occurs. I tried running the program first and then switching on the board, and vice versa.
Thanks | Invensense Motion Driver 6.12 STM32 demo python don't work | -0.099668 | 0 | 0 | 733 |
35,314,206 | 2016-02-10T11:40:00.000 | 0 | 0 | 0 | 0 | python,security,web-scraping | 61,044,312 | 2 | false | 1 | 0 | I personally drown it in proxies: 1 proxy for 4 requests before it gets blocked, then I change proxy. I've got several tens of thousands of free proxies, so it's not a big problem. But it's not very fast, so I set concurrency to about 1k. | 2 | 2 | 0 | I am trying to scrape a website using the Scrapy framework in Python, but I am getting captchas. The server implements bot detection using Distil Networks bot detection. Is there any way I can work around it? | Working of the distil networks bot detection | 0 | 0 | 1 | 4,932
35,314,206 | 2016-02-10T11:40:00.000 | -8 | 0 | 0 | 0 | python,security,web-scraping | 35,330,104 | 2 | false | 1 | 0 | You can get over it by using tools like Selenium. It is a web testing framework that automatically loads the web browser to mimic a normal user. Once a page loads, you can scrape the content with tools such as Scrapy or Bs4. Continue loading the next page, then scrape. It's slower than normal scrapers but it does the job and gets through most detectors like Incapsula.
Hope that helps. | 2 | 2 | 0 | I am trying to scrape a website using the Scrapy framework in Python, but I am getting captchas. The server implements bot detection using Distil Networks bot detection. Is there any way I can work around it? | Working of the distil networks bot detection | -1 | 0 | 1 | 4,932
35,318,566 | 2016-02-10T14:59:00.000 | 0 | 0 | 1 | 0 | python-3.x,pip | 35,322,006 | 1 | false | 0 | 0 | I am on Windows, and you appear not to be, but maybe the following will help.
If pip is in your system's equivalent of python35/Lib/site-packages, then python3.5 -m pip should run pip so that it installs into the 3.5 site-packages.
If you do not have pip in the 3.5 site-packages, copy its directory, along with its dependencies (pip....dist-info/, setuptools/, setuptools....dist-info/, and easyinstall.py) from the 3.4 site_packages.
Or, if pip3 or even pip3.4 is in python35/Scripts, run it with its full path name so you are not running the 3.4 version. | 1 | 0 | 0 | I'm trying to install packages for my python 3.5.0 versus my python 3.4.3
I can run both by typing either python3.4 or python3.5
I have pip2 and pip3. I also ran the script sudo easy_install3 pip, which made me able to use pip3.4. But I am still having trouble installing modules for python3.5; pip3 just installs for python3.4.
I am looking to install termcolor for python3.5 and I am having no success. Can anyone help? | Use or install different versions of python3 pip | 0 | 0 | 0 | 167 |
35,318,602 | 2016-02-10T14:59:00.000 | 1 | 0 | 0 | 0 | python,cluster-analysis,data-mining,k-means | 35,321,747 | 1 | true | 0 | 0 | K-means is about minimizing the least squares. Among its largest drawbacks (there are many) is that you need to know k. Why do you want to inherit this drawback?
Instead of hacking k-means into not ignoring the order, why don't you instead look at time series segmentation and change detection approaches that are much more appropriate for this problem?
E.g. split your time series if abs(x[i] - x[i-1]) > stddev, where stddev is the standard deviation of your data set, or the standard deviation of the last 10 samples (in the above series, the standard deviation is about 3, so it would split as [1,2,2], [8,9], [0,0,0,1,1,1], because the change from 0 to 1 is not significant). | 1 | 1 | 1 | I want to find groups in a one dimensional array where order/position matters. I tried to use numpy's kmeans2 but it works only when I have numbers in increasing order.
I have to maximize the average difference between neighbour sub-arrays
For example: if I have array [1,2,2,8,9,0,0,0,1,1,1] and i want to get 4 groups the result should be something like [1,2,2], [8,9], [0,0,0], [1,1,1]
Is there a way to do it in better then O(n^k)
answer: I ended up with a modified dendrogram, where I merge neighbors only. | Modify kmeans alghoritm for 1d array where order matters | 1.2 | 0 | 0 | 112
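The split rule from the answer, run against the array from the question: with the whole-series standard deviation (about 3 here), only the jump up to 8 and the drop back to 0 are significant, which yields exactly the three segments the answer predicts:

```python
import statistics

def split_on_jumps(xs):
    """Start a new segment wherever a neighbour-to-neighbour step exceeds the stddev."""
    threshold = statistics.pstdev(xs)
    segments = [[xs[0]]]
    for prev, cur in zip(xs, xs[1:]):
        if abs(cur - prev) > threshold:
            segments.append([cur])
        else:
            segments[-1].append(cur)
    return segments

data = [1, 2, 2, 8, 9, 0, 0, 0, 1, 1, 1]
```

Using a rolling standard deviation of the last few samples, as the answer also suggests, would make the threshold adaptive instead of global.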
35,318,866 | 2016-02-10T15:13:00.000 | 0 | 0 | 0 | 1 | python,json,elasticsearch | 35,405,127 | 2 | true | 1 | 0 | This is Chrisses answer, copied from gitter.im:
You can use the dict field type for "unstructured data", as it takes arbitrary json. If the db engine is postgres, it uses jsonfield under the hood, and if the db engine is mongo, it's converted to a bson document as usual. Either way it should index automatically as expected in ES and will be queryable through the Ramses API.
The following ES queries are supported on documents/fields: nefertari-readthedocs-org/en/stable/making_requests.html#query-syntax-for-elasticsearch
See the docs for field types here, start at the high level (ramses) and it should "just work", but you can see what the code is mapped to at each level below down to the db if desired:
ramses: ramses-readthedocs-org/en/stable/fields.html
nefertari (underlying web framework): nefertari-readthedocs-org/en/stable/models.html#wrapper-api
nefertari-sqla (postgres-specific engine): nefertari-sqla-readthedocs-org/en/stable/fields.html
nefertari-mongodb (mongo-specific engine): nefertari-mongodb-readthedocs-org/en/stable/fields.html
Let us know how that works out, sounds like it could be a useful thing. So far we've just used that field type to hold data like user settings that the frontend wants to persist but for which the API isn't concerned. | 1 | 1 | 0 | I would like to give my users the possibility to store unstructured data in JSON-Format, alongside the structured data, via an API generated with Ramses.
Since the data is made available via Elasticsearch, I am trying to ensure that this data is indexed and searchable, too.
I can't find any mention of this in the docs or by searching.
Would this be possible and how would one do it?
Cheers /Carsten | Storing unstructured data with ramses to be searched with Ramses-API? | 1.2 | 0 | 0 | 95 |
35,322,508 | 2016-02-10T17:54:00.000 | 1 | 0 | 1 | 0 | function,python-3.x,libraries | 35,322,587 | 1 | true | 0 | 0 | Balance between namespaces and convenience.
The built-in functions are considered generally useful to the point where they are available by default; it would be a royal pain to need to import a module just to use the str or int constructor after all.
The built-in modules/packages (requiring an import to access) are considered less generally useful; avoiding the expense of loading them when they're not needed, and namespacing them to avoid cluttering the global namespace with tons of names people may want to use for other purposes is generally a good design pattern. | 1 | 0 | 0 | I just started learning Python and I have some confusion about using built-in functions when you have the ability to call functions from libraries and why do you even have built-in functions in the first place when you have libraries? | Built-in functions vs library-called functions | 1.2 | 0 | 0 | 93 |
35,322,629 | 2016-02-10T18:00:00.000 | 1 | 0 | 0 | 0 | python,apache,flask,amazon-redshift | 44,923,869 | 1 | false | 1 | 0 | I solved this error by turning DEBUG=False in my config file [and/or in the run.py]. Hope it helps someone. | 1 | 4 | 0 | I am using apache with mod_wsgi in windows platform to deploy my flask application. I am using sqlalchemy to connect redshift database with connection pool(size 10).
After a few days I suddenly started getting the following error.
(psycopg2.OperationalError) SSL SYSCALL error: Software caused connection abort
Can anybody suggest why I am getting this error and how to fix it?
If I restart Apache then this error goes away, but after a few days it comes back. | (psycopg2.OperationalError) SSL SYSCALL error: Software caused connection abort | 0.197375 | 1 | 0 | 3,963
35,323,318 | 2016-02-10T18:39:00.000 | 1 | 0 | 0 | 0 | python,django | 35,324,781 | 1 | true | 1 | 0 | I think by default django uses HOST = '' that equals 127.0.0.1. You can change it in settings.py however I do not know how to use two different hosts. | 1 | 1 | 0 | How does Django know the host name for a FileField that points to my MEDIA_ROOT location? I need to replace the current host name by another one. How do I do it? | How does Django know the host name for a FileField? | 1.2 | 0 | 0 | 102 |
35,324,749 | 2016-02-10T20:00:00.000 | 0 | 0 | 0 | 0 | python,view,pyqt4 | 35,326,282 | 1 | true | 0 | 1 | If you setFocus on an object in the corresponding view that was just made visible, then only that view's keyPressEvent will be triggered. No need to disable or enable keypress events. The focus does this automatically.
The solution above is the answer to the question. However, I may change from using QAction buttons to trigger changing the views, to using a tabwidget to change the views instead. | 1 | 0 | 0 | I have a Python PyQt program which has a QWidget on the main window.
I added 2 different views into the same QWidget.
When the user clicks Button1, I show view1 and hide view2.
When the user clicks Button2, I show view2 and hide view1.
Each view has its own KeyPressEvent for using arrow keys to page through records in the view.
I don't want to page through records in both views simultaneously, I just need to page the records of the active view.
How can I disable/enable the appropriate KeyPressEvent when the corresponding view is active? Or is there a better approach?
Apologies for no code, but it is difficult to simplify my working example. | PyQt: Independent KeyPressEvent between 2 different views on a form | 1.2 | 0 | 0 | 80 |
35,326,476 | 2016-02-10T21:39:00.000 | 2 | 0 | 1 | 0 | python,django | 35,326,564 | 2 | false | 1 | 0 | Create your own virtualenv
If all fails, just recreate the virtualenv from the requirements.txt and go from there
Find out how the old app was being launched
If you insist on finding the old one, IMO the most direct way is to find out how the production Django app is being run. Look for bash scripts that start it, supervisor entries, etc.
If you find how it starts, then you can pinpoint the environment it is launched in (e.g. which virtualenv)
Find the virtualenv by searching for common files
Other than that you can use find or locate command to search for files we know to exist in a virtualenv like lib/pythonX.Y/site-packages, bin/activate or bin/python etc | 2 | 2 | 0 | Deploying to a live server for an existing Django application. It's a very old site that has not been updated in 3+ years.
I was hired on contract to bring it up to date, which included upgrading the Django version to be current. This broke many things on the site that had to be repaired.
I did a test deployment and it went fine. Now it is time for deployment to live and I am having some issues....
First thing I was going to do is keep a log of the current Django version on server, incase of any issues we can roll back. I tried logging in Python command prompt and importing Django to find version number, and it said Django not found.
I was looking further and found the version in a pip requirements.txt file.
Then I decided to update the actual django version on the server. Update went through smoothly. Then I checked the live site, and everything was unchanged (with the old files still in place). Most of the site should have been broken. It was not recognizing any changes in Django.
I am assuming the reason for this might be that the last contractor used virtualenv? And that's why it is not recognizing Django, or the Django update are not doing anything to the live site?
That is the only reason I could come up with to explain this issue, as since there is a pip requirements.txt file, he likely installed Django with pip, which means Python should recognize the path to Django.
So then I was going to try to find the source path for the virtualenv with command "lsvirtualenv". But when I do that, even that gives me a "command not found" error.
My only guess is that this was an older version of virtualenv that does not have this command? If that is not the case, I'm not sure what is going on.
Any advice for how I find the information I need to update the package versions on this server with the tools I have access to? | How to locate a virtualenv install | 0.197375 | 0 | 0 | 5,340 |
35,326,476 | 2016-02-10T21:39:00.000 | 0 | 0 | 1 | 0 | python,django | 35,345,062 | 2 | false | 1 | 0 | Why not start checking what processes are actually running, and with what commandline, using ps auxf or something of the sort. Then you know if its nginx+uwsgi or django-devserver or what, and maybe even see the virtualenv path, if it's being launched very manually. Then, look at the config file of the server you find.
Alternatively, look around using netstat -taupen, for example, to see which processes are listening on which ports. This makes even more sense if there's a reverse proxy like nginx or whatever running, and you want to know what it's proxying to.
The requirements.txt I'd ignore completely. You'll get the same, but correct information from the virtualenv once you activate it and run pip freeze. The file's superfluous at best, and misleading at worst.
Btw, if this old contractor compiled and installed a custom Python, (s)he might not even have used a virtualenv, while still avoiding the system libraries and PYTHONPATH. Unlikely, but possible. | 2 | 2 | 0 | Deploying to a live server for an existing Django application. It's a very old site that has not been updated in 3+ years.
I was hired on contract to bring it up to date, which included upgrading the Django version to be current. This broke many things on the site that had to be repaired.
I did a test deployment and it went fine. Now it is time for deployment to live and I am having some issues....
First thing I was going to do is keep a log of the current Django version on server, incase of any issues we can roll back. I tried logging in Python command prompt and importing Django to find version number, and it said Django not found.
I was looking further and found the version in a pip requirements.txt file.
Then I decided to update the actual django version on the server. Update went through smoothly. Then I checked the live site, and everything was unchanged (with the old files still in place). Most of the site should have been broken. It was not recognizing any changes in Django.
I am assuming the reason for this might be that the last contractor used virtualenv? And that's why it is not recognizing Django, or the Django update are not doing anything to the live site?
That is the only reason I could come up with to explain this issue, as since there is a pip requirements.txt file, he likely installed Django with pip, which means Python should recognize the path to Django.
So then I was going to try to find the source path for the virtualenv with command "lsvirtualenv". But when I do that, even that gives me a "command not found" error.
My only guess is that this was an older version of virtualenv that does not have this command? If that is not the case, I'm not sure what is going on.
Any advice for how I find the information I need to update the package versions on this server with the tools I have access to? | How to locate a virtualenv install | 0 | 0 | 0 | 5,340 |
35,327,272 | 2016-02-10T22:29:00.000 | 1 | 0 | 0 | 0 | python,arrays,scipy,trend | 35,329,441 | 2 | false | 0 | 0 | I would look into numpy.polyfit but I'm not sure what performance gain it has over scipy.stats.linregress.
It's pretty fast from my experience. You might have to do some math on your own to get r and p values from residuals and covariance matrix. | 1 | 1 | 1 | I've got a 3d array of shape (time,latitude,longitude). I'd like to calculate the linear trend at each lon/lat point.
I know I can simply loop over all points and use scipy.stats.linregress at each point. However, that gets quite slow for large arrays.
The scipy function "detrend" can calculate and remove linear trends for n-dimensional arrays, and is really fast. But I can't find any method to just calculate the trends.
Does anyone know of a fast way to calculate slope, intercept, r value and p value for each point on a large grid?
Any help/suggestion is greatly appreciated!
Cheers
Joakim | How do I calculate linear trend for a multi-dimensional array in Python | 0.099668 | 0 | 0 | 1,476 |
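For reference, the per-point math behind both scipy.stats.linregress and numpy.polyfit is a handful of least-squares sums, and those sums are what you would vectorize across the (lat, lon) grid. A pure-Python illustration for a single grid point (slope, intercept and correlation coefficient r):

```python
def linear_trend(t, y):
    """Least-squares slope, intercept and correlation r for one time series."""
    n = len(t)
    mean_t = sum(t) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_t) * (b - mean_y) for a, b in zip(t, y))
    var_t = sum((a - mean_t) ** 2 for a in t)
    var_y = sum((b - mean_y) ** 2 for b in y)
    slope = cov / var_t
    intercept = mean_y - slope * mean_t
    r = cov / (var_t * var_y) ** 0.5 if var_y else 0.0
    return slope, intercept, r
```

Vectorizing this means replacing each sum with a numpy reduction along the time axis, which computes every grid point at once instead of looping.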
35,328,278 | 2016-02-10T23:50:00.000 | 0 | 0 | 0 | 0 | python-2.7 | 35,330,126 | 3 | false | 0 | 0 | You can use
1) Beautiful Soup
2) Python Requests
3) Scrapy
4) Mechanize
... and many more. These are the most popular tools, and easy to learn for the beginner.
From there, you can branch out to more complex stuff such as UserAgentSpoofing, HTML Load balancing, Regex, XPATH and CSS Selectors. You will need these to scrape more difficult sites that have protection or login fields.
Hope that helps.
Cheers | 2 | 0 | 0 | I am looking for some real help. I want to do web scraping using Python; I need it because I want to import some data into a database. How can we do that in Python? What libraries do we need? | Python Web scraping- Required Libraries and how to do it | 0 | 0 | 1 | 71
35,328,278 | 2016-02-10T23:50:00.000 | 0 | 0 | 0 | 0 | python-2.7 | 44,280,408 | 3 | false | 0 | 0 | As others have suggested, I too would use Beautiful Soup and Python Requests, but if you get problems with websites which have to load some data with Javascript after the page has loaded and you only get the incomplete html with Requests, try using Selenium and PhantomJS for the scraping. | 2 | 0 | 0 | I am looking for some real help. I want to do web scraping using Python; I need it because I want to import some data into a database. How can we do that in Python? What libraries do we need? | Python Web scraping- Required Libraries and how to do it | 0 | 0 | 1 | 71
35,330,282 | 2016-02-11T03:29:00.000 | 3 | 0 | 0 | 0 | python,arrays,numpy | 35,330,365 | 3 | false | 0 | 0 | Numpy does not omit the dimension of an array. It is a library built for multidimensional arrays (not just 1d or 2d arrays), and so it makes very clear distinctions between arrays of different dimensions (it cannot assume that any array is just a degenerate form of a higher dimension array, because the number of dimensions is infinite, conceptually).
An array with dimension (81, 1) is a 2d array with the value of the 2nd dimension equal to 1.
An array with dimension (81, ) is just a 1d array.
When you write C[:,0], you are referring to a column. Therefore
If you write C[:,0] = X, you are assigning a column to one of the columns of C (which happens to be the only one), and therefore are not changing the dimension of C.
If you write C = X, then you are saying that C is now a column as well, and therefore are changing the dimension of C. | 1 | 5 | 1 | I am a beginner user of Python. I used to work with matlab intensively. Now I am shifting to python. I have a question about the dimension of an array.
I import Numpy
I first create an array X, then I use some embedded function, like, sum, to play with my array. Eventually, when I try to check the dimension of my array X, it becomes: X.shape, outputs (81,). The number 81 is what I expected, but I also expect the 2nd dimension is 1, rather than just omitted. This makes me feel very uncomfortable even though when I directly type X, it output correctly, i.e., one column and the figures in X are all as expected.
Then when I use another array Y, which has Y.shape output (81,1), and I type X*Y, I expected to see an array of dimension (81,1), but instead I saw an array of dimension (81,81).
I don't know what the underlying mechanism is that produces these results.
The way I solve this problem is very stupid. I first create a new array C = zeros((81,1)), so C literally has dimension (81,1), then I assign my X to C by typing C[:,0]=X, then C.shape = (81,1). Note that if I type C=X, then C.shape=(81,), which goes back to my problem. So I can solve my problem, but I am sure there is better method to solve my problem and I also don't understand why python would produce something like (81,), with the 2nd dimension omitted. | Why Numpy sometimes omits the dimension of an array | 0.197375 | 0 | 0 | 2,639 |
35,330,326 | 2016-02-11T03:35:00.000 | 1 | 0 | 0 | 0 | python-3.x,scroll,pygame,interactive | 35,336,224 | 1 | true | 0 | 1 | You have to translate the mouse position on the screen to the world coordinates. When you scroll your map, you generate an offset, and you have to use that for translation.
Say if the map is not scrolled ((0, 0) of the map is at (0, 0) of the screen), and a player clicks on (12, 12), you know she clicked on (12, 12) of the world.
When this player scrolled the map to the right (let's say 100px), then you blit the map surface at (-100, 0) of the screen. Now if the player clicks on (12, 12), you can calculate that (12, 12) of the screen is actually (112, 12) on the map (screenX - scrollX, screenY - scrollY).
So if you want to determine which country a player clicked on, always use the translated coordinates (not the screen coordinates), and the issue you describe disappears.
Right now the entire map is sized to fit within the game screen; we are considering making the map scrollable. A possibility I found was that we generate the map on its own surface that's bigger than the game screen, then blit a portion of the map surface onto a subsurface of the game screen and let player input offset what portion of the map is being drawn at a given time.
I know this type of scrolling works (that's not what this question is about); the question is: would using this kind of scrolling prevent us from letting the player properly interact with the countries on the map using the mouse?
We already tried a form of scrolling where we just offset the indexes of the map matrix to move the countries positions that way, but this gave the server problems handling attack orders as the same two countries could be at different coordinates for different players. The scrolling method I described would allow the map itself to remain stationary for all players, but how would we determine the mouse's position relative to the world map as opposed to its position on the game screen? | pygame- Making mouse-interactive elements on a "scrolling" surface | 1.2 | 0 | 0 | 192 |
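A minimal, pygame-free sketch of the translation the answer describes (function and variable names are illustrative):

```python
def screen_to_world(screen_pos, camera_offset):
    """Convert a click on the screen to map (world) coordinates.

    camera_offset is where the map surface is blitted onto the
    screen, e.g. (-100, 0) after scrolling 100 px to the right.
    """
    sx, sy = screen_pos
    ox, oy = camera_offset
    return (sx - ox, sy - oy)

# Player scrolled 100 px right, then clicked (12, 12) on screen:
print(screen_to_world((12, 12), (-100, 0)))  # (112, 12)
```

Because the map surface itself never moves, the same world coordinate identifies the same country for every player, which avoids the server-side mismatch described above.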
35,330,685 | 2016-02-11T04:15:00.000 | 0 | 0 | 1 | 0 | python,multithreading,cpu-usage,processor | 35,330,769 | 2 | false | 0 | 0 | There is no real answer to this question, and everyone might have a different view. The number of threads for your application should be decided after testing x scenarios with y threads each and measuring the performance. Performance depends on how the OS scheduler assigns your threads to the available CPU cores, which in turn depends on the CPU load and the number of processes running. If you have 4 cores, it doesn't mean it's only good to execute 4 threads; in fact you can run 23 threads too. This is what we call the illusion of parallelism given to us by the scheduler: by scheduling process after process, it makes us think everything is running simultaneously.
Here is the thing
If you run 1 thread, you may not gain enough performance. As you keep increasing threads towards infinity, the scheduler will take more time to schedule them and hamper your overall application performance. | 1 | 1 | 0 | I am running a Python program on a server that has Python 2.7.6. I have used the threading module of Python to create multiple threads. I have created 23 threads, so I am confused whether all my processor cores are actually being used or not. Is there any way I can check this on my server? Any suggestion as to the ideal number of threads that should be spawned, according to the number of processors we have, in order to improve the efficiency of my program? | How should be spawn threads in python | 0 | 0 | 0 | 610
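For the "how do I check" part of the question, the standard library can at least report how many cores the machine exposes (whether CPython threads run on them in parallel is a separate, scheduler-dependent story):

```python
import multiprocessing
import os

print(multiprocessing.cpu_count())          # logical cores on the machine

# On Linux, the set of CPUs this process is actually allowed to run on:
if hasattr(os, "sched_getaffinity"):
    print(len(os.sched_getaffinity(0)))
```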
35,332,395 | 2016-02-11T06:35:00.000 | 1 | 0 | 1 | 0 | python,algorithm | 35,332,791 | 3 | false | 0 | 0 | Using sorted list may help. First element will always be minimum and last element will always be maximum, with O(1) complexity to get the value.
Using binary search (bisect) to locate positions in the sorted list is O(log(n)), though the list insert/delete itself still shifts elements, which is O(n) worst case. | 2 | 1 | 0 | Suppose I am maintaining a set of integers (it will add/remove dynamically, and the integers might have duplicate values), and I need to efficiently find the max/min element of the current set. Wondering if there are any better solutions?
My current solution is to maintain a max heap and a min heap. I am using Python 2.7.x and am open to any 3rd-party pip plug-ins to fit my problem. | find minimal and maximum element in Python | 0.066568 | 0 | 0 | 493
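A sketch of the sorted-list idea using the standard-library bisect module (bisect finds positions in O(log n); the list insert itself is O(n), so a balanced-tree container such as sortedcontainers.SortedList would be the next step up):

```python
import bisect

class MinMaxBag:
    """Multiset of ints kept sorted, so min/max are O(1) lookups."""

    def __init__(self):
        self._data = []

    def add(self, x):
        bisect.insort(self._data, x)          # binary search + list insert

    def remove(self, x):
        i = bisect.bisect_left(self._data, x)
        if i == len(self._data) or self._data[i] != x:
            raise ValueError("%r not in bag" % (x,))
        del self._data[i]

    def min(self):
        return self._data[0]

    def max(self):
        return self._data[-1]

bag = MinMaxBag()
for v in (3, 1, 2, 2):                        # duplicates are fine
    bag.add(v)
print(bag.min(), bag.max())                   # 1 3
bag.remove(3)
print(bag.max())                              # 2
```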
35,332,395 | 2016-02-11T06:35:00.000 | 1 | 0 | 1 | 0 | python,algorithm | 35,332,456 | 3 | false | 0 | 0 | min(list_of_ints)
will yield the minimum of the list, and...
max(list_of_ints)
will yield the maximum of the list.
Hope that helps.... | 2 | 1 | 0 | Suppose I am maintaining a set of integers (it will add/remove dynamically, and the integers might have duplicate values), and I need to efficiently find the max/min element of the current set. Wondering if there are any better solutions?
My current solution is to maintain a max heap and a min heap. I am using Python 2.7.x and am open to any 3rd-party pip plug-ins to fit my problem. | find minimal and maximum element in Python | 0.066568 | 0 | 0 | 493
35,333,875 | 2016-02-11T08:09:00.000 | 1 | 0 | 0 | 0 | python,pygame,resolution | 35,334,029 | 2 | false | 0 | 1 | I usually create a base resolution and then, whenever the screen is resized, scale all the assets and surfaces by the width/height ratios.
This works well if you have high-resolution assets and scale them down, but it would pixelate small images when scaling up.
You can also create multiple asset files for each resolution, and whenever your resolution goes above one of the available asset resolutions you switch images. You can think of it in the context of a CSS media query to better understand.
However, I have to adapt it for smaller resolutions.
There is a lot of hardcoded value in the code.
Is there a simple way to change the resolution, like resizing the final image at the end of each loop, just before drawing it? | Pygame, change resolution of my whole game | 0.099668 | 0 | 0 | 1,469
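If the hard-coded values are all in 1920x1080 terms, one option (besides rendering to an off-screen surface and scaling the finished frame with pygame.transform.smoothscale once per loop, as the question suggests) is to map design-resolution coordinates onto the real window. A pygame-free sketch of that ratio mapping:

```python
BASE_W, BASE_H = 1920, 1080            # resolution the game was designed for

def to_window(pos, window_size):
    """Map a hard-coded 1920x1080 coordinate onto the actual window."""
    wx, wy = window_size
    return (pos[0] * wx // BASE_W, pos[1] * wy // BASE_H)

# The centre of the design stays the centre of a 1280x720 window:
print(to_window((960, 540), (1280, 720)))   # (640, 360)
```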
35,335,781 | 2016-02-11T09:49:00.000 | 1 | 0 | 1 | 1 | python,windows,focus | 35,335,984 | 1 | false | 0 | 0 | Running with pythonw (or changing extension to .pyw which is the same) may help.
pythonw.exe doesn't create a console window, though I'm not certain about focus. It doesn't create any windows by default either, so it shouldn't steal it.
I run it using sorter.py <arguments>. I'm on Windows 10. What can I do, if anything, to get this to run in the background and not take focus from what I'm doing?
I'm also sorry if this has already been answered but I couldn't find anything that worked. | Start Python in Background | 0.197375 | 0 | 0 | 219 |
35,336,992 | 2016-02-11T10:39:00.000 | 0 | 0 | 0 | 0 | python,django,postgresql,django-migrations | 35,343,687 | 1 | true | 1 | 0 | Try running migrate --fake-initial since you're getting the "relation already exists" error. Failing that, I would manually back up each one of my migration folders, remove them from the server, then re-generate migration files for each app and run them all again from scratch (i.e., the initial makemigrations). | 1 | 0 | 0 | I'm developing a Django 1.8 application locally and having reached a certain point a few days ago, I uploaded the app to a staging server, ran migrations, imported the sql dump, etc. and all was fine.
I've since resumed local development which included the creation of a new model, and changing some columns on an existing model. I ran the migrations locally with success, but after rsync-ing my files to the staging server, I get a 'relation already exists' error when running manage.py migrate. And when I visit the admin page for the new model, I get a 'column does not exist' error.
It seems as though the migrations for this model were partially successful but I cannot migrate the entirety of the model schema. I've tried commenting out parts of the migration files, but was not successful. Would it be possible to create the missing columns via psql? Or is there some way of determining what is missing and then manually write a migration to create the missing database structure?
I'm using Django 1.8.6, Python 3.4.3, and PostgreSQL 9.3.6. Any advice on this would be great. Thanks. | Migrations error in Django after moving to new server | 1.2 | 1 | 0 | 772 |
35,339,460 | 2016-02-11T12:33:00.000 | 4 | 0 | 0 | 0 | python,django | 35,339,613 | 1 | true | 1 | 0 | This will work fine. Views can be wherever you want.
You can add the package that is your site (the one that has settings.py in it) to INSTALLED_APPS, and then a models.py in it, management commands, et cetera will also work fine.
Apps are handy when things become big and you want to split them into smaller parts. | 1 | 1 | 0 | I am developing a project using django python server. I have created my project on django and put all my files including views.py in the project folder and I am using it without creating any app and its working fine.
Is this the right way of doing it (or) I need to create an app instead and put all my files in the project ? | Can I put my views.py file in the project folder of django? | 1.2 | 0 | 0 | 725 |
35,339,916 | 2016-02-11T12:54:00.000 | 0 | 0 | 1 | 0 | python,urllib2 | 35,340,559 | 1 | false | 0 | 0 | Sometimes Python can't see modules in the site-packages (or dist-packages on *nix) folder, especially when you use an executable generator (like py2exe) or use the Python interpreter as a zip package.
Your module six is in the site-packages folder, but urllib2 is not (it is part of the standard library). I have solved my (similar) problem by copying all modules from site-packages to the top-level (site-packages/..) folder.
P.S. I know, it is a bad way, but you can do it with your copy of interpreter. | 1 | 0 | 0 | I apologize if this question has been answered in a different thread, I have been looking everywhere for the last week but couldn't find anything that is specific to my case.
I created a .py program that is working as expected, however the moment that I try to convert it into an exe, it starts to generate the following error:
File "site-package\six.py", line 82, in _import_module
ImportError: No module named urllib2
I understand that the six module was made to facilitate running the code whether using python 2 or 3 and I also understand that urllib2 has been split into request and error.
I went through the six.py file to check references of urllib2 but I am not sure what kind of modification I need to make, I am kind of new to Python.
I tried this in Python 2.7.10 and Python 3.4, and I really don't understand what I am missing. I also tried PyInstaller and py2exe and got the same error message.
I didn't include the code I wrote because the error is coming from the six.py file itself.
Any help that you can provide me to fix this is greatly appreciated.
Thank you!
Intidhar | no module named urllib2 when converting from .py to .exe | 0 | 0 | 1 | 287 |
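One common workaround for frozen executables (an assumption about the cause, but typical) is to import the module eagerly so py2exe/PyInstaller's static analysis can see it, instead of relying on six's lazy _import_module:

```python
# Give the freezer an import it can trace at build time:
try:
    import urllib2                      # Python 2
except ImportError:
    import urllib.request as urllib2    # Python 3 split urllib2 up

print(hasattr(urllib2, "urlopen"))      # True on both versions
```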
35,341,566 | 2016-02-11T14:09:00.000 | 1 | 1 | 0 | 0 | python,c,raspberry-pi,matrix-multiplication,raspberry-pi2 | 35,343,669 | 1 | true | 0 | 0 | Mathematica is part of the standard Raspbian distribution. It should be able to multiply matrices. | 1 | 0 | 1 | What matrix multiplication library would you recommend for Raspberry Pi 2?
I'm thinking about BLAS or NumPy. What do you think?
I'm wondering if there is an external hardware module for matrix multiplication available.
Thank you! | Raspberry pi matrix multiplication | 1.2 | 0 | 0 | 357 |
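If NumPy is the route taken (it delegates to whatever BLAS it was built against, so it covers both suggestions), matrix multiplication is a one-liner:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
b = np.arange(6).reshape(3, 2)

c = a.dot(b)            # BLAS-backed matrix product
print(c)
print(c.shape)          # (2, 2)
```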
35,346,456 | 2016-02-11T17:46:00.000 | 0 | 0 | 0 | 0 | javascript,python,google-chrome-extension | 35,349,167 | 2 | false | 1 | 0 | The only way to get the output of a Python script inside a content script built with Javascript is to call the file with XMLHttpRequest. As you noted, you will have to use an HTTPS connection if the page is served over HTTPS. A workaround for this is to make a call to your background script, which can then fetch the data in whichever protocol it likes, and return it to your content script. | 1 | 6 | 0 | I'm writing a chrome extension that injects a content script into every page the user goes to. What i want to do is to get the output of a python function for some use in the content script (can't write it in javascript, since it requires raw sockets to connect to my remote SSL server).
I've read that one might use CGI and Ajax or the like, to get output from the Python code into the JavaScript code, but I ran into 3 problems:
I cannot allow hosting the python code on a local server, since it is security sensitive data that only the local computer should be able to know.
Chrome demands that HTTP and HTTPS cannot mix: if the user goes to an HTTPS website, I can't host the Python code on an HTTP server.
I don't think Chrome even supports CGI in extensions. When I try to access a local file, all it does is print out the text (the Python code itself) instead of what I defined to be its output (I tried to do so using Flask). As I said in 1, I shouldn't even try this anyway, but this is just a side note.
So my question is, how do I get the output of my Python functions inside a content script built with JavaScript? | Combining Python and Javascript in a chrome plugin | 0 | 0 | 1 | 7,214
35,349,113 | 2016-02-11T20:10:00.000 | 0 | 0 | 1 | 0 | python,multithreading,tkinter,vpython | 35,349,400 | 1 | true | 0 | 1 | No, there is no widget to do that. | 1 | 0 | 0 | Is there a widget that shows a thread visually for Tkinter?
The reason why: I want to open a VPython window inside a Tkinter window.
I know there is a possibility to open a VPython thread on the side while Tkinter is active, but can I show it inside Tkinter? (Like in some sort of Frame.)
35,358,825 | 2016-02-12T09:20:00.000 | 0 | 0 | 0 | 0 | python,ios,iphone,selenium-webdriver,appium | 35,365,816 | 1 | false | 0 | 1 | No, there is no way to open the push notification tray for iOS on a hardware device using Appium and Python, or any public iOS API. | 1 | 0 | 0 | Is there any way to open the push notification tray for iOS on a hardware device using Appium and Python?
Regards | Open push notification tray on iOS hardware device using Appium | 0 | 0 | 0 | 322 |
35,361,811 | 2016-02-12T11:45:00.000 | -1 | 0 | 1 | 0 | python,python-3.x,random,numbers | 61,278,365 | 3 | false | 0 | 0 | Here we are limited by memory, so we can generate random numbers up to the maximum a system can reach. Just place the n-digit numbers you want in the condition and you can get the desired result. As an example, I tried 6-digit random numbers. One can adapt it as per the requirements. Hope this solves your question to an extent.
import sys
from random import randint
for i in range(sys.maxsize):
    print(randint(0, 9), randint(0, 9), randint(0, 9), randint(0, 9), randint(0, 9), randint(0, 9), sep='') | 1 | 2 | 0 | How do I generate n random numbers in Python 3? n is a to-be-determined variable, preferably natural numbers (integers > 0).
All answers I've found take random integers from a range; however, I don't want to generate numbers from a range (unless the range is 0 to infinity). | How to Generate N random numbers in Python 3 between 0 to infinity | -0.066568 | 0 | 0 | 11,929
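A sketch under one stated assumption: a uniform draw over all natural numbers doesn't exist, so "0 to infinity" can only be honoured up to a practical upper bound (here sys.maxsize):

```python
import random
import sys

def random_naturals(n):
    """Return n pseudo-random natural numbers.

    'Unbounded' is an approximation: sys.maxsize stands in for
    infinity as the practical upper limit.
    """
    return [random.randint(1, sys.maxsize) for _ in range(n)]

nums = random_naturals(5)
print(len(nums), all(x > 0 for x in nums))   # 5 True
```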
35,362,241 | 2016-02-12T12:05:00.000 | 1 | 0 | 0 | 0 | python,python-2.7,numpy,floating-point | 35,362,451 | 1 | true | 0 | 0 | That's because floats (most of the time) cannot represent the exact value you put in. Try print("%.25f" % np.float64(0.1)), which returns 0.1000000000000000055511151; that's not exactly 0.1.
Numpy already provides a good workaround for almost-equal (floating point) comparisons: np.testing.assert_almost_equal so you can test by using np.testing.assert_almost_equal(20,np.arange(5, 60, 0.1)[150]).
The reason your second example provides the exact value is that 0.5 can be represented as the exact float 2**(-1) = 0.5, and therefore multiplications with this value do not suffer from that floating point problem. | 1 | 1 | 1 | Why does np.arange(5, 60, 0.1)[150] yield 19.999999999999947, but np.arange(5, 60, 0.5)[30] yield 20.0?
Why does this happen? | numpy.arange floating point errors | 1.2 | 0 | 0 | 1,265 |
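A small demonstration of both points in the answer; the exact arange value can vary slightly between NumPy versions, so the comparisons below stick to tolerances rather than equality:

```python
import numpy as np

# 0.1 has no exact binary representation:
print("%.25f" % np.float64(0.1))        # 0.1000000000000000055511151

a = np.arange(5, 60, 0.1)
print(repr(a[150]))                      # something very close to 20

# Compare floats with a tolerance, never with ==:
print(np.isclose(a[150], 20.0))          # True
np.testing.assert_almost_equal(a[150], 20.0)
```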
35,363,795 | 2016-02-12T13:25:00.000 | 0 | 0 | 1 | 0 | python,windows,python-2.7 | 35,412,636 | 2 | false | 0 | 0 | Running this solved the problem: pip install scipy-0.16.1-cp27-none-win_amd64.whl After doing this, all other packages were able to be re-installed and successfully imported. | 1 | 0 | 1 | I've been working in Jupyter IPython notebook (using Python 2.7) and haven't had any issues before this importing and installing packages. Most of my packages were installed via Anaconda. Now I'm randomly having problems importing packages that I've always been able to import. I have an example below. Please help. I'm pretty new to Python so I'm completing stuck on what the problem is and how to fix it.
import pandas as pd
ImportError Traceback (most recent call last)
in ()
----> 1 import pandas as pd
C:\Users\IBM_ADMIN\Anaconda2\lib\site-packages\pandas\__init__.py in ()
11 "pandas from the source directory, you may need to run "
12 "'python setup.py build_ext --inplace' to build the C "
---> 13 "extensions first.".format(module))
14
15 from datetime import datetime
ImportError: C extension: No module named numpy not built. If you want to import pandas from the source directory, you may need to run 'python setup.py build_ext --inplace' to build the C extensions first. | Error Importing Python Packages into Jupyter | 0 | 0 | 0 | 1,319 |
35,363,880 | 2016-02-12T13:30:00.000 | 0 | 1 | 1 | 0 | python,web | 35,363,981 | 1 | false | 0 | 0 | Yes, there is a way to do this. If you are on a server running some form of Linux, you can use crontab. As for server hosting, I don't know of any free servers, but there are always servers for small fees. | 1 | 0 | 0 | I have a Python script that, every time I run it, collects data from different sites, stores it into a file and then runs some analysis.
What I want to do next is to somehow install Python and all the packages that I need on a server, for example, and create a task; let's say that every day at 3 p.m. the code I have written executes without me being around, and the data and results are stored into a table.
My question would be: is this doable? | Run Automated Offline Tasks | 0 | 0 | 0 | 56
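As a concrete illustration of the crontab suggestion (the paths and filenames are assumptions for illustration, not from the question), an entry added via crontab -e for a daily 3 p.m. run might look like:

```
# m  h  dom mon dow  command
0 15 *   *   *   /usr/bin/python /home/user/collector.py >> /home/user/collector.log 2>&1
```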
35,363,982 | 2016-02-12T13:35:00.000 | 0 | 0 | 0 | 0 | python,django,django-models | 35,364,106 | 3 | false | 1 | 0 | So you have one model in which you want to save a list of strings (links). The easiest way to do that is by creating a separate model with a textfield and a one-to-many relation. | 1 | 0 | 0 | I am currently working on a project which involves a django model which should have a field containing links to images on AWS S3.
The field should be a list of strings but django has no default field for this.
I have searched online, and a way to solve this is by creating another model called imagesModel and linking them by a ForeignKey. But in my case I really only need a list of strings, so I believe there should be an easier way to accomplish this?
Some other people suggest creating a custom field to hold the list of strings and some suggest using JSON field to hold the list of strings.
I think it should be rather common to store links to images on cloud in django models and there should be a conventional way to do this.
Any help please? | What is the best way to store list of links to cloud files in django model | 0 | 0 | 0 | 516 |
35,364,084 | 2016-02-12T13:40:00.000 | 0 | 1 | 0 | 0 | javascript,python,ajax,cgi | 35,366,240 | 1 | false | 1 | 0 | From the question I read that you have already managed to run a Python script in a web server via CGI and you already know how to do an HTTP (ajax) request from your JavaScript to that web service.
When you now close the page in your browser (or an excavator cuts your line), the backend python script is not terminated. In fact, how should the backend even know that you have closed the page? Your Python script is still running in the backend, but no one will be left to capture the HTTP response of the web server and display it to you.
However, when you want to start some kind of daemon, a program that is supposed to run in the backend for a very long time, then your Python script should spin off that task via a Popen in a variant that keeps the child process alive, even when the script has returned its HTTP response (and possibly even after the web server has shut down).
This pattern is sometimes used to remote-control little servers that mock IoT devices in test environments. Just start and stop the simulation via some fire-and-forget HTTP requests triggered from a simple interactive web page. | 1 | 0 | 0 | I am currently working on an application, ROFFLE. I may not be very good at phrasing this correctly. What am I able to do right now?
A user goes to a website and clicks a button, and an Ajax request is made to a Python file (test.py), but when he exits, the request is aborted and the processing done so far goes to waste.
What do I want to do?
As the user clicks the button, the processing starts. The script should not be killed even if the user leaves the webpage. In simple words, the JavaScript part should be limited to triggering/queuing the Python script for execution (with input provided online), which has to be deployed by a web server that supports it via CGI.
How can this be implemented?
Please note:
1. This is a web application and cannot be standalone software | Asynchronous unblocked Execution/triggering of python script through javascript | 0 | 0 | 0 | 68
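A sketch of the fire-and-forget Popen variant the answer describes (POSIX-flavoured: start_new_session detaches the worker from the request handler that spawned it):

```python
import subprocess
import sys

def launch_detached(args):
    """Start a worker that keeps running after the CGI request returns."""
    return subprocess.Popen(
        args,
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
        start_new_session=True,     # POSIX: new session, survives the parent
    )

# Stand-in for the long-running task (the real call would be
# something like [sys.executable, "worker.py", job_id]):
worker = launch_detached([sys.executable, "-c", "pass"])
print(worker.pid > 0)               # True
```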
35,368,557 | 2016-02-12T17:18:00.000 | 11 | 1 | 0 | 0 | telegram-bot,python-telegram-bot | 35,375,185 | 6 | false | 0 | 0 | Filter messages by the field update.message.from.id | 1 | 23 | 0 | When I send a message to my Telegram Bot, it responds with no problems.
I want to limit access such that I, and only I, can send messages to it.
How can I do that? | How To Limit Access To A Telegram Bot | 1 | 0 | 1 | 27,010 |
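A minimal, library-agnostic sketch of that filter (the numeric id is a placeholder for your own user id; the nested keys follow the Bot API field names from the answer):

```python
ALLOWED_USER_ID = 123456789      # placeholder: your own Telegram user id

def is_authorized(update):
    """True only for updates whose message.from.id matches the owner."""
    sender = update.get("message", {}).get("from", {})
    return sender.get("id") == ALLOWED_USER_ID

# Drop anything that fails this check before handling the command:
print(is_authorized({"message": {"from": {"id": 123456789}}}))  # True
print(is_authorized({"message": {"from": {"id": 42}}}))         # False
```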
35,371,284 | 2016-02-12T20:05:00.000 | 2 | 0 | 1 | 0 | ipython-notebook,jupyter-notebook | 68,431,736 | 3 | false | 0 | 0 | Hope you've already found how to recover your lost Jupyter notebook work. If not, try the following:
Go to Anaconda Navigator (or go to step 3 for Visual Studio Code).
Launch a Jupyter Lab
In Jupyter Lab, open a Terminal window
Launch iPython in the terminal by typing ipython and hitting enter
Run "%history -g"
All your codes are stored in history and each cell compilation that you would've done in the past shows up there.
Copy+Paste it back to a new Jupyter notebook and you are ready to go again! | 2 | 13 | 0 | I accidentally deleted an ipython notebook (.ipynb) when I meant to delete an untitled notebook and didn't realize the other notebook was selected. Has anyone ever been able to recover a deleted notebook? | Recover deleted ipython/jupyter notebook? | 0.132549 | 0 | 0 | 6,537
35,371,284 | 2016-02-12T20:05:00.000 | 0 | 0 | 1 | 0 | ipython-notebook,jupyter-notebook | 47,054,976 | 3 | false | 0 | 0 | Have a look at the sub-folder called .ipynb_checkpoints in the folder where you save the .ipynb file; sometimes Jupyter will save a copy of the current .ipynb file under that folder. And if you are lucky enough, you will get it back. | 2 | 13 | 0 | I accidentally deleted an ipython notebook (.ipynb) when I meant to delete an untitled notebook and didn't realize the other notebook was selected. Has anyone ever been able to recover a deleted notebook? | Recover deleted ipython/jupyter notebook? | 0 | 0 | 0 | 6,537
35,371,372 | 2016-02-12T20:10:00.000 | 0 | 0 | 1 | 0 | python,python-3.x,random,choice | 35,371,559 | 2 | false | 0 | 0 | Shuffle the list and pop elements from the top. That will only produce each list element once. | 1 | 0 | 0 | Is there a way to "pseudo"-randomly select an element from a list that wasn't chosen before? I know about the choice function, which returns a random item from the list, but without taking into account previously chosen items. I could keep track of which elements were already picked, and keep randomly choosing another not-yet-selected item, but this might involve nested loops, etc.
I could also, for example, remove the chosen element from the list at each iteration, but this does not seem like a good solution either.
My question is: is there an "aware" choice function that selects only items that weren't chosen before? Note that I'm not asking how to implement such a function, but possible solutions are of course welcome too. | Pseudo-randomly pick an element from a list only if it was not chosen yet | 0 | 0 | 0 | 953
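A sketch of the shuffle-then-pop approach from the answer (working on a copy, so the original list is untouched):

```python
import random

items = ["red", "green", "blue", "yellow"]

bag = list(items)        # copy, so the original survives
random.shuffle(bag)

picks = [bag.pop() for _ in range(len(items))]

# Every element was produced exactly once, in pseudo-random order:
print(sorted(picks) == sorted(items))   # True
```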
35,371,697 | 2016-02-12T20:32:00.000 | 2 | 1 | 0 | 0 | python,django,testing,protractor,tox | 35,371,901 | 2 | true | 1 | 0 | I think it should work if you modify your path in the manage.py file to include django-protractor directory, because the Django management command line uses manage.py. | 1 | 3 | 0 | I'm using tox to run protractor tests which will test an application which uses django+angularjs, there is a glue library (django-protractor) which makes this easier, except that it makes the call to protractor inside a django management command, and relies on $PATH to show it where protractor is.
So if I set the $PATH properly before running tox, it works fine, but I'd rather not require all the devs to do that manually. | How can I add to $PATH with tox? | 1.2 | 0 | 0 | 1,975 |
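One way to follow the answer without asking every developer to touch their shell: prepend the protractor location to PATH at the top of manage.py, before the management command runs. The node_modules path is an assumption for illustration, and in manage.py you would normally derive the base directory from __file__:

```python
import os

# Assumed install location of the protractor binary:
NODE_BIN = os.path.join(os.getcwd(), "node_modules", ".bin")

# Prepend rather than replace, so everything else on PATH still resolves:
os.environ["PATH"] = NODE_BIN + os.pathsep + os.environ.get("PATH", "")

print(os.environ["PATH"].split(os.pathsep)[0] == NODE_BIN)  # True
```

Because tox invokes the tests through manage.py, any subprocess the management command spawns inherits this adjusted PATH.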
35,376,747 | 2016-02-13T06:11:00.000 | 2 | 0 | 0 | 0 | android,python,kivy,buildozer | 35,383,918 | 1 | true | 0 | 1 | Hello guys, I finally found the problem. The import android actually works.
The problem was that I used it wrongly. I was trying to do a makeToast like this: 'android.makeToast'. Evidently that was wrong. I found out there is another way to do it with pyjnius.
Thanks so much for your assistance | 1 | 1 | 0 | I'm building a small project for my Android phone using Kivy. I am trying to get the Android back key to make a Toast saying 'press back again to exit', and then exit when the back key is pressed twice. I checked online and saw a tutorial on how to do this. I had to use import android
but the problem is that it just doesn't work on my phone, not even in the Kivy Launcher when I tested it. I even compiled to an Android APK using buildozer, but it still doesn't work. Please, I'm still very new to Kivy and the Android API. Help me get this right, or if there is another way to do this I'd also appreciate it. Please include an example in your response. | kivy import android doesnt work | 1.2 | 0 | 0 | 614
35,378,389 | 2016-02-13T09:49:00.000 | -1 | 0 | 1 | 0 | python,computer-vision,shadow-removal | 35,541,436 | 1 | true | 0 | 0 | There are lots of methods to remove shadows.
Using thresholding or histogram equalization.
Some other methods have not been implemented in OpenCV. | 1 | 0 | 0 | I am wondering if there is any Python implementation of shadow removal for an image. I have searched online, and there are no existing resources related to Python. Can anyone give me any suggestions?
If you are voting down this question, please give me some reason so I can improve my question.
Appreciate! | python implementation for shadow removal | 1.2 | 0 | 0 | 825 |
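As a concrete starting point for one of the simple techniques the answer mentions (histogram equalization), here is a NumPy-only sketch for a grayscale image; real shadow removal typically needs more than this:

```python
import numpy as np

def equalize(img):
    """Histogram-equalize a uint8 grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    nonzero = cdf[cdf > 0]
    lo = nonzero[0] if nonzero.size else 0
    scale = max(cdf[-1] - lo, 1)
    # Lookup table mapping old intensities onto the full 0..255 range:
    lut = ((cdf - lo) * 255 // scale).clip(0, 255).astype(np.uint8)
    return lut[img]

# A dark, low-contrast "shadowed" strip of pixels:
img = np.tile(np.arange(40, 104, dtype=np.uint8), (8, 1))
out = equalize(img)
print(img.min(), img.max(), "->", out.min(), out.max())   # 40 103 -> 0 255
```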
35,380,802 | 2016-02-13T13:56:00.000 | 1 | 0 | 0 | 0 | python,pandas | 35,383,780 | 3 | false | 0 | 0 | Use cProfile and line_profiler to figure out where the time is being spent.
To get help from others, post your real code and your real profile results.
Optimization is an empirical process. The little tips people have are often counterproductive. | 2 | 0 | 1 | I am using three dataframes to analyze sequential numeric data - basically numeric data captured in time. There are 8 columns, and 360k entries. I created three identical dataframes - one is the raw data, the second a "scratch pad" for analysis and a third dataframe contains the analyzed outcome. This runs really slowly. I'm wondering if there are ways to make this analysis run faster? Would it be faster if instead of three separate 8 column dataframes I had one large one 24 column dataframe? | Making Dataframe Analysis faster | 0.066568 | 0 | 0 | 69 |
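Profiling first, as the answer says. A self-contained cProfile sketch (analyze is a stand-in for the real dataframe-crunching code):

```python
import cProfile
import io
import pstats

def analyze(n=50000):
    # Stand-in for the real dataframe analysis:
    return sum(i * i for i in range(n))

profiler = cProfile.Profile()
profiler.enable()
analyze()
profiler.disable()

report = io.StringIO()
pstats.Stats(profiler, stream=report).sort_stats("cumulative").print_stats(5)
print("function calls" in report.getvalue())   # True
```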
35,380,802 | 2016-02-13T13:56:00.000 | 0 | 0 | 0 | 0 | python,pandas | 35,383,875 | 3 | false | 0 | 0 | Most probably it doesn't matter because pandas stores each column separately anyway (DataFrame is a collection of Series). But you might get better data locality (all data next to each other in memory) by using a single frame, so it's worth trying. Check this empirically. | 2 | 0 | 1 | I am using three dataframes to analyze sequential numeric data - basically numeric data captured in time. There are 8 columns, and 360k entries. I created three identical dataframes - one is the raw data, the second a "scratch pad" for analysis and a third dataframe contains the analyzed outcome. This runs really slowly. I'm wondering if there are ways to make this analysis run faster? Would it be faster if instead of three separate 8 column dataframes I had one large one 24 column dataframe? | Making Dataframe Analysis faster | 0 | 0 | 0 | 69 |
35,382,336 | 2016-02-13T16:29:00.000 | 0 | 0 | 1 | 0 | python,numpy,scipy,pickle | 35,384,823 | 1 | true | 0 | 0 | You could use dill. dill.dump accesses and uses the dump method from numpy to store an array or matrix object, so it's stored the same way it would be if you did it directly from the method on the numpy object. You'd just dill.dump the dictionary.
dill also has the ability to store pickles in compressed format, but it's slower. As mentioned in the comments, there's also joblib, which can also do the same as dill… but basically, joblib leverages cloudpickle (which is another serializer) or can also use dill, to do the serialization.
If you have a huge dictionary, and don't need all of the contents at once… maybe a better option would be klepto, which can use advanced serialization methods (from dill) to store a dict to several files on disk (or a database), where you have a proxy dict in memory that enables you to only get the entries you need.
All of these packages give you a fast unified dump for standard Python and also for numpy objects. | 1 | 0 | 1 | Let's say I have a dictionary of about 100k pairs of strings, and a numpy matrix of shape (100k, 500). I would like to save them to disk in the same file.
What I'm doing right now is using cPickle to dump the dictionary, and scipy.io.savemat to dump the matrix. This way, the dump / load is very fast. But the problem is that since I use different methods I obtain 2 files, and I would like to have just one file containing my 2 objects. How can I do this?
I could cPickle them both in the same file, but cPickle is incredibly slow on big arrays. | dumping several objects into the same file | 1.2 | 0 | 0 | 498 |
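dill.dump/dill.load mirror the pickle API, so the single-file idea looks like the sketch below; it uses the stdlib pickle with a binary protocol (the slowness of cPickle on big arrays is largely the old text protocol 0), and dill would be a drop-in replacement:

```python
import os
import pickle
import tempfile

import numpy as np

d = {"key-%d" % i: "val-%d" % i for i in range(1000)}
m = np.zeros((100, 50))

path = os.path.join(tempfile.mkdtemp(), "bundle.pkl")
with open(path, "wb") as f:
    # One file, one top-level object holding both:
    pickle.dump({"dict": d, "matrix": m}, f, protocol=pickle.HIGHEST_PROTOCOL)

with open(path, "rb") as f:
    back = pickle.load(f)

print(len(back["dict"]), back["matrix"].shape)   # 1000 (100, 50)
```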
35,383,018 | 2016-02-13T17:31:00.000 | 1 | 0 | 0 | 0 | python,pyqt,auto-update,qtableview,qsqltablemodel | 35,392,361 | 1 | false | 0 | 1 | There's no signal emitted for that currently. You could use a timer to query the last update timestamp and refresh the model data at designated intervals. | 1 | 0 | 0 | I have created a model using QSqlTableModel, then created a tablview using QTableView and set the model on it.
I want to update the model and view automatically whenever the database is updated by another program. How can I do that? | Automatically updating QSqlTableModel and QTableView | 0.197375 | 1 | 0 | 255 |
35,383,769 | 2016-02-13T18:34:00.000 | 1 | 0 | 1 | 0 | python,datetime,pytz,python-datetime | 35,383,881 | 2 | false | 0 | 0 | I want to know why time class needs timezone information.
I find it useful, e.g., if we're dealing with events that occur at the same time regardless of the date (e.g. a scheduled job), and we need to display, manipulate and compare them in a different timezone.
How to get a TZ-aware datetime.time object
datetime.timetz()
Return time object with same hour, minute, second,
microsecond, and tzinfo attributes. See also method time().
So as for my example use case, I'd pull the datetime.time object from my tz-aware datetime.datetime object, using datetime.datetime.timetz(), which conserves it's tzinfo
This would fit in a datetime.time object, as opposed to a datetime.datetime object that also carries the date information.
But time class have no method like .astimezone.
You can't TZ-convert using time only
As for the reason there is no time.astimezone(), I think it might be because without a date, it is impossible to guess the effects of DST transitions and other non-fixed UTC offsets. | 1 | 1 | 0 | datetime, Python's builtin module, have some classes.
But I cannot understand the tzinfo parameter of the datetime.time class well.
The time class has a tzinfo param, which defaults to None.
I want to know why time class needs timezone information.
In the case of the datetime class, it has an .astimezone method, and we can convert data using timezone information. But the time class has no method like .astimezone.
Is it just reserved for the datetime.combine classmethod, or is there some important story about time and timezone? | Why Python's datetime.time have tzinfo parameter? | 0.099668 | 0 | 0 | 808 |
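The `timetz()` behaviour described in the answer above can be sketched like this (the fixed +05:30 offset is just an illustration, using the stdlib `timezone` class from Python 3):

```python
from datetime import datetime, timezone, timedelta

tz = timezone(timedelta(hours=5, minutes=30))      # a fixed-offset zone
dt = datetime(2016, 2, 14, 11, 17, tzinfo=tz)      # tz-aware datetime

aware_t = dt.timetz()   # keeps hour/minute/second/microsecond AND tzinfo
naive_t = dt.time()     # .time() drops the tzinfo

print(aware_t.tzinfo)        # UTC+05:30
print(naive_t.tzinfo)        # None
print(aware_t.utcoffset())   # 5:30:00 -- available because tzinfo is attached
```

Note that the aware `time` object can report its offset but, as the answer says, has no `astimezone()` to convert it, since conversion can depend on the (absent) date.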
35,385,486 | 2016-02-13T21:22:00.000 | 0 | 0 | 0 | 0 | python,excel,python-3.x,openpyxl | 35,392,192 | 1 | true | 0 | 0 | Yes, Excel does cache the values from the other sheet but openpyxl does not preserve this because there is no way of checking it. | 1 | 1 | 0 | I have a spreadsheet which references/caches values from an external spreadsheet. When viewing the cell in Excel that I want to read using OpenPyxl, I see the contents as a string: Users.
When I select the cell in Excel, I see the actual content in the Formula Bar is ='C:\spreadsheets\[_comments.xlsm]Rules-Source'!C5. I do not have the source spreadsheet stored on my machine. So, it appears Excel is caching the value from a separate spreadsheet as I am able to view the value Users when viewing the local spreadsheet in Excel.
When I read the cell from the local spreadsheet using OpenPyxl, I get ='[1]Rules-Source'!C5.
It is my understanding that OpenPyxl will not evaluate formulas. However, the string Users has to be cached somewhere in the XLSM document, right? Is there any way I can get OpenPyxl to read the cached source rather than returning the cell formula? | OpenPyxl - difficulty getting cell value when cell is referencing other source | 1.2 | 1 | 0 | 310 |
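For completeness, a hedged sketch of openpyxl's two load modes (requires openpyxl; the file name is arbitrary). By default `load_workbook` returns formula text; `data_only=True` returns whatever result Excel cached on last save — or None if Excel never saved the file, as in this self-contained example:

```python
from openpyxl import Workbook, load_workbook

# Build a tiny workbook so the demo is self-contained.
wb = Workbook()
wb.active["A1"] = 2
wb.active["B1"] = "=A1*3"      # stored as formula text; openpyxl never evaluates it
wb.save("demo.xlsx")

# Default load: you get the formula string back.
print(load_workbook("demo.xlsx").active["B1"].value)                  # '=A1*3'

# data_only=True: you get Excel's cached result -- None here, because
# Excel itself has never opened and re-saved this file.
print(load_workbook("demo.xlsx", data_only=True).active["B1"].value)  # None
```

For a file that Excel *has* saved (like the one in the question), `data_only=True` is the way to read the cached value instead of the formula or reference.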
35,385,519 | 2016-02-13T21:26:00.000 | 0 | 0 | 0 | 0 | python,excel,xlrd,openpyxl | 35,392,251 | 1 | true | 0 | 0 | This sounds very much like you are looking at cells using "shared formulae". When this is the case the same formula is used by several cells. The formula itself is only stored with one of those cells and all others are marked as formulae but just contain a reference. Until version 2.3 of openpyxl all such cells would return "=" as their value. However, version 2.3 now performs the necessary transformation of the formula for dependent cells. ie. a shared formula of say "=A1+1" for A1 will be translated to "=B1+1" for B1.
Please upgrade to 2.3 if you are not already using it.
If this is not the case then please submit a bug report with the sample file. | 1 | 1 | 0 | I'm using openpyxl to read an Excel spreadsheet with a lot of formulas. For some cells, if I access the cell's value as e.g. sheet['M30'].value I get the formula as intended, like '=IFERROR(VLOOKUP(A29, other_wksheet, 9, FALSE)*E29, "")'. But strangely, if I try to access another cell's value, e.g. sheet['M31'].value all I get is =, even though in Excel that cell as essentially the same formula as M30: '=IFERROR(VLOOKUP(A30, other_wksheet, 9, FALSE)*E29, "")'.
This is happening in a bunch of other sheets with a bunch of other formulas and I can't seem to find any rhyme or reason for it. I've looked through the docs and I'm not loading data_only=True so I'm not sure what's going wrong. | openpyxl showing '=' instead of formula | 1.2 | 1 | 0 | 637 |
35,386,546 | 2016-02-13T23:25:00.000 | 28 | 0 | 1 | 0 | python,algorithm,big-o | 35,386,577 | 5 | true | 0 | 0 | It's O(n). It's a general algorithm, you can't find the max/min in the general case without checking all of them. Python doesn't even have a built-in sorted collection type that would make the check easy to specialize.
A for loop would have the same algorithmic complexity, but would run slower in the typical case, since min/max (on CPython anyway) are running an equivalent loop at the C layer, avoiding bytecode interpreter overhead, which the for loop would incur. | 4 | 12 | 0 | What is the big O of the min and max functions in Python? Are they O(n) or does Python have a better way to find the minimum and maximum of an array? If they are O(n), isn't it better to use a for-loop to find the desired values or do they work the same as a for-loop? | Big O of min and max in Python | 1.2 | 0 | 0 | 24,996 |
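To see that a hand-written loop is the same O(n) algorithm (just slower in pure Python), here's a minimal sketch:

```python
import random

data = [random.randint(0, 10**6) for _ in range(10**4)]

# A manual O(n) scan -- exactly what min() does, but at the bytecode level.
smallest = data[0]
for x in data[1:]:
    if x < smallest:
        smallest = x

print(smallest == min(data))   # True: same result; min() is just the C-level version
```

Timing the two with `timeit` would typically show the builtin winning by a constant factor, for the interpreter-overhead reason the answer gives.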
35,386,546 | 2016-02-13T23:25:00.000 | 2 | 0 | 1 | 0 | python,algorithm,big-o | 70,754,602 | 5 | false | 0 | 0 | For time complexity O(1) You can use this:
minimum = y ^ ((x ^ y) & -(x < y)); // min(x, y)
maximum = x ^ ((x ^ y) & -(x < y)); // max(x, y) | 4 | 12 | 0 | What is the big O of the min and max functions in Python? Are they O(n) or does Python have a better way to find the minimum and maximum of an array? If they are O(n), isn't it better to use a for-loop to find the desired values or do they work the same as a for-loop? | Big O of min and max in Python | 0.07983 | 0 | 0 | 24,996 |
35,386,546 | 2016-02-13T23:25:00.000 | 0 | 0 | 1 | 0 | python,algorithm,big-o | 35,386,865 | 5 | false | 0 | 0 | As already stated in the (theoretically) general case finding the minimum or maximum of a unsorted collection of values requires you to look at each value (thus O(N)), because if you wouldn't look at a value, that value could be greater than all other values of your collection.
[..] isn't it better to use a for-loop to find the desired values or do they work the same as a for-loop?
No. Programming is about abstraction: Hiding details of implementation (that loop) behind a fancy name. Otherwise you could write that loop in assembly, couldn't you?
Now for the "special" case: We normally don't work with arbitrary large numbers. Assume a 2 bit unsigned integer: The only possible values are 0, 1, 2 and 3. As soon as you find a 3 in some (arbitrary large) collection you can be certain that there will be no larger value. In a case like that it can make sense to have a special check to know whether one already found the maximum (minimum) possible value. | 4 | 12 | 0 | What is the big O of the min and max functions in Python? Are they O(n) or does Python have a better way to find the minimum and maximum of an array? If they are O(n), isn't it better to use a for-loop to find the desired values or do they work the same as a for-loop? | Big O of min and max in Python | 0 | 0 | 0 | 24,996 |
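That special case can be sketched as a hypothetical early-exit scan (`max_with_early_exit` is an illustrative helper, not a builtin):

```python
def max_with_early_exit(values, upper_bound):
    """Find the maximum, stopping as soon as the largest representable
    value (e.g. 3 for a 2-bit unsigned int) has been seen."""
    best = None
    for v in values:
        if best is None or v > best:
            best = v
        if best == upper_bound:   # nothing can be larger -- stop scanning
            break
    return best

# 2-bit unsigned ints: only 0..3 are possible, so the scan stops at the 3.
print(max_with_early_exit([1, 0, 3, 2, 1, 2], upper_bound=3))   # 3
```

The worst case is still O(n), but when the bound is hit early the scan can terminate after a fraction of the collection.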
35,386,546 | 2016-02-13T23:25:00.000 | 0 | 0 | 1 | 0 | python,algorithm,big-o | 35,386,591 | 5 | false | 0 | 0 | They are both O(n), and there is no faster way to find the min/max of an unsorted array. Actually the underlying implementation of min/max is a for loop. | 4 | 12 | 0 | What is the big O of the min and max functions in Python? Are they O(n) or does Python have a better way to find the minimum and maximum of an array? If they are O(n), isn't it better to use a for-loop to find the desired values or do they work the same as a for-loop? | Big O of min and max in Python | 0 | 0 | 0 | 24,996 |
35,388,753 | 2016-02-14T05:41:00.000 | -1 | 0 | 1 | 0 | python-2.7,pycharm | 35,453,612 | 1 | false | 0 | 0 | CTRL+SHIFT+ALT+N allows you to go to symbols, quite powerful because you can search at the same time function names, class names, packages and so
You can always rely too on CTRL+SHIFT+F to find in your whole project (script in this case) and you'll get a nice preview of the usage | 1 | 1 | 0 | I want to find bottleneck function in python script. that would be better if I could do it with IDE's feature. (I am using PyCharm now)
Thanks | Pycharm-Find Bottlenecks function | -0.197375 | 0 | 0 | 419 |
35,388,867 | 2016-02-14T05:59:00.000 | 2 | 0 | 1 | 0 | python,multithreading,python-3.x,daemon | 35,389,073 | 1 | true | 0 | 0 | Python tries to join non-daemon threads at exit. If you haven't implemented a mechanism to terminate them, Python will hang. Annoyingly, Ctrl-C typically doesn't work and you have to kill the program externally. | 1 | 1 | 0 | While going through the Python docs for thread objects, I found a note on daemon threads which said:
Daemon threads are abruptly stopped at shutdown. Their resources (such as open files, database transactions, etc.) may not be released properly. If you want your threads to stop gracefully, make them non-daemonic and use a suitable signalling mechanism such as an Event.
So why do we use them? | How are daemon threads useful in Python 3.5.1? | 1.2 | 0 | 0 | 247 |
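The docs' suggestion — a non-daemonic thread plus an `Event` for signalling — can be sketched like this:

```python
import threading
import time

stop = threading.Event()

def worker():
    # Event.wait doubles as an interruptible sleep: the loop exits as
    # soon as stop.set() is called, instead of being killed abruptly.
    while not stop.wait(timeout=0.01):
        pass  # one unit of work per iteration

t = threading.Thread(target=worker)   # non-daemonic by default
t.start()

time.sleep(0.05)   # let it run briefly
stop.set()         # signal a graceful shutdown
t.join()           # returns promptly; interpreter exit won't hang on this thread
print(t.is_alive())   # False
```

Marking the thread as a daemon would remove the need for the `Event` and the `join()`, at the cost of the abrupt, resource-leaking shutdown the docs warn about — which is the trade-off that makes daemon threads useful for fire-and-forget background work.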
35,388,878 | 2016-02-14T06:01:00.000 | 0 | 0 | 1 | 0 | python,mechanize | 35,401,852 | 2 | false | 1 | 0 | br.select_form(predicate=lambda f: f.attrs.get('id', None) == 'email-form')
This may help you select the form.
mechanize._form.ParseError: nested FORMs
So I checked, and on the website there are 2 forms nested inside each other.
The form that I need, which is the first one, is closed properly.
Is there any way to deal with that? | How to deal with nested forms?(ERROR: mechanize._form.ParseError: nested FORMs) | 0 | 0 | 0 | 336 |
35,391,120 | 2016-02-14T11:17:00.000 | 1 | 0 | 0 | 1 | python,google-app-engine,flask,google-cloud-sql,alembic | 35,395,267 | 3 | false | 1 | 0 | You can whitelist the ip of your local machine for the Google Cloud SQL instance, then you run the script on your local machine. | 1 | 8 | 0 | I have a Flask app that uses SQLAlchemy (Flask-SQLAlchemy) and Alembic (Flask-Migrate). The app runs on Google App Engine. I want to use Google Cloud SQL.
On my machine, I run python manage.py db upgrade to run my migrations against my local database. Since GAE does not allow arbitrary shell commands to be run, how do I run the migrations on it? | Run Alembic migrations on Google App Engine | 0.066568 | 1 | 0 | 1,816 |
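Following the answer's approach, the migration never runs on GAE at all: you authorize your workstation's IP in the Cloud SQL console, point the app's database URI at the instance's public address, and run the usual command locally. A hypothetical sketch (the IP, credentials, database name, and environment-variable name are all placeholders, not real values):

```python
import os

# Placeholders -- substitute your Cloud SQL instance's public IP and credentials.
os.environ["SQLALCHEMY_DATABASE_URI"] = (
    "mysql+pymysql://appuser:secret@203.0.113.10/appdb"
)

# With the Flask app configured to read that variable, run from a shell:
#   python manage.py db upgrade
print(os.environ["SQLALCHEMY_DATABASE_URI"].startswith("mysql+pymysql://"))   # True
```

This assumes the app picks its `SQLALCHEMY_DATABASE_URI` up from the environment; if it is hard-coded, the equivalent change goes in the config module instead.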
35,393,776 | 2016-02-14T15:57:00.000 | 0 | 0 | 0 | 0 | python,openerp,openerp-7 | 35,422,853 | 2 | false | 1 | 0 | You can create a wizard with Cancel and Proceed buttons within it. | 1 | 2 | 0 | Is it possible to create an information message with options like Proceed or Cancel in OpenERP? If it is possible, how do I create one?
35,394,675 | 2016-02-14T17:17:00.000 | 2 | 1 | 1 | 0 | python,package,pypi | 35,591,766 | 2 | false | 0 | 0 | Yes, it's just a problem with the pypi search engine, like Khush said. | 2 | 9 | 0 | I maintain the pi3d package which is available on pypi.python.org. Prior to v2.8 the latest version was always returned by a search for 'pi3d'. Subsequently v2.7 + v2.8 then v2.7 + v2.8 + v2.9 were listed. These three are still listed even though I am now at v2.10. i.e. the latest version is NOT listed and it requires sharp eyes to spot the text on the v2.9 page saying it's not the latest version!
NB all old versions are marked as 'hidden'. I have tried lots of different permutations of hiding and unhiding releases, updating releases, switching on and off autohide old releases, editing the text of each release, etc., ad infinitum.
Is there some obvious cause of this behaviour that I have missed? | on pypi.python.org what would cause hidden old versions to be returned by explicit search | 0.197375 | 0 | 0 | 269 |