Dataset columns (name: type, observed range or string length):
Q_Id: int64, 337 to 49.3M
CreationDate: string, length 23
Users Score: int64, -42 to 1.15k
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
Tags: string, length 6 to 105
A_Id: int64, 518 to 72.5M
AnswerCount: int64, 1 to 64
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: string, length 6 to 11.6k
Available Count: int64, 1 to 31
Q_Score: int64, 0 to 6.79k
Data Science and Machine Learning: int64, 0 to 1
Question: string, length 15 to 29k
Title: string, length 11 to 150
Score: float64, -1 to 1.2
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 8 to 6.81M
32,870,164
2015-09-30T15:34:00.000
0
0
0
0
python,django,web-scraping,url-redirection
32,870,284
1
true
1
0
Are you storing the results in a database or some persistent storage mechanism (maybe even in a KV store)? Once you hold the results somewhere on your site, you can point the Book Now button on your results page at a view that takes the result's identifying value (say some hash) and have that view redirect to the website offering the service (a sketch follows this record).
1
0
0
I have a scraper to pull search results from a number of travel websites. Now that I have the search results nicely displayed with "Book Now" buttons, I want those "Book Now" buttons to redirect to the specific search result so the user can book that specific travel search result. These search results are dynamic so the redirect may change. What's the easiest way to accomplish this? I'm building this search engine in Python/Django and have Django CMS.
Redirect to a specific search result
1.2
0
1
66
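A minimal sketch of the approach in the answer above: store each scraped result under an identifying hash, then send the "Book Now" click through a view that redirects. The SearchResult model, its fields and the URL layout are assumptions for illustration, not taken from the question.

    from django.db import models
    from django.shortcuts import get_object_or_404, redirect

    class SearchResult(models.Model):
        # one row per scraped search result, keyed by a hash shown in the results page
        hash = models.CharField(max_length=64, unique=True)
        booking_url = models.URLField(max_length=500)

    def book_now(request, result_hash):
        # the "Book Now" button links to something like /book/<result_hash>/ (hypothetical URL)
        result = get_object_or_404(SearchResult, hash=result_hash)
        return redirect(result.booking_url)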
32,871,265
2015-09-30T16:29:00.000
3
0
0
0
python,django
32,871,849
5
false
1
0
Virtualenv is your friend. My life got so much easier when I started using it. You can create a virtualenv to use a particular version of Python, then set up your requirements.txt file to install all the packages you need using pip.
2
4
0
I have Python 2.7 and 3.5 running on OSX 10.10 and also Django 1.9a, which supports both Python versions. The problem is I want to run Django on Python 3.5 instead of 2.7. On some threads I found suggestions to run it by including the Python version, e.g. python3.5 manage.py runserver, but I get this error: File "manage.py", line 8, in <module> from django.core.management import execute_from_command_line ImportError: No module named 'django' FYI, I have no problem running Python 3.5 on the same machine. How can I solve this? Thank you very much!
Run Django 1.9 on Python 3.5 instead of 2.7
0.119427
0
0
13,030
32,871,265
2015-09-30T16:29:00.000
1
0
0
0
python,django
32,871,487
5
false
1
0
You first have to install Django for 3.5, which is a separate install from Django for 2.7. If you're using pip, make sure to use pip3. Otherwise, make sure to run setup.py using python3.5.
2
4
0
I have Python 2.7 and 3.5 running on OSX 10.10 and also Django 1.9a, which supports both Python versions. The problem is I want to run Django on Python 3.5 instead of 2.7. On some threads I found suggestions to run it by including the Python version, e.g. python3.5 manage.py runserver, but I get this error: File "manage.py", line 8, in <module> from django.core.management import execute_from_command_line ImportError: No module named 'django' FYI, I have no problem running Python 3.5 on the same machine. How can I solve this? Thank you very much!
Run Django 1.9 on Python 3.5 instead of 2.7
0.039979
0
0
13,030
32,873,308
2015-09-30T18:34:00.000
0
0
0
0
python,fortran,wxpython,f2py
32,882,987
3
false
0
1
At high risk of being downvoted: if you know roughly how long each task is going to take, the simplest option is to base the progress on how much time has elapsed since the task started, measured against the expected duration of the task. To keep it accurate you could store the run duration of the task each time and use that, or an average, as your baseline. Sometimes we overcomplicate things ;) (a minimal sketch follows this record)
1
1
0
I've got a Python GUI (wxPython) which wraps around a fortran "back-end," using f2py. Sometimes, the fortran process(es) may be quite long running, and we would like to put a progress bar in the GUI to update the progress through the Fortran routine. Is there any way to get the status/progress of the Fortran routine, without involving file I/O?
Updating long-running Fortran subroutine in Python GUI using f2py
0
0
0
185
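A minimal sketch of the time-based progress estimate suggested in the answer above; the expected_duration value and the wx gauge usage in the comment are assumptions.

    import time

    def estimated_progress(start_time, expected_duration):
        """Return a 0-100 progress value based on elapsed wall-clock time."""
        elapsed = time.time() - start_time
        return min(100.0, 100.0 * elapsed / expected_duration)

    # e.g. poll this from a wx.Timer while the Fortran call runs in a worker:
    # start = time.time()
    # gauge.SetValue(int(estimated_progress(start, expected_duration=120.0)))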
32,874,446
2015-09-30T19:49:00.000
0
1
1
0
python,arrays,numpy,int,factorial
32,875,136
3
false
0
0
Store it as tuples of prime factors and their powers. A factorization of a factorial (of, let's say, N) will contain ALL primes less than N, so the k'th place in each tuple will be the k'th prime, and you'll want to keep a separate list of all the primes you've found. You can easily store factorials as high as a few hundred thousand in this notation. If you really need the digits, you can easily restore them from this (ignore the power of 5, subtract the power of 5 from the power of 2 when you multiply the factors out, and append that many trailing zeros, since 5*2=10). A sketch of building this representation follows this record.
1
2
0
I am working with huge numbers, such as 150!. To calculate the result is not a problem, by example f = factorial(150) is 57133839564458545904789328652610540031895535786011264182548375833179829124845398393126574488675311145377107878746854204162666250198684504466355949195922066574942592095735778929325357290444962472405416790722118445437122269675520000000000000000000000000000000000000. But I also need to store an array with N of those huge numbers, in full presison. A list of python can store it, but it is slow. A numpy array is fast, but can not handle the full precision, wich is required for some operations I perform later, and as I have tested, a number in scientific notation (float) does not produce the accurate result. Edit: 150! is just an example of huge number, it does not mean I am working only with factorials. Also, the full set of numbers (NOT always a result of factorial) change over time, and I need to do the actualization and reevaluation of a function for wich those numbers are a parameter, and yes, full precision is required.
How to store array of really huge numbers in python?
0
0
0
2,771
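A sketch of the prime-exponent representation described in the answer above, using Legendre's formula to get each prime's power in N!; the helper names are illustrative.

    def primes_up_to(n):
        # simple sieve of Eratosthenes
        sieve = [True] * (n + 1)
        sieve[0:2] = [False, False]
        for i in range(2, int(n ** 0.5) + 1):
            if sieve[i]:
                sieve[i * i::i] = [False] * len(sieve[i * i::i])
        return [i for i, is_prime in enumerate(sieve) if is_prime]

    def factorial_exponents(n):
        """Return [(prime, exponent), ...] such that n! == prod(p ** e)."""
        factors = []
        for p in primes_up_to(n):
            e, q = 0, p
            while q <= n:
                e += n // q     # Legendre's formula
                q *= p
            factors.append((p, e))
        return factors

    compact = factorial_exponents(150)   # 150! as 35 (prime, power) pairs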
32,880,370
2015-10-01T05:58:00.000
0
0
0
0
python,apache-spark,pyspark,mapreduce,apache-spark-sql
32,900,703
2
false
0
0
Zero323's solution works great, but I wanted to post an rdd implementation as well. I think this will be helpful for people trying to translate streaming MapReduce to pyspark. My implementation basically maps keys (individuals in this case) to a list of lists for the streaming values associated with that key (areas and times) and then iterates over the list to satisfy the iterative component; the rest is just normal reducing by keys and mapping.

    from pyspark import SparkContext, SparkFiles, SparkConf
    from datetime import datetime

    conf = SparkConf()
    sc = SparkContext(conf=conf)

    rdd = sc.parallelize(["IndividualX|AreaQ|1/7/2015 0:00",
                          "IndividualX|AreaQ|1/7/2015 1:00",
                          "IndividualX|AreaW|1/7/2015 3:00",
                          "IndividualX|AreaQ|1/7/2015 4:00",
                          "IndividualY|AreaZ|2/7/2015 4:00",
                          "IndividualY|AreaZ|2/7/2015 5:00",
                          "IndividualY|AreaW|2/7/2015 6:00",
                          "IndividualY|AreaT|2/7/2015 7:00"])

    def splitReduce(x):
        y = x.split('|')
        return (str(y[0]), [[str(y[2]), str(y[1])]])

    def resultSet(x):
        processlist = sorted(x[1], key=lambda x: x[0])
        result = []
        start_area = processlist[0][1]
        start_date = datetime.strptime(processlist[0][0], '%d/%m/%Y %H:%M')
        dur = 0
        if len(processlist) > 1:
            for datearea in processlist[1::]:
                end_date = datetime.strptime(datearea[0], '%d/%m/%Y %H:%M')
                end_area = datearea[1]
                dur = (end_date - start_date).total_seconds() / 60
                if start_area != end_area:
                    result.append([start_area, start_date, end_date, dur])
                    start_date = datetime.strptime(datearea[0], '%d/%m/%Y %H:%M')
                    start_area = datearea[1]
                    dur = 0
        return (x[0], result)

    def finalOut(x):
        return str(x[0]) + '|' + str(x[1][0]) + '|' + str(x[1][1]) + '|' + str(x[1][2]) + '|' + str(x[1][3])

    footfall = rdd\
        .map(lambda x: splitReduce(x))\
        .reduceByKey(lambda a, b: a + b)\
        .map(lambda x: resultSet(x))\
        .flatMapValues(lambda x: x)\
        .map(lambda x: finalOut(x))\
        .collect()

    print footfall

Provides output of:

    ['IndividualX|AreaQ|2015-07-01 00:00:00|2015-07-01 03:00:00|180.0',
     'IndividualX|AreaW|2015-07-01 03:00:00|2015-07-01 04:00:00|60.0',
     'IndividualY|AreaZ|2015-07-02 04:00:00|2015-07-02 06:00:00|120.0',
     'IndividualY|AreaW|2015-07-02 06:00:00|2015-07-02 07:00:00|60.0']
1
2
1
I am trying to aggregate session data without a true session "key" in PySpark. I have data where an individual is detected in an area at a specific time, and I want to aggregate that into a duration spent in each area during a specific visit (see below). The tricky part here is that I want to infer the time someone exits each area as the time they are detected in the next area. This means that I will need to use the start time of the next area ID as the end time for any given area ID. Area IDs can also show up more than once for the same individual. I had an implementation of this in MapReduce where I iterate over all rows and aggregate the time until a new AreaID or Individual is detected, then output the record. Is there a way to do something similar in Spark? Is there a better way to approach the problem? Also of note, I do not want to output a record unless the individual has been detected in another area (e.g. IndividualY, AreaT below) I have a dataset in the following format: Individual AreaID Datetime of Detection IndividualX AreaQ 1/7/2015 0:00 IndividualX AreaQ 1/7/2015 1:00 IndividualX AreaW 1/7/2015 3:00 IndividualX AreaQ 1/7/2015 4:00 IndividualY AreaZ 2/7/2015 4:00 IndividualY AreaZ 2/7/2015 5:00 IndividualY AreaW 2/7/2015 6:00 IndividualY AreaT 2/7/2015 7:00 I would like the desired output of: Individual AreaID Start_Time End_Time Duration (minutes) IndividualX AreaQ 1/7/2015 0:00 1/7/2015 3:00 180 IndividualX AreaW 1/7/2015 3:00 1/7/2015 4:00 60 IndividualY AreaZ 2/7/2015 4:00 2/7/2015 6:00 120 IndividualY AreaW 2/7/2015 6:00 2/7/2015 7:00 60
PySpark - Combining Session Data without Explicit Session Key / Iterating over All Rows
0
0
0
445
32,880,777
2015-10-01T06:28:00.000
3
0
1
0
python
32,880,856
3
true
0
0
The most concise way to do it in Python is to use mylist.index(min(mylist)).
1
1
0
Wondering if there are any Python built-in API to return minimal element index of a list? min returns the actual value.
any Python built-in API to return minimal element index of a list
1.2
0
0
59
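A quick check of the one-liner from the accepted answer above:

    mylist = [7, 3, 9, 3]
    print(mylist.index(min(mylist)))   # 1, the index of the first minimal element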
32,882,856
2015-10-01T08:28:00.000
0
0
0
1
python,google-app-engine,full-text-search
57,089,611
2
false
1
0
Have you tried with: NOT logo_url: Null
1
1
0
I'd like to use Google AppEngine full text search to search for items in an index that have their logo set to None tried "NOT logo_url:''" is there any way I write such a query, or do I have to add another property which is has_logo?
Google app engine, full text search for empty (None) field
0
0
0
428
32,885,938
2015-10-01T11:01:00.000
2
1
1
1
python,google-cloud-platform
32,886,058
1
true
0
0
Since you can open an SSH session, you can install any of a number of terminal multiplexers such as tmux, screen or byobu. If you can't install things on your VM, invoking the script every minute via a cron job could also solve the issue.
1
2
0
I set up a VM with google and want it to run a python script persistently. If I exit out of the SSH session, the script stops. Is there a simple way to keep this thing going after I log out?
Keep a python script running on google VM
1.2
0
0
854
32,888,240
2015-10-01T13:00:00.000
0
0
1
0
python,python-2.7,pip,virtualenv,virtualenvwrapper
33,249,502
1
true
0
0
It turns out I had two packages with the same name defined in their setup.py files. Every time I installed one, the other was uninstalled. To make things worse, even after I changed one of the package names the problem persisted. This was because the previous egg dir, generated by setuptools (I guess), was still there.
1
0
0
Starting a few days ago I'm having to reinstall my editable packages for every new virtualenv session. I have the impression that this didn't happen in a not so distant past. Every time I switch to a virtualenv the packages previously installed (in the same virtualenv, of course) via pip -e aren't found. Any idea of what can be happening? Is this a expected behavior? I'm using virtualenv 13.0.3 and it's using pip 7.1.2 internally.
Why is virtualenv not finding editable packages previously installed via pip -e?
1.2
0
0
229
32,893,657
2015-10-01T17:43:00.000
4
0
1
1
python,macos,virtualenv
32,893,798
1
true
0
0
Most likely, /usr/local/opt/python3 is a symlink actually pointing to /usr/local/Cellar/python3/3.5.0/bin/python3. ls -l /usr/local/opt/python3 will show what it's pointing to. To my knowledge, OSX doesn't have anything installed natively in /usr/local/opt/ without homebrew. Also, OSX doesn't come with python3.
1
1
0
I installed Python 3.5 and virtualenv using Homebrew. python3 symlink in /usr/local/bin points to /usr/local/Cellar/python3/3.5.0/bin/python3, which means that when we execute a .py script using command python3, then the interpreter in the location above will be used. But, when I see the contents of virtualenv in /usr/local/bin using cat virtualenv, the shebang is #!/usr/local/opt/python3/bin/python3.5, which means that when we execute virtualenv, then interpreter in /usr/local/opt is used. Why is there a difference in the python interpreter being used? Which one should be used?
Where is Python interpreter on Mac?
1.2
0
0
6,930
32,902,615
2015-10-02T07:45:00.000
2
0
1
0
python,multithreading,python-2.7,python-3.x,python-multithreading
32,902,789
2
true
0
0
I would suggest creating an addToVariable or setVariable method to take care of the actual adding to the variable in question. This way you can just set a flag, and if the flag is set, addToVariable returns immediately instead of actually adding to the variable. If that doesn't fit, you should look into operator overloading: basically what you want is to make your own number class and write an add method which checks some sort of flag and ignores further additions while the flag is set (a sketch follows this record).
1
0
0
I am working with multi-threading. There is a thread that periodically adds to a certain variable, say every 5 seconds. I want to make it so that, given a call to function f(), additions to that variable will have no effect. I can't find how to do this anywhere and I'm not sure where to start. Is there any syntax that makes certain operations have no effect on specific variables for a certain duration? Or is there any starting point someone could give me and I can figure out the rest? Thanks.
Make certain operators have no effect over a duration of time? (Python)
1.2
0
0
51
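A hypothetical sketch of the flag-guarded accumulator suggested in the answer above; the class and method names are illustrative, and a lock is used because the question involves multiple threads.

    import threading

    class FreezableCounter:
        def __init__(self):
            self.value = 0
            self._frozen = False
            self._lock = threading.Lock()

        def add(self, amount):
            # the periodic worker thread calls this every few seconds
            with self._lock:
                if not self._frozen:
                    self.value += amount   # silently ignored while frozen

        def freeze(self):
            # call this from f() to make subsequent additions no-ops
            with self._lock:
                self._frozen = True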
32,902,645
2015-10-02T07:46:00.000
5
0
1
0
python,pycharm,highlighting
32,902,806
1
false
0
0
If you prefer not to see PyCharm's code inspections, I would suggest creating a new inspections profile that does no inspections. To do that, go to the Code -> Configure Current File Analysis dialog like you have been, but this time click on the Configure inspections link. At the top, click the "Manage" dropdown and copy the current profile into a new profile which you'll name "No inspections". Then uncheck everything, save the new inspections profile, and you'll be done. All new .py files should now be created under your "No inspections" profile. Note that as far as I can tell, the inspection settings get saved per-project, rather than as a global setting. But changing the inspection profile once per project shouldn't be too much of a hassle.
1
3
0
I just started using PyCharm as my IDE for Python. At the moment whenever I open a new .py file, I have to go to Code -> Configure Current File Analysis..., and then change the highlighting to syntax (preference) for every individual .py file. Is there a way to change this for every file automatically, by default? First post on stackoverflow by the way. Thanks!
Change highlighting setting for every file (by default) in PyCharm CE
0.761594
0
0
562
32,908,025
2015-10-02T13:14:00.000
5
0
0
0
python,machine-learning,neural-network,caffe,conv-neural-network
32,909,946
1
false
0
0
Differentiating between validation and testing is meant to imply that hyperparameters may be tuned to the validation set while nothing is fitted to the test set in any way. caffe doesn't optimize anything but the weights, and since the test phase is only there for evaluation, it does exactly as expected. Assuming you're tuning hyperparameters between solver optimization runs, the lmdb passed to caffe for testing is really the validation set. If you're done tuning your hyperparameters and do one more solver optimization with a testing lmdb that holds data never used in previous runs, that last lmdb is your test set. Since caffe doesn't optimize hyperparameters, its test set is what it is: a test set. It's possible to come up with some python code around the solver optimization calls that iterates through hyperparameter values. After it's done it can swap in a new lmdb with unseen data to tell you how well the network generalizes. I don't recommend modifying caffe for an explicit val/test distinction. You don't even have to do anything elaborate with setting up the prototxt file for the solver and network definition; you can do the val/test swap at the end by simply moving the val lmdb somewhere else and moving the test lmdb into its place using shutil.copy(src, dst).
1
2
1
I've been using caffe for a while, with some success, but I have noticed in examples given that there is only ever a two way split on the data set with TRAIN and TEST phases, where the TEST set seems to act as a validation set. Ideally I would like to have three sets, so that once the model is trained, I can save it and test it on a completely new test set - stored in a completed separate lmdb folder. Does anyone have any experience of this? Thanks.
Caffe: train, validation and test split
0.761594
0
0
4,078
32,909,851
2015-10-02T14:47:00.000
92
0
1
0
python,session,flask
32,910,056
1
true
1
0
No, g is not an object to hang session data on. g data is not persisted between requests. session gives you a place to store data per specific browser. As a user of your Flask app, using a specific browser, returns for more requests, the session data is carried over across those requests. g on the other hand is data shared between different parts of your code base within one request cycle. g can be set up during before_request hooks, is still available during the teardown_request phase and once the request is done and sent out to the client, g is cleared.
1
66
0
I'm trying to understand the differences in functionality and purpose between g and session. Both are objects to 'hang' session data on, am I right? If so, what exactly are the differences and which one should I use in what cases?
Flask: 'session' vs. 'g'?
1.2
0
0
16,777
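A hedged sketch contrasting the two objects described in the answer above: session persists per browser across requests, while g is per-request scratch space (set up in before_request, cleaned up in teardown_request). The sqlite usage is just a stand-in for any per-request resource.

    import sqlite3
    from flask import Flask, g, session

    app = Flask(__name__)
    app.secret_key = "dev-only-secret"     # required for the session cookie

    @app.before_request
    def attach_db():
        g.db = sqlite3.connect("app.db")   # lives only for this request

    @app.teardown_request
    def release_db(exc):
        db = getattr(g, "db", None)
        if db is not None:
            db.close()

    @app.route("/visit")
    def visit():
        session["visits"] = session.get("visits", 0) + 1   # survives across requests
        return "visit number %d" % session["visits"]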
32,911,336
2015-10-02T16:06:00.000
88
0
1
0
python,json
32,911,369
2
false
0
0
json loads -> returns an object from a string representing a json object. json dumps -> returns a string representing a json object from an object. load and dump -> read/write from/to file instead of string
1
160
0
What is the difference between json.dumps and json.load? From my understanding, one loads JSON into a dictionary and another loads into objects.
What is the difference between json.dumps and json.load?
1
0
0
233,478
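A tiny demonstration of the four functions distinguished in the answer above:

    import json

    data = {"name": "pizza", "size": 12}
    s = json.dumps(data)          # object -> JSON string
    back = json.loads(s)          # JSON string -> object

    with open("data.json", "w") as f:
        json.dump(data, f)        # object -> file
    with open("data.json") as f:
        loaded = json.load(f)     # file -> object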
32,911,933
2015-10-02T16:43:00.000
4
0
0
0
python,xpath,web-scraping,beautifulsoup
32,912,192
2
false
0
0
I would suggest bs4: its usage and docs are more friendly, which will save you time and build confidence, and that matters a lot when you are teaching yourself. However, in practice it can require a strong CPU. I once scraped with no more than 30 connections on my 1-core VPS, and the CPU usage of the python process stayed at 100%. It could have been the result of a bad implementation, but after I changed everything to re.compile the performance issue was gone. As for performance: regex > lxml >> bs4. As for getting things done: no difference.
1
5
0
I've been learning about web scraping using BeautifulSoup in Python recently, but earlier today I was advised to consider using XPath expressions instead. How does the way XPath and BeautifulSoup both work differ from each other?
Pros and Cons of Python Web Scraping using BeautifulSoup vs XPath
0.379949
0
1
4,014
32,912,112
2015-10-02T16:54:00.000
2
0
0
0
python,django,database-migration
32,912,200
3
false
1
0
When you run python manage.py migrate, it tries to load your testmodel.json from the fixtures folder, but your model (after the update) no longer matches the data in testmodel.json. You could try this: rename your fixtures directory to _fixtures, run python manage.py migrate, and then, optionally, rename _fixtures back to fixtures and load your data as before with the migrate command, or load the data with python manage.py loaddata app/_fixtures/testmodel.json.
1
8
0
Something really annoying is happening to me since using Django migrations (not south) and using loaddata for fixtures inside of them. Here is a simple way to reproduce my problem: create a new model Testmodel with 1 field field1 (CharField or whatever) create an associated migration (let's say 0001) with makemigrations run the migration and add some data in the new table dump the data in a fixture testmodel.json create a migration with call_command('loaddata', 'testmodel.json'): migration 0002 add some a new field to the model: field2 create an associated migration (0003) Now, commit that, and put your db in the state just before the changes: ./manage.py migrate myapp zero. So you are in the same state as your teammate that didn't get your changes yet. If you try to run ./manage.py migrate again you will get a ProgrammingError at migration 0002 saying that "column field2 does not exist". It seems it's because loaddata is looking into your model (which is already having field2), and not just applying the fixture to the db. This can happen in multiple cases when working in a team, and also making the test runner fail. Did I get something wrong? Is it a bug? What should be done is those cases? -- I am using django 1.7
Django: loaddata in migrations errors
0.132549
0
0
5,546
32,912,567
2015-10-02T17:23:00.000
0
0
0
0
python-2.7,numpy,scipy,scikit-learn
33,026,758
1
false
0
0
FIRST: I'm guessing the reason the sparse data is giving a different answer than the same data converted to dense is that my sparse representation was starting feature indices from one rather than zero (because the oll library that I used previously required it). So my first column was all zeros; when converted to dense it was not preserved, and that's the reason for slightly better results when using the dense representation. SECOND: adding new rows to a sparse matrix at that scale is not efficient, not even if you reserve a large matrix at the beginning (padded with zeros) to replace later. This is because of the structure the sparse matrix is stored in (it uses three arrays; in the case of CSR, one for row pointers, one for the non-zero column indices in each row and one for the values themselves; check the documentation). SOLUTION: the best way I found is to use dense representations from the beginning (if that's an option, of course). Collect all the instances that need to be added to the training set, instantiate a new matrix of the size of the aggregated data, and then start adding instances "randomly" from both the last training set and the new instances you want to add. To make it random I generate a sorted list of random positions that tells me when to add data from the new instances and when to copy from the older ones.
1
1
1
I have a dataset of 15M+ training instances in form of svmlight dataset. I read these data using sklearn.datasets.load_svmlight_file(). The data itself is not sparse, so I don't mind converting it to any other dense representation (I will prefer that). At some point in my program I need to add millions of new data records (instances) to my training data (in random positions). I used vstack and also tried converting to dense matrices but was either inefficient or failed to give correct results (details below). Is there any way to do this task efficiently? I'm implementing DAgger algorithm and in the first iteration it is trying to add about 7M new training instances. I want to add these new instances in random positions. I tried vstack (given my data was in csr format I was expecting it not to be too inefficient at least). However after 24hours it's not done yet. I tried converting my data to numpy.matrix format just after loading them in svmlight format. A sampling showed it can help me speed things up but interestingly the results I get from training on the converted dataset and the original dataset seem not to match with each other. It appears sklearn does not work with numpy matrix in the way I thought. I couldn't find anything in the sklearn documentation. Another approach I thought was to define a larger dataset from the beginning so that it will "reserve" enough space in memory, but when I'm using sklearn train or test features I'll index my dataset to the last "true" record. In this way, I presume, vstack will not require opening up a new large space in memory which can make the whole operation take longer. Any thoughts on this?
inserting training instances to scikit-learn dataset
0
0
0
162
32,913,119
2015-10-02T18:00:00.000
0
0
0
1
python,docker,webserver
33,002,100
1
true
1
0
Yes, Docker prefers a "one process per container" approach. I would not see this as overkill; quite the contrary, in your case it might soon be beneficial to have the instances of different users better isolated: fewer security risks and easier maintenance, say when you need a new version of everything for a new version of your app but would like to keep some users on an old version due to a blocker.
1
0
0
I want to make simple service that each user will have his own (simple and light) webserver. I want to use an AWS instance to do this. I understand that I can do that by starting Python's SimpleHTTPserver (Proof of concept) multiple times on different ports, and that the number of servers I can have depends on the resources. My question is: Is it a better practice or an overkill to Dockerize each user with his server ?
Should I dockerize different servers running on a single machine?
1.2
0
0
27
32,913,859
2015-10-02T18:46:00.000
1
0
0
0
java,php,python,html,mysql
32,915,448
1
true
1
0
The programming language does not really matter for how you solve the problem; you can implement it in whichever language you are comfortable with. There are two basic ways to solve it:
1) Use a crawler which creates an index of the words found on the different pages, then use that index to look up the searched word.
2) When the user has entered the search expression, start crawling the pages and check whether the search expression is found.
Of course both solutions have different (dis)advantages. For example: in 1) you need to do an initial crawl (and update it later when the pages change), you need to store the crawl result in some sort of database, and you will get instant search results. In 2) you don't need a database/datastore, but you will have to wait until all pages are searched before showing the final result list. (A sketch of approach 1 follows this record.)
1
0
0
I made a website with many pages, on each page is a sample essay. The homepage is a page with a search field. I'm attempting to design a system where a user can type in a word and when they click 'search', multiple paragaphs containing the searched word from the pages with a sample essays are loaded on to the page. I'm 14 and have been programming for about 2 years, can anyone please explain to me the programming languages/technologies I'll need to accomplish this task and provide suggestions as to how I can achieve my task. All I have so far are the web pages with articles and a custom search page I've made with PHP. Any suggestions?
Creating a webpage crawler that finds and matches user input
1.2
0
1
41
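A hedged Python sketch of approach 1) above: build a word-to-pages index once, then answer searches from the index. The URL list and the assumption that the essay pages are plain HTML are illustrative, not from the question.

    import re
    from collections import defaultdict
    from urllib.request import urlopen

    ESSAY_URLS = ["http://example.com/essay1.html"]   # hypothetical page list

    def build_index(urls):
        index = defaultdict(set)
        for url in urls:
            text = urlopen(url).read().decode("utf-8", errors="ignore")
            for word in re.findall(r"[a-z']+", text.lower()):
                index[word].add(url)    # remember which pages contain this word
        return index

    def search(index, word):
        return sorted(index.get(word.lower(), []))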
32,914,037
2015-10-02T18:57:00.000
2
0
0
0
python,database,permissions,pyodbc
32,915,064
2
false
0
0
Since you're on Windows, a few things you should know: Using the Driver={SQL Server} only enables features and data types supported by SQL Server 2000. For features up through 2005, use {SQL Native Client} and for features up through 2008 use {SQL Server Native Client 10.0}. To view your ODBC connections, go to Start and search for "ODBC" and bring up Data Sources (ODBC). This will list User, System, and File DSNs in a GUI. You should find the DSN with username and password filled in there.
2
3
0
I inherited a project and am having what seems to be a permissions issue when trying to interact with the database. Basically we have a two step process of detach and then delete. Does anyone know where the user would come from if the connection string only has driver, server, and database name. EDIT I am on Windows Server 2008 standard EDIT "DRIVER={%s};SERVER=%s;DATABASE=%s;" Where driver is "SQL Server"
Where does pyodbc get its user and pwd from when none are provided in the connection string
0.197375
1
0
989
32,914,037
2015-10-02T18:57:00.000
3
0
0
0
python,database,permissions,pyodbc
32,951,286
2
true
0
0
I just did a few tests and the {SQL Server} ODBC driver apparently defaults to using Windows Authentication if the Trusted_connection and UID options are both omitted from the connection string. So, your Python script must be connecting to the SQL Server instance using the Windows credentials of the user running the script. (On the other hand, the {SQL Server Native Client 10.0} driver seems to default to SQL Authentication unless Trusted_connection=yes is included in the connection string.)
2
3
0
I inherited a project and am having what seems to be a permissions issue when trying to interact with the database. Basically we have a two step process of detach and then delete. Does anyone know where the user would come from if the connection string only has driver, server, and database name. EDIT I am on Windows Server 2008 standard EDIT "DRIVER={%s};SERVER=%s;DATABASE=%s;" Where driver is "SQL Server"
Where does pyodbc get its user and pwd from when none are provided in the connection string
1.2
1
0
989
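A hedged illustration of the connection strings discussed in the two answers above; the server, database and account names are placeholders. With the {SQL Server} driver and neither UID nor Trusted_connection supplied, the Windows credentials of the user running the script are used.

    import pyodbc

    # implicit Windows Authentication (the behaviour described above)
    conn = pyodbc.connect("DRIVER={SQL Server};SERVER=myserver;DATABASE=mydb;")

    # explicit variants for comparison
    conn_trusted = pyodbc.connect(
        "DRIVER={SQL Server Native Client 10.0};SERVER=myserver;"
        "DATABASE=mydb;Trusted_Connection=yes;")
    conn_sql_auth = pyodbc.connect(
        "DRIVER={SQL Server};SERVER=myserver;DATABASE=mydb;UID=appuser;PWD=secret;")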
32,915,462
2015-10-02T20:35:00.000
1
0
0
1
python,google-app-engine,google-cloud-datastore,eventual-consistency
32,941,257
2
true
1
0
If, like you say in the comments, your lists change rarely and you can't use ancestors (I assume because of write frequency in the rest of your system), your proposed solution would work fine. You can do as many get_multi calls, as frequently as you wish; datastore can handle it. Since you mentioned you can keep that keys list updated as needed, that would be a good way to do it. You can stream-read a big file (say from cloud storage, with one row per line) and use datastore async reads to finish very quickly, or use Google Cloud Dataflow to do the reading and processing/consolidating. Dataflow can also be used to generate that keys list file in cloud storage in the first place. (A sketch of the batching loop follows this record.)
1
0
0
In my Google App Engine App, I have a large number of entities representing people. At certain times, I want to process these entities, and it is really important that I have the most up to date data. There are far too many to put them in the same entity group or do a cross-group transaction. As a solution, I am considering storing a list of keys in Google Cloud Storage. I actually use the person's email address as the key name so I can store a list of email addresses in a text file. When I want to process all of the entities, I can do the following: Read the file from Google Cloud Storage Iterate over the file in batches (say 100) Use ndb.get_multi() to get the entities (this will always give the most recent data) Process the entities Repeat with next batch until done Are there any problems with this process or is there a better way to do it?
GAE/P: Storing list of keys to guarantee getting up to date data
1.2
1
0
60
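A hedged sketch of the batching loop described above, using ndb keys built from the email addresses streamed out of the Cloud Storage file; the Person kind name and the process helper are assumptions.

    from google.appengine.ext import ndb

    def process_all(email_lines, batch_size=100):
        batch = []
        for email in email_lines:                      # one email address per line
            batch.append(ndb.Key("Person", email.strip()))
            if len(batch) == batch_size:
                for person in ndb.get_multi(batch):    # gets by key are strongly consistent
                    process(person)                    # hypothetical per-entity work
                batch = []
        if batch:
            for person in ndb.get_multi(batch):
                process(person)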
32,917,179
2015-10-02T23:18:00.000
3
0
1
0
python
32,917,216
2
false
0
0
A sequence is a type that follows the sequence protocol. That implies not just that indices are numeric, but that they are consecutive and start at zero, that iterating yields elements in order of increasing index, and that len(my_sequence) works. In practice, this means they need to implement __getitem__ and __len__ methods appropriately. From there, Python can "fill in the blanks" so that iteration, x in my_sequence and reversed(my_sequence) all work without implementing the associated methods; a type might still choose to implement those, particularly if it can provide a more efficient implementation (for example, the default iteration behaviour is as if __iter__ just tried self[i] from i=0 until it hits an IndexError, which isn't ideal for a linked list). A minimal example follows this record.
1
3
0
I've been confused about that question for a long time. Is any Python type whose objects contain elements that are numbered called a sequence type?
Is any Python type whose objects contain elements that are numbered called a sequence type?
0.291313
0
0
201
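A minimal example of the sequence protocol described in the answer above: __getitem__ plus __len__ is enough for iteration, membership tests and reversed().

    class Squares:
        def __init__(self, n):
            self.n = n

        def __len__(self):
            return self.n

        def __getitem__(self, i):
            if not 0 <= i < self.n:
                raise IndexError(i)   # signals the end of iteration
            return i * i

    sq = Squares(5)
    print(list(sq))             # [0, 1, 4, 9, 16]
    print(9 in sq)              # True
    print(list(reversed(sq)))   # [16, 9, 4, 1, 0]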
32,922,265
2015-10-03T11:56:00.000
1
0
0
1
python,python-3.x,unicode,python-unicode
32,922,798
2
false
0
0
There isn't one. The bytes constants in codecs are what you should be using. This is because you should never see a BOM in decoded text (i.e., you shouldn't encounter a string that actually encodes the code point U+FEFF). Rather, the BOM exists as a byte pattern at the start of a stream, and when you decode some bytes with a BOM, the U+FEFF isn't included in the output string. Similarly, the encoding process should handle adding any necessary BOM to the output bytes---it shouldn't be in the input string. The only time a BOM matters is when either converting into or converting from bytes.
1
2
0
It's not a real problem in practice, since I can just write BOM = "\uFEFF"; but it bugs me that I have to hard-code a magic constant for such a basic thing. [Edit: And it's error prone! I had accidentally written the BOM as \uFFFE in this question, and nobody noticed. It even led to an incorrect proposed solution.] Surely python defines it in a handy form somewhere? Searching turned up a series of constants in the codecs module: codecs.BOM, codecs.BOM_UTF8, and so on. But these are bytes objects, not strings. Where is the real BOM? This is for python 3, but I would be interested in the Python 2 situation for completeness.
Unicode Byte Order Mark (BOM) as a python constant?
0.099668
0
0
670
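A small check of the relationship described in the answer above, showing that the BOM lives at the bytes level and is stripped or added during decoding/encoding:

    import codecs

    assert codecs.BOM_UTF8 == b"\xef\xbb\xbf"
    assert codecs.BOM_UTF8.decode("utf-8") == "\ufeff"              # the code point itself
    assert b"\xef\xbb\xbfhello".decode("utf-8-sig") == "hello"      # BOM removed on decode
    assert "hello".encode("utf-8-sig").startswith(codecs.BOM_UTF8)  # BOM added on encode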
32,922,630
2015-10-03T12:35:00.000
2
0
0
0
python,list,canvas,tkinter
32,924,414
1
true
0
1
The canvas has no way to draw an individual pixel, except to draw a line that is exactly one pixel long and one pixel wide. If you only need to place pixels, and don't need the other features of the canvas widget, you can use a PhotoImage object. An instance of PhotoImage has methods for setting individual pixels.
1
1
0
I'm using Tkinter to create a window in python and a canvas to display graphics in the window. This is working fine so far. But I have a two dimensional list containing colours that I would like to directly place on the canvas. Example I have a class defined (named CRGB) that has three variables: r, g and b. These are the red, green and blue values of a colour, and are integers between 0 and 255. I also have a two-dimensional list, which contains CRGB objects with the colour data. I then have a Canvas (defined in a variable called screenCanvas) which is the same size as the 2D list. How would I transfer the pixels from the 2D list to the canvas? Notes: I would like the code to work on Mac AND Windows, and not use any external libraries (libraries not included in Python by default.)
Drawing individual pixels to a canvas in tkinter
1.2
0
0
2,388
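A hedged sketch of the PhotoImage approach from the answer above, painting each CRGB value from the 2D list as one pixel; only the standard library is used, and the function name is illustrative.

    import tkinter as tk

    def draw_pixels(canvas, pixels):
        """pixels is a 2D list of objects with r, g, b attributes (0-255)."""
        height, width = len(pixels), len(pixels[0])
        img = tk.PhotoImage(width=width, height=height)
        for y, row in enumerate(pixels):
            for x, c in enumerate(row):
                img.put("#%02x%02x%02x" % (c.r, c.g, c.b), (x, y))
        canvas.create_image(0, 0, image=img, anchor=tk.NW)
        return img   # keep a reference, or the image gets garbage-collected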
32,924,392
2015-10-03T15:40:00.000
3
0
0
0
python,django,pycharm
32,924,415
1
false
1
0
This isn't PyCharm specific, but it might help. If DEBUG=True, Django will include the traceback in the response. If DEBUG=False, then by default Django will email a report to the users in the ADMINS settings.
1
1
0
I am running a Django project locally using PyCharm and it is returning a 500 error on an API call. I think this signifies an internal server error so I am assuming the reason for and nature of this error will be in a log somewhere. But I can't find where it is. Is such an error log kept? If so where?
Where does PyCharm keep the error log for Python/Django projects?
0.53705
0
0
213
32,926,435
2015-10-03T19:11:00.000
0
0
1
0
python,pip
51,099,091
3
false
0
0
This is not python's pip. A nodejs module exposes an executable with the same name. You may want to use python -m pip install to install python packages. Provided by @cel
2
1
0
pip --version ./node_modules/.bin/pip: line 1: syntax error near unexpected token (' ./node_modules/.bin/pip: line 1:var freckle = require('freckle') I have already reinstalled Python and whenever I try and use the pip command this comes up. Any help would be awesome!
Trouble with running pip install
0
0
0
3,391
32,926,435
2015-10-03T19:11:00.000
0
0
1
0
python,pip
33,020,363
3
false
0
0
Do sudo pip install. You might have other installed tools that are interfering with the install, such as homebrew.
2
1
0
pip --version ./node_modules/.bin/pip: line 1: syntax error near unexpected token (' ./node_modules/.bin/pip: line 1:var freckle = require('freckle') I have already reinstalled Python and whenever I try and use the pip command this comes up. Any help would be awesome!
Trouble with running pip install
0
0
0
3,391
32,926,685
2015-10-03T19:34:00.000
1
0
1
0
python,amazon-ec2,ipython-notebook
39,991,061
1
false
0
0
This is because although each notebook has multiple cells, it has only one kernel, so the commands in the other cells are queued until the first cell finishes its task. When you open a new notebook, that notebook gets its own kernel, so it can run simple commands quickly without being blocked by whatever is taking so much CPU power.
1
1
0
I'm using an EC2 spot instance (my windows to ubuntu instance) to run a function that was well beyond my laptop's capabilities. The kernel busy dot has been filled for hours. Previously, I would just listen to my laptop as it was obvious when something was running as opposed to ipnb getting stuck. Is there any way I can tell now? If I try something like 1+1 in the box below my function it will also turn into an asterisk, but I can open a new notebook and have zero issues running simple commands in the new notebook.
(is) kernel busy in ipython notebook
0.197375
0
0
1,221
32,935,393
2015-10-04T15:56:00.000
0
0
1
0
python,matplotlib,plot,symbols,pound
32,939,239
2
true
0
0
Thank you guys, I just reinstalled LaTeX and it seems to work now. That was very weird. I was getting an exception in a Tkinter callback only with the pound symbol; I spent a couple of hours on that.
1
0
0
Trying to place the pound symbol; '£', in a Python plot label had given me a headache. Simple plt.xlabel(r"$\pounds$") does not seem to work. Suggestion are really appreciated. Thanks
Placing the pound symbol '£' in a Python plot axis label
1.2
0
0
956
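Related to the question above, a hedged sketch of two ways to get the pound sign into a matplotlib label: plain unicode needs no LaTeX at all, while the \pounds macro needs a working LaTeX install with usetex enabled.

    import matplotlib.pyplot as plt

    plt.plot([1, 2, 3], [10, 20, 15])
    plt.xlabel(u"Price (\u00a3)")        # plain unicode pound, no LaTeX required

    # with a working LaTeX install, the original approach also works:
    # plt.rc("text", usetex=True)
    # plt.xlabel(r"Price ($\pounds$)")
    plt.savefig("prices.png")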
32,936,166
2015-10-04T17:18:00.000
3
0
0
0
python-2.7,gpu,gpgpu,theano
33,177,976
1
false
0
0
If borrow is set to true, garbage collection is on (the default: config.allow_gc=True), and the video card is not currently being used as a display device (doubtful, since you're using a mobile GPU), the only other options are to reduce the parameters of the network or possibly the batch size of the model. The latter will be especially effective if the model uses dropout or noise-based masks (these will be equal to the number of examples in the batch times the number of parameters dropped out or noised). Otherwise maybe you could boot to the command prompt to save a few MBs? :/
1
3
1
When running theano, I get an error: not enough memory. See below. What are some possible actions that can be taken to free up memory? I know I can close applications etc, but I just want see if anyone has other ideas. For example, is it possible to reserve memory? THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 python conv_exp.py Using gpu device 0: GeForce GT 650M Trying to run under a GPU. If this is not desired, then modify network3.py to set the GPU flag to False. Error allocating 156800000 bytes of device memory (out of memory). Driver report 64192512 bytes free and 1073414144 bytes total Traceback (most recent call last): File "conv_exp.py", line 25, in training_data, validation_data, test_data = network3.load_data_shared() File "/Users/xr/courses/deep_learning/con_nn/neural-networks-and-deep-learning/src/network3.py", line 78, in load_data_shared return [shared(training_data), shared(validation_data), shared(test_data)] File "/Users/xr/courses/deep_learning/con_nn/neural-networks-and-deep-learning/src/network3.py", line 74, in shared np.asarray(data[0], dtype=theano.config.floatX), borrow=True) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/theano/compile/sharedvalue.py", line 208, in shared allow_downcast=allow_downcast, **kwargs) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/theano/sandbox/cuda/var.py", line 203, in float32_shared_constructor deviceval = type_support_filter(value, type.broadcastable, False, None) MemoryError: ('Error allocating 156800000 bytes of device memory (out of memory).', "you might consider using 'theano.shared(..., borrow=True)'")
How do you free up gpu memory?
0.53705
0
0
3,994
32,937,078
2015-10-04T18:48:00.000
1
1
0
0
python,python-import,panda3d
32,937,162
1
false
0
0
Try changing your PYTHONPATH. I ran into a problem like this, modified my PYTHONPATH, and it worked.
1
1
0
I have been searching the web for hours now, found several instances where someone had the same problem, but I seem to be too much of a newb with linux/ubuntu to follow the instructions properly, as none of the given solutions worked. Whenever I try to run a panda3d sample file from the python shell, I would give me an error saying: Traceback (most recent call last): File "/usr/share/panda3d/samples/asteroids/main.py", line 16, in from direct.showbase.ShowBase import ShowBase ImportError: No module named 'direct' What really bugs me is that when I try to execute the .py file directly (without opening it in the IDLE or pycharm) it works just fine. I know this has been asked before, but I would like to ask for a working step by step solution to be able to import panda3d from pycharm and the IDLE. I have no clue how to get it working, as none of the answers given to this question worked for me.
panda3d python import error
0.197375
0
0
501
32,938,416
2015-10-04T21:07:00.000
4
0
0
0
python,django
32,938,503
1
true
1
0
Possibly not a question for this site, but I will answer anyway. Django does not have a strict policy on whether you should update or whether you can touch core files; it is totally up to you. But, as always, touching core files is not a good idea. Django's core files usually live outside of your project, so there is no reason to change them. Versioning of Django is very simple: all major releases (1.6, 1.7, 1.8, 1.9, 2.0, etc.) bring new features, while minor releases (1.8.2, 1.8.5, etc.) contain only security and bug fixes, so it is safe and recommended to always update to the newest minor release. Some major releases are marked as LTS; those releases receive security and bug fixes for longer than the others. And that's all; the rest is up to you.
1
1
0
With WordPress and other CMS out there, there is a philosophy that you should always keep it up to date, no matter what. And never change the core files. How does Django as a framework stand on this topic?
Django's philosophy on updates?
1.2
0
0
55
32,938,494
2015-10-04T21:15:00.000
0
0
0
0
python,scikit-learn,cluster-analysis,data-mining,dbscan
71,779,306
2
false
0
0
I wrote my own distance code following the top answer, and just as it says, it was extremely slow; the built-in distance code was much better. I'm still wondering how to speed it up.
1
4
1
I have objects and a distance function, and want to cluster these using the DBSCAN method in scikit-learn. My objects don't have a representation in Euclidean space. I know that it is possible to use a precomputed metric, but in my case it's very impractical, due to the large size of the distance matrix. Is there any way to overcome this in scikit-learn? Maybe there are other python implementations of DBSCAN that can do so?
DBSCAN (with metric only) in scikit-learn
0
0
0
6,418
32,939,832
2015-10-05T00:37:00.000
0
0
1
0
function,user-interface,python-3.x,tkinter,parameter-passing
32,940,178
1
false
0
1
The simple fact is, GUI code is no different than any other python code. It can go wherever you want. The exact same rules apply for widgets as they do for integers and strings and anything else in a python program. Local variables are only visible locally, global variables are visible everywhere within a module, and instance variables are accessible to anything inside an object, or anything that has a reference to an object.
1
0
0
I had a pretty general question. I'm used to programming my main code inside def main(): But when I made a GUI using TKinter and put it inside my main code none of my variables worked! After putting my GUI on indent 0 code the GUI finally worked, but any functions I activated using my GUI didn't have my variables! Does anyone know what to do? Also, if my GUI takes input values and stores them in a variable and activates a function, will that function need to have this variable passed into it? Or does it already know? Programming on Jetbrains Pycharm in Python 3.4.
Where do GUI's go in a general application
0
0
0
41
32,941,097
2015-10-05T04:04:00.000
7
0
1
0
python,operators,bitwise-operators,boolean-operations
32,941,126
1
true
0
0
and is a Boolean operator. It treats both arguments as Boolean values, returning the first if it's falsy, otherwise the second. Note that if the first is falsy, then the second argument isn't even computed at all, which is important for avoiding side effects. Examples:
False and True --> False
True and True --> True
1 and 2 --> 2
False and None.explode() --> False (no exception)
& has two behaviors. If both are int, then it computes the bitwise AND of both numbers, returning an int. If one is int and one is bool, then the bool value is coerced to int (as 0 or 1) and the same logic applies. Else if both are bool, then both arguments are evaluated and a bool is returned. Otherwise a TypeError is raised (such as float & float, etc.). Examples:
1 & 2 --> 0
1 & True --> 1 & 1 --> 1
True & 2 --> 1 & 2 --> 0
True & True --> True
False & None.explode() --> AttributeError: 'NoneType' object has no attribute 'explode'
1
2
0
Is there any difference in the logic or performance of using the word and vs. the & symbol in Python?
Are & and 'and' equivalent in python?
1.2
0
0
472
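The behaviour described in the answer above, shown directly:

    print(False and True)     # False (boolean operator)
    print(1 and 2)            # 2 (returns the second operand)
    print(False and (1 / 0))  # False: short-circuits, the division never happens
    print(1 & 2)              # 0 (bitwise AND of the ints)
    print(True & 2)           # 0 (True is coerced to 1, then 1 & 2)
    print(True & True)        # True (bool & bool gives bool)
    # print(False & (1 / 0))  # would raise ZeroDivisionError: & evaluates both sides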
32,947,993
2015-10-05T11:52:00.000
0
0
0
0
python,django,heroku
32,949,624
1
true
1
0
The problem turned out to be the ALLOWED_HOSTS variable in the settings file. I set it to ['appname.herokuapp.com'] and it is working fine now.
1
0
0
I just started a new project and pushed it to heroku. I set up everything: Procfile, dyno and environment variables. Everything is working fine in localhost. But I get Network error on browser and logs show me Request timeout and Workder timeout error in heroku. I read that this happens when some request takes a lot of time. However, I don't have any request right now, it just shows This is the landing page.. The only thing I have in my landing page is one css file which comes from AWS. What could be a reason for this error? UPDATE: just found out that it is working in production only if DEBUG is set to True. I don't know why.
Network error(Request timeout) in Heroku Django app
1.2
0
0
324
32,948,568
2015-10-05T12:25:00.000
0
1
0
1
python,django,amazon-web-services
33,025,966
1
false
1
0
Looks like there's an issue with your EC2 key pair. Make sure you have the correct key and that the permissions of that key are 400. To check that the key is working, try to manually connect to the instance with ssh -i ~/.ssh/<your-key> ubuntu@<your-host>
1
0
0
I wrote the default password of my ami i.e. 'ubuntu' but it didn't work. I even tried with my ssh key. I've browsed enough and nothing worked yet.Can anybody please help me out? [] Executing task 'spawn' Started... Creating instance EC2Connection:ec2.us-west-2.amazonaws.com Instance state: pending Instance state: pending Instance state: pending Instance state: pending Instance state: running Public dns: ec2-52-89-191-143.us-west-2.compute.amazonaws.com Waiting 60 seconds for server to boot... [ec2-52-89-191-143.us-west-2.compute.amazonaws.com] run: whoami [ec2-52-89-191-143.us-west-2.compute.amazonaws.com] Login password for 'ubuntu':
while spawning the fab file it asks me the login password for 'ubuntu'
0
0
0
219
32,950,512
2015-10-05T14:02:00.000
0
0
0
0
python,window,pygame
32,959,873
1
true
0
1
I do not believe this is possible within regular Pygame. A Google search turned up "Window Manager Extensions", though, which explicitly supports non-rectangular windows.
1
0
0
Is there a way to get a clear window in Pygame? I know that I can have a borderless window, but is it possible to get rid of everything inside of the window to make it clear? I want to make some stuff that works on top of my desktop, but it needs a shape that isn't rectangle. I do know that it would still be a rectangle, I just don't want it to look like one.
How Do I Make A Clear Window in Pygame
1.2
0
0
85
32,952,260
2015-10-05T15:26:00.000
0
0
1
0
python,tar,tarfile
32,952,335
2
false
0
0
You can use .getnames() to list the contents of the tarfile obj.
1
1
0
I am using pythons tarfile.extractall() to unpack a foo.tar.gz file. I want to access the extracted folder, but sometimes the extracted folder has a different name than the packed file. I need a way to control the name of the extracted folder or, a return value that tells me the name of the extracted folder. Example packed file: foo-rc-2.0.0.tar.gz unpacked folder: foo-2.0.0-rc
Different name of unpacked tar.gz folder using tarfile.extractall()
0
0
0
1,207
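A small example of the suggestion above: ask the archive itself for its member names to learn the real top-level folder. The archive name is taken from the question; os.path.commonpath assumes Python 3.4+.

    import os
    import tarfile

    with tarfile.open("foo-rc-2.0.0.tar.gz", "r:gz") as tar:
        names = tar.getnames()
        top_level = os.path.commonpath(names)   # e.g. "foo-2.0.0-rc"
        tar.extractall()

    print("extracted into", top_level)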
32,952,744
2015-10-05T15:54:00.000
3
0
0
0
python,selenium,mobile,selenium-webdriver,phantomjs
32,953,131
1
true
1
0
There is no such thing as a "phantom mobile driver". You can change the user agent string and the viewport/window size in order to suggest to the website to deliver the same markup that a mobile client would receive.
1
1
0
I'm using Selenium with PhantomJS in order to scrape a dynamic website with infinite scroll. It's working but my teacher suggested to use a mobile phantom driver in order to get the mobile version of the website. With the mobile version I expect to see less Ads or JavaScript and retrieve the information faster. There is any "phantom mobile driver"?
Is it possible to use PhantomJS like a mobile driver in Selenium?
1.2
0
1
370
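A hedged sketch of the advice above: there is no separate mobile PhantomJS driver, but you can set a mobile user agent and viewport so that sites serve their mobile markup. The user agent string and window size are just examples.

    from selenium import webdriver

    caps = webdriver.DesiredCapabilities.PHANTOMJS.copy()
    caps["phantomjs.page.settings.userAgent"] = (
        "Mozilla/5.0 (iPhone; CPU iPhone OS 9_0 like Mac OS X) "
        "AppleWebKit/601.1 (KHTML, like Gecko) Mobile/13A344")   # example mobile UA

    driver = webdriver.PhantomJS(desired_capabilities=caps)
    driver.set_window_size(375, 667)   # roughly an iPhone-sized viewport
    driver.get("http://example.com")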
32,953,391
2015-10-05T16:28:00.000
2
0
1
0
python-2.7,ros,rospy
32,965,565
1
true
0
0
There are a lot of cases in which you definitely want to use catkin:
- Your package contains C++ code. This has to be compiled, which will be taken care of by catkin.
- You have custom message types. Custom messages have to be generated and compiled. Again, this is done by catkin.
- You have dependencies on other ROS packages (or vice versa). catkin resolves these dependencies and builds them if necessary.
- You have Python modules which need to be installed so other packages can use them. Of course you can write your own setup.py, but using catkin is the ROS way to do this.
- When your scripts are in a catkin package, you can use the ROS command line tools (rosrun, roscd, rosed, ...), which are very convenient.
As long as you really only have simple Python scripts without dependencies on other non-core ROS packages, you are probably fine without bundling them in a package. However, as soon as you are sharing your code with other ROS developers, I would package them nonetheless. While it may be working, it will be confusing for the others if they don't get the package structure they are used to.
1
2
0
Up until this point, while working on my project, I've been building ROS scripts using rospy- establishing topics and nodes, subscribing to things, and just generally doing all sorts of functions. I've been led to believe, though, that eventually my scripts will need to be made into 'packages', with the notion being that they increase modularity of programs (and is just the way things are done). So far, my scripts are pretty compact, and I don't see why sending out a python script invoking rospy would require this extra level of wrapping (particularly given the obfuscatory nature of most of ROS wiki's tutorials). I've not had to create catkin packages or anything for any of my programs so far. Is there some overwhelming reason why I need concern myself with ROS packages and catkin and the like? Right now, I just don't see the point when everything works well and likely would across any machine the script is run from. Thanks!
Regarding the Necessity of ROS Packages
1.2
0
0
274
32,953,669
2015-10-05T16:44:00.000
3
0
0
0
python,postgresql,imdb,imdbpy
32,972,532
1
true
0
0
Problem solved! For those who are having the same problem, here it goes: download the Java Movie Database (JMDB). It works with postgres or mysql. You'll have to download the Java runtime. After that, open the readme in the directory where you installed JMDB; all the instructions are there, but I'll help you. Follow the link to the *.list archives and download them. Move them into a new folder. After that, open JMDB and select the right folders for moviecollection, sound etc. (they are in C:/programs...). In IMDb-import select the folder you created which contains the *.list files. Well, this is the end. Run JMDB and you'll have your DB populated.
1
0
0
I'm startint a project on my own and I'm having some troubles with importing datas from IMDb. Already downloaded everything that's necessary but I'm kinda newbie in this python and command lines stuff, and it's pissing me off because I'm doing my homework (trying to learn how to do these things) but I can't reach it :( So, is there anybody who could create a step-by-step of how to do it? I mean, something like: You'll need to download this and run this commands on 'x'. Create a database and then run 'x'. It would be amazing for me and other people who don't know how to do this as well and I would truly appreciate A LOT, really!. Oh, I'm using Windows.
How to run imdbpy2sql.py and import data from IMDb to postgres
1.2
1
0
1,057
32,954,687
2015-10-05T17:44:00.000
0
0
0
1
python,amazon-web-services,etl,emr,amazon-emr
44,104,221
4
false
0
0
Now you can put your script on AWS Lambda for ETL. It supports scheduled invocation and triggers from other AWS components. It is on-demand and will charge you only when the Lambda function is executed.
1
0
0
I want to execute a on-demand ETL job, using AWS architecture. This ETL process is going to run daily, and I don't want to pay for a EC2 instance all the time. This ETL job can be written in python, for example. I know that in EMR, I can build my cluster on-demand and execute a hadoop job. What is the best architecture to run a simple on-demand ETL job?
Using AWS to execute on-demand ETL
0
0
0
613
32,959,469
2015-10-05T23:29:00.000
1
0
0
0
python,django,postgresql,datatables
32,962,651
2
false
1
0
There are multiple ways of improving your code. First, fetch only the data required for the current page in a single Django ORM hit; second, cache your ORM output and reuse the result if the same query is passed again. The first goes like this: in your code, Pizza.objects.all() followed by paginated = filtered[start: start + length] first fetches all the data and then slices it, which is a very expensive SQL query. Convert it to filtered = Pizza.objects.all()[(page_number - 1) * 30 : (page_number - 1) * 30 + 30] so the ORM fetches only the rows for the supplied page number, which is very fast compared to fetching everything and then slicing it. The second way is to fetch the data for a query once and put it in a caching solution like memcache or redis; the next time you need to fetch the data, first check whether the result for that query is already in the cache and, if so, simply use it. In-memory caching is much faster than hitting the database, because database reads involve a large amount of input/output between memory and the hard drive, and hard drives are traditionally slow. (A short caching sketch follows this record.)
1
1
0
I'm designing a data-tables-driven Django app and have an API view that data-tables calls with AJAX (I'm using data-tables in its server-side processing mode). It implements searching, pagination, and ordering. My database recently got large (about 500,000 entries) and performance has greatly suffered, both for searches and for simply moving to the next page. I suspect that the way I wrote the view is grossly inefficient. Here's what I do in the view (suppose the objects in my database are pizzas): filtered = Pizza.objects.filter(...) to get the set of pizzas that match the search criteria. (Or Pizza.objects.all() if there is no search criteria). paginated = filtered[start: start + length] to get only the current page of pizzas. (At max, only 100 of them). Start and length are passed in from the data-tables client-side code, according to what page the user is on. pizzas = paginated.order_by(...) to apply the ordering to the current page. Then I convert pizzas into JSON and return them from the view. It seems that, while search might justifiably be a slow operation on 500,000 entries, simply moving to the next page shouldn't require us to redo the whole search. So what I was thinking of doing was caching some stuff in the view (it's a class-based view). I would keep track of what the last search string was, along with the set of results it produced. Then, if a request comes through and the search string isn't different (which is what happens if the user is clicking through a few pages of results) I don't have to hit the database again to get the filtered results -- I can just use the cached version. It's a read-only application, so getting out of sync would not be an issue. I could even keep a dictionary of a whole bunch of search strings and the pizzas they should produce. What I'd like to know is: is this a reasonable solution to the problem? Or is there something I'm overlooking? Also, am I re-inventing the wheel here? Not that this wouldn't be easy to implement, but is there a built-in option on QuerySet or something to do this?
Keeping state in a Django view to improve performance for pagination
0.099668
0
0
541
32,965,937
2015-10-06T09:07:00.000
1
0
0
0
wxpython,wxwidgets
32,968,457
2
false
0
1
The simplest approach I can see is to keep only the abbreviated contents in the wxTextCtrl normally, replace it with the full contents when the user is about to start editing (i.e. when you get wxEVT_SET_FOCUS), and then swap the abbreviated version back in afterwards (i.e. when you get wxEVT_KILL_FOCUS).
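A rough wxPython sketch of that focus-swap idea, assuming a plain wx.TextCtrl; the class and attribute names are invented for illustration:

```python
import wx

class AbbreviatedTextCtrl(wx.TextCtrl):
    """Shows a shortened preview until focused, then the full text (sketch)."""
    def __init__(self, parent, full_text, preview_len=24):
        super(AbbreviatedTextCtrl, self).__init__(parent)
        self.full_text = full_text
        self.preview_len = preview_len
        self.SetValue(full_text[:preview_len])
        self.Bind(wx.EVT_SET_FOCUS, self.on_focus)
        self.Bind(wx.EVT_KILL_FOCUS, self.on_blur)

    def on_focus(self, event):
        self.SetValue(self.full_text)      # show everything while editing
        event.Skip()

    def on_blur(self, event):
        self.full_text = self.GetValue()   # keep any edits the user made
        self.SetValue(self.full_text[:self.preview_len])
        event.Skip()
```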
2
0
0
Last week I was given a requirement for an already crowded display/input screen written in wxPython (data stored in MySQL) to show the first 24 characters of two free-format comment fields while allowing up to 255 characters to be displayed/input. For instance, on entry the screen might show “Right hip and knee X-ra”, whilst the full entry continues “ys requested 24th September in both laying and standing positions with knee shown straighten and bent through 75 degrees”. Whilst I can create a wxTextCtrl that holds more characters than it displays (it scrolls), I cannot work out how to display/enter the contents as a multiline box when it is selected. I have gone through our wxPython book and searched online with no joy.
wxtextCtrl To Allow Input/Display Of More Text Than Default Display
0.099668
0
0
109
32,965,937
2015-10-06T09:07:00.000
0
0
0
0
wxpython,wxwidgets
33,018,442
2
false
0
1
Have you considered calling SetToolTipString() on your TextCtrl and setting it to the same value as the control's contents? That way only the first 24 characters show in the text control, but if you hover over it the entire string is displayed as a tooltip.
2
0
0
Last week I was given a requirement for an already crowded display/input screen written in wxPython (data stored in MySQL) to show the first 24 characters of two free-format comment fields while allowing up to 255 characters to be displayed/input. For instance, on entry the screen might show “Right hip and knee X-ra”, whilst the full entry continues “ys requested 24th September in both laying and standing positions with knee shown straighten and bent through 75 degrees”. Whilst I can create a wxTextCtrl that holds more characters than it displays (it scrolls), I cannot work out how to display/enter the contents as a multiline box when it is selected. I have gone through our wxPython book and searched online with no joy.
wxtextCtrl To Allow Input/Display Of More Text Than Default Display
0
0
0
109
32,965,980
2015-10-06T09:09:00.000
4
0
1
1
python,git-bash
59,281,928
2
false
0
0
Follow these steps: open Git Bash and cd ~. Open .bashrc with your favourite editor (touch it first if it does not exist; I use code .bashrc, but vim works too). Add the line alias python='winpty c:/Python27/python.exe' to the file, save and close, then restart Git Bash (or source ~/.bashrc) and try python --version again. Hopefully it works for you.
1
8
0
I've installed Python 3.5 and Python 2.7 on Windows, and I've added the path for Python 2.7 to the PATH variable. When I type 'python --version' in the Windows cmd it prints 2.7, but when I type 'python --version' in Git Bash it prints 3.5. How do I change the Python version used in Git Bash on Windows to 2.7?
How to change python version in windows git bash?
0.379949
0
0
11,192
32,971,613
2015-10-06T13:44:00.000
0
1
1
0
python,c
32,972,064
5
false
0
0
In C you can compile without declaring: the compiler assumes an undeclared function returns int, and if that happens to be true your program compiles and runs. If your functions return some other type you will get problems and you will have to declare them. It is often quicker not to write declarations by hand and instead generate them from your code into an .h file that you include where needed; you can leave that part of the program to a tool, just as you can leave the indentation to indent: write a big mess and let the tool tidy it up.
1
3
0
I am reading and learning now Python and C at the same time. (Don't ask me why, it is a lot of fun! :-)) I use "Learning Python" by Mark Lutz. Here is what he writes about functions in Python: Unlike in compiled languages such as C, Python functions do not need to be fully defined before the program runs. More generally, defs are not evaluated until they are reached and run, and the code inside defs is not evaluated until the functions are later called. I do not quite get it as in my second book K.N.King says that you CAN declare a function first and create a definition later. English is not my native language so what I am missing here? I can make only one guess, that it is somehow related to program runtime. In C the compiler runs through the program and finds the function declaration. Even if it is not defined, compiler goes on and finds function definition later. Function declaration in C helps to avoid problems with return-type of a function (as it is int by default). On the other hand in Python function is not evaluated until it is reached during runtime. And when it is reached, it does not evaluate the body of a function until there is a function call. But this guess does not explain a quote above. What is then Mr.Lutz is talking about? I am confused a bit...
Difference in declaring a function in Python and C
0
0
0
455
32,972,145
2015-10-06T14:09:00.000
6
0
0
0
python,pyqt,pyside,maya,qlineedit
32,972,495
1
true
0
1
Blah... I just realised that QComboBox does exactly what I want when setEditable is turned on: it has a completer and keeps a history of whatever was typed in the text field!
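A minimal PySide sketch of that discovery: an editable QComboBox whose insert policy keeps each entered line as history. The widget setup here is illustrative, not from the original answer:

```python
import sys
from PySide import QtGui

app = QtGui.QApplication(sys.argv)
combo = QtGui.QComboBox()
combo.setEditable(True)                              # typed text can be entered directly
combo.setInsertPolicy(QtGui.QComboBox.InsertAtTop)   # Enter stores the text as a history item
combo.show()                                         # up/down arrows then walk the history
sys.exit(app.exec_())
```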
1
5
0
Do you know an easy way to add a "history" to a QLineEdit in PySide/PyQt? Example: whenever Enter is pressed, the typed text is stored, and pressing the "up" or "down" arrows lets you navigate through the history. Thank you very much.
PyQt QLineEdit with history
1.2
0
0
1,457
32,973,725
2015-10-06T15:22:00.000
3
0
1
0
python
32,973,783
3
false
0
0
Because that's an explicit choice the language made; assigning to indices requires those indices to exist. Assigning to slices will expand or contract the list as needed to accommodate a new size.
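A small interactive illustration of the difference:

```python
a = [1, 2, 3]
a[len(a):] = [4]      # slice assignment: the list grows to fit -> [1, 2, 3, 4]
a.append(5)           # equivalent effect -> [1, 2, 3, 4, 5]
try:
    a[len(a)] = 6     # index assignment: index 5 does not exist yet
except IndexError as e:
    print(e)          # list assignment index out of range
```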
1
3
0
Why is a[len(a):] = [x] equivalent to a.append(x), but a[len(a)] = [x] gives an out of range error?
Difference between a[len(a):] = [x] and a[len(a)] = [x]
0.197375
0
0
409
32,975,414
2015-10-06T16:52:00.000
0
0
0
0
python,sql,postgresql
32,981,952
1
true
0
0
There's no way to automatically omit columns where all values are null from a result set, no. You'll have to handle this client side. The good news is that sending nulls is extremely efficient, so the network impact of lots of null columns is minimal.
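A client-side sketch of dropping the NULLs, assuming psycopg2; the connection string, table and query are placeholders:

```python
import psycopg2

conn = psycopg2.connect("dbname=mydb")                 # placeholder connection string
cur = conn.cursor()
cur.execute("SELECT * FROM wide_table WHERE id = %s", (1,))
row = cur.fetchone()
col_names = [col[0] for col in cur.description]        # col[0] is the column name (DB-API)
# Keep only the columns that are not NULL for this particular row.
non_null = {name: value for name, value in zip(col_names, row) if value is not None}
print(non_null)
```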
1
0
0
If you have many columns (e.g. 100) but only want to return the columns in a row that are NOT NULL, is it possible to do so without explicitly naming each column? Is there any way around this? Thank you.
If you have many columns some with NULL Values-- exclude without explicitly naming columns
1.2
0
0
39
32,978,233
2015-10-06T19:36:00.000
0
1
0
1
python,gdb
32,979,189
2
false
0
0
I was able to figure it out. What I understood is that GDB embeds the Python interpreter so it can use Python as an extension language. You can't just import gdb from /usr/bin/python as if it were an ordinary Python library, because GDB isn't structured as a library. What you can do is source MY-SCRIPT.py from within gdb (equivalent to running gdb -x MY-SCRIPT.py).
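A tiny example of such a script using GDB's documented Python API; the command name hello-gdb is made up for illustration:

```python
# my_script.py - run inside gdb with:  (gdb) source my_script.py
# or from the shell:  arm-none-eabi-gdb -x my_script.py
import gdb

class HelloCommand(gdb.Command):
    """Minimal custom command to confirm the embedded interpreter works."""
    def __init__(self):
        super(HelloCommand, self).__init__("hello-gdb", gdb.COMMAND_USER)

    def invoke(self, arg, from_tty):
        # Print the first line of gdb's version banner from Python.
        print(gdb.execute("show version", to_string=True).splitlines()[0])

HelloCommand()
```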
1
0
0
Where should my Python files be stored so that I can run them using gdb? I have a custom gdb located at /usr/local/myproject/bin, and I start my gdb session by calling ./arm-none-eabi-gdb from that location. I don't know how this gdb and Python are integrated with each other. Can anyone help?
Invoke gdb from python script
0
0
0
886
32,981,331
2015-10-06T23:18:00.000
0
0
1
0
python
32,981,354
3
false
0
0
Just convert them with the built-in float(x) function, where x is the string containing the number.
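A sketch of that conversion, assuming a whitespace-separated file and hypothetical file names; which two columns you multiply is up to you:

```python
# Column 1 is a string; columns 2-4 are numbers stored as text.
with open("data.txt") as fin, open("data_out.txt", "w") as fout:
    for line in fin:
        parts = line.split()
        name = parts[0]
        col2, col3, col4 = (float(x) for x in parts[1:4])   # strings -> floats
        col5 = col2 * col3                                   # e.g. multiply two of the columns
        fout.write("%s %g %g %g %g\n" % (name, col2, col3, col4, col5))
```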
1
0
0
I am trying to read in a 4-column txt file and create a 5th column. Column 1 is a string and columns 2-4 are numbers, but they are being read as strings. My Python script is currently unable to multiply two of the columns because it treats columns 2-4 as strings. I want to know how to convert columns 2-4 (which are numbers) to floating-point numbers and then create a 5th column that is the product of two of the existing columns.
Change data type in text file from string to float Python
0
0
0
761
32,981,685
2015-10-07T00:01:00.000
1
0
1
0
python,turtle-graphics
32,981,814
1
false
0
0
I suspect that the proper answer is "no". Turtle graphics does not maintain the shape in a form useful for you to test, nor does it provide shape manipulation methods. You could develop your own package to represent objects, and include an intersection method, but this takes a lot of work. If you're interested, see the BOOST library shape methods (that's in C++) that Luke Simonson did (2009, I think). However, if your shapes are regular enough, you can make a proximity detector. For instance, if the shapes are more or less circular, you could simply see whether they've come within r1 + r2 of each other (a simple distance function on their current positions), where r1 & r2 are the radii of the objects. Is that close enough for your purposes?
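A rough sketch of such a proximity check using turtle's built-in distance(); the radii are assumed values for roughly circular shapes:

```python
import turtle

t1 = turtle.Turtle(shape="circle")
t2 = turtle.Turtle(shape="circle")
t2.penup()
t2.goto(100, 0)

r1 = r2 = 10          # assumed radii of the two roughly circular shapes

def collided(a, b):
    # Treat the shapes as circles: they "intersect" once the centres
    # are closer together than the sum of the radii.
    return a.distance(b) <= r1 + r2

if collided(t1, t2):
    print("shapes intersect")
```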
1
0
0
I'm using turtle and inserting 2 shapes into the program and I am trying to make the program perform a specific function when the objects intersect. Is it possible to do it with an if statement?
How to detect turtle objects intersecting in python?
0.197375
0
0
999
32,982,034
2015-10-07T00:50:00.000
0
0
0
0
python,numpy,pandas
64,074,702
7
false
0
0
You can just use the unique() method on each column in your DataFrame, e.g. df["colname"].unique(). This returns an array of all distinct values in that column, so a binary column is one whose unique values are only 0 and 1. You can also use a for loop (or a comprehension) to traverse all the columns in the dataset: [df[col].unique() for col in df].
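A small sketch of turning that into a binary-column check; the toy data is invented, and treating "values are a subset of {0, 1}" as the definition of binary is an assumption:

```python
import pandas as pd

df = pd.DataFrame({"a": [0, 1, 1], "b": [0, 5, 2], "c": [0, 0, 0]})   # toy data
binary_cols = [col for col in df.columns
               if set(df[col].dropna().unique()).issubset({0, 1})]
print(binary_cols)   # ['a', 'c']
```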
1
6
1
I have a pandas dataframe with a large number of columns and I need to find which columns are binary (with values 0 or 1 only) without looking at the data. Which function should be used?
Which columns are binary in a Pandas DataFrame?
0
0
0
9,639
32,988,413
2015-10-07T09:18:00.000
1
0
1
0
python,nlp,nltk
32,994,584
1
false
0
0
That's a good suggestion; I will try it with anaphora too. For now, my problem is solved using the concepts of noun phrase and verb phrase: I extracted the clause(s) from the sentence, identified the verbs and nouns in each, and related them through an iterative technique. Thank you for the help.
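Not the exact iterative technique described above, but a minimal NLTK starting point for pulling out the nouns and verbs per sentence; it assumes the tokenizer and POS tagger models have already been downloaded:

```python
import nltk   # requires the 'punkt' tokenizer and the default POS tagger data

sentence = "Cat sat on the mat."
tokens = nltk.word_tokenize(sentence)
tagged = nltk.pos_tag(tokens)
nouns = [w for w, tag in tagged if tag.startswith("NN")]
verbs = [w for w, tag in tagged if tag.startswith("VB")]
print(nouns, verbs)   # a starting point for relating each verb to nearby nouns
```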
1
0
1
Given a sentence, how can I tell with Python NLTK which verb is talking about which noun? E.g.: "Cat sat on the mat." Here "sat" (verb) is talking about "Cat" (noun). Now consider a complex sentence with more nouns and verbs. Thank you.
NLP - Find which Verb is talking about the Noun in a sentence
0.197375
0
0
1,077
32,991,315
2015-10-07T11:43:00.000
3
0
1
0
python,pyinstaller
41,541,894
2
true
0
0
Uninstalling with "python27\Scripts>pip uninstall pyinstaller" and reinstalling with "python27\Scripts>pip install pyinstaller" worked for me.
2
2
0
How can I tell whether I have built a 32-bit or a 64-bit application? I have created an .exe with PyInstaller and it works fine on my computer (Win7 Ultimate 64), but when I try to run the same exe on a virtual machine (Win7 Home Premium 64) I get: Error 193: not a valid Win32 application. I'm using Python 2.7 32-bit and PyInstaller 2.1 (I think also 32-bit). So isn't this a 32-bit application that should run fine on 64-bit too?
Not a valid Win32 application... Python, PyInstaller, Windows7
1.2
0
0
3,129
32,991,315
2015-10-07T11:43:00.000
2
0
1
0
python,pyinstaller
39,793,336
2
false
0
0
Use pip to uninstall and then reinstall pyinstaller. The first time I used PyInstaller it worked, but it later became corrupted; reinstalling fixed the issue.
2
2
0
How can I tell whether I have built a 32-bit or a 64-bit application? I have created an .exe with PyInstaller and it works fine on my computer (Win7 Ultimate 64), but when I try to run the same exe on a virtual machine (Win7 Home Premium 64) I get: Error 193: not a valid Win32 application. I'm using Python 2.7 32-bit and PyInstaller 2.1 (I think also 32-bit). So isn't this a 32-bit application that should run fine on 64-bit too?
Not a valid Win32 application... Python, PyInstaller, Windows7
0.197375
0
0
3,129
32,994,489
2015-10-07T14:07:00.000
0
0
0
0
python,selenium,selenium-chromedriver
32,999,196
2
true
0
0
All, thanks for your help. I found the root cause: the problem was non-optimal usage of find_elements; even when it was called once, it executed for ages. I replaced it with a workaround using find_element and it started to work. The workaround is fragile, but it's better than nothing.
1
1
0
I need to work with a huge page (there are a lot of elements, really) using Selenium and ChromeDriver. After navigation happens and the page loads, the test hangs for more than 2 hours, with Chrome consuming 100% CPU the whole time; I suspect it is parsing the loaded page. Is there a way to avoid or handle this somehow? (I know the page should not be that huge, but that is a different story.) Thanks in advance for your help.
Chromedriver works with page for too long
1.2
0
1
224
32,994,822
2015-10-07T14:22:00.000
-1
0
0
0
python,python-2.7
32,997,001
2
false
0
0
If it's raw data, I always export it to a .csv file and work on it directly. CSV is a simple format with one row per line and all the elements on the row separated with commas. Depending on what you want to do, it's not hard to write a python script to edit that.
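A small sketch of that kind of script with the csv module (Python 2 style to match the question's tags; the file names and the example edit are arbitrary):

```python
import csv

# Read the exported .csv, tweak one column, and write a new file.
with open("export.csv", "rb") as fin, open("export_fixed.csv", "wb") as fout:
    reader = csv.reader(fin)
    writer = csv.writer(fout)
    for row in reader:
        if len(row) > 2:
            row[2] = row[2].strip().upper()   # whatever edit you actually need
        writer.writerow(row)
```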
1
4
0
I need to change data in a large Excel file (more than 240,000 rows on a sheet). It's possible through win32com.client, but I need to use Linux... Please, could you advise something suitable?
Change data in large excel file(more than 240 000 rows on sheet)
-0.099668
1
0
414
32,995,454
2015-10-07T14:48:00.000
1
0
1
0
python,rest,extjs,pyramid,jsonschema
33,002,536
3
true
1
0
If I understand you correctly, you want to use JSON Schema for input validation, but you are struggling to figure out how to validate query parameters with JSON Schema in a RESTful way. Unfortunately, there isn't a definitive answer. JSON Schema just wasn't designed for that. Here are the options I have considered in my own work with REST and JSON Schema. Convert query parameters to JSON then validate against the schema Stuff your JSON into a query param and validate the value of that param. (i.e. /foo/1?params={"page": 2, "perPage": 10}) Use POST instead of GET and stick your fingers in your ears when people tell you you are doing REST wrong. What do they know anyway. I prefer option 1 because it is idiomatic HTTP. Option 2 is probably the easiest to work with on the back-end, but it's dirty. Option 3 is mostly a joke, but in all seriousness, there is nothing in REST or HTTP that says a POST can only be used for creation. In fact, it is the most flexible and versatile of the HTTP methods. Think of it like a factory that does something. That something could generate and store a new resource or just return it. If you are finding that you need to send a large number of query parameters, it's probably not really a simple GET. My rule of thumb is that if the result is inherently not cacheable, it's possible that a POST is more appropriate (or at least not inappropriate).
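A sketch of option 1: coercing the query parameters and validating them with the jsonschema package. The schema, parameter names and integer coercion are illustrative assumptions:

```python
import jsonschema

schema = {
    "type": "object",
    "properties": {
        "page":    {"type": "integer", "minimum": 1},
        "perPage": {"type": "integer", "maximum": 100},
    },
    "required": ["page"],
}

def validate_query_params(params):
    # Query params arrive as strings, so coerce them before validating
    # (assumes all of these params are meant to be integers).
    converted = {k: int(v) for k, v in params.items()}
    jsonschema.validate(converted, schema)
    return converted

print(validate_query_params({"page": "2", "perPage": "10"}))
```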
1
1
0
I'm starting a new project that consists in an Extjs 6 application with a pyramid/python backend. Due to this architecture, the backend will only provide an RPC and won't serve any page directly. My implementation of such a thing is usually based on REST and will fit nicely this CRUD application. Regarding data validation i would like to move from Colander/Peppercorn that i always found awkward to the simpler and more streamlined jsonschema. The idea here would be to move all the parameters - minus the id contained in the url when is the case - of the various requests into a json body that could be easily handled by jsonschema. The main problem here is that GET requests shouldn't have a body and i definitely want to put parameters in there (filters, pagination, etc). There's probably some approach to REST or REST-like and JSONschema but i'm not able to find anything on the web. Edit: someone mentioned the question about body in GET HTTP request. While putting a body in a GET HTTP request is somehow possible, it's in violation of part of HTTP 1.1 specification and therefore this is NOT the solution to this problem.
How to conciliate REST and JSONschema?
1.2
0
0
237
33,000,288
2015-10-07T18:57:00.000
0
0
0
0
python,tkinter,python-imaging-library,tkinter-canvas
33,000,636
2
false
0
1
(Actually, you are asking two different, not very precise questions.) On scrolling forever: independent of Python, a common approach is to tile/mirror the image at the edges so you can build a virtually endless world from one image or a few tiles of the map. On the GUI framework/API: in my experience Qt (so in your case perhaps PyQt) is well documented and designed to make an OS-independent GUI fairly easy to realise.
1
0
0
I have an image of a map. I would like to make the left and right (East and west) edges of the map connect so that you can scroll forever to the right or left and keep scrolling over the same picture. I've looked around and can't find anything on the topic (likely because I don't know what to call it). I would also like to have the picture in a frame that I can grab and drag to move the picture around. I was trying to do this in Tkinter, but I have a feeling there are probably easier ways to do this.
Connecting edges of picture in python
0
0
0
180
33,002,726
2015-10-07T21:29:00.000
2
0
0
0
android,python,apk,kivy,buildozer
33,002,817
1
true
0
1
Instead of changing the theme in a local kivy installation, place the image in a folder named data/images in your app's directory (i.e. ./data/images/defaulttheme-0.png from your app script). Edit: it is also necessary to copy the atlas file to this location, as noted by Tshirtman.
1
0
0
When I change the defaulttheme-0.png of my Python Kivy installation, my App appears different when launching it as .py on Ubuntu. But if I now "convert" it to an .apk and run it on my mobile (Android 5.1.2) it appears as before, without the theme being changed. What do I have to do, to tell kivy/buildozer to integrate the theme into the .apk? What I tried: - Normally running "sudo buildozer android debug" - Deleting all files created by "sudo buildozer init" and calling that one again - And of course I searched Google, but as I didn't found anything, I hope you can help me, because my App would be almost done after fixing that problem...
Python Kivy: Changing defaulttheme-0.png won't change final .apk design
1.2
0
0
358
33,004,491
2015-10-08T00:08:00.000
1
1
1
0
python-3.x,py2exe,pypy
33,037,742
1
true
0
0
py2exe and pypy are incompatible. It's possible to write an equivalent of py2exe for pypy, but some work has to be done.
1
0
0
I have been developing a Python script called script.py. I wish to compile it using py2exe; however, I want to make sure the final script.exe is optimised with the PyPy JIT compiler and hence faster. P.S. I am new to both py2exe and PyPy.
py2exe with Pypy
1.2
0
0
423
33,005,743
2015-10-08T02:46:00.000
1
0
1
0
python,pycharm
33,005,817
1
true
0
0
If you press Alt-1 you will see, on the left, the files of your currently open project. If you click on a .py file and it does not open in the editor window, it may mean your default layout somehow got changed. Press Shift-F12 to restore the default layout and click on a .py file again. Another thing to try is to delete the .idea folder in the project directory and restart PyCharm.
1
2
0
Currently, I have PyCharm recognizing the .py extension and all .py files have the PyCharm icon. However, when I double click on a .py file, the project opens up (if it's in a project) and on the left hand side I can see everything in my project folder, but the center of PyCharm is telling me "No files are open." I can open, write, and run scratch files though. How do I get the python script in the editor?
Opening .py files in PyCharm Editor
1.2
0
0
2,749
33,009,532
2015-10-08T07:40:00.000
0
0
0
0
python-2.7,selenium-webdriver
33,010,025
1
false
0
0
Cross-check your Selenium language bindings, and try checking with older versions.
1
0
0
I am implementing Selenium and I have already included "from selenium import webdriver", but I am still getting the error "ImportError: cannot import name webdriver". Any idea how to resolve this?
ImportError: cannot import name webdriver
0
0
1
1,185
33,009,839
2015-10-08T07:55:00.000
2
1
0
0
python,plone,plone-4.x
33,016,003
4
false
1
0
If you don't find a proper add-on, keep in mind that in Plone a trash can is really just a matter of workflow. You can customise your workflow by adding a new trash transition that moves the content into a state (say, "trashed") where normal users can't see it (perhaps keeping it visible to Managers and/or Site Administrators). You will probably also have to customise the content_status_modify script, because after trashing a content item you must be redirected to another location (or you'll get an Unauthorized error).
2
1
0
I want to give all members of a Plone (4.3.7) site the possibility to restore a file that was accidentally deleted. I only found ecreall.trashcan for this purpose, but I have some problems installing it. After adding it to buildout.conf and running bin/buildout, the output contains errors like... File "build/bdist.linux-x86_64/egg/ecreall/trashcan/skins/ecreall_trashcan_templates/isTrashcanOpened.py", line 11 return session and session.get('trashcan', False) or False SyntaxError: 'return' outside function File "build/bdist.linux-x86_64/egg/ecreall/trashcan/skins/ecreall_trashcan_templates/object_trash.py", line 23 return context.translate(msg) SyntaxError: 'return' outside function File "build/bdist.linux-x86_64/egg/ecreall/trashcan/skins/ecreall_trashcan_templates/object_restore.py", line 23 return context.translate(msg) SyntaxError: 'return' outside function ... and so I don't find any new add-on to enable or configure in Site Setup. Does anyone know what this could be, or is there another method to do what I want? Thanks in advance.
Is there a metod to have a trash can in Plone?
0.099668
0
0
166
33,009,839
2015-10-08T07:55:00.000
1
1
0
0
python,plone,plone-4.x
33,026,043
4
false
1
0
I've found the solution (!!!) using Content Rules in the control panel. First I created a folder called TRASHCAN, then in the content rules I added a rule that copies the file/page/image into the trashcan folder whenever it is removed. This rule can be disabled inside the trashcan folder itself, so from there you can delete the objects permanently.
2
1
0
I want to give all members of a Plone (4.3.7) site the possibility to restore a file that was accidentally deleted. I only found ecreall.trashcan for this purpose, but I have some problems installing it. After adding it to buildout.conf and running bin/buildout, the output contains errors like... File "build/bdist.linux-x86_64/egg/ecreall/trashcan/skins/ecreall_trashcan_templates/isTrashcanOpened.py", line 11 return session and session.get('trashcan', False) or False SyntaxError: 'return' outside function File "build/bdist.linux-x86_64/egg/ecreall/trashcan/skins/ecreall_trashcan_templates/object_trash.py", line 23 return context.translate(msg) SyntaxError: 'return' outside function File "build/bdist.linux-x86_64/egg/ecreall/trashcan/skins/ecreall_trashcan_templates/object_restore.py", line 23 return context.translate(msg) SyntaxError: 'return' outside function ... and so I don't find any new add-on to enable or configure in Site Setup. Does anyone know what this could be, or is there another method to do what I want? Thanks in advance.
Is there a metod to have a trash can in Plone?
0.049958
0
0
166
33,011,328
2015-10-08T09:07:00.000
0
0
1
0
python,python-3.x,overriding,argparse,exit-code
33,025,950
4
false
0
0
I solved the problem by catching SystemExit and determining which error occurred by simply testing and comparing. Thanks for all the help, guys!
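A minimal sketch of the catch-and-compare approach; the exact mapping from argparse's own exit code to the desired one is an assumption:

```python
import argparse
import sys

parser = argparse.ArgumentParser()
parser.add_argument("--verbose", action="store_true")

try:
    args = parser.parse_args()
except SystemExit as exc:
    # argparse has already printed its message; re-exit with the code we want:
    # 1 for real errors (argparse uses a non-zero code), 0 for things like --help.
    sys.exit(1 if exc.code else 0)
```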
2
1
0
I'm creating a program as an assignment for my school, and I'm all done with it except one thing: the program has to exit with different codes depending on how the execution went. I'm processing options using argparse, and for built-in behaviour like "version" I've managed to override the exit code, but if an option is given that doesn't exist it won't work: it prints the "unrecognized" error message and exits with code 0, while I need it to exit with code 1. Is there any way to do this? It's been driving me nuts; I have struggled with it for days now... Thanks in advance! /feeloor
Python3 override argparse error
0
0
0
648
33,011,328
2015-10-08T09:07:00.000
0
0
1
0
python,python-3.x,overriding,argparse,exit-code
33,011,374
4
false
0
0
Use sys.exit(returnCode) to exit with a particular code. Note that on Linux machines, if you read the status via something like os.system or os.wait, the exit code is encoded in the high byte, so you need an 8-bit right shift to recover the actual return code.
2
1
0
I'm creating a program as an assignment for my school, and I'm all done with it except one thing: the program has to exit with different codes depending on how the execution went. I'm processing options using argparse, and for built-in behaviour like "version" I've managed to override the exit code, but if an option is given that doesn't exist it won't work: it prints the "unrecognized" error message and exits with code 0, while I need it to exit with code 1. Is there any way to do this? It's been driving me nuts; I have struggled with it for days now... Thanks in advance! /feeloor
Python3 override argparse error
0
0
0
648
33,016,533
2015-10-08T12:57:00.000
1
1
0
0
python,django,email,smtplib
33,017,245
1
false
1
0
You connect to an SMTP server, preferably your own, that either doesn't require authentication or on which you do have an account, create an email that has the user's address in the From field, and just send it. Which library you use to do it (smtplib, Django's email helpers, or anything else) is irrelevant. If you want, you can even skip the SMTP server and simulate one, depositing the composed mail directly into the user's POP server inbox, but there is rarely a need for such extremes.
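A bare smtplib sketch of the idea; the relay host and the addresses are placeholders, and it assumes the relay accepts unauthenticated mail:

```python
import smtplib
from email.mime.text import MIMEText

msg = MIMEText("Action performed in the app.")
msg["Subject"] = "Notification"
msg["From"] = "acting.user@example.com"      # the user who performed the action
msg["To"] = "recipient@example.com"

# Assumes an SMTP relay you control that does not require authentication.
server = smtplib.SMTP("localhost", 25)
server.sendmail(msg["From"], [msg["To"]], msg.as_string())
server.quit()
```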
1
0
0
I'm trying to send a notification from my Django application every time the user performs specific actions, and I would like to send those notifications from the email address of the person who performed them. I don't want users to have to enter their password into my application or anything like that. I know this is possible because I remember doing it with PHP a long time ago.
How to send an email on python without authenticating
0.197375
0
0
286
33,017,847
2015-10-08T13:54:00.000
1
1
0
0
python,ssh,telnet,paramiko
33,018,073
2
true
0
0
Yes, the operating system (Linux or Windows) keeps track of all of the resources your process uses: files, mutexes, sockets, anything else. When the process dies, all of those resources are freed. It doesn't really matter which programming language you use or how you terminate your application. There are a few subtle exceptions to this rule, like the TIME_WAIT state for sockets or resources held by zombie processes, but in general you can assume that whenever your process terminates, its resources are freed. UPD: as mentioned in the comments, the OS cannot guarantee that the resources were released gracefully. For network connections this means there is no guarantee the FIN packet was sent, so although everything is cleaned up on your machine, the remote endpoint may still be waiting for data from you, theoretically for an unbounded time. So it is always better practice to use a "finally" clause to close the connection and notify the other endpoint.
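A paramiko example of that "finally" advice; the host and credentials are placeholders:

```python
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
try:
    client.connect("host.example.com", username="user", password="secret")  # hypothetical credentials
    stdin, stdout, stderr = client.exec_command("uptime")
    print(stdout.read())
finally:
    client.close()   # runs even if an unrelated exception terminates the script
```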
1
1
0
This might be bad practice, so forgive me, but when Python exits on a non-telnetlib exception or a non-paramiko (SSH) exception, will the SSH or Telnet connection automatically close? Also, will sys.exit() close all connections that the script is using?
Does python automatically close SSH and Telnet
1.2
0
1
702
33,020,365
2015-10-08T15:45:00.000
2
0
1
0
python,divide-by-zero
33,020,408
2
true
0
0
Because 1/0 can be either +inf (positive) or -inf (negative). 1/inf can only be 0.
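A quick illustration of the asymmetry in the interpreter:

```python
inf = float("inf")
print(1 / inf)     # 0.0  -> the limit is unambiguous
print(1 / -inf)    # -0.0
try:
    1 / 0          # could tend to +inf or -inf, so Python refuses to pick one
except ZeroDivisionError as e:
    print(e)
```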
1
0
0
In general, 1/a = b ⟺ 1/b = a, so if we're letting the reciprocal of infinity be 0, the reciprocal of 0 should be infinity. It seems strange for Python to use the limit for 1/inf but not for 1/0. What is the rationale behind this decision?
Why does 1/inf == 0 but 1/0 != inf?
1.2
0
0
569
33,023,432
2015-10-08T18:33:00.000
6
0
1
0
python,arguments,global-variables,parameter-passing
33,023,611
2
false
0
0
This is a very generic question so it is hard to be specific. What you seem to be describing is a bunch of inter-related functions that share data. That pattern is usually implemented as an Object. Instead of a bunch of functions, create a class with a lot of methods. For the common data, use attributes. Set the attributes, then call the methods. The methods can refer to the attributes without them being explicitly passed as parameters.
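A toy sketch of that pattern; the names and the "expensive" computation are invented for illustration:

```python
class Analysis(object):
    """Groups related steps so intermediate results live on the instance."""
    def __init__(self, data):
        self.data = data
        self.aux = None                      # auxiliary stuff computed once, reused later

    def compute(self):
        self.aux = sum(self.data)            # pretend this is the expensive part
        return [x / float(self.aux) for x in self.data]

    def report(self):
        # No need to pass `aux` around: it is already stored on self.
        return "normalised by %s" % self.aux

a = Analysis([1, 2, 3])
print(a.compute())
print(a.report())
```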
1
5
0
Let's say I have a python module that has a lot of functions that rely on each other, processing each others results. There's lots of cohesion. That means I'll be passing back and forth a lot of arguments. Either that, or I'd be using global variables. What are best practices to deal with such a situation if? Things that come to mind would be replacing those parameters with dictionaries. But I don't necessarily like how that changes the function signature to something less expressive. Or I can wrap everything into a class. But that feels like I'm cheating and using "pseudo"-global variables? I'm asking specifically for how to deal with this in Python but I understand that many of those things would apply to other languages as well. I don't have a specific code example right, it's just something that came to mind when I was thinking about this issue. Examples could be: You have a function that calculates something. In the process, a lot of auxiliary stuff is calculated. Your processing routines need access to this auxiliary stuff, and you don't want to just re-compute it.
Avoiding global variables but also too many function arguments (Python)
1
0
0
1,672
33,025,522
2015-10-08T20:37:00.000
0
0
1
1
python,linux,directory,installation
33,025,633
1
false
0
0
It depends on your distro. Most distros have Python in their repositories, so you just need to use your package manager to install it. For example, on Ubuntu it's sudo apt-get install python, and on Fedora it's su -c 'dnf install python'.
1
0
0
How can I install Python correctly? When I install it manually it ends up in the /usr/local/bin directory, which causes many problems; for example, I am not able to install modules. I want to install it into /usr/bin.
How to install python correctly on Linux?
0
0
0
156
33,027,086
2015-10-08T22:37:00.000
2
0
1
0
python,pandas
33,027,650
1
true
0
0
This isn't going to be a very complete answer, but hopefully is an intuitive "general" answer. Pandas doesn't use a list as the "core" unit that makes up a DataFrame because Series objects make assumptions that lists do not. A list in python makes very little assumptions about what is inside, it could be pretty much anything, which makes it great as a core component of python. However, if you want to build a more specialized package that gives you extra functionality liked Pandas, then you want to create your own "core" data object and start building extra functionality on top of that. Compared with lists, you can do a lot more with a custom Series object (as witnessed by pulling a single column from a DataFrame and seeing what methods are available to the output).
1
0
1
Why doesn't Pandas build DataFrames directly from lists? Why was such a thing as a series created in the first place? Or: If the data in a DataFrame is actually stored in memory as a collection of Series, why not just use a collection of lists? Yet another way to ask the same question: what's the purpose of Series over lists?
What's the purpose of Series instead of lists in Pandas and Python?
1.2
0
0
221
33,028,390
2015-10-09T01:12:00.000
2
1
1
0
python
33,028,435
1
true
0
0
You should be able to read the file from multiple scripts at once, but it might get really, really slow compared to each script reading its own local copy on the same computer as the script (obviously, how much slower depends on how much reading you are doing). You may want to copy the file over to each machine before processing.
1
0
0
I have a large text file stored in a shared directory on a server in which different other machines have access to that. I'm running various analysis on this text file without changing or updating it. I'd like to know whether I can run different python scripts on different machines in which all of them reading that large text file? None of the scripts make any change to that file, they just need to read it.
Python: Can I read a large text file from different scripts?
1.2
0
0
42
33,028,985
2015-10-09T02:34:00.000
1
0
0
1
python,django,django-rest-framework
33,051,195
1
false
1
0
This turned out to be related to libcurl's default "Expect: 100-continue" header.
1
2
0
I am running a dev server using runserver. It exposes a json POST route. Consistently I'm able to reproduce the following performance artifact - if request payload is <= 1024 bytes it runs in 30ms, but if it is even 1025 bytes it takes more than 1000ms. I've profiled and the profile points to rest_framework/parsers.py JSONParser.parse() -> django/http/request HTTPRequest.read() -> django/core/handlers/wsgi.py LimitedStream.read() -> python2.7/socket.py _fileobject.read() Not sure if there is some buffer issue. I'm using Python 2.7 on Mac os x 10.10.
Sudden performance drop going from 1024 to 1025 bytes
0.197375
0
0
68
33,029,293
2015-10-09T03:13:00.000
0
0
0
0
java,python,subprocess,jython,processbuilder
38,824,000
1
false
1
1
If there is no limit on the length of the string argument used to launch the Python script, you could simply encode the image's binary data into a string and pass that. The main problems you might encounter with this approach are null characters and negative byte values.
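One hedged way to sidestep the null/negative-byte problem is to base64-encode the bytes on the Java side and decode them in Python; the argv convention here is an assumption, not part of the original answer:

```python
# Python side of the idea: the Java process passes base64 text as argv[1].
import base64
import sys

raw_bytes = base64.b64decode(sys.argv[1])   # the original binary frame, no null bytes in transit
print("received %d bytes" % len(raw_bytes))
```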
1
0
0
I have a working program written in Java (a 3d game) and some scripts in Python written with theano to process images. I am trying to capture the frames of the game as it is running and run these scripts on the frames. My current implementation grabs the binary data from each pixel in the frame, saves the frame as a png image and calls the python script (using ProcessBuilder) which opens the image and does its thing. Writing an image to file and then opening it in python is pretty inefficient, so I would like to be able to pass the binary data from Java to Python directly. If I am not mistaken, processBuilder only takes arguments as strings, so does anyone know how I can pass this binary data directly to my python script? Any ideas? Thanks
Passing binary data from java to python
0
0
0
250
33,029,955
2015-10-09T04:32:00.000
1
0
1
0
python,json,eval
33,030,940
1
true
0
0
If you have a Unicode string that contains JSON text, it is always safe (as far as any C code that accepts user input can be) to pass it to json.loads(). The parser itself never executes anything; what you do with the result of json.loads() is up to you, and if you choose to interpret the received data as code, you can.
1
1
0
Is it possible to use json to execute code? For example can I pass a code object into it or something along those lines? I guess my question is how does python evaluate json objects, and can this be used to run code? I want to make sure passing information with json is safe from remote execution.
Python transfer code with json
1.2
0
0
77
33,034,511
2015-10-09T09:20:00.000
1
0
1
0
python,nodes,maya
33,038,931
2
true
0
0
Any two objects can have the same name, but never the same DAG path. In your script, make sure all your ls, listRelatives, etc. calls have the fullPath/longName/long flags set, so you always operate on full DAG paths rather than the possibly conflicting short names.
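A small maya.cmds sketch of working with full paths; the transform/shape traversal is just an example, not the asker's export logic:

```python
import maya.cmds as cmds

# Long/full paths disambiguate nodes that share a short name.
for node in cmds.ls(type="transform", long=True):
    shapes = cmds.listRelatives(node, shapes=True, fullPath=True) or []
    print(node, shapes)
```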
1
0
0
I am writing a script to export alembic caches of animation in a massive project containing lots of Maya files. Our main character has an issue: along the way his eyes somehow ended up with the same name, which creates problems with the alembic export. Does Maya already have some sort of clean-up function that can correct matching names?
Maya Python: fix matching names
1.2
0
0
747
33,034,687
2015-10-09T09:29:00.000
0
0
1
1
python
33,034,977
3
false
0
0
If it's not a multithreaded program, just let it do whatever it needs and then call: raw_input("Press Enter to stop the charade.\n") Maybe it's not exactly what you're looking for, but on the other hand you should not rely on a predefined sleep time.
1
0
0
I want the program to wait about 5 seconds before the console closes after it finishes whatever it does, so the user can read the "Good Bye" message. How can one do this?
How to stop console from exiting
0
0
0
67
33,037,770
2015-10-09T12:01:00.000
0
0
0
1
python,c++,linux,winpdb
33,562,769
1
true
0
0
I finally fixed the issue by using the latest version of PythonQt.
1
0
0
I am using PythonQt to execute a Python script (because I need to call C++ methods from the script). My winpdb version is 1.4.6 and the machine is CentOS 6.5. Now I want to enable debugging in the Python script, so I added rpdb2.start_embedded_debugger('test') inside the script and called the PythonQt.EvalFile() function; the script is now waiting for the debugger. I opened the winpdb UI from a terminal and attached to the debugger. I am able to do "Next", "Step into", etc., and all local variables are shown correctly. But when I try to detach the debugger, it does not detach: the status shows "DETACHING" and nothing happens, and I cannot even close winpdb; the only way to exit is to kill it. If I run the same script file from the terminal (invoking python directly) it works properly and detaches as expected. Looking at the logs I found that when run from the terminal the debug channel is encrypted, but when run from PythonQt the debug channel is NOT encrypted; I am not sure whether this has any relation to the detaching problem. Digging further into rpdb2.py, winpdb hangs on the line self.getSession().getProxy().request_go(fdetach) in request_go(self, fdetach = False). Port 51000 is still in the ESTABLISHED state. Please advise me on this.
Debugger is not detaching from Winpdb
1.2
0
0
96
33,038,821
2015-10-09T12:53:00.000
1
0
0
0
python,tkinter,widget
33,040,441
1
true
0
1
You can place a button in whatever frame you want (with the exception you can't move a widget between toplevel windows). However, the button can't appear in two frames at the same time. It's certainly possible to move the button when you switch frames, though I would either move the button to a common toolbar, or just have two buttons that call the same functions. Moving the button around adds complexity without giving much extra value in return.
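A minimal Tkinter sketch of the two-buttons-one-command suggestion (Python 2 import shown to match the era; the layout is illustrative):

```python
import Tkinter as tk   # on Python 3: import tkinter as tk

def do_the_thing():
    print("shared action")

root = tk.Tk()
frame_a = tk.Frame(root)
frame_b = tk.Frame(root)
# One widget cannot live in two parents, but two buttons can share the same command.
tk.Button(frame_a, text="Action", command=do_the_thing).pack()
tk.Button(frame_b, text="Action", command=do_the_thing).pack()
frame_a.pack()
frame_b.pack()
root.mainloop()
```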
1
0
0
I'm currently working on a simple GUI written using the Tkinter library for Python that makes use of two different frames. With a button I can switch between the two frames making only one of the two visible at a time. There's one specific button that I would require to use in both frames. Is it possible to place it in different frames? Of course I have several back-door solutions to my problem, like creating a button that makes use of the same variables and commands, but what I would like to know is if it is possible to use exactly the same button.
Using the same button in different Tkinter frames
1.2
0
0
154
33,039,884
2015-10-09T13:43:00.000
2
0
0
0
python,machine-learning,scikit-learn,cluster-computing
33,040,012
1
true
0
0
The batch size is defined by batch_size, period. In addition you can set init_size, which is the number of samples drawn just to initialise the process; by default it is 3*batch_size. You can simply set batch_size=100 and init_size=10, and then 10 samples are used to perform the initialisation (k-means is not globally convergent, and there are many techniques for dealing with that at the initialisation stage), while batches of 100 are used during the rest of the algorithm's execution.
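A short scikit-learn sketch with both parameters set explicitly; the data and parameter values are arbitrary:

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

X = np.random.rand(10000, 5)                 # toy data
mbk = MiniBatchKMeans(n_clusters=3,
                      batch_size=100,        # 100 samples per mini-batch during fitting
                      init_size=300)         # samples used only for the initialisation step
mbk.fit(X)
```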
1
1
1
I am using the MiniBatchKMeans() class from scikit-learn. Its documentation says: batch_size : int, optional, default: 100 -- Size of the mini batches. init_size : int, optional, default: 3 * batch_size -- Number of samples to randomly sample for speeding up the initialization (sometimes at the expense of accuracy): the only algorithm is initialized by running a batch KMeans on a random subset of the data. This needs to be larger than n_clusters. I didn't understand this very well, because it seems the final size of the mini batch is 3*batch_size and not the one specified by the batch_size argument. Am I misunderstanding something? If so, can someone explain these two arguments? And if I am right, why are there two arguments at all, since they seem redundant? Thanks!
MiniBatchKMeans Python
1.2
0
0
879
33,040,841
2015-10-09T14:29:00.000
0
0
0
0
python,django,saml-2.0,okta
33,340,454
1
false
1
0
Once again I have the privilege of answering my own question. Here is the solution: Django has a user profile module that is turned on by giving the module location in settings.py, i.e. "AUTH_PROFILE_MODULE = appTitle.UserProfile". The UserProfile needs to be defined in models.py, specifying the user-profile structure your app requires. Now, after running syncdb, Django creates the database table for your user profile, and on that same user profile pysaml adds the value (custom attribute) that comes in the SAML assertion. More explanation of this can be found in the Django documentation too. If anyone still faces any issue, please let me know.
1
1
0
I am trying to authenticate my Django application (written in Python) against Okta as the IdP. I have almost everything configured on the SP and IdP sides. Now I need to pass a custom attribute from the IdP that asserts to the SP whether the user is a publisher, editor or admin, and then save this into the Django database (in the auth_user_groups table). Has anyone tried doing this, or does anyone have an idea about it? I am able to get the custom attribute values via attribute mappings from the IdP, but that only allows me to save the custom attributes on the user table. Please let me know if I have not made my question clear.
Adding custom attributes to django scheema
0
0
0
79
33,040,867
2015-10-09T14:30:00.000
0
1
0
0
python
33,045,585
1
true
0
0
Turns out a boolean was being evaluated against a string. This evaluated to True, so the code went into the if block and purged the target dir.
1
1
0
We have a python script that ftp's (downloads) files. However, target folder contents are deleted when the source is empty. How do you not delete the files in the target dir when the source is empty? We are using shutil.copy2 -- can that be the cause? Are there alternatives that preserve metadata?
Python: shutil.copy2 empties target dir
1.2
0
0
57
33,042,478
2015-10-09T15:51:00.000
-1
0
0
0
python,python-2.7
33,042,717
1
false
0
0
If you're not connected to an access point when running the script and don't have an IP address assigned to your device, socket.getaddrinfo will fail. Maybe it's still connecting when you run the script. The domain name cannot be resolved because you are not connected to the network, so there is no DNS. Does it still fail once you're actually connected to the network? Does curl http://icanhazip.com work at the point where the script fails? Or, if you run ifconfig, does your device have an IP? (I'm assuming you're on a *nix box.)
1
1
0
I'm having a problem with a Python script which should check if the user is connected to a wifi network with a captive portal. Specifically, the script is long-running, attempting to connect to example.org every 60 seconds. The problem is that if the network starts offline (meaning the wifi isn't connected at the start of the script), socket.getaddrinfo will always fail with the error "Name or service not known", even once the wifi is connected, until the Python script is restarted. (This isn't a DNS thing -- all requests fail.) Because both urllib and requests use sockets, it's totally impossible to download an example page once Python gets into this state. Is there a way around this or a way to reset sockets so it works properly once the network fails? To be clear, here's a repro: Disconnect wifi Run an interactive Python session import urllib and urllib.open("http://stackoverflow.com/") -- fails as expected Reconnect wifi urllib.open("http://example.com/") Expected: Returned HTML from example.com Actual: socket.gaierror: [Errno -2] Name or service not known
socket.getaddrinfo fails if network started offline
-0.197375
0
1
302
33,043,704
2015-10-09T17:06:00.000
1
0
0
0
python,hadoop,hive,hbase,bigdata
33,043,867
2
false
0
0
If the data is already in CSV, or in any format on the Linux file system that Pig can understand, just do a hadoop fs -copyFromLocal to copy it into HDFS. If you want to read/process the raw H5 file format using Python on HDFS, look at Hadoop Streaming (map/reduce). That said, Python can handle 2GB on a decent Linux machine; I'm not sure you need Hadoop for this at all.
2
0
1
I have downloaded a subset of million song data set which is about 2GB. However, the data is broken down into folders and sub folders. In the sub-folder they are all in several 'H5 file' format. I understand it can be read using Python. But I do not know how to extract and load then into HDFS so I can run some data analysis in Pig. Do I extract them as CSV and load to Hbase or Hive ? It would help if someone can point me to right resource.
How to load big datasets like million song dataset into BigData HDFS or Hbase or Hive?
0.099668
0
0
726
33,043,704
2015-10-09T17:06:00.000
0
0
0
0
python,hadoop,hive,hbase,bigdata
50,411,499
2
false
0
0
Don't load that many small files into HDFS; Hadoop doesn't handle lots of small files well. Each small file incurs overhead because the block size (usually 64MB) is much bigger than the file. The million song dataset files are no larger than about 1MB each. I want to do this myself, so I'm thinking about solutions; my approach will be to aggregate the data somehow before importing it into HDFS. The blog post "The Small Files Problem" from Cloudera may shed some light.
2
0
1
I have downloaded a subset of million song data set which is about 2GB. However, the data is broken down into folders and sub folders. In the sub-folder they are all in several 'H5 file' format. I understand it can be read using Python. But I do not know how to extract and load then into HDFS so I can run some data analysis in Pig. Do I extract them as CSV and load to Hbase or Hive ? It would help if someone can point me to right resource.
How to load big datasets like million song dataset into BigData HDFS or Hbase or Hive?
0
0
0
726
33,046,387
2015-10-09T20:11:00.000
0
0
1
0
python,list
33,046,426
1
false
0
0
One reason is that it makes it easier to extract a sublist of a certain length. For instance, a[i:i+n] will extract the sublist of length n starting at element i of the list a. This convention also dovetails with the way range(...) and xrange(...) work.
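A quick illustration of the half-open convention:

```python
lis = [10, 20, 30, 40, 50]
print(lis[1:3])           # [20, 30]  -> two elements, 3 - 1 == 2
print(lis[1:1 + 3])       # [20, 30, 40]  -> start at index 1, take 3
print(list(range(1, 3)))  # [1, 2]  -> same convention as slicing
```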
1
0
0
I was wondering about the following: when I have a list lis in Python, why does lis[1:3] only include the 2nd and 3rd elements of the list rather than the 2nd, 3rd and 4th?
List Indexing with Python
0
0
0
39
33,049,137
2015-10-10T00:55:00.000
1
1
1
0
python,cython,pyserial
33,049,637
1
false
0
0
It is doubtful that porting would remedy the problem you are encountering. The issue with using a UART is the relatively small OS-provided buffer for the incoming data. As an alternative, you might try one of the Ethernet/serial converters to do the serial I/O through an Ethernet port; the advantage of this approach is the network driver's much larger buffer. If your application can't readily ingest the data at the rate it's arriving, though, no amount of buffering will help. In that case, if you can't accept some packet loss, you should try to lower the data rate.
1
1
0
I have a application where it is required to read data very fast from a COM-Port. The data arrives with 10kHz (1.25MBaud) in 8 byte packages. Therefore the data capturing (getting the data from the COM-Port buffer) and processing must be as fast as possible. I think my code is quite optimised but I still loose sometimes some data packages because the serial buffer overflows. Because of this I thought of porting the pyserial package (or at least the parts I use) to Cython. Is it possible to port the pyserial package to Cython? And even more important: would there be a speed improvement if the code is written in Cython? Are there other, possibly easier methods, to improve the performance?
Is it possible and useful to port pyserial to cython
0.197375
0
0
610
33,054,711
2015-10-10T13:51:00.000
0
0
0
0
python-2.7,opencv
67,288,557
3
false
0
0
Replace foreground = np.absolute(frame - background) with foreground = cv2.absdiff(frame, background). The frames are uint8 arrays, so the plain subtraction wraps around instead of going negative, which is why np.absolute does not give a real difference image; cv2.absdiff computes the saturated absolute difference correctly.
1
1
1
The following program displays 'foreground' as completely black rather than showing 'frame'. I also checked that all the values in 'frame' are equal to the values in 'foreground'; they have the same channels, data type, etc. I am using Python 2.7.6 and OpenCV version 2.4.8.

import cv2
import numpy as np

def subtractBackground(frame,background):
    foreground = np.absolute(frame - background)
    foreground = foreground >= 0
    foreground = foreground.astype(int)
    foreground = foreground * frame
    cv2.imshow("foreground",foreground)
    return foreground

def main():
    cap = cv2.VideoCapture(0)
    dump,background = cap.read()
    while cap.isOpened():
        dump,frame = cap.read()
        frameCopy = subtractBackground(frame,background)
        cv2.imshow('Live',frame)
        k = cv2.waitKey(10)
        if k == 32:
            break

if __name__ == '__main__':
    main()
Subtracting Background From Image using Opencv in Python
0
0
0
3,991
33,055,691
2015-10-10T15:35:00.000
0
0
1
0
python,azure,apache-spark,azure-hdinsight,jupyter
38,753,037
1
false
0
0
Just saw this question way too late, but I will venture that you are using an unsupported browser. Please use Chrome to connect to Jupyter.
1
0
1
I am trying to run a Python module using a Jupyter Notebook on Azure HDInsight, but I continue to get the following error message: A connection to the notebook server could not be established. The notebook will continue trying to reconnect, but until it does, you will NOT be able to run code. Check your network connection or notebook server configuration. I have an Azure subscription, created a cluster, created a storage blob, and have created a Jupyter Notebook. I am successfully logged into the cluster, so I am not sure why I cannot connect to the notebook. Any insight into this problem would be hugely appreciated.
Cannot connect to Jupyter Notebook server in Azure HDInsight
0
0
0
2,411
33,057,049
2015-10-10T17:42:00.000
3
0
0
0
python,resize,pygtk
33,057,480
1
true
0
1
For the first question: Gtk.Window.resize(width, height) should work. If you use set_size_request(width, height), you cannot resize your window smaller than these values. For the second question: Gtk.Window.set_resizable(False)
1
1
0
I want to set a window's size, and then be able to resize it while the program is running. I've been able to make the window large, but I can't resize it smaller than the original set size. For a different project, I would also like to know how to make it so the window is not resizable at all.
PYGTK Resizing permissions
1.2
0
0
41
33,059,170
2015-10-10T21:31:00.000
0
0
1
0
python,bioinformatics,phylogeny
36,169,560
1
false
0
0
Without resorting to anything fancier than a text editor, you could find your "tip" in tree1 and insert the string that is tree2 at that point (Newick trees being nested sets and all).
1
2
0
I am working on phylogenies by using Python libraries (Bio.Phylo and DendroPy). I have to import 2 trees in Newick format (this is obviously not the difficult part) and join them together, more precisely I have to add one tree at one tip/leaf of another. I have tried with add_child and new_child methods from DendroPy, but without success. How would I solve this issue?
Join 2 trees using Python (dendroPy or other libraries)
0
0
1
91
33,060,256
2015-10-11T00:05:00.000
0
1
0
0
javascript,python,html,raspberry-pi,raspberry-pi2
33,235,245
3
false
1
0
Maybe you can try creating a Node.js script that opens a websocket. You can connect to the websocket from Python, and that way you are able to send data from your website to Node.js and from Node.js to Python in real time. Have a nice day.
1
1
0
I want to do the following: have a button on an HTML page so that once it is pressed a message is sent to a Python script I'm running. For example, once the button is pressed some boolean, call it bool_1, is set to true; that boolean is then sent to my Python code or written to a text file, and in my Python code I want to do something depending on that value. Is there a way to do this? I've been looking at many things but they haven't worked. I know that in JavaScript you can't write text files because of security issues. My Python code runs constantly, computing live values from sensors.
Sending a message between HTML and python on the raspberry pi
0
0
0
1,183