Dataset columns (type and observed value range or string-length range):
Q_Id: int64, 337 to 49.3M
CreationDate: string, lengths 23 to 23
Users Score: int64, -42 to 1.15k
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
Tags: string, lengths 6 to 105
A_Id: int64, 518 to 72.5M
AnswerCount: int64, 1 to 64
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: string, lengths 6 to 11.6k
Available Count: int64, 1 to 31
Q_Score: int64, 0 to 6.79k
Data Science and Machine Learning: int64, 0 to 1
Question: string, lengths 15 to 29k
Title: string, lengths 11 to 150
Score: float64, -1 to 1.2
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 8 to 6.81M
33,382,383
2015-10-28T03:24:00.000
2
0
1
0
python,file-writing
33,382,431
2
false
0
0
Option #3 is usually the best; normal file objects are buffered, so you won't be performing excessive system calls by writing as you receive data to write. Alternatively, you can mix option #2 and #3; don't build intermediate lists and call .writelines on them, make the code that would produce said lists a generator function (having it yield values as it goes) or generator expression, and pass that to .writelines. It's functionally equivalent to #3 in most cases, but it pushes the work of iterating the generator to the C layer, removing a lot of Python byte code processing overhead. It's usually meaningless in the context of file I/O costs though.
2
1
0
I need to auto-generate a somewhat large Makefile using a Python script. The number of lines is expected to be relatively large. The routine for writing to the file is composed of nested loops, whole bunch of conditions etc. My options: Start with an empty string and keep appending the lines to it and finally write the huge string to the file using file.write (pro: only single write operation, con: huge string will take up memory) Start with an empty list and keep appending the lines to it and finally use file.writelines (pro: single write operation (?), con: huge list takes up memory) Write each line to the file as it is constructed (pro: no large memory consumed, con: huge number of write operations) What is the idiomatic/recommended way of writing large number of lines to a file?
Pythonic way to write a large number of lines to a file
0.197375
0
0
1,175
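A minimal sketch of the generator-plus-writelines approach described in the answer above; the generate_lines() function, its targets argument and the output path are hypothetical stand-ins for the question's Makefile-producing loops:

```python
def generate_lines(targets):
    """Yield Makefile lines one at a time instead of building a big list or string."""
    yield "all: " + " ".join(targets) + "\n"
    for target in targets:
        yield "%s:\n" % target
        yield "\techo building %s\n" % target

with open("Makefile", "w") as fh:
    # writelines consumes the generator lazily; no intermediate list is built.
    fh.writelines(generate_lines(["foo", "bar", "baz"]))
```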
33,383,097
2015-10-28T04:47:00.000
0
0
1
0
python,python-2.7
33,383,137
1
false
0
0
Operator overloading means having an "operator" (like * or +) doing different things depending on context. Method overloading means having multiple methods of the same name in the same class that are differentiated by their parameter signature. In your Python example, operator overloading is implemented by providing specially named methods (which themselves are not overloaded, as long as you have just one of them per class for each name).
1
1
0
For example, whenever we overload some of the operators in Python like __str__, __mul__, __add__, etc. (obviously with the underscores), I think these are methods, so shouldn't it be called method overloading instead? Correct me if I am wrong.
why it is called operator overloading and not method overloading since we are overloading methods in python?
0
0
0
35
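A small illustration of the point made in the answer above: the specially named methods are ordinary methods (each defined only once), and defining them is what gives + and * their behaviour for a class. The Vector class here is just an invented example:

```python
class Vector(object):
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __add__(self, other):
        # Called for "v1 + v2"; one method per name, so nothing is method-overloaded.
        return Vector(self.x + other.x, self.y + other.y)

    def __mul__(self, scalar):
        # Called for "v * 3".
        return Vector(self.x * scalar, self.y * scalar)

v = Vector(1, 2) + Vector(3, 4)   # uses __add__
w = Vector(1, 2) * 3              # uses __mul__
```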
33,383,840
2015-10-28T05:53:00.000
200
0
0
0
javascript,python,function,lexical
33,383,865
10
true
1
0
Python's pass mainly exists because in Python whitespace matters within a block. In Javascript, the equivalent would be putting nothing within the block, i.e. {}.
2
141
0
I am looking for a JavaScript equivalent of the Python: pass statement that does not run the function of the ... notation? Is there such a thing in JavaScript?
Is there a JavaScript equivalent of the Python pass statement that does nothing?
1.2
0
0
105,967
33,383,840
2015-10-28T05:53:00.000
6
0
0
0
javascript,python,function,lexical
61,111,819
10
false
1
0
Javascript does not have a python pass equivalent, unfortunately. For example, it is not possible in javascript to do something like this: process.env.DEV ? console.log('Connected..') : pass Instead, we must do this: if (process.env.DEV) console.log('Connected..') The advantage of using the pass statement, among others, is that in the course of the development process we can evolve from the above ternary operator example in this case without having to turn it into a full if statement.
2
141
0
I am looking for a JavaScript equivalent of the Python: pass statement that does not run the function of the ... notation? Is there such a thing in JavaScript?
Is there a JavaScript equivalent of the Python pass statement that does nothing?
1
0
0
105,967
33,384,611
2015-10-28T06:47:00.000
2
0
0
1
python-2.7,logging,tornado,log-rotation
33,393,197
1
true
0
0
Tornado's logging just uses the python logging module directly; it's not a separate system. Tornado defines some command-line flags to configure logging in simple ways, but if you want anything else you can do it directly with the logging module. A timed rotation mode is being added in Tornado 4.3 (--log-rotate-mode=time), and until then you can use logging.handlers.TimedRotatingFileHandler.
1
1
0
I am trying to log the requests to my tornado server to separate file, and I want to make a log rotation for each day. I want to use the tornado.log function and not the python logging. I have defined the log path in my main class and it is logging properly I want to know if I can do a log rotate. Does tornado log allow us to log things based on type like log4j Thanks
Tornado log rotation for each day
1.2
0
0
826
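A hedged sketch of the logging.handlers.TimedRotatingFileHandler route mentioned in the answer above, attached to Tornado's access logger; the file name and rotation settings are illustrative only:

```python
import logging
import logging.handlers

# Tornado's request logging goes through the standard logging module,
# so a timed rotating handler can be attached to it directly.
handler = logging.handlers.TimedRotatingFileHandler(
    "tornado-access.log", when="midnight", backupCount=7)
handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))

access_log = logging.getLogger("tornado.access")
access_log.setLevel(logging.INFO)
access_log.addHandler(handler)
```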
33,385,618
2015-10-28T07:53:00.000
2
0
0
0
python,django,django-models,django-migrations
44,014,653
3
false
1
0
Quoting the question: "So far, I've tried different things, all without any success: used the managed=False Meta option on both Models." That option (the managed = False attribute on the model's meta options) seems to meet the requirements. If not, you'll need to expand the question to say exactly what is special about your model that managed = False doesn't do the job.
2
17
0
In my django application (django 1.8) I'm using two databases, one 'default' which is MySQL, and another one which is a schemaless, read-only database. I've two models which are accessing this database, and I'd like to exclude these two models permanently from data and schema migrations: makemigrations should never detect any changes, and create migrations for them migrate should never complain about missing migrations for that app So far, I've tried different things, all without any success: used the managed=False Meta option on both Models added a allow_migrate method to my router which returns False for both models Does anyone have an example of how this scenario can be achieved? Thanks for your help!
django: exclude models from migrations
0.132549
1
0
7,061
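For reference, a minimal model definition using the managed = False meta option discussed in the answer above; the model name and table name are made up:

```python
from django.db import models

class ExternalRecord(models.Model):
    name = models.CharField(max_length=100)

    class Meta:
        managed = False            # Django will not create or alter this table
        db_table = "external_record"
```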
33,385,618
2015-10-28T07:53:00.000
1
0
0
0
python,django,django-models,django-migrations
68,460,381
3
false
1
0
You have the correct solution: used the managed=False Meta option on both Models It may appear that it is not working but it is likely that you are incorrectly preempting the final result when you see - Create model xxx for models with managed = False when running makemigrations. How have you been checking/confirming that migrations are being made? makemigrations will still print to terminal - Create model xxx and create code in the migration file but those migrations will not actually result in any SQL code or appear in Running migrations: when you run migrate.
2
17
0
In my django application (django 1.8) I'm using two databases, one 'default' which is MySQL, and another one which is a schemaless, read-only database. I've two models which are accessing this database, and I'd like to exclude these two models permanently from data and schema migrations: makemigrations should never detect any changes, and create migrations for them migrate should never complain about missing migrations for that app So far, I've tried different things, all without any success: used the managed=False Meta option on both Models added a allow_migrate method to my router which returns False for both models Does anyone have an example of how this scenario can be achieved? Thanks for your help!
django: exclude models from migrations
0.066568
1
0
7,061
33,387,900
2015-10-28T09:57:00.000
0
0
1
0
python,regex,python-2.7,split,string-split
33,438,752
2
false
0
0
You could split and then join the Dr/Mr/... abbreviations back again. It doesn't need complicated regexes and could be faster (you should benchmark it to choose the best option).
1
1
0
Good morning, I found multiple threads dealing with splitting strings with multiple delimiters, but not with one delimiter and multiple conditions. I want to split the following strings by sentences: desc = Dr. Anna Pytlik is an expert in conservative and aesthetic dentistry. She speaks both English and Polish. If I do: [t.split('. ') for t in desc] I get: ['Dr', 'Anna Pytlik is an expert in conservative and aesthetic dentistry', 'She speaks both English and Polish.'] I don't want to split the first dot after 'Dr'. How can I add a list of substrings in which case the .split('. ') should not apply? Thank you!
Split a string with one delimiter but multiple conditions
0
0
0
208
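One way to implement the split-then-rejoin idea from the answer above; the ABBREVIATIONS list is an assumption and would need to match the titles that actually occur in the data:

```python
ABBREVIATIONS = ("Dr", "Mr", "Mrs", "Ms")

def split_sentences(text):
    parts = text.split(". ")
    merged = []
    for part in parts:
        # If the previous fragment ended with a known title, glue this one back on.
        if merged and merged[-1].rsplit(" ", 1)[-1] in ABBREVIATIONS:
            merged[-1] += ". " + part
        else:
            merged.append(part)
    return merged

print(split_sentences("Dr. Anna Pytlik is an expert in conservative and "
                      "aesthetic dentistry. She speaks both English and Polish."))
```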
33,399,351
2015-10-28T18:55:00.000
0
0
0
0
python,user-interface,checkbox,event-handling,kivy
33,402,193
1
true
0
1
The event raised when the state of the checkbox is changed is on_active. If you have a problem with it, post a simple runnable example demonstrating the issue.
1
0
0
I can not find which is the main event for the checkbox in the kv file. I tried on_checkbox_active but raise an error. I tried on_active but does nothing (and do not raise any error) on_release, on_press but obviously give me an error. this is my basic test code line: on_active: print("hello") What is the event that run when click on a checkbox? thank you guys
Why Kivy checkbox active event not work?
1.2
0
0
730
33,399,526
2015-10-28T19:05:00.000
0
0
0
1
jquery,python,google-app-engine
33,405,768
1
false
1
0
If you're not seeing anything in your server logs about the error, that suggests to me that you might have a configuration error in one of your .yaml files. Are GET requests working? Are you sure that you are sending your POST requests to an endpoint that is handled by your application? Check for typos in your JavaScript and application Route definitions, and check for a catch-all request handler (e.g. /*) that might be receiving the request and failing to respond. Sharing the contents of your app.yaml, your server-side URL routes, and a snippet of your JavaScript would really help us to help you.
1
0
0
I am getting error 500 on every second POST request made from a browser (chrome and firefox) irrespective of whether it is a Jquery Post or Form submissions, app engine is alternating between error 500, and successful post. The error 500 are not appearing anyway in the logs. I have tested this with over 5 different post handlers, the errors are only occurring on production not on the Local SDK server. Note that the requests are perfectly successful when made from a python script using the requests module.
App Engine Returning Error 500 on Post Requests from
0
0
0
150
33,403,197
2015-10-28T23:11:00.000
1
0
0
0
python,django
33,403,212
1
false
1
0
You'll have to create a new Django project, and move app2 to that project.
1
0
0
I have a django project. I have created 2 apps(app1 and app2) under the project. Each app has its own urls.py and views.py. Settings.py is under the project folder. What I want to do is: When I edit the views.py file for app1. And if I save the file with an incorrect indentation. It brings down app2 as well. I want to make them independent, so that no matter what change I do local to app1 it should not affect app2. Is that possible ?
Django: Remove dependency between apps in a project
0.197375
0
0
284
33,404,837
2015-10-29T02:15:00.000
1
0
0
0
database,odbc,python-db-api
33,404,879
1
false
0
0
There are other programming languages besides Python -- Java, JavaScript, Ruby, Perl, COBOL, Lisp, Smalltalk, Go, R, and many, many others. None of them can use the Python DB-API, but all of them could potentially use ODBC. Python offers ODBC for people who come from other languages and already know ODBC, and its own DB-API for people who only know Python and aren't interested in learning that standard. Also, the Python DB-API isn't really a standard, because it hasn't been accepted by any standards body (afaik).
1
1
0
Well, I tried to understand Open Database Connectivity and Python DB-API, but I can't. ODBC is some kind of standard and Python DB-API is another standard, but why not use just one standard? Or maybe I got these terms wrong. Can someone please explain these terms and the difference between them as some of the explanations I read were too technical? Thank you
Difference between ODBC and Python DB-API?
0.197375
1
0
342
33,405,411
2015-10-29T03:21:00.000
3
0
1
1
python,eclipse,pydev
33,405,546
1
true
0
0
First check whether Python 3.5 is auto-configured in Eclipse. Go to Window > Preferences; on the Preferences window you will find the PyDev configuration in the left pane under PyDev > Interpreters > Python Interpreter. If Python 3.5 is not listed you can either add it using "Quick Auto-Config" or, to add it manually, click "New", give the interpreter a name (e.g. Py3.5) and browse to the path of the Python executable (in your case inside /Library/Frameworks/Python.framework/). Once you have configured your interpreter in PyDev you can change the interpreter of your project: right-click on your project > Properties, and in the left pane click PyDev - Interpreter. There, select the name of the Python interpreter (Py3.5) you previously configured; you can also select the grammar version.
1
1
0
In eclipse, I'm used to configuring the buildpath for versions of java installed on my computer. I recently added Python 3.5 to my computer and want to use it in place of the default 2.7 that Macs automatically include. How can I configure my build path on PyDev, if there is such as concept to begin with, for the plugin? I've found that Python 3.5 is located at/Library/Frameworks/Python.framework/; how can I now change PyDev to use it?
How to change build path on PyDev
1.2
0
0
1,264
33,405,759
2015-10-29T04:03:00.000
2
0
1
0
python,ordereddictionary
33,405,789
2
false
0
0
From the Python documentation: If a new entry overwrites an existing entry, the original insertion position is left unchanged. Deleting an entry and reinserting it will move it to the end.
1
0
0
Should I remove item at the index and add item at the index? Where should I look at to find the source for OrderedDict class?
Replacing an element in an OrderedDict?
0.197375
0
0
3,855
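The documented behaviour quoted in the answer above can be checked directly; this snippet shows that plain assignment keeps the original position while delete-and-reinsert moves the key to the end:

```python
from collections import OrderedDict

d = OrderedDict([("a", 1), ("b", 2), ("c", 3)])

d["b"] = 20                 # overwrite in place: order stays a, b, c
print(list(d.items()))      # [('a', 1), ('b', 20), ('c', 3)]

del d["b"]                  # delete and reinsert: key moves to the end
d["b"] = 200
print(list(d.items()))      # [('a', 1), ('c', 3), ('b', 200)]
```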
33,408,408
2015-10-29T07:39:00.000
0
0
0
0
python,django,git,deployment,tornado
33,409,062
1
false
1
0
My first rule of deployment is "whatever works". Every production environment has different requirements. But to give opinions on your questions: Not everything should be in your Python project. Perhaps there is a way to do it, but I think it's using the wrong hammer. You can create a separate Git repo that handles configuration and asset files for your production deployment (this does not even be managed by Git if you don't care about old, irrelevant configuration files). This does not have to be a Python project, just the files for the production deployment. You may optionally put a Python script or two in here (or just a README.txt or fab files or a Buildout config) to automate tasks such as unpacking your platter or copying config files around. It's tempting (and possible) to put production config things in your main Git repo. This is even suggested by apps that create boilerplate files for development and production configuration. This doesn't mean it's the best way to do things though. My rule is that the main Git repo is "development only". It's cloned by developers who are setting up and working in development environments. It conflates a Python project far too much to try and be an Python application and also be a place to manage a production system, IMHO. Production is managed separately. Sometimes by people different from the developers or at least the developer is wearing a different hat when thinking about a production deployment. This way you can also have a small, clean repo that tracks just changes to your production system. Playing with symlinks within a single deployment that represents different builds is an extra layer of confusion. And the impetus to do so comes from trying to do everything from a single Python project. Deploy your python application to something like /var/myapp/build-2015-10-29/. Then create a symlink at /var/myapp/current/ that points to this location. This way you can create a full deployment at /var/myapp/build-2015-11-05/ and tweak the config to start on a separate port, bring the app up and ensure everything works, then just switch from the symlink from the old build to the new build with minimal downtime.
1
1
0
This question is mostly about the technical details + some best practices of how to efficiently deploy a python web app that's built using platter. Taking Django for instance, I have a project that's already built into a tarball distribution. This includes all wheels of all deps + the package of the app itself. My repo directory also contains some other files that need to be distributed with the deployed code, such as: manage.py, a fabfile package with fabric utils, and some configuration files (for supervisor, nginx, etc). So my questions are: How can I wrap these extra files into the distribution that contains the project? If I simply use git to clone/pull the project on the server I have these files, but then I have duplicate of the source code being both in the project and zipped in the tarball. How can I avoid that? Committing the tarball into a separate repo? Perhaps the duplication is not so bad, and I'll end up with multiple tarballs in my dist/ directory and only one symlinked to the current from which I deploy? Same goes for a Tornado based app.
How to deploy a Django/Tornado based web app that's built with platter?
0
0
0
167
33,408,497
2015-10-29T07:44:00.000
0
0
0
0
python,django,postgresql
33,408,635
3
false
1
0
If there is no other use of EventLog table, just put entries in Entries table and put a date of execution in that. If the date is postponed , update the date of execution in Event table itself. This also assumes that you dont need the previous dates( as your requirement says that you just need to show the events happening on that day)
1
1
0
I am stuck in a scenario where I am making an API that returns events in the given a month and year combination. The Database structure: Event - has basic details of an event. EventLog - has a foreignkey to the event, a from_date and a to_date. When Events are created, an entry is made to both Event table and EventLog table (with from_date set as null). When an Event is postponed an entry is made to EventLog with previous date and current date. Now given a date I want to show events occurring on that date as well as the postponed events with latest dates that were supposed to happen on that day. How should I go about it without making too many calls to the database ?
How to get events which can be postponed for a given date from a table with change log and a table for event in Django?
0
0
0
103
33,409,424
2015-10-29T08:39:00.000
0
0
1
0
python,anaconda
64,629,354
7
false
0
0
when calling any function from another file, it should be noted to not import any library inside the function
4
11
0
I've written my own mail.py module in spider (anaconda). I want to import this py file in other python (spider) files just by 'import mail' I searched on the internet and couldn't find a clearly solution.
Import own .py files in anaconda spyder
0
0
1
43,941
33,409,424
2015-10-29T08:39:00.000
1
0
1
0
python,anaconda
69,061,619
7
false
0
0
Searched for an answer to this question too. To use a .py file as an import module from your main folder, you need to place both files in one folder or append a path to the location. If you store both files in one folder, then check the working directory in the upper right corner of the Spyder interface. With the wrong working directory you will see a ModuleNotFoundError.
4
11
0
I've written my own mail.py module in spider (anaconda). I want to import this py file in other python (spider) files just by 'import mail' I searched on the internet and couldn't find a clearly solution.
Import own .py files in anaconda spyder
0.028564
0
1
43,941
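A small sketch of the "append a path to the location" option from the answer above; the directory path is of course a placeholder:

```python
import sys

# Make the folder that contains mail.py importable for this session.
sys.path.append(r"C:\Users\me\my_scripts")   # placeholder path

import mail   # now resolves to C:\Users\me\my_scripts\mail.py
```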
33,409,424
2015-10-29T08:39:00.000
0
0
1
0
python,anaconda
33,413,809
7
false
0
0
There are many options, e.g.: Place the mail.py file alongside the other Python files (this works because the current working dir is on the PYTHONPATH). Save a copy of mail.py in the Anaconda Python environment's "/lib/site-packages" folder so it will be available for any Python script using that environment.
4
11
0
I've written my own mail.py module in spider (anaconda). I want to import this py file in other python (spider) files just by 'import mail' I searched on the internet and couldn't find a clearly solution.
Import own .py files in anaconda spyder
0
0
1
43,941
33,409,424
2015-10-29T08:39:00.000
3
0
1
0
python,anaconda
56,931,134
7
false
0
0
I had the same problem: my files were in the same folder, yet it was throwing an error while importing "to_be_imported_file.py". I had to run "to_be_imported_file.py" separately before importing it into another file. I hope it works for you too.
4
11
0
I've written my own mail.py module in spider (anaconda). I want to import this py file in other python (spider) files just by 'import mail' I searched on the internet and couldn't find a clearly solution.
Import own .py files in anaconda spyder
0.085505
0
1
43,941
33,415,269
2015-10-29T13:14:00.000
1
0
1
0
python,regex,string,file,split
33,415,597
2
true
0
0
I see three options here, from the most memory-consuming to the least: split will create a copy of your file as a list of strings, meaning additional 400 MB used. Easy to implement, takes RAM. Use re or simply iterate over a string and memorize \n positions: for i, c in enumerate(s): if c == '\n': newlines.append(i+1). The same as point 2, but with the string stored as a file on HDD. Slow but really memory efficient, also addressing the disadvantage of Python strings - they're immutable, and if one wants to do some changes, interpreter will create a copy. Files don't suffer from this, allowing in-place operations without loading the whole file at all. I would also suggest to encapsulate solutions 2 or 3 into a separate class in order to keep newline indexes and the string contents consistent. Proxy pattern and the idea of lazy evaluation would fit here, I think.
1
3
0
I have a large file (400+ MB) that I'm reading in from S3 using get_contents_as_string(), which means that I end up with the entire file in memory as a string. I'm running several other memory-intensive operations in parallel, so I need a memory-efficient way of splitting the resulting string into chunks by line number. Is split() efficient enough? Or is something like re.finditer() a better way to go?
Best way to chunk a large string by line
1.2
0
0
183
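A rough sketch of option 2 from the accepted answer above (recording newline offsets instead of calling split), so chunks can later be sliced out of the big string on demand; the sample string is just for illustration:

```python
def newline_offsets(s):
    """Return the index just past each '\n' in s, without copying the data."""
    offsets = [0]
    for i, c in enumerate(s):
        if c == "\n":
            offsets.append(i + 1)
    return offsets

big = "line one\nline two\nline three\n"
offs = newline_offsets(big)

# Slice out lines 0..1 as a single chunk without splitting the whole string:
chunk = big[offs[0]:offs[2]]   # "line one\nline two\n"
```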
33,415,788
2015-10-29T13:37:00.000
1
0
1
0
python,multithreading
33,418,099
1
true
0
0
An RLock is used by default to prevent the very thing you're trying to do. In order to guarantee that the predicate is protected, locks are acquired and conditions are waited on in the same thread. Using an RLock guarantees this. When using a Lock with a condition, there's no implicit guarantee that the predicate is protected since the locking thread can be mucking with it in parallel.
1
1
0
threading.Condition takes a lock as an argument, but if none is specified, it uses a threading.RLock by default. I found this out when I called acquire on a condition variable in one thread, and then handed it over to another thread to wait on it. This fails with an RLock, so the solution is to use a normal lock. What's the rationale behind using RLock by default?
Why does Python's threading.Condition use an RLock by default?
1.2
0
0
280
33,417,496
2015-10-29T14:50:00.000
7
0
1
0
python
33,417,553
5
true
0
0
In general, the risk with using not x instead of x==0 is that you might match another kind of value that is also falsey (for instance, None or an empty list). In this case, since x must be a number, it is safe to use not x to mean x==0. Use whichever seems more readable. To me, the first version looks a little odd, because I expect the results of an arithmetic operation to be treated like a number, so I would prefer the second version. But falseyness is there for convenience, and there are lots of circumstances it makes sense to make use of it.
3
4
0
I really thought I'd have found something on this, and maybe it's out there and I'm missing it. If that's the case I apologize and I'll close the question. I'm checking to see if a modulo operation returns a result of zero, and I was wondering which of these is "better" (more pythonic, faster, whatever): if not count % mod OR if count % mod == 0 I guess I should clarify and say that I have a very good understanding of truthy and falsey values, I just wanted to know if there was a concrete reason to use one over the other. Especially considering this is always going to be a number (otherwise the % operator would throw a TypeError).
Should I use 'not x' or 'x == 0' to check to see if the result of a modulo operation is zero in python
1.2
0
0
306
33,417,496
2015-10-29T14:50:00.000
-1
0
1
0
python
33,417,573
5
false
0
0
I would say use x==0 as not x would also evaluate to true if x = False
3
4
0
I really thought I'd have found something on this, and maybe it's out there and I'm missing it. If that's the case I apologize and I'll close the question. I'm checking to see if a modulo operation returns a result of zero, and I was wondering which of these is "better" (more pythonic, faster, whatever): if not count % mod OR if count % mod == 0 I guess I should clarify and say that I have a very good understanding of truthy and falsey values, I just wanted to know if there was a concrete reason to use one over the other. Especially considering this is always going to be a number (otherwise the % operator would throw a TypeError).
Should I use 'not x' or 'x == 0' to check to see if the result of a modulo operation is zero in python
-0.039979
0
0
306
33,417,496
2015-10-29T14:50:00.000
0
0
1
0
python
33,417,633
5
false
0
0
Notice the differences: "if not foo" would be true for foo = 0, for foo = "" and for foo = None, whereas "if foo == 0" would be true for foo = 0 but not for foo = "" or foo = None.
3
4
0
I really thought I'd have found something on this, and maybe it's out there and I'm missing it. If that's the case I apologize and I'll close the question. I'm checking to see if a modulo operation returns a result of zero, and I was wondering which of these is "better" (more pythonic, faster, whatever): if not count % mod OR if count % mod == 0 I guess I should clarify and say that I have a very good understanding of truthy and falsey values, I just wanted to know if there was a concrete reason to use one over the other. Especially considering this is always going to be a number (otherwise the % operator would throw a TypeError).
Should I use 'not x' or 'x == 0' to check to see if the result of a modulo operation is zero in python
0
0
0
306
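A quick interactive check of the difference described in the answers above:

```python
for value in (0, 0.0, False, "", None, []):
    print("%r: not value -> %s, value == 0 -> %s" % (value, not value, value == 0))
# 0, 0.0 and False are falsey AND equal to 0;
# "", None and [] are falsey but NOT equal to 0.
```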
33,417,670
2015-10-29T14:58:00.000
2
0
0
0
python,flask
33,418,843
1
true
1
0
Yes, just access current_app. This is the way to do it. The before_first_request callbacks run inside the app context.
1
0
0
I want the function I pass to before_first_request_funcs the ability to access app.config object. Can I pass an argument to the function somehow? Access the "current app object" (it is not really global and I can just access it, right?)
pass arguments to `before_first_request_funcs` in flask
1.2
0
0
262
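A minimal sketch of the current_app approach from the answer above, using the @app.before_first_request decorator (the public wrapper around before_first_request_funcs); the SOME_SETTING key is a placeholder, and note this hook has been removed in recent Flask releases, so it matches the Flask versions of the question's era:

```python
from flask import Flask, current_app

app = Flask(__name__)
app.config["SOME_SETTING"] = "value"   # placeholder config entry

@app.before_first_request
def init_once():
    # Runs inside the application context, so current_app is the running app.
    print(current_app.config["SOME_SETTING"])
```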
33,418,316
2015-10-29T15:26:00.000
0
0
1
0
python,list
33,418,551
5
false
0
0
You don't need to "delete" the node, just "skip" it. That is, change Node1's next member to the second Node2. Edit your question if you would like specific code examples (which are the norm for this site).
2
2
0
I was wondering if any of you could give me a walk through on how to remove an element from a linked list in python, I'm not asking for code but just kinda a pseudo algorithm in english. for example I have the linked list of 1 -> 2 -> 2 -> 3 -> 4 and I want to remove one of the 2's how would i do that? I thought of traversing through the linked list, checking to see if the data of one of the nodes is equal to the data of the node after it, if it is remove it. But I'm having trouble on the removing part. Thanks!
removing an element from a linked list in python
0
0
0
3,800
33,418,316
2015-10-29T15:26:00.000
0
0
1
0
python,list
33,418,852
5
false
0
0
You can do something like: if element.next.value == element.value: element.next = element.next.next Just be careful to free the memory if you are programming this in C/C++ or another language that does not have GC
2
2
0
I was wondering if any of you could give me a walk through on how to remove an element from a linked list in python, I'm not asking for code but just kinda a pseudo algorithm in english. for example I have the linked list of 1 -> 2 -> 2 -> 3 -> 4 and I want to remove one of the 2's how would i do that? I thought of traversing through the linked list, checking to see if the data of one of the nodes is equal to the data of the node after it, if it is remove it. But I'm having trouble on the removing part. Thanks!
removing an element from a linked list in python
0
0
0
3,800
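A self-contained sketch of the skip-the-node idea from both answers above, using a throwaway Node class invented for the example:

```python
class Node(object):
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def remove_adjacent_duplicate(head):
    """Drop the first node whose value equals the next node's value."""
    current = head
    while current and current.next:
        if current.value == current.next.value:
            current.next = current.next.next   # skip the duplicate node
            return head
        current = current.next
    return head

# 1 -> 2 -> 2 -> 3 -> 4  becomes  1 -> 2 -> 3 -> 4
head = Node(1, Node(2, Node(2, Node(3, Node(4)))))
remove_adjacent_duplicate(head)
```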
33,418,463
2015-10-29T15:32:00.000
3
0
1
1
python,python-2.7,python-3.x,azure,azure-webjobs
33,427,176
3
true
0
0
Also if you wanna run different python versions in the same site, you can always drop a run.cmd that calls the right version of python for you. They are installed in D:\Python34 and D:\Python27
1
3
0
How can I select which Python version to use for a WebJob on Microsoft Azure? When I do print(sys.version) I get 2.7.8 (default, Jun 30 2014, 16:03:49) [MSC v.1500 32 bit (Intel)] Where can I specify another version? I would like to use Python 3 for some jobs. I have tried adding runtime.txt reading python-3.4 to the root path, but it had no effect.
Specify Python version in Microsoft Azure WebJob?
1.2
0
0
1,707
33,418,678
2015-10-29T15:41:00.000
0
0
0
0
python,windows,opencv,numpy,tkinter
33,434,056
2
false
0
1
Finally did it with .whl files. Download them, copy to C:\python27\Scripts and then open "cmd" and navigate to that folder with "cd\" etc. Once there run: pip install numpy-1.10.1+mkl-cp27-none-win_amd64.whl for example. In IDLE I then get: import numpy numpy.version '1.10.1'
2
0
1
I want to use "tkinter", "opencv" (cv2) and "numpy" in windows(8 - 64 bit and x64) with python2.7 - the same as I have running perfectly well in Linux (Elementary and DistroAstro) on other machines. I've downloaded the up to date Visual Studio and C++ compiler and installed these, as well as the latest version of PIP following error messages with the first attempts with PIP and numpy first I tried winpython, which already has numpy present but this comes without tkinter, although openCV would install. I don't want to use qt. so I tried vanilla Python, which installs to Python27. Numpy won't install with PIP or EasyInstall (unless it takes over an hour -same for SciPy), and the -.exe installation route for Numpy bombs becausee its looking for Python2.7 (not Python27). openCV won't install with PIP ("no suitable version") extensive searches haven't turned up an answer as to how to get a windows Python 2.7.x environment with all three of numpy, tkinter and cv2 working. Any help would be appreciated!
tkinter opencv and numpy in windows with python2.7
0
0
0
170
33,418,678
2015-10-29T15:41:00.000
0
0
0
0
python,windows,opencv,numpy,tkinter
33,441,221
2
false
0
1
small remark: WinPython has tkinter, as it's included by Python Interpreter itself
2
0
1
I want to use "tkinter", "opencv" (cv2) and "numpy" in windows(8 - 64 bit and x64) with python2.7 - the same as I have running perfectly well in Linux (Elementary and DistroAstro) on other machines. I've downloaded the up to date Visual Studio and C++ compiler and installed these, as well as the latest version of PIP following error messages with the first attempts with PIP and numpy first I tried winpython, which already has numpy present but this comes without tkinter, although openCV would install. I don't want to use qt. so I tried vanilla Python, which installs to Python27. Numpy won't install with PIP or EasyInstall (unless it takes over an hour -same for SciPy), and the -.exe installation route for Numpy bombs becausee its looking for Python2.7 (not Python27). openCV won't install with PIP ("no suitable version") extensive searches haven't turned up an answer as to how to get a windows Python 2.7.x environment with all three of numpy, tkinter and cv2 working. Any help would be appreciated!
tkinter opencv and numpy in windows with python2.7
0
0
0
170
33,420,633
2015-10-29T17:14:00.000
0
0
0
0
python,python-2.7,pandas,dataframe,pivot-table
33,421,040
1
false
0
0
I'm going to be general here, since there was no sample code or data provided. Let's say your original dataframe is called df and has columns Date and Sales. I would try creating a list that has all dates from 01-01-2014 to 12-31-2015. Let's call this list dates. I would also create an empty list called sales (i.e. sales = []). At the end of this workflow, sales should include data from dt['Sales'] AND placeholders for dates that are not within the data frame. In your case, these placeholders will be 0. In my answer, the names of the columns in the dataframe are capitalized; names of lists start with a lower case. Next, I would iterate through dates and check to see if each date is in dt['Date']. Each iteration through the list dates will be called date (i.e. date = dates[i]). If date is in dt['Date'], I would append the Sales data for that date into sales. You can find the date in the dataframe through this command: df['Date']==date. So, to append the corresponding Sales data into the list, I would use this command sales.append(df[df['Date']==date]['Sales']. If date is NOT in dt['Date'], I would append a placeholder into sales (i.e. sales.append(0). Once you iterate through all the dates in the list, I would create the final dataframe with dates and sales. The final dataframe should have both your original data and placeholders for dates that were not in the original data.
1
1
1
I have a pivot table which has an index of dates ranging from 01-01-2014 to 12-31-2015. I would like the index to range from 01-01-2013 to 12-31-2016 and do not know how without modifying the underlying dataset by inserting a row in my pandas dataframe with those dates in the column I want to use as my index for the pivot table. Is there a way to accomplish this wihtout modifying the underlying dataset?
Padding python pivot tables with 0
0
0
0
214
33,420,918
2015-10-29T17:29:00.000
16
0
0
0
python,django,django-admin,django-admin-tools
33,421,173
2
true
1
0
django.contrib.admin is simply a Django app. Remove or comment out django.contrib.admin from INSTALLED_APPS in the settings.py file. Also remove or comment out "from django.contrib import admin" from admin.py, urls.py and all the files having this import statement. Remove url(r'^admin/', include(admin.site.urls)) from urlpatterns in urls.py.
2
9
0
I am trying to run my Django Application without Django admin panel because I don't need it right now but getting an exception value: Put 'django.contrib.admin' in your INSTALLED_APPS setting in order to use the admin application. Could I ran my application without django.contrib.admin ? Even if go my localhost:8000 it is showing you need to add django.contrib.admin in your installed_apps?
Run django application without django.contrib.admin
1.2
0
0
5,473
33,420,918
2015-10-29T17:29:00.000
1
0
0
0
python,django,django-admin,django-admin-tools
33,421,110
2
false
1
0
I have resolved this issue. I had #url(r'^admin/', include(admin.site.urls)), in my urls.py which I just commented out.
2
9
0
I am trying to run my Django Application without Django admin panel because I don't need it right now but getting an exception value: Put 'django.contrib.admin' in your INSTALLED_APPS setting in order to use the admin application. Could I ran my application without django.contrib.admin ? Even if go my localhost:8000 it is showing you need to add django.contrib.admin in your installed_apps?
Run django application without django.contrib.admin
0.099668
0
0
5,473
33,426,380
2015-10-29T23:25:00.000
0
0
0
0
python,pyautogui
38,478,105
1
false
0
1
Use confidence, default value is 0.999. Reason is pyscreeze is actually used by pyautogui which has the confidence value which most likely represents a percentage from 0% - 100% for a similarity match. Looking through the code with my amateur eyes reveals that OpenCV and NumPy are required for confidence to work otherwise a different function would be used that doesn't have the confidence value. for example: by doing pyautogui.locateCenterOnScreen('foo.png', confidence=0.5) will set your confidence to 0.5, which means 50%.
1
1
0
ok, I've got into programming with python and thus far was having a fair amount of success. I've typed up a program that uses pyautogui to automates atask I need to do on a monthly basis. I took Screenshots of where I needed the mouse to click and when all was done I had a working program that searched the screen for the button to clicked, controlled the mouse that location, and printed out the report I needed. So, all I needed to do was plug it into the task scheduler and it would do the work for me! Several days afterwards, I decided to go ahead and schedule it. I ran the program again, and it crashed! Long Story short, the screen shots didn't match. I took a screen shot again, and zoomed both images 800% in Paint, and check the pixel next to the "I" in The two different images and sure enough the rgb values are different. I tried several other places to, and while they looked the same... The rgb values are different by maybe one or two points! I'm curious as to why is this happening!
Pyautogui - Problems with Changing Screenshots
0
0
0
1,631
33,427,081
2015-10-30T00:34:00.000
1
1
0
1
python,rundeck
33,427,554
2
false
0
0
okay, so I changed the step type to a command rather than script file and it worked. I guess my understanding of what a script file is was off.
1
0
0
I am new to Rundeck, so I apologize if I ask a question that probably has an obvious answer I'm overlooking. I've installed Rundeck on my Windows PC. I've got a couple of Python scripts that I want to execute via Rundeck. The scripts run fine when I execute them manually. I created a job in Rundeck, created a single step (script file option) to test the python script. The job failed after six seconds. When I checked the log, it was because it was executing it line by line rather than letting python run it as an entire script. How do I fix this?
Rundeck :: Execute a python script
0.099668
0
0
10,632
33,433,231
2015-10-30T09:59:00.000
0
0
0
0
python-2.7,openerp,odoo-8,openerp-8
33,492,872
2
false
1
0
Just make these changes: fields.boolean("string", default=False, readonly=False, required=False). It will work, thanks
2
1
0
I am trying to keep a check box "unchecked" in my custom module, Any Idea on this?
How to keep a check box "unchecked" in odoo?
0
0
0
1,459
33,433,231
2015-10-30T09:59:00.000
1
0
0
0
python-2.7,openerp,odoo-8,openerp-8
33,433,277
2
false
1
0
If you want to do that, you must set it as readonly, that way no user will be able to set it as True.
2
1
0
I am trying to keep a check box "unchecked" in my custom module, Any Idea on this?
How to keep a check box "unchecked" in odoo?
0.099668
0
0
1,459
33,433,262
2015-10-30T10:00:00.000
0
0
0
0
python,encoding,decoding,happybase
38,149,242
1
false
1
0
that data is not valid utf-8, so if you really retrieved it as such from the database, you should check who/what put it in there.
1
3
0
While trying to decode the values from HBase, i am seeing an error but it is apparent that Python thinks it is not in UTF-8 format but the Java application that put the data into HBase encoded it in UTF-8 only a = '\x00\x00\x00\x00\x10j\x00\x00\x07\xe8\x02Y' a.decode("UTF-8") Traceback (most recent call last): File "", line 1, in File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/encodings/utf_8.py", line 16, in decode return codecs.utf_8_decode(input, errors, True) UnicodeDecodeError: 'utf8' codec can't decode byte 0xe8 in position 9: invalid continuation byte any thoughts?
Decoding HappyBase data from HBase
0
0
0
547
33,437,998
2015-10-30T14:10:00.000
1
0
0
0
python,image,image-processing,bmp
40,827,549
2
false
0
0
From the wiki: The bits representing the bitmap pixels are packed in rows. The size of each row is rounded up to a multiple of 4 bytes (a 32-bit DWORD) by padding. For images with height > 1, multiple padded rows are stored consecutively, forming a Pixel Array. Together with color mapping, compression techniques make your calculation invalid. Hope this helps.
1
4
0
I have a bmp file which size is 30*30. In python, I use im = Image.open("big.bmp") rgb_img_data = list(im.getdata()) len = len(rgb_img_data) get 900 So I guess the real image data should be 900*3 = 2700 (r,g,b) But I read the image data with read() function. Get rid of the header and footer, I get 2756 data items like this, 11110101 (I convert it to binary, '11110101' is one data item, I get 2756 data items like this) Thanks for your help!
A 30*30 bmp image file, why does it have 2756 pixel data?
0.099668
0
0
195
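A back-of-the-envelope check of the row padding described in the answer above, assuming a 24-bit (3 bytes per pixel) bitmap; the 2,756 bytes reported in the question is close to, but not exactly, this figure, so the remaining difference presumably comes from how the header/footer bytes were trimmed:

```python
width, height, bits_per_pixel = 30, 30, 24   # assumption: 24-bit RGB BMP

row_bytes = (width * bits_per_pixel + 31) // 32 * 4   # each row padded to a 4-byte multiple
pixel_array = row_bytes * height

print(row_bytes)     # 92: 90 bytes of RGB data plus 2 padding bytes per row
print(pixel_array)   # 2760, versus 30 * 30 * 3 = 2700 unpadded
```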
33,438,868
2015-10-30T14:55:00.000
1
0
1
0
python,ansible
33,443,110
1
false
0
0
It turns out that setting check=True inside the PlayBook class runs Ansible playbooks in a way such that no changes take place on the remote/local server you're connecting to. I wanted to post this so other people having this problem can be spared the hours it took me to resolve it.
1
1
0
When I run a command ansible-playbook -i /tmp/srv /prov/playbooks/common.yml -vvvv I am receiving no errors and my playbook runs on the intended server; however, I run this same playbook through the Python API and my commands return with 'changed' and do not make any changes. However changes are being made when I run the playbook normally. Has anyone else had this problem? I am currently unable to find any information regarding an issue with the Ansible Python API being unable to install on a remote server.
Unable to run Ansible API with PlayBook class
0.197375
0
0
83
33,439,189
2015-10-30T15:11:00.000
2
1
1
0
python,debugging,python-2.x
40,102,123
3
false
0
0
Type "system settings" in the cortana/start bar search box Click "View Advanced System Settings In the Advanced tab in the bottom right corner click "Environment Variables" Create a new variable named "PYTHONPATH" Add these two directories to the PYTHONPATH variable: "C:\Program Files (x86)\Immunity Inc\Immunity Debugger" "C:\Program Files (x86)\Immunity Inc\Immunity Debugger\Libs"
2
1
0
I have Python 2.7 installed on my windows machine. I've downloaded and installed Immunity Debugger. But when I try to import immlib or immutils module in python, it says no such module. How to install these modules? Using pip, it says no such repository. Please help.
How to install immlib module in python?
0.132549
0
0
1,919
33,439,189
2015-10-30T15:11:00.000
1
1
1
0
python,debugging,python-2.x
34,432,268
3
true
0
0
It goes with Immunity Debugger. Install immunity debugger and your immlib.py will be there.
2
1
0
I have Python 2.7 installed on my windows machine. I've downloaded and installed Immunity Debugger. But when I try to import immlib or immutils module in python, it says no such module. How to install these modules? Using pip, it says no such repository. Please help.
How to install immlib module in python?
1.2
0
0
1,919
33,442,707
2015-10-30T18:31:00.000
1
0
1
0
python,windows,tkinter,virtualenv,tox
33,454,458
1
false
0
1
Not yet but hopefully soon someone will address this problem and create a compatible virtual env.
1
0
0
I know that tkinter has issues working with virtual envs due to the binaries not being copied and that there are workarounds if I'm just using virtual envs, but what about the autogenerated virtual envs generated by tox? Is there any way to use something like tkinter with tox?
Can tkinter work with tox (on windows)?
0.197375
0
0
88
33,446,347
2015-10-30T23:34:00.000
0
0
1
0
python,python-3.x,ubuntu,pymysql
59,368,220
19
false
0
0
If you are using the Spyder IDE, just try restarting the console or restarting the IDE; it works.
5
69
0
I'm trying to use PyMySQL on Ubuntu. I've installed pymysql using both pip and pip3 but every time I use import pymysql, it returns ImportError: No module named 'pymysql' I'm using Ubuntu 15.10 64-bit and Python 3.5. The same .py works on Windows with Python 3.5, but not on Ubuntu.
No module named 'pymysql'
0
1
0
196,169
33,446,347
2015-10-30T23:34:00.000
0
0
1
0
python,python-3.x,ubuntu,pymysql
50,157,898
19
false
0
0
I had this same problem just now, and found the reason was my editor (Visual Studio Code) was running against the wrong instance of Python; I had it set to run against the Python bundled with TensorFlow. I changed it to my Anaconda Python and it worked.
5
69
0
I'm trying to use PyMySQL on Ubuntu. I've installed pymysql using both pip and pip3 but every time I use import pymysql, it returns ImportError: No module named 'pymysql' I'm using Ubuntu 15.10 64-bit and Python 3.5. The same .py works on Windows with Python 3.5, but not on Ubuntu.
No module named 'pymysql'
0
1
0
196,169
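A quick way to confirm which interpreter (and which site-packages) is actually being used, along the lines of the answer above:

```python
import sys

print(sys.executable)   # path of the Python binary actually running
print(sys.version)

# If pymysql is importable, show where it was loaded from:
try:
    import pymysql
    print(pymysql.__file__)
except ImportError:
    print("pymysql is not installed for THIS interpreter")
```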
33,446,347
2015-10-30T23:34:00.000
0
0
1
0
python,python-3.x,ubuntu,pymysql
49,817,699
19
false
0
0
sudo apt-get install python3-pymysql This command also worked for me to install the package required for a Flask app to run on Ubuntu 16.x with the WSGI module on an Apache2 server. By default WSGI uses Ubuntu's Python 3 installation; a custom Anaconda installation won't work.
5
69
0
I'm trying to use PyMySQL on Ubuntu. I've installed pymysql using both pip and pip3 but every time I use import pymysql, it returns ImportError: No module named 'pymysql' I'm using Ubuntu 15.10 64-bit and Python 3.5. The same .py works on Windows with Python 3.5, but not on Ubuntu.
No module named 'pymysql'
0
1
0
196,169
33,446,347
2015-10-30T23:34:00.000
0
0
1
0
python,python-3.x,ubuntu,pymysql
57,734,684
19
false
0
0
Just a note: for Anaconda install packages command: python setup.py install
5
69
0
I'm trying to use PyMySQL on Ubuntu. I've installed pymysql using both pip and pip3 but every time I use import pymysql, it returns ImportError: No module named 'pymysql' I'm using Ubuntu 15.10 64-bit and Python 3.5. The same .py works on Windows with Python 3.5, but not on Ubuntu.
No module named 'pymysql'
0
1
0
196,169
33,446,347
2015-10-30T23:34:00.000
0
0
1
0
python,python-3.x,ubuntu,pymysql
63,201,272
19
false
0
0
I also got this error recently when using Anaconda on a Mac machine. Here is what I found: After running python3 -m pip install PyMySql, pymysql module is under /Library/Python/3.7/site-packages Anaconda wants this module to be under /opt/anaconda3/lib/python3.8/site-packages Therefore, after copying pymysql module to the designated path, it runs correctly.
5
69
0
I'm trying to use PyMySQL on Ubuntu. I've installed pymysql using both pip and pip3 but every time I use import pymysql, it returns ImportError: No module named 'pymysql' I'm using Ubuntu 15.10 64-bit and Python 3.5. The same .py works on Windows with Python 3.5, but not on Ubuntu.
No module named 'pymysql'
0
1
0
196,169
33,446,872
2015-10-31T00:46:00.000
0
0
0
0
python,hbase,thrift,happybase
35,449,021
2
false
0
0
My sysadmin told me that in theory he could install an HBase Thrift Server on one of the Hadoop edge nodes that are blocked off, and only open the port to my server via ACLs. He however has no intention of doing this (and I do not either). As this is not a suitable answer I'll leave the question open.
1
1
0
Anyone who knows the port and host of a HBase Thrift server, and who has access to the network, can access HBase. This is a security risk. How can the client access to the HBase Thrift server be made secure?
How to secure client connections to an HBase Thrift Server?
0
0
1
929
33,447,032
2015-10-31T01:13:00.000
1
1
0
0
python,performance,scaling,bots,irc
33,447,076
1
false
0
0
Threading is one option but it doesn't scale beyond a certain point (google Python GIL limitation). Depending on how much scaling you want to do, you may need to go multi-process (launch multiple instances). One pattern is to have a pool of worker threads that process a queue of things to do. There's a lot of overhead to creating and destroying threads in most languages.
1
0
0
I have made a simple IRC bot for myself in python which works great, but now some friends has asked me if the bot can join their IRC channel too. Their IRC channels are very active, it is Twitch chat(IRC wrapper), which means a lot of messages. I want them to use my bot, but I have no idea how it will perform, this is my first bot I've made. Right now my code is like this: Connect to IRC server & channel while true: Receive data from the socket (4096, max data to be received at once) do something with data received What changes should I do to make it perform better? 1. Should I have a sleep function in the loop? 2. Should I use threads? 3. Any general dos and don'ts? Thank you for reading my post.
IRC Bot, performance and scaling
0.197375
0
1
186
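A rough outline of the "pool of worker threads processing a queue" pattern mentioned in the answer above; handle_message() and the sample lines are placeholders for whatever the bot does with each received message:

```python
import threading

try:
    import queue            # Python 3
except ImportError:
    import Queue as queue   # Python 2

def handle_message(line):
    # Placeholder for the bot's real per-message work.
    print("handling: %r" % (line,))

def worker(q):
    while True:
        line = q.get()
        try:
            handle_message(line)
        finally:
            q.task_done()

q = queue.Queue()
for _ in range(4):   # small fixed pool instead of one thread per message
    t = threading.Thread(target=worker, args=(q,))
    t.daemon = True
    t.start()

# The socket-reading loop only enqueues; the workers do the processing.
for received in ("PING :server", "PRIVMSG #chan :hello"):
    q.put(received)
q.join()
```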
33,449,969
2015-10-31T09:20:00.000
1
1
1
0
python,sublimetext2,sublimetext3,sublimetext,sublime-text-plugin
55,436,658
2
false
0
0
The second question, about running without saving to hard disk: 1. Press Ctrl + Shift + P. 2. Type "Install Package" and install it. 3. Type "auto save" and install it. 4. Go to Preferences > Package Settings > Auto-save > Settings Default and copy all the code. 5. Go to Preferences > Package Settings > Auto-save > Settings User, paste it, and change the first setting from "auto_save_on_modified": false to "auto_save_on_modified": true. Good luck
1
5
0
I am using Sublime Text Editor for Python development. If I create a new file by Ctrl + N, the default language setting for this file is Plain Text, so how to change the default language setting for the new file to be Python ? Another question :If I write some code in the new file and have not save it to disk, it is impossible to run it and get the running result, is there a solution to remove this restriction so that we can run code in the new file without saving it to disk first?
Sublime Text: run code in a new file without saving to disk and the default language setting for a new file
0.099668
0
0
2,066
33,450,285
2015-10-31T09:59:00.000
0
0
0
0
python,mapreduce,scikit-learn,svm
35,586,970
1
false
0
0
Make sure that all of the required libraries (scikit-learn, NumPy, pandas) are installed on every node in your cluster. Your mapper will process each line of input, i.e., your training row and emit a key that basically represents the fold for which you will be training your classifier. Your reducer will collect the lines for each fold and then run the sklearn classifier on all lines for that fold. You can then average the results from each fold.
1
1
1
I've been tasked with solving a sentiment classification problem using scikit-learn, python, and mapreduce. I need to use mapreduce to parallelize the project, thus creating multiple SVM classifiers. I am then supposed to "average" the classifiers together, but I am not sure how that works or if it is even possible. The result of the classification should be one classifier, the trained, averaged classifier. I have written the code using scikit-learn SVM Linear kernel, and it works, but now I need to bring it into a map-reduce, parallelized context, and I don't even know how to begin. Any advice?
Combining SVM Classifiers in MapReduce
0
0
0
453
33,451,504
2015-10-31T12:20:00.000
2
1
0
0
python,raspberry-pi
33,451,582
1
false
0
0
You are looking for the wall BSD function. NAME wall -- write a message to users SYNOPSIS wall [-g group] [file] DESCRIPTION The wall utility displays the contents of file or, by default, its standard input, on the terminals of all currently logged in users.
1
1
0
I have a program that runs when the Pi is booted. The print statements are not displaying on additional terminal sessions. I can only get the print statements when I kill the auto-booted process and restart the program. Is there a method to broadcast print messages to all users - like the message displayed then typing 'Halt'? Thx
Raspberry Pi & Python 2.7 - trying to print to all users
0.379949
0
0
71
33,453,441
2015-10-31T15:44:00.000
4
0
0
1
python,google-app-engine,google-cloud-datastore
33,456,580
3
false
1
0
There are two important considerations here. The number of roundtrip calls from the client to the server. One call to update a user profile will execute much faster than 5 calls to update different parts of the user profile as you save on roundtrip time between the client and the server and between the server and the datastore. Write costs. If you update 5 properties in a user profile and save it, and then update 5 other properties and save it, etc., your writing costs will be much higher because every update incurs writing costs, including updates on all indexed properties - even those you did not change. Instead of creating a huge user profile with 50 properties, it may be better to keep properties that rarely change (name, gender, date of birth, etc.) in one entity, and separate other properties into a different entity or entities. This way you can reduce your writing costs, but also reduce the payload (no need to move all 50 properties back and forth unless they are needed), and simplify your application logic (i.e. if a user only updates an address, there is no need to update the entire user profile).
3
1
0
I am working on a website that I want to host on App Engine. My App Engine scripts are written in Python. Now let's say you could register on my website and have a user profile. Now, the user Profile is kind of extensive and has more than 50 different ndb properties (just for the sake of an example). If the user wants to edit his records (which he can to some extend) he may do so through my website send a request to the app engine backend. The way the profile is section, often about 5 to 10 properties fall into a small subsection or container of the page. On the server side I would have multiple small scripts to handle editing small sections of the whole Profile. For example one script would update "adress information", another one would update "interests" and an "about me" text. This way I end up with 5 scripts or so. The advantage is that each script is easy to maintain and only does one specific thing. But I don't know if something like this is smart performance wise. Because if I maintain this habbit for the rest of the page I'll probably end up with about 100 or more different .py scripts and a very big app.yaml and I have no idea how efficiently they are cached on the google servers. So tl;dr: Are many small backend scripts to perform small tasks on my App Engine backend a good idea or should I use few scripts which can handle a whole array of different tasks?
App Engine: Few big scripts or many small ones?
0.26052
0
0
419
33,453,441
2015-10-31T15:44:00.000
3
0
0
1
python,google-app-engine,google-cloud-datastore
33,457,227
3
true
1
0
A single big script would have to be loaded every time an instance for your app starts, possibly hurting the instance start time, the response time of every request starting an instance and the memory footprint of the instance. But it can handle any request immediately, no additional code needs to be loaded. Multiple smaller scripts can be lazy-loaded, on demand, after your app is started, offering advantages maybe appealing to some apps: the main app/module script can be kept small, which keeps the instance startup time short the app's memory footprint can be kept smaller, handler code in lazy-loaded files is not loaded until there are requests for such handlers - interesting for rarely used handlers the extra delay in response time for a request which requires loading the handler code is smaller as only one smaller script needs to be loaded. Of course, the disadvantage is that some requests will have longer than usual latencies due to loading of the handler scripts: in the worst case the number of affected requests is the number of scripts per every instance lifetime. Updating a user profile is not something done very often, I'd consider it a rarely used piece of functionality, thus placing its handlers in a separate file looks appealing. Splitting it into one handler per file - I find that maybe a bit extreme. It's really is up to you, you know better your app and your style. From the GAE (caching) infra perspective - the file quota is 10000 files, I wouldn't worry too much with just ~100 files.
3
1
0
I am working on a website that I want to host on App Engine. My App Engine scripts are written in Python. Now let's say you could register on my website and have a user profile. Now, the user profile is kind of extensive and has more than 50 different ndb properties (just for the sake of an example). If the user wants to edit his records (which he can to some extent) he may do so through my website, which sends a request to the App Engine backend. The way the profile is sectioned, often about 5 to 10 properties fall into a small subsection or container of the page. On the server side I would have multiple small scripts to handle editing small sections of the whole profile. For example one script would update "address information", another one would update "interests" and an "about me" text. This way I end up with 5 scripts or so. The advantage is that each script is easy to maintain and only does one specific thing. But I don't know if something like this is smart performance-wise. Because if I maintain this habit for the rest of the page I'll probably end up with about 100 or more different .py scripts and a very big app.yaml, and I have no idea how efficiently they are cached on the Google servers. So tl;dr: Are many small backend scripts to perform small tasks on my App Engine backend a good idea or should I use few scripts which can handle a whole array of different tasks?
App Engine: Few big scripts or many small ones?
1.2
0
0
419
33,453,441
2015-10-31T15:44:00.000
0
0
0
1
python,google-app-engine,google-cloud-datastore
33,858,532
3
false
1
0
Adding to Dan Cornilescu’s answer, writing/saving an instance to the database re-writes the whole instance (i.e. all its attributes) to the database. If you're going to use put() multiple times, you're going to re-write the whole instance multiple times. Which, aside from being a heavy task to perform, will cost you more money.
3
1
0
I am working on a website that I want to host on App Engine. My App Engine scripts are written in Python. Now let's say you could register on my website and have a user profile. Now, the user profile is kind of extensive and has more than 50 different ndb properties (just for the sake of an example). If the user wants to edit his records (which he can to some extent) he may do so through my website, which sends a request to the App Engine backend. The way the profile is sectioned, often about 5 to 10 properties fall into a small subsection or container of the page. On the server side I would have multiple small scripts to handle editing small sections of the whole profile. For example one script would update "address information", another one would update "interests" and an "about me" text. This way I end up with 5 scripts or so. The advantage is that each script is easy to maintain and only does one specific thing. But I don't know if something like this is smart performance-wise. Because if I maintain this habit for the rest of the page I'll probably end up with about 100 or more different .py scripts and a very big app.yaml, and I have no idea how efficiently they are cached on the Google servers. So tl;dr: Are many small backend scripts to perform small tasks on my App Engine backend a good idea or should I use few scripts which can handle a whole array of different tasks?
App Engine: Few big scripts or many small ones?
0
0
0
419
33,458,865
2015-11-01T03:15:00.000
8
0
0
0
python,pandas,dataframe,series
33,458,868
1
true
0
0
You can do df.ix[[n]] to get a one-row dataframe of row n.
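For example (df.ix is deprecated in newer pandas, but df.iloc[[n]] behaves the same way for this purpose):

    import pandas as pd

    df = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})

    print(type(df.iloc[1]))    # pandas.core.series.Series   - a single row collapses to a Series
    print(type(df.iloc[[1]]))  # pandas.core.frame.DataFrame - 1 x n DataFrame, columns preserved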
1
5
1
I have a huge dataframe, and I index it like so: df.ix[<integer>] Depending on the index, sometimes this will have only one row of values. Pandas automatically converts this to a Series, which, quite frankly, is annoying because I can't operate on it the same way I can a df. How do I either: 1) Stop pandas from converting and keep it as a dataframe ? OR 2) easily convert the resulting series back to a dataframe ? pd.DataFrame(df.ix[<integer>]) does not work because it doesn't keep the original columns. It treats the <integer> as the column, and the columns as indices. Much appreciated.
how to make 1 by n dataframe from series in pandas?
1.2
0
0
1,239
33,463,442
2015-11-01T14:49:00.000
0
0
1
0
python,dataset,python-3.5
33,463,589
2
true
0
0
You can store a list of tuples, with each tuple containing data in a particular order (e.g. score, name, etc.). Then you can sort the list with list.sort(), passing a key function that extracts the score to compare on.
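A small sketch of that idea; the file name and the comma-separated format are just assumptions. Each record is a (score, name) tuple, the list is sorted so the lowest guess count comes first, and only the best five are written back:

    def add_score(name, guesses, path='highscores.txt'):
        scores = []
        try:
            with open(path) as f:
                for line in f:
                    n, s = line.rstrip('\n').split(',')
                    scores.append((int(s), n))
        except IOError:
            pass  # no high-score file yet

        scores.append((guesses, name))
        scores.sort(key=lambda item: item[0])  # fewest guesses first
        scores = scores[:5]                    # keep only the best five

        with open(path, 'w') as f:
            for s, n in scores:
                f.write('%s,%d\n' % (n, s))
        return scores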
1
0
0
I understand there's different ways of storing data in Python but I can't figure what to use for my needs. I've made a small client/server game, and I want the amount of guesses it took them to be their score. I would then like to write their name (currently the IP address) along with the score into a file as to create a list of high scores. While I can do that perfectly fine, I only want a maximum of 5 scores stored and to be able to sort them so that when I display the high scores and names to the user, the lowest (being the best score) at the top. I'd also like to allow the username to exist more than once. While it's easy to write the data and read it, I really can't figure out what data type to use, dictionary would make a lot of sense in some cases, but a key can only have one value and the key can only exist once, a list has no relation to other specific values contained within so neither make sense to use, and tuples can't be sorted either it seems. I was thinking about reading each line into a sperate list and then using the index to compare the score so I could sort them and write it back to the file, but this would be bad on memory in my opinion? What would be the easiest method to save the name and score together without using some extreme learning curve like SQL?
Storing two linked values in Python
1.2
0
0
87
33,464,294
2015-11-01T16:16:00.000
2
0
0
0
python,statsmodels
33,479,441
1
true
0
0
When we request automatic lag selection in adfuller, the function needs to compare all models up to the given maxlag lags. For this comparison we need to use the same observations for all models. Because lagged observations enter the regressor matrix, we lose observations as initial conditions corresponding to the largest lag included. As a consequence autolag uses nobs - maxlag observations for all models. For calculating the test statistic of adfuller itself, we don't need the model comparison anymore and we can use all observations available for the chosen lag, i.e. nobs - best_lag. More generally, how to treat initial conditions and different numbers of initial conditions is not always clear cut: autocorrelation and partial autocorrelation are largely based on using all available observations, full MLE for AR and ARMA models uses the stationary model to include the initial conditions, while conditional MLE or least squares drops them as necessary.
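For reference, a minimal call that shows the parameters involved; the series here is just a placeholder random walk:

    import numpy as np
    from statsmodels.tsa.stattools import adfuller

    series = np.random.randn(500).cumsum()

    # autolag='BIC' compares models for lags 0..maxlag on the same
    # nobs - maxlag observations, which is why the reported criterion
    # changes when maxlag changes.
    adf_stat, pvalue, usedlag, nobs, crit, icbest = adfuller(series, maxlag=30, autolag='BIC')
    print(usedlag, icbest)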
1
3
1
This question is on Augmented Dickey–Fuller test implementation in statsmodels.tsa.stattools python library - adfuller(). In principle, AIC and BIC are supposed to compute information criterion for a set of available models and pick up the best (the one with the lowest information loss). But how do they operate in the context of Augmented Dickey–Fuller? The thing which I don't get: I've set maxlag=30, BIC chose lags=5 with some informational criterion. I've set maxlag=40 - BIC still chooses lags=5 but the information criterion have changed! Why in the world would information criterion for the same number of lags differ with maxlag changed? Sometimes this leads to change of the choice of the model, when BIC switches from lags=5 to lags=4 when maxlag is changed from 20 to 30, which makes no sense as lag=4 was previously available.
How exactly BIC in Augmented Dickey–Fuller test work in Python?
1.2
0
0
1,081
33,464,819
2015-11-01T17:11:00.000
-1
0
0
0
python,sublimetext3,sublime-text-plugin
33,464,838
2
false
0
0
You need to create an '__init__.py' file in the myplugin folder alongside that file. Otherwise you will be unable to load the file as a module.
2
1
0
I'm using Sublime text 3 and I'm writing a simple plugin, the problem that i have is that whenever i put myplugin.py in the Packages/User folder I get the result perfectly. BUT when I move myplugin.py file to a folder for example myplugin/myplugin.py the plugin is not working anymore. I tried to see if there is any information logged to the console but I found nothing related to my problem. Can any one tell me what is exactly the problem and what I'm doing wrong?
Sublime text plugin is not working
-0.099668
0
0
202
33,464,819
2015-11-01T17:11:00.000
2
0
0
0
python,sublimetext3,sublime-text-plugin
33,465,589
2
false
0
0
Actually I was missing the fact that a Sublime Text plugin should live directly in the Packages folder, not in the Packages/User folder.
2
1
0
I'm using Sublime text 3 and I'm writing a simple plugin, the problem that i have is that whenever i put myplugin.py in the Packages/User folder I get the result perfectly. BUT when I move myplugin.py file to a folder for example myplugin/myplugin.py the plugin is not working anymore. I tried to see if there is any information logged to the console but I found nothing related to my problem. Can any one tell me what is exactly the problem and what I'm doing wrong?
Sublime text plugin is not working
0.197375
0
0
202
33,465,153
2015-11-01T17:43:00.000
0
0
0
0
python,django,django-migrations
33,465,379
1
true
1
0
I was using a models directory. Adding an import of the model to __init__.py allowed me to control whether it's visible to makemigrations or not. I found that out using strace.
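For illustration (the app and model module names are hypothetical): with a models/ package, only the models imported in __init__.py are visible to makemigrations.

    # myapp/models/__init__.py
    from .profile import Profile        # visible to makemigrations
    # from .draft import Draft          # commented out -> invisible to makemigrations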
1
0
0
I just made a mess in my local Django project and realized that somehow I'm out of sync with my migrations. I tried to apply initial and realized that some of the tables already exist, so I tried --fake. This made the migration pass, but now I'm missing the one table I just wanted to add... how can I prepare migration just for one model or make Django re-discover what my database is missing and create that?
Django Migrations - how to insert just one model?
1.2
0
0
941
33,465,685
2015-11-01T18:32:00.000
0
0
1
0
python,oop
33,465,756
2
false
0
0
More information needs to be given to fully understand the context. But, in a general sense, I'd do a mix of all of them. Use helper functions for "shared" parts, and use conditional statements too. Honestly, a lot of it comes down to just what is easier for you to do?
1
1
1
I need several very similar plotting functions in python that share many arguments, but differ in some and of course also differ slightly in what they do. This is what I came up with so far: Obviously just defining them one after the other and copying the code they share is a possibility, though not a very good one, I reckon. One could also transfer the "shared" part of the code to helper functions and call these from inside the different plotting functions. This would make it tedious though, to later add features that all functions should have. And finally I've also thought of implementing one "big" function, making possibly not needed arguments optional and then deciding on what to do in the function body based on additional arguments. This, I believe, would make it difficult though, to find out what really happens in a specific case as one would face a forest of arguments. I can rule out the first option, but I'm hard pressed to decide between the second and third. So I started wondering: is there another, maybe object-oriented, way? And if not, how does one decide between option two and three? I hope this question is not too general and I guess it is not really python-specific, but since I am rather new to programming (I've never done OOP) and first thought about this now, I guess I will add the python tag. EDIT: As pointed out by many, this question is quite general and it was intended to be so, but I understand that this makes answering it rather difficult. So here's some info on the problem that caused me to ask: I need to plot simulation data, so all the plotting problems have simulation parameters in common (location of files, physical parameters,...). I also want the figure design to be the same. But depending on the quantity, some plots will be 1D, some 2D, some should contain more than one figure, sometimes I need to normalize the data or take a logarithm before plotting it. The output format might also vary. I hope this helps a bit.
What is a good way to implement several very similar functions?
0
0
0
444
33,469,625
2015-11-02T02:00:00.000
0
0
1
0
python,c++,text,extract
33,469,697
3
false
0
0
It sounds like what you want to do is first read File B, collecting the IDs. You can store the IDs in a set or a dict. Then read File A. For each line in File A, extract the ID, then see if it was in File B by checking for membership in your set or dict. If not, then skip that line and continue with the next line. If it is, then process that line as desired.
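A sketch of that approach in Python; the file names, the delimiter and the position of the ID column (field 3) are assumptions based on the question:

    # Build a set of the ~1,000,000 IDs from File B (easily fits in memory).
    with open('fileB.txt') as fb:
        wanted_ids = set(line.strip() for line in fb)

    # Stream File A once, writing only the matching rows.
    with open('fileA.txt') as fa, open('matches.txt', 'w') as out:
        for line in fa:
            fields = line.split('\t')      # adjust the delimiter to the real format
            if fields[2] in wanted_ids:    # column 3 holds the individual's ID
                out.write(line)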
1
1
0
So first, I know there are some answers out there for similar questions, but...my problem has to do with speed and memory efficiency. I have a 60 GB text file that has 17 fields and 460,368,082 records. Column 3 has the ID of the individual and the same individual can have several records in this file. Lets call this file, File A. I have a second file, File B, that has the ID of 1,000,000 individuals and I want to extract the rows of File A that have an ID that is in File B. I have a windows PC and I'm open to doing this in C or Python, or whatever is faster... but not sure how to do it fast and efficiently. So far every solution I have come up with takes over 1.5 years according to my calculations.
Extracting certain rows from a file that match a condition from another file
0
0
0
358
33,469,633
2015-11-02T02:01:00.000
7
0
0
0
python,machine-learning,scikit-learn
33,504,368
2
false
0
0
The reason why the results are different (and why calling transform even works) is that LinearSVC also has a transform (now deprecated) that does feature selection. If you want to transform using just the first step, pipeline.named_steps['tfidf'].transform([item]) is the right thing to do. If you would like to transform using all but the last step, olologin's answer provides the code. By default, all steps of the pipeline are executed, including the transform on the last step, which is the feature selection performed by the LinearSVC.
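A self-contained illustration of both options (the tiny training data is only there to make the snippet runnable):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC
    from sklearn.pipeline import Pipeline

    pipeline = Pipeline([('tfidf', TfidfVectorizer()), ('svc', LinearSVC())])
    pipeline.fit(["good movie", "bad movie"], [1, 0])

    item = "a good film"

    # Transform with the first step only (the TF-IDF representation):
    vec = pipeline.named_steps['tfidf'].transform([item])

    # One possible way to transform with "all but the last step":
    partial = Pipeline(pipeline.steps[:-1])
    vec = partial.transform([item])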
1
10
1
I have a simple scikit-learn Pipeline of two steps: a TfIdfVectorizer followed by a LinearSVC. I have fit the pipeline using my data. All good. Now I want to transform (not predict!) an item, using my fitted pipeline. I tried pipeline.transform([item]), but it gives a different result compared to pipeline.named_steps['tfidf'].transform([item]). Even the shape and type of the result is different: the first is a 1x3000 CSR matrix, the second a 1x15000 CSC matrix. Which one is correct? Why do they differ? How do I transform items, i.e. get an item's vector representation before the final estimator, when using scikit-learn's Pipeline?
How to transform items using sklearn Pipeline?
1
0
0
7,410
33,471,710
2015-11-02T06:12:00.000
-2
0
0
1
python,linux,django,filesystems
33,471,765
3
false
1
0
Save your Python code file somewhere, using "Save" or "Save as" in your editor. Let's call it 'first.py', in some folder like "pyscripts" that you make on your Desktop. Open a prompt (a Windows 'cmd' shell that is a text interface into the computer): start > run > "cmd".
1
1
0
I am learning Python and DJango and I am relatively nub with Linux. When I create DJango project I have manage.py file which I can execute like ./manage.py runserver. However when I create some Python program by hand it looks like that my Linux trying to execute it using Bash, not Python. So i need to write python foo.py instead ./foo.py. Attributes of both files manage.py and foo.py are the same (-rwx--x---). So my Q is: where is difference and how I can execute python program without specifying python? Links to any documentations are very appreciate. Thanks.
How to execute Python file
-0.132549
0
0
176
33,473,848
2015-11-02T08:54:00.000
14
0
1
1
python,windows,theano
46,394,599
1
false
0
1
If you are using Visual Studio in Windows, right-click on your project in the Solution Explorer and navigate as follows: Properties -> C/C++ -> General -> Additional Include Directories -> Add C:/Anaconda/include/ (or wherever your Anaconda install is located)
1
13
0
I'm trying to run a sample Theano code that uses GPU on windows. My python (with python-dev and Theano and all required libraries) was installed from Anaconda. This is the error I run into: Cannot open include file: 'Python.h': No such file or directory My Python.h is actually in c://Anaconda/include/ I'm guessing that I should add that directory to some environmental variable, but I don't know which.
Windows missing Python.h
1
0
0
20,029
33,479,646
2015-11-02T14:14:00.000
0
0
0
0
matlab,python-2.7,csv,export-to-csv
33,481,202
1
true
0
0
To read a text file in MATLAB you can use fscanf or textscan; then to export to Excel you can use xlswrite, which writes directly to the Excel file.
1
0
1
I am trying to read data from text file (which is output given by Tesseract OCR) and save the same in excel file. The problem i am facing here is the text files are in space separated format, and there are multiple files. Now i need to read all the files and save the same in excel sheet. I am using MATLAB to import and export data. I even thought of using python to convert the files into CSV format so that i can easily import the same in MATLAB and simply excelwrite the same. But no good solution. Any guidance would be of great help. thank you
Importing data from text file and saving the same in excel
1.2
1
0
212
33,480,139
2015-11-02T14:39:00.000
0
0
0
1
python,linux,dbus
33,510,521
1
false
0
0
Just a guess: the Python client might be able to use X11 to discover the session bus address (in addition to using the DBUS_SESSION_BUS_ADDRESS environment variable). It is stored in the _DBUS_SESSION_BUS_ADDRESS property of the _DBUS_SESSION_BUS_SELECTION_[hostname]_[uuid] selection owner window (the uuid is the content of /var/lib/dbus/machine-id).
1
0
0
I'm trying to use D-Bus to control another application. When using Python bindings, it is possible to use D-Bus just with dbus.SessionBus(). However, other application require to first set up the environment variables DBUS_SESSION_BUS_ADDRESS and DBUS_SESSION_BUS_PID, otherwise they report that the name "was not provided by any .service files". My question is, why is it necessary for some application to set up the environment variables? Is the a standard procedure to initialize the session bus in some situations?
Session bus initialization
0
0
0
194
33,484,524
2015-11-02T18:35:00.000
1
0
1
0
python,image,python-3.x,pillow
33,485,422
2
false
0
0
It ended up installing correctly after using easy_install instead of pip.
1
1
0
I have python 3.x, and was told to install Pillow for image manipulation. After installing it with pip however, i'm unable to import PIL from the python interpreter. It just says ImportError: No module named 'PIL'. Running pip list in the command line shows that Pillow is indeed installed.
Can't import PIL after installing Pillow
0.099668
0
0
2,238
33,489,896
2015-11-03T01:19:00.000
1
0
1
0
python-3.x,memory-profiling
33,517,071
1
true
0
0
It depends how you are using memory_profiler. It can be used in two different ways: To get memory usage line-by-line (run with python -m memory_profiler my_script.py). This needs to get memory information (from the OS) for every line executed within the profiled function. How this affects run-time depends on the number of lines in the function: if it has a lot of lines with fast execution times, it might introduce a significant overhead. On the other hand, if the function to profile has few lines and each line has a significant computing time, then the overhead will be negligible. To get memory as a function of time (run with mprof run my_script.py and plot with mprof plot). In this case the function that collects the memory usage is in a different process from the one that runs your script, hence the overhead is minimal (unless you are using all CPUs).
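For reference, the two modes look roughly like this; the script and function names are placeholders:

    # my_script.py
    from memory_profiler import profile

    @profile            # line-by-line report for this function only
    def process():
        data = [n ** 2 for n in range(10 ** 6)]
        return sum(data)

    if __name__ == '__main__':
        process()

    # Line-by-line mode:  python -m memory_profiler my_script.py
    # Time-based mode:    mprof run my_script.py   then   mprof plot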
1
0
0
I am evaluating the tools that profile my python program. One of the interesting tools here is memory_profiler. Before moving forward, just want to know whethermemory_profiler affects runtime. The reason I am asking this question is that memory_profiler will output a lot of memory usages. So I am suspecting it might affect runtime. Thanks Derek
python: will memory_profiler affect runtime?
1.2
0
0
714
33,491,466
2015-11-03T04:35:00.000
0
0
1
0
python,ipython-notebook
57,379,733
1
false
0
0
No, you only need to import the module a single time, and it does not need to be in the same cell that you are using it.
1
1
0
I am a Python programmer, new to IPython notebook. I have started a notebook that requires me to use the csv module in several cells. It seems that I have to import the module separately for each cell. I can't just put import csv at the top of the notebook, I have to write it once in each cell. Is that correct? It seems very clunky if so, suspect I've missed something obvious!
Where to put imports in an iPython notebook?
0
0
0
416
33,493,178
2015-11-03T07:02:00.000
1
1
0
1
python,linux,svn
33,493,376
1
false
0
0
Like setting an environment variable in bash, it will disappear when you close the session. So just use sys.path.append; it adds the path at runtime.
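For example, near the top of mailer.py (the path is the one mentioned in the question):

    import sys
    sys.path.append('/opt/CollabNet_Subversion/lib-146/svn-python')
    import svn  # now resolvable, for this process only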
1
0
0
I am trying to utilise mailer.py script to send mails after a SVN Commit. In mailer.py svn module has been used. I think the svn module is present in /opt/CollabNet_Subversion/lib-146/svn-python/svn and I tried to append it to the sys path using sys.path.append. For once it is getting appended and when I do sys.path I can see the appended path but after that the path is removed and I am getting import error: No Module named SVN. Am I missing something?
Is it possible to add the module path to the python environment variable in linux with out root access?
0.197375
0
0
192
33,497,639
2015-11-03T11:09:00.000
1
0
0
0
python-2.7,mod-wsgi
34,476,701
1
false
0
0
Copy all the libz.so* files to any path in your LD_LIBRARY_PATH. Long story short, I have miniconda and was stuck on the same issue. I realised that conda prefers to search for a library in LD_LIBRARY_PATH rather than in its own libs. Hence, you need to make the missing library available in LD_LIBRARY_PATH; adding the whole conda lib directory to LD_LIBRARY_PATH is never a good idea (i.e. it just breaks your whole system). As a result, copying the appropriate lib from the conda library directory to any folder in your LD_LIBRARY_PATH is the best solution. Note the path must show up before /lib64 in your LD_LIBRARY_PATH (i.e. export LD_LIBRARY_PATH=/your/path:$LD_LIBRARY_PATH)
1
4
1
I've created a flask application that I'm trying to deploy on an apache server. I've installed a conda distribution of python where I've downloaded associated modules, including flask, matplotlib and others. I'm using wsgi to launch the application. The problem I'm having is when the server runs wsgi script it fails saying that when trying to import matplotlib it can't find the correct version libz ImportError: /lib64/libz.so.1: version `ZLIB_1.2.3.4' not found (required by /mypath/miniconda/lib/python2.7/site-packages/matplotlib/../../.././libpng16.so.16) However the correct version of libz is found at /mypath/miniconda/lib/libz.* The wsgi module was built with this version of python. In addition the apache init script sets the PATH environment variable this location of python (and there are no other python 2.7 on the system). When I print the ldd path of libpng via the wsgi script it points to the python version of libz as the one it should be loading. linux-vdso.so.1 => (0x00007fff9fe00000) libz.so.1 => /mypath/miniconda/lib/python2.7/site-packages/matplotlib/../../../././libz.so.1 (0x00007fb2e4388000) libm.so.6 => /lib64/libm.so.6 (0x00007fb2e40e8000) libc.so.6 => /lib64/libc.so.6 (0x00007fb2e3d50000) /lib64/ld-linux-x86-64.so.2 (0x00000035a9e00000) so why is it trying to load from /lib64 ?? When I try load the module via the same python from a terminal, it loads fine. I understand my environment is not going to be the same as the apache environment but offhand I couldn't see any major differences. I haven't tried explicitly setting the LD_LIBRARY_PATH or WSGIPythonHome, neither which seem like they should be necessary. But that's the next avenue I'll try. Even if that works (but especially if it doesn't), I'd be curious if anyone has any ideas as to what's going on. Thanks in advance.
Why is wsgi looking for a library in /lib64 when the correct version is in the python distribution
0.197375
0
0
725
33,503,134
2015-11-03T15:37:00.000
1
0
0
1
python,python-3.x,asynchronous,scalability,popen
33,503,827
1
false
0
0
You could use os.listdir or os.walk instead of ls, and the re module instead of grep. Wrap everything up in a function, and use e.g. the map method from a multiprocessing.Pool object to run several of those in parallel. This is a pattern that works very well. In Python3 you can also use Executors from concurrent.futures in a similar way.
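A rough sketch of that pattern; the directory, the regex and the head-like cap are assumptions standing in for the original ls | grep | head pipeline:

    import os
    import re
    from multiprocessing import Pool

    PATTERN = re.compile(r'error')   # stands in for the grep expression

    def matching_lines(path):
        # Return the lines of one file that match PATTERN (the grep part).
        hits = []
        try:
            with open(path, errors='ignore') as f:
                for line in f:
                    if PATTERN.search(line):
                        hits.append((path, line.rstrip('\n')))
        except OSError:
            pass  # unreadable file, skip it
        return hits

    def all_files(root):
        # Walk the tree instead of calling ls.
        for dirpath, _, names in os.walk(root):
            for name in names:
                yield os.path.join(dirpath, name)

    if __name__ == '__main__':
        with Pool() as pool:                      # several files processed in parallel
            for hits in pool.map(matching_lines, list(all_files('/var/log'))):
                for path, line in hits[:10]:      # head-like cap on output
                    print(path, line)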
1
0
0
Requirement - I want to execute a command that uses ls, grep, head etc using pipes (|). I am searching for some pattern and extracting some info which is part of the query my http server supports. The final output should not be too big so m assuming stdout should be good to use (I read about deadlock issues somewhere) Currently, I use popen from subprocess module but I have my doubts over it. how many simultaneous popen calls can be fired. does the result immediately come in stdout? (for now it looks the case but how to ensure it if the commands take long time) how to ensure that everything is async - keeping close to single thread model? I am new to Python and links to videos/articles are also appreciated. Any other way than popen is also fine.
Async execution of commands in Python
0.197375
0
0
302
33,504,746
2015-11-03T16:50:00.000
8
1
0
0
python,ssl,nginx
33,526,221
1
true
0
0
It seems that my problem was that I did not create the CA properly and wasn't signing keys the right way. A CA cert needs to be signed and if you pretend to be top level CA you self-sign your CA cert. openssl req -new -newkey rsa:2048 -keyout ca.key -out ca.pem openssl ca -create_serial -out cacert.pem -days 365 -keyfile ca.key -selfsign -infiles ca.pem Then you use ca command to sign requests openssl genrsa -des3 -out server.key 1024 openssl req -new -key server.key -out server.csr openssl ca -out server.pem -infiles server.csr
1
9
0
OK, I am trying to use client certificates to authenticate a python client to an Nginx server. Here is what I tried so far: Created a local CA openssl genrsa -des3 -out ca.key 4096 openssl req -new -x509 -days 365 -key ca.key -out ca.crt Created server key and certificate openssl genrsa -des3 -out server.key 1024 openssl rsa -in server.key -out server.key openssl req -new -key server.key -out server.csr openssl x509 -req -days 365 -in server.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out server.crt Used similar procedure to create a client key and certificate openssl genrsa -des3 -out client.key 1024 openssl rsa -in client.key -out client.key openssl req -new -key client.key -out client.csr openssl x509 -req -days 365 -in client.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out client.crt Add these lines to my nginx config server { listen 443; ssl on; server_name dev.lightcloud.com; keepalive_timeout 70; access_log /usr/local/var/log/nginx/lightcloud.access.log; error_log /usr/local/var/log/nginx/lightcloud.error.log; ssl_certificate /Users/wombat/Lightcloud-Web/ssl/server.crt; ssl_certificate_key /Users/wombat/Lightcloud-Web/ssl/server.key; ssl_client_certificate /Users/wombat/Lightcloud-Web/ssl/ca.crt; ssl_verify_client on; location / { uwsgi_pass unix:///tmp/uwsgi.socket; include uwsgi_params; } } created a PEM client file cat client.crt client.key ca.crt > client.pem created a test python script import ssl import http.client context = ssl.SSLContext(ssl.PROTOCOL_SSLv23) context.load_verify_locations("ca.crt") context.load_cert_chain("client.pem") conn = http.client.HTTPSConnection("localhost", context=context) conn.set_debuglevel(3) conn.putrequest('GET', '/') conn.endheaders() response = conn.getresponse() print(response.read()) And now I get 400 The SSL certificate error from the server. What am I doing wrong?
Doing SSL client authentication is python
1.2
0
1
16,933
33,506,328
2015-11-03T18:19:00.000
0
0
0
0
python,tkinter,widget
33,507,438
2
false
0
1
The very definition of a row is that it is the same height all the way across. That's what makes it a row. The same can be said for columns. Therefore, the tallest item in a row (height plus padding) is what controls the overall height of the row. The only control you have over smaller widgets is which sides of their too-large cell they stick to. For example, if you want all widgets to be aligned along their tops, use sticky="n", which causes the tops of the widgets to "stick" to the top (north) side of the space they have been allocated. If you want them aligned along their bottoms, use sticky="s". Providing neither "n" nor "s" means they will be aligned along their midpoints.
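A small illustration, assuming the widgets x and y from the question (Python 3 names; on Python 2 the module is Tkinter): the pady stays on x only, and sticky decides where y sits inside the now-taller row.

    import tkinter as tk

    root = tk.Tk()
    x = tk.Label(root, text="left widget")
    y = tk.Label(root, text="right widget")

    x.grid(row=4, column=0, pady=10)      # pady enlarges this row
    y.grid(row=4, column=4, sticky="n")   # y hugs the top of the taller row
    root.mainloop()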
1
0
0
I have an large frame of a wide array of elements. Within this frame, there are basically two different sides to the frame. Consider a widget x on the left side, which is placed by .grid(row=4, column=0). Padding is added to this object x, so it is actually x.grid(row=4, column=0, pady=10) Well, the opposite object, object y, is placed on the same row by y.grid(row=4, column=4), or something along those lines. I have this setup, but the pady on x is adding padding to y as well. I want there to be padding on one widget in the row-- not the entire row. Therefore, my paraphrased question is, how does one add padding to only one widget in a row, without adding padding to every object in that respective row?
How to add padding to a widget, but not the entire row's widgets, in tkinter?
0
0
0
767
33,508,572
2015-11-03T20:35:00.000
0
1
0
0
python,eclipse,tweepy
33,510,764
1
false
0
0
If you recently installed the package maybe just reconfigure your pydev (Window->Preferences->PyDev->Interpreters->Python Interpreter: Quick Auto Configure).
1
1
0
I'm having trouble importing tweepy. I've looked through so many previous questions and still can't find a correct solution. I think it has something to do with how tweepy is being downloaded when I install but I'm not sure. I get an import error saying that "tweepy is not a package". I have tweepy library connected to the interpreter and all that but, it is saved as a compressed EGG file instead of a file folder like the rest of my packages. I think that has something to do with it but I'm not too sure. Also, tweepy works in my command line but not in eclipse.
Python import error in eclipse (Package works in command line but not eclipse)
0
0
1
321
33,510,814
2015-11-03T23:09:00.000
1
0
0
0
python,selenium,scrapy
33,521,521
3
false
1
0
Scrapy by itself does not control browsers. However, you could start a Selenium instance from a Scrapy crawler. Some people design their Scrapy crawler like this. They might process most pages only using Scrapy but fire Selenium to handle some of the pages they want to process.
1
1
0
When I use Selenium I can see the Browser GUI, is it somehow possible to do with scrapy or is scrapy strictly command line based?
Can scrapy control and show a browser like Selenium does?
0.066568
0
1
1,636
33,511,004
2015-11-03T23:23:00.000
3
0
0
0
python,user-interface,tkinter
33,511,107
1
true
0
1
Sure it's possible. Just call root.deiconify(). You can either pass root as a parameter to the login window, or make it a global variable.
1
1
0
So i think this is possible but I'm not sure... Im creating a login system for my program, the main screen is a tinter GUI root window, when this is created it is then .withdraw() and the top level login window is opened (this is stored in another module in a class). When the username and password are correct in the login top level window i want to .deiconify() the root window from within a method of the login window class. Is this possible and if so how.... Sorry i haven't got the code with me so can't upload any right now Thank You!
Tkinter GUI - deiconify() top level window from a top level window class in another module
1.2
0
0
3,137
33,511,359
2015-11-03T23:53:00.000
2
0
0
0
android,python,sl4a,qpython
34,069,082
1
false
0
1
You can develop a QPython project as you would other Python projects on your PC or Mac, upload the project into your mobile's /sdcard/com.hipipal.qpyplus/projects/, then run it in QPython. The QPython project should contain a main.py, which is used as the project's launch script. Besides adb (the Android development tool), you can use QPython's FTP service (you can find it in the settings page) or another FTP app to upload the project to your mobile. GOOD NEWS: the newest QPython (1.2.2) contains a qedit4web.py which allows you to develop from a browser and edit and run code on your mobile.
1
3
0
Being new to QPython, didn't find any reference about developing on a Mac or Pc, eventually deploying the code on the Android device. In contrast to developing the code itself on the Android device which seems very awkward specially for larger projects. I wish to write the code using a "normal" IDE such as IntelliJ using my Mac or Windows, eventually deploy it on an Android device, and execute with QPython. So the following questions come to mind: Best practice to transfer source code to an Android device with QPython installed (not using the QR Code which is limited to few KB's of code) Is it possible to develop QPython code on Mac/Windows namely using the SL4A (androidhelper) or is it strictly available on the Android device itself I have more questions but would be better to have the basic best practices. Ps. to give a context in relation to question #1 we need to rapidly deploy QPython code on many devices quickly, so copying the .py files manually is out of the question, and the QR code feature is very limited, so perhaps create a script that imports a script? (via git or HTTP)
QPython development environment using mac or windows
0.379949
0
0
2,614
33,511,936
2015-11-04T00:57:00.000
0
0
0
1
macos,python-2.7,homebrew,py2app
34,974,981
1
false
0
0
You say you've "installed too much via Homebrew" and need "Apple's Python to find everything" After installing Python modules into Homebrew's site-packages, you can make them importable from outside. First make a directory here (assuming 2.7): mkdir -p ~/Library/Python/2.7/lib/python/site-packages Then put a path file in it: echo 'import site; site.addsitedir("'$(brew --prefix)'/lib/python2.7/site-packages")' >> ~/Library/Python/2.7/lib/python/site-packages/homebrew.pth
1
0
0
I have a new MacBook w/ Yosemite. In an attempt to get OSC, Zeroconf, PySide and Kivy working, I installed too much via Homebrew. I've successfully (?) undone most of the damage, I think, and have installed all the Python modules so that Apple's Python finds everything... from the terminal window. However, now my code runs from the console, correctly importing a custom pythonosc module installed with "sudo python setup.py install", but when I package it with py2app it can no longer find pythonosc. (It found it previously with Python et al installed a la Homebrew.)
py2app worked with Homebrew, now it builds w/ module missing
0
0
0
361
33,512,329
2015-11-04T01:45:00.000
2
0
1
1
ipython,ipython-notebook,jupyter
33,672,094
1
false
0
0
There is not, this state is not stored anywhere, in part because it changes rapidly, and in part because there shouldn't be many, if any, actions that should be taken differently based on its value. It is only published via messages on the IOPub channel, which you can connect to via zeromq or websocket. If you want to know the busy/idle state of a kernel: connect to kernel (zmq or websocket) initial state is busy send a kernel_info request monitor status IOPub messages for busy/idle changes If the kernel is idle, it will handle the kernel_info request promptly and you will get a status:idle message.
1
5
0
I want to be able to detect from outside a notebook server if the kernel is busy or actively running some cell. Is there some way for me to print this state as a command line call or have it returned as the response to a http request.
How to get jupyter notebook kernel state?
0.379949
0
0
5,351
33,512,961
2015-11-04T02:57:00.000
0
0
1
0
python-2.7,python-3.x
33,513,006
2
false
0
0
Read it like this, "make sure each character is lower case". As in use a function to make sure each character is lower case.
1
0
0
I'm working on my first exercise from "Exercises in Programming Style" and I'm having great difficulty understanding the instructions. While reading an input file from Pride and Prejudice, one of the instructions says something that doesn't make any sense. What do the instructions mean when they said "Filter the characters, normalize to lowercase?" How do you filter the characters?
What do they mean by "filter the characters, normalize to lowercase?"
0
0
0
42
33,514,183
2015-11-04T05:15:00.000
0
0
0
0
python,sap
72,168,012
3
false
0
0
pyhdb's executemany() is faster than plain execute(), but for larger record sets it still takes significant time even if you divide them into chunks and use executemany(). For better and faster performance use parameter placeholders like values (?, ?, ?, ...) instead of string formatting like values('%s', '%s', '%s', ...). This saves a lot of time otherwise spent on heavy type conversion on the server side, so you get the response back faster and hence faster execution.
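A hedged sketch of the placeholder style described above, assuming a table T(a, b) already exists; the connection parameters are placeholders:

    import pyhdb

    connection = pyhdb.connect(host='hana.example.com', port=30015,
                               user='USER', password='PASSWORD')
    cursor = connection.cursor()

    rows = [(1, 'alpha'), (2, 'beta'), (3, 'gamma')]

    # '?' placeholders send the parameters as-is in one round trip per chunk,
    # instead of formatting every value into the SQL string with '%s'.
    cursor.executemany("INSERT INTO T (a, b) VALUES (?, ?)", rows)
    connection.commit()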
1
0
0
Is it possible to insert many rows into a table using one query in pyhdb? Because when I have millions of records to insert, inserting each record in a loop is not very efficient.
How to insert many rows into a table using pyhdb?
0
1
0
1,564
33,514,313
2015-11-04T05:26:00.000
4
0
0
0
python,selenium,cookies,phantomjs
33,516,899
1
true
0
0
The documentation suggests driver.cookies_enabled = False; you can use that.
1
0
0
I have searched for long time but I could not find how to disable cookies for phantomjs using selenium with python . I couldn't understand the documentation of phantomjs.Please someone help me.
disabling Cookies on phantomjs using selenium with python
1.2
0
1
730
33,516,192
2015-11-04T07:38:00.000
2
0
0
0
python-2.7,hash,compare,web-crawler
34,488,088
6
false
1
0
There is no universal solution. Use If-Modified-Since or HEAD when possible (usually ignored by dynamic pages). Use RSS when possible. Extract the last-modification stamp in a site-specific way (news sites have publication dates for each article, easily extractable via XPath). Only hash the interesting elements of the page (build a site-specific model), excluding volatile parts. Hash the whole content (useless for dynamic pages).
3
8
0
Basically I'm trying to run some code (Python 2.7) if the content on a website changes, otherwise wait for a bit and check it later. I'm thinking of comparing hashes, the problem with this is that if the page has changed a single byte or character, the hash would be different. So for example if the page display the current date on the page, every single time the hash would be different and tell me that the content has been updated. So... How would you do this? Would you look at the Kb size of the HTML? Would you look at the string length and check if for example the length has changed more than 5%, the content has been "changed"? Or is there some kind of hashing algorithm where the hashes stay the same if only small parts of the string/content has been changed? About last-modified - unfortunately not all servers return this date correctly. I think it is not reliable solution. I think better way - combine hash and content length solution. Check hash, and if it changed - check string length.
How to check if content of webpage has been changed?
0.066568
0
0
8,152
33,516,192
2015-11-04T07:38:00.000
2
0
0
0
python-2.7,hash,compare,web-crawler
34,488,574
6
false
1
0
Safest solution: download the content and create a checksum using a SHA512 hash of the content, keep it in the db and compare it each time. Pros: you are not dependent on any server headers and will detect any modification. Cons: too much bandwidth usage - you have to download all the content every time. Using HEAD: request the page using the HEAD verb and check the header tags: Last-Modified: the server should provide the last time the page was generated or modified. ETag: a checksum-like value which is defined by the server and should change as soon as the content changes. Pros: much less bandwidth usage and very quick update detection. Cons: not all servers provide and honour these headers; you need to fetch the real resource with a GET request if you find the data has changed. Using GET: request the page using the GET verb with conditional header tags: If-Modified-Since: the server will check whether the resource was modified since the given time and either return the content or return 304 Not Modified. Pros: still less bandwidth usage, a single trip to receive the data. Cons: again, not all resources support this header. Finally, a mix of the above solutions is probably the optimal way of doing this.
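A sketch combining the conditional-GET idea with a checksum fallback, using the requests library; the URL and how you persist the state between runs are up to you:

    import hashlib
    import requests

    def has_changed(url, last_modified=None, old_digest=None):
        headers = {}
        if last_modified:
            headers['If-Modified-Since'] = last_modified
        resp = requests.get(url, headers=headers)

        if resp.status_code == 304:                # server says: unchanged
            return False, last_modified, old_digest

        digest = hashlib.sha512(resp.content).hexdigest()
        changed = digest != old_digest             # fall back to comparing checksums
        return changed, resp.headers.get('Last-Modified'), digest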
3
8
0
Basically I'm trying to run some code (Python 2.7) if the content on a website changes, otherwise wait for a bit and check it later. I'm thinking of comparing hashes, the problem with this is that if the page has changed a single byte or character, the hash would be different. So for example if the page display the current date on the page, every single time the hash would be different and tell me that the content has been updated. So... How would you do this? Would you look at the Kb size of the HTML? Would you look at the string length and check if for example the length has changed more than 5%, the content has been "changed"? Or is there some kind of hashing algorithm where the hashes stay the same if only small parts of the string/content has been changed? About last-modified - unfortunately not all servers return this date correctly. I think it is not reliable solution. I think better way - combine hash and content length solution. Check hash, and if it changed - check string length.
How to check if content of webpage has been changed?
0.066568
0
0
8,152
33,516,192
2015-11-04T07:38:00.000
2
0
0
0
python-2.7,hash,compare,web-crawler
34,584,705
6
false
1
0
If you're trying to make a tool that can be applied to arbitrary sites, then you could still start by getting it working for a few specific ones - downloading them repeatedly and identifying exact differences you'd like to ignore, trying to deal with the issues reasonably generically without ignoring meaningful differences. Such a quick hands-on sampling should give you much more concrete ideas about the challenge you face. Whatever solution you attempt, test it against increasing numbers of sites and tweak as you go. Would you look at the Kb size of the HTML? Would you look at the string length and check if for example the length has changed more than 5%, the content has been "changed"? That's incredibly rough, and I'd avoid that if at all possible. But, you do need to weigh up the costs of mistakenly deeming a page unchanged vs. mistakenly deeming it changed. Or is there some kind of hashing algorithm where the hashes stay the same if only small parts of the string/content has been changed? You can make such a "hash", but it's very hard to tune the sensitivity to meaningful change in the document. Anyway, as an example: you could sort the 256 possible byte values by their frequency in the document and consider that a 2k hash: you can later do a "diff" to see how much that byte value ordering's changed in a later download. (To save memory, you might get away with doing just the printable ASCII values, or even just letters after standardising capitalisation). An alternative is to generate a set of hashes for different slices of the document: e.g. dividing it into header vs. body, body by heading levels then paragraphs, until you've got at least a desired level of granularity (e.g. 30 slices). You can then say that if only 2 slices of 30 have changed you'll consider the document the same. You might also try replacing certain types of content before hashing - e.g. use regular expression matching to replace times with "<time>". You could also do things like lower the tolerance to change more as the time since you last processed the page increases, which could lessen or cap the "cost" of mistakenly deeming it unchanged.
3
8
0
Basically I'm trying to run some code (Python 2.7) if the content on a website changes, otherwise wait for a bit and check it later. I'm thinking of comparing hashes, the problem with this is that if the page has changed a single byte or character, the hash would be different. So for example if the page display the current date on the page, every single time the hash would be different and tell me that the content has been updated. So... How would you do this? Would you look at the Kb size of the HTML? Would you look at the string length and check if for example the length has changed more than 5%, the content has been "changed"? Or is there some kind of hashing algorithm where the hashes stay the same if only small parts of the string/content has been changed? About last-modified - unfortunately not all servers return this date correctly. I think it is not reliable solution. I think better way - combine hash and content length solution. Check hash, and if it changed - check string length.
How to check if content of webpage has been changed?
0.066568
0
0
8,152
33,524,731
2015-11-04T14:45:00.000
3
0
0
0
mysql,python-3.x
33,524,987
1
true
0
0
Python tries hard to be forward compatible. A pure-python module written for 3.4 should work with 3.5; a binary package may work, you just have to try it and see.
1
3
0
All of the MySql modules I've found are compatible with Python 2.7 or 3.4, but none with 3.5. Any way I can use a MySql module with the newest Python version? ANSWER: The regular Python versions of mysql-connector-python would not work, but the rf version did. python -m pip install mysql-connector-python-rf
Will a MySql module for Python 3.4 work with 3.5?
1.2
1
0
1,498
33,525,279
2015-11-04T15:11:00.000
1
0
0
0
python,plot,geometry,ellipse,pyqtgraph
34,653,828
3
false
0
1
As a regular user of pyqtgraph I do not believe that it has functions for generating circles or ellipses. I believe that you would have to define the circle and ellipse functions yourself and generate the points of the circle or ellipse from those.
1
0
0
I would like plot circles or ellipses in Pyqtgraph gl.GLViewWidget(). However I did not find a function to do that. Anyone know the way to do that?
Pyqtgraph: How do I plot an ellipse or a circle
0.066568
0
0
2,153
33,525,524
2015-11-04T15:21:00.000
0
0
1
0
python,json,hadoop,mapreduce
33,633,027
1
false
0
0
Well, after much fiddling around with the script, I found that the problem disappears when doing the following: Using #!/usr/bin/env python2 at the top of the mapper and reducer files. This shebang specifically specifies version 2.X of the Python runtime to be used for execution. Using the following notation when issuing the Hadoop streaming command: hadoop jar hadoop-streaming.jar -files mapper.py,reducer.py -mapper 'python mapper.py' -reducer 'python reducer.py' -input <hdfs path to input file(s)> -output <hdfs path to output directory> After applying these modifications, the problem ceased to appear. Go figure!
1
0
0
I'm trying to run a simple python mapreduce script on Hadoop via streaming. The mapper part loads a json document, reads the text from a property and emits each word in the text with 1, to be later summed up by the reducer portion of the script. The code works perfectly fine outside Hadoop. Once submitted to Hadoop, the map fails with a "ValueError: No Json object could be decoded". The line of error is the one with the "json.loads()" function. I am completely stumped by this. The Hadoop ecosystem I'm trying to run on is the HortonWorks sandbox with Python 2.6.6 onboard. Did anybody else run into a similar problem?
Getting "ValueError: No Json object could be decoded" when running a Python MapReduce script via Hadoop Streaming
0
0
0
242
33,528,486
2015-11-04T17:39:00.000
0
0
1
1
python
33,530,241
1
false
0
0
readline returns a single line of text including the trailing new line until the stream is closed on the other end. Once all data has been read, it starts returning empty strings to let the caller know that the stream is closed and there will never be new data. Generally, while loops should break when an empty string is returned. You should also call proc.wait() at the end so that system information about the dead process is cleaned up.
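A minimal sketch of that loop shape; the child command here is just a placeholder:

    import subprocess

    proc = subprocess.Popen(['cat'], stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE,
                            universal_newlines=True)
    proc.stdin.write("hello\n")
    proc.stdin.close()               # closing stdin lets the child finish

    while True:
        line = proc.stdout.readline()
        if line == "":               # empty string means the stream is closed
            break
        print(line.rstrip("\n"))

    proc.wait()                      # reap the child process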
1
1
0
Im using Python 3.5, my code is as follows: Given a_sentence the program hangs during the while loop because line_read is "" so it never increments nl_c, therefore never exits the loop, I'm relatively new to using sub processes so I'm not sure where the problem is, whether it's not being read in correctly or the output. tl;dr Output from subprocess is "" when it should be an arbitrary string. Can someone point me in the right direction in getting the line_read = proc.stdout.readline() to be the line inputted above?
Process.stdout.readline() doesn't output correctly
0
0
0
784
33,528,576
2015-11-04T17:44:00.000
0
0
1
0
python,file,text
34,884,028
2
false
0
0
A bit more Pythonic way of doing the same thing Aneta suggested:
    with open(filename, 'r') as source:
        text = source.read()
    name_to_place = dict(line.split(',') for line in text.splitlines())
    while True:
        name = raw_input('Enter a name: ')
        print("%s lives in %s" % (name, name_to_place[name]))
1
0
0
So I have a text file that I've made up with a persons name followed by a comma and then a place where they could live. Yes I know its random but I need a way to understand this :) So here is the text file (called "namesAndPlaces.txt"): Bob,Bangkok Ellie,London Anthony,Beijing Michael,Boston Fred,Texas Alisha,California So I want the user to be able to enter a name into the program and then the program looks at the text file to see where they live and then prints it out to the user. How can I do this? Thanks Michael
Reading data from files from a user input
0
0
0
96
33,529,029
2015-11-04T18:09:00.000
0
0
0
0
jquery,python,selenium-webdriver,robotframework
43,115,077
1
false
1
0
From Selenium 3.0 - Gecko driver is required to run automation scripts in firefox Selenium version less than 3.0 works. Try with the following versions: robotframework (3.0.2) robotframework-selenium2library (1.8.0) selenium (2.53.1)
1
3
0
I am using Selenium2Library '1.7.4' and Robot Framework 2.9.2 (Python 2.7.8 on win32). If I try to give locator as jQuery, the following exception occurs: WebDriverException: Message: unknown error: jQuery is not defined. Please advise which version of Selenium2Library and 'Robot Framework' combination works to identify jQuery as a locator.
WebDriverException: Message: unknown error: jQuery is not defined error in robot framework
0
0
1
4,119
33,530,673
2015-11-04T19:45:00.000
4
0
0
1
python,linux,multithreading,asynchronous,tornado
33,535,454
3
true
0
0
For "normal" logging (a few lines per request), I've always found logging directly to a file to be good enough. That may not be true if you're logging all the traffic to the server. The one time I've needed to do something like that I just captured the traffic externally with tcpdump instead of modifying my server. If you want to capture it in the process, start by just writing to a file from the main thread. As always, measure things in your own environment before taking drastic action (IOLoop.set_blocking_log_threshold is useful for determining if your logging is a problem). If writing from the main thread blocks for too long, you can either write to a queue that is processed by another thread, or write asynchronously to a pipe or socket to another process (syslog?).
2
4
0
I am working on an application in which I may potentially need to log the entire traffic reaching the server. This feature may be turned on or off, or may be used when exceptions are caught. In any case, I am concerned about the blocking nature of disk I/O operations and their impact on the performance of the server. The business logic that is applied when a request is handled (mostly POST http requests), is asynchronous in such that every network or db calls are asynchronously executed. On the other hand, I am concerned about the delay to the thread while it is waiting for the disk IO operation to complete. The logged messages can be a few bytes to a few KBs but in some cases a few MBs. There is no real need for the thread to pause while data is written to disk, the http request can definitely complete at that point and there is no reason that the ioloop thread not to work on another task while data is written to disk. So my questions are: am I over-worried about this issue? is logging to standard output and later redirecting it to a file "good enough"? what is the common approach, or the one you found most practical for logging in tornado-based applications? even for simple logging and not the (extreme) case I outlined above? is this basically an ideal case for queuing the logging messages and consume them from a dedicated thread? Say I do offload the logging to a different thread (like Homer Simpson's "Can't Someone Else Do It?"), if the thread that performs the disk logging is waiting for the disk io operation to complete, does the linux kernel takes that point as an opportunity a context switch? Any comments or suggestion are much appreciated, Erez
Logging in an asynchronous Tornado (python) server
1.2
0
0
1,242
33,530,673
2015-11-04T19:45:00.000
0
0
0
1
python,linux,multithreading,asynchronous,tornado
60,500,844
3
false
0
0
" write asynchronously to a pipe or socket to another process (syslog?" How can it be? log_requestis a normal function - not a coroutine and all default python handlers are not driven by asyncio event loop so they are not truly asynchronous. This is imho one of the factors that make Tornado less performant than ie. aiohttp. Writing to the memory or using udp is fast but it is not async anyway.
2
4
0
I am working on an application in which I may potentially need to log the entire traffic reaching the server. This feature may be turned on or off, or may be used when exceptions are caught. In any case, I am concerned about the blocking nature of disk I/O operations and their impact on the performance of the server. The business logic that is applied when a request is handled (mostly POST http requests), is asynchronous in such that every network or db calls are asynchronously executed. On the other hand, I am concerned about the delay to the thread while it is waiting for the disk IO operation to complete. The logged messages can be a few bytes to a few KBs but in some cases a few MBs. There is no real need for the thread to pause while data is written to disk, the http request can definitely complete at that point and there is no reason that the ioloop thread not to work on another task while data is written to disk. So my questions are: am I over-worried about this issue? is logging to standard output and later redirecting it to a file "good enough"? what is the common approach, or the one you found most practical for logging in tornado-based applications? even for simple logging and not the (extreme) case I outlined above? is this basically an ideal case for queuing the logging messages and consume them from a dedicated thread? Say I do offload the logging to a different thread (like Homer Simpson's "Can't Someone Else Do It?"), if the thread that performs the disk logging is waiting for the disk io operation to complete, does the linux kernel takes that point as an opportunity a context switch? Any comments or suggestion are much appreciated, Erez
Logging in an asynchronous Tornado (python) server
0
0
0
1,242
33,535,853
2015-11-05T02:51:00.000
4
0
0
0
python,html
48,959,088
2
false
1
0
This is an image. Specifically a jpeg. Since it's a byte stream python prints it with b'.............' A jpeg starts with \xff\xd8\xff\
1
3
0
I'm using python to retrieve an HTML source, but what comes out looks like this. What is this, and why am I not getting the actual page source? b'\xff\xd8\xff\xe0\x00\x10JFIF\x00\x01\x01\x00\x00\x01\x00\x01\x00\x00\xff\xdb\x00C
Weird HTML code looks like this b'\xff\xd8\xff\xe0
0.379949
0
0
5,486
33,536,182
2015-11-05T03:30:00.000
7
0
0
0
python,sentiment-analysis,lstm,keras
34,154,972
2
true
0
0
So what you basically need to do is as follows: Tokenize sequences: convert the string into words (features). For example: "hello my name is georgio" to ["hello", "my", "name", "is", "georgio"]. Next, you want to remove stop words (check Google for what stop words are). This stage is optional; it may lead to faulty results but I think it is worth a try. Stem your words (features); that way you'll reduce the number of features, which will lead to a faster run. Again, that's optional and might lead to some failures, for example: if you stem the word 'parking' you get 'park', which has a different meaning. Next thing is to create a dictionary (check Google for that). Each word gets a unique number and from this point on we will use this number only. Computers understand numbers only, so we need to talk in their language. We'll take the dictionary from stage 4 and replace each word in our corpus with its matching number. Now we need to split our data set into two groups: training and testing sets. One (training) will train our NN model and the second (testing) will help us to figure out how good our NN is. You can use Keras' cross validation function. Next thing is defining the max number of features our NN can get as an input. Keras calls this parameter 'maxlen'. But you don't really have to do this manually, Keras can do that automatically just by searching for the longest sentence you have in your corpus. Next, let's say that Keras found out that the longest sentence in your corpus has 20 words (features) and one of your sentences is the example in the first stage, whose length is 5 (if we remove stop words it'll be shorter); in such a case we'll need to add zeros, 15 zeros actually. This is called pad sequences; we do that so that every input sequence has the same length.
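Putting the steps together, a hedged sketch: it assumes you already have a trained Keras model in a variable called model, and it builds its own tokenizer, whereas imdb_lstm uses the dataset's own word index, so treat the names and maxlen below as assumptions:

    from keras.preprocessing.text import Tokenizer
    from keras.preprocessing.sequence import pad_sequences

    texts = ["this film was a wonderful surprise",
             "terrible acting and a boring plot"]

    tokenizer = Tokenizer()                  # steps 1-5: words -> integer ids
    tokenizer.fit_on_texts(texts)            # in practice, fit on the training corpus
    sequences = tokenizer.texts_to_sequences(texts)

    maxlen = 20                              # steps 7-8: pad/truncate to a fixed length
    data = pad_sequences(sequences, maxlen=maxlen)

    predictions = model.predict(data)        # 'model' is your already-trained LSTM
    print(predictions)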
1
2
1
I have trained the imdb_lstm.py on my PC. Now I want to test the trained network by inputting some text of my own. How do I do it? Thank you!
Testing the Keras sentiment classification with model.predict
1.2
0
0
2,818