Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
24,944,626 | 2014-07-24T21:51:00.000 | 2 | 1 | 1 | 0 | python,python-3.x | 24,944,710 | 2 | false | 0 | 0 | They do two different things; compare bytes(1234) with struct.pack("!H", 1234). In Python 3, the first creates a bytes object containing 1,234 null bytes; the second produces a two-byte string with the (big-endian) value of the integer.
(Edit: struck out the irrelevant Python 2 definition of bytes(1234), which returned a string representation of the number.) | 2 | 3 | 0 | I'm just curious here, but I have been using bytes() to convert things to bytes ever since I learned Python. It wasn't until recently that I saw struct.pack(). I didn't bother learning how to use it because I thought it essentially did the same thing as bytes(). But it appears many people prefer to use struct.pack(). Why? What are the advantages of one over the other? | Python-bytes() vs struct.pack() | 0.197375 | 0 | 0 | 1,956 |
24,944,626 | 2014-07-24T21:51:00.000 | 3 | 1 | 1 | 0 | python,python-3.x | 24,944,749 | 2 | true | 0 | 0 | bytes() does literally what the name implies:
Return a new “bytes” object, which is an immutable sequence of
integers in the range 0 <= x < 256
struct.pack() does something very different:
This module performs conversions between Python values and C structs represented as Python strings
While for some inputs these might be equivalent, they are not at all the same operation. struct.pack() is essentially producing a byte string that represents a POD C struct in memory. It's useful for serializing/deserializing data. | 2 | 3 | 0 | I'm just curious here, but I have been using bytes() to convert things to bytes ever since I learned Python. It wasn't until recently that I saw struct.pack(). I didn't bother learning how to use it because I thought it essentially did the same thing as bytes(). But it appears many people prefer to use struct.pack(). Why? What are the advantages of one over the other? | Python-bytes() vs struct.pack() | 1.2 | 0 | 0 | 1,956 |
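To make the contrast drawn in the two answers above concrete, here is a quick sketch (Python 3, standard library only; not part of the original answers):

```python
import struct

# bytes(n) with an integer argument creates n zero-valued bytes (Python 3)
zeros = bytes(4)
assert zeros == b"\x00\x00\x00\x00"

# struct.pack() serializes the *value* 1234 into two big-endian bytes
packed = struct.pack("!H", 1234)
assert packed == b"\x04\xd2"  # 1234 == 0x04D2
assert struct.unpack("!H", packed)[0] == 1234
```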
24,944,869 | 2014-07-24T22:10:00.000 | 1 | 0 | 0 | 0 | database,django,mysql-python,django-database | 24,997,975 | 2 | false | 1 | 0 | For working inside a virtualenv you need to install:
pip install MySQL-python==1.2.5 | 2 | 0 | 0 | I am developing a system that will need to connect to a remote mysql database on the fly to do a specific task. To accomplish this, I am thinking of using the Mysql-db module in python. Since the remote database is not part of the system itself, I do not prefer to add it to the system's core database settings (DATABASES in settings.py). Is there a much better way to accomplish this aside from using the Mysql-db module? Is there a built-in django module that I can use? | Django Database Module | 0.099668 | 1 | 0 | 56 |
24,944,869 | 2014-07-24T22:10:00.000 | 0 | 0 | 0 | 0 | database,django,mysql-python,django-database | 24,997,774 | 2 | true | 1 | 0 | MySQLdb is the best way to do this. | 2 | 0 | 0 | I am developing a system that will need to connect to a remote mysql database on the fly to do a specific task. To accomplish this, I am thinking of using the Mysql-db module in python. Since the remote database is not part of the system itself, I do not prefer to add it to the system's core database settings (DATABASES in settings.py). Is there a much better way to accomplish this aside from using the Mysql-db module? Is there a built-in django module that I can use? | Django Database Module | 1.2 | 1 | 0 | 56 |
24,946,067 | 2014-07-25T00:13:00.000 | 3 | 0 | 1 | 0 | python,multiprocessing,python-3.4 | 26,242,741 | 2 | true | 0 | 0 | If you're using Eclipse and PyDev you need to include 'multiprocessing' as a forced builtin for the python interpreter. | 1 | 1 | 0 | I am porting a working application from Python 3.3 to 3.4 and have encountered a weird situation. The class multiprocessing.Process is not present in the download from python.org. Instead, in the multiprocessing.process module, I find a class multiprocessing.process.BaseProcess. The only trace of the old Process class that I can find is in the new multiprocessing.context module where multiprocessing.context.Process is basically a cover function for BaseProcess. None of this is mentioned in the documentation for Python 3.4. Can anyone tell me what is going on and possibly point me at some documentation. | Where is multiprocessing.Process | 1.2 | 0 | 0 | 1,105 |
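A small sanity check of the refactoring this row describes (a sketch assuming CPython 3.4+, where the public class is provided via the new multiprocessing.context machinery):

```python
import multiprocessing
import multiprocessing.process

# The public name still resolves; it is a thin cover over BaseProcess.
assert issubclass(multiprocessing.Process, multiprocessing.process.BaseProcess)
print(multiprocessing.Process)  # the context-provided Process class
```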
24,946,778 | 2014-07-25T01:44:00.000 | 0 | 1 | 0 | 1 | python,eclipse,shell,python-3.x,project | 24,947,778 | 2 | false | 0 | 0 | Based on the current information, I would suggest you run it this way on OS X:
1) Bring up the Terminal app
2) cd to the location where bla lives
3) run python bla/blah/projMain.py
Show us the stack trace if the above fails. | 1 | 0 | 0 | Eclipse can run a python project rather than just one .py file. Is it possible to run an entire project from the Python 3.x shell? I looked into it a little, but I didn't really find a way. I tried just running the .py file with the main using exec(open('bla/blah/projMain.py')) like you would any python file. All of my modules (including the main) are in one package, but when I ran the main I got a "no module named 'blah'" error (the package it is in). Also, as a side note, there is in fact an __init__.py and even a __pycache__ directory.
Maybe I didn't structure it correctly with Eclipse (or rather maybe Eclipse didn't structure it properly), but Eclipse can run it, so how can I with a Python 3.4.1 shell? Do I have to put something in __init__.py, perhaps, and then run that file? | Run Python project from shell | 0 | 0 | 0 | 880 |
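One way to do what this question asks from a plain Python shell is runpy plus a sys.path tweak; the sketch below builds a throwaway stand-in for the asker's hypothetical blah package so it is self-contained:

```python
import os
import runpy
import sys
import tempfile

# Build a tiny stand-in layout: <root>/blah/__init__.py and <root>/blah/main.py
root = tempfile.mkdtemp()
pkg = os.path.join(root, "blah")
os.mkdir(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "main.py"), "w") as f:
    f.write("import blah\nRESULT = 42\n")

sys.path.insert(0, root)  # the package's *parent* must be importable

# runpy executes the file as a script (__name__ == "__main__") and
# returns its resulting global namespace
globs = runpy.run_path(os.path.join(pkg, "main.py"), run_name="__main__")
print(globs["RESULT"])  # 42
```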
24,951,431 | 2014-07-25T08:35:00.000 | 3 | 0 | 0 | 0 | postgresql,roles,plpython | 24,958,698 | 2 | true | 0 | 0 | You're confusing operating system users and PostgreSQL users.
SECURITY DEFINER lets you run a function as the defining PostgreSQL user. But no matter which PostgreSQL user is running, the operating system user the back-end server runs as is always the same - usually the operating system user postgres.
By design, the PostgreSQL server cannot run operating system commands or system calls as other operating system users. That would be a nasty security hole.
However, if you want to permit that, you can. You could:
Grant the postgres user sudo rights to run some or all commands as other users; or
Write a program to run with setuid rights to do what you want and grant the postgres user the right to execute it.
In either case, the only way to run these programs is by launching them from an untrusted procedural language like plpython or plperl, or from a C extension.
It isn't clear why you want to set the file ownership like this in the first place, but I suspect it's probably not a great idea. What if the PostgreSQL client and server aren't even on the same computer? What if there's no operating system user for that PostgreSQL user, or the usernames are different? etc. | 1 | 0 | 0 | I made a little PostgreSQL trigger with Plpython. This triggers plays a bit with the file system, creates and delete some files of mine. Created files are owned by the "postgres" unix user, but I would like them to be owned by another user, let's say foobar. Triggers are installed with user "foobar" and executed with user "foobar" too.
Is there a way to execute the SQL trigger with the unix user 'foobar' with PostgreSQL or Plpython?
Should I use SET ROLE foobar ?
Playing with SECURITY INVOKER and SECURITY DEFINER does not seem to be good enough. | PostgreSQL trigger with a given role | 1.2 | 1 | 0 | 1,158 |
24,954,106 | 2014-07-25T10:53:00.000 | 0 | 0 | 1 | 0 | python,installation,kivy,suds | 25,000,617 | 2 | true | 0 | 1 | Are you by any chance running the Python 3 version of Kivy? Suds looks like it is abandondonware (last release in 2010) and likely does not have a Python3 port. You may have luck with the Python2.7 version of Kivy and pip installing suds, but keep in mind you will be relying on an apparently unsupported module (suds) for your project. | 1 | 1 | 0 | I am pretty new to python, my background is with VB visual studios, I am trying to develop a app in which I want to consume WCF service. Found Suds is the required python module.
I am using Kivy 1.8.0 and Eclipse with PyDev on Windows 7 64-bit. Could you please point me in the correct direction on how to install the package? I found no exe, and I have run the setup.py from suds but it did not work.
Any advice/direction towards a tutorial is of great help. | Kivy with suds - module installation | 1.2 | 0 | 0 | 396 |
24,956,390 | 2014-07-25T13:02:00.000 | 0 | 0 | 0 | 0 | python,elasticsearch,firewall,python-3.4 | 25,247,538 | 1 | true | 0 | 0 | It was the firewall caching that was causing the havoc! Once the caching was disabled for certain endpoints, the issue resolved itself. Painful! | 1 | 1 | 0 | For example, when I have a list of ids and want to search one by one to see if the document already exists or not, one of two things happens:
First -> the first search request returns the correct doc, and all the calls after that return the same doc even though I was searching for different ids.
Second -> the first search request returns the correct doc, and all the calls after that return an empty hits array even though I was searching for different ids. The search metadata does tell me that "total" was one for this request, but no actual hits are returned.
I have been facing this weird behaviour with ElasticSearch.py and using raw http requests as well.
Could it be firewall that is causing some sort of weird caching behaviour?
Is there anyway to force the results?
Any ideas are welcome at this point.
Thanks in advance! | ElasticSearch doesn't return hits when making consecutive calls - Python | 1.2 | 0 | 1 | 112 |
24,957,764 | 2014-07-25T14:12:00.000 | 1 | 0 | 1 | 0 | python,windows,sua | 24,960,688 | 1 | true | 0 | 0 | No, installing Subsystem for Unix-based Applications (Windows Services for Unix) doesn't change the behaviour of a binary distribution of Python in any way. The version of Python you're trying to use would have to be specifically built to support the Windows POSIX subsystem in order to take any advantage of it.
Microsoft's POSIX subsystem is no different than Cygwin in this respect. If you download and install the standard Windows binary distribution of CPython, its behaviour won't change if you later install Cygwin. You'd have to download and install the Cygwin version of CPython if you want your Python program to take advantage of Cygwin's Unix emulation environment. Note also that the Cygwin version of Python loses much, if not all, of the Windows-specific functionality of the standard Windows version of CPython.
You should also be aware that many popular third-party Python modules are dependent on C extension modules. These modules have to be built for the specific version of Python you're using. While many of these modules support the standard Windows CPython distributions, and a few support Cygwin, you'd need to compile these yourself for the POSIX subsystem.
Most of the libraries depend on whether the OS is POSIX compliant, or Win32.
When using the SUA package with Windows-7, does this allow python to enable/alter those
Posix-dependent features?
If so; Fully? Partially? Indeterminant/Untested?
If yes to any of the three previous cases, does python adopt the new posix-os behavior automatically, or does it assume standard win32-os (meaning it must be configured, or perhaps even compiled, to enable the Posix modes)?
Notes
I am currently using the SUA utils/SDK provided by Microsoft, with no additional third party at the moment.
For the record, I have used Cygwin/MinGW, and do find them very useful, but for the scope of this question, lets just say they cannot be deployed (even though I probably will later). I am trying to discover how deeply SUA really integrates, and whether or not that has any bearing on typical python installations. | Python OS-dependent libraries: Windows 7 SUA | 1.2 | 0 | 0 | 190 |
24,958,237 | 2014-07-25T14:36:00.000 | 0 | 0 | 0 | 1 | python,pydev | 24,958,918 | 2 | true | 0 | 0 | Yes, you can do that. Just type into the console whatever commands you want :). I usually have to right click then
Debug As >> Python run
PyDev is a little bit quirky, but you get used to it. | 1 | 0 | 0 | So far I have used the Komodo IDE for Python development, but I'm now testing Eclipse with PyDev. Everything works fine, but there is one Komodo feature that I'm missing.
In Komodo I can inspect the running application in a debugger shell. I.e. after hitting a breakpoint I can not only read the content of variables, but I can execute arbitrary Python code (e.g. changing the value of variables) and then continue program execution.
PyDev has also some interactive shell during debugging, but I can only read variables and not change their content. Is this feature not available in PyDev or am I missing something here?
Many thanks,
Axel | Has PyDev an interactive shell (during debugging) as in Komodo? | 1.2 | 0 | 0 | 50 |
24,958,833 | 2014-07-25T15:06:00.000 | 15 | 0 | 1 | 0 | python,anaconda,conda | 43,454,479 | 4 | false | 0 | 0 | Before you run the conda update --all command, first update conda itself with the conda update conda command if you haven't updated it for a long time. It happened to me (Python 2.7.13 on Anaconda 64-bit). | 2 | 216 | 0 | Is there a way (using conda update) that I can list outdated packages and select or bulk update (compatible) packages in Anaconda?
It doesn't make much sense updating the packages individually as there are hundreds of them. | Bulk package updates using Conda | 1 | 0 | 0 | 118,886 |
24,958,833 | 2014-07-25T15:06:00.000 | 356 | 0 | 1 | 0 | python,anaconda,conda | 24,965,191 | 4 | true | 0 | 0 | You want conda update --all.
conda search --outdated will show outdated packages, and conda update --all will update them (note that the latter will not update you from Python 2 to Python 3, but the former will show Python as being outdated if you do use Python 2). | 2 | 216 | 0 | Is there a way (using conda update) that I can list outdated packages and select or bulk update (compatible) packages in Anaconda?
It doesn't make much sense updating the packages individually as there are hundreds of them. | Bulk package updates using Conda | 1.2 | 0 | 0 | 118,886 |
24,960,514 | 2014-07-25T16:40:00.000 | 3 | 0 | 1 | 0 | python,pip | 24,960,542 | 1 | true | 0 | 0 | Pip only installs packages that are not installed yet.
This does mean that even if a new version is available, old packages will be kept. You can pass the --upgrade flag to prevent that behavior and install the latest versions (but then pip will call pypi for every package in your requirements file, in order to identify its latest version).
An alternative is to have version specifiers in your requirements file (e.g. mypackage==1.2.3), so that if you change your requirements file and use new versions, pip will pick those up without the --upgrade flag. | 1 | 1 | 0 | I am wondering about this command pip install -r requirements.txt. Does pip install modules if they are not satisfied or does it try and install anyway even if the modules are already there? If it is the latter, than is there any way to write a shell script which checks if dependencies are satisfied and if not invoke pip install? | Python - Pip install requirements only if dependencies are not satisifed | 1.2 | 0 | 0 | 1,670 |
24,963,423 | 2014-07-25T19:47:00.000 | 2 | 0 | 1 | 0 | python,timedelta | 24,963,537 | 1 | false | 0 | 0 | Although the reviewer makes a valid point that in Python you can't tell by looking at the name of an object what type it is, I don't think this argument is applicable in this specific case - as this would be the case whether you pass a timedelta or int. Either way you won't be able to tell it's type. So I would ignore this argument.
If you already have a timedelta available, why convert it in various places. That's unnecessary overhead.
Consistency: if the rest of the code is passing ints and converting to timedeltas then I would stick with the current convention. If there is no such precedent in the code then I would pass timedeltas. Being consistent is important as other programmers who look at this codebase have certain expectations. Best not to annoy them!
Comment your code well. Use docstrings to list the type of the parameter.
In essence, if there are no precedents in the code of using an int, I see no problems with using timedelta. | 1 | 0 | 0 | I'm contributing to an open source project, and in there, I'm querying for objects that were updated recently, so I have a function that accepts a max_age object. The person reviewing my code has a slight preference for passing around time in seconds. While it's not a big issue, I see this as a learning opportunity for me. Here's the conversation we had. For some context, we started out discussing the name of the argument.
code reviewer:
max_age is fine I think, I had misread. I think max_age_timedelta is
also good (more explicit). Thinking about it more, is there a good
reason to not use seconds? As in, pass in an integer? Are there
significant speed issues?
me:
no, just ease of use issues
code reviewer:
I'm mildly in favor of passing objects between functions that are as simple as possible.
me:
I'm querying based on dates, and timedeltas make date math easier.
code reviewer:
Is it tricky to create a timedelta internally, with the number of seconds as input?
Or do you use the timedelta in so many places that it's going to be tedious to do that each time?
(nb: 'internally' meant 'internal to the function doing the querying')
me:
It's not that hard, I could certainly just construct a timedelta internally.
But a timedelta object carries the semantics of being a length of time.
Integers and other basic data types get their semantics through argument and variable names.
code reviewer:
Yep, I understand. However, in Python, stuff is untyped, so the main method of signaling we have is the variable name.
I.e. it's not easy to just tell what the type of an object is by looking at the code, as it is in C++.
What would you do in a case like this? | Is it better to accept timedeltas or number of seconds? | 0.379949 | 0 | 0 | 64 |
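Either convention from the exchange above can be accepted at the function boundary and normalized once; a minimal standard-library sketch (normalize_max_age is a hypothetical helper, not from the project under review):

```python
from datetime import datetime, timedelta

def normalize_max_age(max_age):
    """Accept either a timedelta or a plain number of seconds."""
    if isinstance(max_age, timedelta):
        return max_age
    return timedelta(seconds=max_age)

# Date math stays easy regardless of which form the caller passes
cutoff = datetime(2014, 7, 25, 12, 0) - normalize_max_age(3600)
assert cutoff == datetime(2014, 7, 25, 11, 0)
assert normalize_max_age(timedelta(hours=1)) == normalize_max_age(3600)
```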
24,965,646 | 2014-07-25T22:52:00.000 | -2 | 0 | 0 | 0 | python,qt,python-3.x,pyqt | 24,965,943 | 2 | false | 0 | 1 | You have the right method if you want to directly access the data in memory. It is a C array, and has no python object associated, so it's a little tricky. You need: the start address from bits(), the depth() and the byteCount(). ctypes.string_at would give you access to the data there, which you would pass to a numpy array using the known bits-per-pix and image shape.
I won't write more, because this kind of thing is likely to cause a crash (and I have no test data right now) - can you not load the image in the usual way? | 1 | 0 | 0 | How can I get raw base64 PNG data from a PyQt4.QtGui.QImage object? The only method I can find that seems like it would help is bits(), but that just returns a sip.voidptr object and I have no idea where to go from there. | Convert PyQt4.QtGui.QImage object to base64 PNG data | -0.197375 | 0 | 0 | 1,434 |
24,966,803 | 2014-07-26T02:01:00.000 | 0 | 0 | 0 | 0 | python-3.x,pygame | 24,967,111 | 2 | false | 0 | 1 | You can use the pygame.APPACTIVE event. From the documentation for the pygame.display.iconify:
Then the display mode is set, several events are placed on the pygame event queue. pygame.QUIT is sent when the user has requested the program to shutdown. The window will receive pygame.ACTIVEEVENT events as the display gains and loses input focus [This is when the window is minimized]. If the display is set with the pygame.RESIZABLE flag, pygame.VIDEORESIZE events will be sent when the user adjusts the window dimensions. Hardware displays that draw direct to the screen will get pygame.VIDEOEXPOSE events when portions of the window must be redrawn.
Look at the second example in furas' answer below, but don't use the polling approach of the first example for something like this, you don't want to spend time every frame trying to check if the window has been minimized. | 1 | 1 | 0 | I was playing Terraria the other day and in a new update they make the game pause when it's minimized. Obviously it's written in a different programme to python, but I was wondering if it was possible to replicate the effects. | Is there a way in pygame to make a programme pause when it's minimized? | 0 | 0 | 0 | 1,031 |
24,966,984 | 2014-07-26T02:41:00.000 | 3 | 0 | 0 | 0 | python,scikit-learn | 25,012,484 | 2 | false | 0 | 0 | There is no way around finding out which possible values your categorical features can take, which probably implies that you have to go through your data fully once in order to obtain a list of unique values of your categorical variables.
After that it is a matter of transforming your categorical variables to integer values and setting the n_values= kwarg in OneHotEncoder to an array corresponding to the number of different values each variable can take. | 1 | 2 | 1 | I have a large dataset which I plan to do logistic regression on. It has lots of categorical variables, each having thousands of features which I am planning to use one hot encoding on. I will need to deal with the data in small batches. My question is how to make sure that one hot encoding sees all the features of each categorical variable during the first run? | One-hot encoding of large dataset with scikit-learn | 0.291313 | 0 | 0 | 3,004 |
24,971,400 | 2014-07-26T13:17:00.000 | 1 | 0 | 0 | 0 | python,numpy | 24,971,512 | 1 | true | 0 | 0 | Long story short: when you need to do huge mathematical operations, like vector multiplications and so on, which would otherwise require writing lots of loops that leave your code unreadable yet still inefficient, you should use Numpy.
Few key benefits:
NumPy arrays have a fixed size at creation, unlike Python lists (which can grow dynamically). Changing the size of an ndarray will create a new array and delete the original. So it is more memory efficient than the other.
The elements in a NumPy array are all required to be of the same data type, and thus will be the same size in memory. The exception: one can have arrays of (Python, including NumPy) objects, thereby allowing for arrays of different sized elements.
NumPy arrays facilitate advanced mathematical and other types of operations on large numbers of data. Typically, such operations are executed more efficiently and with less code than is possible using Python’s built-in sequences.
A growing plethora of scientific and mathematical Python-based packages are using NumPy arrays; though these typically support Python-sequence input, they convert such input to NumPy arrays prior to processing, and they often output NumPy arrays. In other words, in order to efficiently use much (perhaps even most) of today’s scientific/mathematical Python-based software, just knowing how to use Python’s built-in sequence types is insufficient - one also needs to know how to use NumPy arrays.
- Vector operations come in handy in Numpy. You don't need to write explicit loops, yet the code stays pythonic.
- Object-oriented approach | 1 | 1 | 1 | I'm a newbie to python. And recently I heard some people say that numpy is a good module for dealing with huge data.
I'm curious what numpy can do for us in daily work.
As far as I know, most of us are not scientists or researchers, so under what circumstances can numpy bring us benefit?
Can you share a good practice with me? | When should I use numpy? | 1.2 | 0 | 0 | 669 |
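A small before/after in the spirit of the answer's vector-operations point (assuming numpy is installed; the numbers are made up):

```python
import numpy as np

prices = [19.99, 5.50, 3.25]
qty = [2, 10, 4]

# Plain Python: an explicit loop over paired lists
totals_loop = [p * q for p, q in zip(prices, qty)]

# NumPy: one fixed-type array operation, no Python-level loop
totals_np = np.array(prices) * np.array(qty)

assert totals_np.tolist() == totals_loop
print(round(float(totals_np.sum()), 2))  # 107.98
```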
24,972,238 | 2014-07-26T14:51:00.000 | 1 | 1 | 0 | 1 | bash,emacs,path,pythonpath,.bash-profile | 24,980,728 | 1 | false | 0 | 0 | The issue might be that emacs, like many other programs you run, reads your login shell rc files, such as ~/.bash_login or ~/.profile, but not ~/.bashrc, whereas your terminal also reads your user shell rc file: ~/.bashrc. | 1 | 1 | 0 | Usually I use my .bashrc file to load some functions for my bash environment, and I call these functions (which I created based on some frameworks I use) from the terminal. So, I play around with variables such as PATH and PYTHONPATH when I use the functions, depending on the environment I'm working on.
So far so good with the terminal. The problem is that when I use emacs, these functions and the environment variables that I set with them don't exist. .bashrc is not read by emacs, and therefore the functions loaded by .bashrc don't work. I would like them to work.
Any ideas? | Emacs, bash, bashrc, functions and paths | 0.197375 | 0 | 0 | 594 |
24,973,568 | 2014-07-26T17:19:00.000 | 0 | 0 | 1 | 0 | python,django,eclipse,interpreter | 24,973,729 | 1 | false | 1 | 0 | On the menubar go to Window -> Preferences -> Pydev -> Interpreters -> Python
Remove the interpreter and click on Quick Auto Config.
That should do the trick. Make sure django is installed first. | 1 | 1 | 0 | I'm an eclipse noob.
After adding PyDev to eclipse, I try to create a "PyDev Django Project", but I get the "Django not found" error.
I heard that you have to remove the python interpreter from eclipse, then add it again. But I don't know how to do that.
Can someone show me how to remove/add the python interpreter in eclipse?
It is greatly appreciated.
Brent. | How do I remove/add the python interpreter from eclipse? | 0 | 0 | 0 | 1,345 |
24,975,287 | 2014-07-26T20:32:00.000 | 2 | 0 | 1 | 0 | python,python-3.x | 24,975,313 | 1 | true | 0 | 0 | Python is duck-typed so you will never get the exact API for each variable that you can get from a strongly-typed (if you excuse me for using "strong-typed" as the opposite of "duck-typed") language such as Java.
However, most IDEs such as PyCharm and Eclipse PyDev do offer intelligent auto-completion that you would expect from a proper IDE. | 1 | 0 | 0 | Is there a way/IDE to make python show the api for every type? (int, file etc...) just like in Eclipse for java where you write varname. and it shows you all the possible functions for it.
Thanks! | Python api for every type | 1.2 | 0 | 0 | 33 |
24,979,640 | 2014-07-27T09:28:00.000 | 1 | 1 | 0 | 0 | python,processing,paperjs,codea,pythonista | 38,641,689 | 1 | false | 0 | 0 | The ui module actually includes a lot of vector drawing functions, inside a ui.ImageContext. ui.ImageContext is a thin wrapper around part of one of the Objective-C APIs (maybe CALayer?) The drawing methods are designed to operate inside the draw method of a custom view class, but you can present these things in other contexts using a UIImageContext, from which you can get a static image. | 1 | 4 | 0 | I've taken to creative coding on my iPad and iPhone using Codea, Procoding, and Pythonista. I really love the paper.js Javascript library, and I'm wondering how I might have the functionality that I find in paper.js when writing in Python.
Specifically, I'd love to have the vector math and path manipulation that paper.js affords. Things like finding the intersection of two paths or binding events to paths (on click, mouse move, etc).
There's an ImagePath module provided by Pythonista that does some path stuff but it's not as robust as paper.js (it seems).
Any ideas? | A paperjs-equivalent for python (specifically, Pythonista for iOS)? | 0.197375 | 0 | 0 | 901 |
24,980,103 | 2014-07-27T10:39:00.000 | 0 | 1 | 0 | 0 | python,google-app-engine,twitter,webapp2 | 25,003,619 | 1 | false | 1 | 0 | Are you talking about being logged in to Twitter.com or your app? If you have received oAuth access tokens by authenticating an app, then logging out of twitter.com won't 'log you out' of any apps, the tokens will remain valid until the user revokes the access. | 1 | 0 | 0 | I have managed to use oauth authentication and add a Sign in with Twitter functionality to a Google App Engine web app.
How should I verify, during site navigation, whether the user is still logged in to Twitter? | Sign in with Twitter: how to verify the current user is still logged in | 0 | 0 | 0 | 36 |
24,981,184 | 2014-07-27T12:56:00.000 | 20 | 0 | 0 | 0 | python,flask,filenames | 24,982,070 | 2 | true | 1 | 0 | Found the answer: request.files['upload'].filename gives the file name and extension in flask. | 2 | 6 | 0 | Let's say that I have <input name="upload" type="file"> and I am uploading picture.jpg. The question is how can I get the file name + extension? In other words, is the correct expression request.files.filename or request.upload.filename? | get input file name and file extension using flask | 1.2 | 0 | 0 | 6,570 |
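Once the filename string is in hand, the standard library splits off the extension; a sketch (the 'upload' field name comes from the question in this row):

```python
import os

filename = "picture.jpg"  # e.g. request.files['upload'].filename in Flask
name, ext = os.path.splitext(filename)
assert (name, ext) == ("picture", ".jpg")

# splitext keeps only the last suffix, which matters for double extensions
assert os.path.splitext("archive.tar.gz") == ("archive.tar", ".gz")
```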
24,982,125 | 2014-07-27T14:50:00.000 | -1 | 0 | 1 | 0 | python,virtualenv,pythonpath | 24,982,206 | 4 | false | 0 | 0 | I assume that your other modules are at predictable path (relative to $0)
We can compute the absolute path of $0:
os.path.realpath(sys.argv[0])
then derive your module path from it and append it:
sys.path.append(something) | 1 | 3 | 0 | I'm developing several different Python packages with my team. Say that we have ~/src/pkg1, ~/src/pkg2, and ~/src/pkg3. How do we add these to PYTHONPATH without each of us having to manage dot-files?
We could add, say, ~/src/site/sitecustomize.py, which is added once to PYTHONPATH, but is it "guaranteed" that there won't be a global sitecustomize.py.
virtualenv seems like the wrong solution, because we don't want to have to build/install the packages after each change. | Managing PYTHONPATH for development environment | -0.049958 | 0 | 0 | 1,555 |
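The sitecustomize idea in this question boils down to extending sys.path once for the whole team; a minimal sketch (the ~/src/pkg* paths are the asker's example layout):

```python
# Contents of a shared sitecustomize.py (picked up automatically when its
# directory, e.g. ~/src/site, is the one entry everyone puts on PYTHONPATH)
import os
import sys

for name in ("pkg1", "pkg2", "pkg3"):
    path = os.path.expanduser(os.path.join("~", "src", name))
    if path not in sys.path:  # avoid duplicate entries on re-import
        sys.path.append(path)
```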
24,985,127 | 2014-07-27T20:05:00.000 | 3 | 0 | 1 | 0 | python,algorithm,grid | 24,985,330 | 3 | true | 0 | 0 | You can hash all coordinate points (e.g. using dictionary structure in python) and then for each coordinate point, hash the adjacent neighbors of the point to find pairs of points that are adjacent and "merge" them. Also, for each point you can maintain a pointer to the connected component that that point belongs to (using the dictionary structure), and for each connected component you maintain a list of points that belong to the component.
Then, when you hash a neighbor of a point and find a match, you merge the two connected component sets that the points belong to and update the group pointers for all new points in the union set. You can show that you only need to hash all the neighbors of all points just once and this will find all connected components, and furthermore, if you update the pointers for the smaller of the two connected components sets when two connected component sets are merged, then the run-time will be linear in the number of points. | 1 | 8 | 1 | Given a list of X,Y coordinate points on a 2D grid, what is the most efficient algorithm to create a list of groups of adjacent coordinate points?
For example, given a list of points making up two non-adjacent squares (3x3) on a grid (15x15), the result of this algorithm would be two groups of points corresponding to the two squares.
I suppose you could do a flood fill algorithm, but this seems overkill and not very efficient for a large 2D array of say 1024 size. | Efficiently grouping a list of coordinates points by location in Python | 1.2 | 0 | 0 | 7,135 |
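One concrete realization of the accepted answer's hashing idea above — set membership for O(1) neighbor lookups, merging points into a component as they are found — runs in time linear in the number of points:

```python
def group_adjacent(points):
    """Group 4-connected grid points into connected components."""
    remaining = set(points)          # O(1) membership tests via hashing
    groups = []
    while remaining:
        stack = [remaining.pop()]    # seed a new component
        group = []
        while stack:
            x, y = stack.pop()
            group.append((x, y))
            for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if nb in remaining:  # hash lookup, never a full grid scan
                    remaining.remove(nb)
                    stack.append(nb)
        groups.append(group)
    return groups

# Two separate 2-point blocks on the grid -> two groups
groups = group_adjacent([(0, 0), (0, 1), (5, 5), (5, 6)])
assert len(groups) == 2
assert sorted(len(g) for g in groups) == [2, 2]
```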
24,987,488 | 2014-07-28T01:49:00.000 | 0 | 0 | 0 | 0 | android,python,django,api | 24,988,398 | 6 | false | 1 | 0 | If I understand your question correctly, you have a Django installation on your laptop and you are able to hit a specific page/service locally using "localhost:8000/polls/link/2/" and it works fine. Now you want to access it from your mobile. Correct?
Replace the "localhost" with IP address of your laptop. For example, "10.0.0.1:8000/polls/link/2/"
You will find your IP address in System Preferences -> Network | 2 | 0 | 0 | I am developing an android app. For local testing I have created a django server on my laptop. My problem is I am not able to call django apis from app code. For example, if I want to call any django server api from desktop I write "localhost:8000/polls/link/2/". Now how to replace this "localhost part of url" if calling same api from mobile. And also my desktop is connected to internet by the same mobile hotspot. So basically both desktop and phone are on same network.
My ifconfig command on desktop shows
And my desktop is Mac and mobile is Samsung core duo | Calling Django server api from android phone | 0 | 0 | 0 | 3,171 |
24,987,488 | 2014-07-28T01:49:00.000 | 0 | 0 | 0 | 0 | android,python,django,api | 47,381,501 | 6 | false | 1 | 0 | Firstly run the django server by typing python manage.py runserver 0.0.0.0:8000. Then check the ip of your private network i.e wlan ip using ifconfig command then simply replace your localhost text with that ip while calling API.
But above all make sure that your phone and lappy should be in same network. | 2 | 0 | 0 | I am developing an android app. For local testing I have created a django server on my laptop. My problem is I am not able to call django apis from app code. For example, if I want to call any django server api from desktop I write "localhost:8000/polls/link/2/". Now how to replace this "localhost part of url" if calling same api from mobile. And also my desktop is connected to internet by the same mobile hotspot. So basically both desktop and phone are on same network.
My ifconfig command on desktop shows
And my desktop is Mac and mobile is Samsung core duo | Calling Django server api from android phone | 0 | 0 | 0 | 3,171 |
24,988,880 | 2014-07-28T05:27:00.000 | -2 | 0 | 1 | 1 | python,python-2.7,python-idle | 24,988,918 | 2 | false | 0 | 0 | Right click on any .py file
Click Open With...
Click Choose a default program...
If IDLE is on the list, click it.
Otherwise:
Click Browse, and find the IDLE program
Click OK and voila! | 1 | 8 | 0 | I am on Windows 7. I have Python 2.7.8 (64 bit) installed. Today, I changed the default program that opens .py files from IDLE to Windows Command Processor and stupidly selected the checkbox that said "always use the selected program to open this kind of file".
What I want to do is change my default program back to IDLE.
When I attempt to change it back to IDLE, I go to Control Panel\Programs\Default Programs\Set Associations and select the .py name and click Change Program. I do see python.exe but selecting that does nothing. I then use the "Browse" button to navigate to C:\Python27\Lib\idlelib but don't know if I should select idle.py, idle.pyw, idle.bat or some other IDLE program that will force the default program to be IDLE!
Nothing happens after I select one of these.
How do I make IDLE be the default program that opens .py files and now disassociate Windows Command Processor from being the default? | Set Python IDLE as Default Program to Open .py Extensions | -0.197375 | 0 | 0 | 22,665 |
24,989,488 | 2014-07-28T06:24:00.000 | 2 | 1 | 0 | 0 | php,python,cookies,web,login | 24,989,747 | 1 | true | 0 | 0 | Cookie are just cookies and browsers don't record (and possibly can't) how they were created. So when you create one cookie using PHP and then you would like to read the same cookie using any other language that support cookie, you should do it without a problem.
Of course you need to remember about cookie domain and path. They should be set properly if you want to access your cookie without a problem. | 1 | 2 | 0 | I'm busy creating a login system to the admin area of my new personal website. The site backend is written entirely in Python, owing to my knowledge of the language.
I have been looking at ways of tracking a user once they login so that the rest of the site knows they are logged in.
I can't get a definitive answer online (unless I'm not looking hard enough or am searching the wrong thing) as to whether all cookies are the same and accessible from all languages. My tests have proved inconclusive; either they are not or I am doing it wrong, but some clarification would be appreciated.
For example, if I create a cookie in Python with cookie.SimpleCookie() in the http.cookies module, is there a way of loading and accessing the value of this cookie in PHP?
Thanks in advance for any help,
Ilmiont | Are all cookies created equal? | 1.2 | 0 | 0 | 52 |
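On the Python side, the stdlib http.cookies module both emits and parses standard cookie headers, so a cookie set by PHP (with matching name, path and domain, as the answer notes) parses the same way. A small sketch; the cookie name and value are made up:

```python
from http.cookies import SimpleCookie

# Build a cookie as a Python backend would emit it...
out = SimpleCookie()
out["session_user"] = "ilmiont"
out["session_user"]["path"] = "/"       # path/domain must match across both backends
header = out.output(header="Set-Cookie:")

# ...and parse an incoming Cookie header, regardless of which language set it.
incoming = SimpleCookie()
incoming.load("session_user=ilmiont; theme=dark")
print(incoming["session_user"].value)   # -> ilmiont
```

The browser only ever sends back `name=value` pairs, which is why the language that created the cookie is invisible to the reader.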
24,991,571 | 2014-07-28T08:52:00.000 | 1 | 0 | 1 | 1 | python,abaqus | 25,001,330 | 1 | false | 0 | 0 | on windows using abaqus cae will call the most recent version of abaqus installed. If you want to run on a specific version use this call instead abq6121 cae nogui = scriptname.py | 1 | 0 | 0 | I want to run a python script without Abaqus GUI and this already works well with the command:
abaqus cae nogui = scriptname.py
As I want to include a subroutine, I have to run it in Abaqus version 12-1, but I also have version 13-1 installed (running the script in CAE, I always got an error while using 13-1 but not with 12-1).
With the command above I don't know which version will be used. Is there a way to specify the used version in the cmd? | Run script using a certain version of Abaqus | 0.197375 | 0 | 0 | 1,939 |
24,993,048 | 2014-07-28T10:16:00.000 | 2 | 0 | 0 | 0 | javascript,python,angularjs,client-server | 24,993,506 | 2 | true | 1 | 0 | It doesn't have much to do with Python, really. Your javascript code is executed on the client's brower, and all it can do is issuing HTTP requests (synchronous or asynchronous). At this point which webserver / technology / language is used to handle the HTTP request is totally irrelevant. So, from the client javascript code POV, you are not "calling a Python function", you are sending an HTTP request and handling the HTTP response.
If your web host doesn't let you run django (or any wsgi-compliant script) then you'll probably have to either use plain CGI (warning: very primitive techno) or migrate to PHP (no comment). Or find another hosting that doesn't live in the past ;) | 1 | 0 | 0 | this question have been asked numerous times, I know and I'm sorry if by ignorance I didn't get the answers.
I have a hosting plan which doesn't allow me to install django, which was really nice to call an rest api easily with the routing settings.
What I want is to be able to call a python function from javascript code doing a get/post (I'm using AngularJs, but it would be the same making an ajax get/post.)
Let's say I have a js controller 'Employee' and a view 'CreateEmployee'.
From my javascript view, I can call my CreateEmployee() on the js controller, now my question is, how can I call a specific function (let's say) def CreateEmployee(params...) on my .py file?
All I found is making a get/post on my .py file, but didn't find how to invoke a specific function.
I probably don't get the python and client/server communication paradigm, I've been coding on asp.net WebForms for a long time, and since I can't use frameworks like Django I'm stuck.
Thanks | Call python FUNCTION from javascript | 1.2 | 0 | 0 | 4,050 |
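A minimal stdlib sketch of the plain-CGI dispatch idea from the answer: route a query-string action parameter to a specific Python function. The function names and the action/name parameters are made up for illustration:

```python
from urllib.parse import parse_qs

def create_employee(name):
    return {"created": name}

def delete_employee(name):
    return {"deleted": name}

# Explicit whitelist of callable endpoints (never eval() an action name).
HANDLERS = {"CreateEmployee": create_employee, "DeleteEmployee": delete_employee}

def dispatch(query_string):
    """Map ?action=CreateEmployee&name=Bob onto a specific Python function call."""
    params = parse_qs(query_string)
    action = params.get("action", [""])[0]
    handler = HANDLERS.get(action)
    if handler is None:
        return {"error": "unknown action"}
    return handler(params.get("name", [""])[0])
```

In a real CGI script the query string would come from the QUERY_STRING environment variable (or the POST body), and the return value would be serialised to JSON for the AngularJS side.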
24,996,265 | 2014-07-28T13:23:00.000 | 0 | 0 | 0 | 0 | java,python,command-line,couchbase,couchbase-view | 24,997,007 | 2 | true | 1 | 0 | So it appears after watching how their Web UI console works, that they're essentially reusing the "create a design document," i.e. the PUT command, to overwrite the existing design document. I have no idea how this would work on a production machine while it is running or the implications of such actions. The documentation, for a commercial product, in this regards is lacking. | 1 | 1 | 0 | I can query the views of a design document. I can create a brand new design document. I can delete a design document. I can add a view to a design document I'm in the process of creating but...
How do you add a view to an already existing design document without going through their web UI? Is it even possible, or do you always have to create a brand new design doc just to modify it?
For reference, I've looked at the "couchbase-cli" tool, the Python SDK, the Java SDK and even the REST API itself. Nowhere have I found a means of adding a view to a design document and persist that view in Couchbase without having to create a design document. Did I miss something from the documentation? | Couchbase: Add View to existing Design Document | 1.2 | 0 | 0 | 563 |
24,997,946 | 2014-07-28T14:52:00.000 | 0 | 0 | 0 | 0 | mysql,python-2.7,unicode,sqlalchemy,cherrypy | 25,016,312 | 1 | false | 0 | 0 | SQLAlchemy provides Unicode or UnicodeText for your purposes.
Also don't forget about u'text' | 1 | 0 | 0 | I am using cherrypy along with sqlalchemy-mysql as backend. I would like to know the ways of dealing with UNICODE strings in cherrypy web application. One brute-force way would be to convert all string coming in as parameters into UNICODE (and then decoding them to UTF-8) before storing them to database. But I was wondering if there is any standard way of handling UNICODE characters in a web application. I tried cherrypy's tools.encode but it doesn't seem to work for me (may be I haven't understood it properly yet). Or may be there are standard python libraries to handle UNICODEs which I could just import and use. What ways should I look for? | how to handle UNICODE characters in cherrypy-sqlalchemy-mysql application? | 0 | 1 | 0 | 153 |
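The answer names SQLAlchemy's Unicode/UnicodeText column types; the underlying principle (pass str objects and let the driver handle the encoding) can be shown with the stdlib sqlite3 module. A SQLAlchemy/MySQL setup behaves the same way provided the connection charset is UTF-8; this is only a hedged stand-in sketch:

```python
import sqlite3

# Round-trip a non-ASCII string through a database without any manual
# encode()/decode() step: pass str in, get str back.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (body TEXT)")
conn.execute("INSERT INTO notes VALUES (?)", ("héllo wörld 你好",))
(row,) = conn.execute("SELECT body FROM notes").fetchone()
print(row)
```

With SQLAlchemy the equivalent is declaring the column as `Unicode(length)` or `UnicodeText` and always handing it str (u'' in Python 2) values.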
25,001,824 | 2014-07-28T18:27:00.000 | 1 | 0 | 0 | 1 | python,linux,bash,gdb,named-pipes | 25,047,902 | 1 | true | 0 | 0 | My recommendation is not to do this. Instead there are two more supportable ways to go:
Write your code in Python directly in gdb. Gdb has been extensible in Python for several years now.
Use the gdb MI ("Machine Interface") approach. There are libraries available to parse this already (not sure if there is one in Python but I assume so). This is better than parsing gdb's command-line output because some pains are taken to avoid gratuitous breakage -- this is the preferred way for programs to interact with gdb. | 1 | 0 | 0 | Here's a general example of what I need to do:
For example, I would initiate a back trace by sending the command "bt" to GDB from the program. Then I would search for a word such as "pardrivr" and get the line number associated with it by using regular expressions. Then I would input "f [line_number_of_pardriver]" into GDB. This process would be repeated until the correct information is eventually extracted.
I want to use named pipes in bash or python to accomplish this.
Could someone please provide a simple example of how to do this? | Use named pipes to send input to program based on output | 1.2 | 0 | 0 | 289 |
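A stdlib sketch of the regex step described in the question: find the frame number for a given symbol in backtrace output, so you can feed "f <n>" back to gdb. The sample backtrace text below is made up to look like gdb's "bt" output, not real output:

```python
import re

# A few lines shaped like gdb's "bt" output (addresses and files are invented).
bt_output = """\
#0  0x0000dead in helper () at util.c:12
#1  0x0000beef in pardrivr (n=3) at pardrivr.c:47
#2  0x0000cafe in main () at main.c:90
"""

def frame_number_of(symbol, text):
    """Return the gdb frame number whose line mentions `symbol`, or None."""
    for line in text.splitlines():
        m = re.match(r"#(\d+)\s", line)
        if m and symbol in line:
            return int(m.group(1))
    return None

frame = frame_number_of("pardrivr", bt_output)
print(f"f {frame}")   # the command you would then send back to gdb: "f 1"
```

That said, the answer's advice stands: the gdb Python API or the MI interface is far more robust than parsing the human-readable output like this.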
25,001,953 | 2014-07-28T18:35:00.000 | 2 | 1 | 0 | 0 | php,python,session | 25,001,993 | 1 | false | 0 | 0 | You can assign to and read anything you want from the $_SESSION array. It's just a regular array like any other in PHP, except for two things:
a) It's a superglobal
b) IF you've called session_start(), then PHP will auto-populate the array from whatever's in session storage (files, db, etc...), and auto-save the contents of the array upon script exit or calling session_write_close(). | 1 | 1 | 0 | So I've been working on porting a python tester to PHP, but I'm fairly new to PHP still. I understand there is a session command within PHP, and I've read the documentation as well as other questions that have come up here in stackoverflow that are close to it, but not quite what I'm looking for.
So my question is whether there is something similar to sess = requests.Session() from Python in PHP, i.e. is there something I could pass around in PHP just like I did in Python?
EDIT: So I've re-read the documentations for both the python Request package and Sessions for PHP. And I think the meat of my question is if there is a way to have a session object in PHP that holds persistent parameters across POST and GET Request? And to further explain, my main problem is that I have certain POST and GET endpoints that require a login, but even after using the login POST first, I still receive a 401 error code after.
Example Code:
$current->httpPost($accountLoginURL, $accountLoginPostData);
$current->httpPost($followFriend, $followFriendData);
Even though the first line gives me a 200, the second gives me a 401. | Is it possible to pass in the session as a variable like in Python for PHP? | 0.379949 | 0 | 0 | 212
25,003,108 | 2014-07-28T19:45:00.000 | 0 | 0 | 0 | 0 | python,ajax,django | 25,031,586 | 1 | true | 1 | 0 | Turns out it was a "feature" of the client-side AJAX package we were using (flow.js) and we just had to increase chunkSize. | 1 | 3 | 0 | I'm making a Django app in which a user can upload a file (an image) using AJAX.
While developing locally, I saw that PIL, which I used to process the image after upload, had a bug. After investigating I found out it's because PIL is getting the file data cut off. It's only getting the first 1MB of the file, which is why it's failing. (The file is around 3MB.)
Why could this be, and how can I solve it? My immediate suspicion is that runserver, which I use locally, caps AJAX uploads for some reason. But I can't be sure. And if it does, I don't know how to make it stop.
Can anyone help? | Django cutting off uploaded file | 1.2 | 0 | 0 | 86 |
25,003,730 | 2014-07-28T20:26:00.000 | 0 | 0 | 0 | 1 | python,ssh,expect,spawn,pexpect | 25,003,819 | 1 | false | 0 | 0 | The prompt is needed after writing a command, while waiting for it to finish. You have to tell "readline" what it should expect (in your case "testuser:"). | 1 | 0 | 0 | I have a custom shell which looks like below.
testuser:
How do I set a custom PROMPT attribute to log in to a shell whose prompt looks like the above?
I'm reusing the hive.py code from samples section and set original_prompt to :.
original_prompt='[:]'
The result is it skips the host as it fails to connect with
ERROR could not synchronize with original prompt
What am I missing?
Thanking in anticipation. | pexpect to login to custom shell | 0 | 0 | 0 | 203 |
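A stdlib-only sketch of the likely culprit: pexpect (including the hive.py sample) matches original_prompt as a regex against the buffered remote output, so a "testuser:"-style prompt needs an end-anchored pattern rather than the bare class "[:]", which also matches colons mid-line (e.g. in "Last login:") and can desynchronise the handshake. The exact pattern below is an illustrative assumption, not pexpect's own default:

```python
import re

# End-anchored prompt pattern: one word followed by a colon at the very end
# of the output buffer, optionally with trailing whitespace.
prompt_re = re.compile(r"[\w-]+:\s*$")

print(bool(prompt_re.search("testuser: ")))                          # True
print(bool(prompt_re.search("Last login: Mon Jul 28\ntestuser:")))   # True
print(bool(prompt_re.search("connecting to host\n")))                # False
```

Passing a pattern like this as original_prompt (instead of '[:]') is what I would try first when hitting "could not synchronize with original prompt".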
25,003,748 | 2014-07-28T20:27:00.000 | 1 | 1 | 1 | 1 | python,linux,shell,command,executable | 25,003,823 | 2 | false | 0 | 0 | You should add the folder that contains the script to your system's $PATH variable (I assume you're on Linux). This variable contains all of the directories that are searched looking for a specific command. You can add to it by typing PATH=/path/to/folder:$PATH. Alternately, you need to move the script into a folder that's already in the $PATH variable (which is generally a better idea than messing with system variables). | 2 | 0 | 0 | I have a python script, which has multiple command line options. I want to make this script runnable without having to type "python myscript.py" and without having to be in the same directory as the script. For example, if one installs git on linux, regardless of which directory the user is in, the user can do "git add X, etc..". So, an example input I would like is "myscript -o a,b,c -i" instead of "python myscript.py -o a,b,c -i". I already added "#! /usr/bin/env python" to the top of my script's code, which makes it executable when I type "./myscript", however I don't want the ./, and I want this to work from any directory. | Making python script executable from any directory | 0.099668 | 0 | 0 | 4,406 |
25,003,748 | 2014-07-28T20:27:00.000 | 0 | 1 | 1 | 1 | python,linux,shell,command,executable | 25,003,818 | 2 | false | 0 | 0 | Your script needs to be in a location searchable via your PATH. On Unix/Linux systems, the generally accepted location for locally-produced programs and scripts that are not part of the system is /usr/local/bin. So, make sure your script is executable by running chmod +x myscript, then move it to the right place with sudo mv myscript /usr/local/bin (while in the directory containing myscript). You'll need to enter an admin's password, then you should be all set. | 2 | 0 | 0 | I have a python script, which has multiple command line options. I want to make this script runnable without having to type "python myscript.py" and without having to be in the same directory as the script. For example, if one installs git on linux, regardless of which directory the user is in, the user can do "git add X, etc..". So, an example input I would like is "myscript -o a,b,c -i" instead of "python myscript.py -o a,b,c -i". I already added "#! /usr/bin/env python" to the top of my script's code, which makes it executable when I type "./myscript", however I don't want the ./, and I want this to work from any directory. | Making python script executable from any directory | 0 | 0 | 0 | 4,406 |
25,004,564 | 2014-07-28T21:21:00.000 | 1 | 0 | 0 | 0 | python,numpy,scipy,linear-algebra,numerical-methods | 25,005,318 | 1 | true | 0 | 0 | For symmetric sparse matrix eigenvalue/eigenvector finding, you may use scipy.sparse.linalg.eigsh. It uses ARPACK behind the scenes, and there are parallel ARPACK implementations. AFAIK, SciPy can be compiled with one if your scipy installation uses the serial version.
However, this is not a good answer, if you need all eigenvalues and eigenvectors for the matrix, as the sparse version uses the Lanczos algorithm.
If your matrix is not overwhelmingly large, then just use numpy.linalg.eigh. It uses LAPACK or BLAS and may use parallel code internally.
If you end up rolling your own, please note that SciPy/NumPy does all the heavy lifting with different highly optimized linear algebra packages, not in pure Python. Due to this the performance and degree of parallelism depends heavily on the libraries your SciPy/NumPy installation is compiled with.
(Your question does not reveal if you just want to have parallel code running on several processors, or on several computers. Also, the size of your matrix has a big impact on the best method. So, this answer may be completely off-the-mark.) | 1 | 1 | 1 | Is anyone aware of an implemented version (perhaps using scipy/numpy) of parallel exact matrix diagonalization (equivalently, finding the eigensystem)? If it helps, my matrices are symmetric and sparse. I would hate to spend a day reinventing the wheel.
EDIT:
My matrices are at least 10,000x10,000 (but, preferably, at least 20 times larger). For now, I only have access to a 4-core Intel machine (with hyperthreading, so 2 processes per core), ~3.0Ghz each with 12GB of RAM. I may later have access to a 128-core node ~3.6Ghz/core with 256GB of RAM, so single machine/multiple cores should do it (for my other parallel tasks, I have been using multiprocessing). I would prefer for the algorithms to scale well.
I do need exact diagonalization, so scipy.sparse routines are not be good for me (tried, didn't work well). I have been using numpy.linalg.eigh (I see only single core doing all the computations).
Alternatively (to the original question): is there an online resource where I can find out more about compiling SciPy so as to insure parallel execution? | Parallel exact matrix diagonalization with Python | 1.2 | 0 | 0 | 3,153 |
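A minimal numpy.linalg.eigh call as mentioned in the answer above; whether it actually uses multiple cores depends entirely on the BLAS/LAPACK build NumPy links against, and the 2x2 matrix here is a toy stand-in for the 10,000x10,000 case:

```python
import numpy as np

# Small symmetric matrix; for a large dense symmetric matrix the same call
# dispatches to LAPACK, which may be threaded in a suitably built NumPy.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigenvalues, eigenvectors = np.linalg.eigh(A)   # eigenvalues in ascending order
print(eigenvalues)                               # [1. 3.]
```

For this matrix the exact eigenvalues are 2 - 1 = 1 and 2 + 1 = 3, and each column of eigenvectors satisfies A v = lambda v.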
25,005,943 | 2014-07-28T23:28:00.000 | 0 | 0 | 1 | 0 | python,windows | 25,006,282 | 3 | false | 0 | 0 | Insert input() in the last line. It will make the program wait for input. Until input occurs, the program's window will stay open. If you press any key and then Enter, it will close. | 1 | 1 | 0 | I am new to all programming and I just started to get interested in learning how to program. So to do so I started with what most people consider the easiest language: Python.
The problem I am having right now though is that if I say to Python print("Hello!"), save it in a file, and then run it, a black window opens up and closes right away. I just do not understand why it is doing this. | Window closes immediately after running program | 0 | 0 | 0 | 7,183 |
25,006,539 | 2014-07-29T00:37:00.000 | -1 | 0 | 0 | 0 | python | 25,006,589 | 1 | false | 1 | 0 | You have a couple of options these are the only ones I can think of:
fork (sorry, but this may be the easiest/quickest),
wait for a new version for the older package, or
change it to not use zipped eggs (I don't really understand this though).
[EDIT] Could you potentially trick one package into thinking that it is using its required version? I don't know the specifics, but from my understanding you could use a virtual machine.
There could be other options that I don't know of (it's actually probable), but that is all I could think of; hopefully you find a solution though!
My app now imports 2 packages, which each import the requests library. The two authors have pegged the version of requests to different versions. One wants 2.1.0 , the other wants 2.3.0.
Automated tests appear to pass on both. My app appears to function perfectly on both.
My app won't start, however, because of the requirements. From what I can understand on my development environment, it's because of the version number being pegged in a requirements.txt file. [ In dev we have PasteDeploy + Waitress, an exception is raised in PasteDeploy; in production we have uwsgi ]
The only ways I can think of handling this, is to:
fork the projects
change the system to not use zipped eggs, and run a patch.
both are going to be a hassle to maintain, and add a lot of complexity to the build/deploy process.
does anyone have other suggestions? | handling different required package versions | -0.197375 | 0 | 0 | 49 |
25,007,098 | 2014-07-29T01:53:00.000 | 2 | 0 | 1 | 0 | macos,ipython,keyboard-shortcuts,ipython-notebook,jupyter-notebook | 25,007,221 | 3 | false | 0 | 0 | You first press Ctrl and m (don't press the minus key), that will put the interface in command mode. For deletion you then have to press d twice. | 2 | 11 | 0 | I had a hard time figuring the keyboard shortcut.
Is it true that I should press Ctrl-m together and then press another key, such as d, to delete one cell? I tried it, but it did not work for me. I also tried without -, but it still does not work for me :(
I am using a Mac. | IPython/Jupyter notebook shortcut not working on Mac | 0.132549 | 0 | 0 | 8,696 |
25,007,098 | 2014-07-29T01:53:00.000 | 4 | 0 | 1 | 0 | macos,ipython,keyboard-shortcuts,ipython-notebook,jupyter-notebook | 27,818,613 | 3 | false | 0 | 0 | On my mac, I use fn key, instead of ctrl, to make shortcuts work in iPython notebook (in Safari). For example,
fn + d + d deletes a cell (d + d works too)
fn + x cuts a cell
fn + c copies a cell
fn + z undoes an action,
and so on. As already noted above, you must press Esc before applying any of these shortcuts; that is, you first hit Esc to make a cell "grey", then you press fn + x to cut a cell. | 2 | 11 | 0 | I had a hard time figuring the keyboard shortcut.
Is it true that I should press Ctrl-m together and then press another key, such as d, to delete one cell? I tried it, but it did not work for me. I also tried without -, but it still does not work for me :(
I am using a Mac. | IPython/Jupyter notebook shortcut not working on Mac | 0.26052 | 0 | 0 | 8,696 |
25,012,108 | 2014-07-29T09:30:00.000 | 0 | 0 | 1 | 0 | python,character,word-count | 25,012,260 | 4 | false | 0 | 0 | If all the words written in Latin letters are in English, you could use regular expressions. | 1 | 1 | 0 | Let's say that I have a paragraph with different languages in it, like:
This is paragraph in English. 这是在英国段。Это пункт на английском языке. این بند در زبان انگلیسی است.
I would like to calculate what percentage (%) of this paragraph includes English words. So would like to ask how to do that in python. | How to calculate percentage of english words in a paragraph using Python | 0 | 0 | 0 | 3,245 |
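A rough stdlib heuristic along the lines of the answer above: count the share of whitespace-separated tokens made only of ASCII letters. Note this detects Latin-script tokens, not actual English, so it is a sketch rather than a real language detector:

```python
import re

def english_word_fraction(text):
    """Percentage of tokens made of ASCII letters (plus optional trailing punctuation)."""
    tokens = text.split()
    if not tokens:
        return 0.0
    latin = sum(1 for t in tokens if re.fullmatch(r"[A-Za-z]+[.,!?]?", t))
    return 100.0 * latin / len(tokens)

para = "This is paragraph in English. 这是在英国段。 Это пункт."
print(round(english_word_fraction(para), 1))   # 62.5
```

For real mixed-language text you would want a proper language-identification library rather than a character-class test, since French or German words would also pass this filter.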
25,018,343 | 2014-07-29T14:51:00.000 | 0 | 1 | 0 | 0 | python,antlr4 | 51,549,667 | 2 | false | 0 | 0 | The example you used was badly written. input is the name of a built-in Python function. Give the Lexer a FileStream and it might work. | 2 | 0 | 0 | Is it possible to use the antlr4 python runtime with python 2.6, or is the minimum required version python 2.7? I want to use it on CentOS 6.3, which comes with python 2.6.6. If it is not possible, is it known which features of python 2.7 are used? | use antlr4 python runtime with python 2.6 | 0 | 0 | 0 | 805
25,020,482 | 2014-07-29T16:35:00.000 | 6 | 0 | 0 | 0 | python,scikit-learn,feature-selection | 25,036,779 | 1 | true | 0 | 0 | You can't. The feature selection routines in scikit-learn will consider the dummy variables independently of each other. This means they can "trim" the domains of categorical variables down to the values that matter for prediction. | 1 | 6 | 1 | My question is i want to run feature selection on the data with several categorical variables. I have used get_dummies in pandas to generate all the sparse matrix for these categorical variables. My question is how sklearn knows that one specific sparse matrix actually belongs to one feature and select/drop them all? For example, I have a variable called city. There are New York, Chicago and Boston three levels for that variable, so the sparse matrix looks like:
[1,0,0]
[0,1,0]
[0,0,1]
How can I inform sklearn that these three "columns" actually belong to one feature, which is city, so it won't end up choosing New York while deleting Chicago and Boston?
Thank you so much! | How can sklearn select categorical features based on feature selection | 1.2 | 0 | 0 | 3,078 |
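Since sklearn's selectors score each dummy column independently, the practical workaround is to track which dummy columns came from which original feature yourself and apply the keep/drop decision per feature. A stdlib sketch assuming pandas.get_dummies-style names such as "city_Boston"; the column names are illustrative:

```python
from collections import defaultdict

columns = ["age", "city_New York", "city_Chicago", "city_Boston", "income"]
original = {"city"}          # the categorical features you dummified

# Group each dummy column back under its parent feature by name prefix.
groups = defaultdict(list)
for col in columns:
    prefix = col.split("_", 1)[0]
    key = prefix if prefix in original else col
    groups[key].append(col)

print(dict(groups))
# {'age': ['age'], 'city': ['city_New York', 'city_Chicago', 'city_Boston'], 'income': ['income']}
```

With this mapping you can, for example, keep all of a feature's dummies if any one of them is selected, or aggregate their scores before deciding.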
25,023,050 | 2014-07-29T19:03:00.000 | 1 | 0 | 1 | 0 | python,memory | 25,023,098 | 2 | false | 0 | 0 | In your first example s is still in memory until the garbage collector deletes it, so your second example is more efficient in terms of memory use. However, considering that's a very small portion of RAM, in many cases it's better to go for readability (the first example looks better).
Hope this helps. | 1 | 2 | 0 | This has been bugging me for a while. I'm wondering about the comparative memory-efficiency of assigning variables and calling methods. Consider, for example:
s = "foo"
x = s.lower()
Versus
x = "foo".lower()
Which one of these is more efficient in terms of memory use?
This is obviously a trivial example, but it illustrates what I'm wondering about.
There are many instances in which we define some variable var1 = foo, and then define a second variable var2 = var1.method(). Does this total process require more memory than just defining var2 = foo.method()? | Python: Memory-Efficiency of Assigning Variables and Calling Methods | 0.099668 | 0 | 0 | 58 |
25,024,010 | 2014-07-29T20:06:00.000 | 2 | 1 | 0 | 1 | python | 25,026,388 | 1 | true | 0 | 0 | The current python executable is always available as sys.executable, which should give full path (but you can ensure this using os.path functions). | 1 | 3 | 0 | I have a series of unit tests that are meant to run in two contexts:
1) On a buildbot server
2) in developer's home environments
In both our development procedure and in the buildbot server we use virtualenv. The tests run fine in the developer environments, but with buildbot the tests are being run from the python executable in the virtualenv without activating the virtualenv.
This works out for most tests, but there are a few that shell out to run scripts, and I want them to run the scripts with the virtualenv's python executable. Is there a way to pull the path to the current python executable inside the tests themselves to build the shell commands that way? | How can I get the path to the calling python executable | 1.2 | 0 | 0 | 313 |
25,024,416 | 2014-07-29T20:30:00.000 | 0 | 0 | 1 | 0 | javascript,python,macos,ide | 25,026,782 | 2 | false | 0 | 0 | I use PyCharm. It's a heavyweight IDE, so expect more features than you probably want if you're just getting started. It has a very good integrated debugger. You'll be able to break into both your Python and Javascript. Further, you'll see a pretty nice productivity jump with all the editing support like auto complete and intellisense. My advice is to stick with print() and logging as long as you can. For me getting a firm grasp of packages, python environments, virtualenv, command line tricks and git all before committing to the PyCharm IDE helped me adopt it with more confidence I was getting the value of all the integration. | 1 | 1 | 0 | I looking for python/JavaScript IDE where I can put breakpoints. Currently I'm using coderunner but I can not put break points. I'll really appreciate you recommendations | mac os x python/JavaScript IDE where I can use break points | 0 | 0 | 0 | 103 |
25,024,437 | 2014-07-29T20:32:00.000 | -1 | 0 | 0 | 0 | python,excel,xlwt | 25,032,965 | 1 | false | 0 | 0 | You will get data from queries, right? Then you will write them to an excel by xlwt. Just before writing, you can sort them. If you can show us your code, then maybe I can optimize them. Otherwise, you have to follow wnnmaw's advice, do it in a more complicate way. | 1 | 1 | 1 | I'm using python to write a report which is put into an excel spreadshet.
There are four columns, namely:
Product Name | Previous Value | Current Value | Difference
When I am done putting in all the values I then want to sort them based on Current Value. Is there a way I can do this in xlwt? I've only seen examples of sorting a single column. | Sorting multiple columns in excel via xlwt for python | -0.197375 | 1 | 0 | 1,410 |
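Since xlwt itself has no sort facility, the answer's approach is to sort the row data in Python before writing. A sketch; the data and the commented sheet.write calls are illustrative:

```python
# Rows shaped as (Product Name, Previous Value, Current Value); the
# Difference column can be computed as row[2] - row[1] when writing.
rows = [
    ("Widget", 10.0, 14.5),
    ("Gadget", 8.0, 3.2),
    ("Sprocket", 5.0, 9.9),
]
rows.sort(key=lambda r: r[2], reverse=True)   # highest Current Value first

# With an xlwt worksheet you would then write them out row by row:
# for r, row in enumerate(rows, start=1):     # row 0 left for the header
#     for c, value in enumerate(row):
#         sheet.write(r, c, value)
print([r[0] for r in rows])   # ['Widget', 'Sprocket', 'Gadget']
```

Sorting on multiple columns works the same way with a tuple key, e.g. key=lambda r: (r[2], r[1]).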
25,024,919 | 2014-07-29T21:00:00.000 | 1 | 0 | 0 | 0 | python,linux,scp | 25,025,029 | 2 | false | 0 | 0 | Consider:
1) copy file to all servers, into a temporary path on the same drive as the final location. If this fails just restart it.
2) across all servers, move the file from temp path into the final path. On Linux and probably other OSes, moving files on a single drive is fast, durable, and probably atomic.
This isn't atomic, but #2 is fast and durable. You're guaranteed each server will have either the old file, or the new file, but not a combination. | 2 | 0 | 0 | I need to SCP the same file to multiple servers, transactionally. It will overwrite the previous version of the file. I need to guarantee that every server has the new file, or every server has the old file. It is acceptable for the servers to appear to be in an intermediate state temporarily, as long as it ends in consistent state.
Is there a good library for this in python? I have been googling around, and most transactional libraries seem to be designed with databases in mind.
If there isn't what is a good way to go about this?
Added: I should say "as transactional as possible". I recognise that the situation make true transactionality difficult, but the goal is to get as close as possible, and scream for the sysadmin if they go south anyway. | Transactionally SCPing to multiple servers in python | 0.099668 | 0 | 0 | 80 |
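The "copy to a temp path, then rename" step from the first answer, sketched for one host; os.replace is atomic on POSIX as long as source and target live on the same filesystem. Over SCP you would first copy into the temp path on each server, then run something like this (or a plain mv) remotely:

```python
import os
import tempfile

def atomic_write(path, data):
    """Write `data` to `path` so readers see either the old or the new file."""
    directory = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=directory)   # temp file on the SAME filesystem
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())                # make the bytes durable first
        os.replace(tmp, path)                   # atomic rename on POSIX
    except BaseException:
        os.unlink(tmp)                          # clean up the temp file on failure
        raise
```

This only makes each individual host atomic; cross-host "all or nothing" still needs the coordination the answers describe.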
25,024,919 | 2014-07-29T21:00:00.000 | 2 | 0 | 0 | 0 | python,linux,scp | 25,025,058 | 2 | false | 0 | 0 | Real transactions are probably best handled with databases that reach the replication level that you need
While there is probably not something that will magically handle this for you, you can follow some best practices and end up with OK results. I don't really know where to start, but let's try some of these:
Paramiko can be used to handle the mechanics of an ssh session from python if that is important to you.
Making files somewhat atomic is typically handled by copying your new file to the same storage subsystem and then renaming it. Linux allows this to be atomic using the renameat system call.
Within python you can have a try except finally block such that at least your running application is aware of what the state of things are.
Messaging Queues (like zeroMQ) may help in being able to reliably send messages between computers.
None of this really addresses what happens when actual problems occur which is why if you want real transactions use a database. | 2 | 0 | 0 | I need to SCP the same file to multiple servers, transactionally. It will overwrite the previous version of the file. I need to guarantee that every server has the new file, or every server has the old file. It is acceptable for the servers to appear to be in an intermediate state temporarily, as long as it ends in consistent state.
Is there a good library for this in python? I have been googling around, and most transactional libraries seem to be designed with databases in mind.
If there isn't, what is a good way to go about this?
Added: I should say "as transactional as possible". I recognise that the situation makes true transactionality difficult, but the goal is to get as close as possible, and scream for the sysadmin if they go south anyway. | Transactionally SCPing to multiple servers in python | 0.197375 | 0 | 0 | 80
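The copy-then-rename approach from the answers above can be sketched per server in Python. This is a hedged sketch (os.replace is only atomic when the staged copy and the destination sit on the same filesystem), not the asker's actual deployment code:

```python
import os
import tempfile

def atomic_write(path, data):
    """Replace path so readers see either the old bytes or the new, never a mix."""
    dirname = os.path.dirname(os.path.abspath(path))
    # Stage in the same directory so the final rename stays on one filesystem.
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # make the new bytes durable before the swap
        os.replace(tmp, path)     # atomic rename on POSIX
    except BaseException:
        os.unlink(tmp)
        raise
```

Each host would run this after the SCP upload lands in a staging path, so the visible file flips in a single step; it does not, of course, make the multi-server set atomic as a whole.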
25,025,299 | 2014-07-29T21:27:00.000 | 1 | 0 | 1 | 1 | python | 25,025,400 | 1 | false | 0 | 0 | Creating a new directory is effectively the same as writing a small amount of data. It adds an inode.
The only way mkdir (or os.makedirs) should fail is if the directory already exists - otherwise the directory will always be created. In terms of the data being buffered - it's unlikely that this would happen - even journaled filesystems will sync out pretty regularly.
If you're having non-deterministic behavior, just wrap your directory creation / writing a file into that directory inside a try / except / finally that makes a few efforts? But really - the need for such code hints at something much more sinister and is likely a bigger issue. | 1 | 1 | 0 | I am running Python with MPI on a supercomputing cluster. I am getting strange nondeterministic behavior that I think is a result of I/O complications that are not present on the single machines I'm used to working with.
One of the things my code does is create directories using os.makedirs somewhat frequently. I also know that I generally should not write small amounts of data to the filesystem -- this can end up with the data getting stuck in some buffer and not written for a long time. I suspect this may be happening with my directory creation calls, and then later code tries to write to files inside the directory before it exists. Two questions:
is creating a new directory effectively the same thing as writing a small amount of data?
When forcing data to be written, I use flush and os.fsync. These require a file object. Is there an equivalent to make sure the directory has been created? | Force directory to be created in Python | 0.197375 | 0 | 0 | 147 |
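On the second question, there is a directory-level counterpart to fsync on POSIX systems: open the parent directory itself and fsync that descriptor, which forces the new directory entry to stable storage. A hedged, Linux-oriented sketch (not portable to Windows; deep trees would need this per level):

```python
import os

def make_dirs_durable(path):
    """Create path (and parents) and push the new entry to stable storage."""
    os.makedirs(path, exist_ok=True)
    parent = os.path.dirname(os.path.abspath(path)) or "."
    fd = os.open(parent, os.O_RDONLY)  # directories can be opened read-only
    try:
        os.fsync(fd)                   # flush the directory entry itself
    finally:
        os.close(fd)
```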
25,025,928 | 2014-07-29T22:18:00.000 | 0 | 1 | 1 | 0 | python,unit-testing | 25,025,972 | 6 | false | 0 | 0 | I am sure that there are other, better methods, but you could always set a global flag from your main (and not when under unit test), then check it in your method.
The other way, of course, would be to override the method as part of the unit-test set-up - if your method is called brian and you have a test_brian, then simply assigning brian = test_brian during your pre-test setup will do the job (you may need to qualify those names with module names). | 1 | 29 | 0 | I have a large project that is unit tested using the Python unittest module.
I have one small method that controls large aspects of the system's behaviour. I need this method to return a fixed result when running under the UTs to give consistent test runs, but it would be expensive for me to mock this out for every single UT.
Is there a way that I can make this single method, unittest aware, so that it can modify its behaviour when running under the unittest? | How can a piece of python code tell if it's running under unittest | 0 | 0 | 0 | 11,612 |
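Besides the global flag and the setUp override suggested in the answer, a third trick often seen in the wild is to sniff sys.modules — if the unittest framework has been imported, tests are almost certainly driving the process. This is a heuristic sketch, not a guarantee:

```python
import sys

def running_under_unittest():
    """Heuristic: True when the unittest framework has been imported."""
    return "unittest" in sys.modules
```

Mocking the one method in a shared setUp is still the cleaner design, but this keeps the decision in a single place inside the production code.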
25,027,339 | 2014-07-30T01:11:00.000 | 0 | 0 | 0 | 0 | python,html,extract | 25,027,442 | 4 | false | 1 | 0 | BeautifulSoup could be used to parse the html document, and extract anything you want. It's not designed for downloading. You could find the elements you want by their class and id. | 1 | 6 | 0 | I'd like to get the data from inspect element using Python. I'm able to download the source code using BeautifulSoup but now I need the text from inspect element of a webpage. I'd truly appreciate if you could advise me how to do it.
Edit:
By inspect element I mean, in google chrome, right click gives us an option called inspect element which has code related to each element of that particular page. I'd like to extract that code/ just its text strings. | How to get data from inspect element of a webpage using Python | 0 | 0 | 1 | 34,323 |
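BeautifulSoup is the usual tool here; as a dependency-free sketch of the same idea, the standard library's html.parser can pull the text nodes out of downloaded markup (filtering by class or id would follow the same handle_* pattern). Note that the browser's inspect-element view shows the DOM after JavaScript has run, so parsing the downloaded source only sees the original markup:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect the non-empty text nodes of an HTML document."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        if data.strip():
            self.chunks.append(data.strip())

def extract_text(html):
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)
```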
25,032,656 | 2014-07-30T08:59:00.000 | 1 | 0 | 1 | 0 | python,kivy | 25,033,218 | 1 | true | 0 | 1 | Run your computationally intensive stuff in one or more threads or separate sub-processes and have them periodically post some updates to the GUI to say how they are doing, (and the results at the end of course). | 1 | 0 | 0 | I want to visualize a computing process in kivy. The problem is that kivy freezes when the python function runs. Any ideas how to manage that problem.
It is like a progress bar. The computation is running and the user should see that his PC did not hang up. | manual update kivy screen when a user function is doing heavy computation | 1.2 | 0 | 0 | 330 |
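A framework-agnostic sketch of the answer's suggestion — the heavy work runs in a worker thread and posts progress through a queue that the GUI thread (e.g. a Kivy Clock callback) polls instead of blocking; the names below are illustrative:

```python
import queue
import threading

def heavy_computation(progress):
    """Simulated long-running job reporting percentages as it goes."""
    for pct in range(0, 101, 20):
        # ... a chunk of the real computation would go here ...
        progress.put(pct)

progress = queue.Queue()
worker = threading.Thread(target=heavy_computation, args=(progress,))
worker.start()
worker.join()  # a real GUI would poll progress.get_nowait() periodically instead
updates = []
while not progress.empty():
    updates.append(progress.get())
```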
25,036,688 | 2014-07-30T12:21:00.000 | 0 | 0 | 1 | 1 | python | 39,276,446 | 3 | false | 0 | 0 | You need to go into the directory that you are going to "setup". For example, if you are installing numpy, and you have git-cloned it, then it probably is located at ~/numpy. So first cd into ~/numpy, and the type the commend like "python setup.py build" there. | 1 | 23 | 0 | This problem started while I was installing pyswip and needed to run a setup.py file. Using the command "python setup.py", I'm greeted with the following message: "python: can't open file 'setup.py': [Errno 2] No such file or directory."
I know this question's been asked a lot before, so I've tried everything in previous answers, including #!/usr/bin/env python or #!/usr/bin/env python-3.3.0 at the very top of the script and then trying "chmod +x setup.py",
which gives the following: "chmod: cannot access 'setup.py': No such file or directory".
Trying to run other .py files from the terminal gives the same result.
Running the file in the Python Shell from IDLE doesn't do anything.
Running the "ls -d */" command shows that the Python-3.3.0/ directory, where the .py files in question are, is definitely there.
Am I missing something really obvious? (If it helps, I have Elementary OS 0.2.) | "Cannot access setup.py: No such file or directory" - can't run any .py files? | 0 | 0 | 0 | 102,215 |
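The answer's point boils down to the working directory; a tiny sanity-check sketch you can run before invoking the build (the path argument is whatever directory the source was unpacked into — a hypothetical location, not one from the question):

```python
import os

def enter_source_dir(path):
    """cd into the package source tree, verifying setup.py actually lives there."""
    os.chdir(path)
    if not os.path.isfile("setup.py"):
        raise FileNotFoundError("no setup.py in %s - wrong directory" % path)
```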
25,041,311 | 2014-07-30T15:50:00.000 | 1 | 1 | 1 | 0 | python | 25,041,459 | 1 | true | 0 | 0 | file.write() appends to the end of the file. You never clear the file after reading the contents. The simplest thing to do, probably, would be to read the file once in 'r' mode, then open it again in 'w' mode (which will clear the file), and write out the edited content.
The output doesn't print because you don't tell it to. Calling infile.readlines() on its own just reads the file, then discards the result. The final line should be print infile.readlines(). | 1 | 0 | 0 | So I am supposed to open a text file by >>>yyy('yyy.txt'), and after inputting that, Python should find my file (which it does, since it's in the same directory) and edit all the words 'hot' to the new word 'why not'. After editing the text file, the content of the entire file should be printed.
It's opening the file and it's editing 'hot' to 'why not', but it duplicates the whole text in the text file and it does not return anything in Python when I need the text to be displayed.
Any help??? | Python, opening the file, reading the file, editing the file, and printing the file string? | 1.2 | 0 | 0 | 52
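Putting the accepted answer together — read everything first, then reopen in 'w' mode (which truncates, removing the duplication) — the function might look like this; the file name and words come from the question:

```python
def yyy(filename):
    with open(filename) as f:          # first pass: read the whole file
        text = f.read()
    text = text.replace("hot", "why not")
    with open(filename, "w") as f:     # "w" truncates the old contents
        f.write(text)
    print(text)                        # display the edited content
    return text
```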
25,042,205 | 2014-07-30T16:33:00.000 | 0 | 1 | 0 | 1 | python,apache,.htaccess,wsgi | 25,056,888 | 2 | false | 1 | 0 | You should not stick the WSGI file in the DocumentRoot directory in the first place. You have created the situation yourself. It doesn't need to be in that directory for WSGIScriptAlias to work. | 2 | 1 | 0 | I am using mod_wsgi with apache to serve the python application. I have a directive in the VirtualHost entry as follows WSGIScriptAlias /app /home/ubuntu/www/app.wsgi. I also have DocumentRoot /home/ubuntu/www/. Therefore, if the user attempts to read /app.wsgi it gets the raw file. If I try to block access to it via .htaccess, the application becomes unusable. How do I fix this? Is there a way to do so without moving the file out of the DocumentRoot? | How do I prevent the raw WSGI python file from being read? | 0 | 0 | 0 | 77 |
25,042,205 | 2014-07-30T16:33:00.000 | 0 | 1 | 0 | 1 | python,apache,.htaccess,wsgi | 25,045,724 | 2 | false | 1 | 0 | This is far from the best option, but it does seem to work: I added WSGIScriptAlias /app.wsgi /home/ubuntu/www/app.wsgi to the VirtualHost as well so that it will run the app on that uri instead of returning the raw file. | 2 | 1 | 0 | I am using mod_wsgi with apache to serve the python application. I have a directive in the VirtualHost entry as follows WSGIScriptAlias /app /home/ubuntu/www/app.wsgi. I also have DocumentRoot /home/ubuntu/www/. Therefore, if the user attempts to read /app.wsgi it gets the raw file. If I try to block access to it via .htaccess, the application becomes unusable. How do I fix this? Is there a way to do so without moving the file out of the DocumentRoot? | How do I prevent the raw WSGI python file from being read? | 0 | 0 | 0 | 77 |
25,043,745 | 2014-07-30T17:59:00.000 | 0 | 0 | 1 | 0 | python | 25,043,827 | 4 | false | 0 | 0 | The code as written doesn't even examine the input text at all. You're counting the number of times each vowel appears in vowels.
Replace vowels.count(i) with text.count(i). | 1 | 0 | 0 | So I am typing a code where a user will need to input >>Count('bla bla bla') and the program needs to count the vowels in the bla bla bla phrase and return each vowel with its count in this SPECIFIC order.
Any Help? | Python, counting from an inputted string. Needs to be returned in a specific order? | 0 | 0 | 0 | 67 |
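With the fix applied, a counting function along the lines the answer implies — counting each vowel in the input text rather than in the vowels string — could look like this (the fixed a-e-i-o-u order is an assumption about the required output):

```python
def Count(text):
    counts = []
    for vowel in "aeiou":
        counts.append((vowel, text.count(vowel)))
    return counts
```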
25,045,924 | 2014-07-30T20:04:00.000 | -1 | 1 | 0 | 0 | python,multithreading,web,browser,python-webbrowser | 31,276,859 | 3 | false | 1 | 1 | I have written winks-up in Vala.
It's small and fast and compiles well on Raspbian.
All the code was optimized to reduce memory usage.
It isn't perfect, but it's better than nothing. | 2 | 0 | 0 | Got a bit of weird request here, however it's one which I can't really figure out the answer to.
I'm writing a python application that displays web pages and locally stored images.
What I need is a way of displaying a web page using python that is really lightweight and quite fast. The reason for this is that it is running on a Raspberry Pi.
Of course I have many options, I can run it through web browser installed on the Raspbian distribution and run it as a separate process in python, I can download an Arch-Linux compatible browser and run it as a separate process in python and finally I can write my own native python file using Gtk or PyQt.
All of these approaches have their downsides as well as serious overheads. The web browser must also be full screen when I have a web page to display, and minimised when I'm displaying an image.
The main issue I have had with Gtk and PyQt is the way they have to be executed on the main thread - which is impossible as it doesn't align with my multithreaded architecture. The downside to using the web browsers that are pre-installed on raspbian, is that from python you lack control and it's slow. And finally, the issue with using an Arch-Linux browser is that it ends up being messy and hard to control.
What I would Ideally need is a web browser that loads a web page almost instantaneously, or a multithreaded web browser that can handle multiple instances. This way I can buffer one web page in the background whilst another browser is being displayed.
Do you guys have any advice to point me in the right direction? I would've thought that there would be a neat multithreaded python based solution by now, and I would think that's either because no one needs to do what I'm doing (less likely) - or I'm missing something big (more likely)!
Any advice would be appreciated.
James. | Lightweight Python Web Browser | -0.066568 | 0 | 0 | 1,135 |
25,045,924 | 2014-07-30T20:04:00.000 | 0 | 1 | 0 | 0 | python,multithreading,web,browser,python-webbrowser | 25,051,761 | 3 | false | 1 | 1 | I'd use PyQT to display the page but if the way PyQT use threads does not fit within you application, you may just write a minimalist (I'm speaking of ~10 lines of code here) web browser using PyQT, and fork it from your main application ? | 2 | 0 | 0 | Got a bit of weird request here, however it's one which I can't really figure out the answer to.
I'm writing a python application that displays web pages and locally stored images.
What I need is a way of displaying a web page using python that is really lightweight and quite fast. The reason for this is that it is running on a Raspberry Pi.
Of course I have many options, I can run it through web browser installed on the Raspbian distribution and run it as a separate process in python, I can download an Arch-Linux compatible browser and run it as a separate process in python and finally I can write my own native python file using Gtk or PyQt.
All of these approaches have their downsides as well as serious overheads. The web browser must also be full screen when I have a web page to display, and minimised when I'm displaying an image.
The main issue I have had with Gtk and PyQt is the way they have to be executed on the main thread - which is impossible as it doesn't align with my multithreaded architecture. The downside to using the web browsers that are pre-installed on raspbian, is that from python you lack control and it's slow. And finally, the issue with using an Arch-Linux browser is that it ends up being messy and hard to control.
What I would Ideally need is a web browser that loads a web page almost instantaneously, or a multithreaded web browser that can handle multiple instances. This way I can buffer one web page in the background whilst another browser is being displayed.
Do you guys have any advice to point me in the right direction? I would've thought that there would be a neat multithreaded python based solution by now, and I would think that's either because no one needs to do what I'm doing (less likely) - or I'm missing something big (more likely)!
Any advice would be appreciated.
James. | Lightweight Python Web Browser | 0 | 0 | 0 | 1,135 |
25,049,859 | 2014-07-31T02:23:00.000 | 0 | 0 | 1 | 0 | python-3.x,epoch | 25,050,216 | 1 | false | 0 | 0 | This syntax for long int literals is no longer valid in python 3. All integers are now longs by default. | 1 | 0 | 0 | I am trying to write a simple sntp client for printing time.when I try to assign TIME1970 = 2208988800L ,it gives an error saying invalid syntax.I am running program on python3 plus windows environment 64 bit. | Error in geting Epoch Time in python3 | 0 | 0 | 0 | 674 |
25,054,347 | 2014-07-31T08:37:00.000 | 0 | 0 | 0 | 0 | python,django,apache | 25,054,828 | 1 | true | 1 | 0 | No, there is no way to do this, and that's a very good thing as it is a bad idea. There is no reason to use the same Apache process for different sites: instead you should have different virtual hosts for each of your sites, and let Apache manage them. | 1 | 0 | 0 | I'd like to change the environment variable DJANGO_SETTINGS_MODULE (along with a few others) and then have ALL relevant modules like django.conf, django.db etc reloaded to reflect the information from the new settings module. The new settings module will have different database. I will be doing this in a middleware.
I was able to achieve this by reloading a few modules along with django.conf and django.db. All new SQL statements were fired against the new DB.
But this appears to be so hackish.
The main reason for me wanting to do this is to have the same apache child process serve requests for different django applications (different settings and not different apps) without having to recreate a new apache child process which reloads the whole thing.
Is there a clean way of achieving what I want to do?
Thanks,
UPDATE (19-Sept-2014): I have accepted Daniel Roseman's answer as that seems to be the reality in the context of the question asked. The Router approach suggested by him was something that I explored but couldn't use because django's transaction classes don't use the router. The router I presume exists for a different reason. The application code base I'm working on, which is pretty large, has tons of transaction.commit_manually for the default or a specific db alias. I was trying to get this to support multiple client databases without changing the application code.
However, I did manage to solve the main problem which was to support multiple client DBs and other settings. I don't try to change the settings on the fly nor do I use the router. I instead have a single settings.py with all client DB information. I monkey patched the connection handler to return a different database connection for 'default' alias (or other specific alias used by the code) based on certain env variables set in the middleware. So far this has worked fine. I will post an update if I run into any issues or if someone else can point out a potential issue with the approach. | How to change settings in a middleware? | 1.2 | 0 | 0 | 394 |
25,056,040 | 2014-07-31T10:00:00.000 | 0 | 0 | 1 | 0 | python,html,dictionary,key | 25,056,118 | 2 | false | 0 | 0 | As long as keys are hash-able you can have keys of any format. Note, tuples are hash-able so that would be a possible solution to your problem
Make a tuple of case-owner and type and use it as a key to your dictionary.
Note, generally all objects that are hashable should also be immutable, but not vice-versa. | 1 | 2 | 0 | I am currently writing a script that extracts data from an xml and writes it into an html file for easy viewing on a webpage.
Each piece of data has 2 pieces of "sub data": Owner and Type.
In order for the html to work properly, I need the "owner" string and the "type" string to be written in the correct place. If it was just a single piece of data, then I would use a dictionary and just use the data name as the key and then write the value to html; however, there are 2 pieces of data.
My question is, can a dictionary have 2 values (in my case owner and type) assigned to a single key? | Multiple Values for a single key in a dictionary | 0 | 0 | 0 | 1,335 |
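Both shapes from the answer in a couple of lines — a tuple as the single value under one key, or (as the answer suggests) a tuple as the key itself; the field names are illustrative:

```python
# One key, two values: map each data name to an (owner, type) tuple.
records = {"report.xml": ("alice", "summary")}
owner, kind = records["report.xml"]

# Or the composite-key form the answer suggests:
by_pair = {("alice", "summary"): "report.xml"}
```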
25,057,063 | 2014-07-31T10:52:00.000 | 0 | 0 | 0 | 0 | python,signals,filtering,time-series | 25,069,634 | 1 | false | 0 | 0 | There are some rather quick ways.
I assume you are only interested in the slope and average of the signal Y. In order to calculate these, you need to have:
sum(Y)
sum(X)
sum(X.X)
sum(X.Y)
All sums are over the samples in the window. When you have these, the average is:
sum(Y) / n
and the slope:
(sum(X.Y) - sum(X) sum(Y) / n) / (sum(X.X) - sum(X)^2 / n)
To make a quick algorithm it is worth noting that all of these except for sum(X.Y) can be calculated in a trivial way from either X or Y. The rolling sums are very fast to calculate as they are cumulative sums of differences of two samples ("incoming to window" minus "outgoing from the window").
Only sum(X.Y) needs to be calculated separately for each time step.
All these operations can be vectorized, even though the time lag is probably easier to write as a loop without any notable performance hit. This way you will be able to calculate tens of millions of regressions per second. Is that fast enough? | 1 | 0 | 1 | I have to perform linear regressions on a rolling window on Y and a time lagged version of X, ie finding Y(t) = aX(t-1) + b. The window size is fixed at 30 samples. I want to return a numpy array of all the beta coefficients. Is there a quick way of doing this? I read about the Savitsky-Golay filter, but it regresses only X with the time lagged version of itself. Thanks! | Performing a rolling vector auto regression with two variables X and Y, with time lag on X in Python | 0 | 0 | 0 | 149 |
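The four running sums above plug straight into a rolling fit; a plain-Python sketch (to fit Y(t) = a·X(t-1) + b, pass x[:-1] and y[1:] so X is lagged by one step — and note this version recomputes the sums per window, whereas the answer's incremental updates would make it faster still):

```python
def rolling_fit(x, y, window):
    """OLS fit y = a*x + b over each trailing window; returns a list of (a, b)."""
    fits = []
    for i in range(window, len(y) + 1):
        xs, ys = x[i - window:i], y[i - window:i]
        n = float(window)
        sx, sy = sum(xs), sum(ys)
        sxx = sum(v * v for v in xs)
        sxy = sum(u * v for u, v in zip(xs, ys))
        a = (sxy - sx * sy / n) / (sxx - sx * sx / n)
        b = (sy - a * sx) / n
        fits.append((a, b))
    return fits
```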
25,057,937 | 2014-07-31T11:38:00.000 | 1 | 0 | 0 | 0 | wxpython,wxwidgets | 25,059,718 | 1 | true | 0 | 1 | The widget probably doesn't support text alignment. If you want complete control over how it displays its contents, then you should probably switch to a custom drawn control, such as ComboCtrl. | 1 | 0 | 0 | I have a combobox in wxpython but I cant figure out how to align the text it contains to the right?
I have tried to use wx.ComboBox(self, choices=["1","2","3"], style=wx.TEXT_ALIGNMENT_RIGHT) but that didn't work. | How do I right align the text in a wx.ComboBox? | 1.2 | 0 | 0 | 835
25,064,876 | 2014-07-31T17:03:00.000 | 1 | 0 | 0 | 0 | javascript,python,django,angularjs,rest | 25,067,631 | 1 | true | 1 | 0 | It depends on the architecture of your application.
If you are building your client as a single page web application using Angular & your business logic is served using the Django REST API in JSON/XML format. Then rendering of the view should be the responsibility of the client side code.
As far as I can tell, whatever you are doing looks perfectly okay. I don't see any redundancy of the templates in this architecture. | 1 | 1 | 0 | Ok, so basically what is happening is I have a search input on my index page. The user types something in the search input, and that is sent to the Django REST api which returns the data in JSON format. I can loop through that results array using Angular ng-repeat. But my question is: is there a way to send that request to another django view and have django return the values using a for loop and a template that I already created?
(I am trying to avoid recreating the template specifically for Angular because that would be repetitive)
Any suggestions or help on this would be much appreciated. Thank you in advance for taking the time to help me.
All the best. | Pass a request to a DJANGO REST api using Angular and then return those results in a DJANGO view | 1.2 | 0 | 0 | 109 |
25,066,084 | 2014-07-31T18:11:00.000 | 4 | 0 | 1 | 1 | python,setuptools | 25,168,476 | 3 | false | 0 | 0 | Have you tried `os.path.abspath(__file__)' in your entry point script? It'll return yours entry point absolute path.
Or call find_executable from distutils.spawn:
import distutils.spawn
distutils.spawn.find_executable('executable') | 1 | 13 | 0 | So I have an entry point defined in my setup.py [console_scripts] section. The command is properly installed and works fine, but I need a way to programmatically find out the path to the script (e.g. on windows it'll be something like C:/my/virtual/env/scripts/my_console_script.exe). I need this so I can pass that script path as an argument to other commands, regardless of where the package is installed. Setuputils provides the pkg_resources, but that doesn't seem to expose any way of actually getting at the raw installed paths, only loadable objects.
Edit: To make the use case plain here's the setup.
I have a plugin-driven application that communicates with various local services. One of these plug-ins ties into the alerting interface of an NMS package. The only way this alerting package can get alerts out to an arbitrary handler is to call a script - the path to execute (the console_scripts entry point in this case) is registered as a complete path - that's the path I need to get. | Get entry point script file location in setuputils package? | 0.26052 | 0 | 0 | 5,445
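Both of the answers' suggestions fit in a couple of lines, with shutil.which as the modern stdlib replacement for distutils.spawn.find_executable (the script name below is illustrative, not one from the question):

```python
import os
import shutil
import sys

# Absolute path of whatever script/entry point is currently executing:
this_script = os.path.abspath(sys.argv[0])

# Absolute path of an installed console script, searched for on PATH
# (returns None when it is not installed):
script_path = shutil.which("my_console_script")
```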
25,067,790 | 2014-07-31T19:53:00.000 | 7 | 0 | 1 | 0 | python,c | 25,069,295 | 2 | false | 0 | 0 | I don't think that is possible for the basic reason that Python String objects are embedded into the PyObject structure. In other words, the Python string object is the PyObject_HEAD followed by the bytes of the string. You would have to have room in memory to put the PyObject_HEAD information around the existing bytes. | 1 | 10 | 0 | I have a large buffer of strings (basically 12GB) from a C app.
I would like to create PyString objects in C for an embedded Python interpreter without copying the strings. Is this possible? | Create PyString from c character array without copying | 1 | 0 | 0 | 1,617 |
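At the Python level — as opposed to the C API, where the header-followed-by-bytes layout rules this out — memoryview is the zero-copy tool: it exposes slices of an existing buffer without duplicating the bytes. A small sketch, worth knowing even though it does not solve the embedding case itself:

```python
buf = bytearray(b"abcdef" * 1000)  # stand-in for a large C-filled buffer
view = memoryview(buf)             # no copy made
first = view[:6]                   # slicing a memoryview is also copy-free
assert first.tobytes() == b"abcdef"
buf[0] = ord("z")                  # views track the underlying storage
assert first.tobytes() == b"zbcdef"
```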
25,069,005 | 2014-07-31T21:11:00.000 | 2 | 0 | 0 | 0 | python,port,forwarding | 25,069,162 | 4 | false | 0 | 0 | So your application needs to do TCP/UDP networking if I understand correctly. That means that at least one of the connecting clients needs a properly open port, and if both of them is behind NAT (a router) and have no configured open ports, your clients cannot connect.
There are several possible solutions for this, but not all are reliable: UPnP, as suggested here, can open ports on demand but isn't supported (or enabled) on all routers (and it is a security threat), and P2P solutions are complex and still require open ports on some clients.
The only reliable solution is to have a dedicated server that all clients can connect to that will negotiate the connections, and possibly proxy between them. | 1 | 6 | 0 | I'm developing a client-server game in python and I want to know more about port forwarding.
What I'm doing, for instance, is going to my router (192.168.0.1) and configuring it to allow requests for my real IP address to be redirected to my local address 192.168.0.X. It works really well. But I'm wondering if I can do it by coding something automatically?
I think skype works like a kind of p2p and I can see in my router that skype is automatically port forwarded to my PC address. Can I do it in Python too? | Python port forwarding | 0.099668 | 0 | 1 | 9,022
25,069,079 | 2014-07-31T21:17:00.000 | 1 | 1 | 1 | 0 | python,eclipse,package | 25,069,554 | 1 | false | 0 | 0 | Three things spring to mind:
Does the project have a pythonpath set? Right-click the project -> properties -> pythonpath. Add the root project directory, or whatever is appropriate for your project.
Do your packages contain an __init__.py file?
Have you got a python interpreter configured in PyDev?
Is your package explorer tab/window titled "PyDev Package Explorer"? If not, go to Window -> "Show View" -> "PyDev Package Explorer".
Do you have the pydev builder enabled? (PyDev Settings -> Builders) | 1 | 1 | 0 | I have a project on eclipse and I am wondering why I do not see the package symbol on the folders in the hierarchy.
Do I have to choose an option to be able to see the folders appear as package on eclipse?
I am using PyDev plugin here.. | How to get package icon in eclipse? | 0.197375 | 0 | 0 | 254 |
25,070,854 | 2014-08-01T00:00:00.000 | 1 | 0 | 1 | 0 | python | 25,071,044 | 6 | false | 0 | 0 | I had a problem with that recently:
I was writing some stuff to a file in a for-loop, but if I interrupted the script with ^C, a lot of data which should have been written to the file wasn't there. It looked like Python stopped writing there for no reason. I had opened the file before the for loop. Then I changed the code so that Python opens and closes the file on every single pass of the loop.
Basically, if you write stuff for yourself and you don't have any issues - it's fine; if you write stuff for more people than just yourself - put a close() inside the code, because someone could randomly get an error message and you should try to prevent this. | 3 | 53 | 0 | Usually when I open files I never call the close() method, and nothing bad happens. But I've been told this is bad practice. Why is that? | Why should I close files in Python? | 0.033321 | 0 | 0 | 54,943
25,070,854 | 2014-08-01T00:00:00.000 | 86 | 0 | 1 | 0 | python | 25,070,939 | 6 | true | 0 | 0 | For the most part, not closing files is a bad idea, for the following reasons:
It puts your program in the garbage collector's hands - though the file in theory will be auto closed, it may not be closed. Python 3 and CPython generally do a pretty good job at garbage collecting, but not always, and other variants generally suck at it.
It can slow down your program. Too many things open, and thus more used space in the RAM, will impact performance.
For the most part, changes to files in Python do not take effect until after the file is closed (writes are buffered), so if your script edits a file, leaves it open, and reads it back, it won't see the edits.
You could, theoretically, run into limits on how many files you can have open.
As @sai stated below, Windows treats open files as locked, so legit things like AV scanners or other python scripts can't read the file.
It is sloppy programming (then again, I'm not exactly the best at remembering to close files myself!)
Hope this helps! | 3 | 53 | 0 | Usually when I open files I never call the close() method, and nothing bad happens. But I've been told this is bad practice. Why is that? | Why should I close files in Python? | 1.2 | 0 | 0 | 54,943 |
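Every point in the accepted answer is normally addressed with a with block, which closes the file even if an exception escapes; a minimal sketch:

```python
def write_safely(path, text):
    """Write text and guarantee the handle is closed, success or failure."""
    with open(path, "w") as f:
        f.write(text)
    return f.closed  # True: the with block closed (and flushed) it for us
```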
25,070,854 | 2014-08-01T00:00:00.000 | 1 | 0 | 1 | 0 | python | 25,070,925 | 6 | false | 0 | 0 | You only have to call close() when you're writing to a file.
Python automatically closes files most of the time, but sometimes it won't, so you want to call it manually just in case. | 3 | 53 | 0 | Usually when I open files I never call the close() method, and nothing bad happens. But I've been told this is bad practice. Why is that? | Why should I close files in Python? | 0.033321 | 0 | 0 | 54,943 |
25,072,996 | 2014-08-01T04:44:00.000 | 2 | 0 | 0 | 0 | python-2.7,robotframework | 32,266,513 | 1 | false | 1 | 0 | You should check the content of dbConfigFile. You don't specify one so the default one is ./resources/db.cfg.
The error says that when Python tries to parse that file it cannot find a section named default. In the documentation it says:
note: specifying dbapiModuleName, dbName dbUsername or dbPassword directly will override the properties of the same key in dbConfigFile
so even if you specify all properties directly, it still reads the config file. | 1 | 1 | 0 | I am using Robot Framework with Database Library to test database queries on localhost. I am running it by XAMPP.
This is my test case:
*** Settings ***
Library DatabaseLibrary
*** Variables ***
@{DB} robotframework root \ localhost 3306
*** Test Cases ***
Select from database
[Tags] This
Connect To Database MySQLdb @{DB}[0] @{DB}[1] @{DB}[2] @{DB}[3] @{DB}[4]
@{results}= Query Select * From tbName
Log Many @{results}
I have installed MySQLDb for Python 2.7, however, when I run it using pybot, it keeps returning error:
Select from database | FAIL |
NoSectionError: No section: 'default'
Please help me to solve this problem. Thanks. | Error: No section: 'default' in Robot Framework using DatabaseLibrary | 0.379949 | 1 | 0 | 4,534 |
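The missing [default] section can be checked with the same stdlib ConfigParser that raises the error; a sketch of a db.cfg shaped the way the parser expects — the option names below are illustrative, so match them to your DatabaseLibrary version:

```python
import configparser

cfg_text = """\
[default]
dbapiModuleName=MySQLdb
dbName=robotframework
dbUsername=root
dbPassword=
dbHost=localhost
dbPort=3306
"""

parser = configparser.ConfigParser()
parser.read_string(cfg_text)
assert parser.has_section("default")  # exactly what the error complained about
```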
25,074,173 | 2014-08-01T06:30:00.000 | 2 | 0 | 0 | 0 | python,utf-8 | 25,074,740 | 4 | false | 1 | 0 | HTTP headers are strictly 7-bit US ASCII. The RFC allows you to accept ISO8859-1as a compatibility hack, but don't send any byte beyond 127.
There is no standard or best way to send any other data type beside ASCII in the headers. It is your application's responsibility to encode arbitrary sequences of bytes (and your UTF string is an arbitrary sequence of bytes) such that the encoding is 7-bit safe.
Use whatever is most convenient for both client and server in their implementation language(s). Base64 encoding, \xhh byte escapes, \uhhhh unicode character escapes, %hh as per URL encoding, =HH as in MIME, or &#... entities. All of these methods exist and are being used in the wild. | 2 | 1 | 0 | I have an app that submits a request to a Python webserver. The app has a UTF8 string with the following contents:
la langue franþaise.ppt
This is put into an HTTP header, and somehow converted as such:
la langue fran\xfeaise.ppt
Then Python on the web-server tried to do something with the string that maybe expects it to be UTF8, and I get this error:
UnicodeDecodeError: 'utf8' codec can't decode byte 0xfe in position 14: invalid start byte
I would basically like to preserve this UTF8 from the app to the web-server, such that the variable would contain the following if I printed it:
la langue franþaise.ppt
What's the best way to preserve a UTF8 string from a web client and server (assuming both written in Python)? | How to preserve UTF8 string from app to webserver in Python | 0.099668 | 0 | 0 | 191 |
25,074,173 | 2014-08-01T06:30:00.000 | 2 | 0 | 0 | 0 | python,utf-8 | 25,074,360 | 4 | true | 1 | 0 | \xfe is ISO-8859-1 encoding for þ.
While utf8 in content is widely supported, HTTP headers should be ASCII. The HTTP spec allows ISO-8859-1, but it's not recommended or reliable in tooling. Other encodings are not allowed without special escaping.
If possible, escape your special chars in a way that allows them to be transferred as ASCII. Base64 as suggested by fileoffset is one option, another would be the quote function from urllib.parse (or urllib on Python 2) | 2 | 1 | 0 | I have an app that submits a request to a Python webserver. The app has a UTF8 string with the following contents:
la langue franþaise.ppt
This is put into a HTTP header, and somehow converted as such:
la langue fran\xfeaise.ppt
Then Python on the web-server tried to do something with the string that maybe expects it to be UTF8, and I get this error:
UnicodeDecodeError: 'utf8' codec can't decode byte 0xfe in position 14: invalid start byte
I would basically like to preserve this UTF8 from the app to the web-server, such that the variable would contain the following if I printed it:
la langue franþaise.ppt
What's the best way to preserve a UTF8 string from a web client and server (assuming both written in Python)? | How to preserve UTF8 string from app to webserver in Python | 1.2 | 0 | 0 | 191 |
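A minimal sketch of the two escaping routes the answers suggest (percent-encoding and Base64), using only the stdlib; the filename is just the example value from the question:

```python
import base64
from urllib.parse import quote, unquote

filename = "la langue franþaise.ppt"

# Option 1: percent-encode the UTF-8 bytes (ASCII-safe and reversible)
encoded = quote(filename, safe="")
roundtrip = unquote(encoded)

# Option 2: Base64 over the UTF-8 bytes (also ASCII-safe)
b64 = base64.b64encode(filename.encode("utf-8")).decode("ascii")
decoded = base64.b64decode(b64).decode("utf-8")

print(encoded)                # la%20langue%20fran%C3%BEaise.ppt
print(roundtrip == filename)  # True
print(decoded == filename)    # True
```

Either way, the header value that actually travels over the wire is plain ASCII, and the receiving side decodes it back to the original string.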
25,078,549 | 2014-08-01T10:46:00.000 | 1 | 1 | 0 | 0 | python,eclipse,pydev,importerror | 25,087,270 | 1 | false | 1 | 0 | For that to work you need to have __init__.py under 'project_dir', 'dira' and 'dirb' and then you need to set as a source folder the directory which is the parent of the 'project_dir' (and not the project_dir itself) -- and no other directories should be set as source folders.
I.e.: the source folder is the directory added to the PYTHONPATH (so, for importing 'project_dir', its parent must be in the PYTHONPATH).
Note: You may have to remove the project from Eclipse/PyDev and recreate it one level up for this to work, depending on how you created it the first time. | 1 | 1 | 0 | I have checked out existing code from an SVN repo that uses full imports - by which I mean:
-->projectdir
-------->dira
-------------->a1.py
-------------->a2.py
-------->dirb
-------------->b1.py
Suppose a1.py imports a method from a2.py:
Normally I would simply write:
from a2 import xyz
Here they have written it as:
from project_dir.dira.a2 import xyz
How do I make eclipse recognize these imports?
Basically I want to be able to Ctrl+click and Open Declaration. I need to browse through this massive project and I simply cannot do so until this works.
PS:
I have tried adding the projectdir to the PYTHONPATH
I have tried adding each and every sub-directory to PYTHONPATH
I have an __init__.py in every folder -_- | Import Error - Pydev Eclipse | 0.197375 | 0 | 0 | 249
25,078,815 | 2014-08-01T11:01:00.000 | 0 | 0 | 0 | 1 | python,sqlalchemy,celery | 25,086,833 | 1 | true | 0 | 0 | It wasn't so complicated, subclass Session, providing a list for appending tasks via after_insert. Then run through the list in after_commit. | 1 | 0 | 0 | I'm initiating celery tasks via after_insert events.
Some of the celery tasks end up updating the db and therefore need the id of the newly inserted row. This is quite error-prone because it appears that if the celery task starts running immediately sometimes sqlalchemy will not have finished committing to the db and celery won't find the row.
What are my other options?
I guess I could gather these celery tasks up somehow and only send them on "after_commit" but it feels unnecessarily complicated. | SQLAlchemy after_insert triggering celery tasks | 1.2 | 1 | 0 | 209 |
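A stdlib-only sketch of the collect-then-dispatch pattern the answer describes; in real code the two methods would be wired to SQLAlchemy's after_insert and after_commit events and dispatch would call something like my_task.delay (those names are illustrative, not from the source):

```python
class TaskQueueingSession:
    """Collect task arguments during a transaction and dispatch them
    only after commit, so the row ids are guaranteed to exist."""

    def __init__(self, dispatch):
        self.dispatch = dispatch   # e.g. lambda row_id: my_task.delay(row_id)
        self.pending = []

    def after_insert(self, row_id):
        # called from the ORM's after_insert event: just remember the work
        self.pending.append(row_id)

    def after_commit(self):
        # called from the after_commit event: now it is safe to enqueue
        for row_id in self.pending:
            self.dispatch(row_id)
        self.pending = []

sent = []
session = TaskQueueingSession(sent.append)
session.after_insert(1)
session.after_insert(2)
print(sent)              # [] -- nothing dispatched before the commit
session.after_commit()
print(sent)              # [1, 2] -- tasks fire only after the commit
```

The point of the pattern: the worker can never race the commit, because nothing reaches the queue until the commit has finished.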
25,085,813 | 2014-08-01T17:43:00.000 | 1 | 0 | 1 | 0 | python,iterator,abc | 25,085,843 | 1 | false | 0 | 0 | Your options are: don't inherit from that class; provide your own, correct next method; or provide an alternative iterator to be returned from the __iter__ method.
There is no registration involved here other than that isinstance(instance, collections.Iterator) is True, and that returns True for two reasons:
your class inherits from the collections.Iterator class; it is a direct subclass.
your class has __iter__ and next methods; any class implementing these two methods will register as a collections.Iterator.
The base collections.Iterator class provides an abstractmethod next, and a concrete __iter__ method (it returns self). If your subclass does not implement their own version of next then creating instances of that class will not work; a TypeError is raised complaining that next is an abstract method.
If the class is not an iterator, then your first option is to alter the original class not to inherit from collections.Iterator.
The next option is to provide a fixed next() method; it requires that your instance keep state to produce the next value each time it is called, or raise StopIteration if there are no more values to produce.
Your third option is to return a proper iterator from the __iter__ method instead. Instead of returning self you could return a new object that implements iteration over the instance. | 1 | 0 | 0 | I've been handed code with a class that incorrectly subclasses the abstract base class collections.Iterator. It doesn't follow the Iterator contract and this flawed inheritance relationship causes issues downstream. Is there any way to unregister an abstract base class?
Note: I know that this is a strange situation. Please avoid the compulsion to voice an opinion on Monkey Patching. | Can I unregister a class as an `Iterator`? | 0.197375 | 0 | 0 | 148 |
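A sketch of the second option from the answer: a class that actually honours the iterator contract (Python 2 spells the method next, Python 3 __next__; shown in Python 3 form here):

```python
import collections.abc

class Countdown:
    """A well-behaved iterator: __iter__ returns self, __next__ keeps
    state and raises StopIteration when exhausted."""

    def __init__(self, start):
        self.current = start

    def __iter__(self):
        return self

    def __next__(self):
        if self.current <= 0:
            raise StopIteration   # signal exhaustion; never return junk
        value = self.current
        self.current -= 1
        return value

print(list(Countdown(3)))                                  # [3, 2, 1]
print(isinstance(Countdown(3), collections.abc.Iterator))  # True
```

Note the isinstance check is True purely because the two methods exist; that is the duck-typed registration the answer describes, and it is also why a class with a broken next method still "registers" as an Iterator.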
25,087,111 | 2014-08-01T19:11:00.000 | 1 | 0 | 0 | 0 | python,interpolation | 32,809,719 | 4 | false | 0 | 0 | Another function that could work is scipy.ndimage.interpolation.map_coordinates.
It does spline interpolation with periodic boundary conditions.
It does not directly provide derivatives, but you could calculate them numerically.
Is there a python package that can do this? If not, can I adapt scipy.interpolate to handle periodic boundary conditions? For instance, would it be enough to put a border of, say, four grid elements around the entire space and explicitly represent the periodic condition on it?
[ADDENDUM] A little more detail, in case it matters: I am simulating the motion of animals in a chemical gradient. The continuous function I mentioned above is the concentration of a chemical that they are attracted to. It changes with time and space according to a straightforward reaction/diffusion equation. Each animal has an x,y position (which cannot be assumed to be at a grid point). They move up the gradient of attractant. I'm using periodic boundary conditions as a simple way of imitating an unbounded space. | 2D Interpolation with periodic boundary conditions | 0.049958 | 0 | 0 | 3,228 |
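To make the wrap-around idea concrete, here is a pure-Python sketch (bilinear rather than cubic spline, so no scipy needed; scipy.ndimage's map_coordinates with mode='wrap' is the library route the answer points at):

```python
def periodic_bilinear(grid, x, y):
    """Bilinear interpolation on grid[i][j] = f(i, j) with periodic
    boundaries: indices wrap with %, so any real (x, y) is valid."""
    n, m = len(grid), len(grid[0])
    i0, j0 = int(x // 1), int(y // 1)      # floor, correct for negatives too
    fx, fy = x - i0, y - j0                # fractional parts in [0, 1)
    i0 %= n
    j0 %= m
    i1, j1 = (i0 + 1) % n, (j0 + 1) % m    # wrapped neighbours
    return (grid[i0][j0] * (1 - fx) * (1 - fy)
            + grid[i1][j0] * fx * (1 - fy)
            + grid[i0][j1] * (1 - fx) * fy
            + grid[i1][j1] * fx * fy)

g = [[0.0, 1.0], [2.0, 3.0]]
print(periodic_bilinear(g, 0.0, 1.0))    # 1.0, a grid point
print(periodic_bilinear(g, 1.5, 0.0))    # 1.0, halfway between rows 1 and 0
print(periodic_bilinear(g, -0.5, 0.0))   # 1.0, the same point reached by wrapping
```

The last two calls show why wrapping the neighbour indices is the whole trick: the point halfway past the last row interpolates back toward row 0, exactly as an unbounded periodic space should behave.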
25,088,006 | 2014-08-01T20:15:00.000 | 3 | 1 | 1 | 0 | python | 25,088,132 | 2 | true | 0 | 0 | Scripts need shebangs. Modules don't need shebangs -- they're sometimes used to make text editors' jobs easier (detecting the programming language), or if a module can be run as a script to execute its tests, but are not called for otherwise.
All Python code, whether in modules or scripts, should have encoding directives when not using the default encoding for the Python version targeted (ASCII for 2.x, UTF-8 for 3.x).
I am not sure, and can't figure out from googling, whether the modules need a coding directive in order to handle our native French accented strings (and, while we're at it, a shebang with the interpreter directive), or if it is enough to put these two lines in the one script. | Do Python modules need shebangs or encoding directives? | 1.2 | 0 | 0 | 98
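For reference, the two header lines in question look like this at the top of a file (the coding line is what matters for accented literals under Python 2; Python 3 already assumes UTF-8 source):

```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-

# Under Python 2, the coding line above is what lets the parser accept
# the accented bytes in this literal; Python 3 assumes UTF-8 already.
greeting = "déjà vu"
print(greeting)
```

The shebang only matters for the file you execute directly; the coding line matters for every file containing non-default-encoded text.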
25,089,702 | 2014-08-01T22:39:00.000 | 0 | 0 | 0 | 1 | python,datetime,batch-file,csv,ubuntu | 25,089,847 | 2 | false | 0 | 0 | So your options, roughly, are:
Python
Windows 'cmd' script
Transfer the files to a *nix environment and do it there with those tools if you are more familiar
If using Python, look at:
the os module, os.listdir(), os.path etc.
Regex replace using a function (re.sub taking a function rather than a string as a replacement)
datetime.datetime.strptime and datetime.datetime.strftime | 1 | 0 | 0 | I have a question about writing a command prompt script (DOS) in Windows 7.
The task I have:
I have a directory of raw data files (*.csv) where the 38th line is where the date and time are saved.
Example File cell A38:
Start Date/Time: 6/20/2014 13:26:16
However, this date format is M/DD/YYYY because it was saved using a sampling computer where the date of the computer was set-up as such.
I know there is a way to write a script that can be executed on a directory of these files so that none of the other information (text or actual time stamp) is changed,
but the Date format switched to the UK style of DD/MM/YYYY.
Intended product:
The file is unchanged in any way but line 38 reads
Start Date/Time: 20/06/2014 13:26:16
I really do not want to go through and do this to 800 plus files, with more coming, so any help with making this format change
in a script that could be executed on the entire directory of *.csv files would be very appreciated.
I also think it is an important note that the entire text as well as the actual date and time are in one Cell in Excel (A38) (Start Date/Time: M/D/YYYY HH:MM:SS)
and that I DO want to keep the time as 24-hour time.
Any guidance/pointers would be great. I am very new to command line programming in Windows. Also happy to see if such a script is available for an Ubuntu environment, or a python script, or anything really that would automate this tedious task of changing one part of one line close to 1000 times, as switching the changed directory back to the Windows computer is no big deal at all. Just easier (and I'm sure possible using cmd.exe)
Cheers,
Wal | Script for changing one line of *.csv file for a whole directory of files? | 0 | 0 | 0 | 115 |
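A sketch of the Python route the answer outlines (os.listdir, re.sub with a replacement function, strptime/strftime); the directory name "data" is a placeholder, and line 38 comes from the question:

```python
import os
import re
from datetime import datetime

DATE_RE = re.compile(r"\d{1,2}/\d{1,2}/\d{4} \d{1,2}:\d{2}:\d{2}")

def us_to_uk(match):
    """Rewrite M/D/YYYY HH:MM:SS as DD/MM/YYYY HH:MM:SS."""
    dt = datetime.strptime(match.group(0), "%m/%d/%Y %H:%M:%S")
    return dt.strftime("%d/%m/%Y %H:%M:%S")

def fix_file(path):
    with open(path) as f:
        lines = f.readlines()
    if len(lines) >= 38:                    # the stamp lives on line 38
        lines[37] = DATE_RE.sub(us_to_uk, lines[37])
    with open(path, "w") as f:
        f.writelines(lines)

# To sweep the whole directory (path is a placeholder), using os.listdir:
# for name in os.listdir("data"):
#     if name.endswith(".csv"):
#         fix_file(os.path.join("data", name))

print(DATE_RE.sub(us_to_uk, "Start Date/Time: 6/20/2014 13:26:16"))
# Start Date/Time: 20/06/2014 13:26:16
```

Because only line 38 is rewritten and the substitution keeps the HH:MM:SS part untouched, every other byte of each file comes out unchanged, which is exactly what the question asks for.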
25,090,573 | 2014-08-02T00:47:00.000 | 0 | 0 | 0 | 0 | python,django,memory-leaks,uwsgi | 68,154,893 | 2 | false | 1 | 0 | Django doesn't have known memory leak issues.
I had a similar memory issue. I found that there is a slow SQL causing a high DB CPU percentage. The memory issue is fixed after I fixed the slow SQL. | 1 | 21 | 0 | I've a Django application that every so often is getting into memory leak.
I am not using large data that could overload the memory; in fact the application 'eats' memory incrementally (in a week the memory goes from ~70 MB to 4 GB), which is why I suspect the garbage collector is missing something, though I am not sure. Also, it seems this increment is not dependent on the number of requests.
Obvious things like DEBUG=True, leaving open files, etc... no apply here.
I'm using uWSGI 2.0.3 (+ nginx) and Django 1.4.5
I could set up uWSGI so that it restarts the server when the memory exceeds a certain limit, but I wouldn't like to do that since it is not really a solution.
Are there any well know situations where the garbage collector "doesn't do its work properly"? Could it provide some code examples?
Is there any configuration of uWSGI + Django that could cause this? | Django memory leak: possible causes? | 0 | 0 | 0 | 15,649 |
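One stdlib starting point for narrowing such a leak down is tracemalloc (Python 3.4+): snapshot between requests and look at where the allocations accumulate. The growing list below is only a stand-in for whatever code path is under suspicion:

```python
import tracemalloc

tracemalloc.start()

suspect = []                       # stand-in for the leaking code path
for i in range(10000):
    suspect.append("x" * 100)

snapshot = tracemalloc.take_snapshot()
top = snapshot.statistics("lineno")
for stat in top[:3]:
    print(stat)                    # biggest allocation sites, by source line
```

In a real Django app you would take one snapshot per request cycle and diff them with snapshot.compare_to to see which lines keep growing.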
25,090,619 | 2014-08-02T00:57:00.000 | 0 | 0 | 0 | 0 | python,html,ads,adsense | 25,090,652 | 1 | true | 1 | 0 | If the HTML is definitely the same every time, the variations are probably being done on the client side using javascript.
The answer depends on what you mean by "classify." If you just want to know, on any given load of the page, where the widgets are, you will probably have to use something like Selenium that actually opens the page in a browser and runs javascript, rather than just fetching the HTML source. Then you will need to use Selenium to eval some javascript that detects the widget locations. There is a selenium module for python that is fairly straightforward to use. Consider hooking it up to PhantomJS so you don't have to have a browser window up. | 1 | 0 | 0 | There is a webpage which when loaded uses a random placement of forms / controls / google ads. However, the set is closed--from my tests there are at least three possible variations, with two very common and the third very rare.
I would like to be able to classify this webpage according to each variation. I tried analyzing the html source of each variation, but the html of all the variations is exactly the same, according to both Python string equals and the Python difflib. There doesn't seem to be any information specifying where to put the google ads or the controls.
For an example, consider a picture with two boxes, a red one (call it box A) and a blue one (call it box B). The boxes themselves never change position, but what takes their position does.
Now consider two possible variations, one of which is chosen everytime the webpage is loaded / opened.
Variation 1: Suppose 50% of the time, the google ad is positioned at box A (the red one) and the website control is thus placed at box B (the blue one).
Variation 2: Suppose also 50% of the time, the google ad is positioned at box B (the blue one) and the website control is thus placed at box A (the red one).
So if I load the webpage, how can I classify it based on its variation? | Classify different versions of the same webpage | 1.2 | 0 | 1 | 34 |
25,090,620 | 2014-08-02T00:59:00.000 | 2 | 0 | 0 | 0 | python,widget,kivy,python-3.4 | 32,950,247 | 2 | false | 0 | 1 | One thing you can do is to make use of the copy function in python.
This does not copy all the pos/size values but you will have all the attributes.
Ex:
from copy import copy
new_box = copy(self.current_box)
Hope this helps. | 2 | 0 | 0 | I have 2 widgets, I need to copy all attributes (pos, size, canvas, etc.) from one widget to another somehow (and then move the last to new pos). Probably I can copy attributes one by one, but is there some built-in function?
It seems Python's copy makes only a shallow copy (I can't move the duplicate, etc.) and deepcopy fails. | Can I copy attributes from one widget to another in Kivy? | 0.197375 | 0 | 0 | 1,181
25,090,620 | 2014-08-02T00:59:00.000 | 2 | 0 | 0 | 0 | python,widget,kivy,python-3.4 | 25,102,895 | 2 | false | 0 | 1 | I'm not aware of a pre-existing way to do this in general, but you could probably fairly easily make a function to do it. You can get a list of the widget properties through the properties() method of EventDispatcher, though you'd also need to manually keep track of any non-kivy-property attributes you want to copy, and might need to check to make sure it's safe to copy them all.
Depending on the situation, there may also be other possibilities. For instance, if the widget is instantiated from a set of arguments in the first place and never really modified much, you could just save the argument list and use it to construct a new widget. There might also be more efficient alternatives - if you don't need to interact with the 'copy', you don't need to make a new widget at all, but could draw the original one to a Fbo and simply re-use its texture. This would be a more advanced use of kivy, but isn't that hard, let me know if you're interested in it but don't know how to do it. | 2 | 0 | 0 | I have 2 widgets, I need to copy all attributes (pos, size, canvas, etc.) from one widget to another somehow (and then move the last to new pos). Probably I can copy attributes one by one, but is there some built-in function?
It seems Python's copy makes only a shallow copy (I can't move the duplicate, etc.) and deepcopy fails. | Can I copy attributes from one widget to another in Kivy? | 0.197375 | 0 | 0 | 1,181
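A stdlib sketch of the attribute-by-attribute copy the second answer suggests; plain objects stand in for Kivy widgets here, and in Kivy itself the name list would come from widget.properties():

```python
def copy_attrs(src, dst, names):
    """Copy the named attributes from src to dst, one by one."""
    for name in names:
        setattr(dst, name, getattr(src, name))

class FakeWidget:
    """Stand-in for a Kivy widget (hypothetical, for illustration only)."""
    def __init__(self, pos=(0, 0), size=(100, 100)):
        self.pos, self.size = pos, size

a = FakeWidget(pos=(5, 7), size=(20, 30))
b = FakeWidget()
copy_attrs(a, b, ["pos", "size"])
b.pos = (50, 50)             # move the duplicate independently
print(a.pos, b.pos, b.size)  # (5, 7) (50, 50) (20, 30)
```

Because each value is assigned onto a fresh object, moving the copy afterwards leaves the original untouched, which is the behaviour the asker found missing from copy().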
25,093,943 | 2014-08-02T10:17:00.000 | 1 | 1 | 0 | 0 | python | 25,106,484 | 1 | true | 0 | 0 | When you're working on a library you can use python setup.py develop instead of install. This will install the package into your local environment and keep it updated as you develop.
To be clear, if you use develop you don't have to run it again when you change your source files. | 1 | 0 | 0 | I have installed a python package slimit and I have cloned the source code from github.
I am making changes to this package in my local folders which I want to test (often), but I don't want to always run python setup.py install.
My folder structure is:
../develop/slimit/src/slimit (contains package files)
../develop/test/test.py
I'm using eclipse + pydev + python 2.7, on linux
Should I run eclipse with "sudo rights"?
Even better, is there a way to import the local development package into my testing script? | how to test changes to an installed package | 1.2 | 0 | 0 | 227 |
25,096,357 | 2014-08-02T15:00:00.000 | 1 | 0 | 0 | 0 | python,pandas | 25,110,368 | 3 | false | 0 | 0 | I haven't done much with Panels, but what exactly is the functionality that you need? Is there a reason a simple python list wouldn't work? Or, if you want to refer by name and not just by list position, a dictionary? | 1 | 0 | 1 | I want to write a function to return several data frames (different dims) and put them into a larger "container" and then select each from the "container" using indexing. I think I want to find some data structure like list in R, which can have different kinds of objects.
What can I use to do this? | What is the data structure in python that can contain multiple pandas data frames? | 0.066568 | 0 | 0 | 104 |
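A sketch of the dict approach from the answer (requires pandas; the frame names and shapes are arbitrary):

```python
import pandas as pd

frames = {
    "small": pd.DataFrame({"a": [1, 2]}),
    "wide": pd.DataFrame({"x": [1], "y": [2], "z": [3]}),
}

# select each frame by name, much like a named list in R
print(frames["small"].shape)   # (2, 1)
print(frames["wide"].shape)    # (1, 3)
```

A plain list works the same way if positional access is enough; the dict only adds R-style access by name.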
25,096,745 | 2014-08-02T15:46:00.000 | 4 | 0 | 1 | 0 | python,python-3.x | 25,096,768 | 1 | true | 0 | 0 | Dictionaries and sets are not ordered. This is easy to overlook when you consider that iteration over them is supported... but one cannot assume any particular order in which iter() will provide items from sets and dictionaries, so it would not make sense to define a way to reverse this order. | 1 | 0 | 0 | Why is iter() implemented for all collections whereas reversed() isn't (e.g. dicts and sets don't implement it)? As the doc says reversed() returns a simple reverse iterator... | Why isn't reversed() implemented for all collections? | 1.2 | 0 | 0 | 43 |
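The asymmetry is easy to see in practice. Sets still reject reversed() in every Python version; note that dicts did too until Python 3.8 gave them a defined reverse order:

```python
nums = [1, 2, 3]
print(list(iter(nums)))        # [1, 2, 3]
print(list(reversed(nums)))    # [3, 2, 1] -- sequences know their order

s = {1, 2, 3}
print(sorted(iter(s)))         # iteration works, in some arbitrary order
try:
    reversed(s)
    print("reversible")
except TypeError:
    print("sets are not reversible")
```

Iteration only needs *some* order to hand values out in; reversal needs a *defined* order to invert, which is exactly what an unordered collection lacks.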
25,097,038 | 2014-08-02T16:18:00.000 | 0 | 0 | 1 | 0 | python,macos,python-2.7,python-3.x | 25,097,174 | 2 | true | 0 | 0 | Python installations on OS X generally go in separately and don't uninstall each other. Also, the convention is still for the executable python to refer to 2 and python3 to refer to 3, so they don't even really overlap.
Common locations you might have python include
/usr/bin/python (the system installed one, probably an osx specific 2.7.5)
/Library/Frameworks/Python.framework/Versions/... This is where the ones you install from Python.org go, separately for each version.
Your homebrew directory if you are using that
Which one runs when you type python (or python3) depends on your PATH environment variable. | 2 | 0 | 0 | Could there possibly be hidden files that I would need to find. Or do I have to re-install Python 2.7 if I want to work with it?
Thanks | How to find out if I still have Python 2.7 on Mac? Does installing Python 3.3 also uninstall the older package? | 1.2 | 0 | 0 | 159 |
25,097,038 | 2014-08-02T16:18:00.000 | 0 | 0 | 1 | 0 | python,macos,python-2.7,python-3.x | 25,097,095 | 2 | false | 0 | 0 | python --version - will give you the version of the currently used python in environment variable PATH.
Nothing gets uninstalled. You just have to adjust the PATH variable according to what you will be using.
Thanks | How to find out if I still have Python 2.7 on Mac? Does installing Python 3.3 also uninstall the older package? | 0 | 0 | 0 | 159 |
25,099,749 | 2014-08-02T21:54:00.000 | 0 | 1 | 0 | 1 | python,ubuntu,digital-ocean | 25,100,208 | 2 | false | 0 | 0 | First install and enable fcron. Then, sudo -s into root and run fcrontab -e. In the editor, enter */30 * * * * /path/to/script.py and save the file. Change 30 to 15 if every 15 minutes is what you're after. | 1 | 0 | 0 | I have a working Python 3 program (a *.py file).
I have a Digital Ocean (DO) droplet with Ubuntu 14.04.
My program posts a message to my Twitter account.
I just copy my *.py into a directory on the DO droplet and run it over ssh, and it all works fine.
But I need to post the message (run my program) automatically, every 15-30 min for example.
I am a newbie at all of this.
What should I do? Step-by-step please! | Run my python3 program on remote Ubuntu server every 30 min | 0 | 0 | 0 | 492
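The stock cron equivalent, for reference (no fcron needed; the paths are placeholders, and note that a crontab line has five time fields before the command):

```shell
# edit the current user's crontab
crontab -e

# then add one line: every 30 minutes, run the script with python3
# and append output to a log so failures are visible
*/30 * * * * /usr/bin/python3 /home/youruser/post_tweet.py >> /home/youruser/post_tweet.log 2>&1
```

Invoking the script via python3 avoids needing a shebang and an executable bit on the .py file itself.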
25,101,081 | 2014-08-03T02:31:00.000 | 1 | 0 | 1 | 0 | python,numpy,scipy,ipython,spyder | 25,123,660 | 1 | false | 0 | 0 | You may find Spyder's array editor better suited for large arrays than the qt console. | 1 | 1 | 1 | I am using Spyder from the Anaconda scientific package set (3.x) and consistently work with very large arrays. I want to be able to see these arrays in my console window so I use these two commands:
set_printoptions(linewidth=1000)
to set the maximum characters displayed on a single line to 1000 and:
set_printoptions(threshold='nan')
to prevent truncation of large arrays. Putting these two lines of code into the startup option as such
set_printoptions(linewidth=1000),set_printoptions(threshold='nan')
causes Spyder to hang and crash upon starting a new IPython session in the console. Is there a way to run these lines of code without my having to type them every time? Also, the console window only allows me to scroll up to a certain point, then stops. This can be a problem when I want to view large arrays. Is there any way to increase the scroll buffer? (Note, I'm very new to Python, having just switched over from MATLAB). | Spyder, Python IDE startup code crashing GUI | 0.197375 | 0 | 0 | 604
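For reference, the two calls combine into one, and newer NumPy requires a numeric threshold (the old threshold='nan' is rejected; a huge integer such as sys.maxsize effectively disables truncation). This assumes numpy is importable:

```python
import sys
import numpy as np

# one startup line instead of two; a numeric threshold replaces 'nan'
np.set_printoptions(linewidth=1000, threshold=sys.maxsize)

opts = np.get_printoptions()
print(opts["linewidth"])                 # 1000
print(opts["threshold"] == sys.maxsize)  # True
```

A single line like this is the sort of thing to paste into Spyder's IPython startup code option, rather than two comma-joined statements.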
25,103,685 | 2014-08-03T10:16:00.000 | 1 | 0 | 0 | 0 | python-3.x,gtk,gtk3 | 25,107,700 | 3 | false | 0 | 0 | The idea behind that is that the introspector converts GTK_ENUMTYPE_ELEMENT to Gtk.ENUMTYPE.ELEMENT for gtk3 python bindings (gi). So having a look at the original Gtk+-3 .x documentation suffices (i.e. using devhelp). (why not have a look at the binding doc itself anyways?). | 1 | 3 | 0 | It's been widely discussed that a few things changed from Python2/Gtk2 to Python3/Gtk+3. I got along with that so far, but there is one thing I am having trouble with: Where did all the constants go?
In Python 2.x I could just do gtk.RESPONSE_OK and now I can do (after a lot of trying around, I found) Gtk.ResponseType.OK
Question: Is there any complete and comprehensive list/documentation of where the specific constants went? | Is there a list/guide/documentation where the GTK constants went in GTK+ 3 in Python 3.x? | 0.066568 | 0 | 0 | 282 |