Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
25,814,206 | 2014-09-12T18:04:00.000 | 0 | 1 | 0 | 0 | python,apache,ubuntu | 25,820,393 | 2 | false | 0 | 0 | Hey,
For ubuntu, this is what you can do:
sudo a2dismod mod_python
sudo /etc/init.d/apache2 restart
Hope that helps :) | 1 | 0 | 0 | I want to disable mod_python errors, tracebacks, module cache details and any other warning notices!
I am using the plain mod_python module on an Ubuntu server with Apache, without any framework (Django etc.).
I did a lot of searching on Google but nobody speaks about this :)
I want an alternative to PHP's error_reporting(0); or any config changes on the server side.
Thank you in advance. | Disable mod_python error and traceback | 0 | 0 | 0 | 505 |
25,816,009 | 2014-09-12T20:04:00.000 | 1 | 0 | 1 | 0 | python,linux,multithreading | 25,816,143 | 3 | false | 0 | 0 | In all likelihood, multithreading won't help you.
Your data generation speed is either:
IO-bound (that is, limited by the speed of your hard drive), and the only way to speed it up is to get a faster storage device. The only type of parallelization that can help you is finding a way to spread your writes across multiple devices (can you use multiple hard drives?).
CPU-bound, in which case Python's GIL means you can't take advantage of multiple CPU cores within one process. The way to speed your program up is to make it so you can run multiple instances of it (multiple processes), each generating part of your data set.
Regardless, the first thing you need to do is profile your program. What parts are slow? Why are they slow? Is your process IO-bound or CPU-bound? Why? | 2 | 0 | 0 | I have a small Python script used to generate lots of data to a file. It takes about 6 minutes to generate 6 GB of data; however, my target data size could be up to 1 TB. By linear extrapolation, it would take about 1000 minutes to generate 1 TB of data, which I think is unacceptable for me.
So I am wondering: will multithreading help me shorten the time here, and why could that be? If not, do I have other options?
Thanks! | Multiple Thread data generator | 0.066568 | 0 | 0 | 208 |
25,816,009 | 2014-09-12T20:04:00.000 | 1 | 0 | 1 | 0 | python,linux,multithreading | 25,816,754 | 3 | false | 0 | 0 | 6 mins to generate 6GB means you take a minute to generate 1 GB. A typical hard drive is capable of up to 80 - 100 MB/s throughput when new. This leaves you with approximately 6 GB / minute IO limit.
So it looks like the limiting factor is the CPU, which is good news (running more instances can help you).
However, I wouldn't use multithreading in Python because of the GIL. A better idea would be to run several scripts writing to different offsets in different processes, or to use Python's multiprocessing module.
I would check it, though, by running it and writing to /dev/null to make sure you truly are CPU-bound. | 2 | 0 | 0 | I have a small Python script used to generate lots of data to a file. It takes about 6 minutes to generate 6 GB of data; however, my target data size could be up to 1 TB. By linear extrapolation, it would take about 1000 minutes to generate 1 TB of data, which I think is unacceptable for me.
So I am wondering: will multithreading help me shorten the time here, and why could that be? If not, do I have other options?
Thanks! | Multiple Thread data generator | 0.066568 | 0 | 0 | 208 |
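To make the multiple-processes suggestion above concrete, here is a minimal sketch (the worker count, chunk size and the generate_chunk body are hypothetical stand-ins for the real generation logic) using Python's multiprocessing module, with each process writing its own part file:

```python
import multiprocessing as mp

def generate_chunk(args):
    # Each worker writes its own part file, so processes never
    # contend for a single file handle.
    index, n_bytes = args
    with open('data_part_%d.bin' % index, 'wb') as f:
        for _ in range(n_bytes // 4096):
            f.write(b'x' * 4096)  # stand-in for real data generation

if __name__ == '__main__':
    n_workers = 4
    chunk = 1 << 30  # 1 GB per worker, for illustration
    pool = mp.Pool(n_workers)
    pool.map(generate_chunk, [(i, chunk) for i in range(n_workers)])
    pool.close()
    pool.join()
```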
25,817,254 | 2014-09-12T21:39:00.000 | 5 | 0 | 1 | 0 | python,packaging,setup.py | 25,817,456 | 2 | true | 0 | 0 | If you don't install with pip, you can't uninstall with pip, so you never actually uninstalled the old version. python setup.py install will install different versions, but typically they install on top of the old versions (except for the .egg-info file or directory). You don't say how exactly the two versions were living side-by-side, because setup.py (or pip) won't rename site-packages/my_module to my_module_v1, for example. I assume that you changed the directory structure and .py file names enough that the two versions could coexist in the same parent directory, so in IPython you could run from my_module import OldClassName and from my_module import NewClassName. | 1 | 5 | 0 | I'm working on a Python module for a larger system. I made a setup.py package for installing it in the main module. It worked correctly, but then I made some changes to my module, mainly modifying the names of the .py files and reorganizing a bunch of classes.
Then I updated the version of the module, uninstalled the old one with pip, and installed the new version using python setup.py install. When I try to import it in IPython, I find that I get the older, supposedly erased module.
I found it quite odd and checked my virtualenv lib folder, where I found both versions of the module: the old class structure and the new one. Both were usable, as I imported both in IPython and tested them.
It doesn't cause any problems, as I can simply use the newest version, but it is confusing. Any idea why this behaviour occurs?
25,818,198 | 2014-09-12T23:14:00.000 | 1 | 0 | 0 | 0 | python,amazon-ec2,bigdata,amazon-sqs | 25,824,846 | 1 | true | 1 | 0 | The problem with Hadoop is that when you get a very large number of files that you do not combine with CombineFileInputFormat, it makes the job less efficient.
Spark doesn't seem to have a problem with this, though; I've had jobs run without problems with tens of thousands of input files, outputting tens of thousands of files. I haven't tried to really push the limits, and I'm not sure there even is one! | 1 | 0 | 0 | I am currently searching for the best solution + environment for a problem I have. I'm simplifying the problem a bit, but basically:
I have a huge number of small files uploaded to Amazon S3.
I have a rule system that matches any input across all file content (including file names) and then outputs a verdict classifying each file. NOTE: I cannot combine the input files because I need an output for each input file.
I've reached the conclusion that Amazon EMR with MapReduce is not a good solution for this. I'm looking for a big data solution that is good at processing a large number of input files and performing a rule matching operation on the files, outputting a verdict per file. Probably will have to use ec2.
EDIT: clarified 2 above | What big data solution can I use to process a huge number of input files? | 1.2 | 0 | 1 | 148 |
25,819,061 | 2014-09-13T01:46:00.000 | 3 | 0 | 1 | 0 | python | 25,819,192 | 2 | false | 0 | 0 | The Python debugger is implemented on top of the sys.settrace function. This is a way to register a Python function that will be invoked for every line of code executed. As the Python interpreter executes your code, it will call the trace function for each line it encounters. The trace function can do whatever it likes.
The debugger has a trace function that it uses to track the execution of your code. It examines the current stack frame to see if you've gotten to a breakpoint, for example. pdb.set_trace means, "set the pdb trace function as the trace function with sys.settrace."
It's not a good name, because it describes how the function is implemented, rather than what it does for the user. | 1 | 1 | 0 | I use pdb.set_trace() in a script to set a breakpoint.
But I can't figure out why the function is called set_trace.
It seems that trace often appears in discussions of executing or debugging a program.
What does trace mean in the context of executing or debugging a program?
Thanks. | What does "trace" mean in "pdb.set_trace()"? | 0.291313 | 0 | 0 | 1,689 |
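As a minimal illustration of the mechanism described in the answer above (not pdb's actual implementation), you can register your own trace function with sys.settrace and watch the interpreter call it for every line executed:

```python
import sys

def my_trace(frame, event, arg):
    # Invoked by the interpreter for 'call', 'line', 'return', ... events.
    if event == 'line':
        print('tracing line %d of %s' % (frame.f_lineno, frame.f_code.co_name))
    return my_trace  # returning a trace function keeps local tracing enabled

def demo():
    x = 1
    y = x + 1
    return y

sys.settrace(my_trace)
demo()
sys.settrace(None)  # turn tracing off again
```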
25,820,713 | 2014-09-13T06:59:00.000 | 0 | 0 | 1 | 0 | python,eclipse,pydev | 25,823,825 | 2 | true | 0 | 0 | Hmmm, I am not familiar with the IDE IDLE, nor do I typically run a file via the console, but maybe I understand your question. The core answer is that you need a breakpoint so that execution does not terminate, and therefore x = 10 stays resident in memory. If the breakpoint is set after x = 10, then when you reach the breakpoint, execution stops, and typing "x" gives you 10.
There is documentation online about how to use the console in the context of loading a file from within the console. I tend instead to hit Shift-F9 while in the file to run it in debug mode. This leaves you in the debug console rather than the interactive console (you'll see no ">" prompt), but you'll still get x = 10 when you enter x at the break.
I've probably misunderstood, but thought I would give it a shot. Good luck! | 1 | 1 | 0 | So I installed PyDev in Eclipse and started testing it and I have come to an issue.
While using IDLE to run Python I could, for example, create a file, set a variable x = 10 and then make IDLE run said file. I would then be able to ask python for x and it would give me 10. I don't know how to do that in PyDev.
I created a python interactive console and then when prompted chose the "Console for currently active editor" but the console will not recognize x even though the editor has x defined to 10. I did save before creating the console, I also ran the file before I opened the console... I do not know what to do...
Thank you! | PyDev Interactive Console Issue | 1.2 | 0 | 0 | 251 |
25,821,590 | 2014-09-13T09:04:00.000 | 0 | 0 | 1 | 1 | tornado,ipython-notebook | 25,821,699 | 1 | false | 0 | 0 | I think you are starting Python as a user that does not have access to the Python source files. Try starting the Python application as root. | 1 | 0 | 0 | When I try to use cmd to open IPython Notebook, I get this
error:tornado.access:500 GET/static/base/images/favicon.ico?v=4e6c6be5716444f7ac7b902e7f388939<::1> 150.00ms referer=None
I tried reinstalling Python 2.7.8, pythonxy and Chrome, but it still failed.
Does anyone have any idea how to fix this? | ipython notebook error with tornado | 0 | 0 | 0 | 478 |
25,824,415 | 2014-09-13T15:00:00.000 | 2 | 0 | 1 | 0 | python,numpy | 25,824,686 | 1 | true | 0 | 0 | All numbers in a numpy array have the same dtype. So you can quickly check what dtype the array has by looking at array.dtype. If this is float or float64 then every single item in the array will be of type float.
Numpy can also create arrays with mixed dtypes, similar to normal Python lists, but then array.dtype is np.object; in that case anything can be in the array elements. But in my experience there are only a few cases where you actually need np.object.
To check whether the dtype is one of float16, float32 or float64, use:
if not issubclass(array.dtype.type, numpy.float):
raise TypeError('float type expected') | 1 | 0 | 1 | Just as the title says, I want to raise an exception when I send in an input A that should be an array containing floats. That is, if A contains at least one item that is not a float, it should raise a TypeError. | How do I raise an exception if all elements of a numpy array are not floats? | 1.2 | 0 | 0 | 828 |
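Wrapped up as a small helper, a minimal sketch of that check (np.issubdtype also covers float16/32/64; the function name is illustrative):

```python
import numpy as np

def require_float_array(A):
    # A non-object numpy array has a single dtype for all items,
    # so one dtype check covers every element.
    A = np.asarray(A)
    if not np.issubdtype(A.dtype, np.floating):
        raise TypeError('float type expected, got %s' % A.dtype)
    return A

require_float_array([1.0, 2.5])   # fine
# require_float_array([1, 2])     # raises TypeError: integer array
```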
25,824,658 | 2014-09-13T15:24:00.000 | 4 | 0 | 1 | 1 | python,linux,windows,cross-platform | 25,826,816 | 1 | true | 0 | 0 | The Python bytecode itself is not platform-dependent, assuming a full Python VM implementation.
There are specific modules and functions that are only available on certain platforms; therefore, Python source code can be made platform-dependent if it uses these. The documentation specifies if a name is only available on a restricted subset of platforms, so avoiding these will go far toward making your code platform-independent. | 1 | 4 | 0 | Let's assume Python code written and tested on a Linux system with Python 2.7.1. It utilizes only the default Python libraries like os, itertools, tkinter, csv and collections.
If we take this code and run it with Python 2.7.1 on a Windows system, will it work fine? | Is python code platform independent? | 1.2 | 0 | 0 | 8,517 |
25,827,393 | 2014-09-13T20:29:00.000 | 0 | 0 | 1 | 0 | python,templates,visual-studio-2013,ptvs | 29,585,803 | 1 | false | 1 | 0 | @GeoCoder, Pavel's link is mostly what you need. If after deleting all those files you still see it, then you need to delete {program folder}\Common7\IDE\ItemTemplatesCache\cache.bin also. | 1 | 0 | 0 | I have Visual Studio 2013 Express running on Windows 8.1. Also, I installed the Python Tools for Visual Studio template. I have developed Python applications a few times as well as C# stuff. For Python applications, I decided to export a general game template. Since it does not look good, I want to remove it before I attempt to export a better one. I tried searching everywhere but to no avail. | How to remove exported templates from Visual Studio Express 2013 | 0 | 0 | 0 | 232 |
25,830,569 | 2014-09-14T06:08:00.000 | 2 | 0 | 1 | 0 | python,csv | 25,830,628 | 2 | false | 0 | 0 | You need to come up with a way to detect the start and end of the relevant section of the file; the csv module does not contain any built-in mechanism for doing this by itself, because there is no general and unambiguous delimiter for the beginning and end of a particular section.
I have to question the wisdom of jamming multiple CSV files together like this. Is there a reason that you can't separate the sections into individual files? | 1 | 0 | 1 | My CSV file has multiple tables in a single file, for example:
name age gender
n1 10 f
n2 20 m
n3 30 m
city population
city1 10
city2 20
city3 30
How can I print from the city row to the city3 row using the Python csv module? | Print from specific row of csv file | 0.197375 | 0 | 0 | 764 |
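One way to implement the start/end detection suggested in the answer above — a minimal sketch that treats a row whose first field matches the section header as the start, and a blank line (or end of file) as the end (the file name and comma delimiting are assumptions):

```python
import csv

def rows_in_section(path, header_field):
    # Yield the rows between a header row (e.g. 'city,population')
    # and the next blank line or the end of the file.
    in_section = False
    with open(path) as f:
        for row in csv.reader(f):
            if not row:                     # blank line ends a section
                in_section = False
            elif row[0] == header_field:    # header row starts the section
                in_section = True
            elif in_section:
                yield row

for row in rows_in_section('tables.csv', 'city'):
    print(row)   # ['city1', '10'] ... ['city3', '30']
```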
25,832,125 | 2014-09-14T10:03:00.000 | 0 | 0 | 1 | 0 | python,module,anaconda | 25,838,038 | 2 | false | 0 | 0 | I have found a solution, or rather I learned something I did not know. To install a module into the standard Python 3.4 installation, I should use the specific pip3.4.exe file, which is in the Scripts directory of the Python 3.4 installation, and not the "general" pip.exe file, which installs all modules into the Python 3.4 Anaconda installation, considered the default one. | 1 | 0 | 0 | I'd like to use both a standard Python 3.4 installation and Anaconda with Python 3.4 on the same computer. But when I do a standard installation of a new module (for example pip install Django), everything is fine in the Anaconda environment but it doesn't work in the standard Python 3.4 environment. My OS is Windows 7, and I'd like a solution other than sys.path.append(), which I would have to execute every time I start the standard Python 3.4 version. | A way to share modules between python 3.4 installation standard and anaconda py34 one | 0 | 0 | 0 | 89 |
25,836,133 | 2014-09-14T17:43:00.000 | 1 | 0 | 0 | 0 | python,algorithm,random,cumulative-sum,cumulative-frequency | 25,836,165 | 1 | false | 0 | 0 | Your approach is (also) correct, but it uses space proportional to the input text size. The approach suggested by the book uses space proportional only to the number of distinct words in the input text, which is usually much smaller. (Think about how often words like "the" appear in English text.) | 1 | 0 | 1 | I'm working on exercise 13.7 from Think Python: How to Think Like a Computer Scientist. The goal of the exercise is to come up with a relatively efficient algorithm that returns a random word from a file of words (let's say a novel), where the probability of the word being returned is correlated to its frequency in the file.
The author suggests the following steps (there may be a better solution, but this is presumably the best one for what we've covered so far in the book).
Create a histogram showing {word: frequency}.
Use the keys method to get a list of words in the book.
Build a list that contains the cumulative sum of the word frequencies, so that the last item in this list is the total number of words in the book, n.
Choose a random number from 1 to n.
Use a bisection search to find the index where the random number would be inserted in the cumulative sum.
Use the index to find the corresponding word in the word list.
My question is this: What's wrong with the following solution?
Turn the novel into a list t of words, exactly as they appear in the novel, without eliminating repeat instances or shuffling.
Generate a random integer from 0 to n, where n = len(t) – 1.
Use that random integer as an index to retrieve a random word from t.
Thanks. | Why is a list of cumulative frequency sums required for implementing a random word generator? | 0.197375 | 0 | 0 | 242 |
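For comparison, a minimal sketch of the book's cumulative-sum approach (hist is a hypothetical {word: frequency} histogram):

```python
import bisect
import random

def choose_word(hist):
    words = list(hist.keys())
    cumulative, total = [], 0
    for w in words:
        total += hist[w]
        cumulative.append(total)       # running sum of frequencies
    r = random.randint(1, total)       # 1..n, n = total word count
    i = bisect.bisect_left(cumulative, r)
    return words[i]

# 'the' should come back roughly three times as often as 'cat':
print(choose_word({'the': 3, 'cat': 1}))
```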
25,838,717 | 2014-09-14T22:39:00.000 | 0 | 0 | 0 | 0 | pdf,python-sphinx,rst2pdf | 37,615,276 | 1 | false | 1 | 0 | We had a similar problem: bad PDF output on a project with a lot of chapters and images.
We solved it by disabling the page break: in conf.py, set the pdf_break_level value to 0. | 1 | 0 | 0 | My Sphinx input is six rst files and a bunch of PNGs and JPGs. Sphinx generates the correct HTML, but when I make the PDF I get an output file that comes up blank in Adobe Reader (and opens at over 5000% zoom!) and does not display at all in Windows Explorer.
The problem goes away if I remove various input files or if I edit out what looks like entirely innocuous sections of the input, but I cannot get a handle on the specific cause. Any ideas on how to track this one down? Running Sphinx build with the -v option shows no errors.
I'm using the latest Sphinx (1.2.3) and the latest rst2pdf (0.93), with the default style. On Win7.
(added) This may help others with the same problem: I tried concatenating the rst files, then running rst2pdf on the concatenated file. That worked, though it gave me a bunch of warnings for bad section hierarchy and could not handle the Sphinx :ref: stuff. Could the bad section hierarchy thing (i.e. ==, --, ~~ in one file, ==, ~~, -- in another) be connected to the hopeless PDFs? Removing the conflict does not solve the problem, but that doesn't mean it's not a clue!
I could explore more if I could capture the output that Sphinx sends to rst2pdf. | Sphinx PDF output is bad. How do I chase down the cause? | 0 | 0 | 0 | 1,143 |
25,839,400 | 2014-09-15T00:44:00.000 | 1 | 0 | 1 | 0 | ipython-notebook | 27,091,821 | 1 | true | 0 | 0 | IPython Notebook is not intended for tasks with heavy computation or large amounts of output, because such things are really for a standalone program rather than a notebook.
To fix such issues, create a standalone application (script), run it from the console, then paste the meaningful results into the IPython notebook. | 1 | 1 | 0 | I am using the LATEST (2.2.0) IPython Notebook. When I create a notebook with a loop that writes many lines (about 20,000), it runs forever, I guess, since I always see the running icon at the top right. Even if I restart the computer and reopen the notebook, it goes into running mode automatically, and then I am almost unable to do anything on the page. I have to copy the code and open a new page to fix it.
How can I fix such a hang when opening a notebook that is too large? I have tried the kernel "interrupt" and "restart" menu items and they seem to have no effect at all. | ipython notebook hang when open a large notebook | 1.2 | 0 | 0 | 849 |
25,841,900 | 2014-09-15T06:19:00.000 | 0 | 0 | 0 | 0 | android,python-2.7,dump,android-uiautomator,androidviewclient | 25,967,652 | 4 | false | 0 | 1 | The QWERTY keyboard is a WebView, and UiAutomator does not support WebViews at present. AndroidViewClient is based on UiAutomator and therefore does not capture the keyboard.
If your objective is to just type the text, you can first detect if the focus is on the text field and then use device.type('your_text'). | 1 | 2 | 0 | I'm using android version 4.4.2 and python 2.7 for UI automation.
When I try to capture the view using UiAutomator/culebra/dump, I'm not able to capture the QWERTY keypad view. Please help me with this.
I need to touch and type on the QWERTY keypad, and I should be able to type alphanumeric text and smileys. Also, once typed, I should verify that what is displayed on the screen is what I intended to type.
Thanks in advance. | In android view client I'm not able to capture qwerty keyboard in message | 0 | 0 | 0 | 849 |
25,842,002 | 2014-09-15T06:27:00.000 | 2 | 0 | 1 | 0 | python,functional-programming | 25,842,095 | 2 | true | 0 | 0 | No, the standard way to do this is with try... except.
There is no mechanism to hide or suppress any generic exception within a function. I suspect many Python users would consider indiscriminate use of such a function to be un-Pythonic for a couple reasons:
It hides information about what particular exception occurred. (You might not want to handle all exceptions, since some could come from other libraries and indicate conditions that your program can't recover from, like running out of disk space.)
It hides the fact that an exception occurred at all; the default value returned in case of an exception might coincide with a valid non-default value. (Sometimes reasonable, sometimes not really so.)
One of the principles of the Pythonic philosophy, I believe, is that "explicit is better than implicit," so Python generally avoids automatic type casting and error recovery, which are features of more "implicit-friendly" languages like Perl.
Although the try... except form can be a bit verbose, in my opinion it has a lot of advantages in terms of clearly showing where an exception may occur and what the control flow is around that exception. | 1 | 5 | 0 | Does Python have a feature that allows one to evaluate a function or expression and, if the evaluation fails (an exception is raised), return a default value?
Pseudo-code:
evaluator(function/expression, default_value)
The evaluator will try to execute the function or expression and return the result if the execution is successful; otherwise the default_value is returned.
I know I can create a user-defined function using try and except to achieve this, but I want to know if the batteries are already included before going off and creating a custom solution. | Python: return a default value if function or expression fails | 1.2 | 0 | 0 | 2,831 |
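There is no such built-in, but the custom solution is only a few lines — a minimal sketch using try/except (catching Exception broadly here, which the answer above warns against for real code):

```python
def evaluate(func, default, *args, **kwargs):
    # Call func(*args, **kwargs); on any exception, return default.
    # In real code, catch only the specific exceptions you expect.
    try:
        return func(*args, **kwargs)
    except Exception:
        return default

print(evaluate(int, -1, '42'))    # 42
print(evaluate(int, -1, 'oops'))  # -1
```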
25,843,607 | 2014-09-15T08:20:00.000 | 0 | 0 | 1 | 0 | c#,python,debugging,pycharm | 50,284,697 | 1 | false | 0 | 0 | PyCharm has a "Step Over" button right next to the step into button. Step over goes into the next line in the file rather than to the next line that will be executed. The keyboard shortcut for step over is F8. | 1 | 11 | 0 | Is there a way to mark a certain method in python so that the debugger won't step into it while debugging ?
(I'm using PyCharm, so if there's something specific that the IDE can help me with, that would be great too)
For those familiar with C# - I'm looking for a DebuggerStepThrough attribute in python... | DebuggerStepThrough in python? | 0 | 0 | 0 | 81 |
25,843,807 | 2014-09-15T08:32:00.000 | 1 | 0 | 1 | 0 | python,setuptools,multi-user | 25,843,859 | 1 | false | 0 | 0 | Prefix your installation command with sudo and you will install the package globally. | 1 | 0 | 0 | I found out that packages that I installed with setuptools are not accessible by other users. I understand that this behavior is logical, especially because I installed them in develop mode. However, I would like to give other users on my server access to these packages: they are quite complicated to install.
So my questions are:
for the future, is there a way to do (develop) install for all users, or some multiuser mode (eg. group)?
Is there a way to "simply" give access to such packages?
For both, I guess the main trouble is with dependencies.
[I am running Ubuntu 13.04 (I can update if necessary), but answers for any OS are welcome] | Can I install a python package for all users (with setuptools in develop mode) | 0.197375 | 0 | 0 | 6,040 |
25,847,411 | 2014-09-15T11:49:00.000 | 2 | 0 | 1 | 0 | python,vectorization,numba | 40,950,835 | 1 | false | 0 | 0 | You can limit the number of threads that target=parallel will use by setting the NUMBA_NUM_THREADS envvar. Note that you can't change this after Numba is imported, it gets set when you first start it up. You can check whether it works by examining the value of numba.config.NUMBA_DEFAULT_NUM_THREADS | 1 | 1 | 1 | Does anyone know if there is a way to configure anaconda such that @vectorize does not take all the processors available in the machine? For example, if I have an eight core machine, I only want @vectorize to use four cores. | Numba vectorize maxing out all processors | 0.379949 | 0 | 0 | 430 |
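A minimal sketch of the approach described in the answer above (assuming Numba is installed; the env var must be set before Numba is first imported):

```python
import os
os.environ['NUMBA_NUM_THREADS'] = '4'   # cap the threads before importing

import numba
# Per the answer above, inspect numba's config to confirm it took effect:
print(numba.config.NUMBA_DEFAULT_NUM_THREADS)
```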
25,851,142 | 2014-09-15T15:05:00.000 | 4 | 0 | 1 | 0 | django,python-2.7,virtualenv | 25,851,210 | 1 | true | 1 | 0 | It doesn't matter where the directory is - the only important thing is that you activate the virtual environment every time you want to work on the project.
I personally prefer to have the project directory inside the virtual env directory, but that is not required.
One caveat: don't put the virtual env inside your project directory. That may cause problems with test discovery and with git. | 1 | 1 | 0 | What is the proper way of adding an already existing Django project to a newly created virtual environment? Do I just move the project to the virtual environment root directory? | Django virtual environment | 1.2 | 0 | 0 | 149 |
25,852,330 | 2014-09-15T16:09:00.000 | 1 | 1 | 1 | 0 | python,cron,crontab | 25,852,354 | 1 | false | 0 | 0 | That would happen if you're running the python program as root (which would happen if you're using root's crontab).
To fix it, just remove it with sudo rm /path/to/file.pyc, and make sure to run the program as your user next time. If you want to keep using root's crontab, you could use su youruser -c yourprogram, but the cleanest way would be simply to use your user's crontab. | 1 | 1 | 0 | I have a Python program, stored on Dropbox, which runs via cron on a couple of different machines. For some reason, recently one of the .pyc files is being created with root as the owner, which means that Dropbox doesn't have permission to sync it anymore.
Why would it do that, and how do I change it? | Python pyc files created with root as owner | 0.197375 | 0 | 0 | 935 |
25,854,722 | 2014-09-15T18:42:00.000 | 1 | 0 | 1 | 1 | python,bash,subprocess,parent-child,popen | 25,854,761 | 3 | false | 0 | 0 | Why not just create a shell script with all the commands you need to run, then just use a single subprocess.Popen() call to run it? If the contents of the commands you need to run depend on results calculated in your Python script, you can just create the shell script dynamically, then run it. | 1 | 1 | 0 | I need to run a lot of bash commands from Python. For the moment I'm doing this with
subprocess.Popen(cmd, shell=True)
Is there any solution to run all these commands in the same shell? subprocess.Popen opens a new shell at every execution, and I need to set up all the necessary variables at every call in order for the cmd command to work properly. | Prevent creating new child process using subprocess in Python | 0.066568 | 0 | 0 | 972 |
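If a single generated script (as the answer above suggests) isn't an option, another minimal sketch is to start one persistent shell and feed it every command on stdin, so exported variables survive between commands (the shell path and commands are illustrative):

```python
import subprocess

commands = [
    'export MY_VAR=hello',   # state set here persists for later commands
    'echo $MY_VAR',
    'cd /tmp && pwd',
]
shell = subprocess.Popen(['/bin/bash'],
                         stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE)
out, _ = shell.communicate('\n'.join(commands).encode())
print(out.decode())
```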
25,858,087 | 2014-09-15T22:52:00.000 | 1 | 0 | 0 | 0 | android,python,python-2.7,kivy | 25,858,139 | 1 | true | 0 | 1 | I want to find a way to make it so that the screenshot() gets saved in /sdcard/Pictures.
The argument to screenshot is the filepath to save at, just write Window.screenshot('/sdcard/Pictures'). | 1 | 0 | 0 | Is there a way to use the OS module in python to save a jpeg created by the screenshot() function in Kivy? I am on Android so I want to find a way to make it so that the screenshot() gets saved in /sdcard/Pictures.
If I don't have to use the OS module, how would I do it?
Please use examples and add code snippets that other users and I can use for future reference.
I have been stuck on this issue for a long time.
Thanks in advance!!!! | How to use os module to save a jpeg to a certain path - Using kivy screenshot() | 1.2 | 0 | 0 | 77 |
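A minimal sketch of the call described in the answer above (note this is an assumption about the exact signature: some Kivy versions take the target as a filename pattern via the name keyword rather than a bare directory):

```python
from kivy.core.window import Window

# Save the screenshot under /sdcard/Pictures, per the answer above.
Window.screenshot(name='/sdcard/Pictures/shot.png')
```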
25,858,091 | 2014-09-15T22:52:00.000 | 5 | 0 | 0 | 0 | python,quickfix,fix-protocol | 25,858,986 | 2 | true | 1 | 0 | (edit -- I have turned off the data dictionary in the config file -- could it have anything to do with that?)
Yep, that's exactly the problem.
Without the DD, your engine doesn't know when a repeating group ends or begins. As far as it's concerned, there's no such thing as repeating groups.
You need a DD, and you need to make sure it matches your counterparty's message and field set. If they've added custom fields or messages, you need to make sure your DD reflects that. | 2 | 1 | 0 | I am using quickfix in Windows with python bindings. I have been able to make market data requests in the past. I recently changed to a different API provider (Cunningham, aka CTS) and am encountering a lot of issues. At least one of them, however, seems to be internal to quickfix. It is baffling me.
When I send a market data request, I get back a response. It is a typical 35=W message, a market snapshot.
Quickfix is rejecting this message because tag 269 appears more than once!
Of course, tag 269 is MDEntryType, it is supposed to occur more than once. Notice also that tag 268, NoMDEntries, is defined and says there are 21 entries in the group.
I think this is internal to quickfix because quickfix is generating an error message and sending it back to CTS. Also, this error aborts the message before it can get passed to the fromApp function. (I know because my parsers which apply themselves to the message whenever fromApp is called are not even getting this message).
Any ideas? The message is below.
(edit -- I have turned off the data dictionary in the config file -- could it have anything to do with that?)
<20140915-22:39:11.953, FIX.4.2:XXXXX->CTS, incoming>
(8=FIX.4.2 ☺ 9=836 ☺ 35=W ☺ 34=4 ☺ 49=CTS ☺ 56=XXXXX ☺ 52=20140915-22:39:11.963 ☺ 48=XDLCM
E_F ZN (Z14) ☺ 387=2559 ☺ 965=2 ☺ 268=21 ☺ 269=0 ☺ 270=124156250 ☺ 271=646 ☺ 1023=1 ☺ 269=0 ☺ 270=
124140625 ☺ 271=918 ☺ 1023=2 ☺ 269=0 ☺ 270=124125000 ☺ 271=1121 ☺ 1023=3 ☺ 269=0 ☺ 270=124109375
☺ 271=998 ☺ 1023=4 ☺ 269=0 ☺ 270=124093750 ☺ 271=923 ☺ 1023=5 ☺ 269=0 ☺ 270=124078125 ☺ 271=1689 ☺
1023=6 ☺ 269=0 ☺ 270=124062500 ☺ 271=2011 ☺ 1023=7 ☺ 269=0 ☺ 270=124046875 ☺ 271=1782 ☺ 1023=8 ☺ 2
69=0 ☺ 270=124031250 ☺ 271=2124 ☺ 1023=9 ☺ 269=0 ☺ 270=124015625 ☺ 271=1875 ☺ 1023=10 ☺ 269=1 ☺ 27
0=124171875 ☺ 271=422 ☺ 1023=1 ☺ 269=1 ☺ 270=124187500 ☺ 271=577 ☺ 1023=2 ☺ 269=1 ☺ 270=12420312
5 ☺ 271=842 ☺ 1023=3 ☺ 269=1 ☺ 270=124218750 ☺ 271=908 ☺ 1023=4 ☺ 269=1 ☺ 270=124234375 ☺ 271=1482
☺ 1023=5 ☺ 269=1 ☺ 270=124250000 ☺ 271=1850 ☺ 1023=6 ☺ 269=1 ☺ 270=124265625 ☺ 271=1729 ☺ 1023=7 ☺
269=1 ☺ 270=124281250 ☺ 271=2615 ☺ 1023=8 ☺ 269=1 ☺ 270=124296875 ☺ 271=1809 ☺ 1023=9 ☺ 269=1 ☺ 27
0=124312500 ☺ 271=2241 ☺ 1023=10 ☺ 269=4 ☺ 270=124156250 ☺ 271=1 ☺ 10=140 ☺ )
<20140915-22:39:12.004, FIX.4.2:XXXX->CTS, event>
(Message 4 Rejected: Tag appears more than once:269)
<20140915-22:39:12.010, FIX.4.2:XXXX->CTS, outgoing>
(8=FIX.4.2 ☺ 9=102 ☺ 35=3 ☺ 34=4 ☺ 49=XXXX ☺ 52=20140915-22:39:12.009 ☺ 56=CTS ☺ 45=4 ☺ 58=
Tag appears more than once ☺ 371=269 ☺ 372=W ☺ 10=012 ☺ ) | Quickfix failing to read repeating group | 1.2 | 0 | 0 | 1,635 |
25,858,091 | 2014-09-15T22:52:00.000 | 1 | 0 | 0 | 0 | python,quickfix,fix-protocol | 49,272,369 | 2 | false | 1 | 0 | I realize this thread is years old but I had this exact problem and finally resolved it so I am putting it here to help anyone else that stumbles across this.
The issue was that in my config I was using the 'DataDictionary=..' parameter. Changing this to 'AppDataDictionary=...' solved my problem.
Steve | 2 | 1 | 0 | I am using quickfix in Windows with python bindings. I have been able to make market data requests in the past. I recently changed to a different API provider (Cunningham, aka CTS) and am encountering a lot of issues. At least one of them, however, seems to be internal to quickfix. It is baffling me.
When I send a market data request, I get back a response. It is a typical 35=W message, a market snapshot.
Quickfix is rejecting this message because tag 269 appears more than once!
Of course, tag 269 is MDEntryType, it is supposed to occur more than once. Notice also that tag 268, NoMDEntries, is defined and says there are 21 entries in the group.
I think this is internal to quickfix because quickfix is generating an error message and sending it back to CTS. Also, this error aborts the message before it can get passed to the fromApp function. (I know because my parsers which apply themselves to the message whenever fromApp is called are not even getting this message).
Any ideas? The message is below.
(edit -- I have turned off the data dictionary in the config file -- could it have anything to do with that?)
<20140915-22:39:11.953, FIX.4.2:XXXXX->CTS, incoming>
(8=FIX.4.2 ☺ 9=836 ☺ 35=W ☺ 34=4 ☺ 49=CTS ☺ 56=XXXXX ☺ 52=20140915-22:39:11.963 ☺ 48=XDLCM
E_F ZN (Z14) ☺ 387=2559 ☺ 965=2 ☺ 268=21 ☺ 269=0 ☺ 270=124156250 ☺ 271=646 ☺ 1023=1 ☺ 269=0 ☺ 270=
124140625 ☺ 271=918 ☺ 1023=2 ☺ 269=0 ☺ 270=124125000 ☺ 271=1121 ☺ 1023=3 ☺ 269=0 ☺ 270=124109375
☺ 271=998 ☺ 1023=4 ☺ 269=0 ☺ 270=124093750 ☺ 271=923 ☺ 1023=5 ☺ 269=0 ☺ 270=124078125 ☺ 271=1689 ☺
1023=6 ☺ 269=0 ☺ 270=124062500 ☺ 271=2011 ☺ 1023=7 ☺ 269=0 ☺ 270=124046875 ☺ 271=1782 ☺ 1023=8 ☺ 2
69=0 ☺ 270=124031250 ☺ 271=2124 ☺ 1023=9 ☺ 269=0 ☺ 270=124015625 ☺ 271=1875 ☺ 1023=10 ☺ 269=1 ☺ 27
0=124171875 ☺ 271=422 ☺ 1023=1 ☺ 269=1 ☺ 270=124187500 ☺ 271=577 ☺ 1023=2 ☺ 269=1 ☺ 270=12420312
5 ☺ 271=842 ☺ 1023=3 ☺ 269=1 ☺ 270=124218750 ☺ 271=908 ☺ 1023=4 ☺ 269=1 ☺ 270=124234375 ☺ 271=1482
☺ 1023=5 ☺ 269=1 ☺ 270=124250000 ☺ 271=1850 ☺ 1023=6 ☺ 269=1 ☺ 270=124265625 ☺ 271=1729 ☺ 1023=7 ☺
269=1 ☺ 270=124281250 ☺ 271=2615 ☺ 1023=8 ☺ 269=1 ☺ 270=124296875 ☺ 271=1809 ☺ 1023=9 ☺ 269=1 ☺ 27
0=124312500 ☺ 271=2241 ☺ 1023=10 ☺ 269=4 ☺ 270=124156250 ☺ 271=1 ☺ 10=140 ☺ )
<20140915-22:39:12.004, FIX.4.2:XXXX->CTS, event>
(Message 4 Rejected: Tag appears more than once:269)
<20140915-22:39:12.010, FIX.4.2:XXXX->CTS, outgoing>
(8=FIX.4.2 ☺ 9=102 ☺ 35=3 ☺ 34=4 ☺ 49=XXXX ☺ 52=20140915-22:39:12.009 ☺ 56=CTS ☺ 45=4 ☺ 58=
Tag appears more than once ☺ 371=269 ☺ 372=W ☺ 10=012 ☺ ) | Quickfix failing to read repeating group | 0.099668 | 0 | 0 | 1,635 |
25,859,704 | 2014-09-16T02:16:00.000 | 1 | 0 | 0 | 1 | python,django,apache,tornado,wsgi | 25,861,972 | 2 | false | 1 | 0 | You would be better off to use nginx as a front end proxy on port 80 and have it proxy to both Apache/mod_wsgi and Tornado as backends on their own ports. Apache/mod_wsgi will actually benefit from this as well if everything is setup properly as nginx will isolate Apache from slow HTTP clients allowing Apache to perform better with fewer resources. | 1 | 0 | 0 | I have Apache set up as a front end for Django and it's working fine. I also need to handle web sockets so I have Tornado running on port 8888. Is it possible to have Apache be a front end for Tornado so I don't have to specify the 8888 port?
My current /etc/apache2/sites-enabled/000-default.conf file is:
WSGIDaemonProcess myappiot python-path=/home/ubuntu/myappiot/sw/www/myappiot:/usr/local/lib/python2.7/site-packages
WSGIProcessGroup myappiot
WSGIScriptAlias / /home/ubuntu/myappiot/sw/www/myappiot/myappiot/wsgi.py
# The ServerName directive sets the request scheme, hostname and port that
# the server uses to identify itself. This is used when creating
# redirection URLs. In the context of virtual hosts, the ServerName
# specifies what hostname must appear in the request's Host: header to
# match this virtual host. For the default virtual host (this file) this
# value is not decisive as it is used as a last resort host regardless.
# However, you must set it for any further virtual host explicitly.
#ServerName www.example.com
ServerAdmin webmaster@localhost
DocumentRoot /var/www/html
# Available loglevels: trace8, ..., trace1, debug, info, notice, warn,
# error, crit, alert, emerg.
# It is also possible to configure the loglevel for particular
# modules, e.g.
#LogLevel info ssl:warn
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
# For most configuration files from conf-available/, which are
# enabled or disabled at a global level, it is possible to
# include a line for only one particular virtual host. For example the
# following line enables the CGI configuration for this host only
# after it has been globally disabled with "a2disconf".
#Include conf-available/serve-cgi-bin.conf | Can Apache be used as a front end for Django and Tornado at the same time? | 0.099668 | 0 | 0 | 534 |
25,866,102 | 2014-09-16T10:14:00.000 | 0 | 0 | 1 | 0 | python-sphinx | 72,362,123 | 5 | false | 0 | 0 | Make sure there is a blank line above the line that says .. image::
My image wasn't showing, but I just had to separate it from the text line above it. | 2 | 63 | 0 | I am quite new to using Sphinx, doing documentation for the first time for a Python project.
How do I embed an image, for example, in the Sphinx documentation? | How do we embed images in sphinx docs? | 0 | 0 | 0 | 44,266 |
25,866,102 | 2014-09-16T10:14:00.000 | 0 | 0 | 1 | 0 | python-sphinx | 72,366,702 | 5 | false | 0 | 0 | Since you have to use a relative path, I created an images directory, in which case I needed to use the following for the image path:
.. image:: ..\\images\\image.png | 2 | 63 | 0 | I am quite new to using Sphinx, doing documentation for the first time for a Python project.
How do I embed an image, for example, in the Sphinx documentation? | How do we embed images in sphinx docs? | 0 | 0 | 0 | 44,266 |
25,870,184 | 2014-09-16T13:31:00.000 | 0 | 0 | 0 | 0 | python,exception,flask,mule,payload | 30,016,140 | 1 | false | 1 | 0 | Did you try getExceptionMessage()?
The point to note here is that when an exception occurs, the payload is lost in the Mule message. You have to store the payload in a session variable to get at it in the exception case. | 2 | 0 | 0 | I'm testing my REST API on Mule ESB, and when the API, implemented using Python Flask, returns a 200 HTTP status, Mule correctly displays the message returned (a JSON, as it is expecting). But when I return any other status, I can't seem to display the message returned, which is a string.
I'm trying to configure the exception thrown to show the original message returned by the API. How can I access it? I'm using Anypoint Studio.
Thanks in advance. | Mule Exception Payload | 0 | 0 | 0 | 334 |
25,871,082 | 2014-09-16T14:10:00.000 | 3 | 0 | 0 | 0 | python,django,pip,monkeypatching | 25,871,179 | 1 | true | 1 | 0 | Almost everyone's on GitHub these days. Fork the repos, make your changes, and point your requirements file to your forks.
You might even want to make pull requests back to the maintainers, which will help these issues be fixed even more quickly. | 1 | 0 | 0 | I've just upgraded to Django 1.7 and I've found that a couple of the modules we rely on which are installed by pip have small issues.
I've played on a test box and found that each of these modules only needs a couple of lines to be changed to support Django 1.7. Both have import errors which are easily fixed.
What would be the best way to make a temporary patch to these files?
Ideally I would like the fix to live with my project until updated modules appear and I can remove it. We're running puppet on the production systems so I could just overwrite the two files with new versions but this seems too easy to lose track of. Monkey patching might work, but as they are import errors I'm not sure how to cut this out before it fails. | Python and Django 1.7 I need to change the source of some of the supporting modules | 1.2 | 0 | 0 | 53 |
25,871,611 | 2014-09-16T14:34:00.000 | 0 | 0 | 0 | 0 | python,django,python-2.7,flask,tornado | 25,871,904 | 2 | false | 1 | 0 | Disclaimer: questions like "what framework is better" often don't have an exact answer. It depends on many factors, and one of them is personal preference. So below is just my opinion.
I think that Django is the best choice if you don't have strong arguments for other frameworks, even if you don't plan to use its built-in admin part.
Yes, Flask is simpler. But Django has a bigger community and more batteries included.
Also, Django will constrain your project architecture, i.e. you'll have to follow the approaches and project structure that Django provides. Someone could say that is bad. Well, maybe. But if you are new to Python and web development, you're better off following it. Django is developed and maintained by good programmers, so you will learn good patterns from working with it. | 2 | 0 | 0 | We are about to develop accounting software with Python. The software will be based on SOA (REST) and hosted in the cloud.
We are actually PHP developers, but we'd like to switch to Python for our future software. Our Python experience is 3 months, and we developed a mid-sized social media application with Python / Tornado.
After googling about Python frameworks, we decided to use Django because it covers the libraries that we want to use, such as the ORM, forms etc., and we think that its community is quite good compared with Flask.
Django provides an admin interface, which we will NOT be using. We'd like to develop our own class generators to create forms etc. Some say that if you won't use the Django admin interface, you'd better choose Flask instead, because it is more minimal and easier to use for Python beginners.
Therefore we are confused. Any help would be appreciated. | Not using "admin interface" is a barrier for developing with Django? | 0 | 0 | 0 | 229 |
25,871,611 | 2014-09-16T14:34:00.000 | 1 | 0 | 0 | 0 | python,django,python-2.7,flask,tornado | 25,872,121 | 2 | true | 1 | 0 | It's certainly not the case that the only benefit from Django is the admin. As you say, there are many good features: the ORM, template language, forms, authentication, and especially the third-party ecosystem are all great reasons to use Django. Now you can get all those same features with Flask if you're prepared to do some integration work, but the argument can just as easily be made that Django is better for beginners precisely because it does come with all those things built-in. | 2 | 0 | 0 | We are about to develop accounting software with Python. The software will be based on SOA (REST) and hosted in the cloud.
We are actually PHP developers, but we'd like to switch to Python for our future software. Our Python experience is 3 months, and we developed a mid-sized social media application with Python / Tornado.
After googling about Python frameworks, we decided to use Django because it covers the libraries that we want to use, such as the ORM, forms etc., and we think that its community is quite good compared with Flask.
Django provides an admin interface, which we will NOT be using. We'd like to develop our own class generators to create forms etc. Some say that if you won't use the Django admin interface, you'd better choose Flask instead, because it is more minimal and easier to use for Python beginners.
Therefore we are confused. Any help would be appreciated. | Not using "admin interface" is a barrier for developing with Django? | 1.2 | 0 | 0 | 229 |
25,872,043 | 2014-09-16T14:54:00.000 | 1 | 0 | 0 | 0 | python,graph,networkx | 25,872,307 | 3 | false | 0 | 1 | You don't need to get rid of them; they don't do anything other than indicate the string type (unicode). This can be helpful sometimes, and I can't think of a time when it would be harmful. | 1 | 1 | 0 | I created a graph in NetworkX by importing edge information through nx.read_edgelist(). This all works fine and the graph loads.
The problem is when I print the neighbors of a node, I get the following for example...
[u'own', u'record', u'spending', u'companies', u'back', u'shares', u'their', u'amounts', u'are', u'buying']
This happens for all calls to the nodes and edges of the graph. It is obviously not changing the names of the nodes seeing as it is outside of the quotations.
Can someone advise me how to get rid of these 'u's when printing out the graph nodes?
I am a Python novice and I'm sure it is something very obvious and easy. | Networkx appends 'u' before node names after reading in from an edge list. How to get rid? | 0.066568 | 0 | 1 | 539 |
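If the u prefixes still bother you in printed output, a minimal Python 2 sketch of converting the labels just for display (the graph itself can keep its unicode labels):

```python
import networkx as nx

G = nx.Graph()
G.add_edge(u'own', u'record')   # unicode labels, as read_edgelist produces

# Convert to plain strings only when printing:
print([str(n) for n in G.neighbors('own')])   # ['record'], no u prefix
```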
25,877,330 | 2014-09-16T19:54:00.000 | 0 | 0 | 1 | 0 | python,class,types,enums | 30,245,717 | 1 | false | 0 | 0 | Go ahead and return None, a Vector, or a Segment. Those are the three possibilities, and even if you wrap it up the user will, at some point, need to deal with the differences.
So keep it simple, and let your users use isinstance() (which is perfectly Pythonic). | 1 | 0 | 0 | I'm trying to write a Python class Segment, which consists of two Vectors representing its points and functions relating these points.
Segments can intersect, so I'm trying to write a method, s1.intersect(s2), which returns information about how Segments s1 and s2 intersect. Depending on the segments, a different number of points are required for representing the intersection:
No intersection -> 0 points (no particular data structure is required)
A single intersection -> 1 point (a single Vector is required)
A region of intersection -> 2 points (a Segment is required)
In other words, it seems like 2-3 data types are needed for different situations. This is inconvenient because user of the function would have to manually differentiate between them.
Here are some ways I can think of to return these data types:
Return a list of Vector containing either 0, 1 or 2 items.
This would get the information across and the job done, but the returned values don't reflect the data types they are actually referring to -- namely Vector, Segment, and some notice of failure.
Return different data types: Vector, Segment, or False
In this case, users would have to use isinstance to tell these apart or risk an invalid type problem. This is also ugly.
Return a wrapper class, Intersection, that stores a Vector, Segment, or False.
This has the benefit of returning the same data type every time, but users would still have to use isinstance to differentiate between types.
What is the best way to deal with multiple return types in Python? | Proper way to handle variable data types in Python | 0 | 0 | 0 | 77 |
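A minimal sketch of the shape the answer above recommends (the geometry is stubbed out; Vector and Segment stand in for the question's own classes):

```python
class Vector(object):
    pass

class Segment(object):
    def intersect(self, other):
        # Stubbed out: real code would return None (no intersection),
        # a Vector (single point), or a Segment (overlap region).
        return None

result = Segment().intersect(Segment())
if result is None:
    print('no intersection')
elif isinstance(result, Vector):
    print('single point of intersection')
elif isinstance(result, Segment):
    print('overlapping region')
```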
25,882,126 | 2014-09-17T04:07:00.000 | 0 | 0 | 1 | 0 | python,arrays | 25,882,840 | 6 | false | 0 | 0 | Although zip() is the preferred solution, I believe the original problems with the way you were doing it were:
You were not converting the integers to strings (solved in To Click's answer)
You were not adding the strings (still incorrect in To Click's answer)
There could be a problem if the arrays are of different sizes, a problem taken care of by zip(). | 1 | 2 | 0 | I have two different lists which I would like to combine
a = ['A', 'B', 'C']
b = [2, 10, 120]
So the desired output should be like this:
ab = ['A2', 'B10', 'C120']
I've tried this:
ab = [a[i]*b[i] for i in range(len(a))]
But I now understand that this will only work if I want to multiply two arrays of integers. So what should I do in order to get the desired output as above?
Thank you. | How to multiply python string and integer arrays | 0 | 0 | 0 | 3,371 |
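The zip() solution the answers above prefer, spelled out (it also stops cleanly at the shorter list if the lengths differ):

```python
a = ['A', 'B', 'C']
b = [2, 10, 120]

# Pair the elements up, convert each int to str, and concatenate:
ab = [s + str(n) for s, n in zip(a, b)]
print(ab)  # ['A2', 'B10', 'C120']
```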
25,884,242 | 2014-09-17T07:01:00.000 | 0 | 0 | 0 | 0 | mongodb,python-2.7,apscheduler | 25,898,609 | 1 | false | 0 | 0 | Simply give the mongodb jobstore a different "database" argument. It seems like the API documentation for this job store was not included in what is available on ReadTheDocs, but you can inspect the source and see how it works. | 1 | 0 | 0 | I want to store the job in MongoDB using Python, and it should be scheduled for a specific time.
I did some googling and found that APScheduler will do it. I downloaded the code and tried to run it.
It schedules the job correctly and runs it, but it stores the job in the apscheduler database of MongoDB; I want to store the job in my own database.
Can you please tell me how to store the job in my own database instead of the default one? | APScheduler store job in custom database of mongodb | 0.197375 | 1 | 0 | 708 |
25,889,636 | 2014-09-17T11:42:00.000 | 0 | 0 | 0 | 0 | python,django,session,satchless | 25,891,456 | 1 | true | 1 | 0 | You haven't shown any code at all, which would have been helpful. But I expect the problem is that you're not passing the request object to the template context in your "home" view: usually this happens automatically if you are using a RequestContext or the render shortcut, which presumably you are doing in the other views. | 1 | 0 | 0 | I'm currently developing an e-commerce website using Django 1.6 and Satchless. I have two applications in my project: home and nos_produits.
I'm trying to store the satchless cart object in the django session in order to display the related informations in my templates. Everything works fine in the scope of my application nos_produits (where I add my cart object to the session) but when I navigate through the application "home" it seems that my cart object doesn't exist in the session, i.e {% if 'cart' in request.session %} is not evaluated to true. So my question is what's exactly the scope of a session in django. Is it limited to the application scope where the session is set or to the whole scope of the project?
Thanks in advance.
EDIT
Found the problem: in my "home" view I used render(request, myTemplate.html) instead of using render(request, myTemplate.html, locals()) | Django session scope | 1.2 | 0 | 0 | 684 |
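A minimal sketch of the corrected view (the template path and context variables are illustrative; passing the request to render() is what makes request.session visible via a RequestContext):

```python
from django.shortcuts import render

def home(request):
    cart = request.session.get('cart')   # whatever was stored earlier
    # An explicit context dict (or locals()) carries your variables;
    # render(request, ...) wires up the RequestContext for you.
    return render(request, 'home/index.html', {'cart': cart})
```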
25,889,837 | 2014-09-17T11:52:00.000 | 3 | 0 | 1 | 0 | python,ptvs | 25,889,931 | 1 | false | 0 | 0 | They are two different things.
Python Tools for Visual Studio is an extension that turns Visual Studio into a Python IDE for vanilla Python, using a standard Python interpreter/virtual machine. It is not a compiler in itself, just an IDE.
IronPython is a separate compiler for Python whereby Python is compiled to the .NET Platform to run on the CLR. The IronPython Tools for Visual Studio integrate that compiler into Visual Studio.
Edit (from comment below):
IronPython and PTVS work fine together and can be installed on the same machine. | 1 | 1 | 0 | what is the difference between Python Tools for Visual Studio and the IronPython Tools for Visual Studio included in the IronPython distribution? If there is a difference can they live side by side? | What's the difference between PTVS and IronPython Tools for Visual Studio? | 0.53705 | 0 | 0 | 477 |
25,891,415 | 2014-09-17T13:05:00.000 | 1 | 0 | 0 | 0 | python,spss | 25,896,919 | 2 | false | 0 | 0 | Glad that is solved. The 32/64-bit issue has been a regular confusion for Statistics users. | 2 | 0 | 0 | I originally installed Canopy to use Python, but it would not recognize the SPSS modules, so I removed canopy, re-downloaded python2.7, and changed my PATH to ;C:\Python27. In SPSS, I changed the default python file directory to C:\Python27.
Python still will not import the SPSS modules. I have a copy of SPSS 22, so python is integrated into it...
Any thoughts on what might be causing this, and how to fix it? | How can I get Python to recognize the SPSSClient Module? | 0.099668 | 0 | 0 | 399 |
25,891,415 | 2014-09-17T13:05:00.000 | 1 | 0 | 0 | 0 | python,spss | 25,892,696 | 2 | true | 0 | 0 | I figured this out only with some help from a friend who had a similar issue. I had downloaded python from python.org, without realizing it was 32 bit. All of the SPSS modules are 64 bit! I downloaded the correct version of python, and then copied the spss modules from my spss install (inside the python folder within spss) into my python library. modules are working now! | 2 | 0 | 0 | I originally installed Canopy to use Python, but it would not recognize the SPSS modules, so I removed canopy, re-downloaded python2.7, and changed my PATH to ;C:\Python27. In SPSS, I changed the default python file directory to C:\Python27.
Python still will not import the SPSS modules. I have a copy of SPSS 22, so python is integrated into it...
Any thoughts on what might be causing this, and how to fix it? | How can I get Python to recognize the SPSSClient Module? | 1.2 | 0 | 0 | 399 |
25,893,266 | 2014-09-17T14:25:00.000 | 0 | 0 | 0 | 0 | python,xml,excel,xlsx,openpyxl | 25,908,953 | 1 | true | 0 | 0 | The Excel format is pretty complicated, with dependencies between components – you can't, for example, be sure that the order of the worksheets in the worksheets folder has any bearing on what the file looks like in Excel.
I don't really understand exactly what you're trying to do, but the existing libraries present an interface for client code that hides the XML layer. If you don't want that, you'll have to root around for the parts you find useful. In openpyxl you want to look at the stuff in openpyxl/reader, specifically worksheet.py.
However, you might have better luck using lxml, as this (using libxml2 in the background) will allow you to load a single XML file into Python and manipulate it directly using the .objectify() method. We don't do this in openpyxl because XML trees consume a lot of memory (and many people have very large worksheets), but the library for working with PowerPoint shows just how easy this can be. | 1 | 0 | 0 | I have built a couple of basic workflows using XML tools on top of XLSX workbooks that are mapped to an XML schema. You would enter data into the spreadsheet, export the XML, and I had some scripts that would then work with the data.
Now I'm trying to eliminate that step and build a more integrated and portable tool that others could use easily by moving from XSLT/XQuery to Python. I would still like to use Excel for the data entry, but have the Python script read the XLSX file directly.
I found a bunch of easy-to-use libraries to read from Excel, but they need you to explicitly state what cells the data is in, like range('A1:C2') etc. The useful thing about using the XML maps was that users could resize or even move tables to fit different rows and rename sheets. Is there a library that would let me select tables as units?
Another approach I tried was to just uncompress the XLSX and just parse the XML directly. The problem with that is that our data is quite complex (taking up to 30-50 sheets) and parsing that in the uncompressed XLSX structure is really daunting. I did find my XML schema within the uncompressed XLSX, so is there any way to reformat the data into this schema outside of Excel? (basically what Excel does when I save a workbook as an .xml file) | XLSX to XML with schema map | 1.2 | 1 | 0 | 2,150 |
25,899,286 | 2014-09-17T19:58:00.000 | 0 | 0 | 1 | 0 | python,audio,machine-learning,signal-processing | 25,902,242 | 2 | false | 0 | 0 | Your points 1 and 2 are not very different: 1) is the end result of a classification problem, and 2) is the set of features that you give for classification. What you need is a good classifier (SVM, decision trees, hierarchical classifiers etc.) and a good set of features (the pitch, formants etc. that you mentioned). | 1 | 0 | 0 | I am trying to identify phonemes in voices using a training database of known ones.
I'm wondering if there is a way of identifying common features within my training sample and using that to classify a new one.
It seems like there are two paths:
Give the process raw/normalised data and it will return similar ones
Extract certain metrics such as pitch, formants etc and compare to training set
My interest is the first!
Any recommendations on machine learning or regression methods/algorithms? | Signal feature identification | 0 | 0 | 0 | 401 |
25,900,806 | 2014-09-17T21:41:00.000 | 4 | 0 | 1 | 0 | python,ipython | 25,900,994 | 1 | true | 0 | 0 | To clear all cell numbers and output from an existing notebook in IPython Notebook:
Open the notebook from the IPython Notebook interface by clicking it.
From the menu bar select Cell -> All Output -> Clear. | 1 | 2 | 0 | Is there a way to make IPython Notebook not show the results of the previous session upon opening an existing notebook? It's not a big deal, but it's fairly annoying to have to scroll through the notebook and manually hide all of my results after starting it so I don't have to scroll for five minutes looking for a specific line. Thanks! | How to make IPython Notebook not show results of the previous session? | 1.2 | 0 | 0 | 401 |
25,901,582 | 2014-09-17T22:48:00.000 | 0 | 1 | 0 | 0 | python,twitter-bootstrap,pdf,flask,weasyprint | 25,938,231 | 1 | true | 1 | 0 | I figured out what the problem was. When I declared the stylesheet I set media="screen"; I removed that attribute and it seemed to fix it. Further research indicated I could also declare a separate stylesheet and set media="print". | 1 | 0 | 0 | Does anybody have any experience rendering web pages in WeasyPrint that are styled using Twitter Bootstrap? Whenever I try, the HTML renders completely unstyled, as if there were no CSS applied to it. | Weasyprint and Twitter Bootstrap | 1.2 | 0 | 0 | 512 |
25,902,367 | 2014-09-18T00:24:00.000 | 1 | 0 | 1 | 1 | python,pycharm,pythonpath | 26,268,411 | 3 | false | 0 | 0 | There are multiple ways to solve this.
In PyCharm go to Run/Edit Configurations, set the environment variable PYTHONPATH to $PYTHONPATH: plus your directory, and hit Apply. The problem with this approach is that the imports will still show as unresolved, but the code will run fine, as Python knows where to find your modules at run time.
If you are using Mac or a Unix system, use the command export PYTHONPATH=$PYTHONPATH: followed by your directory (note that export must be lowercase). On Windows, you will have to add the directory to the PYTHONPATH environment variable.
This is as plarke suggested. | 1 | 1 | 0 | I'm using PyCharm, and in the shell, I can't run a file that isn't in the current directory. I know how to change directories in the terminal. But I can't run files from other folders. How can I fix this? Using Mac 2.7.8. Thanks! | In Python, how can I run a module that's not in my path? | 0.066568 | 0 | 0 | 1,132 |
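Alternatively, a quick sketch of doing it from the script itself; the folder path and module name below are placeholders.

import sys

# Make Python aware of a module living outside the current directory.
sys.path.append("/path/to/other/folder")  # placeholder path

import mymodule  # hypothetical module that lives in that folder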
25,902,674 | 2014-09-18T01:11:00.000 | 1 | 0 | 0 | 0 | python-2.7,python-3.x,ipv6,subnet | 25,939,078 | 1 | false | 0 | 0 | You do not want to break a /64 into smaller networks. See RFC 5375, IPv6 Unicast Address Assignment Considerations, "Using a subnet prefix length other than a /64 will break many features of IPv6..."
RFC 6164, Using 127-Bit IPv6 Prefixes on Inter-Router Links, allows for /127 point-to-point links, "Routers MUST support the assignment of /127 prefixes on point-to-point inter-router links."
And, of course, you are allowed to use /128 for loopback addresses.
All that said, you should only take a single /127 or /128 out of a /64. Subdividing a /64 into multiple subnets is unnecessary and just asking for trouble. We need to change our mindsets from IPv4 scarcity to IPv6 plenty, since there is no problem getting as many /64 blocks as you need; anyone can request and get a /48, which is 65536 /64 networks.
Thanks | Subnet IPv6 subnet /64 into /126 using python netaddr library | 0.197375 | 0 | 1 | 520 |
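If you still need the mechanics despite the caveats in the answer above, a minimal netaddr sketch; the documentation prefix 2001:db8::/64 stands in for your real subnet.

from itertools import islice
from netaddr import IPNetwork

net = IPNetwork("2001:db8::/64")  # placeholder prefix

# subnet() yields child networks of the requested prefix length lazily,
# so take just the first 100 rather than walking the whole /64.
subnets = list(islice(net.subnet(126), 100))
for s in subnets[:3]:
    print(s)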
25,904,218 | 2014-09-18T04:32:00.000 | 0 | 1 | 0 | 0 | python,c++ | 25,904,243 | 1 | true | 0 | 0 | Instead of doing IPC you can simply call Python from C++. You can do this either using the Python C API, or perhaps easier for some people, Boost.Python. This is referred to as "embedding" Python within your application, and it will allow you to directly invoke functions and pass data around without undue copying between processes. | 1 | 0 | 0 | In my project, I have bridged C++ module with python using plain (windows) sockets with proto buffer as serializing/ de-serializing mechanism. Below are requirements of this project:-
1) The C++ module will have 2 channels. Through one it will accept requests and send appropriate replies to the Python module. Through the other it will send updates, which it gets from the backend, to the Python side.
2) Today we are proposing it for 100 users (i.e., request/reply for 100 users + updates, with each message around 50 bytes).
But I want to make sure it works fine later even with 100K users. Also, I am planning to use ZMQ with this, but I don't know much about its performance / latency / bottlenecks.
Can anyone please suggest whether it's an appropriate choice, OR whether there are better tools available?
Thanks in advance for your advice. | C++ and python communication | 1.2 | 0 | 1 | 114
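A minimal sketch of the embedding approach the answer describes, using the CPython C API directly (Boost.Python wraps the same machinery). It assumes the Python development headers are installed; error handling is omitted.

// embed.cpp - run a Python snippet from inside a C++ program.
#include <Python.h>

int main() {
    Py_Initialize();                                   // start the interpreter
    PyRun_SimpleString("print('hello from embedded Python')");
    Py_Finalize();                                     // shut it down cleanly
    return 0;
}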
25,904,333 | 2014-09-18T04:47:00.000 | 2 | 0 | 1 | 1 | python,text,exe,py2exe | 25,904,445 | 1 | true | 0 | 0 | Py2Exe packages your script with a standalone Python interpreter, but under normal circumstances won't "compile" it.
Viewing the source code for a Py2Exe-packaged executable would be trivial. | 1 | 0 | 0 | I made a Python script that saves its output into a .text file. The .text file's contents are scrambled, but the Python script I made can unscramble the text. If I use py2exe to make the script an .exe file, will others be able to see the script it uses to unscramble the text if they have a copy of the .exe file?
25,908,254 | 2014-09-18T08:53:00.000 | 0 | 0 | 0 | 0 | python,django,url-pattern | 25,908,284 | 2 | false | 1 | 0 | You should create a 404.html file inside your TEMPLATE_DIRS path. | 1 | 0 | 0 | Is there a way for a URL in Django to be triggered when no pattern matches the URL requested by the client? Something like:
defaulturl = "/path/to/default/page"
errorpage = "/path/to/error/page"
Thank you! | DJANGO default/error url | 0 | 0 | 0 | 59 |
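A small sketch of how this usually looks in Django; the view paths are hypothetical. With DEBUG = False, Django renders your 404.html template for unmatched URLs automatically, and handler404/handler500 in the root urls.py let you point at custom views instead.

# urls.py (root URLconf)
handler404 = "myapp.views.page_not_found"  # hypothetical view
handler500 = "myapp.views.server_error"    # hypothetical view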
25,916,839 | 2014-09-18T15:32:00.000 | 3 | 1 | 0 | 0 | python,unit-testing,testing,orm,rethinkdb | 25,922,468 | 1 | true | 1 | 0 | You can create all your databases/tables just once for all your tests.
You can also use the raw data directory:
- Start RethinkDB
- Create all your databases/tables
- Commit it.
Before each test, copy the data directory, start RethinkDB on the copy, then when your test is done, delete the copied data directory. | 1 | 3 | 0 | I am close to finishing an ORM for RethinkDB in Python and I got stuck at writing tests. Particularly at those involving save(), get() and delete() operations. What's the recommended way to test whether my ORM does what it is supposed to do when saving or deleting or getting a document?
Right now, for each test in my suite I create a database, populate it with all tables needed by the test models (this takes a lot of time, almost 5 seconds/test!), run the operation on my model (e.g.: save()) and then manually run a query against the database (using RethinkDB's Python driver) to see whether everything has been updated in the database.
Now, I feel this isn't quite right; maybe there is another way to write these tests, or maybe I can design the tests without even running that many queries against the database. Any idea on how I can improve this, or a suggestion on how this really has to be done? | Testing an ORM for RethinkDB | 1.2 | 1 | 0 | 484
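A hedged sketch of the copy-the-data-directory idea from the answer; the directory names, port, and rethinkdb flags are assumptions based on the standard CLI, and teardown/error handling is trimmed.

import shutil
import subprocess
import tempfile

PRISTINE = "rethinkdb_data_pristine"  # seeded once with all databases/tables

def start_fresh_instance():
    workdir = tempfile.mkdtemp()
    datadir = workdir + "/data"
    shutil.copytree(PRISTINE, datadir)
    # Launch a throwaway server against the copied directory.
    proc = subprocess.Popen(
        ["rethinkdb", "--directory", datadir, "--driver-port", "28016"])
    return proc, workdir

def stop_instance(proc, workdir):
    proc.terminate()
    proc.wait()
    shutil.rmtree(workdir)  # throw the copy away after the test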
25,917,996 | 2014-09-18T16:35:00.000 | 1 | 0 | 0 | 1 | python,macos,subprocess | 25,923,581 | 1 | true | 0 | 0 | The cause of the application halt turns out to be not the subprocess.Popen call, but the call to mktemp that creates a temporary file inside the *.app folder, where a Mac app is definitely not permitted to write by default. After commenting this out, the code runs just fine. I'll make note of this and remind myself not to create temp files inside the *.app folder again! | 1 | 0 | 0 | I am developing a GUI application using Kivy that in turn calls an external console program from a Python script using subprocess.Popen and captures its stderr output live. Finally, it works (thanks to SO for this!). I package the application using Pyinstaller, which produces an *.app containing the executable in Contents\MacOS. If I run this executable directly from within Terminal, it runs well. The stderr output can be captured live. But if I try to run the *.app directly, either using the open command from Terminal or by double-clicking its *.app icon from Finder, the call to subprocess.Popen simply halts.
I am not sure about this, but is there any restriction on an OSX app regarding how it can execute external programs? | After turning into OSX app, Python subprocess can't call external console command | 1.2 | 0 | 0 | 559
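A tiny sketch of the fix implied by that answer: let the tempfile module pick a writable location (the system temp directory) instead of creating scratch files inside the .app bundle.

import tempfile

with tempfile.NamedTemporaryFile(suffix=".log", delete=False) as tmp:
    tmp.write(b"scratch data")
    print("temp file at", tmp.name)  # system temp dir, not the bundle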
25,919,214 | 2014-09-18T17:48:00.000 | 3 | 0 | 1 | 0 | python,dictionary,string-length | 25,919,479 | 3 | true | 0 | 0 | It depends on what you mean by "strings" and frequency dictionary:
If you are mentioning python data types str and dict:
Strings keep track of their length in a field of their C structure, which means len(str) is O(1), aka constant time (and with a very small constant).
With a frequency table you have to sum the counts, which is O(k) where k is the number of distinct letters in the string (assuming constant time integer operations, which isn't strictly true). Since the number of characters is bounded O(k) = O(1), so, asymptotically, they take the same time, but the difference in constants is pretty big: len(str) will always be faster. (also, if you consider unicode characters k can be in the order of millions, so it can be 10^6 times slower to use the frequency table).
If you mean "strings" in general then counting the character in a string takes O(n) time, while summing the counts in a frequency table is, as already stated, O(k) which is O(1) assuming a bounded number of characters. However this doesn't take into account the time to create the frequency table.
I'm assuming constant time operations on integers since this makes sense for real world usage. However, even with unbounded integers, the time taken to sum k integers would be less than O(n). In fact it should be about O(log(n)) because the operation has to compute the log(n) bits of the representation of n, which is the length of the string.
(with the assumption of a bounded number of characters, otherwise you could have strings with length n and consisting of n distinct character for each n...)
Here I'm assuming that the problem you wanted to solve is to compute the length of the string by either counting the single characters or summing some counts.
However if you really meant len(a_string) vs len(a_dict) the answer is quite simpler: they both take the same O(1) time since both str and dict store a field with their length. | 1 | 0 | 0 | So I was in an interview today and my interviewer and I weren't sure which is more efficient: given a long string of characters which is faster 'len(str)' or 'len(freqDict)' where freqDict is a dictionary with the character as the key and the frequency of the character in the string as the value? | Length of String vs. # of keys in Dictionary | 1.2 | 0 | 0 | 1,037 |
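A quick empirical check of the discussion above: len() just reads a stored field, while summing a frequency table does work proportional to the number of distinct characters.

import timeit
from collections import Counter

s = "abcdefghij" * 100000     # a long string
freq = Counter(s)             # its character frequency table

print(timeit.timeit(lambda: len(s), number=100000))
print(timeit.timeit(lambda: sum(freq.values()), number=100000))
# len(s) should come out dramatically faster, even though both are
# effectively constant time for a bounded alphabet.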
25,921,478 | 2014-09-18T20:08:00.000 | 0 | 0 | 0 | 0 | python,sockets,websocket,twisted | 25,923,833 | 1 | false | 0 | 0 | “Yes.”
If you'd like a more thorough answer, you'll have to include more information in your question. | 1 | 1 | 0 | I have an existing chat socket server built on top of twisted's NetstringReceiver class. There exist Android/iOS clients that work with it fine. Apparently web sockets use a different protocol and so it is unable to connect to my server.
Does a different chat server need to be written to support web sockets? | do i need to rewrite my twisted chat server if i want to support web sockets? | 0 | 0 | 1 | 151 |
25,924,639 | 2014-09-19T00:42:00.000 | 0 | 0 | 1 | 0 | python | 25,924,663 | 2 | false | 0 | 0 | This is how you build the string: firstname[0] + surname[:4] | 1 | 0 | 0 | Write a function called getUsername which takes two input parameters, firstname (string) and surname (string), and both returns and prints a username made up of the first character of the firstname and the first four characters of the surname. Assume that the given parameters always have at least four characters. | python idle how to create username? | 0 | 0 | 0 | 180 |
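A complete version of the one-liner in that answer, matching the exercise statement:

def getUsername(firstname, surname):
    """Print and return the first letter of firstname plus the
    first four characters of surname."""
    username = firstname[0] + surname[:4]
    print(username)
    return username

getUsername("Maria", "Johnson")  # prints and returns "MJohn"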
25,926,384 | 2014-09-19T04:28:00.000 | 1 | 0 | 0 | 0 | python,django,flask | 25,932,635 | 1 | true | 1 | 0 | You can run anything from Django, it just provides a framework to get stuff to the web.
As long as the Django application is running as a user which has privileges to access your NIC there won't be an issue with that.
You can simply call your code you already have from the Django Views.
The time this stuff takes may be too long for a web request so you may need to pass some stuff down a message queue and look up the results. Look at Celery for that purpose.
I do prefer Flask over Django but it doesn't matter what you use.
Just remember Django is just another library; it all still runs inside Python :) | 1 | 0 | 0 | Hi, I just want to ask if it's possible to run Impacket with Django.
On my project I am already done with my sniffing and parsing using Impacket and Pcapy, but my clients requested that the GUI be web based. I picked Django because it's the most widely used, but I am having doubts that it can run my libraries.
For starters can Django open my NIC in Ubuntu and have access to sniff on it?
Or is it better for me to use Flask? From what I read, Flask is run as a Python console application; from what I understood, I will install an HTTP server in the project, and then the Python console will act like a controller (MVC) for my GUI, which is Flask. | Can I run Python 2.7 Libraries (Impacket, Pcapy) on Django | 1.2 | 0 | 0 | 151
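A hedged sketch of the Celery hand-off the answer recommends for the long-running capture; the broker URL, task body, and interface name are placeholders.

# tasks.py
from celery import Celery

app = Celery("sniffer", broker="redis://localhost:6379/0")  # placeholder broker

@app.task
def sniff_interface(iface, packet_count):
    # imagine the existing Impacket/Pcapy capture code running here
    return {"iface": iface, "packets": packet_count}

# In a Django view: sniff_interface.delay("eth0", 100) returns immediately,
# and the result can be fetched later by task id.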
25,935,799 | 2014-09-19T14:02:00.000 | 0 | 0 | 1 | 1 | python,linux | 25,936,008 | 1 | false | 0 | 0 | If you put something in the background, then it's no longer connected to the current shell (or the terminal). So you would need the background process to open a socket so the command line part could send it the command.
In the end, there is no way around creating a new connection to the server every time you start the command line process and close the connection when the command line process exits.
The only alternative is to use the readline module to simulate the command line inside of your script. That way, you can open the connection, use readline to ask for any number of commands to send to the server. Plus you need an "exit" command which terminates the command line process (which also closes the server connection). | 1 | 0 | 0 | I'm new to python. I'm trying to write an application with command line interface. The main application is communicating with server using tcp protocol. I want it to work in the background so I won't have to connect with the server every time I use interface. What is a proper approach to such a problem?
I don't want the interface to be an infinite loop. I would like to use it like this:
my_app.py command arguments.
Please note that I have no problems with writing interface (I'm using argparse library right now) but don't know what architecture would suit me best and how to implement it in python. | Command line interface application with background process | 0 | 0 | 0 | 59 |
25,936,385 | 2014-09-19T14:34:00.000 | 0 | 0 | 1 | 0 | python,mongoimport | 25,936,756 | 1 | false | 0 | 0 | I'm not familiar with mongoimport, but I do know that if you use csv.reader, the backslashes are taken care of during reading. Maybe you could consider using a package specifically designed to read the csv, and then pass that along to mongoimport. | 1 | 0 | 0 | I am using mongoimport in a python script to import multiple CSV files into my Mongo DB. Some values contain backslash escaped commas. How can I use this to correctly import these files to Mongo? I can't find any specific solutions to this. | Mongoimport: Escaping commas in CSV | 0 | 1 | 0 | 313 |
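A sketch of the csv.reader approach from that answer: setting escapechar tells the reader that a backslash escapes the delimiter, so a value like foo\,bar comes back as the single field foo,bar. The file name is a placeholder.

import csv

with open("data.csv") as f:  # placeholder file name
    for row in csv.reader(f, escapechar="\\"):
        print(row)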
25,943,156 | 2014-09-19T22:18:00.000 | 0 | 0 | 1 | 1 | python,linux,installation,virtualenv,redhat | 25,943,276 | 1 | true | 0 | 0 | With Linux you don't need to worry about where to install files; the OS takes care of that for you. Google CentOS Yum and read the Yum docs on how to install everything. You probably already have Python 2.7 installed; to check, just open the terminal (CTRL + ALT + T) and type python. This will start the Python interpreter and display the version. The next step would be to see if pip and virtualenv are installed. You can simply type each command at the command prompt (exit Python first). If you get something to the effect of "command not found" then you need to install them. Install pip with the Yum installer and virtualenv with pip. If everything is installed then you just need to make your virtual environment, e.g. virtualenv name_of_directory; if the directory doesn't exist then virtualenv will create it. And now you're done. | 1 | 0 | 0 | In what order should I install things? My goal is to have Python 2.7.6 running on a virtualenv for a project for work. I am working on a VirtualBox machine in CentOS 6.5.
What folders should I be operating in to install things? I have never used Linux before today, and was just kind of thrust into this task of getting a program running that requires Python 2.7.6 and a bunch of packages for it. Thanks in advance if you can give me command line entries. I have opened about 3 VirtualBox machines and deleted them because I installed things in the wrong order. Please let me know how things should be installed, with command line entries, if possible. | Redhat Python 2.7.6 installation on virtualenv | 1.2 | 0 | 0 | 1,684
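A hedged command sketch for CentOS 6.5. It assumes Python 2.7.6 is already built under /usr/local (CentOS 6 ships 2.6 as the system python, which you should leave alone); package names can vary by repo.

sudo yum install -y python-setuptools   # provides easy_install
sudo easy_install pip
sudo pip install virtualenv
virtualenv -p /usr/local/bin/python2.7 myenv   # point at your 2.7.6 build
source myenv/bin/activate                      # now `python` is 2.7.6
pip install somepackage                        # placeholder package name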
25,943,256 | 2014-09-19T22:28:00.000 | 1 | 1 | 0 | 0 | python,raspberry-pi | 30,958,127 | 2 | false | 0 | 0 | I would have a mySQL database on the master only and have the slaves write their own tables to that database using the cymysql python3 module
(pip3 install cymysql) | 1 | 1 | 0 | I have 3 machines (raspberry pi's). One has a database of sensor readings on, the other two are 'slave' devices that read/run various sensors. what is the best solution to allow the 'master' pi to access sensor readings on the 'slave' pis- so it can save the values to the database.
All the pis are on the same internal network, and will never be on the internet
The 'slave' pis return integers to the master pi, that is all.
It has to be python3 (because the software that queries the sensors is)
What is the very simplest way?
Some kind of web service? I've so far failed to get pysimplesoap and cherrypy to work on python3.
Something else? Pyro? It seems a bit complicated just to get back 2 integers.
Roll my own with sockets (that can't be the easiest way?!)
Give up and put a mysql database on each pi, then make the 'sensor-value-reporting-website' stretch across 3 databases/hosts. | Python machine-communication on raspberry pi | 0.099668 | 0 | 0 | 198 |
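Given the "return 2 integers over Python 3" requirement, one stdlib-only sketch is XML-RPC; the port, address, and sensor function are placeholders.

# On a slave Pi:
from xmlrpc.server import SimpleXMLRPCServer

def read_sensor():
    return 42  # stand-in for the real sensor query

server = SimpleXMLRPCServer(("0.0.0.0", 8000), allow_none=True)
server.register_function(read_sensor)
server.serve_forever()

# On the master Pi:
# from xmlrpc.client import ServerProxy
# slave = ServerProxy("http://192.168.1.10:8000")  # slave's LAN address
# value = slave.read_sensor()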
25,949,382 | 2014-09-20T13:39:00.000 | 3 | 1 | 0 | 0 | python,web-services,api,amazon-web-services,amazon-ec2 | 25,949,451 | 1 | false | 0 | 0 | No.
Apache is your doorman. Your python script are the workers inside the building. The public comes to the door and talks to the doorman. The doorman hands everything to the workers inside the building, and when the work is done, hands it back to the appropriate person.
Apache manages the coming and going of individual TCP/IP messages, and delegates the work that each request needs to do to your script. If the request asks for the API it hands it to the api script; if the request asks for the website it hands it to the website script. Your script passes back the response to apache, which handles the job of giving it to the client over port 80.
As @Lafada comments: you can have a backdoor—another port—but Apache is still the doorman. | 1 | 2 | 0 | Say I am hosting a website, say www.mydomain.com, on an EC2 instance; then Apache would be running on port 80. Now, if I want to host a RESTful API (say mydomain.com/MyAPI) using a Python script (web.py module), how can I do that? Wouldn't running a Python script cause a port conflict? | Amazon AWS EC2 : How to host an API and a website on EC2 instance | 0.53705 | 0 | 1 | 607
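A hedged sketch of the doorman setup in Apache/mod_wsgi terms; the paths are placeholders. Both apps sit behind the same port 80, so there is no conflict.

# Apache virtual host snippet (mod_wsgi)
WSGIScriptAlias /MyAPI /var/www/api/api.wsgi    # requests to /MyAPI go here
WSGIScriptAlias /      /var/www/site/site.wsgi  # everything else: the website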
25,949,733 | 2014-09-20T14:23:00.000 | 0 | 0 | 0 | 0 | python,machine-learning,scikit-learn,asymmetric,regularized | 25,961,702 | 2 | false | 0 | 0 | Depending on the amount of data you have and the classifier you would like to use, it might be easier to implement the loss and then use a standard solver like lbfgs or newton, or do stochastic gradient descent, if you have a lot of data.
Using a simple custom solver will most likely be much slower than using scikit-learn code, but it will also be much easier to write. In particular if you are after logistic regression, for example, you would need to dig into LibLinear C code.
On the other hand, I'm pretty certain that you can implement it in ~10 lines of python using lbfgs in an unoptimized way. | 1 | 0 | 1 | The problem requires me to regularize weights of selected features while training a linear classifier. I am using python SKlearn.
Having googled a lot about incorporating asymmetric regularization for classifiers in SKlearn, I could not find any solution. The core library function that performs this task is provided as a DLL for windows hence modifying the existing library is not possible.
Is there any machine learning library for python with this kind of flexibility? Any kind of help will be appreciated. | asymmetric regularization in machine learning libraries (e.g. scikit ) in python | 0 | 0 | 0 | 206 |
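A hedged sketch of the implement-the-loss-yourself route from the answer: logistic loss with a per-weight (asymmetric) L2 penalty, handed to L-BFGS. X, y (labels in {-1, +1}) and the per-feature lambdas are placeholders.

import numpy as np
from scipy.optimize import minimize

def fit(X, y, lambdas):
    def objective(w):
        z = X.dot(w)
        loss = np.logaddexp(0, -y * z).sum()   # logistic loss
        penalty = np.sum(lambdas * w ** 2)     # per-feature regularisation
        return loss + penalty
    res = minimize(objective, np.zeros(X.shape[1]), method="L-BFGS-B")
    return res.x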
25,952,174 | 2014-09-20T18:45:00.000 | 1 | 0 | 1 | 0 | python,exe,cx-freeze | 25,952,889 | 2 | false | 0 | 0 | Yes, if you're on Windows this method works.
Run ->> iexpress
Follow the instructions.
This will compile all the files into one exe, but first you need to create the exe using cx_Freeze, then browse to the directory in iexpress and it will do the rest. | 1 | 0 | 0 | Whenever I build an exe with cx_Freeze and Python I get a bunch of extra stuff like the Library.zip and all the .dll files. Is there a way I can make it just one executable file that I can just send over to someone and have them run without having to give them all the extra files also? Python 3.4. Thank you! | Cx_Freeze's extra stuff | 0.099668 | 0 | 0 | 974
25,955,073 | 2014-09-21T02:01:00.000 | 0 | 0 | 1 | 0 | python,django | 25,955,108 | 1 | true | 1 | 0 | Alter your need. "Overcoming" the recursion would just mean that you succeed at creating infinite objects, which is inadvisable.
If what you really need is to create a second Invoice only when the first one was created manually, well, that's different. Make your function do nothing unless the sender is manually-created. If you really can't tell the difference between manual and automatic, add a field that tells you. (Though surely there's something unique about the automatic objects, or you wouldn't need to create them!) | 1 | 1 | 0 | I need to create an invoice object whenever an invoice is created from inside post_save.connect(after_saving_invoice, sender=Invoice). This is causing Exception Value: maximum recursion depth exceeded while calling a Python object.
The reason for the exception is clear, but how can I overcome it? | Avoiding Exception Value: maximum recursion depth exceeded while calling a Python object in Django signals | 1.2 | 0 | 0 | 478
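A hedged sketch of the guard the answer suggests: only react to freshly created, manually made invoices, and tag the follow-up so the handler ignores its own save. Invoice and its fields are whatever your models define.

from django.db.models.signals import post_save
from django.dispatch import receiver

@receiver(post_save, sender=Invoice)
def after_saving_invoice(sender, instance, created, **kwargs):
    if not created or getattr(instance, "_auto_created", False):
        return                      # ignore updates and our own saves
    follow_up = Invoice()           # set the real fields here
    follow_up._auto_created = True  # transient flag, not a model field
    follow_up.save()                # re-triggers the signal, but exits early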
25,955,204 | 2014-09-21T02:25:00.000 | 2 | 0 | 1 | 0 | python,debugging,pycharm,breakpoints | 51,086,861 | 2 | false | 0 | 0 | You can add a breakpoint in the line you need to watch and right-click it.
Then in the dialog box you have "Condition" as the last input: add a condition that uses the variable you need, and execution will stop when that condition becomes true.
This is called a watchpoint or a data breakpoint. | Stop at the line where a variable gets changed | 0.197375 | 0 | 0 | 6,863 |
25,957,739 | 2014-09-21T09:24:00.000 | 1 | 1 | 0 | 0 | python,geolocation,instagram,hashtag | 26,262,484 | 1 | false | 0 | 0 | If you can't find a ready solution for this in the API, try to search by location first.
As you receive location search results, you can manually filter them by hashtags.
The idea is that every photo has tags and location attributes you can filter by. But everything depends on your specific task here. | 1 | 2 | 0 | How can you search Instagram by hashtag and filter it by searching again using its location?
I checked the Instagram API, but I don't see a solution. | Search hashtag in instagram then filter by location | 0.197375 | 0 | 1 | 1,078 |
25,959,594 | 2014-09-21T13:16:00.000 | 1 | 0 | 1 | 0 | python,formatting,whitespace,python-idle | 25,986,544 | 1 | false | 0 | 0 | In the IDLE Preferences, under "Fonts/Tabs" there should be an "Indentation Width" preference, where you can change the tab width to four spaces. | 1 | 0 | 0 | Just a minute ago I opened IDLE to start a new Python file. After I had written a function header, I pressed tab (from column 0) and it only indented two spaces rather than four. This hasn't happened to me before. How can I change/reset the tab width to four spaces? | Change IDLE tab width/ indent width Python | 0.197375 | 0 | 0 | 5,506
25,963,074 | 2014-09-21T19:35:00.000 | 0 | 0 | 0 | 1 | python,linux,subprocess,dd | 25,974,983 | 2 | false | 0 | 0 | The solution that worked for me was subprocess.call(["ddrescue $0 $1 | tee -a drclog", in_file_path, out_file_path], shell=True). | 1 | 1 | 0 | I have subprocess.call(["ddrescue", in_file_path, out_file_path], stdout=drclog). I'd like this to display the ddrescue in the terminal as it's running and write the output to the file drclog. I've tried using subprocess.call(["ddrescue", in_file_path, out_file_path], stdout=drclog, shell=True), but that gives me an input error into ddrescue. | How do I push a subprocess.call() output to terminal and file? | 0 | 0 | 0 | 1,563 |
25,963,463 | 2014-09-21T20:21:00.000 | 0 | 0 | 0 | 1 | python,hadoop,mapreduce,streaming | 26,011,654 | 1 | true | 0 | 0 | Can you please try stopping all the daemons using 'stop-all' first and then rerun your MR job after restarting the daemons (using 'start-all')?
Let's see if it helps!
This location is in turn being used as the input directory for a hadoop streaming job.
The expectation is that the number of mappers will be equal to the "input file split", which is equal to the number of files in my case. Somehow all the mappers are not getting triggered, and I see a weird issue in the streaming output dump:
Caused by: java.io.IOException: Cannot run program "/mnt/var/lib/hadoop/tmp/nm-local-dir/usercache/hadoop/appcache/application_1411140750872_0001/container_1411140750872_0001_01_000336/./CODE/python_mapper_unix.py": error=26, Text file busy
"python_mapper.py" is my mapper file.
Environment Details:
A 40 node aws r3.xlarge AWS EMR cluster [No other job runs on this cluster]
When this streaming jar is running, no other job is running on the cluster, hence none of the external processes should be trying to open the "python_mapper.py" file
Here is the streaming jar command:
ssh -o StrictHostKeyChecking=no -i hadoop@ hadoop jar /home/hadoop/contrib/streaming/hadoop-streaming.jar -files CODE -file CODE/congfiguration.conf -mapper CODE/python_mapper.py -input /user/hadoop/launchidlworker/input/1 -output /user/hadoop/launchidlworker/output/out1 -numReduceTasks 0 | "Text file busy" error for the mapper in a Hadoop streaming job execution | 1.2 | 0 | 0 | 317 |
25,964,881 | 2014-09-21T23:22:00.000 | 0 | 0 | 1 | 0 | python,indexing,slice | 25,964,934 | 2 | true | 0 | 0 | Instead of using index, use split and split on the various separators. E.g., full_name.split(', ')[0] == 'Hun', full_name.split(', ')[1].split(' ')[0] == 'Attila', and full_name.split(' ')[-1] == 'The'. You can then easily recombine them with string formatting or simple concatenation. | 1 | 0 | 0 | I have recently started learning Python, and I received a question: "Write a Python program that asks the user to enter full name in the following format:
LastName, FirstName MiddleName
(for Example: "Hun, Attila The”).
The input will have a single space after the comma and only a single space between the first name and the middle name. Use string methods and operators in Python to convert this to a new string having the form:
FirstName MiddleInitial Period LastName (for example: "Attila T. Hun") and output it."
It is easy for me to do this if I make three different variables, and then reorder/slice them later. But how do I do this with one variable only? I know I will need to slice up until "," for the LastName, but I can't do "[0:,]" as I am not using an integer value, so how do I find the integer value for it, if the last name will vary from user to user? | How to use the index method efficiently | 1.2 | 0 | 0 | 261
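A complete sketch of the split-based reordering from the answer above, assuming the single-space format the exercise guarantees:

def reorder(full_name):
    """ 'Hun, Attila The' -> 'Attila T. Hun' """
    last, rest = full_name.split(", ")
    first, middle = rest.split(" ")
    return "{0} {1}. {2}".format(first, middle[0], last)

print(reorder("Hun, Attila The"))  # Attila T. Hun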
25,965,606 | 2014-09-22T01:33:00.000 | 0 | 0 | 0 | 0 | python,scrapy | 25,967,819 | 1 | false | 1 | 0 | You can maintain a list of URLs that you have crawled; whenever you come across a URL that is already in the list, you can log it and increment a counter.
The stats show that 283 items were pulled, but I'm expecting well above 300 here. I suspect that some of the links on the site are duplicates, as the logs show the first duplicate request, but I'd like to know exactly how many duplicates were filtered so I'd have more conclusive proof. Preferably in the form of an additional stat at the end of the crawl.
I know that the latest version of Scrapy already does this, but I'm kinda stuck with 0.21 at the moment and I can't see any way to replicate that functionality with what I've got. There doesn't seem to be a signal emitted when a duplicate url is filtered, and DUPEFILTER_DEBUG doesn't seem to work either.
Any ideas on how I can get what I need? | Display the number of duplicate requests filtered in the post-crawler stats | 0 | 0 | 1 | 108 |
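A hedged sketch for old Scrapy (the 0.21-era import path is an assumption and may differ between versions): subclass the default dupe filter and count what it filters.

from scrapy.dupefilter import RFPDupeFilter  # path may vary by version

class CountingDupeFilter(RFPDupeFilter):
    def __init__(self, path=None):
        super(CountingDupeFilter, self).__init__(path)
        self.duplicates = 0  # running count of filtered requests

    def request_seen(self, request):
        seen = super(CountingDupeFilter, self).request_seen(request)
        if seen:
            self.duplicates += 1
        return seen

# settings.py: DUPEFILTER_CLASS = 'myproject.dupefilters.CountingDupeFilter'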
25,965,953 | 2014-09-22T02:34:00.000 | 0 | 0 | 0 | 0 | python,pybrain,reinforcement-learning | 25,996,185 | 1 | false | 0 | 0 | It is certainly possible to train a neural network (based on pybrain or otherwise) to make predictions of this sort that are better than a coin toss.
However, weather prediction is a very complex art, even for people who do it as their full-time profession and have been for decades. Those weather forecasters have much bigger neural networks inside their heads than pybrain can simulate. If it were possible to make accurate predictions in the way you describe, then it would have been done long ago. For this reason, I wouldn't expect to do better than (or even as well as) the local weather forecaster. So, if your goal is to learn pybrain, I would pick a less complex system to model, and if your goal is to predict the weather, I would suggest www.wunderground.com. | 1 | 1 | 1 | Can you use reinforcement learning from Pybrain on dynamically changing output? For example weather: let's say you have 2 attributes, Humidity and Wind, and the output will be either Rain or NO_Rain (and all attributes are either going to have a 1 for true or 0 for false in the text file I am using). Can you use reinforcement learning on this type of problem? The reason I ask is that sometimes even if we have humidity, it does not guarantee that it's going to rain. | Pybrain Reinforcement Learning dynamic output | 0 | 0 | 0 | 241
25,966,195 | 2014-09-22T03:09:00.000 | 2 | 0 | 0 | 1 | python,openshift,wsgi,timed | 26,024,110 | 3 | true | 0 | 0 | The solution was to remove the cartridge and install Python 2.6 | 2 | 1 | 0 | I have a Python application (webservice) hosted in OpenShift but, a few days ago, the app stopped working. The log points to "[error] script timed out before returning headers" and I can't solve this.
Can someone help me? | script timed out before returning headers openshift | 1.2 | 0 | 0 | 684
25,966,195 | 2014-09-22T03:09:00.000 | 2 | 0 | 0 | 1 | python,openshift,wsgi,timed | 25,966,254 | 3 | false | 0 | 0 | Please log in to your OpenShift account and check whether your application and cartridges are up and running. | 2 | 1 | 0 | I have a Python application (webservice) hosted in OpenShift but, a few days ago, the app stopped working. The log points to "[error] script timed out before returning headers" and I can't solve this.
Can someone help me? | script timed out before returning headers openshift | 0 | 0 | 0 | 684
25,966,602 | 2014-09-22T04:14:00.000 | 0 | 0 | 0 | 0 | python,django,django-admin | 25,978,053 | 1 | true | 1 | 0 | For your first question:
I think it's possible. I've made a quick test on the admin interface for Users, and if you play with the URL querystring, you can combine filters : /admin/auth/user/?is_superuser=1&is_superuser=0 will list both super and non-super users.
You'll have to override admin filters template to generate proper URLs for your needs, though.
I don't understand your second question. What do you mean by combined ? If you select an item in the first filter and an item in the second one, you will have combined filtering, won't you ? | 1 | 2 | 0 | I actually have two questions, please answer what you can:
Question 1:
In django admin, if you have list_filters = ["book"], and your options were "red carpet" & "Bingo the Dinosaur", you can only select one book at a time; either "red carpet" or "Bingo the Dinosaur". Is there a way to make it so that the user can select both at the same time?
Question 2:
In Django admin, is there a way to combine list_filter fields? So if you have list_filter = ["bookname", "bookauthor"], is there a way to make it so that the book name and author are combined in one filter and you search both at the same time? | Is there a way to select more than one option in django admin filters? | 1.2 | 0 | 0 | 495
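For question 2, a hedged sketch using Django's SimpleListFilter to present one combined book-name/author filter; the model field names (bookname, bookauthor) are placeholders taken from the question.

from django.contrib import admin

class BookAndAuthorFilter(admin.SimpleListFilter):
    title = "book / author"
    parameter_name = "book_author"

    def lookups(self, request, model_admin):
        qs = model_admin.model.objects.all()
        return [("%s|%s" % (b.bookname, b.bookauthor),
                 "%s by %s" % (b.bookname, b.bookauthor)) for b in qs]

    def queryset(self, request, queryset):
        if self.value():
            name, author = self.value().split("|")
            return queryset.filter(bookname=name, bookauthor=author)
        return queryset

# In the ModelAdmin: list_filter = [BookAndAuthorFilter]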
25,972,979 | 2014-09-22T11:26:00.000 | 0 | 0 | 1 | 0 | python,pdb | 71,416,044 | 4 | false | 0 | 0 | Though you may not be able to reverse the code execution in time, the next best thing pdb has are the stack frame jumps.
Use w to see where you're in the stack frame (bottom is the newest), and u(p) or d(own) to traverse up the stackframe to access the frame where the function call stepped you into the current frame. | 1 | 67 | 0 | After I hit n to evaluate a line, I want to go back and then hit s to step into that function if it failed. Is this possible?
The docs say:
j(ump) lineno
Set the next line that will be executed. Only available in the bottom-most frame. This lets you jump back and execute code again, or jump forward to skip code that you don’t want to run. | Is it possible to step backwards in pdb? | 0 | 0 | 0 | 31,351 |
25,982,798 | 2014-09-22T20:42:00.000 | 0 | 0 | 1 | 0 | python,user-interface,editor,spyder | 71,969,462 | 7 | false | 0 | 0 | Tools > Reset Spyder to factory defaults
It fixed my problem.
Getting crazy here, please help! | How to display function arguments in Spyder? | 0 | 0 | 0 | 61,475 |
25,982,798 | 2014-09-22T20:42:00.000 | 8 | 0 | 1 | 0 | python,user-interface,editor,spyder | 50,948,634 | 7 | false | 0 | 0 | For Spyder 3:
Tools > Preferences > Help > Automatic Connections and then tick all the workspaces you want it to show functions' arguments. | 6 | 23 | 0 | I'm using Spyder 2.2.5 with Python 2.7 and I want the editor to display the arguments of the function I'm calling after I type the first parenthesis. I know it's possible, because I can see it in the video-tutorials I'm using. I've tried (at least I think so..) all menu-items.
Getting crazy here, please help! | How to display function arguments in Spyder? | 1 | 0 | 0 | 61,475 |
25,982,798 | 2014-09-22T20:42:00.000 | 18 | 0 | 1 | 0 | python,user-interface,editor,spyder | 49,372,487 | 7 | false | 0 | 0 | I have a similar problem. The arguments pop-up shows up only until I start typing, so I have a problem if I forget what the latter arguments are. A workaround is to move the cursor on the function and press Ctrl + i. It shows the function documentation in help window, including its definition. | 6 | 23 | 0 | I'm using Spyder 2.2.5 with Python 2.7 and I want the editor to display the arguments of the function I'm calling after I type the first parenthesis. I know it's possible, because I can see it in the video-tutorials I'm using. I've tried (at least I think so..) all menu-items.
Getting crazy here, please help! | How to display function arguments in Spyder? | 1 | 0 | 0 | 61,475 |
25,982,798 | 2014-09-22T20:42:00.000 | -2 | 0 | 1 | 0 | python,user-interface,editor,spyder | 63,547,514 | 7 | false | 0 | 0 | Install kite from tools menu, which will resolve your issue | 6 | 23 | 0 | I'm using Spyder 2.2.5 with Python 2.7 and I want the editor to display the arguments of the function I'm calling after I type the first parenthesis. I know it's possible, because I can see it in the video-tutorials I'm using. I've tried (at least I think so..) all menu-items.
Getting crazy here, please help! | How to display function arguments in Spyder? | -0.057081 | 0 | 0 | 61,475 |
25,982,798 | 2014-09-22T20:42:00.000 | -2 | 0 | 1 | 0 | python,user-interface,editor,spyder | 51,189,598 | 7 | false | 0 | 0 | often would need to restart Spyder to have the inline help | 6 | 23 | 0 | I'm using Spyder 2.2.5 with Python 2.7 and I want the editor to display the arguments of the function I'm calling after I type the first parenthesis. I know it's possible, because I can see it in the video-tutorials I'm using. I've tried (at least I think so..) all menu-items.
Getting crazy here, please help! | How to display function arguments in Spyder? | -0.057081 | 0 | 0 | 61,475 |
25,982,798 | 2014-09-22T20:42:00.000 | 0 | 0 | 1 | 0 | python,user-interface,editor,spyder | 53,983,031 | 7 | false | 0 | 0 | Go to View > Window layouts > Spyder Default Layout. This resets the Spyder IDE to the defaults, and the object inspector will function again (worked for me). | 6 | 23 | 0 | I'm using Spyder 2.2.5 with Python 2.7 and I want the editor to display the arguments of the function I'm calling after I type the first parenthesis. I know it's possible, because I can see it in the video-tutorials I'm using. I've tried (at least I think so..) all menu-items.
Getting crazy here, please help! | How to display function arguments in Spyder? | 0 | 0 | 0 | 61,475 |
25,985,069 | 2014-09-23T00:28:00.000 | 0 | 0 | 0 | 0 | heroku,google-api,google-calendar-api,google-authentication,google-api-python-client | 26,019,429 | 2 | false | 1 | 0 | So, you want to be remembered.
If you want to dispense with any kind of authentication, yet the user still needs to be recognized, you should be using a cookie.
On the server side that cookie should be used to select the offline token.
Of course, without that cookie the user needs to be authenticated in any way. I would make them reauth by Google so you get a new offline token.
Hope that helps. | 1 | 0 | 0 | I'm writing a web application that reads my personal calendar data, crunches stats, and then spits them out for the world to see. I don't need an authorization flow. Is it possible to leverage the Google APIs without going through a user sign-in flow? In other words, I want my personal Google account permanently and securely signed in to my server without the risk of my token invalidating or having to re-auth.
Right now I'm signing myself in with an offline token, then uploading the authorization file onto my server, basically spoofing the server that I already auth'd. Is there not a cleaner way?
I've spent hours reading through the API docs and Auth docs, but haven't found and answer. If there is a page I've missed, please point me to it!
PS. I'm using the Calendars API through Python/Flask on Heroku, but that shouldn't matter. | User data through Google APIs without authorization flow | 0 | 0 | 0 | 664 |
25,987,179 | 2014-09-23T04:56:00.000 | 0 | 0 | 0 | 0 | python,excel,pandas,xls,xlsx | 49,387,955 | 1 | false | 0 | 0 | Add a "u" before your string. For example, if you're looking for a column named 'lissé' in a dataframe "df" then you should put df[u'lissé'] | 1 | 1 | 0 | I know you can read in Excel files with pandas, but I have had trouble reading in files where the column headings in the worksheets are not in a format easily readable like plain text.
In other words, if the column headings had special characters then the file would fail to import. Where as if you import data like that into Microsoft Access or other databases, you get the option to import anyway, or remove special characters.
My only solution to this has been to write an Excel macro to strip out characters not usually liked by databases when importing - and then import the file using python.
But there must be a way of handling this situation purely using python (which is a lot faster).
My question, how does python handle importing .xls and .xlsx files when the column headings have special characters which won't import? | Python: read an Excel file using Pandas when the file has special characters in column headers | 0 | 1 | 0 | 1,140 |
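A hedged sketch of doing the macro's job in Python instead: read the workbook with pandas, then normalise the headers yourself; the file name is a placeholder.

import re
import pandas as pd

df = pd.read_excel("report.xlsx")  # placeholder file name

# Replace anything that isn't alphanumeric/underscore in the headers.
df.columns = [re.sub(r"[^0-9a-zA-Z_]+", "_", str(c)).strip("_")
              for c in df.columns]
print(df.columns.tolist())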
25,989,667 | 2014-09-23T07:46:00.000 | 0 | 0 | 0 | 0 | python,google-app-engine,google-cloud-datastore,app-engine-ndb | 25,990,090 | 2 | false | 1 | 0 | As far as I know, get_by_id() (small) operations are free of charge, so you will pay only for instance hours. But I think it would be better to store subscription emails in another kind, because storage is cheap, and denormalization of data is a good practice on GAE. Anyway, CSV does not look like a good idea. | 1 | 0 | 0 | I want to maintain an email blacklist of people who don't want to ever receive email from my service.
Before I send each email, I want to do a lookup on whether the recipient is in the list.
Which of 2 choices is better?
I can create a BlacklistEmail model in datastore and key it on the email address so I can have faster lookups using get_by_id(). In 99% of cases, the recipient will not be in the blacklist so this would actually cost a read as it would not hit memcache.
I can store the blacklisted email in a csv file and check whether the recipient is in the list. This seems like it wouldn't cost anything, but I am not sure about performance. I don't expect the list to be very big.
Any other better way?
Which is better in terms of cost and performance? | Better to maintain a list in datastore or csv file? | 0 | 0 | 0 | 98 |
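A minimal NDB sketch of option 1 from the question, keying the entity directly on the email address so each lookup is a single get_by_id() call (which also goes through NDB's built-in caching):

from google.appengine.ext import ndb

class BlacklistEmail(ndb.Model):
    """No fields needed; the key id is the email address itself."""
    pass

def is_blacklisted(email):
    return BlacklistEmail.get_by_id(email) is not None

# Adding someone: BlacklistEmail(id=email).put()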
25,990,036 | 2014-09-23T08:06:00.000 | 3 | 0 | 0 | 1 | python,google-app-engine | 25,990,536 | 1 | true | 1 | 0 | I had the same problem before. Solved by changing the loading method in app.yaml to wsgi, for example, from:
script: my_app/main.py
To:
script: my_app.main.application
Let me know if it works for you. | 1 | 5 | 0 | I have an App Engine app running locally using dev_appserver.py. In the app directory I have the standard appengine_config.py that is supposed to execute on every request made to the app. In the past it used to execute the module, but suddenly it stopped doing it.
In another app runs on the same machine it works fine.
I checked with Process Monitor to see if the file is loaded from another location, but it's not (I can see the other app's file being loaded).
Any ideas why appengine_config.py is not executed? | dev_appserver.py doesn't load appengine_config.py | 1.2 | 0 | 0 | 698 |
25,991,626 | 2014-09-23T09:32:00.000 | 0 | 0 | 1 | 0 | python,django,pycharm | 25,992,011 | 1 | false | 0 | 0 | Have you set project interpreter correctly, i also had this problem when i started using pycharm, it can be also because of your editor settings file>editor>code-completion. | 1 | 0 | 0 | Such as:
It cannot show the tip Member.objects when I type Member.o (Member is a Django model).
And the same goes for Python built-in functions, like for, filter, map and so on.
This feature worked yesterday; I don't know why it's broken now.
I have tried Invalidate Caches / Restart; it did not work.
UPDATE
Maybe I know what the problem is.
I have a big folder named ALL project; it is my project, and project A is under ALL project.
I accidentally marked project A as excluded, and I used File -> Open .. to add it back.
It seems PyCharm created a .idea folder under project A, meaning PyCharm treats project A as a project. I have to set a Project Interpreter for project A in order to get auto-completion back.
I don't want PyCharm to treat project A as a project. I want it to treat project A as a folder; then it would use the Project Interpreter of ALL project.
Ok, I found the Project Structure dialog in File | Settings | Project Structure. Adding the folder back makes everything fine. | Pycharm add excluded folder back | 0 | 0 | 0 | 1,165
25,999,972 | 2014-09-23T16:17:00.000 | 3 | 0 | 1 | 0 | python,visual-studio-2013,intellisense,ptvs | 49,339,391 | 2 | false | 0 | 0 | Restart just worked for me. ;) | 1 | 13 | 0 | Sorry if this seems like a noob question but I have never used visual studio. I am trying to use PTVS and while it works great in general, I can't get Intellisense to work for imports from the local directory. When I import a local module I get
Unable to resolve (module). Intellisense may be missing for this module
Thanks in advance | Intellisense in python tools for visual studio | 0.291313 | 0 | 0 | 20,082 |
26,001,543 | 2014-09-23T17:49:00.000 | 1 | 0 | 0 | 0 | python,decimal,truncate | 26,002,508 | 3 | false | 0 | 0 | What's wrong with the good old-fashioned value - int(value)? | 1 | 0 | 0 | I am trying to make a calculator that converts cms into yards, feet, and inches. Example: 127.5 cm is 1 yard, 1 inch, etc. But I am just wondering how I am able to retain the value after the decimal place; is there a way to truncate the number before the decimal place? So if the user inputs a value that results in 3.4231 yards, I want to retain the value ".4231" so that I can convert that into feet, and then the same for inches from feet. Sorry if this is unclear. This is for Python 3 | Truncating values before the decimal point | 0.066568 | 0 | 0 | 177
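The answer's one-liner in action, plus math.modf, which does the same split in one call (fractional part first):

import math

yards = 3.4231
frac = yards - int(yards)        # 0.4231... (the good old-fashioned way)

frac2, whole = math.modf(yards)  # (0.4231..., 3.0)
feet = frac2 * 3                 # leftover yards converted to feet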
26,006,727 | 2014-09-24T00:35:00.000 | 6 | 0 | 0 | 1 | python,security,encryption,docker | 26,134,653 | 6 | false | 0 | 0 | Sounds like Docker is not the right tool, because it was never intended to be used as a full-blown sandbox (at least based on what I've been reading). Why aren't you using a more full-blown VirtualBox approach? At least then you're able to lock up the virtual machine behind logins (as much as a physical installation on someone else's computer can be locked up) and run it isolated, encrypted filesystems and the whole nine yards.
You can either go lightweight and open, or fat and closed. I don't know that there's a "lightweight and closed" option. | 1 | 58 | 0 | We all know situations when you cannot go open source and freely distribute software - and I am in one of these situations.
I have an app that consists of a number of binaries (compiled from C sources) and python code that wraps it all into a system. This app used to work as a cloud solution so users had access to app functions via network but no chance to touch the actual server where binaries and code are stored.
Now we want to deliver the "local" version of our system. The app will be running on PCs that our users will physically own. We know that everything could be broken, but at least want to protect the app from possible copying and reverse-engineering as much as possible.
I know that docker is a wonderful deployment tool, so I wonder: is it possible to create encrypted docker containers where no one can see any data stored in the container's filesystem? Is there a known solution to this problem?
Also, maybe there are well known solutions not based on docker? | Encrypted and secure docker containers | 1 | 0 | 0 | 34,645 |
26,011,567 | 2014-09-24T07:59:00.000 | 0 | 0 | 0 | 0 | python,django,django-templates,mako,edx | 26,646,633 | 2 | false | 1 | 1 | port 18010 is forwaded via nginx, So you have to run studio collectstatic.
ie..update_assets for cms(studio) in edx. | 1 | 0 | 0 | The Studio app that is served on port 18010 shows with no theme. It only shows black text on a white background and with no images... that happens while the other counterpart LMS (on port 80) app shows fine.
Where shall I start troubleshooting this problem? | Edx Studio not showing with a theme | 0 | 0 | 0 | 276 |
26,011,787 | 2014-09-24T08:11:00.000 | 1 | 0 | 1 | 0 | ipython,pickle,ipython-parallel,dill | 26,202,109 | 1 | true | 0 | 0 | I'm the dill author. I don't know if IPython does anything unusual, but you can revert to pickle if you like through dill directly with dill.extend(False)… although this is a relatively new feature (not yet in a stable release).
If IPython doesn't have a dv.use_pickle() (it doesn't at the moment), it should… and could just use the above to do it. | 1 | 0 | 1 | I'm developing a distributed application using IPython parallel. There are several tasks which are carried out one after another on the IPython cluster engines.
One of these tasks inevitably makes use of closures. Hence, I have to tell IPython to use Dill instead of Pickle by calling dv.use_dill(). Though this should be temporarily.
Is there any way to activate Pickle again once Dill is enabled? I couldn't find any function (something of the form dv.use_pickle()) which would make such an option explicit. | Tell IPython Parallel to use Pickle again after Dill has been activated | 1.2 | 0 | 0 | 186 |
26,023,136 | 2014-09-24T17:38:00.000 | 0 | 0 | 1 | 1 | python,python-idle,python-2.5 | 26,088,770 | 2 | false | 0 | 0 | The fact that Windows changed the right-context menu for .py files has nothing to do with Idle, and probably nothing to do with Python either. You are not the first to have this problem. You can potentially restore 'Edit with Idle' but without directly editing the registry (an expert option) I only knew how to do so in XP. You might also be able to fix it be going back to a restore point before it changed, but you would lose all updates since, so I would not do that.
I am surprised that re-installing did not restore it. The line was once gone for me, too, and was restored by a recent install.
I have Win7. I just now tried 'Open with', navigated to 3.4 idlelib, and selected idle.bat (the .py files were not offered as a choice). The .py file opened in an Idle editor just fine. It is now a permanent option for Open with, without having to navigate.
Idle has gotten perhaps 150 patches since 2.5. Even if you have to edit programs to run on 2.5, I strongly recommend installing a current version of Python and Idle.
I have no idea what your comment "the programs still can't find anything associated with it, like Tkinter for example" means.
I used to be able to right-click and select "Edit with IDLE" for a python program. That option no longer is available. When I try "open with" and navigate to the idlelib in python, I can select idle.bat, idle.py, or idle.py (no console). I've tried each option and each fails to open and returns an error that either it is not a valid Win32 application or that "Windows cannot find idle.pyw"
I am able to open IDLE on its own and use the open function in IDLE to open files, but can't open files directly using IDLE as I could before.
There was formerly the white background icon with the Python logo, which is now replaced by Windows' logo for no program (white square, blue and red dots). I have tried both a repair-install and an uninstall-reinstall, with no success. There is no firewall or antivirus, and it was installed with permissions for all users.
Any help is much appreciated, this has been maddeningly difficult to figure out. | Hard times with IDLE | 0 | 0 | 0 | 309 |
26,023,136 | 2014-09-24T17:38:00.000 | 0 | 0 | 1 | 1 | python,python-idle,python-2.5 | 26,025,133 | 2 | false | 0 | 0 | The native one that comes with python on windows is problematic at times, so you could uninstall and reinstall it as a solution, or open it from its directory instead of a shortcut, or get another IDE. I recommend the Ninja IDE very nice and light looking, or if you're on linux you could just use vim from terminal.
Also, if it's extremely necessary, try upgrading your python version and IDE. I think the IDE included for windows looks like a modified emacs to be honest. | 2 | 0 | 0 | So I've been working with Python on my computer for about the last 2 months with no issues. Just recently however, something went wrong with IDLE. I am running python 2.5
I used to be able to right-click and select "Edit with IDLE" for a python program. That option no longer is available. When I try "open with" and navigate to the idlelib in python, I can select idle.bat, idle.py, or idle.py (no console). I've tried each option and each fails to open and returns an error that either it is not a valid Win32 application or that "Windows cannot find idle.pyw"
I am able to open IDLE on its own and use the open function in IDLE to open files, but can't open files directly using IDLE as I could before.
There was formerly the white background icon with the Python logo, which is now replaced by Windows' logo for no program (white square, blue and red dots). I have tried both a repair-install and an uninstall-reinstall, with no success. There is no firewall or antivirus, and it was installed with permissions for all users.
Any help is much appreciated, this has been maddeningly difficult to figure out. | Hard times with IDLE | 0 | 0 | 0 | 309 |
26,025,570 | 2014-09-24T20:07:00.000 | 0 | 0 | 1 | 0 | python | 26,025,638 | 2 | false | 0 | 0 | Slicing is an operation that gives you some elements out of a sequence.
s[a:b:c] means "the items starting at a, stopping at b, with a step of c".
If you have s[::-1] that means "the whole sequence, going backwards". | 1 | 0 | 0 | I keep seeing this: s[::-1] in Python, and I don't know what it does. Sorry if this is a basic question, but I'm new to Python and to programming generally. | I don't know what s[::-1] is in Python | 0 | 0 | 0 | 100
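A few concrete examples of the s[a:b:c] slicing form described above:

s = "hello"
print(s[::-1])   # 'olleh' - whole sequence, step -1 (reversed)
print(s[1:4])    # 'ell'   - items 1 up to (not including) 4
print(s[::2])    # 'hlo'   - every second item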
26,026,865 | 2014-09-24T21:30:00.000 | 1 | 0 | 0 | 0 | python,windows,user-interface,desktop-application | 26,043,694 | 1 | false | 0 | 1 | You actually don't need to create the program as a service. You can just start the application and not show the window immediately. You can use PyQt or wxPython. I'm more familiar with wxPython, so if you went that route, you could use a wx.Frame style flag such as wx.STAY_ON_TOP to get the functionality you want.
I have created applications that load up in the system tray with just an icon. When you click the icon, it shows the frame. The rest of the time, the frame is hidden. I would try that route before looking at doing a service. | 1 | 1 | 0 | The goal is to have an application that runs on top of all other applications and windows in the desktop, and display a clickable control (say, an image button) that moves around the screen.
The application must run as a service in the background and show thebutton (let's say) each hour, once clicked it disappears until the next hour.
This application has to be written in Python.
It looks like PyQt is one of the better options, but I'm not sure if it does support this sort of functionality and if it is a good alternative for modern Windows applications.
What packages or frameworks are appropriate for this scenario? I have seen Pygl and PyGame but they seem to be limited to a window, is this correct? | How to make a clickable control that is always on top, in Windows using Python? | 0.197375 | 0 | 0 | 127 |
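A hedged wxPython sketch of the STAY_ON_TOP route the answer describes; the size, label, and the hourly scheduling are placeholders left to a timer of your choosing.

import wx

class FloatingButton(wx.Frame):
    def __init__(self):
        style = wx.STAY_ON_TOP | wx.FRAME_NO_TASKBAR
        super(FloatingButton, self).__init__(None, style=style, size=(64, 64))
        btn = wx.Button(self, label="!")
        # Hide on click; a wx.Timer could Show() it again an hour later.
        btn.Bind(wx.EVT_BUTTON, lambda evt: self.Hide())

app = wx.App(False)
FloatingButton().Show()
app.MainLoop()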
26,028,200 | 2014-09-24T23:35:00.000 | 1 | 0 | 0 | 0 | python,sql,django,postgresql,psycopg2 | 26,030,265 | 1 | true | 1 | 0 | I'm not sure what the exact cause was, but it seems to be related to django's migration tool storing migrations, even on a new database.
What I did to get this behavior:
Create django project, then apps, using CharField
syncdb, run the project's dev server
kill the devserver, modify fields to be TextField
Create a new Postgres database, modify settings.py
Run syncdb, attempt to load fixtures
See the error in question, examine db instance
What fixed the problem:
Create a new database, modify settings.py
delete all migrations in apps/migrations folders
after running syncdb, also run makemigrations and migrate
The last step generated a migration, even though there were none stored in the migrations folder, and there had been no changes to models or data since syncdb was run on the new database, which I found to be odd.
Somewhere in the last two steps this was fixed. Future people stumbling upon this: sorry, I'm not going to keep creating django projects to test the behavior further, but perhaps with this information you can fix your own database problems. | 1 | 2 | 0 | Django 1.7, Python 3.4.
In my models I have several TextFields defined.
When I go to load a JSON fixture (which was generated from an SQLite3 dump), it fails on the second object, which has 515 characters for one of its fields.
The error printed is
psycopg2.DataError: value too long for type character varying(500)
I created a new database (not just a table drop, a whole new db), modified my settings.py file, ran manage.py syncdb on the new database, created a user, and tried to load the data again, getting the same error.
Upon opening pgAdmin3, all columns, both the CharField and TextField ones, are listed as type character varying.
So it seems TextField is being ignored and CharFields are being created instead. The PostgreSQL documentation explicitly lists both text and character types, and defines text as being unlimited in length. Any idea why? | Why is Django creating my TextField as a varchar in the PostgreSQL database? | 1.2 | 1 | 0 | 1,186 |
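For reference, the working sequence with the correct Django 1.7 command names (note it is makemigrations, not createmigrations); the app and fixture names are placeholders.

python manage.py syncdb
python manage.py makemigrations myapp
python manage.py migrate
python manage.py loaddata fixtures.json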
26,028,253 | 2014-09-24T23:41:00.000 | 1 | 0 | 1 | 1 | python,windows-8 | 26,028,450 | 1 | false | 0 | 0 | I created a file called myfile.txt. It showed up in Explorer as myfile.txt. If you just see myfile (no extension), then go to Folder Options, Advanced, and uncheck "Hide extensions for known file types".
I right-clicked myfile.txt, selected Rename, and Windows selected just "myfile", not ".txt". I changed the selection to "txt", overwrote it with "py" and hit enter. Windows popped up a message warning that I was changing the extension. I clicked OK and the file was renamed.
An alternate approach is to open a command prompt, cd to the directory and use "move" to change the name.
Yet another option, if you are doing this from a text editor, click "Save As", change the save dialog's "Save As Type" drop-down to All Files, change the name to .py and hit OK. You end up with a .txt and a .py. | 1 | 1 | 0 | How do I change file extensions in windows 8? I tried and my system will not recognize the change.
I tried changing from .txt to .py for python so i can use in IDLE. | how to change file extensions in windows 8? i tried and it will not recognize the change | 0.197375 | 0 | 0 | 5,230 |
26,037,592 | 2014-09-25T11:34:00.000 | 1 | 1 | 1 | 0 | python,python-3.x,module,directory | 26,037,677 | 2 | false | 0 | 0 | If you allow any directory to be seen as a package, then trying to import a module that exists both as a directory and as a module on the search path could pick the directory over the module.
Say you have an images directory and an images.py module. import images would find the images directory if it was found earlier on the search path.
By requiring an __init__.py to mark packages, you make it possible to include such data directories next to your Python code without having to worry about masking genuine modules with the same name.
There is one quote in the specification that says
these files serve to prevent directories with common names from
unintentionally hiding true modules that appear later on the module
search path. Without this safeguard, Python might pick a directory
that has nothing to do with your code, just because it appears nested
in an earlier directory on the search path.
Can you give me practical examples of this? | Hide true modules | 0.099668 | 0 | 0 | 158
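A concrete illustration of the shadowing scenario from the answer above; the layout and module names are hypothetical.

# Hypothetical layout:
#
#   project/
#       images/        # plain data folder, no __init__.py
#           logo.png
#       main.py
#
# main.py wants the third-party images.py module from site-packages.
# If Python treated *any* directory as a package, `import images` would
# resolve to the data folder (project/ comes first on sys.path) and hide
# the real module. Requiring __init__.py keeps project/images/ invisible
# to the import system, so the true module is found.
import images  # resolves to images.py, not the neighbouring data dir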