Dataset columns (each row below is pipe-delimited in this order):
Q_Id: int64 (337 to 49.3M)
CreationDate: string (length 23)
Users Score: int64 (-42 to 1.15k)
Other: int64 (0 to 1)
Python Basics and Environment: int64 (0 to 1)
System Administration and DevOps: int64 (0 to 1)
Tags: string (length 6 to 105)
A_Id: int64 (518 to 72.5M)
AnswerCount: int64 (1 to 64)
is_accepted: bool (2 classes)
Web Development: int64 (0 to 1)
GUI and Desktop Applications: int64 (0 to 1)
Answer: string (length 6 to 11.6k)
Available Count: int64 (1 to 31)
Q_Score: int64 (0 to 6.79k)
Data Science and Machine Learning: int64 (0 to 1)
Question: string (length 15 to 29k)
Title: string (length 11 to 150)
Score: float64 (-1 to 1.2)
Database and SQL: int64 (0 to 1)
Networking and APIs: int64 (0 to 1)
ViewCount: int64 (8 to 6.81M)
17,030,677 |
2013-06-10T18:46:00.000
| 2 | 0 | 0 | 1 |
debugging,python-2.7,tornado
| 20,555,024 | 2 | true | 1 | 0 |
If you are running your app with foreman, you would set your environment variables in a .env file in the project's root folder.
Setting the env variable below in my .env file did the trick for me.
PYTHONUNBUFFERED=true
Now I can set code breakpoints in my app, and also print output to the server logs, while running the app with foreman.
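A minimal handler to test against (a sketch, not from the original answer; the route and port are illustrative), assuming PYTHONUNBUFFERED=true is in effect:
import pdb

import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        pdb.set_trace()  # hit once stdin/stdout are unbuffered under foreman
        self.write("hello")

application = tornado.web.Application([(r"/", MainHandler)])
application.listen(8888)
tornado.ioloop.IOLoop.instance().start()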
| 1 | 3 | 0 |
How can I set a breakpoint in my Tornado app?
I tried pdb, but the Tornado app seems to be ignoring the pdb.set_trace() command in my app.
|
Set break points in Tornado app
| 1.2 | 0 | 0 | 1,963 |
17,031,075 |
2013-06-10T19:12:00.000
| 1 | 0 | 0 | 1 |
google-app-engine,email,python-2.7,webapp2,unsubscribe
| 17,033,151 | 1 | true | 1 | 0 |
Each new class implies a new query, which adds to the total cost. Pack as much information as is practical into the User class. A simple boolean in the User class should work for active/inactive or subscribe/unsubscribe. Your app needs to accept emails in order to receive the unsubscribe request and set the associated boolean to False.
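A minimal sketch of that idea with the old App Engine db API; the property names are illustrative, not from the answer:
from google.appengine.ext import db

class User(db.Model):
    email = db.StringProperty(required=True)
    registered = db.BooleanProperty(default=False)
    # Flip this to False when an unsubscribe request arrives.
    subscribed = db.BooleanProperty(default=True)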
| 1 | 0 | 0 |
I'm sending automated emails, and hence I should offer an unsubscribe function. I have a User entity that is not used much (only when a user registers), and emails can be sent to users who are not registered as Users. So when I send an email and must include an unsubscribe link, should I keep a whole separate entity/class for unsubscriptions, or include it as a variable in the User class indicating whether or not a user is registered to receive emails?
Did you use any method for unsubscribing that you can recommend? Are there any frameworks for unsubscriptions? GAE, which I'm using, has a very primitive framework for sending and receiving emails, and I understand that Amazon has a much more developed API for managing large email lists. I suppose I can still do it all in GAE without Amazon, though that would take longer, so I'm considering managing large email lists from Amazon. I have > 10 000 registered users that I have never emailed, and I'd like to email them a reminder that they are welcome to use my application and that they can unsubscribe from future mailings.
|
How to implement an unsubscribe use case for a website
| 1.2 | 0 | 0 | 138 |
17,032,676 |
2013-06-10T20:55:00.000
| 1 | 0 | 0 | 1 |
python,celery,pacemaker
| 18,737,345 | 1 | false | 0 | 0 |
The short answer is "yes." Pacemaker will do what you want, here.
The longer answer is that your architecture is tricky due to the requirement to restart in the middle of a sequence.
You have two solutions available here. The first is to use some sort of database (or a DRBD file system) to record the fact that 25 of the 50 calls have been completed. The problem with this isn't the completed calls, or the ones yet to be completed; it's the one the system was executing when it crashed. Call #25, say. If C25 hadn't yet started, you're OK: the slave will fire up under Pacemaker control, the DRBD file system will fail over, and the new master will execute #25 through #50. But what happens if #25 was called and the old master hadn't yet marked it as such?
You can architect it so that it marks the call as complete before it actually executes it, in which case C25 won't get called on this particular occasion; or you can mark it as complete after the call, in which case C25 will get called twice.
Ideally, you would make the calls idempotent. This is your second option. In that case, it doesn't matter if C1 -> C25 get called again because there's no repeat effect, and C26 -> C50 only get called a single time. I don't know enough about your architecture to say which would work, but hopefully this helps.
Pacemaker will certainly handle failing over. Add DRBD and you can save state between the two systems. However, you will need to address the partial-call issue yourself.
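A rough sketch of the bookkeeping half; the store object (backed by the shared database or DRBD file system) and its methods are hypothetical, not part of the answer:
def run_batch(run_id, calls, store):
    # Execute a sequence of calls, skipping any already recorded as done,
    # so a failed-over slave can resume mid-sequence.
    for index, call in enumerate(calls):
        if store.is_done(run_id, index):
            continue
        call()  # safe to re-run only if the call is idempotent
        store.mark_done(run_id, index)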
| 1 | 2 | 0 |
As far as I know, celery beat is a scheduler that is a single point of failure (SPOF): if the service crashes, nothing will be scheduled or run.
My case is that I will need an HA setup with two schedulers, master/slave: the master makes some calls periodically (let's say every 30 mins) while the slave can be idle.
When master crashes, the slave needs to become the master and pick up the left over from the dead master, and carry on the periodic tasks. (leader election)
The requirements here are:
the task is scheduled every 30mins (this can be achieved by celery beat)
the task is not atomic, it's not just a call every 30 mins which either fails or succeeds. Let's say, every 30 mins, the task makes 50 different calls. If master finished 25 and crashed, the slave is expected to come up and finish the remaining 25, instead of going through all 50 calls again.
when the dead master is rebooted after the failure, it needs to realize there is a master running already. In other words, it shouldn't come up as master; it just needs to stay idle until the running master crashes again.
Is pacemaker a right tool to achieve this combined with celery?
|
celery beat HA - using pacemaker?
| 0.197375 | 0 | 0 | 1,055 |
17,037,621 |
2013-06-11T06:16:00.000
| 0 | 0 | 0 | 1 |
python,cloud,openstack,openstack-nova
| 17,218,519 | 1 | true | 0 | 0 |
Most of the Python clients for OpenStack have a not-so-well-documented --debug flag; this will show the API queries as they occur, in incredibly verbose (and unsafe to share) detail. For example: nova --debug image-list.
| 1 | 2 | 0 |
I want to trace the functions used by a particular command, specifically for OpenStack. Now, I have a command, let's say 'nova image-list', which shows the images available in the repository. I want to know which functions this command is calling.
I tried strace, but the most I could get was the files that the command opens (and it's a lot of them!). I also tried the trace module of Python, but when I try
tracer.run('nova image-list')
it gives a syntax error (trace runs Python statements, so a shell command string isn't valid input). Now, is there a tool/mechanism that can help me get the flow of this command?
|
How to track the functions used by a python command?
| 1.2 | 0 | 0 | 135 |
17,039,253 |
2013-06-11T08:06:00.000
| 1 | 0 | 1 | 0 |
python,setuptools,setup.py
| 17,070,902 | 1 | false | 0 | 0 |
The site-packages directory is supposed to contain modules (e.g. spam.py) or packages (e.g. spam/__init__.py). In a setup script, the things referenced in py_modules and packages get installed into site-packages. Could you explain what it is you want to do that does not work with py_modules or packages?
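For reference, a minimal setup script using both options (the names are illustrative):
from distutils.core import setup

setup(
    name='spam',
    version='1.0',
    py_modules=['spam'],    # installs spam.py directly into site-packages
    packages=['spampkg'],   # installs the spampkg/ package into site-packages
)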
| 1 | 1 | 0 |
I want to copy some files to the site-packages folder.
How do I tell setup.py to copy files into the site-packages folder itself instead of into a subfolder?
|
Copy files to site-packages
| 0.197375 | 0 | 0 | 1,190 |
17,039,873 |
2013-06-11T08:44:00.000
| 2 | 0 | 0 | 0 |
python,node.js,socket.io
| 17,039,955 | 1 | false | 1 | 0 |
Expose a RESTful API on the chat server. Then your Django web application can easily make API calls to modify state in the chat server.
Doing anything else is more complicated and most likely unnecessary.
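A sketch of the Django side, using Python 2's urllib2 and a hypothetical /api/ban endpoint on the Node.js server (the URL and payload are assumptions):
import json
import urllib2

req = urllib2.Request('http://localhost:3000/api/ban',
                      json.dumps({'user_id': 42}),
                      {'Content-Type': 'application/json'})
urllib2.urlopen(req)  # the chat server applies the ban immediately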
| 1 | 1 | 0 |
I am building a chat application that consists of a Django web backend with a Node.js/socket.io powered chat server. There will be instances when changes made via the web interface (e.g. banning a user) need to be pushed immediately to the chat server. I can think of the following options:
Use a Python-based socket.io client to interface directly with the server (what are some good Python clients?)
Use redis or a message queue to do pub/sub of events (seems like overkill)
Implement a simple TCP wire protocol on a secondary localhost-only port (this could be done using the built-in Node and Python TCP libraries)
What would be the best option?
|
Communicating with a Node.js server application
| 0.379949 | 0 | 1 | 361 |
17,040,209 |
2013-06-11T09:01:00.000
| 0 | 0 | 0 | 1 |
python,google-app-engine,debugging,ide,breakpoints
| 18,138,699 | 2 | false | 0 | 0 |
The latest version of PyDev (2.8.1) supports GAE debugging. However, "Edit and Continue Debugging or Interactive Debugging" feature seems to have stopped working.
| 1 | 2 | 0 |
I'm developing on GAE-Python 2.7 using Eclipse+PyDev as IDE. Since GAE SDK 1.7.6 (March 2013), where Google "broke" support for breakpoints*, I've been using the old dev server to continue debugging the application I'm working on.
However, Google will drop support of the old dev server as of July 2013 and, since I do not expect a prompt solution for this on PyDev (I've seen no activity so far about this), I would like to look for an alternative IDE to still being able to do debugging.
I know that one of the possible options is to go for PyCharm (an initial license of 89€+VAT and 59€+VAT each year to continue receiving upgrades), but I would like to know how other people are (or will be) addressing this problem and what the current alternatives to PyCharm are.
*I would like to clarify the sentence "Google broke support for breakpoints": In SDK 1.7.6+, Google started using stdin/stdout in the new dev server for doing IPC and this leaves no chances to even do debugging with pdb. Google claims that they have created the hooks for tool vendors to support debugging (as PyCharm did) but, in my opinion, they "broke" debugging by forcing people to move away from the IDE they were initially recommending due to an architectural decision (I'm not an expert, but they could have used the native IPC mechanisms included in Python instead of using stdin/stdout).
EDIT:
I forgot to mention that I'm running Eclipse+Pydev for MacOSX, so please, also mention your OS compatibility in your alternatives/solutions.
|
Alternative IDE supporting debugging for Google App Engine in Python (Eclipse + PyDev no debug support on SDK 1.7.6+)
| 0 | 0 | 0 | 927 |
17,043,814 |
2013-06-11T12:13:00.000
| 0 | 0 | 0 | 1 |
python,python-2.7,permissions,permission-denied,ioerror
| 52,801,091 | 3 | false | 0 | 0 |
Although your code seems correct, I think it's better to assign an absolute path. If you develop on your local machine and the app runs on another server, there are probably some differences, such as which user calls the process. Logs are conventionally written to /var/log/app_name.
| 2 | 6 | 0 |
I am creating a log file for the code, but I am getting the following error:
[Tue Jun 11 17:22:59 2013] [error] [client 127.0.0.1] import mainLCF
[Tue Jun 11 17:22:59 2013] [error] [client 127.0.0.1] File "/home/ai/Desktop/home/ubuntu/LCF/GA-LCF/mainLCF.py", line 10, in
[Tue Jun 11 17:22:59 2013] [error] [client 127.0.0.1] logging.basicConfig(filename='genetic.log',level=logging.DEBUG,format='%(asctime)s %(message)s', datefmt='%m/%d/%Y %I:%M:%S %p')
[Tue Jun 11 17:22:59 2013] [error] [client 127.0.0.1] File "/usr/lib/python2.7/logging/__init__.py", line 1528, in basicConfig
[Tue Jun 11 17:22:59 2013] [error] [client 127.0.0.1] hdlr = FileHandler(filename, mode)
[Tue Jun 11 17:22:59 2013] [error] [client 127.0.0.1] File "/usr/lib/python2.7/logging/__init__.py", line 901, in __init__
[Tue Jun 11 17:22:59 2013] [error] [client 127.0.0.1] StreamHandler.__init__(self, self._open())
[Tue Jun 11 17:22:59 2013] [error] [client 127.0.0.1] File "/usr/lib/python2.7/logging/__init__.py", line 924, in _open
[Tue Jun 11 17:22:59 2013] [error] [client 127.0.0.1] stream = open(self.baseFilename, self.mode)
[Tue Jun 11 17:22:59 2013] [error] [client 127.0.0.1] IOError: [Errno 13] Permission denied: '/genetic.log'
I have checked the permissions on the particular folder where I want to create the log, but I am still getting the error.
My code is (the file is named mainLCF.py):
import logging
import sys
logging.basicConfig(filename='genetic.log',level=logging.DEBUG,format='%(asctime)s %(message)s', datefmt='%m/%d/%Y %I:%M:%S %p')
logging.debug("starting of Genetic Algorithm")
sys.path.append("/home/ai/Desktop/home/ubuntu/LCF/ws_code")
import blackboard
from pyevolve import *
def eval_func(chromosome):
    pass  # some function here
My system's file structure is:
/home/ai/Desktop/home/ubuntu/LCF/
├── ws_code/
│   └── blackboard.py
└── GA-LCF/
    └── main-LCF.py
I am calling mainLCF.py from another script, lcf.py, which is in ws_code.
|
Why am I getting IOError: [Errno 13] Permission denied?
| 0 | 0 | 0 | 27,894 |
17,043,814 |
2013-06-11T12:13:00.000
| 0 | 0 | 0 | 1 |
python,python-2.7,permissions,permission-denied,ioerror
| 17,044,205 | 3 | false | 0 | 0 |
It looks like logging tried to open the logfile as /genetic.log. If you pass filename as a keyword argument to logging.basicConfig, it creates a FileHandler, which passes the name to os.path.abspath, which expands it to an absolute path based on your current working directory. So either your process is running with / as its working directory, or your code changes the current working directory.
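A sketch of the absolute-path fix, anchoring the log next to the script rather than the process's working directory:
import logging
import os

log_path = os.path.join(os.path.dirname(os.path.abspath(__file__)),
                        'genetic.log')
logging.basicConfig(filename=log_path, level=logging.DEBUG,
                    format='%(asctime)s %(message)s',
                    datefmt='%m/%d/%Y %I:%M:%S %p')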
| 2 | 6 | 0 |
I am creating a log file for the code, but I am getting the following error:
[Tue Jun 11 17:22:59 2013] [error] [client 127.0.0.1] import mainLCF
[Tue Jun 11 17:22:59 2013] [error] [client 127.0.0.1] File "/home/ai/Desktop/home/ubuntu/LCF/GA-LCF/mainLCF.py", line 10, in
[Tue Jun 11 17:22:59 2013] [error] [client 127.0.0.1] logging.basicConfig(filename='genetic.log',level=logging.DEBUG,format='%(asctime)s %(message)s', datefmt='%m/%d/%Y %I:%M:%S %p')
[Tue Jun 11 17:22:59 2013] [error] [client 127.0.0.1] File "/usr/lib/python2.7/logging/__init__.py", line 1528, in basicConfig
[Tue Jun 11 17:22:59 2013] [error] [client 127.0.0.1] hdlr = FileHandler(filename, mode)
[Tue Jun 11 17:22:59 2013] [error] [client 127.0.0.1] File "/usr/lib/python2.7/logging/__init__.py", line 901, in __init__
[Tue Jun 11 17:22:59 2013] [error] [client 127.0.0.1] StreamHandler.__init__(self, self._open())
[Tue Jun 11 17:22:59 2013] [error] [client 127.0.0.1] File "/usr/lib/python2.7/logging/__init__.py", line 924, in _open
[Tue Jun 11 17:22:59 2013] [error] [client 127.0.0.1] stream = open(self.baseFilename, self.mode)
[Tue Jun 11 17:22:59 2013] [error] [client 127.0.0.1] IOError: [Errno 13] Permission denied: '/genetic.log'
I have checked the permissions on the particular folder where I want to create the log, but I am still getting the error.
My code is (the file is named mainLCF.py):
import logging
import sys
logging.basicConfig(filename='genetic.log',level=logging.DEBUG,format='%(asctime)s %(message)s', datefmt='%m/%d/%Y %I:%M:%S %p')
logging.debug("starting of Genetic Algorithm")
sys.path.append("/home/ai/Desktop/home/ubuntu/LCF/ws_code")
import blackboard
from pyevolve import *
def eval_func(chromosome):
    pass  # some function here
My system's file structure is:
/home/ai/Desktop/home/ubuntu/LCF/
├── ws_code/
│   └── blackboard.py
└── GA-LCF/
    └── main-LCF.py
I am calling mainLCF.py from another script, lcf.py, which is in ws_code.
|
Why am I getting IOError: [Errno 13] Permission denied?
| 0 | 0 | 0 | 27,894 |
17,045,229 |
2013-06-11T13:24:00.000
| 0 | 0 | 1 | 1 |
python,easy-install
| 17,045,827 | 2 | false | 0 | 0 |
It's hard to say without knowing which operating system you use. For example on OS X, easy_install has its own per-version variants; I just type easy_install[tab][tab] to see all available versions of easy_install.
On OS X, Debian and Red Hat I've got these:
easy_install
easy_install-2.5
easy_install-2.6
easy_install-2.7
Each Python version gets its own package. For example, if this were pip, there are these packages on OS X:
py27-pip
py24-pip
py31-pip
easy_install is probably built in with Python, so it should go per Python version, and the default one will be the one belonging to whichever Python version is set as the default in your environment.
| 1 | 0 | 0 |
I'm trying to install a python package using easy_install. There are several python versions installed.
This causes the package to be installed on python2.7, whereas I want it to be installed on python2.4.
Suggestions?
Thanks
Edit:
I have already tried easy_install-2.4; I get: -bash: easy_install-2.4: command not found
|
Install a package on multiple Python versions using easy_install
| 0 | 0 | 0 | 103 |
17,046,461 |
2013-06-11T14:21:00.000
| 3 | 0 | 0 | 1 |
java,python,file,rename,move
| 17,046,562 | 3 | true | 0 | 0 |
If you use Java 7, you can simply use WatchService and WatchKey. This is an observer that watches a directory; each time something is changed, created or deleted, you can run an action / do file handling.
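Since the asker's parent program is in Python, the third-party watchdog library offers a comparable observer there; a sketch (the watched path is illustrative):
import time

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

class MoveHandler(FileSystemEventHandler):
    def on_moved(self, event):
        # Fires for renames and for moves within the watched tree.
        print('%s -> %s' % (event.src_path, event.dest_path))

observer = Observer()
observer.schedule(MoveHandler(), path='C:/watched', recursive=True)
observer.start()
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    observer.stop()
observer.join()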
| 1 | 2 | 0 |
I'm trying to detect when a file is being moved or renamed in windows and I want to then use that change to update a database.
When I say file move: I mean moving from one directory to another from ".../A/foo.txt" to ".../B/foo.txt".
When I say file rename: I mean renaming but staying in the same directory ".../A/foo.txt" to ".../A/bar.txt"
I know that Linux and most people treat them as the same thing, and for my purposes they are the same thing. I just want to know the actual file path afterwards and be able to match it to the original file path, even in circumstances where there is a batch move.
I am using python for the parent program, but I am willing to use any coding language though it preferably is Java/Python/some form of C.
|
How to Detect File Rename / Move in Windows
| 1.2 | 0 | 0 | 1,402 |
17,046,929 |
2013-06-11T14:41:00.000
| 0 | 0 | 0 | 1 |
python,emacs,command,interpreter
| 17,047,535 | 2 | false | 0 | 0 |
AFAICS the keys are the same as in M-x shell: M-p / M-n cycle through the input history. See the In/Out menu for the available keys/commands.
| 1 | 12 | 0 |
Inside emacs I am running interpreters for several different languages (python, R, lisp, ...). When I run the interpreters through the terminal in most cases I can use the up arrow to see the last command or line of code that I entered. I no longer have this functionality when I am running the interpreters in emacs. How can I achieve this functionality?
How can I access the command history from the interpreter inside emacs?
Can I do this generally for language X?
At the moment I need to use python, so if anyone knows how to do this specifically with the python interpreter in emacs please let me know!
|
Command history in interpreters in emacs
| 0 | 0 | 0 | 1,865 |
17,047,285 |
2013-06-11T14:57:00.000
| 0 | 0 | 1 | 0 |
python,gcc,header-files,setup.py
| 17,047,466 | 1 | true | 0 | 0 |
Just export CPLUS_INCLUDE_PATH=/path/to/desired/include/directory (or C_INCLUDE_PATH for C headers) and re-run the installation script.
| 1 | 4 | 0 |
Basically the title says it all. I have a non-standard path for a library's header files and need to include it in the search path of a python installation script.
|
Specify header file location for a python setup.py installation
| 1.2 | 0 | 0 | 1,731 |
17,048,540 |
2013-06-11T15:54:00.000
| 0 | 1 | 1 | 0 |
php,python,code-analysis
| 17,223,867 | 4 | false | 0 | 0 |
Although a lot of suggestions point towards pycallgraph and phpcallgraph, I don't think these will do what you want: they are for runtime analysis, whereas it sounds like you want to do static analysis.
I'm not aware of any tools for this, but, given that you're only interested in the workings of a single class and the relationships within that class, with a little effort you should be able to hack something together in your scripting language of choice which:
Parses all function names and variable declarations inside the class and stores them somewhere
Uses the information from step 1 to identify variable usages, variable assignments and function calls, along with the functions in which these occur.
Converts this information into the graph format used by dot, and then uses dot to generate a directed graph showing the dependencies.
Given the effort involved, if the class is not too large I would be tempted just to do it by hand!
Good luck, and if you do find a solution I would love to see it.
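For the Python flavour of the question, the standard library's ast module covers most of the first two steps; a rough sketch that maps each method of a class to the self methods it calls (the file name is illustrative):
import ast

tree = ast.parse(open('myclass.py').read())
for node in ast.walk(tree):
    if isinstance(node, ast.ClassDef):
        for item in node.body:
            if isinstance(item, ast.FunctionDef):
                # Collect calls of the form self.something(...)
                calls = set(n.func.attr for n in ast.walk(item)
                            if isinstance(n, ast.Call)
                            and isinstance(n.func, ast.Attribute)
                            and isinstance(n.func.value, ast.Name)
                            and n.func.value.id == 'self')
                print('%s.%s -> %s' % (node.name, item.name, sorted(calls)))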
| 1 | 6 | 0 |
I am looking for an easy-to-use tool which can visualize the 'inner working' of a class, written e.g. in PHP. What I would like to see are the different class methods, and how they are related (method A calls method B etc). Is there such a tool to create such a graph?
In a further step, maybe there is a tool which also visualizes the 'inner working' of a class (in a reverse-engineering way) of really how the workflow is, i.e. with all if-else decisions etc, what methods are called in what case?
If anyone can refer me to such a tool (preferably for PHP and Python) I would appreciate it.
|
What tools are available to visualize in-class dependencies (e.g. for PHP)?
| 0 | 0 | 0 | 885 |
17,049,030 |
2013-06-11T16:20:00.000
| 0 | 0 | 1 | 0 |
python,setuptools,distutils
| 17,070,977 | 1 | false | 0 | 0 |
distutils imports all modules to byte-compile them (i.e. create the pyc and maybe pyo files) at build time and/or install time. There is currently no option to have a module skipped. You could write your setup script so that a different sdist is generated for Python 2 and Python 3 (so, for example, somemodule2.py would not be included in the Python 3 sdist), but not all tools work well with different sdists, including PyPI.
At the current time, I would try to make each module not raise a SyntaxError when imported by either Python 2 or 3. Or I would follow Martijn's advice and write just one module, possibly using the six module, if the end result is not too messy (I've seen really horrible 2-and-3 code, so I don't like that solution, but a big part of the community has chosen it because it works).
| 1 | 1 | 0 |
I have a Python library written to work under both Python 2 and Python 3, with all the version-specific code localized in one module that exists in two variants, one source code file for Python 2 and one for Python 3. Each file contains code that raises a SyntaxError if imported into the wrong Python version.
When I package my library with distutils and install it, I always get a syntax error report for one or the other file. Is there a way to get rid of this? Ideally, I would like to tell distutils/setuptools to ignore the file that is not for the currently running Python version.
|
Packaging python libraries with version-specific (2/3) code
| 0 | 0 | 0 | 86 |
17,051,552 |
2013-06-11T18:49:00.000
| 0 | 0 | 1 | 0 |
python,configparser
| 17,052,268 | 1 | false | 0 | 0 |
At best, I think you could change the option value to set it empty.
| 1 | 0 | 0 |
When using ConfigParser to parse multiple configuration files at the same time, is it possible to have an option from the first file removed by the second, and not just have its value changed?
|
Use Python's ConfigParser to automatically remove options
| 0 | 0 | 0 | 298 |
17,052,725 |
2013-06-11T20:02:00.000
| 6 | 0 | 1 | 0 |
python,django
| 17,052,782 | 2 | false | 1 | 0 |
Python is a programming language. Django is a web framework built using Python, designed to simplify the creation of websites. It provides a set of common functionality to reduce the amount of trivial code that you need to write.
Django provides:
An administration panel
A database modeling layer
A templating system
Form generation and validation.
and other common functionality.
| 1 | 5 | 0 |
I'm looking at a job possibility that has a need for both Django and Python. I have some experience with Python but none with Django, nor do I know precisely what Django is. Can someone please explain the difference between Django and Python, how they are related and what they are used for?
Thanks in advance for all your help.
|
What is the difference between Django and Python?
| 1 | 0 | 0 | 9,241 |
17,053,548 |
2013-06-11T20:48:00.000
| 3 | 0 | 0 | 0 |
python,machine-learning,multiprocessing,scikit-learn
| 17,070,022 | 1 | false | 0 | 0 |
I don't think this is possible. You could implement something with OpenMP inside the minibatch processing. I'm not aware of any parallel minibatch k-means procedures; parallelizing stochastic gradient descent procedures is somewhat hairy.
Btw, the n_jobs parameter in KMeans only distributes the different random initializations afaik.
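partial_fit does let you drive the batching yourself, although each call still runs in a single process; a minimal sketch:
import numpy as np
from sklearn.cluster import MiniBatchKMeans

X = np.random.rand(100000, 10)
mbk = MiniBatchKMeans(n_clusters=8)
for chunk in np.array_split(X, 100):  # feed the mini-batches manually
    mbk.partial_fit(chunk)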
| 1 | 6 | 1 |
In scikit-learn, KMeans has n_jobs but MiniBatchKMeans lacks it.
MiniBatchKMeans is faster than KMeans, but on large sample sets we would like to distribute the processing across multiple processes (or other parallel-processing libraries).
Is MiniBatchKMeans's partial_fit the answer?
|
How can i distribute processing of minibatch kmeans (scikit-learn)?
| 0.53705 | 0 | 0 | 1,981 |
17,053,671 |
2013-06-11T20:56:00.000
| 40 | 0 | 1 | 0 |
python,multithreading,numpy
| 17,054,932 | 5 | true | 0 | 0 |
Set the MKL_NUM_THREADS environment variable to 1. As you might have guessed, this environment variable controls the behavior of the Math Kernel Library which is included as part of Enthought's numpy build.
I just do this in my startup file, .bash_profile, with export MKL_NUM_THREADS=1. You should also be able to do it from inside your script to have it be process specific.
| 3 | 63 | 1 |
I have to run jobs on a regular basis on compute servers that I share with others in the department and when I start 10 jobs, I really would like it to just take 10 cores and not more; I don't care if it takes a bit longer with a single core per run: I just don't want it to encroach on the others' territory, which would require me to renice the jobs and so on. I just want to have 10 solid cores and that's all.
I am using Enthought 7.3-1 on Redhat, which is based on Python 2.7.3 and numpy 1.6.1, but the question is more general.
|
How do you stop numpy from multithreading?
| 1.2 | 0 | 0 | 25,964 |
17,053,671 |
2013-06-11T20:56:00.000
| 12 | 0 | 1 | 0 |
python,multithreading,numpy
| 21,673,595 | 5 | false | 0 | 0 |
In more recent versions of numpy I have found it necessary to also set NUMEXPR_NUM_THREADS=1.
In my hands, this is sufficient without setting MKL_NUM_THREADS=1, but under some circumstances you may need to set both.
| 3 | 63 | 1 |
I have to run jobs on a regular basis on compute servers that I share with others in the department and when I start 10 jobs, I really would like it to just take 10 cores and not more; I don't care if it takes a bit longer with a single core per run: I just don't want it to encroach on the others' territory, which would require me to renice the jobs and so on. I just want to have 10 solid cores and that's all.
I am using Enthought 7.3-1 on Redhat, which is based on Python 2.7.3 and numpy 1.6.1, but the question is more general.
|
How do you stop numpy from multithreading?
| 1 | 0 | 0 | 25,964 |
17,053,671 |
2013-06-11T20:56:00.000
| 52 | 0 | 1 | 0 |
python,multithreading,numpy
| 48,665,619 | 5 | false | 0 | 0 |
Hopefully this fixes all the scenarios and systems you may be on.
1. Use numpy.__config__.show() to see whether you are using OpenBLAS or MKL.
2. From this point on there are a few ways you can do this.
2.1. The terminal route: export OPENBLAS_NUM_THREADS=1 or export MKL_NUM_THREADS=1.
2.2. (This is my preferred way) In your Python script, import os and add the line os.environ['OPENBLAS_NUM_THREADS'] = '1' or os.environ['MKL_NUM_THREADS'] = '1'.
NOTE: when setting os.environ[VAR], the number of threads must be a string! Also, you may need to set this environment variable before importing numpy/scipy.
There are probably other options besides OpenBLAS or MKL, but step 1 will help you figure that out.
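Putting the in-script route together (a sketch; set whichever variables match your BLAS backend):
import os

# Must happen before numpy is imported, or the thread pool already exists.
os.environ['OPENBLAS_NUM_THREADS'] = '1'
os.environ['MKL_NUM_THREADS'] = '1'
os.environ['NUMEXPR_NUM_THREADS'] = '1'

import numpy as np  # BLAS is now limited to a single thread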
| 3 | 63 | 1 |
I have to run jobs on a regular basis on compute servers that I share with others in the department and when I start 10 jobs, I really would like it to just take 10 cores and not more; I don't care if it takes a bit longer with a single core per run: I just don't want it to encroach on the others' territory, which would require me to renice the jobs and so on. I just want to have 10 solid cores and that's all.
I am using Enthought 7.3-1 on Redhat, which is based on Python 2.7.3 and numpy 1.6.1, but the question is more general.
|
How do you stop numpy from multithreading?
| 1 | 0 | 0 | 25,964 |
17,054,291 |
2013-06-11T21:38:00.000
| 0 | 0 | 0 | 1 |
python,subprocess,pipe
| 17,054,413 | 2 | false | 0 | 0 |
It sounds like you might be confusing stderr with the process's return code (available in proc.returncode after you've called proc.communicate()). stderr is the second output stream available to the process. It's generally used for printing error messages that shouldn't be mixed with the process's normal ("standard") output, but there's no rule that says it MUST be used for that purpose, or indeed that it MUST be used at all. And if you pass an invalid command to the cmd argument of Popen(), then stderr will never be used, since no command actually gets run. If you're trying to get the error code (a numeric value) from the process, then proc.returncode is what you want.
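A sketch illustrating the distinction; the command is passed as a list so it actually spawns:
import subprocess

proc = subprocess.Popen(['ls', 'fds'],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = proc.communicate()
print(repr(stderr))     # "ls: cannot access fds: No such file or directory"
print(proc.returncode)  # non-zero, since ls failed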
| 1 | 0 | 0 |
I have code
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = proc.communicate()
I tried assigning invalid commands to cmd, but stderr is always null.
An invalid command like 'ls fds' returns 'ls: cannot access fds: No such file or directory',
but the message appears in neither stdout nor stderr.
|
can't get stderr value from a subprocess
| 0 | 0 | 0 | 221 |
17,055,472 |
2013-06-11T23:25:00.000
| 0 | 1 | 0 | 1 |
c++,python,c,api
| 44,001,898 | 2 | false | 0 | 0 |
I had the same problem, but when I changed all the \ to / and added a . at the beginning of the path it worked, i.e. the path should look something like PySys_SetPath("./Python/") or PySys_SetPath("C:/full/path/Python/").
| 1 | 1 | 0 |
I'm using the Python C API, and numerous times now I've tried using PySys_SetPath() to redirect the interpreter to a path where I've stored all of my scripts. Yet, every time I try it, I get the following error:
Unhandled exception at 0x1e028482 in app.exe: 0xC0000005: Access violation reading location 0x00000004.
I use it in the following syntax: PySys_SetPath("/Python/"). Is that incorrect? Why does it keep crashing? Thanks in advance.
|
Why won't PySys_SetPath() work?
| 0 | 0 | 0 | 3,054 |
17,055,496 |
2013-06-11T23:28:00.000
| 0 | 0 | 0 | 1 |
android,python,unzip
| 64,447,721 | 3 | false | 0 | 1 |
A late entry...but all I had to do was rename the file to a .ZIP extension.
| 1 | 2 | 0 |
I am doing research with mobile apps and need to analyze their code after unzipping the .apk file. However, the process of unzipping naturally involves lots of I/O, which doesn't make it scalable. I am wondering if it's possible to hold the unzipped data in memory, with several variables representing it, thus saving the trouble of writing to the file system. I have thousands of apps to analyze, so being able to do something like this would significantly speed up my process. Can anyone suggest a way out for me? I am using Python.
Thanks in advance
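For what it's worth, the standard library can already do this, since an .apk is an ordinary zip archive; a sketch using an in-memory buffer:
import io
import zipfile

with open('app.apk', 'rb') as f:
    buf = io.BytesIO(f.read())  # raw archive bytes held in memory

apk = zipfile.ZipFile(buf)
for name in apk.namelist():
    data = apk.read(name)       # each member is decompressed in memory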
|
Is it possible to unzip an .apk file (or generally any zipped file) into memory instead of writing it to the file system?
| 0 | 0 | 0 | 14,330 |
17,056,138 |
2013-06-12T00:55:00.000
| 1 | 1 | 0 | 0 |
python,testing,pytest,doctest
| 17,083,687 | 4 | false | 0 | 0 |
Could you try with the repo version of pytest and paste a session log? I'd think --doctest-modules should pick up any .py files.
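One way to make the flag stick for the whole tree (and stay in one process for coverage) is project configuration; a sketch, assuming a pytest version recent enough to honor pytest.ini:
[pytest]
addopts = --doctest-modules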
| 2 | 26 | 0 |
We currently have pytest with the coverage plugin running over our tests in a tests directory.
What's the simplest way to also run doctests extracted from our main code? --doctest-modules doesn't work (probably since it just runs doctests from tests). Note that we want to include doctests in the same process (and not simply run a separate invocation of py.test) because we want to account for doctest in code coverage.
|
How to make pytest run doctests as well as normal tests directory?
| 0.049958 | 0 | 0 | 7,851 |
17,056,138 |
2013-06-12T00:55:00.000
| 0 | 1 | 0 | 0 |
python,testing,pytest,doctest
| 53,343,424 | 4 | false | 0 | 0 |
This worked with doctests as well as with plain tests in one module. For a non-doctest test to be picked up, the standard py.test discovery mechanism applies: a module name with a test_ prefix and test functions with a test_ prefix.
| 2 | 26 | 0 |
We currently have pytest with the coverage plugin running over our tests in a tests directory.
What's the simplest way to also run doctests extracted from our main code? --doctest-modules doesn't work (probably since it just runs doctests from tests). Note that we want to include doctests in the same process (and not simply run a separate invocation of py.test) because we want to account for doctest in code coverage.
|
How to make pytest run doctests as well as normal tests directory?
| 0 | 0 | 0 | 7,851 |
17,056,818 |
2013-06-12T02:33:00.000
| 0 | 0 | 0 | 0 |
python,csv,file-io
| 21,414,260 | 4 | false | 0 | 0 |
I know this is an older question so maybe you have long since solved it but I think you are approaching this in a more complex way than is needed. I figure I'll respond in case someone else has the same problem and finds this.
If you are doing things this way because you do not have a software key, it might help to know that the E-Merge and E-DataAid programs for E-Prime don't require a key. You only need the key for editing build files. Whoever provided you with the .txt files should probably have an install disk for these programs. If not, it is available on the PST website (I believe you need a serial code to create an account, but I'm not certain).
E-Prime generally creates a .edat file that matches the content of the text file you have posted an example of. Sometimes, though, if E-Prime crashes you don't get the edat file and only have the .txt. Luckily you can generate the edat file from the .txt file.
Here's how I would approach this issue:
If you do not have the edat files available, first use E-DataAid to recover the files.
Then, presuming you have multiple participants, you can use E-Merge to merge all of the edat files together for all participants who completed this task.
Open the merged file. It might look a little chaotic depending on how much you have in the file. Go to Tools -> Arrange Columns. This will show a list of all your variables.
Adjust so that only the desired variables are in the right-hand box. Hit OK.
Then you should have something resembling your end goal, which can be exported as a CSV.
If you have many procedures in the program, you might at this point have lines that just have startup info and NULL in the locations where your variables of interest are. You can fix this by going to Tools -> Filter and creating a filter to eliminate those lines.
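Incidentally, the \x00 bytes in the "garbage" output quoted in the question suggest the .txt is UTF-16-encoded; if you do want to parse it directly in Python, a sketch that decodes first (the filename is illustrative):
import codecs

data = {}
with codecs.open('session.txt', 'r', encoding='utf-16') as eprime:
    for line in eprime:
        if ':' in line:
            key, _, value = line.strip().partition(':')
            data.setdefault(key.strip(), []).append(value.strip())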
| 1 | 7 | 1 |
Eprime outputs a .txt file like this:
*** Header Start ***
VersionPersist: 1
LevelName: Session
Subject: 7
Session: 1
RandomSeed: -1983293234
Group: 1
Display.RefreshRate: 59.654
*** Header End ***
Level: 2
*** LogFrame Start ***
MeansEffectBias: 7
Procedure: trialProc
itemID: 7
bias1Answer: 1
*** LogFrame End ***
Level: 2
*** LogFrame Start ***
MeansEffectBias: 2
Procedure: trialProc
itemID: 2
bias1Answer: 0
I want to parse this and write it to a .csv file but with a number of lines deleted.
I tried to create a dictionary that took the text appearing before the colon as the key and
the text after as the value:
{subject: [7, 7], bias1Answer : [1, 0], itemID: [7, 2]}
import re  # needed for re.sub below

def load_data(filename):
    data = {}
    eprime = open(filename, 'r')
    for line in eprime:
        rows = re.sub('\s+', ' ', line).strip().split(':')
        try:
            data[rows[0]] += rows[1]
        except KeyError:
            data[rows[0]] = rows[1]
    eprime.close()
    return data
fullDict = {}
for line in open(fileName, 'r'):
    if ':' in line:
        row = line.strip().split(':')
        fullDict[row[0]] = row[1]
print fullDict
Both of the scripts above produce garbage:
{'\x00\t\x00M\x00e\x00a\x00n\x00s\x00E\x00f\x00f\x00e\x00c\x00t\x00B\x00i\x00a\x00s\x00': '\x00 \x005\x00\r\x00', '\x00\t\x00B\x00i\x00a\x00s\x002\x00Q\x00.\x00D\x00u\x00r\x00a\x00t\x00i\x00o\x00n\x00E\x00r\x00r\x00o\x00r\x00': '\x00 \x00-\x009\x009\x009\x009\x009\x009\x00\r\x00'
If I could set up the dictionary, I could write it to a CSV file that would look like this:
Subject itemID ... bias1Answer
7 7 1
7 2 0
|
Parsing a txt file into a dictionary to write to csv file
| 0 | 0 | 0 | 1,142 |
17,061,118 |
2013-06-12T08:38:00.000
| 1 | 1 | 1 | 0 |
python,performance
| 17,061,230 | 3 | false | 0 | 0 |
There isn't a specific set of circumstances in which C or C++ win. Pretty much any CPU-heavy code you write in C or C++ will run many times faster than the equivalent Python code.
If you haven't noticed, it's simply because, for the problems you've had to solve in Python, performance has never been an issue.
| 1 | 7 | 0 |
It is always said that Python is not as efficient as other languages such as C/C++, Java, etc., and it is also recommended to write the bottleneck parts in C. But I've never run into such problems; maybe that's because, most of the time, it is the way you solve the problem rather than the efficiency of the language that matters.
Can anybody illustrate any real circumstances? Some simple code samples would be great.
|
Any real examples to show python's inefficiency?
| 0.066568 | 0 | 0 | 1,860 |
17,062,505 |
2013-06-12T09:52:00.000
| 4 | 0 | 0 | 0 |
python-3.x,sublimetext2
| 17,116,403 | 3 | false | 0 | 0 |
You can easily do this with two keystrokes - Ctrl+D, Ctrl+C.
| 1 | 2 | 0 |
In NetBeans it's possible to create a macro for selecting a word and copying it to the clipboard.
I wonder if it's possible with Sublime Text 2?
Thanks for any help.
Edit: I understand that this is possible with a plugin, but I don't know Python. If any Python developers can create a plugin for this, it would be awesome! :)
|
Sublime text - select word and copy to clipboard macro
| 0.26052 | 0 | 0 | 2,487 |
17,063,168 |
2013-06-12T10:26:00.000
| 0 | 1 | 1 | 0 |
java,python,perl,debugging
| 17,064,348 | 1 | false | 0 | 0 |
Yes, alter the host program (the Java program) so that it runs the Perl program in a debugger.
You might well run into problems with the way that debuggers "attach" in different programming language environments: perl -d assumes a tty is there for interactive commands, whereas java does something completely different.
| 1 | 0 | 0 |
This question just came up in a discussion and I am curious to know:
is it possible to have a debugger that can debug 2 languages? For example, if I have a Java program that references/opens/accesses a script (Perl or Python), is it possible to have a debugger that is able to debug the Perl/Python script?
Note: logging is not an acceptable debugging technique here.
|
Dual debugger Java + (Perl/Python) script
| 0 | 0 | 0 | 39 |
17,071,584 |
2013-06-12T17:24:00.000
| 0 | 0 | 0 | 1 |
python,bash,jenkins,environment-variables
| 64,924,366 | 2 | false | 0 | 0 |
import os
os.environ.get("variable_name")
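Applied to the question's example, a sketch reproducing the bash pipeline with the standard library (BUILD_ID and JOB_NAME are the Jenkins-provided variables):
import os
import re

build_id = os.environ.get('BUILD_ID', '')
job_name = os.environ.get('JOB_NAME', '')
qualifier = re.sub('[-_]', '', build_id)[:12]  # mirrors the sed + cut steps
print(qualifier)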
| 1 | 8 | 0 |
So I have a bash script in which I use the environment variables from Jenkins.
For example:
QUALIFIER=`echo $BUILD_ID | sed "s/[-_]//g" | cut -c1-12`
Essentially I'm taking the build ID, along with the job name, to determine which script to call from my main script. I want to use Python instead, so I was wondering whether I can use these variables without the Jenkins Python API.
I hope the question makes sense. Thanks
|
How to use Jenkins Environment variables in python script
| 0 | 0 | 0 | 30,579 |
17,072,642 |
2013-06-12T18:25:00.000
| 2 | 0 | 1 | 0 |
python
| 17,072,705 | 1 | false | 0 | 0 |
AFAICT, no, you won't have to reinstall all your modules. If you're only recompiling and running make install, it will just overwrite the already-installed files.
Though if a recent upgrade has changed a dependency of Python that is used by a compiled module, then some extensions may break. Recompiling only those will make things go fine.
| 1 | 1 | 0 |
I missed out the bz2 standard module during the initial Python compilation. This version of Python has been in use for a few months, and quite a number of add-on modules such as numpy and scipy have been installed since then. Can anyone tell me: if I recompile Python, do I have to re-install all the add-on modules?
Your help is much appreciated.
Thanks,
Alex
|
Will recompiling Python require re-installing all add-on modules such as numpy and scipy?
| 0.379949 | 0 | 0 | 97 |
17,075,418 |
2013-06-12T21:09:00.000
| 4 | 1 | 0 | 0 |
python,fortran,embed
| 23,725,918 | 7 | false | 0 | 0 |
There is a very easy way to do this using f2py. Write your Python method and add it as an input to your Fortran subroutine. Declare it in both the cf2py hook and the type declaration as EXTERNAL, and also declare its return value type, e.g. REAL*8. Your Fortran code will then have a pointer to the address where the Python method is stored. It will be SLOW AS MOLASSES, but for testing out algorithms it can be useful. I do this often (I port a lot of ancient spaghetti Fortran to Python modules...). It's also a great way to use things like optimised SciPy calls in legacy Fortran.
| 1 | 8 | 1 |
I was looking at the option of embedding Python into Fortran 90 to add Python functionality to my existing Fortran 90 code. I know that it can be done the other way around, by extending Python with Fortran 90 using f2py from numpy. But I want to keep my super-optimized main loop in Fortran and add Python to do some additional tasks / evaluate further developments before I can do them in Fortran, and also to ease code maintenance. I am looking for answers to the following questions:
1) Is there a library that already exists from which I can embed python into fortran? (I am aware of f2py and it does it the other way around)
2) How do we take care of data transfer from fortran to python and back?
3) How can we have callback functionality implemented? (Let me describe the scenario a bit... I have my main_fortran program in Fortran that calls the Func1_Python module in Python. Now, from this Func1_Python, I want to call another function, say Func2_Fortran, in Fortran.)
4) What would be the impact of embedding the interpreter of python inside fortran in terms of performance....like loading time, running time, sending data (a large array in double precision) across etc.
Thanks a lot in advance for your help!!
Edit1: I want to set the direction of the discussion right by adding some more information about the work I am doing. I am into scientific computing stuff. So, I would be working a lot on huge arrays / matrices in double precision and doing floating point operations. So, there are very few options other than fortran really to do the work for me. The reason i want to include python into my code is that I can use NumPy for doing some basic computations if necessary and extend the capabilities of the code with minimal effort. For example, I can use several libraries available to link between python and some other package (say OpenFoam using PyFoam library).
|
Embed python into fortran 90
| 0.113791 | 0 | 0 | 9,062 |
17,077,494 |
2013-06-13T00:23:00.000
| -2 | 0 | 1 | 0 |
ipython,ipython-notebook
| 68,028,557 | 14 | false | 0 | 0 |
The %notebook foo.ipynb magic command will export the current IPython session history to "foo.ipynb".
More info by typing %notebook?
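For the command-line conversion the question actually asks about, nbconvert is the usual route: ipython nbconvert --to python foo.ipynb on IPython of that era, or jupyter nbconvert --to script foo.ipynb on modern installs; both write a foo.py next to the notebook.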
| 1 | 370 | 0 |
I'm looking at using the *.ipynb files as the source of truth and programmatically 'compiling' them into .py files for scheduled jobs/tasks.
The only way I understand to do this is via the GUI. Is there a way to do it via command line?
|
How do I convert a IPython Notebook into a Python file via commandline?
| -0.028564 | 0 | 0 | 366,952 |
17,079,358 |
2013-06-13T04:38:00.000
| 1 | 0 | 0 | 1 |
python,windows,google-app-engine
| 17,086,966 | 1 | false | 1 | 0 |
I was having the same problem with Google App Engine 1.8.0; then I installed the latest 1.8.1 and the issue was fixed!
| 1 | 1 | 0 |
I installed Google App Engine on my laptop, and when I click on the Google App Engine Launcher icon, the mouse changes to a loading icon and then nothing runs, nothing displays and no error is reported, just nothing.
My laptop runs Windows 7 64-bit, with Python 2.7 installed.
Please help.
|
Google App Engine Launcher not starting
| 0.197375 | 0 | 0 | 291 |
17,079,919 |
2013-06-13T05:39:00.000
| 0 | 0 | 1 | 1 |
python,openstack,openstack-nova
| 17,218,560 | 2 | false | 0 | 0 |
General rule of thumb for devstack:
always run ./unstack.sh before re-running stack.sh, or before pulling from the repositories and re-running stack.sh.
| 1 | 0 | 0 |
I have installed OpenStack on Ubuntu 12.04 (single node) using devstack. It was running smoothly until yesterday; when I ran ./stack.sh today, it showed an error:
./stack.sh:672 nova-api did not start
I have python-paste and python-pastedeploy installed. How do I fix this error?
|
Error in devstack script. nova-api did not start?
| 0 | 0 | 0 | 3,196 |
17,081,821 |
2013-06-13T07:46:00.000
| 0 | 0 | 0 | 0 |
python,ironpython
| 37,896,844 | 5 | false | 0 | 1 |
All implementations can run in Eclipse via PyDev, which kills the argument of which one to use, as it is all the same language; the implementations are just more domain-specific. IronPython targets Microsoft's .NET, Jython targets Java, CPython targets Python itself. Each environment naturally has its own compiler/libraries; all you are doing by choosing one over the other is trading environments. Each has its positives and negatives, but naturally you would want to give plain Python a try before touching the other environments, as a beginner's rule.
| 2 | 13 | 0 |
I have to make a GUI for some testing teams. I have been asked to do it in Python, but when I Google, all I see is about IronPython.
I was also asked not to use Visual Studio because it is too expensive for the company, so if you have any idea how to avoid that I would be very happy.
I am still new to Python and programming overall, so please no too-advanced solutions.
If you have any questions, just ask.
GUI part: which would you use for Windows and Mac (mostly Windows)? I would like some drag and drop so I don't waste too much time making the display part.
|
Python vs Iron Python
| 0 | 0 | 0 | 21,603 |
17,081,821 |
2013-06-13T07:46:00.000
| 1 | 0 | 0 | 0 |
python,ironpython
| 30,813,721 | 5 | false | 0 | 1 |
If you're just trying to create a GUI that runs on Windows, C# on Visual Studio is the easiest way to go. Their free version, Community (formerly Express), provides all the Windows controls you're used to, with a drag-and-drop GUI builder.
| 2 | 13 | 0 |
I have to make a GUI for some testing teams. I have been asked to do it in Python, but when I Google, all I see is about Iron Python.
I also was asked not to use Visual Studio because it is too expensive for the company. So if you have any idea to avoid that I would be very happy.
I am still new to Python and programming overall so not any to advanced solutions.
If you have any questions just ask.
GUI PART: with would you use when using windows and mac(most windows) I would like some drag and drop so I don't waste to much time making the display part
|
Python vs Iron Python
| 0.039979 | 0 | 0 | 21,603 |
17,087,758 |
2013-06-13T13:02:00.000
| -1 | 0 | 1 | 0 |
python
| 17,089,367 | 3 | false | 0 | 0 |
If your purpose is to not give away code, then just distribute the compiled Python library (the .pyc files) rather than the source code. No need to manually weed out code calls; just distribute the pyc versions of your files. If you're afraid that people will take your code and not give you credit, don't give them code if there is an alternative.
That said, we have licenses for a reason. You put the minimal header and your attribution at the top of every file, and you distribute a LICENSE file with your software that clearly indicates what people are, and are not, allowed to do with your source code. If they violate that, and you catch them, you now have legal recourse. If you don't trust people to uphold that license: that's the whole reason it's there. If your code is so unique that it needs to be licensed for fear of others passing it off as their own, it will be easy to find infractions. If, however, you treat all your code like this, a small reality check: you are not that good. Almost nothing you write will be original enough that others haven't already also written it; trying to cling to it is not going to benefit you, or anyone else.
Best code protection? Stick it online for everyone to see, so that you can point everyone else to it and go "see? that's my code. And this jerk is using it in their own product without giving me credit". Worse code protection, but still protection: don't distribute code, distribute the compiled libraries. (Worst code protection: distributing gimped code because you're afraid of the world for the wrong reasons.)
| 2 | 13 | 0 |
I have the following situation: I am working on several projects which make use of library modules that I have written. The library modules contain several classes and functions. In each project, some subset of the code of the libraries is used.
However, when I publish a project for other users, I only want to give away the code that is used by that project rather than the whole modules. This means I would like, for a given project, to remove unused library functions from the library code (i.e. create a new reduced library). Is there any tool that can do this automatically?
EDIT
Some clarifications/replies:
Regarding the "you should not do this in general" replies: The bottom line is that in practice, before I publish a project, I manually go through the library modules and remove unused code. As we are all programmers, we know that there is no reason to do something manually when you could easily explain to a computer how to do it. So practically, writing such a program is possible and should not even be too difficult (yes, it may not be super general). My question was whether someone knows if such a tool exists, before I start thinking about implementing it myself. Also, any thoughts about implementing this are welcome.
I do not want to simply hide all my code. If I had wanted to do that, I would probably not have used Python. In fact, I want to publish the source code, but only the code that is relevant to the project in question.
Regarding the "you are legally protected" comments: In my specific case, the legal/license protection does not help me. Also, the problem here is more general than someone stealing the code. For example, it could be for the sake of clarity: if someone needs to use/develop the code, you don't want dozens of irrelevant functions to be included.
|
Is there a tool that removes functions that are not used in Python?
| -0.066568 | 0 | 0 | 755 |
17,087,758 |
2013-06-13T13:02:00.000
| 1 | 0 | 1 | 0 |
python
| 17,096,348 | 3 | false | 0 | 0 |
I agree with @zmo - one way to avoid future problems like this is to plan ahead and make your code as modular as possible. I would have suggested putting the classes and functions in much smaller files. This would mean that for every project you make, you would have to hand-select which of these smaller files to include. I'm not sure if that's feasible with the size of your projects right now. But for future projects it's a practice you may consider.
| 2 | 13 | 0 |
I have the following situation: I am working on several projects which make use of library modules that I have written. The library modules contain several classes and functions. In each project, some subset of the code of the libraries is used.
However, when I publish a project for other users, I only want to give away the code that is used by that project rather than the whole modules. This means I would like, for a given project, to remove unused library functions from the library code (i.e. create a new reduced library). Is there any tool that can do this automatically?
EDIT
Some clarifications/replies:
Regarding the "you should not do this in general" replies: The bottom line is that in practice, before I publish a project, I manually go through the library modules and remove unused code. As we are all programmers, we know that there is no reason to do something manually when you could easily explain to a computer how to do it. So practically, writing such a program is possible and should even not be too difficult (yes, it may not be super general). My question was if someone know whether such a tool exists, before I start thinking about implementing it by myself. Also, any thoughts about implementing this are welcome.
I do not want to simply hide all my code. If I would have wanted to do that I would have probably not used Python. In fact, I want to publish the source code, but only the code which is relevant to the project in question.
Regarding the "you are legally protected" comments: In my specific case, the legal/license protection does not help me. Also, the problem here is more general than some stealing the code. For example, it could be for the sake of clarity: if someone needs to use/develop the code, you don't want dozens of irrelevant functions to be included.
|
Is there a tool that removes functions that are not used in Python?
| 0.066568 | 0 | 0 | 755 |
17,087,905 |
2013-06-13T13:09:00.000
| 1 | 0 | 0 | 0 |
python,sockets,tcp
| 17,088,117 | 1 | false | 0 | 0 |
You can absolutely do this using Twisted Python. You just accept the connections and set up your own handling logic (of course the library does not include built-in support for your particular communication pattern exactly, but you can't expect that).
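A minimal sketch of that handling logic in Twisted (the port is illustrative): the factory keeps the live connections, and dataReceived fans each message out to the others.
from twisted.internet import protocol, reactor

class Broadcast(protocol.Protocol):
    def connectionMade(self):
        self.factory.clients.append(self)

    def connectionLost(self, reason):
        self.factory.clients.remove(self)

    def dataReceived(self, data):
        for client in self.factory.clients:
            if client is not self:  # echo to everyone but the sender
                client.transport.write(data)

factory = protocol.Factory()
factory.protocol = Broadcast
factory.clients = []
reactor.listenTCP(8000, factory)
reactor.run()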
| 1 | 0 | 0 |
I am trying to create a server in Python 2.7.3 which sends data to all client connections whenever one client connection sends data to the server. For instance, if client c3 sent "Hello, world!" to my server, I would like to then have my server send "Hello, world!" to client connections c1 and c2. By client connections, I mean the communications sockets returned by socket.accept(). Note that I have tried using the asyncore and twisted modules, but AFAIK they do not support this. Does anybody know any way to accomplish this?
EDIT: I have seen Twisted, but I would much rather use the socket module. Is there a way (possibly multithreading, possibly using select) that I can do this using the socket module?
|
How would I handle multiple sockets and send data between them in Python 2.7.3?
| 0.197375 | 0 | 1 | 108 |
17,092,893 |
2013-06-13T17:00:00.000
| 0 | 0 | 0 | 0 |
python,django,django-1.5
| 17,105,379 | 1 | false | 1 | 0 |
Resolved. I had deploy settings in a different file overriding the allowed_hosts in settings.py. Appologies missed this before posting. Thanks for the responses received.
| 1 | 5 | 0 |
I have updated to django 1.5 and am getting the following message:
SuspiciousOperation: Invalid HTTP_HOST header (you may need to set ALLOWED_HOSTS): localhost:8000
I have tried localhost, 127.0.0.1, localhost:8000 in ALLOWED_HOSTS. I have also tried ['*'] all without success.
Anybody any ideas where I am going wrong? Works as expected with DEBUG=False
|
django 1.5 update ALLOWED_HOSTS failing SuspiciousOperation
| 0 | 0 | 0 | 4,184 |
17,093,372 |
2013-06-13T17:27:00.000
| 5 | 0 | 0 | 0 |
python,flask,query-string
| 21,792,010 | 2 | false | 1 | 0 |
request.query_string also seems to work.
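A small sketch showing both forms (the route name is illustrative):
from flask import Flask, request

app = Flask(__name__)

@app.route('/search')
def search():
    raw = request.query_string                 # e.g. 'a=1&b=2&b=3'
    params = request.args.to_dict(flat=False)  # {'a': ['1'], 'b': ['2', '3']}
    return '%r %r' % (raw, params)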
| 1 | 6 | 0 |
Is there a way to get the raw query string or a list of query string parameters in Flask?
I know how to get query string parameters with request.args.get('key'), but I would like to be able to take in variable query strings and process them myself. Is this possible?
|
Get raw query string in flask
| 0.462117 | 0 | 0 | 4,634 |
17,094,157 |
2013-06-13T18:13:00.000
| 0 | 0 | 0 | 0 |
python,file,security,directory
| 17,111,088 | 4 | false | 0 | 0 |
Here's the approach I use, which I think has the benefits of giving easy control over access to the files, and preventing path manipulation.
When the user uploads the file:
Read the filename.
Generate a random alphanumeric token.
Save the file in a non-web accessible directory with a filename of that token.
Record the token and original filename in a database (along with who uploaded it, or some way of indicating who has permission to it).
To get the file (a minimal sketch follows these steps):
The user requests the file via the token, not a filepath (mysite.com/download/587and83j21h1)
Use white-list validation on the token to ensure that it is alphanumeric.
Check the user's permission to the requested file.
Write the file to the response stream and set the filename equal to the original filename.
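A Flask sketch of the download half; the storage directory, the in-memory table and Flask itself are assumptions, not part of the original answer:
import os
import re
from flask import Flask, abort, send_file

app = Flask(__name__)
STORAGE_DIR = '/srv/uploads'  # outside the web root
# token -> original filename; a real app would keep this in a database
FILES = {'587and83j21h1': 'report.xls'}

@app.route('/download/<token>')
def download(token):
    if not re.match(r'^[A-Za-z0-9]+$', token):  # white-list validation
        abort(400)
    original = FILES.get(token)                 # also check permissions here
    if original is None:
        abort(404)
    # Flask >= 2.0 renames attachment_filename to download_name.
    return send_file(os.path.join(STORAGE_DIR, token),
                     as_attachment=True, attachment_filename=original)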
| 1 | 2 | 0 |
Part of my app requires the client to request files. Now, a well-behaved client will only request files that are safe to give, but I don't want a user to go about supplying "../../../creditCardInfo.xls", instead. What's the best practice for/simplest way to secure a filename to make sure that no files are served that would be higher than a certain point in the directory hierarchy? First instinct is to disallow filenames with .. in them but that seems... incomplete and unsatisfactory.
The current questions about filename safety on SO focus on making a writable/readable filename, not on ensuring that files that shouldn't be accessed aren't accessed.
|
Secure user-provided filename
| 0 | 0 | 0 | 1,341 |
17,094,525 |
2013-06-13T18:34:00.000
| 1 | 0 | 1 | 0 |
python,komodo
| 17,094,602 | 2 | false | 0 | 0 |
You can choose the file type/language when you create a new file in Komodo. Instead of selecting File/New/New File, use File/New/File from Template, and choose Python (for 2.x) or Python3 from the dialog. The file type will take effect immediately for syntax highlighting and checking.
After you do this for the first time, you'll notice Python or Python3 appearing in the numbered list in the submenu under File/New, along with any other languages you've used recently.
If you're already editing a file and want to change its type without saving, you can use Edit/Current File Settings, and in the File Preferences panel change the Language selection. This may be useful, for example, if you need to switch a file from Python to Python3 or vice versa.
| 2 | 2 | 0 |
I just downloaded Komodo Edit for editing Python and I have an issue with its configuration. How can I get the editor to automatically assume that the code I type is Python? The syntax highlighting only appears after I've typed the whole file and saved it with the .py suffix.
|
How do I set the file type for a new file in Komodo Edit/IDE?
| 0.099668 | 0 | 0 | 1,153 |
17,094,525 |
2013-06-13T18:34:00.000
| 0 | 0 | 1 | 0 |
python,komodo
| 30,636,863 | 2 | false | 0 | 0 |
Also, after launching a new file, you can change the language from the lower portion of the Komodo Edit main window. You don't have to save it as a .py file.
| 2 | 2 | 0 |
I just downloaded Komodo Edit for editing Python and I have an issue with its configuration. How can I get the editor to automatically assume that the code I type is Python? The syntax highlighting only appears after I've typed the whole file and saved it with the .py suffix.
|
How do I set the file type for a new file in Komodo Edit/IDE?
| 0 | 0 | 0 | 1,153 |
17,094,940 |
2013-06-13T19:00:00.000
| 0 | 1 | 0 | 0 |
python,flash,selenium
| 17,096,754 | 2 | false | 0 | 0 |
As long as you have access to the Flash source code it is possible (although it requires some work). You have to expose the Flash actions you want to test by making those methods callable from JavaScript (ActionScript's ExternalInterface is the mechanism for this). Once you can do that, you should be able to automate the process using Selenium's ability to execute JavaScript.
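For example, if the ActionScript side has registered a method with ExternalInterface.addCallback, you can reach it from Python roughly like this (the URL, element id and method name are all hypothetical):

```python
from selenium import webdriver

driver = webdriver.Firefox()
driver.get("http://example.com/page-with-flash")

# 'player' and 'getPercentLoaded' must match whatever the Flash movie
# exposed via ExternalInterface.addCallback in its ActionScript source.
loaded = driver.execute_script(
    "return document.getElementById('player').getPercentLoaded();")
print(loaded)
```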
| 1 | 2 | 0 |
I'm using Selenium with Python to test a web application. The app has a Flash component that I'd like to test. The only references I've seen to using Selenium with Flash refer to Flash-Selenium which hasn't been updated in several years. Is testing Flash with Selenium even possible?
|
Selenium/Python/Flash - How?
| 0 | 0 | 1 | 3,667 |
17,096,050 |
2013-06-13T20:07:00.000
| 2 | 0 | 1 | 0 |
python-3.x,intellij-idea,pycharm
| 45,573,678 | 2 | false | 0 | 0 |
It is a rough ride for anyone using the IntelliJ IDEA editor coming from any of their dedicated editors (PyCharm, PHPStorm, etc). They look almost the same, but there are critical differences that often go undocumented. A couple tips for anyone struggling with these types of issues:
In IntelliJ IDEA, there's a "Project Structure" window that houses many of the things that are put in easy-to-find locations in the dedicated editors. Its icon looks like a blocky staircase -- on Mac, you can open it using Cmd+semicolon or by selecting it from the "File" menu.
IntelliJ IDEA must be told what kind of project it is editing before certain menu options will show up. No, the polyglot IDE cannot guess that you are working on a Python project even if all your files have the .py extension; you have to install the Python module. With PHP, IntelliJ is even more helpless: it cannot take a wild guess that PHP is the language being used, and there is no "module" or PHP framework support at all.
As nice of a product as the IDEA editor is, it is maddeningly stupid and its developers seem oblivious to the problems inherent with it for people coming from IntelliJ's language-optimized IDEs. In my experience, the best way to get help with some of these issues is to file a ticket directly with IntelliJ because the wiki/help pages almost without fail document the corresponding features in the dedicated language editor.
| 1 | 14 | 0 |
I use JetBrains' IntelliJ IDEA 12 for both Java and Python development (Python development through the official Python IntelliJ plugin). My friend uses PyCharm (same company and similar interface, just dedicated to Python) and he showed me a cool feature of PyCharm: there's a Python package manager built-in to the IDE. I looked through the menu options in IntelliJ IDEA but I couldn't find anything relating to Python packages. Does this exist in IntelliJ IDEA/the Python plugin, or am I out of luck for now/unless I move to PyCharm for dedicated Python development?
I'm currently using Python 3.2 and IntelliJ 12.1.4 and Python Plugin 2.10.1.
|
IntelliJ IDEA 12 Python Package Manager?
| 0.197375 | 0 | 0 | 7,409 |
17,098,654 |
2013-06-13T23:05:00.000
| 2 | 0 | 0 | 0 |
python,pandas,dataframe
| 63,390,537 | 13 | false | 0 | 0 |
Another, more recent test with to_pickle():
I have 25 .csv files in total to process, and the final dataframe consists of roughly 2M items.
(Note: besides loading the .csv files, I also manipulate some data and extend the dataframe with new columns.)
Going through all 25 .csv files and creating the dataframe takes around 14 sec.
Loading the whole dataframe from a pkl file takes less than 1 sec.
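The round trip itself is one line each way (file names are arbitrary):

```python
import pandas as pd

df = pd.read_csv('big_input.csv')        # slow: done once
df.to_pickle('cached_frame.pkl')         # persist the finished dataframe

df = pd.read_pickle('cached_frame.pkl')  # fast: subsequent runs load this
```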
| 1 | 415 | 1 |
Right now I'm importing a fairly large CSV as a dataframe every time I run the script. Is there a good solution for keeping that dataframe constantly available in between runs so I don't have to spend all that time waiting for the script to run?
|
How to reversibly store and load a Pandas dataframe to/from disk
| 0.03076 | 0 | 0 | 432,843 |
17,099,364 |
2013-06-14T00:30:00.000
| 4 | 0 | 0 | 0 |
python,database,parsing,web-scraping
| 17,099,503 | 5 | false | 0 | 0 |
Python with BeautifulSoup and Urllib2 will probably serve you well. Of course, it is questionable as to whether or not you should be scraping data from other websites and you might find yourself in a constant struggle if those websites change layouts.
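A minimal Python 2 sketch of that combination (the URL and the tags extracted are placeholders):

```python
import urllib2
from bs4 import BeautifulSoup

html = urllib2.urlopen('http://example.com').read()
soup = BeautifulSoup(html)

# e.g. collect every link on the page
for a in soup.find_all('a'):
    print a.get('href')
```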
| 2 | 4 | 0 |
I want to create a website that extracts information from other websites and displays it on my own site. I am at the research stage, so I would like to hear some opinions on the best solution for this project.
I have heard that Python with a parser can do this; I just want to know which path I should take and which language I should use.
|
How can I get data from other websites?
| 0.158649 | 0 | 1 | 314 |
17,099,364 |
2013-06-14T00:30:00.000
| 0 | 0 | 0 | 0 |
python,database,parsing,web-scraping
| 17,100,130 | 5 | false | 0 | 0 |
You can write web spiders to gather data from other websites. urllib2 or requests will help you download the HTML from the site, and BeautifulSoup or PyQuery can help you parse the HTML and extract the data you want.
| 2 | 4 | 0 |
I want to create a website that extracts information from other websites and displays it on my own site. I am at the research stage, so I would like to hear some opinions on the best solution for this project.
I have heard that Python with a parser can do this; I just want to know which path I should take and which language I should use.
|
How can I get data from other websites?
| 0 | 0 | 1 | 314 |
17,099,581 |
2013-06-14T01:00:00.000
| 1 | 1 | 0 | 1 |
java,python,sip,lync,office-communicator
| 17,125,543 | 2 | false | 0 | 0 |
Well, if you are on Lync 2013, you can have a look at UCWA (ucwa.lync.com). It's a web service that allows you to log in to Lync and use IM, presence, etc.
You can use then any language you want. I played with it using Node on Mac OS X, for example.
| 1 | 2 | 0 |
I need to send out Instant Messages to a Lync/OCS server from Linux programmatically as an alerting mechanism.
I've looked into using python dbus and pidgin-sipe with finch or pidgin, but they aren't really good for sending one-off instant messages (finch and pidgin need to be running all the time).
Ideally, I'd have a python script or java class that could spit out Instant Messages to users when needed.
|
Sending out IMs to Lync/OCS programmatically
| 0.099668 | 0 | 0 | 3,151 |
17,099,850 |
2013-06-14T01:39:00.000
| 3 | 1 | 1 | 0 |
python,arrays,numpy,pypy
| 17,101,084 | 1 | false | 0 | 0 |
array.array is a memory efficient array. It packs bytes/words etc together, so there is only a few bytes of extra overhead for the entire array.
The one place where numpy can use less memory is when you have a sparse array (and are using one of the sparse array implementations)
If you are not using sparse arrays, you simply measured it wrong.
array.array also doesn't have a packed bool type, so you can implement one yourself as a wrapper around an array.array('I') or a bytearray(), or even just use bit masks with a Python long (see the sketch below).
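A sketch of such a packed bool wrapper on top of a bytearray(), storing one bit per entry:

```python
class BitArray(object):
    """Boolean array packed at one bit per element (8x denser than a list)."""

    def __init__(self, size):
        self.size = size
        self._bits = bytearray((size + 7) // 8)

    def __getitem__(self, i):
        return bool(self._bits[i >> 3] & (1 << (i & 7)))

    def __setitem__(self, i, value):
        if value:
            self._bits[i >> 3] |= 1 << (i & 7)
        else:
            self._bits[i >> 3] &= 0xFF ^ (1 << (i & 7))

flags = BitArray(1000)
flags[42] = True
print(flags[42])   # True
print(flags[43])   # False
```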
| 1 | 3 | 1 |
My project currently uses NumPy, only for memory-efficient arrays (of bool_, uint8, uint16, uint32).
I'd like to get it running on PyPy which doesn't support NumPy. (failed to install it, at any rate)
So I'm wondering: is there any other memory-efficient way to store arrays of numbers in Python? Anything that is supported by PyPy? Does PyPy have anything of its own?
Note: array.array is not a viable solution, as it uses a lot more memory than NumPy in my testing.
|
PyPy and efficient arrays
| 0.53705 | 0 | 0 | 919 |
17,100,293 |
2013-06-14T02:38:00.000
| 1 | 0 | 0 | 0 |
python,performance,sockets,loops,select
| 17,100,500 | 1 | true | 0 | 0 |
I'd use a priority queue to keep track of who's been dormant.
Notice, however, that you don't actually need a full-fledged priority queue if you only want to time out connections that have been inactive for a certain fixed amount of time. You can use a linked list instead:
The linked list stores all of the sockets in sorted order by the last time activity was seen.
When a socket receives data, you update a per-socket "data last seen at" member and move its list entry to the back of the list.
Pass select() the time until the head of the list expires.
At the end of an iteration of your select() loop, you pop off all of the expired list nodes (they're in sorted order) and close the connections.
It's important, if you want sockets to expire at the right time, to use a monotonic clock. The list might lose its sorted order if the clock happens to go backward at some point.
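A Python 3 sketch of that design, using an OrderedDict as the activity-ordered "linked list" (the timeout value and buffer size are arbitrary, and the data handling is reduced to a stub):

```python
import select
import socket
import time
from collections import OrderedDict

TIMEOUT = 60.0                       # allowed seconds of inactivity
last_seen = OrderedDict()            # socket -> last activity time, oldest first

def serve(server_sock):
    while True:
        if last_seen:                # wait only until the head of the list expires
            oldest = next(iter(last_seen.values()))
            wait = max(0.0, oldest + TIMEOUT - time.monotonic())
        else:
            wait = None              # nothing can expire: block indefinitely
        readable, _, _ = select.select([server_sock] + list(last_seen), [], [], wait)
        for s in readable:
            if s is server_sock:
                conn, _ = s.accept()
                last_seen[conn] = time.monotonic()
            else:
                data = s.recv(4096)
                if data:
                    # ... process data here ...
                    last_seen[s] = time.monotonic()
                    last_seen.move_to_end(s)   # back of the activity list
                else:                          # empty read: peer closed
                    del last_seen[s]
                    s.close()
        now = time.monotonic()       # pop expired heads; they are in sorted order
        while last_seen and now - next(iter(last_seen.values())) >= TIMEOUT:
            s, _ = last_seen.popitem(last=False)
            s.close()
```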
| 1 | 0 | 0 |
I'm designing a Python SSL server that needs to be able to handle up to a few thousand connections per minute. If a client doesn't send any data in a given period of time, the server should close their connection in order to free up resources.
Since I need to check if each connection has expired, would it be more efficient to make the sockets non-blocking and check all the sockets for data in a loop while simultaneously checking if they've timed out, or would it be better to use select() to get sockets that have data and maintain some kind of priority queue ordered by the time data was received on a socket to handle connection timeout?
Alternatively, is there a better method of doing this I haven't thought of, or are there existing libraries I could use that have the features I need?
|
Connections with timeout: better to set sockets to non-blocking and loop through them or use select() and maintain a priority queue?
| 1.2 | 0 | 1 | 235 |
17,113,591 |
2013-06-14T16:55:00.000
| 0 | 0 | 0 | 0 |
python,cmd,shutdown,minecraft
| 17,114,583 | 3 | false | 0 | 0 |
I would look at the tools in the os module, it would also help if i had more information about what operating system you are using.
| 2 | 0 | 0 |
I have multiple Minecraft servers running on the machine. The servers are started with .bat files that have corresponding titles. My question is: how can I shut down a specific Minecraft server with Python? Or, how can I kill a cmd.exe process with a given title using Python?
|
How to shut down a minecraft server with python?
| 0 | 0 | 0 | 1,744 |
17,113,591 |
2013-06-14T16:55:00.000
| 0 | 0 | 0 | 0 |
python,cmd,shutdown,minecraft
| 17,331,529 | 3 | false | 0 | 0 |
I ended up using autohotkey. launched autohotkey with python and made separate .ahk files for every server.
| 2 | 0 | 0 |
I have multiple Minecraft servers running on the machine. The servers are started with .bat files that have corresponding titles. My question is: how can I shut down a specific Minecraft server with Python? Or, how can I kill a cmd.exe process with a given title using Python?
|
How to shut down a minecraft server with python?
| 0 | 0 | 0 | 1,744 |
17,113,613 |
2013-06-14T16:55:00.000
| 0 | 0 | 0 | 0 |
python,machine-learning,artificial-intelligence,neural-network
| 17,117,416 | 2 | false | 0 | 0 |
Assume you have v visible units, and h hidden units, and v < h. The key idea is that once you've fixed all the values for each visible unit, the hidden units are independent.
So you loop through all 2^v subsets of visible unit activations. Computing the likelihood term for the RBM with one particular activated visible subset is tractable, because the hidden units are independent[1]. Loop through each hidden unit, and add up the probability of it being on and off conditioned on your subset of visible units. Then multiply out all of those summed on/off hidden probabilities to get the unnormalized probability of that particular subset of visible units. Add up all subsets and you are done.
The problem is that this is exponential in v. If v > h, just "transpose" your RBM, pretending the hidden are visible and vice versa.
[1] The hidden units can't influence each other, because any influence would have to go through the visible units (there are no h-to-h connections), and you've fixed the visible units.
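A numpy sketch of that procedure -- here enumerating the visible configurations and summing the hidden units out analytically. The parameter names (W, b_vis, b_hid) are assumptions, and this is only feasible for small numbers of visible units:

```python
import numpy as np
from itertools import product

def exact_avg_log_likelihood(data, W, b_vis, b_hid):
    """data: (T, n_vis) binary array; W: (n_vis, n_hid); b_vis, b_hid: biases.

    Uses log p*(v) = b_vis.v + sum_j log(1 + exp(b_hid_j + v.W_j)), i.e. the
    hidden units summed out in closed form, then computes log Z by brute
    force over all 2^n_vis visible configurations.
    """
    def log_unnorm(v):
        # unnormalized log-probability of a visible configuration
        return v @ b_vis + np.logaddexp(0.0, b_hid + v @ W).sum()

    n_vis = W.shape[0]
    all_v = (np.array(cfg, dtype=float) for cfg in product((0, 1), repeat=n_vis))
    log_z = np.logaddexp.reduce([log_unnorm(v) for v in all_v])
    return np.mean([log_unnorm(v) for v in np.asarray(data, dtype=float)]) - log_z
```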
| 1 | 0 | 1 |
I have been researching RBMs for a couple months, using Python along the way, and have read all your papers. I am having a problem, and I thought, what the hey? Why not go to the source? I thought I would at least take the chance you may have time to reply.
My question is regarding the Log-Likelihood in a Restricted Boltzmann Machine. I have read that finding the exact log-likelihood in all but very small models is intractable, hence the introduction of contrastive divergence, PCD, pseudo log-likelihood etc. My question is, how do you find the exact log-likelihood in even a small model?
I have come across several definitions of this formula, and all seem to be different. In Tieleman's 2008 paper "Training Restricted Boltzmann Machines using Approximations To the Likelihood Gradient", he performs a log-likelihood version of the test to compare to the other types of approximations, but does not say the formula he used. The closest thing I can find is the probabilities using the energy function over the partition function, but I have not been able to code this, as I don't completely understand the syntax.
In Bengio et al “Representation Learning: A Review and New Perspectives”, the equation for the log-likelihood is:
sum_{t=1}^{T} log P(x^(t); theta)
which is equal to sum_{t=1}^{T} log sum_{h in {0,1}^{d_h}} P(x^(t), h; theta)
where T is training examples. This is (14) on page 11.
The only problem is that none of the other variables are defined. I assume x is the training data instance, but what is the superscript (t)? I also assume theta are the latent variables h, W, v… But how do you translate this into code?
I guess what I’m asking is can you give me a code (Python, pseudo-code, or any language) algorithm for finding the log-likelihood of a given model so I can understand what the variables stand for? That way, in simple cases, I can find the exact log-likelihood and then compare them to my approximations to see how well my approximations really are.
|
Finding log-likelihood in a restricted boltzmann machine
| 0 | 0 | 0 | 1,642 |
17,116,718 |
2013-06-14T20:24:00.000
| 1 | 0 | 0 | 1 |
android,python,django,web,localhost
| 17,116,746 | 9 | false | 1 | 0 |
If both are connected to the same network, all you need to do is provide the IP address of your server (in your network) in your Android app.
| 6 | 31 | 0 |
I am developing a web page in Django (on my PC with Windows 7) and now I need to test some pages on tablet PCs. It occurred to me to wonder whether I can access my Windows localhost from an Android tablet. Is that possible? Both devices are on the same Wi-Fi connection at home.
I have read a lot of questions and answers regarding this issue on Stack Overflow and elsewhere, but none of them gave me a concrete solution.
I have a Samsung Tab 2 10.1 with Android 4.0.4.
I appreciate any help, thanks.
|
how to access my 127.0.0.1:8000 from android tablet
| 0.022219 | 0 | 0 | 33,438 |
17,116,718 |
2013-06-14T20:24:00.000
| 0 | 0 | 0 | 1 |
android,python,django,web,localhost
| 61,816,349 | 9 | false | 1 | 0 |
Try this:
python manage.py runserver 0.0.0.0:8000
Then connect both the tablet and the computer to the same Wi-Fi and browse to your computer's address on that network, e.g. 192.168.0.100:8000 (you can also bind directly to it: python manage.py runserver 192.168.0.100:8000).
On the tablet, type that URL in the address bar.
| 6 | 31 | 0 |
I am developing a web page in Django (on my PC with Windows 7) and now I need to test some pages on tablet PCs. It occurred to me to wonder whether I can access my Windows localhost from an Android tablet. Is that possible? Both devices are on the same Wi-Fi connection at home.
I have read a lot of questions and answers regarding this issue on Stack Overflow and elsewhere, but none of them gave me a concrete solution.
I have a Samsung Tab 2 10.1 with Android 4.0.4.
I appreciate any help, thanks.
|
how to access my 127.0.0.1:8000 from android tablet
| 0 | 0 | 0 | 33,438 |
17,116,718 |
2013-06-14T20:24:00.000
| 1 | 0 | 0 | 1 |
android,python,django,web,localhost
| 17,116,796 | 9 | false | 1 | 0 |
You need to know the IP address of your machine.
Make sure both of your machines (tablet and computer) are connected to the same network.
Say your machine's address is 192.168.0.22. Then do this from your tablet:
browse to 192.168.0.22:8000
That's it!
| 6 | 31 | 0 |
I am developing a web page in Django (on my PC with Windows 7) and now I need to test some pages on tablet PCs. It occurred to me to wonder whether I can access my Windows localhost from an Android tablet. Is that possible? Both devices are on the same Wi-Fi connection at home.
I have read a lot of questions and answers regarding this issue on Stack Overflow and elsewhere, but none of them gave me a concrete solution.
I have a Samsung Tab 2 10.1 with Android 4.0.4.
I appreciate any help, thanks.
|
how to access my 127.0.0.1:8000 from android tablet
| 0.022219 | 0 | 0 | 33,438 |
17,116,718 |
2013-06-14T20:24:00.000
| 16 | 0 | 0 | 1 |
android,python,django,web,localhost
| 17,116,791 | 9 | true | 1 | 0 |
You can find out what the ip address of your PC is with the ipconfig command in a Windows command prompt. Since you mentioned them being connected over WiFi look for the IP address of the wireless adapter.
Since the tablet is also in this same WiFi network, you can just type that address into your tablet's browser, with the :8000 appended to it and it should pull up the page.
| 6 | 31 | 0 |
I am developing a web page in Django (on my PC with Windows 7) and now I need to test some pages on tablet PCs. It occurred to me to wonder whether I can access my Windows localhost from an Android tablet. Is that possible? Both devices are on the same Wi-Fi connection at home.
I have read a lot of questions and answers regarding this issue on Stack Overflow and elsewhere, but none of them gave me a concrete solution.
I have a Samsung Tab 2 10.1 with Android 4.0.4.
I appreciate any help, thanks.
|
how to access my 127.0.0.1:8000 from android tablet
| 1.2 | 0 | 0 | 33,438 |
17,116,718 |
2013-06-14T20:24:00.000
| 6 | 0 | 0 | 1 |
android,python,django,web,localhost
| 17,116,785 | 9 | false | 1 | 0 |
127.0.0.1 is a loopback address that means, roughly, "this device"; your PC and your android tablet are separate devices, so each of them has its own 127.0.0.1. In other words, if you try to go to 127.0.0.1 on your Android tab, it's trying to connect to a webserver on the Android device, which is not what you want.
However, you should be able to connect over the Wi-Fi. On your Windows box, open a command prompt and execute ipconfig. Somewhere in the output should be your Windows box's address, probably 192.168.1.100 or something similar. Your tablet should be able to see the Django server at that address.
| 6 | 31 | 0 |
I am developing a web page in Django (on my PC with Windows 7) and now I need to test some pages on tablet PCs. It occurred to me to wonder whether I can access my Windows localhost from an Android tablet. Is that possible? Both devices are on the same Wi-Fi connection at home.
I have read a lot of questions and answers regarding this issue on Stack Overflow and elsewhere, but none of them gave me a concrete solution.
I have a Samsung Tab 2 10.1 with Android 4.0.4.
I appreciate any help, thanks.
|
how to access my 127.0.0.1:8000 from android tablet
| 1 | 0 | 0 | 33,438 |
17,116,718 |
2013-06-14T20:24:00.000
| 23 | 0 | 0 | 1 |
android,python,django,web,localhost
| 48,594,665 | 9 | false | 1 | 0 |
Though this thread was active quite a long time ago. This is what worked for me on windows 10. Posting it in details. Might be helpful for the newbies like me.
Add ALLOWED_HOSTS = ['*'] in django settings.py file
run the django server with python manage.py runserver 0.0.0.0:YOUR_PORT. I used 9595 as my port.
Make the firewall allow access on that port:
Navigate to control panel -> system and Security -> Windows Defender Firewall
Open Advanced Settings, select Inbound Rules then right click on it and then select New Rule
Select Port, hit next, input the port you used (in my case 9595), hit next, select allow the connections
hit next again then give it a name and hit next for the last time.
Now find the ip address of your PC.
Open a Command Prompt as administrator and run the ipconfig command.
You may find more than one ip addresses. As I'm connected through wifi I took the one under Wireless LAN adapter WiFi. In my case it was 192.168.0.100
Note that this ip may change when you reconnect to the network. So you need to check it again then.
Now from another device (pc, mobile, tablet etc.) connected to the same network go to ip_address:YOUR_PORT (in my case 192.168.0.100:9595)
Hopefully you'll be good to go !
| 6 | 31 | 0 |
I am developing a web page in Django (on my PC with Windows 7) and now I need to test some pages on tablet PCs. It occurred to me to wonder whether I can access my Windows localhost from an Android tablet. Is that possible? Both devices are on the same Wi-Fi connection at home.
I have read a lot of questions and answers regarding this issue on Stack Overflow and elsewhere, but none of them gave me a concrete solution.
I have a Samsung Tab 2 10.1 with Android 4.0.4.
I appreciate any help, thanks.
|
how to access my 127.0.0.1:8000 from android tablet
| 1 | 0 | 0 | 33,438 |
17,118,747 |
2013-06-14T23:35:00.000
| 3 | 0 | 1 | 1 |
python,process
| 17,118,788 | 1 | true | 0 | 0 |
Python has good support for ZeroMQ, which is much easier and more robust than using raw sockets.
The ZeroMQ site treats Python as one of its primary languages and offers copious Python examples in its documentation. Indeed, the example in "Learn the Basics" is written in Python.
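A minimal REQ/REP sketch of the described setup (the file names, port and query protocol are made up; on Unix an "ipc://" address also works for purely local traffic):

```python
# A.py -- loads the big file once, then answers queries forever
import zmq

context = zmq.Context()
sock = context.socket(zmq.REP)
sock.bind("tcp://127.0.0.1:5555")

big_data = open("huge_file.txt").read()   # read once, keep in memory

while True:
    query = sock.recv_string()
    sock.send_string("found" if query in big_data else "missing")
```

```python
# B.py -- a client that queries A
import zmq

context = zmq.Context()
sock = context.socket(zmq.REQ)
sock.connect("tcp://127.0.0.1:5555")

sock.send_string("some search term")
print(sock.recv_string())
```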
| 1 | 2 | 0 |
I'm having a problem creating a inter-process communication for my python application. I have two python scripts at hand, let's say A and B. A is used to open a huge file, keep it in memory and do some processing that Mysql can't do, and B is a process used to query A very often.
Since the file A needs to read is really large, I hope to read it once and have it hang there waiting for my Bs' to query.
What I do now is use cherrypy to build an http server. However, I feel it's kind of awkward to do so, since what I'm trying to do is entirely local. So I'm wondering: is there a more natural way to achieve this goal?
I don't know much about TCP/sockets etc. If possible, toy examples would be appreciated (please include the part that reads the file).
|
Inter-process communication for python
| 1.2 | 0 | 0 | 1,821 |
17,119,388 |
2013-06-15T01:29:00.000
| 0 | 0 | 0 | 0 |
javascript,python,tinymce,pyqt,qwebkit
| 17,288,641 | 1 | false | 1 | 1 |
evaluateJavaScript does make JavaScript function calls, and can even embed a whole JavaScript file. The following details the attempts to solve the problem:
The approach of first reading the tinyMCE.js file and then passing it to an evaluateJavaScript call embeds the JavaScript somewhere, but it can't be sniffed out in a WebKit console. When loading files this way, any dependencies, such as the ones TinyMCE requires, are not loaded. I think it's because such JavaScript calls are "attached" to the WebKit view but not embedded in the frame's DOM itself.
The second approach consists of creating a WebKit page and loading an HTML file. The HTML file itself embeds the JavaScript, so the component works like a "browser". In TinyMCE's configuration, toolbars and unnecessary parts were hidden. TinyMCE version 3 worked well with PyQt4. When the 4th version was embedded in an HTML page, however, textareas were not being converted to TinyMCE editors. The console shows 'undefined' error messages, leading to the assumption that TinyMCE 4 uses different JavaScript syntax and a different compiler.
And so ends my quest to write a stand-alone webkit editor. :)
| 1 | 0 | 0 |
as the question states, I wish to embed a tinymce editor in a PyQT webkit component.
As far as I understand, evaluateJavascript allows for js functions to be called.
However, when I try loading tinymce.min.js, the editor does not display anything at all. As suspected, when evaluating a javascript that 'loads' other javascript files, they don't actually get loaded.
At this point, I feel lost. I will try to manually load 'plugins' that will be specified in tinymce's init function and will update this.
Till that time, any help would be really appreciated.
|
Embedding a TinyMCE editor in PyQT QWebkit
| 0 | 0 | 0 | 239 |
17,122,268 |
2013-06-15T09:46:00.000
| 0 | 0 | 1 | 0 |
python,text,curses
| 17,122,309 | 2 | false | 0 | 0 |
You create static text by putting it in a separate window: create one window for the static text and another for the scrolling dynamic text. Text is colored by passing a color attribute -- a curses.color_pair() built from the COLOR_* constants with curses.init_pair() -- as the text attribute. A minimal sketch follows.
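A minimal sketch, assuming a 20-column sidebar on the left for the static red text and a scrolling window for everything else:

```python
import curses

def main(stdscr):
    curses.start_color()
    curses.init_pair(1, curses.COLOR_RED, curses.COLOR_BLACK)

    # A window pinned to the left edge holding the static text.
    sidebar = curses.newwin(curses.LINES, 20, 0, 0)
    sidebar.addstr(0, 0, "INVENTORY", curses.color_pair(1))
    sidebar.refresh()

    # A second window for the scrolling game text.
    log = curses.newwin(curses.LINES, curses.COLS - 20, 0, 20)
    log.scrollok(True)
    for i in range(100):          # the sidebar stays put while this scrolls
        log.addstr("line %d\n" % i)
        log.refresh()
    log.getch()

curses.wrapper(main)
```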
| 1 | 2 | 0 |
I am creating a text adventure. How could I add static text to it?
What I mean is some text that always stays on the left side of the window, even while all the other text scrolls. Also, how could I make this text red?
|
(python) How to create static text in curses
| 0 | 0 | 0 | 2,751 |
17,122,479 |
2013-06-15T10:10:00.000
| 0 | 0 | 0 | 0 |
python,printing
| 17,122,495 | 2 | false | 0 | 0 |
Wherever you call your printing routine, add a small unique header to the printout (module name + line number) to identify the place the print was initiated from.
| 2 | 1 | 0 |
This application I am working on (GUI based) has well over a dozen modules. On running the application and using it, there is a particular action (clicking a label) upon which I get tons of empty prints on stdout, because of which I suspect the application's performance is suffering. Now the problem is that I am unable to find out exactly which print statement is causing this.
What I have tried so far:
multi-buffer searches
commented the print statements which I know will be executed and left out the one which I am almost 100% sure will never be executed. period.
What I haven't tried:
pdb(time consuming)
Any easy hack (not too ugly) to corner this print statement?
|
how to find out exactly from where print is being executed
| 0 | 0 | 0 | 79 |
17,122,479 |
2013-06-15T10:10:00.000
| 6 | 0 | 0 | 0 |
python,printing
| 17,122,501 | 2 | true | 0 | 0 |
Replace sys.stdout with a file-like that will spew out a traceback when its write() method is called.
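A sketch of such a wrapper; it reports the call stack only for empty writes, which matches the symptom described in the question:

```python
import sys
import traceback

class TattletaleStdout(object):
    """File-like stand-in for stdout that reports who is writing to it."""

    def __init__(self, real):
        self._real = real

    def write(self, text):
        if text.strip() == '':                 # only trap the empty prints
            traceback.print_stack(file=sys.stderr)
        self._real.write(text)

    def flush(self):
        self._real.flush()

sys.stdout = TattletaleStdout(sys.stdout)
```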
| 2 | 1 | 0 |
This application I am working on (GUI based) has well over a dozen modules. On running the application and using it, there is a particular action (clicking a label) upon which I get tons of empty prints on stdout, because of which I suspect the application's performance is suffering. Now the problem is that I am unable to find out exactly which print statement is causing this.
What I have tried so far:
multi-buffer searches
commented the print statements which I know will be executed and left out the one which I am almost 100% sure will never be executed. period.
What I haven't tried:
pdb(time consuming)
Any easy hack (not too ugly) to corner this print statement?
|
how to find out exactly from where print is being executed
| 1.2 | 0 | 0 | 79 |
17,123,649 |
2013-06-15T12:29:00.000
| 0 | 0 | 1 | 0 |
python,package,sublimetext
| 21,785,292 | 1 | false | 0 | 0 |
Remove the existing /User/Python.sublime-package and try manually unzipping Python.sublime-package (change its extension to .zip) into the Packages folder, /Packages/Python. The Packages folder can be revealed via the Preferences -> Browse Packages... command.
| 1 | 0 | 0 |
I tried adding some extra features to my python package in Sublime Text 2 and it ended up showing: Error trying to parse settings . Unexpected trailing characters in ... {my path} \User\Python.sublime-package:1:11
I searched the internet for about 3 hours and my frustration has grown off the charts.
My idea was to find the default code for the Python package, but it didn't work out. I also uninstalled and reinstalled Python and Sublime about 20 times, and it is still not working.
Please help me
|
Sublime Text 2 Python.sublime-package
| 0 | 0 | 0 | 229 |
17,124,966 |
2013-06-15T15:02:00.000
| 0 | 0 | 1 | 0 |
python,ida
| 17,152,569 | 1 | false | 0 | 0 |
Make sure that your Python interpreter is 32-bit and not a more recently compiled 64-bit version, even if you are running a 64-bit OS. If you need to compile IDAPython yourself you will need the development kit for your version of IDA. Back in the 5.5 days you had to actually build your own, but IDAPython is now included as part of the 6.x versions.
| 1 | 1 | 0 |
I am trying to install IDA Pro 5.5 on a Windows 7 machine. I have installed Python 2.5. When starting IDA, I get an error message that init.py failed. Looking inside this file, I found that it imports the _idaapi module, but I cannot find this module anywhere in the IDA installation directory. There is a Python module named idaapi.py which also imports _idaapi. I also tried downloading IDAPython separately, but it is still not working. Can anybody suggest something to get rid of this error and make IDA work properly with IDAPython installed?
Thanks in advance
-Sanjay
|
IDAPython error no module _idaapi
| 0 | 0 | 0 | 4,548 |
17,125,247 |
2013-06-15T15:35:00.000
| 4 | 0 | 0 | 0 |
python,python-2.7,machine-learning,svm,scikit-learn
| 17,133,162 | 1 | false | 0 | 0 |
First, for text data you don't need a non linear kernel, so you should use an efficient linear SVM solver such as LinearSVC or PassiveAggressiveClassifier instead.
The SMO algorithm of SVC / libsvm is not scalable: the complexity is more than quadratic, which in practice often makes it useless for datasets larger than 5000 samples.
Also, to deal with the class imbalance you might want to try subsampling classes 2 and 3 so that each has at most twice the number of samples of class 1.
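A sketch of the suggested swap (class_weight='auto' was the spelling in scikit-learn at the time -- newer releases call it 'balanced'; X_train/y_train are the tf-idf features and labels, assumed to already exist):

```python
from sklearn.svm import LinearSVC

clf = LinearSVC(class_weight='auto')   # 'balanced' on scikit-learn >= 0.17
clf.fit(X_train, y_train)              # linear solver: scales roughly linearly
print(clf.score(X_test, y_test))
```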
| 1 | 2 | 1 |
I have text data labelled into 3 classes: class 1 has 1% of the data, class 2 has 69% and class 3 has 30%. The total data size is 10000. I am using 10-fold cross validation. For classification, the SVM of the scikit-learn Python library is used with class_weight='auto'. But the code for 1 step of the 10-fold CV has been running for 2 hrs and has not finished, which implies that the code will take at least 20 hours to complete. Without class_weight='auto' it finishes in 10-15 min, but then no data is labelled as class 1 in the output. Is there some way to solve this issue?
|
SVM Multiclass Classification using Scikit Learn - Code not completing
| 0.664037 | 0 | 0 | 2,914 |
17,125,248 |
2013-06-15T15:35:00.000
| 1 | 0 | 0 | 0 |
python,string,pandas,series
| 52,143,806 | 2 | false | 0 | 0 |
Get the Series head(), then access the first value:
df1['tweet'].head(1).item()
or: use the Series tolist() method, then slice the 0th element:
>>> df.height.tolist()
[94, 170]
>>> df.height.tolist()[0]
94
(Note that Python indexing is 0-based, but head(n) takes a row count, so head(1) means the first row)
| 1 | 2 | 1 |
I think I have a relatively simple question but am not able to locate an appropriate answer to solve the coding problem.
I have a pandas column of string:
df1['tweet'].head(1)
0 besides food,
Name: tweet
I need to extract the text and push it into a Python str object, of this format:
test_messages = ["line1",
"line2",
"etc"]
The goal is to classify a test set of tweets, and I therefore believe the input to X_test = tfidf.transform(test_messages) must be a str object.
|
Get first element of Pandas Series of string
| 0.099668 | 0 | 0 | 8,156 |
17,127,306 |
2013-06-15T19:47:00.000
| 1 | 0 | 0 | 0 |
python,database,pysqlite
| 17,382,716 | 1 | true | 0 | 0 |
Well, every table in SQLite has a rowid. Select one and delete it?
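A pop()-style helper along those lines (the table name must come from trusted code, since identifiers can't be bound as SQL parameters):

```python
import sqlite3

def pop_row(conn, table):
    """Remove and return an arbitrary row from `table`, or None if empty."""
    cur = conn.execute("SELECT rowid, * FROM %s LIMIT 1" % table)
    row = cur.fetchone()
    if row is None:
        return None
    conn.execute("DELETE FROM %s WHERE rowid = ?" % table, (row[0],))
    conn.commit()
    return row[1:]   # drop the rowid, keep the row's own columns
```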
| 1 | 0 | 0 |
Need to get one row from a table, and delete the same row.
It does not matter which row it is. The function should be generic, so the column names are unknown, and there are no identifiers. (Rows as a whole can be assumed to be unique.)
The resulting function would be like a pop() function for a stack, except that the order of elements does not matter.
Possible solutions:
Delete into a temporary table.
(Can this be done in pysqlite?)
Get * with LIMIT 1, then DELETE with LIMIT 1.
(Is this safe if there is just one user?)
Get one row, then delete with a WHERE clause that compares the whole row.
(Can this be done in pysqlite?)
Suggestions?
|
row_pop() function in pysqlite?
| 1.2 | 1 | 0 | 64 |
17,128,130 |
2013-06-15T21:36:00.000
| 0 | 0 | 0 | 1 |
python,google-app-engine
| 35,530,496 | 2 | false | 1 | 0 |
You need to add the requests/requests sub-folder to your project. From your script's location (.), you should see a file at ./requests/__init__.py.
This applies to all modules you include for Google App Engine. If it doesn't have a __init__.py directly under that location, it will not work.
You do not need to add the module to app.yaml.
| 1 | 4 | 0 |
I'm trying to import the requests module for my app which I want to view locally on Google App Engine. I am getting a log console error telling me that "no such module exists".
I've installed it in the command line (using pip) and even tried to install it in my project directory. When I do that the shell tells me:
"Requirement already satisfied (use --upgrade to upgrade): requests in /Library/Python/2.7/site-packages".
App Engine is telling me that the module doesn't exist and the shell says it's already installed it.
I don't know if this is a path problem. If so, the only App Engine related application I can find in my mac is the launcher?
|
Python Request Module - Google App Engine
| 0 | 0 | 0 | 4,323 |
17,128,740 |
2013-06-15T23:07:00.000
| 0 | 0 | 0 | 0 |
python,django,django-south
| 17,128,801 | 2 | false | 1 | 0 |
I usually make a temporary modification to the migration script that fails. Comment out or modify the parts that are not needed, run the migrations, then restore everything to the way it was before.
It's not ideal, and it involves some duplication of work - you have to do the same steps both on dev machine and on the server, but it lets you preserve South support and work around the failing migration.
| 1 | 2 | 0 |
I am using a 3rd party app inside my Django application. The older versions of it had a dependency on the Django auth model, but the newer version supports the custom auth model of Django 1.5.
The problem I am having is that when I install the app and run its migrations, it breaks on migration 002 because it references a table that the final version of the app doesn't need, and which I therefore don't have.
If I turn off South and just do a syncdb everything works fine. But then I will have to do fake migrations for all my other apps. Is there an easy way to have South either skip these errors and keep proceeding with the migrations, or just use models.py to create the schema so that I can then do a fake migration for that one app?
Thanks for your help :)
|
South skip broken migrations
| 0 | 0 | 0 | 1,534 |
17,128,878 |
2013-06-15T23:28:00.000
| 1 | 1 | 0 | 1 |
python,vim,plugins
| 17,131,966 | 4 | false | 0 | 0 |
All folders in the rtp (runtimepath) option need to have the same folder structure as your $VIMRUNTIME ($VIMRUNTIME is usually /usr/share/vim/vim{version}). So it should have the same subdirectory names e.g. autoload, doc, plugin (whichever you need, but having the same names is key). The plugins should be in their corresponding subdirectory.
Let's say you have /path/to/dir (in your case it's ~/.vim) is in your rtp, vim will
look for global plugins in /path/to/dir/plugin
look for file-type plugins in /path/to/dir/ftplugin
look for syntax files in /path/to/dir/syntax
look for help files in /path/to/dir/doc
and so on...
vim only looks for a couple of recognized subdirectories† in /path/to/dir. If you have some unrecognized subdirectory name in there (like /path/to/dir/plugins), vim won't see it.
† "recognized" here means that a subdirectory of the same name can be found in /usr/share/vim/vim{version} or wherever you have vim installed.
| 1 | 3 | 0 |
I was trying to install autoclose.vim to Vim. I noticed I didn't have a ~/.vim/plugin folder, so I accidentally made a ~/.vim/plugins folder (notice the extra 's' in plugins). I then added au FileType python set rtp += ~/.vim/plugins to my .vimrc, because from what I've read, that will allow me to automatically source the scripts in that folder.
The plugin didn't load for me until I realized my mistake and took out the extra 's' from 'plugins'. I'm confused because this new path isn't even defined in my runtime path. I'm basically wondering why the plugin loaded when I had it in ~/.vim/plugin but not in ~/.vim/plugins?
|
Vim plugins don't always load?
| 0.049958 | 0 | 0 | 938 |
17,132,334 |
2013-06-16T10:30:00.000
| 0 | 0 | 0 | 0 |
python,mysql,database,django
| 17,132,550 | 1 | false | 1 | 0 |
I have solved this issue by wrapping my view in the @transaction.autocommit decorator and executing transaction.commit() immediately before checking in the database if an answer with a particular client_id exists. This accomplishes the "refresh" I was aiming for.
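A sketch of that arrangement under the legacy (pre-1.6) Django transaction API; the Answer model and client_id field are taken from the question, and the helper functions are hypothetical placeholders:

```python
from django.db import transaction

@transaction.autocommit            # legacy decorator, removed in Django 1.6
def save_answer(request):
    # Ending the current transaction forces the next query to see a fresh
    # snapshot instead of the stale REPEATABLE READ view described above.
    transaction.commit()
    client_id = request.POST['client_id']
    if Answer.objects.filter(client_id=client_id).exists():
        update_existing_answer(client_id, request.POST)   # hypothetical helper
    else:
        create_new_answer(client_id, request.POST)        # hypothetical helper
```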
| 1 | 0 | 0 |
I have a Django app with a MySQL database which allows answering of questions on an HTML page. The answers get sent to the server via AJAX calls. These calls are initiated by various JavaScript events and can often be fired multiple times for one answer. When this happens, multiple save requests for one answer get sent to the server.
In order to avoid duplicate answers, each answer has a client-side ID generated the first time it gets saved - client_id. Before creating a new answer server-side, the Django app first checks the DB to see if an answer with such a client_id exists. If one does, the second save requests updates the answer instead of creating a new one.
In Chrome, when a text input field is focused, and the user clicks outside of the Chrome window, two save requests get fired one after the other. The server receives them both. Let's say that for the sake of the example the client_id is 71.
The first request checks the DB and sees that no answers with a client_id 71 exist. It creates a new answer and saves in the the DB. I am debugging with breakpoints and at this time, I see in my external MySQL database viewer that indeed the answer is saved. In my IDE, when I execute Answer.objects.filter(client_id=71) I get the answer as well. I let the debugger continue.
Immediately my second breakpoint fires for the second AJAX save answer request. Now a curious thing happens. In my IDE, when I execute Answer.objects.filter(client_id=71) I see no answers! My external tool confirms that the answer is there. So my code creates a new answer and saves it. Now if in my IDE I execute Answer.objects.filter(client_id=71) I see two answers with that client_id.
I am guessing that the DB connection or MySQL uses some kind of time-based method of keeping views constant, but it is causing me problems here. I would like a live insight into the state of the DB.
I am not using any transaction management, so Django should be doing auto_commit.
How can I instruct the DB connection to "refresh" or "reset" itself so that it takes into consideration the data that is actually in the DB?
|
Simultanous AJAX requests and MySQL database data visibility in Django
| 0 | 0 | 0 | 323 |
17,136,085 |
2013-06-16T17:51:00.000
| 0 | 0 | 0 | 0 |
python,string,types,type-conversion
| 17,136,094 | 4 | false | 0 | 0 |
I'm not quite sure what your question is, but print automatically calls str on all of its arguments... So if you want the same output as print to be put into your file, then myfile.write(str(whatever)) will put the same text in myfile that print(x) would have (minus the trailing newline that print adds).
| 1 | 1 | 0 |
I am using python and XMLBuilder, a module I downloaded off the internet (pypi). It returns an object, that works like a string (I can do print(x)) but when I use file.write(x) it crashes and throws an error in the XMLBuilder module.
I am just wondering how I can convert the object it returns into a string?
I have confirmed that I am writing to the file correctly.
I have already tried, for example, x = y, although, as I thought, that just creates another reference, and also x = x + " ", but I still get an error. It also returns a string-like object containing "\n".
Any help on the matter would be greatly appreciated.
|
Converting a string-like object into a string in python
| 0 | 0 | 1 | 165 |
17,137,050 |
2013-06-16T19:49:00.000
| 3 | 0 | 0 | 0 |
python,django,model-view-controller,event-handling,tkinter
| 17,137,094 | 2 | false | 1 | 1 |
Tkinter is a GUI library (for desktop applications) and Django is for web development. Both are completely different and in fact it is useless to compare them even.
| 1 | 0 | 0 |
I have a school project where my team needs to build a board game. We want to use Python, based on all the good things we have heard about it. I have been researching MVC frameworks and came across Django (it's part of my installation on PyDev). I have a Mac, FYI.
I have also been looking up Tkinter but can't seem to understand what the difference is between Django and Tkinter. Why would you use one over the other? I understand that Django is for web development, and I think I understand that Tkinter is for building GUIs, right?
The board game will have multiple players who should all get updated when one of the players makes a move.
Can any of you point me to where I should be looking online based on what I am trying to do? I am not looking for code, but just the right website with some good documentation and tutorials that will help me out. Thanks again, Mash
|
Trying to understand Python and all its moving parts - What is the difference between Tkinter and Django
| 0.291313 | 0 | 0 | 2,632 |
17,138,389 |
2013-06-16T22:38:00.000
| 1 | 1 | 0 | 0 |
python,.net,ios
| 17,138,453 | 4 | false | 0 | 1 |
I learned to write iOs apps from the CS 193P iPhone Application Development course on iTunes U. It's fantastic and I highly recommend it if you are sure iOs is what you want to do.
| 2 | 0 | 0 |
I really would like to start getting into Objective C coding, specifically so I can write applications for iOS.
My coding background is that I have written C# .NET GUI Windows apps and PHP web scripts for years; I've also become a very good Python coder in the past year. I have written hundreds of useful command-line Python scripts, and also a few GUI apps using wxPython successfully. I also wrote VB6 GUI apps way back in the day, and of course, I cut my teeth on QuickBASIC in DOS. ;-)
I understand OOP concepts: I understand classes, methods, properties and the like. I use OOP a lot in Python, and obviously use it extensively in C#.
I haven't actually taken the time to really get good at C or C++, however I am able to write simple "test" programs to accomplish small tasks. The problem is that I understand the syntax just fine, but the APIs can be very different depending on platform, and accomplishing the same thing in C on Linux at the command line is totally different than accomplishing it in Windows in a GUI.
I've looked over a few books out there for iOS coding but they seem to assume little to no programming knowledge and quickly bore me, and I can't easily find the information I really need buried among all of the "here's what an object is" or "this is called a class and a method" stuff...
I also tried the Stanford lectures on iTunes U, but I found myself struggling with the MVC concepts and the idea of setting up different files for "implementation" and "header" and all of that...
Is there any resources that you guys can think of that would be good for me to get started with iOS?
It's also worth noting I have dabbled with PyObjC a little on Mac and therefore do understand a LITTLE about the NS foundation classes and such, and I've also looked at Apple's reference documentation and I'm sure that once I get the basics down I could put good use to it, but I still don't know how to actually get a functional iOS app that does something useful going.
|
Best intro to iOS for Python/PHP/C# Coder
| 0.049958 | 0 | 0 | 166 |
17,138,389 |
2013-06-16T22:38:00.000
| 1 | 1 | 0 | 0 |
python,.net,ios
| 17,140,686 | 4 | false | 0 | 1 |
I have gotten more from Erica Sadun's books than any of the others, personally. iOS apps use a lot of animation and graphics, by necessity, and her code examples are clean and concise. They aren't beginner's books but you sound as though you're not a beginning coder. They hit on a lot of topics it is hard to find much on.
If you're willing to work through the sample programs, I found iPad iOS 6 Development Essentials (Neil Smyth) to be comprehensive. However, it tends to focus on Xcode's visual IDE, which I think is lousy and chose not to use at all; if you plan to use it, then that would be a good resource, IMO. Also, I got a book that covered Objective-C only (Aaron Hillegass) which I thought was good. The iOS book from the same author was not good for me, because it depended on you working prior chapters' examples to proceed to later chapters, which for me was a waste of time, so I bailed out of it quickly. I also got Pro Core Data (Privat and Warner) which I found to be of limited (actually, little) value for the same reason as the Hillegass iOS book -- the examples are too big and not to the point.
And, of course, Google.
| 2 | 0 | 0 |
I really would like to start getting into Objective C coding, specifically so I can write applications for iOS.
My coding background is that I have written C# .NET GUI Windows apps and PHP web scripts for years; I've also become a very good Python coder in the past year. I have written hundreds of useful command-line Python scripts, and also a few GUI apps using wxPython successfully. I also wrote VB6 GUI apps way back in the day, and of course, I cut my teeth on QuickBASIC in DOS. ;-)
I understand OOP concepts: I understand classes, methods, properties and the like. I use OOP a lot in Python, and obviously use it extensively in C#.
I haven't actually taken the time to really get good at C or C++, however I am able to write simple "test" programs to accomplish small tasks. The problem is that I understand the syntax just fine, but the APIs can be very different depending on platform, and accomplishing the same thing in C on Linux at the command line is totally different than accomplishing it in Windows in a GUI.
I've looked over a few books out there for iOS coding but they seem to assume little to no programming knowledge and quickly bore me, and I can't easily find the information I really need buried among all of the "here's what an object is" or "this is called a class and a method" stuff...
I also tried the Stanford lectures on iTunes U, but I found myself struggling with the MVC concepts and the idea of setting up different files for "implementation" and "header" and all of that...
Is there any resources that you guys can think of that would be good for me to get started with iOS?
It's also worth noting I have dabbled with PyObjC a little on Mac and therefore do understand a LITTLE about the NS foundation classes and such, and I've also looked at Apple's reference documentation and I'm sure that once I get the basics down I could put good use to it, but I still don't know how to actually get a functional iOS app that does something useful going.
|
Best intro to iOS for Python/PHP/C# Coder
| 0.049958 | 0 | 0 | 166 |
17,138,569 |
2013-06-16T23:09:00.000
| 1 | 1 | 0 | 1 |
python,linux,benchmarking,inotify
| 17,139,897 | 1 | false | 0 | 0 |
I would try and remove as many other processes as possible in order to get a repeatable benchmark. For example, I would set up a separate, dedicated server with an NFS mount to the directories. This server would only run inotify and the Python script. For simple server measurements, I would use top or ps to monitor CPU and memory.
The real test is how quickly your script "drains" the directories, which depends entirely on your process. You could profile the script and see where it's spending the time.
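For scripted numbers rather than eyeballing top, the third-party psutil library can sample the daemon directly (the PID and log path here are assumptions about how you launch the watcher):

```python
import time
import psutil  # pip install psutil

daemon = psutil.Process(12345)        # PID of the inotify watcher daemon
with open('bench.log', 'w') as log:
    for _ in range(600):              # one sample per second for 10 minutes
        cpu = daemon.cpu_percent(interval=1.0)   # blocks for the interval
        rss = daemon.memory_info().rss // (1024 * 1024)
        log.write('%s cpu=%.1f%% rss=%dMiB\n' % (time.time(), cpu, rss))
```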
| 1 | 0 | 0 |
I'm looking at using inotify to watch about 200,000 directories for new files. On creation, the script watching will process the file and then it will be removed. Because it is part of a more compex system with many processes, I want to benchmark this and get system performance statistics on cpu, memory, disk, etc while the tests are run.
I'm planning on running the inotify script as a daemon and having a second script generating test files in several of the directories (randomly selected before the test).
I'm after suggestions for the best way to benchmark the performance of something like this, especially the impact it has on the Linux server it's running on.
|
Benchmarking System performance of Python System
| 0.197375 | 0 | 0 | 780 |
17,139,217 |
2013-06-17T01:07:00.000
| 2 | 0 | 0 | 0 |
python,yum
| 17,140,290 | 1 | true | 0 | 0 |
You want to use the postresolve_hook(), and walk the transaction list. To see a fairly simple copy and paste example look at the changelog plugin (displays the rpm changelog for everything to be installed/upgraded in the transaction).
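For orientation, the skeleton of such a plugin looks roughly like this (the file name and message text are made up; see the changelog plugin for a complete example):

```python
# /usr/lib/yum-plugins/listpkgs.py
from yum.plugins import TYPE_CORE

requires_api_version = '2.3'
plugin_type = (TYPE_CORE,)

def postresolve_hook(conduit):
    # Walk the resolved transaction: every member is a package that will be
    # downloaded/installed/updated once the user confirms the transaction.
    for member in conduit.getTsInfo().getMembers():
        conduit.info(2, 'transaction member: %s' % member.po)
```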
| 1 | 1 | 0 |
I am writing my first Yum plugin, which I hope to use to display some info about the packages to be downloaded on an update or an install. I have successfully gotten the plugin to run and have it all set up properly. My problem is getting a list of packages that will be downloaded before the user accepts or cancels the transaction.
There is a method available in a certain conduit, the one provided to predownload_hook(conduit) and postdownload_hook(conduit), that can be called with conduit.getDownloadPackages() to do exactly what I want. However, both of these hooks are called after the user accepts or declines the transaction. According to the yum Python API docs, getDownloadPackages() is not available anywhere else.
I have asked about this in #yum on Freenode a couple of times but haven't gotten an answer. A solution or any help is greatly appreciated. Have a good one.
|
How can I use the yum Python module to get a list of packages that will be downloaded before accepting the transaction?
| 1.2 | 0 | 1 | 498 |
17,139,485 |
2013-06-17T02:00:00.000
| 1 | 0 | 1 | 0 |
python,pycharm
| 58,257,720 | 5 | false | 1 | 0 |
Well, I wish I had a better answer, but what helped me was simply the following:
switch the interpreter from a remote one to a system one
wait until the Pycharm indexing is done
switch the interpreter back to the initial/desired one
| 2 | 30 | 0 |
I'm using PyCharm (v 2.7.2) to develop a Django app, but I can't get it to check PEP8 style violations.
I have enabled "PEP8 coding style violation" in the "Inspctions" section of the settings, but PyCharm doesn't highlight the style violations.
Is there a way to fix this?
|
How to get PyCharm to check PEP8 code style?
| 0.039979 | 0 | 0 | 58,662 |
17,139,485 |
2013-06-17T02:00:00.000
| 13 | 0 | 1 | 0 |
python,pycharm
| 34,655,470 | 5 | false | 1 | 0 |
Mine wasn't showing up due to the color scheme. By default it's marked as "weak warning", so you might have to edit the appearance to make it visible. Editor > Colors & Fonts > General > Errors and Warnings.
| 2 | 30 | 0 |
I'm using PyCharm (v 2.7.2) to develop a Django app, but I can't get it to check PEP8 style violations.
I have enabled "PEP8 coding style violation" in the "Inspctions" section of the settings, but PyCharm doesn't highlight the style violations.
Is there a way to fix this?
|
How to get PyCharm to check PEP8 code style?
| 1 | 0 | 0 | 58,662 |
17,140,080 |
2013-06-17T03:36:00.000
| 0 | 0 | 0 | 0 |
python-2.7,pandas,xls
| 17,141,432 | 1 | false | 0 | 0 |
The problem you are facing is that your dataframe contains a character that cannot be decoded to unicode (byte 0xe2 typically begins a UTF-8 sequence such as a curly quote or dash). It was probably working before because you perhaps edited the source data somehow in Excel/Libre. You just need to find this character and either get rid of it or decode the data with the appropriate encoding.
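One Python 2 workaround along those lines: decode every string cell, dropping bytes that don't fit, before exporting (the choice of UTF-8 is an assumption):

```python
# -*- coding: utf-8 -*-
# csv2 is the dataframe from the question
csv2 = csv2.applymap(
    lambda v: v.decode('utf-8', 'ignore') if isinstance(v, str) else v)
csv2.to_excel("C:\\Users\\shruthi.sundaresan\\Desktop\\csat1.xls",
              sheet_name='SAC_STORE_DATA', index=False)
```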
| 1 | 0 | 1 |
I am trying to export a dataframe to an .xls file using the to_excel() method, but during execution it throws an error: "UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 892: ordinal not in range(128)". Just a few moments ago it was working fine.
The code I used is:
csv2.to_excel("C:\\Users\\shruthi.sundaresan\\Desktop\\csat1.xls",sheet_name='SAC_STORE_DATA',index=False)
csv2 is the dataframe. Why does this kind of error happen, and how can I avoid it in the future?
|
Unable to export pandas dataframe into excel file
| 0 | 0 | 0 | 474 |
17,140,280 |
2013-06-17T04:08:00.000
| 1 | 0 | 0 | 0 |
python,compression,zip,deflate
| 24,719,120 | 2 | false | 0 | 0 |
LZ77 is about referencing strings back in the decompressing buffer by their lengths and distances from the current position. But it is left to you how do you encode these back-references. Many implementations of LZ77 do it in different ways.
But you are right that there must be some way to distinguish "literals" (uncompressed pieces of data meant to be copied "as is" from the input to the output) from "back-references" (which are copied from already uncompressed portion).
One way to do it is reserving some characters as "special" (so called "escape sequences"). You can do it the way you did it, that is, by using < to mark the start of a back-reference. But then you also need a way to output < if it is a literal. You can do it, for example, by establishing that when after < there's another <, then it means a literal, and you just output one <. Or, you can establish that if after < there's immediately >, with nothing in between, then that's not a back-reference, so you just output <.
It also wouldn't be the most efficient way to encode those back-references, because it uses several bytes to encode a back-reference, so it will become efficient only for referencing strings longer than those several bytes. For shorter back-references it will inflate the data instead of compressing them, unless you establish that matches shorter than several bytes are being left as is, instead of generating back-references. But again, this means lower compression gains.
If you compress only plain old ASCII texts, you can employ a better encoding scheme, because ASCII uses just 7 out of 8 bits in a byte. So you can use the highest bit to signal a back-reference, and then use the remaining 7 bits as length, and the very next byte (or two) as back-reference's distance. This way you can always tell for sure whether the next byte is a literal ASCII character or a back-reference, by checking its highest bit. If it is 0, just output the character as is. If it is 1, use the following 7 bits as length, and read up the next 2 bytes to use it as distance. This way every back-reference takes 3 bytes, so you can efficiently compress text files with repeating sequences of more than 3 characters long.
But there's a still better way to do this, which gives even more compression: you can replace your characters with bit codes of variable lengths, crafted in such a way that the characters appearing more often would have shortest codes, and those which are rare would have longer codes. To achieve that, these codes have to be so-called "prefix codes", so that no code would be a prefix of some other code. When your codes have this property, you can always distinguish them by reading these bits in sequence until you decode some of them. Then you can be sure that you won't get any other valid item by reading more bits. The next bit always starts another new sequence. To produce such codes, you need to use Huffman trees. You can then join all your bytes and different lengths of references into one such tree and generate distinct bit codes for them, depending on their frequency. When you try to decode them, you just read the bits until you reach the code of some of these elements, and then you know for sure whether it is a code of some literal character or a code for back-reference's length. In the second case, you then read some additional bits for the distance of the back-reference (also encoded with a prefix code). This is what DEFLATE compression scheme does. But this is whole another story, and you will find the details in the RFC supplied by @MarkAdler.
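To make the flag-bit scheme above concrete, here is a toy decoder for it (the exact framing -- 7-bit length, 2-byte big-endian distance -- is just the layout sketched in the previous paragraph):

```python
def decode(compressed):
    out = bytearray()
    i = 0
    while i < len(compressed):
        b = compressed[i]
        if b < 0x80:                      # high bit clear: literal ASCII byte
            out.append(b)
            i += 1
        else:                             # high bit set: back-reference
            length = b & 0x7F             # low 7 bits: match length
            distance = (compressed[i + 1] << 8) | compressed[i + 2]
            i += 3
            for _ in range(length):       # byte-wise copy handles overlaps
                out.append(out[-distance])
    return bytes(out)

# 'abc' as literals, then a back-reference of length 6 at distance 3
print(decode(bytearray(b'abc') + bytearray([0x80 | 6, 0x00, 0x03])))
# -> b'abcabcabc'
```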
| 2 | 1 | 0 |
I'm learning about LZ77 compression, and I saw that when I find a repeated string of bytes, I can use a pointer of the form <distance, length>, and that the "<", ",", ">" bytes are reserved. So... how do I compress a file that contains these bytes, if I cannot compress these bytes but also cannot replace them with different bytes (because decoders wouldn't be able to read it)? Is there a way? Or do decoders only decode if there is an exact <d, l> string? (If so, imagine that by coincidence we find these bytes in a file. What would happen?)
Thanks!
|
LZ77 compression reserved bytes "< , >"
| 0.099668 | 0 | 0 | 475 |
17,140,280 |
2013-06-17T04:08:00.000
| 0 | 0 | 0 | 0 |
python,compression,zip,deflate
| 17,141,490 | 2 | false | 0 | 0 |
If I understand your question correctly, it makes no sense. There are no "reserved bytes" for the uncompressed input of an LZ77 compressor. You simply need to encode literals and length/distance pairs unambiguously.
| 2 | 1 | 0 |
I'm learning about LZ77 compression, and I saw that when I find a repeated string of bytes, I can use a pointer of the form <distance, length>, and that the "<", ",", ">" bytes are reserved. So... how do I compress a file that contains these bytes, if I cannot compress these bytes but also cannot replace them with different bytes (because decoders wouldn't be able to read it)? Is there a way? Or do decoders only decode if there is an exact <d, l> string? (If so, imagine that by coincidence we find these bytes in a file. What would happen?)
Thanks!
|
LZ77 compression reserved bytes "< , >"
| 0 | 0 | 0 | 475 |
17,140,809 |
2013-06-17T05:15:00.000
| 0 | 0 | 0 | 0 |
python,django,rest,django-forms,django-piston
| 17,200,127 | 1 | false | 1 | 0 |
Use Django Tastypie; it's a much more robust REST framework than Piston :)
| 1 | 0 | 0 |
Is there any easy way to send a ModelForm from a Django Piston API to a Django client?
The documentation mentions the @validate decorator, but I couldn't find a way to send forms from the API to the Django client. I feel it should be possible to use Django forms from the API on the client side, just as when working locally.
|
Forms between Django Client and Django Piston API
| 0 | 0 | 0 | 75 |
17,141,979 |
2013-06-17T07:00:00.000
| 3 | 0 | 1 | 0 |
python,floating-point,integer,rounding
| 53,398,021 | 12 | false | 0 | 0 |
Just use round(x-0.5); this will always return the next rounded-down integer value of your float. You can also easily round up by doing round(x+0.5).
| 3 | 144 | 0 |
I want to take a floating-point number and round it down to the nearest integer. However, if it's not a whole number, I always want to round down the variable, regardless of how close it is to the next integer up. Is there a way to do this?
|
Round a floating-point number down to the nearest integer?
| 0.049958 | 0 | 0 | 387,046 |
17,141,979 |
2013-06-17T07:00:00.000
| 1 | 0 | 1 | 0 |
python,floating-point,integer,rounding
| 27,183,740 | 12 | false | 0 | 0 |
Don't know if you solved this, but I just stumbled upon this question. If you want to get rid of the decimal digits, you could use int(x) and it will eliminate them all. There's no need to use round(x).
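One caveat to check against what the question means by "round down" (this comparison is my addition, not part of the answer above): int() truncates toward zero, while math.floor() always rounds toward negative infinity, so they differ for negative numbers:

import math

print(int(2.7))          # 2
print(math.floor(2.7))   # 2
print(int(-2.7))         # -2  (truncates toward zero)
print(math.floor(-2.7))  # -3  (true round-down; returns an int in Python 3)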
| 3 | 144 | 0 |
I want to take a floating-point number and round it down to the nearest integer. However, if it's not a whole number, I always want to round down the variable, regardless of how close it is to the next integer up. Is there a way to do this?
|
Round a floating-point number down to the nearest integer?
| 0.016665 | 0 | 0 | 387,046 |
17,141,979 |
2013-06-17T07:00:00.000
| -6 | 0 | 1 | 0 |
python,floating-point,integer,rounding
| 52,981,095 | 12 | false | 0 | 0 |
I used this code, where you subtract 0.5 from the number; when you round the result, you get the original number rounded down.
round(a-0.5)
| 3 | 144 | 0 |
I want to take a floating-point number and round it down to the nearest integer. However, if it's not a whole number, I always want to round down the variable, regardless of how close it is to the next integer up. Is there a way to do this?
|
Round a floating-point number down to the nearest integer?
| -1 | 0 | 0 | 387,046 |
17,149,112 |
2013-06-17T13:46:00.000
| 0 | 0 | 1 | 0 |
python,excel
| 17,150,025 | 2 | false | 0 | 0 |
It sounds like the API you're using is returning different types depending on the content of the cells. You have two options.
You can convert everything to a string and then do what you're currently doing:
s = str(S1)
...
You can check the types of the input and act appropriately:
if isinstance(S1, basestring):
    # this is a string, strip off the prefix (letters/underscore; adjust to your data)
    S1 = S1.lstrip('ABCDEFGHIJKLMNOPQRSTUVWXYZ_')
elif isinstance(S1, float):
    # this is a float, just use it as-is
    pass
| 1 | 1 | 0 |
I have just learned Python for this project I am working on and I am having trouble comparing two values - I am using the Python xlwt and xlrd libraries and pulling values of cells from the documents. The problem is some of the values are in the format 'NP_000000000', 'IPI00000000.0', and '000000000' so I need to check which format the value is in and then strip the characters and decimal points off if necessary before comparing them.
I have tried using S1[:3] to get the value without the alphabetic characters, but I get a 'float is not subscriptable' error
Then I tried doing re.sub(r'[^\d.]+', '', S1), but I get a TypeError: expected a string or buffer
I figured the value of the cell returned via sheet.cell(x, y).value would be a string since it is alphanumeric, but it seems it must be returned as a float
What is the best way to format these values and then compare them?
|
How to strip letters out of a string and compare values?
| 0 | 0 | 0 | 205 |
17,151,693 |
2013-06-17T15:50:00.000
| 0 | 0 | 0 | 0 |
python,zope
| 17,305,631 | 1 | false | 1 | 0 |
There are two adapters needed for this. One adapts the ZODB context one wishes to use together with zope.publisher.interfaces.IRequest, while providing zope.traversing.interfaces.ITraversable (the view). The second adapts the view instantiated by the first adapter together with zope.publisher.interfaces.browser.IBrowserRequest, while providing zope.publisher.interfaces.IPublishTraverse (the traverser). I subclassed BrowserView for both adapters.
Inside the traverser, the publishTraverse method will be called successively for each URL part that is being traversed and returns a view for that URL part.
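As a rough sketch of the traverser half (the imports and the publishTraverse signature are from the real zope.publisher API; the class names and the returned view are hypothetical placeholders):

from zope.interface import implementer
from zope.publisher.browser import BrowserView
from zope.publisher.interfaces import IPublishTraverse

class SegmentView(BrowserView):  # hypothetical view for one URL segment
    def __init__(self, context, request, name):
        super(SegmentView, self).__init__(context, request)
        self.name = name

    def __call__(self):
        return u'You traversed to: %s' % self.name

@implementer(IPublishTraverse)
class Traverser(BrowserView):
    def publishTraverse(self, request, name):
        # called once for each remaining URL segment; 'name' is the next part
        return SegmentView(self.context, request, name)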
| 1 | 1 | 0 |
How can I serve arbitrary paths the way zope.browserresource does for @@ and ++resource++ URIs in Zope?
|
How can I serve arbitrary request paths?
| 0 | 0 | 1 | 47 |
17,153,483 |
2013-06-17T17:38:00.000
| 0 | 0 | 0 | 1 |
c++,python,shell
| 17,153,614 | 2 | true | 0 | 0 |
One option is to remove all the modules that allow running arbitrary shell commands (e.g. subprocess.py*, os.py*, ...) and ship only the modules that end users are allowed to have immediate access to.
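A complementary sketch from the Python side (my suggestion, not a replacement for removing the files): a None entry in sys.modules makes any later import of that name raise ImportError, so you can poison the names before user code runs:

import sys

# run in the embedded interpreter before any user script executes
for banned in ('subprocess', 'os', 'pty'):
    sys.modules[banned] = None  # 'import subprocess' now raises ImportError

Note that blocking 'os' will also break large parts of the standard library that import it, so in practice you may need a more selective list.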
| 2 | 0 | 0 |
I want to embed Python 3.x in our C++ application to allow scripting of maintenance and other tasks. So far it does everything we need, including file manipulation.
The problem is that to meet some specifications (like PCI), we aren't allowed to arbitrarily run shell commands such as with subprocess.call or popen.
Is there a way to prevent these and similar calls from working in embedded Python?
|
Preventing Embedded Python from Running Shell Commands?
| 1.2 | 0 | 0 | 114 |
17,153,483 |
2013-06-17T17:38:00.000
| 0 | 0 | 0 | 1 |
c++,python,shell
| 17,153,992 | 2 | false | 0 | 0 |
Unless your application is really locked down, I don't think you can prevent someone from loading their own Python module from an arbitrary directory (with import), so I don't think you can prevent execution of arbitrary code as long as you have Python embedded.
| 2 | 0 | 0 |
I want to embed Python 3.x in our C++ application to allow scripting of maintenance and other tasks. So far it does everything we need, including file manipulation.
The problem is that to meet some specifications (like PCI), we aren't allowed to arbitrarily run shell commands such as with subprocess.call or popen.
Is there a way to prevent these and similar calls from working in embedded Python?
|
Preventing Embedded Python from Running Shell Commands?
| 0 | 0 | 0 | 114 |
17,154,006 |
2013-06-17T18:11:00.000
| 2 | 0 | 0 | 0 |
python,colors,matplotlib,scatter
| 17,154,439 | 1 | true | 0 | 0 |
I'm not sure if this is the "proper" way to do this, but you could programmatically split your data into two subsets: one containing the positive values and the second containing the negative values. Then you can call the plot function twice, specifying the color you want for each subset.
It's not an elegant solution, but a solution nonetheless.
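A minimal sketch of that approach (the data here is made up for illustration):

import numpy as np
import matplotlib.pyplot as plt

theta = np.random.uniform(0, 2 * np.pi, 100)  # angles in radians
r = np.random.uniform(-1, 1, 100)             # signed radii

pos = r >= 0  # boolean mask selecting the positive values

ax = plt.subplot(111, projection='polar')
ax.scatter(theta[pos], r[pos], color='red', label='positive')    # one call per subset
ax.scatter(theta[~pos], r[~pos], color='blue', label='negative')
ax.legend()
plt.show()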
| 1 | 1 | 1 |
I have a pyplot polar scatter plot with signed values. Pyplot does the "right" thing and creates only a positive axis, then reflects negative values to look as if they are a positive value 180 degrees away.
But, by default, pyplot plots all points using the same color. So positive and negative values are indistinguishable.
I'd like to easily tell positive values at angle x from negative values at angle (x +/- 180), with positive values red and negative values blue.
I've made no progress creating what should be a very simple color map for this situation.
Help?
|
Pyplot polar scatter plot color for sign
| 1.2 | 0 | 0 | 785 |
17,154,381 |
2013-06-17T18:34:00.000
| 0 | 0 | 1 | 0 |
python,scipy,cluster-computing,mpi
| 20,356,186 | 3 | false | 0 | 0 |
One quick workaround is to use a local directory on each node (e.g. /tmp, as Wesley said), but run only one MPI task per node, if you have the capacity.
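If I recall scipy.weave.catalog correctly, it honors the PYTHONCOMPILED environment variable for the catalog location (treat this as an assumption and verify against your scipy version), so each task can point it at node-local storage before the first inline call:

import os

# give each MPI rank its own catalog under node-local /tmp
rank = os.environ.get('OMPI_COMM_WORLD_RANK', '0')  # assumes Open MPI's launcher
os.environ['PYTHONCOMPILED'] = '/tmp/weave_compiled_%s' % rank
# import scipy.weave only after this variable is set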
| 1 | 3 | 1 |
If scipy.weave.inline is called inside a massively parallel MPI-enabled application that runs on a cluster with a home directory common to all nodes, every instance accesses the same catalog of compiled code: $HOME/.pythonxx_compiled. This is bad for obvious reasons and leads to many error messages. How can this problem be circumvented?
|
How can scipy.weave.inline be used in a MPI-enabled application on a cluster?
| 0 | 0 | 0 | 282 |
17,156,844 |
2013-06-17T21:03:00.000
| 0 | 0 | 0 | 1 |
python,google-chrome
| 24,497,812 | 3 | false | 0 | 0 |
A few options you might consider, with their advantages and disadvantages:
URL:
advantage: as Chris mentioned, accessing the URL and manually changing it is an option. It should be easy to write a script for this, and I can send you my Perl script if you want
disadvantage: I am not sure you can do it. I made a Perl script for that before, but it didn't work, because Google states that you can't use its services outside the Google interface. You might face the same problem
Google's search API:
advantage: popular choice. Good documentation. It should be a safe choice
disadvantage: Google's restrictions.
Research other search engines:
advantage: they might not have the same restrictions as Google. You might find some search engines that let you play around more and have more freedom in general.
disadvantage: you're not going to get results that are as good as Google's
| 1 | 0 | 0 |
I'd like to write a script (preferably in Python, but other languages are not a problem) that can parse what you type into a Google search. Suppose I search 'cats'; then I'd like to be able to parse the string cats and, for example, append it to a .txt file on my computer.
So if my searches were 'cats', 'dogs', 'cows' then I could have a .txt file like so,
cats
dogs
cows
Anyone know any APIs that can parse the search bar and return the string inputted? Or some object that I can cast into a string?
EDIT: I don't want to make a Chrome extension or anything, but preferably a Python (or Bash or Ruby) script I can run in the terminal that can do this.
Thanks
|
Parse what you google search
| 0 | 0 | 1 | 205 |
17,158,233 |
2013-06-17T23:01:00.000
| 2 | 0 | 1 | 0 |
java,python,exception-handling
| 17,158,594 | 3 | true | 1 | 0 |
OK, I can try and give an answer which I'll keep as neutral as it can be... (note: I have done Python professionally for a few months, but I am far from mastering the language in its entirety)
The guidelines are "free": if you come from a Java background, you will certainly spend more time than most Python devs looking up documentation on what is thrown when, and write more try/except/finally than is found in regular Python code. In other words: do what suits you.
Apart from the fact that exceptions can be thrown anywhere, at any moment, Python has multi-exception catch (only available in Java since Java 7) and the with statement (somewhat equivalent to Java 7's try-with-resources); you can have more than one except block (just as Java can catch more than one type), etc. Additionally, there are no real conventions that I know of on how exceptions should be named, so don't be fooled if you see SomeError: it may well be what a Java dev regards as a "checked exception" and not an Error.
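A minimal sketch of the Python constructs mentioned above (the file name and fallback value are made up):

try:
    with open('config.txt') as f:  # 'with' closes the file even on error
        value = int(f.read())
except (IOError, ValueError) as e:  # multi-exception catch in a single clause
    print('falling back to default: %s' % e)
    value = 0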
| 2 | 9 | 0 |
I am originally a Java developer; for me, checked exceptions in Java make it obvious/easy enough to decide whether to catch one or throw it to the caller to handle later. Then comes Python: there are no checked exceptions, so conceptually nothing forces you to handle anything (in my experience, you don't even know what exceptions are potentially thrown without checking the documentation). I've been hearing quite a lot from Python folks that, in Python, sometimes you'd better just let it fail at runtime instead of trying to handle the exceptions.
Can someone give me some pointers regarding:
what's the guideline/best practice for Python Exception Handling?
what's the difference between Java and Python in this regard?
|
Exception Handling guideline- Python vs Java
| 1.2 | 0 | 0 | 2,973 |