Dataset schema (column: dtype, value range or string-length range):
Q_Id: int64, 337 to 49.3M
CreationDate: string, length 23
Users Score: int64, -42 to 1.15k
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
Tags: string, length 6 to 105
A_Id: int64, 518 to 72.5M
AnswerCount: int64, 1 to 64
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: string, length 6 to 11.6k
Available Count: int64, 1 to 31
Q_Score: int64, 0 to 6.79k
Data Science and Machine Learning: int64, 0 to 1
Question: string, length 15 to 29k
Title: string, length 11 to 150
Score: float64, -1 to 1.2
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 8 to 6.81M
22,831,462
2014-04-03T08:07:00.000
0
0
0
0
python,popup,tkinter,size,line
22,835,256
2
false
0
1
There is nothing you can do with tkMessageBox itself. You either have to alter the text or create your own message window. The latter isn't very difficult -- a Toplevel, a Text widget with a scrollbar, and a couple of buttons is about all you need (a minimal sketch follows this record).
1
0
0
Good day all! I am trying to figure out how to limit the popup box shown below. I am not trying to trim the text; rather, I am trying to set the number of characters per line in the popup, e.g. 30 characters per line in the popup box tkMessageBox.showinfo("Results", str(e)). Any suggestions, without modifying the text itself?
Tkinter Limit Size Of Popup per line
0
0
0
157
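The custom window the answer describes can be sketched in a few lines. This is a minimal illustration, not the poster's code; the widget sizes and names are made up, and it targets Python 2 (Tkinter) to match the question's tkMessageBox.

    import Tkinter as tk  # Python 2; on Python 3 use "import tkinter as tk"

    def show_long_message(root, title, text):
        # A Toplevel standing in for the message box
        win = tk.Toplevel(root)
        win.title(title)
        tk.Button(win, text="OK", command=win.destroy).pack(side="bottom")
        scroll = tk.Scrollbar(win)
        # Text widget sized to roughly 30 characters per line
        txt = tk.Text(win, width=30, wrap="word", yscrollcommand=scroll.set)
        scroll.config(command=txt.yview)
        scroll.pack(side="right", fill="y")
        txt.pack(side="left", fill="both", expand=True)
        txt.insert("1.0", text)
        txt.config(state="disabled")  # read-only, like a message box

    root = tk.Tk()
    show_long_message(root, "Results", "a long result string ... " * 10)
    root.mainloop()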
22,831,520
2014-04-03T08:10:00.000
8
0
0
0
python,excel,python-2.7,xlwt
22,837,578
3
true
0
0
OK, after searching the web, I realized that it's not possible with xlwt, but with XlsxWriter it's possible and very easy and convenient (see the sketch after this record).
1
9
0
I'm using xlwt to create tables in Excel. Excel has a "format as table" feature which gives the table automatic filters for each column. Is there a way to do this using Python?
how to do excel's 'format as table' in python
1.2
1
0
10,141
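For reference, a short sketch of the XlsxWriter feature the answer points to; the file name, cell range and data are made up. worksheet.add_table() is what produces Excel's "Format as Table" region, including the automatic filter dropdowns.

    import xlsxwriter

    workbook = xlsxwriter.Workbook('table.xlsx')
    worksheet = workbook.add_worksheet()

    # Creates a real Excel table over A1:C4 with filters on each header.
    worksheet.add_table('A1:C4', {
        'columns': [{'header': 'Product'},
                    {'header': 'Quarter'},
                    {'header': 'Sales'}],
        'data': [['Apples', 'Q1', 5000],
                 ['Pears',  'Q1', 3000],
                 ['Apples', 'Q2', 6000]],
    })
    workbook.close()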
22,837,709
2014-04-03T12:24:00.000
2
0
0
0
python
22,837,881
2
false
0
1
Your algorithm looks perfectly fine. As you aim to implement this in Python, I'd start by creating a simple PyGame (or other library of your choice) application that only draws two of your grids. That will help you debug the rest of your functionality, since you'll see it better than an array dump in the console. Alternatively, you may implement everything text-mode for now and enhance it with graphics later, making your application more like "query-response": print the two grids with a plain print() and ask for the next move with raw_input() -- that simple.
1
2
0
I am a first-year physics major at Goshen College, and I am supposed to create a final project for my programming class. I am thinking about doing the game Battleship. I realize that I could find complete code somewhere online, but I would really like to write my own. I came up with a list of things I would like to try to implement in the game and a general idea of how I would like the program to run. Make four 10 x 10 grids: one with stored locations for the computer's ships, one with the player's ships, and two to be displayed, keeping track of both your and the computer's guesses. By using format (a,6), the user/computer can guess a location. The program tells the user which ship they are placing and how big it is. The user gives a start coordinate, then gives either up, left, or right to decide which direction the ship lies. This is grid one. Grid two is a stored grid of ships; that will be the computer's grid. The game will keep track of your guesses and put circles where you miss, x's where you hit, and ~'s for water (spaces you haven't guessed). This is grid three. Grid four is the computer's guesses: the computer guesses randomly until a hit, then uses an algorithm to check all adjacent spaces until the ship is sunk. Take turns. Display grids three and four simultaneously. I really just don't know where to start. I have a general idea of the logic I need to create the game; I just don't know where to jump in and start defining functions. Thanks!
Battleship in Python
0.197375
0
0
1,306
22,838,333
2014-04-03T12:51:00.000
0
0
1
1
python,import,subprocess
22,840,850
2
false
0
0
If you want to use functions from another script, then you usually import the script. When the script is script.py you can write import script and use the functions defined in it as script.function_in_the_script.
2
1
0
Suppose I have a Python script with 4-5 functions, all called from a single function in the script. If I want to use the results after executing the script (i.e., use functions from another script), I can make the script executable and use subprocess.Popen, or I can import these functions in another script. Which is the better way to do this?
python import module vs running script as subprocess.popen
0
0
0
903
22,838,333
2014-04-03T12:51:00.000
1
0
1
1
python,import,subprocess
22,878,579
2
true
0
0
Which is the better way to do this? Use import unless you have to use subprocess.Popen to run the Python code. import uses sys.path to find the module, so you usually don't need to specify the path explicitly; and imported functions accept arguments and return results in the same process, so you don't need to serialize Python objects into bytes to send them to another process. A sketch contrasting the two follows this record.
2
1
0
Suppose I have a Python script with 4-5 functions, all called from a single function in the script. If I want to use the results after executing the script (i.e., use functions from another script), I can make the script executable and use subprocess.Popen, or I can import these functions in another script. Which is the better way to do this?
python import module vs running script as subprocess.popen
1.2
0
0
903
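A sketch contrasting the two approaches from the answer. It assumes a hypothetical script.py defining function_in_the_script; neither name comes from the original post.

    # Option 1: import - same process, real Python objects in and out.
    import script
    result = script.function_in_the_script(1, 2)

    # Option 2: subprocess - a separate process; arguments and results
    # have to cross the process boundary as text.
    import subprocess
    proc = subprocess.Popen(['python', 'script.py', '1', '2'],
                            stdout=subprocess.PIPE)
    out, _ = proc.communicate()
    result = out.strip()  # only bytes/text come back, not Python objects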
22,839,403
2014-04-03T13:34:00.000
3
0
1
0
python,rpy2
22,848,834
2
true
0
0
Bring the presumed limitations on. Rpy2 is, at its lower level (the rpy2.rinterface level), exposing a very large part of the R C-API. Technically, one can do more with rpy2 than one can from R itself (writing a C extension for R would possibly be the only way to catch up). As an amusing fact, doing "R stuff" from rpy2 can be faster than doing the same from R itself (see the rpy2 documentation benchmarking the access of elements in an R vector). The higher level in rpy2 (the rpy2.robjects level) adds a layer that makes "doing R stuff" more "pythonic" (although it surrenders the performance claim mentioned above). R packages look like Python modules (see the sketch after this record); there are classes such as Formula, Factor, etc., so that all R objects exist as Python classes; there is a conversion system that lets complex R structures be mapped to Python objects automagically (see the example with lme4 in the rpy2 documentation); invalid R variable names are translated on the fly ('.' is a valid character for variable names in R); and Python docstrings are created on the fly from the R documentation.
1
2
1
I know basic Python programming and thus want to stay on the Python data-science path. The problem is, there are many R packages that appeal to me as a social science person. Does Rpy2 allow full use of any arbitrary R package, or is there a catch? How well does it work in practice? If Rpy2 is too limited, I'd unfortunately have to branch over to R, but I would rather not, because of the extra overhead. Thanks, Tai
What are the operational limits of Rpy2?
1.2
0
0
488
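A minimal sketch of the high-level rpy2 interface the answer mentions (R packages as modules, Formula objects); it assumes R and rpy2 are installed.

    from rpy2.robjects.packages import importr
    import rpy2.robjects as robjects

    base = importr('base')    # an R package imported like a Python module
    stats = importr('stats')

    # Build an R vector in Python and call an R function on it.
    v = robjects.FloatVector([1.1, 2.2, 3.3, 4.4])
    print(base.summary(v))

    # R formulas are first-class Python objects too.
    fmla = robjects.Formula('y ~ x')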
22,850,566
2014-04-03T23:02:00.000
58
0
1
0
ipython,ipython-notebook,jupyter-notebook,docstring,tab-completion
22,850,810
3
true
0
0
Oh, the shortcut is now shift+tab.
1
29
0
In IPython, I am used to typing function( and then hitting Tab to get the contents of the docstring and a list of the named arguments. However, this stopped working after I installed IPython 2.0. Is there an explanation or a known fix?
function name + tab does not return docstring in IPython
1.2
0
0
12,303
22,850,591
2014-04-03T23:05:00.000
2
0
1
1
c#,python
22,850,707
1
true
0
0
I think it may be because things like anti-virus software are hooked into kernel mode as drivers and can intercept user-mode input and intervene. The anti-virus may be hooked into the kernel APIs for process management and reject calls through the process APIs to kill a process with the same PID as itself. If this is the case, then the answer would be no, you can't, as I highly doubt that C# can be run in kernel mode.
1
1
0
I want to ask a repeated question: how to prevent someone from stopping an application with Task Manager. I know it is possible; if you try to kill avastui.exe from Task Manager, it says "the operation could not be completed, access denied", and this happens while the Avast service is running. When you stop the Avast service you can kill the avastui.exe process. Does anyone have an idea how Avast does it? How can I do it in C# or Python? Thanks in advance.
How to prevent an app from being killed in windows task manager?
1.2
0
0
1,098
22,852,845
2014-04-04T03:14:00.000
2
0
0
0
django,python-2.7
22,856,740
2
true
1
0
I recently had something similar to do. I have a specific settings file for each domain, with a unique SITE_ID, and also one wsgi file per site. Then in my http.conf (I'm using Apache on WebFaction) I set up multiple VirtualHost instances, each pointing to the specific wsgi file. My configuration looks something like this (a sketch of one per-domain settings/wsgi pair follows this record):
random_django_app/
    __init__.py
    models.py
    ...
another_app/
    ...
settings_app/
    settings/
        __init__.py
        base.py
        example_co_uk.py
        example_ca.py
        ...
    wsgis/
        __init__.py
        example_co_uk.py
        example_ca.py
__init__.py
urls.py
1
1
0
I have an app that shows products available in the US. If I want to change the country, I simply modify the value of a variable in my settings.py file. Now... each country I serve needs to have its own site, e.g. example.co.uk, example.ca, etc. They'll all be hosted on the same server and use the same database. The views, static files, etc. would be almost the same for each country. What's the best way of setting this up? Should I have one main app and then per-country apps that extend it? (Using Django 1.6.2/Python 2.7)
Multiple websites using the same app - how to set this up?
1.2
0
0
60
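A sketch of one per-domain settings/wsgi pair from the layout above. The module paths, SITE_ID value and host name are illustrative, not taken from the poster's project.

    # settings_app/settings/example_co_uk.py
    from settings_app.settings.base import *  # shared settings

    SITE_ID = 2                        # the Site row for example.co.uk
    ALLOWED_HOSTS = ['example.co.uk']

    # settings_app/wsgis/example_co_uk.py
    import os
    from django.core.wsgi import get_wsgi_application

    os.environ.setdefault('DJANGO_SETTINGS_MODULE',
                          'settings_app.settings.example_co_uk')
    application = get_wsgi_application()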
22,854,509
2014-04-04T05:46:00.000
1
0
1
0
python,windows,macos,notepad++,textwrangler
22,856,093
1
false
0
0
Here is something you can do to your file via Notepad++: Edit -> Blank Operations -> TAB to Space. If this doesn't help (and most likely it won't), you will need to check the indents manually; I suggest View -> Show Symbol -> Show Indent Guide for convenience. It is good, safe style to use only spaces. To avoid this problem in future projects, configure Notepad++ under Settings -> Tab Settings -> Replace by space. You will still be able to use tabs, but they will be changed to a defined number of spaces (4 for me). Hope this helps.
1
0
0
I wrote a program in Python using Notepad++ on Windows, but when I opened the file on a Mac using TextWrangler (or any text editor there) and ran it, there was an error message regarding indentation. How can I easily fix this?
Python program written in Notepad++, error in TextWrangler
0.197375
0
0
154
22,854,756
2014-04-04T06:03:00.000
0
1
1
0
python,testing,robotframework
41,011,635
4
false
0
0
The simple solution is to use Jenkins: you could install Jenkins with the Robot Framework plugin. You can have two jobs run in parallel by default, without any slave node. Alternatively, you can have multiple slave nodes and use tags in Robot plus node labels to distribute the jobs. Just set the parameters in each Jenkins job's build section, e.g. pybot --include tag1 test.robot for job1 and pybot --include tag2 test.robot for job2, then trigger them from an upstream job and you will get them running in parallel. You still need to make sure that any file the jobs access is locked by only one of the test jobs at a time.
2
8
0
I have 5 test suites which are independent of each other. I have to run them against the same environment. Most of my test suites consist of API calls. The test cases inside each suite should run in sequence, as they are dependent on each other. Is there any way to run all the test suites in parallel via the pybot command?
Is there any way to run robot framework test suites in parallel?
0
0
0
8,580
22,854,756
2014-04-04T06:03:00.000
0
1
1
0
python,testing,robotframework
28,816,002
4
false
0
0
When the tests are completely stand-alone and can run fully in parallel, I have had some success with just writing an execution script that iterates through all the IP addresses of the units on which I want to run a test in parallel, calling the tests with each IP address as an argument. I also tell it to create only the output.xml files, naming them based on the hostname or IP address, and then the script does post-processing with rebot, which creates an aggregated report covering all the units. A sketch of this pattern follows this record.
2
8
0
I have 5 test suites which are independent of each other. I have to run them against the same environment. Most of my test suites consist of API calls. The test cases inside each suite should run in sequence, as they are dependent on each other. Is there any way to run all the test suites in parallel via the pybot command?
Is there any way to run robot framework test suites in parallel?
0
0
0
8,580
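A sketch of the pattern both answers describe, driven from Python instead of Jenkins: one pybot process per suite in parallel, then rebot to merge the per-suite output.xml files. The suite names are made up.

    import subprocess

    suites = ['suite1.robot', 'suite2.robot', 'suite3.robot']
    outputs = ['output_%d.xml' % i for i in range(len(suites))]

    # Each suite runs in its own process; cases inside a suite stay sequential.
    procs = [subprocess.Popen(['pybot', '--output', out,
                               '--log', 'NONE', '--report', 'NONE', suite])
             for suite, out in zip(suites, outputs)]
    for p in procs:
        p.wait()

    # Aggregate all results into a single log/report.
    subprocess.call(['rebot', '--name', 'Combined'] + outputs)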
22,856,638
2014-04-04T07:45:00.000
89
1
0
0
python,pytest,nose
22,856,817
1
true
0
0
I used to use Nose because it was the default with Pylons. I didn't like it at all. It had configuration tendrils in multiple places, virtually everything seemed to be done with an underdocumented plugin which made it all even more indirect and confusing, and because it did unittest tests by default, it regularly broke with Unicode tracebacks, hiding the sources of errors. I've been pretty happy with py.test the last couple years. Being able to just write a test with assert out of the box makes me hate writing tests way less, and hacking whatever I need atop the core has been pretty easy. Rather than a fixed plugin interface it just has piles of hooks, and pretty understandable source code should you need to dig further. I even wrote an adapter for running Testify tests under py.test, and had more trouble with Testify than with py.test. That said, I hear nose has plugins for classless tests and assert introspection nowadays, so you'll probably do fine with either. I still feel like I can hit the ground running with py.test, though, and I can understand what's going on when it breaks.
1
87
0
I've started working on a rather big (multithreaded) Python project, with loads of (unit) tests. The most important problem there is that running the application requires a preset environment, which is implemented by a context manager. So far we made use of a patched version of the unit test runner that would run the tests inside this manager, but that doesn't allow switching context between different test modules. Both nose and pytest support such a thing, because they support fixtures at many granularities, so we're looking into switching to one of them. Both libraries would also support 'tagging' tests and running only those tagged subsets, which is something we would also like to do. I have been looking through the documentation of both nose and pytest a bit, and as far as I can see the bigger part of those libraries essentially supports the same functionality, except that it may be named differently or require slightly different syntax. Also, I noted some small differences in the available plugins (nose has multiprocess support, pytest doesn't seem to, for instance). So it seems the devil is in the details, which means (often, at least) in personal taste, and we had better go with the library that fits our personal taste best. So I'd like to ask for subjective arguments for why I should go with nose or pytest, in order to choose the library/community combo that best fits our needs.
nose vs pytest - what are the (subjective) differences that should make me pick either?
1.2
0
0
36,739
22,871,051
2014-04-04T19:03:00.000
2
1
0
1
python
22,871,157
2
false
0
0
Yes, you are correct. Passing a tuple will print the tuple to stderr and return with an exit code of 1. You must pass None (or 0) to denote success. Note that this is a convention of shells and the like, not a hard requirement; that said, the conventions are in place for a very, very good reason. A small demonstration follows this record.
1
1
0
I am running my scripts on Python 2.6. The requirement is as follows: there are some 100 test scripts (all Python scripts) in one directory. I have to create one master Python script which runs all 100 test scripts one by one and then displays whether each test case failed or not. Every script calls sys.exit() to finish its execution. Currently I read the sys.exit() value from the master script and determine from it whether the particular test case failed. But now there is a requirement change: I also have to display the log file name (log files are created when I run the scripts). So can I send a tuple (containing the status as well as the log file name) as the argument to sys.exit() instead of an integer value? I have read that if we pass an argument other than an integer, None is equivalent to passing zero, and any other object is printed to stderr and results in an exit code of 1. So if I pass a tuple as an argument, will the OS consider it a failure even in the success case, since I am not passing None? I am using subprocess.Popen() in my master script to run the scripts and format() to read the sys.exit() value.
Can we send a tuple as an argument to sys.exit() in python
0.197375
0
0
991
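A small demonstration of the behaviour described in the answer; the log file name is hypothetical.

    import sys

    log_file = 'test_42.log'

    # What the question proposes:
    #     sys.exit(('FAIL', log_file))
    # prints "('FAIL', 'test_42.log')" to stderr and exits with code 1,
    # even when the test actually succeeded.

    # A conventional alternative: keep the exit code as the status and put
    # the log name on stdout, where the master script reads it via Popen.
    print(log_file)
    sys.exit(0)  # 0 = success; any non-zero integer = failure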
22,871,112
2014-04-04T19:07:00.000
0
0
1
0
python,macos,virtualenv
23,426,218
1
true
0
0
After almost a month I finally found a workaround. syncfolderspro has an option to exclude a folder from the sync, and I set it to exclude the virtualenv folders. Then I sync requirements.txt between the machines and install the virtual environment on each machine separately. That way the project files sync, but the virtual environment ones don't get messed up.
1
0
0
For some time now I have been syncing my projects folder between my laptop and desktop using an app called syncfolderspro. Inside my projects folder I also have some Python virtual environment folders. As I understand it, such folders cannot be synced, as many file paths are hardcoded. But is this only the case with the activate script, or does it have something to do with Python importing libraries? (I suspect the latter, as even a direct path to the virtual env's python doesn't work.) Is there a particular reason why relative paths can't be used? What are some good workarounds when working on multiple machines?
Syncing VirtualEnvs
1.2
0
0
484
22,871,249
2014-04-04T19:13:00.000
1
0
1
0
python-3.3
22,871,309
2
false
0
0
As it turns out, thanks to user2864740, I found itertools.permutations. This does what I asked; a sketch follows this record.
1
1
0
I'm trying to generate all the possible 10-digit combinations of the digits 0-9 without repeats for a math problem, but I can't seem to get my head around it. I've tried itertools.combinations, but that gets subsequences. I've also tried random.shuffle, but that's horribly inefficient with multiple repeats. Is there an algorithm to solve this?
All possible 10-digit combinations of digits 0-9 without repeats
0.099668
0
0
2,984
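A sketch of the itertools.permutations approach from the answer: every ordering of the ten digits, with no digit repeated.

    from itertools import permutations

    count = 0
    for digits in permutations('0123456789'):
        number = ''.join(digits)   # e.g. '0123456789', '0123456798', ...
        count += 1
    print(count)  # 10! = 3,628,800 orderings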
22,872,888
2014-04-04T20:58:00.000
1
0
0
0
python,django,apache,security,permissions
24,634,526
1
false
1
0
With regard to serving the application from your home directory: this is primarily a matter of preference. However, deployment decisions may depend on the situation. For example, if you have multiple users making use of this server to host their websites, then you would likely serve the files from their home directories. From a system administrator's perspective, when deploying the applications you may want them all accessible from /var/www... so they are easier to locate. The permissions you set for serving the files seem fine; however, the applications may need to run as different users, depending on the number of people using this machine. For example, let's say you have one other application running on the server and that both applications run as www-data. If the www-data user has read permission on Django's config file, then the other user could deploy a script that reads your database credentials.
1
4
0
I'm just about getting started on deploying my first live Django website, and I'm wondering how to set the Ubuntu server's file permissions in the optimal way for security, whilst still granting the permissions required. Firstly, a question of directories: I'm currently storing the site in ~/www/mysite.com/{Django apps}, but have often seen people using /var/www/... or /srv/www. Is there any reason picking one of these directories is better than the other, or any reason why keeping the site in my home dir is a bad idea? Secondly, the permissions of the dirs and files themselves. I'm serving using Apache with mod_wsgi, and have the directive WSGIScriptAlias / ~/www/mysite.com/mainapp/wsgi.py. Apache runs as the www-data user. For optimal security, who should own the wsgi.py file, and what permissions should I grant it and its containing dir? Similarly, for the www, www/mysite.com, and www/mysite.com/someapp directories? What are the minimal permissions needed for the dirs and files? Currently I am using 755 and 644 for dirs and files respectively, which works well enough to allow the site to function, but I wonder if it is optimal or too liberal. My Ubuntu user is the owner of most files, and www-data owns the sqlite dbs.
Security optimal file permissions django+apache+mod_wsgi
0.197375
1
0
1,003
22,875,067
2014-04-05T00:21:00.000
1
1
0
0
python,architecture,integer
22,875,190
2
false
0
0
How is it possible for Python to calculate these large numbers? How is it possible for you to calculate these large numbers if you only have the 10 digits 0-9? Well, you use more than one digit! Bignum arithmetic works the same way, except the individual "digits" run not from 0 to 9 but from 0 to 2^32-1 or 2^64-1. A quick demonstration follows this record.
1
6
0
In C, C++, and Java, an integer has a certain range. One thing I noticed in Python is that I can calculate really large integers, such as pow(2, 100). The equivalent code in C, pow(2, 100), would clearly cause an overflow, since on a 32-bit architecture the unsigned integer type ranges from 0 to 2^32-1. How is it possible for Python to calculate these large numbers?
How does python represent such large integers?
0.099668
0
0
1,781
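A quick demonstration of the point above: Python integers grow as needed, stored internally as a sequence of fixed-size "digits".

    x = pow(2, 100)
    print(x)                     # 1267650600228229401496703205376
    print(x.bit_length())        # 101 bits - far beyond a 32-bit word

    # Arithmetic just works at any size, e.g. multi-word multiplication:
    print(x * x == pow(2, 200))  # True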
22,877,052
2014-04-05T05:40:00.000
8
0
0
1
eclipse,google-app-engine,python-2.7
23,118,828
1
true
1
0
This is clearly a bug, but there's a possible workaround: In a .py file in your project, right-click and go to "Run As." Then, select "Python Run" (not a custom configuration). Let it run and crash or whatever this particular module does. Now, go look at your run configurations - you'll see one for this run. You can customize it as if you had made it anew.
1
4
0
I'm having trouble with the PyDev Google App Engine run configuration in Eclipse. I can't create a new run configuration, and I get this error message: Path for project must have only one segment. Any ideas about how to fix it? I am running Eclipse Kepler on Ubuntu 13.10.
pydev Google App run Path for project must have only one segment
1.2
0
0
1,009
22,878,109
2014-04-05T07:47:00.000
18
0
1
0
python,python-3.x,scipy,pip
23,772,908
4
false
0
0
I was getting the same thing when using pip. I went to the install and it pointed to the following dependencies: sudo apt-get install python python-dev libatlas-base-dev gcc gfortran g++
1
7
1
I'm trying to install the scipy library through pip on Python 3.3.5. At the end of the script, I'm getting this error: Command /usr/local/opt/python3/bin/python3.3 -c "import setuptools, tokenize;file='/private/tmp/pip_build_root/scipy/setup.py';exec(compile(getattr(tokenize, 'open', open)(file).read().replace('\r\n', '\n'), file, 'exec'))" install --record /tmp/pip-9r7808-record/install-record.txt --single-version-externally-managed --compile failed with error code 1 in /private/tmp/pip_build_root/scipy Storing debug log for failure in /Users/dan/.pip/pip.log
Error installing scipy library through pip on python 3: "compile failed with error code 1"
1
0
0
11,676
22,879,518
2014-04-05T10:17:00.000
2
0
1
0
python,class,inheritance
22,879,590
2
true
0
0
Let's say you have a class called Animal. In this class you have a method called walk that prints "Animal is walking". Now you have two other classes: 1. a class Bird that inherits from Animal; you can add an additional method to it, fly, which prints that a bird can fly. 2. a class Monkey that inherits from Animal as well; you can add an additional method to it, climb, which prints that a monkey can climb trees. Both Monkey and Bird derive from Animal, so they can both walk -- they share that functionality. But each also has a distinct feature: birds can fly and monkeys can climb trees. So it makes sense, right? The reverse is false, because not every Animal can fly or climb trees. EDIT: I explained it in terms of methods, but this applies to variables as well. The Animal class can have a weight field, which is accessible from both inherited classes, Monkey and Bird. But Monkey can also have a field called max_power, and Bird can have a class variable called max_fly_altitude. These fields are unique to those particular kinds of animal and NOT to the generic Animal, which could also be, say, a crocodile -- and a crocodile can't fly, so it shouldn't have a max_fly_altitude attribute. The example is written out as code after this record.
1
2
0
I'm struggling with how I should interpret class inheritance. What does it actually do? As far as I know, it allows your class to: use the inherited class's functions/methods, and use the inherited class's local variables (self variables). Does this go both ways? If something is inherited, will the inherited class be able to read its inheritor as well? Please give your answer in as much layman's terms as possible. I'm not a native English speaker and I've never had proper programming education. Give me an explanation, not a definition :]
What does inheritance actually do in Python?
1.2
0
0
80
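The answer's example written out as Python; the class and field names follow the answer's wording.

    class Animal(object):
        def __init__(self, weight):
            self.weight = weight       # shared field: every animal has one

        def walk(self):
            print("Animal is walking")

    class Bird(Animal):
        max_fly_altitude = 1000        # unique to birds

        def fly(self):
            print("A bird can fly")

    class Monkey(Animal):
        def climb(self):
            print("A monkey can climb trees")

    bird = Bird(weight=2)
    bird.walk()    # inherited from Animal
    bird.fly()     # defined only on Bird
    # bird.climb() would fail: inheritance does not work in reverse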
22,879,593
2014-04-05T10:24:00.000
1
0
1
0
python,flask
22,879,658
3
true
1
0
You can use sys.modules.keys(), but you will need to import sys to use it (a tiny sketch follows this record).
1
1
0
I'm building a Flask application and I want to remove redundancy in importing modules. So, at runtime, I want to print all the imported modules. Is there a way to do that?
Find Imported Python Modules
1.2
0
0
67
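A tiny sketch of the suggestion above:

    import sys

    # sys.modules maps module names to every module imported so far.
    for name in sorted(sys.modules.keys()):
        print(name)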
22,879,890
2014-04-05T10:53:00.000
2
0
0
1
python,google-app-engine,google-cloud-datastore,app-engine-ndb,urlfetch
22,880,568
2
false
1
0
You can use NDB's to_dict() method on an entity and use json to exchange the data. If there is a lot of data, you can use a cursor. To exchange the entity keys, you can add the urlsafe key to the dict.
1
0
0
I am planning to exchange NDB Entities between two GAE web apps using URL Fetch. One Web app can initiate the HTTP POST Request with the entity model name, starting entity index number and number of entities to be fetched. Each entity would have an index number which would be incremented sequentially for new entities. To Send an Entity: Some delimiter could be added to separate different entities as well as to separate properties of an entity. The HTTP Response would have a variable (say "content") containing the entity data. Receiving Side Web APP: The receiver web app would parse the received data and store the entities and their property values by creating new entities and "put"ting them Both the web apps are running GAE Python and have the same models. My Questions: Is there any disadvantage with the above method? Is there a better way to achieve this in automated way in code? I intend to implement this for some kind of infrequent data backup design implementation
Exchanging NDB Entities between two GAE web apps using URL Fetch
0.197375
0
0
131
22,880,430
2014-04-05T11:42:00.000
1
0
1
0
ipython-notebook
22,896,874
1
true
0
0
On pre 2.0 IPython clicking on a notebook link took you to a temporary kernel address (e.g. http://127.0.0.1:8889/10327f95-f1f6-4016-80f0-e23c477edbfe). Since 2.0 these links are permanent so you can just provide direct notebook links to your students, e.g. http://127.0.0.1:8888/notebooks/Test.ipynb
1
0
0
I have been setting up some IPython Notebooks on public servers for training purposes. Once logged in, you are taken to the default landing page where you can choose the Notebook you want, create a new one and since IPython 2.0 navigate directories. I would however like to serve a default Notebook upon logging in. This would make it possible for the student to log in and automatically be taken to a notebook that was set up with some instructions. Of course he/she could just click on the link but it would just make it easier and better looking if it could start with a default page.
IPython Notebook - Start using default Notebook
1.2
0
0
81
22,884,351
2014-04-05T17:37:00.000
0
0
0
0
python,python-2.7,graphics,pygame,turtle-graphics
31,048,800
2
false
0
1
There is a series of books on Python graphics called "Python Graphics for Games" (available on Amazon), by Mike Ohlson de Fine. It covers vector drawing and animation.
1
0
0
I have to make a computer graphics project on "Vote for a better Nation" using Python within a week. I have some knowledge of pygame but don't know how to create a particular object (like a small cartoon man) and then make it move. So please help me if anyone knows how to make a moving object. Sorry for the English; thank you.
Computer Graphics with python
0
0
0
1,621
22,884,502
2014-04-05T17:50:00.000
0
1
1
0
python,windows,mechanize-python
22,884,534
1
true
0
0
Bah, I had placed my .py file in a new folder within the Python27 folder and apparently that was the issue. I moved it to Python27 and it's correctly importing.
1
0
0
Ok, so I just installed mechanize with easy_install from the command prompt, but now when I try to write a little snippet of code to test it, Python is telling me it can't import mechanize, any idea what might be going wrong? I'm at a loss and unfamiliar with mechanize.
Python and mechanize issues (Windows)
1.2
0
0
42
22,886,350
2014-04-05T20:26:00.000
0
0
0
0
python,multithreading,sqlite
24,468,686
1
true
0
0
OK, I'm using a simple JSON text file with automatic saving every minute.
1
0
0
I'm writing a little chat server and client. I had the idea to let users connect (nice :D), and when they want to protect their account with a password, they send /password <PASS> and the server stores the account information in a sqlite database file, so only users who know the passphrase are able to use the name. But there's the problem: I totally forgot that sqlite3 in Python is not thread-safe :( And now it's not working. Thanks to git I can undo all changes to the storage. Does anyone have an idea how to store this stuff so that it persists when stopping/starting the server? Thanks.
python simple data storage
1.2
1
0
85
22,896,496
2014-04-06T16:08:00.000
2
0
0
0
python,postgresql,alembic
22,896,742
4
false
0
0
Your database is most likely locked by another query. Especially if you do stuff with the pgAdmin GUI, this can happen a lot, I've found (truncating tables is especially tricky; sometimes pgAdmin crashes and the db gets stuck). What you want to do is restart the complete postgresql service and try again. Make sure that you: minimize the usage of the pgAdmin GUI, and close your cursors/connections with psycopg2 if you don't need them.
2
20
0
I wrote a migration script which works fine on sqlite, but if I try to apply the same to postgres it is stuck forever. With a simple ps I can see the postgres process stuck on "create table waiting". Are there any best practices?
Alembic migration stuck with postgresql?
0.099668
1
0
10,633
22,896,496
2014-04-06T16:08:00.000
3
0
0
0
python,postgresql,alembic
35,921,043
4
false
0
0
You can always just restart postgresql.
2
20
0
I wrote a migration script which works fine on sqlite, but if I try to apply the same to postgres it is stuck forever. With a simple ps I can see the postgres process stuck on "create table waiting". Are there any best practices?
Alembic migration stuck with postgresql?
0.148885
1
0
10,633
22,896,581
2014-04-06T16:14:00.000
0
0
1
0
python,eclipse,pydev
49,270,965
2
false
0
0
This is a workaround. You can create a new working set and select the projects that you have created and wish to see in the PyDev Package Explorer. This trick worked for me. Steps to create a working set: click on the inverted triangle in the PyDev Package Explorer, select "Select Working Set", click the New button, select Resources and click Next, select the projects that you wish to see in the explorer, give the working set a name, and click the Finish button.
1
5
0
I have been working for weeks using Eclipse's PyDev (Eclipse 3.8.1) and usually I click on files in Package Explorer to navigate through them. Now all of a sudden my Python project looks empty in the Package Explorer, just showing standard python libs. I tried many things such as: Refreshing project. Importing again project to workspace. Looking at custom filters in "customize view". Opening project file in the editor and then using "link with editor". Closing PyDev Package Explorer and opening it again. Closing and opening Eclipse again (several times). None of those showed the files. I don't know what is wrong with this project. I think it is not related, but it is also a git project. Do you know what else is missing for me to try? Thanks.
PyDev Package Explorer doesn't show files in Eclipse
0
0
0
6,622
22,897,243
2014-04-06T17:12:00.000
0
0
0
0
python,pandas,linear-regression
22,897,471
2
false
0
0
As far as I know, there is no way to push this all at once into the optimized Fortran library, LAPACK, since each regression is its own independent optimization problem. Note that the loop over the columns takes no time relative to the regressions themselves, which you need to compute in full because each regression is an isolated linear algebra problem... so I don't think there is much time to save here.
1
0
1
I have y - a 100 row by 5 column Pandas DataFrame I have x - a 100 row by 5 column Pandas DataFrame For i=0,...,4 I want to regress y[:,i] against x[:,i]. I know how to do it using a loop. But is there a way to vectorise the linear regression, so that I don't have the loop in there?
Perform n linear regressions, simultaneously
0
0
0
472
22,897,317
2014-04-06T17:19:00.000
0
0
0
0
python,pygame
22,897,426
1
false
0
1
Seeing as enemy objects and player objects can do the same things, maybe you should have one common class for both, for example a creature class. Then, based on whether the creature is an enemy or a player, you can control it using an EnemyAI class or a UserInput class. As for identifying who was hit, you can easily add a name variable to the creature class, set when you create a new creature. Then when you evaluate hits, the function can return the name of the creature that was hit (a sketch follows this record).
1
0
0
I am new to pygame and I am working on a distributed peer-to-peer multiplayer game. I tried doing the following but was not able to figure out how. I have a player class and an enemy class; the players and enemies are all part of different sprite groups. Say, in a 4-player game with one player object and 3 enemy objects: when a player fires, I use the spritecollide method to check collision with the enemy sprite group. But I want to identify specifically which enemy within the sprite group has been shot, and I am quite unable to figure that out. Is that even possible?
Detecting sprite object within a sprite group on collision
0
0
0
162
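A sketch of the idea above: spritecollide() returns the actual sprites that were hit, so a name attribute identifies each one. The Enemy class and the names here are hypothetical.

    import pygame

    class Enemy(pygame.sprite.Sprite):
        def __init__(self, name, rect):
            pygame.sprite.Sprite.__init__(self)
            self.name = name
            self.rect = rect
            self.image = pygame.Surface(rect.size)

    enemies = pygame.sprite.Group(
        Enemy('enemy-1', pygame.Rect(0, 0, 10, 10)),
        Enemy('enemy-2', pygame.Rect(50, 0, 10, 10)))

    bullet = Enemy('bullet', pygame.Rect(0, 0, 5, 5))  # stand-in sprite

    # False = don't remove hit sprites from the group
    for hit in pygame.sprite.spritecollide(bullet, enemies, False):
        print('hit:', hit.name)   # exactly which enemy was shot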
22,898,824
2014-04-06T19:24:00.000
4
0
0
0
python,datetime,pandas,filtering,dataframe
62,341,726
16
false
0
0
You could just select the time range by doing df.loc['start_date':'end_date'] (this slicing requires a sorted DatetimeIndex).
2
275
1
I have a Pandas DataFrame with a 'date' column. Now I need to filter out all rows in the DataFrame that have dates outside of the next two months. Essentially, I only need to retain the rows that are within the next two months. What is the best way to achieve this?
Filtering Pandas DataFrames on dates
0.049958
0
0
624,303
22,898,824
2014-04-06T19:24:00.000
43
0
0
0
python,datetime,pandas,filtering,dataframe
63,021,426
16
false
0
0
If you have already converted the strings to datetimes using pd.to_datetime, you can just use: df = df[(df['Date'] > "2018-01-01") & (df['Date'] < "2019-07-01")]. A sketch applying this to the question's "next two months" follows this record.
2
275
1
I have a Pandas DataFrame with a 'date' column. Now I need to filter out all rows in the DataFrame that have dates outside of the next two months. Essentially, I only need to retain the rows that are within the next two months. What is the best way to achieve this?
Filtering Pandas DataFrames on dates
1
0
0
624,303
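A sketch combining both answers for the question actually asked (keep only rows within the next two months). The column name 'date' comes from the question; the sample data and reference date are made up.

    import pandas as pd

    df = pd.DataFrame({'date': ['2014-04-10', '2014-05-15', '2014-09-01']})
    df['date'] = pd.to_datetime(df['date'])   # ensure real datetimes

    now = pd.Timestamp('2014-04-06')          # "today" for the example
    cutoff = now + pd.DateOffset(months=2)

    mask = (df['date'] >= now) & (df['date'] < cutoff)
    print(df[mask])   # only the rows within the next two months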
22,900,026
2014-04-06T21:04:00.000
0
0
0
1
python,google-app-engine
22,900,378
2
false
1
0
Just relaunch the task from within the task, with another deferred.defer call.
2
0
0
I am using deferred.defer quite heavily to schedule tasks using push queues on AppEngine. Sometimes I wish I had a clean way to signal a retry for a task without having to raise an Exception that generates a log warning. Is there a way to do this?
Clean retry in deferred.defer
0
0
0
165
22,900,026
2014-04-06T21:04:00.000
4
0
0
1
python,google-app-engine
22,905,035
2
true
1
0
If you raise a deferred.SingularTaskFailure, it will set an error HTTP status so the task is retried, but there won't be an exception in the log.
2
0
0
I am using deferred.defer quite heavily to schedule tasks using push queues on AppEngine. Sometimes I wish I had a clean way to signal a retry for a task without having to raise an Exception that generates a log warning. Is there a way to do this?
Clean retry in deferred.defer
1.2
0
0
165
22,902,616
2014-04-07T02:22:00.000
1
0
0
0
python,django
22,902,662
1
true
1
0
If you're feeding the result of request.POST straight into a SQL query (i.e., without using the Django ORM), you will most definitely be vulnerable to SQL injection. But if you are using the Django ORM (or another well-written ORM, such as SQLAlchemy), all of your input data will be properly escaped via parameterized queries (a sketch follows this record). tl;dr: you're safe.
1
1
0
I'm currently getting POST data using the method request.POST.get(). I'd like to know if this method gives me raw POST data or if it's correctly escaping and protected against SQL injection. Thank you in advance for your help. Galaf
Django request.POST.get SQL injection
1.2
0
0
784
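A sketch of the distinction the answer draws; the model and field names are hypothetical. Both the ORM and a parameterized raw() query quote values for you; string formatting does not.

    from myapp.models import Article  # hypothetical model

    user_input = "Robert'); DROP TABLE articles;--"  # attacker-controlled

    # Safe: the ORM parameterizes the query.
    Article.objects.filter(title=user_input)

    # Also safe: raw SQL with a parameter list.
    Article.objects.raw('SELECT * FROM myapp_article WHERE title = %s',
                        [user_input])

    # NOT safe: interpolating POST data straight into SQL.
    # query = "... WHERE title = '%s'" % user_input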
22,903,625
2014-04-07T04:29:00.000
1
0
1
0
javascript,python,parameter-passing
22,907,980
1
false
1
0
"I'm trying to code my bot in Python, then, since it can run synchronously with the browser. How can I pass the JavaScript variables to a client-side Python program?"... You can pass the JavaScript variables only via the query string. I created the server in CherryPy (CherryPy is an object-oriented web application framework for Python) and the client side as an HTML file. To repeat: the data can be passed only by query string, because the server works statically and the client works dynamically. That is a crude way to put it, but it is how a typical client/server exchange works: the server receives a call or message once, performs the service, and responds. I may be wrong... this is just my opinion. There are also Mako Templates, with which you can include HTML pages (helpful for building the structure of the site) or pass variables from server to client. I don't know of any program or language that lets you pass JavaScript variables to the server another way (I tried with Mako Templates but it didn't work).
1
1
0
I'm trying to create a bot for an online game. The values for the game are stored in Javascript variables, which I can access. However, running my bot code in Javascript freezes the browser, since my code is the only thing that executes. I'm trying to code my bot in Python, then, since it can run synchronously with the browser. How can I pass the Javascript variables to a client-side Python program?
Reading Javascript Variable in Python
0.197375
0
1
672
22,910,772
2014-04-07T11:14:00.000
0
0
1
0
python,json,rest,python-2.7,simplehttpserver
22,915,936
2
true
0
0
The issue was that I hadn't closed the zipfile object before I tried to return it; it appeared there was a lock on the file. To return a zip file from a simple Python HTTP server on a GET, you need to do the following (a fuller sketch follows this record): 1. set the header to 'application/zip': self.send_header("Content-type", "application/zip"); 2. create the zip file using the zipfile module; 3. open the file by path (e.g. c:/temp/zipfile.zip) in 'rb' mode to read the binary data: openObj = open(<path>, 'rb'); 4. return the data to the browser, then close the file: self.wfile.write(openObj.read()); openObj.close(); del openObj. That's about it. Thank you all for your help.
1
0
0
I created a simple threaded python server, and I have two parameters for format, one is JSON (return string data) and the other is zip. When a user selects the format=zip as one of the input parameters, I need the server to return a zip file back to the user. How should I return a file to a user on a do_GET() for my server? Do I just return the URL where the file can be downloaded or can I send the file back to the user directly? If option two is possible, how do I do this? Thank you
Issue with Python Server Returning File On GET
1.2
0
1
257
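A fuller sketch of the corrected do_GET flow, using Python 2's BaseHTTPServer to match the question's tags; the zip path is a placeholder.

    import BaseHTTPServer
    import os

    class Handler(BaseHTTPServer.BaseHTTPRequestHandler):
        def do_GET(self):
            path = 'c:/temp/zipfile.zip'  # created earlier with zipfile
            self.send_response(200)
            self.send_header('Content-type', 'application/zip')
            self.send_header('Content-Length', str(os.path.getsize(path)))
            self.end_headers()
            f = open(path, 'rb')        # binary mode for the zip bytes
            self.wfile.write(f.read())  # write BEFORE closing the file
            f.close()

    BaseHTTPServer.HTTPServer(('', 8000), Handler).serve_forever()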
22,910,946
2014-04-07T11:21:00.000
2
1
0
1
memcached,python-memcached
23,104,270
1
true
0
0
Not possible AFAIK, but a really good (and simple) solution is to modify your memcached library and do a print (or whatever you want) in the delete and multidelete methods. You can then get the keys that are being deleted (both by your app and by the library itself). I hope that helps
1
2
0
Is there any inbuilt way, or a hack, by which I can know which key is being evicted from memcache? There is one solution of polling for all possible keys inserted into memcache (e.g. get_multi), but that is inefficient and certainly not implementable for a large number of keys. The functionality is not needed in production, only during some benchmarking and optimization runs.
How to find the keys being evicted from memcache?
1.2
0
0
220
22,911,865
2014-04-07T12:02:00.000
0
0
0
0
python,pandas
23,057,921
2
true
0
0
No, it is not possible; only with a datetime or float index. However, the variant offered by unutbu is very useful.
1
1
1
I'm stuck on a problem. I have a set of observations of passenger traffic. The data is stored in an .xlsx file with the following structure: date_of_observation, time, station_name, boarding, alighting. I wonder if it's possible to create a DataFrame with a DatetimeIndex from such data if I need only the 'time' component of the datetime. (No duplicates of time are present in the dataset.) The reason for this requirement is that I use specific logic based on circular time (for example, 23.00 < 0.00, but 0.01 < 0.02 when compared), so I don't want to convert them to datetime.
DatetimeIndex with time part only: is it possible
1.2
0
0
938
22,913,080
2014-04-07T12:56:00.000
0
0
0
0
python,django
22,915,407
1
true
1
0
Django CMS is a totally different environment. You can't install it on top of your current project, so if you want your models inside Django CMS you have to migrate them manually to the new environment. Maybe there are solutions for it, but I'm not aware of them.
1
0
0
I'm totally new to Python and I've been learning how to use Django and its admin functionality to work with my models. My question really is: if I were to install Django CMS, how would it work with the admin? My understanding is limited, so I wanted to check, as I'm struggling to know if it will still show the models that I've been making at the same /admin/ URL (I read that you log in to the CMS part via the /admin/ URL). Would installing the CMS overwrite anything currently in my /admin/ view, or would the data management merely appear within the CMS control panel?
If I install Django CMS will it still show my current work in Django admin
1.2
0
0
34
22,913,096
2014-04-07T12:56:00.000
1
0
1
0
python,pip
22,913,498
1
true
0
0
Two points. One: are the changes you're planning to make useful to anyone else? If so, you might consider cloning the source repo, making your changes, and submitting a PR. Even if it's not immediately merged, you can make use of setup.py to create a local package and install that in your virtualenv. And two: are you planning to use these changes in just one project, or in many? If it's just one project, throwing it in your repo and deeply modifying it is probably OK (although you need to confirm the license allows you to do so). If you can foresee using this in multiple projects, you're probably better off creating a repo for it and packaging it via setup.py.
1
0
0
I've locally installed a Python package via pip in a virtualenv. I'd like to modify it (not monkey-patch or subclass it, but deeply modify it) and keep it in my source control system, referenced without installing. Maybe later I'd like to package it again, so I'd like to keep all the files needed for creating the package, not only the Python sources. Should I just copy it to my project folder and uninstall it from the virtualenv?
Locally modify package from pip
1.2
0
0
248
22,913,490
2014-04-07T13:12:00.000
0
0
1
0
python,python-2.7,pycharm
72,122,711
6
false
0
0
Sounds silly, but this worked for me: exit PyCharm, delete the .idea folder in Finder, and start PyCharm again.
1
35
0
Is there any way to send a keyboard interrupt event in PyCharm IDE (3.1) while in the debugging mode?
Keyboard interrupt in debug mode PyCharm
0
0
0
18,646
22,920,782
2014-04-07T19:00:00.000
0
1
0
0
python,testing,nose
22,944,568
1
false
0
0
I think this feature goes against very strong fundamentals of testing, but you can always output test results into a file by using --with-xunit and --xunit-file=my_test_results.xml and write a short script that does what you want.
1
0
0
I have a large set of tests that have not been maintained for a while. Some percentage of the tests pass, and some fail. I'd like to ask nose to show me only the succeeding tests. Is there a way to do this?
Given a set of python nose tests, is there a way to run or display only the succeeding tests with nose?
0
0
0
22
22,921,549
2014-04-07T19:40:00.000
2
0
1
1
python,macos,applescript,py2app,platypus
23,134,724
1
false
0
0
This is not really a py2app problem but is caused by the way the platform works: when a user tries to open a file that's associated with an application that is already running, the system doesn't start a second instance of the application but sends the already-running application an event telling it to open the new file. To handle multiple files you should implement some kind of GUI event loop (using PyObjC, Tk, ...) that can receive the OS X events sent when a user tries to open a file for an already-running application.
1
3
0
I am using Python 2.7 on Mac OS X 10.9 to create an app. This app takes a file name as an argument, opens the file, and keeps monitoring it for changes until it is closed. It works fine for a single file. I used py2app and Platypus to convert the Python .py code into an app. The limitation is that once an instance (process) of the app has been started (by clicking on a file to open it), the file opens, but I am not able to open two files at a time, i.e. launch two instances of the app. Through the terminal, it is possible to launch multiple instances of the app. So what should I do to open multiple files at a time, by clicking on multiple files, through this app?
Launch multiple process of an app on mac osx
0.379949
0
0
511
22,923,303
2014-04-07T21:18:00.000
1
1
1
0
python,pytest
22,936,892
2
false
0
0
Apart from the genscript option, the normal way to go about this is to install py.test into the Python 3.3 interpreter's environment. The problem you have is that the pip you invoke is also a py27 version, so it installs into py27. So you can start by installing pip into py33 (usually under the alias pip3) and then invoking that pip, or you can simply install py and pytest in the py33 environment the old-fashioned way: download the packages and run python3.3 setup.py install --user. You will then still want to make sure you can invoke the correct version of py.test, either by making sure you can call py.test and py.test3 via aliases, or simply by using pythonX.Y -m pytest.
1
2
0
I have both Python 2.7 and 3.3 installed on a Mac. I'm trying to get Pytest to run my tests under Python 3.3. When I run python3 -m py.test it halts, looking for the libs under the 3.3 path. When I run pip install -U pytest it installs to the 2.7 path. I've seen the write-ups for Virtualenv, but I'm not ready to go there yet. Is there another way?
running pytest with python 3.3
0.099668
0
0
3,058
22,934,204
2014-04-08T10:24:00.000
245
0
1
0
python,memory,ipython
22,934,273
8
true
0
0
%reset seems to clear defined variables.
2
175
0
Sometimes I rerun a script within the same ipython session and I get bad surprises when variables haven't been cleared. How do I clear all variables? And is it possible to force this somehow every time I invoke the magic command %run? Thanks
How to clear variables in ipython?
1.2
0
0
344,569
22,934,204
2014-04-08T10:24:00.000
0
0
1
0
python,memory,ipython
46,619,326
8
false
0
0
A quit in the Console panel will also clear all variables in the variable explorer. Note that you will lose all the code you have run in the Console panel.
2
175
0
Sometimes I rerun a script within the same ipython session and I get bad surprises when variables haven't been cleared. How do I clear all variables? And is it possible to force this somehow every time I invoke the magic command %run? Thanks
How to clear variables in ipython?
0
0
0
344,569
22,936,567
2014-04-08T12:08:00.000
0
1
0
1
python,plugins,intellij-idea,jetbrains-ide
22,990,376
2
false
0
0
You can't use the Python plugin with IDEA Community Edition, sorry. It requires IntelliJ IDEA Ultimate.
1
0
0
I'm using IDEA CE 13.1.1 and tried to install the Python plugin version 3.4.Beta.135.1 from file, because my development PC has no access to the internet for security reasons. But I get the following warning and the plugin does not get activated: Plugin Python depends on unknown plugins org.jetbrains.plugins.yaml, org.jetbrains.plugins.remote-run, Coverage. I searched for these plugins in the repository but did not find them, only references in other plugin details that depend on them. What are they actually called? How can I find them? Thanks
Plugin Python depends on unknown plugins org.jetbrains.plugins.yaml, org.jetbrains.plugins.remote-run, Coverage
0
0
0
2,069
22,939,011
2014-04-08T13:46:00.000
1
0
0
0
python,django,oop
22,943,192
1
false
1
0
Templates need HTML. If you want to be generic, I would use a base template and then, for each model and CRUD operation, a partial that takes an object or list of objects and knows exactly how to render that model. The block notation is also good for arranging HTML content. However, writing a standardized interface like the admin is a lot of work and not appropriate for a frontend. From my vantage point: use template tags and filters, blocks, and partials wherever you can, and standardize the variables you pass to your templates. This gives you a very pluggable template system where you can reuse loads of your HTML code.
1
0
0
AFAIK you need a template to use generic views in django. Is there a way or third party app to use generic views without HTML templates? I love the django admin interface, since you can use and configure it without writing HTML. I prefer the object oriented way which is used in django admin to customize it. In most cases you can stay in nice python code, without any HTML/template files. Update The django admin uses templates. That's true. But everybody uses the same and proven templates from django.contrib.admin. With generic views everybody writes his own templates. I think this is a drawback and waste of time. Good and extensible default templates would be nice. I guess someone has already a generic view system for django where you only need to use templates if you want to modify the default. But I could not find such an app with my favourite search engine.
django: Generic Views without Templates
0.197375
0
0
1,412
22,939,260
2014-04-08T13:56:00.000
1
0
1
0
python,itertools
22,940,159
5
false
0
0
To find all assignments of N balls to M slots:
- if N is 0, leave all M slots empty;
- otherwise, if M is 1, put all N balls into the only slot;
- otherwise, for each i in 0 .. N: put i balls in the M-th slot, and find all assignments of the remaining N-i balls to the remaining M-1 slots.
A Python version of this recursion follows this record.
1
2
0
I'm trying to find a way, using built-in functions, to list every way to organize N balls in M slots. The balls can stack in the slots. For example: N = 2, M = 3 -> {|0|1|1|, |1|0|1|, |1|1|0|, |2|0|0|, |0|2|0|, |0|0|2|} itertools.permutations() is part of the puzzle, but how can you go through all possible stacks of balls that preserves N?
Every way to organize N objects in M list slots
0.039979
0
0
1,779
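The recursion above, written out in Python; it returns every assignment of n balls to m slots as a tuple of per-slot counts.

    def assignments(n, m):
        if n == 0:
            return [(0,) * m]          # all m slots empty
        if m == 1:
            return [(n,)]              # all n balls in the only slot
        result = []
        for i in range(n + 1):         # i balls go into the last slot
            for rest in assignments(n - i, m - 1):
                result.append(rest + (i,))
        return result

    print(assignments(2, 3))
    # [(2, 0, 0), (1, 1, 0), (0, 2, 0), (1, 0, 1), (0, 1, 1), (0, 0, 2)]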
22,939,454
2014-04-08T14:03:00.000
1
0
1
0
python,methods,self
22,939,550
4
false
0
0
It's how Python's implementation of object-oriented programming works: a method of an instance (a so-called bound method) is called with that instance as its first argument. Besides accessing variables of the instance (self.x), you can also use it for anything else, e.g. call another method (self.another_method()), pass this instance as a parameter to something else entirely (mod.some_function(3, self)), or use it to call this method in the superclass of this class (return super(ThisClass, self).this_method()). You can give it an entirely different name as well; using pony instead of self will work just as well -- but you shouldn't, for obvious reasons. A small illustration follows this record.
1
2
0
I can understand why it is needed for variables (self.x), but why is it necessary as a parameter in a function? Is there something else you could put there instead of self? Please explain in as much layman's terms as possible; I never had a decent programming education.
Why do functions/methods in python need self as parameter?
0.049958
0
0
4,995
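A small illustration of the point above: the instance is passed as the first argument, whatever that parameter happens to be called.

    class Counter(object):
        def __init__(self, start):
            self.value = start

        def bump(self):      # 'self' is just the first parameter's name
            self.value += 1

    c = Counter(0)
    c.bump()                 # syntactic sugar for: Counter.bump(c)
    Counter.bump(c)          # the explicit form works too
    print(c.value)           # 2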
22,939,822
2014-04-08T14:18:00.000
0
0
0
0
python,amazon-web-services,boto,amazon-sqs
23,070,269
1
false
0
0
Install the newest version of boto (2.27 or later; lower versions have an issue with unicode strings), send the body as unicode, and you will succeed.
1
0
0
I'm trying to send this string on Python SQS: "Talhão", with no quotes. How do I do that? Thanks!
Amazon SQS Python/boto: how do I send messages with accented characters?
0
0
1
95
22,940,269
2014-04-08T14:35:00.000
1
0
0
0
python,gtk
22,942,890
1
true
0
1
You don't use get_path_at_pos. It is meant for cases where you handle the button presses directly (which you should avoid unless you really have good reasons to do so). Simply use gtk_icon_view_get_selected_items (C) or the pygtk equivalent iconview.get_selected_items() which gives you a list (in C a GList) of currently selected Gtk.TreePaths which is what you desire.
1
0
0
I am building a file browser using Gtk.IconView in Python. I am trying to find the path of the selected icon, using the "selection-changed" signal and gtk.IconView.get_path_at_pos(x, y). The docs are silent on how to obtain the (x, y). How do I find them? Using Python 2.7 and PyGTK 2.24.
Finding Mouse click position in IconView in GTK
1.2
0
0
84
22,941,147
2014-04-08T15:13:00.000
4
0
0
0
python,numpy,pandas
62,222,727
5
false
0
0
If the priority is speed, I would recommend: feather (the fastest) or parquet (a bit slower, but it saves lots of disk space). A sketch follows this record.
1
31
1
I've been working for a while with very large DataFrames and I've been using the csv format to store input data and results. I've noticed that a lot of time goes into reading and writing these files which, for example, dramatically slows down batch processing of data. I was wondering if the file format itself is of relevance. Is there a preferred file format for faster reading/writing Pandas DataFrames and/or Numpy arrays?
Fastest file format for read/write operations with Pandas and/or Numpy
0.158649
0
0
31,690
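A sketch of the recommendation above using pandas' built-in writers (both need the pyarrow package installed; the file names are arbitrary).

    import pandas as pd

    df = pd.DataFrame({'a': range(1000), 'b': ['x'] * 1000})

    df.to_feather('data.feather')   # fastest read/write
    df.to_parquet('data.parquet')   # a bit slower, much smaller on disk

    df1 = pd.read_feather('data.feather')
    df2 = pd.read_parquet('data.parquet')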
22,942,091
2014-04-08T15:53:00.000
0
0
0
1
python,database,file-io,cross-platform,backup
22,947,202
1
false
0
0
You could look for the file being closed and archive it then. The pyinotify library allows you to watch given files or directories for a number of events, including CLOSE_WRITE, which lets you detect files that have been closed with changes.
1
1
0
I'm writing a Python-based service that scans a specified drive for file changes and backs them up to a storage service. My concern is handling files which are open and being actively written to (primarily database files). I will be running this cross-platform, so Windows/Linux/OSX. I do not want to have to tinker with volume shadow copy services. I am perfectly happy with throwing a notice to the user/log that a file had to be skipped, or even retrying a copy operation x number of times in the event of an intermittent write lock on a small document or similar type of file. Successfully copying out a file in an inconsistent state and not failing would certainly be a Bad Thing(TM). The users of this service will be able to specify the path(s) they want backed up, so I have to be able to determine at runtime what to skip. I am thinking I could just identify any file which has a read/write handle and try to obtain exclusive access to it during the archival process, but I think this might be too intrusive(?) if the user was actively using the system. Ideas?
How to detect 'live' files during filesystem backup
0
0
0
49
22,943,247
2014-04-08T16:43:00.000
1
0
0
0
python-2.7,cx-oracle
25,801,626
2
false
0
0
You will need C-language based Oracle "client" libraries installed on your local machine. (SQL Developer uses Java libraries). To connect to a remote database you can install the Oracle Instant Client.
1
0
0
I am facing a problem while installing the cx_Oracle module. I have installed Oracle SQL Developer, with which I can connect to any Oracle server. I have also installed the cx_Oracle module. Now when I try to import the module I receive the error below:
import cx_Oracle
Traceback (most recent call last):
  File "", line 1, in
import cx_Oracle
ImportError: DLL load failed: The specified module could not be found.
After googling I find that I am supposed to install an Oracle client, but since I already have Oracle SQL Developer, which can act as an Oracle client, I am unable to see the difference between the two. Can someone please help me out?
Error while importing cx_Oracle on windows
0.099668
1
0
112
22,945,637
2014-04-08T18:43:00.000
1
0
0
0
python,cx-oracle
22,962,131
1
true
0
0
So I achieved what I needed in the following way:
cur.execute("SELECT table_name FROM user_tables")
result = cur.fetchall()
for row in result:
    cur.execute('DROP TABLE ' + row[0] + ' CASCADE CONSTRAINTS')
Thanks much Luke for your idea.
1
1
0
I am new to python. Could someone help me to figure out how to execute following commands using cx_Oracle in python? Spool C:\drop_tables.sql SELECT 'DROP TABLE ' || table_name || ' CASCADE CONSTRAINTS;' FROM user_tables; Spool off @C:\drop_tables.sql I know I can use cursor.execute() for 2nd command but for other non sql commands specially 1 & 3 I am not getting any clue. Appreciate if someone can help. Thanks, Aravi
How to execute non sql commands in python using cx_Oracle
1.2
1
0
912
22,947,073
2014-04-08T20:04:00.000
2
0
0
0
python,wand
22,947,169
2
true
0
0
Keep the format. PNG uses a different way of "encoding" color and is not very optimized for photos (it is better for illustrations, icons and clip art). You'll see it works fine if there is a limited number of colors in the image. The rule of thumb for image formats is to use JPEG for photos and PNG for anything else.
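Applied to the snippet in the question, the fix is just to not switch the format (or to set it back to JPEG explicitly); a sketch:
from wand.image import Image

with Image(blob=file) as im1:   # 'file' is the uploaded blob from the question
    im1.transform(resize='800x600')
    im1.format = 'jpeg'         # keep JPEG for photos instead of 'png'
    medfile = im1.make_blob()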
2
0
0
I've got a batch process that converts uploaded images using wand to generate thumbnails and resized versions. The problem is that the converted images get a lot larger than the original image. An uploaded jpg (1024x768) that was 239kB ends up over 1.2MB at 800x600. If I just resize but don't change format, the image is 132kB. Here's the relevant bit of code from my script:
im1 = Image(blob=file)
sizemedium = '800x600'
im1.transform(resize=sizemedium)
im1.format = 'png'
medfile = im1.make_blob()
jpg images converted to PNG with wand get much larger
1.2
0
0
479
22,947,073
2014-04-08T20:04:00.000
0
0
0
0
python,wand
22,947,210
2
false
0
0
The thing is that PNGs can be larger than JPGs, especially when you are storing photos, so that might be the problem. If you do not need a PNG for a specific reason I would just keep the JPG format.
2
0
0
I've got a batch process that converts uploaded images using wand to generate thumbnails and resized versions. The problem is that the converted images get a lot larger than the original image. An uploaded jpg (1024x768) that was 239kB ends up over 1.2MB at 800x600. If I just resize but don't change format, the image is 132kB. Here's the relevant bit of code from my script:
im1 = Image(blob=file)
sizemedium = '800x600'
im1.transform(resize=sizemedium)
im1.format = 'png'
medfile = im1.make_blob()
jpg images converted to PNG with wand get much larger
0
0
0
479
22,948,589
2014-04-08T21:30:00.000
1
0
0
0
python-2.7,wxpython
22,968,522
1
true
0
1
No. The ListCtrl does not support that functionality. There is a pure Python list control widget called the UltimateListCtrl that allows you add any widget to a cell, although it doesn't appear to allow cell spanning either. I would still try this widget and see if it works for you. If it does not, you may be able to patch it yourself because it's written in Python or you could do a feature request for it to be added to the UltimateListCtrl on the wxPython mailing list and see if anyone takes you up on it. You can do spanning in a wx.Grid widget if you wanted to go that route, although it's pretty limited when it comes to embedding other widgets.
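For completeness, cell spanning in wx.Grid looks roughly like this (sizes and labels are mine):
import wx
import wx.grid

app = wx.App(False)
frame = wx.Frame(None, title="Span demo")
grid = wx.grid.Grid(frame)
grid.CreateGrid(5, 4)
# Make the cell at row 0, col 0 span 1 row and 3 columns
grid.SetCellSize(0, 0, 1, 3)
grid.SetCellValue(0, 0, "spans three columns")
frame.Show()
app.MainLoop()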
1
0
0
Is there a way to have a ListCtrl cell span several columns? Or perhaps to be able to append a panel or other element to a ListCtrl item that will contain the info I need?
Possible for ListCtrl Colspan, or similar functionality?
1.2
0
0
30
22,949,966
2014-04-08T23:11:00.000
1
0
0
0
python,numpy
22,949,986
5
false
0
0
If you want an inner product, use numpy.dot(x, x); for an outer product, use numpy.outer(x, x).
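For example:
import numpy as np

x = np.array([1.0, 2.0, 3.0])
inner = np.dot(x, x)      # scalar: 14.0
outer = np.outer(x, x)    # 3x3 matrix, x * x^T
# Equivalent: promote x to a column vector and broadcast
outer2 = x[:, np.newaxis] * x[np.newaxis, :]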
1
13
1
I'm working with numpy in python to calculate a vector multiplication. I have a vector x of dimensions n x 1 and I want to calculate x*x_transpose. This gives me problems because x.T or x.transpose() doesn't affect a 1 dimensional vector (numpy represents vertical and horizontal vectors the same way). But how do I calculate a (n x 1) x (1 x n) vector multiplication in numpy? numpy.dot(x,x.T) gives a scalar, not a 2D matrix as I want.
dot product of two 1D vectors in numpy
0.039979
0
0
11,187
22,950,275
2014-04-08T23:41:00.000
0
0
0
0
javascript,python
22,950,323
3
false
1
0
For security reasons, JavaScript in a browser is usually restricted to communicating only with the site it was loaded from. Within that restriction, what you describe is an AJAX call, a very standard thing to do.
1
0
0
Is there a way to send data packets from an active Python script to a webpage currently running JavaScript? The specific usage I'm looking for is to give the ability for the webpage, using JavaScript, to tell the Python script information about the current state of the webpage, then for the Python script to interpret that data and then send data back to the webpage, which the JavaScript then uses to decide which function to execute. This is for a video game bot (legally), so it would need to happen in real time. I'm fairly proficient in Python and web requests, but I'm just getting into JavaScript, so hopefully a solution for this wouldn't be too complex in terms of Javascript. EDIT: One way I was thinking to accomplish this would be to have Javascript write to a file that the Python script could also read and write to, but a quick google search says that JavaScript is very limited in terms of file I/O. Would there be a way to accomplish this?
Using Python to communicate with JavaScript?
0
0
1
645
22,951,806
2014-04-09T02:39:00.000
1
0
1
0
python,json,database,performance,security
22,951,848
1
false
1
0
I don't think efficiency should be part of your calculus. I don't like either of your proposed designs. One table? That's not normalized. I don't know what data you're talking about, but you should know about normalization. Multiple copies? That's not scalable. Every time you add a user you add a table? Sounds like the perfect way to ensure that your user population will be small. Is all the data JSON? Document based? Maybe you should consider a NoSQL document based solution like MongoDB.
1
0
0
I'm trying to store user data for a website I'm making in Python. Which is more efficient:
- storing all the user data in one huge table,
- storing all the user data in several tables, one per user, in one database, or
- storing each user's data in an XML or JSON file, one file per user, where each file has a unique name based on the user id?
Also, which is safer? I'm biased towards storing user data in JSON files because that is something I already know how to do. Any advice? I'd post some code I already have, but this is more theoretical than code-based.
Storing user data in one big database or in a different file for each user - which is more efficient?
0.197375
1
0
242
22,958,634
2014-04-09T09:43:00.000
0
0
0
1
python,multithreading,asynchronous,rabbitmq,celery
22,985,609
2
false
0
0
In a similar setup, I decided to go with specific queues for different tasks; then I can decide which worker listens on which queue (which can also be changed dynamically!).
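For what it's worth, that per-task routing can be expressed in Celery 3.x configuration; a sketch with made-up task and queue names:
# celeryconfig.py
CELERY_ROUTES = {
    'tasks.smile': {'queue': 'smile'},
    'tasks.eat':   {'queue': 'eat'},
    'tasks.sleep': {'queue': 'sleep'},
}
# Then start a dedicated worker per queue, e.g.:
#   celery worker -A proj -Q smile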
2
3
0
I have some independent tasks which I am currently putting into different/independent workers. To make this easy to understand, I will walk you through an example. Let's say I have three independent tasks, namely sleep, eat and smile. A task may need to work under different celery configurations, so I think it is better to separate each of these tasks into different directories with different workers. Some tasks may be required to run on different servers. I am planning to add some more tasks in the future, and each of them will be implemented by a different developer. Under these conditions, there is more than one worker associated with each individual task. Now, here is the problem and my question: when I start three smile tasks, one of them will be fetched by smile's worker and carried out, but the next task will be fetched by eat's worker and will never be carried out. So, what is the accepted, most common pattern? Should I send each task to a different queue, with each worker listening on its own queue?
celery tasks, workers and queues organization
0
0
0
898
22,958,634
2014-04-09T09:43:00.000
1
0
0
1
python,multithreading,asynchronous,rabbitmq,celery
24,001,208
2
true
0
0
The answer depends on a couple of things that should be taken into consideration:
Does the order of commands need to be preserved? If so, the best approach is some sort of command pattern, with each command serialized as a message, so that each fetched/consumed message can be executed in its order in a single place in your application. If that's not an issue for you, you can play with a topic exchange, publishing different message types into a single exchange and having different workers receive the messages by a predefined pattern. This, by the way, will let you easily add another task, let's say "drink", without changing a line in the already existing transport topology or already existing workers.
Are you planning to scale queues across different machines to increase throughput? In case you have very intense task traffic (in terms of frequency), it may be worth creating a different queue for each task type, so that later, when you grow, you can place each one on a different node in the rabbit cluster.
2
3
0
I have some independent tasks which I am currently putting into different/independent workers. To make this easy to understand, I will walk you through an example. Let's say I have three independent tasks, namely sleep, eat and smile. A task may need to work under different celery configurations, so I think it is better to separate each of these tasks into different directories with different workers. Some tasks may be required to run on different servers. I am planning to add some more tasks in the future, and each of them will be implemented by a different developer. Under these conditions, there is more than one worker associated with each individual task. Now, here is the problem and my question: when I start three smile tasks, one of them will be fetched by smile's worker and carried out, but the next task will be fetched by eat's worker and will never be carried out. So, what is the accepted, most common pattern? Should I send each task to a different queue, with each worker listening on its own queue?
celery tasks, workers and queues organization
1.2
0
0
898
22,961,926
2014-04-09T12:03:00.000
1
0
0
0
python,opengl,glut,antialiasing,qglwidget
22,969,282
1
true
0
1
Qt is an application framework that implements the GUI event loop. GLUT is a toolkit that likewise implements the GUI event loop. There can be only one GUI event loop in a process; hence Qt and GLUT cannot be mixed. Just configure the QGLWidget with multisampling: create a QGLFormat instance, call setSampleBuffers(True) on it, and pass it to the QGLWidget constructor.
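A sketch with PyQt4 (the sample count is my choice):
from PyQt4 import QtGui
from PyQt4.QtOpenGL import QGLWidget, QGLFormat

app = QtGui.QApplication([])
fmt = QGLFormat()
fmt.setSampleBuffers(True)  # request a multisampled framebuffer
fmt.setSamples(4)           # e.g. 4x MSAA
widget = QGLWidget(fmt)
widget.show()
app.exec_()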
1
1
0
I have a QGLWidget in which openGL objects are created. I want to get better anti-aliasing than I have now by using glutInitDisplayMode(GLUT_DOUBLE | GLUT_MULTISAMPLE | GLUT_DEPTH). How can I impliment this function (in python)?
How can I use glutInitDisplayMode() in a QGLWidget, so I can get anti-aliasing?
1.2
0
0
238
22,964,033
2014-04-09T13:27:00.000
2
0
0
0
python-2.7,sqlite,max
22,964,138
1
true
0
0
That's a compile-time parameter for SQLite itself. As in, you'll need to recompile the SQLite library in order to change it. Nothing you can do in Python will be able to overcome this.
1
1
0
So I have a sqlite3 db, which I access from Python (2.7), where I would like to store more than the 2,000 columns allowed by default. I understand that there is a setting or command, SQLITE_MAX_COLUMN, which I can alter so that my database can store up to ~32,000 columns. My question is how in practice I set the maximum number of columns to, for example, 30,000: what is the specific code that I should run? Hope my question is clearly stated. Thanks
How to actually change the maximum number of columns in SQLITE
1.2
1
0
1,019
22,967,718
2014-04-09T15:47:00.000
2
0
1
0
python,batch-file,python-3.x,docx
22,971,672
2
true
0
0
You should be using Visual Studio 2010, as Python 3.3 was built with it, and therefore satisfies all dependencies needed.
1) Install VS10
2) SET VS100COMNTOOLS=C:\Program Files\Microsoft Visual Studio 10.0\Common7\Tools (assuming that the path is still the same?)
1
0
0
When running pip install python-docx I encounter the error message error: Unable to find vcvarsall.bat. These are basically the two solutions: 1) Install VS2008 2) SET VS90COMNTOOLS=C:\Program Files\Microsoft Visual Studio 9.0\Common7\Tools I've done both and the error persists. Does anyone have another solution?
vcvarsall.bat error through pip install python-docx
1.2
0
0
1,231
22,967,882
2014-04-09T15:56:00.000
0
0
1
0
python,upload,google-docs
22,967,914
2
false
0
0
You could open the .csv using Excel; then it knows it's a CSV (comma-delimited file), but you can set other things to 'delimit' the file by, such as spaces etc. Edit: Sorry, should've mentioned: don't open the file using the 'open with' method; open Excel first, then open the file from within Excel. This should open the 'Text Import Wizard', where you can choose what to delimit the file with, such as tab, semicolon, comma, space etc.
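If you would rather rewrite the file from Python before uploading it, the stdlib csv module can re-delimit it (file names are placeholders):
import csv

with open('in.csv', 'rb') as src, open('out.csv', 'wb') as dst:
    writer = csv.writer(dst, delimiter=';')
    for row in csv.reader(src, delimiter=','):
        writer.writerow(row)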
1
0
0
I'm using Python, and with the gdata library I can upload a .csv file, but the delimiter stays as the default, which is a comma (","). What can I do to change the delimiter to, for example, ";"? What I want is to change the delimiter of the file being uploaded, from Python. I don't want to change each "," to ";" in the data; I want to change the delimiter itself.
How to upload a .csv file changing the delimiter?
0
0
0
108
22,969,333
2014-04-09T17:06:00.000
10
0
1
0
python,ipython-notebook
38,714,686
2
false
0
0
One workaround is to save the notebook as HTML. The rendered file will have the CSS embedded. Open the HTML in a text editor, do a search for @media print, and delete the offending !important;color:#000. Save, open the file in a browser, and print. It's not ideal, but you don't have to go digging around changing the CSS and recompiling it in your site-packages. Just in case someone is looking for an easier workaround.
1
7
0
I'm using Windows Python 2.7.6 and iPython 2.0.0. When I view a notebook in the browser, I get gorgeous color output with syntax highlighting, etc. As soon as I either do a print preview or print it, it becomes grayscale and loses the syntax highlighting. How can I print keeping the color and highlighting?
Printing iPython Notebook preview in color?
1
0
0
5,092
22,969,365
2014-04-09T17:08:00.000
1
1
0
1
python,celery
23,090,632
1
true
0
0
The solution for me was to restart redis after the time update, and also restart celerybeat. That combination seems to work.
1
1
0
I'm trying to test out some periodic tasks I'm running in Celery, which are supposed to run at midnight of the first day of each month. To test these, I have a cron job running every few minutes which bumps the system time up to a few minutes before midnight on the last day of the month. When the clock strikes midnight (every few minutes), the tasks are not run. All the times are UTC, and celery is set to UTC mode. Celery itself is working fine, I can run the tasks manually. What might be going on here? Also, how does celery keep track of the system time for its scheduling, how does it handle a system time update? Could it be that celery's time and the system time get out of sync somehow? This is Celery 3.1.0 with redis as broker/backend
Celery periodic tasks: testing by modifying system time
1.2
0
0
283
22,969,787
2014-04-09T17:28:00.000
0
0
1
0
python,synchronization,python-multithreading
22,995,953
1
false
0
0
I would check whether the connection is established after any thread acquires the semaphore. If it's not, then re-establish it; otherwise proceed as usual.
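One common shape for this, sketched with made-up names: guard the reconnect with a lock and re-check the connection state once inside, so threads that were blocked waiting do not reconnect again:
import threading

lock = threading.Lock()
conn = None
conn_generation = 0

def reconnect(failed_generation):
    """Re-establish the shared connection at most once per failure."""
    global conn, conn_generation
    with lock:
        # Only the first thread to arrive actually reconnects; threads that
        # were blocked on the lock see the generation has already advanced.
        if conn_generation == failed_generation:
            conn = establish_connection()  # hypothetical connect routine
            conn_generation += 1
    return conn

# Each worker records conn_generation when it grabs conn, and on an exception
# calls reconnect(recorded_generation) instead of reconnecting unconditionally.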
1
0
0
I am trying to figure out the correct way to handle the following scenario with synchronization in Python. Say there is a shared resource, connection, which is used by 8 threads to issue commands. Occasionally the connection goes stale and throws exceptions, for which I added an exception handling routine that can re-establish the connection. The issue is that when the connection goes stale, all 8 threads will get the exceptions. To solve this, I can add a semaphore such that only one thread will try to re-establish the connection at a time, but even this has an issue. If thread 1 is the first to acquire the semaphore, then threads 2-8 will soon be blocked on that semaphore when they encounter exceptions. Thread 1 will get a fresh connection, which threads 2-8 can use successfully, and will release the semaphore. At this point, what I would like is to have threads 2-8 begin processing again; however, since they were previously blocked on the semaphore, thread 2 now becomes unblocked and tries to re-establish the connection again, which thread 1 is now already trying to use. This can result in a cascading problem where threads keep trying to use the connection while other threads are trying to re-establish it. Is there a standard paradigm for dealing with a shared resource such as this?
Shared resource re-establishment between threads
0
0
0
46
22,976,981
2014-04-10T01:25:00.000
4
0
0
0
python,django,memory-leaks,daemon
23,063,519
1
true
1
0
After running a debugger: indeed, reset_queries() is required for a non-web Python script that uses Django to make queries. For every query made in the while loop, I did find its string representation appended to one of the queries lists in connections.all(), even when DEBUG was set to False.
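So the loop ends up looking something like this (do_work is a stand-in for the real model queries):
import time
from django import db

while True:
    do_work()           # your model queries here (placeholder)
    db.reset_queries()  # drop the accumulated query log
    time.sleep(5)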
1
3
0
I have a script running continuously (using a for loop and time.sleep). It performs queries on models after loading Django. Debug is set to False in Django settings. However, I have noticed that the process will eat more and more memory. Before my time.sleep(5), I have added a call to django.db.reset_queries(). The very small leak (a few K at a time) has come to an almost full stop, and the issue appears to be addressed. However, I still can't explain why this solves the issue, since when I look at what reset_queries does, it seems to clear a list of queries located in each of connections.all().queries. When I try to output the length of these, it turns out to be 0. So the reset_queries() method seems to clear lists that are already empty. Is there any reason this would still work nevertheless? I understand reset_queries() is run when using mod wsgi regardless of whether DEBUG is True or not. Thanks,
Is django.db.reset_queries required for a (nonweb) script that uses Django when DEBUG is False?
1.2
1
0
801
22,978,833
2014-04-10T04:44:00.000
0
0
0
1
python,pox,openflow
43,923,014
1
false
0
0
I think this question is old, but you can do this using the host_tracker module. Have a look at the host_tracker module and the gephi_topo module under misc to see the code to extract such information on the PacketIn event.
1
3
0
I am trying to write a Pox controller using python. The environment is set up using Mininet and the switch type is ovsk (open vswitch). For each individual switch, some of the ports are connected to hosts, some of them are connected with the other peer switches, some might connected to the controller, or routers. I can use "sh ovs-ofctl show " in mininet to get the openflow port number mapping with interface name. My question is: in the Pox python code, how can I check which ports on a switch are connected to host and which ones are connected to peer switches, controllers or routers?
How to check which ports are connected to host in Mininet using open vswitch and a Pox controller?
0
0
0
2,310
22,980,487
2014-04-10T06:46:00.000
0
0
0
0
python,arrays,numpy,floating-accuracy,floating-point-conversion
23,002,130
1
false
0
0
If you're working with large arrays, be aware of potential overflow problems! Changing from 32-bit to 64-bit floats in this instance avoids an (unflagged, as far as I can tell) overflow that led to the anomalous mean calculation.
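As an alternative to recasting the whole array, ndarray.mean accepts an accumulator dtype; whether the plain float32 mean goes wrong depends on your NumPy version and data, but on the affected setup this is the cheap fix:
import numpy as np

a = np.full((1872, 128, 256), 300, dtype=np.float32)
print(a.mean())                   # may be wrong with a float32 accumulator
print(a.mean(dtype=np.float64))   # accumulate in 64-bit instead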
1
2
1
I have an input array, which is a masked array. When I check the mean, I get a nonsensical number: less than the reported minimum value! So, raw array: numpy.mean(A) < numpy.min(A). Note A.dtype returns float32. FIX: A3=A.astype(float). A3 is still a masked array, but now the mean lies between the minimum and the maximum, so I have some faith it's correct! Now for some reason A3.dtype is float64. Why?? Why did that change it, and why is it correct at 64 bit and wildly incorrect at 32 bit? Can anyone shed any light on why I needed to recast the array to accurately calculate the mean? (with or without numpy, it turns out). EDIT: I'm using a 64-bit system, so yes, that's why recasting changed it to 64bit. It turns out I didn't have this problem if I subsetted the data (extracting from netCDF input using netCDF4 Dataset), smaller arrays did not produce this problem - therefore it's caused by overflow, so switching to 64-bit prevented the problem. So I'm still not clear on why it would have initially loaded as float32, but I guess it aims to conserve space even if it is a 64-bit system. The array itself is 1872x128x256, with non-masked values around 300, which it turns out is enough to cause overflow :)
Why is the mean smaller than the minimum and why does this change with 64bit floats?
0
0
0
226
22,985,483
2014-04-10T10:33:00.000
0
0
0
0
python,forms,plone
22,986,589
3
false
1
0
One approach is to create a browser view that accepts and retrieves JSON data and then just do all of the form handling in custom HTML. The JSON could be stored in an annotation against the site root, or you could create a simple content type with a single field for holding the JSON and create one per record. You'll need to produce your own list and item view templates, which would be easier with the item-per-JSON-record approach, but that's not a large task. If you don't want to store it in the ZODB, then pick whatever file store you want - like shelf - and dump it there instead.
1
2
0
I need to store anonymous form data (string, checkbox, FileUpload, ...) for a conference registration site, but ATContentTypes seems a little bit oversized to me. Is there a lightweight alternative to save the inputs? SQL and PloneFormGen are not an option. I need to list, view and edit the data inputs in the backend... Plone 3.3.6, Python 2.4. Thanks
Plone store form inputs in a lightweight way
0
1
0
153
22,988,970
2014-04-10T13:06:00.000
0
0
1
0
python,function
22,989,116
2
false
0
0
The answer is yes: you can call a function above the point in the file where it is defined, as long as the call doesn't actually execute until after the definition has run (for example, calls made inside other functions, with the top-level call at the bottom of the file).
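A tiny example of the working pattern:
def main():
    greet()        # fine: greet exists by the time main() is called

def greet():
    print("hello")

main()             # works; calling greet() at the very top of the file would fail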
1
0
0
I'm wondering if I can call a function in python above its actual definition. I want to put all of my functions at the end of the source code, but I am not sure if this will work. Thanks!
Do I have to define a function before I call it?
0
0
0
121
22,989,689
2014-04-10T13:38:00.000
1
0
0
0
python,django,foreign-key-relationship
22,990,016
2
false
1
0
There are (at least) two ways to accomplish it:
More elegant solution: use a TicketProfile class which has a one-to-one relation to Ticket, and put the Client foreign key into it.
Hacky solution: use a many-to-many relation, and manually edit the automatically created table to make ticket_id unique.
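A sketch of the more elegant option (field names, and the helpdesk.models import path, are assumptions):
from django.db import models
from helpdesk.models import Ticket

class Client(models.Model):
    name = models.CharField(max_length=100)

class TicketProfile(models.Model):
    ticket = models.OneToOneField(Ticket)
    client = models.ForeignKey(Client, related_name='tickets')

# client.tickets.all() then lists the profiles (and their tickets) per client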
1
5
0
My django project uses the django-helpdesk app. This app has a Ticket model. My app has a Client model, which should have a one-to-many relationship with Ticket, so I could, for example, list all tickets concerning a specific client. Normally I would add models.ForeignKey(Client) to Ticket, but it's an external app and I don't want to modify it (future update problems etc.). I would have no problem with ManyToMany or OneToOne, but I don't know how to do it with ManyToOne (many tickets from the external app to one Client from my app).
How to add many to one relationship with model from external application in django
0.099668
0
0
564
22,992,433
2014-04-10T15:30:00.000
3
1
1
0
python,scala,functional-programming
52,065,113
11
false
0
0
A list that happens to always be of length zero or one fulfills some of the same goals as optional/maybe types. You won't get the benefits of static typing in Python, but you'll probably get a run-time error even on the happy path if you write code that tries to use the "maybe" without explicitly "unwrapping" it.
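For instance (a toy example of mine):
def find_user(users, name):
    # Return a "maybe": a list of zero or one results
    return [u for u in users if u == name][:1]

match = find_user(["ann", "bob"], "bob")
for user in match:      # loop body runs only if a value is present
    print(user)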
1
46
0
I really enjoy using the Option and Either monads in Scala. Are there any equivalent for these things in Python? If there aren't, then what is the pythonic way of handling errors or "absence of value" without throwing exceptions?
Is there a Python equivalent for Scala's Option or Either?
0.054491
0
0
21,367
22,992,857
2014-04-10T15:48:00.000
1
0
0
0
python,linux,apache,mod-wsgi
23,104,951
1
false
1
0
I think I figured it out. I needed to load the module and define the VirtualHost in the same include file. I was trying to load it in the first include file and define the VirtualHost in the second. Putting them both in one file kept the error from happening.
1
0
0
Folks, I'm very new to coding and Python. This is my second Stack question ever. Apologies if I'm missing the obvious. But I've researched this and am still stuck. I've been trying to install and use mod_wsgi on CentOS 6.5 and am getting an error when trying to add a VirtualHost to Apache. The mod_wsgi install seemed to go fine and my Apache status says: Server Version: Apache/2.2.26 (Unix) mod_ssl/2.2.26 OpenSSL/1.0.1e-fips DAV/2 mod_wsgi/3.4 Python/2.6.6 mod_bwlimited/1.4 So, it looks to me like mod_wsgi is installed and running. I have also added this line to my pre-main include file for httpd.conf: LoadModule wsgi_module modules/mod_wsgi.so (I have looked, and mod_wsgi is in apache/modules.) And I have restarted Apache several times. The error comes when I try to add a VirtualHost to any of the include files for httpd.conf. I always get an error message that says: Invalid command 'WSGIScriptAlias', perhaps misspelled or defined by a module not included in the server configuration If I try to use a VirtualHost with a WSGIDaemonProcess reference, I get a similar error message about WSGIDaemonProcess. From reading on Stack and other places, it sounds like I don't have mod_wsgi installed, or I don't have the Apache config file loading it, or that I haven't restarted Apache since doing those things. But I really think I have taken all of those steps. What am I missing here? Thanks! Marc :-)
mod_wsgi Error on CentOS 6.5
0.197375
1
0
1,164
22,993,206
2014-04-10T16:02:00.000
0
0
1
0
python,emacs
22,996,440
3
false
0
0
Open a newline with C-j; you should get the indentation.
2
1
0
I'm using emacs 24.3 and Ubuntu 12.04 LTS. How do I make emacs automatically indent lines in Python, like in IDLE? Currently, it does not do that. Also, in general, how would I do this for any programming language, say, Java or c++?
How to make emacs automatically indent Python code?
0
0
0
4,538
22,993,206
2014-04-10T16:02:00.000
3
0
1
0
python,emacs
23,000,155
3
false
0
0
Try electric-indent-mode. It will be enabled by default in Emacs 24.4. But note that the version in 24.3 probably doesn't work too well in python-mode buffers.
2
1
0
I'm using emacs 24.3 and Ubuntu 12.04 LTS. How do I make emacs automatically indent lines in Python, like in IDLE? Currently, it does not do that. Also, in general, how would I do this for any programming language, say, Java or c++?
How to make emacs automatically indent Python code?
0.197375
0
0
4,538
22,993,244
2014-04-10T16:04:00.000
0
0
0
1
python,windows,interrupt
22,993,707
2
false
0
0
Python does not seem to have an exception for this case. The closest would be SystemExit; however, that does not actually capture the interrupt you're looking for. Windows seems to actually send Ctrl+C before killing the process when you close a terminal; however, capturing KeyboardInterrupt doesn't seem to work either. At this point you might want to look into the signal module.
1
1
0
I presume closing a terminal window (or a terminal window embedded in an IDE) sends some kind of OS interrupt signal to the process running in the terminal. How can I find out what this signal is? I am looking for a way to capture the interrupt, run some clean up, and then abort. I am using Python and Windows.
What OS interrupt comes from closing a terminal tab?
0
0
0
365
22,995,746
2014-04-10T18:09:00.000
4
0
0
0
python,python-2.7,scrapy
22,997,392
3
false
1
0
Every spider has a name in its file, given by name = "yourspidername", and when you call scrapy crawl yourspidername, it will crawl only that spider. You would then have to give another command to run the other spider: scrapy crawl yourotherspidername. The other way is to mention all the spiders in the same command, like scrapy crawl yourspidername,yourotherspidername,etc. (this method is not supported by the newer versions of scrapy).
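For reference, newer Scrapy versions (0.24+) can also run several spiders in one process from a script; the spider classes and project paths here are made up:
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

from myproject.spiders.first import FirstSpider
from myproject.spiders.second import SecondSpider

process = CrawlerProcess(get_project_settings())
process.crawl(FirstSpider)
process.crawl(SecondSpider)
process.start()  # blocks until both spiders finish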
1
3
0
I'm beginner in Python & Scrapy. I've just create a Scrapy project with multiple spiders, when running "scrapy crawl .." it runs only the first spider. How can I run all spiders in the same process? Thanks in advance.
How to run multiple spiders in the same process in Scrapy
0.26052
0
0
2,104
22,996,224
2014-04-10T18:32:00.000
7
0
1
0
python,amazon-ec2,pyenchant
29,129,461
1
false
0
0
I had to run yum install aspell-en enchant-aspell before I could get it working. Notice there is no space in "aspell-en". enchant-aspell includes "Integration with aspell for libenchant", allowing enchant and aspell to talk to each other. Hope this helps.
1
2
0
I'm trying to run the command d = enchant.Dict('en_US') but am getting an error message "enchant.errors.DictNotFoundError: Dictionary for language 'en_US' could not be found" I've run the command sudo yum install aspell -en and tried setting the param path "enchant.set_param("enchant.aspell.dictionary.path","/usr/lib64/aspell-0.60")" to no avail. Any suggestions?
EC2 Enchant Can't Find Dictionary en_US
1
0
0
2,694
22,996,507
2014-04-10T18:48:00.000
4
0
0
0
python,arrays,numpy,arcgis,arcpy
22,996,581
2
true
0
0
If I understand your description right, you should just be able to do B[A].
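For example (arrays are illustrative):
import numpy as np

A = np.array([[True, False], [False, True]])
B = np.array([[10, 20], [30, 40]])
values = B[A]             # -> array([10, 40])
rows, cols = np.where(A)  # the positions themselves, if needed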
1
2
1
I have two raster files which I have converted into NumPy arrays (arcpy.RasterToNumpyArray) to work with the values in the raster cells with Python. One of the raster has two values True and False. The other raster has different values in the range between 0 to 1000. Both rasters have exactly the same extent, so both NumPy arrays are build up identically (columns and rows), except the values. My aim is to identify all positions in NumPy array A which have the value True. These positions shall be used for getting the value at these positions from NumPy array B. Do you have any idea how I can implement this?
How to search in one NumPy array for positions for getting at these position the value from a second NumPy array?
1.2
0
0
143
22,999,993
2014-04-10T22:08:00.000
1
0
1
0
python,multithreading,model-view-controller,python-multithreading
23,000,219
4
false
0
1
You will use multithreading to perform parallel or background tasks that you don't want the main thread to wait on, that you don't want to hang the GUI while they run, or that shouldn't interfere with user interactivity or some other priority task. Most applications today don't use multithreading, or use very little of it. Even when they do use multiple threads, it's usually because of libraries the final programmer is using, and he isn't even aware that multithreading is happening as he develops his application. Even major software like AutoCAD uses very little multithreading. It's not that it's poorly made, but multithreading has very specific applications. For instance, it is pointless to allow user interaction while the project the user wants to work on is still loading. Software designed to interact with a single user will hardly need it. Where you can see multithreading play a really important role is in servers, where a single application can serve requests from thousands of users without them interfering with each other. In this scenario the easiest way to make sure everyone is happy is to create a new thread for each request.
4
1
0
I am creating an application in Python that uses SQLite databases and wxPython. I want to implement it using MVC in some way. I am just curious about threading. Should I be doing this in any scenario that uses a GUI? Would this kind of application require it?
When should I be considering using threading
0.049958
0
0
106
22,999,993
2014-04-10T22:08:00.000
1
0
1
0
python,multithreading,model-view-controller,python-multithreading
23,000,279
4
false
0
1
Actually, GUIs are typically single-threaded implementations where a single thread (called the UI thread) keeps polling for events and executes them in the order they occur. Regarding the main question, consider this scenario: at the click of a button you want to do something time-consuming that takes, say, 5-10 seconds or more. You have got two options: 1) Do that operation in the main UI thread itself. This will freeze the UI for that duration and the user will not be able to interact with it. 2) Do that operation in a separate thread that on completion just notifies the main UI thread (in case the UI thread needs to make any UI updates based on the result of the operation). This option will not block the UI thread and the user can continue to use the application. However, there will be situations where you do not want the user to be using the application while something happens. In such cases you can usually still use a separate thread, but block the UI using some sort of overlay/progress indicator combination.
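A minimal wxPython sketch of option 2 (names are mine); wx.CallAfter hands the result back to the UI thread safely:
import threading
import wx

def long_task(label):
    result = do_expensive_work()          # placeholder for the slow part
    wx.CallAfter(label.SetLabel, result)  # update the widget on the UI thread

# From a button handler:
#   threading.Thread(target=long_task, args=(self.label,)).start()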
4
1
0
I am creating an application in Python that uses SQLite databases and wxPython. I want to implement it using MVC in some way. I am just curious about threading. Should I be doing this in any scenario that uses a GUI? Would this kind of application require it?
When should I be considering using threading
0.049958
0
0
106
22,999,993
2014-04-10T22:08:00.000
0
0
1
0
python,multithreading,model-view-controller,python-multithreading
23,000,020
4
true
0
1
Almost certainly you already are... a lot of wx is already driven by an asynchronous event loop. That said, you should use wx.PubSub for communication within an MVC-style wx application, but it is unlikely that you will need to implement any kind of threading (you get it for free, practically). A few good places to use Python threading (locked by the GIL):
serial communication
socket servers
A few places to use multiprocessing (still locked by the GIL, but at least it sends work to different cores):
bitcoin miners
anything that requires massive amounts of data processing that can be parallelized
There are lots more places to use them; however, most GUIs are already fairly asynchronously driven by events (not entirely true, but close enough), and sqlite3 queries definitely should be executed one at a time from the same thread (in fact sqlite breaks horribly if you try to write to it from two different threads). This is likely all a gross oversimplification.
4
1
0
I am creating an application in Python that uses SQLite databases and wxPython. I want to implement it using MVC in some way. I am just curious about threading. Should I be doing this in any scenario that uses a GUI? Would this kind of application require it?
When should I be considering using threading
1.2
0
0
106
22,999,993
2014-04-10T22:08:00.000
1
0
1
0
python,multithreading,model-view-controller,python-multithreading
23,000,216
4
false
0
1
One thing I learned from javascript/node.js is that there is a difference between asynchronous programming and parallel programming. In asynchronous programming you may have things running out of sequence, but any given task runs to completion before something else starts running. That way you don't have to worry about synchronizing shared resources with semaphores and locks and things like that, which would be an issue if you had multiple threads running in parallel that either run simultaneously or might get preempted, hence the need for locks. Most likely you are doing some sort of asynchronous code in a GUI environment, and there isn't any need for you to also write parallel multi-threaded code.
4
1
0
I am creating an application in Python that uses SQLite databases and wxPython. I want to implement it using MVC in some way. I am just curious about threading. Should I be doing this in any scenario that uses a GUI? Would this kind of application require it?
When should I be considering using threading
0.049958
0
0
106
23,000,998
2014-04-10T23:36:00.000
0
0
0
1
python,google-app-engine
23,025,323
2
false
1
0
Another way to solve this, I found, is to use memcache. It's super easy, though it should probably be noted that memcache could be cleared at any time, so NDB is probably a better solution. Set the timestamp: memcache.set("timestamp", current_timestamp) Then, to read the timestamp: memcache.get("timestamp")
1
0
0
I am running a Python script on Google's AppEngine. The Python script is very basic. Every time the script runs I need it to update a timestamp SOMEWHERE so I can record and keep track of when the script last ran. This will allow me to do logic based on when the script last ran, etc. At the end of the script I'll update the timestamp to the current time. Using Google's NDB seems to be overkill for this, but it also seems to be the only way to store ANY data in AppEngine. Is there a better/easier way to do what I want?
Easiest way to store a single timestamp in appengine
0
0
0
651
23,001,932
2014-04-11T01:26:00.000
4
0
0
0
python,algorithm,count
23,001,960
1
true
0
0
Since all 1's come before the 0's, you can find the index of the first 0 using Binary search algorithm (which is log N) and you just have to do this for all the N rows. So the total complexity is NlogN.
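A sketch of that in plain Python: a manual binary search for the first 0 in each row (rows assumed to be 1's followed by 0's):
def count_ones(matrix):
    total = 0
    for row in matrix:
        lo, hi = 0, len(row)      # hi is one past the last index
        while lo < hi:            # find the index of the first 0
            mid = (lo + hi) // 2
            if row[mid] == 1:
                lo = mid + 1
            else:
                hi = mid
        total += lo               # lo == number of leading 1's in the row
    return total

print(count_ones([[1, 1, 0], [1, 0, 0], [0, 0, 0]]))  # 3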
1
1
1
Assuming that in each row of the array all 1's come before the 0's, how would I be able to come up with an O(n log n) algorithm to count the 1's in the array? I think first I would have to make a counter, search each row for 1's (n), and add that to the counter. Where does the "log n" part come into play? I read that a recursive algorithm to do this has O(n log n) complexity, but I'm not too sure how I would do this. I know how to do this in O(n^2) with for loops. Pseudo code or hints would be helpful! Thank you
Counting 1's in a n x n array of 0's and 1's
1.2
0
0
100
23,014,432
2014-04-11T13:46:00.000
0
0
1
0
python,multithreading,performance,multiprocessing
23,014,920
3
false
0
0
In Python you want multiprocessing over multithreading for this. Threads don't do well in Python because of the GIL.
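A sketch along those lines (function and data names are placeholders):
from multiprocessing import Pool

def scrape_campus(campus):
    # fetch semesters, departments and classes for one campus
    return fetch_all_data(campus)      # your existing requests code

if __name__ == '__main__':
    campuses = list_campuses()         # placeholder
    pool = Pool(processes=8)
    results = pool.map(scrape_campus, campuses)
    pool.close()
    pool.join()
pool.map collects each worker's return value into a single ordered list, which also answers the consolidation question without touching shared global state.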
1
0
0
I've written a script that pulls data from my school's website and I'm having some trouble with execution time. There are over 20 campuses, each with data for three semesters. The script looks up those school names, then the semesters available for each school, then the subjects/departments that are offering classes each semester. Then the script searches for the classes per department and then I do things with that data. I timed the execution of the script on just one campus, and it ran for over three minutes. When I ran it for all 24 campuses it took over an hour. I'm using the "requests" library, which runs each HTTP request in synchronously. I'm using the "requests" library, primarily because it handles sessions nicely. I'm looking for ways to bring down the time the script takes to run, by making the various requests for each semester run in parallel. I suspect that if I run three semesters asynchronously, then each school should take a minute, instead of three. Then, I can run all schools in parallel and achieve the same minute for all of the data. A minute is a lot less than an hour and a quarter! Am I wrong in my guess that multithreading/processing will bring down the execution time so drastically? What Python libraries should I be using for threads or processes? Once I've got each school being processed on a thread, how do I consolidate the data from all the schools into one place? I know that it's considered poor practice for threads to alter global state, but what's the alternative here?
Performance Improvements with Processes or Threads
0
0
1
67
23,015,194
2014-04-11T14:18:00.000
1
0
1
0
python,ipython
23,015,230
3
false
0
0
Why not just execute the script? You can do it as follows from the command line (cmd on Windows or sh on Unix): python movie_analysis.py Or instead, if you want to use selected functions, methods and classes (assuming you don't have any module-level code that would get executed immediately on import), you can run import movie_analysis (note: no .py extension) from within the Python/IPython interpreter.
1
1
0
I am a Python newbie, and I've written many lines of code in a Python script. I initially started copying and pasting each line into the IPython console, and I feel like it is taking forever. Is there a more efficient way to do this? Let's say I have a script called "movie_analysis.py" saved in my current working directory. How can I ask the program to read in the file and then execute every line in the script one after another (i.e. in the order they were written)? Thanks in advance!!!
Execute Python Script Line by Line
0.066568
0
0
4,588
23,019,076
2014-04-11T17:33:00.000
4
0
0
0
python,scikit-learn
23,028,931
1
true
0
0
All features are continuous for gradient boosting (and practically all other estimators). Tree-based models should be able to learn splits in categorical features that are encoded as "levels" (1, 2, 3) rather than dummy variables ([1, 0, 0], [0, 1, 0], [0, 0, 1]), but this requires deep trees instead of stumps and the exact ordering may still affect the outcome of learning.
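To illustrate the two encodings with scikit-learn (data is made up):
import numpy as np
from sklearn.preprocessing import LabelEncoder, OneHotEncoder

colors = np.array(['red', 'green', 'blue', 'green'])
levels = LabelEncoder().fit_transform(colors)   # ordinal levels, e.g. [2, 1, 0, 1]
# Dummy variables instead: one binary column per category
dummies = OneHotEncoder().fit_transform(levels.reshape(-1, 1)).toarray()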
1
1
1
Does scikit's GradientBoostingRegressor make any assumptions on the feature's type? Or does it treat all features as continuous? I'm asking because I have several features that are truly categorical that I have encoded using LabelEncoder().
Scikit's GBM assumptions on feature type
1.2
0
0
138
23,020,031
2014-04-11T18:27:00.000
0
0
1
0
python,module,sublimetext3
23,020,211
1
false
0
0
SublimeCodeIntel will work for any module, as long as it's indexed. After you first install the plugin, indexing can take a while, depending on the number and size of third-party modules you have in site-packages. If you're on Linux and have multiple site-packages locations, make sure you define them all in the settings. I'd also recommend changing "codeintel_max_recursive_dir_depth" to 25, especially if you're on OS X, as the default value of 10 may not reach all the way into deep directory trees. Make sure you read through all the settings, and modify them to suit your needs. The README also contains some valuable information for troubleshooting, so if the indexing still isn't working after a while, and after restarting Sublime a few times, you may want to delete the database and start off fresh.
1
2
0
I've been looking around here but I haven't found what I was searching for, so I hope it isn't already answered here. If it is, I'll delete my question. I was wondering if Sublime Text can suggest you functions from a module when you write "module.function". For example, if I write "import PyQt4", then Sublime Text suggests "PyQt4.QtCore" when I write "PyQt4.Q". For now, I've installed "SublimeCodeIntel", and it does just that, but only for some modules (like math or urllib). Is it possible to configure it for any module? Or can you recommend any other plugin? Thanks for reading! PS: also, would it be possible to configure it for my own modules? I mean, for example, modules that I have written that are in the same folder as the current file I'm editing.
Sublime Text 3 - Module Functions Suggestions?
0
0
0
1,143
23,023,294
2014-04-11T21:59:00.000
0
0
0
0
python,web-scraping,scrapy
23,269,542
1
false
1
0
One workaround is to first log in using Scrapy (using FormRequest) and then invoke inspect_response(response) in the parse method.
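Roughly like this (URL, form fields and spider are made up; module paths are for the Scrapy of that era):
from scrapy.http import FormRequest
from scrapy.shell import inspect_response
from scrapy.spider import Spider  # scrapy.spiders in later versions

class LoginSpider(Spider):
    name = 'login'
    start_urls = ['http://example.com/login']

    def parse(self, response):
        return FormRequest.from_response(
            response,
            formdata={'username': 'user', 'password': 'pass'},
            callback=self.after_login)

    def after_login(self, response):
        # Drops you into an interactive shell with the authenticated
        # response loaded, so you can try out xpaths
        inspect_response(response, self)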
1
1
0
Is there a way to pass formdata in a scrapy shell? I am trying to scrape data from an authenticated session, and it would be nice to check xpaths and so on through a scrapy shell.
Pass username/password (Formdata) in a scrapy shell
0
0
1
407
23,023,710
2014-04-11T22:36:00.000
1
0
1
0
python,validation,anti-patterns
23,024,580
2
true
0
0
I think the parts of the validation code that are specific to one of the classes should probably be put into the class itself - maybe as a classmethod? That way the 'generic' validation code can just call obj.validate() at the appropriate time. You then don't need to import the classes from the generic validation code.
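A sketch of that shape (names are illustrative): the generic module duck-types the object instead of importing the class, so there is no cycle.
# validate.py -- generic checks only, no imports of concrete classes
def validate(obj):
    if not hasattr(obj, 'validate'):
        raise TypeError('object does not support validation')
    obj.validate()

# myclass.py
import validate

class MyClass(object):
    def __init__(self, value):
        self.value = value

    def validate(self):
        # class-specific rules live with the class
        if self.value < 0:
            raise ValueError('value must be non-negative')

    def my_method(self, other):
        validate.validate(other)   # generic entry point, no circular import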
2
0
0
I recently refactored my code to put input validation methods that are shared among several classes in their own module, validate.py. Some of these validation methods check if their input is an instance of a class, e.g. MyClass. Therefore validate.py must import MyClass so it's method is_MyClass can check if isinstance(input, MyClass). But, I want to use some validation methods from validate.py in MyClass to sanitize input to MyClass.my_method, so MyClass must import validate.py. Something tells me I just casually refactored my way into an anti-pattern. If what I'm trying to do implies circular dependencies, then I must be Doing It Wrong™. But, code reuse is a good idea. So what's the best practice for sharing validation methods in this way?
How to avoid circular dependencies in validation module
1.2
0
0
289