Dataset schema (column: type, range):
Q_Id: int64, 337 to 49.3M
CreationDate: string, length 23
Users Score: int64, -42 to 1.15k
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
Tags: string, length 6 to 105
A_Id: int64, 518 to 72.5M
AnswerCount: int64, 1 to 64
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: string, length 6 to 11.6k
Available Count: int64, 1 to 31
Q_Score: int64, 0 to 6.79k
Data Science and Machine Learning: int64, 0 to 1
Question: string, length 15 to 29k
Title: string, length 11 to 150
Score: float64, -1 to 1.2
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 8 to 6.81M
18,377,549
2013-08-22T10:11:00.000
0
0
0
1
python,html,ruby,bash,ssi
22,432,278
2
false
1
0
On your dev machine, use your browser to display the web page, and then save the "result" with an appropriate file name in an output directory. Thus, if you had mainfile.html, which executed various time/last-mod directives and which included fileA.inc and fileB.inc at appropriate places, the resulting display (and saveable HTML file) will comprise all four/five components. =dn
1
1
0
I am developing the front end code of a website which I will be handing over to some developers for them to integrate it with the backend. The site will be written in .NET but I'm developing the front end code with static HTML files (and a bit of javascript). Because the header, footer and a few other elements are the same across all pages I am using Server Side Includes in my development environment. However, every time I hand the code to the developers I need to manually replace each SSI with the actual HTML by copying and pasting. This is starting to get tedious. I have tried writing a bash script to do this but my bash knowledge is extremely limited so I have failed miserably (I'm not really sure where to start). What I tried to achieve was: Loop through all the HTML files in my project Look for an include ( <!--#include file="myfile.html"--> ) If one is found, replace the include with the HTML from the file specified in the include Keep doing this until there are no more includes and move on to the next file Does anyone know of a script that can do this, or can point me in the right direction for achieving this myself? I'm happy for it to be in any language as long as I can run it on my Mac. Thanks. EDIT It is safe to assume that all instances of <!--#include file="myfile.html"--> are on their own line.
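The steps enumerated in the question can be sketched in Python (a sketch, not a hardened tool; the include pattern and file layout are assumed from the question):

```python
import re
from pathlib import Path

# Matches lines like: <!--#include file="myfile.html"-->
INCLUDE_RE = re.compile(r'<!--#include file="(?P<file>[^"]+)"\s*-->')

def inline_includes(path, base_dir):
    """Return the file's text with every SSI include replaced by the
    contents of the referenced file, recursing for nested includes."""
    text = Path(path).read_text()

    def replace(match):
        included = Path(base_dir) / match.group("file")
        return inline_includes(included, base_dir)

    return INCLUDE_RE.sub(replace, text)
```

Run it over every .html file in the project and write the results to a hand-off directory, leaving the sources untouched.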
I need a script that searches files for SSI and replaces the include with the actual HTML
0
0
0
417
18,382,287
2013-08-22T13:46:00.000
4
0
1
1
python,windows,eclipse
18,382,626
2
false
0
0
I don't know if it is global or local (project-related). Globally, you can set the interpreter via Window menu → Preferences → PyDev → Interpreter - Python. Project-related, this can be done via right-click on the project → Properties → PyDev - Interpreter/Grammar. Have a look at both and make sure that both are set to correct values.
2
2
0
After a forced restart due to a frozen laptop (Windows 7 Pro, 32 bit), Eclipse is showing the following message: "It seems that the Python interpreter is not currently configured. How do you want to proceed?" Clicking the Auto config option and then OK, I get the Python Interpreters window with the right Name (Python), Location (c:\Program Files\Python27\python.exe) and system libs. It all looks OK, but clicking OK or Apply doesn't seem to do anything and the whole thing starts from the beginning (the message about Python not currently being configured...). I've checked my .pydevproject permissions and I have full control over the file. I also have Dropbox syncing the project files, but it has been OK for a while now. What is wrong, what should I check/do?
PyDev-Eclipse Python not configured
0.379949
0
0
7,044
18,382,287
2013-08-22T13:46:00.000
2
0
1
1
python,windows,eclipse
18,395,343
2
true
0
0
Checking the log files under workspace/.metadata shows an exception with the following message: !MESSAGE For input string: "0 (xxxx xxxxx's conflicted copy 2013-08-16)" These are files created by Dropbox due to conflicts. Eclipse was not expecting these sorts of files and raised an exception when trying to read them. Deleting all such files restored Eclipse's configuration.
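A short script can locate those conflict files for review before deleting them (the "conflicted copy" filename marker is how Dropbox names them; the directory to scan is whatever workspace you suspect):

```python
import os

def find_conflicted_copies(root):
    """Walk a directory tree and collect Dropbox conflict files,
    which carry 'conflicted copy' in their names."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if "conflicted copy" in name:
                hits.append(os.path.join(dirpath, name))
    return hits

# Inspect the list first, then delete, e.g.:
# for path in find_conflicted_copies("workspace/.metadata"):
#     os.remove(path)
```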
2
2
0
After a forced restart due to a frozen laptop (Windows 7 Pro, 32 bit), Eclipse is showing the following message: "It seems that the Python interpreter is not currently configured. How do you want to proceed?" Clicking the Auto config option and then OK, I get the Python Interpreters window with the right Name (Python), Location (c:\Program Files\Python27\python.exe) and system libs. It all looks OK, but clicking OK or Apply doesn't seem to do anything and the whole thing starts from the beginning (the message about Python not currently being configured...). I've checked my .pydevproject permissions and I have full control over the file. I also have Dropbox syncing the project files, but it has been OK for a while now. What is wrong, what should I check/do?
PyDev-Eclipse Python not configured
1.2
0
0
7,044
18,385,303
2013-08-22T15:51:00.000
0
0
0
0
python,heroku,nltk
62,639,159
4
false
1
0
You need to follow the steps below. nltk.txt needs to be present in the root folder. Add the modules you want to download, like punkt and stopwords, as separate row items. Change the line endings from Windows to UNIX. Changing the line endings is a very important step. It can easily be done in Sublime Text or Notepad++. In Sublime Text, it can be done from the View menu, then Line Endings. Hope this helps.
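The line-ending conversion can also be scripted instead of using an editor (a sketch; the nltk.txt contents below are just an example):

```python
def to_unix_line_endings(path):
    """Rewrite a text file in place with Unix (LF) line endings."""
    with open(path, "rb") as f:
        data = f.read()
    with open(path, "wb") as f:
        f.write(data.replace(b"\r\n", b"\n"))

# e.g. an nltk.txt listing the corpora, one per line:
#   punkt
#   stopwords
```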
3
13
0
Hey, I'd like to install the NLTK pos_tag on my Heroku server. How can I do so? Please give me the steps, as I'm new to the Heroku server system.
How to install NLTK modules in Heroku
0
0
0
8,697
18,385,303
2013-08-22T15:51:00.000
17
0
0
0
python,heroku,nltk
42,257,701
4
false
1
0
I just added official nltk support to the buildpack! Simply add a nltk.txt file with a list of corpora you want installed, and everything should work as expected.
3
13
0
Hey, I'd like to install the NLTK pos_tag on my Heroku server. How can I do so? Please give me the steps, as I'm new to the Heroku server system.
How to install NLTK modules in Heroku
1
0
0
8,697
18,385,303
2013-08-22T15:51:00.000
2
0
0
0
python,heroku,nltk
44,803,942
4
false
1
0
If you want to use simple functionality like pos_tag, tokenizer, stemming, etc., then you can do the following: mention nltk in requirements.txt, and mention the following modules in nltk.txt: wordnet, pros_cons, reuters, hmm_treebank_pos_tagger, maxent_treebank_pos_tagger, universal_tagset, punkt, averaged_perceptron_tagger_ru, averaged_perceptron_tagger, snowball_data, rslp, porter_test, vader_lexicon, treebank, dependency_treebank
3
13
0
Hey, I'd like to install the NLTK pos_tag on my Heroku server. How can I do so? Please give me the steps, as I'm new to the Heroku server system.
How to install NLTK modules in Heroku
0.099668
0
0
8,697
18,386,023
2013-08-22T16:24:00.000
0
0
0
0
javascript,python,web2py
18,414,025
1
true
1
0
I used a Form to achieve this. It is working quite well.
1
0
0
I want to know how I can get the content of a certain element by a dynamic id/name in embedded python code in a web2py view page. Basically I want something like: {{for task in tasks:}} ... {{=TEXTAREA(task['remark'], _name='remark'+str(task['id']), _id='remark'+str(task['id']), _rows=2)}} {{=A('OK', _class='button', _href=URL('update_remark', vars=dict(task_id=task['id'], new_remark=['remark'+str(task['id'])])))}} What I want the ['remark'+str(task['id'])] to do is get the content automatically, but obviously it won't work. I'm wondering how I can achieve this? Is there any API that can help? Thanks in advance!
How to get content of element in embedded python code in web2py view
1.2
0
1
79
18,387,093
2013-08-22T17:30:00.000
2
1
0
1
python,ubuntu
18,387,374
1
true
0
0
The pytables on my ubuntu system is 2.3.1. I think that open_file is a version 3 thing. I'm not sure where you can pick up the latest package, but you could always install the latest with pip.
1
0
0
I have recently moved from Python on Windows to Python on Ubuntu. In Windows I could just hit F5 in the IDLE editor to run the script. However, in Ubuntu I have to run the script by typing python /path/to/file.py to execute. The thing is it seems the imports within the file are not working when I run from command line. It gives me the error: NameError: global name 'open_file' is not defined This is the open_file method of Pytables. In the python file I have: from tables import * I have made the file executable and all. Appreciate your help.
Running Python script from Ubuntu terminal NameError
1.2
0
0
546
18,390,874
2013-08-22T21:08:00.000
1
0
1
0
python,csv,wxpython
18,390,934
2
false
0
0
During a heavy operation, a wx frame is "stuck" waiting for the process to finish. Your best solution is to create a worker thread and let it do the heavy job for you.
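The pattern looks roughly like this with the standard library; in wxPython you would typically post results back to the GUI with wx.CallAfter or wx.PostEvent instead of draining a queue, so treat this as a sketch of the idea only:

```python
import queue
import threading

def scan_rows(rows, results):
    """Stand-in for the heavy CSV scan: runs on a worker thread and
    reports matches through a thread-safe queue."""
    for row in rows:
        if "needle" in row:
            results.put(row)
    results.put(None)  # sentinel: the worker is done

results = queue.Queue()
rows = ["hay", "needle in hay", "more hay"]  # pretend this is the 3 GB file
worker = threading.Thread(target=scan_rows, args=(rows, results), daemon=True)
worker.start()

# The GUI thread stays responsive and collects results as they arrive
matches = []
while True:
    item = results.get()
    if item is None:
        break
    matches.append(item)
worker.join()
```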
1
0
0
I have built a csv reader. It iterates through a file and gives results based on search terms. I am reading 3 gig files. When I let it iterate through the file it works fine. But if I even touch the wxpython window after processing has begun, the app stops responding and then crashes. My best guess is I have to somehow monitor/throttle cpu usage. I have no idea how to do this or if I am on the right path.
wxpython csv reader crashing while processing
0.099668
0
0
90
18,390,935
2013-08-22T21:12:00.000
1
0
0
0
python,scapy
19,122,626
1
true
1
0
OK, at the beginning I put the fields behind the encryption in a packet and did all the encryption magic in post_build (encrypt) and pre_dissect (decrypt), but that was really tricky... so instead I created another packet (EncryptedPacket) which overloads addfield and getfield to do all the encryption stuff. This solution is much cleaner and nicer than the previous one. I will add examples later.
1
2
0
I have a protocol with encrypted fields. I want to be able when dissecting the packet, decrypt them and when building it will encrypt them (lets say I know the private\public key...). Need this for changing the fields under the encryption. What is the best way to do this with scapy... I couldn't find anything usefull.. maybe something with post_build post_dissect ?
Scapy fields under encryption
1.2
0
0
635
18,391,405
2013-08-22T21:48:00.000
2
0
0
1
python,django,wsgi,uwsgi
18,396,906
1
false
1
0
There are no problems, as each "vassal" can be configured with its own cheaper settings. In this way you can have QoS for your customers.
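A vassal configuration with its own cheaper settings might look like this (all paths and values are illustrative; check the uWSGI docs for the options your version supports):

```ini
; /etc/uwsgi/vassals/customer1.ini - one vassal under the emperor
[uwsgi]
socket = /run/uwsgi/customer1.sock
module = customer1.wsgi:application
; cheaper subsystem: keep a minimum of 1 worker, scale up to 8 on demand
processes = 8
cheaper = 1
cheaper-initial = 2
```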
1
1
0
I need to host multiple Django sites (quite a lot of sites actually) and currently I am using Apache+mod_wsgi but I want to switch to uWSGI. One of the nice features of uWSGI is cheaper mode that spawns processes as needed and shuts them down as needed as well. On the other hand, it seems that the way to make it run multiple sites is to use emperor mode. Can emperor mode be used together with cheaper subsystem? Are there any quirks/problems I should be aware of? Has anyone ever done this?
Is it possible to use both cheaper and emperor with uWSGI
0.379949
0
0
304
18,391,563
2013-08-22T22:02:00.000
4
0
0
1
python,redis,celery
25,943,620
3
false
0
0
For AMQP, this is an example: /usr/bin/celery -A app_name --broker=amqp://user:pw@host//vhost --broker_api=http://user:pw@host:host_port/api flower The broker_api is the RabbitMQ web UI endpoint, with /api appended.
1
9
0
I'm running celery and celery flower with redis as a broker. Everything boots up correctly, the worker can find jobs from redis, and the celery worker completes the jobs successfully. The issue I'm having is the Broker tab in the celery flower web UI doesn't show any of the information from Redis. I know the Redis url is correct, because it's the same URL that celeryd is using. I also know that the celery queue has information in it, because I can manually confirm that via redis-cli. I'm wondering if celery flower is trying to monitor a different queue in the Broker tab? I don't see any settings in the flower documentation to override or confirm. I'm happy to provide additional information upon request, but I'm not certain what is relevant.
Celery flower's Broker tab is blank
0.26052
0
0
6,554
18,393,842
2013-08-23T02:45:00.000
0
0
0
0
python,networkx,adjacency-list
71,715,493
6
false
0
0
Yes, you can get a k-order ego graph of a node: subgraph = nx.ego_graph(G, node, radius=k) The neighbors are then the nodes of the subgraph: neighbors = list(subgraph.nodes())
2
11
1
I have a directed graph in which I want to efficiently find a list of all K-th order neighbors of a node. K-th order neighbors are defined as all nodes which can be reached from the node in question in exactly K hops. I looked at networkx and the only function relevant was neighbors. However, this just returns the order 1 neighbors. For higher order, we need to iterate to determine the full set. I believe there should be a more efficient way of accessing K-th order neighbors in networkx. Is there a function which efficiently returns the K-th order neighbors, without incrementally building the set? EDIT: In case there exist other graph libraries in Python which might be useful here, please do mention those.
K-th order neighbors in graph - Python networkx
0
0
0
10,646
18,393,842
2013-08-23T02:45:00.000
27
0
0
0
python,networkx,adjacency-list
21,031,826
6
true
0
0
You can use: nx.single_source_shortest_path_length(G, node, cutoff=K) where G is your graph object.
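That call returns a dict mapping each node to its shortest-path distance from the source, so the K-th order neighbors are the keys whose value equals K. A stdlib-only sketch of the same idea (level-by-level BFS), for illustration:

```python
from collections import deque

def kth_order_neighbors(adj, source, k):
    """Nodes whose shortest-path distance from source is exactly k,
    for a directed graph given as an adjacency dict."""
    dist = {source: 0}
    frontier = deque([source])
    while frontier:
        node = frontier.popleft()
        if dist[node] == k:      # no need to expand past depth k
            continue
        for nxt in adj.get(node, ()):
            if nxt not in dist:
                dist[nxt] = dist[node] + 1
                frontier.append(nxt)
    return {n for n, d in dist.items() if d == k}
```

With networkx you would instead filter the dict returned by single_source_shortest_path_length for values equal to K.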
2
11
1
I have a directed graph in which I want to efficiently find a list of all K-th order neighbors of a node. K-th order neighbors are defined as all nodes which can be reached from the node in question in exactly K hops. I looked at networkx and the only function relevant was neighbors. However, this just returns the order 1 neighbors. For higher order, we need to iterate to determine the full set. I believe there should be a more efficient way of accessing K-th order neighbors in networkx. Is there a function which efficiently returns the K-th order neighbors, without incrementally building the set? EDIT: In case there exist other graph libraries in Python which might be useful here, please do mention those.
K-th order neighbors in graph - Python networkx
1.2
0
0
10,646
18,393,975
2013-08-23T03:00:00.000
0
0
1
0
python
18,394,654
3
false
0
0
If the problem is indeed to create a new password, I'll explain how to do it, but with no code: Use random.SystemRandom() if you want secure passwords. First create N characters from the set of allowed characters, where N is the length of your password. Then check that there are at least 2 digits. If there are fewer than 2 digit characters, remove 1 or 2 non-digit characters and add random digits, so that there are at least those 2 digits. Finally use shuffle() to permute your password. That way your password has the maximum entropy possible.
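Those steps might look like this (the allowed character set and length are just examples):

```python
import random
import string

def make_password(length=12, min_digits=2):
    """Generate a password containing at least min_digits digits,
    using the OS's secure RNG via random.SystemRandom."""
    rng = random.SystemRandom()
    alphabet = string.ascii_letters + string.digits
    chars = [rng.choice(alphabet) for _ in range(length)]
    # Ensure at least min_digits digits by replacing non-digits
    digits = sum(c.isdigit() for c in chars)
    while digits < min_digits:
        i = rng.randrange(length)
        if not chars[i].isdigit():
            chars[i] = rng.choice(string.digits)
            digits += 1
    rng.shuffle(chars)  # permute so digit positions aren't predictable
    return "".join(chars)
```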
1
0
0
I've been trying to make a password creator for an assessment, and one of the things needed is that if the password doesn't have at least 2 numbers in it, it will print an error asking for a password with at least two numbers. I do not know how to do this and I was just wondering if you could give me a little hint on how I could do this in code. Thanks in advance
I'm trying to make a password creator
0
0
0
118
18,397,145
2013-08-23T07:24:00.000
1
0
1
0
python,maya,mel,pymel
18,408,263
2
false
0
0
In the API it would be calling mfnTransform.setRotatePivotTranslation and setScalePivotTranslation with 'balance' turned on. There's not enough overhead to warrant a workaround - it's hard to see how this could be a bottleneck.
1
0
0
Is there a way to center the pivot of an object without the use of xform? I really would like to try and find a pyMel version of this, or the maya api, as xform is generally 10x slower than a pymel or api solution. Obviously you can achieve it with xform like so: xform(obj, cp=1) But I'm trying to find another way, does anyone know anything?
Center Pivot without xform
0.099668
0
0
6,966
18,397,534
2013-08-23T07:47:00.000
0
1
1
0
python,eclipse
18,397,967
3
false
0
0
You can also name one of your files __main__.py.
1
0
0
Python newbie here. Please bear with me. Having written a good chunk of code, one thing that bothers me is that I cannot find a way to tell Eclipse that this file/function is the starting point for my project (test). I step into code during debugging and end up in some file deep in the code. Then if I want to run it again I go to the tab containing the start file and run it again. It would be nice to be able to specify a "main" function for a python project like we do in C, for example. Is something like that possible? If not, can I at least tell Eclipse to use that one file as the starting point for the project?
Setting a default start function for your python project
0
0
0
126
18,403,526
2013-08-23T12:59:00.000
3
0
0
0
python,performance,search,queue,elasticsearch
18,465,751
2
false
1
0
I am new here, but I will try to share my own experience with ES. Here, we are using CouchDB to store the JSON we are indexing into ES. However, we do heavy modifications on those docs, like creating new nodes, etc. The docs are big: hundreds of fields, more than 15 nested collections. Finally, there are thousands of docs. So, yes, in my humble opinion, if you can create your docs in your application, I do not see why ES would have trouble with that. For the Python part, though, I cannot help; we're doing things in Java here. However, for ES, I would: use the bulk API, as ES is (much, much) more efficient that way; store the ids of the docs that couldn't be indexed due to random errors in another index (or in a file, or somewhere else) so that you can reconstruct and reindex them afterwards, instead of retrying on error (though I can't speak to the feasibility in Python); and not use a replica for an index that is currently indexing. For the retry on errors, I have mixed feelings. If the error is due to a wrong construction of the doc or to a mapping error, it will fail on each retry. Here, we are indexing thousands of those docs a minute, and can still issue search and facet requests (those can be slightly slower, though). This is not much, but I hope it helps. Good luck.
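For reference, a _bulk request body is newline-delimited JSON: one action line followed by one source line per document (the index name and fields here are made up; older ES versions also required a _type in the action line):

```python
import json

def bulk_body(index, docs):
    """Build an Elasticsearch _bulk request body from (id, doc) pairs:
    an action line then a source line per doc, newline-terminated."""
    lines = []
    for doc_id, doc in docs:
        lines.append(json.dumps({"index": {"_index": index, "_id": doc_id}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"
```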
1
2
0
I am looking at the possibility of using ES without a database, constructing my data in my python application and sending it straight to ES in real time. It saves me a lot of complexity; however, my concern is that I might be generating data very quickly and sending requests relentlessly, even when ES might not be ready to accept them. My question is: in this case, does it make sense to use a queue system as a buffer between the two, so my application sends everything to a queue, and the queue then tries to add it to ES, retrying if it doesn't succeed? I am not sure if this is the most logical or efficient method. If anyone has any information or ideas on what queue systems would be suited, or if I even need one, I'd be very interested to hear. James
Feeding ES directly - Is a queue needed?
0.291313
0
0
205
18,405,726
2013-08-23T14:46:00.000
0
0
0
1
python,ios,windows,web,bonjour
18,773,687
2
false
1
1
I would definitely suggest a webapp. And the answer to your questions are given below: How would I receive and send notifications over a local network. Use a REST based web service to communicate with the server. You have to use polling to receive data:-( How could I connect to the server using NSURLConnection if it does not have a static ip? If possible configure a domain name in your network which points to server ip. (Configure local DHCP to give same IP to your server every time based on mac address!) Have a IP Range and when the app starts, try to reach a specific URL and check if it is responding. Ask the user to enter the server IP every time the app starts!
1
4
0
Background: I am just about to start development on an mobile and desktop app. They will both be connected to a local wifi network (no internet connection) and will need to communicate with one another. At the outset we are targeting iOS and Windows as the two platforms, with the intention of adding Linux, OSX, and Android support in that order. The desktop app will largely be a database server/notification center for receiving updates from the iOS apps and sending out the data to other iOS apps. There may be a front end to the desktop app, but we could also incorporate it into the iOS app if needed. For the moment we just want the iOS app to automatically detect when it is on the same network as the server and then display the data that is sent by that server (bonjour like). As far as I see it there are two paths we could take to implement this Create a completely native app for each platform (Windows, Linux, OSX). Pro: We like the ideas of having native apps for performance and ease of install. Con: I know absolutely nothing about Windows or Linux development. Create an app that is built using web technologies (probably python) and create an easy to use installer that will create a local server out of the desktop machine which the mobile apps can communicate with. Pro: Most of the development would be cross-platform and the installer should be easy enough to port. Con: If we do want to add a front-end to the server app it will not be platform native and would be using a css+html+javascript GUI. Question: My question is how would implement the connection between the iOS app and server app in each circumstance. How would I receive and send notifications over a local network. How could I connect to the server using NSURLConnection if it does not have a static ip? I hope this is clear. If not please ask and I will clarify. Update 09/06/2013 Hopefully this will clear things up. 
I need to have a desktop app that will manage a database, this app will connect to iOS devices on a local wireless network that is not connected to the internet. I can do this with either the http protocol (preferably with a flask app) or by using a direct socket connection between the apps and the server. My question is which of the above two choices is best? My preference would be for a web-based app using Python+Flask, but I would have no idea how to connect the iOS app to a flask app running on a local network with out a static ip. Any advice on this would be appreciated.
Connecting iOS app to Windows/Linux apps
0
0
0
1,265
18,407,249
2013-08-23T16:05:00.000
1
0
0
1
python,google-app-engine,path,google-cloud-storage,sys.path
18,955,756
1
false
1
0
In GAE change the python path via Preferences settings, set Python Path to match your python 27 path.
1
2
0
I'm trying to use the GCS client library with my app engine app and I ran into this - "In order to use the client library in your app, put the /src/cloudstorage directory in your sys.path so Python can find it." First, does this mean I need to move the directory into my sys.path OR does it need to add the ~/src/cloudstorage/ to my PATH environment variable? Second, when I print sys.version and sys.path from the App Engine Interactive Console, I see a Python Version of 2.7.2, but when I print from my Terminal (on a Mac), I get the Python I want to use and installed via Homebrew - 2.7.5. The sys.path in the Console shows all App Engine paths and the default Python installation - /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7 On my terminal - /usr/local/Cellar/python/2.7.5/Frameworks/Python.framework/Versions/2.7/ I need help understanding how to change this. ** UPDATE ** Okay, I figured out part of this answer. "In order to use the client library in your app, put the /src/cloudstorage directory in your sys.path so Python can find it." means moving the actual directory to the App Engine project directory. The second piece still remains - why is my Mac PATH environment variable not used in APP Engine. How can I change the default version of Python used by the App Engine (from 2.7.2 to 2.7.5)? This is not related to changing the version in the YAML file.
Google App Engine, Change which python version
0.197375
0
0
1,807
18,407,470
2013-08-23T16:17:00.000
0
0
0
1
android,python,adb
18,408,154
1
false
0
0
sendevent takes 4 parameters. The args for Popen should be ['adb', 'shell', 'sendevent /dev/input/eventX type code value'] - do not split the remote command. Timings are important for sendevent sequences, and an adb shell call itself is kind of expensive, so using a shell script on the device works better. Pay attention to the newline characters in your shell scripts - make sure they're Unix style (a single \n instead of \r\n).
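In other words, the remote command stays a single string (the device path and event codes below are placeholders):

```python
import subprocess

def sendevent_args(device, etype, code, value):
    """Build the Popen argument list: 'adb shell' plus ONE string
    for the remote command; do not split the remote part."""
    remote = "sendevent {} {} {} {}".format(device, etype, code, value)
    return ["adb", "shell", remote]

# e.g.: subprocess.Popen(sendevent_args("/dev/input/event2", 3, 53, 100))
```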
1
0
0
I am running into a strange issue: running adb shell sendevent x x x commands from the command line works fine, but when I use any of the following: subprocess.Popen(['adb', 'shell', 'sendevent', 'x', 'x','x']) subprocess.Popen('adb shell sendevent x x x', shell=True) subprocess.call(['adb', 'shell', 'sendevent', 'x', 'x','x']) they all fail - the simulated touch event that works in a shell script does not work properly when called through python. Furthermore, I tried adb push to put the shell script on the device, and using adb shell /system/sh /sdcard/script.sh I was able to run it successfully, but when I try to run that command line through python, the script fails. What's even stranger is that the script runs, but, for example, it does not seem to execute the command sleep 1 halfway through the script; echo commands work, sendevent commands don't seem to work. Doesn't even seem possible, but there it is. How do I run a set of adb shell sendevent x x x commands through python?
Using adb sendevent in python
0
0
0
1,330
18,411,090
2013-08-23T20:14:00.000
3
0
1
0
python,python-2.7,speech-recognition,speech-to-text,python-dragonfly
18,421,278
1
true
0
0
DragonFly uses the shared recognizer, which starts Windows Speech Recognition. If you modify DragonFly to use an inproc recognizer, Windows Speech Recognition will not start. (Unfortunately, I don't know enough Python to contribute a fix.)
1
1
0
Context I'm working with DragonFly, the python module. At this point, I just want speech recognized and outputted back to me. Question I'm wondering if there is a way to use DragonFly purely for voice recognition without going through Windows Speech Recognition (or any alternative program). Is Dragonfly only meant for "post-speech-recognition"? The examples I've seen and run all open up windows speech recognition. I've also looked into the old speech recognition module - pySpeech, but that also "borrows" windows speech recognition. Should I be looking towards other modules?
DragonFly - no external application?
1.2
0
0
320
18,413,606
2013-08-24T00:09:00.000
0
1
0
0
python,excel,xlrd,xlwt
18,413,675
2
false
0
0
Prepare a dictionary to store the results. Get the number of lines with data using xlrd, then iterate over each of them. For each state code, if it's not in the dict, create it as a dict as well. Then check whether the entry read from the second column exists under that state key in your results dict. If it does not, create it as a key in that dict with a value of one. If it does, just increment the value for that key (+1). Once the loop has finished, your results dict will have the count for each individual entry in each individual state.
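The counting logic itself, sketched with plain (state, number) pairs standing in for the rows read via xlrd (sheet access details omitted):

```python
from collections import defaultdict

def count_per_state(rows):
    """rows: iterable of (state, number) pairs, e.g. produced with
    sheet.cell_value(r, 0) and sheet.cell_value(r, 1) in xlrd.
    Returns {state: {number: count}}."""
    results = defaultdict(lambda: defaultdict(int))
    for state, number in rows:
        results[state][number] += 1
    return results
```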
1
0
0
I'm using the python packages xlrd and xlwt to read and write from excel spreadsheets using python. I can't figure out how to write the code to solve my problem though. My data consists of a column of state abbreviations and a column of numbers, 1 through 7. There are about 200-300 entries per state, and I want to figure out how many ones, twos, threes, and so on exist for each state. I'm struggling with what method I'd use to figure this out. Normally I would post the code I already have, but I don't even know where to begin.
Python Programming approach - data manipulation in excel
0
1
0
247
18,419,500
2013-08-24T14:16:00.000
0
0
0
1
python,macos,python-2.7
47,548,607
6
false
0
0
From $ brew info python: This formula installs a python2 executable to /usr/local/bin. If you wish to have this formula's python executable in your PATH then add the following to ~/.bash_profile: export PATH="/usr/local/opt/python/libexec/bin:$PATH" Then confirm your python executable corresponds to the correct installation: $ which python or $ python --version
1
35
0
I have searched online for a while for this question, and what I have done so far is installed python32 in homebrew changed my .bash_profile and added the following line to it: export PATH=/usr/local/bin:/usr/local/sbin:~/bin:$PATH but when I close the terminal and start again, I type 'which python', it still prints: /usr/bin/python and type 'python --version' still got: Python 2.7.2 I also tried the following instruction: brew link --overwrite python or try to remove python installed by homebrew by running this instruction: brew remove python but both of the above two instructions lead to this error: Error: No such keg: /usr/local/Cellar/python can anybody help, thanks
How to make Mac OS use the python installed by Homebrew
0
0
0
78,233
18,420,854
2013-08-24T16:48:00.000
4
1
0
0
python,django,pyramid,firebase,bottle
18,420,953
1
false
1
0
Some contras of using Firebase: Your data is in an external server (deal-breaker for sensitive data) It costs money You have an additional dependency that you don't fully control (if they go out of service/business you might be in trouble) You know the pros. If you think these are not relevant to you then go for it.
1
0
0
Since Firebase can do user login as well as hold a lot of other stuff about users and their interactions with my app. What are some of the advantages and disadvantages of using Firebase solely as a web framework, instead of using django, pyramids, bottle, etc etc? http routing, etc etc.... I have that sorta stuff handle by another process... So, if I'm looking basically to hold some user stuff and allow for user logins and user to user private/personal communications. It seems firebase is an almost total solution, no? I know this isn't a technical question, but I'm just looking for opinions from a realtime crowd....stackoverflow seems the best fit.
Python Frameworks vs Firebase
0.664037
0
0
2,929
18,424,495
2013-08-25T00:28:00.000
5
1
0
0
python,django,unit-testing,debugging,pydev
19,337,234
1
false
1
0
Setup a new debug configuration. Run -> Debug Configurations... Select 'PyDev Django' Click 'New Launch Configuration (top left corner) Name your new configuration Set the project to your project Set the module to your manage.py (browse to your manage.py) Go to the 'Arguments' tab and enter 'test' under 'Program arguments' Click 'Apply' This will allow you to run 'manage.py test' and be able to stop on your breakpoints. Unfortunately, you'll have to create different configurations if you only want to run a subset of tests.
1
1
0
I've written a few unittests for a Django project. I'd like to debug them. I've set a breakpoint on the server side. What should I click to run the Django unittest with debugging enabled in PyDev Eclipse? It seems I can run the manage.py test command from PyDev, but then there's no debugging. If I run the unittest with right-click → debug unittest, then I get all sorts of Internal Server errors, presumably because the test environment wasn't set up correctly.
How to debug Django unittests with PyDev?
0.761594
0
0
1,988
18,426,205
2013-08-25T05:57:00.000
0
0
1
0
python,if-statement,python-3.x,indexing,range
18,426,555
1
false
0
0
The use of in and not in is not enough to produce correct results unless you limit the input to only one occurrence of "ie", or at least only one type of usage. There seems to be an attempt to analyze words in isolation, but the above holds for words too. The program needs to either use a RegEx (with a look-behind) or scan the text properly. To answer your question, the "index out of range" error occurs because line2 is a split of line1, which is the constant "Line: ". This means that line2 is always equal to ["Line:"]. On the second iteration of the while loop, a takes the value 1, which is an invalid index for line2. If you add if a >= 1: break before any use of line2[a] in the loop, this exception will not occur any more. Now, for the reasons I expressed above, I don't think the program is going to accomplish its stated objectives anyway.
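For instance, a look-behind lets a pattern match "ie" only when it is not immediately preceded by "c" (the pattern is illustrative, not taken from the original program):

```python
import re

# Negative look-behind: "ie" not immediately preceded by "c"
IE_NOT_AFTER_C = re.compile(r"(?<!c)ie")

def find_ie(word):
    """All occurrences of 'ie' in word that don't follow a 'c'."""
    return IE_NOT_AFTER_C.findall(word)
```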
1
1
0
Nevermind, requesting deletion of thread. Cheers.
Analysis of grammatical rules
0
0
0
62
18,428,204
2013-08-25T10:50:00.000
0
0
1
0
django,python-venv
51,107,469
3
false
1
0
What happened to me was that I was trying to install django from outside the environment directory/folder. So make sure you are inside the environment directory and then use pip install django
2
1
0
Would anyone know possible reasons why Django is being installed in the global site package and not my venv's site package folder? Here's my set up and what I did, this is a bit detailed since I'm new to Python/Django and not sure which information is important: Python 3.3 is installed in c:\python33 I have virtualenv, pip, easy_install installed in C:\Python33\Scripts. My venv is c:\users\username\projects\projB This venv was created using pyvenv, not virtualenv. I activated the venv. I changed directory to C:\Python33\Scripts to run "pip install django". Django was created inside C:\Python33\Lib\site-packages and not inside C:\users\username\projects\projB\Lib\site-packages. Do I need to install pip inside my venv and use that to install Django?
Django not installed in venv?
0
0
0
1,802
18,428,204
2013-08-25T10:50:00.000
1
0
1
0
django,python-venv
18,429,225
3
true
1
0
Pip should be installed when you create the virtual environment. Don't change directory into C:\Python33\Scripts before running pip. It looks like that means you use the base install's pip instead of your virtual environment's pip. You should be able to run pip from any other directory. However I'm not familiar with python on Windows, so I'm not certain that pip is added to the path when you activate the environment. If that doesn't work, you'll have to change directory into the bin directory of your virtual environment, then run pip.
2
1
0
Would anyone know possible reasons why Django is being installed in the global site package and not my venv's site package folder? Here's my set up and what I did, this is a bit detailed since I'm new to Python/Django and not sure which information is important: Python 3.3 is installed in c:\python33 I have virtualenv, pip, easy_install installed in C:\Python33\Scripts. My venv is c:\users\username\projects\projB This venv was created using pyvenv, not virtualenv. I activated the venv. I changed directory to C:\Python33\Scripts to run "pip install django". Django was created inside C:\Python33\Lib\site-packages and not inside C:\users\username\projects\projB\Lib\site-packages. Do I need to install pip inside my venv and use that to install Django?
Django not installed in venv?
1.2
0
0
1,802
18,428,750
2013-08-25T12:01:00.000
10
0
1
1
python,linux,terminal,command
33,800,400
9
false
0
0
pgrep -f <your process name> | xargs kill -9 This will kill your process. In my case it is pgrep -f python | xargs kill -9
3
71
0
I want to kill the Python interpreter. The intention is that all the Python files that are running at this moment will stop (without any information about those files); obviously the processes should be closed. Any idea to delete files in Python or destroy the interpreter is OK :D (I am working with a virtual machine). I need it from the terminal because I write C code and use Linux commands... Hope for help
Kill python interpreter in linux from the terminal
1
0
0
134,684
18,428,750
2013-08-25T12:01:00.000
6
0
1
1
python,linux,terminal,command
31,704,546
9
false
0
0
pgrep -f yourAppFile.py | xargs kill -9 pgrep returns the PID of the specific file, so this will kill only that specific application.
3
71
0
I want to kill the Python interpreter. The intention is that all the Python files that are running at this moment will stop (without any information about those files); obviously the processes should be closed. Any idea to delete files in Python or destroy the interpreter is OK :D (I am working with a virtual machine). I need it from the terminal because I write C code and use Linux commands... Hope for help
Kill python interpreter in linux from the terminal
1
0
0
134,684
18,428,750
2013-08-25T12:01:00.000
0
0
1
1
python,linux,terminal,command
66,626,681
9
false
0
0
To kill a Python script while using Ubuntu 20.04.2, instead of Ctrl + C just press Ctrl + D together.
3
71
0
I want to kill the Python interpreter. The intention is that all the Python files that are running at this moment will stop (without any information about those files); obviously the processes should be closed. Any idea to delete files in Python or destroy the interpreter is OK :D (I am working with a virtual machine). I need it from the terminal because I write C code and use Linux commands... Hope for help
Kill python interpreter in linux from the terminal
0
0
0
134,684
18,429,202
2013-08-25T12:53:00.000
1
0
0
0
python,windows,windows-7,kivy
18,429,288
1
false
0
1
Use pythonw.exe, which is in the Python installation directory. If you have virtualenv installed and your Kivy project rests in a virtualenv, try /<virtual_env_folder>/scripts/pythonw.exe.
1
1
0
I'm new to programming and learning to use python with kivy. I have windows 7. Is it possible to open kivy widgets without the terminal automatically opening in the background?
Opening kivy widgets without the terminal?
0.197375
0
0
291
18,429,298
2013-08-25T13:05:00.000
1
0
0
0
python,sockets,networking,socketserver
18,432,587
2
false
0
0
You have many questions, I'll try to respond to them one by one. If you have a service that uses a specific port, and you have multiple computers on the same ip addess, how is this handled? Someone mentioned that multiple computers cannot have the same IP address. In the original IP model, this is true, though today such address sharing (through NAT) is common. But even in the original model, your question makes sense if you reformulate it slightly: "If you have a service that uses a specific port, and you have multiple clients on the same ip address, how is this handled?" There can be multiple client processes on the same host (thus sharing the same IP address) trying to contact the same server (using the same destination address+port combination). This was natural at the time IP was developed, as most machines powerful enough to connect to the network were multi-user machines. That's why TCP (and UDP) have port numbers on both sides (source and destination, or client and server). Client processes typically don't specify the source port when contacting a server, but an "ephemeral" source port is allocated to the socket by the host operating system for the lifetime of the socket (connection). So this is how the server distinguishes between clients from the same address: by their source ports. NAT maps different hosts (with different "internal" IP addresses) to the same "external" IP addresses, but it also allocates unique source ports to outgoing packets. So the server sees this just like the original case (multiple client processes from the same "host"/IP address). The NAT then "demultiplexes" the server's responses to the different internal hosts. Should the service specify to which computer on the ip address the information should be send? What if both computers on the same ip use the same service, but request different information? 
The server does this by sending responses to the same address+port combination that the different clients used as source address/port. This is mostly handled automatically by the socket API. As described above, the two clients will get separate connections, and the server hopefully handles these as separate "sessions" and doesn't confuse requests between these sessions. Also, if a client is on a dynamic ip, how should the service detect that the ip has been changed, but the client (and the session) is the same? Should clients identify themselves for every request (much like cookies over http)? Now, this is a whole can of worms. If a service wants to "survive" client IP address changes, then it will have to use some other identifier. HTTP (session) cookies are a good example. TCP connections are broken by address changes - this is normal, as such changes weren't envisioned as part of normal operation when TCP/IP was designed. There have been attempts at making TCP/IP more robust against such changes, such as Mobile IP, MPTCP, and possibly SCTP, but none of these have really entered the mainstream yet. Basing your protocol on HTTP(S) and using session cookies may be your best bet.
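As a small illustration of the source-port mechanism described above (my own sketch, not part of the original answer), the following uses Python's socket module on loopback: two clients connecting from the same IP address are each given a distinct ephemeral source port by the OS, which is exactly what lets the server tell them apart.

```python
import socket
import threading

# Tiny server on loopback; it records the (ip, source_port) of each client.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
server.listen(2)
port = server.getsockname()[1]

seen = []

def accept_two():
    for _ in range(2):
        conn, addr = server.accept()   # addr == (client_ip, ephemeral_port)
        seen.append(addr)
        conn.close()

t = threading.Thread(target=accept_two)
t.start()

# Two clients from the same IP address (127.0.0.1), connected simultaneously
c1 = socket.create_connection(("127.0.0.1", port))
c2 = socket.create_connection(("127.0.0.1", port))
t.join()
c1.close()
c2.close()
server.close()

ips = {ip for ip, _ in seen}
ports = {p for _, p in seen}
print(ips)        # the same IP for both clients
print(len(ports)) # two distinct ephemeral source ports
```

Both connections are open at the same time, so their 4-tuples (src IP, src port, dst IP, dst port) must differ, and since everything else is equal the source ports must differ.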
2
0
0
If you have a service that uses a specific port, and you have multiple computers on the same ip addess, how is this handled? Should the service specify to which computer on the ip address the information should be send? What if both computers on the same ip use the same service, but request different information? Also, if a client is on a dynamic ip, how should the service detect that the ip has been changed, but the client (and the session) is the same? Should clients identify themselves for every request (much like cookies over http)?
Multiple clients from same IP
0.099668
0
1
2,393
18,429,298
2013-08-25T13:05:00.000
0
0
0
0
python,sockets,networking,socketserver
18,429,538
2
false
0
0
I don't think I fully understand what you've said. There is no way that multiple computers will be on the same IP; this is not how the internet works. There are protocols which handle such things. Did you mean that you're a server and multiple computers try to connect to you? If so, you listen on a port, and when you get a connection you open a new thread to service that computer while the main loop keeps listening.
2
0
0
If you have a service that uses a specific port, and you have multiple computers on the same ip addess, how is this handled? Should the service specify to which computer on the ip address the information should be send? What if both computers on the same ip use the same service, but request different information? Also, if a client is on a dynamic ip, how should the service detect that the ip has been changed, but the client (and the session) is the same? Should clients identify themselves for every request (much like cookies over http)?
Multiple clients from same IP
0
0
1
2,393
18,429,452
2013-08-25T13:24:00.000
1
0
1
0
python,pyqt,maya,pymel
50,134,039
12
false
0
1
Update for anyone using PyQt5 with python 3.x: Open terminal (eg. Powershell, cmd etc.) cd into the folder with your .ui file. Type: "C:\python\Lib\site-packages\PyQt5\pyuic5.bat" -x Trial.ui -o trial_gui.py for cases where PyQt5 is not a path variable. The path in quotes " " represents where the pyuic5.bat file is. This should work!
3
53
0
Is there a way to convert a ui formed with qtDesigner to a python version to use without having an extra file? I'm using Maya for this UI, and converting this UI file to a readable python version to implement would be really great!
Convert pyQt UI to python
0.016665
0
0
247,890
18,429,452
2013-08-25T13:24:00.000
73
0
1
0
python,pyqt,maya,pymel
18,430,351
12
true
0
1
You can use the pyuic4 command in a shell: pyuic4 input.ui -o output.py
3
53
0
Is there a way to convert a ui formed with qtDesigner to a python version to use without having an extra file? I'm using Maya for this UI, and converting this UI file to a readable python version to implement would be really great!
Convert pyQt UI to python
1.2
0
0
247,890
18,429,452
2013-08-25T13:24:00.000
0
0
1
0
python,pyqt,maya,pymel
28,266,946
12
false
0
1
I ran into the same problem recently. After finding the correct path to the pyuic4 file using the file finder, I ran: C:\Users\ricckli.qgis2\python\plugins\qgis2leaf>C:\OSGeo4W64\bin\pyuic4 -o ui_qgis2leaf.py ui_qgis2leaf.ui As you can see, my ui file was placed in this folder... Qt Creator was installed separately, and the pyuic4 file was placed there by the OSGeo4W installer.
3
53
0
Is there a way to convert a ui formed with qtDesigner to a python version to use without having an extra file? I'm using Maya for this UI, and converting this UI file to a readable python version to implement would be really great!
Convert pyQt UI to python
0
0
0
247,890
18,430,726
2013-08-25T15:41:00.000
11
0
1
0
python,django,virtualenv,pycharm
18,431,537
1
true
0
0
In settings, under the Project section (in the left pane) go to Interpreters. From there you can select a found environment or click the + to add your own from a path. Find the environment you created and add it to the list. Then, once you select the environment you can see the installed modules underneath. You can add new modules through their built in pip. After that, it'll take you to the first page where you have to select your custom environment from a drop down. Good luck! Edit: Reread your question. Sometimes when I can't get the module to be recognized in PyCharm, I do a pip install through the command line in the virtual environment. Then restart PyCharm. Navigate to the environment in cmd and run python -m pip install -U SOUNDCLOUD_MODULE and it'll work.
1
10
0
Greetings, everyone! I've got a little issue in a project made in PyCharm, with the virtual environment (VE) precisely. I set this VE up a few months ago and didn't use it for some time. Now I need to go back to it, because it has a lot of necessary things installed. There is one more package that needs to be installed into this VE: the SoundCloud API. I installed it directly from PyCharm in the project settings, and I checked whether this VE is still the default VE, which it is. But the project keeps complaining that there is "No module named soundcloud". Can you show me a way to fix this? Thanks in advance.
Setting up virtual environment in PyCharm
1.2
0
0
11,813
18,433,883
2013-08-25T21:25:00.000
4
0
1
0
python
18,433,897
2
false
0
0
Each hexadecimal digit carries 4 bits of information. You have eight hexadecimal digits. Therefore, the number carries 32 bits of information. It just happens that the address fits in 32 bits. If you are using a 32-bit Python, this will always be the case; if you are using a 64-bit Python, it could be that the address just happened to have all high bits zero and ended up fitting into 32 bits.
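To make the digit/bit arithmetic concrete (my own illustration), each hex digit encodes 4 bits, so an 8-digit address such as 0xb6f99cec fits in exactly 32 bits; on a 64-bit CPython build, hex(id(obj)) will usually show more than 8 digits.

```python
addr = 0xb6f99cec            # one of the example addresses
print(len("b6f99cec"))       # 8 hex digits
print(addr.bit_length())     # 32: 8 digits x 4 bits per digit

# On CPython, id(obj) happens to be the object's memory address;
# on a 64-bit build this will typically print more than 8 hex digits.
print(hex(id(object())))
```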
1
0
0
Examples : 0xb6f99cec 0xb6f99d2c 0xb6f99d6c 0xb6f99dac Is it a design decision, is it a memory issue, does it make something easier, why 8 and not 4 or 12 or 16?
why are object addresses always 8 hex digits?
0.379949
0
0
86
18,433,917
2013-08-25T21:29:00.000
1
0
1
0
python,encryption,sha
18,434,078
3
false
0
0
Hash functions are one-way tickets. You cannot use them for encryption. Hash function algorithms are realised through modulo, xor and other familiar (one-way) operations. You may try to find what argument was used to generate a hash, but in theory you will never be 100% sure it is the correct value. For example, try it with a really simple (useless in cryptography) hash function: modulo 10. This function returns only ten different values. If the result is 7, you may guess the entry was 7, or 137, or 1234567. The same holds for md5, sha1 and better ones. As you can see, when you use a hash function that returns only 40 hex digits (20 bytes) for files that are much bigger (maybe even a few hundred megabytes), in theory there exist infinitely many files for each possible hash.
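The modulo-10 toy hash from the answer can be demonstrated directly, together with the fact that a real digest like SHA-1 has the same fixed length no matter how large the input is (a sketch of the argument, my own code):

```python
import hashlib

def tiny_hash(n):
    # A toy "hash": many inputs map to the same output,
    # so the original input cannot be recovered from the result.
    return n % 10

# 7, 137 and 1234567 all collide on the value 7
print(tiny_hash(7), tiny_hash(137), tiny_hash(1234567))

# Real hashes behave the same way at a much larger scale:
# SHA-1 always yields 40 hex digits (20 bytes), regardless of input size.
short = hashlib.sha1(b"hi").hexdigest()
long_ = hashlib.sha1(b"x" * 10_000_000).hexdigest()
print(len(short), len(long_))   # 40 40
```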
1
2
0
I have a function for encrypting with SHA-1 in Python, using hashlib. I take a file and encrypt the contents with this hash. If I set a password for an encrypted text file can I use this password to decrypt and to restore the file with the original text?
Decrypt SHA1 with (password) in python
0.066568
0
0
13,349
18,434,685
2013-08-25T23:15:00.000
2
0
0
1
git,google-app-engine,python-2.7
18,455,561
1
true
1
0
No -- It doesn't update backends. (My cron jobs ran last night and failed because they were running old code.) Nothin' like good ol' appcfg.py update ./ --backends
1
1
0
When using appcfg.py, I had to specify backends to update them. What about when I'm using Push-to-Deploy? I ask because I see two of my Versions don't have the same "deployed" date -- the backend still says "6 days ago". I didn't change backends.yaml, but I did change the code that runs on that backend. Should I see a new "deployed" date? Is git Push-to-Deploy working?
Does Google App Engine's git Push-to-Deploy also update backends?
1.2
0
0
157
18,435,328
2013-08-26T01:09:00.000
2
0
1
1
python
18,435,371
2
false
0
0
Check the IDLE key preferences. On Mac it is Ctrl+P. Look for the history-previous key mapping.
1
0
0
I am using IDLE 3 on Windows. My question is simply: is there any way to get the last thing entered by pressing the up arrow key (as in IPython)? It is very problematic to copy the last command and paste it again!
Showing last command in IDLE
0.197375
0
0
853
18,435,891
2013-08-26T02:45:00.000
1
0
0
0
python,com,constants,win32com
18,492,848
3
true
0
0
To find out what COM applications you can use, see http://timgolden.me.uk/pywin32-docs/html/com/win32com/HTML/QuickStartClientCom.html Basically, you can't know for sure. Sorry. Each computer will have a different list based on the software installed. The webpage I linked suggests using pywin32's COM browser; however, I haven't ever found it that useful. My normal approach is 'I want to interface with application X in Python... let's google "python com X" and see what comes up' or 'I want to interface with application X in Python... let's look through the documentation of AppX for references to COM'. After all, you'll want to have some form of documentation for that program's COM interface in order to be able to do anything meaningful with the program (other than opening it).
1
3
0
I'm currently wondering how to list the constants in win32com in Python, for example for Excel: win32com.client.Dispatch('Excel.Application'). Is there a way to display all constants using win32com.client.constants? Or does someone know where I could find win32com's documentation? All the links I found are dead...
Python win32com constants
1.2
0
0
7,331
18,436,903
2013-08-26T04:58:00.000
1
0
1
0
python,scite
18,437,018
1
true
0
0
Highlight the block of code and press shift+tab
1
0
0
My English, just like my programming, is not good; apologies. I'm using SciTE to run Python code. I added a while statement to the outside of a block of code. Then, in order to indent the next block of code, I selected it and pressed Tab. After some more coding, I now want to delete the while statement and dedent (unindent) the block of code that's in the while loop. How can I dedent a block of code? I hope people can understand my poor description. Thanks!
in SciTE, how to make the whole block of code have less margin in python
1.2
0
0
298
18,437,202
2013-08-26T05:27:00.000
8
0
0
0
python,odoo
18,457,696
2
true
1
0
self.pool.get is used to get the singleton instance of the ORM model from the registry pool for the database in use. self.browse is a method of the ORM model that returns a browse record. As a rough analogy, think of self.pool.get as getting a database cursor and self.browse as an SQL SELECT of records by id. Note that if you pass browse an integer you get a single browse record; if you pass a list of ids you get a list of browse records.
1
4
0
I have been working on developing a module in OpenERP 7.0. I have been using Python and the Eclipse IDE for development. I wanted to know the difference between self.browse() and self.pool.get() in OpenERP development. Thanks.
What's the difference between self.browse() and self.pool.get() in OpenERP development?
1.2
0
1
11,039
18,437,284
2013-08-26T05:36:00.000
2
0
0
0
python,plone
18,516,979
2
true
1
0
The codeless way to do this is to make use of Plone's workflow system. Out-of-the-box, Plone's file and image content types do not have their own workflow. That means that files and images will simply inherit the publication state of their parent folder. This is easy and sensible, but it doesn't meet the need you're describing. To change the situation, you may use the "types" configuration panel to turn on independent workflow for files and images. Then, their publication status may be set separately from their containing folders. Typically, you'd choose the same workflow that you're using for documents. Then, you may publish a folder and list its contents while having the files within be private -- thus requiring login for viewing. If you need this to work differently in different places, you may turn on "placeful" workflow (turn it on by adding it in the add-ons panel; it's pre-installed, but not active). This allows different workflows in different parts of a site. It increases complexity, but is often an ideal solution to this kind of puzzle.
1
1
0
I wish to make the contents of a folder in Plone downloadable only for certain roles. Can this be done easily? At present anybody who clicks the hyperlink for file name in the folder contents can download the file easily. I know about the site-wide option of overriding the at_download code using ZMI.
How do I make files downloadable for a particular role in Plone?
1.2
0
0
155
18,438,125
2013-08-26T06:42:00.000
0
0
1
0
python,multithreading,multiplayer
18,440,014
2
false
0
0
You need to analyze why your program is slowing down when other threads do their work. Assuming that the threads are doing CPU-intensive work, the slowdown is consistent with threads being serialized by the global interpreter lock. It is impossible to answer in detail without knowing more about the nature of the work your threads are performing and of the objects that must be shared in parallel. In general, you have two viable options: Use processes, typically through the multiprocessing module. The typical reasons why objects are not picklable are that they contain unpicklable state such as closures, open file handles, or other system resources. But pickle allows objects to implement methods like __getstate__ or __reduce__ which identify the object's state, using that state to rebuild the objects. If your objects are unpicklable because they are huge, then you might need to write a C extension that stores them in shared memory or a memory-mapped file, and pickle only a key that identifies them in the shared memory. Use threads, finding ways to work around the GIL. If your computation is concentrated in several hot spots, you can move those hot spots to C, and release the GIL for the duration of the computation. For this to work, the computation must not refer to any Python objects, i.e. all data must be extracted from the objects while the GIL is held, and stored back into the Python world after the GIL has been reacquired.
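A minimal sketch of the __getstate__/__setstate__ idea mentioned above (the class and attribute names are illustrative, not from the question): drop the unpicklable handle when pickling, and rebuild it when unpickling.

```python
import os
import pickle
import tempfile

class LogReader:
    """Holds an open file handle, which pickle cannot serialize directly."""

    def __init__(self, path):
        self.path = path
        self.handle = open(path)

    def __getstate__(self):
        state = self.__dict__.copy()
        del state["handle"]            # strip the unpicklable resource
        return state

    def __setstate__(self, state):
        self.__dict__.update(state)
        self.handle = open(self.path)  # rebuild it on the other side

# Demo: round-trip through pickle
fd, path = tempfile.mkstemp()
os.close(fd)
reader = LogReader(path)
clone = pickle.loads(pickle.dumps(reader))
print(clone.path == reader.path)            # True
print(clone.handle is not reader.handle)    # True: freshly reopened
reader.handle.close()
clone.handle.close()
os.remove(path)
```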
1
2
0
I'm creating a simple multiplayer game in python. I have split the processes up using the default thread module in python. However I noticed that the program still slows down with the speed of other threads. I tried using the multiprocessing module but not all of my objects can be pickled. Is there an alternative to using the multiprocessing module for running simultaneous processes?
Python 2.7.5 - Run multiple threads simultaneously without slowing down
0
0
0
433
18,442,440
2013-08-26T10:55:00.000
-1
0
1
0
python,nlp,nltk,information-retrieval,opennlp
18,443,963
1
false
0
0
If the source is in .txt format, regular expressions would probably be the best solution. I don't think it is easy (or even possible) to write a regex for all arbitrary kinds of ads, but the more examples you have, the better your search will work.
1
1
0
There are a lot of classified ads appearing in NON-HTML format(paper ,text ,written ,etc) which tend to sell house,automobile,rent,lease,flat,etc. A classified ads say for example, a flat rent ad has some of the features included like: SIZE,AREA,LOCALITY,PRICE,CONTACT INFO. .etc My question is how to extract the street address(address mentioned in article /LOCALITY) in which the ad resides or has mentioned in former article ? Is there any solution to this problem using NLTK & python ?? Imagine that the source of article is in normal text file(.txt) .
How to extract street address from any classified ads?
-0.197375
0
0
457
18,446,004
2013-08-26T14:01:00.000
1
0
1
1
python,string,filenames,filepath,file-extension
18,446,406
3
false
0
0
You want os.path.split + os.path.splitext. Please take some time reading the doc next time, it would have been waaaayyyy faster than posting here.
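A sketch of the combination (my own example), assuming the path uses the host OS's separator; for parsing Windows-style paths on a Unix box, the stdlib ntpath module exposes the same functions:

```python
import ntpath
import os.path

def split_path(p):
    directory, filename = os.path.split(p)   # directory part, "readme.txt"
    name, ext = os.path.splitext(filename)   # "readme", ".txt"
    return directory, name, ext

print(split_path("/tmp/example/readme.txt"))
# ('/tmp/example', 'readme', '.txt') on a POSIX system

# ntpath handles Windows separators regardless of the host OS:
print(ntpath.split(r"C:\Example\readme.txt"))
```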
1
2
0
I'm trying to separate a string into three variables where C:\Example\readme.txt could be read as C:\Example, readme, and .txt for the sake of a script I'm writing. It may be deployed in both Windows and Unix environments and may deal with both Windows or Unix paths, so I need to find a way that complies to both standards; I've read about several functions that achieve similar to this, but I'm looking for some input on how to best handle the single string inside a function. *Note, I'm running IronPython 2.6 in this environment, and I'm not sure if that varies so greatly with standard Python 2.7 that I would need to adapt my usage. EDIT: I'm aware of using os.path.splitext to get the extension from the filename, but finding a platform-independent way to get both the path and the filename (which I later use splitext on) is what boggles me.
What is the most stable and Pythonic cross-platform way to separate a string of path,filename,ext into three separate variables?
0.066568
0
0
356
18,446,580
2013-08-26T14:30:00.000
2
0
1
0
python,multithreading,performance,win32-process
18,446,899
2
true
0
0
Each process has a separate instance of the global variable. If you want each process to see the same value, you'll need to pass that value as an argument to each process.
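A minimal sketch of passing the value as an argument (illustrative names): t0 is computed once in the parent and handed to each Process, so every worker measures elapsed time from the same origin instead of from its own re-imported copy of the global.

```python
import time
from multiprocessing import Process, Queue

def work(thread_id, t0, out):
    # t0 arrives as an argument, so every process shares the same start time
    out.put((thread_id, time.time() - t0))

def run_workers(n):
    t0 = time.time()
    out = Queue()
    procs = [Process(target=work, args=(i, t0, out)) for i in range(n)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return sorted(out.get() for _ in range(n))

if __name__ == "__main__":
    print(run_workers(2))  # e.g. [(0, 0.01...), (1, 0.01...)]
```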
1
0
0
With this global variable defined at the top of the script, t0 = time.time() ## is global, and this function, def timestamp(t0): ... return ("[" + str(time.time()-t0)+ "] ") ## time stamping from initial start, I'm trying to timestamp every print() of my script with print(timestamp(t0) + "...whatever..."). This works, but when I'm entering multithreading with for thread_id in range(win32_safe_os): ... p = Process(target=fonction, args=((thread_id),"test")) ... p.start() ... thread_list.append(p) in order to run def fonction(thread_id,filetodo): ... print(timestamp(t0)+"Load core "+str(thread_id)) ... print(timestamp(t0)+str(filetodo)+" on core "+str(thread_id)) ... print(timestamp(t0)+"Free core "+str(thread_id)) I get this stdout: [2.70299983025] 297 jpg / 36087 files [2.75] Entering multithreading [2.75] Win32 finds : 2 core(s) [0.0] Load core 0 [0.0] test on core 0 [0.0] Free core 0 [0.0] Load core 1 [0.0] test on core 1 [0.0] Free core 1 I can see that my call to timestamp() with t0 is working, but not inside the processes started with p.start(). I'm wondering how (and why) I need to correct this. PS: I tried time.clock, but on win32 it refers to the beginning of a thread (not the script).
Understanding why multithreading could not read a global variable
1.2
0
0
112
18,455,213
2013-08-27T00:42:00.000
0
0
0
0
python,django,reporting,google-calendar-api
52,007,377
3
false
1
0
I'm probably going to write a Python script for this soon. I had written a reporting app before in C#, but it's badly written and I think Google has changed their API again, so it's not working anymore. The way I did it was to use tags. I wanted my total work hours per client, so I would enter them into Google Calendar as: @clientname description of work where clientname can be just the first few letters, and the software matches it to a full name from a list of clients. My software would then let you choose a time period and one or more clients, and would output them into a neat-looking Word file. PS: to be honest, the suggestion of using gtimereport.com seems very bad to me. You're basically uploading all of your calendars to strangers. That's why I'm going to write a script for this.
1
3
0
I am planning all my daily activities in Google Calendar, like: office time, sleeping, playing, gym. I am happy with Google, but the issue is that I don't get reporting, so I can't see how much time is spent in each category. I know Python and Django, so I was thinking: is it possible to keep logging all events in Google Calendar and then have a daily cron job which fetches events from Google Calendar and puts them in a MySQL database? The main issue is that I want to define separate categories for different things, like WORK, SLEEP, SHOPPING etc. But how can I do that from the event name only? Do I need to enter some words in the events which I can grab and use as a category? Any ideas on that?
Can i generate reporting using google calendar
0
0
0
3,005
18,455,589
2013-08-27T01:35:00.000
1
0
1
0
python,numpy
18,456,597
6
false
0
0
It is possible to do the job in one pass and without loading the entire file into memory. The code itself is going to be a bit more complicated, and mostly unneeded unless the file is HUGE. The trick is the following: suppose we only need one random line. First save the first line into a variable; then, for the ith line, replace the currently saved line with probability 1/i. Return the saved line when reaching the end of the file. For 10 random lines, keep a list of 10 elements filled from the first 10 lines; then, for each later line i, replace a randomly chosen element of the list with probability 10/i.
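The single-pass idea described above is known as reservoir sampling; here is a sketch of it for k lines (my own implementation of the described trick):

```python
import random
import tempfile

def sample_lines(path, k):
    """One pass, O(k) memory: classic reservoir sampling."""
    reservoir = []
    with open(path) as f:
        for i, line in enumerate(f):
            if i < k:
                reservoir.append(line)        # fill the reservoir first
            else:
                j = random.randrange(i + 1)   # 0 <= j <= i
                if j < k:
                    reservoir[j] = line       # keep line with prob k/(i+1)
    return reservoir

# Demo with a throwaway 10k-line file
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("".join(f"line {i}\n" for i in range(10_000)))
    demo_path = f.name

print(sample_lines(demo_path, 10))   # 10 random lines, in order of appearance
```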
1
4
1
I have a text file which is 10k lines long and I need to build a function to extract 10 random lines each time from this file. I already found how to generate random numbers in Python with numpy and also how to open a file but I don't know how to mix it all together. Please help.
Retrieve 10 random lines from a file
0.033321
0
0
3,957
18,455,991
2013-08-27T02:27:00.000
1
0
0
0
python,html,python-3.x,html-parsing
18,456,494
2
false
1
0
Try the html.parser library or the re library; they will help you do that. I think you can use a regex such as r'https?://[^\s<>"]+|www\.[^\s<>"]+' to match the links.
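Since only built-in modules are allowed, the standard library's html.parser can do this without regexes. A sketch (my own example) that also separates image links from anchor links, which addresses the categorizing part of the question:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects href/src attributes, bucketed by tag type."""

    def __init__(self):
        super().__init__()
        self.pages = []    # from <a href=...>
        self.images = []   # from <img src=...>

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "href" in attrs:
            self.pages.append(attrs["href"])
        elif tag == "img" and "src" in attrs:
            self.images.append(attrs["src"])

html = '<a href="http://example.com/a">x</a><img src="pic.jpg"><a href="b.mp3">y</a>'
collector = LinkCollector()
collector.feed(html)
print(collector.pages)    # ['http://example.com/a', 'b.mp3']
print(collector.images)   # ['pic.jpg']
```

The same idea extends to sorting by extension (e.g. .mp3 versus .jpg) once the URLs are collected.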
1
1
0
I basically have to make a program that takes a user-input web address and parses the HTML to find links, then stores all the links in another HTML file in a certain format. I only have access to built-in Python modules (Python 3). I'm able to get the HTML code from the link using urllib.request and put it into a string. How would I actually go about extracting links from this string and putting them into a string array? Also, would it be possible to identify links (such as an image link / mp3 link) so I can put them into different arrays (then I could categorize them when creating the output file)?
Extracting links from HTML in Python
0.099668
0
1
718
18,456,708
2013-08-27T04:01:00.000
1
0
1
0
python,repository,sublimetext2
18,722,089
2
false
0
0
You can also access the Package Control settings from the Menu bar under: Preferences -> Package Settings -> Package Control -> Settings - User. From there you can edit or remove the bad URL.
1
2
0
I have installed Package Control, and I use it frequently. The problem is that I added a new repository and it is wrong, and now when I try to install another package, Sublime throws an exception. Does somebody know how I can remove a repository in Sublime Text? Note: I have the problem on OS X.
sublime text remove repository
0.099668
0
0
1,741
18,456,841
2013-08-27T04:17:00.000
0
0
0
0
python,google-app-engine,app-engine-ndb
18,468,452
2
false
1
0
You haven't described many limitations, so I assume it's just a simple copy operation you're after. "Best way" is kinda vague, I don't know what you're comparing against. The only thing that you'd want to be careful about is to do the actual work of creating the new entity, copying data over, and deleting the old entity in a transaction. This is simple to do, and will prevent you from creating duplicates in case something goes wrong. The remote API shell is definitely the least-coding-effort way to do it. You can write simple python functions to do your transactional copy, and run it in the shell. You don't need to write any extra handlers, and you don't even need to deploy a new version of your app. The problem with the remote shell is that it's probably 100x slower in accessing your datastore, so it could take a long time. If you let it run overnight, it potentially could stop if you have a hiccup in your internet connection - though this shouldn't be a huge problem if you copied your entities in a transaction, you can just restart the operation. Just as a reference, I recently ran an operation via remote API that uploaded 6000 entities, it took maybe 5 minutes. If you're ok with letting the operation run overnight, this is probably the way to go unless you have > 100K entities. The mapreduce API method will run faster, since the load will be spread across a number of instances. A bit more effort to get mapreduce set up, and you'll have to deploy a new version of your app with the functionality, kick it off, wait until it finishes, and maybe clean out the code, as well as a bunch of logging entities that mapreduce automatically generates.
1
0
0
I want to consolidate my logging data into a single StatisticStore model. Right now, my logging data is scattered around 3 models, which is a mess. What would be the best way to iterate over all those records of all 3 models, and create a copy of each in the new StatisticStore model?
Most efficient way to migrate from one model to another?
0
0
0
46
18,460,147
2013-08-27T08:04:00.000
0
0
0
1
python,linux,time,cpu,kill
18,460,519
4
false
0
0
ps aux | grep 'gnome-panel ' | awk '{if ($3>80)print $2}' | xargs kill -9
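Since the question asks for Python, the same selection logic the shell pipeline expresses can be sketched in Python. This is only a sketch: the process tuples are assumed to come from somewhere like psutil or from parsing `ps -eo pid,comm,%cpu,etimes`; only the filtering and kill steps are shown, and the function names are made up for illustration.

```python
import os
import signal

def pids_to_kill(procs, name="gnome-panel", cpu_threshold=80.0, min_age_s=60):
    """Select PIDs whose name, CPU usage and age match the criteria.

    procs is an iterable of (pid, name, cpu_percent, age_seconds) tuples,
    e.g. gathered with psutil or by parsing `ps -eo pid,comm,%cpu,etimes`.
    """
    return [pid for pid, pname, cpu, age in procs
            if pname == name and cpu > cpu_threshold and age > min_age_s]

def kill_hung(procs):
    # SIGKILL, like the `kill -9` in the shell pipeline
    for pid in pids_to_kill(procs):
        os.kill(pid, signal.SIGKILL)
```

A process matching the name but below the CPU threshold, or younger than a minute, is left alone, which addresses the "some of them are working fine" concern in the question.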
1
4
0
I have a Python script on Linux which deploys a VNC server locally, does some graphical work on the VNC screen, and kills VNC. Sometimes after the job is done a process named gnome-panel hangs and stays at 100% CPU usage. Then I need to log in through PuTTY and kill all those processes manually (sometimes a lot of them, actually). I would like to add a few lines to my Python script, to run when it finishes its job, which will not only kill VNC (it does that already) but also kill gnome-panel if it consumes a certain amount of CPU over a given time period. I can't simply kill all gnome-panel processes, as some of them are working fine (I'm deploying 4 VNC screens at the same time). So I need this condition in Python: if the process name is gnome-panel and it consumes over 80% CPU and has run for over 1 minute, kill that process ID. Thank you!
how to kill specific process using %cpu over given time in python on linux?
0
0
0
2,672
18,461,791
2013-08-27T09:32:00.000
2
0
0
0
python,django,pdf,powerpoint,openxml
18,533,110
1
true
0
0
After some research and with the help of python-pptx's creator, I was able to write to the PowerPoint COM interface using a Virtual Machine. In case someone reads this thread, this is how I managed to get this done: - Setup a VM with Microsoft Windows/Office installed on it ; - Install Python, Django and win32com libraries on the VM. The files are sent locally from the original Django project to the virtual machine (which are on the same network) through a simple POST request. The file is converted on the VM using win32com.client (which is just a simple call to the win32com.client library) and then sent back as a response to the original Django view, which in turn processes the response. Note: it took me some time to realize I needed to use the @csrf_exempt decorator for this setup to work.
1
1
0
First of all, I agree that this might sound like a question which has already been asked many times in the past. However I couldn't find any answer that was relevant to me in the similar questions so I'll try to be more specific. I would need to transform PPTX/DOCX files into PDF using Python but I don't have any experience in file format conversion. I have been looking in many places/forums/websites, read a lot of documentation and came across some useful libraries (python-pptx and pyPdf mainly), but I still don't know where to start. When looking on the Internet, I can see many websites that offer file format conversions as a paying service, even with advanced API's: submit a file via POST and get the transformed PDF file in return. This could work for me, but I am really interested in writing myself the code that does the conversion work from OOXML to PDF. How would you start doing this? Or is it just impossible on my own? Thanks for your help!
django/python: How to convert pptx/docx formats to PDF using python?
1.2
0
0
2,596
18,462,319
2013-08-27T09:56:00.000
0
0
1
0
c++,python,list,boost,reference
18,467,078
1
false
0
1
Okay, so the problem appears when the type is not declared for boost::python.
1
0
0
I have a vector of pointers to objects in C++ and want to expose it to Python as a list. So far I passed a reference to a Python list into C++. I figured raw pointers are not suitable for Python, so I read about dereferencing a pointer with (*obj). But when I call myList.append((*obj)); Python just crashes. Can someone tell me how to correctly put objects I only have pointers to into a Python list, so I can manipulate that list later? Greetings, Chris
appending object references to python list with boost
0
0
0
123
18,462,528
2013-08-27T10:06:00.000
0
0
0
0
python,mysql,commit
18,463,239
1
false
0
0
MyISAM has no transactions, so with MyISAM you cannot avoid autocommit. Your runtime change may also be caused by the fact that you moved from InnoDB to MyISAM. The best approach for database runtime issues in general is benchmarking, benchmarking and benchmarking.
1
1
0
I've been working with Python MySQLdb. With InnoDB tables autocommit is turned off in default and that was what I needed. But since I'm now working with MyISAM tables, the docs for MySQL say MyISAM tables effectively always operate in autocommit = 1 mode Since I'm running up to a few hundreds of queries a second, does committing with every single query slow down the performance of my script? Because I used to commit once every 1000 queries before, now I can't do that with MyISAM. If it slows it down, what can I try?
does autocommit slow down performance in python?
0
1
0
440
18,463,836
2013-08-27T11:14:00.000
2
0
0
1
python,c,dbus
18,476,891
1
true
0
0
dbus-monitor "sender=org.freedesktop.Telepathy.Connection.******"
1
5
0
I am writing a D-Bus service implementing some protocol. My service sends the client a message with unexpected data (the library I used has some bugs that I want to work around). How do I inspect and trace client calls? I want to determine what the client wants and locate the buggy method. Or, how do I trace all calls in the service? I have a lot of logger.debug() calls inserted. The service is Python, the client is C. How do I specify a path or service to monitor in dbus-monitor, with sender and receiver?
How debug or trace DBus?
1.2
0
0
1,405
18,466,998
2013-08-27T13:34:00.000
0
0
1
0
python-2.7
18,467,120
2
false
0
0
Just regarding the use of regular expressions: regular expressions are equivalent to finite automata, and these have the property that they have only a finite set of states, which in turn means they have a kind of finite memory. Thus you can't do things that involve matching against a target string of unknown, arbitrary length.
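For the practical problem itself no regex is needed: splitting both strings into word sets and intersecting them yields the matched words. A minimal sketch:

```python
def common_words(a, b):
    """Return the set of words appearing in both strings."""
    return set(a.split()) & set(b.split())

print(common_words("cat feet", "cat shoes"))  # -> {'cat'}
```

Using sets means duplicates and word order are ignored, which matches the example in the question ("cat feet" vs "cat shoes" giving "cat").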
1
0
0
I would like to compare 2 strings and display any matched words. For example - string1 = "cat feet" string2 = "cat shoes" The result should = "cat" How can I do this with regular expressions? Or is there a better way to do this?
Compare or match 2 strings and display matched word
0
0
0
153
18,467,222
2013-08-27T13:43:00.000
3
0
0
1
javascript,python,google-app-engine
18,468,740
3
false
1
0
I'm assuming you're paying a lot in instance hours. Reading from the GAE filesystem is rather slow. So the easiest way to optimize is only read from the static file once on your instance startup, keep the js file in memory (ie a global variable), and print it. Secondly, make sure your js is being cached by the customers so when they reload your page, you don't have to serve the js to them again unnecessarily. Next way is to serve the js file as a static file if possible. This would save you some money if the js file is big and you're consuming CPU cycles just printing it. In this case have your handler that generates the HTML insert the appropriate URL to the appropriate js file instead of regenerating the entire js each time. You'll save money because you won't get charged instance hours for files served as static files, plus they can get cached in the edge cache (GAE's CDN), and you won't get billed anything at all for them.
1
2
0
We have a piece of Javascript which is served to millions of browsers daily. In order to handle the load, we decided to go for Google App Engine. One particular thing about this piece of Javascript is that it is (very) slightly different per company using our service. So far we are handling this by serving everything through main.py which basically goes: - Read the JS static file and print it - Print custom code We do this on every load, and costs are starting to really add-up. Apart from having a static version of the file per customer, is there any other way that you could think about in order to reduce our bill? Would using memcache instead of reading a file reduce the price in any way? Thanks a lot.
Reducing Google App Engine costs
0.197375
0
0
638
18,472,394
2013-08-27T17:55:00.000
1
0
0
0
python,macos,matplotlib,wxpython
18,474,882
1
false
0
1
_idletimer is likely to be a private, possibly implementation specific member of one of the classes - since you do not include the code or context I can not tell you which. In general anything that starts with an _ is private and if it is not your own, and specific to the local class, should not be used by your code as it may change or even disappear when you rely on it.
1
1
1
I am creating a GUI program using wxPython. I am also using matplotlib to graph some data. This data needs to be animated. To animate the data I am using FuncAnimation, which is part of the matplotlib package. When I first started to write my code I was using a PC running Windows 7. I did my initial testing on this computer and everything was working fine. However, my program needs to be cross platform, so I began to run some tests on a Mac. This is where I began to encounter an error. As I explained before, in my code I have to animate some data. I programmed it such that the user has the ability to play and pause the animation. Now when the user pauses the animation I get the following error: AttributeError: 'FigureCanvasWxAgg' object has no attribute '_idletimer'. I find this very strange because, like I said, I ran this same code on a PC and never got this error. I was wondering if anyone could explain to me what is meant by this _idletimer error and what the possible causes are.
AttributeError: 'FigureCanvasWxAgg' object has no attribute '_idletimer'
0.197375
0
0
287
18,472,956
2013-08-27T18:32:00.000
6
1
0
1
python,unix
18,473,032
2
true
0
0
The Pythonic way to do it is not to care what platform you are on. If there are multiple different facilities to accomplish something depending on the platform, then abstract them behind a function or class, which should try a facility and move on to another if that facility is not available on the current platform.
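As a sketch of that idea, a small wrapper can try the Unix-only facility and fall back, so callers never have to ask which platform they are on. The function name here is made up for illustration:

```python
import os

def set_mode(target, mode):
    """Set permissions, preferring the Unix-only fchmod for open descriptors.

    If `target` is an int it is treated as a file descriptor (and fchmod
    is used where the platform provides it); otherwise it is a path.
    """
    if isinstance(target, int) and hasattr(os, "fchmod"):
        os.fchmod(target, mode)
    else:
        os.chmod(target, mode)
```

The calling code stays identical on every platform; only this one function knows that os.fchmod may be missing.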
1
5
0
I'm trying to determine if the operating system is Unix-based from a Python script. I can think of two ways to do this but both of them have disadvantages: Check if platform.system() is in a tuple such as ("Linux", "Darwin"). The problem with this is that I don't want to provide a list of every Unix-like system every made, in particular there are many *BSD varieties. Check if the function os.fchmod exists, as this function is only available on Unix. This doesn't seem like a clean or "Pythonic" way to do it.
How can I determine if the operating system a Python script is running on is Unix-like?
1.2
0
0
2,875
18,474,222
2013-08-27T19:50:00.000
3
0
0
0
python,django
18,474,264
1
true
1
0
The error is complaining about LargeImage. That's being caused by this expression: product.LargeImage. You might want to check for that first, or even better, put this in a try/except block.
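A sketch of the try/except approach: the chained lookup (product.LargeImage.URL) is walked one attribute at a time, so a missing intermediate node returns False instead of raising AttributeError before hasattr() ever runs. The class names below just mirror the question and are illustrative:

```python
def has_nested_attr(obj, dotted_path):
    """Return True only if every attribute along the dotted path exists.

    Unlike hasattr(product.LargeImage, 'URL'), a missing intermediate
    attribute makes this return False instead of raising.
    """
    try:
        for name in dotted_path.split("."):
            obj = getattr(obj, name)
    except AttributeError:
        return False
    return True
```

With this helper the check in the question becomes has_nested_attr(product, "LargeImage.URL"), and the same pattern covers the other two conditions.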
1
0
0
I'm trying to check if some xml in a django app has certain elements/nodes and if not just to skip that code block. I'm checking for the elements existance using hasattr(), which should return false if the element doesn't exist: if hasattr(product.ItemAttributes, 'ListPrice') \ and hasattr(product.Offers.Offer.OfferListing, 'PercentageSaved') \ and hasattr(product.LargeImage, 'URL'): Except in my case it's throwing an attribute error: AttributeError at /update_products/ no such child: {http://webservices.amazon.com/AWSECommerceService/2011-08-01}LargeImage I don't understand why it's throwing an error instead of just returning false and letting me skip the code block?
hasattr() throws AttributeError
1.2
0
1
361
18,474,784
2013-08-27T20:27:00.000
1
0
1
0
python
18,474,929
4
false
0
0
I am assuming you are talking about modules, which are a means of collecting sets of features and/or custom data types. They exist to enrich the Python standard library, which already contains over 200 packages and modules (they can be listed by entering help('modules') in a Python interpreter). Modules are meant to be imported and used by your programs. As an FYI, modules that provide related functionality can be grouped together in a package. PyPI, the Python Package Index, is a repository of such third-party modules. Chances are you will find an existing module for whatever task you want to accomplish; just search PyPI for whatever interests you. You have two options to install them. You can download and install them manually (here assuming the module is foo-1.0): gunzip -c foo-1.0.tar.gz | tar xf - (unpacks into directory foo-1.0), then cd foo-1.0 and python setup.py install. Or you can install them with pip, ideally in a virtualenv environment: pip install SomePackage==1.0. The typical installation path on Unix/Linux is prefix/lib/pythonX.Y/site-packages, where prefix is /usr by default. On Windows, the install path is prefix\Lib\site-packages, where prefix is C:\Program Files\Python.
2
7
0
Sorry for such a basic question, I couldn't really find an answer through google that I could understand. What exactly is a python library in laymans terms? It seems like its something that you download or import and move into a certain folder to add a specific functionality in python? If I download a library for python, does it go in /usr/lib ? Any help would be appreciated I'm really lost on this!
What exactly is a python library?
0.049958
0
0
8,727
18,474,784
2013-08-27T20:27:00.000
1
0
1
0
python
18,475,092
4
false
0
0
Python libraries are called "modules". These modules provide commonly used functionality in the form of different objects or functions. For example, there is a module that has functions you can use in your code to test if files exist on your hard drive; there are modules that have functions for implementing web-server, or web-browser functionality; there are modules to work with images; there are modules to create charts and graphs; there are modules to parse XML or HTML files; etc. The idea is that there are things lots of people might want to do with python - e.g. read HTML files. Everyone could write python code to do that themselves, but that's time consuming. So smart people write a module that does this in a well-defined - and well-documented - way. Everyone else just has to import that module and use it. Then the low-level work (e.g. reading an HTML file) is done, and you just get to use the HTML file to do whatever clever work you want to do.
2
7
0
Sorry for such a basic question, I couldn't really find an answer through google that I could understand. What exactly is a python library in laymans terms? It seems like its something that you download or import and move into a certain folder to add a specific functionality in python? If I download a library for python, does it go in /usr/lib ? Any help would be appreciated I'm really lost on this!
What exactly is a python library?
0.049958
0
0
8,727
18,475,116
2013-08-27T20:43:00.000
2
0
1
0
python,django,virtualenv,virtualenvwrapper
18,475,165
1
false
1
0
When i do tutorials like "Effective Django" they use the virtualenv command on an empty folder, then activate it. That works, until tomorrow when I want to work on the app again at which point the virtualenv is gone. I strongly doubt that this is the case, unless something is deleting your directories overnight. If that is the case, stop putting your code where it is being deleted. Assuming that is not the case, the solution is for you to go back to the directory you created as a virtualenv, and reactivate it.
1
0
0
Sorry if this is dumb, but every piece of documentation I read never seems to answer this question directly. How do I properly use virtualenv so that I have a virtualenv I can call with workon? When I follow tutorials like "Effective Django", they run the virtualenv command on an empty folder and then activate it. That works, until tomorrow when I want to work on the app again, at which point the virtualenv is gone. What do I do at this point? I've used mkvirtualenv before, and that creates a "permanent" virtualenv I can call with "workon", but I don't understand how I would use mkvirtualenv on an existing project, or whether that is a good idea or not. As it stands, I have a project I virtualenv'd yesterday that has a bin folder in it, and I am not sure if I need to source it again or what. Ideally I just want to run workon project and get to work.
virtualenv/virtualenvwrapper confusion - how to properly use
0.379949
0
0
329
18,475,321
2013-08-27T20:56:00.000
1
0
0
0
python,django,eclipse,http,pydev
18,476,086
1
true
1
0
It's the size of the response, in bytes. Note that this has nothing to do with Eclipse, it's just the way Django's runserver formats its output.
1
0
0
I am a programming newbie, and I recently installed Python + Django, and successfully created a very small web app. Everything works fine, but I am puzzled about 4 digits that appear after HTTP status codes in Eclipse's console following any request I make to my server. Example: [27/Aug/2013 22:53:32] "GET / HTTP/1.1" 200 1305 What does 1305 represent here and in every other request?
Digits in Eclipse's console after HTTP status codes
1.2
0
1
45
18,476,353
2013-08-27T22:13:00.000
-1
0
1
1
python,macos,directory,osx-lion,osx-mountain-lion
18,476,362
4
false
0
0
/usr and several other system paths are not visible in the Finder.
1
0
0
On doing 'which python' it says '/usr/local/bin/python'. But when I go there through Finder there's nothing there. I can see /Library/Python through Finder, and on clicking Library/Python I see 2.3, 2.5, 2.6, 2.7. The default Python currently is 2.7, which I can see with --version. But all it has is /site-packages. How is this possible? I am not sure if it is the one that came with the OS or if it was installed later by someone. I am so confused. OSX 10.8.4
Python installation missing a lot of things on Mountain Lion
-0.049958
0
0
130
18,479,208
2013-08-28T04:00:00.000
3
1
1
0
python,emacs
18,484,021
5
false
0
0
For the first question, use M-x speed-bar, as Alex suggested. For the second, enable hs-minor-mode (M-x hs-minor-mode), and use C-c C-@ C-S-h to hide all methods and C-c C-@ C-S-s to show them.
3
13
0
Is there a emacs plugin which lists all the methods in the module in a side pane. I am looking for a plugin which has keyboard shortcuts to show/hide all the methods in python module file currently opened.
Emacs plugin to list all methods in a python module
0.119427
0
0
3,818
18,479,208
2013-08-28T04:00:00.000
1
1
1
0
python,emacs
33,194,710
5
false
0
0
For me, the easiest and most convenient method to quickly lookup methods is the command helm-occur (C-x c M-s o). You start typing the name of the method you want to jump to and suggestions start popping in as you type. Then you hit enter to select the one you want and your cursor jumps right there in the code. Helm-occur wasn't strictly written for this purpose, but works quite well that way.
3
13
0
Is there a emacs plugin which lists all the methods in the module in a side pane. I am looking for a plugin which has keyboard shortcuts to show/hide all the methods in python module file currently opened.
Emacs plugin to list all methods in a python module
0.039979
0
0
3,818
18,479,208
2013-08-28T04:00:00.000
0
1
1
0
python,emacs
40,116,159
5
false
0
0
Speedbar is good, and another nice alternative is helm-imenu. I've bind several keys to access it quicky from different contexts and use it most of the time
3
13
0
Is there a emacs plugin which lists all the methods in the module in a side pane. I am looking for a plugin which has keyboard shortcuts to show/hide all the methods in python module file currently opened.
Emacs plugin to list all methods in a python module
0
0
0
3,818
18,479,236
2013-08-28T04:02:00.000
3
0
1
0
python,ipython,dispatch,docstring
18,479,378
2
false
0
0
You can set foo.__doc__ = "my doc string".
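A minimal sketch combining this with the question's __getattr__ scenario: build the method and attach its docstring in the same lookup, so IPython's ? (which reads __doc__) sees the dynamically generated text. The class and method names are illustrative:

```python
class Api:
    """Builds methods on the fly and attaches a docstring at the same time."""

    def __getattr__(self, name):
        def method(*args, **kwargs):
            return (name, args, kwargs)
        # IPython's `obj.attr?` reads __doc__, so setting it here is enough.
        method.__doc__ = "Dynamically generated wrapper for %r." % name
        method.__name__ = name
        return method
```

In an IPython session, Api().frobnicate? would then display the generated docstring, with no extra protocol needed.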
1
0
0
I like that IPython will fetch docstrings if I type foo.bar? However, I may sometimes build the foo.bar method dynamically, using foo.__getattr__. I could conceivably also generate a docstring dynamically, perhaps in a magic method like foo.__getdoc__. Does IPython provide any mechanism for doing this, such that it would discover and display docstrings built on the fly?
Is there a way to generate IPython docstrings on the fly?
0.291313
0
0
708
18,481,533
2013-08-28T07:15:00.000
1
0
0
0
python,wxpython,listctrl
18,491,970
1
true
0
1
No, I think that's built-in. You would have to catch the selection event and probably make all the cells editable so you could select just the cell. Otherwise I would look at UltimateListCtrl as that is a custom widget and you can probably subclass it in such a way as to add that functionality. Or don't use a ListCtrl at all and switch to using a wx.Grid
1
1
0
I noticed wx.ListCtrl would always highlight a whole row wherever it's clicked by default, is there a way to make it only highlight the selected cell?
Is there a way to only highlight a cell instead of the whole row when it's clicked with wx.ListCtrl?
1.2
0
0
97
18,484,879
2013-08-28T10:03:00.000
1
0
1
0
java,python,mahout,py4j
21,098,619
5
false
1
0
I don't know Mahout, but consider this: at least with JPype and Py4J you will have a performance impact when converting types from Java to Python and vice versa. Try to minimize calls between the languages. Maybe an alternative for you is to code a thin wrapper in Java that condenses many Java calls into one Python-to-Java call.
1
7
0
After searching for an option to run Java code from Django application(python), I found out that Py4J is the best option for me. I tried Jython, JPype and Python subprocess and each of them have certain limitations: Jython. My app runs in python. JPype is buggy. You can start JVM just once after that it fails to start again. Python subprocess. Cannot pass Java object between Python and Java, because of regular console call. On Py4J web site is written: In terms of performance, Py4J has a bigger overhead than both of the previous solutions (Jython and JPype) because it relies on sockets, but if performance is critical to your application, accessing Java objects from Python programs might not be the best idea. In my application performance is critical, because I'm working with Machine learning framework Mahout. My question is: Will Mahout also run slower because of Py4J gateway server or this overhead just mean that invoking Java methods from Python functions is slower (in latter case performance of Mahout will not be a problem and I can use Py4J).
Py4J has bigger overhead than Jython and JPype
0.039979
0
0
6,465
18,489,613
2013-08-28T13:40:00.000
3
0
0
0
python,wxpython,glob,filebrowse
18,489,942
1
true
0
1
The documentation is not very clear. You should be using a semi-colon inside parenthesis, like so: "TXT and CSV files (*.txt; *.csv)|*.txt; *.csv" You can also add a second line like so: "TXT and CSV files (*.txt; *.csv)|*.txt; *.csv|PNG files (*.png)|*.png"
1
3
0
I'm trying to make a wx.lib.filebrowsebutton.FileBrowseButton button match both txt and csv files, but it doesn't seem to support glob pattern as described, *.{txt,csv} ends up matching nothing on windows and it literally tries to look for files with extension of {txt,csv}. So how do I make it work for both txt and csv files?
Is there a way to add multiple filemasks in wx.lib.filebrowsebutton.FileBrowseButton?
1.2
0
0
157
18,491,040
2013-08-28T14:39:00.000
0
0
0
0
python,django
18,494,672
2
false
1
0
Experienced Django users seem to always err on the side of putting code in models. In part, that's because it's a lot easier to unit test models - they're usually pretty self-contained, whereas views touch both models and templates. Beyond that, I would just ask yourself if the code pertains to the model itself or whether it's specific to the way it's being accessed and presented in a given view. I don't entirely understand your example (I think you're going to have to post some code if you want more specific help), but everything you mention sounds to me like it belongs in the model. That is, creating a new Operation sounds like it's an inherent part of what it means to do something called add_operation()!
1
3
0
I'm pretty novice, so I'll try to explain in a way that you can understand what I mean. I'm coding a simple application in Django to track cash operations, amounts, etc. So I have an Account model (with an amount field to track how much money is inside) and an Operation model (with an amount field as well). I've created a model helper called Account.add_operation(amount). Here is my question: should the code that creates the new Operation live inside Account.add_operation(amount), or should I do it in the views? And should the save() method be called in the models (for example at the end of Account.add_operation()), or must it be called in the views? What's the best approach: to have the code inside the models or inside the views? Thanks for your attention and your patience.
Django MVT design: Should I have all the code in models or views?
0
0
0
166
18,492,467
2013-08-28T15:42:00.000
0
0
0
1
python,django,postgresql
18,496,589
2
false
1
0
The libpq driver, which is what the psycopg2 driver usually used by Django is built on, does not support forking an active connection. I'm not sure whether there might be another driver that does, but I would assume not - the protocol does not support multiplexing multiple sessions on the same connection. The proper solution to your problem is to make sure each forked process uses its own database connection. The easiest way is usually to wait to open the connection until after the fork.
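A generic sketch of that pattern, with sqlite3 standing in for the real driver (in a Django process you would typically also call django.db.connection.close() in the parent before forking, so the child is forced to open a fresh one). The function names are made up for illustration:

```python
import os
import sqlite3

def open_connection():
    # Stand-in for the real database driver; each process creates its own.
    return sqlite3.connect(":memory:")

def handle_in_child(work):
    """Fork, then open the database connection only inside the child."""
    pid = os.fork()
    if pid == 0:
        conn = open_connection()   # fresh connection, nothing inherited
        try:
            work(conn)
        finally:
            conn.close()
            os._exit(0)
    os.waitpid(pid, 0)             # parent never shared a connection
```

Because the connection is created strictly after fork(), no socket (or SSL session state) is ever shared between parent and child, which is what the "decryption failed or bad record mac" errors point at.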
2
2
0
I have an application which receives data over a TCP connection and writes it to a postgres database. I then use a django web front end to provide a gui to this data. Since django provides useful database access methods my TCP receiver also uses the django models to write to the database. My issue is that I need to use a forked TCP server. Forking results in both child and parent processes sharing handles. I've read that Django does not support forking and indeed the shared database connections are causing problems e.g. these exceptions: DatabaseError: SSL error: decryption failed or bad record mac InterfaceError: connection already closed What is the best solution to make the forked TCP server work? Can I ensure the forked process uses its own database connection? Should I be looking at other modules for writing to the postgres database?
Forking Django DB connections
0
1
0
941
18,492,467
2013-08-28T15:42:00.000
1
0
0
1
python,django,postgresql
18,531,322
2
false
1
0
So one solution I found is to create a new thread to spawn from. Django opens a new connection per thread so spawning from a new thread ensures you pass a new connection to the new process. In retrospect I wish I'd used psycopg2 directly from the beginning rather than Django. Django is great for the web front end but not so great for a standalone app where all I'm using it for is the model layer. Using psycopg2 would have given be greater control over when to close and open connections. Not just because of the forking issue but also I found Django doesn't keep persistent postgres connections - something we should have better control of in 1.6 when released and should for my specific app give a huge performance gain. Also, in this type of application I found Django intentionally leaks - something that can be fixed with DEBUG set to False. Then again, I've written the app now :)
2
2
0
I have an application which receives data over a TCP connection and writes it to a postgres database. I then use a django web front end to provide a gui to this data. Since django provides useful database access methods my TCP receiver also uses the django models to write to the database. My issue is that I need to use a forked TCP server. Forking results in both child and parent processes sharing handles. I've read that Django does not support forking and indeed the shared database connections are causing problems e.g. these exceptions: DatabaseError: SSL error: decryption failed or bad record mac InterfaceError: connection already closed What is the best solution to make the forked TCP server work? Can I ensure the forked process uses its own database connection? Should I be looking at other modules for writing to the postgres database?
Forking Django DB connections
0.099668
1
0
941
18,494,959
2013-08-28T17:55:00.000
0
0
1
1
python,class,methods,tkinter,logic
18,525,402
1
false
0
0
You just have to make an object of type vntProcessor, if this class exists, or import the module vntProcessor in the GUI module, so you can use its functions and process the data (path and subfolder list).
1
0
0
I am making an application that has a GUI and some classes, and I ran into a problem with the logic. Here is a brief description of the program. Structure: 3 modules. Module 1 - dataPreparation.py - responsible for string processing - made of several classes and methods that receive a PATH to a directory, collect all files in a LIST, and then sort each file, based on the type of file name, into appropriate categories that can be accessed through class instances. Module 2 - gui.py - responsible for the GUI. It creates a simple GUI layout that offers a BROWSE button (to get the PATH), a QUIT button to exit the application, a LISTBOX that lists subfolders from the PATH, and a BATCH button that must execute the main processor. Module 3 - vntProcessor.py - responsible for processing the collected data. This module is based on the API of another application. It receives the values from the BATCH button and invokes specific methods based on the sorting performed by Module 1. So here is the logic problem I encountered, and I wanted to ask what the best way to handle it is. My approach: I created scene7_vntAssembler.py. This file imports Module 1 (dataSorting), Module 2 (GUI), and Module 3 (Processor). I create an instance of the GUI and call it to start the interface (have a window open). In the interface, I browse to a specific folder, so my PATH variable is set, and my list box is populated with subfolders. My next step should be to press the BATCH button and forward all of the values (PATH and ARRAY of SUBFOLDERS) to my Module 3 (processor). Problem: I cannot figure out a way to do that. How do I pass the PATH and SUBFOLDER LIST to Module 3 and invoke operations on the collected data?
Python - Application Logic
0
0
0
135
18,495,773
2013-08-28T18:39:00.000
1
0
0
0
python,mysql,django
18,496,083
1
false
1
0
MySQL controls access to tables from its own list of users, so it's better to create MySQL users with permissions. You might want to create roles instead of users so you don't have as many to manage: an Admin, a read/write role, a read-only role, etc. A Django application always runs as the web server user. You could change that to "impersonate" an Ubuntu user, but what if that user is deleted? Leave it as "www-data" and manage the database role that way.
1
3
0
The company I work for is starting development of a Django business application that will use MySQL as the database engine. I'm looking for a way to keep from having database credentials stored in a plain-text config file. I'm coming from a Windows/IIS background where a vhost can impersonate an existing Windows/AD user, and then use those credentials to authenticate with MS SQL Server. As an example: If the Django application is running with apache2+mod_python on an Ubuntu server, would it be sane to add a "www-data" user to MySQL and then let MySQL verify the credentials using its PAM module? Hopefully some of that makes sense. Thanks in advance!
Can a Django application authenticate with MySQL using its linux user?
0.197375
1
0
691
18,495,789
2013-08-28T18:41:00.000
3
0
0
0
python,networkx
32,207,696
1
false
0
0
Just type pip install networkx --upgrade. This will get you the latest release of NetworkX (1.10 as of now).
1
0
0
All_shortest_paths is not working in version 1.6, and I would like to update to version 1.7. Is there a simple update command I can use?
How do i update the networkx from version 1.6 to 1.7?
0.53705
0
1
3,283
18,499,338
2013-08-28T22:29:00.000
1
0
0
0
python,security,lucene,debian,elasticsearch
18,514,779
5
false
0
0
There is no restriction by default; Elasticsearch exposes a standard HTTP API on port 9200. From your third-party server, are you able to run: curl http://es_hostname:9200/ ?
2
5
0
I have a default installation of Elasticsearch which I am trying to query from a third party server. However, it seems that by default this is blocked. Is anyone please able to tell me how I can configure Elasticsearch so that I can query it from a different server?
Allowing remote access to Elasticsearch
0.039979
0
1
15,468
18,499,338
2013-08-28T22:29:00.000
3
0
0
0
python,security,lucene,debian,elasticsearch
37,047,491
5
false
0
0
In config/elasticsearch.yml, put network.host: 0.0.0.0, and also add an inbound rule in your firewall for your Elasticsearch port (9200 by default). This worked in Elasticsearch version 2.3.0.
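A minimal sketch of the relevant config/elasticsearch.yml lines (exact keys can differ between Elasticsearch versions; 0.0.0.0 binds to all interfaces, so restrict access with your firewall):

```yaml
# config/elasticsearch.yml -- bind to all interfaces (firewall this!)
network.host: 0.0.0.0
# http.port defaults to 9200; shown here only for clarity
http.port: 9200
```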
2
5
0
I have a default installation of Elasticsearch which I am trying to query from a third party server. However, it seems that by default this is blocked. Is anyone please able to tell me how I can configure Elasticsearch so that I can query it from a different server?
Allowing remote access to Elasticsearch
0.119427
0
1
15,468
18,499,379
2013-08-28T22:32:00.000
1
0
1
0
python,qt,user-interface,qt-designer
18,715,408
1
true
0
1
You can't save two forms in one .ui file, but I think you can do this (I have not tried it, but it should work): save, for example, one QDialog and one QMainWindow (a QFrame can't stand alone — it needs a parent widget) in separate .ui files, convert them via the pyuic4 command to two .py files, and then join those two files into one .py module (copy the class from one into the other and create the instance in __main__). That is not very practical when you want to change something in Designer, because you'll need to repeat the whole procedure.
1
0
0
You know that after working in Qt Designer and converting via the pyuic4 command to an executable program or module, you can modify your code, merge it together, and build a complete program. But I have a serious question: suppose I have a QMainWindow and some QFrames, and I don't want to save them in separate *.ui files — I need to save them in just one *.ui file. Is that possible?
some Qframe and other widget just in one *ui files (pyuic4)
1.2
0
0
114
18,500,496
2013-08-29T00:32:00.000
4
1
0
1
python,ftp
18,500,555
2
false
0
0
The best way to solve this would be to have the sending process SFTP to a holding area, and then (presumably using SSH) execute a mv command to move the file from the holding area to the final destination area. Then, once the file appears in the destination area, your script knows that it is completely transferred.
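The "upload to a holding area, then move" pattern works because a rename within the same filesystem is atomic, so the watcher can never see a half-written file in the destination directory. A minimal sketch of the receiving side (the holding-area layout here is an assumption, not part of the question):

```python
import os
import tempfile

def receive_upload(data: bytes, final_dir: str, name: str) -> str:
    """Write incoming bytes to a holding area, then atomically move them
    into final_dir so watchers only ever see complete files."""
    holding_dir = os.path.join(final_dir, ".incoming")
    os.makedirs(holding_dir, exist_ok=True)
    tmp_path = os.path.join(holding_dir, name)
    with open(tmp_path, "wb") as f:
        f.write(data)            # this is the slow part for a real transfer
    final_path = os.path.join(final_dir, name)
    os.replace(tmp_path, final_path)   # atomic on the same filesystem
    return final_path
```

The same idea applies when the sender does the move over SSH: upload to the holding directory via SFTP, then run mv as a second command.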
1
5
0
I am using Python to develop an application that does the following: Monitors a particular directory and watches for file to be transferred to it. Once the file has finished its transfer, run some external program on the file. The main issue I have developing this application is knowing when the file has finished transferring. From what I know the file will be transferred via SFTP to a particular directory. How will Python know when the file has finished transferring? I know that I can use the st_size attribute from the object returned by os.stat(fileName) method. Are there more tools that I need to use to accomplish these goals?
Using Python to Know When a File Has Completely Been Received From an FTP Source
0.379949
0
0
5,218
18,501,677
2013-08-29T03:06:00.000
1
0
0
0
python,wxpython,filedialog
18,511,156
1
true
0
1
This is not exclusive to wxPython and is not a bug. Try this in any Windows application and you will see you can save a txt file as a .exe in Notepad or open a .png file in MS Word. A file extension is just a convention, which means it can be broken for any number of reasons. If you are confident that you must check the file extension of a file, you will need to perform some validation with the return value of the wx.FileDialog.
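Since the dialog cannot enforce the extension, you can validate the path that wx.FileDialog returns before using it. The check itself needs nothing from wx, so it can be sketched standalone (the allowed-extensions tuple is just an example):

```python
import os

def matches_wildcard(path: str, allowed_exts=(".txt",)) -> bool:
    """Return True if the chosen file's extension is in the allowed set
    (case-insensitive), mirroring what a '*.txt' wildcard intends."""
    ext = os.path.splitext(path)[1].lower()
    return ext in allowed_exts
```

In the real application you would run this on the dialog's chosen path after the user hits Open, and show an error (or reopen the dialog) when it fails.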
1
0
0
I noticed that even if you set your wildcard to match *.txt files only, all wx.FileDialog does is list the .txt files under that directory; you can still type in the name of any existing file with a different extension and hit the Open button without any problem at all. There doesn't seem to be a window style to prevent this, so I guess you have to validate the file extension yourself, right? Interesting — does this count as a bug?
How to make wx.FileDialog(wx.FD_OPEN mode) check input file name extension against the wildcard?
1.2
0
0
333
18,503,561
2013-08-29T06:18:00.000
-1
1
0
1
python,linux,macos,permissions,crontab
18,503,894
1
false
0
0
@Lucas Ou-Yang @Hyperboreus As Hyperboreus said, it depends on the privileges of the user who runs it. I think that if you give the /tmp/ directory 777 permissions from the root account it'll be fixed: chmod 777 -R /tmp/. Okay, try with: chmod 777 /tmp/. If the error persists, check whether the directory /tmp/ exists!
1
1
0
I hear that the permission levels via crontab and the terminal are totally different. More specifically, my Python script has a command to write a file into the /tmp/ directory. On a Linux machine everything works, both from cron and from a regular shell. However, on OS X the script runs fine from the terminal, but when this command is set in the crontab, an error appears saying that we don't have permission to write to the /tmp directory. How should I handle this? Thanks.
Running python script on crontab is causing permissions errors but running via terminal is fine
-0.197375
0
0
510
18,506,274
2013-08-29T08:50:00.000
0
0
1
0
python,main
18,506,421
3
false
0
0
When the Python interpreter runs a module (the source file) as the main program, it sets the special __name__ variable to the value "__main__" — it does not set it to main().
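A minimal illustration of the guard (the function name main is only a convention; the double underscores belong to the variable __name__ and the string "__main__", not to your function):

```python
def main():
    # Your program's entry point; calling it "main" is just a convention.
    print("running as a script")
    return 0

if __name__ == "__main__":   # true only when this file is run directly
    main()                   # skipped when the file is imported as a module
```

Importing this file from another module will define main() without running it, which is exactly why the guard is used instead of a bare main() call.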
1
2
0
Also, why do we use the underscores? After all, I define the main method as main(), not as __main__().
Why using "if __name__=='__main__': main()" and not simply "main()" in Python?
0
0
0
8,560
18,507,246
2013-08-29T09:35:00.000
2
0
0
0
python,database,django,model,internationalization
18,507,451
2
false
1
0
I have already made a project like this. I used one table with mixed languages, with a column to specify which language each row is in. I have had no problems with this implementation. Another approach I had thought of is to dynamically create a table like content_ and fill it in, but that is very tedious (you have to manage id dependencies with other tables) and it was not necessary for me. Do you have a fixed number of languages?
1
1
0
In my Django Project I am using the i18n internationalization to translate all templates. Now, depending on the chosen language I also would like to separate the data that users are submitting to the database. I do not want to have mixed languages in one table. What is the best approach how to solve this problem? I am developing using Django 1.5.2.
How to separate languages in Django Model / Database
0.197375
0
0
82
18,511,206
2013-08-29T12:36:00.000
2
0
0
0
python,matplotlib,histogram2d
18,511,409
1
true
0
0
Extent defines the image's max and min of the horizontal and vertical values. It takes four values, like so: extent=[horizontal_min, horizontal_max, vertical_min, vertical_max].
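For illustration, a minimal sketch (using the non-interactive Agg backend, with made-up random data) that maps a 2D histogram's bin edges onto the axes via extent — without it, imshow would label the axes in pixel indices rather than data coordinates:

```python
import matplotlib
matplotlib.use("Agg")            # non-interactive backend for this sketch
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.normal(size=1000), rng.normal(size=1000)
counts, xedges, yedges = np.histogram2d(x, y, bins=20)

fig, ax = plt.subplots()
# extent=[horizontal_min, horizontal_max, vertical_min, vertical_max]
im = ax.imshow(counts.T, origin="lower",
               extent=[xedges[0], xedges[-1], yedges[0], yedges[-1]])
```

So for a histogram, the natural choice of extent is simply the first and last bin edges in each direction.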
1
1
1
I want to use imshow() to create an image of a 2D histogram. However, in several of the examples I've seen, the 'extent' is defined. What does 'extent' actually do, and how do you choose appropriate values?
What does extent do within imshow()?
1.2
0
0
128
18,513,411
2013-08-29T14:14:00.000
2
0
0
1
python,google-app-engine,optimization,memcached,entity
18,520,162
1
true
1
0
If you're not using NDB, use NDB. Your data won't change; just the way you interface with the datastore will. NDB entities are automatically cached, so any requests by key are searched for in memcache first and then in the datastore if the entity is not found. NDB is the new standard anyway, so you might as well switch now instead of later.
1
1
0
I have several appengine entities that are frequently read in different places of my application and not so frequently updated. I'd like to use memcache to reduce the number of datastore reads of my app, but I don't really want to update my code everywhere. I was wondering if there is a decent way to override the get() method of my entity to check if we stored it in memcache before doing a datastore read, and to use put() to delete this memcache entry. Does someone have a good solution for that?
Is there a efficient way to override get() and put() method in an appengine entity to make it use memcache?
1.2
0
0
148
18,513,932
2013-08-29T14:37:00.000
1
0
1
1
python,macos,python-imaging-library,pillow
19,933,131
3
false
0
0
I'm assuming you've solved the problem by now. If you haven't, you can now install Pillow using brew by running brew install samueljohn/python/pillow. Why it isn't just brew install pillow is beyond me.
1
2
0
I'm attempting to install the Pillow fork of PIL, and every method I've tried results in this error: unable to execute /: Permission denied error: command '/' failed with exit status 1 This occurs using pip install Pillow using easy_install Pillow-master.zip using python setup.py install using python setup.py build The last is the most confusing to me, honestly. I can't even build the module in the same directory it's been extracted to. I've made sure to install all of the prereqs using homebrew, just as the Pillow readme suggests. This error has not occurred when installing other python modules. Edit: I have run all of these commands with and without sudo.
Permission denied error installing Pillow (PIL) on mac
0.066568
0
0
1,554
18,515,276
2013-08-29T15:36:00.000
0
0
1
0
python,random
18,515,364
5
false
0
0
Yes, plenty. Determine the number of times to iterate. Start a random number generator. Set a variable previous to None or similar. Start iterating: generate the next random item and compare it with previous. If it is the same, skip it and draw again.
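The steps above can be sketched like this — a minimal, hedged example (any item pool works as long as it has more than one element, otherwise the redraw loop would never terminate):

```python
import random

def pick_non_repeating(items, n):
    """Yield n random picks from items, never the same item twice in a row."""
    previous = None
    for _ in range(n):
        choice = random.choice(items)
        while choice == previous:      # skip immediate repeats and redraw
            choice = random.choice(items)
        yield choice
        previous = choice

picks = list(pick_non_repeating(["a", "b", "c", "d", "e"], 100))
```

Every item can still appear many times overall; only immediate repetition is ruled out.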
1
1
0
I am trying to constantly show one of 5 possible items randomly (100 times or so), BUT the important thing is that the SAME item is not allowed to be shown twice in a row. There always needs to be (at least) one other item in between. Any ideas? Thank you so much.
pick random item from a list with constantly reapeating items, but not repeat one twice immediately after
0
0
0
974
18,519,356
2013-08-29T19:19:00.000
1
0
0
0
python,algorithm,cluster-analysis,data-mining,dbscan
18,537,377
1
true
0
0
Note that DBSCAN doesn't actually need the distances. Look up Generalized DBSCAN: all it really uses is an "is a neighbor of" relationship. If you really need to incorporate uncertainty, look up the various DBSCAN variations and extensions that handle imprecise data explicitly. However, you may get pretty much the same results just by choosing a threshold for epsilon that is somewhat reasonable. There is room for choosing a larger epsilon than the one you deem adequate: if you want to use epsilon = 1km, and you assume your data is imprecise on the range of 100m, then use 1100m as epsilon instead.
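One way to make that idea concrete in the Generalized DBSCAN spirit is a custom neighbor predicate that subtracts each point's reported accuracy from the measured distance. This is only a sketch: it uses plain Euclidean distance in meters for brevity, whereas real lat/long data would need a great-circle (haversine) distance, and the per-point slack is an assumption rather than the answer's exact 1100m recipe.

```python
import math

def is_neighbor(p, q, eps):
    """Generalized-DBSCAN-style neighbor test for imprecise points.

    Each point is (x, y, accuracy_radius) in meters; two points count as
    neighbors if their distance, minus both uncertainty radii, is within eps.
    """
    dist = math.hypot(p[0] - q[0], p[1] - q[1])
    slack = p[2] + q[2]          # per-point reported GPS uncertainty
    return dist - slack <= eps
```

With a constant accuracy for every point, this reduces to simply inflating epsilon, as the answer suggests.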
1
0
1
I've been running scikit-learn's DBSCAN implementation to cluster a set of geotagged photos by lat/long. For the most part it works pretty well, but I came across a few instances that were puzzling. For instance, there were two sets of photos for which the user-entered text field specified that the photo was taken at Central Park, but the lat/longs for those photos were not clustered together. The photos themselves confirmed that both sets of observations were from Central Park, but the lat/longs were in fact further apart than epsilon. After a little investigation, I discovered that the reason for this was that the lat/long geotags (which were generated from the phone's GPS) are pretty imprecise. When I looked at the location accuracy of each photo, I discovered that they ranged widely (I've seen a margin of error of up to 600 meters) and that when you take the location accuracy into account, these two sets of photos are within a nearby distance in terms of lat/long. Is there any way to account for the margin of error in lat/long when you're doing DBSCAN? (Note: I'm not sure if this question is as articulate as it should be, so if there's anything I can do to make it clearer, please let me know.)
DBSCAN with potentially imprecise lat/long coordinates
1.2
0
0
723
18,528,258
2013-08-30T08:24:00.000
2
0
1
0
python,types,naming
18,528,316
2
false
0
0
Python uses what is called "duck typing": if it looks like a duck and it sounds like a duck, you might as well call it a duck. To manage this, you can use: parameter type checking, parameter conversion, exceptions, and documentation.
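For the run_date example from the question, parameter conversion plus type checking might look like the sketch below. The accepted string format ("YYYY-MM-DD") is an assumption for illustration; pick whatever format your callers actually use.

```python
from datetime import date, datetime

def normalize_run_date(run_date):
    """Accept a date, a datetime, or an ISO 'YYYY-MM-DD' string and
    always return a datetime.date.

    Checking datetime before date matters: datetime subclasses date,
    so the isinstance tests must go from most to least specific.
    """
    if isinstance(run_date, datetime):
        return run_date.date()
    if isinstance(run_date, date):
        return run_date
    if isinstance(run_date, str):
        return datetime.strptime(run_date, "%Y-%m-%d").date()
    raise TypeError(f"unsupported run_date type: {type(run_date).__name__}")
```

Doing this conversion once at the top of the function means the rest of the code can rely on a single type, which also makes the expected type easy to document.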
1
1
0
I'm new to Python and just have one question: Python doesn't require you to declare variable types. For example, when we write functions, we don't declare which type should be passed in. So sometimes I can't figure out which type a passed-in parameter actually is, or whether I am passing an invalid parameter. For example, a parameter named run_date: its type could be string, datetime, or date. I have to find it out from the code... Is there a way to solve this problem? I think I should use good naming, but how? I don't mean checking the type in the code; I just get confused by function parameters when coding... I always forget which type a parameter is...
python handle different types
0.197375
0
0
484
18,532,363
2013-08-30T11:56:00.000
1
0
0
0
python,plone,xml-rpc,zope
27,062,942
2
false
0
0
Be aware that control characters in the response content can block the parsing of the XML and, for example, just cut the data. At least, this is what happened to me. Maybe I'm a little late; anyway, I spent so much time on this issue today that I would like to share what I finally found. At the beginning I tried to check whether there was a limit somewhere: in the XML-RPC server PHP code, in my Apache HTTP server, in the Python XML-RPC client (based on the incution xml-rpc library)... but found nothing. Then I began searching for hidden control characters in the response content and finally found a record separator ASCII character. I deleted it and everything worked well.
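A hedged sketch of the kind of cleanup that resolved it: stripping the ASCII control characters that are illegal in XML 1.0 (everything below 0x20 except tab, LF, and CR) from the payload before it is XML-RPC-serialized. This is a generic sanitizer, not the asker's actual fix.

```python
import re

# ASCII control characters that XML 1.0 forbids:
# 0x00-0x08, 0x0B, 0x0C, 0x0E-0x1F (tab, LF, CR are allowed).
_CONTROL_CHARS = re.compile(r"[\x00-\x08\x0b\x0c\x0e-\x1f]")

def strip_control_chars(text: str) -> str:
    """Remove control characters (e.g. the record separator \x1e) that
    would otherwise break or truncate XML-RPC parsing."""
    return _CONTROL_CHARS.sub("", text)
```

Running server-side output through a filter like this (or rejecting such input at write time) avoids the silent truncation described in the question.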
1
1
0
This is likely too general a question but, after googling around for hours, I haven't found anything. I have a web application based on Zope/Plone/Python where Zope/Plone is used, among other things, as a SOAP and XML-RPC web server. However, sometimes (when the response is "big") my XML-RPC response is truncated(*), as if the XML-RPC protocol could not handle more than "x" characters (or bytes). Is anyone aware of this? Bonus question: if you were in my shoes, what would you look for during the investigation? I've also added the "python" tag because the Zope/Plone components are written in this language and maybe some Python folks could help me. (*) Received truncated by the caller (which is on another network, for example).
Does xml-rpc have some kind of "response length limit"?
0.099668
0
1
2,236
18,532,596
2013-08-30T12:09:00.000
0
0
0
0
python,scrapy,desktop-application,py2exe,pyinstaller
18,535,442
3
false
1
0
The simplest way is to write a Python script for them, I guess... If you are running Windows Server, you can even schedule the command that you use (scrapy crawl yourspider) to run the spiders.
1
3
0
I have a set of Scrapy spiders. They need to be run daily from a desktop application. What is the simplest way (from user's point of view) to install and run it on another windows machine?
How do I package a Scrapy script into a standalone application?
0
0
0
2,202