| Column | Type | Range / lengths |
|---|---|---|
| Q_Id | int64 | 337 to 49.3M |
| CreationDate | string | length 23 |
| Users Score | int64 | -42 to 1.15k |
| Other | int64 | 0 to 1 |
| Python Basics and Environment | int64 | 0 to 1 |
| System Administration and DevOps | int64 | 0 to 1 |
| Tags | string | lengths 6 to 105 |
| A_Id | int64 | 518 to 72.5M |
| AnswerCount | int64 | 1 to 64 |
| is_accepted | bool | 2 classes |
| Web Development | int64 | 0 to 1 |
| GUI and Desktop Applications | int64 | 0 to 1 |
| Answer | string | lengths 6 to 11.6k |
| Available Count | int64 | 1 to 31 |
| Q_Score | int64 | 0 to 6.79k |
| Data Science and Machine Learning | int64 | 0 to 1 |
| Question | string | lengths 15 to 29k |
| Title | string | lengths 11 to 150 |
| Score | float64 | -1 to 1.2 |
| Database and SQL | int64 | 0 to 1 |
| Networking and APIs | int64 | 0 to 1 |
| ViewCount | int64 | 8 to 6.81M |
16,161,992 |
2013-04-23T05:57:00.000
| 3 | 0 | 0 | 0 |
java,python,protocol-buffers
| 16,162,045 | 2 | false | 0 | 0 |
Is a .proto file enough to get all the data out of?
The .proto file is used to define the structure of the message. Each field is given a tag number. As long as you have the right .proto file, the data can be de-serialized correctly. Yes, the .proto file will suffice.
one team can create the proto file and serialize the data with Java, and another team can simply take the file and use Python to deserialize it; is that a correct assumption?
One team can create the structure needed to define the data being sent/received, and others can use that definition to communicate. As long as both teams use the same .proto file and the tag numbers are assigned correctly, you should have no trouble doing what you're asking.
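A minimal sketch of that cross-language round trip, seen from the Python side. The file names and the generated module name are assumptions; ParseFromString is the standard interface of protoc-generated Python code.
```python
# Hypothetical person.proto shared by both teams:
#   message Person {
#     required string name = 1;
#     required int32 id = 2;
#   }
# `protoc --python_out=. person.proto` generates person_pb2.py.
import person_pb2  # hypothetical generated module

# Read bytes that the Java team serialized with the same .proto:
with open("person.bin", "rb") as f:
    person = person_pb2.Person()
    person.ParseFromString(f.read())  # decodes using the shared schema
print(person.name)
```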
| 2 | 2 | 0 |
I'm missing something with protobuffers. Here are some questions that I'm having difficulty answering.
Is a .proto file enough to get all the data out of? In the address book example on the site, they seem to write the data out to a file and consume it from a file (separate from the .proto file itself). I understand that proto serializes the object structure and I know that it can serialize a message, however I'm having a hard time finding where to put the data and retrieve it within one contained .proto file itself.
If the question above is answered as I think it will be, my assumption is that one team can create the proto file and serialize the data with Java, and another team can simply take the file and use Python to deserialize it. Is that a correct assumption?
|
Please help me understand protocol buffers
| 0.291313 | 0 | 1 | 165 |
16,161,992 |
2013-04-23T05:57:00.000
| 0 | 0 | 0 | 0 |
java,python,protocol-buffers
| 16,162,532 | 2 | false | 0 | 0 |
Think of it this way
Client side java code
- proto encoder
- - bytes
======= network =======
- - bytes
- proto decoder
Server side java code
Instead of the network, the bytes may be written to a file by one side and read from it by the other. Whichever.
In order for the proto encoder and proto decoder to do their job, they need to understand the format of the bytes coming in. The .proto file describes that format.
It's somewhat analogous to sending an image over the network: both sides need to know it's a png file. If one sends png and the other tries to decode jpg, things won't work. The same way as jpg vs png describe the image format, proto files describe the data format.
| 2 | 2 | 0 |
I'm missing something with protobuffers. Here are some questions that I'm having difficulty answering.
Is a .proto file enough to get all the data out of? In the address book example on the site, they seem to write the data out to a file and consume it from a file (separate from the .proto file itself). I understand that proto serializes the object structure and I know that it can serialize a message, however I'm having a hard time finding where to put the data and retrieve it within one contained .proto file itself.
If the question above is answered as I think it will be, my assumption is that one team can create the proto file and serialize the data with Java, and another team can simply take the file and use Python to deserialize it. Is that a correct assumption?
|
Please help me understand protocol buffers
| 0 | 0 | 1 | 165 |
16,163,085 |
2013-04-23T07:09:00.000
| 1 | 0 | 1 | 0 |
python,virtualenv,pip
| 16,164,433 | 1 | false | 0 | 0 |
In reality, there is a more severe danger: pip downloads packages via plain HTTP, and even if you force it to use HTTPS, it cannot check whether the certificate is valid (since the Python stdlib does not have that functionality either). That is, someone in the middle can mask the expected package with something that you will readily install (sometimes with sudo). That's why, in a production environment, it is in any case a good idea to install only packages that were downloaded separately and already checked.
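A minimal sketch of that "separately downloaded, then installed offline" workflow; the ./vendor directory name is an assumption, while the flags are standard pip options.
```
# One-time: copy your verified sdists/wheels into ./vendor, then install
# without ever contacting PyPI:
pip install --no-index --find-links=./vendor -r requirements.txt
```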
| 1 | 3 | 0 |
We heavily use virtualenv and pip in our build system to isolate our Python deliveries. Everything is fully automated and was working fine until now.
A couple of days ago, an issue appeared: an indirect dependency was moved to a private Bitbucket repository, and pip started to prompt for a login/password. Which was dramatic for continuous integration tools... And at some point, the very same dependency was removed from PyPI, so we could no longer install our environment.
In a couple of words, pip started to prompt for a user/password, which quickly became a pain (all automated tools were hanging forever...). And since today, it simply fails.
I was wondering how to make this more reliable. As I guess anyone can remove a package from PyPI, it is not safe to fully rely on it, right? I was using a cache and thought that would be enough, but apparently it always tries to connect to the internet to check that the package exists.
What is recommended here? Should I download everything manually, and only refer to local paths via the "dependency_links" variable in my setup.py? Or is there something smarter?
Thanks!
Emmanuel
|
How to protect yourself from package disappearance on pip?
| 0.197375 | 0 | 0 | 92 |
16,167,995 |
2013-04-23T11:18:00.000
| 1 | 0 | 1 | 0 |
windows,python-2.7,tkinter,virtual-machine
| 16,169,706 | 1 | false | 0 | 1 |
You cannot do what you want. It is impossible to open an external program inside a Tkinter window.
While it is true that it's possible to embed some X11 based apps into an X11 based Tkinter application if the app was designed to be embedded, you will never be able to do this with non-X11 based apps.
| 1 | 1 | 0 |
I am trying to open applications like Notepad, Photoshop, etc. inside the Tkinter window in Python. I tried using os.system(), but it opens them independently (not inside the Tkinter window).
I have searched a lot about this but didn't get any solution. Any kind of help will be appreciated.
|
How to open an exe file in Tkinter window
| 0.197375 | 0 | 0 | 553 |
16,170,268 |
2013-04-23T13:11:00.000
| 1 | 1 | 1 | 0 |
python-2.7,module
| 16,173,774 | 2 | false | 0 | 0 |
Are you using distutils or setuptools?
I tested it just now, and if it's distutils, it's enough to have
scripts=['bin/script_name']
in your setup() call.
If instead you're using setuptools, you can avoid having a script inside bin/ altogether and define your entry point by adding
entry_points={'console_scripts': ['script_name = module_name:main']}
inside your setup() call (assuming you have a main function inside module_name); see the sketch below.
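A minimal setup.py showing both options side by side (module_name and script_name are the answer's placeholders; a real project would pick one mechanism, not both):
```python
from setuptools import setup

setup(
    name="module_name",
    version="0.1",
    py_modules=["module_name"],
    # distutils-style: copies bin/script_name onto the PATH at install time
    scripts=["bin/script_name"],
    # setuptools-style: generates a wrapper that calls module_name.main()
    entry_points={"console_scripts": ["script_name = module_name:main"]},
)
```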
Are you sure that bin/script_name is marked as executable?
What is the exact error you get when trying to run the script? What are the contents of your setup.py?
| 1 | 2 | 0 |
I have written a python27 module and installed it using python setup.py install.
Part of that module has a script which I put in my bin folder within the module before I installed it. I think the module has installed properly and works (has been added to site-packages and scripts). I have built a simple script "test.py" that just runs functions and the script from the module. The functions work fine (the expected output prints to the console) but the script does not.
I tried from [module_name] import [script_name] in test.py which did not work.
How do I run a script within the bin of a module from the command line?
|
How to execute python script from a module I have made
| 0.099668 | 0 | 0 | 106 |
16,171,245 |
2013-04-23T13:55:00.000
| 3 | 0 | 0 | 1 |
python,linux
| 16,173,056 | 2 | true | 0 | 0 |
To run a process with the name mysoft:
Create a Python file with the name mysoft, without a .py extension.
Inside that file, create an endless while loop or something similar, so that it runs for a long time. Or put in a line like raw_input("enter something"); it will wait until you give the input.
Make the file executable with chmod 775 [filename].
The first line of this file should be #!/usr/bin/python. Change this line according to your Python path.
Put this file on the system path, or add its directory to the system path (e.g. /home/[user]/bin/).
Now type mysoft and it will start.
You need to kill it manually when you want to terminate the process. Assigning a specific PID to a process is not possible, to my knowledge.
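A minimal sketch of what that mysoft file could contain (the sleep loop is one of the "runs a long time" options mentioned above):
```python
#!/usr/bin/python
# Save as an executable file named `mysoft` (no .py extension);
# `pgrep mysoft` will then find it by name while it idles.
import time

while True:
    time.sleep(60)  # stay alive; kill manually when the test is done
```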
| 2 | 2 | 0 |
I'm currently working on a stub for test purposes. Using Python, I need to create a process with a specific name ("mysoft") and a specific PID ("1234").
My final purpose is to be able to run the command "pgrep mysoft" in a terminal and get the PID I set (1234).
The process doesn't need to do anything, it just needs to exist.
I looked at the subprocess module but I think this is not exactly what I need. What do you think?
|
create fake process in Python
| 1.2 | 0 | 0 | 1,984 |
16,171,245 |
2013-04-23T13:55:00.000
| 1 | 0 | 0 | 1 |
python,linux
| 16,171,310 | 2 | false | 0 | 0 |
You can't create processes with specific PIDs. The PID is assigned by the OS.
| 2 | 2 | 0 |
I'm currently working on a stub for test purposes. Using Python, I need to create a process with a specific name ("mysoft") and a specific PID ("1234").
My final purpose is to be able to run the command "pgrep mysoft" in a terminal and get the PID I set (1234).
The process doesn't need to do anything, it just needs to exist.
I looked at the subprocess module but I think this is not exactly what I need. What do you think?
|
create fake process in Python
| 0.099668 | 0 | 0 | 1,984 |
16,171,519 |
2013-04-23T14:07:00.000
| 1 | 1 | 0 | 0 |
python,python-2.7
| 16,401,807 | 1 | false | 0 | 0 |
First, use the latest version of minepy. Second, you can use a smaller value for the "alpha" parameter, say 0.5 or 0.45. That way you will reduce the computation time at the expense of characteristic-matrix accuracy.
Davide
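A sketch of the suggested tweak on a single pair of vectors; the constructor arguments follow minepy's documented interface, and the random data is just a stand-in:
```python
import numpy as np
from minepy import MINE

x = np.random.rand(2500)
y = np.random.rand(2500)

mine = MINE(alpha=0.5, c=15)  # default alpha is 0.6; smaller is faster, less accurate
mine.compute_score(x, y)
print(mine.mic())
```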
| 1 | 0 | 1 |
I have a numerical matrix of 2500*2500. To calculate the MIC (maximal information coefficient) for each pair of vectors, I am using minepy.MINE, but this is taking forever. Can I make it faster?
|
how to make minepy.MINE run faster?
| 0.197375 | 0 | 0 | 425 |
16,172,839 |
2013-04-23T15:05:00.000
| 1 | 0 | 1 | 1 |
python,linux,bash,service
| 16,173,008 | 1 | false | 0 | 0 |
I would start it using init and monitor it at an interval with a separate script run from crontab, watching for a PID file it creates.
Alternatively, your crontab script can look for the PID (using ps) and re-launch the script if it isn't running.
My 2 cents...
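A hedged example of that crontab watchdog idea (script path and interval are placeholders):
```
# Every 5 minutes: if pgrep finds no running instance, start one in the background.
*/5 * * * * pgrep -f myscript.py > /dev/null || /usr/bin/python /path/to/myscript.py &
```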
| 1 | 0 | 0 |
I have a script on my machine that I want to start running as the machine boots, and that will attempt to run again should it crash or otherwise fail.
Would this mean turning it into a service? I realise I could use crontab with @reboot, but that doesn't fix the issue should the script crash.
|
Running a python script persistently
| 0.197375 | 0 | 0 | 572 |
16,176,738 |
2013-04-23T18:37:00.000
| -1 | 0 | 0 | 0 |
python,formatting,conditional,xlrd,xlwt
| 47,033,397 | 2 | false | 0 | 0 |
It's true that XlsxWriter makes formatting pretty easy, but I think it cannot be used for appending data to existing sheets, which I feel is a big drawback.
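For completeness, a minimal sketch of conditional formatting with XlsxWriter, the library this answer refers to (it writes a new .xlsx file; the cells and criteria are illustrative):
```python
import xlsxwriter

wb = xlsxwriter.Workbook("demo.xlsx")
ws = wb.add_worksheet()
ws.write_column("A1", [5, 20, 3, 40])

red = wb.add_format({"font_color": "red"})
# Highlight cells greater than 10:
ws.conditional_format("A1:A4", {"type": "cell", "criteria": ">",
                                "value": 10, "format": red})
wb.close()
```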
| 1 | 6 | 0 |
I have seen some posts saying that you can NOT perform conditional formatting using xlwt, but they were rather old. I was curious whether this has evolved?
I have been searching for about half a day now. Furthermore, if I can't write it directly from xlwt, can I create an .xls file containing a single cell with the conditional format I want, have xlrd read that format, and paste it into the sheet I aim to produce with xlwt?
|
Conditional Formatting xlwt
| -0.099668 | 0 | 0 | 3,948 |
16,177,610 |
2013-04-23T19:26:00.000
| 8 | 0 | 1 | 0 |
python,performance,data-structures,artificial-intelligence
| 16,177,714 | 3 | true | 0 | 0 |
Using a Python dictionary is the way to go.
| 2 | 0 | 0 |
I am developing an AI to perform MDP. I am getting states (just integers in this case) and assigning them a value, and I am going to be doing this a lot. So I am looking for a data structure that can hold that information (no need for delete) and has a very fast get/update function. Is there something faster than the regular dictionary? I am open to anything really, native Python or open source; I just need fast gets.
|
Fastest Get Python Data Structures
| 1.2 | 0 | 0 | 1,839 |
16,177,610 |
2013-04-23T19:26:00.000
| 1 | 0 | 1 | 0 |
python,performance,data-structures,artificial-intelligence
| 16,201,956 | 3 | false | 0 | 0 |
If you want a rapid prototype, use Python, and don't worry about speed.
If you want to write fast scientific code (and you can't build on fast native libraries, like LAPACK for linear algebra), write it in C or C++ (maybe only to call from Python). If fast instead of ultra-fast is enough, you can also use Java or Scala.
| 2 | 0 | 0 |
I am developing an AI to perform MDP. I am getting states (just integers in this case) and assigning them a value, and I am going to be doing this a lot. So I am looking for a data structure that can hold that information (no need for delete) and has a very fast get/update function. Is there something faster than the regular dictionary? I am open to anything really, native Python or open source; I just need fast gets.
|
Fastest Get Python Data Structures
| 0.066568 | 0 | 0 | 1,839 |
16,180,946 |
2013-04-23T23:35:00.000
| 2 | 0 | 0 | 0 |
python,matplotlib,axis
| 16,180,974 | 3 | false | 0 | 0 |
I would look at the largest value in your data set (i.e. the histogram bin counts), multiply that value by a number greater than 1 (say 1.5), and use that to define the y-axis limit. This way the line will appear above your histogram regardless of the values within it.
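A small sketch combining the question's ask (a vertical mean line, via axvline) with this answer's headroom suggestion:
```python
import numpy as np
import matplotlib.pyplot as plt

data = np.random.randn(1000)
counts, bins, patches = plt.hist(data, bins=30)

# Dashed vertical line at the dataset average:
plt.axvline(data.mean(), color="k", linestyle="dashed", linewidth=2)
plt.ylim(0, counts.max() * 1.5)  # 1.5x headroom above the tallest bar
plt.show()
```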
| 1 | 90 | 1 |
I am drawing a histogram using matplotlib in Python, and would like to draw a line representing the average of the dataset, overlaid on the histogram as a dotted line (or maybe some other color would do too). Any ideas on how to draw a line overlaid on the histogram?
I am using the plot() command, but not sure how to draw a vertical line (i.e. what value should I give for the y-axis?).
Thanks!
|
Drawing average line in histogram (matplotlib)
| 0.132549 | 0 | 0 | 127,120 |
16,183,941 |
2013-04-24T05:21:00.000
| 1 | 0 | 1 | 0 |
python,algorithm,alphabetical
| 16,186,125 | 6 | false | 0 | 0 |
It's as simple as a tree.
Suppose you are given "1261".
Construct a tree with it as the root.
Define each node as (left, right), where left is always the direct mapping and right is the combined version. For the given number 1261:
1261 ->
(1(261), 12(61)) -> 1 is the left node (direct map -> A), 12 is the right node (combo map 1,2 -> L)
(A(261), L(61)) ->
(A(2(61), 26(1)), L(6(1))) ->
(A(B(6(1)), Z(1)), L(F(1))) ->
(A(B(F(1)), Z(A)), L(F(A))) ->
(A(B(F(A)), Z(A)), L(F(A)))
So now you have got all the leaf nodes.
Just print all paths from the root to the leaf nodes; this gives you all possible combinations,
like in this case:
ABFA, AZA, LFA
So once you are done with the construction of the tree, just print all paths from root to leaf,
which is your requirement. A recursive sketch follows.
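A minimal recursive Python sketch of the same idea, trying windows of size 1 and 2 at each step (as the questioner guessed):
```python
def letter_combos(digits):
    if not digits:
        return [""]
    results = []
    for size in (1, 2):  # direct map vs. combo map
        head, tail = digits[:size], digits[size:]
        if len(head) == size and 1 <= int(head) <= 26:
            letter = chr(ord("A") + int(head) - 1)
            results += [letter + rest for rest in letter_combos(tail)]
    return results

print(letter_combos("1261"))  # ['ABFA', 'AZA', 'LFA']
print(letter_combos("123"))   # ['ABC', 'AW', 'LC']
```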
| 1 | 1 | 1 |
I have just come across an interesting interview-style question which I couldn't get my head around.
Basically, given a number-to-alphabet mapping such that [1:A, 2:B, 3:C, ...], print out all possible combinations.
For instance, "123" will generate [ABC, LC, AW], since it can be separated into 12,3 and 1,23.
I'm thinking it has to be some type of recursive function where it checks with windows of size 1 and 2, appending to a previous result if it's a valid letter mapping.
If anyone can formulate some pseudo/Python code, that'd be much appreciated.
|
How to generate a list of all possible alphabetical combinations based on an input of numbers
| 0.033321 | 0 | 0 | 1,582 |
16,185,594 |
2013-04-24T07:11:00.000
| 0 | 0 | 0 | 0 |
python,qt,user-interface,qt-designer,serial-communication
| 16,185,984 | 1 | false | 0 | 1 |
Use a thread to monitor the serial port, and use appropriate mechanisms to communicate with the UI (main) thread.
That way you can write your comm module without worrying about the event-based nature of the UI. Of course, you still cannot use a busy while loop to listen to the serial port, but using the usual read() methods with a timeout is fine.
A Queue is well suited to sending data to the UI, and you can write files directly from your comm thread.
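A hedged sketch of that layout with PyQt4 and pyserial (port name, baud rate and chunk size are placeholders; a Queue polled by a timer would work equally well):
```python
import serial
from PyQt4 import QtCore

class SerialWorker(QtCore.QThread):
    data_ready = QtCore.pyqtSignal(bytes)  # delivered safely to the UI thread

    def run(self):
        port = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)  # blocking read with timeout
        while True:
            chunk = port.read(64)
            if chunk:
                self.data_ready.emit(chunk)  # connect this signal to an update() slot
```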
| 1 | 1 | 0 |
I used to use Tkinter for my GUIs and am now trying to move to Qt Designer. I am also new to event-based programming. What I am trying to do is listen to communication on my serial port continuously after a start button has been pressed.
I want to call a function update() which takes in the data, manipulates it, and writes it to a file. It must then handle any other events queued up before calling update() again. Obviously, if I use a while loop, my CPU usage goes to 100% and my GUI becomes unresponsive. In Tkinter I got around this problem (messily) by using an after_idle call which invoked update() whenever the GUI was idle.
What is the best-practice manner of doing this kind of thing using Qt?
|
Running continuous communications using QT and python
| 0 | 0 | 0 | 260 |
16,186,027 |
2013-04-24T07:31:00.000
| 1 | 0 | 0 | 1 |
python,windows,network-programming
| 16,189,674 | 1 | false | 0 | 0 |
Unless you implement your own packet-filter driver, you can't do that just by looking at a data stream captured via the network card's promiscuous mode.
| 1 | 1 | 0 |
Is there a way to determine which process is sending a particular packet? I am capturing packets on the network interface card and want to determine the process for each packet. I am working in Python under Windows.
|
Determine process for outbound packets
| 0.197375 | 0 | 0 | 65 |
16,191,318 |
2013-04-24T11:53:00.000
| 1 | 0 | 0 | 1 |
python,google-app-engine,google-cloud-datastore
| 16,191,993 | 1 | true | 1 | 0 |
Why is what you describe not possible? The object representing the logged-in user is an instance of google.appengine.api.users.User. That object has a user_id property. You can use that ID as a field in your own user model, to which you can add a field determining whether or not they are an admin.
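A short sketch of that scheme on GAE Python; the model name and the ndb usage are assumptions, while User.user_id() is part of the Users API:
```python
from google.appengine.api import users
from google.appengine.ext import ndb

class AppUser(ndb.Model):  # hypothetical model name
    user_id = ndb.StringProperty(required=True)
    is_admin = ndb.BooleanProperty(default=False)

current = users.get_current_user()
if current:
    record = AppUser.query(AppUser.user_id == current.user_id()).get()
    show_admin_view = record is not None and record.is_admin
```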
| 1 | 0 | 0 |
I am developing an app in GAE. The application provides a different view depending on whether the logged-in user is an admin or a normal user. I want to use 'Google Apps domain' as the authentication type, so that all users of my domain can log into the application and use it.
Here, the application can't differentiate whether a logged-in user is an admin or a normal user. Somehow I should mark a user as admin, and as soon as that user logs in, the application should use the admin view for that user. Is there any way to tell the application that a particular user is an admin?
If we had our own USER table, we could mark any user as admin. Whenever a user logs into the app, the app could consult the USER table to check whether the user is an admin. But in my scenario, that is not possible.
|
Differentiating logged-in users as admin and normal user in Google App Engine?
| 1.2 | 0 | 0 | 145 |
16,193,630 |
2013-04-24T13:43:00.000
| 0 | 0 | 0 | 0 |
sqlite,python-3.x
| 23,542,492 | 1 | true | 0 | 0 |
As the pysqlite author, I am pretty sure nobody has ported pysqlite 1.x to Python 3 yet. The only solution that makes sense effort-wise is the one theomega suggested.
If all you need is to access the data from Python for importing it elsewhere, but doing the sqlite2 dump / sqlite3 restore dance is not possible, there is an option, but it is not convenient: use the builtin ctypes module to access the necessary functions from the SQLite 2 DLL. You would then implement a minimal version of pysqlite yourself that only wraps what you really need.
| 1 | 0 | 0 |
I have an old SQLite 2 database that I would like to read using Python 3 (on Windows). Unfortunately, it seems that Python's sqlite3 library does not support SQLite 2 databases. Is there any other convenient way to read this type of database in Python 3? Should I perhaps compile an older version of pysqlite? Will such a version be compatible with Python 3?
|
Read an SQLite 2 database using Python 3
| 1.2 | 1 | 0 | 445 |
16,196,268 |
2013-04-24T15:38:00.000
| 29 | 1 | 1 | 0 |
python,python-2.7
| 31,109,017 | 6 | false | 0 | 0 |
So if you're a novice like myself and your directories are not very well organized, you may want to try this method.
Open your Python terminal. Import a module that you know works, such as numpy in my case, and do the following:
import numpy
numpy.__file__
which results in
'/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/numpy/__init__.py'
The result of numpy.__file__ is the location where you should put the Python file with your module (excluding the numpy/__init__.py part), so for me that would be
/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages
To do this, just go to your terminal and type
mv "location of your module" "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages"
Now you should be able to import your module.
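A hedged alternative to the numpy.__file__ trick, asking the interpreter directly for its site-packages directory:
```python
from distutils.sysconfig import get_python_lib
print(get_python_lib())  # prints the active interpreter's site-packages path
```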
| 2 | 103 | 0 |
I have my own package in Python and I am using it very often. What is the most elegant or conventional directory where I should put my package so that it will be importable without playing with PYTHONPATH or sys.path?
What about site-packages, for example?
/usr/lib/python2.7/site-packages.
Is it common in Python to copy and paste the package there?
|
Where should I put my own python module so that it can be imported
| 1 | 0 | 0 | 102,848 |
16,196,268 |
2013-04-24T15:38:00.000
| 1 | 1 | 1 | 0 |
python,python-2.7
| 38,079,471 | 6 | false | 0 | 0 |
On my Mac, I did a sudo find / -name "site-packages". That gave me a few paths like /Library/Python/2.6/site-packages, /Library/Python/2.7/site-packages, and /opt/X11/lib/python2.6/site-packages.
So, I knew where to put my modules if I was using v2.7 or v2.6.
Hope it helps.
| 2 | 103 | 0 |
I have my own package in Python and I am using it very often. What is the most elegant or conventional directory where I should put my package so that it will be importable without playing with PYTHONPATH or sys.path?
What about site-packages, for example?
/usr/lib/python2.7/site-packages.
Is it common in Python to copy and paste the package there?
|
Where should I put my own python module so that it can be imported
| 0.033321 | 0 | 0 | 102,848 |
16,198,789 |
2013-04-24T17:45:00.000
| 2 | 0 | 1 | 0 |
python,macos,pip,macports
| 16,550,756 | 2 | true | 0 | 0 |
If you are using MacPorts for Python, then you should install pip and easy_install via MacPorts as well. You seem to have a non-MacPorts pip on your path.
Installing py27-pip will give you a pip-2.7 executable script on your path; similarly for easy_install.
The MacPorts versions have the Python version in their name so as to allow multiple versions of Python to be installed. If you want just pip, then create a bash alias, or link the pip-2.7 script to pip in a directory on your path.
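For example (hedged; the package name is the one from the question):
```
sudo port install py27-pip
pip-2.7 install web.py   # installs into the MacPorts Python 2.7 site-packages
```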
| 1 | 0 | 0 |
I've got a mac on which I installed python using macports. I like it this way because I could manually install numpy, scipy etc. without needing to mess with pre-built packages like enthought. I now wanted to install web.py, but after trying to install it through both easy_install and pip, I can't import it on the interactive python command line.
When installing it, it also says: Installed /Library/Python/2.6/site-packages/web.py-0.37-py2.6.egg while it says the following when I type 'python': Python 2.7.3 (default, Oct 22 2012, 06:12:28) [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
When I type 'which python' I get: /opt/local/bin/python
So my question is: how to make easy_install and/or pip install modules in the python installation which I enter when I simply type 'python' on the command line?
|
How to make pip install python modules in my current python2.7 installation on mac (instead of 2.6)?
| 1.2 | 0 | 0 | 3,596 |
16,203,729 |
2013-04-24T23:06:00.000
| 0 | 0 | 1 | 0 |
python,excel,csv
| 16,203,954 | 1 | true | 0 | 0 |
If you can convert from excel to csv using a different delimiter, e.g. ; instead of comma, you can remove the commas from the converted file, then convert the csv delimiter (;) back to comma.
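A sketch of that delimiter dance in one pass with the csv module (file names are placeholders; binary mode matches Python 2's csv requirements):
```python
import csv

with open("export_semicolon.csv", "rb") as src, open("clean.csv", "wb") as dst:
    reader = csv.reader(src, delimiter=";")
    writer = csv.writer(dst, delimiter=",")
    for row in reader:
        # strip commas embedded in the data, then re-emit comma-delimited rows
        writer.writerow([field.replace(",", "") for field in row])
```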
| 1 | 0 | 0 |
I have the Python code to convert an Excel file to CSV; however, portions of the data in the Excel file contain commas, which affects the conversion.
What would be the best way, staying within Python, to remove the commas and then convert?
|
Using python I need to remove commas from my excel file and then convert it to csv
| 1.2 | 0 | 0 | 875 |
16,203,859 |
2013-04-24T23:19:00.000
| 1 | 1 | 0 | 0 |
php,python,mysql,sql,ruby
| 16,203,901 | 3 | false | 0 | 0 |
I believe the only one who can answer this question is you. All 3 languages you mention can do what you need, combined with cron to automate the job. But the best scripting language to use is the one you are most comfortable with.
| 1 | 0 | 0 |
What's the best way to automatically query several dozen MySQL databases with a script on a nightly basis? The script usually returns no results, so I'd ideally have it email or notify me if any are ever returned.
I've looked into PHP, Ruby and Python for this, but I'm a little stumped as to how best to handle this.
|
What's the best way to automate running MySQL scripts on several databases on a daily basis?
| 0.066568 | 1 | 0 | 307 |
16,204,914 |
2013-04-25T01:23:00.000
| 0 | 0 | 0 | 1 |
python,windows,concurrent-programming
| 16,208,557 | 1 | false | 0 | 0 |
The reason was that each process used a large amount of memory, and I tried to run them concurrently. It was over capacity. The machine was frequently swapping out, but this was hidden by the SQLite access, which was also working my HDD hard.
I would like to thank Lennart, who prompted me to check other parameters on my computer, and everybody who gave me guesses and hints, since I gave too little information about the issues occurring in front of me.
| 1 | 0 | 0 |
There is a Python script I built called convert.py, which is a file converter (SQLite3 -> SQLite3). It takes about 30 minutes to finish, working with heavy CPU and I/O.
I opened 30 CMD.EXE windows on my computer (Windows 7, CPU with 2 cores and hyper-threading), and started convert.py in each window. I let them work overnight, so I expected that they would all be done by the next morning... but they weren't. Only half of them were done. The CPU monitor log told me that my computer was working at 100% for 1 hour, and then suddenly it started to use only 25%. It seems that not all the CPU power was used for the remaining tasks.
The remaining tasks resumed their work when I pressed Ctrl+C on them.
So, what is going on? Is this a Python problem, or a Windows problem? Is there a way to let my computer work constantly at 100% on my Python script until all of them are done?
|
Why is my heavy python script pausing on windows? Can it be prevented?
| 0 | 0 | 0 | 108 |
16,212,161 |
2013-04-25T10:18:00.000
| 0 | 0 | 0 | 0 |
python,django,tcp-ip
| 16,217,531 | 1 | true | 1 | 0 |
You'll probably have to listen on the port from another process, or thread. Save the incoming data somewhere, whether it be a log file, database, or whatever. Then have Django use this data when it prepares the web page to send in response to requests on the URL.
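A minimal sketch of such a standalone listener (port, log path and protocol are placeholders; a database table would work the same way):
```python
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", 9000))
server.listen(5)

while True:
    conn, addr = server.accept()
    data = conn.recv(4096)
    with open("/var/log/incoming.log", "a") as log:  # Django reads this on request
        log.write("%r from %s\n" % (data, addr[0]))
    conn.close()
```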
| 1 | 0 | 0 |
I am trying to connect a Django URL to incoming data on a TCP/IP port.
It would be great if someone could shed some light on this.
|
How to Connect My Django App with incoming data on a TCP Port
| 1.2 | 0 | 0 | 797 |
16,214,803 |
2013-04-25T12:28:00.000
| 2 | 0 | 0 | 1 |
python,google-app-engine,google-cloud-datastore,flask
| 16,215,227 | 1 | true | 1 | 0 |
As I explained in your other question, you need a separate class. User is not a model, and it is not stored in the datastore. It's simply a combination of the user_id and email that are obtained from Google's accounts system when you log in. If you want to store something about the user, you need to create your own model class, and store user_id and/or email as fields which you compare against the logged-in user.
| 1 | 0 | 0 |
I am working on an app using Flask and App Engine, which requires extra information to be stored in the User object apart from nickname, email and user_id.
Is it possible to extend the User class in the datastore?
If not, is there any workaround? I am planning to have my own User model. So, once the user logs into the app (using Google authentication), I would collect the user info using the users.get_current_user() function and also add the other extra fields I require. All this information would get stored in my own User model. Is this the right way to handle the situation?
|
Adding extra properties to the User class in App Engine datastore?
| 1.2 | 0 | 0 | 395 |
16,218,288 |
2013-04-25T15:03:00.000
| 3 | 1 | 1 | 0 |
python,ironpython,python-import
| 16,218,323 | 2 | false | 0 | 0 |
You should not do the import globally, but inside a function which gets called after you appended the path.
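A tiny sketch of that deferral (the library name is a placeholder):
```python
import sys

def set_path(lib_path):
    sys.path.append(lib_path)

def use_library():
    import some_cpython_lib  # hypothetical; resolved only when first called
    return some_cpython_lib.do_something()
```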
| 1 | 2 | 0 |
I have a (single) .py script. In it, I need to import a library.
In order for this library to be found, I need to call sys.path.append. However, I do not want to hardcode the path to the library, but pass it as a parameter.
So my problem is that if I make a function (set_path) in this file, I need to import the file, and import fails because the path is not yet appended.
What are good ways to solve this problem?
Clarification after comments:
I am using IronPython, and the library path is the path to CPython/lib. This path is (potentially) different on every system.
As far as I know, I cannot pass anything via sys.argv, because the script is run in an embedded python interpreter, and there is no main function.
|
Python: sys.path.append vs. import?
| 0.291313 | 0 | 0 | 741 |
16,219,178 |
2013-04-25T15:43:00.000
| 0 | 0 | 0 | 0 |
java,python,selenium
| 16,219,793 | 1 | false | 1 | 0 |
The script has to run first and create the browser session. Currently AFAIK there is no way to take webdriver control of a browser that is already open.
| 1 | 0 | 0 |
I have a script (using Python) that submits to a form on www.example.com/form/info.php
Currently my script will:
- open Firefox
- enter name, age, address
- press submit
What I want is to have a web form (with name, age, address) on LAMP, and when the user presses submit, it adds those values to the Selenium script (to be put into www.example.com/form/info.php) and submits it directly in the browser. Is this possible?
UPDATE:
I know this is possible using mechanize, because I have tested it out, but it doesn't do so well with JavaScript, which is why I am using Selenium.
|
Is it possible to run selenium script from web browser?
| 0 | 0 | 1 | 114 |
16,220,046 |
2013-04-25T16:27:00.000
| 0 | 0 | 1 | 1 |
python,windows
| 16,220,212 | 3 | false | 0 | 0 |
You can try one approach: call some PowerShell command such as Write-Host (the echo equivalent in cmd) from Python code and check the call result. If it fails, it is not PowerShell. (I cannot test it right now.)
| 1 | 2 | 0 |
I'm writing a Python program that I want to behave differently depending on whether it runs in the Windows command prompt or Windows PowerShell. I was wondering if there is a way to determine, from within the program, which environment Python runs in.
Thanks in advance!
|
Determining environment python runs on
| 0 | 0 | 0 | 61 |
16,220,585 |
2013-04-25T16:57:00.000
| -1 | 0 | 1 | 0 |
python,random,numpy,prng
| 16,223,497 | 2 | false | 0 | 0 |
If reproducibility is very important to you, I'm not sure I'd fully trust any PRNG to always produce the same output given the same seed. You might consider capturing the random numbers in one phase, saving them for reuse; then in a second phase, replay the random numbers you've captured. That's the only way to eliminate the possibility of non-reproducibility -- and it solves your current problem too.
| 2 | 3 | 1 |
We have a very simple program (single-threaded) where we do a bunch of random sample generation. For this we are using several calls to the numpy random functions (like normal or random_sample). Sometimes the result of one random call determines the number of times another random function is called.
Now I want to set a seed in the beginning, such that multiple runs of my program yield the same result. For this I'm using an instance of the numpy class RandomState. While this is the case in the beginning, at some point the results become different, and this is why I'm wondering.
When I am doing everything correctly, having no concurrency and thereby a linear call sequence of the functions AND no other random number generator involved, why does it not work?
|
Python numpy - Reproducibility of random numbers
| -0.099668 | 0 | 0 | 1,368 |
16,220,585 |
2013-04-25T16:57:00.000
| 5 | 0 | 1 | 0 |
python,random,numpy,prng
| 16,296,438 | 2 | true | 0 | 0 |
Okay, David was right. The PRNGs in numpy work correctly. Throughout every minimal example I created, they worked as they are supposed to.
My problem was a different one, but I finally solved it. Never loop over a dictionary within a deterministic algorithm: Python may order the items arbitrarily when you call .items() to get an iterator.
So I am not that disappointed that it was this kind of error, because it is a useful reminder of what to think about when trying to do reproducible simulations.
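A small illustration of both points, seeding an isolated generator and fixing the iteration order (the parameter dict is illustrative):
```python
import numpy as np

rng = np.random.RandomState(42)  # isolated, seedable generator
params = {"beta": 2.0, "alpha": 1.0}

samples = {}
for name in sorted(params):  # fixed order -> identical draw sequence every run
    samples[name] = rng.normal(loc=params[name], size=3)
```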
| 2 | 3 | 1 |
We have a very simple program (single-threaded) where we do a bunch of random sample generation. For this we are using several calls to the numpy random functions (like normal or random_sample). Sometimes the result of one random call determines the number of times another random function is called.
Now I want to set a seed in the beginning, such that multiple runs of my program yield the same result. For this I'm using an instance of the numpy class RandomState. While this is the case in the beginning, at some point the results become different, and this is why I'm wondering.
When I am doing everything correctly, having no concurrency and thereby a linear call sequence of the functions AND no other random number generator involved, why does it not work?
|
Python numpy - Reproducibility of random numbers
| 1.2 | 0 | 0 | 1,368 |
16,220,930 |
2013-04-25T17:19:00.000
| 0 | 0 | 1 | 0 |
python,header
| 16,220,952 | 6 | false | 0 | 0 |
In this context, you are correct. A header block means a set of comments at the top of the source file that contains the requested information. It does not need to contain any code that does anything.
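For example, a header block of the kind the feedback asks for might look like this (values are illustrative):
```python
# ----------------------------------------------------------
# File name:      assignment1.py
# Author:         Your Name
# Date created:   25/04/2013
# Date modified:  26/04/2013
# Python version: 2.7
# ----------------------------------------------------------
```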
| 1 | 20 | 0 |
I'm new to Python and programming in general. I am taking a module at university which requires me to write some fairly basic programs in Python. However, I got this feedback on my last assignment:
There should be a header block containing the file name, author name, date created, date modified and python version
What is a header block? Is it just comments at the top of your code, or is it something which prints when the program runs? Or something else?
|
Python: What is a header?
| 0 | 0 | 0 | 87,904 |
16,222,064 |
2013-04-25T18:28:00.000
| 0 | 0 | 0 | 0 |
java,python,design-patterns,error-handling,software-design
| 16,224,608 | 2 | false | 0 | 0 |
For every exception your app throws, you insert it into a DB; your error code is the id of that record, and that's what the user sees. Obviously the same exception gets the same id. The plugin information you need is stored in the DB.
| 1 | 1 | 0 |
UPDATE:
The error codes here are not function return values. Actually, I am not discussing using exceptions vs. error codes for error handling. I am trying to figure out what pattern to use to organize errors.
What I am really doing is like the error codes shown on a Windows blue screen. Back in the old days, when Windows crashed you got an error code on the blue screen; using that code you could figure out what was going on by looking it up in MS's documentation.
In my system, there are many plugins contributed by different people that may not know each other. If I allow them to define their own error codes, it's quite likely that the error codes of two plugins will conflict.
================================================================================================
I want to design an extendable error codes system that allows plugins to define their own error codes.
The basic idea is:
system has a range of reserved error codes
plugin can select a range that is not used by the system, then create its error codes in that range. However, the problem is that plugins don't know each other (because plugins may be written by different people and installed into the system by the user's preference), so their error code ranges may conflict.
Is there any good practice for this requirement? I googled a lot, but to my surprise there are very few articles talking about designing error codes in production software. Most posts focus on exceptions vs. error codes.
And is there any good pattern for showing errors in a product such that the user can figure out what's going on?
My basic idea is to show the user an error which has an error code, a description, and details. The user could then click the error code and I would show him the proper solution.
|
How to design extendable error codes?
| 0 | 0 | 0 | 838 |
16,223,006 |
2013-04-25T19:18:00.000
| 4 | 0 | 1 | 0 |
python,module,field,openerp
| 16,229,121 | 1 | true | 1 | 0 |
What you want is currently not possible in OpenERP. But you can use one trick: use two fields, an integer for giving the interval and a char field for giving "months", "days", etc. You can see an example of this in the Scheduler (the ir.cron object) of OpenERP.
| 1 | 0 | 0 |
I need a field that can contain a number and text, for example "6 months". I've used the datetime field, but it only takes a formatted date; integer or float take only a number, and char takes only characters. So how can I have an integer and a char in the same field?
|
DateTime field OpenErp
| 1.2 | 0 | 0 | 422 |
16,224,901 |
2013-04-25T21:18:00.000
| 1 | 1 | 1 | 1 |
python,next
| 16,225,009 | 3 | false | 0 | 0 |
The next() builtin was only added in Python 2.6, so it does not exist in 2.5. You can call ByteIter.next() instead (note that this method was renamed to __next__() in Python 3, where the next() builtin is the supported spelling).
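Since the .next() method lacks the builtin's default-value argument, a small 2.5-compatible shim (the name is illustrative):
```python
def next_or_default(iterator, default):
    try:
        return iterator.next()  # .next() was renamed __next__() in Python 3
    except StopIteration:
        return default
```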
| 1 | 2 | 0 |
When I do next(ByteIter, '') << 8 in Python, I get a NameError saying
"global name 'next' is not defined"
I'm guessing this function is not recognized because of my Python version? My version is 2.5.
|
Python: next() is not recognized
| 0.066568 | 0 | 0 | 2,002 |
16,225,782 |
2013-04-25T22:27:00.000
| 1 | 1 | 1 | 0 |
python
| 16,225,924 | 2 | false | 0 | 0 |
The help() function gives you help on almost everything, but if you're searching for something (like a module to use) then type help('modules') and it will search the available modules.
Then, if you need to find information about a module, load it and type dir(module_name) to see the methods that are defined in it.
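Quick examples of those helpers, plus the raw docstring they draw on:
```python
import os

help(os.path.join)           # formatted documentation for one function
print(dir(os.path))          # names defined in the module
print(os.path.join.__doc__)  # the raw docstring itself
```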
| 1 | 2 | 0 |
Is there a way to check what a function or a method does from inside Python itself, similar to the help function in Matlab? I want to get the definition of the function without having to Google it.
|
Python: Function Documentation
| 0.099668 | 0 | 0 | 2,004 |
16,228,532 |
2013-04-26T04:00:00.000
| 0 | 0 | 0 | 0 |
python,set,trace,pdb
| 57,042,360 | 2 | false | 0 | 0 |
Change the name of your file pdb.py to something else, e.g. bdb_test.py; it ran for me after that.
| 2 | 0 | 0 |
When I insert import pdb; pdb.set_trace() in my code, it shows an error message:
'module' object has no attribute 'set_trace'
In the pdb.py file, there is the def set_trace() function. How can it not work?
Has anyone had the same problem and knows how to solve this?
|
pdb.set_trace() not working
| 0 | 0 | 0 | 3,446 |
16,228,532 |
2013-04-26T04:00:00.000
| 0 | 0 | 0 | 0 |
python,set,trace,pdb
| 65,112,924 | 2 | false | 0 | 0 |
What Python version are you using?
I can reproduce that error if I try to import pdb; pdb.set_trace() on Python 3.8.
Try to use breakpoint() instead (has basically the same purpose), it worked for me!
Cheers,
Andres
| 2 | 0 | 0 |
When I insert import pdb; pdb.set_trace() in my code, it shows an error message:
'module' object has no attribute 'set_trace'
In the pdb.py file, there is the def set_trace() function. How can it not work?
Has anyone had the same problem and knows how to solve this?
|
pdb.set_trace() not working
| 0 | 0 | 0 | 3,446 |
16,230,984 |
2013-04-26T07:29:00.000
| 0 | 0 | 0 | 0 |
python,machine-learning,nlp,classification,nltk
| 16,231,323 | 2 | false | 0 | 0 |
I see two questions:
How to train the system?
Can the system consist of "sci-fi" and "others"?
The answer to 2 is yes. The 80% confidence threshold idea also makes sense, as long as you see with your data, features and algorithm that 80% is a good threshold. (If not, consider lowering it if not all sci-fi movies are being classified as sci-fi, or raising it if too many non-sci-fi movies are being categorized as sci-fi.)
The answer to 1 depends on the data you have, the features you can extract, etc. Jared's approach seems reasonable. Like Jared, I'd also emphasize the importance of having enough representative data.
| 2 | 0 | 1 |
I am just starting out with nltk, and I am following the book. Chapter six is about text classification, and I am a bit confused about something. In the examples (the names, and movie reviews) the classifier is trained to select between two well-defined labels (male-female, and pos-neg). But how do you train it if you have only one label?
Say I have a bunch of movie plot outlines, and I am only interested in fishing out movies from the sci-fi genre. Can I train a classifier to only recognize sci-fi plots, and say, for instance, if the classification confidence is > 80%, then put it in the sci-fi group, otherwise just ignore it?
Hope somebody can clarify, thank you.
|
train nltk classifier for just one label
| 0 | 0 | 0 | 441 |
16,230,984 |
2013-04-26T07:29:00.000
| 0 | 0 | 0 | 0 |
python,machine-learning,nlp,classification,nltk
| 16,231,216 | 2 | true | 0 | 0 |
You can simply train a binary classifier to distinguish between sci-fi and not-sci-fi.
So train on the movie plots that are labeled as sci-fi and also on a selection of all other genres. It might be a good idea to have a representative sample of the same size for the other genres, such that not all of them are of the romantic comedy genre, for instance.
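A hedged sketch of that binary setup with NLTK's Naive Bayes classifier (the plot lists and the bag-of-words features are placeholders):
```python
import nltk

def features(plot):
    return dict(("has(%s)" % w, True) for w in set(plot.lower().split()))

scifi_plots = ["robots invade a space colony", "aliens attack earth"]
other_plots = ["a lawyer falls in love", "a detective hunts a killer"]

train = ([(features(p), "scifi") for p in scifi_plots] +
         [(features(p), "other") for p in other_plots])
clf = nltk.NaiveBayesClassifier.train(train)

dist = clf.prob_classify(features("robots attack a lawyer"))
if dist.prob("scifi") > 0.8:  # the 80% threshold from the question
    print("sci-fi")
```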
| 2 | 0 | 1 |
I am just starting out with nltk, and I am following the book. Chapter six is about text classification, and I am a bit confused about something. In the examples (the names, and movie reviews) the classifier is trained to select between two well-defined labels (male-female, and pos-neg). But how do you train it if you have only one label?
Say I have a bunch of movie plot outlines, and I am only interested in fishing out movies from the sci-fi genre. Can I train a classifier to only recognize sci-fi plots, and say, for instance, if the classification confidence is > 80%, then put it in the sci-fi group, otherwise just ignore it?
Hope somebody can clarify, thank you.
|
train nltk classifier for just one label
| 1.2 | 0 | 0 | 441 |
16,233,593 |
2013-04-26T09:53:00.000
| 6 | 0 | 1 | 0 |
python,string,strip
| 16,234,651 | 5 | false | 0 | 0 |
unicode('foo,bar').translate(dict([[ord(char), u''] for char in u',']))
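For byte strings, replace() is the simpler spelling of the same idea (strip() only removes characters at the ends of the string, which is why the questioner's attempt failed):
```python
print('Foo, bar'.replace(',', ''))  # 'Foo bar'
```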
| 1 | 85 | 0 |
How can I strip the comma from a Python string such as Foo, bar? I tried 'Foo, bar'.strip(','), but it didn't work.
|
How to strip comma in Python string
| 1 | 0 | 0 | 186,115 |
16,236,364 |
2013-04-26T12:22:00.000
| 1 | 0 | 0 | 0 |
python,openerp,rml
| 16,248,525 | 1 | false | 1 | 0 |
That's a little bit tricky; you should refer to the "survey" module's report in OpenERP, which has such reports. I hope it helps.
Cheers,
Parthiv
| 1 | 0 | 0 |
I am going to implement a BlockTable. I have put a BlockTable inside
a section and I want the <tr> color to be dynamic, like striped rows (one white and one grey),
but I have no idea how to do it.
Can anyone help me?
|
How to make Striped RML table in openerp?
| 0.197375 | 0 | 0 | 280 |
16,236,652 |
2013-04-26T12:37:00.000
| 4 | 0 | 0 | 0 |
python
| 16,236,700 | 3 | true | 0 | 0 |
Well, to convert your example, you would use np.random.normal(x, *P). However, np.random.normal(x,a,b,c,...,z) wouldn't actually work. Maybe you meant another function?
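The * operator spreads a tuple into positional arguments for any callable; a generic illustration:
```python
def f(x, a, b, c):
    return x + a + b + c

P = (1, 2, 3)
print(f(10, *P))  # equivalent to f(10, 1, 2, 3)
```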
| 1 | 0 | 1 |
I am trying to use the function np.random.curve_fit(x, a, b, c, ..., z) with a big but fixed number of fitting parameters. Is it possible to use tuples or lists here for shortness, like np.random.curve_fit(x, P), where P = (a, b, c, ..., z)?
|
list instead of separated arguments in python
| 1.2 | 0 | 0 | 77 |
16,237,742 |
2013-04-26T13:30:00.000
| 1 | 0 | 0 | 1 |
python,google-app-engine,authentication,google-cloud-endpoints
| 16,270,248 | 1 | true | 1 | 0 |
That is exactly what you need to do. On the server, generate a key (you choose the length) and store it in the datastore. When the other server makes a request, use HTTPS and include the key. It's like an API key (it is one, actually).
| 1 | 0 | 0 |
I cannot see how I could authenticate a server against GAE.
Let's say I have an application on GAE which has some data, and I somehow need this data on another server.
It is easy to enable OAuth authentication on GAE, but here I cannot use it since there is no "account" bound to my server.
Plus, GAE doesn't support client certificates.
I could generate a token for each server that needs to access the GAE application, and transfer them to the servers. Each server would then use its token to access the GAE application by adding it to the URL (using HTTPS)...
Any other ideas?
|
Authenticate a server versus an AppEngine application
| 1.2 | 0 | 0 | 77 |
16,241,755 |
2013-04-26T17:09:00.000
| 2 | 0 | 0 | 0 |
python,tkinter
| 16,241,833 | 1 | false | 0 | 1 |
Every created image is assigned a unique name (unless you specify a name when creating it). This unique name is generated using a counter, which increases monotonically.
| 1 | 1 | 0 |
I have a program that displays many Tkinter PhotoImages at once (relevant: I'm not using PIL). It currently has several screens, and when the player gets to the edge it loads a new tilemap, creating a bunch more PhotoImages in the currentTiles array after clearing the old contents. I'm fairly certain there are no other references to these PhotoImages in the rest of the program.
The weird thing is that when I print the contents of the last item in the array after the loadLevel function is called, it says things such as "pyimage3761", and the number increments each time I load a new screen. Is this due to Tkinter keeping track of how many images have been created so far, or is it because the old tiles are still in memory? I can't for the life of me figure out where there could be another reference, so I'm just wondering if there are any other possibilities before I spend hours looking for errors.
Thanks!!
|
Tkinter PhotoImage: References in Memory?
| 0.379949 | 0 | 0 | 238 |
16,243,650 |
2013-04-26T19:15:00.000
| 3 | 0 | 0 | 0 |
python,image,dialog,tkinter
| 16,244,140 | 1 | true | 0 | 1 |
No, there is not. You'll need to create your own message box with a Toplevel and some widgets. When you do that, you'll have complete control over what goes in the window.
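A minimal sketch of such a Toplevel-based dialog (Python 2 Tkinter; a .gif image path is assumed, since plain Tkinter's PhotoImage reads GIF):
```python
import Tkinter as tk

def show_custom_box(parent, text, image_path):
    top = tk.Toplevel(parent)
    top.title("Message")
    img = tk.PhotoImage(file=image_path)  # .gif assumed
    top.img = img  # keep a reference, or the image is garbage-collected
    tk.Label(top, image=img).pack()
    tk.Label(top, text=text).pack()
    tk.Button(top, text="OK", command=top.destroy).pack()
    top.grab_set()  # modal, like a real message box
```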
| 1 | 0 | 0 |
Is there any way to add a custom image to the Tkinter built-in tkMessageBox?
|
Python Tkinter message box add custom image
| 1.2 | 0 | 0 | 3,397 |
16,244,303 |
2013-04-26T19:59:00.000
| 1 | 0 | 0 | 0 |
python,tkinter
| 16,244,366 | 4 | false | 0 | 1 |
No, it is not possible to show a TIF or PDF in a Tkinter window. Your main options are to show plain text, and to show images (.gif with plain Tkinter, other formats if you include PIL - the python imaging library).
| 1 | 1 | 0 |
I am new to Tkinter and can't seem to find any examples of how to view a document in a window. What I am trying to accomplish is that when selecting a PDF or TIF, it will open the file and show the first page in a window using Tkinter. Is this possible?
|
View a document TKinter
| 0.049958 | 0 | 0 | 6,715 |
16,244,894 |
2013-04-26T20:42:00.000
| 0 | 1 | 0 | 0 |
python,svn,jenkins,hudson
| 16,254,689 | 2 | false | 0 | 0 |
Besides svn info, you can also use svn log -q -l 1 URL or svn ls -v --depth empty URL
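A hedged Python sketch parsing the first of those commands (the branch URL is a placeholder; the second output line of svn log -q looks like "r12345 | author | date"):
```python
import subprocess

out = subprocess.check_output(
    ["svn", "log", "-q", "-l", "1",
     "http://svn.example.com/repo/branches/mybranch"])
last_rev = out.splitlines()[1].split()[0].lstrip("r")
print(last_rev)
```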
| 1 | 1 | 0 |
We have multiple branches in SVN and use Hudson CI jobs to maintain our builds. We use the SVN revision number as part of our application version number. The issue is that when a Hudson job checks out the HEAD of a branch, it gets the HEAD revision number of SVN, not the last committed revision of that branch. I know SVN maintains revision numbers globally, but we want to reflect the last committed number of a particular branch in our version.
Is there a way to get the last committed revision number of a branch using a Python script, so that I can check out that branch using that revision number?
Or better, is there a way to do it in Hudson itself?
Thanks.
|
Using actual branch head revision number in Hudson
| 0 | 0 | 0 | 1,110 |
16,244,924 |
2013-04-26T20:44:00.000
| 3 | 0 | 0 | 0 |
python,heroku
| 16,245,012 | 1 | false | 1 | 0 |
Heroku is for deploying web (HTTP, HTTPS) applications. You can't deploy code that uses raw sockets to Heroku.
If you want to run your app on Heroku, the easiest way is to use a web framework (Flask, CherryPy, Django...). They usually also come with useful libraries and abstractions for talking to your database.
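A minimal Flask sketch of the shape Heroku expects (route name is illustrative; Procfile and deployment details omitted). The endpoint stands in for the messages the raw-socket server used to receive:
```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/data", methods=["POST"])
def receive():
    # handle what the client used to send over the socket
    return "got %d bytes" % len(request.data)
```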
| 1 | 0 | 0 |
I am quite new to Heroku and I have reached a bump in my development...
I am trying to write a server/client kind of application... on the server side I will have a DB (I installed PostgreSQL for Python) and I was hoping I could reach the server, for now, via a Python client (for test purposes) and send data/queries and perform basic tasks on the DB.
I am using Python with Heroku; I managed to install the DB and it seems to be working (i.e. I can query, insert, delete, etc...).
Now all I want is to write a server (in Python) that would be my app; it would listen on a port, receive messages and then perform whatever tasks it is asked to do... I thought about using sockets for this and have managed to write a basic server/client locally... however when I deploy the app on Heroku I cannot connect to the server, and my code is basically worthless.
Can somebody please advise on the basic framework for this sort of requirement... surely I am not the first guy to want to write a client/server app... if you could point to a tutorial/doc I would be much obliged.
Thx
|
how to write a client/server app in heroku
| 0.53705 | 1 | 0 | 569 |
16,246,066 |
2013-04-26T22:19:00.000
| 0 | 0 | 0 | 0 |
python,cluster-computing,scikit-learn,hierarchical-clustering
| 54,499,731 | 3 | false | 0 | 0 |
I recommend taking a look at agglomerative clustering.
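A hedged scipy sketch of agglomerative (hierarchical) clustering straight from a precomputed distance matrix; the random matrix stands in for the Jaccard distances:
```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Stand-in for the precomputed distance matrix (symmetric, zero diagonal):
n = 6
d = np.random.rand(n, n)
dist = (d + d.T) / 2
np.fill_diagonal(dist, 0)

Z = linkage(squareform(dist), method="average")    # linkage wants the condensed form
labels = fcluster(Z, t=0.5, criterion="distance")  # no cluster count needed up front
print(labels)
```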
| 1 | 24 | 1 |
My objective is to cluster words based on how similar they are with respect to a corpus of text documents. I have computed the Jaccard similarity between every pair of words. In other words, I have a sparse distance matrix available. Can anyone point me to a clustering algorithm (and possibly a Python library for it) that takes a distance matrix as input? I also do not know the number of clusters beforehand. I only want to cluster these words and find out which words are clustered together.
|
Clustering words based on Distance Matrix
| 0 | 0 | 0 | 28,598 |
16,249,380 |
2013-04-27T07:10:00.000
| 2 | 0 | 0 | 0 |
python,django,django-cms,slug
| 16,262,956 | 1 | true | 1 | 0 |
Try deleting the pages and re-adding them afterwards.
Start publishing from the root and then the child.
If it still fails, check the db table cms_titles to fix your paths manually, or post them here.
| 1 | 4 | 0 |
I'm having an issue with slugs in child pages. Imagine I have a page called "Something" in the main tree with the slug "something", and another page named "Anything" with the slug "anything". This second page (Anything) has a child page also called "Something", again with the slug "something", which should result in an /anything/something/ URL. This was working on django-cms 2.3.5, but it's not working anymore on 2.4.1; I get an error saying I've already used that URL (Page 'Something' has the same url 'something' as current page "Something"). It's the only thing stopping me from updating to 2.4.1 (the latest release at the moment). Thank you. Note that it will let me create the duplicate page if it's not published; the problem is when I try to publish them.
|
Django-cms: Can't publish 2 pages with the same slug in different levels
| 1.2 | 0 | 0 | 660 |
16,252,331 |
2013-04-27T12:54:00.000
| -1 | 0 | 1 | 0 |
python
| 17,052,487 | 3 | false | 0 | 0 |
It's a little complicated.
It comes down to cases here:
A function that merely mutates its object's state doesn't return anything (it returns None). For example, given a list called L: L.sort(); L.append("joe")
Other functions create a new object or a new copy of an object, and return it without mutating the original. Consider: sorted(L); y = L + [1, 2, 3]
It's generally bad form to return None meaning "everything is fine."
If you have some kind of lookup/accessor, None means "the value of that item is None"; when the item is not found, you should throw the appropriate exception.
In most other cases, returning None is a little confusing.
| 2 | 0 | 0 |
The question is in the title...
I'm in a process of learning Python and I've heard a couple of times that function returning None is something you never should have in a real program (unless your function never returns anything). While I can't seem to find a situation when it is absolutely necessary, I wonder if it ever could be a good programming practice to do it. I.e., if your function returns integers (say, solutions to an equation), None would indicate that there is no answer to return. Or should it always be handled as an exception inside the function? Maybe there are some other examples when it is actually useful? Or should I never do it?
|
Is returning 'None' for Python function ever a good idea?
| -0.066568 | 0 | 0 | 571 |
16,252,331 |
2013-04-27T12:54:00.000
| 11 | 0 | 1 | 0 |
python
| 16,252,358 | 3 | true | 0 | 0 |
This just flat-out isn't true. For one, any function that doesn't need to return a value will return None.
Beyond that, generally, keeping your output consistent makes things easier, but do what makes sense for the function. In some cases, returning None is logical.
If something goes wrong, yes, you should throw an exception as opposed to returning None.
Unfortunately, programming tends to be full of advice where things are over-generalized. It's easy to do, but Python is pretty good with this - practicality beats purity is part of the Zen of Python. It's essentially the use case for dict.get() - in general, it's better to throw the exception if a key isn't found, but in some specific cases, getting a default value back is more useful.
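dict.get() in action, as the "default instead of exception" example this answer cites:
```python
d = {"a": 1}
# d["b"] would raise KeyError; the lookup variants below return a value instead:
print(d.get("b"))     # None
print(d.get("b", 0))  # 0, a chosen default
```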
| 2 | 0 | 0 |
The question is in the title...
I'm in a process of learning Python and I've heard a couple of times that function returning None is something you never should have in a real program (unless your function never returns anything). While I can't seem to find a situation when it is absolutely necessary, I wonder if it ever could be a good programming practice to do it. I.e., if your function returns integers (say, solutions to an equation), None would indicate that there is no answer to return. Or should it always be handled as an exception inside the function? Maybe there are some other examples when it is actually useful? Or should I never do it?
|
Is returning 'None' for Python function ever a good idea?
| 1.2 | 0 | 0 | 571 |
16,252,538 |
2013-04-27T13:11:00.000
| 0 | 0 | 1 | 0 |
python,django,web,infinite-loop
| 16,252,661 | 3 | false | 1 | 0 |
Yes, your analysis is correct. The worker thread/process will keep running. Moreover, if there is no wait/sleep in the loop, it will hog the CPU. Other threads/processes will get very little CPU time, leaving your entire site with slow responses.
Also, I don't think the server will send any timeout error to the client explicitly. If a TCP timeout is set, the TCP connection will be closed.
The client may also have a timeout setting for getting a response, which may come into play.
Avoiding such code is the best prevention. You can also have a monitoring tool on the server that watches CPU/memory usage and notifies you of abnormal activity, so that you can take action.
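On the stack-trace request in the question's update: CPython exposes every thread's current frame, so a watchdog can format them without any exception being raised. A hedged sketch:
```python
import sys
import traceback

def dump_all_thread_stacks():
    chunks = []
    for thread_id, frame in sys._current_frames().items():
        chunks.append("Thread %s:\n" % thread_id)
        chunks.append("".join(traceback.format_stack(frame)))
    return "".join(chunks)  # e-mail or log this from a monitoring thread
```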
| 1 | 7 | 0 |
Something that I just thought about:
Say I'm writing view code for my Django site, and I make a mistake and create an infinite loop.
Whenever someone would try to access the view, the worker assigned to the request (be it a Gevent worker or a Python thread) would stay in a loop indefinitely.
If I understand correctly, the server would send a timeout error to the client after 30 seconds. But what will happen with the Python worker? Will it keep on working indefinitely? That sounds dangerous!
Imagine I've got a server in which I've allocated 10 workers. I let it run and at some point, a client tries to access the view with the infinite loop. A worker will be assigned to it, and will be effectively dead until the next server restart. The dangerous thing is that at first I wouldn't notice it, because the site would just be imperceptibly slower, having 9 workers instead of 10. But then it might happen again and again throughout a long span of time, maybe months. The site would just get progressively slower, until eventually it would be really slow with just one worker.
A server restart would solve the problem, but I'd hate to have my site's functionality depend on server restarts.
Is this a real problem that happens? Is there a way to avoid it?
Update: I'd also really appreciate a way to take a stacktrace of the thread/worker that's stuck in an infinite loop, so I could have that emailed to me so I'll be aware of the problem. (I don't know how to do this because there is no exception being raised.)
Update to people saying things to the effect of "Avoid writing code that has infinite loops": In case it wasn't obvious, I do not spend my free time intentionally putting infinite loops into my code. When these things happen, they are mistakes, and mistakes can be minimized but never completely avoided. I want to know that even when I make a mistake, there'll be a safety net that will notify me and allow me to fix the problem.
|
What happens when you have an infinite loop in Django view code?
| 0 | 0 | 0 | 2,964 |
16,256,777 |
2013-04-27T20:54:00.000
| 0 | 0 | 0 | 0 |
python,sqlalchemy
| 16,257,019 | 4 | false | 0 | 0 |
Sessions have a private _is_clean() member which seems to return true if there is nothing to flush to the database. However, the fact that it is private may mean it's not suitable for external use. I'd stop short of personally recommending this, since any mistake here could obviously result in data loss for your users.
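A hedged alternative using only public Session attributes (note this only reflects pending in-memory changes; anything already flushed earlier in the current transaction won't show up here):
def session_has_pending_changes(session):
    # True if committing would write anything still pending in the identity map.
    return bool(session.new or session.dirty or session.deleted)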
| 1 | 18 | 0 |
I have a SQLAlchemy Session object and would like to know whether it is dirty or not. The exact question what I would like to (metaphorically) ask the Session is: "If at this point I issue a commit() or a rollback(), the effect on the database is the same or not?".
The rationale is this: I want to ask the user whether or not he wants to confirm the changes. But if there are no changes, I would like not to ask anything. Of course I could monitor all the operations I perform on the Session myself and decide whether there were modifications, but because of the structure of my program this would require some quite involved changes. If SQLAlchemy already offers this capability, I'd be glad to take advantage of it.
Thanks everybody.
|
How to check whether SQLAlchemy session is dirty or not
| 0 | 1 | 0 | 12,854 |
16,261,413 |
2013-04-28T09:20:00.000
| 5 | 0 | 0 | 1 |
python,amazon-web-services,config,web-worker,amazon-elastic-beanstalk
| 16,341,391 | 1 | true | 1 | 0 |
Fixed it; I just needed to write this command instead:
command: "nohup ./workers.py > foo.out 2> foo.err < /dev/null &"
| 1 | 6 | 0 |
I am trying to start a background job in Elastic Beanstalk. The background job has an infinite loop, so it never returns a response, and so I receive this error: "Some instances have not responded to commands. Responses were not received from [i-ba5fb2f7]."
I am starting the background job in the elastic beanstalk .config file like this:
06_start_workers:
command: "./workers.py &"
Is there any way to do this? I don't want elastic beanstalk to wait for a return value of that process ..
|
Run background jobs with elastic beanstalk
| 1.2 | 0 | 0 | 2,006 |
16,264,992 |
2013-04-28T16:26:00.000
| 0 | 0 | 0 | 0 |
python,http,network-programming
| 16,265,073 | 3 | false | 0 | 0 |
You should probably switch to using the Amazon API for product queries.
| 2 | 1 | 0 |
I wrote a web crawler to crawl product information from www.amazon.com using urllib2, but it seems that Amazon limits each IP to 1 connection.
When I start more than one thread to crawl simultaneously, it raises HTTP Error 503: Service Temporarily Unavailable.
I want to start more threads to crawl faster, so how can I fix this error?
|
how to crawl web pages fast when the number of connection is limited
| 0 | 0 | 1 | 201 |
16,264,992 |
2013-04-28T16:26:00.000
| 1 | 0 | 0 | 0 |
python,http,network-programming
| 16,265,020 | 3 | false | 0 | 0 |
Short version: you can't, and it would be a bad idea to even try.
| 2 | 1 | 0 |
I wrote a web crawler to crawl product information from www.amazon.com using urllib2, but it seems that Amazon limits each IP to 1 connection.
When I start more than one thread to crawl simultaneously, it raises HTTP Error 503: Service Temporarily Unavailable.
I want to start more threads to crawl faster, so how can I fix this error?
|
how to crawl web pages fast when the number of connection is limited
| 0.066568 | 0 | 1 | 201 |
16,266,979 |
2013-04-28T19:40:00.000
| 1 | 0 | 0 | 1 |
google-app-engine,python-2.7,google-cloud-datastore
| 16,268,751 | 1 | true | 1 | 0 |
GAE's datastore just doesn't export well to SQL. There are often situations where data needs to be modeled very differently on GAE to support certain queries, e.g. many-to-many relationships. Denormalizing is also the right way to support some queries on GAE's datastore. Ancestor relationships are something that doesn't exist in the SQL world.
In order to import/export data, you'll need to write scripts specific to your data models.
If you're planning for compatibility with SQL, use CloudSQL instead of the datastore.
In terms of moving data between dev/production, you've already identified the ways to do it. There's no real "easy" way.
| 1 | 0 | 0 |
I'm building a web app in GAE that needs to make use of some simple relationships between the datastore entities. Additionally, I want to do what I can from the outset to make import and exportability easier, and to reduce development time to migrate the application to another platform.
I can see two possible ways of handling relationships between entities in the datastore:
Including the key (or ID) of the related entity as a field in the entity
OR
Creating a unique identifier as an application-defined field of an entity to allow other entities to refer to it
The latter is less integrated with GAE, and requires some kind of mechanism to ensure the unique identifier is in fact unique (which in turn will rely on ancestor queries).
However, the latter may make data portability easier. For example, if entities are created on a local machine they can be uploaded (provided the unique identifier is unique) without problem. By contrast, relying on the GAE defined ID will not work as the ID will not be consistent from the development to the deployed environment.
There may be data exportability considerations too that mean an application-defined unique identifier is preferable.
What is the best way of doing this?
|
GAE: planning for exportability and relational databases
| 1.2 | 1 | 0 | 48 |
16,269,215 |
2013-04-28T23:38:00.000
| 1 | 0 | 1 | 0 |
python,ruby,concurrency,connection-pooling,pgpool
| 16,284,439 | 2 | true | 0 | 0 |
No. Unless you use non-blocking evented IO.
| 2 | 2 | 0 |
Is there any reason to maintain a pool with more than one connection when running a single-process, single-threaded application?
|
Connection Pool for a Single Threaded Application
| 1.2 | 0 | 0 | 652 |
16,269,215 |
2013-04-28T23:38:00.000
| 0 | 0 | 1 | 0 |
python,ruby,concurrency,connection-pooling,pgpool
| 23,256,500 | 2 | false | 0 | 0 |
I can think of 2 reasons to use a pool:
1. If the application accesses the database frequently (open, read, then close the connection), say 100 times or more every second, then with a pool the connections between the pool and the real DB are kept open; the pooled connection is not actually closed/opened each time, so the performance of the program improves.
2. Suppose the application uses a "global connection": opened once at the start of the application and closed at exit. If the application takes a long time to execute, say 10 hours, then it is possible that the connection drops for some unknown reason (an intermittent network issue?). With a pool, can the pool reconnect to the DB automatically? Is that possible?
At the least, when there are several connections in the pool, in the unfortunate event that one connection goes down, the next connection can be used by the application.
| 2 | 2 | 0 |
Is there any reason to maintain a pool with more than one connection when running a single-process, single-threaded application?
|
Connection Pool for a Single Threaded Application
| 0 | 0 | 0 | 652 |
16,273,227 |
2013-04-29T07:22:00.000
| 1 | 0 | 0 | 0 |
google-app-engine,python-2.7,base64,google-cloud-endpoints,protorpc
| 16,276,321 | 1 | true | 1 | 0 |
The way it finally worked was to just open the file (open().read()) and save it in the NDB.
The response message was a BytesField, just sending the string from open().read(), without any encoding.
The console in my browser was not reading the value of the field in the answer, but it works normally in my app.
| 1 | 1 | 0 |
I have an endpoint that must send an image in the response.
The original image is a file in the server that I open with python (open().read()) and save it in the NDB as BlobProperty (ndb.BlobProperty()).
My protoRPC message is a BytesField.
If I go in the apis-explorer the picture comes with the correct value, but it doesn't work in my JS Client.
I've been trying to just read the file, encode and decode base64 but the JS is still not recognizing it.
Does anyone have an idea how to solve it? How can I send the base64 image via Endpoints?
Thank you!
|
Send image as base64 via Google Endpoints
| 1.2 | 0 | 1 | 406 |
16,273,351 |
2013-04-29T07:30:00.000
| 1 | 0 | 1 | 0 |
c++,python,algorithm,linear-algebra
| 16,273,799 | 4 | false | 0 | 0 |
The first thing is to parse the string, to identify the various tokens (numbers, variables and operators), so that an expression tree can be formed by giving operators their proper precedence.
Regular expressions can help, but that's not the only method (grammar parsers like boost::spirit are good too, and you can even roll your own: it's all "find and recurse").
The tree can then be manipulated by reducing nodes: executing the operations that deal only with constants, and grouping the variable-related operations so they can be executed accordingly.
This goes on recursively until you are left with one variable-related node and one constant node.
At that point the solution is calculated trivially.
These are basically the same principles that lead to the production of an interpreter or a compiler.
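For illustration, a minimal Python sketch along these lines. It assumes a well-formed, genuinely linear equation in x with integer coefficients and no parentheses, and normalizes the en-dash that appears in the question:
import re
def solve_linear(equation):
    def parse_side(side):
        coeff, const = 0, 0
        for sign, num, var in re.findall(r'([+-]?)\s*(\d*)\s*(x?)', side):
            if not num and not var:
                continue  # skip the empty matches findall produces
            s = -1 if sign == '-' else 1
            if var:                                   # an x term: 'x' or '3x'
                coeff += s * (int(num) if num else 1)
            else:                                     # a plain constant
                const += s * int(num)
        return coeff, const
    left, right = equation.replace('–', '-').split('=')
    c1, k1 = parse_side(left)
    c2, k2 = parse_side(right)
    return float(k2 - k1) / (c1 - c2)   # assumes a unique solution exists
print(solve_linear("x + 9 – 2 - 4 + x = – x + 5 – 1 + 3 – x"))  # 1.0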
| 1 | 1 | 1 |
What would be the most efficient algorithm to solve a linear equation in one variable given as a string input to a function? For example, for input string:
"x + 9 – 2 - 4 + x = – x + 5 – 1 + 3 – x"
The output should be 1.
I am considering using a stack and pushing each string token onto it as I encounter spaces in the string. If the input was in polish notation then it would have been easier to pop numbers off the stack to get to a result, but I am not sure what approach to take here.
It is an interview question.
|
Solving a linear equation in one variable
| 0.049958 | 0 | 0 | 3,057 |
16,278,613 |
2013-04-29T12:26:00.000
| 0 | 0 | 0 | 0 |
python,wxpython
| 16,279,016 | 1 | true | 0 | 1 |
I found the solution. To completely lock the user's ability to resize cells, you need to use the .EnableDragGridSize(False), .DisableDragColSize() and .DisableDragRowSize() methods.
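A minimal runnable sketch putting the three calls together (classic wxPython):
import wx
import wx.grid
app = wx.App(False)
frame = wx.Frame(None, title="Locked grid")
grid = wx.grid.Grid(frame)
grid.CreateGrid(5, 3)
grid.EnableDragGridSize(False)  # no drag-sizing from the grid corner
grid.DisableDragColSize()       # no resizing via column label borders
grid.DisableDragRowSize()       # no resizing via row label borders
frame.Show()
app.MainLoop()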
| 1 | 1 | 0 |
I am using wx.Grid to build a spreadsheet-like input interface. I want to lock the size of the cells so the user cannot change them. I have successfully disabled drag-sizing of the grid with grid.EnableDragGridSize(False), but the user can still resize the cells by using the borders between column and row labels. I am probably missing something in the wxGrid documentation.
|
wx.Grid cell size lock
| 1.2 | 1 | 0 | 426 |
16,281,823 |
2013-04-29T14:55:00.000
| 1 | 1 | 0 | 0 |
php,python,mysql,linux
| 16,282,538 | 1 | true | 0 | 0 |
The Apache user (www-data in your case) has a somewhat restricted environment. Check where the Python MySQLdb package is installed and edit the Apache user's environment (cf. the Apache manual and your distribution's documentation) so it has a usable Python environment with the right PYTHONPATH etc.
| 1 | 0 | 0 |
I can easily call a Python script from PHP using system() (there are several options; they all behave the same way, and they all fail). Through trial and error I have narrowed it down to the script failing on
import MySQLdb
I am not too familiar with PHP, but I am using it in a pinch. I understand there could be reasons why such a restriction would be in place, but this will be on a local server used in house, and the information in the MySQL DB is backed up and not too critical, meaning such a restriction can reasonably be ignored.
But how do I allow PHP to call a Python script that imports MySQLdb? I am on a Linux machine (CentOS), if that is relevant.
|
call python script from php that connects to MySQL
| 1.2 | 1 | 0 | 322 |
16,282,141 |
2013-04-29T15:09:00.000
| 0 | 0 | 0 | 0 |
python,tkinter,real-time,ttk
| 16,283,361 | 2 | false | 0 | 1 |
I just implemented #2 (function + .after(1, function)) by changing literally 8 lines of code in my original implementation, and it runs perfectly. So #2 wins based on simplicity and effectiveness. It runs using 3-4% of 1 core (i5-2500, 3.3 GHz)
But if my UI becomes more complex, such as by adding stripchart recording via matplotlib animation, then queues and a separate acquisition thread may become necessary. But not today!
EDIT: But don't use .after(0, ...): My UI locked up when I tried it.
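For reference, a minimal sketch of the pattern; read_packet() is a stand-in for the real timed socket read:
import random
import Tkinter as tk  # "import tkinter as tk" on Python 3
def read_packet():
    # Stand-in for the socket read (2x-packet-rate timeout); None on timeout.
    return [random.random() for _ in range(30)]
def poll():
    data = read_packet()
    if data is not None:
        for var, value in zip(string_vars, data):
            var.set("%.3f" % value)
    root.after(1, poll)  # reschedule; .after(0, ...) can starve the event loop
root = tk.Tk()
string_vars = [tk.StringVar(root) for _ in range(30)]
for var in string_vars:
    tk.Entry(root, textvariable=var).pack()
root.after(1, poll)
root.mainloop()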
| 2 | 0 | 0 |
I have a Tkinter/ttk app that analyzes packets arriving every 10-25 ms. My present implementation uses a thread that updates 30 StringVars after every socket read, then calls update_idletasks() to update the corresponding Entry widgets. My app crashes no more than 30 minutes after starting.
Searches revealed that Tk isn't really thread-safe, and I have two main choices:
Use a thread + queues.
Use a function + .after(1, function).
The UI does little more than start/stop the updates, and provide the Entry widgets for display.
The primary wait in the system is the socket read, which has a timeout of 2x the expected packet rate (so it can't block forever).
In this case, would you prefer approach #1 or #2?
I'm leaning toward #2 for its simplicity, but I'm not sure if there are any Tk gotchas waiting along that path. I'll probably try both while I await the community wisdom.
|
Real-time Tkinter/ttk Entry widget update: thread+queue or .after(1)?
| 0 | 0 | 0 | 548 |
16,282,141 |
2013-04-29T15:09:00.000
| 0 | 0 | 0 | 0 |
python,tkinter,real-time,ttk
| 16,282,853 | 2 | false | 0 | 1 |
My personal rule of thumb is to avoid threads unless they are strictly required. Threads add complexity. Plus, there's a large percentage of programmers who aren't particularly thread-savvy, so threads add an element of risk if this program will be used or maintained by others.
That being said, in this particular case I think I'd go with thread + queues. Since you know how often the display should update, I'd set up a poll of the queue for every 10ms or so to update the widgets.
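A minimal sketch of that thread + queue arrangement (the sleep stands in for the blocking socket read; only the main thread ever touches Tk):
import Queue      # "queue" on Python 3
import threading
import time
import random
import Tkinter as tk  # "tkinter" on Python 3
q = Queue.Queue()
def acquire():
    while True:
        time.sleep(0.015)  # stand-in for the blocking socket read
        q.put([random.random() for _ in range(30)])
def poll_queue():
    try:
        while True:
            data = q.get_nowait()
            for var, value in zip(string_vars, data):
                var.set("%.3f" % value)
    except Queue.Empty:
        pass
    root.after(10, poll_queue)  # poll the queue every ~10 ms
root = tk.Tk()
string_vars = [tk.StringVar(root) for _ in range(30)]
for var in string_vars:
    tk.Entry(root, textvariable=var).pack()
worker = threading.Thread(target=acquire)
worker.daemon = True  # don't keep the process alive after the UI closes
worker.start()
root.after(10, poll_queue)
root.mainloop()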
| 2 | 0 | 0 |
I have a Tkinter/ttk app that analyzes packets arriving every 10-25 ms. My present implementation uses a thread that updates 30 StringVars after every socket read, then calls update_idletasks() to update the corresponding Entry widgets. My app crashes no more than 30 minutes after starting.
Searches revealed that Tk isn't really thread-safe, and I have two main choices:
Use a thread + queues.
Use a function + .after(1, function).
The UI does little more than start/stop the updates, and provide the Entry widgets for display.
The primary wait in the system is the socket read, which has a timeout of 2x the expected packet rate (so it can't block forever).
In this case, would you prefer approach #1 or #2?
I'm leaning toward #2 for its simplicity, but I'm not sure if there are any Tk gotchas waiting along that path. I'll probably try both while I await the community wisdom.
|
Real-time Tkinter/ttk Entry widget update: thread+queue or .after(1)?
| 0 | 0 | 0 | 548 |
16,287,662 |
2013-04-29T20:29:00.000
| 1 | 0 | 1 | 0 |
python,printing,brackets
| 16,287,716 | 4 | false | 0 | 0 |
maybe a simple print "(" + str(var) + ")"?
| 2 | 1 | 0 |
I want to print "(" in Python:
print "(" + var + ")"
but it says:
TypeError: coercing to Unicode: need string or buffer, NoneType found
Can somebody help me? It can't be that hard... -.-
|
How to print brackets in python
| 0.049958 | 0 | 0 | 20,324 |
16,287,662 |
2013-04-29T20:29:00.000
| 1 | 0 | 1 | 0 |
python,printing,brackets
| 16,287,730 | 4 | false | 0 | 0 |
It appears that var is None in what you provided. The print statement itself is correct, but var does not contain a string.
| 2 | 1 | 0 |
I want to print "(" in Python:
print "(" + var + ")"
but it says:
TypeError: coercing to Unicode: need string or buffer, NoneType found
Can somebody help me? It can't be that hard... -.-
|
How to print brackets in python
| 0.049958 | 0 | 0 | 20,324 |
16,288,749 |
2013-04-29T21:43:00.000
| 1 | 0 | 1 | 0 |
python,random,floating-point
| 31,707,656 | 10 | false | 0 | 0 |
Try
random.randint(1, 10) / 10.0
(note this yields only the ten discrete values 0.1, 0.2, ..., 1.0, but it does include both endpoints).
| 2 | 26 | 1 |
I'm trying to generate a random number between 0.1 and 1.0.
We can't use random.randint because it returns integers.
We have also tried random.uniform(0.1, 1.0), but it returns a value >= 0.1 and < 1.0; we can't use this because our range must also include 1.0.
Does somebody else have an idea for this problem?
|
Generate random number between 0.1 and 1.0. Python
| 0.019997 | 0 | 0 | 37,037 |
16,288,749 |
2013-04-29T21:43:00.000
| 1 | 0 | 1 | 0 |
python,random,floating-point
| 16,289,924 | 10 | false | 0 | 0 |
The standard way would be random.random() * 0.9 + 0.1 (random.uniform() internally does just this). This will return numbers between 0.1 and 1.0 without the upper border.
But wait! 0.1 (aka ¹/₁₀) has no exact binary representation (just as ⅓ has none in decimal)! So you won't get a true 0.1 anyway, simply because the computer cannot represent it internally. Sorry ;-)
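If hitting both endpoints exactly matters more than continuity, one workaround (a sketch, not the only option) is to draw from a fine discrete grid that contains them:
import random
x = random.randint(100, 1000) / 1000.0  # one of 0.100, 0.101, ..., 1.000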
| 2 | 26 | 1 |
I'm trying to generate a random number between 0.1 and 1.0.
We can't use random.randint because it returns integers.
We have also tried random.uniform(0.1, 1.0), but it returns a value >= 0.1 and < 1.0; we can't use this because our range must also include 1.0.
Does somebody else have an idea for this problem?
|
Generate random number between 0.1 and 1.0. Python
| 0.019997 | 0 | 0 | 37,037 |
16,288,787 |
2013-04-29T21:45:00.000
| 0 | 1 | 0 | 0 |
python
| 21,212,477 | 1 | true | 0 | 0 |
The solution was to go into the code and change the following lines:
Line 118: self.track = u''
Lines 149-152: self.track = int(self._fields.get(TRACK, u'')) + 1
| 1 | 0 | 0 |
I'm using the Python library HSAudioTag, and I'm trying to read the track number in my files, however, without fail, the file returns 0 as the track number, even if it's much higher. Does anybody have any idea how to fix this?
Thanks.
|
Python HSAudioTag for WMA files always returns 0?
| 1.2 | 0 | 1 | 87 |
16,290,237 |
2013-04-30T00:09:00.000
| 1 | 0 | 1 | 0 |
python,regex,neo4j,lucene
| 16,290,406 | 2 | false | 1 | 0 |
Can you just use OR? "Hilary Clinton" OR "Clinton, Hilary"?
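On the Python side, a small regex sketch covering both orders (name_pattern is a hypothetical helper):
import re
def name_pattern(first, last):
    first, last = re.escape(first), re.escape(last)
    # Matches "First Last" or "Last, First"
    return re.compile(r'%s\s+%s|%s,\s*%s' % (first, last, last, first))
p = name_pattern("Hilary", "Clinton")
print(bool(p.search("Hilary Clinton")))   # True
print(bool(p.search("Clinton, Hilary")))  # True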
| 1 | 2 | 0 |
Let's say I have some free-form entries for names, where some are in the format "Last Name, First Name" and others are in the format "First Name Last Name" (e.g. "Bob MacDonald" and "MacDonald, Bob" are both present).
From what I understand, Lucene indexing does not allow wildcards at the beginning of a term, so what would be some ways in which I could find both? This is for neo4j and py2neo, so solutions in either Lucene pattern matching or Python regex matching are welcome.
|
Lucene or Python: Select both "Hilary Clinton" and "Clinton, Hilary" name entries
| 0.099668 | 1 | 0 | 194 |
16,292,220 |
2013-04-30T04:33:00.000
| 1 | 0 | 1 | 0 |
django,python-2.7,python-3.x
| 24,737,475 | 2 | true | 1 | 0 |
The real answer here is that you should be using virtual environments, as they both solve this particular problem, and are the industry standard because they solve many problems.
Simple process:
install python-virtualenv:
$>sudo apt-get install python-virtualenv # on Ubuntu
create a virtual-env against the Python you want, and activate it
$> virtualenv -p /usr/bin/pythonX ve
$> source ve/bin/activate
install django into this virtual-env
$> pip install django
...
PROFIT!
| 1 | 0 | 0 |
I have both Python 2.7 and 3.2 installed on my computer, and I wanted to install Django. When I did, it automatically installed Django 1.4 on Python 3. Is there a way I can install it on Python 2.7?
|
Installing django on two existing versions on python
| 1.2 | 0 | 0 | 66 |
16,293,597 |
2013-04-30T06:29:00.000
| 0 | 0 | 0 | 1 |
python,python-2.7,network-programming,udp,sliding-window
| 16,399,916 | 2 | false | 0 | 0 |
You could have a message loop continuously listening, processing received packets and putting them on a queue, then read them at your leisure...
However, you would need to implement your own ACKs, taking into account the possibility of lost and duplicated packets (if your application is concerned about them)... which raises the question: why not use TCP?
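A minimal sketch of such a receive loop (the port is hypothetical; the timeout mirrors the 2x-packet-rate timeout from the question):
import socket
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", 9999))  # hypothetical port
sock.settimeout(2.0)   # so the loop can't block forever
acks = set()           # a set silently drops duplicated ACKs
while True:
    try:
        data, addr = sock.recvfrom(4096)
    except socket.timeout:
        break          # nothing arrived in time: resend the batch or move on
    acks.add(data)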
| 1 | 0 | 0 |
Let's say a source A is sending me an unknown number of messages using UDP. How can I intercept all those messages? This is the complete scenario:
Send 7 messages
Wait for their ACKs
Process ACKs
Send another batch
Repeat...
Problems: (1) I don't know how many messages arrive, some may get lost and some are repeated, and (2) I might be doing something else later, so I cannot wait forever.
|
Receive an unknown number of UDP messages
| 0 | 0 | 0 | 273 |
16,294,819 |
2013-04-30T07:48:00.000
| 20 | 0 | 1 | 0 |
python,pip,requirements.txt
| 16,294,888 | 8 | false | 0 | 0 |
You can run pip freeze to see what you have installed and compare it to your requirements.txt file.
If you want to install missing modules you can run pip install -r requirements.txt and that will install any missing modules and tell you at the end which ones were missing and installed.
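If you'd rather check from Python than parse pip's output, a sketch using pkg_resources (it assumes requirements.txt contains plain requirement lines, no -r includes):
import pkg_resources
with open("requirements.txt") as f:
    reqs = [line.strip() for line in f if line.strip() and not line.startswith("#")]
try:
    pkg_resources.require(reqs)
    print("All requirements satisfied")
except pkg_resources.DistributionNotFound as e:
    print("Missing: %s" % e)
except pkg_resources.VersionConflict as e:
    print("Version conflict: %s" % e)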
| 1 | 41 | 0 |
I have a requirements.txt file with a list of packages that are required for my virtual environment. Is it possible to find out whether all the packages mentioned in the file are present? If some packages are missing, how can I find out which ones they are?
|
Check if my Python has all required packages
| 1 | 0 | 0 | 41,835 |
16,296,124 |
2013-04-30T09:07:00.000
| 2 | 0 | 1 | 0 |
python
| 16,297,598 | 1 | false | 0 | 0 |
"Isn't the whole point of a virtual environment to hedge against API changes in modules?" You are right. But API changes in the packages added to the site-packages/dist-packages. Not the standard library.
The idea of a virtual environment is for you to set up a library of distributed packages that you want to use together under a certain environment while maintaining the integrity of your standard library. You would want to do this if, for example, you want different versions (or combinations) of these distributed packages to be run in different scenarios without conflicts with other versions. That way, each virtual environment links to the same standard library and you can be sure that the programs executed in an environment have access to a certain version (or even set) of packages that you have decided to keep in that environment.
| 1 | 2 | 0 |
When I create a virtual environment with pyvenv, the virtual environment's python executable is symlinked to the system-wide installation and consequently, I can access the system-wide standard library.
Why is this?
Isn't the whole point of a virtual environment to hedge against API changes in modules?
Can't changes in the standard library break an application, too?
|
pyvenv - Why provide access to the system-wide python executable and standard library?
| 0.379949 | 0 | 0 | 260 |
16,297,433 |
2013-04-30T10:15:00.000
| 4 | 0 | 1 | 0 |
python,class
| 16,297,958 | 1 | true | 0 | 0 |
Standalone module-level functions are totally fine if their functionality is related to that of the module. Just take a look at the Python standard library modules, they usually contain both functions and classes for a given topic.
| 1 | 1 | 0 |
By 'stand-alone' I mean functions not inside a class (I'm not sure of the correct terminology - 'module-level'?)
I have ~7 functions in a file that already has a few classes. These functions deal with processing input from the command line, reading and writing to files, checking whether file paths are valid - that kind of thing. Their purposes are closely related, and yet I'm not sure whether to put them into their own class, because I can't think of any reason to instantiate such a class or any state that would be associated with it.
What are my options, and what would you recommend I do with these functions? Is the use of static methods discouraged/unnecessary in Python?
|
Stand-alone, yet closely related functions at module level
| 1.2 | 0 | 0 | 104 |
16,298,022 |
2013-04-30T10:47:00.000
| 2 | 1 | 0 | 0 |
python,testing,ssh,telnet,robotframework
| 16,315,259 | 3 | false | 0 | 0 |
We haven't used this method with telnet exactly, but we have with another ssh session and other shells that we cannot access otherwise...
Open an ssh connection to the first machine.
On this connection, use SSHLibrary keywords like Set Prompt, Write and Read or Read Until Prompt to manually open a telnet connection to the next machine.
The Write and Read keywords can be used a bit like expect and spawn...
| 1 | 4 | 0 |
There is a node that I ssh into, where I start a script remotely via Robot Framework (SSHLibrary.Start Command or Execute Command). This remote script starts a telnet connection to another node which is hidden from the outside. The telnet call seems to be a blocking event for Robot. I use RIDE for test execution and it simply stops working; the stop signals I send are ineffective. Is it possible to spawn telnet within ssh?
|
Is there a way to use telnet within an ssh connection in Robot Framework?
| 0.132549 | 0 | 1 | 2,735 |
16,304,959 |
2013-04-30T16:42:00.000
| 1 | 0 | 0 | 0 |
python,nosql
| 16,306,049 | 1 | true | 0 | 0 |
If search is your primary use case, I'd look into a search solution like ElasticSearch or Solr. Even though some databases support some sort of full-text indexing, they're not optimized for this problem.
| 1 | 0 | 0 |
There are two possible cases where I am finding MySQL and RDBMS too slow. I need a recommendation for a better alternative in terms of NOSQL.
1) I have an application that saves tons of emails for later analysis. Email content is saved in a simple table with a couple of relations to two other tables. Columns are sender, recipient, content, headers, timestamp, etc.
Now that the records are close to a million, it's taking longer to search through them. Basically, we run some pattern searches on them.
Which would be the best free/open-source NoSQL store to replace it with, so that searching through the mails would be faster?
2) The other use case is fundamentally an asset management library consisting of files - a system very similar to the mails. Here we have files with all types of extensions. When the files are created or changed, we store the files' metadata in a table. Again, data sizes have grown so big over time that searching them is not easy.
Ideas welcome. Someone suggested Mongo. Is there anything better and faster?
|
Possible NoSQL cases
| 1.2 | 1 | 0 | 56 |
16,307,347 |
2013-04-30T19:16:00.000
| 1 | 0 | 0 | 0 |
python,postgresql,constraints,openerp
| 16,336,050 | 1 | true | 1 | 0 |
You can do this with a module by loading the data through XML or CSV. Have a look at any module with a security\ir.model.access.csv file.
So, to load data, create a new module and add a CSV file named after the table you want to load into (e.g. ir.model.data.csv), and add it to the __openerp__.py file under 'update_xml'.
| 1 | 1 | 0 |
I need a way to add some external IDs in the system without having to add them manually or by .csv.
Is there any way to do this with a module that updates the ir.model.data table of the DB?
If so, what module should I look at? Does one exist, so I can make a new one based on it?
Thanks in advance
|
OpenErp - External Id Bulk Update
| 1.2 | 0 | 0 | 143 |
16,307,421 |
2013-04-30T19:20:00.000
| 0 | 0 | 1 | 0 |
python,wxpython
| 16,307,892 | 1 | false | 0 | 1 |
Is the Windows 7 box you're using to create the installer 64-bit? Is the Python version you were using on Windows 7 64-bit? If so, then you'll need to create the exe using a 32-bit Python, 32-bit wxPython and 32-bit installer.
A 64-bit program will run just fine on a 64-bit system, but it won't work on a 32-bit one. However, a 32-bit program will run on both.
| 1 | 0 | 0 |
I programmed a simple little game with wxPython and created an .exe of it with py2exe (and in a second try with PyInstaller). I also programmed an installer which just creates a folder and copies all the game files into this folder. I also created an .exe for this installer. Everything works fine on Windows 7. But on Windows XP I get this problem: my installer.exe works successfully, but when I start the game's .exe I get this error:
foo.exe is not a valid win32 application
I use python 2.7
|
py2exe/pyinstaller: execution failure
| 0 | 0 | 0 | 1,262 |
16,317,420 |
2013-05-01T11:37:00.000
| -1 | 0 | 0 | 0 |
python,math
| 16,319,018 | 3 | false | 0 | 0 |
The simple answer is: pick a random number from the geometric distribution and return it mod n.
E.g.: (numpy.random.geometric(p) - 1) % n (note that numpy's geometric sampler starts at 1, hence the -1; Python's own random module has no geometric sampler).
P(x) = p(1-p)^x + p(1-p)^(x+n) + p(1-p)^(x+2n) + ...
     = p(1-p)^x * (1 + (1-p)^n + (1-p)^(2n) + ...)
Note that the second part is a constant for given p and n. The first part is geometric.
| 1 | 2 | 1 |
What is a good way to sample integers in the range {0,...,n-1} according to (a discrete version of) the exponential distribution? random.expovariate(lambd) returns a real number from 0 to positive infinity.
Update. Changed title to make it more accurate.
|
Sample integers from truncated geometric distribution
| -0.066568 | 0 | 0 | 2,007 |
16,317,886 |
2013-05-01T12:13:00.000
| 1 | 0 | 0 | 0 |
python,web-applications,deployment,web-deployment
| 16,318,079 | 1 | true | 1 | 0 |
The app will be modified so that one instance may serve
multiple customers. Based on the requested domain, it will prepare (load) the
right settings, database connection etc. for that customer.
Is this a good idea?
Well, I've used a similar system in production whereby there are n instances of the app, but each instance can serve any customer, based on the HTTP Host header, and it works quite well.
Given a sufficiently large number of customers, it may not be cost-effective, or even practical, to have one instance per customer.
| 1 | 0 | 0 |
I have a Python web app (WSGI) which is deployed using uWSGI and nginx. I am going to provide this app to many users (customers) - each user will have his own settings, database, templates, data folder etc. The code of the app can be shared.
My original idea was to have one uWSGI process per customer. But that is quite a wasteful approach, because currently the app has about a 100MB memory footprint. I expect that most of these instances will be sleeping most of the time (at most 500 requests per day).
I came up with this solution:
The app will be modified so that one instance may serve multiple customers. Based on the requested domain, it will prepare (load) the right settings, database connection etc. for that customer.
Is this a good idea? Or should I rather focus on lowering the memory footprint?
Thank you for your answers!
|
Python web app deployment of multiple app instances
| 1.2 | 0 | 0 | 289 |
16,320,412 |
2013-05-01T14:49:00.000
| 0 | 0 | 0 | 0 |
python,random
| 16,320,580 | 3 | false | 0 | 0 |
You can use some mapping to convert the strings to numeric values and then apply standard test batteries like Diehard and TestU01. Note that long sequences of samples are needed (typically a few MB of data will do).
| 2 | 10 | 1 |
I've come up with 2 methods to generate relatively short random strings - one is much faster and simpler, and the other much slower but, I think, more random. Is there a not-super-complicated method or way to measure how random the data from each method might be?
I've tried compressing the output strings (via zlib), figuring the more truly random the data, the less it will compress, but that hasn't proved much.
|
Quantifying randomness
| 0 | 0 | 0 | 740 |
16,320,412 |
2013-05-01T14:49:00.000
| 0 | 0 | 0 | 0 |
python,random
| 16,328,721 | 3 | false | 0 | 0 |
An outcome is considered random if it can't be predicted ahead of time with certainty. If it can be predicted with certainty it is considered deterministic. This is a binary categorization, outcomes either are deterministic or random, there aren't degrees of randomness. There are, however, degrees of predictability. One measure of predictability is entropy, as mentioned by EMS.
Consider two games. You don't know on any given play whether you're going to win or lose. In game 1, the probability of winning is 1/2, i.e., you win about half the time in the long run. In game 2, the odds of winning are 1/100. Both games are considered random, because the outcome isn't a dead certainty. Game 1 has greater entropy than game 2, because the outcome is less predictable - while there's a chance of winning, you're pretty sure you're going to lose on any given trial.
The amount of compression that can be achieved (by a good compression algorithm) for a sequence of values is related to the entropy of the sequence. English has pretty low entropy (lots of redundant info both in the relative frequency of letters and the sequences of words that occur as groups), and hence tends to compress pretty well.
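If you want a number rather than a compression ratio, a sketch estimating the empirical Shannon entropy of a string:
import math
from collections import Counter
def entropy_bits_per_char(s):
    n = float(len(s))
    return -sum((c / n) * math.log(c / n, 2) for c in Counter(s).values())
print(entropy_bits_per_char("aaaaaaaa"))  # 0.0: fully predictable
print(entropy_bits_per_char("abcdefgh"))  # 3.0: 8 equally likely symbols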
| 2 | 10 | 1 |
I've come up with 2 methods to generate relatively short random strings - one is much faster and simpler, and the other much slower but, I think, more random. Is there a not-super-complicated method or way to measure how random the data from each method might be?
I've tried compressing the output strings (via zlib), figuring the more truly random the data, the less it will compress, but that hasn't proved much.
|
Quantifying randomness
| 0 | 0 | 0 | 740 |
16,320,603 |
2013-05-01T14:59:00.000
| 3 | 0 | 0 | 0 |
python,flask,coding-style,standards,project-structure
| 16,321,335 | 3 | false | 1 | 0 |
There is no right or wrong answer to this. One file may be easy to manage if it is a very small project and you probably are the only one working on it. Some of the reasons you split the project into multiple source files however are:
You only change and commit what requires change. What I mean here is that if you have a large single file with all your code in it, any change means saving/updating the entire file. Imagine if you made a mistake: the entire codebase could get screwed up.
You have a large team, possibly with different sets of duties and responsibilities. For example, you could have a designer who only takes care of the design/front end (HTML, CSS etc.). If you have all the code in one file, they are exposed to other stuff that they don't need to worry about. Also, they can independently work on their portion without needing to worry about anything else. You minimize the risk of mistakes by having multiple source files here.
Easier to manage as the codebase gets bigger. Can you imagine looking through 100,000 lines of code in one single file and trying to debug a problem?
| 2 | 12 | 0 |
I'm currently writing a web application in Python using the Flask web framework. I'm really getting used to just putting everything in the one file, unlike many other projects I see where they have different directories for classes, views, and so on. However, the Flask example just stuffs everything into the one file, which is what I seem to be going with.
Is there any risk or problem in writing the whole web app in one single file, or is it better to spread out my functions and classes across separate files?
|
Is it bad practice to write a whole Flask application in one file?
| 0.197375 | 0 | 0 | 3,546 |
16,320,603 |
2013-05-01T14:59:00.000
| -2 | 0 | 0 | 0 |
python,flask,coding-style,standards,project-structure
| 16,326,032 | 3 | false | 1 | 0 |
As it is a micro-framework, you should not rely on it to build full-blown applications, since it's not designed for that.
As long as you keep your project small (a few forms, a few tables and mostly static content) you will be fine. But if you want a bigger application, you might "out-program" the capacities of the framework in terms of modularity and reuse of code. In that case you might want to migrate toward a full-blown framework where everything is separated into its own module.
| 2 | 12 | 0 |
I'm currently writing a web application in Python using the Flask web framework. I'm really getting used to just putting everything in the one file, unlike many other projects I see where they have different directories for classes, views, and so on. However, the Flask example just stuffs everything into the one file, which is what I seem to be going with.
Is there any risk or problem in writing the whole web app in one single file, or is it better to spread out my functions and classes across separate files?
|
Is it bad practice to write a whole Flask application in one file?
| -0.132549 | 0 | 0 | 3,546 |
16,326,699 |
2013-05-01T21:18:00.000
| 1 | 0 | 0 | 0 |
python,numpy,machine-learning,scipy,scikit-learn
| 16,332,805 | 2 | false | 0 | 0 |
A linear model (regression, SGD, Bayes) will probably be your best bet if you need to train your model frequently.
Although, before you go running any models, you could try the following:
1) Feature reduction. Are there features in your data that could easily be removed? For example, if your data is text- or ratings-based, there are lots of known options available.
2) Learning curve analysis. Maybe you only need a small subset of your data to train a model; beyond that you may only be fitting noise or gaining tiny increases in accuracy.
Both approaches could allow you to greatly reduce the training data required.
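One practical note: scikit-learn's linear models accept scipy.sparse input directly, so a [66k, 56k] sparse matrix need never be densified. A sketch with synthetic data of that shape:
import numpy as np
from scipy import sparse
from sklearn.linear_model import SGDClassifier
X = sparse.rand(66000, 56000, density=0.001, format="csr")  # stand-in for the CSV
y = np.random.randint(0, 2, size=66000)
clf = SGDClassifier()
clf.fit(X, y)  # trains on the sparse matrix as-is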
| 1 | 6 | 1 |
I have a CSV file of size [66k, 56k] (rows, columns). It's a sparse matrix. I know that numpy can handle a matrix of that size. I would like to know, based on everyone's experience, how many features scikit-learn algorithms can handle comfortably?
|
How many features can scikit-learn handle?
| 0.099668 | 0 | 0 | 2,840 |
16,328,801 |
2013-05-02T01:03:00.000
| 1 | 0 | 0 | 0 |
python,selenium,selenium-chromedriver
| 17,212,203 | 3 | false | 0 | 0 |
I recently faced the same issue. I tried a lot of solutions found on the Internet; none of them helped. So finally I came to this:
Launch chrome with empty user-data-dir (in /tmp folder) to let chrome initialize it
Quit chrome
Modify Default/Preferences in newly created user-data-dir, add those fields to the root object (just an example):
"download": {
"default_directory": "/tmp/tmpX7EADC.downloads",
"directory_upgrade": true
}
Launch chrome again with the same user-data-dir
Now it works just fine.
Another tip: if you don't know the name of the file that is going to be downloaded, take a snapshot (a list of files) of the downloads directory, then download the file and find its name by comparing the snapshot with the current list of files in the downloads directory.
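Depending on your chromedriver/selenium versions, you may be able to set the same preference keys through ChromeOptions instead of editing Preferences by hand (a sketch; the directory path is hypothetical):
from selenium import webdriver
options = webdriver.ChromeOptions()
options.add_experimental_option("prefs", {
    "download.default_directory": "/tmp/thread-3.downloads",  # one dir per thread
    "download.directory_upgrade": True,
})
driver = webdriver.Chrome(chrome_options=options)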
| 1 | 8 | 0 |
Everything is in the title!
Is there a way to define the download directory for selenium-chromedriver used with python?
In spite of much research, I haven't found anything conclusive...
As a newbie, I've seen many things about "the desired_capabilities" or "the options" for chromedriver, but nothing has resolved my problem... (and I still don't know if it will!)
To explain my issue a little bit more:
I have a lot of URLs to scan (200,000) and for each URL a file to download.
I have to create a table with the URL, the information I scraped from it, AND the name of the file I've just downloaded for each web page.
With the volume I have to treat, I've created threads that open multiple instances of chromedriver to speed up the processing.
The problem is that every downloaded file arrives in the same default directory, and I'm no longer able to link a file to a URL...
So, the idea is to create a download directory for every thread to manage them one by one.
If someone has the answer to my question in the title OR a workaround to identify the downloaded file and link it with the current URL, I will be grateful!
Define download directory for chromedriver selenium with python
| 0.066568 | 0 | 1 | 9,773 |
16,334,482 |
2013-05-02T09:23:00.000
| -1 | 1 | 1 | 0 |
python,encryption,passwords
| 16,334,819 | 4 | false | 0 | 0 |
You could use htpasswd, which is installed with Apache or can be downloaded separately. Use subprocess.check_output to run it, and you can create Python functions to add users, remove them, verify they have given the correct password, etc. Pass the -B option to use bcrypt (which handles salting for you), and you will know it's reasonably secure (unlike if you implement salting yourself).
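Alternatively, since the question already mentions passlib, a minimal sketch with it (the resulting hash string embeds its own random salt, so you can store it as-is, e.g. one username:hash line per user):
from passlib.hash import pbkdf2_sha256
hashed = pbkdf2_sha256.encrypt("hunter2")  # store this string, never the password
print(pbkdf2_sha256.verify("hunter2", hashed))  # True
print(pbkdf2_sha256.verify("wrong", hashed))    # False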
| 1 | 6 | 0 |
I have a small Python program which will be used locally by a small group of people (<15 people). For accountability, I want to have a simple username+password check at the start of the program (it doesn't need to be super secure). As a beginner, this is my first time trying it. When I searched around, I found that Python has passlib for password hashing, but even after looking through it I am still not sure how to implement it. So, there are a few things that I want to know:
How do I store the users' passwords locally? The only way I know at the moment is to create a text file and read/write from it, but that would defeat the whole purpose of hashing, as people could just open the text file and read the passwords from there.
What do hash & salt mean, and how do they work? (A brief and simple explanation will do.)
What is the recommended way to implement a username and password check?
I am sorry for the stupid questions, but I will greatly appreciate it if you could answer them.
|
Password Protection Python
| -0.049958 | 0 | 0 | 6,052 |
16,335,771 |
2013-05-02T10:29:00.000
| 18 | 0 | 1 | 0 |
python
| 16,335,832 | 5 | true | 0 | 0 |
Python's "not" operand is not, not !.
Python's "logical not" operand is not, not !.
| 2 | 6 | 0 |
For the sake of learning, is there a shorter way to do:
if string.isdigit() == False :
I tried:
if !string.isdigit() : and if !(string.isdigit()) : which both didn't work.
|
Shorter way to check if a string is not isdigit()
| 1.2 | 0 | 0 | 60,969 |
16,335,771 |
2013-05-02T10:29:00.000
| 4 | 0 | 1 | 0 |
python
| 61,558,997 | 5 | false | 0 | 0 |
Maybe using .isalpha() is an easier way...
So, instead of if not my_str.isdigit() you can try if my_str.isalpha().
It is a shorter way to check that a string is not made of digits (note, though, that it is only equivalent when the string is purely alphabetic).
| 2 | 6 | 0 |
For the sake of learning, is there a shorter way to do:
if string.isdigit() == False :
I tried:
if !string.isdigit() : and if !(string.isdigit()) : which both didn't work.
|
Shorter way to check if a string is not isdigit()
| 0.158649 | 0 | 0 | 60,969 |
16,336,919 |
2013-05-02T11:28:00.000
| 2 | 1 | 0 | 1 |
python,bash,fabric
| 16,337,848 | 1 | true | 0 | 0 |
Ctrl+C generates a SIGINT signal.
You can send that signal with kill -SIGINT pid, where pid is the process id you wish to signal. kill is a Bash built-in.
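With Fabric 1.x, a sketch (the pidfile path is hypothetical; you need some way of knowing the remote pid):
from fabric.api import run
def interrupt(pidfile="/tmp/myscript.pid"):
    # Deliver SIGINT remotely; Python receives it as KeyboardInterrupt.
    run("kill -SIGINT $(cat %s)" % pidfile)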
| 1 | 2 | 0 |
How do I fire a Ctrl+C with fabric, in other words is it possible to trigger KeyboardInterrupt manually via bash?
|
Simulate KeyboardInterrupt with fabric
| 1.2 | 0 | 0 | 414 |
16,340,305 |
2013-05-02T14:09:00.000
| 0 | 0 | 1 | 0 |
python,python-2.7
| 16,340,366 | 2 | false | 0 | 0 |
If list() is given another list, it creates a new list from the given list's members (a copy).
| 2 | 0 | 0 |
Say we have x = [2,4,5]. If I do y = list(x), I get back [2,4,5], and similarly for tuples. This is a bit surprising to me; I would have expected [[2,4,5]] in the former case. What would have been the motivation for not giving back a list containing the list?
|
Design of Python curiosity, why so?
| 0 | 0 | 0 | 68 |
16,340,305 |
2013-05-02T14:09:00.000
| 9 | 0 | 1 | 0 |
python,python-2.7
| 16,340,373 | 2 | true | 0 | 0 |
The list builtin type takes an arbitrary iterable (regardless of type) and creates a new list out of it. Since a list instance is iterable, it can be used to construct a new list by iteration.
If you expect list([1,2,3]) to give [[1,2,3]], why wouldn't you expect list((1,2,3)) to return [(1,2,3)] or list(x for x in range(10)) to return [<generator object <genexpr> at 0xef170>]?
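A quick interpreter session making the point concrete:
>>> list([1, 2, 3])   # copies the list's members
[1, 2, 3]
>>> list((1, 2, 3))   # same for a tuple
[1, 2, 3]
>>> list("abc")       # any iterable works
['a', 'b', 'c']
>>> [[1, 2, 3]]       # to wrap rather than copy, use a literal
[[1, 2, 3]]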
| 2 | 0 | 0 |
Say we have x = [2,4,5]. If I do y = list(x), I get back [2,4,5], and similarly for tuples. This is a bit surprising to me; I would have expected [[2,4,5]] in the former case. What would have been the motivation for not giving back a list containing the list?
|
Design of Python curiosity, why so?
| 1.2 | 0 | 0 | 68 |
16,341,001 |
2013-05-02T14:42:00.000
| 0 | 0 | 0 | 0 |
python,macos
| 16,342,227 | 2 | false | 1 | 0 |
So I figured out what the problem was.
The website is returning an erroneous response code for these 6 pages: even though it returns a 404, it also returns the web page. Chrome and Safari seem to ignore the response code and display the page anyway, while my script aborts on the 404.
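For reference, with urllib2 the body of an error response is still readable from the HTTPError itself:
import urllib2
try:
    body = urllib2.urlopen("http://example.com/page").read()
except urllib2.HTTPError as e:
    body = e.read()  # a 404 response can still carry the page body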
| 1 | 0 | 0 |
I have a python script that pings 12 pages on someExampleSite.com every 3 minutes. It's been working for a couple months but today I started receiving 404 errors for 6 of the pages every time it runs.
So I tried going to those urls on the pc that the script is running on and they load fine in Chrome and Safari. I've also tried changing the user agent string the script is using and that also didn't change anything. Also I tried removing the ['If-Modified-Since'] header which also didn't change anything.
Why would the server be sending my script a 404 for these 6 pages but on that same computer I can load them in Chrome and Safari just fine? (I made sure to do a hard refresh in Chrome and Safari and they still loaded)
I'm using urllib2 to make the request.
|
Troubleshooting 404 received by python script
| 0 | 0 | 1 | 52 |
16,343,985 |
2013-05-02T17:17:00.000
| 0 | 0 | 0 | 1 |
python,perl,awk,biopython,bioperl
| 16,360,951 | 2 | false | 0 | 0 |
I think you should consider using an alignment tool designed for this data for a couple of reasons:
Those tools will also find reverse complemented matches (though, you could also implement this).
Aligners will properly handle paired-end reads and multiple matches.
Most aligners are written in C and use data structures and algorithms designed for this amount of data.
For those reasons, and others, any script you come up with will likely not be nearly as fast or complete as the tools that already exist. If you want to specify the number of mismatches to keep, instead of aligning all your reads and then parsing the output, you could use Vmatch if you have access to it (this tool is very fast and good for many matching tasks).
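To see why a hand-rolled script struggles at this scale, here is a naive pure-Python sketch (hypothetical file names): scanning a 10 kb genome per read is simple, but over ~100 million reads it would take far too long.
def min_mismatches(read, genome):
    # Smallest Hamming distance of read against any window of genome.
    best = len(read)
    for i in range(len(genome) - len(read) + 1):
        d = sum(1 for a, b in zip(read, genome[i:i + len(read)]) if a != b)
        best = min(best, d)
    return best
genome = open("genome.txt").read().strip()
for line in open("reads.txt"):
    read = line.strip()
    d = min_mismatches(read, genome)
    if d <= 3:
        print("%s %d" % (read, d))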
| 1 | 1 | 0 |
I have a fastq file with more than 100 million reads in it and a genome sequence 10,000 bases in length.
I want to take the sequences from the fastq file and search for them in the genome sequence, allowing up to 3 mismatches.
Using awk I extracted the sequences from the fastq file:
1.fq(few lines)
@DH1DQQN1:269:C1UKCACXX:1:1101:1207:2171 1:N:0:TTAGGC
NATCCCCATCCTCTGCTTGCTTTTCGGGATATGTTGTAGGATTCTCAGC
+
1=ADBDDHD;F>GF@FFEFGGGIAEEI?D9DDHHIGAAF:BG39?BB
@DH1DQQN1:269:C1UKCACXX:1:1101:1095:2217 1:N:0:TTAGGC
TAGGATTTCAAATGGGTCGAGGTGGTCCGTTAGGTATAGGGGCAACAGG
+
??AABDD4C:DDDI+C:C3@:C):1?*):?)?################
$ awk 'NR%4==2' 1.fq
NATCCCCATCCTCTGCTTGCTTTTCGGGATATGTTGTAGGATTCTCAGC
TAGGATTTCAAATGGGTCGAGGTGGTCCGTTAGGTATAGGGGCAACAGG
I now have all the sequences in a file. I want to take each sequence line, search for it in the genome sequence allowing up to 3 mismatches, and print the sequences that are found.
example:
genome sequence file:
GGGGAGGAATATGATTTACAGTTTATTTTTCAACTGTGCAAAATAACCTTAACTGCAGACGTTATGACATACATACATTCTATGAATTCCACTATTTTGGAGGACTGGAATTTTGGTCTACAACCTCCCCCAGGAGGCACACTAGAAGATACTTATAGGTTTGTAACCCAGGCAATTGCTTGTCAAAAACATACA
search sequence file:
GGGGAGGAATATGAT
GGGGAGGAATATGAA
GGGGAGGAATATGCC
TCAAAAACATAGG
TCAAAAACATGGG
OUTPUT FILE:
GGGGAGGAATATGAT 0 # 0 mismatch exact sequence
GGGGAGGAATATGAA 1 # 1 mismatch
GGGGAGGAATATGCC 2 # 2 mismatch
TCAAAAACATAGG 2 # 2 mismatch
TCAAAAACATGGG 3 # 3 mismatch
|
search sequence in genome with mismatches
| 0 | 0 | 0 | 979 |
16,344,677 |
2013-05-02T17:59:00.000
| 0 | 1 | 1 | 0 |
c++,python,string,hash,dictionary
| 16,344,775 | 1 | false | 0 | 0 |
If you tokenize strings, you should be careful not to have two different strings map to the same token. The std::unordered_map will also use hashes for quick lookup, but it additionally takes care of strings that share a hash yet have different values; of course that takes some time.
If you can tokenize the strings in such a way that two strings never share a token, using a map with ints as keys is a very good idea.
| 1 | 0 | 0 |
I have a question about using a hash that has strings as keys. Let's say I have a hash which maps strings to doubles.
The question is: I've heard some say that it is better to tokenize the strings into ints and have the hash map ints to doubles rather than strings to doubles. Will this generally be faster in Python or C++ (2 questions), or will it not matter? Let's say that we're using boost's unordered_map in C++, so it's more like a Python dictionary.
Does this matter if the keys are actually (string, string) -> double, i.e. in C++ an unordered_map<pair<string, string>, double>?
|
tokenizing strings to int for faster hash maps
| 0 | 0 | 0 | 136 |
16,347,535 |
2013-05-02T20:53:00.000
| 4 | 0 | 1 | 0 |
python,pygame
| 16,347,585 | 1 | false | 0 | 1 |
Your problem is that your script (or something else it imports) is already named pygame. So, Python can't find the package with the same name.
Just rename your script to something else, like my_game.py.
| 1 | 0 | 0 |
I need help with pygame on Python 3.1. I am watching TheNewBoston's tutorials, and whenever I put in
import pygame
from pygame.locals import *
I receive
Traceback (most recent call last):
File "C:\Documents and Settings\Aidan\Desktop\pygame.py", line 8, in
import pygame
File "C:\Documents and Settings\Aidan\Desktop\pygame.py", line 9, in
from pygame.locals import *
ImportError: No module named locals
|
Pygame python 3.1
| 0.664037 | 0 | 0 | 156 |