Q_Id (int64, 337 to 49.3M) | CreationDate (string, length 23) | Users Score (int64, -42 to 1.15k) | Other (int64, 0 to 1) | Python Basics and Environment (int64, 0 to 1) | System Administration and DevOps (int64, 0 to 1) | Tags (string, length 6 to 105) | A_Id (int64, 518 to 72.5M) | AnswerCount (int64, 1 to 64) | is_accepted (bool, 2 classes) | Web Development (int64, 0 to 1) | GUI and Desktop Applications (int64, 0 to 1) | Answer (string, length 6 to 11.6k) | Available Count (int64, 1 to 31) | Q_Score (int64, 0 to 6.79k) | Data Science and Machine Learning (int64, 0 to 1) | Question (string, length 15 to 29k) | Title (string, length 11 to 150) | Score (float64, -1 to 1.2) | Database and SQL (int64, 0 to 1) | Networking and APIs (int64, 0 to 1) | ViewCount (int64, 8 to 6.81M) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
14,677,548 |
2013-02-03T21:34:00.000
| 0 | 0 | 0 | 0 |
python,django,django-forms
| 14,678,289 | 1 | true | 1 | 0 |
I would suggest you use separate view logic for up- and down-voting.
Something like this: /upvote/{{comment.pk|urlize}}
Then write a view that handles this URL. Use the PK to find the comment the user is trying to up/down-vote, write the necessary condition to check whether the user is authorized to perform that kind of action, and then finally execute that action.
I hope this helps.
| 1 | 3 | 0 |
I want to make upvote and downvote buttons for comments, but I want all the form inputs that django.contrib.comments.forms.CommentSecurityForm gives me, to make sure the form is secure. Is that necessary? And if so, how do I make a form class with upvote and downvote buttons? Custom checkbox styles?
|
Upvote and Downvote buttons with Django forms
| 1.2 | 0 | 0 | 829 |
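A minimal sketch of the per-comment voting view suggested in that answer. The URL pattern, model name, score field, and authorization rule are my own assumptions for illustration, not from the original thread:

```python
# urls.py (hypothetical): url(r'^upvote/(?P<pk>\d+)/$', upvote, name='upvote')
from django.contrib.auth.decorators import login_required
from django.http import HttpResponseForbidden, HttpResponseRedirect
from django.shortcuts import get_object_or_404

from myapp.models import Comment  # hypothetical comment model

@login_required
def upvote(request, pk):
    comment = get_object_or_404(Comment, pk=pk)
    if comment.author == request.user:  # example rule: no voting on your own comment
        return HttpResponseForbidden("You cannot vote on your own comment.")
    comment.score += 1                  # assumes an integer 'score' field
    comment.save()
    return HttpResponseRedirect(request.META.get("HTTP_REFERER", "/"))
```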
14,677,763 |
2013-02-03T21:55:00.000
| 1 | 0 | 0 | 0 |
python,video,image-processing,opencv,computer-vision
| 14,727,117 | 2 | true | 0 | 0 |
I guess you are detecting the cars in each frame and creating a new bounding box each time a car is detected. This would explain the many increments of your variable.
You have to find a way to figure out if the car detected in one frame is the same car from the frame before (if you had a car detected in the previous frame). You might be able to achieve this by simply comparing the bounding box distances between two frames; if the distance is less than a threshold value, you can say that it's the same car from the previous frame. This way you can track the cars.
You could increment the counter variable when the detected car leaves the camera's field of view (exits the frame).
The tracking procedure I proposed here is very simple, try searching for "object tracking" to see what else you can use (maybe have a look at OpenCV's KLT tracking).
| 2 | 0 | 1 |
I've got this big/easy problem that I need to solve but I can't.
What I'm trying to do is count cars on a highway. I actually can detect the moving cars and put bounding boxes on them, but when I try to count them I simply can't. I tried making a variable (nCars) and incrementing it every time the program creates a bounding box, but that seems to increment too many times.
The question is: what's the best way to count moving cars/objects?
PS: I don't know if this is a silly question but I'm going nuts... Thanks for everything (:
And I'm new here, but I've known this website for some time (: It's great!
|
Counting Cars in OpenCV + Python
| 1.2 | 0 | 0 | 1,748 |
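A minimal sketch of the distance-threshold matching the accepted answer describes. The (x, y, w, h) box format and the threshold value are assumptions of mine:

```python
import math

def centroid(box):
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

def count_new_cars(prev_boxes, curr_boxes, threshold=50.0):
    """Count boxes in this frame that match no box from the previous frame."""
    new_cars = 0
    for box in curr_boxes:
        cx, cy = centroid(box)
        dists = [math.hypot(cx - px, cy - py)
                 for px, py in map(centroid, prev_boxes)]
        if not dists or min(dists) > threshold:
            new_cars += 1  # no nearby box in the last frame: treat as a new car
    return new_cars
```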
14,677,763 |
2013-02-03T21:55:00.000
| 0 | 0 | 0 | 0 |
python,video,image-processing,opencv,computer-vision
| 14,678,095 | 2 | false | 0 | 0 |
You should use an SQLite database to store the cars' information.
| 2 | 0 | 1 |
I've got this big/easy problem that I need to solve but I can't.
What I'm trying to do is count cars on a highway. I actually can detect the moving cars and put bounding boxes on them, but when I try to count them I simply can't. I tried making a variable (nCars) and incrementing it every time the program creates a bounding box, but that seems to increment too many times.
The question is: what's the best way to count moving cars/objects?
PS: I don't know if this is a silly question but I'm going nuts... Thanks for everything (:
And I'm new here, but I've known this website for some time (: It's great!
|
Counting Cars in OpenCV + Python
| 0 | 0 | 0 | 1,748 |
14,679,012 |
2013-02-04T00:34:00.000
| 0 | 1 | 1 | 0 |
python,python-2.7
| 14,679,195 | 2 | false | 0 | 0 |
Maybe look at the logging module that comes with Python.
| 1 | 0 | 0 |
I have 3 scripts. One is starttest.py, which kicks off the execution of methods called in test.py; the methods themselves are defined in module.py.
There are many print statements in each file, and I want to capture every print statement in my log file from the starttest.py file itself. I tried redirecting sys.stdout in the starttest.py file, but that only captures the print statements from starttest.py; it has no effect on the print statements in test.py and module.py.
Any suggestions for capturing the print statements from all of the files in a single place?
|
Capturing log information from bunch of python functions in single file
| 0 | 0 | 0 | 109 |
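A minimal sketch of what the answer hints at, assuming the print statements are replaced with logging calls (the file name and format string are my own choices):

```python
# starttest.py -- configure logging once, before running test.py / module.py
import logging

logging.basicConfig(
    filename="run.log",
    level=logging.DEBUG,
    format="%(asctime)s %(name)s %(levelname)s: %(message)s",
)

# in test.py and module.py, each module gets its own named logger:
log = logging.getLogger(__name__)
log.info("this ends up in run.log no matter which file it lives in")
```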
14,679,610 |
2013-02-04T02:22:00.000
| 1 | 0 | 0 | 0 |
javascript,python,mysql,d3.js,data-visualization
| 14,679,748 | 3 | false | 0 | 0 |
d3 is a JavaScript library that runs on the client side, while the MySQL database runs on the server side.
d3 can't connect to a MySQL database, let alone convert the data to JSON format. The way you thought it was possible (steps 1 and 2) is what you should do.
| 1 | 4 | 0 |
I need some help with d3 and MySQL. Below is my question:
I have data stored in MySQL (e.g. keywords with their frequencies). I now want to visualize it using d3. As far as my knowledge of d3 goes, it requires a JSON file as input. My question is: how do I access this MySQL database from a d3 script? One way I could think of is:
Using Python, connect to the database and convert the data to JSON format. Save this in some .json file.
In d3, read this JSON file as input and use it in the visualization.
Is there any other way to convert the data in MySQL into .json format directly using d3? Can we connect to MySQL from d3 and read the data?
Thanks a lot!
|
Accessing MySQL database in d3 visualization
| 0.066568 | 1 | 0 | 8,185 |
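A minimal sketch of step 1 from the question, using the mysql-connector-python package. The connection details and table layout are placeholder assumptions:

```python
import json
import mysql.connector  # pip install mysql-connector-python

conn = mysql.connector.connect(
    host="localhost", user="user", password="secret", database="mydb"
)
cur = conn.cursor()
cur.execute("SELECT keyword, frequency FROM keywords")  # hypothetical table

rows = [{"keyword": k, "frequency": f} for k, f in cur.fetchall()]
with open("keywords.json", "w") as f:
    json.dump(rows, f)  # d3 can now load this file with d3.json(...)

cur.close()
conn.close()
```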
14,681,015 |
2013-02-04T05:44:00.000
| 1 | 0 | 1 | 1 |
python,django,subprocess
| 14,691,638 | 2 | false | 1 | 0 |
You can use the same technique in Python as you did in Java, that is, store the reference to the process in a module variable or implement a kind of singleton.
The only problem you have, as opposed to Java, is that Python has no rich analogue of the Servlet specification, and there is no interface to handle application start or finish. In most cases you should not be worried about how many instances of your application are running, because you fetch all data from persistent storage. But in this case you should understand how your application is deployed.
If there is a single long-running instance of your application (a FastCGI instance, for example, or a single WSGI application on CherryPy), you can isolate the process-handling functionality in a separate module and load it when the module is imported (any module is imported only once within an application). If there are many instances (many FastCGI instances, or plain CGI scripts), you had better detach the child processes, keep their ids in persistent storage (in a database, or files), and intersect them with the list of currently running processes on demand.
| 1 | 1 | 0 |
So I am using subprocess to spawn a long-running process through the web interface using Django. Now, if a user comes back to the page, I would like to give him the option of terminating the subprocess at a later stage.
How can I do this? I implemented the same thing in Java by making a global singleton ProcessManager dictionary to store the Process object in memory. Can I do something similar in Python?
EDIT
Yes, a singleton holding a hash of processes is the way to do it cleanly. Emmanuel's code works perfectly fine with a few modifications.
Thanks
|
Storing subprocess object in memory using global singleton instance
| 0.099668 | 0 | 0 | 1,465 |
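A minimal sketch of the module-level registry idea; the module name and key scheme are my own:

```python
# process_manager.py -- imported once per server process, so this dict
# acts as a singleton registry of running subprocesses.
import subprocess

_processes = {}

def start(key, args):
    _processes[key] = subprocess.Popen(args)

def terminate(key):
    proc = _processes.pop(key, None)
    if proc is not None and proc.poll() is None:  # still running?
        proc.terminate()
```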
14,681,697 |
2013-02-04T06:52:00.000
| 0 | 0 | 1 | 0 |
python-2.7,wxpython
| 14,692,494 | 1 | true | 0 | 1 |
You probably won't be able to get an accurate progress unless your build scripts are generating some progress information in the output that you can parse and represent in your GUI. Either way you will probably want to use wx.ProgressDialog or use a wx.Gauge in your main panel or something like that. Both wx.ProgressDialog and wx.Gauge can be used in a mode that shows actual values (a percentage complete) or an 'indeterminate' mode that represents that something is happening but there isn't any way to tell how far in the process it is.
| 1 | 0 | 0 |
I have developed a Python script that checks out source code from a repository and builds it using Visual Studio. When I run the script, a GUI opens (developed using wxPython) which shows a button; clicking it does the checkout and build. I would like to show a progress bar while the process runs after I click the button, and a success message after the script finishes its work. Please help me out.
Thanks in advance.
|
Show progress bar when running a script
| 1.2 | 0 | 0 | 400 |
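A minimal sketch of the 'indeterminate' mode the answer mentions, pulsing a wx.Gauge on a timer while the build runs (my own illustration):

```python
import wx

app = wx.App()
frame = wx.Frame(None, title="Build")
gauge = wx.Gauge(frame, range=100)

timer = wx.Timer(frame)
frame.Bind(wx.EVT_TIMER, lambda evt: gauge.Pulse(), timer)  # indeterminate pulse
timer.Start(100)  # pulse every 100 ms; stop the timer when the build finishes

frame.Show()
app.MainLoop()
```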
14,682,935 |
2013-02-04T08:29:00.000
| 0 | 0 | 0 | 0 |
python,pyqt4,qtableview
| 16,030,744 | 1 | true | 0 | 1 |
Check whether you are using the same instance of QtSql.QSqlDatabase for all the views you create.
| 1 | 1 | 0 |
I have an application developed in Python using PyQt4, in which I use QTableView to display reports.
When I open one report window with a QTableView, it shows the result properly.
If I open another report window with a QTableView, it also shows the result properly.
Now both windows are displaying data in their QTableViews.
The issue is: when I select the first report window, the data in it disappears.
Can anyone tell me what exactly I am doing wrong?
|
QTableView data disappears when opening another QTableView
| 1.2 | 0 | 0 | 91 |
14,684,968 |
2013-02-04T10:39:00.000
| -2 | 0 | 1 | 0 |
python,django,virtualenv
| 36,716,554 | 3 | false | 1 | 0 |
If it is going to live at the same path, you can tar it and extract it on another machine. If all the same dependencies, libraries, etc. are available on the target machine, it will work.
| 1 | 72 | 0 |
I'm new to virtualenv, but I'm writing a Django app and eventually I will have to deploy it somehow.
So let's assume I have my app working on my local virtualenv, where I installed all the required libraries. What I want to do now is run some kind of script that will take my virtualenv, check what's installed inside, and produce a script that will install all these libraries on a fresh virtualenv on another machine. How can this be done? Please help.
|
How to export virtualenv?
| -0.132549 | 0 | 0 | 67,525 |
14,686,543 |
2013-02-04T12:12:00.000
| 1 | 0 | 0 | 0 |
python,tkinter,filedialog
| 14,822,605 | 2 | false | 0 | 1 |
I had to get rid of the canvasx/canvasy statements. The line now simply reads set item [$data(canvas) find closest $x $y], which works well. $data(canvas) canvasx $x on its own works well, but not in connection with find closest, nor if it is written on two lines.
| 1 | 4 | 0 |
I am using Tkinter with Python 2.6 and 2.7 for programming graphical user interfaces.
These User Interfaces contain dialogs for opening files and saving data from the tkFileDialog module. I would like to adapt the dialogs and add some further entry widgets e.g. for letting the user leave comments.
Is there any way for doing so?
It seems that the file dialogs are taken directly from the operating system. In Tkinter they are derived from the Dialog class in the tkCommonDialog module and call the tk.call("tk_getSaveFile") method of a frame widget (in this case for saving data).
I could not find out where this method is defined.
|
Python Tkinter: Adding widgets to file dialogs
| 0.099668 | 0 | 0 | 1,084 |
14,686,861 |
2013-02-04T12:31:00.000
| 0 | 1 | 0 | 1 |
python,c,linux,rpc
| 14,688,041 | 3 | false | 0 | 0 |
An ONC RPC client can be created by using the .idl file and rpcgen. The original RPC protocol precedes SOAP by several years.
Yes, you can create the RPC client in linux (see rpcgen)
Yes, you can create the RPC client in python (please see pep-0384)
| 1 | 2 | 0 |
I am looking for solutions to create a RPC client in Linux that can connect to Sun ONC RPC server.
The server is written in C.
I would like to know if I can:
Create an RPC client in Linux
Create the RPC client in Python
|
Connect to Sun ONC RPC server from Linux
| 0 | 0 | 0 | 1,821 |
14,687,281 |
2013-02-04T12:58:00.000
| 0 | 1 | 0 | 0 |
python,django,django-testing
| 14,689,563 | 1 | false | 0 | 0 |
Try to test every way your custom field could be used. For example, try sending it different kinds of data (strings, integers, blank values, different image formats, etc.) and check that it works according to your expectations.
| 1 | 0 | 0 |
I have developed a custom field that extends ImageField, and this custom field dynamically creates two more normal fields. Now I need to write tests for this custom field.
What tests are needed for this custom field? Can you name them so that I can code those test cases? I am not asking technically how to write a test; I don't know how yet, but I will learn. What I want to know is: what are the things I need to test here?
|
What tests do I need to write for the customfield that I have developed?
| 0 | 0 | 0 | 32 |
14,688,306 |
2013-02-04T13:59:00.000
| 14 | 0 | 0 | 0 |
python,pandas
| 25,715,719 | 13 | false | 0 | 0 |
Just ran into this issue myself. As of pandas 0.13, DataFrames have a _metadata attribute on them that does persist through functions that return new DataFrames. Also seems to survive serialization just fine (I've only tried json, but I imagine hdf is covered as well).
| 1 | 127 | 1 |
Is it possible to add some meta-information/metadata to a pandas DataFrame?
For example, the instrument's name used to measure the data, the instrument responsible, etc.
One workaround would be to create a column with that information, but it seems wasteful to store a single piece of information in every row!
|
Adding meta-information/metadata to pandas DataFrame
| 1 | 0 | 0 | 59,446 |
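A minimal sketch of the _metadata approach from that answer, using the pandas subclassing pattern; the class name and attribute are my own invention:

```python
import pandas as pd

class InstrumentFrame(pd.DataFrame):
    # attribute names listed here are propagated to DataFrames
    # returned by operations on this one
    _metadata = ["instrument"]

    @property
    def _constructor(self):
        return InstrumentFrame

df = InstrumentFrame({"reading": [1.0, 2.5, 3.1]})
df.instrument = "thermocouple A"
print(df.head().instrument)  # metadata survives the operation
```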
14,689,531 |
2013-02-04T15:05:00.000
| 2 | 0 | 1 | 0 |
python,regex,python-re,rawstring
| 53,171,192 | 4 | false | 0 | 0 |
You can also use [\r\n] to match a newline.
| 1 | 50 | 0 |
I got a little confused about Python raw strings. I know that if we use a raw string, it treats '\' as a normal backslash (e.g. r'\n' would be \ and n). However, I was wondering how to match a newline character in a raw string. I tried r'\\n', but it didn't work.
Does anybody have a good idea about this?
|
How to match a newline character in a raw string?
| 0.099668 | 0 | 0 | 124,726 |
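A quick demonstration of both options. Note that r'\n' also works, because the regex engine itself understands the \n escape:

```python
import re

text = "line one\nline two"
print(re.search(r"\n", text) is not None)      # True: the regex engine decodes \n
print(re.search(r"[\r\n]", text) is not None)  # True: character-class variant
print(re.search(r"\\n", text) is not None)     # False: matches backslash + 'n'
```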
14,690,411 |
2013-02-04T15:55:00.000
| 1 | 0 | 0 | 0 |
python,graph
| 14,691,588 | 1 | true | 1 | 0 |
I would recommend the awesome d3.js package; you can do just about anything with it, and it produces beautiful interactive charts.
| 1 | 1 | 0 |
I'm using Python to work on a web app that has a data visualization element. Basically, it will gather data about a user's music catalog and allow them to visualize it and take actions based on what the data tells them. I'll pretty much exclusively need bar graphs to achieve the visualization I want. Given the dynamic nature of the app, the package needs to support creating charts on the fly -- essentially responding to a user's commands and their data to quickly render a new chart.
The problem is, some of the more lightweight packages like PyCha create charts that aren't visually appealing or suitable for a consumer based web app. I've looked into Fusion Charts, but that seems a bit heavy weight for my purposes, and it uses Flash, which I'd like to avoid.
Is there a nice middle ground somewhere that allows me to create reasonably pretty charts based on user input but doesn't bog down my server with Flash baggage and an enterprise level amount of features?
|
Dynamically Generating Pretty Web Based Charts/Graphs
| 1.2 | 0 | 0 | 262 |
14,692,367 |
2013-02-04T17:45:00.000
| 1 | 0 | 0 | 0 |
python,pygame
| 14,694,352 | 1 | true | 0 | 1 |
There is no single command to do this. You will first have to change the size using pygame.transform.scale, then make a rect of the same size, set its position, and finally blit. It would probably be wisest to wrap this in a function.
| 1 | 0 | 0 |
I'm currently working on a small game using pygame.
Right now I render images the standard way, by loading them and then blitting them to my main surface. This is great, if I want to work with an individual image size. Yet, I'd like to take in any NxN image and use it at an MxM resolution. Is there a technique for this that doesn't use surfarray and numeric? Something that already exists in pygame? If not, do you think it would be expensive to compute this?
I'd like to stretch the image. So, upscale or downscale the image. Sorry I wasn't clearer.
|
Rendering image independent of size?
| 1.2 | 0 | 0 | 128 |
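A minimal sketch of the scale-then-blit sequence from the answer; the file name, sizes, and position are placeholders:

```python
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))

image = pygame.image.load("sprite.png").convert_alpha()
scaled = pygame.transform.scale(image, (128, 128))  # any target MxM size
rect = scaled.get_rect(topleft=(100, 100))          # rect of the new size

screen.blit(scaled, rect)
pygame.display.flip()
```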
14,692,822 |
2013-02-04T18:11:00.000
| 1 | 0 | 1 | 0 |
python,py2exe
| 14,693,751 | 1 | false | 0 | 1 |
I think it is possible. I'm not sure how py2exe works, but I know how PyInstaller does, and since both do the same thing it should work similarly.
Namely, the one-file flag doesn't really create one file. It looks that way to the end user, but when the user runs the app, it unpacks itself and the files are stored somewhere physically. You could try to edit a source file (e.g. numbers.py or data.py) and pack it again with the changed data.
I know it's not the best explanation; you will have to think further on your own. I'm just showing you a possible way.
| 1 | 3 | 0 |
I have seen a similar question on this site but not answered correctly for my requirements. I am reasonably familiar with py2exe.
I'd like to create a program (in python and py2exe) that I can distribute to my customers which would enable them to add their own data (not code, just numbers) and redistribute as a new/amended exe for further distribution (as a single file, so my code + data). I understand this can be done with more than one file.
Is this conceptually possible without my customers installing python? I guess I'm asking how to perform the 'bundlefiles' option?
Many thanks
|
compile py2exe from executable
| 0.197375 | 0 | 0 | 167 |
14,694,559 |
2013-02-04T20:00:00.000
| 0 | 0 | 1 | 0 |
python,python-2.7
| 14,694,670 | 4 | false | 0 | 0 |
Probably for the same reason that classes don't have private attributes. This is the spirit of Python.
| 2 | 4 | 0 |
I just got bit by a bug that would have been prevented if list were a reserved word in Python. (Dumb of me, to be sure.)
So why isn't list (or dict or float or any of the types) a reserved word? It seems easier to add an interpreter error than to try and remember a rule.
(I also know Eclipse/PyDev has a setting that will remind you of this rule, which can be useful.)
|
Why isn't 'list' a reserved word in Python?
| 0 | 0 | 0 | 5,125 |
14,694,559 |
2013-02-04T20:00:00.000
| 0 | 0 | 1 | 0 |
python,python-2.7
| 27,695,558 | 4 | false | 0 | 0 |
This is a bit of an opinion question, but here's my 2c.
It's a big deal to make a keyword reserved, as essentially it means you can never use that keyword in code, so it's often considered good programming language design to keep the list short. (perl doesn't, but then perl has a completely different philosophy to most other programming languages and uses special signs before variables to try to prevent clashes).
Anyway, to see why this is the case, consider forwards compatibility. Imagine the Python developers decide that array is such a fundamental concept that they want to make it a builtin (not inconceivable; this happened with set in Python 2.4). If builtins were automatically reserved, then anyone who had previously used array for something else (even if explicitly imported as a from superfastlist import array), or implemented their own (numpy has done this), would suddenly find that their code wouldn't work, and they'd be very irate.
(For that matter, consider if help was made a reserved word - a zillion libraries, including argparse, use help as a keyword argument)
| 2 | 4 | 0 |
I just got bit by a bug that would have been prevented if list were a reserved word in Python. (Dumb of me, to be sure.)
So why isn't list (or dict or float or any of the types) a reserved word? It seems easier to add an interpreter error than to try and remember a rule.
(I also know Eclipse/PyDev has a setting that will remind you of this rule, which can be useful.)
|
Why isn't 'list' a reserved word in Python?
| 0 | 0 | 0 | 5,125 |
14,696,626 |
2013-02-04T22:10:00.000
| 2 | 0 | 1 | 0 |
javascript,python
| 14,696,671 | 2 | false | 0 | 0 |
The closest equivalent operator in JavaScript is "===": for objects it compares references, so it behaves like Python's is.
Similarly, "!==" corresponds to "is not" in Python.
| 1 | 3 | 0 |
How do I check that two variables point to the same object, meaning that if I mutate it, the value seen through both variables changes? In Python there is the is operator; what about JavaScript?
|
Object identity in JavaScript
| 0.197375 | 0 | 0 | 307 |
14,699,031 |
2013-02-05T02:09:00.000
| 1 | 0 | 1 | 0 |
python,algorithm,dynamic-programming
| 14,700,093 | 2 | false | 0 | 0 |
I believe the point of the rod cutting problem is that a greedy algorithm will not always produce the optimal solution - this variant seems to prove the same point.
Consider the L=50 rod to be cut at [13,25,26]. An algorithm selecting the cut closest to the mid-point would tell you to do [25, 13, 26] for a total cost of 50 + 25 + 25 = 100. We can improve on that by doing [26, 13, 25] for a total cost of 50 + 26 + 13 = 89.
Edit:
ie. You would cut an L=50 rod at P=26 resulting in an L=24 (P=26->50) rod that needs no more cuts and an L=26 (P=0->26) rod that needs to be cut at [25,13]. Then you cut the L=26 rod at P=13 resulting in one L=13 (P=0->13) rod needing no more cuts and a second L=13 (P=13->26) rod needing a final cut at P=25. Then you do the final cut resulting in a cost that is the sum of the lengths of the rods which were cut at each stage (50 + 26 + 13).
The alternatives usually proposed are top-down and bottom-up techniques and the efficiency of these usually depend on the logic involved (for the traditional rod cutting problem in which you are trying to maximise sale cost, bottom-up is preferred as it reduces recursive calls).
| 1 | 2 | 0 |
I was asked this as a brain teaser in one of my classes but was unable to figure it out (Wasn't a homework question, just a teaser one of the TA's gave us to think about).
You are given a rod with a list of n points to cut at, for example [1,5,11], and the total length of the rod, for example 20. You are also told that the expense of cutting a rod is equivalent to that of the length of the rod. We want to find the minimum cost of cutting the rod at all the given cuts and the sequence of those cuts that would lead to the optimal cost.
For example, to cut a rod of length 20 at position 5, it would cost you $20 and you would end up with 2 logs, one with length 5 and one with length 15.
Or in another example, if you cut a rod of length 25 at positions 5 and then at position 10, it would cost you $25 to cut it at position 5, leaving you with a length 5 rod and a length 20 rod, and then cost you another $20 to cut it at position 10, giving you the total cost of cutting at the two positions at $45. However if you cut the rod at position 10 and then position 5, it would cost you $25 + $10 = $35.
In the end, we want to return the minimum cost of cutting the rod at all the given cuts and the sequence of those cuts that would lead to the optimal cost.
I attempted to come up with a recursive solution for this problem, but kept coming up empty-handed. Thoughts? Any help is appreciated. Thanks!
|
Python Rod Cutting Algorithm - Variant
| 0.099668 | 0 | 0 | 2,075 |
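The answer shows why a greedy choice fails; here is a minimal sketch of the standard interval-DP recursion for this variant (my own illustration, not from the thread):

```python
from functools import lru_cache

def min_cut_cost(length, cuts):
    points = [0] + sorted(cuts) + [length]  # all segment boundaries

    @lru_cache(maxsize=None)
    def cost(i, j):
        # Minimum cost to make every cut strictly between points[i] and points[j].
        if j - i < 2:  # no interior cut point left in this piece
            return 0
        # Cutting this piece costs its current length, whichever cut we pick first.
        return (points[j] - points[i]) + min(
            cost(i, k) + cost(k, j) for k in range(i + 1, j)
        )

    return cost(0, len(points) - 1)

print(min_cut_cost(20, [1, 5, 11]))  # 36: cut at 11 first, then 5, then 1
```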
14,700,305 |
2013-02-05T04:37:00.000
| 1 | 0 | 0 | 1 |
php,python,apache,localhost,tornado
| 14,700,450 | 1 | true | 1 | 0 |
The easiest approach is to run Tornado and Apache on different ports/addresses.
So you probably have Apache listening to port 80 already. Tornado could listen to port 81
If the server is multihomed, you could have Apache listen to a.b.c.d:80 and Tornado listen to a.b.c.e:80. This means that you'll at least have to have the Apache part and the Tornado part on different subdomains
If you need to run them all under the same domain and port, you'll need something lean and fast in front of them to work out which url gets routed to which server.
| 1 | 0 | 0 |
I have a website built in PHP and currently running on an Apache server (XAMPP locally). I would like to integrate a real-time chat system into the website. PHP and Apache not being geared for this in the slightest, I decided to work with Tornado and Python.
What is the easiest way to keep the base of the site in PHP and run it on Apache while delegating all the "chatting" to the Tornado server? I would like to be able to do this locally (...and needless to say, I have successfully installed Tornado and have been working on said script. However, I'm not sure exactly how to integrate it into the already existing site.)
Any advice greatly appreciated,
thanks!
|
How can I integrate Tornado into my (currently) Apache driven site?
| 1.2 | 0 | 0 | 956 |
14,700,943 |
2013-02-05T05:45:00.000
| 0 | 0 | 0 | 0 |
python,django
| 14,702,956 | 1 | false | 1 | 0 |
What is the reason for nesting the "data" app inside "apps"? It is uncommon to nest one app inside another when there are only two apps (unless you have some great reason to). Suggestions:
Create an "apps" app and create a data model for it; or
Create an "apps" app and a "data" app (and link them through their models).
The answer to your problem is probably the file structure, but the basic requirements for what you're asking are determined by your app's requirements.
Long story short: what application are you trying to create?
| 1 | 0 | 0 |
I want to make an application in Django with two apps named "apps" and "data"; the "data" app is placed within "apps". I entered 'apps.data' in INSTALLED_APPS in settings.py. When I run the dev server I get the error "no module named apps.data". Can anyone please help me?
|
Modules not found in django
| 0 | 0 | 0 | 207 |
14,701,901 |
2013-02-05T07:03:00.000
| 4 | 0 | 1 | 1 |
python
| 40,097,923 | 2 | false | 0 | 0 |
You can just rebind the logger in the child process to its own. I don't know about other OSes, but on Linux forking doesn't duplicate the entire memory footprint (as Ellioh mentioned); it uses the "copy-on-write" concept. So until you change something in the child process, it stays in the memory scope of the parent process. For instance, you can fork 100 child processes (that don't write into memory, only read) and check the overall memory usage. It will not be parent_memory_usage * 100, but much less.
| 1 | 4 | 0 |
I am using the multiprocessing module to fork child processes. Since on forking the child process gets the address space of the parent process, I am getting the same logger for parent and child. I want to clear the child process's address space of any values carried over from the parent. I learned that multiprocessing does a fork() at a lower level, but not an exec(). I want to know whether it is good to use multiprocessing in my situation, whether I should go for an os.fork() and os.exec() combination, or whether there is another solution.
Thanks.
|
Multiprocessing or os.fork, os.exec?
| 0.379949 | 0 | 0 | 6,063 |
14,711,161 |
2013-02-05T15:36:00.000
| 1 | 0 | 0 | 0 |
python,selenium
| 14,711,223 | 1 | true | 0 | 0 |
Short answer: unless the XML is posted to the page, you can't. Long answer: you can use Selenium to do JS injection on the page so that the XML document is replicated to some hidden page element you can inspect, or stored to a local file that you can open. This is, of course, assuming that the XML document is actually retrieved client-side; if this is all server-side, you'll need to integrate with the backend or emulate the call yourself. One last option to explore would be to proxy the browser Selenium is driving, then inspect the traffic for the response containing the XML. Though more complicated, that could be argued to be the best solution, since you aren't modifying the system under test in order to test it.
| 1 | 0 | 0 |
So I have a webpage that has some javascript that gets executed when a link is clicked. This javascript opens a new window and calls some other javascript which requests an xml document which it then parses for a url to pass to a video player. How can I get that xml response using selenium?
|
Handling responses with Python bindings for Selenium
| 1.2 | 0 | 1 | 149 |
14,715,739 |
2013-02-05T19:50:00.000
| 1 | 0 | 0 | 0 |
python,wxpython,pyopengl
| 14,971,097 | 2 | false | 0 | 1 |
Are you running Linux? Perhaps you could get that information from the table of display modes that glxinfo -t outputs.
| 1 | 1 | 0 |
I am creating a wx.Frame with a GLCanvas. On some platforms, setting the WX_GL_DEPTH_SIZE attribute of the canvas to 32 works fine. On another platform, I just get a blank frame (the GLCanvas doesn't render) unless I reduce the depth size to 16. Is there an easy way in the calling code to determine the allowable values for the depth size?
|
How can I determine the max allowable WX_GL_DEPTH_SIZE for a wx GLCanvas?
| 0.099668 | 0 | 0 | 239 |
14,716,662 |
2013-02-05T20:51:00.000
| 9 | 1 | 0 | 1 |
eclipse,pydev,python
| 15,360,958 | 2 | false | 0 | 0 |
Unfortunately, no. You can remotely connect to your Linux server via Remote System Explorer (RSE), but you can't use it as a remote interpreter. I use PyCharm: you can use the free Community Edition, or the Professional Edition, which you have to pay for. It is not that expensive, and it has been working great for me.
| 1 | 11 | 0 |
Is there a possibility to make Eclipse PyDev use a remote Python interpreter?
I would like to do this because the Linux server I want to connect to runs several optimization solvers (CPLEX, GUROBI, etc.) that my script uses.
Currently I use Eclipse locally to write the scripts, then copy all the files to the remote machine, log in using ssh, and execute the scripts there with "python script.py".
Instead, I'm hoping to click the "run" button and have everything executed within my Eclipse IDE.
Thanks
|
Eclipse PyDev use remote interpreter
| 1 | 0 | 0 | 7,677 |
14,718,543 |
2013-02-05T22:55:00.000
| 2 | 0 | 1 | 0 |
python,nlp,nltk,n-gram
| 14,718,635 | 1 | false | 0 | 0 |
You might want to look into word sense disambiguation (WSD), it is the problem of determining which "sense" (meaning) of a word is activated by the use of the word in a particular context, a process which appears to be largely unconscious in people.
| 1 | 3 | 1 |
I'm currently working on an NLP project that tries to differentiate between synonyms (retrieved from Python's NLTK with WordNet) in context. I've looked into a good deal of NLP concepts trying to find exactly what I want, and the closest thing I've found is n-grams, but it's not quite a perfect fit.
Suppose I am trying to find the proper definition of the verb "box". "Box" could mean "fight" or "package"; however, somewhere else in the text, the word "ring" or "fighter" appears. As I understand it, an n-gram would be "box fighter" or "box ring", which is rather ridiculous as a phrase and not likely to appear. But on a concept map, the "box" action might be linked with a "ring", since they are conceptually related.
Is n-gram what I want? Is there another name for this? Any help on where to look for retrieving such relational data?
All help is appreciated.
|
Natural Language Processing - Similar to ngram
| 0.379949 | 0 | 0 | 1,365 |
14,720,070 |
2013-02-06T01:26:00.000
| 7 | 0 | 1 | 0 |
python,ide
| 14,753,015 | 4 | false | 0 | 0 |
In PyCharm, select a code fragment you want to execute then choose "Execute selection in console" (Alt+Shift+E in my keymap).
| 1 | 8 | 0 |
I think this is a very useful feature for beginners. When I learned R, I would keep my notes for each lesson in one file and execute the lines that I wanted. Now I'm learning Python, and I have to save each new thing in a different file. Is there no IDE that can do what R does? I'm currently using PyCharm.
|
Does any Python IDE let you run a selection of code like R does?
| 1 | 0 | 0 | 7,809 |
14,720,476 |
2013-02-06T02:11:00.000
| 1 | 0 | 0 | 1 |
python,google-app-engine
| 14,723,922 | 3 | false | 1 | 0 |
I assume you are using Linux (Ubuntu/Mint); if not, that would be a good start.
Debug as much as you can locally using dev_appserver.py; this will display errors on startup (in the console).
Add your own debug logs when needed.
Run code snippets in the interactive console; this is really useful for testing snippets of code:
if you are on GAE >= 1.7.6: http://localhost:8000/console
if you are on GAE < 1.7.6: http://localhost:8080/_ah/admin/interactive/interactive
| 1 | 1 | 0 |
I'm starting to use Google App Engine, and being a newcomer to much of the stuff going on here, I broke my webpage (all I see is "server error" in my web browser). I'd like to see a console of some sort telling me what's going wrong (Python syntax? file not found? something else?). Searching around a bit didn't lead me to a quick solution, so I came here. Any advice? Ideally, there would be some sort of tutorial/guide showing how to do this.
|
How to monitor google app engine from command line?
| 0.066568 | 0 | 0 | 104 |
14,723,707 |
2013-02-06T07:25:00.000
| 1 | 0 | 0 | 0 |
python,bioinformatics,biopython,chemistry
| 14,726,014 | 3 | false | 0 | 0 |
A pdb file can contain pretty much anything.
A lot of projects allow you to parse them. Some are specific to biology and pdb files; others are less specific but will let you do more (set up calculations, measure distances, angles, etc.).
I think you got downvoted because these projects are numerous: you are not the only one wanting to do this, so the chances that something perfectly fitting your needs exists are really high.
That said, if you just want to parse pdb files for this specific need, do it yourself:
Open the files with a text editor.
Identify where the relevant data are (keywords, etc.).
Make a Python function that opens the file and looks for the keywords.
Extract the figures from the file.
Done.
This can be done with a short script written in less than 10 minutes (another reason for the downvoting).
| 1 | 2 | 0 |
I am working on a bio project.
I have a .pdb (Protein Data Bank) file which contains information about the molecule.
I want to find out the following properties of the molecule in the .pdb file:
Molecular Mass.
H bond donor.
H bond acceptor.
LogP.
Refractivity.
Is there any module in Python which can deal with .pdb files to find these?
If not, can anyone please let me know how I can do it?
I found some modules like SeqUtils and ProtParam, but they don't do such things.
I have researched first and then posted, so please don't down-vote.
If you still down-vote, please comment as to why you did so.
Thanks in advance.
|
python with .pdb files
| 0.066568 | 0 | 0 | 1,814 |
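A minimal sketch of the parse-it-yourself approach from the answer, approximating the molecular mass by summing over the element column of ATOM/HETATM records. The mass table is deliberately tiny, and the column positions follow the PDB format specification:

```python
# approximate atomic masses; extend as needed for your molecules
MASSES = {"H": 1.008, "C": 12.011, "N": 14.007, "O": 15.999, "S": 32.06}

def molecular_mass(path):
    total = 0.0
    with open(path) as f:
        for line in f:
            if line.startswith(("ATOM", "HETATM")):
                element = line[76:78].strip()  # element symbol, columns 77-78
                total += MASSES.get(element, 0.0)
    return total

print(molecular_mass("molecule.pdb"))  # hypothetical input file
```

Properties like LogP or the H-bond donor/acceptor counts need real chemistry, so a cheminformatics toolkit is a better fit there than hand parsing.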
14,723,964 |
2013-02-06T07:45:00.000
| 2 | 0 | 1 | 0 |
python
| 14,724,021 | 2 | false | 0 | 0 |
Use r'C:\abc\test' or 'C:\\abc\\test' when you enter your strings
| 1 | 2 | 0 |
I built a web page where users can enter a directory path in a form.
My program extracts the path and writes it into a file.
My question is: when the path includes special sequences such as \t or \n, the program can't write the file correctly.
For example:
C:\abc\test will become C:\abc[TAB]est
How can I change the string into another type, like a raw string, and write the file correctly?
Thank you for your reply.
|
How to write \t to file using Python
| 0.197375 | 0 | 0 | 2,685 |
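A quick demonstration of the answer's two suggestions. The escape interpretation only happens in string literals you type into source code:

```python
path = "C:\abc\test"       # here '\a' and '\t' become escape characters
raw = r"C:\abc\test"       # raw string: backslashes stay literal
escaped = "C:\\abc\\test"  # escaped backslashes: same result as the raw string

with open("out.txt", "w") as f:
    f.write(raw)           # writes C:\abc\test, not C:\abc<TAB>est
```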
14,726,495 |
2013-02-06T10:16:00.000
| 0 | 0 | 0 | 0 |
python,hdf5,pytables
| 14,750,133 | 1 | false | 0 | 0 |
How about you change the _v_title attribute on the Node and then save the hdf5 file again?
table._v_title = 'new title'
| 1 | 0 | 0 |
I was wondering if there is a way to change the title of an HDF5 table that I created in my Python code using PyTables. I gave the wrong title string, and I need to change it now so that when I open it again in Python, I can distinguish it from the other tables I load, according to its title.
|
How to change the an HDF5 table title (created using pytables)
| 0 | 0 | 0 | 185 |
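A minimal sketch of that suggestion; the file and node names are placeholders, and open_file is the modern PyTables spelling (older releases call it openFile):

```python
import tables

h5 = tables.open_file("data.h5", mode="a")  # append mode allows modification
h5.root.mytable._v_title = "new title"      # _v_title is a settable property
h5.close()                                  # closing flushes the change to disk
```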
14,727,517 |
2013-02-06T11:06:00.000
| 2 | 0 | 1 | 0 |
python,self
| 14,727,642 | 3 | true | 0 | 0 |
No, there isn't. You could, though, use another word instead of self, although the convention is to use "self".
| 2 | 5 | 0 |
Is there any way of making Python methods have access to the class fields/methods without using the self parameter?
It's really annoying having to write self. self. self. self. The code gets so ugly that I'm thinking of not using classes anymore, purely for code aesthetics. I don't care about the risks or best practices. I just don't want to see self anymore.
P.S. I know about the renaming possibility, but that's not the point.
|
Any ideas about how to get rid of self?
| 1.2 | 0 | 0 | 1,285 |
14,727,517 |
2013-02-06T11:06:00.000
| 3 | 0 | 1 | 0 |
python,self
| 14,727,662 | 3 | false | 0 | 0 |
The only possible solution (short of making your own no-self Python version from the sources):
try another language.
| 2 | 5 | 0 |
Is there any way of making Python methods have access to the class fields/methods without using the self parameter?
It's really annoying having to write self. self. self. self. The code gets so ugly that I'm thinking of not using classes anymore, purely for code aesthetics. I don't care about the risks or best practices. I just don't want to see self anymore.
P.S. I know about the renaming possibility, but that's not the point.
|
Any ideas about how to get rid of self?
| 0.197375 | 0 | 0 | 1,285 |
14,727,628 |
2013-02-06T11:12:00.000
| 0 | 1 | 0 | 0 |
python,performance,templating
| 16,305,372 | 1 | true | 1 | 0 |
After lots of trying and reading, I found string.Template from the core library to be the fastest. I just wrapped it in my own simple class to encapsulate the file access/reads, et voilà.
| 1 | 0 | 0 |
After using Cheetah and Mako at their functional minimum (only for substitution) for some time, I started asking myself whether just using string.Template wouldn't be the better and simpler approach for my use case (fewer deps).
In addition, I wondered whether it would be reasonable to import these templates as .py files to avoid an open() on each call. This would make handling templates a little more complicated, but other than that I'd save a lot of system calls.
What do you guys think?
I'm well aware that my present templating is speedy enough for 99.9% of the use cases I will go through.
Thank you for any Input
|
Fastest way to do _simple_ templating with Python
| 1.2 | 0 | 0 | 42 |
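A minimal sketch of the string.Template approach the author settled on. The wrapper class is my own assumption about what "wrapped it in my own simple class" might look like:

```python
from string import Template

class FileTemplate(object):
    """Load a template file once, then substitute many times."""
    def __init__(self, path):
        with open(path) as f:
            self._template = Template(f.read())  # one open() per template

    def render(self, **values):
        return self._template.substitute(**values)

# usage, given a file containing e.g. "Hello $name, you have $count messages":
# page = FileTemplate("greeting.txt")
# print(page.render(name="Ada", count=3))
```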
14,731,988 |
2013-02-06T14:55:00.000
| 1 | 0 | 0 | 0 |
python,django,http,templates
| 14,732,094 | 4 | false | 1 | 0 |
Have you tried another browser? Is your custom error page larger than 512 bytes? It seems some browsers (including Chrome) replace error pages with their own when the server's answer is shorter than 512 bytes.
| 1 | 9 | 0 |
I am creating a custom HTTP 500 error template. Why does Django show it when I raise an exception, but not when I return HttpResponseServerError (where I just get the browser's default 500 error)? I find this behaviour strange...
|
Django HTTP 500 Error
| 0.049958 | 0 | 0 | 19,790 |
14,732,950 |
2013-02-06T15:43:00.000
| 0 | 0 | 1 | 0 |
python,camera,blender
| 14,811,330 | 1 | false | 0 | 0 |
The easiest way to make the camera look at an object is the Edit Object Actuator. There you can replace Add Object to Track To and then you just need to specify an object.
Perhaps you can change it in the game with a python script, using EditObjectActuator.track_object
| 1 | 1 | 0 |
I want to use blender to programmatically move the camera around the scene while remaining focused on a particular location. What's the easiest way to make the camera look at an object without having to specify rx,ry,and rz. I'm looking for the Python function to call and not do it through the blender GUI. I am using blender 2.65.
|
Blender python scripting
| 0 | 0 | 0 | 667 |
14,733,471 |
2013-02-06T16:06:00.000
| 2 | 0 | 1 | 0 |
python,numpy,string-formatting,multidimensional-array
| 14,734,299 | 3 | false | 0 | 0 |
Try numpy.set_printoptions() -- there you can e.g. specify the number of digits that are printed and suppress the scientific notation. For example, numpy.set_printoptions(precision=8,suppress=True) will print 8 digits and no "...e+xx".
| 1 | 2 | 1 |
I'm a beginner in Python and easily get stuck and confused...
When I read a file which contains a table of numbers with many digits, it is read as a numpy.ndarray, and Python changes the display of the numbers.
For example:
In the input file I have this number: 56143.0254154
and in the output file the number is written as: 5.61430254e+04
but I want to keep the first format in the output file.
I tried to use the string.format and locale.format functions, but it doesn't work.
Can anybody help me do this?
Thanks!
Ruxy
|
python changes format numbers ndarray many digits
| 0.132549 | 0 | 0 | 1,536 |
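A quick demonstration of the suggestion, plus the savetxt format option that controls what gets written to a file (my addition, since the question is about an output file):

```python
import numpy as np

data = np.array([56143.0254154])
print(data)                              # [ 5.61430254e+04] with defaults

np.set_printoptions(precision=8, suppress=True)
print(data)                              # [ 56143.0254154]

np.savetxt("out.txt", data, fmt="%.7f")  # fixed-point notation in the file
```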
14,735,464 |
2013-02-06T17:44:00.000
| 1 | 0 | 0 | 0 |
python,django,nginx,gunicorn
| 14,740,868 | 1 | true | 1 | 0 |
Use gunicorn_django [OPTIONS] myproject if you use myproject.settings
| 1 | 6 | 0 |
I've been reading about deploying Django with gunicorn and I wanted to give it a try.
I have found at least 3 ways of running a server with gunicorn and django:
gunicorn [OPTIONS] [APP_MODULE] # tested locally and worked fine
python manage.py run_gunicorn # also works fine locally
gunicorn_django [OPTIONS] [SETTINGS_PATH] # I have an error due to apps/ location
I have Apache with nginx (serving static files) in production at the moment; it works fine but is a little slow, and I want to try Gunicorn. The first 2 options worked fine locally with nginx serving static files.
I want to know a couple of things:
What is the difference between the options above?
What is the proper instruction to run in PRODUCTION environments?
Thank you guys.
|
Django with Gunicorn different ways to deploy
| 1.2 | 0 | 0 | 472 |
14,735,468 |
2013-02-06T17:44:00.000
| 1 | 0 | 1 | 0 |
python,random,seeding,mersenne-twister
| 24,877,029 | 2 | false | 0 | 0 |
I realise this was more than a year ago, but in case you glance back, there's an easy solution: Just grab a new value from SystemRandom to XOR with the output from the MT RNG every kth time, for some sufficiently large k, instead of every time. E.g. if SystemRandom is 50 times slower and you set k = 5000, your new combined RNG should only be ~1% slower, and (assuming that SystemRandom is "really" random) any permutation can be reached in every run involving more than 5000 RNG calls.
| 1 | 1 | 0 |
As many people know, Python uses the Mersenne Twister (MT) algorithm to handle its random numbers. However, despite having a very long period (~2^19937), it is also well known that you can't reach every random permutation when you shuffle a sequence of more than 2080 elements (since 2081! > 2^19937). As I am dealing with permutations and statistical properties are important to me, I'm trying to figure out the best way to mix or re-seed the Python generator with an additional source of randomness to avoid repetition.
Currently, my concept is to use the system random number generator (SystemRandom) to add an external source of randomness to the MT generator. There are two ways that I can think of to do this:
XOR the SystemRandom random number with the MT random number
Use the SystemRandom to reseed the MT
The first approach is used with some frequency by hardware random number generators, to reduce their bias tendencies. However, it's wildly inefficient. On a Windows XP machine, the SystemRandom is 50 times slower than the standard Python random function. That's a huge performance hit when most of your function involves shuffling. Given that, reseeding the MT with the SystemRandom should be significantly more efficient.
However, there are two issues with that approach also. Firstly, reseeding the MT during operation might disrupt its statistical properties. I'm fairly certain this shouldn't be an issue if the MT runs long enough, as each run of MT values should be well-formed (regardless of the starting point). It does however indicate that a sizable period between MT reseeding is preferred. Secondly, there is the question of what is the most efficient way to trigger reseeding. The simplest way to handle this is with a counter. However, more efficient ways might be possible.
So then, there are three questions on this point:
Has anyone read anything to the effect that reseeding an MT with a random value after every N samples will alter its desirable statistical properties?
Does anyone know a more efficient way to do so than incrementing a counter to trigger a reseed?
Finally, if anyone knows a generally better way to approach this problem, I'm all ears.
|
Avoiding Exact Repeats for Mersenne Twister in Python
| 0.099668 | 0 | 0 | 699 |
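A minimal sketch of the counter-based reseeding idea from the question; the interval K and the seed width are arbitrary choices of mine:

```python
import random

_sys = random.SystemRandom()                 # OS entropy source
_rng = random.Random(_sys.getrandbits(128))  # the Mersenne Twister instance
_count = 0
K = 5000  # reseed interval; tune against your performance budget

def hybrid_random():
    """MT output, reseeded from the OS entropy pool every K calls."""
    global _count
    _count += 1
    if _count % K == 0:
        _rng.seed(_sys.getrandbits(128))
    return _rng.random()
```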
14,736,788 |
2013-02-06T19:06:00.000
| 2 | 1 | 0 | 1 |
python,python-2.7,debian,gunicorn
| 14,737,918 | 1 | true | 0 | 0 |
That is indeed the proper way to do it. Start it with the -p option so you don't have to guess at the PID if you have more than one instance running. You can tell gunicorn to reload your application without restarting the gunicorn process itself by sending it a SIGHUP instead of killing it.
If that makes you uncomfortable, you can always write a management script to put in /etc/init.d and start it like any other service.
| 1 | 1 | 0 |
How can I stop, restart or start Gunicorn running within a virtualenv on a Debian system?
I can't seem to find a solution apart from finding the PID for the gunicorn daemon and killing it.
Thank you.
|
How can I stop, restart or start Gunicorn running within a virtualenv on a Debian system?
| 1.2 | 0 | 0 | 5,683 |
14,739,044 |
2013-02-06T21:22:00.000
| 6 | 0 | 0 | 1 |
python,google-app-engine,app-engine-ndb
| 14,749,034 | 2 | false | 0 | 0 |
One thing that most GAE users will come to realize (sooner or later) is that the datastore does not encourage design according to the formal normalization principles that would be considered a good idea in relational databases. Instead it often seems to encourage design that is unintuitive and anathema to established norms. Although relational database design principles have their place, they just don't work here.
I think the basis for the datastore design instead falls into two questions:
How am I going to read this data and how do I read it with the minimum number of read operations?
Is storing it that way going to lead to an explosion in the number of write and indexing operations?
If you answer these two questions with as much foresight and actual tests as you can, I think you're doing pretty well. You could formalize other rules and specific cases, but these questions will work most of the time.
| 2 | 11 | 0 |
I would like to hear your opinion about the effective implementation of one-to-many relationship with Python NDB. (e.g. Person(one)-to-Tasks(many))
In my understanding, there are three ways to implement it.
Use 'parent' argument
Use 'repeated' Structured property
Use 'repeated' Key property
Usually I choose an approach based on the logic below; does it make sense to you?
If you have better logic, please teach me.
Use 'parent' argument
Transactional operation is required between these entities
Bidirectional reference is required between these entities
Strongly intend 'Parent-Child' relationship
Use 'repeated' Structured property
Don't need to use 'many' entity individually (Always, used with 'one' entity)
'many' entity is only referred by 'one' entity
Number of 'repeated' is less than 100
Use 'repeated' Key property
Need to use 'many' entity individually
'many' entity can be referred by other entities
Number of 'repeated' is more than 100
Option 2 increases the size of the entity, but we can save datastore operations. (We need to use a projection query to reduce the CPU time for deserialization, though.) Therefore, I use this approach as much as I can.
I really appreciate your opinion.
|
Effective implementation of one-to-many relationship with Python NDB
| 1 | 1 | 0 | 1,389 |
14,739,044 |
2013-02-06T21:22:00.000
| 7 | 0 | 0 | 1 |
python,google-app-engine,app-engine-ndb
| 14,740,062 | 2 | false | 0 | 0 |
A key thing you are missing: How are you reading the data?
If you are displaying all the tasks for a given person on a request, 2 makes sense: you can query the person and show all his tasks.
However, if you need to query say a list of all tasks say due at a certain time, querying for repeated structured properties is terrible. You will want individual entities for your Tasks.
There's a fourth option, which is to use a KeyProperty in your Task that points to your Person. When you need a list of Tasks for a person you can issue a query.
If you need to search for individual Tasks, then you probably want to go with #4. You can use it in combination with #3 as well.
Also, the number of repeated properties has nothing to do with 100. It has everything to do with the size of your Person and Task entities, and how much will fit into 1MB. This is potentially dangerous, because if your Task entity can potentially be large, you might run out of space in your Person entity faster than you expect.
| 2 | 11 | 0 |
I would like to hear your opinion about the effective implementation of one-to-many relationship with Python NDB. (e.g. Person(one)-to-Tasks(many))
In my understanding, there are three ways to implement it.
Use 'parent' argument
Use 'repeated' Structured property
Use 'repeated' Key property
Usually I choose an approach based on the logic below; does it make sense to you?
If you have better logic, please teach me.
Use 'parent' argument
Transactional operation is required between these entities
Bidirectional reference is required between these entities
Strongly intend 'Parent-Child' relationship
Use 'repeated' Structured property
Don't need to use 'many' entity individually (Always, used with 'one' entity)
'many' entity is only referred by 'one' entity
Number of 'repeated' is less than 100
Use 'repeated' Key property
Need to use 'many' entity individually
'many' entity can be referred by other entities
Number of 'repeated' is more than 100
Option 2 increases the size of the entity, but we can save datastore operations. (We need to use a projection query to reduce the CPU time for deserialization, though.) Therefore, I use this approach as much as I can.
I really appreciate your opinion.
|
Effective implementation of one-to-many relationship with Python NDB
| 1 | 1 | 0 | 1,389 |
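A minimal sketch of the fourth option from the second answer: a KeyProperty on Task that points back at Person. The model fields are my own invention:

```python
from google.appengine.ext import ndb

class Person(ndb.Model):
    name = ndb.StringProperty()

class Task(ndb.Model):
    title = ndb.StringProperty()
    due = ndb.DateTimeProperty()
    owner = ndb.KeyProperty(kind=Person)  # points back at the owning Person

# all tasks for one person, as an ordinary query:
def tasks_for(person_key):
    return Task.query(Task.owner == person_key).fetch()
```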
14,741,395 |
2013-02-07T00:14:00.000
| 2 | 0 | 0 | 1 |
java,python,google-app-engine
| 14,743,018 | 1 | true | 1 | 0 |
It'll be a complete rewrite.
However, the server side should be independent of the client. You can have a python client for the Raspberry Pi and your server side code can still be written in Java.
| 1 | 0 | 0 |
I'm a big noob to GAE, moderate level in Python, and moderate-to-rusty in Java.
I am looking to convert an existing and working GAE Java app (in the Google Play store and runs on Android) into GAE Python.
The end goal is to get it into the Raspberry Pi Store, so I'm assuming GAE Python would be the most seamless.
Has anyone done this, assuming it's even possible?
Would it require a complete rewrite, or could I just write a wrapper/container?
|
Convert Java Google AppEngine app to Python AppEngine
| 1.2 | 0 | 0 | 201 |
14,741,824 |
2013-02-07T01:02:00.000
| 6 | 0 | 0 | 0 |
python,django,web2py
| 14,742,495 | 2 | false | 1 | 0 |
I'm a beginner also. I started about 8 months ago knowing no computer science, programming, PowerShell, or even HTML/CSS, and now I just about have a full Django website ready, minus some minor issues because I need video, and video is still above my head and hard to find info about or learn.

Anyway, if you don't already know HTML/CSS, I used codecademy.com to learn that and some JavaScript, then learnpythonthehardway.org to learn Python, followed by djangobook.com for Django. All are great resources; they even point you in the direction of other things you need along the way. It won't be easy, but there are great learning resources available, and since I began learning, Codecademy has also added Python.

I quit my job to focus on programming full time so that I can learn it while chasing a dream, so I know from personal experience that it can be done with the free resources available online. I still don't know A LOT, and it will take time for things to start clicking, but if you want to learn it, just start. I know you asked for expert opinions, and trust me, I'm no expert, but from my experience Django wasn't too bad. Yeah, you will find yourself banging your head against the wall from time to time, but communities like Stack Overflow can also help you figure out answers to your questions. I don't, however, have any experience with web2py, so I can't speak about that. Good luck!
| 1 | 0 | 0 |
I am very interested in learning web programming. I want to use something from Python, but I'm not sure whether to use web2py or Django. Django seems difficult to set up for a beginner such as myself, but I do not want to throw it out just yet. So, what are some expert opinions on web programming frameworks? Also, if Django really isn't as hard as it seems, could someone please explain how I would set it up? Thanks in advance!!
|
Web2py vs django for beginner web programmer
| 1 | 0 | 0 | 4,734 |
14,742,170 |
2013-02-07T01:41:00.000
| 0 | 0 | 0 | 0 |
python,django
| 14,742,935 | 4 | false | 1 | 0 |
You can have a look at CKEditor.
| 1 | 0 | 0 |
I am very interested in using Django to create a small travel blog for myself. There are a few reasons why I am more interested in using Django than something like Wordpress. One is that I want a grip on all the details and, in the end, to create something that doesn't look like a Wordpress blog; the second is that I want several blogs, one for each place I visit, which isn't natively handled by Wordpress.
The problem is that when I'm abroad, I want to be able to type a blog post that will automatically create paragraph tags for me at the very least, and handle a lot of the small HTML formatting type things that Wordpress does for you. What is the common workflow for something like this? I don't want something crazy like TinyMCE, but something above having to type every little HTML tag.
Thanks for the help.
|
Django Text Formatting, Markup? Markdown?
| 0 | 0 | 0 | 1,434 |
14,742,893 |
2013-02-07T03:13:00.000
| 2 | 0 | 0 | 0 |
python,matlab,pandas,gps
| 14,754,539 | 5 | false | 0 | 0 |
What you can do is use the interp1 function. This function fits values of y onto a new x series in different ways.
For example, if you have
x=[1 3 5 6 10 12]
y=[15 20 17 33 56 89]
and you want to fill in values for x1=[1 2 3 4 5 6 7 ... 12], you type
y1=interp1(x,y,x1)
| 1 | 2 | 1 |
This is probably a very easy question, but all the sources I have found on interpolation in Matlab are about correlating two values. All I want is this: if I have data collected over an 8-hour period, but the time between data points varies, how do I adjust it so that the time periods are equal and the data remains consistent?
Or, to rephrase the approach I have been trying: I have GPS lat/lon and Unix time for these points. What I want to do is take the lat and lon at time 1 and time 3, and for the case where I don't know time 2, simply fill it with the data from time 1. Is there a functional way to do this? (I know in something like Python's pandas you can use fill.) I'm unsure how to do this in Matlab.
|
Interpolation Function
| 0.07983 | 0 | 0 | 705 |
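The question mentions pandas, so for reference, here is my own Python translation of the answer's interp1 example using numpy:

```python
import numpy as np

x = [1, 3, 5, 6, 10, 12]
y = [15, 20, 17, 33, 56, 89]
x1 = range(1, 13)         # the denser, evenly spaced series

y1 = np.interp(x1, x, y)  # linear interpolation, like Matlab's interp1
print(y1)
```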
14,743,431 |
2013-02-07T04:14:00.000
| 0 | 0 | 0 | 0 |
python,django,caching
| 14,743,602 | 3 | false | 1 | 0 |
Yes, Memcached is the right answer. Just take the time to set it up.
| 1 | 1 | 0 |
I need to implement some caching mechanism in my Python/Django project. Currently our live site is on the Heroku cloud.
Which is the best caching mechanism to use: local-memory caching, filesystem caching, or database caching? I need to implement it easily in our live Heroku environment. A friend suggested using python-memcached, but it's really difficult to set up and my time is minimal.
Can anyone advise me on this, please?
Thanks in advance...
|
Caching in Django
| 0 | 0 | 0 | 572 |
14,744,178 |
2013-02-07T05:23:00.000
| 0 | 1 | 0 | 0 |
python,linux
| 14,744,411 | 2 | false | 0 | 0 |
In GNU/linux the firewall (netfilter) is part of the kernel, so I think that if linux is on, the firewall is too.
Next, you may ask netfilter whether it is configured and whether there are any rules. For this you might parse the output of the iptables command (such as iptables -L).
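A minimal sketch of that idea in Python (assumes iptables is on the PATH and the script has permission to run it, which usually means root):
import subprocess

output = subprocess.check_output(['iptables', '-L'])
# Keep only lines that are actual rules, not chain or column headers.
rule_lines = [line for line in output.splitlines()
              if line and not line.startswith(('Chain', 'target'))]
if not rule_lines:
    print 'Warning: no firewall rules appear to be configured!'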
| 1 | 0 | 0 |
I want to display whether a firewall is present or not. If it is not enabled, the user should get an alert. Can this be done using Python code?
|
Determining presence of firewall using python on linux
| 0 | 0 | 0 | 579 |
14,746,917 |
2013-02-07T08:42:00.000
| 0 | 1 | 1 | 0 |
python,ironpython,exe
| 14,746,966 | 2 | false | 0 | 0 |
I think you can do it with py2exe and the .NET framework.
| 2 | 0 | 0 |
I'm using the pyc tool to compile IronPython scripts to executables, but can they be run without IronPython installed? If so what do I have to include?
|
is it possible to run compiled iron python scripts on PCs without iron python installed?
| 0 | 0 | 0 | 230 |
14,746,917 |
2013-02-07T08:42:00.000
| 2 | 1 | 1 | 0 |
python,ironpython,exe
| 14,746,941 | 2 | true | 0 | 0 |
Yes, you can run it on other PCs without installing IronPython or Visual Studio, right off the bat.
Sometimes you might need the Windows runtime libraries that you compiled the application with, but other than that, yes, you can execute it on any other Windows PC equivalent to the one you compiled it on.
(For example, a build compiled on Win7 will most likely run on Win7 off the bat, but not on XP without the runtime libraries used on the compiling machine.)
| 2 | 0 | 0 |
I'm using the pyc tool to compile IronPython scripts to executables, but can they be run without IronPython installed? If so what do I have to include?
|
is it possible to run compiled iron python scripts on PCs without iron python installed?
| 1.2 | 0 | 0 | 230 |
14,751,806 |
2013-02-07T13:05:00.000
| 1 | 0 | 0 | 0 |
python,xlrd
| 14,854,783 | 2 | false | 0 | 0 |
I used the CSV module to figure this out, as it read the cells correctly.
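A minimal sketch of that approach, assuming the sheet was exported as CSV (Python 2 csv usage):
import csv

with open('export.csv', 'rb') as f:        # 'rb' for the Python 2 csv module
    for row in csv.reader(f):
        id_text = row[0]                   # stays literal text, e.g. '5511195414392'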
| 1 | 2 | 0 |
Pretty simple question but haven't been able to find a good answer.
In Excel, I am generating files that need to be automatically read. They are read by an ID number, but the format I get is setting it as text. When using xlrd, I get this format:
5.5112E+12
When I need it in this format:
5511195414392
What is the best way to achieve this? I would like to avoid using xlwt but if it is necessary I could use help on getting started in that process too
|
Reading scientific numbers in xlrd
| 0.099668 | 1 | 0 | 1,158 |
14,753,159 |
2013-02-07T14:12:00.000
| 0 | 0 | 0 | 1 |
python,ip,resource-leak
| 14,754,106 | 1 | false | 0 | 0 |
What exactly do you want to do?
As far as I see, you don't count eth0 filehandles, but instead you count all filehandles.
If you just want open IP file handles, you can use lsof (a shell tool) under Linux.
lsof -u yourUser | grep IPv4
That is not just eth0, though; I don't know how to filter it by interface.
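If the goal is just to read eth0's IP address without leaking handles, here is a minimal Linux-only sketch using the SIOCGIFADDR ioctl (no netinfo needed; the socket is always closed):
import socket, fcntl, struct

def get_ip(ifname):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        return socket.inet_ntoa(fcntl.ioctl(
            s.fileno(),
            0x8915,                        # SIOCGIFADDR
            struct.pack('256s', ifname[:15]))[20:24])
    finally:
        s.close()

print get_ip('eth0')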
| 1 | 1 | 0 |
How can I get network information in Python on both Linux and Windows? I tried the netinfo package (ver 0.3.2) in Python 2.7 on Ubuntu 12.10 64-bit, but using this package leaves file handles open, as shown below. That is not acceptable in my case.
import netinfo

def countOpenFiles():
    import resource, fcntl, os
    n_open = 0
    names = []
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    for fd in range(0, soft):
        try:
            f = fcntl.fcntl(fd, fcntl.F_GETFD)
            n_open += 1
        except IOError:
            continue
    return n_open

for i in range(10):
    netinfo.get_ip('eth0')
    print countOpenFiles()
It produces:
4
5
6
7
8
9
10
11
12
13
I would like to have similar to netinfo package without resource leaks.
Thanks for any help.
|
How read ip address under python without resource leaks
| 0 | 0 | 0 | 265 |
14,753,481 |
2013-02-07T14:29:00.000
| 5 | 0 | 1 | 0 |
python,list,set,typeerror
| 14,753,518 | 3 | true | 0 | 0 |
The code if foo in {} checks if any of the keys of the dictionary is equal to foo.
In your example, foo is a list. A list is an unhashable type, and cannot be the key of a dictionary.
If you want to check if any entry of a list is contained in a dictionary's keys or in a set, you can try:
if any([x in {} for x in (4, 5, False)]).
If you want to check if any of your values is equal to your list, you can try:
if any([v == [4, 5, False, False, False, False] for v in your_dict.values()])
| 1 | 1 | 0 |
I'm trying to run a program which is effectively doing the following:
if [4, 5, False, False, False, False] in {}
And, on this line, I'm getting a TypeError: unhashable type 'list'
What am I doing wrong?
|
Python TypeError: unhashable type: 'list'
| 1.2 | 0 | 0 | 6,798 |
14,753,914 |
2013-02-07T14:48:00.000
| 1 | 0 | 0 | 1 |
python,python-2.7,google-drive-api
| 14,789,775 | 2 | true | 0 | 0 |
I think you have the right idea in your "update". Treat Drive as flat, make calls to list everything, and generate your own tree from that.
| 2 | 1 | 0 |
Any ideas how to query for all the children and the children of the children in a single query?
Update
It seems like a simple question. I doubt if there is a simple solution?
Querying the tree of folders and files can cost a lot of API calls.
So, to solve my problem, I use a single query to list all the files and folders of an owner. This query also returns subfiles and subfolders. To find the folder and all of its children (folders, files, subfolders and subfiles) in the list, I had to create a tree-like index.
Conclusion
A single query is not enough. You have to list everything, or narrow the query with an owner. Next you have to index the results to (recursively) find the tree for the folder.
A query option like (ls -R in Unix) would be nice.
|
How to list all files, folders, subfolders and subfiles of a Google drive folder
| 1.2 | 0 | 0 | 5,009 |
14,753,914 |
2013-02-07T14:48:00.000
| 2 | 0 | 0 | 1 |
python,python-2.7,google-drive-api
| 14,808,043 | 2 | false | 0 | 0 |
I'm trying to do the same in PHP. My solution is:
Retrieve the complete list of files and folders from the drive
Make a double iteration (nested) on the retrieved json:
the first over the elements in "items" array,
the second (recursive) over the parents id of each element,
rejecting all the elements that do not contain the id of my specific folder in their parents id list.
Don't worry, it's just one Google APIs call. The rest of the job is local.
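The same idea as a rough Python sketch (assumes items is the full list of file resources from a single files().list() call, with Drive API v2 field names):
from collections import defaultdict

def build_children_index(items):
    children = defaultdict(list)
    for f in items:
        for parent in f.get('parents', []):
            children[parent['id']].append(f)
    return children

def print_tree(children, folder_id, depth=0):
    for f in children[folder_id]:
        print '  ' * depth + f['title']
        print_tree(children, f['id'], depth + 1)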
| 2 | 1 | 0 |
Any ideas how to query for all the children and the children of the children in a single query?
Update
It seems like a simple question. I doubt if there is a simple solution?
Querying the tree of folders and files can cost a lot of API calls.
So, to solve my problem, I use a single query to list all the files and folders of an owner. This query also returns subfiles and subfolders. To find the folder and all of its children (folders, files, subfolders and subfiles) in the list, I had to create a tree-like index.
Conclusion
A single query is not enough. You have to list everything, or narrow the query with an owner. Next you have to index the results to (recursively) find the tree for the folder.
A query option like (ls -R in Unix) would be nice.
|
How to list all files, folders, subfolders and subfiles of a Google drive folder
| 0.197375 | 0 | 0 | 5,009 |
14,754,090 |
2013-02-07T14:56:00.000
| 1 | 0 | 0 | 0 |
python,excel,hdf5
| 31,982,266 | 3 | false | 0 | 0 |
XlsxWriter worked for me. I tried openpyxl, but it raised an error on a sheet of about 22k rows by 400 columns.
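A minimal XlsxWriter sketch for large sheets; constant_memory makes it flush each row as it is written instead of holding the whole sheet in memory (rows here is a placeholder for your data source):
import xlsxwriter

workbook = xlsxwriter.Workbook('big.xlsx', {'constant_memory': True})
worksheet = workbook.add_worksheet()
for r, record in enumerate(rows):          # rows must be written in order in this mode
    for c, value in enumerate(record):
        worksheet.write(r, c, value)
workbook.close()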
| 1 | 8 | 0 |
I have a really big database which I want to write to an xlsx/xls file. I already tried to use xlwt, but it allows writing only 65536 rows (some of my tables have more than 72k rows). I also found openpyxl, but it works too slowly and uses a huge amount of memory for big spreadsheets. Are there any other ways to write Excel files?
edit:
Following kennym's advice I used the Optimised Reader and Writer. It is less memory-consuming now, but still time-consuming. Exporting takes more than an hour now (for really big tables, up to 10^6 rows). Are there any other possibilities? Maybe it is possible to export a whole table from an HDF5 database file to Excel, instead of doing it row after row as my code does now?
|
How to write big set of data to xls file?
| 0.066568 | 1 | 0 | 5,375 |
14,755,062 |
2013-02-07T15:40:00.000
| 2 | 1 | 1 | 0 |
python,embed,bytecode
| 14,755,102 | 2 | false | 0 | 0 |
pyc are not compiled to machine code. Use Shedskin for that.
| 2 | 0 | 0 |
I have files containing compiled Python bytecode. I want to run them through my executable program without the massive overhead of the Python interpreter.
Any ideas?
|
Is there a light version of Python that only runs .pyc files?
| 0.197375 | 0 | 0 | 457 |
14,755,062 |
2013-02-07T15:40:00.000
| 0 | 1 | 1 | 0 |
python,embed,bytecode
| 14,756,123 | 2 | false | 0 | 0 |
You mention the massive overhead of the interpreter: do you have any evidence that the compilation step is massive overhead? You might be misunderstanding what is in a .pyc file. Python bytecode is not machine code, it is very high-level bytecodes that are executed by the Python interpreter.
In any case, no, there is not a build of Python that can run .pyc files and not .py files.
| 2 | 0 | 0 |
I have files containing compiled Python bytecode. I want to run them through my executable program without the massive overhead of the Python interpreter.
Any ideas?
|
Is there a light version of Python that only runs .pyc files?
| 0 | 0 | 0 | 457 |
14,755,065 |
2013-02-07T15:40:00.000
| 3 | 1 | 0 | 0 |
python,svn,jenkins,continuous-integration,release
| 14,828,819 | 2 | true | 1 | 0 |
The question is a bit too big to be answered in a simple post, I will therefore try to give a few hints and references as far as I see from my personal view:
A few quick tips:
I like the idea of separating the developers into branches, but I would do the testing on the feature-branch and only merge to the beta branch if the feature passes tests, this way nothing enters beta until it is tested!
I would put the integration steps into a script outside of Jenkins. Make it part of the source code. This way you can test the script itself quickly outside of Jenkins
Use the build-system or scripting language you feel most comfortable with, most of the steps can easily done with any programming language
Make the script return success or failure, so Jenkins can flag the build as failed
For the merge-issues, you have two possibilities
Require the branch to be manually rebased before a developer can submit it for integration; check in the script and fail the build if a rebase is necessary. This way merge errors cannot happen, because the build simply fails if the branch is not rebased
If you'd rather allow non-rebased merges, you need to fail the build on merge errors so the developer can manually resolve the problem (by rebasing his/her branch before submitting again)
Here some books that I found useful in this area:
How Google Tests Software, by James A. Whittaker, Jason Arbon, Jeff Carollo
Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation by Jez Humble
Let me know via comments what additional content you would like to have.
| 1 | 4 | 0 |
I am working on a web project with 7 developers. I setup a beta box (debian) so that we can do testing of new code before passing it to staging.
On the beta box, I setup Jenkins and would like to automate the merge/testing process. We also have a test suite which I would like to tie-in somehow.
How should I test and run python web projects with SVN / Jenkins?
I'm trying to formulate a good workflow. Right now each developer works on a feature branch; I run the code in the branch, and if it looks good we merge it.
I would love to have developers login to the beta jenkins, and tell it to build from their feature branch. Here is my plan for what Jenkins would do:
Make sure the feature branch is rebased from trunk
Make sure the beta branch is identical to trunk (overwriting any merged-in feature branches)
Merge the feature branch into the beta branch
Kill the running server
Start the server nohup python app.py &
Run the test suite python test.py
Output the test data to the developer's view in Jenkins
If any of the tests fail, revert to the state before the branch was merged
I'm not sure how to handle merge conflicts. Also, the above is probably bad and wrong. Any advice would be appreciated!
|
How should our devs test python SVN branches with Jenkins?
| 1.2 | 0 | 0 | 601 |
14,755,187 |
2013-02-07T15:46:00.000
| 3 | 1 | 0 | 0 |
python,architecture,ipc,twisted,zeromq
| 14,763,596 | 2 | true | 0 | 0 |
All of these approaches are possible. I can only speak abstractly because I don't know the precise contours of your application.
If you already have a working application but it just isn't fast enough to handle the number of messages you throw at it, then identify the bottleneck. The two likely causes of your holdup are DB access or alert-triggering because either one of these are probably synchronous IO operations.
How you deal with this depends on your workload:
If your message rate is high and constant, then you need to make sure your database can handle this rate. If your DB can't handle it, then no amount of non-blocking message passing will help you! In this order:
Try tuning your database.
Try putting your database on a bigger comp with more memory.
Try sharding your database across multiple machines to distribute the workload.
Once you know your db can handle the message rate, you can deal with other bottlenecks using other forms of parallelism.
If your message rate is bursty then you can use queueing to handle the bursts. In this order:
Put a load balancer in front of a cluster of message processors. All this balancer should do is redistribute sensor messages to different machines for check-and-alert processing. The advantage of this approach is that you will probably not need to change your existing application, just run it on more machines. This works best if your load balancer does not need to wait for a response, just forward the message.
If your communication needs are more complex or are bidirectional, you can use a message bus (such as ZeroMQ) as the communication layer between message-processors, alert-senders, and database-checkers. The idea is to increase parallelism by having non-blocking communication occur through the bus and having each node on the bus do one thing only. You can then alter the ratio of node types depending on how long each stage of message processing takes. (I.e. to make the queue depth equal across the entire message processing process.)
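A minimal PUSH/PULL sketch of that bus idea with pyzmq (the address and message shape are assumptions; the two socket halves would normally live in separate processes):
import zmq

ctx = zmq.Context()

# Checker/alert side: pulls stored messages.
pull = ctx.socket(zmq.PULL)
pull.connect('tcp://127.0.0.1:5557')

# Storing side: pushes each message after the DB write, without blocking on the checker.
push = ctx.socket(zmq.PUSH)
push.bind('tcp://127.0.0.1:5557')
push.send_json({'sensor': 'A1', 'value': 42})

msg = pull.recv_json()                     # compare against base values, alert if matched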
| 1 | 3 | 0 |
I'm using Twisted to get messages from internet-connected sensors in order to store them in a db. I want to check these messages without interfering with that process, because I need to compare every message with some base values in the db; if one matches, I need to trigger an alert, and the idea is not to block any process...
My idea is to create a new process to check and alert: after the first process stores the message, it will send the message to the new process, which checks and alerts if required.
I need IPC for this, and I was thinking of using ZeroMQ, but Twisted also has its own approach to IPC, so combining it with ZeroMQ might be self-defeating...
What do you think about my approach? Maybe I'm completely wrong?
Any advice is welcome.
Thanks
PS: this process will run on a dedicated server, with an expected load of 6000 msg/hour of 1KB each.
|
Architecture approach with IPC, Twisted or ZeroMQ?
| 1.2 | 0 | 0 | 1,496 |
14,755,882 |
2013-02-07T16:22:00.000
| 0 | 1 | 1 | 0 |
python,regex,compilation
| 14,756,210 | 3 | false | 0 | 0 |
It sounds to me like the author is simply saying it's more efficient to compile a regex once and save it than to count on a previously compiled version of it still being held in the module's limited-size internal cache. This is probably because the effort of re-compiling expressions, plus the extra cache lookup overhead that must occur first, is greater than the cost of the client simply storing them itself.
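You can see the difference yourself with a quick (unscientific) timing sketch; exact numbers will vary by machine:
import re, timeit

pat = re.compile(r'\d+')                                                  # compiled once, reused
print timeit.timeit(lambda: pat.search('abc 123'), number=100000)
print timeit.timeit(lambda: re.search(r'\d+', 'abc 123'), number=100000)  # cache lookup each call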
| 1 | 4 | 0 |
I'm working through Doug Hellman's "The Python Standard Library by Example" and came across this:
"1.3.2 Compiling Expressions
re includes module-level functions for working with regular expressions as text strings, but it is more efficient to compile the expressions a program uses frequently."
I couldn't follow his explanation for why this is the case. He says that the "module-level functions maintain a cache of compiled expressions" and that since the "size of the cache" is limited, "using compiled expressions directly avoids the cache lookup overhead."
I'd greatly appreciate it if someone could please explain or direct me to an explanation that I could better understand for why it is more efficient to compile the regular expressions a program uses frequently, and how this process actually works.
|
Compiling Regular Expressions in Python
| 0 | 0 | 0 | 1,111 |
14,756,286 |
2013-02-07T16:40:00.000
| 5 | 0 | 0 | 0 |
java,python,ipod
| 14,756,354 | 2 | false | 1 | 0 |
I believe the transfer protocol between iTunes and ipod is a closed one..and hence dont think there is a publicly available protocol.
| 1 | 1 | 0 |
Is there an API to copy music (files on disk) to an iPod?
Any language will do, but preferably Java or python.
|
API to copy music to iPod
| 0.462117 | 0 | 0 | 258 |
14,756,365 |
2013-02-07T16:45:00.000
| 1 | 0 | 0 | 0 |
python,xml-rpc,openerp
| 14,796,657 | 1 | false | 1 | 0 |
One way to connect to an external application is to create a connector module. There are already several connector modules that you can take a look at:
the thunderbird and outlook plugins
the joomla and magento modules
the 'event moodle' module
For example, the joomla connector uses a joomla plugin to handle the communication between OpenERP and joomla. The communication protocol used is XML-RPC but you can choose any protocol you want. You can even choose to connect directly to the external database using the psycopg2 modules (if the external database is using Postgresql) but this is not recommended. But perhaps you don't have the choice if this external application has no connection API.
You need to know what are the available ways to connect to this external application and choose one of these. Once you have chosen the right protocol, you can create your OpenERP module.
You can map entities stored on the external application using osv.TransientModel objects (formerly known as osv memory). The tables related to these objects will still be created in the OpenERP database but the data is volatile (deleted after some time).
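For reference, a minimal sketch of talking to OpenERP over XML-RPC from an external Python script (host, database name, and credentials are placeholders):
import xmlrpclib

common = xmlrpclib.ServerProxy('http://localhost:8069/xmlrpc/common')
uid = common.login('mydb', 'admin', 'admin')
models = xmlrpclib.ServerProxy('http://localhost:8069/xmlrpc/object')
ids = models.execute('mydb', uid, 'admin', 'res.partner', 'search', [])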
| 1 | 1 | 0 |
How would one go about connecting to a different database based on which module is being used? Our scenario is as follows:
We have a standalone application with its own database on a certain server and OpenERP running on different server. We want to create a module in OpenERP which can utilise entities on the standalone application server rather than creating its own entities in its own database, is this possible? How can we change the connection parameters that the ORM uses to connect to its own database to point to a different database?
Of course, one way is to use the base_synchro module to synchronise the required entities between both databases, but considering the large amount of data, we don't want duplication. Another way is to use XML-RPC to get data into OpenERP, but that still requires the entities to be present in the OpenERP database.
How can we solve this problem without data duplication? How can a module in OpenERP be created based on a different database?
|
How to connect to a different database in OpenERP?
| 0.197375 | 1 | 0 | 1,877 |
14,757,199 |
2013-02-07T17:25:00.000
| 0 | 0 | 0 | 0 |
python,rabbitmq
| 14,757,404 | 3 | false | 0 | 0 |
The q in RabbitMQ stands for queue. This means that all messages placed in the queue, in your case the producer's random numbers, will remain in the queue until someone comes to get them.
| 2 | 0 | 0 |
suppose i have a producer on rabbitmq server which will generate a random number and pass it to the consumer. Consumer will receive all the random-numbers from producer. If i will kill my consumer process, what will producer do in this situation? whether it will continuously generate the number and when ever the consumer(client) will come up it will start sending again all the numbers generated by producer or some thing else...
|
Rabbitmq server working
| 0 | 0 | 0 | 151 |
14,757,199 |
2013-02-07T17:25:00.000
| 1 | 0 | 0 | 0 |
python,rabbitmq
| 14,771,469 | 3 | false | 0 | 0 |
To fully embrace the functionality you need to understand how the rabbitmq broker works with exchanges. I believe this will solve your problem.
Instead of sending to a single queue you will create an exchange. The producer sends to the exchange. In this state, with no queues bound yet, the messages will be discarded. You will then need to create a queue in order for a consumer to receive the messages. The consumer will create the queue and bind it to the exchange. At that point the queue will receive messages and deliver them to the consumer.
In your case you will probably use a fanout exchange so that you do not need to worry about bindings and routing keys. But you should also set your queue to be auto-delete. That will ensure that when the consumer goes down, the queue is deleted, and hence the producer, unaffected by this, will continue to send messages to the exchange, which are discarded until the queue is recreated.
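A minimal consumer-side sketch of that setup with pika (the exchange name is an assumption; depending on your pika version the type argument is spelled type or exchange_type):
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
ch = conn.channel()
ch.exchange_declare(exchange='numbers', exchange_type='fanout')
result = ch.queue_declare(queue='', exclusive=True, auto_delete=True)   # broker names the queue
ch.queue_bind(exchange='numbers', queue=result.method.queue)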
| 2 | 0 | 0 |
suppose i have a producer on rabbitmq server which will generate a random number and pass it to the consumer. Consumer will receive all the random-numbers from producer. If i will kill my consumer process, what will producer do in this situation? whether it will continuously generate the number and when ever the consumer(client) will come up it will start sending again all the numbers generated by producer or some thing else...
|
Rabbitmq server working
| 0.066568 | 0 | 0 | 151 |
14,758,024 |
2013-02-07T18:10:00.000
| 1 | 0 | 0 | 0 |
python,django,pip,psycopg2,psycopg
| 15,337,328 | 2 | true | 1 | 0 |
This was solved by performing a clean reinstall of Django. There were apparently some dependencies missing that the recursive pip install did not seem to be able to resolve.
| 1 | 2 | 0 |
I have pip installed psycopg2, but when I try to runserver or syncdb in my Django project, it raises an error saying there is "no module named _psycopg".
EDIT: the "syncdb" command now raises:
django.core.exceptions.ImproperlyConfigured: ImportError django.contrib.admin: No module named _psycopg
Thanks for your help
|
Psycopg missing module in Django
| 1.2 | 1 | 0 | 1,586 |
14,760,751 |
2013-02-07T20:50:00.000
| 0 | 1 | 0 | 0 |
python,selenium,webdriver,selenium-webdriver
| 21,198,235 | 1 | false | 0 | 0 |
I've experienced similar problems before with Firefox. The rare times that we managed to catch a machine in the act, it was simply not closing browser sessions, hence the BSOD eventually. Obviously this was a bug in either WebDriver, Firefox, or XP (which we were also using). We solved it by aggressively killing every Firefox process between each individual test. This worked for us, and because you are not running tests in parallel it would work for you as well. By aggressively I mean putting an axe through it: the Windows equivalent of killall -9 firefox, because these sessions were unresponsive.
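A minimal sketch of that kill step on Windows from Python (assumes taskkill is available, as it is on XP and later):
import subprocess

subprocess.call(['taskkill', '/F', '/IM', 'firefox.exe'])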
As to the root cause? The problem did not occur with specific versions of Firefox. But we never actually managed to debug it properly. Debugging was very difficult because it wasn't reproducible under short test runs and once the issue arose it really did cause a hard crash.
| 1 | 6 | 0 |
We run a bunch of Python test scripts on a group of test stations. The test scripts interface with hardware units on these test stations, so we're stuck running one test script at a time per station (we can't virtualize everything). We built a tool to assign tests to different stations and report test results - this allows us to queue up thousands of tests and let these run overnight, or for any length of time.
Occasionally, what we've found is that test stations will drop out of the cluster. When I remotely log into them, I get a black screen, then they reboot, then upon logging in I'm notified that windows XP had a "serious error". The Event Log contains a record of this error, which states Category: (102) and Event ID: 1003.
Previously, we found that this was caused by the creation of hundreds of temporary Firefox profiles - our tests use selenium webdriver to automate website interactions, and each time we started a new browser, a temporary Firefox profile was created. We added a step in the cleanup between each test that empties these temporary Firefox profiles, but we're still finding that stations drop out sometime, and always with this serious error and record in the Event Log.
I would like to find the root cause of this problem, but I don't know how to go about doing this. I've tried searching for information about how to read event log entries, but I haven't turned up anything that helps. I'm open to any suggestions for ways to go about debugging this issue.
|
Python selenium webdriver tests causing "serious error" when run in large batches on windows XP
| 0 | 0 | 1 | 855 |
14,763,722 |
2013-02-08T00:29:00.000
| 1 | 0 | 1 | 0 |
python,modulo
| 14,763,823 | 3 | false | 0 | 0 |
It has to do with the inexact nature of floating point arithmetic. 3.5 % 0.1 gets me 0.099999999999999811, so Python is thinking that 0.1 divides into 3.5 at most 34 times, with 0.099999999999999811 left over. I'm not sure exactly what algorithm is being used to achieve this result, but that's the gist.
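You can see this directly, and sidestep it with exact decimal arithmetic (the float digits shown are from a recent interpreter; older versions print a couple more):
>>> 3.5 % 0.1
0.09999999999999981
>>> from decimal import Decimal
>>> Decimal('3.5') % Decimal('0.1')
Decimal('0.0')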
| 1 | 48 | 0 |
Can anyone explain how the modulo operator works in Python?
I cannot understand why 3.5 % 0.1 = 0.1.
|
Python modulo on floats
| 0.066568 | 0 | 0 | 65,966 |
14,767,077 |
2013-02-08T06:38:00.000
| 0 | 0 | 1 | 0 |
python
| 14,767,542 | 3 | false | 0 | 0 |
Try to reduce your functions as much as possible, and reuse them.
For example you might have a function next_prime which is called repeatedly by n_primes and n_th_prime.
This also makes your code more maintainable, as if you come up with a more efficient way to count primes, all you do is change the code in next_prime.
Furthermore, you should make your output as neutral as possible. If your function returns several values, it should return a list or a generator, not a comma-separated string.
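A minimal sketch of that split, with the function names taken from the answer (the trial-division primality test is just for illustration):
def next_prime(n):
    candidate = n + 1
    while any(candidate % d == 0 for d in range(2, int(candidate ** 0.5) + 1)):
        candidate += 1
    return candidate

def n_primes(count):
    primes, p = [], 1
    for _ in range(count):
        p = next_prime(p)
        primes.append(p)
    return primes

def n_th_prime(n):
    return n_primes(n)[-1]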
| 2 | 4 | 0 |
I find myself constantly having to change and adapt old code back and forth repeatedly for different purposes, but occasionally to implement the same purpose it had two versions ago.
One example of this is a function which deals with prime numbers. Sometimes what I need from it is a list of n primes. Sometimes what I need is the nth prime. Maybe I'll come across a third need from the function down the road.
Any way I do it though I have to do the same processes but just return different values. I thought there must be a better way to do this than just constantly changing the same code. The possible alternatives I have come up with are:
Return a tuple or a list, but this seems kind of messy since there will be all kinds of data types within including lists of thousands of items.
Use input statements to direct the code, though I would rather just have it do everything for me when I click run.
Figure out how to utilize class features to return class properties and access them where I need them. This seems to be the cleanest solution to me, but I am not sure since I am still new to this.
Just make five versions of every reusable function.
I don't want to be a bad programmer, so which choice is the correct choice? Or maybe there is something I could do which I have not thought of.
|
Writing reusable code
| 0 | 0 | 0 | 2,359 |
14,767,077 |
2013-02-08T06:38:00.000
| 0 | 0 | 1 | 0 |
python
| 14,767,171 | 3 | false | 0 | 0 |
My guess: create a module that contains:
a private core function (for example: return a list of the first n primes, or something even more general)
several wrapper/utility functions that use the core one and prepare the output in different ways (for example: the n-th prime number)
| 2 | 4 | 0 |
I find myself constantly having to change and adapt old code back and forth repeatedly for different purposes, but occasionally to implement the same purpose it had two versions ago.
One example of this is a function which deals with prime numbers. Sometimes what I need from it is a list of n primes. Sometimes what I need is the nth prime. Maybe I'll come across a third need from the function down the road.
Any way I do it though I have to do the same processes but just return different values. I thought there must be a better way to do this than just constantly changing the same code. The possible alternatives I have come up with are:
Return a tuple or a list, but this seems kind of messy since there will be all kinds of data types within including lists of thousands of items.
Use input statements to direct the code, though I would rather just have it do everything for me when I click run.
Figure out how to utilize class features to return class properties and access them where I need them. This seems to be the cleanest solution to me, but I am not sure since I am still new to this.
Just make five versions of every reusable function.
I don't want to be a bad programmer, so which choice is the correct choice? Or maybe there is something I could do which I have not thought of.
|
Writing reusable code
| 0 | 0 | 0 | 2,359 |
14,770,972 |
2013-02-08T10:55:00.000
| 2 | 0 | 0 | 0 |
python,django,apache,caching,memcached
| 14,770,990 | 3 | false | 1 | 0 |
You have to restart your server (WSGI, uWSGI, or whatever you use in your production environment).
| 1 | 11 | 0 |
I changed a .py file and changes reflected on local dev. server for Django after deleting .pyc.
The production server does not even have .pyc for this specific file. Tried touching apache wsgi and restarting apache on prod. server but no luck.
Even deleting this .py file makes application work the same. There is memcached installed but I don't have much idea how it caches, there is .git as well and 5 servers are hosting - one main, 4 load balancers.
Regards !
|
Django code changes not reflecting on production server
| 0.132549 | 0 | 0 | 6,659 |
14,776,751 |
2013-02-08T16:11:00.000
| 2 | 0 | 0 | 0 |
python,amazon-web-services,boto,amazon-sqs
| 14,778,171 | 2 | false | 0 | 0 |
Long polling is more efficient because it allows you to leave the HTTP connection open for a period of time while you wait for more results. However, you can still do your own polling in boto by just setting up a loop and waiting for some period of time between reading the queue. You can still get good overall throughput with this polling strategy.
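A minimal boto sketch of that loop (the queue name and region are placeholders; wait_time_seconds needs a boto version new enough to support SQS long polling):
import time
import boto.sqs

conn = boto.sqs.connect_to_region('us-east-1')
queue = conn.get_queue('my-queue')
while True:
    messages = queue.get_messages(num_messages=10, wait_time_seconds=20)
    for m in messages:
        print m.get_body()                 # process the message here
        queue.delete_message(m)
    if not messages:
        time.sleep(1)                      # brief back-off when the queue is empty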
| 1 | 3 | 0 |
I am very new to AWS SQS queues and I am currently playing around with boto. I noticed that when I try to read a queue filled with messages in a while loop, after 10-25 messages are read, the queue does not return any message (even though the queue has more than 1000 messages). It starts populating another set of 10-25 messages after a few seconds, or on stopping and restarting the program.
while true:
read_queue() // connection is already established with the desired queue.
Any thoughts on this behaviour or point me in the right direction. Just reiterating I am just couple of days old to SQS !!
Thanks
|
Reading data consecutively in a AWS SQS queue
| 0.197375 | 0 | 1 | 1,560 |
14,778,178 |
2013-02-08T17:35:00.000
| 9 | 0 | 1 | 0 |
module,python-3.x
| 50,516,261 | 10 | false | 0 | 0 |
You need to add YourPythonPath\Library\bin to your PATH environment variable. In my case it is C:\Python36-64\Library\bin
| 5 | 13 | 0 |
I'm new to Python, just installed cvxopt module for my Python3.3 system (64 bit). The installation was successful, but when I typed "import cvxopt" in Python command line, it returned an error:
File "C:\Program Files
(x86)\Python\lib\site-packages\cvxopt__init__.py", line 33, in
import cvxopt.base ImportError: DLL load failed: The
specified module could not be found.
Could anyone help me on this problem? Thanks a lot!
|
import cvxopt.base: the specified module could not be found
| 1 | 0 | 0 | 18,458 |
14,778,178 |
2013-02-08T17:35:00.000
| 0 | 0 | 1 | 0 |
module,python-3.x
| 48,837,788 | 10 | false | 0 | 0 |
I had the same issue of ImportError while importing cvxopt module. Since cvxopt supports python version 2.7-3.5, I created a conda virtual environment first with python 3.5 using the steps below:
open Anaconda Prompt
conda create -n <env-name> python=3.5
conda activate <env-name>
In the activated conda environment install cvxopt package using command:
conda install cvxopt
This will install the cvxopt package and all the dependencies.
After installation open spyder by typing spyder in the Anaconda prompt and this will open Spyder with the virtual environment that you have created.
After this cvxopt package will work without any errors.
Note: I have been trying to open the virtual environment in Pycharm but that didn't work and in the end switched to spyder.
| 5 | 13 | 0 |
I'm new to Python, just installed cvxopt module for my Python3.3 system (64 bit). The installation was successful, but when I typed "import cvxopt" in Python command line, it returned an error:
File "C:\Program Files
(x86)\Python\lib\site-packages\cvxopt__init__.py", line 33, in
import cvxopt.base ImportError: DLL load failed: The
specified module could not be found.
Could anyone help me on this problem? Thanks a lot!
|
import cvxopt.base: the specified module could not be found
| 0 | 0 | 0 | 18,458 |
14,778,178 |
2013-02-08T17:35:00.000
| 0 | 0 | 1 | 0 |
module,python-3.x
| 49,500,660 | 10 | false | 0 | 0 |
I had the same issue and what fixed it was to move to python 3.5 (by creating a virtual environment). Note that cvxopt does not work unfortunately with python 3.6.
| 5 | 13 | 0 |
I'm new to Python, just installed cvxopt module for my Python3.3 system (64 bit). The installation was successful, but when I typed "import cvxopt" in Python command line, it returned an error:
File "C:\Program Files
(x86)\Python\lib\site-packages\cvxopt__init__.py", line 33, in
import cvxopt.base ImportError: DLL load failed: The
specified module could not be found.
Could anyone help me on this problem? Thanks a lot!
|
import cvxopt.base: the specified module could not be found
| 0 | 0 | 0 | 18,458 |
14,778,178 |
2013-02-08T17:35:00.000
| 0 | 0 | 1 | 0 |
module,python-3.x
| 58,593,980 | 10 | false | 0 | 0 |
Open the System Properties window and click on the Advanced tab.
Click the Environment Variables button at the bottom.
In the User variables section, select Path and click Edit.
Add the directory that contains mkl_rt.dll to the path.
| 5 | 13 | 0 |
I'm new to Python, just installed cvxopt module for my Python3.3 system (64 bit). The installation was successful, but when I typed "import cvxopt" in Python command line, it returned an error:
File "C:\Program Files
(x86)\Python\lib\site-packages\cvxopt__init__.py", line 33, in
import cvxopt.base ImportError: DLL load failed: The
specified module could not be found.
Could anyone help me on this problem? Thanks a lot!
|
import cvxopt.base: the specified module could not be found
| 0 | 0 | 0 | 18,458 |
14,778,178 |
2013-02-08T17:35:00.000
| 1 | 0 | 1 | 0 |
module,python-3.x
| 50,691,143 | 10 | false | 0 | 0 |
I fixed it. Just add the path C:\Python36\Library\bin to the PATH environment variable, the same as Artashes Khachatryan said.
When I imported the cvxopt library, it ran the base.cp36-win_amd64 file, and this file requires DLLs from the bin folder.
| 5 | 13 | 0 |
I'm new to Python, just installed cvxopt module for my Python3.3 system (64 bit). The installation was successful, but when I typed "import cvxopt" in Python command line, it returned an error:
File "C:\Program Files
(x86)\Python\lib\site-packages\cvxopt__init__.py", line 33, in
import cvxopt.base ImportError: DLL load failed: The
specified module could not be found.
Could anyone help me on this problem? Thanks a lot!
|
import cvxopt.base: the specified module could not be found
| 0.019997 | 0 | 0 | 18,458 |
14,779,216 |
2013-02-08T18:40:00.000
| 22 | 0 | 0 | 0 |
php,python,blowfish,sha-3,keccak
| 14,862,876 | 3 | false | 0 | 0 |
short answer:
No, and probably never. For password hashing, BCrypt and PBKDF2-HMAC-xxx are better choices than any simple SHA-1/2/3 algorithm. And until SHA-1/2 actually have feasible preimage attacks published, SHA-3 is actually the worst choice, specifically because of its speed and low cache footprint.
longer answer:
A major factor in the relative security of different password hashing algorithms is: how much faster can a dedicated attacker hash passwords compared to you? That is, how much faster is their software/hardware combination (purchased for the express purpose of password hashing), versus your software on your server (off-the-shelf C implementation for software, hardware purchased for the needs of your application).
One of the main SHA-3 criteria was that it should run efficiently on embedded architectures, which are typified by small amounts of on-die cache, registers, etc. But this also describes modern GPUs: fewer registers/accumulators, smaller on-die caches; but on the flipside, their silicon is optimized to perform the same task in parallel on LARGE amounts of data. This is perfect for your attacker's brute-force attempts: for every dollar spent on silicon, your attacker gets more SHA3 hashes/sec by buying another GPU than you do by buying a better CPU.
For this specific reason, BCrypt was designed to do a larger number of reads/writes to an in-memory table, one that's currently larger than the cache of most GPUs. Which means the current GPU-based BCrypt implementations aren't even up to speed with their CPU counterparts. So just by choosing BCrypt, you've slowed down your attacker's advantage for every dollar he spends, by forcing him to buy CPUs the same as you.
This is why raw speed is the enemy of password hashing. You want to choose the algorithm whose fastest software/hardware combination offers your attacker the least advantage per dollar over the commodity software/hardware that you'll be using. Right now, that's BCrypt, or the slightly lesser choice of PBKDF2-HMAC-xxx. Since GPUs are probably only going to get better at doing SHA3, I doubt it'll ever be the correct choice. I don't have the numbers on SHA3, but "which is more secure" is not a nebulously relative term: the above rule can be used to precisely quantify it.
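For completeness, a minimal sketch of the recommended choice in Python using the bcrypt package (the verification line works by re-hashing with the salt embedded in the stored hash; a cost factor of 12 is just an example):
import bcrypt

password = b'correct horse battery staple'
hashed = bcrypt.hashpw(password, bcrypt.gensalt(12))
if bcrypt.hashpw(password, hashed) == hashed:
    print 'password matches'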
| 1 | 3 | 0 |
The winner of SHA-3 hashing algorithm contest has been chosen. The winner's algorithm is Keccak.
I use Blowfish and really like it, but Keccak is said to be better. Is it worth to use it for storing user's passwords on my website?
If yes, are there any implementations of Keccak for PHP, Python, Ruby or any other languages, used in web programming?
I hope this question will help other people, too. Thanks!
|
Password hashing: Keccak or not
| 1 | 0 | 0 | 3,026 |
14,780,381 |
2013-02-08T19:58:00.000
| 1 | 0 | 1 | 0 |
python,performance,mongodb
| 14,780,990 | 2 | true | 0 | 0 |
I'll address the three points separately. You should know that it absolutely depends on the situation on what works best. There is no "theoretically correct" answer as it depends on your data store/access patterns.
It is always a fairly complex decision on how you store your data. I think the main rule should be "How do I query my data?", and not "We want to have all data normalised". Data normalisation is something you do for a relational database, not for MongoDB. If you almost always query the children with the parent, and you don't have an unbounded list of children, then that is how you should store them. Just be aware that a document in MongoDB is limited to 16MB (which is a lot more than you think).
Avoid threading. You will just be better off running two queries in sequence, from two different collections. Less complex is a good thing!
This works, but it is a fairly ugly way. But then again, ugly isn't always a bad thing if it makes things go a lot faster. I don't quite know how distinct your parent and child documents are of course, so it's a difficult to say whether this is a good solution. A sparse index, which I assume you will do on a specific field depending on whether it is a parent or child, is a good idea. But perhaps you can get away with one index as well. I'd be happy to update your answer after you've shown your suggested schemas.
I would recommend you do some benchmarking, but forget about option 2.
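For scale, the two-round-trip pattern you described is only a few lines in pymongo (a sketch; the collection and field names are assumptions based on your description):
import pymongo

db = pymongo.MongoClient().mydb
parent_id = 'some-parent-id'               # placeholder
parent = db.nodes.find_one({'_id': parent_id})
children = list(db.nodes.find({'_id': {'$in': parent['child_keys']}}))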
| 1 | 0 | 0 |
TLDR; Are there drawbacks to putting two different types of documents into the same collection to save a round-trip to the database?
So I have documents with children, and a list of keys in the parent referencing the children, and almost whenever we want a parent, we also want the children to come along. The naive way to do this is to fetch the parent, and then get the children using the list of child keys with $IN (in SQL, we would use a join). However, this means making 2 round trips for a fairly frequent operation. We have a few options to improve this, especially since we can retrieve the child keys at the same time as the parent keys:
Put the children in the parent document
While this would play to mongo's strength, we also want to keep this data normalized
Pipeline database requests in threads
Which may or may not improve performance once we factor in the connection pool. It also means dealing with threading in a python app, which isn't terrible, but isn't great.
Keep the parent/child documents in the same collection (not embedded)
This way we can do one query for all the keys at once; this does mean some conceptual overhead in the wrapper for accessing the database, and forcing all indexes to be sparse, but otherwise seems straightforward.
We could profile all these options, but it does feel like someone out there should already have experience with this despite not finding anything online. So, is there something I am missing in my analysis?
|
Put different "schemas" into same MongoDB collection
| 1.2 | 1 | 0 | 339 |
14,780,533 |
2013-02-08T20:09:00.000
| 3 | 0 | 1 | 0 |
python
| 14,780,579 | 4 | false | 0 | 0 |
You have to escape \n somehow
Either just the sequence
print "\\n"
or mark the whole string as raw
print r"\n"
| 1 | 0 | 0 |
I want to print the actual string \n, but everything I tried treats it as a newline operator. How do I just print \n?
|
How to print the string "\n"
| 0.148885 | 0 | 0 | 213 |
14,786,072 |
2013-02-09T07:49:00.000
| 2 | 1 | 0 | 0 |
python,django,settings
| 53,798,521 | 7 | false | 1 | 0 |
Here's one way to do it that is compatible with deployment on Heroku:
Create a gitignored file named .env containing:
export DJANGO_SECRET_KEY='replace-this-with-the-secret-key'
Then edit settings.py to remove the actual SECRET_KEY and add this instead:
SECRET_KEY = os.environ['DJANGO_SECRET_KEY']
Then when you want to run the development server locally, use:
source .env
python manage.py runserver
When you finally deploy to Heroku, go to your app Settings tab and add DJANGO_SECRET_KEY to the Config Vars.
| 4 | 29 | 0 |
One of the causes of the local_settings.py anti-pattern is that putting SECRET_KEY, AWS
keys, etc. values into settings files has a problem:
Secrets often should be just that: secret! Keeping them in version control means
that everyone with repository access has access to them.
My question is how to keep all keys as secret?
|
Keep Secret Keys Out
| 0.057081 | 0 | 0 | 26,432 |
14,786,072 |
2013-02-09T07:49:00.000
| 6 | 1 | 0 | 0 |
python,django,settings
| 14,786,575 | 7 | false | 1 | 0 |
Store your local_settings.py data in a file encrypted with GPG - preferably as strictly key=value lines which you parse and assign to a dict (the other attractive approach would be to have it as executable python, but executable code in config files makes me shiver).
There's a Python gpg module so that's not a problem. Get your keys from your keyring, and use the GPG keyring management tools so you don't have to keep typing in your keychain password. Make sure you are reading the data straight from the encrypted file, and not just creating a decrypted temporary file which you read in. That's a recipe for failure.
That's just an outline, you'll have to build it yourself.
This way the secret data remains solely in the process memory space, and not in a file or in environment variables.
| 4 | 29 | 0 |
One of the causes of the local_settings.py anti-pattern is that putting SECRET_KEY, AWS
keys, etc. values into settings files has a problem:
Secrets often should be just that: secret! Keeping them in version control means
that everyone with repository access has access to them.
My question is how to keep all keys as secret?
|
Keep Secret Keys Out
| 1 | 0 | 0 | 26,432 |
14,786,072 |
2013-02-09T07:49:00.000
| 5 | 1 | 0 | 0 |
python,django,settings
| 14,786,114 | 7 | false | 1 | 0 |
Ideally, local_settings.py should not be checked in for production/deployed server. You can keep backup copy somewhere else, but not in source control.
local_settings.py can be checked in with development configuration just for convenience, so that each developer need to change it.
Does that solve your problem?
| 4 | 29 | 0 |
One of the causes of the local_settings.py anti-pattern is that putting SECRET_KEY, AWS
keys, etc. values into settings files has a problem:
Secrets often should be just that: secret! Keeping them in version control means
that everyone with repository access has access to them.
My question is how to keep all keys as secret?
|
Keep Secret Keys Out
| 0.141893 | 0 | 0 | 26,432 |
14,786,072 |
2013-02-09T07:49:00.000
| 0 | 1 | 0 | 0 |
python,django,settings
| 46,735,039 | 7 | false | 1 | 0 |
You may need to use os.environ.get("SOME_SECRET_KEY")
| 4 | 29 | 0 |
One of the causes of the local_settings.py anti-pattern is that putting SECRET_KEY, AWS
keys, etc. values into settings files has a problem:
Secrets often should be just that: secret! Keeping them in version control means
that everyone with repository access has access to them.
My question is how to keep all keys as secret?
|
Keep Secret Keys Out
| 0 | 0 | 0 | 26,432 |
14,795,546 |
2013-02-10T07:06:00.000
| 1 | 0 | 1 | 0 |
python
| 14,795,563 | 6 | false | 0 | 0 |
Class dict is shared among all the instances (objects) of the class, while each instance (object) has its own separate copy of instance dict.
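A short demonstration of the difference:
class C(object):
    shared = 'class attribute'             # stored in C.__dict__

a, b = C(), C()
a.own = 'instance attribute'               # stored only in a.__dict__
print a.__dict__                           # {'own': 'instance attribute'}
print b.__dict__                           # {}
print a.shared                             # not in a.__dict__, found via C.__dict__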
| 1 | 5 | 0 |
I was reading the python descriptors and there was one line there
Python first looks for the member in the instance dictionary. If it's
not found, it looks for it in the class dictionary.
I am really confused about what the instance dict is and what the class dictionary is.
Can anyone please explain this to me with code?
I was thinking of them as the same.
|
What is the dfifference between instance dict and class dict
| 0.033321 | 0 | 0 | 9,723 |
14,795,810 |
2013-02-10T07:53:00.000
| 0 | 0 | 0 | 0 |
python,sql,database,text
| 14,797,390 | 2 | false | 0 | 0 |
What I've done before is create SQLite databases from txt files which were created from database extracts, one SQLite db for each day.
One can query across SQLite db to check the values etc and create additional tables of data.
I added an additional column of data that was the SHA1 of the text line so that I could easily identify lines that were different.
It worked in my situation and hopefully may form the barest sniff of an acorn of an idea for you.
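A minimal sketch of loading one day's extract into SQLite with a per-line SHA1 column (the tab delimiter and three-field layout are assumptions based on the question):
import csv, hashlib, sqlite3

conn = sqlite3.connect('day.db')
conn.execute('CREATE TABLE records (f1 TEXT, f2 TEXT, f3 TEXT, sha1 TEXT)')
with open('extract.txt', 'rb') as f:
    for fields in csv.reader(f, delimiter='\t'):
        digest = hashlib.sha1('\t'.join(fields)).hexdigest()
        conn.execute('INSERT INTO records VALUES (?, ?, ?, ?)', fields[:3] + [digest])
conn.commit()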
| 2 | 2 | 0 |
My python project involves an externally provided database: A text file of approximately 100K lines.
This file will be updated daily.
Should I load it into an SQL database, and deal with the diff daily? Or is there an effective way to "query" this text file?
ADDITIONAL INFO:
Each "entry", or line, contains three fields - any one of which can be used as an index.
The update is in the form of the entire database; I would have to manually generate a diff
The queries are just looking up records and displaying the text.
Querying the database will be a fundamental task of the application.
|
Large text database: Convert to SQL or use as is
| 0 | 1 | 0 | 336 |
14,795,810 |
2013-02-10T07:53:00.000
| 1 | 0 | 0 | 0 |
python,sql,database,text
| 14,795,870 | 2 | false | 0 | 0 |
How often will the data be queried? On the one extreme, if once per day, you might use a sequential search more efficiently than maintaining a database or index.
For more queries and a daily update, you could build and maintain your own index for more efficient queries. Most likely, it would be worth a negligible (if any) sacrifice in speed to use an SQL database (or other database, depending on your needs) in return for simpler and more maintainable code.
| 2 | 2 | 0 |
My python project involves an externally provided database: A text file of approximately 100K lines.
This file will be updated daily.
Should I load it into an SQL database, and deal with the diff daily? Or is there an effective way to "query" this text file?
ADDITIONAL INFO:
Each "entry", or line, contains three fields - any one of which can be used as an index.
The update is is the form of the entire database - I would have to manually generate a diff
The queries are just looking up records and displaying the text.
Querying the database will be a fundamental task of the application.
|
Large text database: Convert to SQL or use as is
| 0.099668 | 1 | 0 | 336 |
14,797,375 |
2013-02-10T11:43:00.000
| 4 | 0 | 1 | 0 |
python,exception,pep8
| 14,797,389 | 7 | false | 0 | 0 |
You will also catch e.g. Control-C with that, so don't do it unless you re-raise ("throw") it again. However, in that case you should rather use "finally".
| 4 | 103 | 0 |
When using PyCharm IDE the use of except: without an exception type triggers a reminder from the IDE that this exception clause is Too broad.
Should I be ignoring this advice? Or is it Pythonic to always specify the exception type?
|
Should I always specify an exception type in `except` statements?
| 0.113791 | 0 | 0 | 86,239 |
14,797,375 |
2013-02-10T11:43:00.000
| 4 | 0 | 1 | 0 |
python,exception,pep8
| 42,745,424 | 7 | false | 0 | 0 |
Here are the places where I use except without a type:
quick and dirty prototyping
That's the main use in my code for unchecked exceptions.
top-level main() function, where I log every uncaught exception
I always add this, so that production code does not spill stack traces.
between application layers
I have two ways to do it:
First way: when a higher-level layer calls a lower-level function, it wraps the calls in typed excepts to handle the "top" lower-level exceptions. But I add a generic except statement to detect unhandled lower-level exceptions in the lower-level functions.
I prefer it this way; I find it easier to detect which exceptions should have been caught appropriately: I "see" the problem better when a lower-level exception is logged by a higher level.
Second way: each top-level function of the lower-level layers has its code wrapped in a generic except, so it catches all unhandled exceptions on that specific layer.
Some coworkers prefer this way, as it keeps lower-level exceptions in lower-level functions, where they "belong".
| 4 | 103 | 0 |
When using PyCharm IDE the use of except: without an exception type triggers a reminder from the IDE that this exception clause is Too broad.
Should I be ignoring this advice? Or is it Pythonic to always specify the exception type?
|
Should I always specify an exception type in `except` statements?
| 0.113791 | 0 | 0 | 86,239 |
14,797,375 |
2013-02-10T11:43:00.000
| 9 | 0 | 1 | 0 |
python,exception,pep8
| 14,797,648 | 7 | false | 0 | 0 |
This isn't specific to Python.
The whole point of exceptions is to deal with the problem as close to where it was caused as possible.
So you keep the code that could, in exceptional circumstances, trigger the problem and the resolution "next" to each other.
The thing is you can't know all the exceptions that could be thrown by a piece of code. All you can know is that if it's, say, a file-not-found exception, then you could trap it and prompt the user to supply a file that does exist, or cancel the function.
If you put a catch-all try/catch around that, then no matter what problem there was in your file routine (read-only, permissions, UAC, not really a PDF, etc.), every one will drop into your file-not-found catch, and your user is screaming "but it is there, this code is crap".
Now there are a couple of situations where you might catch everything, but they should be chosen consciously.
One is catch, undo some local action (such as creating or locking a resource, opening a file on disk for writing for instance), then throw the exception again, to be dealt with at a higher level.
The other is when you don't care why it went wrong. Printing, for instance: you might have a catch-all around that, to say "There is some problem with your printer, please sort it out", and not kill the application because of it. In a similar vein, if your code executed a series of separate tasks using some sort of schedule, you wouldn't want the entire thing to die because one of the tasks failed.
Note: if you do the above, I can't recommend some sort of exception logging (e.g. try, catch, log, end) highly enough.
| 4 | 103 | 0 |
When using PyCharm IDE the use of except: without an exception type triggers a reminder from the IDE that this exception clause is Too broad.
Should I be ignoring this advice? Or is it Pythonic to always specify the exception type?
|
Should I always specify an exception type in `except` statements?
| 1 | 0 | 0 | 86,239 |
14,797,375 |
2013-02-10T11:43:00.000
| 4 | 0 | 1 | 0 |
python,exception,pep8
| 14,797,393 | 7 | false | 0 | 0 |
Always specify the exception type; there are many types you don't want to catch, like SyntaxError, KeyboardInterrupt, MemoryError, etc.
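A quick contrast (plain Python 2):
try:
    value = int(raw_input('number: '))
except ValueError:                         # catch only what you can actually handle
    value = 0
# A bare "except:" here would also swallow KeyboardInterrupt and SystemExit.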
| 4 | 103 | 0 |
When using PyCharm IDE the use of except: without an exception type triggers a reminder from the IDE that this exception clause is Too broad.
Should I be ignoring this advice? Or is it Pythonic to always specify the exception type?
|
Should I always specify an exception type in `except` statements?
| 0.113791 | 0 | 0 | 86,239 |
14,797,686 |
2013-02-10T12:24:00.000
| 0 | 0 | 1 | 0 |
python
| 14,797,711 | 4 | false | 0 | 0 |
Lambdas are used when they're needed, when they make the code more concise but not less clear, and so on. They're not really in competition with list comprehensions (or any other comprehensions). If you find something less readable in Python, chances are you should change it to something you find more readable.
| 1 | 1 | 0 |
I'm just starting out as a Python programmer. While doing the Python challenge I learn a lot about the language by looking at other peoples' solutions after I've solved them myself.
I see lambda functions all over the place. They seem easy, but also a bit less readable (at least for me now).
Is there any value in using lambda functions over something else? I'd like to know if it's something worth learning this early in my learning curve!
|
Is there any reason to use a lambda function over e.g a list comprehension?
| 0 | 0 | 0 | 154 |
14,799,847 |
2013-02-10T16:27:00.000
| 9 | 1 | 0 | 0 |
php,python,passwords,password-protection,password-encryption
| 14,799,899 | 2 | true | 0 | 0 |
Forget this idea. Hashing the password on the client, sending the hash to the server and then compare it to the stored hash is equivalent to storing plain passwords in the database, because the hash becomes the password.
Or should I just invest in https?
Yes!
| 1 | 0 | 0 |
I am currently building a web/desktop application. The user can create an account online and login either online or via the desktop client.
The client will be built in Python and exported to exe.
I want to encrypt the password before it is sent online as the site has no https connection.
What is the best way to do this so the hashed password will be the same in python and php? Or is their a better way or should I just invest in https?
I have tried using simple hashing but php md5("Hello") will return something different to python's hashlib.md5("Hello").hexdigest()
|
Password protection for Python and PHP
| 1.2 | 0 | 0 | 240 |
14,800,708 |
2013-02-10T17:51:00.000
| 0 | 0 | 1 | 0 |
python,python-2.7,pyscripter
| 14,800,809 | 2 | false | 0 | 0 |
Python is just kind of a simple language; it does not need variable declarations, for example.
But it's better that it automatically asks for your input rather than you having to write the code to initialise the variable.
| 1 | 0 | 0 |
I am using the PyScripter integrated development environment and taking courses using Python 2.7.
Why does number = input("some input text") immediately display the Python input dialog when the program is run? Wouldn't we have to execute it? Because really, it's just setting a variable to a Python input. It never says to execute it. Is number not just any variable?
There's a mini-forum which the site that I go to has, but have not received an answer in 5 days, so I came here.
|
PyScripter - Why does this work?
| 0 | 0 | 0 | 324 |
14,801,979 |
2013-02-10T19:58:00.000
| 3 | 0 | 0 | 1 |
python,virtualenv,zeromq
| 14,801,987 | 1 | false | 0 | 0 |
Once you make your virtualenv and activate it, use pip to install Python packages. They will install into your virtualenv.
Alternatively, when you create your virtualenv, enable system-wide packages (with the --system-site-packages switch) within it so that system-installed packages will be visible in the virtualenv.
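For example (the package on PyPI is pyzmq; you still need the libzmq system library installed for it to build):

$ virtualenv myenv
$ source myenv/bin/activate
(myenv)$ pip install pyzmq

# or reuse the apt-installed python-zmq bindings:
$ virtualenv --system-site-packages myenv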
| 1 | 3 | 0 |
I was able to install 0MQ in Ubuntu 12.04 by doing the following:
$ sudo apt-get install libzmq-dev
$ sudo apt-get install python-zmq
but when I went to use it in a virtualenv it could not find the module. What do I have to do in my virtualenv to see it?
|
0MQ in virtualenv
| 0.53705 | 0 | 0 | 247 |
14,802,057 |
2013-02-10T20:06:00.000
| 10 | 0 | 1 | 0 |
python,multithreading,python-2.7
| 14,802,065 | 2 | true | 0 | 0 |
You can call the same function from both threads. The issue to be aware of is modifying shared data from two threads at once. If the function attempts to modify the same data from both threads, you will end up with an unpredictable program.
So the answer to your question is, "it depends what the function does."
It certainly won't help to copy the function into both thread classes. What matters is what the function does, not how many copies of the code there are.
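A minimal sketch of the safe pattern — one module-level function called from two threads, with a lock guarding the shared data (the counter is made up for illustration):

import threading

counter = 0
lock = threading.Lock()

def bump(times):
    global counter
    for _ in range(times):
        with lock:  # serialize modification of the shared data
            counter += 1

t1 = threading.Thread(target=bump, args=(100000,))
t2 = threading.Thread(target=bump, args=(100000,))
t1.start(); t2.start()
t1.join(); t2.join()
print(counter)  # reliably 200000 because of the lock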
| 1 | 3 | 0 |
Basic threading question here.
I'm modifying a program with 2 thread classes and I'd like to use a function defined in one class in both classes now.
As a thread newbie (I've only been playing with them for a few months), is it OK to move the function out of the thread class into the main program and just call it from both classes, or do I need to duplicate the function in the other class that doesn't have it?
regards
Simon
|
Two threads using same function
| 1.2 | 0 | 0 | 3,910 |
14,802,197 |
2013-02-10T20:21:00.000
| 7 | 1 | 0 | 0 |
python,pyramid
| 14,802,581 | 1 | true | 1 | 0 |
No, that is not possible to determine with server-side code only. Browsers do not share that information when making HTTP requests to the server.
You'll have to do this with JavaScript.
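If you do gather the dimensions client-side and POST them back, a hedged sketch of the Pyramid side (route name and parameter names are assumptions, and a session factory must be configured for request.session to work):

from pyramid.view import view_config

@view_config(route_name="viewport", renderer="json")
def viewport(request):
    # The client would send these, e.g. from window.innerWidth / window.innerHeight.
    width = request.params.get("width")
    height = request.params.get("height")
    request.session["viewport"] = (width, height)
    return {"ok": True}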
| 1 | 1 | 0 |
Is it possible to get the user's browser width and height in Pyramid? I've searched through the response object and Googled.
If it's not available in Pyramid, I'll just grab it in JavaScript.
|
Can I get the browser width and height in Pyramid?
| 1.2 | 0 | 0 | 118 |
14,804,291 |
2013-02-11T00:27:00.000
| 0 | 0 | 1 | 0 |
c#,python,.net,sorting,hash
| 14,804,387 | 2 | false | 0 | 0 |
Hash codes aren't meant to be unique for unequal objects - there typically will be some collisions. They definitely can't be used to test for equality.
Hash codes are used to place objects (hopefully) evenly across a data structure. If you want to test for equality, test whether the coordinates are equal.
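A small Python sketch of that distinction (coordinates made up): define equality on the coordinates and keep __hash__ consistent with __eq__ — hash collisions between unequal points are allowed, so a hash alone can never prove equality:

class Point(object):
    def __init__(self, x, y, z):
        self.coords = (x, y, z)

    def __eq__(self, other):
        # Equality is the real test; matching hashes are only a hint.
        return isinstance(other, Point) and self.coords == other.coords

    def __hash__(self):
        # Must agree with __eq__: equal points hash equal (the converse may not hold).
        return hash(self.coords)

a = Point(1.0, 2.0, 3.0)
b = Point(1.0, 2.0, 3.0)
print(a == b)              # True: same coordinates
print(hash(a) == hash(b))  # True here, but a hash match alone is not proof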
| 1 | 0 | 0 |
I have some points in space where each point has an id. I also have a subset of these points in another group that have different id values.
How can I create a new type of id for both groups of points so that the points that have the same coordinates end up using the same id values?
I assume I need to generate hash codes from their coordinates, which should give me the same id value for points that have the same coordinates, right?
I am confused about how I could use this, because the set of hash codes is much smaller than the set of possible float[3] values. So I'm not sure I'm on the right track.
|
How to match arbitrary data using hash codes?
| 0 | 0 | 0 | 124 |