Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | DISCREPANCY | Tags | ERRORS | A_Id | API_CHANGE | AnswerCount | REVIEW | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | DOCUMENTATION | Question | Title | CONCEPTUAL | Score | API_USAGE | Database and SQL | Networking and APIs | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
29,337,928 | 2015-03-30T03:25:00.000 | 4 | 0 | 1 | 1 | 0 | python,windows,anaconda | 0 | 56,794,491 | 0 | 14 | 0 | false | 0 | 0 | Method1:
To uninstall Anaconda3, go to the Anaconda3 folder; there you will find an executable called Uninstall-Anaconda3.exe. Double-click on it, and it should uninstall your application.
Sometimes shortcuts for the Anaconda command prompt, Jupyter Notebook, Spyder, etc. still exist, so delete those files too.
Method2 (Windows8):
Go to Control Panel -> Programs -> Uninstall a Program and then select Anaconda3 (Python 3.1, 64-bit) in the menu. | 13 | 102 | 0 | 0 | I installed Anaconda a while ago but recently decided to uninstall it and just install basic Python 2.7.
I removed Anaconda and deleted all the directories and installed python 2.7.
But when I go to install PyGTK for Windows it says it will install it to the c:/users/.../Anaconda directory - this doesn't even exist. I want to install it to the c:/python-2.7 directory. Why does it think Anaconda is still installed? And how can I change this? | How to remove anaconda from windows completely? | 0 | 0.057081 | 1 | 0 | 0 | 368,077 |
29,341,688 | 2015-03-30T08:38:00.000 | 0 | 1 | 0 | 0 | 0 | python,authentication,pyramid | 0 | 32,814,226 | 0 | 1 | 0 | true | 1 | 0 | There are two broad ways to integrate custom auth with Pyramid:
- write your own authentication policy for Pyramid (I haven't done this)
- write your own middleware to deal with your auth issues, and use the RemoteUserAuthenticationPolicy in Pyramid (I have done this)
For the second, you write some standard wsgi middleware, sort out your custom authentication business in there, and then write to the wsgi env. Pyramid authorization will then work fine, with the Pyramid auth system getting the user value from the wsgi env's 'REMOTE_USER' setting.
I personally like this approach because it's easy to wrap disparate apps in your middleware, and dead simple to turn it off or swap it out. While not really the answer to exactly what you asked, that might be a better approach than what you're trying. | 1 | 1 | 0 | 0 | Context
My app relies on an external service for authentication; its Python API has a function authenticate_request which takes
a request instance as a param and returns a result dict:
if auth was successful, dict contains 3 keys:
successful: true
username: alice
cookies: [list of set-cookie headers required to remember user]
if unsuccessful:
successful: false
redirect: url where to redirect user for web based auth
Now, a call to this function is relatively expensive (it does an HTTP POST underneath).
Question
I'm new to the Pyramid security model, and I'm struggling with how to use an existing AuthenticationPolicy (or properly write my own) for my app so that it uses my auth service and does not call its API more than once per session (in the auth-success scenario). | How to use external Auth system in Pyramid | 0 | 1.2 | 1 | 0 | 0 | 201 |
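A minimal sketch of the middleware approach described in the second answer above. Everything here is an assumption for illustration: `authenticate_request` is the asker's external API, and the middleware class name is invented; caching the result in a session cookie (so the POST happens only once per session) is deliberately left out.

```python
class ExternalAuthMiddleware(object):
    """WSGI middleware that consults the external auth service and exposes
    the authenticated user to Pyramid via environ['REMOTE_USER']."""

    def __init__(self, app, authenticate_request):
        self.app = app
        self.authenticate_request = authenticate_request  # asker's external API (assumed)

    def __call__(self, environ, start_response):
        result = self.authenticate_request(environ)  # expensive HTTP POST underneath
        if result.get('successful'):
            # Pyramid's RemoteUserAuthenticationPolicy picks the user up from this key.
            environ['REMOTE_USER'] = result['username']
        return self.app(environ, start_response)
```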
29,358,494 | 2015-03-31T00:14:00.000 | 0 | 0 | 0 | 0 | 0 | java,python,hadoop,mapreduce,apache-spark | 0 | 29,516,550 | 0 | 1 | 1 | false | 0 | 0 | You can use pairRDD.countByKey() function for counting the rows according their keys. | 1 | 0 | 1 | 0 | We need to get the count of each key (the keys are not known before executing), and do some computation dynamically in each Mapper. The key count could be global or only in each Mapper. What is the best way to implement that? In Hadoop this is similar to an aggregator function.
The accumulator in Spark needs to be defined before the Mapper jobs run, but we do not know what the keys are or how many there are. | Get the count of each key in each Mapper or globally in Spark MapReduce model | 0 | 0 | 1 | 0 | 0 | 110 |
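A small illustration of the countByKey() suggestion, assuming a local SparkContext and made-up data:

```python
from pyspark import SparkContext

sc = SparkContext("local", "count-by-key-demo")

# A pair RDD of (key, value) records; the keys need not be known in advance.
pairs = sc.parallelize([("a", 1), ("b", 2), ("a", 3), ("c", 4), ("a", 5)])

# countByKey() returns a dict-like mapping of key -> number of rows with that key.
counts = pairs.countByKey()
print(dict(counts))  # {'a': 3, 'b': 1, 'c': 1}
```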
29,384,696 | 2015-04-01T06:58:00.000 | 26 | 0 | 1 | 0 | 0 | python,python-2.7,python-3.x | 0 | 29,384,723 | 0 | 2 | 0 | false | 0 | 0 | Use the date.weekday() method. Digits 0-6 represent the consecutive days of the week, starting from Monday. | 1 | 56 | 0 | 0 | Please suggest me on the following.
How to find whether a particular day is weekday or weekend in Python? | how to find current day is weekday or weekends in Python? | 0 | 1 | 1 | 0 | 0 | 91,511 |
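A small illustration of the weekday() suggestion; the cut-off of 5 for the weekend follows from Monday being 0:

```python
import datetime

def is_weekend(day):
    # weekday() returns 0 for Monday through 6 for Sunday,
    # so 5 and 6 are Saturday and Sunday.
    return day.weekday() >= 5

print(is_weekend(datetime.date(2015, 4, 4)))  # a Saturday -> True
print(is_weekend(datetime.date.today()))
```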
29,395,946 | 2015-04-01T16:23:00.000 | 1 | 0 | 0 | 0 | 1 | python,amazon-web-services,amazon-ec2,cron,boto | 0 | 29,407,280 | 0 | 2 | 0 | false | 1 | 0 | The entire issue appeared to be HTTP_PROXY environment variable. The variable was set in /etc/bashrc and all users got it this way but when cron jobs ran (as root) /etc/bashrc wasn't read and the variable wasn't set. By adding the variable to the configuration file of crond (via crontab -e) the issue was solved | 1 | 1 | 0 | 0 | I have a very basic python script which uses boto to query the state of my EC2 instances. When I run it from console, it works fine and I'm happy. The problem is when I want to add some automation and run the script via crond. I notices that the script hangs and waits indefinitely for the connection. I saw that boto has this problem and that some people suggested to add some timeout value to boto config file. I couldn't understand how and where, I added manually /etc/boto.cfg file with the suggested timeout value (5) but it didn't help. With strace you can see that this configuration file is never being accessed. Any suggestions how to resolve this issue? | Connection with boto to AWS hangs when running with crond | 1 | 0.099668 | 1 | 0 | 1 | 774 |
29,397,839 | 2015-04-01T18:11:00.000 | 1 | 0 | 0 | 1 | 0 | python | 0 | 29,398,092 | 0 | 2 | 0 | false | 0 | 0 | It is basically useless if you don't have executable permission in the remote machine. You need to contact your administrator to obtain an executable permission.
In the case for the SCP files to the remote server, you may still be able to cp you files but you may not be able to execute it. | 1 | 0 | 0 | 0 | I am SSHed into a remote machine and I do not have rights to download python packages but I want to use 3rd party applications for my project. I found cx_freeze but I'm not sure if that is what I need.
What I want to achieve is to be able to run different parts of my project (will mains everywhere) with command line arguments on the remote machine. My project will be filled with a few 3rd party python packages. Not sure how to get around this as I cannot pip install and am not a sudoer. I can SCP files to the remote machine | Using 3rd party packages on remote machine without download/install rights | 0 | 0.099668 | 1 | 0 | 0 | 33 |
29,443,218 | 2015-04-04T05:46:00.000 | 1 | 0 | 1 | 0 | 1 | algorithm,python-2.7 | 1 | 29,443,892 | 0 | 1 | 0 | false | 0 | 0 | There are 6402373705728000 permutations of 18 elements so it takes years to iterate over them. It should be better to think of an analytic solution for this problem. | 1 | 0 | 0 | 0 | I got a problem in calculating permutations. The program needs to generate
permutations(xrange(num), num), and for each permutation I have to count the number of consecutive primes, i.e. the sum of every two adjacent digits in the number should be a prime.
max value 'num' would be 18
primes = permutations(xrange(1, num+1), num)
for val in primes:
    for x in range(0, len(val) - 1):  # len(val) - 1, not len(val-1): val is a tuple
        if prime(val[x] + val[x+1]):
            num_primes += 1
If 'num' ranges from 10 to 18, it gives a response message of
'killed' after a long wait. Please help me to solve this. | how to avoid memory error in generating and processing python permutations? | 0 | 0.197375 | 1 | 0 | 0 | 198 |
29,445,943 | 2015-04-04T11:38:00.000 | 1 | 0 | 0 | 0 | 0 | python,graph,plotly | 0 | 29,460,131 | 0 | 1 | 0 | true | 0 | 0 | Full disclosure, I work for Plotly.
Here's my shot at summarizing your problem: in general, you've got 4 dimensions for each country (year, exports, GDP, standard of living).
You might be able to use either or both of these solutions:
visualize this in two dimensions using x-value, y-value, marker-size, and marker-line-size (a bubble chart in 2d)
visualize this in three dimensions using x-value, y-value, z-value, and marker-size
I'll leave a link to a notebook in the comments, but since it's not a very permanent link, I won't include it in the answer here. | 1 | 0 | 1 | 0 | I have 3 sets of comparison data (y axes) which need to be plotted against a target source's values. I'm comparing the exports, GDP, and standard-of-living values of different countries against a target country's values for different years. But the values of each category are on haphazard scales, i.e. exports in millions of dollars, GDP as a percentage, and standard of living on a scale of 1 to 10. Moreover, I have a year value for comparison as well.
What I want to see is how the different parameters for each country vary over the years against the target country's parameters, all of it plotted in one graph in plotly.
I can plot multiple y axes in plotly, but the scales don't match.
Does anyone have suggestions on how to fit all the comparisons in one layout? Maybe this is more of a graphing question than a plotly question. Any ideas how to squeeze it all into one graph? | graph of multiple y axes in plotly | 1 | 1.2 | 1 | 0 | 0 | 925 |
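A rough sketch of the 2-D bubble-chart idea from the answer. The country data are invented, and the rendering call assumes a plotly version with offline plotting; swap in whatever plot function your setup uses:

```python
import plotly.graph_objs as go
from plotly.offline import plot

# Hypothetical data: x = year, y = exports, marker size = GDP (scaled),
# marker line width = standard of living (the 4th dimension).
years = [2010, 2011, 2012, 2013]
exports = [120, 150, 170, 160]   # millions of dollars
gdp = [20, 25, 30, 28]           # scaled to a usable marker size
living = [3, 4, 5, 5]            # 1-10 scale

trace = go.Scatter(
    x=years, y=exports, mode='markers', name='Country A',
    marker=dict(size=gdp, line=dict(width=living)))

plot(go.Figure(data=[trace]), filename='bubble.html')
```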
29,452,879 | 2015-04-05T00:28:00.000 | 0 | 0 | 1 | 1 | 1 | python,regex | 0 | 29,452,916 | 0 | 3 | 0 | false | 0 | 0 | Set a flag false.
Iterate over each line.
For each line,
1) When you match your pattern, set a flag.
2) If the flag is currently set set, print the line. | 1 | 2 | 0 | 0 | I have an issue that I can't seem to find a solution within python.
From command line I can do this by:
sed '1,/COMMANDS/d' /var/tmp/newFile
This deletes everything from line #1 up to the regex "COMMANDS". Simple.
But I can't find a way to do the same with Python.
The re.sub and multiline doesn't seem to work.
So my question is: how can I do this in a pythonic way? I'd really rather not run sed from within Python unless I have to. | Python delete lines of text line #1 till regex | 1 | 0 | 1 | 0 | 0 | 1,429 |
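A small sketch of the flag-based approach from the answer, mimicking sed '1,/RE/d' (drop everything from line 1 through the first line matching the pattern, then print the rest); the file path and pattern are the ones from the question:

```python
import re

def print_after_pattern(path, pattern):
    """Drop line 1 up to and including the first line matching `pattern`,
    then print the remaining lines - like sed '1,/RE/d'."""
    found = False
    with open(path) as fh:
        for line in fh:
            if found:
                print(line.rstrip('\n'))
            elif re.search(pattern, line):
                found = True  # start printing from the next line on

print_after_pattern('/var/tmp/newFile', r'COMMANDS')
```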
29,461,480 | 2015-04-05T19:44:00.000 | 0 | 1 | 1 | 0 | 1 | python,performance,networking,cpu,mininet | 0 | 29,502,719 | 0 | 1 | 0 | false | 0 | 0 | I finally found the real problem. It was not because of the prints (removing them improved performance a bit, but not significantly) but because of a thread that was using a shared lock. This lock was shared over multiple CPU cores causing the whole thing being very slow.
It even got slower the more cores I added to the executing VM which was very strange...
Now the new bottleneck seems to be the APScheduler... I always get messages like "event missed" because there is too much load on the scheduler. So that's the next thing to speed up... :) | 1 | 0 | 0 | 0 | I am doing my bachelor's thesis where I wrote a program that is distributed over many servers and exchaning messages via IPv6 multicast and unicast. The network usage is relatively high but I think it is not too high when I have 15 servers in my test where there are 2 requests every second that are going like that:
Server 1 requests information from server 3-15 via multicast. every of 3-15 must respond. if one response is missing after 0.5 sec, the multicast is resent, but only the missing servers must respond (so in most cases this is only one server)
Server 2 does exactly the same. If there are missing results after 5 retries the missing servers are marked as dead and the change is synced with the other server (1/2)
So there are 2 multicasts every second and 26 unicasts every second. I think this should not be too much?
Server 1 and 2 are running python web servers which I use to do the request every second on each server (via a web client)
The whole szenario is running in a mininet environment which is running in a virtual box ubuntu that has 2 cores (max 2.8ghz) and 1GB RAM. While running the test, i see via htop that the CPUs are at 100% while the RAM is at 50%. So the CPU is the bottleneck here.
I noticed that after 2-5 minutes (1 minute = 60 * (2+26) messages = 1680 messages) there are too many missing results causing too many sending repetitions while new requests are already coming in, so that the "management server" thinks the client servers (3-15) are down and deregisters them. After syncing this with the other management server, all client servers are marked as dead on both management servers which is not true...
I am wondering if the problem could be my debug outputs? I am printing 3-5 messages for every message that is sent and received. So that are about (let's guess it are 5 messages per sent/recvd msg) (26 + 2)*5 = 140 lines that are printed on the console.
I use python 2.6 for the servers.
So the question here is: Can the console output slow down the whole system that simple requests take more than 0.5 seconds to complete 5 times in a row? The request processing is simple in my test. No complex calculations or something like that. basically it is something like "return request_param in ["bla", "blaaaa", ...] (small list of 5 items)"
If yes, how can I disable the output completely without having to comment out every print statement? Or is there even the possibility to output only lines that contain "Error" or "Warning"? (not via grep, because when grep becomes active all the prints already have finished... I mean directly in python)
What else could cause my application to be that slow? I know this is a very generic question, but maybe someone already has some experience with mininet and network applications... | Console output consuming much CPU? (about 140 lines per second) | 0 | 0 | 1 | 0 | 1 | 102 |
29,484,408 | 2015-04-07T05:29:00.000 | 0 | 0 | 0 | 0 | 1 | python,numpy,pygame | 1 | 29,488,142 | 0 | 1 | 0 | false | 0 | 1 | Instead of setting each individual pixel, use pygame's line drawing function to draw a line from the current coordinate to the next instead of using sub-pixel coordinates (pygame.draw.line or even pygame.draw.lines).
This way, the "gaps" between two points are filled; no need for sub-pixel coordinates.
You just have to draw the lines in the right order; just ensure the coordinates are sorted.
Other than that, you could also simply convert your sub-pixel coordinates by casting the x/y values to integers.
Something a friend and I spoke of was how I could try to make a function graphing program for fun in python, and I thought about how I could use it, but I found a few issues.
The first one was the use of range, due to it using integers and not floats, but arange from numpy fixed that problem, but that brings me to the second issue.
The idea for the graph I thought about so that it would be simple, not making massive thick lines or odd shaped one, is that it uses display.set_at to make a single pixel a color. And for simple graphs, this works perfectly. But when I went into more complicated graphs, I ran into two main errors:
The first error is that the graph shows pixels without any line between them; the idea of the line was the illusion of having all of the pixels near each other. But I found that with a range step of one, it leaves this gap. In theory, using arange with a step of .01, the gaps would vanish altogether, but this brings me to the second problem.
The display.set_at does not work with sub-pixel coordinates. Would anyone be able to suggest a way to make this work? It would be most appreciated. | Pygame, sub-pixel coordinates. | 0 | 0 | 1 | 0 | 0 | 312 |
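A minimal sketch of the pygame.draw.lines() suggestion: compute the points of a hypothetical function, cast to integer pixel coordinates, and let the line segments fill the gaps instead of plotting isolated pixels with set_at():

```python
import math
import pygame

pygame.init()
screen = pygame.display.set_mode((400, 300))

# Hypothetical function values: one point every 4 pixels along x.
points = [(x, 150 + int(100 * math.sin(x / 40.0))) for x in range(0, 400, 4)]

screen.fill((0, 0, 0))
# closed=False, 1px wide; adjacent points are connected, so no gaps appear.
pygame.draw.lines(screen, (0, 255, 0), False, points, 1)
pygame.display.flip()
```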
29,508,958 | 2015-04-08T07:53:00.000 | 2 | 0 | 0 | 0 | 0 | django,python-2.7,django-templates,bokeh | 0 | 32,680,856 | 0 | 5 | 0 | false | 1 | 0 | It must put {{the_script|safe}} inside the head tag | 1 | 31 | 0 | 0 | I want to display graphs offered by the bokeh library in my web application via django framework but I don't want to use the bokeh-server executable because it's not the good way. so is that possible? if yes how to do that? | how to embed standalone bokeh graphs into django templates | 1 | 0.07983 | 1 | 0 | 0 | 15,838 |
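A small sketch of embedding a standalone Bokeh plot in a Django view via bokeh.embed.components; the view name, template name, and plot data are invented. The template then puts {{ the_script|safe }} inside the head tag, as the answer says, and {{ the_div|safe }} where the plot should appear:

```python
from bokeh.plotting import figure
from bokeh.embed import components
from django.shortcuts import render

def chart(request):
    plot = figure(title="demo")
    plot.line([1, 2, 3, 4], [3, 1, 4, 2])
    # components() returns a <script> block and a <div> placeholder to embed.
    the_script, the_div = components(plot)
    return render(request, "chart.html",
                  {"the_script": the_script, "the_div": the_div})
```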
29,515,509 | 2015-04-08T13:06:00.000 | 0 | 1 | 0 | 0 | 1 | python-2.7,exception,testing,exception-handling,python-behave | 0 | 29,516,346 | 0 | 1 | 0 | false | 0 | 0 | Regadless to framework/programming language exception is a state when something went wrong. This issue has to be handled somehow by the application, that's why a good programmer will write exception handling code in places where it needed at most.
Exception handling can take many forms. In your case you want to test that the exception is logged. Therefore I see an easy test sequence here:
Execute the code/sequence of actions which will raise the exception
Verify that log file has an entry related to the exception raised in previous step with help of your test automation framework. | 1 | 4 | 0 | 0 | When an exception is raised in the application that is not accounted for (an uncaught/unhandled exception), it should be logged. I would like to test this behaviour in behave.
The logging is there to detect unhandled exceptions so developers can implement handling for these exceptions or fix them if needed.
In order to test this, I think I have to let the code under test raise an exception. The problem is that I cannot figure out how to do that without hard-coding the exception-raising in the production code. This is something I like to avoid as I do not think this test-code belongs in production.
While unit-testing I can easily mock a function to raise the exception. In behave I cannot do this as the application is started in another process.
How can I cause an exception to be raised in behave testing, so it looks as if the production code has caused it, without hard-coding the exception in the production code? | How to test uncaught/unhandled exceptions in behave? | 0 | 0 | 1 | 0 | 0 | 557 |
29,524,885 | 2015-04-08T20:37:00.000 | 3 | 0 | 0 | 0 | 0 | python,screenshot,python-imaging-library | 0 | 29,624,597 | 0 | 1 | 0 | false | 0 | 1 | The cursor isn't on the same layer as the desktop or game your playing, so a screenshot won't capture it (try printscreen and paste into mspaint). A workaround is to get the position of the cursor and draw it on the image. you could use win32gui.GetCursorPos(point) for windows. | 1 | 6 | 0 | 0 | I'm making a program that streams my screen to another computer(like TeamViewer), I'm using sockets, PIL ImageGrab, Tkinter.
Everything is fine but the screenshot I get from ImageGrab.grab() is without the mouse cursor, which is very important for my program purpose.
Do you know how can I take screenshot with the mouse cursor? | Include mouse cursor in screenshot | 0 | 0.53705 | 1 | 0 | 0 | 2,661 |
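A rough Windows-only sketch of the workaround from the answer: grab the screen, then paint a marker at the cursor position. The marker here is just a filled circle, not the real cursor bitmap:

```python
from PIL import ImageGrab, ImageDraw
import win32gui

img = ImageGrab.grab()                 # screenshot without the cursor
x, y = win32gui.GetCursorPos()         # current cursor position

draw = ImageDraw.Draw(img)
# Draw a simple dot where the cursor is; replace with a cursor image if needed.
draw.ellipse((x - 5, y - 5, x + 5, y + 5), fill=(255, 255, 255), outline=(0, 0, 0))
img.save("screenshot_with_cursor.png")
```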
29,526,895 | 2015-04-08T23:09:00.000 | 0 | 0 | 1 | 0 | 0 | python,pygame | 0 | 29,527,035 | 0 | 2 | 0 | false | 0 | 1 | The command turtle.down() will work, I guess | 1 | 1 | 0 | 0 | I am making a game for a presentation and I cannot seem to understand how to make a delay in Python.
For example, whenever I press the D key, my character not only moves but also changes pictures so it looks like it's running.
I have the movement part down, I just need to slow down the changing of the sprite so that it doesn't look like he's running a million miles per hour. I have set the FPS. | How Do you make a delay in python without stoping the whole program | 0 | 0 | 1 | 0 | 0 | 165 |
29,541,619 | 2015-04-09T14:40:00.000 | 1 | 0 | 0 | 0 | 0 | javascript,python,bash,youtube,command-line-interface | 0 | 29,541,704 | 0 | 1 | 0 | false | 0 | 0 | Python or Node (JS) will probably be a lot easier for this task than Bash, primarily because you're going to have to do OAuth to get access to the social network.
Or, if you're willing to get a bit "hacky", you could issue scripts to PhantomJS, and automate the interaction with the sites in question... | 1 | 0 | 0 | 0 | I would like to write a script to access data on a website, such as:
1) automatically searching a youtuber's profile for a new posting, and printing the title of it to stdout.
2) automatically posting a new video, question, or comment to a website at a specified time. For a lot of sites, there is a required login, so that is something that would need to be automated as well.
I would like to able to do all this stuff from the command line.
What set of tools should I use for this? I was intending to use Bash, mostly because I am in the process of learning it, but if there are other options, like Python or Javascript, please let me know.
In a more general sense, it would be nice to know how to read and directly interact with a website's JS; I've tried looking at the browser console, but I can't make much sense of it. | How to interact with social websites (auto youtube posting, finding titles of new videos etc.) from the command line | 1 | 0.197375 | 1 | 0 | 1 | 52 |
29,548,735 | 2015-04-09T20:51:00.000 | 1 | 1 | 0 | 1 | 0 | python,psutil,iowait | 0 | 29,548,863 | 0 | 1 | 0 | true | 0 | 0 | %wa is giving your the iowait of the CPU, and if you are using times = psutil.cpu_times() or times = psutil.cpu_times_percent() then it is under the times.iowait variable of the returned value (Assuming you are on a Linux system) | 1 | 1 | 0 | 0 | I am writing a python script to get some basic system stats. I am using psutil for most of it and it is working fine except for one thing that I need.
I'd like to log the average cpu wait time at the moment.
From top's output, it would be in the CPU section under %wa.
I can't seem to find how to get that in psutil, does anyone know how to get it? I am about to go down a road I really don't want to go on....
That entire CPU row is rather nice, since it totals to 100 and it is easy to log and plot.
Thanks in advance. | Get IO Wait time as % in python | 0 | 1.2 | 1 | 0 | 0 | 1,680 |
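A small illustration of reading the iowait field with psutil (Linux only; the field is absent on other platforms):

```python
import psutil

# cpu_times_percent() samples over `interval` seconds and returns percentages
# that, like top's CPU row, add up to roughly 100.
times = psutil.cpu_times_percent(interval=1)
print("iowait %%: %.2f" % times.iowait)
```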
29,551,003 | 2015-04-09T23:49:00.000 | 0 | 0 | 1 | 0 | 0 | python,bash,qt,pyqt,tornado | 0 | 29,551,539 | 0 | 2 | 1 | false | 0 | 1 | You won't need a bash script. Probably simplest to write a PyQt application and have the application launch the web server. The server may be in a separate thread or process depending on your requirements, but I'd start by having a single thread as a first draft and splitting it out later.
Having the PyQt app as your main thread makes sense as your GUI is going to be responsible for user inputs (start/stop server, etc) and program outputs (server status, etc) and therefore it makes sense to make this the controlling thread with references to other objects or threads. | 1 | 0 | 0 | 0 | I am trying to program an application that runs a HTTP server as well as a GUI using Tornado and PyQt4 respectively. I am confused about how to use these two event loops in parallel. Can this be done with the multiprocessing module? Should the HTTP server be run in a QtThread? Or is a bash script the best way to go to run both of these processes at the same time? | How can I combine PyQt4 and Tornado's event loops into one application? | 0 | 0 | 1 | 0 | 0 | 757 |
29,560,307 | 2015-04-10T11:28:00.000 | 0 | 0 | 0 | 0 | 0 | python,pyqt,qt-designer,qtabwidget | 0 | 44,781,276 | 0 | 2 | 0 | false | 0 | 1 | I see that this thread is kinda old. But I hope this will still help.
You can use the remove() method to "hide" the tab. There's no way to really hide them in pyqt4. when you remove it, it's gone from the ui. But in the back end, the tab object with all your settings still exist. I'm sure you can find a way to improvise it back. Give it a try! | 1 | 1 | 0 | 0 | I am trying to build a GUI which will:
Load a file with parameters which describe certain type of problem.
Based on the parameters of the file, show only a certain tab in the QTabWidget (out of many predefined in the Qt Designer .ui)
I plan to make a QTabWidget with, say, 10 tabs, but only one should be visible based on the parameters loaded. Enabling only a certain tab is not an option since it takes too much space and the disabled tabs are grey. I do not want to see disabled tabs.
Removing tab could be an option but the index is not related to a specific tab so I have to take care of the shift in the indices. And furthermore if user loads another file with different parameters, a good tab should be added and the current one removed.
My questions are:
How to do this effectively?
Is it better to use any other type of widget?
In Qt designer, is it possible to define many widgets one over another and then just push the good one in front. If yes, how? And how to edit and change any of them?
If using RemoveTab, how to use pointers on tabs, rather than indices?
I use PyQt4 | PyQT Qtabwidget add, remove, hide, show certain tab | 1 | 0 | 1 | 0 | 0 | 7,611 |
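A minimal PyQt4 sketch of the removeTab()/insertTab() approach: keep your own references to the page widgets so a removed tab can be put back later. The widget and label names are invented:

```python
import sys
from PyQt4 import QtGui

app = QtGui.QApplication(sys.argv)
tabs = QtGui.QTabWidget()

# Keep references to the pages, since removeTab() only detaches them from the UI.
pages = {"basic": QtGui.QWidget(), "advanced": QtGui.QWidget()}
for name, page in pages.items():
    tabs.addTab(page, name)

# "Hide" the advanced tab: removeTab() takes an index, so look it up first.
tabs.removeTab(tabs.indexOf(pages["advanced"]))

# Later, when another parameter file is loaded, put it back at position 1.
tabs.insertTab(1, pages["advanced"], "advanced")

tabs.show()
sys.exit(app.exec_())
```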
29,584,270 | 2015-04-11T23:33:00.000 | 1 | 0 | 0 | 1 | 0 | python,macos,exe | 0 | 29,584,289 | 0 | 2 | 0 | false | 0 | 0 | You can run python scripts through OS X Terminal. You just have to write a python script with an editor, open your Terminal and enter python path_to_my_script/my_script.py | 1 | 8 | 0 | 0 | My google searching has failed me. I'd like to know how to make an executable Python file on OS X, that is, how to go about making an .exe file that can launch a Python script with a double click (not from the shell). For that matter I'd assume the solution for this would be similar between different scripting languages, is this the case? | How do you make a Python executable file on Mac? | 0 | 0.099668 | 1 | 0 | 0 | 17,211 |
29,584,840 | 2015-04-12T01:10:00.000 | 0 | 0 | 1 | 1 | 1 | macos,python-3.x | 0 | 37,517,408 | 1 | 3 | 0 | false | 0 | 0 | I have found that making the 'python' alias replace the default version of python that the system comes with is a bad idea.
When you install a new version of python (3.4 for instance),
these two new commands are installed, specifically for the version you installed:
pip3.4
python3.4
If you're using an IDE that wants you to indicate which python version you are using the IDE will let you navigate to it in the Library folder
pip will still be for python2.7 after you download some other python version, as I think that's the current version osx comes installed with | 2 | 0 | 0 | 0 | I'm on OSX, and I installed IDLE for Python 3.4. However, in Terminal my python -V and pip --version are both Python 2.7.
How do I fix this? I really have no idea how any of this works, so please bear with my lack of knowledge. | I don't know how to update my Python version to 3.4? | 0 | 0 | 1 | 0 | 0 | 113 |
29,584,840 | 2015-04-12T01:10:00.000 | 0 | 0 | 1 | 1 | 1 | macos,python-3.x | 0 | 29,584,868 | 1 | 3 | 0 | false | 0 | 0 | Try python3 or python3.4. It should print out the right version if correctly installed.
Python 3.4 already has pip with it. You can use python3 -m pip to access pip. Or python3 -m ensurepip to make sure that it's correctly installed. | 2 | 0 | 0 | 0 | I'm on OSX, and I installed IDLE for Python 3.4. However, in Terminal my python -V and pip --version are both Python 2.7.
How do I fix this? I really have no idea how any of this works, so please bear with my lack of knowledge. | I don't know how to update my Python version to 3.4? | 0 | 0 | 1 | 0 | 0 | 113 |
29,595,382 | 2015-04-12T22:27:00.000 | 1 | 0 | 0 | 0 | 0 | python,django,django-models,django-orm | 0 | 29,596,796 | 0 | 2 | 1 | true | 1 | 0 | Also all 3rd-party packages rely on get_user_model(), so looks like if I don't use custom user model, all your relations should go to User, right? But I still can't add methods to User, so if User has friends relation, and I want to add recent_friends method, I should add this method to UserProfile.
I have gone down the "one-to-one" route in the past and I ended up not liking the design of my app at all, it seems to me that it forces you away from SOLID. So if I was you I would rather subclass AbstractBaseUser or AbstractUser.
With AbstractBaseUser you are provided just the core implementation of User and then you can extend the model according to your requirements.
Depending on what sort of 3rd-party packages you are using you might need more than just the core implementation: if that's the case just extend AbstractUser which lets you extend the complete implementation of User. | 2 | 0 | 0 | 0 | I know how to make custom user models, my question is about style and best practices.
What are the consequences of custom user model in Django? Is it really better to use auxiliary one-to-one model?
And for example if I have a UserProfile models which is one-to-one to User, should I create friends relationship (which would be only specific to my app) between UserProfile or between User?
Also all 3rd-party packages rely on get_user_model(), so looks like if I don't use custom user model, all your relations should go to User, right? But I still can't add methods to User, so if User has friends relation, and I want to add recent_friends method, I should add this method to UserProfile. This looks a bit inconsistent for me.
I'd be glad if someone experienced in Django could give a clear insight. | Custom user model in Django? | 0 | 1.2 | 1 | 0 | 0 | 138 |
29,595,382 | 2015-04-12T22:27:00.000 | 1 | 0 | 0 | 0 | 0 | python,django,django-models,django-orm | 0 | 29,595,803 | 0 | 2 | 1 | false | 1 | 0 | I would definitely recommend using a custom user model - even if you use a one-to-one with a profile. It is incredibly hard to migrate to a custom user model if you've committed to the default user model, and there's almost always a point where you want to add at least some custom logic to the user model.
Whether you use a profile or further extend the user model should then be based on all considerations that usually apply to your database structure. The right™ decision depends on the exact details of your profile, which only you know. | 2 | 0 | 0 | 0 | I know how to make custom user models, my question is about style and best practices.
What are the consequences of custom user model in Django? Is it really better to use auxiliary one-to-one model?
And for example if I have a UserProfile models which is one-to-one to User, should I create friends relationship (which would be only specific to my app) between UserProfile or between User?
Also all 3rd-party packages rely on get_user_model(), so looks like if I don't use custom user model, all your relations should go to User, right? But I still can't add methods to User, so if User has friends relation, and I want to add recent_friends method, I should add this method to UserProfile. This looks a bit inconsistent for me.
I'd be glad if someone experienced in Django could give a clear insight. | Custom user model in Django? | 0 | 0.099668 | 1 | 0 | 0 | 138 |
29,596,204 | 2015-04-13T00:21:00.000 | 2 | 0 | 1 | 0 | 1 | python,datetime,epoch,mktime | 0 | 29,596,282 | 0 | 1 | 0 | true | 0 | 0 | time.mktime converts a time tuple in local time to seconds since the Epoch. Since time.gmtime(0) returns GMT time tuple, and the conversion assumes it was in your local time, you see this discrepancy. | 1 | 2 | 0 | 0 | I am expecting the following code will return 0 but I get -3600, could someone explains why? and how to fix it? thanks
import datetime
import time
ts = time.mktime(time.gmtime(0))
print time.mktime(datetime.datetime.fromtimestamp(ts).timetuple()) | how to make time.mktime work consistently with datetime.fromtimestamp? | 0 | 1.2 | 1 | 0 | 0 | 709 |
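Following on from the explanation above, a common fix (not spelled out in the answer itself) is to use calendar.timegm() as the inverse of UTC tuples, keeping time.mktime() for local-time tuples:

```python
import calendar
import time

# mktime() assumes a local-time tuple, while gmtime() produces a UTC tuple,
# hence the -3600 offset the question observed. timegm() is the UTC inverse.
print(time.mktime(time.localtime(0)))   # 0.0
print(calendar.timegm(time.gmtime(0)))  # 0
```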
29,607,222 | 2015-04-13T14:01:00.000 | 0 | 0 | 0 | 0 | 0 | python,postgresql,pandas | 0 | 68,004,057 | 0 | 2 | 0 | false | 0 | 0 | For sql alchemy case of read table as df, change df, then update table values based on df, I found the df.to_sql to work with name=<table_name> index=False if_exists='replace'
This should replace the old values in the table with the ones you changed in the df | 1 | 25 | 1 | 1 | I have a PostgreSQL db. Pandas has a 'to_sql' function to write the records of a dataframe into a database. But I haven't found any documentation on how to update an existing database row using pandas when im finished with the dataframe.
Currently I am able to read a database table into a dataframe using pandas read_sql_table. I then work with the data as necessary. However I haven't been able to figure out how to write that dataframe back into the database to update the original rows.
I dont want to have to overwrite the whole table. I just need to update the rows that were originally selected. | Update existing row in database from pandas df | 0 | 0 | 1 | 1 | 0 | 11,070 |
29,615,953 | 2015-04-13T22:11:00.000 | 0 | 0 | 1 | 0 | 1 | python,regex | 0 | 29,615,982 | 0 | 2 | 0 | true | 0 | 0 | Use the regexp ^[^[]* to match everything up to the first [. | 1 | 0 | 0 | 0 | I can't figure out how to get "any text" from the string any text [monkey bars][fancy swing][1002](special)
after a lot of trying I've made (.*)[\(*]|[\[*] but it doesn't seem to work very well
I'm using the python regex engine | can't find substring in regex | 0 | 1.2 | 1 | 0 | 0 | 48 |
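The suggested pattern in action, on the string from the question:

```python
import re

s = "any text [monkey bars][fancy swing][1002](special)"
# ^[^[]*  matches everything from the start up to (but not including) the first '['
m = re.match(r'^[^[]*', s)
print(m.group(0).strip())  # -> "any text"
```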
29,621,193 | 2015-04-14T07:08:00.000 | 1 | 0 | 1 | 1 | 0 | python,shell | 0 | 29,621,377 | 0 | 2 | 0 | true | 0 | 0 | What if you make a 'master' shell script that would execute all the others in sequence? This way you'll only have to create a single sub-process yourself, and the individual scripts will share the same environment.
If, however, you would like to interleave script executions with Python code, then you would probably have to make each of the scripts echo its environment to stdout before exiting, parse that, and then pass it into the next script (subprocess.Popen() accepts the env parameter, which is a map.) | 2 | 0 | 0 | 0 | I need to execute several shell scripts with python, some scripts would export environment parameters, so I need to execute them in the same process, otherwise, other scripts can't see the new environment parameters
in one word, I want to let the shell script change the environment of the python process
so I should not use subprocess, any idea how to realize it? | how to execute shell script in the same process in python | 0 | 1.2 | 1 | 0 | 0 | 1,004 |
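A sketch of the "echo the environment and parse it" idea from the first answer above; the script name is a placeholder:

```python
import os
import subprocess

# Run the script in a shell, then dump the resulting environment on stdout.
out = subprocess.check_output(
    ['bash', '-c', 'source ./setup_env.sh >/dev/null && env']).decode()

# Parse "NAME=value" lines and pull them into this Python process.
env = dict(line.split('=', 1) for line in out.splitlines() if '=' in line)
os.environ.update(env)
```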
29,621,193 | 2015-04-14T07:08:00.000 | 1 | 0 | 1 | 1 | 0 | python,shell | 0 | 29,621,636 | 0 | 2 | 0 | false | 0 | 0 | No, you cannot run more than one program (bash, python) in the same process at the same time.
But you can run them in sequence using exec in bash or one of the exec commands in python, like os.execve. Several things survive the "exec boundary", one of which is the environment block. So in each bash script you exec the next, and finally exec your python.
You might also consider using an IPC mechanism like a named pipe to pass data between processes.
I respectfully suggest that you look at your design again. Why are you mixing bash and python? Is it just to reuse code? Even if you managed this you will end with a real mess. It is generally easier to stick with one language. | 2 | 0 | 0 | 0 | I need to execute several shell scripts with python, some scripts would export environment parameters, so I need to execute them in the same process, otherwise, other scripts can't see the new environment parameters
in one word, I want to let the shell script change the environment of the python process
so I should not use subprocess, any idea how to realize it? | how to execute shell script in the same process in python | 0 | 0.099668 | 1 | 0 | 0 | 1,004 |
29,647,683 | 2015-04-15T10:21:00.000 | 0 | 0 | 0 | 0 | 0 | python,neural-network,genetic-algorithm | 0 | 30,931,808 | 0 | 1 | 0 | false | 0 | 0 | Since I can't comment yet I will just answer taking some assumptions.
(I'm also starting to experiment with NN and GA in Python too (MultiNEAT), so not an expert either ;) )
The problem could be poor feedback to the genetic algorithm, so it can't select the best individuals. Try making your fitness score more fine-grained; for example, make every little movement towards an enemy soldier increase the fitness a little, instead of just scoring based on final points.
What are your network outputs? Maybe the network's results are not being passed onto the game properly. I would make, for example, the output neurons targeting movement for soldiers and maybe some for reactions in case of proximity (I don't really know the game's mechanics). If you have a limited amount of possible answers from the network, it could be possible that those are just not enough to win the game, so no network can really score better.
Are you waiting long enough? The bigger the problem, the more time it will take to find an answer, especially on slow computers. Try using the GPU to implement the genetic algorithm to speed up the evaluations of networks, if speed proves to be an issue.
Good luck. | 1 | 0 | 0 | 0 | I am attempting to create a program that can find the best solution for winning a game using NN and I was hoping to get some help from the wonderful community here.
The game is a strategy war game, you have your soldiers and a land you need to conquer. There is also the opponent's soldiers you need to be aware from them,
for every second you have a land in your possession you get a certain amount of points.
you have a lot of inputs that the engine writer created that you have access for example: where all the lands located on the map, where the other's opponent soldiers right now, if the land is already conquered or in the middle of conquering.
I've already integrated the ANN in the game engine and set the fitness to be the points he collected but the fitness stays on nothing+1,
i assume that the problem is that to capture a land you need to stay next to it for few seconds and i can't get him to learn how to do it the fitness stays on 1
I've tried a big population and a lot of generations but its not conquering and i dont know what to do.
sorry for my bad english. | Defining inputs and fitness in ANN with GA Python | 0 | 0 | 1 | 0 | 0 | 167 |
29,662,290 | 2015-04-15T22:49:00.000 | 1 | 0 | 1 | 0 | 0 | python,virtualenv,pylint | 0 | 29,663,226 | 0 | 2 | 0 | false | 0 | 0 | Usually python modules for different major versions don't interfer with each other. The only problem is utilities. So the recipe is as follows:
Create a virtual environment for a python2, then go to the bin/ folder of the created environment and rename all created scripts/wrappers/binaries so that all of them would have suffix 2
Repeat the creation of the virtual env. in the same directory but for python3. Again, go to the bin/ subfolder of the created virtual environment and rename all newly created scripts to have suffix 3.
Make sure that all hashbangs in the scripts call an appropriate python version.
Now you should source <virtenv>/bin/activate as the docs suggest
And now you may install pylint in the virtual env, you need to repeat the procedure for both python2 and python3, again separating the binaries in <virtualenv>/bin/. Use pip2 and pip3 or python2 -m pip.../python3 -m pip... for that.
I haven't installed pylint, but have an environment prepared for both python2 and python3 with a bunch of python utilities like bpython (called as bpython2 and bpython3 respectively, pygmentize etc). I don't think pylint is something different in this aspect. | 1 | 2 | 0 | 0 | I have a codebase that includes both python2 and python3 code. I want to make one script that will run pylint on all python2 and all python3 files, ideally from within a single virtualenv.
I can figure out which version of pylint to run by annotating the directories (eg, adding a .pylint3 file to directories that need the python3 pylint to run or something like that). However, I don't know how to install two separate versions of pylint side by side, either in the OS as a whole or in a virtualenv, without doing some manual annoying stuff.
Is there a good way to get two versions of pylint running side by side in the same virtualenv?
Thanks! | How to install pylint for python2 and python3 side by side | 0 | 0.099668 | 1 | 0 | 0 | 3,773 |
29,704,766 | 2015-04-17T16:30:00.000 | 0 | 1 | 0 | 1 | 0 | python,windows,file,wmi,remote-access | 1 | 29,705,179 | 0 | 2 | 0 | false | 0 | 0 | I've done some work with WMI before (though not from Python) and I would not try to use it for a project like this. As you said WMI tends to be obscure and my experience says such things are hard to support long-term.
I would either work at the Windows API level, or possibly design a service that performs the desired actions and access this service as needed.
You still have to have the necessary permissions, network ports, etc. regardless of the approach. E.g., WMI is usually blocked by firewalls and you still run as some NT process.
Sorry, not really an answer as such -- meant as a long comment.
ADDED
Re: API programming, though you have no Windows API experience, I expect you find it familiar for tasks such as you describe, i.e., reading and writing files, scanning directories are nothing unique to Windows. You only need to learn about the parts of the API that interest you.
Once you create the appropriate security contexts and start your client process, there is nothing service-oriented in the, i.e., your can simply open and close files, etc., ignoring that fact that the files are remote, other than server name being included in the UNC name of the file/folder location. | 1 | 1 | 0 | 0 | I'm about to start working on a project where a Python script is able to remote into a Windows Server and read a bunch of text files in a certain directory. I was planning on using a module called WMI as that is the only way I have been able to successfully remotely access a windows server using Python, But upon further research I'm not sure i am going to be using this module.
The only problem is that, these text files are constantly updating about every 2 seconds and I'm afraid that the script will crash if it comes into an MutEx error where it tries to open the file while it is being rewritten. The only thing I can think of is creating a new directory, copying all the files (via script) into this directory in the state that they are in and reading them from there; and just constantly overwriting these ones with the new ones once it finishes checking all of the old ones. Unfortunately I don't know how to execute this correctly, or efficiently.
How can I go about doing this? Which python module would be best for this execution? | python copying directory and reading text files Remotely | 0 | 0 | 1 | 0 | 0 | 693 |
29,713,533 | 2015-04-18T03:10:00.000 | 0 | 0 | 0 | 0 | 0 | python,html | 0 | 29,713,745 | 0 | 2 | 0 | false | 1 | 0 | No, your python process does not care about the JPG at all. It just generates html asking the browser to fetch the JPG. Then it's the browser, fetching the JPG by making another request to the webserver.
Therefore it is very likely that your python script needs to live in a different directory than the JPG. Have a look on your web server log. You should see two requests. One for the HTML and one for the JPG. | 1 | 0 | 0 | 0 | The generated HTML page works fine for text. But
does not find the file and displays the alt text instead. I know the HTML works because "view source" in the browser can be copied into a file, which then works locally when the .jpg is in the same directory.
On the remote site, the .jpg file is in the same directory as the Python program that generated the HTML, and this is the directory where the Python process is running. Clearly this process is looking for the file (it shows the alt); how do I find where it is looking, so I can put the file there? I would rather have a local reference than an absolute one elsewhere on the Web, to improve performance. | My Python program generates an HTML page; how do I display a .jpg that's in the same directory? | 0 | 0 | 1 | 0 | 0 | 61 |
29,716,972 | 2015-04-18T11:54:00.000 | 14 | 0 | 0 | 0 | 0 | python,sql-server,django,database,postgresql | 0 | 55,397,481 | 0 | 2 | 1 | false | 1 | 0 | You can use Materialized view with postgres. It's very simple.
You have to create a view with a query like CREATE MATERIALIZED VIEW my_view AS SELECT * FROM my_table;
Create a model with two options, managed = False and db_table = 'my_view', in the model Meta, like this:
class MyModel(models.Model):
    class Meta:
        managed = False
        db_table = 'my_view'
Simply use powers of ORM and treat MyModel as a regular model. e.g. MyModel.objects.count() | 1 | 13 | 0 | 0 | I need to use some aggregate data in my django application that changes frequently and if I do the calculations on the fly some performance issues may happen. Because of that I need to save the aggregate results in a table and, when data changes, update them. Because I use django some options may be exist and some maybe not. For example I can use django signals and a table that, when post_save signal is emitted, updates the results. Another option is materialized views in postgresql or indexed views in MSSQL Server, that I do not know how to use in django or if django supports them or not. What is the best way to do this in django for improving performance and accuracy of results. | using materialized views or alternatives in django | 1 | 1 | 1 | 1 | 0 | 6,549 |
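One extra piece, not covered in the answer above: when the underlying data changes, a PostgreSQL materialized view has to be refreshed explicitly, e.g. from the post_save handler the question mentions. The view name matches the answer's example; the rest is a sketch:

```python
from django.db import connection

def refresh_my_view():
    # PostgreSQL does not refresh materialized views automatically,
    # so call this after the underlying data changes (e.g. from a post_save signal).
    cursor = connection.cursor()
    cursor.execute("REFRESH MATERIALIZED VIEW my_view")
```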
29,721,897 | 2015-04-18T19:35:00.000 | 2 | 0 | 0 | 0 | 0 | python,django | 0 | 29,722,220 | 0 | 2 | 0 | false | 1 | 0 | One of the possible solutions would be to use separate daemonized lightweight python script to perform all the in-game business logic and left django be just the frontend to your game. To bind them together you might pick any of high-performance asynchronous messaging library like ZeroMQ (for instance to pass player's actions to that script). This stack would also have a benefit of a frontend being separated and completely agnostic of a backend implementation. | 1 | 3 | 0 | 0 | I'm working on creating a browser-based game in Django and Python, and I'm trying to come up with a solution to one of the problems I'm having.
Essentially, every second, multiple user variables need to be updated. For example, there's a currency variable that should increase by some amount every second, progressively getting larger as you level-up and all of that jazz.
I feel like it's a bad idea to do this with cronjobs (and from my research, other people think that too), so right now I'm thinking I should just create a thread that loops through all of the users in the database and performs these updates.
Am I on the right track here, or is there a better solution? In Django, how can I start a thread the second the server starts?
I appreciate the insight. | Django/Python - Updating the database every second | 0 | 0.197375 | 1 | 0 | 0 | 782 |
29,723,704 | 2015-04-18T22:37:00.000 | 5 | 0 | 0 | 0 | 0 | python,tornado,gevent,rethinkdb | 0 | 29,727,792 | 0 | 1 | 0 | false | 0 | 0 | If you're using a non-blocking library, one connection should be sufficient in RethinkDB 2.0 (prior to 2.0 there was less per-connection parallelism). Per-connection overhead is pretty low, though. Some people open a connection per query and even that isn't too slow, so you should just do whatever's easiest.
EDIT: This advice is now outdated. For newer versions of RethinkDB using one connection per query is strongly discouraged. One connection per thread is still fine. | 1 | 3 | 0 | 0 | I'm starting out with rethinkdb in python, and taking a look at the different approaches:
Blocking approach with threads
Non-blocking, callback-based approach with Tornado
Greenlet-based approach with gevent
In the first case, the natural thing to do is to give each thread a connection object. In the second and third cases, however, I don't quite get it.
With tornado and gevent, how and when should I create connections? How many should I have around? | RethinkDB: how many connections? | 0 | 0.761594 | 1 | 0 | 0 | 373 |
29,726,821 | 2015-04-19T07:02:00.000 | 0 | 0 | 0 | 0 | 0 | php,python-2.7,magento-1.9 | 0 | 29,727,058 | 0 | 1 | 0 | false | 1 | 0 | Ok. Great. I found the issue. I had not noticed another javascript line at the end of my html file's tag. so I am home and dry. Cheers. | 1 | 0 | 0 | 0 | I am trying to use the Magento Custom Menu extension in a django project.
I have modified the menucontent.phtml, but my menu items are still not reflecting the appropriate captions.
Does anyone know how does the extension work to generate the menu? | Magento 1.9 Custom Menu Extension Use in django | 0 | 0 | 1 | 0 | 0 | 65 |
29,754,112 | 2015-04-20T17:08:00.000 | 1 | 0 | 0 | 0 | 0 | python,web-scraping,scrapy | 0 | 29,783,461 | 0 | 4 | 0 | false | 1 | 0 | set DOWNLOAD_DELAY = some_number where some_number is the delay (in seconds) you want for every request and RANDOMIZE_DOWNLOAD_DELAY = False so it can be static. | 2 | 1 | 0 | 0 | My use case is this: I have 10 spiders and the AUTO_THROTTLE_ENABLED setting is set to True, globally. The problem is that for one of the spiders the runtime WITHOUT auto-throttling is 4 days, but the runtime WITH auto-throttling is 40 days...
I would like to find a balance and make the spider run in 15 days (3x the original amount). I've been reading through the scrapy documentation this morning but the whole thing has confused me quite a bit. Can anyone tell me how to keep auto-throttle enabled globally, and just turn down the amount to which it throttles? | How to Set Scrapy Auto_Throttle Settings | 0 | 0.049958 | 1 | 0 | 0 | 5,769 |
29,754,112 | 2015-04-20T17:08:00.000 | 1 | 0 | 0 | 0 | 0 | python,web-scraping,scrapy | 0 | 33,189,665 | 0 | 4 | 0 | false | 1 | 0 | Auto_throttle is specifically designed so that you don't manually adjust DOWNLOAD_DELAY. Setting the DOWNLOAD_DELAY to some number will set an lower bound, meaning your AUTO_THROTTLE will not go faster than the number set in DOWNLOAD_DELAY. Since this is not what you want, your best bet would be to set AUTO_THROTTLE to all spiders except for the one you want to go faster, and manually set DOWNLOAD_DELAY for just that one spider without AUTO_THROTTLE to achieve whatever efficiency you desire. | 2 | 1 | 0 | 0 | My use case is this: I have 10 spiders and the AUTO_THROTTLE_ENABLED setting is set to True, globally. The problem is that for one of the spiders the runtime WITHOUT auto-throttling is 4 days, but the runtime WITH auto-throttling is 40 days...
I would like to find a balance and make the spider run in 15 days (3x the original amount). I've been reading through the scrapy documentation this morning but the whole thing has confused me quite a bit. Can anyone tell me how to keep auto-throttle enabled globally, and just turn down the amount to which it throttles? | How to Set Scrapy Auto_Throttle Settings | 0 | 0.049958 | 1 | 0 | 0 | 5,769 |
29,758,554 | 2015-04-20T21:15:00.000 | 1 | 0 | 0 | 0 | 0 | python,web-scraping,scrapy | 0 | 30,086,257 | 0 | 4 | 0 | true | 1 | 0 | It appears that the primary problem was not having cookies enabled. Having enabled cookies, I'm having more success now. Thanks. | 2 | 1 | 0 | 0 | I'm running Scrapy 0.24.4, and have encountered quite a few sites that shut down the crawl very quickly, typically within 5 requests. The sites return 403 or 503 for every request, and Scrapy gives up. I'm running through a pool of 100 proxies, with the RotateUserAgentMiddleware enabled.
Does anybody know how a site could identify Scrapy that quickly, even with the proxies and user agents changing? Scrapy doesn't add anything to the request headers that gives it away, does it? | Scrapy crawl blocked with 403/503 | 0 | 1.2 | 1 | 0 | 1 | 3,256 |
29,758,554 | 2015-04-20T21:15:00.000 | 0 | 0 | 0 | 0 | 0 | python,web-scraping,scrapy | 0 | 72,128,238 | 0 | 4 | 0 | false | 1 | 0 | I simply set AutoThrottle_ENABLED to True and my script was able to run. | 2 | 1 | 0 | 0 | I'm running Scrapy 0.24.4, and have encountered quite a few sites that shut down the crawl very quickly, typically within 5 requests. The sites return 403 or 503 for every request, and Scrapy gives up. I'm running through a pool of 100 proxies, with the RotateUserAgentMiddleware enabled.
Does anybody know how a site could identify Scrapy that quickly, even with the proxies and user agents changing? Scrapy doesn't add anything to the request headers that gives it away, does it? | Scrapy crawl blocked with 403/503 | 0 | 0 | 1 | 0 | 1 | 3,256 |
29,761,728 | 2015-04-21T02:17:00.000 | 3 | 0 | 1 | 1 | 0 | python,spyder,osx-yosemite | 0 | 29,841,327 | 1 | 1 | 1 | false | 0 | 0 | (Spyder dev here) There is no simple way to do what you ask for, at least for the Python version that comes with Spyder.
I imagine you downloaded and installed our DMG package. That package comes with its own Python version as part of the application (along with several important scientific packages), so it can't be removed because that would imply to remove Spyder itself :-)
I don't know how you installed IDL(E?), so I can't advise you on how to remove it. | 1 | 4 | 0 | 0 | I have installed the Python IDE Spyder. For me it's a great development environment.
Some how in this process I have managed to install three versions of Python on my system.These can be located as following:
Version 2.7.6 from the OS X Terminal;
Version 2.7.8 from the Spyder Console; and
Version 2.7.9rc1 from an IDL window.
The problem I have is (I think) that the multiple versions are preventing Spyder from working correctly.
So how do I confirm that 2.7.6 is the latest version supported by Apple and is there a simple way ('silver bullet') to remove other versions from my system.
I hope this is the correct forum for this question. If not I would appreciate suggestions where I could go for help.
I want to keep my life simple and to develop python software in the Spyder IDE. I am not an OS X guru and I really don't want to get into a heavy duty command line action. To that end I just want to delete/uninstall the 'unofficial versions' of Python. Surely there must be an easy way to do this - perhaps 'pip uninstall Python-2.7.9rc1' or some such. The problem is that I am hesitant to try this due to the fear that it will crash my system.
Help on this would be greatly appreciated. | How to uninstall and/or manage multiple versions of python in OS X 10.10.3 | 1 | 0.53705 | 1 | 0 | 0 | 9,789 |
29,765,250 | 2015-04-21T07:08:00.000 | 0 | 0 | 1 | 0 | 1 | python,eclipse,numpy,pydev | 0 | 59,991,219 | 0 | 5 | 0 | false | 0 | 0 | Pandas can be installed after install python in to your pc.
To install pandas, go to the command prompt and type "pip install pandas"; this command collects the packages related to pandas. Afterwards, if it asks to upgrade pip, or if pip is not recognized by the command prompt, use this command:
python -m pip install --upgrade pip. | 1 | 1 | 0 | 0 | I am trying to install a package called 'numpy'.
i have python setup in eclipse luna with the help of pydev.
how do i install numpy in pydev.
tried putting numpy in site-packages folder. doesnt seem to work | Install Numpy in pydev(eclipse) | 0 | 0 | 1 | 0 | 0 | 16,056 |
29,778,405 | 2015-04-21T16:46:00.000 | 1 | 0 | 0 | 0 | 1 | python,linux,apache,file-permissions | 0 | 29,778,427 | 0 | 2 | 0 | false | 0 | 0 | Change file permissions to make it executable: sudo chmod +x file.py | 2 | 0 | 0 | 0 | How to execute a file after it gets downloaded on client side,File is a python script, User dont know how to change permission of a file, How to solve this issue?? | File permission gets changed after file gets downloaded on client machine | 0 | 0.099668 | 1 | 0 | 1 | 48 |
29,778,405 | 2015-04-21T16:46:00.000 | 0 | 0 | 0 | 0 | 1 | python,linux,apache,file-permissions | 0 | 29,778,455 | 0 | 2 | 0 | false | 0 | 0 | Maybe you can try teaching them how to use chmod +x command? Or actualy even more simple it would be to change it using GUI: right click -> properties -> permissions-> pick what is needed | 2 | 0 | 0 | 0 | How to execute a file after it gets downloaded on client side,File is a python script, User dont know how to change permission of a file, How to solve this issue?? | File permission gets changed after file gets downloaded on client machine | 0 | 0 | 1 | 0 | 1 | 48 |
29,785,534 | 2015-04-22T00:24:00.000 | 1 | 0 | 1 | 1 | 1 | python,bash,debugging,pycharm,pdb | 0 | 37,883,146 | 0 | 1 | 0 | false | 0 | 0 | You can attach the debugger to a python process launched from terminal:
Use the menu Tools --> Attach to Process, then select the Python process to debug.
If you want to debug a file installed in site-packages you may need to open the file from its original location.
You can pause the program manually from the debugger and inspect the suspended thread to find your source file. | 1 | 4 | 0 | 0 | I know how to set up run configurations to pass parameters to a specific python script. There are several entry points, and I don't want a run configuration for each one, do I? What I want to do instead is launch a python script from a command line shell script and be able to attach the PyCharm debugger to the python script that is executed and have it stop at break points. I've tried to use a pre-launch condition of a utility python script that will sleep for 10 seconds so I can attempt to "attach to process" of the python script. That didn't work. I tried to import pdb and settrace to see if that would stop it for attaching to the process, but that looks to be command line debugging specific only. Any clues would be appreciated.
Thanks! | How to attach to PyCharm debugger when executing python script from bash? | 0 | 0.197375 | 1 | 0 | 0 | 1,147 |
29,789,325 | 2015-04-22T06:23:00.000 | 0 | 0 | 0 | 0 | 0 | python,django,photologue | 0 | 31,057,437 | 0 | 2 | 0 | false | 1 | 0 | In the admin panel, you also need to:
Create a gallery.
Choose which photos are a part of which galleries. | 1 | 0 | 0 | 0 | I installed photologue correctly in my project (blog) and I can add images in the admin panel, but how do I display them on my main page? | How to add a gallery in photologue? | 0 | 0 | 1 | 0 | 0 | 244
29,797,893 | 2015-04-22T12:40:00.000 | 0 | 0 | 1 | 0 | 0 | python,opencv,pycharm | 0 | 56,717,998 | 0 | 4 | 0 | false | 0 | 0 | Do the following steps:
Download and install the OpenCV executable.
Add OpenCV to the system path (%OPENCV_DIR% = /path/of/opencv/directory).
Go to C:\opencv\build\python\2.7\x86 folder and copy cv2.pyd file.
Go to C:\Python27\DLLs directory and paste the cv2.pyd file.
Go to C:\Python27\Lib\site-packages directory and paste the cv2.pyd file.
Go to the PyCharm IDE and open Default Settings > Python Interpreter.
Select the Python which you have installed.
Install the packages numpy, matplotlib and pip in pycharm.
Restart your PyCharm. | 3 | 7 | 1 | 0 | I am working on a project that requires OpenCV and I am doing it in PyCharm on a Mac. I have managed to successfully install OpenCV using Homebrew, and I am able to import cv2 when I run Python (version 2.7.6) in Terminal and I get no errors. The issue arises when I try importing it in PyCharm. I get a red underline with:
no module named cv2
I assume that PyCharm is unable to locate my cv2.so file but I have the latest PyCharm version (4.0.6) and none of the forums I've looked at are helpful for this version. How do I get PyCharm to recognise my cv2 file? I went in Project Interpreter but there is no option for importing OpenCV from my own machine. Furthermore in Edit Configurations I defined an environment variable
PYTHONPATH
and set it to
/usr/local/lib/python2.7/site-packages:$PYTHONPATH
but this didn't help either.
Any ideas?
EDIT: I set up a virtualenv to no avail and figured out how to add a path to the current framework on the new PyCharm version and it turns out the path to cv2.so has already been given yet it is still complaining. | Cannot import cv2 in PyCharm | 0 | 0 | 1 | 0 | 0 | 8,253 |
29,797,893 | 2015-04-22T12:40:00.000 | 0 | 0 | 1 | 0 | 0 | python,opencv,pycharm | 0 | 44,804,084 | 0 | 4 | 0 | false | 0 | 0 | Have you selected the right version of python ?
Or rather: when you installed OpenCV with brew, the latter probably installed a new version of Python that you can find in the Cellar directory. You can see this immediately; from the main window of PyCharm select:
Configure -> Preferences -> Project Interpreter
Click on the Project Interpreter combobox and check whether there is an instance of Python in the Cellar directory; if yes, select it and you will see cv2 in the list below. | 1 | 7 | 1 | 0 | I am working on a project that requires OpenCV and I am doing it in PyCharm on a Mac. I have managed to successfully install OpenCV using Homebrew, and I am able to import cv2 when I run Python (version 2.7.6) in Terminal and I get no errors. The issue arises when I try importing it in PyCharm. I get a red underline with:
no module named cv2
I assume that PyCharm is unable to locate my cv2.so file but I have the latest PyCharm version (4.0.6) and none of the forums I've looked at are helpful for this version. How do I get PyCharm to recognise my cv2 file? I went in Project Interpreter but there is no option for importing OpenCV from my own machine. Furthermore in Edit Configurations I defined an environment variable
PYTHONPATH
and set it to
/usr/local/lib/python2.7/site-packages:$PYTHONPATH
but this didn't help either.
Any ideas?
EDIT: I set up a virtualenv to no avail and figured out how to add a path to the current framework on the new PyCharm version and it turns out the path to cv2.so has already been given yet it is still complaining. | Cannot import cv2 in PyCharm | 0 | 0 | 1 | 0 | 0 | 8,253 |
29,797,893 | 2015-04-22T12:40:00.000 | 0 | 0 | 1 | 0 | 0 | python,opencv,pycharm | 0 | 39,482,840 | 0 | 4 | 0 | false | 0 | 0 | I got the same situation under win7x64 with pycharm version 2016.1.1, after a quick glimpse into the stack frame, I think it is a bug!
PyCharm's IPython console patches the import machinery for loading Qt, matplotlib, ..., and in the end sys.path loses its way!
anyway, there is a workaround, copy Lib/site-packages/cv2.pyd or cv2.so to $PYTHONROOT, problem solved! | 3 | 7 | 1 | 0 | I am working on a project that requires OpenCV and I am doing it in PyCharm on a Mac. I have managed to successfully install OpenCV using Homebrew, and I am able to import cv2 when I run Python (version 2.7.6) in Terminal and I get no errors. The issue arises when I try importing it in PyCharm. I get a red underline with:
no module named cv2
I assume that PyCharm is unable to locate my cv2.so file but I have the latest PyCharm version (4.0.6) and none of the forums I've looked at are helpful for this version. How do I get PyCharm to recognise my cv2 file? I went in Project Interpreter but there is no option for importing OpenCV from my own machine. Furthermore in Edit Configurations I defined an environment variable
PYTHONPATH
and set it to
/usr/local/lib/python2.7/site-packages:$PYTHONPATH
but this didn't help either.
Any ideas?
EDIT: I set up a virtualenv to no avail and figured out how to add a path to the current framework on the new PyCharm version and it turns out the path to cv2.so has already been given yet it is still complaining. | Cannot import cv2 in PyCharm | 0 | 0 | 1 | 0 | 0 | 8,253 |
29,826,237 | 2015-04-23T14:19:00.000 | 0 | 0 | 1 | 1 | 1 | python,python-2.7,pip,anaconda,eyed3 | 0 | 29,851,171 | 0 | 1 | 0 | false | 0 | 0 | The problem is that this file is only written for Python 2 but you are using Python 3. You should use Anaconda (vs. Anaconda3), or create a Python 2 environment with conda with conda create -n py2 anaconda python=2 and activate it with activate py2. | 1 | 2 | 0 | 0 | could you help me with that. I can't manage to install this plugin. I tried:
1) install it through pip
2) through setup.py in win console
3) through anaconda3 but still no.
4) I searched about it on the web and here, but the instructions are made for older versions.
5) and also through the installation page of eyeD3.
Could you guide me on how I should do this? Maybe I'm doing something wrong. First of all: should I use Python 2.7.9 for this, or can it be Anaconda3? | I can't install eyeD3 0.7.5 into Python in windows | 0 | 0 | 1 | 0 | 0 | 641
29,826,523 | 2015-04-23T14:30:00.000 | 2 | 0 | 0 | 0 | 1 | python,floating-point,double,precision | 0 | 29,826,612 | 0 | 2 | 0 | false | 0 | 0 | You could try the c_float type from the ctypes standard library. Alternatively, if you are capable of installing additional packages you might try the numpy package. It includes the float32 type. | 1 | 6 | 1 | 0 | I need to implement a Dynamic Programming algorithm to solve the Traveling Salesman problem in time that beats Brute Force Search for computing distances between points. For this I need to index subproblems by size and the value of each subproblem will be a float (the length of the tour). However holding the array in memory will take about 6GB RAM if I use python floats (which actually have double precision) and so to try and halve that amount (I only have 4GB RAM) I will need to use single precision floats. However I do not know how I can get single precision floats in Python (I am using Python 3). Could someone please tell me where I can find them (I was not able to find much on this on the internet). Thanks.
EDIT: I notice that numpy also has a float16 type which will allow for even more memory savings. The distances between points are around 10000 and there are 25 unique points and my answer needs to be to the nearest integer. Will float16 provide enough accuracy or do I need to use float32? | Python float precision float | 1 | 0.197375 | 1 | 0 | 0 | 4,158
29,839,766 | 2015-04-24T05:56:00.000 | 3 | 0 | 0 | 0 | 0 | python,django | 1 | 42,911,596 | 0 | 2 | 0 | false | 1 | 0 | If you need it only for reporting errors, the best choice would be to inherit from django.utils.log.AdminEmailHandler and override its format_subject(self, subject) method.
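A minimal sketch of that idea (the handler name and the subject prefix here are made up; you then point the mail_admins handler in your LOGGING setting at this class):
from django.utils.log import AdminEmailHandler

class CustomSubjectEmailHandler(AdminEmailHandler):
    def format_subject(self, subject):
        # keep the default cleanup, then prepend your own marker
        subject = super(CustomSubjectEmailHandler, self).format_subject(subject)
        return '[my-project error] ' + subject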
Note that changing EMAIL_SUBJECT_PREFIX will affect not only error emails but all emails sent to admins, including system information emails and so on. | 1 | 7 | 0 | 0 | I am looking at how to change the subject of Django error reporting emails.
Is it possible to change the subject?
Can we modify the subject of Django error reporting emails? | how to change the subject for Django error reporting emails? | 1 | 0.291313 | 1 | 0 | 0 | 1,862
29,845,940 | 2015-04-24T11:08:00.000 | 0 | 0 | 0 | 1 | 0 | python,google-app-engine | 0 | 29,846,170 | 0 | 1 | 0 | false | 1 | 0 | I'm not entirely clear on what you're looking for, but you can retrieve that type of information from the WSGI environmental variables. The method of retrieving them varies with WSGI servers, and the number of variables made available to your application depends on the web server configuration.
That being said, getting the client IP address is a common task, and there is likely a method on the request object of the web framework you are using.
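For illustration only, a bare-bones WSGI sketch of reading those variables (the exact set of keys you get depends on the server):
def application(environ, start_response):
    client_ip = environ.get('REMOTE_ADDR')     # address the request came from
    server_name = environ.get('SERVER_NAME')   # host name the request was addressed to
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return ['%s -> %s' % (client_ip, server_name)]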
What framework are you using? | 1 | 0 | 0 | 0 | I'm hosting my app on Google App Engine. Is there any possibility to get the server IP of my app for the current request?
More info:
GAE has specific IP addressing. All HTTP requests go to my 3-level domain, and the IP of this domain isn't fixed; it changes rather frequently and can be different on different computers at the same moment. Can I somehow find out what IP address the client is requesting now?
Thank you! | GAE: how to get current server IP? | 0 | 0 | 1 | 0 | 0 | 157 |
29,846,606 | 2015-04-24T11:41:00.000 | -1 | 0 | 0 | 0 | 0 | python,django,internationalization,translation | 0 | 29,846,744 | 0 | 3 | 1 | false | 1 | 0 | I assume that you are using Django to create an API, and you consume the API with javascript. You can check the user-agent string from the header and make the appropriate redirect according to the request. | 1 | 3 | 0 | 0 | I have django app which is backend for javascript application intended for multiple TV devices. Each device has different frontend but I don't think that creating multiple .po files is good idea for this goal, because most of translations are repetitive for these devices.
Is it possible to add additional parameters for translations? For example, in my case some function with a "device" parameter would be very useful. If not, how do I do this the Django way? | Django way for multiple translations for one language or parametrize translations | 1 | -0.066568 | 1 | 0 | 0 | 521
29,852,509 | 2015-04-24T16:14:00.000 | 1 | 0 | 0 | 0 | 0 | python,tcp,tcp-ip | 0 | 29,898,151 | 0 | 1 | 0 | false | 0 | 0 | To open a UDP socket you'd use:
import socket
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)  # UDP needs SOCK_DGRAM
To send use:
query = craft_dns_query() # you do this part
s.sendto(query, ("8.8.8.8", 53))  # sendto takes an (address, port) tuple; 8.8.8.8:53 is a public DNS resolver
To receive the response use:
response = s.recv(1024)
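If you want to see roughly what the crafting step involves, here is a hedged sketch of a minimal A-record query built by hand (the wire format is defined in RFC 1035; the transaction id 0x1234 is arbitrary):
import struct

def craft_dns_query(hostname):
    # header: id, flags (recursion desired), QDCOUNT=1, AN/NS/ARCOUNT=0
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # question: QNAME as length-prefixed labels, then QTYPE=A (1), QCLASS=IN (1)
    qname = b"".join(struct.pack("B", len(p)) + p.encode() for p in hostname.split("."))
    return header + qname + b"\x00" + struct.pack(">HH", 1, 1)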
You'll have to refer to documentation on DNS for actually crafting the messages and handling the responses. | 1 | 2 | 0 | 0 | I've been using Scapy to craft packets and test my network, but the programmer inside me is itching to know how to do this without Scapy.
For example, how do I craft a DNS Query using sockets (I assume it's sockets that would be used).
Thanks | Crafting a DNS Query Message in Python | 0 | 0.197375 | 1 | 0 | 1 | 2,153 |
29,886,501 | 2015-04-27T03:03:00.000 | 1 | 0 | 1 | 0 | 0 | python,datetime,time | 0 | 29,886,958 | 0 | 2 | 0 | false | 0 | 0 | If running the script in a UNIX like OS, you can use the date command -
>>>import subprocess
>>>process=subprocess.Popen(['date','-d','@1430106933', '+%Y%m%d'], stdout=subprocess.PIPE)
>>>out,err = process.communicate()
>>>print out
20150426 | 1 | 2 | 0 | 0 | Without having to convert it to datetime, how can I get the date from a Unix timestamp? In other words, I would like to remove hours, minutes and seconds from the time stamp and get the numbers that represent the date only. | How to remove hours, minutes, and seconds from a Unix timestamp? | 0 | 0.099668 | 1 | 0 | 0 | 3,529
29,888,233 | 2015-04-27T05:58:00.000 | 1 | 0 | 0 | 0 | 0 | python,image,neural-network | 0 | 29,889,993 | 0 | 9 | 0 | false | 0 | 0 | Draw the network with nodes as circles connected with lines. The line widths must be proportional to the weights. For very small weights you can even omit the line entirely. | 1 | 29 | 1 | 0 | I want to draw a dynamic picture for a neural network to watch the weights change and the activation of neurons during learning. How could I simulate the process in Python?
More precisely, if the network shape is: [1000, 300, 50],
then I wish to draw a three layer NN which contains 1000, 300 and 50 neurons respectively.
Further, I hope the picture could reflect the saturation of neurons on each layer during each epoch.
I've no idea about how to do it. Can someone shed some light on me? | How to visualize a neural network | 0 | 0.022219 | 1 | 0 | 0 | 35,856 |
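A hedged matplotlib sketch of the circles-and-lines idea from the answer above (layer sizes and weights below are small placeholders, not the real [1000, 300, 50] network):
import numpy as np
import matplotlib.pyplot as plt

layer_sizes = [8, 5, 3]                       # tiny stand-in for [1000, 300, 50]
positions = [[(x, y) for y in np.linspace(0, 1, n)] for x, n in enumerate(layer_sizes)]

for left, right in zip(positions[:-1], positions[1:]):
    weights = np.random.randn(len(left), len(right))   # placeholder weights
    for i, (x0, y0) in enumerate(left):
        for j, (x1, y1) in enumerate(right):
            plt.plot([x0, x1], [y0, y1], 'k-', linewidth=abs(weights[i, j]))

for layer in positions:
    xs, ys = zip(*layer)
    plt.scatter(xs, ys, s=200, zorder=3)      # the neurons as circles

plt.axis('off')
plt.show()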
29,893,476 | 2015-04-27T10:42:00.000 | 1 | 0 | 0 | 0 | 0 | linux,postgresql,python-3.x,psycopg2,amazon-redshift | 0 | 29,915,754 | 0 | 1 | 0 | true | 0 | 0 | Re-declaring a cursor doesn't create new connection while using psycopg2. | 1 | 0 | 0 | 0 | I am using the psycopg2 library with Python3 on a linux server to create some temporary tables on Redshift and querying these tables to get results and write to files on the server.
Since my queries are long and take about 15 minutes to create all these temp tables that I ultimately pull data from, how do I ensure that my connection persists and I don't lose the temp tables that I later query? Right now I just do a cursor() before the execute(); is there a default timeout for these?
I have noticed that whenever I do a
Select a,b from #results_table
or
select * from #results_table
the query just freezes/hangs, but
select top 35 from #results_table
returns the results (select top 40 fails!). There are about a 100 rows in #results_table, and I am not able to get them all. I did a ps aux and the process just stays in the S+ state. If I manually run the query on Redshift it finishes in seconds.
Any ideas? | Does redeclaring a cursor create new connection while using psycopg2? | 0 | 1.2 | 1 | 1 | 0 | 82 |
29,896,309 | 2015-04-27T12:50:00.000 | 1 | 0 | 1 | 0 | 0 | python-3.x,python-2.7,proxy,anaconda,conda | 0 | 40,988,680 | 0 | 3 | 0 | false | 0 | 0 | There are chances that the .condarc file is hidden as was in my case. I was using Linux Mint (Sarah) and couldn't find the file though later on I found that it was hidden in the home directory and hence when I opted to show hidden files I could find it. | 2 | 10 | 0 | 0 | I am trying to set up a proxy server in Anaconda because my firewall does not allow me to run online commands such as
conda update
I see online that I should create a .condarc file that contains the proxy address. Unfortunately,
I don't know how to create that file (is it a text file?)
and where to put it?! (in which folder? in the Anaconda folder?)
Any help appreciated
Thanks! | how to create a .condarc file for Anaconda? | 1 | 0.066568 | 1 | 0 | 0 | 46,665 |
29,896,309 | 2015-04-27T12:50:00.000 | 0 | 0 | 1 | 0 | 0 | python-3.x,python-2.7,proxy,anaconda,conda | 0 | 69,901,282 | 0 | 3 | 0 | false | 0 | 0 | To create the .condarc file, open the Anaconda Prompt and type:
conda config
It will appear in your user's home directory. | 2 | 10 | 0 | 0 | I am trying to set up a proxy server in Anaconda because my firewall does not allow me to run online commands such as
conda update
I see online that I should create a .condarc file that contains the proxy address. Unfortunately,
I don't know how to create that file (is it a text file?)
and where to put it?! (in which folder? in the Anaconda folder?)
Any help appreciated
Thanks! | how to create a .condarc file for Anaconda? | 1 | 0 | 1 | 0 | 0 | 46,665 |
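For reference, the proxy entries asked about above would look something like this inside the .condarc (a plain text YAML file; the host, port and credentials are placeholders you must replace with your own):
proxy_servers:
    http: http://user:password@proxy.example.com:8080
    https: https://user:password@proxy.example.com:8080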
29,903,381 | 2015-04-27T18:31:00.000 | 1 | 0 | 0 | 0 | 0 | python,angularjs,mongodb,frameworks,mean-stack | 0 | 36,828,297 | 0 | 1 | 0 | false | 1 | 0 | To answer my own question about a year later, what I would do now is just run my python script in tiny web server that lived on the same server as my MEAN app. It wouldn't have any external ports exposed and the MEAN app would just ping it for information and get JSON back. Just in case anyone is looking at this question down the road...
I find this way easier than trying to integrate the python script into the application itself. | 1 | 1 | 0 | 0 | I currently have a small webapp on AWS based on the MEAN (Mongo, Express, Angular, Node) but have a python script I would like to execute on the front end. Is there a way to incorporate this? Basically, I have some data objects on my AngularJS frontend from a mongoDB that I would like to manipulate with python and don't know how to get them into a python scope, do something to them, and send them to a view.
Is this possible? If so, how could it be done? Or is this totally against framework conventions and should never be done?
from | Webapp architecture: Putting python in a MEAN stack app | 0 | 0.197375 | 1 | 0 | 0 | 1,124 |
29,928,477 | 2015-04-28T19:42:00.000 | 0 | 0 | 0 | 1 | 1 | python,google-app-engine | 1 | 29,928,760 | 1 | 1 | 0 | true | 1 | 0 | Fixed by shutting down all instances (on all modules/versions just to be safe). | 1 | 2 | 0 | 0 | I am currently experiencing an issue in my GAE app with sending requests to non-default modules. Every request throws an error in the logs saying:
Request attempted to contact a stopped backend.
When I try to access the module directly through the browser, I get:
The requested URL / was not found on this server.
I attempted to stop and start the "backend" modules a few times to no avail. I also tried changing the default version for the module to a previous working version, but the requests from my front-end are still hitting the "new", non-default version. When I try to access a previous version of the module through the browser, it does work however.
One final symptom: I am able to upload my non-default modules fine, but cannot upload my default front-end module. The process continually says "Checking if deployment succeeded...Will check again in 60 seconds.", even after rolling back the update.
I Googled the error from the logs and found almost literally nothing. Anyone have any idea what's going on here, or how to fix it? | GAE module: "Request attempted to contact a stopped backend." | 1 | 1.2 | 1 | 0 | 0 | 607 |
29,928,485 | 2015-04-28T19:43:00.000 | 0 | 0 | 0 | 0 | 0 | python,django,pyqt,saas,pyqtgraph | 0 | 29,947,088 | 0 | 2 | 0 | true | 1 | 1 | Here is what I have sort of put together by pulling several threads online:
Ruby On Rails seems to be more popular than python at this moment.
If you go python, Flask and Django are good templates.
bokeh seems to be a good way of plotting to a browser.
AFAIK, there is no way to take an existing PyQt or pyqtgraph application and have it run on the web.
I am not sure how Twisted (Tornado, Node.js and Friends) fits in to the web SaaS, but I see it referred to occasionally since it is asynchronous event-driven.
People often suggest using Rest, but that seems slow to me. Not sure why... | 2 | 4 | 0 | 0 | Is there a way to take existing python pyqtgraph and pyqt application and have it display on a web page to implement software as a service? I suspect that there has to be a supporting web framework like Django in between, but I am not sure how this is done.
Any hints links examples welcome. | Displaying pyqtgraph and pyqt widgets on web | 0 | 1.2 | 1 | 0 | 0 | 1,640 |
29,928,485 | 2015-04-28T19:43:00.000 | 0 | 0 | 0 | 0 | 0 | python,django,pyqt,saas,pyqtgraph | 0 | 29,987,875 | 0 | 2 | 0 | false | 1 | 1 | If all you need are static plots, then it should be straightforward to draw and export to an SVG file, then display the SVG in a webpage (or export to image, as svg rendering is not reliable in all browsers). If you need interactivity, then you're going to need a different solution and probably pyqtgraph is not the tool for this job. VisPy does have some early browser support but this has only been demonstrated with ipython notebook. | 2 | 4 | 0 | 0 | Is there a way to take existing python pyqtgraph and pyqt application and have it display on a web page to implement software as a service? I suspect that there has to be a supporting web framework like Django in between, but I am not sure how this is done.
Any hints links examples welcome. | Displaying pyqtgraph and pyqt widgets on web | 0 | 0 | 1 | 0 | 0 | 1,640 |
29,935,200 | 2015-04-29T05:38:00.000 | 2 | 0 | 0 | 0 | 0 | python,django,rest,django-rest-framework | 0 | 29,935,296 | 0 | 1 | 1 | false | 1 | 0 | Consider the upvote button to the left. When you click it, a request may be sent to stackoverflow.com/question/12345/upvote. It creates an "action resource" on the db, so later you can go to your user profile and check out the list of actions you took.
You can consider doing the same thing for your application. It may be a better user experience to have immediate action taken like SO, or a "batch" request like with gmail's check boxes. | 1 | 1 | 0 | 0 | I have a question about REST design in general and specifically what the best way to implement a solution is in Django Rest Framework. Here it the situation:
Say I have an app for keeping track of albums that the user likes. In the browser, the user sees a list of albums and each one has a check box next to it. Checking the box means you like the album. At the bottom of the page is a submit button.
I want the submit button to initiate an AJAX request that sends to my API endpoint a list of the ids (as in, the Django model ids) of the albums that are liked by the user.
My question is, is this a standard approach for doing this sort of thing (I am new to web stuff and REST in particular). In other words, is there a better way to handle the transmission of these data than to send an array of ids like this? As a corollary, if this is an alright approach, how does one implement this in Django Rest Framework in a way which is consistent with its intended methodology.
I am keeping this question a little vague (not presenting any code for the album serializer, for example) intentionally because I am looking to learn some fundamentals, not to debug a particular piece of code.
Thanks a lot in advance! | Django rest framework: correctly handle incoming array of model ids | 1 | 0.379949 | 1 | 0 | 0 | 116 |
29,939,110 | 2015-04-29T09:08:00.000 | 0 | 0 | 0 | 0 | 0 | python,angularjs,http,simplehttpserver | 0 | 29,939,768 | 0 | 3 | 0 | false | 1 | 0 | Well, I had a similar problem, but the difference is that I had Spring on the server side.
You can capture the page-not-found exception in your server-side implementation and redirect to the default page [route] in your app. In Spring we have handlers for page-not-found exceptions; I guess they are available in Python too. | 1 | 0 | 0 | 0 | I have an angularjs app that uses Angular UI Router and the URLs that are created have a # in them, e.g. http://localhost:8081/#/login. I am using Python Simple HTTP server to run the app while developing. I need to remove the # from the URL. I know how to remove it by enabling HTML5 mode in angular. But that method has its problems and I want to remove the # from the server side.
How can I do this using Python Simple HTTP Server? | Remove # from the URL in Python Simple HTTP Server | 0 | 0 | 1 | 0 | 0 | 1,156
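A hedged Python 2 sketch of that fallback idea with SimpleHTTPServer (any URL that doesn't map to a real file is answered with index.html so the Angular router can take over; the port and file name are assumptions):
import os
import SimpleHTTPServer
import SocketServer

class SPAHandler(SimpleHTTPServer.SimpleHTTPRequestHandler):
    def do_GET(self):
        # if the requested path is not a real file, fall back to the app shell
        if not os.path.exists(self.translate_path(self.path)):
            self.path = '/index.html'
        SimpleHTTPServer.SimpleHTTPRequestHandler.do_GET(self)

SocketServer.TCPServer(("", 8081), SPAHandler).serve_forever()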
29,950,300 | 2015-04-29T17:13:00.000 | 33 | 0 | 1 | 1 | 0 | python,virtualenv,virtualenvwrapper,pyenv | 0 | 46,344,026 | 0 | 2 | 0 | false | 0 | 0 | Short version:
virtualenv allows you to create local (per-directory), independent python installations by cloning from existing ones
pyenv allows you to install (build from source) different versions of Python alongside each other; you can then clone them with virtualenv or use pyenv to select which one to run at any given time
Longer version:
Virtualenv allows you to create a custom Python installation e.g. in a subdirectory of your project. This is done by cloning from an existing Python installation somewhere on your system (some files are copied, some are reused/shared to save space). Each of your projects can thus have their own python (or even several) under their respective virtualenv. It is perfectly fine for some/all virtualenvs to even have the same version of python (e.g. 3.8.5) without conflict - they live separately and don't know about each other. If you want to use any of those pythons from shell, you have to activate it (by running a script which will temporarily modify your PATH to ensure that that virtualenv's bin/ directory comes first). From that point, calling python (or pip etc.) will invoke that virtualenv's version until you deactivate it (which restores the PATH). It is also possible to call into a virtualenv Python using its absolute path - this can be useful e.g. when invoking Python from a script.
Pyenv operates on a wider scale than virtualenv. It is used to install (build from source) arbitrary versions of Python (it holds a register of available versions). By default, they're all installed alongside each other under ~/.pyenv, so they're "more global" than virtualenv. Then, it allows you to configure which version of Python to run when you use the python command (without virtualenv). This can be done at a global level or, separately, per directory (by placing a .python-version file in a directory). It's done by prepending pyenv's shim python script to your PATH (permanently, unlike in virtualenv) which then decides which "real" python to invoke. You can even configure pyenv to call into one of your virtualenv pythons (by using the pyenv-virtualenv plugin). You can also duplicate Python versions (by giving them different names) and let them diverge.
Using pyenv can be a convenient way of installing Python for subsequent virtualenv use. | 1 | 209 | 0 | 0 | I recently learned how to use virtualenv and virtualenvwrapper in my workflow but I've seen pyenv mentioned in a few guides but I can't seem to get an understanding of what pyenv is and how it is different/similar to virtualenv. Is pyenv a better/newer replacement for virtualenv or a complimentary tool? If the latter what does it do differently and how do the two (and virtualenvwrapper if applicable) work together? | What is the relationship between virtualenv and pyenv? | 1 | 1 | 1 | 0 | 0 | 58,878 |
29,961,898 | 2015-04-30T07:44:00.000 | 4 | 0 | 0 | 0 | 0 | python,flask,flask-login,anonymous-users | 0 | 30,008,742 | 0 | 2 | 0 | false | 1 | 0 | You can use an AnonymousUserMixin subclass if you like, but you need to add some logic to it so that you can associate each anonymous user with a cart stored in your database.
This is what you can do:
When a new user connects to your application you assign a randomly generated unique id. You can write this random id to the user session (if you want the cart to be dropped when the user closes the browser window) or to a long-lived cookie (if you want the cart to be remembered even after closing the browser). You can use Flask-Login for managing the session/cookie actually, you don't have to treat unknown users as anonymous, as soon as you assign an id to them you can treat them as logged in users.
How do you know if an anonymous user is known or new? When the user connects you check if the session or cookie exist, and look for the id there. If an id is found, then you can locate the cart for the user. If you use a subclass of AnonymousUserMixin, then you can add the id as a member variable, so that you can do current_user.id even for anonymous users. You can have this logic in the Flask-Login user loader callback.
When the user is ready to pay you convert the anonymous user to a registered user, preserving the id.
If you have a cron job that routinely cleans up old/abandoned anonymous carts from the database, you may find that an old anonymous user connects and provides a user id that does not have a cart in the database (because the cart was deemed stale and deleted). You can handle this by creating a brand new cart for the same id, and you can even notify the user that the contents of the cart expired and were removed.
Hope this helps! | 2 | 7 | 0 | 0 | My app implements a shopping cart in which anonymous users can fill their cart with products. User Login is required only before payment. How can this be implemented?
The main challenge is that flask must keep track of the user (even if anonymous) and their orders. My current approach is to leverage the AnonymousUserMixin object that is assigned to current_user. The assumption is that current_user will not change throughout the session. However, I noticed that a new AnonymousUserMixin object is assigned to current_user, for example, upon every browser page refresh. Notice that this does not happen if a user is authenticated.
Any suggestions on how to circumvent this? | How to track anonymous users with Flask | 1 | 0.379949 | 1 | 0 | 0 | 2,806 |
29,961,898 | 2015-04-30T07:44:00.000 | 9 | 0 | 0 | 0 | 0 | python,flask,flask-login,anonymous-users | 0 | 29,962,315 | 0 | 2 | 0 | false | 1 | 0 | There is no need for a custom AnonymousUserMixin, you can keep the shopping cart data in session:
anonymous user adds something to his cart -> update his session with the cart data
the user wants to check out -> redirect him to login page
logged in user is back at the check out -> take his cart data out of the session and do whatever you would do if he was logged in the whole time | 2 | 7 | 0 | 0 | My app implements a shopping cart in which anonymous users can fill their cart with products. User Login is required only before payment. How can this be implemented?
The main challenge is that flask must keep track of the user (even if anonymous) and their orders. My current approach is to leverage the AnonymousUserMixin object that is assigned to current_user. The assumption is that current_user will not change throughout the session. However, I noticed that a new AnonymousUserMixin object is assigned to current_user, for example, upon every browser page refresh. Notice that this does not happen if a user is authenticated.
Any suggestions on how to circumvent this? | How to track anonymous users with Flask | 1 | 1 | 1 | 0 | 0 | 2,806 |
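A hedged Flask sketch of the session-based approach described above (secret key, route and field names are placeholders):
from flask import Flask, session, jsonify

app = Flask(__name__)
app.secret_key = 'change-me'            # required for sessions

@app.route('/cart/add/<int:product_id>', methods=['POST'])
def add_to_cart(product_id):
    cart = session.get('cart', [])
    cart.append(product_id)
    session['cart'] = cart              # reassign so the change is saved in the cookie
    return jsonify(cart=cart)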
29,967,612 | 2015-04-30T12:19:00.000 | 0 | 0 | 0 | 0 | 0 | python,websocket | 0 | 29,967,827 | 0 | 1 | 0 | false | 1 | 0 | It depends on your software design: if you decide the logic from WebSocketServer.py and CoreApplication.py belongs together, merge it.
If not, you need some kind of inter-process communication (IPC).
You can use websockets for this IPC, but I would suggest you use something simpler. For example, you can use JSON-RPC over TCP or a Unix domain socket to send control messages from CoreApplication.py to WebSocketServer.py. | 1 | 2 | 0 | 0 | I am trying to understand how to use websockets correctly and seem to be missing some fundamental part of the puzzle.
Say I have a website with 3 different pages:
newsfeed1.html
newsfeed2.html
newsfeed3.html
When a user goes to one of those pages they get a feed specific to the page, ie newsfeed1.html = sport, newsfeed2.html = world news etc.
There is a CoreApplication.py that does all the handling of getting data and parsing etc.
Then there is a WebSocketServer.py, using say Autobahn.
All the examples I have looked at, and that is alot, only seem to react to a message from the client (browser) within the WebSocketServer.py, think chat echo examples. So a client browser sends a chat message and it is echoed back or broadcast to all connected client browsers.
What I am trying to figure out is given the following two components:
CoreApplication.py
WebSocketServer.py
How to best make CoreApplication.py communicate with WebSocketServer.py for the purpose of sending messages to connected users.
Normally should CoreApplication.py simply send command messages to the WebSocketServer.py as a client. For example like this:
CoreApplication.py -> Connects to WebServerSocket.py as a normal client -> sends a Json command message (like broadcast message X to all users || send message Y to specific remote client) -> WebSocketServer.py determines how to process the incoming message dependant on which client is connected to which feed and sends to according remote client browsers.
OR, should CoreApplication.py connect programatically with WebSocketServer.py? As I cannot seem to find any examples of being able to do this for example with Autobahn or other simple web sockets as once the WebSocketServer is instantiated it seems to run in a loop and does not accept external sendMessage requests?
So to sum up the question: What is the best practice? To simply make CoreApplication.py interact with WebSocketServer.py as a client (with special command data) or for CoreApplication.py to use an already running instance of WebSocketServer.py (both of which are on the same machine) through some more direct method to directly sendMessages without having to make a full websocket connection first to the WebSocketServer.py server? | WebSockets best practice for connecting an external application to push data | 1 | 0 | 1 | 0 | 1 | 704 |
29,968,829 | 2015-04-30T13:18:00.000 | 0 | 0 | 1 | 0 | 0 | server,ipython | 0 | 42,136,317 | 0 | 4 | 0 | false | 0 | 0 | If it is a text file, create an empty file, edit it, and then copy/paste the content.
You can do this to bypass the 25 MB constraint. | 1 | 15 | 0 | 0 | I set up an ipython server for other people (in my company department) to have a chance to learn and work with python.
Now I wonder how people can load their own local data into the ipython notebook session on the remote server. Is there any way to do this? | Load local data into IPython notebook server | 0 | 0 | 1 | 0 | 0 | 52,276 |
29,971,186 | 2015-04-30T15:01:00.000 | 0 | 1 | 0 | 0 | 0 | python,windows,excel,python-2.7,xlrd | 1 | 30,945,220 | 0 | 3 | 0 | false | 0 | 0 | I had the same problem, and I think we have to check the Excel cells so that they are not being picked up as empty; that's how I solved it. | 1 | 4 | 0 | 0 | I'm stumped on this one, please help me oh wise stack exchangers...
I have a function that uses xlrd to read in an .xls file which is a file that my company puts out every few months. The file is always in the same format, just with updated data. I haven't had issues reading in the .xls files in the past but the newest release .xls file is not being read in and is producing this error: *** formula/tFunc unknown FuncID:186
Things I've tried:
I compared the new .xls file with the old to see if I could spot any
differences. None that I could find.
I deleted all of the macros that were contained in the file (older versions also had macros)
Updated xlrd to version 0.9.3 but get the same error
These files are originally .xlsm files. I open them and save them as
.xls files so that xlrd can read them in. This worked just fine on previous releases of the file. After upgrading to xlrd 0.9.3 which supposedly supports .xlsx, I tried saving the .xlsm file as.xlsx and tried to read it in but got an error with a blank error message
Useful Info:
Python 2.7
xlrd 0.9.3
Windows 7 (not sure if this matters but...)
My guess is that there is some sort of formula in the new file that xlrd doesn't know how to read. Does anybody know what FuncID: 186 is?
Edit: Still no clue on where to go with this. Anybody out there run into this? I tried searching up FuncID 186 to see if it's an excel function but to no avail... | Python XLRD Error : formula/tFunc unknown FuncID:186 | 0 | 0 | 1 | 1 | 0 | 1,917 |
29,972,537 | 2015-04-30T16:01:00.000 | 1 | 0 | 1 | 0 | 0 | python,multiprocessing,pickle,serialization | 0 | 30,238,617 | 0 | 2 | 0 | false | 0 | 0 | I'm the author of dill and pathos. Multiprocessing should use cPickle by default, so you should't have to do anything.
If your object doesn't serialize, you have two options: go to a fork of multiprocessing or some other parallel backend, or add methods to your class (i.e. reduce methods) that register how to serialize the object. | 1 | 6 | 0 | 0 | Using Python 2.7,
I am passing many large objects across processes using a manager derived from multiprocessing.managers.BaseManager and I would like to use cPickle as the serializer to save time; how can this be done? I see that the BaseManager initializer takes a serializer argument, but the only options appear to be pickle and xmlrpclib. | How do I change the serializer that my multiprocessing.mangers.BaseManager subclass uses to cPickle? | 0 | 0.099668 | 1 | 0 | 0 | 330
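As an illustration of the "reduce methods" option mentioned above, a minimal sketch (the class is made up) that tells pickle/cPickle how to rebuild an otherwise awkward object:
class Payload(object):
    def __init__(self, data):
        self.data = data

    def __reduce__(self):
        # return (callable, args) used to reconstruct the object when unpickling
        return (Payload, (self.data,))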
29,976,769 | 2015-04-30T20:00:00.000 | 1 | 1 | 0 | 0 | 0 | python,unit-testing,nosetests,coverage.py | 0 | 29,985,334 | 0 | 2 | 1 | true | 0 | 0 | The simplest way to direct coverage.py's focus is to use the source option, usually source=. to indicate that you only want to measure code in the current working tree. | 1 | 2 | 0 | 0 | I am using nosetests --with-coverage to test and see code coverage of my unit tests. The class that I test has many external dependencies and I mock all of them in my unit test.
When I run nosetests --with-coverage, it shows a really long list of all the imports (including something I don't even know where it is being used).
I learned that I can use .coveragerc for configuration purposes but it seems like I cannot find a helpful instruction on the web.
My questions are..
1) In which directory do I need to add .coveragerc? How do I specify the directories in .coveragerc? My tests are in a folder called "tests"..
/project_folder
/project_folder/tests
2) It is going to be a pretty long list if I were to add each one in omit= ...
What is the best way to only show the class that I am testing with the unittest in the coverage report?
It would be nice if I could get some beginner level code examples for .coveragerc. Thanks. | how to omit imports using .coveragerc in coverage.py? | 0 | 1.2 | 1 | 0 | 0 | 1,938 |
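As a starting point for the beginner-level example asked for above, a minimal .coveragerc (placed in the directory you run nosetests from) could look like this, assuming the current working tree is what you want measured:
[run]
source = .
omit =
    */tests/*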
29,990,202 | 2015-05-01T15:46:00.000 | 0 | 0 | 0 | 0 | 0 | python,wxpython,objectlistview-python | 0 | 30,030,742 | 0 | 2 | 0 | false | 0 | 1 | For the future: what I did was to show the color as a background of the row of the list. | 1 | 0 | 0 | 0 | I am using wxPython ObjectListView and it is very easy to use. Now I need to render a wx.Color as a column but I haven't found a way in the documentation. Basically I have list of items each of them have the following attributes: name, surname and hair color. Hair color is a RGB color and I would like to show it as a column in my ObjectListView.
Is there a way to do it ?
Many thanks | ObjectListView wxPython: how to show a wx.Color | 0 | 0 | 1 | 0 | 0 | 360 |
30,002,869 | 2015-05-02T13:29:00.000 | 2 | 0 | 0 | 0 | 0 | python,python-2.7,tkinter | 0 | 30,014,871 | 0 | 1 | 0 | true | 0 | 1 | You cannot change the window border, but you can remove it entirely and draw your own border. You'll also be responsible for adding the ability to move and resize the window. Search this site for "overrideredirect" for lots of questions and answers related to this feature.
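A minimal sketch of what that looks like (Python 2 naming; on Python 3 the module is tkinter):
import Tkinter as tk

root = tk.Tk()
root.overrideredirect(True)        # removes the native border and the minimize/maximize/close buttons
root.geometry("300x200+100+100")   # you now have to manage size and position yourself
root.mainloop()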
As for third party themes: no, there aren't any. | 1 | 1 | 0 | 0 | My question is simple, apart from the three themes pre-installed in Tkinter are there any other themes I can get ? Something like 3rd party themes ? If not, how can I change the button or other widgets looks (manually changing the form,etc..)?
Also I would like to know if it is possible to change the outside window look, like the look of the
[ _ ] [ [] ] [X]
buttons of the window, if not is there a way to remove them so I can put my own buttons in the frame?
Any code example or link is welcome. | 3rd party Tkinter themes and modifying outside window buttons | 1 | 1.2 | 1 | 0 | 0 | 174 |
30,035,123 | 2015-05-04T16:31:00.000 | 0 | 0 | 0 | 0 | 0 | python,matplotlib,mayavi | 0 | 30,125,166 | 0 | 1 | 0 | false | 0 | 0 | Mayavi is not really good at plotting 2D diagrams; you can cheat a little by setting your camera position parallel to a 2D image. If you want to plot 2D diagrams, try using matplotlib. | 1 | 0 | 1 | 0 | I have a dataset of a tennis game. This dataset contains the ball positions in each rally and the current score. I already 3d-visualized the game and ball positions in mayavi.
Now I want to plot 2d line diagrams in mayavi that visualize the score development after specific events (such as after: a break, a set-win, set-loss,...).
I came up with some ideas, but none of them are satisfying:
I could use imshow and "draw" the diagram
I could use points3d to plot the diagram
Maybe I can somehow use pyplot to plot the diagram, then make a screenshot and then plot this screenshot in mayavi... Any idea if this is possible?
Do you have any other idea how I could plot a 2d line diagram in mayavi? | Plot 2d line diagram in mayavi | 1 | 0 | 1 | 0 | 0 | 783 |
30,036,175 | 2015-05-04T17:26:00.000 | 0 | 0 | 0 | 0 | 0 | python-2.7,packet,scapy | 0 | 47,197,343 | 0 | 1 | 0 | false | 0 | 0 | pkt.time gives you the epoch time that is included in the FRAME layer of the packet in wireshark.
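To turn such an epoch value into something readable (using the value from the question; fromtimestamp gives local time, utcfromtimestamp gives UTC):
import datetime
print(datetime.datetime.fromtimestamp(1430123453.564733))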
Following the same notation, pkt[IP].time would give you the time included in the IP layer of the packet in wireshark. But the IP layer has no time field, so I don't think this command will work. | 1 | 3 | 0 | 1 | I was wondering what is the difference between using pkt.time and pkt[IP].time since they both give different times for the same packet.
I was also wondering how to interpret packet time such as 1430123453.564733
If anyone has an idea or knows where I can find such information it would be very helpful.
Thanks. | Scapy packet time interpretation | 1 | 0 | 1 | 0 | 1 | 501 |
30,045,659 | 2015-05-05T06:26:00.000 | 0 | 1 | 0 | 0 | 0 | python,raspberry-pi,gpio | 0 | 30,187,490 | 0 | 1 | 0 | false | 0 | 0 | There is a built-in function GPIO.cleanup() that cleans up all the ports you've used.
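A hedged sketch with the RPi.GPIO library (pin 18 is just an example):
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)
GPIO.setup(18, GPIO.OUT)
GPIO.output(18, GPIO.LOW)   # drive the pin low
GPIO.cleanup()              # release every channel this script configured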
For the power and ground pins, they are not under software control. | 1 | 0 | 0 | 0 | Basically, I need to disable or turn off a GPIO pin whenever I execute a method in python.
Does anyone know how to disable the pins? | How to disable GPIO pins on the RaspberryPi? | 0 | 0 | 1 | 0 | 0 | 4,582
30,049,490 | 2015-05-05T09:45:00.000 | 0 | 0 | 0 | 0 | 1 | python,opencv,training-data | 0 | 32,845,800 | 0 | 2 | 0 | true | 0 | 0 | I later found the answer and would like to share it in case someone faces the same challenges.
You need pictures only for the different people you are trying to recognise. I created my training set with 30 images of every person (6 persons) and figured out that histogram equalisation can play an important role when creating the training set and later when recognising faces. Using the histogram equalisation model accuracy was greatly increased. Another thing to consider is eye axis alignment so that all pictures have their eye axis aligned before they enter face recognition. | 1 | 0 | 1 | 0 | I am using python and openCV to create face recognition with Eigenfaces. I stumbled on a problem, since I don't know how to create training set.
Do I need multiple faces of people I want to recognize(myself for example), or do I need a lot of different faces to train my model?
First I tried training my model with 10 pictures of my face and 10 pictures of ScarJo face, but my prediction was not working well.
Now I'm trying to train my model with 20 different faces (mine is one of them).
Am I doing it wrong and if so what am I doing wrong? | What does eigenfaces training set have to look like? | 0 | 1.2 | 1 | 0 | 0 | 247 |
30,051,770 | 2015-05-05T11:29:00.000 | 0 | 0 | 1 | 0 | 0 | python,localization,windows-installer,installation,multiple-languages | 0 | 30,155,408 | 0 | 2 | 0 | false | 0 | 0 | A lot of this seems to be a question about how bdist_msi works, and it seems to be a tool that nobody here knows anything about. I would get some clarification from that tool somehow. The docs seem non-existent to me.
It might generate only one MSI in English. If so then you need to use a tool like Orca to translate the MSI text into each language and save each difference as a transform, an .mst file. Then you'd write a program that gets the language from the user and installs the MSI with a TRANSFORMS= command line that refers to the .mst file for the language.
It might work like Visual Studio, where each language has its own separate MSI file. Again, you'd need a setup program asks the user what language and you fire off the appropriate MSI.
In general, there's no need to ask the user what language to use. I have seen those dialogs but I don't know why they bother. I think it's better to assume the current user language rather than show a dialog that says "Choose a language". You'd need to localise that "Choose a language" text to the user's language anyway unless you assume that everyone already understands English.
You might be able to use something like WiX Burn to package your MSI and provide localisation, not sure. | 1 | 0 | 0 | 0 | I have created a python application and created a .msi installer for it to work and get installed on other machines.
I would like to know how can the user change the language during the installation. ie the localization of msi. | How to localize a msi setup installer file to support various languages? | 0 | 0 | 1 | 0 | 0 | 381 |
30,056,331 | 2015-05-05T14:50:00.000 | 3 | 0 | 0 | 0 | 0 | python,scikit-learn | 0 | 56,863,216 | 0 | 4 | 0 | false | 0 | 0 | AdaBoostClassifier
BaggingClassifier
BayesianGaussianMixture
BernoulliNB
CalibratedClassifierCV
ComplementNB
DecisionTreeClassifier
ExtraTreeClassifier
ExtraTreesClassifier
GaussianMixture
GaussianNB
GaussianProcessClassifier
GradientBoostingClassifier
KNeighborsClassifier
LabelPropagation
LabelSpreading
LinearDiscriminantAnalysis
LogisticRegression
LogisticRegressionCV
MLPClassifier
MultinomialNB
NuSVC
QuadraticDiscriminantAnalysis
RandomForestClassifier
SGDClassifier
SVC
_BinaryGaussianProcessClassifierLaplace
_ConstantPredictor | 2 | 22 | 1 | 0 | I need a list of all scikit-learn classifiers that support the predict_proba() method. Since the documentation provides no easy way of getting that information, how can I get this programmatically? | How to list all scikit-learn classifiers that support predict_proba() | 0 | 0.148885 | 1 | 0 | 0 | 8,716
30,056,331 | 2015-05-05T14:50:00.000 | 0 | 0 | 0 | 0 | 0 | python,scikit-learn | 0 | 72,497,753 | 0 | 4 | 0 | false | 0 | 0 | If you are interested in a specific type of estimator (say, a classifier), you could go with:
import sklearn
estimators = sklearn.utils.all_estimators(type_filter="classifier")
for name, class_ in estimators:
    if not hasattr(class_, 'predict_proba'):
        print(name) | 2 | 22 | 1 | 0 | I need a list of all scikit-learn classifiers that support the predict_proba() method. Since the documentation provides no easy way of getting that information, how can I get this programmatically? | How to list all scikit-learn classifiers that support predict_proba() | 0 | 0 | 1 | 0 | 0 | 8,716
30,063,430 | 2015-05-05T21:21:00.000 | 1 | 0 | 1 | 0 | 0 | python,arrays,image,image-processing,colors | 0 | 30,063,676 | 0 | 1 | 0 | false | 0 | 0 | The image is being opened as a color image, not as a black and white one. The shape is 181x187x3 because of that: the 3 is there because each pixel is an RGB value. Quite often images in black and white are actually stored in an RGB format. For an image image, if np.all(image[:,:,0]==image[:,:,1]) and so on, then you can just choose to use any of them (eg, image[:,:,0]). Alternatively, you could take the mean with np.mean(image,axis=2).
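A small sketch of those two points together (file name as in the question):
import numpy as np
from PIL import Image

img = np.asarray(Image.open('test.jpg'))   # shape (height, width, 3) for an RGB file
gray = img.mean(axis=2)                    # one intensity value per pixel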
Note too that the range of values will depend on the format, and so depending upon what you mean by color intensity, you may need to normalize them. In the case of a jpeg, they are probably uint8s, so you may want image[:,:,0].astype('float')/255 or something similar. | 1 | 0 | 1 | 0 | Suppose I have got a black and white image, how do I convert the colour intensity at each point into a numerical value that represents its relative intensity?
I checked somewhere on the web and found the following:
Intensity = np.asarray(PIL.Image.open('test.jpg'))
What's the difference between asarray and array?
Besides, the shape of the array Intensity is '181L, 187L, 3L'. The size of the image test.jpg is 181x187, so what does the extra '3' represent?
And are there any other better ways of extracting the colour intensity of an image?
thank you. | how to extract the relative colour intensity in a black and white image in python? | 0 | 0.197375 | 1 | 0 | 0 | 473 |
30,063,974 | 2015-05-05T21:58:00.000 | 3 | 1 | 1 | 0 | 0 | python,camera,raspberry-pi,camera-calibration | 0 | 30,064,269 | 0 | 3 | 0 | false | 0 | 0 | What do you mean by "black and white image," in this case? There is no "true" black and white image of anything. You have sensors that have some frequency response to light, and those give you the values in the image.
In the case of the Raspberry Pi camera, and almost all standard cameras, there are red, green and blue sensors that have some response centered around their respective frequencies. Those sensors are laid out in a certain pattern, as well. If it's particularly important to you, there are cameras that only have an array of a single sensor type that is sensitive to a wider range of frequencies, but those are likely going to be considerable more expensive.
You can get raw image data from the raspi camera with picamera. This is not the "raw" format described in the documentation and controlled by format, which is really just the processed data before encoding. The bayer option will return the actual raw data. However, that means you'll have to deal with processing by yourself. Each pixel in that data will be from a different color sensor, for example, and will need to be adjusted based on the sensor response.
The easiest thing to do is to just use the camera normally, as you're not going to get great accuracy measuring light intensity in this way. In order to get accurate results, you'd need calibration, and you'd need to be specific about what the data is for, how everything is going to be illuminated, and what data you're actually interested in. | 1 | 2 | 0 | 0 | Are there any ways to set the camera in raspberry pi to take black and white image?, like using some commands / code in picamera library?
Since I need to compare the relative light intensity of a few different images, I'm worried that the camera will already so some adjustments itself when the object is under different illuminations, so even if I convert the image to black and white later on the object's 'true' black and white image will have been lost.
thanks
edit: basically what I need to do is to capture a few images of an object when the camera position is fixed, but the position of the light source is changed (and so the direction of illumination is changed as well). Then for each point on the image I will need to compare the relative light intensity of the different images. As long as the light intensity, or the 'brightness', of all the images is relative to the same scale, then it's ok, but I'm not sure if this is the case. I'm not sure if the camera will adjust something like contrast automatically by itself when an image is 'inherently' darker or brighter.
30,089,003 | 2015-05-06T22:48:00.000 | 3 | 0 | 1 | 0 | 0 | python,file | 0 | 30,089,116 | 0 | 2 | 0 | false | 0 | 0 | Another way to do it is '\1'. Cheers! | 1 | 2 | 0 | 0 | Wondering how to write unreadable ^A into a file using Python. For unreadable ^A, I mean when we use command "set list" in vi, we can see unreadable character like ^I for '\t', $ for '\n'.
thanks in advance,
Lin | how to write unreadable ^A into output file in Python? | 0 | 0.291313 | 1 | 0 | 0 | 243 |
30,090,942 | 2015-05-07T02:30:00.000 | 1 | 0 | 1 | 1 | 0 | python,windows | 0 | 30,090,978 | 0 | 1 | 0 | true | 0 | 0 | You don't need to create a py2exe executable for this, you can simply run the Python executable itself (assuming it's installed of course), passing the name of your script as an argument.
And one way to do that is to use the task scheduler, which can create tasks to be run at boot time, under any user account you have access to. | 1 | 0 | 0 | 0 | I want to run a python script which should always start when windows boot.
I believe I can create an executable Windows file from Python by using py2exe... But how do I make it a startup service which will be triggered at boot?
Is there any way ? | is there any possible way to run a python script on boot in windows operating system? | 1 | 1.2 | 1 | 0 | 0 | 111 |
30,108,404 | 2015-05-07T17:55:00.000 | 2 | 0 | 1 | 0 | 1 | python,scope | 0 | 30,108,596 | 0 | 2 | 0 | true | 0 | 0 | You're going against the point of having Scopes at all. We have local and global scopes for a reason. You can't prevent Python from seeing outer scope variables. Some other languages allow scope priority but Python's design principles enforce strong scoping. This was a language design choice and hence Python is the wrong language to try to prevent outer scope variable reading.
Just use better naming methodologies to ensure no confusion; you can change variable names by using the Find-Replace function that most text editors provide. | 1 | 4 | 0 | 0 | When defining a python function, I find it hard to debug if I had a typo in a variable name and that variable already exists in the outer scope. I usually use similar names for variables of the same type of data structure, so if that happens, the function still runs fine but just returns a wrong result. So is there a way to prevent python functions from reading outer scope variables? | how to prevent python function to read outer scope variable? | 1 | 1.2 | 1 | 0 | 0 | 841
30,118,631 | 2015-05-08T07:52:00.000 | 1 | 1 | 0 | 0 | 0 | python,timer | 0 | 30,118,684 | 0 | 1 | 0 | true | 0 | 0 | Two general ways:
Create a separate timer for each user when he joins, do something when the timer fires and destroy it when the user leaves.
Have one timer set to fire, say, every second (or ten seconds) and iterate over all the users when it fires to see how long they have been idle.
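A hedged sketch of the second option (all names below are made up; you would call these from your existing join/part handlers and from the periodic timer):
import time

joined_at = {}                              # nick -> epoch seconds when the user joined

def on_join(nick):
    joined_at[nick] = time.time()

def on_part(nick):
    joined_at.pop(nick, None)

def idle_hours(nick):
    return (time.time() - joined_at[nick]) / 3600.0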
A more precise answer would require deeper insight into your architecture, I’m afraid. | 1 | 0 | 0 | 0 | I apologize I couldn't find a proper title, let me explain what I'm working on:
I have a Python IRC bot, and I want to be able to keep track of how long users have been idle in the channel, and allow them to earn things (I have it tied to Skype/Minecraft/my website) each x amount of hours they're idle in the channel.
I already have everything to keep track of each user and have them validated with the site and stuff, but I am not sure how I would keep track of the time they're idle.
I have it capture on join/leave/part messages. How can I get a timer set up when they join, and keep that timer running, along with other times for all of the users who are in that channel, and each hour they've been idle (not all at same time) do something then restart the timer over for them? | Keep track of items in array with timer | 0 | 1.2 | 1 | 0 | 1 | 103 |
30,119,149 | 2015-05-08T08:23:00.000 | 1 | 0 | 0 | 0 | 0 | android,python,numpy,scikit-learn | 0 | 30,120,067 | 0 | 2 | 1 | false | 0 | 1 | Depends on what you need....
Python on a server using Flask/Django would allow you to build an HTTP UI or even an API interface for your Android (or any) device.
QPython is a brilliant way to run Python on Android, but it probably won't cope with the whole of SciPy, so it depends on what libraries have already been ported across by the QPython team. It's a great tool, though, and worth a look anyway.
IMHO, learning a bit of Flask for server-side running would be easier and more flexible than using Kivy. | 1 | 1 | 0 | 0 | I have some Python code that heavily relies on numpy/scipy and scikit-learn. What would be the best way to get it running on an Android device? I have read about a few ways to get Python code running on Android, mostly Pygame and Kivy, but I am not sure how these would interact with numpy and scikit-learn.
Or would it be better to consider letting the Android application send data to some server where Python is running? | Port Python Code to Android | 0 | 0.099668 | 1 | 0 | 0 | 2,766
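A minimal sketch of the server-side option from the answer above: a tiny Flask API that an Android app can POST feature data to. The model file name, route, and feature layout are all assumptions for illustration; the Android side then only needs an HTTP client, while numpy/scipy/scikit-learn stay on the server.

# A small Flask endpoint wrapping a pre-trained scikit-learn model.
from flask import Flask, request, jsonify
import joblib   # very old scikit-learn versions bundled this as sklearn.externals.joblib

app = Flask(__name__)
model = joblib.load("model.pkl")   # hypothetical: a model saved earlier with joblib.dump

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]        # e.g. [[5.1, 3.5, 1.4, 0.2]]
    prediction = model.predict(features).tolist()
    return jsonify({"prediction": prediction})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)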
30,130,277 | 2015-05-08T18:12:00.000 | 0 | 1 | 0 | 0 | 0 | python,numpy,scipy | 0 | 30,130,970 | 0 | 3 | 0 | false | 0 | 0 | Use struct.pack() with the f type code to get them into 4-byte packets. | 1 | 7 | 1 | 0 | I need to store a massive numpy vector to disk. Right now the vector that I am trying to store is ~2.4 billion elements long and the data is float64. This takes about 18GB of space when serialized out to disk.
If I use struct.pack() with float32 (4 bytes), I can reduce it to ~9GB. I don't need anywhere near this amount of precision, and disk space is quickly going to become an issue, as I expect the number of values I need to store could grow by an order of magnitude or two.
I was thinking that if I could keep just the first 4 significant digits, I could store those values as integers and use only 1 or 2 bytes of space. However, I have no idea how to do this efficiently. Does anyone have any ideas or suggestions? | Binary storage of floating point values (between 0 and 1) using less than 4 bytes? | 0 | 0 | 1 | 0 | 0 | 934
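A minimal sketch of the quantisation idea from the question (not from the stored answer), assuming every value lies in [0, 1]; the file names and vector size are illustrative.

# Scale [0, 1] floats to 16-bit unsigned integers (2 bytes each) before writing.
import numpy as np

values = np.random.rand(1000000)                        # stand-in for the real float64 vector

quantised = np.round(values * 65535).astype(np.uint16)  # 0.0 -> 0, 1.0 -> 65535
quantised.tofile("vector_u16.bin")                      # 2 bytes per value on disk

restored = np.fromfile("vector_u16.bin", dtype=np.uint16) / 65535.0   # back to floats

# Alternative: half precision (also 2 bytes, roughly 3 significant decimal digits)
values.astype(np.float16).tofile("vector_f16.bin")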
30,151,258 | 2015-05-10T12:14:00.000 | 1 | 0 | 1 | 0 | 0 | ipython,python-3.4,qtconsole | 0 | 30,160,102 | 0 | 1 | 0 | true | 0 | 0 | Reposting as an answer:
If you just run ipython3 in a terminal, what you get is a pure terminal interface; it's not running a kernel that the Qt console can talk to.
If you run ipython3 console, you'll get a similar interface, but it will be talking to a kernel, so you can start a Qt console to interact with it. You can either run %qtconsole from inside that interface, or run ipython qtconsole --existing in a shell to start a Qt console and connect to an existing kernel. | 1 | 1 | 0 | 0 | When I run %qtconsole from within ipython3 I get "ERROR: Line magic function `%qtconsole` not found.", but ipython3 qtconsole in a terminal starts fine. According to this, how can I run a qtconsole instance connected to an ipython3 instance? And how can I run it on a single core -- rc[0].execute(%qtconsole)?
P.S. If someone knows, please tell me how to escape the ` (backquote) symbol in code mode. | How to run qtconsole connected to ipython3 instance? | 0 | 1.2 | 1 | 0 | 0 | 493
30,172,686 | 2015-05-11T16:16:00.000 | 0 | 0 | 1 | 0 | 0 | python,class,oop,object | 0 | 30,173,088 | 0 | 2 | 0 | false | 0 | 0 | The __*__ attributes of an object are meant to implement internal functions standardized by the Python language. Like __add__ (which is used to provide the result of object + whatever), __repr__ is expected to behave in a defined way, which includes returning a certain datatype (for __repr__, a string).
While statically typed languages would report a compile-time error, in dynamically typed languages like Python this might result in unexpected (yet not undefined!) runtime behaviour. It need not even produce an error message. Therefore, never change that behaviour to something unexpected.
If you want to return something custom, use a custom method like get_info(self) or similar. (Remember not to use __*__ names for that either.)
I'll give an example. My class is Users and there are member variables self.username and self.email. If I want to output an object of type Users, how do I get it to return (self.username,self.email) as a tuple? | Returning a specific data type when referring to an object in Python | 0 | 0 | 1 | 0 | 0 | 89 |
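A minimal sketch of the suggestion above: __repr__ keeps returning a string, while a plain method (the name get_info is just the answer's example) returns the tuple the question asks for.

class Users(object):
    def __init__(self, username, email):
        self.username = username
        self.email = email

    def __repr__(self):
        # __repr__ must return a string
        return "Users(username={0!r}, email={1!r})".format(self.username, self.email)

    def get_info(self):
        # an ordinary method is free to return any type, e.g. a tuple
        return (self.username, self.email)

u = Users("alice", "alice@example.com")
print(repr(u))        # Users(username='alice', email='alice@example.com')
print(u.get_info())   # ('alice', 'alice@example.com')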
30,203,785 | 2015-05-13T00:42:00.000 | 0 | 0 | 0 | 0 | 0 | python,scipy,sparse-matrix | 0 | 33,983,181 | 0 | 1 | 0 | false | 0 | 0 | If you are using any plugin named "infinite posts scroll" or "jetpack" or anything similar, delete it. | 1 | 0 | 1 | 0 | I can find the sum of all (non-zero) elements in a scipy sparse matrix with mat.sum(), but how can I find their product? There's no mat.prod() method. | Product of elements of scipy sparse matrix | 0 | 0 | 1 | 0 | 0 | 44
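The stored answer above does not address the sparse-matrix question; as a possible approach of my own (not from that answer), the product of the explicitly stored non-zero entries can be taken from the matrix's .data array.

import numpy as np
from scipy.sparse import csr_matrix

mat = csr_matrix(np.array([[1.0, 0.0, 2.0],
                           [0.0, 3.0, 0.0]]))

# .data holds only the explicitly stored (non-zero) values
product_of_nonzeros = np.prod(mat.data)
print(product_of_nonzeros)   # 6.0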
30,204,877 | 2015-05-13T02:58:00.000 | 1 | 0 | 0 | 0 | 1 | python,opencv,svm | 0 | 30,208,490 | 0 | 1 | 0 | false | 0 | 0 | The classic SVM partitions the n-dimensional feature space with planes. That means every point in space lies in one of the partitions and therefore belongs to one of the trained classes; there is no outlier detection.
However, there is also the concept of a one-class SVM, which tries to encapsulate the "known" space and classifies points into "known" and "unknown". The libSVM package also has probabilities; you could analyse whether those help. You could also try other classification concepts to detect outliers, such as nearest neighbour. | 1 | 0 | 1 | 0 | I am trying to do multiclass classification with SVM in OpenCV (I use OpenCV for Python). Let's say I have 5 classes and train them well. I have tested it and got good results.
The problem appears when an object from a 6th class comes into this classification. Although I haven't trained this class before, the object (which comes from the 6th class) is recognised as an object from one of the classes I trained before (it is classified as a member of the 1st, 2nd, etc. class), while the machine should say it doesn't know which class it belongs to.
I have an idea to do the classification in two stages. First a binary classification, with all of the samples as the training set. Second, I classify it into the multiple classes.
But the problem is: how should I find the negative samples for the first classification when I don't know the other objects (say, those coming from a 6th or 7th class)? Can anybody help me, what should I do? Which samples should I use as negative samples? Is this a good idea or a bad idea? Is there another way to solve this problem? | SVM find member of outside training set | 0 | 0.197375 | 1 | 0 | 0 | 87
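A minimal sketch of the one-class SVM idea from the answer above, using scikit-learn rather than OpenCV's bindings (my substitution, purely for illustration); the feature values are random stand-ins. Only samples the detector flags as "known" would then be passed on to the regular multiclass SVM.

import numpy as np
from sklearn.svm import OneClassSVM

# "Known" data: feature vectors from the 5 classes the multiclass SVM was trained on.
known_samples = np.random.rand(200, 10)

detector = OneClassSVM(nu=0.05, kernel="rbf", gamma=0.1)
detector.fit(known_samples)

# +1 means "looks like the known data", -1 means "outlier / possibly an unseen class".
new_samples = np.random.rand(5, 10)
print(detector.predict(new_samples))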
30,208,421 | 2015-05-13T07:31:00.000 | 0 | 0 | 0 | 1 | 0 | python,web2py | 0 | 30,356,551 | 0 | 1 | 0 | false | 1 | 0 | Just use the file system.
GUI:
Copy your application folder to your new instance of web2py (/web2py/applications).
Command line:
scp -r /home/username/oldarea/web2py/applications/myApp /home/username/newarea/web2py/applications | 1 | 2 | 0 | 0 | I am new to web2py. I have a web2py application on my local system, and I want to upload this application into the web2py environment through the admin interface option present in web2py ("Upload & install packed application"), make some modifications, and run the application, but I am unable to upload the app. Please give suggestions on how to do this.
Thanks in advance | How to upload an existing app into the web2py environment | 0 | 0 | 1 | 0 | 0 | 724
30,234,706 | 2015-05-14T10:14:00.000 | 0 | 0 | 0 | 0 | 0 | python,mysql,flask,sqlalchemy | 0 | 30,238,066 | 0 | 1 | 0 | false | 0 | 0 | If I understand correctly, your from_date and to_date are just dates. If you set them to Python datetime objects with the date/times you want your results between, it should work. | 1 | 0 | 0 | 0 | I am using SQLAlchemy to query memory logs off a MySQL database. I am using:
session.query(Memory).filter(Memory.timestamp.between(from_date, to_date))
but the results after using the time window are still too many.
Now I want to query for results within the time window, but filtered down by asking for entries logged every X minutes/hours and skipping the ones in between, but I cannot find a simple way to do it.
To further elaborate, let's say the 'l's are all the results from a query in a given time window:
lllllllllllllllllllllllllllllllllllllllllllllllllllllllllllll
To dilute them, I am looking for a query that will return only the 'l's every X minutes/hours so that I am not overwhelmed:
l......l......l.....l......l......l......l.....l.....l.....l.
I could get everything and then write a function that does this, but that defeats the purpose of avoiding being swamped with results in the first place.
Sidenote:
If worst comes to worst, I can ask for a row after skipping a predefined number of rows, using mod on the row id column. But it would be great to avoid that, since there is a timestamp (SQLAlchemy's DateTime type).
Edit:
There could be some value in using GROUP BY on the timestamp and then somehow selecting one row from every group, but I am still not sure how to do this in a useful manner with SQLAlchemy. | How to query rows with a minute/hour step interval in SqlAlchemy? | 1 | 0 | 1 | 1 | 0 | 587
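The stored answer above only covers the date filtering; as a possible sketch of my own for the dilution itself (assuming a MySQL backend, reusing the question's session, Memory, from_date and to_date, and assuming Memory has an integer primary key column named id), SQLAlchemy's func can emit MySQL's MINUTE() and DATE_FORMAT() to pick one row per interval.

from sqlalchemy import func

# Keep only rows whose timestamp falls on a 10-minute boundary.
diluted = (
    session.query(Memory)
    .filter(Memory.timestamp.between(from_date, to_date))
    .filter(func.minute(Memory.timestamp) % 10 == 0)
    .all()
)

# Or take one representative row id per hour-sized bucket.
ids_per_hour = (
    session.query(func.min(Memory.id))
    .filter(Memory.timestamp.between(from_date, to_date))
    .group_by(func.date_format(Memory.timestamp, "%Y-%m-%d %H"))
    .all()
)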
30,249,063 | 2015-05-14T23:22:00.000 | 0 | 1 | 0 | 0 | 0 | python,linux,performance,ubuntu,analytics | 0 | 30,249,116 | 0 | 2 | 1 | false | 0 | 0 | If you just want to know how long a process takes to run, the time command is pretty handy. Just run time <command> and it will report how much time it took to run, broken down into a few categories, like wall clock time, system/kernel time and user space time. This won't tell you anything about which parts of the program are taking up that time. You can always look at a profiler if you want/need that type of information.
That said, as Barmar said, if you aren't doing much processing of the sites you are grabbing, the laptop is probably not going to be a limiting factor. | 2 | 0 | 0 | 0 | I created a Python script that grabs some info from various websites. Is it possible to analyze how long it takes to download the data and how long it takes to write it to a file?
I am interested in knowing how much it could improve by running it on a better PC (it is currently running on a crappy old laptop). | Is it possible to see what a Python process is doing? | 1 | 0 | 1 | 0 | 0 | 238
30,249,063 | 2015-05-14T23:22:00.000 | 0 | 1 | 0 | 0 | 0 | python,linux,performance,ubuntu,analytics | 0 | 30,249,196 | 0 | 2 | 1 | false | 0 | 0 | You can always store the system time in a variable before a block of code that you want to test, do it again after, and then compare the two. | 2 | 0 | 0 | 0 | I created a Python script that grabs some info from various websites. Is it possible to analyze how long it takes to download the data and how long it takes to write it to a file?
I am interested in knowing how much it could improve by running it on a better PC (it is currently running on a crappy old laptop). | Is it possible to see what a Python process is doing? | 1 | 0 | 1 | 0 | 0 | 238
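A minimal sketch of the "store the time before and after" idea from the answer above; the URL and output file are placeholders standing in for the script's real download and write steps.

import time
try:
    from urllib.request import urlopen   # Python 3
except ImportError:
    from urllib2 import urlopen          # Python 2

start = time.time()
data = urlopen("http://example.com").read()   # stand-in for the real scraping step
downloaded_at = time.time()

with open("output.html", "wb") as f:          # stand-in for the real write step
    f.write(data)
written_at = time.time()

print("download took %.3f s" % (downloaded_at - start))
print("writing took  %.3f s" % (written_at - downloaded_at))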