Dataset features (name, dtype, observed value range or string-length range):

| Feature | dtype | Range / length |
| --- | --- | --- |
| Q_Id | int64 | 2.93k – 49.7M |
| CreationDate | string (length) | 23 – 23 |
| Users Score | int64 | -10 – 437 |
| Other | int64 | 0 – 1 |
| Python Basics and Environment | int64 | 0 – 1 |
| System Administration and DevOps | int64 | 0 – 1 |
| DISCREPANCY | int64 | 0 – 1 |
| Tags | string (length) | 6 – 90 |
| ERRORS | int64 | 0 – 1 |
| A_Id | int64 | 2.98k – 72.5M |
| API_CHANGE | int64 | 0 – 1 |
| AnswerCount | int64 | 1 – 42 |
| REVIEW | int64 | 0 – 1 |
| is_accepted | bool | 2 classes |
| Web Development | int64 | 0 – 1 |
| GUI and Desktop Applications | int64 | 0 – 1 |
| Answer | string (length) | 15 – 5.1k |
| Available Count | int64 | 1 – 17 |
| Q_Score | int64 | 0 – 3.67k |
| Data Science and Machine Learning | int64 | 0 – 1 |
| DOCUMENTATION | int64 | 0 – 1 |
| Question | string (length) | 25 – 6.53k |
| Title | string (length) | 11 – 148 |
| CONCEPTUAL | int64 | 0 – 1 |
| Score | float64 | -1 – 1.2 |
| API_USAGE | int64 | 1 – 1 |
| Database and SQL | int64 | 0 – 1 |
| Networking and APIs | int64 | 0 – 1 |
| ViewCount | int64 | 15 – 3.72M |
20,996,193
2014-01-08T12:51:00.000
29
0
0
0
0
python,user-interface,qt5,pyqt5
0
21,359,084
0
3
0
false
0
1
Been looking for PyQt5 tutorials for some time? You won't find many around the internet. There are, however, pretty self-explanatory basic scripts under the following path: /python/lib/site-packages/PyQt5/examples. There you will find about 100 examples in 30 folders, ranging from beginner to advanced, covering basic windows, menus, tabs, layouts, network, OpenGL, etc.
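For readers who want something runnable right away, here is a minimal sketch of a bare PyQt5 window; this is a generic example, not one of the bundled scripts the answer refers to.

```python
# Minimal PyQt5 window; a generic sketch, assuming PyQt5 is installed.
import sys
from PyQt5.QtWidgets import QApplication, QWidget

app = QApplication(sys.argv)
window = QWidget()
window.setWindowTitle("Hello PyQt5")
window.resize(300, 200)
window.show()
sys.exit(app.exec_())
```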
1
65
0
1
I am looking for a PyQt5 tutorial. It is rather complicated to start GUI development with Python for the first time without a tutorial. I have only found some PyQt4 tutorials so far, and since some things changed from Qt4 to Qt5 (for example, SIGNAL and SLOT are no longer supported in Qt5), it would be nice to have tutorials specifically for PyQt5. Can someone please provide a tutorial on how to start GUI development with PyQt5?
Is there a tutorial specifically for PyQt5?
0
1
1
0
0
57,823
21,026,487
2014-01-09T16:57:00.000
5
0
1
0
0
python,text,colors,psychopy
0
21,063,050
0
2
0
true
0
0
No, that isn't possible right now. There's an experimental new stimulus class called TextBox that will allow it, but you'd have to write code to use that (not available yet from the Builder interface). Or just create some tif images of your stimuli and use those?
2
3
1
0
I am messing around in PsychoPy now, trying to modify the Sternberg demo for a class project. I want the stimulus text—the number set—to display in a variety of colors: say, one digit is red, the next blue, the next brown, etc. A variety within the same stimulus. I can only find how to change the color of the entire set. I was wondering if I could add another variable to the spreadsheet accompanying the experiment and have the values in the cells be comma separated (red,blue,brown…). Is this possible?
Text with multiple colors in PsychoPy
1
1.2
1
0
0
1,012
21,026,487
2014-01-09T16:57:00.000
2
0
1
0
0
python,text,colors,psychopy
0
23,425,589
0
2
0
false
0
0
The current way to implement this is to have a separate text stimulus for each digit, each with the desired colour. If the text representation of the number is contained in a variable called, say, stimulusText, then in the Text field for the first text component put "$stimulusText[0]" so that it contains just the first digit. In the next text component, use "$stimulusText[1]", and so on. The colour of each text component can either be fixed or vary according to separate column variables specified in a conditions file.
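A script-side sketch of the same per-digit idea, assuming PsychoPy's standard visual API; the window size, digit positions, and colours here are illustrative, not taken from the answer (which describes the Builder interface).

```python
from psychopy import visual, core

win = visual.Window(size=(800, 600))
stimulus_text = "582"
colours = ["red", "blue", "brown"]
# one TextStim per digit, each with its own colour and position
digits = [
    visual.TextStim(win, text=ch, color=col, pos=(-0.2 + i * 0.2, 0))
    for i, (ch, col) in enumerate(zip(stimulus_text, colours))
]
for d in digits:
    d.draw()
win.flip()
core.wait(2.0)
win.close()
```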
2
3
1
0
I am messing around in PsychoPy now, trying to modify the Sternberg demo for a class project. I want the stimulus text—the number set—to display in a variety of colors: say, one digit is red, the next blue, the next brown, etc. A variety within the same stimulus. I can only find how to change the color of the entire set. I was wondering if I could add another variable to the spreadsheet accompanying the experiment and have the values in the cells be comma separated (red,blue,brown…). Is this possible?
Text with multiple colors in PsychoPy
1
0.197375
1
0
0
1,012
21,033,857
2014-01-10T00:10:00.000
1
0
1
0
0
python,list,python-2.7,dictionary
0
21,034,096
0
3
0
false
0
0
A couple of ideas: set up a collections.defaultdict for your output; this is a dictionary with a default value for keys that don't yet exist (in this case, as aelfric5578 suggests, an empty list). Build a list of all the words in your file, in order. And you can use zip(lst, lst[1:]) to create pairs of consecutive list elements.
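Putting those three ideas together, a sketch might look like this; input.txt is a hypothetical file name.

```python
import collections

following = collections.defaultdict(list)  # missing keys default to []
with open("input.txt") as f:
    words = f.read().split()                # all words, in order
for current, nxt in zip(words, words[1:]):  # consecutive pairs
    following[current].append(nxt)
# e.g. following["and"] might now be ["then", "best", "after", ...]
```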
1
0
0
0
I am having difficulties with writing a Python program that reads from a text file and builds a dictionary which maps each word that appears in the file to a list of all the words that immediately follow it in the file. The list of words can be in any order and should include duplicates. For example, the key "and" might have the list ["then", "best", "after", ...] listing all the words which came after "and" in the text. Any ideas would be a great help.
how to write a Python program that reads from a text file, and builds a dictionary which maps each word
0
0.066568
1
0
0
363
21,064,467
2014-01-11T16:01:00.000
0
0
0
0
1
redirect,python-2.7,scrapy,http-post
0
21,065,555
0
1
0
true
1
0
I found my own solution to this problem. Instead of building a list of requests and returning them all at once, I built a chain of them and passed the next one along inside the request's meta data. Inside the callback I yield either the next request, storing the parsed item in a spider member, or the parsed list of items if there is no next request to execute.
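A sketch of that chaining pattern, assuming Scrapy's standard Request/FormRequest API; the URLs and form data are made up.

```python
from scrapy import Spider, FormRequest

class ChainSpider(Spider):
    name = "chain"
    start_urls = ["http://example.com/form"]  # hypothetical

    def parse(self, response):
        requests = [
            FormRequest("http://example.com/post",
                        formdata={"id": str(i)},
                        callback=self.parse_item)
            for i in range(3)
        ]
        first = requests.pop(0)
        first.meta["pending"] = requests   # carry the rest of the chain along
        yield first

    def parse_item(self, response):
        # ... parse and store the item on the spider here ...
        pending = response.meta.get("pending", [])
        if pending:
            nxt = pending.pop(0)
            nxt.meta["pending"] = pending  # pass the remainder onward
            yield nxt
```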
1
0
0
0
I've written a Spider which has one start_url. The parse method of my spider scrapes some data and returns a list of FormRequests. The problem comes with the response of that post request. It redirects me to another site with some irrelevant GET parameters. The only parameter which seems to matter is a SESSION_ID posted along in the header. Unfortunately, Scrapy's behavior is to execute my requests one after another and queue the redirect response at the end of the queue. Once all returned FormRequests are executed, scrapy starts to execute all redirects, which all return the same site. How can I circumvent this behavior, so that a FormRequest is executed, and the redirect returned in the request's response is executed before any new FormRequest? Maybe there is another way, like forcing the site somehow to get a new SESSION_ID cookie for each FormRequest. I'm open to any idea that could possibly solve the problem.
Handle Redirects one by one with scrapy
0
1.2
1
0
1
302
21,068,311
2014-01-11T21:48:00.000
0
0
0
1
0
python,google-app-engine
0
21,069,451
0
1
0
true
1
0
You cannot specify an ancestor for the DatastoreInputReader -- except for a namespace -- so the pipeline will always go through all your Domain entities in a given namespace.
1
0
0
0
Is there a way to use the standard DatastoreInputReader from AppEngine's mapreduce with an entity kind requiring ancestors? Let's say I have an entity kind Domain with ancestor kind SuperDomain (useful for transactions); where do I specify in mapreduce_pipeline.MapreducePipeline how to use a specific SuperDomain entity as ancestor to all queries?
DatastoreInputReader using entity kind with ancestor
1
1.2
1
0
0
57
21,069,586
2014-01-12T00:07:00.000
0
0
1
1
1
macos,python-3.x,upgrade
0
21,069,629
0
1
0
true
0
0
As per Martijn Pieters's comment, I used python3 and now it works as expected.
1
1
0
0
I've just installed Python 3.3.3 on my OS X 10.9.1; however, when I run python from the terminal, the version that is indicated is 2.7.5. What have I done wrong and how can I make it right?
python version doesn't update on OS X
0
1.2
1
0
0
57
21,078,720
2014-01-12T18:40:00.000
0
0
0
0
1
python,listbox,tkinter
0
21,079,948
0
2
0
true
0
1
The listbox will fire the virtual event <<ListboxSelect>> whenever the selection changes. If you bind to it, your function will be called whenever the selection changes, even if it was changed via the keyboard.
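A small sketch of that binding, written for Python 2's Tkinter (the tag era); rename the import to tkinter on Python 3.

```python
import Tkinter as tk  # "tkinter" on Python 3

def on_select(event):
    widget = event.widget
    selection = widget.curselection()
    if selection:                        # empty when the selection is cleared
        print(widget.get(selection[0]))

root = tk.Tk()
listbox = tk.Listbox(root)
for item in ("alpha", "beta", "gamma"):
    listbox.insert(tk.END, item)
listbox.bind("<<ListboxSelect>>", on_select)
listbox.pack()
root.mainloop()
```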
1
0
0
0
I have a listbox on a GUI in Tkinter. I would like to implement a routine where, if a listbox item is selected, a function is called (based on this selection) to modify the GUI (add another adjacent listbox). Then if that selection changes, the GUI reverts back to its default view. Can this be done? It seems you would need to associate a function with a listbox selection; I'm not sure how to do this or if it's possible... Does anyone have the secret? It's possible to add "select" buttons to the bottom of my listbox, but I wanted to avoid this extra work for the user and save space on the GUI. Thanks to all in advance! Daniel
Calling a function based on a Listbox current selection "curselection()" in Tkinter
1
1.2
1
0
0
1,733
21,117,002
2014-01-14T15:13:00.000
0
0
0
1
0
python,virtualhost,cherrypy,bottle
0
21,117,703
0
4
0
false
1
0
Perhaps you can simply put nginx in front as a reverse proxy and configure it to send the traffic for the two domains to the right upstream (the CherryPy web server).
1
2
0
0
I have a website (running on an Amazon EC2 instance) built as a Python Bottle application with CherryPy as its front-end web server. Now I need to add another website, with a different, already registered domain name. To reduce cost, I want to utilize the existing website host to do that. Obviously, virtual hosts are the solution. I know Apache mod_wsgi could play the trick, but I don't want to replace CherryPy. I've googled a lot; there are some articles showing how to make virtual hosts on CherryPy, but they all assume CherryPy as web server + web application, not CherryPy as web server and Bottle as application. How do I use CherryPy as the web server and Bottle as the application to support multiple virtual hosts?
How to use CherryPy as Web server and Bottle as Application to support multiple virtual hosts?
0
0
1
0
0
882
21,118,740
2014-01-14T16:33:00.000
0
0
1
0
0
python,virtualenv
0
21,119,115
0
1
0
false
0
0
It's not completely the same thing, but you could try running pip freeze > requirements.txt against that version of Python, then using the resulting file inside your virtualenv with pip install -r requirements.txt to install copies of the modules there.
1
1
0
0
Several questions address the way to make a virtualenv that does include global site-packages. I'm looking for something different: how to create a new virtualenv based on a Python executable from another location in my network, and also to include the libraries that are installed in that location in the network. I have a local desktop machine, but there is an IT-maintained version of Python and associated installed libraries, and it is the ubiquitous Python used by developers. I'm using virtualenv to create several local versions of Python that allow me to try out libraries or change settings, but I'd also like to maintain an installation that is nothing but a pure mirror of that IT-maintained system. So the question is how to make a virtualenv that points at that IT-maintained Python, and which does reference the previously installed packages for that Python and not for my local machine's global site-packages, etc.
How to create virtualenv and include non-global associated installed libraries
0
0
1
0
0
116
21,141,026
2014-01-15T15:10:00.000
0
0
0
0
0
python
0
21,144,008
0
2
0
false
0
0
I would use... a VCS. If it were feasible, I'd hack up a Windows installer that: installs git/subversion/your favorite VCS; does an initial checkout/clone of the repository; and adds a scheduled job to the machine (the Windows equivalent of cron jobs) to run every hour and update the working copies. It could be done in a couple of hours' work and should be simple enough that users just need to run the installer and perhaps choose the location to clone the repo into (which directory to place it in). From there, you push your changes to the repo and the clients' computers will check for updates every hour or so.
1
1
0
0
(This may not be an appropriate question--if there is a better stack site for it, please let me know.) I belong to an organization that distributes sheet music to its users. Right now, we have to individually download each file, and it's a pain. Files are frequently updated, and every time there's a new version we have to download the new one, delete the old one, blah blah blah. I've automated the process myself with Python, so when I run my script I have a nice folder with all the current files. I'm looking for a way to share this with others. I initially thought Dropbox, but that just requires users to go to my Dropbox folder and still do it all manually (I know there's an option to download as a .zip, but many of our members are not very technically proficient). Is there a way to have users sign up and somehow have a folder on their computers download what's in mine? A helpful Google suggestion may be all I need.
Distribute files to users automatically
0
0
1
0
0
67
21,168,690
2014-01-16T17:26:00.000
0
0
0
0
0
python,selenium,selenium-grid2
1
21,194,979
0
2
0
false
0
0
You can restart the node instead of the server.
1
0
0
0
We're using Selenium's Python bindings at work. Occasionally I forget to put the call to WebDriver.quit() in a finally clause, or the tear down for a test. Something bad happens, an exception is thrown, and the session is abandoned and stuck as "in use" on the grid. How can I quit those sessions and return them to being available for use without restarting the grid server?
how do I quit a web driver session after the code has finished executing?
0
0
1
0
1
288
21,179,274
2014-01-17T06:29:00.000
-1
1
0
1
1
python,linux,security
0
21,179,425
0
2
0
false
0
0
If you just want to do this for learning, you can easily build a fake environment with your own faked passwd file. You can use one of the built-in Python encryption methods to generate the passwords. This has the advantage of giving you proper test cases: you know what you are looking for and where you should succeed or fail.
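One possible way to build such a fake passwd-style file is with the standard crypt module (Unix only); the username, salt, and password below are made up.

```python
import crypt

# "ab" is a classic two-character DES salt; the line format is illustrative.
password_hash = crypt.crypt("letmein", "ab")
line = "testuser:%s:1001:1001::/home/testuser:/bin/bash" % password_hash
with open("fake_passwd", "w") as f:
    f.write(line + "\n")
```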
1
4
0
0
Preface: I am fully aware that this could be illegal if not on a test machine. I am doing this as a learning exercise for learning Python for security and penetration testing. This will ONLY be done on a Linux machine that I own and have full control over. I am learning Python as my first scripting language, hopefully for use down the line in a security position. Upon asking for ideas of scripts to help teach myself, someone suggested that I create one for user enumeration. The idea is simple: cat out the user names from /etc/passwd from an account that does NOT have sudo privileges and try to 'su' into those accounts using the one password that I have. A reverse brute force of sorts: instead of a single user with a list of passwords, I'm using a single password with a list of users. My issue is that no matter how I have approached this, the script hangs or stops at the "Password: " prompt. I have tried multiple methods, from using os.system and echoing the password in, to passing it as a variable, to using the pexpect module. Nothing seems to be working. When I Google it, all of the recommendations point to using sudo, which in this scenario isn't a valid option, as the user I have access to doesn't have sudo privileges. I am beyond desperate on this, just to finish the challenge. I have asked on reddit, in IRC, and all of my programming wizard friends, and beyond echo "password" | sudo -S su, which can't work because the user is not in the sudoers file, I am coming up short. When I try the same thing with just echo "password" | su, I get "su: must be run from a terminal". This happens at both a # and a $ prompt. Is this even possible?
Learning python for security, having trouble with su
1
-0.099668
1
0
0
230
21,200,565
2014-01-18T05:46:00.000
0
1
0
0
0
python,linux,curl
0
21,940,288
0
4
0
false
0
0
The requests library is the most supported and advanced way to do this.
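A minimal sketch of posting a picture with requests; the upload URL and form-field name are made up.

```python
import requests

with open("picture.jpg", "rb") as f:
    # files= makes requests send a multipart/form-data upload
    response = requests.post("http://example.com/upload", files={"file": f})
print(response.status_code)
```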
1
0
0
0
I have a python program that takes pictures and I am wondering how I would write a program that sends those pictures to a particular URL. If it matters, I am running this on a Raspberry Pi. (Please excuse my simplicity, I am very new to all this)
Curl Equivalent in Python
0
0
1
0
1
2,487
21,201,970
2014-01-18T08:44:00.000
0
0
1
0
0
python,random
0
21,202,082
0
5
0
false
0
0
Yes, for repeated sampling from one population, @MaxLascombe's answer is OK. If you do not want duplicates in your sample, you should kick each chosen item out, then use @MaxLascombe's answer on the rest of the list.
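A sketch of the kick-the-chosen-one-out approach using only random.randint, as the question requires; the names are placeholders.

```python
import random

names = ["Alice", "Bob", "Carol", "Dave", "Eve", "Frank", "Grace"]
pool = list(names)              # copy, so the original list survives
chosen = []
for _ in range(5):
    i = random.randint(0, len(pool) - 1)
    chosen.append(pool.pop(i))  # removing prevents duplicates
print(chosen)
```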
1
0
0
0
How do I use random.randint() to select random names from a list in Python? I want to print 5 names from that list. I know how to use random.randint() for numbers, but I don't know how to select random names from a given list. We are not allowed to use random.choice. Help me, please.
randomly choose from list using random.randint in python
0
0
1
0
0
2,543
21,203,648
2014-01-18T11:47:00.000
0
1
1
0
0
python,unit-testing,web-crawler
0
21,203,787
0
2
0
false
0
0
Unit testing verifies that your code does what you expect in a given environment. You should make sure all other variables are as you expect them to be and test your single method. To do that for methods which use third-party APIs, you should probably mock them using a mocking library. By mocking you provide the data you expect and verify that your method works as expected. You can also try to separate your code so that the part which makes the API request and the part that parses/uses it are separate, and unit test the second part with a certain example API response that you provide.
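A small sketch of the mocking idea; the Crawler class here is a made-up stand-in so the test is self-contained. On Python 2 the same API ships as the standalone mock package.

```python
import re
import unittest
from unittest import mock  # pip install mock on Python 2


class Crawler(object):
    """Toy stand-in for the real crawler under test."""
    def fetch(self, url):
        raise NotImplementedError("real network call lives here")

    def links(self, url):
        return re.findall(r'href="([^"]+)"', self.fetch(url))


class TestCrawler(unittest.TestCase):
    def test_links_are_extracted(self):
        crawler = Crawler()
        html = '<a href="http://example.com/a">a</a>'
        # replace the network-bound fetch with canned HTML
        with mock.patch.object(crawler, "fetch", return_value=html):
            self.assertEqual(crawler.links("http://start"),
                             ["http://example.com/a"])


if __name__ == "__main__":
    unittest.main()
```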
2
3
0
0
Sorry if this is a really dumb question but I've been searching for ages and just can't figure it out. So I have a question about unit testing, not necessarily about Python, but since I'm working with Python at the moment I chose to base my question on it. I get the idea of unit testing, but the only thing I can find on the internet are the very simple unit tests. Like testing if the method sum(a, b) returns the sum of a + b. But how do you apply unit testing when dealing with a more complex program? As an example, I have written a crawler. I don't know what it will return, else I wouldn't need the crawler. So how can I test that the crawler works properly without knowing what the method will return? Thanks in advance!
Python - unit testing
0
0
1
0
0
569
21,203,648
2014-01-18T11:47:00.000
4
1
1
0
0
python,unit-testing,web-crawler
0
21,203,798
0
2
0
true
0
0
The whole crawler would probably be tested functionally (we'll get there). As for unit testing, you have probably written your crawler with several components, like a page parser, URL recogniser, fetcher, redirect handler, etc. These are your UNITS. You should unit test each of them, or at least those with even slightly complicated logic, where you can expect some output for some input. Remember that sometimes you'll test behaviour, not input/output, and this is where mocks and stubs may come in handy. As for functional testing: you'll need to create some test scenarios, like webpages with links to other webpages that you'll create, and set them up on some server. Then you'll need to perform crawling on the webpages YOU created and check whether your crawler is behaving as expected (you should know what to expect, because you'll be creating those pages). Also, sometimes it is good to perform integration tests between unit and functional testing. If you have some components working together (for example, the fetcher using the redirect handler), it is good to check whether the two work together as expected (for example, you may create a resource on your own server that, when fetched, returns a redirect HTTP code, and check whether it is handled as expected). So, in the end: create unit tests for the components making up your app, to see if you haven't made a simple mistake; create integration tests for co-working components, to see if you glued everything together just fine; and create functional tests, to be sure that your app will work as expected (because some errors may come from the design, not from the implementation).
2
3
0
0
Sorry if this is a really dumb question but I've been searching for ages and just can't figure it out. So I have a question about unit testing, not necessarily about Python, but since I'm working with Python at the moment I chose to base my question on it. I get the idea of unit testing, but the only thing I can find on the internet are the very simple unit tests. Like testing if the method sum(a, b) returns the sum of a + b. But how do you apply unit testing when dealing with a more complex program? As an example, I have written a crawler. I don't know what it will return, else I wouldn't need the crawler. So how can I test that the crawler works properly without knowing what the method will return? Thanks in advance!
Python - unit testing
0
1.2
1
0
0
569
21,206,568
2014-01-18T16:14:00.000
1
0
0
0
1
python,html,webserver
0
21,260,040
0
1
0
true
1
0
This has nothing to do with BaseHTTPRequestHandler: its purpose is to serve HTML; how you generate the HTML is another topic. You should use a templating tool. There are a lot available for Python; I would suggest Mako or Jinja2. Then, in your code, just generate the real HTML from the template and use it in your handler response.
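A sketch of one template serving two similar pages inside a handler, assuming Jinja2 is installed; the template string, paths, and page contents are made up (Python 2 module names, matching the question's era).

```python
from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler
from jinja2 import Template

PAGE = Template("<html><body><h1>{{ title }}</h1>{{ body }}</body></html>")

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # the shared template is filled differently per request
        if self.path == "/a":
            html = PAGE.render(title="Page A", body="Contents of A")
        else:
            html = PAGE.render(title="Page B", body="Contents of B")
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(html.encode("utf-8"))

HTTPServer(("", 8000), Handler).serve_forever()
```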
1
0
0
0
I am building a small program with Python, and I would like to have a GUI for some configuration stuff. Now I have started with a BaseHTTPServer, and I am implementing a BaseHTTPRequestHandler to handle GET and POST requests. But I am wondering what would be best practice for the following problem. I have two separate requests that result in very similar responses. That is, the two pages that I return have a lot of html in common. I could create a template html page that I retrieve when either of these requests is done and fill in the missing pieces according to the specific request. But I feel like there should be a way where I could directly retrieve two separate html pages, for the two requests, but still have one template page so that I don't have to copy this. I would like to know how I could best handle this, e.g. something scalable. Thanks!
Use template html page with BaseHttpRequestHandler
0
1.2
1
0
1
205
21,209,496
2014-01-18T20:31:00.000
5
0
0
0
0
python-3.x,pygame
0
21,209,675
0
1
0
true
0
1
There are two methods available for getting the width and height of a surface. The first is get_size(), which returns a tuple (width, height); to access the width, for instance, you would use surface.get_size()[0], and for the height, surface.get_size()[1]. The second is to use get_width() and get_height(), which return the width and the height directly. I suggest going through the Python tutorial to learn more about basic data structures such as tuples.
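In code, the two routes look like this; the image file name is made up.

```python
import pygame

image = pygame.image.load("picture.png")  # hypothetical file
width, height = image.get_size()          # route 1: one tuple
width2 = image.get_width()                # route 2: separate calls
height2 = image.get_height()
print(width, height)
```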
1
1
0
0
How do you get the width and height of an image imported into pygame? I got the size using Surface.get_size, but I don't know how to get the width and height separately.
Getting width and height of an image in Pygame
0
1.2
1
0
0
7,226
21,221,141
2014-01-19T18:57:00.000
0
0
0
0
0
javascript,python,asp.net,pyqt,qtwebkit
0
21,532,850
0
1
0
true
1
1
Since nobody answered, I will post my work-around. Basically, I wanted to "transfer" my session from Mechanize (the Python module) to QtWebKit's QWebView (PyQt4 module) because the vast majority of my project was automated and headless, but I had encountered a roadblock where I had no choice but to have the user manually enter data into a possible resulting page (as the form was different each time depending on circumstances). Instead of transferring sessions, I met this requirement by utilizing QWebView's JavaScript functionality. My method went like this: Load the page in Mechanize, and save the downloaded HTML to a local temporary file. Load this local file in QWebView. The user can now enter the required data into the local copy of this page. Locate the form fields on this page, and pull the data the user entered using JavaScript. You can do this by getting the main frame object for the page (QWebView->page()->mainFrame()) and then evaluating JavaScript code to accomplish the above task (use evaluateJavaScript()). Take the data you have extracted from the form fields, and use it to submit the form over the connection you still have open with Mechanize. That's it! A bit of a work-around, but it works nonetheless :\
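For step 4, the PyQt4/QtWebKit calls involved might look like the sketch below; the HTML and form-field name are made up, and the exact return type of evaluateJavaScript depends on the QVariant API version in use.

```python
import sys
from PyQt4.QtGui import QApplication
from PyQt4.QtWebKit import QWebView

app = QApplication(sys.argv)
view = QWebView()
view.setHtml('<form><input name="username" value="alice"></form>')
frame = view.page().mainFrame()
# evaluateJavaScript returns the value of the last expression
value = frame.evaluateJavaScript("document.forms[0].username.value")
print(value)
```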
1
1
0
0
The issue: I have written a ton of code (to automate some pretty laborious tasks online), and have used the mechanize library for Python to handle network requests. It is working very well, except now I have encountered a page which I need javascript functionality... mechanize does not handle javascript. Proposed Solution: I am using PyQt to write the GUI for this app, and it comes packaged with QtWebKit, which DOES handle javascript. I want to use QtWebKit to evaluate the javascript on the page that I am stuck on, and the easiest way of doing this would be to transfer my web session from mechanize over to QtWebKit. I DO NOT want to use PhantomJS, Selenium, or QtWebKit for the entirety of my web requests; I 100% want to keep mechanize for this purpose. I'm wondering how I might be able to transfer my logged in session from mechanize to QtWebKit. Would this work? Transfer all cookies from mechanize to QtWebView Transfer the values of all state variables (like _VIEWSTATE, etc.) from mechanize to QWebView (the page is an ASP.net page...) Change the User-Agent header of QWebView to be identical to mechanize... I don't really see how I could make the two "browsers" appear more identical to the server... would this work? Thanks!
Python - Transferring session between two browsers
0
1.2
1
0
0
384
21,221,431
2014-01-19T19:20:00.000
5
0
1
0
0
python
0
21,221,460
0
2
0
true
0
0
Just use python -m pdb mycode.py, which will run your code in the python debugger (pdb module). In the debugger you can execute arbitrary code, watch variables, and jump to different places in the code. Specifically, n will execute the next line and h will show you the debugger help.
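A tiny illustration of both entry points; the script name and function are placeholders.

```python
# Run a whole script under the debugger from the shell:
#   python -m pdb mycode.py
# or drop into the debugger programmatically at a chosen line:
import pdb

def compute(x):
    pdb.set_trace()  # pauses here; 'n' steps to the next line, 'p x' prints x
    return x * 2

compute(21)
```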
1
1
0
0
Is it possible to run code line by line with Python, including running any module code, when used, line by line as well? I would like to run some code line by line and watch as each line goes through the processing phase, and see just what code is getting executed when certain actions occur. I'm curious how certain values are getting passed off to the interpreter.
Python: Is line by line execution possible
1
1.2
1
0
0
323
21,222,621
2014-01-19T21:02:00.000
2
0
1
0
0
python
0
21,222,941
0
5
0
false
0
0
You should use the decimal module. Each number knows how many significant digits it has.
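A small sketch of what the decimal module preserves; the values are illustrative.

```python
from decimal import Decimal

x = Decimal("10.00")      # the two decimal places are remembered
print(x / 100)            # 0.1 -- no integer-division surprise
print(Decimal("10") / 3)  # 3.333... to the context's precision
```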
1
2
0
0
I'd like to pass numbers around between functions while preserving the decimal places of the numbers. I've discovered that if I pass a float like '10.00' into a function, the decimal places don't get used. This messes up operations like calculating percentages. For example, x * (10 / 100) will always return 0, but if I manage to preserve the decimal places, I end up doing x * (10.00 / 100), which returns an accurate result. I'd like a technique that enables consistency when I'm working with numbers whose decimal places can hold zeroes.
In python, how do I preserve decimal places in numbers?
0
0.07983
1
0
0
10,506
21,222,632
2014-01-19T21:03:00.000
4
0
1
0
1
python,py2exe,pyinstaller
0
21,222,812
0
2
0
true
0
0
The problem you would have is that if your friend decided to change something in the config, he'd have to ask you to do it, run py2exe again and send the .exe to him again. With an .ini file, he'd simply edit the file.
2
1
0
0
I'm writing a script for a colleague who runs Windows, but my development environment is GNU/Linux. I have a bunch of variables that need to be configurable, so I put them all in a config.py that I've imported into the main project. Originally I planned to ask him to install Cygwin, but then I thought of packaging it into an exe with py2exe or pyinstaller. I've not used either of these before, so I don't know how they work. Would I have problems with the config.py file, or should I be using an actual module like ConfigParser to store my settings so that they can be separate from the .exe file?
py2exe/pyinstaller: Is it bad practice to put all configurable variables in a .py file?
0
1.2
1
0
0
338
21,222,632
2014-01-19T21:03:00.000
1
0
1
0
1
python,py2exe,pyinstaller
0
21,222,800
0
2
0
false
0
0
I would definitely use a config parser, or even just a JSON or INI file.
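A sketch of the INI route with the standard library's parser (named ConfigParser on Python 2, configparser on Python 3); the file name, section, and options are made up.

```python
import ConfigParser  # "configparser" on Python 3

# settings.ini might contain:
#   [server]
#   host = example.com
#   port = 8080
config = ConfigParser.ConfigParser()
config.read("settings.ini")
host = config.get("server", "host")
port = config.getint("server", "port")
```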
2
1
0
0
I'm writing a script for a colleague who runs Windows, but my development environment is GNU/Linux. I have a bunch of variables that need to be configurable, so I put them all in a config.py that I've imported into the main project. Originally I planned to ask him to install Cygwin, but then I thought of packaging it into an exe with py2exe or pyinstaller. I've not used either of these before, so I don't know how they work. Would I have problems with the config.py file, or should I be using an actual module like ConfigParser to store my settings so that they can be separate from the .exe file?
py2exe/pyinstaller: Is it bad practice to put all configurable variables in a .py file?
0
0.099668
1
0
0
338
21,242,498
2014-01-20T19:28:00.000
0
0
1
0
0
python,git,repository,pygit2
0
24,710,103
0
2
1
false
0
0
You can do it either way; I find the index.add() method more straightforward. You can fetch all the files to be added to or removed from the index using Repository.status(), which returns a dictionary containing the filename as key and the status of the file as value. Depending on the status values, deleted files will need to be removed from the index using index.remove(filename). Write this index to an in-memory tree using index.write_tree(), which will return a tree id to be used in Repository.commit(). However, for the changes to be saved to disk, use index.write() too.
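A sketch of the index.add() route, based on pygit2's documented API; the repository path, file path, and identity are made up.

```python
import pygit2

repo = pygit2.Repository("path/to/repo")
repo.index.add("relative/path/to/file")  # stage a file from the workdir
repo.index.write()                       # persist the index to disk
tree_id = repo.index.write_tree()        # tree object for the commit

author = pygit2.Signature("Alice", "alice@example.com")
# empty parent list for the first commit of a new repository
repo.create_commit("HEAD", author, author, "Add file", tree_id, [])
```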
1
1
0
0
I'm a little confused about how to get started with PyGit2. When adding files (plural) to a newly created repo, should I add them with index.add('path/to/file'), or would I be better off creating a TreeBuilder and using tb.insert('name', oid, GIT_FILEMODE_BLOB) to add new content? In the second case, I am stumped as to how I create the tree object needed to commit to a newly created repo. Anyone?
PyGit2 - TreeBuilder.insert('name',blobid,GIT_FILEMODE_BLOB) vs index.add( 'path/to/file' )?
0
0
1
0
0
406
21,274,359
2014-01-22T04:43:00.000
5
0
0
1
0
python,macos,command-line,scrapy,bin
1
21,274,416
0
1
0
true
1
0
First, next time you get a Permission Denied from pip uninstall foo, try sudo pip uninstall foo rather than trying to do it manually. But it's too late to do that now; you've already erased the files that pip needs to do the uninstall. Also: Up until this point, I've resisted the urge to just delete it. But I know that those folders are hidden by Apple for a reason... Yes, they're hidden so that people who don't use command-line programs, write their own scripts, etc. will never have to see them. That isn't you. You're a power user, and sometimes you will need to see stuff that Apple hides from novices. You already looked into /Library, so why not /usr/local? The one thing to keep in mind is learning to distinguish stuff installed by OS X itself from stuff installed by third-party programs. Basically, anything in /System/Library or /usr is part of OS X, so you should generally not touch it or you might break the OS; anything installed in /Library or /usr/local is not part of OS X, so the worst you could do is break some program that you installed. Also, remember that you can always back things up. Instead of deleting a file, move it to some quarantine location under your home directory. Then, if it turns out you made a mistake, just move it back. Anyway, yes, it's safe to delete /usr/local/bin/scrapy. Of course it will break scrapy, but that's the whole point of what you're trying to do, right? On the other hand, it's also safe to leave it there, except that if you accidentally type scrapy at a shell prompt, you'll get an error about scrapy not being able to find its modules instead of an error about no such program existing. Well, that, and it may get in the way of you trying to re-install scrapy. Anyway, what I'd suggest is this: pip install scrapy again. When it complains about files that it doesn't want to overwrite, those files must be from the previous installation, so delete them, and try again. Repeat until it succeeds. If at some point it complains that you already have scrapy (which I don't think it will, given what you posted), do pip install --upgrade scrapy instead. If at some point it fails because it wants to update some Apple pre-installed library in /System/Library/…/lib/python, don't delete that; instead, switch to pip install --no-deps scrapy. (Combine this with the --upgrade flag if necessary.) Normally, the --no-deps option isn't very useful; all it does is get you a non-working installation. But if you're only installing to uninstall, that's not a problem. Anyway, once you get it installed, pip uninstall scrapy, and now you should be done, all traces gone.
1
2
0
1
So I've been having a lot of trouble lately with a messy install of Scrapy. While I was learning the command line, I ended up installing with pip and then easy_install at the same time. Idk what kinda mess that made. I tried the command pip uninstall scrapy, and it gave me the following error: OSError: [Errno 13] Permission denied: '/Library/Python/2.6/site-packages/Scrapy-0.22.0-py2.6.egg/EGG-INFO/dependency_links.txt' so, I followed the path and deleted the text file... along with anything else that said "Scrapy" within that path. There were two versions in the /site-packages/ directory. When I tried again with the pip uninstall scrapy command, I was given the following error: Cannot uninstall requirement scrapy, not installed That felt too easy, so I went exploring through my directory hierarchy and I found the following in the usr/local/bin directory: -rwxr-xr-x 1 greyelerson staff 173 Jan 21 06:57 scrapy* Up until this point, I've resisted the urge to just delete it. But I know that those folders are hidden by Apple for a reason... 1.) Is it safe to just delete it? 2.) Will that completely remove Scrapy, or are there more files that I need to remove as well? (I haven't found any robust documentation on how to remove Scrapy once it's installed)
Safely removing program from usr/local/bin on Mac OSX 10.6.8?
0
1.2
1
0
0
10,163
21,279,835
2014-01-22T10:08:00.000
0
0
0
0
0
python,django
0
21,280,027
0
1
1
true
1
0
You could have a base app if you want to, but you don't need one. All apps are wired up when you declare them in INSTALLED_APPS in the settings; each app has a urls.py file that will catch the route and call one of the views in that app if there's a match. I use a base app to define global templates, global static files, and helpers. Hope this helps.
1
0
0
0
I'm pretty new with Django; I've been reading and watching videos, but there is one thing that is confusing me. It is related to the apps. I watched a video where a guy said that it is convenient to have apps that do a single thing, so if I have a big project, I will have a lot of apps. I made an analogy to a bunch of classes, where each app would be a class with its own functions and elements; is this a correct interpretation? In that case, is there an app that acts like the main method of a class? I mean, I don't know how to wire up all the applications I have. Is there a principal app in charge of managing the others, or how does it work? Thanks!
how to wire all the django apps
0
1.2
1
0
0
66
21,287,447
2014-01-22T15:44:00.000
2
0
1
0
0
jira,jira-rest-api,python-jira
0
21,292,775
0
2
0
false
0
0
Labels are a field that is potentially shared across all issues, but I don't think there is a REST API to get the list of all labels. So you'd either have to write a JIRA add-on to provide such a resource, or retrieve all the issues in question and iterate over them. You can simplify things by excluding issues that have no label with the JQL: project = MYPROJ and labels is not empty. You can also restrict the fields that are returned from the search using the "fields" parameter of search_issues.
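A sketch of that iteration with the jira-python client; the server URL, credentials, and project key are made up, and maxResults=False asks the client to page through all matching issues.

```python
from jira import JIRA

jira = JIRA("https://jira.example.com", basic_auth=("user", "password"))
labels = set()
issues = jira.search_issues(
    "project = MYPROJ AND labels is not EMPTY",
    fields="labels",    # only fetch what we need
    maxResults=False)   # retrieve every matching issue
for issue in issues:
    labels.update(issue.fields.labels)
print(sorted(labels))
```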
1
3
0
0
I am using the Python API for Jira. I would like to get a list of all the labels that are being used in a project. I know that issue.fields.labels will get me just the labels of an issue, but I am interested in looping through all the labels used in a project. I found this to list all components in a project: components = jira.project_components(projectID). I am looking for something similar, but for labels...
In the Jira Python API, how can I get a list of all labels used in a project?
0
0.197375
1
0
0
6,767
21,287,667
2014-01-22T15:53:00.000
0
0
0
0
0
python,histogram,data-analysis
0
21,287,882
0
3
0
false
0
0
What format is your data in? Python offers modules to read data from a variety of formats (CSV, JSON, XML, ...). CSV is a very common one that suffices for many cases (the csv module is part of the standard library). Typically you write a small routine that casts the different fields as expected (strings to floating-point numbers, dates, integers, ...) and puts your data in a numpy matrix (np.array) where each row corresponds to a sample and each column to an observation. For the plots, check matplotlib. It is really easy to generate graphs, especially if you have some previous experience with Matlab.
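A small sketch of that pipeline: read one numeric column with csv and plot a histogram with matplotlib. The file name and column name are made up.

```python
import csv
import matplotlib.pyplot as plt

ages = []
with open("census.csv") as f:
    for row in csv.DictReader(f):
        ages.append(float(row["age"]))  # cast the field as expected

plt.hist(ages, bins=20)
plt.xlabel("age")
plt.ylabel("count")
plt.show()
```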
1
1
1
0
I'm new to python and was curious as to how, given a large set of data consisting of census information, I could plot a histogram or graph of some sort. My main question is how to access the file, not exactly how the graph should be coded. Do I import the file directly? How do I extract the data from the file? How is this done? Thanks
Plotting in Python
0
0
1
0
0
498
21,312,425
2014-01-23T15:24:00.000
1
0
1
0
0
python,dictionary,key,multiple-value
0
21,312,503
0
2
0
false
0
0
What you need is to make a custom dictionary class whose __getitem__ method rounds down the key before calling the standard __getitem__ method.
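A sketch of such a class; binning floats this way can need care around representation error, so the round() call below keeps the binned key on the same grid as the stored keys.

```python
class BinnedDict(dict):
    def __getitem__(self, key):
        binned = round(int(key / 0.05) * 0.05, 2)  # floor to the 0.05 grid
        return dict.__getitem__(self, binned)

d = BinnedDict({0.25: "a", 0.30: "b", 0.35: "c"})
print(d[0.264])  # "a" -- 0.264 floors to 0.25
print(d[0.313])  # "b"
print(d[0.367])  # "c"
```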
1
1
0
0
I want to make a Python dictionary. I want values like 0.25, 0.30, 0.35 to be keys in this dictionary. The problem is that I have values like 0.264, 0.313, 0.367. I want these values to access the keys; e.g. I want every value from 0.25 (inclusive) to 0.30 (exclusive) to access the value under the key 0.25. Any ideas how to do this? I think I've done that before somehow, but I have no ideas right now. Thanks in advance.
How to make a dictionary in which multiple values access one value (Python)
0
0.099668
1
0
0
143
21,324,050
2014-01-24T03:48:00.000
-1
0
0
0
0
python,django,heroku,rpc,bitcoin
0
21,972,946
0
2
0
false
1
0
You can use SSL with RPC to hide the password: set rpcssl=1 in your bitcoin.conf.
1
0
0
0
Hey, I was wondering if anyone knew how to connect to a bitcoin wallet located on another server with bitcoinrpc. I am running a web program made in Django and using a Python library called bitcoinrpc to make connections. When testing locally, I can use bitcoinrpc.connect_to_local(), or even bitcoinrpc.connect_to_remote('account','password'), and this works as well, as long as the account and password match the values specified in my 'bitcoin.conf' file. I can then use the connection object to get values and do some tasks in my Django site. The third parameter in connect_to_local defaults to localhost. I was wondering: A) What do I specify for this third parameter in order to connect from my webserver to the wallet stored on my home comp (is it my IP address?) B) Because the wallet is on my PC and not some dedicated server, does that mean that my IP will change and I won't be able to access the wallet? C) The connection string is in the Django app, which is hosted on Heroku. Heroku apps are launched by pushing with git, but I believe it is to a private repository. Still, if anyone could see the first few lines of my 'view' they would have all they need to take my BTC (or, more accurately, mBTC). Does anyone know how bad this is, or any ways to go about doing BTC payments/movements in a more secure way? Thanks a lot.
Bitcoinrpc connection to remote server
1
-0.099668
1
0
0
1,399
21,338,216
2014-01-24T16:59:00.000
-1
1
0
0
1
python,heroku,notifications,worker
0
21,348,604
0
2
0
false
1
0
The easiest way is to push a message from the worker to your API: a log entry, or anything else you need to have in your app.
1
1
0
0
I'm writing python app which currently is being hosted on Heroku. It is in early development stage, so I'm using free account with one web dyno. Still, I want my heavier tasks to be done asynchronously so I'm using iron worker add-on. I have it all set up and it does the simplest jobs like sending emails or anything that doesn't require any data being sent back to the application. The question is: How do I send the worker output back to my application from the iron worker? Or even better, how do I notify my app that the worker is done with the job? I looked at other iron solutions like cache and message queue, but the only thing I can find is that I can explicitly ask for the worker state. Obviously I don't want my web service to poll the worker because it kind of defeats the original purpose of moving the tasks to background. What am I missing here?
Ironworker job done notification
0
-0.099668
1
0
0
209
21,348,299
2014-01-25T08:17:00.000
0
0
1
0
0
python,python-3.x
0
21,348,388
0
2
0
false
0
0
I've just manually corrupted a pickled file. It threw an error. Presumably, if a file does not throw an error it's either the file you pickled, or it's been so carefully tampered with that it fools the pickle module. In that case, I think you're pretty much sunk.
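In practice that means wrapping the load in a try/except; the file name is made up, and a truncated dump typically surfaces as EOFError.

```python
import pickle

try:
    with open("foo.pkl", "rb") as f:
        data = pickle.load(f)
except (EOFError, pickle.UnpicklingError) as exc:
    print("pickle appears corrupt or truncated:", exc)
    data = None
```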
1
0
0
0
When I load a pickle using pickle.load("foo"), how do I know if what's read back is corrupt or not? For example, if I'm pickling a large list using pickle.dump and kill my Python process before it's finished, what would the consequences then be, and how should I deal with them?
On Pickle Corruption
0
0
1
0
0
2,020
21,366,266
2014-01-26T16:45:00.000
0
0
1
0
0
python,nlp,nltk,wordnet,word-sense-disambiguation
0
21,422,720
0
1
0
false
0
0
If a word has multiple synsets then you can say it is a polysemous word.
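A sketch of that check, assuming NLTK with the WordNet corpus downloaded; the sentence is the one from the question.

```python
from nltk.corpus import wordnet as wn

query = "Dog is barking at tree"
for word in query.lower().split():
    senses = wn.synsets(word)
    if len(senses) > 1:
        print("%s is polysemous (%d senses)" % (word, len(senses)))
```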
1
0
0
0
If my input query is "Dog is barking at tree", the word "bark" here is a polysemous word, and we know that. But how do I check this in code, in Python, using WordNet as a lexical database?
How to find polysemous words from given input query?
0
0
1
0
0
261
21,395,056
2014-01-28T01:12:00.000
0
0
0
0
0
python,pyqt,qt-designer
0
28,092,434
0
2
0
false
0
1
In case someone else has a similar problem and the reason isn't a wrong object: the property's content can be accessed with toString().
1
3
0
0
I've created a dynamic property in the Designer interface. How do I access this property in my code? I don't see any properties listed with the name I've provided. I've found a dynamicPropertyNames property that contains a QByteArray object and the name I provided, but I cannot figure out how to access the data I stored (nor do I know if this is even the correct place to be querying). Thanks!
How do I access a dynamic property in PyQt?
0
0
1
0
0
1,777
21,397,605
2014-01-28T05:30:00.000
0
1
0
0
0
python-2.7,openerp,base
0
21,398,386
0
2
0
false
1
0
Why are you going into Installed Modules, searching for the base module, and updating it? You only have to update a module when you have changed its XML files; changes to .py files alone do not require an update. If you have changed the XML files of some modules, update only those modules. If you update the base module, it will update every module installed in your database, because every module depends on base; base is the kernel of all our modules. So if you have made some changes in sale, search for sale and update only the sale module; do not update the base module. Regards,
2
0
0
0
I'm new to OpenERP. I have modified the base module, and when I go to Installed Modules, search for the BASE module, and click the upgrade button, it takes nearly 5 minutes. Can anyone tell me how to reduce the time taken to upgrade an existing module? Note: I have the Messaging, Sales, Invoicing, Human Resources, Tools and Reporting modules installed; is it because I have more modules installed? Thanks in advance.
Why is it taking more time when I upgrade a module in OpenERP
0
0
1
0
0
155
21,397,605
2014-01-28T05:30:00.000
1
1
0
0
0
python-2.7,openerp,base
0
21,398,452
0
2
0
true
1
0
As you have said you are new to OpenERP, let me tell you something that will be very helpful: never make changes in standard modules, and especially not in base. If you want to add or remove any functionality of any module, do it by creating a customized module in which you inherit the object you want and make the changes as per your requirements. Now, regarding the time spent when upgrading the base module: this happens because when you update base, it automatically updates all the other modules that are already installed (in your case Sales, Invoicing, Human Resources, Tools and Reporting), as base is the main module on which all the other modules depend. So it is better to make your changes in a customized module and upgrade that particular module only, not base. Hope this helps.
2
0
0
0
I'm new to OpenERP. I have modified the base module, and when I go to Installed Modules, search for the BASE module, and click the upgrade button, it takes nearly 5 minutes. Can anyone tell me how to reduce the time taken to upgrade an existing module? Note: I have the Messaging, Sales, Invoicing, Human Resources, Tools and Reporting modules installed; is it because I have more modules installed? Thanks in advance.
Why is it taking more time when I upgrade a module in OpenERP
0
1.2
1
0
0
155
21,419,369
2014-01-28T23:49:00.000
1
0
1
0
0
python
0
21,419,493
0
3
0
true
0
0
Internally, the int object is stored in two's-complement representation, as in C (well, this is true if the value's range allows it; Python can automagically convert it to some other representation if it no longer fits). To get the string representation you have to change that to a string (a string being merely an immutable sequence of chars). The algorithm is simple arithmetic: divide the number by 10 (integer division) and keep the remainder; adding that to the character code of '0' gives you the units digit. Continue with the result of the division until it is zero. It's as simple as that. This approach works with any integer representation, but of course it is more efficient to call the ltoa C library function, or equivalent C code, where possible than to code it in Python.
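The described algorithm, written out in pure Python for illustration:

```python
def int_to_str(n):
    """Digit-by-digit conversion, as described above."""
    if n == 0:
        return "0"
    sign = "-" if n < 0 else ""
    n = abs(n)
    digits = []
    while n:
        n, remainder = divmod(n, 10)             # peel off the units digit
        digits.append(chr(ord("0") + remainder))
    return sign + "".join(reversed(digits))

print(int_to_str(-4203))  # -4203
```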
1
5
0
0
I understand it's easy to convert an int to a string by using the built-in method str(). However, what's actually happening? I understand it may point to the __str__ method of the int object, but how does it then compute the “informal” string representation? I tried looking at the source and didn't find a lead; any help appreciated.
What's actually happening when I convert an int to a string?
0
1.2
1
0
0
118
21,420,269
2014-01-29T01:20:00.000
2
0
0
0
0
python,python-2.7,max,pickle
0
21,420,290
0
3
0
false
0
0
You would need to load your existing pickle'd object, modify it, and then dump it again with the modifications.
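A sketch of the load-modify-dump cycle for high scores; the file name is made up.

```python
import os
import pickle

SCORES_FILE = "scores.pkl"  # hypothetical path

def add_score(new_score):
    scores = []
    if os.path.exists(SCORES_FILE):
        with open(SCORES_FILE, "rb") as f:
            scores = pickle.load(f)   # load the existing list first
    scores.append(new_score)
    with open(SCORES_FILE, "wb") as f:
        pickle.dump(scores, f)        # dump the whole, updated list
    return max(scores)                # the current high score
```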
1
1
0
0
I am trying to save high scores in a game that I am creating, but each time I do a pickle.dump, it overwrites my previous data. Any help?
I am trying to save high scores with pickle, but how do I add to an already pickled document and then get the max?
0
0.132549
1
0
0
262
21,439,346
2014-01-29T18:19:00.000
2
0
1
0
0
python,permutation,alphabetical
0
21,439,435
0
4
0
false
0
0
It might be faster to run it in reverse: index your document, and for each word, see if it is a subset of your list of letters.
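A sketch of that reverse check using collections.Counter; words.txt is a hypothetical word list, one word per line, and the letters are placeholders.

```python
from collections import Counter

letters = Counter("aelpst")  # the inputted letters
with open("words.txt") as f:
    for word in f.read().split():
        # an empty Counter difference means every needed letter is available
        if not (Counter(word) - letters):
            print(word)
```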
1
4
0
0
So I am making a word generator that takes several inputted letters, puts them in all possible positions, and matches them with a document to find words. If I am approaching this wrong please tell me! If not how can I do this? Thanks
How can I generate a list of all possible permutations of several letters?
0
0.099668
1
0
0
11,854
21,445,897
2014-01-30T00:55:00.000
0
0
0
1
0
python,web-services,google-app-engine,rest,qualtrics
0
21,449,385
0
1
0
true
1
0
I am not familiar with Qualtrics, so I will answer (b) first. You can write a Python web service in a variety of ways, depending on your choice: you could write a simple GET handler, use Google Cloud Endpoints, or use one of several web-service Python libraries. Having said that, a quick glance at Qualtrics indicated that it requires an RSS feed as the result format (I could be wrong). So what you will need to take care of while doing (b) is to ensure that the output is in a format that Qualtrics understands and can parse. For example, if you have to return RSS, you could write your Python web service to return that data. Optionally, it can also take one or more parameters to fine-tune the results.
1
0
0
0
Has anyone out there created a) a web service for Qualtrics or b) a Python web service on Google App Engine? I need to build some functionality into a Qualtrics survey that it seems only a web service (in the Qualtrics Survey Flow) could provide, like passing parameters to a web service and getting a response back. I've looked at GAE Protocol RPC, but I'm not quite sure if that's the right path. Qualtrics gave me a PHP code example, but I don't know how to begin translating it to Python and/or GAE.
Creating a web service for Qualtrics written in Python on Google App Engine
0
1.2
1
0
0
909
21,457,402
2014-01-30T13:06:00.000
-1
0
1
0
0
python,matplotlib,tkinter,python-3.3
0
21,458,494
0
1
0
false
0
0
I suggest you get matplotlib from your distro's repositories. Pip is fine for installing simple, pure Python packages but isn't very convenient for packages such as matplotlib or numpy, for which a lot of non-Python dependencies need to be solved. Your package manager should nicely take care of all this stuff for you.
1
2
0
0
I have a fresh Python 3.3 installation on Red Hat Enterprise Linux 6.5, including Tkinter — python3.3 -m tkinter works and shows a dialogue. However, when I run pip3.3 install matplotlib, at the Optional Backend Dependencies, it says: Tkinter: no * TKAgg requires Tkinter How does matplotlib determine the availability of Tkinter, and how can I give it a hint?
When installing through pip, how do I tell matplotlib how to find tkinter?
0
-0.197375
1
0
0
492
21,490,336
2014-01-31T21:23:00.000
3
0
1
0
0
python,ubuntu,pip
0
21,490,370
0
2
0
true
0
0
Any single pip installation is (roughly) specific to one Python installation. You can, however, have multiple parallel pip installations. Your package manager probably has a package called pip-3.3 or similar. If not, you can manually install it (run the get-pip.py script using Python 3.3), though you'll have to be careful that it ends up in the right place in PATH. You can also use a virtualenv.
1
1
0
0
I'm using Ubuntu, how do I instruct pip to use the Python3 installation and not Python2.6? 2.6 is the default installation on Ubuntu. I can't upgrade that as it will break Ubuntu.
Instructing Pip to Use Python 3
0
1.2
1
0
0
95
21,498,342
2014-02-01T12:57:00.000
1
0
1
1
0
python,subprocess
1
21,498,354
0
1
0
true
0
0
Given that you're working in a Linux/POSIX environment, you could read the EDITOR environment variable using the os.environ map.
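A sketch of that lookup with a fallback chain; the editor names and file are placeholders.

```python
import os
import subprocess

editor = os.environ.get("EDITOR", "nano")  # fallback if EDITOR is unset
try:
    subprocess.check_call([editor, "notes.txt"])
except OSError:
    # the configured editor wasn't found; try another one
    subprocess.check_call(["vi", "notes.txt"])
```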
1
0
0
0
I am launching the text editor, but for different users the default text editor could be different. How do I get the name of the text editor being used, so that if an error occurs I can switch to a different text editor?
how do I get the name of the application launched by subprocess in Python?
0
1.2
1
0
0
35
21,500,736
2014-02-01T16:47:00.000
0
0
1
0
0
python,regex,machine-learning
0
21,604,563
0
6
0
false
0
0
There were suggestions about unsupervised learning, but I recommend using supervised learning: you categorize 100-200 positions manually, and then the algorithm will do the rest. There are a number of resources, libraries, etc.; please look at the book "Programming Collective Intelligence", which covers good machine learning topics with Python examples.
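A minimal supervised sketch with scikit-learn (an assumed library choice, not one the answer names); the titles and labels stand in for the 100-200 manually categorized examples.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

titles = ["help desk analyst", "service desk support", "python developer"]
labels = ["IT-Support", "IT-Support", "Development"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(titles, labels)  # learn from the hand-labelled set
print(model.predict(["level-1 service desk analyst"]))  # e.g. ['IT-Support']
```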
2
11
0
0
Problem: I am given a long list of various position titles for jobs in the IT industry (support or development); I need to automatically categorize them based on the general type of job they represent. For example, IT-support analyst, help desk analyst, etc. could all belong to the group IT-Support. Current Approach: Currently, I am manually building regex patterns to accomplish this, which change as I encounter new titles which should be included in a group. For example, I originally used the pattern "(HELP|SERVICE) DESK" to match IT-Support type jobs, and this eventually became "(HELP|SUPPORT|SERVICE) (DESK|ANALYST)", which was even more inclusive. Question: I feel like there should be a fairly intuitive way to automatically build these regex patterns with some sort of algorithm, but I have no idea how this might work... I've read about NLP briefly in the past, but it's extremely alien to me... Any suggestions on how I might implement such an algorithm with/without NLP? EDIT: I'm considering using a decision tree, but it has some limitations which prevent it from working (in this situation) "out-of-the-box"; for example, if I have built the following tree: (Service)->(Desk)->(Support) OR ->(Analyst) ...where Support and Analyst are both children of Desk. Say I get the string "Level-1 Service Desk Analyst"... This should be categorized using the decision tree above, but it will not inherently match the tree (since there is no root node named "Level" or "Level-1"). I believe I am heading in the right direction now, but I need additional logic. For example, if I am given the following hypothetical strings: IT Service Desk Analyst; Level-1 Help Desk Analyst; Computer Service Desk Support; I would like my algorithm to create something like below: (Service OR Help)->(Desk)->(Analyst OR Support) ...where Service and Help are both root nodes, and both Analyst and Support are children of Desk. Basically, I need the following: I would like this matching algorithm to be able to reduce the strings it is presented with to a minimal number of sub-strings which effectively match all of the strings in a given category (preferably using a decision tree). If I am not being clear enough, just let me know!
Python - A way to learn and detect text patterns?
0
0
1
0
0
13,027
21,500,736
2014-02-01T16:47:00.000
0
0
1
0
0
python,regex,machine-learning
0
21,549,987
0
6
0
false
0
0
This sounds like a clustering, or unsupervised, problem rather than a decision-tree one (do you know all the roles in advance, and can you provide labelled data?). If it were me, I'd be tempted to build a bag-of-words style representation of your strings and run a generic clustering algorithm (k-means, say) to see what comes out. Deciding on a category to assign a new string to is then a fairly simple matching operation (depending on what you use to do the clustering). You could also look at topic models, the simplest being Latent Dirichlet Allocation, as being of potential application here. You'd get an assignment to a topic per word, not per string, but that could be altered if you tweaked the method.
2
11
0
0
Problem: I am given a long list of various position titles for jobs in the IT industry (support or development); I need to automatically categorize them based on the general type of job they represent. For example, IT-support analyst, help desk analyst, etc. could all belong to the group IT-Support. Current Approach: Currently, I am manually building regex patterns to accomplish this, which change as I encounter new titles which should be included in a group. For example, I originally used the pattern "(HELP|SERVICE) DESK" to match IT-Support type jobs, and this eventually became "(HELP|SUPPORT|SERVICE) (DESK|ANALYST)", which was even more inclusive. Question: I feel like there should be a fairly intuitive way to automatically build these regex patterns with some sort of algorithm, but I have no idea how this might work... I've read about NLP briefly in the past, but it's extremely alien to me... Any suggestions on how I might implement such an algorithm with/without NLP? EDIT: I'm considering using a decision tree, but it has some limitations which prevent it from working (in this situation) "out-of-the-box"; for example, if I have built the following tree: (Service)->(Desk)->(Support) OR ->(Analyst) ...where Support and Analyst are both children of Desk. Say I get the string "Level-1 Service Desk Analyst"... This should be categorized using the decision tree above, but it will not inherently match the tree (since there is no root node named "Level" or "Level-1"). I believe I am heading in the right direction now, but I need additional logic. For example, if I am given the following hypothetical strings: IT Service Desk Analyst; Level-1 Help Desk Analyst; Computer Service Desk Support; I would like my algorithm to create something like below: (Service OR Help)->(Desk)->(Analyst OR Support) ...where Service and Help are both root nodes, and both Analyst and Support are children of Desk. Basically, I need the following: I would like this matching algorithm to be able to reduce the strings it is presented with to a minimal number of sub-strings which effectively match all of the strings in a given category (preferably using a decision tree). If I am not being clear enough, just let me know!
Python - A way to learn and detect text patterns?
0
0
1
0
0
13,027
21,504,617
2014-02-01T22:40:00.000
5
0
1
1
1
python,eclipse,python-2.7,ubuntu
0
24,506,641
1
1
0
false
0
0
Configure Aptana Studio's Python interpreter (you can configure more than one). In Aptana: Window -> Preferences -> Interpreter Python, and create a new interpreter. Select the Python executable from the virtual environment (on Windows it is python.exe, which resides in the Scripts subfolder of the virtualenv, whereas on Ubuntu python is under the bin subfolder). Aptana will then show a list of directories to add; also remember to check C:\Python27\Lib or the Ubuntu counterpart. When creating a new project, use this interpreter. Or, to use it with an existing project: Step 1. Open the project properties (File -> Properties, or by right-clicking the project). Step 2. From PyDev Interpreter/Grammar, select the interpreter you configured above. Edit: In this way you can even configure both Python 3 and Python 2 for Aptana. You have to configure an interpreter for each of Python 3 and Python 2, then follow the steps above to select the interpreter.
1
3
0
0
I did some searches on this topic and the solutions didn't work for me. I am running both a Linux (Ubuntu) environment and Windows. My system is Windows 8.1 but I have virtualbox with Ubuntu on that. Starting with Windows... I created a venv directory off the root of the e drive. Created a project folder and then ran the activate command, which is in the venv>Scripts directory. So, after activating that (note, I had installed virtualenv already)... so after activating that I then changed into the folder with my module and it ran fine, with the shebang, I didn't even have to type python in front of my filename. However, in Aptana Studio, it cannot find the module I installed with pip. So, it doesn't work. In an earlier post it was recommended that one choose a different interpreter and browse to the env and select that. So, how does one get this installed and working with an IDE like Eclipse and Aptana Studio? I am having problems on Ubuntu. The instructions I found had me using package installer to install virtualenv, pip and a few other tools that package these. The problem is that on Ubuntu the default version of python is 2.7.x. I need 3.3 or 3.x. So, can someone point me in the direction of how to setup virtual environments for the 2.7.x branch of python and the 3.x branch. Also, how does one tell the IDE (Eclipse or Aptana Studio) to use the virtualenv? Thanks, Bruce
How do I tell Aptana Studio to use Python virtualenv?
0
0.761594
1
0
0
2,520
21,509,623
2014-02-02T10:46:00.000
0
0
0
0
0
python,tkinter,raspberry-pi,on-screen-keyboard
0
21,511,280
0
1
0
true
0
1
You don't need to simulate keypresses if all you want to do is insert the numbers into the entry widget. Just have the buttons directly insert their value into the entry widget with the entry widget insert method.
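A sketch of that approach (Python 3 Tkinter names; on Python 2 the module is Tkinter); the widget layout is invented:

import tkinter as tk

root = tk.Tk()
entry = tk.Entry(root)
entry.pack()

def make_button(digit):
    # each button appends its digit directly to the Entry via insert(),
    # so the Entry never needs keyboard focus or simulated keypresses
    btn = tk.Button(root, text=str(digit),
                    command=lambda: entry.insert(tk.END, str(digit)))
    btn.pack(side=tk.LEFT)

for d in range(10):
    make_button(d)

root.mainloop()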
1
0
0
0
I am using Tkinter to create an application which requires a 0-9 numerical keypad to be built in to the UI. I plan to do this with 10 button widgets which enter the relevant number(s) into the currently selected Entry widget. I do not want to use one of the pre-made on-screen keyboards (e.g. Matchbox-keyboard) that are available, it needs to be bespoke to the application. So essentially - how do I simulate key-press events using on-screen buttons to enter values into entry fields without taking the focus off the entry field?
In Python (on Raspberry Pi) how do I create an embedded keypad within my Tkinter window
0
1.2
1
0
0
1,024
21,510,739
2014-02-02T12:46:00.000
0
0
1
0
0
python,virtual-machine
0
21,511,103
0
2
0
false
0
0
Run this in a command prompt: python -c "from platform import python_implementation; print(python_implementation())" (the print() form works on both Python 2 and Python 3).
1
4
0
0
I installed Python on my PC. How do I find out which VM came with it? Is it CPython, IPython, or Jython?
How to find out which VM I have in my Python installation?
0
0
1
0
0
68
21,527,115
2014-02-03T12:11:00.000
1
0
1
0
1
windows,jython,python-sphinx,jython-2.5
1
21,824,353
0
1
0
true
1
0
I have managed to get it working. The problem was that the manual installation and the use of Jython meant that certain environment variables that were expected were not in place. Also, the use of Windows 7 (and I believe MS Windows in general) means that Python scripts without an extension cannot be run without calling them explicitly through Jython (Windows doesn't check for shebangs). Finally, file associations had not been set up (as happens automatically with CPython installation, but has not happened with Jython). For anyone else with similar problems the following setup works for me: Locations: Java Runtime: C:\Java\jre7 Jython: C:\Jython\jython2.5.2 User Environment Variables: JRE_HOME: C:\Java\jre7 JAVA_HOME: %JRE_HOME% CLASSPATH: . JYTHON_HOME: C:\Jython\jython2.5.2 PATH: %JRE_HOME%\bin;%JYTHON_HOME%\bin File Associations: At the command prompt type assoc .py=Python.File to associate 'Python.File' with the '.py' extension. At the command prompt type ftype Python.File=C:\Jython\jython2.5.2\jython.bat "%1" %* to associate the Jython command with files of type 'Python.File'. Append '.py' (;.PY) to the PATHEXT system environment variable. This will make it possible to execute Python files without having to provide their '.py' extension. (N.B. This does not make it possible to run Python files that do not have a '.py' extension.) File Extensions: Rename the four Sphinx commands to include '.py' extensions. This is remarkably difficult with vanilla Windows 7 as it does everything it can to distance the user from such 'low level' details as file extensions, however the rename command at the command prompt does the job: type ren sphinx* sphinx*.py when in the Jython bin directory. It should now be possible to call sphinx-apidoc or similar from anywhere. Once this is complete the command make html, when called from the documentation directory, should work as expected.
1
1
0
0
Once sphinx-apidoc has been run, the command C:\path\to\doc\make html produces an error beginning: The 'sphinx-build' command was not found [snip] However the command does exist and the relevant environment variables are set. More detail: 1 - Trying to run sphinx_apidoc: 'C:\path\to\jython\bin\sphinx-apidoc' is not recognised as an internal or external command 2 - Called using Jython works: jython C:\path\to\jython\bin\sphinx-apidoc with sensible options produces the documentation *.rst files, conf.py, etc files. 3 - make html then produces the following error: The 'sphinx-build' command was not found [snip] It then recommends setting the SPHINXBUILD environment variable, and even the PATH. I already have these two environment variables set, proven to myself by calling echo %PATH% and echo %SPHINXBUILD%. This is where I get stuck. It appears that the files that Sphinx uses (sphinx-apidoc and sphinx-build in this case), which are in the C:\path\to\jython\bin\ directory, do not have any file suffixes. When called directly from Jython they work as expected (see point 2 above), however when called as part of another process (e.g. make html) they are not recognised and the execution fails (see points 1 and 3 above). Does anyone know the what, why and most importantly 'how to fix' of this problem? My setup process is on an unnetworked Windows 7 computer. Jython (2.5.2) was installed using the Jython installer. Then each of the following packages (except setuptools) was installed by extracting it locally and then running jython setup.py install in its extracted directory: setuptools: by calling jython ez_setup.py with setuptools-1.4.2.tar.gz in the same directory (so there is no attempt to download it) Jinja2 (2.5) docutils (0.11) Pygments (1.6) Sphinx (1.2.1) numpydoc (0.4) - Only mentioned because it is also installed on the machine.
How to get Sphinx working with Jython on an unnetworked Windows 7 computer?
1
1.2
1
0
0
376
21,535,003
2014-02-03T18:36:00.000
1
1
0
0
0
python,api,soundcloud
0
30,069,248
0
3
1
false
0
0
You can also do the following:

import soundcloud

token = 'user_access_token'
client = soundcloud.Client(access_token=token)
user_info = client.get('/me')
user_favorites = client.get('/me/favorites')
user_tracks = client.get('/me/tracks')

and so on...
1
1
0
1
Issues using the SoundCloud API with Python to get user info: I've downloaded the soundcloud library and followed the tutorials, and saw on the SoundCloud dev page that the user syntax is, for example, /users/{id}/favorites. I just don't know how to use Python to query user information. Specifically, I would like to print a list of tracks that a given user liked (or favorited, but liked would be better). Any help would be greatly appreciated. Thanks!
soundcloud api python user information
0
0.066568
1
0
0
503
21,545,635
2014-02-04T06:40:00.000
1
0
0
1
0
python,google-app-engine
0
37,264,173
0
2
0
false
1
0
Sorry to revive this old question, but I have a solution for this issue given a few constraints, each with a possible workaround. Basically, the cursors for previous pages can be stored and reused for revisiting those pages. Constraints: This requires that pagination is done dynamically (e.g. with Javascript) so that older cursors are not lost. Workaround: if pagination is done across HTML pages, the cursors would need to be passed along. Users would not be able to arbitrarily select a forward page, and would only be given next/back buttons, though any previously visited page could easily be jumped to. A workaround could be to internally iterate and discard entries while generating cursors at pagination points until finally reaching the desired results, then return the list of previous-page cursors as well. All of this requires a lot of extra bookkeeping and complexity, which almost makes the solution purely academic, but I suppose that would depend on how much more efficient cursors are than simple limit/offset. This could be a worthwhile endeavor if your data is such that you don't expect your users to want to jump ahead more than one page at a time (which includes most types of searches).
1
6
0
0
I want to do pagination in the Google App Engine Search API using cursors (not offset). The forward pagination is straightforward; the problem is how to implement the backward pagination.
Pagination in Google App Engine Search API
0
0.099668
1
0
0
575
21,556,278
2014-02-04T15:17:00.000
1
0
0
0
0
python,https,gunicorn,url-pattern
0
21,556,311
0
1
0
false
1
0
The protocol has nothing to do with django. That part is handled by your http server
1
0
0
0
I'm building a Django-based app, and I need it to use secure requests. The secure requests on my site are enabled, and manually writing the URL gets it through fine. As I have quite a lot of URLs I don't want to do it manually, but instead do something so Django always sends secure requests. How can I make it so it always sends HTTPS?
how do I re-write/redirect URLs in Gunicorn web server configuration?
0
0.197375
1
0
0
1,587
21,558,022
2014-02-04T16:32:00.000
0
1
0
1
0
python,bash,exe,samba
0
21,559,703
0
3
0
false
0
0
As you've said, this executable file would need to be something that runs on both Linux and Windows. That will exclude binary files, such as compiled C files. What you are left with would be an executable script, which could be Bash Ruby Python PHP Perl If need be the script could simply be a bootstrapper that loads the appropriate binary executable depending on the operating system.
1
1
0
0
I have an executable file working in Ubuntu that runs a script in Python and works fine. I also have a shared directory with a Samba server. The idea is that everyone (even Windows users) can execute this executable file located in this shared folder to run the script located on my computer. But how can I make an executable file that runs the Python script on MY computer from both Linux and Windows remote users?
Executable shell file in Windows
0
0
1
0
0
262
21,558,984
2014-02-04T17:12:00.000
1
0
0
0
0
python,redirect,flask,worker
0
21,561,671
0
1
0
true
1
0
You can do it as follows: When the user presses the button the server starts the task, and then sends a response to the client, possibly a "please wait..." type page. Along with the response the server must include a task id that references the task accessible to Javascript. The client uses the task id to poll the server regarding task completion status through ajax. Let's say this is route /status/<taskid>. This route returns true or false as JSON. It can also return a completion percentage that you can use to render a progress bar widget. When the server reports that the task is complete the client can issue the redirect to the completion page. If the client needs to be told what is the URL to redirect to, then the status route can include it in the JSON response. I hope this helps!
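A sketch of the status route only; the tasks dict stands in for whatever store the worker updates, and is not part of Flask:

from flask import Flask, jsonify

app = Flask(__name__)
tasks = {}  # task_id -> {"done": bool, "redirect": url}, filled by the worker

@app.route('/status/<task_id>')
def status(task_id):
    task = tasks.get(task_id, {"done": False})
    # the client polls this via ajax and redirects itself once done is true
    return jsonify(done=task.get("done", False),
                   redirect=task.get("redirect"))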
1
2
0
0
I have the app in python, using flask and iron worker. I'm looking to implement the following scenario: User presses the button on the site The task is queued for the worker Worker processes the task Worker finishes the task, notifies my app My app redirects the user to the new endpoint I'm currently stuck in the middle of point 5, I have the worker successfully finishing the job and sending a POST request to the specific endpoint in my app. Now, I'd like to somehow identify which user invoked the task and redirect that user to the new endpoint in my application. How can I achieve this? I can pass all kind of data in the worker payload and then return it with the POST, the question is how do I invoke the redirect for the specific user visiting my page?
Redirect user when the worker is done
0
1.2
1
0
0
191
21,559,666
2014-02-04T17:46:00.000
1
0
1
0
1
google-chrome,ipython,ipython-notebook
0
24,776,086
0
2
0
false
0
0
I was experiencing the same issue. Actually, I found this is related to the Chrome extensions installed. Try disabling all the extensions and re-enabling them one by one; you'll find which one is crashing your tab. In my case, the crashes were due to the Evernote extension. Alternatively, you can open up an incognito window, which has all the extensions disabled by default, and try opening your notebook there. Ciao
1
9
0
0
I'm doing some work in an IPython Notebook session, and I now have a large-ish notebook containing code, some plots, and some embedded videos (of plot stacks; it seemed like the easiest way to be able to scroll through a sequence of plots interactively in the Notebook view). I'm working in Chrome (Mac, 32.0.1700.102) since H.264 encoding worked best (Vp8 compressed out shading detail in the plots that I needed), and Safari and Firefox don't render the videos. Recently, this notebook has started crashing Chrome tabs every couple minutes (showing the 'Aw Snap' page). It's become basically unusable. I can work, saving very frequently, but saving the notebook causes the Chrome tab to crash about half the time (which makes me wonder if the random crashes that occur when I'm working are caused by the autosaves, but I don't know). Has anyone else encountered this? Does anyone know how to fix it? Is there some more information I can provide to diagnose the problem? Thanks for any help.
IPython Notebook crashing Chrome tabs
0
0.099668
1
0
0
6,425
21,578,382
2014-02-05T13:14:00.000
4
0
0
0
0
python,django,django-models,django-admin
0
21,579,803
0
5
0
true
1
0
I don't see an obvious solution to this — the models are sorted by their _meta.verbose_name_plural, and this happens inside the AdminSite.index view, with no obvious place to hook custom code, short of subclassing the AdminSite class and providing your own index method, which is however a huge monolithic method, very inheritance-unfriendly.
1
13
0
0
I have several configuration objects in django admin panel. They are listed in the following order Email config General config Network config Each object can be configured separately, but all of them are included in General config. So basically you will need mostly General config, so I want to move it to the top. I know how to order fields in a model itself, but how to reorder models?
Reorder model objects in django admin panel
0
1.2
1
0
0
5,031
21,582,358
2014-02-05T16:10:00.000
11
0
0
1
0
python,breakpoints,ipdb
0
21,582,431
0
2
0
false
0
0
Use the break command. If you don't add any line numbers, it will list all the breakpoints instead of adding one.
2
13
0
0
Trying to find how to execute ipdb (or pdb) commands such as disable. Calling the h command on disable says disable bpnumber [bpnumber ...] Disables the breakpoints given as a space separated list of bp numbers. So how would I get those bp numbers? I was looking through the list of commands and couldn't get any to display the bp numbers. [EDIT] The break, b and info breakpoints commands don't do anything, although in my module I clearly have 1 breakpoint set like this: import pdb; pdb.set_trace() - same for ipdb. Moreover, info is not defined. The output of help in pdb: Documented commands (type help <topic>): ======================================== EOF bt cont enable jump pp run unt a c continue exit l q s until alias cl d h list quit step up args clear debug help n r tbreak w b commands disable ignore next restart u whatis break condition down j p return unalias where Miscellaneous help topics: ========================== exec pdb Undocumented commands: ====================== retval rv And for ipdb: Documented commands (type help <topic>): ======================================== EOF bt cont enable jump pdef psource run unt a c continue exit l pdoc q s until alias cl d h list pfile quit step up args clear debug help n pinfo r tbreak w b commands disable ignore next pinfo2 restart u whatis break condition down j p pp return unalias where Miscellaneous help topics: ========================== exec pdb Undocumented commands: ====================== retval rv I have saved my module as pb3.py and am executing it from the command line like this: python -m pb3 The execution does indeed stop at the breakpoint, but within the pdb (ipdb) console, the commands indicated don't display anything - or display a NameError. If more info is needed, I will provide it.
How to find the breakpoint numbers in pdb (ipdb)?
0
1
1
0
0
4,097
21,582,358
2014-02-05T16:10:00.000
-3
0
0
1
0
python,breakpoints,ipdb
0
21,582,459
0
2
0
false
0
0
info breakpoints or just info b lists all breakpoints.
2
13
0
0
Trying to find how to execute ipdb (or pdb) commands such as disable. Calling the h command on disable says disable bpnumber [bpnumber ...] Disables the breakpoints given as a space separated list of bp numbers. So how would I get those bp numbers? I was looking through the list of commands and couldn't get any to display the bp numbers. [EDIT] The break, b and info breakpoints commands don't do anything, although in my module I clearly have 1 breakpoint set like this: import pdb; pdb.set_trace() - same for ipdb. Moreover, info is not defined. The output of help in pdb: Documented commands (type help <topic>): ======================================== EOF bt cont enable jump pp run unt a c continue exit l q s until alias cl d h list quit step up args clear debug help n r tbreak w b commands disable ignore next restart u whatis break condition down j p return unalias where Miscellaneous help topics: ========================== exec pdb Undocumented commands: ====================== retval rv And for ipdb: Documented commands (type help <topic>): ======================================== EOF bt cont enable jump pdef psource run unt a c continue exit l pdoc q s until alias cl d h list pfile quit step up args clear debug help n pinfo r tbreak w b commands disable ignore next pinfo2 restart u whatis break condition down j p pp return unalias where Miscellaneous help topics: ========================== exec pdb Undocumented commands: ====================== retval rv I have saved my module as pb3.py and am executing it from the command line like this: python -m pb3 The execution does indeed stop at the breakpoint, but within the pdb (ipdb) console, the commands indicated don't display anything - or display a NameError. If more info is needed, I will provide it.
How to find the breakpoint numbers in pdb (ipdb)?
0
-0.291313
1
0
0
4,097
21,588,464
2014-02-05T21:03:00.000
0
1
0
1
1
python,python-2.7,emacs,emacs24
0
21,590,370
0
2
0
false
0
0
I don't use python, but from the source of python-mode, I think you should look into customizing the variable python-python-command. It seems to default to the first path command matching "python"; perhaps you can supply it with a custom path?
1
2
0
0
I'm new to Emacs and I'm trying to set up my python environment. So far I've learned that using "python-mode.el" in a python buffer C-c C-c loads the contents of the current buffer into an interactive python shell, apparently using what which python yields. In my case that is python 3.3.3. But since I need to get a python 2.7 shell, I'm trying to get Emacs to spawn such a shell on C-c C-c. Unfortunately I can't figure out how to do this. Setting py-shell-name to what which python2.7 yields (i.e. /usr/bin/python2.7) does not work. How can I get Emacs to do this, or how can I trace back what Emacs executes when I hit C-c C-c?
Using python2.7 with Emacs 24.3 and python-mode.el
0
0
1
0
0
1,019
21,617,616
2014-02-07T01:25:00.000
2
0
1
1
0
json,pythonanywhere
0
21,627,045
0
2
0
false
1
0
You can get to /var/www/static in the File browser. Just click on the '/' in the path at the top of the page and then follow the links. You can also just copy things there from a Bash console. You may need to create the static folder in /var/www if it's not there already.
1
1
0
0
I am trying to have my app on Amazon appstore. In order to do this Amazon needs to park a small json file (web-app-manifest.json). If I upload it to the the root of my web site (as suggested), Amazon bot says it cannot access file. Amazon support mention I should save it to /var/www/static but either I don't know how to get there or I don't have access to this part of the server. Any ideas ?
Where should I save the Amazon Manifest json file on an app hosted at PythonAnywhere?
0
0.197375
1
0
0
155
21,633,136
2014-02-07T16:38:00.000
0
0
0
0
0
python-2.7,probability,scikit-learn,prediction,adaboost
0
21,645,757
0
1
0
false
0
0
Do you mean you get probabilities per sample that are 1/n_classes on average? That's necessarily the case; the probabilities reported by predict_proba are the conditional class probability distribution P(y|X) over all values for y. To produce different probabilities, perform any necessary computations according to your probability model.
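One common computation of that kind is correcting for a class prior that differs between the training set and reality; a sketch (the priors used are placeholders):

import numpy as np

def adjust_prior(proba, train_prior, true_prior):
    # rescale P(y=1|x) learned under train_prior to reflect true_prior
    p = np.asarray(proba, dtype=float)
    num = p * (true_prior / train_prior)
    den = num + (1 - p) * ((1 - true_prior) / (1 - train_prior))
    return num / den

# e.g. trained on a balanced set while the real positive rate is 2%:
print(adjust_prior([0.5, 0.9], train_prior=0.5, true_prior=0.02))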
1
0
1
0
I'm using the AdaBoostClassifier in Scikit-learn and always get an average probability of 0.5 regardless of how unbalanced the training sets are. The class predictions (predict) seem to give correct estimates, but these aren't reflected in the predict_proba method, which always averages to 0.5. If my "real" probability is 0.02, how do I transform the standardized probability to reflect that proportion?
The predict method shows standardized probability?
0
0
1
0
0
213
21,640,028
2014-02-08T00:08:00.000
1
0
0
0
0
python,arrays,numpy
0
24,217,870
0
2
0
false
0
0
In case anyone else has a similar problem but the chosen answer doesn't solve it, one possibility could be that in Python 3, some index or integer quantity fed into a NumPy function is an expression using '/', for example n/2, which ought to be n//2 (integer division).
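A sketch of the difference: the block-summing pattern from the question works with integer indices, while float indices (e.g. produced by '/') raise the cast error:

import numpy as np

A = np.arange(36.0).reshape(6, 6)
n = 3
idx = np.arange(0, A.shape[0], n)  # integer dtype: fine
B = np.add.reduceat(np.add.reduceat(A, idx, axis=0),
                    np.arange(0, A.shape[1], n), axis=1)
print(B)  # 2x2 array of block sums

# whereas float indices trigger the TypeError from the question:
# np.add.reduceat(A, np.array([0, A.shape[0] / 2]), axis=0)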
1
3
1
0
I have a large dataset stored in a numpy array (A). I am trying to sum by blocks using: B=numpy.add.reduceat(numpy.add.reduceat(A, numpy.arange(0, A.shape[0], n),axis=0), numpy.arange(0, A.shape[1], n), axis=1) It works fine when I try it on a test array, but with my data I get the following message: TypeError: Cannot cast array data from dtype('float64') to dtype('int32') according to the rule 'safe' Does someone know how to handle this? Thanks for the help.
Error when trying to sum an array by blocks
0
0.099668
1
0
0
4,600
21,650,889
2014-02-08T19:39:00.000
1
0
0
0
0
python,database-connection
0
21,651,170
0
1
1
true
1
0
Here's how I would do it: Use a connection pool with a queue interface. You don't have to choose a connection object, you just pick the next one in line. This can be done whenever you need a transaction, and the connection is put back afterwards. Unless you have some very specific needs, I would use a Singleton class for the database connection. No need to pass parameters in the constructor every time. For testing, you just put a mocked database connection in the Singleton class. Edit: About the connection pool questions (I could be wrong here, but it would be my first try): Keep all connections open. Pop one when you need it, put it back when you don't need it anymore, just like a regular queue. This queue could be exposed from the Singleton. You start with a fixed, default number of connections (like 20). You could override the pop method, so when the queue is empty you block (wait for another to free up if the program is multi-threaded) or create a new connection on the fly. Destroying connections is more subtle. You need to keep track of how many connections the program is using, and how likely it is you have too many connections. Take care, because destroying a connection that will be needed later slows the program down. In the end, it's a heuristic problem that changes the performance characteristics.
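A rough sketch of the queue-backed pool; connect is any zero-argument factory you supply (a placeholder, not a real driver API), and the singleton wiring is left out:

import queue

class ConnectionPool:
    def __init__(self, connect, size=20):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(connect())

    def acquire(self):
        # blocks when empty, so callers wait for a freed connection
        return self._pool.get()

    def release(self, conn):
        self._pool.put(conn)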
1
0
0
0
I have a Model class which is part of my self-crafted ORM. It has all kinds of methods like save(), create() and so on. Now, the thing is that all these methods require a connection object to act properly. And I have no clue on what's the best approach to feed a Model object with a connection object. What I thought of so far: provide a connection object in a Model's __init__(); this will work, by setting an instance variable and using it throughout the methods, but it will kind of break the API; users shouldn't always feed a connection object when they create a Model object; create the connection object separately, store it somewhere (where?) and on Model's __init__() get the connection from where it has been stored and put it in an instance variable (this is what I thought to be the best approach, but have no idea of the best spot to store that connection object); create a connection pool which will be fed with the connection object, then on Model's __init__() fetch the connection from the connection pool (how do I know which connection to fetch from the pool?). If there are any other approaches, please do tell. Also, I would like to know the proper way to do this.
Getting connection object in generic model class
0
1.2
1
1
0
76
21,653,108
2014-02-08T23:30:00.000
0
0
0
0
0
python,qt,python-2.7,pyqt,pyqt4
0
21,654,212
0
3
0
false
0
1
The QTreeWidget is a red herring. What you are saving is a generic QAbstractItemModel (treeWidget->model()) - after all, a QTreeWidget is a view, and has a built-in model. Now, those model's items are simply QVariants, and those are simply Python types, but also fully supported by QDataStream::operator<<. All you need is to choose a tree traversal (depth-first, breadth-first, or something else), and dump the items and their depth in the tree to the stream. When you read the stream, that's sufficient information to reconstruct the tree.
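For illustration, a sketch of the saving half of that idea, assuming PyQt5 (PyQt4 is analogous) and items that only carry column-0 text; reading back is the mirror image, rebuilding parents from the stored depths:

from PyQt5 import QtCore

def save_tree(tree_widget, path):
    f = QtCore.QFile(path)
    f.open(QtCore.QIODevice.WriteOnly)
    out = QtCore.QDataStream(f)

    def walk(item, depth):
        # depth-first traversal: write a (depth, text) pair per item
        out.writeInt32(depth)
        out.writeQString(item.text(0))
        for i in range(item.childCount()):
            walk(item.child(i), depth + 1)

    root = tree_widget.invisibleRootItem()
    for i in range(root.childCount()):
        walk(root.child(i), 0)
    f.close()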
1
4
0
0
I recently spent some time working out how to use a QDataStream with a QTreeWidget in PyQt. I never found specific examples for doing exactly this, and pyqt documentation for QDataStream seems to be pretty scarce in general. So I thought I'd post a question here as a breadcrumb trail in case someone else down the line needs a hint. I'll wait a bit in case someone would like to jump in and take a shot at it, and I'll post back in a bit with my own efforts. The question is: In PyQt, how can I use a QDataStream to save QTreeWidgetItems to a file as native QT objects, and then read the file back to restore the tree structure exactly as it was saved? Eric
PyQt: Saving native QTreeWidgets using QDataStream
0
0
1
0
0
2,494
21,655,862
2014-02-09T05:53:00.000
13
0
0
1
0
python,google-app-engine,app-engine-ndb
0
21,658,988
0
2
0
true
1
0
I think you've overcomplicating things in your mind. When you create an entity, you can either give it a named key that you've chosen yourself, or leave that out and let the datastore choose a numeric ID. Either way, when you call put, the datastore will return the key, which is stored in the form [<entity_kind>, <id_or_name>] (actually this also includes the application ID and any namespace, but I'll leave that out for clarity). You can make entities members of an entity group by giving them an ancestor. That ancestor doesn't actually have to refer to an existing entity, although it usually does. All that happens with an ancestor is that the entity's key includes the key of the ancestor: so it now looks like [<parent_entity_kind>, <parent_id_or_name>, <entity_kind>, <id_or_name>]. You can now only get the entity by including its parent key. So, in your example, the Shoe entity could be a child of the Person, whether or not that Person has previously been created: it's the child that knows about the ancestor, not the other way round. (Note that that ancestry path can be extended arbitrarily: the child entity can itself be an ancestor, and so on. In this case, the group is determined by the entity at the top of the tree.) Saving entities as part of a group has advantages in terms of consistency, in that a query inside an entity group is always guaranteed to be fully consistent, whereas outside the query is only eventually consistent. However, there are also disadvantages, in that the write rate of an entity group is limited to 1 per second for the whole group.
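A small sketch of those key shapes with ndb (this only runs inside the App Engine SDK/runtime; the kinds and values are invented):

from google.appengine.ext import ndb

class Person(ndb.Model):
    name = ndb.StringProperty()

class Shoe(ndb.Model):
    size = ndb.IntegerProperty()

person_key = ndb.Key(Person, 'alice')              # [Person, 'alice']
shoe_key = Shoe(parent=person_key, size=42).put()
# shoe_key now looks like [Person, 'alice', Shoe, <auto id>],
# and any other kind (Car, ...) can hang off the same parent key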
1
17
0
1
I'm creating a Google App Engine application (python) and I'm learning about the general framework. I've been looking at the tutorial and documentation for the NDB datastore, and I'm having some difficulty wrapping my head around the concepts. I have a large background with SQL databases and I've never worked with any other type of data storage system, so I'm thinking that's where I'm running into trouble. My current understanding is this: The NDB datastore is a collection of entities (analogous to DB records) that have properties (analogous to DB fields/columns). Entities are created using a Model (analogous to a DB schema). Every entity has a key that is generated for it when it is stored. This is where I run into trouble because these keys do not seem to have an analogy to anything in SQL DB concepts. They seem similar to primary keys for tables, but those are more tightly bound to records, and in fact are fields themselves. These NDB keys are not properties of entities, but are considered separate objects from entities. If an entity is stored in the datastore, you can retrieve that entity using its key. One of my big questions is where do you get the keys for this? Some of the documentation I saw showed examples in which keys were simply created. I don't understand this. It seemed that when entities are stored, the put() method returns a key that can be used later. So how can you just create keys and define ids if the original keys are generated by the datastore? Another thing that I seem to be struggling with is the concept of ancestry with keys. You can define parent keys of whatever kind you want. Is there a predefined schema for this? For example, if I had a model subclass called 'Person', and I created a key of kind 'Person', can I use that key as a parent of any other type? Like if I wanted a 'Shoe' key to be a child of a 'Person' key, could I also then declare a 'Car' key to be a child of that same 'Person' key? Or will I be unable to after adding the 'Shoe' key? I'd really just like a simple explanation of the NDB datastore and its API for someone coming from a primarily SQL background.
Simple explanation of Google App Engine NDB Datastore
0
1.2
1
1
0
7,423
21,675,320
2014-02-10T10:59:00.000
-2
0
0
0
0
python,qt,autocomplete,tkinter
0
21,675,660
0
2
0
false
0
1
You can use QTextEdit::cursorForPosition to get a cursor for mouse position. After that you can call QTextCursor::select with QTextCursor::WordUnderCursor to select the word and QTextCursor::selectedText to get the word.
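A sketch of those three calls, assuming PyQt5; in practice pos would come from a mouse event's pos():

from PyQt5.QtGui import QTextCursor

def word_at(text_edit, pos):
    cursor = text_edit.cursorForPosition(pos)   # cursor at the mouse position
    cursor.select(QTextCursor.WordUnderCursor)  # expand to the whole word
    return cursor.selectedText()                # the word as a string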
1
3
0
0
I want to make a text editor with an autocompletion feature. What I need is to somehow get the text which is selected by the mouse (case #1) or just the word under the cursor (case #2) to compare it against a list of words I want to be proposed for autocompletion. By get I mean return as a string value. Can it be done with tkinter at all? I'm not familiar with qt but I'll try to use it if the feature can be achieved with it.
How do I return word under cursor in tkinter?
0
-0.197375
1
0
0
1,225
21,700,792
2014-02-11T11:39:00.000
2
0
0
0
0
python-2.7,heroku-postgres
0
21,710,704
0
1
0
true
1
0
$ heroku pg:info --app yourapp
1
2
0
0
I have a Python/Django app on Heroku. How do I find the Postgres version running on my app?
How do I find the postgres version running on my heroku app?
0
1.2
1
0
0
37
21,708,859
2014-02-11T17:30:00.000
0
0
1
0
1
python,regex
0
21,709,113
0
3
0
false
0
0
You can also try something like the following; note the pattern needs the full stems plus word boundaries, and re must be imported:

import re

a = 'your-string'
result = re.findall(r'\b(mon|tues|wednes|thurs|fri|satur|sun)day\b', a)
if result:
    _day = result[0] + 'day'
1
0
0
0
I seem to be having a problem finding the correct regex for weekdays in Python. I have tried this: /(mon|tues|wednes|thurs|fri|satur|sun)day/ The problem is that this regex accepts if I just have "mon" in a text, but I only want it to accept if I have "monday". How do I fix this? I can't seem to understand how to do this.
Regex for weekdays in python
0
0
1
0
0
4,421
21,715,132
2014-02-11T23:07:00.000
0
1
0
0
1
python,unicode,flask
1
21,716,642
1
1
0
true
1
0
OK, after wrestling with it under the hood for a while I fixed it, but not in a very elegant way: I had to modify the source of some werkzeug internals. In "http.py", I replaced str(value) with unicode(value), and replaced every instance of "latin-1" with "utf-8" in both http.py and datastructures.py. That fixed the problem; the file gets downloaded fine in both the latest Firefox and Chrome. As I said before, I would rather not have to modify the source of the libraries I am using, because this is a pain when deploying/testing on different systems, so if anyone has a better fix for this, please share. I've seen some people recommend just making the filename part of the URL, but I cannot do this as I need to keep my URLs simple and clean.
1
1
0
0
So I am using Flask to serve some files. I recently downgraded the project from Python 3 to Python 2.7 so it would work with more extensions, and ran into a problem I did not have before. I am trying to serve a file from the filesystem with a Japanese filename, and when I try return send_from_directory(new_folder_path, filename, as_attachment=True) I get UnicodeEncodeError: 'ascii' codec can't encode characters in position 15-20: ordinal not in range(128). in quote_header_value = str(value) (that is a werkzeug thing). I have the template set to display the filename on the page by just having {{filename}} in the HTML, and it is displaying just fine, so I'm assuming it is somehow reading the name from the filesystem? Only when I try send_from_directory so the user can download it does it throw this error. I tried a bunch of combinations of .encode('utf-8') and .decode('utf-8'), none of which worked at all, and I'm getting very frustrated with this. In Python 3 everything just worked seamlessly because everything was treated as unicode, and searching for a way to solve this brought up results that it seems I would need a degree in compsci to wrap my head around. Does anyone have a fix for this? Thanks.
Python - how to send file from filesystem with a unicode filename?
0
1.2
1
0
0
887
21,716,890
2014-02-12T01:36:00.000
1
0
1
0
1
python,serialization,pickle,shelve
0
21,718,777
0
2
0
true
1
0
Without trying it out I'm fairly sure the answer is: 1. They can both be served at once; however, if one user is reading while the other is writing, the reading user may get strange results. 2. Probably not. Once the tree has been read from the file into memory, the other user will not see the edits of the first user. If the tree hasn't been read from the file yet, then the change will still be detected. 3. Both changes will be made simultaneously and the file will likely be corrupted. Also, you mentioned shelve. From the shelve documentation: The shelve module does not support concurrent read/write access to shelved objects. (Multiple simultaneous read accesses are safe.) When a program has a shelf open for writing, no other program should have it open for reading or writing. Unix file locking can be used to solve this, but this differs across Unix versions and requires knowledge about the database implementation used. Personally, at this point, you may want to look into using a simple key-value store like Redis with some kind of optimistic locking.
1
0
0
0
I have data that is best represented by a tree. Serializing the structure makes the most sense, because I don't want to sort it every time, and it would allow me to make persistent modifications to the data. On the other hand, this tree is going to be accessed from different processes on different machines, so I'm worried about the details of reading and writing. Basic searches didn't yield very much on the topic. If two users simultaneously attempt to revive the tree and read from it, can they both be served at once, or does one arbitrarily happen first? If two users have the tree open (assuming they can) and one makes an edit, does the other see the change implemented? (I assume they don't because they each received what amounts to a copy of the original data.) If two users alter the object and close it at the same time, again, does one come first, or is an attempt made to make both changes simultaneously? I was thinking of making a queue of changes to be applied to the tree, and then having the tree execute them in the order of submission. I thought I would ask what my problems are before trying to solve any of them.
Can serialized objects be accessed simultaneously by different processes, and how do they behave if so?
0
1.2
1
0
0
2,682
21,755,574
2014-02-13T13:25:00.000
1
0
1
0
0
python,pdf,ipython,ipython-notebook
0
55,154,772
0
6
0
false
0
0
ipython nbconvert notebook.ipynb --to pdf
3
3
0
0
I'm trying to export my IPython notebook to PDF, but somehow I can't figure out how to do that. I searched through Stack Overflow and already read about nbconvert, but where do I type that command? In the notebook? In the cmd prompt? Could someone tell me, step by step, what to do? I'm using Python 3.3 and IPython 1.1.0. Thank you in advance :)
IPython notebook - unable to export to pdf
0
0.033321
1
0
0
13,981
21,755,574
2014-02-13T13:25:00.000
2
0
1
0
0
python,pdf,ipython,ipython-notebook
0
25,941,564
0
6
0
false
0
0
Open a terminal and navigate to the directory of your notebook, then run:

ipython nbconvert mynotebook.ipynb --to latex --post PDF
3
3
0
0
I'm trying to export my IPython notebook to PDF, but somehow I can't figure out how to do that. I searched through Stack Overflow and already read about nbconvert, but where do I type that command? In the notebook? In the cmd prompt? Could someone tell me, step by step, what to do? I'm using Python 3.3 and IPython 1.1.0. Thank you in advance :)
IPython notebook - unable to export to pdf
0
0.066568
1
0
0
13,981
21,755,574
2014-02-13T13:25:00.000
1
0
1
0
0
python,pdf,ipython,ipython-notebook
0
54,918,257
0
6
0
false
0
0
I was facing the same problem. I tried the option File --> Download as --> PDF via LaTeX (.pdf) in the notebook, but it did not work for me. I tried other options too, still not working. I solved it using the following very simple steps, which convert the notebook first into HTML and then into PDF (implemented on Ubuntu, Anaconda Jupyter notebook, Python 3). I hope it will help you too: 1. Save the notebook in HTML format: Open the notebook you want to convert and save it properly first, so the HTML file will have the latest saved version of your code. Then run the following command from the notebook itself: !jupyter nbconvert --to html notebook_name.ipynb This creates an HTML version of your notebook in the current working directory (notebook_name.ipynb --> notebook_name.html). 2. Save the HTML as PDF: Open that notebook_name.html file (click on it); it will open in a new browser tab. Go to the print option, and from there save the file in PDF format. Note that the print option also gives you the flexibility of selecting just a portion of the notebook to save as PDF.
3
3
0
0
I'm trying to export my IPython notebook to PDF, but somehow I can't figure out how to do that. I searched through Stack Overflow and already read about nbconvert, but where do I type that command? In the notebook? In the cmd prompt? Could someone tell me, step by step, what to do? I'm using Python 3.3 and IPython 1.1.0. Thank you in advance :)
IPython notebook - unable to export to pdf
0
0.033321
1
0
0
13,981
21,785,951
2014-02-14T17:38:00.000
1
0
1
0
0
python
0
21,785,991
0
2
0
false
0
0
It really depends what you mean by "display" the file. When we display text, we need to take the file, get all of its text, and put it onto the screen. One possible display would be to read every line and print them. There are certainly others. You're going to have to open the file and read the lines in order to display it, though, unless you make a shell command to something like vim file.txt.
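For the simplest kind of display, a sketch (the filename is a placeholder):

# read every line and print it to the console
with open('data.txt') as f:
    for line in f:
        print(line, end='')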
1
0
0
1
I am trying to find a way to display a txt/csv file to the user of my Python script. Every time I search how to do it, I keep finding information on how to open/read/write etc... But I just want to display the file to the user. Thank you in advance for your help.
Display a file to the user with Python
0
0.099668
1
0
0
47
21,788,522
2014-02-14T20:01:00.000
0
0
0
0
1
python,user-interface
0
21,789,391
0
1
0
true
0
1
The console could possibly appear if you used the 'console' parameter to setup(). Switch to 'windows' instead if that is the case. Can't say for sure without seeing your setup.py script. Possibly your app could also be opening console, but again hard to say without seeing source. One thing to check is to make sure you are not printing anything to stdout or stderr. You might want to redirect all of stdout and stderr to your log just in case, and do this right at the top of your start script so that if some 3rd party import was writing to stdout you'd be able to capture that. The db is not part of your executable, so py2exe will not do anything with it. However, you should probably package your application with an installer, and you can make the installer include the db and install it along with the executable.
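A sketch of the 'windows' form of a py2exe setup.py, with a data_files entry to ship the database next to the exe; the file names are placeholders:

from distutils.core import setup
import py2exe  # registers the py2exe command

setup(
    windows=['mygui.py'],                 # GUI app: no console window
    data_files=[('.', ['teams.accdb'])],  # copy the db into dist/
)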
1
0
0
0
I am new to Python programming and development. After much self-study through online tutorials I have been able to make a GUI with wxPython. This GUI interacts with an Access database on my computer to load a list of teams and employees into the comboboxes. Now my first question is: while converting the whole program into a Windows exe file, can I also include the .accdb file with it... as in, I only need to send the exe file to the users and not the database... if yes, how? My second question is... I actually tried converting the program into an exe using py2exe (excluding the database... am not sure how to do that) and I got the .exe file of my program in the "Dist" folder. But when I double-click it to run it, a black screen (cmd) appears for less than a second and disappears. Please help me understand the above issue and resolve it. Am not sure if I have an option of attaching files... then I could have attached my wxPython program for reference. Thanks in advance. Regards, Premanshu
wxpython GUI program to exe using py2exe
0
1.2
1
0
0
373
21,790,816
2014-02-14T22:43:00.000
3
0
0
0
0
python-2.7,pandas
0
21,791,001
0
2
0
false
0
0
For the 60 days you're looking to compare to, create a timedelta object of that value, timedelta(days=60), and use that for the filter. And if you're already getting timedelta objects from the subtraction, recasting them to a timedelta seems unnecessary. Finally, make sure you check the signs of the timedeltas you're comparing.
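A concrete sketch of that comparison; pd.Timedelta assumes a reasonably recent pandas (older versions can use np.timedelta64(60, 'D') instead), and the sample frame is invented:

import pandas as pd

fr = pd.DataFrame({'date': pd.to_datetime(['2014-01-01', '2014-01-02']),
                   'expiration': pd.to_datetime(['2014-02-01', '2014-06-01'])})

mask = (fr['expiration'] - fr['date']) <= pd.Timedelta(days=60)
print(fr[mask])  # keeps only rows within the 60-day delta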
1
3
1
0
I got a pandas dataframe, containing timestamps 'expiration' and 'date'. I want to filter for rows with a certain maximum delta between expiration and date. When doing fr.expiration - fr.date I obtain timedelta values, but don't know how to get a filter criteria such as fr[timedelta(fr.expiration-fr.date)<=60days]
filter pandas dataframe for timedeltas
0
0.291313
1
0
0
2,238
21,791,565
2014-02-14T23:57:00.000
2
0
1
1
0
python,macos
0
21,791,729
0
2
0
false
0
0
I do all of my main development on OSX. I deploy on a linux box. Pycharm (CE) is your friend.
2
1
0
0
I'm new to Mac, and I have OS X 10.9.1. The main question is whether it is better to create a virtual machine with Linux and do port forwarding, or set up all packages directly on Mac OS and work with it directly. If I create a virtual machine, I'm not sure how it will affect the health of the SSD and the ease of development. On the other hand, I also do not know how installing packages directly into Mac OS will affect its stability and performance. Surely there are some best practices, but I do not know them.
Python development on Mac OS X: pure Mac OS or linux in virtualbox
0
0.197375
1
0
0
1,076
21,791,565
2014-02-14T23:57:00.000
3
0
1
1
0
python,macos
0
21,791,847
0
2
0
true
0
0
On my Mac, I use Python and PyCharm and all the usual Unix tools, and I've always done just fine. Regard OS X as a Unix machine with a very nice GUI on top of it, because it basically is -- Mac OS X is POSIX-compliant, with BSD underpinnings. Why would you even consider doing VirtualBox'd Linux? Even if you don't want to relearn the hotkeys, PyCharm provides a non-OS X mapping, and in Terminal, CTRL and ALT work like you expect. If you're used to developing on Windows but interfacing with Unix machines through Cygwin, you'll be happy to use Terminal, which is a normal bash shell and has (or can easily get through Homebrew) all the tools you're used to. Plus the slashes go the right way and line endings don't need conversion. If you're used to developing on a Linux distro, you'll be happy with all the things that "just work" and let you move on with your life. So in answer to your question, do straight Mac OS X. Working in a virtualized Linux environment imparts a cost and gains you nothing.
2
1
0
0
I'm new to Mac, and I have OS X 10.9.1. The main question is whether it is better to create a virtual machine with Linux and do port forwarding, or set up all packages directly on Mac OS and work with it directly. If I create a virtual machine, I'm not sure how it will affect the health of the SSD and the ease of development. On the other hand, I also do not know how installing packages directly into Mac OS will affect its stability and performance. Surely there are some best practices, but I do not know them.
Python development on Mac OS X: pure Mac OS or linux in virtualbox
0
1.2
1
0
0
1,076
21,802,946
2014-02-15T20:07:00.000
2
0
0
0
0
python,scikit-learn,cluster-analysis,data-mining,k-means
0
21,824,056
0
2
0
false
0
0
Is the data already in vector space e.g. gps coordinates? If so you can cluster on it directly, lat and lon are close enough to x and y that it shouldn't matter much. If not, preprocessing will have to be applied to convert it to a vector space format (table lookup of locations to coords for instance). Euclidean distance is a good choice to work with vector space data. To answer the question of whether they played music in a given location, you first fit your kmeans model on their location data, then find the "locations" of their clusters using the cluster_centers_ attribute. Then you check whether any of those cluster centers are close enough to the locations you are checking for. This can be done using thresholding on the distance functions in scipy.spatial.distance. It's a little difficult to provide a full example since I don't have the dataset, but I can provide an example given arbitrary x and y coords instead if that's what you want. Also note KMeans is probably not ideal as you have to manually set the number of clusters "k" which could vary between people, or have some more wrapper code around KMeans to determine the "k". There are other clustering models which can determine the number of clusters automatically, such as meanshift, which may be more ideal in this case and also can tell you cluster centers.
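A sketch of the fit-then-threshold idea; the coordinates and the 0.1-degree threshold are invented, and lat/lon are treated as plain x/y:

import numpy as np
from sklearn.cluster import KMeans
from scipy.spatial.distance import cdist

plays = np.array([[52.52, 13.40], [52.53, 13.41], [40.71, -74.01]])
km = KMeans(n_clusters=2, random_state=0).fit(plays)

venues = np.array([[52.52, 13.40]])     # locations to test
d = cdist(km.cluster_centers_, venues)  # distances from centers to venues
print((d < 0.1).any(axis=0))            # is some cluster near each venue?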
2
2
1
0
I have a dataset of users and their music plays, with every play having location data. For every user I want to cluster their plays to see if they play music in given locations. I plan on using the scikit-learn k-means package, but how do I get this to work with location data, as opposed to its default, Euclidean distance? An example of it working would really help me!
Computing K-means clustering on Location data in Python
0
0.197375
1
0
0
976
21,802,946
2014-02-15T20:07:00.000
5
0
0
0
0
python,scikit-learn,cluster-analysis,data-mining,k-means
0
21,825,022
0
2
0
true
0
0
Don't use k-means with anything other than Euclidean distance. K-means is not designed to work with other distance metrics (see k-medians for Manhattan distance, k-medoids aka. PAM for arbitrary other distance functions). The concept of k-means is variance minimization. And variance is essentially the same as squared Euclidean distances, but it is not the same as other distances. Have you considered DBSCAN? sklearn should have DBSCAN, and it should by now have index support to make it fast.
2
2
1
0
I have a dataset of users and their music plays, with every play having location data. For every user I want to cluster their plays to see if they play music in given locations. I plan on using the scikit-learn k-means package, but how do I get this to work with location data, as opposed to its default, Euclidean distance? An example of it working would really help me!
Computing K-means clustering on Location data in Python
0
1.2
1
0
0
976
21,817,135
2014-02-16T21:47:00.000
2
0
1
0
0
python,file-io
0
21,818,173
0
3
0
false
0
0
A few tips: use try/except wherever possible. Even if the program crashes, the stack trace will tell you which line was last executed.
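A related trick, sketched below: route uncaught exceptions into the game's existing log via sys.excepthook (the log file name is a placeholder):

import sys
import traceback

def log_crash(exc_type, exc_value, exc_tb):
    # append the full traceback of any uncaught exception to the log
    with open('game_log.txt', 'a') as log:
        log.write('CRASH:\n')
        traceback.print_exception(exc_type, exc_value, exc_tb, file=log)

sys.excepthook = log_crash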
1
0
0
0
Is there a way to programmatically find out why a Python program closed? I'm making a game in Python, and I've been using the built-in open() function to create a log in a .txt file. A major problem I've come across is that when it occasionally crashes, the log doesn't register that it's crashed. I've managed to record if the user closes the game through pressing an exit button, but I was wondering if there is a way to check how the program closed. For instance, if the user presses exit, if it crashes, or if it is forcefully closed (through the task manager, for instance).
Is there a way to find out why a python program closed?
1
0.132549
1
0
0
114
21,846,978
2014-02-18T07:32:00.000
0
0
0
0
0
python,selenium,automated-tests,robotframework
0
22,005,969
0
2
0
false
1
0
Could you please provide a part of your code you use to get the span element and a part of your GUI application where you are trying to get the element from (HTML, or smth.)?
2
0
0
0
I am using Robot Framework to test a GUI application. I need to select a span element which is a list box, but I have multiple span elements with the same class ID on the same page. So how can I select each span element (list box)? Thanks in advance
How to select a span element which is a list box, when multiple span elements with the same class ID are present on the same page?
0
0
1
0
1
313
21,846,978
2014-02-18T07:32:00.000
0
0
0
0
0
python,selenium,automated-tests,robotframework
0
22,013,855
0
2
0
false
1
0
Selenium provides various ways to locate elements in the page. If you can't use id, consider using CSS or Xpath.
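For illustration, a sketch that picks same-class spans apart by position (older Selenium API; the URL and class name are invented):

from selenium import webdriver

driver = webdriver.Firefox()
driver.get('http://example.com')

# all spans sharing the class, in document order
spans = driver.find_elements_by_xpath("//span[@class='listbox']")
second_listbox = spans[1]  # select by index instead of by id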
2
0
0
0
I am using Robot Framework to test a GUI application. I need to select a span element which is a list box, but I have multiple span elements with the same class ID on the same page. So how can I select each span element (list box)? Thanks in advance
How to select a span element which is a list box, when multiple span elements with the same class ID are present on the same page?
0
0
1
0
1
313
21,852,518
2014-02-18T11:34:00.000
0
0
0
0
0
python,django,django-admin,django-sites
0
21,852,651
0
1
0
false
1
0
1 WAY: You can go to models.py in your app; by using a Django signal you can do this:

from django.db import models
from django.db.models.signals import post_save

class Test(models.Model):
    # ... fields here
    pass

# method for updating
def update_on_test(sender, instance, **kwargs):
    # custom operation you want to perform
    pass

# register the signal
post_save.connect(update_on_test, sender=Test)

2 WAY: You can override the save_model() method of the ModelAdmin class if you are filling data into the table through the Django admin:

from django.contrib import admin

class TestAdmin(admin.ModelAdmin):
    fields = ['title', 'body']
    form = TestForm

    def save_model(self, request, obj, form, change):
        # your logic if you want to perform some computation on save;
        # it will help you if you need the request in your work
        obj.save()
1
1
0
0
I have a Django project and right now everything works fine. I have a Django admin site, and now I want a function to be called (and a process to start) when I add a new record to my model. How can I do this? What is this action's name?
call a method in Django admin site
0
0
1
0
0
2,361
21,868,709
2014-02-19T00:47:00.000
1
0
0
0
0
python,api,flask
0
25,578,832
0
1
0
false
1
0
Tornado would do the trick. Flask is not designed for asynchronous operation: a Flask instance processes one request at a time in one thread. Therefore, while you hold the connection open, it will not proceed to the next request.
1
0
0
1
I have an HTTP API using Flask and in one particular operation clients use it to retrieve information obtained from a 3rd party API. The retrieval is done with a celery task. Usually, my approach would be to accept the client request for that information and return a 303 See Other response with a URI that can be polled for the response once the background job is finished. However, some clients require the operation to be done in a single request. They don't want to poll or follow redirects, which means I have to run the background job synchronously, hold on to the connection until it's finished, and return the result in the same response. I'm aware of Flask streaming, but how do I do such long-polling with Flask?
Flask request waiting for asynchronous background job
0
0.197375
1
0
1
2,430
21,877,935
2014-02-19T10:37:00.000
0
0
1
0
0
python,notepad
0
21,878,937
0
3
0
false
0
0
First check how to open Notepad from the command line, then use that command with subprocess or os.system. Or use open() in the os module, which allows you to open a file.
1
0
0
0
I am doing an automation task in which I have to open Notepad, write some content and save the file. I know how to open it and do keyboard simulation. Is there any way I can save that opened Notepad file through the script?
Opening Notepad and saving using python
0
0
1
0
0
3,320
21,884,127
2014-02-19T14:56:00.000
3
0
1
0
0
python,editor,keyboard-shortcuts,pycharm
0
21,885,102
0
1
0
false
0
0
The shortcuts: Next Tab = 'Alt'+'Right' Previous Tab = 'Alt' + 'Left' While the cursor is in the Terminal. You can see all the shortcuts in the menu (and add more, or change, or delete): File -> Setting... -> [IDE Settings] Keymap
1
2
0
0
I recently started to use PyCharm. Its embedded terminal is really cool. We can create multiple terminal sessions using 'ctrl+shift+t', and close sessions using 'ctrl+shift+w'. But how do we toggle between these sessions? Is there any keyboard shortcut? Also, where can I find a list of all shortcuts? Thanks in advance.
keyboard shortcut for toggling sessions of terminals in pycharm
0
0.53705
1
0
0
410
21,884,271
2014-02-19T15:00:00.000
1
0
0
0
0
python,python-3.x,matplotlib
0
71,455,335
0
7
0
false
0
0
matplotlib by default keeps a reference to all the figures created through pyplot. If a single variable used for storing a matplotlib figure (e.g. "fig") is modified and rewritten without clearing the figure, all the plots are retained in RAM. It's important to use plt.cla() and plt.clf() instead of just modifying and reusing the fig variable. If you are plotting thousands of different plots and saving them without clearing the figure, then eventually your RAM will be exhausted and the program will be terminated. Clearing the axes and figures has a significant impact on memory usage if you are plotting many figures. You can monitor your RAM consumption in Task Manager (Windows) or in System Monitor (Linux). First your RAM gets exhausted, then the OS starts consuming swap memory; once both are exhausted, the program is automatically terminated. It's better to clear figures and axes and close them if they are not required.
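A sketch of the close-as-you-go pattern that avoids both the warning and the memory growth:

import matplotlib.pyplot as plt

for i in range(100):
    fig, ax = plt.subplots()
    ax.plot([0, i])
    fig.savefig('plot_%03d.png' % i)
    plt.close(fig)  # drops pyplot's reference so the figure can be freed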
1
233
1
0
In a script where I create many figures with fig, ax = plt.subplots(...), I get the warning RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (matplotlib.pyplot.figure) are retained until explicitly closed and may consume too much memory. However, I don't understand why I get this warning, because after saving the figure with fig.savefig(...), I delete it with fig.clear(); del fig. At no point in my code do I have more than one figure open at a time. Still, I get the warning about too many open figures. What does that mean / how can I avoid getting the warning?
warning about too many open figures
1
0.028564
1
0
0
164,804
21,885,856
2014-02-19T16:03:00.000
1
0
0
0
0
python,jinja2,python-sphinx
0
21,909,382
0
2
0
true
1
0
I've found a good way to do this. Sphinx's configuration parameter template_bridge gives control over the TemplateBridge object, which is responsible for theme rendering. The standard sphinx.jinja2glue.TemplateBridge constructs its environment attribute in the init method (an unfortunate name for the method, since it's not the constructor); environment is the jinja2 environment itself, used for template rendering. So just subclass TemplateBridge and override the init method.
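A minimal sketch of that subclassing, assuming Sphinx's BuiltinTemplateLoader from sphinx.jinja2glue; the module name mybridge and the shout filter are made up:

# mybridge.py (wired up in conf.py with: template_bridge = "mybridge.MyBridge")
from sphinx.jinja2glue import BuiltinTemplateLoader

def shout(value):  # hypothetical custom jinja2 filter
    return str(value).upper() + "!"

class MyBridge(BuiltinTemplateLoader):
    def init(self, builder, theme=None, dirs=None):
        # Let the standard loader build self.environment first...
        super(MyBridge, self).init(builder, theme=theme, dirs=dirs)
        # ...then register custom filters/globals on that environment.
        self.environment.filters["shout"] = shout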
1
0
0
0
I'd like to implement custom navigation in my Sphinx docs. I use a custom theme based on the basic Sphinx theme, but I don't know how to create a new tag for the template system, or how to use a directive from my custom Sphinx plugin in HTML templates. Any ideas where I can plug in? Update: as far as I can see in the Sphinx sources, the jinja2 environment is constructed in the websupport jinja2glue module, but I can't see how it can be reconfigured short of monkey-patching.
Custom jinja2 tag in sphinx template
0
1.2
1
0
0
737
21,890,973
2014-02-19T19:54:00.000
3
1
0
1
0
python,openshift
0
21,893,287
0
2
0
true
0
0
You are looking for the add-on cartridge called cron. By default the cron cartridge only supports jobs that run every minute or every hour, so you would have to write a job that runs every minute, checks whether it is on a 10-minute boundary, and only then executes your script. Make sense? rhc cartridge add cron -a yourAppName Then you will have a cron directory in your application directory under .openshift for placing the cron job.
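A sketch of the minutely gate, assuming the real work lives in a function called check_feeds (hypothetical); the script goes in the cron cartridge's minutely directory:

#!/usr/bin/env python
# Runs every minute via the cron cartridge, but only does
# real work when the minute lands on a 10-minute boundary.
from datetime import datetime

def check_feeds():
    pass  # hypothetical: parse the RSS feeds and send the email

if datetime.now().minute % 10 == 0:
    check_feeds()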
1
1
0
0
How do I create a schedule on OpenShift hosting to run a Python script that parses RSS feeds and sends the filtered information to my email? Is this feature available? Please help, anyone who works with the free tier of this hosting. I have a script that works fine, but I don't know how to run it every 10 minutes to catch freelance jobs. Alternatively, does anyone know of free Python hosting that can schedule scripts?
OpenShift, Python Application run script every 10 min
0
1.2
1
0
0
1,892
21,893,973
2014-02-19T22:28:00.000
0
0
0
0
0
python,algorithm,graph,scipy,mathematical-optimization
0
21,894,459
0
2
0
false
0
0
The prohibition against self-flows makes some instances of this problem infeasible (e.g., one node that has in- and out-flows of 1). Otherwise, a reasonably sparse solution with at most one self-flow always can be found as follows. Initialize two queues, one for the nodes with positive out-flow from lowest ID to highest and one for the nodes with positive in-flow from highest ID to lowest. Add a flow from the front node of the first queue to the front node of the second, with quantity equal to the minimum of the out-flow of the former and the in-flow of the latter. Update the out- and in-flows to their residual values and remove the exhausted node(s) from their queues. Since the ID of the front of the first queue increases, and the ID of the front of the second queue decreases, the only node that self-flows is the one where the ID numbers cross. Minimizing the total flow is trivial; it's constant. Finding the sparsest solution is NP-hard; there's a reduction from subset sum where each of the elements being summed has a source node with that amount of out-flow, and two more sink nodes have in-flows, one of which is equal to the target sum. The subset sum instance is solvable if and only if no source flows to both sinks. The algorithm above is a 2-approximation. To get rid of the self-flow on that one bad node sparsely: repeatedly grab a flow not involving the bad node and split it into two, via the bad node. Stop when we exhaust the self-flow. This fails only if there are no flows left that don't use the bad node and there is still a self-flow, in which case the bad node has in- and out-flows that sum to more than the total flow, a necessary condition for the existence of a solution. This algorithm is a 4-approximation in sparsity.
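A sketch of the two-queue greedy described above (the self-flow repair step is omitted); out_flow and in_flow map node ids to the known sums:

from collections import deque

def greedy_flows(out_flow, in_flow):
    # Sources from lowest id to highest, sinks from highest to lowest,
    # so at most one matched pair can have equal ids (the "bad" node).
    sources = deque(sorted(n for n in out_flow if out_flow[n] > 0))
    sinks = deque(sorted((n for n in in_flow if in_flow[n] > 0), reverse=True))
    out_res, in_res = dict(out_flow), dict(in_flow)
    edges = []
    while sources and sinks:
        u, v = sources[0], sinks[0]
        q = min(out_res[u], in_res[v])
        edges.append((u, v, q))
        out_res[u] -= q
        in_res[v] -= q
        if out_res[u] == 0:
            sources.popleft()
        if in_res[v] == 0:
            sinks.popleft()
    return edges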
1
0
1
0
I'm looking for a solution to the following graph problem in order to perform graph analysis in Python. Basically, I have a directed graph of N nodes where I know the following: The sum of the weights of the out-edges for each node The sum of the weights of the in-edges for each node Following from the above, the sum across all nodes of the in-edge sums equals the sum across all nodes of the out-edge sums No nodes have edges with themselves All weights are positive (or zero) However, I know nothing about which nodes a given node might have an edge to, or what the weights of any edges are Represented as a weighted adjacency matrix, I know the column sums and row sums but not the value of the edges themselves. I've realized that there is not a unique solution to this problem (Does anyone know how to prove that, given the above, a solution is assured?). However, I'm hoping that I can at least arrive at a solution to this problem that minimizes the sum of the edge weights or maximizes the number of 0 edge weights or something along those lines (Basically, out of infinite choices, I'd like the most 'simple' graph). I've thought about representing it as: Min Sum(All Edge Weights) s.t. for each node, the sum of its out-edge weights equals the known sum of these, and the sum of its in-edge weights equals the known sum of these. Additionally, constrained such that all weights are >= 0 I'm primarily using this for data analysis in Scipy and Numpy. However, using their constrained minimization techniques, I'll end up with approximately 2N^2-2N constraints from the edge-weight sum portion, and N constraints from the positive portion. I'm worried this will be infeasible for large data sets. I could have up to 500 nodes. Is this a feasible solution using SciPy's fmin_cobyla? Is there another way to lay out this problem / another solver in Python that would be more efficient? Thanks so much! First post on StackOverflow.
SciPy - Constrained Minimization derived from a Directed Graph
0
0
1
0
0
200
21,918,718
2014-02-20T20:23:00.000
1
0
1
0
0
python,matplotlib,plot,weather
0
21,919,317
0
3
0
false
0
0
Matplotlib's xticks are your friend: they let you set exactly where the ticks appear. As for date formatting, make sure you're working with datetime objects (dateutil can parse them for you), and you'll be able to handle the formatting.
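A minimal sketch of placing ticks by hand; the positions and labels here are made-up half-hourly sample indices:

import matplotlib.pyplot as plt

positions = [0, 48, 96]  # one tick per day of half-hour samples
labels = ["02/19/14", "02/20/14", "02/21/14"]
plt.xticks(positions, labels)  # ticks appear only where you put them
plt.xlim(0, 144)               # control the visible range of the axis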
2
0
1
0
I want to plot weather data spanning several days, sampled every half hour, but I only want to label the start of each day with a string in the format 'mm/dd/yy', leaving the rest unmarked. I would also want to control where such markings are placed along the x axis, and control the range of the axis. I also want to plot multiple sets of measurements taken over different intervals on the same figure, so being able to set the axis and plot the measurements for a given day would be best. Any suggestions on how to approach this with matplotlib?
How to label certain x values
0
0.066568
1
0
0
2,934
21,918,718
2014-02-20T20:23:00.000
2
0
1
0
0
python,matplotlib,plot,weather
0
21,919,748
0
3
0
false
0
0
You can use a DayLocator, as in plt.gca().xaxis.set_major_locator(dt.DayLocator()), and a DateFormatter, as in plt.gca().xaxis.set_major_formatter(dt.DateFormatter("%d/%m/%Y")). Note: import matplotlib.dates as dt
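Putting the two calls together; a sketch that assumes the x-values are datetime objects and fakes the half-hourly readings:

import matplotlib.pyplot as plt
import matplotlib.dates as mdates
from datetime import datetime, timedelta

times = [datetime(2014, 2, 19) + timedelta(minutes=30 * i) for i in range(144)]
temps = [10 + (i % 48) / 10.0 for i in range(144)]  # fake measurements

fig, ax = plt.subplots()
ax.plot(times, temps)
ax.xaxis.set_major_locator(mdates.DayLocator())            # tick at each day start
ax.xaxis.set_major_formatter(mdates.DateFormatter("%m/%d/%y"))
plt.show()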
2
0
1
0
I want to plot weather data spanning several days, sampled every half hour, but I only want to label the start of each day with a string in the format 'mm/dd/yy', leaving the rest unmarked. I would also want to control where such markings are placed along the x axis, and control the range of the axis. I also want to plot multiple sets of measurements taken over different intervals on the same figure, so being able to set the axis and plot the measurements for a given day would be best. Any suggestions on how to approach this with matplotlib?
How to label certain x values
0
0.132549
1
0
0
2,934
21,923,046
2014-02-21T00:56:00.000
0
1
0
1
0
python
0
21,923,164
0
4
0
false
0
0
If you can put your own programs or scripts on the remote machine, there are a couple of things you can do: Write a script on the remote machine that outputs just what you want, and execute it over ssh. Or use ssh to tunnel a port to the other machine and communicate with a server on the remote machine that responds to requests over a socket with the data you want.
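A sketch of the first option with paramiko, assuming key-based login; the host and username are placeholders, and the remote command's entire output is the single value wanted:

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("remote-host", username="admin")  # hypothetical host/user

# nproc prints just the CPU count, so no verbose parsing is needed.
stdin, stdout, stderr = client.exec_command("nproc")
num_cpus = int(stdout.read().strip())
client.close()
print(num_cpus)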
1
1
0
0
I would like to be able to gather the values for number of CPUs on a server and stuff like storage space etc and assign them to local variables in a python script. I have paramiko set up, so I can SSH to remote Linux nodes and run arbitrary commands on them, and then have the output returned to the script. However, many commands are very verbose "such as df -h", when all I want to assign is a single integer or value. For the case of number of CPUs, there is Python functionality such as through the psutil module to get this value. Such as 'psutil.NUM_CPUS' which returns an integer. However, while I can run this locally, I can't exactly execute it on remote nodes as they don't have the python environment configured. I am wondering how common it is to manually parse output of linux commands (such as df -h etc) and then grab an integer from it (similar to how bash has a "cut" function). Or whether it is somehow better to set up an environment on each remote server (or a better way).
Reading values over ssh in python
0
0
1
0
0
1,205
21,937,072
2014-02-21T14:25:00.000
0
1
0
0
0
python,html,django
0
21,937,650
0
3
0
false
1
0
Why not just put a simple login form on the index page when the user is not authenticated?
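One way to read this suggestion in Django terms: redirect every unauthenticated request to the built-in login form with a tiny middleware. A sketch in the 2014-era MIDDLEWARE_CLASSES style, not a hardened solution:

# middleware.py: append to MIDDLEWARE_CLASSES after the auth middleware
from django.conf import settings
from django.http import HttpResponseRedirect

class LoginRequiredMiddleware(object):
    def process_request(self, request):
        if request.user.is_authenticated() or request.path == settings.LOGIN_URL:
            return None  # already logged in, or already on the login page
        return HttpResponseRedirect(settings.LOGIN_URL)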
1
3
0
0
Here is the deal: how do I put the simplest password protection on an entire site? I simply want to open the site to beta testing, but I don't really care about elegance - just a dirty way of giving test users a username and password, without recourse to anything complex; ideally I'd like not to have to install any code or third-party solutions. I'm trying to keep this simple.
What's the smartest way to password protect an entire Django site for testing purposes
0
0
1
0
0
4,949
21,967,466
2014-02-23T11:14:00.000
0
0
1
0
0
python,dsl,xtext
0
22,065,084
0
1
0
false
0
0
Well, I think the answer is quite simple: I can generate the output in an XML or pyconf format and read it from Python :) Thank you anyway.
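Reading such a generated XML file back in Python is then a standard-library one-liner; the filename and element names below are made up:

import xml.etree.ElementTree as ET

tree = ET.parse("model.xml")  # hypothetical file written by the Xtext generator
for rule in tree.getroot().iter("rule"):  # hypothetical element name
    print(rule.get("name"), rule.text)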
1
0
0
0
I am new to the DSL area and I am developing a DSL. I will need to provide an editor for writing in that language, which makes Xtext a very good option. However, some of my libraries are in Python and I need to "run" the DSL in Python. Any idea how to integrate them? The perfect scenario would be: Xtext -> pass the tokens to Python -> semantics in Python. Thank you.
Xtext integration with Python
0
0
1
0
0
780
21,973,249
2014-02-23T19:19:00.000
0
1
0
0
0
python,matplotlib,gnuplot,swig
0
21,977,146
0
2
0
false
0
0
You should start by reading the first few chapters of the SWIG manual and building some of its example projects for Python. The distribution ships many examples that illustrate SWIG's different capabilities, and their makefiles are already written, so that's one less thing to learn.
1
0
0
0
I want to access some functions from a large C project from Python. It seems to me that SWIG is the way to go. I'm not very used to programming in C, and my experience with "make" is mostly from downloading source tars. The functions I want to access reside in a large C project (Gnuplot) and I have no idea how to use SWIG on such a large number of source files. The functions I want to access are all in a single C file, but there are many recursive includes. I would like some suggestions on how to get started. What I want to access: term/emf.trm Reason: missing support for symbols and LaTeX in the EMF backend for matplotlib (this backend has even been removed from matplotlib). I'm stuck with an old version of Word at work, and there is no way to get plots suitable for my purpose into this program without EMF. I could use Gnuplot instead of matplotlib, but many of the plots are specialized for a certain purpose and matplotlib is much easier to use than Gnuplot. Any suggestions would be much appreciated.
Using SWIG to interface large C-project with Python
0
0
1
0
0
728
21,974,997
2014-02-23T21:45:00.000
4
0
1
0
1
python,python-2.7,python-3.x,pip
0
22,714,674
0
2
0
true
0
0
Try these two solutions: 1) Remove Python 3.3 from the PATH variable and try installing the library with pip again, so that the pip from Python 2.7 installs things. 2) If that doesn't work, invoke the 2.7 pip explicitly: C:\python27\Scripts\pip.exe install <package> (the package name here is a placeholder).
1
0
0
0
I am using pip to pull down libraries but didn't realize the key one is only for 2.7. So now I am working in the 2.7 directory, but pip is still installing libs under 3.3, so PyCharm keeps saying the lib is missing. I have the PATH variable set (this is, gasp, on Windows 8) so that Python 2.7 comes first, but I think the python exe isn't looking in the place pip has been installing things. Maybe there is a setting in pip that will install it elsewhere now? Any hints on how to make this work would be great. Maybe I just need to start over without Python 3.3? Thank you for your time!
Is it practical to have both Python 2.7 and 3.3 installed at the same time?
0
1.2
1
0
0
739
22,013,532
2014-02-25T11:59:00.000
0
0
0
0
0
javascript,jquery,python,api
0
25,426,295
0
1
0
false
1
0
It all depends on what you're authenticating. If you're authenticating each user of your API, you have to do something like the following: your site has to somehow drop a cookie in that user's browser; your API needs to support CORS (we use easyXDM.js); and upon logging in to the third-party site, that site needs to send the user to your site so a token can be passed that authenticates the user against your API (or vice versa, depending on the relationship). If you're just authenticating that a certain site is authorized to use your API, you can issue that site an API key and check for it whenever your API is called. The problem with this approach is that JavaScript is visible to the end user: anyone who really wants to use your API could simply reuse the same API key. It's not really authentication without some sort of server-to-server call; at best, you're offering a very weak line of defense against the most obvious attacks.
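For the API-key variant, the server-side check can be a single before_request hook; a Flask sketch with a made-up key store, bearing in mind that a key embedded in public JS is only a weak gate:

from flask import Flask, request, abort

app = Flask(__name__)
VALID_KEYS = {"site-a": "k3y-for-site-a"}  # hypothetical issued keys

@app.before_request
def check_api_key():
    key = request.headers.get("X-Api-Key") or request.args.get("api_key")
    if key not in VALID_KEYS.values():
        abort(401)  # reject calls without a recognized key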
1
1
0
0
I have a JavaScript file placed on a third-party site, and this JS makes API calls to my server. The JS is publicly available, so the third party cannot store credentials in it. I want to authenticate API calls before sharing JSON, and I also want to rate-limit. Does anyone have ideas on how I can authenticate the API?
how to do authentication of rest api from javascript, if javascript is on third party site?
0
0
1
0
1
117