Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
16,636,933 |
2013-05-19T16:36:00.000
| 3 | 0 | 1 | 0 |
python,caching,memoization
| 16,637,764 | 2 | false | 0 | 0 |
This question doesn't make much sense to me. When you start talking about "high performance" and "concurrent requests", you're not really talking about using a Python library within an application -- it sounds more like using (or building) some sort of dedicated, external specialized service or daemon.
Personally, I use a mixture of memoization and "lazy loaded" or "deferred" properties to define cache gets (and computes). By "lazy loaded", I mean that instead of always pulling (or computing) cached data, I create a proxy object that has all the information needed to call the get/create function from the cache on first access. When it comes to database-backed material, I've also found it useful to group cache misses and consolidate cache gets -- this allows me to load in parallel when possible (instead of making multiple serial requests).
dogpile.cache takes care of my cache administration (get, set, invalidate), and is configured to store in memcached or dbm (it allows for several backends). I use two lightweight objects (about 12 lines each?) to handle the deferred gets.
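The poster's 12-line deferred-get objects aren't shown; as a minimal sketch of memoize-with-invalidation (standard library only, not dogpile.cache), functools covers the basic case:

```python
import functools

@functools.lru_cache(maxsize=None)
def expensive(n):
    # stands in for a computation that depends only on its parameters
    return n * n

expensive(4)                        # computed and stored
expensive(4)                        # served from the cache
print(expensive.cache_info().hits)  # one cache hit so far
expensive.cache_clear()             # invalidate every cached value at once
```

dogpile.cache adds the pieces this sketch lacks: pluggable backends (memcached, dbm) and dogpile locking so concurrent requests don't all recompute a cold key.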
| 1 | 0 | 0 |
I have a function which, given an argument, calculates a corresponding value and returns it. The returned value depends only on the function's parameters, so I'd like to cache (memoize) the value somehow. Furthermore, I also want to be able to invalidate a cached value.
It seems to be a common need, so I am trying to avoid reinventing the wheel.
What I'm looking for is an advanced, highly configurable, high-performance library (tool, framework, etc.), and I would like the changes to my code to be as lean as possible. Some good points are:
efficiently handling concurrent requests
being able to use different cache backends (e.g. RAM or DB)
retaining responsiveness on large scale data
What are some good libraries to use, and how are they compared?
|
How to cache value of a function in Python?
| 0.291313 | 0 | 0 | 1,947 |
16,637,879 |
2013-05-19T18:12:00.000
| 1 | 0 | 0 | 0 |
javascript,python,beautifulsoup,urllib2
| 23,974,667 | 1 | true | 1 | 0 |
I came back after a long time just to quickly answer my own question.
I found many solutions and tutorials on the web, and most of them suggested using Selenium and XPath, but this method was more complex than I needed.
So I ended up using Selenium ONLY for emulating the browser (Firefox in my case) and grabbing the HTML after the page had loaded completely.
After that I was still using BeautifulSoup to parse the HTML code (which now includes the JavaScript-generated data too).
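A sketch of that flow; the Selenium half is commented out because it needs a real browser, and the URL and HTML are placeholders:

```python
# Let Selenium render the JavaScript, then hand the final HTML to BeautifulSoup.
# from selenium import webdriver
# driver = webdriver.Firefox()
# driver.get("http://www.example.com")   # placeholder URL
# html = driver.page_source              # full HTML, *after* JavaScript ran
# driver.quit()

html = "<html><body><div id='msg'>your random message</div></body></html>"
try:
    from bs4 import BeautifulSoup
    message = BeautifulSoup(html, "html.parser").find(id="msg").get_text()
except ImportError:
    message = "(bs4 not installed)"
print(message)
```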
| 1 | 0 | 0 |
Unfortunately I am newbie with beautifulsoup and urllib so I might not even ask correctly what I need..
There is a website www.example.com
I need to extract some data from this website which displays a random message.
The problem is the message is displayed after the user presses a button, otherwise it shows a general message like "press the button to see the message".
After searching stackoverflow I realised that probably there is NO way to change the variables by calling with my browser the url like this.. www.example.com/?showRandomMsg='true'
In some threads I read that maybe I can do it with bookmarklets.
Is there anyway to use bookmarklets with beautifulsoup or urllib in order to access the website and make it display a random message?
Thanks in advance! :D
|
Cannot scrape with beautifulsoup and urllib because of javascript variable
| 1.2 | 0 | 1 | 169 |
16,638,977 |
2013-05-19T20:08:00.000
| 8 | 0 | 0 | 1 |
python,macos,wxpython
| 20,097,953 | 3 | false | 0 | 1 |
Go to System Preferences --> Security & Privacy --> "Allow applications downloaded from:" and select "Anywhere".
| 1 | 6 | 0 |
I am trying to install wxPython on my Mac OS X 10.8.3. I download the disk image from their downloads page and mount it. When I try to install the package I get an error saying that the package is damaged and can't be opened. Any suggestions on how I can fix this?
I have also tried opening the package through the terminal but no luck.
Thanks in advance.
|
Trying to install wxpython on Mac OSX
| 1 | 0 | 0 | 7,577 |
16,639,752 |
2013-05-19T21:41:00.000
| 2 | 0 | 1 | 0 |
python,python-3.x,tkinter
| 16,639,964 | 3 | false | 0 | 1 |
Maybe you disabled it during the Python installation? There is a Tcl/Tk item in the install wizard, and it can be disabled. Try reinstalling Python and do not turn it off.
| 2 | 1 | 0 |
Started messing with Tkinter today, but when I tried to run my first program it crashed. It appears the reason is that I don't have Tkinter. This is a standard installation of Python 3.3 on Windows 7, why is it not there? How can I get it?
|
Why is Tkinter missing?
| 0.132549 | 0 | 0 | 4,331 |
16,639,752 |
2013-05-19T21:41:00.000
| 6 | 0 | 1 | 0 |
python,python-3.x,tkinter
| 16,639,892 | 3 | true | 0 | 1 |
This answer might be irrelevant with more information, but, for now: are you capitalizing "Tkinter" when using the import command? In Python 3.x, it's imported as lower-case ("import tkinter"), but in 2.x code it's imported with an initial capital ("import Tkinter").
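A version-tolerant import makes the difference concrete; whichever spelling succeeds tells you which Python line you're on (and whether Tcl/Tk support is present at all):

```python
try:
    import tkinter                   # Python 3.x name (all lowercase)
except ImportError:
    try:
        import Tkinter as tkinter    # Python 2.x name (capital T)
    except ImportError:
        tkinter = None               # Tcl/Tk support genuinely missing

print("available" if tkinter else "missing")
```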
| 2 | 1 | 0 |
Started messing with Tkinter today, but when I tried to run my first program it crashed. It appears the reason is that I don't have Tkinter. This is a standard installation of Python 3.3 on Windows 7, why is it not there? How can I get it?
|
Why is Tkinter missing?
| 1.2 | 0 | 0 | 4,331 |
16,639,770 |
2013-05-19T21:44:00.000
| -1 | 0 | 0 | 0 |
python,neo4j,networkx,directed-graph
| 71,859,666 | 2 | false | 0 | 0 |
networkx supports flexible container structures (e.g. arbitrary combinations of Python lists and dicts) as both node and edge attributes.
Are there restrictions on the Neo4j side on persisting such flexible data?
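As an illustration of how such attributed graphs can be flattened for persistence, here is a stdlib-only sketch using the "node-link" JSON shape (the same layout networkx's node_link_data produces; the attribute names below are made up):

```python
import json

# A small attributed graph as a JSON node-link document; this string can be
# stored anywhere (a document store, a blob column, ...) and queried later.
graph = {
    "graph": {"created_at": "2013-05-19T16:36:00"},   # graph-level attributes
    "nodes": [
        {"id": "a", "time": 1368981360},
        {"id": "b", "time": 1368981420},
    ],
    "links": [
        {"source": "a", "target": "b", "weight": 0.5},
    ],
}

blob = json.dumps(graph)       # persist this
restored = json.loads(blob)    # round-trips all node/edge attributes
print(restored["links"][0]["weight"])
```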
| 1 | 5 | 0 |
I have an application that creates many thousands of graphs in memory per second. I wish to find a way to persist these for subsequent querying. They aren't particularly large (perhaps max ~1k nodes).
I need to be able to store the entire graph object including node attributes and edge attributes. I then need to be able to search for graphs within specific time windows based on a time attribute in a node.
Is there a simple way to coerce this data into Neo4j? I've yet to find any examples of this, though I have found several Python libs, including an embedded Neo4j and a REST client.
Is the common approach to manually traverse the graph and store it in that manner?
Are there any better persistence alternatives?
|
Python networkx and persistence (perhaps in neo4j)
| -0.099668 | 0 | 1 | 1,778 |
16,644,007 |
2013-05-20T06:45:00.000
| -1 | 0 | 0 | 0 |
c++,python,windows,api
| 66,233,621 | 3 | false | 0 | 1 |
There is an easier way.
Install VBest Icon Groups (a desktop icon stack).
Then select any stack and add a single button (dragging and dropping any application gets its link and icon into the stack).
Select the stack and, in its settings, set transparency to 0 (slide to the left).
That is it. Now you have a single button with a link.
| 1 | 0 | 0 |
I've collected many nice wallpapers over the years.
I know Python and C++ (a little MFC experience).
I want to make a program that can change my wallpapers.
I want it to operate like this:
there is a little icon (half transparent); if I click it, it changes my wallpaper to the next picture in my wallpaper collection folder.
I found much information on changing the wallpaper programmatically via Google.
But I can't find the Win7 API for adding a button on the desktop.
Can someone please tell me how to put an icon button on the desktop, or is there simply no such API?
EDIT:
I just found that there are ways to make a window unmovable. So I think I now need to find a way to make a window out of an icon; then it will look like a button on the desktop. And there are ways to make a window respond when it's clicked (once), right?
Closure:
Captain's method may be a better way for people with a good understanding of Windows.
I know Python, but have little knowledge of MFC or similar frameworks with deep ties to the OS itself. This desktop-button creation is very hard for me to implement.
Modifying christian's script and using a Windows shortcut would be a not-good-looking compromise. I'll do it this way.
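For the wallpaper-changing half (the desktop-button half has no simple Win32 API), one commonly cited sketch uses ctypes and the documented SystemParametersInfo call; the image path below is a placeholder:

```python
import ctypes
import sys

SPI_SETDESKWALLPAPER = 20   # documented SystemParametersInfo action code
SPIF_UPDATEINIFILE = 0x01   # write the change to the user profile
SPIF_SENDCHANGE = 0x02      # broadcast the settings change to other windows

def set_wallpaper(image_path):
    # Returns True on success; only callable on Windows.
    return bool(ctypes.windll.user32.SystemParametersInfoW(
        SPI_SETDESKWALLPAPER, 0, image_path,
        SPIF_UPDATEINIFILE | SPIF_SENDCHANGE))

if sys.platform == "win32":
    ok = set_wallpaper(r"C:\wallpapers\next.jpg")   # placeholder path
else:
    ok = False  # this API only exists on Windows
print(ok)
```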
|
create an button on desktop
| -0.066568 | 0 | 0 | 672 |
16,646,135 |
2013-05-20T09:11:00.000
| 1 | 1 | 1 | 0 |
c#,ironpython
| 16,652,356 | 1 | true | 0 | 1 |
Parts of IronPython's standard library are implemented in C#, mainly because the equivalents in CPython are written in C. You can access those parts directly from C# (or any other static .NET language), but they're not intended to be used that way and may not be easy to use.
| 1 | 0 | 0 |
I've recently discovered IronPython in C#, and the only tutorials I found were about how to use a Python script from C#. But I've noticed that IronPython has classes and methods you can use directly in C#, like PythonIterTools.product some_pr = new PythonIterTools.product(); and others. Can anyone explain how this works?
|
Using IronPython in C#
| 1.2 | 0 | 0 | 223 |
16,650,124 |
2013-05-20T13:01:00.000
| 0 | 0 | 0 | 0 |
python,flask,flask-extensions,flask-security
| 16,696,027 | 1 | true | 1 | 0 |
I'm going to just answer my own question and say the best/easiest thing to do is to pull the extension's code into my project and modify it as I need.
It seems like this will be the case with several Flask extensions that involve more view/template code than pure infrastructure (like Flask-Security/Flask-Login, etc.).
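Short of vendoring the code, one low-effort debugging aid is to raise the log level on the extensions' own loggers (the logger names below are assumptions about what each extension uses):

```python
import logging

# Root config at DEBUG so anything the extensions do log actually shows up.
logging.basicConfig(level=logging.DEBUG)

# Explicitly open up the suspected extensions' loggers (names assumed).
for name in ("flask_security", "flask_login", "flask_mongoengine"):
    logging.getLogger(name).setLevel(logging.DEBUG)

print(logging.getLogger("flask_security").getEffectiveLevel())
```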
| 1 | 1 | 0 |
I'm using several third-party Flask extensions (Flask-Login, Flask-Security, Flask-Principal, Flask-MongoEngine, etc.; the list is about 12 deep) in an application that is failing silently in a production environment (currently the AppFog PaaS).
I'm specifically trying to debug an issue with Flask-Security (I "think", but it could just as easily be Flask-Login, Flask-MongoEngine, or any plugin in my authentication pipeline), since it has to do with logging in and redirecting the user through the application.
I notice several things about Flask extensions that make them a bit quirky/difficult to work with:
most don't have any debug logging in them
some have only partial support for configuring (like using your own templates)
they install into the environment (I'm using virtualenv) as opposed to an application "plugins/extensions" directory
I'm wondering if there's some guidance around:
installing plugins into the project, so they can be quickly modified (maybe logging added) and then pushed into production without having to fork a GitHub project and repackage it
standard logging practices for third-party extensions, or anything that would help trace the code
any information/tips to help me debug my current problems
Thanks.
|
Looking for techniques to debug external Python Flask Extensions
| 1.2 | 0 | 0 | 145 |
16,651,124 |
2013-05-20T13:55:00.000
| 1 | 0 | 1 | 0 |
python
| 24,190,976 | 2 | false | 0 | 0 |
openpyxl is the only library I know of that can both read and write xlsx files. Its downside is that when you edit an existing file it doesn't preserve the original formatting or charts, a problem I'm dealing with right now. If anyone knows a workaround, please let me know.
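A sketch of the kind of formatting the question asks about, assuming openpyxl is installed (the filename is a placeholder):

```python
# Merge cells and apply bold/size/color with openpyxl; skipped cleanly
# when the library isn't available.
try:
    from openpyxl import Workbook
    from openpyxl.styles import Font

    wb = Workbook()
    ws = wb.active
    ws.merge_cells("A1:C1")                       # merged header row
    ws["A1"] = "Report header"                    # write to the top-left cell
    ws["A1"].font = Font(bold=True, size=14, color="FF0000")
    wb.save("styled.xlsx")                        # placeholder filename
    result = "saved"
except ImportError:
    result = "openpyxl not installed"
print(result)
```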
| 1 | 0 | 0 |
I have to read and write data in files with the .xlsx extension using Python, and I have to use cell-formatting features like merging cells, bold, font size, color, etc. So which Python module is good to use?
|
Which module has more option to read and write xlsx extension files using Python?
| 0.099668 | 1 | 0 | 367 |
16,651,259 |
2013-05-20T14:01:00.000
| 0 | 0 | 1 | 1 |
python,linux,windows,macos,testing
| 63,363,093 | 3 | false | 0 | 0 |
You can use Travis CI to run tests on Linux, macOS, and Windows; Travis supports all three platforms. This is the most convenient option. If the repo is open source, Travis is free.
| 2 | 2 | 0 |
I need to test whether my Python package works properly on different systems. I found Tox for different Python versions, but what about different operating systems like Windows, Linux, and Mac?
Can you recommend a convenient way to test whether my code works on all systems?
|
Python: How can I test my package if it runs on Linux, Mac and Windows
| 0 | 0 | 0 | 808 |
16,651,259 |
2013-05-20T14:01:00.000
| 0 | 0 | 1 | 1 |
python,linux,windows,macos,testing
| 68,606,338 | 3 | false | 0 | 0 |
Just assuming you use Windows...
I use Ubuntu on WSL2 (Windows Subsystem for Linux 2). It is basically a virtual machine, but much faster than Hyper-V or VirtualBox. It doesn't come with a GUI unless you're in the Windows Insiders Dev Channel, but that is likely not needed just to test code, and you can install GWSL (an X server designed for WSL and SSH) to provide a GUI. On my laptop, Hyper-V and VirtualBox VMs crash within seconds of starting, but WSL2 runs smoothly for hours of intense usage. For an IDE, I would recommend installing Visual Studio Code (on Windows, not on the WSL2 VM) and using the Remote - WSL extension. I would also recommend installing Windows Terminal to replace the ugly Windows Console Host. For macOS, I guess you just have to use a regular VM.
| 2 | 2 | 0 |
I need to test whether my Python package works properly on different systems. I found Tox for different Python versions, but what about different operating systems like Windows, Linux, and Mac?
Can you recommend a convenient way to test whether my code works on all systems?
|
Python: How can I test my package if it runs on Linux, Mac and Windows
| 0 | 0 | 0 | 808 |
16,652,406 |
2013-05-20T15:01:00.000
| 0 | 1 | 0 | 0 |
python,cherrypy,nose,nosetests
| 16,652,717 | 1 | true | 1 | 0 |
With Python unit tests, you are basically testing the server, and the correct response from the server is the redirect exception, not the redirected page itself. I would recommend testing this behaviour in two steps:
test whether the first page/URL throws a correctly initialized (code, url) HTTPRedirect exception
test the contents of the second page (the one being redirected to)
But of course, if you insist, you can resolve the redirect yourself in a try/except by inspecting the exception attributes and calling the testing method on the target URL again.
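The two steps can be sketched like this; HTTPRedirect below is a minimal stand-in so the idea is runnable without CherryPy (the real exception lives at cherrypy.HTTPRedirect):

```python
class HTTPRedirect(Exception):
    # stand-in for cherrypy.HTTPRedirect: records target url(s) and status
    def __init__(self, urls, status=303):
        self.urls = [urls] if isinstance(urls, str) else list(urls)
        self.status = status

def first_page():
    raise HTTPRedirect("mynewurl", status=303)

def second_page():
    return "contents of mynewurl"

# Step 1: the first page must raise a correctly initialised redirect.
try:
    first_page()
    raised = False
except HTTPRedirect as exc:
    raised = exc.status == 303 and exc.urls == ["mynewurl"]

# Step 2: test the redirect target's contents directly.
print(raised and "contents" in second_page())
```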
| 1 | 0 | 0 |
I'm writing a CherryPy application that needs to redirect to a particular page, and I use HTTPRedirect('mynewurl', status=303) to achieve this. This works inasmuch as the browser (Safari) redirects to 'mynewurl' without asking the user. However, when I attempt to unit test using nosetests with assertInBody(), I get a different result; assertInBody reports that 'This resource can be found at mynewurl' rather than the actual contents of 'mynewurl'. My question is: how can I get nosetests to behave in the same way as Safari, that is, redirecting to a page without displaying an 'ask' message?
Thanks
Kevin
|
Unit testing Cherrypy HTTPRedirect.
| 1.2 | 0 | 1 | 190 |
16,655,156 |
2013-05-20T17:44:00.000
| 1 | 0 | 1 | 1 |
python,pip,software-distribution
| 16,656,024 | 1 | true | 0 | 0 |
I'm sorry, but Python knows nothing about bash, or man, or other things you might take for granted. For instance, Windows, a widely deployed platform supported by Python, has neither. Other platforms, even Unix-like, may not have bash, too (e.g. using busybox) and would rather not spend storage space on man pages. Some users don't even have bash installed on capable systems (and use zsh for interactive work and ash for scripts).
So please limit your egg archive to things that only require Python, or Python extensions.
If you want to install other files, you have a few options.
Publish an archive that contains both a setup.py for your package and whatever optional files you might want to include, possibly with an installation script for them, too.
Create proper packages for the OSes you target. The rest will use option 1.
Run the extra installation steps from your setup script (not recommended).
Also, you don't have to provide a man page, just support --help well. E.g. easy_install on Debian does not have a man page, and I'm fine with it.
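One concrete shape for option 1 is the data_files argument that distutils/setuptools accept in setup.py, which installs files relative to the installation prefix. A sketch, with paths and filenames that are pure assumptions:

```python
# This list would be passed as setup(..., data_files=data_files) in setup.py;
# the first element of each pair is the install directory (under sys.prefix),
# the second lists the source files to copy there.
data_files = [
    ("share/man/man1", ["docs/mypackage.1"]),                         # man page
    ("share/bash-completion/completions", ["completions/mypackage"]), # completion
]

for target_dir, sources in data_files:
    print(target_dir, sources)
```

Note that pip/easy_install-era tools handled data_files inconsistently across platforms, which is part of why the OS-package route (option 2) is suggested.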
| 1 | 2 | 0 |
My Python project includes some manpages and bash completion script. I want those to be installed when user installs a package with, for example, pip install mypackage. How do I do that? I only came across a very barbaric way of doing so by calling an external script (for example an .sh script) in the setup.py. Is there a more elegant approach?
|
How to install various files other than Python code using Python packages?
| 1.2 | 0 | 0 | 78 |
16,655,681 |
2013-05-20T18:13:00.000
| 3 | 0 | 0 | 0 |
php,python,scrapy
| 16,659,313 | 1 | false | 1 | 0 |
Scrapy is a framework. You can define pipelines and systematic ways of crawling a URL; cURL is simply boilerplate code to query a page or download files over a protocol like HTTP.
If you are building an extensive scraping system or project, Scrapy is probably the better bet. Otherwise, for hacky or one-time things, cURL is hard to beat (or if you are constrained to PHP).
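On the yield question the original poster asked: it is plain Python generator syntax, which Scrapy callbacks use to emit items (or further requests) lazily. A framework-free sketch:

```python
# A callback that yields one dict per "page"; Scrapy would feed each
# yielded item into its pipeline as it is produced, without building
# the whole list in memory first.
def parse(pages):
    for html in pages:
        yield {"length": len(html)}

items = list(parse(["<html>a</html>", "<html>bb</html>"]))
print(items)
```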
| 1 | 0 | 0 |
I've been scraping website data using Python Scrapy, although I have strong experience with PHP cURL. I don't know which is better for scraping data and manipulating the returned values, in terms of speed and memory usage.
Also, what is the yield keyword in Python Scrapy supposed to do?
|
PHP cURL vs Python Scrapy?
| 0.53705 | 0 | 1 | 1,634 |
16,656,850 |
2013-05-20T19:25:00.000
| 1 | 0 | 1 | 0 |
python,arrays,2d,large-data
| 16,657,039 | 2 | false | 0 | 0 |
It would help to know more about your data and what kind of access you need to provide. How fast is "fast enough" for you? Just to be clear, "7M" means 7,000,000, right?
As a quick answer without any of that information, I have had positive experiences working with redis and tokyo tyrant for fast read access to large amounts of data, either hundreds of megabytes or gigabytes.
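For context on scale: a dense 7,000,000 x 7,000,000 array has about 4.9 * 10**13 cells, far beyond RAM, so this only works if the data is sparse. A stdlib-only sketch of write-once, read-many sparse storage:

```python
# Sparse 2-D storage as a dict keyed by (row, col): O(1) random access,
# and only the cells you actually write consume memory.
sparse = {}
sparse[(3500000, 42)] = 1.5             # write each element once

value = sparse.get((3500000, 42), 0.0)  # read many times; 0.0 when unset
print(value, sparse.get((0, 0), 0.0))
```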
| 1 | 0 | 1 |
Which python data type should I use to create a huge 2d array (7Mx7M) with fast random access? I want to write each element once and read many times.
Thanks
|
Which python data type should I use to create a huge 2d array (7Mx7M) with fast random access?
| 0.099668 | 0 | 0 | 390 |
16,661,790 |
2013-05-21T03:47:00.000
| 9 | 0 | 1 | 0 |
python,matplotlib
| 46,957,388 | 4 | false | 0 | 0 |
I think it is worth mentioning that plt.close() releases the memory, thus is preferred when generating and saving many figures in one run.
Using plt.clf() in such case will produce a warning after 20 plots (even if they are not going to be shown by plt.show()):
More than 20 figures have been opened. Figures created through the pyplot interface (matplotlib.pyplot.figure) are retained until explicitly closed and may consume too much memory.
| 3 | 32 | 1 |
In matplotlib.pyplot, what is the difference between plt.clf() and plt.close()? Will they function the same way?
I am running a loop where, at the end of each iteration, I produce a figure and save the plot. On the first couple of tries the plot retained the old figures in every subsequent plot. I'm looking for individual plots for each iteration without the old figures; does it matter which one I use? The calculation I'm running takes a very long time, and it would be very time-consuming to test it out.
|
Difference between plt.close() and plt.clf()
| 1 | 0 | 0 | 49,996 |
16,661,790 |
2013-05-21T03:47:00.000
| 2 | 0 | 1 | 0 |
python,matplotlib
| 44,976,331 | 4 | false | 0 | 0 |
plt.clf() clears the entire current figure with all its axes, but leaves the window opened, such that it may be reused for other plots.
plt.close() closes a window, which will be the current window, if not specified otherwise.
| 3 | 32 | 1 |
In matplotlib.pyplot, what is the difference between plt.clf() and plt.close()? Will they function the same way?
I am running a loop where, at the end of each iteration, I produce a figure and save the plot. On the first couple of tries the plot retained the old figures in every subsequent plot. I'm looking for individual plots for each iteration without the old figures; does it matter which one I use? The calculation I'm running takes a very long time, and it would be very time-consuming to test it out.
|
Difference between plt.close() and plt.clf()
| 0.099668 | 0 | 0 | 49,996 |
16,661,790 |
2013-05-21T03:47:00.000
| 41 | 0 | 1 | 0 |
python,matplotlib
| 16,661,815 | 4 | true | 0 | 0 |
plt.close() will close the figure window entirely, where plt.clf() will just clear the figure - you can still paint another plot onto it.
It sounds like, for your needs, you should be preferring plt.clf(), or better yet keep a handle on the line objects themselves (they are returned in lists by plot calls) and use .set_data on those in subsequent iterations.
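The loop the question describes can be sketched as follows (assuming matplotlib is available; the Agg backend avoids needing a display, and the filenames are placeholders):

```python
try:
    import matplotlib
    matplotlib.use("Agg")                 # render to files, no window needed
    import matplotlib.pyplot as plt

    fig = plt.figure()
    for i in range(3):
        plt.plot([0, 1], [0, i])
        plt.savefig("frame_%d.png" % i)   # one clean plot per iteration
        plt.clf()                         # clear so old lines don't accumulate
    plt.close(fig)                        # release the figure's memory at the end
    result = "done"
except ImportError:
    result = "matplotlib not installed"
print(result)
```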
| 3 | 32 | 1 |
In matplotlib.pyplot, what is the difference between plt.clf() and plt.close()? Will they function the same way?
I am running a loop where, at the end of each iteration, I produce a figure and save the plot. On the first couple of tries the plot retained the old figures in every subsequent plot. I'm looking for individual plots for each iteration without the old figures; does it matter which one I use? The calculation I'm running takes a very long time, and it would be very time-consuming to test it out.
|
Difference between plt.close() and plt.clf()
| 1.2 | 0 | 0 | 49,996 |
16,661,801 |
2013-05-21T03:48:00.000
| 1 | 0 | 0 | 0 |
python,ajax,jboss,richfaces,screen-scraping
| 16,662,103 | 2 | false | 1 | 0 |
You need to make the AJAX calls from your client to the server and interpret the data yourself. Interpreting the AJAX data is easier and less error-prone than scraping HTML anyway.
It can be a bit tricky to figure out the AJAX API if it isn't documented, though. A network sniffer tool like Wireshark can be helpful there; there may also be useful browser plugins that do the same nowadays. I haven't needed to do that for years. :-)
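A sketch of replaying such a request directly; the endpoint, form body, and header values are all assumptions you would take from the sniffer:

```python
import urllib.request

req = urllib.request.Request(
    "http://example.com/richfaces/tree",        # assumed endpoint
    data=b"AJAXREQUEST=treePanel&nodeId=3",     # assumed form body
    headers={"X-Requested-With": "XMLHttpRequest"},
)
# response = urllib.request.urlopen(req)        # live network call, not run here
print(req.get_method())                         # POST, because data is set
```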
| 1 | 0 | 0 |
I'm writing a python script to do some screen scraping of a public website. This is going fine, until I want to interact with an AJAX-implemented tree control. Evidently, there is a large amount of javascript controlling the AJAX requests. It seems that the tree control is a JBoss RichFaces RichTree component.
How should I interact with this component programmatically?
Are there any tricks I should know about?
Should I try an implement a subset of the RichFaces AJAX?
Or would I be better served wrapping some code around an existing web-browser? If so, is there a python library that can help with this?
|
How can I programmatically interact with a website that uses an AJAX JBoss RichTree component?
| 0.099668 | 0 | 1 | 601 |
16,664,714 |
2013-05-21T07:46:00.000
| 4 | 0 | 0 | 0 |
python,intellij-idea,pycharm,manage.py
| 16,706,964 | 1 | true | 1 | 0 |
Okay, so apparently the culprit is the IntelliJ IDEA project creation wizard.
If you create a new project within PyCharm and choose Django Project as the desired project type, it just works. You don't have to configure anything else.
To do this in IntelliJ IDEA, create a new project, choose "Python Module" as the type and check Django as a technology. In the new project go to "File > Project Structure", navigate to "Facets", choose your Django module on the right and set the "Settings:" option to point to the specific "settings.py" file.
After this configuration, everything should work as smoothly as in PyCharm.
I consider this problem a bug, as the wizard creates the basic project, including a "settings.py" file, but doesn't add it to the project settings. Plus, you don't get any warning that such a strongly needed settings file is missing.
| 1 | 2 | 0 |
I'm currently looking for a nifty Python/Django IDE and came across PyCharm from JetBrains which I tested for about a week now and I'm quite impressed by this piece of software.
However, I've read that IntelliJ Ultimate with JetBrains own Python Plugin offers about the same Features as PyCharm itself, so I went ahead and gave it a try, but experienced some issues which I didn't have within PyCharms.
In IntelliJ, the built in Feature 'Tools > "Run manage.py Task..."' works with most (e.g. runserver, startapp, syncdb, ...), but not with all commands:
Almost all sql-related commands like "sql", "sqlall", "sqlclear", ... are shown as available commands, but raise a "No Applications" error message when entered.
In PyCharm those commands works fine.
Running the test suite via "test" opens a box where I have to choose the applications which I want to test. However, there is only one entry "[All Applications]" and running it causes several Exceptions to be thrown.
Adding some new Applications to the "settings.py" file has no effect on "Run manage.py Task..." whereas in PyCharm it adds new commands to it. I've tried this for example with "django.contrib.gis" to enable "ogrinspect"
All those issues lead to the assumption that IntelliJ somehow fails to recognize the installed django applications. Did I miss some configurations or settings in IntelliJ which are already set in PyCharm?
To further explain what I did exactly:
In IntelliJ I created a new project "Python Module" then enabled Django as desired technology.
In PyCharm I just created a new project of type "Django Project".
All following steps were exactly the same.
|
IntelliJ 12 Python - Issues with manage.py Tasks
| 1.2 | 0 | 0 | 1,462 |
16,672,846 |
2013-05-21T14:43:00.000
| 2 | 0 | 0 | 1 |
python,google-app-engine,authentication,google-plus
| 16,673,012 | 2 | true | 1 | 0 |
First of all, you should always store the email property in lowercase, since case is not significant. If you also want to take into account the dot or plus symbols and be able to query on them, store the stripped-down version of the email in another (hidden) property and run your queries against that one.
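A sketch of computing that stripped version, with the rules assumed from Gmail's documented behaviour (matching is case-insensitive, dots in the local part are ignored, and a "+tag" suffix is discarded):

```python
def normalize_gmail(email):
    # lowercase everything, then split local part from domain
    local, _, domain = email.lower().partition("@")
    local = local.split("+", 1)[0]      # drop any +tag suffix
    local = local.replace(".", "")      # dots are ignored by Gmail
    return local + "@" + domain

print(normalize_gmail("John.Doe+lists@Gmail.com"))  # johndoe@gmail.com
```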
| 1 | 1 | 0 |
I'm using the Google ID as the datastore ID for my user objects.
Sometimes I want to find a user by email. A Gmail address can appear with or without dots, with capital letters, and with other variations. How can I retrieve the user ID from a given email?
|
How to get google ID from email
| 1.2 | 0 | 0 | 1,883 |
16,676,706 |
2013-05-21T18:12:00.000
| 1 | 0 | 1 | 0 |
python,django,string,unicode,flask
| 16,677,616 | 2 | true | 0 | 0 |
It depends on the encoding. UTF-32 always uses 4 bytes per character; UTF-8 uses a single byte for English text, two bytes for most European languages, and up to four for mathematical symbols, Chinese/Japanese, and so on. So 200 bytes are enough to fit any 50-character string: UTF-8 as standardized today (RFC 3629) never uses more than 4 bytes per code point. :)
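The arithmetic can be checked directly:

```python
worst = "\U0010FFFF" * 50                # highest Unicode code point, 50 times
print(len(worst.encode("utf-8")))        # 200: UTF-8's 4-byte worst case
print(len(worst.encode("utf-32")))       # 204: 4 bytes each plus a 4-byte BOM
print(len(("a" * 50).encode("utf-8")))   # 50: ASCII stays 1 byte per character
```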
| 1 | 2 | 0 |
So, I'm working to a specification which dictates a byte length for a given variable (200 bytes) provided by end users/apps.
Using a Python string, what is the maximum character length of a string that fits in 200 bytes, and which I can therefore specify as the max_length setting of my database field?
(Equally, I may be missing something in the byte-unicode conversion!)
|
Longest 200 byte string - database validation max_length
| 1.2 | 0 | 0 | 974 |
16,679,272 |
2013-05-21T20:50:00.000
| -1 | 0 | 1 | 0 |
python,python-3.x,operator-precedence,boolean-expression
| 70,054,456 | 7 | false | 0 | 0 |
The expression 1 or 1 and 0 or 0 returns 1. It looks like they have almost the same priority.
| 3 | 106 | 0 |
As far as I know, in C & C++ the priority sequence for NOT, AND & OR is NOT > AND > OR. But this doesn't seem to work the same way in Python. I tried searching for it in the Python documentation and failed (guess I'm a little impatient). Can someone clear this up for me?
|
Priority of the logical operators NOT, AND, OR in Python
| -0.028564 | 0 | 0 | 114,829 |
16,679,272 |
2013-05-21T20:50:00.000
| 6 | 0 | 1 | 0 |
python,python-3.x,operator-precedence,boolean-expression
| 29,112,031 | 7 | false | 0 | 0 |
Of the boolean operators the precedence, from weakest to strongest, is as follows:
or
and
not x
is not; not in
Where operators are of equal precedence evaluation proceeds from left to right.
| 3 | 106 | 0 |
As far as I know, in C & C++ the priority sequence for NOT, AND & OR is NOT > AND > OR. But this doesn't seem to work the same way in Python. I tried searching for it in the Python documentation and failed (guess I'm a little impatient). Can someone clear this up for me?
|
Priority of the logical operators NOT, AND, OR in Python
| 1 | 0 | 0 | 114,829 |
16,679,272 |
2013-05-21T20:50:00.000
| 38 | 0 | 1 | 0 |
python,python-3.x,operator-precedence,boolean-expression
| 45,944,055 | 7 | false | 0 | 0 |
You can do the following test to figure out the precedence of and and or.
First, try 0 and 0 or 1 in the Python console.
If or binds first, then we would expect 0 as the output.
In my console, 1 is the output. That means and either binds first or binds equally with or (with expressions evaluated from left to right).
Then try 1 or 0 and 0.
If or and and bind equally, with left-to-right evaluation, then we should get 0 as the output.
In my console, 1 is the output. We can then conclude that and has higher priority than or.
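The same experiments, written as assertions (with one more for not):

```python
# "and" binds tighter than "or": 0 and 0 or 1 groups as (0 and 0) or 1.
assert (0 and 0 or 1) == ((0 and 0) or 1) == 1
assert (1 or 0 and 0) == (1 or (0 and 0)) == 1

# "not" binds tighter than "and": not 0 and 0 groups as (not 0) and 0.
assert (not 0 and 0) == ((not 0) and 0) == 0

print("precedence: not > and > or")
```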
| 3 | 106 | 0 |
As far as I know, in C & C++ the priority sequence for NOT, AND & OR is NOT > AND > OR. But this doesn't seem to work the same way in Python. I tried searching for it in the Python documentation and failed (guess I'm a little impatient). Can someone clear this up for me?
|
Priority of the logical operators NOT, AND, OR in Python
| 1 | 0 | 0 | 114,829 |
16,680,798 |
2013-05-21T22:51:00.000
| 3 | 0 | 0 | 0 |
python,wxpython
| 16,681,397 | 1 | false | 0 | 1 |
Bind an event handler for the checkbox widget that will call the textctrl's Enable method, passing True or False based on the status of the checkbox.
| 1 | 0 | 0 |
How can I add a TextCtrl area that will allow input only if a checkbox is ticked? Otherwise the TextCtrl area will be ignored.
|
wxPython - Optional Text Boxes
| 0.53705 | 0 | 0 | 42 |
16,682,410 |
2013-05-22T02:13:00.000
| 0 | 1 | 1 | 0 |
python,module
| 16,682,782 | 1 | false | 0 | 0 |
Python modules are already executable; you don't compile them. If you want to run them on another computer, you can install Python and any other dependent modules such as pygame on that computer, copy the scripts over, and run them.
Python has many ways to wrap scripts up into an installer that does the work for you. It's common to use Python distutils to write a setup.py file which handles the install. From there you can use setup.py to bundle your scripts into zip files, tarballs, executables, RPMs, etc. for other machines. You can document what the user needs to make your stuff go, or you can use something like pip or distribute to write dependency files that automatically install pygame (and so on).
There are many ways to handle this, and it's not particularly easy the first time round. For starters, read up on distutils in the standard Python docs and then google for the pip installer.
| 1 | 0 | 0 |
So I have been taking a few classes on Python, and the whole time I was wondering about modules. I can install them and run them with Eclipse, but if I compile a program into an executable (an 'exe' extension), how would the module behave on a computer that doesn't have it installed?
Example:
Say I made some random little thing with something like pygame: I installed the pygame module on my computer, made an application with it, and compiled it into an executable. How does that file run on the other computer? Or does it not work at all?
|
Python modules on different devices
| 0 | 0 | 0 | 148 |
16,684,354 |
2013-05-22T05:50:00.000
| 1 | 0 | 1 | 0 |
python,variables,output,temporary,komodoedit
| 16,684,381 | 1 | true | 0 | 0 |
The Python output is realtime.
If your output is not realtime, this is likely an artefact of Komodo Edit. Run your script outside of Komodo.
And Python, like any programming language, starts from scratch each time you run it. How would it work otherwise?
If you want an interpreter-like situation you can use import pdb;pdb.set_trace() in your script. That will give you an interpreter prompt for debugging.
| 1 | 1 | 0 |
Forgive me if it's a silly question. I'm new to Python and scripting languages. Now I'm using Komodo Edit to code and run Python programs. Each time I run it, I have to wait until the program finished execution to see my "print" results in the middle. I'm wondering if it's possible to see realtime outputs as in a console. Maybe it is caused by some preference in Komodo?
Another question is that I know in the interpreter, when I store some variables it will remember what I stored, like in a Matlab workspace. But in Komodo Edit, every time the program runs from the beginning and store no temporary variables for debugging. For example if I need to read in some large file and do some operations, every time I have to read it in again which takes a lot of time.
Is there a way to achieve instant output or temporary variable storage without typing every line into the interpreter directly, when using other environments like Komodo?
|
Python outputs only after script has finished in Komodo Edit
| 1.2 | 0 | 0 | 316 |
16,684,894 |
2013-05-22T06:29:00.000
| 1 | 0 | 0 | 1 |
python,windows,audio,portaudio
| 16,702,788 | 2 | false | 0 | 0 |
Apparently I can get the full string from ffmpeg, as follows:
ffmpeg -list_devices true -f dshow -i dummy
And then the name of the mic will be on the line after "DirectShow audio devices"
| 1 | 2 | 0 |
In python2.7 on Windows, I need to get the name of the default microphone, which will be a string such as "Microphone (2- High Definition Audio Device)".
My first attempt was to query WMI using subprocess: wmic path Win32_SoundDevice get * /format:list. Unfortunately, this seems to return speakers as well as mics, and I can't see any property that would let me distinguish the two. Also, the name of the correct device is not in the right format, e.g. it appears as simply "High Definition Audio Device" instead of the full correct string "Microphone (2- High Definition Audio Device)".
My second attempt was to use PyAudio (the python bindings to PortAudio). Calling PyAudio().get_default_input_device_info()["name"] gets me pretty close, but unfortunately the name is getting truncated for some reason! The return value is "Microphone (2- High Definition " (truncated to 31 characters length). If I could only get a non-truncated version of this string, it would be perfect.
Any ideas for what is the simplest, most self-contained way to get the default microphone name? Thanks!
|
Windows: get default microphone name
| 0.099668 | 0 | 0 | 2,638 |
16,689,681 |
2013-05-22T10:41:00.000
| 5 | 0 | 0 | 0 |
python,matlab,data-structures,cell
| 16,689,864 | 1 | true | 0 | 0 |
Have you considered a list of numpy.arrays?
| 1 | 1 | 1 |
I have a Matlab cell array, each of whose cells contains an N x M matrix. The value of M varies across cells.
What would be an efficient way to represent this type of a structure in Python using numpy or any standard Python data-structure?
|
Alternative to Matlab cell data-structure in Python
| 1.2 | 0 | 0 | 1,299 |
16,697,391 |
2013-05-22T16:49:00.000
| 7 | 0 | 0 | 0 |
python,opencv,image-processing,computer-vision,scikit-learn
| 16,698,292 | 1 | true | 0 | 0 |
I have worked mainly with OpenCV and also with scikit-image. I would say that OpenCV is more focused on computer vision (classification, feature detection and extraction, ...), though lately scikit-image has been improving rapidly.
I have found that some algorithms perform faster under OpenCV; however, in most cases I find working with scikit-image much easier, as OpenCV's documentation is quite cryptic.
Since the OpenCV 2.x bindings work with numpy, as does scikit-image, I would consider using both libraries, trying to take the best of each. At least that is what I have done in my last project.
| 1 | 8 | 1 |
I want to decide about a Python computer vision library. I had used OpenCV in C++, and like it very much. However this time I need to develop my algorithm in Python. My short list has three libraries:
1- OpenCV (Python wrapper)
2- PIL (Python Imaging Library)
3- scikit-image
Would you please help me to compare these libraries?
I use numpy, scipy, scikit-learn in the rest of my code. The performance and ease of use is an important factor, also, portability is an important factor for me.
Thanks for your help
|
Comparing computer vision libraries in python
| 1.2 | 0 | 0 | 2,421 |
16,698,220 |
2013-05-22T17:36:00.000
| 0 | 0 | 0 | 0 |
javascript,python,html,web
| 16,698,568 | 2 | false | 1 | 0 |
I had to do this too. I had a Python script (which fetches data from another website) that gets executed when you click on a button.
I used Ruby on Rails for my client-side code. I embedded the script invocation in my Ruby controller, which then gets called by my form, and hence it gets executed.
eg:
cmd = " python getData.py "
exec( cmd )
| 1 | 0 | 0 |
I'm trying to have it so that when my web page is loaded, a python script is executed. The website is run on an apache server and the script is in the same directory as the index.html (it's a very small project).
Is there any way I can do this? I'm not trying to output the data from the python file to the webpage, nor am I trying to affect anything client-side; I simply want the python script to execute and do its thing whenever the web page is loaded.
Is there some sort of javascript function that I can use? I've searched around but really haven't found anything similar. Thanks!
|
Executing a Python Script when Web Page is loaded
| 0 | 0 | 0 | 4,778 |
16,698,260 |
2013-05-22T17:38:00.000
| 0 | 0 | 0 | 1 |
google-app-engine,python-2.7,nltk
| 16,700,974 | 1 | false | 1 | 0 |
Where do you have nltk installed?
GAE libraries need to be available in your app folder. If you have nltk elsewhere in your pythonpath it won't work.
| 1 | 0 | 0 |
I am trying to import the NLTK library in Google App Engine and it gives an error. I created another module, "testx.py", and this module works without error, but I don't know why NLTK does not work.
My code
nltk_test.py
import webapp2
import path_changer
import testx
import nltk
class MainPage(webapp2.RequestHandler):
    def get(self):
        #self.response.headers['Content-Type'] = 'text/plain'
        self.response.write("TEST")
class nltkTestPage(webapp2.RequestHandler):
    def get(self):
        text = nltk.word_tokenize("And now for something completely different")
        self.response.write(testx.test("Hellooooo"))
application = webapp2.WSGIApplication([
    ('/', MainPage), ('/nltk', nltkTestPage),
], debug=True)
testx.py code
def test(txt):
    return len(txt)
path_changer.py code
import os
import sys
sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'nltk'))
sys.path.insert(1, os.path.join(os.path.dirname(__file__), 'new'))
app.yaml
application: nltkforappengine
version: 0-1
runtime: python27
api_version: 1
threadsafe: true
handlers:
- url: /.*
  script: nltk_test.application
- url: /nltk.*
  script: nltk_test.application
libraries:
- name: numpy
  version: "1.6.1"
This code works fine when I comment out the import nltk line and the NLTK-related code, so I think NLTK is not being imported. Please help me sort out this problem, thanks.
|
Google App Engine import NLTK error
| 0 | 0 | 0 | 423 |
16,698,296 |
2013-05-22T17:40:00.000
| 0 | 0 | 0 | 0 |
python,django,python-2.7
| 50,297,908 | 2 | false | 1 | 0 |
You should use python manage.py migrate app_name rather than python manage.py migrate, because you have used it before.
| 1 | 1 | 0 |
I've tried to solve the following issue:
I'm running python manage.py test to test my application.
After creating a new test_app database, I'm getting
DatabaseError: (1050, "Table 'auth_group' already exists")
I haven't installed South (it's not on the INSTALLED_APPS list), how do I solve this?
|
Django "Table already exists" on testing
| 0 | 0 | 0 | 2,226 |
16,699,577 |
2013-05-22T18:56:00.000
| 2 | 0 | 0 | 0 |
python,twisted,quickfix
| 20,018,685 | 1 | false | 0 | 0 |
Use the message.ToString() function to serialize the message.
| 1 | 3 | 0 |
I have used quickfix in c++. I am trying to use the python version.
Documentation seems a little sparse, so I was hoping to get some information regarding the same.
I have an emulator, that assembles a message in various protocols (some fix/ some non fix).
opens a tcp connection to a server and sends these messages over.
I am considering assembling the fix message using quickfix.
I don't want to use the client portion of quickfix, just the part which assembles a fix message.
Can this be done? I.e., does the API support getting the raw FIX (which can then be sent over a TCP connection) from the Message format?
Thanks and Regards.
|
Using the quickfix python along with twisted
| 0.379949 | 0 | 0 | 489 |
16,699,883 |
2013-05-22T19:15:00.000
| 0 | 1 | 0 | 1 |
python,linux,ubuntu
| 56,511,789 | 4 | false | 0 | 0 |
You do not need to use any module for this.
You can simply navigate to
/sys/class/power_supply/BAT0.
Here you will find a lot of files with information about your battery.
You will get the current charge from the charge_now file and the total charge from the charge_full file.
Then you can calculate the battery percentage with some simple math.
Note: you may need root access for this. You can use the sudo nautilus command to open directories in root mode.
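A minimal sketch of that calculation in Python, assuming the BAT0 path and the charge_now/charge_full file names from the answer above (some machines expose energy_now/energy_full instead, so treat these names as assumptions):

```python
# Sketch: read battery charge from sysfs and compute a percentage.
# Assumptions: the battery is at /sys/class/power_supply/BAT0 and exposes
# charge_now/charge_full files (some machines use energy_now/energy_full).
import os

BASE = "/sys/class/power_supply/BAT0"

def read_value(name, base=BASE):
    # Each sysfs file holds a single integer followed by a newline.
    with open(os.path.join(base, name)) as f:
        return int(f.read().strip())

def percent(charge_now, charge_full):
    # Plain ratio; guard against a zero denominator.
    if charge_full <= 0:
        raise ValueError("charge_full must be positive")
    return 100.0 * charge_now / charge_full

if __name__ == "__main__":
    if os.path.isdir(BASE):
        print("Battery: %.1f%%" % percent(read_value("charge_now"),
                                          read_value("charge_full")))
    else:
        print("No battery found at", BASE)
```

For the alerting part of the question, the same script could be run periodically (e.g. from cron) and act on the computed percentage.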
| 2 | 5 | 0 |
I am trying to come out with a small python script to monitor the battery state of my ubuntu laptop and sound alerts if it's not charging as well as do other stuff (such as suspend etc).
I really don't know where to start, and would like to know if there is any library for python i can use.
Any help would be greatly appreciated.
Thanks
|
Use Python to Access Battery Status in Ubuntu
| 0 | 0 | 0 | 5,179 |
16,699,883 |
2013-05-22T19:15:00.000
| 0 | 1 | 0 | 1 |
python,linux,ubuntu
| 39,884,293 | 4 | false | 0 | 0 |
The the "power" library on pypi is a good bet, it's cross platform too.
| 2 | 5 | 0 |
I am trying to come out with a small python script to monitor the battery state of my ubuntu laptop and sound alerts if it's not charging as well as do other stuff (such as suspend etc).
I really don't know where to start, and would like to know if there is any library for python i can use.
Any help would be greatly appreciated.
Thanks
|
Use Python to Access Battery Status in Ubuntu
| 0 | 0 | 0 | 5,179 |
16,700,381 |
2013-05-22T19:46:00.000
| 1 | 0 | 0 | 0 |
python,sockets,client-server
| 16,700,468 | 1 | true | 0 | 0 |
The criteria that determine uniqueness (for connectivity) are:
IP Address
Protocol
Port
Thus, if A and B have different IP addresses, they all use TCP, but B's server has a different port than B's client, then, all other things being equal, they should all be reachable.
| 1 | 0 | 0 |
As the title says, can client A connect to client B on different machines, when the server is on the same machine with client B ?
Note that the client B and server on the machine have different port numbers.
And client B acts like a server, i.e., it also listens for clients, but client A must first handshake with the server and then with client B. Is this possible? Thank you.
|
can peer A connect to peer B if server is on peer B?
| 1.2 | 0 | 1 | 49 |
16,700,526 |
2013-05-22T19:56:00.000
| 0 | 0 | 0 | 0 |
python,django,django-syncdb,syncdb,django-manage.py
| 34,055,679 | 4 | false | 1 | 0 |
This seems to work - python manage.py syncdb --database=NEW_APP_DB
| 1 | 0 | 0 |
First of all: please don't ask me: "Why would you need this?".
Now the question itself: I have several applications installed in INSTALLED_APPS. Database is now empty and I want to synchronise only some of apps. What's the simplest way to do this?
I know I can write my custom management command based on syncdb. But it's a shame syncdb is designed in such a way that I would have to copy/paste a large chunk of code, changing only one line. This is a reason I want to explore other possibilities.
|
django select app(s) for syncdb
| 0 | 0 | 0 | 2,970 |
16,700,600 |
2013-05-22T20:01:00.000
| 0 | 0 | 0 | 1 |
java,python,protocol-buffers
| 19,603,923 | 2 | false | 1 | 0 |
Unfortunately the Python protobuf deserialization is just pretty slow (as of 2013) compared to the other languages.
| 1 | 2 | 0 |
I'm new to Google's protocol buffers and looking for some insight. I have a large object that is serialized in Java which I am deserializing in Python. The upstream tells me that the file is serialized in about 4 to 5 seconds, whereas it takes me 37 seconds to deserialize. Any ideas on why there is such a huge difference besides hardware? Are there ways I can speed up the deserialization? Does Java perform better at this? I'm simply grabbing a serialized data file and using ParseFromString.
Thanks
UPDATE:- So just got back to this after a while and tried to deserialize the file using java. It took 4 seconds to deserialize a bigger file (56 m). Now this solves my problem with the performance however, I really am confused about the huge difference between the python and java, any insights?
|
optimizing google protocol buffer
| 0 | 0 | 0 | 1,004 |
16,701,027 |
2013-05-22T20:28:00.000
| 2 | 0 | 0 | 0 |
django,python-2.7,scrapy
| 16,755,838 | 1 | true | 1 | 0 |
I generally put my Scrapy project somewhere inside my Django project root folder. Just remember you will need to make sure both projects are on the Python path. This is easy to do if you are using virtualenv properly.
Aside from that, as long as you can import your Django models from Scrapy, I think everything else in the Scrapy docs is very clear. When you import your Django model, the Django settings are set up at that point; this means your database connection etc. should all be working fine as long as they are already working in Django.
The only real trick is getting the Python path set up properly (which is probably a topic for another question).
| 1 | 3 | 0 |
I am new to Django/Scrapy, and to programming in general. I am trying to make a Django site to help me learn.
What I want to do is Scrape product information from different sites and store them in my postgres database using DjangoItem from Scrapy.
I have read all the docs from both Scrapy and Django. I have searched here and other sites for a couple days and just couldn't find exactly what I was looking for that made the light bulb go off.
Anyway, my question is, what is the standard for deploying Scrapy and Django together. Ideally I would like to scrape 5-10 different sites and store their information in my database.
Scrapy's docs are a little short on information on the best way to implement DjangoItem.
1) Should the Scrapy project be inside my Django app, at the root level of my Django project or outside all together.
2) Other than setting DjangoItem to my Django model, do I need to change any other settings?
Thanks
Brian
|
Using Scrapy DjangoItem with Django best way
| 1.2 | 0 | 0 | 896 |
16,701,764 |
2013-05-22T21:19:00.000
| 0 | 0 | 1 | 0 |
java,python,jython
| 16,947,188 | 3 | false | 1 | 0 |
Maybe I'm just missing the point, but can't you use getResourceAsStream() on a Java class?
| 1 | 4 | 0 |
I have some Jython modules that I'm trying to make work from within a JAR. Everything is set up fine except that some modules expect to open files from the filesystem that are located in the same directory as the Python script itself. This doesn't work anymore because those files are now bundled into the JAR.
Basically I want to know if there's an equivalent of Class.getResourceAsStream() that I can use from within the Python code to load these data files. I tried to use '__pyclasspath__/path/to/module/data.txt' but it didn't exist.
|
How can I load data files in a Jython module?
| 0 | 0 | 0 | 739 |
16,705,684 |
2013-05-23T04:29:00.000
| 0 | 0 | 0 | 1 |
python,google-app-engine,webapp2
| 16,705,889 | 3 | false | 1 | 0 |
Usually, you just have to call the corresponding method.
To be more specific: which flavour of App Engine are you using? Java, Python, Go... PHP?
| 1 | 0 | 0 |
Server.Transfer is sort of like a Redirect except instead of requesting the browser to do another page fetch, it triggers an internal request that makes the request handler "go to" another request handler.
Is there a Python equivalent to this in Google App Engine?
Edit: webapp2
|
What's the Google App Engine equivalent of ASP.NET's Server.Transfer?
| 0 | 0 | 0 | 210 |
16,706,907 |
2013-05-23T06:23:00.000
| 0 | 0 | 0 | 0 |
python,ubuntu,user-interface,pygtk,desktop-application
| 16,709,612 | 1 | false | 0 | 1 |
BTW, Unity is a Compiz plugin. :D
Also, I think PyGTK will not be able to do it. You'd need some kind of desktop plugin, and what you are trying to do is tricky; it may break in future versions of Unity. You need the window manager (wm) to manage it as a borderless window with a static position, something like desktop plugins in KDE. It also will not work in other wms.
Maybe you should consider making it a simple icon in the notification area that would change and display more info on hover; that, I'm sure, would work anywhere.
| 1 | 0 | 0 |
I've spent literally all day on google, stackoverflow, and many other sites trying to find a way to implement a windowless app using python, to no avail. I'm familiar with python and have been using pygtk to create previous apps, but it doesn't look like pygtk can cut it.
I'm using the standard Ubuntu 12.10 Unity setup, and don't want to switch over to something that uses compiz. I also don't want to use screenlets.
The app I want to create is to notify me of any updates on my social media accounts, and I want it to be floating perpetually on the screen, accessible at any time. I want it to be stand-alone, and not be dependent on outside apps (like screenlets) to run, and I want it to be as minimalistic as possible.
If I can't accomplish this with pygtk, what else should I use? If it is possible, how would I go about implementing it?
|
How would I create a windowless app using python on Ubuntu 12.10?
| 0 | 0 | 0 | 336 |
16,709,591 |
2013-05-23T08:57:00.000
| 1 | 0 | 1 | 0 |
python,cython,distutils
| 16,709,664 | 3 | true | 0 | 0 |
I haven't done anything with cython myself but I guess you could use a commandline for the building. That way you would see all the messages until you close the actual commandline unless some really fatal error happens.
| 1 | 0 | 0 |
When I compile a Cython .pyx file from IdleX, the build shell window pops up with a bunch of warnings, only to close again after less than a second.
I think pyximport uses distutils to build. How can I write the gcc warnings to a file or have the output delay or wait for keypress?
|
Get hold of warnings from cython pyximport compile (distutils build output?)
| 1.2 | 0 | 0 | 794 |
16,710,374 |
2013-05-23T09:35:00.000
| 0 | 0 | 0 | 0 |
python,algorithm,graph
| 16,710,763 | 4 | false | 0 | 0 |
When you want to find the shortest path you should use BFS and not DFS, because BFS explores the closest nodes first. So when you reach your goal you know for sure that you used the shortest path and you can stop searching. Whereas DFS explores one branch at a time, so when you reach your goal you can't be sure that there is not a shorter path via another branch.
So you should use BFS.
If your graph has different weights on its edges, then you should use Dijkstra's algorithm, which is an adaptation of BFS for weighted graphs; but don't use it if you don't have weights.
Some people may recommend the Floyd-Warshall algorithm, but it is a very bad idea for a graph this large.
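The BFS approach above can be sketched as follows (the adjacency-dict representation and all names are illustrative):

```python
# Sketch: shortest path between two nodes of an unweighted graph via BFS.
from collections import deque

def shortest_path(graph, start, goal):
    # graph: dict mapping node -> iterable of neighbour nodes
    if start == goal:
        return [start]
    visited = {start}
    queue = deque([[start]])  # queue of partial paths, closest frontier first
    while queue:
        path = queue.popleft()
        for neighbour in graph.get(path[-1], ()):
            if neighbour in visited:
                continue
            if neighbour == goal:
                return path + [neighbour]  # first hit is a shortest path
            visited.add(neighbour)
            queue.append(path + [neighbour])
    return None  # goal unreachable

g = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": ["e"]}
print(shortest_path(g, "a", "e"))  # ['a', 'b', 'd', 'e']
```

Because BFS expands nodes in order of distance from the start, the first time the goal is dequeued the path found is guaranteed shortest, which is exactly why it can stop early while DFS cannot.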
| 2 | 0 | 1 |
I am implementing a huge directed graph consisting of 100,000+ nodes. I am just beginning python so I only know of these two search algorithms. Which one would be more efficient if I wanted to find the shortest distance between any two nodes? Are there any other methods I'm not aware of that would be even better?
Thank you for your time
|
Breadth First Search or Depth First Search?
| 0 | 0 | 0 | 1,736 |
16,710,374 |
2013-05-23T09:35:00.000
| 0 | 0 | 0 | 0 |
python,algorithm,graph
| 16,710,738 | 4 | false | 0 | 0 |
If there are no weights on the edges of the graph, a simple breadth-first search, where you visit nodes in the graph iteratively and check whether any of the new nodes equals the destination node, can be done. If the edges have weights, Dijkstra's algorithm and the Bellman-Ford algorithm are the things you should be looking at, depending on the time and space complexities you are aiming for.
| 2 | 0 | 1 |
I am implementing a huge directed graph consisting of 100,000+ nodes. I am just beginning python so I only know of these two search algorithms. Which one would be more efficient if I wanted to find the shortest distance between any two nodes? Are there any other methods I'm not aware of that would be even better?
Thank you for your time
|
Breadth First Search or Depth First Search?
| 0 | 0 | 0 | 1,736 |
16,712,834 |
2013-05-23T11:32:00.000
| 2 | 0 | 0 | 0 |
python,notifications,google-plus,hangout
| 16,721,015 | 3 | true | 0 | 0 |
Hangouts does not currently have a public API.
That said, messages delivered to the Google Talk XMPP server (talk.google.com:5222) are still being delivered to users via Hangouts. This support is only extended to one-on-one conversations, so the notification can't be delivered to a group of users. The messages will need to be supplied through an authenticated Google account in order to be delivered.
| 1 | 7 | 0 |
Is there an API which allows me to send a notification to Google Hangout? Or is there even a python module which encapsulates the Hangout API?
I would like to send system notification (e.g. hard disk failure reports) to a certain hangout account. Any ideas, suggestions?
|
send google hangout notification using python
| 1.2 | 0 | 1 | 9,293 |
16,714,004 |
2013-05-23T12:30:00.000
| 3 | 0 | 1 | 0 |
python,class,object,memory,space
| 16,714,050 | 1 | true | 0 | 0 |
The instance merely references its class. No extra memory is required to hold those two methods beyond the class definition.
From a memory perspective, the class definition itself is little more than a tuple of references to the base classes, a reference to its metatype (type() by default) and a dictionary for the attributes (which includes all the methods).
Instances are just a reference to the class, plus either a dictionary of attributes or a fixed set of attributes when using __slots__ (the latter is more memory efficient but disallows setting arbitrary extra attributes).
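A quick illustration of both points (class names are arbitrary):

```python
# Sketch: instances reference their class; methods live on the class object.
class Plain(object):
    def greet(self):
        return "hi"

class Slotted(object):
    __slots__ = ("x",)  # fixed attribute set, no per-instance __dict__

p = Plain()
s = Slotted()

# The method is found on the class, not copied into each instance.
assert "greet" in Plain.__dict__ and "greet" not in p.__dict__
assert type(p) is Plain  # the instance just points back at its class

# Slotted instances have no __dict__ and reject undeclared attributes.
assert not hasattr(s, "__dict__")
s.x = 1          # allowed: declared in __slots__
try:
    s.y = 2      # not declared -> AttributeError
except AttributeError:
    pass
else:
    raise AssertionError("expected AttributeError")
```

So an instance of a subclass of A does not duplicate A's methods; attribute lookup just walks up the class hierarchy.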
| 1 | 0 | 0 |
Let's say I create some class named A() with two custom methods in it.
When I instantiate it, an object will be created in some address in the memory representinf this instance.
Now I create a subclass of A, for example, B(A) and define one more custom method in it.
My question is, when I instantiate B, the object created in memory will also "contain" the data of the class A, namely the two custom methods?
|
python - How much space does a subclass require in memory
| 1.2 | 0 | 0 | 91 |
16,716,049 |
2013-05-23T14:04:00.000
| 1 | 0 | 0 | 1 |
python,dns,twisted
| 16,717,175 | 2 | false | 0 | 0 |
I'm not massively familiar with Twisted; I only recently started using it. It looks like it doesn't block, though, but only on platforms that support threading.
In twisted.internet.base, ReactorBase does the resolving through its resolve method, which returns a deferred from self.resolver.getHostByName.
self.resolver is an instance of BlockingResolver by default, which does block, but it looks like, if the platform supports threading, the resolver instance is replaced by ThreadedResolver in the ReactorBase._initThreads method.
| 1 | 2 | 0 |
It seems obvious that it would use the twisted names api and not any blocking way to resolve host names.
However digging in the source code, I have been unable to find the place where the name resolution occurs. Could someone point me to the relevant source code where the host resolution occurs ( when trying to do a connectTCP, for example).
I really need to be sure that connectTCP wont use blocking DNS resolution.
|
Does twisted epollreactor use non-blocking dns lookup?
| 0.099668 | 0 | 0 | 591 |
16,719,307 |
2013-05-23T16:32:00.000
| 3 | 0 | 1 | 0 |
python,python-2.7
| 16,719,393 | 1 | true | 0 | 0 |
Please check that Eclipse has the right PYTHONPATH environmental variables. Open a python interactive interpreter in a shell and try importing the same urllib, urllib2 and random modules. If that works, then Eclipse might be configured wrong. If you can't access those modules, then you should consider fixing your PYTHONPATH.
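A stdlib sketch of that check, runnable as a script from the shell (urllib2 exists only on Python 2, so only urllib and random are probed here; the module names are just examples):

```python
# Sketch: verify modules resolve and show which file on sys.path supplies them.
import importlib
import sys

def locate(module_name):
    try:
        module = importlib.import_module(module_name)
    except ImportError as exc:
        return "FAILED: %s" % exc
    # Built-in modules have no __file__ attribute.
    return getattr(module, "__file__", "built-in")

for name in ("urllib", "random"):
    print(name, "->", locate(name))

# If something fails above, inspect the interpreter's search path:
print(sys.path[:3], "...")
```

If this works in a plain shell but the same imports fail inside Eclipse, the IDE's interpreter configuration (its PYTHONPATH) is the culprit.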
| 1 | 0 | 0 |
I am trying to work with Python 2.7 in Eclipse on my Mac. I don't believe that I have ever messed with the source files, but when I try to import urllib, urllib2 or random it tells me that it can't find them. I used the Eclipse auto-configured 2.7 interpreter, so I have no idea what happened to the modules. How can I find them so that I can include them?
|
Python2.7 Modules Missing
| 1.2 | 0 | 1 | 130 |
16,720,883 |
2013-05-23T18:04:00.000
| 0 | 0 | 0 | 0 |
python,django,django-cms
| 16,723,428 | 2 | true | 1 | 0 |
After digging through the code, I discovered that django-cms doesn't actually expose pages via their slug unless they're created UNDER a home page. The code that looks up a page via their slug looks in the cms_title table, and it stores '' for the slug for any page that's not a child. Very unintuitive, but after I re-created the page under a "Home" page, I could then access it via the /about/ page.
| 2 | 0 | 0 |
How do you view a published django-cms page using a path that incorporates the slug?
I installed django-cms without error, and I can view the default cms homepage just fine. I created and published a simple "About" page with the slug "about", but when I visit http://localhost:8000/about/ I get a 404 error. I can see the page if I use the "View on site" button, but that takes me to http://localhost:8000/?preview=1&language=en, not the real published path.
What am I doing wrong?
|
Viewing a django-cms page via the slug
| 1.2 | 0 | 0 | 581 |
16,720,883 |
2013-05-23T18:04:00.000
| 0 | 0 | 0 | 0 |
python,django,django-cms
| 16,721,162 | 2 | false | 1 | 0 |
You won't get access until you check "published" in the CMS page list view in the admin.
"View on site" helps with a preview before the page is published.
| 2 | 0 | 0 |
How do you view a published django-cms page using a path that incorporates the slug?
I installed django-cms without error, and I can view the default cms homepage just fine. I created and published a simple "About" page with the slug "about", but when I visit http://localhost:8000/about/ I get a 404 error. I can see the page if I use the "View on site" button, but that takes me to http://localhost:8000/?preview=1&language=en, not the real published path.
What am I doing wrong?
|
Viewing a django-cms page via the slug
| 0 | 0 | 0 | 581 |
16,721,940 |
2013-05-23T19:05:00.000
| 9 | 0 | 1 | 1 |
python,macos
| 16,722,274 | 2 | false | 0 | 0 |
To expand on the other answers: Darwin is the part of OS X that is the actual operating system, in a stricter sense of that term.
To give an analogy, Darwin would be the equivalent of Linux - or Linux and the GNU utilities - while Mac OS X would be the equivalent of Ubuntu or another distribution. I.e. a kernel, the basic userspace utilities, and a GUI layer and a bunch of "built-in" applications.
| 1 | 21 | 0 |
In Python, when I type sys.platform on the Mac OS X the output is "darwin"? Why is this so?
|
Why when use sys.platform on Mac os it print "darwin"?
| 1 | 0 | 0 | 12,731 |
16,723,782 |
2013-05-23T20:59:00.000
| 0 | 0 | 0 | 0 |
python,rest,ssl,bottle
| 16,724,489 | 2 | false | 0 | 0 |
If you are using password authentication you need to store the password on the server so you can validate that the password you send from the client is OK.
In your particular case you will be using basic authentication, as you want the simplest option. Basic authentication over HTTP/HTTPS encodes the password with base64, but that's not a protection measure. Base64 is a two-way encoding: you can encode and decode a chunk of data and you need no secret to do it. The purpose of base64 encoding is to codify any kind of data, even binary data, as a string.
When you enter the password and send it over HTTPS, the HTTPS tunnel prevents anyone from seeing your password.
Another problem comes if someone gets access to your server and reads the password "copy" that you use to check whether the entered password is valid. The best way to protect it is hashing. A hash is a one-way codification system: anyone can hash a password, but you cannot unhash a chunk of data to discover the password. The only way to break a hashed password is by brute force. I'd recommend using MD5 or SHA hashes.
So to make it simple: the client uses HTTP/HTTPS basic authentication, so you'll encode your password in base64. Pass it through a header, not the URL. The server will contain a hashed copy of the password, either in a database or wherever you want. The backend code will receive the HTTP request, get the password, base64-decode it and then hash it. Once hashed, you will check whether it is equal to the copy stored on the server.
| 1 | 2 | 0 |
I'm using bottle to write a very simple backend API that will allow me to check up on an ongoing process remotely. I'm the only person who will ever use this service—or rather, I would very much like to be the only person to use this service. It's a very simple RESTish API that will accept GET requests on a few endpoints.
I've never really done any development for the web, and I wanted to do something as simple as is reasonable. Even this is probably an undue level of caution. Anyway, my very-very-basic idea was to use https on the server, and to authenticate with basically a hard-coded passkey. It would be stored in plaintext on the server and the client, but if anyone has access to either of those then I have a different problem.
Is there anything glaringly wrong about this approach?
|
very, very simple web authentication for personal use
| 0 | 0 | 1 | 385 |
16,724,472 |
2013-05-23T21:47:00.000
| 3 | 0 | 1 | 0 |
python
| 16,724,514 | 3 | false | 0 | 0 |
These are the right and left bitshift operators in your case. However Python, like many other languages, has support for operator overloading, so you can use them for other things. In your example, 100 is represented in binary as 1100100; when you shift it five digits to the right it is 11, or 3 in base ten.
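A quick interpreter check:

```python
# Right shift drops the low bits; left shift appends zero bits.
print(bin(100))     # 0b1100100
print(100 >> 5)     # 3   (0b1100100 -> 0b11)
print(100 << 2)     # 400 (0b1100100 -> 0b110010000)
print(100 // 2**5)  # 3   (right shift == floor division by a power of two)
```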
| 1 | 0 | 0 |
I have compiled a list of operators, keywords, etc. and the only ones I don't understand are >> and << in Python.
Please explain the math behind >> and <<.
Thank you.
|
Please explain python 100 >> 5 = 3
| 0.197375 | 0 | 0 | 246 |
16,731,960 |
2013-05-24T09:41:00.000
| 0 | 0 | 1 | 0 |
python,algorithm
| 16,735,119 | 3 | false | 0 | 0 |
I am not sure whether it will give an optimal solution, but would simply repeatedly merging the two largest non-overlapping sets not work?
| 1 | 0 | 1 |
I'm currently doing this by using a sort of a greedy algorithm by iterating over the sets from largest to smallest set. What would be a good algorithm to choose if i'm more concerned about finding the best solution rather than efficiency?
Details:
1) Each set has a predefined range
2) My goal is to end up with a lot of densely packed sets rather than reducing the total number of sets.
Example: Suppose the range is 8.
The sets might be: [1,5,7], [2,6], [3,4,5], [1,2] , [4], [1]
A good result would be [1,5,7,2,6,4], [3,4,5,1,2], [1]
|
What is a good way to merge non intersecting sets in a list to end up with denseley packed sets?
| 0 | 0 | 0 | 124 |
16,734,695 |
2013-05-24T12:12:00.000
| 0 | 0 | 1 | 0 |
python,datetime-format
| 26,056,361 | 2 | false | 0 | 0 |
Change the format of a datetime:
import datetime
d = datetime.datetime.strptime("2013-05-24", "%Y-%m-%d")
# datetime.datetime(2013, 5, 24, 0, 0)
d.strftime("%d-%b-%Y")
# '24-May-2013'
You can also write format(d, "%d-%b-%Y"), which calls the datetime's __format__ method (the method name has two underscores before and after "format").
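The question also asks about translating user-style tokens like "dd/MM/yyyy" into strptime directives. A sketch of one way to automate that with a single regex pass (the token table is an assumption covering only the formats mentioned; extend it as needed):

```python
import re

# user-style token -> strptime directive (assumed mapping)
TOKEN_MAP = {"yyyy": "%Y", "yy": "%y", "MMM": "%b", "MM": "%m", "M": "%m",
             "dd": "%d", "d": "%d", "HH": "%H", "mm": "%M"}

# longest tokens first so "MMM" wins over "MM", which wins over "M"
_pattern = re.compile("|".join(sorted(TOKEN_MAP, key=len, reverse=True)))

def to_strptime(fmt):
    # a single substitution pass, so emitted "%" directives are never re-matched
    return _pattern.sub(lambda m: TOKEN_MAP[m.group(0)], fmt)

print(to_strptime("dd/MM/yyyy"))  # %d/%m/%Y
print(to_strptime("HH:mm"))       # %H:%M
```

A naive chain of str.replace calls would break here, e.g. "mm" -> "%M" followed by "M" -> "%m" corrupts the already-emitted directive; the one-pass regex avoids that.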
| 1 | 2 | 0 |
How can I convert "dd/MM/yyyy", "HH:mm", "dd-MMM-yy", "M/d/yyyy" etc., to "%d/%m/%Y", "%H:%M", "%d-%b-%y" or "%m/%d/%Y" etc?
|
Convert user understandable date time format string to strptime understandable format string
| 0 | 0 | 0 | 5,588 |
16,735,977 |
2013-05-24T13:20:00.000
| 0 | 0 | 1 | 0 |
python,distutils,setup.py
| 16,736,059 | 2 | false | 0 | 0 |
setup.py is ultimately just an entry point to a Python program, so if you want to implement branching build behaviors, just have your standard setup.py parse command line arguments and dispatch the actual build process to whatever modules you want. That's much preferable to trying to dynamically re-name files with some pre-build script step.
| 1 | 1 | 0 |
Is it possible to indicate that a specific file will become setup.py during the building process (e.g., python setup.py sdist) when using distutils (or distribute, or else) ?
I would like to be able to do python setup-specificbuild.py sdist and have something (either in setup-specificbuild.py or as a command line argument) that would rename setup-specificbuild.py to setup.py in the package tarball build in dist/.
|
Python packaging with alternative setup.py
| 0 | 0 | 0 | 260 |
16,737,308 |
2013-05-24T14:25:00.000
| 0 | 0 | 0 | 1 |
php,python,google-app-engine
| 17,073,068 | 1 | false | 1 | 0 |
Thanks very much, hakre. Now I know what happened. The problem is that I also have the Python version of Google App Engine installed, so I needed to point the "development server" at the GAE PHP SDK, and it works well now! Thanks again; I will pass such kindness on to others in the future. – moshaholo May 26 at 12:16
Can anyone tell me how to change or specify the development server for the GAE PHP SDK? I just started using it and don't know much about this stuff.
P.S. Sorry for posting this as an answer; I wasn't able to see a reply option.
| 1 | 0 | 0 |
The newly launched Google App Engine (PHP version) does not work on my computer.
Every time I type in "localhost:8080", the running server returns "GET / HTTP/1.1" 500.
And it gives me a fatal error:
Fatal error: require_once(): Failed opening required
'google/appengine/runtime/ApiProxy.php'
(include_path='/Users/xxxxx/Job_work/helloworld:/usr/local/bin/php/sdk')
in
/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/devappserver2/php/setup.php
Does that mean my Python GAE disturbs my PHP version SDK?
|
Does GoogleAppEngine(Python SDK) disturb GoogleAppEngine(PHP SDK)?
| 0 | 0 | 0 | 209 |
16,737,386 |
2013-05-24T14:29:00.000
| 0 | 0 | 0 | 1 |
python,google-app-engine,beautifulsoup
| 16,737,623 | 2 | false | 1 | 0 |
It seems that uploading the whole directory the bs4 module resides in into the GAE app folder works.
| 1 | 0 | 0 |
I am working in python on a GAE app. Beautiful soup, which the app uses, works fine on my dev server locally. When I try and upload it to google's servers however, I get the following error: "Error parsing yaml file: the library "bs4" is not supported".
I am not sure how to fix this. Does anyone have any idea?
Thank you.
File Structure:
app.yaml
main.py
static(DIR)
templates(DIR)
bs4(DIR)
|
Beautiful Soup "not supported" Google App Engine
| 0 | 0 | 0 | 1,031 |
16,737,962 |
2013-05-24T14:58:00.000
| 0 | 0 | 0 | 0 |
python,windows-8,multi-touch,chromium-embedded
| 16,913,113 | 1 | true | 1 | 0 |
The problem was fixed by using CEF3 rather than CEF1
| 1 | 1 | 0 |
I've developed a basic custom browser with CEF (Chromium Embedded Framework) Python. This browser is meant to run in an interactive kiosk with Windows 8. It has a multi-touch screen for all user interactions.
If I run Google Chrome on the machine, the multi-touch gestures (scroll and virtual keyboard) are supported.
Unfortunately my CEF browser doesn't detect any multi-touch events. How can I fix it? Any pointer is welcome.
|
How to add multitouch support for ChromeEmbeddedFramework browser on windows 8?
| 1.2 | 0 | 1 | 652 |
16,739,319 |
2013-05-24T16:11:00.000
| 0 | 0 | 0 | 0 |
python,selenium,webdriver
| 34,998,411 | 7 | false | 0 | 0 |
You need a wait-until-your-element-loads step. If you are sure that your element will eventually appear on the page, this ensures that any validations run only after the expected element has loaded.
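Selenium ships this as an explicit wait: WebDriverWait(driver, timeout).until(expected_conditions.presence_of_element_located((By.NAME, "..."))). Conceptually it is just a poll loop, which this framework-free sketch illustrates (names are mine):

```python
import time

def wait_until(predicate, timeout=10.0, poll=0.5):
    """Re-try a condition until it returns something truthy, instead of
    failing on the first miss the way a bare find_element call does."""
    deadline = time.time() + timeout
    while True:
        result = predicate()
        if result:
            return result
        if time.time() >= deadline:
            raise TimeoutError("condition not met within %.1f s" % timeout)
        time.sleep(poll)
```

With a real driver, the predicate would be something like a lambda that tries driver.find_element and returns the element or False.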
| 1 | 4 | 0 |
I am using the python unit testing library (unittest) with selenium webdriver. I am trying to find an element by it's name. About half of the time, the tests throw a NoSuchElementException and the other time it does not throw the exception.
I was wondering if it had to do with the selenium webdriver not waiting long enough for the page to load.
|
Selenium Webdriver - NoSuchElementExceptions
| 0 | 0 | 1 | 9,188 |
16,742,269 |
2013-05-24T19:24:00.000
| 1 | 0 | 0 | 0 |
python,r,maps,openstreetmap
| 16,743,202 | 1 | true | 0 | 0 |
You can use the package RgoogleMaps. The function GetMap.OSM will retrieve a map from Open Street Maps and then you can use the PlotOnStaticMap function with FUN=lines to plot a set of lines connecting the lat and long points.
The ggmap package can also do this using ggplot2 style graphics. The get_openstreetmap in that package will download a map for you then use ggmap to plot the map and use the regular ggplot2 functions to add the routes on top.
| 1 | 3 | 0 |
I have data in the form of latitude and longitude co-ordinates. I want to plot a route for those co-ordinates in OSM using either Python or R. Can somebody tell me where I can begin? I've not worked with OSM before.
For each user I have a route, which is a set of latitude and longitude co-ordinates, so I want to plot each user's routes. I would prefer Python alternatives over R.
|
How to draw routes in open street maps using Python/R?
| 1.2 | 0 | 0 | 2,227 |
16,744,702 |
2013-05-24T22:57:00.000
| 1 | 0 | 0 | 0 |
python,gtk,glade
| 22,049,793 | 2 | false | 0 | 1 |
Not an answer, but it looks like "another.anon.coward" already answered this in a comment...
If you double click on the tab, then that page is selected for adding content in glade. You can go ahead and add content for that page. As for switching you can use set_current_page to switch to page whose content you want to display. Register for "switch-page" signal to find out which page has been switched to.
| 1 | 6 | 0 |
I'm currently developing a small application in Python with the use of GTK+ (and Glade). Everything has been fairly simple to integrate so far, until I came up with the idea to add tabs instead of pop-ups. Note: Still using Python 2.7+
Is there any easy way to implement already existing pages inside a new tab (notebook) structure? I'm having difficulty finding out how to add content per separate tab created in Glade.
Perhaps a clearer question: what Notebook function is required to associate a specific V/HBox with each different tab? The current structure looks like (minus Menu / statusbar):
[ mainWindow ] --> (1) mainOverview (gtkVbox) --> (2A) mainContent (gtkHbox) ... other non-related content
The structure I was hoping for would look like:
[ mainWindow ] --> (1) mainOverview --> (2) noteBook --> (3) Tab1 --> (4) mainContent (gtkHbox) -- (3) Tab2 --> (4) secondaryContent (gtkHbox)
The application itself works fine (multithreaded, fully functioning) without the tabs, the mainContent(gtkHbox) contains a file/recursive directory analyzer, a few checkboxes and a general overview. I was hoping for an easy way to display this main window (the gtkHbox) ONLY when having Tab1 selected.
Having difficulties to find good reference pages that display a proper way to call content pages per notebook tab. Any reference-pages or useful links are very much appreciated! Thanks so far! My apologies if this is a rather newbish question, I'm not new to Python coding, but interfaces on the other hand... ;)
|
Python GTK (Glade) Notebook
| 0.099668 | 0 | 0 | 1,912 |
16,745,387 |
2013-05-25T00:50:00.000
| 30 | 0 | 1 | 0 |
python,integer-math
| 16,745,422 | 2 | true | 0 | 0 |
Just & the result with the appropriate 32- or 64-bit mask (0xffffffff or 0xffffffffffffffff).
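A short sketch of what that masking looks like in practice (the helper names are mine; the signed variant assumes two's-complement wrap-around, which is what most C platforms do even though signed overflow is technically undefined in C):

```python
MASK32 = 0xffffffff
MASK64 = 0xffffffffffffffff

def u64(x):
    # wrap like C unsigned 64-bit arithmetic
    return x & MASK64

def i32(x):
    # wrap like two's-complement signed 32-bit arithmetic
    x &= MASK32
    return x - 0x100000000 if x >= 0x80000000 else x

print(hex(u64((65536 * 65536 + 1) * (65536 * 65536 + 1))))  # 0x200000001
print(i32(0x7fffffff + 1))                                  # -2147483648
```

Applying the mask after every arithmetic step keeps intermediate values small as well, which matters for long computations.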
| 1 | 18 | 0 |
What's the best way to do integer math in 32- and 64-bit, so that overflow happens like it does in C?
e.g. (65536*65536+1)*(65536*65536+1) should be 0x0000000200000001 in 64-bit math, and not its exact value (non-overflowing) 0x10000000200000001.
|
python 32-bit and 64-bit integer math with intentional overflow
| 1.2 | 0 | 0 | 38,379 |
16,745,487 |
2013-05-25T01:12:00.000
| 1 | 0 | 0 | 1 |
python,runtime-error,celery
| 18,938,559 | 1 | false | 0 | 0 |
This is an error that occurs when a chord header has no tasks in it. Celery tries to access the tasks in the header using self.tasks[0] which results in an index error since there are no tasks in the list.
| 1 | 2 | 1 |
Has anyone seen this error in celery (a distribute task worker in Python) before?
Traceback (most recent call last):
File "/home/mcapp/.virtualenv/lister/local/lib/python2.7/site-packages/celery/task/trace.py", line 228, in trace_task
R = retval = fun(*args, **kwargs)
File "/home/mcapp/.virtualenv/lister/local/lib/python2.7/site-packages/celery/task/trace.py", line 415, in protected_call
return self.run(*args, **kwargs)
File "/home/mcapp/lister/lister/tasks/init.py", line 69, in update_playlist_db
video_update(videos)
File "/home/mcapp/lister/lister/tasks/init.py", line 55, in video_update
chord(tasks)(update_complete.s(update_id=update_id, update_type='db', complete=True))
File "/home/mcapp/.virtualenv/lister/local/lib/python2.7/site-packages/celery/canvas.py", line 464, in call
_chord = self.type
File "/home/mcapp/.virtualenv/lister/local/lib/python2.7/site-packages/celery/canvas.py", line 461, in type
return self._type or self.tasks[0].type.app.tasks['celery.chord']
IndexError: list index out of range
This particular version of celery is 3.0.19, and happens when the celery chord feature is used. We don't think there is any error in our application, as 99% of the time our code works correctly, but under heavier loads this error would happen. We are trying to find out if this is an actual bug in our application or a celery bug, any help would be greatly appreciated.
|
celery.chord gives IndexError: list index out of range error in celery version 3.0.19
| 0.197375 | 0 | 0 | 859 |
16,745,923 |
2013-05-25T02:45:00.000
| 0 | 0 | 1 | 1 |
macos,bash,command-line,terminal,ipython
| 65,625,644 | 7 | false | 0 | 0 |
For me the only thing that helped was:
python -m pip install --upgrade pip
Upgrading pip did the work and all the installations started working properly!
Give it a try.
| 4 | 22 | 0 |
Using Python 2.7 installed via homebrew. I then used pip to install IPython. So, IPython seems to be installed under:
/usr/local/lib/python2.7/site-packages/
I think this is true because there is a IPython directory and ipython egg.
However, when I type ipython in the terminal I get:
-bash: ipython: command not found
I do not understand why this ONLY happens with IPython and not with python? Also, how do I fix this? What path should I add in .bashrc? And how should I add?
Currently, my .bashrc reads:
PATH=$PATH:/usr/local/bin/
Thanks!
|
IPython command not found Terminal OSX. Pip installed
| 0 | 0 | 0 | 34,234 |
16,745,923 |
2013-05-25T02:45:00.000
| 25 | 0 | 1 | 1 |
macos,bash,command-line,terminal,ipython
| 22,583,681 | 7 | true | 0 | 0 |
I had this issue too, the following worked for me and seems like a clean simple solution:
pip uninstall ipython
pip install ipython
I'm running mavericks and latest pip
| 4 | 22 | 0 |
Using Python 2.7 installed via homebrew. I then used pip to install IPython. So, IPython seems to be installed under:
/usr/local/lib/python2.7/site-packages/
I think this is true because there is a IPython directory and ipython egg.
However, when I type ipython in the terminal I get:
-bash: ipython: command not found
I do not understand why this ONLY happens with IPython and not with python? Also, how do I fix this? What path should I add in .bashrc? And how should I add?
Currently, my .bashrc reads:
PATH=$PATH:/usr/local/bin/
Thanks!
|
IPython command not found Terminal OSX. Pip installed
| 1.2 | 0 | 0 | 34,234 |
16,745,923 |
2013-05-25T02:45:00.000
| 0 | 0 | 1 | 1 |
macos,bash,command-line,terminal,ipython
| 59,742,054 | 7 | false | 0 | 0 |
After trying a number of solutions like those above without joy, the ipython command launched once I restarted my terminal. Don't forget to restart your terminal after all the fiddling!
P.S. I think brew install ipython did it ... but I can't be sure.
| 4 | 22 | 0 |
Using Python 2.7 installed via homebrew. I then used pip to install IPython. So, IPython seems to be installed under:
/usr/local/lib/python2.7/site-packages/
I think this is true because there is a IPython directory and ipython egg.
However, when I type ipython in the terminal I get:
-bash: ipython: command not found
I do not understand why this ONLY happens with IPython and not with python? Also, how do I fix this? What path should I add in .bashrc? And how should I add?
Currently, my .bashrc reads:
PATH=$PATH:/usr/local/bin/
Thanks!
|
IPython command not found Terminal OSX. Pip installed
| 0 | 0 | 0 | 34,234 |
16,745,923 |
2013-05-25T02:45:00.000
| 2 | 0 | 1 | 1 |
macos,bash,command-line,terminal,ipython
| 59,297,333 | 7 | false | 0 | 0 |
For me, pip3 install ipython worked.
Perhaps ipython now depends on Python 3.
| 4 | 22 | 0 |
Using Python 2.7 installed via homebrew. I then used pip to install IPython. So, IPython seems to be installed under:
/usr/local/lib/python2.7/site-packages/
I think this is true because there is a IPython directory and ipython egg.
However, when I type ipython in the terminal I get:
-bash: ipython: command not found
I do not understand why this ONLY happens with IPython and not with python? Also, how do I fix this? What path should I add in .bashrc? And how should I add?
Currently, my .bashrc reads:
PATH=$PATH:/usr/local/bin/
Thanks!
|
IPython command not found Terminal OSX. Pip installed
| 0.057081 | 0 | 0 | 34,234 |
16,747,301 |
2013-05-25T07:06:00.000
| 0 | 1 | 0 | 1 |
apache,webserver,cgi,mod-python,mod-php
| 21,819,195 | 1 | false | 0 | 0 |
With a modern version of Apache, unless you configure it in prefork mode, it should run threaded (and not fork). mod_python is threadsafe, and doesn't require that each instance of it is forked into its own space.
| 1 | 4 | 0 |
I am a dummy in web apps. I have a doubt regarding the functioning of the Apache web server. My question is mainly centered on "how Apache handles each incoming request".
Q: When Apache is running in mod_python/mod_php mode, does a "fork" happen for each incoming request?
If it forks in the mod_php/mod_python way, then where is the advantage over CGI mode, except for the fact that the forked process in the mod_php way already contains an interpreter instance?
If it doesn't fork each time, how does it actually handle each incoming request in the mod_php/mod_python way? Does it use threads?
PS: Where does FastCGI stand in the above comparison?
|
Does Apache really "fork" in mod_php/python way for request handling?
| 0 | 0 | 0 | 459 |
16,749,084 |
2013-05-25T11:30:00.000
| 0 | 0 | 0 | 0 |
python,tornado
| 16,749,274 | 1 | false | 0 | 0 |
Is it an option to make your API REST-ful?
An example flow: Have the client POST to a url to create a new resource and GET/HEAD for the state of that resource, that way you don't need to block your client while you do any blocking stuff.
| 1 | 0 | 0 |
There is a strange API I need to work with.
I want to make an HTTP call to the API, and the API will return success, but I need to wait for a request from this API before I respond to the client.
What is the best way to accomplish that?
|
Listen for http request in the body of RequestHandler
| 0 | 0 | 1 | 104 |
16,750,700 |
2013-05-25T14:49:00.000
| 4 | 0 | 1 | 0 |
python,windows,python-2.7
| 16,752,494 | 1 | true | 0 | 0 |
Just install Python 2.7.5 over 2.7.3 and everything will be fine (I did exactly that).
The site-packages folder your libraries are installed into will be left untouched.
Python 2.7.3 also has the same C interface as Python 2.7.5, so you will still be able to use compiled modules.
I remember uninstalling Python and it deleted only the files it brought to my computer. All my programming work was left untouched.
If you really encounter issues that I do not know of, you can simply reinstall the older version.
The great thing about this is that if you choose the advanced option to compile files, the installer will even go through your installed modules and compile them.
| 1 | 6 | 0 |
I have Python 2.7.3 and considering to install new 2.7.5 version, but I can't find information if it's possible to upgrade current (and keep my modules intact) or is it advised to install as separate and reinstall all my modules one by one (which I don't want to do)?
|
Install new version over existing on Windows (upgrade)
| 1.2 | 0 | 0 | 1,055 |
16,751,639 |
2013-05-25T16:34:00.000
| 3 | 1 | 1 | 0 |
python,c,ruby
| 16,751,766 | 2 | true | 0 | 0 |
Yes, but the API would be unlikely to be very nice, especially because the point of an ORM is to return objects, and C doesn't have objects, which makes access to the nice OOP API unwieldy.
Even in C++ it would be problematic, as the objects would be Python/Ruby objects and the values Python/Ruby objects/values, and you would need to convert back and forth.
You would be better off using a nice database layer made especially for C.
| 1 | 1 | 0 |
I have heard many times that C and Python/Ruby code can be integrated.
Now, my question is, can I use, for example a Python/Ruby ORM from within C?
|
Can I use a Python/Ruby ORM inside C?
| 1.2 | 0 | 0 | 70 |
16,751,995 |
2013-05-25T17:13:00.000
| 0 | 0 | 0 | 0 |
python,algorithm
| 16,752,052 | 2 | false | 0 | 0 |
Remove the n-1 edges with the highest weights.
| 2 | 2 | 1 |
Suppose I have a graph of N nodes, and each pair of nodes have a weight associated with them. I would like to split this graph into n smaller graphs to reduce the overall weight.
|
Split a weighted graph into n graphs to minimize the sum of weights in each graph
| 0 | 0 | 0 | 161 |
16,751,995 |
2013-05-25T17:13:00.000
| 1 | 0 | 0 | 0 |
python,algorithm
| 16,752,049 | 2 | false | 0 | 0 |
What you are searching for is called weighted max-cut.
| 2 | 2 | 1 |
Suppose I have a graph of N nodes, and each pair of nodes have a weight associated with them. I would like to split this graph into n smaller graphs to reduce the overall weight.
|
Split a weighted graph into n graphs to minimize the sum of weights in each graph
| 0.099668 | 0 | 0 | 161 |
16,754,483 |
2013-05-25T22:30:00.000
| 1 | 0 | 0 | 1 |
java,python,node.js,paas,iaas
| 16,754,484 | 1 | false | 0 | 0 |
Redhat Openshift - 3 container instances ("gears") that can each run multiple items. Max 40,000 files, 1GB of storage, 512MB Memory, 250 threads per small gear. Appears to be a hybrid of PaaS & IaaS.
Amazon EC2 - Single Linux micro instance. 64-bit, 640MB server. 30GB block storage, 5GB "standard" storage, 100MB NoSQL storage. Strictly IaaS.
Amazon Beanstalk - PaaS that is billed based on the underlying EC2 usage consumed. Free tier has the same resources the free EC2 tier has.
Google App Engine - No backend instance provided for free, only frontend instances that run only for the duration of a web request.
| 1 | 1 | 0 |
I realize there is quite a difference between IaaS and PaaS, but there is some overlap. I'm particularly interested in getting the most number of "backend" server instances at the free tier (or for cheap). In particular for testing the scalability of an app I'm writing.
|
Which platform as a service/infrastructure as a server provider gives the most backend resources for their free tier?
| 0.197375 | 0 | 0 | 184 |
16,757,349 |
2013-05-26T08:07:00.000
| 13 | 0 | 1 | 1 |
python
| 16,758,687 | 2 | false | 0 | 0 |
You're probably using the #!python hashbang convention that's inexplicably popular among Windows users. Linux expects a full path there. Use either #!/usr/bin/python or (preferably) #!/usr/bin/env python instead.
| 1 | 21 | 0 |
I originally coded in python IDE on windows. Now when I pasted my code in a file on Linux server. Now when I run the script, it gives me this error:
bad interpreter: No such file or directory
Please tell how to resolve this error.
|
: bad interpreter: No such file or directory in python
| 1 | 0 | 0 | 70,614 |
16,759,244 |
2013-05-26T12:29:00.000
| 4 | 0 | 1 | 0 |
python,coding-style,pep8
| 16,759,266 | 2 | true | 0 | 0 |
You have misread it; it says:
"local application/library specific imports"
where "local" applies to both application and library imports of your own (e.g. your own libraries/classes).
| 1 | 3 | 0 |
According to PEP 8 guidelines, "library specific imports" should come last at the import list. But what does this exactly mean?
|
PEP 8: library specific imports?
| 1.2 | 0 | 0 | 812 |
16,759,430 |
2013-05-26T12:54:00.000
| 0 | 0 | 0 | 0 |
python,django
| 16,835,207 | 3 | false | 1 | 0 |
Try restarting the development server.
| 1 | 7 | 0 |
I am just a beginner in Django. I use PyDev Eclipse on Windows 8. First I wrote a "Hello World" program and displayed the string in the browser, but when I changed the code, the change did not appear in the output. Whatever I change, nothing changes in the output. But when I close Eclipse, shut down the computer and restart, then change the program and run it, the code output changes. But now, to change my program further, I again need to restart my computer. What is happening?
|
Unable to write code in pydev for django project
| 0 | 0 | 0 | 136 |
16,760,786 |
2013-05-26T15:34:00.000
| 0 | 1 | 1 | 0 |
python,unit-testing
| 53,884,422 | 2 | false | 0 | 0 |
I have no suggestion how to avoid running "dependent" tests, but I have a suggestion how you might better live with them: Make the dependencies more apparent and therefore make it easier to analyse test failures later. One simple possibility is the following:
In the test-code, you put the tests for the lower-level aspects at the top of the file, and the more dependent tests further to the bottom. Then, when several tests fail, first look at the test that is closest to the top of the file.
| 1 | 5 | 0 |
When writing unit tests, it often happens that some tests sort of "depend" on other tests.
For example, lets suppose I have a test that checks I can instantiate a class. I have other tests that go right ahead and instantiate it and then test other functionality.
Lets also suppose that the class fails to instantiate, for whatever reason.
This results in a ton of tests giving errors. This is bad, because I can't see where the problem really is. What I need is a way of skipping these tests if my instantiation test has failed.
Is there a way of doing this with Python's unittest module?
If this isn't what I should do, what should I do so as to see where the problem really is when something breaks?
|
Ignore unittests that depend on success of other tests
| 0 | 0 | 0 | 500 |
16,761,134 |
2013-05-26T16:13:00.000
| 1 | 0 | 0 | 0 |
python,mongodb,redis,geospatial
| 16,886,089 | 2 | true | 0 | 0 |
Noelkd was right. There is no inbuilt function in Redis.
I found that the simplest solution is to use geohash to store the hashed lat/lng as keys.
Geohash encodes nearby locations to strings that share a common prefix, e.g.:
if the hash of a certain location is ebc8ycq, then nearby locations can be queried with the wildcard ebc8yc* in Redis.
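For completeness, a dependency-free sketch of the encoding itself (this is the standard geohash algorithm; shorter hashes are prefixes of longer ones, which is what makes prefix queries work):

```python
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"  # geohash alphabet (no a, i, l, o)

def geohash_encode(lat, lon, precision=7):
    """Interleave longitude/latitude halving bits and emit base-32 chars."""
    lat_range, lon_range = [-90.0, 90.0], [-180.0, 180.0]
    chars, ch, bit_count, even = [], 0, 0, True  # even-numbered bits refine longitude
    while len(chars) < precision:
        rng, val = (lon_range, lon) if even else (lat_range, lat)
        mid = (rng[0] + rng[1]) / 2.0
        ch <<= 1
        if val >= mid:
            ch |= 1
            rng[0] = mid   # keep the upper half of the interval
        else:
            rng[1] = mid   # keep the lower half
        even = not even
        bit_count += 1
        if bit_count == 5:
            chars.append(BASE32[ch])
            ch = bit_count = 0
    return "".join(chars)

print(geohash_encode(57.64911, 10.40744, 11))  # u4pruydqqvj
```

You could then store each point under a key such as geo:&lt;hash&gt;:&lt;id&gt; and query neighbours with KEYS (or, in Redis 2.8+, SCAN ... MATCH) on a hash prefix; note that later Redis versions also ship native geo commands (GEOADD/GEORADIUS).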
| 1 | 2 | 0 |
In MongoDB if we provide a coordinate and a distance, using $near operator will find us the documents nearby within the provided distance, and sorted by distance to the given point.
Does Redis provide similar functions?
|
How to find geographically near documents in Redis, like $near in MongoDB?
| 1.2 | 1 | 0 | 798 |
16,761,726 |
2013-05-26T17:15:00.000
| 2 | 0 | 0 | 0 |
python,string,tkinter,label
| 16,761,818 | 2 | false | 0 | 1 |
The option you are looking for is wraplength, which sets the width at which a label's text is wrapped onto multiple lines. However, this parameter is given in screen units (e.g. pixels), while width is in text units when the widget displays text, so you can't reuse the value 30 directly.
| 1 | 2 | 0 |
Is it possible, in Python + Tkinter, to set a maximum number of characters per line in a label? I have a program that opens a Toplevel window with some information derived from other information the user gave earlier. It is unresizable, which is a problem because every now and then the window gets too small for the information, which is shown in labels, so I was wondering if I could set it to add a line break every 30 characters, for example.
I looked through some label documentation, but the only thing I found was the possibility to change the label's width, which is not what I need, since that basically hides every character after the 30th.
|
Label break line if string is too big
| 0.197375 | 0 | 0 | 1,975 |
16,762,032 |
2013-05-26T17:50:00.000
| 0 | 0 | 1 | 0 |
python,django,windows,virtualenv,shared-directory
| 16,762,558 | 2 | true | 0 | 0 |
Just ssh into the vm.
Honestly I think the best solution would be to just fullscreen the vm and do it all in there, but that's just me.
| 2 | 2 | 0 |
I want to develop a Python application on Windows 7, by using a Linux VM. I would like to make use of the Python interpreter that's inside my VM (virtualenv).
Unfortunately, PyCharm is the only editor that supports the use of a remote interpreter. Is it possible to make use of my virtualenv when using Komodo IDE for instance, without installing local (Windows) libraries?
I have tried VirtualBox shared folders, VMWare shared folders and ExpanDrive, but they all seem a little unstable for this purpose (random operation not permitted errors when creating virtualenv in a shared folder).
Thanks in advance
EDIT: To be specific, I need the site-packages from the virtualenv. When I pip install an app like Django, I would like my IDE to auto-complete imports etc.
|
Use (Linux) virtualenv from Windows
| 1.2 | 0 | 0 | 1,614 |
16,762,032 |
2013-05-26T17:50:00.000
| 1 | 0 | 1 | 0 |
python,django,windows,virtualenv,shared-directory
| 16,763,547 | 2 | false | 0 | 0 |
Virtualenv on Linux uses bash scripts. These won't work on Windows. The Windows version of virtualenv uses either batch files or PowerShell scripts. Those won't work on Linux. One solution that may work would be to set up the same virtualenv on both Linux and Windows. That is, you have to install all packages twice: once on Linux and once on Windows. Putting your own code on a shared drive should work, unless there are some problems I have not anticipated. ;)
| 2 | 2 | 0 |
I want to develop a Python application on Windows 7, by using a Linux VM. I would like to make use of the Python interpreter that's inside my VM (virtualenv).
Unfortunately, PyCharm is the only editor that supports the use of a remote interpreter. Is it possible to make use of my virtualenv when using Komodo IDE for instance, without installing local (Windows) libraries?
I have tried VirtualBox shared folders, VMWare shared folders and ExpanDrive, but they all seem a little unstable for this purpose (random operation not permitted errors when creating virtualenv in a shared folder).
Thanks in advance
EDIT: To be specific, I need the site-packages from the virtualenv. When I pip install an app like Django, I would like my IDE to auto-complete imports etc.
|
Use (Linux) virtualenv from Windows
| 0.099668 | 0 | 0 | 1,614 |
16,765,257 |
2013-05-27T01:23:00.000
| 1 | 0 | 0 | 0 |
python,web-scraping,urllib2,lxml,elementtree
| 24,978,950 | 2 | false | 0 | 0 |
I also had the problem that lxml's iterparse() would occasionally throw an AttValue: ' expected in a very unpredictable pattern. I knew that the XML I'm sending in is valid and rerunning the same script would often make it work (or fail at an entirely different point).
In the end, I managed to create a test case that I could rerun and it would immediately either complete or raise an AttValue error in a seemingly random outcome. Here's what I did wrong:
My input to iterparse() was a file-like object I wrote myself (I'm processing an HTTP response stream from requests, but it has to be ungzipped first). When writing the read() method, I cheated and ignored the size argument. Instead, I would just unzip a chunk of compressed bytes of a fixed size and return whatever byte sequence this decompressed to—often much more than the 32k lxml requests!
I suspect that this caused buffer overflows somewhere inside lxml, which led to the above issues. As soon as I stopped returning more bytes than lxml requested, these random errors would go away.
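A corrected sketch of such a wrapper that honours the size argument (class and attribute names are mine; it assumes gzip-framed input and buffers any surplus decompressed bytes between calls):

```python
import zlib

class GzipChunkReader(object):
    """File-like wrapper: decompresses chunks lazily but never returns
    more bytes than the caller asked for."""

    def __init__(self, chunks):
        self._chunks = iter(chunks)                             # compressed byte chunks
        self._decomp = zlib.decompressobj(16 + zlib.MAX_WBITS)  # 16+ selects gzip framing
        self._buf = b""
        self._eof = False

    def read(self, size=-1):
        # fill the buffer until the request can be satisfied or the stream ends
        while not self._eof and (size < 0 or len(self._buf) < size):
            chunk = next(self._chunks, None)
            if chunk is None:
                self._buf += self._decomp.flush()
                self._eof = True
            else:
                self._buf += self._decomp.decompress(chunk)
        if size < 0:
            out, self._buf = self._buf, b""
        else:
            # hand back exactly `size` bytes; keep the surplus for next time
            out, self._buf = self._buf[:size], self._buf[size:]
        return out
```

The key difference from the buggy version described above is the last two branches: surplus decompressed data stays in self._buf instead of being returned to the caller.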
| 1 | 2 | 0 |
I frequently use lxml module in Python to scrape data from some web sites, and I'm comfortable with the module generally. However, when I try to scrape, at times I encounter lxml.etree.XMLSyntaxError: AttValue: " or ' expected error on etree.fromstring() call, but don't usually. I can't clarify how often I see that error, but I think one out of thousands or even tens of thousands times, I encounter the error. When I run the exactly same script immediately after the error occurred and the script stopped, I don't see the error and the script runs well as expected. Why does it spit out an ocasional error? Is there any way to deal with the issue? I have the similar problem when I instantiate urllib2.urlopen() function, but since I haven't seen the error from urllib2 recently, I can't write the exact error message coming from it right now.
Thanks.
|
Why does lxml spit out an error at times (but not usual) in Python?
| 0.099668 | 0 | 1 | 1,316 |
16,766,587 |
2013-05-27T05:05:00.000
| 0 | 0 | 1 | 0 |
python,list,printing,lines
| 16,766,609 | 3 | false | 0 | 0 |
sed -n 200,300p, perhaps, for 200 to 300 inclusive; adjust the numbers by ±1 if exclusive or whatever?
| 2 | 0 | 1 |
Hi I am trying to read a csv file into a double list which is not the problem atm.
What I am trying to do is just print all the sL values between two indices, i.e. I want to print sL[200] to sL[300], but I don't want to manually type a print statement for every value between those two numbers. Is there code that can be written to print all values in that range, equivalent to typing sL out individually all the way from 200 to 300?
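If sL is the list built from the csv file, a slice covers the whole range without naming each index (a sketch; the sample sL here is a stand-in, and it assumes indices 200 through 300 inclusive are wanted):

```python
sL = list(range(400))        # stand-in for the list read from the csv file

for value in sL[200:301]:    # Python slices exclude the end index, hence 301
    print(value)
```

The slice sL[200:301] contains 101 elements: sL[200] through sL[300].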
|
Reading certain lines of a string
| 0 | 0 | 0 | 56 |
16,766,587 |
2013-05-27T05:05:00.000
| 0 | 0 | 1 | 0 |
python,list,printing,lines
| 16,766,726 | 3 | false | 0 | 0 |
If it is a specific column whose values range between 200 and 300, use the filter() function:
new_array = filter(lambda x: 200 <= x['column'] <= 300, sl)
| 2 | 0 | 1 |
Hi I am trying to read a csv file into a double list which is not the problem atm.
What I am trying to do is just print all the sL values between two lines. I.e i want to print sL [200] to sl [300] but i dont want to manually have to type print sL for all values between these two numbers is there a code that can be written to print all values between these two lines that would be the same as typing sL out individually all the way from 200 to 300
|
Reading certain lines of a string
| 0 | 0 | 0 | 56 |
16,771,391 |
2013-05-27T10:42:00.000
| 4 | 0 | 0 | 0 |
python,django,concurrency
| 16,771,514 | 2 | true | 1 | 0 |
It's webserver-specific. If you configure it to run with multiple processes, each request will be handled in a separate process. If you configure it to use threads, requests will be handled in threads.
Yes. Imagine the case where user1 is viewing/editing an object A (retrieved from the DB), user2 deletes that object, and then user1 tries to save it. You need to handle such cases explicitly in your code.
Most likely the issues will be related to the DB, so you can use transactions to help in some cases.
In other cases you can define a strategy. E.g. in the case mentioned above, when user1 tries to save the object and it is no longer in the DB, you can just create a new one.
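One concrete way to implement such a strategy is optimistic locking with a version field. A minimal plain-Python sketch of the idea (a dict stands in for the database table; in Django you would express this with transactions and a version column, not this exact code):

```python
class StaleWriteError(Exception):
    """Raised when another user changed or deleted the row since we read it."""

db = {"A": {"value": 1, "version": 1}}  # stand-in for a DB table

def read(key):
    # Return a snapshot of the row, including the version we saw.
    return dict(db[key])

def save(key, snapshot, new_value):
    current = db.get(key)
    if current is None or current["version"] != snapshot["version"]:
        raise StaleWriteError(key)  # someone deleted or updated it first
    db[key] = {"value": new_value, "version": snapshot["version"] + 1}

user1 = read("A")
user2 = read("A")
save("A", user2, 99)        # user2 wins the race
try:
    save("A", user1, 42)    # user1's snapshot is now stale
except StaleWriteError:
    print("conflict detected, re-read and retry")
```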
| 1 | 3 | 0 |
I'm developing a website with Django 1.5.1 and I have two doubts regarding concurrency. Right now I'm running on the development server.
When multiple users access the website at the same time, by default, does Django run each request in a different execution thread? Or must it be configured in the webserver e.g. Apache?
Will I experience issues if more than a user is modifying the same object concurrently? If so, how do you solve this problem? Using locks?
Thanks for your help!
|
Concurrency doubts in Django
| 1.2 | 0 | 0 | 1,239 |
16,771,409 |
2013-05-27T10:43:00.000
| 4 | 0 | 1 | 0 |
python
| 16,771,595 | 4 | false | 0 | 0 |
As wim points out, 3.2.5 is not a current production version, but I assume you're wondering why there were three versions released on 15 May 2013? That is, why is the 3.2.x branch still being maintained?
Remember that each 3.n step introduces new features while 3.n.x releases are fixes to existing versions. 3.2.5 is thus a set of bugfixes to 3.2.4 while the 3.3.x branch includes new features not present in 3.2.4. Because new features are, inherently, more likely to introduce new bugs, the maintenance of the older branch allows you a higher stability choice if, for example, you're just putting together a new public release of your webserver and don't want to risk new bugs being introduced by the current branch.
| 1 | 10 | 0 |
At present (May 2013), there are three release versions, all released on May 15:
python 3.3.2
python 3.2.5
python 2.7.5
I can understand the need for the 2.x and 3.x branches, but why are there separate 3.3.x and 3.2.x versions?
|
Why are there multiple release versions of python
| 0.197375 | 0 | 0 | 1,777 |
16,773,454 |
2013-05-27T12:41:00.000
| 1 | 0 | 0 | 1 |
python,fabric
| 16,776,049 | 1 | true | 0 | 0 |
I ended up overriding the @task decorator like this:
from functools import wraps
from fabric.api import settings, task as real_task

def task(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        with settings(shell='/path/to/my/shell'):
            return func(*args, **kwargs)
    return real_task(wrapper)
I can't use alias and other kwargs in this form, but it suits me.
| 1 | 2 | 0 |
I use a fabric with namespaces to separate commands for dev and production servers
the structure is
fabfile/
    __init__.py
    dev.py
    prod.py
dev.py and prod.py both define a different env.shell, and one of them overrides the other.
Is there a way to use per-file env configuration for fabric?
|
Using fabric with namespaces, is there a way to specify per-file env.shell
| 1.2 | 0 | 0 | 181 |
16,773,961 |
2013-05-27T13:08:00.000
| 1 | 0 | 0 | 1 |
python,django,google-app-engine
| 16,775,062 | 1 | true | 1 | 0 |
Your total amount of data is very small and looks like a dict. Why not save it (this object) as a single entry in the datastore or the blobstore? Then you can cache that one entry.
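A sketch of that idea in plain Python (a dict stands in for memcache here; on App Engine you would use the memcache API and a single datastore entity instead):

```python
import pickle

_cache = {}  # stand-in for memcache

def store_table(table):
    # Serialize the whole ~15K-entry mapping as one blob:
    # one cache/datastore entry instead of 15K individual rows.
    _cache["X"] = pickle.dumps(table)

def fetch_entries(keys):
    # One read to load the blob, then cheap in-memory lookups.
    table = pickle.loads(_cache["X"])
    return {k: table[k] for k in keys if k in table}

table = {"key%d" % i: i for i in range(15000)}
store_table(table)
result = fetch_entries(["key5", "key42"])
```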
| 1 | 1 | 0 |
I am running my Django site on appengine. In the datastore, there is an entity kind / table X which is only updated once every 24 hours.
X has around 15K entries and each entry is of form ("unique string of length <20", integer).
In some context, a user request involves fetching an average of 200 entries from X, which is quite costly if done individually.
What is an efficient way I can adopt in this situation?
Here are some ways I thought about, but have some doubts in them due to inexperience
Using the Batch query supported by db.get() where a list of keys may be passed as argument and the get() will try to fetch them all in one walk. This will reduce the time quite significantly, but still there will be noticeable overhead and cost. Also, I am using Django models and have no idea about how to relate these two.
Manually copying the whole database into memory (like storing it in a map) after each update job, which occurs every 24 hours. This will work really well and also save me lots of datastore reads, but I have other doubts. Will it remain persistent across instances? What other factors do I need to be aware of that might interfere? This, or something like it, seems perfect for my situation.
The above are just what I could come up with in first thought. There must be ways I am unaware/missing.
Thanks.
|
A way to optimize reading from a datastore which updates once a day
| 1.2 | 1 | 0 | 53 |
16,775,249 |
2013-05-27T14:23:00.000
| 2 | 0 | 1 | 0 |
python,multithreading,parallel-processing,multiprocessing
| 16,869,458 | 1 | true | 0 | 0 |
The GIL is a bit of a nuisance sometimes...
A lot of it is going to revolve around how you can use the GPU. Does the API you're using allow you to set it running and then go off and do something else, occasionally polling to see if the GPU has finished? Or maybe it can raise an event, call a callback or something like that?
I'm sensing from your question that the answer is no... In which case I suspect your only choice (given that you're using Python) is multiprocessing. If the answer is yes, then you can start off the GPU, get on with some preprocessing and plotting in the meantime, and then check to see whether the GPU has finished.
I don't know much about Python or how it does multiprocessing, but I suspect that it involves serialisation and copying of the data sent between processes. If the quantity of data you're processing is large (I'd suggest getting worried at the hundreds-of-megabytes mark, though that's just a hunch) then you may wish to consider how much time is lost serialising and copying that data. If you don't like the answers to that analysis then you're probably out of luck as far as using Python is concerned.
You say that the most time-consuming part is the GPU processing? Presumably the other two parts are reasonably lengthy, otherwise there would be little point trying to parallelise them. For example, if the GPU was 95% of the runtime, then saving 5% by parallelising the rest hardly seems worth it.
| 1 | 5 | 0 |
I have an application that has 3 main functionalities which are running sequentially at the moment:
1) Loading data to memory and perform preprocesssing on it.
2) Perform some computations on the data using GPU with theano.
3) Monitor the state of the computations on GPU and print them to the screen.
These 3 functionalities are embarrassingly parallelizable by using multi-threading. But in python I perform all these three functionalities sequentially. Partly because in the past I had some bad luck with Python multi-threading and GIL issues.
Here, I don't necessarily need to utilize the full capabilities of the multiple CPUs at hand. All I want is to load and preprocess the data while the computations on the GPU are performed, and to monitor the state of those computations at the same time. Currently the most time-consuming computations are performed in 2), so I'm time-bounded by the operations in 2). Now my questions are:
*Can Python parallelize these 3 operations without creating new bottlenecks, e.g. due to GIL issues?
*Should I use multiprocessing instead of multithreading?
In a nutshell, how should I parallelize these three operations in Python, if I should at all?
It has been some time since I last wrote multi-threaded CPU code (especially in Python), so any guidance will be appreciated.
Edit: Typos.
|
CPU and GPU operations parallelization
| 1.2 | 0 | 0 | 2,938 |
16,778,678 |
2013-05-27T18:29:00.000
| 1 | 0 | 1 | 0 |
python,class,global
| 16,778,997 | 2 | true | 0 | 0 |
Class members are like global variables for the class. Global variables are like class members for the module.
It's really that simple.
P.S. Just in case you are a Java programmer: yes, you can have more than one public class in your module; yes, you should use this when appropriate.
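A tiny sketch of that symmetry (hypothetical names, just to show that both are plain attribute lookups on a namespace object):

```python
# Module level: a module-level name is a "global" for this module.
TIMEOUT = 30  # module-level ("global") variable

class HttpClient:
    retries = 3  # class-level variable: "global" within the class

    def describe(self):
        # Both are resolved the same way: a name looked up in a namespace.
        return (TIMEOUT, HttpClient.retries)

print(HttpClient().describe())  # (30, 3)
```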
| 2 | 3 | 0 |
When defining a Python class, class members are like global variables for the class. Then I do not understand why global variables exist at all. Shouldn't we always use class members instead?
|
Python class member or global variable
| 1.2 | 0 | 0 | 242 |
16,778,678 |
2013-05-27T18:29:00.000
| 0 | 0 | 1 | 0 |
python,class,global
| 16,779,026 | 2 | false | 0 | 0 |
And you can program without objects at all :)
| 2 | 3 | 0 |
When defining a Python class, class members are like global variables for the class. Then I do not understand why global variables exist at all. Shouldn't we always use class members instead?
|
Python class member or global variable
| 0 | 0 | 0 | 242 |
16,779,799 |
2013-05-27T20:00:00.000
| 1 | 0 | 1 | 0 |
python,euler-math-toolbox
| 16,779,943 | 1 | false | 0 | 0 |
To solve this, it would suffice to clear all variables and imports. I could live with not calling Py_Finalize. But how?
Provided you properly release all references after each call, this should work fine. Just make sure to only call Py_Initialize a single time, and never call Py_Finalize. Run each "session" using a separate dictionary, and always decrement the reference counts properly when you're done with them (which will release those variables after running your code).
On a side note - this is a common issue. Many other packages, such as numpy, or any package written using Boost::Python will exhibit the same behavior if you use Py_Finalize.
| 1 | 3 | 0 |
This is a known problem, but I want to ask the experts for the best way to solve it for me.
I have a project (Euler Math Toolbox), which runs Python as a script language. For this, a library module "python.dll" is loaded at run time, which is linked against "python27.lib". Then Py_Initialize is called. This all works well.
But Euler can be restarted by the user with a new session and notebook. Then I want Python to clear all variables and imports. For this, I call Py_Finalize and unload "python.dll". When Python is needed, loading and initializing starts Python again.
This works. But Python crashes at the first call if MatPlotlib was imported in the previous session. It seems that Py_Finalize does not completely clear Python, nor does unloading my "python.dll". I tried unloading "python27.dll" (the Python DLL), but this does not help. Most likely another DLL remains active but is corrupted during Py_Finalize.
To solve this, it would suffice to clear all variables and imports. I could live with not calling Py_Finalize. But how?
PS: You may wonder, why I do not directly link euler.exe to Python. The reason is that this prevents Euler form starting, if Python is not installed, even if it is never needed.
Thanks for any answers! You duplicate your answer to renegrothmann at gmail, if you like. That would help me.
|
Py_Initialize and Py_finalize and MatPlotlib
| 0.197375 | 0 | 0 | 511 |
16,780,444 |
2013-05-27T20:57:00.000
| 0 | 0 | 1 | 0 |
python,macos,python-3.x,backslash
| 32,713,177 | 2 | false | 0 | 0 |
I know it has been over a year since this question was asked. I was in your position and kind of ignored the problem, but I finally reached a solution that fixes it. It has to do with the system locale that certain programs (including cmd.exe) use for non-Unicode text. I was working with Java, and this problem was impeding progress in what I was doing.
This solution works in Windows 10, the method most likely exists for older versions such as 7 or 8 or 8.1
1) Go to language settings in Control Panel
2) Go to the 'Change date, time, or number formats' link on the left.
3) Go to the administrative tab
4) In the section for 'Language for non-Unicode programs,' it was set to Japanese for me. Since I live in Canada, I appropriately selected English (Canada)
5) Reboot your computer
6) To test this: open cmd.exe and look at the directory present. It should no longer be yen symbols, now backslashes.
| 2 | 1 | 0 |
I keep on getting a yen symbol whenever I press the backslash key. Is there a way to change this? I'm using python by the way.
|
How to get the backslash symbol to show up?
| 0 | 0 | 0 | 3,428 |
16,780,444 |
2013-05-27T20:57:00.000
| 1 | 0 | 1 | 0 |
python,macos,python-3.x,backslash
| 31,537,019 | 2 | false | 0 | 0 |
I had this same problem, but I changed the font in Python IDLE away from the default Meiryo and that fixed it. In IDLE go to Options > Configure IDLE, then in the 'Fonts/Tabs' tab change the font face to something like Arial. This worked for me.
| 2 | 1 | 0 |
I keep on getting a yen symbol whenever I press the backslash key. Is there a way to change this? I'm using python by the way.
|
How to get the backslash symbol to show up?
| 0.099668 | 0 | 0 | 3,428 |