Dataset columns (name: dtype, observed range):
Q_Id: int64, 337 to 49.3M
CreationDate: string, length 23
Users Score: int64, -42 to 1.15k
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
Tags: string, length 6 to 105
A_Id: int64, 518 to 72.5M
AnswerCount: int64, 1 to 64
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: string, length 6 to 11.6k
Available Count: int64, 1 to 31
Q_Score: int64, 0 to 6.79k
Data Science and Machine Learning: int64, 0 to 1
Question: string, length 15 to 29k
Title: string, length 11 to 150
Score: float64, -1 to 1.2
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 8 to 6.81M
11,727,011 |
2012-07-30T18:02:00.000
| 0 | 0 | 0 | 1 |
python,linux,apt,deb,dpkg
| 11,727,678 | 4 | false | 0 | 0 |
I have little familiarity with Python modules for debs, but I wanted to point out that calling subprocesses isn't the bad thing on *ix that it is on Windows. Windows almost seems intended to break calling things as a subprocess and parsing the output, but *ix usually makes it quite viable.
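A portable sketch of the "call a subprocess and parse its output" pattern this answer endorses on *ix. With dpkg you would run something like ["dpkg-deb", "--info", "some.deb"]; a tiny Python child process stands in here so the example runs anywhere:

```python
import subprocess
import sys

# Run a child process and capture its output; with a real .deb you would
# substitute ["dpkg-deb", "--info", "some.deb"] for the command below.
out = subprocess.run(
    [sys.executable, "-c", "print('Package: example')"],
    capture_output=True, text=True, check=True,
).stdout

# Parse "Key: value" lines into a dict, the way you would parse dpkg output.
fields = dict(line.split(": ", 1) for line in out.splitlines() if ": " in line)
print(fields["Package"])
```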
| 2 | 9 | 0 |
I'm trying to do some package manipulation (a la dpkg) and while I can just popen or subprocess.call I'd rather do things the python way if possible.
Unfortunately I've been unable to find a python module to do the trick.
I've seen reference to python-deb but it appears to be defunct. python-apt might seem like a potential solution, but AFAICT it cannot handle individual .deb files.
Anyone know of a good dpkg python solution?
|
Dpkg Python module?
| 0 | 0 | 0 | 6,866 |
11,727,011 |
2012-07-30T18:02:00.000
| 0 | 0 | 0 | 1 |
python,linux,apt,deb,dpkg
| 11,728,384 | 4 | false | 0 | 0 |
Apparently GDebi is Python based. If gdebi is installed, you have access to its functionality via the GDebi module.
I can't seem to find any documentation, so I'm not sure it's meant to be a public API, but it might do the trick.
| 2 | 9 | 0 |
I'm trying to do some package manipulation (a la dpkg) and while I can just popen or subprocess.call I'd rather do things the python way if possible.
Unfortunately I've been unable to find a python module to do the trick.
I've seen reference to python-deb but it appears to be defunct. python-apt might seem like a potential solution, but AFAICT it cannot handle individual .deb files.
Anyone know of a good dpkg python solution?
|
Dpkg Python module?
| 0 | 0 | 0 | 6,866 |
11,729,368 |
2012-07-30T20:51:00.000
| 2 | 0 | 0 | 0 |
c++,python,build,sublimetext2
| 14,229,213 | 4 | false | 0 | 1 |
Windows (install MinGW and Python 2.7, and add both to the system PATH):
C++:
build: Ctrl+B
run: Ctrl+Shift+B
Python:
build and run: Ctrl+B
You may want to study the .sublime-build files under Tools -> Build System -> New Build System.
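As a starting point, a C++ build file created via Tools -> Build System -> New Build System might look like the sketch below. The g++ flags and the "Run" variant (which Sublime Text 2 binds to Ctrl+Shift+B) are illustrative assumptions; on Windows, MinGW's bin directory must already be on the PATH:

```json
{
    "cmd": ["g++", "${file}", "-o", "${file_path}/${file_base_name}"],
    "file_regex": "^(..[^:]*):([0-9]+):?([0-9]+)?:? (.*)$",
    "working_dir": "${file_path}",
    "selector": "source.c++",
    "variants": [
        {
            "name": "Run",
            "cmd": ["${file_path}/${file_base_name}"]
        }
    ]
}
```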
| 1 | 25 | 0 |
I'm just beginning to learn programming (in C++ and Python), and by beginning I mean total beginning ("hello world" beginning...). Not wanting to use multiple IDEs, I would like to be able to code and build simple programs with my text editor, Sublime Text 2. Could someone show me, with a step-by-step tutorial, how to add C++ and Python compiling and executing capabilities to Sublime Text?
I've searched Sublime Text build systems on the site, but the answers are very specific and can't help a rookie like me (but they'll probably help me later).
Thanks
|
Build systems in Sublime Text
| 0.099668 | 0 | 0 | 79,860 |
11,729,562 |
2012-07-30T21:04:00.000
| 18 | 0 | 1 | 1 |
python,windows,batch-file,cmd
| 11,729,668 | 4 | true | 0 | 0 |
Try executing cmd.exe /c YourCmdFile < nul
YourCmdFile is the full path to your batch script; with stdin redirected from nul, the pause gets end-of-file instead of waiting for a key.
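The same idea from Python, since the question runs the script via subprocess.Popen: hand the child an empty stdin so a blocking read sees EOF instead of hanging. On Windows the argument list would be ["cmd.exe", "/c", r"C:\path\to\your.bat"]; a small Python child that tries to read input stands in here so the sketch is runnable anywhere:

```python
import subprocess
import sys

# The child blocks on input() the way 'pause' blocks on a keypress;
# stdin=DEVNULL makes the read fail immediately with EOF.
child = [sys.executable, "-c",
         "try:\n input()\nexcept EOFError:\n print('no input, continuing')"]
out = subprocess.run(child, stdin=subprocess.DEVNULL,
                     capture_output=True, text=True).stdout.strip()
print(out)
```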
| 1 | 9 | 0 |
In windows, I am running a bat script that currently ends with a 'pause' and prompts for the user to 'Press any key to continue...'
I am unable to edit the file in this scenario and I need the script to terminate instead of hang waiting for input that will never come. Is there a way I can run this that will disable or circumvent the prompt?
I have tried piping in input and it does not seem to help. This script is being run from python via subprocess.Popen.
|
Disable 'pause' in windows bat script
| 1.2 | 0 | 0 | 6,479 |
11,729,931 |
2012-07-30T21:30:00.000
| 0 | 0 | 1 | 0 |
python,mongodb,pymongo
| 11,730,211 | 1 | true | 0 | 0 |
MongoDB stores BSON, not JSON, so I do not believe there is a direct conversion; what you see in the console is actually a conversion to JSON, the same way you do it in Python. Converting to JSON afterwards is your best bet. There have been a few discussions about including a __toJSON() function within the drivers, but the talks normally end with the line:
"This is better done on the client side".
| 1 | 2 | 0 |
I am wondering if there is a way to get a JSON string directly from the MongoDB with PyMongo or something else. With PyMongo 'db.collection.find' returns a dictionary first and then I have to convert it to JSON with Python's JSON module.
|
Is it possible to get JSON string directly from MongoDB?
| 1.2 | 0 | 0 | 686 |
11,730,723 |
2012-07-30T22:36:00.000
| 0 | 0 | 1 | 0 |
python
| 11,730,883 | 3 | false | 0 | 0 |
You should probably calculate the total dynamically. Also, you need some way to store each individual player's money; right now there is no way of knowing the distribution of money, since you only have one total.
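A minimal sketch of both points the question asks about: a value table for the deck (with the ace demoted from 11 to 1 when the hand busts) and a three-round loop that tallies wins and losses. All names and the dealt hands are illustrative, not taken from the question's code:

```python
# Card values: 2-10 at face value, J/Q/K at 10, A at 11 (demoted below).
values = {str(n): n for n in range(2, 11)}
values.update({"J": 10, "Q": 10, "K": 10, "A": 11})

def hand_value(cards):
    total = sum(values[c] for c in cards)
    aces = cards.count("A")
    while total > 21 and aces:   # count an ace as 1 instead of 11 if busted
        total -= 10
        aces -= 1
    return total

wins = losses = 0
for round_no in range(3):              # the game runs three rounds
    player = hand_value(["A", "7"])    # stand-ins for really dealt hands
    dealer = hand_value(["10", "6"])
    if player > dealer:
        wins += 1
    else:
        losses += 1
print(wins, losses)
```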
| 1 | 1 | 0 |
(Cards numbered 2-10 should be valued from 2-10, respectively. J,Q, and K should be 10, and A should be either 1 or 11, depending on the value of the hand).
How do I assign the deck these values? Also, the game needs to be 3 rounds. The way I did it is only one round. How do I make the game go three times, while keeping track of the players wins/losses?
Could someone please explain how I can do this a simple way?
|
Trouble giving values to deck of cards
| 0 | 0 | 0 | 938 |
11,733,106 |
2012-07-31T04:28:00.000
| 2 | 0 | 0 | 0 |
python,image-processing,machine-learning,computer-vision
| 11,736,635 | 2 | false | 0 | 0 |
If I understand you correctly, you have completely black images with white borders?
In this case I think the easiest approach is to compute a histogram of the pixels' intensity values, i.e. how dark or bright the overall image is. I would guess that the junk images are significantly darker than the non-junk images, so you can filter the images based on their histograms. For that you have to choose a threshold: every image darker than this threshold is considered junk.
If this approach is too fuzzy you can easily improve it, for example by computing the histogram of the inner image only, without the edges; that makes a junk frame's histogram look even darker in comparison to non-junk images.
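A stdlib-only sketch of that thresholding idea. Flat lists of pixel intensities stand in for real image data, which you would load with PIL or numpy; the threshold of 30 is an assumed value you would tune on your own frames:

```python
# Classify a frame as junk when its mean intensity falls below a threshold.
def is_junk(pixels, threshold=30):
    mean = sum(pixels) / len(pixels)   # "how dark is the overall image"
    return mean < threshold

dark_frame = [0] * 90 + [255] * 10     # mostly black, with a white vignette
normal_frame = [120] * 100             # ordinary scene brightness
print(is_junk(dark_frame), is_junk(normal_frame))
```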
| 1 | 2 | 1 |
I've never done any image processing and I was wondering if someone can nudge me in the right direction.
Here's my issue: I have a bunch of images of black and white images of places around a city. Due to some problems with the camera system, some images contain nothing but a black image with a white vignette around the edge. This vignette is noisy and non-uniform (sometimes it can be on both sides, other times only one).
What are some good ways I can go about detecting these frames? I would just need to be able to write a bit.
My image set is huge, so I would need this to be an automated process and in the end it should use Python since it needs to integrate into my existing code.
I was thinking some sort of machine learning algorithm but I'm not sure what to do beyond that.
|
Detecting Halo in Images
| 0.197375 | 0 | 0 | 1,550 |
11,733,149 |
2012-07-31T04:33:00.000
| 0 | 1 | 0 | 1 |
php,python,qiime
| 11,733,222 | 3 | false | 0 | 0 |
Instead of system(), try surrounding the command in `backticks`.
It has similar functionality but behaves a little differently in how it returns the output.
| 1 | 2 | 0 |
I have to design an interface in PHP for a piece of software written in Python. Currently this software is used from the command line by passing input, mostly a text file. There is a series of steps, and for every step a Python script is called. Every step takes a text file as input and generates an output text file in a folder chosen by the user. I am using PHP's system() but I can't see the output, yet when I use the same command from the command line it generates the output. Example of command:
python /software/qiime-1.4.0-release/bin/check_id_map.py -m /home/qiime/sample/Fasting_Map.txt -o /home/qiime/sample/mapping_output -v
|
I need to run a python script from php
| 0 | 0 | 0 | 634 |
11,733,391 |
2012-07-31T05:02:00.000
| 0 | 0 | 1 | 1 |
python,gtk,pygtk,vte
| 11,804,847 | 4 | false | 0 | 0 |
I have been having some problems of my own with Vte, so I don't know if I'm the right person to answer.
Have you tried replacing the old terminal with a new one in the container? It's not clearing as such, but you end up with an empty terminal.
| 2 | 1 | 0 |
How to clear all output in Vte.Terminal ?
|
How to clear output in Vte.Terminal?
| 0 | 0 | 0 | 303 |
11,733,391 |
2012-07-31T05:02:00.000
| 0 | 0 | 1 | 1 |
python,gtk,pygtk,vte
| 11,742,327 | 4 | false | 0 | 0 |
It would only clear the text visible in the terminal, but you can run the clear command (cls on Windows), or tinker with the feed() method and empty lines.
| 2 | 1 | 0 |
How to clear all output in Vte.Terminal ?
|
How to clear output in Vte.Terminal?
| 0 | 0 | 0 | 303 |
11,733,437 |
2012-07-31T05:06:00.000
| 0 | 0 | 0 | 1 |
python,memcached,uwsgi
| 11,733,510 | 1 | false | 1 | 0 |
You are probably experiencing Python GIL overhead. Try adding a second process to see if the results improve.
| 1 | 1 | 0 |
We use uwsgi + nginx to run our web site. Recently we wanted to improve the QPS of the site, so we decided to switch uwsgi from prefork mode to threaded mode, but we found something very bad.
When using prefork mode with 5 workers, the request time is 10-20 ms, but in threaded mode (one worker, 5 threads) it increases to 100-200 ms, which is far too slow.
We found that memcache.Client takes most of the time, which makes the request time increase.
Please help me work out where the problem is and how to solve it, thank you!
PS:
code:
import memcache
client = memcache.Client(['127.0.0.1:11211'])
client.get('mykey')
|
application run slowly under uwsgi threaded mode
| 0 | 0 | 0 | 947 |
11,735,073 |
2012-07-31T07:19:00.000
| 0 | 0 | 1 | 1 |
python,virtualenv,pip
| 11,735,136 | 2 | false | 0 | 0 |
What shell are you using? What specific command did you use to activate the virtualenv?
In my case (also on Squeeze) I am using bash, and if I run "source bin/activate" then everything on my path (pip, python, etc.) is correct.
| 1 | 6 | 0 |
When I activate a venv, which pip returns /usr/local/bin/pip instead of path/to/my/apps/venv/bin/pip. Why is that?
I am inclined to just rm -rf the pip in /usr/local/bin and install it again, but since this is a production server I prefer not to guess too much :-)
My concern is that I have (in usr/local/bin):
easy_install
easy_install-2.6
pip
pip-2.6
virtualenv
virtualenv-2.6
python --version returns 2.6.6 and which python returns /usr/bin/python even though the venv is activated?
Running Debian Squeeze
|
Activated VENV still use system pip and system python? What's wrong?
| 0 | 0 | 0 | 2,687 |
11,735,203 |
2012-07-31T07:28:00.000
| 0 | 0 | 1 | 1 |
python,bash,sublimetext2
| 11,735,588 | 5 | false | 0 | 0 |
If you're using Python 3.2 or later you can use the Python debugger for this. At the beginning of your project, import pdb. Then at the point where you want to enter interactive mode, call pdb.set_trace(). (You have to put the trace one line above the last line, otherwise the program will finish and restart.) I don't know how to make it enter interactive mode automatically, but when the program reaches the trace the console will enter the debugger. You can then type interact and press Enter, and you will be in interactive mode with all your variables preserved.
| 1 | 0 | 0 |
Here is my problem: When running a python script from command line (bash), I'd like to open a new console window, run my python script and end up in the interactive python shell. Is there an easy way to do this?
Background: Right now, I am exploring sublime text 2 by developing a simple python script together with numpy. When I run build from within sublime, the script is executed but I do not have the possibility to further interact with the result.
|
How to open a new console and run a python script
| 0 | 0 | 0 | 3,127 |
11,737,754 |
2012-07-31T10:08:00.000
| 1 | 1 | 0 | 0 |
python,apache,nginx,gevent,httpserver
| 11,740,272 | 2 | false | 1 | 0 |
In my opinion, you will never get the same level of security with a pure-Python server that you could have with a major web server such as Apache or Nginx.
These are well tested before being released, so by using a stable build and configuring it properly you will be close to the maximum security possible.
Pure-Python servers are very useful during development, but I do not know of any that can claim to compete with the majors on security testing / bug reports / quick fixes.
This is why it is generally advisable to put one of those servers in front of the pure-Python server, using, for example, options like ProxyPass.
| 2 | 2 | 0 |
just using python & gevent.server to server a simple login server(just check some data and do some db operation), would it be a problem when it's under ddos attack?
would it be better if using apache/ngnix to server http request?
|
http server using python & gevent(not using apache)
| 0.099668 | 0 | 0 | 1,332 |
11,737,754 |
2012-07-31T10:08:00.000
| 3 | 1 | 0 | 0 |
python,apache,nginx,gevent,httpserver
| 11,740,845 | 2 | false | 1 | 0 |
If you are using gevent.server to implement your own HTTP server, I advise against it; use gevent.pywsgi instead, which provides a full-featured, stable and thoroughly tested HTTP server. It is not as fast as gevent.wsgi, which is backed by libevent-http, but it has more features that you are likely to need, like HTTPS.
Gevent is much more likely to survive a DDoS attack than Apache, and nginx is as good as gevent in this regard, although I don't see why you would use it if you can do just fine with your pure-Python server. nginx would make sense if you had multiple backends behind the same server, like your auth server together with some static file serving (which could be done entirely by nginx) and possibly other subsystems, or other virtual hosts, all served through a single nginx configuration.
| 2 | 2 | 0 |
just using python & gevent.server to server a simple login server(just check some data and do some db operation), would it be a problem when it's under ddos attack?
would it be better if using apache/ngnix to server http request?
|
http server using python & gevent(not using apache)
| 0.291313 | 0 | 0 | 1,332 |
11,739,417 |
2012-07-31T11:47:00.000
| 6 | 0 | 1 | 0 |
python,dictionary
| 11,739,445 | 1 | true | 0 | 0 |
Yes.
The point is you should not modify d between calling d.values() and d.keys().
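A quick demonstration of that guarantee, assuming nothing touches the dict between the two calls:

```python
# With no modifications between the calls, values() and keys() line up,
# so zip() builds the (value, key) pairs the docs describe.
d = {"a": 1, "b": 2, "c": 3}
pairs = list(zip(d.values(), d.keys()))
print(pairs)
assert pairs == [(v, k) for k, v in d.items()]
```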
| 1 | 4 | 0 |
I am a bit confused by this paragraph in python docs for dict class
If items(), keys(), values(), iteritems(), iterkeys(), and
itervalues() are called with no intervening modifications to the
dictionary, the lists will directly correspond. This allows the
creation of (value, key) pairs using zip(): pairs = zip(d.values(),
d.keys())
what is meant by called with no intervening modifications ?
if I receive a dict instance which was spewed out by some function(I have no way of knowing if the elements were modified since the dict was created)..can I still use the zip(d.values(),d.keys()) ?
|
using zip on dict keys and values
| 1.2 | 0 | 0 | 6,629 |
11,740,489 |
2012-07-31T12:50:00.000
| 0 | 0 | 0 | 0 |
python,user-interface,vim,command-line-interface
| 11,740,644 | 2 | false | 0 | 1 |
Telling the program to request input from the console at the end via raw_input() should cause it to pause before closing the console.
| 2 | 3 | 0 |
I have gone through a few tutorials regarding console Python applications. I am using vim and using the Windows command prompt to run my same applications. I am moving towards GUI creation in wxPython. I am essentially trying to recreate the google finance chart, but with data from some temperature sensors.
However, whenever I run the program from the command line, the window of my sample app flashes and goes away immediately. When I ran it through IDLE, I saw that there was an error in my code. Is there a way to see errors when I run it from the command line, because I am much more comfortable with vim?
Thanks in advance!
|
Python GUI creation
| 0 | 0 | 0 | 293 |
11,740,489 |
2012-07-31T12:50:00.000
| 1 | 0 | 0 | 0 |
python,user-interface,vim,command-line-interface
| 11,741,077 | 2 | true | 0 | 1 |
You can run the code from the cmd prompt with python.exe filename.py; the cmd prompt won't close when the script exits, so the traceback stays visible.
Or you can write the program in vim and run it through IDLE.
| 2 | 3 | 0 |
I have gone through a few tutorials regarding console Python applications. I am using vim and using the Windows command prompt to run my same applications. I am moving towards GUI creation in wxPython. I am essentially trying to recreate the google finance chart, but with data from some temperature sensors.
However, whenever I run the program from the command line, the window of my sample app flashes and goes away immediately. When I ran it through IDLE, I saw that there was an error in my code. Is there a way to see errors when I run it from the command line, because I am much more comfortable with vim?
Thanks in advance!
|
Python GUI creation
| 1.2 | 0 | 0 | 293 |
11,745,033 |
2012-07-31T16:43:00.000
| 0 | 1 | 0 | 0 |
python
| 11,868,901 | 2 | true | 1 | 0 |
All right. Here's what worked for me, just in case anybody bumps into the same problem: I had to add a newline (i.e. \n) after every tag in the HTML table, and then everything worked fine.
PS: One clue as to whether this will help you is that I am creating one big string of HTML.
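A sketch of that fix: every tag is followed by a newline, the grade cells are colour-coded with <font> tags, and the finished string is wrapped as a text/html MIME part with the stdlib email package. The row data and colours are examples:

```python
from email.mime.text import MIMEText

# (value, ..., grade, colour) rows; the colour drives the <font> tag.
rows = [("abcd", 24222, "xyz", "A", "green"),
        ("abcd", 24222, "xyz", "B", "red")]

html = "<table>\n"
for a, b, c, grade, colour in rows:
    html += "<tr>\n"
    html += "<td>%s</td>\n<td>%s</td>\n<td>%s</td>\n" % (a, b, c)
    html += '<td><font color="%s">%s</font></td>\n' % (colour, grade)
    html += "</tr>\n"
html += "</table>\n"

# Wrap the string as an HTML email body, ready for smtplib.
msg = MIMEText(html, "html")
msg["Subject"] = "Colour-coded report"
print(msg.get_content_type())
```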
| 1 | 0 | 0 |
I am trying to send a python (2.6) HTML email with color coded output. My script creates an output string which I format to look like a table (using str.format). It prints okay on the screen:
abcd 24222 xyz A
abcd 24222 xyz B
abcd 24222 xyz A
abcd 24222 xyz D
But I also need to send it as an email message and I need to have A (say in Green color), B (in Red) etc. How could I do it?
What I've tried is attaching FONT COLOR = #somecolor and /FONT tags at the front and back of A, B etc., and I wrote a method/module which adds table, tr and td tags at the appropriate parts of the string so that the message would look like an HTML table in the email. But there is an issue with this approach:
1) It doesn't always work properly. The emails (obtained by running the exact same script) come out different, many times with misaligned members and mysterious tr's or td's appearing (at different locations each time), even though my HTML table creation is correct.
Any help would be appreciated.
|
Python HTML email : customizing output color-coding
| 1.2 | 0 | 0 | 1,729 |
11,746,610 |
2012-07-31T18:24:00.000
| 1 | 0 | 0 | 0 |
python,sqlalchemy
| 11,747,157 | 2 | true | 1 | 0 |
If there is a foreign key defined between tables, SA will figure the join condition for you, no need for additional filters.
| 1 | 1 | 0 |
I have an items table that is related to an item_tiers table. The second table consists of inventory receipts for an item in the items table. There can be 0 or more records in the item_tiers table related to a single record in the items table. How can I, using query, get only records that have 1 or more records in item tiers....
results = session.query(Item).filter(???).join(ItemTier)
Where the filter piece, in pseudo code, would be something like ...
if the item_tiers table has one or more records related to item.
|
SQLAlchemy - Query show results where records exist in both table
| 1.2 | 1 | 0 | 144 |
11,750,926 |
2012-08-01T00:23:00.000
| 0 | 0 | 0 | 0 |
python,serial-port,pyserial,usbserial
| 24,635,429 | 3 | false | 0 | 0 |
The only way I can think of to get around the problem of probing unknown devices is to have the device continually send unsolicited "hello" responses. That way you can just connect to all serial devices and listen for the "hellos"; connecting to and listening on a serial device shouldn't ever mess it up.
The downside is that you have these messages cluttering up your serial stream. You could add an "I'm here now, be quiet" command, but then you can only connect to the device once.
FTDI chips have a method of identification, but you have to use their library to access the data.
| 1 | 5 | 0 |
The solution to this problem is probably pretty simple, but I am new to interfacing with a device dynamically. What I'm doing is I am making a python executable code, so the user doesn't have to have Idle on their computer or any kind of python interpreter, which means I don't know which USB port the device will be plugged in to. The program needs to be able to open a connection to a device that is connected through a serial to usb converter. How can I determine which connected device is the correct device to open a port to? I am using pySerial to interact with the device. Any help would be greatly appreciated.
|
Identifying serial/usb device python
| 0 | 0 | 0 | 17,395 |
11,754,811 |
2012-08-01T07:49:00.000
| 2 | 0 | 1 | 0 |
python,regex
| 11,754,870 | 3 | false | 0 | 0 |
Yes it is. You prefix your keyword with zero to (25 - length(keyword)) "any" characters, so for APPLE the RE should be ^.{0,20}APPLE.
Edit: for clarification
Use ^.{0,20}APPLE with re.search, which scans the whole string; the ^ anchor keeps the match tied to the start.
With re.match the pattern is already anchored at the beginning of the string, so the ^ can be dropped there.
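Applied to the question's own examples: at most 20 "any" characters may precede APPLE, so all five letters fall inside the first 25 characters:

```python
import re

# The keyword may start no later than index 20, so it ends within
# the first 25 characters of the string.
pattern = re.compile(r"^.{0,20}APPLE")
print(bool(pattern.search("Johnny picked an APPLE from the tree")))
print(bool(pattern.search("Johnny picked something from a tree that had an APPLE")))
```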
| 1 | 0 | 0 |
I'm using regular expressions to match a keyword. Is it possible to check for this keyword within ONLY the first 25 characters?
For example, I want to find "APPLE":
'Johnny picked an APPLE from the tree' - Match Found (within first 25 chars)
'Johnny picked something from a tree that had an APPLE' - Not Found (because APPLE does not exist within the first 25 chars).
Is there syntax for this?
|
REGEX to check for phrase within first 25 characters
| 0.132549 | 0 | 0 | 202 |
11,755,474 |
2012-08-01T08:34:00.000
| 1 | 0 | 0 | 0 |
javascript,python,html,forms,cgi
| 11,755,603 | 1 | true | 1 | 0 |
If every select should be the only value that's needed, then every select is basically a form of its own.
You could either remove all the other selects when a single select is activated (which is prone to errors), or simply put every select in its own form instead of using one giant form. Otherwise all the data is going to be sent.
| 1 | 0 | 0 |
In a python cgi script I have many selects in a form (100 or so), and each select has 5 or 6 options to choose from. I don't want to have a separate submit button, so I am using onchange="submit();" to submit the form as soon as an option is selected from one of the many selects. When I read the form data with form.keys() the name of every select on the form is listed instead of just the one that was changed. This requires me to compare the value selected in each select with the starting value to find out which one changed and this of course is very slow. How can I just get the new value of the one select that was changed?
|
many selects (dropdowns) on html form, how to get just the value of the select that was changed
| 1.2 | 0 | 1 | 130 |
11,756,115 |
2012-08-01T09:13:00.000
| 0 | 0 | 0 | 0 |
python,django
| 11,758,203 | 2 | false | 1 | 0 |
I finally got the tests running. Here's what I did:
disabled the DATABASE_ROUTERS setting when running tests
kept the B alias in the DATABASES setting, but with the same name as A
appended B's INSTALLED_APPS entries that weren't already present to A's INSTALLED_APPS
| 1 | 0 | 0 |
I have 2 sites: A and B. A relies on some tables from B so it has an entry in its DATABASES settings pointing to B together with some entries under its DATABASE_ROUTERS settings to route certain model access to B's database.
Now I'm trying to write a test on A but just running manage.py test immediately fails because some of A's models relies on some models covered by the tables coming from B, and B's complete database tables hasn't been created yet.
So my question is, how do I tweak my TEST_RUNNER to first run syncdb on B against B's test db so then when I run manage.py test on A it can find the tables from B that it relies on?
I hope that makes sense.
|
Django: writing test for multiple database site
| 0 | 0 | 0 | 1,276 |
11,756,207 |
2012-08-01T09:18:00.000
| 0 | 1 | 1 | 0 |
python,eclipse,pydev
| 11,758,291 | 1 | false | 1 | 0 |
PyDev should be working fine there. In the project properties you can set the interpreter, PYTHONPATH and other PyDev-related settings.
To manually trigger code analysis, right-click a project, file or folder and select PyDev -> Code Analysis.
| 1 | 1 | 0 |
We have successfully added the PyDev plugin to our Eclipse; as a result it detects errors and so on in PyDev projects.
The question is: is there any way to use PyDev's abilities (e.g. error detection) in non-PyDev projects (e.g. a Java project)?
We are actually developing an Eclipse plugin that contains some .py files, and we want it to interpret them as a side feature.
|
python interpreter on non-pydev projects?
| 0 | 0 | 0 | 113 |
11,757,520 |
2012-08-01T10:42:00.000
| 151 | 0 | 1 | 0 |
python
| 11,757,548 | 5 | true | 0 | 0 |
Use os.path.dirname(filename).
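Applied to the question's example (note the result has no trailing slash):

```python
import os.path

# dirname() drops the final path component, leaving the directory part.
print(os.path.dirname("/a/path/to/my/file.txt"))
```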
| 1 | 84 | 0 |
How can I get the path of a file without the file basename?
Something like /a/path/to/my/file.txt --> /a/path/to/my/
Tried with .split() without success.
|
Path to a file without basename
| 1.2 | 0 | 0 | 54,771 |
11,759,164 |
2012-08-01T12:25:00.000
| 1 | 1 | 0 | 0 |
python,django,pylons,cherrypy
| 11,760,761 | 4 | false | 1 | 0 |
For the fastest development you may dive into Django, but Django is probably not the minimalistic solution you are after. Flask is lighter. You can also try Pyramid.
| 1 | 0 | 0 |
The requirement is to develop a HTML based facebook app. It would not be content based like a newspaper site,
but will mostly have user generated data which would be aggregated and presented from database + memcache.
The app would contain 4-5 pages at most, with different purposes.
We decided to write the app in Python instead of PHP , and tried to evaluate django.
However, we found django is not as flexible as how CodeIgniter in PHP is i.e. putting less restrictions and rules, and allowing you to do what you want to do.
PHP CodeIgnitor is minimalistic MVC framework, which we would have chosen if we were to develop in PHP.
Can you please suggest a flexible and minimalistic python based web framework? I have heard of pylons,cheeryPy,web.py , but I am completely unaware of their usage and structure.
|
Which Python framework is flexible and similar to CodeIgniter in PHP?
| 0.049958 | 0 | 0 | 1,400 |
11,759,307 |
2012-08-01T12:33:00.000
| 2 | 0 | 0 | 1 |
python,oauth
| 11,759,545 | 2 | true | 0 | 0 |
All Python libraries that don't rely on native code or platform-specific APIs are portable. I don't see any of that in python-oauth or python-oauth2.
So your current library should work fine on Linux.
| 1 | 2 | 0 |
Is there a python library for oAuth which can be run on Window and Linux?
On window I am using python-oauth but I could not find an installation for Linux
|
Python: OAuth Library for Linux and Windows
| 1.2 | 0 | 1 | 736 |
11,761,785 |
2012-08-01T14:46:00.000
| 0 | 0 | 0 | 1 |
python,apache,mod-wsgi,web.py
| 11,768,941 | 1 | true | 1 | 0 |
Easy: don't restart Apache, don't set maximum-requests, and don't change the code in the WSGI script file.
Are you saying that you are seeing restarts even when you leave Apache completely untouched?
And yes, it sounds like you should be re-architecting your system; a web process that takes that long to start up is crazy.
| 1 | 1 | 0 |
I have a python web.py app with long (minutes) start-up time that I'd like to host with in Apache with mod_wsgi.
The long-term answer may be "rewrite the app." But in the short term I'd like to configure mod_wsgi to:
Use a single process to serve the app (I can do this with WSGIDaemonProcess processes=1),
and
Keep using that process without killing it off periodically
Is #2 doable? Or, are there other stopgap solutions I can use to host this app?
Thanks!
|
Maintaining a (singleton) process with mod_wsgi?
| 1.2 | 0 | 0 | 396 |
11,761,889 |
2012-08-01T14:53:00.000
| 19 | 0 | 1 | 0 |
python,image
| 11,761,906 | 3 | true | 0 | 0 |
Multiply the length of the data by 3/4, since base64 encoding turns every 6 bytes of input into 8 characters. If the result is within a few bytes of 4 MB then you'll need to count the number of = padding characters at the end as well.
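A sketch of that size check: decoded size is 3/4 of the encoded length, minus one byte per trailing '=' padding character. The 4 KB of zero bytes stands in for real image data:

```python
import base64

# Decoded byte count from the base64 text alone, without decoding it.
def decoded_size(b64_string):
    return len(b64_string) * 3 // 4 - b64_string.count("=")

data = b"\x00" * 4096                    # stand-in for real image bytes
b64 = base64.b64encode(data).decode()
print(decoded_size(b64) == len(data))
print(decoded_size(b64) <= 4 * 1024 * 1024)   # under the 4 MB limit
```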
| 1 | 12 | 0 |
I'm working on a python web service.
It calls another web service to change the picture of a profile.
It connects to another web service.
This web service can only accept pictures that are 4 MB or smaller.
I will put the checking in the first web service.
It uses PIL to check if the base64 string is a valid image.
However, how do I check if the base64 string will create a 4 MB or smaller image?
|
Get Image File Size From Base64 String
| 1.2 | 0 | 0 | 12,113 |
11,762,290 |
2012-08-01T15:15:00.000
| 3 | 0 | 0 | 0 |
python,django,image-processing,heroku,django-imagekit
| 11,834,793 | 1 | true | 1 | 0 |
Try changing the image size with PIL from the console and see if memory usage is acceptable. Image resizing is a simple task, so I don't believe you need extra applications for it. Also, consider splitting your task into 3 tasks (one per image size?).
| 1 | 3 | 0 |
Django-imagekit, which I'm using to process user uploaded images on a social media website, uses an unacceptably high level of memory. I'm looking for ideas on how to get around this problem.
We are using django-imagekit to copy user uploaded images it into three predefined sizes, and saves the four copies (3 processed plus 1 original) into our AmazonS3 bucket.
This operation is quickly causing us to go over our memory limit on our Heroku dynos. On the django-imagekit github page, I've seen a few suggestions for hacking the library to use less memory.
I see three options:
Try to hack django-imagekit, and deal with the ensuing update problems from using a modified third party library
Use a different imaging processing library
Do something different entirely -- resize the images on in the browser perhaps? Or use a third party service? Or...?
I'm looking for advice on which of these routes to take. In particular, if you are familiar with django-imagekit, or if you know of / are using a different image processing library in a Django app, I'd love to hear your thoughts.
Thanks a lot!
Clay
|
Memory usage in django-imagekit is unacceptable -- ideas on fixes?
| 1.2 | 0 | 0 | 353 |
11,762,480 |
2012-08-01T15:25:00.000
| 0 | 1 | 0 | 0 |
php,python,mysql,cron
| 11,762,685 | 3 | false | 0 | 0 |
You can run more than one Python process at a time. As for causing excessive load on the server, that can only be alleviated by making sure you have just one instance running at any given time, or some other fixed number of processes, say two. To accomplish this you can look at using a lock file or some kind of system flag, a mutex, etc.
But the best way to limit excessive use is to limit the number of tasks running concurrently.
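A sketch of the lock-file idea on Unix: hold an exclusive, non-blocking lock so only one copy of the scraper runs at a time. The lock path is an arbitrary example, and fcntl is Unix-only (on Windows you would use msvcrt.locking instead):

```python
import fcntl
import os
import sys
import tempfile

# Take an exclusive, non-blocking lock; a second instance fails the
# lockf() call and exits instead of adding load to the server.
lock_path = os.path.join(tempfile.gettempdir(), "scraper.lock")
lock_file = open(lock_path, "w")
try:
    fcntl.lockf(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
except OSError:
    sys.exit("another instance is already running")
print("lock acquired; safe to start the scrape")
```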
| 1 | 4 | 0 |
I'm working on a bit of python code that uses mechanize to grab data from another website. Because of the complexity of the website the code takes 10-30 seconds to complete. It has to work its way through a couple pages and such.
I plan on having this piece of code being called fairly frequently. I'm wondering the best way to implement something like this without causing a huge server load. Since I'm fairly new to python I'm not sure how the language works.
If the code is in the middle of processing one request and another user calls the code, can two instances of the code run at once? Is there a better way to implement something like this?
I want to design it in a way that it can complete the hefty tasks without being too taxing on the server.
|
Best way to implement frequent calls to taxing Python scripts
| 0 | 0 | 0 | 128 |
11,762,629 |
2012-08-01T15:32:00.000
| 1 | 0 | 0 | 0 |
javascript,python,ajax,django
| 11,762,988 | 3 | false | 1 | 0 |
Yes, it is possible. Pass the id as a parameter to the view you will use inside your app, like:
def example_view(request, id)
and in urls.py, use something like this:
url(r'^example_view/(?P<id>\d+)/', 'App.views.example_view')
A request to the URL /example_view/8/ will then call the view with id set to 8, which you can use to look up the related record: for example, the row of a specific table in your database whose primary key is 8.
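As a rough sketch of what the dispatcher is doing (pure stdlib, no Django required to run), the named group in that pattern is what captures the id and hands it to the view as a keyword argument:

```python
import re

# the same named-group regex used in the url() pattern above; Django's
# dispatcher passes the captured group to the view as a keyword argument,
# i.e. example_view(request, id="8")
pattern = re.compile(r"^example_view/(?P<id>\d+)/")

match = pattern.match("example_view/8/")
print(match.group("id"))  # "8" -- note Django passes it to the view as a string
```

Remember to convert the captured string to an int before using it as a primary key, or declare the converter in the URL pattern if your Django version supports it.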
| 1 | 13 | 0 |
So I'm trying to basically set up a webpage where a user chooses an id, the webpage then sends the id information to python, where python uses the id to query a database, and then returns the result to the webpage for display.
I'm not quite sure how to do this. I know how to use an ajax call to call the data generated by python, but I'm unsure of how to communicate the initial id information to the django app. Is it possible to say, query a url like ./app/id (IE /app/8), and then use the url information to give python the info? How would I go about editing urls.py and views.py to do that?
Thanks,
|
Pass information from javascript to django app and back
| 0.066568 | 0 | 0 | 10,912 |
11,762,812 |
2012-08-01T15:40:00.000
| 0 | 1 | 0 | 1 |
c++,python,testing,networking
| 11,763,064 | 3 | false | 0 | 0 |
The only way to check whether a packet was sent correctly is by verifying its integrity on the receiving end.
| 2 | 0 | 0 |
In my app I send a packet via a raw socket to another computer, then get a packet back and write the return packet to another computer via a raw socket.
My app is a C++ application running on Ubuntu that works with nfqueue.
I want to test the packets sent to both computer1 and computer2 in order to check that they are as expected.
I need to write an automated test that checks my program; this test needs to listen on the eth interface, load the sent packets, and check that they are as expected (IP, ports, payload).
I am looking for a simple way (a tool with a simple API, or code) to do this.
I need a simple, automatable way to listen on the eth interface.
I would prefer that the test check the sender, but it might be difficult to find an API to listen on the interface (I send via a raw socket), so a suggested API that checks the receiving computers is also good.
The test application can be written in C++, Java, or Python.
|
How to test if packet is sent correct?
| 0 | 0 | 1 | 1,276 |
11,762,812 |
2012-08-01T15:40:00.000
| 0 | 1 | 0 | 1 |
c++,python,testing,networking
| 11,809,920 | 3 | true | 0 | 0 |
I run tcpdump on the receiver computer and save all packets to a file.
I then analyze the tcpdump capture with Python and check in the test that the packets were sent as expected.
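A minimal sketch of that analysis step using only the standard library (scapy or dpkt would be more convenient, but this shows the idea): save the capture with `tcpdump -w out.pcap`, then walk the classic pcap record headers and assert on the raw packet bytes:

```python
import struct

def iter_pcap_packets(data):
    """Yield (timestamp, raw_packet_bytes) from the bytes of a classic pcap file."""
    # global header: magic, ver_major, ver_minor, thiszone, sigfigs, snaplen, linktype
    magic = struct.unpack_from("<I", data, 0)[0]
    endian = "<" if magic == 0xA1B2C3D4 else ">"  # the magic number reveals byte order
    offset = 24  # the global header is 24 bytes
    while offset + 16 <= len(data):
        ts_sec, ts_usec, incl_len, _orig_len = struct.unpack_from(endian + "IIII", data, offset)
        offset += 16
        yield ts_sec + ts_usec / 1e6, data[offset:offset + incl_len]
        offset += incl_len

# in the test you would read the file tcpdump wrote and assert on each packet:
# for ts, pkt in iter_pcap_packets(open("out.pcap", "rb").read()):
#     ... check IP addresses, ports, and payload of pkt here ...
```

This ignores nanosecond-resolution pcaps and the newer pcapng format; for real tests, scapy's rdpcap() is the usual shortcut.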
| 2 | 0 | 0 |
In my app I send a packet via a raw socket to another computer, then get a packet back and write the return packet to another computer via a raw socket.
My app is a C++ application running on Ubuntu that works with nfqueue.
I want to test the packets sent to both computer1 and computer2 in order to check that they are as expected.
I need to write an automated test that checks my program; this test needs to listen on the eth interface, load the sent packets, and check that they are as expected (IP, ports, payload).
I am looking for a simple way (a tool with a simple API, or code) to do this.
I need a simple, automatable way to listen on the eth interface.
I would prefer that the test check the sender, but it might be difficult to find an API to listen on the interface (I send via a raw socket), so a suggested API that checks the receiving computers is also good.
The test application can be written in C++, Java, or Python.
|
How to test if packet is sent correct?
| 1.2 | 0 | 1 | 1,276 |
11,764,579 |
2012-08-01T17:37:00.000
| 0 | 0 | 0 | 0 |
python,google-app-engine,profile
| 11,764,721 | 1 | true | 1 | 0 |
I've used django on most of my webapps, but the concept should be the same; I use ajax to send the data to the backend whenever the user hits submit (and the form returns false) so the user can keep editing it. With ajax, you can send the data to different handlers on the backend. Also, using jQuery, you can set flags to see whether fields have been changed, to avoid sending the ajax message in the first place. Ajax requests behave almost exactly like standard HTTP requests, but jQuery (and most libraries) add an X-Requested-With header that identifies them as AJAX.
If you're looking at strictly backend, then you will need to do multiple "if" statements on the backend and check one field at a time to see if it has been changed. On the backend you should still be able to call other handlers (passing them the same request).
| 1 | 0 | 0 |
My question I suppose is rather simple. Basically, I have a profile. It has many variables being passed in, for instance name, username, profile picture, and many others that are updated by their own respective pages. So one page would be used to update the profile picture: that form would submit its data to the handler, which would put() it to the database. What I'm trying to do here is put all of the forms used to edit the profile on one single page at the same time.
Would I need one huge handler to deal with that page? When I hit 'save' at the bottom of the page, how do I avoid overwriting data that hasn't been modified? Currently, say I have 5 profile variables: they map to 5 handlers and 5 separate pages that each contain their own respective form.
Thanks.
|
Submitting Multiple Forms At The Same Time (Edit Profile Page)
| 1.2 | 0 | 0 | 146 |
11,764,777 |
2012-08-01T17:51:00.000
| 1 | 1 | 0 | 0 |
javascript,python,ajax,bash,terminal
| 11,764,917 | 1 | true | 0 | 0 |
Once you have logged in as a non-root user you can just su to the root user
| 1 | 0 | 0 |
I am trying to use Ajaxterm and I remember that when I used it for the first time about a year ago, there was something about logging in as root.
Can anyone tell me how to enable root login or point me to a guide? Many different google searches have returned no results.
P.S. My question is NOT whether or not I should login as root, but how to login as root.
|
Login as root in Ajaxterm
| 1.2 | 0 | 0 | 249 |
11,765,123 |
2012-08-01T18:14:00.000
| 1 | 0 | 0 | 0 |
python,django
| 11,765,289 | 2 | true | 1 | 0 |
You can have two copies of your settings.py file, one for production and one for development. Whichever you need to be the default, name it settings.py.
Then just set DJANGO_SETTINGS_MODULE to the Python path of the file you would like to use.
So, if your settings files are myproject/settings.py, myproject/settings_dev.py; you can then do:
$ DJANGO_SETTINGS_MODULE=settings_dev python manage.py shell
From the myproject directory.
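One common layout (using the flat module names from this answer; your project may nest them under a package) is to have the dev file import everything from the production file and override only the delta:

```python
# settings_dev.py -- a sketch; starts from the production settings and
# overrides only what differs on the development machine
from settings import *  # noqa: F401,F403

DEBUG = True
DATABASES["default"]["HOST"] = ""  # empty string means localhost for the dev database
```

That way the two files can never drift apart except in the lines you deliberately changed, and both can live in the same Git repository.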
| 1 | 1 | 0 |
I'm new to Python and Django and have over the past few weeks managed to set up my first deployment - a very basic site with user authentication and a few pages, which I hope to fill with content in the next couple of weeks.
I have managed to find the answer to probably 40+ questions I have encountered so far by searching Google / StackOverflow / Django docs etc., but now I have one I can't seem to find a good answer to (perhaps because I don't know how best to search for it): when I develop on my local machine I need my settings.py file to point to the remote database ('HOST': 'www.mysite.com',) but when I deploy to a shared hosting service provider they require the use of localhost ('HOST': '', in settings.py).
Since I host my code on GitHub and want to mirror it to the server, is there a way to resolve this so I don't have to make a manual edit to settings.py each time after uploading changes to the server?
|
How to use a different database host in development vs deployment?
| 1.2 | 0 | 0 | 120 |
11,767,001 |
2012-08-01T20:23:00.000
| 1 | 0 | 0 | 0 |
python,encoding,character-encoding,web-scraping,beautifulsoup
| 11,767,390 | 1 | true | 1 | 0 |
The simplest way might be to parse the page twice, once as UTF-8, and once as GB2312. Then extract the relevant section from the GB2312 parse.
I don't know much about GB2312, but looking it up it appears to at least agree with ASCII on the basic letters, numbers, etc. So you should still be able to parse the HTML structure using GB2312, which would hopefully give you enough information to extract the part you need.
This may be the only way to do it, actually. In general, GB2312-encoded text won't be valid UTF-8, so trying to decode it as UTF-8 should lead to errors. The BeautifulSoup documentation says:
In rare cases (usually when a UTF-8 document contains text written in a completely different encoding), the only way to get Unicode may be to replace some characters with the special Unicode character “REPLACEMENT CHARACTER” (U+FFFD, �). If Unicode, Dammit needs to do this, it will set the .contains_replacement_characters attribute to True on the UnicodeDammit or BeautifulSoup object.
This makes it sound like BeautifulSoup just ignores decoding errors and replaces the erroneous characters with U+FFFD. If this is the case (i.e., if your document has contains_replacement_characters == True), then there is no way to get the original data back from the document once it's been decoded as UTF-8. You will have to do something like what I suggested above: decode the entire document twice with different codecs.
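A small demonstration of why the double decode is necessary (hypothetical byte strings; Python's stdlib includes the gb2312 codec): once the GB2312 bytes have been decoded as UTF-8 with replacement characters, the original data is gone, so the second pass must start again from the raw bytes:

```python
# a UTF-8 page skeleton with a GB2312-encoded chunk injected into the body
body_gb = u"\u4f60\u597d".encode("gb2312")        # "你好" in GB2312: b'\xc4\xe3\xba\xc3'
page = b"<div>" + body_gb + b"</div>"

as_utf8 = page.decode("utf-8", errors="replace")  # body collapses to U+FFFD chars
recovered = body_gb.decode("gb2312")              # the raw bytes still decode fine

print(as_utf8)    # <div>...</div> with replacement characters where the body was
print(recovered)  # 你好
```

With BeautifulSoup specifically, bs4's from_encoding argument can force the GB2312 interpretation on the second parse.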
| 1 | 0 | 0 |
I'm trying to parse a web page using Python's beautiful soup Python parser, and am running into an issue.
The header of the HTML we get from them declares a utf-8 character set, so Beautiful Soup encodes the whole document in utf-8, and indeed the HTML tags are encoded in UTF-8 so we get back a nicely structured HTML page.
The trouble is, this stupid website injects gb2312-encoded body text into the page that gets parsed as utf-8 by beautiful soup. Is there a way to convert the text from this "gb2312 pretending to be utf-8" state to "proper expression of the character set in utf-8?"
|
Parsing a utf-8 encoded web page with some gb2312 body text with Python
| 1.2 | 0 | 1 | 587 |
11,768,763 |
2012-08-01T22:57:00.000
| 0 | 0 | 1 | 0 |
python,opencv,computer-vision,simplecv
| 11,769,340 | 4 | false | 0 | 0 |
From what I know, you can create a binary ([0, 1]) mask marking the areas where the brightness is below your threshold. There are then standard methods to count the number, size, etc. of the shapes in the mask, such as connected-component labeling or recursive flood fill.
| 2 | 2 | 0 |
Here is a sample aerial image:
![aerial image of some unfrozen lakes][1]
How do I automatically detect and extract parameters of the black unfrozen lake from the image? I'm primarily using Python.
EDIT: see my answer below; I think I've found the solution.
|
How to detect a black lake on a white snowy background in an aerial image?
| 0 | 0 | 0 | 1,144 |
11,768,763 |
2012-08-01T22:57:00.000
| 2 | 0 | 1 | 0 |
python,opencv,computer-vision,simplecv
| 11,769,451 | 4 | false | 0 | 0 |
This is an image segmentation problem, and there are in general lots of different ways you could go about it. The easiest way here would seem to be region growing:
Find every pixel whose grey value is lower than some threshold you pick to separate black from white - these pixels are your 'seeds'.
Flood out from them, using the grow condition that you only flood into pixels whose grey value is also below a certain threshold (possibly the same one as before, but could be different). Terminate when you can't grow the regions you have any further. During the flooding process, combine seeds that are reachable from each other into the same region. This process will produce a number of connected regions. You can keep track of the sizes of these regions during the flooding process.
Remove any regions that are below a certain size (alternatively, if you are only interested in the largest lake, pick the largest region you have).
Calculate the parameters you want from the pixels that are part of the lake(s). For example, the mean grey value of a lake would be the mean of the grey values of the pixels in the lake, etc. Different techniques will be needed for different parameters.
| 2 | 2 | 0 |
Here is a sample aerial image:
![aerial image of some unfrozen lakes][1]
How do I automatically detect and extract parameters of the black unfrozen lake from the image? I'm primarily using Python.
EDIT: see my answer below; I think I've found the solution.
|
How to detect a black lake on a white snowy background in an aerial image?
| 0.099668 | 0 | 0 | 1,144 |
11,769,445 |
2012-08-02T00:26:00.000
| 0 | 0 | 1 | 0 |
c#,python,xbox360
| 11,769,529 | 2 | false | 0 | 1 |
Well, although C++ is the language of choice in the gaming industry, I have seen really good games written in C# using the XNA framework. In terms of your scripting question, I believe Blizzard uses Lua for UI in games like WoW. But going back to your question: it's possible to integrate Python with C#; there is an open-source implementation of the Python programming language that is tightly integrated with the .NET Framework, called IronPython.
| 1 | 1 | 0 |
I am not familiar with what C# can do, especially in the context of a Xbox 360 game, but is it possible to execute Python scripts from within an Xbox 360 Indie game?
I've read several times that you'd want to write the game graphics and logic in a quicker language like C#, and then use Python as a scripting language for fast iteration. Is this sort of thing possible for development on the Xbox 360 platform?
|
Using C# to write an Xbox 360 Indie game with Python Scripting
| 0 | 0 | 0 | 734 |
11,769,471 |
2012-08-02T00:30:00.000
| 3 | 0 | 0 | 0 |
python,rpy2
| 52,399,670 | 4 | false | 0 | 0 |
In the latest version of rpy2, you can simply do this in a direct way:
import numpy as np
array = np.array(vector_R)
| 1 | 9 | 1 |
I'm using rpy2 and I have this issue that's bugging me: I know how to convert a Python array or list to a FloatVector that R (thanks to rpy2) can handle within Python, but I don't know if the opposite can be done, say, I have a FloatVector or Matrix that R can handle and convert it back to a Python array or list...can this be done?
Thanks in advance!
|
rpy2: Convert FloatVector or Matrix back to a Python array or list?
| 0.148885 | 0 | 0 | 5,269 |
11,770,312 |
2012-08-02T02:36:00.000
| 0 | 0 | 0 | 0 |
python,html,post,bottle
| 11,770,627 | 1 | true | 1 | 0 |
You could add a hidden input field to each form on the page with a specific value. On the server side, check the value of this field to detect which form the post request came from.
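A sketch of the idea (field and value names are made up; in Bottle, the posted fields arrive in request.forms inside your POST handler, which plays the role of the plain dict here):

```python
# each form on the page carries a hidden field identifying it
FORM_A = """<form action="/submit" method="post">
  <input type="hidden" name="form_id" value="search">
  <input name="q"><input type="submit" value="Go">
</form>"""

def handle_post(forms):
    """Dispatch on the hidden field; forms stands in for Bottle's request.forms."""
    form_id = forms.get("form_id")
    if form_id == "search":
        return "search for %s" % forms.get("q", "")
    if form_id == "comment":
        return "save comment"
    return "unknown form"
```

This keeps a single POST route for the whole page, so there is no need to route a separate page per form.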
| 1 | 0 | 0 |
The issue I'm running into is: how do I know which element of my page made a POST request? I have multiple elements on the page that can make the POST request, but how do I get the values from the element that created the request? It seems like this would be fairly trivial, but I have come up with nothing, and quite a few Google searches have turned up nothing as well.
Is there any way to do this using Bottle?
I had an idea to add a route for an SQL page (with authentication, of course) to provide the action for the form, and to use the template to render the id in the action, but I was thinking there had to be a better way to do this without routing another page.
|
Distinguishing post request's from possible poster elements
| 1.2 | 0 | 1 | 53 |
11,778,071 |
2012-08-02T13:07:00.000
| 5 | 1 | 1 | 0 |
python,vim,codebase
| 11,778,262 | 3 | false | 0 | 0 |
I use IPython's ?? command.
You just need to figure out how to import the thing you want to look at, then add ?? to the end of the module, class, function, or method name to view its source code. Command completion helps with figuring out long names as well.
| 1 | 15 | 0 |
As programmers we read more than we write. I've started working at a company that uses a couple of "big" Python packages; packages or package-families that have a high KLOC. Case in point: Zope.
My problem is that I have trouble navigating this codebase fast/easily. My current strategy is
I start reading a module I need to change/understand
I hit an import which I need to know more of
I find out where the source code for that import is by placing a Python debug (pdb) statement after the imports and echoing the module, which tells me its source file
I navigate to it, in shell or the Vim file explorer.
most of the time the module itself imports more modules and before I know it I've got 10KLOC "on my plate"
Alternatively:
I see a method/class I need to know more of
I do a search (ack-grep) for the definition of that method/class across the whole codebase (which can be a pain because the codebase is partly in ~/.buildout-eggs)
I find one or more pieces of code that define that method/class
I have to deduce which one of them is the one I need to read
This costs a lot of time, which is understandable for a big codebase. But I get the feeling that navigating a large and unknown Python codebase is a common enough problem.
So I'm looking for technical tools or strategic solutions for this problem.
...
I just can't imagine hardcore Python programmers using the strategies outlined above.
|
Navigating a big Python codebase faster
| 0.321513 | 0 | 0 | 7,153 |
11,779,033 |
2012-08-02T14:00:00.000
| 1 | 0 | 0 | 1 |
python,google-app-engine,google-cloud-datastore,data-import
| 11,780,827 | 2 | false | 1 | 0 |
Have the user upload the file, then start a task that runs the import. Email results/errors to the user at the end. Another way I have done it: have the user create the spreadsheet in Google Docs and supply the sheet key, or the link if it's published, and then start a task that processes the spreadsheet directly from Google Docs.
| 1 | 1 | 0 |
Building an application using Python in GAE that handles a lot of user data such as contacts, appointments, etc...
Would like to allow users to import their old data from other applications. For example an appointment might look like:
Start time Duration Service Customer Id
2012-08-02 09:50AM, 01:00:00, Hair cut, 94782910,
2012-08-02 10:50AM, 00:30:00, Dye job, 42548910,
...
I'm unfamiliar with accepted practices for handling this type of situation. I also see issues with handling this on Google App Engine specifically, because requests cannot take longer than 30 seconds.
Ideally, it seems like users should be able to upload CSV files of their data via a web page, but I don't really know of a good way to do this with app engine.
Another way I can think of would be to let users cut and paste text directly into an HTML text area. Then JavaScript could be used to iterate over the data and POST it to the server one row at a time, or in small chunks. This sounds really sketchy to me though.
Any ideas on what a "good" way to handle this would be?
Thanks so much!
|
Import data into Google App Engine in a way that is "easy" for the user of the application
| 0.099668 | 0 | 0 | 181 |
11,780,084 |
2012-08-02T14:50:00.000
| 1 | 0 | 0 | 0 |
python,wxpython,wxwidgets
| 11,780,276 | 2 | false | 0 | 1 |
A Panel floating all by itself? Doesn't sound too likely! It would be a lot easier to answer this question if you described in more detail what you are doing, and what is not working as you expect.
In general, panels are created as children of a frame. If the panel is the ONLY child of the frame, then it will be resized automatically along with the frame. Otherwise you will have to handle the resize event yourself.
What have you tried so far?
| 1 | 1 | 0 |
I have a wxPanel. How can I make it resizable by the user? With a drag-and-drop resize bar?
I wonder what the widgets I need are.
Thank you.
|
Make a wxPanel resizable by the user
| 0.099668 | 0 | 0 | 2,621 |
11,783,875 |
2012-08-02T18:47:00.000
| 0 | 0 | 1 | 0 |
python,beautifulsoup,flask,importerror
| 63,454,563 | 22 | false | 1 | 0 |
In case you are behind corporate proxy then try using following command
pip install --proxy=http://www-YOUR_PROXY_URL.com:PROXY_PORT BeautifulSoup4
| 8 | 170 | 0 |
I'm working in Python and using Flask. When I run my main Python file on my computer, it works perfectly, but when I activate venv and run the Flask Python file in the terminal, it says that my main Python file has "No Module Named bs4." Any comments or advice is greatly appreciated.
|
ImportError: No Module Named bs4 (BeautifulSoup)
| 0 | 0 | 0 | 402,243 |
11,783,875 |
2012-08-02T18:47:00.000
| 5 | 0 | 1 | 0 |
python,beautifulsoup,flask,importerror
| 39,798,150 | 22 | false | 1 | 0 |
If you use PyCharm, go to Preferences - Project Interpreter - and install bs4.
If you try to install a package named BeautifulSoup instead, it will still report that there is no module named bs4.
| 8 | 170 | 0 |
I'm working in Python and using Flask. When I run my main Python file on my computer, it works perfectly, but when I activate venv and run the Flask Python file in the terminal, it says that my main Python file has "No Module Named bs4." Any comments or advice is greatly appreciated.
|
ImportError: No Module Named bs4 (BeautifulSoup)
| 0.045423 | 0 | 0 | 402,243 |
11,783,875 |
2012-08-02T18:47:00.000
| 5 | 0 | 1 | 0 |
python,beautifulsoup,flask,importerror
| 49,884,772 | 22 | false | 1 | 0 |
I would advise you to uninstall the bs4 library by using this command:
pip uninstall bs4
and then install it using this command:
sudo apt-get install python3-bs4
I was facing the same problem in my Linux Ubuntu when I used the following command for installing bs4 library:
pip install bs4
| 8 | 170 | 0 |
I'm working in Python and using Flask. When I run my main Python file on my computer, it works perfectly, but when I activate venv and run the Flask Python file in the terminal, it says that my main Python file has "No Module Named bs4." Any comments or advice is greatly appreciated.
|
ImportError: No Module Named bs4 (BeautifulSoup)
| 0.045423 | 0 | 0 | 402,243 |
11,783,875 |
2012-08-02T18:47:00.000
| 5 | 0 | 1 | 0 |
python,beautifulsoup,flask,importerror
| 51,779,869 | 22 | false | 1 | 0 |
If you are using Anaconda for package management, following should do:
conda install -c anaconda beautifulsoup4
| 8 | 170 | 0 |
I'm working in Python and using Flask. When I run my main Python file on my computer, it works perfectly, but when I activate venv and run the Flask Python file in the terminal, it says that my main Python file has "No Module Named bs4." Any comments or advice is greatly appreciated.
|
ImportError: No Module Named bs4 (BeautifulSoup)
| 0.045423 | 0 | 0 | 402,243 |
11,783,875 |
2012-08-02T18:47:00.000
| 3 | 0 | 1 | 0 |
python,beautifulsoup,flask,importerror
| 67,411,090 | 22 | false | 1 | 0 |
This worked for me.
pipenv install BeautifulSoup4
| 8 | 170 | 0 |
I'm working in Python and using Flask. When I run my main Python file on my computer, it works perfectly, but when I activate venv and run the Flask Python file in the terminal, it says that my main Python file has "No Module Named bs4." Any comments or advice is greatly appreciated.
|
ImportError: No Module Named bs4 (BeautifulSoup)
| 0.027266 | 0 | 0 | 402,243 |
11,783,875 |
2012-08-02T18:47:00.000
| 2 | 0 | 1 | 0 |
python,beautifulsoup,flask,importerror
| 58,264,670 | 22 | false | 1 | 0 |
pip install --user BeautifulSoup4
| 8 | 170 | 0 |
I'm working in Python and using Flask. When I run my main Python file on my computer, it works perfectly, but when I activate venv and run the Flask Python file in the terminal, it says that my main Python file has "No Module Named bs4." Any comments or advice is greatly appreciated.
|
ImportError: No Module Named bs4 (BeautifulSoup)
| 0.01818 | 0 | 0 | 402,243 |
11,783,875 |
2012-08-02T18:47:00.000
| 0 | 0 | 1 | 0 |
python,beautifulsoup,flask,importerror
| 70,539,039 | 22 | false | 1 | 0 |
One more solution for PyCharm:
Go to File -> Settings -> Python Interpreter, click on plus sign and find beautifulsoup4.
Click install.
| 8 | 170 | 0 |
I'm working in Python and using Flask. When I run my main Python file on my computer, it works perfectly, but when I activate venv and run the Flask Python file in the terminal, it says that my main Python file has "No Module Named bs4." Any comments or advice is greatly appreciated.
|
ImportError: No Module Named bs4 (BeautifulSoup)
| 0 | 0 | 0 | 402,243 |
11,783,875 |
2012-08-02T18:47:00.000
| 1 | 0 | 1 | 0 |
python,beautifulsoup,flask,importerror
| 58,678,662 | 22 | false | 1 | 0 |
A lot of tutorials/references were written for Python 2 and tell you to use pip install somename. If you're using Python 3 you want to change that to pip3 install somename.
| 8 | 170 | 0 |
I'm working in Python and using Flask. When I run my main Python file on my computer, it works perfectly, but when I activate venv and run the Flask Python file in the terminal, it says that my main Python file has "No Module Named bs4." Any comments or advice is greatly appreciated.
|
ImportError: No Module Named bs4 (BeautifulSoup)
| 0.009091 | 0 | 0 | 402,243 |
11,785,116 |
2012-08-02T20:18:00.000
| 0 | 0 | 0 | 0 |
python,django,url
| 11,785,422 | 1 | false | 1 | 0 |
Can you just use HTTP verbs?
If both of your entries into the URL are from forms, then you could simply not support GET access to that URL at all.
| 1 | 0 | 0 |
Suppose I have two ModelForms, 'A' and 'B', associated with models 'C' and 'D' respectively. Model 'D' has a foreign key to model 'C', so an object of model 'C' must be created first. When the user submits form 'A', an object of 'C' is created. To pass along the id of the 'C' object, I'm using a URL like "/{{ object.id }}/". That way ModelForm 'B' knows which object of model 'C' should be associated with the object of model 'D'.
The problem I'm facing is that if I enter the URL "/{{ object.id }}/" directly, I get to see ModelForm 'B', which I don't want. What can I do?
|
Django:How to stop a form from showing up when user enters the direct url of the page containing the form?
| 0 | 0 | 0 | 55 |
11,786,318 |
2012-08-02T21:49:00.000
| 1 | 0 | 0 | 0 |
gstreamer,python-gstreamer
| 11,802,443 | 1 | true | 0 | 0 |
You can use appsrc. You can push chunks of your data into the appsrc element as needed.
| 1 | 0 | 0 |
The HTTP file and its contents are already downloaded and present in memory. I just have to pass the content to a decoder in GStreamer and play it. However, I am not able to find the connecting link between the two.
After reading the documentation, I understood that GStreamer uses souphttpsrc for downloading and parsing HTTP resources. But in my case, I have my own parser as well as my own file downloader. It takes the URL and returns the data in parts to be consumed by the decoder. I am not sure how to bypass souphttpsrc and use my parser instead, nor how to link it to the decoder.
Please let me know if anyone knows how this can be done.
|
How to hook custom file parser to Gstreamer Decoder?
| 1.2 | 0 | 1 | 241 |
11,787,941 |
2012-08-03T01:03:00.000
| 0 | 0 | 0 | 0 |
python,geolocation
| 11,788,127 | 6 | false | 0 | 0 |
I rediscovered another weather API which I don't like quite as much (Weather Underground), but it can optionally determine the location. I might use this if I can't get something like a geoiptool scraper to work.
| 1 | 3 | 0 |
Is there a way to get the computer's physical position using Python, preferably without an API, or with a free API? I've searched around a bit, and the only free API I've found is very, very inaccurate. I only need this to be somewhat accurate, as it's for getting local weather.
|
Get physical position of device with Python?
| 0 | 0 | 0 | 14,786 |
11,788,444 |
2012-08-03T02:29:00.000
| 7 | 0 | 1 | 0 |
python,list
| 11,788,481 | 3 | false | 0 | 0 |
Just like math:
[0] * 5 = [0] + [0] + [0] + [0] + [0], which is [0, 0, 0, 0, 0].
I think people would be more surprised if [0] + [0] suddenly became [[0], [0]].
For strings, tuples, and lists, + is a concatenation operator, and this multiplication behaves consistently for all of them.
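A quick interpreter session makes the element-repetition behaviour (and the aliasing trap if you do nest) concrete:

```python
print([0] * 5)        # [0, 0, 0, 0, 0] -- same as [0] + [0] + [0] + [0] + [0]

rows = [[0]] * 3      # nest first if you actually want sublists
print(rows)           # [[0], [0], [0]]

# caution: the three sublists are the *same* object, repeated
rows[0].append(1)
print(rows)           # [[0, 1], [0, 1], [0, 1]]
```

When you need independent sublists, use a comprehension such as [[0] for _ in range(3)] instead of multiplication.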
| 2 | 1 | 0 |
Why does [0] * 5 create a list [0, 0, 0, 0, 0], rather than [[0], [0], [0], [0], [0]]?
Doesn't the * operator duplicate [0] 5 times resulting in [[0], [0], [0], [0], [0]]?
|
creating list with * operator in python
| 1 | 0 | 0 | 1,775 |
11,788,444 |
2012-08-03T02:29:00.000
| 0 | 0 | 1 | 0 |
python,list
| 11,788,462 | 3 | true | 0 | 0 |
No, it repeats the list's elements: the list [0] has one element, 0 (not the list [0] itself), so repeating it yields [0, 0, 0, 0, 0] rather than [[0], [0], [0], [0], [0]].
| 2 | 1 | 0 |
Why does [0] * 5 create a list [0, 0, 0, 0, 0], rather than [[0], [0], [0], [0], [0]]?
Doesn't the * operator duplicate [0] 5 times resulting in [[0], [0], [0], [0], [0]]?
|
creating list with * operator in python
| 1.2 | 0 | 0 | 1,775 |
11,788,950 |
2012-08-03T03:40:00.000
| 1 | 0 | 1 | 0 |
python,numpy,python-import
| 11,788,967 | 2 | false | 0 | 0 |
You have to import modules in every file in which you use them. Does that answer your question?
| 1 | 4 | 1 |
I do not know the right way to import modules.
I have a main file which initializes the code, does some preliminary calculations etc.
I also have 5 functions f1, f2, ... f5. The main code and all functions need Numpy.
If I define all functions in the main file, the code runs fine.
(Importing with : import numpy as np)
If I put the functions in a separate file, I get an error:
Error : Global name 'linalg' is not defined.
What is the right way to import modules such that the functions f1 - f5 can access the Numpy functionality?
|
Importing Numpy into functions
| 0.099668 | 0 | 0 | 9,291 |
11,789,259 |
2012-08-03T04:26:00.000
| 1 | 0 | 1 | 0 |
python,algorithm,data-structures,trie,linguistics
| 11,789,317 | 4 | false | 0 | 0 |
Put all of your prefixes in a hashtable. Then take each word in B and look up all prefixes of it in the hashtable. Any hit you get indicates a match.
So the hashtable would contain "allow" and "apolog". For "apologize", you'd look up "a" then "ap", and so on, until you looked up "apolog" and found a match.
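A minimal version of this (the two prefixes come from the question; a real run would load all of FILE A with the trailing '*' stripped):

```python
prefixes = {"allow", "apolog"}  # FILE A entries with the '*' stripped

def matches(word):
    # look up every prefix of the word in the set; any hit is a match
    return any(word[:i] in prefixes for i in range(1, len(word) + 1))

print(matches("apologize"))                           # True  (hits "apolog")
print(matches("a"))                                   # False
print(matches("apologizetomenoworelseiwillkillyou"))  # True
```

Each word costs O(len(word)) constant-time set lookups, independent of how many tens of thousands of prefixes are loaded; bounding i by the length of the longest prefix in FILE A cuts it down further.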
| 1 | 0 | 0 |
This is a little different from most trie problems on stackoverflow (yes, I've spent time searching and reading), so please bear with me.
I have FILE A with words like: allow*, apolog*, etc. There are in total tens of thousands of such entries. And I have FILE B containing a body of text, with up to thousands of words. I want to be able to match words in my text in FILE B with words in FILE A.
Example:
FILE B's "apologize" would match FILE A's "apolog*"
FILE B's "a" would neither match "allow*" nor "apolog*"
FILE B's "apologizetomenoworelseiwillkillyou" would also match FILE A's "apolog*"
Could anyone suggest an algorithm/data structure (that is preferably do-able in python) that could help me in achieving this? The tries I've looked into seem to be more about matching prefixes to whole words, but here, I'm matching whole words to prefixes. Stemming algorithms are out of the question because they have fixed rules, whereas in this case my suffix can be anything. I do not want to iterate through my entire list in FILE A, because that would take too much time.
If this is confusing, I'm happy to clarify. Thanks.
|
Trie? Matching words with trailing characters in python
| 0.049958 | 0 | 0 | 782 |
11,789,917 |
2012-08-03T05:42:00.000
| 2 | 0 | 0 | 0 |
php,python,matplotlib
| 11,790,050 | 1 | true | 0 | 0 |
You could modify your Python script so it outputs an image (image/jpeg) directly instead of saving it to a file. Then use the <img> tag as normal, but point its src directly at the Python script. Your PHP wouldn't call the Python script at all; it would just include it as the src of the image.
| 1 | 2 | 1 |
I have a python script that can output a plot using matplotlib and command line inputs.
What I'm doing right now is making the script print the location/filename of the generated plot image, and when PHP sees it, it outputs an img tag to display it.
The python script deletes images that are older than 20 minutes when it runs. It seems like too much of a workaround, and i'm wondering if there's a better solution.
|
What's a good way to output matplotlib graphs on a PHP website?
| 1.2 | 0 | 0 | 2,549 |
11,791,368 |
2012-08-03T07:41:00.000
| 1 | 0 | 0 | 1 |
python,django,django-admin
| 24,193,778 | 3 | false | 1 | 0 |
I had the same problem. For me, django-admin.py was in ~/.local/bin.
That's because I ran pip install --user django.
| 1 | 3 | 0 |
I have django-admin.py in /usr/local/bin and I have tried all the help given on the web to make a symbolic link, but it still says django-admin.py: command not found.
I am trying to start of my first project in django :- django-admin.py startproject mysite.
No matter what I do I just keep on getting django-admin.py: command not found.
I am using ubuntu 11.10.
Thanks
|
django-admin.py: command not found
| 0.066568 | 0 | 0 | 6,301 |
11,792,129 |
2012-08-03T08:32:00.000
| 0 | 1 | 0 | 1 |
php,python
| 11,792,313 | 4 | false | 0 | 0 |
An alternative would be wrapping the string between <pre>...</pre> tags.
| 1 | 3 | 0 |
I have a PHP file that calls a script and prints the output like this:
$output=shell_exec('/usr/bin/python hello.py');
echo $output;
It prints:
b'total 16\ndrwx---r-x 2 oae users 4096 Jul 31 14:21 .\ndrwxr-x--x+ 9 oae root 4096 Jul 26 13:59 ..\n-rwx---r-x 1 oae users 90 Aug 3 11:22 hello.py\n-rwx---r-x 1 oae users 225 Aug 3 11:22 index.php\n'
but it should be like this;
total 16K
drwx---r-x 2 oae users 4.0K Jul 31 14:21 ./
drwxr-x--x+ 9 oae root 4.0K Jul 26 13:59 ../
-rwx---r-x 1 oae users 90 Aug 3 11:22 hello.py*
-rwx---r-x 1 oae users 225 Aug 3 11:22 index.php*
\n characters shouldn't be shown. How can I solve this?
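As an aside on the Python side: the b'...' wrapper is how Python 3 represents a bytes object, so decoding the output to str before printing removes it. A minimal sketch (the ls output below is shortened from the question; the \n rendering in the browser then still needs nl2br or a <pre> tag on the PHP side):

```python
raw = b'total 16\n-rwx---r-x 1 oae users 90 Aug  3 11:22 hello.py\n'
text = raw.decode('utf-8')  # a str: no b'...' wrapper, real newlines
print(text)
```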
|
Print python script output correctly in PHP
| 0 | 0 | 0 | 10,514 |
11,792,531 |
2012-08-03T08:58:00.000
| 1 | 0 | 0 | 1 |
python,ibm-mq,twisted,pymqi
| 11,794,331 | 1 | true | 0 | 0 |
If you're going to use this functionality a lot, then having a native Twisted implementation is probably worth the effort. A wrapper based on deferToThread will be less work, but it will also be harder to test and debug, perform less well, and have problems on certain platforms where Python threads don't work extremely well (eg FreeBSD).
The approach to take for a native Twisted implementation is probably to implement a protocol that can speak to MQ servers and give it a rich API for interacting with channels, queues, queue managers, etc, and then build a layer on top of that which abstracts the actual network connection away from the application (as I believe mqi/pymqi largely do).
| 1 | 0 | 0 |
I'm trying to work out how to approach building a "machine" to send and receive messages to WebSphere MQ, via Twisted. I want it to be as generic as possible, so I can reuse it for many different situations that interface with MQ.
I've used Twisted before, but many years ago now and I'm trying to resurrect the knowledge I once had...
The specific problem I'm having is how to implement the MQ IO using Twisted. There's a pymqi Python library that interfaces with MQ, and it provides all the interfaces I need. The MQ calls I need to implement are:
initiate a connection to a specific MQ server/port/channel/queue-manager/queue combination
take content and post it as a message to the desired queue
poll a queue and return the content of the next message in the queue
send a request to a queue manager to find the number of messages currently in a queue
All of these involve blocking calls to MQ.
As I'm intending to reuse the Twisted/MQ interface many times across a range of projects, should I be looking to implement the MQ IO as a Twisted protocol, as a Twisted transport, or just call the pymqi methods via deferToThread() calls? I realise this is a very broad question with possibly no definitive answer; I'm really after advice from those who may have encountered similar challenges before (i.e. working with queueing interfaces that will always block) and found a way that works well.
|
Using WebSphere MQ with Twisted
| 1.2 | 0 | 0 | 245 |
11,793,895 |
2012-08-03T10:27:00.000
| 1 | 0 | 1 | 0 |
python,vim,omnicomplete
| 11,794,082 | 1 | true | 0 | 0 |
Your settings won't be lost if you recompile vim: recompilation will simply create a new vim executable. If you are using a common Linux distribution, though, you might not need to compile anything: Archlinux, for example, bundles "vim compiled with +python" in the gvim package. Your distro might do something similar.
| 1 | 0 | 0 |
When I try to use omnicomplete in a .py file vim says that I need to compile vim with +python support. I already have a bunch of plugins downloaded in my vimfiles with pathogen so how do I recompile vim 7.3 with +python support without losing my settings? Thanks
|
recompile vim with +python
| 1.2 | 0 | 0 | 1,308 |
11,796,126 |
2012-08-03T12:53:00.000
| 0 | 1 | 0 | 0 |
php,python,apache,mod-vhost-alias
| 36,646,397 | 2 | false | 1 | 0 |
I faced the same situation. Initially I searched around on Google, but later I realised what was wrong and fixed it: I'm using the EC2 service in AWS with Ubuntu, and I created aliases for PHP and Python individually; now I can access both.
| 1 | 3 | 0 |
I currently run my own server "in the cloud" with PHP using mod_fastcgi and mod_vhost_alias. My mod_vhost_alias config uses a VirtualDocumentRoot of /var/www/%0/htdocs so that I can serve any domain that routes to my server's IP address out of a directory with that name.
I'd like to begin writing and serving some Python projects from my server, but I'm unsure how to configure things so that each site has access to the appropriate script processor.
For example, for my blog, dead-parrot.com, I'm running a PHP blog platform (Habari, not WordPress). But I'd like to run an app I've written in Flask on not-dead-yet.com.
I would like to enable Python execution with as little disruption to my mod_vhost_alias configuration as possible, so that I can continue to host new domains on this server simply by adding an appropriate directory. I'm willing to alter the directory structure, if necessary, but would prefer not to add additional, specific vhost config files for every new Python-running domain, since apart from being less convenient than my current setup with just PHP, it seems kind of hacky to have to name these earlier alphabetically to get Apache to pick them up before the single mod_vhost_alias vhost config.
Do you know of a way that I can set this up to run Python and PHP side-by-side as conveniently as I do just PHP? Thanks!
|
Can I run PHP and Python on the same Apache server using mod_vhost_alias and mod_wsgi?
| 0 | 1 | 0 | 6,266 |
11,796,333 |
2012-08-03T13:06:00.000
| 0 | 0 | 0 | 0 |
python,django
| 11,799,755 | 2 | true | 1 | 0 |
In Django, if an error occurs, it isn't actually propagated to the user automatically. Rather, if an error occurs, Django returns a 500 error page. If Django is in debug mode, it will give a traceback. If the other server is intelligent, it will realize that there is a 500 response instead of a 200. In Django you can then define a "500.html" in your root template directory to handle the errors (or use the default Django template). GLHF
| 1 | 0 | 0 |
In Django, whenever an error occurs, if the code is not inside a try block, an error page is raised. At that point (during the error), can we display a message instead of the error page?
What I am actually asking is: is there anything in Django (like signals) that gets activated during an error and prints that message? In my case it is an AJAX request, so when something is not inside a try block and it still raises an error, it should at least send back an error message (to the other server which made the AJAX call to our server) saying "error occurred".
|
python Django custom error msg for any error (apart from try,except)
| 1.2 | 0 | 0 | 480 |
11,796,474 |
2012-08-03T13:14:00.000
| 9 | 0 | 1 | 0 |
python,interactive
| 11,796,515 | 4 | false | 0 | 0 |
Use a debugger and add breakpoints. Do you use an IDE? All the major IDEs have debugger support. From the CLI, you can use pdb.
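A minimal sketch of the pdb suggestion (the function and the DEBUG flag are made up for illustration): with DEBUG set to True, execution pauses at pdb.set_trace(), you can inspect any variable at the prompt, and then type c to continue:

```python
import pdb

DEBUG = False  # flip to True to drop into the interactive debugger

def compute(values):
    total = 0
    for v in values:
        if DEBUG:
            pdb.set_trace()  # pauses here; inspect `v` and `total`, then `c` to continue
        total += v
    return total

print(compute([1, 2, 3]))  # 6
```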
| 1 | 13 | 0 |
I need to run my Python script as usual, but I want to stop execution on a specific line and start interactive mode.
In other words, I want to be able to check the value of all my variables at that point, and continue myself from there on python's command line.
How can I do this?
|
start interactive mode on a specific script line
| 1 | 0 | 0 | 5,133 |
11,800,219 |
2012-08-03T17:07:00.000
| 4 | 0 | 0 | 0 |
python,python-2.7,bottle
| 11,800,422 | 2 | true | 1 | 0 |
In order for a webbrowser to be able to download and render the css or image, it will either have to be part of your page (where people can view it by viewing the source of the page) or accessible at a URL.
So if you're trying to get around people being able to look at just your css or just your image, the answer is that there's no way around it.
| 1 | 1 | 0 |
How would I go about linking CSS and images to a template without routing them through bottle (@route('/image/') or @route('/css/')) and using a static_file return? Because I am unable to link CSS normally (it can't find the css/image), and if I do it through static_file, anyone can go to that link and view the css/image (IE www.mysite.com/css/css.css or www.mysite.com/image/image.png). Is there any way to get around this issue?
|
python bottle css image and css routing
| 1.2 | 0 | 0 | 1,803 |
11,801,549 |
2012-08-03T18:45:00.000
| 1 | 0 | 1 | 0 |
python
| 11,801,678 | 3 | false | 0 | 0 |
The terms "data type" and "class" are synonymous in Python, and they are both correct for the examples you gave. Unlike some other languages, there are no simple types, everything (that you can point to with a variable) in Python is an object. The term "data structure" on the other hand should probably be reserved for container types, for example sets, tuples, dictionaries or lists.
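A quick illustration of the point that the built-in types are classes and every value is an object:

```python
# Built-in "data types" are classes; values are instances of them.
print(type(42))                 # <class 'int'>
print(type([1, 2, 3]))          # <class 'list'>
print(isinstance(42, object))   # True -- everything is an object
```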
| 2 | 2 | 0 |
Hi all, I am new to Python programming. I have referred to different tutorials on Python; some authors say that in Python, numbers (int, float, complex), list, set, tuple, dictionary, and string are data types, some say they are data structures, and a few say classes. I am confused about which is correct.
I'm doing an essay on Python and found this statement on a random site; just wondering if anyone could clarify and justify your answer.
|
python data types are classes or data structures?
| 0.066568 | 0 | 0 | 5,224 |
11,801,549 |
2012-08-03T18:45:00.000
| 2 | 0 | 1 | 0 |
python
| 11,802,085 | 3 | false | 0 | 0 |
A "data type" is a description of a kind of data: what kinds of values can be an instance of that kind, and what kind of operations can be done on them.
A "class" is one way of representing a data type (although not the only way), treating the operations on the type as "methods" on the instances of the type (called "objects"). This is a general term across all class-based languages. But Python also has a specific meaning for "class": something defined by the class statement, or something defined in built-in/extension code that meets certain requirements, is a class.
So, arbitrary-sized integers and mapping dictionaries are data types. In Python, they're represented by the built-in classes int and dict.
A "data structure" is a way of organizing data for efficient or easy access. This isn't directly relevant to data types. In many languages (like C++ or Java), defining a new class requires you to tell the compiler how an instance's members are laid out in memory, and things like that, but in Python you just construct objects and add members to them and the interpreter figures out how to organize them. (There are exceptions that come up when you're building extension modules or using ctypes to build wrapper classes, but don't worry about that.)
Things get blurry when you get to higher-level abstract data structures (like pointer-based nodes) and lower-level abstract data types (like order-preserving collection of elements that can do constant-time insertion and deletion at the head). Is a linked list a data type that inherently requires a certain data structure, or a data structure that defines an obvious data type, or what? Well, unless you major in computer science in college, the answer to that isn't really going to make much difference, as long as you understand the question.
So, mapping dictionaries are data types, but they're also abstract data structures—and, under the covers, Python's dict objects are built from a specific concrete data structure (open-chained hash table with quadratic probing) which is still partly abstract (each bucket contains a duck-typed value).
| 2 | 2 | 0 |
Hi all, I am new to Python programming. I have referred to different tutorials on Python; some authors say that in Python, numbers (int, float, complex), list, set, tuple, dictionary, and string are data types, some say they are data structures, and a few say classes. I am confused about which is correct.
I'm doing an essay on Python and found this statement on a random site; just wondering if anyone could clarify and justify your answer.
|
python data types are classes or data structures?
| 0.132549 | 0 | 0 | 5,224 |
11,802,437 |
2012-08-03T19:56:00.000
| 2 | 0 | 1 | 0 |
python,cpu
| 11,802,514 | 3 | false | 0 | 0 |
If your Python application is not multi-threaded, it is not going to execute using more than one core of your CPU.
| 2 | 3 | 0 |
I've used Python to write a piece of code that can be very time-consuming (it contains a lot of recursion). I was testing the code's runtime, and I noticed that no matter how complicated the code becomes, Python never consumes the full computing power of my CPU. I'm running Python on Windows 7 with an Intel dual-core CPU, and Python never uses more than one core. Basically one core is running while the other one is idle.
Can someone explain a bit of what's happening in the background? Thanks in advance!
|
Python and CPU usage
| 0.132549 | 0 | 0 | 2,226 |
11,802,437 |
2012-08-03T19:56:00.000
| 6 | 0 | 1 | 0 |
python,cpu
| 11,802,527 | 3 | true | 0 | 0 |
Your script is running in a single process, so runs on a single processor. The Windows scheduler will probably move it from one core to another quite frequently, but it can't run a single process in more than one place at a time.
If you want to use more of your CPU's grunt, you need to figure out how to split your workload so you can run multiple instances of your code in multiple processes.
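If the workload can be split, here is a minimal sketch of running it across multiple processes with the standard library (the work function is a made-up stand-in for a CPU-heavy recursive computation):

```python
from multiprocessing import Pool

def work(n):
    # stand-in for an expensive computation on one piece of the workload
    return sum(i * i for i in range(n))

if __name__ == '__main__':
    with Pool(2) as pool:  # one worker process per core
        results = pool.map(work, [10000, 20000, 30000, 40000])
    print(results)
```

Each worker is a separate process, so the OS can schedule them on different cores, which is exactly what a single-process script cannot get.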
| 2 | 3 | 0 |
I've used Python to write a piece of code that can be very time-consuming (it contains a lot of recursion). I was testing the code's runtime, and I noticed that no matter how complicated the code becomes, Python never consumes the full computing power of my CPU. I'm running Python on Windows 7 with an Intel dual-core CPU, and Python never uses more than one core. Basically one core is running while the other one is idle.
Can someone explain a bit of what's happening in the background? Thanks in advance!
|
Python and CPU usage
| 1.2 | 0 | 0 | 2,226 |
11,802,505 |
2012-08-03T20:01:00.000
| 0 | 1 | 1 | 0 |
c#,c++,python,serialization,cross-language
| 11,803,274 | 7 | false | 0 | 0 |
You can wrap your business logic as a web service and call it from all three languages - just a single implementation.
| 4 | 17 | 0 |
We develop a distributed system built from components implemented in different programming languages (C++, C# and Python) and communicating one with another across a network.
All the components in the system operate with the same business concepts and communicate one with another also in terms of these concepts.
As a result, we heavily struggle with the following two challenges:
Keeping the representation of our business concepts in these three languages in sync
Serialization / deserialization of our business concepts across these languages
A naive solution for this problem would be just to define the same data structures (and the serialization code) three times (for C++, C# and Python).
Unfortunately, this solution has serious drawbacks:
It creates a lot of “code duplication”
It requires a huge amount of cross-language integration tests to keep everything in sync
Another solution we considered is based on the frameworks like ProtoBufs or Thrift. These frameworks have an internal language, in which the business concepts are defined, and then the representation of these concepts in C++, C# and Python (together with the serialization logic) is auto-generated by these frameworks.
While this solution doesn’t have the above problems, it has another drawback: the code generated by these frameworks couples together the data structures representing the underlying business concepts and the code needed to serialize/deserialize these data-structures.
We feel that this pollutes our code base – any code in our system that uses these auto-generated classes is now “familiar” with this serialization/deserialization logic (a serious abstraction leak).
We can work around it by wrapping the auto-generated code by our classes / interfaces, but this returns us back to the drawbacks of the naive solution.
Can anyone recommend a solution that gets around the described problems?
|
How to share business concepts across different programming languages?
| 0 | 0 | 0 | 1,432 |
11,802,505 |
2012-08-03T20:01:00.000
| 2 | 1 | 1 | 0 |
c#,c++,python,serialization,cross-language
| 11,807,691 | 7 | false | 0 | 0 |
All the components in the system operate with the same business concepts and communicate
one with another also in terms of these concepts.
If I understand you correctly, you have split your system up into different parts communicating via well-defined interfaces. But your interfaces share data structures you call "business concepts" (hard to understand without seeing an example), and since those interfaces have to be built for all three of your languages, you have problems keeping them "in sync".
When keeping interfaces in sync becomes a problem, it seems obvious that your interfaces are too broad. There are different possible reasons for that, with different solutions.
Possible Reason 1 - you overgeneralized your interface concept. If that's the case, redesign here: throw generalization over board and create interfaces which are only as broad as they have to be.
Possible reason 2: parts written in different languages are not dealing with separate business cases, you may have a "horizontal" partition between them, but not a vertical. If that's the case, you cannot avoid the broadness of your interfaces.
Code generation may be the right approach here if reason 2 is your problem. If existing code generators don't suit your needs, why don't you just write your own? Define the interfaces for example as classes in C#, introduce some meta attributes, and use reflection in your code generator to extract the information again when generating the corresponding C++, Python and also the "real-to-be-used" C# code. If you need different variants with or without serialization, generate them too. A working generator should not be more effort than a couple of days (YMMV depending on your requirements).
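A toy illustration of the roll-your-own-generator idea, sketched in Python rather than C# reflection (the schema, names, and output format are all made up; a real generator would emit one backend per target language):

```python
# Hypothetical shared schema: class name -> {field name: type name}.
schema = {'Order': {'id': 'int', 'amount': 'float'}}

def gen_python(schema):
    """Emit Python class definitions from the schema (one backend per language)."""
    lines = []
    for cls, fields in schema.items():
        lines.append(f'class {cls}:')
        args = ', '.join(f'{f}: {t}' for f, t in fields.items())
        lines.append(f'    def __init__(self, {args}):')
        for f in fields:
            lines.append(f'        self.{f} = {f}')
    return '\n'.join(lines)

code = gen_python(schema)
print(code)
```

The same schema dict would feed a gen_cpp and gen_csharp backend, so the three representations can never drift apart.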
| 4 | 17 | 0 |
We develop a distributed system built from components implemented in different programming languages (C++, C# and Python) and communicating one with another across a network.
All the components in the system operate with the same business concepts and communicate one with another also in terms of these concepts.
As a result, we heavily struggle with the following two challenges:
Keeping the representation of our business concepts in these three languages in sync
Serialization / deserialization of our business concepts across these languages
A naive solution for this problem would be just to define the same data structures (and the serialization code) three times (for C++, C# and Python).
Unfortunately, this solution has serious drawbacks:
It creates a lot of “code duplication”
It requires a huge amount of cross-language integration tests to keep everything in sync
Another solution we considered is based on the frameworks like ProtoBufs or Thrift. These frameworks have an internal language, in which the business concepts are defined, and then the representation of these concepts in C++, C# and Python (together with the serialization logic) is auto-generated by these frameworks.
While this solution doesn’t have the above problems, it has another drawback: the code generated by these frameworks couples together the data structures representing the underlying business concepts and the code needed to serialize/deserialize these data-structures.
We feel that this pollutes our code base – any code in our system that uses these auto-generated classes is now “familiar” with this serialization/deserialization logic (a serious abstraction leak).
We can work around it by wrapping the auto-generated code by our classes / interfaces, but this returns us back to the drawbacks of the naive solution.
Can anyone recommend a solution that gets around the described problems?
|
How to share business concepts across different programming languages?
| 0.057081 | 0 | 0 | 1,432 |
11,802,505 |
2012-08-03T20:01:00.000
| 0 | 1 | 1 | 0 |
c#,c++,python,serialization,cross-language
| 11,807,498 | 7 | false | 0 | 0 |
I would accomplish that by using some kind of meta-information about your domain entities (either XML or DSL, depending on complexity) and then go for code generation for each language. That would reduce (manual) code duplication.
| 4 | 17 | 0 |
We develop a distributed system built from components implemented in different programming languages (C++, C# and Python) and communicating one with another across a network.
All the components in the system operate with the same business concepts and communicate one with another also in terms of these concepts.
As a result, we heavily struggle with the following two challenges:
Keeping the representation of our business concepts in these three languages in sync
Serialization / deserialization of our business concepts across these languages
A naive solution for this problem would be just to define the same data structures (and the serialization code) three times (for C++, C# and Python).
Unfortunately, this solution has serious drawbacks:
It creates a lot of “code duplication”
It requires a huge amount of cross-language integration tests to keep everything in sync
Another solution we considered is based on the frameworks like ProtoBufs or Thrift. These frameworks have an internal language, in which the business concepts are defined, and then the representation of these concepts in C++, C# and Python (together with the serialization logic) is auto-generated by these frameworks.
While this solution doesn’t have the above problems, it has another drawback: the code generated by these frameworks couples together the data structures representing the underlying business concepts and the code needed to serialize/deserialize these data-structures.
We feel that this pollutes our code base – any code in our system that uses these auto-generated classes is now “familiar” with this serialization/deserialization logic (a serious abstraction leak).
We can work around it by wrapping the auto-generated code by our classes / interfaces, but this returns us back to the drawbacks of the naive solution.
Can anyone recommend a solution that gets around the described problems?
|
How to share business concepts across different programming languages?
| 0 | 0 | 0 | 1,432 |
11,802,505 |
2012-08-03T20:01:00.000
| 0 | 1 | 1 | 0 |
c#,c++,python,serialization,cross-language
| 11,804,524 | 7 | false | 0 | 0 |
You could model these data structures using tools like a UML modeler (Enterprise Architect comes to mind as it can generate code for all 3.) and then generate code for each language directly from the model.
Though I would look closely at a previous comment about using XSD.
| 4 | 17 | 0 |
We develop a distributed system built from components implemented in different programming languages (C++, C# and Python) and communicating one with another across a network.
All the components in the system operate with the same business concepts and communicate one with another also in terms of these concepts.
As a result, we heavily struggle with the following two challenges:
Keeping the representation of our business concepts in these three languages in sync
Serialization / deserialization of our business concepts across these languages
A naive solution for this problem would be just to define the same data structures (and the serialization code) three times (for C++, C# and Python).
Unfortunately, this solution has serious drawbacks:
It creates a lot of “code duplication”
It requires a huge amount of cross-language integration tests to keep everything in sync
Another solution we considered is based on the frameworks like ProtoBufs or Thrift. These frameworks have an internal language, in which the business concepts are defined, and then the representation of these concepts in C++, C# and Python (together with the serialization logic) is auto-generated by these frameworks.
While this solution doesn’t have the above problems, it has another drawback: the code generated by these frameworks couples together the data structures representing the underlying business concepts and the code needed to serialize/deserialize these data-structures.
We feel that this pollutes our code base – any code in our system that uses these auto-generated classes is now “familiar” with this serialization/deserialization logic (a serious abstraction leak).
We can work around it by wrapping the auto-generated code by our classes / interfaces, but this returns us back to the drawbacks of the naive solution.
Can anyone recommend a solution that gets around the described problems?
|
How to share business concepts across different programming languages?
| 0 | 0 | 0 | 1,432 |
11,805,309 |
2012-08-04T01:52:00.000
| 3 | 0 | 1 | 0 |
python,file,file-io,io,filesystems
| 11,805,422 | 2 | true | 0 | 0 |
One thing you could try is calculating intersections of the files on a chunk-by-chunk basis (i.e., read x-bytes into memory from each, calculate their intersections, and continue, finally calculating the intersection of all intersections).
Or, you might consider using some "heavy-duty" libraries to help you. Consider looking into PyTables (with HDF storage)/using numpy for calculating intersections. The benefit there is that the HDF layer should help deal with not keeping the entire array structure in memory all at once---though I haven't tried any of these tools before, it seems like they offer what you need.
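A slight variant of the chunk-by-chunk idea, sketched in Python: hold only the running intersection in memory and stream every other file line by line, so a file never has to be materialized as a whole set after the first one (the demo files are throwaway examples):

```python
import os
import tempfile

def read_ints(path):
    # stream one integer per line, never loading the whole file
    with open(path) as f:
        for line in f:
            yield int(line)

def intersect_files(paths):
    # only the first file's set (and the shrinking result) is kept in memory
    result = set(read_ints(paths[0]))
    for path in paths[1:]:
        result = {n for n in read_ints(path) if n in result}
    return result

# tiny demo with throwaway files
d = tempfile.mkdtemp()
a, b = os.path.join(d, 'a.txt'), os.path.join(d, 'b.txt')
with open(a, 'w') as f:
    f.write('1\n2\n3\n')
with open(b, 'w') as f:
    f.write('2\n3\n4\n')
print(intersect_files([a, b]))  # {2, 3}
```

Putting the smallest file first keeps the in-memory set as small as possible, since the intersection can only shrink from there.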
| 1 | 4 | 0 |
I have really big collection of files, and my task is to open a couple of random files from this collection treat their content as a sets of integers and make an intersection of it.
This process is quite slow due to the long time it takes to read files from disk into memory, so I'm wondering whether this process of reading from files can be sped up by rewriting my program in some "quick" language. Currently I'm using Python, which could be inefficient for this kind of job. (I could implement tests myself if I knew some other languages besides Python and JavaScript...)
Also, will putting all the data into a database help? The files won't fit in RAM anyway, so it will be reading from disk again, only with database-related overhead.
The content of the files is a list of long integers. 90% of the files are quite small, less than 10-20MB, but the remaining 10% are around 100-200MB. As input I have filenames, and I need to read each of the files and output the integers present in every file given.
I've tried to put this data in MongoDB, but that was as slow as the plain file-based approach, because I tried to use Mongo's index capabilities and Mongo does not store indexes in RAM.
Now I just cut out the 10% of biggest files and store the rest in Redis, sometimes accessing those big files. This is obviously a temporary solution, because my data grows and the amount of RAM available does not.
|
Is speed of file opening/reading language dependent?
| 1.2 | 1 | 0 | 213 |
11,805,709 |
2012-08-04T03:36:00.000
| 0 | 0 | 0 | 0 |
python,sql-server,postgresql,blob
| 15,846,639 | 1 | false | 0 | 0 |
What you need to understand first is that the interfaces at the db level are likely to be different. Your best option is to write an abstraction layer for the blobs (and maybe publish it open source for the dbs you want to support).
On the PostgreSQL side you need to figure out whether you want to go with bytea or lob. These are very different and have different features and limitations. If you are enterprising, you might at least build support into the spec for selecting between them. In general bytea is better for smaller files, while lob has more management overhead but can both support larger files and supports chunking, seeking etc.
| 1 | 0 | 0 |
I have some SQL Server tables that contain Image data types.
I want to make it somehow usable in PostgreSQL. I'm a Python programmer, so I have a lot to learn about this topic. Help?
|
How can I select and insert BLOB between different databases using python?
| 0 | 1 | 0 | 297 |
11,805,983 |
2012-08-04T04:49:00.000
| 2 | 0 | 0 | 0 |
matlab,python-3.x,heatmap,color-mapping
| 11,809,642 | 2 | false | 0 | 0 |
One way to do this would be:
1) Load in the floor plan image with Matlab or NumPy/matplotlib.
2) Use some built-in edge detection to locate the edge pixels in the floor plan.
3) Form a big list of (x,y) locations where an edge is found in the floor plan.
4) Plot your heat map
5) Scatterplot the points of the floor plan as an overlay.
It sounds like you know how to do each of these steps individually, so all you'll need to do is look up some stuff on how to overlay plots onto the same axis, which is pretty easy in both Matlab and matplotlib.
If you're unfamiliar, the right commands to look at are things like meshgrid and surf, possibly contour, and their Python equivalents. I think Matlab has a built-in for Canny edge detection. I believe this is more difficult in Python, but if you use the PIL library, the Mahotas library, the scikits.image library, and a few others tailored for image manipulation, it's not too bad. SciPy may actually have an edge filter by now though, so check there first.
The only sticking point will be if your (x,y) data for the temperature are not going to line up with the (x,y) pixel locations in the image. In that case, you'll have to play around with some x-scale factor and y-scale factor to transform your heat map's coordinates into pixel coordinates first, and then plot the heat map, and then the overlay should work.
This is a fairly low-tech way to do it; I assume you just need a quick and dirty plot to illustrate how something's working. This method does have the advantage that you can change the style of the floorplan points easily, making them larger, thicker, thinner, different colors, or transparent, depending on how you want it to interact with the heat map. However, to do this for real, use GIMP, Inkscape, or Photoshop and overlay the heatmap onto the image after the fact.
| 1 | 3 | 1 |
I want to generate a heat map image of a floor. I have the following things:
A black & white .png image of the floor
A three column array stored in Matlab.
-- The first two columns indicate the X & Y coordinates of the floorplan image
-- The third coordinate denotes the "temperature" of that particular coordinate
I want to generate a heat map of the floor that will show the "temperature" strength in those coordinates. However, I want to display the heat map on top of the floor plan so that the viewers can see which rooms lead to which "temperatures".
Is there any software that does this job? Can I use Matlab or Python to do this?
Thanks,
Nazmul
|
Heat map generator of a floor plan image
| 0.197375 | 0 | 0 | 4,036 |
11,806,700 |
2012-08-04T07:12:00.000
| 4 | 0 | 0 | 0 |
python,postgresql,heroku,hosting
| 11,806,820 | 3 | true | 1 | 0 |
Difficult to answer without testing it out but you might want to answer these questions:
1) How expensive is the diff operation? Run a test or compute the complexity. If diff operation is on really large files or rapidly changing files, you might want to modify the algorithm. Storing diffs doesn't seem like a great solution if the files are large, change little or change rapidly over time.
2) How many times would you need to generate the same diff with the same files and is there a time bound associated with this?
- If the same diff is generated over and over again in a short span of time, you might want to cache it and not write it to a database. If the diff is accessed sporadically over time (every few days or months), you might want to store it, after weighing the operation cost from point 1 above.
You might benchmark using costs on Amazon Web Services. Again you have choices there. You could just use a single EC2 instance for everything or split the workflow against RDS, EC2 and S3 and then analyze the cost. Depends on what level of scale you desire.
| 2 | 3 | 0 |
This is a fairly abstract question, I hope it is within bounds.
I'm about 5 months into my coding career in web development. I've found that there's often a tension between CPU and storage resources. Put simply, you can use less of one and more of the other, or vice versa (then throw in the speed consideration). I'm now getting to the point of deploying my first app for production, so this balance is now a matter of real dollars and cents. The thing is this: I really don't have any idea what kind of balance I should be looking for.
Here's some salient examples that might illuminate the balance to be struck in different case scenarios.
Background
I am working on an app that does a lot of diffs between text. Users will call on pages that contain diffs displayed in HTML. A lot.
First Case
Should I run a diff each time a page is displayed, or should I run the diff once, store it, and call it each time a page is displayed?
Second Case
I have coded up an algorithm that summarises diffs. It's about 110 lines of code, and it uses 4 or 5 loops and subloops. Again, should I run this once and store the results, so that they can be called on later, or should I just run the algorithm each time a page is displayed?
Would also love to hear your views on the best tools to use to quantify the balance.
|
Which is more expensive (in $): database memory or processing power?
| 1.2 | 0 | 0 | 612 |
11,806,700 |
2012-08-04T07:12:00.000
| 2 | 0 | 0 | 0 |
python,postgresql,heroku,hosting
| 11,809,281 | 3 | false | 1 | 0 |
My suggestion would be to store the cache in DB-tables, not in memory. If the entries are referenced a lot, they will be in memory (in disk buffers). The advantage of this approach is that the diffs will be competing for a place in core with the other DB tables, which is always smarter than pre-allocating (and managing) XXX bytes of memory.
An additional advantage is that maintaining {hitcount, date of access, ...} for the cache entries is relatively easy, and its management can all be done in SQL.
And remember: disk space is essentially free. It is very easy to have an XXX GB cache on disk while effectively using only XXX MB of it. The hard hitters will be in memory while the long tail will sit on disk. And it is always possible to grow or shrink the cache.
Cost estimate for the uncached version:
I/O + buffer memory cost for 2 files
CPU + memory cost for the diff operation
buffer memory for the result.
Cost estimate for the cached version:
I/O + to fetch the diff
CPU + memory for the query
buffer memory for the result
If you compare the two:
the uncached version has a larger I/O cost (given the diff is smaller than the sum of the two files)
The uncached version always has a larger memory footprint
The query cost could be smaller than diff-execution cost. Or it could be larger...
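To make the trade-off concrete, here is a minimal sketch of the DB-backed cache described above, using SQLite from the standard library. The table and column names (diff_cache, hitcount) and the key scheme are illustrative assumptions, not part of the original answer:

```python
import sqlite3

# Disk-backed diff cache; an in-memory DB is used here just for the sketch.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE diff_cache (
        key      TEXT PRIMARY KEY,   -- e.g. hash of the two input files
        diff     TEXT,
        hitcount INTEGER DEFAULT 0   -- cheap bookkeeping, done in SQL
    )
""")

def get_diff(key, compute):
    """Return the cached diff for `key`, computing and storing it on a miss."""
    row = conn.execute(
        "SELECT diff FROM diff_cache WHERE key = ?", (key,)
    ).fetchone()
    if row is not None:
        conn.execute(
            "UPDATE diff_cache SET hitcount = hitcount + 1 WHERE key = ?",
            (key,),
        )
        return row[0]
    diff = compute()  # the expensive diff operation runs only on a miss
    conn.execute(
        "INSERT INTO diff_cache (key, diff) VALUES (?, ?)", (key, diff)
    )
    return diff
```

Because the hot entries end up in the database's buffer cache anyway, this gets most of the benefit of an in-memory cache without pre-allocating memory for it.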
| 2 | 3 | 0 |
This is a fairly abstract question, I hope it is within bounds.
I'm about 5 months into my coding career in web development. I've found that there's often a tension between CPU and storage resources. Put simply, you can use less of one and more of the other, or vice versa (then throw in the speed consideration). I'm now getting to the point of deploying my first app for production, so this balance is now a matter of real dollars and cents. The thing is this: I really don't have any idea what kind of balance I should be looking for.
Here's some salient examples that might illuminate the balance to be struck in different case scenarios.
Background
I am working on an app that does a lot of diffs between text. Users will call on pages that contain diffs displayed in HTML. A lot.
First Case
Should I run a diff each time a page is displayed, or should I run the diff once, store it, and call it each time a page is displayed?
Second Case
I have coded up an algorithm that summarises diffs. It's about 110 lines of code, and it uses 4 or 5 loops and subloops. Again, should I run this once and store the results, so that they can be called on later, or should I just run the algorithm each time a page is displayed?
Would also love to hear your views on the best tools to use to quantify the balance.
|
Which is more expensive (in $): database memory or processing power?
| 0.132549 | 0 | 0 | 612 |
11,806,919 |
2012-08-04T07:54:00.000
| 0 | 0 | 1 | 0 |
python,regex
| 11,806,971 | 2 | false | 0 | 0 |
Try something like e.g. (?:(?:expression){3})+ to find all multiples of three of the expression. If the expression is shorter, you could also just write it as often as you want.
If you want to match exact duplications, try something like e.g. (?:(expression)\1{2})+ for multiples of three. Note that this may require backtracking if the expression is non-trivial and thus may be slow.
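As a concrete sketch of the question's case (accepting a string only when it is a whole-number multiple of 'aa'): using re.fullmatch with a grouped quantifier, rather than the findall call from the question, tests the entire string at once:

```python
import re

# The string matches only if it consists of one or more repetitions of "aa",
# with nothing left over.
pattern = re.compile(r"(?:aa)+")

def is_multiple_of_aa(s):
    return pattern.fullmatch(s) is not None
```

Here 'aaaa' matches while 'aaa' does not, because fullmatch anchors the pattern to both ends of the string.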
| 1 | 1 | 0 |
I know this is probably pretty basic, but I'm trying to create a regex expression that will only match a certain multiple of a group of characters. For example, re.findall(expression, 'aaaa') will return 'aaaa' but re.findall(expression, 'aaa') will return 'aa', where expression is some regex that involves the pair aa. It will only return the entire string if the entire string is some integer multiple of 'aa'. Any ideas?
|
python regex matching a multiple of an expression
| 0 | 0 | 0 | 178 |
11,807,565 |
2012-08-04T09:44:00.000
| 1 | 0 | 0 | 0 |
javascript,python,django
| 11,808,220 | 1 | true | 1 | 0 |
I think the best way would be to store this data in a database, for a couple of reasons:
You will be able to perform some queries on this data, like "give me all points in the view port" or "give me all points that are 5km from some other point" - even if you don't need it now it might be very useful in the future, especially if you think about having around 1000 points
There are wonderful utils to keep coordinates in the database that integrate very well with Django - you should definitely check postgis and geodjango
You are using Django, so you probably have some other data in the database and it's nice to have everything in one place
Unless it is some kind of static data that you'd like to display on the map that is not likely to change, it doesn't feel right to keep it anywhere else than in the database.
If for some reason you don't want to use any database (even though you can always store this data in a file with sqlite) you can also try storing it in some python object and then send them to js (so the second option), and the third one I think is the worst - you are not able to make any operation with this data outside javascript, it would be very hard to read or debug (syntax errors for example).
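Whichever storage you pick, the hand-off to JavaScript is usually just JSON. Here is a minimal sketch with the standard library (in a real Django view you would return this via JsonResponse or HttpResponse); the marker names and fields are made up for illustration:

```python
import json

# Hypothetical marker data as it might come out of the database.
markers = [
    {"name": "Office", "lat": 52.2297, "lng": 21.0122},
    {"name": "Warehouse", "lat": 52.4064, "lng": 16.9252},
]

# Serialize for the AJAX response; the JS side parses this
# and drops one map marker per entry.
payload = json.dumps(markers)
```

This shape works the same whether there are 50 markers or 1000; at the larger scale the database-backed option simply lets you filter (e.g. to the current viewport) before serializing.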
Hth
| 1 | 2 | 0 |
So... I have used a map in a project which needs coordinates to set markers to different positions. There are many options available to me to get the coordinates.
Store the coordinates in a database and use django view to get and forward the coordinates to the javascript function using an ajax response.
Store it into a python list or dictionary and send the data when needed to that javascript func.
Hard code the coordinates in a HTML tag attribute and get them via javascript and then set the marker.
Use files and get data through file I/O in django view and forward that to the javascript function.
I want to know which of these techniques is efficient for about 50 set of coordinates and which one will be more sufficient if my set increases to about 1000?
If you have a better way to do this...please share it..
Thanks
|
Fastest way to get map coordinates in a javascript function from Django
| 1.2 | 0 | 0 | 391 |
11,809,438 |
2012-08-04T14:39:00.000
| 0 | 0 | 1 | 0 |
python,concurrency,process,monitor,sharing
| 11,809,544 | 2 | false | 0 | 0 |
shared memory between processes is usually a poor idea; when calling os.fork(), the operating system marks all of the memory used by the parent and inherited by the child as copy-on-write; if either process attempts to modify a page, it is instead copied to a new location that is not shared between the two processes.
This means that your usual threading primitives (locks, condition variables, et-cetera) are not useable for communicating across process boundaries.
There are two ways to resolve this; The preferred way is to use a pipe, and serialize communication on both ends. Brian Cain's answer, using multiprocessing.Queue, works in this exact way. Because pipes do not have any shared state, and use a robust ipc mechanism provided by the kernel, it's unlikely that you will end up with processes in an inconsistent state.
The other option is to allocate some memory in a special way so that the OS will allow you to use shared memory. The most natural way to do that is with mmap. CPython won't use shared memory for native Python objects, though, so you would still need to sort out how you will use this shared region. A reasonable library for this is numpy, which can map the untyped binary memory region into useful arrays of some sort. Shared memory is much harder to work with in terms of managing concurrency, though, since there's no simple way for one process to know how another process is accessing the shared region. The only time this approach makes much sense is when a small number of processes need to share a large volume of data, since shared memory can avoid copying the data through pipes.
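A minimal sketch of the preferred pipe-based approach, using multiprocessing.Queue so that all communication is serialized across the process boundary (the function and message here are illustrative, and the sketch assumes a fork-capable platform such as Linux):

```python
import multiprocessing as mp

def worker(queue):
    # The child serializes its message through the pipe-backed queue;
    # no memory is shared, so there is no inconsistent shared state.
    queue.put("hello from child")

if __name__ == "__main__":
    q = mp.Queue()
    p = mp.Process(target=worker, args=(q,))
    p.start()
    msg = q.get()   # blocks until the child has sent something
    p.join()
    print(msg)
```

The same queue can carry any picklable object, so a "monitor"-style coordinator process can receive requests from workers and reply on per-worker queues.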
| 1 | 2 | 0 |
I'm new here and I'm Italian (forgive me if my English is not so good).
I am a computer science student and I am working on a concurrent program project in Python.
We should use monitors, a class with its methods and data (such as condition variables). An instance (object) of this monitor class should be shared across all processes we have (created by os.fork or by the multiprocessing module), but we don't know how to do it. It is simpler with threads because they already share memory, but we MUST use processes. Is there any way to make this object (monitor) shareable across all processes?
Hoping I'm not saying nonsense... thanks a lot to everyone for your attention.
Waiting answers.
Lorenzo
|
Monitor concurrency (sharing object across processes) in Python
| 0 | 0 | 0 | 619 |
11,809,856 |
2012-08-04T15:38:00.000
| 6 | 0 | 0 | 0 |
python,graph,networkx,igraph
| 11,810,286 | 5 | false | 0 | 0 |
A very simple way to approach (and solve entirely) this problem is to use the adjacency matrix A of the graph. The (i,j)-th element of A^L is the number of walks of length L between nodes i and j. So if you sum these over all j, keeping i fixed at n, you get all walks of length L emanating from node n.
This will also unfortunately count the cyclic paths. These, happily, can be found from the element A^L(n,n), so just subtract that.
So your final answer is: Σ_j A^L(n, j) - A^L(n, n).
Word of caution: say you're looking for paths of length 5 from node 1: this calculation will also count the path with small cycles inside like 1-2-3-2-4, whose length is 5 or 4 depending on how you choose to see it, so be careful about that.
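Here is a quick sketch of that computation with NumPy on a toy path graph 0 - 1 - 2 (the adjacency matrix is made up for illustration). Per the caution above, this counts walks, which may revisit nodes:

```python
import numpy as np

# Adjacency matrix of the undirected path graph 0 - 1 - 2.
A = np.array([
    [0, 1, 0],
    [1, 0, 1],
    [0, 1, 0],
])

L, n = 2, 0
AL = np.linalg.matrix_power(A, L)

# Walks of length L starting at node n, minus the closed walks back to n.
count = int(AL[n].sum() - AL[n, n])
print(count)  # the single walk 0 -> 1 -> 2
```

For truly non-cyclic (simple) paths you would still need to filter out walks with repeated nodes, e.g. by enumerating them with a depth-first search instead.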
| 1 | 3 | 0 |
Given a graph G, a node n and a length L, I'd like to collect all (non-cyclic) paths of length L that depart from n.
Do you have any idea on how to approach this?
By now, I my graph is a networkx.Graph instance, but I do not really care if e.g. igraph is recommended.
Thanks a lot!
|
All paths of length L from node n using python
| 1 | 0 | 1 | 6,172 |
11,813,435 |
2012-08-05T02:05:00.000
| 20 | 0 | 1 | 1 |
python,powershell,python-2.7
| 11,814,706 | 13 | false | 0 | 0 |
$env:path="$env:Path;C:\Python27" will only set it for the current session. Next time you open Powershell, you will have to do the same thing again.
The [Environment]::SetEnvironmentVariable() is the right way, and it would have set your PATH environment variable permanently. You just have to start Powershell again to see the effect in this case.
| 5 | 65 | 0 |
I'm trying to follow Zed Shaw's guide for Learning Python the Hard Way. I need to use python in Powershell. I have Python 2.7.3 installed in C:\Python27. Whenever I type python into Powershell, I get an error that says the term 'python' is not recognized as the name of a cmdlet, function, script file, or operable program. I also typed in this: [Environment]::SetEnvironmentVariable("Path", "$env:Path;C:\Python27", "User")
That was a suggested solution provided, but typing python into Powershell still does nothing. I can type in "start python" and it opens up a window with python but I need it in Powershell. Thanks.
|
I'm trying to use python in powershell
| 1 | 0 | 0 | 202,736 |
11,813,435 |
2012-08-05T02:05:00.000
| 11 | 0 | 1 | 1 |
python,powershell,python-2.7
| 33,180,819 | 13 | false | 0 | 0 |
The directory is not set correctly, so please follow these steps.
"MyComputer">Right Click>Properties>"System Properties">"Advanced" tab
"Environment Variables">"Path">"Edit"
In the "Variable value" box, Make sure you see following:
;c:\python27\;c:\python27\scripts
Click "OK", Test this change by restarting your windows powershell. Type
python
Now python version 2 runs! yay!
| 5 | 65 | 0 |
I'm trying to follow Zed Shaw's guide for Learning Python the Hard Way. I need to use python in Powershell. I have Python 2.7.3 installed in C:\Python27. Whenever I type python into Powershell, I get an error that says the term 'python' is not recognized as the name of a cmdlet, function, script file, or operable program. I also typed in this: [Environment]::SetEnvironmentVariable("Path", "$env:Path;C:\Python27", "User")
That was a suggested solution provided, but typing python into Powershell still does nothing. I can type in "start python" and it opens up a window with python but I need it in Powershell. Thanks.
|
I'm trying to use python in powershell
| 1 | 0 | 0 | 202,736 |
11,813,435 |
2012-08-05T02:05:00.000
| 7 | 0 | 1 | 1 |
python,powershell,python-2.7
| 31,302,445 | 13 | false | 0 | 0 |
This works for me permanently:
[Environment]::SetEnvironmentVariable("Path", "$env:Path;C:\Python27","User")
| 5 | 65 | 0 |
I'm trying to follow Zed Shaw's guide for Learning Python the Hard Way. I need to use python in Powershell. I have Python 2.7.3 installed in C:\Python27. Whenever I type python into Powershell, I get an error that says the term 'python' is not recognized as the name of a cmdlet, function, script file, or operable program. I also typed in this: [Environment]::SetEnvironmentVariable("Path", "$env:Path;C:\Python27", "User")
That was a suggested solution provided, but typing python into Powershell still does nothing. I can type in "start python" and it opens up a window with python but I need it in Powershell. Thanks.
|
I'm trying to use python in powershell
| 1 | 0 | 0 | 202,736 |
11,813,435 |
2012-08-05T02:05:00.000
| 4 | 0 | 1 | 1 |
python,powershell,python-2.7
| 37,464,667 | 13 | false | 0 | 0 |
Sometimes you install Python on Windows and it doesn't configure the path correctly.
Make sure you enter [Environment]::SetEnvironmentVariable("Path", "$env:Path;C:\Python27", "User")
in PowerShell to configure it correctly.
You also have to either restart PowerShell or your whole computer to get it to really be fixed.
| 5 | 65 | 0 |
I'm trying to follow Zed Shaw's guide for Learning Python the Hard Way. I need to use python in Powershell. I have Python 2.7.3 installed in C:\Python27. Whenever I type python into Powershell, I get an error that says the term 'python' is not recognized as the name of a cmdlet, function, script file, or operable program. I also typed in this: [Environment]::SetEnvironmentVariable("Path", "$env:Path;C:\Python27", "User")
That was a suggested solution provided, but typing python into Powershell still does nothing. I can type in "start python" and it opens up a window with python but I need it in Powershell. Thanks.
|
I'm trying to use python in powershell
| 0.061461 | 0 | 0 | 202,736 |
11,813,435 |
2012-08-05T02:05:00.000
| 0 | 0 | 1 | 1 |
python,powershell,python-2.7
| 18,043,232 | 13 | false | 0 | 0 |
Just eliminate the word "User". It will work.
| 5 | 65 | 0 |
I'm trying to follow Zed Shaw's guide for Learning Python the Hard Way. I need to use python in Powershell. I have Python 2.7.3 installed in C:\Python27. Whenever I type python into Powershell, I get an error that says the term 'python' is not recognized as the name of a cmdlet, function, script file, or operable program. I also typed in this: [Environment]::SetEnvironmentVariable("Path", "$env:Path;C:\Python27", "User")
That was a suggested solution provided, but typing python into Powershell still does nothing. I can type in "start python" and it opens up a window with python but I need it in Powershell. Thanks.
|
I'm trying to use python in powershell
| 0 | 0 | 0 | 202,736 |
11,813,555 |
2012-08-05T02:36:00.000
| 0 | 0 | 1 | 0 |
python,pdf
| 44,475,810 | 6 | false | 0 | 0 |
A simpler approach would be to use the csv package to write the two columns to a .csv file, then read it into a spreadsheet & print to pdf. Not 100% python but maybe 90% less work ...
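A sketch of that low-tech route with the standard csv module; the word pairs and file name are made up, and the file is written as UTF-8 so the Russian column survives the round trip:

```python
import csv

# Hypothetical Russian/English word pairs.
pairs = [
    ("кот", "cat"),
    ("собака", "dog"),
]

with open("words.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["Russian", "English"])
    writer.writerows(pairs)
```

From there, any spreadsheet application can open the file and export it to PDF.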
| 1 | 23 | 0 |
I'm looking for a way to output a VERY simple pdf file from Python. Basically it will consist of two columns of words, one in Russian (so utf-8 characters) and the other in English.
I've been googling for about an hour, and the packages I've found are either massive overkill (and still don't provide useful examples) such as ReportLab, or seem to assume that the only thing anyone would ever do with pdfs is concatenate several of them together (PyPdf, pdfrw).
Maybe I'm just missing something obvious, but all the ones I've seen seem to launch into some massive discussion about taking 17 pdf files and converting them into a 60 foot wide poster with 23 panes (slight exaggeration maybe), and leave me wondering how to get the "Hello World" program working. Any help would be appreciated.
|
How do I create a simple pdf file in python?
| 0 | 0 | 0 | 14,227 |
11,813,585 |
2012-08-05T02:46:00.000
| 1 | 0 | 0 | 0 |
python,virtualenv,buildout,gunicorn,supervisord
| 11,897,450 | 1 | true | 1 | 0 |
Just like with your point-mod_wsgi-at-a-different-folder solution, you can do the same with gunicorn/buildout. Just set up your latest buildout in a different directory, stop the old gunicorn and start the new.
There'll be a short delay between stopping the one and starting the other, of course.
Alternative: set up the new one with a different port number, change the nginx config and kick nginx if you really want zero seconds of downtime.
| 1 | 2 | 0 |
My application is developed with Flask and uses buildout to handle dependency isolation. I plan to use Gunicorn and supervisord as the WSGI container and process manager, in front of which there is Nginx doing load balancing. Here is the problem when deploying a new version of the application: everything is built out in a subfolder, so how do I restart the gunicorn server so that the version switch can take place gracefully?
I come up with some solutions of course:
Ditch gunicorn and supervisord, and turn to apache mod_wsgi, so when deploying a new version I could simply change the folder in the .wsgi file and the server will restart.
Use virtualenv and install gunicorn, supervisord, as well as my application package in it, so when switching version I just restart it using supervisorctl.
Is there a 'pure' buildout way that can accomplish this situation? Or any in-use production solutions will all be appreciated.
Thanks in advance.
|
How to Accomplish App Version Switching Using Buildout?
| 1.2 | 0 | 0 | 170 |
11,816,147 |
2012-08-05T11:49:00.000
| 12 | 0 | 1 | 0 |
python,pycharm
| 43,841,682 | 8 | false | 1 | 0 |
ctrl + shift + A => open pop window to select options, select to spaces to convert all tabs as space, or to tab to convert all spaces as tab.
| 5 | 131 | 0 |
I am using the PyCharm IDE for Python development. It works perfectly fine for Django code, so I suspected that converting tabs to spaces is the default behaviour; however, for plain Python files the IDE gives errors everywhere because it can't convert tabs to spaces automatically. Is there a way to achieve this?
|
pycharm convert tabs to spaces automatically
| 1 | 0 | 0 | 141,288 |
11,816,147 |
2012-08-05T11:49:00.000
| 64 | 0 | 1 | 0 |
python,pycharm
| 20,491,867 | 8 | false | 1 | 0 |
For selections, you can also convert the selection using the "To spaces" function. I usually just invoke it via ctrl-shift-A and then find "To Spaces" from there.
| 5 | 131 | 0 |
I am using the PyCharm IDE for Python development. It works perfectly fine for Django code, so I suspected that converting tabs to spaces is the default behaviour; however, for plain Python files the IDE gives errors everywhere because it can't convert tabs to spaces automatically. Is there a way to achieve this?
|
pycharm convert tabs to spaces automatically
| 1 | 0 | 0 | 141,288 |
11,816,147 |
2012-08-05T11:49:00.000
| 6 | 0 | 1 | 0 |
python,pycharm
| 54,234,510 | 8 | false | 1 | 0 |
ctrl+alt+shift+L -> reformat the whole file :)
| 5 | 131 | 0 |
I am using the PyCharm IDE for Python development. It works perfectly fine for Django code, so I suspected that converting tabs to spaces is the default behaviour; however, for plain Python files the IDE gives errors everywhere because it can't convert tabs to spaces automatically. Is there a way to achieve this?
|
pycharm convert tabs to spaces automatically
| 1 | 0 | 0 | 141,288 |
11,816,147 |
2012-08-05T11:49:00.000
| 1 | 0 | 1 | 0 |
python,pycharm
| 66,066,077 | 8 | false | 1 | 0 |
Just to note: PyCharm's "To spaces" function only works on indent tabs at the beginning of a line, not on interstitial tabs within a line of text, for example when you are trying to format columns in monospaced text.
| 5 | 131 | 0 |
I am using the PyCharm IDE for Python development. It works perfectly fine for Django code, so I suspected that converting tabs to spaces is the default behaviour; however, for plain Python files the IDE gives errors everywhere because it can't convert tabs to spaces automatically. Is there a way to achieve this?
|
pycharm convert tabs to spaces automatically
| 0.024995 | 0 | 0 | 141,288 |
11,816,147 |
2012-08-05T11:49:00.000
| 1 | 0 | 1 | 0 |
python,pycharm
| 57,950,986 | 8 | false | 1 | 0 |
For me it was having a file called ~/.editorconfig that was overriding my tab settings. I removed that (surely that will bite me again someday) but it fixed my PyCharm issue.
| 5 | 131 | 0 |
I am using the PyCharm IDE for Python development. It works perfectly fine for Django code, so I suspected that converting tabs to spaces is the default behaviour; however, for plain Python files the IDE gives errors everywhere because it can't convert tabs to spaces automatically. Is there a way to achieve this?
|
pycharm convert tabs to spaces automatically
| 0.024995 | 0 | 0 | 141,288 |