Dataset columns (type, observed range):
Q_Id (int64, 337 to 49.3M) | CreationDate (string, length 23) | Users Score (int64, -42 to 1.15k) | Other (int64, 0 to 1) | Python Basics and Environment (int64, 0 to 1) | System Administration and DevOps (int64, 0 to 1) | Tags (string, length 6 to 105) | A_Id (int64, 518 to 72.5M) | AnswerCount (int64, 1 to 64) | is_accepted (bool, 2 classes) | Web Development (int64, 0 to 1) | GUI and Desktop Applications (int64, 0 to 1) | Answer (string, length 6 to 11.6k) | Available Count (int64, 1 to 31) | Q_Score (int64, 0 to 6.79k) | Data Science and Machine Learning (int64, 0 to 1) | Question (string, length 15 to 29k) | Title (string, length 11 to 150) | Score (float64, -1 to 1.2) | Database and SQL (int64, 0 to 1) | Networking and APIs (int64, 0 to 1) | ViewCount (int64, 8 to 6.81M)
11,076,018 |
2012-06-18T01:09:00.000
| 1 | 0 | 0 | 0 |
python,qt,pyside,qt-designer
| 11,092,105 | 2 | false | 0 | 1 |
I don't know about Python, but on the Mac you create About, Preferences, or Quit menu items and Qt will automatically shift them into the application menu on the left (the one with the bold text).
I'm guessing that you are trying to create your own application menu and Qt is getting confused. You don't need to create it. Put your About, Preferences, or Quit menu items under File or some other menu heading and they will get shifted.
| 1 | 5 | 0 |
So basically I want to create a GUI app using PySide and the Qt framework. I am using Qt Designer to make the initial UI design. The first version of the app will run on Mac, and I want it to be like other Mac applications, where the name of the app is in bold all the way to the left, with "About", "Preferences", and "Quit" entries.
The problem is that whenever I add these types of strings the drop-down stops working.
Any tips on this would be helpful; this is my first GUI using PySide, the Qt framework, and Qt Designer.
|
Qt not letting me create a menu item named after my app with the strings "About", "Preferences", or "Quit"? Any tips?
| 0.099668 | 0 | 0 | 419 |
11,076,404 |
2012-06-18T02:42:00.000
| 1 | 1 | 0 | 0 |
python,django,dynamic,web,matplotlib
| 11,076,420 | 2 | false | 0 | 0 |
You can send an AJAX request and update the HTML contents dynamically.
| 2 | 1 | 0 |
I am accessing the Twitter streaming API. I generate a map using Basemap in Python.
I want only certain parts of the map to change with time (e.g. every second). Is it hard to do?
Do I need to leave Basemap and look for something else? Please help!
|
Dynamically updating images on website
| 0.099668 | 0 | 0 | 215 |
11,076,404 |
2012-06-18T02:42:00.000
| 1 | 1 | 0 | 0 |
python,django,dynamic,web,matplotlib
| 11,093,825 | 2 | false | 0 | 0 |
A possible approach: divide the map into tiles, and treat each one separately; use Basemap to generate just the map-tile that contains new data, then update just that tile on your webpage using Ajax.
Of course, depending on the nature of changes to the data on your map, this approach may or may not work for you -- gerrymandering is not really possible.
You would need to write logic to understand which tile the new data belongs to, then use Basemap to create a new image for that tile, then intelligently update the tiled image. You will also have to play with margins and padding (both in matplotlib and in CSS) to piece the tiles together cleanly.
...
When the approach gets this complicated, one should re-evaluate whether better tools are available. Basemap doesn't sound like a good fit for what you need to do.
| 2 | 1 | 0 |
I am accessing the twitter streaming API. I generate a map using Basemap in python.
I want only certain parts of the map to change with time (for eg. every second). Is it hard to do?
Do I need to leave Basemap and look for something else? Please help!
|
Dynamically updating images on website
| 0.099668 | 0 | 0 | 215 |
11,077,023 |
2012-06-18T04:45:00.000
| 61 | 0 | 0 | 0 |
python,numpy,scipy,pandas
| 11,077,060 | 3 | false | 0 | 0 |
Numpy is required by pandas (and by virtually all numerical tools for Python). Scipy is not strictly required for pandas but is listed as an "optional dependency". I wouldn't say that pandas is an alternative to Numpy and/or Scipy. Rather, it's an extra tool that provides a more streamlined way of working with numerical and tabular data in Python. You can use pandas data structures but freely draw on Numpy and Scipy functions to manipulate them.
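A minimal sketch of that interop (the Series values and index here are made up for illustration):

```python
import numpy as np
import pandas as pd

# A pandas data structure...
s = pd.Series([1.0, 4.0, 9.0], index=["a", "b", "c"])

# ...manipulated directly with a NumPy function: ufuncs accept
# pandas objects and return pandas objects with the index preserved.
roots = np.sqrt(s)
```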
| 2 | 202 | 1 |
They both seem exceedingly similar and I'm curious as to which package would be more beneficial for financial data analysis.
|
What are the differences between Pandas and NumPy+SciPy in Python?
| 1 | 0 | 0 | 135,225 |
11,077,023 |
2012-06-18T04:45:00.000
| 327 | 0 | 0 | 0 |
python,numpy,scipy,pandas
| 11,077,215 | 3 | true | 0 | 0 |
pandas provides high level data manipulation tools built on top of NumPy. NumPy by itself is a fairly low-level tool, similar to MATLAB. pandas on the other hand provides rich time series functionality, data alignment, NA-friendly statistics, groupby, merge and join methods, and lots of other conveniences. It has become very popular in recent years in financial applications. I will have a chapter dedicated to financial data analysis using pandas in my upcoming book.
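For example, the kind of groupby aggregation that shows up constantly in financial work (the tickers and returns below are made up):

```python
import pandas as pd

returns = pd.DataFrame({
    "ticker": ["AAA", "BBB", "AAA", "BBB"],
    "ret":    [0.01, -0.02, 0.03, 0.01],
})

# Mean return per ticker: one line in pandas, noticeably more
# bookkeeping with raw NumPy arrays.
mean_ret = returns.groupby("ticker")["ret"].mean()
```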
| 2 | 202 | 1 |
They both seem exceedingly similar and I'm curious as to which package would be more beneficial for financial data analysis.
|
What are the differences between Pandas and NumPy+SciPy in Python?
| 1.2 | 0 | 0 | 135,225 |
11,077,252 |
2012-06-18T05:16:00.000
| 0 | 0 | 1 | 0 |
python,python-2.7,speech-to-text
| 36,327,080 | 4 | false | 0 | 0 |
dragonfly 0.6.5 works on Python 2.7. If you have a package manager installed, just:
press WIN+R;
type cmd and press Enter;
type: pip install dragonfly
Check that it is installed by typing pip list (it will show you all installed packages).
| 1 | 3 | 0 |
So I have been searching for a speech-to-text module, and I have found a few, such as dragonfly and pyspeech; however, they are for Python 2.4 and 2.5, and I need one for 2.7. Does anyone know of a library or module for this? Thank you in advance for your replies.
|
Module or Library for Python speech to text (2.7)
| 0 | 0 | 0 | 4,061 |
11,079,644 |
2012-06-18T08:58:00.000
| 0 | 0 | 0 | 0 |
python,django,django-signals
| 11,079,716 | 3 | false | 1 | 0 |
Checking for instance.id is a nice way of determining if the instance is "new". This only works if you use ids that are auto-generated by your database.
| 1 | 14 | 0 |
I'm using Django's post_save signal to send emails to users whenever a new article is added to the site. However, users still receive new emails whenever I call the save() method on already created articles. How is it possible to receive emails only when a NEW entry is added?
Thanks in advance
|
Django signals for new entry only
| 0 | 0 | 0 | 2,985 |
11,079,785 |
2012-06-18T09:08:00.000
| 0 | 0 | 1 | 0 |
python,multithreading
| 11,080,491 | 3 | false | 0 | 1 |
The main thread creates a queue and a bunch of worker threads that pull tasks from the queue. As long as the queue is empty, all worker threads block and do nothing. When a task is put into the queue, a random worker thread acquires the task, does its job, and sleeps as soon as it's done. That way you can reuse a thread over and over again without creating new worker threads.
When you need to stop the threads, you put a kill object into the queue that tells the thread to shut down instead of blocking on the queue.
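A minimal sketch of that pattern with the standard library (the doubling "work" is just a placeholder for the real timer task):

```python
import queue
import threading

task_queue = queue.Queue()
results = []
STOP = object()  # the "kill object": tells a worker to shut down

def worker():
    while True:
        task = task_queue.get()   # blocks while the queue is empty
        if task is STOP:
            break                 # shut down instead of blocking again
        results.append(task * 2)  # stand-in for the real work

workers = [threading.Thread(target=worker) for _ in range(3)]
for t in workers:
    t.start()

for item in (1, 2, 3):
    task_queue.put(item)
for _ in workers:
    task_queue.put(STOP)          # one kill object per worker
for t in workers:
    t.join()
```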
| 1 | 1 | 0 |
I am fairly new to Python programming and threads aren't my area of expertise. I have a problem which I hope people here can help me out with.
Task: as part of my master's thesis, I need to make a mixed reality game with multiplayer capability. In my game design, each player can set a bunch of traps, each of which is active for a specific time period, e.g. 30 secs. In order to maintain a consistent game state across all the players, all the time checks need to be done on the server side, which is implemented in Python.
I decided to start a Python thread every time a new trap is laid by a player and run a timer on the thread. All this part is fine, but the real problem arises when I need to notify the main thread that the time is up for this particular trap, so that I can communicate the same to the client (an Android device).
I tried creating a queue and inserting information into the queue when the task is done, but I can't do a queue.join(), since it will put the main thread on hold till the task is done, and this is not what I need, nor is it ideal in my case, since the main thread is constantly communicating with the clients, and if it is halted then all the communication with the players will come to a standstill.
I need the secondary thread, which is running a timer, to tell the main thread as soon as the time runs out, and to send the ID of the trap, so that I can pass this information to the Android client to remove it. How can I achieve this?
Any other suggestions on how this task can be achieved without starting a gazillion threads are also welcome. :)
Thanks in advance for the help.
Cheers
|
Notify the main thread when a thread is done
| 0 | 0 | 0 | 2,489 |
11,081,209 |
2012-06-18T10:45:00.000
| 1 | 0 | 0 | 1 |
python,json,rest
| 11,184,777 | 1 | true | 0 | 0 |
avasal, you were right. I did it with pip install python-rest-client.
| 1 | 1 | 0 |
I need to use the python-rest-client package in my project. I tried several times to install python-rest-client into my Linux Python, but it never worked, although it works well in Windows Python. Would anybody tell me how to install python-rest-client in Linux Python?
|
how to install python-rest-client lib in linux
| 1.2 | 0 | 1 | 1,855 |
11,081,767 |
2012-06-18T11:23:00.000
| 7 | 0 | 0 | 1 |
python,google-app-engine,task-queue
| 11,082,412 | 2 | true | 1 | 0 |
Pick any one of the following HTTP headers:
X-AppEngine-QueueName, the name of the queue (possibly default)
X-AppEngine-TaskName, the name of the task, or a system-generated unique ID if no name was specified
X-AppEngine-TaskRetryCount, the number of times this task has been retried; for the first attempt, this value is 0
X-AppEngine-TaskETA, the target execution time of the task, specified in microseconds since January 1st 1970.
Standard HTTP requests won't have these headers.
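A sketch of the idea (the timeout values are invented for illustration; note also that a plain dict is case-sensitive, while real request header mappings are usually case-insensitive):

```python
# Timeouts in seconds: illustrative values, not App Engine defaults.
FETCH_TIMEOUT_TASK = 60   # generous timeout inside a task queue task
FETCH_TIMEOUT_HTTP = 5    # short timeout for an interactive request

def is_task_request(headers):
    """True when the request was dispatched by the task queue.

    `headers` is any mapping of HTTP header names to values,
    e.g. the request handler's headers object.
    """
    return "X-AppEngine-QueueName" in headers

def urlfetch_timeout(headers):
    """Pick the urlfetch deadline based on how the request arrived."""
    return FETCH_TIMEOUT_TASK if is_task_request(headers) else FETCH_TIMEOUT_HTTP
```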
| 1 | 4 | 0 |
Is there a way to dynamically determine whether the currently executing task is a standard http request or a TaskQueue?
In some parts of my request handler, I make a few urlfetches. I would like the timeout delay of the url fetch to be short if the request is a standard http request and long if it is a TaskQueue.
|
Google App Engine: Determine whether Current Request is a Taskqueue
| 1.2 | 0 | 0 | 923 |
11,082,229 |
2012-06-18T11:52:00.000
| 3 | 0 | 0 | 0 |
python,mysql,database,search
| 11,088,110 | 3 | false | 0 | 0 |
Apache Solr is a great search engine that provides:
(1) n-gram indexing (searching not just for complete strings but also for partial substrings, which helps greatly in getting similar results);
(2) an out-of-the-box spell corrector based on a distance metric/edit distance (which will get you a "did you mean chicago" when the user types in chicaog);
(3) a fuzzy search option out of the box (fuzzy searches help you get close matches for your query; for example, if a user types in GA-123 he could obtain VMDEO-123 as a result);
(4) a "More Like This" component, which helps you in the same way as the options above.
Solr (based on the Lucene search library) is open source, is slowly becoming the de facto standard in the search industry, and is excellent for database searches (you spoke about indexing a database column, which is a cakewalk for Solr). Lucene and Solr are used by many Fortune 500 companies as well as internet giants.
The Sphinx search engine is also great (I love it too, as it has a very low footprint for everything and is C++ based), but to put it simply, Solr is much more popular.
Python support and APIs are available for both. However, Sphinx runs as an executable, while Solr is accessed over HTTP. So for Solr you simply call the Solr URL from your Python program, which returns results that you can send to your front end for rendering; as simple as that.
So far so good. Coming to your question:
First you should ask yourself whether you really require a search engine. Search engines are good for all the use cases mentioned above but are really made for searching across huge amounts of full-text data or millions of rows of tabular data. Algorithms like "did you mean", similar records, and spell correctors can be written on top. Before zeroing in on Solr, also search Google for (1) Peter Norvig's spell corrector and (2) n-gram indexing. It's possible that just by writing a few lines of code you may get exactly the thing you were looking for.
I leave it up to you to decide :)
| 2 | 3 | 0 |
I'm looking for a search engine that I can point to a column in my database that supports advanced functions like spelling correction and "close to" results.
Right now I'm just using
SELECT <column> from <table> where <colname> LIKE %<searchterm>%
and I'm missing some results particularly when users misspell items.
I've written some code to fix misspellings by running it through a spellchecker but thought there may be a better out-of-the box option to use. Google turns up lots of options for indexing and searching the entire site where I really just need to index and search this one table column.
|
Search Engine for a single DB column
| 0.197375 | 1 | 0 | 153 |
11,082,229 |
2012-06-18T11:52:00.000
| 1 | 0 | 0 | 0 |
python,mysql,database,search
| 11,087,295 | 3 | false | 0 | 0 |
I would suggest looking into open source technologies like Sphinx Search.
| 2 | 3 | 0 |
I'm looking for a search engine that I can point to a column in my database that supports advanced functions like spelling correction and "close to" results.
Right now I'm just using
SELECT <column> from <table> where <colname> LIKE %<searchterm>%
and I'm missing some results particularly when users misspell items.
I've written some code to fix misspellings by running it through a spellchecker but thought there may be a better out-of-the box option to use. Google turns up lots of options for indexing and searching the entire site where I really just need to index and search this one table column.
|
Search Engine for a single DB column
| 0.066568 | 1 | 0 | 153 |
11,083,036 |
2012-06-18T12:36:00.000
| 1 | 0 | 1 | 0 |
python,nlp,text-segmentation
| 11,083,430 | 6 | false | 0 | 0 |
This is in general impossible. Abbreviations, numeric values ("$23.45", "32.5 degrees"), quotations ("he said: 'ha! you'll never [...]'") or names with punctuation (e.g. "Panic! At the Disco") or even whole subordinate clauses in brackets that are basically their own sentence ("the cook (who is also an excellent painter!) [...]") mean that you can't just split the text by dots and exclamation/question marks or use any other 'simple' approach.
Basically, to solve the general case, you'd need a parser for natural language (and in that case you may be better off using Prolog instead of Python) with a grammar that handles all these special cases. If you can reduce the problem to a less general one, e.g. only needing to deal with abbreviations and quotations, you may be able to hack something together, but you'd nevertheless need some sort of parser or state machine, as regular expressions are not powerful enough for these kinds of things.
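A quick demonstration of why the naive approach breaks down, using the simplest plausible rule (split at '.', '!' or '?' followed by whitespace) on a miniature version of the question's Churchill example:

```python
import re

text = "Hon. Winston Churchill led the UK during the Second World War. He also wrote books."

# Naive rule: a sentence ends at '.', '!' or '?' followed by whitespace.
naive_sentences = re.split(r"(?<=[.!?])\s+", text)
```

The abbreviation "Hon." triggers a bogus split, so the naive rule yields three pieces instead of the two actual sentences.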
| 1 | 5 | 0 |
I know this might sound easy. I thought about using the first dot(.) which comes as the benchmark, but when abbreviations and short forms come, I am rendered helpless.
e.g. -
Sir Winston Leonard Spencer-Churchill, KG, OM, CH, TD, PC, DL, FRS,
Hon. RA (30 November 1874 – 24 January 1965) was a British politician
and statesman known for his leadership of the United Kingdom during
the Second World War. He is widely regarded as one of the great
wartime leaders and served as Prime Minister twice. A noted statesman
and orator, Churchill was also an officer in the British Army, a
historian, a writer, and an artist.
Here, the first dot comes after Hon., but I want the complete first sentence, ending at "Second World War."
Is it possible, people?
|
How to get the first sentence from the following paragraph?
| 0.033321 | 0 | 0 | 2,408 |
11,083,776 |
2012-06-18T13:19:00.000
| 3 | 0 | 0 | 1 |
python,google-app-engine,openid
| 11,093,808 | 3 | false | 1 | 0 |
If you don't want to require a Google Account or OpenID account you have to roll your own accounts system. This gives you maximum freedom, but it is a lot of work and makes you responsible for password security (ouch). Personally I would advise you to reconsider this requirement -- OpenID especially has a lot going for it (except IIUC it's not so simple to use Facebook).
| 1 | 11 | 0 |
For a project, I'm going to create an application on Google App Engine where:
Discussion Leaders can register with their e-mail address (or OpenID or Google Account) on the website itself to use it.
In the application admin page they can create a group discussion for which they can add users based on their e-mail address
and these users should then receive generated account details (if they don't have accounts yet) making them able to log in to that group discussion with their newly created account.
I don't want to require discussion leaders to having a Google Account or OpenID account in order to register for the application and all user other accounts must be generated by the discussion leader.
However Google App Engine seems to only support Google Accounts and OpenID accounts. How would I go about this? Is there an existing pattern for creating leader-accounts and generating user-accounts from within the Google App Engine which still support the GAE User API?
|
Generating users accounts inside Google App Engine
| 0.197375 | 0 | 0 | 4,278 |
11,083,838 |
2012-06-18T13:22:00.000
| 2 | 0 | 0 | 1 |
python,applet,gnome
| 11,083,956 | 1 | false | 0 | 0 |
stdout and stderr of applications started via X or one of its children are written to ~/.xsession-errors if not redirected.
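If you'd rather capture the applet's output in a file of your own choosing, one option is to redirect both streams as early as possible in the applet's startup. A hedged sketch (the log path is arbitrary, not a Gnome convention):

```python
import os
import sys
import tempfile

LOG_PATH = os.path.join(tempfile.gettempdir(), "my_applet.log")

def redirect_output(path):
    """Send stdout and stderr to a log file; call this as early as
    possible so any crash trace ends up in the file."""
    log_file = open(path, "a", buffering=1)  # line-buffered text file
    sys.stdout = log_file
    sys.stderr = log_file
    return log_file
```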
| 1 | 0 | 0 |
I'm currently writing a Gnome Panel Applet in Python. Everything is working fine as long as I don't try to actually add it to the panel (running it in a window works).
When trying to add it to a panel it crashes and I have no idea why, because I can't see the error trace.
Is there a simple way to log the output of a Gnome Applet to a file so I can find the problem?
|
Getting standard output from a Python Gnome Applet
| 0.379949 | 0 | 0 | 135 |
11,083,921 |
2012-06-18T13:27:00.000
| 4 | 0 | 0 | 0 |
python,machine-learning,svm,regression,libsvm
| 11,172,695 | 2 | false | 0 | 0 |
libsvm might not be the best tool for this task.
The problem you describe is called multivariate regression, and usually for regression problems, SVM's are not necessarily the best choice.
You could try something like group lasso (http://www.di.ens.fr/~fbach/grouplasso/index.htm - matlab) or sparse group lasso (http://spams-devel.gforge.inria.fr/ - seems to have a python interface), which solve the multivariate regression problem with different types of regularization.
| 1 | 3 | 1 |
I would like to ask if anyone has an idea or example of how to do support vector regression in Python with high-dimensional output (more than one dimension) using a Python binding of libsvm? I checked the examples and they all assume the output to be one-dimensional.
|
Support Vector Regression with High Dimensional Output using python's libsvm
| 0.379949 | 0 | 0 | 3,847 |
11,090,289 |
2012-06-18T20:09:00.000
| 0 | 0 | 1 | 0 |
python,regex,string,algorithm
| 11,091,742 | 6 | false | 0 | 0 |
If you reverse the input string, then feed it to a regex like (.+)(?:.*\1){2}
it should give you the longest string repeated 3 times (reverse capture group 1 for the answer).
Edit:
I have to cancel this approach. It depends on the first match. Unless each candidate's length is tested against the maximum length found so far, in an iterative loop, a regex won't work for this.
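For comparison, a brute-force baseline is easy to write. It is far too slow for the asker's "absurdly large" string, but useful for checking a faster solution on small inputs:

```python
def longest_repeated(s, min_repeats=3):
    """Longest substring occurring at least `min_repeats` times
    (non-overlapping occurrences, as counted by str.count)."""
    for length in range(len(s) // min_repeats, 0, -1):
        for start in range(len(s) - length + 1):
            candidate = s[start:start + length]
            if s.count(candidate) >= min_repeats:
                return candidate
    return ""
```

On the question's example string this returns "helloworld".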
| 1 | 40 | 0 |
I need to find the longest sequence in a string with the caveat that the sequence must be repeated three or more times. So, for example, if my string is:
fdwaw4helloworldvcdv1c3xcv3xcz1sda21f2sd1ahelloworldgafgfa4564534321fadghelloworld
then I would like the value "helloworld" to be returned.
I know of a few ways of accomplishing this but the problem I'm facing is that the actual string is absurdly large so I'm really looking for a method that can do it in a timely fashion.
|
Find longest repetitive sequence in a string
| 0 | 0 | 0 | 31,438 |
11,091,052 |
2012-06-18T21:07:00.000
| 2 | 0 | 1 | 1 |
python,installation,python-idle
| 11,091,290 | 1 | true | 0 | 0 |
Try making a .py file and then opening it; a window should appear asking you what to open it with, and then select IDLE in your Program Files.
| 1 | 1 | 0 |
Just installed Python 2.7.3 on a Windows7 machine.
How do I get .py files to be associated with python (they are with notepad ATM) and how do I get the context menu shortcut for "edit in IDLE"? Somehow I didn't get that on this particular computer.
|
IDLE not integrated in desktop
| 1.2 | 0 | 0 | 397 |
11,091,623 |
2012-06-18T21:51:00.000
| 412 | 0 | 1 | 0 |
python,pip,freebsd,easy-install,python-requests
| 14,447,068 | 12 | false | 0 | 0 |
On the system that has access to internet
The pip download command lets you download packages without installing them:
pip download -r requirements.txt
(In previous versions of pip, this was spelled pip install --download -r requirements.txt.)
On the system that has no access to internet
Then you can use
pip install --no-index --find-links /path/to/download/dir/ -r requirements.txt
to install those downloaded modules, without accessing the network.
| 2 | 256 | 0 |
What's the best way to download a Python package and its dependencies from PyPI for offline installation on another machine? Is there an easy way to do this with pip or easy_install? I'm trying to install the requests library on a FreeBSD box that is not connected to the internet.
|
How to install packages offline?
| 1 | 0 | 1 | 394,353 |
11,091,623 |
2012-06-18T21:51:00.000
| 3 | 0 | 1 | 0 |
python,pip,freebsd,easy-install,python-requests
| 40,603,985 | 12 | false | 0 | 0 |
For Pip 8.1.2 you can use pip download -r requ.txt to download packages to your local machine.
| 2 | 256 | 0 |
What's the best way to download a Python package and its dependencies from PyPI for offline installation on another machine? Is there an easy way to do this with pip or easy_install? I'm trying to install the requests library on a FreeBSD box that is not connected to the internet.
|
How to install packages offline?
| 0.049958 | 0 | 1 | 394,353 |
11,094,920 |
2012-06-19T05:33:00.000
| 0 | 0 | 0 | 1 |
python,tornado
| 11,096,932 | 4 | false | 0 | 0 |
If you want to daemonize Tornado, use supervisord. If you want to access Tornado at an address like http://mylocal.dev/, you should look at nginx and use it as a reverse proxy. And it can be bound to a specific port as in Lafada's answer.
| 2 | 9 | 0 |
Is it possible to run Tornado such that it listens to a local port (e.g. localhost:8000). I can't seem to find any documentation explaining how to do this.
|
How do you run the Tornado web server locally?
| 0 | 0 | 0 | 19,432 |
11,094,920 |
2012-06-19T05:33:00.000
| 1 | 0 | 0 | 1 |
python,tornado
| 39,968,411 | 4 | false | 0 | 0 |
Once you've defined an application (like in the other answers) in a file (for example server.py), you simply save and run that file.
python server.py
| 2 | 9 | 0 |
Is it possible to run Tornado such that it listens to a local port (e.g. localhost:8000). I can't seem to find any documentation explaining how to do this.
|
How do you run the Tornado web server locally?
| 0.049958 | 0 | 0 | 19,432 |
11,095,220 |
2012-06-19T06:00:00.000
| 0 | 0 | 0 | 1 |
python,hadoop
| 11,098,023 | 1 | true | 0 | 0 |
Problem solved by adding the needed file with the -file option, or the file= option in the conf file.
| 1 | 0 | 1 |
I need to read in a dictionary file to filter content specified in the hdfs_input, and I have uploaded it to the cluster using the put command, but I don't know how to access it in my program.
I tried to access it using a path on the cluster like normal files, but it gives the error: IOError: [Errno 2] No such file or directory
Besides, is there any way to maintain only one copy of the dictionary for all the machines that run the job?
So what's the correct way of accessing files other than the specified input in Hadoop jobs?
|
How to read other files in hadoop jobs?
| 1.2 | 0 | 0 | 91 |
11,095,247 |
2012-06-19T06:03:00.000
| 0 | 1 | 1 | 0 |
python,open-flash-chart,openflashchart2
| 11,095,835 | 1 | false | 0 | 0 |
Correct me if I'm wrong, but Open Flash Chart is Flash, not Python; there is no Python module dedicated to it. In fact, the Flash chart just needs some data from the server, and that part can be implemented in Python.
| 1 | 0 | 0 |
How do I install the Python module openFlashChart? I am really having trouble installing it. If you have installed it before, kindly post how.
The error was module openFlashChart not found. I actually can't find the module to install.
|
installing python module openFlashChart
| 0 | 0 | 0 | 147 |
11,096,295 |
2012-06-19T07:27:00.000
| 2 | 0 | 0 | 0 |
python,django,multithreading,file,download
| 11,096,591 | 1 | true | 1 | 0 |
It's possible, but it takes some jumps through the metaphorical hoops. My answer isn't Django-specific; you'll need to translate it to your framework.
Start a thread that does the actual download. While it downloads, it must update some data structure in the user's session (total size of the download, etc).
In the browser, start a timer which does AJAX requests to a "download status URL"
Create a handler for this URL which takes the status from the session and turns that into JSON or a piece of HTML which you send to the browser.
In the AJAX handler's success method, take the JSON/HTML and put it into the current page. Unless the download is complete (this part is more simple with JSON), restart the timer.
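A rough sketch of steps 1 and 3 above, with the download simulated (a real worker would read an HTTP response in chunks, and a real app would keep the status in the user's session rather than a module-level dict):

```python
import threading

download_status = {}  # task id -> progress info, shared with the handlers

def download_worker(task_id, chunks):
    """Background thread: do the work and record progress as it goes."""
    total = len(chunks)
    for i, _chunk in enumerate(chunks, start=1):
        # A real implementation would write _chunk to disk here.
        download_status[task_id] = {"percent": 100 * i // total,
                                    "done": i == total}

def status_view(task_id):
    """What the "download status URL" handler would serialize to JSON."""
    return download_status.get(task_id, {"percent": 0, "done": False})

t = threading.Thread(target=download_worker,
                     args=("job-1", [b"a", b"b", b"c", b"d"]))
t.start()
t.join()  # in the real app the thread keeps running while views respond
```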
| 1 | 4 | 0 |
OK, I decided to post the question here because I really don't know what to do or even if it's possible. You might tell me it's a repost, but I already read similar posts about it and they didn't help me out.
Here is the deal. I have an admin interface with Django and want to download a file from an external site onto my server, with a progress bar showing the percentage of the download.
I can't do anything while it's downloading. I tried to run a command with call_command within a view, but it's the same.
Is it because the Django server is single-threaded? So, is it even possible to achieve what I want to do?
Thanks in advance,
|
Django/Python Downloading a file with progressbar
| 1.2 | 0 | 0 | 1,334 |
11,098,131 |
2012-06-19T09:26:00.000
| 2 | 0 | 0 | 0 |
java,python,binding,word-wrap,cpython
| 11,286,405 | 2 | false | 1 | 0 |
I've used JPype in a similar instance with decent results. The main task would be writing wrappers to translate your Java API into a more Pythonic API, since raw JPype usage is hardly any prettier than just writing Java code.
| 1 | 6 | 0 |
How can we write a python (with CPython) binding to a Java library so that the developers that want to use this java library can use it by writing only python code, not worrying about any Java code?
|
how to write python wrapper of a java library
| 0.197375 | 0 | 0 | 10,944 |
11,099,097 |
2012-06-19T10:28:00.000
| 1 | 0 | 1 | 0 |
python,runtime-error,traceback
| 11,100,430 | 3 | false | 0 | 0 |
There are no Python exceptions that don't produce tracebacks. As the other answers show, you can crash CPython hard, which doesn't produce a traceback. If you could explain your interest in this, we might have more information.
| 1 | 3 | 0 |
Are there runtime errors(= exceptions) that do not generate a traceback?
If yes, why do some runtime errors not generate tracebacks? could you give some examples?
|
Are there runtime errors(= exceptions) that do not generate a traceback in python?
| 0.066568 | 0 | 0 | 215 |
11,100,066 |
2012-06-19T11:33:00.000
| 3 | 0 | 0 | 0 |
python,numpy,save,boolean
| 11,101,558 | 1 | false | 0 | 0 |
That's correct, bools are integers, so you can always go between the two.
import numpy as np
arr = np.array([True, True, False, False])
np.savetxt("test.txt", arr, fmt="%5i")
That gives a file containing the values 1, 1, 0, 0.
| 1 | 2 | 1 |
The following saves the floating-point values of a matrix into a text file:
numpy.savetxt('bool',mat,fmt='%f',delimiter=',')
How do I save a boolean matrix? What is the fmt for saving a boolean matrix?
|
how to save a boolean numpy array to textfile in python?
| 0.53705 | 0 | 0 | 3,549 |
11,100,997 |
2012-06-19T12:32:00.000
| 0 | 0 | 0 | 0 |
python,django,postgresql
| 11,101,114 | 7 | false | 1 | 0 |
Instead of deleting orders - you should create a field which is a boolean (call it whatever you like - for example, deleted) and set this field to 1 for "deleted" orders.
Messing with a serial field (which is what your auto-increment field is called in postgres) will lead to problems later; especially if you have foreign keys and relationships with tables.
Not only will it impact your database server's performance; it will also impact your business, as eventually you will have two orders floating around with the same order number; even though you have "deleted" one from the database, the order number may already be referenced somewhere else, like in a receipt you printed for your customer.
| 4 | 1 | 0 |
I have a table in a Django app where one of the fields is called Order (as in sort order) and is an integer. Every time a new record is entered, the field auto-increments itself to the next number. My issue is that when a record is deleted, I would like the other records to shift a number up, and I can't find anything that would recalculate all the records in the table and shift them a number up if a record is deleted.
For instance, there are 5 records in the table, where the order numbers are 1, 2, 3, 4, and 5. Someone deleted record number 2, and now I would like numbers 3, 4, and 5 to move up to take the deleted number 2's place, so the order numbers would now be 1, 2, 3, and 4. Is this possible with Python, Postgres, and Django?
Thanks in Advance!
|
Auto Increment Field in Django/Python
| 0 | 1 | 0 | 2,701 |
11,100,997 |
2012-06-19T12:32:00.000
| 4 | 0 | 0 | 0 |
python,django,postgresql
| 11,101,064 | 7 | false | 1 | 0 |
You are going to have to implement that feature yourself, I doubt very much that a relational db will do that for you, and for good reason: it means updating a potentially large number of rows when one row is deleted.
Are you sure you need this? It could become expensive.
| 4 | 1 | 0 |
I have a table in a Django app where one of the fields is called Order (as in sort order) and is an integer. Every time a new record is entered, the field auto-increments itself to the next number. My issue is that when a record is deleted, I would like the other records to shift a number up, and I can't find anything that would recalculate all the records in the table and shift them a number up if a record is deleted.
For instance, there are 5 records in the table, where the order numbers are 1, 2, 3, 4, and 5. Someone deleted record number 2, and now I would like numbers 3, 4, and 5 to move up to take the deleted number 2's place, so the order numbers would now be 1, 2, 3, and 4. Is this possible with Python, Postgres, and Django?
Thanks in Advance!
|
Auto Increment Field in Django/Python
| 0.113791 | 1 | 0 | 2,701 |
11,100,997 |
2012-06-19T12:32:00.000
| -1 | 0 | 0 | 0 |
python,django,postgresql
| 11,101,032 | 7 | false | 1 | 0 |
Try setting the column to a sequence type in Postgres using pgAdmin.
| 4 | 1 | 0 |
I have a table in a Django app where one of the fields is called Order (as in sort order) and is an integer. Every time a new record is entered, the field auto-increments itself to the next number. My issue is that when a record is deleted, I would like the other records to shift a number up, and I can't find anything that would recalculate all the records in the table and shift them a number up if a record is deleted.
For instance, there are 5 records in the table, where the order numbers are 1, 2, 3, 4, and 5. Someone deleted record number 2, and now I would like numbers 3, 4, and 5 to move up to take the deleted number 2's place, so the order numbers would now be 1, 2, 3, and 4. Is this possible with Python, Postgres, and Django?
Thanks in Advance!
|
Auto Increment Field in Django/Python
| -0.028564 | 1 | 0 | 2,701 |
11,100,997 |
2012-06-19T12:32:00.000
| 0 | 0 | 0 | 0 |
python,django,postgresql
| 15,074,698 | 7 | false | 1 | 0 |
I came across this looking for something else and wanted to point something out:
By storing the order in a field in the same table as your data, you lose data integrity, or if you index it things will get very complicated if you hit a conflict. In other words, it's very easy to have a bug (or something else) give you two 3's, a missing 4, and other weird things can happen. I inherited a project with a manual sort order that was critical to the application (there were other issues as well) and this was constantly an issue, with just 200-300 items.
The right way to handle a manual sort order is to have a separate table to manage it and sort with a join. This way your Order table will have exactly 10 entries with just its PK (the order number) and a foreign key relationship to the ID of the items you want to sort. Deleted items just won't have a reference anymore.
You can continue to sort on delete similar to how you're doing it now, you'll just be updating the Order model's FK to list instead of iterating through and re-writing all your items. Much more efficient.
This will scale up to millions of manually sorted items easily. But rather than using auto-incremented ints, you would want to give each item a random order id in between the two items you want to place it between and keep plenty of space (few hundred thousand should do it) so you can arbitrarily re-sort them.
I see you mentioned that you've only got 10 rows here, but designing your architecture to scale well the first time, as a practice, will save you headaches down the road, and once you're in the habit of it, it won't really take you any more time.
| 4 | 1 | 0 |
I have a table in a django app where one of the fields is called Order (as in sort order) and is an integer. Every time a new record is entered the field auto-increments itself to the next number. My issue is that when a record is deleted I would like the other records to shift a number up, and I can't find anything that would recalculate all the records in the table and shift them a number up if a record is deleted.
For instance there are 5 records in the table where order numbers are 1, 2, 3, 4, and 5. Someone deleted record number 2 and now I would like numbers 3, 4, and 5 to move up to take the deleted number 2's place so the order numbers would now be 1, 2, 3, and 4. Is it possible with python, postgres and django?
Thanks in Advance!
|
Auto Increment Field in Django/Python
| 0 | 1 | 0 | 2,701 |
11,101,525 |
2012-06-19T13:01:00.000
| 0 | 0 | 0 | 0 |
python,web-applications,deployment
| 24,712,242 | 2 | false | 1 | 0 |
Couldn't you take half your servers offline (say by pulling them out of the load balancing pool) and then update those. Then bring them back online while simultaneously pulling down the other half. Then update those and bring them back online.
This will ensure that you stay online while also ensuring that you never have the old and new versions of your application online at the same time. Yes, this will mean that your site would run at half its capacity during the time. But that might be ok?
| 1 | 6 | 0 |
I think I don't completely understand the deployment process. Here is what I know:
when we need to do hot deployment -- meaning that we need to change the code that is live -- we can do it by reloading the modules, but
imp.reload is a bad idea, and we should restart the application instead of reloading the changed modules
ideally the running code should be a clone of your code repository, and any time you need to deploy, you just pull the changes
Now, let's say I have multiple instances of wsgi app running behind a reverse proxy like nginx (on ports like 8011, 8012, ...). And, let's also assume that I get 5 requests per second.
Now in this case, how should I update my code in all the running instances of the application.
If I stop all the instances, then update all of them, then restart them all -- I will certainly lose some requests
If I update each instance one by one -- then the instances will be in inconsistent states (some will be running old code, and some new) until all of them are updated. Now if a request hits an updated instance, and then a subsequent (and related) request hits an older instance (yet to be updated) -- then I will get wrong results.
Can somebody explain thoroughly how busy applications like this are hot-deployed?
|
Understanding Python Web Application Deployment
| 0 | 0 | 0 | 723 |
11,102,463 |
2012-06-19T13:51:00.000
| 2 | 0 | 0 | 0 |
python,google-app-engine
| 11,102,648 | 1 | true | 0 | 0 |
You haven't understood how web requests work. You can't "stay on the same page", because a POST - even if it's to the same URL - is a request for a new page. It's up to your code to render that entire page, complete with headers and formatting, including the new values.
This of course is why you should use a proper templating engine, rather than building up your HTML manually. Jinja2 is available in GAE, you should use it to render your template on both GET and POST.
| 1 | 2 | 0 |
I inadvertently posted revisions to this previous post when I meant to update subsequent post that has GqlQuery code.
|
Need Form Value Responses on Original Page, not Redirected Page
| 1.2 | 0 | 0 | 42 |
11,102,659 |
2012-06-19T14:02:00.000
| 0 | 0 | 0 | 0 |
python,python-3.x,routes,porting
| 11,502,743 | 1 | true | 0 | 0 |
I tried to port Routes to python 3.
After one day of work, I got all unit tests to pass. But the code started getting ugly and I was not successful in using the ported Routes with CherryPy (probably something that is not covered by the unit tests). I was not patient enough to debug it.
So I decided to write my own version which I will maybe share as opensource. It is not ready yet and I will update this answer later (for possibly interested people :-)
Thanks to commenters above
| 1 | 0 | 0 |
Can somebody skilled look at Routes library in python and tell me why there is not python 3 support?
I need Routes functionality for use with Cherrypy on python 3. And I am curious if it will be better to try to port Routes to python 3 or write my own dispatcher for python 3 from scratch.
I know some porting basics for python 2to3, but if there are any significant problems or drawbacks (other than method names, syntax etc), I would like to know them before I start working on the port.
Thank you very much for any tips!
Edit:
do not understand me incorrectly! I am not lazy to check it by myself, but there are some aspects that i will not discover until I try it. And maybe, somebody here tried it before :-)
|
Routes library for python 3
| 1.2 | 0 | 1 | 369 |
11,103,718 |
2012-06-19T14:57:00.000
| 8 | 0 | 0 | 0 |
python,python-2.7,pyramid
| 11,159,396 | 1 | true | 1 | 0 |
Well I think you can add the --reload flag when starting the webserver. This will watch for any changes on files and reload the server automatically. i.e /pserve --reload develoment.ini
| 1 | 3 | 0 |
Although I have set pyramid.reload_templates to true e.g. "pyramid.reload_templates = true", each time I modify a view, I have to kill the pserve process and restart it in order to see the changes.
How can I get over this and just refresh the page to get the results?
Thank you
|
Pyramid: Preventing being forced to restart the pserve
| 1.2 | 0 | 0 | 1,412 |
11,104,422 |
2012-06-19T15:34:00.000
| 0 | 0 | 1 | 0 |
python,file,pickle
| 11,108,666 | 2 | false | 0 | 0 |
Depending on how you use the data, you could
divide it into many smaller files and load only what's needed
create an index into the file and lazy-load
store it to a database and then query the database
Can you give us a better idea just what your data looks like (structure)?
How you are using the data? Do you actually use every row on every execution? If you only use a subset on each run, could the data be pre-sorted?
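If the pickle route from the question does win out, a small cache wrapper keeps the parse-once behaviour automatic. A sketch, with hypothetical file names and an assumed tab-separated "key, value" line format:

```python
import os
import pickle

def load_table(text_path, cache_path):
    # Reuse the pickled dict if the cache is at least as new as the source.
    if os.path.exists(cache_path) and \
            os.path.getmtime(cache_path) >= os.path.getmtime(text_path):
        with open(cache_path, "rb") as f:
            return pickle.load(f)
    # Otherwise parse the text file once...
    table = {}
    with open(text_path) as f:
        for line in f:                      # assumes "key<TAB>value" lines
            key, _, value = line.rstrip("\n").partition("\t")
            table[key] = value
    # ...and write the parsed dict back out for the next run.
    with open(cache_path, "wb") as f:
        pickle.dump(table, f, pickle.HIGHEST_PROTOCOL)
    return table
```

On the second and later runs only `pickle.load` runs, which skips the line-by-line parsing entirely.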
| 1 | 0 | 0 |
I have a 100Mb file with roughly 10million lines that I need to parse into a dictionary every time I run my code. This process is incredibly slow, and I am hunting for ways to speed it up. One thought that came to mind is to parse the file once and then use pickle to save it to disk. I'm not sure this would result in a speed up.
Any suggestions appreciated.
EDIT:
After doing some testing, I am worried that the slowdown happens when I create the dictionary. Pickling does seem significantly faster, though I wouldn't mind doing better.
Lalit
|
Reparsing a file or unpickling
| 0 | 0 | 0 | 140 |
11,105,304 |
2012-06-19T16:28:00.000
| 0 | 1 | 0 | 1 |
python,windows,selenium,build,jenkins
| 11,126,510 | 2 | false | 0 | 0 |
Check load on the machine and ensure you set Jenkins with enough memory to run those tests.
It is not clear if you are working with Jenkins-slaves or directly on the master -
This may also have an affect on performance.
| 2 | 2 | 0 |
I'm currently building Python regression tests using Jenkins. For some reason, each individual test in the test suite takes approx. 15 minutes to run (and there are about 70-80 tests total) in Jenkins, but when I run the tests from the command line on the same Windows box, each individual test takes only about 30 seconds to 1 minute. I even put print statements in some of the files and none of them show up in the Jenkins command line output.
Has anyone else faced this problem or have any suggestions?
Thanks
Also, I'm not doing a sync every time I build, only syncing once!
|
Jenkins takes too long to execute
| 0 | 0 | 0 | 2,535 |
11,105,304 |
2012-06-19T16:28:00.000
| 1 | 1 | 0 | 1 |
python,windows,selenium,build,jenkins
| 11,109,668 | 2 | true | 0 | 0 |
This may have to do with running Jenkins in the background (and/or as a service). Try running it in the foreground with java -jar jenkins.war an see if it helps.
| 2 | 2 | 0 |
I'm currently building Python regression tests using Jenkins. For some reason, each individual test in the test suite takes approx. 15 minutes to run (and there are about 70-80 tests total) in Jenkins, but when I run the tests from the command line on the same Windows box, each individual test takes only about 30 seconds to 1 minute. I even put print statements in some of the files and none of them show up in the Jenkins command line output.
Has anyone else faced this problem or have any suggestions?
Thanks
Also, I'm not doing a sync every time I build, only syncing once!
|
Jenkins takes too long to execute
| 1.2 | 0 | 0 | 2,535 |
11,106,823 |
2012-06-19T18:11:00.000
| 2 | 0 | 0 | 0 |
python,pandas
| 42,273,797 | 4 | false | 0 | 0 |
Both the above answers - fillna(0) and a direct addition - will give you NaN values if the two frames have different structures.
It's better to use fill_value:
df.add(other_df, fill_value=0)
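A small sketch with made-up frames showing the difference:

```python
import pandas as pd

df1 = pd.DataFrame({"a": [1, 2]}, index=["x", "y"])
df2 = pd.DataFrame({"a": [10], "b": [5]}, index=["y"])

# Plain "+" yields NaN at every label missing from either frame;
# fill_value=0 treats a value missing from *one* side as zero.
# (Labels missing from both sides still come out as NaN.)
combined = df1.add(df2, fill_value=0)
```

Here `combined` covers the union of both indexes and columns: `a` at `x` is 1 + 0, `a` at `y` is 2 + 10, `b` at `y` is 0 + 5, while `b` at `x` stays NaN because neither frame has it.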
| 1 | 55 | 1 |
I have two dataframes, both indexed by timeseries. I need to add the elements together to form a new dataframe, but only if the index and column are the same. If the item does not exist in one of the dataframes then it should be treated as a zero.
I've tried using .add but this sums regardless of index and column. Also tried a simple combined_data = dataframe1 + dataframe2, but this gives NaN unless both dataframes have the element.
Any suggestions?
|
Adding two pandas dataframes
| 0.099668 | 0 | 0 | 102,462 |
11,107,169 |
2012-06-19T18:32:00.000
| 1 | 0 | 1 | 0 |
python,md5,geohashing
| 11,107,273 | 3 | false | 0 | 0 |
Use int('db931', 16) to convert the hexadecimal (base-16) string db931 to decimal.
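Putting both steps together with hashlib (the input string here is made up for illustration):

```python
import hashlib

date_string = "2012-06-19-100.00"   # hypothetical geohashing input
digest = hashlib.md5(date_string.encode()).hexdigest()  # 32 hex characters
as_int = int(digest, 16)            # the whole digest as one big integer
```

Python integers are arbitrary precision, so the full 128-bit MD5 value fits in a single int with no truncation.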
| 1 | 2 | 0 |
Inspired by the XKCD geohashing comic (http://imgs.xkcd.com/comics/geohashing.png), I thought I'd have a go at coding a generator in Python. I've hit a block with the major part of it, though: The converting to MD5 and then to decimal.
Is it at all possible?
|
Converting a number string into MD5 with python
| 0.066568 | 0 | 0 | 4,458 |
11,108,160 |
2012-06-19T19:38:00.000
| 0 | 0 | 0 | 0 |
python,image,matplotlib,scrolledwindow
| 24,759,361 | 1 | false | 0 | 1 |
The problem is that the default Zoom and Pan don't resize the figure, they just change the limits and redraw the plot.
What you want is the Zoom to resize (keeping the same limits) and the Pan to work as in a normal Scrolled window. I have never tried this, fig.set_size_inches(w,h) should do the trick.
| 1 | 4 | 0 |
I've got a Matplotlib canvas (FigureCanvasWxAgg) that I'm displaying inside of a wx.ScrolledWindow. The problem is that I'd like to have the default zooming and panning functionality of Matplotlib work in conjunction with the ScrolledWindow, so that when the user zooms the image within the canvas, the ScrolledWindow should become larger to accommodate the zooming (scrollbars become smaller). Similarly for panning, I'd like the default matplotlib panning tool to work in conjunction with our ScrolledWindow, so that when the user pans the image on the canvas, the ScrolledWindow's scrollbars should move accordingly.
I've been searching for a while now and have not seen anyone even mention if this is possible. Could anyone point me in the right direction?
Thank you for any help/tips.
|
Matplotlib zooming work in conjunction with wxPython ScrolledWindow
| 0 | 0 | 0 | 448 |
11,109,524 |
2012-06-19T21:13:00.000
| 1 | 0 | 1 | 0 |
python,csv,clojure,lazy-evaluation
| 11,109,571 | 4 | false | 0 | 0 |
The csv module does load the data lazily, one row at a time.
| 3 | 5 | 1 |
Using Python's csv module, is it possible to read an entire, large, csv file into a lazy list of lists?
I am asking this, because in Clojure there are csv parsing modules that will parse a large file and return a lazy sequence (a sequence of sequences). I'm just wondering if that's possible in Python.
|
Can csv data be made lazy?
| 0.049958 | 0 | 0 | 2,302 |
11,109,524 |
2012-06-19T21:13:00.000
| 2 | 0 | 1 | 0 |
python,csv,clojure,lazy-evaluation
| 11,109,589 | 4 | false | 0 | 0 |
Python's csv.reader and DictReader objects are lazy iterators. A row is produced only when the object's next() method is called.
| 3 | 5 | 1 |
Using Python's csv module, is it possible to read an entire, large, csv file into a lazy list of lists?
I am asking this, because in Clojure there are csv parsing modules that will parse a large file and return a lazy sequence (a sequence of sequences). I'm just wondering if that's possible in Python.
|
Can csv data be made lazy?
| 0.099668 | 0 | 0 | 2,302 |
11,109,524 |
2012-06-19T21:13:00.000
| 6 | 0 | 1 | 0 |
python,csv,clojure,lazy-evaluation
| 11,109,568 | 4 | false | 0 | 0 |
The csv module's reader is lazy by default.
It will read a line in at a time from the file, parse it to a list, and return that list.
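For instance, rows are only parsed as you pull them:

```python
import csv
import io

# io.StringIO stands in for an open file object here.
data = io.StringIO("a,b\n1,2\n3,4\n")
reader = csv.reader(data)   # nothing has been read or parsed yet

header = next(reader)       # parses just the first line
row = next(reader)          # parses the next line, on demand
```

Iterating with a plain `for row in reader:` loop works the same way, so even a huge file is processed one row at a time.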
| 3 | 5 | 1 |
Using Python's csv module, is it possible to read an entire, large, csv file into a lazy list of lists?
I am asking this, because in Clojure there are csv parsing modules that will parse a large file and return a lazy sequence (a sequence of sequences). I'm just wondering if that's possible in Python.
|
Can csv data be made lazy?
| 1 | 0 | 0 | 2,302 |
11,110,902 |
2012-06-19T23:27:00.000
| 1 | 0 | 0 | 0 |
java,python,flash
| 11,113,116 | 1 | false | 1 | 1 |
Download and install is a harder sell. People are more reluctant to do it, and once they have done it, you own the problem of platform compatibility, and you have installed code to update or avoid as your game evolves.
Java applets eliminate all that mess. Presumably also flash or html5.
| 1 | 0 | 0 |
I've spent some time looking over the various threads here on stackoverflow and while I saw a lot of posts and threads regarding various engines that could be used in game development, I haven't seen very much discussion regarding the various platforms that they can be used on.
In particular, I'm talking about browser games vs. desktop games.
I want to develop a simple 3D networked multiplayer game - roughly on the graphics level of Paper Mario and gameplay with roughly the same level of interaction as a hack & slash action/adventure game - and I'm having a hard time deciding what platform I want to target with it. I have some experience with using C++/Ogre3D and Python/Panda3D, but I'm wondering if it's worth it to spend the extra time to learn another language and another engine/toolkit just so that the game can be played in a browser window (I'm looking at jMonkeyEngine right now).
For simple & short games the newgrounds approach (go to the site, click "play now", instant gratification) seems to work well. What about for more complex games? Is there a point where the complexity of a game is enough for people to say "ok, I'm going to download and play that"? Is it worth it to go with engines that are less-mature, have less documentation, have fewer features, and smaller communities* just so that a (possibly?) larger audience can be reached? Does anyone have any experiences with decisions like this?
Thanks!
(* With the exception of flash-based engines it seems like most of the other approaches have these downsides when compared to what is available for desktop-based environments. I'd go with flash, but I'm worried that flash's 3D capabilities aren't mature enough right now to do what I want easily).
|
Game development: "Play Now" via website vs. download & install
| 0.197375 | 0 | 0 | 277 |
11,111,632 |
2012-06-20T01:21:00.000
| 4 | 0 | 1 | 0 |
python,c,header,constants,organized
| 11,111,661 | 5 | false | 0 | 0 |
Make a separate file constants.py, and put all globally-relevant constants in there. Then you can import constants to refer to them as constants.SPAM or do the (questionable) from constants import * to refer to them simply as SPAM or EGGS.
While we're here, note that Python doesn't support constant constants. The convention is just to name them in ALL_CAPS and promise not to mutate them.
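A sketch of what such a constants.py might hold (the names are invented for illustration):

```python
# constants.py - shared "constants" for the whole project.
# ALL_CAPS names signal "do not mutate" by convention.
SPAM = 42
COLORS = ("red", "green", "blue")            # a tuple, so it can't be mutated
ERROR_CODES = {1: "not found", 2: "denied"}
```

Elsewhere you would just write `import constants` and refer to `constants.SPAM` or `constants.ERROR_CODES[1]`.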
| 1 | 40 | 0 |
First time user on stack overflow and I'm excited to be here.
INTRO: I recently began the magical adventure into the world of Python programming - I love it. Now everything has gone smoothly in my awkward transition from C, but I'm having trouble creating something which would be synonymous to a HEADER file (.h).
PROBLEM: I have medium sized dictionaries and lists (about 1,000 elements), lengthy enums, and '#defines' (well not really), but I can't find a CLEAN way to organize them all. In C, I would throw them all in a header file and never think again about it, however, in Python that's not possible or so I think.
CURRENT DIRTY SOLUTION: I'm initializing all CONSTANT variables at the top of either the MODULE or FUNCTION (module if multiple functions need it).
CONCLUSION: I would be forever grateful if someone came up with a way to CLEANLY organize constant variables.
THANK YOU SO MUCH!
|
Python - Best/Cleanest way to define constant lists or dictionarys
| 0.158649 | 0 | 0 | 49,575 |
11,112,988 |
2012-06-20T04:49:00.000
| 0 | 0 | 0 | 0 |
python,report,openerp
| 13,524,775 | 1 | false | 1 | 0 |
I believe the entry is multi=True or alternatively I think you will find that just going in to the report and using the remove print button will do it.
| 1 | 2 | 0 |
I generate an Excel report using report_aeroo, but I have a small problem: I made a wizard that generates the Excel report, and I don't want a menu entry for the report on the form view of the model I pass to it. How can I do that?
We have this functionality in RML reports, like "Menu = false" - is there anything similar in report_aeroo?
|
Disable menu in report_Aeroo on form
| 0 | 0 | 0 | 90 |
11,113,767 |
2012-06-20T06:13:00.000
| 3 | 0 | 1 | 0 |
python,floating-point
| 11,113,961 | 3 | true | 0 | 0 |
I'd suggest using the builtin function repr(). From the documentation:
repr(object) -> string
Return the canonical string representation of the object.
For most object types, eval(repr(object)) == object.
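In modern CPython, repr() of a float is the shortest string that converts back to exactly the same value, for example:

```python
x = 0.1 + 0.2            # stored as a binary float, not exactly 0.3
text = repr(x)           # '0.30000000000000004'
restored = float(text)   # identical value, no precision lost
```

So writing `repr(x)` to a text file and reading it back with `float()` round-trips exactly, while still being human-readable.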
| 2 | 7 | 0 |
Like the question says. Converting to / from the (truncated) string representations can affect their precision. But storing them in other formats like pickle makes them unreadable (yes, I want this too).
How can I store floating point numbers in text without losing precision?
|
How to store a floating point number as text without losing precision?
| 1.2 | 0 | 0 | 2,152 |
11,113,767 |
2012-06-20T06:13:00.000
| -1 | 0 | 1 | 0 |
python,floating-point
| 11,113,854 | 3 | false | 0 | 0 |
pickle.dumps will do it, but I believe float(str(floatval)) == floatval too -- at least on the same system...
| 2 | 7 | 0 |
Like the question says. Converting to / from the (truncated) string representations can affect their precision. But storing them in other formats like pickle makes them unreadable (yes, I want this too).
How can I store floating point numbers in text without losing precision?
|
How to store a floating point number as text without losing precision?
| -0.066568 | 0 | 0 | 2,152 |
11,113,896 |
2012-06-20T06:23:00.000
| 1 | 1 | 1 | 0 |
python,git
| 58,983,005 | 6 | false | 0 | 0 |
If GitPython package doesn't work for you there are also the PyGit and Dulwich packages. These can be easily installed through pip.
But, I have personally just used the subprocess calls. Works perfect for what I needed, which was just basic git calls. For something more advanced, I'd recommend a git package.
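A minimal sketch of that subprocess approach (it assumes the git binary is on PATH; the example commands are illustrative):

```python
import subprocess

def git(*args, cwd="."):
    """Run a git command in cwd and return its stdout, raising on failure."""
    result = subprocess.run(
        ("git",) + args,
        cwd=cwd,
        capture_output=True,
        text=True,
        check=True,          # raises CalledProcessError on a non-zero exit
    )
    return result.stdout.strip()

# e.g. git("pull") or git("rev-parse", "HEAD") inside a checkout
```

For a build script this is often all you need; a full package like GitPython only pays off once you want to inspect objects, branches, or diffs programmatically.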
| 1 | 25 | 0 |
I have been asked to write a script that pulls the latest code from Git, makes a build, and performs some automated unit tests.
I found that there are two readily available Python packages for interacting with Git: GitPython and libgit2.
What approach/module should I use?
|
Use Git commands within Python code
| 0.033321 | 0 | 0 | 45,496 |
11,116,094 |
2012-06-20T09:00:00.000
| 0 | 0 | 1 | 0 |
python,virtualenv,ipython
| 11,117,444 | 1 | false | 0 | 0 |
If you are using distribute + pip to manage dependencies simply run pip -l freeze > requirements.txt, this creates a dependency list of all your local packages. Next remove the current virtualenv; rerun the virtualenv command and specify the --no-site-packages option. Activate your new environment and finally pip install -r requirements.txt to download all the dependencies from the requirements file.
| 1 | 0 | 0 |
I am installing virtualenv and it seems to access the system site packages before accessing the local site packages. Ipython is required by some other programs so it was automatically installed. This only happened recently and now it finds that version instead of the one found locally in the environment.
How do I tell the environment to use local packages within the environment before global packages? Can you set the Path variable for within the environment?
It turned out to be an error on my part: I had previously set the PYTHONPATH variable in .bashrc, so the system directories were searched before the local ones, kind of defeating the purpose of virtualenv.
|
Creating a python virtualenv in Ubuntu 12.04 accessing system installed python packages before local venv packages
| 0 | 0 | 0 | 1,354 |
11,118,465 |
2012-06-20T11:24:00.000
| 2 | 0 | 0 | 0 |
wxpython
| 11,120,703 | 1 | true | 0 | 1 |
Take a look at the wxPython demo. I think the demos that you'll find the most helpful are the following:
wx.Grid showing Editors and Renderers
GridLabelRenderer which is from wx.lib.mixins.gridlabelrenderer
Those will probably get you started. When you get stuck, ask on the wxPython mailing list. They'll be able to help you out.
| 1 | 2 | 0 |
I would like to draw the border for each row in a wxPython grid differently (e.g. bold or broken line) based on the data in the respective row. How can I achieve this?
I am using python 2.6 and what I need is some pointers and/or suggestions.
|
How to draw border differently for each row in a Grid of wxPython?
| 1.2 | 0 | 0 | 585 |
11,119,704 |
2012-06-20T12:36:00.000
| 3 | 0 | 1 | 0 |
python,class
| 11,119,800 | 4 | false | 0 | 0 |
A class is the definition of an object. In this sense, the class provides a namespace of sorts, but that is not the true purpose of a class. The true purpose is to define what the object will 'look like' - what the object is capable of doing (methods) and what it will know (properties).
Note that my answer is intended to provide a sense of understanding on a relatively non-technical level, which is what my initial trouble was with understanding classes. I'm sure there will be many other great answers to this question; I hope this one adds to your overall understanding.
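A tiny illustration of that "knows things / does things" split (names invented):

```python
class Dog:
    def __init__(self, name):
        self.name = name            # what the object knows (a property)

    def speak(self):                # what the object can do (a method)
        return self.name + " says woof"

rex = Dog("Rex")                    # an object built from the definition
```

The class is the definition; `rex` is one concrete object made from it, carrying its own `name` and able to `speak()`.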
| 1 | 3 | 0 |
I really hope this is not a question posed by millions of newbies, but my search didn't really give me a satisfying answer.
So my question is fairly simple. Are classes basically containers for functions with their own namespace? What other purposes do they have besides providing a separate namespace and holding functions while making them callable as class attributes? I'm asking in a Python context.
Oh and thanks for the great help most of you have been!
|
Classes How I understand them. Correct me if Im wrong please
| 0.148885 | 0 | 0 | 202 |
11,120,130 |
2012-06-20T12:58:00.000
| 44 | 0 | 0 | 0 |
android,python,jython
| 11,122,066 | 7 | true | 1 | 1 |
Jython doesn't compile to "pure java", it compiles to java bytecode - ie, to *.class files. To develop for Android, one further compiles java bytecode to Dalvik bytecode. This means that, yes, Jython can let you use Python for developing Android, subject to you getting it to play nice with the Android SDK (I haven't personally tried this, so I don't know how hard it actually is) - you do need to make sure you don't depend on any Java APIs that Android doesn't provide, and might need to have some of the Android API .class files around when you run jython. Aside from these niggles, your core idea should work - Jython does, indeed, let write code in Python that interacts with anything else that runs on the JVM.
| 6 | 64 | 0 |
The other day I came across a Python implementation called Jython.
With Jython you can write Java applications with Python and compile them to pure Java.
I was wondering: Android programming is done with Java.
So, is it possible to make Android apps with Jython?
|
Programming Android apps in jython
| 1.2 | 0 | 0 | 34,891 |
11,120,130 |
2012-06-20T12:58:00.000
| -6 | 0 | 0 | 0 |
android,python,jython
| 26,718,708 | 7 | false | 1 | 1 |
sadly No.
Mobile phones only have Java ME (Micro Edition) but Jython requires Java SE (Standard Edition). There is no Jython port to ME, and there is not enough interest to make it worth the effort.
| 6 | 64 | 0 |
The other day I came across a Python implementation called Jython.
With Jython you can write Java applications with Python and compile them to pure Java.
I was wondering: Android programming is done with Java.
So, is it possible to make Android apps with Jython?
|
Programming Android apps in jython
| -1 | 0 | 0 | 34,891 |
11,120,130 |
2012-06-20T12:58:00.000
| 5 | 0 | 0 | 0 |
android,python,jython
| 11,120,423 | 7 | false | 1 | 1 |
As long as it compiles to pure java (with some constraints, as some APIs are not available), but I doubt that python will be of much use in development of android-specific stuff like activities and UI manipulation code.
You also have to take care of application size - that is a serious constraint for mobile development.
| 6 | 64 | 0 |
The other day I came across a Python implementation called Jython.
With Jython you can write Java applications with Python and compile them to pure Java.
I was wondering: Android programming is done with Java.
So, is it possible to make Android apps with Jython?
|
Programming Android apps in jython
| 0.141893 | 0 | 0 | 34,891 |
11,120,130 |
2012-06-20T12:58:00.000
| 3 | 0 | 0 | 0 |
android,python,jython
| 23,649,506 | 7 | false | 1 | 1 |
Yes and no. With Jython you can use Java classes and compile for the JVM. But Android uses the DVM (Dalvik Virtual Machine) and the compiled code is different. You have to use tools to convert JVM bytecode to DVM bytecode.
| 6 | 64 | 0 |
The other day I came across a Python implementation called Jython.
With Jython you can write Java applications with Python and compile them to pure Java.
I was wondering: Android programming is done with Java.
So, is it possible to make Android apps with Jython?
|
Programming Android apps in jython
| 0.085505 | 0 | 0 | 34,891 |
11,120,130 |
2012-06-20T12:58:00.000
| 1 | 0 | 0 | 0 |
android,python,jython
| 53,232,363 | 7 | false | 1 | 1 |
Yes, you can.
Test your python code on your computer and, when it is ok, copy to your Android device.
Install Pydroid from Google Play Store and compile your code again inside the application and you will get your App ready and running.
Use pip inside Pydroid to install any dependencies.
PS: You will need to configure your Android device to install APKs from outside Play Store.
| 6 | 64 | 0 |
The other day I came across a Python implementation called Jython.
With Jython you can write Java applications with Python and compile them to pure Java.
I was wondering: Android programming is done with Java.
So, is it possible to make Android apps with Jython?
|
Programming Android apps in jython
| 0.028564 | 0 | 0 | 34,891 |
11,120,130 |
2012-06-20T12:58:00.000
| -2 | 0 | 0 | 0 |
android,python,jython
| 13,966,835 | 7 | false | 1 | 1 |
It's not possible. You can't use jython with android because the DVM doesn't understand it. DVM is not JVM.
| 6 | 64 | 0 |
The other day I came across a Python implementation called Jython.
With Jython you can write Java applications with Python and compile them to pure Java.
I was wondering: Android programming is done with Java.
So, is it possible to make Android apps with Jython?
|
Programming Android apps in jython
| -0.057081 | 0 | 0 | 34,891 |
11,120,427 |
2012-06-20T13:15:00.000
| 1 | 0 | 0 | 0 |
python,qt,pyqt,grid-layout
| 11,120,871 | 1 | true | 0 | 1 |
You can reimplement keyPressEvent() method for the main widget to catch the pressed keys. Then you can access the desired widget in your layout by calling QGridLayout::itemAtPosition (int row, int column) and then set focus to it.
| 1 | 0 | 0 |
How can I change behavior of how items are selected in QGridLayout by cursor keys? I want to move selection horizontally by left/right cursor keys and vertically by up/down keys.
Who is responsible for it? Layout, items container or tab order?
|
Custom QGridLayout items selection behaviour
| 1.2 | 0 | 0 | 706 |
11,121,223 |
2012-06-20T13:56:00.000
| 0 | 0 | 1 | 0 |
python
| 11,121,523 | 3 | false | 0 | 0 |
To implement your own grep you can use os.walk() and some basic file I/O. We need more information on what specifically is required before we can produce code.
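A rough grep-like sketch using os.walk (the root directory and search string are placeholders):

```python
import os

def find_text(root, needle):
    """Yield (path, line_number, line) for every line containing needle."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, errors="ignore") as f:
                    for lineno, line in enumerate(f, 1):
                        if needle in line:
                            yield path, lineno, line.rstrip("\n")
            except OSError:
                continue      # unreadable file: just skip it
```

Because it is a generator, you can stop at the first hit or collect everything with `list(find_text(root, "some value"))`, depending on what "return the values" should mean for your case.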
| 1 | 0 | 0 |
Does anyone have code to traverse a directory and its subdirectories while searching for a text value, and then return the matching values in Python once found?
|
Traversing a directory and returning text in python
| 0 | 0 | 0 | 112 |
11,121,395 |
2012-06-20T14:05:00.000
| 0 | 0 | 0 | 0 |
python,sql
| 11,121,498 | 1 | false | 0 | 0 |
You should create a table called something like ExpenseCategories, with the columns ExpenseCategory, PrimaryCategory.
This table would have one row for each expense category (which you can enforce with a constraint if you like). You would then join this table with your existing data in SQL.
By the way, in Excel, you could do this with a vlookup() rather than an if(). The vlookup() is analogous to using a lookup table in SQL. The equivalent of an if() would be a giant case statement, which is another possibility.
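On the Python side, the same lookup-table idea is just a dict mapping each sub-category to its primary bucket (the category names here are invented for illustration):

```python
# Maps each detailed expense category to one of three primary buckets.
PRIMARY = {
    "Payroll": "Personnel",
    "Credit Card Pmt": "Financing",
    "Telephone": "Operations",
}

def primary_category(expense):
    # Anything not in the table falls into a catch-all bucket.
    return PRIMARY.get(expense, "Uncategorized")
```

With ~100 sub-categories the dict stays a single readable block, and you can apply `primary_category` to every row while preparing the file for upload, instead of maintaining a giant IF chain in Excel.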
| 1 | 0 | 0 |
I am new with SQL/Python.
I was wondering if there is a way for me to sort or categorize expense items into three primary categories.
That is I have a 56,000 row list with about 100+ different expense categories. They vary from things like Payroll, Credit Card Pmt, telephone, etc.
I would like to put them into three categories, for the sake of analysis.
I know I could do a GIANT IF statement in Excel, but that would be really time consuming, based on the fact that there are 100+ sub categories.
Is there any way to expedite the process with Python or even in Excel?
Also, I don't know if this is material or not, but I am preparing this file to be uploaded to a SQL database.
|
Method for Sorting a list of expense categories into specific categories
| 0 | 1 | 0 | 163 |
11,124,578 |
2012-06-20T17:04:00.000
| 7 | 0 | 1 | 0 |
python,numpy,ipython
| 36,037,315 | 9 | false | 0 | 0 |
As a simpler alternative to the accepted answer, on linux:
just define an alias, e.g. put alias pynp='python -i -c"import numpy as np"' in your ~/.bash_aliases file. You can then invoke python+numpy with pynp, and you can still use just python with python. Python scripts' behaviour is left untouched.
| 2 | 136 | 0 |
I find myself typing import numpy as np almost every single time I fire up the python interpreter. How do I set up the python or ipython interpreter so that numpy is automatically imported?
|
Automatically import modules when entering the python or ipython interpreter
| 1 | 0 | 0 | 42,910 |
11,124,578 |
2012-06-20T17:04:00.000
| 0 | 0 | 1 | 0 |
python,numpy,ipython
| 71,124,720 | 9 | false | 0 | 0 |
I created a little script to get ipython initialized with the code you want.
Create a start.ipy file at your project root folder.
Edit the created file with all the things you need to get into ipython.
ipython profile create <your_profile_name>. Tip, do not add the word "profile" to the name because ipython already includes it.
cp start.ipy ~/.ipython/profile_<your_profile_name>/startup/start.ipy
Run ipython --profile=<your_profile_name> every time you need everything loaded in ipython.
With this solution, you don't need to set any env variable up. You will need to copy the start.ipy file to the ipython folder every time you modify it, though.
| 2 | 136 | 0 |
I find myself typing import numpy as np almost every single time I fire up the python interpreter. How do I set up the python or ipython interpreter so that numpy is automatically imported?
|
Automatically import modules when entering the python or ipython interpreter
| 0 | 0 | 0 | 42,910 |
11,126,372 |
2012-06-20T19:00:00.000
| 2 | 0 | 0 | 1 |
python,linux,sockets,tcp,twisted
| 11,127,146 | 3 | false | 0 | 0 |
There is no function for this in the BSD Sockets API that I have ever seen. I question whether it is really a useful measure of load. You are assuming no connection pooling by clients, for one thing, and you are also assuming that latency is entirely manifested as pending connections. But as you can't get the number anyway the point is moot.
| 1 | 8 | 0 |
Is there a way to find out the current number of connection attempts awaiting accept() on a TCP socket on Linux?
I suppose I could count the number of accepts() that succeed before hitting EWOULDBLOCK on each event loop, but I'm using a high-level library (Python/Twisted) that hides these details. Also it's using epoll() rather than an old-fashioned select()/poll() loop.
I am trying to get a general sense of the load on a high-performance non-blocking network server, and I think this number would be a good characterization. Load average/CPU statistics aren't helping much, because I'm doing a lot of disk I/O in concurrent worker processes. Most of these stats on Linux count time spent waiting on disk I/O as part of the load (which it isn't, for my particular server architecture). Latency between accept() and response isn't a good measure either, since each request usually gets processed very quickly once the server gets around to it. I'm just trying to find out how close I am to reaching a breaking point where the server can't dispatch requests faster than they are coming in.
|
Determine the current number of backlogged connections in TCP listen() queue
| 0.132549 | 0 | 1 | 3,268 |
11,127,088 |
2012-06-20T19:48:00.000
| 3 | 0 | 1 | 0 |
python,nlp,nltk,stanford-nlp
| 11,235,081 | 1 | true | 0 | 0 |
The underlying CRF model of a named entity tagger such as Stanford NER can actually be used to recognize anything, not just named entities. There are certainly people who have used them quite successfully to pick out various kinds of terminological phrases. The software can certainly give you marked up token sequences in context.
There is, however, a choice as to whether to approach this in a "more unsupervised" way, where something like NP chunking and collocation statistics are used, or the fully supervised way of a straightforward CRF, where you're providing lots of annotated data of the kind of phrases you'd like to get out.
| 1 | 2 | 0 |
I'm trying to design a somewhat unconventional NER system that marks certain multiword strings as single units/tokens.
There are a lot of cool NER tools out there, but I have a few special needs that make it pretty much impossible to use something straight out of the box:
First, the entities can't just be extracted and printed out in a list--they need to be marked in some way and consolidated into tokens.
Second, categorization is not important--Person/Organization/Location doesn't matter (at least in the output).
Third, these aren't just your typical ENAMEX named entities we're looking for. We want companies and organizations, but also concepts like 'climate change' and 'gay marriage.' I've seen tags like these on some tools out there, but all of them were 'extraction-style'.
How would I go about getting this type of functionality? Would training the Stanford tagger on my own, hand-annotated dataset do the job (where 'climate change'-esque phrases are labeled MISC or something)? Or am I better off just making a shortlist of the 'weird' entities and checking the text against that after it's been run through a regular NER system?
Thanks so much!
|
Unconventional named-entity recognition
| 1.2 | 0 | 0 | 1,335 |
11,127,205 |
2012-06-20T19:57:00.000
| 1 | 0 | 0 | 1 |
python,windows,ssh,cygwin
| 11,140,725 | 1 | false | 0 | 0 |
Got it. The solution is simply to run Cygwin.bat from the c:\cygwin folder, which puts you into a Cygwin terminal, allowing the use of all of the needed functionality. The same also works for the mozilla-build terminal that I needed. :-D
| 1 | 1 | 0 |
I am trying to port my linux network automation to a set of Windows machines. The program I have starts with a single admin console, and transmits instructions over sockets and ssh tunnels to client machines instructing them to run specific mozmill/python scripts. I have gotten the individual client script to run on windows using cygwin, but I need to be able to call them from an ssh session, and ssh-ing in through Cygwin's sshd distribution logs me in with a basic Bash terminal instead of the Cygwin terminal. How can I switch which terminal is used in this situation?
|
How can I ssh into a windows box running cygwin/sshd and have the resulting terminal session use cygwin instead of default BASH?
| 0.197375 | 0 | 0 | 865 |
11,127,296 |
2012-06-20T20:03:00.000
| 10 | 0 | 1 | 0 |
python,ironpython,jython,pypy,python-stackless
| 11,127,465 | 3 | true | 0 | 0 |
Updated to include corrections from kind people in the comments section:
Of the python implementations you mention, the original and most commonly used is CPython (python on your list - which is an interpreter for python implemented in C and running as a native application) and is available for pretty much every platform under the sun. The other variants are:
IronPython: runs on the .Net common runtime (interfaces more cleanly with other .Net apps)
Jython: runs on the JVM (interfaces more cleanly with Java and other JVM apps)
PyPy: A Python interpreter which includes a just-in-time compiler which can significantly increase program execution performance. The interpreter and JIT are implemented in RPython (rather than C), a restricted subset of Python which is amenable to static analysis and type inference.
Stackless Python: An implementation of a python interpreter which doesn't rely on recursion on the native C runtime stack, and therefore allows a load of other interesting programming constructs and techniques (including lightweight threads) not available in CPython.
There are a large variety of libraries for Python (one of the major advantages of the language), the majority developed for CPython. For a number of compatibility reasons, none of the variants above currently support as many as the main implementation. So for this reason, CPython is the best place to start, and then if your future requirements fit one of the other platforms - you'll be in a good place to learn the variations from a solid grounding in the basics.
| 3 | 5 | 0 |
I want to learn python so I downloaded it from the python site and I saw 4 other kinds of pythons appear:
Python (normal)
IronPython
Jython
PyPy
Stackless Python
I can't really find what the differences are between these.
Also, which one is the best to start with?
|
What "kind" of Python to start with?
| 1.2 | 0 | 0 | 592 |
11,127,296 |
2012-06-20T20:03:00.000
| 3 | 0 | 1 | 0 |
python,ironpython,jython,pypy,python-stackless
| 11,127,316 | 3 | false | 0 | 0 |
Start with Python.
The alternatives are for special use cases that apply mostly when you are integrating Python with other languages, which is a very advanced usage of the language.
| 3 | 5 | 0 |
I want to learn python so I downloaded it from the python site and I saw 4 other kinds of pythons appear:
Python (normal)
IronPython
Jython
PyPy
Stackless Python
I can't really find what the differences are between these.
Also, which one is the best to start with?
|
What "kind" of Python to start with?
| 0.197375 | 0 | 0 | 592 |
11,127,296 |
2012-06-20T20:03:00.000
| 4 | 0 | 1 | 0 |
python,ironpython,jython,pypy,python-stackless
| 11,127,338 | 3 | false | 0 | 0 |
Python. All the documentation you'll find for learning the language assumes this. Then if you find a need for one of the other implementations the documentation will assume you know Python and explain the differences.
| 3 | 5 | 0 |
I want to learn python so I downloaded it from the python site and I saw 4 other kinds of pythons appear:
Python (normal)
IronPython
Jython
PyPy
Stackless Python
I can't really find what the differences are between these.
Also, which one is the best to start with?
|
What "kind" of Python to start with?
| 0.26052 | 0 | 0 | 592 |
11,129,844 |
2012-06-20T23:59:00.000
| 0 | 0 | 1 | 0 |
python
| 11,130,087 | 8 | false | 0 | 0 |
I would be tempted to research a little into some GUI that could output graphviz (DOT format) with annotations, so you could create the rooms and links between them (a sort of graph). Then later, you might want another format to support heftier info.
But it should make it easy to create maps and links between rooms (containing items, traps, etc.), and you could use common libraries to produce graphics of the maps in PNG or something.
Just a random idea off the top of my head - feel free to ignore!
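As a sketch of that idea, DOT output is just string building, so no library is strictly needed; the data layout below is invented:

```python
def rooms_to_dot(links):
    """Render room connections as a Graphviz DOT digraph string.

    `links` is an iterable of (from_room, to_room) pairs.
    """
    lines = ["digraph map {"]
    for src, dst in links:
        lines.append('    "%s" -> "%s";' % (src, dst))
    lines.append("}")
    return "\n".join(lines)
```

The resulting text can be fed to the `dot` command-line tool to produce a PNG of the map.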
| 3 | 5 | 0 |
As a relatively new programmer, I have several times encountered situations where it would be beneficial for me to read and assemble program data from an external source rather than have it written in the code. This is mostly the case when there are a large number of objects of the same type. In such scenarios, object definitions quickly take up a lot of space in the code and add unnecessary impediment to readability.
As an example, I've been working on a text-based RPG, which has a large number of rooms and items of which to keep track. Even a few items and rooms lead to massive blocks of object creation code.
I think it would be easier in this case to use some format of external data storage, reading from a file. In such a file, items and rooms would be stored by name and attributes, so that they could be parsed into an object with relative ease.
What formats would be best for this? I feel a full-blown database such as SQL would add unnecessary bloat to a fairly light script. On the other hand, an easy method of editing this data is important, either through an external application, or another python script. On the lighter end of things, the few I heard most often mentioned are XML, JSON, and YAML.
From what I've seen, XML does not seem like the best option, as many seem to find it complex and difficult to work with effectively.
JSON and YAML seem like either might work, but I don't know how easy it would be to edit either externally.
Speed is not a primary concern in this case. While faster implementations are of course desirable, it is not a limiting factor to what I can use.
I've looked around both here and via Google, and while I've seen quite a bit on the topic, I have not been able to find anything specifically helpful to me. Will formats like JSON or YAML be sufficient for this, or would I be better served with a full-blown database?
|
Optimal format for simple data storage in python
| 0 | 1 | 0 | 4,619 |
11,129,844 |
2012-06-20T23:59:00.000
| 5 | 0 | 1 | 0 |
python
| 11,129,974 | 8 | false | 0 | 0 |
Though there are good answers here already, I would simply recommend JSON for your purposes for the sole reason that since you're a new programmer it will be the most straightforward to read and translate as it has the most direct mapping to native Python data types (lists [] and dictionaries {}). Readability goes a long way and is one of the tenets of Python programming.
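For example, a room file in JSON maps directly onto dicts and lists; the schema below is invented purely for illustration:

```python
import json

# Pretend contents of rooms.json, read with open("rooms.json") in practice.
raw = """
{
  "rooms": [
    {"name": "Hall",    "exits": ["Kitchen"], "items": ["torch"]},
    {"name": "Kitchen", "exits": ["Hall"],    "items": []}
  ]
}
"""

data = json.loads(raw)
# Index rooms by name so game code can look them up directly.
rooms = {room["name"]: room for room in data["rooms"]}
```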
| 3 | 5 | 0 |
As a relatively new programmer, I have several times encountered situations where it would be beneficial for me to read and assemble program data from an external source rather than have it written in the code. This is mostly the case when there are a large number of objects of the same type. In such scenarios, object definitions quickly take up a lot of space in the code and add unnecessary impediment to readability.
As an example, I've been working on a text-based RPG, which has a large number of rooms and items of which to keep track. Even a few items and rooms lead to massive blocks of object creation code.
I think it would be easier in this case to use some format of external data storage, reading from a file. In such a file, items and rooms would be stored by name and attributes, so that they could be parsed into an object with relative ease.
What formats would be best for this? I feel a full-blown database such as SQL would add unnecessary bloat to a fairly light script. On the other hand, an easy method of editing this data is important, either through an external application, or another python script. On the lighter end of things, the few I heard most often mentioned are XML, JSON, and YAML.
From what I've seen, XML does not seem like the best option, as many seem to find it complex and difficult to work with effectively.
JSON and YAML seem like either might work, but I don't know how easy it would be to edit either externally.
Speed is not a primary concern in this case. While faster implementations are of course desirable, it is not a limiting factor to what I can use.
I've looked around both here and via Google, and while I've seen quite a bit on the topic, I have not been able to find anything specifically helpful to me. Will formats like JSON or YAML be sufficient for this, or would I be better served with a full-blown database?
|
Optimal format for simple data storage in python
| 0.124353 | 1 | 0 | 4,619 |
11,129,844 |
2012-06-20T23:59:00.000
| 1 | 0 | 1 | 0 |
python
| 11,129,853 | 8 | false | 0 | 0 |
If you want editability, YAML is the best option of the ones you've named, because it doesn't have <> or {} required delimiters.
| 3 | 5 | 0 |
As a relatively new programmer, I have several times encountered situations where it would be beneficial for me to read and assemble program data from an external source rather than have it written in the code. This is mostly the case when there are a large number of objects of the same type. In such scenarios, object definitions quickly take up a lot of space in the code and add unnecessary impediment to readability.
As an example, I've been working on a text-based RPG, which has a large number of rooms and items of which to keep track. Even a few items and rooms lead to massive blocks of object creation code.
I think it would be easier in this case to use some format of external data storage, reading from a file. In such a file, items and rooms would be stored by name and attributes, so that they could be parsed into an object with relative ease.
What formats would be best for this? I feel a full-blown database such as SQL would add unnecessary bloat to a fairly light script. On the other hand, an easy method of editing this data is important, either through an external application, or another python script. On the lighter end of things, the few I heard most often mentioned are XML, JSON, and YAML.
From what I've seen, XML does not seem like the best option, as many seem to find it complex and difficult to work with effectively.
JSON and YAML seem like either might work, but I don't know how easy it would be to edit either externally.
Speed is not a primary concern in this case. While faster implementations are of course desirable, it is not a limiting factor to what I can use.
I've looked around both here and via Google, and while I've seen quite a bit on the topic, I have not been able to find anything specifically helpful to me. Will formats like JSON or YAML be sufficient for this, or would I be better served with a full-blown database?
|
Optimal format for simple data storage in python
| 0.024995 | 1 | 0 | 4,619 |
11,130,261 |
2012-06-21T00:59:00.000
| 1 | 0 | 0 | 0 |
python,postgresql,psycopg2
| 11,130,568 | 1 | true | 0 | 0 |
I'll leave you to look at the psycopg2 library properly - this is off the top of my head (not had to use it for a while, but IIRC the documentation is ample).
The steps are:
Read column names from CSV file
Create "CREATE TABLE whatever" ( ... )
Maybe INSERT data
import csv
import os.path
my_csv_file = '/home/somewhere/file.csv'
table_name = os.path.splitext(os.path.split(my_csv_file)[1])[0]
cols = next(csv.reader(open(my_csv_file)))
You can go from there...
Create a SQL query (possibly using a templating engine for the fields and then issue the insert if needs be)
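Putting those steps together might look like the following sketch. Here sqlite3 stands in for psycopg2's cursor (the SQL is the same shape), the file contents are faked with a string, and typing every column as TEXT is a simplifying assumption:

```python
import csv
import io
import os.path
import sqlite3

my_csv_path = "/home/somewhere/file.csv"         # hypothetical path
table_name = os.path.splitext(os.path.basename(my_csv_path))[0]  # -> "file"
csv_text = "name,age,city\nalice,30,Oslo\n"      # pretend file contents
cols = next(csv.reader(io.StringIO(csv_text)))   # header row: column names

# Quote identifiers; with psycopg2 you would execute the same DDL string.
ddl = 'CREATE TABLE "%s" (%s)' % (
    table_name, ", ".join('"%s" TEXT' % c for c in cols))

conn = sqlite3.connect(":memory:")
conn.execute(ddl)
```

In a real script you would then batch the remaining CSV rows into INSERT statements via executemany.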
| 1 | 3 | 0 |
I would like to get some understanding of a question that I was pretty sure was clear to me. Is there any way to create a table, using psycopg2 or any other Python Postgres database adapter, with the name corresponding to the .csv file and (probably most important) with the columns that are specified in the .csv file?
|
Dynamically creating table from csv file using psycopg2
| 1.2 | 1 | 0 | 2,067 |
11,130,434 |
2012-06-21T01:32:00.000
| 2 | 0 | 0 | 1 |
python,google-app-engine,jinja2,authentication
| 11,132,393 | 1 | true | 1 | 0 |
Your choices are Google's own authentication, OpenID, some third party solution or roll your own. Unless you really know what you are doing, do not choose option 4! Authentication is very involved, and if you make a single mistake or omission you're opening yourself up to a lot of pain. Option 3 is not great because you have to ensure the author really knows what they are doing, which either means trusting them or... really knowing what you're doing!
So I'd suggest you chose between Google's authentication and OpenID. Both are well trusted; Google is going to be easier to implement because there are several OpenID account providers you have to test against; but Google authentication may turn away some users who refuse to have Google accounts.
| 1 | 1 | 0 |
Having ease of implementation a strong factor but security also an issue what would the best user authentication method for google app engine be? My goal is to have a small very specific social network. I know how to make my own but making it hack-proof is a little out of my league right now. I have looked at OpenID and a few others.
I am using Jinja2 as my template system and writing all of my web app code in python.
Thanks!
|
top user authentication method for google app engine
| 1.2 | 0 | 0 | 637 |
11,132,059 |
2012-06-21T05:33:00.000
| 3 | 0 | 0 | 1 |
python,apache,web-applications,webserver
| 11,132,477 | 3 | false | 1 | 0 |
"Apart from Apache web server is there any open source web servers available for web application development?" Are you looking for an HTTP server or a web framework? The two are quite different.
HTTP servers simply send/receive requests, among other tasks. Yes, you can use PHP and other tools, most commonly through CGI or FCGI, but fundamentally an HTTP server simply accepts HTTP requests; some content may be dynamic if it's coming from an underlying framework.
A web framework is a collection of tools used to generate dynamic content, or web apps. Many frameworks come with a built-in HTTP server so you don't have to configure one on your own, but those servers aren't as powerful or as robust, since the underlying frameworks tend to concentrate on generating the content.
nginx is one of my favorite HTTP servers, among the many out there, since it tends to be one of the easier ones to configure.
As for web frameworks, there are many out there; in the Python community (given the python tag) Django tends to be quite popular, since it includes virtually all the tools you'd ever need to deploy a web app. These include URL dispatching, a database engine plus ORM (Object Relational Mapper), and its own templating engine, which renders dynamic HTML in a deliberately limited language to remove as much logic as possible from the rendering phase.
Usually Django apps are deployed behind nginx, to control multiple instances of sites on the server, as well as to serve static content; web frameworks are not great at that.
There are also micro web frameworks like Bottle, which is basically a single Python file. It's quite cool; I usually use SQLAlchemy as the ORM when building simple Bottle apps.
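The dividing line between the two worlds is an interface like WSGI; a minimal framework-free app that wsgiref (or nginx via a WSGI gateway) can host might look like this sketch:

```python
from wsgiref.simple_server import make_server

def app(environ, start_response):
    """Smallest possible WSGI application: one plain-text response."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello from a framework-free app\n"]

# To actually serve it (blocks forever, so left commented out here):
# make_server("127.0.0.1", 8000, app).serve_forever()
```

A framework like Django or Bottle is, at bottom, a much richer version of that `app` callable.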
| 2 | 0 | 0 |
Apart from Apache web server is there any open source web servers available for web application development?
I am looking for a web server for developing Python web applications, so I can deploy and test them.
|
Is any other open source web server available other than Apache webserver for web application development?
| 0.197375 | 0 | 0 | 1,701 |
11,132,059 |
2012-06-21T05:33:00.000
| 0 | 0 | 0 | 1 |
python,apache,web-applications,webserver
| 11,132,139 | 3 | false | 1 | 0 |
If you simply Google "Open Source Web Server" you'll get a lot of results.
Nginx
Lighttpd
Cherokee
Savant
Tornado
Nginx is probably the best alternative.
| 2 | 0 | 0 |
Apart from Apache web server is there any open source web servers available for web application development?
I am looking for a web server for developing Python web applications, so I can deploy and test them.
|
Is any other open source web server available other than Apache webserver for web application development?
| 0 | 0 | 0 | 1,701 |
11,134,169 |
2012-06-21T08:19:00.000
| 1 | 0 | 0 | 0 |
python,wxpython,sizer
| 11,139,906 | 1 | true | 0 | 1 |
Typically when I see that issue, all that is needed is to call Layout right before you call the frame's Show() method. I would call Layout on the top sizer or the frame object. If that doesn't work, post a small runnable example and I'll update my answer.
| 1 | 0 | 0 |
I am using Mac OS X 10.6.8, wxPython 2.9.3.1 and 64 Bit Python v2.7.2:
My problem is not that easy to describe in a few key words, thats why I probably did not find a solution yet.
I just just create a very simple wx.frame with some objects and arrange them with a sizer. If I then
show the frame, all elements are displayed on top of each other for a second. Then everything jumps into place and is displayed correctly.
I tried to call all kinds of functions before showing my frame, like Refresh, wx.Yield, Update, etc., but nothing helped.
Is there some function to prevent a frame to be shown before it is drawn correctly or to draw it but to not show it yet?
Thank you!
|
wx Python frame elements not displayed correctly right after show using sizers
| 1.2 | 0 | 0 | 218 |
11,134,397 |
2012-06-21T08:35:00.000
| 4 | 0 | 1 | 0 |
python,version-control,python-sphinx
| 11,134,443 | 5 | false | 0 | 0 |
No, just like a compiler the python interpreter should be installed system-wide. The same thing applies to tools such as sphinx and docutils (which is most likely installed when installing sphinx via your distribution's package manager).
The same applies to most python packages, especially those that are used by the application itself and available via PyPi.
| 1 | 4 | 0 |
Where practical, I like to have tools required for a build under version control. The ideal is that a fresh checkout will run on any machine with a minimal set of tools required to be installed first.
Is it practical to have Python under version control?
How about python packages? My naive attempt to use Sphinx without installing it fails due to a dependency on Docutils. Is there a way to do use it without installing?
|
Is it practical to put Python under version control?
| 0.158649 | 0 | 0 | 461 |
11,134,610 |
2012-06-21T08:50:00.000
| 1 | 0 | 1 | 0 |
python,vector
| 11,134,685 | 2 | false | 0 | 0 |
The plane perpendicular to a vector ⟨A, B, C⟩ has the general equation Ax + By + Cz + K = 0.
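The cross product is in fact exactly the right tool here. A hedged sketch (function names are my own) that builds two unit vectors spanning the plane perpendicular to the velocity, then places particles on that plane at radius dis:

```python
import math

def cross(a, b):
    """Cross product of two 3D vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(a):
    n = math.sqrt(sum(c * c for c in a))
    return tuple(c / n for c in a)

def particles_around(pos, vel, dis, count):
    """Points on the circle of radius `dis`, centred on `pos`,
    lying in the plane perpendicular to `vel` (vel must be nonzero)."""
    # Any axis not parallel to vel works as a helper for the first cross product.
    helper = ((1.0, 0.0, 0.0)
              if abs(vel[0]) < abs(vel[1]) or abs(vel[0]) < abs(vel[2])
              else (0.0, 1.0, 0.0))
    u1 = normalize(cross(vel, helper))   # perpendicular to vel
    u2 = normalize(cross(vel, u1))       # perpendicular to vel and u1
    pts = []
    for k in range(count):
        t = 2.0 * math.pi * k / count
        pts.append(tuple(p + dis * (math.cos(t) * a + math.sin(t) * b)
                         for p, a, b in zip(pos, u1, u2)))
    return pts
```

Every returned point sits at distance dis from pos, with its offset perpendicular to the velocity.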
| 1 | 0 | 0 |
I have an object A moving with velocity (v1, v2, v3) in 3D space.
Object position is (px, py, pz).
Now I want to add particles around object A (at radius dis) on the plane perpendicular to its velocity direction.
I found something called the "cross product" but couldn't see how to use it in this case.
Anyone can help?
I'm new to python and don't really know how to crack it.
|
Find perpendicular to given vector (Velocity) in 3D
| 0.099668 | 0 | 0 | 2,293 |
11,135,860 |
2012-06-21T10:04:00.000
| 1 | 0 | 0 | 0 |
python,reportlab
| 53,017,768 | 3 | false | 0 | 0 |
Another strategy is to make a very thin row and then fill it with
('BACKGROUND', (0,1), (-1,1), colors.black)
| 2 | 1 | 0 |
I know about the LINEABOVE and LINEBELOW styles; I was wondering if there is a way to draw a line in the table with a specified width.
I'm trying to add a line that does not 'touch' the border of the table. LINEABOVE would work perfectly if I could add a bit of padding between the cells.
|
Reportlab Insert horizontal line in table
| 0.066568 | 0 | 0 | 7,900 |
11,135,860 |
2012-06-21T10:04:00.000
| 0 | 0 | 0 | 0 |
python,reportlab
| 11,140,494 | 3 | true | 0 | 0 |
You can just draw a line within the contents of the cell using the Graphics module. You can put essentially anything inside a cell and lay it out within the table cell to achieve what you want.
| 2 | 1 | 0 |
I know about the LINEABOVE and LINEBELOW styles, i was wondering if there is a way to draw a line in the table with a specified width.
I'm trying to add a line that does not 'touch' the border of the table, LINEABOVE would work perfectly if i could add a bit of padding between the cells.
|
Reportlab Insert horizontal line in table
| 1.2 | 0 | 0 | 7,900 |
11,140,710 |
2012-06-21T14:46:00.000
| 4 | 0 | 0 | 0 |
python,oracle,graphics,2d,simulation
| 11,140,908 | 1 | false | 0 | 1 |
There is no connection between the library you use for graphics and the source of the data you are representing. When you say 2d I assume you are talking about graphs. In that case, try matplotlib or google charts api. Both work nicely with python and can represent your data in attractive looking 2d graphs
| 1 | 0 | 0 |
I have a very large amount of data that I wish to represent in 2D. What graphics libraries would be compatible with getting data from Oracle databases and still have a smooth look to them? I don't wish to use tkinter.
|
python 2d graphics that works with Oracle
| 0.664037 | 0 | 0 | 139 |
11,141,336 |
2012-06-21T15:18:00.000
| 0 | 1 | 0 | 0 |
c++,python,image-processing,opencv
| 60,879,363 | 6 | false | 0 | 0 |
pip install opencv-contrib-python (this adds video support). To install a specific version, use pip install opencv-contrib-python==<version>.
| 1 | 21 | 1 |
I'm currently using FileStorage class for storing matrices XML/YAML using OpenCV C++ API.
However, I have to write a Python Script that reads those XML/YAML files.
I'm looking for existing OpenCV Python API that can read the XML/YAML files generated by OpenCV C++ API
|
FileStorage for OpenCV Python API
| 0 | 0 | 0 | 16,840 |
11,141,383 |
2012-06-21T15:20:00.000
| 0 | 0 | 1 | 0 |
python
| 11,141,530 | 8 | false | 0 | 0 |
A dictionary, by definition, requires that keys be unique identifiers. You can either:
Use a different data structure such as a list or tuple that allows for duplicate entries.
Use a unique identifier for your dictionary keys, much the same way that a database might use an auto-incrementing field for its key id.
If you are storing a lot of request/response pairs, you might be better off with a database anyway. It's certainly something to consider.
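For the request/response case, the usual pattern is to keep one key but map it to a list of values, e.g. with collections.defaultdict; the key and record shapes below are invented:

```python
from collections import defaultdict

pairs = defaultdict(list)

# Several exchanges between the same two nodes share one key,
# so nothing gets overwritten: each append adds another pair.
pairs[("nodeA", "nodeB")].append({"request": "GET /a", "response": "200"})
pairs[("nodeA", "nodeB")].append({"request": "GET /b", "response": "404"})
```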
| 1 | 3 | 0 |
Is there any way to store duplicate keys in a dictionary?
I have a specific requirement to form pairs of requests and responses.
Requests from a particular node to another particular node form the same keys. I need to store both of those.
But if I tried to add them to dictionary, first one is being replaced by second. Is there any way?
|
Is there a way to preserve duplicate keys in python dictionary
| 0 | 0 | 0 | 23,717 |
11,142,044 |
2012-06-21T15:56:00.000
| 0 | 1 | 1 | 0 |
python,django,encryption,keystore,keychain
| 11,146,119 | 2 | true | 0 | 0 |
On Mac OS X, the keychain can be accessed from the shell using the security program. You can search for a specific private key using security find-identity -s <search term>, and export it to a file using security export (more information on those commands can be obtained from security -h <command>). I have not seen python bindings yet, but it should be easy to wrap the functionality you need in a subprocess.call call.
| 1 | 1 | 0 |
Other than not encrypting, I have no choice but to have the RSA private key on the same system as the data encrypted asymmetrically. (My system has no access to remote servers, etc.) So I figured using Seahorse (Ubuntu) or Keychain Access (Apple) might be useful?
Is it possible to access the private key stored in one of these from Python?
Are there other approaches to this besides not storing the private key locally?
I need reversible encryption, so hashing is not an option.
|
can python access an RSA private key stored locally in a keystore like seahorse / Apple Keychain
| 1.2 | 0 | 0 | 1,455 |
11,142,397 |
2012-06-21T16:15:00.000
| -1 | 0 | 1 | 0 |
python,list,tuples,immutability
| 35,129,521 | 7 | false | 0 | 0 |
Instead of a tuple, you can use frozenset. frozenset creates an immutable set. Note, though, that a frozenset is unordered and its members must be hashable, so you can store tuples (not lists) as members of a frozenset, and you can access every element with a single for loop.
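A quick sketch of the trade-off: both types are immutable, but only the tuple preserves order and duplicates, while frozenset deduplicates and requires hashable members:

```python
t = (3, 1, 2, 1)              # ordered, keeps duplicates, immutable
fs = frozenset([3, 1, 2, 1])  # immutable too, but unordered and deduplicated

mutated = True
try:
    t[0] = 99                 # tuples reject item assignment
except TypeError:
    mutated = False
```

So for "an ordered collection of elements guaranteed not to change," the tuple is the better fit.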
| 1 | 117 | 0 |
Does python have immutable lists?
Suppose I wish to have the functionality of an ordered collection of elements, but which I want to guarantee will not change, how can this be implemented? Lists are ordered but they can be mutated.
|
Does Python have an immutable list?
| -0.028564 | 0 | 0 | 87,513 |
11,142,427 |
2012-06-21T16:17:00.000
| 0 | 0 | 0 | 1 |
python,performance,http
| 11,219,337 | 2 | false | 1 | 0 |
It looks like you have problems with DNS. Can you check this idea by running host 192.168.1.100 on the host? Please also check that other DNS queries are being processed quickly.
Check the /etc/hosts file for a quick-and-dirty solution.
| 1 | 0 | 0 |
I have the following problem. I have a local HTTP server (BottlePy or Django), and when I use http://localhost/ or http://127.0.0.1/ it loads immediately. But when I use my local IP (192.168.1.100), it takes a very long time to load (some minutes). What could be the problem?
Server works on Ubuntu 11.
|
Troubles with http server on linux
| 0 | 0 | 0 | 93 |
11,142,503 |
2012-06-21T16:22:00.000
| 0 | 0 | 1 | 0 |
python,pdf-generation,reportlab
| 11,369,181 | 2 | false | 0 | 0 |
If you can keep track of page numbers, then just add a PageBreak or canvas.showPage() command at the appropriate times.
| 1 | 2 | 0 |
I am trying to typeset a large document using ReportLab and Python 2.7.
It has a number of sections (about 6 in a 1,000 page document) and I would like each to start on odd-numbered/right-hand page. I have no idea though whether the preceding page will be odd or even and so need the ability to optionally throw an additional blank page before a particular paragraph style (like you sometimes get in manuals where some pages are "intentionally left blank"). Can anyone suggest how this could be done, as the only conditional page break I can find works on the basis of the amount of text on the page not a page number.
I also need to make sure that the blank page is included in the PDF so that double-sided printing works.
|
Throw blank even-numbered/left pages
| 0 | 0 | 0 | 463 |
11,143,979 |
2012-06-21T17:56:00.000
| 0 | 0 | 0 | 0 |
python,types,hbase,thrift
| 11,144,332 | 1 | false | 0 | 1 |
No; HBase stores bytes, nothing more. Any further encoding or decoding must be done by you.
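A sketch of doing that encoding on the Python side with the stdlib struct module. Big-endian 8-byte formats are used here to mirror what Java's Bytes.toBytes(long) / Bytes.toBytes(double) produce, so a Java client should be able to read the values back; verify against your actual Java code before relying on it.

```python
import struct

def encode_long(value):
    # 8-byte big-endian signed integer, matching Java's Bytes.toBytes(long)
    return struct.pack(">q", value)

def decode_long(data):
    return struct.unpack(">q", data)[0]

def encode_double(value):
    # 8-byte big-endian IEEE-754, matching Java's Bytes.toBytes(double)
    return struct.pack(">d", value)

def decode_double(data):
    return struct.unpack(">d", data)[0]
```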
| 1 | 1 | 0 |
Is there a way to store typed values (e.g., float, integer) in HBase and access these values from different clients?
The Java client examples I've found use the static methods of the Bytes class to manually encode and decode the values. I haven't found any Thrift client examples that store typed values, and hbase.thrift doesn't specify any float or integer types.
In short, I'm ready to store the type of fields in an external resource. I just want to be able to write from one client (e.g. Java), read from another (e.g. shell or Thrift via Python) without having to worry about the binary encoding issues. If that's not possible, I'd like to learn the best practices in encoding/decoding for multiple clients.
Thanks.
|
Storing typed values in HBase
| 0 | 0 | 0 | 477 |
11,146,619 |
2012-06-21T20:57:00.000
| 67 | 0 | 1 | 0 |
python,jinja2
| 11,146,693 | 5 | true | 1 | 0 |
In new versions of Jinja2 (2.9+):
{{ value if value }}
In older versions of Jinja2 (prior to 2.9):
{{ value if value is not none }} works great.
If this raises an error about a missing else, try using one:
{{ value if value is not none else '' }}
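The semantics of that expression can also be reproduced in plain Python, which is handy if you'd rather pre-filter the context dict before rendering (the helper name is mine, not a Jinja2 API):

```python
def suppress_none(value):
    """Mimic {{ value if value is not none else '' }}:
    None becomes an empty string, but falsy values like 0 survive,
    unlike the over-eager {{ value or '' }}."""
    return '' if value is None else value
```

Jinja2 also accepts a finalize callable on the Environment, e.g. Environment(finalize=lambda v: '' if v is None else v), which applies this behavior to every output expression at once.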
| 1 | 62 | 0 |
How do I persuade Jinja2 to not print "None" when the value is None?
I have a number of entries in a dictionary and I would like to output everything in a single loop instead of having special cases for different keywords. If I have a value of None (the NoneType not the string) then the string "None" is inserted into the template rendering results.
Trying to suppress it using
{{ value or '' }} works too well as it will replace the numeric value zero as well.
Do I need to filter the dictionary before passing it to Jinja2 for rendering?
|
Suppress "None" output as string in Jinja2
| 1.2 | 0 | 0 | 38,596 |
11,147,044 |
2012-06-21T21:29:00.000
| 4 | 0 | 1 | 1 |
python,linux,random,cryptography
| 11,147,069 | 2 | true | 0 | 0 |
It generates bytes, so 0x00 to 0xFF inclusive.
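Rather than mapping the raw bytes by hand (which risks modulo bias, since 256 is not a multiple of the 63-character alphabet), a uniform token over [a-zA-Z0-9_] can be sketched with the stdlib secrets module on Python 3:

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "_"   # 63 characters

def make_token(length=64):
    # secrets.choice draws uniformly from ALPHABET, avoiding the
    # modulo bias of mapping raw /dev/urandom bytes with `byte % 63`.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

token = make_token()
```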
| 1 | 1 | 0 |
I need to generate some token that can only take on a range of characters, [a-zA-Z0-9_]
I'm trying to work with binascii.b2a_base64(os.urandom(64)), which has other characters such as + and are causing problems.
What's the range of /dev/urandom (i'm on linux) so that I can just map the output integers to a value uniformly myself.
|
/dev/urandom range
| 1.2 | 0 | 0 | 1,684 |
11,154,101 |
2012-06-22T10:02:00.000
| 1 | 0 | 0 | 0 |
python,algorithm,knapsack-problem
| 11,155,580 | 3 | false | 0 | 0 |
You can use a pseudopolynomial algorithm based on dynamic programming if the sum of the weights is small enough: for each X and Y, you calculate whether weight X is reachable using the first Y items.
This runs in O(NS) time, where N is the number of items and S is the sum of the weights.
Another possibility is the meet-in-the-middle approach.
Partition the items into two halves, then:
For the first half, take every possible combination of items (there are 2^(N/2) combinations in each half) and store each combination's weight in a set.
For the second half, take every possible combination of items and check whether the first half contains a combination with a suitable weight.
This should run in O(2^(N/2)) time.
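A minimal Python sketch of the pseudopolynomial idea (function names are mine): keep a set of weights reachable with the items seen so far.

```python
def reachable_weights(weights):
    """Dynamic programming over reachable sums: O(N * S) worst case,
    where N is the number of items and S is the sum of all weights."""
    reachable = {0}
    for w in weights:
        # each existing sum can either skip or take the current item
        reachable |= {r + w for r in reachable}
    return reachable

def can_fill(weights, capacity):
    """True if some subset of weights sums exactly to capacity."""
    return capacity in reachable_weights(weights)
```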
| 1 | 4 | 1 |
I have to implement a solution to a 0/1 knapsack problem with constraints.
In most cases my problem will have few variables (~10-20, at most 50).
I recall from university that there are a number of algorithms that in many cases perform better than brute force (I'm thinking, for example, of a branch and bound algorithm).
Since my problem is relatively small, I'm wondering whether there is an appreciable advantage in terms of efficiency in using a sophisticated solution as opposed to brute force.
If it helps, I'm programming in Python.
|
0/1 Knapsack with few variables: which algorithm?
| 0.066568 | 0 | 0 | 1,291 |
11,154,668 |
2012-06-22T10:43:00.000
| 0 | 0 | 0 | 0 |
php,python,ruby,design-patterns,visitor-pattern
| 11,155,836 | 6 | false | 1 | 0 |
I think you are using Visitor Pattern and Double Dispatch interchangeably. When you say,
If I can work with a family of heterogeneous objects and call their public methods without any cooperation from the "visited" class, does this still deserve to be called the "Visitor pattern"?
and
write a new class that manipulates your objects from the outside to carry out an operation"?
you are defining what double dispatch is. Sure, the Visitor pattern is implemented via double dispatch, but there is more to the pattern itself.
Each Visitor is an algorithm over a group of elements (entities), and new visitors can be plugged in without changing the existing code (the Open/Closed principle).
When new elements are added frequently, the Visitor pattern is best avoided.
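In a dynamic language the double-dispatch half can indeed be simulated with a simple name lookup instead of accept(); a minimal Python sketch (all class and method names here are illustrative only):

```python
class Circle:
    pass

class Square:
    pass

class AreaVisitor:
    """One visit_* method per element type; visit() dispatches on the
    element's class name, so the elements need no accept() method."""
    def visit(self, node):
        method = getattr(self, "visit_" + type(node).__name__.lower())
        return method(node)

    def visit_circle(self, node):
        return "circle-area"

    def visit_square(self, node):
        return "square-area"

results = [AreaVisitor().visit(n) for n in (Circle(), Square())]
```

The Open/Closed point survives: a new visitor (say, a PerimeterVisitor) plugs in without touching Circle or Square, but adding a new element class forces every visitor to grow a method.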
| 2 | 10 | 0 |
The Visitor pattern allows operations on objects to be written without extending the object class. Sure. But why not just write a global function, or a static class, that manipulates my object collection from the outside? Basically, in a language like java, an accept() method is needed for technical reasons; but in a language where I can implement the same design without an accept() method, does the Visitor pattern become trivial?
Explanation: In the Visitor pattern, visitable classes (entities) have a method .accept() whose job is to call the visitor's .visit() method on themselves. I can see the logic of the java examples: The visitor defines a different .visit(n) method for each visitable type n it supports, and the .accept() trick must be used to choose among them at runtime. But languages like python or php have dynamic typing and no method overloading. If I am a visitor I can call an entity method (e.g., .serialize()) without knowing the entity's type or even the full signature of the method. (That's the "double dispatch" issue, right?)
I know an accept method could pass protected data to the visitor, but what's the point? If the data is exposed to the visitor classes, it is effectively part of the class interface since its details matter outside the class. Exposing private data never struck me as the point of the visitor pattern, anyway.
So it seems that in python, ruby or php I can implement a visitor-like class without an accept method in the visited object (and without reflection), right? If I can work with a family of heterogeneous objects and call their public methods without any cooperation from the "visited" class, does this still deserve to be called the "Visitor pattern"? Is there something to the essence of the pattern that I am missing, or does it just boil down to "write a new class that manipulates your objects from the outside to carry out an operation"?
PS. I've looked at plenty of discussion on SO and elsewhere, but could not find anything that addresses this question. Pointers welcome.
|
Is the Visitor pattern useful for dynamically typed languages?
| 0 | 0 | 0 | 2,548 |
11,154,668 |
2012-06-22T10:43:00.000
| 0 | 0 | 0 | 0 |
php,python,ruby,design-patterns,visitor-pattern
| 47,449,075 | 6 | false | 1 | 0 |
The Visitor pattern does two things:
Allows for ad hoc polymorphism (the same function does different things for different "types").
Enables adding a new consuming algorithm without changing the provider of the data.
You can do the second in a dynamic language without Visitor or runtime type information, but the first one requires some explicit mechanism, or a design pattern like Visitor.
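Python's standard library already ships one such explicit mechanism for ad hoc polymorphism: functools.singledispatch picks an implementation based on the first argument's type, with no accept() method needed. A sketch (function names are illustrative):

```python
from functools import singledispatch

@singledispatch
def describe(value):
    # fallback for any unregistered type
    return "something"

@describe.register(int)
def _(value):
    return "an int"

@describe.register(list)
def _(value):
    return "a list"
```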
| 2 | 10 | 0 |
The Visitor pattern allows operations on objects to be written without extending the object class. Sure. But why not just write a global function, or a static class, that manipulates my object collection from the outside? Basically, in a language like java, an accept() method is needed for technical reasons; but in a language where I can implement the same design without an accept() method, does the Visitor pattern become trivial?
Explanation: In the Visitor pattern, visitable classes (entities) have a method .accept() whose job is to call the visitor's .visit() method on themselves. I can see the logic of the java examples: The visitor defines a different .visit(n) method for each visitable type n it supports, and the .accept() trick must be used to choose among them at runtime. But languages like python or php have dynamic typing and no method overloading. If I am a visitor I can call an entity method (e.g., .serialize()) without knowing the entity's type or even the full signature of the method. (That's the "double dispatch" issue, right?)
I know an accept method could pass protected data to the visitor, but what's the point? If the data is exposed to the visitor classes, it is effectively part of the class interface since its details matter outside the class. Exposing private data never struck me as the point of the visitor pattern, anyway.
So it seems that in python, ruby or php I can implement a visitor-like class without an accept method in the visited object (and without reflection), right? If I can work with a family of heterogeneous objects and call their public methods without any cooperation from the "visited" class, does this still deserve to be called the "Visitor pattern"? Is there something to the essence of the pattern that I am missing, or does it just boil down to "write a new class that manipulates your objects from the outside to carry out an operation"?
PS. I've looked at plenty of discussion on SO and elsewhere, but could not find anything that addresses this question. Pointers welcome.
|
Is the Visitor pattern useful for dynamically typed languages?
| 0 | 0 | 0 | 2,548 |
11,154,965 |
2012-06-22T11:03:00.000
| 3 | 0 | 0 | 0 |
python,ms-access,pyodbc
| 11,155,551 | 1 | false | 0 | 0 |
pyodbc allows connecting to ODBC data sources, but it does not actually implement drivers.
I'm not familiar with OS X, but on Linux ODBC sources are typically described in the odbcinst.ini file (its location is determined by the ODBCSYSINI variable).
You will need to install a Microsoft Access ODBC driver for OS X.
| 1 | 1 | 0 |
I'm trying to use PyODBC to connect to an Access database. It works fine on Windows, but running it under OS X I get—
Traceback (most recent call last):
File "", line 1, in
File "access.py", line 10, in init
self.connection = connect(driver='{Microsoft Access Driver (.mdb)}', dbq=path, pwd=password)
pyodbc.Error: ('00000', '[00000] [iODBC][Driver Manager]dlopen({Microsoft Access Driver (.mdb)}, 6): image not found (0) (SQLDriverConnect)')
Do I have to install something else? Have I installed PyODBC wrong?
Thanks
|
PyODBC "Image not found (0) (SQLDriverConnect)"
| 0.53705 | 1 | 0 | 1,896 |
11,159,485 |
2012-06-22T15:38:00.000
| 6 | 0 | 1 | 0 |
python
| 11,159,509 | 3 | true | 0 | 0 |
Yes: my_dict.clear().
No, it's limited by the addressable and available memory.
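A quick sketch of the difference between clearing a dict in place and rebinding the name (variable names are illustrative):

```python
d = {"a": 1, "b": 2}
alias = d            # a second reference to the same dict

d.clear()            # empties the dict in place; every reference sees it
still_same = (alias == {}) and (alias is d)

d = {}               # rebinding instead creates a brand-new dict,
                     # leaving any aliases pointing at the old object
rebound_is_new = d is not alias
```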
| 1 | 0 | 0 |
Is there a way to delete all the entries of a dictionary without deleting the dictionary?
Is there a limit to the number of entries that can be stored in a dictionary?
|
clearing dictionary entries and max entries of a dictionary
| 1.2 | 0 | 0 | 85 |
11,159,844 |
2012-06-22T15:57:00.000
| 1 | 0 | 0 | 0 |
python,py2app
| 14,959,193 | 1 | false | 0 | 0 |
I had this error when trying to run setup.py from an SSHFS partition.
| 1 | 0 | 0 |
Sigh... py2app is the devil. (Now that I've gotten that off my chest, I'll be professional.)
I swear that I have done everything that py2app asks me to do. I created the setup file and I installed py2app correctly, but when I run the script via the command python setup.py py2app, it runs for a while and then crashes with:
* creating application bundle: Adjudicator_Bones_1.4 *
copying Adjudicator_Bones_1.4.py -> /Volumes/compression/QC/QCing/otherFiles/Area51/PythonAdjudicator/BareBones1.4/dist/Adjudicator_Bones_1.4.app/Contents/Resources
creating /Volumes/compression/QC/QCing/otherFiles/Area51/PythonAdjudicator/BareBones1.4/dist/Adjudicator_Bones_1.4.app/Contents/Resources/lib
creating /Volumes/compression/QC/QCing/otherFiles/Area51/PythonAdjudicator/BareBones1.4/dist/Adjudicator_Bones_1.4.app/Contents/Resources/lib/python2.7
error: Function not implemented
Once this is done, I still have SOME kind of app, and when I run it, the terminal comes up with a lovely error saying that a Python runtime could not be located.
I should note that the script I want to turn into an app will need to be distributed to other users and consists of multiple imported Python files.
I am at my wit's end; I just do not know what to do. I am using Python 2.7 and the latest py2app download.
Is there an easier way? Can anyone tell me what I am doing wrong?
|
Py2App error-- Function Not implemented
| 0.197375 | 0 | 0 | 283 |
11,161,613 |
2012-06-22T17:55:00.000
| 4 | 0 | 1 | 1 |
python,windows,macos,ironpython
| 11,161,658 | 2 | true | 0 | 0 |
You can't make a native py2exe-style executable on a Mac. Use VirtualBox to run Windows inside your Mac environment; there is no need to reboot the whole machine.
| 1 | 2 | 0 |
I've seen from some sources that although you can make an exe or Mac-equivalent app using py2exe or py2app, you can only build for the platform you are on. That makes sense when I think about it.
But my problem is that sometimes I want to write Python scripts and send them to my Windows-using friends to test and play with. Windows doesn't come with Python installed, and I don't want to make them install it.
Is there any way to use a Mac to create a Python-made file that can be opened without Python or any installation ON WINDOWS?
If there isn't, I suppose I could try using the emulated Windows on my system to make an exe, but I'd rather not boot that every time I need to change something.
|
Writing python on mac to use on Windows
| 1.2 | 0 | 0 | 6,062 |