Dataset schema (each record below lists its fields in this order):

Column                              Type           Min    Max
Q_Id                                int64          337    49.3M
CreationDate                        stringlengths  23     23
Users Score                         int64          -42    1.15k
Other                               int64          0      1
Python Basics and Environment       int64          0      1
System Administration and DevOps    int64          0      1
Tags                                stringlengths  6      105
A_Id                                int64          518    72.5M
AnswerCount                         int64          1      64
is_accepted                         bool           2 classes
Web Development                     int64          0      1
GUI and Desktop Applications        int64          0      1
Answer                              stringlengths  6      11.6k
Available Count                     int64          1      31
Q_Score                             int64          0      6.79k
Data Science and Machine Learning   int64          0      1
Question                            stringlengths  15     29k
Title                               stringlengths  11     150
Score                               float64        -1     1.2
Database and SQL                    int64          0      1
Networking and APIs                 int64          0      1
ViewCount                           int64          8      6.81M
14,345,587
2013-01-15T19:44:00.000
0
1
1
0
python,import
28,459,509
1
false
0
0
This is probably suboptimal, but I simply add the package (the directories which actually hold the code, not an egg or anything fancy) I want to import into the bundled files list in Platypus. This works well when you only need to add a few packages.
1
4
0
I am using Platypus to bundle a Python applet. I am wondering if there is a way to import modules, like math from the stdlib.
How to import modules into platypus applet?
0
0
0
773
14,347,244
2013-01-15T21:27:00.000
1
0
0
0
python,mysql,google-app-engine,jinja2
14,347,324
2
false
1
0
If voting is only for subscribed users, then enable voting after members log in to your site. If not, then you can track users' IP addresses so one IP address can vote once for a single article in a day. By the way, what kind of security do you need?
2
4
0
I have written a simple blog using Python on Google App Engine. I want to implement a voting system for each of my posts. My posts are stored in a SQL database and I have a column for the number of votes received. Can somebody help me set up voting buttons for individual posts? I am using Jinja2 as the templating engine. How can I make the voting secure? I was thinking of sending a POST/GET when someone clicks on the vote button, which my Python script will then read, updating the database accordingly. But then I realized that this was insecure. All suggestions are welcome.
How to implement a 'Vote up' System for posts in my blog?
0.099668
1
0
1,105
14,347,244
2013-01-15T21:27:00.000
4
0
0
0
python,mysql,google-app-engine,jinja2
14,349,144
2
true
1
0
First, keep in mind that there is no such thing as "secure", just "secure enough for X". There's always a tradeoff: more secure means more annoying for your legitimate users and more expensive for you.

Getting past these generalities, think about your specific case. There is nothing that has a 1-to-1 relationship with users. IP addresses or computers are often shared by multiple people, and at the same time, people often have multiple addresses or computers. Sometimes, something like this is "good enough", but from your question, it doesn't sound like it would be. However, with user accounts, the only false negatives come from people intentionally creating multiple accounts or hacking others' accounts, and there are no false positives. And there's a pretty linear curve in the annoyance/cost vs. security tradeoff, all the way from "Please don't create sock puppets" to CAPTCHA to credit card checks to web of trust/reputation score to asking for real-life info and hiring an investigator to check it out.

In real life, there's often a tradeoff between more than just these two things. For example, if you're willing to accept more cheating if it directly means more money for you, you can just charge people real money to vote (as with those 1-900 lines that many TV shows use).

How do Reddit and Digg check multiple voting from a single registered user? I don't know exactly how Reddit or Digg does things, but the general idea is simple: keep track of individual votes. Normally, you've got your users stored in a SQL RDBMS of some kind. So, you just add a Votes table with columns for user ID, question ID, and answer. (If you're using some kind of NoSQL solution, it should be easy to translate appropriately. For example, maybe there's a document for each question, and the document is a dictionary mapping user IDs to answers.) When a user votes, just INSERT a row into the database.

When putting together the voting interface, whether via server-side template or client-side AJAX, call a function that checks for an existing vote. If there is one, instead of showing the vote controls, show some representation of "You already voted Yes." You also want to check again at vote-recording time, to make sure someone doesn't hack the system by opening 200 copies of the page, all of which allow voting (because the user hasn't voted yet), and then submitting 200 Yes votes, but with a SQL database, this is as simple as making Question, User into a multi-column unique key.

If you want to allow vote changing or undoing, just add more controls to the interface, and handle them with UPDATE and DELETE calls. If you want to get really fancy (like this site, which allows undoing if you have enough rep and if either your original vote was in the past 5 minutes or the answer has been edited since your vote, or something like that) you may have to keep some extra info, like a row for each voting action with a timestamp, instead of just a single answer for each user.

This design also means that, instead of keeping a count somewhere, you generate the vote tally on the fly by, e.g., SELECT COUNT(*) FROM Votes WHERE Question=? GROUP BY Answer. But, as usual, if this is too slow, you can always optimize-by-denormalizing and keep the totals along with the actual votes. Similarly, if your user base is huge, you may want to archive votes on old questions and get them out of the operational database. And so on.
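A minimal sketch of this scheme, using sqlite3 from the standard library; table and column names are illustrative, not from the question:

```python
import sqlite3

conn = sqlite3.connect("blog.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS Votes (
        user_id     INTEGER NOT NULL,
        question_id INTEGER NOT NULL,
        answer      INTEGER NOT NULL,   -- e.g. +1 / -1
        UNIQUE (user_id, question_id)   -- one vote per user per question
    )
""")

def cast_vote(user_id, question_id, answer):
    """Record a vote; the UNIQUE constraint rejects duplicates."""
    try:
        conn.execute("INSERT INTO Votes VALUES (?, ?, ?)",
                     (user_id, question_id, answer))
        conn.commit()
        return True
    except sqlite3.IntegrityError:
        return False  # this user already voted on this question

def tally(question_id):
    """Generate the vote totals on the fly, grouped by answer."""
    return conn.execute(
        "SELECT answer, COUNT(*) FROM Votes WHERE question_id = ? "
        "GROUP BY answer", (question_id,)).fetchall()
```

The UNIQUE constraint is what closes the "open 200 copies of the page" hole described above: the 200th INSERT fails at the database, no matter what the interface allowed.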
2
4
0
I have written a simple blog using Python on Google App Engine. I want to implement a voting system for each of my posts. My posts are stored in a SQL database and I have a column for the number of votes received. Can somebody help me set up voting buttons for individual posts? I am using Jinja2 as the templating engine. How can I make the voting secure? I was thinking of sending a POST/GET when someone clicks on the vote button, which my Python script will then read, updating the database accordingly. But then I realized that this was insecure. All suggestions are welcome.
How to implement a 'Vote up' System for posts in my blog?
1.2
1
0
1,105
14,348,697
2013-01-15T23:14:00.000
1
0
1
0
python,pickle
14,349,065
1
true
0
0
You can pickle any Python class instance, as long as the pickle module can import the class again when you load the pickle. It doesn't matter where in your Python code you call load() or dump(); what matters is whether the data you are going to pickle can be retrieved again later on, by importing its class from the same location. So, if you have a module foo.bar with a class Spam in it, then as long as you can do from foo.bar import Spam, you can pickle instances of that class, because pickle can later load that class again from the same module.
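A self-contained illustration; in real use Spam would live in an importable module like foo.bar, as in the example above:

```python
import pickle

# In real use this class would live in an importable module, e.g. foo/bar.py;
# pickle records the module path and class name, and re-imports them at load time.
class Spam:
    def __init__(self, eggs):
        self.eggs = eggs

spam = Spam(eggs=3)
data = pickle.dumps(spam)       # saves module path + class name + instance state
restored = pickle.loads(data)   # works because Spam is importable here
assert restored.eggs == 3
```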
1
0
0
The Python docs say: "pickle can save and restore class instances transparently, however the class definition must be importable and live in the same module as when the object was stored." Could I put a pickler/unpickler in the module where the class was stored? Or do I have to put the class in the module? And how? I'm trying to pickle/unpickle an object of a class from an external module.
Is the Python pickle module documentation a little bit misleading?
1.2
0
0
226
14,350,605
2013-01-16T02:45:00.000
0
1
0
0
php,python,bitnami,osqa
14,351,798
1
false
1
0
You can install BitNami modules for each one of those stacks: just go to the stack download page, select the module for your platform, execute it on the command line, and point it to the existing installation. Then you will need to configure httpd.conf to point each domain to each app.
1
0
0
I installed BitNami OSQA on my VPS, and I also pointed 3 domains to this VPS. Now I want each domain to point to a different web service (I have another PHP website I want to host here). How can I run a PHP service alongside BitNami OSQA (it's a Python stack)?
How can I run Bitnami OSQA with other web service (wordpress, joomla)?
0
0
0
133
14,351,879
2013-01-16T05:24:00.000
0
0
0
0
python,django,django-models,multiple-sites
14,354,584
1
true
1
0
I would caution against writing the same thing twice. Something will definitely go wrong, and you will end up with two mismatched databases. Why don't you make the site_id-product relationship M2M, so that a product can have more than one site_id? A minimal sketch follows.
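A hypothetical sketch of the suggested change: replace the single site foreign key with a ManyToManyField so one product row can belong to several sites. Model and field names are illustrative, not Cartridge's actual schema:

```python
from django.contrib.sites.models import Site
from django.db import models

class Product(models.Model):
    title = models.CharField(max_length=200)
    # One inventory row, many sites -- no duplicated entries.
    sites = models.ManyToManyField(Site, related_name="products")

# product.sites.add(site_a, site_b) then makes the same product
# visible on both sites without entering it twice.
```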
1
1
0
I am working on a multisite project and I am using Mezzanine + Cartridge for this. I want to use the same inventory for both sites, but there is an issue: there is a field site_id in the product table which stores the ID of the current site, so I cannot reuse a product across sites. Is there any way (with the help of signals or anything else) that I can save an entry twice in the database, with changes to some fields' values? If this is possible, then I only have to overwrite site_id; the rest of the entry stays the same as the previous one. That would cut the workload of entering products twice for different sites. Thanks.
Saving models in django
1.2
0
0
62
14,355,747
2013-01-16T10:05:00.000
0
0
1
0
c++,python,c,list
14,355,997
4
false
0
0
As I understand it, you don't want to (or aren't able to) use the standard library (BTW, why?). You can look at the code of the vector template in one of the STL implementations and see how it's implemented. The basic algorithm is simple: if you're deleting something from the middle of the array, you must shrink it afterwards; if you're inserting into the middle, you must expand it beforehand. Sometimes this involves reallocating memory and moving your array, with memcpy for example. Or you can implement a doubly-linked list, as mentioned, but it won't behave like an array.
3
0
0
For example, if we have an array in Python, arr = [1,3,4], we can delete an element by just using arr.remove(element) or arr.pop(); the list will mutate, its length will change, and that element won't be there. Is there a way to do this in C or C++? If yes, how?
Python list and c/c++ arrays
0
0
0
2,120
14,355,747
2013-01-16T10:05:00.000
0
0
1
0
c++,python,c,list
14,355,936
4
false
0
0
In C++ you can use either std::list or std::vector, based on your needs. If you need to implement it in C, you need to write your own doubly-linked list implementation; or, if you want to emulate std::vector, you can do so by using memcpy and array indexing.
3
0
0
For example, if we have an array in Python, arr = [1,3,4], we can delete an element by just using arr.remove(element) or arr.pop(); the list will mutate, its length will change, and that element won't be there. Is there a way to do this in C or C++? If yes, how?
Python list and c/c++ arrays
0
0
0
2,120
14,355,747
2013-01-16T10:05:00.000
5
0
1
0
c++,python,c,list
14,355,769
4
false
0
0
I guess you're looking for std::vector (or other containers in the standard library).
3
0
0
For example, if we have an array in Python, arr = [1,3,4], we can delete an element by just using arr.remove(element) or arr.pop(); the list will mutate, its length will change, and that element won't be there. Is there a way to do this in C or C++? If yes, how?
Python list and c/c++ arrays
0.244919
0
0
2,120
14,356,218
2013-01-16T10:27:00.000
0
0
0
0
python,xml-rpc,openerp
14,356,856
3
false
1
0
Add an integer field to the res.partner table on both databases for storing the external ID. When data is retrieved from the external server and added to your OpenERP database, store the external ID in the res.partner record on the local server, and also save the ID of the newly created partner record in the external server's partner record. Next time the external partner record is updated, you can search for the external ID on your local server and update that record. Check the OpenERP module base_synchronization and read its code, which will be helpful for you.
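A rough sketch of the lookup-then-update flow over XML-RPC, assuming a custom integer field x_external_id has been added to res.partner; the field name, credentials, and URL here are illustrative, not from the question:

```python
import xmlrpclib  # Python 2, as used by OpenERP-era deployments

sock = xmlrpclib.ServerProxy('http://localhost:8069/xmlrpc/object')
db, uid, pwd = 'mydb', 1, 'admin'  # illustrative credentials

def sync_partner(ext_id, values):
    """Update the local partner matching the external id, or create it."""
    ids = sock.execute(db, uid, pwd, 'res.partner', 'search',
                       [('x_external_id', '=', ext_id)])
    if ids:
        # Found: UPDATE the existing record instead of insert+delete.
        sock.execute(db, uid, pwd, 'res.partner', 'write', ids, values)
    else:
        values['x_external_id'] = ext_id
        sock.execute(db, uid, pwd, 'res.partner', 'create', values)
```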
1
6
0
My question is a bit complex, and I am new to OpenERP. I have an external database and an OpenERP one; the external one isn't PostgreSQL. My job is to synchronize the partners in the two databases, the external one being authoritative. This means that if the external one's data changes, so does OpenERP's, but if OpenERP's data changes, nothing changes on the external one. I have access to the external database, and using XML-RPC I have access to OpenERP's as well. I can import data from the external database simply with XML-RPC, but the problem is the sync. I can't just INSERT the modified partner and delete the old one, because I have no way to identify the old one; I need to UPDATE it. But then I need an ID that says which is which: an external ID. To my knowledge OpenERP can handle external IDs. How does this work, and how can I add an external ID to my res.partner using it? I was told that I can't create a new module for this alone; I need to use the internal ID.
Adding external IDs to Partners in OpenERP without a new module
0
1
0
4,853
14,359,644
2013-01-16T13:32:00.000
0
1
1
0
python,optimization,software-distribution
14,359,944
1
false
0
0
You control the use of .pyc vs .pyo with the -O switch on the Python command line. Python won't even attempt to use .pyo files without the -O switch, so compiling your .py files to .pyo files at setup time won't help: you need to invoke your program with the -O switch, or the PYTHONOPTIMIZE environment variable.
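For illustration, a tiny script (the file name demo.py is hypothetical) showing what -O changes:

```python
# demo.py -- __debug__ is a compile-time constant: blocks guarded by
# "if __debug__:" are compiled out entirely under -O, as are asserts.
if __debug__:
    print("debug logging enabled")

assert 1 + 1 == 2, "asserts are stripped under -O too"
print("main work happens here")

# $ python demo.py      -> prints the debug line and checks the assert
# $ python -O demo.py   -> the debug line and the assert are gone
```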
1
2
0
Is there any way to set the optimization level within setuptools, from the setup.py file? I've got lots of __debug__-style logging that isn't needed on release.
Python Setuptools Distribute: Optimize Option in setup.py?
0
0
0
1,371
14,361,331
2013-01-16T15:02:00.000
0
0
0
1
python,windows,windows-8,startup,windows-rt
14,555,543
2
false
0
0
You can create a scheduled task to run at login or boot. The Run registry key and the Startup folder do not function on Windows RT, but the Task Scheduler does. Off topic (since I seem unable to add a comment to the other answer): there has been a copy of Python ported over to Windows RT using the jailbreak.
2
0
0
I have a Python script which basically launches an XML-RPC server. I need this script to always be running. I have a function that may call for the system to reboot itself, so once the system has rebooted, I need to get the script running again. How can I add this to the Windows RT startup?
Script to run at startup on Windows RT
0
0
0
1,365
14,361,331
2013-01-16T15:02:00.000
0
0
0
1
python,windows,windows-8,startup,windows-rt
14,365,833
2
false
0
0
There is no way for a Windows Store app to trigger a system reboot, nor can it run a Python script unless you implement a Python interpreter inside your app. Windows Store apps run in their own sandbox and have very limited means of communication with the rest of the system.
2
0
0
I have a Python script which basically launches an XML-RPC server. I need this script to always be running. I have a function that may call for the system to reboot itself, so once the system has rebooted, I need to get the script running again. How can I add this to the Windows RT startup?
Script to run at startup on Windows RT
0
0
0
1,365
14,362,863
2013-01-16T16:18:00.000
0
0
0
0
wxpython,wxwidgets
14,363,626
1
false
0
1
The only way to do it is to remove the caption as you've already found. You might be able to use a wx.Timer or catch a dragging event of some sort and center the dialog that way, but otherwise, removing the caption is the way to go.
1
2
0
I want to disable dragging of the wx.Dialog. I noticed that when I remove the caption the user cannot drag the dialog, but I want to keep the caption. Any suggestions would be helpful. Thank you.
How to disable dragging of a wx.Dialog by the user
0
0
0
139
14,363,351
2013-01-16T16:43:00.000
0
0
1
1
python,netbeans,read-eval-print-loop
14,482,828
1
false
0
0
I had a similar problem. Go to Tools -> Python Platforms and set the default platform to Python 2.7. This should cause Window -> PythonConsole to launch the correct version of Python. As for then being able to import your custom modules... that's the problem I'm currently having too.
1
0
0
I am trying to wrap my head around Python, while my brain works for Java and Scala, so please excuse me if this question is ill-formulated. I have managed to set up NetBeans 6.9 with Python 2.7 on OS X. I can compile and run my project, fine. Now what I want is something equivalent to sbt's console command: I want to launch the Python REPL with my project on the classpath, so that basically I can interactively call into my custom functions and classes. There is Window -> PythonConsole, but that launches some Python (or Jython?) 2.5 instead, and my modules don't seem to be in the namespace.
Interactive console with NetBeans project modules on the classpath
0
0
0
364
14,363,640
2013-01-16T16:57:00.000
1
0
0
0
python,pandas
44,592,825
3
false
0
0
You can also specify a list of columns to keep with the usecols option in pandas.read_table. This speeds up the loading process as well.
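A short sketch of both approaches; the file and column names are illustrative:

```python
import pandas as pd

# Keep only the named columns at load time: faster, and nothing to purge.
df = pd.read_table("data.txt", usecols=["Name", "Score", "Date"])

# Or, after loading: copy just the series you want and rebind the name,
# letting the full frame be garbage-collected.
df = df[["Name", "Score"]].copy()
```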
1
18
1
In short ... I have a Python Pandas data frame that is read in from an Excel file using 'read_table'. I would like to keep a handful of the series from the data and purge the rest. I know that I can just delete what I don't want one-by-one using del data['SeriesName'], but what I'd rather do is specify what to keep instead of specifying what to delete. If the simplest answer is to copy the existing data frame into a new data frame that only contains the series I want, and then delete the existing frame in its entirety, I would be satisfied with that solution ... but if that is indeed the best way, can someone walk me through it? TIA ... I'm a newb to Pandas. :)
Python Pandas - Deleting multiple series from a data frame in one command
0.066568
0
0
30,755
14,364,214
2013-01-16T17:29:00.000
2
0
0
0
python,database,migration,sqlalchemy
14,364,804
2
true
1
0
Some thoughts for managing databases for a production application:

1. Make backups nightly. This is crucial because if you try to do an update (to the data or the schema) and you mess up, you'll need to be able to revert to something more stable.
2. Create environments. You should have a local copy of the database for development, a staging database for other people to see and test before going live, and of course a production database that your live system points to. Make sure all three environments are in sync before you start development locally. This way you can track changes over time.
3. Start writing scripts and version them for releases. Make sure you store these in a source control system (SVN, Git, etc.). You just want a historical record of what has changed, and also a small set of scripts that need to be run with a given release. It helps you stay organized.
4. Make your changes to your local database and test them. Make sure you have scripts that do two things: 1) scripts that modify the data or the schema, and 2) scripts that undo what you've done in case things go wrong. Test these over and over locally: run the scripts, test, and then roll back. Are things still ok?
5. Run the scripts on staging and see if everything is still ok. This is another chance to prove your work is good and that, if needed, you can undo your changes.
6. Once staging is good and you feel confident, run your scripts on the production database. Remember you have scripts to change data (UPDATE, DELETE statements) and scripts to change the schema (add fields, rename fields, add tables).

In general, take your time and be very deliberate in your actions. The more disciplined you are, the more confident you'll be. Updating the database can be scary, so don't rush things, write out your plan of action, and test, test, test!
1
1
0
I'm working on a web-app that's very heavily database driven. I'm nearing the initial release and so I've locked down the features for this version, but there are going to be lots of other features implemented after release. These features will inevitably require some modification to the database models, so I'm concerned about the complexity of migrating the database on each release. What I'd like to know is how much should I concern myself with locking down a solid database design now so that I can release quickly, against trying to anticipate certain features now so that I can build it into the database before release? I'm also anticipating finding flaws with my current model and would probably then want to make changes to it, but if I release the app and then data starts coming in, migrating the data would be a difficult task I imagine. Are there conventional methods to tackle this type of problem? A point in the right direction would be very useful. For a bit of background I'm developing an asset management system for a CG production pipeline. So lots of pieces of data with lots of connections between them. It's web-based, written entirely in Python and it uses SQLAlchemy with a SQLite engine.
How to approach updating an database-driven application after release?
1.2
1
0
120
14,366,750
2013-01-16T19:59:00.000
0
1
0
0
php,python,mediawiki,wiki
14,366,895
3
false
0
0
No, there is no Python port of MediaWiki. Keep in mind that your developers will be using the wiki, not developing for it. Chances are that none of them will ever need to look at the application's internals, so the language it was written in is irrelevant.
1
2
0
I am trying to setup an internal wiki for our development team. It looks like mediawiki is the de facto means of doing this. While I'm probably capable of setting this up with PHP - I was curious if there is a python port of mediawiki or a similar framework. I'm very comfortable in PHP, and am more than happy to use it - but most of our developers prefer python. I think it would be neat to have a python wiki running. This is just a curiosity - and not a serious issue. Thank you for any suggestions!
MediaWiki python port?
0
0
0
549
14,367,237
2013-01-16T20:30:00.000
2
0
0
0
python,web-applications,wsgi,dev-to-production
14,367,674
2
true
1
0
If your setup allows it, consider using environment variables. Your production servers could have one value for an environment variable while dev servers have another. Then on execution of your application you can detect the value of the environment variable and set "debug" accordingly.
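A minimal sketch of that idea, assuming a hypothetical APP_ENV variable that the deployment script sets on production hosts:

```python
import os

# APP_ENV is our invented convention: "production" on live servers,
# unset (or "dev") everywhere else.
DEBUG = os.environ.get("APP_ENV", "dev") != "production"

if DEBUG:
    task_server = "localhost:9000"        # illustrative dev endpoint
else:
    task_server = "tasks.internal:9000"   # illustrative production endpoint
```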
2
5
0
We need to load different configuration options in our Python WSGI application, according to production or debugging environments (particularly, some server configuration information relating to the task server to which the application needs to post jobs). The way we have implemented it so far is to have a global debug variable that gets set in our deployment script - which does modify the deployment setup correctly. However, when we execute the application, we need to set the debug variable to True - since its default value is False. So far, it is hard to correctly determine the way the debug variable works, because it is set at deploy time, not execute time. We can set it before calling the serve_forever method of our debug WSGI server, but I am not aware of the implications of this and how good that solution is. What is the usual pattern for differentiating between debug and production environments in WSGI applications? If I need to pass it in system arguments, or if there is any other, different way, please let me know. Thanks very much!
Differentiating between debug and production environments in WSGI app
1.2
0
0
213
14,367,237
2013-01-16T20:30:00.000
1
0
0
0
python,web-applications,wsgi,dev-to-production
14,589,970
2
false
1
0
I don't like using environment variables. Try to make it work with your application configuration, which can be overridden by: importing non-versioned files (in the dev environment), wrapped in try-except with proper log notification; or command-line arguments (argparse from the standard library).
2
5
0
We need to load different configuration options in our Python WSGI application, according to production or debugging environments (particularly, some server configuration information relating to the task server to which the application needs to post jobs). The way we have implemented it so far is to have a global debug variable that gets set in our deployment script - which does modify the deployment setup correctly. However, when we execute the application, we need to set the debug variable to True - since its default value is False. So far, it is hard to correctly determine the way the debug variable works, because it is set at deploy time, not execute time. We can set it before calling the serve_forever method of our debug WSGI server, but I am not aware of the implications of this and how good that solution is. What is the usual pattern for differentiating between debug and production environments in WSGI applications? If I need to pass it in system arguments, or if there is any other, different way, please let me know. Thanks very much!
Differentiating between debug and production environments in WSGI app
0.099668
0
0
213
14,367,432
2013-01-16T20:41:00.000
3
0
1
0
python,pip
14,367,501
1
true
0
0
The simplest smoke test is: start the Python command line and import the package.
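For example, with scikit-learn (which the question mentions; note it imports as sklearn). Run this inside the Python interpreter, not at the OS shell; typing pip install ... at the >>> prompt is exactly what produces "invalid syntax":

```python
try:
    import sklearn
    print("scikit-learn", sklearn.__version__, "is installed")
except ImportError:
    print("scikit-learn is not installed")
```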
1
2
0
How do I check whether a module installed with pip is properly installed? I think I installed it a while ago, but when I entered pip install scikit-learn I got an "invalid syntax" message. I am working with Python 2.7, Vista 32-bit. Thanks.
Check whether Pip-Install Module Installed Properly
1.2
0
0
5,762
14,367,670
2013-01-16T20:55:00.000
0
0
0
0
python,django,django-models,django-forms,django-views
14,367,745
2
false
1
0
What other fields are there in the model db? You could add an upload_id, which is set automatically after uploading, send it to the next view, and query for that in the db.
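A hypothetical sketch of that idea; the model, field, and view wiring are illustrative, not from the question:

```python
import uuid

from django.db import models

class Image(models.Model):
    image = models.ImageField(upload_to="uploads/")
    # Shared batch tag: every image from one form submission gets the same id.
    upload_id = models.CharField(max_length=32, db_index=True)

# Upload view (sketch):
#     batch = uuid.uuid4().hex
#     for f in request.FILES.getlist("images"):
#         Image.objects.create(image=f, upload_id=batch)
#     ...redirect to the display view with ?batch=<batch>
#
# Display view (sketch):
#     images = Image.objects.filter(upload_id=request.GET["batch"])
```

This avoids passing a list of primary keys between views: the second view only needs the one opaque batch id.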
1
0
0
Suppose I want to query and display all the images that the user just uploaded through a form on the previous page (multiple images are uploaded at once, each is made into a separate object in the db). What's the best way to do this? Since the view for uploading the images is different from the view for displaying the images, how does the second view know which images were part of that upload? I thought about creating and saving the image objects in the first view, gathering the pks, and passing them to the second view, but I understand that it is bad practice. So how should I make sure the second view knows which primary keys to query for?
Django: How to query objects immediately after they have been saved from a separate view?
0
0
0
100
14,368,143
2013-01-16T21:26:00.000
1
0
1
0
python,gis
14,368,495
7
true
0
0
You must use 32-bit Python 2.6 with ArcGIS 10.0, even on a 64-bit OS. I suspect (though I'm not sure from the info provided) that you have another version of Python installed. I would first check how many versions are installed, and uninstall all of them except the one at C:\Python26\ArcGIS10.0. Then I'd install Python 2.6 (it's on the ArcGIS disk) at the location mentioned. You'll need to re-install numpy and matplotlib too, which are also on the disk. If that does not help, then I'd uninstall ArcGIS and every Python version on your machine, and then re-install ArcGIS. This sounds drastic, but ESRI's Python implementation is pretty sensitive, and you can waste days trying to find an easy fix. This last step usually works.
5
1
0
Not sure if I am posting in the right place, but I am having problems getting my Python GIS programs to work on Windows 7 64-bit. These programs worked on XP 32-bit. I've done a lot of research and tried changing my PYTHONPATH, moving the lib folder, and other suggestions. I made a new key in the registry under Python26, as suggested by another user, with the contents of the Desktop10.pth file. However I am still getting the same error posted below. I am currently running ArcGIS 10.0. I am probably missing something simple! Any help would be greatly appreciated! Thank you in advance. Traceback (most recent call last): File "Z:\Desktop\GISClimateMapping.py", line 85, in <module> import arcpy File "C:\Program Files (x86)\ArcGIS\Desktop10.0\arcpy\arcpy\__init__.py", line 17, in <module> from geoprocessing import gp File "C:\Program Files (x86)\ArcGIS\Desktop10.0\arcpy\arcpy\geoprocessing\__init__.py", line 14, in <module> from _base import * File "C:\Program Files (x86)\ArcGIS\Desktop10.0\arcpy\arcpy\geoprocessing\_base.py", line 568, in <module> env = GPEnvironments(gp) File "C:\Program Files (x86)\ArcGIS\Desktop10.0\arcpy\arcpy\geoprocessing\_base.py", line 565, in GPEnvironments return GPEnvironment(geoprocessor) File "C:\Program Files (x86)\ArcGIS\Desktop10.0\arcpy\arcpy\geoprocessing\_base.py", line 521, in __init__ self._refresh() File "C:\Program Files (x86)\ArcGIS\Desktop10.0\arcpy\arcpy\geoprocessing\_base.py", line 523, in _refresh envset = (set(env for env in self._gp.listEnvironments())) RuntimeError: NotInitialized
Python ArcGIS ArcPy RuntimeError: NotInitialized
1.2
0
0
5,049
14,368,143
2013-01-16T21:26:00.000
1
0
1
0
python,gis
15,913,398
7
false
0
0
I had this very error when the first line of my code was import arcpy, and the solution for me was to insert a new first line in my Python script: import arcview. My code was running fine on a system using ArcGIS 10.0, but ran into this problem after I upgraded my development box to 10.1 Desktop and Server. Various stackoverflow, gis.stackexchange, and forums.arcgis.com articles pointed to PATH, PYTHONPATH, or HKLM environment possibilities, or to checking your ArcGIS Administrator licensing. After reproducing the problem in both PyScripter and IDLE, and confirming everything was as it should be with a properly uninstalled Python 2.6, an installed Python 2.7 environment, and a valid floating license, the error still persisted. My best conjecture for why this fix worked is that perhaps, starting with 10.1, the ArcGIS license checkout may be more explicit.
5
1
0
Not sure if I am posting in the right place, but I am having problems getting my Python GIS programs to work on Windows 7 64-bit. These programs worked on XP 32-bit. I've done a lot of research and tried changing my PYTHONPATH, moving the lib folder, and other suggestions. I made a new key in the registry under Python26, as suggested by another user, with the contents of the Desktop10.pth file. However I am still getting the same error posted below. I am currently running ArcGIS 10.0. I am probably missing something simple! Any help would be greatly appreciated! Thank you in advance. Traceback (most recent call last): File "Z:\Desktop\GISClimateMapping.py", line 85, in <module> import arcpy File "C:\Program Files (x86)\ArcGIS\Desktop10.0\arcpy\arcpy\__init__.py", line 17, in <module> from geoprocessing import gp File "C:\Program Files (x86)\ArcGIS\Desktop10.0\arcpy\arcpy\geoprocessing\__init__.py", line 14, in <module> from _base import * File "C:\Program Files (x86)\ArcGIS\Desktop10.0\arcpy\arcpy\geoprocessing\_base.py", line 568, in <module> env = GPEnvironments(gp) File "C:\Program Files (x86)\ArcGIS\Desktop10.0\arcpy\arcpy\geoprocessing\_base.py", line 565, in GPEnvironments return GPEnvironment(geoprocessor) File "C:\Program Files (x86)\ArcGIS\Desktop10.0\arcpy\arcpy\geoprocessing\_base.py", line 521, in __init__ self._refresh() File "C:\Program Files (x86)\ArcGIS\Desktop10.0\arcpy\arcpy\geoprocessing\_base.py", line 523, in _refresh envset = (set(env for env in self._gp.listEnvironments())) RuntimeError: NotInitialized
Python ArcGIS ArcPy RuntimeError: NotInitialized
0.028564
0
0
5,049
14,368,143
2013-01-16T21:26:00.000
3
0
1
0
python,gis
17,217,911
7
false
0
0
I had a very similar problem. We have a single licence which, if it's checked out to someone else, prohibits my script from running. I've found this empirically rather than through code/support but I'm fairly confident that's your issue.
5
1
0
Not sure if I am posting in the right place, but I am having problems getting my Python GIS programs to work on Windows 7 64-bit. These programs worked on XP 32-bit. I've done a lot of research and tried changing my PYTHONPATH, moving the lib folder, and other suggestions. I made a new key in the registry under Python26, as suggested by another user, with the contents of the Desktop10.pth file. However I am still getting the same error posted below. I am currently running ArcGIS 10.0. I am probably missing something simple! Any help would be greatly appreciated! Thank you in advance. Traceback (most recent call last): File "Z:\Desktop\GISClimateMapping.py", line 85, in <module> import arcpy File "C:\Program Files (x86)\ArcGIS\Desktop10.0\arcpy\arcpy\__init__.py", line 17, in <module> from geoprocessing import gp File "C:\Program Files (x86)\ArcGIS\Desktop10.0\arcpy\arcpy\geoprocessing\__init__.py", line 14, in <module> from _base import * File "C:\Program Files (x86)\ArcGIS\Desktop10.0\arcpy\arcpy\geoprocessing\_base.py", line 568, in <module> env = GPEnvironments(gp) File "C:\Program Files (x86)\ArcGIS\Desktop10.0\arcpy\arcpy\geoprocessing\_base.py", line 565, in GPEnvironments return GPEnvironment(geoprocessor) File "C:\Program Files (x86)\ArcGIS\Desktop10.0\arcpy\arcpy\geoprocessing\_base.py", line 521, in __init__ self._refresh() File "C:\Program Files (x86)\ArcGIS\Desktop10.0\arcpy\arcpy\geoprocessing\_base.py", line 523, in _refresh envset = (set(env for env in self._gp.listEnvironments())) RuntimeError: NotInitialized
Python ArcGIS ArcPy RuntimeError: NotInitialized
0.085505
0
0
5,049
14,368,143
2013-01-16T21:26:00.000
0
0
1
0
python,gis
17,299,610
7
false
0
0
For the record, I just encountered this problem in 10.1 while debugging an arcpy script in Visual Studio. It actually happened between runs with no code changes - one run worked, the next got the error. For whatever reason, adding the import arcview to the top worked. Maybe it is a license checkout issue, but such inconsistency is troubling.
5
1
0
Not sure if I am posting in the right place, but I am having problems getting my Python GIS programs to work on Windows 7 64-bit. These programs worked on XP 32-bit. I've done a lot of research and tried changing my PYTHONPATH, moving the lib folder, and other suggestions. I made a new key in the registry under Python26, as suggested by another user, with the contents of the Desktop10.pth file. However I am still getting the same error posted below. I am currently running ArcGIS 10.0. I am probably missing something simple! Any help would be greatly appreciated! Thank you in advance. Traceback (most recent call last): File "Z:\Desktop\GISClimateMapping.py", line 85, in <module> import arcpy File "C:\Program Files (x86)\ArcGIS\Desktop10.0\arcpy\arcpy\__init__.py", line 17, in <module> from geoprocessing import gp File "C:\Program Files (x86)\ArcGIS\Desktop10.0\arcpy\arcpy\geoprocessing\__init__.py", line 14, in <module> from _base import * File "C:\Program Files (x86)\ArcGIS\Desktop10.0\arcpy\arcpy\geoprocessing\_base.py", line 568, in <module> env = GPEnvironments(gp) File "C:\Program Files (x86)\ArcGIS\Desktop10.0\arcpy\arcpy\geoprocessing\_base.py", line 565, in GPEnvironments return GPEnvironment(geoprocessor) File "C:\Program Files (x86)\ArcGIS\Desktop10.0\arcpy\arcpy\geoprocessing\_base.py", line 521, in __init__ self._refresh() File "C:\Program Files (x86)\ArcGIS\Desktop10.0\arcpy\arcpy\geoprocessing\_base.py", line 523, in _refresh envset = (set(env for env in self._gp.listEnvironments())) RuntimeError: NotInitialized
Python ArcGIS ArcPy RuntimeError: NotInitialized
0
0
0
5,049
14,368,143
2013-01-16T21:26:00.000
0
0
1
0
python,gis
21,458,839
7
false
0
0
I think the problem was created by another user running ArcMap at the same time, since there is a single ArcGIS licence. In fact, when the user closed ArcMap, the Python script started running fine.
5
1
0
Not sure if I am posting in the right place, but I am having problems getting my Python GIS programs to work on Windows 7 64-bit. These programs worked on XP 32-bit. I've done a lot of research and tried changing my PYTHONPATH, moving the lib folder, and other suggestions. I made a new key in the registry under Python26, as suggested by another user, with the contents of the Desktop10.pth file. However I am still getting the same error posted below. I am currently running ArcGIS 10.0. I am probably missing something simple! Any help would be greatly appreciated! Thank you in advance. Traceback (most recent call last): File "Z:\Desktop\GISClimateMapping.py", line 85, in <module> import arcpy File "C:\Program Files (x86)\ArcGIS\Desktop10.0\arcpy\arcpy\__init__.py", line 17, in <module> from geoprocessing import gp File "C:\Program Files (x86)\ArcGIS\Desktop10.0\arcpy\arcpy\geoprocessing\__init__.py", line 14, in <module> from _base import * File "C:\Program Files (x86)\ArcGIS\Desktop10.0\arcpy\arcpy\geoprocessing\_base.py", line 568, in <module> env = GPEnvironments(gp) File "C:\Program Files (x86)\ArcGIS\Desktop10.0\arcpy\arcpy\geoprocessing\_base.py", line 565, in GPEnvironments return GPEnvironment(geoprocessor) File "C:\Program Files (x86)\ArcGIS\Desktop10.0\arcpy\arcpy\geoprocessing\_base.py", line 521, in __init__ self._refresh() File "C:\Program Files (x86)\ArcGIS\Desktop10.0\arcpy\arcpy\geoprocessing\_base.py", line 523, in _refresh envset = (set(env for env in self._gp.listEnvironments())) RuntimeError: NotInitialized
Python ArcGIS ArcPy RuntimeError: NotInitialized
0
0
0
5,049
14,369,696
2013-01-16T23:21:00.000
0
0
0
0
python,performance,data-mining,pdb,large-data
14,369,860
3
false
0
0
Write a script that does the selects, the object-relational conversions, then pickles the data to a local file. Your development script will start by unpickling the data and proceeding. If the data is significantly smaller than physical RAM, you can memory map a file shared between two processes, and write the pickled data to memory.
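A minimal sketch of that script, where expensive_load_from_sqlite is a hypothetical stand-in for the slow loading step:

```python
import os
import pickle

def load_dataset(cache="dataset.pkl"):
    """Unpickle the prepared data if a cache exists; otherwise do the
    slow load (SQL, CSV parsing, ...) once and pickle the result."""
    if os.path.exists(cache):
        with open(cache, "rb") as f:
            return pickle.load(f)
    data = expensive_load_from_sqlite()  # hypothetical slow loader
    with open(cache, "wb") as f:
        pickle.dump(data, f, pickle.HIGHEST_PROTOCOL)
    return data
```

After the first run, every edit-and-rerun cycle pays only the unpickling cost; delete the cache file when the underlying data changes.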
2
3
1
I do data mining research and often have Python scripts that load large datasets from SQLite databases, CSV files, pickle files, etc. In the development process, my scripts often need to be changed and I find myself waiting 20 to 30 seconds for data to load. Loading data streams (e.g. from a SQLite database) sometimes works, but not in all situations -- if I need to go back into a dataset often, I'd rather pay the upfront time cost of loading the data. My best solution so far is subsampling the data until I'm happy with my final script. Does anyone have a better solution/design practice? My "ideal" solution would involve using the Python debugger (pdb) cleverly so that the data remains loaded in memory, I can edit my script, and then resume from a given point.
How do I make large datasets load quickly in Python?
0
0
0
806
14,369,696
2013-01-16T23:21:00.000
0
0
0
0
python,performance,data-mining,pdb,large-data
63,300,344
3
false
0
0
Jupyter notebook allows you to load a large data set into a memory resident data structure, such as a Pandas dataframe in one cell. Then you can operate on that data structure in subsequent cells without having to reload the data.
2
3
1
I do data mining research and often have Python scripts that load large datasets from SQLite databases, CSV files, pickle files, etc. In the development process, my scripts often need to be changed and I find myself waiting 20 to 30 seconds for data to load. Loading data streams (e.g. from a SQLite database) sometimes works, but not in all situations -- if I need to go back into a dataset often, I'd rather pay the upfront time cost of loading the data. My best solution so far is subsampling the data until I'm happy with my final script. Does anyone have a better solution/design practice? My "ideal" solution would involve using the Python debugger (pdb) cleverly so that the data remains loaded in memory, I can edit my script, and then resume from a given point.
How do I make large datasets load quickly in Python?
0
0
0
806
14,370,402
2013-01-17T00:37:00.000
2
1
1
0
python,client-server,messaging
14,370,602
2
false
0
0
Unless your "status server" and "real server" are running in the same process (that is, loosely, one of them imports the other and starts it), just doing from main import the_server in your status server isn't going to help. That will just give you a new, completely independent instance of the_server that isn't doing anything, which you can then report status on.

There are a few obvious ways to solve the problem:

1. Merge the status server into the real server completely, by expanding the existing protocol to handle status-related requests, as Peter Wooster suggests.
2. Merge the status server into the real server's async I/O implementation, but still listening on two different ports, with different protocol handlers for each.
3. Merge the status server into the real server process, but with a separate async I/O implementation.
4. Store the status information in, e.g., an mmap or a multiprocessing.Array instead of directly in the Server object, so the status server can open the same mmap/etc. and read from it. (You might be able to put the Server object itself in shared memory, but I wouldn't recommend that even if you could make it work.) See the sketch after this list.

I could make these more concrete if you explained how you're dealing with async I/O in the server today. A select (or poll/kqueue/epoll) loop? Thread per connection? Magical greenlets? Non-magical cooperative threading (like PEP 3156/tulip)? Even just "All I know is that we're using twisted/tornado/gevent/etc., so whatever that does" is enough.
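A minimal, illustrative sketch of option 4 using multiprocessing shared memory; the names are invented, not from the question's codebase:

```python
import multiprocessing

# A shared counter lives in shared memory, so a separate status process
# can read what the real server writes.
messages_received = multiprocessing.Value("i", 0)

def real_server(counter):
    for _ in range(1000):           # stand-in for the real message loop
        with counter.get_lock():
            counter.value += 1

def status_server(counter):
    print("messages so far:", counter.value)

if __name__ == "__main__":
    p = multiprocessing.Process(target=real_server, args=(messages_received,))
    p.start()
    status_server(messages_received)
    p.join()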
2
0
0
I'm new to Python, so please excuse me in advance if this question doesn't make sense. We have a Python messaging server which has one file, server.py, with a main function in it. It also has a class "*server", and main defines a global instance of this class, "the_server". All other functions in the same file or in different modules (in the same dir) import this instance as "from main import the_server". Now, my job is to devise a mechanism which allows us to get the latest message status (number of messages etc.) from the aforementioned messaging server. This is the dir structure: src/ -> all .py files; only one file has main. In the same directory I created another status server with a main function listening for connections on a different port, and I'm hoping that every time a client asks me for message status I can invoke function(s) on my messaging server which return the expected numbers. How can I import the global instance, "the_server", in my status server? Or rather, is that even the right way to go?
Python server interaction
0.197375
0
1
128
14,370,402
2013-01-17T00:37:00.000
2
1
1
0
python,client-server,messaging
14,370,495
2
true
0
0
You should probably use a single server and design a protocol that supports several kinds of messages: 'send' messages get sent, 'recv' messages read any existing message, 'status' messages get the server status, 'stop' messages shut it down, etc. You might look at existing protocols such as REST for ideas.
2
0
0
I'm new to Python, so please excuse me in advance if this question doesn't make sense. We have a Python messaging server which has one file, server.py, with a main function in it. It also has a class "*server", and main defines a global instance of this class, "the_server". All other functions in the same file or in different modules (in the same dir) import this instance as "from main import the_server". Now, my job is to devise a mechanism which allows us to get the latest message status (number of messages etc.) from the aforementioned messaging server. This is the dir structure: src/ -> all .py files; only one file has main. In the same directory I created another status server with a main function listening for connections on a different port, and I'm hoping that every time a client asks me for message status I can invoke function(s) on my messaging server which return the expected numbers. How can I import the global instance, "the_server", in my status server? Or rather, is that even the right way to go?
Python server interaction
1.2
0
1
128
14,370,576
2013-01-17T01:00:00.000
1
0
0
0
python,database,django,frontend
14,371,043
2
false
1
0
Importing the Excel sheet into Django and having it rendered as text fields is not a simple two-step process; you need to know how Django works. First you need to get the data into a MySQL database, using either some language or some ready-made tools. Then you need to make a Model for that table, and then you can use the Django admin to edit the rows.
1
3
0
I have been trying to get my head around Django over the last week or two. It's slowly starting to make some sense and I am really liking it. My goal is to replace a fairly messy Excel spreadsheet with a database and a frontend for my users. This would involve pulling the data out of a table, presenting it in a web tabular format, and allowing changes to be made through text fields and drop-down menus, with a simple update button that will commit all changes to the DB. My question is: will the built-in Django Forms functionality be the best solution? Or would I create some sort of for loop over my objects and wrap them in HTML form syntax in my template? I'm just not too sure how to approach the solution. Apologies if this seems like a simple question; I just feel like there are maybe a few ways to do it, but maybe there is one perfect way. Thanks
Custom Django Database Frontend
0.099668
1
0
2,045
14,370,830
2013-01-17T01:29:00.000
0
0
1
0
python,list,search
14,370,873
3
false
0
0
Basically, to do this, I just set up each type as its own key in a dictionary, with the position it is at as the corresponding value.
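For example, with invented positions and type names:

```python
# Map each type to its position, so membership tests and lookups are O(1).
items = [((1, 2), "wall"), ((3, 4), "door")]
positions = {t: (x, y) for ((x, y), t) in items}

if "door" in positions:
    print("door is at", positions["door"])
```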
1
0
0
I have a list of tuples formatted like ((x,y),type), and I want to see if the list contains the "type" that I want, regardless of the (x,y) position. If it does, I want to insert something right after the index of the tuple containing the selected "type". The way i have it set up, the list will only contain one of each type, if that makes a difference. EDIT: Wow, I'm sorry, I just realized that using dictionaries would make this a lot easier. I promise I did actually think about this before I posted.
Python searching through lists for aspects of tuples
0
0
0
81
14,371,156
2013-01-17T02:11:00.000
5
1
0
1
python,python-3.x,pytest
59,968,198
6
false
0
0
Install it with pip3: pip3 install -U pytest
3
61
0
I've installed pytest 2.3.4 under Debian Linux. By default it runs under Python 2.7, but sometimes I'd like to run it under Python 3.x, which is also installed. I can't seem to find any instructions on how to do that. The PyPI Trove classifiers show Python :: 3 so presumably it must be possible. Aside from py.test somedir/sometest.py, I can use python -m pytest ..., or even python2.7 -m pytest ..., but if I try python3 -m pytest ... I get /usr/bin/python3: No module named pytest
Pytest and Python 3
0.16514
0
0
65,185
14,371,156
2013-01-17T02:11:00.000
27
1
0
1
python,python-3.x,pytest
14,371,623
6
false
0
0
python3 doesn't have the pytest module installed. If you can, install the python3-pytest package. If you can't, try this:

1. Install virtualenv.
2. Create a virtualenv for python3: virtualenv --python=python3 env_name
3. Activate the virtualenv: source ./env_name/bin/activate
4. Install pytest: pip install pytest

Now, using this virtualenv, try to run your tests.
3
61
0
I've installed pytest 2.3.4 under Debian Linux. By default it runs under Python 2.7, but sometimes I'd like to run it under Python 3.x, which is also installed. I can't seem to find any instructions on how to do that. The PyPI Trove classifiers show Python :: 3 so presumably it must be possible. Aside from py.test somedir/sometest.py, I can use python -m pytest ..., or even python2.7 -m pytest ..., but if I try python3 -m pytest ... I get /usr/bin/python3: No module named pytest
Pytest and Python 3
1
0
0
65,185
14,371,156
2013-01-17T02:11:00.000
69
1
0
1
python,python-3.x,pytest
14,371,849
6
true
0
0
I found a workaround: Installed python3-pip using aptitude, which created /usr/bin/pip-3.2. Next pip-3.2 install pytest which re-installed pytest, but under a python3.2 path. Then I was able to use python3 -m pytest somedir/sometest.py. Not as convenient as running py.test directly, but workable.
3
61
0
I've installed pytest 2.3.4 under Debian Linux. By default it runs under Python 2.7, but sometimes I'd like to run it under Python 3.x, which is also installed. I can't seem to find any instructions on how to do that. The PyPI Trove classifiers show Python :: 3 so presumably it must be possible. Aside from py.test somedir/sometest.py, I can use python -m pytest ..., or even python2.7 -m pytest ..., but if I try python3 -m pytest ... I get /usr/bin/python3: No module named pytest
Pytest and Python 3
1.2
0
0
65,185
14,371,920
2013-01-17T03:47:00.000
4
0
0
1
python,google-app-engine,python-2.7
14,377,741
3
true
1
0
My solution to this was to increase the Pending Latency time. If a webpage fires 3 AJAX requests at once, App Engine was launching new instances for the additional requests. After configuring the minimum Pending Latency time, setting it to 2.5 secs, the same instance was processing all three requests and throughput was acceptable. My project still has little load/traffic, so in addition to raising the Pending Latency, I opened an account at Pingdom and configured it to ping my App Engine project every minute. The combination of both means that I have one instance that stays alive and serves all requests most of the time. It will scale to new instances when really necessary.
3
8
0
So I've been using App Engine for quite some time now with no issues. I'm aware that if the app hasn't been hit by a visitor for a while then the instance will shut down, and the first visitor to hit the site will have a few-second delay while a new instance fires up. However, recently it seems that the instances only stay alive for a very short period of time (sometimes less than a minute), and if I have 1 instance already up and running, and I refresh an app webpage, it still fires up another instance (and the page it serves is minimal homepage HTML, which shouldn't require much CPU/memory). Looking at my logs, it's constantly starting up new instances, which was never the case previously. Any tips on what I should be looking at, or any ideas of why this is happening? Also, I'm using Python 2.7, threadsafe, python_precompiled, warmup inbound services, NDB. Update: So I changed my app to have at least 1 idle instance, hoping that this would solve the problem, but it is still firing up new instances even though one resident instance is already running. So when there is just the 1 resident instance (and I'm not getting any traffic except me), and I go to another page on my app, it is still starting up a new instance. Additionally, I changed the Pending Latency to 1.5s as koma pointed out, but that doesn't seem to be helping. The memory usage of the instances is always around 53MB, which is surprising when the pages being called aren't doing much. I'm using the F1 Frontend Instance Class and that has a limit of 128MB, but either way 53MB seems high for what it should be doing. Is that an acceptable size when it first starts up? Update 2: I just noticed in the dashboard that in the last 14 hours, the request /_ah/warmup responded with 24 404 errors. Could this be related? Why would they be responding with a 404 response status? Main question: Why would it constantly be starting up new instances (even with no traffic)? Especially where there are already existing instances, and why do they shut down so quickly?
Google App Engine Instances keep quickly shutting down
1.2
0
0
2,656
14,371,920
2013-01-17T03:47:00.000
1
0
0
1
python,google-app-engine,python-2.7
14,416,358
3
false
1
0
1 idle instance means that App Engine will always fire up an extra instance for the next user that comes along; that's why you are seeing an extra instance fired up with that setting. If you remove the idle instance setting (or use the default) and just increase pending latency, it should "wait" before firing up the extra instance. With regards to the main question, I think @koma might be onto something in saying that with default settings App Engine will tend to fire extra instances even if the requests are coming from the same session. In my experience App Engine is great under heavy traffic but difficult (and sometimes frustrating) to work with under low traffic conditions. In particular, it is very difficult to figure out the nuances of what the criteria for firing up new instances actually are. Personally, I have a "wake-up" cron job to bring up an instance every couple of minutes, to make sure that if someone comes to the site an instance is ready to serve. This is not ideal because it eats into my quota, but it works most of the time because traffic on my app is reasonably high.
3
8
0
So I've been using App Engine for quite some time now with no issues. I'm aware that if the app hasn't been hit by a visitor for a while then the instance will shut down, and the first visitor to hit the site will have a few-second delay while a new instance fires up. However, recently it seems that the instances only stay alive for a very short period of time (sometimes less than a minute), and if I have 1 instance already up and running, and I refresh an app webpage, it still fires up another instance (and the page it serves is minimal homepage HTML, which shouldn't require much CPU/memory). Looking at my logs, it's constantly starting up new instances, which was never the case previously. Any tips on what I should be looking at, or any ideas of why this is happening? Also, I'm using Python 2.7, threadsafe, python_precompiled, warmup inbound services, NDB. Update: So I changed my app to have at least 1 idle instance, hoping that this would solve the problem, but it is still firing up new instances even though one resident instance is already running. So when there is just the 1 resident instance (and I'm not getting any traffic except me), and I go to another page on my app, it is still starting up a new instance. Additionally, I changed the Pending Latency to 1.5s as koma pointed out, but that doesn't seem to be helping. The memory usage of the instances is always around 53MB, which is surprising when the pages being called aren't doing much. I'm using the F1 Frontend Instance Class and that has a limit of 128MB, but either way 53MB seems high for what it should be doing. Is that an acceptable size when it first starts up? Update 2: I just noticed in the dashboard that in the last 14 hours, the request /_ah/warmup responded with 24 404 errors. Could this be related? Why would they be responding with a 404 response status? Main question: Why would it constantly be starting up new instances (even with no traffic)? Especially where there are already existing instances, and why do they shut down so quickly?
Google App Engine Instances keep quickly shutting down
0.066568
0
0
2,656
14,371,920
2013-01-17T03:47:00.000
0
0
0
1
python,google-app-engine,python-2.7
14,742,365
3
false
1
0
I only started having this type of issue on Monday February 4 around 10 pm EST, and is continuing until now. I first started noticing then that instances kept firing up and shutting down, and latency increased dramatically. It seemed that the instance scheduler was turning off idle instances too rapidly, and causing subsequent thrashing. I set minimum idle instances to 1 to stabilize latency, which worked. However, there is still thrashing of new instances. I tried the recommendations in this thread to only set minimum pending latency, but that does not help. Ultimately, idle instances are being turned off too quickly. Then when they're needed, the latency shoots up while trying to fire up new instances. I'm not sure why you saw this a couple weeks ago, and it only started for me a couple days ago. Maybe they phased in their new instance scheduler to customers gradually? Are you not still seeing instances shutting down quickly?
3
8
0
So I've been using App Engine for quite some time now with no issues. I'm aware that if the app hasn't been hit by a visitor for a while then the instance will shut down, and the first visitor to hit the site will have a few-second delay while a new instance fires up. However, recently it seems that the instances only stay alive for a very short period of time (sometimes less than a minute), and if I have 1 instance already up and running, and I refresh an app webpage, it still fires up another instance (and the page it serves is minimal homepage HTML, which shouldn't require much CPU/memory). Looking at my logs, it's constantly starting up new instances, which was never the case previously. Any tips on what I should be looking at, or any ideas of why this is happening? Also, I'm using Python 2.7, threadsafe, python_precompiled, warmup inbound services, NDB. Update: So I changed my app to have at least 1 idle instance, hoping that this would solve the problem, but it is still firing up new instances even though one resident instance is already running. So when there is just the 1 resident instance (and I'm not getting any traffic except me), and I go to another page on my app, it is still starting up a new instance. Additionally, I changed the Pending Latency to 1.5s as koma pointed out, but that doesn't seem to be helping. The memory usage of the instances is always around 53MB, which is surprising when the pages being called aren't doing much. I'm using the F1 Frontend Instance Class and that has a limit of 128MB, but either way 53MB seems high for what it should be doing. Is that an acceptable size when it first starts up? Update 2: I just noticed in the dashboard that in the last 14 hours, the request /_ah/warmup responded with 24 404 errors. Could this be related? Why would they be responding with a 404 response status? Main question: Why would it constantly be starting up new instances (even with no traffic)? Especially where there are already existing instances, and why do they shut down so quickly?
Google App Engine Instances keep quickly shutting down
0
0
0
2,656
14,375,397
2013-01-17T09:02:00.000
1
1
0
1
c++,python,gcc,g++,redhat
14,390,969
1
true
0
0
You want to link to the python static library, which should get created by default and will be called libpython2.7.a If I recall correctly, as long as you don't build Python with --enable-shared it doesn't install the dynamic library, so you'll only get the static lib and so simply linking your C++ application with -lpython2.7 -L/path/where/you/installed/python/lib should link to the static library.
1
0
0
I have a C++ application from Windows that I wish to port across to run on a Red Hat Linux system. This application embeds a slightly modified version of Python 2.7.3 (I added the Py_SetPath command as it is essential for my use case) so I definitely need to compile the Python source. My problem is that despite looking, I can't actually find any guidance on how to get Python to emit the right files for me to link against and how to then get g++ to link my C++ code against it in such a way that I don't need to have an installed copy of Python on every system I distribute this to. So my questions are: how do I compile Python so that it can be embedded into the C++ app on Linux? what am I linking against for the C++ app to work? Sorry for these basic questions, but having convinced my employer to let me try and move our systems over to Linux, I'm keen to make it go off as smoothly as possible and I'm worried avbout not making too much progress!
Compile Python 2.7.3 on Linux for Embedding into a C++ app
1.2
0
0
667
14,375,520
2013-01-17T09:09:00.000
-1
0
0
1
python,apache,wsgi
14,375,870
2
false
1
0
It's quite possible. This is what virtualenv is all about. Set up the second app in a virtualenv with Python 3. You can add it in a VirtualHost configuration in Apache.
1
5
0
I already have a web application in written in Python 2 that runs over WSGI (specifically, OpenERP web server). I would like to write a new web application that would run on the same server (Apache 2 on Ubuntu), but using WSGI and Python 3. The two applications would be on different ports. Is that possible?
WSGI apps with python 2 and python 3 on the same server?
-0.099668
0
0
1,947
14,377,002
2013-01-17T10:30:00.000
2
0
0
0
python,substring,jinja2
14,386,224
2
false
1
0
Or you can create a filter for phone numbers, like {{ phone_number|phone }}
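A minimal sketch of such a filter, assuming plain Jinja2 outside any framework; the filter name phone and the digit layout are my own assumptions, and how you reach the environment differs per framework:

```python
from jinja2 import Environment

def phone(value):
    """Format a run of digits like 9073335000 as (907) 333-5000."""
    digits = str(value).replace("-", "")
    return "({0}) {1}-{2}".format(digits[:3], digits[3:6], digits[6:])

env = Environment()
env.filters["phone"] = phone  # enables {{ phone_number|phone }} in templates

print(env.from_string("{{ 9073335000|phone }}").render())  # (907) 333-5000
```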
1
0
0
I have a question that I tried to solve different ways with Jinja2. I have a number that is saved in the database. When I print it, the original number is, for example: 907333-5000. I want that number to be printed in this format: (907) 333-5000, but I don't know exactly how to do it with Jinja2. Thank you
Jinja2 substring integer
0.197375
0
0
6,415
14,377,250
2013-01-17T10:43:00.000
0
0
0
0
python,database,multithreading
14,377,893
1
true
1
0
This would be a good use for something like memcached.
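For illustration, a rough sketch of that idea with the python-memcached client; the key scheme, the job-id hand-off, and the 10-minute expiry are my own assumptions:

```python
import memcache  # python-memcached client; assumes memcached runs on localhost:11211

mc = memcache.Client(["127.0.0.1:11211"])

def store_result(job_id, rows):
    # Keep the finished query result around long enough for the AJAX poller.
    mc.set("sqlresult:%s" % job_id, rows, time=600)

def poll_result(job_id):
    # Returns None while the worker thread is still running the query.
    return mc.get("sqlresult:%s" % job_id)
```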
1
1
0
I'm writing a webapp in bottle. I have a small interface that lets users run SQL statements. Sometimes it takes about 5 seconds until the user gets a result because the DB is quite big and old. What I want to do is the following: 1. Start the query in a thread 2. Give the user a response right away and have AJAX poll for the result There is one thing that I'm not sure of... Where do I store the result of the query? Should I store it in a DB? Should I store it in a variable inside my webapp? What do you guys think would be best?
Python 3 - SQL Result - where to store it
1.2
1
0
105
14,380,519
2013-01-17T13:48:00.000
5
0
0
0
python,django,pylons,web-frameworks
14,391,975
4
true
1
0
Any of them will work. Arguably the most popular Python web frameworks these days are Django, Flask, and Pyramid.
1
0
0
Some details about the project: pure backend project, no frontend; expose a REST API (maybe custom routes?); connect to other REST APIs; query MySQL & MongoDB using an ORM; have unit tests. What Python framework would you recommend for it?
Python framework to create pure backend project
1.2
0
0
2,258
14,380,871
2013-01-17T14:07:00.000
0
0
1
0
python
14,381,245
2
false
0
0
I use the split function after getting the file name: filename.split('.')[0]
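A sketch of the same idea made path-safe with os.path.basename, so dots in directory names don't interfere; splitting on the first dot drops every trailing extension at once:

```python
import os.path

def bare_name(path):
    # Take just the file name, then keep everything before the first dot,
    # so multiple trailing extensions (.vx.txt) are removed in one go.
    return os.path.basename(path).split(".")[0]

print(bare_name("/home/si/text.txt"))     # text
print(bare_name("/home/si/text.vx.txt"))  # text
```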
1
0
0
I need to extract the file name without the extension. Example: /home/si/text.txt /home/si/text.vx.txt In both cases I should receive the output text only. I am not sure how many trailing extensions a file can have, but I need to extract only the file name. I have tried splitext(filename)[0] but it gave me the output text.vx rather than text
extraction of file from filepath
0
0
0
38
14,381,019
2013-01-17T14:17:00.000
0
0
0
0
python,dropbox,dropbox-api
14,384,514
1
true
0
0
According to the Dropbox API, the max number of files returned from the /metadata API call is 25,000. There is a way to limit the number of files returned from the API call, but there does not seem to be a way to list file entries starting from X to Y (like getting entries 100 to 200, then 200 to 300); it seems the only way is to split such a large number of files across folders.
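For illustration only, a sketch against the v1-era Dropbox Python SDK; treat the DropboxClient constructor and the file_limit keyword as assumptions to verify against the SDK version you actually have:

```python
# Hypothetical names throughout; check your SDK's docs before relying on this.
from dropbox import client

c = client.DropboxClient("ACCESS_TOKEN")              # placeholder token
meta = c.metadata("/big_folder", file_limit=25000)    # file_limit is assumed
names = [entry["path"] for entry in meta["contents"]]
print(len(names), "entries listed")
```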
1
0
0
I am trying to access a subset of files from a directory in Dropbox. That directory has more than 25k files (about 200k and growing) and so my initial attempt at building a list of filenames from client.metadata isn't workable. How can one get around this? I can access the filenames from my local copy and periodically update that list. However, because this is a script that a few people in my lab will use, I hoped for something that did not rely on my local copy of Dropbox.
Access files from a large directory Dropbox API
1.2
0
1
259
14,381,091
2013-01-17T14:20:00.000
8
0
1
0
python,random
14,381,216
2
false
0
0
I see you are very close with what you have: Str = random.randomint(1,18) should be Str = random.randint(1,18) This is the line which assigns the random int to the variable Str, and when you ask for Str you should get the same number each time. If you keep calling Str = random.randint(1,18) it will change Str each time. so only do it once. If you know or understand about classes you should use them to contain your characters/items/spells/adventures/monsters etc. which can have properties and attributes and inventories etc etc, much like in the game itself, you group all the things under different types. Note: You mention you used lambda, that might mean that you have assigned Str to a function which returns a random variable each time, meaning each time you asked for Str you would actually be asking for the function to return a random number.
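A minimal sketch of rolling each stat exactly once and then reading the fixed value back; the stat names are just examples:

```python
import random

# Roll each stat once at character creation; the values then stay fixed.
stats = {name: random.randint(1, 18)
         for name in ("Str", "Wis", "Int", "Dex", "Cha", "Con")}

print(stats["Str"])  # same number every time you look it up
print(stats["Str"])
```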
1
6
0
Before I say anything else, I would like to mention that I am almost completely new to coding, and only have a very rudimentary understanding of python. Now that that's out of the way, my problem is that I am (attempting) to code a text adventure game, along the lines of D&D. I'm stuck at an early stage- namely, how to assign a random integer between 1 and 18 to a variable. I've seen some ways of doing this, but the value of the variable changes each time it's called up. This can't happen. The reason for this is because I want the stats (Strength, Wisdom, Intelligence, Dexterity, Charisma, and Constitution) to be a randomly generated but fixed number, that can be called upon and have it the same each time. I've tried mucking around with things like Str = random.randomint(1,18), using the random module. The closest I've come is using the lambda function, so that when I call up the variable it generates a random number which is different each time. None of these have rally worked, and I would really like to know what I'm doing wrong.
How do I Assign a random number to a variable?
1
0
0
40,073
14,383,025
2013-01-17T15:58:00.000
0
0
0
1
python,google-app-engine,pydev
14,387,118
3
false
1
0
If you want to use Eclipse's Import feature, go with General -> File system.
2
1
0
Created a gae project with the googleappengine launch and have been building it with textmate. Now, I'd like to import it to the Eclipse PyDev GAE project. Tried to import it, but it doesn't work. Anyone know how to do that? Thanks in advance.
Pydev: How to import a gae project to eclipse Pydev gae project?
0
0
0
712
14,383,025
2013-01-17T15:58:00.000
2
0
0
1
python,google-app-engine,pydev
14,383,720
3
true
1
0
You could try not using the eclipse import feature. Within Eclipse, create a new PyDev GAE project, and then you can copy in your existing files.
2
1
0
Created a gae project with the googleappengine launch and have been building it with textmate. Now, I'd like to import it to the Eclipse PyDev GAE project. Tried to import it, but it doesn't work. Anyone know how to do that? Thanks in advance.
Pydev: How to import a gae project to eclipse Pydev gae project?
1.2
0
0
712
14,385,921
2013-01-17T18:42:00.000
1
0
0
0
python,image,image-processing,numpy,scipy
14,386,145
2
true
0
0
Operations that involve information from neighboring pixels, such as closing will always have trouble at the edges. In your case, this is very easy to get around: just process subimages that are slightly larger than your tiling, and keep the good parts when stitching together.
1
6
1
I am attempting to fill holes in a binary image. The image is rather large so I have broken it into chunks for processing. When I use the scipy.ndimage.morphology.binary_fill_holes functions, it fills larger holes that belong in the image. So I tried using scipy.ndimage.morphology.binary_closing, which gave the desired results of filling small holes in the image. However, when I put the chunks back together, to create the entire image, I end up with seamlines because the binary_closing function removes any values from the border pixels of each chunk. Is there any way to avoid this effect?
Scipy Binary Closing - Edge Pixels lose value
1.2
0
0
2,670
14,387,018
2013-01-17T19:53:00.000
1
0
0
0
python,flash,testing,automated-tests,jython
14,387,317
1
false
1
0
There are a number of ways I've seen around the net, but most seem to involve exposing Flash through JS and then using the JS interface, which is a bit of a problem if you are trying to test things that you don't have dev access to, or need to be in a prod-like state for your tests. Of course, even if you do that, you aren't really simulating user interaction, since you are working through an API. If you can reliably model your Flash components with fixed pixel positions relative to the page element the Flash component is running in, you should be able to use Selenium Webdriver to position the mouse cursor and send click commands without actually cracking Flash itself. I'm not 100% sure that would work, but it seems at least worth a shot. Validation will be a bit trickier, but I think you should be able to do it with some form of image comparison. A few of the Flash automators I saw are actually using image processing under the hood to control both input and output, so it seems like a legitimate way to interact with it.
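As a sketch of the pixel-position idea with the Selenium Python bindings (the page URL and element id are placeholders, and offsets are measured from the element's top-left corner in this era of the API):

```python
from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains

driver = webdriver.Firefox()
driver.get("http://example.com/page-with-ad")   # placeholder URL

flash = driver.find_element_by_id("ad")         # placeholder element id
# Move 40px right and 25px down from the element's corner, then click.
ActionChains(driver).move_to_element_with_offset(flash, 40, 25).click().perform()
```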
1
0
0
I'm contemplating using python for some functional testing of flash ad-units for work. Currently, we have an ad (in flash) that has N locations (can be defined as x,y) that need to be 'clicked'. I'd like to use python, but I know Java will do this. I also considered Jython + Sikuli, but wanted to know if there is a python only library or tool to do this. I'd prefer to not run Jython + Sikuli if there is a native python option. TIA. @user1929959 From the pyswftools page, "At the moment, the library can be used in Python applications (including WebBased applications) to generate Flash animations on the fly.". And from the bottle-flash page, "This plugin enables flash messages in bottle.". Neither help me, unless I'm overlooking something ...
Interacting with a flash application using python
0.197375
0
0
1,778
14,388,251
2013-01-17T21:14:00.000
7
0
0
1
python,google-app-engine,full-text-search
14,390,379
1
true
1
0
If you empty out your index and call index.delete_schema() (index.deleteSchema() in Java) it will clear the mappings that we have from field name to type, and you can index your new documents as expected. Thanks!
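A rough sketch of that sequence with the GAE Python search API; the index name is a placeholder, and the batch-delete loop is my own assumption about how to empty the index, so check it against your API version:

```python
from google.appengine.api import search

index = search.Index(name="posts")  # placeholder index name

# Empty the index in batches of document ids, then drop the schema so the
# 'date' field can be re-indexed with its new type.
while True:
    ids = [doc.doc_id for doc in index.get_range(ids_only=True)]
    if not ids:
        break
    index.delete(ids)

index.delete_schema()
```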
1
5
0
The Situation Alright, so we have our app in appengine with full text search activated. We had an index set on a document with a field named 'date'. This field is a DateField and now we changed the model of the document so the field 'date' is now a NumericField. The problem is, on the production server, even if I cleared all the document from the index, the server responds with this type of error: Failed to parse search request ""; SortSpec numeric default value does not match expression type 'TEXT' in 'date' The Solution The problem is, "I think", the fact that the model on the server doesn't fit the model of the search query. So basically, one way to do it, would be to delete the whole index, but I don't know how to do it on the production server. The dev server works flawlessly
How to delete or reset a search index in Appengine
1.2
0
0
2,747
14,388,482
2013-01-17T21:30:00.000
2
0
0
0
python,licensing
14,420,183
1
false
0
1
One thing to note is that it is relatively easy to decompile and reverse engineer Python bytecode and remove any protection you put into it, so it's usually not worth it to spend too much effort and time on implementing crack-proof protection. You just want to put in enough to keep people honest. If your concern is that businesses buy the correct number of seats, one basic protection mechanism is to use a public-key mechanism to put a digital signature on a message using a private key on your licensing server. The message is just a random string and some uniquely identifying information that restricts the machine the license key is valid for (e.g. MAC address, etc.), and perhaps a timestamp if you want to expire the key. This message and the signature are then encoded in a string that your customers put into your program, which validates the license key by verifying that the signature matches the message and the message matches the machine. There are simpler schemes if all you care about is that the user has a key (not that they have a different key for different machines); you can simply use a totally random string for the message. This scheme is secure as long as the user does not reverse engineer your code. It can be bypassed by replacing the public key in your program with their own public key. It can also be bypassed by tampering with the information used to identify the computer, e.g. the MAC address; tampering with this information may allow multiple installations using the same key.
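To make the shape of the scheme concrete, here is a deliberately simplified sketch that uses a shared-secret HMAC in place of a real public-key signature (in production you would sign on the server with a private key, e.g. RSA, so no secret ships with the app); every name here is hypothetical:

```python
import hmac, hashlib

SECRET = b"server-side-secret"  # in a real scheme this never leaves the server

def make_key(mac_address):
    # Sign the machine-identifying message; the '|' separator avoids clashing
    # with the colons inside the MAC address.
    msg = mac_address.encode("utf-8")
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return "%s|%s" % (mac_address, sig)

def check_key(key, this_machines_mac):
    mac, _, sig = key.partition("|")
    expected = hmac.new(SECRET, mac.encode("utf-8"), hashlib.sha256).hexdigest()
    return mac == this_machines_mac and hmac.compare_digest(sig, expected)

print(check_key(make_key("00:11:22:33:44:55"), "00:11:22:33:44:55"))  # True
```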
1
4
0
EDIT: Can someone explain why this is "Off Topic"? I think it very clearly follows the guidelines laid out in the FAQ. Is there something I'm missing? I've been creating a windows desktop application in PyQt (python 2.7, PyQt 4.9.5), and I'm getting close to releasing the software for sale. It's time for me to add some sort of system for licensing/serial number based software activation. The software is a business to business product with a retail price of around $250. I've been looking at going with Fastspring and they offer integration with: AquaticPrime CocoaFOB GameShield and Software Passport Yummy Interactive's SoftwareShield Soraco's Quick License Manager Softwrap Concept Software's SoftwareKey System What are some good options for providing this functionality? Some things I've run into is that none of them seem to provide a python API, so I'd have to figure out how to integrate their stuff with my python code. I've never had to do that, so ease of integration with Python would be a strong factor, as would cost. I don't feel a strong need to prevent anyone from pirating my software, I just want to keep businesses honest so that they buy the correct number of seats. A final thing to consider is that in the future I might want to make it cross platform and ideally I wouldn't be locking myself to just windows. Thanks for your help.
What serial number licensing solution should I use for a Python application with Fastspring?
0.379949
0
0
3,211
14,389,279
2013-01-17T22:26:00.000
5
1
0
0
python,memory,numpy,compression
14,389,347
2
true
0
0
Incremental (de)compression should be done with zlib.{de,}compressobj() so that memory consumption can be minimized. Additionally, higher compression ratios can be attained for most data by using bz2 instead.
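A minimal sketch of the incremental approach, assuming you can feed the serialized data in as chunks so the full object and its compressed copy never coexist in memory:

```python
import zlib

def compress_to_file(chunks, path):
    # Stream pieces of the serialized matrix through a compressor object.
    comp = zlib.compressobj(9)
    with open(path, "wb") as out:
        for chunk in chunks:
            out.write(comp.compress(chunk))
        out.write(comp.flush())  # don't forget the trailing compressed data
```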
1
1
1
I am trying to compress a huge Python object, ~15G, and save it on the disk. Due to requirement constraints I need to compress this file as much as possible. I am presently using zlib.compress(9). My main concern is that the memory taken exceeds what I have available on the system (32G) during compression, and going forward the size of the object is expected to increase. Is there a more efficient/better way to achieve this? Thanks. Update: Also note that the object I want to save is a sparse numpy matrix, and that I am serializing the data before compressing, which also increases memory consumption. Since I do not need the Python object after it is serialized, would gc.collect() help?
Compress large python objects
1.2
0
0
1,086
14,389,892
2013-01-17T23:15:00.000
2
0
1
0
python,matplotlib,latex,ipython,ipython-notebook
57,860,793
2
false
0
0
I ran into the problem posted in the comments: ! LaTeX Error: File 'type1cm.sty' not found. The issue was that my default tex command was not pointing to my up-to-date MacTex distribution, but rather to an old distribution of tex which I had installed using macports a few years back and which wasn't being updated since I had switched to using MacTex. I diagnosed this by typing which tex on the command line and getting /opt/local/bin/tex which is not the default install location for MacTex. The solution was that I had to edit my $PATH variable so that the right version of tex would get called by matplotlib. I added export PATH="/usr/local/texlive/2019/bin/x86_64-darwin:$PATH" on the last line of my ~/.bash_profile. Now when I write echo $PATH on the command line I get: /usr/local/texlive/2019/bin/x86_64-darwin:blah:blah:blah... Don't forget to restart both your terminal and your jupyter server afterwards for the changes to take effect.
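Once the PATH points at a working TeX, labeling axes with LaTeX is just a matter of raw strings; a small sketch (usetex is optional, since matplotlib's built-in mathtext also renders $...$ without a TeX install):

```python
import matplotlib
matplotlib.rcParams["text.usetex"] = True  # route text through real LaTeX
import matplotlib.pyplot as plt

plt.plot([0, 1, 2], [0, 1, 4])
plt.xlabel(r"$\alpha$ (radians)")
plt.ylabel(r"$\int_0^x t\,dt$")
plt.show()
```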
1
1
0
Displaying lines of LaTeX in IPython Notebook has been answered previously, but how do you, for example, label the axis of a plot with a LaTeX string when plotting in IPython Notebook?
IPython Notebook: Plotting with LaTeX?
0.197375
0
0
10,151
14,392,423
2013-01-18T04:30:00.000
4
0
0
0
python,django
14,392,480
2
false
1
0
Try {{ request.get_full_path }} in your template.
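For context, a sketch of the pieces that make {{ request }} available without passing it by hand; the setting names are from the Django 1.x era this question dates from, so verify against your version:

```python
# settings.py -- enable the request context processor (Django 1.x-era name).
TEMPLATE_CONTEXT_PROCESSORS = (
    "django.core.context_processors.request",
    # ... plus the other default processors ...
)

# views.py -- the view must render with a RequestContext for processors to run.
from django.shortcuts import render_to_response
from django.template import RequestContext

def my_view(request):
    return render_to_response("page.html", {},
                              context_instance=RequestContext(request))
```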
1
2
0
I am currently passing 'request_path': request.get_full_path() through render_to_response. I was wondering if this is really necessary, since I was told it's unnecessary and the context processor takes care of that, but {{ get_full_path }} is empty. Please advise.
Why request.get_full_path is not available in the templates?
0.379949
0
0
2,821
14,396,178
2013-01-18T09:46:00.000
0
0
1
0
python,binary,hex
14,396,302
3
false
0
0
use binascii.hexlify() - that should do it
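A sketch of the full round trip; note that '%x' can produce an odd number of hex digits, so pad before unhexlify, and that leading zero bytes cannot survive the trip through an integer:

```python
import binascii

data = b"\x00\x12\x34"
n = int(binascii.hexlify(data), 16)   # the forward direction from the question

h = "%x" % n
if len(h) % 2:                        # unhexlify needs an even digit count
    h = "0" + h
restored = binascii.unhexlify(h)
print(restored == data.lstrip(b"\x00"))  # True; leading zero bytes are lost
```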
1
0
0
I'd like to be able to do the reverse of: foo = long(binarystring.encode('hex'), 16)
In Python how do you reverse this call: long("1234", 16)?
0
0
0
123
14,396,632
2013-01-18T10:13:00.000
1
0
0
0
python,svm,pyml
14,396,884
1
true
0
0
Roundabout way of doing it below: Use the result.getDecisionFunction() method and choose according to your own preference. Returns a list of values like: [-1.0000000000000213, -1.0000000000000053, -0.9999999999999893] Better answers still appreciated.
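For illustration, thresholding the returned decision values yourself is just a comparison; the cut-off value here is arbitrary:

```python
# Classify anything above the chosen cut-off as the positive class,
# instead of the default threshold of 0.
threshold = -0.5

decision_values = [-1.0000000000000213, -1.0000000000000053, -0.9999999999999893]
labels = [1 if v > threshold else -1 for v in decision_values]
print(labels)  # [-1, -1, -1] here; raise or lower the threshold to taste
```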
1
0
1
I'm using PyML's SVM to classify reads, but would like to set the discriminant to a higher value than the default (which I assume is 0). How do I do it? Ps. I'm using a linear kernel with the liblinear-optimizer if that matters.
Setting the SVM discriminant value in PyML
1.2
0
0
80
14,398,188
2013-01-18T11:41:00.000
0
0
1
0
compiler-construction,numpy,python-3.x,pycharm
14,424,433
1
false
0
0
Check whether the 'gcc' command works. If it's there, try setting the CC environment variable to 'gcc'.
1
2
0
I am new to Python and PyCharm, installed PyCharm 2.6 (on Mac OSX) and tried to import NumPy for Python 3.3. JetBrains support file tells me to install Cython which also yields "Cannot locate working compiler" How and which compiler do I need to install? Thanks!
Numpy import in PyCharm yields "Cannot locate working compiler"
0
0
0
1,245
14,398,980
2013-01-18T12:28:00.000
1
0
1
1
python,windows
14,399,306
2
false
0
0
What you can do is apply some sort of shell-history functionality: every command issued by the user would be placed in a list, and then you'd implement a special call (a command of your console), let's say 'history', that would print out the list for the user in the order it was filled in, with an increasing number next to every command. Then another call (again a special command of your console), let's say '!!' (but it could really be anything, like 'repeat'), plus the number of the command you want to repeat, would fetch the command from the list and execute it without retyping: typing '!! 34' would execute again command number 34, which could be 'something -a very -b long -c with -d very -e large -f number -g of -h arguments -666'. I am aware it's not exactly the same thing you wanted, but it is very easy to implement quickly, provides the command repetition functionality you're after, and should be a decent replacement until you figure out how to do it the way you want ;)
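A bare-bones sketch of that history/'!!' idea around an input() loop; the "running:" print is a stand-in for your real command dispatcher:

```python
history = []

def handle(line):
    if line == "history":
        for i, cmd in enumerate(history, 1):
            print("%d  %s" % (i, cmd))
    elif line.startswith("!! "):
        n = int(line.split()[1])
        handle(history[n - 1])       # re-run without retyping
    else:
        history.append(line)
        print("running:", line)      # stand-in for the real dispatcher

while True:
    handle(input("> "))
```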
1
0
0
So I am working on a console-based Python (Python 3, actually) program where I use input(">") to get the command from the user. Now I want to implement the "last command" function in my program: when users press the up arrow on the keyboard they can see their last command. After some research I found I can use the curses lib to implement this, but there are two problems. curses is not available on Windows. The other parts of my program use print() to do the output; I don't want to rewrite them with curses. So are there any other ways to implement the "last command" function? Thanks.
How to implement "last command" function in a console based python program
0.099668
0
0
195
14,399,223
2013-01-18T12:41:00.000
0
0
0
1
python,mysql,django,pip,mysql-python
14,399,388
1
false
0
0
At first glance it looks like damaged pip package. Have you tried easy_install instead with the same package?
1
1
0
When I try installing mysql-python using below command, macbook-user$ sudo pip install MYSQL-python I get these messages: /System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/pyconfig.h:891:1: warning: this is the location of the previous definition /usr/bin/lipo: /tmp/_mysql-LtlmLe.o and /tmp/_mysql-thwkfu.o have the same architectures (i386) and can't be in the same fat output file clang: error: lipo command failed with exit code 1 (use -v to see invocation) error: command 'clang' failed with exit status 1 Does anyone know how to solve this problem? Help me please!
clang error when installing MYSQL-python on Lion-mountain (Mac OS X 10.8)
0
1
0
506
14,399,278
2013-01-18T12:44:00.000
1
0
0
1
python,tornado
14,434,691
2
false
0
0
Use Tornado to just receive non-blocking requests. To do the actual XML processing you can then spawn another process or use an async task processor like celery. Using celery would facilitate easy scaling of your system in future. In fact with this model you'll just need one Tornado instance. @Eren - I don't think that the computational resources are getting saturated. It would just be that more than 8 requests are not getting processed simultaneously as Tornado would right now be serving requests in blocking mode.
1
0
0
I have inherited a rather large code base that utilizes tornado to compute and serve big and complex data-types (imagine a 1 MB XML file). Currently there are 8 instances of tornado running to compute and serve this data. That was a wrong design-decision from the start and I am facing many many timeouts from applications that access the servers. I'd like to change as few lines of code as possible in the legacy code base because I do not want to break anything that has already been tested in the field. What can I do to transform this system into a threaded one that can execute more xml-computation in parallel?
Whats are the options to transform a event-driven tornado application into a threaded or otherwise blocking system?
0.099668
0
0
218
14,399,722
2013-01-18T13:11:00.000
0
0
0
1
python,google-app-engine,google-cloud-datastore
14,401,694
1
true
1
0
here's my workaround to get it working on dev_server: 1) update your model in production and deploy it 2) use appcfg.py download_data and grab all entities of the type you've updated 3) use appcfg.py upload_data and push all the entities into your local datastore voila.. your local datastore entities can now be retrieved without generating BadValueError
1
3
0
My understanding of changing db.Model schemas is that it 'doesn't matter' if you add a property and then try and fetch old entities without that property. Indeed, adding the following property to my SiteUser db.Model running on dev_server: category_subscriptions = db.StringProperty() Still allows me to retrieve an old SiteUser entity doesn't have this property (via a GQL query). However, changing the property to a list property, (either StringListProperty, ListProperty): category_subscriptions = db.StringListProperty() results in the following error when I try and retrieve the user: BadValueError: Property category_subscriptions is required This is on the SDK dev server version 1.7.4. Why is that and how would I work around it?
changing GAE db.Model schema on dev_server with ListProperties? BadValueError
1.2
0
0
94
14,399,958
2013-01-18T13:28:00.000
2
0
0
1
python,django,event-handling,gunicorn
14,403,008
1
true
1
0
Gunicorn by default will spawn regular synchronous WSGI processes. You can however tell it to spawn processes that use gevent, eventlet or tornado instead. I am only familiar with gevent which can certainly be used instead of Node.js for long polling. The memory footprint per process is about the same for mod_wsgi and gunicorn (in my limited experience), but you get more bells-and-whistles with gunicorn. If you change the default worker class to gevent (or eventlet or tornado) you also get a LOT more performance out of each process.
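Concretely, gunicorn's configuration files are themselves Python, so switching worker class is a one-line setting; a sketch (the file name, worker count, and bind address are arbitrary):

```python
# gunicorn_conf.py -- start the server with:
#   gunicorn -c gunicorn_conf.py myproject.wsgi:application
worker_class = "gevent"   # or "eventlet" / "tornado" instead of the sync default
workers = 4
bind = "127.0.0.1:8000"
```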
1
0
0
Is using Django with gunicorn considered to be a replacement for using evented/async servers like Tornado, Node.js, and similar? Additionally, will that be helpful in handling long-polling/comet services? Finally, is gunicorn only replacing the memory-consuming Apache threads (in the case of Apache/mod_wsgi) with lightweight threads, or are there additional benefits?
Django Gunicorn Long Polling
1.2
0
0
1,026
14,400,767
2013-01-18T14:15:00.000
0
0
0
0
python,download,popupwindow
14,401,005
1
false
0
0
There are Python bindings for Selenium that help in scripting simulated browser behavior, allowing you to do really complex stuff with it; take a look at it, it should be enough for what you need.
1
0
0
I need to log in to a website and navigate to a report page. After entering the required information and clicking on the "Go" button (this is a multipart/form-data form that I am submitting), there's a pop-up window asking me to save the file. I want to do it automatically with Python. I searched the internet for a couple of days, but can't find a way in Python. Using urllib2, I can get as far as submitting the multipart form, but how can I get the name and location of the file and download it? Please note: there is no href associated with the "Go" button. After submitting the form, a file-save dialog pops up asking me where to save the file. Thanks in advance
Using python to download a file after clicking the submit button
0
0
1
346
14,400,993
2013-01-18T14:27:00.000
2
0
1
0
python,import,module,spyder
15,023,264
3
true
0
0
The startup script for Spyder is in site-packages/spyderlib/scientific_startup.py. Carlos' answer would also work, but this is what I was looking for.
2
8
1
I'm trying to set up a slightly customised version of Spyder. When Spyder starts, it automatically imports a long list of modules, including things from matplotlib, numpy, scipy etc. Is there a way to add my own modules to that list? In case it makes a difference, I'm using the Spyder configuration provided by the Python(X,Y) Windows installer.
Spyder default module import list
1.2
0
0
15,729
14,400,993
2013-01-18T14:27:00.000
-2
0
1
0
python,import,module,spyder
14,401,134
3
false
0
0
If Spyder is executed as a Python script by the Python binary, then you should be able to simply edit the Spyder Python sources and include the modules you need. You should take a look at how it is actually executed upon start.
2
8
1
I'm trying to set up a slightly customised version of Spyder. When Spyder starts, it automatically imports a long list of modules, including things from matplotlib, numpy, scipy etc. Is there a way to add my own modules to that list? In case it makes a difference, I'm using the Spyder configuration provided by the Python(X,Y) Windows installer.
Spyder default module import list
-0.132549
0
0
15,729
14,401,766
2013-01-18T15:09:00.000
1
0
1
0
python,words
14,401,870
1
true
0
0
If you're on a Linux system you might want to look in /usr/share/dict/words.
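A quick sketch of loading that list into a set for O(1) lookups; note /usr/share/dict/words is usually English, and an Italian list would typically come from a distro package (on Debian-family systems I believe that package is called witalian, but verify):

```python
with open("/usr/share/dict/words") as f:
    words = set(line.strip().lower() for line in f)

print(len(words), "entries")
print("ciao" in words)  # likely False in the default English list
```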
1
0
0
I am creating a game and I need a dictionary (a list of plain words in this case) containing not only the base form, but all the others as well. In this case the language is Italian and, for example, the verbs have many forms and nouns too. Since the language is very irregular, I want to get the words from a huge source which may contain them all. At first I thought about Wikipedia: I would download every article, extract the text, and filter the words. This will take so much time that I'd like to know whether there could be better solutions, both in terms of time and completeness of the list.
Creating a list of words from Wikipedia
1.2
0
0
675
14,403,472
2013-01-18T16:41:00.000
2
1
0
0
java,python,rmi
14,403,624
2
false
1
0
You're going to have a very hard time, I would imagine. RMI and Java serialization are very Java-specific. I don't know if anyone has already attempted to implement this in Python (I'm sure Google knows), but your best bet would be to find an existing library. That aside, I would look at finding a way to do the RMI in some client-side Java shim (maybe some sort of Python<->Java bridge library?). Or maybe you could run your Python in Jython and leverage the underlying JVM to handle the RMI stuff.
2
5
0
I have a Python program that needs to access a Java RMI API from a third party system in order to fetch some data. I have no control over the third party system so it MUST be done using RMI. What should be my approach here? I have never worked with RMI using Python so I'm kind of lost as to what I should do.. Thanks in advance!
Accessing a Java RMI API from Python program
0.197375
0
0
5,222
14,403,472
2013-01-18T16:41:00.000
2
1
0
0
java,python,rmi
14,403,646
2
true
1
0
How about a little Java middleware piece that you can talk to via REST and that in turn can talk to the remote API?
2
5
0
I have a Python program that needs to access a Java RMI API from a third party system in order to fetch some data. I have no control over the third party system so it MUST be done using RMI. What should be my approach here? I have never worked with RMI using Python so I'm kind of lost as to what I should do.. Thanks in advance!
Accessing a Java RMI API from Python program
1.2
0
0
5,222
14,405,063
2013-01-18T18:14:00.000
22
1
1
0
python,logging,output,pytest
47,816,384
14
false
0
0
Try pytest -s -v test_login.py for more info in the console. -v is short for --verbose; -s means 'disable all capturing'.
2
669
0
Sometimes I want to just insert some print statements in my code, and see what gets printed out when I exercise it. My usual way to "exercise" it is with existing pytest tests. But when I run these, I don't seem able to see any standard output (at least from within PyCharm, my IDE). Is there a simple way to see standard output during a pytest run?
How can I see normal print output created during pytest run?
1
0
0
296,089
14,405,063
2013-01-18T18:14:00.000
4
1
1
0
python,logging,output,pytest
54,546,338
14
false
0
0
If you are using PyCharm IDE, then you can run that individual test or all tests using Run toolbar. The Run tool window displays output generated by your application and you can see all the print statements in there as part of test output.
2
669
0
Sometimes I want to just insert some print statements in my code, and see what gets printed out when I exercise it. My usual way to "exercise" it is with existing pytest tests. But when I run these, I don't seem able to see any standard output (at least from within PyCharm, my IDE). Is there a simple way to see standard output during a pytest run?
How can I see normal print output created during pytest run?
0.057081
0
0
296,089
14,405,520
2013-01-18T18:43:00.000
0
0
1
1
python,windows,command-line,path,executable
17,404,756
3
false
0
0
I use ixe013's junction approach. The one issue I have had is that Enthought's enpkg installer doesn't "read" the symbolic junction... I have lost the details, but it broke the symbolic link and then claimed the installation directory was empty... So if you are using ixe013's approach with Enthought, I recommend the following when updating: delete the junction (junction -d c:\python), rename c:\python.2.7.32bits to c:\python, run enpkg, then go back: rename c:\python to c:\python.2.7.32bits, remove any stale junction (junction -d c:\python), and recreate it (junction c:\python c:\python.2.7.32bits).
1
2
0
I have two Python installations on my Windows 7 64bit workstation. I have 32bit Python 2.7 and 64bit Python 2.7. Each installation is required by specific applications. I currently have only the 32bit Python installation in my system path. However, I would like to add the 64bit version to the path as well. Right now, if I type python into my Windows command prompt it will open Python 2.7 win32. What I would like to be able to do is type python32 for the 32bit version or python64 for the 64bit version. I realize I can rename each respective python.exe file as python32.exe and python64.exe, but this will break the hard coded paths that specific applications look for. Is it possible to some how leave each python.exe named as python.exe but give it a different command from the command prompt?
Multiple Python installations in System Path
0
0
0
1,186
14,406,300
2013-01-18T19:33:00.000
1
0
1
0
python,parsing,url,urlparse
18,302,950
7
false
0
0
Using tldextract works fine, but it apparently has a problem while parsing the blogspot.com subdomain and creates a mess. If you would like to go ahead with that library, make sure to implement an if condition or something to prevent returning an empty string for the subdomain.
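A sketch of typical usage plus the guard suggested above, using the attribute names of tldextract's documented result tuple:

```python
import tldextract

ext = tldextract.extract("http://forums.news.cnn.com/")
print(ext.subdomain, ext.domain, ext.suffix)  # forums.news cnn com

# Skip empty parts so a missing subdomain/suffix can't leave stray dots.
domain = ".".join(part for part in (ext.domain, ext.suffix) if part)
print(domain)  # cnn.com
```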
1
56
0
Need a way to extract a domain name without the subdomain from a url using Python urlparse. For example, I would like to extract "google.com" from a full url like "http://www.google.com". The closest I can seem to come with urlparse is the netloc attribute, but that includes the subdomain, which in this example would be www.google.com. I know that it is possible to write some custom string manipulation to turn www.google.com into google.com, but I want to avoid by-hand string transforms or regex in this task. (The reason for this is that I am not familiar enough with url formation rules to feel confident that I could consider every edge case required in writing a custom parsing function.) Or, if urlparse can't do what I need, does anyone know any other Python url-parsing libraries that would?
Python urlparse -- extract domain name without subdomain
0.028564
0
1
36,432
14,407,779
2013-01-18T21:16:00.000
0
0
1
1
python,installation,uac
14,410,362
3
false
0
0
Re-reading the original question, there's a much simpler answer that probably fits your requirements. I don't know much about Inno, but most installers give you a way to run an arbitrary command as a post-copy step. So, you can just use python -m compileall to create the .pyc files for you at install time—while you've still got elevated privileges, so there's no problem with UAC. In fact, if you look at pywin32, and various other Python packages that come as installer packages, they do exactly this. This is an idiomatic thing to do for installing libraries into the user's Python installation, so I don't see why it wouldn't be considered reasonable for installing an executable that uses the user's Python installation. Of course if the user later decides to uninstall Python 2.6 and install 2.7, your .pyc files will be hosed… but from your description, it sounds like your entire program will be hosed anyway, and the recommended solution for the user would probably be to uninstall and reinstall anyway, right?
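The same step is available from Python itself via the stdlib compileall module, in case your post-install hook runs a script rather than a shell command; the target path is a placeholder:

```python
import compileall

# Byte-compile the freshly copied tree while the installer still has
# elevated rights, so .pyc files land next to the sources.
compileall.compile_dir(r"C:\Program Files\MyApp", quiet=True)
```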
1
2
0
I'm working on an Inno Setup installer for a Python application for Windows 7, and I have these requirements: The app shouldn't write anything to the installation directory It should be able to use .pyc files The app shouldn't require a specific Python version, so I can't just add a set of .pyc files to the installer Is there a recommended way of handling this? Like give the user a way to (re)generate the .pyc files? Or is the shorter startup time benefit from the .pyc files usually not worth worrying about?
Python .pyc files and Windows UAC
0
0
0
1,244
14,409,454
2013-01-18T23:44:00.000
1
0
1
0
python,python-3.x,python-idle
14,410,812
2
true
0
0
As others have noted, IDLE will not display the three dots that you are looking for; however, if you must have them, you could always run python from the terminal by typing 'python' into the terminal and pressing ENTER. To exit python from the terminal you could type either 'exit()' or 'quit()' followed by ENTER.
1
1
0
I'm using the IDLE command prompt with Python 3.3 and when I enter a multi-line command, I see a blank line rather than the multi-line prompt that has three dots. I know that this is a little thing, but does anyone know how to enable the multi-line prompt?
Python IDLE Multi-line Prompt
1.2
0
0
2,005
14,410,771
2013-01-19T03:21:00.000
0
1
0
1
python,applescript,itunes
14,410,871
2
false
0
0
You can set up a simple Automator workflow to retrieve the current iTunes song. Try these two actions for starters: iTunes: Get the Current Song Utilities: Run Shell Script Change the shell script to cat > ~/itunes_track.txt and you should have a text file containing the path of the current track. Once you get your data out of Automator you should be all set :)
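If you'd rather skip Automator, the same AppleScript one-liner can be run from Python via osascript; a sketch (OS X only, and it assumes iTunes is running with a track playing):

```python
import subprocess

track = subprocess.check_output(
    ["osascript", "-e",
     'tell application "iTunes" to get name of current track']
).strip()
print(track)  # name of the song currently playing
```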
1
0
0
I have been doing a little bit of research and haven't found anything that is quite going to work. I want to have python know what the current song playing in iTunes is so I can serially send it to my Arduino. I have seen Appscript but it is no longer supported and from what I have read full of a few bugs now that it hasn't been updated. I am using Mac OS X 10.8.2 & iTunes 10.0.1 Anyone got any ideas on how to make this work. Any information is greatly appreciated. FYI: My project is a little 1.8' colour display screen that I am going to have serval pieces of information on RAM HDD CPU Song etc.
Python getting Itunes song
0
0
0
2,056
14,414,430
2013-01-19T12:37:00.000
-4
0
1
0
python,math
57,742,048
3
false
0
0
In Python, x ** 0 is defined as the empty product, which equals 1 for every x, including x = 0; that convention (also used by math.pow, and standard in discrete mathematics, e.g. the binomial theorem and power series) is why 0 ** 0 returns 1. By contrast, there is no value that 0 / 0 could consistently take, so division by zero always raises an exception instead of returning a number.
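A quick demonstration of the conventions in the interpreter:

```python
import math

print(0 ** 0)          # 1    -- integer power follows the empty-product rule
print(0.0 ** 0.0)      # 1.0  -- so does float power
print(math.pow(0, 0))  # 1.0

try:
    0 / 0
except ZeroDivisionError as e:
    print("0/0 raises:", e)
```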
1
32
0
Why does 0 ** 0 equal 1 in Python? Shouldn't it throw an exception, like 0 / 0 does?
Why 0 ** 0 equals 1 in python
-1
0
0
7,289
14,417,535
2013-01-19T18:41:00.000
0
0
0
1
python,notifications,pipe
14,417,574
2
false
0
0
You can generate an additional file somewhere in which you record the status of your application. On UNIX platforms users can tail it in another console. Sending e-mails with the status is another option.
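A sketch combining both non-stdout channels: write progress to stderr (which survives a piped stdout) and mirror it to a status file the user can tail -f; the file name is arbitrary:

```python
import sys

def report(done, total, path="progress.txt"):
    pct = 100.0 * done / total
    sys.stderr.write("\r%.1f%% done" % pct)  # stderr is usually not piped
    with open(path, "w") as f:               # or watch it: tail -f progress.txt
        f.write("%.1f%%\n" % pct)
```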
1
1
0
I'm playing around with Python and have written an application that generates a lot of text which it prints to stdout. (I run the application in a console (bash) and don't want to use a GUI or external tool.) The process takes a lot of time and I can calculate how much of it has already finished. The output of the application is always piped to some other process. I would like to present a notification to the user from time to time of how much of the text generation is done. (I can't print to stdout because it is redirected to some other process.) Is there some way to do this? Your suggestions are much appreciated!
Piped stdout output with notification in Python
0
0
0
133
14,417,846
2013-01-19T19:10:00.000
2
0
0
0
python,redis
14,418,853
1
false
0
0
Thanks to Pavel Anossov, after reading the code of client.py, I found out that responses from 2 commands (BGSAVE and BGREWRITEAOF) were not converted from bytes to str, and this caused the problem in Python 3. To fix this issue, just change lambda r: r == to lambda r: nativestr(r) == for these two commands in RESPONSE_CALLBACKS.
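For anyone hitting this on an unpatched client, the same fix can be applied from user code via redis-py's per-command callback hook, without editing client.py; the nativestr helper is re-implemented inline here since its import path varies across versions:

```python
import redis

def nativestr(x):
    # Mirror of redis-py's helper: decode bytes to str under Python 3.
    return x.decode("utf-8") if isinstance(x, bytes) else x

r = redis.Redis()
r.set_response_callback(
    "BGSAVE",
    lambda resp: nativestr(resp) == "Background saving started")
print(r.bgsave())  # now True on success under Python 3
```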
1
1
0
When I fired redis-py's bgsave() command, the return value was False, but I'm pretty sure the execution was successful because I've checked with lastsave(). However, if I use save() the return value would be True after successful execution. Could anyone please explain what False indicates for bgsave()? Not sure if it has anything to do with bgsave() being executed in the background.
Why does redis-py's bgsave() command return False after successful execution?
0.379949
1
0
778
14,418,264
2013-01-19T19:57:00.000
1
0
1
0
python,matplotlib
14,418,289
1
true
0
0
You can just do pyplot.xticks([1, 3, 10, 20]), or whatever arbitrary numbers you like.
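A tiny self-contained example, with deliberately uneven tick positions:

```python
import matplotlib.pyplot as plt

xs = range(1, 11)
ys = [v * v for v in xs]

plt.plot(xs, ys)
plt.xticks([1, 2.5, 4.75, 10])  # ticks land wherever you list them
plt.show()
```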
1
0
0
In Python, I have two list that have 10 elements each. I plot these against each other in pyplot. I know that I am able to use xticks to position the ticks, but it seems that I am limited to putting the ticks at one of the 10 points on my x axis. Is it possible to put my tick locations at any arbitrary location that I chose along the x axis? Perhaps by choosing the fraction of the axis length or something? Thank you!
positioning xticks arbitrarily in python
1.2
0
0
636
14,418,774
2013-01-19T20:57:00.000
5
0
0
0
python,brython
14,418,885
4
true
1
0
Brython itself seems to be completely client side, but whether that will be enough really depends on the code you wrote. It is not a full blown Python interpreter and does not have the libraries. You might want a backend to support it or use another client side solution as suggested in the comments. Given how few real web hosters support Python, I think it is very unlikely that Dropbox would be suitable for this, in case you do need processing on the server as well.
2
17
0
I have a piece of code written in Python. I would like to put that code in a webpage. Brython seems like the simplest way to glue the two things together, but I don't have a server that can actually run code on the server side. Does Brython require server-side code, or can I host a page using it on the cheap with (say) Dropbox?
Is Brython entirely client-side?
1.2
0
0
5,868
14,418,774
2013-01-19T20:57:00.000
2
0
0
0
python,brython
17,833,373
4
false
1
0
Brython doesn't always work with python code, I've learned. Something I think needs to be clarified is that while you can run brython in a very limited capacity by accessing the files locally, (because of the AJAX requirement) you can't import libraries - not even the most basic (e.g., html, time). You really need a basic web server in order to run brython. I've found it's good for basic scripts, since my python is better than my JS. It seems to break with more complicated syntax, though.
2
17
0
I have a piece of code written in Python. I would like to put that code in a webpage. Brython seems like the simplest way to glue the two things together, but I don't have a server that can actually run code on the server side. Does Brython require server-side code, or can I host a page using it on the cheap with (say) Dropbox?
Is Brython entirely client-side?
0.099668
0
0
5,868
14,419,241
2013-01-19T21:49:00.000
1
0
1
0
python,mongodb,pymongo
14,419,321
2
false
0
0
The MongoDB shell is a full-featured JavaScript console/interpreter with some bindings to communicate with a MongoDB server. In contrast, PyMongo lacks an embedded JavaScript interpreter or even a JavaScript parser, so you cannot execute MongoDB shell queries as-is. Note that mongo shell queries are not JSON documents, as they can contain functions and object constructors such as {value: 2+2}.
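That said, if your query strings happen to be strict JSON (no functions, no constructors), you can parse them yourself and hand the resulting dict to PyMongo; a sketch with placeholder database and collection names:

```python
import json
from pymongo import MongoClient

coll = MongoClient()["testdb"]["people"]  # db/collection names are placeholders

query_text = '{"age": {"$gt": 21}}'       # must be strict JSON, not shell syntax
for doc in coll.find(json.loads(query_text)):
    print(doc)
```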
1
2
0
Is there a way in pymongo to use a string to execute a query instead of a dictionary? I would like to be able to use exactly the same syntax as on MongoDB shell from python/pymongo. Is that possible?
pymongo use string to query instead of dictionary
0.099668
0
0
838
14,419,532
2013-01-19T22:25:00.000
2
0
0
0
python,django
14,419,559
2
true
1
0
Try to use the singleton design pattern.
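In Python the lightest version of that pattern is a module-level instance, since a module is only initialized once per process; a sketch with hypothetical names:

```python
# expensive.py -- runs once per process at first import.
class Preprocessor(object):
    def __init__(self):
        self.data = self._load()

    def _load(self):
        return {"ready": True}  # stand-in for the real heavy setup

instance = Preprocessor()

# views.py (hypothetical usage):
#   from expensive import instance
#   def my_view(request):
#       return HttpResponse(str(instance.data))
```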
1
2
0
I am trying to do some pre-processing at Django startup (I put a startup script that runs once in urls.py) and then use the created instance of an object in my views. How would I go about doing that?
Initializing a class at Django startup and then referring to it in views
1.2
0
0
771
14,420,413
2013-01-20T00:21:00.000
1
1
1
0
python,artificial-intelligence,chatbot,aiml
14,879,572
1
false
0
0
It's the same way you set and get the bot name; only the predicate name differs, according to your choice.
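For concreteness, a sketch of what that looks like with PyAIML; the predicate name "master" is just whatever your AIML templates reference via a bot tag:

```python
import aiml

k = aiml.Kernel()
k.setBotPredicate("name", "Robby")     # the bot's own name
k.setBotPredicate("master", "Alice")   # the creator, used by "who created you?"
print(k.getBotPredicate("master"))     # Alice
```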
1
1
0
I was playing with PyAIML. I understand how to set the bot's name. But could not figure out how to set the creator's name so that if anyone ask's "Who created you?" then it can reply appropriately. Please help.
How to set the master's name of a chatbot using PyAIML?
0.197375
0
0
444
14,422,409
2013-01-20T07:00:00.000
6
0
1
0
python,data-structures,set,tuples
14,422,443
4
false
0
0
One difference that comes to mind is the issue of duplicates. A tuple of (1, 1, 1, 1, 2, 2, 2) would be exactly what you expect, but a frozenset would remove all of those duplicates, leaving you with frozenset([1, 2]).
2
61
0
I'm learning Python 3 using The Quick Python Book, where the author talks about frozensets, stating that since sets are mutable and hence unhashable, thereby becoming unfit for being dictionary keys, their frozen counterparts were introduced. Other than the obvious difference that a tuple is an ordered data structure while frozenset, or more generally a set, is unordered, are there any other differences between a tuple and a frozenset?
Difference between tuples and frozensets in Python
1
0
0
29,611
14,422,409
2013-01-20T07:00:00.000
102
0
1
0
python,data-structures,set,tuples
14,422,446
4
true
0
0
tuples are immutable lists, frozensets are immutable sets. tuples are indeed an ordered collection of objects, but they can contain duplicates and unhashable objects, and have slice functionality frozensets aren't indexed, but you have the functionality of sets - O(1) element lookups, and functionality such as unions and intersections. They also can't contain duplicates, like their mutable counterparts.
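A few lines in the interpreter make the contrast visible:

```python
t = (1, 1, 2, 3)
fs = frozenset(t)

print(t[0], t[:2])                # indexing and slicing work on tuples
print(fs)                         # duplicates collapse: frozenset({1, 2, 3})
print(2 in fs)                    # O(1) membership test
print(fs & frozenset([2, 3, 4]))  # set algebra: frozenset({2, 3})
```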
2
61
0
I'm learning Python 3 using The Quick Python Book, where the author talks about frozensets, stating that since sets are mutable and hence unhashable, thereby becoming unfit for being dictionary keys, their frozen counterparts were introduced. Other than the obvious difference that a tuple is an ordered data structure while frozenset, or more generally a set, is unordered, are there any other differences between a tuple and a frozenset?
Difference between tuples and frozensets in Python
1.2
0
0
29,611
14,423,868
2013-01-20T11:10:00.000
2
0
1
0
python
14,423,880
5
false
0
0
You can use vars(info) or info.__dict__. It will return the object's namespace as a dictionary in the attribute_name:value format.
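For example, dumping every attribute/value pair of the urlopen result (Python 2, matching the question):

```python
import urllib2  # Python 2, as in the question

info = urllib2.urlopen("http://www.python.org/")
for name, value in vars(info).items():
    print("%s = %s" % (name, value))
```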
1
0
0
Is there any method to let me know the values of an object's attributes? For example, info = urllib2.urlopen('http://www.python.org/') I want to know all the attribute values of info. Maybe I don't even know what attributes info has. And str() or list() cannot give me the answer.
how to reveal an object's attributes in Python?
0.07983
0
1
306
14,425,379
2013-01-20T14:17:00.000
6
0
0
0
web-applications,python-3.x,python-2.x
14,425,489
1
true
1
0
If you are looking for a Python framework for web development, I would highly recommend using Django. Regarding Python 2 or 3 issue, Python 3 is of course the future. Django will soon be migrated to Python 3. For now you can stick with Django and Python 2.7. But avoid using those features of Python 2.7 which are removed from Python 3.
1
4
0
I am a quite experienced PHP developer (using OOP features where possible) and currently searching for another language to play with. I've used Ruby, Python and Node and found Python to be the best choice so far (in terms of maturity, ease of use and the learning curve). As I am mostly focusing on web centric development Django seems to be the obvious framework of choice. But here is my question: Django is still based on Python 2, as is Flask but nearly every Python tutorial out there suggests you to start learning Python 3. But it seems that version 2 is the one you have to start with if you want to use the more popular web frameworks and that doesn't seem to change any time soon. Is this true? Do you know alternatives? I know there are similar questions like that but they are either outdated or not focused on web development so I guess this is a valid one. Thanks in advance!
Learning Python for web development. Python 2 or 3?
1.2
0
0
766
14,427,160
2013-01-20T17:28:00.000
2
0
0
1
python,solr,audio-fingerprinting,echoprint
14,444,869
1
false
1
0
Well, I found my mistake: it was the ttserver. Thanks Alexandre for that information. The right way to make it work is this: /usr/local/tokyotyrant-1.1.33/bin/ttserver casket.tch, where you indicate the name of the on-disk hash database, which makes it persistent. Then start Solr normally and I can ingest and look up songs without problems :)
1
0
0
How can I stop the ttserver and Solr services correctly? What I do now is restart the PC and then bring the services back up, but when I then try to validate a song, I get a message as if the database had been damaged. I wonder what the right way is to shut down the services so I can test songs afterwards without the database being damaged. Greetings and thanks. Start the tts: /usr/local/tokyotyrant-1.1.33/bin/ttserver; cd echoprint-server/solr/solr; java -Dsolr.solr.home=/home/user01/echoprint-server/solr/solr/solr/ -Djava.awt.headless=true -DSTOP.PORT=8079 -DSTOP.KEY=mykey -jar start.jar. Ingest a new song, then stop Solr: java -DSTOP.PORT=8079 -DSTOP.KEY=mykey -jar start.jar --stop. Now, when I start the service again and want to compare a song against some that I have in the database, it gives me an error: Traceback (most recent call last): File "lookup.py", line 51, in lookup(sys.argv[1]) File "lookup.py", line 35, in lookup result = fp.best_match_for_query(decoded) File "../API/fp.py", line 194, in best_match_for_query tcodes = get_tyrant().multi_get(trackids) File "../API/pytyrant.py", line 296, in multi_get raise KeyError("Missing a result, unusable response in 1.1.10") KeyError: 'Missing a result, unusable response in 1.1.10'. How should I start and terminate the services without losing any information?
echoprint - stopping the service Solr, I lose the database
0.379949
0
0
696
14,427,475
2013-01-20T18:00:00.000
0
1
0
1
python,shell,cron
17,640,694
1
false
0
0
If you're having problems it's a good idea to use full qualified paths to commands in any script that's being called from cron, so as to avoid PATH and environment variable issues with the bare-bones environment that cron is called in.
1
1
0
I have a shell script which launches many different python scripts. The shell script exports many variables, which are in turn used by the python scripts. This is working perfectly when run in command line, but it does not work when executed in crontab. In the cron logs, I could see the shell script working, but the python script does not seem to run. Will the python scripts be able to run from the shell script in cron? Will the python scripts be able to access the env variables set by the parent shell script from cron?
Child script execution in crontab
0
0
0
147
14,427,531
2013-01-20T18:06:00.000
1
0
1
0
python
14,427,603
3
false
0
0
For example, you could initially set each of the n chunk sizes to the minimum of 4 and then compute the remainder r = m - 4*n (for m = 37 and n = 5 that is r = 17, valid as long as m >= 20). Then just add 1 to the first chunk and decrease r, add 1 to the second chunk and decrease r, and so on, cycling through the chunks until r = 0. Then you have your chunk sizes and you can fill them.
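Since the question asks for random-but-min-sized chunks, here is a sketch that allocates the minimum first and then scatters the remainder randomly:

```python
import random

def random_chunks(seq, n, min_size):
    # Give every chunk the minimum, then sprinkle the leftover items randomly.
    assert len(seq) >= n * min_size
    sizes = [min_size] * n
    for _ in range(len(seq) - n * min_size):
        sizes[random.randrange(n)] += 1
    it = iter(seq)
    return [[next(it) for _ in range(s)] for s in sizes]

chunks = random_chunks(list(range(37)), 5, 4)
print([len(c) for c in chunks])  # five sizes, each >= 4, summing to 37
```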
1
1
0
For example: I want to split range(37) into n=5 chunks, with each chunk having len(chunk) >= 4.
How to split a list into N random-but-min-sized chunks
0.066568
0
0
1,514
14,430,856
2013-01-21T00:13:00.000
1
0
0
0
python,sql,database,sqlite,database-design
14,430,911
1
false
0
0
Don't use a different table for each conversation. Instead add a "conversation" column to your single table.
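A sketch of that single-table layout with the sqlite3 module; the column names mirror the ones proposed in the question:

```python
import sqlite3, time

db = sqlite3.connect("chat.db")
db.execute("""CREATE TABLE IF NOT EXISTS sessions (
                conversation INTEGER PRIMARY KEY, start_time INTEGER, ip TEXT)""")
db.execute("""CREATE TABLE IF NOT EXISTS messages (
                conversation INTEGER, timestamp INTEGER, message TEXT)""")

conv_id = 42  # issued when the session starts
db.execute("INSERT INTO messages VALUES (?, ?, ?)",
           (conv_id, int(time.time()), "hello bot"))
db.commit()

rows = db.execute("SELECT message FROM messages WHERE conversation=?",
                  (conv_id,)).fetchall()
print(rows)
```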
1
1
0
I am writing a chat bot that uses past conversations to generate its responses. Currently I use text files to store all the data but I want to use a database instead so that multiple instances of the bot can use it at the same time. How should I structure this database? My first idea was to keep a main table like create table Sessions (startTime INT,ip INT, botVersion REAL, length INT, tableName TEXT). Then for each conversation I create table <generated name>(timestamp INT, message TEXT) with all the messages that were sent or received during that conversation. When the conversation is over, I insert the name of the new table into Sessions(tableName). Is it ok to programmatically create tables in this manner? I am asking because most SQL tutorials seem to suggest that tables are created when the program is initialized. Another way to do this is to have a huge create table Messages(id INT, message TEXT) table that stores every message that was sent or received. When a conversation is over, I can add a new entry to Sessions that includes the id used during that conversation so that I can look up all the messages sent during a certain conversation. I guess one advantage of this is that I don't need to have hundreds or thousands of tables. I am planning on using SQLite despite its low concurrency since each instance of the bot may make thousands of reads before generating a response (which will result in one write). Still, if another relational database is better suited for this task, please comment. Note: There are other questions on SO about storing chat logs in databases but I am specifically looking for how it should be structured and feedback on the above ideas.
Storing chat logs in relational database
0.197375
1
0
1,198
14,430,915
2013-01-21T00:23:00.000
0
0
0
0
wxpython
14,460,784
1
false
0
1
I don't believe the default TreeCtrl has database interaction builtin. You would have to add that. If you want it to check for updates periodically, you could use a wx.Timer. If you'll be updating the database with your wxPython GUI, then there shouldn't be a problem as you'll have to update the display anyway. You might also want to look at the DVC_DataViewModel in the wxPython demo. I think it may make this sort of thing easier as it has the concept of data objects, which I think implies that you could create a database object to feed your GUI.
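For the periodic-check variant, the wx.Timer hookup is only a few lines; a sketch in which refresh_tree is the hypothetical method that re-queries the database and rebuilds the tree:

```python
import wx

class EmployeeFrame(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self, None, title="Employees")
        self.timer = wx.Timer(self)
        self.Bind(wx.EVT_TIMER, self.on_tick, self.timer)
        self.timer.Start(5000)  # milliseconds between checks

    def on_tick(self, event):
        self.refresh_tree()

    def refresh_tree(self):
        pass  # hypothetical: reload rows from the db and rebuild the TreeCtrl

app = wx.App(False)
EmployeeFrame().Show()
app.MainLoop()
```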
1
0
0
I'm a newbie to wxPython programming and I have a general question on how to make my application more dynamic. Assume I have a tree with a list of employees. When I click an employee, the information is retrieved from the db and displayed on the right side of the panel. Now, when I edit one employee's information and save it, the current row in the table needs to be end-dated, a new row will be created in the db, and the tree also needs refreshing. So, basically, if something is saved the tree should be refreshed automatically. How do I accomplish this?
Wxpython dynamic programming
0
0
0
115
14,431,639
2013-01-21T02:18:00.000
1
0
0
0
python,django,selenium,gunicorn
20,411,022
2
false
1
0
Off the top of my head, you can try to override LiveServerTestCase.setUpClass and spin up gunicorn instead of LiveServerThread.
2
11
0
I am running an app in Django with gunicorn. I am trying to use selenium to test my app but have run into a problem. I need to create a test server, like is done with Django's LiveServerTestCase, that will work with gunicorn. Does anyone have any ideas of how I could do this? Note: could someone also confirm that LiveServerTestCase is executed as a thread, not a process?
How to set up a django test server when using gunicorn?
0.099668
0
0
1,308
14,431,639
2013-01-21T02:18:00.000
2
0
0
0
python,django,selenium,gunicorn
20,448,450
2
false
1
0
I've read the code. Looking at LiveServerTestCase for inspiration makes sense, but trying to cook up something by extending or somehow calling LiveServerTestCase is asking for trouble and increased maintenance costs. A robust way to get behavior that looks like what LiveServerTestCase does is to derive from unittest.TestCase a test case class with custom setUpClass and tearDownClass methods. The setUpClass method: Sets up an instance of the Django application with settings appropriate for testing: a database in a location that won't interfere with anything else, logs recorded to the appropriate place and, if emails are sent during normal operations, email settings that won't make your sysadmins want to strangle you, etc. [In effect, this is a deployment procedure. Since we want to eventually deploy our application, the process above is one we should develop anyway.] Loads whatever fixtures are necessary into the database. Starts a Gunicorn instance to run this instance of the Django application, using the usual OS commands for this. The tearDownClass method: Shuts down the Gunicorn instance, again using normal OS commands. Deletes the database that was created for testing, deletes whatever log files may have been created, etc. And between the setup and teardown, our tests contact the application on the port assigned to Gunicorn, load more fixtures if needed, etc. Why not try to use a modified LiveServerTestCase? LiveServerTestCase includes the entire test setup in one process: the tests, the WSGI server and the Django application. Gunicorn is not designed for operating like this. For one thing, it uses a master process and worker processes. If LiveServerTestCase is modified to somehow start the Django app in an external process, then a good deal of the benefits of this class go out the window. LiveServerTestCase relies on the fact that it can just modify settings or database connections in its process space and that these modifications will carry over to the Django app, because it lives in the same process. If the app is in a different process, these tricks can't work. Once LiveServerTestCase is modified to take care of this, the end result is close to what I've outlined above. Additional: Could someone get Gunicorn and Django to run in the same process? I'm sure someone could glue them together, but consider the following. This would certainly mean changing the core code of Gunicorn, since Gunicorn is designed to use master and worker processes. Then, this person who created the glue would be responsible for keeping the glue up to date when Gunicorn's or Django's internals change in a way that breaks it. At the end of the day, doing this requires more work than using the method outlined at the start of this answer.
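A bare-bones sketch of that recipe, built on plain unittest.TestCase. The module path, settings module, port, crude startup wait, and the requests dependency are all assumptions you would adapt to your project:

```python
import os
import subprocess
import time
import unittest

import requests  # assumed installed, just for the example check


class GunicornSmokeTest(unittest.TestCase):
    BASE_URL = "http://127.0.0.1:8081"

    @classmethod
    def setUpClass(cls):
        # Start Gunicorn against a settings module prepared for testing
        # (scratch database, quiet logging, dummy email backend, ...).
        cls.server = subprocess.Popen(
            ["gunicorn", "myproject.wsgi:application",  # hypothetical path
             "--bind", "127.0.0.1:8081"],
            env=dict(os.environ, DJANGO_SETTINGS_MODULE="myproject.settings_test"),
        )
        time.sleep(2)  # crude wait; polling the port would be more robust

    @classmethod
    def tearDownClass(cls):
        # Shut the server down and let OS-level cleanup handle the rest.
        cls.server.terminate()
        cls.server.wait()

    def test_homepage_responds(self):
        self.assertEqual(requests.get(self.BASE_URL + "/").status_code, 200)
```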
2
11
0
I am running an app in Django with Gunicorn. I am trying to use Selenium to test my app but have run into a problem. I need to create a test server, like Django's LiveServerTestCase does, that will work with Gunicorn. Does anyone have any ideas of how I could do this? Note: could someone also confirm for me that LiveServerTestCase is executed as a thread, not a process?
How to set up a django test server when using gunicorn?
0.197375
0
0
1,308
14,432,059
2013-01-21T03:21:00.000
10
0
0
0
python,.net,pandas,ironpython,python.net
14,443,106
3
true
0
0
No, Pandas is pretty well tied to CPython. Like you said, your best bet is to do the analysis in CPython with Pandas and export the result to CSV.
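A minimal sketch of that CSV handoff: run the analysis under CPython with pandas, then read the file from the IronPython/.NET side. The file name and columns here are made up for illustration:

```python
import pandas as pd

# CPython side: do the pandas work and write plain CSV.
df = pd.DataFrame({"ticker": ["AAA", "BBB"], "price": [10.5, 20.25]})
df = df[df["price"] > 15]          # whatever analysis you need
df.to_csv("results.csv", index=False)

# IronPython/.NET side: plain file reading works fine, since no
# CPython extension modules are involved:
#   rows = [line.strip().split(",") for line in open("results.csv")]
```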
1
10
1
Can we load a pandas DataFrame in .NET space using IronPython? If not, I am thinking of converting the pandas DataFrame into a CSV file and then reading it in .NET space.
Can we load pandas DataFrame in .NET ironpython?
1.2
0
0
8,863
14,433,592
2013-01-21T06:44:00.000
3
0
1
0
c++,python,oop
14,433,636
4
true
0
0
The meaning of == as used in your Python code changes when you define __cmp__. In this particular sense, the "==-operator" on the Python level is modified by your definition of __cmp__ (note that this is only true if you don't also define __eq__). But the operator== on the C++ level is not affected by this, for two reasons: Python isn't implemented in C++ but in C, and C has no operator overloading; and Python itself isn't recompiled when you write or use your Python code.
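A small Python 2 illustration of the point (note __cmp__ and the cmp() builtin are gone in Python 3, where you would define __eq__, __lt__ and friends instead):

```python
class Box(object):
    def __init__(self, size):
        self.size = size

    def __cmp__(self, other):
        # Python 2 routes ==, <, >, etc. through this when no richer
        # comparison methods (__eq__, __lt__, ...) are defined.
        return cmp(self.size, other.size)

print Box(3) == Box(3)   # True  -- dispatched via __cmp__
print Box(3) <  Box(5)   # True
```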
1
0
0
Does Python internally overload "==" when I implement the __cmp__ function in my class, just like how we do that in C++? Just curious. I am new to Python. :)
OOPS: the __cmp__ function
1.2
0
0
140
14,434,712
2013-01-21T08:18:00.000
1
0
0
0
python,mysql
14,434,772
1
true
0
0
Try deleting *.pyc files. Secondly, run the script with the -v option so that you can see where each module is being imported from.
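If you would rather do the cleanup programmatically than by hand, here is a minimal sketch using the standard library (it assumes the stale .pyc files live under the current directory); the import trace is then obtained by re-running with python -v yourscript.py:

```python
import os

# Walk the current directory tree and delete any compiled .pyc files,
# forcing Python to recompile from the current .py sources next run.
for dirpath, dirnames, filenames in os.walk("."):
    for name in filenames:
        if name.endswith(".pyc"):
            os.remove(os.path.join(dirpath, name))
```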
1
0
0
This is my program: import MySQLdb as mdb from MySQLdb import IntegrityError conn = mdb.connect("localhost", "asdf", "asdf", "asdf") When the connect function is called, Python prints some text ("h" in the shell). This happens only if I execute the script file from a particular folder. If I copy the same script file to some other folder, "h" is not printed. Actually, I previously had this line in the same script for testing: print "h" but I have now removed it from the script. Still, it is printed. What happened to my folder?
python mysqldb printing text even if no print statement in the code
1.2
1
0
41