| Column | Type | Min | Max |
|---|---|---|---|
| Q_Id | int64 | 337 | 49.3M |
| CreationDate | string (length) | 23 | 23 |
| Users Score | int64 | -42 | 1.15k |
| Other | int64 | 0 | 1 |
| Python Basics and Environment | int64 | 0 | 1 |
| System Administration and DevOps | int64 | 0 | 1 |
| Tags | string (length) | 6 | 105 |
| A_Id | int64 | 518 | 72.5M |
| AnswerCount | int64 | 1 | 64 |
| is_accepted | bool (2 classes) | | |
| Web Development | int64 | 0 | 1 |
| GUI and Desktop Applications | int64 | 0 | 1 |
| Answer | string (length) | 6 | 11.6k |
| Available Count | int64 | 1 | 31 |
| Q_Score | int64 | 0 | 6.79k |
| Data Science and Machine Learning | int64 | 0 | 1 |
| Question | string (length) | 15 | 29k |
| Title | string (length) | 11 | 150 |
| Score | float64 | -1 | 1.2 |
| Database and SQL | int64 | 0 | 1 |
| Networking and APIs | int64 | 0 | 1 |
| ViewCount | int64 | 8 | 6.81M |

The records below are pipe-separated in this column order.
19,976,115 | 2013-11-14T11:09:00.000 | 32 | 0 | 0 | 0 | python,django,django-settings | 19,976,181 | 2 | false | 1 | 0 | Using from django.conf import settings is the better option.
I use different settings files for the same Django project (one for "live", one for "dev"); importing from django.conf picks up whichever settings file is actually being used. | 1 | 112 | 0 | I'm reading up that most people do from django.conf import settings but I don't understand the difference to simply doing import settings in a django project file. Can anyone explain the difference? | What's the difference between `from django.conf import settings` and `import settings` in a Django project | 1 | 0 | 0 | 77,463
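A minimal sketch of the point made in the answer above: `import settings` binds you to one specific settings.py on the path, while django.conf resolves whatever module DJANGO_SETTINGS_MODULE points at. The function name below is illustrative only.

```python
from django.conf import settings          # resolves DJANGO_SETTINGS_MODULE lazily

def debug_banner():
    # Works whether the "live" or "dev" settings file is active, because
    # nothing is read until the first attribute access.
    if settings.DEBUG:
        return "running with DEBUG on (%s)" % settings.SETTINGS_MODULE
    return "running with production settings"
```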
19,976,664 | 2013-11-14T11:38:00.000 | 0 | 0 | 0 | 0 | python,sqlite,shared-memory | 20,004,051 | 1 | true | 0 | 0 | I've tested fcntl (in Python) with shm files and it seems that locking works correctly. Indeed, from process point of view it is a file and OS handles everything correctly.
I'm going to keep this architecture since it is simple enough and I don't see any (major) drawbacks. | 1 | 1 | 0 | I'm trying to share an in-memory database between processes. I'm using Python's sqlite3. The idea is to create a file in /run/shm and use it as a database. Questions are:
Is that safe? In particular: do read/write locks (fcntl) work the same in shm?
Is that a good idea in the first place? I'd like to keep things simple and not have to create a separate database process. | sqlite3 database in shared memory | 1.2 | 1 | 0 | 230 |
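As a companion to the answer above, a minimal sketch of the setup being described: a file-backed SQLite database placed on a tmpfs mount such as /run/shm and opened from more than one process. The path and schema are placeholders.

```python
import sqlite3

DB_PATH = "/run/shm/shared.sqlite"        # tmpfs-backed file from the question

def writer():
    conn = sqlite3.connect(DB_PATH, timeout=10)   # timeout = how long to wait on the lock
    conn.execute("CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)")
    with conn:                                    # transaction; a write lock is taken on the file
        conn.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)", ("answer", "42"))
    conn.close()

def reader():
    conn = sqlite3.connect(DB_PATH, timeout=10)
    rows = conn.execute("SELECT k, v FROM kv").fetchall()
    conn.close()
    return rows
```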
19,977,832 | 2013-11-14T12:35:00.000 | 2 | 0 | 0 | 0 | python,django,django-tinymce | 19,978,127 | 2 | true | 1 | 0 | 1) Django includes a template tag striptags to strip html tags. It uses regular expressions, it's not the right solution but it'll do the job.
2) It seems, browsing images is not included as an option in django-tinymce. You need to use another module and integrate it with TinyMCE to do the job. | 1 | 1 | 0 | I'm currently trying django tinymce, I want to show on the main page a truncated text of around 200 characters in total.
The problem is that I don't want HTML tags or images to show up in this truncated text. Is there an easy way to solve this?
Example:
If an image falls within the first 200 characters, the img tag, its attributes and so on will show up, and if I use the safe template tag the image will be rendered. How should I solve this issue? Should I write my own template tag and remove the images?
Second problem is that I couldn't find the option to browse images from the user PC and upload it. I don't want any page to view the media, I only want to browse and upload images. | Django tinymce safe template tag | 1.2 | 0 | 0 | 500 |
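A hedged sketch of suggestion 1) above, using Django's strip_tags utility together with the Truncator helper; post.body is a hypothetical field, and the availability of Truncator.chars depends on the Django version.

```python
from django.utils.html import strip_tags
from django.utils.text import Truncator

def teaser(html, length=200):
    # Remove markup first, then cut to the desired length.
    return Truncator(strip_tags(html)).chars(length)

# e.g. in a view: context["teaser"] = teaser(post.body)
```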
19,978,119 | 2013-11-14T12:51:00.000 | 1 | 0 | 0 | 0 | python,wxpython | 19,981,325 | 1 | true | 0 | 1 | You can use the custom widgets in wx.lib for the flat look, however there is no way to apply a "theme" to all the widgets in wxPython. Why? Because wxPython tries to wrap the native widgets on each platform, so it does its best to look native. If the native widgets are flat, then you'll see wxPython doing that too. Otherwise, you have to use the custom widgets.
If you want your app to have the "flat" look on all platforms, then you'll probably want to take a look at some other toolkit that allows theming, like Kivy or Tkinter. I think PySide/PyQt might even allow a little theming too. | 1 | 0 | 0 | I've been searching for a library I can import and superimpose on my wxpython-based layout to give it a Flat-UI look.
Do there exist libraries that can be used along with wxPython or just Python? | Are there Flat-UI libraries for wxpython or just python? | 1.2 | 0 | 0 | 1,140
19,982,856 | 2013-11-14T16:22:00.000 | 0 | 0 | 1 | 1 | python,git,bash,git-submodules,subproject | 21,260,659 | 2 | false | 0 | 0 | I recommend a single master repository for this problem. You mentioned that the output files of certain programs are used as input to the others. These programs may not have run-time dependencies on each other, but they do have dependencies. It sounds like they will not work without each other being present to create the data. Especially if file location (e.g. relative path) is important, then a single repository will help you keep them better organized. | 2 | 0 | 0 | I just started using git to get my the code I write for my Master-thesis more organized. I have divided the tasks into 4 sub-folders, each one containing data and programs that work with that data. The 4 sub-projects do not necessarily need to be connected, none off the programs contained use functions from the other sub-projects. However the output-files produced by the programs in a certain sub-folder are used by programs of another sub-folder.
In addition some programs are written in Bash and some in Python.
I use git in combination with bitbucket. I am really new to the whole concept, so I wonder if I should create one "Master-thesis" repository or rather one repository for each of the (until now) 4 sub-projects. Thank you for your help! | Git: Master-thesis subprojects as submodules or stand-alone repositories | 0 | 0 | 0 | 283 |
19,982,856 | 2013-11-14T16:22:00.000 | 1 | 0 | 1 | 1 | python,git,bash,git-submodules,subproject | 19,982,974 | 2 | false | 0 | 0 | Well, as devnull says, answers would be highly opinion based, but given that I disagree that that's a bad thing, I'll go ahead and answer if I can type before someone closes the question. :)
I'm always inclined to treat git repositories as separate units of work or projects. If I'm likely to work on various parts of something as a single project or toward a common goal (e.g., Master's thesis), my tendency would be to treat it as a single repository.
And by the way, since the .git repository will be in the root of that single repository, if you need to spin off a piece of your work later and track it separately, you can always create a new repository if needed at that point. Meantime it seems "keep it simple" would mean one repo. | 2 | 0 | 0 | I just started using git to get my the code I write for my Master-thesis more organized. I have divided the tasks into 4 sub-folders, each one containing data and programs that work with that data. The 4 sub-projects do not necessarily need to be connected, none off the programs contained use functions from the other sub-projects. However the output-files produced by the programs in a certain sub-folder are used by programs of another sub-folder.
In addition some programs are written in Bash and some in Python.
I use git in combination with bitbucket. I am really new to the whole concept, so I wonder if I should create one "Master-thesis" repository or rather one repository for each of the (until now) 4 sub-projects. Thank you for your help! | Git: Master-thesis subprojects as submodules or stand-alone repositories | 0.099668 | 0 | 0 | 283 |
19,982,928 | 2013-11-14T16:25:00.000 | 0 | 0 | 1 | 1 | python,download,virtualenv | 19,983,121 | 1 | true | 0 | 0 | virtualenv doesn't cache the downloads anywhere. So it downloads the sources once, compiles and installs them and then deletes the download. If you delete the env, all installed modules are gone as well. | 1 | 0 | 0 | I set up a virtual environment on my mac and downloaded some Python libraries.
What happens to those libraries after I delete my virtual environment?
Where are my downloads stored when I download them in my virtualenv?
Thank you | What happens to my downloads when I delete the virtual environment they're in? | 1.2 | 0 | 0 | 37 |
19,984,224 | 2013-11-14T17:22:00.000 | 1 | 0 | 0 | 0 | python,django,post,get | 41,547,619 | 4 | false | 1 | 0 | Try this:
name = request.GET.get('name', request.POST.get('name')) | 2 | 6 | 0 | Are the parameters in request.POST and request.GET BOTH in request.REQUEST ? Or i have to check for each of them ?
I can't find a clear info in the documentation for both REQUEST/QueryDict.
NB: Django 1.4 Final | Does Django request.REQUEST.get() contain BOTH GET and POST parameters? | 0.049958 | 0 | 0 | 8,947 |
19,984,224 | 2013-11-14T17:22:00.000 | 1 | 0 | 0 | 0 | python,django,post,get | 19,984,347 | 4 | false | 1 | 0 | Yes, the doc says:
HttpRequest.REQUEST: For convenience, a dictionary-like object that searches POST first, then GET. Inspired by PHP’s $_REQUEST.
For example, if GET = {"name": "john"} and POST = {"age": '34'}, REQUEST["name"] would be "john", and REQUEST["age"] would be "34".
It’s strongly suggested that you use GET and POST instead of REQUEST,
because the former are more explicit. | 2 | 6 | 0 | Are the parameters in request.POST and request.GET BOTH in request.REQUEST ? Or i have to check for each of them ?
I can't find a clear info in the documentation for both REQUEST/QueryDict.
NB: Django 1.4 Final | Does Django request.REQUEST.get() contain BOTH GET and POST parameters? | 0.049958 | 0 | 0 | 8,947 |
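For readers who want to avoid request.REQUEST entirely, a small helper along the lines of the first answer; the parameter handling order (GET first, then POST) is an illustrative choice.

```python
def get_param(request, name, default=None):
    # Explicit version of the lookup REQUEST used to do.
    if name in request.GET:
        return request.GET[name]
    return request.POST.get(name, default)

# usage in a view: query = get_param(request, "q", "")
```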
19,984,477 | 2013-11-14T17:34:00.000 | 1 | 0 | 1 | 0 | python,dictionary | 19,984,541 | 7 | false | 0 | 0 | You can create an empty dict for the counters, then loop through the dict you've got and add 1 to the corresponding value in the second dict, then return the key of the element with the minimum value in the second dict. | 1 | 4 | 0 | I'm working on a problem that asks me to return the least frequent value in a dictionary and I can't seem to work it out besides with a few different counts, but there aren't a set number of values in the dictionaries being provided in the checks.
For example, suppose the dictionary contains mappings from students' names (strings) to their ages (integers). Your method would return the least frequently occurring age. Consider a dictionary variable d containing the following key/value pairs:
{'Alyssa':22, 'Char':25, 'Dan':25, 'Jeff':20, 'Kasey':20, 'Kim':20, 'Mogran':25, 'Ryan':25, 'Stef':22}
Three people are age 20 (Jeff, Kasey, and Kim), two people are age 22 (Alyssa and Stef), and four people are age 25 (Char, Dan, Mogran, and Ryan). So rarest(d) returns 22 because only two people are that age.
Would anyone mind pointing me in the right direction please? Thanks! | Find least frequent value in dictionary | 0.028564 | 0 | 0 | 3,493 |
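A compact sketch of the counting approach described in the answer, using collections.Counter; ties between equally rare values are resolved arbitrarily.

```python
from collections import Counter

def rarest(d):
    counts = Counter(d.values())          # e.g. {20: 3, 22: 2, 25: 4}
    return min(counts, key=counts.get)    # key with the smallest count

d = {'Alyssa': 22, 'Char': 25, 'Dan': 25, 'Jeff': 20, 'Kasey': 20,
     'Kim': 20, 'Mogran': 25, 'Ryan': 25, 'Stef': 22}
print(rarest(d))  # 22
```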
19,986,306 | 2013-11-14T19:13:00.000 | 20 | 0 | 1 | 1 | python,command,installation,dollar-sign | 19,986,337 | 5 | true | 0 | 0 | As of now, Python does not implement $ in its syntax. So, it has nothing to do with Python.
Instead, what you are seeing is the terminal prompt of a Unix-based system (Mac, Linux, etc.) | 2 | 15 | 0 | I've been learning Python, and I keep running into the $ character in online documentation. Usually it goes something like this:
$ python ez_setup.py (Yeah, I've been trying to install setup tools)
I'm fairly certain that this command isn't for the python IDE or console, but I've tried windows cmd and it doesn't work. Any help? | What does the $ mean when running commands? | 1.2 | 0 | 0 | 50,638 |
19,986,306 | 2013-11-14T19:13:00.000 | 5 | 0 | 1 | 1 | python,command,installation,dollar-sign | 19,986,332 | 5 | false | 0 | 0 | The $ is the command prompt. It is used to signify that python ez_setup.py should be run on a command line and not on a python/perl/ruby shell
You might also see % python ez_setup.py, which also means the same thing | 2 | 15 | 0 | I've been learning Python, and I keep running into the $ character in online documentation. Usually it goes something like this:
$ python ez_setup.py (Yeah, I've been trying to install setup tools)
I'm fairly certain that this command isn't for the python IDE or console, but I've tried windows cmd and it doesn't work. Any help? | What does the $ mean when running commands? | 0.197375 | 0 | 0 | 50,638 |
19,986,604 | 2013-11-14T19:29:00.000 | 1 | 0 | 0 | 0 | python-2.7,hp-quality-center | 19,987,148 | 1 | false | 0 | 0 | I found an article by fijiaaron that let me know even if you/I have a 64 bit machine, we need the 32 bit version of Python installed, as soon as I installed the 32 bit version, I was able to see the print lines. Now on to the selenium... | 1 | 0 | 0 | I am familiar with Selenium using C# and integrating it with TeamCity - as a background. I am at a new company now. We have HP Quality Center and I saw a few articles how to use QC, Selenium, and Python Script to automate tests. I am having a major issue even getting a basic TDOutput.Print("test") to show up in the output log, unless I am wrong in expecting any TDOutput.Print lines to show up in that space.
I have a Windows 7 box.
The steps I have followed are:
installed Python on my machine (2.7)
installed setuptools
installed pip (I can install items using pip on the command line, including selenium)
In QC:
I create a new VAPI-XP test
Choose PythonScript in the Wizard
Then click Finish (and do not go any further on the wizard)
Quality Center will then generate a skeleton script.
In the skeleton, under TDOutput.Clear(), I type: TDOutput.Print("Hello World")
The output window never clears and it never prints "Hello World"
This is what I see in the output window: Test is completed
Any suggestions on what easy step I have missed? I obviously do not have a lot of experience with this tool, so it's hard for me to see why I can't even get a print statement to execute - let alone worry about the selenium portion of my testing. | HP Quality Center Python Tests not executing any lines of script | 0.197375 | 0 | 0 | 791
19,986,721 | 2013-11-14T19:36:00.000 | 0 | 0 | 0 | 0 | python,web,web2py | 19,988,936 | 1 | false | 1 | 0 | In most cases, a website is probably a single web2py application, though that doesn't have to be the case (a web2py installation can include multiple applications, which can interact with each other). An application itself can be composed of multiple re-usable plugins (similar to how a Django project can be made up of multiple apps). Each application typically has its own database (or even multiple databases) and sessions, though both databases and sessions can be shared across applications if desired. | 1 | 0 | 0 | web2py beginner. In django, a project is typically made up of multiple applications that encapsulate project functions, right? Do web2py projects typically have the same structure or is the entire website just a single application? | Is a web2py project typically made up of multiple applications? | 0 | 0 | 0 | 97 |
19,986,895 | 2013-11-14T19:45:00.000 | 0 | 0 | 1 | 1 | python,python-3.x | 19,987,194 | 2 | false | 0 | 0 | instead of using python I typed python3 in terminal and it was the solution | 1 | 0 | 0 | I have mac os x, recently I have install python version 3.2 before I had version 2.6.1. but when I type "python" in terminal it prints Python 2.6.1 (r261:67515, Jun 24 2010, 21:47:49). what does it mean? how can I use python 3.2 that have install this week? | install new version of python | 0 | 0 | 0 | 168 |
19,987,323 | 2013-11-14T20:06:00.000 | 4 | 0 | 1 | 0 | python,opencv,python-2.7,python-imaging-library,pillow | 19,987,933 | 1 | true | 0 | 0 | You need to:
Download OpenCV
Use CMake to tell it to compile statically and to tell it to compile the Python module
Compile, and install into a directory you want.
Find in that directory the file under a directory called python, called cv2.so
Distribute that file with your Python code.
Now that I told you how to do it, let me tell you why your approach isn't a very good idea:
If the version of Python changes, you need to recompile (the so file) and redistribute your entire application
If the version of OpenCV changes you will need to recompile (the so file) and redistribute your entire application
You don't control what version of Python your users have
There can be important subtleties in version of libjpg, libtiff, zlib and others that could prevent your application from working, all outside your control.
You are converting a multi-platform application into a platform specific solution. | 1 | 1 | 0 | I can't seem to find a way to create a standalone package for image recognition. I have a project I'm writing in python, and I found a way to do what I need using OpenCV, but I can't find a way to import the library into my project unless it is installed at the system level on Ubuntu. In other words, I can't seem to plop the build folder into my project after building the OpenCV library. And I can't find the equivalent of cv2.matchTemplate() in PIL or Pillow. So really there are two questions here.
1) How can I attach the build folder to my project, in order to avoid installing the OpenCV at the system level.
2) Is there an equivalent of cv2.matchTemplate() in PIL or Pillow that I can't seem to find?
Thanks. | Creating a standalone project with image recognition | 1.2 | 0 | 0 | 289 |
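For reference, a hedged example of the cv2.matchTemplate call mentioned in question 2); the file names and the 0.9 threshold are placeholders, and this is independent of how cv2 itself gets packaged.

```python
import cv2

img = cv2.imread("screenshot.png", 0)     # 0 = load as grayscale; file names are placeholders
tmpl = cv2.imread("button.png", 0)

result = cv2.matchTemplate(img, tmpl, cv2.TM_CCOEFF_NORMED)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(result)
if max_val > 0.9:                          # threshold is an arbitrary choice
    print("best match at", max_loc)
```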
19,988,654 | 2013-11-14T21:21:00.000 | 0 | 0 | 0 | 1 | python,google-app-engine,permissions,roles,app-engine-ndb | 20,210,524 | 3 | false | 1 | 0 | You must manage user_profile yourself. In your user_profile, you can store the user id such as an email address or a google user id like you want. Add a role array in this entity where you store all roles for this user and you manage access with decorators.
For example, users which are employers will have "EMPLOYERS" in their roles and you manage access to the job creation handler with a @isEmployer decorator.
With this solution, you can assign many roles for you user like "ADMIN" in the future. | 2 | 1 | 0 | I am undergoing Udacity's Web Development course which uses Google AppEngine and Python.
I would like to set up specific user roles, and their alloted permissions. For example, I may have two users roles, Employer and SkilledPerson, and assign their permissions as follows:
Only Employers may create Job entities.
Only SkilledPerson may create Resume and JobApplication entities.
How do I do this?
How do I define these user roles?
How do I assign a group of permissions to specific roles?
How do I allow users to sign up as a particular role (Employer or SkilledPerson)? | Google AppEngine: Setting up user roles and permissions | 0 | 0 | 0 | 874 |
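A hedged sketch of the decorator idea from the first answer; get_current_profile, the roles list and the webapp2-style abort are all placeholders to adapt to your own user model and handler framework.

```python
import functools

def requires_role(role):
    def decorator(handler_method):
        @functools.wraps(handler_method)
        def wrapper(self, *args, **kwargs):
            profile = get_current_profile()   # placeholder: look up your UserProfile entity
            if profile is None or role not in profile.roles:
                return self.abort(403)        # webapp2-style; adjust for your handler class
            return handler_method(self, *args, **kwargs)
        return wrapper
    return decorator

# Usage (hypothetical handler):
# class CreateJobHandler(BaseHandler):
#     @requires_role("EMPLOYER")
#     def post(self):
#         ...
```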
19,988,654 | 2013-11-14T21:21:00.000 | 0 | 0 | 0 | 1 | python,google-app-engine,permissions,roles,app-engine-ndb | 20,207,381 | 3 | false | 1 | 0 | I'd create a user_profile table which stores their Google user id, and two Boolean fields for is_employer and is_skilled_person, because there's always potential for someone to be both of these roles on your site. (Maybe I'm an employer posting a job but also looking for a job as well)
If you perceive having multiple roles and a user can only be one role, I'd make it a string field holding the role name like "employer", "admin", "job seeker" and so on. | 2 | 1 | 0 | I am undergoing Udacity's Web Development course which uses Google AppEngine and Python.
I would like to set up specific user roles, and their alloted permissions. For example, I may have two users roles, Employer and SkilledPerson, and assign their permissions as follows:
Only Employers may create Job entities.
Only SkilledPerson may create Resume and JobApplication entities.
How do I do this?
How do I define these user roles?
How do I assign a group of permissions to specific roles?
How do I allow users to sign up as a particular role (Employer or SkilledPerson)? | Google AppEngine: Setting up user roles and permissions | 0 | 0 | 0 | 874 |
19,990,997 | 2013-11-15T00:04:00.000 | 2 | 0 | 0 | 0 | python,django,django-models,django-users,django-managers | 20,015,389 | 1 | false | 1 | 0 | The point of save() and create_user() method is to save the object to the database. The difference is that the save() method need you to instantiate the object first, while create_user() automatically create the object and save into the database. There is no the absolute right way. You can use both of the as you want. Both method results are the same, add object to database. | 1 | 1 | 0 | I have a Django custom user model with a save() method that tests for self.pk is None and does some extra processing to a field before creating a new user record. Is this the right place to put that or is it supposed to go into the create_user() method of the custom user manager? Does it make a difference? | Create_user() in custom manager or save() method in Django custom user model for processing field before creating new record? | 0.379949 | 0 | 0 | 878 |
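A rough sketch of the create_user() side of the trade-off discussed above: create_user() only runs when a new user is built, while save() runs for every save, so "only on creation" logic there needs a self.pk check. The email and slug fields are illustrative, not part of the original question.

```python
from django.contrib.auth.models import BaseUserManager

class AccountManager(BaseUserManager):
    def create_user(self, email, password=None, **extra_fields):
        user = self.model(email=self.normalize_email(email), **extra_fields)
        user.set_password(password)
        user.slug = email.split("@")[0]    # example of one-time processing at creation
        user.save(using=self._db)
        return user
```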
19,992,294 | 2013-11-15T02:13:00.000 | 1 | 1 | 0 | 0 | python,apache2,mod-wsgi | 19,992,498 | 1 | true | 1 | 0 | Python is different from PHP in that PHP executes your entire program separately for each hit to your website, whereas Python runs "worker processes" that stay resident in memory.
You need some sort of web framework to do this work for you (you could write your own, but using someone else's framework makes it much easier). Flask is an example of a light one; Django is an example of a very heavy one. Pick one and follow that framework's instructions, or look for tutorials for that framework. Since the frameworks differ, most practical documentation on handling web services with Python are focused around a framework instead of just around the language itself.
Nearly any python web framework will have a development server that you can run locally, so you don't need to worry about deploying yet. When you are ready to deploy, Apache will work, although it's usually easier and better to use Gunicorn or another python-specific webserver, and then if you need more webserver functionality, set up nginx or Apache as a reverse proxy. Apache is a very heavy application to use for nothing but wsgi functionality. You also have the option of deploying to a PaaS service like Heroku (free for development work, costs money for production applications) which will handle a lot of sysadmin work for you.
As an aside, if you're not using virtualenv to set up your Python environment, you should look into it. It will make it much easier to keep track of what you have installed, to install new packages, and to isolate an environment so you can work on multiple projects on the same computer. | 1 | 0 | 0 | I have been looking at setting up a web server to use Python and I have installed Apache 2.2.22 on Debian 7 Wheezy with mod_wsgi. I have gotten the initial page up and going and the Apache will display the contents of the wsgi file that I have in my directory.
However, I have been researching on how to deploy a Python application and I have to admin, I find some of it a little confusing. I am coming from a background in PHP where it is literally install what you need and you are up and running and PHP is processing the way it should be.
Is this the same with Python? I can't seem to get anything to process outside of the wsgi file that I have setup. I can't import anything from other files without the server throwing a "500" error. I have looked on Google and Bing to try to find an answer to this, but I can't seem to find anything, or don't know that what I have been looking at is the answer.
I really appreciate any help that you guys can offer.
Thanks in advance! (If I need to post any coding, I can do that, I just don't know what you guys would need, if anything, as far as coding examples for this...) | Python Web Server - mod_wsgi | 1.2 | 0 | 0 | 203 |
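A minimal Flask application of the sort the answer suggests starting with, using the built-in development server before any Apache/mod_wsgi wiring.

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from Python"

if __name__ == "__main__":
    app.run(debug=True)   # development server only
```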
19,993,323 | 2013-11-15T04:03:00.000 | 0 | 0 | 0 | 0 | html,ironpython,spotfire | 62,234,196 | 3 | false | 1 | 0 | What you can do is add a new text area. Then, add a drop down list in this text area. You can create a new Document.Properties linked to this drop down list.
You can create many options in your drop down list, and your Document.Properties will have the selected value of your drop down list.
The text area can be designed in HTML/CSS so you can customize it as you want.
Once you have your Document.Properties with the drop down list value, you can go on your chart properties and add a custom expression with your Document.Properties like [MyCol]==$(docproperties)
I hope it will help you ! | 1 | 0 | 0 | I want to make a drop down list in spotfire using html. So based on choice selected I want to show custom divs. How to implement that ?? Can anyone help with the ironpython script | HTML in Spotfire and IronPython | 0 | 0 | 0 | 3,779 |
19,993,951 | 2013-11-15T05:08:00.000 | 2 | 0 | 0 | 0 | python,django,rest,django-rest-framework | 22,542,900 | 5 | true | 1 | 0 | It turned out the problem was that I was using ListAPIView as the base class for my view class and it doesn't have the pre_save method defined. When I added some of the mixins which had it defined everything started working.
Seems weird that something used in a lot of the basic tutorials doesn't support such a basic feature, but live and learn. | 1 | 11 | 0 | I need to attach a user to a request, this seems like a fairly common thing to need to do, but it's turning out to be damn near impossible.
The docs for the Django REST Framework suggest using the pre_save method of the serializer class, which I did, but it doesn't get called when serializer.is_valid() is called, which makes it kind of worthless since without the user field the serializer fails validation.
I've seen a few suggestions but they seem like crazy hacks and/or don't work. Plus, I feel like this is way too common of a task to really need all the stuff I see people suggesting. I can't be the only person to need to attach a user to a object created in a REST request. | Django REST Framework, pre_save() and serializer.is_valid(), how do they work? | 1.2 | 0 | 0 | 9,143 |
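A hedged sketch of attaching the user; which hook exists depends on the DRF release (older versions expose pre_save on generic views, newer ones perform_create), and the owner field and model names are illustrative.

```python
from rest_framework import generics
# from .models import Thing                 # hypothetical model
# from .serializers import ThingSerializer  # hypothetical serializer

class ThingListCreate(generics.ListCreateAPIView):
    # queryset = Thing.objects.all()
    # serializer_class = ThingSerializer

    # DRF 2.x style hook (the pre_save the question refers to):
    def pre_save(self, obj):
        obj.owner = self.request.user

    # DRF 3.x equivalent:
    def perform_create(self, serializer):
        serializer.save(owner=self.request.user)
```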
19,998,958 | 2013-11-15T10:31:00.000 | 3 | 0 | 0 | 1 | python,google-app-engine,heroku,openshift | 20,003,022 | 2 | true | 1 | 0 | I work on Openshift and at this time I'm not aware of anything that will deploy your code to GAE and Openshift at the same time.
You might be able to write your own script for it. | 2 | 1 | 0 | As subject, is is possible with just one source code, we can deploy our code to Openshift or Google App Engine? Heroku is not necessarily in my case.
My application is using Python Flask + PostgreSQL 9.1. I love the easiness in Openshift when I configure my technology stack, but is the case will be same with GAE?
Thanks! | Can one source code be deployed to Openshift, Heroku and Google App Engine at once? | 1.2 | 0 | 0 | 244 |
19,998,958 | 2013-11-15T10:31:00.000 | 1 | 0 | 0 | 1 | python,google-app-engine,heroku,openshift | 20,003,075 | 2 | false | 1 | 0 | PostgreSQL is not available on GAE, so this code will definitely not run there. | 2 | 1 | 0 | As subject, is is possible with just one source code, we can deploy our code to Openshift or Google App Engine? Heroku is not necessarily in my case.
My application is using Python Flask + PostgreSQL 9.1. I love the easiness in Openshift when I configure my technology stack, but is the case will be same with GAE?
Thanks! | Can one source code be deployed to Openshift, Heroku and Google App Engine at once? | 0.099668 | 0 | 0 | 244 |
20,000,286 | 2013-11-15T11:40:00.000 | 3 | 0 | 0 | 0 | python,django,nose | 20,023,533 | 1 | true | 1 | 0 | Nose keeps track of standard output (stdout) and logging (python logging module) in separate containers. You can control the capturing process of the output during the test run for both buffers. So if you want to disable capturing of the recorded log with --nocapture and --nologcapture your tests will end with stack trace. This way step (3) will be eliminated. To reorder sequence as you describe it, you may want to make a custom plugin. | 1 | 0 | 0 | Django-nose prints the exception and stack trace somewhere hidden in the middle of two logs in the following format:
Live Log (as it is being executed)
The exception and stack trace
The recorded log
This is really unhelpful if the logs are very long (hundreds of lines) as one has to find the "in between" stack trace to know what actually went wrong, rather than just scrolling to the bottom and being able to see the error.
Is there any way of formatting this differently so that the stack trace and exception are printed last (aka 1. Live log, 2. recorded log, 3. exception and stack trace)?! From what I can tell there are no options to do so. | How to print django-nose exception below log | 1.2 | 0 | 0 | 401 |
20,001,553 | 2013-11-15T12:52:00.000 | 2 | 0 | 0 | 1 | python-2.7,distributed,scheduler,apscheduler | 25,422,173 | 1 | false | 0 | 0 | This looks like an old question but I'll answer it anyway. No, it's not (yet) possible to run APScheduler in that manner yet due to lack of a synchronization/locking mechanism to that end. | 1 | 5 | 0 | I want to run multiple instances of APScheduler pointing to one common persistent job DB. Is it possible to run in that way?? I also mean that the jobs in the DB get shared among the Scheduler instances and at a point there is only one instance executing a scheduled job. | Scaling APScheduler | 0.379949 | 0 | 0 | 901 |
20,001,606 | 2013-11-15T12:54:00.000 | 6 | 0 | 1 | 1 | python,linux,virtualenv,chroot | 20,001,653 | 1 | false | 0 | 0 | bootstrapping a directory tree that can be passed as root
That's not what virtualenv does, except (to some degree) for Python packages. It provides a place where these can be installed without replacing the rest of the filesystem. It also works without root privileges and it's portable as it needs no kernel support, unlike chroot, which (I presume) won't work on Windows.
Can't one install packages/modules locally in whatever application directory
Yes, but virtualenv does one more thing, which is that it disables (by default at least) the system's Python package directories. That means you can test whether your package correctly installs all of its dependencies (you might have forgotten to list one because it's already installed on your system) and it allows installing different versions in case you need either newer or older versions. The ability to install older versions should not be overlooked because sometimes new versions of packages introduce bugs. | 1 | 4 | 0 | First of all let me state that I am a proponent of generic software (in general ;-). I am no expert on Python, but it seems that the 'virtualenv' utility solves pretty much the same problem 'chroot' can help to solve - bootstrapping a directory tree that can be passed as root, thus effectively protecting the real directory tree, if needed.
Since I am no expert in Python as already mentioned, I wonder - what problem can virtualenv solve that chroot cannot? I mean, can't I just set up a nice fake root tree (possibly using union mounting), chroot into it, and do pip install a package I want in my new environment, and then play around within the bounds of my new environment, running python scripts and what not?
Am I missing something here?
Update:
Can't one install packages/modules locally in whatever application directory, I mean, without root privileges and subsequently without overwriting or adding files to /usr/lib or /usr/local/lib? It appears that this is what virtualenv does, however I think it has to symlink or otherwise provide a python interpreter for each environment one creates, does it not? | Why use Pythons 'virtualenv' on Linux when one has 'chroot' (and union/overlay filesystems)? | 1 | 0 | 0 | 2,878 |
20,003,295 | 2013-11-15T14:26:00.000 | 7 | 0 | 1 | 0 | python,django,string,integer | 20,003,855 | 3 | false | 1 | 0 | What about: if isinstance(data, int): | 1 | 7 | 0 | In a function in Django, the user can send me a number or a string, and I want to know if I received a number or a String (Tip: The number will always be an integer between 1-6)
I want to know if it's possible to detect this and how (with an example), as the number or string I'm getting will tell me what to do next. | Django, Detecting if variable is a number | 1 | 0 | 0 | 8,638 |
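A small sketch covering both situations the question describes: the value arrives as an int, or as a string holding digits (as everything from request.POST does).

```python
def as_choice(value):
    if isinstance(value, int):
        return value
    try:
        return int(value)                  # "3" -> 3
    except (TypeError, ValueError):
        return None                        # a genuinely non-numeric string
```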
20,004,082 | 2013-11-15T15:05:00.000 | 0 | 0 | 0 | 0 | python,networking,application-design | 20,030,106 | 1 | false | 0 | 0 | There are best practices, definitely.
As a first advice, you should definitely decouple the implementation from the representation used when you send/receive data via the network. Don't use Python dicts. Use a widely accepted serialization format like JSON, ASN.1 or protocol buffers. Make sure you have a clear idea what you need to send over the network and what the requirements are (latency, throughput, CPU time for encoding/decoding, etc) and choose something that fits them.
Second, use a de facto or de jure standard for communicating over the network. Be it REST, AMQP or anything else - it's impossible to tell which one would be the best fit since your question is too broad. But make sure you're not implementing your own in-house adhoc application layer protocol - you would just make your life and your colleagues life so much harder down the road.
I'd suggest you think a bit more what you want to do, and post more specific questions later on. | 1 | 0 | 0 | I am currently designing an application that will require data transmission. I am currently working on the client software that will build the data packages that will be sent via the network level service.
What data type should I use for network transmission? I am currently pondering whether I should use a physical data file (.dat) which can be easily manipulated (created/read/etc.) via Python or use only internal data. From a management and organizational standpoint, I think file based data may be the easiest to manipulate and handle on a networking level.
If I were head more towards a internal (Python) data handling method, what should my starting point be? Should I look at dictionaries? The over-arching goal is to keep data size minimal. Using file-based data, I believe I would only be looking at just a few bytes for actual transmission. The native platform is going to be Windows, but I would also like to look at my options for a mobile standpoint (Android/iOS).
The purpose of the program is data entry. User entry will be recorded, packaged, encrypted and sent (via a WAN) to a server where it will be stored in a database for query at a later time. | Data Handling For Network Transmission | 0 | 0 | 1 | 218 |
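A minimal sketch of the serialization advice above: encode the record as JSON bytes and length-prefix the frame before it reaches the socket layer. The host, port and framing scheme are arbitrary choices, not a fixed protocol.

```python
import json
import socket
import struct

def send_record(record, host="127.0.0.1", port=9000):
    payload = json.dumps(record).encode("utf-8")
    frame = struct.pack(">I", len(payload)) + payload     # 4-byte length prefix
    s = socket.create_connection((host, port))
    try:
        s.sendall(frame)
    finally:
        s.close()
```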
20,005,436 | 2013-11-15T16:10:00.000 | 1 | 0 | 0 | 0 | python,lxml,libxml2,libxslt | 20,007,236 | 2 | false | 0 | 0 | It seems that you're using lxml extension functions. In this case, the "Stack usage error" (XPATH_STACK_ERROR internally) happens when a value is popped off the XPath stack and the stack is empty. The typical scenario is an extension function called with fewer parameters than expected. | 1 | 0 | 0 | What is the cause of a Stack usage error from libxml2/libxslt/lxml? | libxml: "Stack usage error" - further information? | 0.099668 | 0 | 1 | 183 |
20,008,181 | 2013-11-15T18:34:00.000 | 0 | 0 | 0 | 1 | python,opencv,command-line-interface,python-idle | 20,009,587 | 1 | true | 0 | 0 | When you launch GUI applications on OS X (.app bundles), no shell is involved and shell profile scripts are not used. IDLE.app is no exception. So any environment variables defined there are not available to the GUI app. The best solution is to properly install your third-party packages into the standard locations included in Python's module search path, viewable as sys.path, and not use PYTHONPATH at all. Another option in this case is to launch IDLE from a terminal session shell, e.g. /usr/local/bin/idle2.7. | 1 | 0 | 0 | I am able to import the OpenCV python bindings (cv2) fine when running Python from the command line, but I receive the standard 'no module named cv2' from IDLE when I import there.
I checked the Path Browser in IDLE, and noticed that it doesn't match my .bashrc PYTHONPATH.
That said, I copied the cv2 binding files into one of the directories specified in the Path Browser, and IDLE still can't find it.
Two questions:
1) Has anyone run into this circumstance?
2) Does IDLE have a PYTHONPATH different from the rest of the system? | IDLE can't find cv2, CLI Python imports it correctly | 1.2 | 0 | 0 | 1,242 |
20,008,825 | 2013-11-15T19:14:00.000 | 1 | 0 | 0 | 0 | python,opencv | 20,009,112 | 1 | false | 0 | 0 | As I can understand from source code, they have to be Point3D, i.e. non-homogenous. | 1 | 0 | 1 | I'm trying to find a transformation matrix that relates 2 3D point clouds. According to the documentation, cv2.estimateAffine3D(src, dst) --> retval, out, inliers.
src is the first 3D point set, dst is the second 3D point set.
I'm assuming that retval is a boolean.
Out is the 3x4 affine transformation matrix, and inliers is a vector.
My question is, what are the shapes of the input point sets? Do the points have to be homogeneous, i.e. 4xN? | OpenCV estimateAffine3D in Python: shapes of input matrix? | 0.197375 | 0 | 0 | 515 |
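A hedged example of calling cv2.estimateAffine3D with two matching N x 3 arrays of non-homogeneous points; some OpenCV builds prefer the points reshaped to (N, 1, 3), so treat the exact shape as version-dependent.

```python
import numpy as np
import cv2

src = np.random.rand(10, 3).astype(np.float32)        # N x 3, non-homogeneous points
dst = src + np.float32([1.0, 2.0, 3.0])               # pure translation for the demo

retval, M, inliers = cv2.estimateAffine3D(src, dst)   # some builds want src.reshape(-1, 1, 3)
print(retval)           # non-zero on success
print(M.shape)          # (3, 4) affine transformation
print(inliers.ravel())  # per-point inlier flags
```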
20,010,125 | 2013-11-15T20:36:00.000 | 1 | 0 | 0 | 0 | python,django | 36,814,048 | 2 | false | 1 | 0 | If you're using Firefox (or any browser) with Web Developer Toolbar, make sure you have cookies enabled. I temporarily had cookies turned off and forgot to turn it back on.
Cookies => Disable Cookies => Disable All Cookies
Solved my problem in Django 1.9. | 1 | 2 | 0 | I have multiple users in django auth setup and a few simple models.
For superusers I can view my model objects. For non superusers that have is_staff checked I get
a 403 Permission denied when trying to view my models.
I have tried adding all permissions to those users to find out if that was the cause but still receive the forbidden message. Other than making them superusers I can't assign any more permissions.
On the command line where I'm running the development server I see messages like
"GET /admin/bcp/buildingsensor/24/ HTTP/1.1" 403 190614
Does anyone know how to get a more useful traceback for this, so I know where to start looking?
Thanks | Finding the cause of of 403 forbidden error in django admin | 0.099668 | 0 | 0 | 6,383 |
20,014,997 | 2013-11-16T05:15:00.000 | 1 | 0 | 1 | 0 | python,file,methods,line | 20,015,233 | 2 | false | 0 | 0 | Since you haven't shown us what you have tried, I'm not going to show you any code in my answer.
Read the file data into a list. Close the file.
Insert the data where you want it in the list. You can insert an element into a certain position of a list, or modify an element of the list in place.
Open a file object in write mode and overwrite the existing file. | 1 | 0 | 0 | I'm new in Python
I need to write a number at a specified line and column of a file; when I use the .write method, it writes only at the last line of the file. Is there a way to write something at the line that I want?
Thanks for the time | Write in a line of a file in Python | 0.099668 | 0 | 0 | 134 |
20,015,701 | 2013-11-16T06:53:00.000 | 0 | 0 | 0 | 0 | python,django,pipe,uwsgi | 20,015,756 | 1 | false | 1 | 0 | check the timeout of your frontend server, generally if there is no activity, the connection is closed. For example nginx has 60 seconds timeout | 1 | 0 | 0 | I am using uWsgi + Django to write web services for an android application to submit survey data which is normally not very large set of data but, the application sends multiple calls to the server. If the surveyor has done 50 surveys, he'll tap one button on his mobile and all the surveys will be sent to the server one by one in one go. Sometimes I get the BROKEN PIPE error. Data is saved on the server but response is not sent back to the mobile due to which those surveys are not removed from the mobile and next time all those surveys are again sent to the surveys along with the new ones and it is causing a lot of duplication.
PLEASE HELP.... | uWSGI with Django gives Broken pipe | 0 | 0 | 0 | 338 |
20,016,361 | 2013-11-16T08:22:00.000 | 0 | 0 | 1 | 0 | python-2.7,geopy | 20,016,409 | 2 | false | 0 | 0 | Well, as I can see, geopy doesn't have any built-in capability to get a list of areas around some coordinates.
But you can use a workaround. Take your geocode and calculate its coordinates (latitude and longitude). Then imagine a grid on the map with a cell size equal to the area of the smallest region you need to find around your location.
Use geopy to get an area name belonging to each cell corner of your grid. Is that OK for you? It will give you some kind of approximation because a grid is not a circle and you may miss some small areas. But I think in most cases the solution will work fine. | 1 | 2 | 0 | Hi friends, I am using geopy to calculate the latitude and longitude. Now I want to get the list of areas within a given distance from a zipcode. How to get that? | how to query the database to return all zip codes with a given distance (ie 5 miles) from a given zip code using geopy | 0 | 0 | 0 | 552
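A hedged sketch of the grid idea, using geopy's Nominatim geocoder (constructor arguments and rate limits vary by geopy version); the degree offsets are rough approximations, not exact distance math.

```python
from geopy.geocoders import Nominatim

geolocator = Nominatim(user_agent="zip-area-demo")    # user_agent needed on newer geopy

def areas_around(zipcode, miles=5, steps=2):
    center = geolocator.geocode("%s USA" % zipcode)
    deg = miles / 69.0                                # ~69 miles per degree of latitude
    names = set()
    for i in range(-steps, steps + 1):
        for j in range(-steps, steps + 1):            # (2*steps+1)**2 reverse lookups: mind rate limits
            loc = geolocator.reverse((center.latitude + i * deg / steps,
                                      center.longitude + j * deg / steps))
            if loc is not None:
                names.add(loc.address)
    return names
```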
20,019,891 | 2013-11-16T15:04:00.000 | 0 | 0 | 1 | 0 | python,zope | 20,815,604 | 1 | false | 0 | 0 | I ended up using virtualenv, Each Zope-Version residing in its own environment from which I can create the Zope- and ZEO-instances I need. | 1 | 0 | 0 | I'd like to install multiple Zope 2 versions on one of my servers. I have already done this with versions 2.8, 2.9 and 2.13, so I know that I have to take care of the different python versions.
Now in my case I'd like to set up 2.13.19 and 2.13.21. They can share the same python version with no problem. But it seems that easy_install won't let me install the newer version in addition to the older. Is it because they are so close?
Why would I want this? It's on a production server, I don't want to update the instances already running without testing that everything works fine. But I'd like to create new instances with the newest Zope version.
I didn't install Zope using virtualenv, so maybe that's the way to go? Can I use virtualenv in addition to a standard Python environment? Does it have any performance issues? | Installing multiple Zope versions | 0 | 0 | 0 | 107 |
20,020,409 | 2013-11-16T15:49:00.000 | 1 | 0 | 1 | 0 | python,sockets | 20,026,013 | 1 | true | 0 | 0 | The NUL-Byte (b'\0') has been and still is commonly used in binary protocols as separator or when transferring numbers (e.g.: 1 as a 32 bit integer is b'\x01\x00\x00\x00'). Its usage can therefor be considered completely safe with socket implementations on all platforms.
When encoding and decoding strings in Python 3 however, I'd recommend you insert those NUL-Bytes after encoding your string to bytes and stripping them (on the receiver side) before decoding your strings to Unicode. | 1 | 0 | 0 | I'm building up a nice application in python from the bottom up that allows encrypted communication between peers. This application able users to dynamically install new plugins and therefore new protocols.
How is '\0' used in common socket operations? Is it a good separator or should I use something else?
I would like to be able to manage my own socket code which prevents me from using libs that abstract those bytes constructions.
By the way I'm using Python 3 so all data sent or received is encoded. I'm using utf-8 by default. | Usage of '\0' in various protocols or How can i find a good separator? | 1.2 | 0 | 1 | 56 |
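A small sketch of NUL-delimited framing over TCP in Python 3, following the encode-then-delimit order recommended in the answer.

```python
def send_msg(sock, text):
    sock.sendall(text.encode("utf-8") + b"\0")     # encode first, then delimit

def recv_msgs(sock):
    buf = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break
        buf += chunk
        while b"\0" in buf:
            frame, buf = buf.split(b"\0", 1)       # the separator is removed by split
            yield frame.decode("utf-8")
```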
20,023,377 | 2013-11-16T20:19:00.000 | 2 | 0 | 0 | 0 | python,wxpython | 20,023,469 | 1 | true | 0 | 1 | Sounds strange. Try one of these...
Use wx.CallAfter(button.Disable) instead of button.Disable().
Call button.Refresh() after disabling it. | 1 | 1 | 0 | I have one button and when user clicks it I catch an event and in the handler function I call Disable() function on that button but it isnt greyed out... Only if I click on some other widget after that, button becomes greyed out... Anyone knows what could be the problem? | Button isnt greyed out when disabled | 1.2 | 0 | 0 | 737 |
20,024,964 | 2013-11-16T22:54:00.000 | 2 | 1 | 0 | 0 | php,python,pygame | 20,025,212 | 3 | false | 0 | 0 | Web Browser can display/run HTML+JavaScript+SVG+(HTML)Canvas or (Adobe) Flash. You can't use PyGame to create game running on page. PyGame doesn't generate HTML+JavaScript or Flash | 2 | 0 | 0 | What kind of server configurations would I need to do in order to run a Python + Pygame code on a website.
Is php able to call and execute the python on server side? Do I need to install python on the server? I believe python is installed, but do I need to install the pygame library? What if I compile my code, will I need to install pygame on the server after that?
Can some one provide me a general list of things I will need to complete in order to make this happen?
My case:
I would like to put a game I made with Python on a website. I see how the python can run on the server no problem, but how can I ensure the client be able to play the game on their browser? | Is it possible to run Python code on an Apache server | 0.132549 | 0 | 0 | 289 |
20,024,964 | 2013-11-16T22:54:00.000 | -2 | 1 | 0 | 0 | php,python,pygame | 20,025,127 | 3 | false | 0 | 0 | Yes, there is a mod like mod_php... The extension of files are
.psp
Python server pages | 2 | 0 | 0 | What kind of server configurations would I need to do in order to run a Python + Pygame code on a website.
Is php able to call and execute the python on server side? Do I need to install python on the server? I believe python is installed, but do I need to install the pygame library? What if I compile my code, will I need to install pygame on the server after that?
Can some one provide me a general list of things I will need to complete in order to make this happen?
My case:
I would like to put a game I made with Python on a website. I see how the python can run on the server no problem, but how can I ensure the client be able to play the game on their browser? | Is it possible to run Python code on an Apache server | -0.132549 | 0 | 0 | 289 |
20,025,784 | 2013-11-17T00:42:00.000 | 0 | 0 | 0 | 0 | python,scipy,vtk,mayavi,mplot3d | 55,741,552 | 3 | false | 0 | 0 | blender BPY could do this, so could BGE python.
blender.org | 1 | 5 | 0 | I have a set of 3D points which I've used scipy.spatial.Delaunay to do the triangulation / tetrahedralization. I now have a set of unique faces of all of the tetrahedra, and would like to visualize these in 3D.
Are there any Python libraries (or libraries with a Python wrapper) that can do this? | How to visualize 3D delaunay triangulation in Python? | 0 | 0 | 0 | 8,208 |
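One scipy plus matplotlib route, assuming the triangulation comes from scipy.spatial.Delaunay as in the question; for large meshes a dedicated tool such as Mayavi's triangular_mesh may scale better. This is a sketch with random points.

```python
import numpy as np
from scipy.spatial import Delaunay
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D                      # noqa: registers the 3d projection
from mpl_toolkits.mplot3d.art3d import Poly3DCollection

pts = np.random.rand(30, 3)
tet = Delaunay(pts)

faces = set()
for simplex in tet.simplices:                                # 4 triangular faces per tetrahedron
    for face in ((0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)):
        faces.add(tuple(sorted(simplex[list(face)])))

fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
ax.add_collection3d(Poly3DCollection([pts[list(f)] for f in faces],
                                     alpha=0.2, edgecolor="k"))
ax.set_xlim(0, 1); ax.set_ylim(0, 1); ax.set_zlim(0, 1)
plt.show()
```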
20,025,975 | 2013-11-17T01:07:00.000 | -1 | 0 | 1 | 0 | python,file | 20,026,012 | 3 | false | 0 | 0 | have you tried using regex?
I guess your code would reduce to very few lines if you used regex.
use findall("DIFFERENT REGULAR EXPRESSIONS") and store the values into list. Then you can count the length of the list. | 1 | 1 | 0 | My program has to do two things with this file.
It needs to print the following information: | Files in python | -0.066568 | 0 | 0 | 251 |
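Since the question text is cut off above, the patterns below are only generic examples of the findall-and-len idea from the answer (word and digit counts over a placeholder file).

```python
import re

with open("input.txt") as f:               # file name is a placeholder
    text = f.read()

words = re.findall(r"[A-Za-z']+", text)
digits = re.findall(r"\d", text)
print("lines (rough):", text.count("\n") + 1)
print("words:", len(words))
print("digits:", len(digits))
```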
20,026,876 | 2013-11-17T03:33:00.000 | 0 | 0 | 0 | 0 | python,qt,pyqt4,fadein | 20,217,728 | 1 | false | 0 | 1 | I did same thing with QLabels, QButtons and other Widgets. There are different solutions accordingly on what you need. In my case I just created a custom component with a QTimer and a QGraphicsOpacityEffect. The timer increases or decreases the opacity value (by a coefficient).. | 1 | 0 | 0 | I want to place group of buttons ontop of a QLabel showing image, the question is how do I animate the fade in visibility of the QButtonGroup, I want to place my buttons at the bottom area so whenever pointer is at the bottom area the button group should animate to fully visible but if I move the pointer out of the bottom area, the button group should animate to a gradual fade out. | animate visibility of QButtonGroup or layout containing it | 0 | 0 | 0 | 107 |
20,036,040 | 2013-11-17T20:51:00.000 | 0 | 0 | 0 | 1 | python,hadoop,cassandra,apache-pig,oozie | 22,040,536 | 1 | true | 0 | 0 | This is solved. Solutions..
1) Put the Python file in the Oozie workflow path and then reference it from there.
2) Added the Cassandra jar files to the lib folder in Oozie's HDFS path. | 1 | 1 | 0 | I am new to oozie and I have few problems.
1) I am trying to embed a Pig action in Oozie which has a Python script import. I've placed the jython.jar file in the lib path and have an import in the Pig script which will take the Python UDFs. I don't seem to get this working. The .py file is not getting picked up. How do I go about this?
2) I have a pig cassandra integration where in I use the cql to get the data from cassandra using pig and do some basic transformation. In the CLI i am able to get this working. But on the oozie front I am not. I don't seem to find the solution(configuration and others) to do this in oozie. Can anyone please help me with this? Thanks in advance. | Making pig embedded with python script and pig cassandra integration to work with oozie | 1.2 | 0 | 0 | 419 |
20,038,238 | 2013-11-18T00:48:00.000 | 1 | 0 | 0 | 0 | python,django,sockets,django-forms | 20,038,319 | 3 | false | 1 | 0 | Sockets are just one method of communicating between client and server, but not the method Django is using when processing a form submission. It is most likely sending a HTTP POST request with parameters that the server listens for. | 1 | 1 | 0 | i am learning socket programming and Django web developing. As what i know socket is used for the communication between client and server. Dose this mean that when using and submitting Form in Django, there will be a socket created and submitted to the server? | Difference between Socket Programming and Django Form submission | 0.066568 | 0 | 0 | 684 |
20,039,657 | 2013-11-18T03:36:00.000 | 0 | 0 | 0 | 0 | python,django,sockets,web,remote-server | 20,039,992 | 1 | true | 1 | 0 | You basically have one major thing to decide:
Is your embedded machine going to open up a port that allows anything that knows its IP and port details to control it, with your web page writing to that IP/port, OR
Is your embedded device going to poll the web app to find out which state it should be in?
The first option will normally respond faster but your embedded machine will be more vulnerable to outside interference, you will need to be able to run active code on your web server, which a lot of servers do not allow, etc.
The second will only respond to state changes at an average of half the polling rate but will probably be simpler to program and manage. Plus your server is not also acting as a client. | 1 | 0 | 0 | I am now using Django frame work to build a website which has the ability to control a remote embedded system (simply with functions like "Turn ON/OFF"). What i can image now is to use Socket Programming with Python (because Django is pure Python). As i have learnt, i only know how to send and receive messages with sockets between the client machine and server.
Can anyone tell me:
1. What else is needed to learn for this remote control function?
2. Or are there any better ways (better frameworks) to implement this?
3. Does Django have built-in methods for socket programming? (If not, is it possible to implement it with a self-defined app)? | Remote control of an embedded system from Website | 1.2 | 0 | 1 | 469
20,039,659 | 2013-11-18T03:37:00.000 | 11 | 0 | 1 | 0 | python,multiprocessing,cpu-cores | 20,039,847 | 4 | false | 0 | 0 | That is correct. If you have 4 cores then 4 processes can be running at once. Remember that you have system stuff that needs to go on, and it would be nice for you to define the process number to be number_of_cores - 1. This is a preference and not mandatory. For each process that you create there is overhead, so you are actually using more memory to do this. But if RAM isn't a problem then go for it. If you are running Cuda or some other GPU based library then you have a different paradigm, but that's for another question. | 1 | 52 | 0 | In using the Pool object from the multiprocessing module, is the number of processes limited by the number of CPU cores? E.g. if I have 4 cores, even if I create a Pool with 8 processes, only 4 will be running at one time? | Python multiprocessing's Pool process limit | 1 | 0 | 0 | 71,204 |
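A typical sizing pattern following the answer: one worker per core, or one fewer to leave headroom for the rest of the system. Extra workers beyond the core count just wait their turn.

```python
import multiprocessing

def work(n):
    return n * n

if __name__ == "__main__":
    n_cores = multiprocessing.cpu_count()
    pool = multiprocessing.Pool(processes=max(1, n_cores - 1))
    print(pool.map(work, range(10)))
    pool.close()
    pool.join()
```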
20,041,978 | 2013-11-18T07:04:00.000 | 0 | 0 | 1 | 0 | python,algorithm | 20,061,819 | 5 | false | 0 | 0 | There is far more efficient solution for this question if you only want to know how many times each digits (0 - 9) appear in number [1, N]. Brute force iteration is not neceesary. | 1 | 0 | 0 | Can I optimize the code? Number can be from 1 to 10**9.
I want to print how many number of time each digit from 0 to 9 occur in book from pages 0 to N.
Can I use map instead of second for loop in this case?
for i in range (1,number+1):
for dig in str(i):
dic[dig] = dic[dig]+1 | can i optimize my code further - reduce for loop | 0 | 0 | 0 | 105 |
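A sketch of the closed-form idea the last answer hints at: count how often a digit d (1 to 9) appears in 1..n by examining each decimal position, with no loop over every number. Digit 0 needs an extra correction for leading zeros, which is left out here.

```python
def count_digit(n, d):
    # Count occurrences of digit d (1-9) in the numbers 1..n.
    count, factor = 0, 1
    while factor <= n:
        higher = n // (factor * 10)       # digits above the current position
        cur = (n // factor) % 10          # digit at the current position
        lower = n % factor                # digits below the current position
        if cur > d:
            count += (higher + 1) * factor
        elif cur == d:
            count += higher * factor + lower + 1
        else:
            count += higher * factor
        factor *= 10
    return count

print(count_digit(13, 1))   # 6 occurrences of '1' in 1..13
```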
20,042,386 | 2013-11-18T07:33:00.000 | 0 | 0 | 0 | 0 | python,web2py | 20,102,474 | 1 | true | 1 | 0 | When entering a record in the db.auth_permission table, there is a table field in addition to the name field (note, table doesn't have to refer to a database table -- it can be any name representing any type of object). When a permission is checked, both the name and the table must match.
If you insert a permission record via the appadmin interface or using auth.add_permission(1, 'search'), then the table field will be set to an empty string (i.e., ''). This will work, because when you do @auth.requires_permission('search'), that is equivalent to @auth.requires_permission('search', ''), which matches the empty string in the permission record.
However, if you insert via db.auth_permission.insert(group_id=1, name='search'), then the table field will be set to None, which will not match the empty string when you check the permission. | 1 | 0 | 0 | I have an application where I am using web2py's access control system. I have various users in auth_user table. To be more specific I have [email protected] user whose id is 4. There is a group admin with group id 1 in auth_group table. In auth_membership table I have user_id 4 related to group_id 1. It means that [email protected] is a member of admin group. Finally in auth_permission table I have a record which relates group_id 1 with permission named search. It means that admin group has search permission. I have a controller in which I have index method with decorator as @auth.requires_permission('search'). I am logging in with [email protected] and getting to this controller method. But this condition evaluates to false and control doesn't go inside method. However if I replace this decorator with one @auth.requires_login() it works. But I want only users with search permission to get access to this method. Please help me to achieve this. | @auth.requires_permission not working | 1.2 | 0 | 0 | 174 |
20,042,607 | 2013-11-18T07:49:00.000 | 0 | 1 | 1 | 0 | python,ruby,scripting,scheme,lisp | 20,050,698 | 5 | false | 0 | 0 | WRT to your tasks, what about using Emacs, which comes with an interactive Python-shell. So you have the convenience of editing alongside with running scripts. | 2 | 10 | 0 | Is it feasible to script in a Lisp, as opposed to Ruby/Python/Perl/(insert accepted scripting language)? By this I mean do things like file processing (open a text file, count the number of words, return the nth line), string processing (reverse, split, slice, remove punctuation), prototyping/quick computations, and other things you would normally use Python, etc. for. How productive would doing such tasks in a Lisp be, as opposed to Ruby/Python/Perl/scripting language of choice?
I ask because I want to learn a Lisp but also use it to do something instead of only learning it for the sake of it. I looked around, but couldn't find much information about scripting in a Lisp. If it is feasible, what would be a good implementation?
Thank you! | Is it feasible to use Lisp/Scheme as a scripting language? | 0 | 0 | 0 | 8,065 |
20,042,607 | 2013-11-18T07:49:00.000 | 1 | 1 | 1 | 0 | python,ruby,scripting,scheme,lisp | 20,042,909 | 5 | false | 0 | 0 | I'd say that Lisp/Scheme could be used to write small scripts or big application. But they are not yet ready for wide use.
The big difference between python/ruby and scheme is that python has a huge library of modules centralized in one place. Ruby is quite similar to python with ruby gems.
Scheme on the other hand might have a small library of modules scattered accross the internet. The quality of modules doesn't always compare to the popular modules in python and ruby.
One could say that they are aiming at different goals, but I'd say Scheme just got old and people started to forget about it and how it could be used as a tool instead of just a school subject.
About Lisp, I can't really say. But from your description, it's possible to write scripts that you'd like to write but if you need something specific it's possible that it's not there and you'll have to rewrite it yourself.
All I can say, is jump in. And become someone who gives a future to this language. Don't be scared. This language has a bright future and you'll learn a lot from it. | 2 | 10 | 0 | Is it feasible to script in a Lisp, as opposed to Ruby/Python/Perl/(insert accepted scripting language)? By this I mean do things like file processing (open a text file, count the number of words, return the nth line), string processing (reverse, split, slice, remove punctuation), prototyping/quick computations, and other things you would normally use Python, etc. for. How productive would doing such tasks in a Lisp be, as opposed to Ruby/Python/Perl/scripting language of choice?
I ask because I want to learn a Lisp but also use it to do something instead of only learning it for the sake of it. I looked around, but couldn't find much information about scripting in a Lisp. If it is feasible, what would be a good implementation?
Thank you! | Is it feasible to use Lisp/Scheme as a scripting language? | 0.039979 | 0 | 0 | 8,065 |
20,043,841 | 2013-11-18T09:07:00.000 | 1 | 0 | 0 | 1 | python,virtualenv | 20,043,992 | 1 | false | 0 | 0 | What did you expect, exactly? Virtualenv creates a sandboxed Python environment with binaries etc. for the platform on which it's created - it doesn't automagically make the binaries platform-independent... | 1 | 0 | 0 | Here is the example:
centos:(build a virtualen)
$ virtualenv tenv
ubuntu:(active it)
$ . tenv/bin/activate
$ python
Could not find platform independent libraries
Could not find platform dependent libraries
Consider setting $PYTHONHOME to [:]
ImportError: No module named site
In turn:
ubuntu:
$ virtualenv ttenv
centos:
$ . ttenv/bin/activate
$ python
ttenv/bin/python: /usr/lib64/libcrypto.so.1.0.0: no version information available (required by ttenv/bin/python)
ttenv/bin/python: /lib64/libc.so.6: version `GLIBC_2.14' not found (required by ttenv/bin/python)
ttenv/bin/python: /lib64/libc.so.6: version `GLIBC_2.15' not found (required by ttenv/bin/python)
ttenv/bin/python: /usr/lib64/libssl.so.1.0.0: no version information available (required by ttenv/bin/python) | virtualenv can not work in centos and ubuntu | 0.197375 | 0 | 0 | 804 |
20,045,535 | 2013-11-18T10:30:00.000 | 3 | 0 | 0 | 0 | python,numpy,hdf5,pytables | 20,099,740 | 5 | true | 0 | 0 | This might not, in fact, be possible to do in a naive way. HDF5 allocates 64 kb of space for meta-data for every data set. This meta data includes the types of the columns. So while the number of columns is a soft limit, somewhere in the 2-3 thousand range you typically run out of space to store the meta data (depending on the length of the column names, etc).
Furthermore, doesn't numpy limit the number of columns to 32? How are you representing the data with numpy now? Anything that you can get into a numpy array should correspond to a pytables Array class. | 3 | 11 | 1 | I have data coming from a csv which has a few thousand columns and ten thousand (or so) rows. Within each column the data is of the same type, but different columns have data of different type*. Previously I have been pickling the data from numpy and storing on disk, but it's quite slow, especially because usually I want to load some subset of the columns rather than all of them.
I want to put the data into hdf5 using pytables, and my first approach was to put the data in a single table, with one hdf5 column per csv column. Unfortunately this didn't work, I assume because of the 512 (soft) column limit.
What is a sensible way to store this data?
* I mean, the type of the data after it has been converted from text. | How to store wide tables in pytables / hdf5 | 1.2 | 0 | 0 | 2,271 |
20,045,535 | 2013-11-18T10:30:00.000 | 1 | 0 | 0 | 0 | python,numpy,hdf5,pytables | 20,155,746 | 5 | false | 0 | 0 | you should be able to use pandas dataframe
it can be saved to disk without converting to csv | 3 | 11 | 1 | I have data coming from a csv which has a few thousand columns and ten thousand (or so) rows. Within each column the data is of the same type, but different columns have data of different type*. Previously I have been pickling the data from numpy and storing on disk, but it's quite slow, especially because usually I want to load some subset of the columns rather than all of them.
I want to put the data into hdf5 using pytables, and my first approach was to put the data in a single table, with one hdf5 column per csv column. Unfortunately this didn't work, I assume because of the 512 (soft) column limit.
What is a sensible way to store this data?
* I mean, the type of the data after it has been converted from text. | How to store wide tables in pytables / hdf5 | 0.039979 | 0 | 0 | 2,271 |
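A hedged sketch of the pandas suggestion above: a wide, mixed-type DataFrame written to HDF5 (pandas uses PyTables underneath) and a column subset read back. The file name and column names are made up; format='table' is what makes the column selection possible, and PyTables must be installed.
import numpy as np
import pandas as pd
df = pd.DataFrame({
    'id': np.arange(10000),
    'label': ['row%d' % i for i in range(10000)],
    'value': np.random.rand(10000),
})
df.to_hdf('wide.h5', key='data', format='table')   # writes through PyTables
subset = pd.read_hdf('wide.h5', key='data', columns=['id', 'value'])
print(subset.dtypes)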
20,045,535 | 2013-11-18T10:30:00.000 | 1 | 0 | 0 | 0 | python,numpy,hdf5,pytables | 20,240,079 | 5 | false | 0 | 0 | IMHO it depends on what you want to do with the data afterwards and how much of it you need at one time. I had to build a program for statistical validation a while ago and we had two approaches:
Split the columns in separate tables (e.g. using a FK). The overhead of loading them is not too high
Transpose the table, resulting in something like a key-value store, where the key is a tuple of (column, row)
For both we used postgres. | 3 | 11 | 1 | I have data coming from a csv which has a few thousand columns and ten thousand (or so) rows. Within each column the data is of the same type, but different columns have data of different type*. Previously I have been pickling the data from numpy and storing on disk, but it's quite slow, especially because usually I want to load some subset of the columns rather than all of them.
I want to put the data into hdf5 using pytables, and my first approach was to put the data in a single table, with one hdf5 column per csv column. Unfortunately this didn't work, I assume because of the 512 (soft) column limit.
What is a sensible way to store this data?
* I mean, the type of the data after it has been converted from text. | How to store wide tables in pytables / hdf5 | 0.039979 | 0 | 0 | 2,271 |
20,048,986 | 2013-11-18T13:27:00.000 | 2 | 0 | 1 | 0 | python,rounding | 20,049,872 | 2 | false | 0 | 0 | It helps to know that anything to the power of 0 equals 1. As ndigits increases, the function:
f(ndigits) = 10^(-ndigits) gets smaller as ndigits increases. Specifically, each time you increase ndigits by 1 you shift the decimal place of precision one position to the left, e.g. 10^-0 = 1, 10^-1 = 0.1 and 10^-2 = 0.01. The place where the 1 sits in that number is the last point of precision for round.
For the part where it says
For the built-in types supporting round(), values are rounded to the
closest multiple of 10 to the power minus ndigits; if two multiples
are equally close, rounding is done toward the even choice (so, for
example, both round(0.5) and round(-0.5) are 0, and round(1.5) is 2).
This has unexpected behavior in Python 3 and it will not work for all floats. Consider the example you gave, round(123.455, 2) yields the value 123.45. This is not expected behavior because the closest even multiple of 10^-2 is 123.46, not 123.45!
To understand this, you have to pay special attention to the note below this:
Note The behavior of round() for floats can be surprising: for
example, round(2.675, 2) gives 2.67 instead of the expected 2.68. This
is not a bug: it’s a result of the fact that most decimal fractions
can’t be represented exactly as a float.
And that is why certain floats will round to the "wrong value" and there is really no easy workaround as far as I am aware. (sadface) You could use fractions (i.e. two variables representing the numerator and the denominator) to represent floats in a custom round function if you want to get different behavior than the unpredictable behavior for floats. | 1 | 3 | 0 | Sorry, but I really don't know what's the meaning of the defination of round in python 3.3.2 doc:
round(number[, ndigits])
Return the floating point value number rounded to ndigits digits after the decimal point. If ndigits is omitted, it defaults to zero. Delegates to number.__round__(ndigits).
For the built-in types supporting round(), values are rounded to the closest multiple of 10 to the power minus ndigits if two multiples are equally close, rounding is done toward the even choice (so, for example, both round(0.5) and round(-0.5) are 0, and round(1.5) is 2). The return value is an integer if called with one argument, otherwise of the same type as number.
I don't know how come the multiple of 10 and pow.
After reading the following examples, I think round(number,n) works like:
if let number be 123.456, let n be 2
round will get two numbers: 123.45 and 123.46
round compares abs(number-123.45) (0.006) and abs(number-123.46) (0.004), and chooses the smaller one.
so, 123.46 is the result.
and if let number be 123.455, let n be 2:
round will get two numbers: 123.45 and 123.46
round compares abs(number-123.45) (0.005) and abs(number-123.46) (0.005). They are equal. So round checks the last digit of 123.45 and 123.46. The even one is the result.
so, the result is 123.46
Am I right?
If not, could you offer an understandable version of "values are rounded to the closest multiple of 10 to the power minus ndigits"? | python 3.3.2 do I get the right understanding of the function "round"? | 0.197375 | 0 | 0 | 675
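A few concrete Python 3 checks of the behaviour discussed above. The decimal example is an extra suggestion (not part of the original answer) for when exact half-even rounding of decimal values is required.
print(round(0.5), round(-0.5), round(1.5))   # 0 0 2 -- ties go to the even choice
print(round(123.455, 2))                     # 123.45 -- the stored float is slightly
                                             # below 123.455, so there is no real tie
from decimal import Decimal
print(Decimal('123.455').quantize(Decimal('0.01')))   # 123.46 -- exact decimal value,
                                                      # default ROUND_HALF_EVEN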
20,049,165 | 2013-11-18T13:35:00.000 | 0 | 1 | 0 | 0 | python,translation,gettext | 20,075,524 | 2 | false | 0 | 0 | You could also just host the po-files on a shared drive and check for updates to the files. | 2 | 0 | 0 | I have two systems:
System A:
This will show translated system with ugettext from mo-files.
System B:
This will handle the po-files and translate the content.
The two systems are on different machines, but on the same server node.
The mo-translations are cached, so once read they will not be requested again.
I'm looking for a good solution on how I can solve this.
Update:
I need a good way to get these two systems to work together. | Is it possible to have the translation-system and use-system on two different machines? | 0 | 0 | 0 | 61
20,049,165 | 2013-11-18T13:35:00.000 | 1 | 1 | 0 | 0 | python,translation,gettext | 20,067,415 | 2 | true | 0 | 0 | Simply create an API (via JSON-RPC, XML-RPC, SOAP, CORBA, DCOM, smoke signals, string and tin cans, it doesn't fricking matter...) that allows the client to specify the original string, language, count, and context, and have the server perform the translation and return the translated string.
If the translation API reflects the gettext API then it could be used as a drop-in replacement for the gettext module and the client would not require any recoding except possibly to specify the server. | 2 | 0 | 0 | I have two system:
System A:
This will show translated system with ugettext from mo-files.
System B:
This will handle the po-files and translate the content.
The two systems are on different machines, but on the same server node.
The mo-translations are cached, so once read they will not be requested again.
I'm looking for a good solution on how I can solve this.
Update:
I need a good way to get these two systems to work together. | Is it possible to have the translation-system and use-system on two different machines? | 1.2 | 0 | 0 | 61
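A hedged sketch of the API idea in the accepted answer: System B exposes a gettext-style call over XML-RPC and System A calls it instead of reading .mo files locally. The locale directory, domain and port are made up; Python 3 module names are used.
import gettext
from xmlrpc.server import SimpleXMLRPCServer
LOCALE_DIR = '/srv/translations/locale'   # assumption: where System B keeps the compiled .mo files
def remote_gettext(lang, message):
    trans = gettext.translation('myapp', LOCALE_DIR, languages=[lang], fallback=True)
    return trans.gettext(message)
server = SimpleXMLRPCServer(('0.0.0.0', 8400), allow_none=True)
server.register_function(remote_gettext, 'gettext')
server.serve_forever()
# On System A the client side is roughly:
#   from xmlrpc.client import ServerProxy
#   _ = lambda msg: ServerProxy('http://system-b:8400/').gettext('sv', msg)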
20,049,928 | 2013-11-18T14:16:00.000 | 0 | 1 | 0 | 0 | python,web2py,filezilla,ftplib | 29,468,765 | 2 | false | 0 | 0 | When I want to change a file's modification time, I use an FTP client on the console.
Log on to the remote FTP server, e.g. ftp ftp.dic.com
Use cd commands to go to the correct directory
Use the SITE command to send an extended command:
UTIME somefile.txt 20050101123000 20050101123000 20050101123000 UTC
This sets the access time, modification time and creation time of somefile.txt to 2005-01-01 12:30:00
Complete example:
site UTIME somefile.txt 20150331122000 20150331122000 20150331122000 UTC
Of course, you can use this command in any FTP client. | 1 | 3 | 0 | Good Day!
How do I get the creation date of a file via FTP?
I'm using web2py, Python, ftplib and FileZilla as the FTP server. I can get the modified date via f.sendcmd('MDTM '+filename).
Any suggestions? Thanks! | Created Date of file via ftp | 0 | 0 | 0 | 7,167 |
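For completeness, a hedged ftplib sketch: FTP has no portable "creation date" command, so MDTM (modification time) is usually the best you can query, and SITE UTIME for setting times is a server-specific extension that may be rejected. Host and credentials are made up.
from ftplib import FTP
ftp = FTP('ftp.example.com')
ftp.login('user', 'password')
print(ftp.sendcmd('MDTM somefile.txt'))   # e.g. '213 20131118141600'
try:
    print(ftp.sendcmd('SITE UTIME somefile.txt 20150331122000 20150331122000 20150331122000 UTC'))
except Exception as exc:
    print('server does not support SITE UTIME:', exc)
ftp.quit()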
20,051,289 | 2013-11-18T15:18:00.000 | 2 | 1 | 0 | 0 | python,emacs,remote-server | 20,052,642 | 2 | false | 0 | 0 | Consider using TRAMP (built in to Emacs). It allows you to edit remote files using a local Emacs instance (by default it uses scp to fetch files and pushes the edits back again). | 1 | 1 | 0 | I usually work on a remote server.
I am currently using emacs for editing. When I open a particular file, for example "test.py", I don't get automatic indentation or syntax colouring for Python keywords, functions, etc. Is there any solution for that?
Moreover, I love to use TextWrangler, but I can't open files in this editor from my remote host. Is there a way to open and edit files from TextWrangler? | Edit options while using remote server | 0.197375 | 0 | 0 | 79
20,052,953 | 2013-11-18T16:37:00.000 | 0 | 0 | 0 | 0 | python,qt,user-interface,python-2.4 | 20,053,291 | 2 | false | 0 | 1 | Qt.qApp or QApplication.instance() yields the QApplication object which has a processEvents that you can call. | 1 | 1 | 0 | I am using Python 2.4.3 and Qt Designer to make a GUI. When I press one of my buttons it goes off and does several serial processes. After each process I want to update the user by outputting text to the GUI, however, that text doesn't come out until I have completed all my processes. I have seen other questions regarding this same issue where processEvents() is recommended. Dumb question, what module do I have to import to get the processEvents() function that will make this work or is there one for Python 2.4.3? I am running on a Red Hat Linux machine. Thanks in advance. | Interrupting process in Qt to update GUI | 0 | 0 | 0 | 235 |
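A hedged sketch of the answer above using PyQt4 names (older bindings are similar); the label and worker function are made up. Calling processEvents() after each status update lets the GUI repaint in the middle of a long task.
from PyQt4 import QtGui
def run_serial_steps(self):
    for step in ('step 1', 'step 2', 'step 3'):
        do_long_work(step)                      # placeholder for one of your serial processes
        self.statusLabel.setText('finished ' + step)
        QtGui.QApplication.processEvents()      # flush pending paint/update events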
20,055,758 | 2013-11-18T19:05:00.000 | 0 | 1 | 0 | 0 | python,boost,converters | 20,058,784 | 1 | false | 0 | 0 | I found the problem... The prototype of my C++ function was taking cv::Mat& as argument and the converter was registered for cv::Mat without reference.
That was silly. | 1 | 0 | 1 | The title may not be as explicit as I wish it would be but here is what I am trying to achieve:
Using Boost.Python, I expose a set of class/functions to Python in the typical BOOST_PYTHON_MODULE(MyPythonModule) macro from C++ that produces MyPythonModule.pyd after compilation. I can now invoke a python script from C++ and play around with MyPythonModule without any issue (eg. create objects, call methods and use my registered converters). FYI: the converter I'm refering to is a numpy.ndarray to cv::Mat converter.
This works fine, but when I try to write a standalone Python script that uses MyPythonModule, my converters are not available. I tried to expose the C++ method that performs the converter registration to Python without any luck.
If my explanation isn't clear enough, don't hesitate to ask questions in the comments.
Thanks a lot for your help / suggestions. | Boost.Python: Converters unavailable from standalone python script | 0 | 0 | 0 | 84 |
20,056,399 | 2013-11-18T19:45:00.000 | 0 | 0 | 0 | 0 | python,django,nginx,celery,celerybeat | 20,312,840 | 1 | true | 1 | 0 | It turned out I had used a different version of Django on my remote server.
In Celery 3.1, there is no command named celeryd. | 1 | 0 | 0 | I tested my project in my local machine, and it worked fine. But after uploading to a remote server(CentOS), I cannot execute celerybeat.
Here is my command.
python manage.py celeryd --events --loglevel=INFO -c 5 --settings=[settings-directory].production
This command works in the local machine(with --settings=[settings-directory].local), but in the remote server, ImportError: cannot import name celeryd occured.
The Celery settings are in base.py; local.py and production.py import that file. In production.py there are just DEBUG, static and database settings.
I can import djcelery and celery in the shell of the remote machine.
How could I solve this?
--
I think this is a version problem.. I'm reading about celery3.1 | Nginx(Django) ImportError: cannot import name celeryd | 1.2 | 0 | 0 | 700 |
20,058,464 | 2013-11-18T21:38:00.000 | 6 | 1 | 0 | 1 | python,multithreading,deployment,process,uwsgi | 20,062,339 | 1 | true | 0 | 0 | Python's native multithreading is affected by GIL limitations. Simply put, only one Python thread at a time is physically executed. An exception to this is blocking IO calls (e.g. a DB query), which let other Python threads take over and may increase the performance of IO-bound operations.
So the real performance gain would only be possible if your application is mostly IO-bound. However, in this case you should consider making the app asynchronous, which uWSGI also supports.
Otherwise you should keep your app single-threaded and use multiprocess uWSGI to scale up. | 1 | 1 | 0 | If I'm performing blocking operations like querying a database, then what is the advantage? How does this add extra worthwhile capacity? | What's the advantage of running multiple threads per UWSGI process? | 1.2 | 0 | 0 | 1,418 |
20,059,222 | 2013-11-18T22:24:00.000 | -1 | 0 | 1 | 0 | python | 20,059,271 | 3 | false | 0 | 0 | All instantiated objects (I'm assuming just for Python itself in a single module):
globals().values().
For all of these that are instances only of a particular class:
filter(lambda x: isinstance(x, some_class), globals().keys()). | 1 | 7 | 0 | I have a long running processes which may have a resource leak. How can I obtain a list of all instantiated objects (possibly only of a particular class) in my environment? | How do I list all instantiated objects in Python? | -0.066568 | 0 | 0 | 1,131 |
20,060,354 | 2013-11-18T23:44:00.000 | 2 | 0 | 0 | 0 | python,dropbox-api | 43,394,177 | 2 | false | 0 | 0 | You can't do it directly, but you can do it indirectly (sort of). What I do is place a dummy file in my Dropbox and have it sync to the server. At the very beginning of my script I delete the file locally and then wait for it to reappear. Once it has reappeared I have a good idea that Dropbox is connected and synced. It's not perfect because more files might be waiting to be synced, but that's the best I could come up with. | 2 | 1 | 0 | I'd like to check the status, e.g. How many files left to upload. of a Dropbox account using the API in python. Is this possible? | How do I check the sync status of a Dropbox account using the API? | 0.197375 | 0 | 1 | 845 |
20,060,354 | 2013-11-18T23:44:00.000 | 1 | 0 | 0 | 0 | python,dropbox-api | 20,060,938 | 2 | true | 0 | 0 | The Dropbox API is a way to add Dropbox integration to your application (that is, to allow your application to sync its files via Dropbox, or to explicitly upload/download files outside of the sync process), not a way to control or monitor the Dropbox desktop application (or the way other applications sync, or anything else).
So no, it's not possible.
The only way to do this is to write code for each platform to control the Dropbox app, e.g., via UI scripting on the Mac or intercepting WM messages on Windows. (Or, alternatively, it might be possible to write your own replacement sync tool and use that instead of the standard desktop app, in which case you can obviously monitor what you're doing.) | 2 | 1 | 0 | I'd like to check the status, e.g. How many files left to upload. of a Dropbox account using the API in python. Is this possible? | How do I check the sync status of a Dropbox account using the API? | 1.2 | 0 | 1 | 845 |
20,060,481 | 2013-11-18T23:57:00.000 | 3 | 0 | 0 | 0 | android,python,pygame,kivy,sl4a | 20,061,233 | 1 | false | 0 | 1 | I have searched to the point of confusion, so I need some guidance. I want to make a game on Android with Python - ONLY using my android device.
As others have said, this is somewhere between 'really hard and annoying' and 'impossible'. Especially without a computer to do any of the process!
I'm confused in the difference between Kivy and Sl4a
Kivy is a cross-platform (linux, windows, osx, android, ios, maybe more) graphical framework for python. The same developers maintain a python-for-android project that lets you very easily compile a kivy program to an android apk. You can also do java api interaction etc. using the pyjnius project, which is also maintained by the same devs, and some apis (vibrate, accelerometer etc.) are already abstracted as a python module so you don't have to touch java.
sl4a was originally (I think) a way to run python scripts on android. It has its own way to do some stuff with the android apis, but I don't know the details or what is possible. There are also some ways to package as an apk or to do some kinds of graphical work, but I'm not familiar with this either - I think they're much more limited as a graphical framework than kivy is (not that it sets out to be a full framework in the same way), but I don't know much about it, and at the very least the graphical stuff works in a quite different way that has advantages of its own.
(Edit: Notice all the 'I think' in the previous paragraph? That's because I really don't know for sure and don't want to say something wrong. Don't take my word for it, try it!)
Overall, kivy and sl4a (plus both of their related projects) are separate projects, with different focuses and technical capabilities. I personally think kivy is a more obvious choice for purposes other than basic scripting (though even simple sl4a scripts are useful to make tasker scripts etc.), but while some of kivy's advantages are arguably objective, some of my opinion is subjective.
what steps i need to take in order to be able to program and run my game on my phone
This is really a big topic on its own. Already knowing kivy, I reckon I could throw together a process to do it, but I'd absolutely not want to because it would need a horribly painful mishmash of other tools interacting in ways that are not a good user experience. In essence, I'd use text editors to create android files to run with kivy's interactive launcher (which is on the play store), and can probably in principle compile to an apk using kivy's online buildozer tools. However, I'll really stress that this is only in principle possible and I can't recommend trying - I think android really does not have a good set of tools for general purpose programming of this sort, and the os doesn't fit well with the multitasking of coding.
If you just want to write scripts and run them, you may have more luck. You can look at apps like qpython and codepad2 lite, along with the sl4a stuff (and probably other apps, these are just a couple I've seen or tried recently) for apps that can let you edit and run these kinds of scripts. This might be usable for certain things, but even then I don't think it would be a fun experience if you also need to switch between reading docs in a different app etc.
So overall...certain things are possible, but building full apps with (say) kivy is not likely to be an easy or pleasant experience with the current tools. Since you say you're constrained by circumstance and not choice, I suggest playing with qpython etc. and seeing what happens, but you aren't missing some fabulous ide that takes all the pain away. | 1 | 0 | 0 | I have searched to the point of confusion, so I need some guidance. I want to make a game on Android with Python - ONLY using my android device.
I'm confused in the difference between Kivy and Sl4a and what steps i need to take in order to be able to program and run my game on my phone. I seem to only be able to find outdated or misleading information, so i apologize if this is simple.
Any guidance is much appreciated. Thanks! | Python programming from my phone? | 0.53705 | 0 | 0 | 2,130 |
20,062,512 | 2013-11-19T03:21:00.000 | 1 | 0 | 0 | 0 | python,mayavi | 20,076,370 | 2 | false | 0 | 0 | What you are expecting to be able to do from your matplotlib experience is not how mayavi axes work. In matplotlib the visualization is a child of the axes and the axes determines its coordinates. In mayavi or vtk, visualization sources consist of points in space. Axes are objects that surround a source and provide tick markings of the coordinate extent of those objects, that are not necessary for the visualizations, and where they exist they are children of sources. | 1 | 0 | 1 | Is there a way of a procedure similar to plt.gca() to get a handle to the current axes. I first do a=mlab.surf(x, y, u2,warp_scale='auto')
and then
b=mlab.plot3d(yy, yy, (yy-40)**2 ,tube_radius=20.0)
but the origin of a and b are different and the plot looks incorrect. So I want to put b into the axes of a
In short, what would be the best way in mayavi to draw a surface and a line on same axes? | mayavi mlab get current axes | 0.099668 | 0 | 0 | 501 |
20,064,915 | 2013-11-19T06:43:00.000 | 2 | 0 | 1 | 1 | python,google-app-engine | 20,065,052 | 1 | false | 1 | 0 | You can define the dict with in a module, then import it where ever you wish to refer to it, or you could load it from the datastore, and set the value in the module. You would do this during a warmup request.
Defining it in a module, means to alter the contents will require de-deploying the app.
Defining it in the datastore, means instances will reload any new definition on startup.
You could also set up a handler which could trigger a refresh if reading from the datastore.
Defining directly in the datastore means its pickled state needs to be less than 1MB (compressed) if you use a BlobProperty with compressed=True and your using ndb.
Other variations similiar to module definition would be to load it from a yaml file etc.. You could define the dict in the app.yaml as an environment variable.
There are many options, without knowing the specifics of your use cases it's hard to recommend a particular strategy. | 1 | 1 | 0 | I want to create global scope constant dict, that would be accessed by multiple views.
For now I see scenario after deploy:
Fetching big file, creating a dict, holding this dict in memory. This process can be re-executed by administrator. | Google App Engine. How to create constant in application scope? | 0.379949 | 0 | 0 | 154 |
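A hedged sketch of the module-level approach described in the answer; the file name and loader are made up. On App Engine the module is loaded once per instance, ideally during a warmup request, and every view that imports it sees the same dict.
# constants.py
_BIG_DICT = None
def get_constants():
    # Build the dict lazily and keep it in instance memory afterwards.
    global _BIG_DICT
    if _BIG_DICT is None:
        _BIG_DICT = _load_from_source()   # e.g. parse the big file or a datastore entity
    return _BIG_DICT
def _load_from_source():
    return {'example_key': 'example_value'}   # placeholder for the real fetch/parse step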
20,064,975 | 2013-11-19T06:47:00.000 | 1 | 0 | 0 | 0 | python,pyqt,qtablewidget,mousehover,qtablewidgetitem | 20,068,068 | 4 | false | 0 | 1 | There are no events based on QTableWidgetItem, but you can do this:
reimplement the mouseMoveEvent() of QTableWidget, you can get the mouse position;
use itemAt() method to get the item under your mouse cursor;
customize your item;
This should simulate what you want. | 1 | 1 | 0 | What I want to do is change the color of a QTableWidget item when I hover with the mouse over that item of my QTableWidget. | How to catch mouse over event of QTableWidget item in pyqt? | 0.049958 | 0 | 0 | 13,694
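A hedged PyQt4 sketch of the three steps in the answer: enable mouse tracking, reimplement mouseMoveEvent(), look the item up with itemAt() and restyle it. Colours and the class name are made up.
from PyQt4 import QtGui
class HoverTable(QtGui.QTableWidget):
    def __init__(self, *args):
        QtGui.QTableWidget.__init__(self, *args)
        self.setMouseTracking(True)              # deliver move events without a pressed button
        self.viewport().setMouseTracking(True)
        self._last_item = None
    def mouseMoveEvent(self, event):
        item = self.itemAt(event.pos())
        if item is not self._last_item:
            if self._last_item is not None:
                self._last_item.setBackground(QtGui.QBrush(QtGui.QColor('white')))
            if item is not None:
                item.setBackground(QtGui.QBrush(QtGui.QColor('yellow')))
            self._last_item = item
        QtGui.QTableWidget.mouseMoveEvent(self, event)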
20,066,131 | 2013-11-19T07:59:00.000 | 0 | 0 | 0 | 1 | python,windows,remote-access,administrator | 20,067,048 | 1 | true | 0 | 0 | I am no sysadmin, but simply trying to mount the C drive ( \\hostname\C$ ) via Samba/SMB should work. This assumes that remote sharing and filesystem access are enabled on that box and that a firewall rule is set up to allow remote connections. | 1 | 0 | 0 | I have a network of end-user machines (Windows, Linux, MacOS) and I want to check whether the credentials I have allow me to access the machines as administrator (I am checking the "here are the admin credentials to the machines" vs. reality).
I wrote a Python script (it runs on Linux) which
runs nmap -O on the network to gather the hosts
tries to ssh with paramiko to check the Linux credentials.
I would like to do a similar check for the Windows machines. What would be a practical way, in Python, to do so?
I have a few sets of credentials (AD or local to a machine) so I would need a somehow universal method. I was thinking about something like a call to _winreg.ConnectRegistry but it does not import on my Linux (it does on a Windows box). | How to check from Linux in Python for administrative access to a Windows machine | 1.2 | 0 | 0 | 111 |
20,068,092 | 2013-11-19T09:45:00.000 | 1 | 0 | 0 | 0 | python,swig | 20,068,971 | 2 | true | 0 | 0 | Normally you should have generated _example.so and example.py, and you need to distribute both. If you are concerned about exposing the sources - do not worry, example.py contains only the wrapper code translating Python calls into calls to the shared library. | 1 | 0 | 0 | I have generated a custom library for Python using SWIG and I want to use that library somewhere else (without the source files). Should I copy the .so file to that place, or is there any other way?
SWIG generated one .so file (say _example.so). In that particular folder I can do import example, but if I try the same in any other folder it throws 'ImportError: no module named example'. | How to use custom python library? | 1.2 | 0 | 1 | 68
20,072,309 | 2013-11-19T13:03:00.000 | 1 | 0 | 0 | 0 | python,sql,flask,flask-sqlalchemy | 20,081,554 | 2 | false | 1 | 0 | The easiest way is to do the random number generation in javascript at the client end...
Tell the client what the highest number row is, then the client page keeps track of which ids it has requested (just a simple js array). Then when the "request next random page" button is clicked, it generates a new random number less than the highest valid row id, and providing that the number isn't in its list of previously viewed items, it will send a request for that item.
This way, you (on the server) only have to have 2 database accessing views:
main page (which gives the js, and the highest valid row id)
display an item (by id)
You don't have any complex session tracking, and the user's browser is only having to keep track of a simple list of numbers, which even if they personally view several thousand different items is still only going to be a meg or two of memory.
For performance reasons, you can even pre-fetch the next item as soon as the current item loads, so that it displays instantly and loads the next one in the background while they're looking at it. (jQuery .load() is your friend :-) )
If you expect a large number of items to be removed from the database (so that the highest number is not helpful), then you can instead generate a list of random ids, send that, and then request them one at a time. Pre-generate the random list, as it were.
Hope this helps! :-) | 1 | 1 | 0 | I'm working on a web app in Python (Flask) that, essentially, shows the user information from a PostgreSQL database (via Flask-SQLAlchemy) in a random order, with each set of information being shown on one page. Hitting a Next button will direct the user to the next set of data by replacing all data on the page with new data, and so on.
My conundrum comes with making the presentation truly random - not showing the user the same information twice by remembering what they've seen and not showing them those already seen sets of data again.
The site has no user system, and the "already seen" sets of data should be forgotten when they close the tab/window or navigate away.
I should also add that I'm a total newbie to SQL in general.
What is the best way to do this? | Best way to show a user random data from an SQL database? | 0.099668 | 1 | 0 | 512 |
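A hedged Flask sketch of the two server-side endpoints the answer describes (highest id, and one item by id); the model and field names (Item, Item.id, Item.text) and the import path are assumptions about your schema.
from flask import Flask, jsonify, abort
from yourapp.models import db, Item    # assumption: your Flask-SQLAlchemy setup
app = Flask(__name__)
@app.route('/max-id')
def max_id():
    highest = db.session.query(db.func.max(Item.id)).scalar()
    return jsonify(max_id=highest or 0)
@app.route('/item/<int:item_id>')
def item(item_id):
    row = Item.query.get(item_id)
    if row is None:
        abort(404)        # deleted id -- the client simply picks another random number
    return jsonify(id=row.id, text=row.text)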
20,078,036 | 2013-11-19T17:27:00.000 | 0 | 0 | 0 | 1 | python,ios,xcode,macos,pycrypto | 27,936,871 | 3 | false | 0 | 0 | Use libdns_services instead, libdnsinfo.dylib is no more supported by latest sdk. | 1 | 1 | 0 | I am on MAC 10.9 with XCode 4.6.3 and have command line tools installed
I am trying to compile pycrypto-2.1.0 using
python setup.py build and getting following error
-----------------------------------------------------------------------------
ld: warning: ignoring file build/temp.macosx-10.6-intel-2.7/src/MD2.o, file was built for unsupported file format ( 0xcf 0xfa 0xed 0xfe 0x 7 0x 0 0x 0 0x 1 0x 3 0x 0 0x 0 0x 0 0x 1 0x 0 0x 0 0x 0 ) which is not the architecture being linked (i386): build/temp.macosx-10.6-intel-2.7/src/MD2.o
ld: file not found: /usr/lib/system/libdnsinfo.dylib for architecture i386
collect2: ld returned 1 exit status
ld: file not found: /usr/lib/system/libdnsinfo.dylib for architecture x86_64
collect2: ld returned 1 exit status
------------------------------------------------------------------------------------
locate is giving
$ locate libdnsinfo.dylib
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.7.sdk/usr/lib/system/libdnsinfo.dylib
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.8.sdk/usr/lib/system/libdnsinfo.dylib
/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS6.1.sdk/usr/lib/system/libdnsinfo.dylib
These path are also added to PATH.
Following is command and error
$ python setup.py build
running build
running build_py
running build_ext
warning: GMP library not found; Not building Crypto.PublicKey._fastmath.
building 'Crypto.Hash.MD2' extension
gcc-4.2 -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.7.sdk/usr/include/ -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.7.sdk/usr/include/ -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.7.sdk/usr/include/c++/4.2.1/ -O3 -fomit-frame-pointer -Isrc/ -I/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c src/MD2.c -o build/temp.macosx-10.6-intel-2.7/src/MD2.o
gcc-4.2 -bundle -undefined dynamic_lookup -arch i386 -arch x86_64 -g -L/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.7.sdk/usr/lib -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.7.sdk/usr/include/ -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.7.sdk/usr/include/ -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.7.sdk/usr/include/c++/4.2.1/ build/temp.macosx-10.6-intel-2.7/src/MD2.o -o build/lib.macosx-10.6-intel-2.7/Crypto/Hash/MD2.so
ld: warning: ignoring file build/temp.macosx-10.6-intel-2.7/src/MD2.o, file was built for unsupported file format ( 0xcf 0xfa 0xed 0xfe 0x 7 0x 0 0x 0 0x 1 0x 3 0x 0 0x 0 0x 0 0x 1 0x 0 0x 0 0x 0 ) which is not the architecture being linked (i386): build/temp.macosx-10.6-intel-2.7/src/MD2.o
ld: file not found: /usr/lib/system/libdnsinfo.dylib for architecture i386
collect2: ld returned 1 exit status
ld: file not found: /usr/lib/system/libdnsinfo.dylib for architecture x86_64
collect2: ld returned 1 exit status
Any idea to fix this? | file not found: /usr/lib/system/libdnsinfo.dylib for architecture i386 | 0 | 0 | 0 | 3,158 |
20,078,776 | 2013-11-19T18:06:00.000 | 1 | 0 | 0 | 1 | python,eclipse,pydev | 41,791,350 | 1 | true | 1 | 0 | It seems that the best resolution for this is to update from Eclipse 3.7 to 4.3+. | 1 | 3 | 0 | I'm trying to update some software in Eclipse, and mostly haven't had problems, but when I try to update PyDev (Python plugin) I get this error:
An error occurred while collecting items to be installed
session context was:(profile=epp.package.java, phase=org.eclipse.equinox.internal.p2.engine.phases.Collect, operand=, action=).
Problems downloading artifact: osgi.bundle,com.python.pydev,3.0.0.201311051910.
Error reading signed content:C:\Users\Blake\AppData\Local\Temp\signatureFile2219600778088128210.jar
An error occurred while processing the signatures for the file: C:\Users\Blake\AppData\Local\Temp\signatureFile2219600778088128210.jar
Problems downloading artifact: osgi.bundle,com.python.pydev.analysis,3.0.0.201311051910.
Error reading signed content:C:\Users\Blake\AppData\Local\Temp\signatureFile6795154829597372736.jar
An error occurred while processing the signatures for the file: C:\Users\Blake\AppData\Local\Temp\signatureFile6795154829597372736.jar
Problems downloading artifact: osgi.bundle,com.python.pydev.codecompletion,3.0.0.201311051910.
Error reading signed content:C:\Users\Blake\AppData\Local\Temp\signatureFile855072635271316145.jar
An error occurred while processing the signatures for the file: C:\Users\Blake\AppData\Local\Temp\signatureFile855072635271316145.jar
Problems downloading artifact: osgi.bundle,com.python.pydev.debug,3.0.0.201311051910.
Error reading signed content:C:\Users\Blake\AppData\Local\Temp\signatureFile4688521627100670190.jar
An error occurred while processing the signatures for the file: C:\Users\Blake\AppData\Local\Temp\signatureFile4688521627100670190.jar
Problems downloading artifact: osgi.bundle,com.python.pydev.fastparser,3.0.0.201311051910.
Error reading signed content:C:\Users\Blake\AppData\Local\Temp\signatureFile1084399815407097736.jar
An error occurred while processing the signatures for the file: C:\Users\Blake\AppData\Local\Temp\signatureFile1084399815407097736.jar
Problems downloading artifact: osgi.bundle,com.python.pydev.refactoring,3.0.0.201311051910.
Error reading signed content:C:\Users\Blake\AppData\Local\Temp\signatureFile4184776883512095240.jar
An error occurred while processing the signatures for the file: C:\Users\Blake\AppData\Local\Temp\signatureFile4184776883512095240.jar
Problems downloading artifact: osgi.bundle,org.python.pydev,3.0.0.201311051910.
Error reading signed content:C:\Users\Blake\AppData\Local\Temp\signatureFile4524222642627962811.jar
An error occurred while processing the signatures for the file: C:\Users\Blake\AppData\Local\Temp\signatureFile4524222642627962811.jar
Problems downloading artifact: osgi.bundle,org.python.pydev.ast,3.0.0.201311051910.
Error reading signed content:C:\Users\Blake\AppData\Local\Temp\signatureFile3249163288841740294.jar
An error occurred while processing the signatures for the file: C:\Users\Blake\AppData\Local\Temp\signatureFile3249163288841740294.jar
Problems downloading artifact: osgi.bundle,org.python.pydev.core,3.0.0.201311051910.
Error reading signed content:C:\Users\Blake\AppData\Local\Temp\signatureFile1814921458326062966.jar
An error occurred while processing the signatures for the file: C:\Users\Blake\AppData\Local\Temp\signatureFile1814921458326062966.jar
Problems downloading artifact: osgi.bundle,org.python.pydev.customizations,3.0.0.201311051910.
Error reading signed content:C:\Users\Blake\AppData\Local\Temp\signatureFile4652077908204425024.jar
An error occurred while processing the signatures for the file: C:\Users\Blake\AppData\Local\Temp\signatureFile4652077908204425024.jar
Problems downloading artifact: osgi.bundle,org.python.pydev.debug,3.0.0.201311051910.
Error reading signed content:C:\Users\Blake\AppData\Local\Temp\signatureFile5865734778550017815.jar
An error occurred while processing the signatures for the file: C:\Users\Blake\AppData\Local\Temp\signatureFile5865734778550017815.jar
Problems downloading artifact: osgi.bundle,org.python.pydev.django,3.0.0.201311051910.
Error reading signed content:C:\Users\Blake\AppData\Local\Temp\signatureFile1400608644382694448.jar
An error occurred while processing the signatures for the file: C:\Users\Blake\AppData\Local\Temp\signatureFile1400608644382694448.jar
Problems downloading artifact: osgi.bundle,org.python.pydev.help,3.0.0.201311051910.
Error reading signed content:C:\Users\Blake\AppData\Local\Temp\signatureFile5475958427511010644.jar
An error occurred while processing the signatures for the file: C:\Users\Blake\AppData\Local\Temp\signatureFile5475958427511010644.jar
Problems downloading artifact: osgi.bundle,org.python.pydev.jython,3.0.0.201311051910.
Error reading signed content:C:\Users\Blake\AppData\Local\Temp\signatureFile269530960804801404.jar
An error occurred while processing the signatures for the file: C:\Users\Blake\AppData\Local\Temp\signatureFile269530960804801404.jar
Problems downloading artifact: osgi.bundle,org.python.pydev.parser,3.0.0.201311051910.
Error reading signed content:C:\Users\Blake\AppData\Local\Temp\signatureFile6988087748918334886.jar
An error occurred while processing the signatures for the file: C:\Users\Blake\AppData\Local\Temp\signatureFile6988087748918334886.jar
Problems downloading artifact: osgi.bundle,org.python.pydev.refactoring,3.0.0.201311051910.
Error reading signed content:C:\Users\Blake\AppData\Local\Temp\signatureFile1524645906700502816.jar
An error occurred while processing the signatures for the file: C:\Users\Blake\AppData\Local\Temp\signatureFile1524645906700502816.jar
Problems downloading artifact: osgi.bundle,org.python.pydev.shared_core,3.0.0.201311051910.
Error reading signed content:C:\Users\Blake\AppData\Local\Temp\signatureFile7684330420892093099.jar
An error occurred while processing the signatures for the file: C:\Users\Blake\AppData\Local\Temp\signatureFile7684330420892093099.jar
Problems downloading artifact: osgi.bundle,org.python.pydev.shared_interactive_console,3.0.0.201311051910.
Error reading signed content:C:\Users\Blake\AppData\Local\Temp\signatureFile6948600865186203811.jar
An error occurred while processing the signatures for the file: C:\Users\Blake\AppData\Local\Temp\signatureFile6948600865186203811.jar
Problems downloading artifact: osgi.bundle,org.python.pydev.shared_ui,3.0.0.201311051910.
Error reading signed content:C:\Users\Blake\AppData\Local\Temp\signatureFile2509877364480980768.jar
An error occurred while processing the signatures for the file: C:\Users\Blake\AppData\Local\Temp\signatureFile2509877364480980768.jar
Problems downloading artifact: org.eclipse.update.feature,org.python.pydev.feature,3.0.0.201311051910.
Error reading signed content:C:\Users\Blake\AppData\Local\Temp\signatureFile7424055901779492006.jar
An error occurred while processing the signatures for the file: C:\Users\Blake\AppData\Local\Temp\signatureFile7424055901779492006.jar
I run Eclipse as an administrator and I don't understand what could cause this issue.
Regards, | PyDev Eclipse Plugin fails to update in Eclipse Update Manager | 1.2 | 0 | 0 | 1,251 |
20,081,338 | 2013-11-19T20:25:00.000 | 5 | 0 | 1 | 1 | python,virtualenv,anaconda,conda | 35,214,764 | 12 | false | 0 | 0 | Below is how it worked for me
C:\Windows\system32>set CONDA_ENVS_PATH=d:\your\location
C:\Windows\system32>conda info
Shows new environment path
C:\Windows\system32>conda create -n YourNewEnvironment --clone=root
Clones default root environment
C:\Windows\system32>activate YourNewEnvironment
Deactivating environment "d:\YourDefaultAnaconda3"...
Activating environment "d:\your\location\YourNewEnvironment"...
[YourNewEnvironment] C:\Windows\system32>conda info -e
conda environments:
#
YourNewEnvironment
* d:\your\location\YourNewEnvironment
root d:\YourDefaultAnaconda3 | 3 | 189 | 0 | I'm on Windows 8, using Anaconda 1.7.5 64bit.
I created a new Anaconda environment with
conda create -p ./test python=2.7 pip
from C:\Pr\TEMP\venv\.
This worked well (there is a folder with a new python distribution). conda tells me to type
activate C:\PR\TEMP\venv\test
to activate the environment, however this returns:
No environment named "C:\PR\temp\venv\test" exists in C:\PR\Anaconda\envs
How can I activate the environment? What am I doing wrong? | How to activate an Anaconda environment | 0.083141 | 0 | 0 | 622,482 |
20,081,338 | 2013-11-19T20:25:00.000 | 2 | 0 | 1 | 1 | python,virtualenv,anaconda,conda | 58,099,552 | 12 | false | 0 | 0 | For me, using Anaconda Prompt instead of cmd or PowerShell is the key.
In Anaconda Prompt, all I need to do is activate XXX | 3 | 189 | 0 | I'm on Windows 8, using Anaconda 1.7.5 64bit.
I created a new Anaconda environment with
conda create -p ./test python=2.7 pip
from C:\Pr\TEMP\venv\.
This worked well (there is a folder with a new python distribution). conda tells me to type
activate C:\PR\TEMP\venv\test
to activate the environment, however this returns:
No environment named "C:\PR\temp\venv\test" exists in C:\PR\Anaconda\envs
How can I activate the environment? What am I doing wrong? | How to activate an Anaconda environment | 0.033321 | 0 | 0 | 622,482 |
20,081,338 | 2013-11-19T20:25:00.000 | -1 | 0 | 1 | 1 | python,virtualenv,anaconda,conda | 62,778,561 | 12 | false | 0 | 0 | Windows:
conda activate environment_name
Mac: conda activate environment_name | 3 | 189 | 0 | I'm on Windows 8, using Anaconda 1.7.5 64bit.
I created a new Anaconda environment with
conda create -p ./test python=2.7 pip
from C:\Pr\TEMP\venv\.
This worked well (there is a folder with a new python distribution). conda tells me to type
activate C:\PR\TEMP\venv\test
to activate the environment, however this returns:
No environment named "C:\PR\temp\venv\test" exists in C:\PR\Anaconda\envs
How can I activate the environment? What am I doing wrong? | How to activate an Anaconda environment | -0.016665 | 0 | 0 | 622,482 |
20,081,818 | 2013-11-19T20:55:00.000 | 1 | 0 | 0 | 0 | c++,python,opencv,video-capture,image-capture | 20,085,854 | 1 | true | 0 | 0 | When you retrieve a frame from a camera, it is the maximum size that that camera can give. If you want a smaller image, you have to specify it when you get the image, and opencv will resize it for you.
A normal camera has one sensor of one size, and it sends one kind of image to the computer. What opencv does with it thereafter is up to you to specify. | 1 | 1 | 1 | I'm using Open CV 2.4.6 with C++ (with Python sometimes too but it is irrelevant). I would like to know if there is a simple way to get all the available frame sizes from a capture device?
For example, my webcam can provide 640x480, 320x240 and 160x120. Suppose that I don't know about these frame sizes a priori... Is it possible to get a vector or an iterator, or something like this that could give me these values?
In other words, I don't want to get the current frame size (which is easy to obtain) but the sizes I could set the device to.
Thanks! | Getting all available frame size from capture device with OpenCV | 1.2 | 0 | 0 | 1,874 |
20,082,935 | 2013-11-19T21:57:00.000 | 12 | 0 | 1 | 1 | python,macos,python-3.x,pip,python-3.3 | 45,603,115 | 16 | false | 0 | 0 | brew install python3
create alias in your shell profile
eg. alias pip3="python3 -m pip" in my .zshrc
➜ ~ pip3 --version
pip 9.0.1 from /usr/local/lib/python3.6/site-packages (python 3.6) | 3 | 140 | 0 | OS X (Mavericks) has Python 2.7 stock installed. But I do all my own personal Python stuff with 3.3. I just flushed my 3.3.2 install and installed the new 3.3.3. So I need to install pyserial again. I can do it the way I've done it before, which is:
Download pyserial from pypi
untar pyserial.tgz
cd pyserial
python3 setup.py install
But I'd like to do like the cool kids do, and just do something like pip3 install pyserial. But it's not clear how I get to that point. And just that point. Not interested (unless I have to be) in virtualenv yet. | How to install pip for Python 3 on Mac OS X? | 1 | 0 | 0 | 296,833 |
20,082,935 | 2013-11-19T21:57:00.000 | 4 | 0 | 1 | 1 | python,macos,python-3.x,pip,python-3.3 | 52,130,224 | 16 | false | 0 | 0 | pip3 is installed automatically with python3 when using brew:
brew install python3
pip3 --version | 3 | 140 | 0 | OS X (Mavericks) has Python 2.7 stock installed. But I do all my own personal Python stuff with 3.3. I just flushed my 3.3.2 install and installed the new 3.3.3. So I need to install pyserial again. I can do it the way I've done it before, which is:
Download pyserial from pypi
untar pyserial.tgz
cd pyserial
python3 setup.py install
But I'd like to do like the cool kids do, and just do something like pip3 install pyserial. But it's not clear how I get to that point. And just that point. Not interested (unless I have to be) in virtualenv yet. | How to install pip for Python 3 on Mac OS X? | 0.049958 | 0 | 0 | 296,833 |
20,082,935 | 2013-11-19T21:57:00.000 | 0 | 0 | 1 | 1 | python,macos,python-3.x,pip,python-3.3 | 55,175,708 | 16 | false | 0 | 0 | For a fresh new Mac, you need to follow the steps below:
Make sure you have installed Xcode
sudo easy_install pip
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
brew doctor
brew install python3
And you are done, just type python3 on terminal and you will see python 3 installed. | 3 | 140 | 0 | OS X (Mavericks) has Python 2.7 stock installed. But I do all my own personal Python stuff with 3.3. I just flushed my 3.3.2 install and installed the new 3.3.3. So I need to install pyserial again. I can do it the way I've done it before, which is:
Download pyserial from pypi
untar pyserial.tgz
cd pyserial
python3 setup.py install
But I'd like to do like the cool kids do, and just do something like pip3 install pyserial. But it's not clear how I get to that point. And just that point. Not interested (unless I have to be) in virtualenv yet. | How to install pip for Python 3 on Mac OS X? | 0 | 0 | 0 | 296,833 |
20,084,135 | 2013-11-19T23:10:00.000 | 4 | 0 | 0 | 0 | python,sqlite,stringio | 20,084,315 | 1 | true | 0 | 0 | The Python sqlite3 module cannot open a database from a file number, and even so, using StringIO will not give you a file number (since it does not open a file, it just emulates the Python file object).
You can use the :memory: special file name to avoid writing a file to disk, then later write it to disk once you are done with it. This will also make sure the file is optimized for size, and you can opt not to write e.g. indexes if size is really a big issue. | 1 | 10 | 0 | I am wondering if anyone knows a way to generate a connection to a SQLite database in python from a StringIO object.
I have a compressed SQLite3 database file and I would like to decompress it using the gzip library and then connect to it without first making a temp file.
I've looked into the sqlite3 library source, but it looks like the filename gets passed all the way through into the C code. Are there any other SQLite3 connection libraries that could use a file ID? Or is there some way I can trick the built-in sqlite3 library into thinking that my StringIO (or some other object type) is an actual file? | SQLite3 connection from StringIO (Python) | 1.2 | 1 | 0 | 1,371
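A hedged sketch of the ':memory:' approach: dumping the in-memory database to a file with iterdump() works on any Python, while loading raw database bytes straight into memory without a temporary file needs Connection.deserialize(), which only exists in newer Pythons (3.11+). File names are made up.
import gzip
import sqlite3
raw = gzip.open('db.sqlite.gz', 'rb').read()    # the decompressed database bytes
mem = sqlite3.connect(':memory:')
if hasattr(mem, 'deserialize'):                 # Python 3.11+ only
    mem.deserialize(raw)
else:
    raise RuntimeError('no deserialize(); fall back to writing a temporary file')
# ... query and modify `mem` as usual ...
out = sqlite3.connect('db_out.sqlite')          # persist the result when finished
out.executescript('\n'.join(mem.iterdump()))
out.close()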
20,084,382 | 2013-11-19T23:26:00.000 | 7 | 0 | 0 | 0 | python,pandas,dataframe | 42,714,576 | 2 | false | 0 | 0 | Or you can use:
df.stack().unique()
Then you don't need to worry if you have NaN values, as they are excluded when doing the stacking. | 1 | 51 | 1 | I have a Pandas dataframe and I want to find all the unique values in that dataframe...irrespective of row/columns. If I have a 10 x 10 dataframe, and suppose they have 84 unique values, I need to find them - Not the count.
I can create a set and add the values of each row by iterating over the rows of the dataframe, but I feel that may be inefficient (I cannot justify that). Is there an efficient way to find them? Is there a predefined function? | Find unique values in a Pandas dataframe, irrespective of row or column location | 1 | 0 | 0 | 90,919
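Two equivalent one-liners for the question above, on a made-up 3x3 frame:
import pandas as pd
df = pd.DataFrame([[1, 2, 2], [3, 1, 4], [4, 5, 1]])
print(df.stack().unique())            # NaNs are dropped by the stacking step
print(pd.unique(df.values.ravel()))   # keeps NaN if present and avoids the stack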
20,085,750 | 2013-11-20T01:38:00.000 | 1 | 0 | 0 | 0 | python,text,grid,wxpython,alignment | 20,100,885 | 1 | true | 0 | 1 | Try setting the StaticText size to something specific, like (100, -1). That way it should stay the same unless you happen to apply a string that's greater than the size you set. I'm not sure what happens then, but probably it would get truncated. However, if you're updating it so fast that you can't read it to begin with, I don't think this will be an issue. | 1 | 0 | 0 | I am using a Flexgridsizer that contains a mix of statictexts and buttons within it. My flexgridsizer contains 1 column of statictext that never changes, 3 columns that update constantly (sometimes a tenth of a second) and the last column has all buttons that remain the same -- I have about 12 rows.
When I push "go" on my code, the three statictext columns update constantly. I do this by self.text1.SetLabel('newtext'). This works great. However, I initially set up the gridsizer so the statictext is centered. Therefore, when i run my code, after updating each cell, i run self.panel.Layout(). This helps get my columns centered again.
The problem with this is that since I do so much updating, it causes my buttons in the last column to look like they are moving left and right (since it appears to be resetting the layout of the buttons). I want the buttons to "stay still". To fix this, I removed the self.panel.Layout() BUT this now makes all my text be right-justified.
Is there any way to apply the Layout() to just specific columns of the gridsizer? I really need to keep the gridsizer if possible. I have also seen the "noautoresize" but one of my columns experiences texts of different lengths at each update.
Any thoughts? Thanks! | wxpython update wx.statictext inside wx.gridsizer | 1.2 | 0 | 0 | 113 |
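A hedged wxPython sketch of the accepted suggestion: give the updating StaticText a fixed width plus a centred, non-autoresizing style, so the repeated panel.Layout() calls become unnecessary. Widget names are made up and newer (Phoenix) constant names are used.
import wx
class ReadoutPanel(wx.Panel):
    def __init__(self, parent):
        wx.Panel.__init__(self, parent)
        sizer = wx.FlexGridSizer(1, 2, 4, 4)   # rows, cols, vgap, hgap
        self.value = wx.StaticText(self, label='0.000', size=(100, -1),
                                   style=wx.ALIGN_CENTRE_HORIZONTAL | wx.ST_NO_AUTORESIZE)
        sizer.Add(wx.StaticText(self, label='reading:'), 0, wx.ALIGN_CENTRE_VERTICAL)
        sizer.Add(self.value, 0, wx.ALIGN_CENTRE_VERTICAL)
        self.SetSizer(sizer)
    def update(self, text):
        self.value.SetLabel(text)   # fixed width, so no panel.Layout() is needed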
20,086,958 | 2013-11-20T03:47:00.000 | 0 | 0 | 0 | 0 | python,canvas | 20,093,393 | 1 | false | 0 | 1 | You know the positions of the two objects that can collide, so compute the distance between them; when it is smaller than a threshold, they collide.
Or use Canvas.find_overlapping(*rectangle) to find the items on the canvas inside a given rectangle.
I always prefer option 1. It helps divide the model from the presentation to the user, which do not always need to be linked. | 1 | 0 | 0 | So basically, I'm going to make a Brick Breaker type of game. It's just that my beginning CS Python class didn't teach much OO programming, and I was wondering how I could make this free-moving ball register when it hits the slider. I think I have an idea, but I would like to see other people's explanations. | How can I make one canvas object collide with another canvas object using Tkinter? | 0 | 0 | 0 | 147
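A small Tkinter sketch of option 2 from the answer: ask the canvas which items overlap the ball's bounding box and check whether the paddle is among them. Coordinates are made up.
import tkinter as tk    # the module is named Tkinter on Python 2
def ball_hits(canvas, ball_id, paddle_id):
    x1, y1, x2, y2 = canvas.coords(ball_id)           # bounding box of the ball oval
    return paddle_id in canvas.find_overlapping(x1, y1, x2, y2)
root = tk.Tk()
canvas = tk.Canvas(root, width=400, height=300)
canvas.pack()
ball = canvas.create_oval(50, 50, 70, 70, fill='red')
paddle = canvas.create_rectangle(40, 65, 140, 75, fill='blue')
print(ball_hits(canvas, ball, paddle))                # True: the boxes overlap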
20,091,664 | 2013-11-20T09:14:00.000 | 2 | 0 | 1 | 0 | python,pyzmq | 20,091,711 | 1 | true | 0 | 0 | By installing it in a location that is listed earlier in sys.path.
The directory your project is in, for example, is always listed first in sys.path and other packages in the same directory will be found before system locations. In other words, put pyzmq in the same folder as your script and it'll Just Work.
You can also add entries to sys.path by listing them in the PYTHONPATH environment variable; these will be inserted into sys.path before system locations. | 1 | 0 | 0 | I've compiled the pyzmq (Python ZeroMQ binding) module and want to use that one instead of the system one.
How do I skip loading the system copy so that import zmq first searches the current folder? | How to import local module with same name as sys module? | 1.2 | 0 | 0 | 36
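Two common ways to make a locally built copy win over the system package; the paths are made up.
import sys
sys.path.insert(0, '/home/me/build/pyzmq')   # searched before site-packages
import zmq
print(zmq.__file__)                          # confirm which copy was imported
# Or, without touching the code, set it in the environment before starting Python:
#   PYTHONPATH=/home/me/build/pyzmq python myscript.py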
20,092,212 | 2013-11-20T09:38:00.000 | 2 | 0 | 0 | 0 | python,vb.net,web,gis,arcgis | 49,368,127 | 4 | false | 0 | 0 | Quite an open-ended question, but it really depends what you want it for. Career-wise, lots of organisations use Arc Server, but lots of people are going open now, and MapServer, GeoServer and Mapnik are all technologies I would seriously look at.
Learning about the OGC protocols is a good start, i.e. WFS, WMS, etc.
Open layers and Leaflet and how they can consume data through WMS or WFS, and DB management - i.e. ensuring the data is in good shape and has adequate spatial indexes. | 1 | 2 | 0 | I am new to Web GIS mapping, I am interested to learn about WEB GIS and what are the skills needed.
I Know Arcgis Desktop, FME, VBA, Microstation and Autocad.
Please guide me. | Web GIS mapping for Beginner | 0.099668 | 0 | 0 | 1,430 |
20,095,451 | 2013-11-20T12:02:00.000 | 1 | 0 | 0 | 0 | python,django,rabbitmq,pika | 20,095,672 | 1 | false | 1 | 0 | You will have to use some locking mechanism, perhaps based on a database.
When a worker is working on a Django object, it marks the work in the database. A MySQL example:
worker_id | object_id | task_type
22 | 44 | 3 // entry inserted to mark the work
When another worker picks up a Django object, it checks that it is not marked as in #1, and proceeds to pick the next item.
When a worker has finished working on an object, the database lock row is deleted or marked as FINISHED. | 1 | 2 | 0 | I have a Python Django project with two RabbitMQ workers, using the pika lib, which receives jobs to perform actions on a certain Django object which is specified in the request.
The thing is, I don't want the workers, A and B, to perform their actions on the same Django object, x, at the same time as that might cause problems. It doesn't matter which workers goes first but if A is working on x and B receives a job to work on x, I want this job to wait until A is done.
So the problems boils down to being able to know what the other worker is working on and being able to pause a job until a certain time. Note that in my actual project I have more than 2 workers which this must be applied to, I choose two In my example to make it easier to dissect.
Thanks for the help,
Mattias | RabbitMQ: Preventing jobs to run simultaneously on two different workers | 0.197375 | 0 | 0 | 155 |
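A rough Python sketch of the database-marking idea from the answer above, using the Django ORM. The ObjectLock model and its fields are hypothetical; the important part is the unique constraint on object_id, which lets the database arbitrate between workers.

    from django.db import IntegrityError, models, transaction

    class ObjectLock(models.Model):
        # Hypothetical lock table: at most one row per object being worked on.
        object_id = models.IntegerField(unique=True)
        worker_id = models.IntegerField()

    def try_claim(object_id, worker_id):
        """Return True if this worker claimed the object, False if another holds it."""
        try:
            with transaction.atomic():
                ObjectLock.objects.create(object_id=object_id, worker_id=worker_id)
            return True
        except IntegrityError:
            return False

    def release(object_id, worker_id):
        # Delete the lock row (or mark it FINISHED) once the work is done.
        ObjectLock.objects.filter(object_id=object_id, worker_id=worker_id).delete()

A worker that fails to claim the object can requeue the message (or sleep and retry) instead of processing it immediately.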
20,096,083 | 2013-11-20T12:32:00.000 | 2 | 0 | 0 | 0 | python,linux | 20,096,169 | 2 | false | 0 | 0 | Signals like SIGHUP, SIGTERM, SIGQUIT are sent to a specific process and can be handled there. Power off and shutdown are handled by the init process of your system. They depend on the implementation of init you are using (Upstart, SysV init), and there is no general way to detect and handle them from another process, regardless of whether this process is written in Python or any other language. | 1 | 1 | 0 | I know that you can detect SIGHUP, SIGTERM, SIGQUIT, etc., but is it possible to detect when the system receives a halt / poweroff / shutdown signal? | Is it possible to detect a halt / poweroff signal in Python? | 0.197375 | 0 | 0 | 392 |
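To make the per-process half of the answer above concrete, a Python program can install handlers for the signals it can see; as explained, it cannot tell from inside the handler whether the SIGTERM came from a shutdown or from something else.

    import signal
    import sys

    def on_term(signum, frame):
        # Do whatever cleanup is needed, then exit. Nothing here reveals
        # whether the machine is actually halting.
        print("caught signal %d, cleaning up" % signum)
        sys.exit(0)

    signal.signal(signal.SIGTERM, on_term)
    signal.signal(signal.SIGHUP, on_term)

Detecting the shutdown itself has to happen at the init level, e.g. via an init script or unit hooked into the halt/poweroff stage.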
20,097,450 | 2013-11-20T13:36:00.000 | 6 | 0 | 1 | 0 | python,api,exception,class-design | 20,097,899 | 5 | true | 0 | 0 | API design is a bit of an art. The name of a function should suggest how it will behave, including setting up user expectations. A function named findPoint implies that the search may fail, so the case where no such point exists is not exceptional, and the function may return None to signal the result. A function named getPoint, however, would imply to me that it can always return the requested point. A failure would be unexpected and warrant raising an exception. | 4 | 6 | 0 | I am using a Python-based API where there are lots of functions to query things, like doesPointExist, findPoint, canCreateNewPoint, etc., where the negative result throws an exception. This makes the code much more cluttered, filled with try/catch statements, instead of directly using the result as a boolean value.
Since I am not a Python expert, I am wondering if this design is Pythonic or not? I haven't seen this sort of design in the standard libraries though, so I am assuming this kind of exception usage in Python APIs is frowned upon? | When designing a Python API, is it more Pythonic to throw exceptions or return false/None, etc? | 1.2 | 0 | 0 | 425 |
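A short sketch of the naming convention described in the accepted answer above; the Point storage is a plain dict placeholder.

    class PointNotFound(Exception):
        pass

    def find_point(points, name):
        # "find" hints that the search may fail, so None is a normal outcome.
        return points.get(name)

    def get_point(points, name):
        # "get" hints that the point should exist, so a miss is exceptional.
        point = points.get(name)
        if point is None:
            raise PointNotFound(name)
        return point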
20,097,450 | 2013-11-20T13:36:00.000 | 6 | 0 | 1 | 0 | python,api,exception,class-design | 20,097,544 | 5 | false | 0 | 0 | Sounds like a badly designed API.
The function doesPointExist should return True or False; it shouldn't raise an exception when the point doesn't exist.
The function findPoint should return a Point object or None when no object could be found.
The function canCreateNewPoint should return True or False for similar reasons.
Exceptions are for exceptional cases. | 4 | 6 | 0 | I am using a Python-based API where there are lots of functions to query things, like doesPointExist, findPoint, canCreateNewPoint, etc., where the negative result throws an exception. This makes the code much more cluttered, filled with try/catch statements, instead of directly using the result as a boolean value.
Since I am not a Python expert, I am wondering if this design is Pythonic or not? I haven't seen this sort of design in the standard libraries though, so I am assuming this kind of exception usage in Python APIs is frowned upon? | When designing a Python API, is it more Pythonic to throw exceptions or return false/None, etc? | 1 | 0 | 0 | 425 |
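A hedged sketch of the signatures this answer suggests; the module-level dict is just a stand-in so the example runs.

    _points = {}  # placeholder storage: name -> point object

    def does_point_exist(name):
        return name in _points           # plain True/False, no exception

    def find_point(name):
        return _points.get(name)         # the point, or None when nothing matches

    def can_create_new_point(name):
        return name not in _points       # True/False again

Callers can then write "if does_point_exist(p): ..." instead of wrapping every query in try/except.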
20,097,450 | 2013-11-20T13:36:00.000 | 3 | 0 | 1 | 0 | python,api,exception,class-design | 20,097,598 | 5 | false | 0 | 0 | I don't agree that you won't find this in the standard library. For example, "abc".index("d") raises ValueError, and lots of libraries raise exceptions freely.
I'd say it depends on what the consequences of a failed action are.
If the caller can work with the returned value without change, I'd return an empty value (or False, if it's a yes or no question).
If the call fails, I'd raise an exception. For example, findPoint() might do that if it normally returns a Point object that the caller wants to work with. | 4 | 6 | 0 | I am using a Python-based API where there are lots of functions to query things, like doesPointExist, findPoint, canCreateNewPoint, etc., where the negative result throws an exception. This makes the code much more cluttered, filled with try/catch statements, instead of directly using the result as a boolean value.
Since I am not a Python expert, I am wondering if this design is Pythonic or not? I haven't seen this sort of design in the standard libraries though, so I am assuming this kind of exception usage in Python APIs is frowned upon? | When designing a Python API, is it more Pythonic to throw exceptions or return false/None, etc? | 0.119427 | 0 | 0 | 425 |
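To make the stdlib example in the answer above concrete, str exposes both styles side by side: index() raises, while find() and the in operator return values you can test directly.

    s = "abc"

    try:
        pos = s.index("d")    # raises ValueError because "d" is absent
    except ValueError:
        pos = None

    pos2 = s.find("d")        # returns -1 instead of raising
    present = "d" in s        # plain boolean check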
20,097,450 | 2013-11-20T13:36:00.000 | 2 | 0 | 1 | 0 | python,api,exception,class-design | 20,098,199 | 5 | false | 0 | 0 | This isn't unique to Python. The other answers are good, but just some additional thoughts that didn't fit well into a comment:
You use exceptions when you don't want (or need) to check for the result. In this mode, you just do it, and if there's an error somewhere, you throw an exception. Getting rid of the explicit checks makes for shorter code, and you still get good debugging information when you DO get an exception, so it's common. This is the EAFP (easier to ask forgiveness than permission) style.
You use return codes when you do want to check for the result. Explicit checking is sometimes necessary if failures won't always fail cleanly, or to aid debugging in complex code flows. This is sometimes called the LBYL (look before you leap) style.
In Python, as in most interpreted languages, the baseline interpreter overhead is so high that exceptions are relatively cheap by comparison, so it's much more common to use EAFP in Python than in, say, C++, where the overhead is lower and exceptions are (relatively) more expensive.
Note that a function might both give a return value and possibly throw an exception.
In your example, a function like doesPointExist implies that the user actually wants to verify access before trying something. This is LBYL. Throwing an exception as a result value is part of the EAFP programming style, and wouldn't make sense for this function - if you wanted that style, you wouldn't check, you would just do it, and catch the exception when the point didn't exist.
However, even here there are assumptions - that you've passed in a valid point. It would be fine for the function to return True/False for whether the point exists, while throwing an exception if something that wasn't a point was passed to it. | 4 | 6 | 0 | I am using a Python-based API where there are lots of functions to query things, like doesPointExist, findPoint, canCreateNewPoint, etc., where the negative result throws an exception. This makes the code much more cluttered, filled with try/catch statements, instead of directly using the result as a boolean value.
Since I am not a Python expert, I am wondering if this design is Pythonic or not? I haven't seen this sort of design in the standard libraries though, so I am assuming this kind of exception usage in Python APIs is frowned upon? | When designing a Python API, is it more Pythonic to throw exceptions or return false/None, etc? | 0.07983 | 0 | 0 | 425 |
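A small sketch contrasting the two styles from the answer above, using an ordinary dict lookup; the settings dict is made up for the example.

    settings = {"host": "localhost"}

    # LBYL: look before you leap -- check first, then act.
    if "port" in settings:
        port = settings["port"]
    else:
        port = 8080

    # EAFP: easier to ask forgiveness than permission -- just act, catch the failure.
    try:
        port = settings["port"]
    except KeyError:
        port = 8080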
20,101,721 | 2013-11-20T16:41:00.000 | -1 | 1 | 1 | 1 | python,debian,python-2.6,apt | 20,102,657 | 2 | false | 0 | 0 | I know this might seem extreme, but if you need 2.6 that badly, try running Debian stable in a virtual machine like VirtualBox and installing 2.6 through that. | 1 | 0 | 0 | I'm using Linux Mint Debian Edition (equivalent to Debian Testing). There is no python2.6-dev package, which I'd need to install pycrypto for Python 2.6 (since it has a compilation step).
Is there any way to get this package or an equivalent on my system? I already have Python 2.6 installed on my system and I can use it without a hitch.
(The python2.7-dev package is there just fine. But I'm glued to 2.6, so it doesn't suit my needs.) | How to install python2.6-dev on Debian Testing | -0.099668 | 0 | 0 | 1,188 |
20,102,228 | 2013-11-20T17:03:00.000 | 2 | 0 | 0 | 0 | python,django,tastypie | 20,102,341 | 1 | true | 1 | 0 | "will see basic read only info from the Django API" (quoting the question)
It sounds like you probably just want to make those bits of the API publicly available for read-only access, and then not use any authentication method.
As you say, attempting to hide a key isn't a sensible way to go, and if there's no kind of user login then you can't really authenticate in any secure way. | 1 | 1 | 0 | I'm using Django TastyPie for my API. I have a completely separate HTML application that my user views and will see basic read-only info from the Django API. My question is what authentication method I should use in this situation. The HTML application is technically me, not the user, and they don't log in. The app is not Django but pure JavaScript, so hiding a key or anything else is pointless. | TastyPie authentication for pure javascript site | 1.2 | 0 | 0 | 57 |
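A hedged sketch of what a publicly readable Tastypie resource can look like (MyModel and the resource name are placeholders; ReadOnlyAuthorization plus the default no-op Authentication leaves the endpoint open for GETs only):

    from tastypie.authentication import Authentication
    from tastypie.authorization import ReadOnlyAuthorization
    from tastypie.resources import ModelResource

    from myapp.models import MyModel  # hypothetical model

    class MyModelResource(ModelResource):
        class Meta:
            queryset = MyModel.objects.all()
            resource_name = "mymodel"
            allowed_methods = ["get"]              # read-only at the HTTP level
            authentication = Authentication()      # no-op: everyone may read
            authorization = ReadOnlyAuthorization()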
20,104,368 | 2013-11-20T18:47:00.000 | 11 | 0 | 1 | 1 | python,py2exe | 20,777,298 | 2 | false | 0 | 0 | You can do that this way:
Activate your virtualenv and then ...
easy_install py2exe-0.6.9.win32-py2.7.exe | 2 | 9 | 0 | I have a Python script I developed within a virtualenv on Windows (Python 2.7).
I would now like to compile it into a single EXE using Py2exe.
I've read and read the docs and stackoverflow, and yet I can't find a simple answer: How do I do this? I tried just installing py2exe (via the downloadable installer), but of course that doesn't work because it uses the system-level python, which doesn't have the dependencies for my script installed. It needs to use the virtualenv - but there doesn't seem to be such an option.
I did manage to get bbfreeze to work, but it outputs a dist folder crammed with files, and I just want a simple EXE file (one file) for my simple script, and I understand Py2Exe can do this.
tl;dr: How do I run Py2Exe within the context of a virtualenv so it correctly imports dependencies? | Using py2exe in a virtualenv | 1 | 0 | 0 | 4,266 |
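Once py2exe is importable from inside the virtualenv (as in the answer above), a minimal single-file setup.py sketch might look like the following; the script name is a placeholder, and bundle_files=1 with zipfile=None is the usual single-EXE recipe for py2exe 0.6.x.

    from distutils.core import setup
    import py2exe  # registers the "py2exe" command with distutils

    setup(
        console=["myscript.py"],  # placeholder script name
        options={"py2exe": {"bundle_files": 1, "compressed": True}},
        zipfile=None,             # fold the library archive into the EXE
    )

Run "python setup.py py2exe" from the activated virtualenv so the bundled dependencies come from the virtualenv's site-packages.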
20,104,368 | 2013-11-20T18:47:00.000 | 1 | 0 | 1 | 1 | python,py2exe | 20,196,997 | 2 | true | 0 | 0 | Installing py2exe into your virtualenv should be straightforward. You'll need Visual Studio 2008; the Express version should work. Launch a 2008 Command Prompt and activate your virtualenv. Change into the directory that contains the py2exe source and run python setup.py install. You can verify that py2exe is in the correct environment by attempting to import it from an interactive shell. I tested this myself earlier today (I had to install virtualenv first). It works exactly as expected.
I would now like to compile it into a single EXE using Py2exe.
I've read and read the docs and stackoverflow, and yet I can't find a simple answer: How do I do this? I tried just installing py2exe (via the downloadable installer), but of course that doesn't work because it uses the system-level python, which doesn't have the dependencies for my script installed. It needs to use the virtualenv - but there doesn't seem to be such an option.
I did manage to get bbfreeze to work, but it outputs a dist folder crammed with files, and I just want a simple EXE file (one file) for my simple script, and I understand Py2Exe can do this.
tl;dr: How do I run Py2Exe within the context of a virtualenv so it correctly imports dependencies? | Using py2exe in a virtualenv | 1.2 | 0 | 0 | 4,266 |