Q_Id (int64, 337 to 49.3M) | CreationDate (stringlengths 23 to 23) | Users Score (int64, -42 to 1.15k) | Other (int64, 0 to 1) | Python Basics and Environment (int64, 0 to 1) | System Administration and DevOps (int64, 0 to 1) | Tags (stringlengths 6 to 105) | A_Id (int64, 518 to 72.5M) | AnswerCount (int64, 1 to 64) | is_accepted (bool, 2 classes) | Web Development (int64, 0 to 1) | GUI and Desktop Applications (int64, 0 to 1) | Answer (stringlengths 6 to 11.6k) | Available Count (int64, 1 to 31) | Q_Score (int64, 0 to 6.79k) | Data Science and Machine Learning (int64, 0 to 1) | Question (stringlengths 15 to 29k) | Title (stringlengths 11 to 150) | Score (float64, -1 to 1.2) | Database and SQL (int64, 0 to 1) | Networking and APIs (int64, 0 to 1) | ViewCount (int64, 8 to 6.81M)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
24,763,949 | 2014-07-15T16:55:00.000 | 2 | 0 | 1 | 0 | python,pygame,collision-detection,sprite | 24,764,544 | 1 | true | 0 | 1 | The default Sprite class has no notion of separate parts of a sprite; the whole thing is assigned a single bounding box. If you plan on using pygame.sprite.groupcollide, it seems like you don't want an individual sprite anyway; you want them packaged together in their own group. Keep in mind that the pygame.sprite.Group.add method can take either a single sprite OR an iterable of sprites. So you can nest sprite groups if necessary. | 1 | 3 | 0 | No, I am not asking about spritesheets.
I'm working on a game in pygame and I want to know if it's possible to divide a sprite into parts, like body parts, and then use them in collision detection with pygame.sprite.groupcollide? | How to divide sprite into smaller parts in pygame? | 1.2 | 0 | 0 | 357
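Below is a minimal sketch of the grouping approach described in the answer above, assuming simple rect-based part sprites; the BodyPart class and the coordinates are illustrative only.

```python
import pygame

class BodyPart(pygame.sprite.Sprite):
    """One collidable piece of a larger character (hypothetical helper class)."""
    def __init__(self, x, y, w, h):
        super(BodyPart, self).__init__()
        self.image = pygame.Surface((w, h))
        self.rect = self.image.get_rect(topleft=(x, y))

# Package the parts of one character in their own group instead of one sprite.
player_parts = pygame.sprite.Group(
    BodyPart(0, 0, 20, 20),    # head
    BodyPart(0, 20, 20, 40),   # torso
)
enemies = pygame.sprite.Group(BodyPart(5, 25, 10, 10))

# groupcollide reports which individual parts were hit.
hits = pygame.sprite.groupcollide(player_parts, enemies, False, False)
print(hits)
```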
24,766,819 | 2014-07-15T19:41:00.000 | 2 | 0 | 0 | 0 | python,django,selenium,integration-testing,code-coverage | 24,768,403 | 2 | false | 1 | 0 | Well, they increase coverage if they execute code which is not executed by your other tests. However, that won't show up in your reports unless you figure out a way to capture which lines are executed during selenium testing, and add that into the data about coverage. | 1 | 0 | 0 | So, do they? That is the question. I'm not seeing any increase in my coverage reports with my integration tests done with selenium. | Do selenium tests in django applications increase coverage? | 0.197375 | 0 | 1 | 826 |
24,767,321 | 2014-07-15T20:13:00.000 | 0 | 0 | 0 | 0 | python,django | 24,767,411 | 2 | false | 1 | 0 | I would create another table (tablename_approval) with columns something like
approved | boolean
approved_by | foreign key to user
timestamp | timestamp
to track the state of each individual row.
You might want to consider an enum rather than boolean to track the difference between items which have not yet been approved versus those that were checked and intentionally not approved. | 1 | 0 | 0 | I have a model for which changes need to be approved by a user with a certain flag before they are seen by everybody. Making two different identical models is not an option, because the model has a many-to-many field referencing itself, which needs to be linked to both approved and non-approved entries.
I'm using Django 1.7. django-moderation looks like the best option, but it doesn't support many-to-many relations. I've also tried django-gatekeeper, which didn't work for me either.
Is there a library which supports many-to-many relationships?
If not, how would I go about doing this myself? | How would I implement a change approval process for a Django model? | 0 | 0 | 0 | 2,143 |
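A rough sketch of the "separate approval table" idea from the first answer, written as a Django 1.7-era model; the Article reference, field names and choices are assumptions for illustration only.

```python
from django.conf import settings
from django.db import models

APPROVAL_CHOICES = (
    ("pending", "Not yet reviewed"),
    ("approved", "Approved"),
    ("rejected", "Checked and intentionally not approved"),
)

class ArticleApproval(models.Model):
    """One approval record per row of the moderated model (names are illustrative)."""
    article = models.OneToOneField("Article")          # the moderated model
    state = models.CharField(max_length=10, choices=APPROVAL_CHOICES, default="pending")
    approved_by = models.ForeignKey(settings.AUTH_USER_MODEL, null=True, blank=True)
    timestamp = models.DateTimeField(auto_now=True)
```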
24,768,900 | 2014-07-15T21:58:00.000 | -1 | 1 | 1 | 0 | python,bit-manipulation,xor | 24,768,999 | 3 | false | 0 | 0 | You can simply use ==.
A XNOR B is the same as the == operator because:
A B XNOR
F F T
F T F
T F F
T T T | 1 | 0 | 0 | What is the most efficient algorithm for finding ~A XOR B? (Note that ~ is the complement function, done by reversing each 1 bit into 0 and each 0 into 1 bit, and XOR is the exclusive or function)
For example, ~4 XOR 6 = ~010 = 101 = 5 and ~6 XOR 9 = ~1111 = 0 | Complement of XOR | -0.066568 | 0 | 0 | 1,018 |
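A small sketch of the bitwise complement-of-XOR for fixed-width integers; the explicit bit width and mask are assumptions, since Python integers are unbounded and a bare ~ would give a negative number.

```python
def xnor(a, b, bits=4):
    """Return ~(a ^ b) restricted to the given number of bits."""
    mask = (1 << bits) - 1
    return ~(a ^ b) & mask

print(xnor(4, 6, bits=3))  # 5, matching ~010 = 101
print(xnor(6, 9, bits=4))  # 0, matching ~1111 = 0000
```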
24,769,117 | 2014-07-15T22:18:00.000 | 160 | 0 | 1 | 0 | python,intellij-idea | 24,769,264 | 5 | false | 0 | 0 | With the Python plugin installed:
Navigate to File > Project Structure.
Under the Project menu for Project SDK, select "New" and
Select "Python SDK", then select "Local".
Provided you have a Python SDK installed, the flow should be natural from there - navigate to the location your Python installation lives. | 3 | 115 | 0 | There is a tutorial in the IDEA docs on how to add a Python interpreter in PyCharm, which involves accessing the "Project Interpreter" page. Even after installing the Python plugin, I don't see that setting anywhere.
Am I missing something obvious? | How do I configure a Python interpreter in IntelliJ IDEA with the PyCharm plugin? | 1 | 0 | 0 | 74,300 |
24,769,117 | 2014-07-15T22:18:00.000 | 2 | 0 | 1 | 0 | python,intellij-idea | 44,602,198 | 5 | false | 0 | 0 | Follow these steps:
Open Settings (Ctrl + Alt + S)
Click on plugins
Find Browse Repositories and click
Search for "python"
Select Python SDK or pycharm
Restart the IDE
Go to project structure
Select the python SDK in projects or create a new project with python SDK. | 3 | 115 | 0 | There is a tutorial in the IDEA docs on how to add a Python interpreter in PyCharm, which involves accessing the "Project Interpreter" page. Even after installing the Python plugin, I don't see that setting anywhere.
Am I missing something obvious? | How do I configure a Python interpreter in IntelliJ IDEA with the PyCharm plugin? | 0.07983 | 0 | 0 | 74,300 |
24,769,117 | 2014-07-15T22:18:00.000 | 3 | 0 | 1 | 0 | python,intellij-idea | 56,439,498 | 5 | false | 0 | 0 | If you have multiple modules in your project, with different languages, you can set the interpreter in the following way:
File -> Project Structure...
Select Modules in the list on the left
Select the Python module in the list of modules
On the right-hand side, either choose an existing Python SDK from the dropdown list, or click on the New... button to create either a virtualenv, or create a new Python SDK from a Python installation on your system. | 3 | 115 | 0 | There is a tutorial in the IDEA docs on how to add a Python interpreter in PyCharm, which involves accessing the "Project Interpreter" page. Even after installing the Python plugin, I don't see that setting anywhere.
Am I missing something obvious? | How do I configure a Python interpreter in IntelliJ IDEA with the PyCharm plugin? | 0.119427 | 0 | 0 | 74,300 |
24,769,574 | 2014-07-15T23:02:00.000 | 1 | 1 | 0 | 0 | python,cloud | 24,769,619 | 2 | false | 1 | 0 | Here are two approaches to this problem, both of which require shell access to the cloud server.
Write the program to handle the scheduling itself. For example, sleep and wake up every few milliseconds to perform the necessary checks. You would then transfer this file to the server using a tool like scp, log in, and start it in the background using something like python myscript.py &.
Write the program to do a single run only, and use the scheduling tool cron to start it up every minute of the day. | 2 | 1 | 0 | I'm fairly competent with Python but I've never 'uploaded code' to a server before and have it run automatically.
I'm working on a project that would require some code to be running 24/7. At certain points of the day, if a criterion is met, a process is started. For example: a database may contain records of what time each user wants to receive a daily newsletter (for some subjective reason) - the code would, at the right time of day, send the newsletter to the correct person. But of course, all of this is running out on a cloud server.
Any help would be appreciated - even correcting my entire formulation of the problem! If you know how to do this in any other language - please reply with your solutions!
Thanks! | Uploading code to server and run automatically | 0.099668 | 0 | 0 | 262 |
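A minimal sketch of the first approach in the answer above (a long-running script that does its own scheduling); due_subscribers() and send_newsletter() are hypothetical stubs standing in for the real database query and mail-sending code.

```python
import time
from datetime import datetime

def due_subscribers(now):
    """Return the users whose preferred send time matches `now` (stub)."""
    return []

def send_newsletter(user):
    """Send the daily newsletter to one user (stub)."""
    pass

while True:
    now = datetime.now().replace(second=0, microsecond=0)
    for user in due_subscribers(now):
        send_newsletter(user)
    time.sleep(60)  # wake up once a minute; a cron-driven one-shot script works too
```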
24,769,574 | 2014-07-15T23:02:00.000 | 1 | 1 | 0 | 0 | python,cloud | 24,983,741 | 2 | false | 1 | 0 | It took a few days, but I finally found a way to work this out. The most practical way to get this working is to use a VPS that runs the script. The confusing part of my code was that each user would activate the script at a different time for themselves. To do this, say at midnight, the VPS runs the Python script (using scheduled tasking or something similar). The script would then pull times from a database and process the code at those times.
Thanks for your time anyways! | 2 | 1 | 0 | I'm fairly competent with Python but I've never 'uploaded code' to a server before and have it run automatically.
I'm working on a project that would require some code to be running 24/7. At certain points of the day, if a criterion is met, a process is started. For example: a database may contain records of what time each user wants to receive a daily newsletter (for some subjective reason) - the code would, at the right time of day, send the newsletter to the correct person. But of course, all of this is running out on a cloud server.
Any help would be appreciated - even correcting my entire formulation of the problem! If you know how to do this in any other language - please reply with your solutions!
Thanks! | Uploading code to server and run automatically | 0.099668 | 0 | 0 | 262 |
24,771,722 | 2014-07-16T03:29:00.000 | 0 | 0 | 0 | 0 | python,amazon-web-services,flask,amazon-elastic-beanstalk | 24,920,993 | 1 | false | 1 | 0 | An internal 500 error can have many causes. The "No files found to execute" issue seems interesting... Could you link to your log file? Or, more specifically, just show the /var/log/httpd/error_log section of your log file. This is usually very helpful for diagnosing things that go wrong after a successful deployment. Sorry I can't give a definitive answer yet - 500 is very generic! | 1 | 0 | 0 | I am still new to this, so please pardon the inexperience.
I am currently developing a website with a coder using python and flask, and it needs to be placed into elastic beanstalk.
The website was developed offline, and we are trying to upload it to EB.
We have followed the instructions in the AWS support documentation, and have been able to get the "Hello World!" sample site working. However, reading the logs shows, at the bottom, "No files found to execute".
When we upload our website, it shows the same error in the logs, but instead of showing the website, it shows an Internal 500 error.
Any advice is appreciated! | AWS ElasticBeanstalk "no files found to execute" | 0 | 0 | 0 | 74 |
24,774,512 | 2014-07-16T07:18:00.000 | 0 | 0 | 0 | 0 | python,django,forms | 24,777,165 | 2 | false | 1 | 0 | I have tried your case in the Django admin. There is a request object in Django, which I suppose you know about. You can get the user's selection with request.user.user_permissions.select_related(). I hope it will help you. | 1 | 1 | 0 | I have created a Django form using the django-registration package.
In it I have two selection fields: the first is country, the second is state, which depends on the country. If there are any errors when I submit the form, it returns the form with the user's data filled in.
But I am facing problems with the country and state selection fields.
Please give me an idea of how to solve this. | django registration page with country , state field after submission | 0 | 0 | 0 | 178
24,776,000 | 2014-07-16T08:36:00.000 | 2 | 0 | 0 | 0 | python,mysql,sql | 24,776,157 | 1 | true | 0 | 0 | Generally Database queries should be faster than python for two reasons:
Databases are optimised to work with data: they will optimise a high-level language like SQL to get the best performance, while Python might be fast but is not guaranteed to be.
Running SQL analyses the data at the source, so you don't need to transfer it first.
That being said, there might be some extremely complex queries which could be faster in Python, but this doesn't seem to be the case for yours. Also, the more you squash the data with SQL, the smaller and simpler the algorithm in Python will be.
Lastly, I don't know your queries, but it should be possible to run them for all 24 hours at once, including the duplicate removal and counting. | 1 | 1 | 0 | I'm using matplotlib and MySQLdb to create some graphs from a MySQL database. For example, the number of unique visitors in a given time period, grouped by periods of, say, 1 hour. So, there'll be a bunch of (Time, visits in 1-hour period near that time) points.
I have a table as (ip, visit_time) where each ip can occur multiple times.
My question is: should I run a single query and then process the results (remove duplicates, do the counting, etc.), or should I run multiple SQL queries (for example, for a 1-day period, there would be 24 queries to find the number of visits in each hour)? Which will be faster and more efficient? | Python and MySQL - which to use more? | 1.2 | 1 | 0 | 86
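A sketch of the "one query for the whole day" idea from the answer, using MySQLdb; the table name visits is an assumption (the question only gives the columns ip and visit_time), and the literal % signs inside DATE_FORMAT are doubled because of MySQLdb's parameter style.

```python
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="stats")
cur = conn.cursor()
cur.execute("""
    SELECT DATE_FORMAT(visit_time, '%%Y-%%m-%%d %%H:00:00') AS hour,
           COUNT(DISTINCT ip) AS unique_visitors
    FROM visits
    WHERE visit_time >= %s AND visit_time < %s
    GROUP BY hour
    ORDER BY hour
""", ("2014-07-01", "2014-07-02"))
points = cur.fetchall()  # [(hour, unique_visitors), ...] ready to plot with matplotlib
```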
24,776,767 | 2014-07-16T09:12:00.000 | 1 | 0 | 0 | 0 | python,qt,slider,pyqt,limit | 34,220,799 | 3 | false | 0 | 1 | Actually @Trilarion, setting setWrapping to false does not fix this problem.
The Dial still jumps from the end to the beginning and vice versa. This is not acceptable in some applications.
The only solution, I believe, is to explicitly prevent the Dial from doing this in a slot, for example one connected to valueChanged. If you store the previous value, you can stop the jump by setting the value back again. | 1 | 1 | 0 | My question is about QDial. It can make a full turn: when the value is 0 it jumps to 100, and if the value is 100 and you turn it a little more it suddenly jumps back to 0.
How can I disable this? | QDial disabling full tour | 0.066568 | 0 | 0 | 351 |
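A hedged sketch of the slot-based workaround described in the first answer (PyQt4-style imports to match the question's era); the half-range jump threshold is an assumption about what counts as a wrap-around.

```python
from PyQt4 import QtGui

class NoWrapDial(QtGui.QDial):
    """QDial subclass that undoes the 0 <-> maximum jump in a valueChanged slot."""
    def __init__(self, parent=None):
        super(NoWrapDial, self).__init__(parent)
        self._last = self.value()
        self.valueChanged.connect(self._check_jump)

    def _check_jump(self, value):
        # A change larger than half the range means the dial wrapped around,
        # so push it back to the previous value instead of accepting the jump.
        if abs(value - self._last) > (self.maximum() - self.minimum()) / 2:
            self.setValue(self._last)
        else:
            self._last = value
```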
24,780,630 | 2014-07-16T12:19:00.000 | 2 | 0 | 0 | 0 | python,cross-compiling,magic-numbers | 24,780,673 | 1 | true | 0 | 0 | You cannot cross-compile for other Python versions, no.
Just install Python 2.6 next to Python 2.7 and use compileall with that to produce your bytecode files instead. You can install multiple versions of Python quite painlessly. | 1 | 0 | 0 | I built a tool in python 2.7.5 and I compiled it with python -m compileall
When I tried to use it on the destination platform (Python 2.6.6) I got that annoying "Magic Number" error.
I have already read a bunch of things about that error and I think I understand what's happening...
Then my question is: is there a way to specify the "target platform" when I compile the .py files, or should I downgrade my current version of Python to match the "production" one? | Is there a way to avoid "Magic Number" issues when I know the destination plateforme? | 1.2 | 0 | 0 | 298
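A small sketch of the accepted answer's advice: generate the .pyc files with the target interpreter rather than trying to cross-compile. The directory name is a placeholder; the point is that this is run with the destination Python version (2.6 here), so the bytecode carries the right magic number.

```python
# Run this with the *target* interpreter (e.g. the Python 2.6 installation).
import compileall

compileall.compile_dir("mytool", force=True)  # "mytool" is a placeholder path
```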
24,782,368 | 2014-07-16T13:40:00.000 | 0 | 0 | 0 | 0 | python,django,python-3.x,django-admin | 24,782,492 | 3 | false | 1 | 0 | I would deal with it in a simple way: find a function the admin must execute, such as save_model(), and put your own function in it. | 1 | 2 | 0 | I would like a Python function to be run any time a page from the admin interface is loaded. This does not need to run on every page load (i.e. non-admin-related pages). This function requires the request object or the user id.
I know I could create a new view and call it from javascript from the page, but this seems like a horrible way of doing it. Is there a way to somehow attach a function to admin page loads server side without adding any additional dependencies?
Update: the function is used to hook into a kind of logging system that keeps track of first/last interaction of each user each day that needs to be updated whenever the user uses the admin interface. It would seem that a middleware app would allow me to do what I want to do, thank you guys! | How to run a python function on every Admin page load in Django | 0 | 0 | 0 | 1,889 |
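Since the question's update points at middleware, here is a rough sketch of that route for the Django 1.x era (old-style middleware added to MIDDLEWARE_CLASSES after the authentication middleware); log_admin_interaction is a hypothetical hook standing in for the logging system mentioned in the update.

```python
def log_admin_interaction(user_id):
    """Record the first/last admin interaction of the day for this user (stub)."""
    pass

class AdminActivityMiddleware(object):
    """Runs on every request; only acts on admin pages for logged-in users."""
    def process_request(self, request):
        if request.path.startswith("/admin/") and request.user.is_authenticated():
            log_admin_interaction(request.user.id)
        return None
```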
24,783,069 | 2014-07-16T14:12:00.000 | 2 | 1 | 0 | 0 | python,matlab,tcp,hex | 24,783,138 | 1 | true | 0 | 0 | These numbers are the ASCII codes for the characters of "3ff0000000000000".
Basically, what you are sending over the wire is a string; you need to interpret it as a hexadecimal number first. | 1 | 0 | 0 | I have a Matlab script that sends a number in hexadecimal representation to a Python socket server. Then Python sends the same message back.
Python receives: 3ff0000000000000.
But Matlab receives (using fread):
51 102 102 48 48 48 48 48 48 48 48 48 48 48 48 48.
What does this mean? I can't figure out from Matlab's documentation what to do with those numbers. I've tried converting them to hexadecimal using mat2str and num2str but none of the results make sense to me. | Matlab fread from Python Socket | 1.2 | 0 | 0 | 174 |
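A quick check of the claim in the answer, done on the Python side: the numbers fread() returned are just the ASCII codes of the hex string, and interpreting that string as the bit pattern of a double gives 1.0.

```python
import struct

data = [51, 102, 102, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48]
hex_string = "".join(chr(b) for b in data)          # '3ff0000000000000'
bits = int(hex_string, 16)
value = struct.unpack(">d", struct.pack(">Q", bits))[0]
print(hex_string, value)                            # 3ff0000000000000 1.0
```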
24,785,121 | 2014-07-16T15:45:00.000 | 0 | 0 | 1 | 0 | python,pyqt,updating,configparser | 24,871,124 | 2 | false | 0 | 1 | Updating the state of your application may not be a trivial thing if you are somewhere in the middle. Just an example:
Your app = A car
You launch your app = You start your car
You set in the preferences the variable type_tyre to Winter
Your running app still has type_tyre equal to Summer
You attempt to change tyres while driving on the highway
Crash
And this even though changing the tyres before starting the car would have been a trivial and safe thing to do.
So you just have to write a routine that adjusts the state of your application according to the change in preferences; this is in general different from initializing the app and depends on its current state. But otherwise, just write such a routine and call it. | 1 | 0 | 0 | I have serious doubts regarding the Python config file approach.
I am creating a program with a GUI (PyQt). The program loads some settings from a .cfg file using the configparser module. And the user can edit these settings from the GUI with the user preferences widget. When the preferences widget is closed the .cfg file is saved but I don't know how to update the rest of the program using the updated settings values.
I will try to explain with an example:
I launch the program. It creates a ConfigParser() named config and reads settings.cfg.
The program retrieves the value of the option 'clock_speed' (let's say it is 5) from config and sets clkSpd = 5.
I click on Edit -> Preferences and change the clock speed via a QSpinBox to 8.
I close the Preferences widget, settings.cfg is saved, and the value of the option 'clock_speed' is now 8.
BUT in its module, clkSpd is still 5.
I know I can just load the settings, edit them, save them and reload all settings each time I close the Preferences window. It's simple but not very beautiful.
Is there a classic and efficient approach for config files in read/write mode?
Thanks in advance. | Python - update configparser between modules | 0 | 0 | 0 | 699
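One possible shape for the "write a routine and call it" advice in a PyQt application: wrap the parser in a small object that emits a signal after saving, so each module can re-read the values it cares about. The class, signal and file names below are assumptions, not part of the original code.

```python
from PyQt4 import QtCore
import ConfigParser  # 'configparser' on Python 3

class Settings(QtCore.QObject):
    settingsChanged = QtCore.pyqtSignal()

    def __init__(self, path="settings.cfg"):
        super(Settings, self).__init__()
        self.path = path
        self.config = ConfigParser.ConfigParser()
        self.config.read(path)

    def save(self):
        with open(self.path, "w") as f:
            self.config.write(f)
        self.settingsChanged.emit()   # every listener re-reads its own values

# Elsewhere, e.g.: settings.settingsChanged.connect(main_window.apply_settings)
```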
24,785,562 | 2014-07-16T16:07:00.000 | 7 | 0 | 1 | 1 | python,windows,path-variables,python-install | 24,786,526 | 2 | true | 0 | 0 | If you only have one version of Python installed, it won't matter.
If you have multiple versions installed, then the first one that appears in your system Path will be executed when you use the "python" command. Additionally, it can make older versions inaccessible without extra work. For example, I had a system with Python 2.7 installed and I added 3.2 on top of that and checked the option to to add Python.exe to the path during installation. After doing that, entering both "python" and "python3" on the command line opened up Python 3.2, so I would need to enter the full path to the 2.7 interpreter when I needed to execute 2.x scripts. | 2 | 6 | 0 | I'm reinstalling Python, on Windows 7, and one of the first dialog boxes is the Customize Python screen.
The default setting for "Add Python.exe to Path" is "Entire feature will be unavailable."
I always change this to "Will be installed on local hard drive."
It's not an issue, changing the system environment variables is a snap, but is there any upside to leaving this un-ticked? | Why wouldn't I want to add Python.exe to my System Path at install time? | 1.2 | 0 | 0 | 9,230 |
24,785,562 | 2014-07-16T16:07:00.000 | 1 | 0 | 1 | 1 | python,windows,path-variables,python-install | 24,786,463 | 2 | false | 0 | 0 | One upside I can think of is if you run multiple python versions in windows. So, you have c:\python34 and c:\python27 but both are in the path, you'll get whichever comes first, leading you to a possibly unexpected result. | 2 | 6 | 0 | I'm reinstalling Python, on Windows 7, and one of the first dialog boxes is the Customize Python screen.
The default setting for "Add Python.exe to Path" is "Entire feature will be unavailable."
I always change this to "Will be installed on local hard drive."
It's not an issue, changing the system environment variables is a snap, but is there any upside to leaving this un-ticked? | Why wouldn't I want to add Python.exe to my System Path at install time? | 0.099668 | 0 | 0 | 9,230 |
24,785,824 | 2014-07-16T16:20:00.000 | 0 | 0 | 0 | 0 | python,csv,xlrd,xlsxwriter | 24,785,891 | 5 | false | 0 | 0 | Look at openoffice's python library. Although, I suspect openoffice would support MS document files.
Python has no native support for Excel file. | 1 | 1 | 1 | I have a folder with a large number of Excel workbooks. Is there a way to convert every file in this folder into a CSV file using Python's xlrd, xlutiles, and xlsxWriter?
I would like the newly converted CSV files to have the extension '_convert.csv'.
OTHERWISE...
Is there a way to merge all the Excel workbooks in the folder to create one large file?
I've been searching for ways to do both, but nothing has worked... | Converting a folder of Excel files into CSV files/Merge Excel Workbooks | 0 | 1 | 0 | 2,934 |
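A hedged sketch of the batch conversion with xlrd and the standard csv module (xlsxwriter is not actually needed for CSV output); the folder name is a placeholder and only the first sheet of each workbook is converted.

```python
import csv
import glob
import os

import xlrd

for path in glob.glob(os.path.join("workbooks", "*.xls*")):
    book = xlrd.open_workbook(path)
    sheet = book.sheet_by_index(0)
    out_path = os.path.splitext(path)[0] + "_convert.csv"
    with open(out_path, "w") as f:   # on Python 2, open with mode "wb" instead
        writer = csv.writer(f)
        for r in range(sheet.nrows):
            writer.writerow(sheet.row_values(r))
```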
24,787,347 | 2014-07-16T17:44:00.000 | 1 | 0 | 0 | 0 | python,django,django-templates,django-views | 24,796,873 | 1 | true | 1 | 0 | Look at the URL of the page. Then go to urls.py and look at which view is linked to the URL. Then open views.py and search for the view which the URL linked to.
In that view, the variable 'x' should be there. If it's not, then check the template context processors and middlewares as karthikr suggested. | 1 | 0 | 0 | Is there a good general way of finding the line of python code responsible for passing in variables to django templates? When newly picking up a large code base, and I see {{ x.y }} in the template, and nothing obviously related (by how things are named) to x in the {% load ... %}, what do I do? Where can I find this variable in the python so that I can change it or related code?
My current solutions tend to be tedious and overwhelming. It's a lot of searching, but I would like to be able to just know where to look. | How do I find where a django template variable comes from in the python | 1.2 | 0 | 0 | 668 |
24,790,239 | 2014-07-16T20:38:00.000 | 0 | 0 | 1 | 0 | ipython,python-sphinx | 24,790,240 | 1 | true | 0 | 0 | You're getting the error because you don't have the most recent iPython installed. You probably installed it with sudo apt-get install ipython, but you should upgrade using sudo pip install ipython --upgrade and then making sure that the previous installation was removed by running sudo apt-get remove ipython. | 1 | 0 | 0 | When compiling documentation using Sphinx, I got the error AttributeError: 'str' object has no attribute 'worksheets'. How do I fix this? | Compiling Sphinx with iPython doc error "AttributeError: 'str' object has no attribute 'worksheets'" | 1.2 | 1 | 0 | 466 |
24,791,113 | 2014-07-16T21:30:00.000 | 0 | 0 | 0 | 1 | python,python-2.7,go,publish-subscribe | 24,791,213 | 2 | false | 0 | 0 | For your specific pattern, simply spawning the process from Go and reading the stdout is the most efficient, there's no point adding an over head.
It highly highly depends on what your python script does, if it's one specific task then simply spawning the process and checking the exit code is more than enough, if you have to keep the script in the background at all time and communicate with it then Redis or ZeroMQ are good, and very mature on both Go and Python.
If it's on a different server then ZeroMQ/RPC or just a plain http server in python should be fine, the overhead should be minimal. | 1 | 3 | 0 | I'm working on a web application written in Golang that needs to call a Python program/module to do some heavy work. Since that is very memory/CPU intensive, it may be on a separate machine. Since Golang and Python can't talk directly, there are 3 ways to achieve this:
Just execute the python program as an OS process from Go (if on same machine) (or RPC?)
Wrap Python process in a service and expose it for it to be called from Go (may be a simple CRUD like service - A Bottle/flask restful service)
Have a simple pub-sub system in place to achieve this (Redis or some MQ system) - Adding Redis based caching is on the radar so maybe a good reason to go this way. Not sure.
The main thing is that the python process that takes really long to finish must "inform" the web application that it has finished. The data could either be in a file/DB or 'returned' by the process.
What could be the simplest way to achieve this in a pub/sub like environment?
UPDATE
REST seems like one way but would incur the cost of implementing server side push which may or may not be easily doable with existing micro web frameworks. The pub/sub would add an additional layer of complexity for maintainability and a learning curve nevertheless. I'm not sure if an RPC like invocation could be achieved across machines. What would be a good choice in this regard? | Simplest pub-sub for golang <--> python communication, possibly across machines? | 0 | 0 | 0 | 3,206 |
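A minimal sketch of the Python half of option 3 from the question: when the heavy job finishes, publish a message on a Redis channel that the Go web application subscribes to. The channel name, payload fields and job function are assumptions for illustration.

```python
import json

import redis

def heavy_job(job_id):
    """Stand-in for the long-running, memory/CPU-intensive work."""
    return {"job_id": job_id, "status": "done", "result_path": "/tmp/result.json"}

r = redis.StrictRedis(host="localhost", port=6379)
result = heavy_job("42")
r.publish("jobs.finished", json.dumps(result))  # the Go side subscribes to this channel
```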
24,791,510 | 2014-07-16T21:57:00.000 | 0 | 1 | 0 | 0 | python,mysql | 24,795,785 | 1 | false | 0 | 0 | Is your goal is to use MySQL workbench to build a live-view of your data ? If so I don't think you're using the right tools.
You may just use ElasticSearch to store your data and Kibana to display it, this way you'll have free graphs and charts of your stored data, and auto-refresh (based on an interval, not on events).
You also may take a look a Grafana, an event more specialized tool in storing / representing graphs of values.
But if you really want to store your data on MySQL, you may not want to use MySQL Workbench as a user interface, it's a developper tool to build your database. You may however build a graphical interface from scratch, and send it an event when you're updating your tables so it refreshes itself, but it's a lot of work that Kibana/Grafana does for you. | 1 | 0 | 0 | I currently have a Raspberry Pi running Iperf non stop and collecting results.
After collecting results it uploads the bandwidth tests to MySQL.
Is there a way to automatically refresh the table to which the data is added? | MySQL WorkBench How to automatically re run query? | 0 | 1 | 0 | 726 |
24,792,253 | 2014-07-16T23:00:00.000 | 1 | 0 | 1 | 0 | ipython | 24,807,197 | 6 | false | 0 | 0 | Use the magic of %edit stuff.py (first use) and %ed -p (after the first use) and it will invoke your $EDITOR from inside of ipython. Upon exiting from the editor ipython will run the script (unless you called %ed -x). That is by far the fastest way I found to work in CLI-ipython. The notebooks are nice, but I like having a real editor for code. | 2 | 7 | 0 | I am writing my script interactively with IPython. This is what I currently do:
write a chunk of code,
run in ipython with "run -i file_name.py".
make changes and repeat 2 until I think it is OK .
comment out the entire previous chunk.
write new chunk of code that is based on the previous one.
go back to step 2.
......
Is there more efficient way? Can I start a script from a specific line while using all the variables in current namespace? | IPython: run script starting from a specific line | 0.033321 | 0 | 0 | 2,988 |
24,792,253 | 2014-07-16T23:00:00.000 | 1 | 0 | 1 | 0 | ipython | 24,803,165 | 6 | false | 0 | 0 | I'd personally also use the ipython notebook, but you call also use you favorite text editor and always copy out the chunk of code you want to run and use the magic command %paste to run that chunk in the ipython shell. It will take care of indentation for you. | 2 | 7 | 0 | I am writing my script interactively with IPython. This is what I currently do:
write a chunk of code,
run in ipython with "run -i file_name.py".
make changes and repeat 2 until I think it is OK .
comment out the entire previous chunk.
write new chunk of code that is based on the previous one.
go back to step 2.
......
Is there more efficient way? Can I start a script from a specific line while using all the variables in current namespace? | IPython: run script starting from a specific line | 0.033321 | 0 | 0 | 2,988 |
24,793,636 | 2014-07-17T01:51:00.000 | 1 | 0 | 0 | 0 | python,graph,matplotlib,geometry,intersection | 24,797,554 | 1 | false | 0 | 0 | Maybe you should try something more analytical? It should not be very difficult:
Find the circle pairs whose distance is less than the sum of their radii; they intersect.
Calculate the intersection angles by simple trigonometry.
Draw a polygon (path) by using a suitably small delta angle in both cases (half of the polygon comes from one circle, the other half from the other circle.
Collect the paths to a PathCollection
None of the steps should be very long or difficult. | 1 | 0 | 1 | I'm working on a problem that involves creating a graph which shows the areas of intersection of three or more circles (each circle is the same size). I have many sets of circles, each set containing at least three circles. I need to graph the area common to the interior of each and every circle in the set, if it even exists. If there is no area where all the circles within the set intersect, I have nothing to graph. So the final product is a graph with little "slivers" of intersecting circles all over.
I already have a solution for this written in Python with matplotlib, but it doesn't perform very well. This wasn't an issue before, but now I need to apply it to a larger data set so I need a better solution. My current approach is basically a test-and-check brute force method: I check individual points within an area to see if they are in that common intersection (by checking distance from the point to the center of each circle). If the point meets that criteria, I graph it and move on. Otherwise, I just don't graph it and move on. So it works, but it takes forever.
Just to clarify, I don't scan through every point in the entire plane for each set of circles. First, I narrow my "search" area to a rectangle tightly bounded around the first two (arbitrarily chosen) circles in the set, and then test-and-check each point in there.
I was thinking it would be nice if there were a way for me to graph each circle in a set (say there 5 circles in the set), each with an alpha value of 0.1. Then, I could go back through and only keep the areas with an alpha value of 0.5, because that's the area where all 5 circles intersect, which is all I want. I can't figure out how to implement this using matplotlib, or using anything else, for that matter, without resorting to the same brute force test-and-check strategy.
I'm also familiar with Java and C++, if anyone has a good idea involving those languages. Thank you! | How to find and graph the intersection of 3+ circles with Matplotlib | 0.197375 | 0 | 0 | 1,241 |
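A rough sketch of the pairwise steps from the answer for two equal-radius circles: compute the intersection half-angle by trigonometry, sample the two arcs, and join them into a lens-shaped polygon. Extending this to the common region of three or more circles is left out, and the names and sample radii are illustrative only.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Polygon

def lens_polygon(c1, c2, r, n=50):
    """Boundary of the intersection of two radius-r circles, or None if disjoint."""
    c1, c2 = np.asarray(c1, float), np.asarray(c2, float)
    d = np.linalg.norm(c2 - c1)
    if d >= 2 * r:
        return None
    half = np.arccos(d / (2 * r))                # half-angle of each cut-off arc
    base1 = np.arctan2(*(c2 - c1)[::-1])         # direction from c1 toward c2
    base2 = np.arctan2(*(c1 - c2)[::-1])         # direction from c2 toward c1
    t1 = np.linspace(base1 - half, base1 + half, n)
    t2 = np.linspace(base2 - half, base2 + half, n)
    arc1 = c1 + r * np.column_stack([np.cos(t1), np.sin(t1)])
    arc2 = c2 + r * np.column_stack([np.cos(t2), np.sin(t2)])
    return np.vstack([arc1, arc2])

pts = lens_polygon((0, 0), (1.2, 0), 1.0)
fig, ax = plt.subplots()
ax.add_patch(Polygon(pts, alpha=0.5))
ax.set_xlim(-1.5, 2.5); ax.set_ylim(-1.5, 1.5); ax.set_aspect("equal")
plt.show()
```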
24,796,339 | 2014-07-17T06:29:00.000 | 1 | 0 | 1 | 0 | python,class | 24,797,103 | 2 | false | 0 | 0 | A program often has to maintain state and share resources between functions (command line options, dB connection, etc). When that's the case a class is usually a better solution (wrt/ readability, testability and overall maintainability) than having to pass the whole context to every function or (worse) using global state. | 1 | 3 | 0 | Sometimes, when looking at Python code examples, I'll come across one where the whole program is contained within its own class, and almost every function of the program is actually a method of that class apart from a 'main' function.
Because it's a fairly new concept to me, I can't easily find an example even though I've seen it before, so I hope someone understands what I am referring to.
I know how classes can be used outside of the rest of a program's functions, but what is the advantage of using them in this way compared with having functions on their own?
Also, can/should a separate module with no function calls be structured using a class in this way? | What is the benefit of having a whole program contained in a class? | 0.099668 | 0 | 0 | 227 |
24,805,002 | 2014-07-17T13:31:00.000 | 0 | 0 | 1 | 0 | python,probability,bayesian,pymc,mcmc | 24,833,321 | 1 | false | 0 | 0 | I recommend following the PyMC user's guide. It explicitly shows you how to specify your model (including priors). With MCMC, you end up getting marginals of all posterior values, so you don't need to know how to marginalize over priors.
The Dirichlet is often used as a prior to multinomial probabilities in Bayesian models. The values of the Dirichlet parameters can be used to encode prior information, typically in terms of a notional number of prior events corresponding to each element of the multinomial. For example, a Dirichlet with a vector of ones as the parameters is just a generalization of a Beta(1,1) prior to multinomial quantities. | 1 | 0 | 0 | I am going through the tutorial about Monte Carlo Markov Chain process with pymc library. I am also a newbie using pymc and try to establish my own MCMC process. I have faced couple of question that I couldn't find proper answer in pymc tutorial:
First: How could we define priors with pymc and then marginalise over the priors in the chain process?
My second question is about Dirichlet distribution , how is this distribution related to the prior information in MCMC and how should it be defined? | Defining priors and marginalizing over priors in pymc | 0 | 0 | 0 | 513 |
24,806,675 | 2014-07-17T14:45:00.000 | 1 | 0 | 0 | 1 | python,google-app-engine,email | 24,814,266 | 1 | true | 1 | 0 | You can set up a CRON job to run every few minutes and process your email queue. It will require an endpoint where you can send a POST request, but you can use a secret token (like just any random guid) to verify the request is legitimate before you send the email. | 1 | 0 | 0 | I was wondering how I would go about emailing user emails stored in a python datastore.
Should I create a sort of maintenance page where I can log in as an administrator and then send an email or is there a way for me to execute a python script without needing a handler pointing to a separate webpage so I don't have to worry about the page being discovered and exploited. | Google App Engine send batch email | 1.2 | 0 | 0 | 88 |
24,806,713 | 2014-07-17T14:46:00.000 | 3 | 0 | 0 | 0 | python,tkinter | 24,814,914 | 2 | true | 0 | 1 | No, there is no way to change the defaults. You can easily write your own grid function to automatically configure the weight of each column. You could do this by subclassing Frame, for instance. | 1 | 0 | 0 | I have an application made up of Frames, Frames in Frames and Labels in Frames. There is quite a lot of them and I am looking for a way to modify some of the default values.
I am particularly interested in modifying .columnconfigure() since I call .columnconfigure(0, weight=1) on each of the columns, in each frame. This does not help with the code cleanness.
Is there a way to set this behavior (ability to expand) globally? | how to change default values for .columnconfigure() in tkinter? | 1.2 | 0 | 0 | 207 |
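A small sketch of the "write your own grid helper by subclassing Frame" idea from the answer; the class and method names are made up, and the helper simply sets weight=1 on every column it places a widget into.

```python
try:
    import tkinter as tk       # Python 3
except ImportError:
    import Tkinter as tk       # Python 2

class StretchyFrame(tk.Frame):
    def grid_widget(self, widget, row, column, **kwargs):
        widget.grid(row=row, column=column, **kwargs)
        self.columnconfigure(column, weight=1)   # applied automatically per column

root = tk.Tk()
frame = StretchyFrame(root)
frame.pack(fill="both", expand=True)
frame.grid_widget(tk.Label(frame, text="left", relief="groove"), 0, 0, sticky="ew")
frame.grid_widget(tk.Label(frame, text="right", relief="groove"), 0, 1, sticky="ew")
root.mainloop()
```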
24,810,929 | 2014-07-17T18:27:00.000 | 0 | 0 | 0 | 1 | python,ios,audio,streaming | 24,811,718 | 1 | false | 1 | 0 | What happens when the playback is restarted? Print the HTTP URLs on the server. Does the player start from index=0, go to index=4000, then back to index=0 again? | 1 | 0 | 0 | I'm creating a simple app that can play audio files (currently only mp3 files) located on a webserver.
Currently, I'm using Python's SimpleHTTPServer server side, and the AVAudioPlayer for iOS.
It sort of works, since the file is streamed over HTTP instead of just being downloaded from the webserver. But I often experience that the playback of a file is suddenly restarted.
I'm considering using another method of streaming, eg. RTMP, but on the other hand I want to keep things simple. I'm wondering if another HTTP server might do the trick? Any other experiences/suggestions? | Streaming audio from webserver | 0 | 0 | 0 | 452 |
24,812,444 | 2014-07-17T19:59:00.000 | 1 | 1 | 1 | 0 | python | 37,527,223 | 6 | false | 0 | 0 | j (not J) is used in Electrical Engineering as mentioned before.
i for current: yes, both I (dc) and i (ac) are used for current. | 2 | 72 | 0 | I know this is an electrical engineering convention, but I'm still wondering why it was chosen for Python. I don't know other programming languages with complex-number literals, so I don't have anything to compare against, but does anyone know any that do use i? | Why are complex numbers in Python denoted with 'j' instead of 'i'? | 0.033321 | 0 | 0 | 38,708 |
24,812,444 | 2014-07-17T19:59:00.000 | 1 | 1 | 1 | 0 | python | 54,385,244 | 6 | false | 0 | 0 | i in electrical engineering is typically used for i(t) or instantaneous current. I is for steady state DC (non-complex) or rms values of AC current. In addition spacial coordinates are generally expressed as i,j,k but for two dimensional items i,j are all that are needed and the "i" is dropped so the perpendicular "j" is used as in 4j3 vs 4+3i or 4i3 -See that this is not 413 at a glance.
J recognizes this notation in handling complex numbers. As a retired EE prof- I do like the use of "j" As for Current density "J" is used. | 2 | 72 | 0 | I know this is an electrical engineering convention, but I'm still wondering why it was chosen for Python. I don't know other programming languages with complex-number literals, so I don't have anything to compare against, but does anyone know any that do use i? | Why are complex numbers in Python denoted with 'j' instead of 'i'? | 0.033321 | 0 | 0 | 38,708 |
24,814,724 | 2014-07-17T22:43:00.000 | 2 | 0 | 1 | 1 | python,python-3.x,path | 24,814,742 | 2 | false | 0 | 0 | If you want to male a file specifically open with a version you can start the file with #! python3.x the x being the version you want. If you want to be able to right click and edit with that version youll need to do some tweaking in the registry | 1 | 1 | 0 | Running Windows 7. 2.7, 3.3 and 3.4 installed.
I just installed Python 3.3 for a recent project. In the command prompt, python launches 3.4, and py launches 3.3. I can access 3.3 using the 3.3 version of IDLE, but how can I access it via the command prompt?
Is there a shortcut like py that I can use? Do I need to define this on my own like an alias?
Or is the best route to somehow change the path to temporarily make 3.3 the default?
Just downloaded virtualenv, maybe that might be part of the solution. | Python: how to access 3.3 if 3.4 is the default? | 0.197375 | 0 | 0 | 238 |
24,816,237 | 2014-07-18T02:02:00.000 | 1 | 0 | 1 | 0 | python,ipython,ipython-notebook | 68,669,366 | 6 | false | 0 | 0 | If I am not wrong you mean you just need to clear the output part of a cell, which could be a valid output or some error which you don't what to look anymore. If Yes! just go to top ribbon and select Cell > Current Outputs > Clear | 1 | 192 | 0 | In a iPython notebook, I have a while loop that listens to a Serial port and print the received data in real time.
What I want to achieve to only show the latest received data (i.e only one line showing the most recent data. no scrolling in the cell output area)
What I need(i think) is to clear the old cell output when I receives new data, and then prints the new data. I am wondering how can I clear old data programmatically ? | ipython notebook clear cell output in code | 0.033321 | 0 | 0 | 263,058 |
24,818,249 | 2014-07-18T06:07:00.000 | 1 | 0 | 1 | 0 | python,python-3.x,ubuntu | 24,818,385 | 2 | false | 1 | 0 | pip install beautifulsoup4 does not work?
Did you try to create a new virtualenv in pycharm and simple add library? | 2 | 2 | 0 | I am using ubuntu14.0LTS and pycharm IDE how to download and install beautifulsoup and add beautifulsoup library into pycharm.
I tried using pip install it's not working. | beautifulsoup in python3.4 cannot use in pycharm | 0.099668 | 0 | 0 | 2,453 |
24,818,249 | 2014-07-18T06:07:00.000 | -1 | 0 | 1 | 0 | python,python-3.x,ubuntu | 29,523,112 | 2 | false | 1 | 0 | THis is how I found it,
First I click on file in the top menu.
Then I click on Settings and look for Project Interpreter; this is where you can add beautifulsoup by clicking on the green cross on the right side of the screen.
After installing it. Call it by using : from bs4 import beautifulsoup | 2 | 2 | 0 | I am using ubuntu14.0LTS and pycharm IDE how to download and install beautifulsoup and add beautifulsoup library into pycharm.
I tried using pip install it's not working. | beautifulsoup in python3.4 cannot use in pycharm | -0.099668 | 0 | 0 | 2,453 |
24,823,592 | 2014-07-18T11:07:00.000 | 1 | 0 | 1 | 0 | python,python-3.x | 24,824,001 | 2 | false | 0 | 0 | In order to sort a list, we need to get all of the elements and put them in the right order. So whatever kind of iterable is passed to sorted will be converted into a list. Once the list is already created, there's no advantage to returning an iterator.
reversed is different; if you pass it a list, there's no need for it to create a new list that's back to front, it is more efficient to return a generator that will access elements from the original list on demand.
Note that reversed doesn't do any sorting on the values in a sequence, it reverses the order in which the elements appear. | 1 | 2 | 0 | I have some questions about the ideas that brought to make some choices in the creation of Python.
First of all, regarding the two built-in functions sorted() and reversed(): why does the former return a list while the latter returns an iterator? Why make this difference? | Comparison between sorted()/reversed() | 0.099668 | 0 | 0 | 78
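A two-line illustration of the difference discussed above: sorted() always materialises a new list, while reversed() hands back a lazy iterator over the original sequence.

```python
data = [3, 1, 2]
print(sorted(data))          # [1, 2, 3] -- a brand new list
print(reversed(data))        # <list_reverseiterator object ...> -- a lazy view
print(list(reversed(data)))  # [2, 1, 3] once you consume it
```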
24,830,313 | 2014-07-18T17:02:00.000 | 2 | 0 | 0 | 0 | android,python,kivy | 24,830,380 | 1 | true | 0 | 1 | On mobile, your app should automatically fill the phone screen. You don't need to worry about it. On desktop, you can use the --size=WxH option to test a specific screen size, or use the screen module (-m screen:nexus7 for example - run kivy with -m screen to see the available options).
No. All mouse/touchscreen interactions are considered touches in Kivy. So using on_touch_down/on_touch_move/on_touch_up will work regardless of the input device. The only difference is that with touchscreen you could have multi-touch - but if you write your app assuming single-touch it will work the same on both mobile and desktop. | 1 | 1 | 0 | I have two questions that I can not answer to myself:
How can I change the size of my window, if I do not know the exact size of the phone screen? I.e. my aim is to fit all screen sizes.
Is there any difference between clicking with mouse and touching with fingers in the code? If I write code for clicking, will it work with touch? | Kivy: how to change window size properties and the difference between click and touch | 1.2 | 0 | 0 | 399 |
24,832,498 | 2014-07-18T19:34:00.000 | 0 | 0 | 0 | 1 | python,batch-file,gnupg,pgp | 24,832,638 | 5 | false | 0 | 0 | Use --recv-keys to get the keys without prompting. | 1 | 1 | 0 | I'm working on an application that will eventually graph the gpg signature connections between a predefined set of email addresses. I need it to programmatically collect the public keys from a key server. I have a working model that will use the --search-keys option to gpg. However, when run with the --batch flag, I get the error "gpg: Sorry, we are in batchmode - can't get input". When I run with out the --batch flag, gpg expects input.
I'm hoping there is some flag to gpg that I've missed. Alternatively, a library (preferably python) that will interact with a key server would do. | Using gpg --search-keys in --batch mode | 0 | 0 | 0 | 3,077 |
24,833,833 | 2014-07-18T21:08:00.000 | 2 | 0 | 0 | 0 | python,flask,zeromq | 24,890,173 | 3 | false | 1 | 0 | Is the ZMQ socket in your app connect()-ing, or is it bind()-ing? If your app is considered the client and it's connecting, then multiple instances should be able to connect without issue. If it's considered the server and it's binding, then yes, you'll have problems... but in your case, it seems like you should consider your Flask app to be more transient, and thus the client, and the other end to be more reliable, and thus the server.
But it's hard to really give any concrete advice without code, there's only so much I can intuit from the little information you've given. | 2 | 6 | 0 | I have a Flask app that accepts HTTP requests. When certain HTTP requests come in, I want to trigger a message on a zeromq stream. I'd like to keep the zeromq stream open all the time. I'm wondering what the appropriate way to do this is. Since it is recommended to use gunicorn with Flask in production, doesn't that mean that there will be multiple instances of the Flask app, and if I put the zeromq connection in the same place as the Flask app, only one of those will be able to connect, and the others will fail. | What's the appropriate way to use Flask with zeromq in production? | 0.132549 | 0 | 0 | 3,408 |
24,833,833 | 2014-07-18T21:08:00.000 | 0 | 0 | 0 | 0 | python,flask,zeromq | 24,834,017 | 3 | false | 1 | 0 | ZeroMQ shall not reuse context across different threads. The same applies to sockets.
If you manage to keep the socket used exclusively by one thread in the worker, you might reuse the socket.
Anyway, I would start by creating a new context and socket with every request and see if there is
any need to go into complexities of sharing a ZeroMQ connection. Set up ZeroMQ is often rather fast. | 2 | 6 | 0 | I have a Flask app that accepts HTTP requests. When certain HTTP requests come in, I want to trigger a message on a zeromq stream. I'd like to keep the zeromq stream open all the time. I'm wondering what the appropriate way to do this is. Since it is recommended to use gunicorn with Flask in production, doesn't that mean that there will be multiple instances of the Flask app, and if I put the zeromq connection in the same place as the Flask app, only one of those will be able to connect, and the others will fail. | What's the appropriate way to use Flask with zeromq in production? | 0 | 0 | 0 | 3,408 |
24,836,398 | 2014-07-19T03:20:00.000 | 1 | 0 | 1 | 0 | eclipse,ipython,pydev | 25,066,436 | 1 | false | 0 | 0 | There's currently no option to do that in the UI.
You can do that in a hackish way by manually opening:
plugins\org.python.pydev\pysrc\pydev_ipython_console.py
in your Eclipse installation and uncommenting the 'raise ImportError()' in the top of the file :)
Now, I'm a bit curious on why you'd prefer the PyDev version instead of the IPython version in this case... | 1 | 2 | 0 | The most recent releases of PyDev IDE for Eclipse come with IPython 'embeded' in its interactive console. I'm just wondering if there is a way to disable this option and let PyDev uses a regular python interactive console without uninstalling IPython? I know that if IPython is not installed PyDev will use a regular python interactive console. But I think there must be a way of doing it without getting rid of IPython.
If somebody knows how to do this, pleas advise. Thanks. | Is it possible to disable IPython from the Eclipse/PyDev console? | 0.197375 | 0 | 0 | 566 |
24,846,309 | 2014-07-20T00:45:00.000 | 0 | 0 | 0 | 0 | android,python,iframe,selenium,splinter | 30,110,730 | 1 | false | 1 | 0 | You can provide it as the argument when you instantiate the splinter browser object. | 1 | 0 | 0 | I've searched thoroughly and still can't find the answer to this question. I finally figured out how to prefill a form in an iframe using splinter, but it only works in firefox on my computer, while not working in another browser, let alone a mobile device. I've tried importing webdriver from selenium, etc. Still, nothing.
So far, the webbrowser works both on the pc and my android device to easily pull up a website; unfortunately, I can't get it to prefill forms in iframes.
Can anybody help???
Thank you!!! | How can I force splinter to use a default browser? | 0 | 0 | 1 | 128 |
24,854,139 | 2014-07-20T19:32:00.000 | 4 | 0 | 1 | 0 | python | 24,854,192 | 4 | false | 0 | 0 | It's not a rule, it's just a tradition.
In many languages, lists must be homogenous and tuples must be fixed-length. This is true of C++, C#, Haskell, Rust, etc. Tuples are used as anonymous structures. It is the same way in mathematics.
Python's type system, however, does not allow you to make these distinctions: you can make tuples of dynamic length, and you can make lists with heterogeneous data. So you are allowed to do whatever you want with lists and tuples in Python, it just might be surprising to other people reading your code. This is especially true if the people reading your code have a background in mathematics or are more familiar with other languages. | 3 | 11 | 0 | I feel like this must have been asked before (probably more than once), so potential apologies in advance, but I can't find it anywhere (here or through Google).
Anyway, when explaining the difference between lists and tuples in Python, the second thing mentioned, after tuples being immutable, is that lists are best for homogeneous data and tuples are best for heterogeneous data. But nobody seems to think to explain why that's the case. So why is that the case? | Lists are for homogeneous data and tuples are for heterogeneous data... why? | 0.197375 | 0 | 0 | 9,622 |
24,854,139 | 2014-07-20T19:32:00.000 | 0 | 0 | 1 | 0 | python | 24,854,153 | 4 | false | 0 | 0 | Lists are often used for iterating over them, and performing the same operation to every element in the list. Lots of list operations are based on that. For that reason, it's best to have every element be the same type, lest you get an exception because an item was the wrong type.
Tuples are more structured data; they're immutable so if you handle them correctly you won't run into type errors. That's the data structure you'd use if you specifically want to combine multiple types (like an on-the-fly struct). | 3 | 11 | 0 | I feel like this must have been asked before (probably more than once), so potential apologies in advance, but I can't find it anywhere (here or through Google).
Anyway, when explaining the difference between lists and tuples in Python, the second thing mentioned, after tuples being immutable, is that lists are best for homogeneous data and tuples are best for heterogeneous data. But nobody seems to think to explain why that's the case. So why is that the case? | Lists are for homogeneous data and tuples are for heterogeneous data... why? | 0 | 0 | 0 | 9,622 |
24,854,139 | 2014-07-20T19:32:00.000 | 17 | 0 | 1 | 0 | python | 24,854,173 | 4 | false | 0 | 0 | First of all, that guideline is only sort of true. You're free to use tuples for homogenous data and lists for heterogenous data, and there may be cases where that's a fine thing to do. One important case is if you need the collection to the hashable so you can use it as a dictionary key; in this case you must use a tuple, even if all the elements are homogenous in nature.
Also note that the homogenous/heterogenous distinction is really about the semantics of the data, not just the types. A sequence of a name, occupation, and address would probably be considered heterogenous, even though all three might be represented as strings. So it's more important to think about what you're going to do with the data (i.e., will you actually treat the elements the same) than about what types they are.
That said, I think one reason lists are preferred for homogenous data is because they're mutable. If you have a list of several things of the same kind, it may make sense to add another one to the list, or take one away; when you do that, you're still left with a list of things of the same kind.
By contrast, if you have a collection of things of heterogenous kinds, it's usually because you have a fixed structure or "schema" to them (e.g., the first one is an ID number, the second is a name, the third is an address, or whatever). In this case, it doesn't make sense to add or remove an element from the collection, because the collection is an integrated whole with specified roles for each element. You can't add an element without changing your whole schema for what the elements represent.
In short, changes in size are more natural for homogenous collections than for heterogenous collections, so mutable types are more natural for homogenous collections. | 3 | 11 | 0 | I feel like this must have been asked before (probably more than once), so potential apologies in advance, but I can't find it anywhere (here or through Google).
Anyway, when explaining the difference between lists and tuples in Python, the second thing mentioned, after tuples being immutable, is that lists are best for homogeneous data and tuples are best for heterogeneous data. But nobody seems to think to explain why that's the case. So why is that the case? | Lists are for homogeneous data and tuples are for heterogeneous data... why? | 1 | 0 | 0 | 9,622 |
24,857,816 | 2014-07-21T04:34:00.000 | 4 | 0 | 1 | 0 | python,c,performance,optimization,cython | 24,858,261 | 3 | false | 0 | 0 | You are describing a perfect use case for a hash indexed collection. You are also describing a perfect scenario for the strategy of write it first, optimise it second.
So start with the Python dict. It's fast and it absolutely will do the job you need.
Then benchmark it. Figure out how fast it needs to go, and how near you are. Then 3 choices.
It's fast enough. You're done.
It's nearly fast enough, say within about a factor of two. Write your own hash indexing, paying attention to the hash function and the collision strategy.
It's much too slow. You're dead. There is nothing simple that will give you a 10x or 100x improvement. At least you didn't waste any time on a better hash index. | 3 | 4 | 0 | I'd like to do a lookup mapping 32bit integer => 32bit integer.
The input keys aren't necessary contiguous nor cover 2^32 -1 (nor do I want this in-memory to consume that much space!).
The use case is for a poker evaluator, so doing a lookup must be as fast as possible. Perfect hashing would be nice, but that might be a bit out of scope.
I feel like the answer is some kind of cython solution, but I'm not sure about the underpinnings of cython and if it really does any good with Python's dict() type. Of course a flat array with just a simple offset jump would be super fast, but then I'm allocating 2^32 - 1 places in memory for the table, which I don't want.
Any tips / strategies? Absolute speed with minimal memory footprint is the goal. | absolute fastest lookup in python / cython | 0.26052 | 0 | 0 | 1,906 |
24,857,816 | 2014-07-21T04:34:00.000 | 6 | 0 | 1 | 0 | python,c,performance,optimization,cython | 24,857,860 | 3 | false | 0 | 0 | You aren't smart enough to write something faster than dict. Don't feel bad; 99.99999% of the people on the planet aren't. Use a dict. | 3 | 4 | 0 | I'd like to do a lookup mapping 32bit integer => 32bit integer.
The input keys aren't necessary contiguous nor cover 2^32 -1 (nor do I want this in-memory to consume that much space!).
The use case is for a poker evaluator, so doing a lookup must be as fast as possible. Perfect hashing would be nice, but that might be a bit out of scope.
I feel like the answer is some kind of cython solution, but I'm not sure about the underpinnings of cython and if it really does any good with Python's dict() type. Of course a flat array with just a simple offset jump would be super fast, but then I'm allocating 2^32 - 1 places in memory for the table, which I don't want.
Any tips / strategies? Absolute speed with minimal memory footprint is the goal. | absolute fastest lookup in python / cython | 1 | 0 | 0 | 1,906 |
24,857,816 | 2014-07-21T04:34:00.000 | 4 | 0 | 1 | 0 | python,c,performance,optimization,cython | 24,858,674 | 3 | true | 0 | 0 | First, you should actually define what "fast enough" means to you, before you do anything else. You can always make something faster, so you need to set a target so you don't go insane. It is perfectly reasonable for this target to be dual-headed - say something like "Mapping lookups must execute in these parameters (min/max/mean), and when/if we hit those numbers we're willing to spend X more development hours to optimize even further, but then we'll stop."
Second, the very first thing you should do to make this faster is to copy the code in Objects/dictobject.c in the Cpython source tree (make something new like intdict.c or something) and then modify it so that the keys are not python objects. Chasing after a better hash function will not likely be a good use of your time for integers, but eliminating INCREF/DECREF and PyObject_RichCompareBool calls for your keys will be a huge win. Since you're not deleting keys you could also elide any checks for dummy values (which exist to preserve the collision traversal for deleted entries), although it's possible that you'll get most of that win for free simply by having better branch prediction for your new object. | 3 | 4 | 0 | I'd like to do a lookup mapping 32bit integer => 32bit integer.
The input keys aren't necessarily contiguous, nor do they cover 2^32 - 1 (nor do I want this to consume that much space in memory!).
The use case is for a poker evaluator, so doing a lookup must be as fast as possible. Perfect hashing would be nice, but that might be a bit out of scope.
I feel like the answer is some kind of cython solution, but I'm not sure about the underpinnings of cython and if it really does any good with Python's dict() type. Of course a flat array with just a simple offset jump would be super fast, but then I'm allocating 2^32 - 1 places in memory for the table, which I don't want.
Any tips / strategies? Absolute speed with minimal memory footprint is the goal. | absolute fastest lookup in python / cython | 1.2 | 0 | 0 | 1,906 |
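Before reaching for Cython, it is worth putting a number on "fast enough", as the accepted answer suggests. A minimal timing sketch (not from the answers above, purely illustrative) for plain dict lookups with sparse 32-bit integer keys:

import random
import timeit

# Build a dict mapping sparse 32-bit integer keys to 32-bit integer values.
keys = [random.getrandbits(32) for _ in range(100000)]
table = {k: (k * 2654435761) & 0xFFFFFFFF for k in keys}
probe = random.sample(keys, 1000)

# Time repeated lookups and report the approximate cost per lookup.
n = 1000
t = timeit.timeit(lambda: [table[k] for k in probe], number=n)
print("%.1f ns per lookup" % (t / (n * len(probe)) * 1e9))

If the measured figure already meets the target, a plain dict is the simplest choice; only if it misses by a wide margin does the custom C-level table become worth the effort.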
24,859,323 | 2014-07-21T07:03:00.000 | 1 | 1 | 0 | 0 | python,raspberry-pi | 24,862,812 | 3 | false | 0 | 0 | It depends, what web framework you are going to use.
Some of them might have somewhat limited functionality on Python 3 but can still be worth using.
This could be the case with Flask, which is very lightweight and provides all that you need, but which, according to heavy users, still lacks complete Python 3 support in a few small details. This situation is likely to be resolved in the near future, but if you want to develop your application now, it is better to use the version of Python that fits your web framework.
Comments on few (not all) web frameworks
Django
Very popular, but will force you to do things in Django style.
The final solution can become a bit heavier than really necessary; this could be a problem on a Raspberry Pi, which has very limited resources available.
Flask
Also rather popular (even though not as much as Django).
Gives you freedom to use only what you need.
Very good tutorials.
Most applications run under both Python 2 and Python 3; a few (supporting) libraries are said not to be ported completely yet (I cannot say exactly which ones).
CherryPy
Minimalistic web framework, but with very good builtin HTTP and WSGI server.
It is not so easy to find good tutorials; the best resource is the (now somewhat old) book about programming in CherryPy.
Note: by default, applications are developed in debug mode and the code is autoreloaded from disk. This disk activity can slow things down on the RPi and consume some energy, so if you have trouble with that, set the app to production mode.
Conclusions
My current choice is using Flask on Python 2.7, but this is partially due to a lot of legacy code I have developed in Python 2.7.
You should decide which framework you are going to use and check what the status of its Python 3 support is. | 2 | 1 | 0 | I'm totally new in Python world.
I want to create a web application with some Python code behind. I want to use Python to control Raspberry Pi inputs and outputs etc.
There are Python 2 and Python 3 available. I've read some about these version, but I'm still not sure which version I should use. | Which Python version should I use with Raspberry Pi running web applications? | 0.066568 | 0 | 0 | 2,070 |
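For reference, the kind of minimal Flask application the answer above has in mind runs unchanged on Python 2.7 and Python 3 — a sketch, assuming Flask is installed on the Pi:

from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # In a real project this handler would read or drive GPIO state instead.
    return "Hello from the Raspberry Pi"

if __name__ == "__main__":
    # Bind to all interfaces so the page is reachable from other machines.
    app.run(host="0.0.0.0", port=8080)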
24,859,323 | 2014-07-21T07:03:00.000 | 1 | 1 | 0 | 0 | python,raspberry-pi | 24,860,000 | 3 | false | 0 | 0 | Most of the books on the topic of Python and Raspberry Pi refer to Python 3.x. I'm finding a lot of online courses and books are focusing more on 3.x than 2.7. Unless you're working at a company that's on Python 2.x and don't plan on going to 3.x, you're better off learning Python 3.x. | 2 | 1 | 0 | I'm totally new in Python world.
I want to create a web application with some Python code behind. I want to use Python to control Raspberry Pi inputs and outputs etc.
There are Python 2 and Python 3 available. I've read some about these version, but I'm still not sure which version I should use. | Which Python version should I use with Raspberry Pi running web applications? | 0.066568 | 0 | 0 | 2,070 |
24,861,392 | 2014-07-21T09:11:00.000 | 1 | 0 | 1 | 0 | python,multithreading,gil | 24,861,582 | 1 | false | 0 | 0 | If you run this process, say, once a day, then the overhead to create the thread and to destroy it is negligible.
A thread that is waiting for a signal (like a message in a queue) doesn't need the CPU, so it doesn't cost you to keep it hanging around.
That means you can look at the other design factors: error handling, stability, code complexity.
If you have the error handling nailed down, keeping the thread alive is probably better, since that will handle a corner case for you: Accidentally running two instances at the same time.
If the thread can stall or you have problems with deadlocks and things like that, then it's better to kill any existing worker thread and start a clean one. | 1 | 0 | 0 | There is a pretty big multithreaded python2 web-application. In main thread works business logic, in sub-threads mostly database operations running. No TreadPoolExecutor is used now, and it cannot be implemented in the nearest future. I want to add another thread which is supposed to process certain amount of data (fast) and store the result into the DataBase (io-operation). This operation won't be executed very often.
So, the question is: should I run a mostly-sleeping thread and wait for an event to process the data, or is it better to spawn a new thread from the main one when there is enough data and close it when processing is completed? Note that there is already a pretty large number of threads for the GIL to switch between.
Thanks. | Python: Using threads for processing jobs | 0.197375 | 0 | 0 | 29 |
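A minimal sketch of the "mostly sleeping" variant discussed in the answer: one long-lived worker blocks on a queue and only wakes up when the main thread hands it a batch to store (the names here are illustrative, not taken from the question):

import threading
import Queue  # named 'queue' on Python 3

jobs = Queue.Queue()

def save_to_database(batch):
    # placeholder for the real (I/O-bound) database write
    print("stored %d records" % len(batch))

def worker():
    while True:
        batch = jobs.get()       # blocks here, costing no CPU while idle
        if batch is None:        # sentinel used to shut the worker down
            break
        save_to_database(batch)
        jobs.task_done()

t = threading.Thread(target=worker)
t.daemon = True
t.start()

jobs.put([1, 2, 3])   # the main thread signals the worker by handing it data
jobs.join()           # wait until the batch has been processed
jobs.put(None)        # ask the worker to exit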
24,864,255 | 2014-07-21T11:42:00.000 | 1 | 0 | 0 | 0 | python,qt,pyqt4 | 24,951,937 | 1 | false | 0 | 1 | If you set verticalHeader.sizeHint to ResizeToContents, on any row update ALL table will be processed to obtain new column width. This behaviour is life saver for most of us, if we don't have a large table that frequently update.
First, don't use resizeToContents size hint!
Basic solution: use a fixed size for columns with the stretch option (I think it is not for you).
The solution I use: I have a timer that calls the resizeColumnsToContents() slot every 2 seconds.
Optimized solution: you can adapt my solution to your case; for example, you can wait until all row data is updated before calling the resize slot.
Answer to your suggestion (resize just the visible items): it is not useful. | 1 | 2 | 0 | Good afternoon,
I'm using a QTableview+QAbstractTableModel to display a potentially large amount of data (20k+ rows) where each row consists of cells holding text of various lengths, including newlines, displayed using a custom delegate. The data resides in memory (no database, stream or similar) and might be changed from outside the table. To adapt the row height to changes of the text, I set the Resize Mode of the TableView's vertical header to "ResizeToContents" which correctly uses the sizeHint of my delegate to set the height.
This works well and all, however depending on the size of the table the performance is abysmal (several minutes to load a large table). Once I turn off the resize mode, loading is as fast as lightning but of course the row height is incorrect (causing text with newlines to overlap rows, etc.). It seems that when using the auto-resize mode, every cell in every row is queried for the size hint, which takes a lot of time (confirmed by printing a debug msg in the sizeHint function of my delegate).
I wonder if this is the intended behaviour of the ResizeToContents mode, as I would assume that it would only be necessary to query the actually visible rows of the TableView - not all rows (as given by the rowCounts() call). As only a fraction of the rows are displayed at any one time, this would of course improve the performance noticeably. Is this a bug/shortcoming in the resize code or do I misunderstand something about this functionality? By the way, I'm using PyQt 4.10 so maybe this behaviour changed in newer versions?
Thanks in advance for all hints. | QTableview - ResizeToContents queries every row? | 0.197375 | 0 | 0 | 772 |
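A sketch of the timer approach from the answer, in PyQt4 terms, assuming an existing QTableView instance: instead of leaving the header in ResizeToContents mode, run one full resize pass on a fixed interval so the per-update cost disappears.

from PyQt4 import QtCore

def setup_periodic_resize(view, interval_ms=2000):
    # Resize rows/columns to contents every interval_ms instead of on every model update.
    timer = QtCore.QTimer(view)
    timer.timeout.connect(view.resizeRowsToContents)
    timer.timeout.connect(view.resizeColumnsToContents)
    timer.start(interval_ms)
    return timer  # keep a reference so the timer is not garbage collected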
24,865,495 | 2014-07-21T12:47:00.000 | 1 | 0 | 0 | 0 | python,pyramid,http-status | 24,882,407 | 2 | false | 1 | 0 | My solution with deprecated class:
class append_slash_notfound_factory(AppendSlashNotFoundViewFactory):
    def __call__(self, context, request):
        result = super(append_slash_notfound_factory, self).__call__(context, request)
        if isinstance(result, HTTPFound):
            return HTTPMovedPermanently(result.location)
        return result | 1 | 2 | 0 | Using notfound_view_config in Pyramid with the parameter append_slash=True, I get a 302 HTTP status when redirecting, but I want to set a custom HTTP status - 301.
@notfound_view_config(append_slash=True, renderer="not_found.mako")
def notfound(request):
return {} | How to return HTTPMovedPermanently (301 status) instead HTTPFound (302) in pyramid notfound_view_config | 0.099668 | 0 | 0 | 390 |
24,865,899 | 2014-07-21T13:06:00.000 | 1 | 0 | 0 | 1 | python,c,performance,testing,memory-leaks | 24,866,756 | 1 | false | 0 | 0 | this could be:
some leak in the python script
waiting on a resource in the c script/python script
writing to a file that gets bigger during the run
C software doesn't close properly
and so on. You could elaborate on what the C software does to get us more clues, also state whether other software also runs more slowly. | 1 | 2 | 0 | To properly test a piece of software (written in C) I have been working on, I have to run a high volume of tests. I've been doing this with a python script that executes my software a given number of times (generally in the range of 1000 - 10000 repititions), one after the other. I am working on a debian virtual machine (500mb ram). I've been noticing that over time the performance of the program depreciates significantly. Usually I have to go so far as rebooting the vm to get back to normal performance levels.
My first thought was a memory leak, but valgrind did not discover any in my C program. Furthermore, I would have thought the OS would take care of that after program termination either way. When I run top or free -m, I see that free ram is fairly low (20-70mb), but does not drop much while running my script, instead fluctuating around where it started.
Edit: A full rundown on what my files are doing is as follows:
C software
Many files, developed by various people
Features a loop that continues until given destination IP has been discovered
Constructs packets based off of given destination and information received from previously sent packets
Sends packets
Waits for packet replies
Python script emulating network topology
Stores fake networks
Intercepts outgoing packets and sends replies based off of said topology
Python testing script
For a given number of repetitions,
Launch network emulator
Launch C software (wait until terminated - the process launches are actually done with a bash script)
Exit network emulator
Output for the emulator and the c software are both dumped to log files, which are overwritten at each execution (so they should be kept decently short).
Can anyone give me some pointers as to what this could be? | C program performance depreciation after multiple runs | 0.197375 | 0 | 0 | 108 |
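One cheap way to narrow this down, in the spirit of the answer, is to have the test script record how long each repetition takes, so you can see whether the slowdown is gradual (suggesting something accumulating, such as growing files or leaked resources) or sudden. A sketch with a hypothetical wrapper script name:

import subprocess
import time

times = []
for i in range(1000):
    start = time.time()
    subprocess.call(["./run_one_test.sh"])   # hypothetical wrapper around emulator + C binary
    times.append(time.time() - start)
    if (i + 1) % 50 == 0:
        avg = sum(times[-50:]) / 50.0
        print("runs %d-%d: average %.2fs" % (i - 48, i + 1, avg))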
24,866,113 | 2014-07-21T13:18:00.000 | 1 | 0 | 1 | 0 | python,database | 24,866,662 | 1 | false | 0 | 0 | It's hard to say anything without knowing more about the data & aggregation you are trying to do, but definitely don't do serialize data to parse it faster with Python -- most probably that's not where the problem is. And probably not store data somehow column-wise so that I don't have to read all columns.
"sort SQLite table by GroupID so that groups come in together" <- this sounds like a good approach. But a lot of aggregations (like count, average, sum, etc.) don't require this. In this type of aggregation, you can simply hold a map of (key, aggregation), iterate through the rows, and incrementally apply each row to its aggregation (and then throw the row away).
Are you currently gathering all rows that belong to a group in-memory and then doing the aggregation? If so, you might just need to change the code so that you do the aggregation as you read the rows.
EDIT: In response to the comment:
If that's the case, then I'd go for sorting. SQL might be an overkill though if all you do is sort. Maybe you can just write the sorted file on disk? Once you do that you could look into parallilizing. Essentially you'll have one process reading the sorted file (which you don't want to parallelize as long as you don't do distributed processing), which packages one group worth of data and sends it to a pool of processes (the number of processes should be fixed to some number which you tune, to avoid memory shortage) which does the rest of processing. | 1 | 2 | 0 | I'm doing data analytics on medium sized data (2GB, 20Mio records) and on the current machine it hardly fits into memory. Windows 7 slows down considerably when reaching 3GB occupation on this 4 GB machine. Most of my current analysis need to iterate over all records and consider properties of groups of records determined by some GroupID.
How can I approach this task? My current method is to load it into SQLite and iterate by row. I build the groups in-memory, but this too grows quite large.
I had the following ideas, but maybe you can suggest better approaches:
sort SQLite table by GroupID so that groups come in together
store data somehow column-wise so that I don't have to read all columns
serialize data to parse it faster with Python?
These ideas seem hard to combine for me :( What should I do?
(PS: Hardware upgrades are hard to get. Admin rights are cumbersome, too) | Iterate over large data fast with Python? | 0.197375 | 1 | 0 | 246
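A sketch of the streaming aggregation the answer describes: iterate the SQLite cursor once and keep only one running aggregate per GroupID in a dict, so no group ever has to be materialised in memory (the table and column names are made up):

import sqlite3
from collections import defaultdict

conn = sqlite3.connect("data.db")
cursor = conn.execute("SELECT GroupID, value FROM records")  # hypothetical schema

count = defaultdict(int)
total = defaultdict(float)

for group_id, value in cursor:      # rows are processed and thrown away one by one
    count[group_id] += 1
    total[group_id] += value

for group_id in count:
    print(group_id, total[group_id] / count[group_id])   # per-group average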
24,871,598 | 2014-07-21T17:56:00.000 | 1 | 0 | 0 | 0 | javascript,jquery,python,html,simplehttpserver | 42,041,564 | 1 | false | 1 | 0 | I hope starting python server in the correct folder path of index.html file should solve the issue.
P.S. I faced the same issue and realised that I had started the Python server in the parent folder of the index.html file, which didn't contain the script.js file. | 1 | 0 | 0 | I have: I created index.html - the simplest html page & launch my simple http server with python -m SimpleHttpServer 8000.
I want: make index.html use javascript which depends on JQuery.
Problem: Seems SimpleHttpServer doesn't load JS files. I mean if I write <script src="myScript.js"/> inside index.html - myScript.js won't be loaded by browser.
Question: Why doesn't the browser download JS files?
P.S. I use OSX Mavericks 10.9.4 | Python SimpleHttpServer howto | 0.197375 | 0 | 0 | 2,367 |
24,873,558 | 2014-07-21T19:54:00.000 | 0 | 0 | 1 | 0 | python,text | 24,873,618 | 3 | false | 0 | 0 | Are the changes you are making going to be over several different runs of a program? If not, I suggest making all of your changes to the data while it is still in memory and then writing it out just before program termination. | 2 | 1 | 0 | I have achieved writing all the things I needed to the text file, but essentially the program needs to keep going back to the text file and saving only the changes. At the moment it overwrites the entire file, deleting all the previous information. | Saving to txt file but only save changes | 0 | 0 | 0 | 1,016 |
24,873,558 | 2014-07-21T19:54:00.000 | 2 | 0 | 1 | 0 | python,text | 24,874,578 | 3 | false | 0 | 0 | There is typical confusion about how are text files organized.
Text files are not organized by lines, but by bytes
When one looks at a text file, it looks like lines.
It is natural to expect that on disk it goes the same way, but this is not true.
Text files are written to disk byte by byte, often with one character represented by one byte (but
in some cases more bytes). A line of text happens to be just a sequence of bytes, being terminated
by some sort of new lines ("\n", "\n\r" or whatever is used for new line).
If we want to change the 2nd line out of 3, we would have to fit the change into just the bytes used for
the 2nd line, so as not to mess up line 3. If we wrote too many bytes for line 2, we would
overwrite bytes of line 3. If we wrote too few bytes, there would still be some (already
obsolete) bytes left over from the remainder of line 2.
Strategies to modify content of text file
Republisher - Read it, modify in memory, write all content completely back
This might at first sound like wasting a lot of effort, but it is by far the most often used approach,
and in 99% of cases it is the most effective one.
The beauty is, it is simple.
The fact is, for most file sizes it is fast enough.
Journal - append changes to the end
A rather rare approach is to write the first version of the file to disk and later on append to the
end notes about what has changed.
Reading such a file means replaying all the history of changes from the journal to find out the final
content of the file.
Surgeon - change only affected lines
In case you keep lines of fixed length (measured in bytes!! not in characters), you might point to
modified line and rewrite just that line.
This is quite difficult to do easily and is used rather with binary files. This is definitely not
the task for beginners.
Conclusions
Go for "Republisher" pattern.
Use whatever format fits your needs (INI, CSV, JSON, XML, YAML...).
Personally I prefer saving data in JSON format - the json package is part of the Python stdlib and it
supports lists as well dictionaries, what allows saving tabular as well as tree like structures. | 2 | 1 | 0 | I have achieved writing all the things I needed to the text file, but essentially the program needs to keep going back to the text file and saving only the changes. At the moment it overwrites the entire file, deleting all the previous information. | Saving to txt file but only save changes | 0.132549 | 0 | 0 | 1,016 |
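A minimal sketch of the "Republisher" pattern with JSON: load the whole file, change it in memory, write it all back (the file name and keys are just examples):

import json

def load(path):
    try:
        with open(path) as f:
            return json.load(f)
    except IOError:
        return {}          # first run: no file yet

def save(path, data):
    with open(path, "w") as f:
        json.dump(data, f, indent=2)

data = load("scores.json")
data["high_score"] = 1200   # modify only what changed, in memory
save("scores.json", data)   # republish the whole file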
24,873,787 | 2014-07-21T20:08:00.000 | 2 | 0 | 0 | 0 | python,rest,go,websocket | 24,876,825 | 2 | true | 0 | 0 | It depends on how often these signal are being sent. If it's many times per second, keeping a websocket open might make more sense. Otherwise, use option #1 since it will have less overhead and be more loosely coupled. | 1 | 1 | 0 | I have two servers: Golang and Python (2.7). The Python (Bottle) server has a computation intensive task to perform and exposes a RESTful URI to start the execution of the process. That is, the Go server sends:
HTTP GET to myserver.com/data
The python server performs the computation and needs to inform the Go server of the completion of the processing. There are two ways in which I see this can be designed:
Go sends a callback URL/data to Python and python responds by hitting that URL. E.g:
HTTP GET | myserver.com/data | Data{callbackURI:goserver.com/process/results, Type: POST, response:"processComplete"}
Have a WebSocket based response be sent back from Python to Go.
What would be a more suitable design? Are there pros/cons of doing one over the other? Other than error conditions (server crashed etc.,) the only thing that the Python server needs to actually "inform" the client is about completing the computation. That's the only response.
The team working on the Go server is not very well versed with having a Go client based on websockets/ajax (nor do I. But I've never written a single line of Go :) #1 seems to be easier but am not aware of whether it is an accepted design approach or is it just a hack? What's the recommended way to proceed in this regard? | HTTP Callback URL vs. WebSocket for ansynchronous response? | 1.2 | 0 | 1 | 1,211 |
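A sketch of option #1 on the Python side: the Bottle handler kicks off the work in a background thread and, when it finishes, POSTs the completion message to the callback URL the Go server supplied. This uses the requests library; the URLs and field names are illustrative, not taken from the question.

import threading
import bottle
import requests

def do_heavy_computation():
    pass  # placeholder for the real work

@bottle.get("/data")
def start_job():
    callback = bottle.request.query.get("callback", "http://goserver.example/process/results")

    def run():
        do_heavy_computation()
        requests.post(callback, json={"response": "processComplete"})

    threading.Thread(target=run).start()
    return {"status": "started"}

if __name__ == "__main__":
    bottle.run(host="0.0.0.0", port=8080)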
24,875,008 | 2014-07-21T21:21:00.000 | 0 | 0 | 0 | 0 | python,neural-network,pmml | 26,639,275 | 1 | false | 0 | 0 | Finally I found my own solution. I wrote my own PMML Parser and scorer . PMML is very much same as XML so its easy to build and retrieve fields accordingly. If anyone needs more information please comment below.
Thanks ,
Raghu. | 1 | 1 | 1 | I have a model(Neural Network) in python which I want to convert into a PMML file . I have tried the following:
1.)py2pmml -> Not able to find the source code for this
2.)in R -> PMML in R works fine but my model is in Python.(Cant run the data in R to generate the same model in R) . Does not work for my dataset.
3.) Now I am trying to use augustus to make the PMML file. But augustus has examples of using a already built PMML file but not how to make one
I am not able to find proper examples on how to use augustus in Python to customize the model.
Any suggestion will be good.
Thanks in advance.
GGR | Produce a PMML file for the Nnet model in python | 0 | 0 | 0 | 382 |
24,875,955 | 2014-07-21T22:33:00.000 | 0 | 1 | 0 | 0 | python,raspberry-pi,gpio | 24,920,035 | 3 | false | 0 | 0 | Make sure your script runs fine from the command line first.
Also, if you are dealing with the GPIO pins, make sure you are running your script with the proper permissions. I know when I access the GPIO pins on my pi, I need to use root/sudo to access them. | 1 | 1 | 0 | I have been looking for a few weeks now on how to make a .py file start on startup.
I have had no luck with any of the methods working; does anyone have any ideas?
The file is reasonably small and will need GPIO input from a PIR movement sensor. | Autostart on raspberry pi | 0 | 0 | 0 | 1,105 |
24,876,555 | 2014-07-21T23:32:00.000 | 1 | 0 | 1 | 1 | python,xcode,macos,python-extensions | 24,876,618 | 1 | false | 0 | 0 | The Python executable in current versions of Mac OS X is 64-bit only, so any extensions it loads must also be 64-bit. If your libraries are only available for 32-bit systems, you will be unable to link to it from a Python extension. One possible solution might be to have a separate 32-bit executable that loads the library, then communicate with that executable from your Python extension. | 1 | 0 | 0 | Request: could someone post a recipe, from top to bottom, for creating an Xcode project that will compile C code to build a Python extension? I've seen several posts here that touch upon the subject, but they seem confusing and incomplete, and they disagree.
Specific questions:
Can Mac Python 2.7 load a .dylib? Mine coldly ignores them.
Can one really solve the problem by renaming a .dylib to a .so filename extension? Various posts disagree on this question.
Are .dylib and .so actually different formats? Are there settings I could make in Xcode to make it output a true .so format?
If Python is failing to load an extension file, are there tools to diagnose it? Is there any way to poke into the file, look at its format, and see if it does or does not match what is needed?
When I renamed my .dylib to .so, I got the following error message:
ImportError: dlopen(/Library/Python/2.7/site-packages/pypower.so, 2): no suitable image found. Did find:
/Library/Python/2.7/site-packages/pypower.so: mach-o, but wrong architecture
My project is targeted to "32-bit Intel" architecture. And I really need to use 32-bit, because of some of the old libraries I'm linking to. Is Python going to have a problem loading a 32-bit library? Is there a way to bridge the gap? | How to build python extension with Xcode | 0.197375 | 0 | 0 | 408 |
24,884,654 | 2014-07-22T10:17:00.000 | 3 | 0 | 0 | 1 | python,subprocess,stdout,stderr | 24,885,123 | 1 | true | 0 | 0 | You can pass in any file descriptor or file object. So use sys.stderr. | 1 | 3 | 0 | The subprocess module says that you can pass STDOUT to the stderr argument to get the standard error redirected to the standard out file handle. However, there is no STDERR constant. Is there a way to go the other way? I want everything on stderr and stdout to be redirected to the stderr of the parent process. | Redirecting stdout to stderr in Python's subprocess/Popen | 1.2 | 0 | 0 | 1,523 |
24,884,901 | 2014-07-22T10:28:00.000 | 0 | 0 | 0 | 0 | python,flask | 42,686,598 | 3 | false | 1 | 0 | In my case, I need to change worker_class from 'sync' to 'gevent', since I do some asynchronous tasks. Then no more hangs. | 2 | 12 | 0 | I am running a flask web server, it works fine during testing, but now freezes at least once per day. All I need to do is to restart it and it will work again. Is there a good way to monitor it and maybe I should just kill/restart it every time it fails. Do people actually kill their web server periodically to avoid this kind thing from happening? | Python Flask webserver stop responding | 0 | 0 | 0 | 9,757 |
24,884,901 | 2014-07-22T10:28:00.000 | 3 | 0 | 0 | 0 | python,flask | 26,342,720 | 3 | false | 1 | 0 | While the default web server might not be the best for production, it is probably not the root cause of the crashes. I use it in a production environment on an internal network and it is very stable. Before blaming the web server check to make sure your code can handle requests that that may collide with each other. In my case I had lots of stability issues before I instituted locking on data base tables so that certain requests would be blocked until previous requests were done with updates. Flask can't make sure your code is thread safe. And changing the web server won't help if it is not. | 2 | 12 | 0 | I am running a flask web server, it works fine during testing, but now freezes at least once per day. All I need to do is to restart it and it will work again. Is there a good way to monitor it and maybe I should just kill/restart it every time it fails. Do people actually kill their web server periodically to avoid this kind thing from happening? | Python Flask webserver stop responding | 0.197375 | 0 | 0 | 9,757 |
24,890,579 | 2014-07-22T14:47:00.000 | 0 | 1 | 0 | 0 | python,session,selenium,webdriver,python-unittest | 24,891,062 | 2 | false | 0 | 0 | The simplest way to achieve this is not to use the Setup() and TearDown() methods, or more specifically not to create a new instance of the WebDriver object at that start or each test case, and not to use the Quit() method at the end of each test case.
In your first test case create a new instance of the WebDriver object and use this object for all of your test cases. At the end of your last test case use the Quit() method to close the browser. | 1 | 0 | 0 | I want to establish one session at the starting of the suite. That session should be stay for longer time for multiple test cases.That session should end at the last.
That session should be implemented in Selenium WebDriver using the unittest framework in the Python language.
please can anyone suggest any methods or how to implement it. | how to establish a session thourghout the selenium webdriver suite by using python in firefox | 0 | 0 | 1 | 739 |
24,890,739 | 2014-07-22T14:54:00.000 | 1 | 0 | 0 | 0 | python,url,client-server,client,webclient | 24,893,137 | 2 | false | 1 | 0 | The "action" part of a form is an url, and If you don't specify the scheme://host:port part of the URL, the client will resolve it has the current page one. IOW: just put the path part of your script's URL and you'll be fine. FWIW hardcoding the scheme://host:port of your URLs is an antipattern, as you just found out. | 2 | 0 | 0 | I used to create web app in the same computer, but if the server and the client is not in the same computer, how can we access to the web page ?
I mean, for example I have an html form and a button "ok" :
If the server and the client are in the same computer, in action = " " we put localhost/file.py , but if the server and the client are not in the same computer how to do this ? Because the client can't to have localhost in his webbrower (url). | make a Client-Server application | 0.099668 | 0 | 1 | 136 |
24,890,739 | 2014-07-22T14:54:00.000 | 0 | 0 | 0 | 0 | python,url,client-server,client,webclient | 24,907,187 | 2 | true | 1 | 0 | Your script is supposed to be run as a CGI script by a web-server, which sets environment variables like REMOTE_ADDR, REQUEST_METHOD ...
You are running the script by yourself, and these environment variables are not available.
That's why you get the KeyError. | 2 | 0 | 0 | I used to create web app in the same computer, but if the server and the client is not in the same computer, how can we access to the web page ?
I mean, for example I have an html form and a button "ok" :
If the server and the client are in the same computer, in action = " " we put localhost/file.py , but if the server and the client are not in the same computer how to do this ? Because the client can't to have localhost in his webbrower (url). | make a Client-Server application | 1.2 | 0 | 1 | 136 |
24,894,231 | 2014-07-22T17:44:00.000 | 1 | 0 | 0 | 0 | python,machine-learning,neural-network,pybrain | 24,902,112 | 2 | true | 0 | 0 | Neural nets are not stable when fed input data on arbitrary scales (such as between approximately 0 and 1000 in your case). If your output units are tanh they can't even predict values outside the range -1 to 1 or 0 to 1 for logistic units!
You should try recentering/scaling the data (making it have mean zero and unit variance - this is called standard scaling in the datascience community). Since it is a lossless transformation you can revert back to your original scale once you've trained the net and predicted on the data.
Additionally, a linear output unit is probably the best as it makes no assumptions about the output space and I've found tanh units to do much better on recurrent neural networks in low dimensional input/hidden/output nets. | 1 | 0 | 1 | I have a neural network with one input, three hidden neurons and one output. I have 720 input and corresponding target values, 540 for training, 180 for testing.
When I train my network using Logistic Sigmoid or Tan Sigmoid function, I get the same outputs while testing, i.e. I get same number for all 180 output values. When I use Linear activation function, I get NaN, because apparently, the value gets too high.
Is there any activation function to use in such a case? Or any improvements to be done? I can update the question with details and code if required. | What activation function to use or modifications to make when neural network gives same output on regression with PyBrain? | 1.2 | 0 | 0 | 815 |
24,895,714 | 2014-07-22T19:07:00.000 | 2 | 0 | 1 | 0 | python,ipython,ipython-notebook | 36,859,613 | 4 | false | 0 | 0 | Latest version of Ipython/Jupyter notebook allows selection of multiple cells using shift key which can be useful for batch operations such as copy, paste, delete, etc. | 1 | 14 | 1 | When I do data analysis on IPython Notebook, I often feel the need to move up or down several adjacent input cells, for better flow of the analysis story.
I'd expected that once I'd create a heading, all cells under that heading would move together if I move the heading. But this is not the case.
Any way I can do this?
Edit: To clarify, I can of course move cells individually, and the keyboard shortcuts are handy; but what I'm looking for is a way to group cells so that I can move (or even delete) them all together. | Is it possible to create grouping of input cells in IPython Notebook? | 0.099668 | 0 | 0 | 6,408 |
24,896,178 | 2014-07-22T19:31:00.000 | 5 | 0 | 0 | 0 | python,scikit-learn | 24,897,058 | 2 | false | 0 | 0 | The scikit-learn transformer API is made for changing the features of the data (in nature and possibly in number/dimension), but not for changing the number of samples. Any transformer that drops or adds samples is, as of the existing versions of scikit-learn, not compliant with the API (possibly a future addition if deemed important).
So in view of this it looks like you will have to work your way around standard scikit-learn API. | 1 | 12 | 1 | I'm trying to implement my own Imputer. Under certain conditions, I would like to filter some of the train samples (that I deem low quality).
However, since the transform method returns only X and not y, and y itself is a numpy array (which I can't filter in place to the best of my knowledge), and moreover - when I use GridSearchCV- the y my transform method receives is None, I can't seem to find a way to do it.
Just to clarify: I'm perfectly clear on how to filter arrays. I can't find a way to fit sample filtering on the y vector into the current API.
I really want to do that from a BaseEstimator implementation so that I could use it with GridSearchCV (it has a few parameters). Am I missing a different way to achieve sample filtration (not through BaseEstimator, but GridSearchCV compliant)? is there some way around the current API? | sklearn: Have an estimator that filters samples | 0.462117 | 0 | 0 | 2,829 |
24,898,132 | 2014-07-22T21:25:00.000 | 0 | 0 | 0 | 0 | python,django,oop,django-models | 24,898,191 | 1 | true | 1 | 0 | You should put the methods of the Quest class on the model itself and get rid of the Quest class. | 1 | 0 | 0 | I wrote a quest system for an online game. My quests are serialized into json objects for a JavaScript client that fetches those quests then from a REST backend (I use django RestFramework)
Now I'm wondering on which class or django model I should put the "behaviour" that belongs to the data.
I stored the data that belongs to a quest in several separate models:
A model QuestHistory: with models.Fields like Boolean completed, and Datetime started where I put the information belonging to a specific user (it also as a field user).
Then I have a model QuestTemplate : The part that is always the same, fields like quest_title and quest_description
I also have a model Rewards and model Task and TaskHistory that are linked to a quest with a foreign Key field.
To combine this information back to quest I created a pure python class Quest(object): and defined methods on this class like check_quest_completion. This class is the then later serialized. The Problem with this approach is that It becomes quite verbose, for example when I instantiate this class or when I define the Serializer.
Is there a python or django "shortcut" to put all fields of a django model into another class (my Quest class here), something similar to the dict.update method maybe?
Or should I try to put the methods on the models instead and get rid of the Quest class?
I have some other places in my game that look very similar to the quest system for example the inventory system so I'm hoping for a more elegant solution. | django models and OOP design | 1.2 | 0 | 0 | 1,085 |
24,900,200 | 2014-07-23T00:54:00.000 | 1 | 0 | 0 | 1 | python,mingw,stdout,distutils,stderr | 25,080,919 | 1 | true | 0 | 0 | Woops.
It turns out this was something really simple: capturing stdout and stderr output was working just fine, but the particular error message I was looking to catch (which was windows specific) wasn't part of the printed output but the error message of the raised SystemExit exception.
Big waste of time :( | 1 | 1 | 0 | I'm using distutils to compile C code via a python script. If things go wrong, I want to be able to catch the error output. To this end, I've redirected stdout and stderr into temporary files before running the setup() command (you need to use os.dup2 for this).
On linux, it works fine. On windows + mingw I get some really weird behaviour:
Without trying to capture, stdout and stderr are both written to the command prompt.
When I try to capture, stdout works fine but the output to stderr disappears.
Does anybody understand what's going on here? | Catching error output from distutils using mingw | 1.2 | 0 | 0 | 161 |
24,908,188 | 2014-07-23T10:36:00.000 | 0 | 0 | 0 | 0 | python,git,version-control,branching-and-merging | 24,913,304 | 1 | true | 0 | 0 | IMHO you should probably commit in your master branch, then rebase your upgrade branch, it will make more sense in your repository history.
If those commits are working on both environments, you should use a different branch based on the master one, so you can work out on the newer version of python, then merge it in the master, then rebase your upgrade branch. | 1 | 0 | 0 | I am working on a python app that uses python 2.4, postgres 8.2 and old versions of pygresql, xlrd, etc. Because of this it is quite a pain to use, and has to be used in a windows xp VM. There are other problems such as the version of xlrd doesn't support .xlsx files, but the new version of xlrd doesn't work with python 2.4, understandably.
I recently made a branch called 'upgrade' where I started to try to get it working with up to date versions of the libraries (python 2.7), for instance unicode handling has changed a bit so required some changes here and there.
Most of the work I'm doing on the app should work in both environments, but it's nicer to work on the upgrade branch because it doesn't need to run in a vm.
So my question is how can I make commits using the upgrade branch but then apply them to the master branch so they will still apply to the old version of the software the client is using? I realise I can cherry pick commits off the upgrade branch onto master but it seems a bit wrong, having commits in both branches. I'm thinking maybe I should be rebasing the upgrade branch so it is always branching off head after the most recent commits, but then that would mean committing to the master branch which means working in a VM.
Hope this makes some kind of sense, I'll try and do some diagrams if not. | Managing a different python version as a branch in git | 1.2 | 1 | 0 | 77 |
24,909,851 | 2014-07-23T11:55:00.000 | 2 | 0 | 0 | 0 | python,sql,sqlite | 24,909,921 | 1 | true | 0 | 0 | I think you are approaching this wrong. If it is taking too long to extract data in a certain order from a table in any SQL database, that is a sign that you need to add an index. | 1 | 0 | 0 | I want to order a large SQLite table and write the result into another table. This is because I need to iterate in some order and ordering takes a very long time on that big table.
Can I rely in a (Python) iterator giving my the rows in the same order as I INSERTed them? Is there a way to guarantee that? (I heard comments that due to caching the order might break) | Are SQLite rows ordered persistently? | 1.2 | 1 | 0 | 46 |
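Following the answer, the usual fix is an index on the column you order by, rather than a second pre-sorted table; a sketch, assuming a table named items with a sort_key column:

import sqlite3

conn = sqlite3.connect("big.db")
# One-time cost: let SQLite maintain the ordering instead of sorting on every query.
conn.execute("CREATE INDEX IF NOT EXISTS idx_items_sort ON items(sort_key)")
conn.commit()

# Iteration is now driven by the index, not a full sort of the table.
for row in conn.execute("SELECT * FROM items ORDER BY sort_key"):
    pass  # handle each row in sorted order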
24,912,020 | 2014-07-23T13:38:00.000 | 0 | 0 | 0 | 0 | django,mysql-python,django-orm,cx-oracle | 24,917,828 | 1 | true | 1 | 0 | Django uses connection pooling (i.e. few requests share the same DB connection). Of course, you can write a middleware to close and reinitialize connection on every request, but I can't guarantee you will not create race conditions, and, as you said, there is no point to do so.
If you want to make automatic multi-database CRUD, you'd better use some other framework (maybe Flask or Bottle), because Django is optimized in many aspects for content sites with a pre-set data schema.
Also, it's not quite simple application, and maybe it's not a good way to learn some new technology at all. Try starting with something simpler, maybe. | 1 | 0 | 0 | So I'm fairly new to Django development and I started using the cx_Oracle and MySQLdb libraries to connect to Oracle and MySQL databases. The idea is to build an interface that will connect to multiple databases and support CRUD ops. The user logs in with the db credentials for the respective databases. I tried not using the Django ORM (I know you may ask then what is the point)but then it is still all a learning endeavor for me. Without the Django ORM (or any ORM for that matter),I was having trouble persisting db connections across multiple requests(Tried using sessions).I need some direction as to what is the best way to design this. | How to use the Django ORM for creating an interface like MySQL admin that connects to multiple databases | 1.2 | 1 | 0 | 110 |
24,912,521 | 2014-07-23T13:59:00.000 | 0 | 0 | 0 | 0 | python,machine-learning,neural-network | 24,912,790 | 2 | true | 0 | 0 | Assuming the mean and standard deviation of the targets are mu and sigma, the normalized value of a target y should be (y-mu)/sigma. In that case if you get an output y', you can move it back to original scale by converting y' -> mu + y' * sigma. | 1 | 0 | 1 | In my neural network, I have inputs varying from 0 to 719, and targets varying from 0 to 1340. So, I standardize the inputs and targets by standard scaling such that the mean is 0 and variance is 1. Now, I calculate the outputs using back-propagation. All my outputs lie between -2 and 2. How do I convert these outputs to the original scale, i.e. lying in the range (0,1340)?
EDIT: I have 1 input, 5 hidden neurons and 1 output. I have used logistic sigmoid activation function. I did the standard scaling by taking mean and then dividing by standard deviation. In particular, my output lies between -1.28 and 1.64. | How to convert standardized outputs of a neural network back to original scale? | 1.2 | 0 | 0 | 1,506 |
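The answer's transformation and its inverse, written out as a small numpy sketch (the target array here is a stand-in for the real data):

import numpy as np

targets = np.random.uniform(0, 1340, size=720)   # stand-in for the real target values

mu, sigma = targets.mean(), targets.std()
scaled = (targets - mu) / sigma                  # feed these to the network for training

network_output = scaled[:5]                      # pretend these came back from the net
original_scale = mu + network_output * sigma     # back to the 0..1340 range
print(original_scale)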
24,913,100 | 2014-07-23T14:22:00.000 | 2 | 1 | 0 | 0 | python,visual-studio,vagrant,ptvs | 24,916,542 | 1 | false | 0 | 0 | There's no special support for remote interpreters in PTVS, like what PyCharm has. It's probably possible to hack something based on the existing constraints, but it would be some work...
To register an interpreter that can actually run, it would have to have a local (well, CreateProcess'able - so e.g. SMB shares are okay) binary that accepts the same command line options as python.exe. It might be possible to use ssh directly by adding the corresponding command line options to project settings. Otherwise, a proxy binary that just turns around and invokes the remote process would definitely work.
Running under debugger is much trickier. For that to work, the invoked Python binary would also have to be able to load the PTVS debugging bits (which is a bunch of .py files in PTVS install directory), and to connect to VS over TCP to establish a debugger connection. I don't see how this could be done without writing significant amounts of code to correctly proxy everything.
Attaching to a remotely running process using ptvsd, on the other hand, would be trivial.
For code editing experience, you'd need a local copy (or a share etc) of the standard library for that interpreter, so that it can be analyzed by the type inference engine. | 1 | 5 | 0 | In our company we using vagrant VM's to have the save env. for all. Is it possible to configure VisualStudio + PTVS (python tools for VS) to use vagrant based python interpreter via ssh for example? | Is it possible to use remote vagrant based python interpreter when coding Visual Studio + PTVS | 0.379949 | 0 | 0 | 1,465 |
24,914,298 | 2014-07-23T15:12:00.000 | 10 | 0 | 1 | 0 | python,c++,floating-point,floating-accuracy,modulus | 24,914,506 | 3 | true | 0 | 0 | The closest IEEE 754 64-bit binary number to 0.0003 is 0.0002999999999999999737189393389513725196593441069126129150390625. The closest representable number to the result of multiplying it by 100000 is 29.999999999999996447286321199499070644378662109375.
There are a number of operations, such as floor and mod, that can make very low significance differences very visible. You need to be careful using them in connection with floating point numbers - remember that, in many cases, you have a very, very close approximation to the infinite precision value, not the infinite precision value itself. The actual value can be slightly high or, as in this case, slightly low. | 2 | 3 | 0 | When I did (0.0006*100000)%10 and (0.0003*100000)%10 in python it returned 9.999999999999993 respectively, but actually it has to be 0.
Similarly in c++ fmod(0.0003*100000,10) gives the value as 10. Can someone help me out where i'm getting wrong. | why (0.0006*100000)%10 is 10 | 1.2 | 0 | 0 | 299 |
24,914,298 | 2014-07-23T15:12:00.000 | 4 | 0 | 1 | 0 | python,c++,floating-point,floating-accuracy,modulus | 24,914,491 | 3 | false | 0 | 0 | Just to give the obvious answer: 0.0006 and 0.0003 are not representable in a machine double (at least on modern machines). So you didn't actually multiply by those values, but by some value very close. Slightly more, or slightly less, depending on how the compiler rounded them. | 2 | 3 | 0 | When I did (0.0006*100000)%10 and (0.0003*100000)%10 in python it returned 9.999999999999993 respectively, but actually it has to be 0.
Similarly in c++ fmod(0.0003*100000,10) gives the value as 10. Can someone help me out where i'm getting wrong. | why (0.0006*100000)%10 is 10 | 0.26052 | 0 | 0 | 299 |
24,914,489 | 2014-07-23T15:20:00.000 | 3 | 1 | 1 | 0 | python,configuration | 24,915,117 | 1 | true | 1 | 0 | If you want to get the path to file that has your code relative to from where it was launched, then it is stored in the __file__ of the module, which can be used if you:
dont want your module(s) to be installed with a setup.py/distutils
scheme.
and want your code + configs contained in one location.
So a codeDir = os.path.dirname(os.path.abspath(__file__)) should always work.
If you want to make an installer, I would say it is customary to place the code in one place and things like configs somewhere else. And that would depend on your OS. On Linux, one common place is /home/user/.local/share/yourmodule or just directly under /home/user/.yourmodule. Windows has a similar place for app data.
For both, os.environ['HOME'] / os.getenv('HOME') is a good starting point, and then you should probably detect the OS and place your stuff in the expected location with a nice foldername.
I can't swear that these are best practices, but they seem rather hack-free at least. | 1 | 0 | 0 | I'm developing a module right now that requires a configuration file.
The configuration file is optional and can be supplied when the module is launched, or (ideally) loaded from a defaults.json file located in the same directory as the application. The defaults.json file's purpose is also used to fill in missing keys with setdefault.
The problem comes from where the module is launched...
...\Application = python -m application.web.ApplicationServer
...\Application\application = python -m web.ApplicationServer
...\Application\application\web = python ApplicationServer.py
....read as, "If if I'm in the folder, I type this to launch the server."
How can I determine where the program was launched from (possibly using os.getcwd()) to determine what file path to pass to json.load(open(<path>), 'r+b')) such that it will always succeed?
Thanks.
Note: I strongly prefer to get a best practices answer, as I can "hack" a solution together already -- I just don't think my way is the best way. Thanks! | Best way to find the location of a config file based from launching directory? | 1.2 | 0 | 0 | 241 |
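A sketch of the accepted suggestion: resolve defaults.json relative to the module file (via __file__) rather than the launch directory, then overlay any user-supplied config with setdefault:

import json
import os

CODE_DIR = os.path.dirname(os.path.abspath(__file__))

def load_config(user_config=None):
    with open(os.path.join(CODE_DIR, "defaults.json")) as f:
        defaults = json.load(f)
    config = dict(user_config or {})
    for key, value in defaults.items():
        config.setdefault(key, value)   # fill in anything the user left out
    return config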
24,917,458 | 2014-07-23T17:49:00.000 | 0 | 0 | 0 | 0 | python,themes,plone,product,theming | 25,017,085 | 1 | false | 1 | 0 | If you want the traditional editing interface, old-style theme packages are helpful. If you have HTML resources ready from the designer, and want them quickly deployed, Diazo seems a fit. Anyway, Sunburst is the Plone 4 default theme package, you should check it out thoroughly and take advantage from the stock Plone packages whenever appropriate to minimize your efforts.
Aside the theming, my experience on Plone 3 to 4 migration, it takes more time moving Archetypes to Dexterity, especially if your add-ons are based on Archetypes. Of course, your mileage may vary. | 1 | 0 | 0 | I have several sites in Plone 3.3.5 with its own theming product for each site, designed by an outside contractor long ago. Would it be best that I try to upgrade these theming products to Plone 4 compatible or should I use the new Diazo tool built into Plone 4 to create these different themes for each site?
Thanks in advance for your help. | Best practice to upgrade a Plone 3 theming product to Plone 4 | 0 | 0 | 0 | 44 |
24,922,174 | 2014-07-23T22:34:00.000 | 5 | 0 | 1 | 1 | python,c++,linux | 24,922,209 | 2 | false | 0 | 0 | It is not possible to ignore SIGKILL or handle it in any way.
From man sigaction:
The sa_mask field specified in act is not allowed to block SIGKILL or SIGSTOP. Any attempt to do so will be silently ignored. | 1 | 6 | 0 | I am trying to figure out how to get a process to ignore SIGKILL. The way I understand it, this isn't normally possible. My idea is to get a process into the 'D' state permanently. I want to do this for testing purposes (the corner case isn't really reproducible). I'm not sure this is possible programatically (I don't want to go damage hardware). I'm working in C++ and Python, but any language should be fine. I have root access.
I don't have any code to show because I don't know how to get started with this, or if it's even possible. Could I possibly set up a bad NFS and try reading from it?
Apologies in advance if this is a duplicate question; I didn't find anyone else trying to induce the D state.
Many thanks. | How to ignore SIGKILL or force a process into 'D' sleep state? | 0.462117 | 0 | 0 | 2,985 |
24,923,749 | 2014-07-24T01:33:00.000 | 1 | 0 | 1 | 0 | python,class-variables | 25,048,146 | 4 | true | 0 | 0 | After speaking with others offline (and per @wwii's comment on one of the answers here), it turns out the best way to do this without embedding the class name explicitly is to use self.__class__.attribute.
(While some people out there use type(self).attribute it causes other problems.) | 1 | 2 | 0 | When using a class variable in Python, one can access and (if it's mutable) directly manipulate it via "self" (thanks to references) or "type(self)" (directly), while immutable variables (e.g., integers) apparently get shadowed by new instance objects when you just use "self".
So, when dealing with Python class variables, is it preferable/Pythonic to simply always use "type(self)" for working with class variables referred to from within class methods?
(I know class variables are sometimes frowned upon but when I work with them I want to access them in a consistent way (versus one way if they are immutable types and another way if they are mutable.)
Edit: Yes, if you modify the value an immutable you get a new object as a result. The side effect of modifying the value of a mutable object is what led to this question - "self" will give you a reference you can use to change the class variable without shadowing it, but if you assign a new object to it it will shadow it. Using classname.classvar or type(self).classvar or self.__class__ ensures you are always working with the class variable, not just shadowing it (though child classes complicate this as has been noted below). | Python: self vs type(self) and the proper use of class variables | 1.2 | 0 | 0 | 3,989 |
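A small illustration of the difference the asker settled on (a hypothetical class, not from the question):

class Counter(object):
    total = 0                      # class variable shared by all instances

    def bump_shared(self):
        self.__class__.total += 1  # rebinds the class attribute itself

    def bump_shadowed(self):
        self.total += 1            # creates an instance attribute that shadows it

a, b = Counter(), Counter()
a.bump_shared()
print(Counter.total)   # 1 - the class variable changed
b.bump_shadowed()
print(Counter.total)   # still 1 - b now has its own 'total' equal to 2
print(b.total)         # 2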
24,926,733 | 2014-07-24T06:40:00.000 | 0 | 1 | 0 | 1 | python,outlook,pywin32 | 54,351,677 | 3 | false | 0 | 0 | My similar issue has been cleared up. I used task scheduler to call a python script (via batch file) that has the pywin32com module. The python code opens excel and calls a macro. It will run fine from python, cmd and the batch file, but wasn't working when ran through task scheduler. It traced back to errors like:
"EnsureDispatch disp = win32com.client.Dispatch(prog_id)"
As noted on this thread, I changed the option to "Run only when user is logged on" and it ran successfully!
The only drawback is that I schedule the task for a time that I'm away from the computer. I suppose I just have to not log off and hope that the cpu doesn't go into sleep mode, but that's not really a big deal in this case. | 1 | 3 | 0 | I wrote a python script that uses win32com.client.Dispatch("Outlook.Application") to send automated emails through outlook.
If I run the script myself everything works perfectly fine. But if I run it through Window's task scheduler it doesn't send the emails.
Just to check if I am running the script properly I made the script output a random text file and that works but email doesn't. Why? | Sending automated email using Pywin32 & outlook in a python script works but when automating it through windows task scheduler doesn't work | 0 | 0 | 0 | 3,507 |
24,930,835 | 2014-07-24T10:06:00.000 | 2 | 0 | 0 | 0 | python | 31,753,938 | 1 | false | 0 | 0 | The link that Anthony Kong supplied includes something that may resolve the issue; it did for me in a very similar situation.
switch to DRIVER={SQL Server Native Client 10.0} instead of DRIVER={SQL Server} in the connection string
This would be for Sql Server 2008 (you didn't specify the Edition); for Sql Server 2012 it would be Native Client 11.0. | 1 | 2 | 0 | I want to store html string in sql server database using pyodbc driver. I have used nvarchar(max)as the data type for storing in the database but it is throwing the following error
Error:
('HY000', '[HY000] [Microsoft][ODBC SQL Server Driver]Warning: Partial insert/update. The insert/update of a text or image column(s) did not succeed. (0) (SQLPutData); [42000] [Microsoft][ODBC SQL Server Driver][SQL Server]The text, ntext, or image pointer value conflicts with the column name specified. (7125)') | Pyodbc Store html unicode string in Sql Server | 0.379949 | 1 | 0 | 1,047 |
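The suggested driver switch, spelled out as a pyodbc connection string (the server, database, table and credentials are placeholders):

import pyodbc

conn = pyodbc.connect(
    "DRIVER={SQL Server Native Client 10.0};"
    "SERVER=myserver;DATABASE=mydb;UID=user;PWD=password"
)
cursor = conn.cursor()
cursor.execute("INSERT INTO pages (body) VALUES (?)", (u"<html>...</html>",))
conn.commit()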
24,933,185 | 2014-07-24T12:03:00.000 | 0 | 0 | 0 | 0 | python,sql,sql-server,pyodbc | 24,934,239 | 1 | true | 0 | 0 | I have run into this when creating large reports. Nobody will wait for a 30 second query, even if it's going back over 15 years of sales data.
You have a few options:
Create a SQL Job in the SQL Server Agent to run a stored procedure that runs the query and saves to a table. (This is what I do)
Use a scheduled task to run the query and save it to another table. I think python could drive it on a windows box, but never used it myself. I would do it in .NET.
Sometimes creating a view is enough of a performance boost, but depends on your data and database setup. In addition check if there are any indexes or other common performance gains you can make.
I really think #1 is elegant, simple, and keeps all the work in the database. | 1 | 0 | 0 | I have a python webpage which pulls information from a MSSQL database with pyodbc.
This works; however, since some queries that get run are quite heavy, the webpage can take 20-30 seconds to load.
I want to fix this, What would be the best way to run all queries once every 15-30 minutes and store that data locally on the server or locally and pull that data into the webpage instead of rerunning the query on page load.
I would like to have a relatively fast way for the webpage to access the data, so accessing the webpage would only take 1-2 seconds max.
redis is really fast but isn't really suited as it is too simple. key-value pairs
The most advanced thing I really need is a table with a few rows and columns (always less than 10).
Is there a relatively fast way to store such data locally? | async information from MSSQL database | 1.2 | 1 | 0 | 764 |
24,935,230 | 2014-07-24T13:33:00.000 | 0 | 0 | 0 | 0 | python,numpy,nlopt,abaqus | 25,251,918 | 1 | false | 0 | 0 | I have similar problems. As an (annoying) work around I usually write out important data in text files using the regular python. Afterwards, using a bash script, I start a second python (different version) to further analyse the data (matplotlib etc). | 1 | 2 | 1 | I want to run an external library of python called NLopt within Abaqus through python. The issue is that the NLopt I found is compiled against the latest release of Numpy, i.e. 1.9, whereas Abaqus 6.13-2 is compiled against Numpy 1.4. I tried to replace the Numpy folder under the site-packages under the Abaqus installation folder with the respective one of version 1.9 that I created externally through an installation of Numpy 1.9 over Python 2.6 (version that Abaqus uses).
Abaqus couldn't even start so I guess that such approach is incorrect.
Are there any suggestions on how to overcome such issue?
Thanks guys | How to overcome version incompatibility with Abaqus and Numpy (Python's library)? | 0 | 0 | 0 | 318 |
24,936,671 | 2014-07-24T14:35:00.000 | 0 | 0 | 0 | 1 | python,celery,reliability | 24,946,459 | 2 | false | 1 | 0 | I had a similar problem and i solved it may be not in a most efficient way but however my solution is as follows:
I have created a Django model to keep all my Celery task IDs, and it is capable of checking the task state.
Then i have created another celery task that is running in an infinite cycle and checks all tasks that are 'RUNNING' on their actual state and if the state is 'FAILED' it just reruns it. Im not actually changing the queue for the task which i rerun but i think you can implement some custom logic to decide where to put every task you rerun this way. | 1 | 4 | 0 | Is there any way in celery by which if a task execution fails I can automatically put it into another queue.
For example, if the task is running in a queue x, on exception enqueue it to another queue named error_x
Edit:
Currently I am using celery==3.0.13 along with django 1.4, Rabbitmq as broker.
Sometimes the task fails. Is there a way in celery to add messages to an error queue and process them later?
The problem when celery task fails is that I don't have access to the message queue name. So I can't use self.retry retry to put it to a different error queue. | Having error queues in celery | 0 | 0 | 0 | 1,656 |
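One way to get the failed-task-to-error-queue behaviour the question asks for is a custom base Task whose on_failure hook re-publishes the task onto a dedicated queue. This is a hedged sketch: the queue-naming convention is made up, and the hook signature shown is the one Celery 3.x documents.

from celery import Celery, Task

app = Celery("tasks", broker="amqp://guest@localhost//")

class RequeueOnError(Task):
    abstract = True
    error_queue = "error_x"   # made-up convention: one error queue per task family

    def on_failure(self, exc, task_id, args, kwargs, einfo):
        # Re-publish the failed task onto its error queue for later processing.
        self.apply_async(args=args, kwargs=kwargs, queue=self.error_queue)

def do_work(item):
    pass  # placeholder for the real task body

@app.task(base=RequeueOnError)
def process(item):
    do_work(item)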
24,937,897 | 2014-07-24T15:25:00.000 | 3 | 0 | 0 | 0 | python,database,django,performance | 24,937,984 | 1 | true | 1 | 0 | It all depends on the query that you're running. If you're running a SELECT COUNT(*) FROM foo on a table that has ten rows, it's going to be very fast; but if your query involves a dozen joins, sub-selects, filters on un-indexed rows--or if the target table simply has a lot of rows--the query can take an arbitrary amount of time. In all likelihood, the bottleneck is not Django (although its ORM has some quirks), but rather the database and your query. Just because no rows meet the criteria doesn't mean that the database didn't need to deal with the other rows in the table. | 1 | 2 | 0 | This is simply a question of curiosity. I have a script that loads a specific queryset without evaluating it, and then I print the count(). I understand that count has to go through so depending on the size it could potentially take some time, but it took over a minute to return 0 as the count of an empty queryset Why is that taking so long? is it Django or my server?
notes:
the queryset was all one type. | Reasons why Django was slow | 1.2 | 0 | 0 | 431 |
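If you want to see what that count() actually asks the database to do, a quick check is to look at the executed SQL; the model name below is hypothetical, and connection.queries is only populated when DEBUG=True:

```python
from django.db import connection
from myapp.models import Thing             # hypothetical model

qs = Thing.objects.filter(archived=False)  # building the queryset hits no database
print(qs.count())                           # runs a single SELECT COUNT(*) with the filters applied
print(connection.queries[-1])               # shows the exact SQL and how long the database took
```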
24,938,729 | 2014-07-24T16:03:00.000 | 7 | 1 | 1 | 0 | python,garbage-collection | 24,938,815 | 3 | false | 0 | 0 | If, after deleting the attribute, the concept of accessing the attribute and seeing if it's set or not doesn't even make sense, use del. If, after deleting the attribute, something in your program may want to check that space and see if anything's there, use = None.
The garbage collector won't care either way. | 2 | 10 | 0 | Assume there is no particular memory-optimization problem in the script, so my question is about Python coding style. That also means: is it good and common Python practice to dereference an object as soon as possible? The scenario is as follows.
Class A instantiates an object as self.foo and asks a second class B to store and share it with other objects.
At a certain point A decides that self.foo should not be shared anymore and removes it from B.
Class A still has a reference to foo, but we know this object to be useless from now on.
As foo is a relatively big object, would you bother to delete the reference from A, and how (e.g. del vs setting self.foo = None)? How does this decision influence the garbage collector? | Should I delete large object when finished to use them in python? | 1 | 0 | 0 | 4,089 |
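A minimal sketch of the two options the question compares (names follow the question's scenario):

```python
class A(object):
    def stop_sharing(self, b):
        b.remove(self.foo)       # hypothetical: B no longer holds a reference

        # Option 1: remove the attribute entirely; hasattr(self, 'foo') becomes False,
        # and any later access raises AttributeError.
        del self.foo

        # Option 2: keep the attribute but point it at nothing.
        # self.foo = None

# Either way, A drops its reference; the big object is reclaimed once no
# other references to it remain.
```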
24,938,729 | 2014-07-24T16:03:00.000 | 1 | 1 | 1 | 0 | python,garbage-collection | 24,938,858 | 3 | false | 0 | 0 | So far, in my experience with Python, I haven't had any problems with garbage collection. However, I do take precautions, not only because I don't want to bother with any unreferenced objects, but also for organizational reasons.
To answer your questions specifically:
1) Yes, I would recommend deleting the object. This will keep your code from getting bulky and/or slow. This is an especially good decision if you have a long run-time for your code, even though Python is pretty good about garbage collection.
2) Either way works fine, although I would use del just for the sake of removing the actual reference itself.
3) I don't know how it "influences the garbage collector", but it's always better to be safe than sorry. | 2 | 10 | 0 | Assume there is no particular memory-optimization problem in the script, so my question is about Python coding style. That also means: is it good and common Python practice to dereference an object as soon as possible? The scenario is as follows.
Class A instantiates an object as self.foo and asks a second class B to store and share it with other objects.
At a certain point A decides that self.foo should not be shared anymore and removes it from B.
Class A still has a reference to foo, but we know this object to be useless from now on.
As foo is a relatively big object, would you bother to delete the reference from A, and how (e.g. del vs setting self.foo = None)? How does this decision influence the garbage collector? | Should I delete large object when finished to use them in python? | 0.066568 | 0 | 0 | 4,089 |
24,939,676 | 2014-07-24T16:54:00.000 | 0 | 0 | 0 | 1 | python,permissions,cron | 29,074,961 | 1 | false | 0 | 0 | Maybe another program has the file you want to overwrite open? | 1 | 0 | 0 | I have SSHed from my local machine (a Mac) to a remote machine called “ten-thousand-dollar-bill” as the user “chilge”.
I want to run a Python script in the folder “/afs/athena.mit.edu/c/h/chilge/web_scripts” that generates and saves a .png image to the folder “/afs/athena.mit.edu/c/h/chilge/www/TAF_figures/KORD/1407”. When I run the script from the command line, the image is generated and saved without any issues. When I run the script as a cron job, though (the crontab resides in “/afs/athena.mit.edu/c/h/chilge/cron_scripts”), I get the following error:
Traceback (most recent call last):
File "/afs/athena.mit.edu/user/c/h/chilge/web_scripts/generate_plots.py", line 15, in
save_taffig(taf,fig)
File "/afs/athena.mit.edu/user/c/h/chilge/web_scripts/plotting.py", line 928, in save_taffig
fig.savefig(os.getcwd()+'/'+savename+'.png')
File "/usr/lib64/python2.7/site-packages/matplotlib/figure.py", line 1084, in savefig
self.canvas.print_figure(*args, **kwargs)
File "/usr/lib64/python2.7/site-packages/matplotlib/backend_bases.py", line 1923, in print_figure
**kwargs)
File "/usr/lib64/python2.7/site-packages/matplotlib/backends/backend_agg.py", line 443, in print_png
filename_or_obj = file(filename_or_obj, 'wb')
IOError: [Errno 13] Permission denied: '/afs/athena.mit.edu/user/c/h/chilge/www/TAF_figures/KORD/1407/140723-1200_AMD_140723-1558.png'
I believe I’ve correctly changed the permissions of all of the necessary directories, but I’m still getting this error. I am not sure why the script would run fine from the command line, but fail when I try to run the script as a cron job.
(Also, I’m not sure if this will be relevant, but I don’t have sudo permissions on the remote machine.) | Python - "IOError: [Errno 13] Permission denied" when running cron job but not when running from command line | 0 | 0 | 0 | 1,112 |
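A few temporary lines at the top of the script can show what environment cron actually runs it in (the log path below is hypothetical). It is also worth knowing that on AFS-backed directories such as Athena's, plain cron jobs usually run without AFS tokens, which can cause exactly this kind of permission error even when the directory permissions look correct:

```python
# Temporary debugging lines for the top of the cron-run script.
import os
import getpass

with open('/tmp/cron_debug.log', 'a') as log:
    log.write('user=%s cwd=%s path=%s\n'
              % (getpass.getuser(), os.getcwd(), os.environ.get('PATH', '')))
```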
24,939,723 | 2014-07-24T16:57:00.000 | 0 | 1 | 1 | 0 | python | 24,939,902 | 2 | false | 0 | 0 | Including all necessary modules in a standalone script is probably extremely tricky and not nice. However, you can distribute modules along with your script (by distributing an archive for example).
Most modules will work if they are simply placed in the same folder as your script instead of the usual site-packages. Depending on the sys.path order, the system's module may be loaded in preference to the one you ship, but if it doesn't exist the latter will be imported transparently.
You can also bundle the dependencies in a zip and add that zip to the path, if you think that approach is cleaner.
However, some modules cannot be that flexible. One example is extensions that must first be compiled (like C extensions) and are thus bound to the platform.
IMHO, the cleanest solution is still to package your script properly using distutils with a proper dependency definition, and to write an installation routine that installs missing dependencies from your bundle or via pip. | 1 | 1 | 0 | My Python script runs with several imports. On some systems where it needs to run, some of those modules may not be installed. Is there a way to distribute a standalone script that will automagically work? Perhaps by just including all of those imports in the script itself? | Is it possible to bundle all import requirements with a python script? | 0 | 0 | 0 | 686 |
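A minimal sketch of shipping pure-Python dependencies next to the script, along the lines of the answer above; the deps folder/zip layout and the requests import are hypothetical placeholders:

```python
import os
import sys

here = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, os.path.join(here, 'deps'))       # a plain folder of bundled modules
sys.path.insert(0, os.path.join(here, 'deps.zip'))   # or a zipimport-able archive

import requests   # now importable from the bundle even if not installed system-wide
```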
24,940,938 | 2014-07-24T18:04:00.000 | 1 | 0 | 0 | 0 | web-services,ironpython,spotfire | 28,900,283 | 2 | false | 1 | 0 | I have addressed the similar problem but I was using different approach than the proposed IronPython script behind a button.
In my case I created a standalone C#/ASP.NET web application and embedded the Spotfire Web Player into it.
I used the Spotfire JavaScript API to pick up the data marked by the user within the embedded Spotfire Web part and then send it via a JavaScript call to a web service in the application. The web service did some processing and sent the data on to another destination.
So my scenario was a bit different, but for yours you could also consider embedding Spotfire Web into an app that you have full control over rather than using the IronPython script. | 1 | 1 | 0 | I need to send my Spotfire data table to a web service as CSV and then replace the data table with the service response. I assume that this can be done with IronPython behind a button. Any ideas how? | How can a Spotfire data table be sent to a web service as csv and then replace the data table with the service response? | 0.099668 | 0 | 0 | 1,721 |
24,943,916 | 2014-07-24T21:03:00.000 | 0 | 0 | 0 | 0 | python,django,django-models,django-forms,django-admin | 24,944,279 | 1 | false | 1 | 0 | You can add any arbitrary fields to a form: they need to be form fields (eg forms.BooleanField) but as long as they don't also exist on the model they won't be saved.
You can then take any action you like in the form's clean() method, or in the modeladmin's save_form method. | 1 | 0 | 0 | Is there a way to have a check box appear in Django admin, but instead of associating it with a BooleanField, having it run a function if it is / is not checked upon save? I don't need to store a Boolean in the database, but may need to store something depending on other fields, and whether the check box was checked.
In particular, I would like to store a ForeignKey if the check box was just checked, empty it (set ForeignKey to null) if the check box was just unchecked, and do nothing it the check box state stayed the same. I don't want to display what this ForeignKey is to the user, just set it or delete it behind the scenes. | Check box in Django admin run a Python function on save without a BooleanField | 0 | 0 | 0 | 300 |
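A minimal sketch of that approach, with hypothetical Article and Reviewer models and a nullable reviewer ForeignKey; detecting "just checked / just unchecked" is simplified here to comparing the checkbox against whether the FK is currently set:

```python
from django import forms
from django.contrib import admin

from .models import Article, Reviewer   # hypothetical models

class ArticleAdminForm(forms.ModelForm):
    mark_reviewed = forms.BooleanField(required=False)   # not a model field, never stored

    class Meta:
        model = Article
        fields = '__all__'

class ArticleAdmin(admin.ModelAdmin):
    form = ArticleAdminForm

    def save_model(self, request, obj, form, change):
        checked = form.cleaned_data.get('mark_reviewed')
        if checked and obj.reviewer_id is None:
            obj.reviewer = Reviewer.objects.get(user=request.user)   # set the hidden FK
        elif not checked and obj.reviewer_id is not None:
            obj.reviewer = None                                      # clear it again
        super(ArticleAdmin, self).save_model(request, obj, form, change)

admin.site.register(Article, ArticleAdmin)
```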
24,944,148 | 2014-07-24T21:18:00.000 | 1 | 0 | 0 | 0 | javascript,jquery,python,django | 24,945,629 | 1 | true | 1 | 0 | If I understand correctly, you are trying to hide jQuery code.
You can't hide jQuery code from the user: because Django processes the Python code before it serves up the template, there's no way to protect jQuery code with Python. Really, the best thing you can do is minify and obfuscate the code, but that only makes it harder for a human to read. | 1 | 1 | 0 | I'm working with a client that has a view that, after a user logs in, loads a template that dynamically draws a canvas with jQuery and generates an image copy of the canvas.
They want to protect the jQuery code by hiding the process in the Python code.
I tried using PyExecJS, but it doesn't support jQuery, since there is no DOM.
I've also tried urllib2, mechanize and Selenium, but none worked.
Is there an alternative or have I missed something?
Update/Resolution: In case someone stumbles onto this question: I ended up using Selenium for Python to load the JS function, feeding it the necessary data and extracting the image from it. It has a bit of overhead, but since the main goal was to keep the JS code obfuscated, it worked. | Django - Load template with jQuery into variable | 1.2 | 0 | 0 | 145 |
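A rough sketch of that Selenium route; the URL, the drawEverything entry point and the canvas element id are hypothetical stand-ins for the actual page:

```python
import base64
from selenium import webdriver

driver = webdriver.Firefox()
driver.get('http://localhost:8000/canvas-page/')

# Run the page's own jQuery drawing code, then read the canvas back as a PNG data URL.
data_url = driver.execute_script(
    "drawEverything(arguments[0]);"
    "return document.getElementById('the-canvas').toDataURL('image/png');",
    {'some': 'payload'})

with open('canvas.png', 'wb') as fh:
    fh.write(base64.b64decode(data_url.split(',', 1)[1]))

driver.quit()
```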