Column schema (name, dtype, and the observed min .. max; for string columns the bounds are string lengths):

Q_Id                                int64          337 .. 49.3M
CreationDate                        stringlengths  23 .. 23
Users Score                         int64          -42 .. 1.15k
Other                               int64          0 .. 1
Python Basics and Environment       int64          0 .. 1
System Administration and DevOps    int64          0 .. 1
Tags                                stringlengths  6 .. 105
A_Id                                int64          518 .. 72.5M
AnswerCount                         int64          1 .. 64
is_accepted                         bool           2 classes
Web Development                     int64          0 .. 1
GUI and Desktop Applications        int64          0 .. 1
Answer                              stringlengths  6 .. 11.6k
Available Count                     int64          1 .. 31
Q_Score                             int64          0 .. 6.79k
Data Science and Machine Learning   int64          0 .. 1
Question                            stringlengths  15 .. 29k
Title                               stringlengths  11 .. 150
Score                               float64        -1 .. 1.2
Database and SQL                    int64          0 .. 1
Networking and APIs                 int64          0 .. 1
ViewCount                           int64          8 .. 6.81M

The records below repeat these 22 fields in this order, one field per line.
16,347,775
2013-05-02T21:09:00.000
0
0
1
0
python,linux,bash,shell
16,347,812
7
false
0
0
I would never recommend that you try to filter such a massive text file in Python. It does not matter how you tackle it, you will need to go through some complicated steps to make sure that you do not run out of memory. The first thing that comes to mind is creating a hash of the lines and then using the hash to find duplicates. Since you save the line number as well, you can then directly compare the text to make sure that there are no hash collisions. But the easiest solution would be to convert the text file into a database that allows you to quickly sort, search and filter out duplicate items. You can then re-create the text file from that, if that is really a requirement.
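A minimal sketch of that line-hashing idea; the file name and the ID width are placeholders, and the raw lines should still be compared to rule out hash collisions:

import hashlib

ID_WIDTH = 10                      # hypothetical: the first n characters hold the ID
seen = {}                          # digest of the ID -> line number of first occurrence
duplicates = []

with open('records.txt', 'rb') as f:               # hypothetical file name
    for lineno, line in enumerate(f, 1):
        digest = hashlib.md5(line[:ID_WIDTH]).digest()
        if digest in seen:
            duplicates.append((seen[digest], lineno))   # candidate duplicate pair
        else:
            seen[digest] = lineno

print(duplicates)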
1
6
0
I'm on a linux machine (Redhat) and I have an 11GB text file. Each line in the text file contains data for a single record, and the first n characters of the line contain a unique identifier for the record. The file contains a little over 27 million records. I need to verify that there are not multiple records with the same unique identifier in the file. I also need to perform this process on an 80GB text file, so any solution that requires loading the entire file into memory would not be practical.
Find duplicate records in large text file
0
0
0
8,820
16,351,298
2013-05-03T03:50:00.000
0
0
0
1
python,networking,tcp,client-server
16,352,065
2
false
0
0
Use select.select() to detect events on multiple sockets, like incoming connections, incoming data, outgoing buffer capacity and connection errors. You can use this on multiple listening sockets and on established connections from a single thread. Using a web search, you can surely find example code.
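A small sketch of a single-threaded select.select() server of the kind this answer describes; the port number comes from the question, and the echo stands in for real handling:

import select
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(('', 10800))
server.listen(5)

sockets = [server]
while True:
    readable, _, errored = select.select(sockets, [], sockets)
    for s in readable:
        if s is server:                  # new incoming connection
            conn, addr = s.accept()
            sockets.append(conn)
        else:                            # data (or EOF) on an established connection
            data = s.recv(4096)
            if not data:                 # client closed the connection
                sockets.remove(s)
                s.close()
            else:
                s.sendall(data)          # echo back, a stand-in for real handling
    for s in errored:
        if s in sockets:
            sockets.remove(s)
        s.close()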
2
0
0
I have a number of clients who need to connect to a server and maintain the connection for some time (around 4 hours). I don't want to specify a different connection port for each client (as there are potentially many of them); I would like them just to be able to connect to the server on a specific predetermined port, e.g. 10800, and have the server accept and maintain the connection but still be able to receive other connections from new clients. Is there a way to do this in Python or do I need to re-think the architecture? EXTRA CREDIT: A Python snippet of the server code doing this would be amazing!
How do I receive and manage multiple TCP connections on the same port?
0
0
1
1,496
16,351,298
2013-05-03T03:50:00.000
1
0
0
1
python,networking,tcp,client-server
16,354,390
2
true
0
0
"I don't want to specify a different connection port for each client (as there are potentially many of them)" You don't need that. "I would like them just to be able to connect to the server on a specific predetermined port e.g., 10800 and have the server accept and maintain the connection but still be able to receive other connections from new clients" That's how TCP already works. Just create a socket listening on port 10800 and accept connections from it.
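For completeness, the plain accept loop the accepted answer describes, here with one worker thread per client so that long-lived connections don't block new ones (a sketch, not production code):

import socket
import threading

def handle(conn, addr):
    while True:
        data = conn.recv(4096)
        if not data:                # client went away
            break
        conn.sendall(data)          # placeholder for real per-client logic
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(('', 10800))            # one predetermined port for every client
server.listen(5)
while True:
    conn, addr = server.accept()
    threading.Thread(target=handle, args=(conn, addr)).start()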
2
0
0
I have a number of clients who need to connect to a server and maintain the connection for some time (around 4 hours). I don't want to specify a different connection port for each client (as there are potentially many of them); I would like them just to be able to connect to the server on a specific predetermined port, e.g. 10800, and have the server accept and maintain the connection but still be able to receive other connections from new clients. Is there a way to do this in Python or do I need to re-think the architecture? EXTRA CREDIT: A Python snippet of the server code doing this would be amazing!
How do I receive and manage multiple TCP connections on the same port?
1.2
0
1
1,496
16,351,319
2013-05-03T03:52:00.000
-1
0
0
0
google-app-engine,python-2.7,jinja2,webapp2
16,353,977
1
false
1
0
Copied here from comment: It defines a raw string. Please read the docs at docs.python.org/2/reference/lexical_analysis.html#literals, paying special attention to raw strings. You may use a raw string to more easily define a regular expression as a literal string.
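A two-line illustration of the difference (nothing here is specific to webapp2):

print(len('\n'))     # 1: an escape, a single newline character
print(len(r'\n'))    # 2: a raw string, a backslash followed by 'n'
# handy for regular expressions and for Windows paths: r'C:\new\folder'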
1
0
0
In the webapp2 URI routing there are some examples using webapp2.Route(r'/', handler='...'), and some aren't using r'/' -- so my question is, what is the r for, and should I be using it? Also, if you use webapp2_extras APIs you need to pass the config to the WSGIApplication(); is it possible to define the config lists elsewhere? As in, is it possible to do config['webapp2_extras.API'] = ['option':'value'] in one file, then include that file inside your "router" and use the variable/list? Thanks in advance!!
Webapp2 Routing and Python inclusion
-0.197375
0
0
106
16,354,980
2013-05-03T08:53:00.000
0
0
1
0
python,parsing
16,356,153
4
false
0
0
Be careful with eval! Do not ever use it on untrusted input. If it's just simple arithmetic, I'd use a custom parser (there are tons of examples out in the wild)... And using parser generators (flex/bison, antlr, etc.) is a skill that is useful and easily forgotten, so it could be a good chance to refresh or learn it.
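For the narrow case in the question, a hedged regex sketch that prefixes hex literals before evaluation; the eval warning above still applies, and identifiers made only of the letters a-f would be mangled, so this assumes variables were already substituted away:

import re

expr = '1A*10+2'                                     # hypothetical hex expression
prefixed = re.sub(r'\b([0-9A-Fa-f]+)\b', r'0x\1', expr)
print(prefixed)                                      # 0x1A*0x10+0x2
print(eval(prefixed))                                # 418 -- trusted input only!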
1
3
0
I'm parsing an XML file in which I get basic expressions (like id*10+2). What I am trying to do is to evaluate the expression to actually get the value. To do so, I use the eval() method, which works very well. The only thing is the numbers are in fact hexadecimal numbers. The eval() method could work well if every hex number was prefixed with '0x', but I could not find a way to do it, nor could I find a similar question here. How would it be done in a clean way?
Appending '0x' before the hex numbers in a string
0
0
1
1,913
16,359,326
2013-05-03T12:48:00.000
1
0
0
0
python,firefox,pdf,selenium,automation
16,444,728
1
true
1
0
Firefox does not control the way your content is being printed to the PDF. Your PDF Printer Driver is responsible for creating the PDF file as a Bitmap snapshot of your page, instead of composing it from the elements in your page. The reason that you find a different behavior in Chrome compared to Firefox, is that Chrome has a built in "Save as PDF" which is different from your installed PDF drivers. So it really comes down to what PDF Printer Driver you are using.
1
0
0
Currently I'm writing software for web automation using selenium and autoit. I've found a strange issue: for some pages, when printing to pdf with firefox, I get unsearchable pdfs. I've tried ff 3.5, 4.0, 20, 22, 23 - all have the same issue. You can reproduce it by printing any linkedin profile - you'll get an unsearchable pdf. Did anyone encounter the same behaviour? How can I bypass it (using python, selenium)? I've tried chrome driver, but it's incredibly slow. I'm running windows 7 x64 ultimate. It does not depend on the printer used - I have tried a lot of different versions. By searchable I mean that I should be able to search text in it like in most pdf files. Update - I still don't understand why it happens. I've tried printing the same web page from IE 9 - it gives exactly the same print dialog as firefox and uses the same pdf printer driver. Nevertheless, it produces searchable pdfs. Guess the problem is related to the way firefox prints documents.
Firefox produces unsearchable pdfs
1.2
0
1
338
16,359,847
2013-05-03T13:17:00.000
0
0
0
0
python,django
16,360,039
2
false
1
0
You keep track of all users, and you show all users whose coordinates are of a similar value. Yes, that's a generic answer, but it is a very generic question. I would not be surprised if there are modules for Django doing this already.
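A sketch of the "similar coordinates" idea: a cheap bounding-box prefilter plus an exact great-circle check. The data and the 10 km radius are made up, and in a real Django app GeoDjango would do this for you:

import math

def haversine_km(lat1, lon1, lat2, lon2):
    # great-circle distance in kilometres between two (lat, lon) pairs
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371.0 * 2 * math.asin(math.sqrt(a))

users = [('alice', 52.52, 13.40), ('bob', 48.85, 2.35)]    # hypothetical users
me_lat, me_lon = 52.50, 13.39
nearby = [name for name, lat, lon in users
          if abs(lat - me_lat) < 0.1 and abs(lon - me_lon) < 0.1   # rough box
          and haversine_km(me_lat, me_lon, lat, lon) < 10.0]       # exact check
print(nearby)   # ['alice']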
1
0
0
I have the following situation. I keep track of a user's longitude and latitude using ios. I send the longitude and latitude coordinates to a django server. How can I use these longitude and latitude coordinates to determine what objects are near? Basically how can I use these coordinates to determine the list of other users near a user?
Getting near objects. Django + Badoo
0
0
0
259
16,363,658
2013-05-03T16:32:00.000
0
0
1
0
python,bash,pip
16,363,883
2
false
0
0
Fixed it. I renamed the directory where python3 was installed; bash automatically falls back to the next available python install, python 2.7.
1
1
0
I recently installed python3 only to realize that mysql-python as well as many other modules were not well supported with it yet. So I changed the path in my bashrc file to point to an installation of python 2.7. The problem is that when I installed python 3 I also installed distribute and pip along with it. I removed the pip and distribute files from the python3 bin directory and installed setuptools and pip using python 2.7 however now when I use the pip command to install django and mysql-python, I get a bash error python331/bin/pip No such file or directory. It's still looking for pip in the python3 install. How can I remedy this? Thanks
How to use pip with multiple instances of python
0
0
0
420
16,364,311
2013-05-03T17:13:00.000
2
0
0
0
python,mayavi
17,346,232
1
true
0
0
The general principle is that vtk objects have a lot of overhead, so for rendering performance you want to pack as many things into one object as possible. When you call mlab convenience functions like points3d, it creates a new vtk object to handle that data. Thus iterating and creating thousands of single points as vtk objects is a very bad idea. The trick of temporarily disabling the rendering, as in that other question, helps, but the "right" way to do it is to have one VTK object that holds all of the different points. To set the different points as different colors, give scalar values to the vtk object:

x, y, z = np.random.random((3, 100))
some_data = mlab.points3d(x, y, z, colormap='cool')
some_data.mlab_source.dataset.point_data.scalars = np.random.random((100,))

This only works if you can adequately represent the color values you need in a colormap. This is easy if you need a small finite number of colors or a small finite number of simple colormaps, but very difficult if you need completely arbitrary colors.
1
7
1
I am using mayavi.mlab to display 3D data extracted from images. The data is as follows: 3D camera parameters as 3 lines in the x, y, z direction around the camera center, usually for about 20 cameras, using mlab.plot3d(). 3D coloured points in space, about 4000 points, using mlab.points3d(). For (1) I have a function to draw each line for each camera separately. If I am correct, all these lines are added to the mayavi pipeline for the current scene. Upon mlab.show() the scene takes about 10 seconds to render all these lines. For (2) I couldn't find a way to plot all the points at once with each point a different color, so at the moment I iterate with mlab.points3d(x,y,z, color = color). I have never waited for this routine to finish as it takes too long. If I plot all the points at once with the same color, it takes about 2 seconds. I already tried to start my script with fig.scene.disable_render = True and resetting fig.scene.disable_render = False before displaying the scene with mlab.show(). How can I display my data with mayavi within a reasonable waiting time?
Render a mayavi scene with a large pipeline faster
1.2
0
0
1,606
16,364,340
2013-05-03T17:15:00.000
0
0
1
0
python,parsing,arcgis,shapefile
16,364,447
3
false
0
0
Might be a long shot, but you should check out ctypes, and maybe use the .dll file that came with a program (if it even exists) that can read that type of file. In my experience, things get weird when you start digging around .dlls.
1
2
0
I am interested in gleaning information from an ESRI .shp file. Specifically the .shp file of a polyline feature class. When I open the .dbf of a feature class, I get what I would expect: a table that can open in excel and contains the information from the feature class' table. However, when I try to open a .shp file in any program (excel, textpad, etc...) all I get is a bunch of gibberish and unusual ASCII characters. I would like to use Python (2.x) to interpret this file and get information out of it (in this case the vertices of the polyline). I do not want to use any modules or non built-in tools, as I am genuinely interested in how this process would work and I don't want any dependencies. Thank you for any hints or points in the right direction you can give!
How to parse a .shp file?
0
0
0
2,611
16,366,862
2013-05-03T20:15:00.000
1
0
1
0
python,eclipse,pydev
16,366,938
2
false
0
0
Have you thought of using Git? You can create the project on one machine and clone it on the other. If you can't network the machines, sign up for GitHub or Bitbucket. It's really a huge jump from using Dropbox. You can also use Subversion from Unfuddle, which I don't recommend. You get one account for free with Unfuddle.
2
0
0
First of all, I am relatively new to Python and may be missing something about setting up a project in PyDev. I use PyDev with Eclipse and Windows OS. I am trying to work on the same project with two different machines. Basically my code is situated in a shared folder on Dropbox. I would like to access the same project (not necessarily simultaneously) on both machines. When I try to import the files in the project folder, PyDev on my second machine creates another file that is specific to it. Thus not the same file but a copy of the original project. I am proficient in MATLAB and I expect, probably mistakenly, the same file instances to be read on both machines. I feel the answer is quite simple but could not bump into it after an hour of googling. Thanks for your help in advance...
Accessing Python code (project) with two different machines
0.099668
0
0
84
16,366,862
2013-05-03T20:15:00.000
0
0
1
0
python,eclipse,pydev
16,366,961
2
true
0
0
If you are thinking of sharing code between multiple machines, you'd be better off using tools made specifically for that. They're called revision control systems. You can read about/try SVN and Git. You can probably get started with GitHub easily if you don't mind your code being public. As for the project in Eclipse, there is a checkbox you can uncheck that moves the content of the project to your workspace, which is not what you want, since you want to keep it in the Dropbox folder. I think it's called "Copy project into workspace".
2
0
0
First of all, I am relatively new to Python and may be missing something about setting up a project in PyDev. I use PyDev with Eclipse and Windows OS. I am trying to work on the same project with two different machines. Basically my code is situated in a shared folder on Dropbox. I would like to access the same project (not necessarily simultaneously) on both machines. When I try to import the files in the project folder, PyDev on my second machine creates another file that is specific to it. Thus not the same file but a copy of the original project. I am proficient in MATLAB and I expect, probably mistakenly, the same file instances to be read on both machines. I feel the answer is quite simple but could not bump into it after an hour of googling. Thanks for your help in advance...
Accessing Python code (project) with two different machines
1.2
0
0
84
16,367,953
2013-05-03T21:40:00.000
0
0
0
1
python,erlang,rabbitmq,celery,eventqueue
18,451,136
2
false
1
0
Two more exotic options to consider: (1) define a custom exchange type in the Rabbit layer. This allows you to create routing rules that control which tasks are sent to which queues. (2) define a custom Celery mediator. This allows you to control which tasks move, and when, from queues to worker pools.
1
0
0
I am running into a use case where I would like to have control over how and when celery workers dequeue a task for processing from rabbitmq. Dequeuing will be synchronized with an external event that happens out of celery context, but my concern is whether celery gives me any flexibility to control dequeueing of tasks? I tried to investigate and below are a few possibilities: Make use of basic.get instead of basic.consume, where basic.get is triggered based upon external event. However, I see celery defaults to basic.consume (push) semantics. Can I override this behavior without modifying the core directly? Custom remote control the workers as and when the external event is triggered. However, from the docs it isn't very clear to me how remote control commands can help me to control dequeueing of the tasks. I am very much inclined to continue using celery and possibly keep away from writing a custom queue processing solution on top of AMQP.
Consuming from queues based upon external event (event queues)
0
0
0
689
16,369,398
2013-05-04T00:31:00.000
3
1
0
0
python,unit-testing
16,394,313
2
false
0
0
We use a continuous integration server, Jenkins, for such tasks. It has cron-like scheduling and can send an email when a build becomes unstable (a test fails). There is an extension to Python's unittest module that produces JUnit-style XML reports, which Jenkins supports.
2
3
0
I'm currently in the process of writing some unit tests I want to constantly run every few minutes. If any of them ever fail, I want to grab the errors that are raised and do some custom processing on them (sending out alerts, in my case). Is there a standard way of doing this? I've been looking at unittest.TestResult, but haven't found any good example usage. Ideas?
Custom onFailure Call in Unittest?
0.291313
0
0
127
16,369,398
2013-05-04T00:31:00.000
0
1
0
0
python,unit-testing
16,395,234
2
true
0
0
In the end, I wound up running the test and returning the TestResult object. I then look at the failures attribute of that object, and run post-processing on each test in the suite that failed. This works well enough for me, and lets me custom-design my post-processing. For any extra metadata per test that I need, I subclass unittest.TestResult and add to the addFailure method anything extra that I need.
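A compact sketch of that pattern: run a suite into a TestResult and post-process result.failures (the tests are hypothetical, and the alerting is left as a print):

import unittest

class DemoTest(unittest.TestCase):              # hypothetical tests
    def test_ok(self):
        self.assertTrue(True)
    def test_broken(self):
        self.assertEqual(1, 2)

suite = unittest.TestLoader().loadTestsFromTestCase(DemoTest)
result = unittest.TestResult()
suite.run(result)
for test, traceback_text in result.failures:    # (test, formatted traceback) pairs
    print('ALERT: %s failed\n%s' % (test.id(), traceback_text))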
2
3
0
I'm currently in the process of writing some unit tests I want to constantly run every few minutes. If any of them ever fail, I want to grab the errors that are raised and do some custom processing on them (sending out alerts, in my case). Is there a standard way of doing this? I've been looking at unittest.TestResult, but haven't found any good example usage. Ideas?
Custom onFailure Call in Unittest?
1.2
0
0
127
16,369,685
2013-05-04T01:23:00.000
0
0
0
1
python,google-app-engine
16,476,151
2
false
1
0
Is your application running on the appspot.com domain or your own custom domain? In the former case it should work without you specifying the target. In the case of a custom domain we are aware of problems with this scenario. Please file a bug in either case.
1
0
0
I'm working with the Google App Engine Task Queue feature (Push). Locally, with the dev server, everything is working fine, but once deployed my task fails. I have put logs in it (the logging python module) but they do not appear in my dashboard logs. Is there anything to do to make it work? Thanks for your help.
GAE: Logs from tasks do not appear in dashboard
0
0
0
63
16,370,283
2013-05-04T03:28:00.000
0
1
0
1
python,windows,git,ubuntu
16,375,343
2
false
0
0
Create a bare repository on your server. Configure your local repository to use the repository on the server as a remote. When working on your local workstation, commit your changes and push them to the repository on your server. Create a post-receive hook in the server repository that calls "git archive" and thus transfers your files to some other directory on the server.
1
1
0
I set up a new Ubuntu 12.10 server on VPN hosting. I have installed all the required setup like Nginx, Python, MySQL etc. I am configuring this to deploy a Flask + Python app using uWSGI. It's working fine. But to create a basic app I used the Putty tool (from Windows) and created the required app .py files. But I want to set up Git so that I can push my code to the required directory, say /var/www/mysite.com/app_data, so that I don't have to use SSH or FileZilla etc. every time I make some changes to my website. Since I use both Ubuntu & Windows for development of the app, setting up Git-style functionality would help me push or change my data easily to my cloud server. How can I set up this Git functionality in Ubuntu? And how could I access it and deploy data using tools like GitBash etc.? Please suggest.
How to setup Git to deploy python app files into Ubuntu Server?
0
0
0
1,205
16,370,483
2013-05-04T04:07:00.000
4
0
1
0
python,arrays,python-2.7,python-3.x
16,370,523
1
false
0
0
The line num = random.randint(0,9) sets num to an int, and so when fillaray returns num (assuming size > 0), it is returning an int, not a list, and this int is then passed to totalOdds and totalEvens, which try to subscript it (i.e., do num[i]) as though it were a list, which is an error. Presumably, what you want to do is to append the random ints to the list num instead of overwriting it, e.g., by doing num.append(random.randint(0,9)).
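A corrected version of the pattern the answer describes; the function name comes from the question, everything else is a guess at the assignment's intent:

import random

def fillaray(size):
    num = []                               # start with an empty list...
    for _ in range(size):
        num.append(random.randint(0, 9))   # ...and append, instead of overwriting num
    return num

nums = fillaray(10)
odds = [n for n in nums if n % 2]          # subscripting/iterating now works
print(nums, odds)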
1
0
0
I keep getting the error 'int' object is not subscriptable. I know my problem is within "def filaray()". I also know making "num" a list would be more efficient. However, this is an assignment and I'm pretty sure we can only use arrays. Is there a way I can fix my error while not making "num" a list?
How to fix "int" is not subscriptable error?
0.664037
0
0
350
16,375,251
2013-05-04T14:16:00.000
2
0
0
0
javascript,python,html,screen-scraping,eval
16,385,053
2
false
1
0
Well, in the end I came down to the following possible solutions: Run Chrome headless and collect the html output (thanks to koenp for the link!) Run PhantomJS, a headless browser with a javascript api Run HTMLUnit; same thing but for Java Use Ghost.py, a python-based headless browser (that I haven't seen suggested anyyyywhere for some reason!) Write a DOM-based javascript interpreter based on Pyv8 (Google v8 javascript engine) and add this to my current "half-solution" with mechanize. For now, I have decided to use either use Ghost.py or my own modification of the PySide/PyQT Webkit (how ghost works) to evaluate the javascript, as apparently they can run quite fast if you optimize them to not download images and disable the GUI. Hopefully others will find this list useful!
1
1
0
This is part of a project I am working on for work. I want to automate a Sharepoint site, specifically to pull data out of a database that I and my coworkers only have front-end access to. I FINALLY managed to get mechanize (in python) to accomplish this using Python-NTLM, and by patching part of its source code to fix a recurring error. Now, I am at what I would hope is my final roadblock: part of the form I need to submit seems to be the output of a JavaScript function :| and lo and behold... Mechanize does not support javascript. I don't want to emulate the javascript functionality myself in python because I would ideally like a reusable solution... So, does anyone know how I could evaluate the javascript on the local html I download from sharepoint? I just want to run the javascript somehow (to complete the loading of the page), but without a browser. I have already looked into selenium, but it's pretty slow for the amount of work I need to get done... I am currently looking into PyV8 to try and evaluate the javascript myself... but surely there must be an app or library (or anything) that can do this??
Evaluate javascript on a local html file (without browser)
0.197375
0
1
1,441
16,375,396
2013-05-04T14:33:00.000
0
0
0
0
python,pygame
16,377,757
1
true
0
1
Setting gamma or screen brightness is not the best way to go. When your game gets more complicated, it's easier if you use the technique ecline6 suggested: changing your background image. But since it is a loaded image, I suppose, you can use this trick: let's say you have a background surface. Every 10s you will fire an event that will change the background by blitting a low-alpha black surface on top of it. That way your background will get darker. You can do the same with white, to make it brighter too.
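A hedged sketch of the darkening trick; window size, colours and alpha are arbitrary, and in a real game the blit would be driven by a timer event:

import pygame

pygame.init()
screen = pygame.display.set_mode((320, 240))
background = pygame.Surface(screen.get_size())
background.fill((135, 206, 235))         # daytime sky

shade = pygame.Surface(screen.get_size())
shade.fill((0, 0, 0))
shade.set_alpha(25)                      # low alpha: each blit darkens a little

background.blit(shade, (0, 0))           # call this every in-game "10 s" to fade to night
screen.blit(background, (0, 0))
pygame.display.flip()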
1
0
0
I'm trying to write a simple 2d game using python 3.x and pygame 1.9.2. I want to simulate the changing of days/nights. I have got in-game time (like real, but faster) and I'd like to make my screen darker at an exact time, but have no ideas how to do it. Could you advise me how to reduce the brightness of my screen, or other ways to show the changing of days and nights? Excuse my language if I make mistakes. English isn't native for me)
Changing days and nights with pygame
1.2
0
0
755
16,376,048
2013-05-04T15:42:00.000
0
1
0
0
python,apache,cgi
16,376,236
1
false
0
0
You need to put your Python module somewhere that Python's import can see it. The easy ways to do that are: Make a directory for the module, and add that directory to your PYTHONPATH environment variable. Copy the module into your Python site-packages directory, which is under your Python installation directory. In either case, you will need to make sure your module's name is not the same as the name of some other module that might be imported by Python in your CGI script.
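Besides the two options above, a third is to extend sys.path at the top of the CGI script itself; a hedged sketch, where the directory and module name are placeholders:

import sys
sys.path.insert(0, r'C:\Apache\my_modules')   # hypothetical directory holding the module
import mymodule                               # hypothetical module living there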
1
1
0
I'm running a python CGI script on my localhost that needs to import and use another python module that I wrote. I placed the CGI script in the Apache cgi-bin directory (I'm running this on windows). I've tried placing my custom module in the same directory, but it doesn't seem to be able to import that module. I would prefer to not have the custom module be another CGI script that is called via exec().
Using custom module with Python CGI script
0
0
0
520
16,376,374
2013-05-04T16:20:00.000
0
0
0
0
python-2.7,multiprocessing,cpu-usage
16,378,262
1
true
0
0
As your application isn't very CPU intensive, the slow disk turns out to be the bottleneck. Old 5200 RPM laptop hard drives are very slow, which, in addition to fragmentation and low RAM (which impacts disk caching), makes reading very slow. This in turn slows down processing and yields low CPU usage. You can try defragmenting, compressing the input files (as they become smaller in disk size, processing speed will increase) or other means of improving IO.
1
0
0
I have been running a Python octo.py script to do word counting/author on a series of files. The script works well -- I tried it on a limited set of data and am getting the correct results. But when I run it on the complete data set it takes forever. I am running on a windows XP laptop with dual core 2.33 GHz and 2 GB RAM. I opened up my CPU usage and it shows the processors running at 0%-3% of maximum. What can I do to force Octo.py to utilize more CPU? Thanks.
Octo.py only using between 0% and 3% of my CPUs
1.2
1
0
196
16,378,144
2013-05-04T19:29:00.000
0
0
0
0
python,events,tkinter,label,real-time
16,378,921
1
false
0
1
Your question is too vague to answer precisely, but to specifically address each individual point: Yes, it's possible to "react .. in real time" -- whenever an event is detected it will be acted upon as soon as possible. Yes, it's possible to color the border of a widget when an event is detected Yes, it's possible to type into a label. Though, obviously, the behavior is unusual and may not be what you expect. I suspect none of those help you solve your real problem, but I have no idea what you're actually trying to accomplish.
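To make the three points concrete, a small hedged sketch: click the label to focus it and recolor its border, then type a digit into it (widget options may need tuning per platform):

try:
    import tkinter as tk          # Python 3
except ImportError:
    import Tkinter as tk          # Python 2

root = tk.Tk()
label = tk.Label(root, text='0', width=4, highlightthickness=2)
label.pack(padx=20, pady=20)

def on_click(event):
    event.widget.focus_set()                              # let it receive key events
    event.widget.config(highlightbackground='red',
                        highlightcolor='red')             # recolor the "border"

def on_key(event):
    if event.char.isdigit():
        event.widget.config(text=event.char)              # "type" a single number

label.bind('<Button-1>', on_click)
label.bind('<Key>', on_key)
root.mainloop()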
1
0
0
I'd like to create a program which will react to actions by the user in real time. For example there will be three Labels. And when the user clicks on one, I want to recolor the border to a different color, and the user should be able to "type" a (single) number in this Label. I know about the Entry widget, but Labels are suitable for the whole application. Thank you for any answers
How to react on actions (events) in real time in Tkinter?
0
0
0
418
16,379,254
2013-05-04T21:42:00.000
1
0
1
0
python,mongodb,pymongo
16,380,066
1
false
0
0
You can use bulk insert with the option w=0 (formerly safe=False), but then you should do a check to see if all documents were actually inserted, if this is important for you.
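A hedged sketch against the pymongo 2.x API of the time (database and collection names are placeholders); note that insert with w=0 will not replace existing _ids, which is why the follow-up check matters:

from pymongo import MongoClient

coll = MongoClient().mydb.mycoll                       # hypothetical db/collection
docs = [{'_id': 1, 'v': 'a'}, {'_id': 2, 'v': 'b'}]
coll.insert(docs, w=0)                                 # fire-and-forget bulk insert
# if it matters, verify afterwards that everything landed:
found = coll.find({'_id': {'$in': [d['_id'] for d in docs]}}).count()
print(found == len(docs))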
1
3
0
I'm trying to write a function to do a bulk save to a mongoDB using pymongo; is there a way of doing it? I've already tried using insert, and it works for new records but fails on duplicates. I need the same functionality that you get using save, but with a collection of documents (it replaces an already-added document with the same _id instead of failing). Thanks in advance!
Is there a pymongo (or another Python library) bulk-save?
0.197375
1
0
311
16,379,321
2013-05-04T21:52:00.000
1
1
1
1
python,ubuntu,pydev,pymongo
16,379,374
1
true
0
0
You can install packages for a specific version of Python; all you need to do is specify the version of Python you want to use on the command line, e.g. python2.7 or python3. Examples: python3 -m pip install your_package, or python3 -m easy_install your_package.
1
0
0
I am currently trying to run PyDev with Pymongo on a Python3.3 interpreter. My problem is, I am not able to get it working :-/ First of all I installed Eclipse with PyDev. Afterwards I tried installing pip to download my Pymongo module. Problem is: it always installs pip for the default 2.7 version. I read that you shouldn't change the default system interpreter (running on Lubuntu 13.04 32-Bit), so I tried to install a second Python3.3 and run it in a virtual environment, but I can't find any detailed information on how to apply everything to my specific problem. Maybe there is someone out there that uses a similar configuration and can help me out to get everything running (in a simple way)? Thanks in advance, Eric
Using Python3 with Pymongo in Eclipse Pydev on Ubuntu
1.2
0
0
1,303
16,379,844
2013-05-04T23:18:00.000
0
1
0
0
python,bittorrent,libtorrent
16,382,906
1
false
0
0
Make sure to set the download directory to the place where the original files reside, when adding the torrent to the session. The torrent will detect that the files are already there and hash them to verify that they are correct, and seed any pieces that matched the expected hash. You can force libtorrent to trust you that the pieces/files are all there by setting the seed_mode in the add_torrent_params when adding the torrent. This will make libtorrent assume the files are there and not check them until they are requested.
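A heavily hedged sketch of that flow against the 2013-era python-libtorrent bindings; the dict-style parameters, including the seed_mode key, are an assumption, so check them against your binding's version:

import time
import libtorrent as lt

ses = lt.session()
ses.listen_on(6881, 6891)
info = lt.torrent_info('my.torrent')                  # hypothetical torrent file
h = ses.add_torrent({'ti': info,
                     'save_path': '/data/originals',  # directory holding the real files
                     'seed_mode': True})              # assumption: trust files, skip recheck
while True:
    time.sleep(1)   # keep the process alive so it can seed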
1
0
0
After several hours of looking for the answer I've had no luck. Can anyone point me to an example of how to create a torrent and seed that brand new torrent in python? So far I can download just fine and I can produce torrent files. However, when I try to start my own torrent I get stuck on downloading rather than seeding. Obviously this is a problem since the swarm contains only my host, which is supposed to be the seeder. Any advice?
How do I seed a directory or file using python-libtorrent?
0
0
0
765
16,380,967
2013-05-05T02:59:00.000
2
0
0
0
python,raw-input
16,380,973
2
false
0
0
getpass() doesn't work in IDLE. Try it in command prompt or on the terminal.
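For reference, the standard usage (the prompt text is arbitrary):

from getpass import getpass
secret = getpass('Enter value: ')   # echoes nothing, but only in a real terminal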
1
11
0
I need something like raw_input, but without displaying the text you are typing. Also I tried getpass, but it still displays what I just typed.
Type text without displaying text
0.197375
0
0
5,707
16,382,019
2013-05-05T06:30:00.000
2
0
1
0
python,algorithm,math,integral
16,382,307
1
true
0
0
It depends on your context and the performance criteria. I assume that you are looking for a numerical approximation (as opposed to an algebraic integration). A Riemann sum is the standard 'educational' way of numerically calculating integrals, but several computationally more efficient algorithms exist.
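Since the question is a learning exercise, here is one simple library-free option, Monte Carlo integration; this is my illustration rather than the method the answer names, and it scales gracefully with the number of variables at the cost of slow convergence:

import random

def monte_carlo_integrate(f, bounds, samples=100000):
    # estimate the integral of f over a box given as [(lo, hi), ...]
    volume = 1.0
    for lo, hi in bounds:
        volume *= (hi - lo)
    total = 0.0
    for _ in range(samples):
        point = [random.uniform(lo, hi) for lo, hi in bounds]
        total += f(*point)
    return volume * total / samples

# the integral of x*y over the unit square is exactly 0.25
print(monte_carlo_integrate(lambda x, y: x * y, [(0, 1), (0, 1)]))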
1
0
1
I have a function f(x_1, x_2, ..., x_n) where n >= 1 that I would like to integrate. What algorithm should I use to provide a decently stable / accurate solution? I would like to program it in Python so any open source examples are more than welcome! (I realize that I should use a library but this is just a learning exercise.)
How do I integrate a multivariable function?
1.2
0
0
270
16,382,769
2013-05-05T08:30:00.000
0
1
0
1
eclipse,python-3.x,settings,pydev,interpreter
19,039,512
2
false
0
0
Use the auto-config option. It will automatically find the libraries.
2
1
0
I am new to Eclipse & PyDev (on Ubuntu 13.04) and want to try Python3.3 programming. But I cannot choose the python3.3 interpreter - I try to choose it in usr\lib\python3.3, but: - when I try to choose PYTHONPATH by clicking "New folder", the window doesn't open (I can do it only after choosing auto-config, which will add python2.7 paths); - I don't know which file in usr\lib\python3.3 I need to choose as the python3.3 interpreter (auto-config returns me only 2.7 objects). Can you advise me how to choose the python3.3 interpreter (maybe the main thing is the file\path I need to choose in "usr\lib\python3.3" as the interpreter file - in windows Eclipse I see python3.3.exe - I need to find its equal in Ubuntu, I think)? Thanks!
Choosing Python3.3 interpreter in Eclipse problems
0
0
0
986
16,382,769
2013-05-05T08:30:00.000
0
1
0
1
eclipse,python-3.x,settings,pydev,interpreter
19,208,342
2
false
0
0
You set the path usr\lib\python3.3 by typing it directly in the 'Interpreter Executable' field! You don't have to search for the Interpreter file. This will do the Auto Config for you. Afterwards you declare a name and you're done.
2
1
0
I am new to Eclipse & PyDev (on Ubuntu 13.04) and want to try Python3.3 programming. But I cannot choose the python3.3 interpreter - I try to choose it in usr\lib\python3.3, but: - when I try to choose PYTHONPATH by clicking "New folder", the window doesn't open (I can do it only after choosing auto-config, which will add python2.7 paths); - I don't know which file in usr\lib\python3.3 I need to choose as the python3.3 interpreter (auto-config returns me only 2.7 objects). Can you advise me how to choose the python3.3 interpreter (maybe the main thing is the file\path I need to choose in "usr\lib\python3.3" as the interpreter file - in windows Eclipse I see python3.3.exe - I need to find its equal in Ubuntu, I think)? Thanks!
Choosing Python3.3 interpreter in Eclipse problems
0
0
0
986
16,384,192
2013-05-05T11:45:00.000
4
0
1
0
python,jinja2,gettext,python-babel
16,385,180
3
true
1
0
Finally, I used this solution: add the get_locale function, which should be defined anyway, to the Jinja2 globals, and then call it in the template like any other function.
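A sketch of that wiring, assuming Flask-Babel's get_locale; the import path has varied across Flask-Babel versions:

from flask import Flask
from flask_babel import Babel, get_locale

app = Flask(__name__)
babel = Babel(app)

app.jinja_env.globals['get_locale'] = get_locale
# in a template: <a href="...">{{ get_locale() }}</a>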
1
3
0
On my website use Flask + Jinja2, and Flask-Babel for translation. The site has two languages (depending on the URL), and I'd want to add a link to switch between them. To do this correctly I need to get the name of the current locale, but I didn't find such function in the docs. Does it exist at all?
Get current locale in Jinja2
1.2
0
0
4,159
16,386,707
2013-05-05T16:36:00.000
0
0
0
1
python,django,homebrew
16,386,760
2
true
1
0
That kind of problem smells like an architecture mess. You may be trying to execute a 64bit library from a 32bit interpreter or vice versa… As you're using homebrew, you should be really careful about which interpreter you're using, what your path is, etc… Maybe you should trace your program to find out more exactly where it fails, so you can pinpoint what is actually failing. It is very unlikely that django itself is failing; more likely it is something that django uses. For someone to help you, you need to dig more closely into your failing point, and give more context about what is failing beyond django.
2
4
0
Here is my issue: I installed Python and Django on my mac. When I run "django-admin.py startproject test1" I get this error: 1 11436 illegal hardware instruction django-admin.py startproject test1 (the number is always different) I've tested with multiple Django versions, and this only happens with version 1.4 and higher...1.3 works fine. I've been searching the web like crazy for the past week, and couldn't find anything regarding this issue with django, so I assume the problem is not Django itself but something else. This is only on my mac at home; at work, where I use Ubuntu, it works fine. I tried to reinstall my entire system and these are the only things I have installed right now: - Command line tools - Homebrew - Python & pip (w/ Homebrew) - Git (w/ Homebrew) - zsh (.oh-my-zsh shell) I set up my virtualenv and installed django 1.5.1 -- the same issue still appears. I'm out of options for now since nothing I found resolves my problem; I'm hoping someone has some knowledge about this error. I appreciate all the help, and thanks. This is the python crash log: Process: Python [2597] Path: /usr/local/Cellar/python/2.7.4/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python Identifier: Python Version: 2.7.4 (2.7.4) Code Type: X86-64 (Native) Parent Process: zsh [2245] User ID: 501 Date/Time: 2013-05-05 20:53:19.899 +0200 OS Version: Mac OS X 10.8.3 (12D78) Report Version: 10 Interval Since Last Report: 16409 sec Crashes Since Last Report: 2 Per-App Crashes Since Last Report: 1 Anonymous UUID: D859C141-544F-3473-1A13-F984DB2F8CBE Crashed Thread: 0 Dispatch queue: com.apple.main-thread Exception Type: EXC_BAD_INSTRUCTION (SIGILL) Exception Codes: 0x0000000000000001, 0x0000000000000000
Python & Django on a Mac: Illegal hardware instruction
1.2
0
0
3,921
16,386,707
2013-05-05T16:36:00.000
0
0
0
1
python,django,homebrew
68,527,402
2
false
1
0
I had the same issue, but got around it by using Docker/docker-compose.
2
4
0
Here is my issue: I installed Python and Django on my mac. When I run "django-admin.py startproject test1" I get this error: 1 11436 illegal hardware instruction django-admin.py startproject test1 (the number is always different) I've tested with multiple Django versions, and this only happens with version 1.4 and higher...1.3 works fine. I've been searching the web like crazy for the past week, and couldn't find anything regarding this issue with django, so I assume the problem is not Django itself but something else. This is only on my mac at home; at work, where I use Ubuntu, it works fine. I tried to reinstall my entire system and these are the only things I have installed right now: - Command line tools - Homebrew - Python & pip (w/ Homebrew) - Git (w/ Homebrew) - zsh (.oh-my-zsh shell) I set up my virtualenv and installed django 1.5.1 -- the same issue still appears. I'm out of options for now since nothing I found resolves my problem; I'm hoping someone has some knowledge about this error. I appreciate all the help, and thanks. This is the python crash log: Process: Python [2597] Path: /usr/local/Cellar/python/2.7.4/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python Identifier: Python Version: 2.7.4 (2.7.4) Code Type: X86-64 (Native) Parent Process: zsh [2245] User ID: 501 Date/Time: 2013-05-05 20:53:19.899 +0200 OS Version: Mac OS X 10.8.3 (12D78) Report Version: 10 Interval Since Last Report: 16409 sec Crashes Since Last Report: 2 Per-App Crashes Since Last Report: 1 Anonymous UUID: D859C141-544F-3473-1A13-F984DB2F8CBE Crashed Thread: 0 Dispatch queue: com.apple.main-thread Exception Type: EXC_BAD_INSTRUCTION (SIGILL) Exception Codes: 0x0000000000000001, 0x0000000000000000
Python & Django on a Mac: Illegal hardware instruction
0
0
0
3,921
16,390,603
2013-05-06T00:42:00.000
0
0
0
0
python,google-app-engine,channel-api
16,394,921
1
false
1
0
Since the socket exists on the client, you would have to send a close command from the server to the client.
1
0
0
On the client side it is possible to close the socket with connection.close(), but is it possible to close it from the server side?
Close GAE channel from server side Python
0
0
1
115
16,390,995
2013-05-06T01:51:00.000
0
0
0
0
python,module,pygame
16,391,007
3
false
0
1
No, Tkinter is not enough for making 3D games.
1
0
0
I am soon going to be helping my friend create a 3D game with Python. Will I be able to use Tkinter for this (which I doubt, because the module only accepts .gif images) and add a Z axis? Or will I have to use an extra module like PGReloaded, PyGame, etc.? If so, could someone give me a good place to learn the module? By the way, the name of the game will be The Search.
3 Dimensional Games in Python Modules
0
0
0
1,469
16,392,113
2013-05-06T04:46:00.000
0
0
0
0
python,web
16,397,349
2
false
1
0
9000 said: "I'd try to sniff/track a real exchange between browser and the site; both Chrome and FF have tools for that. I'd also consider using mechanize instead of raw urllib2." This is the answer - mechanize is really easy to use and supports multiple forms. Thanks!
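The mechanize call that picks a form is select_form; a hedged sketch, with the URL and field names invented:

import mechanize

br = mechanize.Browser()
br.open('http://forum.example.com/login')   # hypothetical URL
br.select_form(nr=1)                        # pick the form by index, or by name=...
br.form['username'] = 'me'                  # hypothetical field names
br.form['password'] = 'secret'
response = br.submit()
print(response.code)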
1
0
0
I am trying to log in a forum using Python/URLLib2. But I can't seem to succeed. I think it might be because there are several form objects in the login page, and I submit the incorrect one (the same code worked for a different forum, with a single form). Is there a way to specify which form to submit in URLLib2? Thanks.
python urllib2 - login and specify a form in page
0
0
1
98
16,392,909
2013-05-06T06:09:00.000
0
0
0
0
python,sockets,python-2.7,event-handling
16,399,317
2
false
0
0
In asynchronous programming you don't interrupt an operation triggered by a message. All operations should be done in a short and fast fashion so you can process lots of messages per second. This way every operation is atomic and you don't suffer race conditions so easily. If you need to do more complex processing in parallel, you can hand those problems over to a helper thread. Libraries like twisted are prepared for such use cases.
1
0
0
I'm working with sockets and asynchronous event-driven programming. I would like to send a message and, once I receive a response, send another message. But I may be doing something besides listening. That is, I want to be interrupted when socket.recv() actually receives a message. Question 1: How can I let layer 3 interrupt layer 4? i.e. How can I handle the event of a non-null-returning socket.recv() without actually dedicating "program time" to waiting at a specific time to listen for incoming messages?
Python: Interrupting sender with incoming messages
0
0
1
110
16,394,732
2013-05-06T08:20:00.000
3
0
1
0
python,warnings,pydev,python-import
16,394,915
1
true
0
0
That message just tells you that you are importing features from a module that you don't need, which means you should probably import just what you need. You should simply use from foobar import x, y where x and y are the elements you actually need. The syntax from foobar import * is more useful in the command-line interpreter, where you don't want to think or type many more characters for little benefit. But in a real project you should not use that syntax, since if you use it, it will not be clear which features from the module you are going to use.
1
0
0
Whenever I import from another module using an asterisk (from <anymodule> import *) I am met with an "Unused wild import" warning. It appears as if this is not the right way to do the import, but why does that syntax exist if we shouldn't be using it?
"Unused wild import": Why?
1.2
0
0
4,983
16,395,391
2013-05-06T09:07:00.000
0
0
0
0
python,django,django-south
16,395,687
1
false
1
0
Suppose that your old project's model definitions are consistent with the database, and that you want to edit some model(s) in the app named 'myappName'. Then your algorithm will be the following: (1) Before making any changes to your model, do: python manage.py convert_to_south myappName. (2) Modify the model(s) in app 'myappName'. (3) Create migrations: python manage.py schemamigration myappName --auto. (4) Apply migrations: python manage.py migrate myappName.
1
1
0
Have installed south into my old django project. Have run: syncdb, convert_to_south myappName. But that did not sync everything. Ran: migrate myappName. Did not sync 100%, still have one column that is not found. Ran: schemamigration --auto myappName. But not synced to 100%.... Any ideas?
Sync django with south
0
0
0
469
16,396,903
2013-05-06T10:35:00.000
5
0
0
0
python,pandas
61,941,548
8
false
0
0
inp0 = pd.read_csv("bank_marketing_updated_v1.csv", skiprows=2), or if you want to do it on an existing dataframe, simply slice off the first rows, e.g. df = df.iloc[3:]
2
249
1
I need to delete the first three rows of a dataframe in pandas. I know df.ix[:-1] would remove the last row, but I can't figure out how to remove first n rows.
Delete the first three rows of a dataframe in pandas
0.124353
0
0
412,922
16,396,903
2013-05-06T10:35:00.000
9
0
0
0
python,pandas
52,984,033
8
false
0
0
A simple way is to use tail(-n) to remove the first n rows: df = df.tail(-3)
2
249
1
I need to delete the first three rows of a dataframe in pandas. I know df.ix[:-1] would remove the last row, but I can't figure out how to remove first n rows.
Delete the first three rows of a dataframe in pandas
1
0
0
412,922
16,396,966
2013-05-06T10:39:00.000
2
0
0
0
python,django,pubdate
16,396,999
2
false
1
0
You can just remove auto_now=True and set the field manually when you want to, in your view.
2
0
0
I have a model Question with an IntegerField named flags and a DateTimeField called pub_date. pub_date is set to be auto_now=True. I have a view for changing the flags field. And when I change the flags and do .save() on the Question object, its pub_date changes to now. I want the pub_date to be set only when the record is being created, and not when I'm changing some data in the record. How can I do this? If you need to see my code, please tell me, because I don't think you need to here.
Django changes pub_date when I do .save()
0.197375
0
0
158
16,396,966
2013-05-06T10:39:00.000
3
0
0
0
python,django,pubdate
16,397,041
2
true
1
0
you should set auto_now_add = True
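Applied to the model from the question, that is a one-line change (the default on flags is an assumption):

from django.db import models

class Question(models.Model):
    flags = models.IntegerField(default=0)
    pub_date = models.DateTimeField(auto_now_add=True)  # set on creation only,
                                                        # never on later .save()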
2
0
0
I have a model Question with an IntegerField named flags and a DateTimeField called pub_date. pub_date is set to be auto_now=True. I have a view for changing the flags field. And when I change the flags and do .save() on the Question object, its pub_date changes to now. I want the pub_date to be set only when the record is being created, and not when I'm changing some data in the record. How can I do this? If you need to see my code, please tell me, because I don't think you need to here.
Django changes pub_date when I do .save()
1.2
0
0
158
16,397,384
2013-05-06T11:03:00.000
0
0
0
1
python,plugins,autocomplete,installation,gedit
45,171,475
1
false
0
0
If you have gedit3, have you checked that this is a plugin for gedit3, not gedit2? Take a look at *.plugin file. It should say IAge=3.
1
2
0
Of course I tried to copy the files in the .gnome2/gedit/plugins and in the .local/share/gedit/plugins directories. But it doesn't work at all. How do I install the plugin? I'm on Fedora 18. Lxde desktop manager.
Gedit plugin for python autocomplete: how to install?
0
0
0
1,116
16,399,348
2013-05-06T13:01:00.000
0
0
1
0
python,string,algorithm,hash,md5
16,400,671
3
false
0
0
All hash functions are designed to work with byte streams, so you should not first generate the whole string and only then hash it - you should write a generator which produces chunks of string data, and feed them to the MD5 context. And since MD5 uses a 64-byte (or char) buffer, it is a good idea to feed 64-byte chunks of data to the context.
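Applied to the challenge string, a sketch that never materialises the 6 GB string (6227020800 is the expansion length given in the question):

import hashlib

n = 6227020800                 # number of repeated 'a' characters
md5 = hashlib.md5()
chunk = b'a' * 65536           # a multiple of MD5's 64-byte block size
for _ in range(n // len(chunk)):
    md5.update(chunk)
md5.update(b'a' * (n % len(chunk)))
print(md5.hexdigest())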
1
7
0
I'm doing a programming challenge and I'm going crazy with one of the challenges. In the challenge, I need to compute the MD5 of a string. The string is given in the following form: n[c], where n is a number and c is a character. For example: b3[a2[c]] => baccaccacc Everything went ok until I was given the following string: 1[2[3[4[5[6[7[8[9[10[11[12[13[a]]]]]]]]]]]]] This string turns into a string with 6227020800 a's. This string is more than 6GB, so it's nearly impossible to compute it in practical time. So, here is my question: Are there any properties of MD5 that I can use here? I know that there has to be a way to do it in a short time, and I suspect it has to be related to the fact that the whole string is just the same character repeated multiple times.
Hashing same character multiple times
0
0
0
1,459
16,399,355
2013-05-06T13:01:00.000
0
0
0
0
python,html,refresh
52,434,003
8
false
1
0
The LivePage extension for Chrome. You can write to a file, then LivePage will monitor it for you. You can also optionally refresh on imported content like CSS. Chrome will require that you grant permissions on local file:// urls. (I'm unaffiliated with the project.)
1
11
0
I'm using Python to gather some information, construct a very simple html page, save it locally and display the page in my browser using webbrowser.open('file:///c:/testfile.html'). I check for new information every minute. If the information changes, I rewrite the local html file and would like to reload the displayed page. The problem is that webbrowser.open opens a new tab in my browser every time I run it. How do I refresh the page rather than reopen it? I tried new=0, new=1 and new=2, but all do the same thing. Using controller() doesn't work any better. I suppose I could add something like < META HTTP-EQUIV="refresh" CONTENT="60" > to the < head > section of the html page to trigger a refresh every minute whether or not the content changed, but would prefer finding a better way. Exact time interval is not important. Python 2.7.2, chrome 26.0.1410.64 m, Windows 7 64.
Refresh a local web page using Python
0
0
1
66,470
16,400,412
2013-05-06T14:01:00.000
3
0
0
0
python,google-app-engine,google-cloud-datastore,google-cloud-endpoints
16,449,936
1
true
1
0
It depends on what your goal is and whether you're using db or ndb. If you use str(key) you'll get an entity key and will need to construct a new key (on the server) depending on that value. Using ndb, I would recommend using key.urlsafe() to be explicit, and then ndb.Key(urlsafe=value) to create the new key. Unfortunately the best you can do with db is str(key) and db.Key(string_value). Using key.id() also depends on ndb or db. If you are using db you know this value will be an integer (and that key.name() will be a string), but if you are using ndb it could be either an integer or a string. In that case, you should use key.integer_id() or key.string_id(). In either case, if you turn integers into strings, this will require manually casting back to an integer before retrieving entities or setting keys; e.g. MyModel.get_by_id(int(value)). If I were to make a recommendation, I would advise you to be explicit about your IDs, pay attention to the way they are allocated, and give these opaque values to the user in the API. If you want to let App Engine allocate IDs for you, use protorpc.messages.IntegerField to represent these rather than casting to a string. Also, PLEASE switch from db to ndb if you haven't already.
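The ndb round-trip being recommended, as a sketch; the model is hypothetical and this only runs inside an App Engine environment:

from google.appengine.ext import ndb

class MyModel(ndb.Model):          # hypothetical model
    name = ndb.StringProperty()

key = MyModel(name='x').put()
token = key.urlsafe()              # opaque string, safe to expose in the API
same_key = ndb.Key(urlsafe=token)  # reconstruct the key on the server
entity = same_key.get()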
1
1
0
When transmitting references to other Datastore entities using the Message class from ProtoRPC, should I use str(key) or key.id()? The first one is a string, the second one is a long. Does it make any difference in the end? Are there any restrictions? It appears that when filtering queries, the same results come out. Thanks
Use Datastore Entity's ID or Key in ProtoRPC.Message
1.2
0
0
386
16,404,100
2013-05-06T17:38:00.000
0
0
0
0
python-2.7,background-process,robotics
16,738,343
1
false
1
0
This may not answer your question directly, as I have a similar question about using Flask with 'background' data. I just wanted to make sure you know about ROS.... ROS.org. It has tools to help with what you are doing. Check out Flask-Script as well, though I'm not sure it will solve your problem. I'm currently using an SQLite DB to pass data between my applications (Flask for user status/interaction, a program that reads the hardware into the DB, and another program that makes decisions based on hardware inputs..) but my system is slow, 1Hz update rate is plenty.
1
1
0
I have a Python Flask application that I'm planning to use as a monitor for a robot that drives around (using Player/Stage). Now I'm able to connect with the robot and request information and so on. Player/Stage sends data about the position of the robot every time interval. I'm stuck with the following: The information about the position should be displayed in HTML; I was thinking about a jQuery POST that requests the position every 500ms and then updates the html (easy). Is there a better solution? Player/Stage also sends the actual location estimation of the robot; I want a background process that can save the data so I can display it (like number 1), but I don't see how a celery background job saves the information it calculates. How can I display the output of a background job and send it to the user (html/json)? I actually need to manage a couple of background jobs and let them quit depending on the output of another background job. So for example, when the robot drives to a specific point, I quit a job, start another, display its data to the user, and so on. I hope my explanation was helpful; I'm looking for advice, code samples and anything related. Regards,
Python Flask running background functions returning values
0
0
0
419
16,406,080
2013-05-06T19:46:00.000
1
0
0
1
python,google-app-engine,loops
16,406,284
2
false
1
0
Take a look at the GAE cron functionality.
1
0
0
Can I make a loop in Google App Engine that fetches information from a site? I have made a small code that already gets the information I want from the site, but I don't know how to make this code run every lets say, 20 minutes. Is there a way to do this? P.S.: I have looked at TaskQueue, but I'm not sure if it is meant for things like this.
Google App Engine urlfetch loop
0.099668
0
0
81
16,406,102
2013-05-06T19:48:00.000
0
0
0
0
python,python-2.7
16,408,496
2
true
0
0
Sorry to bother everybody. On further examination, it turns out that the code I was trying to get working simply doesn't work. In essence, it was an attempt to write a SOAP client using Python's xml.dom and httplib modules - and it was simply done wrong. I'm scrapping his code, and writing a proper SOAP client.
1
0
0
I'm trying to get a Python script working that uses xml.dom (not xml.dom.minidom). I'm working on a new install of Python 2.7 for Windows, and xml.dom doesn't have a DOMImplementation or a DocumentType. Is there some additional module I need to install?
from xml.dom import DOMImplementation, DocumentType
1.2
0
1
452
16,407,471
2013-05-06T21:20:00.000
0
0
1
1
python,nginx,wsgi,uwsgi,pyinstaller
16,407,922
1
false
0
0
The binary you get is just your program and a python interpreter stuffed together into an executable with all dependencies, so that it can be distributed more easily. It won't give you any speed boost, and it's not really 'compiled' into a binary. By using a binary of this kind you would lose all the advantages that WSGI provides you, so it's a bad idea. Just deploy your wsgi application as documented.
1
1
0
I have a working wsgi app developed in python. I know that pyinstaller will compile and get me a binary of this application. I have nginx and uwsgi running. Can I use this binary instead of the python script to run the whole thing from uwsgi, to boost the speed?
Running binary WSGI app
0
0
0
717
16,408,074
2013-05-06T22:07:00.000
2
1
0
0
python,debugging,exception,uwsgi,pretty-print
16,416,281
1
true
0
0
If you mean getting the exception message in the browser, just add --catch-exceptions. IMPORTANT: it could expose sensitive information, do not use it in production!!!
1
1
0
Would like to know if there is a simple, easy way to have uWSGI pretty print exception messages (for Python specifically, not sure if the settings are particular to Python or not). Thanks very much!
How to get uWSGI Python exception message pretty printing?
1.2
0
0
823
16,411,136
2013-05-07T04:19:00.000
2
0
0
0
python,html,google-app-engine
16,411,935
1
false
1
0
You're getting '405 method not allowed' because the POST is going to the same url that served up the page, but the handler for that path (MainPage) does not have a post method. That's the same diagnosis that you were given when you asked this question two hours earlier under a different user id. Stepping back further to address the "what have I done wrong" question: It seems to me that you've gotten quite far along before discovering that what you have doesn't work. So far along that the example is cluttered with irrelevant code that's unrelated to the failure. That makes it harder for you to figure this out for yourself, and it makes it more difficult for people here to help you. In this situation, you help yourself (and help others help you) by seeing how much code you can eliminate while still demonstrating the failure.
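The fix is a handler that implements both verbs; a minimal webapp2 sketch, with the form markup purely illustrative:

import webapp2

class MainPage(webapp2.RequestHandler):
    def get(self):
        self.response.write('<form method="post"><input type="submit"></form>')

    def post(self):                      # without this, POST to '/' returns 405
        self.response.write('got it')

app = webapp2.WSGIApplication([('/', MainPage)])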
1
0
0
I've been trying to get my python code to post. I have tried using the Postman plugin to test the post method and I would get a 405 method error. I am planning to have the user post the information and have it displayed. Currently if I press submit I get an error loading the page; changing the form to get results in the submit button working and returning to the previous page. If I change the handler to post, the screen instantly displays '405 Method Not Allowed'. I've looked through the Google App Engine logs and there are no errors. Can someone help me with what I have done wrong and advise me on how to get the post method functioning? Thanks for the time.
Unable to get Python to POST on html
0.379949
0
0
299
16,411,369
2013-05-07T04:46:00.000
3
0
1
0
python,syntax,pattern-recognition
16,454,636
3
true
0
0
Maybe you can use existing multi-language syntax highlighters. Many of them can detect the language a file is written in.
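For example, the Pygments library (one such highlighter) can guess the language of a snippet; a minimal sketch, with an illustrative sample string:

    from pygments.lexers import guess_lexer
    from pygments.util import ClassNotFound

    sample = "def greet(name):\n    return 'Hello, ' + name\n"
    try:
        lexer = guess_lexer(sample)
        print(lexer.name)  # e.g. "Python"
    except ClassNotFound:
        # No lexer matched: probably not source code at all.
        print("not recognized as source code")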
2
3
0
I need a module or strategy for detecting that a piece of data is written in a programming language, not syntax highlighting where the user specifically chooses a syntax to highlight. My question has two levels, I would greatly appreciate any help, so: Is there any package in python that receives a string(piece of data) and returns if it belongs to any programming language syntax ? I don't necessarily need to recognize the syntax, but know if the string is source code or not at all. Any clues are deeply appreciated.
Syntax recognizer in python
1.2
0
0
267
16,411,369
2013-05-07T04:46:00.000
2
0
1
0
python,syntax,pattern-recognition
16,484,922
3
false
0
0
My answer somewhat depends on the amount of code you're going to be given. If you're going to be given 30+ lines of code, it should be fairly easy to identify some unique features of each language that are fairly common. For example, tell the program that if anything matches an expression like from * import * then it's Python (I'm not 100% sure that phrasing is unique to Python, but you get the gist). Other things you could look at that are usually slightly different would be class definitions (i.e. Python always starts with 'class'; C will start with a definition of the return type, so you could check whether there is a line that starts with a data type and has the formatting of a method declaration), conditionals are usually formatted slightly differently, etc. If you wanted to make it more accurate, you could introduce some sort of weighting system: features that are more unique and less likely to be the result of a mismatched regexp get a higher weight, things that are commonly mismatched get a lower weight for the language, and you just calculate which language has the highest composite score at the end. You could also define features that you feel are 100% unique, and tell it that as soon as it hits one of those, to stop parsing because it knows the answer (things like the shebang line). This would, of course, involve you knowing enough about the languages you want to identify to find unique features to look for, or being able to find people who do know unique structures that would help. If you're given fewer than 30 or so lines of code, your answers from parsing like that are going to be far less accurate. In that case the easiest way to do it would probably be to use a setup similar to Travis, and just run the code in each language (in a VM of course). If the code runs successfully in a language, you have your answer. If not, you would need a list of errors that are "acceptable" (as in, they are errors in the way the code was written, not in the interpreter). It's not a great solution, but at some point your code sample will just be too short to give an accurate answer.
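To sketch the weighting idea in code (the patterns and weights here are illustrative, not a real rule set):

    import re

    # (pattern, weight) pairs per language; more unique patterns get
    # higher weights, commonly mismatched ones get lower weights.
    RULES = {
        'python': [(r'^\s*from \w+ import ', 3), (r'^\s*def \w+\(.*\):\s*$', 2)],
        'c':      [(r'#include\s*<\w+\.h>', 3), (r'\bint\s+main\s*\(', 2)],
    }

    def guess_language(code):
        scores = {lang: sum(w for pat, w in rules
                            if re.search(pat, code, re.MULTILINE))
                  for lang, rules in RULES.items()}
        # Highest composite score wins.
        return max(scores, key=scores.get)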
2
3
0
I need a module or strategy for detecting that a piece of data is written in a programming language, not syntax highlighting where the user specifically chooses a syntax to highlight. My question has two levels, I would greatly appreciate any help, so: Is there any package in python that receives a string(piece of data) and returns if it belongs to any programming language syntax ? I don't necessarily need to recognize the syntax, but know if the string is source code or not at all. Any clues are deeply appreciated.
Syntax recognizer in python
0.132549
0
0
267
16,412,763
2013-05-07T06:41:00.000
2
0
1
0
python,django,mercurial,pycharm
16,415,301
1
false
1
0
I assume you use Django >= 1.4. When you create a Django project using this Django version, by default it creates a folder with manage.py and inside it a subfolder with settings.py. What you should do is to use the folder with manage.py as the root for your PyCharm project and Mercurial repo. This way all your applications will be in the project and repo. If you don't want your PyCharm settings to be tracked by Mercurial, do not forget to add .idea folder to your .hgignore file.
1
0
0
What is the best/preferred/conventional way: (a) root of the repository and PyCharm project = the Django project folder with settings.py, or (b) root of the repo and PyCharm project is the parent directory of the Django project? I've never used Hg before, and this simple problem is hindering me.
Managing Django project with mercurial and PyCharm - choosing root folder
0.379949
0
0
207
16,412,772
2013-05-07T06:41:00.000
6
0
0
0
python,django,django-models,django-views
16,412,820
1
true
1
0
The general consensus is, thick models and helpers, thin views. In other words, your views should be as simple as possible; your models as rich as possible, and plenty of helper code for the outlying bits. Also keep in mind that if you override the model methods, you are offering a sort of "guarantee" that no matter how the ORM is accessed, your rules will be applied. If you do the logic only in the view, then anywhere else; for example using custom management commands or the django shell, a template tag, or even in another view, there is a chance that your rules will not be applied.
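A minimal sketch of the model-level version, using the field names from the question (the exact queryset calls are illustrative and assume a reasonably recent Django):

    from django.contrib.auth.models import User
    from django.db import models

    class Quote(models.Model):
        user = models.ForeignKey(User, on_delete=models.CASCADE)
        text = models.TextField()
        primary = models.BooleanField(default=False)

        def save(self, *args, **kwargs):
            # The user's first quote becomes the primary one.
            others = Quote.objects.filter(user=self.user).exclude(pk=self.pk)
            if not others.exists():
                self.primary = True
            super(Quote, self).save(*args, **kwargs)

        def delete(self, *args, **kwargs):
            was_primary = self.primary
            user = self.user
            super(Quote, self).delete(*args, **kwargs)
            if was_primary:
                # Promote another quote to primary, if any remain.
                for quote in Quote.objects.filter(user=user)[:1]:
                    quote.primary = True
                    quote.save()

This way the rule holds no matter which code path saves or deletes a Quote.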
1
1
0
I have a model Quote, which has a foreign key to the user model. A user can have between 0 and 10 quotes, and if s/he has one or more quotes, one of them should be the primary quote (primary is a field of the Quote model). When a quote is added by the user, it is checked whether the user has other quotes; if not, the new quote is set as the primary. And when the primary quote is deleted, another quote is set as the primary quote, if the user has any other quotes. Right now I do this in the respective views. I was wondering whether it would be better to override the save and delete functions of the model and do all of this there. So which is the right place to perform these tasks, the model or the view?
Django - overriding save/delete functions of the model vs doing it in the view
1.2
0
0
107
16,417,546
2013-05-07T11:11:00.000
0
1
0
0
python,performance,pytest
67,071,786
8
false
0
0
Pytest imports all modules in the testpaths directories to look for tests. The import itself can be slow. This is the same startup time you'd experience if you ran those tests directly, however, since it imports all of the files it will be a lot longer. It's kind of a worst-case scenario. This doesn't add time to the whole test run though, as it would need to import those files anyway to execute the tests. If you narrow down the search on the command line, to specific files or directories, it will only import those ones. This can be a significant speedup while running specific tests. Speeding up those imports involves modifying those modules. The size of the module, and the transitive imports, slow down the startup. Additionally look for any code that is executed -- code outside of a function. That also needs to be executed during the test collection phase.
4
56
0
Is there some way to speed up the repeated execution of pytest? It seems to spend a lot of time collecting tests, even if I specify which files to execute on the command line. I know it isn't a disk speed issue either, since running pyflakes across all the .py files is very fast. The various answers represent different ways pytest can be slow; they helped in some cases but not in others. I'm adding one more answer that explains a common speed problem, but it's not possible to select "the" answer here.
How to speed up pytest
0
0
0
20,629
16,417,546
2013-05-07T11:11:00.000
2
1
0
0
python,performance,pytest
45,336,546
8
false
0
0
If you have some antivirus software running, try turning it off. I had this exact same problem. Collecting tests ran incredibly slow. It turned out to be my antivirus software (Avast) that was causing the problem. When I disabled the antivirus software, test collection ran about five times faster. I tested it several times, turning the antivirus on and off, so I have no doubt that was the cause in my case. Edit: To be clear, I don't think antivirus should be turned off and left off. I just recommend turning it off temporarily to see if it is the source of the slow down. In my case, it was, so I looked for other antivirus solutions that didn't have the same issue.
4
56
0
Is there some way to speed up the repeated execution of pytest? It seems to spend a lot of time collecting tests, even if I specify which files to execute on the command line. I know it isn't a disk speed issue either, since running pyflakes across all the .py files is very fast. The various answers represent different ways pytest can be slow; they helped in some cases but not in others. I'm adding one more answer that explains a common speed problem, but it's not possible to select "the" answer here.
How to speed up pytest
0.049958
0
0
20,629
16,417,546
2013-05-07T11:11:00.000
3
1
0
0
python,performance,pytest
65,135,225
8
false
0
0
For me, adding PYTHONDONTWRITEBYTECODE=1 to my environment variables achieved a massive speedup! Note that I am using network drives which might be a factor. Windows Batch: set PYTHONDONTWRITEBYTECODE=1 Unix: export PYTHONDONTWRITEBYTECODE=1 subprocess.run: Add keyword env={'PYTHONDONTWRITEBYTECODE': '1'} PyCharm already set this variable automatically for me. Note that the first two options only remain active for your current terminal session.
4
56
0
Is there some way to speed up the repeated execution of pytest? It seems to spend a lot of time collecting tests, even if I specify which files to execute on the command line. I know it isn't a disk speed issue either, since running pyflakes across all the .py files is very fast. The various answers represent different ways pytest can be slow; they helped in some cases but not in others. I'm adding one more answer that explains a common speed problem, but it's not possible to select "the" answer here.
How to speed up pytest
0.07486
0
0
20,629
16,417,546
2013-05-07T11:11:00.000
3
1
0
0
python,performance,pytest
62,249,933
8
false
0
0
In bash, try { find -name '*_test.py'; find -name 'test_*.py'; } | xargs pytest. For me, this brings total test time down to a fraction of a second.
4
56
0
Is there some way to speed up the repeated execution of pytest? It seems to spend a lot of time collecting tests, even if I specify which files to execute on the command line. I know it isn't a disk speed issue either, since running pyflakes across all the .py files is very fast. The various answers represent different ways pytest can be slow; they helped in some cases but not in others. I'm adding one more answer that explains a common speed problem, but it's not possible to select "the" answer here.
How to speed up pytest
0.07486
0
0
20,629
16,419,681
2013-05-07T12:58:00.000
1
0
1
0
python,python-2.7,typechecking
16,419,841
4
false
0
0
Because doing so explicitly prevents duck-typing. Here's an example. The csv module allows me to write data to a file in CSV format. For that reason, the function accepts a file as a parameter. But what if I didn't want to write to an actual file, but to something like a StringIO object? That's a perfectly good use of it, since StringIO implements the necessary read and write methods. But if csv was explicitly checking for an actual object of type file, that would be forbidden. Generally, Python takes the view that we should allow things as much as possible - it's the same reasoning behind the lack of real private variables in classes.
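A quick illustration (Python 3 syntax): csv.writer only needs a write method, so a StringIO works just as well as a real file:

    import csv
    from io import StringIO

    buf = StringIO()               # not a file, but file-like
    writer = csv.writer(buf)
    writer.writerow(['a', 'b'])    # works because StringIO has .write()
    print(buf.getvalue())          # 'a,b' plus a line terminator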
2
0
0
I have a function which needs to behave differently depending on the type of the parameter taken in. My first impulse was to include some calls to isinstance, but I keep seeing answers on Stack Overflow saying that this is bad form and unpythonic, without much reason why. For the latter, I suppose it has something to do with duck typing, but what's the big deal about checking whether your arguments are of a specific type? Isn't it better to play it safe?
Not understanding the bad form of type checking - duck typing
0.049958
0
0
505
16,419,681
2013-05-07T12:58:00.000
1
0
1
0
python,python-2.7,typechecking
16,419,920
4
false
0
0
Sometimes usage of isinstance just reimplements polymorphic dispatch. Look at str(...): it calls object.__str__(..), which is implemented by each type individually. By implementing __str__ you can reuse code that depends on str by extending that one object, instead of having to manipulate the built-in function str(...). Basically this is the culminating point of OOP: you want polymorphic behaviour, you do not want to spell out types. There are valid reasons to use it though.
2
0
0
I have a function which needs to behave differently depending on the type of the parameter taken in. My first impulse was to include some calls to isinstance, but I keep seeing answers on Stack Overflow saying that this is bad form and unpythonic, without much reason why. For the latter, I suppose it has something to do with duck typing, but what's the big deal about checking whether your arguments are of a specific type? Isn't it better to play it safe?
Not understanding the bad form of type checking - duck typing
0.049958
0
0
505
16,420,078
2013-05-07T13:16:00.000
-2
0
1
0
python,syntax,command-prompt,code-documentation
65,306,443
5
false
0
0
I found it to be called the 'REPL'.
2
52
0
I haven't been able to figure out what the >>> does, even though I see it often in source code.
What do the three arrow (">>>") signs mean?
-0.07983
0
0
75,337
16,420,078
2013-05-07T13:16:00.000
3
0
1
0
python,syntax,command-prompt,code-documentation
53,103,502
5
false
0
0
The >>> prompt is the Python interpreter's way of asking you, "What do you want me to do next?", and it is called the "chevron" prompt.
2
52
0
I haven't been able to figure out what the >>> does, even though I see it often in source code.
What do the three arrow (">>>") signs mean?
0.119427
0
0
75,337
16,420,461
2013-05-07T13:35:00.000
0
0
0
0
python,mysql
16,943,780
1
false
0
0
There was a trigger firing that I did not know about. Thanks for the help.
1
0
0
I am executing an update in MySQLdb which changes the values of part of a key and a field. When I execute the query in Python it triggers something in the database that causes it to add extra rows. When I execute the exact same query from MySQL Workbench it performs the update correctly without adding extra rows. What is the difference between calling from the application and calling from Python? Thanks
MySQLdb for python behaves differently for queries than the mysql workbench browser
0
1
0
78
16,426,150
2013-05-07T18:34:00.000
1
0
1
0
python,encryption,python-3.x,cryptography
16,426,218
3
false
0
0
You cannot get the password the user used to log in to the computer. And, if you could, you would not want to store it. In fact, the OS doesn't even have the user's password; the OS has a hash of it, and when the user logs in, it hashes what the user types and checks that it matches. Also, if you ask the user to log in with their system password, any savvy user is going to immediately mistrust your app and refuse to use it. Make them create a password, and then log in with that, not their system password. And don't save the password, save a hash, just like the OS does. If you want to verify that they've been authenticated by the OS… well, you already know that, or they couldn't have logged in to run your app. (If you're building a network server that allows remote login based on local accounts, that's a different story, but it's not relevant to your use case, and complicated, so I won't get into it here.) If you want to allow someone to "stay logged in", you don't do that by saving their password. Instead, you create some kind of hard-to-fake "session key" when they log in, and store that somewhere. They don't have to log in again until you destroy the session key (which you do when they log out). The one exception to "never store passwords" is when you need to act as a "proxy" for the user to some other application that needs their password. A well-designed application will provide a way for you to proxy the login properly, but many applications are not well designed. Web browsers have to do this all the time, which is why most web browsers have a "remember my password at this site" checkbox. In this case, you do want to store passwords, ideally encrypted by the OS on your behalf (e.g., using OS X's Keychain APIs), or, if not, encrypted by your code using some key that's generated from the user's "master password" (which you don't store). Unfortunately, there is no real shortcut to learning how to design for security. Or, rather, there are all kinds of shortcuts, and taking any one of them means your entire system ends up insecure and all the work you put into trying to secure it ends up useless. The easy solution is to use complete off-the-shelf solutions. If you want to design things yourself, you need at least a basic grounding in all of the issues. Start with one of Bruce Schneier's "pop" books, Secrets and Lies or Beyond Fear. Then read his Practical Cryptography on designing cryptosystems, and Applied Cryptography on evaluating crypto algorithms. Then, once you realize how much you don't know and how important it is, learn everything you need for your problem, and then you can think about solving it.
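As a minimal sketch of the "store a hash, not the password" part, using only the standard library (the iteration count is illustrative; hashlib.pbkdf2_hmac needs Python 2.7.8+ or 3.4+):

    import hashlib
    import os

    def hash_password(password, salt=None):
        # Store the salt next to the digest; never store the password.
        if salt is None:
            salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac('sha256', password.encode('utf-8'),
                                     salt, 100000)
        return salt, digest

    def verify_password(password, salt, stored_digest):
        return hash_password(password, salt)[1] == stored_digest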
2
1
0
Basically, I want to have a login box, and the option to remember the password next time you log in. I know there are encryption modules out there, but they require a password to work in the first place. Is there a way to get the password the user used to log into the computer, and use that to encrypt the password for my application? So, in a nutshell: how do I store a password securely for later use? I'm using Python 3, and my program needs to be cross-platform.
remember password functionality in python
0.066568
0
0
1,629
16,426,150
2013-05-07T18:34:00.000
1
0
1
0
python,encryption,python-3.x,cryptography
16,426,284
3
false
0
0
There is no way out. If the application does not ask the user for a password, then it is not securely storing passwords, it's only doing... "things". In that case, don't give the user a false sense of security; use cleartext. A notable exception is the GNOME login keyring (and equivalents on other platforms) not asking for a password, but it uses a trick: it encrypts data with your login password and decrypts them with that same password when you enter it at startup. If you are developing a web application with a local client, consider using OAuth instead of passwords.
2
1
0
Basically, I want to have a login box, and the option to remember the password next time you log in. I know there are encryption modules out there, but they require a password to work in the first place. Is there a way to get the password the user used to log into the computer, and use that to encrypt the password for my application? So, in a nutshell: how do I store a password securely for later use? I'm using Python 3, and my program needs to be cross-platform.
remember password functionality in python
0.066568
0
0
1,629
16,426,469
2013-05-07T18:54:00.000
1
0
0
0
python,c,data-structures,integer
16,444,230
3
false
0
0
How about using one hash table or B-tree per bucket? On-disk hashtables are standard. Maybe the BerkeleyDB libraries (available in stock Python) will work for you; but be advised that since they come with transactions they can be slow, and may require some tuning. There are a number of choices: gdbm, tdb, which you should all give a try. Just make sure you check out the API and initialize them with an appropriate size. Some will not resize automatically, and if you feed them too much data their performance just drops a lot. Anyway, you may want to use something even more low-level, without transactions, if you have a lot of changes. A pair of ints is a long - and most databases should accept a long as a key; in fact many will accept arbitrary byte sequences as keys.
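For instance, packing a pair of unsigned 32-bit IDs into a single 8-byte key with the standard library:

    import struct

    def pair_to_key(a, b):
        # Two unsigned 32-bit ints -> one 8-byte (64-bit) key.
        return struct.pack('>II', a, b)

    def key_to_pair(key):
        return struct.unpack('>II', key)

    key = pair_to_key(3278, 15378)
    print(key_to_pair(key))  # (3278, 15378)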
2
6
1
I have a bunch of code that deals with document clustering. One step involves calculating the similarity (for some unimportant definition of "similar") of every document to every other document in a given corpus, and storing the similarities for later use. The similarities are bucketed, and I don't care what the specific similarity is for purposes of my analysis, just what bucket it's in. For example, if documents 15378 and 3278 are 52% similar, the ordered pair (3278, 15378) gets stored in the [0.5,0.6) bucket. Documents sometimes get either added or removed from the corpus after initial analysis, so corresponding pairs get added to or removed from the buckets as needed. I'm looking at strategies for storing these lists of ID pairs. We found a SQL database (where most of our other data for this project lives) to be too slow and too large disk-space-wise for our purposes, so at the moment we store each bucket as a compressed list of integers on disk (originally zlib-compressed, but now using lz4 instead for speed). Things I like about this: Reading and writing are both quite fast After-the-fact additions to the corpus are fairly straightforward to add (a bit less so for lz4 than for zlib because lz4 doesn't have a framing mechanism built in, but doable) At both write and read time, data can be streamed so it doesn't need to be held in memory all at once, which would be prohibitive given the size of our corpora Things that kind of suck: Deletes are a huge pain, and basically involve streaming through all the buckets and writing out new ones that omit any pairs that contain the ID of a document that's been deleted I suspect I could still do better both in terms of speed and compactness with a more special-purpose data structure and/or compression strategy So: what kinds of data structures should I be looking at? I suspect that the right answer is some kind of exotic succinct data structure, but this isn't a space I know very well. Also, if it matters: all of the document IDs are unsigned 32-bit ints, and the current code that handles this data is written in C, as Python extensions, so that's probably the general technology family we'll stick with if possible.
Data structure options for efficiently storing sets of integer pairs on disk?
0.066568
0
0
704
16,426,469
2013-05-07T18:54:00.000
1
0
0
0
python,c,data-structures,integer
16,444,440
3
false
0
0
Why not just store a table containing stuff that was deleted since the last re-write? This table could be the same structure as your main bucket, maybe with a Bloom filter for quick membership checks. You can re-write the main bucket data without the deleted items either when you were going to re-write it anyway for some other modification, or when the ratio of deleted items:bucket size exceeds some threshold. This scheme could work either by storing each deleted pair alongside each bucket, or by storing a single table for all deleted documents: I'm not sure which is a better fit for your requirements. Keeping a single table, it's hard to know when you can remove an item unless you know how many buckets it affects, without just re-writing all buckets whenever the deletion table gets too large. This could work, but it's a bit stop-the-world. You also have to do two checks for each pair you stream in (ie, for (3278, 15378), you'd check whether either 3278 or 15378 has been deleted, instead of just checking whether pair (3278, 15378) has been deleted. Conversely, the per-bucket table of each deleted pair would take longer to build, but be slightly faster to check, and easier to collapse when re-writing the bucket.
2
6
1
I have a bunch of code that deals with document clustering. One step involves calculating the similarity (for some unimportant definition of "similar") of every document to every other document in a given corpus, and storing the similarities for later use. The similarities are bucketed, and I don't care what the specific similarity is for purposes of my analysis, just what bucket it's in. For example, if documents 15378 and 3278 are 52% similar, the ordered pair (3278, 15378) gets stored in the [0.5,0.6) bucket. Documents sometimes get either added or removed from the corpus after initial analysis, so corresponding pairs get added to or removed from the buckets as needed. I'm looking at strategies for storing these lists of ID pairs. We found a SQL database (where most of our other data for this project lives) to be too slow and too large disk-space-wise for our purposes, so at the moment we store each bucket as a compressed list of integers on disk (originally zlib-compressed, but now using lz4 instead for speed). Things I like about this: Reading and writing are both quite fast After-the-fact additions to the corpus are fairly straightforward to add (a bit less so for lz4 than for zlib because lz4 doesn't have a framing mechanism built in, but doable) At both write and read time, data can be streamed so it doesn't need to be held in memory all at once, which would be prohibitive given the size of our corpora Things that kind of suck: Deletes are a huge pain, and basically involve streaming through all the buckets and writing out new ones that omit any pairs that contain the ID of a document that's been deleted I suspect I could still do better both in terms of speed and compactness with a more special-purpose data structure and/or compression strategy So: what kinds of data structures should I be looking at? I suspect that the right answer is some kind of exotic succinct data structure, but this isn't a space I know very well. Also, if it matters: all of the document IDs are unsigned 32-bit ints, and the current code that handles this data is written in C, as Python extensions, so that's probably the general technology family we'll stick with if possible.
Data structure options for efficiently storing sets of integer pairs on disk?
0.066568
0
0
704
16,428,064
2013-05-07T20:33:00.000
3
0
1
0
python,matlab,user-interface,transitions
16,455,768
2
false
0
1
I would definitely vouch for wxpython. It's very good for advanced GUI work.
1
4
0
I work mostly with Matlab but I am a little bit familiar with Python as well. I have never created a GUI before; however, it has been suggested that I develop a GUI for an important project in Python rather than Matlab. Considering this situation, are there any suggestions you could give that would help me create a GUI in Python within a month? Also, I would like to know whether I can use Python's Spyder IDE to create the GUI.
Creating Graphical User Interface (GUI) in Python
0.291313
0
0
25,474
16,431,207
2013-05-08T01:27:00.000
1
1
0
0
javascript,python,web2py,cherrypy
16,431,335
2
false
1
0
web2py uses the MVC model and each of the scripts is nicely separated. It can be deployed at pythonanywhere.com. Not too sure about CherryPy.
1
2
0
I am using a Raspberry Pi to function as a mini web server. At first, I came across web2py and started to learn it. It was tough for a beginner like me. Later, a friend in a forum introduced CherryPy to me and I started to work on the web application skeleton that he gave me. Soon, I abandoned web2py and proceeded with CherryPy as it is fairly straightforward. Somehow, I think web2py could be a good choice too. Web applications for both are written in Python, with HTML, CSS and JavaScript. So whatever I've done using CherryPy may be possible to transfer over to web2py (is that true?). I would like to find out the main differences between those two and their respective pros and cons. I hope to hear about fellow users' experiences with web2py and CherryPy; that way, future visitors can make a comparison before choosing which one to use. Thank you!
Which one is better to create a web application? web2py or cherrypy
0.099668
0
0
1,858
16,431,949
2013-05-08T02:57:00.000
1
0
0
0
connection,wxpython,draw,shapes
16,444,284
1
false
0
1
Did you look at the wxPython demo? It shows exactly how to do this. It draws several shapes and then connects them with arrows and lines. You can get the wxPython demo from the wxPython website, which I assume you already know since you keep re-posting your questions to the wxPython mailing list.
1
0
0
How do I design a function such that the user can use the mouse to draw a line connecting two shapes? I'm using this module in wxPython: wx.lib.ogl. The docs for these methods are brief and there is no good demo that I can learn from. Can anyone give me some ideas? Thanks
draw lines to connect two shapes
0.197375
0
0
150
16,432,969
2013-05-08T04:58:00.000
3
0
0
0
python,lucene,pylucene
16,469,866
1
false
0
0
Got it working eventually by creating a new tokenizer which considered every char other than an underscore as part of the token generated (basically the underscore becomes the separator):

    class UnderscoreSeparatorTokenizer(PythonCharTokenizer):
        def __init__(self, input):
            PythonCharTokenizer.__init__(self, input)

        def isTokenChar(self, c):
            return c != "_"

    class UnderscoreSeparatorAnalyzer(PythonAnalyzer):
        def __init__(self, version):
            PythonAnalyzer.__init__(self, version)

        def tokenStream(self, fieldName, reader):
            tokenizer = UnderscoreSeparatorTokenizer(reader)
            tokenStream = LowerCaseFilter(tokenizer)
            return tokenStream
1
1
0
I am new to PyLucene and I am trying to build a custom analyzer which tokenizes text on the basis of underscores only, i.e. it should retain the whitespace. Example: "Hi_this is_awesome" should be tokenized into the tokens ["hi", "this is", "awesome"]. From various code examples I understood that I need to override the incrementToken method for a custom tokenizer and write a custom analyzer whose TokenStream uses the custom tokenizer followed by a LowerCaseFilter to achieve the same. I am facing problems implementing the incrementToken method and connecting the dots (how the tokenizer may be used, as usually the analyzers depend on TokenFilter, which depends on TokenStreams), as there is very little documentation available on PyLucene.
Custom Tokenizer for pylucene which tokenizes text based only on underscores (retains spaces)
0.53705
0
0
682
16,433,047
2013-05-08T05:05:00.000
1
0
0
1
java,python,database,distributed-computing,in-memory-database
16,444,115
3
false
1
0
Given the relatively low volume of data you need, I would say the easiest way would be to use a TCP socket to communicate between the two processes. The data speed on the loopback interface is more than enough for your needs.
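A bare-bones sketch of the loopback exchange (the port number is arbitrary); the Java side would simply open a plain TCP socket to the same port:

    import socket

    # Listening side (e.g. the Python daemon).
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(('127.0.0.1', 9000))
    server.listen(1)

    # Sending side; in practice this lives in the other process.
    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect(('127.0.0.1', 9000))
    client.sendall(b'payload well under 10MB, every 5 minutes')

    conn, addr = server.accept()
    data = conn.recv(4096)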
1
1
0
I have two background processes running on a Linux machine. One is Java and the second one is in Python. What would be the most efficient way to exchange data between these two apps? I am talking about text/image data below approximately 10 MB, roughly every 5 minutes (not streamed). Due to the high cost of refactoring, we cannot migrate fully to Python (or Java). The natural choice is the filesystem or local networking, but what about an in-memory database (sqlite/redis/...)? Filesystem handling or network handling is sometimes painful, I guess. Do you think an in-memory DB would be a good option for such a task? Jython is not an option, as not all Python libraries are compatible... Environment: Ubuntu Server 12.04 64-bit, Python 2.7, Java 7
data bridge between Java and Python daemons
0.066568
0
0
368
16,434,632
2013-05-08T07:05:00.000
4
0
0
1
python,resources,py2app
17,084,259
1
false
0
1
By default the 'Resources' folder is the current working directory for applications started by py2app. Furthermore, the environment variable "RESOURCEPATH" is set to the absolute path of the resource folder.
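So a lookup helper can branch on that variable; a minimal sketch (the development fallback path is made up, following the questioner's Resources-test idea):

    import os

    def resource_path(filename):
        # Inside a py2app bundle, RESOURCEPATH points at Contents/Resources;
        # during development fall back to a local test directory.
        base = os.environ.get('RESOURCEPATH', './Resources-test')
        return os.path.join(base, filename)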
1
5
0
This is mostly for Py2app, but I plan to also port to Windows so Py2exe is also applicable. For Mac: How can I access the Resources folder of my app bundle from Python code? The ideal way for me would be to get the path to this folder into a variable that my classes prepend to any file they need to access. Given the portable nature of OSX app bundles this Resources folder can move, so it's obviously not acceptable to assume it'll always be at /Applications/MyApp.app/Contents/Resources. For development I can preset this variable to something like "./Resources-test" but for the final distribution I would need to be able to locate the Resources folder to access files therein as file objects. For Windows: If I use py2exe, what's the correct way to get the path to where the application is running from? (Think portable app - the app might be running from Program files, or a directory on someone's flash drive, or in a temp directory!) On Windows it'd be suitable to simply know where the .exe file is and just have a Resources folder there. (I plan to make cross-platform apps using wxwidgets.) Thanks
How to directly access a resource in a Py2app (or Py2exe) program?
0.664037
0
0
1,524
16,436,260
2013-05-08T08:45:00.000
0
0
0
1
python,macos,opencv
16,495,361
1
false
0
0
Try using MacPorts; it builds OpenCV, including the Python bindings, without any issue. I have used this for OS X 10.8.
1
0
1
I am trying to install OpenCV on my MacBook Pro, OS X 10.6.8 (Snow Leopard); my Xcode version is 3.2.6 and the result of "which python" is Hong-Jun-Choiui-MacBook-Pro:~ teemo$ which python /Library/Frameworks/Python.framework/Versions/2.7/bin/python and I am suffering from the error below: Linking CXX shared library ../../lib/libopencv_contrib.dylib [ 57%] Built target opencv_contrib make: * [all] Error 2 The full log from "brew install -v opencv" is at 54.248.246.33:7700/log.txt. Any advice for me? I just need the OpenCV lib for Python.
openCV install Error using brew
0
0
0
447
16,436,834
2013-05-08T09:15:00.000
0
0
1
0
python,mathematical-optimization,linear-programming,gurobi
61,176,208
3
false
0
0
Use the X attribute to get the value of a specific indexed variable, e.g. print(z[a].x). In my example, z is the variable and a is the index, while .x is the attribute.
1
8
0
How do I get the value of a variable that I have defined previously (using addVar) in Gurobi Python? I need to compare the value of the Gurobi variable and then perform calculations to reach my objective variable. The same has to be done before optimization.
Gurobi python get value of the defined variable
0
0
0
11,047
16,439,842
2013-05-08T11:46:00.000
1
0
1
0
python,oop
16,439,971
2
false
0
0
If the method doesn't access any of an object's state, but is specific to that object's class, then it's a good candidate for being a classmethod. Otherwise if it's more general, then just use a function defined at module level, no need to make it belong to a specific class. I've found that classmethods are actually pretty rare in practice, and certainly not the default. There should be plenty of good code out there (on e.g. github) to get examples from.
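A small sketch of the distinction (the names are made up): area() needs instance state, while unit() is a typical alternate-constructor classmethod:

    class Circle(object):
        def __init__(self, radius):
            self.radius = radius            # per-instance state

        def area(self):
            # Reads self.radius, so it must stay an instance method.
            return 3.14159 * self.radius ** 2

        @classmethod
        def unit(cls):
            # Touches no instance state; alternate constructors are one
            # of the few common legitimate uses of classmethods.
            return cls(1.0)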
1
1
0
I was just working on a large class hierarchy and thought that probably all methods in a class should be classmethods by default. I mean that it is very rare that one needs to change the actual method for an object, and whatever variables one needs can be passed in explicitly. Also, this way there would be a smaller number of methods where people could change the object itself (more typing to do it the other way), and people would be more inclined to be "functional" by default. But I am a newb and would like to find out the flaws in my idea (if there are any :).
Should methods in a class be classmethod by default?
0.099668
0
0
279
16,442,055
2013-05-08T13:30:00.000
0
0
0
0
python,machine-learning,nltk
16,446,014
3
false
0
0
You could compute the representativeness of each feature to separate the classes via feature weighting. The most common method for feature selection (and therefore feature weighting) in Text Classification is chi^2. This measure will tell you which features are better. Based on this information you can analyse the specific values that are best for every case. I hope this helps. Regards,
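If you are using scikit-learn, a minimal sketch of chi^2 feature weighting looks like this (the toy data is made up; chi2 needs non-negative feature values):

    import numpy as np
    from sklearn.feature_selection import SelectKBest, chi2

    # Rows are games, columns are features such as wind speed or fouls.
    X = np.array([[0, 1, 3], [2, 0, 1], [0, 2, 4], [3, 1, 0]])
    y = np.array(['Win', 'Loss', 'Win', 'Loss'])

    selector = SelectKBest(chi2, k=2).fit(X, y)
    print(selector.scores_)        # higher score = more discriminative
    print(selector.get_support())  # mask of the selected features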
1
3
1
My Question is as follows: I know a little bit about ML in Python (using NLTK), and it works ok so far. I can get predictions given certain features. But I want to know, is there a way, to display the best features to achieve a label? I mean the direct opposite of what I've been doing so far (put in all circumstances, and get a label for that) I try to make my question clear via an example: Let's say I have a database with Soccer games. The Labels are e.g. 'Win', 'Loss', 'Draw'. The Features are e.g. 'Windspeed', 'Rain or not', 'Daytime', 'Fouls committed' etc. Now I want to know: Under which circumstances will a Team achieve a Win, Loss or Draw? Basically I want to get back something like this: Best conditions for Win: Windspeed=0, No Rain, Afternoon, Fouls=0 etc Best conditions for Loss: ... Is there a way to achieve this?
Machine Learning in Python - Get the best possible feature-combination for a label
0
0
0
879
16,442,675
2013-05-08T13:59:00.000
3
0
0
0
python,sublimetext2
16,466,917
3
true
0
0
You can use view.set_status(key, value) to place a persisting message in the status bar. However, this is bound to a view, not the application. If you need the message independent of the view, you will have to do some work using activated and deactivated listeners. Alternatively, you can set the status on all views in the window by using window.views() to get an array of all the views in the window, and place the status message on all of the views. Then, when you are done, remove all of the status messages.
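A minimal sketch of the view-bound variant inside a plugin (the command and key names are made up):

    import sublime_plugin

    class TimerStatusCommand(sublime_plugin.TextCommand):
        STATUS_KEY = 'timer'  # arbitrary key for this status entry

        def run(self, edit, on=True):
            if on:
                # Persists until erased, unlike sublime.status_message().
                self.view.set_status(self.STATUS_KEY, 'Timer: on')
            else:
                self.view.erase_status(self.STATUS_KEY)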
1
2
0
In SublimeText 2, which uses Python plugins. I am trying to enhance an existing plugin that I found. Basically it is a timer plugin that has start, stop, pause functionality and will print the Times to the Status bar using this Sublimetext API call... sublime.status_message(TIMER) What I would like to do is show something in the Status Bar to show that a Timer is in fact started and running. Something like this... sublime.status_message('Timer: on') The problem is this just briefly shows my Status bar message for a few seconds before being dismissed. So I am looking for information on how to print to the status bar and keep it there long term?
Print to Sublimetext Status bar in Python?
1.2
0
0
1,096
16,444,190
2013-05-08T15:09:00.000
0
0
0
0
python,flask,gunicorn,werkzeug
17,628,337
4
false
1
0
The easiest way to force all connections to close is to make sure you are using HTTP/1.0 and not adding the header Connection: Keep-Alive to the response. Please check out werkzeug.http.remove_hop_by_hop_headers().
4
7
0
I'm trying to use a Flask application behind an Amazon Load Balancer and the Flask threads keep timing out. It appears that the load balancer is sending a Connection: keep-alive header and this is causing the Flask process to never return (or take a long time). With gunicorn in front, the processes are killed and new ones started. We also tried using uWSGI and simply exposing the Flask app directly (no wrapper). All result in the Flask process just not responding. I see nothing in the Flask docs which would make it ignore this header. I'm at a loss as to what else I can do with Flask to fix the problem. Curl and direct connections to the machine work fine; only those via the load balancer are causing the problem. The load balancer itself doesn't appear to be doing anything wrong, and we use it successfully with several other stacks.
flask application timeout with amazon load balancer
0
0
0
1,897
16,444,190
2013-05-08T15:09:00.000
1
0
0
0
python,flask,gunicorn,werkzeug
17,667,952
4
false
1
0
Did you remember to set session.permanent = True and app.permanent_session_lifetime?
4
7
0
I'm trying to use a Flask application behind an Amazon Load Balancer and the Flask threads keep timing out. It appears that the load balancer is sending a Connection: keep-alive header and this is causing the Flask process to never return (or take a long time). With gunicorn in front, the processes are killed and new ones started. We also tried using uWSGI and simply exposing the Flask app directly (no wrapper). All result in the Flask process just not responding. I see nothing in the Flask docs which would make it ignore this header. I'm at a loss as to what else I can do with Flask to fix the problem. Curl and direct connections to the machine work fine; only those via the load balancer are causing the problem. The load balancer itself doesn't appear to be doing anything wrong, and we use it successfully with several other stacks.
flask application timeout with amazon load balancer
0.049958
0
0
1,897
16,444,190
2013-05-08T15:09:00.000
8
0
0
0
python,flask,gunicorn,werkzeug
17,648,222
4
true
1
0
The solution I have now is using gunicorn as a wrapper around the flask application. For the worker_class I am using eventlet with several workers. This combination seems stable and responsive. Gunicorn is also configured for HTTPS. I assume it is a defect in Flask that causes the problem and this is an effective workaround.
4
7
0
I'm trying to use a Flask application behind an Amazon Load Balancer and the Flask threads keep timing out. It appears that the load balancer is sending a Connection: keep-alive header and this is causing the Flask process to never return (or take a long time). With gunicorn in front, the processes are killed and new ones started. We also tried using uWSGI and simply exposing the Flask app directly (no wrapper). All result in the Flask process just not responding. I see nothing in the Flask docs which would make it ignore this header. I'm at a loss as to what else I can do with Flask to fix the problem. Curl and direct connections to the machine work fine; only those via the load balancer are causing the problem. The load balancer itself doesn't appear to be doing anything wrong, and we use it successfully with several other stacks.
flask application timeout with amazon load balancer
1.2
0
0
1,897
16,444,190
2013-05-08T15:09:00.000
0
0
0
0
python,flask,gunicorn,werkzeug
17,656,907
4
false
1
0
Do you need an HTTP load balancer? Using a layer 4 balancer might just as well solve your problem, because it does not interfere with higher protocol levels.
4
7
0
I'm trying to use a Flask application behind an Amazon Load Balancer and the Flask threads keep timing out. It appears that the load balancer is sending a Connection: keep-alive header and this is causing the Flask process to never return (or take a long time). With gunicorn in front, the processes are killed and new ones started. We also tried using uWSGI and simply exposing the Flask app directly (no wrapper). All result in the Flask process just not responding. I see nothing in the Flask docs which would make it ignore this header. I'm at a loss as to what else I can do with Flask to fix the problem. Curl and direct connections to the machine work fine; only those via the load balancer are causing the problem. The load balancer itself doesn't appear to be doing anything wrong, and we use it successfully with several other stacks.
flask application timeout with amazon load balancer
0
0
0
1,897
16,445,795
2013-05-08T16:36:00.000
1
0
0
0
python,django
16,456,275
1
true
1
0
I have tried your scenario. I included a flag variable in settings.py; you can set the flag to 1, 2 or 3, and based on the flag value you can then load the templates and static DIR. You can then use the same views.py, urls.py and models.py, but make sure you use the app's DB file for all three websites (the name can be changed), as it is app-specific; if you use another app's DB file you will get an error. Hope it helps.
1
1
0
I want to run three websites in a single Django project. Site_1, Site_2 and Site_3 have the same models.py, views.py and urls.py files, but different sqlite.db files and different template dirs. Will it be possible to run the sites under this scenario?
How to run Three different websites in single Django project
1.2
0
0
84
16,446,374
2013-05-08T17:06:00.000
1
0
1
0
c++,python,algorithm,permutation
16,446,441
3
false
0
0
Treat your permutation as an r-digit number in an n-based numeral system. Start with 000...0 and increase the 'number' by one: 0000, 0001, 0002, 000(n-1), 0010, 0011, ... The code is quite simple.
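In Python this counting scheme is exactly what itertools.product implements, so you rarely need to write it by hand:

    from itertools import product

    items = ['a', 'b', 'c']            # the n distinct items
    r = 2
    for perm in product(items, repeat=r):
        print(perm)                    # n**r = 9 tuples in total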
1
0
0
How could we generate all possible permutations of n (given) distinct items taken r at a time, where any item can be repeated any number of times? Combinatorics tells me that there will be n^r of them; I am just wondering how to generate them with C++/Python.
Generating all permutations with repetition
0.066568
0
0
5,032
16,448,772
2013-05-08T19:32:00.000
0
0
0
0
python,windows,firefox,selenium,cygwin
42,851,688
2
false
0
0
MAC OSX SOLUTION. I'm using Python 2.7, Firefox 48.0.2 and Chrome Version 57.0.2987.98 (64-bit). The error *self.session_id = response['sessionId']* was solved for me by going to System Preferences -> Network -> Advanced in the Wi-Fi tab -> Proxies -> turning "Automatic Proxy Detection" on. After changing this, the error no longer occurred.
1
4
0
I'm trying to run Selenium's Firefox webdriver and am getting the error below. I can see that the response does not have a sessionId - the offending line is self.session_id = response['sessionId'] - but I don't know why. I've run this in the following ways and get the same error: Cygwin, running nosetests Cygwin directly Windows, running nosetests Windows directly ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\dev\tools\cygwin\home\207013288\dev\projects\scorpion\test\unit\test_ approve_workflows.py", line 27, in test_login 'password', userid='207013288', test=True) File "C:\dev\tools\cygwin\home\207013288\dev\projects\scorpion\src\workflows.p y", line 20, in login browser = webdriver.Firefox() File "C:\dev\sdks\Python33\lib\site-packages\selenium-2.32.0-py3.3.egg\seleniu m\webdriver\firefox\webdriver.py", line 62, in __init__ desired_capabilities=capabilities) File "C:\dev\sdks\Python33\lib\site-packages\selenium-2.32.0-py3.3.egg\seleniu m\webdriver\remote\webdriver.py", line 72, in __init__ self.start_session(desired_capabilities, browser_profile) File "C:\dev\sdks\Python33\lib\site-packages\selenium-2.32.0-py3.3.egg\seleniu m\webdriver\remote\webdriver.py", line 116, in start_session self.session_id = response['sessionId'] nose.proxy.KeyError: 'sessionId' -------------------- >> begin captured logging << -------------------- selenium.webdriver.remote.remote_connection: DEBUG: POST http://127.0.0.1:63801/ hub/session {"sessionId": null, "desiredCapabilities": {"version": "", "browserN ame": "firefox", "platform": "ANY", "javascriptEnabled": true}} --------------------- >> end captured logging << --------------------- I haven't used Selenium before and I'm not sure where to go from here.
Why doesn't Selenium's response have a sessionId?
0
0
1
6,072
16,448,988
2013-05-08T19:46:00.000
-3
0
0
0
python,machine-learning,scikit-learn,cross-validation
16,950,633
4
false
0
0
As far as I know, this is actually implemented in scikit-learn:

    """
    Stratified ShuffleSplit cross validation iterator

    Provides train/test indices to split data in train test sets.

    This cross-validation object is a merge of StratifiedKFold and
    ShuffleSplit, which returns stratified randomized folds. The folds
    are made by preserving the percentage of samples for each class.

    Note: like the ShuffleSplit strategy, stratified random splits do
    not guarantee that all folds will be different, although this is
    still very likely for sizeable datasets.
    """
1
6
1
Is there any built-in way to get scikit-learn to perform shuffled stratified k-fold cross-validation? This is one of the most common CV methods, and I am surprised I couldn't find a built-in method to do this. I saw that cross_validation.KFold() has a shuffling flag, but it is not stratified. Unfortunately cross_validation.StratifiedKFold() does not have such an option, and cross_validation.StratifiedShuffleSplit() does not produce disjoint folds. Am I missing something? Is this planned? (obviously I can implement this by myself)
Randomized stratified k-fold cross-validation in scikit-learn?
-0.148885
0
0
9,847
16,452,913
2013-05-09T01:29:00.000
1
1
0
0
python,networking
16,453,432
2
true
0
0
There are messaging systems that don't require a central message broker. You might start by looking at ZeroMQ.
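As a rough sketch of how two peers might talk without a broker using pyzmq (the addresses and topic string are made up): each node binds a PUB socket for its own state and subscribes to the nodes it cares about:

    import zmq

    context = zmq.Context()

    # Publish this node's state changes.
    pub = context.socket(zmq.PUB)
    pub.bind('tcp://*:5556')

    # Subscribe to another node's state (address illustrative).
    sub = context.socket(zmq.SUB)
    sub.connect('tcp://192.168.1.12:5556')
    sub.setsockopt_string(zmq.SUBSCRIBE, u'door')

    pub.send_string(u'door front open')
    # message = sub.recv_string()  # blocks until a matching message arrives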
1
3
0
I am not sure if this question belongs here as it may be a little too broad. If so, I apologize. Anyway, I am planning to start a project in Python and I am trying to figure out how best to implement it, or if it is even possible in any practical way. The system will consist of several "nodes" that are essentially Python scripts that translate other protocols, for talking to different kinds of hardware related to I/O (relays to control stuff, inputs to measure things, RFID readers etc.), to a common protocol for my system. I am no programming or network expert, but this part I can handle; I have a module from an old alarm system that uses RS-485 that I can successfully control and read. I want to get the nodes talking to each other over the network so I can distribute them to different locations (on the same subnet for now). The obvious way would be to use a server that they all connect to so they can be polled and get orders to flip outputs or do something else. This should not be too hard using Twisted or something like it. The problem with this is that if this server for some reason stops working, everything else does too. I guess what I would like is some kind of serverless communication that has no single point of failure besides the network itself. Message brokers all seem to require some kind of server, and I cannot really find anything else that seems suitable for this. All nodes must know the status of all other nodes, as I will need to be able to make functions based on the status of things connected to other nodes, such as: do not open this door if that door is already open. Maybe this could be done by multicast or broadcast, but that seems a bit insecure and just not right. One way I thought of could be to somehow appoint one of the nodes to accept connections from the other nodes and act as a message router, and arrange for some kind of backup so that if this node crashes or goes away, another predetermined node takes over and the other nodes connect to it instead. This seems complicated and I am not sure it is any better than just using a message broker. As I said, I am not sure this is an appropriate question here, but I would appreciate it if anyone could give me a hint to how this could be done, or point me at something that does something similar that I can study. If I am being stupid, please let me know that too :)
Serverless communication between network nodes in python
1.2
0
1
438
16,456,206
2013-05-09T06:59:00.000
4
0
1
0
python,boolean
16,456,321
4
false
0
0
From the Python documentation (section 5.1): Any object can be tested for truth value, for use in an if or while condition or as operand of the Boolean operations below. The following values are considered false:
- None
- False
- zero of any numeric type, for example, 0, 0L, 0.0, 0j
- any empty sequence, for example, '', (), []
- any empty mapping, for example, {}
- instances of user-defined classes, if the class defines a __nonzero__() or __len__() method, when that method returns the integer zero or bool value False
Why? Because it's handy when iterating through objects, cycling through loops, checking if a value is empty, etc. Overall, it adds some options to how you write code.
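A quick demonstration in the interpreter:

    print(bool(0), bool(0.0))    # False False
    print(bool(42), bool(-1.5))  # True True
    print(bool(''), bool('hi'))  # False True
    print(bool([]), bool([0]))   # False True
    print(bool(None))            # False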
1
4
0
Why is it that in Python integers and floats are, without being evaluated in a boolean context, equivalent to True? Other data types have to be evaluated via an operator or bool().
Python Boolean Values
0.197375
0
0
6,800
16,456,682
2013-05-09T07:31:00.000
0
1
0
1
php,python,ruby,node.js,redis
16,471,242
2
false
1
0
I have used Node.js as a task worker for jobs that call runnable webpages written in PHP or run commands on certain hosts. In both these instances Node is just initializing (triggering) the job, waiting for and then evaluating the result. The heavy lifting / CPU-intensive work is done by another system / program. Hope this helps!
1
2
0
Will I have any advantage in using Node.js for a task queue worker instead of another language, like PHP/Python/Ruby? I want to learn Redis for simple task queue jobs like sending big amounts of email, and I do not want to keep users waiting for establishing connections etc. So the question is: does the async nature of Node.js help in this scenario, or is it useless? P.S. I know that Node is faster than any of these languages in memory consumption and computation because of the efficient V8 engine; maybe it's possible to win on this front?
Any advantage of using node.js for task queue worker instead of other languages?
0
0
0
1,154
16,463,331
2013-05-09T13:47:00.000
0
0
1
0
python,progress,data-generation
16,463,488
1
true
0
0
If you use yield to write a generator function instead of just using a generator expression, you could put logging or some other sort of feedback before every yield statement.
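A minimal sketch of that idea (the compute function stands in for your expensive per-element work):

    def compute(key):
        return key * key  # stand-in for the hard computation

    def build_items(keys):
        total = len(keys)
        for i, key in enumerate(keys, 1):
            if i % 100000 == 0:
                print('%d / %d done' % (i, total))
            yield key, compute(key)

    big_dict = dict(build_items(range(2 ** 20)))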
1
1
0
I am building a very big dictionary (2^20 elements) in Python using a dictionary generator expression (I'm using IDLE for this). The process is very long because every element needs heavy computation. Is it possible to track the progress of this operation? I understand that it is easy to do if I don't use a generator expression, but the question is interesting nevertheless, I think.
It is possible to show list/dictionary generator progress in python?
1.2
0
0
219
16,467,684
2013-05-09T17:33:00.000
0
1
1
0
python,python-3.x,cgi
16,467,920
2
false
0
0
You should choose WSGI instead of CGI. CGI applications are generally slow, as they need re-invocation of the interpreter for every request; WSGI, on the other hand, pools them and is much more efficient. WSGI is also more mainstream. Do a little research on the web and you will get better and more detailed answers. In the past I have used CGI with Python, but generally the usage has been for image/chart generation, where the core lib was implemented in C/C++.
2
0
0
Any reason to try and replace it with something else? I'm a beginner to Python, but have encountered problems with C and importing CGI. A list of instances where import cgi would not be the best option would be great for further understanding of language use.
Any reason not to just import CGI in Python?
0
0
0
91
16,467,684
2013-05-09T17:33:00.000
0
1
1
0
python,python-3.x,cgi
16,468,659
2
false
0
0
List of instance where import cgi would not be the best option If you're writing a script which will be installed and run as a CGI script on a web server, and you're not using some other framework that replaces it, import cgi is always the best option. So, the cases where it's not the best option: You're not writing a CGI script. You don't need access to FieldStorage or anything else from the gateway. You're using a framework with its own replacement for cgi. That's about it. If you're not sure whether you want to use CGI or not in the first place… you probably don't. If you want the same general style of coding as CGI, WSGI is just as simple, more flexible, and usually faster. But if you're just starting at this stuff, that may not even be the style you want. Start at the high level. Do you want to template web pages, or serve JSON to JavaScript code that does the dynamic stuff in the browser? What features do you need on your user sessions? And so on. Once you know what you want, then see if there's a framework—Django, Tornado, CherryPy, whatever—that looks like it'll make your design easier. Only then ask yourself whether you want WSGI, CGI, mod_python, an embedded server, …
2
0
0
Any reason to try and replace it with something else? I'm a beginner to Python, but have encountered problems with C and importing CGI. A list of instances where import cgi would not be the best option would be great for further understanding of language use.
Any reason not to just import CGI in Python?
0
0
0
91
16,470,079
2013-05-09T19:57:00.000
0
0
0
0
python,postgresql
16,470,721
2
false
0
0
Let me see if I understand what you are trying to accomplish:
- store ad-hoc user code in a varchar field on a database
- read and execute said code
- allow said code to affect the database in question, say drop table ...
Assuming that I've got it, you could write something that reads the table holding the code (use pyodbc or something) and runs an eval on what was pulled from the db. This will let you execute ANY code, including self-updating code. Are you sure this is what you want to do?
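A bare sketch of those steps with psycopg2 (the connection parameters and table/column names are made up; note that exec runs arbitrary code, which is exactly the danger being pointed out):

    import psycopg2

    conn = psycopg2.connect(dbname='mydb', user='me')
    cur = conn.cursor()
    cur.execute("SELECT code FROM snippets WHERE id = %s", (1,))
    (source,) = cur.fetchone()

    # DANGER: this runs whatever is stored in the row, including
    # statements that modify or drop tables via the same connection.
    exec(source, {'conn': conn})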
1
0
0
If I have text that is saved in a PostgreSQL database, is there any way to execute that text as Python code and potentially have it update the same database?
Execute text in Postgresql database as Python code
0
1
0
162
16,471,763
2013-05-09T21:51:00.000
1
0
1
0
python,gaussian
16,471,982
6
false
0
0
If you have a small range of integers, you can create a list with a gaussian distribution of the numbers within that range and then make a random choice from it.
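A variant of the same idea, using rejection sampling instead of a precomputed list (a small sketch; mean 3 and variance 4 give standard deviation 2):

    import random

    def bounded_gauss(mu, sigma, low, high):
        # Resample until the value falls inside [low, high].
        while True:
            value = random.gauss(mu, sigma)
            if low <= value <= high:
                return value

    sample = bounded_gauss(3, 4 ** 0.5, 0, 10)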
1
10
1
I want to use the Gaussian function in Python to generate some numbers in a specific range, given the mean and variance. So let's say I have a range between 0 and 10, and I want my mean to be 3 and my variance to be 4 (mean = 3, variance = 4). How can I do that?
Generating numbers with a Gaussian function in a range using Python
0.033321
0
0
36,597
16,474,027
2013-05-10T02:00:00.000
1
0
0
1
google-app-engine,python-2.7
18,885,291
5
false
1
0
I have to manually start Python and point it at my app folder. For instance, in a command-line window on Windows: I installed Python in C:\Python27 and my sample app is in c:\GoogleApps\guestbook, so I run C:\Python27>dev_appserver.py c:\GoogleApps\guestbook, and then I can start my app in the Google App Engine Launcher and hit localhost:8080.
2
1
0
I'm trying to run the Google App Engine Python 2.7 Hello World program and view it in a browser via Google App Engine Launcher. I followed the install and program instructions to the letter. I copied and pasted the code in the instructions to the helloworld.py file and app.yaml and verified that they are correct and in the directory listed as the application directory. I hit run on the launcher and it runs with no errors, although I get no sign that it has completed (orange clock symbol next to the app name). I get the following from the logs: Running dev_appserver with the following flags: --skip_sdk_update_check=yes --port=8080 --admin_port=8000 Python command: /opt/local/bin/python2.7 When I try to open in the browser via the GAE Launcher, the 'browse' icon is grayed out and the browser won't open. I tried opening localhost:8080 in Firefox and Chrome as the tutorial suggests, but I get 'unable to connect' errors from both. How can I view Hello World in a browser? Is there some configuration I need to make on my machine?
Can't connect to localhost:8080 when trying to run Google App Engine program
0.039979
0
0
10,838
16,474,027
2013-05-10T02:00:00.000
1
0
0
1
google-app-engine,python-2.7
18,226,152
5
false
1
0
I had the same problem. This seemed to fix it: cd to google_appengine and run python dev_appserver.py --port=8080 --host=127.0.0.1 /path/to/application. At this point there is a prompt to allow updates on running; I said Yes. The app was then running as it should, and when I quit this and went in using the launcher again, that worked too.
2
1
0
I'm trying to run the Google App Engine Python 2.7 Hello World program and view it in a browser via Google App Engine Launcher. I followed the install and program instructions to the letter. I copied and pasted the code in the instructions to the helloworld.py file and app.yaml and verified that they are correct and in the directory listed as the application directory. I hit run on the launcher and it runs with no errors, although I get no sign that it has completed (orange clock symbol next to the app name). I get the following from the logs: Running dev_appserver with the following flags: --skip_sdk_update_check=yes --port=8080 --admin_port=8000 Python command: /opt/local/bin/python2.7 When I try to open in the browser via the GAE Launcher, the 'browse' icon is grayed out and the browser won't open. I tried opening localhost:8080 in Firefox and Chrome as the tutorial suggests, but I get 'unable to connect' errors from both. How can I view Hello World in a browser? Is there some configuration I need to make on my machine?
Can't connect to localhost:8080 when trying to run Google App Engine program
0.039979
0
0
10,838
16,476,413
2013-05-10T06:29:00.000
-1
0
0
0
python,mysql,pandas,mysql-python
56,185,092
9
false
0
0
df.to_sql(name="owner", con=db_connection, schema="aws", if_exists="replace", index=True, index_label="id")
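For context, a self-contained sketch of that call: recent pandas versions expect a SQLAlchemy engine rather than a raw MySQLdb connection, so something like the following (connection string, table, and columns are illustrative):

```python
import pandas as pd
from sqlalchemy import create_engine

# Illustrative credentials/database -- replace with your own.
engine = create_engine('mysql+mysqldb://user:password@localhost/mydb')

df = pd.DataFrame({'data1': [1.5, 2.5], 'data2': ['a', 'b']})

# Writes the whole DataFrame in one call; no row-by-row iteration needed.
# The index becomes an 'id' column because of index=True / index_label.
df.to_sql('owner', con=engine, if_exists='replace', index=True, index_label='id')
```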
1
60
0
I can connect to my local MySQL database from Python, and I can create, select from, and insert individual rows. My question is: can I directly instruct MySQLdb to take an entire DataFrame and insert it into an existing table, or do I need to iterate over the rows? In either case, what would the Python script look like for a very simple table with an ID and two data columns, and a matching DataFrame?
How to insert a pandas DataFrame via MySQLdb into a database?
-0.022219
1
0
168,445