Column                             Dtype          Min        Max
Q_Id                               int64          337        49.3M
CreationDate                       stringlengths  23         23
Users Score                        int64          -42        1.15k
Other                              int64          0          1
Python Basics and Environment      int64          0          1
System Administration and DevOps   int64          0          1
Tags                               stringlengths  6          105
A_Id                               int64          518        72.5M
AnswerCount                        int64          1          64
is_accepted                        bool           2 classes
Web Development                    int64          0          1
GUI and Desktop Applications       int64          0          1
Answer                             stringlengths  6          11.6k
Available Count                    int64          1          31
Q_Score                            int64          0          6.79k
Data Science and Machine Learning  int64          0          1
Question                           stringlengths  15         29k
Title                              stringlengths  11         150
Score                              float64        -1         1.2
Database and SQL                   int64          0          1
Networking and APIs                int64          0          1
ViewCount                          int64          8          6.81M
(For stringlengths columns, Min/Max are string lengths; is_accepted's two classes are true/false.)
11,538,607
2012-07-18T09:46:00.000
5
0
0
0
python,django,iis-7,iis-7.5
11,539,473
3
true
1
0
Right-click on your website in the IIS7 manager and add a virtual directory, named the same as the folder you want IIS to handle, let's say static. Point it at the real static folder in your application (mine was in myproject/static), then click OK and you're done :)
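For completeness, a minimal sketch of the Django settings this setup pairs with (paths are illustrative; adjust them to your layout). With DEBUG = False, running python manage.py collectstatic gathers all static files into STATIC_ROOT so the IIS virtual directory can serve them directly:

    # settings.py -- illustrative values
    STATIC_URL = '/static/'                      # URL prefix the virtual directory answers to
    STATIC_ROOT = 'C:/inetpub/myproject/static'  # folder the virtual directory points at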
1
3
0
I successfully deployed my Django site to IIS7 but I still have problems about how to configure IIS to serve the static files. I tried many many things from the internet but nothing seems to work. Everything is working fine on Django server (manage.py runserver) and with debug = True, but as soon as I turn off debug (debug = False) and open it on IIS, the bootstrap css and js are not loaded. Do you have any ideas about why I'm experiencing this behavior? Perhaps you could point me to a step-by-step guide to help me?
What are the problems with loading CSS and JS from Django to IIS7?
1.2
0
0
2,367
11,540,645
2012-07-18T11:48:00.000
0
1
0
0
java,python,html,symfony,liipfunctionaltestbundle
26,174,556
1
false
1
0
As noted above: you need to install the JDK, not just the JRE, because you need the Java compiler.
1
3
0
I would like to use the HTML5 validator from LiipFunctionalTestBundle in my Symfony2 project. I followed the instructions on the bundle's GitHub page, but I got this error during python build: IOError: [Errno 2] No such file or directory: './syntax/relaxng/datatype/java/dist/html5-datatypes.jar'. Indeed, there is a "dist" folder under that path, but it's empty (no files inside). I also tried to download the file from daisy-pipeline, but it gets deleted after running python build again. I'm using Java 1.7.0_04 on Ubuntu x64.
html5 checker compilation
0
0
0
218
11,542,640
2012-07-18T13:32:00.000
0
0
0
0
python,database,django,unit-testing
11,544,226
1
true
1
0
This is the default behavior of Django's transactional test case: it performs a rollback after each test. There is nothing preventing you, though, from having a module function or test case method, or from overriding TestCase.setUp(), to dynamically create test data. In fact, whenever you find yourself duplicating code, like creating a user and signing in with their credentials via the test client, you should find a way to make those bits reusable across your project's test cases.
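For illustration, a minimal sketch of such a reusable fixture (model names and credentials are hypothetical):

    from django.contrib.auth.models import User
    from django.test import TestCase

    class SignedInTestCase(TestCase):
        def setUp(self):
            # runs before every test; each test's changes are rolled back afterwards
            self.user = User.objects.create_user('alice', 'alice@example.com', 'secret')
            self.client.login(username='alice', password='secret')

    class HomepageTests(SignedInTestCase):
        def test_homepage(self):
            # self.user and a logged-in client are already available here
            response = self.client.get('/')
            self.assertEqual(response.status_code, 200)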
1
0
0
Currently when I run a set of unit tests on Django, each test makes its own database. This means that traversing multiple features of the site all requires a user sign-up, login, etc. It would be much simpler if they all fetched from the same temporary database; is there any way to do this?
How to make Django unit-tests all retrieve from the same test database?
1.2
0
0
133
11,543,297
2012-07-18T14:07:00.000
11
0
1
0
python,function,global-variables
11,543,600
5
false
0
0
There's no way to declare them all as global, and you really don't want to. Those 20 variables probably should be turned into an object with 20 attributes instead.
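A sketch of that refactor (the attribute names are hypothetical):

    class Config(object):
        """Groups what used to be 20-odd separate variables."""
        def __init__(self):
            self.width = 800
            self.height = 600
            # ... the other attributes ...

    config = Config()  # one shared instance instead of many globals

    def resize():
        config.width *= 2  # mutating an attribute needs no 'global' statement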
1
28
0
Is there a simple way to make all variables in a function global? I have 20-odd variables in a function and naming them global one by one doesn't make nice code... to me anyway :)
Make all variables in a Python function global
1
0
0
27,439
11,543,991
2012-07-18T14:41:00.000
1
0
0
0
python,numpy,nearest-neighbor
11,564,801
2
false
0
0
I don't know of anything which provides the functionality you desire out of the box. If you have already written the logic and it is just slow, have you considered Cython-ing your code? For simple typed looping operations you could get a significant speedup.
1
1
1
I have an array that looks something like: [0 x1 0 0 y1 0 z1 0 0 x2 0 y2 0 z2 0 0 x3 0 0 y3 z3 0 0 x4 0 0 y4 z4 0 x5 0 0 0 y5 z5 0 0 0 0 y6 0 0]. I need to determine the sets of connected lines (i.e. the lines that connect the points [x1,x2,x3...], [y1,y2,y3...], [z1,z2,z3...]) from the array, and then I need to find the maximum value in each line, i.e. max{x1,x2,x3,...}, max{y1,y2,y3,...} etc. I was trying to do a nearest-neighbor search using a kd-tree, but it returned the same array. The array is of size (200 x 8000). Is there any easier way to do this? Thanks.
How to determine set of connected line from an array in python
0.099668
0
0
526
11,544,870
2012-07-18T15:27:00.000
0
0
1
0
python,monitoring,timing,sampling
11,545,079
1
false
0
0
I suspect there are several Python packages out there that help with what you want. I also suspect Python is not the right tool for that purpose. There is the timeit module. There is the time module with clock(), which is NOT a wall clock but a CPU-usage clock (for the application initializing the time.clock() object). It is a floating-point value which shows some 12+ digits below the ones place, i.e. 1.12345678912345. Python floats are not known for their accuracy, and the return value from time.clock() is not something I personally trust as accurate. There are other Python introspection tools that you can google for, like inspect, itertools, and others, that time processes; however, I suspect their accuracy is dependent on running averages of many iterations of measuring the same thing.
1
0
0
I am trying to sample CPU registers every millisecond and calculate the frequency. To have an accurate measurement, I require the sampling time to be very accurate. I have been using time.sleep() to achieve this, but sleep is not very accurate past 1 second. What I would like to do is set up a counter and sample when that counter reaches a certain value, where the counter is incremented at an accurate rate. I am running Python 2.6. Does anyone have any suggestions?
Python Accurate Sampling
0
0
0
182
11,545,577
2012-07-18T16:03:00.000
1
0
0
1
python,django
11,550,187
1
true
0
0
Do not add #!/usr/bin/python. Use a virtualenv and activate it before running python manage.py your_command. When you are familiar with virtualenv, try virtualenvwrapper.
1
1
0
When I run ./manage.py, I get the following error: from: can't read /var/mail/os.path ./manage.py: line 4: import: command not found ./manage.py: line 7: syntax error near unexpected token `0,' ./manage.py: line 7: `sys.path.insert( 0, abspath( join( dirname( file ), 'external_apps' ) ) )' What is happening, and how can I resolve it?
Python path setting error with manage.py
1.2
0
0
1,723
11,545,759
2012-07-18T16:11:00.000
10
1
1
0
python,django,testing,tdd,bootstrapping
11,546,149
2
false
1
0
Yes. One of the examples Kent Beck works through in his book "Test Driven Development: By Example" is a test runner.
2
2
0
I am currently writing a new test runner for Django and I'd like to know if it's possible to TDD my test runner using my own test runner. Kinda like compiler bootstrapping where a compiler compiles itself. Assuming it's possible, how can it be done?
Is it possible to TDD when writing a test runner?
1
0
0
123
11,545,759
2012-07-18T16:11:00.000
4
1
1
0
python,django,testing,tdd,bootstrapping
11,546,191
2
true
1
0
Bootstrapping is a cool technique, but it does have a circular-definition problem. How can you write tests with a framework that doesn't exist yet? Bootstrapping compilers can get around this problem in several ways, but it's my understanding that usually the first implementation isn't bootstrapped. Later bootstraps would be rewrites that then use the original compiler to compile themselves. So use an existing framework to write it the first time out. Then, once you have a stable release, you can re-write the tests using your own test-runner.
2
2
0
I am currently writing a new test runner for Django and I'd like to know if it's possible to TDD my test runner using my own test runner. Kinda like compiler bootstrapping where a compiler compiles itself. Assuming it's possible, how can it be done?
Is it possible to TDD when writing a test runner?
1.2
0
0
123
11,547,150
2012-07-18T17:37:00.000
4
0
1
0
javascript,python,html,file
11,547,371
6
true
1
0
You just need to get the data you have in your Python code into a form readable by your JavaScript, right? Why not take the data structure, convert it to JSON, and then write a .js file, included by your .html file, that is simply var data = { json: "object here" };
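A sketch of that approach (the file name and data are illustrative):

    import json

    data = {'title': 'My graph', 'points': [1, 2, 3]}  # your Python objects

    with open('data.js', 'w') as f:
        # produces: var data = {...};  -- loadable with <script src="data.js">
        f.write('var data = ' + json.dumps(data) + ';')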
1
1
0
I'm working on automatically generating a local HTML file, and all the relevant data that I need is in a Python script. Without a web server in place, I'm not sure how to proceed, because otherwise I think an AJAX/JSON solution would be possible. Basically, in Python I have a few list and dictionary objects that I need to use to create graphs using JavaScript and HTML. One solution I have (which really sucks) is to literally write HTML/JS from within Python using strings and then save to a file. What else could I do here? I'm pretty sure JavaScript doesn't have file I/O capabilities. Thanks.
How to transfer variable data from Python to Javascript without a web server?
1.2
0
1
10,066
11,554,676
2012-07-19T05:50:00.000
2
0
0
0
python,database,data-structures,filesystems
11,554,828
2
false
1
0
I suggest following the Unix way. Each file is considered a stream of bytes, nothing more, nothing less. Each file is technically represented by a single structure called an i-node (index node) that keeps all information related to the physical stream of the data (including attributes, ownership, ...). The i-node does not contain anything about the readable name. Each i-node is given a unique number (forever) that acts as the file's technical name. You can use a similar number to give the stream of bytes in your database its unique identification. The i-nodes are stored on the disk in a separate contiguous section; think of an array of i-node structures (in the abstract sense), or a separate table in the database. Back to the file: this way it is represented by a unique number. For your database representation, the number will be the unique key. If you need the other i-node information (file attributes), you can add other columns to the table. One column will be of the blob type, and it will represent the content of the file (the stream of bytes). For AJAX, I guess the files will be rather small, so you should not have a problem with the size limits of the blob. So far, the files are stored as a flat structure (as the physical disk is, and as the relational database is). The directory names and file names are kept separately, in other files (kept in the same structure, together with the other files, represented also by their i-node). Basically, a directory file captures tuples (bare_name, i-node number). (This is how hard links are implemented in Unix: two names are paired with the same i-node number.) The root directory file has to have a fixed technical identification, i.e. a reserved i-node number.
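For illustration, a sketch of the two tables described above, created through sqlite3 (the names and columns are my own, not a fixed schema):

    import sqlite3

    conn = sqlite3.connect('fs.db')
    conn.executescript('''
        -- one row per stream of bytes, keyed by an i-node-like number
        CREATE TABLE inode (
            id      INTEGER PRIMARY KEY,  -- the "i-node number"
            mode    INTEGER,              -- attributes, ownership, ...
            content BLOB                  -- the file's bytes
        );
        -- directory entries: (directory i-node, bare name) -> file i-node
        CREATE TABLE dirent (
            dir_id INTEGER REFERENCES inode(id),
            name   TEXT,
            ino    INTEGER REFERENCES inode(id),
            PRIMARY KEY (dir_id, name)
        );
    ''')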
1
4
0
I have a data organization issue. I'm working on a client/server project where the server must maintain a copy of the client's filesystem structure inside of a database that resides on the server. The idea is to display the filesystem contents on the server side in an AJAX-ified web interface. Right now I'm simply uploading a list of files to the database where the files are dumped sequentially. The problem is how to recapture the filesystem structure on the server end once they're in the database. It doesn't seem feasible to reconstruct the parent->child structure on the server end by iterating through a huge list of files. However, when the file objects have no references to each other, that seems to be the only option. I'm not entirely sure how to handle this. As near as I can tell, I would need to duplicate some type of filesystem data structure on the server side (in a Btree perhaps?) with objects maintaining pointers to their parents and/or children. I'm wondering if anyone has had any similar past experiences they could share, or maybe some helpful resources to point me in the right direction.
Data structures in python: maintaining filesystem structure within a database
0.197375
1
0
1,195
11,554,892
2012-07-19T06:11:00.000
6
0
0
0
python,django
11,555,375
3
false
1
0
For projects that are going to be used by only one installation, I usually name the project after who will be using the system (e.g. the organization's name) and the apps after the major features of the system. However, if your app is really so simple that you don't expect to be using multiple apps, then Django might be too heavyweight, and you might actually be better served by something simpler.
1
17
0
When creating projects which contain just a single app, what are good ways to name the project? I can think without any confusion how to name my app, but I need a project name for it. (Can I create an app without a project? If yes, is it good to do that?) [Update] I saw some projects on GitHub which contain a single app named as django-[appname]. I liked it and will be following it for naming my projects which contain a single app. Django might be overkill for single-app projects, but as of now I have just started learning Django, so I have only one app in my projects. Thanks
Good ways to name django projects which contain only one app
1
0
0
6,056
11,556,783
2012-07-19T08:25:00.000
1
0
0
0
python,sqlite,parallel-processing
11,557,147
1
true
0
0
Firstly, make sure any relevant indexes are in place to assist in efficient queries, which may or may not help... Other than that, SQLite is meant to be a (strangely) lite embedded SQL DB engine; 41 million rows is probably pushing it, depending on the number and size of columns etc. You could take your DB and import it into PostgreSQL or MySQL, which are both open-source RDBMSs with Python bindings and extensive feature sets. They'll be able to handle queries, indexing, caching, memory management etc. on large data effectively. (Or at least, since they're designed for that purpose, more effectively than SQLite, which wasn't...)
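Before migrating, it may be worth confirming whether the slow query can use an index at all; a sketch (table and column names are hypothetical):

    import sqlite3

    conn = sqlite3.connect('big.db')
    # index whatever column your WHERE clause filters on
    conn.execute('CREATE INDEX IF NOT EXISTS idx_events_ts ON events(ts)')
    # EXPLAIN QUERY PLAN shows whether SQLite will actually use it
    for row in conn.execute('EXPLAIN QUERY PLAN SELECT * FROM events WHERE ts > ?', (123456,)):
        print(row)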
1
0
0
I have a huge database in sqlite3 of 41 million rows in a table. However, it takes around 14 seconds to execute a single query. I need to significantly improve the access time! Is this a hard disk hardware limit or a processor limit? If it is a processor limit then I think I can use the 8 processors I have to parallelise the query. However I never found a way to parallelize queries in SQLite for python. Is there any way to do this? Can I have a coding example? Or are other database programs more efficient in access? If so then which ones?
reducing SQLITE3 access time in Python (Parallelization?)
1.2
1
0
204
11,559,752
2012-07-19T11:20:00.000
-1
0
0
1
python,plone
11,560,834
1
false
1
0
This can be achieved in two ways, differentiated at Step 3. Step 1: records are generally written as one line each in zinstance.log. (There are some cases where a record is written across multiple lines, so you need to handle exceptions.) Here you can use the readline function to read each record. Step 2: after reading a record, write it to whatever destination you want using writelines. Step 3: after reading and writing records, you have two choices. Method 1: copy zinstance.log, up to wherever you have read, to another location (suffixed with the date and time, so that the logs are always available) and create a new empty zinstance.log at the original location; the server will automatically write new logs to the new file, which will be available the next time you run your program. Method 2: keep a pointer to the position up to which you have read in a file, which is read the next time you run your program, and start reading records from that position. This method can be unreliable: if the file size exceeds the range of the pointer's data type, the position you have read up to will be corrupted. Since log files can grow large, it is better not to take this approach. Hope this answers your concern.
1
1
0
I created a content rule to run a logger action when new content (viz. a file) is added. When I view the zinstance.log file on Linux, I see all the logs. I wish to send the output of each logger action separately to another log file, so that it contains only the logs for that rule, wherever applicable in the Plone site. How can this be achieved? Is there any add-on for this? I know that we can grep the output and pipe it to a CSV for formatting later.
Can I format the output of logger action of a content rule as CSV in Plone 4.1?
-0.197375
0
0
85
11,561,744
2012-07-19T13:18:00.000
0
0
1
1
python,python-3.x,cygwin
11,573,304
3
false
0
0
OK, one solution I have found is that I can invoke Cygwin through a command window and then pass instructions using subprocess. However, being able to run Python 3.2 code from Cygwin would have been preferable to me, rather than the other way round!
1
1
0
I need to be able to run Python 3.2 scripts from Cygwin. However, the current setup of Cygwin shows it isn't compatible with that. I searched on the internet and saw in some other forums that people have been unable to do it. Does anyone here have any idea how to do it?
Running python 3.2 programs from cygwin(windows 7 user)
0
0
0
319
11,562,279
2012-07-19T13:47:00.000
0
0
0
0
python,python-3.x,excel-2007,openpyxl
11,783,592
1
false
0
0
I believe you can access the font colour via: colour = ws.cell(row=id, column=id).style.font.color. I am not sure how to access the cell colour, though.
1
0
0
I am parsing .xlsx files using openpyxl. While writing into the .xlsx files I need to maintain the same font colour as well as cell colour as was present in the cells of my input .xlsx files. Any idea how to extract the colour coding from a cell and then apply the same in another Excel file? Thanks in advance.
How to detect colours and then apply colours while working with .xlsx(excel-2007) files on python 3.2(windows 7)
0
1
0
116
11,562,721
2012-07-19T14:11:00.000
2
1
1
1
python,pythonpath
11,576,705
1
true
0
0
"If I set PYTHONPATH to contain that prefix's site-packages directory, the egg is added to sys.path and the module can be imported." Adding a directory to PYTHONPATH doesn't trigger processing of the .pth files in it, therefore your zipped egg won't be in sys.path. You can import a module from the egg only if the egg itself is in sys.path (the parent directory is not enough). "If I instead from within the script run site.addsitedir with the prefix's site-packages directory, however, the egg is not added to sys.path and the module import fails." site.addsitedir() triggers processing of .pth files if the directory hasn't been seen yet, so it should work; the behavior you describe is the opposite of what should happen. As a workaround you could add the egg to sys.path manually: sys.path.insert(0, '/path/to/the.egg')
1
0
0
I have a python module packaged by distutils into a zipped egg installed in a custom prefix. If I set PYTHONPATH to contain that prefix's site-packages directory, the egg is added to sys.path and the module can be imported. If I instead from within the script run site.addsitedir with the prefix's site-packages directory, however, the egg is not added to sys.path and the module import fails. In both cases, the module's site-packages directory ends up in sys.path. Is this expected behavior? If so, is there any way to tell Python to process the .pth files in a given directory without setting an env var?
site.addsitedir doesn't add egg to sys.path
1.2
0
0
3,434
11,564,743
2012-07-19T15:58:00.000
1
0
0
0
imagemagick,python-imaging-library,jmagick
12,406,407
1
false
0
0
Do you mean resize to fit in a bounding box? If so, you can use -resize WIDTHxHEIGHT to scale an image to fit within WIDTH and HEIGHT, maintaining the original aspect ratio. For JMagick, I guess that should translate to some resize() method.
1
1
0
Is there a bounding box function in ImageMagick similar to the one in PIL? Specifically one I can use with JMagick? I'm finding it so hard to make the switch from PIL to JMagick - any good documentation out there? I've looked through the java docs and it's just not helping me. Thanks!
Is there a bounding box function in ImageMagick?
0.197375
0
0
1,197
11,565,271
2012-07-19T16:30:00.000
2
0
0
0
python
36,500,008
3
false
0
1
For clarity to anyone who comes across this old question: PythonWin's interactive shell window can be cleared by highlighting all the text and then pressing delete, just as if it were a text editor.
1
1
0
When I use something like Pythonwin I always have to highlight everything in the debugging window and press delete to get rid of everything. Is there a way to make the program autoclear the debug window?
Clearing screen in Python (when using Pythonwin)
0.132549
0
0
1,178
11,565,313
2012-07-19T16:32:00.000
1
0
0
0
python,google-app-engine,redirect,jinja2
11,565,405
2
false
1
0
The easiest way is to use HTML forms; the POST request that submits the form will include the values.
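One common way to carry the values without sessions is the redirect's query string; a sketch (handler names are hypothetical):

    import urllib
    import webapp2

    class HomeHandler(webapp2.RequestHandler):
        def post(self):
            params = {'product': self.request.get('product'),
                      'email': self.request.get('email')}
            self.redirect('/success?' + urllib.urlencode(params))

    class SuccessHandler(webapp2.RequestHandler):
        def get(self):
            product = self.request.get('product')
            email = self.request.get('email')
            # render the jinja2 template for /success with product and email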
1
7
0
I am relatively new to Python so please excuse any naive questions. I have a home page with 2 inputs, one for a "product" and one for an "email." When a user clicks submit they should be sent to "/success" where it will say: You have requested "product". You will be notified at "email". I am trying to figure out the best way to pass the "product" and "email" values through the redirect into my "/success" template. I am using the webapp2 framework and jinja within Google App Engine. Thanks
passing variables to a template on a redirect in python
0.099668
0
0
6,838
11,565,662
2012-07-19T16:53:00.000
1
0
0
0
python,google-app-engine,jinja2,voting-system
11,565,728
2
true
1
0
Store the voting user with the vote, and check for an existing vote by the current user, using your database. You can perform the check either before you serve the page (and so disable your voting buttons), or when you get the vote attempt (and show some kind of message). You should probably write code to handle both scenarios if the voting really matters to you.
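A sketch of that check with App Engine's db API (model and property names are hypothetical):

    from google.appengine.ext import db

    class Vote(db.Model):
        post_id = db.IntegerProperty()
        user_id = db.StringProperty()
        value = db.IntegerProperty()  # +1 or -1

    def cast_vote(post_id, user_id, value):
        existing = Vote.all().filter('post_id =', post_id) \
                             .filter('user_id =', user_id).get()
        if existing is not None:
            return False  # this user already voted on this post
        Vote(post_id=post_id, user_id=user_id, value=value).put()
        return True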
1
1
0
I am writing a web app on google app engine with python. I am using jinja2 as a templating engine. I currently have it set up so that users can upvote and downvote posts but right now they can vote on them as many times as they would like. I simply have the vote record in a database and then calculate it right after that. How can I efficiently prevent users from casting multiple votes?
how to prevent multiple votes from a single user
1.2
0
0
430
11,566,537
2012-07-19T17:50:00.000
-1
0
0
0
python,mysql
11,566,922
6
false
0
0
SELECT TOP 1 * FROM tablename ORDER BY date_and_time DESC   -- SQL Server
SELECT * FROM tablename ORDER BY date_and_time DESC LIMIT 1 -- MySQL
2
3
0
I have a MySQL table with columns name, perf, and date_time. How can I retrieve only the most recent row?
Retrieving only the most recent row in MySQL
-0.033321
1
0
1,336
11,566,537
2012-07-19T17:50:00.000
3
0
0
0
python,mysql
11,566,549
6
false
0
0
SELECT * FROM tablename ORDER BY date_time DESC LIMIT 1
2
3
0
I have a MySQL table with columns name, perf, and date_time. How can I retrieve only the most recent row?
Retrieving only the most recent row in MySQL
0.099668
1
0
1,336
11,567,357
2012-07-19T18:48:00.000
3
0
0
0
python,mysql
11,567,806
2
true
0
0
A quite simple and straightforward solution is to poll the latest autoincrement id from your table and compare it with what you saw at the previous poll. If it is greater, you have new data. This is called "active polling"; it's simple to implement and will suffice as long as you don't poll too often. So you have to store the last id value somewhere in your GUI. Note that this stored value will reset when you restart your GUI application, so be sure to think about what to do at the start of the GUI. Probably you will need to track only insertions that occur while the GUI is running; then, at GUI startup, just poll and store the current id value, and afterwards poll periodically and react to its changes.
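A sketch of that active poll with MySQLdb (table and column names are hypothetical):

    import MySQLdb

    conn = MySQLdb.connect(host='localhost', user='me', passwd='secret', db='mydb')

    def latest_id():
        cur = conn.cursor()
        cur.execute('SELECT MAX(id) FROM mytable')
        (max_id,) = cur.fetchone()
        return max_id or 0

    last_seen = latest_id()  # snapshot at GUI startup
    # ... later, on a timer ...
    current = latest_id()
    if current > last_seen:
        print('new rows arrived')  # trigger the GUI message here
        last_seen = current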
1
1
0
I am creating a GUI that is dependent on information from a MySQL table. What I want to be able to do is display a message every time the table is updated with new data. I am not sure how to do this or even if it is possible. I have code that retrieves the newest MySQL update, but I don't know how to show a message every time new data comes into the table. Thanks!
Scanning MySQL table for updates Python
1.2
1
0
1,004
11,567,431
2012-07-19T18:53:00.000
4
1
0
1
python,timeout,jobs,beanstalkd,beanstalkc
11,570,528
1
true
1
0
Calling beanstalk.reserve(timeout=0) means to wait 0 seconds for a job to become available, so it'll time out immediately unless a job is already in the queue when it's called. If you want it never to time out, use timeout=None (or omit the timeout parameter, since None is the default).
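So the loop collapses to a blocking reserve; a sketch:

    def get_job(self):
        while True:
            # blocks until a job is available; never raises a timeout
            job = self.beanstalk.reserve(timeout=None)
            self.process_job(job)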
1
0
0
I am using Python 2.7 and a beanstalkd server with beanstalkc as the client library. It takes about 500 to 1500 ms to process each job, depending on the size of the job. I have a cron job that keeps adding jobs to the beanstalkd queue, and a "worker" that runs in an infinite loop getting jobs and processing them, e.g.:

    def get_job(self):
        while True:
            job = self.beanstalk.reserve(timeout=0)
            if job is None:
                timeout = 10  # seconds
                continue
            else:
                timeout = 0  # seconds
            self.process_job(job)

This results in a "timed out" exception. Is this the best practice for pulling a job from the queue? Could someone please help me out here?
Getting jobs from beanstalkd - timed out exception
1.2
0
0
1,433
11,570,786
2012-07-19T23:28:00.000
6
0
0
0
python,tkinter
11,580,799
3
true
0
1
You will have to set this up yourself, but it's definitely possible. You'll simply need to make appropriate bindings for <ButtonPress-1> (identify the item to be dragged), <ButtonRelease-1> (implement the drop), and <B1-Motion> (provide feedback during the drag)
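A minimal sketch of those bindings: a single-item drag that drops the grabbed row next to the row under the cursor (scrolling, multi-select and drop-on-self checks are left out):

    import tkinter as tk   # 'Tkinter' on Python 2
    from tkinter import ttk

    root = tk.Tk()
    tv = ttk.Treeview(root)
    tv.pack(fill='both', expand=True)
    for i in range(5):
        tv.insert('', 'end', text='item %d' % i)
    tv.grabbed = None

    def on_press(event):
        tv.grabbed = tv.identify_row(event.y)  # remember what was picked up

    def on_motion(event):
        row = tv.identify_row(event.y)
        if row:
            tv.selection_set(row)              # crude feedback: highlight the target

    def on_release(event):
        target = tv.identify_row(event.y)
        if tv.grabbed and target and tv.grabbed != target:
            tv.move(tv.grabbed, tv.parent(target), tv.index(target))
        tv.grabbed = None

    tv.bind('<ButtonPress-1>', on_press)
    tv.bind('<B1-Motion>', on_motion)
    tv.bind('<ButtonRelease-1>', on_release)
    root.mainloop()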
2
5
0
I can see how to cut and paste nodes in a tree or move them up and down using buttons or key bindings. Is there a way to implement dragging and dropping nodes around the treeview?
tkinter treeview - drag and drop?
1.2
0
0
4,459
11,570,786
2012-07-19T23:28:00.000
4
0
0
0
python,tkinter
41,599,442
3
false
0
1
Could not leave a comment on the accepted solution, so adding my 50c here: to resolve the issue of dragging up and down a multilevel tree, you need to specify the parent ID when moving, as per the line below: tv.move(s, tv.identify_row(event.y), moveto). This might be useful for someone...
2
5
0
I can see how to cut and paste nodes in a tree or move them up and down using buttons or key bindings. Is there a way to implement dragging and dropping nodes around the treeview?
tkinter treeview - drag and drop?
0.26052
0
0
4,459
11,573,261
2012-07-20T05:13:00.000
0
0
1
1
python,virtualenv
22,811,410
4
false
0
0
If you want to run a program outside of the virtualenv, just run your system python executable (e.g. /usr/bin/python) instead of the one in the virtualenv.
3
10
0
For activation there is a script that activates a virtualenv from an already running Python interpreter, using execfile('C:/path/to/virtualev/Scripts/activate_this.py', dict(__file__='C:/path/to/virtualev/Scripts/activate_this.py')). However, since I can still import packages that are not in the virtualenv from the current Python script, I am confused about how it works. For deactivation there is no Python script at all. What should I do?
How to activate/deactivate a virtualenv from python code?
0
0
0
6,178
11,573,261
2012-07-20T05:13:00.000
-3
0
1
1
python,virtualenv
11,573,607
4
false
0
0
This sounds like a bad idea. You are trying to modify the environment of your script from within the script; please explain why. Can't you do it hierarchically? Use one script to run different scripts in different virtualenvs.
3
10
0
For activation there is a script that activates a virtualenv from an already running Python interpreter, using execfile('C:/path/to/virtualev/Scripts/activate_this.py', dict(__file__='C:/path/to/virtualev/Scripts/activate_this.py')). However, since I can still import packages that are not in the virtualenv from the current Python script, I am confused about how it works. For deactivation there is no Python script at all. What should I do?
How to activate/deactivate a virtualenv from python code?
-0.148885
0
0
6,178
11,573,261
2012-07-20T05:13:00.000
-4
0
1
1
python,virtualenv
18,660,299
4
false
0
0
At the command line, type the word 'deactivate'.
3
10
0
For activation there is a script that activates a virtualenv from an already running Python interpreter, using execfile('C:/path/to/virtualev/Scripts/activate_this.py', dict(__file__='C:/path/to/virtualev/Scripts/activate_this.py')). However, since I can still import packages that are not in the virtualenv from the current Python script, I am confused about how it works. For deactivation there is no Python script at all. What should I do?
How to activate/deactivate a virtualenv from python code?
-1
0
0
6,178
11,574,025
2012-07-20T06:35:00.000
1
0
0
0
python,django,django-views
11,576,960
3
true
1
0
Running a script from the JavaScript is not a clean way to do it, because the user can close the browser, disable JS, etc. Instead you can use django-celery; it lets you run background scripts, and you can check the status of the script dynamically from a middleware. Good luck.
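A sketch with the django-celery-era API (task and template names are hypothetical):

    # tasks.py
    from celery.task import task

    @task
    def after_page_load():
        # the background work goes here
        return 'done'

    # views.py
    from django.shortcuts import render_to_response
    from myapp.tasks import after_page_load

    def page(request):
        after_page_load.apply_async(countdown=15)  # run ~15 s after the view returns
        return render_to_response('page.html')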
1
0
0
Let's say I have a view page(request) which loads page.html. Now, after successfully loading page.html, I want to automatically run a Python script behind the scenes 10-15 seconds after page.html has loaded. How is this possible? Also, is it possible to show the status of the script dynamically (running / stopped / syntax error, etc.)?
How to invoke a python script after successfully running a Django view
1.2
0
0
382
11,575,398
2012-07-20T08:11:00.000
10
1
1
0
python,django,git,version-control
11,575,518
17
false
1
0
I suggest using configuration files for that, and not versioning them. You can, however, version examples of the files. I don't see any problem with sharing development settings; by definition they should contain no valuable data.
6
140
0
I keep important settings like the hostnames and ports of development and production servers in my version control system. But I know that it's bad practice to keep secrets (like private keys and database passwords) in a VCS repository. But passwords--like any other setting--seem like they should be versioned. So what is the proper way to keep passwords version controlled? I imagine it would involve keeping the secrets in their own "secrets settings" file and having that file encrypted and version controlled. But what technologies? And how to do this properly? Is there a better way entirely to go about it? I ask the question generally, but in my specific instance I would like to store secret keys and passwords for a Django/Python site using git and github. Also, an ideal solution would do something magical when I push/pull with git--e.g., if the encrypted passwords file changes a script is run which asks for a password and decrypts it into place. EDIT: For clarity, I am asking about where to store production secrets.
How can I save my secret keys and password securely in my version control system?
1
0
0
36,273
11,575,398
2012-07-20T08:11:00.000
4
1
1
0
python,django,git,version-control
11,576,543
17
false
1
0
EDIT: I assume you want to keep track of your previous password versions, say, for a script that would prevent password reuse, etc. I think GnuPG is the best way to go; it's already used in one git-related project (git-annex) to encrypt repository contents stored on cloud services. GnuPG (GNU PGP) provides very strong key-based encryption. You keep a key on your local machine. You add 'mypassword' to ignored files. In a pre-commit hook you encrypt the mypassword file into the mypassword.gpg file tracked by git and add it to the commit. In a post-merge hook you just decrypt mypassword.gpg into mypassword. Now, if your 'mypassword' file did not change, then encrypting it will result in the same ciphertext and it won't be added to the index (no redundancy). The slightest modification of mypassword results in a radically different ciphertext, and mypassword.gpg in the staging area will differ a lot from the one in the repository, and thus will be added to the commit. Even if the attacker gets hold of your gpg key, he still needs to brute-force the password. If the attacker gets access to the remote repository with the ciphertext, he can compare a bunch of ciphertexts, but their number won't be sufficient to give him any non-negligible advantage. Later on you can use .gitattributes to provide on-the-fly decryption for a quick git diff of your password. Also, you can have separate keys for different types of passwords, etc.
6
140
0
I keep important settings like the hostnames and ports of development and production servers in my version control system. But I know that it's bad practice to keep secrets (like private keys and database passwords) in a VCS repository. But passwords--like any other setting--seem like they should be versioned. So what is the proper way to keep passwords version controlled? I imagine it would involve keeping the secrets in their own "secrets settings" file and having that file encrypted and version controlled. But what technologies? And how to do this properly? Is there a better way entirely to go about it? I ask the question generally, but in my specific instance I would like to store secret keys and passwords for a Django/Python site using git and github. Also, an ideal solution would do something magical when I push/pull with git--e.g., if the encrypted passwords file changes a script is run which asks for a password and decrypts it into place. EDIT: For clarity, I am asking about where to store production secrets.
How can I save my secret keys and password securely in my version control system?
0.047024
0
0
36,273
11,575,398
2012-07-20T08:11:00.000
2
1
1
0
python,django,git,version-control
11,666,554
17
false
1
0
Encrypt the passwords file, using for example GPG. Add the keys on your local machine and on your server. Decrypt the file and put it outside your repo folders. I use a passwords.conf located in my home folder. On every deploy this file gets updated.
6
140
0
I keep important settings like the hostnames and ports of development and production servers in my version control system. But I know that it's bad practice to keep secrets (like private keys and database passwords) in a VCS repository. But passwords--like any other setting--seem like they should be versioned. So what is the proper way to keep passwords version controlled? I imagine it would involve keeping the secrets in their own "secrets settings" file and having that file encrypted and version controlled. But what technologies? And how to do this properly? Is there a better way entirely to go about it? I ask the question generally, but in my specific instance I would like to store secret keys and passwords for a Django/Python site using git and github. Also, an ideal solution would do something magical when I push/pull with git--e.g., if the encrypted passwords file changes a script is run which asks for a password and decrypts it into place. EDIT: For clarity, I am asking about where to store production secrets.
How can I save my secret keys and password securely in my version control system?
0.023525
0
0
36,273
11,575,398
2012-07-20T08:11:00.000
3
1
1
0
python,django,git,version-control
11,689,937
17
false
1
0
Provide a way to override the config. This is the best way to manage a set of sane defaults for the config you check in, without requiring the config to be complete or to contain things like hostnames and credentials. There are a few ways to override default configs. Environment variables (as others have already mentioned) are one way of doing it. The best way is to look for an external config file that overrides the default config values. This allows you to manage the external configs via a configuration management system like Chef, Puppet or Cfengine. Configuration management is the standard answer for managing configs separately from the codebase, so you don't have to do a release to update the config on a single host or a group of hosts. FYI: Encrypting creds is not always a best practice, especially in a place with limited resources. It may be the case that encrypting creds will gain you no additional risk mitigation and simply add an unnecessary layer of complexity. Make sure you do the proper analysis before making a decision.
6
140
0
I keep important settings like the hostnames and ports of development and production servers in my version control system. But I know that it's bad practice to keep secrets (like private keys and database passwords) in a VCS repository. But passwords--like any other setting--seem like they should be versioned. So what is the proper way to keep passwords version controlled? I imagine it would involve keeping the secrets in their own "secrets settings" file and having that file encrypted and version controlled. But what technologies? And how to do this properly? Is there a better way entirely to go about it? I ask the question generally, but in my specific instance I would like to store secret keys and passwords for a Django/Python site using git and github. Also, an ideal solution would do something magical when I push/pull with git--e.g., if the encrypted passwords file changes a script is run which asks for a password and decrypts it into place. EDIT: For clarity, I am asking about where to store production secrets.
How can I save my secret keys and password securely in my version control system?
0.035279
0
0
36,273
11,575,398
2012-07-20T08:11:00.000
2
1
1
0
python,django,git,version-control
49,701,069
17
false
1
0
This is what I do: keep all secrets as env vars in $HOME/.secrets (go-r perms) that $HOME/.bashrc sources (this way, if you open .bashrc in front of someone, they won't see the secrets). Configuration files are stored in the VCS as templates, such as config.properties stored as config.properties.tmpl. The template files contain a placeholder for the secret, such as: my.password=##MY_PASSWORD##. On application deployment, a script is run that transforms the template file into the target file, replacing placeholders with the values of environment variables, e.g. changing ##MY_PASSWORD## to the value of $MY_PASSWORD.
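A sketch of that render step in Python (the placeholder pattern follows the answer above):

    import os
    import re

    def render_template(src, dst):
        with open(src) as f:
            text = f.read()
        # replace every ##NAME## with the NAME environment variable
        # (raises KeyError if a referenced variable is missing -- deliberately loud)
        text = re.sub(r'##(\w+)##', lambda m: os.environ[m.group(1)], text)
        with open(dst, 'w') as f:
            f.write(text)

    render_template('config.properties.tmpl', 'config.properties')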
6
140
0
I keep important settings like the hostnames and ports of development and production servers in my version control system. But I know that it's bad practice to keep secrets (like private keys and database passwords) in a VCS repository. But passwords--like any other setting--seem like they should be versioned. So what is the proper way to keep passwords version controlled? I imagine it would involve keeping the secrets in their own "secrets settings" file and having that file encrypted and version controlled. But what technologies? And how to do this properly? Is there a better way entirely to go about it? I ask the question generally, but in my specific instance I would like to store secret keys and passwords for a Django/Python site using git and github. Also, an ideal solution would do something magical when I push/pull with git--e.g., if the encrypted passwords file changes a script is run which asks for a password and decrypts it into place. EDIT: For clarity, I am asking about where to store production secrets.
How can I save my secret keys and password securely in my version control system?
0.023525
0
0
36,273
11,575,398
2012-07-20T08:11:00.000
0
1
1
0
python,django,git,version-control
11,713,674
17
false
1
0
You could use EncFS if your system provides it. You could thus keep your encrypted data as a subfolder of your repository, while providing your application a decrypted view of the data mounted alongside. As the encryption is transparent, no special operations are needed on pull or push. You would, however, need to mount the EncFS folders, which could be done by your application based on a password stored elsewhere outside the versioned folders (e.g. in environment variables).
6
140
0
I keep important settings like the hostnames and ports of development and production servers in my version control system. But I know that it's bad practice to keep secrets (like private keys and database passwords) in a VCS repository. But passwords--like any other setting--seem like they should be versioned. So what is the proper way to keep passwords version controlled? I imagine it would involve keeping the secrets in their own "secrets settings" file and having that file encrypted and version controlled. But what technologies? And how to do this properly? Is there a better way entirely to go about it? I ask the question generally, but in my specific instance I would like to store secret keys and passwords for a Django/Python site using git and github. Also, an ideal solution would do something magical when I push/pull with git--e.g., if the encrypted passwords file changes a script is run which asks for a password and decrypts it into place. EDIT: For clarity, I am asking about where to store production secrets.
How can I save my secret keys and password securely in my version control system?
0
0
0
36,273
11,576,502
2012-07-20T09:21:00.000
1
1
0
0
python,gmail,password-protection
11,576,535
3
false
0
0
Have the script request the password when it runs. Note: I wouldn't advise accepting the password as a command-line argument, as that isn't very secure; it will be logged in the command history, etc.
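The standard-library way to do that prompt is getpass, which suppresses echo; a sketch (addresses are illustrative):

    import getpass
    import smtplib

    password = getpass.getpass('Gmail password: ')  # not echoed, not in shell history

    server = smtplib.SMTP('smtp.gmail.com', 587)
    server.starttls()
    server.login('you@gmail.com', password)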
2
2
0
I have a script which logs into an SMTP server (Gmail's) to send an email notification. How can I do this without distributing the password in plain text?
Distribute a script, but protect the password
0.066568
0
0
168
11,576,502
2012-07-20T09:21:00.000
0
1
0
0
python,gmail,password-protection
11,576,594
3
false
0
0
Check whether your provider offers an SMTP server that doesn't require authentication, and use that instead.
2
2
0
I have a script which logs into an SMTP server (Gmail's) to send an email notification. How can I do this without distributing the password in plain text?
Distribute a script, but protect the password
0
0
0
168
11,579,009
2012-07-20T12:04:00.000
2
0
1
0
python,reminders
11,579,056
2
false
0
0
You could simply have the alarm thread sleep until the next event. This will necessitate waking that thread when a new event is inserted earlier than the current earliest event.
2
1
0
I want to code a reminder program in Python. I don't really have much experience, but I was thinking of a few ideas. My main approach would be a timer from the thread module that runs every second and looks through all the reminders and stopwatches to see if the time matches any of them, but this would probably consume a lot of resources, especially if one has lots of reminders. Has anyone worked with this before? What would be the best way to make this work so that it consumes as few resources as possible? Should I load the list of reminders into memory once, then look through them every second? Or should I look through them every 10 seconds and get the conditions to match for >= times? If anyone has any experience with this, I'd appreciate the help. Thanks.
Best way to code a reminder / stopwatch program in python?
0.197375
0
0
1,745
11,579,009
2012-07-20T12:04:00.000
2
0
1
0
python,reminders
11,579,097
2
true
0
0
If you keep your reminders sorted, you only really need to check the next one in the list every second. Also, you don't have to check every second - you can take longer naps, when you know nothing is going to happen for a few hours - just be sure to wake up, when a new reminder is set ;)
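A sketch of that pattern using threading.Event, so adding a reminder can wake the sleeping loop early (locking is omitted for brevity):

    import threading
    import time

    reminders = []              # sorted list of (due_epoch, message)
    wakeup = threading.Event()

    def add_reminder(due, message):
        reminders.append((due, message))
        reminders.sort()
        wakeup.set()            # the next due time may have changed: wake the loop

    def reminder_loop():
        while True:
            if not reminders:
                wakeup.wait()                  # nothing scheduled: sleep indefinitely
            else:
                delay = reminders[0][0] - time.time()
                if delay > 0:
                    wakeup.wait(delay)         # nap, but wake early on a new reminder
                else:
                    due, message = reminders.pop(0)
                    print(message)
            wakeup.clear()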
2
1
0
I want to code a reminder program in Python. I don't really have much experience, but I was thinking of a few ideas. My main approach would be a timer from the thread module that runs every second and looks through all the reminders and stopwatches to see if the time matches any of them, but this would probably consume a lot of resources, especially if one has lots of reminders. Has anyone worked with this before? What would be the best way to make this work so that it consumes as few resources as possible? Should I load the list of reminders into memory once, then look through them every second? Or should I look through them every 10 seconds and get the conditions to match for >= times? If anyone has any experience with this, I'd appreciate the help. Thanks.
Best way to code a reminder / stopwatch program in python?
1.2
0
0
1,745
11,579,232
2012-07-20T12:19:00.000
6
0
0
0
python,django,django-1.4
11,582,196
4
false
1
0
Think of it in the same way that you would use any 3rd-party app in your project. "Re-usable" doesn't mean "without dependencies". On the contrary, you'd be hard-pressed to find an app that doesn't have at least one dependency, even if it's just dependent on Django or core Python libraries. (While core Python libraries are usually thought of as "safe" dependencies, i.e. everyone will have it, things do sometimes change between versions of Python, so you're still locking your app into a specific point in time). The goal of re-usuable is the same as that of DRY: you don't want to write the same code over and over again. As a result, it makes sense to break out functionality like a picture app, because you can then use it over and over again in other apps and projects, but your picture app will have dependencies and other packages will depend on it, as long as there are no circular dependencies, you're good (a circular dependency would mean that you haven't actually separated the functionality).
2
22
0
I try my best to write reusable Django apps, and now I'm puzzled how to put them all together to get the final project. Here is an example of what I mean: I have a picture app that stores, resizes and displays images. I also have a weblog app that stores, edits and displays texts. Now I want to combine these two to show blog posts with images. To do that I could put foreign key fields in the blog to point at pictures, but then the blog could not be used without the picture app. Alternatively, I could create a third app which is responsible for connecting both. What is the 'best practice' way of doing it? EDIT: Thank you for your very good answers, but I'm still looking for a more practical example of how to solve this problem. To complete my example: sometimes it would be nice to use the blog app without the picture app, but if I hard-code the dependency that is no longer possible. So how about a third app to combine both?
How to bind multiple reusable Django apps together?
1
0
0
5,702
11,579,232
2012-07-20T12:19:00.000
5
0
0
0
python,django,django-1.4
11,579,395
4
false
1
0
This is a good question, and something I find quite difficult to manage also. But - do you imagine these applications being released publicly, or are you only using them yourself? If you're not releasing, I wouldn't worry too much about it. The other thing is, dependencies are fine to have. The pictures app in your example sounds like a good candidate to be a 'reusable' app. It's simple, does one thing, and can be used by other apps. A blog app on the other hand usually needs to consume other apps like a picture app or a tagging app. I find this dependency fine to have. You could try to abstract it a little, by simply linking to a media resource that was put there by your picture app. It's all just a little bit of common sense. Can you make your apps slim? If yes, then try to create them so they can be reused. But don't be afraid to take dependencies when they make sense. Also, try to allow extension points so you can potentially swap out dependencies for other ones. A direct foreign key isn't going to help here, but perhaps something like signals or Restful APIs can.
2
22
0
I try my best to write reusable Django apps, and now I'm puzzled how to put them all together to get the final project. Here is an example of what I mean: I have a picture app that stores, resizes and displays images. I also have a weblog app that stores, edits and displays texts. Now I want to combine these two to show blog posts with images. To do that I could put foreign key fields in the blog to point at pictures, but then the blog could not be used without the picture app. Alternatively, I could create a third app which is responsible for connecting both. What is the 'best practice' way of doing it? EDIT: Thank you for your very good answers, but I'm still looking for a more practical example of how to solve this problem. To complete my example: sometimes it would be nice to use the blog app without the picture app, but if I hard-code the dependency that is no longer possible. So how about a third app to combine both?
How to bind multiple reusable Django apps together?
0.244919
0
0
5,702
11,579,652
2012-07-20T12:44:00.000
0
0
0
1
python,mks
12,109,258
2
false
0
0
Before starting Python and using os.getcwd(), you probably ran "cd c:\your_path" in your console. It matters whether that 'c' is lower- or upper-case.
1
1
0
On Windows, if I use the MKS Toolkit shell, the os.getcwd() function returns the value in lower case; however, using Windows cmd, it returns the exact path. Is it possible in Python, by any means, for os.getcwd() to return the exact path (without converting to lower case on Windows)?
Python prevent os.getcwd() to give lower case results on windows using MKS shell
0
0
0
402
11,584,856
2012-07-20T18:18:00.000
0
0
1
0
python,numpy
11,585,354
1
false
0
0
I could be wrong, but it seems to me that your issue is having repeated occurrences, and thus doing the same conversion more times than necessary. If that interpretation is correct, the most efficient method depends on how many repeats there are. If you have 100,000 repeats out of 1.7 million, then writing 1.6 million entries to a dictionary and checking it 1.7 million times might not be more efficient, since it does 1.6 + 1.7 million reads/writes. However, if you have 1 million repeats, then returning an answer (O(1)) for those, rather than doing the conversion an extra million times, would be much faster. All in all, though, Python is quite slow at this, and you might not be able to speed it up much at all, given that you are processing 1.7 million inputs. As for numpy functions, I'm not that well versed in them either, but I believe there's some good documentation online.
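A sketch of the dictionary-cache idea (the format string is an assumption; use whatever your CSV actually contains):

    from datetime import datetime

    _cache = {}

    def parse_dt(s, fmt='%Y-%m-%d %H:%M:%S'):
        # each distinct string is parsed once, then served from the dict
        try:
            return _cache[s]
        except KeyError:
            _cache[s] = datetime.strptime(s, fmt)
            return _cache[s]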
1
1
1
I have to read a very large (1.7 million records) CSV file into a numpy record array. Two of the columns are strings that need to be converted to datetime objects. Additionally, one column needs to be the calculated difference between those datetimes. At the moment I have made a custom iterator class that builds a list of lists, and I then use np.rec.fromrecords to convert it to the array. However, I noticed that calling datetime.strptime() so many times really slows things down. I was wondering if there is a more efficient way to do these conversions. The times are accurate to the second within the span of a date. So, assuming that the times are uniformly distributed (they're not), it seems like I'm doing 20x more conversions than necessary (1.7 million / (60 x 60 x 24)). Would it be faster to store converted values in a dictionary {string date: datetime obj} and check the dictionary first, before doing unnecessary conversions? Or should I be using numpy functions (I am still new to the numpy library)?
How to efficiently convert dates in numpy record array?
0
0
0
420
11,585,494
2012-07-20T19:04:00.000
0
0
0
0
mysql,python-2.7
11,585,571
2
false
0
0
Without doing something like replicating database A onto the same server as database B and then doing the JOIN, this would not be possible.
2
0
0
Database A resides on server server1, while database B resides on server server2. Both servers {A, B} are physically close to each other, but are on different machines and have different connection parameters (different username, different password, etc). In such a case, is it possible to perform a join between a table that is in database A and a table that is in database B? If so, how do I go about it programmatically?
MySQL Joins Between Databases On Different Servers Using Python?
0
1
0
214
11,585,494
2012-07-20T19:04:00.000
0
0
0
0
mysql,python-2.7
11,585,697
2
false
0
0
I don't know Python, so I'm going to assume that when you run a query it comes back to Python as an array of rows. You could query table A, applying whatever filters you can, and return that result to the application. Do the same for table B. Create a third array, loop through A, and if there is a joining row in B, add that joined row to the third array. In the end the third array holds the equivalent of a join of the two tables. It's not going to be very efficient, but it might work okay for small recordsets.
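A sketch of that application-side join, using a dictionary for the lookup instead of a nested loop (the rows and key column are illustrative):

    # rows_a fetched from server1, rows_b from server2, via two separate connections
    rows_a = [(1, 'alpha'), (2, 'beta')]    # (id, a_value)
    rows_b = [(1, 'x'), (3, 'y')]           # (id, b_value)

    b_by_id = dict(rows_b)                  # index B once
    joined = [(a_id, a_val, b_by_id[a_id])  # inner join on id
              for a_id, a_val in rows_a
              if a_id in b_by_id]
    # -> [(1, 'alpha', 'x')]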
2
0
0
Database A resides on server server1, while database B resides on server server2. Both servers {A, B} are physically close to each other, but are on different machines and have different connection parameters (different username, different password, etc). In such a case, is it possible to perform a join between a table that is in database A and a table that is in database B? If so, how do I go about it programmatically?
MySQL Joins Between Databases On Different Servers Using Python?
0
1
0
214
11,585,886
2012-07-20T19:35:00.000
6
0
1
0
python
11,585,951
3
false
0
0
"It effectively creates a copy" does not mean it actually creates a copy. The main difference between having two copies and having two names share the same value is that, in the latter case, modifications via one name affect the value of the other name. If the value can't be mutated, this difference disappears, so for immutable objects there is little practical consequence to whether the value is copied or not. There are some corner cases where you can tell the difference between copies and distinct objects even for immutable types (e.g., by using the id function or the is operator), but these are not useful for Python's builtin immutable types (like strings and numbers).
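A quick demonstration with the is operator:

    a = 'hello'
    b = a          # no copy is made here...
    print(b is a)  # True: both names refer to the same object
    b = b + '!'    # ...and "modifying" b just rebinds it to a new object
    print(a)       # 'hello' -- a is unaffected, exactly as if b had been a copy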
1
15
0
On p.35 of "Python Essential Reference" by David Beazley, he first states: For immutable data such as strings, the interpreter aggressively shares objects between different parts of the program. However, later on the same page, he states For immutable objects such as numbers and strings, this assignment effectively creates a copy. But isn't this a contradiction? On one hand he is saying that they are shared, but then he says they are copied.
Does assigning another variable to a string make a copy or increase the reference count
1
0
0
7,377
11,587,845
2012-07-20T22:40:00.000
3
0
0
0
python,mysql,ssh,password-protection
11,588,240
1
true
0
0
In my opinion, key authentication is the best and safest option for the SSH part, and it is easy to implement. Now to the meat of your question: you want to store these keys, or passwords, in a database and still be able to use them. This requires a master password that can decrypt them from said database, which concentrates failure into a single password; not ideal. You could come up with any number of fancy schemes to encrypt and store these master passwords, but they would still be on the machine that is used to log into the other servers, and thus still a weak point. Instead of looking at this from the password-storage point of view, look at it from a server-security point of view: if someone has access to the server with the Python daemon running on it, then they can log into any other server, so this is more an environment-security issue than a password one. If you can think of a way to get rid of this single point of failure, then encrypting and storing the passwords in a remote database will be fine, as long as the key(s) used to encrypt them are secure and unavailable to anyone else, which is outside the realm of the Python/database relationship.
1
1
0
I'm working on my thesis, where a Python application that connects to other Linux servers over SSH is implemented. The question is about storing the passwords in a database (whatever kind; let's say MySQL for now). For sure, keeping them unencrypted is a bad idea. But what can I do to feel comfortable storing this kind of confidential data and using it later to connect to other servers? When I encrypt a password, I'll no longer be able to use it to log into the other machine. Is a public/private key pair the only option in this case?
Storing encrypted passwords storage for remote linux servers
1.2
1
0
454
11,589,862
2012-07-21T05:55:00.000
0
0
0
1
python,google-app-engine
11,595,724
2
false
1
0
You can use the remote shell that is in your App Engine SDK. For example: ~/bin/google_appengine/remote_api_shell.py -s your-app-identifier.appspot.com s~your-app-identifier. When you are inside the shell, you will have the db module enabled. In order to use your models, you will have to import them.
1
0
0
Is there some way I can run custom python code on my google appengine app online? Is there a python console somewhere that I can use? I've seen vague references here and there, but nothing concrete.
GAE - Online admin console
0
0
0
172
11,590,033
2012-07-21T06:37:00.000
1
0
0
0
python,openerp,identifier
11,600,778
3
false
1
0
Apply your filter and change the number of visible records to unlimited (press the field [1 to 80] of 4000 in the top right corner of your product tree view).
1
1
0
I have created a function to export the data of selected records to a CSV file. For example, I open the product tree view, where there are many filters, and select the 'service products' filter, showing 5 products; then I click on the action and then the export button in my export wizard, and these 5 products shown in the tree view are exported to a CSV file. I have more than 4000 consumable products; if I select the 'consumable products' filter, the tree view shows 80 records (the default limit). And if I click on the action and export the details, I only get those 80 record IDs, and only those records are exported. But I need to get all consumable product IDs. Is there any way to get all the IDs that come under the active filter in the model?
Get all IDs in context that come under the active filters in openerp
0.066568
0
0
753
11,590,082
2012-07-21T06:48:00.000
2
0
0
0
python,sqlite,timezone,dst
21,014,456
2
false
0
0
You can try this: I am in Taiwan, so I add 8 hours: datetime('now','+8 hours')
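For reference, a minimal sqlite3 sketch showing both the manual offset from this answer and SQLite's built-in 'localtime' modifier (which also handles daylight savings):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Manual shift, as suggested above (UTC + 8 hours, no DST handling):
print(conn.execute("SELECT datetime('now', '+8 hours')").fetchone()[0])
# Built-in modifier that applies the OS timezone, including DST:
print(conn.execute("SELECT datetime('now', 'localtime')").fetchone()[0])
```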
1
1
0
I am using python and sqlite3 to handle a website. I need all timezones to be in localtime, and I need daylight savings to be accounted for. The ideal method to do this would be to use sqlite to set a global datetime('now') to be +10 hours. If I can work out how to change sqlite's 'now' with a command, then I was going to use a cronjob to adjust it (I would happily go with an easier method if anyone has one, but cronjob isn't too hard)
sqlite timezone now
0.197375
1
0
3,357
11,590,258
2012-07-21T07:19:00.000
0
0
0
1
python,debian,subprocess,scapy
39,337,469
2
false
0
0
In the spot where you put the airodump-ng command, wrap it with a timeout: timeout Xs airodump-ng monX (X being the number of seconds, and monX your monitor interface).
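A minimal Python 2.6 sketch of launching that wrapped command silently while the script continues; the 30-second limit and the mon0 interface are illustrative values:

```python
import os
import subprocess

# Python 2.6 has no subprocess.DEVNULL, so open os.devnull by hand.
devnull = open(os.devnull, "wb")
proc = subprocess.Popen(["timeout", "30s", "airodump-ng", "mon0"],
                        stdout=devnull, stderr=devnull)
# ... the rest of the script (e.g. scapy's sniff) runs here ...
proc.wait()
devnull.close()
```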
1
0
0
Is there a way to use Python 2.6 with either subprocess.Popen() or os.system() to run two tasks? For example, the script will run "airodump-ng" first as a hidden subprocess (meaning it will not print to the terminal), and then continue running the rest of the script, which contains the "sniff" function of scapy. I have researched this but only found Windows versions and Python 3. By the way, I am running on Debian.
Python: Hide sub-process print out on the terminal and continue the script while sub is running
0
0
0
665
11,590,597
2012-07-21T08:14:00.000
4
0
0
0
python,django,view,model,constants
11,590,623
2
true
1
0
You can put it in const.py and import it in view as well as model.
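A minimal sketch; the module name const.py and the constant itself are illustrative, and the import path should match your app layout:

```python
# const.py
MAX_ITEMS = 50

# models.py and views.py alike:
from const import MAX_ITEMS
```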
2
2
0
How can I define a constant so it would be available in views.py and models.py?
How can I define a constant for model and view in Django
1.2
0
0
1,193
11,590,597
2012-07-21T08:14:00.000
0
0
0
0
python,django,view,model,constants
11,596,732
2
false
1
0
It's very likely that views.py will import models.py, so it's normally recommended to put constants, and any other code to be used from views.py, forms.py or any other module inside your app, in models.py.
2
2
0
How can I define a constant so it would be available in views.py and models.py?
How can I define a constant for model and view in Django
0
0
0
1,193
11,591,087
2012-07-21T09:41:00.000
2
0
0
1
python,macos,terminal
11,595,693
1
false
0
0
Automator is great for this. Your one-line command, as an Automator service, could be run from a keystroke. You can also make a Terminal profile that runs the command when the profile is launched. You could also use AppleScript Editor and make a (relatively simple) AppleScript that uses the 'do script' command to run your command. This gets you an .app (a clickable launcher).
1
0
0
I am writing a Mac program in python that will run in terminal, and have a start.command file to start it up: #!/bin/bash cd "$(dirname "$0")" exec python "$(dirname "$0")/xavier_root/xavier_I-nil-nil.py" I was wondering though, if there would be a way to create a .app that would start up this program instead, so that I can put it on my dock. Is there any way I can do a simple app that would start up my start.command file, or perform the actions that my start.command file is doing?
Writing a simple Mac App to start a python document
0.379949
0
0
115
11,592,433
2012-07-21T13:05:00.000
6
0
0
1
python,terminal,osx-lion,infinite-loop
11,592,469
1
true
0
0
CTRL + Z suspends the process (sends it to the background); CTRL + C is what kills it. However, I am talking about Linux here, and Mac might be different.
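On any platform, the loop itself can also catch the interrupt that CTRL + C raises; a minimal sketch of the parser's main loop (parse_logs is a hypothetical placeholder):

```python
import time

try:
    while True:           # the intentional infinite loop
        # parse_logs()    # placeholder for the real parsing work
        time.sleep(3)     # the 3-second cooldown
except KeyboardInterrupt:
    print("stopped cleanly by CTRL + C")
```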
1
1
0
I am testing a log-parser that runs an infinite loop (on purpose) with a cooldown of 3 seconds on every iteration. Eventually I will link all the data to a GUI front-end so I can call a stop to the loop when the user is done with parsing. The (small) problem now: when testing the output in the Terminal (in OSX), when I do CTRL + Z to cancel the process, my Activity Monitor keeps showing the process as active (probably because of the loop?). So the question: how can I (without extra non-native libraries, if possible) stop the whole process when pressing CTRL + Z in Terminal? When I quit the Terminal, all Python processes get killed, but I would like to know how to do it while the Terminal is still running :).
keyboard interrupt doesn't stop my interpreter
1.2
0
0
6,153
11,594,044
2012-07-21T16:47:00.000
0
0
0
0
python,wxpython
11,613,711
1
true
0
1
Not to my knowledge. Just keep track of the currently selected item's placement in the list and update it by incrementing or decrementing it.
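A minimal sketch of that bookkeeping, assuming the widget is a plain wx.ListBox (the method names below are the standard wx ones):

```python
import wx

def select_next(listbox):
    """Advance the selection by one entry, if there is one."""
    current = listbox.GetSelection()
    if current != wx.NOT_FOUND and current + 1 < listbox.GetCount():
        listbox.SetSelection(current + 1)
        return listbox.GetString(current + 1)  # e.g. the next song to play
    return None
```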
1
0
0
I am making a music player in python/wxpython. I have a listbox with all the songs, and the music player plays the selected song. I am now trying to make a "next button" to select the next item in the listbox and play it. How would I go about doing that? Is there something like GetNextString() or something along those lines?
how to select next item of wxlistbox in wxpython?
1.2
0
0
58
11,596,371
2012-07-21T22:34:00.000
2
0
1
0
python
47,405,074
5
false
0
0
Yes, it's the same behaviour in Python 3 as well.
1
20
0
Okay, I got this concept of a class that would allow other classes to import classes on an as-needed basis, versus the usual "if you use it, you must import it". How would I go about implementing it? Or does the Python interpreter already do this in a way? Does it remove classes not in use from memory, and how so? I know C++/C are very memory-oriented with pointers and all that, but is Python? And I'm not saying I have a problem with it; I, more or less, want to make a modification to it for my program's design. I want to write a large program that uses hundreds of classes and modules. But I'm afraid that if I do this I'll bog the application down, since I have no understanding of how Python handles memory management. I know it is a vague question, but if somebody would link or point me in the right direction it would be greatly appreciated.
How Does Python Memory Management Work?
0.07983
0
0
40,778
11,596,623
2012-07-21T23:30:00.000
4
0
1
0
python,string,unicode,decode,encode
11,596,674
4
false
0
0
Yes, a byte string is an octet string. Encoding and decoding happens when inputting / outputting text (from/to the console, files, the network, ...). Your console may use UTF-8 internally, your web server serves latin-1, and certain file formats need strange encodings like Bibtex's accents: fran\c{c}aise. You need to convert from/to them on input/output. The {en|de}code methods do this. They are often called behind the scenes (for example, print "hello world" encodes the string to whatever your terminal uses).
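A minimal Python 2.7 sketch of the round trip, including the decode-then-encode dance for converting between encodings:

```python
# -*- coding: utf-8 -*-
text = u"café"                      # unicode object: code points, no encoding
raw = text.encode("utf-8")          # unicode -> byte string
print(repr(raw))                    # 'caf\xc3\xa9'
print(raw.decode("utf-8") == text)  # byte string -> unicode: True

# Converting from one encoding to another = decode, then encode:
latin = raw.decode("utf-8").encode("latin-1")
print(repr(latin))                  # 'caf\xe9'
```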
1
16
0
I tried to understand encode and decode in Python by myself, but nothing is really clear to me. str.encode([encoding,[errors]]) str.decode([encoding,[errors]]) First, I don't understand the need for the "encoding" parameter in these two functions. What is the output of each function, its encoding? What is the use of the "encoding" parameter in each function? I don't really understand the definition of "bytes string". I have an important question: is there some way to pass from one encoding to another? I have read some text on ASN.1 about "octet string", so I wondered whether it was the same as "bytes string". Thanks for your help.
I don't understand encode and decode in Python (2.7.3)
0.197375
0
0
27,107
11,597,835
2012-07-22T04:49:00.000
0
0
0
0
python,mysql
11,919,262
1
false
0
0
I would imagine MySQL has some sort of 'set' logic in it, but if it's lacking, I know Python has sets, so I'll use those in my solution: Create a set with the numbers of the winning ticket: winners = {11, 22, 33, 44, 55}. For each fetched row, put all its numbers into a set too: current_user = set(row[:5]). Print out how many overlapping numbers there are: print len(winners.intersection(current_user)). And finally, for the 'big one', use an if statement. Let me know if this helps.
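Putting the pieces together as runnable Python; the winning numbers and the row tuple are illustrative stand-ins for values fetched from the MySQL table:

```python
winners = {11, 22, 33, 44, 55}   # the five drawn numbers (illustrative)
big_one = 7                      # illustrative Big One

# One fetched record: (One, Two, Three, Four, Five, BigOne)
row = (11, 23, 33, 44, 56, 7)
matches = len(winners & set(row[:5]))
if matches == 5 and row[5] == big_one:
    print("jackpot!")
elif matches:
    print("%d numbers matched" % matches)
```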
1
1
0
I am looking for a method of checking all the fields in a MySQL table. Let's say I have a MySQL table with the fields One Two Three Four Five and Big One. These are fields that contains numbers that people enter in, sort of like the Mega Millions. Users enter numbers and it inserts the numbers they picked from least to greatest. Numbers would be drawn and I need a way of checking if any of the numbers that each user picked matched the winning numbers drawn, same for the Big One. If any matched, I would have it do something specific, like if one number or all numbers matched. I hope you understand what I am saying. Thank you.
Python MySQL Number Matching
0
1
0
82
11,597,892
2012-07-22T05:08:00.000
0
1
0
0
java,python,hardware,keystroke,simulate
11,598,099
4
false
1
0
I'm not on a Windows box to test it against MouseKeys, so no guarantees that it will work, but have you tried AutoHotkey?
1
11
0
I'm looking for a language or library to allow me to simulate key strokes at the maximum level possible, without physically pressing the key. (My specific measure of the level of the keystroke is whether or not it will produce the same output as a physical Key Press when my computer is already running key listeners (such as MouseKeys and StickyKeys)). I've tried many methods of keystroke emulation; The java AWT library, Java win32api, python win32com sendKeys, python ctypes Key press, and many more libraries for python and Java, but none of them simulate the key stroke at a close enough level to actual hardware. (When Windows MouseKeys is active, sending a key stroke of a colon, semi colon or numpad ADD key just produces those characters, where as a physical press performs the MouseKeys click) I believe such methods must involve sending the strokes straight to an application, rather than passing them just to the OS. I'm coming to the idea that no library for these high (above OS code) level languages will produce anything adequate. I fear I might have to stoop to some kind of BIOS programming. Does anybody have any useful information on the matter whatsoever? How would I go about emulating key presses in lower level languages? Should I be looking for a my-hardware-specific solution (some kind of Fujitsu hardware API)? I almost feel it would be easier to program a robot to simply sit by the hardware and press the keys. Thanks!
Simulate Key Press at hardware level - Windows
0
0
0
14,658
11,599,942
2012-07-22T11:35:00.000
0
0
0
0
python,wxpython
11,603,933
1
false
0
1
Change the colours, then use a wx.Timer (wxTimerEvent) for the delay, and change them back in the timer's event handler.
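A minimal sketch using wx.CallLater (a one-shot timer wrapper), which avoids blocking the GUI the way time.sleep() does; the highlight colours are illustrative:

```python
import wx

def blink_cell(grid, row, col, msec=500):
    """Highlight one grid cell, then restore its colours after msec ms."""
    old_bg = grid.GetCellBackgroundColour(row, col)
    old_fg = grid.GetCellTextColour(row, col)
    grid.SetCellBackgroundColour(row, col, wx.YELLOW)
    grid.SetCellTextColour(row, col, wx.RED)
    grid.ForceRefresh()

    def restore():
        grid.SetCellBackgroundColour(row, col, old_bg)
        grid.SetCellTextColour(row, col, old_fg)
        grid.ForceRefresh()

    wx.CallLater(msec, restore)   # fires once, without blocking the GUI
```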
1
0
0
I am using a wx.grid in Python to display stock prices. After the value of a cell changes, I want to change the background and font color for a short period, say 0.5 seconds and then change it back to its original colors. If I do this in a straightforward way, I just change both colors, do a time.sleep(0.5) and change it back to its original colors. However, this way the update per cell takes way too long. Does anybody know of a clever way to do this?
How to let a cell blink for a configurable time in wx.grid when the value changes?
0
0
0
118
11,600,364
2012-07-22T12:38:00.000
4
1
0
0
java,android,python,sl4a
12,758,628
3
false
0
0
I have developed Android Apps on the market, coded in Python. Downsides: Thus far my users must download the interpreter as well, but they are immediately prompted to do so. (UPDATE: See comment below.) The script does not exit properly, so I include a webView page that asks them to goto:Settings:Apps:ForceClose if this issue occurs.
1
28
0
I am getting ready to start a little Android development and need to choose a language. I know Python but would have to learn Java. I'd like to know from those of you who are using Python on Android what the limitations are. Also, are there any benefits over Java?
What are the limitations of Python on Android?
0.26052
0
0
20,495
11,600,374
2012-07-22T12:39:00.000
1
1
0
1
python,unit-testing,google-app-engine
11,600,406
1
false
1
0
I have a post-deploy script for the staging environment that just runs curl on the URLs to validate that they are all there. If this script passes (among other things), I deploy from staging to production.
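The same check can live in the unit-test suite itself; a Python 2-era sketch (the host and paths are illustrative):

```python
import unittest
import urllib2

class StaticRoutesTest(unittest.TestCase):
    HOST = "http://your-app-identifier.appspot.com"  # illustrative host

    def test_static_files_are_served(self):
        for path in ("/robots.txt", "/static/style.css"):
            response = urllib2.urlopen(self.HOST + path)
            self.assertEqual(response.getcode(), 200)
```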
1
1
0
I have quite a few static files used in my Google App Engine application (CSS, robots.txt, etc.). They are all defined in app.yaml. I want to have some automated tests that check whether those definitions in app.yaml are valid and my latest changes didn't break anything. E.g. check that specific URLs return correct responses. Ideally, it should be part of my app's unit tests.
How to write tests for static routes defined in app.yaml?
0.197375
0
0
120
11,601,703
2012-07-22T15:51:00.000
0
1
0
0
python,c,repr
11,601,779
3
false
0
0
The simplest way would be printf()/sprintf() with the %x and %X format specifiers.
1
5
0
For the most part I work in Python, and as such I have developed a great appreciation for the repr() function, which when passed a string of arbitrary bytes will print it out in a human-readable hex format. Recently I have been doing some work in C and I am starting to miss the Python repr function. I have been searching the internet for something similar to it, preferably something like void buffrepr(const char * buff, const int size, char * result, const int resultSize) but I have had no luck. Is anyone aware of a simple way to do this?
Python style repr for char * buffer in c?
0
0
0
1,147
11,601,907
2012-07-22T16:15:00.000
1
0
0
0
python,django,pinax
14,173,262
2
false
1
0
Profiles are meant to be used for public data, or data you'd share with other people, and are also more descriptive in nature. Account data are more like settings for your account that drive certain behavior (language or timezone settings), that are private to you, and that control how various aspects of the site (or other apps) function.
1
2
0
What's the difference between pinax.apps.accounts and the idios profiles app that were installed with the profiles base project? As I understand it, the contrib.auth should be just for authentication purpose (i.e. username and password), and the existence of User.names and User.email in the auth model is historical and those fields shouldn't be used; but the distinction between accounts and profiles are lost to me. Why is there pinax.apps.account and idios?
Difference between pinax.apps.accounts, idios profiles, and django.auth.User
0.099668
0
0
674
11,602,817
2012-07-22T18:18:00.000
0
1
0
0
python,module,sage
11,603,378
1
true
0
0
Okay, here's what's going on: You can only import files ending with .py (ignoring .py[co]) These are standard Python files and aren't preparsed, so 1/3 == int(0), not QQ(1)/QQ(3), and you don't have the equivalent of a from sage.all import * to play with. You can load and attach both .py and .sage files (as well as .pyx and .spyx and .m). Both have access to Sage definitions but the .py files aren't preparsed (so y=17 makes y a Python int) while the .sage files are (so y=17 makes y a Sage Integer). So import jbom here works just like it would in Python, and you don't get the access to what Sage has put in scope. load etc. are handy but they don't scale up to larger programs so well. I've proposed improving this in the past and making .sage scripts less second-class citizens, but there hasn't yet been the mix of agreement on what to do and energy to do it. In the meantime your best bet is to import from sage.all.
1
0
0
I have Sage 4.7.1 installed and have run into an odd problem. Many of my older scripts that use functions like deepcopy() and uniq() no longer recognize them as global names. I have been able to fix this by importing the python modules one by one, but this is quite tedious. But when I start the command-line Sage interface, I can type "list2=deepcopy(list1)" without importing the copy module, and this works fine. How is it possible that the command line Sage can recognize global name 'deepcopy' but if I load my script that uses the same name it doesn't recognize it? oops, sorry, not familiar with stackoverflow yet. I type: 'sage_4.7.1/sage" to start the command line interface; then, I type "load jbom.py" to load up all the functions I defined in a python script. When I use one of the functions from the script, it runs for a few seconds (complex function) then hits a spot where I use some function that Sage normally has as a global name (deepcopy, uniq, etc) but for some reason the script I loaded does not know what the function is. And to reiterate, my script jbom.py used to work the last time I was working on this particular research, just as I described. It also makes no difference if I use 'load jbom.py' or 'import jbom'. Both methods get the functions I defined in my script (but I have to use jbom. in the second case) and both get the same error about 'deepcopy' not being a global name. REPLY TO DSM: I have been sloppy about describing the problem, for which I am sorry. I have created a new script 'experiment.py' that has "import jbom" as its first line. Executing the function in experiment.py recognizes the functions in jbom.py but deepcopy is not recognized. I tried loading jbom.py as "load jbom.py" and I can use the functions just like I did months ago. So, is this all just a problem of layering scripts without proper usage of import/load etc? SOLVED: I added "from sage.all import *" to the beginning of jbom.py and now I can load experiment.py and execute the functions calling jbom.py functions without any problems. From the Sage doc on import/load I can't really tell what I was doing wrong exactly.
python modules missing in sage
1.2
0
0
784
11,603,136
2012-07-22T18:59:00.000
2
0
0
0
python,mysql,hash,passwords
11,603,255
2
false
0
0
You can use MySQL's ENCODE(), DES_ENCRYPT() or AES_ENCRYPT() functions, and store the keys used to encrypt in a secure location.
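A minimal sketch of calling AES_ENCRYPT()/AES_DECRYPT() from Python via MySQLdb; the table, column, connection parameters and key shown are illustrative, and the real key must come from somewhere safer than the same database:

```python
import MySQLdb  # assumed DB driver; any DB-API module works the same way

conn = MySQLdb.connect(user="app", passwd="...", db="vault")
cur = conn.cursor()
key = "key-from-secure-storage"   # placeholder; load it from a safe location

cur.execute("INSERT INTO creds (name, pw) VALUES (%s, AES_ENCRYPT(%s, %s))",
            ("ftp1", "plain-password", key))
cur.execute("SELECT AES_DECRYPT(pw, %s) FROM creds WHERE name = %s",
            (key, "ftp1"))
print(cur.fetchone()[0])          # the recovered plain-text password
```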
1
0
0
I'm making a company back-end that should include a password-safe type feature. Obviously the passwords needs to be plain text so the users can read them, or at least "reversible" to plain text somehow, so I can't use hashes. Is there anything more secure I can do than just placing the passwords in plain-text into the database? Note: These are (mostly) auto-generated passwords that is never re-used for anything except the purpose they are saved for, which is mostly FTP server credentials.
What security measures can I take to secure passwords that can't be hashed in a database?
0.197375
1
0
105
11,604,256
2012-07-22T21:52:00.000
-1
0
1
1
python,windows,linux,directory
26,087,266
3
false
0
0
If you use os.path.join to build the directory string, the path separators will be written correctly for the environment you are in. Then you can use os.path.isdir, as suggested above, to verify that the string points to an existing directory.
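A minimal sketch of both checks; the set of rejected characters is an assumption (exactly which characters to forbid is a policy decision based on Windows' rules):

```python
import os

WINDOWS_ILLEGAL = set('<>:"|?*')   # characters Windows rejects in names

def is_portable_path(path):
    """True if no path component contains a Windows-illegal character."""
    return not any(ch in WINDOWS_ILLEGAL
                   for ch in path.replace(os.sep, ""))

print(is_portable_path(os.path.join("Folder1", "File.txt")))     # True
print(is_portable_path(os.path.join("Folder1", "This:Error")))   # False
print(os.path.isdir("Folder1"))   # True only if the directory already exists
```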
1
3
0
I have a program in Python 2.7 that writes certain files and directories based on user input. I need to make sure that the files and directories are valid for both Linux and Windows, as the files will be exchanged between the two operating systems. The files will originally be created in Linux and manually moved to Windows. I have checked the Python docs, Stack Exchange, and several pages of Google without turning up any usable information, which is strange because I would imagine this is a fairly common issue. Is there an easy solution? Edit: I would like to validate the directory/filename in case a user enters a path that does not work for Linux or Windows. As an example, if a user enters "Folder1/This:Error/File.txt" the program should see this as an error. The program will run in Linux and write the files in Linux, but later the files will be moved to Windows. The difference in forward/back-slashes is a non-issue, but other characters that may work for Linux but not Windows would present a problem. Also, often the files or directories will not be present yet (as they are about to be created), so I need to check that the path, held in a string, would be a valid path.
check to see if a directory is valid in Python
-0.066568
0
0
4,550
11,604,548
2012-07-22T22:38:00.000
0
0
0
1
python,blender
12,752,197
5
false
0
0
It is likely that drawcar.py is trying to perform PyOpenGL commands inside Blender, and that won't work without modification. I suspect you are getting some import errors too (if you look at the command console). Blender has its own internal Python wrapper for OpenGL called bgl, which does include a lot of the OpenGL standards, but all prefixed by bgl. If you have a link to drawcar.py I can have a look at it and tell you what's going on.
1
40
0
I installed Blender 2.6 and I'm trying to run a script called drawcar.py (Which uses PyOpenGL) I looked around the documentation for importing a script and could only access Blender's python console. How do I run drawcar.py from the Linux terminal with Blender?
Running python script in Blender
0
0
0
87,536
11,604,888
2012-07-22T23:39:00.000
4
1
1
0
python,unit-testing,sockets,fixtures
11,604,901
5
false
0
0
I would do it like this: Make all of your tests derive from your own TestCase class, let's call it SynapticTestCase. In SynapticTestCase.setUp(), examine an environment variable to determine whether to set the socket timeout or not. Run your entire test suite twice, once with the environment variable set one way, then again with it set the other way. Write a small shell script to invoke the test suite both ways.
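A minimal sketch of steps 1 and 2; the environment variable name is an illustrative assumption:

```python
import os
import socket
import unittest

class SynapticTestCase(unittest.TestCase):
    def setUp(self):
        # Run the suite twice: once with SYNAPTIC_TIMEOUT=1 set in the
        # shell, once without it.
        if os.environ.get("SYNAPTIC_TIMEOUT"):
            socket.setdefaulttimeout(60)
        else:
            socket.setdefaulttimeout(None)
```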
1
13
0
I'm working on a module using sockets, with hundreds of test cases. Which is nice. Except now I need to test all of the cases with and without socket.setdefaulttimeout(60)... Please don't tell me to cut and paste all the tests and set/remove a default timeout in setup/teardown. Honestly, I get that having each test case laid out on its own is good practice, but I also don't like to repeat myself. This is really just testing in a different context, not different tests. I see that unittest supports module-level setup/teardown fixtures, but it isn't obvious to me how to convert my one test module into testing itself twice with two different setups. Any help would be much appreciated.
python unittests with multiple setups?
0.158649
0
0
5,608
11,606,873
2012-07-23T05:24:00.000
1
0
0
0
java,c++,python,user-interface
11,606,887
2
false
0
1
Take a look at MonoDevelop. Allows you to develop in C# and deploy to both Linux and Windows.
1
0
0
I need to develop a GUI that works on both Windows and Linux, for a back-end that's written in C++ and runs on Linux only. I'm aware that I would need to use a wrapper (like Swig) if I opt for Java or Python but then again they're known to produce better GUIs than C++. What would you suggest as the better option (for a newbie)?
Better language for developing GUI?
0.099668
0
0
259
11,607,387
2012-07-23T06:24:00.000
9
0
0
0
python,c,api,pandas
11,610,785
3
true
0
0
All the pandas classes (TimeSeries, DataFrame, DatetimeIndex etc.) have pure-Python definitions so there isn't a C API. You might be best off passing numpy ndarrays from C to your Python code and letting your Python code construct pandas objects from them. If necessary you could use PyObject_CallFunction etc. to call the pandas constructors, but you'd have to take care of accessing the names from module imports and checking for errors.
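A minimal sketch of the hand-over's Python side, constructing the pandas objects from a plain ndarray (the shape, dates and column names are illustrative):

```python
import numpy as np
import pandas as pd

values = np.random.randn(5, 2)           # stands in for data filled by C
index = pd.date_range("2012-07-23", periods=5, freq="D")
frame = pd.DataFrame(values, index=index, columns=["price", "volume"])
print(frame)
```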
1
20
1
I'm extracting mass data from a legacy backend system using C/C++ and moving it to Python using distutils. After obtaining the data in Python, I put it into a pandas DataFrame object for data analysis. Now I want to go faster and would like to avoid the second step. Is there a C/C++ API for pandas to create a DataFrame in C/C++, add my C/C++ data, and pass it to Python? I'm thinking of something similar to the numpy C API. I already thought of creating numpy array objects in C as a workaround, but I'm heavily using time-series data and would love to have the TimeSeries and date_range objects as well.
Is there a C/C++ API for python pandas?
1.2
0
0
28,186
11,607,571
2012-07-23T06:42:00.000
1
0
0
1
linux,wxpython,fedora,gnome-3,gdi
11,617,941
1
false
0
1
I think the recommended way to paint is with the newer wx.GCDC with a fallback to the wx.PaintDC. If you need advanced drawing, see FloatCanvas, cairo or the glcanvas.
1
0
0
I am writing an app which draws text and shapes in a ClientDC of a Frame. When I run the app under my Fedora 16(Gnome 3) nothing is drawn in the Frame, but if I run it under Windows all drawings display normally. I've tried using WindowDC to do the drawing on, but it is not different to ClientDC under Fedora. I can only get a successful drawing when using PaintDC. Am I doing something wrong(or missing something), or is it just Linux/Fedora/Gnome 3?
ClientDC and WindowDC do not draw under Fedora 16 Gnome 3, only PaintDC
0.197375
0
0
76
11,609,027
2012-07-23T08:38:00.000
2
0
1
0
python
11,609,213
3
false
0
0
This is certainly subjective, but I would try to observe the principle of least surprise. If the values you return describe the characteristics of an object (like point.x and point.y in your example), then I would use a class. If they are not part of the same object, (let's say return min, max) then they should be a tuple.
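collections.namedtuple is a handy middle ground here; a minimal sketch:

```python
from collections import namedtuple

bounds = (0, 100)             # plain tuple: fine for incidental (min, max)

Point = namedtuple("Point", ["x", "y"])
p = Point(3, 4)
print(p.x)                    # 3: expressive access without a full class
print(p[0])                   # 3: still indexable like a plain tuple
```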
1
5
0
I like to do some silly stuff with Python, like solving programming puzzles, writing small scripts, etc. Each time, at a certain point, I face a dilemma: whether I should create a new class to represent my data, or just go quick and dirty with all values packed in a list or tuple. Due to extreme laziness and a personal dislike of the self keyword, I usually go with the second option. I understand that in the long run a user-defined data type is better, because path.min_cost and point.x, point.y are much more expressive than path[2] and point[0], point[1]. But when I just need to return multiple things from a function it strikes me as too much work. So my question is: what is a good rule of thumb for choosing when to create a user-defined data type and when to go with a list or tuple? Or maybe there is a neat pythonic way I'm not aware of? Thanks.
How to decide when to introduce a new type instead of using list or tuple?
0.132549
0
0
119
11,609,339
2012-07-23T08:59:00.000
3
0
0
0
python,django,virtualenv
11,609,403
1
true
1
0
I'd recommend always using virtualenv, because it makes your environment more reproducible -- you can version your dependencies alongside your application, you're not tied to the versions of the python packages in your system repository, and if you need to replicate your environment elsewhere, you can do that even if you're not running exactly the same OS underneath.
1
1
0
If we just want to host a single Django application on a VPS or some cloud instance, is it still beneficial to use virtualenv? Or would it be overkill, and better to use the global Python setup instead, as only one Django application, say Project X, will be hosted on that server? Does virtualenv provide any major benefits for a single-application setup in a production environment that I might not be aware of? e.g. Django upgrades, cron scripts, etc.
Is virtualenv recommended even for single django based application on a server?
1.2
0
0
91
11,609,943
2012-07-23T09:37:00.000
0
0
0
0
python,django,forms,model
11,610,267
2
false
1
0
Generally, models represent business entities which may be stored in some persistent storage (usually a relational DB). Forms are used to render HTML forms which may retrieve data from users. Django supports creating forms on the basis of models (using the ModelForm class). Forms may be used to fetch data which should be saved in persistent storage, but that's not the only case: one may use forms just to get data to be searched in persistent storage or passed to an external service, to feed application counters, to test web browser engines, to render some text on the basis of data entered by the user (e.g. "Hello USERNAME"), to log the user in, etc. Calling save() on a model instance should guarantee that data is saved in persistent storage if and only if the data is valid; that provides a consistent mechanism for validating data before saving, regardless of whether the business entity is saved after a user clicks a "Save me" button on a web page or a user executes the save() method of a model instance in the Django interactive shell.
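A minimal sketch of cross-field validation on a model; the model and its fields are illustrative, and note that clean() is invoked by full_clean(), which the admin runs for you:

```python
from django.core.exceptions import ValidationError
from django.db import models

class Booking(models.Model):        # illustrative model
    start = models.DateField()
    end = models.DateField()

    def clean(self):
        # Cross-field check, run as part of full_clean().
        if self.start > self.end:
            raise ValidationError("start must not be after end")
```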
1
2
0
I'm currently working on a model that has already been built, and I need to add some validation management (accessing two fields and checking data, nothing too dramatic). I was wondering about the exact difference between models and forms from a validation point of view, and whether I would be able to just write a clean method raising errors, as in a form view, in a model view? For extra knowledge: why are those two things separated? And finally, what would you do? There are already some methods written for the model and I don't know yet whether I would rewrite it to morph it into a form and simply add the clean() method, plus I don't exactly know how they work. Oh, and everything is in the admin interface; I haven't worked on it much yet since I started Django not so long ago. Thanks in advance,
Difference between Model and Form validation
0
0
0
1,664
11,611,002
2012-07-23T10:45:00.000
0
0
0
1
python,bash
11,611,070
3
false
0
0
In a Bash-script, you could redirect the output to a file, and if the length of the file is zero then there was no output.
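The Python equivalent is a short subprocess check; "program_a" is a placeholder for the real command:

```python
import subprocess

proc = subprocess.Popen(["program_a"], stdout=subprocess.PIPE)
out, _ = proc.communicate()
if not out:
    print("no output: run the fallback action here")
```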
1
0
0
There is an external program A. I want to write a script that does some action if the called external program A does not produce any output (stdout). How is this possible in bash or Python?
python or bash script that does something when there is no response(output)
0
0
0
124
11,611,351
2012-07-23T11:10:00.000
0
1
1
0
python,module,daemon
11,614,012
1
true
0
0
You could perhaps consider creating a pool of Python daemon processes. Their purpose would be to serve one request each and to die afterwards. You would have to write a pool manager that ensures there are always X daemon processes waiting for an incoming request (X being the number of waiting daemon processes, depending on the required workload). The pool manager would have to observe the pool of daemon processes and start new instances every time a process finishes.
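multiprocessing.Pool can provide most of that manager for free; a minimal sketch, where handle_request is a placeholder for running the user-submitted code:

```python
import multiprocessing

def handle_request(payload):
    # The heavy imports/user code run inside the child process.
    return len(payload)            # placeholder result

if __name__ == "__main__":
    # maxtasksperchild=1: each worker dies after one task, so whatever the
    # user code loaded is reclaimed by the OS rather than Python's garbage
    # collector, and the pool transparently spawns a fresh replacement.
    pool = multiprocessing.Pool(processes=4, maxtasksperchild=1)
    print(pool.map(handle_request, ["abc", "defg"]))
    pool.close()
    pool.join()
```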
1
0
0
I am working on a web service that requires user input python code to be executed on my server (we have checks for code injection). I have to import a rather large module so I would like to make sure that I am not starting up python and importing the module from scratch each time something runs (it takes about 4-6s). To do this I was planning to create a python (3.2) deamon that imports the user input code as a module, executes it and then delete/garbage collect that module. I need to make sure that that module is completely gone from RAM since this process will continue until the server is restarted. I have read a bunch of things that say this is a very difficult thing to do in python. What is the best way to do this? Would it be better to use exec to define a function with the user input code (for variable scoping) and then execute that function and somehow remove the function? Or is there a better way to do this process that I have missed?
User Input Python Script Executing Daemon
1.2
0
0
477
11,612,663
2012-07-23T12:35:00.000
0
1
0
0
python,dll,ctypes,foxpro,visual-foxpro
11,613,687
2
false
0
0
Try removing the parentheses from the call to the function. Change print link.get_input_out("someinput") to print link.get_input_out "someinput".
1
1
0
I am trying to use the python ctypes library to access various functions in a COM DLL created in Visual Fox Pro (from a .prg file). Here is an example in fox pro (simplified from the actual code) DEFINE CLASS Testing AS CUSTOM OLEPUBLIC PROCEDURE INIT ON ERROR SET CONSOLE OFF SET NOTIFY OFF SET SAFETY OFF SET TALK OFF SET NOTIFY OFF ENDPROC FUNCTION get_input_out(input AS STRING) AS STRING output = input RETURN output ENDFUNC ENDDEFINE In python i am doing something along the lines of: import ctypes link = ctypes.WinDLL("path\to\com.dll") print link.get_input_out("someinput") The dll registers fine and is loaded but i just get the following when I try to call the function. AttributeError: function 'get_input_out' not found I can verfiy the dll does work as i was able to access the functions with a php script using the COM libary. I would really like to get this working in python but so far my attempts have all been in vain, will ctypes even work with VFP? Any advice would be appreciated.
Using Python ctypes to access a Visual Foxpro COM DLL
0
0
0
740
11,615,449
2012-07-23T15:17:00.000
0
1
1
0
python,linux,multithreading,load
11,617,105
3
false
0
0
If you find this irritating, set your preferences (specifically, the preferences of your System Monitor or equivalent tool) to enable "Solaris Mode," which calculates CPU% as a proportion of total processing power, not the proportion of a single core's processing power.
3
3
0
I am currently doing some I/O-intensive load testing using Python. All my program does is send HTTP requests as fast as possible to my target server. To manage this, I use up to 20 threads, as I'm essentially bound by I/O and remote-server limitations. According to 'top', CPython uses a peak of 130% CPU on my dual-core computer. How is that possible? I thought the GIL prevented this? Or is it the way Linux 'counts' the resources consumed by each application?
Python interpreter uses up to 130% of my CPU. How is that possible?
0
0
0
1,861
11,615,449
2012-07-23T15:17:00.000
1
1
1
0
python,linux,multithreading,load
11,615,563
3
false
0
0
That is possible in situations where a C-extension library call releases the GIL and does some further processing in the background.
3
3
0
I am currently doing some I/O-intensive load testing using Python. All my program does is send HTTP requests as fast as possible to my target server. To manage this, I use up to 20 threads, as I'm essentially bound by I/O and remote-server limitations. According to 'top', CPython uses a peak of 130% CPU on my dual-core computer. How is that possible? I thought the GIL prevented this? Or is it the way Linux 'counts' the resources consumed by each application?
Python interpreter uses up to 130% of my CPU. How is that possible?
0.066568
0
0
1,861
11,615,449
2012-07-23T15:17:00.000
15
1
1
0
python,linux,multithreading,load
11,615,490
3
true
0
0
100 percent in top refers to a single core. On a dual-core machine, you have up to 200 percent available. A single-threaded process can only make use of a single core, so it is limited to 100 percent. Since your process has several threads, nothing is stopping it from making use of both cores. The GIL only prevents pure-Python code from being executed concurrently. Many library calls (including most I/O stuff) release the GIL, so no problem here either. Contrary to much of the FUD on the internet, the GIL rarely reduces real-world performance, and if it does, there are usually better solutions to the problem than using threads.
3
3
0
I am currently doing some I/O-intensive load testing using Python. All my program does is send HTTP requests as fast as possible to my target server. To manage this, I use up to 20 threads, as I'm essentially bound by I/O and remote-server limitations. According to 'top', CPython uses a peak of 130% CPU on my dual-core computer. How is that possible? I thought the GIL prevented this? Or is it the way Linux 'counts' the resources consumed by each application?
Python interpreter uses up to 130% of my CPU. How is that possible?
1.2
0
0
1,861
11,616,994
2012-07-23T16:56:00.000
0
0
0
0
python,django,sorl-thumbnail
11,617,059
1
true
1
0
That must have been a very old version of sorl-thumbnail you were using. I'm not sure what that field actually did for you, but now all you have is ImageField. It serves to automatically update the thumbnail registry if you change or delete it and adds a thumbnail of the assigned image next to the form field in the admin. If you're missing any other functionality, it would be best to actually ask what you can do about that particular functionality specifically.
1
0
0
I'm struggling with ImageWithThumbnailsField which seems deprecated. What should I use instead? I don't want to rewrite large parts of my project as I'm doing only slight bugfixing and updating... Error I'm getting: File "/var/www/project/images/models.py", line 6, in from sorl.thumbnail.fields import ImageWithThumbnailsField ImportError: cannot import name ImageWithThumbnailsField
What should I use instead of deprecated ImageWithThumbnailsField of django-sorl?
1.2
0
0
579
11,621,991
2012-07-23T23:26:00.000
2
0
1
0
python,console
16,800,484
5
false
0
0
Perhaps you should look into the multiprocessing or multi-threading modules. These help you spawn off child processes from your original (parent) program.
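A minimal sketch of one child process sending a variable back through a queue; worker() is a placeholder for whatever the second console should run:

```python
import multiprocessing

def worker(queue):
    # ... long-running work happens here ...
    queue.put("result-value")            # send the data back

if __name__ == "__main__":
    queue = multiprocessing.Queue()
    proc = multiprocessing.Process(target=worker, args=(queue,))
    proc.start()
    # ... the original process keeps going in the meantime ...
    result = queue.get()                 # blocks until the child delivers
    proc.join()
    print(result)
```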
1
1
0
Is there a way to open a second Python console and let it run while the original console keeps going, and, when the new console finishes, have it send its data back to the original console in the form of a variable?
Open second Python console
0.07983
0
0
2,034
11,622,652
2012-07-24T00:50:00.000
1
0
0
0
python,pandas,sas
42,165,467
6
false
0
0
You can use PyTables rather than a pandas DataFrame. It is designed for large data sets and its file format is HDF5, so the processing time is relatively fast.
1
96
1
I am exploring switching to python and pandas as a long-time SAS user. However, when running some tests today, I was surprised that python ran out of memory when trying to pandas.read_csv() a 128mb csv file. It had about 200,000 rows and 200 columns of mostly numeric data. With SAS, I can import a csv file into a SAS dataset and it can be as large as my hard drive. Is there something analogous in pandas? I regularly work with large files and do not have access to a distributed computing network.
Large, persistent DataFrame in pandas
0.033321
0
0
76,790
11,625,450
2012-07-24T06:41:00.000
7
1
1
0
c++,python,math
11,625,468
7
true
0
0
I guess it is safe to say that C++ is faster, simply because it is a compiled language, which means that only your code is running, not an interpreter as with Python. It is possible to write very fast code with Python and very slow code with C++, though. So you have to program wisely in any language! Another advantage is that C++ is type-safe, which will help you to program what you actually want. A disadvantage in some situations is that C++ is type-safe, which will result in a design overhead. You have to think (maybe long and hard) about function and class interfaces, for instance. I like Python for many reasons, so don't misunderstand this as a plea against Python.
4
2
0
I'm debating whether to use C++ or Python for a largely math-based program. Both have great math libraries, but which language is generally faster for complex math?
C++ or Python for an Extensive Math Program?
1.2
0
0
3,134
11,625,450
2012-07-24T06:41:00.000
8
1
1
0
c++,python,math
11,625,521
7
false
0
0
You could also consider a hybrid approach. Python is generally easier and faster to develop in, especially for things like user interface, input/output, etc. C++ should certainly be faster for some math operations (although if your problem can be formulated in terms of vector operations or linear algebra, then numpy provides a Python interface to very efficient vector manipulations). Python is easy to extend with Cython, SWIG, Boost.Python, etc., so one strategy is to write all the bookkeeping-type parts of the program in Python and just do the computational code in C++.
4
2
0
I'm debating whether to use C++ or Python for a largely math-based program. Both have great math libraries, but which language is generally faster for complex math?
C++ or Python for an Extensive Math Program?
1
0
0
3,134
11,625,450
2012-07-24T06:41:00.000
4
1
1
0
c++,python,math
11,625,696
7
false
0
0
It all depends on whether "faster" means "faster to execute" or "faster to develop". Overall, Python will be quicker for development, C++ faster for execution. For working with integers (arithmetic), Python has arbitrary-precision integers and a lot of external tools (numpy, pylab...). My advice would be to go with Python first; if you hit a performance issue, then switch to C++ (or call external libraries written in C++ from Python, in a hybrid approach). There is no single good answer; it all depends on what you want to do in terms of research / calculation.
4
2
0
I'm debating whether to use C++ or Python for a largely math-based program. Both have great math libraries, but which language is generally faster for complex math?
C++ or Python for an Extensive Math Program?
0.113791
0
0
3,134
11,625,450
2012-07-24T06:41:00.000
0
1
1
0
c++,python,math
11,630,046
7
false
0
0
I sincerely doubt that Google and Stanford don't know C++. "Generally faster" is about more than just the language. Algorithms can make or break a solution, regardless of what language it's written in. A poor choice written in C++ can be beaten by Java or Python if either makes a better algorithm choice. For example, an in-memory, single-CPU linear algebra library will have its doors blown in by a properly parallelized version. An implicit algorithm may actually be slower than an explicit one, despite time-step stability restrictions, because the latter doesn't have to invert a matrix. This is often true for hyperbolic partial differential equations. You shouldn't care about "generally faster". You ought to look deeply into the problem you're trying to solve and the algorithms used to solve it. You'll do better that way than with a blind language choice.
4
2
0
I'm debating whether to use C++ or Python for a largely math-based program. Both have great math libraries, but which language is generally faster for complex math?
C++ or Python for an Extensive Math Program?
0
0
0
3,134
11,627,846
2012-07-24T09:23:00.000
1
0
1
1
python,registry,cmd
11,628,345
5
false
0
0
Either right-click your script and clear the Program -> Close on exit checkbox in its properties, or use cmd /k as part of its calling line. Think twice before introducing artificial delays or a need to press a key; this will make your script mostly unusable in any unattended/pipe calls.
2
6
0
I have a Python script which I have made droppable using a registry key, but it does not seem to work. The cmd.exe window just flashes by; can I somehow make the window stay up, or save the output? EDIT: the problem was that it gave the whole path, not only the filename.
Keep cmd.exe open
0.039979
0
0
5,043
11,627,846
2012-07-24T09:23:00.000
1
0
1
1
python,registry,cmd
11,628,145
5
false
0
0
Another possible option is to create a basic TKinter GUI with a textarea and a close button. Then run that with subprocess or equiv. and have that take the stdout from your python script executed with pythonw.exe so that no CMD prompt appears to start with. This keeps it purely using Python stdlib's and also means you could use the GUI for setting options or entering parameters... Just an idea.
2
6
0
I have a Python script which I have made droppable using a registry key, but it does not seem to work. The cmd.exe window just flashes by; can I somehow make the window stay up, or save the output? EDIT: the problem was that it gave the whole path, not only the filename.
Keep cmd.exe open
0.039979
0
0
5,043
11,628,424
2012-07-24T09:58:00.000
0
0
1
0
python,json,pickle
11,628,580
1
true
0
0
Well, I'm sure you've read the documentation for the above in Python, but I think you may be mistaken about their use. For file opening, as you ask, open() is quite good enough, used within a with statement. json -> to serialize strings, lists and dicts to and from JSON text with its dumps and loads methods. pickle -> to serialize arbitrary Python objects. tempfile -> to create temporary files / directories. But as I said, if no special treatment is required, the best way to dump data to files is simply the open/read/write calls. Regards.
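A minimal side-by-side sketch of the four; the file names and data are illustrative:

```python
import json
import pickle
import tempfile

with open("notes.txt", "w") as fh:            # plain text I/O
    fh.write("hello")

with open("data.json", "w") as fh:            # human-readable, interoperable
    json.dump({"a": 1, "b": [2, 3]}, fh)

with open("state.pkl", "wb") as fh:           # arbitrary Python objects
    pickle.dump({"a": 1, "when": (2012, 7)}, fh)

with tempfile.NamedTemporaryFile(delete=False) as fh:   # scratch file
    fh.write(b"throwaway")
```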
1
0
0
I know Python has many ways of storing and reading data from files, so I would like to know what the best practices are for the above 4, and when each is most applicable. Also, please let me know if there are any other modules/functions I should be aware of.
When to use tempfile, pickle, json or open()
1.2
0
0
1,137
11,628,835
2012-07-24T10:24:00.000
0
0
0
0
wxpython
64,290,241
2
false
0
1
Use FreezeTo(row, col) on the grid: pass 0 to unfreeze; otherwise, any other number freezes that many leading rows or columns.
1
2
0
I have a window that is split to show a custom tree ctrl in the left panel and a grid table on the right. Now I want the grid table to display the information in such a way that the first few columns freeze (do not scroll) while the remaining data can scroll horizontally. I have no clue how to proceed; please help. I have seen that a splitter window can be of some help; can someone provide sample code?
Freeze few column in wxgrid
0
0
0
751
11,629,077
2012-07-24T10:38:00.000
0
0
0
0
php,python,search,web-scraping
11,629,117
1
false
1
0
You can check the HTTP referrer and see what values have been put in the GET variables, but this is restricted to GET variables ONLY!
1
2
0
Is there a way to capture search data from other websites? For example, if a user visits any website with a search field, I am interested in what that user types into that search field to get to the desired blogpost/webpage/product. I want to know if this is possible by scraping the site, or by any other means. Also, is it illegal to perform a scraping operation to record such data on a third party website? Also, if this is possible using PHP and Python?
Capturing search data from other websites
0
0
1
220
11,629,204
2012-07-24T10:46:00.000
2
0
0
0
python,plone,ploneformgen
11,633,491
1
true
1
0
To be able to do this, you need to create 2 content rules, each with different conditions and destination folders to move the content to (using the "move to folder" action). In each content rule, add a TALES expression condition. Then do something like: python: request.form.get('value-to-check', False) == 'foobar'. Obviously, you'll need to customize the expression a bit.
1
2
0
I created a form using PloneFormGen, e.g. Product code: Gear-12, Value: 2000. Based on the input value, i.e. if Value is > 5000, move the saved-data content entry to folder 1; else move the entry to folder 2. How can a TALES expression be used to trigger this kind of content rule? I am using Plone version 4.1.
Add content rule based on the form field value input data in Plone?
1.2
0
0
340
11,629,734
2012-07-24T11:18:00.000
3
0
1
0
python,vim,vi
11,629,781
5
false
0
0
Try hitting V for visual line mode, select the area you want to indent, and hit >. Other motions besides V are good, too.
2
5
0
I need to enclose a block of code with a for loop. As this is Python I need to take care of indenting and increment the number of tabs by one. Any easy way to do this in Vim?
Reindenting Python code in Vim
0.119427
0
0
441
11,629,734
2012-07-24T11:18:00.000
0
0
1
0
python,vim,vi
11,654,765
5
false
0
0
The fastest way you can try is v i p > from inside the block of code you want to indent. That wraps Visual mode Inside Paragraph, and > indents the selected code.
2
5
0
I need to enclose a block of code with a for loop. As this is Python I need to take care of indenting and increment the number of tabs by one. Any easy way to do this in Vim?
Reindenting Python code in Vim
0
0
0
441
11,634,626
2012-07-24T15:50:00.000
1
0
0
1
python,shell,terminal,blender
11,698,801
3
false
0
0
It seems that you should run it on a new controlling terminal, allocated with "forkpty". To suppress the warning "no job control ...", you need to call "setsid".
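Python wraps that dance in pty.fork(), which allocates the new controlling terminal and performs the setsid step for you; a minimal sketch (a real program would need a proper read loop rather than one os.read):

```python
import os
import pty

pid, master_fd = pty.fork()
if pid == 0:                          # child: attached to the new terminal
    os.execvp("bash", ["bash", "-i"])
else:                                 # parent: drive the shell via the fd
    os.write(master_fd, b"ls /\n")
    print(os.read(master_fd, 4096))   # crude: one read may not catch it all
```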
1
4
0
I've been trying to write a basic terminal-emulation script, because for some reason I've got no terminal access on my Mac. But for writing game-engine scripts in Blender, the console, which usually opens in the terminal you started Blender with, is crucial. For just doing simple things like deleting, renaming, etc., I used to execute commands using stream = os.popen(command) and then print (stream.read()). That works fine for most things, but not for anything interactive. Recently I've discovered a new way: sp = subprocess.Popen(["/bin/bash", "-i"], stdout = subprocess.PIPE, stdin = subprocess.PIPE, stderr = subprocess.PIPE) and then print(sp.communicate(command.encode())). That should spawn an interactive shell that I can use like a terminal, shouldn't it? But either way I can't keep the connection open, and using the last example I can call sp.communicate only once, giving me the following output (in this case for 'ls /') and some errors: (b'Applications\n[...]usr\nvar\n', b'bash: no job control in this shell\nbash-3.2$ ls /\nbash-3.2$ exit\n'). The second time it gives me a ValueError: I/O operation on closed file. Sometimes (like for 'ls') I only get this error: b'ls\nbash-3.2$ exit\n'. What does that mean? How can I emulate a terminal with Python that allows me to control an interactive shell, or run Blender and communicate with the console?
Basic Terminal emulation in python
0.066568
0
0
6,899