Dataset schema (for string columns, Min/Max are string lengths; for int64/float64 columns, Min/Max are values):

Column                              Dtype      Min        Max
Q_Id                                int64      337        49.3M
CreationDate                        string     23         23
Users Score                         int64      -42        1.15k
Other                               int64      0          1
Python Basics and Environment       int64      0          1
System Administration and DevOps    int64      0          1
Tags                                string     6          105
A_Id                                int64      518        72.5M
AnswerCount                         int64      1          64
is_accepted                         bool       (2 classes)
Web Development                     int64      0          1
GUI and Desktop Applications        int64      0          1
Answer                              string     6          11.6k
Available Count                     int64      1          31
Q_Score                             int64      0          6.79k
Data Science and Machine Learning   int64      0          1
Question                            string     15         29k
Title                               string     11         150
Score                               float64    -1         1.2
Database and SQL                    int64      0          1
Networking and APIs                 int64      0          1
ViewCount                           int64      8          6.81M
17,158,233
2013-06-17T23:01:00.000
-3
0
1
0
java,python,exception-handling
17,158,373
3
false
1
0
Best practice is to handle appropriate exceptions in the appropriate place. Only you, as the developer, can decide which part of your code should catch exceptions. This should become apparent with decent unit testing: if you have unhandled exceptions, they will show up. You already described the differences. At a more fundamental level, Java's designers think they know better than you how you should code, and they'll force you to write lots of boilerplate. Python, by contrast, assumes that you are an adult and that you know what you want to do. This means that you can shoot yourself in the foot if you insist on doing so.
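As an illustration of that philosophy, here is a minimal sketch of idiomatic Python exception handling; nothing forces the caller to catch anything, and unhandled exceptions simply propagate. The function and file path are hypothetical.

```python
def read_config(path):
    # Nothing in the signature declares what this may raise; callers
    # decide whether to handle IOError or let it propagate.
    with open(path) as f:
        return f.readline().strip()

try:
    value = read_config("/etc/myapp.conf")  # hypothetical path
except IOError:
    # Handle only the failure we know how to recover from;
    # anything else propagates and will "show up" in testing.
    value = "default"
print(value)
```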
2
9
0
I am originally a Java developer. For me, a checked exception in Java makes it easy enough to decide whether to catch it or throw it to the caller to handle later. Then comes Python: there are no checked exceptions, so conceptually nothing forces you to handle anything (in my experience, you don't even know what exceptions are potentially thrown without checking the documentation). I've been hearing quite a lot from Python developers that, in Python, you are sometimes better off just letting it fail at runtime instead of trying to handle the exceptions. Can someone give me some pointers on: what is the guideline/best practice for Python exception handling? What is the difference between Java and Python in this regard?
Exception handling guidelines - Python vs Java
-0.197375
0
0
2,973
17,159,576
2013-06-18T02:12:00.000
0
0
0
0
python,django,ip
17,159,679
4
false
1
0
Not in any reliable way, or at least not in Django. The problem is that user IPs are usually dynamic, so the address changes every couple of days. Also, some ISPs will soon start to use a single IP for big blocks of users (this is called carrier-grade NAT) since they are running out of IPv4 addresses... In other words, all users from that ISP within a whole state or even country will share a single IP address. So using the IP is not reliable. You could probably figure out the country or region of the user with reasonable accuracy; however, my recommendation is not to use the IP for anything except logging and permission purposes (e.g. blocking a spam IP). If you want user locations, you can instead use the HTML5 geolocation API, which has a much better shot at an accurate location since it can utilize other methods, such as the GPS sensor in a phone.
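For the logging use that the answer does endorse, a minimal sketch of reading the client IP in Django follows; the header handling assumes a single trusted reverse proxy, and the helper name is made up.

```python
def get_client_ip(request):
    # Behind a reverse proxy the client IP is usually the first entry
    # of X-Forwarded-For; only trust this header if your proxy sets it.
    forwarded = request.META.get("HTTP_X_FORWARDED_FOR")
    if forwarded:
        return forwarded.split(",")[0].strip()
    return request.META.get("REMOTE_ADDR")
```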
1
0
0
I have a user registration form made in Django. I want to know the city from which the user is registering. Is there any way to get the IP address of the user and then somehow get the city for that IP, using some API or something?
Is there any simple way to store the user location while registering in database
0
1
0
163
17,160,797
2013-06-18T04:54:00.000
1
0
0
1
python,autobahn
17,166,189
1
true
1
0
There is no WAMP-specific session close (since WAMP does not have a closing handshake separate from WebSocket). You can use the onClose hook. Another point you might want to look at: the recommended way of accessing databases from Twisted applications is via twisted.enterprise.adbapi, which automatically manages a database connection pool on a background thread pool - independent of frontend protocol instances (like WAMP protocol instances). Disclaimer: I am the original author of Autobahn and work for Tavendo.
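A rough sketch of the adbapi approach the answer recommends (the database and query are placeholders; the pool outlives any single protocol instance):

```python
from twisted.internet import reactor
from twisted.enterprise import adbapi

# The pool runs queries on a background thread pool, independent
# of any WAMP/WebSocket protocol instance's lifetime.
dbpool = adbapi.ConnectionPool("sqlite3", "app.db", check_same_thread=False)

def on_result(rows):
    print(rows)
    reactor.stop()

d = dbpool.runQuery("SELECT 1")  # placeholder query
d.addCallback(on_result)
reactor.run()
```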
1
1
0
I'm using Autobahn Python to make a WAMP server. I open a database connection in onSessionOpen of my subclass of WampServerProtocol, and of course need to close it when the connection closes. However, I can't find a session close handler in either the tutorials or the docs.
Is there an onSessionClose in WampServerProtocol?
1.2
0
0
156
17,162,992
2013-06-18T07:28:00.000
3
0
0
0
python,button,tkinter
17,163,119
1
true
0
1
Try calling .invoke() on the button
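A minimal sketch with the question's button array (the widget setup is made up):

```python
import Tkinter as tk  # Python 2, matching the question's era

def clicked(i):
    print("button %d clicked" % i)

root = tk.Tk()
buttons = [tk.Button(root, text=str(i), command=lambda i=i: clicked(i))
           for i in range(3)]
for b in buttons:
    b.pack()

# .invoke() runs the button's command exactly as if it were clicked.
buttons[2].invoke()
root.mainloop()
```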
1
1
0
I'm trying to do a very simple task: make a button click in Tkinter. I have an array of Buttons and, when I finish an event, I want to trigger a button click event on one of my buttons. I thought it should be as simple as buttons[2].click(), but that doesn't do anything.
Make a click event in Tkinter
1.2
0
0
972
17,164,837
2013-06-18T09:07:00.000
1
0
0
0
python,hardware,pyglet
17,254,328
1
false
0
1
Problem solved. I'm posting the answer in the unlikely case someone else encounters this: it was a driver settings issue. In AMD Vision Engine Control Center, I had to move the performance vs. quality setting to maximum performance. I now get similar performance on the two machines.
1
3
0
I changed my computer recently, and the very same program using Python and Pyglet runs a lot slower on the newer computer than on the old one (25 seconds vs. 10 seconds). The older was an Asus EEE 1015p, with the following specs: processor: Intel Atom N570; memory: 2 GB; gfx: Intel GMA 3150; OS: Windows 7 Starter Edition 32-bit; Python version: 2.7; pyglet version: 1.1.4. The newer is an HP Pavilion dm1: processor: AMD E2-1800 APU with Radeon(tm) HD Graphics, 1700 MHz; gfx: AMD Radeon HD 7340 Graphics; memory: 4 GB; OS: Windows 8 64-bit; Python version: 2.7 (32-bit); pyglet version: 1.2alpha1. I suspected a graphics driver problem, but some programs using OpenGL (for example, the PlayStation 2 emulator PCSX2) run clearly faster on the newer machine (by around 40%), so I'm quite surprised. So I wonder if speed issues are known for some pyglet versions. The program uses batches to render a map made of 4 tiled layers, on which sprites are moving. Thanks to anyone who can point out what is wrong...
Pyglet slower on a faster computer
0.197375
0
0
270
17,168,883
2013-06-18T12:29:00.000
3
0
1
0
python,python-multithreading,locks
17,169,240
1
true
0
0
The best idea is to just not use threads at all. Most Python implementations have a global interpreter lock which eliminates the advantages of using threads in the first place. If you're using threading to wait for IO, you can get the same or better performance if you just use asynchronous IO instead. If you're using threading to perform computation (number crunching) across processors, the Python global lock prevents it from working, so you're better off using multiple processes instead. In exchange for having no upside, threading in Python has lots of shortcomings and caveats, as you have already found out. You still have to control data sharing, and deal with oddities related to threads receiving CPU attention at moments you don't control. All that for no benefit. TL;DR: just don't use threads.
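A minimal sketch of the multiple-processes alternative, using the standard library's multiprocessing module (the work function is a placeholder):

```python
from multiprocessing import Pool

def crunch(n):
    # Placeholder CPU-bound work; each call runs in a separate
    # process, so the GIL is not a bottleneck.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    pool = Pool(processes=4)  # roughly one worker per core
    results = pool.map(crunch, [10 ** 6] * 8)
    pool.close()
    pool.join()
    print(results)
```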
1
1
0
I have a question regarding Python locks and threading. I realise locks are used to prevent variables from being overwritten by another thread. Is it normal to use locks to get around this issue, since it then means you can only run a single thread concurrently? It also means creating acquire/release locks for every variable that may be overwritten, which for my project runs into quite a few! How are people doing this? Wrapping the variables in thread-safe lists, or creating unique variables based on the thread name, perhaps? Or is everybody littering their code with lock acquires and releases?
python locking and threading concurrency
1.2
0
0
1,966
17,169,774
2013-06-18T13:11:00.000
3
0
1
0
python,windows
17,169,942
1
false
0
0
To the Unix kernel, two threads (one reading and the other writing to a file or socket) are the same as two processes doing the same. Since the kernel is capable of multiplexing the IO, you don't need to worry.
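A rough sketch of the one-reader, one-writer pattern on a connected TCP socket; the endpoint is a placeholder, and no lock is needed as long as each thread sticks to its own direction:

```python
import socket
import threading

sock = socket.create_connection(("example.com", 80))  # placeholder endpoint

def reader():
    while True:
        data = sock.recv(4096)
        if not data:
            break  # peer closed the connection
        print("received %d bytes" % len(data))

def writer():
    # This thread only ever sends; the other only ever receives,
    # so the two directions don't need a lock between them.
    sock.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")

t = threading.Thread(target=reader)
t.start()
writer()
t.join()
sock.close()
```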
1
4
0
I want to have a reader thread and a writer thread on the same TCP socket. Is it OK? Do I need to lock before accessing it? Platform is Windows 7, CPython 2.7.4.
Python: is it OK for threads to read/write simultaneously to the same TCP socket?
0.53705
0
0
1,224
17,171,843
2013-06-18T14:42:00.000
1
1
0
0
python,image,unit-testing,language-agnostic,integration-testing
21,843,350
1
true
0
0
Do you write your own library to convert images, or do you just use an existing library? In the latter case, you simply do not test it; the author has already tested it somehow. You just need to create an abstraction layer between your code and the image library you use. Then you can simply check whether your code calls the library with the desired parameters. If you really insist on testing pictures, then you need to make the transformation deterministic (and compare the actual result with the expected result), or you need to make the comparison a bit less strict (from ignoring date fields to OCR-recognizing the image). Testing files is way easier (you do not need probability-based OCR): check whether your program placed all files in the expected location.
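A minimal sketch of the suggested abstraction-layer test using the mock library; the converter module, its image_backend attribute, and the function names are all hypothetical:

```python
import unittest
from mock import patch  # the standalone mock package of that era

import converter  # hypothetical module under test

class ConversionTest(unittest.TestCase):
    @patch("converter.image_backend")
    def test_converts_png_to_jpg(self, backend):
        # We don't test the image library itself, only that our code
        # calls it with the parameters we expect.
        converter.convert("in.png", "out/")
        backend.save_as_jpeg.assert_called_once_with("in.png", "out/in.jpg")

if __name__ == "__main__":
    unittest.main()
```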
1
1
0
I'm writing an app that converts different images to JPG. It operates over a complex directory structure, where a directory may include other directories, image files (JPG, GIF, PNG, TIFF), PDF files, and RAR/ZIP archives, which in turn may include anything of the above. The app finds everything that can be converted to an image and places the resulting JPGs into a separate folder. How do I write integration tests to test the conversion of images? Specifically, how should I fake the complex directory structure with all the files? Currently I just store a sample directory structure, which I manually assembled out of various image, PDF, and archive files, in a tests/ directory. In a setUp method I put this sample directory in place of the actual data and run the code. I had an idea to generate all these sample files myself (generate JPGs via ImageMagick, for example), but it proved hard. How is integration testing on images usually done?
Integration testing and images
1.2
0
0
271
17,173,464
2013-06-18T15:53:00.000
1
0
0
1
python,real-time,tornado
17,173,870
1
false
0
0
It really depends on what you want to do with the Tweets. Simply reading a stream of Tweets has not been an issue that I've seen; in fact, that can be done on an AWS Micro Instance. I even run more advanced regression algorithms on the real-time feed. The scalability problem arises if you try to process a set of historical Tweets. Since Tweets are produced so fast, processing historical Tweets can be very slow. That's when you should try to parallelize.
1
2
0
I am working on a project which is going to consume data from the Twitter Stream API and count certain hashtags. But I have difficulties in understanding what kind of architecture I need in my case. Should I use Tornado, or are there more suitable frameworks for this?
Realtime data processing with Python
0.197375
0
0
1,294
17,180,409
2013-06-18T23:01:00.000
2
0
0
0
python,opencv,image-processing,frame,python-imaging-library
17,181,542
1
false
0
0
For effects like zooming in and out, optical flow seems the best choice. Search for research papers on "Shot Detection" for other possible approaches. As for the techniques you mention, did you apply some form of noise reduction before using them?
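A rough sketch of scoring per-frame change with dense optical flow, assuming OpenCV 3's cv2 API (the file name and threshold are placeholders to tune):

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("input.mp4")  # placeholder file
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # Mean flow magnitude is a crude "how much changed" score.
    score = np.sqrt(flow[..., 0] ** 2 + flow[..., 1] ** 2).mean()
    if score > 1.0:  # placeholder threshold
        print("visual change at frame %d (score %.2f)" % (frame_idx, score))
    prev_gray = gray
    frame_idx += 1
```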
1
2
1
I have a sequence of actions taking place in a video, like, say, "zooming in and zooming out" on a webpage. I want to catch the frames that have a visual change from some previous frame, and so on. Basically, I want to catch the visual differences happening in the video. I have tried feature detection using SURF, but it just picks random frames and fails to detect most of the time. I have also tried histograms, and they do not help. Any directions and pointers? Thanks in advance.
change detection on video frames
0.379949
0
0
371
17,181,923
2013-06-19T02:20:00.000
1
1
0
1
python,unit-testing,integration-testing,celery
18,316,377
1
false
1
0
I'm not sure if it's worthwhile to explicitly test the transportation mechanism (i.e. the sending of the task parameters through Celery) using a unit test. Personally, I would write my test as follows (it can be split up into several unit tests): (1) use the code from project B to generate a task with sample parameters; (2) encode the task parameters using the same method used by Celery (i.e. pickling the parameters or encoding them as JSON); (3) decode the task parameters again, checking that no corruption occurred; (4) call the task function, making sure that it produces the correct result; (5) perform the same encoding/decoding sequence for the results of the task function. Using this method, you will be able to test that the task generation works as intended, and that the encoding and decoding of the task parameters and results work as expected. If necessary, you can still independently test the functioning of the transportation mechanism using a system test.
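A condensed sketch of that round-trip test; foo and its parameters are stand-ins, and JSON is used for the round trip (the question's msgpack serializer works the same way):

```python
import json
import unittest

def foo(a, b, c):
    # Stand-in for project A's task function.
    return a + b + c

class TaskRoundTripTest(unittest.TestCase):
    def test_params_survive_serialization(self):
        params = {"args": [1, 2, 3], "kwargs": {}}
        decoded = json.loads(json.dumps(params))  # same idea as Celery's JSON codec
        self.assertEqual(decoded, params)

    def test_task_result_survives_serialization(self):
        result = foo(1, 2, 3)
        self.assertEqual(json.loads(json.dumps(result)), 6)

if __name__ == "__main__":
    unittest.main()
```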
1
6
0
I have the following setup: a Django-Celery project A registers task foo; project B uses Celery's send_task to call foo. Project A and project B have the same configuration: SQS, msgpack for serialization, gzip, etc. Each project lives in a different GitHub repository. I've unit-tested calls to "foo" in project A without using Celery at all, just foo(1,2,3), asserting the result; I know that it works. I've unit-tested that send_task in project B sends the right parameters. What I'm not testing, and need your advice on, is the integration between the two projects. I would like to have a unit test that would: (1) start a worker in the context of project A; (2) send a task using the code of project B; (3) assert that the worker started in the first step gets the task, with the parameters I sent in the second step, and that the foo function returned the expected result. It seems possible to hack this by using Python's subprocess and parsing the output of the worker, but that's ugly. What's the recommended approach to unit testing in cases like this? Any code snippet you could share? Thanks!
Running a Celery worker in unittest
0.197375
0
0
774
17,183,068
2013-06-19T04:49:00.000
0
0
0
0
python,selenium-webdriver
17,183,255
2
false
1
0
Most of the time I'm using By.xpath, and it works especially well if you use contains in your XPath. For example: //*[contains(text(),'ABC')] will look for all the elements that contain the string 'ABC'. In your case you can replace ABC with Delete Log File.
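In the Python bindings the question uses, that suggestion looks roughly like this (the URL is a placeholder, and the locator assumes the button's visible text really is "Delete Log File"):

```python
from selenium import webdriver

browser = webdriver.Firefox()
browser.get("http://example.com/logs")  # placeholder URL

# Locate the button by its visible text rather than by id/css,
# which helps when the markup attributes are unreliable.
button = browser.find_element_by_xpath("//*[contains(text(),'Delete Log File')]")
button.click()
```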
1
0
0
I am very much new to Selenium WebDriver and I am trying to automate a page which has a button named "Delete Log File". Using Firebug I got to know how the HTML is described, and the CSS selector is defined as "#DeleteLogButton" using FirePath; hence I used browser.find_element_by_css_selector("#DeleteLogButton").click() in WebDriver to click on that button, but it's not working. I also tried browser.find_element_by_id("DeleteLogButton").click() to click on that button. Even this did not solve my problem... Please help me out in resolving the issue.
Unable to locate the element while using selenium-webdriver
0
0
1
157
17,183,100
2013-06-19T04:51:00.000
0
0
1
0
python,automation
17,183,172
1
false
0
0
If the info you're looking for is shown by IE, you can obtain it by using XPath. If you want to make it automatic, you could use Selenium, which works with web pages as well. If the program doesn't provide an API, then it's not possible to interact with the program directly; you can only get the output from a browser, or use programs like Sikuli and obtain the info, for example, by image processing.
1
0
0
For example, if a program does not provide an API and I need it to do something from a Python script, is that possible? Let me give an example: I would like to copy the product ID from Internet Explorer every day, create a new text file named in the format ddmmyyyy, and store it on the desktop. It is tedious, so I would like to hand it to the machine to do. (Yes, it is useless, but it's just an example.) Creating a text file with a string is easy to implement, but the REAL question is: how can I get the product ID from a program that doesn't provide an API? IE doesn't provide an API for developers to access the product ID, and I think the value can be shown on the screen or in the registry. Assuming I don't know the registry location, or they simply didn't store it in the registry, the only way I can get this product ID is manually: click and check. Is it possible to make this process automatic? Thanks.
Is it possible to write a program to interact with existing software?
0
0
0
46
17,184,244
2013-06-19T06:25:00.000
0
0
0
1
python,celery,celery-task,celeryd
61,655,803
5
false
0
0
Try removing the .state file; if you are using a beat worker (celery worker -B), then remove the schedule file as well.
2
15
0
I have added some wrong tasks to Celery with a Redis broker, and now I want to remove the incorrect tasks, but I can't find any way to do this. Is there some command or some API to do this?
how to remove task from celery with redis broker?
0
0
0
22,805
17,184,244
2013-06-19T06:25:00.000
3
0
0
1
python,celery,celery-task,celeryd
61,655,939
5
false
0
0
The simplest way is to use celery control revoke [id1 [id2 [... [idN]]]] (do not forget to pass the -A project.application flag too), where id1 to idN are task IDs. However, it is not guaranteed to succeed every time you run it, for valid reasons... Celery certainly has an API for it. Here is an example of how to do it from a script: res = app.control.revoke(task_id, terminate=True). In the example above, app is an instance of the Celery application. On some rare occasions the control command above will not work, in which case you have to instruct the Celery worker to kill the worker process: res = app.control.revoke(task_id, terminate=True, signal='SIGKILL')
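If the goal is to drop every task still waiting in the queue rather than revoke specific IDs, a complementary sketch (the broker URL is a placeholder; purging irreversibly discards all pending messages):

```python
from celery import Celery

app = Celery("project", broker="redis://localhost:6379/0")  # placeholder broker URL

# purge() deletes all messages waiting in the default queue and
# returns how many were removed; already-running tasks are unaffected.
removed = app.control.purge()
print("discarded %d pending tasks" % removed)
```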
2
15
0
I have added some wrong tasks to Celery with a Redis broker, and now I want to remove the incorrect tasks, but I can't find any way to do this. Is there some command or some API to do this?
how to remove task from celery with redis broker?
0.119427
0
0
22,805
17,185,238
2013-06-19T07:26:00.000
1
0
1
0
python,naming-conventions,naming,fabric
17,185,924
2
false
0
0
I don't know if there is any accepted practice for naming such things, but here is what the different names mean to me, in case this is any help. install_nginx: this suggests the function is actually taking the nginx package from an external repository and installing it onto the server. setup_nginx: this suggests that nginx is already installed, and you are setting it up for your specific purposes, for example deploying your own nginx configuration files. deploy_nginx: this suggests that nginx is a software package that you own, and you are deploying it to the server (note the subtle difference from install, which suggests a software package managed by someone else). To be honest, I think any of these names would work, but I must say I admire your focus on trying to get your naming spot on.
1
2
0
I often notice fabfiles in different projects having functions like the following: install_nginx, setup_nginx, deploy_nginx. My interpretation/naming preference is install_* for package install tasks, but setup_* and deploy_* sound very similar, perhaps overlapping. I wonder what the generally accepted better practice is. And what would the above names mean to you?
Naming fabric operations
0.099668
0
0
63
17,192,726
2013-06-19T13:37:00.000
0
0
1
0
matplotlib,ipython-notebook
17,193,682
1
false
0
0
They should not appear unless you explicitly create them. Are you by any chance renaming the notebook manually on the filesystem?
1
0
0
I am just starting with IPython notebook --pylab inline. It works great for what I am doing. I am happily creating multiple notebooks for different areas that I am working on. However, on the first launch page in my browser (the one that lists all of the notebooks), I keep seeing new Unknown* notebooks appearing over time. The respective Unknown*.ipynb files appear next to my notebooks on the file system as well. I can easily click 'Shutdown' and 'Delete' to eliminate these with no adverse effects. Why do these notebooks keep appearing? Can I stop them from appearing?
IPython notebook automatically creates new empty notebooks
0
0
0
141
17,193,457
2013-06-19T14:08:00.000
3
1
1
0
python,oop,static-methods
17,193,588
5
false
0
0
Python just doesn't do private. If you like, you can follow convention and precede the name with a single underscore, but it's up to other coders to respect that in a gentlemanly (or gentlewomanly) fashion.
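A minimal sketch of the conventions in play, covering both the underscore prefix and the staticmethod the question mentions (the class and method names are made up):

```python
class Widget(object):
    def render(self):
        # Public entry point; uses the "private" helpers below.
        return self._decorate(Widget._default_label())

    def _decorate(self, text):
        # Single leading underscore: private by convention only;
        # nothing stops outside code from calling it.
        return "[%s]" % text

    @staticmethod
    def _default_label():
        # A staticmethod needs no self; a plain module-level
        # function is another common choice.
        return "widget"

print(Widget().render())  # -> [widget]
```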
1
25
0
I would like to have a function in my class which I am going to use only inside the methods of this class; I will not call it outside the implementations of these methods. In C++, I would use a method declared in the private section of the class. What is the best way to implement such a function in Python? I am thinking of using the staticmethod decorator for this case. Can I use a function without any decorators and without the self parameter?
Private methods in Python
0.119427
0
0
48,236
17,195,096
2013-06-19T15:20:00.000
0
0
1
0
javascript,python,ruby,node.js,opalrb
17,195,327
1
true
1
0
(1) You shouldn't even need to understand the syntax of JavaScript; just an understanding of the DOM should suffice. Having said that, all the DOM examples will be in JS syntax, so reading them will be tricky, and being able to debug the transpiled JavaScript is also useful. (2) Correct. You can write both a server and a client with JavaScript in all the places. (3) Also correct. This is probably a better option, as these languages map more closely to the underlying JavaScript.
1
0
0
Newbie self-learner diving into web development here. My goal is to learn how to build web apps. Three quick questions: (1) Ruby and Python seem to have offshoots that compile their respective code to JavaScript (i.e. Opal/Pyjamas). If I can get an understanding of the DOM, do I even have to learn the full JavaScript language, or can I just rely on Ruby/Python compiling to JS? (2) Everyone seems to be talking about node.js allowing for JavaScript on both the browser and server. Does that mean that if I know JavaScript and use Node, I don't need Python or Ruby for web dev? (3) If node.js allows for server/client-side JavaScript, couldn't someone just learn something like CoffeeScript or TypeScript and throw Python, Ruby, or PHP aside?
Languages compiling/interpreting to JavaScript (such as Ruby/Python/CoffeeScript)
1.2
0
0
192
17,197,743
2013-06-19T17:40:00.000
13
0
1
1
python,linux,installation
17,201,432
1
true
0
0
As far as I know, there's no automatic way to do this. But you can go into /usr/local and delete bin/pythonX and lib/pythonX (and maybe lib64/pythonX). But more generally, why bother? The whole point of altinstall is that many versions can live together, so you don't need to delete them. For your tests, what you should do is use virtualenv to create a new, clean environment with whichever Python version you want to use. That lets you keep all your altinstalled Python versions and still have a clean environment for tests. Also, do the same (use virtualenv) for development. Then your altinstalled Pythons don't have site packages; they just stay as clean, pristine references.
1
13
0
How do you cleanly remove Python when it was installed with make altinstall? I'm not finding an altuninstall or such in the makefile, nor does this seem to be a common question. In this case I'm working with Python 2.7.x in Ubuntu, but I expect the answer would apply to earlier and later versions/sub-versions. Why? I'm doing build tests of various Python sub-versions and would like to do those tests cleanly, so that there are no "leftovers" from other versions. I could wipe out everything in /usr/local/lib and /usr/local/bin but there may be other things there I'd like not to remove, so having a straightforward way to isolate the Python components for removal would be ideal.
How do you cleanly remove Python when it was installed with 'make altinstall'?
1.2
0
0
18,817
17,199,544
2013-06-19T19:18:00.000
0
0
1
1
python,pip,mingw32
19,987,999
2
false
0
0
I've struggled with this issue for a while, and found that sometimes pip seems to ignore MSYS_HOME and can't find pydistutils.cfg, in which case the only recourse seems to be manually copying the pydistutils.cfg into your virtualenv right before running pip install. It sure would be great if someone could definitively figure out why it sometimes finds this file and sometimes does not. Environment variables seem excessively finicky in MinGW.
1
8
0
This has been asked and answered a few times, but as you'll see, none of the previous answers work for me -- I feel like something has changed to make all the old answers outdated, or at the very least I'm in some kind of edge case. On a Windows 7 box, I've installed MinGW32 and Python 2.7 (32-bit version) (I've also tried this with Python 2.6 and got the same results). The Path environment variable is set correctly. I've edited cygwincompiler.py to remove references to -mno-cygwin, and I've put the correct distutils.cfg file into C:\Python27\Lib\distutils. To be clear: distutils.cfg contains [build] compiler=mingw32 on two lines. I've also (just to be safe) put pydistutils.cfg in my %HOME% directory, and put setup.cfg in my current directory when running pip; they have the same content as distutils.cfg. I know this is all working because pip install cython and pip install pycrypto both compile successfully. However, mysteriously, some packages still give me the unable to find vcvarsall.bat error. Two examples are pyproj and numpy. It's as if sometimes pip knows to use the MinGW compiler and sometimes it doesn't. Moreover, if I use the MSYS shell that comes with MinGW, then magically pip install numpy succeeds, but pip install pyproj still fails with an unable to find vcvarsall.bat. I've tried this out on several machines, all with the exact same results. Anybody have any idea what's going on here? Why would pip know to use mingw32 to compile some C modules and not others? Also, why does pip install numpy work inside the MSYS shell but not inside the cmd shell? BONUS: Many, many older answers suggest installing Visual Studio 2008 as a way of solving a vcvarsall.bat error, but as of this past May, Microsoft is no longer distributing this software. Does anyone know of a place where one can still download VS2008? I ask because it's possible that being able to use vcvarsall.bat instead of MinGW would solve this problem.
python - Getting Pip to use MinGW32 on Windows?
0
0
0
5,583
17,201,129
2013-06-19T20:54:00.000
5
0
0
1
python,windows
17,201,159
2
true
0
0
Mostly, yes, as long as you stick to using the tools Python provides you and don't write code that is platform-specific. Use os.path.join() to build paths, for example. Open files in binary mode when you deal with binary data, text mode when it's text data. Etc. Python code itself is platform-agnostic; the interpreter on Linux can read Python code written on Windows just fine, and vice versa. The general rule of thumb is to pay attention to the documentation of the modules you are using; any platform-specific gotchas should be documented.
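A small sketch of the platform-neutral habits being described (paths and file names are placeholders):

```python
import os

# os.path.join picks the right separator on every platform,
# unlike hard-coding "data\\logs" or "data/logs".
log_dir = os.path.join("data", "logs")
if not os.path.isdir(log_dir):
    os.makedirs(log_dir)
log_path = os.path.join(log_dir, "app.log")

# Text data: text mode lets the platform handle line endings.
with open(log_path, "a") as f:
    f.write("started\n")

# Binary data: binary mode keeps Windows from translating \n.
with open(log_path, "rb") as f:
    print(f.read())
```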
1
4
0
I would like to write some Python code on Windows using QtPy, but before I do that I'd like to know that I can reuse the code I write. I understand that a compiled program won't work across platforms, but will there be any issues with the *.py files I write on Windows vs. Linux? I've been trying to install QtPy on my Mint installation and I just don't know what the problem is, which is why I want to go this route. I'd also like my code to work on the Raspberry Pi. Could you guys advise me to this end? Thanks!
will python code written in windows work in linux?
1.2
0
0
7,110
17,207,280
2013-06-20T07:05:00.000
1
1
0
1
python,file-io,permissions
17,207,385
2
true
0
0
One option would be to do this somewhat sensitive "su" work in a background process that is disconnected from the web. Likely running via cron, this script would take the root-owned log files and possibly change them to a format that the web-side code could deal with easily, like loading them into a database, or merely unzip them and place them in a different location with slightly more relaxed permissions. Then the web-side code could easily access the data without having to jump through the "su" hoops. From my perspective this plan does not violate your contractual rules: the web server config, permissions, etc. remain intact.
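A rough sketch of the cron-side script (both paths are placeholders; it would run as root from the crontab, keeping the web process unprivileged):

```python
import gzip
import os

SRC = "/var/log"                   # root-readable originals
DST = "/var/www/log-extracts"      # directory readable by www-data

if not os.path.isdir(DST):
    os.makedirs(DST)

for name in os.listdir(SRC):
    if not name.endswith(".gz"):
        continue
    dst_path = os.path.join(DST, name[:-3])
    with gzip.open(os.path.join(SRC, name), "rb") as src:
        with open(dst_path, "wb") as dst:
            dst.write(src.read())
    os.chmod(dst_path, 0o644)  # world-readable, so www-data can open it
```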
1
0
0
The business case: the app server (Ubuntu/nginx/PostgreSQL/Python) that I use writes gzipped system log files as root to /var/log, and I need to present data from these log files to users' browsers. My approach: I need to do a fair bit of searching and string manipulation server-side, so I have a Python script that handles the opening and processing and then returns a nicely formatted JSON result set. The Python (CGI) script is then called using Ajax from the web page. My problem: the script works perfectly when called from the command line as SU, but (...obviously) the file-opening method I'm using (gzip.open(filename)) fails when invoked as user www-data by the web server. Other useful info: the app server concerned is (contractually rather than physically) a bit of a black box - I have SU access, I can write scripts, I can read anything, but I can't change file permissions, add additional Python libs, or mess with config. The subset of users who would use this log extract also have the SU password, so they could be presented with a login dialog whose input I could pass to the script. Given the restrictions I have, how would you go about it?
Open root owned system files for reading with python
1.2
0
0
746
17,208,910
2013-06-20T08:34:00.000
1
0
0
1
python,linux
17,213,744
1
true
0
0
It depends on the distro you are using and the version of libguestfs, but let's assume Fedora/RHEL/Debian and you're using libguestfs ≥ 1.18. In that case for local mount functionality you will only need the basic library package, called libguestfs on Fedora-like or libguestfs0 on Debian-like distros. You may also want the fusermount tool which is part of FUSE. If you're using guestfish, then you'll need the tools package. On Fedora you can just depend on /usr/bin/guestfish which does the Right Thing. On Debian it's in a package called guestfish. If you're using libguestfs through bindings (eg. from Python) then you should also depend upon the bindings package, eg. python-libguestfs (Fedora) or python-guestfs (Debian).
1
0
0
I have created an installer for a Linux tool; this tool depends on libguestfs. The question is: what are the minimum required libguestfs packages I need to install for my tool to work?
What are the libguestfs basic packages
1.2
0
0
304
17,209,762
2013-06-20T09:13:00.000
0
0
0
0
opencv,python-2.7
17,212,232
1
false
0
0
(1) Simplify edges A and B into line equations (using only the last few pixels). (2) Get the line equations of the two lines (form y = mx + b). (3) Get the angle orientation of each line: θ = atan(|1/m|). (4) Subtract the two angles from each other. (5) Make sure to handle the special case of infinite slope, and also do some simple math to keep Final_Angle in (0; π).
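A small sketch of steps 2-5 using atan2, which sidesteps the infinite-slope special case (the endpoint coordinates stand in for the fitted edge pixels):

```python
import math

def edge_angle(p0, p1):
    # Orientation of the segment p0 -> p1; atan2 handles vertical
    # edges (infinite slope) without a special case.
    return math.atan2(p1[1] - p0[1], p1[0] - p0[0])

# Placeholder endpoints for the last few pixels of each edge.
angle_a = edge_angle((10, 40), (60, 90))
angle_b = edge_angle((10, 40), (10, 120))

contact = abs(angle_a - angle_b) % math.pi  # fold into [0, pi)
print(math.degrees(contact))
```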
1
1
1
I need to find the contact angle between 2 edges in an image using OpenCV and Python. Can anybody suggest how to find it? If not code, please let me know an algorithm for the same.
to measure Contact Angle between 2 edges in an image
0
0
0
531
17,211,006
2013-06-20T10:12:00.000
1
0
0
0
python,qt,pyqt,pyside
17,515,469
2
false
0
1
I basically found the solution: first, be sure you have Qt >= 5 installed, since retina support was introduced in that version; secondly, I am now running Python 3.3. Enjoy your retina display.
1
3
0
Qt interfaces currently look horrible on a retina display as they scale up. It's possible to use an Info.plist for a compiled application, but does anyone have a solution for dynamic Python, such as interfaces created in PySide?
Optimising Python QT apps on retina displays
0.099668
0
0
2,959
17,211,448
2013-06-20T10:36:00.000
1
0
1
0
python,hash
17,211,615
3
false
0
0
Are you worried about collisions across OSes? Is that your issue? Since you are only dealing with a few hundred 5-tuples, can't you apply some kind of hash collision resolution technique, like chaining or open addressing? If I am not missing anything else, I believe that approach is better than devising a new hashing algorithm yourself.
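A minimal sketch of one consistent option using hashlib, which (unlike the built-in hash) gives the same digest on every OS and interpreter run:

```python
import hashlib

def flow_id(five_tuple):
    # five_tuple = (src_ip, dst_ip, sport, dport, proto)
    key = "|".join(str(field) for field in five_tuple)
    # 12 hex chars are plenty for a few hundred flows; the chance
    # of a collision at this scale is negligible.
    return hashlib.sha1(key.encode("utf-8")).hexdigest()[:12]

print(flow_id(("10.0.0.1", "192.0.2.7", 51234, 443, 6)))
```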
1
1
0
I identify Internet traffic flows by their 5-tuple (src IP, dst IP, sport, dport, transport protocol number) and I would like to turn this 5-tuple into a much more compact alphanumeric ID for internal use in my script. What choices do I have in Python? I read that the built-in function hash is only consistent within one OS, so I would prefer something else. I will only ever have to deal with no more than a few hundred different 5-tuples.
identify 5-tuple flows by hash value in Python
0.066568
0
0
1,599
17,213,515
2013-06-20T12:20:00.000
0
0
0
0
python,scrapy
17,213,740
3
true
1
0
Rows in a database have no particular order unless you impose one. So you should add a timestamp column to your table, keep it up to date (MySQL has a special flag to mark a field as auto-now), and use ORDER BY in your queries.
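A tiny sketch of the idea, shown with the standard library's sqlite3 so it is self-contained; the same pattern applies to MySQL with a DEFAULT CURRENT_TIMESTAMP column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE items (
                    name TEXT,
                    scraped_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP)""")
for name in ["A", "B", "C"]:
    conn.execute("INSERT INTO items (name) VALUES (?)", (name,))

# Retrieval order is defined by ORDER BY, not by insertion accident;
# rowid breaks ties when timestamps share the same second.
rows = conn.execute(
    "SELECT name FROM items ORDER BY scraped_at DESC, rowid DESC").fetchall()
print(rows)  # -> [('C',), ('B',), ('A',)]
```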
2
2
0
I am trying to put the items scraped by my spider into a MySQL DB via a MySQL pipeline. Everything is working, but I see some odd behaviour: the filling of the database is not in the same order as the website itself. It looks like a random order, probably from the dictionary-like list of the items scraped, I guess. My questions are: (1) how can I get the same order as the items on the website itself; (2) how can I reverse the order from question 1. So, items on the website: A B C D E; insertion order in MySQL: E D C B A.
Scrapy reversed item ordering for preparing in db
1.2
1
0
201
17,213,515
2013-06-20T12:20:00.000
1
0
0
0
python,scrapy
17,221,923
3
false
1
0
It's hard to say without the actual code, but in theory... Scrapy is completely async; you cannot know the order of items that will be parsed and processed through the pipeline. But you can control the behavior by "marking" each item with a priority key. Add a field priority to your Item class; in the parse_item method of your spider, set the priority based on the position on the web page; then in your pipeline you can either write this priority field to the database (so you can sort later), or gather all items in a class-wide list and, in the close_spider method, sort the list and bulk-insert it into the database. Hope that helps.
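A condensed sketch of the close_spider variant (the item fields and the logged "insert" are placeholders for your own pipeline code):

```python
import scrapy

class PageItem(scrapy.Item):
    name = scrapy.Field()
    priority = scrapy.Field()  # position of the item on the page

class OrderedMysqlPipeline(object):
    def open_spider(self, spider):
        self.items = []

    def process_item(self, item, spider):
        self.items.append(item)
        return item

    def close_spider(self, spider):
        # Restore on-page order before the bulk insert;
        # reverse=True would give the E D C B A ordering instead.
        for item in sorted(self.items, key=lambda i: i["priority"]):
            spider.logger.info("inserting %s", item["name"])  # placeholder DB insert
```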
2
2
0
I am trying to put the items scraped by my spider into a MySQL DB via a MySQL pipeline. Everything is working, but I see some odd behaviour: the filling of the database is not in the same order as the website itself. It looks like a random order, probably from the dictionary-like list of the items scraped, I guess. My questions are: (1) how can I get the same order as the items on the website itself; (2) how can I reverse the order from question 1. So, items on the website: A B C D E; insertion order in MySQL: E D C B A.
Scrapy reversed item ordering for preparing in db
0.066568
1
0
201
17,213,899
2013-06-20T12:41:00.000
2
0
0
0
python
17,213,951
3
true
0
0
I'm guessing that it never writes a newline? If that's true, you need to call sys.stdout.flush() occasionally.
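A tiny sketch of that fix inside the question's polling loop (the database work is a placeholder):

```python
import sys
import time

while True:
    # ... fetch a row from the database and process it (placeholder) ...
    sys.stdout.write("processed one row\n")
    sys.stdout.flush()  # push buffered output to the redirected file now
    time.sleep(1)
```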
2
0
0
I have a Python script with a while(1) loop that takes input from a table in a database, processes it, and writes something to stdout, but I am unable to redirect its output to a file. I tried all the standard methods; maybe because the script never stops and I have to stop it with Ctrl-Z, it is unable to append the stdout output to the file. Any clues?
Redirection of output in infinitely running python script
1.2
0
0
41
17,213,899
2013-06-20T12:41:00.000
1
0
0
0
python
17,214,078
3
false
0
0
You could also disable I/O buffering with the -u option: python -u yourscript.py. (This can diminish performance in some cases.)
2
0
0
I have a Python script with a while(1) loop that takes input from a table in a database, processes it, and writes something to stdout, but I am unable to redirect its output to a file. I tried all the standard methods; maybe because the script never stops and I have to stop it with Ctrl-Z, it is unable to append the stdout output to the file. Any clues?
Redirection of output in infinitely running python script
0.066568
0
0
41
17,215,202
2013-06-20T13:42:00.000
1
0
1
0
python,printing
17,225,483
1
false
0
1
If you are using IDLE, the default IDE that comes with Python for Windows, you should be able to print your Python script with syntax highlighting from there. If you are looking for a script that takes a Python file and then prints it with syntax highlighting, you'd need to provide more information as to what you are trying to achieve.
1
2
0
Using 2.5.4 on Windows 7: is there a simple way to print .py files in color (as they appear on the screen) AND select a desired font? Thanks, Glen
Is there a way to print a py file in color?
0.197375
0
0
104
17,216,646
2013-06-20T14:42:00.000
0
0
0
0
android,python,django
17,216,777
1
true
1
0
That is not possible, as the Google Play history belongs only to that particular app. App-to-app communication cannot be used to take data unless the app exposes a content provider, since every app is treated as a separate user by the Linux kernel.
1
1
0
For a website I am making in Django, I need to see what apps a user has downloaded in the past. I know I can use federated login through Django and OpenID to have users log in through their Google accounts. However, is there any API out there that can allow me to see what Android applications this user has downloaded in the past? This would include names, versions, etc. of all Android applications the user has downloaded to their account, in addition to what type of phone/device the applications were downloaded to. I looked at the Google Play APIs on their site and it didn't seem like there was anything that allowed for it. Please let me know if you have any advice or if there is anything that you know of that could help me! Thanks!
API to access a user's download history in Google Play?
1.2
0
0
199
17,218,051
2013-06-20T15:44:00.000
0
0
0
0
python,indexing,scipy,rotatetransform,correspondence
18,653,827
1
true
0
0
After some time debugging, I realized that depending on the angle - typically below vs. above n*45 degrees - scipy adds a row and a column to the output image. A simple test on the angle, adding one to the indices, solved my problem. I hope this can help a future reader of this topic.
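A related sketch worth noting: scipy.ndimage's rotate takes a reshape flag, and reshape=False keeps the output the same shape as the input, which avoids the index shifting entirely (the array is a placeholder):

```python
import numpy as np
from scipy import ndimage

img = np.arange(100.0).reshape(10, 10)  # placeholder image

# reshape=True (the default) grows the output to fit the rotated
# corners; reshape=False keeps the original 10x10 shape, so row
# indices computed on the input remain valid on the output.
rotated = ndimage.rotate(img, 30, reshape=False)
profile = rotated[rotated.shape[0] // 2]  # the central row
print(rotated.shape, profile.shape)  # -> (10, 10) (10,)
```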
1
1
1
I hope this hasn't already been answered, but I haven't found this anywhere. My problem is quite simple: I'm trying to compute an oblique profile of an image using scipy. One great way to do it is to: locate the segment along which I want my profile by giving the beginning and the end of the segment; extract the minimal image containing the segment; compute the angle by which my image should be rotated to get the desired profile along a nice row; extract said row. That's the theory. I'm stuck right now on (4), because my profile should fall in row number array.shape[0]/2, but rotate seems to sometimes add lines of zeros below the data and columns on the left. The correct row number can thus be shifted... Has anyone any idea how to get the right row number, for example using the correspondence matrix used by scipy? Thanks.
finding corresponding pixels before and after scipy.ndimage.interpolation.rotate
1.2
0
0
131
17,218,183
2013-06-20T15:49:00.000
0
1
0
0
html,windows,python-2.7,command,command-prompt
17,218,531
1
false
1
0
I believe the correct answer is you cannot. Feel free to let me know otherwise if you find out a way to do it.
1
1
0
I would like to run the command python abc.py in the Windows command prompt when a button on an HTML page is clicked. The Python file is located at C:/abc.py. I would like to know how to code the HTML page to do this. Thank you for the help.
Running the Command on Windows Command prompt using HTML button
0
0
0
6,859
17,218,398
2013-06-20T16:00:00.000
1
0
0
1
python,types,raspberry-pi
17,218,735
3
true
0
0
There are multiple cache layers between a Python program and a database. In particular, the Linux disk block cache may keep your database in core depending on patterns of usage. Therefore, you should not assume that writing to a database and reading back is necessarily slower than some home-brew cache that you'd put in your application. And code that you write to prematurely optimize your DB is going to be infinitely more buggy than code you don't write. For the workload as you've specified it, MySQL strikes me as a little heavyweight relative to SQLite, but you may have unstated reasons to require it.
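For the buffering idea in the question, a rough sketch of the in-memory structure and the 5-minute flush (the sensor read and DB write are placeholders):

```python
import datetime

readings = {}  # {datetime: (speed, direction)}

def record_reading(speed, direction):
    readings[datetime.datetime.now()] = (speed, direction)

def flush(db_write):
    # One batched write every 5 minutes instead of one per 5 seconds;
    # db_write is a placeholder for your MySQL insert function.
    for ts in sorted(readings):
        speed, direction = readings[ts]
        db_write(ts, speed, direction)
    readings.clear()

def print_row(ts, speed, direction):
    print("%s %s %s" % (ts, speed, direction))

record_reading(4.5, "W")
record_reading(4.0, "SW")
flush(print_row)
```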
2
1
0
I am creating a weather station using a Raspberry Pi. I have a MySQL database set up for the different sensors (temp, humidity, pressure, rain, etc.) and am now getting to processing the wind sensors. I have a Python program that watches the GPIO pins for the anemometer and counts the pulses to calculate the wind speed. It also reads from a wind vane, processed through an ADC, to get the direction. For the other sensors I only take readings every few minutes and dump the data directly to the DB. Because I have to calculate a lot of things from the wind sensor data, I don't necessarily want to write to the DB every 5 seconds and then have to read back the past 5 minutes of data to calculate the current speed and direction. I would like to collect the data in memory, do the processing, then write the finalized data to the DB. The sensor reading is something like: datetime, speed, direction 2013-6-20 09:33:45, 4.5, W 2013-6-20 09:33:50, 4.0, SW 2013-6-20 09:33:55, 4.3, W The program calculates data every 5 seconds from the wind sensors. I would like to write data to the DB every 5 minutes. Because the DB is on an SD card, I obviously don't want to write to the DB 60 times, then read it back to process it, then write it to the permanent archival DB every 5 minutes. Would I be better off using a list of lists? Or a dictionary of tuples keyed by datetime? {datetime.datetime(2013, 6, 20, 9, 33, 45, 631816): ('4.5', 'W')} {datetime.datetime(2013, 6, 20, 9, 33, 50, 394820): ('4.0', 'SW')} {datetime.datetime(2013, 6, 20, 9, 33, 55, 387294): ('4.3', 'W')} For the latter, what is the best way to update the dictionary? Should I just dump it to a DB and read it back? That seems like an excessive number of reads/writes a day for so little data.
Python "in memory DB style" data types
1.2
0
0
221
17,218,398
2013-06-20T16:00:00.000
0
0
0
1
python,types,raspberry-pi
17,218,570
3
false
0
0
In my general experience, it is easier to work with a dictionary keyed by datetime. A list of lists can get very confusing, very quickly. I'm not certain, however, how best to update a dictionary. It could be that my Python is rusty, but it would seem to me that dumping to a DB and reading back is a bit redundant, though it may just be that your statement was a smidgen unclear. Is there any way you can dump to a variable inside of your program? If not, I think dumping to a DB and reading back may be your only option... but again, my Python is a bit rusty. That said, I don't want to be a Programmaticus Takeitovericus, but I was wondering if you've ever looked into XML for data storage? I ended up swapping to it because I found it was easier to work with than a database, and it involved far less reading and writing. I don't know your project specs, so this suggestion may be pointless to you altogether.
2
1
0
I am creating a weather station using a Raspberry Pi. I have a MySQL database set up for the different sensors (temp, humidity, pressure, rain, etc.) and am now getting to processing the wind sensors. I have a Python program that watches the GPIO pins for the anemometer and counts the pulses to calculate the wind speed. It also reads from a wind vane, processed through an ADC, to get the direction. For the other sensors I only take readings every few minutes and dump the data directly to the DB. Because I have to calculate a lot of things from the wind sensor data, I don't necessarily want to write to the DB every 5 seconds and then have to read back the past 5 minutes of data to calculate the current speed and direction. I would like to collect the data in memory, do the processing, then write the finalized data to the DB. The sensor reading is something like: datetime, speed, direction 2013-6-20 09:33:45, 4.5, W 2013-6-20 09:33:50, 4.0, SW 2013-6-20 09:33:55, 4.3, W The program calculates data every 5 seconds from the wind sensors. I would like to write data to the DB every 5 minutes. Because the DB is on an SD card, I obviously don't want to write to the DB 60 times, then read it back to process it, then write it to the permanent archival DB every 5 minutes. Would I be better off using a list of lists? Or a dictionary of tuples keyed by datetime? {datetime.datetime(2013, 6, 20, 9, 33, 45, 631816): ('4.5', 'W')} {datetime.datetime(2013, 6, 20, 9, 33, 50, 394820): ('4.0', 'SW')} {datetime.datetime(2013, 6, 20, 9, 33, 55, 387294): ('4.3', 'W')} For the latter, what is the best way to update the dictionary? Should I just dump it to a DB and read it back? That seems like an excessive number of reads/writes a day for so little data.
Python "in memory DB style" data types
0
0
0
221
17,218,451
2013-06-20T16:03:00.000
0
0
1
0
python,regex,variables
17,218,529
2
false
0
0
How about a pattern "%s(.*?)%s" % (oneTwoThree, dF5)? Then you can do a re.search on that pattern and use the groups function on the result. Something along the lines of: pattern = "%s(.*?)%s" % (oneTwoThree, dF5); matches = re.search(pattern, text); if matches: print matches.groups(). re.findall, if used instead of re.search, can save you the trouble of grouping the matches.
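A self-contained sketch of that idea; the variable names echo the question's 123- and df5 patterns, and the sample text is made up:

```python
import re

one_two_three = r"\d{3}-"   # matches things like "123-"
d_f5 = r"[a-z]{2}\d"        # matches things like "df5"

text = "noise 123-payload-here df5 trailing"
pattern = "%s(.*?)%s" % (one_two_three, d_f5)

match = re.search(pattern, text)
if match:
    print(match.group(1))  # -> "payload-here " (everything between the anchors)
```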
1
0
0
I've looked at several posts and other forums to find an answer related to my question, but nothing has come up specific to what I need. As a heads up, I'm new to programming and don't possess the basic foundation that most would. I know bash, a little Python, and am decent with REs. I'm trying to create a Python script, using REs to parse through data and give me the output that I need/want. My output will consist of 4 values, all originating from one line. The line being read in is thrown together with no defined delimiter (hence the reason for my program). In order to find one of the 4 values, I have to say: look for 123- and give me everything after that, but stop at df5. The 123- is not constant, but is defined by a regular expression that works; the same goes for df5. I assigned both REs to variables. How can I use those variables to find what I want between the two... Please let me know if this makes sense.
Using variable in regular expression in Python
0
0
0
1,615
17,219,344
2013-06-20T16:51:00.000
0
0
0
0
python,django,matlab,web-applications,octave
17,220,530
2
false
1
0
You could always just host the MATLAB code and sample .mat files on a website for people to download and play with on their own machines, if they have a MATLAB license. If you are looking at having some sort of embedded app on your website, you are going to need to rewrite your code in another language. The project sounds doable in Python using the packages you mentioned; however, hosting it online will not be as simple as running a program from your command line. Django would help you build a website, but I do not think it will allow you to just run a Python script in the browser.
2
0
1
Hi, I have a MATLAB function that graphs the trajectory of different divers (the Olympic sport of diving) depending on the position of a slider at the bottom of the window. The file takes multiple .mat files (with trajectory information in 3 dimensions) as input. I am trying to put this MATLAB app onto the internet. What would be the easiest/most efficient way of doing this? I have experience programming in Python and a little experience programming in Java. Here are the options that I have considered: 1. MATLAB Builder JA (too expensive) 2. Rewrite the entire MATLAB function in Java (not experienced enough in Java) 3. Implement the MATLAB file using mlabwrapper and deploy it as a web app using Django (having a lot of trouble installing mlabwrapper on OS X) 4. Rewrite the MATLAB function in Python using SciPy, NumPy, and matplotlib, and then use Django. I do not have any experience with Django, but I am willing to learn it. Can someone point me in the right direction?
MATLAB to web app
0
0
0
934
17,219,344
2013-06-20T16:51:00.000
1
0
0
0
python,django,matlab,web-applications,octave
17,224,492
2
false
1
0
A cheap and somewhat easy way (with limited functionality) would be: install MATLAB on your server, or use the MATLAB Compiler to create a stand-alone executable (not sure if that comes with your version of MATLAB or not). If you don't have the compiler and can't install MATLAB on your server, you could always go to a freelancing site such as elance.com and pay someone $20 to compile your code for you into a Windows exe file. Either way, the end goal is to make your MATLAB function callable from the command line (the server will be doing the calling). You could make your input arguments the slider value and the .mat files you want to open, and the compiled version of MATLAB will know how to handle this. Once you do that, have the code create a plot and save an image of it (using getframe or other figure export tools; check out the FEX). Have your server output this image to the client. Tah-dah, you have a crappy low-cost workaround! I hope this helps; if not, I apologize!
2
0
1
Hi, I have a MATLAB function that graphs the trajectory of different divers (the Olympic sport of diving) depending on the position of a slider at the bottom of the window. The file takes multiple .mat files (with trajectory information in 3 dimensions) as input. I am trying to put this MATLAB app onto the internet. What would be the easiest/most efficient way of doing this? I have experience programming in Python and a little experience programming in Java. Here are the options that I have considered: 1. MATLAB Builder JA (too expensive) 2. Rewrite the entire MATLAB function in Java (not experienced enough in Java) 3. Implement the MATLAB file using mlabwrapper and deploy it as a web app using Django (having a lot of trouble installing mlabwrapper on OS X) 4. Rewrite the MATLAB function in Python using SciPy, NumPy, and matplotlib, and then use Django. I do not have any experience with Django, but I am willing to learn it. Can someone point me in the right direction?
MATLAB to web app
0.099668
0
0
934
17,219,512
2013-06-20T16:59:00.000
0
0
0
0
python,authentication,client,restful-authentication
21,565,425
1
false
1
0
So, you've officially bumped into one of the most difficult questions in modern web development (in my humble opinion): web authentication. Here's the theory behind it (I'll answer your question in a moment). When you're building complicated apps with more than a few users, particularly if you're building apps that have both a website AND an API service, you're always going to bump into authentication issues no matter what you're doing. The ideal way to solve these problems is to have an independent auth service on your network: some sort of internal API that EXCLUSIVELY handles user creation, editing, and deletion. There are a number of benefits to doing this: You have a single authentication source that all of your application components can use: your website can use it to log people in behind the scenes, your API service can use it to authenticate API requests, etc. You have a single service which can smartly manage user caching -- it's pretty dangerous to implement user caching all over the place (which is what typically happens when you're dealing with multiple authentication methods: you might cache users for the API service, but fail to cache them with the website; stuff like this causes problems). You have a single service which can be scaled INDEPENDENTLY of your other components. Think about it this way: what piece of application data is accessed more than any other? In most applications, it's the user data. For every request user data will be needed, and this puts a strain on your database / cache / whatever you're doing. Having a single service which manages users makes it a lot nicer for you to scale this part of the application stack easily. Overall, authentication is really hard. For the past two years I've been the CTO at OpenCNAM, and we had the same issue (a website and API service). For us to handle authentication properly, we ended up building an internal authentication service as described above, then using Flask-Login to handle authenticating users via the website, and a custom method to authenticate users via the API (just an HTTP call to our auth service). This worked really well for us, and allowed us to scale from thousands of requests to billions (by isolating each component in our stack, and focusing on user auth as a separate service). Now, I wouldn't recommend this for apps that are very simple, or apps that don't have many users, because it's more hassle than it's worth. If you're looking for a third party solution, Stormpath looks pretty promising (just google it). Anyhow, hope that helps! Good luck.
1
0
0
Here is the situation: we use Flask for website application development. Also, on the website server, we host a RESTful service, and we use Flask-Login as the authentication tool for BOTH the web application and the RESTful service (accessing the RESTful service from browsers). Later, we found that we also need to access the RESTful service from client calls (Python), so NO sessions, cookies, etc. This gives us a headache regarding the current authentication of the RESTful service. On the web there exists a whole bunch of ways to secure a RESTful service from client calls, but there seems to be no easy way for them to live together with our current Flask-Login setup such that we do not need to change our web application a lot. So here are the questions: Is there an easy way (framework) so the RESTful services can support multiple authentication methods (protocols) at the same time? Is this even a good practice? Many thanks!
Flask login together with client authentication methods for RESTful service
0
0
1
527
17,220,296
2013-06-20T17:43:00.000
0
0
1
0
python,iteration,complexity-theory
17,220,514
4
false
0
0
ninjagecko already answered the specific case of lists (and tuples, FWIW); the general answer is: "it depends on the 'container' implementation" (I quote 'container' because it can just as well be a generator). For built-in types, you want sets or dicts for fast lookup. Otherwise, you'd better check the docs or implementation for your container.
2
3
0
I have a list of over 100000 values and I am iterating over these values and checking if each one is contained in another list of random values (of the same size). I am doing this by using if item[x] in randomList. How efficient is this? Does python do some sort of hashing for each container or is it internally doing a straight up search of the other container to find the element I am looking for? Also, if it does this search linearly, then does it create a dictionary of the randomList and do the lookup with that?
How efficient is Python's 'in' or 'not in' operators, for large lists?
0
0
0
2,792
17,220,296
2013-06-20T17:43:00.000
8
0
1
0
python,iteration,complexity-theory
17,220,436
4
true
0
0
in is implemented by the __contains__ magic method of the object it applies to, so the efficiency is dependent upon that. For instance, set, dict and frozenset will be hash based lookups, while list will require a linear search. However, xrange (or range in Python 3.x) has a __contains__ method that doesn't require a linear search, but instead can use the start/stop/step information to determine a truthy value. (eg: 7 in xrange(4, 1000000) isn't done linearly). Custom classes are free to implement __contains__ however they see fit but ideally should provide some information about how it does so in documentation if "not obvious".
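A short sketch contrasting the lookup behaviours described (timings are indicative only):

```python
import timeit

setup = "data_list = list(range(100000)); data_set = set(data_list)"

# Linear scan: cost grows with the element's position in the list.
print(timeit.timeit("99999 in data_list", setup=setup, number=100))
# Hash lookup: effectively constant time.
print(timeit.timeit("99999 in data_set", setup=setup, number=100))

# range answers membership from start/stop/step, with no scan
# (Python 3; xrange behaves the same way in Python 2):
print(7 in range(4, 1000000))  # True, computed arithmetically

class Evens(object):
    def __contains__(self, n):
        # Custom containers define 'in' however they like.
        return n % 2 == 0

print(10 in Evens())  # True
```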
2
3
0
I have a list of over 100000 values and I am iterating over these values and checking if each one is contained in another list of random values (of the same size). I am doing this by using if item[x] in randomList. How efficient is this? Does python do some sort of hashing for each container or is it internally doing a straight up search of the other container to find the element I am looking for? Also, if it does this search linearly, then does it create a dictionary of the randomList and do the lookup with that?
How efficient is Python's 'in' or 'not in' operators, for large lists?
1.2
0
0
2,792
17,220,385
2013-06-20T17:47:00.000
3
0
1
0
python,virtualenv,pyramid,setup.py
17,221,439
2
true
0
0
pip and setup.py develop should not be mixed. The latter uses easy_install, which is not compatible with pip in the case of namespace packages (these are packages that are installed as subpackages of another parent, such as zope.sqlalchemy installing only the .sqlalchemy part of the full zope.* package). Namespace packages will cause problems between pip and easy_install. On the other hand, most other packages will work fine with whatever system you choose, but it's better for you to be consistent. Another thing to double-check is that you are actually installing the packages into your virtualenv. You should be able to open the Python CLI in your virtualenv and import the package. If you can't, then it's probably not installed.
2
6
0
I have installed pyramid and successfully created a project, but when I try to add new packages to the setup.py requirements they always give me a pkg_resources.DistributionNotFound error. The packages are installed, and this only happens if I try to install new packages after I run ../bin/python3.3 setup.py develop It doesn't matter what packages it is. The only way I have solved (not really), is setting up a new virtual environment and installing the packages before I create the project and run setup.py develop. Obviously I'm doing something wrong. Is there anything I need to do beside pip install the package? Is this some kind of pathing issue? I'm new to this so your help would be so very appreciated! *Adding my installation process in case anyone happens to see something wrong with it. Also including my wsgi file. Created a virtualenv easy_install-3.3 env Activated the virtualenv source env/bin/activate Installed pyramid cd env ./bin/easy_install-3.3 pyramid Created a project ./bin/pcreate -s starter myprojectname Ran setup.py cd myprojectname ../bin/python3.3 setup.py develop At this point I get the following error: pkg_resources.DistributionNotFound: waitress Installed Waitress ../bin/easy_install-3.3 waitress Ran setup.py again (not sure if I should be doing this) ../bin/python3.3 setup.py develop Still see the error My .wsgi file contains the following (not sure if this is important to this question or not): activate_this = "/home/account/env/bin/activate_this.py" execfile(activate_this,dict(__file__=activate_this)) import os import sys path = '/home/account/env/lib/python3.3/site-packages' if path not in sys.path: sys.path.append(path) from pyramid.paster import get_app application = get_app('/home/account/env/myprojectname/production.ini', 'main')
How to install new packages for pyramid without getting a pkg_resources.DistributionNotFound: once a project has been created
1.2
0
0
2,273
17,220,385
2013-06-20T17:47:00.000
5
0
1
0
python,virtualenv,pyramid,setup.py
17,241,201
2
false
0
0
Using Michael's suggestions above, I was able to solve the issue. I didn't even need to install any packages manually. Once everything was working, if I added a requirement to my setup.py file (which is created when you make a pyramid project) and ran pip install -e . again, everything was installed perfectly. The problem was caused by how I was installing things. Here is my final process in case it helps any other newbies to pyramid: Created a virtual environment virtualenv-3.3 env Activated the env source env/bin/activate Moved to the environment directory cd env Installed pyramid pip install pyramid Created a project ./bin/pcreate -s starter myprojectname Moved to my project directory cd myprojectname Ran install pip install -e . Updated my wsgi file with my env and project settings Reloaded the app and jumped for joy to see the lovely pyramid starter page
2
6
0
I have installed pyramid and successfully created a project, but when I try to add new packages to the setup.py requirements they always give me a pkg_resources.DistributionNotFound error. The packages are installed, and this only happens if I try to install new packages after I run ../bin/python3.3 setup.py develop It doesn't matter what packages it is. The only way I have solved (not really), is setting up a new virtual environment and installing the packages before I create the project and run setup.py develop. Obviously I'm doing something wrong. Is there anything I need to do beside pip install the package? Is this some kind of pathing issue? I'm new to this so your help would be so very appreciated! *Adding my installation process in case anyone happens to see something wrong with it. Also including my wsgi file. Created a virtualenv easy_install-3.3 env Activated the virtualenv source env/bin/activate Installed pyramid cd env ./bin/easy_install-3.3 pyramid Created a project ./bin/pcreate -s starter myprojectname Ran setup.py cd myprojectname ../bin/python3.3 setup.py develop At this point I get the following error: pkg_resources.DistributionNotFound: waitress Installed Waitress ../bin/easy_install-3.3 waitress Ran setup.py again (not sure if I should be doing this) ../bin/python3.3 setup.py develop Still see the error My .wsgi file contains the following (not sure if this is important to this question or not): activate_this = "/home/account/env/bin/activate_this.py" execfile(activate_this,dict(__file__=activate_this)) import os import sys path = '/home/account/env/lib/python3.3/site-packages' if path not in sys.path: sys.path.append(path) from pyramid.paster import get_app application = get_app('/home/account/env/myprojectname/production.ini', 'main')
How to install new packages for pyramid without getting a pkg_resources.DistributionNotFound: once a project has been created
0.462117
0
0
2,273
17,221,936
2013-06-20T19:14:00.000
0
0
1
0
python,events
17,221,983
2
false
1
0
Instead of calling that function directly, make a wrapper function that calls both: function a (your function) and function b (the exit code). Call this wrapper function instead.
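A hedged sketch of that wrapper idea (function names invented); a try/finally makes it robust across all exit points, including exceptions:

    def on_exit():
        print('bookkeeping done')   # hypothetical exit action

    def function_a():
        return 'real work'          # hypothetical original function

    def meta_function():
        try:
            return function_a()
        finally:
            on_exit()               # runs on every exit path of function_a

The same pattern can be packaged as a decorator if many functions need the same exit hook.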
1
1
0
I'm in the process of adding a bit of code to a django system that needs to make a specific function call upon exiting a function. Most of the code that I'm updating has several exit points throughout, which requires that I add a one-liner immediately before each one of them. A wee bit ugly. What I'd like to do is to simply say, "upon exiting this function, do this", much like the atexit module (from what I've found so far anyway), but to be triggered upon exiting the function rather than the entire script. Is there anything I can use that works that way? (I'm using Python 2.7.3 by the way)
Is there a way in python to trigger an action upon exiting a function
0
0
0
347
17,222,824
2013-06-20T20:06:00.000
4
0
0
0
python,sqlalchemy,flask,flask-sqlalchemy
17,223,377
1
true
1
0
The extensions exist to extend the functionality of Flask, and reduce the amount of code you need to write for common usage patterns, like integrating your application with SQLAlchemy in the case of flask-sqlalchemy, or login handling with flask-login. Basically they are just clean, reusable ways to do common things with a web application. But I see your point with flask-sqlalchemy; it's not really that much of a code saver to use it, but it does give you the scoped session automatically, which you need in a web environment with SQLAlchemy. Other extensions like flask-login really do save you a lot of boilerplate code.
1
1
0
Lets take SQLAlchemy as an example. Why should I use the Flask SQLAlchemy extension instead of the normal SQLAlchemy module? What is the difference between those two? Isn't is perfectly possible to just use the normal module in your Flask app?
Why do Flask Extensions exist?
1.2
1
0
99
17,236,675
2013-06-21T13:37:00.000
0
0
0
1
python,linux,module,installation
63,105,985
6
false
0
0
You could also have a folder that contains all your global modules, e.g. my_modules/, then locate the site-packages/ folder of the Python version you are using (you will find it easily by using this command: python -m site). Now, cd to this folder and create a custom file with a .pth extension. In this file, add the absolute path to the folder that contains all the modules you want to make available system-wide and save it. Your module should be available now.
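A hedged sketch of that recipe (all paths here are assumptions for illustration):

    import site
    print(site.getsitepackages())   # locate the site-packages/ folder

    # write a custom .pth file pointing at the folder that holds your modules
    pth = '/usr/local/lib/python2.7/site-packages/my_modules.pth'  # assumed path
    with open(pth, 'w') as f:
        f.write('/home/me/my_modules\n')                           # assumed folder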
1
14
0
I made myself a little module which I happen to use quite a lot. Whenever I need it I simply copy it to the folder in which I want to use it. Since I am lazy I wanted to install it so that I can call it from anywhere, even the interactive prompt. So I read a bit about installing here, and concluded I needed to copy the file over to /usr/local/lib/python2.7/site-packages. That however, doesn't seem to do anything. Does anybody know where I need to copy my module for it to work system wide?
How to make my Python module available system wide on Linux?
0
0
0
38,144
17,241,004
2013-06-21T17:25:00.000
75
0
0
0
python,pandas
17,241,104
8
false
0
0
You can use df.index to access the index object and then get the values in a list using df.index.tolist(). Similarly, you can use df['col'].tolist() for Series.
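For instance, a minimal sketch with made-up data:

    import pandas as pd

    df = pd.DataFrame({'col': [10, 20, 30]}, index=['a', 'b', 'c'])
    print(df.index.tolist())    # ['a', 'b', 'c']
    print(df['col'].tolist())   # [10, 20, 30]
    print(df['col'].values)     # numpy array: [10 20 30]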
1
289
1
Do you know how to get the index or column of a DataFrame as a NumPy array or python list?
How do I convert a pandas Series or index to a Numpy array?
1
0
0
558,431
17,242,048
2013-06-21T18:30:00.000
1
0
1
0
python,python-multithreading
17,278,340
2
true
0
0
Problem resolved. Pythoncom was included in my support libraries, but being a C extension my PyDev environment was not able to get the CoInitialize as a global variable. So I explicitly added CoInitialize through: Window->Preferences->PyDev->Editor->Code Analysis Here, in the 'Undefined' tab (since CoInitialize was coming up as undefined error in PyDev) add CoInitialize (comma separated). Now restart Aptana. Error is gone and everything works just fine!
1
1
0
I am learning multithreading in Python. I was going through examples online and trying out multithreading for my WMI module which remotely connects to the remote machine. But, when I use pythoncom.CoInitialize(), it gives me an error saying that 'CoInitialize is an undefined variable'. I am unable to figure out what is wrong. Any help would be really appreciated
CoInitialize() is undefined - Python error
1.2
0
0
2,434
17,242,335
2013-06-21T18:48:00.000
0
0
0
0
python,django,date,python-datetime
17,242,425
1
false
1
0
There's no predefined query field lookup that can do this for weeks. I'd either reuse code from the WeekArchiveView date-based generic view, or ideally directly subclass it - it already handles the filtering.
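If you do want a quick queryset-level workaround, one hedged option (assuming a DateField named date on the questioner's RelevantObject model) is to convert the week number to a date range yourself:

    import datetime

    year, week = 2013, 32
    # %W/%w parsing yields the Monday that starts the given ISO-style week
    start = datetime.datetime.strptime('%d %d 1' % (year, week), '%Y %W %w').date()
    end = start + datetime.timedelta(days=7)
    items = RelevantObject.objects.filter(date__gte=start, date__lt=end)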
1
2
0
Is there a way to easily filter Date objects by week? Basically I want to do something like items= RelevantObject.objects.filter(date__week=32) I've found that it's possible to do this for year and month, but it doesn't seem like the capability for week is built in. Is there a "right" way to do this? It seems like it shouldn't be too difficult. Thanks
Django filter by week
0
0
0
1,992
17,242,929
2013-06-21T19:29:00.000
1
0
0
0
selenium,python-3.x,local,phantomjs,webpage-screenshot
17,243,970
1
false
1
0
I feel silly now. I needed another forward slash: driver.get("file:///...
1
0
0
I am using Selenium with PhantomJS as the webdriver in order to render webpages using Python. The pages are on my local drive. I need to save a screenshot of the webpages. Right now, the pages all render completely black. The code works perfect on non-local webpages. Is there a way to specify that the page is local? I tried this: driver.get("file://... but it did not work. Thanks!
Screenshot local page with Selenium and PhantomJS
0.197375
0
1
630
17,243,620
2013-06-21T20:16:00.000
20
1
1
0
python,python-2.7,python-3.x
17,243,722
3
false
0
0
Leaving aside the speed issue, which is often based on where you create the itemgetter or lambda function, I personally find that itemgetter is really nice for getting multiple items at once: for example, itemgetter(0, 4, 3, 9, 19, 20) will create a function that returns a tuple of the items at the specified indices of the list-like object passed to it. To do that with a lambda, you'd need lambda x: (x[0], x[4], x[3], x[9], x[19], x[20]), which is a lot clunkier. (And then some packages such as numpy have advanced indexing, which works a lot like itemgetter() except built in to normal bracket notation.)
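For example, both forms as a sort key (the sample data is invented):

    from operator import itemgetter

    rows = [(3, 'c', 9), (1, 'a', 7), (2, 'b', 8)]
    print(sorted(rows, key=itemgetter(0)))     # sort by first item
    print(sorted(rows, key=lambda x: x[0]))    # equivalent lambda
    print(itemgetter(0, 2)(rows[0]))           # (3, 9) - multiple items at once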
1
40
0
I was curious if there was any indication of which of operator.itemgetter(0) or lambda x:x[0] is better to use, specifically in sorted() as the key keyword argument as that's the use that springs to mind first. Are there any known performance differences? Are there any PEP related preferences or guidance on the matter?
operator.itemgetter or lambda
1
0
0
10,179
17,243,749
2013-06-21T20:24:00.000
0
0
1
1
python,wxpython,coderunner
17,243,806
4
false
0
0
CodeRunner should work fine with Python 2.7, but you'll have to make sure that a compatible version of wxPython is installed in your site-packages. To verify that wxPython works correctly, you should start up a Python instance in your terminal and run some test code (e.g., import the library and run some of its functions). If this works correctly, then CodeRunner should be able to run that code accordingly (because it uses the same process to run your apps).
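A hedged quick check you can paste into the interpreter (wx.version() assumes a reasonably recent wxPython build):

    import sys
    print(sys.version)     # shows exactly which interpreter is running

    import wx
    print(wx.version())    # shows the installed wxPython version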
1
1
0
I'm using a program called CodeRunner to write Python code. I want to install wxPython, but how do I figure out what version of Python I'm using? When I do python -v in Terminal it says I'm on 2.7, but is that the same version that CodeRunner is using? Is there a way for me to find out?
Python programming with CodeRunner
0
0
0
1,192
17,247,981
2013-06-22T06:27:00.000
3
0
1
0
python,proxy
20,379,881
2
false
0
0
Or set the HTTP_PROXY / HTTPS_PROXY environment variables instead.
1
9
0
I am not a super technical person. But I know that in Windows, if I install R using the internet2 option, then I can download whatever package I want in it. I installed Python, and every time I try to download or install a package (e.g. using easy_install) it fails. What can I do to make Python automatically detect my proxy settings and just install the packages?
How to tell Python to automatically use the proxy setting in Windows XP like R's internet2 option?
0.291313
0
0
57,214
17,250,477
2013-06-22T12:00:00.000
4
0
1
0
python,customization
17,250,500
3
false
0
0
No, Python (and most mainstream languages) does not allow this kind of customization. In Python the restriction is quite intentional — an expression such as a equals b would look ungrammatical to any reader familiar with Python.
1
2
0
In order to make Python look more familiar, I've tried to assign an operator symbol to a variable's name,just for educational purposes: import operator equals = operator.eq This seems to work fine for equals(a,b) but not for a equals b Is there a way to express that a equals b instead of a == b
How to change the look of Python operators?
0.26052
0
0
170
17,252,951
2013-06-22T16:38:00.000
-1
0
1
0
python
17,253,029
5
false
0
0
not x is true for a wide variety of values, e.g. 0, None, "", False, [], {}, etc. x == None is only true for the one specific value None. If x is a class instance, then both not x and x == None will be false (unless someone is playing silly buggers with the class definition), but that doesn't mean that those are equivalent expressions.
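A hedged example of such "silly buggers" (the class is invented for illustration):

    class Agreeable(object):
        def __eq__(self, other):
            return True    # compares equal to anything, even None

    x = Agreeable()
    print(not x)       # False - instances are truthy by default
    print(x == None)   # True  - the overridden __eq__ fires
    print(x is None)   # False - identity cannot be faked

This is also why the idiomatic test for None is x is None rather than x == None.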
1
9
0
Can not x and x == None give different answers if x is a class instance? I mean, how is not x evaluated if x is a class instance?
Difference between 'not x' and 'x==None' in python
-0.039979
0
0
2,782
17,253,251
2013-06-22T17:10:00.000
0
0
1
0
python,arrays,file
17,253,278
2
false
0
0
In general, searching a file will be slower because hard drives are slow. Loading the names and dates into your array will generally run faster, so the first option will be better than the second.
1
0
0
I want to create a name-day calendar, but I am struggling with a question: which method is preferable? Load all the names and dates in an array Create a file and search it every time I want to find what day corresponds to a specific name Other method?
Array vs file, in simple python app
0
0
0
36
17,253,919
2013-06-22T18:19:00.000
0
0
1
1
python,ascii,less-unix
17,254,020
2
false
0
0
Is the remote machine maybe set to Unicode (modern Linux distros are)? If so, make sure you are running PuTTY with the Unicode (UTF-8) translation setting too.
1
1
0
I'm connected to a Linux machine using PuTTY. On the Linux machine, I'm running a python script that takes a list of characters and prints each character, together with its index, in order. Some of the characters in my list fall outside the range of printable ascii characters. These irregular characters are corrupting my output. Sometimes they simply don't appear, while other times they actually delete large chunks of valid text. I thought I could correct this by turning off buffering, but the problem still occurs when I run the script using the python -u flag. Interestingly, this problem does not occur when I pipe my input to the less reader. In less, irregular characters show up like this: <A9>, <A7>, ^V, ^@, etc. No chunks of text are missing. I'm not sure where my problem lies. Is there a way to configure my terminal so that unpiped output will still show irregular characters?
How can I display out-of-range ascii characters?
0
0
0
527
17,254,295
2013-06-22T19:01:00.000
2
0
1
0
python,virtualenv,virtualenvwrapper
17,254,371
1
true
1
0
Only Python packages installed inside the virtual environment are isolated. System packages are not.
1
1
0
When I create a new virtualEnv, if I install Django inside a new environment, it's isolated. But what if I'm inside a virtualEnv and I install emacs, and mysql or such. These packages have nothing to do with python. Would the emacs and mysql packages that I installed be global or isolated to one virtualEnv only? Thanks
What exactly is virtualEnv isolating? Just python related packages or more?
1.2
0
0
457
17,254,603
2013-06-22T19:39:00.000
1
1
1
0
python,python-2.7,import,module
17,254,692
5
false
0
0
Interesting question. As you know, in PHP, you can separate your code by using include, which literally takes all the code in the included file and puts it wherever you called include. This is convenient for writing web applications because you can easily divide a page into parts (such as header, navigation, footer, etc.). Python, on the other hand, is used for way more than just web applications. To reuse code, you must rely on functions or good old object-oriented programming (PHP also has functions and object-oriented programming, FYI). You write functions and classes in a file and import it in another file. This lets you access the functions or use the classes you defined in the other file. Let's say you have a function called foo in file file1.py. From file2.py, you can write import file1. Then, call foo with file1.foo(). Alternatively, write from file1 import foo and then you can call foo with foo(). Note that the from form lets you call foo directly. For more info, look at the python docs.
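As a hedged two-file sketch of that last paragraph:

    # file1.py
    def foo():
        return 'hello from file1'

    # file2.py
    import file1
    print(file1.foo())        # qualified call

    from file1 import foo
    print(foo())              # direct call via the `from` form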
3
5
0
I come from a PHP (as well as a bunch of other stuff) background and I am playing around with Python. In PHP when I want to include another file I just do include or require and everything in that file is included. But it seems the recommended way to do stuff in python is from file import but that seems to be more for including libraries and stuff? How do you separate your code amongst several files? Is the only way to do it, to have a single file with a whole bunch of function calls and then import 15 other files?
Little confused with import python
0.039979
0
0
7,295
17,254,603
2013-06-22T19:39:00.000
0
1
1
0
python,python-2.7,import,module
17,257,186
5
false
0
0
There's an execfile() function here which does something vaguely comparable to PHP's include, but it's almost certainly something you don't want to do. As others have said, it's just a different model and a different programming need in Python. Your code is going to go from function to function, and it doesn't really make a difference in which order you put them, as long as they're in an order where you define things before you use them. You're just not trying to end up with some kind of ordered document like you typically are with PHP, so the need isn't there.
3
5
0
I come from a PHP (as well as a bunch of other stuff) background and I am playing around with Python. In PHP when I want to include another file I just do include or require and everything in that file is included. But it seems the recommended way to do stuff in python is from file import but that seems to be more for including libraries and stuff? How do you separate your code amongst several files? Is the only way to do it, to have a single file with a whole bunch of function calls and then import 15 other files?
Little confused with import python
0
0
0
7,295
17,254,603
2013-06-22T19:39:00.000
1
1
1
0
python,python-2.7,import,module
17,255,642
5
false
0
0
On a technical level, a Python import is very similar to a PHP require, as it will execute the imported file. But since Python isn't designed to ultimately generate an HTML file, the way you use it is very different. Typically a Python file will, on the module level, not include much executable code at all, but definitions of functions and classes. You then import them and use them as a library. Hence having things like header() and footer() makes no sense in Python. Those are just functions. Call them like that, and the result they generate will be ignored. So how do you split up your Python code? Well, you split it up into functions and classes, which you put into different files, and then import.
3
5
0
I come from a PHP (as well as a bunch of other stuff) background and I am playing around with Python. In PHP when I want to include another file I just do include or require and everything in that file is included. But it seems the recommended way to do stuff in python is from file import but that seems to be more for including libraries and stuff? How do you separate your code amongst several files? Is the only way to do it, to have a single file with a whole bunch of function calls and then import 15 other files?
Little confused with import python
0.039979
0
0
7,295
17,257,056
2013-06-23T01:56:00.000
2
0
0
0
python,numpy,matrix-inverse
17,257,084
1
true
0
0
It may very well have to do with the smallness of the values in the matrix. Some matrices that are not, in fact, mathematically singular (with a zero determinant) are totally singular from a practical point of view, in that the math library one is using cannot process them properly. Numerical analysis is tricky, as you know, and how well it deals with such situations is a measure of the quality of a matrix library.
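A hedged demonstration of a matrix that is non-singular on paper but singular in floating point (values invented):

    import numpy as np

    # the 1e-17 perturbation is below float64 resolution at magnitude 2.0,
    # so the second row rounds to an exact copy of the first
    m = np.array([[1.0, 2.0], [1.0, 2.0 + 1e-17]])
    print(np.linalg.cond(m))          # enormous (infinite) condition number
    try:
        np.linalg.inv(m)
    except np.linalg.LinAlgError as e:
        print(e)                      # "Singular matrix"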
1
1
1
While trying to compute inverse of a matrix in python using numpy.linalg.inv(matrix), I get singular matrix error. Why does it happen? Has it anything to do with the smallness of the values in the matrix. The numbers in my matrix are probabilities and add up to 1.
Inverse of a Matrix in Python
1.2
0
0
7,873
17,260,920
2013-06-23T12:53:00.000
2
0
0
0
python,django,class
17,261,090
1
true
1
0
I see a lot of reasons not to do it. Dedicated caching solutions offer robust memory management: running as a daemon, cache invalidation, setting lifetimes on the data. By using a class variable, you are forgoing all of the above. Additionally, caching solutions provide a clean, documented API for interfacing with them.
1
1
0
I have a class which parses multiple urls/feeds and stores hashes of entries. Formerly I had put the hashes into a session variable, but instead of hitting the db I now switched to a class variable in the form of {request.user.id : [hashes]}. Is this bad practice? Any reasons against it?
Django: Reasonable to store session data in Class Variable?
1.2
0
0
192
17,261,062
2013-06-23T13:11:00.000
0
0
1
0
python,math,calculator
17,261,126
4
false
0
0
Most languages treat number literals starting with 0 as octal numbers, so 030 in octal is 3 * 8 + 0 = 24 in decimal.
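A hedged Python 2 illustration (Python 3 rejects the 030 literal outright):

    print(030)          # 24 - a leading zero means octal in Python 2
    print(int("030"))   # 30 - int() on a decimal string ignores leading zeros

So if the numbers arrive as strings, converting with int("030") gives the 30 you want.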
1
0
0
I want Python to recognize numbers like 030 as 30. For some reason, when I write "print 030" it returns 24, and when I write "print 020" it returns 16. What's the reason, and how can I make it treat it as 30? Thanks
How Python treats numbers with 0 at the beginning?
0
0
0
133
17,262,542
2013-06-23T15:56:00.000
0
0
1
0
git,python-2.7,github,py2exe,gitignore
17,262,566
1
true
0
0
You can have a .gitignore in each folder, and you can use a !*.pyc entry (or list specific files) to unignore them.
1
0
0
I'm using py2exe to build windows executable from a python script. This creates a collect-2.7 directory which contains compiled version of all the core python files in .pyc. I need to commit these compiled files with the .exe to work with. But .pyc is added in .gitignore which omits the collect-2.7 directory. How could I add the collect-2.7 directory without removing .pyc in .gitignore?
How to add a directory which contains .pyc to git with .pyc in .gitignore?
1.2
0
0
100
17,263,509
2013-06-23T17:45:00.000
0
0
1
0
python,scrapy,twisted
29,180,806
3
false
1
0
Ensure you have the Python development headers (apt-get install build-essential python-dev), then install Scrapy with pip (pip install Scrapy).
1
5
0
In Ubuntu 13.04, I have installed Scrapy for python-2.7, from the tarball. Executing a crawl command results in the below error: ImportError: Error loading object 'scrapy.telnet.TelnetConsole': No module named conch I've also tried installing twisted conch using easy_install and using the tarball. I have also removed the scrapy.egg and .info and the main scrapy folder from the python path. Reinstalling scrapy does not help as well. Can someone point me in the right direction?
Error while using Scrapy : ['scrapy.telnet.TelnetConsole': No module named conch twisted]
0
0
0
3,304
17,263,517
2013-06-23T17:45:00.000
-2
0
0
0
python,python-2.7,pexpect
17,265,035
1
false
0
0
pexpect allows you to specify a list of prompts to match against, so it caters explicitly to your use case. Consult the docstring of pexpect.spawn.expect().
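A hedged sketch (the script name and prompt strings are invented):

    import pexpect

    child = pexpect.spawn('./script1')           # hypothetical Script-1
    i = child.expect(['Prompt A', 'Prompt B'])   # returns the index of the match
    child.sendline('answer a' if i == 0 else 'answer b')
    child.expect('Prompt C')                     # C always comes last
    child.sendline('answer c')
    child.expect(pexpect.EOF)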
1
0
0
Script-1 can be run from a shell and will ask the user for 3 prompts, A, B, and C in that order. A or B will appear to the user, and C will always will appear. In other words, when Script-1 is run, the user will be prompted by question A or B. Once answered, the question C will always be prompted last. I want to write Script-2, which will use logic to answer prompts A, B, and C automatically in Python. Pexpect seems perfect for this, however, how would one use pexpect when there are two different prompts- A or B- that could be presented to it? Thank you.
Pexpect Multiple, Different Prompts
-0.379949
0
0
704
17,265,027
2013-06-19T20:39:00.000
0
0
0
0
python,http,dns
17,265,028
2
true
1
0
You can't. Not only can pages be dynamically generated based on backend database data and search queries or other input that your program supplies to the website, but there is a nearly infinite list of possible pages, and the only way to know which ones exist is to test and see. The closest you can get is to scrape a website based on the hyperlinks between pages in the page content itself.
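A hedged Python 2 sketch of that link-following approach using BeautifulSoup, as the question suggested (the start URL is the question's example domain):

    import urllib2
    from urlparse import urljoin
    from bs4 import BeautifulSoup

    start = 'http://www.example.com/'
    seen, queue = set(), [start]
    while queue:
        url = queue.pop()
        if url in seen:
            continue
        seen.add(url)
        try:
            soup = BeautifulSoup(urllib2.urlopen(url).read())
        except Exception:
            continue                       # skip pages that fail to load or parse
        for a in soup.find_all('a', href=True):
            link = urljoin(url, a['href'])
            if link.startswith(start) and link not in seen:
                queue.append(link)
    print(seen)                            # every page reachable via hyperlinks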
1
4
0
How would I scrape a domain to find all web pages and content? For example: www.example.com, www.example.com/index.html, www.example.com/about/index.html and so on.. I would like to do this in Python and preferable with Beautiful Soup if possible..
How can I find (and scrape) all web pages on a given domain using Python?
1.2
0
1
1,831
17,267,246
2013-06-24T02:27:00.000
0
0
1
0
python,algorithm
17,267,437
3
true
0
0
@Aswin's comment is interesting. If you are sorting each time you insert an item, the call to sort() is O(n) rather than the usual O(n*log(n)). This is due to the way the sort (timsort) is implemented: it is close to linear on nearly-sorted input. However, on top of this, you'd need to shift a bunch of elements along the list to make space. This is also O(n), so overall calling .sort() each time is O(n). There isn't a way to keep a sorted list in better than O(n), because this shifting is always needed. If you don't need an actual list, the heapq (as mentioned in @Ignacio's answer) often covers the properties you do need in an efficient manner. Otherwise you can probably find one of the many tree data structures that will suit your cause better than a list.
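The search-then-insert pattern the question describes can be written with the standard bisect module; a minimal sketch (data invented):

    import bisect

    items = []
    for x in [5, 1, 4, 1, 9, 2]:
        bisect.insort(items, x)   # O(log n) search + O(n) shift per insert
    print(items)                  # [1, 1, 2, 4, 5, 9]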
1
3
0
What is the efficient way to read elements into a list and keep the list sorted apart from searching the place for a new element in the existing sorted list and inserting in there?
how to keep a list sorted as you read elements
1.2
0
0
566
17,268,766
2013-06-24T05:52:00.000
2
0
0
1
python,django,celery,django-celery,celery-task
17,270,294
1
true
1
0
For this type of situation I have in the past made an egg of all of my celery task code that I can simply rsync or copy in some fashion to my worker nodes. This way you can edit your celery code in a single project that can be used in your django app and on your worker nodes. So in summary: create a web-app-celery-tasks project, make it into an installable egg, and have a web-app package that depends on the celery-tasks egg.
1
1
0
I have a Django application which uses django-celery, celery and rabbitmq for offline, distributed processing. Now the setup is such that I need to run the celery tasks (and in turn celery workers) in other nodes in the network (different from where the Django web app is hosted). To do that, as I understand I will need to place all my Django code in these separate servers. Not only that, I will have to install all the other python libraries which the Django apps require. This way I will have to transfer all the django source code to all possible servers in the network, install dependencies and run some kind of an update system which will sync all the sources across nodes. Is this the right way of doing things? Is there a simpler way of making the celery workers run outside the web application server where the Django code is hosted ? If indeed there is no way other than to copy code and replicate in all servers, is there a way to copy only the source files which the celery task needs (which will include all models and views - not so small a task either)
How to seamlessly maintain code of django celery in a multi node environment
1.2
0
0
582
17,268,785
2013-06-24T05:53:00.000
3
0
1
0
python,python-2.7
17,268,865
1
true
0
0
Take this common module and repackage it as its own independent package, not in either of your web services. Then have the 2 web services (and any future ones) import this as an external module. And congratulations on thinking in terms of code reuse, and not just doing a quick-and-dirty copy/paste of the common module from web service 1 to web service 2!
1
1
0
Consider I have 2 web services in Python 2.7. Both have to use the same module I have developed (which may evolve). Now, the module's sources are in a subfolder of the first application, but I want the second to use it too. What would be the best practice to share the module between both applications?
Best practice to share a module between 2 applications
1.2
0
0
88
17,270,259
2013-06-24T07:37:00.000
4
0
0
0
python,quickfix
17,277,267
2
true
1
0
This message has a repeating group of two MDEntries. Field 290 appears in the first one, but not the second one. Your code is probably trying to extract 290 from the second one, and is thus getting the error. Group 1 (has 290): 279=2☺55=ZN☺48=00A0IN00ZNZ☺10455=ZNU3☺167=FUT☺207=CBOT☺15=USD☺200=201309☺290=1☺269=0☺270=126.4375☺271=9☺387=12237☺ Group 2 (lacks 290): 279=0☺269=0☺270=126.421875☺271=57☺ Examine your code that's extracting 290. Put in an if-field-is-present check so that it doesn't try to extract a field that's not there.
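A hedged sketch of that guard, assuming the standard generated field classes in the QuickFix Python bindings (the exact class and method names may differ in your build):

    import quickfix as fix

    def entry_position(group):
        # read tag 290 only when this repeating-group entry actually carries it
        field = fix.MDEntryPositionNo()
        if group.isSetField(field):
            group.getField(field)
            return field.getValue()
        return None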
1
1
0
I send a standard Market Data Incremental Refresh Request message (35 = V) and begin receiving incremental refreshes. Most of the time everything is absolutely fine and dandy. However, every once in a while, I get a strange Field not found message. For example: (8=FIX.4.2☺9=00221☺35=X☺49=XXX☺56=XXX☺34=4☺52=20130624-07:27:06.706☺262=XXX☺268=2☺279=2☺55=ZN☺48=00A0IN00ZNZ☺10455=ZNU3☺167=FUT☺207=CBOT☺15=USD☺200=201309☺290=1☺269=0☺270=126.4375☺271=9☺387=12237☺279=0☺269=0☺270=126.421875☺271=57☺10=176☺) Field not found (Message 4 Rejected: Conditionally Required Field Missing:290) (8=FIX.4.2☺9=119☺35=j☺34=3☺49=XXX☺52=20130624-07:27:07.037☺56=XXX☺45=4☺58=Conditionally Required Field Missing (290)☺372=X☺380=5☺10=144☺) I've cut some fields containing personal information or irrelevant information. But as you can see, it is explicitly message 4 that is being rejected, because it lacks field 290, when in fact 290 is clearly there. So, what's the deal? Has anyone seen this kind of behavior before? I'm using the Python bindings. Fix 4.2, Python 2.7. And for the sake of completeness, here's a message (the very next one) that didn't get rejected: (8=FIX.4.2☺9=00188☺35=X☺49=XXX☺56=XXX☺34=5☺52=20130624-07:27:06.706☺262=XXX☺268=1☺279=1☺55=ZB☺48=00A0IN00ZBZ☺10455=ZBU3☺167=FUT☺207=CBOT☺15=USD☺200=201309☺290=1☺269=1☺270=135.15625☺271=13☺387=5111☺10=156☺ (And no, the difference in tag 55 between the rejected and accepted messages is not the cause of this. QuickFix found 290 in plenty of 55=ZN messages.) I know this is a pretty technical question but am hoping there is a QuickFix guru out there who might know what's going on. Thanks for any help.
"Field not found" when field is present
1.2
0
0
2,904
17,270,293
2013-06-24T07:39:00.000
1
0
1
0
python,numpy,naming-conventions
17,270,654
2
false
0
0
numpy arrays and lists should occupy similar syntactic roles in your code, and as such I wouldn't try to distinguish between them by naming conventions. Since everything in Python is an object, the usual naming conventions are there not to help distinguish type so much as usage. Data, whether represented in a list or a numpy.ndarray, has the same usage. I agree that it's awkward that e.g. + means different things for lists and arrays. I implicitly deal with this by never putting anything like numerical data in a list, but rather always in an array. That way I know if I want to concatenate blocks of data I should be using numpy.hstack. That said, there are definitely cases where I want to build up a list through concatenation and turn it into a numpy array when I'm done. In those cases the code block is usually short enough that it's clear what's going on. Some comments in the code never hurt.
2
2
1
PEP8 has naming conventions for e.g. functions (lowercase), classes (CamelCase) and constants (uppercase). It seems to me that distinguishing between numpy arrays and built-ins such as lists is probably more important as the same operators such as "+" actually mean something totally different. Does anyone have any naming conventions to help with this?
how do you distinguish numpy arrays from Python's built-in objects
0.099668
0
0
415
17,270,293
2013-06-24T07:39:00.000
2
0
1
0
python,numpy,naming-conventions
17,270,547
2
false
0
0
You may use a prefix np_ for numpy arrays, thus distinguishing them from other variables.
2
2
1
PEP8 has naming conventions for e.g. functions (lowercase), classes (CamelCase) and constants (uppercase). It seems to me that distinguishing between numpy arrays and built-ins such as lists is probably more important as the same operators such as "+" actually mean something totally different. Does anyone have any naming conventions to help with this?
how do you distinguish numpy arrays from Python's built-in objects
0.197375
0
0
415
17,270,474
2013-06-24T07:52:00.000
4
0
0
0
python,django,django-models,phone-number
17,270,535
3
true
1
0
I always use a simple CharField, since phone numbers differ so greatly from region to region and country to country. Some people might even use characters instead of numbers - according to the letters on a numeric phone keypad. Maybe adding a ChoiceField for the country prefix is a good idea, but that is as far as I would go. I would never check a phone number field for any "invalid" data like dashes, spaces etc., because your users might dislike receiving an error message and, as a result, not submit a phone number at all. After all, a phone number will be dialled by a person in your office. And they can - and should - verify the number personally.
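A minimal hedged model sketch of that approach (the model name, field name, and length are assumptions):

    from django.db import models

    class Contact(models.Model):
        # free-form on purpose: dashes, spaces, '+' etc. are all allowed
        phone = models.CharField(max_length=30, blank=True)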
1
1
0
Trying to find the best way to store a phone # in Django. At the moment I'm using a CharField and checking that it's a number ...
What's the recommended way for storing a phone number?
1.2
0
0
191
17,270,936
2013-06-24T08:22:00.000
1
0
0
1
python,dbus
17,271,574
2
false
1
0
So does this mean that my website needs to run a DBUS service to allow me to call methods from it into my program? A dbus background process (a daemon) would run on your web server, yes. In fact dbus provides two daemons. One is a system daemon which permits objects to receive system information (e.g. printer availability), and the second is a general user application-to-application IPC daemon. It is the second daemon that you would definitely use for different applications to communicate. I am coding in Python, so I am not sure if I can run a Python script on my website that would allow me to run a DBUS service. There is no problem using Python; dbus has bindings for many languages (e.g. Java, Perl, Ruby, C++, Python). dbus objects can be mapped to Python objects. the most logical solution would be to run a single DBUS service that somehow imports method from different programs and can be queried by others who want to run those methods. Is that possible? Correct - dbus provides a mechanism by which a client process can create dbus objects which allow that process to offer services to other dbus-aware processes.
1
1
0
Ok, so, I might be missing the plot a bit here, but would really like some help. I am quite new to development etc. and have now come to a point where I need to implement DBus (or some other inter-program communication). I am finding the concept a bit hard to understand though. My implementation will be to use an HTML website to change certain variables to be used in another program, thus allowing for the program to be dynamically changed in its working. I am doing this on a raspberry PI using Raspbian. I am running a webserver to host my website, and this is where the confusion comes in. As far as I understand, DBus runs a service which allows you to call methods from a program in another program. So does this mean that my website needs to run a DBUS service to allow me to call methods from it into my program? To complicate things a bit more, I am coding in Python, so I am not sure if I can run a Python script on my website that would allow me to run a DBUS service. Would it be better to use JavaScript? For me, the most logical solution would be to run a single DBUS service that somehow imports method from different programs and can be queried by others who want to run those methods. Is that possible? Help would be appreciated! Thank you in advance!
Confused about DBus
0.099668
0
0
460
17,271,272
2013-06-24T08:41:00.000
0
0
1
0
python-2.7
17,271,340
2
false
0
0
If you know what attributes you're looking for, you can match them against dir(ParentClass).
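A hedged illustration (class and attribute names invented; hasattr() is the equivalent per-attribute check):

    class Parent(object):
        name = 'x'
        # `age` may or may not be defined on the parent

    class Child(Parent):
        pass

    wanted = ['name', 'age']
    present = [a for a in wanted if hasattr(Parent, a)]  # or: a in dir(Parent)
    print(present)   # ['name']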
1
1
0
I have to add all the variables that exist in the parent class to one list in the child class. However, not all the variables in the parent class are mandatory. Is there any way I can figure out whether a variable exists in the parent class or not?
how to verify if variable exist in parent class in python
0
0
0
431
17,271,319
2013-06-24T08:44:00.000
-3
0
1
0
python,macos,pip,installation
40,502,894
21
false
0
0
I recommend Anaconda to you. It's the leading open data science platform powered by Python. There are many basic packages installed. Anaconda (conda) comes with its own installation of pip.
1
1,672
0
I spent most of the day yesterday searching for a clear answer for installing pip (package manager for Python). I can't find a good solution. How do I install it?
How do I install pip on macOS or OS X?
-0.028564
0
0
2,711,123
17,271,955
2013-06-24T09:17:00.000
1
0
0
0
java,python,django,converter
17,273,153
1
true
1
0
Sorry, I can't comment as I have low rep. But would it be an option to serialize the Python models to JSON, and have Java use Jackson or GSON to parse them back into class objects?
1
0
0
I'm looking for a good way to "copy" / convert a model from Python source code to Java source code. My idea is to use the Python Django framework on a server to generate entity model classes. On the other side I would like to convert the entity classes to Java to use them in a native Android project. Do you have any recommendations for what I can use to convert the Python entity classes to Java? It should be possible to trigger the conversion every time I change the model in Python. Best regards, Michael PS: If you're interested, this is what the project structure will look like: python django project connects to the database will be used to generate entity model classes using REST API for data exchange between Android devices and the server java model library this will be my Java library which should contain the converted model of the python django project android project this will be my android app which will use the model of the java model library it should interact with the server via REST API. That's why the models in the Java and Python projects have to be equal.
Good way to convert Python (django) entity classes to Java
1.2
0
0
999
17,272,210
2013-06-24T09:31:00.000
0
0
1
0
python,python-3.x
17,272,608
2
false
0
0
Any methods not overridden in the child are looked for in the parents. This includes the initializer, __init__(). If you do not define this in the child then the parent's method is called, initializing the object as it would an instance of the parent.
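For example, a minimal sketch:

    class Parent:
        def __init__(self):
            self.value = 42   # the parent's default

    class Child(Parent):
        pass                  # no __init__ here, so Parent's runs

    print(Child().value)      # 42 - defaults are not 'emptied'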
1
0
0
When you inherit from a parent class in python 3.x does the parent class 'empty' its default values, and do you have to re-define them?
Inheriting from a parent class in python 3.x
0
0
0
71
17,274,626
2013-06-24T11:41:00.000
0
0
0
0
python,python-3.x,sqlite
17,275,138
1
false
0
0
This will work just fine. When two threads are trying to create the same file, one will fail to do so, but it will continue to try to lock the file.
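A hedged sketch (the database name and schema are invented); CREATE TABLE IF NOT EXISTS absorbs the creation race, and the timeout covers lock contention between threads:

    import sqlite3

    # hourly file name, assumed to follow the questioner's naming scheme
    conn = sqlite3.connect('log_2013062410.db', timeout=30)
    conn.execute('CREATE TABLE IF NOT EXISTS entries (ts TEXT, value REAL)')
    conn.commit()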
1
1
0
I would like to save data to sqlite3 databases which will be fetched from the remote system by FTP. Each database would be given a name that is an encoding of the time and date with a resolution of 1 hour (i.e. a new database every hour). From the Python 3 sqlite3 library, would any problems be encountered if two threads try to create the database at the same time? Or are there protections against this?
Can sqlite3 databases be created in a thread-safe way?
0
1
0
426
17,276,970
2013-06-24T13:41:00.000
0
0
0
0
python,database,flask,flask-sqlalchemy
17,289,054
2
false
1
0
It's not an efficient model, but this would work: you can write three different APIs (the RESTful pattern is a good idea). Each will be an independent Flask application, listening on a different port (likely over localhost, not the public IP interface). A fourth Flask application is your main application that external clients can access. The view functions in the main application will issue API calls to the other three APIs to obtain data as they see fit. You could optimize and merge one of the three database APIs into the main application, leaving only two (likely the two less used) to be implemented as APIs.
1
3
0
I have a flask application which use three types of databases - MySQL, Mongo and Redis. Now, if it had been simple MySQL I could have use SQLAlchemy or something on that line for database modelling. Now, in the current scenario where I am using many different types of database in a single application, I think I will have to create custom models. Can you please suggest what are the best practices to do that? Or any tutorial indicating the same?
How to create models if I am using various types of database simultaneously?
0
1
0
79
17,279,906
2013-06-24T16:02:00.000
0
0
0
1
python,linux,http,proxy,urllib2
17,279,971
1
false
0
0
Permanent network settings are stored in various files in /etc/networking and /etc/network-scripts. You could use diff to compare what's in those files between the system. However, that's just the network stuff (static v.s. dynamic, routes, gateways, iptables firewalls, blah blah blah). If there's no differences there, you'll have to start expanding the scope of your search.
1
0
0
I'm having a pretty unique problem. I'm using the python module urllib2 in order to get http responses from a local terminal. At first, urllib2 would only work with non-local addresses (i.e. google.com, etc.) and not local webservers. I eventually deduced that urllib2 was not respecting the no_proxy environment variable. If I manually erased the other proxy env variables in the code (i.e. set http_proxy to ''), then it seemed to fix it for my CentOS 6 box. However, I have a second machine running Fedora 12 that needs to run the same python script, and I cannot for the life of me get urllib2 to connect to the local terminal. If I set http_proxy to '' then I can't access anything at all - not google, not the local terminal. However, I have a third machine running Fedora 12 and the fix that I found for CentOS 6 works with that one. This leads me to my question. Is there an easy way to tell the difference between Fedora 12 Box#1 (which doesn't work) and Fedora 12 Box#2 which does? Maybe there's a list of linux config files that could conceivably affect the functionality of urllib2? I know /etc/environment can affect it with proxy-related environment variables and I know the routing tables could affect it. What else am I missing? Note: - Pinging the terminal with both boxes works. Urllib2 can only fetch http responses from the CentOS box and Fedora 12 Box#2, currently. Info: I've tested this with Python 2.6.2 Python 2.6.6 Python 2.7.5 on all three boxes. Same results each time.
Is there an easy way to tell the difference in network settings between two systems running Fedora 12?
0
0
1
73
17,280,297
2013-06-24T16:24:00.000
1
0
0
0
python,bots,mediawiki-api
17,282,177
1
true
1
0
You can create a new article simply by editing a page that doesn't exist yet.
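With pywikibot (mentioned in the question) that looks roughly like this hedged sketch (the wiki connection details are assumed to be configured in user-config.py):

    import pywikibot

    site = pywikibot.Site()
    page = pywikibot.Page(site, 'Brand new article')        # does not exist yet
    page.text = 'Initial content, with [[links]] and all.'
    page.save(u'Created page from migration script')        # edit summary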
1
1
0
I've been charged with migrating a large amount of simple web pages into wikiMedia articles. I've been researching the API and PyWikiBot but it seems that all it allows you to do is edit and retrieve what is already there. Can these tools be used to create a brand new article with content, a title and links to itself etc? If not, can anyone suggest a way to make large scale automated entries to the MediaWiki?
MediaWiki API: can it be used to create new articles programatically?
1.2
0
1
94
17,285,540
2013-06-24T21:35:00.000
0
0
0
0
python,3d,maya
17,289,834
1
false
0
0
The tech part is pretty easy. If you're already storing everything locally, it's all root-relative to start with, which is the best way to do it. If you need to do things in world space you can do them using xform(t=(x, y, z), ws=True) to move things around in world space rather than relative. It sounds like the problem is a semantic one: you need to find out at some point if the user wants the character posed relative to the root bone or not. Probably the right thing to do is to store your poses relatively all the time and just ask the user to specify what to do: 1) keep the relative data below the root, but just don't reset the root. This would handle, say, an idle pose that is off center from the root on purpose. It gets applied where the root is at restore time. The root > skeleton relationship is preserved but the root is left where the animator has it at restore time. 2) restore both the relative and the root positions for an exact match of the original pose. That would be good for something like a loop transition pose that's supposed to be exact for both the movement of the root and the character pose. You can also try fudging stuff - like restoring the pose and then tweaking the root position (say, to the ground-plane projection of the pelvis) but at that point you're trying to save the animators from themselves. 1 and 2 cover the two real alternatives: apply this pose to my MSR and everything else, or just to 'everything else'.
1
0
0
So I've been developing a pose library tool for the studio I intern at. I have it pretty much done, but when testing I realized a pretty big flaw in the tool. When I set the pose, its positions are based on the local space of its parent, but the parent is a move/scale/rotate (MSR) curve. When an animator sets the pose and they do a walk cycle, they won't move the MSR. The problem I have is: what if the animator has moved the character away from the MSR and they want to set the pose? Right now it'll snap back to where the MSR is, if the pose was set at the MSR location, or it'll snap back to the position it was set at based on the position of the MSR. I need the tool to handle the offset so that when an animator moves the rig away from the MSR it won't snap back to where the pose was originally set, but set the pose at the current location in space. I think I can achieve this by messing with the matrix or using the Dag command on the control curves, but I really have no idea where to start.
Maya Pose Library - attribute set offset
0
0
0
432
17,286,096
2013-06-24T22:18:00.000
4
0
1
0
python,file,dictionary
17,286,234
2
true
0
0
I would suggest the following 3 pass solution: Iterate over your original dictionary file once, adding a val, key line to your output file for each val as you go. Sort the output file from step 1 using the unix sort command or some other fast sorting program. If there is a possibility that step 1 produced duplicates that need to be removed, iterate over the output file from 2 and remove duplicates as you write your final output file. Because the output file from 2 is sorted, you need only one pass and minimal memory to do this.
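A hedged sketch of passes 1 and 3 (the line format is assumed to be a key followed by comma-separated values, since the exact format was elided in the question; pass 2 is the external sort):

    # pass 1: emit one "value,key" line per value
    with open('dict.txt') as src, open('pairs.txt', 'w') as out:
        for line in src:
            fields = line.rstrip('\n').split(',')
            key, values = fields[0], fields[1:]
            for v in values:
                out.write('%s,%s\n' % (v, key))

    # pass 2 (shell): sort pairs.txt -o pairs_sorted.txt

    # pass 3: drop adjacent duplicates from the sorted file
    with open('pairs_sorted.txt') as src, open('inverted.txt', 'w') as out:
        prev = None
        for line in src:
            if line != prev:
                out.write(line)
            prev = line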
1
0
0
Been having trouble efficiently inverting (swapping the values for keys and the keys for values) a large (2.8GB) dictionary stored in a file. Efficiency is the problem, my current solution is to: Read the dictionary file line by line (format: ,,,...) For each line: Run through the in-progress output file from the previous pass, copying it line by line to an temporary file Insert dictionary values (val1,val2,...) where appropriate (alphabetically) into the temporary file with every new value being a new key Overwrite the previous pass output file with the temporary file Repeat until all dictionary lines are processed (ending up with format: :,,,.. :,,.. :,,,..) This algorithm is very unwieldy, at the very least the output file has to be written over n^2 times (i think...) with n being around 30,000,000. Lack of available memory prohibits reading it in entirety and processing it all in memory. There may be no better solution than leaving it to get on with it but if anyone had any thoughts it would be appreciated. EDIT: Should have made clear each line finally outputted could contain multiple keys as values.
Efficient dictionary inversion on dictionary like file
1.2
0
0
139
17,287,996
2013-06-25T02:02:00.000
1
0
0
0
python,django,django-south,database-migration
17,293,912
3
false
1
0
If you have installed South in your virtualenv then, once you are in the virtualenv, try to execute python manage.py help instead of ./manage.py help.
3
3
0
When I do a "./manage.py help", it gives me NO south commands, although South has been installed. I have the latest version of south installed for my django project, which is South==0.8.1. I have added "south" to my INSTALLED_APPS in settings.py I have done a manage.py syncdb, and there is a "south_migrationhistory" database table created. However, when I do a "./manage.py help", it gives me NO south commands. I have tried uninstalling and re-installing south, but I still get no south commands when I do a ./manage.py help Any suggestions would be greatly appreciated. thanks
Django Manage.py Gives No South Commands
0.066568
0
0
379
17,287,996
2013-06-25T02:02:00.000
0
0
0
0
python,django,django-south,database-migration
23,678,798
3
false
1
0
You might want to check that INSTALLED_APPS is not overwritten somewhere outside the settings.py file, e.g. if you are using a separate settings file for local settings.
3
3
0
When I do a "./manage.py help", it gives me NO south commands, although South has been installed. I have the latest version of south installed for my django project, which is South==0.8.1. I have added "south" to my INSTALLED_APPS in settings.py I have done a manage.py syncdb, and there is a "south_migrationhistory" database table created. However, when I do a "./manage.py help", it gives me NO south commands. I have tried uninstalling and re-installing south, but I still get no south commands when I do a ./manage.py help Any suggestions would be greatly appreciated. thanks
Django Manage.py Gives No South Commands
0
0
0
379
17,287,996
2013-06-25T02:02:00.000
0
0
0
0
python,django,django-south,database-migration
17,291,512
3
false
1
0
If you're using virtualenv, ensure that it is activated when running ./manage.py help command. Django will just pass all import errors when showing available commands.
3
3
0
When I do a "./manage.py help", it gives me NO south commands, although South has been installed. I have the latest version of south installed for my django project, which is South==0.8.1. I have added "south" to my INSTALLED_APPS in settings.py I have done a manage.py syncdb, and there is a "south_migrationhistory" database table created. However, when I do a "./manage.py help", it gives me NO south commands. I have tried uninstalling and re-installing south, but I still get no south commands when I do a ./manage.py help Any suggestions would be greatly appreciated. thanks
Django Manage.py Gives No South Commands
0
0
0
379
17,290,130
2013-06-25T05:58:00.000
1
0
1
0
python,web-scraping
17,290,354
1
false
0
0
No; it is not uncommon for web pages to be identical in the first 1024 bytes. For example, many complex sites have JavaScript, CSS, and boilerplate HTML at the top of the HTML file which far exceed the 1024 bytes you have budgeted. Some experimentation with real-life data may reveal a reasonable buffer size, but there is simply no way to predict that two otherwise identical files will not differ in the last byte other than by doing a full-file comparison. But if your input data says otherwise (perhaps what you are comparing are individual tweets, for example?) then by all means go by that. Many web servers will include a server-generated ETag: header which might be useful, but how it is generated is not standardized, and for all you know, the server could easily be spoofing you.
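A hedged sketch of the prefix-hash pre-filter the question describes (treat a match only as a candidate, then confirm with a full-file hash):

    import hashlib

    def md5_prefix(path, nbytes=1024):
        h = hashlib.md5()
        with open(path, 'rb') as f:
            h.update(f.read(nbytes))
        return h.hexdigest()

    # equal prefix hashes only mean "possible duplicate" - verify fully afterwards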
1
0
0
I'm creating a Python program that determines if a file from a website is already available on my computer (already downloaded). My way of determining that is: I get the MD5 of that file from the website, then compare it with the MD5 records of files stored in my database. My concern is that this procedure will be very slow if tried on large files on a website; so is it safe to compute the MD5 over only the first 1024 bytes of that file on the web to determine if it's a duplicate file or not? Or do you have a simpler, more elegant, or faster way to do this in Python?
Is it okay to rely on the MD5 check sum based on the first 1024 bytes of a file for file duplicate comparison?
0.197375
0
0
156
17,290,936
2013-06-25T06:52:00.000
0
0
1
1
python,linux,scientific-computing
17,291,000
2
false
0
0
There will not be any problem if you change the way of packaging. Ubuntu needs one type of packaging (deb) while CentOS needs another (rpm). Build your package the way normal CentOS packages are built and then use it on CentOS.
2
1
0
Is it problematic to build python packages (numpy, scipy, matplotlib, h5py,...) on one linux distro (ubuntu) and run them from another distro (centos)? I am asking this because our computing cluster has centos machines while my pc is ubuntu.
building python packages on one linux distro and running them from another
0
0
0
36
17,290,936
2013-06-25T06:52:00.000
0
0
1
1
python,linux,scientific-computing
17,291,237
2
false
0
0
Use distutils to package as eggs and specify the dependencies; you should not have too many problems - the .pyo files that are zipped into the eggs work fine across platforms. You might like to take a look at pypiserver to set up a local PyPI that you can pip from.
2
1
0
Is it problematic to build python packages (numpy, scipy, matplotlib, h5py,...) on one linux distro (ubuntu) and run them from another distro (centos)? I am asking this because our computing cluster has centos machines while my pc is ubuntu.
building python packages on one linux distro and running them from another
0
0
0
36
17,291,455
2013-06-25T07:23:00.000
3
0
1
0
python,image,python-imaging-library
17,291,771
6
false
0
0
I would consider creating an array of x by y integers all starting at (0, 0, 0) and then for each pixel in each file add the RGB value in, divide all the values by the number of images and then create the image from that - you will probably find that numpy can help.
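A hedged sketch of that accumulate-and-divide idea (the file names are invented):

    import numpy as np
    from PIL import Image

    paths = ['img%03d.png' % i for i in range(100)]   # your 100 same-size images
    total = None
    for p in paths:
        arr = np.asarray(Image.open(p).convert('RGB'), dtype=np.float64)
        total = arr if total is None else total + arr
    avg = Image.fromarray((total / len(paths)).astype(np.uint8))
    avg.save('average.png')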
1
25
0
For example, I have 100 pictures whose resolution is the same, and I want to merge them into one picture. For the final picture, the RGB value of each pixel is the average of the 100 pictures' at that position. I know the getdata function can work in this situation, but is there a simpler and faster way to do this in PIL(Python Image Library)?
How to get an average picture from 100 pictures using PIL?
0.099668
0
0
40,383
17,293,311
2013-06-25T09:09:00.000
4
0
0
0
python,flask
17,308,839
6
false
1
0
I had a similar problem with my blog. I wanted to send notification emails to those subscribed to comments when a new comment was posted, but I did not want to have the person posting the comment waiting for all the emails to be sent before he gets his response. I used a multiprocessing.Pool for this. I started a pool of one worker (that was enough, low traffic site) and then each time I need to send an email I prepare everything in the Flask view function, but pass the final send_email call to the pool via apply_async.
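A minimal sketch of that pattern, with the email details stubbed out (the route, recipient, and send_email body are placeholders):

    from multiprocessing import Pool
    from flask import Flask

    app = Flask(__name__)
    pool = Pool(processes=1)  # one worker was enough for a low-traffic site

    def send_email(recipient, body):
        pass  # placeholder: the real SMTP logic would go here

    @app.route('/comment', methods=['POST'])
    def post_comment():
        # ... save the comment first ...
        # Hand the slow work to the pool; the response returns immediately.
        pool.apply_async(send_email, ('subscriber@example.com', 'New comment'))
        return 'Comment saved'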
1
30
0
Is there a way in Flask to send the response to the client and then continue doing some processing? I have a few book-keeping tasks which are to be done, but I don't want to keep the client waiting. Note that these are actually really fast things I wish to do, thus creating a new thread, or using a queue, isn't really appropriate here. (One of these fast things is actually adding something to a job queue.)
Flask end response and continue processing
0.132549
0
0
24,216
17,299,364
2013-06-25T14:00:00.000
-1
0
0
0
python,excel,xlrd,xlwt,openpyxl
17,305,443
12
false
0
0
Unfortunately there isn't really a better way to do this than to read in the file and use a library like xlwt to write out a new Excel file with your new row inserted at the top. Excel doesn't work like a database that you can read from and append to. You unfortunately just have to read the information in, manipulate it in memory, and write it out to what is essentially a new file.
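A rough sketch of that read-and-rewrite approach using openpyxl itself (the exact API varies a little between versions; the file name and the new row's contents are placeholders):

    from openpyxl import load_workbook, Workbook

    new_row = ['2013-06-25', 'some value', 42]  # placeholder data

    old_wb = load_workbook('data.xlsx')
    data = [[cell.value for cell in row] for row in old_wb.active.rows]

    new_wb = Workbook()
    new_ws = new_wb.active
    new_ws.append(data[0])   # copy the header row first
    new_ws.append(new_row)   # the inserted row goes directly after the header
    for row in data[1:]:     # then the rest of the original data
        new_ws.append(row)

    new_wb.save('data.xlsx')  # writing back to the same filename works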
1
19
0
I'm looking for the best approach for inserting a row into a spreadsheet using openpyxl. Effectively, I have a spreadsheet (Excel 2007) which has a header row, followed by (at most) a few thousand rows of data. I'm looking to insert the row as the first row of actual data, so after the header. My understanding is that the append function is suitable for adding content to the end of the file. Reading the documentation for both openpyxl and xlrd (and xlwt), I can't find any clear-cut way of doing this, beyond looping through the content manually and inserting it into a new sheet (after inserting the required row). Given my so far limited experience with Python, I'm trying to understand if this is indeed the best option to take (the most pythonic!), and if so, could someone provide an explicit example? Specifically, can I read and write rows with openpyxl, or do I have to access cells? Additionally, can I (over)write the same file(name)?
Insert row into Excel spreadsheet using openpyxl in Python
-0.016665
1
0
90,928
17,300,419
2013-06-25T14:43:00.000
1
0
1
0
python,algorithm,sorting
17,300,620
4
false
0
0
Keep a list of 20 tuples, initialised with a value lower than the minimum possible result of the calculation and two indices of -1. Each time you calculate a result, append it to the list along with the indices of the pair that produced it, sort on the value only, and trim the list back to length 20. This should be reasonably efficient, as you only ever sort a list of length 21.
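A minimal sketch of that bookkeeping, with placeholder input lists and a placeholder calculation:

    list_a = [1.5, -2.0, 3.7]   # placeholder data
    list_b = [0.3, 8.1, -4.2]

    def compute(a, b):
        return a * b            # placeholder for the real calculation

    top = []                    # (abs_value, index_a, index_b) tuples

    for i, a in enumerate(list_a):
        for j, b in enumerate(list_b):
            top.append((abs(compute(a, b)), i, j))
            top.sort(reverse=True)  # largest absolute values first
            del top[20:]            # never keep more than the best 20

    print(top)

Note that heapq.nlargest in the standard library solves the same problem and avoids even the small repeated sorts.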
1
7
1
I am thinking about a problem I haven't encountered before and I'm trying to determine the most efficient algorithm to use. I am iterating over two lists, using each pair of elements to calculate a value that I wish to sort on. My end goal is to obtain the top twenty results. I could store the results in a third list, sort that list by absolute value, and simply slice the top twenty, but that is not ideal. Since these lists have the potential to become extremely large, I'd ideally like to only store the top twenty absolute values, evicting old values as a new top value is calculated. What would be the most efficient way to implement this in python?
Python Sort On The Fly
0.049958
0
0
911
17,301,571
2013-06-25T15:36:00.000
1
0
0
1
python,pipe
17,301,654
2
false
0
0
Take a look at the subprocess module's communicate() method and its pipe examples.
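For instance, a rough sketch along these lines (the program name, arguments, and credentials are placeholders). communicate() writes everything to stdin up front, and the child process reads each line whenever it gets around to prompting:

    import subprocess

    proc = subprocess.Popen(
        ['tool.exe', '--flag', 'value'],   # placeholder command line
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )

    # Feed the username and password as two lines, then wait for exit.
    out, err = proc.communicate(input=b'myuser\nmypassword\n')

    # Pipe the captured output out to a file.
    with open('output.txt', 'wb') as f:
        f.write(out)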
1
1
0
I am trying to run a command-line exe from Python while passing in parameters. I have looked at a few other questions; the reason mine is different is that I first want to call a cmd exe program while passing in some parameters, then I have to wait about 10 seconds for the exe to prompt me for a username, and then a password. Then I want to pipe the output out to a file. So is there a way to pass more arguments to a process that has already been started? How do I make a cmd exe stay open, given that as soon as I call it, the process dies? Thanks
Calling command exe with python
0.099668
0
0
218
17,302,404
2013-06-25T16:15:00.000
1
0
1
1
python,windows,relative-path
17,302,657
1
false
0
0
Just try '.\\file_name' as your path. There are two issues here: . means the current directory (.. is up one level), and you need to escape the \ as \\ when using Windows file separators.
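A small sketch of both fixes (the file name is a placeholder); use a raw string, or let os.path build the path relative to the script's own directory instead of the current working directory:

    import os

    # A raw string sidesteps the backslash-escaping problem entirely:
    path = r'.\file_name.txt'

    # Or anchor the path at the directory the script itself lives in,
    # so it works no matter where the script is run from:
    here = os.path.dirname(os.path.abspath(__file__))
    path = os.path.join(here, 'file_name.txt')

    with open(path) as f:
        print(f.read())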
1
0
0
I'm a very new Python user using Python 2.6.2 and my question is simple. I want to only have the relative path "\file_name" in my input files instead of the full path like "c:\folder_a\folder_b\file_name", but when I use the relative path in my input files I get the error "Windows Error [Error 2]: The system cannot find the file specified..."; otherwise my code works fine. What do I need to do or change so the system can use the relative path? Since I'm running the script from the same folder, as in "c:\folder_a\folder_b>python script_name" in the command terminal, it seems the relative path alone should work.
Using only relative path instead of full path
0.197375
0
0
98