Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
26,963,049 | 2014-11-16T22:55:00.000 | 0 | 0 | 1 | 0 | python-3.x | 26,963,919 | 1 | true | 0 | 0 | The help function object can be accessed as builtins.help; the site module sticks it in __builtins__ on startup if it detects that Python is running in interactive mode. | 1 | 2 | 0 | I have a program that uses the code module to present an emulation of the Python REPL. However, one major flaw in code's implementation is that it doesn't give the user access to help(), which would be very handy for my purpose.
How can I find help() and stuff it into my interactive interpreter emulator's locals? | Accessing the Python interactive help system from Python code | 1.2 | 0 | 0 | 43 |
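Editor's sketch of the answer above: pydoc.help is the same Helper object that site.py installs as builtins.help, so it can be injected directly into the emulated REPL's locals.

```python
# Inject Python's interactive help() into a REPL emulated with the stdlib
# "code" module, as described in the answer above.
import code
import pydoc

# pydoc.help is the same object that the site module installs as builtins.help
repl_locals = {"help": pydoc.help}
console = code.InteractiveConsole(repl_locals)
# console.interact()  # would start the emulated REPL with help() available
print(callable(repl_locals["help"]))  # -> True
```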
26,963,868 | 2014-11-17T00:46:00.000 | 5 | 0 | 0 | 0 | python,gmail-api | 26,964,386 | 3 | false | 0 | 0 | I was able to list the system and user labels via print(gs.users().labels().list(userId=username).execute())
This revealed that the label ID for my label was something else ("Label_1" -- the name was "handler_gmail"). So I will make a support method that gets me a label ID by name, and add the label to the message (via modify) using the ID.
Thanks! | 1 | 9 | 0 | I created a label, "handler_gmail", in Gmail's web interface via the typical approach.
When I try to set that label for a message via the Gmail API (Python client library), I get the response HttpError 400, returned "Invalid label: handler_gmail".
I am sure the label exists (I've checked multiple times – no typos).
I am trying to set it like so:
gs.users().messages().modify(userId=username, id=m['id'], body={"addLabelIds": ['handler_gmail']}).execute()
I tried adding a "removeLabelIds": [] key/value pair to the dictionary (thinking maybe it was required), but got the same "Invalid label" error.
Any help with this is greatly appreciated! So close to being able to do what I need to with this project! | Gmail API: "Invalid Label" When Trying to Set Message Label to One That I Created | 0.321513 | 0 | 1 | 6,142 |
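A sketch of the support method the answerer describes: Gmail's modify endpoint wants label IDs (e.g. "Label_1"), not label names (e.g. "handler_gmail"). The helper below is hypothetical and operates on the list of dicts that users().labels().list() returns under the "labels" key.

```python
# Map a human-readable label name to the label ID that the Gmail API expects.
def label_id_by_name(labels, name):
    """labels: list of {'id': ..., 'name': ...} dicts from labels().list()."""
    for label in labels:
        if label.get("name") == name:
            return label["id"]
    raise KeyError("no label named %r" % name)

labels = [{"id": "INBOX", "name": "INBOX"},
          {"id": "Label_1", "name": "handler_gmail"}]
print(label_id_by_name(labels, "handler_gmail"))  # -> Label_1
```

The returned ID would then go into the modify body, e.g. body={"addLabelIds": [label_id]}.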
26,965,270 | 2014-11-17T03:55:00.000 | 0 | 0 | 0 | 0 | python-2.7,sqlalchemy,multi-user | 27,054,069 | 1 | true | 0 | 0 | I can't delete this question outright, so I will answer it with what I did.
Part of the problem was that I was trying to find a solution for moving a sqlite3 database to a server, but it turns out that sqlite3 is only intended for use in simpler local situations. So I decided to migrate to MySQL. The following are the major steps:
I took an old computer that has XP and installed Lubuntu 14.04 as the default OS.
Installed MySQL on Lubuntu: sudo apt-get install mysql-server.
Edit /etc/mysql/my.cnf and set bind-address to match the computer's network address.
On the client computer with Python 2.7 installed PyMySQL: easy_install PyMySQL
Works great. Now I'm in the process of making sure the program watches for changes to the database and updates the GUI accordingly. | 1 | 0 | 0 | I wrote a database program using SQLAlchemy. So far, I've been using FreeFileSync to sync the database file over the network for two computers when necessary. I want to learn how to set things up so that the file stays in one place and allows multiple user access but I don't know where to begin.
Is it possible to open and read/write to a SQLAlchemy database on another computer over a network? I couldn't find information on this (or maybe I just don't understand the terminology)?
Are there any courses or topics I should look into that I will be able to apply with Python and SQLAlchemy?
Or would making a web-based program be the best solution? I'm good at algorithms and scientific programming but I'm still a novice at network and web programming. I appreciate any tips on where to start. | SQLAlchemy, how to transition from single-user to multi-user | 1.2 | 1 | 0 | 321 |
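Following the migration described in the answer, the only SQLAlchemy-side change is the database URL passed to create_engine; the host and credentials below are placeholders.

```python
# Build the SQLAlchemy URL for a networked MySQL server accessed via PyMySQL.
# Swapping "sqlite:///local.db" for this URL is the whole code change.
def db_url(user, password, host, database):
    return "mysql+pymysql://{0}:{1}@{2}/{3}".format(user, password, host, database)

url = db_url("app", "secret", "192.168.1.10", "inventory")
print(url)  # -> mysql+pymysql://app:secret@192.168.1.10/inventory
# from sqlalchemy import create_engine; engine = create_engine(url)
```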
26,965,450 | 2014-11-17T04:17:00.000 | 2 | 0 | 1 | 1 | python,macos,pip,easy-install | 26,965,467 | 2 | true | 0 | 0 | There's an easy way around it - use pip2 or pip2.7 or pip-2.7 for Python 2, and pip3 or pip3.4 or pip-3.4 for Python 3. Both version ship with easy_install, but Python 2 does not contain pip by default - you have to install it yourself. | 1 | 3 | 0 | I am a non-programmer who started to learn Python. My Mac OS X Yosemite shipped with Python 2.7.6. I installed Python 3.4.2 too. If I use pip or easy_install in the terminal to install a package, how do I know which Python I installed the package in? It seems Python 3.4.2 shipped with pip and easy_install, but I think Python 2.7.6 may also have some version of pip or easy_install. I know my system can have both versions of Python, but can it have multiple versions of pip or easy_install? | Which version of Python did pip or easy_install refer to by default? | 1.2 | 0 | 0 | 163 |
26,966,487 | 2014-11-17T06:03:00.000 | 5 | 0 | 0 | 0 | python,igraph,opacity | 26,967,730 | 1 | true | 0 | 0 | Edge opacity can be altered with the color attribute of the edge or with the edge_color keyword argument of plot(). The colors that you specify there are passed through the color_name_to_rgba function so you can use anything that color_name_to_rgba understands there; the easiest is probably an (R, G, B, A) tuple or the standard HTML #rrggbbaa syntax, where A is the opacity. Unfortunately this is not documented well but I'll fix it in the next release. | 1 | 3 | 1 | I know that you can adjust a graphs overall opacity in the plot function (opacity = (0 to 1)), but I cannot find anything in the manual or online searches that speak of altering the edge opacity (or transparency)? | Is there a way to alter the edge opacity in Python igraph? | 1.2 | 0 | 0 | 1,146 |
26,966,645 | 2014-11-17T06:15:00.000 | 0 | 0 | 0 | 1 | python,django | 26,971,741 | 2 | false | 1 | 0 | You could create a file named "i_am_running.log" at the beginning of your management command and remove it at the end if it. When running same management command, check for its' presence. In case of no exist - go further. Otherwise - abort. | 1 | 0 | 0 | I'm writing a Django app that uses a management command to pull data from various sources. The plan is to run this command hourly with cron, and also have it run on user command from a view (i.e. when they add a new item that needs data, I don't want them to wait for the next hour to roll around to see results). The question is:
How can I set up this command such that if it is already currently running, it won't execute? Is there some place where I can stash a variable that can be checked by the script before execution? My current best idea is to have the command monitor stdout for a while to make sure nothing else is executing, but that seems like a hack at best. This is the only task that will be running in the background.
I'm basically trying to avoid using Celery here. | Monitor Management Command Execution in Django | 0 | 0 | 0 | 142 |
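A minimal lock-file sketch of the answer above (the lock path is an arbitrary choice): the command aborts if a previous invocation is still in progress.

```python
# Skip the task if a lock file from a still-running invocation exists.
import os
import tempfile

LOCK = os.path.join(tempfile.gettempdir(), "pull_data.%d.lock" % os.getpid())

def run_once(task):
    if os.path.exists(LOCK):
        return False               # another run holds the lock -> abort
    open(LOCK, "w").close()        # acquire the lock
    try:
        task()
    finally:
        os.remove(LOCK)            # always release, even if task() raises
    return True

nested = []
print(run_once(lambda: nested.append(run_once(lambda: None))))  # -> True
print(nested)  # -> [False]  (the inner call saw the lock and aborted)
```

A real deployment would also want to handle stale lock files left behind by a crashed run (e.g. by storing and checking the owning PID).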
26,977,128 | 2014-11-17T16:24:00.000 | 4 | 0 | 1 | 0 | python,pip,package-management | 26,977,355 | 2 | false | 0 | 0 | Check out anaconda. You can create lists of dependencies/packages and pass them to conda. Conda has most packages already, and will have everything soon. You can run pip through anaconda in case anaconda doesn't have the package you're looking for. Anaconda is great for both package and python version/environment management. Conda is the future! | 1 | 26 | 0 | I just started working on a project where I needed to install a lot of dependencies via pip. The instructions were to do everything manually.
I've used nodejs and maven before, where this process is automated and the dependencies are isolated between projects. For example, in node I can configure everything in package.json and just run npm install in my project directory.
Is there something similar for pip? | Is there a project file support like npm/package.json for Python's pip? | 0.379949 | 0 | 0 | 5,290 |
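For reference, pip's closest analogue to package.json is a plain requirements.txt file (the pinned versions below are just examples):

```
# requirements.txt -- one dependency per line
numpy==1.9.1
requests>=2.4.2
```

Everything listed is installed in one step with pip install -r requirements.txt, and the file can be regenerated from the current environment with pip freeze > requirements.txt.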
26,979,129 | 2014-11-17T18:13:00.000 | 0 | 0 | 1 | 0 | python | 26,980,403 | 2 | false | 0 | 0 | I would use regex to pull out the number(s), and then do a check to see what operation to perform on said number(s). | 1 | 0 | 0 | Perhaps I have a str in Python like "8+3", or something more difficult like "sin(30)". How can I convert them from str so they can be calculated? For example, if I have a str like "8+3", it must be calculated like the int expression 8+3 = 11, and if I have a str like "sin(30)", it must calculate math.sin(30). | How to convert math operations from type of str to calculate in Python | 0 | 0 | 0 | 93 |
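The regex approach from the answer above can be sketched like this: pull the numbers out, check which operation was asked for, and dispatch without ever calling eval().

```python
# Evaluate simple "a<op>b" and "func(x)" strings via regex, as suggested.
import math
import operator
import re

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}

def calc(expr):
    # binary arithmetic like "8+3"
    m = re.match(r"\s*(\d+(?:\.\d+)?)\s*([+\-*/])\s*(\d+(?:\.\d+)?)\s*$", expr)
    if m:
        return OPS[m.group(2)](float(m.group(1)), float(m.group(3)))
    # a single math-module call like "sin(30)"
    m = re.match(r"\s*(\w+)\(\s*(\d+(?:\.\d+)?)\s*\)\s*$", expr)
    if m:
        return getattr(math, m.group(1))(float(m.group(2)))
    raise ValueError("unrecognised expression: %r" % expr)

print(calc("8+3"))      # -> 11.0
print(calc("sin(30)"))  # same as math.sin(30) (radians), about -0.988
```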
26,979,715 | 2014-11-17T18:49:00.000 | 2 | 0 | 1 | 0 | python,parsing,ply | 26,981,707 | 2 | true | 0 | 0 | You can pass debug=1 to parse when you call it and it will output the parser stack.
Here is the function definition for that, for convenience:
def parse(self,input=None,lexer=None,debug=0,tracking=0,tokenfunc=None):
You can send the debugging output to a file too, if you set it up when you call yacc. Here is that function definition, for convenience:
def yacc(method='LALR', debug=yaccdebug, module=None, tabmodule=tab_module, start=None, check_recursion=1, optimize=0, write_tables=1, debugfile=debug_file,outputdir='', debuglog=None, errorlog = None, picklefile=None):
You may find it useful to checkout the yacc and parse methods in yacc.py to see how this works. | 1 | 2 | 0 | Is there a way to access the parser state/stack in p_error()?
All I know is that I can look at the offending token. | debugging ply in p_error() to access parser state/stack | 1.2 | 0 | 0 | 1,976 |
26,983,856 | 2014-11-17T23:23:00.000 | 0 | 0 | 1 | 0 | python,icalendar | 26,985,713 | 1 | false | 0 | 0 | Either way you have to expand the events of the whole calendar I think. You have to take into account:
VEVENTs that are modifications (SEQUENCE, etc.) to an instance of a recurring event
exceptions (EXDATE) to the recurring instance
possible daylight-saving changes during a recurring event defined in a different timezone (an 8am occurrence becomes 9am halfway through, etc.)
floating dates that stay in the local timezone (8am stays 8am even after a daylight-saving change) while the event you are comparing may not be floating
I do not know of a good strategy of finding the conflicts. The only strategy I have(which is crazy) is to expand the recurrence of each set of events and compare the events one by one to see if there is a conflict. This does not seems right.
Is there some function, or 3rd party library or simple strategy that will do this?
Thanks | Find conflicts between iCal events using python | 0 | 0 | 0 | 437 |
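Once both calendars have been expanded to concrete occurrences as the answer describes, finding conflicts reduces to the standard interval-overlap test, sketched here on plain (start, end) datetime pairs:

```python
# Compare two expanded event lists; each event is a (start, end) pair.
from datetime import datetime

def conflicts(events_a, events_b):
    """Return every (a, b) pair of events whose time ranges overlap."""
    out = []
    for a in events_a:
        for b in events_b:
            if a[0] < b[1] and b[0] < a[1]:   # interval-overlap test
                out.append((a, b))
    return out

a = [(datetime(2014, 11, 17, 9), datetime(2014, 11, 17, 10))]
b = [(datetime(2014, 11, 17, 9, 30), datetime(2014, 11, 17, 11))]
print(len(conflicts(a, b)))  # -> 1
```

For large calendars, sorting both lists by start time and sweeping avoids the quadratic comparison.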
26,985,577 | 2014-11-18T02:28:00.000 | 2 | 0 | 0 | 0 | android,python,sockets,redis,real-time | 26,985,616 | 1 | true | 1 | 0 | One approach might be:
set up a database on a server on the internet
program your app to save data to that remote database
set up a webservice
program the webservice to read data from the database and serve it as an html page
For extra credit
implement a REST style API which retrieves and serves data from the database
create an ajax webpage which uses the API to fetch and display data (so you don't have to refresh the page constantly) | 1 | 1 | 0 | I'm trying to figure out how to have a real time data displayed on a webpage through the use of an android app.
For example, users are using android app and are scored (answering questions). My webpage will display the scores in order in real-time.
I've come to the conclusion to use Redis, but what exactly do I need for this to function? Do I need a web socket that my web page communicates with? This socket could be Python, where it accesses the database and then responds with the scores in order?
I'm struggling to find out how exactly this would work, as this is new to me. What exactly should I be looking for (terminology etc.)?
Thanks | Real-time communication between app and webpage | 1.2 | 0 | 1 | 651 |
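On the leaderboard part of this question: in Redis this ordering would be done server-side with a sorted set (ZADD/ZREVRANGE); the same logic in plain Python, for illustration, looks like:

```python
# Return the top-n players by score, highest first.
def top_scores(scores, n=3):
    """scores: {player: score}; returns the top n as (player, score) pairs."""
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:n]

print(top_scores({"ann": 40, "bob": 75, "cat": 60}))
# -> [('bob', 75), ('cat', 60), ('ann', 40)]
```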
26,986,623 | 2014-11-18T04:31:00.000 | 0 | 0 | 0 | 0 | python,mechanize,mechanize-python | 26,986,649 | 2 | false | 1 | 0 | If it's an anchor tag, then just GET/POST whatever it is.
The timer between links appearing is generally done in javascript - some sites you are attempting to scrape may not be usable without javascript, or may require a token generated in javascript with client-side math.
Depending on the site, you can either extract the wait time in msec/sec and time.sleep() for that long, or you'll have to use something that can execute javascript | 1 | 1 | 0 | Using mechanize, how can I wait for some time after page load (some websites have a timer before links appear, like in download pages), and after the links have been loaded, click on a specific link?
Since it's an anchor tag and not a submit button, will browser.submit() work(I got errors while doing that)? | Python mechanize wait and click | 0 | 0 | 1 | 1,315 |
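Following the answer's "extract the wait time and time.sleep()" suggestion, a sketch (the countdown variable name is a hypothetical pattern; real pages vary):

```python
# Pull an advertised wait time out of the page source before following a link.
import re
import time

def wait_from_html(html):
    """Return the advertised wait in seconds, or 0 if none was found."""
    m = re.search(r"var\s+countdown\s*=\s*(\d+)", html)
    return int(m.group(1)) if m else 0

html = "<script>var countdown = 2;</script>"
delay = wait_from_html(html)
# time.sleep(delay)              # then br.follow_link(...) once it appears
print(delay)  # -> 2
```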
26,987,797 | 2014-11-18T06:13:00.000 | 0 | 1 | 0 | 0 | python,mechanize-python | 26,988,092 | 2 | true | 0 | 0 | You can't fake your IP address, because IP is layer 3 and HTTP works at layer 7.
So it's impossible to send traffic from an IP address that isn't yours.
(You can set up a second interface and IP address using iproute2 and route through that interface, but that's not at the python/mechanize level; it's at the system level.) | 2 | 0 | 0 | Is it possible to actually fake your IP details via mechanize? But if it's not, what is br.set_proxies() used for? | Change IP with Python mechanize | 1.2 | 0 | 1 | 711 |
26,987,797 | 2014-11-18T06:13:00.000 | 0 | 1 | 0 | 0 | python,mechanize-python | 26,987,818 | 2 | false | 0 | 0 | You don't fake your IP details; set_proxies is to configure an HTTP proxy. You still need legitimate access to the IP. | 2 | 0 | 0 | Is it possible to actually fake your IP details via mechanize? But if it's not, what is br.set_proxies() used for? | Change IP with Python mechanize | 0 | 0 | 1 | 711 |
26,989,736 | 2014-11-18T08:28:00.000 | 0 | 0 | 0 | 0 | python,xml,string,eval | 26,990,360 | 1 | false | 0 | 0 | You probably shouldn't use eval... But to answer your question on how to store an object to be later executed but eval, you do repr(my_object), this will often return a string, suitable for eval, but this is not always true. | 1 | 0 | 0 | I use eval('string') in a python script to execute a piece of java scrit, and I would like to store the string in a xml and then parse use element tree as text string, the problem is in this way, the eval() will return nothing since the parsed string is a string object not a original string which can be reconigized by eval(), anybody knows how to solve this problem? I am a freshman on programming, any suggestions will be highly appriciated. | element tree parse xml text string to be called by eval | 0 | 0 | 1 | 207 |
26,990,840 | 2014-11-18T09:32:00.000 | 1 | 1 | 0 | 0 | python,ruby-on-rails,ruby,ruby-on-rails-3,heroku | 26,994,319 | 1 | true | 1 | 0 | I don't see any problem in doing this, as far as rails manages the database structure and python script populates it with data.
My advice, but just to make it simpler, is to define the database schema through migrations in your rails app and build it like the python script doesn't exist.
Once you have completed it, simply start the python script so it can start populating the Database (could be necessary to rename some table in the python script, but no more than this).
If you want to test on your local machine, you can do one of these:
run the python script in your local machine
configure the database.yml in your rails app to point to the remote DB (can be difficult if you don't have administration access to the host server, because of port forwarding etc.)
The only thing you should keep in mind is about concurrent accesses.
Because you have 2 applications that both read and write to your DB, it would be better if the python script does its job in a single atomic transaction, to avoid your rails app finding the DB in a half-updated state.
You can see the database like a shared box, it doesn't matter how many applications use it. | 1 | 0 | 0 | I have built an application in python that is hosted on heroku which basically uses a script written in Python to store some results into a database (it runs as a scheduled task on daily basis). I would have done this with ruby/rails to avoid this confusion, but the application partner did not support Ruby.
I would like to know if it will be possible to build the front-end with Ruby on Rails and use the same database.
My rails application will need to make use MVC and have its own tables on the database, but it will also use the database that python sends data to just to retrieve some data from there.
Can I create the Rails app and reference the details of the database that my python application uses?
How could I test this on my local machine?
What would be the best approach to this? | Rails app to work with a remote heroku database | 1.2 | 0 | 0 | 58 |
26,994,651 | 2014-11-18T12:43:00.000 | 0 | 0 | 1 | 0 | python,python-2.7 | 26,995,565 | 1 | true | 0 | 0 | If you have some "global" variables, I think it is a good idea to have them in a separate module and import that module from each place you need them. This way you only have to do it as cdarke has commented. | 1 | 0 | 0 | Can a python script located in D:\Folder_A\Folder_B\ be used to change variables in another python script located in D:\Folder_A\Folder_C\, and then print them out on the console?
NOTE: I'm a beginner in Python, and I'm using v2.7.8.
EDIT: To answer @Rik Verbeek, I created a python file in location A (D:\Folder_A\Folder_B\), and another file at location B (D:\Folder_A\Folder_C\), with both the folders (Folder_B & Folder_C) located in D:\Folder_A\. In the 2nd .py file, I wrote the following:
a = 0
b = 0
Now I want to change these variables from the 1st .py file to 5 and 10 respectively, and print them to the console. Also, these files are not in the Python libraries, but are instead, located in another folder(Folder_A).
To answer @Kasra (and maybe @cdarke), I don't want to import a module, unless it is the only way. | Accessing a python script in a different folder, using a python script | 1.2 | 0 | 0 | 213 |
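A runnable sketch of the import approach for this question: make the other folder importable, import the second script as a module, and change its variables (a temp folder stands in for D:\Folder_A\Folder_C\ here, and the module name "other" is hypothetical).

```python
# Script 1 imports script 2 from a sibling folder and rewrites its variables.
import os
import sys
import tempfile

folder_c = tempfile.mkdtemp()                      # stands in for Folder_C
with open(os.path.join(folder_c, "other.py"), "w") as f:
    f.write("a = 0\nb = 0\n")                      # the 2nd .py file

sys.path.insert(0, folder_c)   # make the other folder importable
import other

other.a, other.b = 5, 10       # change the 2nd script's variables
print(other.a, other.b)        # -> 5 10
```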
26,996,976 | 2014-11-18T14:42:00.000 | 0 | 0 | 0 | 0 | python,django,pdf,html-to-pdf | 27,025,464 | 4 | false | 1 | 0 | wkhtmltopdf requires
CSS to be inlined
static files to be locally present on the server
static file urls to be os paths, eg: /home/ubuntu/project/project/static/file_name | 1 | 10 | 0 | There are a lot of different ways of generating pdfs from a django webpage in python2. The most clean, probably, is pisa and reportlab.
These do not work for python3 though.
So far, the only method I've had success with is to render the template, write it to a file, and then use wkhtmltopdf via subprocess.popen. This works alright, but it doesn't load any of my static files, such as css and images.
Are there any proper solutions? can wkhtmltopdf read my staticfiles from the command line, in some way, or is there a library like pisa/reportlab, which supports python3?
I haven't been able to find such a library. | Python3 Django -> HTML to PDF | 0 | 0 | 0 | 13,449 |
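On the wkhtmltopdf route described in the question: a sketch of building the command for subprocess. Note that newer wkhtmltopdf builds require --enable-local-file-access before they will read local CSS/images (flag availability depends on your wkhtmltopdf version); the paths below are placeholders.

```python
# Build the argv for invoking wkhtmltopdf on a rendered template.
def wkhtmltopdf_cmd(html_path, pdf_path):
    # local static files are only loaded if the HTML references them with
    # absolute file paths and local file access is allowed
    return ["wkhtmltopdf", "--enable-local-file-access", html_path, pdf_path]

cmd = wkhtmltopdf_cmd("/tmp/rendered.html", "/tmp/out.pdf")
print(cmd[0])  # -> wkhtmltopdf
# subprocess.check_call(cmd)  # would actually produce the PDF
```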
27,001,985 | 2014-11-18T19:04:00.000 | 0 | 0 | 1 | 0 | python,python-3.x,import,module,anaconda | 27,034,348 | 2 | false | 0 | 0 | As a side note,
If you want to keep this functionality and move to a more script-like environment I would suggest using something like Spyder IDE. It comes with an editor linked with the IPython console that supports all the same magics as the IPython notebook. | 2 | 2 | 0 | I'm interested in creating physics simulations with Python so I decided to download Anaconda. Inside of an IPython notebook I can use the pylab module to plot functions, for example, with ease. However, if I try to import pylab in a script outside of IPython, it won't work; Python claims that the pylab module doesn't exist.
So how can I use Anaconda's modules outside of IPython? | Using Anaconda modules outside of IPython | 0 | 0 | 0 | 238 |
27,001,985 | 2014-11-18T19:04:00.000 | 1 | 0 | 1 | 0 | python,python-3.x,import,module,anaconda | 27,002,473 | 2 | false | 0 | 0 | I bet it will work if you use Anaconda's Python distribution.
Try running ./anaconda/bin/python and importing it from that Python session. | 2 | 2 | 0 | I'm interested in creating physics simulations with Python so I decided to download Anaconda. Inside of an IPython notebook I can use the pylab module to plot functions, for example, with ease. However, if I try to import pylab in a script outside of IPython, it won't work; Python claims that the pylab module doesn't exist.
So how can I use Anaconda's modules outside of IPython? | Using Anaconda modules outside of IPython | 0.099668 | 0 | 0 | 238 |
27,003,660 | 2014-11-18T20:44:00.000 | 0 | 0 | 0 | 0 | python,numpy | 27,003,691 | 3 | true | 0 | 0 | try np.hstack((a.reshape(1496, 1), b.reshape(1496, 1), c)). To be more general, it is np.hstack((a.reshape(a.size, 1), b.reshape(b.size, 1), c)) | 1 | 0 | 1 | I have three arrays a, b, c.
They are of shapes (1496,), (1496,), and (1496, 1852). I want to join them into a single array or dataframe.
The first two arrays are single-column vectors, while the other has several columns. All three have 1496 rows.
My logic is to join them into a single array by df=np.concatenate((a,b,c))
But the error says dimensions must be the same size.
I also tried np.hstack()
Thanks
MPG | numpy arrays will not concatenate | 1.2 | 0 | 0 | 47 |
27,004,270 | 2014-11-18T21:20:00.000 | 0 | 0 | 0 | 0 | python,eclipse,pydev | 27,005,884 | 1 | false | 0 | 0 | I have pydev but I couldn't find a key to jump to the end of the quotation marks. The closest I could see is Ctrl+Shift+P that takes me to the matching bracket. However I thought following may be useful for you:
Hit Ctrl+Shift+L --> This will show you a list of all shortcut keys available
If you hit Ctrl+Shift+L again it will take you to a preferences page where you can see what all are available by various categories.
You can setup your own binding if you want to change anything. | 1 | 0 | 0 | Is there a way to set a TAB key as a key used to move cursor outside the quotation marks or the brackets in Eclipses' PyDev plugin? It's default for example in Java perspective, but in PyDev I have to use right arrow key in default. | How to get out of the brackets using the TAB key in PyDev? | 0 | 0 | 0 | 126 |
27,005,561 | 2014-11-18T22:43:00.000 | 0 | 0 | 1 | 0 | python,python-3.x,anaconda | 27,005,618 | 8 | false | 0 | 0 | don't know windows 8 but you can probably set the default prog for a specific extension, for example on windows 7 you do right click => open with, then you select the prog you want and select 'use this prog as default', or you can remove your old version of python from your path and add the one of the anaconda | 2 | 33 | 0 | I recently downloaded the Anaconda distribution for Python. I noticed that if I write and execute a Python script (by double-clicking on its icon), my computer (running on Windows 8) will execute it using my old version of Python rather than Anaconda's version. So for example, if my script contains import matplotlib, I will receive an error. Is there a way to get my scripts to use Anaconda's version of Python instead?
I know that I can just open Anaconda's version of Python in the command prompt and manually import it, but I'd like to set things us so that I can just double-click on a .py file and Anaconda's version of Python is automatically used. | How can I execute Python scripts using Anaconda's version of Python? | 0 | 0 | 0 | 158,665 |
27,005,561 | 2014-11-18T22:43:00.000 | 0 | 0 | 1 | 0 | python,python-3.x,anaconda | 36,502,473 | 8 | false | 0 | 0 | You can try to change the default .py program via policy management. Go to windows, search for regedit, right click it. And then run as administrator. Then, you can search the key word "python.exe" And change your Python27 path to you Anaconda path. | 2 | 33 | 0 | I recently downloaded the Anaconda distribution for Python. I noticed that if I write and execute a Python script (by double-clicking on its icon), my computer (running on Windows 8) will execute it using my old version of Python rather than Anaconda's version. So for example, if my script contains import matplotlib, I will receive an error. Is there a way to get my scripts to use Anaconda's version of Python instead?
I know that I can just open Anaconda's version of Python in the command prompt and manually import it, but I'd like to set things us so that I can just double-click on a .py file and Anaconda's version of Python is automatically used. | How can I execute Python scripts using Anaconda's version of Python? | 0 | 0 | 0 | 158,665 |
27,005,924 | 2014-11-18T23:11:00.000 | 1 | 0 | 0 | 0 | python,django | 27,115,581 | 2 | true | 1 | 0 | After reading through and trying out various forum suggestions and spending solid time in Google, I've settled down with paramiko. It does exactly what I wanted and works like a charm for now.
On click of a button on my website, running on server1, I make a request to run a python script. The python script uses paramiko to make SSH connections to server2, runs the necessary command and writes a response to a plain/text file. This plain/text file is rendered back to the request through django form as a response.
This looks a little dirty now and there are more things to look into like, what happens if the command took very long time to execute or it erred out for some reason. I haven't spent time in figuring out answers for all those questions, but eventually will. | 2 | 3 | 0 | I have a basic (server1) Django development web server and another server (server2) which has a python script that does some scientific calculations. Assume that the server1 has necessary authentication in place to run the script on server2. All I want to do is, click a button on the django website which would run the python script sitting on server2.
Ideas that I have so far are,
use some kind of SSH library to run the script and get response
have a REST API setup on server2 to run the script
Not sure if above ideas would work, please suggest your insight into this and if possible a simple example would be appreciated.
More info: Server1 and Server2 has to be 2 separate servers, server1 is a webserver, while server2 can be any linux virtual machine. Also, the response from server2 has to be sent back to server1. | Run script on remote server from Django Webserver | 1.2 | 0 | 0 | 1,638 |
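The "run the script and get the response" flow from the answer above, sketched with subprocess. With SSH access the argv would be something like ["ssh", "server2", "python", "/path/to/script.py"] (host and path are placeholders); a local python one-liner stands in for the remote script here so the sketch is runnable.

```python
# Run a command and capture its stdout as the "response" for server1.
import subprocess
import sys

def run_and_capture(argv):
    return subprocess.check_output(argv).decode().strip()

# locally, a python one-liner plays the role of the remote script
out = run_and_capture([sys.executable, "-c", "print('42.7')"])
print(out)  # -> 42.7
```

check_output raises CalledProcessError on a non-zero exit, which covers the "what if it erred out" case mentioned in the answer; a timeout would still need separate handling.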
27,005,924 | 2014-11-18T23:11:00.000 | 0 | 0 | 0 | 0 | python,django | 27,006,689 | 2 | false | 1 | 0 | There is no reason why server one cannot execute something like selenium or phantomjs on itself to navigate to your website on server2 and click a button on server 2 which then uses something like python's subprocess module to execute a program from server 2. | 2 | 3 | 0 | I have a basic (server1) Django development web server and another server (server2) which has a python script that does some scientific calculations. Assume that the server1 has necessary authentication in place to run the script on server2. All I want to do is, click a button on the django website which would run the python script sitting on server2.
Ideas that I have so far are,
use some kind of SSH library to run the script and get response
have a REST API setup on server2 to run the script
Not sure if above ideas would work, please suggest your insight into this and if possible a simple example would be appreciated.
More info: Server1 and Server2 has to be 2 separate servers, server1 is a webserver, while server2 can be any linux virtual machine. Also, the response from server2 has to be sent back to server1. | Run script on remote server from Django Webserver | 0 | 0 | 0 | 1,638 |
27,006,173 | 2014-11-18T23:29:00.000 | 1 | 1 | 1 | 0 | python,unity3d,unityscript | 27,021,007 | 2 | true | 0 | 1 | An efficient solution would be to use TCP sockets in both the Python script and the Unity-side script. Since the Python-side is serving the data, then designate it as the server socket and make the socket on the Unity-side the client. This solution will be much faster than writing and reading to a shared file. | 1 | 0 | 0 | I have some code that is running in the python console reading out text lines.
Is it possible to feed the output of the python console to my Unity3D game script so that it can trigger some actions in my game?
in other words:
python console running in the background outputs commands that need to be fed to my Unity game. | Feed Python output to Unity3D | 1.2 | 0 | 0 | 1,040 |
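A sketch of the Python side of the TCP approach from the answer: a tiny server that pushes each console line to whichever client connects (a throwaway local client stands in for the Unity script here; the line contents are made up).

```python
# Serve console lines over TCP so a connected client receives them in order.
import socket
import threading

def serve_lines(lines):
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))          # port 0 -> let the OS pick a free port
    srv.listen(1)

    def push():
        conn, _ = srv.accept()
        for line in lines:
            conn.sendall((line + "\n").encode())
        conn.close()
        srv.close()

    threading.Thread(target=push).start()
    return srv.getsockname()[1]

port = serve_lines(["spawn_enemy", "play_sound"])
cli = socket.create_connection(("127.0.0.1", port))
received = cli.makefile().read().splitlines()
cli.close()
print(received)  # -> ['spawn_enemy', 'play_sound']
```

The Unity side would open a client socket to the same port and read newline-delimited commands.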
27,009,879 | 2014-11-19T05:49:00.000 | 3 | 0 | 0 | 0 | python,templates,cheetah | 27,010,118 | 3 | false | 1 | 1 | I just discovered a solution via output from complex expressions using #echo:
#echo '%.2f' % $my_float_var#
This prints out my float in the variable $my_float_var with only two decimal places. | 1 | 1 | 0 | I'm currently using Cheetah Templates with my python code and I'm passing in a number of floating point numbers. I'd like to truncate these floats to only two decimal places within the template, e.g. 0.2153406 would become 0.21
Is it possible to do this within the Cheetah template of do I have to pass in already truncated strings? | How do I truncate floats to two decimal places in cheetah templates? | 0.197375 | 0 | 0 | 525 |
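One caveat worth noting about the #echo answer above: '%.2f' is ordinary Python %-formatting and it rounds, whereas the question asked for truncation (0.2153406 -> 0.21). Truncating needs a floor first, which could also be done in Python before passing the value into the template:

```python
# Rounding vs. truncating a float to two decimal places.
import math

value = 0.2153406
rounded = '%.2f' % value                              # -> '0.22' (rounds)
truncated = '%.2f' % (math.floor(value * 100) / 100)  # -> '0.21' (truncates)
print(rounded, truncated)
```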
27,011,456 | 2014-11-19T07:39:00.000 | 2 | 0 | 0 | 0 | python,arrays,image-processing,numpy | 27,011,549 | 2 | true | 0 | 0 | Well I think you could do that, but maybe less convenient. The reasons could be:
numpy supports all the matrix manipulations, and since it is optimized it can be much faster (you can also switch to OpenBLAS to make it amazingly fast). For image-processing problems, in cases where images become large, this can be very demanding in terms of speed.
numpy has lots of useful functions, such as numpy.fft for Fourier transforms, or numpy.convolve to do convolution. These can be critical for image processing.
Most modules and packages are based on numpy, such as scipy, graphlab and matplotlib. For example, you should use 'import matplotlib.pyplot as plt; plt.imshow()' to show images; other array types may hardly be accepted as arguments. | 1 | 2 | 1 | Is there any reason why I should use numpy to represent pixels in an image manipulation program as opposed to just storing the values in my own array of numbers? Currently I am doing the latter but I see lots of people talking about using numpy for representing pixels as multidimensional arrays. Other than that are there any reasons why I should be using numpy as opposed to my own implementation? | Should I use numpy for an image manipulation program? Why? | 1.2 | 0 | 0 | 291 |
27,011,508 | 2014-11-19T07:43:00.000 | 0 | 0 | 1 | 1 | ipython,ipython-notebook | 27,033,546 | 1 | false | 0 | 0 | Thanks for your comment. I just tried to google "oleshell dll" and found this file is not related to ipython and windows. The ipython works again after I've renamed this file. | 1 | 0 | 0 | I was using ipython notebook (packages in Anaconda-2.1.0 or Canopy-1.4.1) in "Windows Server 2008 R2 Enterprise" with browser (latest version of chrome or firefox). It was working perfectly.
Then another user started ipython notebook in his account. At first, his ipython notebook failed to run any notebook. Worse, my ipython also failed to open or create any notebook after restarting the kernel. Windows popped up a "Problem signature" dialog with the following information:
Problem Event Name: APPCRASH
Application Name: python.exe
Application Version: 0.0.0.0
Application Timestamp: 538f8ffc
Fault Module Name: oleshell874.dll
Fault Module Version: 8.7.4.0
Fault Module Timestamp: 54448aac
Exception Code: c0000005
Exception Offset: 0000000000004867
OS Version: 6.1.7600.2.0.0.274.10
Locale ID: 1033
Additional Information 1: a481
Additional Information 2: a481c64a34722f1c689be57b64ee6a54
Additional Information 3: 3393
Additional Information 4: 33936ce55b0e8b96f5dce6a43fae2e99
Even though I reinstalled Anaconda or Canopy and restarted the system, it didn't help. I have tried to google the Fault Module (oleshell874.dll) but it shows no results.
Please help! | IPython notebook fails to open any notebook | 0 | 0 | 0 | 208 |
27,014,098 | 2014-11-19T10:10:00.000 | 0 | 0 | 1 | 0 | python,macos,button,crash,python-idle | 45,559,740 | 2 | false | 0 | 0 | I got a similar problem. Try the British PC keyboard layout; it works for me | 1 | 0 | 0 | I am running OS X 10.9.5, and IDLE w/ Python 3.4.1.
When I press the buttons for (¨/^) or (´/`), IDLE crashes and the program closes.
This causes me to lose changes to files, as well as time. My fellow students using Mac experience the same problem.
Anyone know how I can fix this? | IDLE crashes for certain buttons on Mac | 0 | 0 | 0 | 3,590 |
27,016,867 | 2014-11-19T12:29:00.000 | 2 | 0 | 1 | 1 | python,windows | 27,017,888 | 2 | false | 0 | 0 | I cannot imagine a valid reason to break Windows philosophy. Microsoft always said that on its system the file extension determined the type of file and its usage. There are already many cross-platform programs; for example, firefox is named firefox on Unix-like systems and firefox.exe on Windows systems.
EDIT
That being said, Windows accepts what you give it as command, provided it is in a correct executable format. So if you create a program HelloWorld.exe and rename it HelloWorld.joe :
cmd.exe will start the program when you type HelloWorld.joe at the prompt (tested on Windows XP and Windows 7)
Python 2.7 and 3.4 should start it either using os.system or with the subprocess module (confirmed by eryksun)
You cannot use an os.system call with a badly named file (in Microsoft's sense) because under the hood, os.system uses cmd.exe. It is a Microsoft program that looks at the file extension to know what to do with it, and will never execute (as an exe) a file that does not have the exe extension.
You will not be able to use the subprocess module with shell=True for exactly the same reason. But when shell=False, it directly calls CreateProcess, which accepts any name as the name of a valid executable (provided it is ...) as said by zeller. | 1 | 1 | 0 | I need to be able to execute a .exe file in Python that has been renamed without the file extension (for example, let's say .joe - it doesn't represent anything that I know of).
Let's say I have "HelloWorld.exe", I then rename it to "HelloWorld.joe" and need to execute it.
Looking around, os.system is often used to execute .exe files, but I haven't been able to get it working without the .exe extension.
The file cannot be renamed to have the .exe extension (or for that matter anything), in this "scenario", I do not have access to the source code of the executable. | Windows - Executing an executable (.exe) without the .exe extension | 0.197375 | 0 | 0 | 1,498 |
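The shell=False point in the answer above can be demonstrated on POSIX, where a made-up .joe extension is equally irrelevant to the OS exec call; on Windows the analogous subprocess.run([...], shell=False) succeeds for the same reason, since CreateProcess does not inspect the extension. This is a hedged sketch using the question's example file name:

```python
import os
import stat
import subprocess
import tempfile

def run_renamed_executable():
    """POSIX demo: run an executable whose name has a made-up extension."""
    with tempfile.TemporaryDirectory() as d:
        prog = os.path.join(d, "HelloWorld.joe")  # extension is arbitrary
        with open(prog, "w") as f:
            f.write("#!/bin/sh\necho hello\n")
        os.chmod(prog, os.stat(prog).st_mode | stat.S_IXUSR)
        # shell=False: the OS exec call does not care about the extension
        result = subprocess.run([prog], capture_output=True, text=True)
        return result.stdout.strip()

print(run_renamed_executable())
```

The same call routed through a shell (shell=True, or os.system) would instead delegate the decision to cmd.exe on Windows, which refuses unknown extensions.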
27,019,270 | 2014-11-19T14:24:00.000 | 1 | 0 | 0 | 0 | java,python,scala,apache-spark | 27,019,410 | 1 | false | 0 | 0 | If you just want any matchable String to an int - String.hashCode(). However you will have to deal with possible hash collisions. Alternatively you'd have to convert each character to its int value and append (not add) all of these together. | 1 | 0 | 1 | I am working with a dataset that has users as Strings (ie. B000GKXY4S). I would like to convert each of these users to int, so I can use Rating(user: Int, product: Int, rating: Double) class in Apache Spark ALS. What is the most efficient way to do this? Preferably using Spark Scala functions or python native functions. | Convert String containing letters to Int efficiently - Apache Spark | 0.197375 | 0 | 0 | 735 |
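A hedged Python sketch of both options from the answer above: a port of Java's String.hashCode() (collisions are possible), and a collision-free table that simply hands out small consecutive ids per distinct string, which is effectively what Spark's zipWithIndex or StringIndexer style approaches give you:

```python
def java_string_hashcode(s):
    """Port of Java's String.hashCode(): a 31-based polynomial hash,
    wrapped to a 32-bit signed integer. Collisions are possible."""
    h = 0
    for ch in s:
        h = (31 * h + ord(ch)) & 0xFFFFFFFF
    return h - 0x1_0000_0000 if h >= 0x8000_0000 else h

def assign_ids(user_strings):
    """Collision-free alternative: give each distinct string a small int id."""
    ids = {}
    for u in user_strings:
        ids.setdefault(u, len(ids))
    return ids
```

For ALS specifically, the collision-free mapping is usually preferable, since two users hashing to the same int would be merged into one rating profile.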
27,019,558 | 2014-11-19T14:38:00.000 | 0 | 1 | 0 | 0 | javascript,python,ajax,json,cgi | 29,558,961 | 2 | false | 1 | 0 | Have you checked your host's server logs to see if it's giving you any output?
Before asking here, a good idea would be to ssh to your host, if you can, and run the program directly, which will most likely print the error in the terminal.
This is far too general at the moment; there are so many reasons why a CGI request can fail (misconfigured environment, libraries not installed, permission errors).
Go back and read your server's logs and see if that shines any more light on the issue. | 1 | 0 | 0 | I want to use javascript to retrieve a json object from a python script
I've tried using various methods of ajax and post but can't get anything working.
For now I have tried to set it up like this
My Javascript portion:
I have tried
$.post('./cgi-bin/serverscript.py', { type: 'add'}, function(data) {
console.log('POSTed: ' + data);
});
and
$.ajax({
type:"post",
url:"./cgi-bin/serverscript.py",
data: {type: "add"},
success: function(o){ console.log(o); alert(o);}
});
My Python
import json
import cgi
import cgitb
cgitb.enable()
data = cgi.FieldStorage()
req = data.getfirst("type")
print "Content-type: application/json"
print
print (json.JSONEncoder().encode({"status":"ok"}))
I am getting a 500 (internal server error) | Running CGI Python Javascript to retrieve JSON object | 0 | 0 | 1 | 847 |
27,021,617 | 2014-11-19T16:14:00.000 | 2 | 0 | 1 | 1 | python,bash,fastq | 27,021,713 | 2 | false | 0 | 0 | You can try find . -name "*.fastq" | xargs your_bash_script.sh, which uses find to get all the files and applies your script to them. | 1 | 0 | 0 | I have a directory with 94 subdirectories, each containing one or two files *.fastq. I need to apply the same python command to each of these files and produce a new file qc_*.fastq.
I know how to apply a bash script individually to each file, but I'm wondering if there is a way to write a bash script to apply the command to all the files at once | applying the same command to multiple files in multiple subdirectories | 0.197375 | 0 | 0 | 115 |
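The same traversal can also be driven from Python rather than find piped to xargs; this sketch walks every subdirectory and writes a qc_ file next to each *.fastq. The per-file command here is a placeholder assumption that just copies the contents; swap in the real Python call:

```python
import os

def process_all(root):
    """Walk *root* and create qc_<name>.fastq next to every .fastq file."""
    created = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(".fastq") and not name.startswith("qc_"):
                src = os.path.join(dirpath, name)
                dst = os.path.join(dirpath, "qc_" + name)
                # Placeholder for the real per-file command: copy the data.
                with open(src) as fin, open(dst, "w") as fout:
                    fout.write(fin.read())
                created.append(dst)
    return sorted(created)
```

The qc_ prefix check keeps already-processed files from being processed a second time on re-runs.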
27,025,569 | 2014-11-19T19:42:00.000 | 1 | 1 | 0 | 0 | python,sql-server,ssis,smtp,exchange-server | 27,025,619 | 1 | false | 0 | 0 | I believe you are getting this due to authentication. SSIS is probably passing your Windows credentials through, but when you are trying to send with Python, your credentials are being denied.
Not 100% sure that is your issue. But a thought. | 1 | 0 | 0 | I'm trying to send emails using python smtp library but get the following an error message when trying to send to external email addresses (internal email works):
smtplib.SMTPRecipientsRefused: {'[email protected]': (550, ' Relaying denied')}
This is because we have rules setup on our exchange that prevent relaying from client machines.
What I don't understand is how come I can send emails over SMTP with an SSIS package without getting the relay error.
Is there a setting I need to enable in my python to bypass this or is SSIS sending the email to SQL Server to send on its behalf. | How does (SSIS) Integrated Services send email? | 0.197375 | 0 | 0 | 354 |
27,025,622 | 2014-11-19T19:45:00.000 | 2 | 0 | 0 | 0 | python-2.7,sqlalchemy,tornado,psycopg2,gevent | 27,316,073 | 1 | false | 0 | 0 | You are probably using the same connection with two different cursors concurrently. | 1 | 2 | 0 | I am getting this error when I am querying my rest app built with tornado, gevent, postgres and patched using psycogreen. I am constantly getting this error even when i am making requests at a concurrency of 10. If any one has a solution or info about what I might be doing wrong please share.
Error messages:
ProgrammingError: (ProgrammingError) execute cannot be used while an asynchronous query is underway
ProgrammingError: close cannot be used while an asynchronous query is underway
Stack Trace:
File "/ENV/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2320, in all
return list(self)
File "/ENV/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2438, in __iter__
return self._execute_and_instances(context)
File "/ENV/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2453, in _execute_and_instances
result = conn.execute(querycontext.statement, self._params)
File "/ENV/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 729, in execute
return meth(self, multiparams, params)
File "/ENV/local/lib/python2.7/site-packages/sqlalchemy/sql/elements.py", line 322, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "/ENV/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 826, in _execute_clauseelement
compiled_sql, distilled_params
File "/ENV/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 958, in _execute_context
context)
File "/ENV/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1159, in _handle_dbapi_exception
exc_info
File "/ENV/local/lib/python2.7/site-packages/sqlalchemy/util/compat.py", line 199, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb)
File "/ENV/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 951, in _execute_context
context)
File "/ENV/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 436, in do_execute
cursor.execute(statement, parameters)
ProgrammingError: (ProgrammingError) execute cannot be used while an asynchronous query is underway | ProgrammingError: close cannot be used while an asynchronous query is underway | 0.379949 | 1 | 0 | 1,803 |
27,026,298 | 2014-11-19T20:24:00.000 | 0 | 0 | 1 | 1 | python,multiprocessing | 27,031,081 | 1 | false | 0 | 0 | I figured it out myself. It is related to python's relative import. | 1 | 0 | 0 | I want to use Jug for parallel processing. I have a Canopy installed and I also installed Jug using command pip install jug according to the documentation online.
In order to find where jug is installed, I installed jug again using the same command as above, it showed
me:
Requirement already satisfied (use --upgrade to upgrade): jug in
c:\users[userfolder]\appdata\local\enthought\canopy\user\lib\site-packages
(from jug)
Requirement already satisfied (use --upgrade to upgrade): six in
c:\users[userfolder]\appdata\local\enthought\canopy\user\lib\site-packages
(from jug)
Requirement already satisfied (use --upgrade to upgrade): redis in
c:\users[userfolder]\appdata\local\enthought\canopy\user\lib\site-packages
(from jug)
Now, I thought my jug is in the path of c:\users\[userfolder]\appdata\local\enthought\canopy\user\lib\site-package and it is there since I listed all files under this folder and I saw it.
I am not sure whether this jug is an exe or py or something else, but I tried to run a command: jug C:\primes.py under this folder, and it gave me an error message saying jug is not recognized as the name of a cmdlet, function, script file....
I also tried the command ./jug C:\primes.py and .\jug C:\primes.py, but none of them worked.
In addition, I tried python jug status C:\primes.py and this one gave me the message cannot find '__main__' module in 'jug'.
Now I have no idea how to run jug. Could someone who has tried jug on windows help me with it? | Unable to run Jug in windows | 0 | 0 | 0 | 136 |
27,028,599 | 2014-11-19T22:54:00.000 | 0 | 0 | 0 | 0 | python,selenium,selenium-webdriver,phantomjs | 51,226,805 | 1 | false | 1 | 0 | href="#" just refreshes the page when you select the link, for example if it was href=#top" when you select the link you would be brought to the top of that same page.
You are probably doing it correctly, you can use driver.find_element_by_link_text('some text') or driver.find_element_by_partial_link_text('some text'), but clicking on that element is just routing you to that same page. | 1 | 2 | 0 | Coding in Python 2.7.
I have a crawler I setup and it works perfectly fine when driven by FireFox, but breaks when driven by PhantomJS.
I'm trying to click on an element with href="#"
The crux of the issue is that when the FF driver clicks on this element with the # href, it performs the javascript action (in this case revealing a hidden part of a layer), but when PhantomJS does it, it either doesn't perform the click or it does click it but # just reloads the same page (I can't tell which one).
I've tried everything I can think of, including multiple ActionChains and clicking element by element all the way down to this one with the link. Nothing seems to work.
Any ideas? | Selenium PhantomJS click element with href="#" | 0 | 0 | 1 | 800 |
27,030,337 | 2014-11-20T01:42:00.000 | 2 | 0 | 1 | 0 | python,notepad++ | 27,030,420 | 1 | false | 0 | 0 | In Notepad++ you can select code then right click and at the bottom of the menu you can select "hide lines" this may help it may not, but its just something that can help you hide code that you know works and you don't need to see any more. You can easily see the code again by clicking one of the 2 small arrows that are shown on the left hand side where the code would of been located. (Note: There is also a keyboard shortcut for this, just select the code then hit Alt+H)
Hope this Helped,
~Bobbeh | 1 | 0 | 0 | I am using notepad++ for wiring python code. For indented code it is possible to click the minus sign on the left side to minimize that section. For example, commands that require a indent (i.e. if/def/while commands) can easily be minimized down. I however have blocks of code that are all at the same indentation level that I would also like to minimize [i.e. be able to specify my own levels]. Given Python has specific rules for indentation, I can't see an obvious way of forcing a break. Just wondering if there is ideally a 'clean' way of achieving this, or if not, a simple way to force it. | Collapsing nonindented python code in notepad++ | 0.379949 | 0 | 0 | 132 |
27,031,912 | 2014-11-20T04:43:00.000 | 0 | 0 | 1 | 0 | python,indentation | 29,987,945 | 3 | false | 0 | 0 | If you are using Eclipse IDE, there are two formatting options available that can be used to accomplish this, accessed from the Source Menu.
a) "Source > Convert space-tabs to tabs" or
b) "Source > Convert tabs to space-tabs"
I was able to format code that had 2 spaces instead of a tab using the first option. You simply specify the number of spaces to be converted into a tab. | 1 | 8 | 0 | I am a beginner at Python and have made the mistake of mixing spaces and tabs for indentations. I see people use reindent.py, but I have no idea how to use it. please explain in the simplest way possible without trying to use too fancy words and dumb it down as best as possible as I am a beginner.
Thanks. | HOWTO: Use reindent.py for dummies | 0 | 0 | 0 | 17,316 |
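For orientation, reindent.py is normally run from a shell prompt over a file or directory (check python reindent.py -h for the exact flags of your copy). The core transformation it performs, turning tab indentation into spaces, can be sketched in a few lines; this is a simplified approximation, not the real tool:

```python
def untabify(source, tabsize=4):
    """Replace tabs in each line's leading whitespace with spaces
    (a rough, simplified version of what reindent.py does)."""
    fixed = []
    for line in source.splitlines(True):  # keep the line endings
        stripped = line.lstrip(" \t")
        indent = line[: len(line) - len(stripped)]
        fixed.append(indent.expandtabs(tabsize) + stripped)
    return "".join(fixed)
```

Running something like this over a file (and saving the result) removes the tab/space mixing that trips up the Python parser.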
27,032,379 | 2014-11-20T05:26:00.000 | 2 | 0 | 1 | 0 | python,list | 27,032,492 | 2 | false | 0 | 0 | It isn't possible to completely remove an element while retaining the indices of the other elements, as in your example. The indices of the elements represent their positions in the list. If you have a list [a, a, b, b, c, c] and you remove the first b to get [a, a, b, c, c] then the indices adjust because they represent the positions of the elements in the list, which have changed with the removal of an element.
However, depending on what your use case is, there are ways you can get the behavior you want by using a different data structure. For example, you could use a dictionary of integers to objects (whatever objects you need in the list). For example, the original list could be represented instead as {0: a, 1: a, 2: b, 3: b, 4: c, 5: c}. If you remove the b at 'index' (rather, with a key of) 2, you will get {0: a, 1: a, 3: b, 4: c, 5: c}, which seems to be the behavior you are looking for. | 1 | 4 | 0 | I have a list L = [a,a,b,b,c,c] now I want to remove first 'b' so that the L becomes [a,a,b,c,c]. In the new L the index of first 'c' is 3. Is there any way I can remove first 'b' from L and still get the index of first 'c' as 4.
Thanks in advance. | python remove element from list without changing index of other elements | 0.197375 | 0 | 0 | 5,476 |
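The dictionary idea from the answer above, as a runnable sketch:

```python
# Represent positions explicitly as dict keys instead of list indices.
L = {i: v for i, v in enumerate(["a", "a", "b", "b", "c", "c"])}
del L[2]  # remove the first 'b'; the remaining keys are unchanged

first_c = min(k for k, v in L.items() if v == "c")
print(first_c)  # the first 'c' is still at key 4
```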
27,033,206 | 2014-11-20T06:31:00.000 | 0 | 0 | 1 | 0 | python | 27,036,097 | 1 | false | 0 | 1 | You have to address Circle in Circle, Circle in Square, Square in Circle, and Square in Square. I suggest drawing some pictures with the centers marked, and observe their relationships.
Circle in Circle: the distance between the centers has to be less than the difference in radii.
Circle in Square: Same as circle in circle I think
Square in Circle: The radius of the circle must be larger than the distance from the center of the square to a corner PLUS the distance between centers
Square in Square: you got this! solve yourself ;) | 1 | 0 | 0 | Have to make a program that given the option to input square or circle, user inputs width and a center x,y coordinate.
What I don't understand is how to write code for if there are two shapes on a plane and how to identify if one is inside the other
I'm super helpless, and have no background in computer science. Thank you! | Python Shapes on grid | 0 | 0 | 0 | 420 |
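Two of the four cases above sketched as code, following the conditions stated in the answer; representing each shape by its center and width/radius, with squares assumed axis-aligned, is an assumption about the problem setup:

```python
import math

def circle_in_circle(c1, r1, c2, r2):
    """Circle 1 (center c1, radius r1) lies inside circle 2 when
    dist(centers) + r1 <= r2, i.e. dist <= difference in radii."""
    return math.dist(c1, c2) + r1 <= r2

def square_in_square(c1, w1, c2, w2):
    """Axis-aligned squares given by center (x, y) and width:
    inner square fits when its half-width plus the center offset
    stays within the outer half-width on both axes."""
    return (abs(c1[0] - c2[0]) + w1 / 2 <= w2 / 2
            and abs(c1[1] - c2[1]) + w1 / 2 <= w2 / 2)
```

The mixed cases (circle in square, square in circle) follow the same pattern of comparing center offsets against the shapes' extents.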
27,033,261 | 2014-11-20T06:34:00.000 | 2 | 0 | 0 | 0 | python,csv,qgis | 27,033,373 | 3 | false | 0 | 0 | There is a parenthesis missing from the end of your --6 line of code. | 1 | 4 | 1 | I am attempting to import a file into QGIS using a python script. I'm having a problem getting it to accept the CRS. Code so far
from PyQt4.QtGui import *
from PyQt4.QtCore import *
from qgis.core import *
from qgis.utils import iface
###1 Set file name here
InFlnm='Input.CSV'
###2 Set pathname here
InDrPth='G:/test'
###3 Build the file name and path for uri
InFlPth="file:///"+InDrPth+InFlnm
###4 Set import String here note only need to set x and y others come for free!
uri = InFlPth+"?delimiter=%s&xField=%s&yField=%s" % (",","x","y")
###5 Load the points into a layer
bh = QgsVectorLayer(uri, InFlnm, "delimitedtext")
###6 Set the CRS (Not sure if this is working seems to?)
bh.setCrs(QgsCoordinateReferenceSystem(32365, QgsCoordinateReferenceSystem.EpsgCrsId)
###7 Display the layer in QGIS (Here I get a syntax error?)
QgsMapLayerRegistry.instance().addMapLayer(bh)
Now all the above works OK and QGIS prompts me for a CRS before executing the last line of the script to display the layer - as long as I comment-out step 6
However, if I attempt to set the CRS, removing ### from step 6, I get a syntax error reported on the last line that displays the points (Step 7). Not sure what the trick is here - I'm pretty new to Python but know my way around some other programming languages | Import csv into QGIS using Python | 0.132549 | 0 | 0 | 8,411 |
27,034,001 | 2014-11-20T07:25:00.000 | 21 | 1 | 1 | 0 | python,pytest,os.path | 27,034,002 | 2 | true | 0 | 0 | 'tmpdir' is of type <class 'py._path.local.LocalPath'>, just wrap 'tmpdir' in a string when passing to os.path.join
example:
os.path.join(str(tmpdir), 'my_test_file.txt') | 1 | 15 | 0 | This error appeared when trying to use the 'tmpdir' in a pytest test.
TypeError: object of type 'LocalPath' has no len() | os.path.join fails with "TypeError: object of type 'LocalPath' has no len()" | 1.2 | 0 | 0 | 8,854 |
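A self-contained illustration of the failure and the str() fix from the answer above; LocalPathLike here is a hypothetical stand-in for py.path.local, and in modern Python 3 the TypeError message differs from the original "no len()" wording but the cause is the same (newer pytest also offers the pathlib-based tmp_path fixture, which works with os.path directly):

```python
import os

class LocalPathLike:
    """Minimal stand-in for py.path.local: printable, but not os.PathLike."""
    def __init__(self, p):
        self._p = p
    def __str__(self):
        return self._p

tmpdir = LocalPathLike("/tmp/pytest-0")

try:
    os.path.join(tmpdir, "my_test_file.txt")   # raises TypeError
    failed = False
except TypeError:
    failed = True

path = os.path.join(str(tmpdir), "my_test_file.txt")  # the str() fix
print(path)
```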
27,034,666 | 2014-11-20T08:11:00.000 | 0 | 0 | 1 | 1 | python,eclipse | 27,035,599 | 3 | false | 1 | 0 | Had this same problem a few days ago. You might have downloaded the wrong version of PyDev for your python version (2.7.5 or something is my python version, but I downloaded PyDev for version 3.x.x)
1) Uninstall your current version PyDev
2) you have to install the correct version by using "Install New Software", then uncheck the "Show only newest software" option or whatever it is. Then select the version that matches your python version, and install :)
27,035,257 | 2014-11-20T08:50:00.000 | 1 | 0 | 0 | 0 | python,regression,linear-regression | 27,035,506 | 3 | true | 0 | 0 | If sm is a defined object in statsmodels, you need to invoke it by statsmodels.sm, or use from statsmodels import sm, then you can invoke sm directly. | 2 | 0 | 1 | I am trying to get the beta and the error term from a linear regression(OLS) in python. I am stuck at the statement X=sm.add_constant(X, prepend=True), which is returning an
error:"AttributeError: 'module' object has no attribute 'add_constant'"
I already installed the statsmodels module. | X=sm.add_constant(X, prepend=True) is not working | 1.2 | 0 | 0 | 8,510 |
27,035,257 | 2014-11-20T08:50:00.000 | 5 | 0 | 0 | 0 | python,regression,linear-regression | 61,634,875 | 3 | false | 0 | 0 | Try importing statsmodels.api
import statsmodels.api as sm | 2 | 0 | 1 | I am trying to get the beta and the error term from a linear regression(OLS) in python. I am stuck at the statement X=sm.add_constant(X, prepend=True), which is returning an
error:"AttributeError: 'module' object has no attribute 'add_constant'"
I already installed the statsmodels module. | X=sm.add_constant(X, prepend=True) is not working | 0.321513 | 0 | 0 | 8,510 |
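For reference, all sm.add_constant does is add a column of ones to the design matrix (the intercept term); a dependency-free sketch of that behavior, not statsmodels' actual implementation:

```python
def add_constant(rows, prepend=True):
    """Plain-Python sketch of statsmodels' sm.add_constant for 2-D data:
    prepend (or append) a constant column of ones to each row."""
    if prepend:
        return [[1.0] + list(r) for r in rows]
    return [list(r) + [1.0] for r in rows]
```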
27,041,862 | 2014-11-20T14:21:00.000 | 2 | 1 | 0 | 0 | python,linux,curl,libcurl,centos6 | 27,042,314 | 2 | true | 0 | 0 | You seem to have two versions installed, curl-config --version shows the newer version (7.24.0) and curl (the tool) is the newer version but when it runs the run-time linker ld.so finds and uses the older version (7.21.1).
Check /etc/ld.so.conf for which dirs that are check in which order and see if you can remove one of the versions or change order of the search. | 1 | 2 | 0 | i am quite new to curl development. I am working on centOS and i want to install pycurl 7.19.5, but i am unable to as i need libcurl 7.21.2 or above.I tried installing an updated curl but it is still pointing to the old libcurl.
curl-config --version
libcurl 7.24.0
curl --version
curl 7.24.0 (x86_64-unknown-linux-gnu) libcurl/7.21.1 OpenSSL/1.0.1e zlib/1.2.3 c-ares/1.7.3 libidn/1.29 Protocols: dict file ftp ftps http https imap imaps ldap ldaps pop3 pop3s rtsp smtp smtps telnet tftp Features: AsynchDNS Debug TrackMemory IDN IPv6 Largefile NTLM SSL libz.
Can anyone please tell me how I can update the libcurl version in curl | libcurl mismatch in curl and curl-config | 1.2 | 0 | 1 | 4,612 |
27,047,443 | 2014-11-20T19:01:00.000 | 0 | 0 | 1 | 0 | python,python-3.x,namespaces,setuptools,python-3.4 | 38,507,339 | 2 | false | 0 | 0 | Implicit namespace packages are not supported by find_packages. However, all that find_packages does is return a list of dotted.package.names. You can still list your packages explicitly. | 1 | 4 | 0 | Using Python 3.4 and Setuptools, I'm trying to get namespace packages to work correctly as defined in PEP 420. My directory structure looks something like this:
project
    __init__.py
    core
        several .py files
    logging
        com1
            several .py files
        com2
            several .py files
    interface
    misc files
When using setuptools find_package() function, it finds the "project" package, but it doesn't install any of the folders (implicit sub-packages) inside of "project". When I unzip the .egg file, all I see is the __init__.py file inside, none of the subdirectories or files.
I could just put an __init__.py in every directory, but since those files would all be empty and I don't like the way it makes the structure work, I'm trying to avoid that.
If I move to just outside of my "project" directory, and run the following, it works
python -m project.logging.com1.myfile
but anywhere else it doesn't work, because setuptools isn't installing the sub-directories (implicit namespaces) that don't have __init__.py in them.
How can I make setuptools install my implicit namespace packages correctly? Would I just need to tell it to install all files inside the directory and that will be good enough? | python implicit namespace packages are not installing with setuptools | 0 | 0 | 0 | 855 |
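One way to follow the answer's advice of listing packages explicitly is sketched below; list_packages is a hypothetical helper, not setuptools API (newer setuptools versions also ship find_namespace_packages for exactly this case):

```python
import os

def list_packages(root):
    """Return dotted package names for every directory under *root*,
    including PEP 420 namespace dirs that have no __init__.py
    (which setuptools.find_packages() would skip)."""
    base = os.path.dirname(os.path.abspath(root))
    pkgs = []
    for dirpath, dirnames, _filenames in os.walk(root):
        dirnames[:] = [d for d in dirnames if d != "__pycache__"]
        pkgs.append(os.path.relpath(dirpath, base).replace(os.sep, "."))
    return sorted(pkgs)
```

The resulting list can then be passed as the packages argument of setup() instead of the find_packages() result.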
27,047,518 | 2014-11-20T19:05:00.000 | 2 | 1 | 0 | 1 | python,command-line-arguments,boost-python | 27,047,900 | 1 | true | 0 | 0 | Prior to your call to exec_file() (but after Py_Initialize(), you should invoke PySys_SetArgv(argc, argv); giving it the int argc and const char *argv[] from your program's main(). | 1 | 3 | 0 | I use boost::python to integrate Python into a C++ program. Now I would like that the Python program that is executed via boost::python::exec_file() can obtain the command line arguments of my C++ program via sys.argv. Is this possible? | How to set sys.argv in a boost::python plugin | 1.2 | 0 | 0 | 795 |
27,050,055 | 2014-11-20T21:38:00.000 | 6 | 0 | 0 | 0 | python,machine-learning,scikit-learn,svm | 27,050,808 | 2 | true | 0 | 0 | They simply don't exist for kernels that are not linear: The kernel SVM is solved in the dual space, so in general you only have access to the dual coefficients.
In the linear case this can be translated to primal feature space coefficients. In the general case these coefficients would have to live in the feature space spanned by the chosen kernel, which can be infinite dimensional. | 1 | 1 | 1 | I would like to calculate the primal variables w with a polynomial kernel svm, but to do this i need to compute clf.coef_ * clf.support_vectors_. Access is restricted to .coef_ on all kernel types except for linear - is there a reason for this, and is there another way to derive w in that case? | Is there a reason that scikit-learn only allows access to clf.coef_ with linear svms? | 1.2 | 0 | 0 | 1,416 |
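For the binary linear case the translation mentioned above is just a matrix product: clf.coef_ equals clf.dual_coef_ @ clf.support_vectors_. A dependency-free sketch of that product (names mirror scikit-learn's attributes; for non-linear kernels no such w exists in the input space):

```python
def primal_w(dual_coef, support_vectors):
    """w = sum_i alpha_i * x_i, valid for the linear kernel only."""
    n_features = len(support_vectors[0])
    w = [0.0] * n_features
    for alpha, sv in zip(dual_coef, support_vectors):
        for j, x in enumerate(sv):
            w[j] += alpha * x
    return w
```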
27,052,140 | 2014-11-21T00:20:00.000 | 7 | 0 | 1 | 1 | python,linux | 27,052,957 | 3 | true | 0 | 0 | How about export PATH+=:/usr/local/bin, try it, maybe helpful. | 2 | 4 | 0 | I had python2.6 on my linux box but installed python3.4 to use new modules. I installed it using sudo access. The new version was installed in /usr/local/bin. Without root access, I can use the new python3.4, both by just using python3.4 in the command line or using the shebang in the .py file #!/usr/local/bin/python3
Now I am trying to install a module, for which I need sudo access. When I am root, and I run python3.4, it says command not found. I ran whereis python and found the path to python2.6 in /usr/bin, but whereis python3.4 as root gives, not found in /usr/bin, which is correct since it is in /usr/local/bin. Again, if I exit from root, I have no trouble using python3.4
This seems like a $PATH issue (not sure), can some one help me what I am doing wrong while installing the module for the new python3.4? I was able to install the module, but it was installed in the old python2.6 site-packages. | Sudo does not find new python version | 1.2 | 0 | 0 | 9,548 |
27,052,140 | 2014-11-21T00:20:00.000 | 2 | 0 | 1 | 1 | python,linux | 27,052,211 | 3 | false | 0 | 0 | Well you could have given the location to install Py 3.4 to be in /usr/bin.
An easy approach could be to copy the Py 3.4 bin to /usr/bin from /usr/local/bin.
Secondly You can also install again with the prefix params. | 2 | 4 | 0 | I had python2.6 on my linux box but installed python3.4 to use new modules. I installed it using sudo access. The new version was installed in /usr/local/bin. Without root access, I can use the new python3.4, both by just using python3.4 in the command line or using the shebang in the .py file #!/usr/local/bin/python3
Now I am trying to install a module, for which I need sudo access. When I am root, and I run python3.4, it says command not found. I ran whereis python and found the path to python2.6 in /usr/bin, but whereis python3.4 as root gives, not found in /usr/bin, which is correct since it is in /usr/local/bin. Again, if I exit from root, I have no trouble using python3.4
This seems like a $PATH issue (not sure), can some one help me what I am doing wrong while installing the module for the new python3.4? I was able to install the module, but it was installed in the old python2.6 site-packages. | Sudo does not find new python version | 0.132549 | 0 | 0 | 9,548 |
27,053,070 | 2014-11-21T02:12:00.000 | 0 | 0 | 1 | 0 | java,python | 40,499,350 | 2 | false | 1 | 0 | Use the add_library function:
add_library("sound") | 2 | 1 | 0 | Im using processing in python mode but I want to use the processing library sound. But I dont know how to import this into my program in python syntax.
In Java it's like this:
import processing.sound.*;
Thanks | How to use java libraries in python processing | 0 | 0 | 0 | 390 |
27,053,070 | 2014-11-21T02:12:00.000 | 0 | 0 | 1 | 0 | java,python | 28,146,057 | 2 | false | 1 | 0 | You can use add_library(processing.sound). I used it with g4p library | 2 | 1 | 0 | Im using processing in python mode but I want to use the processing library sound. But I dont know how to import this into my program in python syntax.
In Java it's like this:
import processing.sound.*;
Thanks | How to use java libraries in python processing | 0 | 0 | 0 | 390 |
27,054,206 | 2014-11-21T04:37:00.000 | 1 | 1 | 0 | 0 | php,python,web-crawler | 27,055,726 | 2 | true | 0 | 0 | I have built crawlers in both languages. While I personally find it easy to make a crawler in python because of the huge number of freely available libraries for html parsing, I would recommend that you go with the language you are most comfortable with. Build a well-designed and efficient crawler in a language you know well and you will get even better at that language. There is no feature which cannot be implemented in either of the two languages, so just make a decision and start working.
Good luck. | 2 | 0 | 0 | i am planning to make web crawler which can crawl 200+ domain, which of the language will be suitable for it. I am quite familiar with PHP but an amateur at Python. | PHP vs Python For Web Crawler | 1.2 | 0 | 1 | 1,196 |
27,054,206 | 2014-11-21T04:37:00.000 | 1 | 1 | 0 | 0 | php,python,web-crawler | 27,055,630 | 2 | false | 0 | 0 | You could just try both. Make one in php and one in python. It'll help you learn the language even if you're experienced. Never say no to opportunities to practice. | 2 | 0 | 0 | i am planning to make web crawler which can crawl 200+ domain, which of the language will be suitable for it. I am quite familiar with PHP but an amateur at Python. | PHP vs Python For Web Crawler | 0.099668 | 0 | 1 | 1,196 |
27,063,361 | 2014-11-21T14:15:00.000 | 71 | 0 | 1 | 1 | python,ubuntu,pycharm | 27,064,195 | 8 | true | 0 | 0 | To make it a bit more user-friendly:
After you've unpacked it, go into the directory, and run bin/pycharm.sh.
Once it opens, it may offer to create a desktop entry; if it doesn't, you can ask it to do so by going to the Tools menu and selecting Create Desktop Entry...
Then close PyCharm, and in the future you can just click on the created menu entry. (or copy it onto your Desktop)
To answer the specifics between Run and Run in Terminal: It's essentially the same, but "Run in Terminal" actually opens a terminal window first and shows you console output of the program. Chances are you don't want that :)
(Unless you are trying to debug an application, you usually do not need to see the output of it.) | 4 | 38 | 0 | When I double-click on pycharm.sh, Ubuntu lets me choose between "Run in Terminal" and "Run". What is the difference between these options? | How to run PyCharm in Ubuntu - "Run in Terminal" or "Run"? | 1.2 | 0 | 0 | 153,289 |
27,063,361 | 2014-11-21T14:15:00.000 | 0 | 0 | 1 | 1 | python,ubuntu,pycharm | 71,020,778 | 8 | false | 0 | 0 | I did the edit and added the PATH for my Pycharm in .bashrc but I was still getting the error "pycharm.sh: command not found".
After trying several other things the following command resolved the issue which creates a symbolic link.
sudo ln -s /snap/pycharm-community/267/bin/pycharm.sh /usr/local/bin/pycharm
The first is the exact path to pycharm.sh, and the second is a user bin directory that should be on the PATH env by default | 4 | 38 | 0 | When I double-click on pycharm.sh, Ubuntu lets me choose between "Run in Terminal" and "Run". What is the difference between these options? | How to run PyCharm in Ubuntu - "Run in Terminal" or "Run"? | 0 | 0 | 0 | 153,289
27,063,361 | 2014-11-21T14:15:00.000 | 12 | 0 | 1 | 1 | python,ubuntu,pycharm | 52,049,395 | 8 | false | 0 | 0 | The question is already answered, Updating answer to add the PyCharm bin directory to $PATH var, so that pycharm editor can be opened from anywhere(path) in terminal.
Edit the bashrc file,
nano ~/.bashrc
Add following line at the end of bashrc file
export PATH="<path-to-unpacked-pycharm-installation-directory>/bin:$PATH"
Now you can open pycharm from anywhere in terminal
pycharm.sh | 4 | 38 | 0 | When I double-click on pycharm.sh, Ubuntu lets me choose between "Run in Terminal" and "Run". What is the difference between these options? | How to run PyCharm in Ubuntu - "Run in Terminal" or "Run"? | 1 | 0 | 0 | 153,289 |
27,063,361 | 2014-11-21T14:15:00.000 | 0 | 0 | 1 | 1 | python,ubuntu,pycharm | 69,640,211 | 8 | false | 0 | 0 | Yes just go to terminal
cd Downloads
ls
cd pycharm-community-2021.2.2 (your pycharm version)
ls
cd bin
ls
./pycharm.sh
This will launch PyCharm and reopen your last project. | 4 | 38 | 0 | When I double-click on pycharm.sh, Ubuntu lets me choose between "Run in Terminal" and "Run". What is the difference between these options? | How to run PyCharm in Ubuntu - "Run in Terminal" or "Run"? | 0 | 0 | 0 | 153,289
27,066,366 | 2014-11-21T16:50:00.000 | 6 | 0 | 0 | 0 | python,django,server | 27,074,081 | 10 | false | 1 | 0 | Well, it seems to be a shortcoming that Django doesn't provide a command to stop the development server. I thought it had one before. | 5 | 52 | 0 | I use a Cloud server to test my django small project, I type in manage.py runserver and then I log out my cloud server, I can visit my site normally, but when I reload my cloud server, I don't know how to stop the development server, I had to kill the process to stop it, is there anyway to stop the development? | django development server, how to stop it when it run in background? | 1 | 0 | 0 | 122,154
27,066,366 | 2014-11-21T16:50:00.000 | 3 | 0 | 0 | 0 | python,django,server | 27,066,460 | 10 | false | 1 | 0 | As far as I know, Ctrl+C or killing the process are the only ways to do that on a remote machine.
If you use a Gunicorn server or something similar, you will be able to do that using Supervisor. | 5 | 52 | 0 | I use a Cloud server to test my django small project, I type in manage.py runserver and then I log out my cloud server, I can visit my site normally, but when I reload my cloud server, I don't know how to stop the development server, I had to kill the process to stop it, is there anyway to stop the development? | django development server, how to stop it when it run in background? | 0.059928 | 0 | 0 | 122,154
27,066,366 | 2014-11-21T16:50:00.000 | 1 | 0 | 0 | 0 | python,django,server | 50,886,343 | 10 | false | 1 | 0 | From task manager you can end the python tasks that are running.
Now run python manage.py runserver from your project directory and it will work. | 5 | 52 | 0 | I use a Cloud server to test my django small project, I type in manage.py runserver and then I log out my cloud server, I can visit my site normally, but when I reload my cloud server, I don't know how to stop the development server, I had to kill the process to stop it, is there anyway to stop the development? | django development server, how to stop it when it run in background? | 0.019997 | 0 | 0 | 122,154 |
27,066,366 | 2014-11-21T16:50:00.000 | -3 | 0 | 0 | 0 | python,django,server | 53,480,387 | 10 | false | 1 | 0 | You can Quit the server by hitting CTRL-BREAK. | 5 | 52 | 0 | I use a Cloud server to test my django small project, I type in manage.py runserver and then I log out my cloud server, I can visit my site normally, but when I reload my cloud server, I don't know how to stop the development server, I had to kill the process to stop it, is there anyway to stop the development? | django development server, how to stop it when it run in background? | -0.059928 | 0 | 0 | 122,154 |
27,066,366 | 2014-11-21T16:50:00.000 | 4 | 0 | 0 | 0 | python,django,server | 66,500,994 | 10 | false | 1 | 0 | Ctrl+C should work. If it doesn't, Ctrl+\ (which sends SIGQUIT) will force-kill the process. | 5 | 52 | 0 | I use a Cloud server to test my django small project, I type in manage.py runserver and then I log out my cloud server, I can visit my site normally, but when I reload my cloud server, I don't know how to stop the development server, I had to kill the process to stop it, is there anyway to stop the development? | django development server, how to stop it when it run in background? | 0.07983 | 0 | 0 | 122,154
27,069,898 | 2014-11-21T20:36:00.000 | 0 | 0 | 0 | 0 | python,git,svn,csv | 27,070,113 | 1 | true | 0 | 0 | If you're asking whether it would be efficient to put your datasets under version control, based on your description of the data, I believe the answer is yes. Both Mercurial and Git are very good at handling thousands of text files. Mercurial might be a better choice for you, since it is written in python and is easier to learn than Git. (As far as I know, there is no good reason to adopt Subversion for a new project now that better tools are available.)
If you're asking whether there's a way to speed up your application's writes by borrowing code from a version control system, I think it would be a lot easier to make your application modify existing files in place. (Maybe that's what you're doing already? It's not clear from what you wrote.) | 1 | 0 | 1 | This is more of a general question about how feasible is it to store data sets under source control.
I have 20 000 csv files with number data that I update every day. The overall size of the directory is 100Mbytes or so, that are stored on a local disk on ext4 partition.
Each day changes should be diffs of about 1kbyte.
I may have to issue corrections on the data, so I am considering versioning the whole directory: 1 top-level dir contains 10 level-1 dirs, each containing 10 level-2 dirs, each containing 200 csv files.
The data is written to files by python processes( pandas frames ).
The question is about performance of writes where the deltas are small like this compared to the entire data.
svn and git come to mind, and they would have python modules to use them.
What works best?
Other solutions are, I am sure, possible, but I would stick to keeping the data as files, as is... | medium datasets under source control | 1.2 | 0 | 0 | 50
27,072,734 | 2014-11-22T00:40:00.000 | 1 | 1 | 0 | 1 | python-2.7 | 36,137,101 | 3 | false | 0 | 0 | Does python-dev source contain C/C++ code?
Yes. It includes lots of header files and a static library for Python.
Can I download python-dev source code as a .tar.gz file, for direct compilation on this machine?
python-dev is a package. Depending on your operating system you can download a copy of the appropriate files by running, e.g.
sudo apt-get install python-dev or sudo yum install python-devel depending on your operating system. | 1 | 7 | 0 | I must install python-dev on my embedded linux machine, which runs python-2.7.2. The linux flavor is custom-built by TimeSys; uname -a gives:
Linux hotspot-smc 2.6.32-ts-armv7l-LRI-6.0.0 #1 Mon Jun 25 18:12:45 UTC 2012 armv7l GNU/Linux
The platform does not have package management such as 'yum' or 'apt-get', and for various reasons I prefer not to install one. It does have gcc.
Does python-dev source contain C/C++ code? Can I download python-dev source code as a .tar.gz file, for direct compilation on this machine? I have looked for the source but haven't been able to find it.
Thanks,
Tom | python-dev installation without package management? | 0.066568 | 0 | 0 | 21,320 |
27,075,344 | 2014-11-22T08:01:00.000 | 1 | 0 | 0 | 0 | python,django,git,digital-ocean | 27,075,384 | 1 | true | 1 | 0 | Two things:
git reset --hard will just remove any changes to your current branch you may have. Try git fetch --all && git pull --rebase or git fetch --all && git pull.
Try removing all '.pyc' files. An easy command to do that is (assuming you're in the root directory of your project): find . -name '*.pyc' -exec rm {} \; | 1 | 1 | 0 | I have a Django 1.6 project using Gunicorn+Nginx. I have cloned it to my web server (DigitalOcean) and it looked fine and the site was working.
Then I made some changes on the projects in my PC and pushed to GitHub. Then I downloaded the updates by using git fetch -all and git reset --HARD. The project in the server side was successfully overwritten (I confirmed this by opening the file where there are changes). However, when I open my site in the browser, it only reflected some of the changes. Specifically:
the HTML/CSS part is updated as the newest version;
BUT, the urls.py and settings.py still followed the old setting. For example, I created a "/login" url in the newest version. But when opening the browser, it showed error. It seemed that it is still reading the setting.py and urls.py of the old branch.
I tried git branch, it showed that I am currently working on the master one, which is the newest one;
I also try restarting the Gunicorn and Nginx. Nothing different observed.
Could anyone please enlighten me? Really appreciate any help. | Django: Not responding to the changes after git pull | 1.2 | 0 | 0 | 923 |
27,081,314 | 2014-11-22T19:06:00.000 | 0 | 0 | 0 | 0 | python,snmp,conceptual,mib,pysnmp | 27,083,747 | 1 | false | 0 | 0 | Manager needs to know the variables to query for something specific. The variables can be identified by OIDs or MIB objects names.
MIBs give Manager information such as:
Human friendly symbolic names associated with the OIDs
Types of values associated with particular OIDs
Hints on variable access permissions that are implemented by the Agent
SNMP tables indices structure and types
References to other MIB objects (e.g. Notifications)
If MIB is available, Manager would be able to perform any SNMP operation knowing either symbolic name or OID of the Agent's variable it is interested in. All required details would be gathered from the MIB.
If MIB is not available, Manager would still have to figure out some of the additional details on its own (some are listed above) so those can be hardcoded into the Manager.
For example, a GET operation could be performed having just an OID; however, without a MIB the Manager may have trouble making the response value look human-friendly.
Another example is a SET operation that requires Manager to properly encode value -- its type can be dynamically looked up at the MIB or hardcoded into the Manager for specific OIDs.
More complex scenarios involve dynamically building OIDs (for addressing SNMP table entries) using indices structure formally defined by the MIB.
The purpose of the GETNEXT/GETBULK queries is to let Manager be unaware of the exact set of OIDs provided by the Agent. So Manager could iterate over Agent's variables starting from a well known OID (or even its prefix). One of the uses of this feature is SNMP table retrieval.
MIBs are written in a subset of ASN.1 language. Unlike ASN.1, MIBs are very specific to SNMP domain.
To use MIBs with pysnmp you need to pass ASN.1 MIBs to the build-pysnmp-mib shell script (from pysnmp distribution) which would invoke smidump and other tools to convert ASN.1 MIBs into a collection of Python classes representing pysnmp-backed MIB objects. | 1 | 0 | 0 | I am fairly new to the SNMP protocol and have only been introduced to it recently in my computer networking course.
I understand how the manager sends Gets, Sets, GetNext, GetBulk and all that, it will catch Traps and such. One thing I don't entirely understand is the MIB
From what I gather, the MIB lives on an agent and the Manager will query for the MIB tree. That is fine, although the Manager needs the OID to be able to properly query. One question I have is whether these are hardcoded or not. Are the OIDs hardcoded in the manager or not?
Other than that, I'm not sure how to build the MIB file; apparently there is some special file type that defines the MIB structure, and I don't really get how to use pySNMP to build that. I feel like I would run that on the agent side of things upon startup.
Can somebody help clear up these conceptual issues for me? | Trouble grasping MIBs with PySNMP | 0 | 0 | 0 | 329 |
27,081,784 | 2014-11-22T19:51:00.000 | 0 | 0 | 1 | 0 | python,dictionary,statistics | 27,081,959 | 4 | false | 0 | 0 | assuming you use Scipy to calculate the Z-score and not manually
from scipy import stats
d = {'keys':values, ...}
dict_values = d.values()
z = stats.zscore(dict_values)
This will return a Numpy array with your z scores | 1 | 1 | 0 | I have a dictionary, for which I want to convert all values to z-scores. Now I do know how to compute the zscore of an array, but have no idea how to do this for a dictionary. Does anybody have some tips?
Thanks! | Python: Calculate all values of dictionary to z-scores | 0 | 0 | 0 | 2,411 |
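If scipy is not to hand, the same thing can be done with nothing but the standard library, and the result kept as a dictionary so the keys survive. A small sketch (the sample values are made up; pstdev matches scipy's default population standard deviation):

```python
from statistics import mean, pstdev

def zscores(d):
    """Map each key of d to the z-score of its value."""
    values = list(d.values())
    mu, sigma = mean(values), pstdev(values)
    return {k: (v - mu) / sigma for k, v in d.items()}

scores = {'a': 1.0, 'b': 2.0, 'c': 3.0}
print(zscores(scores))  # {'a': -1.224..., 'b': 0.0, 'c': 1.224...}
```

This keeps the key-to-score mapping, which the plain array from stats.zscore loses.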
27,084,025 | 2014-11-23T00:06:00.000 | 1 | 0 | 1 | 0 | python,regex,performance,replace,slice | 27,084,042 | 2 | false | 0 | 0 | This is in the standard library: string.lstrip(s, 'X'). | 1 | 1 | 0 | I am trying to remove a prefix from a given string, which contains an unknown length of contiguous 'X' characters. In most cases, this prefix will be thousands of characters long. The first solution I thought of was to use regex -
str = re.sub(r'X*', '', str)
An obvious alternative (and faster) solution is to iterate over each character until it is not 'X', and slice accordingly, but this is bulky and the character iteration doesn't seem Pythonic?
Does anyone have any suggestions?
Thanks in advance. | Quicker (and more Pythonic) solution than re.sub? Remove prefix (consisting of many identical characters) | 0.099668 | 0 | 0 | 88 |
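For what it's worth, a quick sanity check of the lstrip() suggestion against an anchored regex. Note that the un-anchored X* from the question would also delete Xs from the middle of the string, which is probably not intended:

```python
import re

s = 'X' * 5000 + 'payload'

print(s.lstrip('X'))           # 'payload' -- no regex needed
print(re.sub(r'^X+', '', s))   # same result, with the pattern anchored to the start
```

lstrip only ever touches the left edge, so there is no risk of removing interior characters.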
27,086,731 | 2014-11-23T07:47:00.000 | 2 | 0 | 0 | 1 | python,c++,c,linux,wine | 27,088,750 | 2 | true | 0 | 0 | Discovering the IPC interfaces/mechanisms of an undocumented program can involve gathering a lot of information by various means, putting it together and mapping the information.
The ipcs command can be used to get the information about all ipc objects. It shall provide information about currently active message queues, shared memory segments and semaphores. This is available as part of util-linux.
Another option is to look for shm folder in /proc/ to view the list of currently active shared memory that are in use before and after running your program.
FIFO are special files that are created using mkfifo which you can determine from file type p in ls-l output. Also, you can use the -p option to test whether a file is a pipe.
/proc/<pid>/fd can help to gather more info. The lsof is a very handy tool that can give you the list of open files and the processes that opened them. It can list the PID, PGID, PPID, owner of process, the command that is being executed by the process and the files that are in use by the process.
fuser can provide your the list of PIDs that use the specific files or file systems.
top/htop gives you the list of processes that run in your system. This can give wide range of information ranging from priority of the processes in the form of NI to memory usage via REM or MEM.
iotop can provide a table of current I/O usage by processes or threads on the system by monitoring the I/O usage information output by the kernel.
mpstat can give 'the percentage of time that the CPU or CPUs were idle during which the system had an outstanding disk I/O request' via 'iowait' argument.
strace shall intercept/record system calls that are called by a process and also the signals that are received by a process. Strace will be able to show the order of events and all the return/resumption paths of calls.
LTTng is a combination of kprobes, tracepoint and perf functionalities that can help in tracing of interrupts and race conditions. | 2 | 2 | 0 | I have a program without documentation. I am wondering if there is a way to discover if it has any interface for interprocess communication. Are there any tools that search through an executable to discover such interfaces? I am interested in learning anything about such a program, like if it supports any command line options or arguments, or whatever else may be discoverable.
I primarily use Linux, and some of the programs I would like to interface with are Windows programs running via wine. I program in C and C++, and some Python.
A related question; is there a way to programmatically simulate clicking a button in some other window on the computer screen? | Descover IPC interface for undocumented program? | 1.2 | 0 | 0 | 163 |
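Several of the /proc-based checks suggested in the first answer can also be scripted from Python (Linux procfs assumed; here the current interpreter stands in for the undocumented target process):

```python
import os

pid = os.getpid()  # substitute the PID of the program under study

print(os.listdir(f"/proc/{pid}/fd"))          # one entry per open fd: files, pipes, sockets
with open(f"/proc/{pid}/status") as f:
    print("".join(f.readlines()[:4]))         # Name, state and PIDs of the process
print(os.path.realpath(f"/proc/{pid}/exe"))   # which binary is actually running
```

Symlink targets under /proc/PID/fd reveal whether a descriptor is a regular file, a pipe, or a socket, which is a cheap first pass at spotting IPC.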
27,086,731 | 2014-11-23T07:47:00.000 | 1 | 0 | 0 | 1 | python,c++,c,linux,wine | 27,086,912 | 2 | false | 0 | 0 | Some Windows programs use DCOM for interprocess communication. There are a few programs to extract this interface from DLL and EXE files.
Otherwise you have to disassemble the program and look at the code directly, which is non-trivial.
For your last question:
Windows programs use a message system to communicate with the GUI. You can use sendmessage to simulate any message, such as clicking a button. | 2 | 2 | 0 | I have a program without documentation. I am wondering if there is a way to discover if it has any interface for interprocess communication. Are there any tools that search through an executable to discover such interfaces? I am interested in learning anything about such a program, like if it supports any command line options or arguments, or whatever else may be discoverable.
I primarily use Linux, and some of the programs I would like to interface with are Windows programs running via wine. I program in C and C++, and some Python.
A related question; is there a way to programmatically simulate clicking a button in some other window on the computer screen? | Descover IPC interface for undocumented program? | 0.099668 | 0 | 0 | 163 |
27,086,753 | 2014-11-23T07:50:00.000 | 2 | 0 | 0 | 0 | python,scikit-learn,nltk | 27,140,986 | 2 | false | 0 | 0 | You should learn about hashing mechanisms that can be used to calculate similarity between documents.
Typical hash functions are designed to minimize collisions, mapping near-duplicates to very different hash keys. In cryptographic hash functions, if the data is changed by one bit, the hash key will change to a completely different one.
The goal of similarity hashing is to create a similarity hash function. Hash-based techniques for near-duplicate detection are designed with the opposite intent of cryptographic hash algorithms. Very similar documents map to very similar hash keys, or even to the same key. The bitwise Hamming distance between keys is then a measure of similarity.
After calculating the hash keys, the keys can be sorted to increase the speed of near-duplicate detection from O(n^2) to O(n log n). A threshold can be defined and tuned by analysing accuracy on training data.
Simhash, Minhash and Locality-sensitive hashing are three implementations of hash-based methods. You can google and get more information about these. There are a lot of research papers related to this topic... | 1 | 3 | 1 | I have 80,000 documents that are about a very vast number of topics. What I want to do is for every article, provide links to recommend other articles (something like top 5 related articles) that are similar to the one that a user is currently reading. If I don't have to, I'm not really interested in classifying the documents, just similarity or relatedness, and ideally I would like to output a 80,000 x 80,000 matrix of all the documents with the corresponding distance (or perhaps correlation? similarity?) to other documents in the set.
I'm currently using NLTK to process the contents of the document and get ngrams, but from there I'm not sure what approach I should take to calculate the similarity between documents.
I read about using tf-idf and cosine similarity, however because of the vast number of topics I'm expecting a very high number of unique tokens, so multiplying two very long vectors might be a bad way to go about it. Also 80,000 documents might call for a lot of multiplication between vectors. (Admittedly, it would only have to be done once though, so it's still an option).
Is there a better way to get the distance between documents without creating a huge vector of ngrams? Spearman Correlation? or would a more low-tech approach like taking the top ngrams and finding other documents with the same ngrams in the top k-ngrams be more appropriate? I just feel like surely I must be going about the problem in the most brute force way possible if I need to multiply possibly 10,000 element vectors together 320 million times (sum of the arithmetic series 79,999 + 79,998... to 1).
Any advice for approaches or what to read up on would be greatly appreciated. | Finding the most similar documents (nearest neighbours) from a set of documents | 0.197375 | 0 | 0 | 2,909 |
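To make the Minhash suggestion concrete, here is a minimal pure-Python sketch (illustrative only, not a production implementation) that estimates Jaccard similarity between token sets from short signatures instead of full ngram vectors:

```python
import random

def minhash_signature(tokens, num_hashes=128, seed=42):
    """One minimum per random hash function approximates the whole token set."""
    rng = random.Random(seed)
    prime = (1 << 61) - 1
    params = [(rng.randrange(1, prime), rng.randrange(prime)) for _ in range(num_hashes)]
    return [min((a * hash(t) + b) % prime for t in tokens) for a, b in params]

def estimated_jaccard(sig_a, sig_b):
    """Fraction of matching signature slots approximates Jaccard similarity."""
    return sum(x == y for x, y in zip(sig_a, sig_b)) / len(sig_a)

doc1 = set("the quick brown fox jumps over the lazy dog".split())
doc2 = set("the quick brown fox leaps over a sleeping dog".split())
sig1, sig2 = minhash_signature(doc1), minhash_signature(doc2)
print(estimated_jaccard(sig1, sig2))  # close to the true Jaccard, 6/11
```

Each document is reduced to 128 integers regardless of vocabulary size, so comparing two documents is O(128) rather than a product of two long sparse vectors.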
27,091,319 | 2014-11-23T16:30:00.000 | 0 | 0 | 0 | 0 | ipython,classification,decision-tree,nearest-neighbor,cross-validation | 27,095,449 | 1 | false | 0 | 0 | I assume here that you mean the value of k that returns the lowest error in your wine quality model.
I find that a good k can depend on your data. Sparse data might prefer a lower k whereas larger datasets might work well with a larger k. In most of my work, a k between 5 and 10 has been quite good for problems with a large number of cases.
Trial and Error can at times be the best tool here, but it shouldn't take too long to see a trend in the modelling error.
Hope this Helps! | 1 | 2 | 1 | I am working on a UCI data set about wine quality. I have applied multiple classifiers and k-nearest neighbor is one of them. I was wondering if there is a way to find the exact value of k for nearest neighbor using 5-fold cross validation. And if yes, how do I apply that? And how can I get the depth of a decision tree using 5-fold CV?
Thanks! | Using cross-validation to find the right value of k for the k-nearest-neighbor classifier | 0 | 0 | 0 | 185 |
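A library-free sketch of the idea from the answer: score each candidate k by cross-validated error (leave-one-out here, an extreme form of k-fold) and keep the best one. The toy data below is made up; with scikit-learn the same search is normally done with cross_val_score or GridSearchCV:

```python
import math
from collections import Counter

def knn_predict(train, x, k):
    """Majority label among the k nearest training points (Euclidean distance)."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], x))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

def loo_error(data, k):
    """Leave-one-out error rate of k-NN for a given k."""
    wrong = sum(knn_predict(data[:i] + data[i + 1:], x, k) != y
                for i, (x, y) in enumerate(data))
    return wrong / len(data)

# toy stand-in for the wine data: two well-separated clusters
data = [((0.1 * i, 0.0), 'low') for i in range(10)] + \
       [((5.0 + 0.1 * i, 1.0), 'high') for i in range(10)]
best_k = min(range(1, 10, 2), key=lambda k: loo_error(data, k))
print(best_k, loo_error(data, best_k))
```

Odd values of k avoid ties in the majority vote; the same grid-search loop works unchanged for a decision tree's max depth.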
27,091,393 | 2014-11-23T16:37:00.000 | 121 | 0 | 1 | 0 | python,multithreading | 27,091,518 | 1 | true | 0 | 0 | Check that you have not named your script threading.py | 1 | 15 | 0 | I have imported threading module in python for doing some threading stuffs. But it says threading module has no attribute 'Thread'. What is the problem? | 'threading' object has no attribute 'Thread' | 1.2 | 0 | 0 | 15,937 |
27,091,836 | 2014-11-23T17:15:00.000 | 3 | 0 | 1 | 0 | python,opencv,hough-transform | 27,152,696 | 1 | true | 0 | 0 | I've managed to solve my problem.
I've been trying to use the Hough Line Transform where I was supposed to use the Probabilistic Hough Transform. The moment I got it, I grouped lines drawn along similar functions, sorted them by length, and used arcsine as well as the locations of their ends to find the precise degrees at which the hands stood. | 1 | 4 | 1 | I've been trying to write a program that locates clock's face on picture and then proceeds to read time from it. Locating works fairly well, reading time - not so much.
The cv2.HoughLines function returns angles at which lines lay (measuring from the top of the image) and their distance from upper-left corner of the image. After a bit of tweaking I've managed to convince my code to find a single line for each of clock's hands, but as for now I remain unable to actually read time from it.
Using appropriate formulas I could find the intersection of those lines (the middle of the clock) and then iterate along the hands in both directions at once. This could tell me the length of each hand (allowing me to tell them apart) as well as which direction they are pointing. I'm fairly hesitant about implementing this solution though - not only does it seem somehow ugly, it also creates certain risks. For example: problems with rounding could cause the program to check the wrong pixel and find the end of a line prematurely.
So, would you kindly suggest an alternative solution? | Reading time from analog clock using Hough Line Transform in Python (OpenCV) | 1.2 | 0 | 0 | 2,056 |
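For the geometric part no OpenCV is needed: intersecting two Hough lines given in (rho, theta) form, and turning a hand tip into a clock angle, are a few lines of math. A sketch, assuming image coordinates with y growing downward:

```python
import math

def hough_intersection(rho1, theta1, rho2, theta2):
    """Intersection of x*cos(t) + y*sin(t) = rho lines; None if parallel."""
    a1, b1 = math.cos(theta1), math.sin(theta1)
    a2, b2 = math.cos(theta2), math.sin(theta2)
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None
    return ((rho1 * b2 - rho2 * b1) / det, (a1 * rho2 - a2 * rho1) / det)

def hand_angle(cx, cy, tip_x, tip_y):
    """Clockwise degrees from 12 o'clock; y axis points down as in images."""
    return math.degrees(math.atan2(tip_x - cx, -(tip_y - cy))) % 360

center = hough_intersection(3, 0, 4, math.pi / 2)  # lines x = 3 and y = 4
print(center)                                      # (3.0, 4.0) up to rounding
print(hand_angle(50, 50, 80, 50))                  # 90.0 -> pointing at 3 o'clock
```

Once each hand's angle is known, minutes are angle/6 and hours are angle/30, which matches the arcsine-plus-endpoints approach described above.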
27,093,359 | 2014-11-23T19:35:00.000 | 1 | 0 | 0 | 0 | python | 27,093,407 | 1 | false | 0 | 0 | You are getting this error because the __init__() function in your class requires 3 arguments - new_dict, coloumn_name, and coloumn_value - and you did not supply them. | 1 | 0 | 0 | Sorry i deleted my code because i realized i wasn't supposed to put it up | How do you get a column name and row from table? | 0.197375 | 1 | 0 | 76 |
27,096,588 | 2014-11-24T01:24:00.000 | 1 | 0 | 0 | 0 | python,mysql,file | 30,106,383 | 2 | false | 0 | 0 | Firstly, if it works well as you suggest, then why fix it?
Secondly, before doing any changes to your code I would ask myself the following questions:
What are the improvements/new requirements I want to implement that I can't easily do with the current structure?
Do I have a test suite of the current solution, so that I can regression-test any refactoring? When re-implementing something it is easy to overlook some specific behaviors which are not very well documented but that you/users rely on.
Do those improvements warrant an SQL database? For instance:
Do you need to often run reports out of an SQL database without walking the directory structure?
Is there a problem with walking the directories?
Do you have network or performance issues?
Are you facing an increase in usage?
When implementing an SQL solution, you will need a new task to update the SQL data. If I understand correctly, the reports are currently generated on-the-fly, and therefore are always up-to-date. That won't be the case with SQL reports, so you need to make sure they are up-to-date too. How frequently will you update the SQL database:
a) In real-time? That will necessitate a background service. That could be an operational hassle.
b) On-demand? Then what would be the difference with the current solution?
c) At scheduled times? Then your data may be not up-to-date between the updates.
I don't have any packages or technical approaches to recommend to you, I just thought I'd give you some general software management advice.
In any case, I also have extensive C++ and Python and SQL experience, and I would just stick to Python on this one.
On the SQL side, why stick to traditional SQL engines? Why not MongoDB for instance, which would be well suited to storing structured data such as file information. | 1 | 0 | 0 | I'm looking for open-ended advice on the best approach to re-write a simple document control app I developed, which is really just a custom file log generator that looks for and logs files that have a certain naming format and file location. E.g., we name all our Change Orders with the format "CO#3 brief description.docx". When they're issued, they get moved to an "issued" folder under another folder that has the project name. So, by logging the file and querying it's path, we can tell which project it's associated with and whether it's been issued.
I wrote it with Python 3.3. Works well, but the code's tough to support because I'm building the reports while walking the file structure, which can get pretty messy. I'm thinking it would be better to build a DB of most/all of the files first and then query the DB with SQL to build the reports.
Sorry for the open-ended question, but I'm hoping not to reinvent the wheel. Anyone have any advice as to going down this road? E.g., existing apps I should look at or bundles that might help? I have lots of C/C++ coding experience but am still new to Python and MySQL. Any advice would be greatly appreciated. | Need advice on writing a document control software with Python and MySQL | 0.099668 | 1 | 0 | 92 |
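The naming convention described above is regular enough to parse without a database; a sketch (the pattern and the folder names are guesses based on the description, not a spec):

```python
import re
from pathlib import PurePosixPath

# "CO#3 brief description.docx" under <project>/issued/ -- per the description above
CO_PATTERN = re.compile(r"CO#(?P<num>\d+)\s+(?P<desc>.+)\.docx$", re.IGNORECASE)

def parse_change_order(path_str):
    """Return project/issued/number/description for a change-order path, else None."""
    path = PurePosixPath(path_str)
    m = CO_PATTERN.match(path.name)
    if m is None:
        return None
    return {
        "project": path.parts[0] if len(path.parts) > 1 else None,
        "issued": "issued" in (p.lower() for p in path.parts[:-1]),
        "number": int(m.group("num")),
        "description": m.group("desc"),
    }

record = parse_change_order("ProjectA/issued/CO#3 brief description.docx")
print(record)
```

Walking the tree once and storing these dicts (in SQLite, MySQL, or even a CSV) then separates the scanning step from the reporting queries.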
27,098,591 | 2014-11-24T05:50:00.000 | 3 | 0 | 1 | 0 | python,regex | 27,099,374 | 4 | false | 0 | 0 | Unless this is an assignment where you must use regex you should use vikramls's split()-based solution: it's over three times as fast as Avinash Raj's regex-based solution, and that's not including the time to import the re module.
Here are some timings done on a 2GHz Pentium 4, using Python 2.6.6.
$ timeit.py -n 100000 -s "import re;p=re.compile(r'(?<=abc).*?(?=abc)');s='abc123abcfndfabc1234drfabc'" "p.findall(s)"
100000 loops, best of 3: 6.32 usec per loop
$ timeit.py -n 100000 -s "p='abc';s='abc123abcfndfabc1234drfabc'" "s.split(p)"
100000 loops, best of 3: 2.03 usec per loop
And a variation of the above that discards the initial & final members of the list is slightly slower, but still better than twice as fast as the regex.
$ timeit.py -n 100000 -s "p='abc';s='abc123abcfndfabc1234drfabc'" "s.split(p)[1:-1]"
100000 loops, best of 3: 2.49 usec per loop
And for completeness, here's vks's regex. The "'!'" stuff is to prevent the ! from invoking bash history expansion. (Alternatively, you can use set +o histexpand to turn history expansion off and set -o histexpand to turn it back on).
$ timeit.py -n 100000 -s "import re;p=re.compile(r'(?<=abc)((?:(?"'!'"abc).)+)abc');s='abc123abcfndfabc1234drfabc'" "p.findall(s)"
100000 loops, best of 3: 6.67 usec per loop | 1 | 1 | 0 | I have a string "s" as follows
s="abc123abcfndfabc1234drfabc"
I want to grep the strings which occurs in between "abc". In this case the output should be:
123, fndf, 1234drf | How do I get lines between same pattern using python regex | 0.148885 | 0 | 0 | 160 |
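Putting the two approaches from the answer above side by side on the sample string; both return the pieces between the abc delimiters:

```python
import re

s = "abc123abcfndfabc1234drfabc"

by_regex = re.findall(r"(?<=abc).*?(?=abc)", s)
by_split = s.split("abc")[1:-1]   # drop the empty leading/trailing pieces

print(by_regex)   # ['123', 'fndf', '1234drf']
print(by_split)   # ['123', 'fndf', '1234drf']
```

The [1:-1] slice discards the empty strings produced before the first and after the last delimiter.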
27,099,187 | 2014-11-24T06:40:00.000 | 0 | 0 | 0 | 1 | python,celery | 27,100,098 | 1 | true | 1 | 0 | You should do something like this:
sometask.apply_async(args = ['args1','args2'], queue = 'dev') | 1 | 0 | 0 | When I do sometask.async_apply(args=['args1','args2'], kwargs={'queue': 'dev'}) nothing ends up on the queue 'dev'. I'm wondering if I am missing a step somewhere? I have created the queue 'dev' already and it shows up under queues when I check the rabbitmq management. | celery: programmatically queue task to a specific queue? | 1.2 | 0 | 0 | 283 |
27,110,717 | 2014-11-24T17:32:00.000 | 0 | 0 | 1 | 0 | python,string,escaping,backslash | 27,363,463 | 1 | false | 0 | 0 | I think you can undo the repr() here with ast.literal_eval() (requires import ast), I saw that in another post and I hope this is the best way to solve it. | 1 | 0 | 0 | Is there a way to prevent the auto escape of \ in lists?
Because when I try to print or use writelines to write a list content in a file, all of the back slashes get escaped. | Python auto escape in lists | 0 | 0 | 0 | 193 |
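A short demonstration that it is repr() (used by the list display) that doubles the backslashes, not the string contents themselves:

```python
import os
import tempfile

items = ["C:\\temp", "a\\b"]

print(items)        # ['C:\\temp', 'a\\b'] -- the list display uses each string's repr()
print(items[0])     # C:\temp -- the actual string holds a single backslash
print(len("a\\b"))  # 3 -- the backslash is one character, not two

path = os.path.join(tempfile.gettempdir(), "backslash_demo.txt")
with open(path, "w") as f:
    f.write("\n".join(items))   # writes single backslashes; nothing gets escaped
with open(path) as f:
    content = f.read()
print(content)
```

So writing the elements (or "\n".join of them) gives single backslashes on disk; only printing or writing the list object itself shows the escaped repr form.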
27,110,771 | 2014-11-24T17:34:00.000 | 2 | 0 | 1 | 0 | python-2.7 | 27,110,793 | 2 | true | 0 | 0 | In Python, strings already behave like sequences; this means each character already has a position assigned.
try: print input[0] and see
if you want to assign a variable the value of any position in your string, just select the position as if it was a list:
foo = input[#] | 1 | 0 | 0 | I have a number that I have converted into a string. I now want to assign each of the digits of this number to a new variable that I want to use later. How do I do it?
For example if
input = "98912817271"
How do I assign the one's digit, 1, to a variable 1 and so on?
I've tried searching for a solution but couldn't find any on StackOverflow. Any help would be much appreciated. | How do I split a word into individual letters in Python 2.7 | 1.2 | 0 | 0 | 1,171
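Rather than one variable per digit, the idiomatic move is to index the string or build a list from it; a quick sketch using the sample number from the question:

```python
number = "98912817271"

letters = list(number)              # ['9', '8', '9', ...] one element per character
digits = [int(c) for c in number]   # the same positions, as integers

ones_digit = digits[-1]             # the last character is the one's digit
print(letters[0], ones_digit)       # 9 1
```

Negative indices count from the right, so digits[-1] is always the one's digit no matter how long the number is.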
27,112,799 | 2014-11-24T19:38:00.000 | 0 | 0 | 0 | 0 | python,wxpython,wxwidgets,embedding,dlopen | 27,114,219 | 1 | false | 0 | 1 | The problem is that wxPython is compiled with the gtk2 flag, while wxWidgets is built against gtk3.
You can determine this in gdb by dumping one of the symbols near the assertion:
info symbol __static_initialization_and_destruction_0
__static_initialization_and_destruction_0(int, int) in section .text of /usr/lib/libwx_gtk2u_core-3.0.so.0
To rebuild wxPython, you need to manually move the build directory somewhere else (or the reinstall will seem to work but in fact won't rebuild anything).
Then use: python setup.py build_ext WXPORT=gtk3
You should see that the .so files are being built against gtk3:
c++ -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -D_FORTIFY_SOURCE=2 -g -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security build/temp.linux-x86_64-2.7/src/gtk/wizard_wrap.o -L/usr/X11R6/lib -lwx_gtk3u_xrc-3.0 -lwx_gtk3u_html-3.0 -lwx_gtk3u_qa-3.0 -lwx_gtk3u_adv-3.0 -lwx_gtk3u_core-3.0 -lwx_baseu_xml-3.0 -lwx_baseu_net-3.0 -lwx_baseu-3.0 -o build/lib.linux-x86_64-2.7/wx/_wizard.so -pthread
(note the wx_gtk3u_xxx files, vs: wx_gtk2u_xxx)
then:
python setup.py install WXPORT=gtk3
works! | 1 | 0 | 0 | I am trying to embed wxPython in a wxWidgets application and I get the following error:
../src/common/object.cpp(251): assert "classTable->Get(m_className) == NULL" failed in Register(): Class "wxCommandEvent" already in RTTI table - have you used IMPLEMENT_DYNAMIC_CLASS() multiple times or linked some object file twice)?
I've traced this up to:
wxPyCoreAPIPtr = (wxPyCoreAPI*)PyCObject_Import("wx.core", "_wxPyCoreAPI");
So I'm guessing that this is failing because it's trying to dlopen a .so that has already been loaded (the core wxWidgets library that is needed both by the C and Python code). I can get the handle to the opened .so via dlopen's RTLD_NOLOAD flag.
Is there any way to give that handle to Python and tell it to "load" that handle into the interpreter's context?
Edit: just noticed, this problem is reproducible in the "embedded" sample in wxPython, using wxWidgets origin/WX_3_0_BRANCH, Python 2.7, wxPython origin/master.
Also, it may be specific to gtk3 configurations... it seemed to be working when I compiled with gtk2. | Embedding Python -- loading already loaded module | 0 | 0 | 0 | 280 |
27,112,905 | 2014-11-24T19:45:00.000 | 0 | 0 | 1 | 0 | python,python-2.7,python-3.x,ipython | 27,113,224 | 2 | false | 0 | 0 | I'd say that if the function should return an object, then return None. If your function should return a "native" type (string, int, list, dict, etc...) then return "", 0, [], {}, etc... | 1 | 0 | 0 | I'm looking for guidance on a good convention as to when it is appropriate and/or desirable to return None from a Python function as opposed to an empty list or a zero-length string?
For example, right now, I am writing a Python class to interface to Git (for the moment, let's not worry about why I have to write this from scratch), and several functions return values. Take, for example, a function called "get_tags". Normally it returns a list, but if there are no tags yet in the repo, is it better to return an empty list or None?
I realize there will be multiple views on this, which I guess is why I'm asking the question. I'm sure there are pros and cons to both, which is the information I'm seeking.
Is there a general convention on this? | Pythonic Convention on When to Return None | 0 | 0 | 0 | 782 |
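One practical way to weigh the two conventions is at the call site: an empty list keeps iteration branch-free, while None forces (and signals the need for) an explicit check. A sketch with hypothetical helpers standing in for the Git wrapper described above:

```python
def get_tags(repo):
    """Return the repo's tags; an empty list when there are none yet."""
    return repo.get("tags", [])

def get_head(repo):
    """Return the current HEAD, or None when no such object exists."""
    return repo.get("head")

empty_repo = {}

# Empty list: callers iterate without any None check.
for tag in get_tags(empty_repo):
    print(tag)                      # simply never runs

# None: callers must check explicitly, which fits "absent object" semantics.
assert get_head(empty_repo) is None
assert get_tags(empty_repo) == []
```

A common rule of thumb following from this: return the empty instance of the declared type when "nothing" is a normal, iterable result, and None when the function's result is a single object that may genuinely be absent.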
27,115,526 | 2014-11-24T22:41:00.000 | 0 | 0 | 1 | 1 | python,linux,ipython,redhat | 27,115,695 | 4 | false | 0 | 0 | Your $PATH is fine, since you can run python without specifying the full path, i.e. /usr/bin/python.
You get 2.6.6 in the IPython directory because it has a python executable in it, named (wild guess) python. 2.7.5 is installed system-wide. To call 2.7.5 from the IPython dir, use the full path /usr/bin/python, or whatever which python points to.
Try out Python virtualenv if you need two or more versions of Python on your system. Otherwise, having different versions is not a good idea. | 2 | 1 | 0 | I have installed (or so I think) python 2.7.5. When I type "Python --version" I get python2.7.5
I've narrowed this down to:
When I run "python" in a terminal in my /Home/UsrName/ directory it is version 2.7.5
However when I run "python" in a terminal in /Home/UserName/Downloads/Ipython directory I get 2.6.6
I went into the Ipython folder to run the Ipython Setup file. I think I need to add python27 to a system path so that when I am inside the /Home/UserName/Downloads/Ipython directory and run the install file Ipython knows I am using a required version of python.
I am not sure how to add python27 to the system on redhat linux 6.5 (Also I am not even sure that this will fix it). | how to get python 2.7 into the system path on Redhat 6.5 Linux | 0 | 0 | 0 | 18,214 |
27,115,526 | 2014-11-24T22:41:00.000 | 0 | 0 | 1 | 1 | python,linux,ipython,redhat | 27,116,633 | 4 | false | 0 | 0 | I think I know what is happening - abarnert pointed out that the cwd (".") may be in your path, which is why you get the local python when you're running in that directory. Because the cwd is not normally set up in the global bashrc file (/etc/bashrc), it's probably in your local ~/.bashrc or ~/.bash_profile. So edit those files and look for something like PATH=$PATH:. and remove that line. Then open a new window (or log out and log back in) to refresh the path setting and you should be OK. | 2 | 1 | 0 | I have installed (or so I think) python 2.7.5. When I type "Python --version" I get python2.7.5
I've narrowed this down to:
When I run "python" in a terminal in my /Home/UsrName/ directory it is version 2.7.5
However when I run "python" in a terminal in /Home/UserName/Downloads/Ipython directory I get 2.6.6
I went into the Ipython folder to run the Ipython Setup file. I think I need to add python27 to a system path so that when I am inside the /Home/UserName/Downloads/Ipython directory and run the install file Ipython knows I am using a required version of python.
I am not sure how to add python27 to the system on redhat linux 6.5 (Also I am not even sure that this will fix it). | how to get python 2.7 into the system path on Redhat 6.5 Linux | 0 | 0 | 0 | 18,214 |
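A quick diagnostic for this kind of "different version per directory" confusion: print which interpreter is actually running and whether PATH contains a relative entry that could pick up a local python binary. Stdlib-only sketch; run it from both directories and compare:

```python
import os
import sys

print("interpreter:", sys.executable)        # full path of the python running now
print("version:", sys.version.split()[0])    # e.g. 2.7.5 vs 2.6.6

# A local ./python can only shadow the system one if "." (or an empty string,
# which POSIX treats the same way) appears somewhere in PATH:
path_entries = os.environ.get("PATH", "").split(os.pathsep)
relative_entries = [p for p in path_entries if p in (".", "")]
print("relative PATH entries:", relative_entries)
```

If sys.executable differs between the two directories and relative_entries is non-empty, the cwd-in-PATH explanation from the answer above is confirmed.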
27,116,358 | 2014-11-24T23:49:00.000 | 0 | 0 | 0 | 1 | sockets,python-2.7,timeout | 27,116,549 | 1 | false | 0 | 0 | Not knowing anything more, my guess would be that NAT tracking expires due to long inactivity, and unfortunately in most cases you won't be able to discover the exact timeout value. A workaround would be to introduce some sort of keep-alive packets into your protocol, if that's a possibility. | 1 | 1 | 0 | How can I determine the numeric timeout value that is causing the stack trace below?
...
File "/usr/lib/python2.7/httplib.py", line 548, in read
s = self._safe_read(self.length)
File "/usr/lib/python2.7/httplib.py", line 647, in _safe_read
chunk = self.fp.read(min(amt, MAXAMOUNT))
File "/usr/lib/python2.7/socket.py", line 380, in read
data = self._sock.recv(left)
timeout: timed out
After importing my modules, the result of socket.getdefaulttimeout() is None (note that this isn't the same situation as what produced the above, since getting those requires an 8-hour stress run on the system).
My code is not setting any timeout values (default or otherwise) AFAICT. I have not yet been able to find any hint that 3rd party libraries are doing so either.
Obviously there's some timeout somewhere in the system. I want to know the numeric value, so that I can have the system back off as it is approached.
This is python 2.7 under ubuntu 12.04.
Edit:
The connection is to localhost (talking to CouchDB listening on 127.0.0.1), so NAT shouldn't be an issue. The timeout only occurs when the DB is under heavy load, which implies to me that it's only when the DB is getting backed up and cannot respond quickly enough to requests, which is why I would like to know what the timeout is, so I can track response times and throttle incoming requests when the response time gets over something like 50% of the timeout. | Python socket.getdefaulttimeout() is None, but getting "timeout: timed out" | 0 | 0 | 0 | 536 |
27,117,461 | 2014-11-25T01:47:00.000 | 2 | 0 | 1 | 1 | python,file-io,cmd,notepad++ | 27,117,505 | 2 | false | 0 | 0 | In the properties of the shortcut that you use to start Notepad++, you can change its working directory, to whichever directory you're more accustomed to starting from in Python. You can also begin your python program with the appropriate os.chdir() command. | 1 | 1 | 0 | Not a major issue but just an annoyance I've come upon while doing class work. I have my Notepad++ set up to run Python code straight from Notepad++ but I've noticed when trying to access files I have to use the full path to the file even given the source text file is in the same folder as the Python program being run.
However, when running my Python program through cmd I can just type in the specific file name sans the entire path.
Does anyone have a short answer as to why this might be or maybe how to reconfigure Notepad++?
Thanks in advance. | Python program needs full path in Notepad++ | 0.197375 | 0 | 0 | 510 |
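The underlying difference is just the process's working directory; relative filenames resolve against os.getcwd(), which for editor-launched runs is often not the script's folder. A stdlib sketch using a temporary directory as a stand-in for the script's folder:

```python
import os
import tempfile

# Stand-in for the folder holding the script and its data file.
workdir = tempfile.mkdtemp()
with open(os.path.join(workdir, "source.txt"), "w") as f:
    f.write("hello")

# Relative filenames resolve against the current working directory,
# so point the process at the right folder first:
os.chdir(workdir)
with open("source.txt") as f:    # the bare name now works, no full path needed
    data = f.read()

assert data == "hello"
```

In a real script, the usual one-liner at the top is os.chdir(os.path.dirname(os.path.abspath(__file__))), which makes file access independent of how the program was launched.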
27,122,732 | 2014-11-25T09:18:00.000 | 1 | 1 | 0 | 1 | python,eclipse,fonts | 55,985,898 | 2 | false | 0 | 1 | Solution:
Window > Preferences > General > Appearance > Colors and Fonts > Structured Text Editors > Edit
Some googling led me to change the 'General/Appearance/Colors and Fonts/Debug/Console Font', but that didn't work. I tried changing all candidates I could identify in the Colors and Font settings, but none of them influences the font size in the PyDev console window.
Is there any way to achieve this?
This is in eclipse 4.3.2 (kepler) with Pydev 3.8 | How to change the console font size in Eclipse-PyDev | 0.099668 | 0 | 0 | 2,839 |
27,125,275 | 2014-11-25T11:20:00.000 | 0 | 0 | 0 | 0 | python,orange | 27,127,480 | 1 | false | 0 | 1 | Is this helpful? I am on a french version of windows:
Description
Chemin d’accès de l’application défaillante : C:\Python27\pythonw.exe
Signature du problème
Nom d’événement du problème : APPCRASH
Nom de l’application: pythonw.exe
Version de l’application: 0.0.0.0
Horodatage de l’application: 4df4b9cc
Nom du module par défaut: QtCore4.dll
Version du module par défaut: 4.7.0.0
Horodateur du module par défaut: 4cbe7403
Code de l’exception: c0000005
Décalage de l’exception: 000c7f03
Version du système: 6.3.9600.2.0.0.768.101
Identificateur de paramètres régionaux: 4108
Information supplémentaire n° 1: 5861
Information supplémentaire n° 2: 5861822e1919d7c014bbb064c64908b2
Information supplémentaire n° 3: 84a0
Information supplémentaire n° 4: 84a09ea102a12ee665c500221db8c9d6
Informations complémentaires sur le problème
ID de compartiment : 3a2b179ef07f0847bf9f72213260bff5 (-1296411611) | 1 | 0 | 0 | I have a random problem with Orange. I developed a widget and I have a button who opens a new window for some settings. The program works fine, but, at a some point, totally randomly, pythonw.exe stops working. It's only when I use the window for settings. The problem happens totally randomly, but more often when I open the window. I tried to make my window with OWW.OWBaseWidget, OWW.OWWidget and OWW.QDialog. I tried too with pythonw.exe and python.exe. I did the same thing that Set Colors of the widget Data Table in Data. But this one crashed too after a moment.
So where is the problem? Orange? Python? Anything else? I have no error message.
I have Windows 8.1, python 2.7 and Orange 2.7.8. | Orange Canvas: pythonw.exe has stopped working crashing | 0 | 0 | 0 | 632 |
27,127,430 | 2014-11-25T12:58:00.000 | 0 | 0 | 0 | 0 | python,gtk | 27,137,234 | 2 | false | 0 | 1 | It really depends on the application.
If the application uses *.glade or *.ui files you can, depending on how well it is designed, re-arrange certain elements and swap out container types.
If there are no such files, you are out of luck. Then the UI is "hard"-coded (as hard as Python code can get...) and you have to modify the widget hierarchy by editing the Python code yourself.
There is no editor that can extract a layout/ui file from the code itself.
gtkinspector (formerly known as gtkparasite) can modify properties of widgets on the fly, but nothing really modifies the Python code of the running application. It sneaks around the application code and modifies the widget tree from behind, by means of the GTK module library interface (correct me if I am wrong here, not totally sure). | 2 | 0 | 0 | I want to change the interface of an existing application. This application is written in Python and GTK. I don't want to change the code manually myself; instead I need an interface designer into which I can import the application so I can graphically apply my intended changes. I tried Glade and Qt Designer, but they produce .ui files and I couldn't find a tool to convert a .ui file back to Python code. Besides, they don't open Python files directly and don't have import options.
any solution will be appreciated.
thanks | load an already written GTK python codes into a GUI designer | 0 | 0 | 0 | 285 |
27,127,430 | 2014-11-25T12:58:00.000 | 0 | 0 | 0 | 0 | python,gtk | 27,154,752 | 2 | false | 0 | 1 | You can't. Glade had code generation features removed years ago, because that leads to unmaintainable code when it's patched by hand after generation, to add the program's internal logic. So either you:
use Glade to generate a ui file, and have to change the python code anyway to use it
or you'll have to manually write some code to change the user interface.
Either way, you'll have some code to write. If you have lots of changes to make in the user interface, then convert your program to use Glade ui files. It will take some time, but changes will be easier afterwards. If you only have a couple of changes to make, just make them in the code; it will be faster. | 2 | 0 | 0 | I want to change the interface of an existing application. This application is written in Python and GTK. I don't want to change the code manually myself; instead I need an interface designer into which I can import the application so I can graphically apply my intended changes. I tried Glade and Qt Designer, but they produce .ui files and I couldn't find a tool to convert a .ui file back to Python code. Besides, they don't open Python files directly and don't have import options.
any solution will be appreciated.
thanks | load an already written GTK python codes into a GUI designer | 0 | 0 | 0 | 285 |
27,129,642 | 2014-11-25T14:44:00.000 | 3 | 0 | 0 | 0 | python,django,django-rest-framework | 27,130,805 | 1 | true | 1 | 0 | For what it's worth, I've always created a separate api/ subdirectory within my Django apps to hold all Django REST Framework related files. This is just one way of doing things but it's helped keep separation of concerns within our applications.
The hierarchy looked like this...
Django Project/
    Django App/
        views.py
        models.py
        urls.py
        api/
            serializers.py
            viewsets.py
According to the official documentation, the routers should be stored in the urls.py and the viewsets should be stored in views.py.
My idea of an approach would be having the viewsets in a separate file like for example viewsets.py so that we don't end up mixing the normal Django views and the DRF Viewsets in the same file, improving readability.
The same would go for the routers, where we would create a file called routers.py inside each app and register them with the main DefaultRouter instance.
These are my thoughts, but I am not sure how to:
1º Do this the proper way (the registering of the viewset routers and all; should I place the DefaultRouter in __init__.py?)
2º Is there a better approach?
Basically I want to separate the logic per app and, inside each app, keep the plain Django views separate from the DRF viewsets. | Django Rest Framework organizing ViewSets and Routers | 1.2 | 0 | 0 | 1,162
27,132,384 | 2014-11-25T16:54:00.000 | 0 | 0 | 1 | 0 | python,sockets,python-2.7,authentication,client-server | 27,132,507 | 1 | true | 0 | 0 | There are different approaches to this problem. You could save the credentials/token/.. to the local disk, but keep in mind that in some cases this might be considered a security risk. If you do so, you should probably store it under the user's home folder, to keep it from other (non-admin/root) users at least.
You could also store it and encrypt it with a "master password" (like Firefox does if you enable it).
Or you could connect to a 3rd party authentication server and store your information there. It all depends on the use case you are implementing as well as the complexity required. | 1 | 0 | 0 | It may seem like a stupid question, but I really can't find information about this in google.
I am trying to develop a server-client application in python language, I am searching for a correct way to save data on a computer.
I have a client, that when he click the "Register" button I want that his computer will save the information and he can auto-login when he secondly entered the program.
Should I make a new file, save it with the data in the computer and then, load it again and read the data? I really don't know is this is the correct way. | Should I create a new file for save data with python? | 1.2 | 0 | 0 | 69 |
27,132,481 | 2014-11-25T16:59:00.000 | 0 | 0 | 0 | 0 | python,google-maps,google-maps-api-3 | 27,132,863 | 2 | false | 0 | 0 | There are a number of ways to handle this. I would create an API with the locations (lat, long). Then on the client side, use AJAX to consume the API and then use a loop to create new markers, appending each to the DOM. Make sense? | 1 | 0 | 0 | I'm using Python server-side and I need to put a Google Maps Marker for every database document.
Is it possible and how? | How to send Google Maps API Marker from Python server? | 0 | 0 | 1 | 448 |
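A common split, matching the answer above: the Python side serves the coordinates as JSON, and client-side JavaScript loops over the payload creating one google.maps.Marker per entry. The server half, sketched with stdlib only (the documents and field names are illustrative):

```python
import json

# Rows as they might come back from the database; purely illustrative data.
documents = [
    {"name": "Office", "lat": 40.7128, "lng": -74.0060},
    {"name": "Warehouse", "lat": 34.0522, "lng": -118.2437},
]

def markers_payload(docs):
    """JSON body for a hypothetical /api/markers endpoint: one entry per doc."""
    return json.dumps(
        [{"title": d["name"], "position": {"lat": d["lat"], "lng": d["lng"]}}
         for d in docs]
    )

payload = markers_payload(documents)
parsed = json.loads(payload)
assert len(parsed) == 2
assert parsed[0]["position"]["lat"] == 40.7128
```

On the client, the AJAX callback then just iterates the parsed array and constructs a marker from each position/title pair; the {lat, lng} shape matches what the Maps JavaScript API expects for a marker position.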
27,134,539 | 2014-11-25T18:57:00.000 | 2 | 0 | 0 | 0 | python,database,python-2.7,sqlite | 27,134,845 | 1 | true | 0 | 0 | You'll need to store the names and (secured) passwords on the server. SQLite is a perfectly good solution for this, but there are many, many other ways to do it. If your application does not otherwise use a database for storage, there's no need to add database support just for this simple task. Assuming that you don't have a very large and ever-growing list of users, it could be as easy as pickling a Python dictionary. | 1 | 1 | 0 | I am trying to build a simple login / register system with Python sockets and Tkinter.
It might sound like a stupid question, but I really couldn't find anything by searching on Google.
I am wondering if using sqlite3 for storing usernames and passwords (with a server) is a good idea. If not, please explain why I shouldn't use sqlite3 and what the alternative is for this need. | Should I use sqlite3 for storing username and password with python? | 1.2 | 1 | 0 | 1,093
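sqlite3 works fine for this as long as passwords are stored as salted hashes, never as plain text. A stdlib-only sketch (the schema and function names are illustrative):

```python
import hashlib
import os
import sqlite3

def hash_password(password, salt):
    # PBKDF2 with a per-user salt; never store the plain password.
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 100000)

conn = sqlite3.connect(":memory:")    # use a file path in a real server
conn.execute("CREATE TABLE users (name TEXT PRIMARY KEY, salt BLOB, pw BLOB)")

def register(name, password):
    salt = os.urandom(16)
    conn.execute("INSERT INTO users VALUES (?, ?, ?)",
                 (name, salt, hash_password(password, salt)))

def check_login(name, password):
    row = conn.execute("SELECT salt, pw FROM users WHERE name = ?",
                       (name,)).fetchone()
    return row is not None and hash_password(password, row[0]) == row[1]

register("alice", "s3cret")
assert check_login("alice", "s3cret")
assert not check_login("alice", "wrong")
assert not check_login("bob", "anything")
```

In production you would also compare hashes with hmac.compare_digest to avoid timing leaks, and the parameterized "?" placeholders shown here are what keep the lookups safe from SQL injection.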
27,135,428 | 2014-11-25T19:49:00.000 | 0 | 0 | 0 | 0 | python,django,admin,manytomanyfield | 27,135,554 | 4 | false | 1 | 0 | The Django admin select menus use the unicode value of the model instance to populate things like menus. Whatever your __unicode__ method returns is what appears in the select menu. | 1 | 2 | 0 | I have "Articles" and "Modules" apps. Inside the "Modules" app there is a model which has to display articles, and they're linked by a ManyToManyField in "Modules".
My question is: how do I modify the text value of the select field in the Django admin? By default it displays the names of articles, but I also want some information from the Article model there.
Is there a simple way to do that? | How to modify Django admin ManyToManyField text? | 0 | 0 | 0 | 579 |
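The mechanism the answer describes is just Python's string conversion of the model instance; Django on Python 2 calls __unicode__, later versions call __str__. The idea, sketched with a plain class standing in for the Article model:

```python
class Article(object):
    def __init__(self, title, author):
        self.title = title
        self.author = author

    def __str__(self):    # __unicode__ on Python 2 / Django 1.x
        # Whatever this returns is what the admin's select menu displays.
        return "{0} ({1})".format(self.title, self.author)

a = Article("Hello World", "alice")
assert str(a) == "Hello World (alice)"
```

So adding extra Article fields to the select text is a matter of including them in this one method; no admin-side configuration is needed for the basic case.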